Poznan University of Technology
Faculty of Control, Robotics and Electrical Engineering
Institute of Automatic Control and Robotics

Selected Calculation Tasks for the Course
Nonlinear Systems
Version 2.8

Bartłomiej Krysiak, Dariusz Pazderski

Supporting materials for the exercise classes of the course
Nonlinear Systems (master's studies)

Poznań 2020
Contents

Notations and symbols  6
1. Revision of selected differential operations  7
   1.1. Partial derivative  7
        1.1.1. Partial derivative of matrix function  7
   1.2. Vector field  15
   1.3. Directional derivative  16
        1.3.1. Directional derivative of a scalar function  17
        1.3.2. Directional derivative of a vector function  17
   1.4. Lie bracket  18
   1.5. Nonholonomic constraints in kinematics and dynamics  21
2. State evolution for nonlinear systems  26
   2.1. State evolution for the general case - the vector fields are given in undisclosed form  26
   2.2. State evolution in case of a given system  29
3. Relative degree  39
   3.1. Theoretical introduction  39
   3.2. Calculation of the relative degree of linear systems  40
   3.3. Calculation of the relative degree of nonlinear systems  40
4. Distribution and its integrability  46
   4.1. Theoretical introduction  46
   4.2. Exercises  47
5. Accessibility and controllability  50
   5.1. Theoretical introduction  50
   5.2. Accessibility  50
   5.3. Accessibility and STLC  53
6. Linearisation  58
   6.1. Conditions for the linearisation with pure state transformation  58
   6.2. Linearisation with pure state transformation  58
   6.3. Conditions for linearisation with state transformation and input transformation (so called linearisation with transformation and feedback)  66
   6.4. Linearisation with state transformation and input transformation (so called linearisation with transformation and feedback)  66
Bibliography  83

Notations and symbols

Description of symbols:
ż          time derivative of the variable z
∂f(z)/∂z   partial derivative of the function f(z) with respect to the variable z

1. Revision of selected differential operations

1.1. Partial derivative

1.1.1. Partial derivative of matrix function


Theoretical introduction
Let’s assume that the matrix function is given by A(x) as
 
a11 (x) a12 (x) . . . a1l (x)
a21 (x) a22 (x) . . . a2l (x)
k×l
A(x) =  . ..  ∈ R , (1.1)
 
. ..
 . . ... . 
ak1 (x) ak2 (x) . . . akl (x)

where x ∈ Rn , vector b is in the form


T
∈ Rl .

b= b1 b2 . . . bl (1.2)

Considered vector function g(x) ∈ Rk is:

g(x) = A(x)b. (1.3)

The partial derivative of g(x) is given by

$$\frac{\partial}{\partial x} g(x) = \frac{\partial A(x)}{\partial x}\, b,$$

where $\frac{\partial A(x)}{\partial x}$ is the partial derivative of A(x) in the form

$$\frac{\partial A(x)}{\partial x} := \begin{bmatrix} \frac{\partial a_{11}(x)}{\partial x} & \frac{\partial a_{12}(x)}{\partial x} & \cdots & \frac{\partial a_{1l}(x)}{\partial x}\\ \frac{\partial a_{21}(x)}{\partial x} & \frac{\partial a_{22}(x)}{\partial x} & \cdots & \frac{\partial a_{2l}(x)}{\partial x}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial a_{k1}(x)}{\partial x} & \frac{\partial a_{k2}(x)}{\partial x} & \cdots & \frac{\partial a_{kl}(x)}{\partial x} \end{bmatrix} \in \mathbb{R}^{k\times l\times n}.$$

Because of the size of $\frac{\partial A(x)}{\partial x}$ (which is $k \times l \times n$), this analytical notation is quite cumbersome. In such a situation we can use the stack operator and the Kronecker product, which simplify the notation of partial derivatives such as $\frac{\partial g(x)}{\partial x}$. It is worth noting that if the multiplication A(x)b is carried out first, and only then the derivative $\frac{\partial}{\partial x}(A(x)b)$ is calculated, there is no notational difficulty at all. At this point we set this observation aside and assume that $\frac{\partial A(x)}{\partial x}$ has to be calculated.


Stack operator

The stack operator, also called the Bellman operator, is a transformation that turns a matrix into a single column by stacking all the columns of the matrix one after another.

Let us denote the stack operator by $(\cdot)^S$, where $(\cdot)$ is any two-dimensional array and $S$ denotes the transformation of that array. To explain the operator, let us define the matrix G:

$$G = \begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1l}\\ g_{21} & g_{22} & \cdots & g_{2l}\\ \vdots & \vdots & \ddots & \vdots\\ g_{k1} & g_{k2} & \cdots & g_{kl} \end{bmatrix} \in \mathbb{R}^{k\times l}.$$

The stack operator is defined as

$$G^S := \begin{bmatrix} g_{11} & g_{21} & \cdots & g_{k1} & g_{12} & g_{22} & \cdots & g_{k2} & \cdots & g_{1l} & g_{2l} & \cdots & g_{kl} \end{bmatrix}^T \in \mathbb{R}^{k\cdot l}. \tag{1.4}$$

Thus $(\cdot)^S : \mathbb{R}^{k\times l} \to \mathbb{R}^{k\cdot l}$, which means that the two-dimensional matrix G of size $k \times l$ is turned into the one-dimensional vector $G^S$ of size $k\cdot l$.

Kronecker product

The Kronecker product is an operation that multiplies every element of one matrix by the whole of the other matrix. Let us denote the Kronecker product by the symbol ⊗ and note that it is a noncommutative operation. To explain the operator ⊗ we will use the previously defined matrix G and a matrix H:

$$H = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1r}\\ h_{21} & h_{22} & \cdots & h_{2r}\\ \vdots & \vdots & \ddots & \vdots\\ h_{p1} & h_{p2} & \cdots & h_{pr} \end{bmatrix} \in \mathbb{R}^{p\times r}.$$

It is worth noting that, in general, there is no requirement on the relation between the dimensions of G and H.

We define the Kronecker product as

$$H \otimes G := \begin{bmatrix} h_{11}G & h_{12}G & \cdots & h_{1r}G\\ h_{21}G & h_{22}G & \cdots & h_{2r}G\\ \vdots & \vdots & \ddots & \vdots\\ h_{p1}G & h_{p2}G & \cdots & h_{pr}G \end{bmatrix} \in \mathbb{R}^{(k\cdot p)\times(l\cdot r)}. \tag{1.5}$$


If we substitute the entries of the matrix H, we obtain

$$H \otimes G := \begin{bmatrix}
h_{11}\begin{bmatrix} g_{11} & \cdots & g_{1l}\\ \vdots & \ddots & \vdots\\ g_{k1} & \cdots & g_{kl} \end{bmatrix} & \cdots & h_{1r}\begin{bmatrix} g_{11} & \cdots & g_{1l}\\ \vdots & \ddots & \vdots\\ g_{k1} & \cdots & g_{kl} \end{bmatrix}\\
\vdots & \ddots & \vdots\\
h_{p1}\begin{bmatrix} g_{11} & \cdots & g_{1l}\\ \vdots & \ddots & \vdots\\ g_{k1} & \cdots & g_{kl} \end{bmatrix} & \cdots & h_{pr}\begin{bmatrix} g_{11} & \cdots & g_{1l}\\ \vdots & \ddots & \vdots\\ g_{k1} & \cdots & g_{kl} \end{bmatrix}
\end{bmatrix},$$

i.e. every entry $h_{ij}$ of H multiplies a full copy of the matrix G.
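The following short sketch (an addition to the text, not part of the original) illustrates the stack operator and the Kronecker product numerically with NumPy; the helper name stack() and the example sizes are chosen here only for illustration. Note that column-major flattening reproduces the stacking order of (1.4), and np.kron(H, G) uses exactly the convention of (1.5).

```python
import numpy as np

def stack(G: np.ndarray) -> np.ndarray:
    """Bellman/stack operator: concatenate the columns of G into one vector, as in (1.4)."""
    return G.flatten(order="F")   # column-major flattening stacks column after column

G = np.arange(1, 7).reshape(2, 3)          # G in R^{2x3}, k = 2, l = 3
H = np.array([[1.0, 2.0], [3.0, 4.0]])     # H in R^{2x2}, p = r = 2

print(stack(G))             # [1 4 2 5 3 6] -> the columns of G stacked (length k*l = 6)
print(np.kron(H, G).shape)  # (4, 6): matches the (k*p) x (l*r) size from (1.5)
# np.kron(H, G) places copies of G scaled by the entries of H, i.e. the block layout of (1.5).
```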

Calculation of the partial derivative of a matrix function with the stack operator and the Kronecker product

Let us now return to the calculation of the partial derivative of g(x) described by (1.3). In this case the stack operator and the Kronecker product can be used in the form

$$\frac{\partial}{\partial x} g(x) = [b \otimes I_{k\times k}]^T\, \frac{\partial A^S(x)}{\partial x}, \tag{1.6}$$

where $A^S(x)$ is determined by (1.4) for the matrix A(x) given in (1.1), $\frac{\partial A^S(x)}{\partial x}$ is the partial derivative of $A^S(x)$, $I_{k\times k}\in\mathbb{R}^{k\times k}$ is the identity matrix, b is given by (1.2), and $[b\otimes I_{k\times k}]^T$ denotes the Kronecker product (1.5) of b and $I_{k\times k}$, subjected to transposition. Using (1.1) and (1.4), $A^S(x)$ has the form

   
$$A^S(x) = \begin{bmatrix} a_{11}(x) & a_{21}(x) & \cdots & a_{k1}(x) & a_{12}(x) & a_{22}(x) & \cdots & a_{k2}(x) & \cdots & a_{1l}(x) & a_{2l}(x) & \cdots & a_{kl}(x) \end{bmatrix}^T \in \mathbb{R}^{k\cdot l},$$


and its partial derivative has the form

$$\frac{\partial A^S(x)}{\partial x} = \begin{bmatrix}
\frac{\partial a_{11}(x)}{\partial x_1} & \frac{\partial a_{11}(x)}{\partial x_2} & \cdots & \frac{\partial a_{11}(x)}{\partial x_n}\\
\frac{\partial a_{21}(x)}{\partial x_1} & \frac{\partial a_{21}(x)}{\partial x_2} & \cdots & \frac{\partial a_{21}(x)}{\partial x_n}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{\partial a_{k1}(x)}{\partial x_1} & \frac{\partial a_{k1}(x)}{\partial x_2} & \cdots & \frac{\partial a_{k1}(x)}{\partial x_n}\\
\frac{\partial a_{12}(x)}{\partial x_1} & \frac{\partial a_{12}(x)}{\partial x_2} & \cdots & \frac{\partial a_{12}(x)}{\partial x_n}\\
\frac{\partial a_{22}(x)}{\partial x_1} & \frac{\partial a_{22}(x)}{\partial x_2} & \cdots & \frac{\partial a_{22}(x)}{\partial x_n}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{\partial a_{k2}(x)}{\partial x_1} & \frac{\partial a_{k2}(x)}{\partial x_2} & \cdots & \frac{\partial a_{k2}(x)}{\partial x_n}\\
\vdots & \vdots & & \vdots\\
\frac{\partial a_{1l}(x)}{\partial x_1} & \frac{\partial a_{1l}(x)}{\partial x_2} & \cdots & \frac{\partial a_{1l}(x)}{\partial x_n}\\
\frac{\partial a_{2l}(x)}{\partial x_1} & \frac{\partial a_{2l}(x)}{\partial x_2} & \cdots & \frac{\partial a_{2l}(x)}{\partial x_n}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{\partial a_{kl}(x)}{\partial x_1} & \frac{\partial a_{kl}(x)}{\partial x_2} & \cdots & \frac{\partial a_{kl}(x)}{\partial x_n}
\end{bmatrix} \in \mathbb{R}^{(k\cdot l)\times n}.$$

The matrix $I_{k\times k}$ has the form

$$I_{k\times k} = \begin{bmatrix} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{bmatrix} \in \mathbb{R}^{k\times k},$$

and using (1.5) and (1.2) we obtain

$$b \otimes I_{k\times k} = \begin{bmatrix} b_1 I_{k\times k}\\ b_2 I_{k\times k}\\ \vdots\\ b_l I_{k\times k} \end{bmatrix} \in \mathbb{R}^{(k\cdot l)\times k},$$

and after transposition

$$[b \otimes I_{k\times k}]^T = \begin{bmatrix} b_1 I_{k\times k} & b_2 I_{k\times k} & \cdots & b_l I_{k\times k} \end{bmatrix},$$


where $[b\otimes I_{k\times k}]^T \in \mathbb{R}^{k\times(k\cdot l)}$. Next, using the definition (1.6), we calculate the final form of the derivative:

$$\frac{\partial}{\partial x} g(x) = \begin{bmatrix} b_1 I_{k\times k} & b_2 I_{k\times k} & \cdots & b_l I_{k\times k} \end{bmatrix}\cdot
\begin{bmatrix}
\frac{\partial a_{11}(x)}{\partial x_1} & \cdots & \frac{\partial a_{11}(x)}{\partial x_n}\\
\vdots & \ddots & \vdots\\
\frac{\partial a_{k1}(x)}{\partial x_1} & \cdots & \frac{\partial a_{k1}(x)}{\partial x_n}\\
\frac{\partial a_{12}(x)}{\partial x_1} & \cdots & \frac{\partial a_{12}(x)}{\partial x_n}\\
\vdots & \ddots & \vdots\\
\frac{\partial a_{k2}(x)}{\partial x_1} & \cdots & \frac{\partial a_{k2}(x)}{\partial x_n}\\
\vdots & & \vdots\\
\frac{\partial a_{1l}(x)}{\partial x_1} & \cdots & \frac{\partial a_{1l}(x)}{\partial x_n}\\
\vdots & \ddots & \vdots\\
\frac{\partial a_{kl}(x)}{\partial x_1} & \cdots & \frac{\partial a_{kl}(x)}{\partial x_n}
\end{bmatrix} \in \mathbb{R}^{k\times n}.$$

Let’s assume the result is given in form

∂  ∂ 1 ∂ 2 ∂ n

g(x) = ∂x g(x) ∂x g(x) ··· ∂x g(x) ,
∂x

then
 ∂a11 (x)   ∂a12 (x)   ∂a1l (x) 

b1 0 ··· 0
 ∂x1 
b2 0 ··· 0
 ∂x1 
bl 0 ··· 0
 ∂x1
 ∂a21 (x)   ∂a22 (x)   ∂a2l (x) 
0 b1 ··· 0  ∂x1
  0 b2 ··· 0  ∂x1
  0 bl ··· 0 
∂x1



1         
g(x) = . . .  + . . . +· · ·+  .. . . ,
     
.  .  .  .   .  .
∂x
 . . . . .   . . . . . . . . .
.  .  .
  
 . . .  .   . . .  .   . . . 
.

0 0 ··· b1 ··· ···
 
0 0 b2
 
0 0 bl
 
∂ak1 (x) ∂ak2 (x) ∂akl (x)
∂x1 ∂x1 ∂x1

 ∂a11 (x)   ∂a12 (x)   ∂a1l (x) 



b1 0 ··· 0
 ∂x2 
b2 0 ··· 0
 ∂x2 
bl 0 ··· 0
 ∂x2
 ∂a21 (x)   ∂a22 (x)   ∂a2l (x) 
0 b1 ··· 0  ∂x2
  0 b2 ··· 0  ∂x2
  0 bl ··· 0 
∂x2



2         
g(x) = . . .  + . . .  +· · ·+  . . . ,
     
.  . .  .
 .  .
∂x
 . . . . .   . . . . .  .. . . . .
.  .  .
  
 . . .  .  . . .  .  . . 
.

0 0 ··· b1 ··· ···
 
0 0 b2
 
0 0 bl
 
∂ak1 (x) ∂ak2 (x) ∂akl (x)
∂x2 ∂x2 ∂x2

 ∂a11 (x)   ∂a12 (x)   ∂a1l (x) 


b1 0 ··· 0 ∂xn b2 0 ··· 0 ∂xn bl 0 ··· 0 ∂xn
     
 ∂a21 (x)   ∂a22 (x)   ∂a2l (x) 
0 b1 ··· 0 
∂xn
  0 b2 ··· 0 
∂xn
  0 bl ··· 0  


n         ∂xn 
g(x) = . . .  + . . .  +· · ·+ 
 . . . .
     
. . .. .  .   . .
.
. .  . .  .
∂x  .. . . .

 . . . .  . . . . .  . 
. . .  . 
 .   .   . 
0 0 ··· b1 ··· ···
 
0 0 b2
 
0 0 bl
 
∂ak1 (x) ∂ak2 (x) ∂akl (x)
∂xn ∂xn ∂xn

By making further multiplication we obtain:


 ∂a11 (x)   ∂a12 (x) 
b1 ∂x1
0 ··· 0 b2 ∂x1
0 ··· 0
 ∂a21 (x)   ∂a22 (x) 
 0 b1 ∂x1
··· 0   0 b2 ∂x1
··· 0 
∂ 1
   
g(x) =  + +
   
∂x . . . . . . . .
 . . . .   . . . . 

 . . . . 


 . . . . 

∂ak1 (x) ∂ak2 (x)
0 0 ··· b1 ∂x1
0 0 ··· b2 ∂x1
 ∂a1l (x) 
bl ∂x 0 ··· 0
1
 ∂a2l (x) 

 0 bl ∂x1
··· 0 

+ ··· +  ,
 
. . .. .
 . . . 

 . . . .


∂akl (x)
0 0 ··· bl ∂x
1

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


1.1. Partial derivative 12

 ∂a11 (x)   ∂a12 (x) 


b1 ∂x2
0 ··· 0 b2 ∂x2
0 ··· 0
 ∂a21 (x)   ∂a22 (x) 
 0 b1 ∂x ··· 0   0 b2 ∂x ··· 0 
∂ 2
 2
  2

g(x) =  + +
   
∂x . . . . . . . .
 . . . .   . . . . 

 . . . . 


 . . . . 

∂ak1 (x) ∂ak2 (x)
0 0 ··· b1 ∂x2
0 0 ··· b2 ∂x2
 ∂a1l (x) 
bl ∂x2
0 ··· 0
 ∂a1l (x) 

 0 bl ∂x2
··· 0 

+ ··· +  ,
 
. . . .
 . . .. . 
. . .
 
 
∂akl (x)
0 0 ··· bl ∂x2

 ∂a11 (x)   ∂a12 (x) 


b1 ∂xn
0 ··· 0 b2 ∂xn
0 ··· 0
 ∂a21 (x)   ∂a22 (x) 
 0 b1 ∂x ··· 0   0 b2 ∂x ··· 0 
∂ n
 n   n 
g(x) =  + +
   
. . . . . . . .
∂x  . . . .   . . . . 

 . . . . 


 . . . . 

∂ak1 (x) ∂ak2 (x)
0 0 ··· b1 ∂xn
0 0 ··· b2 ∂xn
 ∂a1l (x) 
bl ∂xn
0 ··· 0
 ∂a2l (x) 

 0 bl ∂xn
··· 0 

+ ··· +  .
 
. . . .
 . . . . 

 . . . . 

∂akl (x)
0 0 ··· bl ∂x
n

When we add it up, we obtain:


 ∂a11 (x) ∂a12 (x) ∂a1l (x) 
b1 ∂x1
+ b2 ∂x1
+ · · · + bl ∂x1
0 ··· 0
 ∂a21 (x) ∂a22 (x) ∂a2l (x) 
 0 b1 ∂x + b2 ∂x + ··· + bl ∂x ··· 0 
∂ 1
 1 1 1

g(x) =  ,
 
∂x . . .. .
 . . . 

 . . . .


∂ak1 (x) ∂ak2 (x) ∂akl (x)
0 0 ··· b1 ∂x + b2 ∂x + · · · + bl ∂x1
1 1

 ∂a11 (x) ∂a12 (x) ∂a1l (x) 


b1 ∂x2
+ b2 ∂x2
+ · · · + bl ∂x2
0 ··· 0
 ∂a21 (x) ∂a22 (x) ∂a1l (x) 
 0 b1 ∂x + b2 ∂x + ··· + bl ∂x ··· 0 
∂ 2
 2 2 2

g(x) =  ,
 
∂x . . . .
 . . . . 

 . . . .


∂ak1 (x) ∂ak2 (x) ∂akl (x)
0 0 ··· b1 ∂x + b2 ∂x + · · · + bl ∂x2
2 2
 ∂a11 (x) ∂a12 (x) ∂a1l (x) 
b1 ∂xn
+ b2 ∂xn
+ · · · + bl ∂xn
0 ··· 0
 ∂a21 (x) ∂a22 (x) ∂a2l (x) 
 0 b1 ∂x + b2 ∂x + ··· + bl ∂x ··· 0 
∂ n
 n n n

g(x) =  .
 
∂x . . . .
 . . .. . 

 . . . 

∂ak1 (x) ∂ak2 (x) ∂akl (x)
0 0 ··· b1 ∂x + b2 ∂x + · · · + bl ∂xn
n n
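Before the exercises, a small numerical sanity check (an addition, not part of the original derivation) may be useful: compare the Jacobian of g(x) = A(x)b computed by finite differences with the closed form $[b \otimes I]^T\,\partial A^S/\partial x$ from (1.6). The sizes k = l = n = 2 and the concrete A(x) and b below are arbitrary test data chosen here.

```python
import numpy as np

k, l, n = 2, 2, 2
b = np.array([0.7, -1.3])

def A(x):
    # an arbitrary smooth 2x2 matrix function of x in R^2, used only as test data
    return np.array([[x[0] * x[1], np.exp(x[1])],
                     [np.cos(x[0]), x[0] ** 2]])

def dA_stack(x, eps=1e-6):
    """Finite-difference Jacobian of A^S(x), shape (k*l, n), column-major stacking as in (1.4)."""
    base = A(x).flatten(order="F")
    cols = []
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        cols.append((A(x + dx).flatten(order="F") - base) / eps)
    return np.column_stack(cols)

x, eps = np.array([0.3, -0.8]), 1e-6
g = lambda x: A(x) @ b
lhs = np.column_stack([(g(x + eps * e) - g(x)) / eps for e in np.eye(n)])  # direct Jacobian of g
rhs = np.kron(b.reshape(-1, 1), np.eye(k)).T @ dA_stack(x)                 # formula (1.6)
print(np.allclose(lhs, rhs, atol=1e-4))                                    # expected: True
```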

Exercises

Exercise 1. Calculate the derivative $\frac{\partial}{\partial x}(A(x)b)$ using the stack operator and the Kronecker product. Compare the result with the derivative calculated in the standard way, where

$$A(x) = \begin{bmatrix} \sin x_1 & x_1 x_2 + x_1\\ \cos x_1 & x_2^2 + x_2 + \tan x_1 \end{bmatrix}, \qquad b = \begin{bmatrix} b_1\\ b_2 \end{bmatrix}.$$
Solution

$$\frac{\partial}{\partial x}(A(x)b) = [b \otimes I_{k\times k}]^T\,\frac{\partial A^S(x)}{\partial x},$$

$$A^S(x) = \begin{bmatrix} \sin x_1\\ \cos x_1\\ x_1 x_2 + x_1\\ x_2^2 + x_2 + \tan x_1 \end{bmatrix}, \qquad
\frac{\partial A^S(x)}{\partial x} = \begin{bmatrix} \cos x_1 & 0\\ -\sin x_1 & 0\\ x_2 + 1 & x_1\\ \frac{1}{\cos^2 x_1} & 2x_2 + 1 \end{bmatrix},$$

$$[b \otimes I_{2\times 2}]^T = \begin{bmatrix} b_1 I_{2\times 2}\\ b_2 I_{2\times 2} \end{bmatrix}^T
= \begin{bmatrix} b_1 & 0\\ 0 & b_1\\ b_2 & 0\\ 0 & b_2 \end{bmatrix}^T
= \begin{bmatrix} b_1 & 0 & b_2 & 0\\ 0 & b_1 & 0 & b_2 \end{bmatrix},$$

$$\frac{\partial}{\partial x}(A(x)b) = \begin{bmatrix} b_1 & 0 & b_2 & 0\\ 0 & b_1 & 0 & b_2 \end{bmatrix}
\begin{bmatrix} \cos x_1 & 0\\ -\sin x_1 & 0\\ x_2 + 1 & x_1\\ \frac{1}{\cos^2 x_1} & 2x_2 + 1 \end{bmatrix}
= \begin{bmatrix} b_1 \cos x_1 + b_2(x_2 + 1) & b_2 x_1\\ -b_1 \sin x_1 + b_2\frac{1}{\cos^2 x_1} & b_2(2x_2 + 1) \end{bmatrix}.$$
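As an optional cross-check (added here, not part of the original solution), the same result can be obtained with SymPy by differentiating g(x) = A(x)b directly, which is the "standard way" mentioned in the exercise:

```python
import sympy as sp

x1, x2, b1, b2 = sp.symbols('x1 x2 b1 b2')
A = sp.Matrix([[sp.sin(x1), x1*x2 + x1],
               [sp.cos(x1), x2**2 + x2 + sp.tan(x1)]])
b = sp.Matrix([b1, b2])
g = A * b

J = g.jacobian([x1, x2])      # direct computation of dg/dx
print(sp.simplify(J))
# [[b1*cos(x1) + b2*(x2 + 1),     b2*x1        ],
#  [-b1*sin(x1) + b2/cos(x1)**2,  b2*(2*x2 + 1)]]
```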

Exercise 2. A mechanical system in a conservative force field is given. Its dynamics equation is

$$M(q)\ddot q + C(q,\dot q)\dot q + G(q) = \tau,$$

where $q \in \mathbb{R}^n$ is the vector of generalized coordinates, M(q) is the symmetric inertia matrix, C(q, q̇) is the Coriolis matrix, G(q) is the gravity term, and τ is the vector of external generalized forces. For this system the kinetic energy is defined as

$$T = \frac{1}{2}\dot q^T M(q)\dot q,$$

and the potential energy as

$$V = V(q).$$

Prove that $\dot M(q) - 2C(q,\dot q) = -J$, where J is a skew-symmetric matrix.

Definition 3. A skew-symmetric matrix, also known as an antisymmetric (antimetric) matrix, is a square matrix whose transpose equals its negative, i.e. $J^T = -J$ (equivalently $j_{ab} = -j_{ba}$, so its diagonal entries are zero). In particular, for any square matrix A, $J = A - A^T$ is a skew-symmetric matrix.

Solution

The Euler-Lagrange equations with dissipation read

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right)^T - \left(\frac{\partial L}{\partial q}\right)^T + \left(\frac{\partial R}{\partial \dot q}\right)^T = Q, \qquad L = T - V.$$

First,

$$\frac{\partial L}{\partial \dot q} = \frac{\partial T}{\partial \dot q} - \frac{\partial V}{\partial \dot q},$$

$$\frac{\partial T}{\partial \dot q} = \frac{\partial}{\partial \dot q}\left(\frac{1}{2}\dot q^T M(q)\dot q\right)
= \frac{1}{2}\left(\frac{\partial}{\partial \dot q}\left(\dot q^T\right) M(q)\dot q + \dot q^T\frac{\partial}{\partial \dot q}\left(M(q)\dot q\right)\right)
= \frac{1}{2}\left(\dot q^T M^T(q) + \dot q^T M(q)\right)
= \frac{1}{2}\dot q^T\left(M^T(q) + M(q)\right).$$

Due to the symmetry of M(q),

$$M^T(q) + M(q) = 2M^T(q) \;\Rightarrow\; \frac{1}{2}\dot q^T\left(M^T(q) + M(q)\right) = \dot q^T M^T(q), \qquad \frac{\partial V}{\partial \dot q} = 0.$$

Hence

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right)^T = \frac{d}{dt}\left(\frac{\partial T}{\partial \dot q}\right)^T, \qquad \left(\frac{\partial T}{\partial \dot q}\right)^T = M(q)\dot q,$$

$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot q}\right)^T = \frac{d}{dt}\left(M(q)\dot q\right) = \frac{d}{dt}\left(M(q)\right)\dot q + M(q)\frac{d}{dt}\left(\dot q\right) = \dot M(q)\dot q + M(q)\ddot q.$$

Using the stack operator and the Kronecker product,

$$\dot M(q) = \frac{\partial}{\partial q}\left(M(q)\right)\dot q = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right),$$

so

$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot q}\right)^T = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right)\dot q + M(q)\ddot q.$$

For the second term of the Euler-Lagrange equation,

$$\frac{\partial L}{\partial q} = \frac{\partial T}{\partial q} - \frac{\partial V}{\partial q}, \qquad
\frac{\partial T}{\partial q} = \frac{1}{2}\dot q^T\frac{\partial}{\partial q}\left(M(q)\right)\dot q, \qquad
\frac{\partial}{\partial q}\left(M(q)\right)\dot q = [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right),$$

$$\frac{\partial T}{\partial q} = \frac{1}{2}\dot q^T[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right), \qquad \left(\frac{\partial V}{\partial q}\right)^T = G(q),$$

$$\left(\frac{\partial L}{\partial q}\right)^T = \frac{1}{2}\left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right)\right)^T\dot q - G(q).$$

Substituting the above into the Euler-Lagrange equation with $\left(\frac{\partial R}{\partial \dot q}\right)^T = 0$ gives

$$M(q)\ddot q + \underbrace{[\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right)\dot q - \frac{1}{2}\left([\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right)\right)^T\dot q}_{C(q,\dot q)\dot q} + G(q) = Q.$$

Denoting $A := [\dot q \otimes I_{n\times n}]^T\,\frac{\partial}{\partial q}\left(M^S(q)\right) = \dot M(q)$, we obtain

$$C(q,\dot q) = A - \frac{1}{2}A^T = \underbrace{\frac{1}{2}A}_{\frac{1}{2}\dot M(q)} + \frac{1}{2}\left(A - A^T\right) = \frac{1}{2}\left(\dot M(q) + \underbrace{A - A^T}_{J}\right).$$

Since $J = A - A^T$ is skew-symmetric,

$$C(q,\dot q) = \frac{1}{2}\left(\dot M(q) + J\right) \;\Rightarrow\; \dot M(q) - 2C(q,\dot q) = -J.$$
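A quick numerical illustration (an addition, not part of the proof above): for a sample inertia matrix M(q) and the standard Christoffel-symbol choice of C(q, q̇), which is one admissible choice of the Coriolis matrix, the matrix Ṁ − 2C is skew-symmetric. The 2-DOF M(q) below is arbitrary test data chosen here.

```python
import sympy as sp

q1, q2, qd1, qd2 = sp.symbols('q1 q2 qd1 qd2')
q, qd = [q1, q2], [qd1, qd2]

# arbitrary symmetric, configuration-dependent inertia matrix (test data)
M = sp.Matrix([[2 + sp.cos(q2), 1 + sp.cos(q2)/2],
               [1 + sp.cos(q2)/2, 1]])

n = 2
C = sp.zeros(n, n)
for k in range(n):
    for j in range(n):
        # Christoffel-symbol definition of C_{kj}
        C[k, j] = sum(sp.Rational(1, 2) *
                      (sp.diff(M[k, j], q[i]) + sp.diff(M[k, i], q[j])
                       - sp.diff(M[i, j], q[k])) * qd[i]
                      for i in range(n))

Mdot = sp.diff(M, q1) * qd1 + sp.diff(M, q2) * qd2   # chain rule for M(q(t))
J = sp.simplify(Mdot - 2 * C)
print(sp.simplify(J + J.T))                          # zero matrix -> J is skew-symmetric
```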

1.2. Vector field


In general, a vector field (vf) defined on a differentiable manifold assigns a tangent vector to each point of the manifold. In this sense vector fields can be understood as differentiation operators. Let us introduce some notation. Let M denote a differentiable manifold, C∞(M) the set of smooth functions defined on M, a ∈ M a point, and X a linear transformation called a vector field. Let the set of all smooth vector fields on the manifold M be denoted by Γ(M), and the set of all vectors tangent to M by TM. One can then say that a vector field X ∈ Γ(M) acts as a differentiation. If the vector field X is evaluated at a point a, then X(a) is a tangent vector at the point a, i.e. X(a) ∈ Ta M.

Definition 4 (Vector field acting on a function). For a differentiation X ∈ Γ(M) and a point a ∈ M the formula

$$X(f)(a) \equiv X(f(a)), \tag{1.7}$$

denotes the tangent vector at the point a, i.e. X(a) ∈ Ta M, where f ∈ C∞(M). This vector will be called a vector field.

Referring to the definition (1.7), the vf X ∈ Γ(M) acting on the function f at the point a may be written in a different way.

Definition 5 (Vector field acting on a function, in the form of a differential operator). Let X ∈ Γ(M) denote a differentiation, a ∈ M the point of tangency, and let ∂1, ..., ∂n form a basis of the module Γ(M); then the field X(f(a)) can be written as

$$X(f(a)) = \sum_{k=1}^{n} X_{(k)}(f(a))\,\partial_k, \tag{1.8}$$

where f ∈ C∞(M).

From (1.8) one can state that the vf can be written as the sum

$$X(f(a)) = X_{(1)}(f(a))\,\partial_1 + \dots + X_{(n)}(f(a))\,\partial_n, \tag{1.9}$$

where a ∈ M.


From Definition 5 it can be seen that the action of the differentiation X on the function f at the point a amounts to assigning to f its directional derivative at a in the direction determined by the components X(k)(f(a)). Thus we can say that the vectors X(k)(f(a)), for k = 1, ..., n, uniquely determine the differentiation X.

It is worth noting that (1.8) is a notation of the vf as a differential operator, i.e. the sum of the individual components of X(f), with the k-th operator denoted by ∂k. Another commonly used vf notation is the matrix form, i.e. a single-column matrix. Then equation (1.8) can be rewritten as

$$X(f(a)) = \begin{bmatrix} X_{(1)}(f(a)) & \dots & X_{(n)}(f(a)) \end{bmatrix}^T.$$

This form of notation will be used most often in this document.

1.3. Directional derivative


A directional derivative is also called a Lie derivative. It describes the change of a tensor field (e.g. a scalar function, a vf or a one-form) along the flow of another vf.

Definition 6 (Directional derivative, Lie derivative, of a scalar function). For a function f : M → R and a vector field X ∈ Γ(M), the directional derivative of f along X, denoted LX f, is the action of the vf X on the function f, that is X(f).

The directional derivative evaluated at a point a ∈ M can be written in the form

$$(L_X f)(a) := df_a(X_a).$$

If the position of the point a is expressed in a selected coordinate system, the directional derivative can be defined with reference to that coordinate system. Let us therefore define the point $x = \begin{bmatrix} x_1 & x_2 & \dots & x_n \end{bmatrix}^T \in \mathbb{R}^n$, which collects the coordinates of a point in the n-dimensional Euclidean space. Then the directional derivative evaluated at $x \in \mathbb{R}^n$ has the form

$$L_{X(x)} f(x) = \frac{\partial f(x)}{\partial x}\, X(x),$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a scalar function, $X(x) \in T\mathbb{R}^n$ is a vf defined in the tangent space to $\mathbb{R}^n$, and $\frac{\partial}{\partial x}$ denotes the standard partial derivative with respect to the elements of the vector x.

It is also possible to determine a directional derivative with respect to a vector that is not a vector field. Then the following notation is used:

$$L_v f(x) = \frac{\partial f(x)}{\partial x}\, v,$$

where $v \in \mathbb{R}^n$ is a vector.

Definition 7 (Directional derivative, Lie derivative, of a vector field). The directional derivative of the vector field X2 along the vector field X1, denoted $L_{X_1} X_2$, has the form

$$L_{X_1} X_2 = dX_2(X_1) - dX_1(X_2),$$

which follows from the definition of the Lie algebra.

If X1 and X2 are defined in the Euclidean coordinate system, i.e. in the system associated with the vector $x = \begin{bmatrix} x_1 & x_2 & \dots & x_n \end{bmatrix}^T \in \mathbb{R}^n$, then the directional derivative of the vector field can be written in the form

$$L_{X_1(x)} X_2(x) = \frac{\partial X_2(x)}{\partial x} X_1(x) - \frac{\partial X_1(x)}{\partial x} X_2(x),$$

where $X_1(x), X_2(x) \in T\mathbb{R}^n$. The directional derivative of a vector field is also called the Lie bracket, which will be discussed later in this material.


1.3.1. Directional derivative of a scalar function


Exercise 8. Calculate the directional derivative of the scalar function f(x) ∈ R in the direction of the vector v ∈ R², where

$$f(x) = 7x_1 + x_1 x_2^2 + x_2\sin x_1^2 - 12x_2^3, \qquad v = \begin{bmatrix} 2\\ 3 \end{bmatrix}.$$

Solution

$$L_v f(x) := \frac{\partial f(x)}{\partial x}\, v,$$

$$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2\cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix},$$

$$L_v f(x) = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2\cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix}\begin{bmatrix} 2\\ 3 \end{bmatrix}
= 2\left(7 + x_2^2 + 2x_1 x_2\cos x_1^2\right) + 3\left(2x_1 x_2 + \sin x_1^2 - 36x_2^2\right) =$$
$$= 14 + 2x_2^2 + 4x_1 x_2\cos x_1^2 + 6x_1 x_2 + 3\sin x_1^2 - 108x_2^2
= 14 - 106x_2^2 + x_1 x_2\left(4\cos x_1^2 + 6\right) + 3\sin x_1^2.$$

Exercise 9. Calculate the directional derivative of the scalar function f(x) ∈ R in the direction of the vector field w(x) ∈ R², where

$$f(x) = 7x_1 + x_1 x_2^2 + x_2\sin x_1^2 - 12x_2^3, \qquad w(x) = \begin{bmatrix} x_2\\ x_1 x_2 \end{bmatrix}.$$

Solution

$$L_w f(x) := \frac{\partial f(x)}{\partial x}\, w(x),$$

$$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2\cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix},$$

$$L_w f(x) = \begin{bmatrix} 7 + x_2^2 + 2x_1 x_2\cos x_1^2 & 2x_1 x_2 + \sin x_1^2 - 36x_2^2 \end{bmatrix}\begin{bmatrix} x_2\\ x_1 x_2 \end{bmatrix}
= x_2\left(7 + x_2^2 + 2x_1 x_2\cos x_1^2\right) + x_1 x_2\left(2x_1 x_2 + \sin x_1^2 - 36x_2^2\right) =$$
$$= 7x_2 + x_2^3 + 2x_1 x_2^2\cos x_1^2 + 2x_1^2 x_2^2 + x_1 x_2\sin x_1^2 - 36x_1 x_2^3.$$
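The two Lie derivatives above are easy to verify symbolically. The sketch below is an addition (the helper lie_scalar() is defined here, it is not a library call) and simply implements $L_w f = \frac{\partial f}{\partial x} w$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = 7*x1 + x1*x2**2 + x2*sp.sin(x1**2) - 12*x2**3

def lie_scalar(f, w, xs):
    """Directional (Lie) derivative of a scalar f along w: sum_i (df/dx_i) * w_i."""
    return sum(sp.diff(f, xi) * wi for xi, wi in zip(xs, w))

v = [2, 3]                       # constant vector from Exercise 8
w = [x2, x1*x2]                  # vector field from Exercise 9
print(sp.expand(lie_scalar(f, v, [x1, x2])))
print(sp.expand(lie_scalar(f, w, [x1, x2])))
```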
 

1.3.2. Directional derivative of a vector function


Exercise 10. Calculate the directional derivative of the vector function f(x) ∈ R² in the direction of the vector field g(x) ∈ R³, where

$$f(x) = \begin{bmatrix} x_1 x_2^2 + x_2\sin x_1^2 - 12x_1 x_2^3 x_3\\ x_2 x_3 \end{bmatrix}, \qquad g(x) = \begin{bmatrix} x_1 x_2\\ 2x_1\\ x_3 \end{bmatrix}.$$

Solution

$$L_g f(x) := \frac{\partial f(x)}{\partial x}\, g(x),$$

$$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} x_2^2 + 2x_1 x_2\cos x_1^2 - 12x_2^3 x_3 & 2x_1 x_2 + \sin x_1^2 - 36x_1 x_2^2 x_3 & -12x_1 x_2^3\\ 0 & x_3 & x_2 \end{bmatrix},$$

$$L_g f(x) = \frac{\partial f(x)}{\partial x}\, g(x) = \begin{bmatrix} x_1 x_2\left(x_2^2 + 2x_1 x_2\cos x_1^2 - 12x_2^3 x_3\right) + 2x_1\left(2x_1 x_2 + \sin x_1^2 - 36x_1 x_2^2 x_3\right) - 12x_1 x_2^3 x_3\\ 2x_1 x_3 + x_2 x_3 \end{bmatrix}.$$

Is it possible to calculate the above operation $\frac{\partial f(x)}{\partial x} g(x)$ using the Kronecker product and the Bellman operator?

1.4. Lie bracket


Definition 11 (Lie bracket). The Lie bracket of two vector fields X1, X2 ∈ Γ(M) is the vector field defined as

$$[X_1, X_2] = X_1(X_2) - X_2(X_1),$$

where the vector fields X(·) are understood as differential operators. From the symmetry of second derivatives it follows that [X1, X2] contains only first derivatives and is a well-defined vector field. In the literature the Lie bracket is often called the commutator.

The above definition of the Lie bracket can be written in the same way as the directional derivative of a vector field:

$$[X_1, X_2] = dX_2(X_1) - dX_1(X_2),$$

and if X1 and X2 are defined in the Euclidean coordinate system, we obtain the following form:

$$[X_1(x), X_2(x)] = \frac{\partial X_2(x)}{\partial x} X_1(x) - \frac{\partial X_1(x)}{\partial x} X_2(x).$$

Adjoint representation

The adjoint representation is a differential operator which is equivalent to the Lie bracket. To clarify the scope of this concept, note that there is a difference between the adjoint representation of a Lie group and the adjoint representation of a Lie algebra. This distinction is sometimes missing in the literature, which often introduces terminological and semantic inaccuracy. The adjoint representation of a Lie group is denoted by Adg, where g is an element of the group, and the adjoint representation of a Lie algebra is denoted by ad. In this material we will use only the Lie algebra adjoint representation and will call it, for short, the adjoint representation. We omit the formal definition of the Lie algebra adjoint representation here, as it requires the definition of such terms as Lie group and Lie algebra. Instead, we give the relation of the adjoint representation to the Lie bracket and the directional derivative (the Lie derivative), which can be written as

$$\mathrm{ad}_{X_1} X_2 = L_{X_1} X_2 = [X_1, X_2],$$

where X1 and X2 are vector fields. Thus, to put it simply, the adjoint representation applied to two vector fields is identical with the Lie bracket of these vector fields.


Nilpotent algebra

When the Lie group description is used in control theory, one of the considered issues is whether the local properties of a Lie group agree with its global properties, and this is related to the possibility of mapping the Lie group structure through the associated Lie algebra. For such a mapping to give a correct result in the global sense, the nilpotency of the algebra must be preserved. Since this topic is beyond the scope of this course, the introduction to the nilpotency of an algebra is limited to this brief description.

Let us define the concept of the nilpotency of an algebra.

Definition 12 (Nilpotency of a Lie algebra, using the operator ad). A Lie algebra g is nilpotent when the following equality holds:

$$\mathrm{ad}(X_1)\,\mathrm{ad}(X_2)\,\mathrm{ad}(X_3)\cdots\mathrm{ad}(X_r) = 0,$$

where $X_i \in \mathfrak{g}$ and $i = 1, 2, 3, \dots, r$.

Thus, the nilpotency check consists in calculating the directional derivatives of the given vector fields and checking whether the directional derivatives of a sufficiently high order vanish. The above condition can be expressed with the use of Lie brackets in the form

$$[X_1, [X_2, [X_3, [\cdots[X_{r-1}, X_r]\cdots]]]] = 0.$$

Exercise 13. The system is defined in the form

$$\dot q = \begin{bmatrix} 1\\ 0\\ -q_2 \end{bmatrix} u_1 + \begin{bmatrix} 0\\ 1\\ q_1 \end{bmatrix} u_2.$$

Calculate the iterated Lie brackets of the generator fields and check the nilpotency of the system algebra.

Solution

$$g_1(q) = \begin{bmatrix} 1\\ 0\\ -q_2 \end{bmatrix}, \qquad g_2(q) = \begin{bmatrix} 0\\ 1\\ q_1 \end{bmatrix},$$

$$[g_1, g_2] = \frac{\partial g_2}{\partial q} g_1 - \frac{\partial g_1}{\partial q} g_2, \qquad
\frac{\partial g_1}{\partial q} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & -1 & 0 \end{bmatrix}, \qquad
\frac{\partial g_2}{\partial q} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{bmatrix},$$

$$g_3 = [g_1, g_2] = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1\\ 0\\ -q_2 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 1\\ q_1 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} - \begin{bmatrix} 0\\ 0\\ -1 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 2 \end{bmatrix},$$

$$g_4 = [g_1, g_3] = \frac{\partial g_3}{\partial q} g_1 - \frac{\partial g_1}{\partial q} g_3 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1\\ 0\\ -q_2 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & -1 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 2 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix},$$

$$g_5 = [g_2, g_3] = \frac{\partial g_3}{\partial q} g_2 - \frac{\partial g_2}{\partial q} g_3 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 1\\ q_1 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 2 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}.$$

Since the vector fields g4 and g5 are zero, any vector field created by taking Lie brackets with these fields will also be zero. Thus the algebra associated with the given system is nilpotent.
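The bracket computation above can be reproduced with SymPy. The sketch below is an addition; lie_bracket() is a helper defined here (not a library function) that implements $[f, g] = \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g$:

```python
import sympy as sp

q1, q2, q3 = sp.symbols('q1 q2 q3')
q = sp.Matrix([q1, q2, q3])

def lie_bracket(f, g, x):
    """[f, g] = (dg/dx) f - (df/dx) g for column vector fields f, g."""
    return g.jacobian(x) * f - f.jacobian(x) * g

g1 = sp.Matrix([1, 0, -q2])
g2 = sp.Matrix([0, 1, q1])

g3 = lie_bracket(g1, g2, q)        # [0, 0, 2]^T
g4 = lie_bracket(g1, g3, q)        # zero vector field
g5 = lie_bracket(g2, g3, q)        # zero vector field
print(g3.T, g4.T, g5.T)            # all higher-order brackets vanish -> nilpotent algebra
```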


Exercise 14. A mechanical system is given in the form of the kinematic equations

$$\dot q = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} u_1 + \begin{bmatrix} 1\\ -\frac{I}{m} q_3^{-2}\\ 0 \end{bmatrix} u_2,$$

where I is the inertia of a certain element of the system and m is the mass of a certain element of the system. Calculate the iterated Lie brackets of the generator fields and check the nilpotency of the system algebra.

Solution

$$g_1 = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}, \qquad g_2(q) = \begin{bmatrix} 1\\ -\frac{I}{m} q_3^{-2}\\ 0 \end{bmatrix},$$

$$[g_1, g_2] = \frac{\partial g_2}{\partial q} g_1 - \frac{\partial g_1}{\partial q} g_2, \qquad
\frac{\partial g_1}{\partial q} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}, \qquad
\frac{\partial g_2}{\partial q} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 2\frac{I}{m} q_3^{-3}\\ 0 & 0 & 0 \end{bmatrix},$$

$$g_3 = [g_1, g_2] = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 2\frac{I}{m} q_3^{-3}\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1\\ -\frac{I}{m} q_3^{-2}\\ 0 \end{bmatrix} = \begin{bmatrix} 0\\ 2\frac{I}{m} q_3^{-3}\\ 0 \end{bmatrix},$$

$$g_4 = [g_1, g_3] = \frac{\partial g_3}{\partial q} g_1 - \frac{\partial g_1}{\partial q} g_3 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & -6\frac{I}{m} q_3^{-4}\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} = \begin{bmatrix} 0\\ -6\frac{I}{m} q_3^{-4}\\ 0 \end{bmatrix},$$

$$g_5 = [g_2, g_3] = \frac{\partial g_3}{\partial q} g_2 - \frac{\partial g_2}{\partial q} g_3 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & -6\frac{I}{m} q_3^{-4}\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 1\\ -\frac{I}{m} q_3^{-2}\\ 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 2\frac{I}{m} q_3^{-3}\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 2\frac{I}{m} q_3^{-3}\\ 0 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}.$$

Is it possible to generate more non-zero vector fields using g4?

$$g_6 = [g_1, g_4] = \frac{\partial g_4}{\partial q} g_1 - \frac{\partial g_1}{\partial q} g_4 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 24\frac{I}{m} q_3^{-5}\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} = \begin{bmatrix} 0\\ 24\frac{I}{m} q_3^{-5}\\ 0 \end{bmatrix},$$

$$g_7 = [g_2, g_4] = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}, \qquad g_8 = [g_3, g_4] = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}.$$

One may notice that successive non-zero vector fields can be generated using g1 and g4. Let us list all the non-zero vector fields obtained so far and write them with the use of the ad operator:

$$g_3 = [g_1, g_2] = \mathrm{ad}_{g_1} g_2,$$
$$g_4 = [g_1, g_3] = [g_1, [g_1, g_2]] = \mathrm{ad}_{g_1}[g_1, g_2] = \mathrm{ad}_{g_1}(\mathrm{ad}_{g_1} g_2) = \mathrm{ad}^2_{g_1} g_2,$$
$$g_6 = [g_1, g_4] = [g_1, [g_1, g_3]] = [g_1, [g_1, [g_1, g_2]]] = \mathrm{ad}^3_{g_1} g_2.$$

Let us calculate the vector field obtained by applying the adjoint representation k times:

$$\mathrm{ad}^k_{g_1} g_2 = \begin{bmatrix} 0\\ (-1)^{k+1}(k+1)!\,\frac{I}{m}\, q_3^{-(k+2)}\\ 0 \end{bmatrix}.$$

Since $\mathrm{ad}^k_{g_1} g_2$ is non-zero for every k, the algebra associated with this system is not nilpotent.
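The closed form of $\mathrm{ad}^k_{g_1} g_2$ can be checked symbolically for the first few k. The sketch below is an addition; it reuses the same lie_bracket() helper idea as before (the symbol names I, m are local SymPy symbols defined here):

```python
import sympy as sp

q1, q2, q3, I, m = sp.symbols('q1 q2 q3 I m', positive=True)
q = sp.Matrix([q1, q2, q3])

def lie_bracket(f, g, x):
    return g.jacobian(x) * f - f.jacobian(x) * g

g1 = sp.Matrix([0, 0, 1])
g2 = sp.Matrix([1, -(I/m) * q3**(-2), 0])

ad = g2
for k in range(1, 4):
    ad = lie_bracket(g1, ad, q)                                  # ad^k_{g1} g2
    closed = sp.Matrix([0, (-1)**(k + 1) * sp.factorial(k + 1) * (I/m) * q3**(-(k + 2)), 0])
    print(k, sp.simplify(ad - closed) == sp.zeros(3, 1))         # True for k = 1, 2, 3
```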


1.5. Nonholonomic constraints in kinematics and dynamics

Exercise 15. A mechanical system in the form of a jumping robot placed in space (without gravity) is considered.

Figure 1.1: Jumping robot mechanical model

The state of the system is defined by the vector $q = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix}^T \in \mathbb{R}^3$. The control signal is $u = \begin{bmatrix} u_1 & u_2 \end{bmatrix}^T = \begin{bmatrix} \tau & F \end{bmatrix}^T$, where τ is the torque in the rotational joint and F is the leg extension force. The inertia of the rectangular body is I and the mass concentrated at the end of the leg is m.

1. Determine the system dynamics equation using the stack operator and the Kronecker product.

2. Show that the acceleration constraints are integrable.

3. Determine the system kinematics equation.

4. Calculate the Lie bracket $\mathrm{ad}^k_{g_1} g_2$ of the kinematic system, where k is the order of the ad operation.

Solution

Ad. 1

$$E_K = \frac{1}{2}\dot q^T M \dot q, \qquad M = \begin{bmatrix} I & 0 & 0\\ 0 & m q_3^2 & 0\\ 0 & 0 & m \end{bmatrix}, \qquad V_p = 0, \qquad L = E_K - V_p = E_K,$$

$$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot q}\right)^T - \left(\frac{\partial L}{\partial q}\right)^T = Q \;\Rightarrow\; \frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T - \left(\frac{\partial E_K}{\partial q}\right)^T = Q,$$

$$\frac{\partial E_K}{\partial \dot q} = \frac{1}{2}\dot q^T\left(M + M^T\right) = \dot q^T M \;\Rightarrow\; \left(\frac{\partial E_K}{\partial \dot q}\right)^T = M\dot q,$$

$$\frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T = \frac{d}{dt}\left(M\dot q\right) = \dot M\dot q + M\ddot q, \qquad \dot M = \frac{\partial M}{\partial q}\dot q = [\dot q \otimes I_{3\times 3}]^T\frac{\partial M^S}{\partial q},$$

$$[\dot q \otimes I_{3\times 3}]^T = \begin{bmatrix} \dot q_1 & 0 & 0 & \dot q_2 & 0 & 0 & \dot q_3 & 0 & 0\\ 0 & \dot q_1 & 0 & 0 & \dot q_2 & 0 & 0 & \dot q_3 & 0\\ 0 & 0 & \dot q_1 & 0 & 0 & \dot q_2 & 0 & 0 & \dot q_3 \end{bmatrix},$$

$$M^S = \begin{bmatrix} I & 0 & 0 & 0 & m q_3^2 & 0 & 0 & 0 & m \end{bmatrix}^T, \qquad
\frac{\partial M^S}{\partial q} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 2 m q_3\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix},$$

$$\dot M = [\dot q \otimes I_{3\times 3}]^T\frac{\partial M^S}{\partial q} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 2 m q_3\dot q_2\\ 0 & 0 & 0 \end{bmatrix},$$

$$\frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 2 m q_3\dot q_2\\ 0 & 0 & 0 \end{bmatrix}\dot q + \begin{bmatrix} I & 0 & 0\\ 0 & m q_3^2 & 0\\ 0 & 0 & m \end{bmatrix}\ddot q
= \begin{bmatrix} I\ddot q_1\\ m q_3^2\ddot q_2 + 2 m q_3\dot q_2\dot q_3\\ m\ddot q_3 \end{bmatrix},$$

$$\frac{\partial E_K}{\partial q} = \frac{\partial}{\partial q}\left(\frac{1}{2}\dot q^T M \dot q\right) = \frac{1}{2}\dot q^T\frac{\partial M}{\partial q}\dot q = \frac{1}{2}\dot q^T\dot M = \frac{1}{2}\dot q^T\begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 2 m q_3\dot q_2\\ 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & m q_3\dot q_2^2 \end{bmatrix},$$

$$\left(\frac{\partial E_K}{\partial q}\right)^T = \begin{bmatrix} 0\\ 0\\ m q_3\dot q_2^2 \end{bmatrix},$$

$$\partial W = u_1\,\partial q_1 - u_1\,\partial q_2 + u_2\,\partial q_3 \;\Rightarrow\; Q = \begin{bmatrix} u_1\\ -u_1\\ u_2 \end{bmatrix},$$

$$\frac{d}{dt}\left(\frac{\partial E_K}{\partial \dot q}\right)^T - \left(\frac{\partial E_K}{\partial q}\right)^T = Q \;\Rightarrow\;
\begin{bmatrix} I\ddot q_1\\ m q_3^2\ddot q_2 + 2 m q_3\dot q_3\dot q_2\\ m\ddot q_3 \end{bmatrix} - \begin{bmatrix} 0\\ 0\\ m q_3\dot q_2^2 \end{bmatrix} = \begin{bmatrix} u_1\\ -u_1\\ u_2 \end{bmatrix},$$

$$\begin{cases} I\ddot q_1 = u_1\\ m q_3^2\ddot q_2 + 2 m q_3\dot q_3\dot q_2 = -u_1\\ m\ddot q_3 - m q_3\dot q_2^2 = u_2 \end{cases}. \tag{1.10}$$

Ad. 2

Are the equations (1.10) integrable?

For the first equation we have

$$I\ddot q_1 = u_1 \;\Rightarrow\; I q_1 = \iint u_1\,dt\,dt \quad\text{-- it is integrable.}$$

For the second one:

$$q_3^2\ddot q_2 + 2 q_3\dot q_3\dot q_2 = -\frac{u_1}{m} \quad\text{-- is it integrable?}$$

The right-hand side of the equation is integrable, but how to show that the left-hand side is integrable as well? We can note that the left-hand side is a total time derivative:

$$\frac{d}{dt}\left(q_3^2\dot q_2\right) = q_3^2\ddot q_2 + 2 q_3\dot q_3\dot q_2 \quad\text{-- so it is integrable.}$$

The last equation:

$$m\ddot q_3 - m q_3\dot q_2^2 = u_2 \;\Rightarrow\; m\ddot q_3 = u_2 + m q_3\dot q_2^2,$$

and using a new input $\tilde u_2 = u_2 + m q_3\dot q_2^2$ we can avoid the integration problem:

$$m\ddot q_3 = \tilde u_2 \;\Rightarrow\; m q_3 = \iint \tilde u_2\,dt\,dt.$$

Combining the first two equations of (1.10) we obtain an acceleration constraint:

$$m q_3^2\ddot q_2 + 2 m q_3\dot q_3\dot q_2 = -I\ddot q_1.$$

After integration, the velocity constraint is obtained:

$$I\ddot q_1 + m\left(q_3^2\ddot q_2 + 2 q_3\dot q_3\dot q_2\right) = 0 \;\Rightarrow\; I\dot q_1 + m q_3^2\dot q_2 = C,$$

where we assume $C = 0$, so

$$I\dot q_1 + m q_3^2\dot q_2 = 0.$$

This is a kinematic velocity constraint and it is not integrable. It involves the inertia parameters and expresses the conservation of angular momentum for the free system.

As a result, a new system of equations is obtained:

$$\begin{cases} I\ddot q_1 = u_1\\ m\ddot q_3 = \tilde u_2\\ I\dot q_1 + m q_3^2\dot q_2 = 0 \end{cases}.$$


Ad. 3

The kinematics equation will be derived from the nonholonomic constraint in the Pfaffian form:

$$I\dot q_1 + m q_3^2\dot q_2 = 0 \;\Rightarrow\; \begin{bmatrix} I & m q_3^2 & 0 \end{bmatrix}\dot q = 0,$$

where the general form of this equation is $A(q)\dot q = 0$ and A(q) is the constraint matrix. The velocity $\dot q$ therefore lies in the null space of the matrix A(q), so we are looking for a basis that spans the space of admissible velocities $\dot q$. Based on the relation between the system kinematics and the constraint matrix we can write

$$A(q)\, S(q) = 0, \tag{1.11}$$

where S(q) is the kinematics matrix, so

$$\dot q = S(q)\, v,$$

where $S(q) = \begin{bmatrix} g_1(q) & g_2(q) \end{bmatrix}$ and $v = \begin{bmatrix} v_1 & v_2 \end{bmatrix}^T$ is the input. Thus

$$\dot q = \begin{bmatrix} g_1 & g_2 \end{bmatrix}\begin{bmatrix} v_1\\ v_2 \end{bmatrix} = g_1 v_1 + g_2 v_2.$$

Following the general guidelines for simplifying the vector fields, we assume that

$$g_1(q) = g_1 = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}^T, \qquad g_2(q) = \begin{bmatrix} g_{21} & g_{22} & g_{23} \end{bmatrix}^T.$$

Taking (1.11) into account we get

$$\begin{bmatrix} I & m q_3^2 & 0 \end{bmatrix}\begin{bmatrix} 0 & g_{21}\\ 0 & g_{22}\\ 1 & g_{23} \end{bmatrix} = \begin{bmatrix} 0 & 0 \end{bmatrix} \;\Rightarrow\; I g_{21} + m q_3^2 g_{22} = 0.$$

The component $g_{23}$ can be chosen arbitrarily, so $g_{23} = 0$ is assumed. Also $g_{21} = 1$ is assumed. Hence

$$m q_3^2 g_{22} = -I, \qquad g_{22} = -\frac{I}{m} q_3^{-2}, \qquad g_2 = \begin{bmatrix} 1 & -\frac{I}{m} q_3^{-2} & 0 \end{bmatrix}^T.$$

The equations of kinematics take the form

$$\dot q = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} v_1 + \begin{bmatrix} 1\\ -\frac{I}{m} q_3^{-2}\\ 0 \end{bmatrix} v_2. \tag{1.12}$$

Next, the input transformation must be specified. Using (1.12) we obtain

$$\dot q_1 = v_2 \;\Rightarrow\; \ddot q_1 = \dot v_2, \qquad \dot q_3 = v_1 \;\Rightarrow\; \ddot q_3 = \dot v_1.$$

From the dynamics model we have

$$\ddot q_1 = \frac{1}{I} u_1 \;\Rightarrow\; \dot v_2 = \frac{1}{I} u_1, \qquad \ddot q_3 = \frac{1}{m}\tilde u_2 \;\Rightarrow\; \dot v_1 = \frac{1}{m}\tilde u_2,$$

where v are velocities and $\dot v$ corresponds to forces and torques. We then introduce a general formula for the input transformation:

$$\dot v = T u^* \;\Rightarrow\; \begin{bmatrix} \dot v_1\\ \dot v_2 \end{bmatrix} = \begin{bmatrix} 0 & \frac{1}{m}\\ \frac{1}{I} & 0 \end{bmatrix}\begin{bmatrix} u_1\\ \tilde u_2 \end{bmatrix}.$$

The final form of the kinematics equations with the input transformation can be written as

$$\begin{cases} \dot q = S(q)\, v\\ \dot v = T u^* \end{cases}.$$
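A quick way (added here as a supplement) to obtain a kinematics matrix S(q) satisfying A(q)S(q) = 0 is to compute a basis of the null space of the constraint matrix A(q) with SymPy and then rescale it as in the text above:

```python
import sympy as sp

q3, I, m = sp.symbols('q3 I m', positive=True)
A = sp.Matrix([[I, m*q3**2, 0]])     # Pfaffian constraint matrix: A(q) qdot = 0

basis = A.nullspace()                # two independent columns spanning ker A(q)
print(basis[0].T, basis[1].T)
# e.g. [-m*q3**2/I, 1, 0] and [0, 0, 1]; rescaling the first column by -I/(m*q3**2)
# reproduces g2 = [1, -I/(m*q3**2), 0]^T chosen in the text, and the second column is g1.
```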

Ad. 4

$$[g_1, g_2] = \mathrm{ad}_{g_1} g_2 = \frac{\partial g_2}{\partial q} g_1 - \frac{\partial g_1}{\partial q} g_2 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 2\frac{I}{m} q_3^{-3}\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} = \begin{bmatrix} 0\\ 2\frac{I}{m} q_3^{-3}\\ 0 \end{bmatrix},$$

$$\mathrm{ad}^2_{g_1} g_2 = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & -6\frac{I}{m} q_3^{-4}\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix} = \begin{bmatrix} 0\\ -6\frac{I}{m} q_3^{-4}\\ 0 \end{bmatrix},$$

$$\mathrm{ad}^k_{g_1} g_2 = \begin{bmatrix} 0\\ (-1)^{k+1}(k+1)!\,\frac{I}{m}\, q_3^{-(k+2)}\\ 0 \end{bmatrix}.$$


2. State evolution for nonlinear systems

2.1. State evolution for the general case - the vector fields are given in undisclosed form

Exercise 16. Let us consider the state evolution of a two-input system described by the equation

$$\dot x = g_1(x) u_1 + g_2(x) u_2. \tag{2.1}$$

We assume the initial state of the system is $x(0) = x_0$. The inputs of the system are piecewise constant, with the sequence illustrated in Fig. 2.1. The task is to determine the state after the applied sequence of controls, i.e. at $t = 4\varepsilon$.

Figure 2.1: Input signals plot

Solution

Phase I

The state of the system after the first control phase can be presented as an expansion into a Taylor series with respect to time t. We obtain

$$x(t)|_{t=\varepsilon} = x(\varepsilon) = x_0 + \dot x(t)|_{t=0}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=0}\,\varepsilon^2 + O(\varepsilon^3), \tag{2.2}$$

where $O(\varepsilon^3)$ collects all higher-order components (depending on $\varepsilon^p$ for $p \ge 3$)¹. Note that in this phase $u_1 = +\alpha$ and $u_2 = 0$. This allows us to write equation (2.1) in the form

$$\dot x(t) = \alpha g_1(x(t)).$$

Calculating the second derivative of x(t) we have

$$\ddot x(t) = \alpha\frac{\partial g_1(x(t))}{\partial x}\dot x(t) = \alpha^2\frac{\partial g_1(x(t))}{\partial x} g_1(x(t)).$$

¹ Note that the $O(\varepsilon^3)$ notation only indicates that the remainder of the series consists of terms of degree three or higher (with respect to the variable ε); therefore it cannot be assumed that $O(\varepsilon^3)$ describes the exact form of the function. In subsequent relations $O(\varepsilon^3)$ may aggregate different components, so the detailed form of the components described by $O(\varepsilon^3)$ may be different in each equation.



Note that in equation (2.2) the derivatives ẋ and ẍ are evaluated at t = 0, so we can write

$$\dot x(t)|_{t=0} = \alpha g_1(x_0) \quad\text{and}\quad \ddot x(t)|_{t=0} = \alpha^2\left.\frac{\partial g_1(x)}{\partial x}\right|_{x=x_0} g_1(x_0) = \alpha^2 Dg_1(x_0)\, g_1(x_0), \tag{2.3}$$

with the following notation introduced for brevity: $Dg_1(x_0) := \left.\frac{\partial g_1(x)}{\partial x}\right|_{x=x_0}$. Taking the relation (2.3) into account in equation (2.2) we obtain the result describing the state of the system after the first phase:

$$x(\varepsilon) = x_0 + \alpha g_1(x_0)\,\varepsilon + \frac{1}{2}\alpha^2 Dg_1(x_0)\, g_1(x_0)\,\varepsilon^2 + O(\varepsilon^3). \tag{2.4}$$

Phase II

As in the previous phase, we use the series expansion to determine the state. Now we are interested in the state of the system at $t = 2\varepsilon$, that is at the end of phase two. We have

$$x(t)|_{t=2\varepsilon} = x(\varepsilon) + \dot x(t)|_{t=\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=\varepsilon}\,\varepsilon^2 + O(\varepsilon^3). \tag{2.5}$$

The input signals in phase two are $u_1 = 0$ and $u_2 = \alpha$, which allows us to write

$$\dot x(t) = \alpha g_2(x(t)) \quad\text{and}\quad \ddot x(t) = \alpha^2\frac{\partial g_2(x(t))}{\partial x} g_2(x(t)).$$

Now let us note that in equation (2.5) the derivatives ẋ and ẍ are evaluated at the time instant $t = \varepsilon$ (not $t = 0$ as in the previous phase!). Referring to the result (2.3) we get

$$\dot x(t)|_{t=\varepsilon} = \alpha g_2(x(\varepsilon)) \quad\text{and}\quad \ddot x(t)|_{t=\varepsilon} = \alpha^2 Dg_2(x(\varepsilon))\, g_2(x(\varepsilon)). \tag{2.6}$$

By substituting (2.6) into (2.5) we can write

$$x(2\varepsilon) = x(\varepsilon) + \alpha g_2(x(\varepsilon))\,\varepsilon + \frac{1}{2}\alpha^2 Dg_2(x(\varepsilon))\, g_2(x(\varepsilon))\,\varepsilon^2 + O(\varepsilon^3). \tag{2.7}$$

The state value at $t = \varepsilon$ is known from the result (2.4). From this we get

$$g_2(x(\varepsilon)) = g_2(x_0 + \alpha g_1(x_0)\,\varepsilon + \dots) \quad\text{and}\quad Dg_2(x(\varepsilon)) = Dg_2(x_0 + \alpha g_1(x_0)\,\varepsilon + \dots).$$

Note that in the solution (2.7) it is assumed that all components in which the variable ε occurs in a degree higher than two are aggregated in $O(\varepsilon^3)$. Taking this into consideration, we expand the following components of equation (2.7)² (this means that the expansions need not include terms that are absorbed by $O(\varepsilon^3)$):

$$g_2(x(\varepsilon))\,\varepsilon = g_2(x_0 + \alpha g_1(x_0)\,\varepsilon + \dots)\,\varepsilon = \left(g_2(x_0) + \left.\frac{\partial g_2(x)}{\partial x}\right|_{x=x_0}\left(\alpha g_1(x_0)\,\varepsilon + \dots\right)\right)\varepsilon = g_2(x_0)\,\varepsilon + \alpha Dg_2(x_0)\, g_1(x_0)\,\varepsilon^2 + O(\varepsilon^3) \tag{2.8}$$

and

$$Dg_2(x(\varepsilon))\, g_2(x(\varepsilon))\,\varepsilon^2 = Dg_2(x_0 + \dots)\, g_2(x_0 + \dots)\,\varepsilon^2 = Dg_2(x_0)\, g_2(x_0)\,\varepsilon^2 + O(\varepsilon^3). \tag{2.9}$$

Then, substituting the relations (2.8) and (2.9) into (2.7), we get

$$x(2\varepsilon) = x(\varepsilon) + \alpha g_2(x_0)\,\varepsilon + \alpha^2 Dg_2(x_0)\, g_1(x_0)\,\varepsilon^2 + \frac{1}{2}\alpha^2 Dg_2(x_0)\, g_2(x_0)\,\varepsilon^2 + O(\varepsilon^3).$$

Taking into account the solution (2.4) we obtain the result describing the state after the second phase of control:

$$x(2\varepsilon) = x_0 + \alpha\left(g_1(x_0) + g_2(x_0)\right)\varepsilon + \alpha^2\left(\frac{1}{2}Dg_1(x_0)\, g_1(x_0) + Dg_2(x_0)\, g_1(x_0) + \frac{1}{2}Dg_2(x_0)\, g_2(x_0)\right)\varepsilon^2 + O(\varepsilon^3).$$

² Note that in the Taylor expansions described by equations (2.8) and (2.9) it is assumed that the increment is a vector. For example, in equation (2.8) the increment is $\Delta x = \alpha g_1(x_0)\,\varepsilon + \dots$. The formula used to describe the expansion follows from the Taylor expansion of a vector function of many variables.


Phase III

As in the previous two phases, we write

$$x(t)|_{t=3\varepsilon} = x(2\varepsilon) + \dot x(t)|_{t=2\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=2\varepsilon}\,\varepsilon^2 + O(\varepsilon^3).$$

By carrying out the calculations as in the previous cases, one obtains

$$x(3\varepsilon) = x(2\varepsilon) - \alpha g_1(x(2\varepsilon))\,\varepsilon + \frac{1}{2}\alpha^2 Dg_1(x(2\varepsilon))\, g_1(x(2\varepsilon))\,\varepsilon^2 + O(\varepsilon^3). \tag{2.10}$$

We continue with the substitutions

$$g_1(x(2\varepsilon))\,\varepsilon = g_1(x_0)\,\varepsilon + \alpha Dg_1(x_0)\left(g_1(x_0) + g_2(x_0)\right)\varepsilon^2 + O(\varepsilon^3) \tag{2.11}$$

and

$$Dg_1(x(2\varepsilon))\, g_1(x(2\varepsilon))\,\varepsilon^2 = Dg_1(x_0)\, g_1(x_0)\,\varepsilon^2 + O(\varepsilon^3). \tag{2.12}$$

Taking (2.11) and (2.12) into account in (2.10) we have

$$x(3\varepsilon) = x_0 + \alpha\left(g_1(x_0) + g_2(x_0)\right)\varepsilon + \alpha^2\left(\frac{1}{2}Dg_1(x_0)\, g_1(x_0) + Dg_2(x_0)\, g_1(x_0) + \frac{1}{2}Dg_2(x_0)\, g_2(x_0)\right)\varepsilon^2$$
$$- \alpha g_1(x_0)\,\varepsilon - \alpha^2 Dg_1(x_0)\left(g_1(x_0) + g_2(x_0)\right)\varepsilon^2 + \frac{1}{2}\alpha^2 Dg_1(x_0)\, g_1(x_0)\,\varepsilon^2 + O(\varepsilon^3) =$$
$$= x_0 + \alpha g_2(x_0)\,\varepsilon + \alpha^2\left(Dg_2(x_0)\, g_1(x_0) - Dg_1(x_0)\, g_2(x_0) + \frac{1}{2}Dg_2(x_0)\, g_2(x_0)\right)\varepsilon^2 + O(\varepsilon^3).$$

Phase IV

The final result will be given without the detailed derivation (we leave it as a task for the Reader). The state of the system after applying the given sequence of controls is

$$x(4\varepsilon) = x_0 + \alpha^2\left(Dg_2(x_0)\, g_1(x_0) - Dg_1(x_0)\, g_2(x_0)\right)\varepsilon^2 + O(\varepsilon^3). \tag{2.13}$$

Let us consider the component in brackets in more detail. We can write

$$Dg_2(x_0)\, g_1(x_0) - Dg_1(x_0)\, g_2(x_0) = \left.\left(\frac{\partial g_2(x)}{\partial x} g_1(x) - \frac{\partial g_1(x)}{\partial x} g_2(x)\right)\right|_{x=x_0} = [g_1, g_2](x_0).$$

Ultimately, the relation (2.13) can be presented as follows:

$$x(4\varepsilon) = x_0 + [g_1, g_2](x_0)\,\alpha^2\varepsilon^2 + O(\varepsilon^3). \tag{2.14}$$

It turns out that the solution (2.14) contains a component which is a first-order Lie bracket evaluated for the vector fields at the starting point. So the total change of state results not only from the directions generated by the primary fields g1 and g2, but also from fields of higher orders. If the higher-order fields were equal to zero, then, due to the closed cycle and symmetric input, the system would return to the initial state, $x(4\varepsilon) = x_0$. The components present in the remainder of the expansion and denoted by $O(\varepsilon^3)$ should be identified with the presence of vector fields of higher orders (i.e. orders higher than the Lie bracket $[g_1, g_2]$).


2.2. State evolution in case of a given system


Exercise 17. For the control system of the form

$$\dot x = \begin{bmatrix} u_1\\ u_2\\ \sin x_1\, u_2 \end{bmatrix}, \tag{2.15}$$

the initial state was $x_0 = \begin{bmatrix} x_{01} & x_{02} & x_{03} \end{bmatrix}^T$. Then the sequence of controls illustrated in Fig. 2.2 was applied.

Figure 2.2: Input signals plot

Calculate the final state and then check whether the solution evolves in the direction of the first-order Lie bracket.

Solution

Phase I

For $t \in (0; \varepsilon)$ the input signals are $u_1 = \alpha$, $u_2 = 0$. After substituting the controls u we get

$$\dot x = \begin{bmatrix} \alpha\\ 0\\ 0 \end{bmatrix}. \tag{2.16}$$

The solution of this equation at the time instant $t = \varepsilon$ is

$$x(\varepsilon) = x_0 + \dot x(t)|_{t=0}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=0}\,\varepsilon^2 + O(\varepsilon^3). \tag{2.17}$$

In this case $x_0 = x(0)$, so

$$x(\varepsilon) = \underbrace{\begin{bmatrix} x_{01}\\ x_{02}\\ x_{03} \end{bmatrix}}_{x_0} + \underbrace{\begin{bmatrix} \alpha\\ 0\\ 0 \end{bmatrix}}_{\dot x(0)}\varepsilon + \frac{1}{2}\cdot\underbrace{0}_{\ddot x(0)}\cdot\varepsilon^2 + \underbrace{0}_{O(\varepsilon^3)} = \begin{bmatrix} \alpha\varepsilon + x_{01}\\ x_{02}\\ x_{03} \end{bmatrix}. \tag{2.18}$$

Phase II

For $t \in (\varepsilon; 2\varepsilon)$ the input signals are $u_1 = 0$, $u_2 = \alpha$. After substituting the controls u we get

$$\dot x = \begin{bmatrix} 0\\ \alpha\\ \alpha\sin x_1 \end{bmatrix}. \tag{2.19}$$

The solution of this equation at $t = 2\varepsilon$ is

$$x(2\varepsilon) = x(\varepsilon) + \dot x(t)|_{t=\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=\varepsilon}\,\varepsilon^2 + O(\varepsilon^3). \tag{2.20}$$

In the above equation we know the value of $x(\varepsilon)$, but the element $\dot x(t)$ contains an unresolved relation in its third component, i.e. $\dot x_3(t)$. In it the variable $x_1$ appears as the argument of the sine function, so we should calculate its value from the solution of the differential equation (2.19) for the variable $\dot x_1$:

$$\dot x_1(t) = 0 \;\Rightarrow\; \int dx_1(t) = \int 0\,dt \;\Rightarrow\; x_1(t) = C, \qquad C = x_1(\varepsilon) = \alpha\varepsilon + x_{01} \;\Rightarrow\; x_1(t) = \alpha\varepsilon + x_{01}. \tag{2.21}$$

Note that the calculation of $x_1(t)$ can also be performed with a definite integral over the interval $t \in (\varepsilon; 2\varepsilon)$:

$$\dot x_1(t) = 0 \;\Rightarrow\; \int_{\varepsilon}^{2\varepsilon} dx_1(t) = \int_{\varepsilon}^{2\varepsilon} 0\,dt \;\Rightarrow\; x_1(2\varepsilon) - x_1(\varepsilon) = 0 \;\Rightarrow\; x_1(2\varepsilon) = x_1(\varepsilon),$$

and on the basis of the value $x_1(\varepsilon)$ from (2.18) we have

$$x_1(2\varepsilon) = \alpha\varepsilon + x_{01},$$

which gives the same result as in equation (2.21). One can thus see that either integration method leads to the correct result.

From (2.19) and (2.21) we can write

$$\dot x = \begin{bmatrix} 0\\ \alpha\\ \alpha\sin(\alpha\varepsilon + x_{01}) \end{bmatrix}.$$

Since ẋ is a constant vector (independent of t), the vector ẍ is zero. On this basis we also get a zero $O(\varepsilon^3)$ vector. By substituting the known values of the elements into (2.20) we obtain the solution of the equation x(t) at $t = 2\varepsilon$:

$$x(2\varepsilon) = x(\varepsilon) + \begin{bmatrix} 0\\ \alpha\\ \alpha\sin(\alpha\varepsilon + x_{01}) \end{bmatrix}\varepsilon = \begin{bmatrix} \alpha\varepsilon + x_{01}\\ \alpha\varepsilon + x_{02}\\ \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix}. \tag{2.22}$$

Phase III

For $t \in (2\varepsilon; 3\varepsilon)$ the input signals are $u_1 = -\alpha$, $u_2 = 0$, so we have

$$\dot x = \begin{bmatrix} -\alpha\\ 0\\ 0 \end{bmatrix}.$$

Following the same procedure as for phase I, we calculate that the state at $t = 3\varepsilon$ is

$$x(3\varepsilon) = x(2\varepsilon) + \begin{bmatrix} -\alpha\\ 0\\ 0 \end{bmatrix}\varepsilon = \begin{bmatrix} x_{01}\\ \alpha\varepsilon + x_{02}\\ \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix}. \tag{2.23}$$


Phase IV

For $t \in (3\varepsilon; 4\varepsilon)$ the input signals are $u_1 = 0$, $u_2 = -\alpha$. The kinematics is then simplified to the form

$$\dot x = \begin{bmatrix} 0\\ -\alpha\\ -\alpha\sin x_1 \end{bmatrix}. \tag{2.24}$$

As in the case of phase II we have to calculate the value of $x_1$ for the third element of ẋ, i.e. $\dot x_3$. We obtain

$$x_1(t) = x_1(3\varepsilon) = x_{01},$$

so

$$\dot x = \begin{bmatrix} 0\\ -\alpha\\ -\alpha\sin(x_{01}) \end{bmatrix}.$$

The solution of this equation at $t = 4\varepsilon$ is

$$x(4\varepsilon) = x(3\varepsilon) + \dot x(t)|_{t=3\varepsilon}\,\varepsilon + \frac{1}{2}\ddot x(t)|_{t=3\varepsilon}\,\varepsilon^2 + O(\varepsilon^3). \tag{2.25}$$

By substituting the known elements of $x(3\varepsilon)$ and $\dot x(t)$ for $t = 3\varepsilon$, together with the zero vectors $\ddot x(t)$ and $O(\varepsilon^3)$, we get

$$x(4\varepsilon) = x(3\varepsilon) + \begin{bmatrix} 0\\ -\alpha\\ -\alpha\sin(x_{01}) \end{bmatrix}\varepsilon = \begin{bmatrix} x_{01}\\ \alpha\varepsilon + x_{02}\\ \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix} + \begin{bmatrix} 0\\ -\alpha\varepsilon\\ -\alpha\sin(x_{01})\,\varepsilon \end{bmatrix} = \begin{bmatrix} x_{01}\\ x_{02}\\ -\alpha\sin(x_{01})\,\varepsilon + \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon + x_{03} \end{bmatrix}.$$

The elements $x_{01}, x_{02}, x_{03}$ can be aggregated into the vector $x_0$, and then $x(4\varepsilon)$ can be presented in the form

$$x(4\varepsilon) = x_0 + \begin{bmatrix} 0\\ 0\\ -\alpha\sin(x_{01})\,\varepsilon + \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon \end{bmatrix}.$$

There are nonlinear relations in the last component. For a more detailed analysis, let us consider the expansion of the sine function into a Taylor series:

$$\sin(\alpha\varepsilon + x_{01}) = \sin(x_{01}) + \cos(x_{01})\,\alpha\varepsilon - \frac{1}{2}\sin(x_{01})\,\alpha^2\varepsilon^2 + \dots.$$

Taking this expansion into account we can write

$$-\alpha\sin(x_{01})\,\varepsilon + \alpha\sin(\alpha\varepsilon + x_{01})\,\varepsilon = -\alpha\sin(x_{01})\,\varepsilon + \alpha\sin(x_{01})\,\varepsilon + \cos(x_{01})\,\alpha^2\varepsilon^2 - \frac{1}{2}\sin(x_{01})\,\alpha^3\varepsilon^3 + \dots = \cos(x_{01})\,\alpha^2\varepsilon^2 - \frac{1}{2}\sin(x_{01})\,\alpha^3\varepsilon^3 + \dots.$$

Therefore, the solution can be presented as follows:

$$x(4\varepsilon) = x_0 + \alpha^2\varepsilon^2\begin{bmatrix} 0\\ 0\\ \cos(x_{01}) \end{bmatrix} + R, \tag{2.26}$$

where R stands for higher-order terms.


Analysis of the direction of the system evolution after the control sequence

Calculating the Lie bracket of the control system vector fields, i.e. $g_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}^T$ and $g_2 = \begin{bmatrix} 0 & 1 & \sin x_1 \end{bmatrix}^T$, we get

$$[g_1, g_2] = \begin{bmatrix} 0\\ 0\\ \cos(x_1) \end{bmatrix}. \tag{2.27}$$

Referring to the solution (2.26), one can see that the component appearing there is derived from the Lie bracket (2.27), i.e.

$$\alpha^2\varepsilon^2\begin{bmatrix} 0\\ 0\\ \cos(x_{01}) \end{bmatrix} = \alpha^2\varepsilon^2\,[g_1, g_2](x_0).$$

This is confirmed by the general result described by the relation (2.14). It should be noted, however, that since the Lie brackets of higher orders are not zero, e.g. $[g_1, [g_1, g_2]]$ and $[g_1, [g_1, [g_1, g_2]]]$, the direction indicated by $[g_1, g_2]$ is not the exact direction of evolution of the system, although it is the dominant one. To determine the direction of the system evolution more precisely, one should take into account the vectors generated by the Lie brackets of higher orders; then components containing higher powers of the parameter ε appear. In this case the additional terms connected with the higher-order Lie brackets influence only the third coordinate of the vector x, because these vector fields have a non-zero component only in the third element. Let us notice that the first and second elements of the vector field $[g_1, g_2]$ are zero, which is consistent with the fact that

$$x_1(4\varepsilon) = x_{01} \quad\text{and}\quad x_2(4\varepsilon) = x_{02}.$$

One can thus show that

$$\alpha^2\varepsilon^2\begin{bmatrix} 0\\ 0\\ \cos(x_{01}) \end{bmatrix} + R = \alpha^2\varepsilon^2\,[g_1, g_2](x_0) + R_{[\,,\,]},$$

where $R_{[\,,\,]}$ is a vector associated with the higher-order Lie brackets. Therefore, it turns out that in this particular case, after applying the given control sequence, the state of the system changes in the direction indicated by the first-order Lie bracket, and if the higher-order derivatives in the Taylor series expansion (the component R) are taken into account, it is also necessary to take into account the Lie brackets of higher orders (the component $R_{[\,,\,]}$). Figure 2.3 shows the trajectory of the solution, which was obtained assuming $x_0 = \begin{bmatrix} 0.25 & 0 & 0 \end{bmatrix}^T$, $\alpha = 0.5$ and $\varepsilon = 1$.

Figure 2.3: Trajectory of the system evolution
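The following numerical illustration is an addition (not part of the original solution): it simulates the four control phases for the system (2.15) with SciPy and compares the net displacement with the first-order prediction $\alpha^2\varepsilon^2 [g_1, g_2](x_0)$. The choice of solve_ivp and the tolerances are implementation details made here; the two printed vectors agree in the dominant third component, with the small discrepancy coming from the higher-order terms R.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, eps = 0.5, 1.0
x0 = np.array([0.25, 0.0, 0.0])

def rhs(u1, u2):
    # right-hand side of (2.15) for a fixed, piecewise-constant control
    return lambda t, x: np.array([u1, u2, np.sin(x[0]) * u2])

x = x0
for (u1, u2) in [(alpha, 0), (0, alpha), (-alpha, 0), (0, -alpha)]:
    sol = solve_ivp(rhs(u1, u2), (0, eps), x, rtol=1e-10, atol=1e-12)
    x = sol.y[:, -1]

bracket = np.array([0.0, 0.0, np.cos(x0[0])])   # [g1, g2](x0) from (2.27)
print(x - x0)                                   # actual displacement after the four phases
print(alpha**2 * eps**2 * bracket)              # first-order (Lie bracket) prediction
```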


Exercise 18. For the control system of the form

$$\dot x = \begin{bmatrix} u_1\\ u_2\\ x_1 u_2 \end{bmatrix},$$

the initial state was $x_0 = \begin{bmatrix} x_1(0) & x_2(0) & x_3(0) \end{bmatrix}^T = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T$. Then the sequence of controls illustrated in Fig. 2.4 was used.

Figure 2.4: Input signals plot (u1 = 1 on (0, T), u1 = -1 on (2T, 3T), u2 = 1 on (T, 2T), u2 = -1 on (3T, 4T), and both inputs are zero otherwise)

Calculate the state after each period.

Solution

Phase I

For $t \in (t_0; t_K)$, $t_0 = 0$, $t_K = T$, $u_1 = 1$, $u_2 = 0$, the dynamics of the system simplifies to the form

$$\dot x = \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}.$$

Integrating the given dynamics, we determine the state for the individual variables:

$$\dot x_1(t) = 1, \quad \int \dot x_1(t)\,dt = \int 1\,dt, \quad x_1(t) = t + C, \quad C = x_1(t_0),$$
$$x_1(t) = t + x_1(t_0), \quad x_1(T) = T + x_1(0), \quad x_1(T) = T,$$

$$\dot x_2(t) = 0, \quad \int \dot x_2(t)\,dt = \int 0\,dt, \quad x_2(t) = C, \quad C = x_2(t_0),$$
$$x_2(t) = x_2(t_0), \quad x_2(T) = x_2(0), \quad x_2(T) = 0,$$

$$\dot x_3(t) = 0, \quad \int \dot x_3(t)\,dt = \int 0\,dt, \quad x_3(t) = C, \quad C = x_3(t_0),$$
$$x_3(t) = x_3(t_0), \quad x_3(T) = x_3(0), \quad x_3(T) = 0.$$

The final result is

$$x(T) = \begin{bmatrix} T\\ 0\\ 0 \end{bmatrix}.$$
The same result can be obtained by using the definite integral. Then the calculations for the individual variables are as follows:

$$\dot x_1(t) = 1, \quad \int_{t_0}^{t_K} \dot x_1(t)\,dt = \int_{t_0}^{t_K} 1\,dt, \quad x_1(t)\Big|_{t_0}^{t_K} = t\Big|_{t_0}^{t_K},$$
$$x_1(t_K) - x_1(t_0) = t_K - t_0, \quad x_1(T) - x_1(0) = T - 0, \quad x_1(T) = T,$$

$$\dot x_2(t) = 0, \quad \int_{t_0}^{t_K} \dot x_2(t)\,dt = \int_{t_0}^{t_K} 0\,dt, \quad x_2(t)\Big|_{t_0}^{t_K} = C\Big|_{t_0}^{t_K},$$
$$x_2(t_K) - x_2(t_0) = C - C, \quad x_2(T) - x_2(0) = 0, \quad x_2(T) = 0,$$

$$\dot x_3(t) = 0, \quad \int_{t_0}^{t_K} \dot x_3(t)\,dt = \int_{t_0}^{t_K} 0\,dt, \quad x_3(t)\Big|_{t_0}^{t_K} = C\Big|_{t_0}^{t_K},$$
$$x_3(t_K) - x_3(t_0) = C - C, \quad x_3(T) - x_3(0) = 0, \quad x_3(T) = 0.$$

Phase II

For $t \in (t_0; t_K)$, $t_0 = T$, $t_K = 2T$, $u_1 = 0$, $u_2 = 1$, the dynamics of the system simplifies to the form

$$\dot x = \begin{bmatrix} 0\\ 1\\ x_1 \end{bmatrix}.$$

Integrating the given dynamics, we determine the state for the individual variables:

$$\dot x_1(t) = 0, \quad \int \dot x_1(t)\,dt = \int 0\,dt, \quad x_1(t) = C, \quad C = x_1(t_0),$$
$$x_1(t) = x_1(t_0), \quad x_1(2T) = x_1(T), \quad x_1(2T) = T,$$

$$\dot x_2(t) = 1, \quad \int \dot x_2(t)\,dt = \int 1\,dt, \quad x_2(t) = t + C, \quad C = -t_0 + x_2(t_0),$$
$$x_2(t) = t - t_0 + x_2(t_0), \quad x_2(2T) = 2T - T + x_2(T), \quad x_2(2T) = T,$$

$$\dot x_3(t) = x_1(t), \quad \int \dot x_3(t)\,dt = \int x_1(t)\,dt,$$

where, from the already calculated $x_1$, we know that $x_1(t) = x_1(t_0)$, so

$$\int \dot x_3(t)\,dt = \int x_1(t_0)\,dt, \quad x_3(t) = x_1(t_0)\,t + C, \quad C = x_3(t_0) - x_1(t_0)\,t_0,$$
$$x_3(t) = x_1(t_0)\,t + x_3(t_0) - x_1(t_0)\,t_0, \quad x_3(2T) = x_1(T)\,2T + x_3(T) - x_1(T)\,T, \quad x_3(2T) = 2T^2 - T^2 = T^2.$$

The calculation of $x_3$ can also be performed by substituting the (constant) value of $x_1(t)$ before the integration, which allows us to write the derivative of $x_3$ as

$$\dot x_3(t) = x_1(t) = T, \quad \int \dot x_3(t)\,dt = \int T\,dt, \quad x_3(t) = T t + C, \quad C = -T t_0 + x_3(t_0),$$
$$x_3(t) = T t - T t_0 + x_3(t_0), \quad x_3(2T) = 2T^2 - T^2 + x_3(T), \quad x_3(2T) = T^2.$$

The final result is

$$x(2T) = \begin{bmatrix} T\\ T\\ T^2 \end{bmatrix}.$$
Further calculations will be carried out without any detailed description of the integrals, as they are
similar to those of the previous phases.

Phase III
For t ∈ (t0 ; tK ), t0 = 2T , tK = 3T , u1 = −1, u2 = 0
The dynamics of the system for the period under consideration is in the form:
 
−1
ẋ =  0  .
0

When integrating the given dynamics, we determine the state for individual variables:

ẋ1 (t) = −1,


Z Z
ẋ1 (t)dt = −dt,

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


2.2. State evolution in case of a given system 37

x1 (3T ) = 0,

ẋ2 (t) = 0,
Z Z
ẋ2 (t)dt = 0dt,

x2 (3T ) = T,

ẋ3 (t) = 0,
Z Z
ẋ3 (t)dt = 0dt,

x3 (3T ) = T 2 .
The final result is  
0
x(3T ) =  T  .
T2

Phase IV
For t ∈ (t0 ; tK ), t0 = 3T , tK = 4T , u1 = 0, u2 = −1
The dynamics of the system for the period under consideration is in the form:
 
0
ẋ =  −1  .
−x1

When integrating the given dynamics, we determine the state for individual variables:

ẋ1 (t) = 0,
Z Z
ẋ1 (t)dt = 0dt,

x1 (4T ) = 0,

ẋ2 (t) = −1,


Z Z
ẋ2 (t)dt = −dt,

x2 (4T ) = 0,

ẋ3 (t) = −x1 (t),


Z Z
ẋ3 (t)dt = −x1 (t)dt,

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


2.2. State evolution in case of a given system 38

x3 (4T ) = T 2 .
The final result is  
0
x(4T ) =  0  .
T2

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


3. Relative degree

3.1. Theoretical introduction


By the term relative degree we will understand the difference between the denominator and nominator
of the transmittance of an object. Thus, it will be the difference between the number of poles and zeros
in that transmittance. Let’s determine the number of the relative degree as r. One of the methods of
determining the relative degree of the system is to calculate the appropriate derivatives of the vector
fields of the system and check the condition of zeroing these derivatives.
Let’s define the control system as

ẋ = f (x) + g(x)u,
y = h(x),

where x ∈ Rn is a state, u ∈ R is an input, y ∈ R to an output, f (x) is a drift vector field, g(x) is a


generator vector field, a h(x) is an output function. For the control system defined in this way, we will
define the concept of relative degree
Relative degree condition

1. For all i = 0, . . . ,r − 2 conditions must be met

Lg Lif h(x) = 0.

2. For specific i = r − 1 conditions must be met

Lg Lr−1
f h(x) 6= 0.

When conditions 1 and 2 are met, the relative degree is defined by the number r. The procedure is
performed in such a way that we increase i by 1 as long as Lg Lif h(x) = 0. If for a given i we get
Lg Lr−1
f h(x) 6= 0, it means that the second condition is satisfied and the relative degree is r.
The relative degree can also be interpreted as the number of required output differentials of the system
that are necessary for the differential function to be directly dependent on the system input.
Calculating the relative order by differentiation of the output function

1. Calculate the subsequent derivatives of the output function as

di di
y = h(x).
di t di t

2. For the i-th derivative function y in which the input u is directly present, i.e.

di di h(x)
i
y= (x,u),
dt di t
we conclude that the number of the relative degree is equal to the order of the derivative of function
y, i.e. r = i.

39
3.2. Calculation of the relative degree of linear systems 40

3.2. Calculation of the relative degree of linear systems


Ćwiczenie 19. Carry out an analysis of the relative degree of a linear system given by the equation
ẋ = Ax + Bu
y = Cx

Solution

h(x) = y = Cx, f (x) = Ax, g(x) = B,


We calculate Lg Lif h(x) for i = 0.
L0f h(x) = h(x) = Cx,
∂ ∂
Lg L0f h(x) = Lg h(x) = (h(x))g = (Cx)B = CB.
∂x ∂x
We then check if the condition is met
Lg L0f h(x) = 0.
In this case we do not have the possibility to check this condition, because the explicit matrices of the
system are not given. So we continue the analysis assuming that the condition is met. We recalculate
Lg Lif h(x) for a greater than 1 value and then check the condition. The process of increasing i continues
until the condition of the second relative degree calculation procedure is obtained.
We calculate Lg Lif h(x) for i = 1.
∂ ∂
L1f h(x) = Lf h(x) = (h(x))f = (Cx)Ax = CAx,
∂x ∂x

Lg L1f h(x) = Lg Lf h(x) = Lg (CAx) = (CAx)g = CAB.
∂x
We calculate Lg Lif h(x) for i = 2.

L2f h(x) = Lf Lf h(x) = Lf (CAx) = (CAx)Ax = CA2 x,
∂x

Lg L2f h(x) = Lg (CA2 x) = (CA2 x)B = CA2 B.
∂x
By analogy, it can be shown that for i = k the function Lg Lif h(x) will take the form

Lg Lkf h(x) = CAk B.

3.3. Calculation of the relative degree of nonlinear systems


Ćwiczenie 20. The nonlinear system is given by the equation
ẋ = f (x) + gu,
where    
x2 0
f (x) = ,g= .
−x2 − x31 − x1 1
Perform a relative degree analysis in the case
a) y = h(x) = x1 ,
b) y = h(x) = x2 ,
c) y = h(x) = cos x2 .

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


3.3. Calculation of the relative degree of nonlinear systems 41

Solution
Case a) y = h(x) = x1 .
We calculate Lg Lif h(x) for i = 0.

L0f h(x) = h(x) = x1 = 1 0 x,


 

   
0 ∂ ∂   0   0
Lg Lf h(x) = Lg h(x) = (h(x))g = ( 1 0 x) = 1 0 = 0.
∂x ∂x 1 1
We conclude that Lg L0f h(x) = 0, so the first condition is met and we continue the calculation for the
increased i. We calculate Lg Lif h(x) for i = 1.

   
∂ ∂   x2   x2
Lf h(x) = (h(x))f = ( 1 0 x) = 1 0 = x2 ,
∂x ∂x −x2 − x31 − x1 −x2 − x31 − x1
 
∂   0
Lg Lf h(x) = Lg (x2 ) = (x2 )g = 0 1 = 1.
∂x 1
We conclude that Lg Lf h(x) 6= 0, so that means the second condition is fulfilled. Using the equation in
the second condition, we can write down

Lg Lr−1 1
f h(x) = Lg Lf h(x),

so
r − 1 = 1 ⇒ r = 2.
So the relative degree is 2.
Case b) y = h(x) = x2 .
We calculate Lg Lif h(x) for i = 0.

L0f h(x) = h(x) = x2 = 0 1 x,


 

   
∂ ∂  0  0
Lg L0f h(x) = Lg h(x) =
 
(h(x))g = ( 0 1 x) = 0 1 = 1.
∂x ∂x 1 1
We find that Lg L0f h(x) 6= 0, so this means that the second condition for i = 0 is fulfilled. So there was
no checking of the first condition in this case. Using the equation in the second condition, we can enter

Lg Lr−1 0
f h(x) = Lg Lf h(x),

so
r − 1 = 0 ⇒ r = 1.
So the relative degree is 1.
Case c) y = h(x) = cos x2 .
We calculate Lg Lif h(x) for i = 0.

L0f h(x) = h(x) = cos x2 ,


   
0 ∂ ∂ 0   0
Lg Lf h(x) = Lg h(x) = (h(x))g = (cos x2 ) = 0 − sin x2 = − sin x2 .
∂x ∂x 1 1
In this case we obtained an ambiguity, because we do not know the value of the variable x2 and we are
not able to determine what value takes sin x2 . In order to be able to continue the analysis it is necessary
to make an assumption about the value of x2 . We will consider two cases.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


3.3. Calculation of the relative degree of nonlinear systems 42

If x2 6= kπ, where k = 0,1,2, . . ., so then Lg L0f h(x) 6= 0 and we conclude that the second condition
is met. So
r − 1 = 0 ⇒ r = 1,
which ends the relative order analysis.
If we take x2 = kπ, where k = 0,1,2, . . ., then we get Lg L0f h(x) = 0. This means that it is necessary
to increase i further check conditions. Let’s consider this case. We calculate Lg Lif h(x) for i = 1.
 
∂ ∂ x2
Lf h(x) = (h(x))f = (cos x2 ) =
∂x ∂x −x2 − x31 − x1
 
x2
= sin x2 (x2 + x31 + x1 ),
 
= 0 − sin x2
−x2 − x31 − x1


Lg Lf h(x) = Lg (sin x2 (x2 + x31 + x1 )) = (sin x2 (x2 + x31 + x1 ))g =
∂x  
 0
2 3 = cos x2 (x2 + x31 + x1 ) + sin x2 .

= sin x2 (3x1 + 1) cos x2 (x2 + x1 + x1 ) + sin x2
1

Given that x2 = kπ, where k = 0,1,2, . . ., we choose k = 0 and then the condition is simplified to

Lg Lf h(x) = x31 + x1 .

Therefore further assumptions are necessary, this time for the value x1 . We will not analyse this case
further due to its complexity. However, it should be remembered that the analysis of the relative degree
may depend on the value of the state that will be considered in the conditions used to calculate the relative
degree.
Ćwiczenie 21. Determine the relative degree of the system
   
−ax1 1
ẋ = −bx2 + k − cx1 x2  + 0 u,
εx1 x2 0

where a,b,c,k,ε are constant values. The analysis should be carried out for cases

a) y = x2 ,

b) y = x3 .

Solution
Case a) y = h(x) = x2 .
Analysis using the calculation of the derived y function.

y = x2, - lack of dependence on u

ẏ = ẋ2 = −bx2 + k − cx1 x2 , - lack of dependence on u

d d
ÿ = ẋ2 = (−bx2 + k − cx1 x2 ) = −bẋ2 − cẋ1 x2 − cx1 ẋ2 =
dt dt
= −b(−bx2 + k − cx1 x2 ) − c(−ax1 + u)x2 − cx1 (−bx2 + k − cx1 x2 ).

In the ÿ function there is an input u, so if you assume that c 6= 0 and x2 6= 0, it was enough to consider a
second order derivative on the system output function to obtain a direct dependence on the system input.
Therefore, the relative degree is 2. If the conditions for c and x2 are not met, then you should continue

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


3.3. Calculation of the relative degree of nonlinear systems 43

to calculate the derivatives y of the higher degrees. For simplicity, c 6= 0 and x2 6= 0 are assumed, which
ends the analysis of this case.
Analysis using directional derivatives.
We calculate Lg Lif h(x) for i = 0.

L0f h(x) = h(x) = x2 = 0 1 0 x,


 

   
1 1
∂ ∂
Lg L0f h(x) = Lg h(x) =
   
(h(x))g = ( 0 1 0 x) 0 = 0 1 0 0 = 0.
∂x ∂x
0 0
We find that Lg L0f h(x) = 0, so the first condition is met and we continue the calculation for the increased
i. We calculate Lg Lif h(x) for i = 1.

 
−ax1
∂ ∂  
Lf h(x) = (h(x))f = ( 0 1 0 x) −bx2 + k − cx1 x2  =
∂x ∂x
εx1 x2
 
  −ax1
= 0 1 0 −bx2 + k − cx1 x2  = −bx2 + k − cx1 x2 ,
εx1 x2


Lg Lf h(x) = Lg (−bx2 + k − cx1 x2 ) = (−bx2 + k − cx1 x2 )g =
 ∂x
  1
= −cx2 −cx1 − b 0 0 = −cx2 .
0

We conclude that Lg Lf h(x) = −cx2 . To simplify the analysis we assume that cx2 6= 0, which implies
that Lg Lf h(x) 6= 0, so the second condition is met. So

r − 1 = 1 ⇒ r = 2.

The relative degree is 2.


Case b) y = h(x) = x3 .
Analysis using the calculation of the derived y function.

y = x3, - lack of dependence on u

ẏ = ẋ3 = εx1 x2 , - lack of dependence on u


d d
ÿ = ẋ3 = (εx1 x2 ) = εẋ1 x2 + εx1 ẋ2 = ε(−ax1 + u)x2 + εx1 (−bx2 + k − cx1 x2 ).
dt dt
For simplicity’s sake, εx2 6= 0 is assumed, so there is an input u in the ÿ function. Hence, the relative
degree is 2.
Analysis using directional derivatives.
We calculate Lg Lif h(x) for i = 0.

L0f h(x) = h(x) = x3 = 0 0 1 x,


 

   
1 1
∂ ∂
Lg L0f h(x) = Lg h(x) =
   
(h(x))g = ( 0 0 1 x) 0 = 0 0 1 0 = 0.
∂x ∂x
0 0

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


3.3. Calculation of the relative degree of nonlinear systems 44

We note that Lg L0f h(x) = 0, so the first condition is met and we continue the calculation for the increased
i. We calculate Lg Lif h(x) for i = 1.

 
−ax1
∂ ∂  
Lf h(x) = (h(x))f = ( 0 0 1 x) −bx2 + k − cx1 x2  =
∂x ∂x
εx1 x2
 
  −ax1
= 0 0 1 −bx2 + k − cx1 x2  = εx1 x2 ,

εx1 x2
 
∂   1
Lg Lf h(x) = Lg (εx1 x2 ) = (εx1 x2 )g = εx2 εx1 0 0 = εx2 .
∂x
0
For simplicity’s sake, εx2 6= 0 is assumed, so we conclude that Lg Lf h(x) 6= 0. So the second condition
is met. So
r − 1 = 1 ⇒ r = 2.
The relative degree is 2.

Ćwiczenie 22. Determine the relative degree of the system


   
x2 0
ẋ = + u,
2ωη(1 − µx21 )x2 − ω 2 x1 1

where ω,η,µ are not defined constant values. The analysis should be carried out for cases

a) y = x1 ,

b) y = sin x2 .

Solution
Case a) y = h(x) = x1 .
We calculate Lg Lif h(x) for i = 0.

L0f h(x) = h(x) = x1 = 1 0 x,


 
   
0 ∂ ∂   0   0
Lg Lf h(x) = Lg h(x) = (h(x))g = ( 1 0 x) = 1 0 = 0.
∂x ∂x 1 1
We conclude that Lg L0f h(x) = 0, so the first condition is met and we continue the calculation for the
increased i. We calculate Lg Lif h(x) for i = 1.

 
∂ ∂   x2
Lf h(x) = (h(x))f = ( 1 0 x) =
∂x ∂x 2ωη(1 − µx21 )x2 − ω 2 x1
 
  x2
= 1 0 = x2 ,
2ωη(1 − µx21 )x2 − ω 2 x1
 
∂   0
Lg Lf h(x) = Lg (x2 ) = (x2 )g = 0 1 = 1.
∂x 1
We conclude that Lg Lf h(x) 6= 0, so that means the second condition is fulfilled. So

r − 1 = 1 ⇒ r = 2.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


3.3. Calculation of the relative degree of nonlinear systems 45

The relative degree is 2.


Case b) y = h(x) = sin x2 .
We calculate Lg Lif h(x) for i = 0.

L0f h(x) = h(x) = sin x2 ,


   
0 ∂ ∂ 0   0
Lg Lf h(x) = Lg h(x) = (h(x))g = (sin x2 ) = 0 cos x2 = cos x2 .
∂x ∂x 1 1
For simplicity, it is assumed that x2 6= (2k + 1) π2 , where k = 0,1, . . ., then we find that Lg L0f h(x) 6= 0.
This means that the second condition is met. So we do not increase i and we have

r − 1 = 0 ⇒ r = 1.

The relative degree is 1.


If an alternative assumption of x2 = (2k + 1) π2 , where k = 0.1, . . ., is made, then we find that
Lg L0f h(x) = 0. So the first condition is met and we continue the calculation for increased i. We calculate
Lg Lif h(x) for i = 1.

 
∂ ∂ x2
Lf h(x) = (h(x))f = (sin x2 ) =
∂x ∂x 2ωη(1 − µx21 )x2 − ω 2 x1
 
x2
= cos x2 (2ωη(1 − µx21 )x2 − ω 2 x1 ),
 
= 0 cos x2
2ωη(1 − µx21 )x2 − ω 2 x1


Lg Lf h(x) = Lg (cos x2 (2ωη(1 − µx21 )x2 − ω 2 x1 )) = (cos x2 (2ωη(1 − µx21 )x2 − ω 2 x1 ))g = . . .
∂x
The continuation of the calculations is left to the Reader’s own realization.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


4. Distribution and its integrability

4.1. Theoretical introduction


Distribution of the vector fields
Lets consider k dimensional ∆ distribution defined in the neighbourhood U ⊂ Rn of a point x0 ∈ Rn ,
where k ≤ n. Distribution ∆ is called a set of k linearly independent vector fields defined in each point
x ∈ U and described by
∆ = span {X1 ,X2 , . . . ,Xk } ,

where Xi ∈ Rn (i = 1, . . . ,k) is a vector field. The dimension of ∆ distribution we define by ∀x ∈


U, dim ∆ (x) = k. You could also say that the distribution ∆ (x) is mapping to each point x ∈ U a k
dimensional vector space.

Codistribution
Alternatively, instead of distribution of vector fields, a set of their dual forms, i.e. covectors, is con-
sidered. Then we’re talking about codistribution. The k-dimensional codistribution Ω defined in the
neighbourhood U ⊂ Rn of a point x0 ∈ Rn we call a set of k linearly independent covectors defined in
each point x ∈ U and described by

Ω = span {ω1 ,ω2 , . . . ,ωk } ,

where ωi ∈ (Rn )∗ (i = 1, . . . ,k) is a covector field (covector). The dimension of codistribution we define
similarly as the dimension of the distribution of vector fields.

Distribution and codistribution


Distribution and codistribution are dual objects. For a given distribution, you can specify the appro-
priate codistribution and vice versa.
Lets assume that ∀x ∈ U ⊂ Rn , dim ∆ (x) = k. Then we can find the codistribution Ω = ∆⊥ of a
dimension l = n − k defined as

∆⊥ (x) = {ω ∈ (Rn )∗ : ωV = 0 dla każdego V ∈ ∆ (x)} .

It follows that the covectors belonging to the codistribution ∆⊥ (x) are anihilating („nulling”) any vector
field that can be determined from the distribution ∆ (x). Such covectors are called anihilators.
Similarly, we assume that ∀x ∈ U ⊂ Rn , dim Ω (x) = k, so we can define distribution ∆ = Ω⊥ of
a dimension l = n − k defined as

Ω⊥ (x) = {X ∈ Rn : ωX = 0 for each ω ∈ Ω (x)} .

Generally we can state that the pairs ∆⊥ (x) i ∆ (x) (Ω⊥ (x) i Ω (x)) are anihilating each other.

46
4.2. Exercises 47

Distribution integrability
Distribution ∆ (x) of dimension d, for which dimx = n, is integrable , if there exist n − d scalar
functions λ1 , . . . ,λn−d , for which the condition
span{dλ1 ,. . .,dλn−d } = ∆⊥ (x) ,
is fulfilled, where ∆⊥ (x) is a codistribution of a distribution ∆ (x).

Frobenius theorem
Distribution ∆ (x) is integrable if, and only if, when it is involutive.

Distribution involutivnes
The ∆ (x) distribution is involutive if for any two vector fields f1 (x) and f2 (x) belonging to ∆ (x)
distribution, their Lie brackets also belong to ∆ (x) distribution. Such a dependence can be written down
by the formula
f1 (x),f2 (x) ∈ ∆ (x) ⇒ [f1 (x),f2 (x)](x) ∈ ∆ (x) .
In practice, the easiest way to check ∆ (x) distribution involutivnes is to use the condition
rank[f1 (x), . . . ,fd (x)] = rank[f1 (x), . . . ,fd (x),[fi (x),fj (x)]],
where x ∈ U, dimx = n, dim∆ (x) = d, 1 ≤ i, j ≤ d.

4.2. Exercises
Ćwiczenie 23. Annihilator calculation
The distribution is defined as:
   
 0 −x2 
∆(x) = span  0 , 1  . (4.1)
1 + x1 2
 

 T
1. Define the dimension of the ∆(x) in a point x = 0 0 0 .
2. Check if the ∆(x) is integrable.
3. Define a function λ (x), which gradient is an annihilator of the ∆(x) (any method of λ(x) calcu-
lation is acceptable).

Solution
Ad.1
 T
Define the dimension of the ∆(x) in a point x = 0 0 0 .
The rank condition is  
0 −x2
rank  0 1 ,
1 + x1 2
 T
and for x = 0 0 0
 
0 0
rank  0 1  = 2,
1 2
so dim ∆(x) = 2.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


4.2. Exercises 48

Ad.2
Check if the ∆(x) is integrable.
To check if ∆(x) is integrable, one have to calculate Lie bracket
         
0 −x2 0 −1 0 0 0 0 0 −x2
 0  ,  1  =  0 0 0   0  −  0 0 0  1  =
1 + x1 2 0 0 0 1 + x1 1 0 0 2
    
0 0 0
=  0  −  0  =  0 .
0 −x2 x2
Now one have to check if this new vector field increases the dimension of ∆(x). Let’s define ∆2 (x) as:
     
 0 −x2 0 
∆2 (x) = span  0 , 1 , 0  ,
1 + x1 2 x2
 

and check its dimension:


 
0 −x2 0
rank  0 1 0 ,
1 + x1 2 x2
 T
and if we submit x = 0 0 0 we have
 
0 0 0
rank  0 1 0  = 2.
1 2 0

so dim ∆2 (x) = 2 ⇒it is involutive, so it is integrable.

Ad.3
Define a function λ (x) which gradient is an annihilator of the ∆(x) (any method of λ(x) calculation
is acceptable).
We look for a function that will provide
 
h i 0 −x2
∂λ ∂λ ∂λ
 
∂x1 ∂x2 ∂x3
 0 1 = 0 0 ,
1 + x1 2

so one have the following conditions:


∂λ
(1 + x1 ) = 0,
∂x3

∂λ ∂λ ∂λ
− x2 + +2 = 0.
∂x1 ∂x2 ∂x3
∂λ ∂λ
Let’s assume that ∂x 3
= 0, then the first condition is fulfilled for any x1 . Submitting ∂x3 = 0 into the
second equation we have:
∂λ ∂λ
− x2 + = 0.
∂x1 ∂x2
So
∂λ ∂λ
= x2 .
∂x2 ∂x1

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


4.2. Exercises 49

∂λ
We assume ∂x1 = 1 and we have
∂λ
= x2 .
∂x2
The final gradient of λ(x) is
∂λ h ∂λ ∂λ ∂λ i  
= ∂x1 ∂x2 ∂x3 = 1 x2 0 .
∂x
Performing a simple integration we can write the proposition of λ(x) function as:
1
λ = x1 + x22 + C,
2
where C is a constant.
Ćwiczenie 24. Annihilator calculation no 2
There are given two vector fields
   
1 0
f1 =  0  , f2 =  1  .
x1 + x2 x1
Calculate the anihilator of the distribution which consists f1 and f2 .

Solution
We define distribution in form:
   
 1 0 
∆(x) = span  0 , 1  . (4.2)
x1 + x2 x1
 

So, we looking for the covector ω(x) ∈ ∆⊥ (x), for which


   
ω(x) f1 f2 = 0 0 .
Then  
  1
ω1 (x) ω2 (x) ω3 (x)  0  = 0,
x1 + x2
 
  0
ω1 (x) ω2 (x) ω3 (x)  1  = 0.
x1
On this basis, we obtain
ω1 (x) = −ω3 (x)(x1 + x2 ),
ω2 (x) = −ω3 (x)x1 .
It is assumed ω3 (x) = 1, so
ω1 (x) = −(x1 + x2 ),
ω2 (x) = −x1 ,
then  
ω(x) = −(x1 + x2 ) −x1 1 .
Ćwiczenie 25. Calculate the distribution for the given anihilator
Determine a distribution consisting of at least two vector fields, for which the covector given in the
form  
ω(x) = 2x1 1 0
is its anihilator.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


5. Accessibility and controllability

5.1. Theoretical introduction


Accessibility and controllability
Theoretical issues of controllability and accessibility are discussed in the paper „Wybrane aspekty
geometrii różniczkowej” in the file „_Geom_roz_oprac - B.Krysiak.pdf”. Basic definitions are:

• local accessibility (def. 1.50, p. 32),

• local controllability in the short time (def. 1.52, p.32),

• condition of local accessibility (def. 1.53, p. 33),

• Lie algebra rank condition (def. 1.55, p. 33),

• condition of local controllability (def. 1.56, p.34).

5.2. Accessibility
Ćwiczenie 26. Check if the dynamics (5.1) is reachable.

x22
     
ẋ1 0
= + u. (5.1)
ẋ2 0 1

Solution
For the system (5.1) the vector fields are as:

x22
   
0
f (x) = , g(x) = .
0 1

We conclude that dim x = 2. We create the distribution:


  2    
 x2 0
41 = span f, g = span , .
0 1

For this distribution dim 41 = 2 for assumption x2 6= 0. So, we have dim 41 = n, what allows to stare
that the system is accessible form any point xU, U = {xR2 : x2 6= 0}.
Is the system accessible from the point x2 = 0? We calculate the vector field of a Lie bracket:
 
  ∂g ∂f −2x2
f, g = f− g= .
∂x ∂x 0

50
5.2. Accessibility 51

 
Extending the distribution 41 by the f, g we create a new distribution:

x22
       
   0 −2x2
42 = span f, g, f, g = span , , .
0 1 0
 
The vector field f, g does not increase the distribution dimension and for x2 =  0 we have
dim 42 = 1. So we calculate a higher order field (any combination of fields f,g, f, g such that the
newly created field allows to increase the distribution dimension. In general, we may not get a new field
allowing to increase the distribution dimension, then we continue the procedure of increasing the field
order):  
 ∂ f, g
 
   ∂g  2
f, g ,g = f, g − g= .
∂x ∂x 0
  
We are expanding our distribution 42 by f, g ,g and we’re creating a new distribution:
     
43 = span f, g, f, g , f, g ,g =

x22
         
0 −2x2 2
= span , , , .
0 1 0 0
  
The vector field f, g ,g increases the distribution dimension and for x2 = 0 we have dim 43 = 2.
We find that the system (5.1) is accessible from the point x2 = 0.

Ćwiczenie 27. There is a system in the form:


    
ẋ1 x1 x2
 ẋ2  =  0  +  1  u. (5.2)
ẋ3 x2 0
 T
Check if the system is reachable form the point x = 0 0 0 .

Solution
Dla danego układu pola wektorowe sa˛ nast˛epujace:
˛
   
x1 x2
f (x) =  0  , g(x) =  1  .
x2 0

Stwierdzamy, że dim x = 3. Tworzymy dystrybucj˛e:


     
  x1 x2 
41 = span f, g = span  0 ,  1  .
x2 0
 

Dla tej dystrybucji dim 41 = 2 (przy założeniu x1 6= 0 lub x2 6= 0), a wi˛ec dim 41 6= n. Z uwagi na
to, rozszerzamy dystrybucj˛e 41 o nawias Liego

∂g  ∂f 
f− f, g
g= =
∂x ∂x
       
0 1 0 x1 1 0 0 x2 −x2
=  0 0 0  0  −  0 0 0  1  =  0 
0 0 0 x2 0 1 0 0 −1

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


5.2. Accessibility 52

i tworzymy nowa˛ dystrybucj˛e:


       
    x1 x2 −x2 
42 = span f,g, f, g = span  0 ,  1 , 0  ,
x2 0 −1
 

 
x1 x2 −x2
det  0 1 0  = −x1 + x22
x2 0 −1
Dla dystrybucji 42 uzyskujemy dim 42 = 3 (przy założeniu −x1 + x22 6= 0), a wi˛ec dim 42 = n i
stwierdzamy, że układ jest osiagalny
˛ z każdego punktu x ∈ U, U = {x ∈ R3 : x2 6= 0}. Układ (5.2)
posiada dryf, a zatem fakt osiagalności
˛ tego układu nie implikuje jego sterowalności.
Układ nie jest osiagalny
˛ dla x = 0, wi˛ec rozszerzamy dystrybucj˛e 42 o nawias Liego
 
    ∂g   ∂ f, g
f, g , g = f, g − g=
∂x ∂x
       
0 1 0 −x2 0 −1 0 x2 1
=  0 0 0  0  −  0 0 0  1  =  0 
0 0 0 −1 0 0 0 0 0
i tworzymy nowa˛ dystrybucj˛e:
      
43 = span f,g, f, g , f, g , g =
         
 x1 x2 −x2 1 
= span  0  ,  1 , 0 , 0  ,
x2 0 −1 0
 
 
x2 −x2 1
det  1 0 0  = −1.
0 −1 0
Dla dystrybucji 43 uzyskujemy dim 43 = 3 dla x ∈ R3 .

Ćwiczenie 28. Check if the dynamics (5.3) is reachable.

(x2 − 1)2
     
ẋ1 0
= + u. (5.3)
ẋ2 0 1

Solution
Dla układu (5.3) pola wektorowe sa˛ nast˛epujace:
˛
 2   
x2 − 2x2 + 1 0
f (x) = , g(x) = .
0 1

Stwierdzamy, że dim x = 2. Tworzymy dystrybucj˛e:


  2    
 x2 − 2x2 + 1 0
41 = span f, g = span , .
0 1

Dla tej dystrybucji dim 41 = 2 przy założeniu, że x2 6= 1. Uzyskujemy dim 41 = n, wi˛ec układ jest
osiagalny
˛ z każdego punktu x ∈ U, U = {x ∈ R2 : x2 6= 1}.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


5.3. Accessibility and STLC 53

Czy układ jest osiagalny


˛ z punktu x2 = 1? Obliczamy pole wyższego rz˛edu:
 
  ∂g ∂f −2x2 + 2
f, g = f− g= .
∂x ∂x 0
 
Rozszerzamy dystrybucj˛e 41 o nawias Liego f, g i tworzymy nowa˛ dystrybucj˛e:

x22 − 2x2 + 1
       
   0 −2x2 + 2
42 = span f, g, f, g = span , , .
0 1 0
 
Pole wektorowe f, g nie powoduje zwi˛ekszenia wymiaru dystrybucji i dla x2 = 1 uzyskujemy

dim 42 = 1. Obliczamy wi˛ec pole wyższego rz˛edu (dowolna kombinacja pól f,g, f, g taka, by
nowo powstałe pole umożliwiło zwi˛ekszenie wymiaru dystrybucji; w ogólności możemy nie uzyskać
nowego pola umożliwiajacego
˛ zwi˛ekszenie wymiaru dystrybucji, wówczas kontynuujemy procedur˛e
zwi˛ekszania rz˛edu pola):
 
 ∂ f, g
 
   ∂g  2
f, g ,g = f, g − g= .
∂x ∂x 0
  
Rozszerzamy dystrybucj˛e 42 o nawias Liego f, g ,g i tworzymy nowa˛ dystrybucj˛e:
     
43 = span f, g, f, g , f, g ,g =

x22 − 2x2 + 122


         
0 −2x2 + 2 2
= span , , , .
0 1 0 0
  
Pole wektorowe f, g ,g powoduje zwi˛ekszenie wymiaru dystrybucji i dla x2 = 1 uzyskujemy
dim 43 = 2. Stwierdzamy, że układ (5.3) jest osiagalny
˛ z punktu x2 = 1.

5.3. Accessibility and STLC


Ćwiczenie 29. For the system with two inputs:

     
ẋ1 1 0
 ẋ2  =  0  u1 +  1  u2 , (5.4)
ẋ3 0 x1
check if it is accessible and STLC (short time local controllability).

Solution
For a given system, vector fields are as follows:
     
0 1 0
f (x) =  0  , g1 (x) =  0  , g2 (x) =  1  .
0 0 x1

We conclude that dim x = 3. We create distribution:


     
  1 0 
41 = span g1 , g 2 = span  0 ,  1  .
0 x1
 

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


5.3. Accessibility and STLC 54

For this distribution we have dim 41 = 2,so dim 4


 1 6= n. Therefore we extend distribution 41 by the
0
vector field g1 , g2 = ∂g ∂g1
 
∂x g1 − ∂x g2 =
2  0  and we create a nwe distribution:
1
       
    1 0 0 
42 = span g1 , g2 , g1 , g2 = span  0 ,  1 , 0  .
0 x1 1
 

For distribution 42 we obtain dim 42 = 3, so dim 42 = n and we state, that the system is accessible
for any point x ∈ U, U = {x ∈ R3 }. The system (5.4) has no drift, therefore the accessibility implies
controllability (STLC).
Ćwiczenie 30. For the system with two inputs:

     
ẋ1 x2 0
 ẋ2  =  0  u1 +  1  u2 . (5.5)
ẋ3 0 x1
 T
check if it is accessible and STLC (short time local controllability) in the point x = 0 0 0 .

Solution
Dla danego układu pola wektorowe sa˛ nast˛epujace:
˛
     
0 x2 0
f (x) =  0  , g1 (x) =  0  , g2 (x) =  1  .
0 0 x1
Stwierdzamy, że dim x = 3. Tworzymy dystrybucj˛e:
     
  x2 0 
41 = span g1 , g2 = span  0 ,  1  .
0 x1
 

Dla tej dystrybucji dim 41 = 2 (przy założeniu x2 6= 0), a wi˛ec dim 41 6= n. Z uwagi na to, roz-
szerzamy dystrybucj˛e 41 o nawias Liego
 ∂g2 ∂g1

g1 −
g1 , g2 g2 =
=
∂x ∂x
       
0 0 0 x2 0 1 0 0 −1
=  0 0 0  0  −  0 0 0  1  =  0 
1 0 0 0 0 0 0 x1 x2
i tworzymy nowa˛ dystrybucj˛e:
       
    x 2 0 −1 
42 = span g1 , g2 , g1 , g2 = span  0 ,  1 , 0  ,
0 x1 x2
 
 
x2 0 −1
det  0 1 0  = x22
0 x1 x2
Dla dystrybucji 42 uzyskujemy dim 42 = 3 (przy założeniu x2 6= 0), a wi˛ec dim 42 = n i stwier-
dzamy, że układ jest osiagalny
˛ z każdego punktu x ∈ U, U = {x ∈ R3 : x2 6= 0}. Układ (5.5) nie
posiada dryfu, a zatem fakt osiagalności
˛ tego układu implikuje jego sterowalność.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


5.3. Accessibility and STLC 55

 T
Układ nie jest osiagalny
˛ dla x = , wi˛ec rozszerzamy dystrybucj˛e 42 o nawias Liego
0 0 0
 
    ∂g2   ∂ g1 , g2
g1 , g2 , g2 = g1 , g2 − g2 =
∂x ∂x
       
0 0 0 −1 0 0 0 0 0
= 0 0 0  0  −  0 0 0  1  =  0 
1 0 0 x2 0 1 0 x1 −2
i tworzymy nowa˛ dystrybucj˛e:
      
43 = span g1 , g2 , g1 , g2 , g1 , g 2 , g 2 =
         
 x2 0 −1 0 
= span  0  ,  1 , 0 , 0  ,
0 x1 x2 −2
 
 
0 −1 0
det  1 0 0  = −2.
x1 x2 −2
Dla dystrybucji 43 uzyskujemy dim 43 = 3 dla x ∈ R3 .

Ćwiczenie 31. The system is given :

     
ẋ1 x2 0
 ẋ2  =  0  u1 +  2  u2 . (5.6)
ẋ3 0 x1 + x2
 T
1. Check if the system is accessible in the neighbourhood of x = 0 0 0 with assumption
 T
x 6= 0 0 0 .
 T
2. Check if the system is accessible for x = 0 0 0 .

3. Check if the system is controllable (STLC).

Solution
Ad.1
Dla danego układu pola wektorowe sa˛ nast˛epujace:
˛
     
0 x2 0
f (x) =  0  , g1 (x) =  0  , g2 (x) =  2 .
0 0 x1 + x2

Stwierdzamy, że dim x = 3. Tworzymy dystrybucj˛e:


     
  x2 0 
41 = span g1 , g2 = span  0 ,  2  .
0 x1 + x2
 

Dla tej dystrybucji dim 41 = 2 (przy założeniu x2 6= 0), a wi˛ec dim 41 6= n. Z uwagi na to, roz-
szerzamy dystrybucj˛e 41 o nawias Liego
  ∂g2 ∂g1
g1 , g 2 = g1 − g2 =
∂x ∂x

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


5.3. Accessibility and STLC 56

       
0 0 0 x2 0 1 0 0 −2
=  0 0 0  0  −  0 0 0  2 = 0 
1 1 0 0 0 0 0 x1 + x2 x2
i tworzymy nowa˛ dystrybucj˛e:
       
    x2 0 −2 
42 = span g1 , g2 , g1 , g2 = span  0 ,  1 , 0  ,
0 x1 x2
 

 
x2 0 −2
det  0 1 0  = 2x22
0 x1 x2
Dla dystrybucji 42 uzyskujemy dim 42 = 3 (przy założeniu x2 6= 0), a wi˛ec dim 42 = n i stwier-
dzamy, że układ jest osiagalny
˛ z każdego punktu x ∈ U, U = {x ∈ R3 : x2 6= 0}.

Ad.2
Układ nie jest osiagalny
˛ dla x = 0, wi˛ec rozszerzamy dystrybucj˛e 42 o nawias Liego
 
    ∂g2   ∂ g1 , g2
g1 , g2 , g2 = g1 , g2 − g2 =
∂x ∂x
       
0 0 0 −2 0 0 0 0 0
=  0 0 0  0  −  0 0 0  2 = 0 
1 1 0 x2 0 1 0 x1 + x2 −4
i tworzymy nowa˛ dystrybucj˛e:
      
43 = span g1 , g 2 , g1 , g2 , g1 , g2 , g2 =
         
 x2 0 −2 0 
= span  0  ,  2 , 0 , 0  ,
0 x1 + x2 x2 −4
 
 
0 −2 0
det  2 0 0  = −16.
x1 + x2 x2 −4
Dla dystrybucji 43 uzyskujemy dim 43 = 3 dla x ∈ R3 .

Ad.3
Układ (5.6) nie posiada dryfu, a zatem fakt osiagalności
˛ tego układu implikuje jego sterowalność.

Ćwiczenie 32. The system is given :

     
ẋ1 x2 0
 ẋ2  =  x1  u1 +  1  u2 . (5.7)
ẋ3 0 2x1 + x2
 T
1. Check if the system is accessible in the neighbourhood of x = 0 0 0 with assumption
 T
x 6= 0 0 0 .
 T
2. Check if the system is accessible for x = 0 0 0 .

3. Check if the system is controllable (STLC).

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


5.3. Accessibility and STLC 57

Solution
Ad.1
Dla danego układu pola wektorowe sa˛ nast˛epujace:
˛
     
0 x2 0
f (x) =  0  , g1 (x) =  x1  , g2 (x) =  1 .
0 0 2x1 + x2
Stwierdzamy, że dim x = 3. Tworzymy dystrybucj˛e:
     
  x 2 0 
41 = span g1 , g2 = span  x1  ,  1  .
0 2x1 + x2
 

Dla tej dystrybucji dim 41 = 2 (przy założeniu x2 6= 0), a wi˛ec dim 41 =


6 n. Z uwagi na to, roz-
szerzamy dystrybucj˛e 41 o nawias Liego
  ∂g2 ∂g1
g1 , g2 = g1 − g2 =
∂x ∂x
       
0 0 0 x2 0 1 0 0 −1
=  0 0 0   x1  −  1 0 0   1 = 0 
2 1 0 0 0 0 0 2x1 + x2 2x2 + x1
i tworzymy nowa˛ dystrybucj˛e:
       
    x2 0 −1 
42 = span g1 , g2 , g1 , g2 = span  x1 ,
  1  ,  0  ,
0 2x1 + x2 2x2 + x1
 
 
x2 0 −1
det  x1 1 0  = 2x22 − 2x21
0 2x1 + x2 2x2 + x1
Dla dystrybucji 42 uzyskujemy dim 42 = 3 (przy założeniu (x1 6= 0 lub x2 6= 0) i |x1 | =
6 |x2 |), a wi˛ec
dim 42 = n i stwierdzamy, że układ jest osiagalny
˛ z każdego punktu x ∈ U, U = {x ∈ R3 : (x1 6=
0 ∨ x2 6= 0) ∧ |x1 | =
6 |x2 |}.

Ad.2
Układ nie jest osiagalny
˛ dla x = 0, wi˛ec rozszerzamy dystrybucj˛e 42 o nawias Liego
 
    ∂g2   ∂ g1 , g2
g1 , g 2 , g 2 = g1 , g2 − g2 =
∂x ∂x
       
0 0 0 −1 0 0 0 0 0
=  0 0 0  0  −  0 0 0  1 = 0 
2 1 0 2x2 + x1 1 2 0 2x1 + x2 −4
i tworzymy nowa˛ dystrybucj˛e:
      
43 = span g1 , g2 , g1 , g2 , g1 , g 2 , g 2 =
         
 x2 0 −1 0 
= span  x1 ,  1  ,  0  ,  0  ,
0 2x1 + x2 2x2 + x1 −4
 
       
det g2 g1 , g2 g1 , g2 , g2 = −4.
Dla dystrybucji 43 uzyskujemy dim 43 = 3 dla x ∈ R3 .

Ad.3
Układ (5.7) nie posiada dryfu, a zatem fakt osiagalności
˛ tego układu implikuje jego sterowalność.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6. Linearisation

6.1. Conditions for the linearisation with pure state transformation


The system is linearisable with use of pure state transformation if, and only if for x ∈ X the following
conditions occurs:

• condition no 1

dim 4(x) = n, (6.1)


n o
where 4(x) = span adqf gi (x); 1 ≤ i ≤ m, 0 ≤ q ≤ n − 1 .

• condition no 2
h i
adqf gi (x),adrf gj (x) = 0, (6.2)

for all 1 ≤ i,j ≤ m, 0 ≤ q ≤ n − 1 and 0 ≤ r ≤ n.

6.2. Linearisation with pure state transformation


Ćwiczenie 33. [1] The system is defined as:

x2 − 2x2 x3 + x23
   
4x2 x3
ẋ =  x3  +  −2x3  u. (6.3)
0 1

1. Check the conditions for linearisation of the system with use of pure state transformation.

2. Define the dyffeomorphism x = Ψ (z) which defines the pure state transformation.

3. Calculate the inverse of dyffeomorphism in the form z = Ψ−1 (x) and present the system equations
after the linearisation.

Rozwiazanie
˛
Ad.1
Dryf i pole wektorowe określamy nast˛epujaco:
˛

x2 − 2x2 x3 + x23
   
4x2 x3
f (x) =  x3  , g (x) =  −2x3  . (6.4)
0 1

Sprawdzamy warunki linearyzowalności za pomoca˛ czystej transformacji stanu. Dla rozpatrywanego


układu stan opisany jest w przestrzeni R3 , tj. n = dim x = 3.

58
6.2. Linearisation with pure state transformation 59

Zgodnie z (6.1) określamy dystrybucj˛e:

∆(x) = span g adf g ad2f g ,



(6.5)

gdzie
 
2x2
adf g =  −1  , (6.6)
0
 
1
ad2f g =  0  . (6.7)
0
Na tej podstawie można stwierdzić, że dim ∆(x) = 3 = n, tak wi˛ec pierwszy warunek jest spełniony.
Wypiszmy wszystkie nawiasy Liego zwiazane˛ ze spełnieniem drugiego warunku (6.2) dla 0 ≤ q ≤ 2
i dla r = 0:
 0
adf g(x),ad0f g(x) = [g(x),g(x)] = 0 − spelniony z def inicji,


 1
adf g(x),ad0f g(x) = 0 − do sprawdzenia,

(6.8)

 2
adf g(x),ad0f g(x) = 0 − do sprawdzenia.

(6.9)

Zwi˛ekszamy r i wypisujemy warunki dla 0 ≤ q ≤ 2 i dla r = 1:


 0
adf g(x),ad1f g(x) = 0 − jest już wy żej,


ad1f g(x),ad1f g(x) = 0 − spelniony z def inicji,


 

 2
adf g(x),ad1f g(x) = 0 − do sprawdzenia.

(6.10)

Zwi˛ekszamy r i wypisujemy warunki dla 0 ≤ q ≤ 2 i dla r = 2:


 0
adf g(x),ad2f g(x) = 0 − spelniony z def inicji,


 1
adf g(x),ad2f g(x) = 0 − spelniony z def inicji,


 2
adf g(x),ad2f g(x) = 0 − spelniony z def inicji.


Zwi˛ekszamy r i wypisujemy warunki dla 0 ≤ q ≤ 2 i dla r = 3:

ad0f g(x),ad3f g(x) = 0 − do sprawdzenia,


 
(6.11)

ad1f g(x),ad3f g(x) = 0 − do sprawdzenia,


 
(6.12)

ad2f g(x),ad3f g(x) = 0 − do sprawdzenia.


 
(6.13)

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.2. Linearisation with pure state transformation 60

Zatem należy sprawdzić najpierw warunek (6.8):

ad1f g(x),ad0f g(x)


 
= [adf g(x),g(x)] =
   
2x2 4x2 x3
=  −1  ,  −2x3  =
0 1
     
0 4x3 4x2 2x2 0 2 0 4x2 x3
=  0 0 −2   −1  −  0 0 0   −2x3  =
0 0 0 0 0 0 0 1
     
−4x3 −4x3 0
=  0 − 0 = 0 . (6.14)
0 0 0

h i
Na podstawie (6.14) można analitycznie dowieść, że warunek (6.9) (tj. ad2f g(x),ad0f g(x) = 0) jest
również spełniony. Można to również łatwo sprawdzić podstawiajac
˛ wartości odpowiednich pól wekt-
orowych, co prowadzi do zapisu:

     
 2 1 4x 2 x 3 0
0

adf g(x),adf g(x) = [[f,adf g(x)] ,g(x)] =  0 , −2x3
   = 0 .

0 1 0

Dodatkowo na tej samej podstawie można analitycznie wykazać, że warunki (6.10), (6.11), (6.12) i
(6.13) również b˛eda˛ spełnione. Zatem „Warunek 2” podany równaniem (6.2) dla układu w postaci
(6.3)
h i wejściem u) ostatecznie sprowadza si˛e do sprawdzenia tylko warunku (6.8) (tj.
(układ z jednym
1 0
adf g(x),adf g(x) = 0).
Na tej podstawie stwierdzamy, że warunki na linearyzacj˛e układu przez czysta˛ transformacj˛e zmien-
nych stanu sa˛ spełnione.

Ad.2
Kolejnym krokiem jest przyj˛ecie wektorów hi , gdzie i = 1, . . . ,3, które b˛eda˛ rozpinać dystrybucj˛e
4(x), która˛ wykorzystamy do obliczenia strumienia rozwiazania
˛ równania całkowego. Jako, że ta dys-
trybucja została już określona równaniem (6.5), to przyjmujemy:

     
 4x2 x3 2x2 1 
∆(x) = span {h1 h2 h3 } = span g adf g ad2f g = span  −2x3   −1   0  . (6.15)


1 0 0
 

Nast˛epnie określamy złożenie strumieni rozwiazań


˛ równań różniczkowych zwyczajnych w postaci:

  
Φhz31 ◦ Φhz22 ◦ Φhz13 (x0 ) = Φhz31 Φhz22 Φhz13 (x0 ) = Ψ (x0 ,z) = x. (6.16)

Należy zwrócić uwag˛e w jakiej kolejności zostały podstawione zmienne z1 , z2 i z3 (możliwe jest również
przyj˛ecie odwrotnej kolejności ułożenia tych zmiennych).
 T
Przyjmujemy stan zerowy jako x0 = x10 x20 x30 .
Pierwszy strumień Φhz13 (x0 ) obliczamy z zależności:

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.2. Linearisation with pure state transformation 61

 
1
ẋ = h3 =  0  ⇒
0
 
t + x10
⇒ Φht 3 (x0 ) = x =  x20  ⇒
x30
 
z1 + x10
⇒ Φhz13 (x0 ) = x =  x20 .
x30

Drugi strumień Φhz22 (x0 ) obliczamy z zależności:

 
2x2
ẋ = h2 =  −1  ⇒
0
−t2 + 2x20 t + x10
 

⇒ Φht 2 (x0 ) = x =  −t + x20 ⇒


x30
−z22 + 2x20 z2 + x10
 

⇒ Φhz22 (x0 ) = x =  −z2 + x20 .


x30

Trzeci strumień Φhz31 (x0 ) obliczamy z zależności:

 
4x2 x3
ẋ = h1 =  −2x3  ⇒
1
−t4 − 4x30 t3 + 2 x20 − 2x230 t2 + 4x20 x30 t + x10
  

⇒ Φht 1 (x0 ) = x =  −t2 − 2x30 t + x20 ⇒


t + x30
−z3 − 4x30 z3 + 2 x20 − 2x230 z32 + 4x20 x30 z3 + x10
4 3
  

⇒ Φhz31 (x0 ) = x =  −z32 − 2x30 z3 + x20 .


z3 + x30

Wypadkowy strumień rozwiazania


˛ (na podstawie (6.16)) można przedstawić w postaci:

−z34 − 4x30 z33 + 2 (−z2 + x20 ) − 2x230 z32 + 4 (−z2 + x20 ) x30 z3 − z22 + 2x20 z2 + z1 + x10
  

Ψ (x0 ,z) =  −z32 − 2x30 z3 − z2 + x20 .


z3 + x30
(6.17)
 T  T
Dla uproszczenia analizy przyjmujemy x0 = x10 x20 x30 = 0 0 0 . Stad ˛ otrzymujemy:

−z34 − 2z2 z32 − z22 + z1


   
x1
Ψ (z) =  −z32 − z2  =  x2  . (6.18)
z3 x3

Ad.3
Po odwróceniu przekształcenia (6.18) możemy zapisać zależność:

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.2. Linearisation with pure state transformation 62


  −1   4 2 
x3 + 2 −x23 − x2 x23 + −x23 − x2 + x1

z1 Ψ1 (x)
z =  z2  = Ψ−1 (x) =  Ψ−1 2 (x)
= −x23 − x2 =
−1
z3 Ψ3 (x) x3
 4
x3 − 2 x43 + x2 x23 + x43 + x22 + 2x2 x23 + x1
 

=  −x23 − x2 = (6.19)
x3
 2 
x2 + x1
=  −x23 − x2  . (6.20)
x3
Aby móc przedstawić postać dynamiki układu po linearyzacji konieczne jest obliczenie pochodnej
po czasie równania (6.20):
d −1 ∂ −1 dx ∂ −1
ż = Ψ (x) = Ψ (x) = Ψ (x)ẋ =
dt ∂x dt ∂x
 ∂ −1 
∂x Ψ1 (x)
∂ −1
= 
∂x Ψ2 (x)
 ẋ =
∂ −1
∂x Ψ3 (x)
 ∂   ∂   
∂x z1 ∂x z1 ẋ ż1
∂ ∂
=  ∂x z2  ẋ =  ∂x z2 ẋ  =  ż2  . (6.21)
∂ ∂ ż3
∂x z3 ∂x z3 ẋ
Wyznaczamy poszczególne pochodne czastkowe
˛ z zależności (6.21) z wykorzystaniem równania
układu oryginalnego (nieliniowego) (6.3):
x2 − 2x2 x3 + x23
    
4x2 x3
∂  
 +  −2x3  u =
ż1 = z1 ẋ = 1 2x2 0  x3
∂x
0 1
2
 
  x2 − 2x2 x3 + x3 + 4x2 x3 u
= 1 2x2 0  x3 − 2x3 u =
u
= x2 − 2x2 x3 + x23 + 4x2 x3 u + 2x2 (x3 − 2x3 u) =
(6.20)
= x2 + x23 = −z2 ,

2
 
∂   x2 − 2x2 x3 + x3 + 4x2 x3 u
ż2 = z2 ẋ = 0 −1 −2x3  x3 − 2x3 u =
∂x
u
= −x3 + 2x3 u − 2x3 u =
(6.20)
= −x3 = −z3 ,

x2 − 2x2 x3 + x23 + 4x2 x3 u


 
∂  
ż3 = z3 ẋ = 0 0 1  x3 − 2x3 u  = u.
∂x
u
Podstawiajac ˛ wyliczone pochodne poszczególnych elementów zmiennej z do jednego równania uzyskuje
si˛e układ po linearyzacji:
      
ż1 0 −1 0 z1 0
 ż2  =  0 0 −1   z2  +  0  u. (6.22)
ż3 0 0 0 z3 1

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.2. Linearisation with pure state transformation 63

Ćwiczenie 34. The system is defined as:

x2 + x23 −2x23
   

ẋ =  −x3  +  −2x3  u. (6.23)


0 1

1. Check the conditions for linearisation of the system with use of pure state transformation.

2. Define the dyffeomorphism x = Ψ (z) which defines the pure state transformation.

3. Calculate the inverse of dyffeomorphism in the form z = Ψ−1 (x) and present the system equations
after the linearisation.

Rozwiazanie
˛
Ad.1
Dryf i pole wektorowe określamy nast˛epujaco:
˛

x2 + x23 −2x23
   

f (x) =  −x3  , g (x) =  −2x3  . (6.24)


0 1

Sprawdzamy warunki linearyzowalności za pomoca˛ czystej transformacji stanu. Dla rozpatrywanego


układu stan opisany jest w przestrzeni R3 , tj. n = dim x = 3.
Zgodnie z (6.1) określamy dystrybucj˛e:

∆(x) = span g adf g ad2f g ,



(6.25)

gdzie

x2 + x23 −2x23
       
0 0 −4x3 0 1 2x3 0
adf g =  0 0 −2x3   −x3  −  0 0 −1   −2x3  =  1  , (6.26)
0 0 0 0 0 0 0 1 0

    
0 1 2x3 0 −1
ad2f g = −  0 0 −1   1  =  0  . (6.27)
0 0 0 0 0

Na tej podstawie można stwierdzić, że dim ∆(x) = 3 = n, tak wi˛ec pierwszy warunek jest spełniony.
Aby spełnić drugi warunek linearyzacji przez czysta˛ transformat˛e stanu należy sprawdzić warunek:
 1
adf g(x),ad0f g(x) = [adf g(x),g(x)] =


−2x23
   
0
=  1  ,  −2x3  =
0 1
    
0 0 −4x3 2x2 0
=  0 0 −2x3   −1  =  0  .
0 0 0 0 0

Na tej podstawie stwierdzamy, że warunki na linearyzacj˛e układu przez czysta˛ transformacj˛e zmien-
nych stanu sa˛ spełnione.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.2. Linearisation with pure state transformation 64

Ad.2
Kolejnym krokiem jest przyj˛ecie wektorów hi , gdzie i = 1, . . . ,3, które b˛eda˛ rozpinać dystrybucj˛e
4(x), która˛ wykorzystamy do obliczenia strumienia rozwiazania
˛ równania całkowego. Jako, że ta dys-
trybucja została już określona równaniem (6.25), to przyjmujemy:
2
     
 −2x 3 0 −1 
∆(x) = span {h1 h2 h3 } = span g adf g ad2f g = span  −2x3   1   0  . (6.28)


1 0 0
 

Nast˛epnie określamy złożenie strumieni rozwiazań˛ równań różniczkowych zwyczajnych w postaci:


  
Φhz31 ◦ Φhz22 ◦ Φhz13 (x0 ) = Φhz31 Φhz22 Φhz13 (x0 ) = Ψ (x0 ,z) = x. (6.29)

Należy zwrócić uwag˛e w jakiej kolejności zostały podstawione zmienne z1 , z2 i z3 (możliwe jest również
przyj˛ecie odwrotnej kolejności ułożenia tych zmiennych).
 T
Przyjmujemy stan zerowy jako x0 = x10 x20 x30 .
Pierwszy strumień Φhz13 (x0 ) obliczamy z zależności:

 
−1
ẋ = h3 =  0  ⇒
0
 
−t + x10
⇒ Φht 3 (x0 ) = x =  x20 ⇒
x30
 
−z1 + x10
⇒ Φhz13 (x0 ) = x =  x20 .
x30

Drugi strumień Φhz22 (x0 ) obliczamy z zależności:


0
ẋ = h2 =  1  ⇒
0
 
x10
⇒ Φht 2 (x0 ) = x =  t + x20  ⇒
x30
 
x10
⇒ Φhz22 (x0 ) = x =  z2 + x20  .
x30

Trzeci strumień Φhz31 (x0 ) obliczamy z zależności:

−2x23
 

ẋ = h1 =  −2x3  ⇒
1
 2 3
− 3 t − 2x30 t2 − 2x230 t + x10

⇒ Φht 1 (x0 ) = x =  −t2 − 2x30 t + x20 ⇒


t + x30
 2 3
− 3 z3 − 2x30 z32 − 2x230 z3 + x10

⇒ Φhz31 (x0 ) = x =  −z32 − 2x30 z3 + x20 .


z3 + x30

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.2. Linearisation with pure state transformation 65

Wypadkowy strumień rozwiazania


˛ (na podstawie (6.29)) można przedstawić w postaci:
 2 3
− 3 z3 − 2x30 z32 − 2x230 z3 − z1 + x10

Ψ (x0 ,z) =  −z32 − 2x30 z3 + z2 + x20 . (6.30)


z3 + x30
 T  T
Dla uproszczenia analizy przyjmujemy x0 = x10 x20 x30 = 0 0 0 . Stad
˛ otrzymujemy:
 2 3   
− 3 z3 − z1 x1
Ψ (z) =  −z32 + z2  =  x2  . (6.31)
z3 x3

Ad.3
Po odwróceniu przekształcenia (6.31) możemy zapisać zależność:

  −1   2 3 
z1 Ψ1 (x) − 3 x3 − x1
z =  z2  = Ψ−1 (x) =  Ψ−1
2 (x)
 =  x23 + x2  (6.32)
z3 Ψ−1
3 (x) x3

Aby móc przedstawić postać dynamiki układu po linearyzacji konieczne jest obliczenie pochodnej
po czasie równania (6.32):
d ∂
ż = z(x) = z(x)ẋ =
dt ∂x
 ∂   
∂x z1 ẋ ż1

=  ∂x z2 ẋ  =  ż2  .
∂ ż3
∂x z3 ẋ

Wyznaczamy poszczególne pochodne czastkowe


˛ z zależności (6.21) z wykorzystanie równania
układu oryginalnego (nieliniowego) (6.23):

x2 + x23 − 2x23 u
 

−1 0 −2x23  −x3 − 2x3 u  =
 
ż1 = z1 ẋ =
∂x
u
(6.32)
= −x2 − x23 = −z2 ,

2 2
 
∂   x2 + x3 − 2x3 u
ż2 = z2 ẋ = 0 1 2x3  −x3 − 2x3 u  =
∂x
u
(6.32)
= −x3 = −z3 ,

x2 + x23 − 2x23 u
 
∂  
 −x3 − 2x3 u  = u.
ż3 = z3 ẋ = 0 0 1
∂x
u
Podstawiajac ˛ wyliczone pochodne poszczególnych elementów zmiennej z do jednego równania uzyskuje
si˛e układ po linearyzacji:
      
ż1 0 −1 0 z1 0
 ż2  =  0 0 −1   z2  +  0  u. (6.33)
ż3 0 0 0 z3 1

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.3. Conditions for linearisation with state transformation and input transformation (so called linearisation with
transformation and feedback) 66

6.3. Conditions for linearisation with state transformation and input


transformation (so called linearisation with transformation and feed-
back)
The system can be linearised with state transformation and input transformation if and only if when
for x ∈ X the following conditions occurs:

• condition no 1

4n−2 is involutive, (6.34)


n o
where 4n−2 = span adqf gi (x); 1 ≤ i ≤ m, 0 ≤ q ≤ n − 2 .

• condition no 2

dim 4n−1 = n,, (6.35)


n o
where 4n−1 = span adqf gi (x); 1 ≤ i ≤ m, 0 ≤ q ≤ n − 1 .

6.4. Linearisation with state transformation and input transformation


(so called linearisation with transformation and feedback)
Ćwiczenie 35. The system is defined as:

x2 − 3x23 − 3u
 

ẋ =  sin x1 + x22 + x3  . (6.36)


x23 + u

1. Check the conditions for linearisation of the system with state transformation and input transform-
ation.

2. Define linearising input function.

3. Define a feedback control allowing for trajectory tracking of smooth admissible reference traject-
ory defined for the linearising output as zr1 = r (t).

Rozwiazanie
˛
Ad.1
Dryf i pole wektorowe zwiazane
˛ z wejściem określamy nast˛epujaco:
˛

x2 − 3x23
   
−3
f (x) =  sin x1 + x22 + x3  , g (x) =  0  . (6.37)
x23 1

Sprawdzamy warunki linearyzowalności za pomoca˛ transformacji stanu i sprz˛eżenia zwrotnego. Dla


rozpatrywanego układu stan opisany jest w przestrzeni R3 , tj. n = dim x = 3. Należy sprawdzić czy
dystrybucja 4n−2 jest inwolutywna oraz czy 4n−1 ma wymiar n. Określamy dwie dystrybucje:

∆n−2 = ∆1 = span {g, adf g} (6.38)

oraz
∆n−1 = ∆2 = span g, adf g, ad2f g .

(6.39)

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.4. Linearisation with state transformation and input transformation (so called linearisation with transformation
and feedback) 67

Obliczajac
˛ poszczególne nawiasy Liego mamy:
 
6x3
adf g =  3 cos x1 − 1  (6.40)
−2x3
oraz
2 + 1 − 3 cos x
 
−6x 3 1
ad2f g =  −3 x2 − 3x23 sin x1 − 6x3 cos x1 − 2x2 (3 cos x1 − 1) + 2x3  .

(6.41)
2x23
Sprawdzamy inwolutywność dystrybucji ∆1 .
Obliczajac
˛ nawiasy Liego
   
6 0
adg adf g = [g, adf g] =  9 sin x1  , ad2g adf g =  −27 sin x1  , (6.42)
−2 0
 
0
ad3g adf g =  −81 cos x1  , . . . (6.43)
0
wykazujemy, że dla k = 1,2, . . . adkg adf g ∈ ∆1 , a zatem dowodzimy inwolutywności dystrybucji ∆1 .
Nast˛epnie sprawdzamy wymiar dystrybucji ∆2 .
Należy sprawdzić rz˛edu macierzy

C = g, adf g, ad2f g .
 

˛ wyznacznik macierzy C otrzymujemy


Obliczajac

det C = (3 cos x1 − 1)2 . (6.44)

Stad
˛ wynika, że dla
1
cos x1 6= (6.45)
3
zachodzi dim ∆2 = 3.
Układ spełnia zatem warunki linearyzacji w otoczeniu punktu x = 0.

Ad.2
Szukamy teraz funkcji linearyzujacej
˛ rozpatrujac
˛ nast˛epujac
˛ a˛ dystrybucj˛e

∆ = span {h1 h2 h3 } , (6.46)

gdzie h1 , g, h2 , adf g sa˛ polami wektorowymi wchodzacymi ˛ w skład dystrybucji, natomiast h3


jest dowolnie wybranym polem wektorowym liniowo niezależnym od dwóch pozostałych. Przykład-
owo przyjmujemy h3 = [1 0 0]T . Nast˛epnie określamy strumienie rozwiazań ˛ równań różniczkowych
zwyczajnych:   
Φhz31 ◦ Φhz22 ◦ Φhz13 (x0 ) = Φhz31 Φhz22 Φhz13 (x0 ) = Ψ (x0 ,z) = x. (6.47)

(Należy zwrócić uwag˛e w jakiej kolejności zostały podstawione zmienne z1 , z2 i z3 ; w tym przypadku
jako pierwsza˛ zmienna˛ podstawiamy z1 i dlatego później przyjmiemy h (x) = z1 ).
Każdy ze strumieni określamy nast˛epujaco:
˛

−3x30 e−2z2 + x10


   
z1 + x10
Φhz13 (x0 ) =  x20  , Φhz 2 (x0 ) = 
2
ψ2 (x0 ,z) ,
x30 x30 e −2z 2

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.4. Linearisation with state transformation and input transformation (so called linearisation with transformation
and feedback) 68

 
−3z3 + x10
Φhz31 (x0 ) =  x20 ,
z3 + x30
równania różniczkowego ẋ2 = 3 cos −3x30 e−t + x20 .

gdzie ψ2 (x0 ,z) jest całka˛ rozwiazania
˛
Wypadkowy strumień rozwiazania
˛ można przedstawić w postaci nast˛epujacej:
˛

z1 − 3z3 − 3x30 e−2z2 + x10


 

Ψ (x0 ,z) =  ψ2 (x0 ,z) . (6.48)


z3 + x30 e −2z 2

W celu znalezienia rozwiazania


˛ szczególnego podstawmy warunek poczatkowy
˛ x0 = 0. Stad
˛
otrzymujemy:    
z1 − 3z3 x1
Ψ (z) =  ψ2 (0,z)  =  x2  . (6.49)
z3 x3
˛ z1 na podstawie zależności (6.49) znajdujemy:
Obliczajac

z1 = x1 + 3x3 . (6.50)

Zgodnie z teoria˛ Frobeniusa funkcja h (x) = z1 posiada taka˛ właściwość, że jej gradient ∇h jest ani-
hilatorem dystrybucji ∆. Można wykazać, że w rozważanym przypadku ∇h = ∂h ∂x = [1 0 3] prawdziwa
jest zależność:
∇h · hi = 0, (6.51)
gdzie i ∈ (1,n − 1) ⇒ i = {1,2}.
Inna˛ metoda˛ określenia funkcji wyjścia jest wykorzystanie właściwości, które musi spełniać jej gradi-
ent. W analizowanym przypadku mamy:

Lg h = 0, Lg Lf h = 0, Lg L2f h 6= 0

lub alternatywnie:
Lg h = 0, Ladf g h = 0, Lad2 g h 6= 0. (6.52)
f

Na tej podstawie otrzymujemy:


   
h i −3 h i 6x3
∂h ∂h ∂h  0  = 0, ∂h ∂h ∂h  3 cos x1 − 1  = 0
∂x1 ∂x2 ∂x3 ∂x1 ∂x2 ∂x3 (6.53)
1 −2x3
oraz
−6x23 + 1 − 3 cos x1
 
h i
∂h ∂h ∂h 2

 −3 x2 − 3x3 sin x1 − 6x3 cos x1 − 2x2 (3 cos x1 − 1) + 2x3  6= 0. (6.54)
∂x1 ∂x2 ∂x3
2x23

Można pokazać, że przykładowym anihilatorem pól wektorowych g i adf g jest nast˛epujacy
˛ kowektor

∂h
= [c 0 3c] , (6.55)
∂x
gdzie c ∈ R jest dowolna˛ stała.˛ Jednocześnie kowektor ten spełnia zależność (6.54) dla 1 − 3 cos x1 6= 0.
Na podstawie (6.55) znajdujemy funkcj˛e skalarna˛ h (x):

h (x) = cx1 + 3cx3 . (6.56)

Łatwo wykazać, że funkcja (6.56) jest szczególnym prrzypadkiem funkcji określonej równaniem (6.50).

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.4. Linearisation with state transformation and input transformation (so called linearisation with transformation
and feedback) 69

Po określeniu funkcji wyjścia h znajdujemy transformat˛e współrz˛ednych według zależności:


 
h
z = T (x) =  Lf h  . (6.57)
L2f h
Na tej podstawie otrzymujemy:
 
x1 + 3x3
z = T (x) =  x2 . (6.58)
2
sin x1 + x2 + x3
W nowym układzie współrz˛ednych równanie układu jest liniowe:
   
0 1 0 0
ż = 0 0 1 z + 0  v,
   (6.59)
0 0 0 1
gdzie
v = Lg L2f h · u + L3f h (6.60)
jest nowym wejściem sterowania. W rozważanym przypadku mamy
Lg L2f h = 1 − 3 cos x1 , L3f h = x2 − 3x23 cos x1 + 2x22 sin x1 + x22 + x3 + x23 .
 

Ad.3
Zgodnie z treścia˛ zadania przyjmujemy, że trajektoria odniesienia określona dla wyjścia odsprz˛ega-
jacego
˛ ma postać
zr1 = r (t) . (6.61)
Po uwzgl˛ednieniu modelu układu liniowego (6.59) możemy zdefiniować:
zr2 = ṙ (t) , zr3 = r̈ (t) .
Definiujac
˛ bład
˛ sterowania
ez , z − zr (6.62)
i różniczkujac
˛ go po czasie otrzymujemy liniowe równanie różniczkowe
ėz = Aez + b (v − vr ) , (6.63)
gdzie    
0 1 0 0
A =  0 0 1 , b =  0  (6.64)
0 0 0 1
oraz vr = żr3 = r̈ (t) . Nast˛epnie projektujemy sterowanie liniowe w postaci
v = Kez + vr , (6.65)
gdzie K = [k1 k2 k3 ] jest macierza˛ wzmocnień wybrana˛ tak, aby macierz liniowego układu zamkni˛etego
A + bK była macierza˛ Hurwitza. Bezpośrednio na wejście układu sterowania nieliniowego podajemy
sygnał w postaci:
1 1 1 1
u=− 2 L3f h + 2 ·v =− 2 L3f h + · (Kez + vr ) . (6.66)
Lg Lf h Lg Lf h Lg Lf h Lg L2f h
Ćwiczenie 36. The system is defined as:
 2
x1 + 2x22 + 4x1 x2 + x2 − 2(1 + x22 )u

ẋ = . (6.67)
x22 + (1 + x22 )u
1. Check the conditions for linearisation of the system with state transformation and input transform-
ation.
2. Define linearising input function.

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.4. Linearisation with state transformation and input transformation (so called linearisation with transformation
and feedback) 70

Rozwiazanie
˛
Ad.1
Dryf i pole wektorowe zwiazane
˛ z wejściem określamy nast˛epujaco:
˛
 2
x1 + 2x22 + 4x1 x2 + x2 −2(1 + x22 )
  
f (x) = , g (x) = . (6.68)
x22 1 + x22
Sprawdzamy warunki linearyzowalności za pomoca˛ transformacji stanu i sprz˛eżenia zwrotnego. Dla
rozpatrywanego układu stan opisany jest w przestrzeni R2 , tj. n = dim x = 2. Należy sprawdzić czy
dystrybucja 4n−2 jest inwolutywna oraz czy 4n−1 ma wymiar n. Określamy dwie dystrybucje:

∆n−2 = ∆0 = span {g} (6.69)

oraz
∆n−1 = ∆1 = span {g, adf g} . (6.70)
Obliczajac
˛ poszczególne nawiasy Liego mamy:
  2
x1 + 2x22 + 4x1 x2 + x2
 
0 −4x2
adf g = · +
0 2x2 x22

−2(1 + x22 ) 4x2 − x22 − 1


     
2x1 + 4x2 4x2 + 4x1 + 1
− · = (6.71)
0 2x2 1 + x22 −2x2
Sprawdzamy inwolutywność dystrybucji ∆0 .
Dystrybucja ∆0 jest inwolutywna, gdyż rozpi˛eta jest na jednym polu wektorowym.
Nast˛epnie sprawdzamy wymiar dystrybucji ∆1 określajac ˛ warunek rz˛edu macierzy C = [g, adf g].
˛ wyznacznik macierzy C otrzymujemy
Obliczajac

−2(1 + x22 ) 4x2 − x22 − 1


 
det C = det = 1 + x22 + x42 . (6.72)
1 + x22 −2x2
˛ wynika, że det C > 0 ⇒ dim ∆1 = 2.
Stad
Układ spełnia zatem warunki linearyzacji w otoczeniu punktu x = 0.

Ad.2
Szukamy teraz funkcji linearyzujacej
˛ rozpatrujac
˛ nast˛epujac
˛ a˛ dystrybucj˛e

∆ = span {h1 h2 } , (6.73)

gdzie h1 , g jest polem wektorowym wchodzacym ˛ w skład dystrybucji, natomiast h2 jest dowolnie
wybranym polem wektorowym liniowo niezależnym od h1 . Przykładowo przyjmujemy h2 = [1 0]T .
Nast˛epnie określamy strumienie rozwiazań
˛ równań różniczkowych zwyczajnych:
 
Φhz21 ◦ Φhz12 (x0 ) = Φhz21 Φhz12 (x0 ) = Ψ (x0 ,z) = x. (6.74)

(Należy zwrócić uwag˛e w jakiej kolejności zostały podstawione zmienne z1 , z2 i z3 ; w tym przypadku
jako pierwsza˛ zmienna˛ podstawiamy z1 i dlatego później przyjmiemy h (x) = z1 ).
Obliczamy Φhz12 :
     
1 h2 t + x10 h2 z1 + x10
ẋ = , Φt = ⇒ Φz1 = .
0 x20 x20

Obliczamy Φhz21 :

−2(1 + x22 )
 
ẋ = ⇒ ẋ1 = −2(1 + x22 ), ẋ2 = 1 + x22 .
1 + x22

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.4. Linearisation with state transformation and input transformation (so called linearisation with transformation
and feedback) 71

ẋ2 = 1 + x22 ,
dx2
= 1 + x22 ,
dt
dx2
= dt,
1 + x22
arctan x2 = t + C,
dla t = 0 ⇒ C = arctan x20 , a wi˛ec

x2 = tan (t + C) = tan (t + arctan x20 ) ,

ẋ1 = −2(1 + x22 ) = −2 − 2 tan2 (t + arctan x20 ) ,


tan2 (t + arctan x20 ) dt = tan (t + arctan x20 ) − t + C1 , wi˛ec
R
jako, że
Z
−2 − 2 tan2 (t + arctan x20 ) dt = −2t + C2 − 2 (tan (t + arctan x20 ) − t + C1 ) =

x1 =

= −2 tan (t + arctan x20 ) + C2 − 2C1 = −2 tan (t + arctan x20 ) + C3 ,


dla t = 0 ⇒ C3 = x10 + 2 tan (arctan x20 ) = x10 + 2x20 , a wi˛ec

x1 = −2 tan (t + arctan x20 ) + x10 + 2x20 ,

wówczas
   
h1 −2 tan (t + arctan x20 ) + x10 + 2x20 h1 −2 tan (z2 + arctan x20 ) + x10 + 2x20
Φt = ⇒ Φ z2 = .
tan (t + arctan x20 ) tan (z2 + arctan x20 )
   
x10 0
˛ x0 =
Przyjmujac = uzyskujemy:
x20 0
 
h2 z1
Φz1 (x0 ) = ,
0
˛ Φhz12 (x0 ) jako argument dla Φhz21 uzyskujemy:
a wykorzystujac
   −2 tan z + z   x 
h1 h2 2 1 1
Ψ (x0 ,z) = Φz2 Φz1 (x0 ) = = . (6.75)
tan z2 x2

˛ z1 na podstawie zależności (6.75) znajdujemy:


Obliczajac

z1 = x1 + 2x2 . (6.76)

Zgodnie z teoria˛ Frobeniusa funkcja h (x) = z1 posiada taka˛ właściwość, że jej gradient ∇h jest anihil-
atorem dystrybucji ∆. Można wykazać, że w rozważanym przypadku ∇h = ∂h ∂x = [1 2] prawdziwa jest
zależność:
∇h · hi = 0, (6.77)
gdzie i ∈ (1,n − 1) ⇒ i = {1}.
Inna˛ metoda˛ określenia funkcji wyjścia jest wykorzystanie właściwości, które musi spełniać jej gradi-
ent. W analizowanym przypadku mamy:

Lg h = 0, Lg Lf h 6= 0

lub alternatywnie:
Lg h = 0, Ladf g h 6= 0. (6.78)

B. Krysiak, D. Pazderski Układy nieliniowe na studiach magisterskich - zadania


6.4. Linearisation with state transformation and input transformation (so called linearisation with transformation
and feedback) 72

Na tej podstawie otrzymujemy:


h i  −2(1 + x2 ) 
∂h ∂h 2 =0 (6.79)
∂x1 ∂x2 1 + x22
oraz h i  4x − x2 − 1 
∂h ∂h 2 2
∂x1 ∂x2 6= 0. (6.80)
−2x2
Można pokazać, że przykładowym anihilatorem pola wektorowego g jest nast˛epujacy
˛ kowektor

∂h
= [c 2c] , (6.81)
∂x
gdzie c ∈ R jest dowolna˛ stała.˛ Jednocześnie kowektor ten spełnia zależność (6.80) dla −1 − x22 6= 0.
Na podstawie (6.81) znajdujemy funkcj˛e skalarna˛ h (x):

h (x) = cx1 + 2cx2 . (6.82)

Łatwo wykazać, że funkcja (6.82) jest szczególnym prrzypadkiem funkcji określonej równaniem (6.76).
Po określeniu funkcji wyjścia h znajdujemy transformat˛e współrz˛ednych według zależności:
 
h
z = T (x) = . (6.83)
Lf h

Przyjmujemy c = 1 i uzyskujemy:

 x21 + 2x22 + 4x1 x2 + x2


 
= x21 +2x22 +4x1 x2 +x2 +2x22 = x21 +4x22 +4x1 x2 +x2 ,

Lf h = 1 2
x22
 
x1 + 2x2
z = T (x) = . (6.84)
x1 + 4x22 + 4x1 x2 + x2
2

The system (6.67) in the new coordinates takes the form:

ż1 = (∂z1/∂x) ẋ = [1  2] · [x1² + 2x2² + 4x1x2 + x2 − 2(1 + x2²)u; x2² + (1 + x2²)u] = x1² + 4x2² + 4x1x2 + x2 = z2,

ż2 = (∂z2/∂x) ẋ = [2x1 + 4x2   8x2 + 4x1 + 1] · [x1² + 2x2² + 4x1x2 + x2 − 2(1 + x2²)u; x2² + (1 + x2²)u] =
   = (2x1 + 4x2)(x1² + 2x2² + 4x1x2 + x2 − 2(1 + x2²)u) + (8x2 + 4x1 + 1)(x2² + (1 + x2²)u) =
   = 2x1³ + 24x1x2² + 12x1²x2 + 2x1x2 + 16x2³ + 5x2² + u + ux2².


In the new coordinate system the system equation is linear:

ż = [0, 1; 0, 0] z + [0; 1] v, (6.85)

where

v = Lg Lf h · u + L²f h (6.86)

is the new control input. In the considered case we have

Lg Lf h = Lg (x1² + 4x2² + 4x1x2 + x2) = 1 + x2²,

L²f h = Lf Lf h = Lf (x1² + 4x2² + 4x1x2 + x2) = [2x1 + 4x2   8x2 + 4x1 + 1] · [x1² + 2x2² + 4x1x2 + x2; x2²] =
      = (2x1 + 4x2)(x1² + 2x2² + 4x1x2 + x2) + (8x2 + 4x1 + 1)x2².

We substitute the computed elements into formula (6.86):

v = (1 + x2²)u + (2x1 + 4x2)(x1² + 2x2² + 4x1x2 + x2) + (8x2 + 4x1 + 1)x2² =
  = 2x1³ + 24x1x2² + 12x1²x2 + 2x1x2 + 16x2³ + 5x2² + u + ux2².

The v determined in this way equals ż2, which confirms the correctness of the system description in the form (6.85).
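In practice the linearising feedback is applied the other way round: the physical input is recovered as u = (v − L²f h)/(Lg Lf h), and v is chosen for the linear system (6.85). A minimal SymPy sketch of this step for system (6.67) is given below; the gains k1, k2 of the outer linear loop are hypothetical and only illustrate the idea.

import sympy as sp

x1, x2, v = sp.symbols('x1 x2 v')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1**2 + 2*x2**2 + 4*x1*x2 + x2, x2**2])
g = sp.Matrix([-2*(1 + x2**2), 1 + x2**2])
h = x1 + 2*x2

Lf_h   = (sp.Matrix([h]).jacobian(x) * f)[0]
LgLf_h = (sp.Matrix([Lf_h]).jacobian(x) * g)[0]     # 1 + x2**2, nonzero everywhere
Lf2_h  = (sp.Matrix([Lf_h]).jacobian(x) * f)[0]

u = (v - Lf2_h) / LgLf_h                            # linearising input
z = sp.Matrix([h, Lf_h])                            # transformed state, cf. (6.84)

# hypothetical outer loop: v = -k1*z1 - k2*z2 stabilises the double integrator (6.85)
k1, k2 = sp.symbols('k1 k2', positive=True)
print(sp.simplify(u.subs(v, -k1*z[0] - k2*z[1])))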

Exercise 37. The system is defined as:

ẋ = [−sin x1 + 2u; x1 + x2²]. (6.87)

1. Check the conditions for linearisation of the system with state transformation and input transformation.

2. Define linearising input function.

Solution
Ad.1
We determine the drift and the vector field associated with the input as follows:

f (x) = [−sin x1; x1 + x2²],   g (x) = [2; 0]. (6.88)

We check the linearisability conditions by means of a state transformation and feedback. For the considered system the state evolves in R², i.e. n = dim x = 2. It has to be checked whether the distribution ∆n−2 is involutive and whether ∆n−1 has dimension n. We define the two distributions:

∆n−2 = ∆0 = span {g} = span {[2; 0]} (6.89)

and

∆n−1 = ∆1 = span {g, ad_f g} = span {[2; 0], [2 cos x1; −2]}. (6.90)

We check the involutivity of the distribution ∆0.
The distribution ∆0 is involutive, since it is spanned by a single vector field.
Next, we check the dimension of the distribution ∆1 by examining the rank of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [2, 2 cos x1; 0, −2] = −4. (6.91)

Hence det C ≠ 0 ⇒ dim ∆1 = 2.
The system therefore satisfies the linearisation conditions in a neighbourhood of the point x = 0.
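The rank test above can also be run symbolically; the following is a short sketch assuming SymPy is available.

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([-sp.sin(x1), x1 + x2**2])
g = sp.Matrix([2, 0])

ad_f_g = g.jacobian(x) * f - f.jacobian(x) * g    # Lie bracket [f, g]
C = sp.Matrix.hstack(g, ad_f_g)
print(ad_f_g.T)                                   # expected: [2*cos(x1), -2]
print(sp.simplify(C.det()))                       # expected: -4, nonzero, so dim Δ1 = 2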


Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0

or, alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0. (6.92)

On this basis we obtain:

[∂h/∂x1  ∂h/∂x2] · [2; 0] = 0 (6.93)

and

[∂h/∂x1  ∂h/∂x2] · [2 cos x1; −2] ≠ 0. (6.94)

It can be shown that an example annihilator of the vector field g is the covector

∂h/∂x = [0  c], (6.95)

where c ∈ R is an arbitrary constant. At the same time, this covector satisfies relation (6.94) for c ≠ 0. Based on (6.95) we find the scalar function h (x):

h (x) = cx2. (6.96)

Having determined the output function h, we find the coordinate transformation according to:

z = T (x) = [h; Lf h]. (6.97)

We take c = 1 and obtain:

z = T (x) = [x2; x1 + x2²]. (6.98)

In the new coordinate system the system equation is linear:

ż = [0, 1; 0, 0] z + [0; 1] v, (6.99)

where

v = Lg Lf h · u + L²f h = 2u − sin x1 + 2x2 (x1 + x2²). (6.100)
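The Lie-derivative computations behind (6.98) and (6.100) can be reproduced symbolically; a minimal sketch, assuming SymPy is available, follows.

import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([-sp.sin(x1), x1 + x2**2])
g = sp.Matrix([2, 0])
h = x2

Lf_h   = (sp.Matrix([h]).jacobian(x) * f)[0]        # x1 + x2**2
LgLf_h = (sp.Matrix([Lf_h]).jacobian(x) * g)[0]     # 2
Lf2_h  = (sp.Matrix([Lf_h]).jacobian(x) * f)[0]     # -sin(x1) + 2*x2*(x1 + x2**2)
print(sp.Matrix([h, Lf_h]).T)                       # transformation (6.98)
print(sp.expand(LgLf_h*u + Lf2_h))                  # new input v, cf. (6.100)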

Exercise 38. The system is defined as:

ẋ = [x1² + x2; cos x1 + x2 + 3u]. (6.101)

1. Check the conditions for linearisation of the system with state transformation and input transformation.

2. Define linearising input function.


Solution
Ad.1
We determine the drift and the vector field associated with the input as follows:

f (x) = [x1² + x2; cos x1 + x2],   g (x) = [0; 3]. (6.102)

We check the linearisability conditions by means of a state transformation and feedback. For the considered system the state evolves in R², i.e. n = dim x = 2. It has to be checked whether the distribution ∆n−2 is involutive and whether ∆n−1 has dimension n. We define the two distributions:

∆n−2 = ∆0 = span {g} = span {[0; 3]} (6.103)

and

∆n−1 = ∆1 = span {g, ad_f g} = span {[0; 3], [−3; −3]}. (6.104)

We check the involutivity of the distribution ∆0.
The distribution ∆0 is involutive, since it is spanned by a single vector field.
Next, we check the dimension of the distribution ∆1 by examining the rank of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [0, −3; 3, −3] = 9. (6.105)

Hence det C ≠ 0 ⇒ dim ∆1 = 2.
The system therefore satisfies the linearisation conditions in a neighbourhood of the point x = 0.

Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0

or, alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0. (6.106)

On this basis we obtain:

[∂h/∂x1  ∂h/∂x2] · [0; 3] = 0 (6.107)

and

[∂h/∂x1  ∂h/∂x2] · [−3; −3] ≠ 0. (6.108)

It can be shown that an example annihilator of the vector field g is the covector

∂h/∂x = [c  0], (6.109)

where c ∈ R is an arbitrary constant. At the same time, this covector satisfies relation (6.108) for c ≠ 0. Based on (6.109) we find the scalar function h (x):

h (x) = cx1 . (6.110)

Having determined the output function h, we find the coordinate transformation according to:

z = T (x) = [h; Lf h]. (6.111)


We take c = 1 and obtain:

z = T (x) = [x1; x1² + x2]. (6.112)

In the new coordinate system the system equation is linear:

ż = [0, 1; 0, 0] z + [0; 1] v, (6.113)

where

v = Lg Lf h · u + L²f h = 3u + 2x1 (x1² + x2) + cos x1 + x2. (6.114)
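The input transformation (6.114) is typically used together with a simple outer controller for (6.113). The sketch below, assuming NumPy and SciPy are available and using hypothetical gains k1, k2, simulates the resulting closed loop; the state should converge to the origin.

import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 4.0, 4.0                                   # hypothetical outer-loop gains

def closed_loop(t, state):
    x1, x2 = state
    z1, z2 = x1, x1**2 + x2                         # transformation (6.112)
    v = -k1*z1 - k2*z2                              # stabilises the double integrator (6.113)
    u = (v - 2*x1*(x1**2 + x2) - np.cos(x1) - x2) / 3.0   # inverted relation (6.114)
    return [x1**2 + x2, np.cos(x1) + x2 + 3*u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [0.5, -0.2], max_step=0.01)
print(sol.y[:, -1])                                 # expected to be close to [0, 0]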

Exercise 39. The system is defined as:

ẋ = [sin x1 + 2u; sin x1 + 2x1 + x2 + 2u]. (6.115)

1. Check the conditions for linearisation of the system with state transformation and input transformation.

2. Define diffeomorphism T (x) which defines linearising output z = T (x).

3. Define control signal v = f (x,u) for the linearised system.

Solution
Ad.1
We determine the drift and the vector field associated with the input as follows:

f (x) = [sin x1; sin x1 + 2x1 + x2],   g (x) = [2; 2]. (6.116)

We check the linearisability conditions by means of a state transformation and feedback. For the considered system the state evolves in R², i.e. n = dim x = 2. It has to be checked whether the distribution ∆n−2 is involutive and whether ∆n−1 has dimension n. We define the two distributions:

∆n−2 = ∆0 = span {g} = span {[2; 2]} (6.117)

and

∆n−1 = ∆1 = span {g, ad_f g} = span {[2; 2], [−2 cos x1; −2 cos x1 − 6]}. (6.118)

We check the involutivity of the distribution ∆0.
The distribution ∆0 is involutive, since it is spanned by a single vector field.
Next, we check the dimension of the distribution ∆1 by examining the rank of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [2, −2 cos x1; 2, −2 cos x1 − 6] = −12. (6.119)

Hence det C ≠ 0 ⇒ dim ∆1 = 2.
The system therefore satisfies the linearisation conditions.


Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0

or, alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0. (6.120)

On this basis we obtain:

Lg h = [∂h/∂x1  ∂h/∂x2] · [2; 2] = 0 (6.121)

and

L_{ad_f g} h = [∂h/∂x1  ∂h/∂x2] · [−2 cos x1; −2 cos x1 − 6] ≠ 0. (6.122)

It can be shown that an example annihilator of the vector field g is the covector

∂h/∂x = [c  −c], (6.123)

where c ∈ R is an arbitrary constant. At the same time, this covector satisfies relation (6.122) for c ≠ 0. Based on (6.123) we find the scalar function h (x):

h (x) = cx1 − cx2 . (6.124)

Having determined the output function h, we find the coordinate transformation according to:

z = T (x) = [h; Lf h]. (6.125)

We take c = 1 and obtain:

z = T (x) = [x1 − x2; −2x1 − x2]. (6.126)

Ad.3
In the new coordinate system the system equation is linear:

ż = [0, 1; 0, 0] z + [0; 1] v, (6.127)

where

v = Lg Lf h · u + L²f h = −6u − 3 sin x1 − 2x1 − x2. (6.128)
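A symbolic cross-check of (6.126) and (6.128), sketched under the assumption that SymPy is available:

import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([sp.sin(x1), sp.sin(x1) + 2*x1 + x2])
g = sp.Matrix([2, 2])
h = x1 - x2

Lf_h   = (sp.Matrix([h]).jacobian(x) * f)[0]        # -2*x1 - x2
LgLf_h = (sp.Matrix([Lf_h]).jacobian(x) * g)[0]     # -6
Lf2_h  = (sp.Matrix([Lf_h]).jacobian(x) * f)[0]     # -3*sin(x1) - 2*x1 - x2
print(sp.Matrix([h, Lf_h]).T)                       # transformation (6.126)
print(sp.simplify(LgLf_h*u + Lf2_h))                # new input v, cf. (6.128)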

Exercise 40. The system is defined as:

ẋ = [x1 + x2 + u + x1 u; sin x2 + 2u + 2x1 u]. (6.129)

1. Check the conditions for linearisation of the system with state transformation and input transformation.

2. Define diffeomorphism T (x) which defines linearising output z = T (x).

3. Define control signal v = f (x,u) for the linearised system.


Solution
Ad.1
We determine the drift and the vector field associated with the input as follows:

f (x) = [x1 + x2; sin x2],   g (x) = [1 + x1; 2 + 2x1]. (6.130)

We check the linearisability conditions by means of a state transformation and feedback. For the considered system the state evolves in R², i.e. n = dim x = 2. It has to be checked whether the distribution ∆n−2 is involutive and whether ∆n−1 has dimension n. We define the two distributions:

∆n−2 = ∆0 = span {g} = span {[1 + x1; 2 + 2x1]} (6.131)

and

∆n−1 = ∆1 = span {g, ad_f g} = span {[1 + x1; 2 + 2x1], [−2x1 + x2 − 3; 2x1 + 2x2 − cos x2 (2 + 2x1)]}. (6.132)

We check the involutivity of the distribution ∆0.
The distribution ∆0 is involutive, since it is spanned by a single vector field.
Next, we check the dimension of the distribution ∆1 by examining the rank of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [1 + x1, −2x1 + x2 − 3; 2 + 2x1, 2x1 + 2x2 − cos x2 (2 + 2x1)] = (6.133)
      = 12x1 − 2 cos x2 − 4x1 cos x2 + 6x1² − 2x1² cos x2 + 6.

For x1 = x2 = 0 we obtain det C = 4 ≠ 0 ⇒ dim ∆1 = 2.
The system therefore satisfies the linearisation conditions in a neighbourhood of the point x = 0.

Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0

or, alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0. (6.134)

On this basis we obtain:

[∂h/∂x1  ∂h/∂x2] · [1 + x1; 2 + 2x1] = 0 (6.135)

and

[∂h/∂x1  ∂h/∂x2] · [−2x1 + x2 − 3; 2x1 + 2x2 − cos x2 (2 + 2x1)] ≠ 0. (6.136)

It can be shown that an example annihilator of the vector field g is the covector

∂h/∂x = [2  −1], (6.137)

which at the same time satisfies relation (6.136) in a neighbourhood of x = 0. Based on (6.137) we find the scalar function h (x):

h (x) = 2x1 − x2. (6.138)


Having determined the output function h, we find the coordinate transformation according to:

z = T (x) = [h; Lf h], (6.139)

and we obtain:

z = T (x) = [2x1 − x2; 2x1 + 2x2 − sin x2]. (6.140)

Ad.3
In the new coordinate system the system equation is linear:

ż = [0, 1; 0, 0] z + [0; 1] v, (6.141)

where

v = Lg Lf h · u + L²f h = (6.142)
  = (2 + 2x1 + (2 − cos x2)(2 + 2x1)) u + 2x1 + 2x2 + (2 − cos x2) sin x2.
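Both the determinant (6.133) and the input transformation (6.142) are easy to mistype by hand; a SymPy sketch (library assumed available) that reproduces them:

import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1 + x2, sp.sin(x2)])
g = sp.Matrix([1 + x1, 2 + 2*x1])
h = 2*x1 - x2

ad_f_g = g.jacobian(x) * f - f.jacobian(x) * g
C = sp.Matrix.hstack(g, ad_f_g)
print(sp.expand(C.det()))                           # cf. (6.133); equals 4 at x = 0

Lf_h   = (sp.Matrix([h]).jacobian(x) * f)[0]
LgLf_h = (sp.Matrix([Lf_h]).jacobian(x) * g)[0]
Lf2_h  = (sp.Matrix([Lf_h]).jacobian(x) * f)[0]
print(sp.simplify(LgLf_h*u + Lf2_h))                # cf. (6.142)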
Exercise 41. The system is defined as:

ẋ = [x1 + x2 + 1; −sin x1 + 2u]. (6.143)
1. Check the conditions for linearisation of the system with state transformation and input transformation.
2. Define diffeomorphism T (x) which defines linearising output z = T (x).
3. Define control signal v = f (x,u) for the linearised system.

Solution
Ad.1
We determine the drift and the vector field associated with the input as follows:

f (x) = [x1 + x2 + 1; −sin x1],   g (x) = [0; 2]. (6.144)
We check the linearisability conditions by means of a state transformation and feedback. For the considered system the state evolves in R², i.e. n = dim x = 2. It has to be checked whether the distribution ∆n−2 is involutive and whether ∆n−1 has dimension n. We define the two distributions:

∆n−2 = ∆0 = span {g} = span {[0; 2]} (6.145)

and

∆n−1 = ∆1 = span {g, ad_f g} = span {[0; 2], [−2; 0]}. (6.146)

We check the involutivity of the distribution ∆0.
The distribution ∆0 is involutive, since it is spanned by a single vector field.
Next, we check the dimension of the distribution ∆1 by examining the rank of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [0, −2; 2, 0] = 4. (6.147)

Hence det C ≠ 0 ⇒ dim ∆1 = 2.
The system therefore satisfies the linearisation conditions in a neighbourhood of the point x = 0.


Ad.2
We determine the output function using the properties that its gradient must satisfy. In the analysed case we have:

Lg h = 0,   Lg Lf h ≠ 0

or, alternatively:

Lg h = 0,   L_{ad_f g} h ≠ 0. (6.148)

On this basis we obtain:

[∂h/∂x1  ∂h/∂x2] · [0; 2] = 0 (6.149)

and

[∂h/∂x1  ∂h/∂x2] · [−2; 0] ≠ 0. (6.150)

It can be shown that an example annihilator of the vector field g is the covector

∂h/∂x = [c  0], (6.151)

where c ∈ R is an arbitrary constant. At the same time, this covector satisfies relation (6.150) for c ≠ 0. Based on (6.151) we find the scalar function h (x):

h (x) = cx1 . (6.152)

Having determined the output function h, we find the coordinate transformation according to:

z = T (x) = [h; Lf h]. (6.153)

We take c = 1 and obtain:

z = T (x) = [x1; x1 + x2 + 1]. (6.154)

Ad.3
In the new coordinate system the system equation is linear:

ż = [0, 1; 0, 0] z + [0; 1] v, (6.155)

where

v = Lg Lf h · u + L²f h = 2u + x1 + x2 + 1 − sin x1. (6.156)
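Beyond recomputing v, it is worth confirming that T(x) in (6.154) is a valid change of coordinates. A SymPy sketch (library assumed available) checking that its Jacobian is nonsingular and reproducing (6.156):

import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1 + x2 + 1, -sp.sin(x1)])
g = sp.Matrix([0, 2])
h = x1

Lf_h = (sp.Matrix([h]).jacobian(x) * f)[0]
T = sp.Matrix([h, Lf_h])
print(T.jacobian(x).det())                          # 1, so T is nonsingular everywhere

LgLf_h = (sp.Matrix([Lf_h]).jacobian(x) * g)[0]     # 2
Lf2_h  = (sp.Matrix([Lf_h]).jacobian(x) * f)[0]
print(sp.simplify(LgLf_h*u + Lf2_h))                # 2*u + x1 + x2 + 1 - sin(x1)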

Exercise 42. The one-input system is given:

ẋ = [x1³ + x2; x2³ + u]. (6.157)

1. Check the conditions for system’s linearisation with use of state transformation and feedback.

2. Calculate the linearising output h(x) (any method).

3. Define the control signal v = f (x,u) for the linearised system.


Solution
Ad.1
We determine the drift and the vector field associated with the input as follows:

f (x) = [x1³ + x2; x2³],   g (x) = [0; 1]. (6.158)

We check the linearisability conditions by means of a state transformation and feedback. For the considered system the state evolves in R², i.e. n = dim x = 2. It has to be checked whether the distribution ∆n−2 is involutive and whether ∆n−1 has dimension n. We define the two distributions:

∆n−2 = ∆0 = span {g} = span {[0; 1]} (6.159)

and

∆n−1 = ∆1 = span {g, ad_f g} = span {[0; 1], [−1; −3x2²]}. (6.160)

We check the involutivity of the distribution ∆0.
The distribution ∆0 is involutive, since it is spanned by a single vector field.
Next, we check the dimension of the distribution ∆1 by examining the rank of the matrix C = [g, ad_f g]. Computing the determinant of C we obtain

det C = det [0, −1; 1, −3x2²] = 1. (6.161)

Hence det C ≠ 0 ⇒ dim ∆1 = 2. The system therefore satisfies the linearisation conditions.

Ad.2
We now look for the linearising function by considering the following distribution

∆ = span {h1, h2}, (6.162)

where h1 = g and h2 = ad_f g are the vector fields that make up the distribution. Next, we determine the flows of the solutions of the ordinary differential equations:

Φ^{h1}_{z1} ∘ Φ^{h2}_{z2} (x0) = Φ^{h1}_{z1} (Φ^{h2}_{z2} (x0)) = Ψ (x0, z) = x. (6.163)

(Note the order in which the variables z1 and z2 are substituted; here the variable that is propagated through all the fields is z2, and therefore we will later take h (x) = z2.)
We compute the individual flows. For h1 we have:

ẋ = h1 = [0; 1]   ⇒   Φ^{h1}_t (x0) = [x10; t + x20],

and for h2 we have:

ẋ = h2 = [−1; −3x2²],

where

ẋ1 = −1 ⇒ dx1/dt = −1,

∫ dx1 = −∫ dt ⇒ x1 = −t + C,   C = x10,


x1 = −t + x10,

and

ẋ2 = −3x2² ⇒ dx2/dt = −3x2²,

−(1/3) ∫ x2⁻² dx2 = ∫ dt ⇒ (1/3) x2⁻¹ + C = t,

C = −(1/3) x20⁻¹,

(1/3) x2⁻¹ − (1/3) x20⁻¹ = t ⇒ x2 = 1/(3t + x20⁻¹).

On this basis we obtain:

Φ^{h2}_t (x0) = [−t + x10; 1/(3t + x20⁻¹)].

Replacing the variable t by the variable z we obtain:

Φ^{h1}_{z1} (x0) = [x10; z1 + x20],

Φ^{h2}_{z2} (x0) = [−z2 + x10; 1/(3z2 + x20⁻¹)].

The resulting solution flow can be written as:

Ψ (x0, z) = Φ^{h1}_{z1} (Φ^{h2}_{z2} (x0)) = [−z2 + x10; z1 + 1/(3z2 + x20⁻¹)]. (6.164)

In order to find a particular solution, let us substitute the initial condition x0 = [0 1]^T. Hence we obtain:

Ψ (z) = [−z2; z1 + 1/(3z2 + 1)] = [x1; x2]. (6.165)

Computing z from relation (6.165):

z2 = −x1,
z1 = x2 − 1/(3z2 + 1) = x2 − 1/(1 − 3x1).

According to the Frobenius theorem, the function h (x) = z2 has the property that its gradient ∇h is an annihilator of the distribution ∆. One can show that in the considered case, with ∇h = ∂h/∂x = [−1 0], the relation

∇h · hi = 0, (6.166)

holds, where i ∈ (1, n − 1) ⇒ i = {1}.
However, z1 and z2 are not the state variables of the linearised system; according to the theory, these must be computed from:

z = T (x) = [h; Lf h]. (6.167)

Thus, after the state transformation, the system coordinates are:

z = T (x) = [−x1; −x1³ − x2]. (6.168)

Ad.3
Computation of v = f (x, u) ...
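The text leaves this computation to the reader. A sketch of the missing step, analogous to the previous exercises and assuming SymPy is available, is given below; it uses the linearising output h(x) = −x1 found above.

import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x1**3 + x2, x2**3])
g = sp.Matrix([0, 1])
h = -x1                                             # linearising output from (6.168)

Lf_h   = (sp.Matrix([h]).jacobian(x) * f)[0]        # -x1**3 - x2
LgLf_h = (sp.Matrix([Lf_h]).jacobian(x) * g)[0]     # -1
Lf2_h  = (sp.Matrix([Lf_h]).jacobian(x) * f)[0]     # -3*x1**2*(x1**3 + x2) - x2**3
print(sp.expand(LgLf_h*u + Lf2_h))                  # v = Lg Lf h * u + Lf^2 h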

Appendix A
