
Topic 02: Linear algebra basics

Advanced System Theory


Winter semester 2021/22

Dr.-Ing. Oliver Wallscheid


Automatic Control
Faculty of Electrical Engineering, Computer Science, and Mathematics



Agenda

I Linear independence and linear transformations


I Eigenvalues and eigenvectors
I Matrix inverse
I Selected matrix shapes and their characteristics

Learning objectives
Securing the basic knowledge of linear algebra which we need in the following lectures to
carry out typical system-theoretic investigations (such as checking the stability of a system or
calculating the system response in the time domain).



Course outline

No. Topic
1 Introduction (admin, system classes, motivating example)
2 Linear algebra basics
3 Response of linear systems incl. discrete-time representations
4 Laplace and z-transforms (complex frequency domain)
5 Frequency response
6 Stability
7 Controllability and observability
8 State transformation and realizations
9 State feedback and state observers



Linear independence
Notation:
  
Vectors: x = [x1 · · · xn]^T    Matrices: A = [a11 · · · a1n ; . . . ; am1 · · · amn]    Elements: (A)ij = aij

Definition 2.1: Linear independence


The set of vectors v1 , ..., vn is called linearly dependent if there exist scalars α1 , ..., αn , not
all zero, such that
α1 v1 + · · · + αn vn = 0. (2.1)
A set is linearly independent if this relationship only holds for α1 = · · · = αn = 0.

If the vectors are functions {fi (x)} defined on an interval I ⊂ R, then they are linearly
independent if α1 f1 (x) + · · · + αn fn (x) = 0 for all x ∈ I implies α1 = · · · = αn = 0.
Basis (1)

We write span(v1 , ..., vn ) to denote the linear subspace generated (spanned) by v1 , ..., vn .

Definition 2.2: Basis


A set of vectors v1 , ..., vn is called a basis for the vector space V if they are linearly independent
and span(v1 , ..., vn ) = V . The number n is called the dimension of the vector space V :
dim V = n.

This means we can write any vector v ∈ V as a linear combination of basis vectors:

v = α1 v1 + · · · + αn vn . (2.2)

The numbers α1 , ..., αn are called the coordinates of v with respect to the basis v1 , ..., vn .
These numbers are unique. We may assemble them in a vector α = [α1 · · · αn ]T .
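
As a small illustration (a minimal GNU Octave / MATLAB sketch with assumed example values), the coordinates α follow from solving the linear system V α = v, where the basis vectors form the columns of V:

V = [1 1; 0 1];      % basis vectors v1, v2 as columns (assumed example values)
v = [3; 2];          % vector to be expressed in that basis
alpha = V \ v        % unique coordinates alpha = [1; 2], since V has full rank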



Basis (2)


Fig. 2.1: Illustration of the standard basis in R^2: The blue and orange vectors are the elements of the
basis; the green vector can be given in terms of the basis vectors, and so is linearly dependent upon
them (derivative of www.wikipedia.org, CC BY-SA 3.0).



Pingo time
           
 1 0 0   0 0 0 
I Which vector sets form a basis in R3 0 1 0 , 1 2 0 , . . . ?
0 0 1 0 0 3
   
I https://pingo.coactum.de/871072



Linear transformations (1)
A matrix A ∈ R^{m×n} describes a linear transformation

v → Av (2.3)

from a vector space V to a vector space W , dim V = n, dim W = m.

Definition 2.3: Range and null space


We define the null space (kernel) of A as the set

N (A) = ker(A) = {v ∈ V : Av = 0} (2.4)

and the range space (image) of A as the set

R(A) = im(A) = {w ∈ W : w = Av, for some v ∈ V }. (2.5)

We call null(A) = dim N (A) the nullity and rank(A) = dim R(A) the rank of A.



Linear transformations (2)

Fig. 2.2: Kernel and image of a linear mapping (derivative of www.wikipedia.org, CC BY-SA 4.0)



Linear transformations (3)
Using the above nomenclature one can find:

Theorem 2.1: Rank-nullity theorem

dim N (A) + dim R(A) = dim V (2.6)

Example: Consider the matrix

A = [3 1; −6 −2].

Here, we obtain rank(A) = 1, since the range of A is spanned by the single vector [1 −2]^T. The
null space of A is spanned by [1 −3]^T and, therefore, null(A) = 1. Hence, rank and nullity are
both one and sum to two, the number of columns of A.
Note: The rank of a matrix A is the largest number of linearly independent columns (or
rows) of A.
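
The numbers of this example can be reproduced numerically (a minimal Octave/MATLAB sketch):

A = [3 1; -6 -2];
r = rank(A)          % rank(A) = 1
N = null(A)          % orthonormal basis of the null space (a scalar multiple of [1; -3])
nullity = size(N, 2) % nullity = 1 = number of columns of A minus rank(A)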
Linear transformations (4)

Fig. 2.3: A visual representation of the rank-nullity theorem


(derivative of www.wikipedia.org by A. Satoru, CC BY-SA 4.0)
Pingo time
 
I What is the rank and nullity of A = [2 5 −3; 1 4 2]?
I https://pingo.coactum.de/871072



Determinant (1)

Definition 2.4: Determinant (informal)


The determinant is a scalar number associated with a square matrix and can be calculated from
its entries. It indicates how the ’volume’ changes in the linear mapping described by the matrix.

Definition for 2 × 2 matrices:


 
det(A) = det([a b; c d]) = ad − bc. (2.7)

The area of the parallelogram is the absolute value of the determinant of the matrix formed by
the vectors representing the parallelogram's sides.



Determinant (2)

The determinant for higher-order matrices can be computed using:

Theorem 2.2: Laplace expansion


The determinant of an n × n matrix A can be expressed in terms of determinants of smaller
matrices, known as its minors Mij ,

det(A) = Σ_{j=1}^{n} (−1)^{i+j} aij Mij   (for any fixed row index i)   (2.8)

where Mij is defined to be the determinant of the (n − 1) × (n − 1) matrix that results from
A by removing the i-th row and the j-th column.



Determinant (3)
Laplace expansion example: 3 × 3 matrix
 
det([a b c; d e f; g h i]) = a · det([e f; h i]) − b · det([d f; g i]) + c · det([d e; g h])
= a(ei − fh) − b(di − fg) + c(dh − eg)
= aei + bfg + cdh − ceg − bdi − afh.

The rule of Sarrus is a mnemonic for the Laplace expansion applied to 3 × 3 matrices, as
indicated in Fig. 2.4.

Fig. 2.4: Schematic diagram for the rule of Sarrus


(source www.wikipedia.org, CC BY-SA 4.0)
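
As a quick cross-check (a minimal Octave/MATLAB sketch with an assumed example matrix), the built-in determinant agrees with the Sarrus/Laplace expansion:

A = [1 2 3; 4 5 6; 7 8 10];    % assumed example matrix
d_builtin = det(A)             % -3
% rule of Sarrus: aei + bfg + cdh - ceg - bdi - afh
d_sarrus = A(1,1)*A(2,2)*A(3,3) + A(1,2)*A(2,3)*A(3,1) + A(1,3)*A(2,1)*A(3,2) ...
         - A(1,3)*A(2,2)*A(3,1) - A(1,2)*A(2,1)*A(3,3) - A(1,1)*A(2,3)*A(3,2)   % also -3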
Eigenvalues and eigenvectors (1)

Definition 2.5: Eigenvalue and eigenvector


If, for a given n × n matrix A,
Avi = λi vi
then λi is called an eigenvalue of A, and vi is the corresponding (right) eigenvector.

I An n × n matrix A has n, not necessarily distinct, eigenvalues. They are found as the
roots of the characteristic polynomial det(A − λI) = 0.
I Eigenvectors can be scaled. Sometimes it is convenient to assume that they are
normalized to have unit norm (length) ‖vi‖ = 1.

An eigenvector of a linear transformation is a nonzero vector that changes only by a scalar
factor when that linear transformation is applied to it. The corresponding eigenvalue is the
factor by which the eigenvector is scaled.
Eigenvalues and eigenvectors (2)
Example: the matrix

A = [2 1; 1 2]

has the characteristic polynomial

det(A − λI) = det([2−λ 1; 1 2−λ]) = 3 − 4λ + λ^2

leading to the eigenvalues λ1,2 = {1, 3} with the corresponding eigenvectors

v1 = [1 −1]^T ,   v2 = [1 1]^T .

Fig. 2.5: Matrix A acts by stretching the vector x, not changing its direction, so x is an eigenvector of A
(derivative of www.wikipedia.org by L. Lantonov, CC BY-SA 4.0).
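
This example can be reproduced numerically (a minimal Octave/MATLAB sketch; note that eig returns eigenvectors normalized to unit length):

A = [2 1; 1 2];
[V, D] = eig(A)      % diag(D): eigenvalues 1 and 3; columns of V: scaled versions of v1, v2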
Pingo time
 
I What are the eigenvalues of A = [1 0 3; 0 5 1; 0 0 2]?
I https://pingo.coactum.de/871072



Left eigenvectors
Definition 2.6: Left eigenvector
If, for a given n × n matrix A,
wi A = λi wi ,
the row vector wi is the corresponding left eigenvector for the eigenvalue λi .

I If we transpose A (i.e., switch its row and column indices)

A^T = [a11 · · · a1n ; . . . ; am1 · · · amn]^T = [a11 · · · am1 ; . . . ; a1n · · · amn] (2.9)

we receive
A^T wi^T = λi wi^T (2.10)
and, therefore, it follows immediately that a left eigenvector of A is the same as the
transpose of a right eigenvector of A^T , with the same eigenvalue.
Cayley-Hamilton (1)
The characteristic polynomial p(λ) = det(A − λI) of an n × n matrix A has degree n:

p(λ) = α0 + α1 λ + α2 λ^2 + · · · + αn λ^n . (2.11)

Theorem 2.3: Cayley-Hamilton


Every square matrix A satisfies its characteristic equation, that is, for p(λ) = det(A − λI), we
have
p(A) = α0 I + α1 A + α2 A^2 + · · · + αn A^n = 0. (2.12)

An immediate consequence of Cayley-Hamilton is that we never need to compute A^k for
powers k ≥ n. Therefore, for any polynomial f and suitable coefficients α0 , ..., αn−1 , we can write

f (A) = α0 I + α1 A + · · · + αn−1 A^{n−1} . (2.13)



Cayley-Hamilton (2)
Consider the following example matrix

A = [1 2; 3 4].

Its characteristic polynomial is given by

p(λ) = det(λI − A) = det([λ−1 −2; −3 λ−4]) = (λ − 1)(λ − 4) − (−2)(−3) = λ^2 − 5λ − 2.

Applying the Cayley-Hamilton theorem for n = 2 leads to

p(A) = A^2 − 5A − 2I = [0 0; 0 0].

We can verify by computation that indeed,

A^2 − 5A − 2I = [7 10; 15 22] − [5 10; 15 20] − [2 0; 0 2] = [0 0; 0 0].
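
The same check can be scripted (a minimal Octave/MATLAB sketch):

A = [1 2; 3 4];
p = poly(A)          % coefficients of det(lambda*I - A) in descending order: [1 -5 -2]
P = polyvalm(p, A)   % evaluates p(A) as a matrix polynomial: the 2x2 zero matrix (up to rounding)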
Cayley-Hamilton (3)

Continuing the example from the previous slide we can rearrange

p(A) = A^2 − 5A − 2I = 0

to
A^2 = 5A + 2I.
Hence, we have found a simple expression to calculate the square of A. Likewise, we can use
Cayley-Hamilton to calculate higher powers of A:

A^3 = A^2 A = (5A + 2I)A = 5A^2 + 2A = 5(5A + 2I) + 2A = 27A + 10I,
A^4 = A^3 A = (27A + 10I)A = 27A^2 + 10A = 27(5A + 2I) + 10A = 145A + 54I,
A^5 = · · ·



Pingo time
 
I What is A^3 considering A = [1 1; 1 3]?
I https://pingo.coactum.de/871072



Inverse of a matrix

I A square matrix A ∈ R^{n×n} is invertible if it has full rank.


I Such a matrix is called nonsingular.
I Other matrix properties which indicate an invertible square matrix:
I The columns of A are linearly independent (i.e., full rank).
I The determinant is non-zero: det(A) ≠ 0.
I All eigenvalues are non-zero: λi ≠ 0, ∀i = 1, . . . , n .

Definition 2.7: Inverse matrix


The inverse A^{-1} of an invertible matrix A ∈ R^{n×n} is defined by

A^{-1} A = AA^{-1} = I (2.14)

where I is the n × n identity matrix.



Block diagonal matrix and Gramian
I The inverse of a block diagonal matrix

A = [A1 0 · · · 0; 0 A2 · · · 0; . . . ; 0 0 · · · Ap]   is   A^{-1} = [A1^{-1} 0 · · · 0; 0 A2^{-1} · · · 0; . . . ; 0 0 · · · Ap^{-1}] .

I The determinant of this block diagonal matrix is det A = det(A1 ) · · · det(Ap ).


I Product formula for the determinant is det(AB) = det(A) det(B).
Definition 2.8: Gramian
Given some real-valued column vectors ai ∈ R^n , i = 1, . . . , m, collected in the matrix A =
[a1 a2 . . . am ], the Gramian (or Gram matrix) is defined as

G = A^T A. (2.15)

I The columns of A are linearly independent if and only if (iff) det(G) ≠ 0.


Inverse and Gramian example
Consider the simple example matrix

A = [1 3; 2 4].

The Gramian is
G = A^T A = [1 2; 3 4][1 3; 2 4] = [5 11; 11 25]
with its determinant
det(G) = 5 · 25 − 11^2 = 4 ≠ 0.
Hence, all columns of A are linearly independent and its inverse exists:

A^{-1} = [−2 3/2; 1 −1/2]   ⇒   A^{-1} A = AA^{-1} = I.
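
These numbers can be reproduced directly (a minimal Octave/MATLAB sketch):

A = [1 3; 2 4];
G = A' * A           % Gramian, [5 11; 11 25]
d = det(G)           % 4, i.e., nonzero, so the columns of A are linearly independent
A_inv = inv(A)       % [-2 1.5; 1 -0.5]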



Ways of calculating an inverse (1)

Given an invertible matrix A ∈ R^{n×n} we have a multitude of options to calculate its inverse:
I Gaussian elimination (perform row operations to reach the reduced row echelon form; see
the numerical sketch at the end of this slide):

[A|I] = [2 −1 0 | 1 0 0; −1 2 −1 | 0 1 0; 0 −1 2 | 0 0 1]   ⇐⇒   [I|A^{-1}] = [1 0 0 | 3/4 1/2 1/4; 0 1 0 | 1/2 1 1/2; 0 0 1 | 1/4 1/2 3/4].

I Cayley-Hamilton: Exploiting the characteristic polynomial,

A^{-1} = −(1/c0) (A^{n−1} + c_{n−1} A^{n−2} + . . . + c1 I)

with ci as coefficients of

p(A) = A^n + c_{n−1} A^{n−1} + . . . + c1 A + c0 I = 0,   where c0 = (−1)^n det(A).
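
The Gaussian-elimination route from the first bullet can be checked numerically (a minimal Octave/MATLAB sketch that row-reduces the augmented matrix [A|I]):

A = [2 -1 0; -1 2 -1; 0 -1 2];
R = rref([A eye(3)]);    % reduced row echelon form of [A | I]
A_inv = R(:, 4:6)        % right block: [3/4 1/2 1/4; 1/2 1 1/2; 1/4 1/2 3/4]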



Ways of calculating an inverse (2)

I Cramer’s rule / adjugate matrix: One can also calculate the inverse

A^{-1} = (1/det(A)) adj(A)

using its adjugate matrix adj(A). For low-order matrices, the adjugate matrix can be
easily computed:

adj([a b; c d]) = [d −b; −c a],

adj([a b c; d e f; g h i]) = [ei − fh, ch − bi, bf − ce; fg − di, ai − cg, cd − af; dh − eg, bg − ah, ae − bd].



Generalized adjugate matrix
The adjugate matrix is defined as

adj(A) = [ã11 ã12 · · · ã1n ; ã21 ã22 · · · ã2n ; . . . ; ãn1 ãn2 · · · ãnn]^T
       = [ã11 ã21 · · · ãn1 ; ã12 ã22 · · · ãn2 ; . . . ; ã1n ã2n · · · ãnn] (2.16)

with the so-called cofactors ãij

ãij = (−1)^{i+j} Mij = (−1)^{i+j} det([a1,1 · · · a1,j−1  a1,j+1 · · · a1,n ; . . . ;
                                       ai−1,1 · · · ai−1,j−1  ai−1,j+1 · · · ai−1,n ;
                                       ai+1,1 · · · ai+1,j−1  ai+1,j+1 · · · ai+1,n ; . . . ;
                                       an,1 · · · an,j−1  an,j+1 · · · an,n]) (2.17)

and given minors Mij .
Ways of calculating an inverse (3)

I Eigendecomposition: If A is diagonalizable, i.e., it has n linearly independent eigenvectors
vi , and none of its eigenvalues are zero, the inverse is

A^{-1} = QΛ^{-1}Q^{-1}

where Q contains the eigenvectors as columns and Λ is a diagonal matrix whose diagonal
elements are the corresponding eigenvalues. If A is real and symmetric, Q can be chosen
orthogonal, i.e., Q^T = Q^{-1}.
I Blockwise inversion: In certain cases it might be handy to invert a matrix blockwise using

[B C; D E]^{-1} = [B^{-1} + B^{-1}C(E − DB^{-1}C)^{-1}DB^{-1} ,  −B^{-1}C(E − DB^{-1}C)^{-1} ;  −(E − DB^{-1}C)^{-1}DB^{-1} ,  (E − DB^{-1}C)^{-1}] .

Here, B, C, D and E are matrix sub-blocks of A. Moreover, B must be square and invertible,
and (E − DB^{-1}C) must also be invertible. Very useful if sub-blocks are zero.



Ways of calculating an inverse (4)

I Numerical approximation:
I Newton’s method: Assuming one has an informed guess A_i^{-1}, i.e., the approximate
inverse at iteration step i, we can apply (see the sketch below):

A_{i+1}^{-1} = 2A_i^{-1} − A_i^{-1} A A_i^{-1} .

I Neumann series: If there exists a scaling factor γ > 0 leading to ‖I − γA‖ < 1, then A is
invertible using the Neumann series:

A^{-1} = γ [ I + Σ_{i=1}^{∞} (I − γA)^i ] .

There are 30+ methods of matrix inversion, many of them covering special cases requiring
certain matrix properties. Hence, we have only scratched the surface at this point.
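
A minimal Octave/MATLAB sketch of the Newton iteration (example matrix and starting guess are assumptions; the initial guess A^T/(‖A‖1 ‖A‖∞) is a common choice that guarantees convergence for invertible A):

A = [2 -1 0; -1 2 -1; 0 -1 2];          % assumed example matrix
X = A' / (norm(A, 1) * norm(A, inf));   % starting guess for the inverse
for k = 1:20
    X = 2*X - X*A*X;                    % Newton update, converges quadratically towards inv(A)
end
err = norm(X*A - eye(3))                % close to machine precision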



Example: inverse utilizing Cayley-Hamilton (1)
Consider the 3 × 3 matrix

A = [1 1 2; 9 2 0; 5 0 3].

Applying the rule of Sarrus we can find its characteristic polynomial:

det(A − λI) = det([1−λ 1 2; 9 2−λ 0; 5 0 3−λ])
            = (1 − λ)(2 − λ)(3 − λ) − 10(2 − λ) − 9(3 − λ) = −λ^3 + 6λ^2 + 8λ − 41.

Then, the Cayley-Hamilton theorem yields

p(A) = 0 = −A^3 + 6A^2 + 8A − 41I   ⇔   I = (1/41) A (−A^2 + 6A + 8I) .



Example: inverse utilizing Cayley-Hamilton (2)
Multiplying both sides of the previous equation with A^{-1} results in

A^{-1} = (1/41) (−A^2 + 6A + 8I) .

By direct computation we can find

A^2 = [20 3 8; 27 13 18; 20 5 19].

Finally, putting everything together:

A^{-1} = (1/41) (−[20 3 8; 27 13 18; 20 5 19] + 6·[1 1 2; 9 2 0; 5 0 3] + [8 0 0; 0 8 0; 0 0 8]) = (1/41) [−6 3 4; 27 7 −18; 10 −5 7].



Similarity of matrices (1)

I Let v1 , ..., vn be a basis for V , and let A describe a linear transformation T in this basis.
I Now let v1′ , ..., vn′ be another basis for V whose matrix with respect to v1 , ..., vn is P
(must be nonsingular).
I What is the representation of T in the basis v1′ , ..., vn′ ?

We can switch between the representations with respect to the two bases as x = P x′ and
x′ = P^{-1} x.
I First consider the linear transformation in the original basis: w = Ax.
I In the new basis we have: w′ = P^{-1} w = P^{-1}(Ax) = (P^{-1}AP ) x′ .

Definition 2.9: Similarity

Two matrices A and A′ are called similar if there exists a nonsingular matrix P such that
A′ = P^{-1}AP .



Similarity of matrices (2)

Similar matrices share some important properties such as
I rank, determinant and eigenvalues (but not eigenvectors).

Let’s investigate a small example: are

A = [2 1; 1 2]   and   A′ = [−1 −8; 1 5]

similar? The answer is yes: using

P = [1 3; 0 1]

we receive

P^{-1}AP = [1 −3; 0 1][2 1; 1 2][1 3; 0 1] = [1 −3; 0 1][2 7; 1 5] = [−1 −8; 1 5] = A′ .
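
A short numerical check (a minimal Octave/MATLAB sketch):

A = [2 1; 1 2];  P = [1 3; 0 1];
A_prime = P \ A * P      % inv(P)*A*P = [-1 -8; 1 5]
eig(A), eig(A_prime)     % both return the eigenvalues 1 and 3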



Diagonalization (1)
Definition 2.10: Diagonalizable matrices
An n × n square matrix A is called diagonalizable (or nondefective) if there exists an invertible
matrix P such that

D = P^{-1}AP = [d1 0 · · · 0; 0 d2 · · · 0; . . . ; 0 0 · · · dn] . (2.18)

If A has n linearly independent eigenvectors it is diagonalizable.

I In this case, the eigenvectors vi of A form the change of basis matrix P = [v1 . . . vn ].
I We receive
P^{-1}AP = Λ with Λ = diag (λ1 , ..., λn ) .

Note that not all matrices are diagonalizable.


Diagonalization (2)
For example, consider the matrix

A = [0 1 −2; 0 1 0; 1 −1 3].

The eigenvectors of A are

v1 = [1 1 0]^T ,   v2 = [0 2 1]^T ,   v3 = [1 0 −1]^T .

Building the basis P out of these eigenvectors and applying (2.18) yields:

P^{-1}AP = [1 0 1; 1 2 0; 0 1 −1]^{-1} [0 1 −2; 0 1 0; 1 −1 3] [1 0 1; 1 2 0; 0 1 −1] = [1 0 0; 0 1 0; 0 0 2].

Note that the eigenvalues {λ1,2 = 1, λ3 = 2} of A form the diagonal elements.
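
The same diagonalization can be computed numerically (a minimal Octave/MATLAB sketch; eig normalizes the eigenvectors, so V may differ from P by column scaling and ordering):

A = [0 1 -2; 0 1 0; 1 -1 3];
[V, D] = eig(A);     % columns of V: eigenvectors, diag(D): eigenvalues
D_check = V \ A * V  % diagonal matrix with the eigenvalues 1, 1, 2 (up to rounding and ordering)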


Real symmetric matrices
An n × n matrix A for which A^T = A is symmetric. For symmetric matrices with real
entries:
I all eigenvalues are real and
I eigenvectors corresponding to distinct eigenvalues are orthogonal.
If a matrix V is orthogonal, then all columns vi are mutually orthogonal and have unit
norm, i.e., V V^T = I. For an orthogonal matrix V , we have V^T = V^{-1} .

Theorem 2.4: Eigenvalue decomposition of a real symmetric matrix


The eigenvalue decomposition of a real symmetric matrix A is

A = V ΛV^T ⇔ V^T AV = Λ. (2.19)

In this case V orthogonally diagonalizes A. Real symmetric matrices are always orthogonally
diagonalizable.



Nondiagonalizable matrices – Jordan normal form (1)
For non-diagonalizable matrices A, it is convenient to use the Jordan normal form

P^{-1}AP = J ,

where the matrix J is block diagonal:


 
J = [Λ 0 · · · 0; 0 J1 · · · 0; . . . ; 0 0 · · · Js] . (2.20)

I As before, Λ contains the eigenvalues associated to the linearly independent eigenvectors,


I The Jordan blocks J1 , . . . Js include the remaining eigenvalues as shown next.
I The columns of P correspond to the generalized eigenvectors of A (introduced shortly).
P can be found from AP = P J .
Nondiagonalizable matrices – Jordan normal form (2)
I Each Jordan block Jj is a bidiagonal matrix of the form

Jj = [λi 1 0 · · · 0; 0 λi 1 · · · 0; . . . ; 0 · · · 0 λi 1; 0 0 · · · 0 λi] = λi I + Nj , (2.21)

where Nj is a nilpotent matrix with ones on the first superdiagonal and zeros elsewhere.

I If λi has algebraic multiplicity µi > 1, and geometric multiplicity νi < µi , then this
eigenvalue gives rise to νi Jordan blocks.
I Algebraic multiplicity µi : how often the eigenvalue λi appears as a root of
p(λ) = det(λI − A).
I Geometric multiplicity νi : dimension of the null space of (λi I − A), i.e., how many linearly
independent eigenvectors can be found for a given λi .
I The sum of the sizes of all the Jordan blocks corresponding to λi is µi .
Generalized eigenvector

Definition 2.11: Generalized eigenvector


A vector vi is a generalized eigenvector of rank m of A corresponding to the eigenvalue λi
if
(A − λi I)^m vi = 0   but   (A − λi I)^{m−1} vi ≠ 0. (2.22)
Consequently, a generalized eigenvector of rank one is an ordinary eigenvector.

Example: the matrix

A = [1 1; 0 1]

has an eigenvalue λ = 1 with multiplicity two. The ordinary eigenvector is v1 = [1 0]^T .
Using (2.22) we can find the generalized eigenvector v2 of rank two:

(A − λI)^2 v2 = 0 = (A − λI) v1   ⇔   (A − λI) v2 = v1   ⇔   [0 1; 0 0] v2 = [1 0]^T   ⇔   v2 = [0 1]^T .
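
A minimal Octave/MATLAB sketch of this construction (pinv yields the minimum-norm solution of the singular system (A − λI) v2 = v1):

A = [1 1; 0 1];  lam = 1;
v1 = [1; 0];                       % ordinary eigenvector
v2 = pinv(A - lam*eye(2)) * v1     % generalized eigenvector of rank two: [0; 1]
(A - lam*eye(2))^2 * v2            % zero vector, as required by (2.22)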
Jordan normal form example (1)
Consider the following matrix

A = [5 4 2 1; 0 1 −1 −1; −1 −1 3 0; 1 1 −1 2].

Evaluating its characteristic polynomial det(A − λI) = 0 we can find out that the eigenvalues
are λ = {1, 2, 4, 4}, i.e., we have two unique eigenvalues and one with algebraic multiplicity
two. According to (2.20) and (2.21) the corresponding Jordan matrix is:

J = [1 0 0 0; 0 2 0 0; 0 0 4 1; 0 0 0 4].



Jordan normal form example (2)
How can we obtain the transition matrix P of P^{-1}AP = J ? By rearranging towards
AP = P J we obtain

A [p1 p2 p3 p4] = [p1 p2 p3 p4] [1 0 0 0; 0 2 0 0; 0 0 4 1; 0 0 0 4] = [p1  2p2  4p3  p3 + 4p4] .

Hence, we receive an equation system of the form

(A − 1I) p1 = 0,
(A − 2I) p2 = 0,
(A − 4I) p3 = 0,
(A − 4I) p4 = p3 .

Please note that the above equation system will deliver the (generalized) eigenvectors of A.
Jordan normal form example (3)

Finally, we can write down the transition matrix

P = [p1 p2 p3 p4] = [−1 1 1 1; 1 −1 0 0; 0 0 −1 0; 0 1 1 0].

Please note that Matlab and GNU Octave also offer pre-defined functions to calculate J and
P . However, it is highly recommended to use only exact (symbolic) algorithms, since
approximate (numeric) calculations of the Jordan normal form tend to be numerically unstable.
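
A minimal MATLAB sketch using the Symbolic Math Toolbox (the two-output form of jordan is assumed here; in GNU Octave a comparable function is available via the symbolic package):

A = sym([5 4 2 1; 0 1 -1 -1; -1 -1 3 0; 1 1 -1 2]);
[P, J] = jordan(A)   % J: Jordan normal form, P: (generalized) eigenvectors as columns
% note: P is not unique; it may differ from the choice above by scaling and column ordering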



Pingo time
 
I What is the Jordan form of A = [2 0 0; −4 −1 0; 3 −4 −2]?
I https://pingo.coactum.de/871072



Important Matlab / GNU Octave commands
a = [1; 2; 3; 4];     % Define a column vector
a = [1 2 3 4];        % Define a row vector
A = [1 2; 3 4];       % Define a two-by-two matrix

e = eig(A);           % Eigenvalues of a square matrix
[V,D] = eig(A);       % Diagonal matrix D: eigenvalues; matrix V: columns are the right eigenvectors
[V,D,W] = eig(A);     % Also returns the corresponding left eigenvectors in W

A_inv = inv(A);       % Inverse of an invertible matrix
A_T = transpose(A);   % Transpose of A, short command is A'

d = det(A);           % Determinant of A
r = rank(A);          % Rank of A
n = null(A);          % Orthonormal basis of the null space of A

J = jordan(A);        % Computes the Jordan form of the matrix A (requires Symbolic Math Toolbox)

Note that the above commands are from Matlab; GNU Octave commands might slightly differ.
Recommended reading

I K. Petersen and M. Pedersen, “The Matrix Cookbook”, Technical University of
Denmark, 2008
I P. Antsaklis and A. Michel, “A Linear Systems Primer”, Birkhäuser Basel, 2007
I Appendix on linear algebra
I Any solid (undergraduate) linear algebra book (cf. UPB library’s analog and digital stock)

This quick recap of selected aspects of linear algebra with high relevance for the subsequent
lectures cannot compensate for fundamental knowledge gaps in the field. Students with such
gaps are expected to catch up on linear algebra basics on their own.



Summary: what you’ve learned in this section

I Linear algebra is an important and often-used toolbox when performing system-theoretic
analysis (especially when dealing with linear systems).
I Eigenvalues and eigenvectors are important properties of linear transformations.
I The theorem of Cayley-Hamilton allows simplified calculations of higher-order matrix
powers using the characteristic polynomial of that matrix.
I Inverting a matrix can be achieved using several exact and approximate methods, whose
performance highly depends on certain matrix properties.
I Transforming a matrix into diagonal (or Jordan) form can help to reduce the
computational burden of subsequent calculations (sparsely populated matrix).



End of this section

Thanks for your attention!

