Learning objectives
Securing the basic knowledge of linear algebra, which we need in the following lectures to carry out typical system-theoretic investigations (such as checking the stability of a system or calculating the system response in the time domain).
No. Topic
1 Introduction (admin, system classes, motivating example)
2 Linear algebra basics
3 Response of linear systems incl. discrete-time representations
4 Laplace and z-transforms (complex frequency domain)
5 Frequency response
6 Stability
7 Controllability and observability
8 State transformation and realizations
9 State feedback and state observers
If the vectors are functions {fi (x)} defined on an interval I ⊂ R, then they are linearly
independent if α1 f1 (x) + · · · + αn fn (x) = 0 for all x ∈ I implies
α1 = · · · = αn = 0.
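As a numerical sketch (the sampling-based test and the chosen functions are illustrations, not part of the lecture): if the matrix of function values sampled on I has full column rank, no nontrivial combination vanishes at all sample points, so the functions are linearly independent on I.

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 50)   # sample points on I = [0, 2*pi]

# sin(x + 1) = cos(1)*sin(x) + sin(1)*cos(x), so this family is dependent:
F_dep = np.column_stack([np.sin(x), np.cos(x), np.sin(x + 1.0)])
# {sin(x), cos(x), x} admits no such relation on I, so it is independent:
F_ind = np.column_stack([np.sin(x), np.cos(x), x])

print(np.linalg.matrix_rank(F_dep))   # 2 -> only two independent functions
print(np.linalg.matrix_rank(F_ind))   # 3 -> full column rank, independent
```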
Oliver Wallscheid AST Topic 02 4
Basis (1)
We write span(v1 , ..., vn ) to denote the linear subspace generated (spanned) by v1 , ..., vn .
If the linearly independent vectors v1 , ..., vn span V , they form a basis of V . This means we can write any vector v ∈ V as a linear combination of the basis vectors:
v = α1 v1 + · · · + αn vn . (2.2)
The numbers α1 , ..., αn are called the coordinates of v with respect to the basis v1 , ..., vn .
These numbers are unique. We may assemble them in a vector α = [α1 · · · αn ]T .
Fig. 2.1: Illustration of the standard basis in R2 : The blue and orange vectors are the elements of the
basis; the green vector can be given in terms of the basis vectors, and so is linearly dependent upon
them (derivative of www.wikipedia.org, CC BY-SA 3.0).
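The coordinates can be computed by solving a linear system; a small sketch with an illustrative basis of R2 (the basis and the vector are my own choices):

```python
import numpy as np

# Illustrative basis of R^2: v1 = (1, 1), v2 = (1, -1), stacked as columns.
V = np.column_stack([[1., 1.], [1., -1.]])
v = np.array([3., 1.])

# The coordinates alpha solve V @ alpha = v; they are unique because the
# basis vectors (columns of V) are linearly independent.
alpha = np.linalg.solve(V, v)
print(alpha)   # [2. 1.], i.e. v = 2*v1 + 1*v2

# Reassembling the linear combination recovers v:
assert np.allclose(alpha[0] * V[:, 0] + alpha[1] * V[:, 1], v)
```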
A matrix A defines a linear mapping
v → Av. (2.3)
We call null(A) = dim N (A) the nullity and rank(A) = dim R(A) the rank of A.
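A short numpy sketch (the matrix is an illustration) of the rank, with the nullity obtained from the rank-nullity theorem null(A) = n − rank(A):

```python
import numpy as np

# Illustrative matrix: the second column is twice the first, so rank(A) = 2.
A = np.array([[1., 2., 0.],
              [2., 4., 1.]])

rank = np.linalg.matrix_rank(A)   # dim R(A)
nullity = A.shape[1] - rank       # rank-nullity theorem: null(A) = n - rank(A)
print(rank, nullity)              # 2 1
```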
Fig. 2.2: Kernel and image of a linear mapping (derivative of www.wikipedia.org, CC BY-SA 4.0)
The determinant can be obtained by Laplace (cofactor) expansion along the i-th row, det(A) = Σnj=1 (−1)i+j aij Mij , where Mij is defined to be the determinant of the (n − 1) × (n − 1) matrix that results from
A by removing the i-th row and the j-th column.
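This recursive definition translates directly into code; a didactic sketch (O(n!) complexity, not for production use):

```python
import numpy as np

def det_laplace(A):
    """Determinant via Laplace expansion along the first row (didactic, O(n!))."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_1j: delete the first row and the j-th column.
        M = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_laplace(M)
    return total

A = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
assert np.isclose(det_laplace(A), np.linalg.det(A))   # both yield 4
```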
I An n × n matrix A has n, not necessarily distinct, eigenvalues. They are found as the
solutions of the characteristic equation det(A − λI) = 0.
I Eigenvectors can be scaled arbitrarily. Sometimes it is convenient to assume that they are
normalized to have unit norm (length) ∥vi∥ = 1.
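Both properties can be checked numerically; a sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])   # illustrative matrix with eigenvalues 1 and 3

w, V = np.linalg.eig(A)    # eigenvalues w, unit-norm eigenvectors as columns of V

for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)          # definition: A v = lambda v
    assert np.isclose(np.linalg.norm(v), 1.0)   # numpy normalizes to unit length
print(np.sort(w))   # [1. 3.]
```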
p(λ) = α0 + α1 λ + α2 λ2 + · · · + αn λn . (2.11)
We can rearrange p(A) = A^2 − 5A − 2I = 0 to
A^2 = 5A + 2I.
Hence, we have found a simple expression to calculate the square of A. Likewise we can use
Cayley-Hamilton to calculate higher power terms of A:
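A numerical sketch with the matrix A = [1 3; 2 4] used in the surrounding examples (its characteristic polynomial is p(λ) = λ^2 − 5λ − 2):

```python
import numpy as np

# Example matrix with characteristic polynomial p(lambda) = lambda^2 - 5*lambda - 2:
A = np.array([[1., 3.],
              [2., 4.]])
I = np.eye(2)

# Cayley-Hamilton: A^2 - 5A - 2I = 0, hence A^2 = 5A + 2I.
assert np.allclose(A @ A, 5 * A + 2 * I)

# Higher powers by repeated substitution:
# A^3 = A * A^2 = 5*A^2 + 2*A = 5*(5A + 2I) + 2A = 27A + 10I.
assert np.allclose(A @ A @ A, 27 * A + 10 * I)
```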
G = AT A. (2.15)
The Gramian of A = [1 3; 2 4] is
G = A^T A = [1 2; 3 4][1 3; 2 4] = [5 11; 11 25]
with its determinant
det(G) = 5 · 25 − 11^2 = 4 ≠ 0.
Hence, all columns of A are linearly independent and its inverse exists:
A^{-1} = [−2 3/2; 1 −1/2] ⇒ A^{-1}A = AA^{-1} = I.
Given an invertible matrix A ∈ Rn×n we have a multitude of options to calculate its inverse:
I Gaussian elimination (perform row operations until the left block becomes the identity):
[A|I] = [2 −1 0 | 1 0 0; −1 2 −1 | 0 1 0; 0 −1 2 | 0 0 1]
⇐⇒ [I|A^{-1}] = [1 0 0 | 3/4 1/2 1/4; 0 1 0 | 1/2 1 1/2; 0 0 1 | 1/4 1/2 3/4].
I Cayley-Hamilton theorem: A^{-1} = −(1/det(A)) (A^{n−1} + c_{n−1}A^{n−2} + . . . + c_1 I)
with ci as coefficients of
p(A) = A^n + c_{n−1}A^{n−1} + . . . + c_1 A + det(A)I = 0.
I Cramer’s rule / adjugate matrix: One can also calculate the inverse
A^{-1} = adj(A)/det(A)
using its adjugate matrix adj(A). For low-order matrices, the adjugate matrix can be
easily computed:
adj([a b; c d]) = [d −b; −c a],
adj([a b c; d e f; g h i]) = [ei − fh, ch − bi, bf − ce; fg − di, ai − cg, cd − af; dh − eg, bg − ah, ae − bd].
I Eigendecomposition: For a real symmetric A we can write A = QΛQ^T and thus A^{-1} = QΛ^{-1}Q^T,
where Q contains the eigenvectors as columns and Λ is a diagonal matrix whose diagonal
elements are the corresponding eigenvalues. Note that Q is orthogonal, i.e., Q^T = Q^{-1}.
I Blockwise inversion: In certain cases it might be handy to invert a matrix blockwise using
[B C; D E]^{-1} = [B^{-1} + B^{-1}C(E − DB^{-1}C)^{-1}DB^{-1}, −B^{-1}C(E − DB^{-1}C)^{-1}; −(E − DB^{-1}C)^{-1}DB^{-1}, (E − DB^{-1}C)^{-1}].
I Numerical approximation:
I Newton’s method: Assuming one has an informed guess A_i^{-1}, i.e., the approximate
inverse at iteration step i, we can apply:
A_{i+1}^{-1} = 2A_i^{-1} − A_i^{-1} A A_i^{-1}.
I Neumann series: If there exists a scaling factor γ > 0 leading to ∥I − γA∥ < 1, then A is
invertible using the Neumann series:
A^{-1} = γ [ I + Σ_{i=1}^{∞} (I − γA)^i ].
There are 30+ methods of matrix inversion, many of them covering special cases requiring
certain matrix properties. Hence, we have only scratched the surface at this point.
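The two iterative schemes above can be sketched in Python (the matrix, γ = 0.5, the iteration counts, and the Newton starting guess are choices made for this illustration, not taken from the slides):

```python
import numpy as np

A = np.array([[2., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 2.]])
A_inv_ref = np.linalg.inv(A)   # reference inverse (LAPACK, elimination-based)

# Newton's method: X_{i+1} = 2 X_i - X_i A X_i; the classical starting
# guess X_0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(30):
    X = 2 * X - X @ A @ X
assert np.allclose(X, A_inv_ref)

# Neumann series: gamma = 0.5 gives ||I - gamma*A|| < 1 for this A, so
# A^{-1} = gamma * (I + sum_{i>=1} (I - gamma*A)^i); truncate the sum.
gamma = 0.5
M = np.eye(3) - gamma * A
S, term = np.eye(3), np.eye(3)
for _ in range(120):
    term = term @ M
    S += term
assert np.allclose(gamma * S, A_inv_ref)
```

Note the quadratic convergence of the Newton iteration versus the linear convergence of the truncated Neumann series.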
I Let v1 , ..., vn be a basis for V , and let A describe a linear transformation T in this basis.
I Now let v1′ , ..., vn′ be another basis for V whose matrix with respect to v1 , ..., vn is P
(P must be nonsingular).
I What is the representation of T in the basis v1′ , ..., vn′ ?
We can switch between the representations with respect to the two bases as x = P x′ and
x′ = P^{-1} x.
I First consider the linear transformation in the original basis: w = Ax.
I In the new basis we have: w′ = P^{-1}(Ax) = (P^{-1}AP )x′.
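A quick numerical sanity check of the change of basis (A, P , and x below are arbitrary illustration values):

```python
import numpy as np

A = np.array([[0., 1.], [-2., -3.]])   # transformation in the original basis
P = np.array([[1., 1.], [-1., -2.]])   # change-of-basis matrix (nonsingular)

x = np.array([1., 2.])              # coordinates w.r.t. the original basis
x_new = np.linalg.solve(P, x)       # x' = P^{-1} x

w = A @ x                               # w = A x in the original basis
A_new = np.linalg.solve(P, A @ P)       # A' = P^{-1} A P
w_new = A_new @ x_new                   # w' = A' x' in the new basis

# Both computations describe the same vector: w = P w'.
assert np.allclose(w, P @ w_new)
```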
Building the basis P out of these eigenvectors and applying (2.18) yields:
P^{-1}AP = [1 0 1; 1 2 0; 0 1 −1]^{-1} [0 1 −2; 0 1 0; 1 −1 3] [1 0 1; 1 2 0; 0 1 −1] = [1 0 0; 0 1 0; 0 0 2].
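This diagonalization can be verified numerically, e.g., with numpy (a sketch using the matrices from the example above):

```python
import numpy as np

A = np.array([[0., 1., -2.],
              [0., 1., 0.],
              [1., -1., 3.]])
P = np.array([[1., 0., 1.],
              [1., 2., 0.],
              [0., 1., -1.]])   # eigenvectors of A as columns

D = np.linalg.solve(P, A @ P)   # P^{-1} A P without forming P^{-1} explicitly
assert np.allclose(D, np.diag([1., 1., 2.]))
```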
A = V ΛV ^T ⇔ V ^T AV = Λ. (2.19)
In this case V orthogonally diagonalizes A. Real symmetric matrices are always orthogonally
diagonalizable.
P^{-1}AP = J ,
I If λi has algebraic multiplicity µi > 1 and geometric multiplicity νi < µi , then this
eigenvalue gives rise to νi Jordan blocks.
I Algebraic multiplicity µi : how often the eigenvalue λi appears as a root of the
characteristic polynomial p(λ) = det(λI − A).
I Geometric multiplicity νi : dimension of the nullspace of (λi I − A), i.e., how many linearly
independent eigenvectors can be found for a given λi .
I The sum of the sizes of all the Jordan blocks corresponding to λi is µi .
Generalized eigenvector
Evaluating the characteristic equation det(A − λI) = 0 we find that the eigenvalues
are λ = {1, 2, 4, 4}, i.e., we have two simple eigenvalues and one eigenvalue (λ = 4) with algebraic multiplicity
two. According to (2.20) and (2.21) the corresponding Jordan matrix is:
J = [1 0 0 0; 0 2 0 0; 0 0 4 1; 0 0 0 4].
(A − 1I) p1 = 0,
(A − 2I) p2 = 0,
(A − 4I) p3 = 0,
(A − 4I) p4 = p3 .
Please note that the above equation system will deliver the (generalized) eigenvectors of A.
Jordan normal form example (3)
Please note that Matlab and GNU Octave also offer pre-defined functions to calculate J and
P . However, it is highly recommended to use only exact (symbolic) algorithms, since
approximate (numeric) calculations of the Jordan normal form tend to be numerically unstable.
Note that the above commands are from Matlab; GNU Octave commands might differ slightly.
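In Python, an exact symbolic computation is possible with sympy's jordan_form (the matrix below is a small construction for this sketch, not the slide's example matrix):

```python
import sympy as sp

# Construct A = P0 J0 P0^{-1} with a known Jordan structure: eigenvalue 2
# (simple) and eigenvalue 3 with one 2x2 Jordan block. P0 and J0 are
# illustrative choices.
J0 = sp.Matrix([[2, 0, 0],
                [0, 3, 1],
                [0, 0, 3]])
P0 = sp.Matrix([[1, 1, 0],
                [0, 1, 1],
                [1, 0, 1]])
A = P0 * J0 * P0.inv()

# Exact (symbolic) Jordan decomposition, as recommended above:
P, J = A.jordan_form()
assert (A - P * J * P.inv()).is_zero_matrix
print(J)
```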
Recommended reading
This quick recap of selected aspects of linear algebra with high relevance for
the subsequent lectures cannot compensate for fundamental knowledge gaps in the field.
Students with such gaps are expected to catch up on linear algebra basics on their own.
I Linear algebra is an important and often used toolbox when performing system theory
analysis (especially when dealing with linear systems).
I Eigenvalues and eigenvectors are important properties of linear transformations.
I The theorem of Cayley-Hamilton allows simplified calculations of higher order matrix
powers using the characteristic polynomial of that matrix.
I Inverting a matrix can be achieved using several exact and approximate methods whose
performance highly depends on certain matrix properties.
I Transforming a matrix into diagonal (or Jordan) form can help to reduce the
computational burden of subsequent calculations (sparsely populated matrices).