
Chapter 2

MATRIX CALCULATION

Matrices
Definition

• A matrix is a rectangular array of elements a_ij (here: numbers) arranged in m rows and n columns:

        | a_11  a_12  ...  a_1n |        i = row index
  A  =  | a_21  a_22  ...  a_2n |        j = column index
 (m,n)  |  :     :          :   |        A = matrix name (capital letter)
        | a_m1  a_m2  ...  a_mn |        a_ij = matrix element (lower case)

  A = (a_ij)

• (m x n) is the order or dimension of the matrix
• Rectangular matrix: m ≠ n
• Square matrix: m = n

• Matrices are used to formalize the description of computational processes, especially in linear algebra
• The term was introduced in 1850 by James Joseph Sylvester
• Used here in inductive statistics and regression analysis (Chapters 4-7)

Univ.-Prof. Dr.-Ing. J. Blankenbach Applied Statistics - WS 20/21 33


Matrices
Transpose

• Example 2.1

• Transposed matrix
  • Swap the rows and columns of the original matrix
  • Indicated by a prime ( ′ ) after the matrix name

        | a_11  a_21  ...  a_m1 |
  A′ =  | a_12  a_22  ...  a_m2 |        A′ = (a_ji)
 (n,m)  |  :     :          :   |
        | a_1n  a_2n  ...  a_mn |

• Example 2.2
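The row/column swap can be sketched in a few lines of Python (an illustrative helper on nested lists, not part of the course material):

```python
def transpose(A):
    """Return A' by swapping rows and columns: element (i, j) moves to (j, i)."""
    m, n = len(A), len(A[0])
    return [[A[i][j] for i in range(m)] for j in range(n)]

# a (2, 3) matrix becomes a (3, 2) matrix
A = [[1, 2, 3],
     [4, 5, 6]]
At = transpose(A)   # [[1, 4], [2, 5], [3, 6]]
```

Transposing twice returns the original matrix, i.e. (A′)′ = A.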



Matrices
special cases

• For symmetric matrices: a_ij = a_ji, and thus A = A′

• For skew-symmetric matrices: a_ij = -a_ji, and thus A = -A′

• Single-column matrix = (column) vector a of dimension (m, 1)

        | a_1 |
  a  =  |  :  |
 (m,1)  | a_m |

• Single-row matrix = (row) vector a′ of dimension (1, n)

  a′  =  ( a_1  ...  a_n )
 (1,n)

• For a zero matrix 0, all elements satisfy: a_ij = 0



Matrices
special cases

• Diagonal matrix
  • Square matrix whose elements off the main diagonal are zero

        | d_11   0   ...   0  |
  D  =  |  0   d_22        :  |        d_ij = 0 for all i ≠ j
 (n,n)  |  :          .    0  |
        |  0    ...   0  d_nn |

  D = diag(d_11, d_22, ..., d_nn)


• Triangular matrix
  • Square matrix whose elements above or below the main diagonal are zero

        | a_11  a_12  ...  a_1n |              | a_11              0   |
  A  =  |       a_22  ...  a_2n |        A  =  | a_21  a_22            |
 (n,n)  |          .       :    |       (m,m)  |  :           .        |
        |   0            a_nn   |              | a_m1  a_m2  ...  a_mm |

  Upper triangular matrix                Lower triangular matrix


Matrices
Vectors

• Magnitude (or length) of a vector a
  • Square root of the sum of the squared elements:

          | a_1 |
    a  =  |  :  |        |a| = sqrt( a_1² + a_2² + ... + a_m² )
   (m,1)  | a_m |

  • Geometric interpretation: position vector from the origin 0 (here in 2D) to a point P(x_1, y_1); the magnitude is the length of the segment from 0 to P
  • Example "vector"

• One-vector e
  • All elements a_i = 1, with i = 1, ..., m

        | 1 |
  e  =  | : |        magnitude: |e| = sqrt(m)
 (m,1)  | 1 |
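The magnitude formula can be checked with a minimal Python sketch (the function name and the list representation of vectors are our own, not from the slides):

```python
import math

def magnitude(a):
    """|a| = square root of the sum of the squared elements."""
    return math.sqrt(sum(x * x for x in a))

a = [3.0, 4.0]
print(magnitude(a))        # 5.0 -- the classic 3-4-5 position vector

e = [1.0] * 9              # one-vector with m = 9 elements
print(magnitude(e))        # 3.0 = sqrt(m)
```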
Matrices
Unit vector and unit matrix

• Unit vector
  • The magnitude of the vector is 1
  • Special cases are the k-th unit vectors e_k, in which the k-th element is 1 and all others are zero:

          | 0 |
          | : |
    e_k = | 1 |   <- k-th element
          | : |
          | 0 |

• Identity matrix I
  • Diagonal matrix whose diagonal elements are equal to 1
  • System of n column vectors e_k with k = 1, ..., n:

        | 1  0  ...  0 |
  I  =  | 0  1       : |        a_ij = 0 for all i ≠ j
 (n,n)  | :     .    0 |        a_ij = 1 for all i = j
        | 0  ... 0   1 |

  I = ( e_1, e_2, ..., e_n )



Matrices
Operations

• Addition and subtraction
  • Two (m x n) matrices A = (a_ij) and B = (b_ij) are added or subtracted element-wise:

      A  ±  B  =  C        with  a_ij ± b_ij = c_ij   (for all i and j)
    (m,n) (m,n) (m,n)

  • Example 2.7
  • Commutative law: A + B = B + A = C
  • Associative law: A + (B + C) = (A + B) + C = A + B + C

• Scalar multiplication: matrix multiplied by a scalar
  • Every element of A is multiplied by the scalar c:

    c · A = (c · a_ij)

  • Rules:
    c · (A + B) = cA + cB
    (c + d) · A = cA + dA
    c · (A · B) = (cA) · B = A · (cB)
    c · (d · A) = (c · d) · A
  • Example 2.8
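Element-wise addition and scalar multiplication translate directly into Python; a minimal sketch on nested lists (the helper names are our own):

```python
def mat_add(A, B):
    """Element-wise sum of two (m x n) matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(c, A):
    """Multiply every element of A by the scalar c."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))       # [[6, 8], [10, 12]]
print(scalar_mul(2, A))    # [[2, 4], [6, 8]]
```

The commutative law A + B = B + A holds element by element, since ordinary addition of numbers is commutative.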



Matrices
Operations

• Matrix multiplication: product of two matrices

     C   =   A  ·  B         Note: A·B ≠ B·A (not commutative)
   (m,n)   (m,k) (k,n)

            k
    c_ij = Σ a_ir · b_rj = a_i1·b_1j + a_i2·b_2j + ... + a_ik·b_kj
           r=1

• Requirement: number of columns of A = number of rows of B = k

  Schematically, c_ij is the product of the i-th row of A with the j-th column of B:

                              | b_11 ... b_1j ... b_1n |
                              | b_21 ... b_2j ... b_2n |
                              |  :        :        :   |
                              | b_k1 ... b_kj ... b_kn |

    | a_11  a_12 ... a_1k |   | c_11 ... c_1j ... c_1n |
    |  :               :  |   |  :        :        :   |
    | a_i1  a_i2 ... a_ik |   | c_i1 ... c_ij ... c_in |
    |  :               :  |   |  :        :        :   |
    | a_m1  a_m2 ... a_mk |   | c_m1 ... c_mj ... c_mn |
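The summation formula for c_ij can be sketched directly in Python (an illustrative helper on nested lists, not part of the course material):

```python
def mat_mul(A, B):
    """C = A·B with c_ij = sum over r of a_ir * b_rj; cols(A) must equal rows(B)."""
    k = len(B)
    assert len(A[0]) == k, "number of columns of A must equal number of rows of B"
    return [[sum(A[i][r] * B[r][j] for r in range(k))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))   # [[2, 1], [4, 3]]
print(mat_mul(B, A))   # [[3, 4], [1, 2]]  -- A·B differs from B·A
```

The two results differ, illustrating that matrix multiplication is not commutative.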



Matrices
Operations

• Example 2.9a

• Falk's scheme
  • Calculation scheme and visual aid for matrix multiplication

          | 2  1 |
    B  =  | 1  0 |          B  ·  e
   (3,2)  | 7  4 |        (3,2) (2,1)

• Example 2.9b

          | 1   2   4 |
    A  =  | 5  -1   0 |
   (2,3)

     C  ,   C  ·  e
   (2,2)  (2,2) (2,1)



Matrix calculation
Vector operations

A vector is a special case of a matrix with dimension (m x 1) (see slide 28)

• Dot product (scalar product): product of two vectors
  • Special case of matrix multiplication; it yields a scalar (not a vector!) as the result
  • In terms of matrix multiplication, the first vector must be transposed for this:

                                       | b_1 |      m
    a′ · b = ( a_1, a_2, ..., a_m )  · | b_2 |  =   Σ a_i·b_i
                                       |  :  |     i=1
                                       | b_m |

            m                      m
    a′·a =  Σ a_i²        b′·b =   Σ b_i²
           i=1                    i=1

  • Example 2.13
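The scalar product is a one-liner in Python; a minimal sketch on plain lists (helper name is our own):

```python
def dot(a, b):
    """Scalar product a'·b = sum of a_i * b_i -- a scalar, not a vector."""
    assert len(a) == len(b), "both vectors must have the same dimension"
    return sum(x * y for x, y in zip(a, b))

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]
print(dot(a, b))   # 32.0 = 1*4 + 2*5 + 3*6
```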



Matrix calculation
Vector operations

• Vector norm
  • A norm is generally used to describe the size of objects
  • The Euclidean norm (2-norm) of a vector corresponds to the length of the vector (and thus its magnitude, see slide 30) and can be calculated using the scalar product; ‖a‖_2 is subsequently written simply as ‖a‖:

    ‖a‖ = sqrt( a′·a ) = sqrt( Σ_{i=1}^m a_i² )

  • Example 2.15

  • The Euclidean norm can be used for a geometric interpretation of the scalar product:

    a·b = ‖a‖ · ‖b‖ · cos α        α = intersection angle of the two vectors

    cos α = (a·b) / (‖a‖·‖b‖) = (a·b) / ( sqrt(a′·a) · sqrt(b′·b) )

  • Example 2.16
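The angle formula can be tried out directly; a small Python sketch (the helper names are our own, and the example vectors are chosen so the expected angle is known):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Euclidean norm via the scalar product: ||a|| = sqrt(a'·a)."""
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Intersection angle alpha from cos(alpha) = a·b / (||a||·||b||)."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

alpha = angle([1.0, 0.0], [1.0, 1.0])
print(math.degrees(alpha))   # ~45 degrees, up to floating-point rounding
```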



Matrix calculation
Vector operations

• Orthogonal vectors
  • From the calculation of the intersection angle it follows that two vectors are orthogonal to one
    another if their scalar product equals zero

• Example 2.16

• Orthonormal vectors
  • Orthogonal vectors whose norm (or length) equals 1 are called orthonormal vectors



Matrix calculation
Linear dependency

• Linear dependence of vectors
  • A vector b can be written as a linear combination, i.e. as a sum of the vectors a_i multiplied by scalars c_i:

                                              m
    b = c_1·a_1 + c_2·a_2 + ... + c_m·a_m  =  Σ c_i·a_i
                                             i=1

  • If not all c_i = 0 and the combination yields the zero vector 0 instead of the vector b, the vectors a_i are called linearly dependent:

                                              m
    0 = c_1·a_1 + c_2·a_2 + ... + c_m·a_m  =  Σ c_i·a_i
                                             i=1

  • ... otherwise (only the trivial combination with all c_i = 0 yields the zero vector) the vectors a_i are linearly independent

• Matrix as a vector system
  • Every (m x n) matrix can be viewed as a system of m row vectors or n column vectors
  • Example: a (3 x 3) matrix as a system of column vectors (red) or row vectors (green):

          | a_11  a_12  a_13 |
    A  =  | a_21  a_22  a_23 |
          | a_31  a_32  a_33 |



Matrix calculation
Operations

• Rank of a matrix
  • The maximum number of linearly independent vectors of a vector system is called the rank of the vector system
  • For an (m x n) matrix viewed as a system of m row and n column vectors:

    rg(A) = r ≤ min(m, n)

  • The rank is at most equal to the smaller of the two numbers m and n
  • If r = m: the matrix has full row rank
  • If r = n: the matrix has full column rank
  • If r < min(m, n), there is a rank defect d: d = min( (m - r), (n - r) )
  • The row rank and the column rank of every matrix are equal
  • Example 2.23
• Regular and singular matrices
  • For a square matrix of order m: rg(A) = r ≤ m
  • A square matrix with full rank ( rg(A) = m ) is called regular
  • A square matrix without full rank ( rg(A) < m ) is singular
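Rank determination can be sketched via row reduction to echelon form, anticipating the elementary transformations on the next slides (the pivoting details and tolerance are our own choices for float arithmetic):

```python
def rank(A, eps=1e-12):
    """Rank via elementary row transformations to echelon form (float arithmetic)."""
    M = [list(map(float, row)) for row in A]   # work on a copy
    m, n = len(M), len(M[0])
    r, col = 0, 0
    while r < m and col < n:
        # choose the largest pivot in this column for numerical stability
        piv = max(range(r, m), key=lambda i: abs(M[i][col]))
        if abs(M[piv][col]) < eps:
            col += 1
            continue
        M[r], M[piv] = M[piv], M[r]            # row swap (elementary transformation)
        for i in range(r + 1, m):              # eliminate below the pivot
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r, col = r + 1, col + 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],      # = 2 x row 1, so the rows are linearly dependent
     [0, 1, 1]]
print(rank(A))       # 2 -> rank defect, this (3 x 3) matrix is singular
```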



Matrix calculation
Operations

• Elementary transformations
  • Multiplying an (m x n) matrix on the left by the elementary (m x m) matrices E_i performs elementary row transformations of the matrix
  • Examples of elementary matrices:

          | 0 1 0 ... 0 |          | 1 0 0 ... 0 |          | 1 0 0 ... 0 |
          | 1 0 0     0 |          | 0 c 0     0 |          | c 1 0     0 |
    E_1 = | 0 0 1     0 |    E_2 = | 0 0 1     0 |    E_3 = | 0 0 1     0 |
          |        .    |          |        .    |          |        .    |
          | 0 0 0 ... 1 |          | 0 0 0 ... 1 |          | 0 0 0 ... 1 |

    E_1·A  swaps the 1st and 2nd rows of A
    E_2·A  multiplies the 2nd row of A by c
    E_3·A  multiplies the 1st row of A by c and adds it element by element to the 2nd row
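The effect of the three elementary matrices can be verified numerically; a small Python sketch (the example matrix A and the choice c = 5 are our own):

```python
def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

n, c = 3, 5.0
E1 = identity(n); E1[0], E1[1] = E1[1], E1[0]   # swaps rows 1 and 2
E2 = identity(n); E2[1][1] = c                  # multiplies row 2 by c
E3 = identity(n); E3[1][0] = c                  # adds c times row 1 to row 2

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
print(mat_mul(E1, A)[0])   # [4.0, 5.0, 6.0] -- the former 2nd row
print(mat_mul(E3, A)[1])   # [9.0, 15.0, 21.0] = 5*row1 + row2
```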



Matrix calculation
Operations

• Elementary transformations (continued)
  • With a little practice they can be carried out without explicit multiplication by the elementary matrices ("quasi in one's head")
  • Elementary transformations do not change the rank of a matrix and are therefore used, among other things, to determine it
  • Example 2.25
  • Right-sided multiplication of an (m x n) matrix by the transposed elementary (n x n) matrices E′_i performs column transformations

• Trace of a matrix
  • The trace of a square (n x n) matrix is equal to the sum of its main diagonal elements:

                                            n
    sp(A) = a_11 + a_22 + ... + a_nn   =    Σ a_ii
                                           i=1



Matrix calculation
Operations

• Determinant (general)
  • Every square matrix A can be assigned a scalar as a unique number, denoted det A
  • Determinants are an important tool in linear algebra (see slide 46)

• Determinant of a (2 x 2) matrix

    A  =  | a_11  a_12 |        det A = a_11·a_22 - a_21·a_12
   (2,2)  | a_21  a_22 |

• Determinant of a (3 x 3) matrix
  • Rule of Sarrus ("picket-fence rule"): append the first two columns and form products along the diagonals

          | a_11  a_12  a_13 |  a_11  a_12        det A =   a_11·a_22·a_33 + a_12·a_23·a_31 + a_13·a_21·a_32
    A  =  | a_21  a_22  a_23 |  a_21  a_22                - a_31·a_22·a_13 - a_32·a_23·a_11 - a_33·a_21·a_12
   (3,3)  | a_31  a_32  a_33 |  a_31  a_32
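Both determinant formulas can be sketched directly in Python (illustrative helpers, not part of the course material):

```python
def det2(A):
    """Determinant of a (2 x 2) matrix: a11*a22 - a21*a12."""
    return A[0][0] * A[1][1] - A[1][0] * A[0][1]

def det3_sarrus(A):
    """Rule of Sarrus for a (3 x 3) matrix: three falling minus three rising diagonals."""
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    return (a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32
            - a31 * a22 * a13 - a32 * a23 * a11 - a33 * a21 * a12)

print(det2([[1, 2], [3, 4]]))                          # -2
print(det3_sarrus([[3, 2, 1], [1, 0, 2], [4, 1, 3]]))  # 5 -- the example on the next slide
```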
Matrix calculation
Vector operations

• Geometric interpretation of the determinant of a matrix
  • The determinant of a square matrix of order 2 or 3 corresponds to the area or the volume spanned by the (column or row) vectors of the matrix

• Example:

          | 3  2  1 |
    A  =  | 1  0  2 |
   (3,3)  | 4  1  3 |

  Rule of Sarrus:

    det A = 3·0·3 + 2·2·4 + 1·1·1 - 4·0·1 - 1·2·3 - 3·1·2 = 5

  If two vectors are linearly dependent, two points of the object lie on one straight line, which is why the area or the volume becomes zero



Matrix calculation
Operations

• Determinants for arbitrary square matrices
  • For every square matrix A there is, for each element, a sub-determinant (minor) m_kj, obtained by deleting the k-th row and the j-th column, e.g.:

          | a_11  a_12  a_13 |
    A  =  | a_21  a_22  a_23 |        m_12 = | a_21  a_23 |  =  a_21·a_33 - a_31·a_23
   (3,3)  | a_31  a_32  a_33 |               | a_31  a_33 |

  • Laplace's expansion theorem: the determinant is the sum of the products of all elements of the k-th row (or column) with the associated cofactors c_kj:

            n
    det A = Σ a_kj · c_kj        with  c_kj = (-1)^(k+j) · m_kj
           j=1

  • The cofactor c_kj results from attaching a positive or negative sign to the minor m_kj


Matrix calculation
Operations

          | a_11  a_12  a_13 |
    A  =  | a_21  a_22  a_23 |
   (3,3)  | a_31  a_32  a_33 |

  Note the signs of the cofactors c_kj:

    | +  -  + |
    | -  +  - |
    | +  -  + |

  Expansion along the first row:

    det A = a_11 · | a_22  a_23 |  -  a_12 · | a_21  a_23 |  +  a_13 · | a_21  a_22 |
                   | a_32  a_33 |            | a_31  a_33 |            | a_31  a_32 |

    det A = a_11·(a_22·a_33 - a_32·a_23) - a_12·(a_21·a_33 - a_31·a_23) + a_13·(a_21·a_32 - a_31·a_22)

  • Example 2.18
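Laplace expansion lends itself to a short recursive sketch (expansion along the first row, 0-based indices; the helper names are our own):

```python
def minor(A, k, j):
    """Sub-matrix with the k-th row and j-th column deleted (0-based indices)."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != k]

def det(A):
    """Laplace expansion along the first row: det A = sum of (-1)^j * a_0j * minor det."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[3, 2, 1], [1, 0, 2], [4, 1, 3]]))   # 5 -- matches the Sarrus result
print(det([[1, 2], [3, 4]]))                    # -2
```

Unlike the rule of Sarrus, this works for any order n, though the cost grows factorially; in practice determinants are computed via elimination.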



Matrix calculation
Operations

• Special case: determinant of a triangular matrix

          | a_11  a_12  ...  a_1n |
    A  =  |       a_22  ...  a_2n |
          |          .        :   |
          |   0             a_nn  |

  • The determinant of a triangular matrix (or diagonal matrix) is equal to the product of its diagonal elements:

    det A = a_11 · a_22 · a_33 · ... · a_nn

  • This also follows directly from the rule of Sarrus

  CAUTION:
  Elementary transformations can change the determinant of a matrix. In particular, multiplying a row by a factor c changes the determinant by the factor c. This must be taken into account when calculating the determinant after or through elementary transformations.



Matrix calculation
Operations

• Matrix inversion (general)
  • The division of matrices is not defined
  • The inverse matrix A⁻¹ of dimension (n, n) therefore serves as the "reciprocal of a matrix"
  • The following applies: A·A⁻¹ = I and A⁻¹·A = I
  • Requirements:
    • Square matrix: dimension (n x n)
    • Regular matrix (no rank defect): det A ≠ 0

• Matrix inversion by the determinant formula
  • To invert a matrix A by the determinant formula, the cofactor matrix C is formed, in which each element a_ij is replaced by its cofactor c_ij:

          | a_11  a_12  ...  a_1n |                                  | c_11  c_12  ...  c_1n |
    A  =  | a_21  a_22  ...  a_2n |    c_ij = (-1)^(i+j)·m_ij   C  = | c_21  c_22  ...  c_2n |
          |  :     :          :   |                                  |  :     :          :   |
          | a_n1  a_n2  ...  a_nn |                                  | c_n1  c_n2  ...  c_nn |



Matrix calculation
Operations

• Matrix inversion by the determinant formula (continued)
  • The adjoint (adjugate) matrix adj A of A is the transpose C′ of C:

              | c_11  c_21  ...  c_n1 |
    adj A  =  | c_12  c_22  ...  c_n2 |  =  C′
              |  :     :          :   |
              | c_1n  c_2n  ...  c_nn |

  • The inverse by the determinant formula finally results from

    A⁻¹ = (1 / det A) · adj A

  • or, element by element (note the swapped indices due to the transpose),

    a⁻¹_ij = c_ji / det A

  • Example 2.20
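The determinant formula can be sketched on top of the Laplace helpers; exact rational arithmetic via `fractions.Fraction` avoids rounding (an illustrative implementation, the example matrix is our own):

```python
from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for r, row in enumerate(A) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def inverse(A):
    """A^(-1) = adj A / det A, with adj A the transposed cofactor matrix."""
    n = len(A)
    d = det(A)
    assert d != 0, "matrix is singular (rank defect), no inverse exists"
    # element (i, j) of the inverse is c_ji / det A -- note the transpose
    return [[Fraction((-1) ** (i + j) * det(minor(A, j, i)), d)
             for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]     # det A = 1, so the inverse is integer-valued
print(inverse(A))        # entries 3, -1, -5, 2
```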



Matrix calculation
Operations

• Matrix inversion by the Gauss-Jordan method
  • In the Gauss-Jordan method, the matrix A is first transformed by elementary row operations (see slide 49) into an upper triangular matrix:

          | a_11  a_12  ...  a_1n |
    A  =  |       a_22  ...  a_2n |
          |          .        :   |
          |   0             a_nn  |

  • By carrying the identity matrix I along and reducing it in parallel, the inverse A⁻¹ is obtained at the end:

    A·B = I   --(Gauss-Jordan method)-->   B = A⁻¹·I = A⁻¹

  • Example 2.20
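The "carry the identity along" idea can be sketched with an augmented block (A | I) reduced to (I | A⁻¹); pivot selection and the singularity tolerance are our own choices for float arithmetic:

```python
def gauss_jordan_inverse(A):
    """Invert A by reducing the block (A | I) to (I | A^(-1)) with row operations."""
    n = len(A)
    # augment A with the identity matrix on the right
    M = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        assert abs(M[piv][col]) > 1e-12, "singular matrix, no inverse"
        M[col], M[piv] = M[piv], M[col]          # row swap
        p = M[col][col]
        M[col] = [x / p for x in M[col]]         # normalise the pivot row
        for r in range(n):                       # clear the column elsewhere
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                # right-hand block is A^(-1)

print(gauss_jordan_inverse([[2, 1], [5, 3]]))    # ~ [[3.0, -1.0], [-5.0, 2.0]]
```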



Matrix calculation
Systems of linear equations

• Linear functions (with one variable)
  • Linear functions are polynomial functions of at most the first degree:

    f(x) = y = c_0 + c_1·x + c_2·x² + ... + c_n·xⁿ        polynomial function (degree n)

    f(x) = y = c_0 + c_1·x                                polynomial function (degree 1) = linear function

  • By varying x, the function values (y-values) are obtained
  • The graph of a linear function describes a straight line with slope c_1 and intercept c_0:

    y = c_1·x + c_0

  Note: the straight-line equation (e.g. in textbooks) is often equivalently written as y = m·x + b, where m is the slope and b the intercept

  Example: y = 1 + 2x


Matrix calculation
Graph of linear functions

  [Figure: graph of the straight line y = 1 + 2x with slope c_1 = 2 and intercept c_0 = 1, passing through P_1(1, 3) and P_2(-1, -1)]


Matrix calculation
Systems of linear equations

• Linear equations
  • Rearranging a linear function with one variable immediately yields a linear equation with two unknowns:

    y = c_0 + c_1·x        →        1·y - c_1·x = c_0

    Example:  y = 2x + 1        →        1·y - 2·x = 1

  • All points that lie on the straight line solve this linear equation
  • There are infinitely many solutions (ambiguous solution); for a unique solution a second (linearly independent) equation is required
  • The result is a system of (two) linear equations (Lineares Gleichungssystem, LGS)
  • Looking at the system row by row (row picture), the point that satisfies both equations is the unique solution, since it lies on both straight lines
  • Only the intersection point of the two straight lines fulfils this condition


Matrix calculation
Systems of linear equations

Example: soccer field of 1. FC "Steil-nach-Vorn" in the Rheinaue

• Goal:
  • Planning a (rectangular) soccer field
  • The side lengths a and b are sought

  [Sketch: rectangular field with sides a and b on a vacant lot between the Rhine and the stadium]

• Requirements:
  • Perimeter of the sports field, limited by the vacant lot: max. 350 m
  • Aspect ratio: 1 / 1.5 (short side / long side)

• Solution:
  • Linear system of 2 equations:

    2a + 2b = 350        (perimeter condition)
    1.5a - b = 0         (aspect ratio)


Matrix calculation
Row picture: graphic solution of the LGS

  1st equation: 2a + 2b = 350  →  b = -a + 175      (through P_11(0, 175) and P_12(300, -125))
  2nd equation: 1.5a - b = 0   →  b = 1.5a          (through P_21(-100, -150) and P_22(100, 150))

  The intersection point P_S(70, 105) lies on both straight lines and thus solves both equations

  → unique solution: a = 70; b = 105
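The graphic solution can be reproduced numerically; a minimal sketch that solves the 2 x 2 system via the determinant (2 x 2 inverse) formula (helper name is our own):

```python
def solve2(A, b):
    """Solve a (2 x 2) system A·x = b via the inverse from the determinant formula."""
    (a11, a12), (a21, a22) = A
    d = a11 * a22 - a21 * a12
    assert d != 0, "coefficient matrix is singular"
    # x = A^(-1)·b with A^(-1) = (1/d) * [[a22, -a12], [-a21, a11]]
    return [(a22 * b[0] - a12 * b[1]) / d,
            (-a21 * b[0] + a11 * b[1]) / d]

# 2a + 2b = 350 (perimeter), 1.5a - b = 0 (aspect ratio)
A = [[2.0, 2.0],
     [1.5, -1.0]]
b = [350.0, 0.0]
print(solve2(A, b))   # [70.0, 105.0] -- side lengths a = 70 m, b = 105 m
```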



Matrix calculation
Linear dependency

  1st equation: y = 2x + 1  →  y - 2x = 1       (through P_11(1, 3) and P_12(-1, -1))
  2nd equation: y = 2x - 4  →  y - 2x = -4      (through P_21(3, 2) and P_22(2, 0))

  Direction vectors of the two straight lines:

    a_1 = P_11 - P_12 = | 1 | - | -1 | = | 2 |
                        | 3 |   | -1 |   | 4 |

    a_2 = P_21 - P_22 = | 3 | - | 2 | = | 1 |
                        | 2 |   | 0 |   | 2 |

    a_1 = 2 · a_2  →  the vectors are linearly dependent! (the two straight lines are parallel)


Matrix calculation
Systems of linear equations

• Systems of linear equations (general)
  • Linear equations can contain more than two unknowns, e.g.

    a_1·x_1 + a_2·x_2 + a_3·x_3 = b        (plane equation)

  • ... or, in general, equations that contain only linear terms, i.e. linear combinations of the unknowns x_i:

                                            n
    a_1·x_1 + a_2·x_2 + ... + a_n·x_n = b = Σ a_i·x_i
                                           i=1

  • Many problems, e.g. in engineering and the natural sciences, can be formulated as linear equations and solved by setting up linear systems of equations:

    a_11·x_1 + a_12·x_2 + ... + a_1n·x_n = b_1
    a_21·x_1 + a_22·x_2 + ... + a_2n·x_n = b_2
       :          :                :        :
    a_m1·x_1 + a_m2·x_2 + ... + a_mn·x_n = b_m

  • To solve a system of equations with n unknowns, (at least) n equations are required; for the methods below, the number of equations must equal the number of unknowns → square system (square coefficient matrix)!
Matrix calculation
Systems of linear equations

• Systems of linear equations (general)
  • In addition to the graphic solution (by rows or columns), there are (more efficient) methods for solving linear systems of equations by formulating them in matrix notation
  • For this purpose, the system of equations is split into a coefficient matrix A, the solution vector x and the vector of the absolute terms b, e.g.:

    a_11·x_1 + a_12·x_2 + ... + a_1n·x_n = b_1
    a_21·x_1 + a_22·x_2 + ... + a_2n·x_n = b_2
       :          :                :        :
    a_m1·x_1 + a_m2·x_2 + ... + a_mn·x_n = b_m

          | a_11  a_12  ...  a_1n |          | x_1 |          | b_1 |
    A  =  | a_21  a_22  ...  a_2n |;   x  =  | x_2 |;   b  =  | b_2 |
          |  :     :          :   |          |  :  |          |  :  |
          | a_m1  a_m2  ...  a_mn |          | x_n |          | b_m |

  • System of equations in matrix notation:

      A  ·  x  =  b        b = 0 → homogeneous LGS
    (m,n) (n,1) (m,1)      b ≠ 0 → inhomogeneous LGS


Matrix calculation
Systems of linear equations

• Solution of linear systems of equations
  • To solve the system of equations

    A·x = b

  • the solution vector is isolated by multiplying on the left with the inverse A⁻¹:

    A⁻¹·A·x = A⁻¹·b        with A⁻¹·A = I
        I·x = A⁻¹·b
          x = A⁻¹·b

  • Prerequisite: the inverse A⁻¹ must exist, i.e.:
    • the coefficient matrix A must be regular, i.e. have full rank: rg(A) = n
    • if the coefficient matrix A is singular ( rg(A) < n ), then det(A) = 0 (→ there are linear dependencies between the column and row vectors) and the system of equations has no unique solution

  • If the LGS has a solution, it is called consistent, otherwise inconsistent


Matrix calculation
Solving systems of linear equations

• Different methods can be used to solve linear systems of equations
  • Explicit calculation of the inverse of the coefficient matrix, A⁻¹, e.g. with the help of the determinant formula (see slide 46ff), and left-sided multiplication of the LGS (→ example: sports field)
  • Gauss-Jordan method, i.e. elementary transformations of the coefficient matrix while carrying b along in parallel, so that the identity matrix arises (on the left) (see slide 48):

    A·x = b   --(Gauss-Jordan method)-->   A⁻¹·A·x = A⁻¹·b ,  i.e.  I·x = A⁻¹·b

  • Gaussian elimination method (as a preliminary stage of the Gauss-Jordan algorithm)


Matrix calculation
Solving systems of linear equations

• Gaussian elimination method
  • To solve the linear system of equations

    A·x = b

    for example

    a_11·x_1 + a_12·x_2 + a_13·x_3 = b_1
    a_21·x_1 + a_22·x_2 + a_23·x_3 = b_2        (→ example: sports field)
    a_31·x_1 + a_32·x_2 + a_33·x_3 = b_3

  • the rows are brought into echelon form by elementary transformations, so that each row contains one unknown less, i.e. one unknown per row is eliminated:

    a~_11·x_1 + a~_12·x_2 + a~_13·x_3 = b~_1
                a~_22·x_2 + a~_23·x_3 = b~_2
                            a~_33·x_3 = b~_3

  • Then, by back substitution starting from the last row, each further row, and thus each further unknown, can be calculated
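Both stages, forward elimination to echelon form and back substitution, can be sketched compactly (pivot selection and the singularity tolerance are our own choices for float arithmetic):

```python
def gauss_solve(A, b):
    """Gaussian elimination: reduce (A | b) to echelon form, then back-substitute."""
    n = len(A)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]  # augmented (A | b)
    for col in range(n):                               # forward elimination
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        assert abs(M[piv][col]) > 1e-12, "singular coefficient matrix"
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# the sports-field system again: 2a + 2b = 350, 1.5a - b = 0
print(gauss_solve([[2.0, 2.0], [1.5, -1.0]], [350.0, 0.0]))   # [70.0, 105.0]
```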

