MATRIX CALCULATION
Matrices
Definitions
• Example 2.1
• Transposed matrix
  • Swap the rows and columns of the original matrix
  • Indicated by a prime ( ′ ) after the matrix name

  $$\underset{(n,m)}{A'} = (a_{ji}) = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix}$$
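As a quick numerical illustration of transposition (values chosen freely, not from the slides), a minimal NumPy sketch:

```python
import numpy as np

# Transposing swaps rows and columns: A is (2, 3), so A' is (3, 2).
A = np.array([[1, 2, 3],
              [4, 5, 6]])
A_t = A.T

print(A_t.shape)   # (3, 2)
print(A_t[0, 1])   # element a'_12 = a_21 = 4
```

Transposing twice returns the original matrix, i.e. (A′)′ = A.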
• Example 2.2
  A column vector and its transposed row vector:

  $$\underset{(m,1)}{A} = a = \begin{pmatrix} a_1 \\ \vdots \\ a_m \end{pmatrix}, \qquad \underset{(1,m)}{A'} = a' = \begin{pmatrix} a_1 & \cdots & a_m \end{pmatrix}$$
• Diagonal matrix
  • Square matrix whose elements off the main diagonal are zero

  $$\underset{(n,n)}{D} = \operatorname{diag}(d_{11}, d_{22}, \ldots, d_{nn}) = \begin{pmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & d_{nn} \end{pmatrix}, \qquad d_{ij} = 0 \text{ for all } i \neq j$$
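A short NumPy sketch of building a diagonal matrix and checking the defining property d_ij = 0 for i ≠ j (the diagonal values are illustrative):

```python
import numpy as np

# diag(d11, d22, d33): only the main diagonal is populated.
D = np.diag([2.0, 5.0, 9.0])

# All off-diagonal entries must be zero:
off_diag = D[~np.eye(3, dtype=bool)]
print(off_diag.max())   # 0.0
```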
• Triangular matrix
  • Square matrix whose elements above or below the main diagonal are all zero

  Upper triangular:
  $$\underset{(n,n)}{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ & a_{22} & \cdots & a_{2n} \\ & & \ddots & \vdots \\ 0 & & & a_{nn} \end{pmatrix}$$

  Lower triangular:
  $$\underset{(m,m)}{A} = \begin{pmatrix} a_{11} & & & 0 \\ a_{21} & a_{22} & & \\ \vdots & \vdots & \ddots & \\ a_{m1} & a_{m2} & \cdots & a_{mm} \end{pmatrix}$$
• Ones vector e
  • All elements a_i = 1 with i = 1, ..., m

  $$\underset{(m,1)}{e} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}$$
Univ.-Prof. Dr.-Ing. J. Blankenbach Applied Statistics - WS 20/21 37
Matrices
Unit vector and unit matrix
  $$\underset{(n,n)}{I} = (e_1, e_2, \ldots, e_n) = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & 1 \end{pmatrix}, \qquad a_{ij} = 0 \text{ for all } i \neq j, \quad a_{ij} = 1 \text{ for all } i = j$$
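The defining property of the identity matrix, IA = AI = A, can be verified numerically; a minimal sketch with an arbitrary 3×3 example matrix:

```python
import numpy as np

# The identity matrix has the unit vectors e_1, ..., e_n as columns;
# multiplying by it leaves any conformable matrix unchanged.
I = np.eye(3)
A = np.array([[3., 2., 1.],
              [1., 0., 2.],
              [4., 1., 3.]])

print(np.allclose(I @ A, A))   # True
print(np.allclose(A @ I, A))   # True
```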
• Matrix addition
  $$\underset{(m,n)}{A} + \underset{(m,n)}{B} = \underset{(m,n)}{C}, \qquad c_{ij} = a_{ij} + b_{ij} \quad \text{(for all } i \text{ and } j\text{)}$$
  • Example 2.7
  • Commutative law: A + B = B + A = C
  • Associative law: A + (B + C) = (A + B) + C = A + B + C
• Matrix multiplication
  $$\underset{(m,n)}{C} = \underset{(m,k)}{A}\,\underset{(k,n)}{B} \qquad \text{Note: } AB \neq BA \text{ (not commutative)}$$
  $$c_{ij} = \sum_{r=1}^{k} a_{ir} b_{rj} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{ik} b_{kj}$$

  Schematically, row i of A combined with column j of B yields element c_ij:

  $$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ \vdots & & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{ik} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mk} \end{pmatrix} \begin{pmatrix} b_{11} & \cdots & b_{1j} & \cdots & b_{1n} \\ b_{21} & \cdots & b_{2j} & \cdots & b_{2n} \\ \vdots & & \vdots & & \vdots \\ b_{k1} & \cdots & b_{kj} & \cdots & b_{kn} \end{pmatrix} = \begin{pmatrix} c_{11} & \cdots & c_{1j} & \cdots & c_{1n} \\ \vdots & & \vdots & & \vdots \\ c_{i1} & \cdots & c_{ij} = \sum_{r=1}^{k} a_{ir} b_{rj} & \cdots & c_{in} \\ \vdots & & \vdots & & \vdots \\ c_{m1} & \cdots & c_{mj} & \cdots & c_{mn} \end{pmatrix}$$
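The element-wise definition of c_ij can be sketched directly as a triple loop and checked against NumPy's built-in product; the matrix values are illustrative, and the second pair shows non-commutativity:

```python
import numpy as np

A = np.array([[1., 2., 4.],
              [5., -1., 0.]])   # (2, 3)
B = np.array([[2., 1.],
              [1., 0.],
              [7., 4.]])       # (3, 2)

# c_ij = sum over r of a_ir * b_rj
m, k = A.shape
_, n = B.shape
C = np.zeros((m, n))
for i in range(m):
    for j in range(n):
        for r in range(k):
            C[i, j] += A[i, r] * B[r, j]

print(np.allclose(C, A @ B))   # True

# AB != BA in general (matrix multiplication is not commutative):
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
print(np.allclose(X @ Y, Y @ X))   # False
```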
• Example 2.9a
  • Falk's scheme
    • Calculation scheme and visual aid for matrix multiplication
    • Compute the product $B\,e$ with $\underset{(3,2)}{B} = \begin{pmatrix} 2 & 1 \\ 1 & 0 \\ 7 & 4 \end{pmatrix}$ and $\underset{(2,1)}{e}$
• Example 2.9b
  • With $\underset{(2,3)}{A} = \begin{pmatrix} 1 & 2 & 4 \\ 5 & -1 & 0 \end{pmatrix}$, compute $\underset{(2,2)}{C}$ and the product $C\,e$ with $\underset{(2,1)}{e}$
• Scalar product of two vectors a and b with m elements each
  • In terms of matrix multiplication, the first vector must be transposed for this:
  $$a \cdot b = a'\,b = \begin{pmatrix} a_1 & a_2 & \cdots & a_m \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix} = \sum_{i=1}^{m} a_i b_i$$
• Example 2.13
• Vector norm
  • A norm is generally used to describe the size of objects
  • The Euclidean norm (2-norm) of a vector corresponds to the length of the vector (and thus its magnitude → see slide 30) and can be calculated using the scalar product:
  $$|a| = \sqrt{a \cdot a} = \sqrt{a'\,a} = \sqrt{\sum_{i=1}^{m} a_i^2}$$
  • Example 2.15
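The identity |a| = sqrt(a′a) can be checked numerically; a minimal sketch using the classic 3-4-5 values:

```python
import numpy as np

a = np.array([3.0, 4.0])

# Euclidean norm via the scalar product: |a| = sqrt(a' a)
norm_from_dot = np.sqrt(a @ a)
print(norm_from_dot)          # 5.0
print(np.linalg.norm(a))      # same value via the library routine
```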
• The Euclidean norm can be used for the geometric interpretation of the scalar product:
  $$\cos \varphi = \frac{a \cdot b}{|a|\,|b|} = \frac{a'\,b}{\sqrt{a'\,a}\,\sqrt{b'\,b}}$$
  • Example 2.16
• Orthogonal vectors
  • From the calculation of the intersection angle it follows that two vectors are orthogonal to one another if their scalar product is equal to zero
  • Example 2.16
• Orthonormal vectors
  • Orthogonal vectors whose norm (or length) equals 1 are called orthonormal vectors
• Linear dependence
  • If there are coefficients $c_i$, not all zero, for which the linear combination $\sum_{i=1}^{m} c_i a_i$ yields the zero vector 0 instead of a vector b, the vectors $a_i$ are called linearly dependent ...
  • ... otherwise (the zero vector is obtained only with all $c_i = 0$, i.e. vector b ≠ zero vector) the vectors $a_i$ are linearly independent
• Rank of a matrix
  • The maximum number of linearly independent vectors of a vector system is called the rank of the vector system
  • For an (m × n) matrix, viewed as a system of m row and n column vectors:
    $$\operatorname{rg}(A) = r \le \min(m, n)$$
  • The rank is at most equal to the smaller of the two numbers m and n
  • If r = m: the matrix has full row rank
  • If r = n: the matrix has full column rank
  • If r < min(m, n), there is a rank defect d: $d = \min(m - r,\; n - r)$
  • The row rank and column rank of every matrix are equal
  • Example 2.23
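The rank and rank defect can be sketched numerically; the example matrix below (chosen for illustration) has a second row that is twice the first, so its rows are linearly dependent:

```python
import numpy as np

# (2, 3) matrix: rg(A) <= min(2, 3) = 2
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])   # row 2 = 2 * row 1 -> linearly dependent

r = np.linalg.matrix_rank(A)
print(r)   # 1, so the rank defect is d = min(2 - 1, 3 - 1) = 1
```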
• Regular and singular matrices
  • For a square matrix of order m: rg(A) = r ≤ m
  • A square matrix with full rank (rg(A) = m) is called regular
  • If a square matrix does not have full rank (rg(A) < m), it is singular
• Elementary transformations
  • Multiplying an (m × n) matrix on the left by elementary (m × m) matrices $E_i$ performs elementary row transformations of the matrix
  • Examples of elementary matrices: $E_1 A$, $E_2 A$, $E_3 A$
  • Elementary transformations do not change the rank of a matrix and are therefore used, among other things, for its determination
  • Example 2.25
• Trace of a matrix
  • The trace of a square (n × n) matrix is equal to the sum of its main diagonal elements:
  $$\operatorname{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii}$$
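A one-line numerical check of the trace definition, using the 3×3 example matrix from the determinant example further below:

```python
import numpy as np

A = np.array([[3., 2., 1.],
              [1., 0., 2.],
              [4., 1., 3.]])

# tr(A) = a_11 + a_22 + a_33 = 3 + 0 + 3
print(np.trace(A))          # 6.0
print(A.diagonal().sum())   # same: sum of the main diagonal elements
```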
• Determinant (general)
  • Every square matrix A can be assigned a unique scalar, referred to as det A
  • Determinants are an important tool in linear algebra (see slide 46)
• Example:
  $$\underset{(3,3)}{A} = \begin{pmatrix} 3 & 2 & 1 \\ 1 & 0 & 2 \\ 4 & 1 & 3 \end{pmatrix}$$
  Rule of Sarrus:
  $$\det A = 3 \cdot 0 \cdot 3 + 2 \cdot 2 \cdot 4 + 1 \cdot 1 \cdot 1 - 1 \cdot 0 \cdot 4 - 3 \cdot 2 \cdot 1 - 2 \cdot 1 \cdot 3 = 5$$
• Geometric interpretation: if two vectors are linearly dependent, two points of the object lie on a straight line, which is why the area or the volume becomes zero
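The Sarrus computation for the slide's 3×3 example can be reproduced term by term and compared against the library determinant:

```python
import numpy as np

A = np.array([[3., 2., 1.],
              [1., 0., 2.],
              [4., 1., 3.]])

# Rule of Sarrus: the three "down" diagonals minus the three "up" diagonals
sarrus = (A[0,0]*A[1,1]*A[2,2] + A[0,1]*A[1,2]*A[2,0] + A[0,2]*A[1,0]*A[2,1]
          - A[0,2]*A[1,1]*A[2,0] - A[0,0]*A[1,2]*A[2,1] - A[0,1]*A[1,0]*A[2,2])

print(sarrus)            # 5.0
print(np.linalg.det(A))  # 5.0 (up to floating-point rounding)
```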
• Laplace's expansion theorem: the determinant is the sum of the products of all elements of the k-th row (or column) with the associated cofactors $c_{kj}$:
  $$\det A = \sum_{j=1}^{n} a_{kj}\, c_{kj}$$
• The cofactor $c_{kj}$ results from attaching a positive or negative sign to the minor $m_{kj}$: $c_{kj} = (-1)^{k+j}\, m_{kj}$
  $$\underset{(3,3)}{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, \qquad \text{sign pattern of the cofactors } c_{kj}\text{: } \begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}$$
  Expansion along the first row:
  $$\det A = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$
• Example 2.18
  $$\underset{(n,n)}{A} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ & a_{22} & \cdots & a_{2n} \\ & & \ddots & \vdots \\ 0 & & & a_{nn} \end{pmatrix}$$
• The determinant of a triangular matrix (or diagonal matrix) is equal to the product of its main diagonal elements:
  $$\det A = a_{11} \cdot a_{22} \cdot a_{33} \cdots a_{nn}$$
Caution:
Elementary transformations can change the determinant of a matrix. In particular, multiplying a row by a factor λ changes the determinant by the factor λ. This must be taken into account when the determinant is calculated after, or by means of, elementary transformations.
• Inverse of a matrix
  • Requirements:
    • Square matrix: dimension (n × n)
    • Regular matrix (no rank defect): det A ≠ 0
  • Adjugate (transposed matrix of cofactors):
  $$\operatorname{adj} A = C' = \begin{pmatrix} c_{11} & c_{21} & \cdots & c_{n1} \\ c_{12} & c_{22} & \cdots & c_{n2} \\ \vdots & \vdots & & \vdots \\ c_{1n} & c_{2n} & \cdots & c_{nn} \end{pmatrix}$$
  $$A^{-1} = \frac{1}{\det A} \cdot \operatorname{adj} A, \qquad a_{ij}^{(-1)} = \frac{c_{ji}}{\det A}$$
  • Example 2.20
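The cofactor/adjugate route to the inverse can be sketched for a 3×3 matrix and cross-checked against the library inverse (the matrix values are the determinant example from above):

```python
import numpy as np

A = np.array([[3., 2., 1.],
              [1., 0., 2.],
              [4., 1., 3.]])

# Matrix of cofactors c_kj = (-1)^(k+j) * minor m_kj
n = A.shape[0]
C = np.zeros_like(A)
for k in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, k, axis=0), j, axis=1)
        C[k, j] = (-1) ** (k + j) * np.linalg.det(minor)

# adj A = C' (transposed cofactor matrix); A^{-1} = adj A / det A
A_inv = C.T / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```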
• Inverse via the Gauss-Jordan procedure
  • The matrix A is first brought to upper triangular form by elementary row transformations:
  $$\underset{(n,n)}{A} \to \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ & a_{22} & \cdots & a_{2n} \\ & & \ddots & \vdots \\ 0 & & & a_{nn} \end{pmatrix}$$
  • By carrying along and reducing the identity matrix I in parallel, you get the inverse $A^{-1}$ at the end:
  $$(A \mid I) \;\xrightarrow{\text{Gauss-Jordan procedure}}\; (I \mid A^{-1})$$
  • Example 2.20
• Linear function: $y = c_1 x + c_0$
  Example: $y = 1 + 2x$ with slope $c_1 = 2$ and intercept $c_0 = 1$
  [Figure: straight line y = 1 + 2x through the points P₁(1, 3) and P₂(−1, −1)]
• Linear equations
  • Rearranging a linear function with one variable immediately yields a linear equation with two unknowns:
    $$y = c_0 + c_1 x \quad \Leftrightarrow \quad -c_1 x + 1\,y = c_0$$
    Example: $y = 2x + 1 \quad \Leftrightarrow \quad -2x + 1\,y = 1$
  • All points that lie on the straight line solve this linear equation
  • There are infinitely many solutions (ambiguous solution); for a unique solution a second (linearly independent) equation is required
  • Looking at the system row by row (row picture), the point that satisfies both equations is the unique solution, since it lies on both straight lines
• Example: linear system of 2 equations:
  1st equation: $2a + 2b = 350 \;\Rightarrow\; b = -a + 175$
  2nd equation: $1.5a - b = 0 \;\Rightarrow\; b = 1.5a$
  [Figure: both straight lines in the (a, b) plane; line 1 through P₁₁(0, 175) and P₁₂(300, −125), line 2 through P₂₁(−100, −150) and P₂₂(100, 150)]
  • The point of intersection lies on both straight lines and is therefore the unique solution: a = 70, b = 105
• Example: vectors between the given points:
  $$a_1 = P_{11} - P_{12} = \begin{pmatrix} 1 \\ 3 \end{pmatrix} - \begin{pmatrix} -1 \\ -1 \end{pmatrix} = \begin{pmatrix} 2 \\ 4 \end{pmatrix}, \qquad a_2 = P_{21} - P_{22} = \begin{pmatrix} 3 \\ 2 \end{pmatrix} - \begin{pmatrix} 2 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$
  $$a_1 = 2 \cdot a_2 \;\Rightarrow\; \text{the vectors } a_1 \text{ and } a_2 \text{ are linearly dependent!}$$
  [Figure: both vectors drawn in the x-y plane, pointing in the same direction]
• Many problems, e.g. in engineering and the natural sciences, can be formulated as linear equations and solved by setting up linear systems of equations:
  $$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m \end{aligned}$$
• For this purpose, the system of equations is written with a coefficient matrix A, the vector of unknowns x, and the right-hand side b:
  $$A\,x = b$$
• The solution vector is isolated by multiplying on the left with the inverse $A^{-1}$:
  $$A^{-1} A\,x = A^{-1} b \quad \text{with} \quad A^{-1} A = I$$
  $$I\,x = A^{-1} b$$
  $$x = A^{-1} b$$
• If the coefficient matrix A is singular (rg(A) < n, i.e. det(A) = 0; there are linear dependencies between the column and row vectors), the system of equations has no unique solution
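The solution x = A⁻¹b can be reproduced numerically for the 2-equation example above (2a + 2b = 350, 1.5a − b = 0); `np.linalg.solve` factorizes A rather than forming the inverse explicitly, which is the numerically preferred route:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system
A = np.array([[2.0, 2.0],
              [1.5, -1.0]])
b = np.array([350.0, 0.0])

x = np.linalg.solve(A, b)
print(x)                       # [ 70. 105.] -> a = 70, b = 105
print(np.allclose(A @ x, b))   # True: the solution satisfies both equations
```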
• Solution methods:
  • Explicit calculation of the inverse of the coefficient matrix $A^{-1}$, e.g. with the help of the determinant formula (→ see slide 46ff), and left-sided multiplication of the system (→ example: sports field)
  • Gauss-Jordan method, i.e. elementary transformations of the coefficient matrix with parallel entrainment of b, so that the identity matrix arises (on the left) (→ see slide 48):
    $$A\,x = b \;\xrightarrow{\text{Gauss-Jordan procedure}}\; A^{-1} A\,x = A^{-1} b \;\Rightarrow\; I\,x = A^{-1} b$$
  • Gaussian elimination method (as a preliminary stage of the Gauss-Jordan algorithm): $A\,x = b$, for example (→ example: sports field):
    $$\begin{aligned} a_{11} x_1 + a_{12} x_2 + a_{13} x_3 &= b_1 \\ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 &= b_2 \\ a_{31} x_1 + a_{32} x_2 + a_{33} x_3 &= b_3 \end{aligned}$$
    • Using elementary transformations, the rows are brought to step (echelon) form, so that each row contains at least one unknown fewer, i.e. one unknown is eliminated per row:
    $$\begin{aligned} \tilde a_{11} x_1 + \tilde a_{12} x_2 + \tilde a_{13} x_3 &= \tilde b_1 \\ \tilde a_{22} x_2 + \tilde a_{23} x_3 &= \tilde b_2 \\ \tilde a_{33} x_3 &= \tilde b_3 \end{aligned}$$
    • Then, by inserting backwards (back substitution) starting from the last row, each further row, and thus each further unknown, can be calculated
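The two steps above, forward elimination to step form followed by back substitution, can be sketched compactly; this is a minimal version without pivoting, assuming nonzero pivots, with illustrative values:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination (no pivoting; nonzero pivots assumed)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: bring A to step (echelon) form,
    # eliminating one unknown per row.
    for col in range(n - 1):
        for row in range(col + 1, n):
            factor = A[row, col] / A[col, col]
            A[row, col:] -= factor * A[col, col:]
            b[row] -= factor * b[col]
    # Back substitution, starting from the last row.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3., 2., 1.],
              [1., 0., 2.],
              [4., 1., 3.]])
b = np.array([1., 2., 3.])

x = gauss_solve(A, b)
print(np.allclose(A @ x, b))   # True
```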