Chapter 1
Operations with Matrices
Matrix:
$$A=[a_{ij}]=\begin{bmatrix}a_{11}&a_{12}&a_{13}&\cdots&a_{1n}\\a_{21}&a_{22}&a_{23}&\cdots&a_{2n}\\a_{31}&a_{32}&a_{33}&\cdots&a_{3n}\\\vdots&\vdots&\vdots&&\vdots\\a_{m1}&a_{m2}&a_{m3}&\cdots&a_{mn}\end{bmatrix}\in M_{m\times n}$$
Square matrix: m = n
Equal matrices: two matrices are equal if they have the same size
(m × n) and the entries in corresponding positions are equal.
For example, if
$$A=\begin{bmatrix}a&b\\c&d\end{bmatrix}=B=\begin{bmatrix}1&2\\3&4\end{bmatrix},$$
then $a=1$, $b=2$, $c=3$, and $d=4$.
Matrix addition:
If $A=[a_{ij}]_{m\times n}$ and $B=[b_{ij}]_{m\times n}$,
then $A+B=[a_{ij}]_{m\times n}+[b_{ij}]_{m\times n}=[a_{ij}+b_{ij}]_{m\times n}=[c_{ij}]_{m\times n}=C$
Ex 2: Matrix addition
$$\begin{bmatrix}-1&2\\0&1\end{bmatrix}+\begin{bmatrix}1&3\\-1&2\end{bmatrix}=\begin{bmatrix}-1+1&2+3\\0-1&1+2\end{bmatrix}=\begin{bmatrix}0&5\\-1&3\end{bmatrix}$$
$$\begin{bmatrix}1\\-3\\-2\end{bmatrix}+\begin{bmatrix}-1\\3\\2\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$$
Scalar multiplication:
If $A=[a_{ij}]_{m\times n}$ and $c$ is a constant scalar,
then $cA=[ca_{ij}]_{m\times n}$
Matrix subtraction:
$$A-B=A+(-1)B$$
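The two operations above can be checked quickly with NumPy; the matrices here are chosen only for illustration.

```python
import numpy as np

# Scalar multiplication cA = [c*a_ij] and subtraction A - B = A + (-1)B
# on small example matrices (chosen for this sketch).
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
c = 3

print(c * A)         # [[ 3  6] [ 9 12]]  -- each entry scaled by c
print(A - B)         # [[1 1] [2 4]]
print(A + (-1) * B)  # same as A - B
```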
Matrix multiplication:
If $A=[a_{ij}]_{m\times n}$ and $B=[b_{ij}]_{n\times p}$,
then $AB=[a_{ij}]_{m\times n}[b_{ij}]_{n\times p}=[c_{ij}]_{m\times p}=C$.
The product is defined only when the number of columns of $A$ equals the number of rows of $B$; in that case $A$ and $B$ are multipliable, and the size of $C=AB$ is $m\times p$, where
$$c_{ij}=\sum_{k=1}^{n}a_{ik}b_{kj}=a_{i1}b_{1j}+a_{i2}b_{2j}+\cdots+a_{in}b_{nj}$$
$$\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\\vdots&\vdots&&\vdots\\a_{i1}&a_{i2}&\cdots&a_{in}\\\vdots&\vdots&&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{bmatrix}\begin{bmatrix}b_{11}&\cdots&b_{1j}&\cdots&b_{1p}\\b_{21}&\cdots&b_{2j}&\cdots&b_{2p}\\\vdots&&\vdots&&\vdots\\b_{n1}&\cdots&b_{nj}&\cdots&b_{np}\end{bmatrix}=\begin{bmatrix}c_{11}&c_{12}&\cdots&c_{1j}&\cdots&c_{1p}\\\vdots&\vdots&&\vdots&&\vdots\\c_{i1}&c_{i2}&\cdots&c_{ij}&\cdots&c_{ip}\\\vdots&\vdots&&\vdots&&\vdots\\c_{m1}&c_{m2}&\cdots&c_{mj}&\cdots&c_{mp}\end{bmatrix}$$
※ The entry $c_{ij}$ is obtained by calculating the sum of the entry-by-entry
products of the $i$-th row of $A$ and the $j$-th column of $B$.
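The entry formula can be written directly as a triple loop; this is a minimal sketch in plain Python (no libraries), with example matrices chosen for illustration.

```python
# c_ij = sum over k of a_ik * b_kj, implemented with plain Python lists.
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    # The product is defined only when columns of A == rows of B.
    assert all(len(row) == n for row in A), "sizes are not conformable"
    C = [[0] * p for _ in range(m)]
    for i in range(m):            # i-th row of A
        for j in range(p):        # j-th column of B
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

# Small 2x2 check:
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```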
Ex 4: Find AB
$$A=\begin{bmatrix}-1&3\\4&-2\\5&0\end{bmatrix}_{3\times 2}\qquad B=\begin{bmatrix}-3&2\\-4&1\end{bmatrix}_{2\times 2}$$
Sol:
$$AB=\begin{bmatrix}(-1)(-3)+(3)(-4)&(-1)(2)+(3)(1)\\(4)(-3)+(-2)(-4)&(4)(2)+(-2)(1)\\(5)(-3)+(0)(-4)&(5)(2)+(0)(1)\end{bmatrix}=\begin{bmatrix}-9&1\\-4&6\\-15&10\end{bmatrix}_{3\times 2}$$
Note: (1) BA is not defined here, since B is 2 × 2 and A is 3 × 2 (the column count of B does not match the row count of A).
(2) Even when BA is defined, it can happen that AB ≠ BA.
Matrix form of a system of linear equations in n variables:
$$\begin{cases}a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n=b_1\\a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n=b_2\\\qquad\vdots\\a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n=b_m\end{cases}\qquad(m\text{ linear equations})$$
$$\Longleftrightarrow\quad\begin{bmatrix}a_{11}&a_{12}&\cdots&a_{1n}\\a_{21}&a_{22}&\cdots&a_{2n}\\\vdots&\vdots&&\vdots\\a_{m1}&a_{m2}&\cdots&a_{mn}\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=\begin{bmatrix}b_1\\b_2\\\vdots\\b_m\end{bmatrix}\qquad(\text{a single matrix equation})$$
$$\underset{m\times n}{A}\;\underset{n\times 1}{x}=\underset{m\times 1}{b}$$
$Ax$ is a linear combination of the column vectors of matrix $A$:
If $A=[\,\mathbf{c}_1\ \mathbf{c}_2\ \cdots\ \mathbf{c}_n\,]$, then
$$Ax=[\,\mathbf{c}_1\ \mathbf{c}_2\ \cdots\ \mathbf{c}_n\,]\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}=x_1\mathbf{c}_1+x_2\mathbf{c}_2+\cdots+x_n\mathbf{c}_n$$
That is, $Ax$ can be viewed as a linear combination of the column vectors of $A$ with coefficients $x_1, x_2,\ldots, x_n$. You can derive the same result by performing the matrix multiplication directly, with $A$ expressed in terms of its column vectors.
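A quick numerical check of this view, with a small matrix chosen for illustration:

```python
import numpy as np

# Ax equals the linear combination x1*c1 + x2*c2 + ... of the columns of A.
A = np.array([[1, 2, 0], [0, 1, 3]])
x = np.array([2, -1, 4])

combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))  # x1*c1 + x2*c2 + x3*c3
print(A @ x)   # the matrix-vector product
print(combo)   # the same vector, built column by column
```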
Properties of Matrix Operations
Three basic matrix operations:
(1) matrix addition
(2) scalar multiplication
(3) matrix multiplication
Zero matrix:
$$0_{m\times n}=\begin{bmatrix}0&0&\cdots&0\\0&0&\cdots&0\\\vdots&\vdots&&\vdots\\0&0&\cdots&0\end{bmatrix}_{m\times n}$$
Identity matrix of order n:
$$I_n=\begin{bmatrix}1&0&\cdots&0\\0&1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&1\end{bmatrix}_{n\times n}$$
Properties of matrix addition:
Matrix addition is possible only between two matrices of the same size.
The operation is done simply by adding the corresponding entries, e.g.:
$$\begin{bmatrix}1&3\\4&7\end{bmatrix}+\begin{bmatrix}6&2\\3&1\end{bmatrix}=\begin{bmatrix}7&5\\7&8\end{bmatrix}$$
Properties of scalar multiplication:
Each entry is multiplied by the scalar, e.g.:
$$5\cdot\begin{bmatrix}1&3\\4&7\end{bmatrix}=\begin{bmatrix}5&15\\20&35\end{bmatrix}$$
Ex 3: Matrix multiplication is associative
Calculate (AB)C and A(BC) for
$$A=\begin{bmatrix}1&-2\\2&-1\end{bmatrix},\quad B=\begin{bmatrix}1&0&2\\3&-2&1\end{bmatrix},\quad\text{and}\quad C=\begin{bmatrix}-1&0\\3&1\\2&4\end{bmatrix}.$$
Sol:
$$(AB)C=\left(\begin{bmatrix}1&-2\\2&-1\end{bmatrix}\begin{bmatrix}1&0&2\\3&-2&1\end{bmatrix}\right)\begin{bmatrix}-1&0\\3&1\\2&4\end{bmatrix}=\begin{bmatrix}-5&4&0\\-1&2&3\end{bmatrix}\begin{bmatrix}-1&0\\3&1\\2&4\end{bmatrix}=\begin{bmatrix}17&4\\13&14\end{bmatrix}$$
$$A(BC)=\begin{bmatrix}1&-2\\2&-1\end{bmatrix}\left(\begin{bmatrix}1&0&2\\3&-2&1\end{bmatrix}\begin{bmatrix}-1&0\\3&1\\2&4\end{bmatrix}\right)=\begin{bmatrix}1&-2\\2&-1\end{bmatrix}\begin{bmatrix}3&8\\-7&2\end{bmatrix}=\begin{bmatrix}17&4\\13&14\end{bmatrix}$$
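The associativity of Ex 3 can be confirmed numerically:

```python
import numpy as np

# Ex 3: both groupings of the triple product give the same matrix.
A = np.array([[1, -2], [2, -1]])
B = np.array([[1, 0, 2], [3, -2, 1]])
C = np.array([[-1, 0], [3, 1], [2, 4]])

print((A @ B) @ C)   # [[17  4] [13 14]]
print(A @ (B @ C))   # same matrix
```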
Transpose of a matrix:
Taking the transpose of a matrix is similar to taking that of a vector:
$$\text{if } A=\begin{bmatrix}1&3&8\\4&7&2\\6&5&0\end{bmatrix},\ \text{then } A^T=\begin{bmatrix}1&4&6\\3&7&5\\8&2&0\end{bmatrix}$$
The diagonal entries of the matrix are unaffected,
but the other entries are interchanged across the diagonal. A matrix which is
the same as its own transpose is called symmetric, and
one which is the negative of its own transpose is called
skew-symmetric.
Ex 8: Find the transpose of each of the following matrices
$$(a)\ A=\begin{bmatrix}2\\8\end{bmatrix}\qquad(b)\ A=\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}\qquad(c)\ A=\begin{bmatrix}0&1\\2&4\\1&-1\end{bmatrix}$$
Sol:
$$(a)\ A=\begin{bmatrix}2\\8\end{bmatrix}\ \Rightarrow\ A^T=\begin{bmatrix}2&8\end{bmatrix}$$
$$(b)\ A=\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}\ \Rightarrow\ A^T=\begin{bmatrix}1&4&7\\2&5&8\\3&6&9\end{bmatrix}$$
$$(c)\ A=\begin{bmatrix}0&1\\2&4\\1&-1\end{bmatrix}\ \Rightarrow\ A^T=\begin{bmatrix}0&2&1\\1&4&-1\end{bmatrix}$$
Properties of transposes:
(1) $(A^T)^T=A$
(2) $(A+B)^T=A^T+B^T$
(3) $(cA)^T=c(A^T)$
(4) $(AB)^T=B^TA^T$
※ Properties (2) and (4) can be generalized to the sum or product of
multiple matrices. For example, $(A+B+C)^T=A^T+B^T+C^T$ and $(ABC)^T=C^TB^TA^T$.
※ Since a real number can also be viewed as a 1 × 1 matrix, the transpose
of a real number is itself; that is, for $a\in\mathbb{R}$, $a^T=a$. In other words, the
transpose operation has no effect on real numbers.
Ex 9: Verify that $(AB)^T$ and $B^TA^T$ are equal
$$A=\begin{bmatrix}2&1&-2\\-1&0&3\\0&-2&1\end{bmatrix}\qquad B=\begin{bmatrix}3&1\\2&-1\\3&0\end{bmatrix}$$
Sol:
$$(AB)^T=\left(\begin{bmatrix}2&1&-2\\-1&0&3\\0&-2&1\end{bmatrix}\begin{bmatrix}3&1\\2&-1\\3&0\end{bmatrix}\right)^T=\begin{bmatrix}2&1\\6&-1\\-1&2\end{bmatrix}^T=\begin{bmatrix}2&6&-1\\1&-1&2\end{bmatrix}$$
$$B^TA^T=\begin{bmatrix}3&2&3\\1&-1&0\end{bmatrix}\begin{bmatrix}2&-1&0\\1&0&-2\\-2&3&1\end{bmatrix}=\begin{bmatrix}2&6&-1\\1&-1&2\end{bmatrix}$$
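Property (4) and Ex 9 can be verified in one line each with NumPy's `.T` attribute:

```python
import numpy as np

# Ex 9: (AB)^T equals B^T A^T.
A = np.array([[2, 1, -2], [-1, 0, 3], [0, -2, 1]])
B = np.array([[3, 1], [2, -1], [3, 0]])

print((A @ B).T)   # [[ 2  6 -1] [ 1 -1  2]]
print(B.T @ A.T)   # same matrix
```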
Ex 4:
Show that AB and BA are not equal for the following matrices.
$$A=\begin{bmatrix}1&3\\2&-1\end{bmatrix}\quad\text{and}\quad B=\begin{bmatrix}2&-1\\0&2\end{bmatrix}$$
Sol:
$$AB=\begin{bmatrix}1&3\\2&-1\end{bmatrix}\begin{bmatrix}2&-1\\0&2\end{bmatrix}=\begin{bmatrix}2&5\\4&-4\end{bmatrix}$$
$$BA=\begin{bmatrix}2&-1\\0&2\end{bmatrix}\begin{bmatrix}1&3\\2&-1\end{bmatrix}=\begin{bmatrix}0&7\\4&-2\end{bmatrix}$$
$$\Rightarrow AB\neq BA$$
Ex 5: An example in which cancellation is not valid
Show that AC = BC even though A ≠ B:
$$A=\begin{bmatrix}1&3\\0&1\end{bmatrix},\quad B=\begin{bmatrix}2&4\\2&3\end{bmatrix},\quad C=\begin{bmatrix}1&-2\\-1&2\end{bmatrix}$$
Sol:
$$AC=\begin{bmatrix}1&3\\0&1\end{bmatrix}\begin{bmatrix}1&-2\\-1&2\end{bmatrix}=\begin{bmatrix}-2&4\\-1&2\end{bmatrix}$$
$$BC=\begin{bmatrix}2&4\\2&3\end{bmatrix}\begin{bmatrix}1&-2\\-1&2\end{bmatrix}=\begin{bmatrix}-2&4\\-1&2\end{bmatrix}$$
So AC = BC although A ≠ B: the common factor C cannot be cancelled.
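Ex 5 checks out numerically; note that this C is singular (its determinant is zero), which is exactly why cancellation fails here.

```python
import numpy as np

# Ex 5: AC = BC does not imply A = B when C is not invertible.
A = np.array([[1, 3], [0, 1]])
B = np.array([[2, 4], [2, 3]])
C = np.array([[1, -2], [-1, 2]])

print(np.array_equal(A @ C, B @ C))  # True  -- the products agree
print(np.array_equal(A, B))          # False -- yet A and B differ
print(np.linalg.det(C))              # 0.0   -- C is singular
```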
The Inverse of a Matrix
Theorem 2.7: The inverse of a matrix is unique
If B and C are both inverses of the matrix A, then B = C.
Pf:
$$AB=I$$
$$C(AB)=CI$$
$$(CA)B=C\qquad\text{(associativity of matrix multiplication; }CA=I\text{ since }C\text{ is an inverse of }A\text{; and }CI=C\text{)}$$
$$IB=C$$
$$B=C$$
Consequently, the inverse of a matrix is unique.
Notes:
(1) The inverse of $A$ is denoted by $A^{-1}$
(2) $AA^{-1}=A^{-1}A=I$
Find the inverse of a matrix by Gauss-Jordan elimination:
$$[A\,|\,I]\ \xrightarrow{\text{Gauss-Jordan elimination}}\ [I\,|\,A^{-1}]$$
Ex: Find the inverse of
$$A=\begin{bmatrix}1&-1&0\\1&0&-1\\-6&2&3\end{bmatrix}$$
Sol (here $A_{i,j}^{(k)}$ adds $k$ times row $i$ to row $j$, and $M_i^{(k)}$ multiplies row $i$ by $k$):
$$[A\,|\,I]=\left[\begin{array}{ccc|ccc}1&-1&0&1&0&0\\1&0&-1&0&1&0\\-6&2&3&0&0&1\end{array}\right]$$
$$\xrightarrow{A_{1,2}^{(-1)}}\left[\begin{array}{ccc|ccc}1&-1&0&1&0&0\\0&1&-1&-1&1&0\\-6&2&3&0&0&1\end{array}\right]\xrightarrow{A_{1,3}^{(6)}}\left[\begin{array}{ccc|ccc}1&-1&0&1&0&0\\0&1&-1&-1&1&0\\0&-4&3&6&0&1\end{array}\right]$$
$$\xrightarrow{A_{2,3}^{(4)}}\left[\begin{array}{ccc|ccc}1&-1&0&1&0&0\\0&1&-1&-1&1&0\\0&0&-1&2&4&1\end{array}\right]\xrightarrow{M_{3}^{(-1)}}\left[\begin{array}{ccc|ccc}1&-1&0&1&0&0\\0&1&-1&-1&1&0\\0&0&1&-2&-4&-1\end{array}\right]$$
$$\xrightarrow{A_{3,2}^{(1)}}\left[\begin{array}{ccc|ccc}1&-1&0&1&0&0\\0&1&0&-3&-3&-1\\0&0&1&-2&-4&-1\end{array}\right]\xrightarrow{A_{2,1}^{(1)}}\left[\begin{array}{ccc|ccc}1&0&0&-2&-3&-1\\0&1&0&-3&-3&-1\\0&0&1&-2&-4&-1\end{array}\right]=[I\,|\,A^{-1}]$$
So the matrix A is invertible, and its inverse is
$$A^{-1}=\begin{bmatrix}-2&-3&-1\\-3&-3&-1\\-2&-4&-1\end{bmatrix}$$
Check it by yourselves:
$$AA^{-1}=A^{-1}A=I$$
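The Gauss-Jordan result can be confirmed with NumPy's built-in inverse:

```python
import numpy as np

# Verifying the inverse computed by Gauss-Jordan elimination.
A = np.array([[1., -1., 0.], [1., 0., -1.], [-6., 2., 3.]])
A_inv = np.linalg.inv(A)

print(A_inv)                               # [[-2 -3 -1] [-3 -3 -1] [-2 -4 -1]]
print(np.allclose(A @ A_inv, np.eye(3)))   # True: A A^{-1} = I
print(np.allclose(A_inv @ A, np.eye(3)))   # True: A^{-1} A = I
```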
Theorem 2.11: Systems of equations with a unique solution
If A is an invertible matrix, then the system of linear equations
Ax = b has a unique solution given by $x=A^{-1}b$.
Pf:
$$Ax=b$$
$$A^{-1}Ax=A^{-1}b\qquad(A\text{ is nonsingular})$$
$$Ix=A^{-1}b$$
$$x=A^{-1}b$$
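As a sketch, with a small invertible matrix chosen for illustration (in practice `np.linalg.solve` is preferred, since it avoids forming the inverse explicitly):

```python
import numpy as np

# x = A^{-1} b for an invertible A, as in Theorem 2.11.
A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])

x = np.linalg.inv(A) @ b
print(x)                        # [1. 3.]
print(np.allclose(A @ x, b))    # True: the solution satisfies Ax = b
```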
$$\begin{bmatrix}a_{11}&0&0\\a_{21}&a_{22}&0\\a_{31}&a_{32}&a_{33}\end{bmatrix}\qquad 3\times 3\text{ lower triangular matrix: all entries above the principal diagonal are zero}$$
(b)
$$A=\begin{bmatrix}1&-3&0\\0&1&3\\2&-10&2\end{bmatrix}\xrightarrow{A_{1,3}^{(-2)}}\begin{bmatrix}1&-3&0\\0&1&3\\0&-4&2\end{bmatrix}\xrightarrow{A_{2,3}^{(4)}}\begin{bmatrix}1&-3&0\\0&1&3\\0&0&14\end{bmatrix}=U$$
$$E_{2,3}^{(4)}E_{1,3}^{(-2)}A=U\qquad(E_{i,j}^{(k)}\text{ is the elementary matrix corresponding to }A_{i,j}^{(k)})$$
$$\Rightarrow\ A=(E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1}U=LU$$
$$L=(E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1}=\begin{bmatrix}1&0&0\\0&1&0\\2&0&1\end{bmatrix}\begin{bmatrix}1&0&0\\0&1&0\\0&-4&1\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\2&-4&1\end{bmatrix}$$
Solving Ax = b with an LU-factorization of A (an important
application of the LU-factorization):
$$Ax=b.\quad\text{If }A=LU,\text{ then }LUx=b.$$
$$\text{Let }y=Ux;\ \text{then }Ly=b.$$
Two steps: first solve $Ly=b$ for $y$ (forward substitution), then solve $Ux=y$ for $x$ (back substitution).
Ex 7: Solving a linear system using LU-factorization
$$\begin{cases}x_1-3x_2=-5\\\phantom{x_1-{}}x_2+3x_3=-1\\2x_1-10x_2+2x_3=-20\end{cases}$$
Sol:
$$A=\begin{bmatrix}1&-3&0\\0&1&3\\2&-10&2\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\2&-4&1\end{bmatrix}\begin{bmatrix}1&-3&0\\0&1&3\\0&0&14\end{bmatrix}=LU$$
(1) Let $y=Ux$, and solve $Ly=b$ (by forward substitution):
$$\begin{bmatrix}1&0&0\\0&1&0\\2&-4&1\end{bmatrix}\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}=\begin{bmatrix}-5\\-1\\-20\end{bmatrix}\ \Rightarrow\ \begin{aligned}y_1&=-5\\y_2&=-1\\y_3&=-20-2y_1+4y_2=-20-2(-5)+4(-1)=-14\end{aligned}$$
(2) Solve the system $Ux=y$ (by back substitution):
$$\begin{bmatrix}1&-3&0\\0&1&3\\0&0&14\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=\begin{bmatrix}-5\\-1\\-14\end{bmatrix}$$
So
$$x_3=-1$$
$$x_2=-1-3x_3=-1-(3)(-1)=2$$
$$x_1=-5+3x_2=-5+3(2)=1$$
Thus, the solution is
$$x=\begin{bmatrix}1\\2\\-1\end{bmatrix}$$
※ Similar to the method using $A^{-1}$ to solve systems, the LU-factorization is useful when you need to solve many systems with the same coefficient matrix. In such a scenario, the LU-factorization is performed once and can then be reused many times.
※ The computational effort for the LU-factorization is almost the same as that for Gaussian elimination, so if you only need to solve one system of linear equations, just use Gaussian elimination plus back substitution, or Gauss-Jordan elimination, directly.
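The two substitution steps of Ex 7 can be sketched directly in code, using the L and U from the text:

```python
import numpy as np

# Ex 7: solve Ax = b via A = LU with hand-written forward/back substitution.
L = np.array([[1., 0., 0.], [0., 1., 0.], [2., -4., 1.]])
U = np.array([[1., -3., 0.], [0., 1., 3.], [0., 0., 14.]])
b = np.array([-5., -1., -20.])
n = len(b)

# Step 1 -- forward substitution: solve Ly = b from the top row down.
y = np.zeros(n)
for i in range(n):
    y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]

# Step 2 -- back substitution: solve Ux = y from the bottom row up.
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print(x)  # [ 1.  2. -1.]
```

Reusing the same L and U with a different right-hand side b only repeats the two cheap substitution loops, which is the point of factoring once.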