
Chapter 2

Matrices
2.1 Operations with Matrices
2.2 Properties of Matrix Operations
2.3 The Inverse of a Matrix
2.4 Elementary Matrices
2.5 Applications of Matrix Operations
※ Since any real number can be expressed as a 1×1 matrix, all rules, operations,
and theorems for matrices also apply to real numbers, but not vice versa
※ It is not necessary to memorize all rules, operations, and theorems for
matrices; instead, focus on those that differ between matrices and real numbers
2.1 Operations with Matrices

Matrix:
$$A=[a_{ij}]=\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n}\\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n}\\ \vdots & \vdots & \vdots & & \vdots\\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}\in M_{m\times n}$$

(i, j)-th entry (or element): aij


number of rows: m
number of columns: n
size: m×n


Square matrix: m = n

Equal matrices: two matrices are equal if they have the same
size (m × n) and entries corresponding to the same position are
equal
For $A=[a_{ij}]_{m\times n}$ and $B=[b_{ij}]_{m\times n}$,
$A=B$ if and only if $a_{ij}=b_{ij}$ for $1\le i\le m$, $1\le j\le n$


Ex: Equality of matrices
$$A=\begin{bmatrix}1 & 2\\ 3 & 4\end{bmatrix},\quad B=\begin{bmatrix}a & b\\ c & d\end{bmatrix}$$

If $A=B$, then $a=1$, $b=2$, $c=3$, and $d=4$


Matrix addition:
If $A=[a_{ij}]_{m\times n}$ and $B=[b_{ij}]_{m\times n}$,
then $A+B=[a_{ij}]_{m\times n}+[b_{ij}]_{m\times n}=[a_{ij}+b_{ij}]_{m\times n}=[c_{ij}]_{m\times n}=C$


Ex 2: Matrix addition

$$\begin{bmatrix}-1 & 2\\ 0 & 1\end{bmatrix}+\begin{bmatrix}1 & 3\\ -1 & 2\end{bmatrix}=\begin{bmatrix}-1+1 & 2+3\\ 0-1 & 1+2\end{bmatrix}=\begin{bmatrix}0 & 5\\ -1 & 3\end{bmatrix}$$

$$\begin{bmatrix}1\\ -3\\ 2\end{bmatrix}+\begin{bmatrix}-1\\ 3\\ -2\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}$$


Scalar ( 純量 ) multiplication:
If $A=[a_{ij}]_{m\times n}$ and $c$ is a scalar,
then $cA=[ca_{ij}]_{m\times n}$

Matrix subtraction:
$A-B=A+(-1)B$


Ex 3: Scalar multiplication and matrix subtraction

$$A=\begin{bmatrix}1 & 2 & 4\\ -3 & 0 & -1\\ 2 & 1 & 2\end{bmatrix},\quad B=\begin{bmatrix}2 & 0 & 0\\ 1 & -4 & 3\\ -1 & 3 & 2\end{bmatrix}$$

Find (a) 3A, (b) –B, (c) 3A – B


Sol:
(a)
$$3A=3\begin{bmatrix}1 & 2 & 4\\ -3 & 0 & -1\\ 2 & 1 & 2\end{bmatrix}=\begin{bmatrix}3 & 6 & 12\\ -9 & 0 & -3\\ 6 & 3 & 6\end{bmatrix}$$
(b)
$$-B=(-1)\begin{bmatrix}2 & 0 & 0\\ 1 & -4 & 3\\ -1 & 3 & 2\end{bmatrix}=\begin{bmatrix}-2 & 0 & 0\\ -1 & 4 & -3\\ 1 & -3 & -2\end{bmatrix}$$
(c)
$$3A-B=\begin{bmatrix}3 & 6 & 12\\ -9 & 0 & -3\\ 6 & 3 & 6\end{bmatrix}-\begin{bmatrix}2 & 0 & 0\\ 1 & -4 & 3\\ -1 & 3 & 2\end{bmatrix}=\begin{bmatrix}1 & 6 & 12\\ -10 & 4 & -6\\ 7 & 0 & 4\end{bmatrix}$$

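The entrywise operations above (scalar multiplication, subtraction) are easy to check numerically. A minimal pure-Python sketch reproducing parts (a) and (c) of Ex 3; the helper names are my own, not from the slides:

```python
def scalar_mult(c, A):
    """Multiply every entry of A (a list of rows) by the scalar c."""
    return [[c * a for a in row] for row in A]

def mat_sub(A, B):
    """Entrywise subtraction: A - B = A + (-1)B."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 4], [-3, 0, -1], [2, 1, 2]]
B = [[2, 0, 0], [1, -4, 3], [-1, 3, 2]]

print(scalar_mult(3, A))                 # 3A
print(mat_sub(scalar_mult(3, A), B))     # 3A - B
```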

Matrix multiplication:
If $A=[a_{ij}]_{m\times n}$ and $B=[b_{ij}]_{n\times p}$,
then $AB=[a_{ij}]_{m\times n}[b_{ij}]_{n\times p}=[c_{ij}]_{m\times p}=C$
(the inner dimensions must be equal; the size of $C=AB$ is $m\times p$),

where
$$c_{ij}=\sum_{k=1}^{n}a_{ik}b_{kj}=a_{i1}b_{1j}+a_{i2}b_{2j}+\cdots+a_{in}b_{nj}$$

※ The entry $c_{ij}$ is the sum of the entry-by-entry products of the i-th row of A and the j-th column of B

Ex 4: Find AB

$$A=\begin{bmatrix}-1 & 3\\ 4 & -2\\ 5 & 0\end{bmatrix}_{3\times 2},\quad B=\begin{bmatrix}-3 & 2\\ -4 & 1\end{bmatrix}_{2\times 2}$$

Sol:
$$AB=\begin{bmatrix}(-1)(-3)+(3)(-4) & (-1)(2)+(3)(1)\\ (4)(-3)+(-2)(-4) & (4)(2)+(-2)(1)\\ (5)(-3)+(0)(-4) & (5)(2)+(0)(1)\end{bmatrix}=\begin{bmatrix}-9 & 1\\ -4 & 6\\ -15 & 10\end{bmatrix}_{3\times 2}$$

Note: (1) Here BA is not defined (the inner dimensions do not match)
(2) Even when BA is defined, it can happen that AB ≠ BA
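The formula for $c_{ij}$ translates directly into code. A minimal pure-Python sketch (the function name is my own) that reproduces Ex 4:

```python
def mat_mult(A, B):
    """C = AB, where c_ij is the dot product of row i of A and column j of B."""
    n = len(B)                                     # inner dimension
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[-1, 3], [4, -2], [5, 0]]     # 3x2
B = [[-3, 2], [-4, 1]]             # 2x2
print(mat_mult(A, B))              # 3x2 result: [[-9, 1], [-4, 6], [-15, 10]]
```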

Matrix form of a system of linear equations in n variables:
$$\begin{cases}a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n=b_1\\ a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n=b_2\\ \quad\vdots\\ a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n=b_m\end{cases}\quad\text{(m linear equations)}$$

$$\underbrace{\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn}\end{bmatrix}}_{A\ (m\times n)}\underbrace{\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}}_{\mathbf{x}\ (n\times 1)}=\underbrace{\begin{bmatrix}b_1\\ b_2\\ \vdots\\ b_m\end{bmatrix}}_{\mathbf{b}\ (m\times 1)}\quad\text{(single matrix equation } A\mathbf{x}=\mathbf{b}\text{)}$$

Partitioned matrices ( 分割矩陣 ):
Partitioned into row vectors:
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\end{bmatrix}=\begin{bmatrix}\mathbf{r}_1\\ \mathbf{r}_2\\ \mathbf{r}_3\end{bmatrix}$$

Partitioned into column vectors:
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\end{bmatrix}=\begin{bmatrix}\mathbf{c}_1 & \mathbf{c}_2 & \mathbf{c}_3 & \mathbf{c}_4\end{bmatrix}$$

Partitioned into submatrices:
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\end{bmatrix}=\begin{bmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{bmatrix}$$

※ Partitioned matrices can be used to simplify equations or to obtain new interpretations of equations (see the next slide)

Ax is a linear combination of the column vectors of matrix A:
$$A\mathbf{x}=\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn}\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}=\begin{bmatrix}a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n\\ a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n\\ \vdots\\ a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n\end{bmatrix}$$

$$=x_1\begin{bmatrix}a_{11}\\ a_{21}\\ \vdots\\ a_{m1}\end{bmatrix}+x_2\begin{bmatrix}a_{12}\\ a_{22}\\ \vdots\\ a_{m2}\end{bmatrix}+\cdots+x_n\begin{bmatrix}a_{1n}\\ a_{2n}\\ \vdots\\ a_{mn}\end{bmatrix}=x_1\mathbf{c}_1+x_2\mathbf{c}_2+\cdots+x_n\mathbf{c}_n$$

So Ax can be viewed as the linear combination of the column vectors of A with coefficients $x_1, x_2,\ldots, x_n$:
$$A\mathbf{x}=\begin{bmatrix}\mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_n\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix}$$

※ You can derive the same result by performing the matrix multiplication directly with A expressed in column vectors and x
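The column view of Ax can be confirmed numerically: computing Ax row by row and computing x₁c₁ + x₂c₂ + … + xₙcₙ give the same vector. A short sketch with my own helper names:

```python
def ax_rowwise(A, x):
    """Ax computed the usual way: each entry is a row of A dotted with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ax_columnwise(A, x):
    """Ax as the linear combination x_1*c_1 + ... + x_n*c_n of A's columns."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):              # add column c_j scaled by x_j
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2], [3, 4], [5, 6]]
x = [10, -1]
assert ax_rowwise(A, x) == ax_columnwise(A, x) == [8, 26, 44]
```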

To practice the exercises in Sec. 2.1, we need to know the
trace operation and the notion of diagonal matrices


Trace ( 跡數 ) operation:
If $A=[a_{ij}]_{n\times n}$, then $\mathrm{Tr}(A)=a_{11}+a_{22}+\cdots+a_{nn}$


Diagonal matrix ( 對角矩陣 ): a square matrix whose
nonzero entries appear only on the principal diagonal

$$A=\mathrm{diag}(d_1,d_2,\ldots,d_n)=\begin{bmatrix}d_1 & 0 & \cdots & 0\\ 0 & d_2 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & d_n\end{bmatrix}\in M_{n\times n}$$

※ diag(d₁, d₂, …, dₙ) is the usual notation for a diagonal matrix
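For the exercises, the two notions above are one-liners in plain Python (function names are my own):

```python
def trace(A):
    """Tr(A): sum of the principal-diagonal entries of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

def diag(*entries):
    """Build diag(d1, ..., dn): d_i on the diagonal, zeros elsewhere."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

assert trace([[1, 2], [3, 4]]) == 5
assert diag(7, 8) == [[7, 0], [0, 8]]
```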
Keywords in Section 2.1:

equality of matrices: 相等矩陣

matrix addition: 矩陣相加

scalar multiplication: 純量積

matrix multiplication: 矩陣相乘

partitioned matrix: 分割矩陣

row vector: 列向量

column vector: 行向量

trace: 跡數

diagonal matrix: 對角矩陣

2.2 Properties of Matrix Operations

Three basic matrix operators, introduced in Sec. 2.1:
(1) matrix addition
(2) scalar multiplication
(3) matrix multiplication
0 0  0
0 0  0 

Zero matrix ( 零矩陣 ):0mn 
   
 
0 0  0  mn
1 0  0
0 1  0 

Identity matrix of order n ( 單位矩陣 ):I n  
   
 
0 0  1  n n

Properties of matrix addition and scalar multiplication:
If A, B, C ∈ M_{m×n} and c, d are scalars, then
(1) A+B = B+A (Commutative property ( 交換律 ) of matrix addition)
(2) A+(B+C) = (A+B)+C (Associative property ( 結合律 ) of matrix addition)
(3) (cd)A = c(dA) (Associative property of scalar multiplication)
(4) 1A = A (Multiplicative identity property; 1 is the multiplicative identity for all matrices)
(5) c(A+B) = cA + cB (Distributive ( 分配律 ) property of scalar multiplication over matrix addition)
(6) (c+d)A = cA + dA (Distributive property of scalar multiplication over real-number addition)

Notes:
All above properties are very similar to the counterpart
properties for real numbers

Properties of zero matrices:
If $A\in M_{m\times n}$ and c is a scalar, then
(1) $A+0_{m\times n}=A$
※ So $0_{m\times n}$ is also called the additive identity for the set of all m×n matrices
(2) $A+(-A)=0_{m\times n}$
※ Thus $-A$ is called the additive inverse of A
(3) $cA=0_{m\times n}\ \Rightarrow\ c=0$ or $A=0_{m\times n}$


Notes:
All above properties are very similar to the counterpart
properties for the real number 0


Properties of matrix multiplication:
(1) A (BC) = (AB ) C (Associative property of matrix multiplication)
(2) A (B+C) = AB + AC (Distributive property of LHS matrix multiplication
over matrix addition)
(3) (A+B)C = AC + BC (Distributive property of RHS matrix multiplication
over matrix addition)
(4) c (AB) = (cA) B = A (cB) (Associative property of scalar and matrix
multiplication)
※ For real numbers, properties (2) and (3) coincide, since the order of real-number multiplication is irrelevant
※ Real-number multiplication satisfies all the above properties and, in addition, it is commutative, i.e., cd = dc

Properties of the identity matrix:
If $A\in M_{m\times n}$, then (1) $AI_n=A$ and (2) $I_mA=A$
※ The identity matrix plays a role similar to the real number 1. However, 1 is unique among the real numbers, while there are identity matrices of every size

Ex 3: Matrix multiplication is associative
Calculate (AB)C and A(BC) for
$$A=\begin{bmatrix}1 & -2\\ 2 & -1\end{bmatrix},\quad B=\begin{bmatrix}1 & 0 & 2\\ 3 & -2 & 1\end{bmatrix},\quad C=\begin{bmatrix}-1 & 0\\ 3 & 1\\ 2 & 4\end{bmatrix}$$

Sol:
$$(AB)C=\left(\begin{bmatrix}1 & -2\\ 2 & -1\end{bmatrix}\begin{bmatrix}1 & 0 & 2\\ 3 & -2 & 1\end{bmatrix}\right)\begin{bmatrix}-1 & 0\\ 3 & 1\\ 2 & 4\end{bmatrix}=\begin{bmatrix}-5 & 4 & 0\\ -1 & 2 & 3\end{bmatrix}\begin{bmatrix}-1 & 0\\ 3 & 1\\ 2 & 4\end{bmatrix}=\begin{bmatrix}17 & 4\\ 13 & 14\end{bmatrix}$$

$$A(BC)=\begin{bmatrix}1 & -2\\ 2 & -1\end{bmatrix}\left(\begin{bmatrix}1 & 0 & 2\\ 3 & -2 & 1\end{bmatrix}\begin{bmatrix}-1 & 0\\ 3 & 1\\ 2 & 4\end{bmatrix}\right)=\begin{bmatrix}1 & -2\\ 2 & -1\end{bmatrix}\begin{bmatrix}3 & 8\\ -7 & 2\end{bmatrix}=\begin{bmatrix}17 & 4\\ 13 & 14\end{bmatrix}$$

Definition of $A^k$: repeated multiplication of a square matrix:
$$A^1=A,\quad A^2=AA,\quad\ldots,\quad A^k=\underbrace{AA\cdots A}_{k\text{ factors of }A}$$

Properties of $A^k$:
(1) $A^jA^k=A^{j+k}$
(2) $(A^j)^k=A^{jk}$
where j and k are nonnegative integers and $A^0$ is defined to be I

For diagonal matrices:
$$D=\begin{bmatrix}d_1 & 0 & \cdots & 0\\ 0 & d_2 & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & d_n\end{bmatrix}\ \Rightarrow\ D^k=\begin{bmatrix}d_1^k & 0 & \cdots & 0\\ 0 & d_2^k & \cdots & 0\\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & d_n^k\end{bmatrix}$$

Equipped with the four properties of matrix multiplication,
we can prove a statement on Slide 1.35: if a homogeneous
system has any nontrivial solution, this system must have
infinitely many nontrivial solutions

Suppose there is a nonzero solution x₁ of this homogeneous system, i.e., Ax₁ = 0. Then for any scalar t, tx₁ must also be a solution, since
$$A(t\mathbf{x}_1)=t(A\mathbf{x}_1)=t\,\mathbf{0}=\mathbf{0}$$

The fourth property of matrix multiplication on Slide 2.17

Finally, since t can be any real number, it can be concluded that there are
infinitely many solutions for this homogeneous system


Transpose of a matrix ( 轉置矩陣 ):

If
$$A=\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn}\end{bmatrix}\in M_{m\times n},\quad\text{then}\quad A^T=\begin{bmatrix}a_{11} & a_{21} & \cdots & a_{m1}\\ a_{12} & a_{22} & \cdots & a_{m2}\\ \vdots & \vdots & & \vdots\\ a_{1n} & a_{2n} & \cdots & a_{mn}\end{bmatrix}\in M_{n\times m}$$

※ The transpose operation moves the entry $a_{ij}$ (originally at position (i, j)) to position (j, i)
※ Note that after the transpose operation, $A^T$ has size n×m

Ex 8: Find the transpose of the following matrices

(a) $A=\begin{bmatrix}2\\ 8\end{bmatrix}$  (b) $A=\begin{bmatrix}1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9\end{bmatrix}$  (c) $A=\begin{bmatrix}0 & 1\\ 2 & 4\\ 1 & -1\end{bmatrix}$

Sol:
(a) $A^T=\begin{bmatrix}2 & 8\end{bmatrix}$
(b) $A^T=\begin{bmatrix}1 & 4 & 7\\ 2 & 5 & 8\\ 3 & 6 & 9\end{bmatrix}$
(c) $A^T=\begin{bmatrix}0 & 2 & 1\\ 1 & 4 & -1\end{bmatrix}$

Properties of transposes:
(1) $(A^T)^T=A$
(2) $(A+B)^T=A^T+B^T$
(3) $(cA)^T=c(A^T)$
(4) $(AB)^T=B^TA^T$
※ Properties (2) and (4) can be generalized to the sum or product of
multiple matrices. For example, (A+B+C)T = AT+BT+CT and (ABC)T =
CTBTAT
※ Since a real number can also be viewed as a 1 × 1 matrix, the transpose of a real number is itself; that is, for a ∈ R, aᵀ = a. In other words, the transpose operation has no effect on real numbers
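Property (4), the reverse-order law, can be spot-checked numerically on the matrices of Ex 9. A sketch with my own helper names:

```python
def transpose(A):
    """Swap entry (i, j) to position (j, i)."""
    return [list(col) for col in zip(*A)]

def mat_mult(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[2, 1, -2], [-1, 0, 3], [0, -2, 1]]
B = [[3, 1], [2, -1], [3, 0]]
# (AB)^T equals B^T A^T (note the reversed order), not A^T B^T
assert transpose(mat_mult(A, B)) == mat_mult(transpose(B), transpose(A))
```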


Ex 9: Show that (AB)ᵀ and BᵀAᵀ are equal

$$A=\begin{bmatrix}2 & 1 & -2\\ -1 & 0 & 3\\ 0 & -2 & 1\end{bmatrix},\quad B=\begin{bmatrix}3 & 1\\ 2 & -1\\ 3 & 0\end{bmatrix}$$

Sol:
$$(AB)^T=\left(\begin{bmatrix}2 & 1 & -2\\ -1 & 0 & 3\\ 0 & -2 & 1\end{bmatrix}\begin{bmatrix}3 & 1\\ 2 & -1\\ 3 & 0\end{bmatrix}\right)^T=\begin{bmatrix}2 & 1\\ 6 & -1\\ -1 & 2\end{bmatrix}^T=\begin{bmatrix}2 & 6 & -1\\ 1 & -1 & 2\end{bmatrix}$$

$$B^TA^T=\begin{bmatrix}3 & 2 & 3\\ 1 & -1 & 0\end{bmatrix}\begin{bmatrix}2 & -1 & 0\\ 1 & 0 & -2\\ -2 & 3 & 1\end{bmatrix}=\begin{bmatrix}2 & 6 & -1\\ 1 & -1 & 2\end{bmatrix}$$

Symmetric matrix ( 對稱矩陣 ):
A square matrix A is symmetric if A = AT

Skew-symmetric matrix ( 反對稱矩陣 ):
A square matrix A is skew-symmetric if AT = –A

Ex: If $A=\begin{bmatrix}1 & 2 & 3\\ a & 4 & 5\\ b & c & 6\end{bmatrix}$ is symmetric, find a, b, and c

Sol:
$$A=\begin{bmatrix}1 & 2 & 3\\ a & 4 & 5\\ b & c & 6\end{bmatrix},\quad A^T=\begin{bmatrix}1 & a & b\\ 2 & 4 & c\\ 3 & 5 & 6\end{bmatrix},\quad A=A^T\ \Rightarrow\ a=2,\ b=3,\ c=5$$


Ex: If $A=\begin{bmatrix}0 & 1 & 2\\ a & 0 & 3\\ b & c & 0\end{bmatrix}$ is skew-symmetric, find a, b, and c

Sol:
$$A=\begin{bmatrix}0 & 1 & 2\\ a & 0 & 3\\ b & c & 0\end{bmatrix},\quad -A^T=\begin{bmatrix}0 & -a & -b\\ -1 & 0 & -c\\ -2 & -3 & 0\end{bmatrix},\quad A=-A^T\ \Rightarrow\ a=-1,\ b=-2,\ c=-3$$

Note: $AA^T$ must be symmetric
Pf: $(AA^T)^T=(A^T)^T A^T=AA^T$, so $AA^T$ is symmetric
※ The matrix A can be of any size; it is not necessary for A to be square
※ In fact, $AA^T$ must be a square matrix
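The note that AAᵀ is always symmetric, for A of any size, is easy to confirm on a non-square example:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mult(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]        # 2x3, not square
S = mat_mult(A, transpose(A))     # AA^T is 2x2, hence square
assert S == transpose(S)          # ... and symmetric
```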

Before finishing this section, we discuss two properties that hold for
real numbers but not for matrices: the first is the commutative
property of multiplication and the second is the cancellation law

Real number:
ab = ba (Commutative property of real-number multiplication)

Matrix:
$AB\ne BA$ in general, where A is m×n and B is n×p

Three cases for BA:
(1) If m ≠ p, then AB is defined, but BA is undefined
(2) If m = p but m ≠ n, then $AB\in M_{m\times m}$ and $BA\in M_{n\times n}$ (the sizes are not the same)
(3) If m = p = n, then $AB\in M_{m\times m}$ and $BA\in M_{m\times m}$ (the sizes are the same, but the resulting matrices may not be equal)

Ex 4:
Show that AB and BA are not equal for the matrices
$$A=\begin{bmatrix}1 & 3\\ 2 & -1\end{bmatrix}\quad\text{and}\quad B=\begin{bmatrix}2 & -1\\ 0 & 2\end{bmatrix}$$

Sol:
$$AB=\begin{bmatrix}1 & 3\\ 2 & -1\end{bmatrix}\begin{bmatrix}2 & -1\\ 0 & 2\end{bmatrix}=\begin{bmatrix}2 & 5\\ 4 & -4\end{bmatrix},\quad BA=\begin{bmatrix}2 & -1\\ 0 & 2\end{bmatrix}\begin{bmatrix}1 & 3\\ 2 & -1\end{bmatrix}=\begin{bmatrix}0 & 7\\ 4 & -2\end{bmatrix}$$

$AB\ne BA$ (noncommutativity of matrix multiplication)


Notes:
(1) A+B = B+A (matrix addition is commutative)
(2) AB ≠ BA in general (matrix multiplication is not commutative, so the order of matrix multiplication matters)

※ This differs from real-number multiplication, where the order of multiplication makes no difference


Real numbers:
ac = bc and c ≠ 0 ⟹ a = b (cancellation law for real numbers)

Matrices:
Suppose AC = BC and C ≠ 0 (C is not a zero matrix)
(1) If C is invertible, then A = B
(2) If C is not invertible, then A = B need not follow (the cancellation law is not necessarily valid)

※ The definition of "invertible" is deferred here because we will study it in the next section


Ex 5: (An example in which cancellation is not valid)
Show that AC = BC for
$$A=\begin{bmatrix}1 & 3\\ 0 & 1\end{bmatrix},\quad B=\begin{bmatrix}2 & 4\\ 2 & 3\end{bmatrix},\quad C=\begin{bmatrix}1 & -2\\ -1 & 2\end{bmatrix}$$

Sol:
$$AC=\begin{bmatrix}1 & 3\\ 0 & 1\end{bmatrix}\begin{bmatrix}1 & -2\\ -1 & 2\end{bmatrix}=\begin{bmatrix}-2 & 4\\ -1 & 2\end{bmatrix},\quad BC=\begin{bmatrix}2 & 4\\ 2 & 3\end{bmatrix}\begin{bmatrix}1 & -2\\ -1 & 2\end{bmatrix}=\begin{bmatrix}-2 & 4\\ -1 & 2\end{bmatrix}$$

So, although AC = BC, A ≠ B

Keywords in Section 2.2:

zero matrix: 零矩陣

identity matrix: 單位矩陣

commutative property: 交換律

associative property: 結合律

distributive property: 分配律

cancellation law: 消去法則

transpose matrix: 轉置矩陣

symmetric matrix: 對稱矩陣

skew-symmetric matrix: 反對稱矩陣

2.3 The Inverse of a Matrix

Inverse matrix ( 反矩陣 ):
Consider $A\in M_{n\times n}$.
If there exists a matrix $B\in M_{n\times n}$ such that $AB=BA=I_n$,
then (1) A is invertible ( 可逆 ) (or nonsingular ( 非奇異 ))
(2) B is the inverse of A

Note:
A square matrix that does not have an inverse is called
noninvertible (or singular ( 奇異 ))
※ The definition of the inverse of a matrix is similar to that of the inverse of a scalar, i.e., c · (1/c) = 1
※ Since the real number 0 has no (multiplicative) inverse ( 倒數 ), you can think of noninvertible matrices as playing a role similar to the real number 0 in some sense

Theorem 2.7: The inverse of a matrix is unique
If B and C are both inverses of the matrix A, then B = C.
Pf: Since B is an inverse of A, AB = I. Then
C(AB) = CI
(CA)B = C (associative property of matrix multiplication and the property of the identity matrix)
IB = C
B = C
Consequently, the inverse of a matrix is unique.

Notes:
(1) The inverse of A is denoted by $A^{-1}$
(2) $AA^{-1}=A^{-1}A=I$


Find the inverse of a matrix by the Gauss-Jordan elimination:
$$[\,A \mid I\,]\ \xrightarrow{\text{Gauss-Jordan elimination}}\ [\,I \mid A^{-1}\,]$$

Ex 2: Find the inverse of the matrix
$$A=\begin{bmatrix}1 & 4\\ -1 & -3\end{bmatrix}$$

Sol: Solve AX = I:
$$\begin{bmatrix}1 & 4\\ -1 & -3\end{bmatrix}\begin{bmatrix}x_{11} & x_{12}\\ x_{21} & x_{22}\end{bmatrix}=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}\ \Rightarrow\ \begin{bmatrix}x_{11}+4x_{21} & x_{12}+4x_{22}\\ -x_{11}-3x_{21} & -x_{12}-3x_{22}\end{bmatrix}=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}$$
Equating corresponding entries yields two systems of linear equations:
$$(1)\ \begin{cases}x_{11}+4x_{21}=1\\ -x_{11}-3x_{21}=0\end{cases}\qquad (2)\ \begin{cases}x_{12}+4x_{22}=0\\ -x_{12}-3x_{22}=1\end{cases}$$
Both systems have the same coefficient matrix, which is exactly the matrix A.

$$(1)\ \left[\begin{array}{cc|c}1 & 4 & 1\\ -1 & -3 & 0\end{array}\right]\xrightarrow{A_{1,2}^{(1)},\ A_{2,1}^{(-4)}}\left[\begin{array}{cc|c}1 & 0 & -3\\ 0 & 1 & 1\end{array}\right]\ \Rightarrow\ x_{11}=-3,\ x_{21}=1$$

$$(2)\ \left[\begin{array}{cc|c}1 & 4 & 0\\ -1 & -3 & 1\end{array}\right]\xrightarrow{A_{1,2}^{(1)},\ A_{2,1}^{(-4)}}\left[\begin{array}{cc|c}1 & 0 & -4\\ 0 & 1 & 1\end{array}\right]\ \Rightarrow\ x_{12}=-4,\ x_{22}=1$$

Thus
$$X=A^{-1}=\begin{bmatrix}-3 & -4\\ 1 & 1\end{bmatrix}$$
(both systems are solved by performing the Gauss-Jordan elimination on the matrix A with the same row operations)

Note:
Instead of solving the two systems separately, you can solve them simultaneously by adjoining (appending) the identity matrix to the right of the coefficient matrix:

$$[\,A\mid I\,]=\left[\begin{array}{cc|cc}1 & 4 & 1 & 0\\ -1 & -3 & 0 & 1\end{array}\right]\xrightarrow[A_{1,2}^{(1)},\ A_{2,1}^{(-4)}]{\text{Gauss-Jordan elimination}}\left[\begin{array}{cc|cc}1 & 0 & -3 & -4\\ 0 & 1 & 1 & 1\end{array}\right]=[\,I\mid A^{-1}\,]$$

The third column is the solution for $\begin{bmatrix}x_{11}\\ x_{21}\end{bmatrix}$ and the fourth column is the solution for $\begin{bmatrix}x_{12}\\ x_{22}\end{bmatrix}$

※ If A cannot be row reduced to I, then A is singular


Ex 3: Find the inverse of the matrix
$$A=\begin{bmatrix}1 & -1 & 0\\ 1 & 0 & -1\\ -6 & 2 & 3\end{bmatrix}$$

Sol:
$$[\,A\mid I\,]=\left[\begin{array}{ccc|ccc}1 & -1 & 0 & 1 & 0 & 0\\ 1 & 0 & -1 & 0 & 1 & 0\\ -6 & 2 & 3 & 0 & 0 & 1\end{array}\right]
\xrightarrow{A_{1,2}^{(-1)}}\left[\begin{array}{ccc|ccc}1 & -1 & 0 & 1 & 0 & 0\\ 0 & 1 & -1 & -1 & 1 & 0\\ -6 & 2 & 3 & 0 & 0 & 1\end{array}\right]
\xrightarrow{A_{1,3}^{(6)}}\left[\begin{array}{ccc|ccc}1 & -1 & 0 & 1 & 0 & 0\\ 0 & 1 & -1 & -1 & 1 & 0\\ 0 & -4 & 3 & 6 & 0 & 1\end{array}\right]$$

$$\xrightarrow{A_{2,3}^{(4)}}\left[\begin{array}{ccc|ccc}1 & -1 & 0 & 1 & 0 & 0\\ 0 & 1 & -1 & -1 & 1 & 0\\ 0 & 0 & -1 & 2 & 4 & 1\end{array}\right]
\xrightarrow{M_{3}^{(-1)}}\left[\begin{array}{ccc|ccc}1 & -1 & 0 & 1 & 0 & 0\\ 0 & 1 & -1 & -1 & 1 & 0\\ 0 & 0 & 1 & -2 & -4 & -1\end{array}\right]$$

$$\xrightarrow{A_{3,2}^{(1)}}\left[\begin{array}{ccc|ccc}1 & -1 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & -3 & -3 & -1\\ 0 & 0 & 1 & -2 & -4 & -1\end{array}\right]
\xrightarrow{A_{2,1}^{(1)}}\left[\begin{array}{ccc|ccc}1 & 0 & 0 & -2 & -3 & -1\\ 0 & 1 & 0 & -3 & -3 & -1\\ 0 & 0 & 1 & -2 & -4 & -1\end{array}\right]=[\,I\mid A^{-1}\,]$$

So the matrix A is invertible, and its inverse is
$$A^{-1}=\begin{bmatrix}-2 & -3 & -1\\ -3 & -3 & -1\\ -2 & -4 & -1\end{bmatrix}$$

Check it by yourselves: $AA^{-1}=A^{-1}A=I$

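The [A | I] → [I | A⁻¹] procedure above can be sketched in plain Python. This is a minimal sketch, not the only way to organize the elimination; the partial pivoting (choosing the largest pivot) is my addition for numerical safety, not part of the hand computation on the slide:

```python
def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # Adjoin the identity matrix to the right of A
    M = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]        # interchange two rows
        p = M[col][col]
        M[col] = [v / p for v in M[col]]           # scale the pivot row to 1
        for r in range(n):                         # eliminate the other rows
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]                  # right half is A^{-1}

A = [[1, -1, 0], [1, 0, -1], [-6, 2, 3]]
print(inverse(A))   # ≈ [[-2, -3, -1], [-3, -3, -1], [-2, -4, -1]]
```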

Matrix Operations in Excel

TRANSPOSE: calculate the transpose of a matrix

MMULT: matrix multiplication

MINVERSE: calculate the inverse of a matrix

MDETERM: calculate the determinant of a matrix

SUMPRODUCT: calculate the inner product of two vectors

※ For TRANSPOSE, MMULT, and MINVERSE, since the output is a matrix or a vector, you need to input the formula in a cell first, then select the output range (with the formula cell located at position a11), and finally press "Ctrl+Shift+Enter" with the focus on the formula cell to obtain the desired result
※ See "Matrix operations in Excel for Ch2.xls" downloaded from my website
※ See “Matrix operations in Excel for Ch2.xls” downloaded from my
website

Theorem 2.8: Properties of inverse matrices
If A is an invertible matrix, k is a positive integer, and c is a scalar,
then
(1) $A^{-1}$ is invertible and $(A^{-1})^{-1}=A$
(2) $A^k$ is invertible and $(A^k)^{-1}=(A^{-1})^k$
(3) $cA$ is invertible (given $c\ne 0$) and $(cA)^{-1}=\frac{1}{c}A^{-1}$
(4) $A^T$ is invertible and $(A^T)^{-1}=(A^{-1})^T$ ← the "T" is not an exponent; it denotes the transpose operation

Pf:
(1) $A^{-1}A=I$; (2) $A^k(A^{-1})^k=I$; (3) $(cA)(\frac{1}{c}A^{-1})=I$; (4) $A^T(A^{-1})^T=(A^{-1}A)^T=I$

Theorem 2.9: The inverse of a product
If A and B are invertible matrices of order n, then AB is
invertible and
$$(AB)^{-1}=B^{-1}A^{-1}$$
Pf:
$$(AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=A(I)A^{-1}=(AI)A^{-1}=AA^{-1}=I$$
(associative property of matrix multiplication)

Thus AB is invertible and its inverse is $B^{-1}A^{-1}$

Note:
(1) This can be generalized to the product of multiple matrices:
$$(A_1A_2A_3\cdots A_n)^{-1}=A_n^{-1}\cdots A_3^{-1}A_2^{-1}A_1^{-1}$$
(2) It is similar to the result for the transpose of a product of multiple matrices (see Slide 2.24):
$$(A_1A_2A_3\cdots A_n)^{T}=A_n^{T}\cdots A_3^{T}A_2^{T}A_1^{T}$$

Theorem 2.10: Cancellation properties for matrix multiplication
If C is an invertible matrix, then the following properties hold:
(1) If AC=BC, then A=B (right cancellation property)
(2) If CA=CB, then A=B (left cancellation property)
Pf (for the right cancellation property):
AC = BC
(AC)C⁻¹ = (BC)C⁻¹ (C is invertible, so C⁻¹ exists)
A(CC⁻¹) = B(CC⁻¹) (associative property of matrix multiplication)
AI = BI
A = B

Note:
If C is not invertible, then cancellation is not necessarily valid

Theorem 2.11: Systems of equations with a unique solution
If A is an invertible matrix, then the system of linear equations Ax = b has a unique solution given by $\mathbf{x}=A^{-1}\mathbf{b}$
Pf:
Ax = b
A⁻¹Ax = A⁻¹b (A is nonsingular)
Ix = A⁻¹b
x = A⁻¹b

If x₁ and x₂ were two solutions of the equation Ax = b,
then Ax₁ = b = Ax₂ ⟹ x₁ = x₂ (left cancellation property)
So this solution is unique



Ex 8:
Use an inverse matrix to solve each system
(a) 2x + 3y + z = −1    (b) 2x + 3y + z = 4    (c) 2x + 3y + z = 0
    3x + 3y + z = 1         3x + 3y + z = 8        3x + 3y + z = 0
    2x + 4y + z = −2        2x + 4y + z = 5        2x + 4y + z = 0

Sol:
$$A=\begin{bmatrix}2 & 3 & 1\\ 3 & 3 & 1\\ 2 & 4 & 1\end{bmatrix}\ \xrightarrow{\text{Gauss-Jordan elimination}}\ A^{-1}=\begin{bmatrix}-1 & 1 & 0\\ -1 & 0 & 1\\ 6 & -2 & -3\end{bmatrix}$$

(a) $\mathbf{x}=A^{-1}\mathbf{b}=\begin{bmatrix}-1 & 1 & 0\\ -1 & 0 & 1\\ 6 & -2 & -3\end{bmatrix}\begin{bmatrix}-1\\ 1\\ -2\end{bmatrix}=\begin{bmatrix}2\\ -1\\ -2\end{bmatrix}$

(b) $\mathbf{x}=A^{-1}\mathbf{b}=\begin{bmatrix}-1 & 1 & 0\\ -1 & 0 & 1\\ 6 & -2 & -3\end{bmatrix}\begin{bmatrix}4\\ 8\\ 5\end{bmatrix}=\begin{bmatrix}4\\ 1\\ -7\end{bmatrix}$

(c) $\mathbf{x}=A^{-1}\mathbf{b}=\begin{bmatrix}-1 & 1 & 0\\ -1 & 0 & 1\\ 6 & -2 & -3\end{bmatrix}\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}$

※ This technique is very convenient when you need to solve several systems with the same coefficient matrix: once you have A⁻¹, you only need a matrix multiplication to find the unknown variables
※ If you solve only one system, the computational effort is lower for Gaussian elimination plus back substitution, or for the Gauss-Jordan elimination
Keywords in Section 2.3:

inverse matrix: 反矩陣

invertible: 可逆

nonsingular: 非奇異

singular: 奇異

2.4 Elementary Matrices

Elementary matrix ( 列基本矩陣 ):
An n×n matrix is called an elementary matrix if it can be obtained
from the identity matrix I by a single elementary row operation

Three types of elementary matrices:
(1) $E_{i,j}=I_{i,j}(I)$ (interchange two rows)
(2) $E_i^{(k)}=M_i^{(k)}(I)$, where $k\ne 0$ (multiply a row by a nonzero constant)
(3) $E_{i,j}^{(k)}=A_{i,j}^{(k)}(I)$ (add a multiple of one row to another row)

Note:
1. Only a single elementary row operation is performed on the identity matrix
2. Since the identity matrix is square, elementary matrices must be square matrices

Ex 1: Elementary matrices and nonelementary matrices

(a) $\begin{bmatrix}1 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 1\end{bmatrix}$  (b) $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\end{bmatrix}$  (c) $\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0\end{bmatrix}$

(a) Yes ($M_2^{(3)}(I_3)$)  (b) No (not square)  (c) No (a row must be multiplied by a nonzero constant)

(d) $\begin{bmatrix}1 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0\end{bmatrix}$  (e) $\begin{bmatrix}1 & 0\\ 2 & 1\end{bmatrix}$  (f) $\begin{bmatrix}1 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & -1\end{bmatrix}$

(d) Yes ($I_{2,3}(I_3)$)  (e) Yes ($A_{1,2}^{(2)}(I_2)$)  (f) No (it uses two elementary row operations, $M_2^{(2)}$ and $M_3^{(-1)}$)

Note:
Elementary matrices are useful because they enable you to use
matrix multiplication to perform elementary row operations

Theorem 2.12: Representing elementary row operations
Let E be the elementary matrix obtained by performing an
elementary row operation on Iₘ. If that same elementary row
operation is performed on an m×n matrix A, then the resulting
matrix equals the product EA
(Note that the elementary matrix must be multiplied on the
left side of the matrix A)
※ Many computer programs implement matrix addition and multiplication, but elementary row operations are seldom implemented directly
※ If you want to perform elementary row operations on a computer (e.g., for G. E. or G.-J. E.), you can instead compute the product of the corresponding elementary matrices and the target matrix A

Ex 3: Using elementary matrices
Find a sequence of elementary matrices that can be used to rewrite the matrix A in row-echelon form

$$A=\begin{bmatrix}0 & 1 & 3 & 5\\ 1 & -3 & 0 & 2\\ 2 & -6 & 2 & 0\end{bmatrix}\ \xrightarrow{I_{1,2},\ A_{1,3}^{(-2)},\ M_3^{(1/2)}}\ \begin{bmatrix}1 & -3 & 0 & 2\\ 0 & 1 & 3 & 5\\ 0 & 0 & 1 & -2\end{bmatrix}$$

Sol:
$$E_1=I_{1,2}(I_3)=\begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1\end{bmatrix},\quad E_2=A_{1,3}^{(-2)}(I_3)=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ -2 & 0 & 1\end{bmatrix},\quad E_3=M_3^{(1/2)}(I_3)=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}$$

$$A_1=I_{1,2}(A)=E_1A=\begin{bmatrix}1 & -3 & 0 & 2\\ 0 & 1 & 3 & 5\\ 2 & -6 & 2 & 0\end{bmatrix}$$
$$A_2=A_{1,3}^{(-2)}(A_1)=E_2A_1=\begin{bmatrix}1 & -3 & 0 & 2\\ 0 & 1 & 3 & 5\\ 0 & 0 & 2 & -4\end{bmatrix}$$
$$A_3=M_3^{(1/2)}(A_2)=E_3A_2=\begin{bmatrix}1 & -3 & 0 & 2\\ 0 & 1 & 3 & 5\\ 0 & 0 & 1 & -2\end{bmatrix}=B\quad\text{(the desired row-echelon form)}$$

$$\Rightarrow\ B=E_3E_2E_1A\quad\text{or}\quad B=M_3^{(1/2)}(A_{1,3}^{(-2)}(I_{1,2}(A)))$$

※ This example demonstrates that we can obtain the same effect as performing elementary row operations by multiplying the corresponding elementary matrices on the left side of the target matrix A
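Theorem 2.12 in action: building E from I and left-multiplying gives the same result as performing the row operation on A directly. A sketch with my own helper names (rows are 0-based in the code, unlike the 1-based slide notation):

```python
def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mult(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add_multiple(M, i, j, k):
    """Row operation A_{i,j}^{(k)}: add k times row i to row j (0-based)."""
    M = [row[:] for row in M]
    M[j] = [mj + k * mi for mj, mi in zip(M[j], M[i])]
    return M

A = [[1, -3, 0, 2], [0, 1, 3, 5], [2, -6, 2, 0]]
E = add_multiple(identity(3), 0, 2, -2)     # E = A_{1,3}^{(-2)}(I_3)
# Left-multiplying by E equals applying the row operation to A itself
assert mat_mult(E, A) == add_multiple(A, 0, 2, -2)
```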

Row-equivalent ( 列等價 ):
Matrix B is row-equivalent to A if there exists a finite number of elementary matrices such that
$$B=E_kE_{k-1}\cdots E_2E_1A$$
※ This corresponds to the row equivalence obtained by performing a finite sequence of elementary row operations (see Slide 1.23)

Theorem 2.13: Elementary matrices are invertible
If E is an elementary matrix, then $E^{-1}$ exists and $E^{-1}$ is still an elementary matrix


Ex: Elementary matrices and their inverses
$$E_1=\begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1\end{bmatrix}=I_{1,2}(I)\quad\Rightarrow\quad E_1^{-1}=\begin{bmatrix}0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1\end{bmatrix}=I_{1,2}(I)$$
($E_1^{-1}$ still corresponds to $I_{1,2}(I)$)

$$E_2=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ -2 & 0 & 1\end{bmatrix}=A_{1,3}^{(-2)}(I)\quad\Rightarrow\quad E_2^{-1}=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 2 & 0 & 1\end{bmatrix}=A_{1,3}^{(2)}(I)$$
(the row operation corresponding to $E_2^{-1}$ still adds a multiple of row 1 to row 3, but the multiplying scalar changes from –2 to 2)

$$E_3=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & \tfrac12\end{bmatrix}=M_3^{(1/2)}(I)\quad\Rightarrow\quad E_3^{-1}=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2\end{bmatrix}=M_3^{(2)}(I)$$
(the row operation corresponding to $E_3^{-1}$ still multiplies row 3 by a nonzero constant, but the constant changes from 1/2 to 2)

Theorem 2.14: A property of invertible matrices
A square matrix A is invertible if and only if it can be written as
the product of elementary matrices (we can further obtain that an
invertible matrix is row-equivalent to the identity matrix)
Pf:
(⇐) Assume that A is the product of elementary matrices.
1. Every elementary matrix is invertible (Thm. 2.13)
2. The product of invertible matrices is invertible (Thm. 2.9)
Thus A is invertible.
(⇒) If A is invertible, then Ax = 0 has only one solution (Thm. 2.11), and this solution must be the trivial solution, so
$$[\,A\mid \mathbf{0}\,]\ \xrightarrow{\text{G.-J. E.}}\ [\,I\mid \mathbf{0}\,]\ \Rightarrow\ E_k\cdots E_3E_2E_1A=I\quad\text{(an invertible matrix is row-equivalent to the identity matrix)}$$
$$\Rightarrow\ A=E_1^{-1}E_2^{-1}E_3^{-1}\cdots E_k^{-1}I=E_1^{-1}E_2^{-1}E_3^{-1}\cdots E_k^{-1}$$
Thus A can be written as the product of elementary matrices.



Ex 4:
Find a sequence of elementary matrices whose product is
$$A=\begin{bmatrix}-1 & -2\\ 3 & 8\end{bmatrix}$$

Sol:
$$A=\begin{bmatrix}-1 & -2\\ 3 & 8\end{bmatrix}\xrightarrow{M_1^{(-1)}}\begin{bmatrix}1 & 2\\ 3 & 8\end{bmatrix}\xrightarrow{A_{1,2}^{(-3)}}\begin{bmatrix}1 & 2\\ 0 & 2\end{bmatrix}\xrightarrow{M_2^{(1/2)}}\begin{bmatrix}1 & 2\\ 0 & 1\end{bmatrix}\xrightarrow{A_{2,1}^{(-2)}}\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}=I$$

Therefore $E_{2,1}^{(-2)}E_2^{(1/2)}E_{1,2}^{(-3)}E_1^{(-1)}A=I$, and thus
$$A=(E_1^{(-1)})^{-1}(E_{1,2}^{(-3)})^{-1}(E_2^{(1/2)})^{-1}(E_{2,1}^{(-2)})^{-1}=\begin{bmatrix}-1 & 0\\ 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0\\ 3 & 1\end{bmatrix}\begin{bmatrix}1 & 0\\ 0 & 2\end{bmatrix}\begin{bmatrix}1 & 2\\ 0 & 1\end{bmatrix}$$

Note: another way to justify using the Gauss-Jordan elimination to find A⁻¹ on Slide 2.36:
First, if A is invertible, then by Thm. 2.14, $E_k\cdots E_3E_2E_1A=I$ (so the G.-J. E. can be performed by applying the elementary row operations corresponding to $E_1, E_2,\ldots, E_k$).
Second, since $A^{-1}A=I$, the cancellation rule gives $A^{-1}=E_k\cdots E_3E_2E_1$.
Third, expressing the G.-J. E. on $[\,A\mid I\,]$ through multiplication by elementary matrices, we obtain
$$E_k\cdots E_3E_2E_1[\,A\mid I\,]=[\,E_k\cdots E_3E_2E_1A\mid E_k\cdots E_3E_2E_1I\,]=[\,I\mid A^{-1}\,]$$

Theorem 2.15: Equivalent conditions
If A is an n×n matrix, then the following statements are equivalent:
(1) A is invertible
(2) Ax = b has a unique solution for every n×1 column vector b
(3) Ax = 0 has only the trivial solution
(4) A is row-equivalent to Iₙ
(5) A can be written as the product of elementary matrices

※ Statements (2) and (3) are from Theorem 2.11, and Statements (4) and (5) are
from Theorem 2.14


LU-factorization (or LU-decomposition) (LU 分解 ):
If the n×n matrix A can be written as the product of a lower triangular matrix L and an upper triangular matrix U, then A = LU is an LU-factorization of A

$$\begin{bmatrix}a_{11} & 0 & 0\\ a_{21} & a_{22} & 0\\ a_{31} & a_{32} & a_{33}\end{bmatrix}\quad\text{3×3 lower triangular matrix ( 下三角矩陣 ): all entries above the principal diagonal are zero}$$

$$\begin{bmatrix}a_{11} & a_{12} & a_{13}\\ 0 & a_{22} & a_{23}\\ 0 & 0 & a_{33}\end{bmatrix}\quad\text{3×3 upper triangular matrix ( 上三角矩陣 ): all entries below the principal diagonal are zero}$$

Note:
If a square matrix A can be row reduced to an upper triangular matrix U using only elementary row operations that add a multiple of one row to another, then it is easy to find an LU-factorization of A:
$$E_k\cdots E_2E_1A=U$$
(U is similar to a row-echelon form matrix, except that the leading coefficients may not be 1)
$$\Rightarrow\ A=E_1^{-1}E_2^{-1}\cdots E_k^{-1}U\ \Rightarrow\ A=LU\quad(L=E_1^{-1}E_2^{-1}\cdots E_k^{-1})$$
※ If only operations of the form $A_{i,j}^{(k)}$ are applied, then $E_1^{-1}E_2^{-1}\cdots E_k^{-1}$ is a lower triangular matrix

Ex 5: LU-factorization
(a) $A=\begin{bmatrix}1 & 2\\ 1 & 0\end{bmatrix}$  (b) $A=\begin{bmatrix}1 & -3 & 0\\ 0 & 1 & 3\\ 2 & -10 & 2\end{bmatrix}$

Sol: (a)
$$A=\begin{bmatrix}1 & 2\\ 1 & 0\end{bmatrix}\xrightarrow{A_{1,2}^{(-1)}}\begin{bmatrix}1 & 2\\ 0 & -2\end{bmatrix}=U$$
$$\Rightarrow\ E_{1,2}^{(-1)}A=U\ \Rightarrow\ A=(E_{1,2}^{(-1)})^{-1}U=LU,\quad L=(E_{1,2}^{(-1)})^{-1}=\begin{bmatrix}1 & 0\\ 1 & 1\end{bmatrix}$$

(b)
$$A=\begin{bmatrix}1 & -3 & 0\\ 0 & 1 & 3\\ 2 & -10 & 2\end{bmatrix}\xrightarrow{A_{1,3}^{(-2)}}\begin{bmatrix}1 & -3 & 0\\ 0 & 1 & 3\\ 0 & -4 & 2\end{bmatrix}\xrightarrow{A_{2,3}^{(4)}}\begin{bmatrix}1 & -3 & 0\\ 0 & 1 & 3\\ 0 & 0 & 14\end{bmatrix}=U$$
$$\Rightarrow\ E_{2,3}^{(4)}E_{1,3}^{(-2)}A=U\ \Rightarrow\ A=(E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1}U=LU$$
$$L=(E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1}=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 2 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & -4 & 1\end{bmatrix}=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 2 & -4 & 1\end{bmatrix}$$

※ Note that we do not perform the G.-J. E. here; only elementary row operations that add a multiple of one row to another are used to derive the upper triangular matrix U
※ The inverses of these elementary matrices are lower triangular matrices, and the product of lower triangular matrices is still a lower triangular matrix, so the lower triangular matrix L can be derived in this way

Solving Ax = b with an LU-factorization of A (an important application of the LU-factorization):

If A = LU, then Ax = b becomes LUx = b. Let y = Ux; then Ly = b.

Two steps to solve Ax = b:
(1) Write y = Ux and solve Ly = b for y (forward substitution)
(2) Solve Ux = y for x (back substitution)


Ex 7: Solving a linear system using LU-factorization
$$\begin{cases}x_1-3x_2\phantom{+2x_3}=-5\\ \phantom{x_1-{}}x_2+3x_3=-1\\ 2x_1-10x_2+2x_3=-20\end{cases}$$

Sol:
$$A=\begin{bmatrix}1 & -3 & 0\\ 0 & 1 & 3\\ 2 & -10 & 2\end{bmatrix}=\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 2 & -4 & 1\end{bmatrix}\begin{bmatrix}1 & -3 & 0\\ 0 & 1 & 3\\ 0 & 0 & 14\end{bmatrix}=LU$$

(1) Let y = Ux, and solve Ly = b by forward substitution:
$$\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 2 & -4 & 1\end{bmatrix}\begin{bmatrix}y_1\\ y_2\\ y_3\end{bmatrix}=\begin{bmatrix}-5\\ -1\\ -20\end{bmatrix}\ \Rightarrow\ \begin{aligned}y_1&=-5\\ y_2&=-1\\ y_3&=-20-2y_1+4y_2=-20-2(-5)+4(-1)=-14\end{aligned}$$

(2) Solve Ux = y by back substitution:
$$\begin{bmatrix}1 & -3 & 0\\ 0 & 1 & 3\\ 0 & 0 & 14\end{bmatrix}\begin{bmatrix}x_1\\ x_2\\ x_3\end{bmatrix}=\begin{bmatrix}-5\\ -1\\ -14\end{bmatrix}$$
So $x_3=-1$, $x_2=-1-3x_3=-1-(3)(-1)=2$, and $x_1=-5+3x_2=-5+3(2)=1$.
Thus, the solution is
$$\mathbf{x}=\begin{bmatrix}1\\ 2\\ -1\end{bmatrix}$$

※ Similar to the method using A⁻¹ to solve systems, the LU-factorization is useful when you need to solve many systems with the same coefficient matrix: the LU-factorization is performed once and can then be used many times
※ The computational effort for the LU-factorization is almost the same as for the Gaussian elimination, so if you need to solve only one system of linear equations, just use the Gaussian elimination plus back substitution, or the Gauss-Jordan elimination, directly

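The two-step procedure of Ex 7 in plain Python (function names are my own); dividing by the diagonal entry makes the routines work for any nonsingular triangular matrices, not only unit-diagonal L:

```python
def forward_sub(L, b):
    """Solve Ly = b for lower triangular L, from the top row down."""
    y = []
    for i in range(len(b)):
        y.append((b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i])
    return y

def back_sub(U, y):
    """Solve Ux = y for upper triangular U, from the bottom row up."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[1, 0, 0], [0, 1, 0], [2, -4, 1]]
U = [[1, -3, 0], [0, 1, 3], [0, 0, 14]]
b = [-5, -1, -20]

y = forward_sub(L, b)      # y = [-5.0, -1.0, -14.0]
x = back_sub(U, y)         # x = [1.0, 2.0, -1.0]
```

To solve another system with the same A, only the two substitutions are repeated; L and U are reused.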
Keywords in Section 2.4:

row elementary matrix: 列基本矩陣

row equivalent: 列等價

lower triangular matrix: 下三角矩陣

upper triangular matrix: 上三角矩陣

LU-factorization: LU 分解

2.5 Applications of Matrix Operations

Stochastic ( 隨機 ) Matrices or Markov Chain processes in
Examples 1, 2, and 3

Least Squares Regression Analysis (Example 10 in the text book
vs. the linear regression in EXCEL)

Read the text book or “Applications in Ch2.pdf”
downloaded from my website

See “Example 10 for regression in Ch 2.xls” downloaded
from my website for example

Since the formula for the solution of the least squares regression
problem will be proved in Sec. 5.4, and the homework applying
least squares regression concerns the CAPM (which you will learn
later this semester in the Investments course), that homework will
be assigned after Sec. 5.4
