
Chapter 1: Matrices

Operations with Matrices

• Matrix:

$$A = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{bmatrix}_{m\times n} \in M_{m\times n}$$

(i, j)-th entry (or element): $a_{ij}$
number of rows: m
number of columns: n
size: m × n

• Square matrix: m = n

• Equal matrices: two matrices are equal if they have the same size
(m × n) and the entries in corresponding positions are equal.

For $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{m\times n}$,
$A = B$ if and only if $a_{ij} = b_{ij}$ for $1 \le i \le m$, $1 \le j \le n$.

• Ex 1: Equality of matrices

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

If $A = B$, then $a = 1$, $b = 2$, $c = 3$, and $d = 4$.

• Matrix addition:

If $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{m\times n}$,
then $A + B = [a_{ij}]_{m\times n} + [b_{ij}]_{m\times n} = [a_{ij} + b_{ij}]_{m\times n} = [c_{ij}]_{m\times n} = C$.

• Ex 2: Matrix addition

$$\begin{bmatrix} -1 & 2 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 1 & 3 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} -1+1 & 2+3 \\ 0-1 & 1+2 \end{bmatrix} = \begin{bmatrix} 0 & 5 \\ -1 & 3 \end{bmatrix}$$

$$\begin{bmatrix} 1 \\ 3 \\ -2 \end{bmatrix} + \begin{bmatrix} -1 \\ -3 \\ 2 \end{bmatrix} = \begin{bmatrix} 1-1 \\ 3-3 \\ -2+2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

• Scalar multiplication:

If $A = [a_{ij}]_{m\times n}$ and $c$ is a scalar,
then $cA = [ca_{ij}]_{m\times n}$.

• Matrix subtraction:

$A - B = A + (-1)B$

• Ex 3: Scalar multiplication and matrix subtraction

$$A = \begin{bmatrix} 1 & 2 & 4 \\ -3 & 0 & -1 \\ 2 & 1 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 2 & 0 & 0 \\ 1 & -4 & 3 \\ -1 & 3 & 2 \end{bmatrix}$$

Find (a) 3A, (b) −B, (c) 3A − B.

Sol:
(a)
$$3A = 3\begin{bmatrix} 1 & 2 & 4 \\ -3 & 0 & -1 \\ 2 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 3(1) & 3(2) & 3(4) \\ 3(-3) & 3(0) & 3(-1) \\ 3(2) & 3(1) & 3(2) \end{bmatrix} = \begin{bmatrix} 3 & 6 & 12 \\ -9 & 0 & -3 \\ 6 & 3 & 6 \end{bmatrix}$$

(b)
$$-B = (-1)\begin{bmatrix} 2 & 0 & 0 \\ 1 & -4 & 3 \\ -1 & 3 & 2 \end{bmatrix} = \begin{bmatrix} -2 & 0 & 0 \\ -1 & 4 & -3 \\ 1 & -3 & -2 \end{bmatrix}$$

(c)
$$3A - B = \begin{bmatrix} 3 & 6 & 12 \\ -9 & 0 & -3 \\ 6 & 3 & 6 \end{bmatrix} - \begin{bmatrix} 2 & 0 & 0 \\ 1 & -4 & 3 \\ -1 & 3 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 6 & 12 \\ -10 & 4 & -6 \\ 7 & 0 & 4 \end{bmatrix}$$

• Matrix multiplication:

If $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{n\times p}$,
then $AB = [a_{ij}]_{m\times n}[b_{ij}]_{n\times p} = [c_{ij}]_{m\times p} = C$.

A and B are multipliable only when the number of columns of A equals
the number of rows of B; the size of C = AB is m × p, where

$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}$$

※ The entry $c_{ij}$ is obtained by calculating the sum of the
entry-by-entry products of the i-th row of A and the j-th column of B.
• Ex 4: Find AB

$$A = \begin{bmatrix} -1 & 3 \\ 4 & -2 \\ 5 & 0 \end{bmatrix}_{3\times 2}, \quad B = \begin{bmatrix} -3 & 2 \\ -4 & 1 \end{bmatrix}_{2\times 2}$$

Sol:
$$AB = \begin{bmatrix} (-1)(-3)+(3)(-4) & (-1)(2)+(3)(1) \\ (4)(-3)+(-2)(-4) & (4)(2)+(-2)(1) \\ (5)(-3)+(0)(-4) & (5)(2)+(0)(1) \end{bmatrix}_{3\times 2} = \begin{bmatrix} -9 & 1 \\ -4 & 6 \\ -15 & 10 \end{bmatrix}_{3\times 2}$$

• Note: (1) BA is not multipliable, since B has 2 columns but A has 3 rows.
(2) Even when BA is multipliable, it can happen that AB ≠ BA.
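The row-by-column rule for $c_{ij}$ translates directly into a triple loop; a minimal sketch (the `mat_mul` name is ours), checked against Ex 4:

```python
def mat_mul(A, B):
    """Row-by-column product: c_ij = sum over k of a_ik * b_kj."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[-1, 3], [4, -2], [5, 0]]
B = [[-3, 2], [-4, 1]]
print(mat_mul(A, B))  # [[-9, 1], [-4, 6], [-15, 10]]
```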
• Matrix form of a system of m linear equations in n variables:

$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\ \qquad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m \end{cases}$$

The m equations can be written as the single matrix equation $A\mathbf{x} = \mathbf{b}$:

$$\underbrace{\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}}_{A\;(m\times n)} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}}_{\mathbf{x}\;(n\times 1)} = \underbrace{\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}}_{\mathbf{b}\;(m\times 1)}$$
• Ax is a linear combination of the column vectors of matrix A:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_n \end{bmatrix} \quad \text{and} \quad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

$$\Rightarrow A\mathbf{x} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{bmatrix} = x_1\begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2\begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n\begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}$$

$$= x_1\mathbf{c}_1 + x_2\mathbf{c}_2 + \cdots + x_n\mathbf{c}_n = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_n \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

So Ax can be viewed as the linear combination of the column vectors of
A with coefficients x1, x2, …, xn. You can derive the same result by
performing the matrix multiplication directly with A expressed in
column vectors and x.
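The column-combination view suggests computing Ax column by column; a sketch in plain Python with a made-up A and x (chosen for illustration, not from the text):

```python
def mat_vec_by_columns(A, x):
    """Compute Ax as the linear combination x1*c1 + ... + xn*cn of A's columns."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):          # for each column c_j of A ...
        for i in range(m):      # ... add x_j times that column
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 0, 2], [3, -2, 1]]     # hypothetical 2x3 example
x = [2, 1, -1]
print(mat_vec_by_columns(A, x))  # [0, 3]
```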
Properties of Matrix Operations

• Three basic matrix operations:
(1) matrix addition
(2) scalar multiplication
(3) matrix multiplication

• Zero matrix:
$$0_{m\times n} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}_{m\times n}$$

• Identity matrix of order n:
$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}_{n\times n}$$
• Properties of matrix addition:

Matrix addition is only possible between two matrices which have the
same size. The operation is done simply by adding the corresponding
elements, e.g.:

$$\begin{bmatrix} 1 & 3 \\ 4 & 7 \end{bmatrix} + \begin{bmatrix} 6 & 2 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 7 & 5 \\ 7 & 8 \end{bmatrix}$$
• Properties of matrix multiplication:

Multiplication of a matrix or a vector by a scalar is also
straightforward:

$$5\begin{bmatrix} 1 & 3 \\ 4 & 7 \end{bmatrix} = \begin{bmatrix} 5 & 15 \\ 20 & 35 \end{bmatrix}$$
• Ex 3: Matrix multiplication is associative

Calculate (AB)C and A(BC) for
$$A = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}, \quad \text{and } C = \begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}.$$

Sol:
$$(AB)C = \left(\begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}\right)\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} -5 & 4 & 0 \\ -1 & 2 & 3 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} 17 & 4 \\ 13 & 14 \end{bmatrix}$$

$$A(BC) = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}\left(\begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}\right) = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 3 & 8 \\ -7 & 2 \end{bmatrix} = \begin{bmatrix} 17 & 4 \\ 13 & 14 \end{bmatrix}$$

• Transpose of a matrix:

Taking the transpose of a matrix is similar to that of a vector:

$$\text{if } A = \begin{bmatrix} 1 & 3 & 8 \\ 4 & 7 & 2 \\ 6 & 5 & 0 \end{bmatrix}, \text{ then } A^T = \begin{bmatrix} 1 & 4 & 6 \\ 3 & 7 & 5 \\ 8 & 2 & 0 \end{bmatrix}$$

The diagonal elements of the matrix are unaffected, but the other
elements are swapped across the diagonal. A matrix which is the same
as its own transpose is called symmetric, and one which is the
negative of its own transpose is called skew-symmetric.
• Ex 8: Find the transpose of each matrix

$$\text{(a) } A = \begin{bmatrix} 2 \\ 8 \end{bmatrix} \qquad \text{(b) } A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \qquad \text{(c) } A = \begin{bmatrix} 0 & 1 \\ 2 & 4 \\ 1 & -1 \end{bmatrix}$$

Sol:
(a) $A^T = \begin{bmatrix} 2 & 8 \end{bmatrix}$

(b) $A^T = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}$

(c) $A^T = \begin{bmatrix} 0 & 2 & 1 \\ 1 & 4 & -1 \end{bmatrix}$
• Properties of transposes:

(1) $(A^T)^T = A$
(2) $(A + B)^T = A^T + B^T$
(3) $(cA)^T = c(A^T)$
(4) $(AB)^T = B^T A^T$

※ Properties (2) and (4) can be generalized to the sum or product of
multiple matrices. For example, $(A+B+C)^T = A^T + B^T + C^T$ and
$(ABC)^T = C^T B^T A^T$.

※ Since a real number can also be viewed as a 1 × 1 matrix, the
transpose of a real number is itself; that is, for $a \in \mathbb{R}$,
$a^T = a$. In other words, the transpose operation has no effect on
real numbers.
• Ex 9: Verify that (AB)^T and B^T A^T are equal

$$A = \begin{bmatrix} 2 & 1 & -2 \\ -1 & 0 & 3 \\ 0 & -2 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 3 & 1 \\ 2 & -1 \\ 3 & 0 \end{bmatrix}$$

Sol:
$$(AB)^T = \left(\begin{bmatrix} 2 & 1 & -2 \\ -1 & 0 & 3 \\ 0 & -2 & 1 \end{bmatrix}\begin{bmatrix} 3 & 1 \\ 2 & -1 \\ 3 & 0 \end{bmatrix}\right)^T = \begin{bmatrix} 2 & 1 \\ 6 & -1 \\ -1 & 2 \end{bmatrix}^T = \begin{bmatrix} 2 & 6 & -1 \\ 1 & -1 & 2 \end{bmatrix}$$

$$B^T A^T = \begin{bmatrix} 3 & 2 & 3 \\ 1 & -1 & 0 \end{bmatrix}\begin{bmatrix} 2 & -1 & 0 \\ 1 & 0 & -2 \\ -2 & 3 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 6 & -1 \\ 1 & -1 & 2 \end{bmatrix}$$
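The property $(AB)^T = B^T A^T$ can be checked with the matrices of Ex 9 (the helper names are ours, repeated so the snippet stands alone):

```python
def transpose(A):
    """Swap rows and columns: (A^T)_ij = A_ji."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def mat_mul(A, B):
    """Row-by-column matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1, -2], [-1, 0, 3], [0, -2, 1]]
B = [[3, 1], [2, -1], [3, 0]]
print(transpose(mat_mul(A, B)))             # (AB)^T
print(mat_mul(transpose(B), transpose(A)))  # B^T A^T -- identical
```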
• Ex 4: Noncommutativity of matrix multiplication

Show that AB and BA are not equal for the following matrices.
$$A = \begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}$$

Sol:
$$AB = \begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 5 \\ 4 & -4 \end{bmatrix}, \quad BA = \begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 7 \\ 4 & -2 \end{bmatrix}$$

$AB \ne BA$ (noncommutativity of matrix multiplication)
• Ex 5: An example in which cancellation is not valid

Show that AC = BC even though A ≠ B:
$$A = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 2 & 4 \\ 2 & 3 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix}$$

Sol:
$$AC = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} -2 & 4 \\ -1 & 2 \end{bmatrix}, \quad BC = \begin{bmatrix} 2 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix} = \begin{bmatrix} -2 & 4 \\ -1 & 2 \end{bmatrix}$$

So, although AC = BC, A ≠ B: C cannot be cancelled from both sides.
The Inverse of a Matrix

• Theorem 2.7: The inverse of a matrix is unique

If B and C are both inverses of the matrix A, then B = C.

Pf:
$AB = I$
$C(AB) = CI$
$(CA)B = C$  (associative property of matrix multiplication and the
property of the identity matrix)
$IB = C$
$B = C$
Consequently, the inverse of a matrix is unique.

• Notes:
(1) The inverse of A is denoted by $A^{-1}$
(2) $AA^{-1} = A^{-1}A = I$
• Finding the inverse of a matrix by Gauss-Jordan elimination:

$$[\,A \mid I\,] \xrightarrow{\text{Gauss-Jordan elimination}} [\,I \mid A^{-1}\,]$$

• Ex 2: Find the inverse of the matrix A

$$A = \begin{bmatrix} 1 & 4 \\ -1 & -3 \end{bmatrix}$$

Sol: solve $AX = I$:
$$\begin{bmatrix} 1 & 4 \\ -1 & -3 \end{bmatrix}\begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} x_{11} + 4x_{21} & x_{12} + 4x_{22} \\ -x_{11} - 3x_{21} & -x_{12} - 3x_{22} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

Equating corresponding entries gives two systems of linear equations.
They share the same coefficient matrix, which is exactly the matrix A:

$$(1)\;\begin{cases} x_{11} + 4x_{21} = 1 \\ -x_{11} - 3x_{21} = 0 \end{cases} \qquad (2)\;\begin{cases} x_{12} + 4x_{22} = 0 \\ -x_{12} - 3x_{22} = 1 \end{cases}$$

$$(1)\;\begin{bmatrix} 1 & 4 & 1 \\ -1 & -3 & 0 \end{bmatrix} \xrightarrow{A_{1,2}^{(1)},\;A_{2,1}^{(-4)}} \begin{bmatrix} 1 & 0 & -3 \\ 0 & 1 & 1 \end{bmatrix} \;\Rightarrow\; x_{11} = -3,\; x_{21} = 1$$

$$(2)\;\begin{bmatrix} 1 & 4 & 0 \\ -1 & -3 & 1 \end{bmatrix} \xrightarrow{A_{1,2}^{(1)},\;A_{2,1}^{(-4)}} \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & 1 \end{bmatrix} \;\Rightarrow\; x_{12} = -4,\; x_{22} = 1$$

Thus
$$X = A^{-1} = \begin{bmatrix} -3 & -4 \\ 1 & 1 \end{bmatrix}$$

Equivalently, perform the Gauss-Jordan elimination on the augmented
matrix $[\,A \mid I\,]$ with the same row operations.
• Ex 3: Find the inverse of the following matrix

$$A = \begin{bmatrix} 1 & -1 & 0 \\ 1 & 0 & -1 \\ -6 & 2 & 3 \end{bmatrix}$$

Sol:
$$[\,A \mid I\,] = \begin{bmatrix} 1 & -1 & 0 & 1 & 0 & 0 \\ 1 & 0 & -1 & 0 & 1 & 0 \\ -6 & 2 & 3 & 0 & 0 & 1 \end{bmatrix}$$

$$\xrightarrow{A_{1,2}^{(-1)},\;A_{1,3}^{(6)}} \begin{bmatrix} 1 & -1 & 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & -1 & 1 & 0 \\ 0 & -4 & 3 & 6 & 0 & 1 \end{bmatrix} \xrightarrow{A_{2,3}^{(4)},\;M_{3}^{(-1)}} \begin{bmatrix} 1 & -1 & 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & -1 & 1 & 0 \\ 0 & 0 & 1 & -2 & -4 & -1 \end{bmatrix}$$

$$\xrightarrow{A_{3,2}^{(1)},\;A_{2,1}^{(1)}} \begin{bmatrix} 1 & 0 & 0 & -2 & -3 & -1 \\ 0 & 1 & 0 & -3 & -3 & -1 \\ 0 & 0 & 1 & -2 & -4 & -1 \end{bmatrix} = [\,I \mid A^{-1}\,]$$

So the matrix A is invertible, and its inverse is
$$A^{-1} = \begin{bmatrix} -2 & -3 & -1 \\ -3 & -3 & -1 \\ -2 & -4 & -1 \end{bmatrix}$$

• Check it by yourselves: $AA^{-1} = A^{-1}A = I$
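The Gauss-Jordan procedure $[A \mid I] \to [I \mid A^{-1}]$ can be sketched in plain Python. This is a minimal version, assuming A is invertible and omitting the partial pivoting a production routine would use for numerical stability:

```python
def inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I].
    Assumes A is invertible; no pivoting for numerical stability."""
    n = len(A)
    # Build the augmented matrix [A | I] with float entries.
    M = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot in this column and swap it up.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]        # scale the pivot row to 1
        for r in range(n):                       # clear the column elsewhere
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                # right half is A^{-1}

A = [[1, -1, 0], [1, 0, -1], [-6, 2, 3]]
print(inverse(A))  # [[-2.0, -3.0, -1.0], [-3.0, -3.0, -1.0], [-2.0, -4.0, -1.0]]
```

Running it on the matrix of Ex 2 likewise reproduces $A^{-1}$ from that example.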
• Theorem 2.11: Systems of equations with a unique solution

If A is an invertible matrix, then the system of linear equations
Ax = b has a unique solution, given by $x = A^{-1}b$.

Pf:
$Ax = b$
$A^{-1}Ax = A^{-1}b$  (A is nonsingular)
$Ix = A^{-1}b$
$x = A^{-1}b$

If $x_1$ and $x_2$ were two solutions of the equation $Ax = b$, then
$Ax_1 = b = Ax_2 \Rightarrow x_1 = x_2$ (left cancellation property).
So this solution is unique.
• Ex 8: Use an inverse matrix to solve each system

(a) 2x + 3y + z = −1    (b) 2x + 3y + z = 4    (c) 2x + 3y + z = 0
    3x + 3y + z = 1         3x + 3y + z = 8        3x + 3y + z = 0
    2x + 4y + z = −2        2x + 4y + z = 5        2x + 4y + z = 0

Sol:
$$A = \begin{bmatrix} 2 & 3 & 1 \\ 3 & 3 & 1 \\ 2 & 4 & 1 \end{bmatrix} \xrightarrow{\text{Gauss-Jordan elimination}} A^{-1} = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}$$

(a) $x = A^{-1}b = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \\ -2 \end{bmatrix}$

(b) $x = A^{-1}b = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}\begin{bmatrix} 4 \\ 8 \\ 5 \end{bmatrix} = \begin{bmatrix} 4 \\ 1 \\ -7 \end{bmatrix}$

(c) $x = A^{-1}b = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$

※ This technique is very convenient when you face the problem of
solving several systems with the same coefficient matrix: once you
have $A^{-1}$, each system takes only one matrix-vector multiplication.

※ If you solve only one system, the computational effort is less for
Gaussian elimination plus back substitution, or for Gauss-Jordan
elimination.
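Once $A^{-1}$ is known, each right-hand side costs only one matrix-vector product, as the note above says; a sketch reusing the inverse from Ex 8 (the `mat_vec` name is ours):

```python
def mat_vec(A, x):
    """Matrix-vector product: dot each row of A with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# A^{-1} from Ex 8, reused to solve all three right-hand sides.
A_inv = [[-1, 1, 0], [-1, 0, 1], [6, -2, -3]]
for b in ([-1, 1, -2], [4, 8, 5], [0, 0, 0]):
    print(mat_vec(A_inv, b))
# [2, -1, -2] then [4, 1, -7] then [0, 0, 0]
```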
• LU-factorization (or LU-decomposition):

If the n×n matrix A can be written as the product of a lower
triangular matrix L and an upper triangular matrix U, then $A = LU$ is
an LU-factorization of A.

$$\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
3 × 3 lower triangular matrix: all entries above the principal diagonal are zero

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$$
3 × 3 upper triangular matrix: all entries below the principal diagonal are zero
• Ex 5 and 6: LU-factorization

$$\text{(a) } A = \begin{bmatrix} 1 & 2 \\ 1 & 0 \end{bmatrix} \qquad \text{(b) } A = \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 2 & -10 & 2 \end{bmatrix}$$

Sol: (a)
$$A = \begin{bmatrix} 1 & 2 \\ 1 & 0 \end{bmatrix} \xrightarrow{A_{1,2}^{(-1)}} \begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix} = U$$

$$\Rightarrow E_{1,2}^{(-1)}A = U \;\Rightarrow\; A = (E_{1,2}^{(-1)})^{-1}U = LU$$

$$\Rightarrow L = (E_{1,2}^{(-1)})^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$$

(b)
$$A = \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 2 & -10 & 2 \end{bmatrix} \xrightarrow{A_{1,3}^{(-2)}} \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & -4 & 2 \end{bmatrix} \xrightarrow{A_{2,3}^{(4)}} \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 14 \end{bmatrix} = U$$

$$\Rightarrow E_{2,3}^{(4)}E_{1,3}^{(-2)}A = U \;\Rightarrow\; A = (E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1}U = LU$$

$$\Rightarrow L = (E_{1,3}^{(-2)})^{-1}(E_{2,3}^{(4)})^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -4 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & -4 & 1 \end{bmatrix}$$

(Here $E_{i,j}^{(k)}$ denotes the elementary matrix for the row
operation $A_{i,j}^{(k)}$: add k times row i to row j.)
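The elimination steps above amount to a Doolittle-style LU-factorization: each multiplier used to create a zero is recorded in L. A sketch (the `lu_factor` name is ours), assuming no zero pivots arise, as in these examples:

```python
def lu_factor(A):
    """Doolittle LU-factorization without pivoting (a sketch; assumes no
    zero pivots arise during elimination)."""
    n = len(A)
    U = [list(map(float, row)) for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        for row in range(col + 1, n):
            f = U[row][col] / U[col][col]   # multiplier, recorded in L
            L[row][col] = f
            U[row] = [u - f * v for u, v in zip(U[row], U[col])]
    return L, U

L, U = lu_factor([[1, -3, 0], [0, 1, 3], [2, -10, 2]])
print(L)  # [[1, 0, 0], [0, 1, 0], [2, -4, 1]] (as floats)
print(U)  # [[1, -3, 0], [0, 1, 3], [0, 0, 14]] (as floats)
```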
• Solving Ax = b with an LU-factorization of A (an important
application of the LU-factorization):

$Ax = b$. If $A = LU$, then $LUx = b$.
Let $y = Ux$; then $Ly = b$.

• Two steps to solve Ax = b:

(1) Write y = Ux and solve Ly = b for y (forward substitution)
(2) Solve Ux = y for x (back substitution)
• Ex 7: Solving a linear system using LU-factorization

x1 − 3x2       = −5
      x2 + 3x3 = −1
2x1 − 10x2 + 2x3 = −20

Sol:
$$A = \begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 2 & -10 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & -4 & 1 \end{bmatrix}\begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 14 \end{bmatrix} = LU$$

(1) Let y = Ux, and solve Ly = b by forward substitution:
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & -4 & 1 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} -5 \\ -1 \\ -20 \end{bmatrix} \;\Rightarrow\; \begin{aligned} y_1 &= -5 \\ y_2 &= -1 \\ y_3 &= -20 - 2y_1 + 4y_2 = -20 - 2(-5) + 4(-1) = -14 \end{aligned}$$

(2) Solve the system Ux = y by back substitution:
$$\begin{bmatrix} 1 & -3 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 14 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -5 \\ -1 \\ -14 \end{bmatrix}$$

So $x_3 = -1$,
$x_2 = -1 - 3x_3 = -1 - (3)(-1) = 2$,
$x_1 = -5 + 3x_2 = -5 + 3(2) = 1$.

Thus, the solution is
$$\mathbf{x} = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$$

※ Similar to the method using $A^{-1}$ to solve systems, the
LU-factorization is useful when you need to solve many systems with
the same coefficient matrix. In that scenario, the LU-factorization is
performed once and can be reused many times.

※ The computational effort for the LU-factorization is almost the
same as that for Gaussian elimination, so if you need to solve only
one system of linear equations, just use Gaussian elimination plus
back substitution, or Gauss-Jordan elimination, directly.
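The two substitution steps can be sketched directly (the helper names are ours), using the L, U, and b from Ex 7:

```python
def forward_sub(L, b):
    """Solve Ly = b for lower-triangular L, top row first."""
    y = []
    for i, row in enumerate(L):
        y.append((b[i] - sum(row[j] * y[j] for j in range(i))) / row[i])
    return y

def back_sub(U, y):
    """Solve Ux = y for upper-triangular U, bottom row first."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[1, 0, 0], [0, 1, 0], [2, -4, 1]]
U = [[1, -3, 0], [0, 1, 3], [0, 0, 14]]
b = [-5, -1, -20]
y = forward_sub(L, b)  # [-5.0, -1.0, -14.0]
print(back_sub(U, y))  # [1.0, 2.0, -1.0]
```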
