
Definition of invertible matrices.

Uniqueness of inverse.

Some laws involving inverses.


Representing an elementary row operation by a matrix.

Definition of an elementary matrix.

Elementary matrices are invertible and their inverses are also elementary matrices.
Elementary matrices (cont’d)
Determinants
Linear system X and linear system Y have the same solution set when their augmented matrices are row equivalent:

augmented matrix ( A | b ) of linear system X  --(elementary row operations)-->  augmented matrix ( C | d ) of linear system Y

We want to show that Ax = b and Cx = d have the same solution set.
It suffices to consider a single step:

augmented matrix ( A | b ) of linear system X  --(one elementary row operation)-->  augmented matrix ( C | d ) of linear system Y

We want to show that Ax = b and Cx = d have the same solution set.
Suppose ( A | b ) becomes ( C | d ) after one elementary row operation.

There exists an elementary matrix E such that

( C | d ) = E( A | b )  ⇒  ( C | d ) = ( EA | Eb )  ⇒  C = EA and d = Eb

Let u be a solution to Ax = b.

Au = b  ⇒  EAu = Eb  ⇒  Cu = d

⇒ u is a solution to Cx = d.
Conversely, let v be a solution to Cx = d. Since E⁻¹C = A and E⁻¹d = b,

Cv = d  ⇒  E⁻¹Cv = E⁻¹d  ⇒  Av = b

⇒ v is a solution to Ax = b.

So Ax = b and Cx = d have the same solution set.
If A is a square matrix, then the following statements
are equivalent.
(If one of them is true, so are the rest. If one of
them is false, so are the rest.)
1) A is invertible.

2) Ax = 0 has only the trivial solution.

3) The reduced row-echelon form of A is I.

4) A can be expressed as a product of elementary matrices.
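As a quick numerical illustration of the first two equivalences, here is a sketch using NumPy; the sample matrices are chosen for illustration and are not from the lecture:

```python
import numpy as np

# Sample matrices (chosen for illustration, not from the lecture).
A = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible
B = np.array([[1.0, 2.0], [2.0, 4.0]])   # singular (row 2 = 2 * row 1)

# 1) A is invertible exactly when it has full rank.
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(B) == 1

# 2) Ax = 0 has only the trivial solution for the invertible A ...
x = np.linalg.solve(A, np.zeros(2))
assert np.allclose(x, 0)

# ... while the singular B also has nontrivial solutions, e.g. x = (2, -1):
assert np.allclose(B @ np.array([2.0, -1.0]), 0)
```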
So what if we know A is invertible? How can we find A⁻¹? The proof of the previous theorem actually tells us how...
Recall that for any matrix A, there exist elementary matrices E1, E2, ..., Ek such that

Ek Ek−1 ... E1 A

is the reduced row-echelon form of A.

If A is invertible, we know that

Ek Ek−1 ... E1 A = In

By the '50%' remark in Lecture 05, since (Ek Ek−1 ... E1) and A are both square matrices of the same size, we can conclude that

(Ek Ek−1 ... E1) = A⁻¹

If A is a square matrix of order n, consider the following n × 2n matrix:

( A | In )

What if we premultiply this n × 2n matrix by (Ek Ek−1 ... E1), where Ek Ek−1 ... E1 A = In and (Ek Ek−1 ... E1) = A⁻¹?

(Ek Ek−1 ... E1)( A | In )
  = ( Ek Ek−1 ... E1 A | Ek Ek−1 ... E1 In )
  = ( In | Ek Ek−1 ... E1 )
  = ( In | A⁻¹ )
This provides us with a way to find the inverse
of an invertible matrix A.
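The procedure just described can be sketched in code: row-reduce ( A | I ) to ( I | A⁻¹ ) and read off the right half. This is an illustrative implementation, not the lecture's own; the pivoting strategy and tolerance are assumptions:

```python
import numpy as np

def inverse_via_rref(A):
    """Invert A by row-reducing the n x 2n matrix [A | I] to [I | A^-1].

    Returns None if A is singular (its RREF is not the identity).
    A minimal sketch: partial pivoting, fixed tolerance.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])           # the n x 2n matrix [A | I]
    for col in range(n):
        # Pick the largest available pivot in this column.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[pivot, col]) < 1e-12:
            return None                     # A is singular
        M[[col, pivot]] = M[[pivot, col]]   # interchange rows
        M[col] /= M[col, col]               # scale pivot row to a leading 1
        for r in range(n):                  # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                         # the right half is A^-1

A = np.array([[2.0, 1.0], [1.0, 1.0]])
assert np.allclose(inverse_via_rref(A) @ A, np.eye(2))
assert inverse_via_rref(np.array([[1.0, 2.0], [2.0, 4.0]])) is None
```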

Question:
What happens if the matrix A is not invertible?
When A is singular, the same computation gives

(Ek Ek−1 ... E1)( A | In ) = ( Ek Ek−1 ... E1 A | Ek Ek−1 ... E1 ) = ( R | Ek Ek−1 ... E1 )

Answer:
If A is singular, then its reduced row-echelon form R will not be the identity matrix.
Determine if the following matrix is invertible and, if so, find its inverse.

    ( 2 1 0 )
A = ( 1 2 0 )
    ( 3 1 1 )
Determine if the following matrix is invertible and, if so, find its inverse.

    (  1  1  1  1 )
A = ( −1  2  6  3 )
    ( −1  2  6  4 )
    (  1  1  1  0 )

Form the augmented matrix ( A | I4 ):

(  1  1  1  1 | 1 0 0 0 )
( −1  2  6  3 | 0 1 0 0 )
( −1  2  6  4 | 0 0 1 0 )
(  1  1  1  0 | 0 0 0 1 )

R2 + R1, R3 + R1, R4 − R1:

( 1 1 1  1 |  1 0 0 0 )
( 0 3 7  4 |  1 1 0 0 )
( 0 3 7  5 |  1 0 1 0 )
( 0 0 0 −1 | −1 0 0 1 )

R3 − R2:

( 1 1 1  1 |  1  0 0 0 )
( 0 3 7  4 |  1  1 0 0 )
( 0 0 0  1 |  0 −1 1 0 )
( 0 0 0 −1 | −1  0 0 1 )

The row-echelon form of A has only 3 leading entries.

R4 + R3:

( 1 1 1 1 |  1  0 0 0 )
( 0 3 7 4 |  1  1 0 0 )
( 0 0 0 1 |  0 −1 1 0 )
( 0 0 0 0 | −1 −1 1 1 )

The reduced row-echelon form of A can never be I4, so A is singular.
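The conclusion can be double-checked numerically. The minus signs below are inferred from the displayed row operations, so treat the entries as a reconstruction:

```python
import numpy as np

# The 4x4 matrix from the example (signs inferred from the row reduction).
A = np.array([[ 1.0, 1.0, 1.0, 1.0],
              [-1.0, 2.0, 6.0, 3.0],
              [-1.0, 2.0, 6.0, 4.0],
              [ 1.0, 1.0, 1.0, 0.0]])

# The row-echelon form has only 3 leading entries, so rank(A) = 3 < 4
# and A is singular.
assert np.linalg.matrix_rank(A) == 3
assert abs(np.linalg.det(A)) < 1e-9
```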
We have already seen (from Lecture 5) that:

If ad − bc ≠ 0, then A = ( a b ; c d ) is invertible and

A⁻¹ = (  d/(ad − bc)   −b/(ad − bc) )
      ( −c/(ad − bc)    a/(ad − bc) )

We will now show:

If A = ( a b ; c d ) is invertible, then ad − bc ≠ 0.
Case 1: a = 0 and c = 0.

A = ( a b ; c d ) = ( 0 b ; 0 d )

This matrix will not have I2 as its reduced row-echelon form and so is not invertible.

So we do not need to consider this case, since the hypothesis "If A is invertible" is not satisfied.
Case 2: a ≠ 0 or c ≠ 0. First suppose a ≠ 0.

A = ( a b ; c d )  --(R2 − (c/a)R1)-->  ( a  b ; 0  d − cb/a )  =  ( a  b ; 0  (ad − bc)/a )

So if A is invertible, we must have two leading entries, and thus (ad − bc)/a ≠ 0 (that is, ad − bc ≠ 0).
Case 2 (cont'd): Now suppose a = 0 and c ≠ 0.

A = ( 0 b ; c d )  --(R1 ↔ R2)-->  ( c d ; 0 b )

So if A is invertible, we must have two leading entries, and thus b ≠ 0. For this case, this implies ad − bc = −bc ≠ 0.
Combining the two results:

If ad − bc ≠ 0, then A = ( a b ; c d ) is invertible (with the inverse given above).

If A = ( a b ; c d ) is invertible, then ad − bc ≠ 0.

Therefore A = ( a b ; c d ) is invertible if and only if ad − bc ≠ 0.
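The if-and-only-if criterion can be packaged as a small function (a sketch; the function name and sample matrices are illustrative):

```python
import numpy as np

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the ad - bc formula; None if singular."""
    det = a * d - b * c
    if det == 0:
        return None                          # invertible iff ad - bc != 0
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.0, 2.0], [3.0, 4.0]])       # ad - bc = -2 != 0
assert np.allclose(inverse_2x2(1, 2, 3, 4) @ A, np.eye(2))
assert inverse_2x2(1, 2, 2, 4) is None       # ad - bc = 0: singular
```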
Let A and B be square matrices of the same size. If AB = I, then

BA = I,  A = B⁻¹,  B = A⁻¹

Proof:
Consider the homogeneous linear system Bx = 0.

Strategy: If we can show Bx = 0 has only the trivial solution, then B is invertible.
Let u be a solution to Bx = 0.

Bu = 0  ⇒  ABu = A0  ⇒  Iu = 0  ⇒  u = 0

So Bx = 0 has only the trivial solution u = 0, and thus B is invertible (that is, B⁻¹ exists).
Then

AB = I  ⇒  ABB⁻¹ = IB⁻¹  ⇒  AI = B⁻¹  ⇒  A = B⁻¹

Since A = B⁻¹, A is invertible and

A⁻¹ = (B⁻¹)⁻¹ = B

Finally,

BA = A⁻¹A = I
If A is a square matrix such that

A² − 6A + 8I = 0,

prove that A is invertible.

A² − 6A + 8I = 0  ⇒  A² − 6A = −8I
                  ⇒  A(A − 6) = −8I    (something wrong?? A − 6 is not defined)
                  ⇒  A(A − 6I) = −8I

So A[−(1/8)(A − 6I)] = I, and thus A⁻¹ = −(1/8)(A − 6I) = (1/8)(6I − A).
If A and B are both square matrices of the same
size and B is singular, then

AB and BA are both singular.


Let A be a p × m matrix and let Ei be an m × m elementary matrix obtained from Im:

E1: multiply row i of Im by c (cRi)
E2: interchange rows i and j of Im (Ri ↔ Rj)
E3: add k times row i to row j of Im (Rj + kRi)

Then postmultiplication performs the corresponding column operation:

AE1: the matrix resulting from multiplying column i of A by c
AE2: the matrix resulting from interchanging columns i and j of A
AE3: the matrix resulting from adding k times column j of A to column i

Note the 'reversal' of the roles of i and j in AE3.
1 4 1 2 
A   0 3 2 3 
 
1 1 0 0 
 
2 times column 3

1 0 0 0
1 4 2 2  0 1 0 0
AE1   0 3 4 3  E1    (2 R3 )
  0 0 2 0
1 1 0 0  0
 
 0 0 1 
1 4 1 2 
A   0 3 2 3 
 
1 1 0 0 
 
interchange
columns 1 and 4
0 0 0 1
 2 4 1 1  0 1 0 0
AE2   3 3 2 0  E2    (R1  R4 )
  0 0 1 0
0 1 0 1 1
   0 0 0 
1 4 1 2 
A   0 3 2 3 
 
1 1 0 0 
 
column 3 minus 2
times column 2
1 0 0 0
 1 4 9 2  0 1 2 0
AE3   0 3 8 3  E3    (R2  2 R3 )
  0 0 1 0
 1 1 2 0  0
   0 0 1 
Recall that we have shown:

A = ( a b ; c d ) is invertible if and only if ad − bc ≠ 0.

The quantity ad − bc is known as the determinant of ( a b ; c d ).
Let A = (aij) be a square matrix of order n. Let Mij be the square matrix of order n − 1 obtained by removing the ith row and jth column of A.

    ( 2 1 3 0 )          ( 1 3 1 )
A = ( 4 1 3 1 )    M11 = ( 4 3 1 )
    ( 1 4 3 1 )          ( 0 1 1 )
    ( 0 0 1 1 )
                         ( 2 3 0 )
                   M32 = ( 4 3 1 )
                         ( 0 1 1 )
Let A = (aij) be a square matrix of order n, and let Mij be the square matrix of order n − 1 obtained by removing the ith row and jth column of A. The determinant of A is defined as

det(A) = a11                                    if n = 1
det(A) = a11 A11 + a12 A12 + ... + a1n A1n      if n ≥ 2

where Aij = (−1)^(i+j) det(Mij).

Aij = (−1)^(i+j) det(Mij) is called the (i, j)-cofactor of A.
To know the determinant of an n × n matrix, we need to know the determinants of (n − 1) × (n − 1) matrices... To know the determinant of an (n − 1) × (n − 1) matrix, we need to know the determinants of (n − 2) × (n − 2) matrices... (wow! this is complicated!)
This is known as cofactor expansion.


The determinant of A = (aij) is usually written as

| a11 a12 ... a1n |
| a21 a22 ... a2n |
|  ⋮   ⋮       ⋮  |
| an1 an2 ... ann |
For A = ( a b ; c d ):

M11 = ( d ),  A11 = (−1)^(1+1) det( d ) = d
M12 = ( c ),  A12 = (−1)^(1+2) det( c ) = −c

det(A) = aA11 + bA12 = ad − bc — we have seen this expression before!

If A = ( a b ; c d ), then A is invertible if and only if det(A) ≠ 0.
1 3 4
Evaluate 2 4 1 .
4 2 9

By cofactor expansion,
1 3 4
11
4 1 2 1
2 4 1  (1)  (1) (3)  (1)1 2
2 9 4 9
4 2 9
13
2 4
(4)  (1)
4 2
By cofactor expansion,
1 3 4
11
4 1 2 1
2 4 1  (1)  (1) (3)  (1)1 2
2 9 4 9
4 2 9
13
2 4
(4)  (1)
4 2

 (4  9 1 2) 3(2  9 1 4)


4(2  2  4  4)

 0.
What is the determinant of the following matrix?

    ( a b c )
A = ( d e f )
    ( g h i )

Answer:
(Mnemonic: copy the first two columns to the right of the matrix and form the diagonal products.)

a b c | a b
d e f | d e
g h i | g h

det(A) = aei + bfg + cdh − ceg − afh − bdi

Verify this expression using cofactor expansion!
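The 3 × 3 formula can also be checked against a library determinant (a sketch; the sample matrix is the one from the earlier worked example, with reconstructed signs):

```python
import numpy as np

def det3_sarrus(M):
    """aei + bfg + cdh - ceg - afh - bdi for a 3 x 3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i

M = np.array([[1.0, 3.0, 4.0], [-2.0, 4.0, -1.0], [4.0, 2.0, 9.0]])
assert np.isclose(det3_sarrus(M), np.linalg.det(M))   # both give 0 here
```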
What is the determinant of the following matrix?

    ( a b c d )
A = ( e f g h )
    ( i j k l )
    ( m n o p )

Answer:
No 'special formula'! Use cofactor expansion!
    ( a11 a12 ... a1n )
A = ( a21 a22 ... a2n )
    (  ⋮   ⋮       ⋮  )
    ( an1 an2 ... ann )

det(A) = a11                                    if n = 1
det(A) = a11 A11 + a12 A12 + ... + a1n A1n      if n ≥ 2

This is actually performing cofactor expansion along the first row of A.
It turns out that we can compute det(A) by performing cofactor expansion along any row or any column of A:

det(A) = ai1 Ai1 + ai2 Ai2 + ... + ain Ain    (cofactor expansion along the ith row)
       = a1j A1j + a2j A2j + ... + anj Anj    (cofactor expansion along the jth column)
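The definition translates directly into a short recursive function, and expanding along different rows gives the same answer. This is an illustrative sketch (exponential time, so only for small matrices):

```python
def det_cofactor(A, row=0):
    """Determinant by cofactor expansion along the given row (0-indexed).

    A minimal recursive sketch of the definition; exponential time.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # M_ij: delete the chosen row and column j.
        minor = [r[:j] + r[j+1:] for k, r in enumerate(A) if k != row]
        cofactor = (-1) ** (row + j) * det_cofactor(minor)
        total += A[row][j] * cofactor
    return total

A = [[1, 3, 4], [-2, 4, -1], [4, 2, 9]]
# Expansion along any row gives the same value (0 for this matrix).
assert det_cofactor(A, 0) == det_cofactor(A, 1) == det_cofactor(A, 2) == 0
```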
1 3 4
Check that 2 4 1  0 by cofactor expansion
4 2 9
along second row.

1 3 4
21
3 4 2 2
1 4
2 4 1  2  (1)  4  (1)
2 9 4 9
4 2 9
2 3
1 3
1 (1)
4 2
1 3 4
21
3 4 2 2
1 4
2 4 1  2  (1)  4  (1)
2 9 4 9
4 2 9
2 3
1 3
1 (1)
4 2

 2(27  8)  4(9 16) 1(2  12)

 38  28 10
0
Try cofactor expansion along another row or
column!
If A is a triangular matrix, then det(A) is the product of its diagonal entries. Proof is omitted.

For example, a triangular matrix with a 0 on its diagonal has det(A) = 0, while a triangular matrix with diagonal entries 2, 3, 1/2, 1, 4 has det(A) = 2 · 3 · (1/2) · 1 · 4 = 12.
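A quick numerical illustration (the matrix is chosen for this sketch, not taken from the slides; the entries above the diagonal do not affect the result):

```python
import numpy as np

# For a triangular matrix, det(A) is the product of the diagonal entries.
A = np.array([[2.0, 5.0, -1.0],
              [0.0, 3.0,  7.0],
              [0.0, 0.0,  0.5]])
assert np.isclose(np.linalg.det(A), 2.0 * 3.0 * 0.5)   # = 3.0

# A zero on the diagonal forces det(A) = 0.
A[1, 1] = 0.0
assert np.isclose(np.linalg.det(A), 0.0)
```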
If A is a square matrix, then

det(A) = det(Aᵀ).

Proof is omitted.

|  1  3  4 |         | 1 −2 4 |
| −2  4 −1 |  =  0   | 3  4 2 |  =  0
|  4  2  9 |         | 4 −1 9 |
The determinant of a square matrix with two identical rows is 0. The determinant of a square matrix with two identical columns is 0. Proof is omitted.

| 1 0 1 1 |
| 2 3 2 3 |                                  | 1 0 1 |
| 4 1 4 3 |  =  0  (columns 1 and 3)         | 1 0 1 |  =  0  (rows 1 and 2)
| 1 2 1 0 |                                  | 1 1 1 |
The three theorems we have stated without proofs can all be proven using a technique known as Mathematical Induction.

If you are interested, you are encouraged to figure out the proofs.
Lecture 07:
Determinants (till end of Chapter 2)
