CHAPTER 1
1.1)
The general system of m linear equations in n unknowns

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm

has augmented matrix

[ a11 a12 ... a1n | b1 ]
[ a21 a22 ... a2n | b2 ]
[ ...                  ]
[ am1 am2 ... amn | bm ]
3x1 7 x2 2 x3 x4 3x5 2
4 x3 2 x 4
6
2x1
4 x1 5x2 4 x3
x5 0
2 x1 x2 3x3 x4 0
2 x3 3 x 4 0
x1
3x1 x2 4 x3
0
Example 3: Find the augmented matrix for the system of linear equations
x1 + 2x3 = 3
2x1 + x2 + 3x3 = 4
3x1 + 4x2 + 2x3 = 6

Solution:
[ 1 0 2 | 3 ]
[ 2 1 3 | 4 ]
[ 3 4 2 | 6 ]

Example 4: Find a system of linear equations corresponding to the augmented matrix
[ 3 0 2 | 5 ]
[ 7 1 4 | 3 ]
[ 0 2 1 | 7 ]

Solution:
3x1 + 2x3 = 5
7x1 + x2 + 4x3 = 3
2x2 + x3 = 7
A system of linear equations has either
(I) no solution,
(II) exactly one solution, or
(III) infinitely many solutions.
For a system of two equations in two unknowns, the three cases correspond to:
Case (I): the two lines are parallel (no solution)
Case (II): the two lines intersect (exactly one solution)
Case (III): the two lines coincide (infinitely many solutions)
Example 5:
Case (I):   l1: x + 2y = 3,  l2: 2x + 4y = 1
Case (II):  l1: x + 2y = 3,  l2: 2x + 3y = 5
Case (III): l1: x + 2y = 3,  l2: 2x + 4y = 6
A homogeneous system has either
Case (a): only the trivial solution, or
Case (b): infinitely many solutions in addition to the trivial solution.
Example 6:
Case (a): l1: x + 2y = 0,  l2: 2x + 3y = 0
Case (b): l1: x + 2y = 0,  l2: 2x + 4y = 0
Theorem 1.1 Any homogeneous system of linear equations with more unknowns than
equations has infinitely many solutions.
Example 7: Consider the homogeneous system of 2 equations in 3 unknowns
2x1 + 3x2 + 4x3 = 0
x1 + 2x2 + 5x3 = 0
This system has infinitely many solutions.
Note that the equations represent two planes passing through the origin, and they
intersect in a line.
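Theorem 1.1 can be checked mechanically: row-reduce the coefficient matrix and count the pivots; with more unknowns than equations there is always at least one free variable, hence infinitely many solutions. A minimal sketch in Python (the `rank` helper is illustrative, and the all-positive coefficients are an assumption, since the signs in this copy of Example 7 are not legible):

```python
from fractions import Fraction

def rank(rows):
    """Rank via forward elimination to row-echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # next pivot row
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

coeffs = [[2, 3, 4], [1, 2, 5]]   # signs assumed
print(rank(coeffs), "pivots for", len(coeffs[0]), "unknowns")  # 2 pivots, so 1 free variable
```

Two pivots for three unknowns leaves one free parameter, which is exactly the line of intersection of the two planes.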
Solution:
(a)
(b)
(c)
1.2)
Gaussian Elimination
Elementary row operations on a 2 x 2 matrix:

[ 1 2 ]   R2 <- 3R2           [ 1  2 ]
[ 3 4 ]   ---------------->   [ 9 12 ]

[ 1 2 ]   R1 <-> R2           [ 3  4 ]
[ 3 4 ]   ---------------->   [ 1  2 ]

[ 1 2 ]   R2 <- R2 + (-3)R1   [ 1  2 ]
[ 3 4 ]   ---------------->   [ 0 -2 ]
      [ 1  1  2  9 ]
(a)   [ 2  4 -3  1 ]   R2 <- 3R2
      [ 3  6 -5  0 ]

      [ 1  1  2  9 ]
(b)   [ 2  4 -3  1 ]   R1 <-> R2
      [ 3  6 -5  0 ]

      [ 1  1  2  9 ]
(c)   [ 2  4 -3  1 ]   R2 <- R2 + (-2)R1
      [ 3  6 -5  0 ]
Example 3
Perform the indicated elementary row operations on the matrix

[ 1  1  2  9 ]
[ 2  4 -3  1 ]   R2 <- R2 + (-2)R1,  R3 <- R3 + (-3)R1
[ 3  6 -5  0 ]
1 2 3 4
0 2 1 1
0 0 1 3
1 2 1 2
0
0 0 0
0 1 2 4
1 2 1 4
3
0 1 0
0 0 1 2
1 5
0 0
0 0
0 0
REF
0 1 0 5
0 0 1 3
0 0 0 0
0
0
0
1
0
0
2 1 3
1 3 2
0 1
4
0 0
1
0 1
0 2
1 3
0 0
RREF
Example 5
Determine whether the following matrix is in row-echelon form, reduced row-echelon
form or neither
1 0 0 0
1 1 0
1 1 0
0 1 1 0
(a) 0 2 0
(b) 0 1 0
(c)
0 0 0 1
0 0 0
0 3 0
0 0 0 0
0
(d)
0
1 3
0 0
0 0
0 0
0
1
0
(e)
0
0
1
0
0
3
0
0
0
0
1
0
(f)
0
2
1
0
0
0
1
0
0
0
1
Example 6 For the following 3 cases, obtain an augmented matrix and reduce it to
REF and/or RREF
Case (I) l1: x + 2y = 3,  l2: 2x + 4y = 1 (No solution)

[ 1 2 | 3 ]
[ 2 4 | 1 ]

R2 <- R2 + (-2)R1:
[ 1 2 |  3 ]
[ 0 0 | -5 ]

R2 <- (-1/5)R2:
[ 1 2 | 3 ]   (REF)
[ 0 0 | 1 ]

R1 <- R1 + (-3)R2:
[ 1 2 | 0 ]   (RREF)
[ 0 0 | 1 ]
Case (II) l1: x + 2y = 3,  l2: 2x + 3y = 5 (Exactly one solution)

[ 1 2 | 3 ]
[ 2 3 | 5 ]

R2 <- R2 + (-2)R1:
[ 1  2 |  3 ]
[ 0 -1 | -1 ]

R2 <- (-1)R2:
[ 1 2 | 3 ]   (REF)
[ 0 1 | 1 ]

R1 <- R1 + (-2)R2:
[ 1 0 | 1 ]   (RREF)
[ 0 1 | 1 ]

Note: The solution is x = 1, y = 1
Case (III) l1: x + 2y = 3,  l2: 2x + 4y = 6 (Infinitely many solutions)

[ 1 2 | 3 ]
[ 2 4 | 6 ]

R2 <- R2 + (-2)R1:
[ 1 2 | 3 ]   (REF and RREF)
[ 0 0 | 0 ]
What observation can you make regarding the RREF in each of these cases?
Example 7 Solve the system by Gaussian elimination
x1 + x2 + 2x3 = 9
2x1 + 4x2 - 3x3 = 1
3x1 + 6x2 - 5x3 = 0
Solution:

[ 1 1  2 |  9 ]
[ 2 4 -3 |  1 ]
[ 3 6 -5 |  0 ]

R2 <- R2 + (-2)R1,  R3 <- R3 + (-3)R1:
[ 1 1   2 |   9 ]
[ 0 2  -7 | -17 ]
[ 0 3 -11 | -27 ]

R2 <- (1/2)R2:
[ 1 1    2 |     9 ]
[ 0 1 -7/2 | -17/2 ]
[ 0 3  -11 |   -27 ]

R3 <- R3 + (-3)R2:
[ 1 1    2 |     9 ]
[ 0 1 -7/2 | -17/2 ]
[ 0 0 -1/2 |  -3/2 ]

R3 <- (-2)R3:
[ 1 1    2 |     9 ]   (row-echelon form)
[ 0 1 -7/2 | -17/2 ]
[ 0 0    1 |     3 ]

The corresponding system is
x1 + x2 + 2x3 = 9
x2 - (7/2)x3 = -17/2
x3 = 3
Back substitution gives
x1 = 1
x2 = 2
x3 = 3
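The elimination above can be automated. A small Gauss-Jordan sketch in Python with exact rational arithmetic (the function name and structure are illustrative, not from the notes):

```python
from fractions import Fraction

def gauss_jordan_solve(aug):
    """Reduce an augmented matrix [A | b] to RREF and read off the
    unique solution (assumes A is square and invertible)."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for col in range(n):
        # find a pivot row and swap it into place
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # scale the pivot row so the pivot entry is 1
        m[col] = [x / m[col][col] for x in m[col]]
        # eliminate the pivot column from every other row
        for r in range(n):
            if r != col:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    return [m[r][n] for r in range(n)]

sol = gauss_jordan_solve([[1, 1, 2, 9], [2, 4, -3, 1], [3, 6, -5, 0]])
print(sol == [1, 2, 3])  # True: reproduces x1 = 1, x2 = 2, x3 = 3
```

Running it on Example 7's augmented matrix reproduces the solution obtained by hand.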
Example 8 Solve the system by Gaussian elimination or Gauss-Jordan elimination
x1 2 x2 3x3 2
3x1 2 x2 x3 2
4 x1 5x2 3x3 6
Solution:
Example 9 Solve the system
x1 + 8x3 - 5x4 = 6
x2 + 4x3 - 9x4 = 3
x3 + x4 = 2
Solution:
The augmented matrix for the system is

[ 1 0 8 -5 | 6 ]
[ 0 1 4 -9 | 3 ]
[ 0 0 1  1 | 2 ]

Note that this matrix is already in row-echelon form. Taking x4 = t as a free parameter
and back-substituting gives

x1 = -10 + 13t        [ x1 ]   [ -10 ]     [ 13 ]
x2 = -5 + 13t   i.e.  [ x2 ] = [  -5 ] + t [ 13 ]
x3 = 2 - t            [ x3 ]   [   2 ]     [ -1 ]
x4 = t                [ x4 ]   [   0 ]     [  1 ]
Example 10 (System has no solution)
Solve the system by Gauss-Jordan elimination
2 x1 3x2 2
2 x1 x2 1
3x1 2 x2 1
Solution:
Reducing the augmented matrix of the system gives the reduced row-echelon form
[ 1 0 | 0 ]
[ 0 1 | 0 ]
[ 0 0 | 1 ]
The last row corresponds to the equation 0 = 1, so the system has no solution. Notice
that the three lines above do not intersect at a common point.
Example 11
Determine the values of k for which the following system has
(a) No solution (b) Exactly one solution (c) Infinitely many solutions
x1 + 2x2 - 3x3 = 4
3x1 - x2 + 5x3 = 2
4x1 + x2 + (k^2 - 14)x3 = k + 2
Solution:
The augmented matrix for the system is

[ 1  2       -3 |     4 ]
[ 3 -1        5 |     2 ]
[ 4  1 k^2 - 14 | k + 2 ]

R2 <- R2 + (-3)R1,  R3 <- R3 + (-4)R1:
[ 1  2      -3 |      4 ]
[ 0 -7      14 |    -10 ]
[ 0 -7 k^2 - 2 | k - 14 ]

R3 <- R3 + (-1)R2:
[ 1  2       -3 |     4 ]
[ 0 -7       14 |   -10 ]
[ 0  0 k^2 - 16 | k - 4 ]

Pay attention to the last row of the above augmented matrix. We notice that:
If k = -4, then the last row reads 0 = -8, so the system has no solution.
If k = 4, then the last row reads 0 = 0, so the system has infinitely many solutions.
If k is not -4 or 4, then k^2 - 16 is nonzero and the system has exactly one solution.
Example 12
Determine the value(s) of k (where k is not 1) for which the following system has no
solution
x1 + x2 + kx3 = 3
x1 + kx2 + x3 = 2
kx1 + x2 + x3 = 1
Solution:
The augmented matrix for the system is

[ 1 1 k | 3 ]
[ 1 k 1 | 2 ]
[ k 1 1 | 1 ]

R2 <- R2 + (-1)R1,  R3 <- R3 + (-k)R1:
[ 1     1       k |      3 ]
[ 0 k - 1   1 - k |     -1 ]
[ 0 1 - k 1 - k^2 | 1 - 3k ]

R3 <- R3 + R2:
[ 1     1           k |   3 ]
[ 0 k - 1       1 - k |  -1 ]
[ 0     0 2 - k - k^2 | -3k ]

Pay attention to the last row of the above augmented matrix. If we let
2 - k - k^2 = 0, then after factoring we get (2 + k)(1 - k) = 0, so k = -2 or k = 1.
Since k = 1 is excluded, the system has no solution when k = -2 (the last row then
reads 0 = 6).
1.3)
An m x n matrix A is given by

    [ a11 a12 ... a1n ]
A = [ a21 a22 ... a2n ]
    [ ...             ]
    [ am1 am2 ... amn ]

The i-th row vector and the j-th column vector of A are

                           [ a1j ]
(ai1, ai2, ..., ain)  and  [ a2j ]  respectively.
                           [ ... ]
                           [ amj ]

A square matrix is diagonal if aij = 0 whenever i is not equal to j.
Identity matrix, I
a 0 0
Example: 0 b 0
0 0 c
1 0 0
Example: I 3 0 1 0
0 0 1
1 2 3
Example: U 0 5 6
0 0 9
1 0 0
Example: L 4 5 0
7 8 9
Transpose of a matrix
If A (aij ) , then the transpose of A, denoted by AT is given by AT (a ji )
13
Symmetric matrix
A square matrix A is symmetric if A^T = A.
Trace of a matrix
The trace of an n x n matrix A is the sum of its diagonal entries,
tr(A) = a11 + a22 + ... + ann.
Operations on Matrices
Addition, subtraction, scalar multiplication, matrix multiplication

                 [ 1 2 3 ]       [ 1 2 ]       [ 1 2 3 ]
Example: Let A = [ 4 5 6 ],  B = [ 3 4 ],  C = [ 4 5 6 ]
                 [ 7 8 9 ]

AB is undefined (A is 3x3 and B is 2x2), and AC is undefined (A is 3x3 and C is 2x3).
CA is defined (C is 2x3 and A is 3x3):

     [ 1 2 3 ] [ 1 2 3 ]
CA = [ 4 5 6 ] [ 4 5 6 ]
               [ 7 8 9 ]

   = [ (1)(1)+(2)(4)+(3)(7)  (1)(2)+(2)(5)+(3)(8)  (1)(3)+(2)(6)+(3)(9) ]
     [ (4)(1)+(5)(4)+(6)(7)  (4)(2)+(5)(5)+(6)(8)  (4)(3)+(5)(6)+(6)(9) ]

   = [ 30 36 42 ]
     [ 66 81 96 ]
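The shape rule above (an m x n matrix can only multiply an n x p matrix) is easy to encode. A hedged sketch in Python; the helper name is mine, not the notes':

```python
def matmul(A, B):
    """Multiply matrices given as nested lists; raise if the inner dimensions disagree."""
    if len(A[0]) != len(B):
        raise ValueError("product undefined: inner dimensions disagree")
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
C = [[1, 2, 3], [4, 5, 6]]
print(matmul(C, A))  # -> [[30, 36, 42], [66, 81, 96]]
```

Calling `matmul(A, [[1, 2], [3, 4]])` raises, mirroring "AB is undefined" for the 3x3 A and 2x2 B above.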
Example 1
Classify each of the following matrices
1 0 2
(a) 0 0 0
2 0 3
1 0 0
(b) 0 3 1
0 0 2
1 0 0
(c) 1 3 0
0 2 1
2 0 0
(d) 0 3 0
0 0 4
14
1.4)
Reduce each of the following matrices using elementary row operations:

(a) [ 1 3 ]      (b) [ 1 4 ]
    [ 2 6 ]          [ 2 3 ]

Solution:

(a) A = [ 1 3 ]   R2 <- R2 + (-2)R1   [ 1 3 ]
        [ 2 6 ]   ----------------->  [ 0 0 ]

    A cannot be reduced to the identity matrix.

(b) A = [ 1 4 ]   R2 <- R2 + (-2)R1   [ 1  4 ]   R2 <- (-1/5)R2   [ 1 4 ]
        [ 2 3 ]   ----------------->  [ 0 -5 ]   ------------->   [ 0 1 ]

    R1 <- R1 + (-4)R2   [ 1 0 ]
    ----------------->  [ 0 1 ]  = I2, the identity matrix
Example 2: Let A = [ 1 4 ; 2 3 ] and B = [ 2 -1 ; 3 4 ]. Verify that (A^T)^T = A,
(A + B)^T = A^T + B^T, (A - B)^T = A^T - B^T, (kA)^T = kA^T, and (AB)^T = B^T A^T.
Solution:

(a) A = [ 1 4 ],  A^T = [ 1 2 ],  (A^T)^T = [ 1 4 ].  Therefore (A^T)^T = A.
        [ 2 3 ]         [ 4 3 ]             [ 2 3 ]

(b) A + B = [ 1 4 ] + [ 2 -1 ] = [ 3 3 ]  and  (A + B)^T = [ 3 5 ]
            [ 2 3 ]   [ 3  4 ]   [ 5 7 ]                   [ 3 7 ]

    A^T + B^T = [ 1 2 ] + [  2 3 ] = [ 3 5 ],  so (A + B)^T = A^T + B^T.
                [ 4 3 ]   [ -1 4 ]   [ 3 7 ]

    A - B = [ 1 4 ] - [ 2 -1 ] = [ -1  5 ]  and  (A - B)^T = [ -1 -1 ]
            [ 2 3 ]   [ 3  4 ]   [ -1 -1 ]                   [  5 -1 ]

    A^T - B^T = [ 1 2 ] - [  2 3 ] = [ -1 -1 ],  so (A - B)^T = A^T - B^T.
                [ 4 3 ]   [ -1 4 ]   [  5 -1 ]

(c) 3A = [ 3 12 ],  (3A)^T = [  3 6 ] = 3A^T.
         [ 6  9 ]            [ 12 9 ]

(d) AB = [ 1 4 ] [ 2 -1 ] = [ 14 15 ]  and  (AB)^T = [ 14 13 ]
         [ 2 3 ] [ 3  4 ]   [ 13 10 ]                [ 15 10 ]

    B^T A^T = [  2 3 ] [ 1 2 ] = [ 14 13 ]
              [ -1 4 ] [ 4 3 ]   [ 15 10 ]

    Therefore (AB)^T = B^T A^T.
If A = [ a b ; c d ] with ad - bc nonzero, then

A^(-1) = 1/(ad - bc) [ d -b ; -c a ]

Example 3: Find A^(-1) if A = [ 1 4 ; 2 3 ]
Solution:
A^(-1) = 1/((1)(3) - (4)(2)) [ 3 -4 ; -2 1 ] = -(1/5) [ 3 -4 ; -2 1 ]
       = [ -3/5 4/5 ; 2/5 -1/5 ]
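The 2x2 adjoint formula above translates directly to code. A sketch with exact fractions (helper name is mine):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of [[a, b], [c, d]] via A^(-1) = 1/(ad - bc) [[d, -b], [-c, a]]."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inv2([[1, 4], [2, 3]]))  # Example 3: [[-3/5, 4/5], [2/5, -1/5]]
```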
1 1
A
k
Example: Verify that (AB)^(-1) = B^(-1) A^(-1) for A = [ 2 -1 ; 9 -4 ] and
B = [ 1 1 ; 5 4 ].
Solution:
(a) A^(-1) = [ -4 1 ; -9 2 ],  B^(-1) = [ -4 1 ; 5 -1 ]

AB = [ 2 -1 ] [ 1 1 ] = [  -3 -2 ]
     [ 9 -4 ] [ 5 4 ]   [ -11 -7 ]

(AB)^(-1) = [ 7 -2 ; -11 3 ]

B^(-1) A^(-1) = [ -4  1 ] [ -4 1 ] = [   7 -2 ]
                [  5 -1 ] [ -9 2 ]   [ -11  3 ]

Therefore, (AB)^(-1) = B^(-1) A^(-1)
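The identity can also be confirmed numerically. A self-contained sketch using the matrices reconstructed above (helper names are mine):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    """2x2 inverse via the adjoint formula."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2, -1], [9, -4]]
B = [[1, 1], [5, 4]]
lhs = inv2(matmul(A, B))
rhs = matmul(inv2(B), inv2(A))
print(lhs == rhs)  # True: (AB)^-1 = B^-1 A^-1
```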
(b)
17
(c)
(d)
18
1.5)
1 0 0
0 3 0
0 0 1
1 0 0
0
1
0
1 0 0
0 1 0
0 0 0
1 0 0
0 0 1
0 1 0
1 0
2
1
1 0 0
0 2 0
0 0 1
Example 2: Let A = [ 1 0 2 3 ; 2 1 3 6 ; 1 4 4 0 ].

If B = [ 1 0 2 3 ; 6 3 9 18 ; 1 4 4 0 ] is obtained from A by R2 <- 3R2, then

[ 1 0 0 ] [ 1 0 2 3 ]
[ 0 3 0 ] [ 2 1 3 6 ] = EA = B
[ 0 0 1 ] [ 1 4 4 0 ]

If B = [ 1 0 2 3 ; 1 4 4 0 ; 2 1 3 6 ] is obtained from A by R2 <-> R3, then

[ 1 0 0 ] [ 1 0 2 3 ]
[ 0 0 1 ] [ 2 1 3 6 ] = EA = B
[ 0 1 0 ] [ 1 4 4 0 ]

If B = [ 1 0 2 3 ; 2 1 3 6 ; 0 4 2 -3 ] is obtained from A by R3 <- R3 + (-1)R1, then

[  1 0 0 ] [ 1 0 2 3 ]
[  0 1 0 ] [ 2 1 3 6 ] = EA = B
[ -1 0 1 ] [ 1 4 4 0 ]
Example 3: Find a sequence of elementary matrices that can be used to write the matrix A
in row-echelon form, where A = [ 0 1 3 5 ; 1 3 0 2 ; 2 6 2 0 ].
Solution:

Elementary row operation    Matrix                     Elementary matrix

R1 <-> R2                   B = [ 1 3 0 2 ]            E1 = [ 0 1 0 ]
                                [ 0 1 3 5 ]                 [ 1 0 0 ]
                                [ 2 6 2 0 ]                 [ 0 0 1 ]

R3 <- R3 + (-2)R1           C = [ 1 3 0  2 ]           E2 = [  1 0 0 ]
                                [ 0 1 3  5 ]                [  0 1 0 ]
                                [ 0 0 2 -4 ]                [ -2 0 1 ]

R3 <- (1/2)R3               D = [ 1 3 0  2 ]           E3 = [ 1 0 0   ]
                                [ 0 1 3  5 ]                [ 0 1 0   ]
                                [ 0 0 1 -2 ]                [ 0 0 1/2 ]

Note
B = E1 A,  C = E2 B,  D = E3 C, so

D = E3 E2 E1 A = [ 1 0 0   ] [  1 0 0 ] [ 0 1 0 ] [ 0 1 3 5 ]
                 [ 0 1 0   ] [  0 1 0 ] [ 1 0 0 ] [ 1 3 0 2 ]
                 [ 0 0 1/2 ] [ -2 0 1 ] [ 0 0 1 ] [ 2 6 2 0 ]
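The factorization D = E3 E2 E1 A can be checked by direct multiplication. A sketch (helper name is mine; exact fractions keep the 1/2 entry exact):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A  = [[0, 1, 3, 5], [1, 3, 0, 2], [2, 6, 2, 0]]
E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]                  # R1 <-> R2
E2 = [[1, 0, 0], [0, 1, 0], [-2, 0, 1]]                 # R3 <- R3 + (-2)R1
E3 = [[1, 0, 0], [0, 1, 0], [0, 0, Fraction(1, 2)]]     # R3 <- (1/2)R3

D = matmul(E3, matmul(E2, matmul(E1, A)))
expected = [[1, 3, 0, 2], [0, 1, 3, 5], [0, 0, 1, -2]]  # the matrix D from Example 3
print(D == expected)  # True
```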
20
Example 4: Find a sequence of elementary matrices that can be used to write the matrix A
1 3 1
1
3 0 3 .
in row-echelon form, where A 2
3 2 1 0
21
1 0
1 0
exists and is an elementary matrix
E 1
E
1
0 2
0 2
0 1
0 1
E 1
exists and is an elementary matrix
E
1 0
1 0
1 0
1 0
E 1
exists and is an elementary matrix
E
2 1
2 1
1 0 0
1 0 0
1
E 0 1 0 is obtained from I 3 using the ERO R3 12 R3
0 0 1
2
1 0 0
1 0 0
1
E 0 0 1 is obtained from I 3 using the ERO R2 R3
0 1 0
1 0 0
1 0 0
1
E 0 1 0 is obtained from I 3 using the ERO R3 R3 (2) R1
2 0 0
22
Example 7:
Write down the inverse of the following 3 3 elementary matrices
1 0 0
(a) 0 3 0
0 0 1
1 0 0
(b) 0 1 0
4 0 1
1 0 0
(c) 0 0 1
0 1 0
Example 8:
Write down the inverse of the following 4 4 elementary matrices
0
(a)
0
0
1
0
0
0
0
1
0
0
0
0
(b)
0
0
0
0
1
0
0
1
0
1
0
0
(c)
0
0
1
0
0
0
0
a
0
0
0
Example 9:
Write down the inverse of the following elementary matrices
1 0 0
0 1 0
0 0 2
0
0
0
1
0
0
0
0
0
0
0 0 0 0
0 0 1 0
0 1 0 0
1 0 0 0
0 0 0 1
3
0
1
0
0
0
1 0 3
0 1 0
0 0 1
0
0
0
0
0
0
0
0
1
0
0
1
0
0
0 0 1
0 1 0
1 0 0
0
0
0 0 0 0
1 0 0 0
0 1 0 0
0 0 3 0
0 0 0 1
0
0
0
0
0
0
0
1
0
0
0 0
1 0
0 1
0 0
0 0
0
0
0
0
4
0
3 0
0 0
1 0
0 1
0
23
Example 10:
2
3
3
3
1
1
(b) E4 E3 B A
Solution:
(a) E2 E1 A B
2
3
1
0
A 1 4 1 R1 R3 1
0
1
1
0
1 0 0
0 0
I 3 0 1 0 R1 R3 0 1
0 0 1
1 0
1
0
4 1 R1 R1 R3
2
3
0 E1
3
3
1
1 4 1 B
1
2
3
1 0 0
I 3 0 1 0 R1 R1 R3
0 0 1
1 0 1
0 1 0 E2
0 0 1
1
3
1 0 1
0 0 1
1
1
E2 E3 E 0 1 0 and E4 E1 E4 E1 0 1 0
0 0 1
1 0 0
1
2
24
Example 11:
3 2
0
1 2 0
(b) E4 E3 B A
25
Verify the Equivalent Statements (a), (b), (c), and (d) above for the matrix A
3 1
Solution:
1 2
A
3 1
1 2 x1 0
and
(b) Consider A x 0 . Then
~
~
3 1 x 2 0
x1 2 x2 0
3x1 x2 0
By inspection x1 0, x2 0 . Therefore A x 0 has only the trivial solution
~
(c) A = [ 1 2 ; 3 1 ]
R2 <- R2 + (-3)R1:  [ 1 2 ; 0 -5 ]
R2 <- (-1/5)R2:     [ 1 2 ; 0 1 ]
R1 <- R1 + (-2)R2:  [ 1 0 ; 0 1 ] = I2
Therefore the reduced row echelon form of A is I2.

(d) The reduction in (c) corresponds to the elementary matrices
E1 = [ 1 0 ; -3 1 ],  E2 = [ 1 0 ; 0 -1/5 ],  E3 = [ 1 -2 ; 0 1 ]
with E3 E2 E1 A = I2, so A = (E3 E2 E1)^(-1).
Thus A = E1^(-1) E2^(-1) E3^(-1) = [ 1 0 ; 3 1 ] [ 1 0 ; 0 -5 ] [ 1 2 ; 0 1 ]
26
1.6)
Example 1:
Verify the Equivalent Statements (e) and (f) above for the matrix A = [ 1 2 ; 3 1 ]
Solution:
Consider the system A x = b. The augmented matrix for the system is [ 1 2 b1 ; 3 1 b2 ].
Apply elementary row operations:

[ 1 2 | b1 ]   R2 <- R2 + (-3)R1   [ 1  2 | b1       ]
[ 3 1 | b2 ]   ----------------->  [ 0 -5 | b2 - 3b1 ]

R2 <- (-1/5)R2:
[ 1 2 | b1            ]
[ 0 1 | (3b1 - b2)/5  ]

R1 <- R1 + (-2)R2:
[ 1 0 | (-b1 + 2b2)/5 ]
[ 0 1 | (3b1 - b2)/5  ]

Therefore x1 = (-b1 + 2b2)/5 and x2 = (3b1 - b2)/5. This implies that the system
A x = b is consistent for every n x 1 column matrix b.

Alternatively,
x = [ x1 ] = A^(-1) b = -(1/5) [  1 -2 ] [ b1 ]
    [ x2 ]                     [ -3  1 ] [ b2 ]
Therefore A x = b has a unique solution x = A^(-1) b for every n x 1 column matrix b.
27
Example 2:
Find the condition that the b's must satisfy for the system to be consistent
x1 - 2x2 + 5x3 = b1
4x1 - 5x2 + 8x3 = b2
-3x1 + 3x2 - 3x3 = b3
Solution:
Apply EROs to the augmented matrix:

[  1 -2  5 | b1 ]   R2 <- R2 + (-4)R1,  R3 <- R3 + 3R1
[  4 -5  8 | b2 ]
[ -3  3 -3 | b3 ]

[ 1 -2   5 | b1       ]
[ 0  3 -12 | b2 - 4b1 ]
[ 0 -3  12 | b3 + 3b1 ]

R2 <- (1/3)R2:
[ 1 -2   5 | b1            ]
[ 0  1  -4 | (b2 - 4b1)/3  ]
[ 0 -3  12 | b3 + 3b1      ]

R3 <- R3 + 3R2:
[ 1 -2  5 | b1             ]
[ 0  1 -4 | (b2 - 4b1)/3   ]
[ 0  0  0 | -b1 + b2 + b3  ]

The system is consistent if and only if -b1 + b2 + b3 = 0.
28
Example 3:
Find the condition that the b's must satisfy for the system to be consistent
x1 + 2x2 + 3x3 = b1
2x1 + 5x2 + 3x3 = b2
x1 + 8x3 = b3
Solution:
Apply EROs to the augmented matrix:

[ 1 2 3 | b1 ]   R2 <- R2 + (-2)R1,  R3 <- R3 + (-1)R1
[ 2 5 3 | b2 ]
[ 1 0 8 | b3 ]

[ 1  2  3 | b1       ]
[ 0  1 -3 | b2 - 2b1 ]
[ 0 -2  5 | b3 - b1  ]

R3 <- R3 + 2R2:
[ 1 2  3 | b1              ]
[ 0 1 -3 | b2 - 2b1        ]
[ 0 0 -1 | b3 + 2b2 - 5b1  ]

R3 <- (-1)R3:
[ 1 2  3 | b1              ]
[ 0 1 -3 | b2 - 2b1        ]
[ 0 0  1 | 5b1 - 2b2 - b3  ]

Therefore x3 = 5b1 - 2b2 - b3, x2 = 13b1 - 5b2 - 3b3 and x1 = -40b1 + 16b2 + 9b3.
Note: the system is consistent for every choice of b1, b2, b3 (the coefficient matrix
is invertible).
Example 4:
Find the condition that the b's must satisfy for the system to be consistent
x1 + 2x2 + 2x3 = b1
2x1 + 3x2 + 5x3 = b2
3x1 + 4x2 + 8x3 = b3
Solution
Apply EROs to the augmented matrix:

[ 1 2 2 | b1 ]   R2 <- R2 + (-2)R1,  R3 <- R3 + (-3)R1
[ 2 3 5 | b2 ]
[ 3 4 8 | b3 ]

[ 1  2 2 | b1       ]
[ 0 -1 1 | b2 - 2b1 ]
[ 0 -2 2 | b3 - 3b1 ]

R3 <- R3 + (-2)R2:
[ 1  2 2 | b1             ]
[ 0 -1 1 | b2 - 2b1       ]
[ 0  0 0 | b1 - 2b2 + b3  ]

The system is consistent if and only if b1 - 2b2 + b3 = 0.
Example 5
Determine whether the coefficient matrix of the following system of linear equations is
invertible by checking whether the system is consistent for all b's
x1 + x2 + x3 = b1
x1 + x3 = b2
2x1 + x2 + 3x3 = b3
Solution
31
1.7)
Diagonal Matrices
A diagonal matrix D is invertible if and only if its diagonal entries are nonzero
Example 1:
(a) D = [ 3 0 ; 0 2 ] is invertible, and D^(-1) = [ 1/3 0 ; 0 1/2 ]
(b) D = [ 0 0 ; 0 2 ] is not invertible, since a diagonal entry is zero.
Triangular Matrices
A triangular matrix is invertible if and only if its diagonal entries are nonzero
Example 2:
(a) L = [ 3 0 ; 1 2 ] is triangular with nonzero diagonal entries, so L^(-1) exists.
(b) U = [ 0 3 ; 0 2 ] has a zero diagonal entry, so U is not invertible.
Symmetric Matrices
Theorem 1.7.1 Let A and B be symmetric matrices and k any scalar. Then
(a) AT is symmetric
(b) A + B and A B are symmetric
(c) kA is symmetric
Example 3:
Verify the above theorem for the matrices A = [ 1 3 ; 3 2 ] and B = [ 1 2 ; 2 4 ].
32
Solution:
(a) A^T = [ 1 3 ; 3 2 ] = A, so (A^T)^T = A^T and A^T is symmetric.

(b) A + B = [ 1 3 ] + [ 1 2 ] = [ 2 5 ]  and  (A + B)^T = [ 2 5 ] = A + B,
            [ 3 2 ]   [ 2 4 ]   [ 5 6 ]                   [ 5 6 ]
    so A + B is symmetric.

    A - B = [ 1 3 ] - [ 1 2 ] = [ 0  1 ]  and  (A - B)^T = [ 0  1 ] = A - B,
            [ 3 2 ]   [ 2 4 ]   [ 1 -2 ]                   [ 1 -2 ]
    so A - B is symmetric.

(c) kA = [ k 3k ; 3k 2k ] satisfies (kA)^T = kA^T = kA, so kA is symmetric.
If A is an invertible symmetric matrix, then (A^(-1))^T = (A^T)^(-1) = A^(-1), so
A^(-1) is symmetric.
Theorem 1.7.3 The matrices AAT and AT A are symmetric
Proof:
Theorem 1.7.4 Let A and B be two symmetric matrices. Then AB is symmetric if and only
if AB=BA
Proof:
() : Suppose AB is symmetric. Then AB ( AB)T B T AT BA
() : Suppose AB BA . Then ( AB)T B T AT BA AB implies that AB is symmetric.
33
Example 4:
Determine whether the matrices A = [ 1 2 ; 2 3 ] and B = [ 1 3 ; 3 4 ] commute.
Solution:
Note that A and B are two symmetric matrices. Since AB = [ 7 11 ; 11 18 ] is symmetric,
therefore we can conclude that AB = BA, i.e. A and B commute.
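Theorem 1.7.4 can be spot-checked in a few lines. A sketch using the matrices of Example 4 (helper names are mine):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 2], [2, 3]]
B = [[1, 3], [3, 4]]
AB = matmul(A, B)
print(AB)                   # [[7, 11], [11, 18]]
print(AB == transpose(AB))  # True: AB is symmetric
print(AB == matmul(B, A))   # True: therefore AB = BA
```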
Example 5:
Determine whether the matrices A = [ 1 3 ; 3 2 ] and B = [ 4 -1 ; -1 2 ] commute.
Solution:
Note that A and B are two symmetric matrices. Since AB = [ 1 5 ; 10 1 ] is not
symmetric, therefore AB is not equal to BA and the matrices A and B do not commute.
Example 6:
3 1
1 2
and B
commute.
Determine whether the matrix A
1 2
2 3
Solution:
34
CHAPTER 2
2.1)
DETERMINANTS
Let Cij denote the cofactor of aij in an n x n matrix A. The matrix of cofactors is

[ C11 C12 ... C1n ]
[ C21 C22 ... C2n ]
[ ...             ]
[ Cn1 Cn2 ... Cnn ]

The transpose of this matrix is called the adjoint of A, denoted as adj(A), that is

         [ C11 C21 ... Cn1 ]
adj(A) = [ C12 C22 ... Cn2 ]
         [ ...             ]
         [ C1n C2n ... Cnn ]

The determinant can be computed by cofactor expansion:
det(A) = ai1 Ci1 + ai2 Ci2 + ... + ain Cin   (i-th row expansion), or
det(A) = a1j C1j + a2j C2j + ... + anj Cnj   (j-th column expansion)
Example 1:
            [ a11 a12 a13 ]
Suppose A = [ a21 a22 a23 ]
            [ a31 a32 a33 ]
Solution:
1st row expansion: det(A) = a11 C11 + a12 C12 + a13 C13

Example 2: Evaluate det(A), where A = [ 1 -2 2 ; 4 3 2 ; 5 0 3 ]
Solution:
1st row expansion: det(A) = a11 C11 + a12 C12 + a13 C13
= (1)(+1) det[ 3 2 ; 0 3 ] + (-2)(-1) det[ 4 2 ; 5 3 ] + (2)(+1) det[ 4 3 ; 5 0 ]
= (1)(9) + (2)(2) + (2)(-15)
= -17

2nd row expansion: det(A) = a21 C21 + a22 C22 + a23 C23
= (4)(-1) det[ -2 2 ; 0 3 ] + (3)(+1) det[ 1 2 ; 5 3 ] + (2)(-1) det[ 1 -2 ; 5 0 ]
= (4)(6) + (3)(-7) + (2)(-10)
= -17

3rd row expansion: det(A) = a31 C31 + a32 C32 + a33 C33
= (5)(+1) det[ -2 2 ; 3 2 ] + (0)C32 + (3)(+1) det[ 1 -2 ; 4 3 ]
= (5)(-10) + (0)C32 + (3)(11)
= -17

1st column expansion: det(A) = a11 C11 + a21 C21 + a31 C31
= (1)(9) + (4)(-1) det[ -2 2 ; 0 3 ] + (5)(+1) det[ -2 2 ; 3 2 ]
= 9 + 24 - 50
= -17
37
Example 3:
Evaluate det(A) by an appropriate cofactor expansion
1 2
0
2
(b)
3 6
3 4
1 2 3
(a) A 2
0
1
3 4 5
0 3
1 3
0 1
3
0
(c)
2 2
4
1
5 6
0 5
2 2
0 0
3 2
0 3
Solution:
(a) det(A) = a21 C21 + a22 C22 + a23 C23   (using 2nd row expansion)
= (2)C21 + (0)C22 + (1)C23
= (2)(-1) det[ 2 3 ; 4 5 ] + (0)C22 + (1)(-1) det[ 1 2 ; 3 4 ]
= (2)(2) + (0)C22 + (1)(2)
= 6
1
(2)(1) 33 2
3
(2)(6)
12
2
0
3
1
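Cofactor expansion is naturally recursive: expand along the first row, with each cofactor a smaller determinant. A sketch (the matrix is part (a) as read here, [1 2 3 ; 2 0 1 ; 3 4 5], with signs assumed where the copy is illegible):

```python
def det(m):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [2, 0, 1], [3, 4, 5]]))  # -> 6, matching part (a)
```

In practice one expands along the row or column with the most zeros, which is what makes "an appropriate cofactor expansion" cheap.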
38
If A is invertible, then A^(-1) = (1/det(A)) adj(A)

Example 4: Let A = [ a b ; c d ]. Then
M11 = d ; C11 = d       M12 = c ; C12 = -c
M21 = b ; C21 = -b      M22 = a ; C22 = a

A^(-1) = (1/det(A)) [ C11 C21 ; C12 C22 ] = 1/(ad - bc) [ d -b ; -c a ]
Example 5:
Let A = [ 1 -2 2 ; 4 3 2 ; 5 0 3 ]. Find A^(-1)
Solution:
M11 = det[  3  2 ; 0 3 ] = 9 ;    C11 = 9
M12 = det[  4  2 ; 5 3 ] = 2 ;    C12 = -2
M13 = det[  4  3 ; 5 0 ] = -15 ;  C13 = -15
M21 = det[ -2  2 ; 0 3 ] = -6 ;   C21 = 6
M22 = det[  1  2 ; 5 3 ] = -7 ;   C22 = -7
M23 = det[  1 -2 ; 5 0 ] = 10 ;   C23 = -10
M31 = det[ -2  2 ; 3 2 ] = -10 ;  C31 = -10
M32 = det[  1  2 ; 4 2 ] = -6 ;   C32 = 6
M33 = det[  1 -2 ; 4 3 ] = 11 ;   C33 = 11

         [ C11 C21 C31 ]   [   9   6 -10 ]
adj(A) = [ C12 C22 C32 ] = [  -2  -7   6 ]
         [ C13 C23 C33 ]   [ -15 -10  11 ]

det(A) = a11 C11 + a12 C12 + a13 C13 = (1)(9) + (-2)(-2) + (2)(-15) = -17

                                   [   9   6 -10 ]
A^(-1) = (1/det(A)) adj(A) = -(1/17) [  -2  -7   6 ]
                                   [ -15 -10  11 ]
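The adjoint construction of Example 5 can be coded directly: build the cofactor matrix, transpose it, and divide by the determinant. A sketch (helper names are mine):

```python
from fractions import Fraction

def det2(a, b, c, d):
    return a * d - b * c

def inv3(m):
    """3x3 inverse via A^(-1) = (1/det(A)) adj(A), adj = transpose of cofactors."""
    cof = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            r = [x for x in range(3) if x != i]
            c = [x for x in range(3) if x != j]
            minor = det2(m[r[0]][c[0]], m[r[0]][c[1]], m[r[1]][c[0]], m[r[1]][c[1]])
            cof[i][j] = (-1) ** (i + j) * minor
    d = sum(m[0][j] * cof[0][j] for j in range(3))  # det(A), 1st-row expansion
    return [[Fraction(cof[j][i], d) for j in range(3)] for i in range(3)]

A = [[1, -2, 2], [4, 3, 2], [5, 0, 3]]
Ainv = inv3(A)
check = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(check == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True: A A^(-1) = I
```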
39
Theorem (Cramer's Rule) Let A x = b be a system of n linear equations in n
unknowns such that det(A) is nonzero. Then the solution to the system is given by
x_j = det(A_j) / det(A),
where A_j is the matrix obtained by replacing the entries in the j-th column of A by the
entries in b.
Example 6: Use Cramer's rule to solve the following system of linear equations.
x1 2 x2 3x3 0
x3 3
2x1
3x1 4 x2 5 x3 0
Solution:
x1
0 4
a 21C 21
(3)(1) 21
1 0 3
x2
x3
4 0
a 22C 22
a 23C 23
(3)(1) 2 2
1 3
3
(3)(1) 23
6
40
Example 7
Use Cramer's rule to solve the following system of linear equations.
(a)
2 x1 2 x2 x3 2
x1 x2 3x3 4
3x1 2 x2 x3 7
(b) x1 2 x2 x3 2
x1 x2 3x3 1
3x1 2 x2 x3 6
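Cramer's rule is mechanical enough to script: replace each column by b and take a ratio of determinants. A sketch, run on the system from Example 7 of Section 1.2 so the answer is known (helper names are mine):

```python
from fractions import Fraction

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer(A, b):
    """Solve a 3x3 system A x = b with det(A) != 0 by Cramer's rule."""
    d = det3(A)
    sol = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]          # replace column j by b
        sol.append(Fraction(det3(Aj), d))
    return sol

A = [[1, 1, 2], [2, 4, -3], [3, 6, -5]]
b = [9, 1, 0]
print(cramer(A, b) == [1, 2, 3])  # True, agreeing with Gaussian elimination
```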
41
2.2)
Example 1: Let A = [ 0 1 5 ; 3 -6 9 ; 2 6 1 ].

1st row expansion:
det(A) = (0)C11 + (1)(-1) det[ 3 9 ; 2 1 ] + (5)(+1) det[ 3 -6 ; 2 6 ]
       = (0)C11 + (1)(15) + (5)(30) = 165

      [ 0  3 2 ]
A^T = [ 1 -6 6 ]
      [ 5  9 1 ]

det(A^T) = (0)C11 + (3)(-1) det[ 1 6 ; 5 1 ] + (2)(+1) det[ 1 -6 ; 5 9 ]
         = (0)C11 + (3)(29) + (2)(39) = 165

Hence det(A) = det(A^T)
42
1
5
0
6 18
(0)C11 (1)(0) (5)(0)
0
(5)(1)13
6 12
Theorem 2.2.2 Let A (aij ) be an n n triangular matrix. Then det( A) a11a22 ...ann
Example 2:
2 0 0
4 0
(a) A 0
3 1 5
2 5 3
4 1
(b) A 0
0
0 0
5 0 0
(c) A 0 3 0
0 0 4
det(A) = (2)(4)(5) = 40
det(A) = (2)(4)(0) = 0
det(A) = (5)(3)(4) = 60
43
Example 3:
(a) Let A = [ a b c ; d e f ; g h i ] and B = [ d e f ; a b c ; g h i ]
(B is obtained from A by interchanging the first two rows.)
Hence det(B) = -det(A), that is

| d e f |     | a b c |
| a b c | = - | d e f |
| g h i |     | g h i |

(b) Let A = [ a b c ; d e f ; g h i ] and B = [ ka kb kc ; d e f ; g h i ]
(B is obtained from A by multiplying the first row by k.)
Hence det(B) = k det(A), that is

| ka kb kc |     | a b c |
| d  e  f  | = k | d e f |
| g  h  i  |     | g h i |

(c) Let A = [ a b c ; d e f ; g h i ] and B = [ a+kd b+ke c+kf ; d e f ; g h i ]
(B is obtained from A by adding k times the second row to the first row.)
Hence det(B) = det(A), that is

| a+kd b+ke c+kf |   | a b c |
| d    e    f    | = | d e f |
| g    h    i    |   | g h i |
44
Example 4:
    | 4 5 6 |     | 1 2 3 |
(a) | 1 2 3 | = - | 4 5 6 |      (rows 1 and 2 interchanged)
    | 7 8 9 |     | 7 8 9 |

    | 3 6 9 |     | 1 2 3 |
(b) | 4 5 6 | = 3 | 4 5 6 |      (common factor 3 in row 1)
    | 7 8 9 |     | 7 8 9 |

    | 9 12 15 |   | 1 2 3 |
(c) | 4  5  6 | = | 4 5 6 |      (R1 <- R1 + 2R2)
    | 7  8  9 |   | 7 8 9 |
Example 5: Given that | a b c ; d e f ; g h i | = 3, evaluate:

    | d e f |
(a) | g h i | = 3      (two successive row interchanges leave the determinant unchanged)
    | a b c |

    | 2a 2b 2c |
(b) | d  e  f  | = (2)(3)(3) = 18      (factoring two times: 2 from row 1, 3 from row 3)
    | 3g 3h 3i |

    | a+g b+h c+i |   | a b c |
(c) | d   e   f   | = | d e f | = 3      (R1 <- R1 + (-1)R3 recovers the original)
    | g   h   i   |   | g h i |
Using Cofactor Expansion and Theorem 2.2.3 we can evaluate the determinant of
matrices in a less tedious way.
Example 6:
Show that
    | b+c c+a b+a |
(a) | a   b   c   | = 0
    | 1   1   1   |

    | 1   1   1   |
(b) | a   b   c   | = (b - c)(c - a)(a - b)(a + b + c)
    | a^3 b^3 c^3 |
45
Solution:
(a) (Note: R1 <- R1 + R2.) The first row becomes (a+b+c, a+b+c, a+b+c) =
(a+b+c)(1, 1, 1), which is proportional to the third row, so the determinant is 0.

(b) Subtracting column 1 from columns 2 and 3:
| 1   1   1   |   | 1    0          0         |
| a   b   c   | = | a    b - a      c - a     |
| a^3 b^3 c^3 |   | a^3  b^3 - a^3  c^3 - a^3 |
= (b - a)(c^3 - a^3) - (c - a)(b^3 - a^3)
= (b - a)(c - a)[(c^2 + ca + a^2) - (b^2 + ba + a^2)]
= (b - a)(c - a)(c^2 - b^2 + ca - ba)
= (b - a)(c - a)(c - b)(c + b + a)
= (b - c)(c - a)(a - b)(a + b + c)
Example 7:
Show that | a b c ; b c a ; c a b | = 3abc - a^3 - b^3 - c^3
Solution: Use cofactor expansion along the 1st row:
| a b c |
| b c a | = a11 C11 + a12 C12 + a13 C13
| c a b |
= (a)(+1) det[ c a ; a b ] + (b)(-1) det[ b a ; c b ] + (c)(+1) det[ b c ; c a ]
= a(cb - a^2) - b(b^2 - ac) + c(ab - c^2)
= 3abc - a^3 - b^3 - c^3
46
Example 8
Evaluate the following determinant
1 0 3
(a) 5 1 1
0 1 2
1 1
(b) 1 0
0
1
1 2 3
(c) 2 3 1
3 1 2
5 2 2
(d) 1 1 2
3 0 0
4 1 3
(e) 2 2 4
1 1 0
47
0
0
(a)
0
2
2
1
(b)
1
1
0
0
4
1
2
1
1
5
0
2 0
3
1 0
3
1 4
6
5 1
4 2 2
2
3 1
2
1
3 1
3
1
2 1
2
1
0
0
0
2
0 2 0
3 1 3
3 1 3
6 5 6
0
1
1
4
0
2
2
0
0
3
4
1
0
0
0
2
0
2 0
0
1 3
4
1 3
1
5 6
0
0
4
1
0
0
(2)(3)(4)(2) 48
0
2
C 2 C 2 (2)C1
C3 C3 C1
C C C
4
1
4
2
1
1
1
0 0
0
1 0
0
1 0
1
4 8 11
C3 C3 (2)C 2
C 4 C 4 (3)C 2
2
1
1
1
0 0
0
1 0
0
1 1
0
4 11 8
(C3 C4 )
(2)(1)(1)(8) 16
0
0
(c)
0
g
0 0
0 b
d e
h i
a
g
c
0
f
0
j
0
h i
0 b
d e
0 0
j
g
c
0
f
0
a
0
h i
d e
0 b
0 0
j
f
gdba
c
a
48
Example 10
Evaluate the following determinant
0 2 4 5
3 0 3 6
(a)
2 4
5 7
5 1 3 1
2
1
(b)
0
2
6
4
1
2
3 1
2 2
1 4
1 3
49
Example 1: Let A = [ 1 4 ; 2 3 ] and B = [ 1 2 ; 3 4 ]. Then

det(A) = det[ 1 4 ; 2 3 ] = (1)(3) - (4)(2) = -5
det(B) = det[ 1 2 ; 3 4 ] = (1)(4) - (2)(3) = -2
det(AB) = det[ 13 18 ; 11 16 ] = (13)(16) - (18)(11) = 10
so det(AB) = det(A) det(B).

det(3A) = det[ 3 12 ; 6 9 ] = 27 - 72 = -45 and 3^2 det(A) = 3^2 (3 - 8) = -45

det(A^(-1)) = det[ -3/5 4/5 ; 2/5 -1/5 ] = (1/5)^2 (3 - 8) = -1/5 and 1/det(A) = -1/5
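Both identities, det(AB) = det(A) det(B) and det(kA) = k^n det(A), are quick to check in code. A sketch with the matrices of Example 1 (helper names are mine):

```python
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 4], [2, 3]]
B = [[1, 2], [3, 4]]
print(det2(matmul(A, B)) == det2(A) * det2(B))                       # True
print(det2([[3 * x for x in row] for row in A]) == 3**2 * det2(A))   # True: det(kA) = k^n det(A)
```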
2 1
Example 2:
a b
Let A d e
g h
f where det( A) 6
i
(b) det( A 1 )
1
1
det( A) 6
1 4
(c) det(2 A 1 ) 2 3 det( A 1 ) 2 3.
6 3
50
(d) det((2 A) 1 )
(e) det b
c
g
h
i
1
1
1
1
3
3
det(2 A) 2 det( A) 2 .6 48
d
a
e det b
c
f
d
e
f
h det( AT ) det( A) 6
i
Example 3:
Let A be a 4 4 matrix where det( A) 3
(a) det(4 A)
(b) det( A 1 )
(c) det(3 A1 )
(d) det((3 A) 1 )
Example 4
Suppose A and B are two n n matrices such that det( A) 3 and det(B) 2 . Then
(a) det( AB) det( A) det(B) (3)(2) 6
(b) det( A2 ) det( AA) det( A) det( A) (3)(3) 9
(c) det(B 1 A) det(B 1 ) det( A)
1
3
det( A)
det(B)
2
51
(g) det(A) is nonzero
Example 4:
Verify the Equivalent Statements (a) and (g) for the matrices
A = [ 1 2 ; 3 1 ] and B = [ 1 3 ; 2 6 ]
Solution:
det(A) = (1)(1) - (2)(3) = -5, which is nonzero, so A is invertible.
det(B) = (1)(6) - (3)(2) = 0, so B is not invertible.
Example 5:
Solution:
k 0
2 k is invertible
k k
2 2 0
0 0 0
1 1 0
det( 4
2 2 ) 0 , det( 0 2 0 ) 0 , and det( 1 2 1 ) 0
0 2 2
0 0 0
0 1 1
52
Example 6:
Determine whether the following is TRUE or FALSE
b c c a b a
(Answer: TRUE)
(Answer: TRUE)
1
(c) The matrix
1
4 2 2
3 1
2
is invertible
3 1
3
2 1
2
1
b
3
c is not invertible if a b c 0
c 3
(Answer: TRUE)
53
2.4)
If A = [ a11 a12 ; a21 a22 ], then det(A) = a11 a22 - a12 a21

If A = [ a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 ], then det(A) can be found by first
writing out the matrix in the following manner (recopy the first two columns):

a11 a12 a13 | a11 a12
a21 a22 a23 | a21 a22
a31 a32 a33 | a31 a32

Then det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
            - a13 a22 a31 - a11 a23 a32 - a12 a21 a33

Example 1: A = [ 0 1 5 ; 3 -6 9 ; 2 6 1 ]

0  1 5 | 0  1
3 -6 9 | 3 -6
2  6 1 | 2  6

Then det(A) = (0)(-6)(1) + (1)(9)(2) + (5)(3)(6) - (5)(-6)(2) - (0)(9)(6) - (1)(3)(1)
            = 0 + 18 + 90 + 60 - 0 - 3 = 165
Example 2:
Evaluate the following determinant using the combinatorial approach discussed in this
section
1 0 3
(a) 5 1 1
0 1 2
1 1 0
(b) 1 0 1
0 1 1
1 2 3
(c) 2 3 1
3 1 2
5 2 2
(d) 1 1 2
3 0 0
4 1 3
(e) 2 2 4
1 1 0
55
a b
a b
the form d e
g h
ca b
f d e
i g h
b(
)c
56
CHAPTER 3
a1 , a2 is an ordered pair
a1 , a2 , a3 is an ordered triple
The length of a vector u in 2-space is called the norm of u and is denoted ||u||. If
u = <u1, u2>, then ||u|| = sqrt(u1^2 + u2^2).
If u and v are two vectors in 2-space or 3-space, and t is the angle between the two
vectors, then the dot product or Euclidean inner product is defined by
u.v = ||u|| ||v|| cos t.
The dot product can also be computed using the formula u.v = u1 v1 + u2 v2 for 2-space.
Orthogonal projection
Suppose two vectors u and a are positioned so that their initial points coincide at a
common point. The vector v is called the orthogonal projection of u on a, and can be
written as
v = proj_a u = (u.a / ||a||^2) a
The vector w is called the vector component of u orthogonal to a. This vector can be
written as w = u - proj_a u.
Example 1:
Let u = <2, -1> and a = <3, 2>.
Then,
u.a = u1 a1 + u2 a2 = (2)(3) + (-1)(2) = 4, and
||a||^2 = a1^2 + a2^2 = 3^2 + 2^2 = 13
proj_a u = (u.a / ||a||^2) a = (4/13) <3, 2>
u - proj_a u = <2, -1> - <12/13, 8/13> = <14/13, -21/13>
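The projection formula is two dot products. A sketch with the vectors of Example 1 as read here (the +/- signs were lost in this copy, so they are an assumption; helper names are mine):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(u, a):
    """Orthogonal projection of u on a: (u.a / ||a||^2) a."""
    c = Fraction(dot(u, a), dot(a, a))
    return [c * x for x in a]

u = [2, -1]   # signs assumed
a = [3, 2]
v = proj(u, a)
w = [x - y for x, y in zip(u, v)]
print(v, dot(w, a))  # proj is (4/13)<3, 2>; w.a = 0, so w is orthogonal to a
```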
Example 2:
Let u = <2, -1, 3> and a = <4, -1, 2>.
Then,
u.a = u1 a1 + u2 a2 + u3 a3 = (2)(4) + (-1)(-1) + (3)(2) = 15, and
||a||^2 = 4^2 + (-1)^2 + 2^2 = 21
proj_a u = (u.a / ||a||^2) a = (15/21) <4, -1, 2> = (5/7) <4, -1, 2>
u - proj_a u = u - (5/7) <4, -1, 2>
58
Cross Product
If u = <u1, u2, u3> and v = <v1, v2, v3>, then the cross product or vector product
u x v is given by the formal determinant

        | i  j  k  |
u x v = | u1 u2 u3 | = <u2 v3 - u3 v2, u3 v1 - u1 v3, u1 v2 - u2 v1>
        | v1 v2 v3 |
Example 3:
Let u 1,1,3 , v 2,0,3 , and w 2,3,5 . Compute
~
(b) u (v w)
(a) v w
~
(c) (u v) w
(d) (u v) (v w)
(e) u .(v w)
Solution:
i
k
~
(a) v w 2 0 3 i
~
2 3
5
i
(b) u (v w) 1
~
j
~
0 3
3
j
~
2 3
2
k
~
2 0
2 3
9 i 16 j 6 k
~
k
~
9 16 6
59
Example 4:
Suppose u and v are two vectors in 3-space. Show that u.(u x v) = 0 and v.(u x v) = 0
Solution:
Let u = <u1, u2, u3> and v = <v1, v2, v3>. Then

        | i  j  k  |
u x v = | u1 u2 u3 | = i det[ u2 u3 ; v2 v3 ] - j det[ u1 u3 ; v1 v3 ]
        | v1 v2 v3 |                          + k det[ u1 u2 ; v1 v2 ]

Therefore,
u.(u x v) = (u1)(u2 v3 - u3 v2) - (u2)(u1 v3 - u3 v1) + (u3)(u1 v2 - u2 v1) = 0
and similarly v.(u x v) = 0.

In general, the scalar triple product can be written as a determinant:

            | u1 u2 u3 |
u.(v x w) = | v1 v2 v3 |
            | w1 w2 w3 |
Example 5:
Use the above formula and the properties of determinant to deduce that u .(u v) 0 and
~
v .(u v) 0
~
Solution:
60
Let u = <u1, u2, ..., un> and v = <v1, v2, ..., vn> be vectors in R^n. Then
u + v = <u1 + v1, u2 + v2, ..., un + vn>
Euclidean norm: ||u|| = sqrt(u1^2 + u2^2 + ... + un^2)
Euclidean inner product: u.v = u1 v1 + u2 v2 + ... + un vn
Euclidean distance: d(u, v) = ||u - v||

Example 1:
Let u = <3, -1, -2> and v = <1, -4, 2>. Find ||u|| and ||v||.
Solution:
(a) ||u|| = sqrt(3^2 + (-1)^2 + (-2)^2) = sqrt(14)
    ||v|| = sqrt(1^2 + (-4)^2 + 2^2) = sqrt(21)
61
Example 2:
Let u = <3, -1, -2> and v = <1, -4, 2>. Verify that the Cauchy-Schwarz Inequality holds.
Solution:
u.v = 3 + 4 - 4 = 3, so |u.v| = 3
||u|| = sqrt(14), ||v|| = sqrt(21)
Since 3 <= sqrt(14) sqrt(21) = sqrt(294), the inequality |u.v| <= ||u|| ||v|| holds.
Example 3:
Let u 3,1,2,2 , v 4,2,1,3 and w 0,3,8,2
~
Solution:
62
u .v
~ ~
2
1
1
u v u v
4 ~ ~
4 ~ ~
Example 4:
Let u and v be such that u v 2 and u v 4 . Find u . v
~
Solution:
Orthogonality
Two vectors u , v R n are orthogonal if u . v 0
~
~ ~
u v
~
Example 5:
Let u 0,3,3,2 , v 4,3,1,3 .
~
u v
~
Solution:
63
3.2)
x x1 , x2 ,..., xn R n
~
w w1 , w2 ,..., wm R m
~
T : R n R m is a transformation
The transformation can also be written as
T x1 , x2 ,..., xn w1 , w2 ,..., wm
f1 ( x1 , x2 ,..., xn ), , f 2 ( x1 , x2 ,..., xn ) ,, f m ( x1 , x2 ,..., xn )
Now if f1 ( x1 , x2 ,..., xn ) , f 2 ( x1 , x2 ,..., xn ) ,, f m ( x1 , x2 ,..., xn ) are all linear functions of
x1 , x2 ,..., xn , then
T : R n R m is a linear transformation
w1 x1 x2 f1 ( x1 , x2 )
w2 3x1 x2 f 2 ( x1 , x2 )
w3 x12 x22 f 3 ( x1 , x2 )
define a transformation T : R 2 R 3 where T x1 , x2 x1 x2 ,3x1 x2 , x12 x22
This transformation is not linear
64
2) The equations
w1 2 x1 3x2 x3 5x4 f1 ( x1 , x2 , x3 , x4 )
w2 4 x1 x2 2 x3 x4 f 2 ( x1 , x2 , x3 , x4 )
w3 5x1 x2 4 x3 f 3 ( x1 , x2 , x3 , x4 )
define a linear transformation T : R 4 R 3 where
T x1 , x2 , x3 , x4
2 3 1 5
T : R2 R2
w1 x1
w2 x 2
w 1 0 x1
1 0
Standard matrix A
1
0 1
w2 0 1 x2
1 0
65
T : R3 R3
w1 x1
w2 x 2
w3 x3
w1 1 0 0 x1
1 0 0
w2 0 1 0 x 2 Standard matrix A 0 1 0
0 0 1
w 0 0 1 x
3
1 0 0
1 0 0
66
1 0
Standard matrix A
0 0
0 0
Standard matrix A
0 1
(b) T : R 3 R 3
1 0 0
Standard matrix A 0 1 0
0 0 0
1 0 0
Standard matrix A 0 0 0
0 0 1
0 0 0
Standard matrix A 0 1 0
0 0 1
67
Example 5: (Rotation)
(a) T : R 2 R 2
cos
Standard matrix A
sin
sin
cos
(b) T : R 3 R 3
(i) counter-clockwise about the positive x-axis through an angle
0
1
sin
cos
cos
Standard matrix A 0
sin
0 sin
1
0
0 cos
cos
sin
cos
0
0
1
(b) T :
R3
R3
k 0
Standard matrix A
0 k
k 0 0
Standard matrix A 0 k 0
0 0 k
68
1
2
Solution:
cos 90 0
A1
0
sin 90
sin 90 0
cos 90 0
0 1
sin 90 0 1 0
cos 90 0 0 1
(b)
(c)
69
3.3)
T (u ) T (v) u , v R n
~
70
cos
(iii) A sin
0
sin
cos
Inverse Operator
Let T : R n R n be a linear operator and let A be the standard matrix for T. Then
w1 x1 2 x2 2 x3
w2 2 x1 x2 x3
w3 x1 x2
is one-to-one. If so, find the standard matrix for the inverse operator T 1
Solution:
1 2 2
is A
71
Example 3:
Determine whether T : R 3 R 2 is a linear transformation where
T x, y, z x, x y z
Solution:
Let u u1 , u 2 , u3 and v v1 , v2 , v3
To show whether T is a linear transformation, we need to check the two conditions
(a) T(u v) T (u) T (v) and (b) T (c u) cT ( u)
Condition(a)
T(u v) T u1 v1 , u 2 v2 , u3 v3 u1 v1 , u1 v1 u 2 v2 u3 v3
T (u) T (v) T u1 , u 2 , u3 T v1 , v2 , v3
u1 , u1 u 2 u3 v1 , v1 v2 v3
u1 v1 , u1 v1 u 2 v2 u3 v3
T (c u) T cu1 , cu 2 , cu 3
cu1 , cu1 cu 2 cu3
c u1 , u1 u2 u3
cT ( u)
T (c u) cT ( u)
T is a linear transformation.
72
1
0
0
1
0
0
Let e1 , e2 ,, en . Then the standard matrix for T is A =
~
~
~
0
1
0
1 a
0 d
0 g
Suppose T (e ) T 0 b , T (e ) T 1 e and T (e ) T 0 h , then
~2
~3
~1
0 f
1 i
0 c
a
b
the standard matrix for T is A T (e1 ), T (e2 ), T (e3 )
~
~
~
c
d
e
f
h
i
73
1 1
0 0
0 0
Since T (e1 ) T 0 0 , T (e2 ) T 1 1 and T (e3 ) T 0 0 ,
~
~
~
0 0
0 0
1 1
1 0 0
Example 8: (Rotation)
1 cos
0 sin
and T (e2 ) T
,
Since T (e1 ) T
~
~
0 sin
1 cos
cos
the standard matrix for T is A T (e1 ), T (e2 )
~
~
sin
sin
cos
74
CHAPTER 4
4.1)
Let V be a non-empty set of objects. If the following 10 axioms are satisfied, then V is
called a vector space.
Let u , v, w V
~
Axiom 1: If u, v V then u v V
~ ~
Axiom 2: u v v u
~
Axiom 3: u (v w) (u v) w
~
Axiom 7: k (u v) k u k v
~
Axiom 8: (k l ) u k u l u
~
Axiom 9: k (l u ) (kl) u
~
Axiom 10: 1 u u
~
Example 1:
Let V denote the set of all triples of real numbers (x, y, z) with the operations
(x, y, z) + (x', y', z') = (x + x', y + y', z + z') and k(x, y, z) = (0, 0, 0)
Determine whether V is a vector space. If V is not a vector space, list all axioms that fail
to hold.
Solution:
Let u u1 , u 2 , u3 , v v1 , v2 , v3 , w w1 , w2 , w3 V
~
Axiom 1: If u, v V then u v V
~ ~
Axiom 2: u v v u
~
Axiom 3: u (v w) (u v) w
~
75
Axiom 7: k (u v) k u1 v1 , u 2 v2 , u3 v3 0,0,0
~
Axiom 8: (k l ) u 0,0,0
(kl) u 0,0,0
Axiom 1: If u, v V then u v V
~ ~
Axiom 2: u v v u
~
Axiom 3: u (v w) (u v) w
~
Axiom 5:
u u1 2,u 2 2 V such that ( u ) u u ( u ) 0 u V
~
Axiom FAILS
Axiom 8: (k l ) u (k l ) u1 , u 2 (k l )u1 , (k l )u 2
~
Axiom FAILS
Axiom 9: k (l u) k lu1 , lu 2 klu1 , klu2
~
76
Axiom 10: 1 u u1 , u 2 u
~
Determine whether V is a vector space. If V is not a vector space, list all axioms that fail
to hold.
Solution:
Let u u , v v , w w V
~
Axiom 1: If u, v V then u v u v uv V
~ ~
Axiom 2: u v v u
~
Axiom 3: u (v w) (u v) w
~
Axiom 5:
u
~
1
V such that ( u ) u u ( u ) 0 u V
~
~
~
~
~
~
u
k u k v k u k v uk vk uk vk
~
Axiom 8: (k l ) u (k l ) u u k l
~
k u l u k u l u u k u l u k u l u k l
~
Axiom 9: k (l u) k (l u) k u l (u l ) k u kl
~
(kl) u (kl) u u kl
~
Axiom 10: 1 u 1 u u u u
1
V is a vector space.
77
4.2)
Subspaces
Example 3: (Subspaces of R 3 )
Let W be the set of all vectors of the form 0, a,0 and let V R 3 . Show that W is a
subspace of V.
Solution:
Let u 0, a1 ,0 , v 0, a2 ,0 W
~
(a) u v 0, a1 ,0 0, a2 ,0 0, a1 a2 ,0 W
~
(b) ku k 0, a1 ,0 0, ka1 ,0 W
~
Therefore W is a subspace of V.
78
Example 4: (Subspaces of M 22 )
a b
such that a b c d 0 and let V M 22 .
Let W be the set of all matrices
c d
Show that W is a subspace of V.
Solution:
a
Let u 1
~
c1
b1
a
, v 2
d1 ~ c 2
b2
W .
d 2
Therefore a1 b1 c1 d1 0 and a2 b2 c2 d 2 0
a
(a) u v 1
~
~
c1
b1
d1
a2
c2
a
(b) ku k 1
~
c1
b1 ka1
d1 kc1
b2 a1 a 2
d 2 c1 c2
b1 b2
W
d1 d 2
kb1
W
kd1
Therefore W is a subspace of V.
Example 5: (Subspaces of M nn )
Let W be the set of all n n symmetric matrices and let V M nn . Show that W is a
subspace of V.
Solution:
Let u A , v B W .
~
Therefore W is a subspace of V.
For questions on subspaces of Pn refer to Tutorial 5.
79
Theorem 4.2.2 Let W be the set of solution vectors of the homogeneous linear system
A x 0 of m equations in n unknowns. Then W is a subspace of V = R n
~
1 2 3 x 0 1 2 3 0
1 2 3 0
1 0 0 1
1 0
(a) 3 7 8 y 0 0 1
4
0 0 19 0
1
2 z 0 0 9 10 0
The solution is x 0, y 0, z 0
(The origin, which is a subspace of R 3 )
1 2 3 x 0 1 2 3 0
(b) 3 7 8 y 0 0 1 1 0
2 4 6 z 0 0 0 0 0
x
y
z
5 1 1
(Line through the origin, which is a subspace of R 3 )
1 2 3 x 0 1 2 3 0 1 2 3 0
(c) 2 4 6 y 0 2 4 6 0 0 0 0 0
3 6 9 z 0 3 6 9 0 0 0 0 0
0 0 0 x 0
(d) 0 0 0 y 0
0 0 0 z 0
The solutions are x r, y s, z t
(R 3 ,which is a subspace of R 3 )
80
Linear Combination
A vector w is a linear combination of the vectors v1 , v2 ,..., vr if
~
w k1 v1 k 2 v2 ... k r vr
~
Example 7: (Problem 1)
Determine whether w 2,2,2 is a linear combination of v1 0,2,2 and
~
v2 1,3,1
~
Solution:
Let w k1 v1 k 2 v2 . Therefore 2,2,2 k1 0,2,2 k 2 1,3,1
~
k1 2, k 2 2
2 k1 k 2 2
Example 8: (Problem 2)
Let w 2,3,4 , v1 1,0,3 , and v2 0,3,10 .
~
Solution:
81
Theorem 4.2.3 Let v1 , v2 ,..., vr V and W be the set of all linear combinations of
~
v1 , v2 ,..., vr . Then
~
(a) W is a subspace of V
(b) W is the smallest subspace of V, ie every other subspace of V that contains
v1 , v2 ,..., vr must contain W
~
(Illustration)
Spanning
Let S {v1 , v2 ,..., vr } and let W be the set of all linear combinations of v1 , v2 ,..., vr . Then
~
Note that S spans W if every vector in W can be expressed as a linear combination of the
vectors in S
Example 9: Determine whether v1 1,0,0 , v2 0,1,0 , and v3 0,0,1 span R 3
~
Solution:
Let S {v1 , v2 , v3 } and W R 3 . Let a, b, c be any vector in W. Then
~
82
Solution:
Let S {v1 , v2 , v3 } and W R 3 . We want to know whether S spans W, ie whether
~
every vector b b1 , b2 , b3 W
~
v1 , v2 , and v3 .
~
Now, S spans W
v1 , v2 , and v3 .
~
b1 , b2 , b3 k1 v1 k 2 v2 k 3 v3
~
b1 , b2 , b3
b1 , b2 , b3
k1 k 2 2k 3 b1
k1
k 3 b2
b1 , b2 , b3
2k1 k 2 3k 3 b3
det( A) 0 , where A 1 0 1
2 1 3
83
4.3)
Linear Independence
k1 v1 k 2 v2 ... k r vr 0
~
(Method I) Use Equivalent Statements (or solve directly if det(A) is not defined)
Example 1: Determine whether the set S = {v1, v2, v3} is LI or LD, where
v1 = <1, -2, 3>, v2 = <5, 6, -1>, v3 = <3, 2, 1>
Solution: Let k1 v1 + k2 v2 + k3 v3 = 0. Then
k1 + 5k2 + 3k3 = 0
-2k1 + 6k2 + 2k3 = 0
3k1 - k2 + k3 = 0
The system has nontrivial solutions (the determinant of its coefficient matrix is 0), so
S is LD
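Method I reduces to one determinant: for n vectors in R^n, the set is LI exactly when the determinant of the matrix whose columns are the vectors is nonzero. A sketch using the vectors of Example 1 as reconstructed above (the printed signs were lost, so they are an assumption):

```python
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# columns are v1, v2, v3
A = [[1, 5, 3], [-2, 6, 2], [3, -1, 1]]
print(det3(A))  # -> 0, so S is linearly dependent
```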
Solution: Let k1 v1 k 2 v2 k 3 v3 0
~
3k 2 k 3 0
3k 2 k 3 0
0
2k1
k3 0
2k1
Solving the system, we obtained k1 k 2 k 3 0 S is LI
84
(Method II) Use linear combination (Can only check LD, cannot check LI)
Theorem 4.3.1 The set S {v1 , v2 ,..., vr } , r 2 is linearly dependent if and only if at
~
(Method III) Use comparison (Can only check LD, cannot check LI)
Theorem 4.3.3 Let S {v1 , v2 ,..., vr } be a set of vectors in R n . If r n , then S is
~
linearly dependent
Example 4: Determine whether the set S {v1 , v2 , v3 , v4 } is LI or LD, where
~
Example 5:
Determine whether the set S {v1 , v2 , v3 } is LI or LD, where
~
85
4.4)
Definition(Basis)
Let S {v1 , v2 ,..., vr } be a set of vectors in a vector space V. Then S is a basis for V if
~
(a) S spans V
(b) S is linearly independent
Example 1: Determine whether S {v1 , v2 , v3 } is a basis for R 3 ,
~
Solution: The set S is linearly independent and S spans R 3 . Therefore S is a basis for R 3 .
Example 2: Determine whether S {v1 , v2 , v3 } is a basis for R 3 ,
~
Solution:
1 2 3
Solution:
2 2 1
86
where v1 2 x , v2 2 x , v3 1 2 x 3x 2 .
2
Solution:
2 2 1
Solution:
1
Let A
0
6
4
1
2
3 1
2 2
. Since det( A) 8 0 (See Example 10 of Chapter 2),
1 4
1 3
2 1
4 3
2 1
2 2
, v2
, v4
.
, v3
where v1
~
1 1 ~ 3 2 ~ 1 1 ~ 3 2
Solution:
1
Let A
1
4 2 2
3 1
2
. Since det( A) 16 0 (See Example 9 (b) of Chapter 2),
3 1
3
2 1
2
87
Definition (Dimension)
Let S {v1 , v2 ,..., vn } be a basis for a vector space V. The dimension of V, dim(V) = n.
~
88
4.5)
a11
a
Let A 21
a
m1
a12
a 22
am2
a2n
a mn
Definition
Row space of A : subspace of R n spanned by the row vectors of A
Column space of A : subspace of R m spanned by the column vectors of A
Nullspace of A : the solution space of the homogeneous system A x 0
~
89
Finding a basis for the row space, column space & nullspace of A
Basis for the row space and the column space of A
Theorem 4.5.1 If a matrix B (in row-echelon form) can be obtained from A by
elementary row operations, then the nonzero row vectors of B form a basis for the row
space of A
Example 1: (Basis for the row space and the column space, at the same time)
Find a basis for the row space and the column space of
1
0
A 3
3
2
3
1
0
4
0
1
0
6 1
2 1
4 2
1
0
B 0
0
0
3 1 3
1 1 0
0 0 1
0 0 0
0 0 0
Example 2: (Basis for the row space and the column space, at the same time)
Find a basis for the row space and the column space of the given matrix A.
Solution:
Example 3: (Basis for the column space using the transpose)
Find a basis for the column space of

    A = [  1  3  1  3 ]
        [  0  1  1  0 ]
        [ -3  0  6 -1 ]
        [  3  4 -2  1 ]
        [  2  0 -4 -2 ]

Solution: The column space of A is the row space of

    A^T = [ 1  0 -3  3  2 ]
          [ 3  1  0  4  0 ]
          [ 1  1  6 -2 -4 ]
          [ 3  0 -1  1 -2 ]

Reducing A^T gives the row-echelon form

    B* = [ 1  0 -3  3  2 ]
         [ 0  1  9 -5 -6 ]
         [ 0  0  1 -1 -1 ]
         [ 0  0  0  0  0 ]

The nonzero rows of B*, written as column vectors, form a basis for the column
space of A.
Example 4: Find a basis for the column space of the matrix A of Example 2.
Solution:
Basis for the nullspace of A
Example: Find a basis for the nullspace of

    A = [  1  3  1  3 ]
        [  0  1  1  0 ]
        [ -3  0  6 -1 ]
        [  3  4 -2  1 ]
        [  2  0 -4 -2 ]

From Example 1, the row-echelon form of the augmented matrix [A | 0] is

    [ 1  3  1  3  0 ]
    [ 0  1  1  0  0 ]
    [ 0  0  0  1  0 ]
    [ 0  0  0  0  0 ]
    [ 0  0  0  0  0 ]

Now,
    x1 + 3x2 + x3 + 3x4 = 0
          x2 +  x3      = 0
                     x4 = 0

Let x3 = s. Then x2 = -s, x1 = 2s, x4 = 0, so the solution space is

    x = [ x1 ]   [ 2s ]     [  2 ]
        [ x2 ] = [ -s ] = s [ -1 ]
        [ x3 ]   [  s ]     [  1 ]
        [ x4 ]   [  0 ]     [  0 ]

Since (2, -1, 1, 0) spans the solution space and is linearly independent, it
forms a basis for the nullspace of A.
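Nullspace bases can be computed exactly with SymPy; the sketch below applies Matrix.nullspace() to a 5x4 matrix consistent with the row reduction shown in Example 1 (the signs of the entries are assumed from that reduction, so treat them as illustrative):

```python
from sympy import Matrix

# A 5x4 matrix whose row-echelon form matches Example 1's reduction.
A = Matrix([[ 1, 3,  1,  3],
            [ 0, 1,  1,  0],
            [-3, 0,  6, -1],
            [ 3, 4, -2,  1],
            [ 2, 0, -4, -2]])

# Matrix.nullspace() returns a list of exact basis vectors for {x : Ax = 0}.
basis = A.nullspace()
for v in basis:
    print(v.T)   # one basis vector, spanning the same line as (2, -1, 1, 0)
```

SymPy works over the rationals, so the basis vector is exact rather than a floating-point approximation.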
Example: Find a basis for the nullspace of the matrix A of Example 2.
Solution:
Example: Find a basis for the nullspace of

    A = [ 1  2  2  1 ]
        [ 3  6  5  4 ]
        [ 1  2  0  3 ]

Solution: The reduced row-echelon form of the augmented matrix [A | 0] is

    [ 1  2  0  3  0 ]
    [ 0  0  1 -1  0 ]
    [ 0  0  0  0  0 ]

Now,
    x1 + 2x2 + 3x4 = 0
          x3 -  x4 = 0

Let x2 = s and x4 = t. Then x1 = -2s - 3t and x3 = t, so the solution space is

    x = [ x1 ]   [ -2s - 3t ]     [ -2 ]     [ -3 ]
        [ x2 ] = [     s    ] = s [  1 ] + t [  0 ]
        [ x3 ]   [     t    ]     [  0 ]     [  1 ]
        [ x4 ]   [     t    ]     [  0 ]     [  1 ]

Since (-2, 1, 0, 0) and (-3, 0, 1, 1) span the solution space and are linearly
independent, they form a basis for the nullspace of A.
4.6)
Definition
rank(A) = dim(row space of A)
        = number of vectors in a basis for the row space of A
        = dim(column space of A)
        = number of vectors in a basis for the column space of A
nullity(A) = dim(nullspace of A)
           = number of vectors in a basis for the nullspace of A
Theorem 4.6.1
(a) rank(A) = rank(A^T)
(b) rank(A) + nullity(A) = n, where A has n columns
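Both parts of Theorem 4.6.1 can be verified numerically. The sketch below uses NumPy's matrix_rank on a 5x4 matrix consistent with Example 1 of Section 4.5 (entries assumed from that reduction):

```python
import numpy as np

A = np.array([[ 1, 3,  1,  3],
              [ 0, 1,  1,  0],
              [-3, 0,  6, -1],
              [ 3, 4, -2,  1],
              [ 2, 0, -4, -2]], dtype=float)

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank                 # Theorem 4.6.1(b): rank + nullity = n
print(rank, nullity)                        # 3 1
print(np.linalg.matrix_rank(A.T) == rank)   # Theorem 4.6.1(a): True
```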
Example 1:
Find rank(A) and nullity(A), where A is the 5x4 matrix of Example 1 of Section 4.5.
Solution:
From Example 1 of Section 4.5, the vectors r1 = (1, 3, 1, 3), r2 = (0, 1, 1, 0),
and r3 = (0, 0, 0, 1) form a basis for the row space of A, so rank(A) = 3.
The vector (2, -1, 1, 0) forms a basis for the nullspace of A, so nullity(A) = 1.
(Check: rank(A) + nullity(A) = 3 + 1 = 4 = n.)
Example 2:
Find rank(A) and nullity(A), where A is the matrix of Example 2 of Section 4.5.
Solution:
Example 3:
Given that the size of a matrix A is 5 x 9 and rank(A) = 2, find
(a) dim(row space of A)
(b) dim(column space of A)
(c) dim(nullspace of A)
(d) dim(nullspace of A^T)
Example 4:
Given that the size of a matrix A is 8 x 6 and rank(A) = 3, find
(a) dim(row space of A)
(b) dim(column space of A)
(c) dim(nullspace of A)
(d) dim(nullspace of A^T)
5.1) Inner Products
An inner product on a vector space V assigns to each pair of vectors u, v a real
number <u, v> satisfying
Axiom 1: <u, v> = <v, u>
Axiom 2: <u + v, w> = <u, w> + <v, w>
Axiom 3: <ku, v> = k<u, v>
Axiom 4: <v, v> >= 0, and <v, v> = 0 if and only if v = 0
(i) norm of u: ||u|| = <u, u>^(1/2)

Example: Let u = A = [ a1  a2 ]  and  v = B = [ b1  b2 ]
                     [ a3  a4 ]               [ b3  b4 ]

The expression <u, v> = <A, B> = tr(A^T B) = a1b1 + a2b2 + a3b3 + a4b4 defines an
inner product in M22.
(i) Find the norm of u, ||u||

Example: An inner product can be defined similarly on P2.
(i) Find the norm of u, ||u||

Compute <u, v>, ||u||, and ||v|| for
(c) u = A = [ 1  1 ]   v = B = [ 0  2 ]
            [ 1  2 ]           [ 3  1 ]
(d) u = p(x) = 2 + x + x^2, v = q(x) = 3x + 2x^2
5.2)
The Cauchy-Schwarz Inequality: |<u, v>| <= ||u|| ||v||.
Example 1: Verify that the Cauchy-Schwarz Inequality holds for the vectors
u = (3, 1, 0, 2) and v = (2, 1, 3, 0) with respect to the Euclidean inner product.
Solution:
<u, v> =
||u|| ||v|| =
Example 2: Verify that the Cauchy-Schwarz Inequality holds for the vectors
u = (3, 1, 0, 2) and v = (2, 1, 3, 0) with respect to the weighted Euclidean inner
product.
Solution:
<u, v> =
||u|| ||v|| =
(c) cos(theta) = <u, v> / (||u|| ||v||)
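For the Euclidean inner product, the inequality in Example 1 can be checked directly (the signs of the printed vectors are taken as shown in the notes):

```python
import numpy as np

u = np.array([3.0, 1.0, 0.0, 2.0])
v = np.array([2.0, 1.0, 3.0, 0.0])

lhs = abs(np.dot(u, v))                          # |<u, v>|
rhs = np.linalg.norm(u) * np.linalg.norm(v)      # ||u|| ||v||
print(lhs <= rhs)                                # True: Cauchy-Schwarz holds
```

Here |&lt;u, v&gt;| = 7 and ||u|| ||v|| = sqrt(14) * sqrt(14) = 14, so the inequality holds with room to spare.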
Orthogonality
Two vectors u and v are orthogonal if their inner product <u, v> = 0.
Example 3:
(a) The two vectors u = A = [ 1  1 ]  and  v = B = [ 2  1 ]
                            [ 0  2 ]               [ 1  0 ]
    are orthogonal (with respect to the inner product <u, v> = tr(A^T B)).
Orthogonal Complement
Definition Let W be a subspace of an inner product space V. The set of all vectors
in V that are orthogonal to W is called the orthogonal complement of W, denoted
W^perp. We say that W and W^perp are orthogonal complements.
Example: Find a basis for the orthogonal complement of the subspace W of R^5
spanned by the vectors v1 = (2, 2, -1, 0, 1), v2 = (-1, -1, 2, -3, 1),
v3 = (1, 1, -2, 0, -1), v4 = (0, 0, 1, 1, 1).
Solution:
Let
    A = [  2  2 -1  0  1 ]
        [ -1 -1  2 -3  1 ]
        [  1  1 -2  0 -1 ]
        [  0  0  1  1  1 ]

Now, W is the row space of A. By Theorem 5.2.4(a), W^perp is the nullspace of A.
The row-echelon form of the augmented matrix [A | 0] is

    [ 1  1 -2  0 -1  0 ]
    [ 0  0  1  0  1  0 ]
    [ 0  0  0  1  0  0 ]
    [ 0  0  0  0  0  0 ]

Now,
    x1 + x2 - 2x3 - x5 = 0
               x3 + x5 = 0
                    x4 = 0

Let x2 = s and x5 = t. Then x3 = -t and x1 = -s - t, so the vectors
(-1, 1, 0, 0, 0) and (-1, 0, -1, 0, 1) form a basis for W^perp.
5.3)
A set {q1, q2, q3} in an inner product space is orthonormal if
(i) <q1, q2> = <q2, q3> = <q3, q1> = 0, and
(ii) ||q1|| = ||q2|| = ||q3|| = 1
Theorem 5.3.1 Let S = {v1, v2, v3} be an orthonormal basis for an inner product
space V. Let u be in V. Then u = <u, v1>v1 + <u, v2>v2 + <u, v3>v3.
Proof:
Since S = {v1, v2, v3} is a basis, a vector u in V can be expressed in the form
    u = k1 v1 + k2 v2 + k3 v3
Now,
    <u, v1> = <k1 v1 + k2 v2 + k3 v3, v1>
            = k1 <v1, v1> + k2 <v2, v1> + k3 <v3, v1>   (Axioms 2 and 3)
            = k1
Similarly, we have <u, v2> = k2 and <u, v3> = k3. Therefore,
    u = <u, v1>v1 + <u, v2>v2 + <u, v3>v3
The scalars <u, v1>, <u, v2>, <u, v3> in Theorem 5.3.1 are the coordinates of u
relative to the orthonormal basis S, and
    (u)_S = (<u, v1>, <u, v2>, <u, v3>)
Example 1: Let S = {v1, v2, v3}, where v1 = (0, 1, 0), v2 = (-4/5, 0, 3/5),
v3 = (3/5, 0, 4/5). Express the vector u = (1, 1, 1) as a linear combination of
the vectors in S, and find the coordinate vector (u)_S.
Solution:
    <u, v1> = 1,  <u, v2> = -1/5,  <u, v3> = 7/5
By Theorem 5.3.1, we have u = v1 - (1/5)v2 + (7/5)v3.
The coordinate vector of u relative to S is (u)_S = (1, -1/5, 7/5).
Example 2: Let S = {v1, v2, v3}, where v1 = (0, 0, 1), v2 = (12/13, 5/13, 0),
v3 = (5/13, -12/13, 0). Express the vector u = (1, 1, 1) as a linear combination
of the vectors in S, and find the coordinate vector (u)_S.
Gram-Schmidt Process
Given any basis {u1, u2, u3}, we can transform it into an orthogonal basis
{v1, v2, v3} and then normalize the orthogonal basis vectors to obtain an
orthonormal basis {q1, q2, q3}. This process is called the Gram-Schmidt process.
The construction of these orthogonal and orthonormal bases is outlined in the
following steps. Recall from Theorem 3.0.1 that proj_a u = (<u, a>/||a||^2) a.
Step 1:  v1 = u1
Step 2:  v2 = u2 - proj_v1 u2 = u2 - (<u2, v1>/||v1||^2) v1
Step 3:  v3 = u3 - proj_v1 u3 - proj_v2 u3
            = u3 - (<u3, v1>/||v1||^2) v1 - (<u3, v2>/||v2||^2) v2
Step 4:  q1 = v1/||v1||,  q2 = v2/||v2||,  q3 = v3/||v3||
Example 3: Use the Gram-Schmidt process to transform the basis {u1, u2, u3} into
an orthonormal basis, where u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1).
Solution:
v1 = u1 = (1, 1, 1)

v2 = u2 - (<u2, v1>/||v1||^2) v1
   = (0, 1, 1) - (2/3)(1, 1, 1)
   = (-2/3, 1/3, 1/3)                 [Choose v2 = (-2, 1, 1)]

v3 = u3 - (<u3, v1>/||v1||^2) v1 - (<u3, v2>/||v2||^2) v2
   = (0, 0, 1) - (1/3)(1, 1, 1) - (1/6)(-2, 1, 1)
   = (0, -1/2, 1/2)                   [Choose v3 = (0, -1, 1)]

q1 = v1/||v1|| = (1/sqrt(3))(1, 1, 1)
q2 = v2/||v2|| = (1/sqrt(6))(-2, 1, 1)
q3 = v3/||v3|| = (1/sqrt(2))(0, -1, 1)
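The steps of Example 3 can be sketched as a small NumPy routine. This is classical Gram-Schmidt exactly as in Steps 1-4 above (for large ill-conditioned problems one would prefer the modified variant or a library QR, but for hand-sized examples this form mirrors the notes):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    qs = []
    for u in vectors:
        v = u.astype(float)
        for q in qs:
            v = v - np.dot(u, q) * q      # subtract projection onto each earlier q
        qs.append(v / np.linalg.norm(v))  # normalize
    return qs

basis = [np.array([1, 1, 1]), np.array([0, 1, 1]), np.array([0, 0, 1])]
q1, q2, q3 = gram_schmidt(basis)
print(np.allclose(q1, np.array([1, 1, 1]) / np.sqrt(3)))    # True
print(np.allclose(q2, np.array([-2, 1, 1]) / np.sqrt(6)))   # True
print(np.allclose(q3, np.array([0, -1, 1]) / np.sqrt(2)))   # True
```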
5.4)
QR-decomposition
Theorem 5.4.1
If A is an m x n matrix with linearly independent column vectors u1, u2, ..., un,
then A can be factored as
    A = QR
where Q is the m x n matrix with orthonormal column vectors q1, q2, ..., qn
(resulting from applying the Gram-Schmidt process to u1, u2, ..., un), and R is
the n x n invertible upper triangular matrix

    R = [ <u1, q1>  <u2, q1>  ...  <un, q1> ]
        [    0      <u2, q2>  ...  <un, q2> ]
        [   ...                             ]
        [    0         0      ...  <un, qn> ]

For a square matrix A, the column vectors of A are linearly independent if and
only if A is invertible. Hence we can conclude that every invertible matrix A has
a QR-decomposition.
Example 1:
Find the QR-decomposition of the matrix

    A = [ 1  0  0 ]
        [ 1  1  0 ]
        [ 1  1  1 ]

Solution:
The column vectors of A, u1 = (1, 1, 1), u2 = (0, 1, 1), u3 = (0, 0, 1), are
linearly independent. From Example 3 of Section 5.3, the Gram-Schmidt process
gives
    q1 = (1/sqrt(3))(1, 1, 1), q2 = (1/sqrt(6))(-2, 1, 1), q3 = (1/sqrt(2))(0, -1, 1)
Then

    R = [ <u1, q1>  <u2, q1>  <u3, q1> ]   [ 3/sqrt(3)  2/sqrt(3)  1/sqrt(3) ]
        [    0      <u2, q2>  <u3, q2> ] = [    0       2/sqrt(6)  1/sqrt(6) ]
        [    0         0      <u3, q3> ]   [    0          0       1/sqrt(2) ]

and the QR-decomposition of A is

    [ 1  0  0 ]   [ 1/sqrt(3)  -2/sqrt(6)      0      ] [ 3/sqrt(3)  2/sqrt(3)  1/sqrt(3) ]
    [ 1  1  0 ] = [ 1/sqrt(3)   1/sqrt(6)  -1/sqrt(2) ] [    0       2/sqrt(6)  1/sqrt(6) ]
    [ 1  1  1 ]   [ 1/sqrt(3)   1/sqrt(6)   1/sqrt(2) ] [    0          0       1/sqrt(2) ]
Example 2:
Find the QR-decomposition of the matrix

    A = [ 1 -1  0 ]
        [ 1  1  0 ]
        [ 0  1  3 ]

Solution:
The column vectors of A, u1 = (1, 1, 0), u2 = (-1, 1, 1), u3 = (0, 0, 3), are
linearly independent. The Gram-Schmidt process gives
    q1 = (1/sqrt(2))(1, 1, 0), q2 = (1/sqrt(3))(-1, 1, 1), q3 = (1/sqrt(6))(1, -1, 2)
Then

    R = [ <u1, q1>  <u2, q1>  <u3, q1> ]   [ sqrt(2)    0        0    ]
        [    0      <u2, q2>  <u3, q2> ] = [   0     sqrt(3)  sqrt(3) ]
        [    0         0      <u3, q3> ]   [   0        0     sqrt(6) ]

and the QR-decomposition of A is A = QR with Q = [q1 q2 q3].
CHAPTER 6
Example 1:
Find the characteristic equation, eigenvalues, the corresponding eigenvectors, and
bases for the eigenspaces of

    A = [ 0  0 -2 ]
        [ 1  2  1 ]
        [ 1  0  3 ]

Solution:
Let det(λI - A) = 0:

    det [  λ    0     2  ] = 0
        [ -1   λ-2   -1  ]
        [ -1    0    λ-3 ]

Expanding along the second column,
    (λ - 2)[λ(λ - 3) + 2] = 0
    (λ - 2)(λ^2 - 3λ + 2) = 0
    (λ - 1)(λ - 2)^2 = 0   =>   λ = 1, 2, 2

To find the eigenvectors, consider (λI - A)x = 0, i.e.

    [  λ    0     2  ] [ x1 ]   [ 0 ]
    [ -1   λ-2   -1  ] [ x2 ] = [ 0 ]
    [ -1    0    λ-3 ] [ x3 ]   [ 0 ]

For the eigenvalue λ = 1:

    [  1  0  2 ] [ x1 ]   [ 0 ]            [ 1  0  2 | 0 ]
    [ -1 -1 -1 ] [ x2 ] = [ 0 ]    ->      [ 0  1 -1 | 0 ]
    [ -1  0 -2 ] [ x3 ]   [ 0 ]            [ 0  0  0 | 0 ]

Now,
    x1 + 2x3 = 0
    x2 -  x3 = 0

Let x3 = s. The solution space is x = s(-2, 1, 1), so

    [ -2 ]
    [  1 ] is a basis for the eigenspace of A corresponding to λ = 1
    [  1 ]

For the eigenvalue λ = 2:

    [  2  0  2 ] [ x1 ]   [ 0 ]            [ 1  0  1 | 0 ]
    [ -1  0 -1 ] [ x2 ] = [ 0 ]    ->      [ 0  0  0 | 0 ]
    [ -1  0 -1 ] [ x3 ]   [ 0 ]            [ 0  0  0 | 0 ]

Now, x1 + x3 = 0. The solution space is x = s(-1, 0, 1) + t(0, 1, 0), so

    [ -1 ]   [ 0 ]
    [  0 ] , [ 1 ] form a basis for the eigenspace of A corresponding to λ = 2
    [  1 ]   [ 0 ]
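The hand computation above can be cross-checked with NumPy's eigen-solver (library eigenvectors are normalized to unit length, so they are scalar multiples of the basis vectors found above):

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])

eigvals, eigvecs = np.linalg.eig(A)       # columns of eigvecs are eigenvectors
print(np.sort(np.round(eigvals.real, 6)))  # [1. 2. 2.]

# Check A x = lambda x for the eigenvector (-2, 1, 1) found for lambda = 1.
x = np.array([-2.0, 1.0, 1.0])
print(np.allclose(A @ x, 1.0 * x))         # True
```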
Example 2:
Find the characteristic equation, eigenvalues, the corresponding eigenvectors, and
bases for the eigenspaces of

    A = [ 3  0  1 ]
        [ 0  2  4 ]
        [ 0  0  1 ]

Solution:
Let det(λI - A) = 0:

    det [ λ-3   0   -1  ] = 0
        [  0   λ-2  -4  ]
        [  0    0   λ-1 ]

    (λ - 3)(λ - 2)(λ - 1) = 0   =>   λ = 1, 2, 3

To find the eigenvectors, consider (λI - A)x = 0.
For the eigenvalue λ = 1:

    [ -2   0  -1 ] [ x1 ]   [ 0 ]
    [  0  -1  -4 ] [ x2 ] = [ 0 ]
    [  0   0   0 ] [ x3 ]   [ 0 ]

Now,
    2x1 +  x3 = 0
     x2 + 4x3 = 0

Let x3 = s. Then x1 = -(1/2)s and x2 = -4s, so the solution space is
x = (s/2)(-1, -8, 2), and

    [ -1 ]
    [ -8 ] is a basis for the eigenspace of A corresponding to λ = 1
    [  2 ]

For the eigenvalue λ = 2:

    [ -1  0  -1 ] [ x1 ]   [ 0 ]
    [  0  0  -4 ] [ x2 ] = [ 0 ]
    [  0  0   1 ] [ x3 ]   [ 0 ]

Now, x1 + x3 = 0 and x3 = 0, so x1 = x3 = 0 and x2 = s is free. The solution
space is x = s(0, 1, 0), so

    [ 0 ]
    [ 1 ] is a basis for the eigenspace of A corresponding to λ = 2
    [ 0 ]

For the eigenvalue λ = 3:

    [ 0  0  -1 ] [ x1 ]   [ 0 ]
    [ 0  1  -4 ] [ x2 ] = [ 0 ]
    [ 0  0   2 ] [ x3 ]   [ 0 ]

Now, x3 = 0 and x2 - 4x3 = 0, so x2 = x3 = 0 and x1 = s is free. The solution
space is x = s(1, 0, 0), so

    [ 1 ]
    [ 0 ] is a basis for the eigenspace of A corresponding to λ = 3
    [ 0 ]
Example 3:
Find the characteristic equation, eigenvalues, the corresponding eigenvectors, and
bases for the eigenspaces of the given 3x3 matrix A.
Solution:
Theorem: If λ is an eigenvalue of A and x is a corresponding eigenvector, then
λ^k is an eigenvalue of A^k and x is a corresponding eigenvector.
Example 4:
Find the eigenvalues and corresponding eigenvectors of the matrix A^7, where

    A = [ 0  0 -2 ]
        [ 1  2  1 ]
        [ 1  0  3 ]

Solution: From Example 1, the eigenvalues of A are λ = 1 and λ = 2, so the
eigenvalues of A^7 are 1^7 = 1 and 2^7 = 128. An eigenvector of A corresponding
to λ = 2 is also an eigenvector of A^7 corresponding to 2^7 = 128.
Example 5:
Find the eigenvalues and the corresponding eigenvectors of the matrix A^9, where
the 3x3 matrix A is as given.
Solution:
6.2)
Diagonalization
Example 1: Find a matrix P that diagonalizes

    A = [ 0  0 -2 ]
        [ 1  2  1 ]
        [ 1  0  3 ]

From Example 1 of the previous section, the eigenvalues of A are λ = 1 and λ = 2.
We also found that (-2, 1, 1) is a basis for the eigenspace of A corresponding to
λ = 1, and (-1, 0, 1), (0, 1, 0) form a basis for the eigenspace of A
corresponding to λ = 2.
We let p1 = (-2, 1, 1), p2 = (-1, 0, 1), and p3 = (0, 1, 0). The matrix

    P = [ -2 -1  0 ]
        [  1  0  1 ]
        [  1  1  0 ]

diagonalizes A. It can be checked that

    P^(-1) A P = [ 1  0  0 ]
                 [ 0  2  0 ]
                 [ 0  0  2 ]
Example 2: Determine whether the given matrix A is diagonalizable.
Solution:
Example 3: Determine whether the given matrix A is diagonalizable.
Solution:
Let det(λI - A) = 0. Then
    (λ - 2)(λ - 2)(λ - 3) = 0   =>   λ = 2, 2, 3
Example 4: Determine whether the given matrix A is diagonalizable.
Solution:
Example 5:
Find a matrix P that diagonalizes

    A = [ 3  0  0 ]
        [ 0  2  0 ]
        [ 0  1  2 ]

and find P^(-1)AP.
Solution:
Let det(λI - A) = 0. Then λ = 3, 2, 2.
Since A has 2 distinct eigenvalues, we cannot use Theorem 6.2.2 to decide whether
A is diagonalizable or not. We therefore find how many linearly independent
eigenvectors A has.
To find the eigenvectors, consider (λI - A)x = 0, i.e.

    [ λ-3   0    0  ] [ x1 ]   [ 0 ]
    [  0   λ-2   0  ] [ x2 ] = [ 0 ]
    [  0   -1   λ-2 ] [ x3 ]   [ 0 ]

For the eigenvalue λ = 3:

    [ 0   0  0 ] [ x1 ]   [ 0 ]
    [ 0   1  0 ] [ x2 ] = [ 0 ]    ->    x2 = 0, x3 = 0
    [ 0  -1  1 ] [ x3 ]   [ 0 ]

Let x1 = s. The solution space is x = s(1, 0, 0), so (1, 0, 0) is a basis for
the eigenspace of A corresponding to λ = 3.
For the eigenvalue λ = 2:

    [ -1  0  0 ] [ x1 ]   [ 0 ]
    [  0  0  0 ] [ x2 ] = [ 0 ]    ->    x1 = 0, x2 = 0
    [  0 -1  0 ] [ x3 ]   [ 0 ]

Let x3 = s. The solution space is x = s(0, 0, 1), so (0, 0, 1) is a basis for
the eigenspace of A corresponding to λ = 2.
Since A has only 2 linearly independent eigenvectors, by Theorem 6.2.1 A is NOT
diagonalizable.
Example 6: Determine whether the given 3x3 matrix A is diagonalizable.
Theorem 6.2.3 Let D = P^(-1)AP be a diagonal matrix for some invertible matrix P.
Then
    A^k = P D^k P^(-1)
Example: Find A^13, where (refer to Example 1)

    A = [ 0  0 -2 ]
        [ 1  2  1 ]
        [ 1  0  3 ]

Solution: It was shown that the matrix

    P = [ -2 -1  0 ]
        [  1  0  1 ]
        [  1  1  0 ]

diagonalizes A, with D = P^(-1)AP = diag(1, 2, 2). Thus

    A^13 = P D^13 P^(-1)
         = [ -2 -1  0 ] [ 1   0     0   ] [ -1  0 -1 ]
           [  1  0  1 ] [ 0  2^13   0   ] [  1  0  2 ]
           [  1  1  0 ] [ 0   0    2^13 ] [  1  1  1 ]
         = [ -8190    0   -16382 ]
           [  8191  8192   8191  ]
           [  8191    0   16383  ]
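Theorem 6.2.3 is easy to verify numerically: building A^13 from the diagonalization gives the same matrix as raising A to the 13th power directly:

```python
import numpy as np

A = np.array([[0.0, 0.0, -2.0],
              [1.0, 2.0,  1.0],
              [1.0, 0.0,  3.0]])
P = np.array([[-2.0, -1.0, 0.0],
              [ 1.0,  0.0, 1.0],
              [ 1.0,  1.0, 0.0]])
D = np.diag([1.0, 2.0, 2.0])

# A^13 = P D^13 P^(-1); powering a diagonal matrix just powers its entries.
A13 = P @ np.linalg.matrix_power(D, 13) @ np.linalg.inv(P)
print(np.allclose(A13, np.linalg.matrix_power(A, 13)))   # True
print(round(A13[1, 1]))                                   # 8192, i.e. 2^13
```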
Example: Find A^10, where A is the matrix of Example 2.
Solution:
CHAPTER 7
7.1) Linear Programming
A linear programming problem in two variables asks for values of x1 and x2 that
optimize (maximize or minimize) the objective function
    Z = c1 x1 + c2 x2
subject to the constraints
    a11 x1 + a12 x2 (<=)(=)(>=) b1
    ...
    am1 x1 + am2 x2 (<=)(=)(>=) bm
and non-negativity constraints
    x1 >= 0, x2 >= 0
A pair of values (x1, x2) satisfying all of the constraints is called a feasible
solution. The set of all feasible solutions determines a subset of the x1x2-plane
called the feasible region. A solution that optimizes the objective function is
called an optimal solution.
Note that each of the constraints defines a line or a half-plane in the
x1x2-plane. The feasible region is therefore an intersection of finitely many
lines or half-planes. The boundary points of a feasible region that are
intersections of two of the boundary lines are called corner points.
Theorem 7.1.1
(a) If a feasible region of a linear programming problem is nonempty and bounded,
then the objective function attains the optimum values and these values occur at
the corner points of the feasible region
(b) If a feasible region of a linear programming problem is unbounded, then the
objective function may or may not attain an optimum value; however, if it does
attain an optimum value, that value occurs at a corner point of the feasible
region
Example 1:
Find values of x1 and x2 that maximize the objective function
    Z = x1 + 3x2
subject to the constraints
    2x1 + 3x2 <= 24
     x1 -  x2 <= 7
           x2 <= 6
and non-negativity constraints x1 >= 0, x2 >= 0.
Solution:
The value of the objective function at the corresponding corner points is shown
below:

    (x1, x2)       (0,6)  (3,6)  (9,2)  (7,0)  (0,0)
    Z = x1 + 3x2    18     21     15      7      0

The maximum value Z = 21 occurs at the corner point (3, 6).
For the objective function Z = 2x1 - x2, the corner points give

    (x1, x2)       (3,2)  (6,0)
    Z = 2x1 - x2     4     12
7.2) Markov Chains
For a 3-state Markov chain, the transition matrix has the form

    P = [ p11  p12  p13 ]
        [ p21  p22  p23 ]
        [ p31  p32  p33 ]

The entry p32 is the probability that the system will change from state 2 to
state 3. The sum of each column, p1j + p2j + p3j = 1. This is because if the
system is in state j at one observation, it is certain to be in one of the three
possible states at the next observation.
Example 1:
A car rental agency has three rental locations, denoted by 1, 2, and 3. A
customer may rent a car from any of the three locations and return the car to any
of the three locations. The manager finds that customers return the cars to the
various locations according to the following probabilities:

    P = [ 0.8  0.3  0.2 ]
        [ 0.1  0.2  0.6 ]
        [ 0.1  0.5  0.2 ]

This matrix is the transition matrix of a Markov chain. The probability is
p23 = 0.6 that a car will be returned to Location 2 after being rented from
Location 3. The probability is p11 = 0.8 that a car will be returned to
Location 1 after being rented from Location 1.
Example 2:
The alumni office of a university finds that 80% of its alumni who contribute to
the annual fund one year will also contribute the next year, and 30% of those who
do not contribute one year will contribute the next. This can be viewed as a
Markov chain with two states: State 1 corresponds to an alumnus contributing in
any one year, and State 2 corresponds to the alumnus not contributing in that
year. The transition matrix is given by

    P = [ 0.8  0.3 ]
        [ 0.2  0.7 ]

Definition
The state vector for an observation of a Markov chain with k states is a column
vector x whose ith component xi is the probability that the system is in the ith
state at that time. Observe that the entries in any state vector for a Markov
chain are nonnegative and have a sum of 1; such a vector is also called a
probability vector.
Theorem 7.2.1
Let P be the transition matrix of a Markov chain and let x^(k) be the state
vector at the kth observation. Then
    x^(k+1) = P x^(k)
and consequently
    x^(n) = P^n x^(0)   for n = 1, 2, ...
Example 3:
Consider the transition matrix in Example 2. We are interested in the probable
future contribution record of a new graduate who did not contribute in the
initial year after graduation. In this case, the system is initially in State 2
with certainty, so the initial state vector is x^(0) = (0, 1)^T. From Theorem
7.2.1, we have

    x^(1) = P x^(0) = [ 0.8  0.3 ] [ 0 ] = [ 0.3 ]
                      [ 0.2  0.7 ] [ 1 ]   [ 0.7 ]

    x^(2) = P x^(1) = [ 0.45 ]      x^(3) = P x^(2) = [ 0.525 ]
                      [ 0.55 ]                        [ 0.475 ]

Compute x^(8) and x^(10); both are approximately (0.6, 0.4)^T. (Interpret this
result.) The state vectors converge to a fixed vector as the number of
observations increases.
Example 4:
Consider the transition matrix in Example 1. Suppose a car is rented initially
from Location 2, so that the initial state vector is x^(0) = (0, 1, 0)^T. From
Theorem 7.2.1, we have

    x^(1) = P x^(0) = [ 0.8  0.3  0.2 ] [ 0 ]   [ 0.3 ]
                      [ 0.1  0.2  0.6 ] [ 1 ] = [ 0.2 ]
                      [ 0.1  0.5  0.2 ] [ 0 ]   [ 0.5 ]

    x^(2) = P x^(1) = [ 0.40 ]
                      [ 0.37 ]
                      [ 0.23 ]

Do the state vectors converge to a fixed vector as the number of observations
increases?
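The iteration x^(k+1) = P x^(k) of Theorem 7.2.1 is a one-line loop; the sketch below reproduces the state vectors of Example 4:

```python
import numpy as np

P = np.array([[0.8, 0.3, 0.2],
              [0.1, 0.2, 0.6],
              [0.1, 0.5, 0.2]])
x = np.array([0.0, 1.0, 0.0])    # car rented from Location 2

for _ in range(2):               # two observations: x(1), then x(2)
    x = P @ x
print(np.round(x, 2))            # x(2) = [0.4, 0.37, 0.23]
```

Raising the loop count shows the state vectors settling toward a fixed vector, which motivates the steady-state discussion that follows.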
Definition
A transition matrix P is regular if for some positive integer m, all the entries
of the matrix P^m are positive. A Markov chain with a regular transition matrix
is called a regular Markov chain.
Theorem 7.2.2
Let P be a regular transition matrix and let x be any state vector. Then there
are numbers q1, q2, ..., qk such that
(a) P^n approaches, as n -> infinity, the matrix whose columns are all equal to
    q = (q1, q2, ..., qk)^T, and
(b) P^n x approaches q as n -> infinity.
The vector q in Theorem 7.2.2 above is called the steady-state vector of the
regular Markov chain. To compute the steady-state vector, we make use of the
following theorem.
Theorem 7.2.3
The steady-state vector q of a regular transition matrix P is the unique
probability vector that satisfies P q = q.
Example 5:
To find the steady-state vector for the regular transition matrix

    P = [ 0.8  0.3 ]
        [ 0.2  0.7 ]

we begin by solving the equation P q = q, or (I - P) q = 0. The linear system is

    [  0.2  -0.3 ] [ q1 ]   [ 0 ]
    [ -0.2   0.3 ] [ q2 ] = [ 0 ]

The solution is q = s(1.5, 1)^T. Choosing s = 0.4 so that the entries sum to 1
gives

    q = [ 0.6 ]
        [ 0.4 ]

This means that over the long run, 60% of the alumni will contribute to the
annual fund in any one year, and 40% will not.
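Since P q = q says that q is an eigenvector of P for eigenvalue 1, the steady state can also be extracted from an eigen-decomposition and rescaled into a probability vector:

```python
import numpy as np

P = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Steady state: the eigenvector of P for eigenvalue 1, scaled to sum to 1.
w, V = np.linalg.eig(P)
q = V[:, np.argmin(np.abs(w - 1.0))].real
q = q / q.sum()
print(np.round(q, 2))    # [0.6 0.4], matching Example 5
```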
Example 6:
Refer to Example 1. The steady-state vector for the regular transition matrix

    P = [ 0.8  0.3  0.2 ]
        [ 0.1  0.2  0.6 ]
        [ 0.1  0.5  0.2 ]

is

    q = [ 34/61 ]   [ 0.5573 ]
        [ 14/61 ] ≈ [ 0.2295 ]
        [ 13/61 ]   [ 0.2132 ]

The entries give the long-run probabilities that any one car will be returned to
Locations 1, 2, or 3, respectively. If the car agency has a fleet of 10,000 cars,
it should design its facilities so that there are at least 5,573 spaces at
Location 1, at least 2,295 spaces at Location 2, and at least 2,132 spaces at
Location 3.
7.3) Economic Models
(Closed Leontief model)
Suppose an economy consists of k industries. Let pi be the price charged by the
ith industry for its output, and let eij be the fraction of the output of the jth
industry purchased by the ith industry. Form the price vector
    p = (p1, p2, ..., pk)^T
and the input-output matrix

    E = [ e11  e12  ...  e1k ]
        [ e21  e22  ...  e2k ]
        [ ...                ]
        [ ek1  ek2  ...  ekk ]

Now, in order that the total expenditure of each industry be equal to its total
income, the following matrix equation must be satisfied:
    E p = p
or
    (I - E) p = 0
It can be proved that the system E p = p always has a nontrivial solution p whose
entries are nonnegative.
Example 1:
Three homeowners, a carpenter, an electrician, and a plumber, agree to make
repairs in their three homes. They agree to work a total of 10 days each
according to the following schedule:

                                    Work performed by
                               Carpenter  Electrician  Plumber
    Days of work in the
    home of the: Carpenter         2          1           6
                 Electrician       4          5           1
                 Plumber           4          4           3

The three of them must pay each other reasonable daily wages, even for the work
each does on his or her own home. Their normal daily wages are about RM100, but
they agree to adjust their respective daily wages so that each homeowner will
come out even, that is, the total paid out by each is the same as the total
amount each receives. To satisfy the equilibrium condition, we require that
    Total expenditure = Total income
Let pi be the price charged by (daily wage of) the ith industry for its output
(work done), so p1, p2, p3 are the daily wages of the carpenter, electrician, and
plumber, respectively.
[Note: As an example, the carpenter pays a total of 2p1 + 1p2 + 6p3 for the
repair of his own home and receives 10p1. For equilibrium, 2p1 + 1p2 + 6p3 = 10p1,
or 0.2p1 + 0.1p2 + 0.6p3 = p1.]
Based on the 10-day period, the input-output matrix is given by

    E = [ 0.2  0.1  0.6 ]
        [ 0.4  0.5  0.1 ]
        [ 0.4  0.4  0.3 ]

Since E p = p, we have

    [ 0.2  0.1  0.6 ] [ p1 ]   [ p1 ]
    [ 0.4  0.5  0.1 ] [ p2 ] = [ p2 ]
    [ 0.4  0.4  0.3 ] [ p3 ]   [ p3 ]

Solving, we obtain

    p = [ p1 ]     [ 31 ]
        [ p2 ] = s [ 32 ]
        [ p3 ]     [ 36 ]

The constant s is a scale factor which the homeowners may choose for their
convenience. Since their normal daily wages are about RM100, they may choose
s = 3, so that the corresponding daily wages are RM93, RM96, and RM108 for the
carpenter, electrician, and plumber, respectively.
(Open Leontief model)
Suppose an economy consists of k industries, each facing an outside demand. Let
xi be the monetary value of the total output of the ith industry, di the outside
demand on the ith industry, and cij the monetary value of the output of the ith
industry needed to produce one unit of output of the jth industry. Form the
production vector x = (x1, ..., xk)^T, the demand vector d = (d1, ..., dk)^T, and
the consumption matrix

    C = [ c11  c12  ...  c1k ]
        [ c21  c22  ...  c2k ]
        [ ...                ]
        [ ck1  ck2  ...  ckk ]

The theory further suggests that production must cover internal consumption plus
outside demand:

    [ x1 ]   [ c11  c12  ...  c1k ] [ x1 ]   [ d1 ]
    [ x2 ] = [ c21  c22  ...  c2k ] [ x2 ] + [ d2 ]
    [ ...]   [ ...                ] [ ...]   [ ...]
    [ xk ]   [ ck1  ck2  ...  ckk ] [ xk ]   [ dk ]

This leads to
    x = C x + d
or
    (I - C) x = d
Example 2:
A town has three main industries: a coal-mining operation (Industry 1), an
electric power-generating plant (Industry 2), and a local railroad (Industry 3).
To mine RM1 of coal, the mining operation must purchase RM0.25 of electricity to
run its equipment and RM0.25 of transportation for its shipping needs.
To produce RM1 of electricity, the generating plant requires RM0.65 of coal for
fuel, RM0.05 of its own electricity to run auxiliary equipment, and RM0.05 of
transportation.
To provide RM1 of transportation, the railroad requires RM0.55 of coal for fuel
and RM0.10 of electricity for its auxiliary equipment.
In a certain week the coal-mining operation receives orders for RM50,000 of coal
from outside the town, and the generating plant receives orders for RM25,000 of
electricity from outside. There is no outside demand for the local railroad.
How much must each of the three industries produce in that week to exactly
satisfy its own demand and the outside demand?
Solution:
For the one-week period, let xi denote the monetary value of the total output of
the ith industry. The consumption matrix of the system is

    C = [ 0     0.65  0.55 ]
        [ 0.25  0.05  0.10 ]
        [ 0.25  0.05  0    ]

and the outside demand vector is d = (50,000, 25,000, 0)^T. The solution is given
by

    x = (I - C)^(-1) d ≈ [ 102,087 ]
                         [  56,163 ]
                         [  28,330 ]

The total output of the coal-mining operation should be RM102,087, the total
output of the power-generating plant should be RM56,163, and the total output of
the railroad should be RM28,330.
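Solving (I - C)x = d is a single linear solve; the sketch below reproduces Example 2's figures:

```python
import numpy as np

C = np.array([[0.00, 0.65, 0.55],
              [0.25, 0.05, 0.10],
              [0.25, 0.05, 0.00]])
d = np.array([50_000.0, 25_000.0, 0.0])

# Solve (I - C) x = d for the production vector x.
x = np.linalg.solve(np.eye(3) - C, d)
print(np.round(x))   # approximately [102087, 56163, 28330]
```

Using np.linalg.solve rather than forming (I - C)^(-1) explicitly is the usual numerical practice; the result is the same production vector.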
7.4) Cryptography
Cryptography is the study of encoding and decoding secret messages.
Plaintext - uncoded message
Cipher - code
Ciphertext - coded message
Enciphering - process of converting plaintext to ciphertext
Deciphering - process of converting ciphertext to plaintext
Enciphering
Substitution Ciphers
The simplest ciphers, called substitution ciphers, are those whereby each letter
is replaced by a different letter. A disadvantage of substitution ciphers is that
they preserve the frequencies of individual letters, making it relatively easy to
break the code by statistical methods.
Example 1:
Convert the following plaintext to ciphertext by replacing each letter with the
letter three positions further along the alphabet, i.e. A is replaced with D, B
is replaced with E, ..., X is replaced with A, Y is replaced with B, Z is
replaced with C.
ACTUARIAL STUDIES
Solution: DFWXDULDO VWXGLHV
Hill n-cipher
A polygraphic system is a cryptographic system in which the plaintext is divided
into sets of n letters, each of which is replaced by a set of n cipher letters.
A special class of polygraphic systems, called Hill n-ciphers, is based on matrix
transformations. In a Hill n-cipher, plaintext is grouped into sets of n letters
and enciphered by an n x n matrix with integer entries.
Example 2:
Use the matrix A = [ 1  2 ]  to obtain the Hill 2-cipher for the plaintext
                   [ 0  3 ]
I AM HIDING
Solution:
Group the plaintext into pairs of letters: IA, MH, ID, IN, GG. The dummy letter
G is introduced to fill out the last pair. With A = 1, B = 2, ..., Z = 26 ≡ 0
(mod 26):

    [ 1  2 ] [  9 ] = [ 11 ]                          ->  KC
    [ 0  3 ] [  1 ]   [  3 ]

    [ 1  2 ] [ 13 ] = [ 29 ] = [  3 ] (mod 26)        ->  CX
    [ 0  3 ] [  8 ]   [ 24 ]   [ 24 ]

    [ 1  2 ] [  9 ] = [ 17 ]                          ->  QL
    [ 0  3 ] [  4 ]   [ 12 ]

    [ 1  2 ] [  9 ] = [ 37 ] = [ 11 ] (mod 26)        ->  KP
    [ 0  3 ] [ 14 ]   [ 42 ]   [ 16 ]

    [ 1  2 ] [  7 ] = [ 21 ]                          ->  UU
    [ 0  3 ] [  7 ]   [ 21 ]

The ciphertext is KCCXQLKPUU.
Example 4:
Find the residue modulo 26 of 87, -38, and -26.
Solution:
87 ≡ 9 (mod 26),  -38 ≡ 14 (mod 26),  -26 ≡ 0 (mod 26)
Deciphering
Every useful cipher must have a procedure for decipherment. In the case of a
Hill n-cipher, decipherment uses the inverse (mod 26) of the enciphering matrix.
Let A = (aij), where aij is in Zm = {0, 1, 2, ..., m-1}. If there exists a matrix
B = (bij) with bij in Zm such that AB = BA = I (mod m), then A is said to be
invertible modulo m.
A square matrix A with entries in Z26 is invertible modulo 26 if and only if
det(A) is not divisible by 2 or 13.
Example 6:
(a) The matrix A = [ 9  6 ]  is invertible modulo 26 because det(A) = 3 is not
                   [ 4  3 ]  divisible by 2 or 13.
(b) The matrix A = [ 6  3 ]  is NOT invertible modulo 26 because det(A) = 12 is
                   [ 2  3 ]  divisible by 2.
(c) The matrix A = [ 8  17 ]  is NOT invertible modulo 26 because det(A) = 39 is
                   [ 1   7 ]  divisible by 13.
Theorem 7.4.2 Let A = [ a  b ]  where a, b, c, d are in Z26. Then
                      [ c  d ]

    A^(-1) = (ad - bc)^(-1) [  d  -b ]  (mod 26)
                            [ -c   a ]

Example 7:
Let A = [ 5  6 ]. Then
        [ 2  3 ]

    A^(-1) = (ad - bc)^(-1) [  d  -b ] (mod 26)
                            [ -c   a ]
           = [(5)(3) - (6)(2)]^(-1) [  3  -6 ] (mod 26)
                                    [ -2   5 ]
           = 3^(-1) [  3  -6 ] (mod 26)
                    [ -2   5 ]
           = 9 [  3  -6 ] (mod 26)
               [ -2   5 ]
           = [  27  -54 ] (mod 26)
             [ -18   45 ]
           = [ 1  24 ]
             [ 8  19 ]
Example 8:
Decipher the following Hill 2-cipher, which was enciphered by the matrix
A = [ 5  6 ]:
    [ 2  3 ]
GTNKGKDUSK
Solution:
Group the ciphertext into pairs and multiply each pair by
A^(-1) = [ 1  24 ] (from Example 7):
         [ 8  19 ]

    [ 1  24 ] [  7 ] = [ 487 ] = [ 19 ] (mod 26)  ->  ST
    [ 8  19 ] [ 20 ]   [ 436 ]   [ 20 ]

    [ 1  24 ] [ 14 ] = [ 278 ] = [ 18 ] (mod 26)  ->  RI
    [ 8  19 ] [ 11 ]   [ 321 ]   [  9 ]

    [ 1  24 ] [  7 ] = [ 271 ] = [ 11 ] (mod 26)  ->  KE
    [ 8  19 ] [ 11 ]   [ 265 ]   [  5 ]

    [ 1  24 ] [  4 ] = [ 508 ] = [ 14 ] (mod 26)  ->  NO
    [ 8  19 ] [ 21 ]   [ 431 ]   [ 15 ]

    [ 1  24 ] [ 19 ] = [ 283 ] = [ 23 ] (mod 26)  ->  WW
    [ 8  19 ] [ 11 ]   [ 361 ]   [ 23 ]

The plaintext is STRIKE NOW (the trailing W is a dummy letter).
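Deciphering is the same matrix multiplication with the inverse modulo 26; the sketch below reproduces Example 8:

```python
import numpy as np

A_inv = np.array([[1, 24],
                  [8, 19]])            # inverse of [[5, 6], [2, 3]] modulo 26

def to_num(ch):
    return (ord(ch) - ord('A') + 1) % 26   # A=1, ..., Y=25, Z=0

def to_ch(n):
    return chr((n - 1) % 26 + ord('A'))

def hill_decipher(ciphertext):
    out = []
    for i in range(0, len(ciphertext), 2):
        c = np.array([to_num(ciphertext[i]), to_num(ciphertext[i + 1])])
        p = A_inv @ c % 26             # decipher one pair
        out += [to_ch(p[0]), to_ch(p[1])]
    return ''.join(out)

print(hill_decipher('GTNKGKDUSK'))   # STRIKENOWW -> "STRIKE NOW" (W is a dummy)
```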
Example 9:
Decipher the IMPORTANT message from Dr Ho, a message which was enciphered by the
matrix A = [ 5  6 ]:
           [ 2  3 ]
YLXFRENSGTYHGQERFMXELMVYWE
7.5) Age-Specific Population Growth (Leslie Model)
Suppose the maximum age attained by any female in a population is L years, and
the population is divided into n age classes, each of duration L/n years. Let
xi^(0) denote the number of females in the ith age class at time 0, and form the
initial age distribution vector
    x^(0) = (x1^(0), x2^(0), ..., xn^(0))^T
As time progresses, the number of females within each of the n classes changes
due to three biological processes, namely birth, death, and aging. We can observe
the population at discrete times. The Leslie model requires that the duration
between any two successive observation times be the same as the duration of the
age interval. Therefore we set
    tk = kL/n   for k = 0, 1, 2, ...
Next we define two demographic parameters. Let ai denote the average number of
daughters born to each female during the time she is in the ith age class. Let bi
denote the fraction (probability) of females in the ith age class that can be
expected to survive and pass into the (i+1)th age class. With these definitions,
we have ai >= 0 and 0 < bi <= 1.
Define the age distribution vector x^(k) = (x1^(k), ..., xn^(k))^T at time
tk = kL/n, where xi^(k) is the number of females in the ith age class at time tk.
Therefore, we have
    x1^(k) = a1 x1^(k-1) + a2 x2^(k-1) + ... + an xn^(k-1)
and
    x(i+1)^(k) = bi xi^(k-1)   for i = 1, 2, ..., n-1
Or, more compactly, x^(k) = L x^(k-1), where L is the Leslie matrix

    L = [ a1  a2  a3  ...  a(n-1)  an ]
        [ b1  0   0   ...    0     0  ]
        [ 0   b2  0   ...    0     0  ]
        [ ...                         ]
        [ 0   0   0   ...  b(n-1)  0  ]

It follows that x^(k) = L^k x^(0).
Example 1:
Draw a diagram depicting the number of females in each age class at each
observation time, for the case where the maximum attained age is 80 years and the
population is divided into 4 age classes, each 20 years in duration.
Solution:
The age classes are 0-20 (Class 1), 20-40 (Class 2), 40-60 (Class 3), and 60-80
(Class 4). At time t = 0 the classes contain x1^(0), x2^(0), x3^(0), x4^(0)
females; at time t = 20 they contain x1^(1), ..., x4^(1); and in general, at time
t = tk they contain x1^(k), x2^(k), x3^(k), x4^(k) females.
Notice that
    x2^(k) = b1 x1^(k-1)
    x3^(k) = b2 x2^(k-1)
    x4^(k) = b3 x3^(k-1)
Therefore,

    [ x1^(k) ]   [ a1  a2  a3  a4 ] [ x1^(k-1) ]
    [ x2^(k) ] = [ b1  0   0   0  ] [ x2^(k-1) ]
    [ x3^(k) ]   [ 0   b2  0   0  ] [ x3^(k-1) ]
    [ x4^(k) ]   [ 0   0   b3  0  ] [ x4^(k-1) ]

The Leslie matrix is given by

    L = [ a1  a2  a3  a4 ]
        [ b1  0   0   0  ]
        [ 0   b2  0   0  ]
        [ 0   0   b3  0  ]
Example 2:
Suppose that a certain animal population is divided into two age classes and has
the Leslie matrix

    L = [ 1    2 ]
        [ 2/5  0 ]

Beginning with the initial age distribution vector x^(0) = (100, 0)^T, calculate
x^(1), x^(2), x^(3), x^(4), and x^(5).
Solution:
Example 3:
Suppose that the maximum age attained by the females in a certain animal
population is 15 years, and we divide the population into three age classes with
equal durations of five years. Let the Leslie matrix for this population be

    L = [ 0    4    3 ]
        [ 1/2  0    0 ]
        [ 0    1/4  0 ]

There are initially 1,000 females in each of the three age classes, so
x^(0) = (1000, 1000, 1000)^T. Then

    x^(1) = L x^(0) = [ 0    4    3 ] [ 1000 ]   [ 7000 ]
                      [ 1/2  0    0 ] [ 1000 ] = [  500 ]
                      [ 0    1/4  0 ] [ 1000 ]   [  250 ]

    x^(2) = L x^(1) = [ 2750 ]      x^(3) = L x^(2) = [ 14375 ]
                      [ 3500 ]                        [  1375 ]
                      [  125 ]                        [   875 ]

Therefore, after 15 years there are 14,375 females between 0 and 5 years of age,
1,375 females between 5 and 10 years of age, and 875 females between 10 and 15
years of age.
(The 2 x 2 case)
Let L = [ a1  a2 ]. The characteristic equation of L is given by
        [ b1  0  ]

    p(λ) = det(λI - L) = det [ λ - a1  -a2 ] = 0
                             [  -b1     λ  ]

    λ(λ - a1) - a2 b1 = 0
    λ^2 - a1 λ - a2 b1 = 0

Dividing by λ^2 and letting

    q(λ) = a1/λ + a2 b1/λ^2

the characteristic equation becomes q(λ) = 1.
(The 3 x 3 case)
Let L = [ a1  a2  a3 ]. The characteristic equation of L is given by
        [ b1  0   0  ]
        [ 0   b2  0  ]

    p(λ) = det(λI - L) = det [ λ - a1  -a2  -a3 ] = 0
                             [  -b1     λ    0  ]
                             [   0     -b2   λ  ]

    λ^2(λ - a1) - a2 b1 λ - a3 b1 b2 = 0
    λ^3 - a1 λ^2 - a2 b1 λ - a3 b1 b2 = 0

Dividing by λ^3 and letting

    q(λ) = a1/λ + a2 b1/λ^2 + a3 b1 b2/λ^3

the characteristic equation becomes q(λ) = 1.
A Leslie matrix has a unique positive eigenvalue λ1, and for large values of k,

    x^(k) ≈ λ1 x^(k-1)

This implies that for large values of time, each age distribution vector is a
scalar multiple of the preceding age distribution vector, the scalar being the
positive eigenvalue λ1. Consequently, the proportion of females in each of the
age classes becomes constant. These limiting proportions can be determined from
the eigenvector x corresponding to λ1.
Example 4:
Suppose that the maximum age attained by the females in a certain animal
population is 15 years, and we divide the population into three age classes with
equal durations of five years, with Leslie matrix

    L = [ 0    4    3 ]
        [ 1/2  0    0 ]
        [ 0    1/4  0 ]

The characteristic equation 8λ^3 - 16λ - 3 = 0 factors as

    (2λ - 3)(4λ^2 + 6λ + 1) = 0

so the unique positive eigenvalue is λ1 = 3/2. For large k,

    x^(k) ≈ (3/2) x^(k-1)

so eventually the number of females in each of the three classes will increase by
about 50% every five years, as will the total number of females in the
population. Consequently, the females will eventually be distributed among the
three age classes in the ratio 1 : 1/3 : 1/18, or 18 : 6 : 1. This corresponds to
a distribution of 72% of the females in the first age class, 24% of the females
in the second age class, and 4% of the females in the third age class.