Linear Algebra
W W L CHEN
© W W L Chen, 1982, 2008.
This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990.
It is available free to all individuals, on the understanding that it is not to be used for financial gain,
and may be downloaded and/or photocopied, with or without permission from the author.
However, this document may not be kept on any information storage and retrieval system without permission
from the author, unless such system is not accessible to any individuals other than its owners.
Chapter 2
MATRICES
2.1. Introduction
A rectangular array of numbers of the form

$$\begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \qquad (1)$$

is called an m × n matrix, with m rows and n columns. We count rows from the top and columns from the left. Hence

$$( a_{i1} \;\; \dots \;\; a_{in} ) \qquad\text{and}\qquad \begin{pmatrix} a_{1j} \\ \vdots \\ a_{mj} \end{pmatrix}$$

represent respectively the i-th row and the j-th column of the matrix (1), and aij represents the entry in the matrix (1) in the i-th row and j-th column.
Example 2.1.1. Consider the 3 × 4 matrix

$$\begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix}.$$

Here

$$( 3 \;\; 1 \;\; 5 \;\; 2 ) \qquad\text{and}\qquad \begin{pmatrix} 3 \\ 5 \\ 7 \end{pmatrix}$$

represent respectively the 2-nd row and the 3-rd column of the matrix, and 5 represents the entry in the matrix in the 2-nd row and 3-rd column.

Chapter 2 : Matrices — page 1 of 39
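The row, column and entry extraction above can be checked numerically. The sketch below uses Python with the numpy library; this is only an illustration and is not part of the text.

```python
import numpy as np

# The 3 x 4 matrix of Example 2.1.1
A = np.array([[ 2, 4, 3, -1],
              [ 3, 1, 5,  2],
              [-1, 0, 7,  6]])

row2 = A[1, :]   # 2nd row (numpy indices start at 0)
col3 = A[:, 2]   # 3rd column
entry = A[1, 2]  # entry in the 2nd row and 3rd column
```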
We now consider the question of arithmetic involving matrices. First of all, let us study the problem
of addition. A reasonable theory can be derived from the following definition.
Definition. Suppose that the two matrices

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \qquad\text{and}\qquad B = \begin{pmatrix} b_{11} & \dots & b_{1n} \\ \vdots & & \vdots \\ b_{m1} & \dots & b_{mn} \end{pmatrix}$$

are both of size m × n. Then their sum is the m × n matrix

$$A + B = \begin{pmatrix} a_{11}+b_{11} & \dots & a_{1n}+b_{1n} \\ \vdots & & \vdots \\ a_{m1}+b_{m1} & \dots & a_{mn}+b_{mn} \end{pmatrix},$$

obtained by adding corresponding entries.
Example 2.1.2. Consider the matrices

$$A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \qquad\text{and}\qquad B = \begin{pmatrix} 1 & 2 & -2 & 7 \\ 0 & 2 & 4 & -1 \\ 2 & 1 & 3 & 3 \end{pmatrix}.$$

Then

$$A + B = \begin{pmatrix} 2+1 & 4+2 & 3-2 & -1+7 \\ 3+0 & 1+2 & 5+4 & 2-1 \\ -1+2 & 0+1 & 7+3 & 6+3 \end{pmatrix} = \begin{pmatrix} 3 & 6 & 1 & 6 \\ 3 & 3 & 9 & 1 \\ 1 & 1 & 10 & 9 \end{pmatrix}.$$
Example 2.1.3. The matrices

$$\begin{pmatrix} 2 & 4 & 3 & -1 \\ -1 & 0 & 7 & 6 \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 2 & 4 & 3 \\ 3 & 1 & 5 \\ -1 & 0 & 7 \end{pmatrix}$$

are not of the same size, so we do not have a definition for their sum.
Definition. Suppose that

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix}$$

is an m × n matrix and that c ∈ R. Then the scalar multiple cA is the m × n matrix

$$cA = \begin{pmatrix} ca_{11} & \dots & ca_{1n} \\ \vdots & & \vdots \\ ca_{m1} & \dots & ca_{mn} \end{pmatrix},$$

obtained by multiplying every entry of A by c.

Example 2.1.4. Consider the matrix

$$A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix}.$$

Then

$$2A = \begin{pmatrix} 4 & 8 & 6 & -2 \\ 6 & 2 & 10 & 4 \\ -2 & 0 & 14 & 12 \end{pmatrix}.$$
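Matrix addition and scalar multiplication, both entrywise operations, can be checked with the matrices of Examples 2.1.2 and 2.1.4. The sketch below uses Python with numpy; it is an illustration only, not part of the text.

```python
import numpy as np

A = np.array([[ 2, 4, 3, -1],
              [ 3, 1, 5,  2],
              [-1, 0, 7,  6]])
B = np.array([[1, 2, -2,  7],
              [0, 2,  4, -1],
              [2, 1,  3,  3]])

S = A + B   # matrix addition is entry by entry
T = 2 * A   # scalar multiplication multiplies every entry
```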
Matrix multiplication will enable us to write a system of linear equations

$$\begin{aligned} a_{11}x_1 + \dots + a_{1n}x_n &= b_1, \\ &\;\;\vdots \\ a_{m1}x_1 + \dots + a_{mn}x_n &= b_m, \end{aligned} \qquad (2)$$

in the form Ax = b, where

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \qquad\text{and}\qquad b = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix} \qquad (3)$$

are given, and

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad (4)$$

is to be determined.
Written out in full, the equation Ax = b reads

$$\begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}.$$
Definition. Suppose that

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \qquad\text{and}\qquad B = \begin{pmatrix} b_{11} & \dots & b_{1p} \\ \vdots & & \vdots \\ b_{n1} & \dots & b_{np} \end{pmatrix}$$

are respectively an m × n matrix and an n × p matrix. Then the matrix product AB is given by the m × p matrix

$$AB = \begin{pmatrix} q_{11} & \dots & q_{1p} \\ \vdots & & \vdots \\ q_{m1} & \dots & q_{mp} \end{pmatrix},$$

where for every i = 1, . . . , m and j = 1, . . . , p, we have

$$q_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.$$
Remark. Note first of all that the number of columns of the first matrix must be equal to the number
of rows of the second matrix. On the other hand, for a simple way to work out qij , the entry in the i-th
row and j-th column of AB, we observe that the i-th row of A and the j-th column of B are respectively
$$( a_{i1} \;\; \dots \;\; a_{in} ) \qquad\text{and}\qquad \begin{pmatrix} b_{1j} \\ \vdots \\ b_{nj} \end{pmatrix}.$$

We now multiply the corresponding entries, ai1 with b1j, and so on, until ain with bnj, and then add these n products to obtain qij.
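The rule just described can be written out as a triple loop and compared against the matrices of Example 2.1.5 below. The sketch uses Python with numpy; it is an illustration, not part of the text.

```python
import numpy as np

def matmul_naive(A, B):
    """Multiply an m x n matrix by an n x p matrix entry by entry,
    following q_ij = sum over k of a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must equal rows of B"
    Q = np.zeros((m, p), dtype=A.dtype)
    for i in range(m):
        for j in range(p):
            Q[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return Q

A = np.array([[ 2, 4, 3, -1],
              [ 3, 1, 5,  2],
              [-1, 0, 7,  6]])
B = np.array([[1,  4],
              [2,  3],
              [0, -2],
              [3,  1]])
Q = matmul_naive(A, B)
```

The built-in operator `A @ B` computes the same product.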
Example 2.1.5. Consider the matrices

$$A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \qquad\text{and}\qquad B = \begin{pmatrix} 1 & 4 \\ 2 & 3 \\ 0 & -2 \\ 3 & 1 \end{pmatrix}.$$

Note that A is a 3 × 4 matrix and B is a 4 × 2 matrix, so that the product AB is a 3 × 2 matrix. Let us calculate the product

$$AB = \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \\ q_{31} & q_{32} \end{pmatrix}.$$
Consider first of all q11. To calculate this, we need the 1-st row of A and the 1-st column of B, so let us cover up all unnecessary information, so that

$$\begin{pmatrix} 2 & 4 & 3 & -1 \\ \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast \end{pmatrix} \begin{pmatrix} 1 & \ast \\ 2 & \ast \\ 0 & \ast \\ 3 & \ast \end{pmatrix} = \begin{pmatrix} q_{11} & \ast \\ \ast & \ast \\ \ast & \ast \end{pmatrix}.$$

From the definition, we have q11 = 2·1 + 4·2 + 3·0 + (−1)·3 = 2 + 8 + 0 − 3 = 7. Covering up the relevant rows and columns in the same way, we obtain

q12 = 2·4 + 4·3 + 3·(−2) + (−1)·1 = 13,
q21 = 3·1 + 1·2 + 5·0 + 2·3 = 11,
q22 = 3·4 + 1·3 + 5·(−2) + 2·1 = 7,
q31 = (−1)·1 + 0·2 + 7·0 + 6·3 = 17.
Consider finally q32. To calculate this, we need the 3-rd row of A and the 2-nd column of B, so let us cover up all unnecessary information, so that

$$\begin{pmatrix} \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast \\ -1 & 0 & 7 & 6 \end{pmatrix} \begin{pmatrix} \ast & 4 \\ \ast & 3 \\ \ast & -2 \\ \ast & 1 \end{pmatrix} = \begin{pmatrix} \ast & \ast \\ \ast & \ast \\ \ast & q_{32} \end{pmatrix},$$

so that q32 = (−1)·4 + 0·3 + 7·(−2) + 6·1 = −12. We therefore conclude that

$$AB = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \begin{pmatrix} 1 & 4 \\ 2 & 3 \\ 0 & -2 \\ 3 & 1 \end{pmatrix} = \begin{pmatrix} 7 & 13 \\ 11 & 7 \\ 17 & -12 \end{pmatrix}.$$
Example 2.1.6. Consider again the matrices

$$A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \qquad\text{and}\qquad B = \begin{pmatrix} 1 & 4 \\ 2 & 3 \\ 0 & -2 \\ 3 & 1 \end{pmatrix}.$$

Note that B is a 4 × 2 matrix and A is a 3 × 4 matrix, so that we do not have a definition for the product BA.
We leave the proofs of the following results as exercises for the interested reader.
PROPOSITION 2C. (ASSOCIATIVE LAW) Suppose that A is an m × n matrix, B is an n × p matrix and C is a p × r matrix. Then A(BC) = (AB)C.

PROPOSITION 2D. (DISTRIBUTIVE LAWS)
(a) Suppose that A is an m × n matrix and B and C are n × p matrices. Then A(B + C) = AB + AC.
(b) Suppose that A and B are m × n matrices and C is an n × p matrix. Then (A + B)C = AC + BC.

PROPOSITION 2E. Suppose that A is an m × n matrix, B is an n × p matrix, and that c ∈ R. Then c(AB) = (cA)B = A(cB).
PROPOSITION 2F. Every system of linear equations of the form (2) has either no solution, exactly one solution, or infinitely many solutions.

Proof. Clearly the system (2) has either no solution, exactly one solution, or more than one solution. It remains to show that if the system (2) has two distinct solutions, then it must have infinitely many solutions. Suppose that x = u and x = v represent two distinct solutions. Then

Au = b and Av = b,

so that

A(u − v) = Au − Av = b − b = 0,

where 0 is the zero m × 1 matrix. It now follows that for every c ∈ R, we have

A(u + c(u − v)) = Au + A(c(u − v)) = Au + c(A(u − v)) = b + c0 = b,

so that x = u + c(u − v) is a solution for every c ∈ R. Clearly we have infinitely many solutions.
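The conclusion of the proof can be exercised numerically. The sketch below uses Python with numpy and a hypothetical 2 × 2 system (not taken from the text) that has more than one solution.

```python
import numpy as np

# A hypothetical system Ax = b with more than one solution
A = np.array([[1, 1],
              [2, 2]])
b = np.array([2, 4])
u = np.array([1, 1])   # one solution: A @ u == b
v = np.array([2, 0])   # a second, distinct solution: A @ v == b

# Every x = u + c(u - v) is then also a solution, for any c
for c in (-3.0, 0.5, 10.0):
    x = u + c * (u - v)
    assert np.allclose(A @ x, b)
```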
Definition. The n × n matrix

$$I_n = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}, \qquad\text{where}\qquad a_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$$

is called the identity matrix of order n.

Remark. Note that

$$I_1 = (\,1\,) \qquad\text{and}\qquad I_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
The following result is relatively easy to check. It shows that the identity matrix In acts as the identity for multiplication of n × n matrices.

PROPOSITION 2G. For every n × n matrix A, we have AIn = In A = A.

This raises the following question: Given an n × n matrix A, is it possible to find another n × n matrix B such that AB = BA = In ?

We shall postpone the full answer to this question until the next chapter. In Section 2.5, however, we shall be content with finding such a matrix B if it exists. In Section 2.6, we shall relate the existence of such a matrix B to some properties of the matrix A.
Definition. An n × n matrix

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix},$$

where aij = 0 whenever i ≠ j, is called a diagonal matrix of order n.

Example. The 3 × 3 matrices

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

are both diagonal.

Given an n × n matrix A, it can be rather tedious to calculate a power Ak directly from the definition of matrix multiplication. However, the calculation is rather simple when A is a diagonal matrix, as we shall see in the following example.
Example. Consider the 3 × 3 matrix

$$A = \begin{pmatrix} 17 & -10 & -5 \\ 45 & -28 & -15 \\ -30 & 20 & 12 \end{pmatrix},$$

and suppose that we wish to calculate A^98. It can be checked that if we write

$$P = \begin{pmatrix} 1 & 1 & 2 \\ 3 & 0 & 3 \\ -2 & 3 & 0 \end{pmatrix}, \qquad\text{then}\qquad P^{-1} = \begin{pmatrix} -3 & 2 & 1 \\ -2 & 4/3 & 1 \\ 3 & -5/3 & -1 \end{pmatrix}.$$

Furthermore, if we write

$$D = \begin{pmatrix} -3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix},$$

then it can be checked that A = PDP^{-1}, so that

$$A^{98} = \underbrace{(PDP^{-1}) \dots (PDP^{-1})}_{98} = P D^{98} P^{-1} = P \begin{pmatrix} 3^{98} & 0 & 0 \\ 0 & 2^{98} & 0 \\ 0 & 0 & 2^{98} \end{pmatrix} P^{-1}.$$

This is much simpler than calculating A^98 directly. Note that this example is only an illustration. We have not discussed here how the matrices P and D are found.
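The identity A = PDP^{-1}, and hence A^k = PD^kP^{-1}, can be verified numerically for a small power. The sketch below uses Python with numpy; it is an illustration only.

```python
import numpy as np

A = np.array([[ 17, -10,  -5],
              [ 45, -28, -15],
              [-30,  20,  12]], dtype=float)
P = np.array([[ 1, 1, 2],
              [ 3, 0, 3],
              [-2, 3, 0]], dtype=float)
D = np.diag([-3.0, 2.0, 2.0])
P_inv = np.linalg.inv(P)

# A = P D P^{-1}, hence A^k = P D^k P^{-1}
assert np.allclose(P @ D @ P_inv, A)

A5_direct = np.linalg.matrix_power(A, 5)
A5_diag = P @ np.diag([(-3.0) ** 5, 2.0 ** 5, 2.0 ** 5]) @ P_inv
```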
Example 2.5.1. Consider the matrices

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \qquad\text{and}\qquad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Let us interchange rows 1 and 2 of A and do likewise for I3. We obtain respectively

$$\begin{pmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Note that

$$\begin{pmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$

Let us interchange rows 2 and 3 of A and do likewise for I3. We obtain respectively

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$

Note that

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Let us add 3 times row 1 to row 2 of A and do likewise for I3 . We obtain respectively
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 3a_{11}+a_{21} & 3a_{12}+a_{22} & 3a_{13}+a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Note that

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 3a_{11}+a_{21} & 3a_{12}+a_{22} & 3a_{13}+a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Let us add 2 times row 3 to row 1 of A and do likewise for I3 . We obtain respectively
$$\begin{pmatrix} 2a_{31}+a_{11} & 2a_{32}+a_{12} & 2a_{33}+a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Note that

$$\begin{pmatrix} 2a_{31}+a_{11} & 2a_{32}+a_{12} & 2a_{33}+a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$

Let us multiply row 2 of A by 5 and do likewise for I3. We obtain respectively

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 5a_{21} & 5a_{22} & 5a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Note that

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 5a_{21} & 5a_{22} & 5a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
Let us multiply row 3 of A by −1 and do likewise for I3. We obtain respectively

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ -a_{31} & -a_{32} & -a_{33} \end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.$$

Note that

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ -a_{31} & -a_{32} & -a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
$$(A|I_n) \to (E_1 A|E_1 I_n) \to (E_2 E_1 A|E_2 E_1 I_n) \to \dots$$

In other words, we consider an array with the matrix A on the left and the matrix In on the right. We now perform elementary row operations on the array and try to reduce the left hand half to the matrix In. If we succeed in doing so, then the right hand half of the array gives the inverse A^{-1}.
Example 2.5.2. Consider the matrix

$$A = \begin{pmatrix} 1 & 1 & 2 \\ 3 & 0 & 3 \\ -2 & 3 & 0 \end{pmatrix}.$$

Consider the array

$$(A|I_3) = \left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 3 & 0 & 3 & 0 & 1 & 0 \\ -2 & 3 & 0 & 0 & 0 & 1 \end{array}\right).$$
We now perform elementary row operations on this array and try to reduce the left hand half to the matrix I3. Note that if we succeed, then the final array is clearly in reduced row echelon form. We therefore follow the same procedure as reducing an array to reduced row echelon form. Adding −3 times row 1 to row 2, we obtain

$$\left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ -2 & 3 & 0 & 0 & 0 & 1 \end{array}\right).$$

Adding 2 times row 1 to row 3, we obtain

$$\left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 5 & 4 & 2 & 0 & 1 \end{array}\right).$$

Multiplying row 3 by 3, we obtain

$$\left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 15 & 12 & 6 & 0 & 3 \end{array}\right).$$

Adding 5 times row 2 to row 3, we obtain

$$\left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right).$$

Multiplying row 1 by 3, we obtain

$$\left(\begin{array}{ccc|ccc} 3 & 3 & 6 & 3 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right).$$

Adding 2 times row 3 to row 1, we obtain

$$\left(\begin{array}{ccc|ccc} 3 & 3 & 0 & -15 & 10 & 6 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right).$$

Adding −1 times row 3 to row 2, we obtain

$$\left(\begin{array}{ccc|ccc} 3 & 3 & 0 & -15 & 10 & 6 \\ 0 & -3 & 0 & 6 & -4 & -3 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right).$$

Adding 1 times row 2 to row 1, we obtain

$$\left(\begin{array}{ccc|ccc} 3 & 0 & 0 & -9 & 6 & 3 \\ 0 & -3 & 0 & 6 & -4 & -3 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right).$$

Multiplying row 1 by 1/3, multiplying row 2 by −1/3 and multiplying row 3 by −1/3, we obtain

$$\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -3 & 2 & 1 \\ 0 & 1 & 0 & -2 & 4/3 & 1 \\ 0 & 0 & 1 & 3 & -5/3 & -1 \end{array}\right).$$
Note now that the array is in reduced row echelon form, and that the left hand half is the identity matrix I3. It follows that the right hand half of the array represents the inverse A^{-1}. Hence

$$A^{-1} = \begin{pmatrix} -3 & 2 & 1 \\ -2 & 4/3 & 1 \\ 3 & -5/3 & -1 \end{pmatrix}.$$
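The inverse just computed by row reduction can be checked against a library routine. The sketch below uses Python with numpy; it is an illustration only.

```python
import numpy as np

A = np.array([[ 1, 1, 2],
              [ 3, 0, 3],
              [-2, 3, 0]], dtype=float)
A_inv = np.linalg.inv(A)

# The inverse obtained by row reduction in Example 2.5.2
expected = np.array([[-3.0,   2.0,  1.0],
                     [-2.0, 4 / 3,  1.0],
                     [ 3.0, -5 / 3, -1.0]])
```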
Example 2.5.3. Consider the matrix

$$A = \begin{pmatrix} 1 & 1 & 2 & 3 \\ 2 & 2 & 4 & 5 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix}.$$

Consider the array

$$(A|I_4) = \left(\begin{array}{cccc|cccc} 1 & 1 & 2 & 3 & 1 & 0 & 0 & 0 \\ 2 & 2 & 4 & 5 & 0 & 1 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \end{array}\right).$$
We now perform elementary row operations on this array and try to reduce the left hand half to the matrix I4. Adding −2 times row 1 to row 2, we obtain

$$\left(\begin{array}{cccc|cccc} 1 & 1 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 & -2 & 1 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \end{array}\right).$$

Interchanging rows 2 and 3, we obtain

$$\left(\begin{array}{cccc|cccc} 1 & 1 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \end{array}\right).$$
At this point, we observe that it is impossible to reduce the left hand half of the array to I4. For those who remain unconvinced, let us continue. Adding 3 times row 3 to row 1, we obtain

$$\left(\begin{array}{cccc|cccc} 1 & 1 & 2 & 0 & -5 & 3 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \end{array}\right).$$

Adding 2 times row 3 to row 4, we obtain

$$\left(\begin{array}{cccc|cccc} 1 & 1 & 2 & 0 & -5 & 3 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 2 & 0 & 1 \end{array}\right).$$

Multiplying row 1 by 6 (here we want to avoid fractions in the next step), we obtain

$$\left(\begin{array}{cccc|cccc} 6 & 6 & 12 & 0 & -30 & 18 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 2 & 0 & 1 \end{array}\right).$$

Adding −2 times row 2 to row 1, we obtain

$$\left(\begin{array}{cccc|cccc} 6 & 0 & 12 & 0 & -30 & 18 & -2 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & -2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 2 & 0 & 1 \end{array}\right).$$

Multiplying row 1 by 1/6, multiplying row 2 by 1/3, multiplying row 3 by −1 and multiplying row 4 by 1/2, we obtain

$$\left(\begin{array}{cccc|cccc} 1 & 0 & 2 & 0 & -5 & 3 & -1/3 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1/3 & 0 \\ 0 & 0 & 0 & 1 & 2 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -2 & 1 & 0 & 1/2 \end{array}\right).$$
Note now that the array is in reduced row echelon form, and that the left hand half is not the identity
matrix I4 . Our technique has failed. In fact, the matrix A is not invertible.
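The failure can also be seen from the rank of the matrix: its third column is twice its first, so the rank is less than 4 and no inverse exists. The sketch below checks this with Python and numpy, using the matrix as reconstructed in the example above; it is an illustration only.

```python
import numpy as np

A = np.array([[1, 1, 2, 3],
              [2, 2, 4, 5],
              [0, 3, 0, 0],
              [0, 0, 0, 2]], dtype=float)

rank = np.linalg.matrix_rank(A)  # 3 < 4: column 3 is twice column 1
```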
These elementary row operations can clearly be reversed by elementary row operations. For (1), we interchange the two rows again. For (2), if we have originally added c times row i to row j, then we can reverse this by adding −c times row i to row j. For (3), if we have multiplied any row by a non-zero constant c, we can reverse this by multiplying the same row by the constant 1/c. Note now that each elementary matrix is obtained from In by an elementary row operation. The inverse of this elementary matrix is clearly the elementary matrix obtained from In by the elementary row operation that reverses the original elementary row operation.

Suppose that an n × n matrix B can be obtained from an n × n matrix A by a finite sequence of elementary row operations. Then since these elementary row operations can be reversed, the matrix A can be obtained from the matrix B by a finite sequence of elementary row operations.

Definition. An n × n matrix A is said to be row equivalent to an n × n matrix B if there exist a finite number of elementary n × n matrices E1 , . . . , Ek such that B = Ek . . . E1 A.

Remark. Note that B = Ek . . . E1 A implies that A = E1^{-1} . . . Ek^{-1} B. It follows that if A is row equivalent to B, then B is row equivalent to A. We usually say that A and B are row equivalent.
The following result gives conditions equivalent to the invertibility of an n n matrix A.
PROPOSITION 2N. Suppose that

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix},$$

and that

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad\text{and}\qquad 0 = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}$$

are n × 1 matrices, where x1 , . . . , xn are variables. Then the following statements are equivalent:
(a) The matrix A is invertible.
(b) The system Ax = 0 of linear equations has only the trivial solution x1 = . . . = xn = 0.
(c) The matrices A and In are row equivalent.

Proof. (b) Suppose that the system Ax = 0 has only the trivial solution. Then the array

$$\left(\begin{array}{ccc|c} a_{11} & \dots & a_{1n} & 0 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \dots & a_{nn} & 0 \end{array}\right)$$

can be reduced by elementary row operations to the reduced row echelon form

$$\left(\begin{array}{ccc|c} 1 & \dots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \dots & 1 & 0 \end{array}\right).$$
Hence the matrices A and In are row equivalent.
(c) Suppose that the matrices A and In are row equivalent. Then there exist elementary nn matrices
E1 , . . . , Ek such that In = Ek . . . E1 A. By Proposition 2M, the matrices E1 , . . . , Ek are all invertible, so
that
A = E1^{-1} . . . Ek^{-1} In = E1^{-1} . . . Ek^{-1}
is a product of invertible matrices, and is therefore itself invertible.
Suppose that

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix},$$

and that

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad\text{and}\qquad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$

are n × 1 matrices, where x1 , . . . , xn are variables and b1 , . . . , bn ∈ R are arbitrary. Since A is invertible, let us consider x = A^{-1}b. Clearly

Ax = A(A^{-1}b) = (AA^{-1})b = In b = b,

so that x = A^{-1}b is a solution of the system. On the other hand, let x0 be any solution of the system. Then Ax0 = b, so that

x0 = In x0 = (A^{-1}A)x0 = A^{-1}(Ax0) = A^{-1}b.

It follows that the system has a unique solution. We have proved the following important result.
PROPOSITION 2P. Suppose that

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix},$$

and that

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad\text{and}\qquad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$

are n × 1 matrices, where x1 , . . . , xn are variables and b1 , . . . , bn ∈ R are arbitrary. Suppose further that the matrix A is invertible. Then the system Ax = b of linear equations has the unique solution x = A^{-1}b.
PROPOSITION 2Q. Suppose that

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix},$$

and that

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \qquad\text{and}\qquad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$$

are n × 1 matrices, where x1 , . . . , xn are variables. Suppose further that for every b1 , . . . , bn ∈ R, the system Ax = b of linear equations is soluble. Then the matrix A is invertible.
Proof. Suppose that

$$b_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad \dots, \qquad b_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.$$

In other words, for every j = 1, . . . , n, bj is an n × 1 matrix with entry 1 in row j and entry 0 elsewhere. Now let

$$x_1 = \begin{pmatrix} x_{11} \\ \vdots \\ x_{n1} \end{pmatrix}, \qquad \dots, \qquad x_n = \begin{pmatrix} x_{1n} \\ \vdots \\ x_{nn} \end{pmatrix}$$

denote respectively solutions of the systems of linear equations

Ax = b1 , . . . , Ax = bn .

It is easy to check that

A ( x1 . . . xn ) = ( b1 . . . bn ) ;

in other words,

$$A \begin{pmatrix} x_{11} & \dots & x_{1n} \\ \vdots & & \vdots \\ x_{n1} & \dots & x_{nn} \end{pmatrix} = I_n,$$

so that A is invertible.
We can now summarize Propositions 2N, 2P and 2Q as follows.
PROPOSITION 2R. In the notation of Proposition 2N, the following four statements are equivalent:
(a) The matrix A is invertible.
(b) The system Ax = 0 of linear equations has only the trivial solution.
(c) The matrices A and In are row equivalent.
(d) The system Ax = b of linear equations is soluble for every n × 1 matrix b.
2.8. Application to Economics

An economy consists of n interdependent sectors. For every i = 1, . . . , n, let xi denote the monetary value of the total output of sector i over a fixed period, and let di denote the monetary value of the output of sector i needed to satisfy outside demand over the same period. Collecting these together, we obtain the vectors

$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \in \mathbb{R}^n \qquad\text{and}\qquad d = \begin{pmatrix} d_1 \\ \vdots \\ d_n \end{pmatrix} \in \mathbb{R}^n,$$
known respectively as the production vector and demand vector of the economy.
On the other hand, each of the n sectors requires material from some or all of the sectors to produce
its output. For i, j = 1, . . . , n, let cij denote the monetary value of the output of sector i needed by
sector j to produce one unit of monetary value of output. For every j = 1, . . . , n, the vector
$$c_j = \begin{pmatrix} c_{1j} \\ \vdots \\ c_{nj} \end{pmatrix} \in \mathbb{R}^n$$
is known as the unit consumption vector of sector j. Note that the column sum
c1j + . . . + cnj < 1    (5)
in order to ensure that sector j does not make a loss. Collecting together the unit consumption vectors,
we obtain the matrix

$$C = ( c_1 \;\; \dots \;\; c_n ) = \begin{pmatrix} c_{11} & \dots & c_{1n} \\ \vdots & & \vdots \\ c_{n1} & \dots & c_{nn} \end{pmatrix},$$
known as the consumption matrix of the economy.
Consider the matrix product

$$Cx = \begin{pmatrix} c_{11}x_1 + \dots + c_{1n}x_n \\ \vdots \\ c_{n1}x_1 + \dots + c_{nn}x_n \end{pmatrix}.$$
For every i = 1, . . . , n, the entry ci1 x1 + . . . + cin xn represents the monetary value of the output of sector
i needed by all the sectors to produce their output. This leads to the production equation
x = Cx + d.
(6)
Here Cx represents the part of the total output that is required by the various sectors of the economy
to produce the output in the first place, and d represents the part of the total output that is available
to satisfy outside demand.
Clearly (I − C)x = d. If the matrix I − C is invertible, then

x = (I − C)^{-1} d
represents the perfect production level. We state without proof the following fundamental result.
PROPOSITION 2S. Suppose that the entries of the consumption matrix C and the demand vector d are non-negative. Suppose further that the inequality (5) holds for each column of C. Then the inverse matrix (I − C)^{-1} exists, and the production vector x = (I − C)^{-1}d has non-negative entries and is the unique solution of the production equation (6).
Let us indulge in some heuristics. Initially, we have demand d. To produce d, we need Cd as input. To produce this extra Cd, we need C(Cd) = C^2 d as input. To produce this extra C^2 d, we need C(C^2 d) = C^3 d as input. And so on. Hence we need to produce

d + Cd + C^2 d + C^3 d + . . . = (I + C + C^2 + C^3 + . . .)d

in total. Now it is not difficult to check that for every positive integer k, we have

(I − C)(I + C + C^2 + C^3 + . . . + C^k) = I − C^{k+1}.

If the entries of C^{k+1} are all very small, then

(I − C)(I + C + C^2 + C^3 + . . . + C^k) ≈ I,

so that

(I − C)^{-1} ≈ I + C + C^2 + C^3 + . . . + C^k.

This gives a practical way of approximating (I − C)^{-1}, and also suggests that

(I − C)^{-1} = I + C + C^2 + C^3 + . . . .
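The partial sums I + C + C^2 + . . . + C^k do indeed approach (I − C)^{-1} when the column sums of C are less than 1. The sketch below checks this with Python and numpy for the consumption matrix of Example 2.8.1; it is an illustration only.

```python
import numpy as np

C = np.array([[0.3, 0.2, 0.1],
              [0.4, 0.5, 0.2],
              [0.1, 0.1, 0.3]])
I = np.eye(3)

# Accumulate the partial sum I + C + C^2 + ... + C^100
approx = I.copy()
power = I.copy()
for _ in range(100):
    power = power @ C
    approx += power

exact = np.linalg.inv(I - C)
```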
Example 2.8.1. An economy consists of three sectors. Their dependence on each other is summarized in the table below:

                                                    To produce one unit of monetary
                                                      value of output in sector
                                                         1       2       3
  monetary value of output required from sector 1       0.3     0.2     0.1
  monetary value of output required from sector 2       0.4     0.5     0.2
  monetary value of output required from sector 3       0.1     0.1     0.3

Suppose that the final demand from sectors 1, 2 and 3 are respectively 30, 50 and 20. Then the production vector and demand vector are respectively

$$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \qquad\text{and}\qquad d = \begin{pmatrix} d_1 \\ d_2 \\ d_3 \end{pmatrix} = \begin{pmatrix} 30 \\ 50 \\ 20 \end{pmatrix},$$

while the consumption matrix is given by

$$C = \begin{pmatrix} 0.3 & 0.2 & 0.1 \\ 0.4 & 0.5 & 0.2 \\ 0.1 & 0.1 & 0.3 \end{pmatrix}, \qquad\text{so that}\qquad I - C = \begin{pmatrix} 0.7 & -0.2 & -0.1 \\ -0.4 & 0.5 & -0.2 \\ -0.1 & -0.1 & 0.7 \end{pmatrix}.$$

The production equation (I − C)x = d has augmented matrix

$$\left(\begin{array}{ccc|c} 0.7 & -0.2 & -0.1 & 30 \\ -0.4 & 0.5 & -0.2 & 50 \\ -0.1 & -0.1 & 0.7 & 20 \end{array}\right), \qquad\text{equivalent to}\qquad \left(\begin{array}{ccc|c} 7 & -2 & -1 & 300 \\ -4 & 5 & -2 & 500 \\ -1 & -1 & 7 & 200 \end{array}\right),$$

which can be converted to reduced row echelon form

$$\left(\begin{array}{ccc|c} 1 & 0 & 0 & 3200/27 \\ 0 & 1 & 0 & 6100/27 \\ 0 & 0 & 1 & 700/9 \end{array}\right).$$

Hence the production level needed is given by x1 = 3200/27, x2 = 6100/27 and x3 = 700/9.
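The production vector of Example 2.8.1 can be confirmed by solving (I − C)x = d directly. The sketch below uses Python with numpy; it is an illustration only.

```python
import numpy as np

C = np.array([[0.3, 0.2, 0.1],
              [0.4, 0.5, 0.2],
              [0.1, 0.1, 0.3]])
d = np.array([30.0, 50.0, 20.0])

# Solve the production equation (I - C) x = d
x = np.linalg.solve(np.eye(3) - C, d)
```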
2.9. Matrix Transformations on the Plane

A matrix transformation on the plane is a function T : R² → R² of the form T(x) = Ax, where A is a 2 × 2 matrix; writing T(x1 , x2) = (y1 , y2), we have

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$

Example 2.9.1. The matrix

$$A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ -x_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents reflection across the x1-axis, whereas the matrix

$$A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -x_1 \\ x_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents reflection across the x2-axis. On the other hand, the matrix

$$A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -x_1 \\ -x_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents reflection across the origin, whereas the matrix

$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ x_1 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents reflection across the line x1 = x2. We give a summary in the table below:

  Transformation               Equations                 Matrix
  Reflection across x1-axis    y1 = x1,  y2 = −x2        (1 0; 0 −1)
  Reflection across x2-axis    y1 = −x1, y2 = x2         (−1 0; 0 1)
  Reflection across origin     y1 = −x1, y2 = −x2        (−1 0; 0 −1)
  Reflection across x1 = x2    y1 = x2,  y2 = x1         (0 1; 1 0)
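The reflection matrices above can be applied to a concrete point. The sketch below uses Python with numpy; it is an illustration only, and the point (3, 5) is an arbitrary choice.

```python
import numpy as np

reflect_x1_axis = np.array([[1, 0], [0, -1]])   # across the x1-axis
reflect_line    = np.array([[0, 1], [1, 0]])    # across the line x1 = x2

p = np.array([3, 5])
image1 = reflect_x1_axis @ p
image2 = reflect_line @ p
```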
Example 2.9.2. Let k be a fixed positive real number. The matrix

$$A = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} kx_1 \\ kx_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents a dilation if k > 1 and a contraction if 0 < k < 1. On the other hand, the matrix

$$A = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} kx_1 \\ x_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents an expansion in the x1-direction if k > 1 and a compression in the x1-direction if 0 < k < 1, whereas the matrix

$$A = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ kx_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents an expansion in the x2-direction if k > 1 and a compression in the x2-direction if 0 < k < 1. We give a summary in the table below:

  Transformation                                              Equations                Matrix
  Dilation or contraction by factor k > 0                     y1 = kx1, y2 = kx2       (k 0; 0 k)
  Expansion or compression in x1-direction by factor k > 0    y1 = kx1, y2 = x2        (k 0; 0 1)
  Expansion or compression in x2-direction by factor k > 0    y1 = x1,  y2 = kx2       (1 0; 0 k)

Example 2.9.3. Let k be a fixed real number. The matrix

$$A = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 + kx_2 \\ x_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents a shear in the x1-direction. For the case k = 1, we have the following:

[figure: the shear T with k = 1]

For the case k = −1, we have the following:

[figure: the shear T with k = −1]
Similarly, the matrix

$$A = \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix} \qquad\text{satisfies}\qquad A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ kx_1 + x_2 \end{pmatrix}$$

for every (x1 , x2) ∈ R², and so represents a shear in the x2-direction. We give a summary in the table below:

  Transformation           Equations                    Matrix
  Shear in x1-direction    y1 = x1 + kx2, y2 = x2       (1 k; 0 1)
  Shear in x2-direction    y1 = x1, y2 = kx1 + x2       (1 0; k 1)
Example 2.9.4. For anticlockwise rotation by an angle θ, we have T(x1 , x2) = (y1 , y2), where

y1 + iy2 = (x1 + ix2)(cos θ + i sin θ),

and so

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$

We give a summary in the table below:

  Transformation                       Equations                                              Matrix
  Anticlockwise rotation by angle θ    y1 = x1 cos θ − x2 sin θ, y2 = x1 sin θ + x2 cos θ     (cos θ −sin θ; sin θ cos θ)
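The rotation matrix can be tried out for θ = π/2, which should carry the point (1, 0) to (0, 1). The sketch below uses Python with numpy; it is an illustration only.

```python
import numpy as np

theta = np.pi / 2  # anticlockwise rotation by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

image = R @ np.array([1.0, 0.0])  # rotate the point (1, 0)
```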
We conclude this section by establishing the following result which reinforces the linearity of matrix
transformations on the plane.
PROPOSITION 2T. Suppose that a matrix transformation T : R² → R² is given by an invertible
matrix A. Then
(a) the image under T of a straight line is a straight line;
(b) the image under T of a straight line through the origin is a straight line through the origin; and
(c) the images under T of parallel straight lines are parallel straight lines.
Proof. Suppose that T(x1 , x2) = (y1 , y2). Since A is invertible, we have x = A^{-1}y, where

$$x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \qquad\text{and}\qquad y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.$$
Suppose that the straight line in question has equation αx1 + βx2 = γ, where α, β, γ ∈ R, with α and β not both zero; in matrix notation,

$$( \alpha \;\; \beta ) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = ( \gamma ).$$

Hence

$$( \alpha \;\; \beta )\, A^{-1} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = ( \gamma ).$$

Let

$$( \alpha' \;\; \beta' ) = ( \alpha \;\; \beta )\, A^{-1}.$$

Then

$$( \alpha' \;\; \beta' ) \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = ( \gamma ).$$

In other words, the image under T of the straight line αx1 + βx2 = γ is α′y1 + β′y2 = γ, clearly another straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to γ = 0. To prove (c), note that parallel straight lines correspond to different values of γ for the same values of α and β.

2.10. Application to Computer Graphics

Example 2.10.1. Consider the letter M in the diagram below:

[figure: the letter M]
Its vertices are given by the twelve points

(0, 0), (1, 0), (1, 6), (4, 0), (7, 6), (7, 0), (8, 0), (8, 8), (7, 8), (4, 2), (1, 8), (0, 8).

Let us apply a matrix transformation to these vertices, using the matrix

$$A = \begin{pmatrix} 1 & 1/2 \\ 0 & 1 \end{pmatrix},$$

representing a shear in the x1-direction with factor 0.5, so that

$$A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 + \tfrac{1}{2}x_2 \\ x_2 \end{pmatrix} \qquad\text{for every } (x_1 , x_2) \in \mathbb{R}^2.$$
For example, the vertices (7, 6), (1, 8), (7, 0) and (0, 8) have images (10, 6), (5, 8), (7, 0) and (4, 8) respectively. Displaying the images of all twelve vertices as an array, with the coordinates of each image as a column, we obtain

$$\begin{pmatrix} 0 & 1 & 4 & 4 & 10 & 7 & 8 & 12 & 11 & 5 & 5 & 4 \\ 0 & 0 & 6 & 0 & 6 & 0 & 0 & 8 & 8 & 2 & 8 & 8 \end{pmatrix}.$$
In view of Proposition 2T, the image of any line segment that joins two vertices is a line segment that joins the images of the two vertices. Hence the image of the letter M under the shear looks like the following:

[figure: the sheared letter M]
Next, we may wish to translate this image. However, a translation by a vector h = (h1 , h2) ∈ R² is a transformation of the form

$$\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} \qquad\text{for every } (x_1 , x_2) \in \mathbb{R}^2,$$

and this cannot be described by a matrix transformation on the plane. To overcome this deficiency, we introduce homogeneous coordinates. For every point (x1 , x2) ∈ R², we identify it with the point (x1 , x2 , 1) ∈ R³. Now we wish to translate a point (x1 , x2) to (x1 , x2) + (h1 , h2) = (x1 + h1 , x2 + h2), so we attempt to find a 3 × 3 matrix A* such that

$$\begin{pmatrix} x_1 + h_1 \\ x_2 + h_2 \\ 1 \end{pmatrix} = A^* \begin{pmatrix} x_1 \\ x_2 \\ 1 \end{pmatrix} \qquad\text{for every } (x_1 , x_2) \in \mathbb{R}^2.$$

It is easy to check that

$$\begin{pmatrix} x_1 + h_1 \\ x_2 + h_2 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & h_1 \\ 0 & 1 & h_2 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ 1 \end{pmatrix} \qquad\text{for every } (x_1 , x_2) \in \mathbb{R}^2.$$

It follows that using homogeneous coordinates, translation by vector h = (h1 , h2) ∈ R² can be described by the matrix

$$A^* = \begin{pmatrix} 1 & 0 & h_1 \\ 0 & 1 & h_2 \\ 0 & 0 & 1 \end{pmatrix}.$$
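The translation matrix in homogeneous coordinates can be tried on one of the vertices of the letter M. The sketch below uses Python with numpy; it is an illustration only.

```python
import numpy as np

h1, h2 = 2.0, 3.0  # translation vector (h1, h2)
A_star = np.array([[1, 0, h1],
                   [0, 1, h2],
                   [0, 0, 1]])

p = np.array([7.0, 0.0, 1.0])  # the point (7, 0) in homogeneous coordinates
q = A_star @ p                 # translated point, still with last entry 1
```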
Consider now a matrix transformation on the plane given by a 2 × 2 matrix

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.$$

Under homogeneous coordinates, the image of the point (x1 , x2 , 1) is now (y1 , y2 , 1). Note that

$$\begin{pmatrix} y_1 \\ y_2 \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & 0 \\ a_{21} & a_{22} & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ 1 \end{pmatrix}.$$

It follows that homogeneous coordinates can also be used to study all the matrix transformations we have discussed in Section 2.9. By moving over to homogeneous coordinates, we simply replace the 2 × 2 matrix A by the 3 × 3 matrix

$$A^* = \begin{pmatrix} A & 0 \\ 0 & 1 \end{pmatrix}.$$
Example 2.10.2. Returning to Example 2.10.1 of the letter M, the vertices are represented, under homogeneous coordinates, put in an array in the form

$$\begin{pmatrix} 0 & 1 & 1 & 4 & 7 & 7 & 8 & 8 & 7 & 4 & 1 & 0 \\ 0 & 0 & 6 & 0 & 6 & 0 & 0 & 8 & 8 & 2 & 8 & 8 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}.$$

Then the 2 × 2 shear matrix

$$A = \begin{pmatrix} 1 & 1/2 \\ 0 & 1 \end{pmatrix} \qquad\text{is replaced by the } 3 \times 3 \text{ matrix}\qquad A^* = \begin{pmatrix} 1 & 1/2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Note that

$$A^* \begin{pmatrix} 0 & 1 & 1 & 4 & 7 & 7 & 8 & 8 & 7 & 4 & 1 & 0 \\ 0 & 0 & 6 & 0 & 6 & 0 & 0 & 8 & 8 & 2 & 8 & 8 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 4 & 4 & 10 & 7 & 8 & 12 & 11 & 5 & 5 & 4 \\ 0 & 0 & 6 & 0 & 6 & 0 & 0 & 8 & 8 & 2 & 8 & 8 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix}.$$

Next, let us consider a translation by the vector (2, 3). The matrix under homogeneous coordinates for this translation is given by

$$B = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{pmatrix}.$$
Note that

$$B A^* \begin{pmatrix} 0 & 1 & 1 & 4 & 7 & 7 & 8 & 8 & 7 & 4 & 1 & 0 \\ 0 & 0 & 6 & 0 & 6 & 0 & 0 & 8 & 8 & 2 & 8 & 8 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 6 & 6 & 12 & 9 & 10 & 14 & 13 & 7 & 7 & 6 \\ 3 & 3 & 9 & 3 & 9 & 3 & 3 & 11 & 11 & 5 & 11 & 11 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{pmatrix},$$

giving rise to coordinates in R², displayed as an array

$$\begin{pmatrix} 2 & 3 & 6 & 6 & 12 & 9 & 10 & 14 & 13 & 7 & 7 & 6 \\ 3 & 3 & 9 & 3 & 9 & 3 & 3 & 11 & 11 & 5 & 11 & 11 \end{pmatrix}.$$

Hence the image of the letter M under the shear followed by translation looks like the following:

[figure: the sheared and translated letter M]
Example 2.10.3. Under homogeneous coordinates, the transformation representing a reflection across the x1-axis, followed by a shear by factor 2 in the x1-direction, followed by anticlockwise rotation by 90°, and followed by translation by vector (2, 1), has matrix

$$\begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 \\ 1 & -2 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$
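The composite matrix of Example 2.10.3 can be recomputed by multiplying the four factors. The sketch below uses Python with numpy; note that the transformations compose right to left, reflection first and translation last.

```python
import numpy as np

translate = np.array([[1, 0, 2], [0, 1, 1], [0, 0, 1]])   # by vector (2, 1)
rotate90  = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])  # anticlockwise 90 degrees
shear2    = np.array([[1, 2, 0], [0, 1, 0], [0, 0, 1]])   # factor 2 in x1-direction
reflect   = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 1]])  # across the x1-axis

M = translate @ rotate90 @ shear2 @ reflect
```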
One way of solving the system Ax = b is to write down the augmented matrix
$$\left(\begin{array}{ccc|c} a_{11} & \dots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \dots & a_{nn} & b_n \end{array}\right), \qquad (7)$$
and then convert it to reduced row echelon form by elementary row operations.
The first step is to reduce it to row echelon form:
(I) First of all, we may need to interchange two rows in order to ensure that the top left entry in the
array is non-zero. This requires n + 1 operations.
(II) Next, we need to multiply the new first row by a constant in order to make the top left pivot
entry equal to 1. This requires n + 1 operations, and the array now looks like
$$\left(\begin{array}{cccc|c} 1 & a_{12} & \dots & a_{1n} & b_1 \\ a_{21} & a_{22} & \dots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} & b_n \end{array}\right).$$
Note that we are abusing notation somewhat, as the entry a12 here, for example, may well be different
from the entry a12 in the augmented matrix (7).
(III) For each row i = 2, . . . , n, we now multiply the first row by −ai1 and then add to row i. This requires 2(n − 1)(n + 1) operations, and the array now looks like

$$\left(\begin{array}{cccc|c} 1 & a_{12} & \dots & a_{1n} & b_1 \\ 0 & a_{22} & \dots & a_{2n} & b_2 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & a_{n2} & \dots & a_{nn} & b_n \end{array}\right). \qquad (8)$$
(IV) In summary, to proceed from the form (7) to the form (8), the number of operations required is
at most 2(n + 1) + 2(n 1)(n + 1) = 2n(n + 1).
(V) Our next task is to convert the smaller array
$$\left(\begin{array}{ccc|c} a_{22} & \dots & a_{2n} & b_2 \\ \vdots & & \vdots & \vdots \\ a_{n2} & \dots & a_{nn} & b_n \end{array}\right) \qquad\text{to the form}\qquad \left(\begin{array}{cccc|c} 1 & a_{23} & \dots & a_{2n} & b_2 \\ 0 & a_{33} & \dots & a_{3n} & b_3 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & a_{n3} & \dots & a_{nn} & b_n \end{array}\right).$$
These have one row and one column fewer than the arrays (7) and (8), and the number of operations
required is at most 2m(m + 1), where m = n 1. We continue in this way systematically to reach row
echelon form, and conclude that the number of operations required to convert the augmented matrix (7)
to row echelon form is at most
$$\sum_{m=1}^{n} 2m(m+1) \approx \tfrac{2}{3} n^3.$$
The next step is to convert the row echelon form to reduced row echelon form. This is simpler, as
many entries are now zero. It can be shown that the number of operations required is bounded by
something like 2n², indeed by something like n² if one analyzes the problem more carefully. In any case, these estimates are insignificant compared to the estimate (2/3)n³ earlier.

We therefore conclude that the number of operations required to solve the system Ax = b by reducing the augmented matrix to reduced row echelon form is bounded by something like (2/3)n³ when n is large.
Another way of solving the system Ax = b is to first find the inverse matrix A1 . This may involve
converting the array
a11
...
an1
...
...
a1n
..
.
ann
..
to reduced row echelon form by elementary row operations. It can be shown that the number of operations
required is something like 2n3 , so this is less efficient than our first method.
page 28 of 39
Linear Algebra
We call such elementary matrices unit lower triangular. If an m × n matrix A can be reduced in this way to quasi row echelon form U, then
U = Ek . . . E2 E1 A,
where the elementary matrices E1 , E2 , . . . , Ek are all unit lower triangular. Let L = (Ek . . . E2 E1)^{-1}.
Then A = LU . It can be shown that products and inverses of unit lower triangular matrices are also
unit lower triangular. Hence L is a unit lower triangular matrix as required.
If Ax = b and A = LU , then L(U x) = b. Writing y = U x, we have
Ly = b
and
U x = y.
It follows that the problem of solving the system Ax = b corresponds to first solving the system Ly = b
and then solving the system U x = y. Both of these systems are easy to solve since both L and U have
many zero entries. It remains to find L and U .
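The two triangular solves can be sketched in code. The following is a minimal illustration for the case where L and U are square and U is nonsingular; the function names are mine, not the text's:

```python
# Solve L y = b by forward substitution (L unit lower triangular),
# then U x = y by back substitution (U upper triangular, nonzero diagonal).

def forward_substitute(L, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        # L[i][i] = 1, so no division is needed
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    return y

def back_substitute(U, y):
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# Tiny example: A = LU with L = [[1,0],[2,1]] and U = [[2,1],[0,3]],
# so Ax = b with b = (3, 11) is solved in two stages.
y = forward_substitute([[1, 0], [2, 1]], [3, 11])   # y = [3, 5]
x = back_substitute([[2, 1], [0, 3]], y)            # x = [2/3, 5/3]
```

Each triangular solve touches each entry of the triangular matrix once, which is the source of the "many zero entries make both systems easy" observation above.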
If we reduce the matrix A to quasi row echelon form by only performing the elementary row operation
of adding a multiple of a row higher in the array to another row lower in the array, then U can be
taken as the quasi row echelon form resulting from this. It remains to find L. However, note that L = (Ek . . . E2 E1)^{-1}, where U = Ek . . . E2 E1 A, and so
I = Ek . . . E2 E1 L.
This means that the very elementary row operations that convert A to U will convert L to I. We
therefore wish to create a matrix L such that this is satisfied. It is simplest to illustrate the technique
by an example.
Example 2.12.1. Consider the matrix
\[
A = \begin{pmatrix} 2 & 1 & -2 & -2 & 3 \\ -4 & 1 & 6 & 5 & -8 \\ 2 & 10 & 4 & 8 & -5 \\ 2 & 13 & 6 & 16 & -5 \end{pmatrix}.
\]
The entry 2 in row 1 and column 1 is a pivot entry, and column 1 is a pivot column. Adding 2 times row 1 to row 2, adding -1 times row 1 to row 3, and adding -1 times row 1 to row 4, we obtain
\[
\begin{pmatrix} 2 & 1 & -2 & -2 & 3 \\ 0 & 3 & 2 & 1 & -2 \\ 0 & 9 & 6 & 10 & -8 \\ 0 & 12 & 8 & 18 & -8 \end{pmatrix}.
\]
Note that the same elementary row operations convert
\[
\begin{pmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}
\quad\text{to}\quad
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
Next, the entry 3 in row 2 and column 2 is a pivot entry, and column 2 is a pivot column. Adding -3 times row 2 to row 3, and adding -4 times row 2 to row 4, we obtain
\[
\begin{pmatrix} 2 & 1 & -2 & -2 & 3 \\ 0 & 3 & 2 & 1 & -2 \\ 0 & 0 & 0 & 7 & -2 \\ 0 & 0 & 0 & 14 & 0 \end{pmatrix}.
\]
Note that the same elementary row operations convert
\[
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 3 & 1 & 0 \\ 0 & 4 & 0 & 1 \end{pmatrix}
\quad\text{to}\quad
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
Next, the entry 7 in row 3 and column 4 is a pivot entry, and column 4 is a pivot column. Adding -2 times row 3 to row 4, we obtain the quasi row echelon form
\[
U = \begin{pmatrix} 2 & 1 & -2 & -2 & 3 \\ 0 & 3 & 2 & 1 & -2 \\ 0 & 0 & 0 & 7 & -2 \\ 0 & 0 & 0 & 0 & 4 \end{pmatrix},
\]
where the entry 4 in row 4 and column 5 is a pivot entry, and column 5 is a pivot column. Note that the same elementary row operation converts
\[
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 2 & 1 \end{pmatrix}
\quad\text{to}\quad
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
\]
We therefore take
\[
L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 1 & 3 & 1 & 0 \\ 1 & 4 & 2 & 1 \end{pmatrix}.
\]
Example 2.12.2. Consider again the matrix A of Example 2.12.1. The pivot columns, at the times of their first recognition, are respectively
\[
\begin{pmatrix} 2 \\ -4 \\ 2 \\ 2 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 3 \\ 9 \\ 12 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 0 \\ 7 \\ 14 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 4 \end{pmatrix}.
\]
Dividing them respectively by the pivot entries 2, 3, 7 and 4, we obtain respectively the columns
\[
\begin{pmatrix} 1 \\ -2 \\ 1 \\ 1 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 1 \\ 3 \\ 4 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 0 \\ 1 \\ 2 \end{pmatrix}, \quad
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.
\]
It follows that
\[
L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ 1 & 3 & 1 & 0 \\ 1 & 4 & 2 & 1 \end{pmatrix}.
\]
LU FACTORIZATION ALGORITHM.
(1) Reduce the matrix A to quasi row echelon form by only performing the elementary row operation of
adding a multiple of a row higher in the array to another row lower in the array. Let U be the quasi
row echelon form obtained.
(2) Record any new pivot column at the time of its first recognition, and modify it by replacing any entry
above the pivot entry by zero and dividing every other entry by the value of the pivot entry.
(3) Let L denote the square matrix obtained by letting the columns be the pivot columns as modified in
step (2).
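In code, the boxed algorithm might look as follows. This is a sketch under the assumption that no row interchanges are needed (the remarks at the end of this section discuss interchanges); the helper name `lu_factorize` and the small test matrix are illustrative choices, not taken from the text:

```python
# LU factorization without row interchanges: scan columns left to right,
# eliminate below each pivot, and collect the modified pivot columns in L.

def lu_factorize(A):
    m, n = len(A), len(A[0])
    U = [row[:] for row in A]            # working copy; becomes U
    L = [[0.0] * m for _ in range(m)]
    r = 0                                # row of the next pivot
    for c in range(n):
        if r == m:
            break
        if U[r][c] == 0:
            continue                     # no pivot here (assumes no swap is needed)
        L[r][r] = 1.0
        for i in range(r + 1, m):
            L[i][r] = U[i][c] / U[r][c]  # pivot-column entry divided by the pivot
            for j in range(c, n):
                U[i][j] -= L[i][r] * U[r][j]
        r += 1
    for i in range(r, m):
        L[i][i] = 1.0                    # fill the diagonal for any remaining rows
    return L, U

L, U = lu_factorize([[2, 1, 2], [4, 6, 5], [4, 6, 8]])
# Here L = [[1,0,0],[2,1,0],[2,1,1]], U = [[2,1,2],[0,4,1],[0,0,3]], and LU = A.
```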
Example 2.12.3. We wish to solve the system of linear equations Ax = b, where
\[
A = \begin{pmatrix} 3 & -1 & 2 & -4 & 1 \\ -3 & 3 & -5 & 5 & -2 \\ 6 & -4 & 11 & -10 & 6 \\ -6 & 8 & -21 & 13 & -9 \end{pmatrix}
\quad\text{and}\quad
b = \begin{pmatrix} 1 \\ -2 \\ 9 \\ -15 \end{pmatrix}.
\]
Let us first apply LU factorization to the matrix A. The first pivot column is column 1, with modified version
\[
\begin{pmatrix} 1 \\ -1 \\ 2 \\ -2 \end{pmatrix}.
\]
Adding row 1 to row 2, adding -2 times row 1 to row 3, and adding 2 times row 1 to row 4, we obtain
\[
\begin{pmatrix} 3 & -1 & 2 & -4 & 1 \\ 0 & 2 & -3 & 1 & -1 \\ 0 & -2 & 7 & -2 & 4 \\ 0 & 6 & -17 & 5 & -7 \end{pmatrix}.
\]
The second pivot column is column 2, with modified version
\[
\begin{pmatrix} 0 \\ 1 \\ -1 \\ 3 \end{pmatrix}.
\]
Adding row 2 to row 3, and adding -3 times row 2 to row 4, we obtain
\[
\begin{pmatrix} 3 & -1 & 2 & -4 & 1 \\ 0 & 2 & -3 & 1 & -1 \\ 0 & 0 & 4 & -1 & 3 \\ 0 & 0 & -8 & 2 & -4 \end{pmatrix}.
\]
The third pivot column is column 3, with modified version
\[
\begin{pmatrix} 0 \\ 0 \\ 1 \\ -2 \end{pmatrix}.
\]
Adding 2 times row 3 to row 4, we obtain the quasi row echelon form
\[
\begin{pmatrix} 3 & -1 & 2 & -4 & 1 \\ 0 & 2 & -3 & 1 & -1 \\ 0 & 0 & 4 & -1 & 3 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}.
\]
The fourth pivot column is column 5, with modified version
\[
\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.
\]
It follows that
\[
L = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 2 & -1 & 1 & 0 \\ -2 & 3 & -2 & 1 \end{pmatrix}
\quad\text{and}\quad
U = \begin{pmatrix} 3 & -1 & 2 & -4 & 1 \\ 0 & 2 & -3 & 1 & -1 \\ 0 & 0 & 4 & -1 & 3 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}.
\]
We now solve the system Ly = b, with augmented matrix
\[
\begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ -1 & 1 & 0 & 0 & -2 \\ 2 & -1 & 1 & 0 & 9 \\ -2 & 3 & -2 & 1 & -15 \end{pmatrix}.
\]
Using forward substitution, we obtain
\[
y = \begin{pmatrix} 1 \\ -1 \\ 6 \\ 2 \end{pmatrix}.
\]
Finally, we solve the system Ux = y, with augmented matrix
\[
\begin{pmatrix} 3 & -1 & 2 & -4 & 1 & 1 \\ 0 & 2 & -3 & 1 & -1 & -1 \\ 0 & 0 & 4 & -1 & 3 & 6 \\ 0 & 0 & 0 & 0 & 2 & 2 \end{pmatrix}.
\]
Here the free variable is x4. Let x4 = t. Using row 4, we obtain 2x5 = 2, so that x5 = 1. Using row 3, we obtain 4x3 = 6 + x4 - 3x5 = 3 + t, so that x3 = 3/4 + t/4. Using row 2, we obtain
\[
2x_2 = -1 + 3x_3 - x_4 + x_5 = \frac{9}{4} - \frac{1}{4}t,
\]
so that x2 = 9/8 - t/8. Using row 1, we obtain
\[
3x_1 = 1 + x_2 - 2x_3 + 4x_4 - x_5 = \frac{27}{8}t - \frac{3}{8},
\]
so that x1 = 9t/8 - 1/8. Hence
\[
x = (x_1, x_2, x_3, x_4, x_5) = \left( \frac{9t-1}{8},\; \frac{9-t}{8},\; \frac{3+t}{4},\; t,\; 1 \right),
\]
where t ∈ R.
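The parametric solution can be checked directly against the original system: for every value of t, the vector x should satisfy Ax = b. A quick verification in exact rational arithmetic, using the matrix and vector of Example 2.12.3:

```python
# For each t, x(t) = ((9t-1)/8, (9-t)/8, (3+t)/4, t, 1) should satisfy Ax = b.
from fractions import Fraction as F

A = [[3, -1, 2, -4, 1],
     [-3, 3, -5, 5, -2],
     [6, -4, 11, -10, 6],
     [-6, 8, -21, 13, -9]]
b = [1, -2, 9, -15]

def x(t):
    t = F(t)
    return [(9 * t - 1) / 8, (9 - t) / 8, (3 + t) / 4, t, F(1)]

for t in (-2, 0, 1, 7):
    # A x(t) computed row by row must reproduce b exactly
    assert [sum(a * v for a, v in zip(row, x(t))) for row in A] == b
```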
Remarks. (1) In practical situations, interchanging rows is usually necessary to convert a matrix A to
quasi row echelon form. The technique here can be modified to produce a matrix L which is not unit
lower triangular, but which can be made unit lower triangular by interchanging rows.
(2) Computing an LU factorization of an n × n matrix takes approximately (2/3)n^3 operations. Solving the systems Ly = b and U x = y requires approximately 2n^2 operations.
(3) LU factorization is particularly efficient when the matrix A has many zero entries, in which case
the matrices L and U may also have many zero entries.
\[
A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix}.
\]
The entries can be positive, negative or zero.
Suppose that for every i = 1, 2, 3, . . . , m, player R makes move i with probability pi, and that for every j = 1, 2, 3, . . . , n, player C makes move j with probability qj. Then
\[
p_1 + \dots + p_m = 1 \quad\text{and}\quad q_1 + \dots + q_n = 1.
\]
Assume that the players make moves independently of each other. Then for every i = 1, 2, 3, . . . , m and j = 1, 2, 3, . . . , n, the number pi qj represents the probability that player R makes move i and player C makes move j. Then the double sum
\[
E_A(p, q) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} p_i q_j
\]
represents the expected payoff of player R.
The matrices
\[
p = ( p_1 \ \dots \ p_m )
\quad\text{and}\quad
q = \begin{pmatrix} q_1 \\ \vdots \\ q_n \end{pmatrix}
\]
are known as the strategies of player R and player C respectively. Clearly the expected payoff
\[
E_A(p, q) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} p_i q_j
= ( p_1 \ \dots \ p_m ) \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \begin{pmatrix} q_1 \\ \vdots \\ q_n \end{pmatrix} = pAq.
\]
Here we have slightly abused notation. The right hand side is a 1 × 1 matrix!
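The double-sum form of the expected payoff is easy to compute with directly. A minimal sketch (the 2 × 2 payoff matrix and the two strategies below are arbitrary illustrations, not from the text):

```python
# E_A(p, q) = sum_{i,j} a_ij p_i q_j, computed directly from the double sum.
from fractions import Fraction as F

def expected_payoff(A, p, q):
    return sum(A[i][j] * p[i] * q[j]
               for i in range(len(p)) for j in range(len(q)))

A = [[2, -1], [-3, 4]]
p = [F(1, 2), F(1, 2)]      # row player R's strategy
q = [F(1, 4), F(3, 4)]      # column player C's strategy
assert expected_payoff(A, p, q) == 1
```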
We now consider the following problem: Suppose that A is fixed. Is it possible for player R to choose
a strategy p to try to maximize the expected payoff EA (p, q)? Is it possible for player C to choose a
strategy q to try to minimize the expected payoff EA (p, q)?
FUNDAMENTAL THEOREM OF ZERO SUM GAMES. There exist strategies p∗ and q∗ such that
\[
E_A(p^*, q) \ge E_A(p^*, q^*) \ge E_A(p, q^*)
\]
for every strategy p of player R and every strategy q of player C.
Remark. The strategy p∗ is known as an optimal strategy for player R, and the strategy q∗ is known as an optimal strategy for player C. The quantity E_A(p∗, q∗) is known as the value of the game. Optimal strategies are not necessarily unique. However, if p̄∗ and q̄∗ are another pair of optimal strategies, then E_A(p∗, q∗) = E_A(p̄∗, q̄∗).
Zero sum games which are strictly determined are very easy to analyze. Here the payoff matrix A contains saddle points. An entry aij in the payoff matrix A is called a saddle point if it is a least entry in its row and a greatest entry in its column. In this case, the strategies
\[
p^* = ( 0 \ \dots \ 0 \ 1 \ 0 \ \dots \ 0 )
\quad\text{and}\quad
q^* = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},
\]
where the 1s occur in position i in p∗ and position j in q∗, are optimal strategies, so that the value of the game is aij.
Remark. It is very easy to show that different saddle points in the payoff matrix have the same value.
Example 2.13.1. In some sports mad school, the teachers require 100 students to each choose between
rowing (R) and cricket (C). However, the students cannot make up their mind, and will only decide
when the identities of the rowing coach and cricket coach are known. There are 3 possible rowing coaches
and 4 possible cricket coaches the school can hire. The number of students who will choose rowing ahead
of cricket in each scenario is as follows, where R1, R2 and R3 denote the 3 possible rowing coaches, and
C1, C2, C3 and C4 denote the 4 possible cricket coaches:
      C1   C2   C3   C4
R1    75   50   45   60
R2    20   60   30   55
R3    45   70   35   30
[For example, if coaches R2 and C1 are hired, then 20 students will choose rowing, and so 80 students will choose cricket.] We first reset the problem by subtracting 50 from each entry and create a payoff matrix
\[
A = \begin{pmatrix} 25 & 0 & -5 & 10 \\ -30 & 10 & -20 & 5 \\ -5 & 20 & -15 & -20 \end{pmatrix}.
\]
[For example, the top left entry denotes that if each sport starts with 50 students, then 25 is the number cricket concedes to rowing.] Here the entry -5 in row 1 and column 3 is a saddle point, so the optimal strategy for rowing is to use coach R1 and the optimal strategy for cricket is to use coach C3.
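The saddle point search can be done mechanically: scan for entries that are least in their row and greatest in their column. A sketch, applied to the payoff matrix of this example:

```python
# Find saddle points: entries least in their row and greatest in their column.

def saddle_points(A):
    points = []
    for i, row in enumerate(A):
        for j, entry in enumerate(row):
            if entry == min(row) and entry == max(r[j] for r in A):
                points.append((i, j, entry))
    return points

A = [[25, 0, -5, 10],
     [-30, 10, -20, 5],
     [-5, 20, -15, -20]]
assert saddle_points(A) == [(0, 2, -5)]   # the entry -5 in row 1, column 3
```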
In general, saddle points may not exist, so that the problem is not strictly determined. In that case, optimal strategies can be found by linear programming techniques which we do not discuss here. However, in the case of 2 × 2 payoff matrices
\[
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\]
Let
\[
p^* = \left( \frac{a_{22} - a_{21}}{a_{11} - a_{12} - a_{21} + a_{22}} \quad \frac{a_{11} - a_{12}}{a_{11} - a_{12} - a_{21} + a_{22}} \right) \tag{9}
\]
and
\[
q^* = \begin{pmatrix} \dfrac{a_{22} - a_{12}}{a_{11} - a_{12} - a_{21} + a_{22}} \\[3mm] \dfrac{a_{11} - a_{21}}{a_{11} - a_{12} - a_{21} + a_{22}} \end{pmatrix}. \tag{10}
\]
Then
\[
E_A(p^*, q) = \frac{a_{11}a_{22} - a_{12}a_{21}}{a_{11} - a_{12} - a_{21} + a_{22}}
\quad\text{for every strategy } q,
\]
and
\[
E_A(p, q^*) = \frac{a_{11}a_{22} - a_{12}a_{21}}{a_{11} - a_{12} - a_{21} + a_{22}}
\quad\text{for every strategy } p,
\]
so that p∗ and q∗ are optimal strategies, with value
\[
E_A(p^*, q^*) = \frac{a_{11}a_{22} - a_{12}a_{21}}{a_{11} - a_{12} - a_{21} + a_{22}}.
\]
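A sketch of (9) and (10) in code. The sample matrix is an arbitrary 2 × 2 payoff matrix with no saddle point, and the assertions check the key fact used above: the row player's payoff from p∗ is the same against either of C's columns, which is what makes p∗ optimal.

```python
# Optimal strategies for a 2x2 payoff matrix with no saddle point,
# following (9) and (10); D is the denominator a11 - a12 - a21 + a22.
from fractions import Fraction as F

def optimal_2x2(A):
    (a11, a12), (a21, a22) = A
    D = F(a11 - a12 - a21 + a22)
    p = [(a22 - a21) / D, (a11 - a12) / D]          # formula (9)
    q = [(a22 - a12) / D, (a11 - a21) / D]          # formula (10)
    value = (a11 * a22 - a12 * a21) / D
    return p, q, value

A = [[2, -1], [-3, 4]]          # illustrative matrix with no saddle point
p, q, v = optimal_2x2(A)
assert v == F(1, 2)
# p* earns exactly the value of the game against either pure move of C:
assert A[0][0] * p[0] + A[1][0] * p[1] == v
assert A[0][1] * p[0] + A[1][1] * p[1] == v
```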
1. Consider the matrices
\[
A = \begin{pmatrix} 5 & 1 \\ 4 & 2 \end{pmatrix}, \quad
B = \begin{pmatrix} 1 & 2 \\ 9 & 7 \\ 7 & 9 \\ 2 & 1 \end{pmatrix}, \quad
C = \begin{pmatrix} 1 & 0 & 1 & 4 & 5 \\ 3 & 1 & 2 & 3 & 1 \end{pmatrix}, \quad
D = \begin{pmatrix} 1 & 0 & 7 \\ 2 & 1 & 2 \\ 1 & 3 & 0 \end{pmatrix}.
\]
Determine which of the products AB, BA, AC, CA, BC, CB, CD and DC are defined, and evaluate those that are.
b) A = \begin{pmatrix} 1 & 1 & 5 \\ 3 & 0 & 4 \end{pmatrix} and B = \begin{pmatrix} 2 & 1 \\ 3 & 6 \\ 1 & 5 \end{pmatrix}
c) A = \begin{pmatrix} 2 & 1 \\ 3 & 2 \end{pmatrix} and B = \begin{pmatrix} 1 & 4 \\ 12 & 1 \end{pmatrix}
d) A = \begin{pmatrix} 3 & 1 & 4 \\ 2 & 0 & 5 \\ 1 & 2 & 3 \end{pmatrix} and B = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}
3. Evaluate A², where
\[
A = \begin{pmatrix} 2 & 5 \\ 3 & 1 \end{pmatrix},
\]
and find α, β, γ ∈ R, not all zero, such that the matrix αI + βA + γA² is the zero matrix.
4. a) Let
\[
A = \begin{pmatrix} 6 & 9 \\ -4 & -6 \end{pmatrix}.
\]
Show that A² is the zero matrix.
b) Find all 2 × 2 matrices
\[
B = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\]
such that B² is the zero matrix.
5. Prove that if A and B are matrices such that I − AB is invertible, then the inverse of I − BA is given by the formula (I − BA)⁻¹ = I + B(I − AB)⁻¹A.
[Hint: Write C = (I − AB)⁻¹. Then show that (I − BA)(I + BCA) = I.]
6. For each of the matrices below, use elementary row operations to find its inverse, if the inverse exists:
\[
\text{a)}\ \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} \quad
\text{b)}\ \begin{pmatrix} 1 & 2 & 2 \\ 1 & 5 & 3 \\ 2 & 6 & 1 \end{pmatrix} \quad
\text{c)}\ \begin{pmatrix} 1 & 5 & 2 \\ 1 & 1 & 7 \\ 0 & 3 & 4 \end{pmatrix} \quad
\text{d)}\ \begin{pmatrix} 2 & 3 & 4 \\ 3 & 4 & 2 \\ 2 & 3 & 3 \end{pmatrix} \quad
\text{e)}\ \begin{pmatrix} 1 & a & b+c \\ 1 & b & a+c \\ 1 & c & a+b \end{pmatrix}
\]
7. a) Use elementary row operations to show that the inverse of
\[
\begin{pmatrix} 2 & 5 & 8 & 5 \\ 1 & 2 & 3 & 1 \\ 2 & 4 & 7 & 2 \\ 1 & 3 & 5 & 3 \end{pmatrix}
\quad\text{is}\quad
\begin{pmatrix} 3 & -2 & 1 & -5 \\ -2 & 5 & -2 & 3 \\ 0 & -2 & 1 & 0 \\ 1 & -1 & 0 & -1 \end{pmatrix}.
\]
b) Without performing any further elementary row operations, use part (a) to solve the system of linear equations
2x1 + 5x2 + 8x3 + 5x4 = 0,
x1 + 2x2 + 3x3 + x4 = 1,
2x1 + 4x2 + 7x3 + 2x4 = 0,
x1 + 3x2 + 5x3 + 3x4 = 1.
8. Consider the matrix
\[
A = \begin{pmatrix} 1 & 0 & 3 & 1 \\ 1 & 1 & 5 & 5 \\ 2 & 1 & 9 & 8 \\ 2 & 0 & 6 & 3 \end{pmatrix}.
\]
a) Use elementary row operations to find the inverse of A.
b) Use the inverse of A to solve the system of linear equations
x1 + 3x3 + x4 = 1,
x1 + x2 + 5x3 + 5x4 = 0,
2x1 + x2 + 9x3 + 8x4 = 0,
2x1 + 6x3 + 3x4 = 0.
c) C = \begin{pmatrix} 0.2 & 0.2 & 0 \\ 0.1 & 0 & 0.2 \\ 0.3 & 0.1 & 0.3 \end{pmatrix} and d = \begin{pmatrix} 4000000 \\ 8000000 \\ 6000000 \end{pmatrix}
10. Consider three industries A, B and C. For industry A to manufacture $1 worth of its product,
it needs to purchase 25c worth of product from each of industries B and C. For industry B to
manufacture $1 worth of its product, it needs to purchase 65c worth of product from industry A
and 5c worth of product from industry C, as well as use 5c worth of its own product. For industry
C to manufacture $1 worth of its product, it needs to purchase 55c worth of product from industry
A and 10c worth of product from industry B. In a particular week, industry A receives $500000
worth of outside order, industry B receives $250000 worth of outside order, but industry C receives
no outside order. What is the production level required to satisfy all the demands precisely?
11. Suppose that C is an n × n consumption matrix with all column sums less than 1. Suppose further that x′ is the production vector that satisfies an outside demand d′, and that x″ is the production vector that satisfies an outside demand d″. Show that x′ + x″ is the production vector that satisfies an outside demand d′ + d″.
12. Suppose that C is an n × n consumption matrix with all column sums less than 1. Suppose further that the demand vector d has 1 for its top entry and 0 for all other entries. Describe the production vector x in terms of the columns of the matrix (I − C)⁻¹, and give an interpretation of your observation.
13. Consider a pentagon in R2 with vertices (1, 1), (3, 1), (4, 2), (2, 4) and (1, 3). For each of the following
transformations on the plane, find the 3 3 matrix that describes the transformation with respect
to homogeneous coordinates, and use it to find the image of the pentagon:
a) reflection across the x2 -axis
b) reflection across the line x1 = x2
c) anticlockwise rotation by 90°
d) translation by the fixed vector (3, 2)
e) shear in the x2 -direction with factor 2
f) dilation by factor 2
g) expansion in x1 -direction by factor 2
h) reflection across the x2 -axis, followed by anticlockwise rotation by 90°
i) translation by the fixed vector (3, 2), followed by reflection across the line x1 = x2
j) shear in the x2 -direction with factor 2, followed by dilation by factor 2, followed by expansion in
x1 -direction by factor 2
14. In homogeneous coordinates, a 3 × 3 matrix that describes a transformation on the plane is of the form
\[
A = \begin{pmatrix} a_{11} & a_{12} & h_1 \\ a_{21} & a_{22} & h_2 \\ 0 & 0 & 1 \end{pmatrix}.
\]
Show that this transformation can be described by a matrix transformation on R2 followed by a translation in R2.
15. Consider the matrices
\[
A_1 = \begin{pmatrix} 1 & 0 & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\quad\text{and}\quad
A_2 = \begin{pmatrix} \sec\theta & -\tan\theta & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
where θ ∈ R is fixed.
a) What transformation on the plane does the matrix A1 describe?
b) What transformation on the plane does the matrix A2 describe?
c) What transformation on the plane does the matrix A2A1 describe?
16. Consider the matrices
\[
A_1 = \begin{pmatrix} 1 & -\tan\theta & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\quad\text{and}\quad
A_2 = \begin{pmatrix} 1 & 0 & 0 \\ \sin 2\theta & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
where θ ∈ R is fixed.
a) What transformation on the plane does the matrix A1 describe?
b) What transformation on the plane does the matrix A2 describe?
c) What transformation on the plane does the matrix A1A2A1 describe?
[Remark: This technique is often used to reduce the number of multiplication operations.]
17. Show that the products and inverses of 3 × 3 unit lower triangular matrices are also unit lower triangular.
18. For each of the following matrices A and b, find an LU factorization of the matrix A and use it to solve the system Ax = b:
a) A = \begin{pmatrix} 2 & 1 & 2 \\ 4 & 6 & 5 \\ 4 & 6 & 8 \end{pmatrix} and b = \begin{pmatrix} 6 \\ 21 \\ 24 \end{pmatrix}
b) A = \begin{pmatrix} 3 & 1 & 3 \\ 9 & 4 & 10 \\ 6 & 1 & 5 \end{pmatrix} and b = \begin{pmatrix} 5 \\ 18 \\ 9 \end{pmatrix}
c) A = \begin{pmatrix} 2 & 1 & 2 & 1 \\ 4 & 3 & 5 & 4 \\ 4 & 3 & 5 & 7 \end{pmatrix} and b = \begin{pmatrix} 1 \\ 9 \\ 18 \end{pmatrix}
d) A = \begin{pmatrix} 3 & 1 & 1 & 5 \\ 9 & 3 & 4 & 19 \\ 6 & 2 & 1 & 0 \end{pmatrix} and b = \begin{pmatrix} 10 \\ 35 \\ 7 \end{pmatrix}
e) A = \begin{pmatrix} 1 & 2 & 3 & 1 \\ 6 & 10 & 5 & 4 \\ 6 & 4 & 7 & 6 \\ 4 & 2 & 10 & 19 \end{pmatrix} and b = \begin{pmatrix} 2 \\ 1 \\ 1 \\ 28 \end{pmatrix}
f) A = \begin{pmatrix} 2 & 2 & 1 & 2 & 2 \\ 7 & 5 & 4 & 3 & 0 \\ 4 & 7 & 5 & 3 & 2 \\ 6 & 8 & 19 & 8 & 18 \end{pmatrix} and b = \begin{pmatrix} 4 \\ 12 \\ 14 \\ 48 \end{pmatrix}
19. Consider a payoff matrix
\[
A = \begin{pmatrix} 4 & 1 & 6 & 4 \\ 6 & 2 & 0 & 8 \\ 3 & 8 & 7 & 5 \end{pmatrix}.
\]
a) What is the expected payoff if p = ( 1/3  0  2/3 ) and q = ( 1/4  1/4  1/4  1/4 )ᵀ?
b) Suppose that player R adopts the strategy p = ( 1/3  0  2/3 ). What strategy should player C adopt?
c) Suppose that player C adopts the strategy q = ( 1/4  1/4  1/4  1/4 )ᵀ. What strategy should player R adopt?
20. Construct a simple example to show that optimal strategies are not necessarily unique.
21. Show that the entries in the matrices in (9) and (10) are in the range [0, 1].