
LINEAR ALGEBRA

CHAPTER I

Matrices and Vectors.

A rectangular array of numbers is called a matrix and in particular

[ 2 -3 ]
[ 4  1 ]

is called a 2 by 2 matrix. This course will focus on mathematical ideas that can be expressed using matrices. A matrix of the form

[ 2 3 5 2 ]
[ 2 3 3 1 ]
[ 1 3 2 5 ]

is said to have dimension 3 by 4. General practice dictates that we first state the row dimension followed by the column dimension. To keep this straight, choose your own device: Row Column, Roman Catholic, Royal Crown and Root Canal. As usual, the rows go across and the columns go up and down.

A matrix with a single row is called a row vector. A matrix with a single column is called a column vector. A 1 by n matrix is called an n-dimensional vector. [2 -3] is

referred to as a two dimensional vector. Similarly

[ -8 ]
[  5 ]
[  3 ]

is a column vector of dimension 3.

Matrix Addition:

Two matrices of the same dimension can be added as seen in the next example.

Example 1:

[2 3] + [3 -2] = [5 1]


It should be clear how to add two matrices, but even so, we need to make a formal definition for the process.

Definition (Matrix Addition): If two matrices are of identical dimension, they may be added. The entry in the i-th row and j-th column of the sum is the sum of the entries in the i-th row and j-th column of the operands.

Scalar-Matrix Multiplication

Next, we introduce multiplication between real numbers and matrices, referred to as scalar multiplication.

Example 2:

2 [ 3 5 ]   [ 6 10 ]
  [ 4 6 ] = [ 8 12 ]

3[3 4 5] = [9 12 15]

Definition (Scalar-Matrix Multiplication): If r is a real number and A is a matrix, then rA is the matrix obtained from A by replacing each entry with r times the corresponding entry of A.
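These two operations are easy to experiment with by machine. The course's own computations are done in Maple, but here is a minimal sketch in Python with NumPy (our assumption, not part of the course materials), reproducing Examples 1 and 2:

```python
import numpy as np

# Matrix addition: entries in matching positions add (Example 1).
print(np.array([[2, 3]]) + np.array([[3, -2]]))   # [[5 1]]

# Scalar-matrix multiplication: every entry is scaled (Example 2).
A = np.array([[3, 5],
              [4, 6]])
print(2 * A)                        # [[ 6 10], [ 8 12]]
print(3 * np.array([[3, 4, 5]]))    # [[ 9 12 15]]
```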

Linear Combinations:

These two definitions permit us to introduce a fundamental idea of Linear Algebra. Consider the expression

2[3 2] + (-3)[1 5].

We refer to this expression as a linear combination of the vectors [3 2] and [1 5]. Here is another example of a linear combination but this time involving the three vectors

[3 2], [1 5] and [-1 2].


2[3 2] + 3[1 5] + 2[-1 2]

By the expression "linear combination", we will mean a sum with several terms, each one being a scalar times a vector.

We note that 2[3 2] + (-3)[1 5] = [3 -11]. This equation can be expressed verbally by writing "the vector [3 -11] can be written as a linear combination of the vectors [3 2] and [1 5] using the coefficients 2 and -3 respectively."

The most basic problem in Linear Algebra can be phrased as follows: Can the vector [0 8] be expressed as a linear combination of the vectors [1 3] and [-2 2]? The answer is yes, and to see this, try 2 as the coefficient of [1 3] and 1 as the coefficient of [-2 2].

2[1 3] + 1[-2 2] = [0 8]

We will spend a good deal of time learning how to answer this question in general. We now state a formal definition of linear combination.

Definition: By a linear combination of the vectors v1, v2, v3, ..., vn with coefficients a1, a2, a3, ..., an, we mean a sum of the form a1v1 + a2v2 + a3v3 + ... + anvn.

The next example will demonstrate the use of the idea of linear combinations applied to a broad class of practical problems.

Example 3: We need to blend two types of fertilizer. Type A is classified as 10-30-10 indicating that it consists of 10% phosphorus, 30% nitrogen and 10% potassium. Type B is classified as 20-20-20. Blending 30 pounds of type A with 20 pounds of type B will result in a mixture described by a vector.

30[.1 .3 .1] + 20[.2 .2 .2] = [3 9 3] + [4 4 4] = [7 13 7] = 50[.14 .26 .14].

The resulting mixture has 7 pounds of phosphorus, 13 pounds of nitrogen and 7 pounds of potassium.
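The blend is itself a linear combination of two vectors, so the arithmetic of Example 3 can be checked directly. A minimal NumPy sketch, under the same assumption as before:

```python
import numpy as np

# Blend 30 lb of 10-30-10 with 20 lb of 20-20-20 (Example 3).
type_a = np.array([0.1, 0.3, 0.1])   # phosphorus, nitrogen, potassium fractions
type_b = np.array([0.2, 0.2, 0.2])

mix = 30 * type_a + 20 * type_b      # a linear combination of the two vectors
print(mix)        # [ 7. 13.  7.]  pounds of each nutrient
print(mix / 50)   # [0.14 0.26 0.14]  i.e. a 14-26-14 blend
```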


Vectors and Points.

We consider two similar sets. The first is the set of points in the Euclidean plane. We denote this set by E2, and we specify a particular point by using coordinates such as (1, 2). The second set is the set of all vectors with two real components. We denote this set by R2 which is read "the 2 dimensional vector space over the reals". To emphasize the difference between these two sets, we use [1 2] to represent a vector and (1, 2) to represent the point.

So points belong to the plane and vectors belong to vector spaces. We need to be aware of the difference.

Assume that we have a plane with a coordinate system in place. It is correct to say that the point (2, 3) is a member of E2. But technically the vector [2 3] is not a subset or a member of E2. But we can choose a representative of [2 3] to be a directed segment, an arrow, beginning at (0, 0) and ending at (2, 3). But any other arrow that goes to the right two and up three is also a representative of [2 3]. For example the arrow with its tail at (3, 1) and head at (5, 4) also is a representative of the vector [2 3].

[Figure: two representatives of the vector [2 3], one with tail at (0, 0) and one with tail at (3, 1) and head at (5, 4).]

Which vector has a representation with its tail at (1, 2) and head at (5, 1)? The answer is [4 -1]. The change in the x-coordinate from 1 to 5 is 4, and the change in the y-coordinate from 2 to 1 is -1.

[Figure: the representative of [4 -1] with tail at (1, 2) and head at (5, 1).]

Addition between vectors has an interesting visualization in the plane. Consider [3 1] and [2 2]. Their sum is [5 3]. Choose the representative vector for [3 1] with its tail at (0, 0). This places the head at (3, 1). Representing [2 2] with its tail at (3, 1) requires that the head be located at (5, 3). The representative of the sum [3 1] + [2 2] with its tail at (0, 0) has its head at

(5, 3), which completes the triangle. In general, a realization of the sum of two vectors can be found by placing the tail of one operand on the head of the second operand, forming half of a parallelogram. The diagonal of the parallelogram is a realization of the sum.

[Figure: head-to-tail addition of [3 1] and [2 2], with the diagonal representing the sum [5 3].]

A realization of 3[2 1] is a directed segment or arrow in the same direction as the representative of [2 1] but three times as long.

[Figure: the representative of 3[2 1], pointing the same way as [2 1] but three times as long.]

Example 4: Expressing a vector as a linear combination of two other vectors has an appealing geometric solution in the plane. Find the coefficients X and Y so that [3 4] = X[3 -1] + Y[-1 2]. We first depict the target vector [3 4] with its tail at the origin and head at (3, 4). We add to this the vector [3 -1] with its tail fixed at the origin and the vector [-1 2] with its head placed at the head of the target vector at (3, 4).

[Figure: the target vector [3 4] with tail at the origin, the vector [3 -1] with tail at the origin, and the vector [-1 2] with head at (3, 4).]


Next, we extend the line containing the vector [-1 2] with the knowledge that a multiple Y[-1 2] can be represented on that line by altering the position of the tail. In the same manner, choose all representatives of vectors of the form X[3 -1] with tails at (0, 0). The set of all heads forms a line. We name the point of intersection of these two lines P. Choose the value for X so that when X[3 -1] is represented with its tail at (0, 0), the head falls at P. Similarly choose Y so that Y[-1 2] has its tail at P and its head at (3, 4). For this problem, we estimate the values X = 2 and Y = 3.

[Figure: the line of heads of the multiples X[3 -1] drawn from the origin and the extended line through [-1 2], meeting at the point P.]

This graphical approach to solving such problems is limited by our ability to measure and will soon be augmented by analytical tools, but understanding this technique may help form an intuitive grasp of linear combinations.

Exercises: Please go to the MapleTA site for Assignment 1.1.

Matrix Notation

It will be useful to select a single row of a matrix and treat it as a vector and we introduce notation to facilitate this.

Example 5: Let

A = [ 1 2 ]
    [ 3 4 ].

Then Row(A, 1) = [1 2] and Row(A, 2) = [3 4].

For a general n by m matrix A we will write Row(A, j) where 1 ≤ j ≤ n to denote an m-dimensional vector whose entries are the entries of the j-th row of A. Similarly Column(A, k) for 1 ≤ k ≤ m will represent the k-th column vector of A.


To select an individual entry from a matrix we will say A = [aj,k] to mean that aj,k represents the entry in the j-th row and k-th column.

Matrix Multiplication

Before defining matrix multiplication, we will look at an example.

Example 6:

[ 1 2 ][ 3 5 ]   [  5 19 ]
[ 4 3 ][ 1 7 ] = [ 15 41 ]

The factors of the product are referred to as operands. The left operand is

[ 1 2 ]
[ 4 3 ]

and the right is

[ 3 5 ]
[ 1 7 ].

First consider why the top row of the product equals [5 19]. [5 19] is obtained as a linear combination of the row vectors [3 5] and [1 7] from the right operand. When computing the top row of the product, we use the top row of the left operand as coefficients. That is

1[3 5] + 2[1 7] = [5 19]

Similarly, the second row of the product is a linear combination of row vectors [3 5] and [1 7]. But this time we use the second row of the left operand for the coefficients.

4[3 5] + 3[1 7] = [15 41]

Let's look at a second example.

Example 7:

[ 2 1 ][ 1 0 2 ]   [ 5 2  7 ]
[ 3 2 ][ 3 2 3 ] = [ 9 4 12 ]

Here the top row of the product is 2[1 0 2] + 1[3 2 3] = [5 2 7], and the bottom row is 3[1 0 2] + 2[3 2 3] = [9 4 12].

Definition(Matrix Multiplication): Suppose A and B are two matrices such that the number of columns of A equals the number of rows of B. We regard each row of B as a vector. The j-th row of the product of A and B is a linear combination of the rows of the right operand B with coefficients from the j-th row of the left operand A used in order.


If A = [aj,k] then we write Row(AB, j) = aj,1Row(B, 1) + aj,2Row(B, 2) + ... + aj,mRow(B, m). Using sigma notation we can write

Row(AB, j) = Σ (k = 1 to m) aj,k Row(B, k).
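This sigma formula translates directly into a short routine. In the sketch below, row_combination_product is a hypothetical helper name of ours; it builds each row of AB as a linear combination of the rows of B, exactly as the definition prescribes.

```python
import numpy as np

def row_combination_product(A, B):
    """Row j of AB = sum over k of A[j, k] * Row(B, k)."""
    n, m = A.shape
    assert m == B.shape[0], "columns of A must equal rows of B"
    AB = np.zeros((n, B.shape[1]))
    for j in range(n):
        for k in range(m):
            AB[j] += A[j, k] * B[k]   # coefficient times a row vector of B
    return AB

A = np.array([[1, 2], [4, 3]])
B = np.array([[3, 5], [1, 7]])
print(row_combination_product(A, B))   # [[ 5. 19.], [15. 41.]] -- Example 6
```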

Unlike ordinary multiplication, matrix multiplication is not commutative. In other words, for some matrices A and B, AB ≠ BA. In one case you are using A as the source of coefficients and in the other A is providing the row vectors.

In our previous example we calculated

[ 1 2 ][ 3 5 ]   [  5 19 ]
[ 4 3 ][ 1 7 ] = [ 15 41 ].

We reverse the order of the operands in the next example.

Example 8:

[ 3 5 ][ 1 2 ]   [ 23 21 ]
[ 1 7 ][ 4 3 ] = [ 29 23 ]

Clearly, multiplication is not commutative.

Identity Matrix

Multiplying with

[ 1 0 ]
[ 0 1 ]

as the left operand leaves the right operand unaffected, and this is simple to see.

Example 9:

[ 1 0 ][ a b ]   [ 1[a b] + 0[c d] ]   [ a b ]
[ 0 1 ][ c d ] = [ 0[a b] + 1[c d] ] = [ c d ]

Considering the result when

[ 1 0 ]
[ 0 1 ]

is the right operand, we see that the matrix, I, behaves as an identity both on the left and right.

[ a b ][ 1 0 ]   [ a[1 0] + b[0 1] ]   [ a b ]
[ c d ][ 0 1 ] = [ c[1 0] + d[0 1] ] = [ c d ]


Indeed,

[ 1 0 ]
[ 0 1 ]

is called the identity for 2 by 2 matrices. When we are considering 3 by 3 matrices, the identity is

    [ 1 0 0 ]
I = [ 0 1 0 ].
    [ 0 0 1 ]

You can guess what the identity for 4 by 4 matrices looks like.

Elementary Matrices

There are matrices whose role will be central to our work and which consequently are called elementary matrices. All elementary matrices are square, that is, of dimension n by n. There are three types of elementary matrices. We name them type zero, type one and type two. We shall soon see the reason for the choice of names.

Type Zero

An elementary matrix of type zero is a matrix obtained from the identity matrix I by replacing exactly one of the zeros with a non-zero constant.

Example 10:

[  1 0 ]      [ 1 0  0  ]
[ -1 1 ]      [ 0 1 1/2 ]
              [ 0 0  1  ]

Type Zero Elementary Matrices

Try multiplying the matrix

B = [ 1 2 ]
    [ 3 4 ]

on the left by

E = [ 1 -1 ]
    [ 0  1 ].

Example 11:

[ 1 -1 ][ 1 2 ]   [ 1[1 2] + (-1)[3 4] ]   [ -2 -2 ]
[ 0  1 ][ 3 4 ] = [        [3 4]       ] = [  3  4 ]


The top row of the resulting matrix, EB, is formed by subtracting the bottom row from the original top row. The new bottom row is the same as the old. Row notation can be used to describe exactly what is happening.

Row(EB, 1) = Row(B, 1) - Row(B, 2) and Row(EB, 2) = Row(B, 2)

In fact, the effect of using

     [ 1 0 0 ]
E1 = [ 0 1 0 ]
     [ 0 a 1 ]

as a left multiplier is easily described.

Row(E1B, 1) = Row(B, 1)
Row(E1B, 2) = Row(B, 2)
Row(E1B, 3) = Row(B, 3) + a*Row(B, 2)

Type One

A type one elementary matrix is obtained from the identity by replacing one of the ones with a non-zero constant. We intentionally include I, the identity matrix, in this group.

Example 12:

[ 1  0  0 ]      [ 1  0  0 0 ]
[ 0 1/3 0 ]      [ 0 1/2 0 0 ]
[ 0  0  1 ]      [ 0  0  1 0 ]
                 [ 0  0  0 5 ]

Type One Elementary Matrices

When multiplying by a type one matrix, one row of the right operand is changed by multiplying each member of the row by a constant.

[ 2 0 ][ a b ]   [ 2a 2b ]
[ 0 1 ][ c d ] = [  c  d ]


If

    [ 1 0 0 ]
E = [ 0 3 0 ],
    [ 0 0 1 ]

then we can write Row(EA, 1) = Row(A, 1), Row(EA, 2) = 3*Row(A, 2) and Row(EA, 3) = Row(A, 3).

Type Two

The remaining type of elementary matrix, type two, is obtained from the identity by interchanging exactly two rows.

Example 13:

[ 0 1 0 ]      [ 1 0 0 0 ]
[ 1 0 0 ]      [ 0 0 1 0 ]
[ 0 0 1 ]      [ 0 1 0 0 ]
               [ 0 0 0 1 ]

Type Two Elementary Matrices

Using

[ 0 1 ]
[ 1 0 ]

as a left operand produces easily predicted results. The product is the right operand with interchanged rows.

[ 0 1 ][ a b ]   [ c d ]
[ 1 0 ][ c d ] = [ a b ]
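All three effects are easy to observe numerically. A minimal sketch, with the elementary matrices written out by hand:

```python
import numpy as np

B = np.array([[1., 2.],
              [3., 4.]])

E0 = np.array([[1., -1.],   # type zero: a zero of I replaced by -1
               [0.,  1.]])
E1 = np.array([[1., 0.],    # type one: a one of I replaced by 3
               [0., 3.]])
E2 = np.array([[0., 1.],    # type two: the two rows of I interchanged
               [1., 0.]])

print(E0 @ B)   # [[-2. -2.], [3. 4.]]  -- row 1 minus row 2 (Example 11)
print(E1 @ B)   # [[ 1.  2.], [9. 12.]] -- bottom row tripled
print(E2 @ B)   # [[ 3.  4.], [1.  2.]] -- rows interchanged
```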

Elementary Matrices have Inverses

The notion of inverses is slightly more complicated owing to the fact that multiplication is non-commutative.

Definition (Inverse of a matrix): Assume that A and B are n by n matrices and that I is the n by n identity. We will say that A is a left inverse of B if AB = I. In this case B is called a right inverse of A.

We will prove in good time that AB = I implies BA = I, but for the time being we will entertain the possibility that a left inverse could fail to be a right inverse.


In addition to the fact that multiplication by elementary matrices is easy, the property which makes elementary matrices so very useful is that the effect of multiplication is reversible. That is, if you multiply E times A, an elementary matrix times a matrix, there is an elementary matrix E^-1 which undoes the effects of E. In essence we have

E^-1(EA) = A.

For example, left multiplying by

[ 2 0 ]
[ 0 1 ]

doubles the top row of the right operand and leaves the bottom row unchanged. It should be clear that to "undo" the effects of this multiplication we use

[ 1/2 0 ]
[  0  1 ]

as a left multiplier.

[ 2 0 ][ a b ]   [ 2a 2b ]          [ 1/2 0 ][ 2a 2b ]   [ a b ]
[ 0 1 ][ c d ] = [  c  d ]          [  0  1 ][  c  d ] = [ c d ]

Similarly, the effect of left multiplication by the elementary matrix

[ 1 -3 ]
[ 0  1 ]

is undone by left multiplication by

[ 1 3 ]
[ 0 1 ].

[ 1 -3 ][ a b ]   [ a - 3c  b - 3d ]          [ 1 3 ][ a - 3c  b - 3d ]   [ a b ]
[ 0  1 ][ c d ] = [   c        d   ]          [ 0 1 ][   c        d   ] = [ c d ]

What matrix would undo the effects of multiplication by the type two matrix

[ 1 0 0 ]
[ 0 0 1 ]
[ 0 1 0 ]


To summarize the reversibility of multiplication by elementary matrices we state and prove the following theorem.

Theorem 1.1(Undoing Elementary Multiplication): If E is elementary and EA = B then there is an elementary matrix E^-1 such that A = E^-1B.

Proof: Let E be an elementary matrix. E must be one of the three types. If E is of type 0, E is obtained from I by replacing one of the 0's with a non-zero real number r. The elementary matrix which undoes the effect of this matrix is obtained from E by replacing r with -r.

If E is of type 1, that is, E is obtained from the identity by replacing one of the 1's with another non-zero real number r, then the elementary matrix which undoes the effect of this matrix is obtained from E by replacing r with 1/r.

Finally, if E is type 2 elementary matrix obtained from the identity by swapping two rows, then the effects of multiplication by E are undone by a second multiplication by E.

Corollary(Each Elementary Matrix has an Inverse): If E is an elementary matrix then there is an elementary matrix, called E^-1, such that E^-1E = I and EE^-1 = I.

Proof: We know that EI = E. Applying Theorem 1.1 we get E^-1E = I. For the product in the other order, note that the matrix which undoes E^-1 is E itself, so the same argument gives EE^-1 = I.
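The three undo recipes in the proof can be checked numerically. A sketch; the constructor names type_zero, type_one and type_two are hypothetical names of ours:

```python
import numpy as np

def type_zero(n, i, j, r):
    E = np.eye(n); E[i, j] = r; return E            # a zero of I replaced by r

def type_one(n, i, r):
    E = np.eye(n); E[i, i] = r; return E            # a one of I replaced by r

def type_two(n, i, j):
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E  # two rows interchanged

# Type 0: undo by replacing r with -r.
print(type_zero(3, 2, 1, -5.0) @ type_zero(3, 2, 1, 5.0))   # the identity
# Type 1: undo by replacing r with 1/r.
print(type_one(3, 1, 0.25) @ type_one(3, 1, 4.0))           # the identity
# Type 2: undo by multiplying by the same matrix again.
E = type_two(3, 0, 2)
print(E @ E)                                                # the identity
```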


Exercises

Problem: You are to use Maple to demonstrate that matrix multiplication is indeed associative by performing 3 distinct "experiments".

1) Choose three 2 by 2 matrices A, B and C and show that AB times C is the same as A times BC. There is a Maple command for testing if two matrices are equal. Using LinearAlgebra simply say Equal(X,Y).

2) Repeat the above for three 3 by 3 matrices

3) Repeat the above where none of the three is a square matrix.

Save this file as "yourlastname maple 1" and append it to an email to tsmith@monmouth.edu.

Problem: You are to investigate whether matrix multiplication between 2 by 2 elementary matrices of Type 0 is commutative. Submit your answer as a word document featuring matrices. Two people can team up on this problem.

Save this file as "yourlastname1 yourlastname2 word 1" and append it to an email to tsmith@monmouth.edu.

Exercises: Please go to the MapleTA site for Assignment 1.2.


Solving Matrix Equations I

We want to determine a matrix

[ a b ]
[ c d ]

such that

[  3  9 ][ a b ]   [ 15  30 ]
[ -2 -4 ][ c d ] = [ -8 -14 ].

Conceptually, this problem has much in common with finding x such that (2/3)x = 3/5.

Step 1: Multiply both sides of the equation by 15, producing the new equation 10x = 9.
Step 2: Multiply both sides of the equation by 1/10, producing the new equation x = 9/10.

Why is this legitimate? That is, why does our new equation have the same solution as the original? The key, of course, is that in each step we have multiplied by a number which has an inverse and therefore the multiplication can be "undone". That is

x = 9/10 => 10x = 9 => (2/3)x = 3/5.

This is precisely why we are not permitted to multiply both sides of an equation by zero. Zero doesn't have an inverse.

Example 13: Find

[ a b ]
[ c d ]

such that

[  3  9 ][ a b ]   [ 15  30 ]
[ -2 -4 ][ c d ] = [ -8 -14 ].

We are permitted to left multiply both sides of any equation by elementary matrices. The resulting equation has the same solutions as the original. We simply left multiply both sides by elementary matrices until we have changed the equation into a new equation with coefficient

[ 1 0 ]
[ 0 1 ].

The strategy is to choose an elementary matrix so that the effect of left multiplication is to make each new coefficient matrix a step closer to the identity matrix I. We start with the top of column 1 of the coefficient matrix. The identity matrix has a 1 in this position. Therefore changing the 3 to 1 in the top left position is our first goal. We left multiply both sides by

[ 1/3 0 ]
[  0  1 ].

Notice that we use a type 1 matrix to produce a 1.


[ 1/3 0 ] ([  3  9 ][ a b ])   [ 1/3 0 ][ 15  30 ]
[  0  1 ] ([ -2 -4 ][ c d ]) = [  0  1 ][ -8 -14 ]

Assuming that matrix multiplication is associative we write

[  1  3 ][ a b ]   [  5  10 ]
[ -2 -4 ][ c d ] = [ -8 -14 ].

The new coefficient agrees with the identity in the top left spot.

Continuing to focus on the first column, we now change the -2 to a zero. We are trying to make a zero so we use a type zero matrix as the multiplier. We multiply by

[ 1 0 ]
[ 2 1 ]

because this adds twice the 1 to the -2, producing zero. Again using associativity, we write

[ 1 0 ][  1  3 ][ a b ]   [ 1 0 ][  5  10 ]        [ 1 3 ][ a b ]   [ 5 10 ]
[ 2 1 ][ -2 -4 ][ c d ] = [ 2 1 ][ -8 -14 ]   =>   [ 0 2 ][ c d ] = [ 2  6 ]

The first column of the coefficient

[ 1 3 ]
[ 0 2 ]

is identical to the first column of I. We now focus on column 2. First we need a 1 in the bottom right. Use the type 1 elementary matrix

[ 1  0  ]
[ 0 1/2 ]

to obtain

[ 1 3 ][ a b ]   [ 5 10 ]
[ 0 1 ][ c d ] = [ 1  3 ].

Finally we use a type zero to replace the 3 with a 0. Left multiplying both sides by

[ 1 -3 ]
[ 0  1 ]

gives

[ 1 0 ][ a b ]   [ 2 1 ]
[ 0 1 ][ c d ] = [ 1 3 ].

Now we check our answer:

[  3  9 ][ 2 1 ]   [  3[2 1] + 9[1 3] ]   [ 15  30 ]
[ -2 -4 ][ 1 3 ] = [ -2[2 1] - 4[1 3] ] = [ -8 -14 ]

The entire process required 4 steps, one for each entry in the coefficient matrix. Such a problem should never require more steps and will sometimes require fewer. We started in column 1 and got the required 1. Then we used that 1 to get a 0 below it. Once column 1 is the same as column 1 of I, we move to column 2. Replace the bottom right entry with a 1. Use this 1 to replace the top right with a 0. There will be times when this technique will not yield a solution. This will mean that a solution does not exist.
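The four steps can be replayed by machine by left multiplying both sides by the same four elementary matrices. A minimal sketch:

```python
import numpy as np

A = np.array([[3., 9.], [-2., -4.]])
B = np.array([[15., 30.], [-8., -14.]])

steps = [np.array([[1/3, 0], [0, 1]]),    # make the top left a 1
         np.array([[1, 0], [2, 1]]),      # clear below it
         np.array([[1, 0], [0, 1/2]]),    # make the bottom right a 1
         np.array([[1, -3], [0, 1]])]     # clear above it

for E in steps:
    A, B = E @ A, E @ B   # the same elementary multiplier on both sides
print(A)   # the identity
print(B)   # the solution [[2. 1.], [1. 3.]]
```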

Example 14: Determine

[ a b ]
[ c d ]

such that

[  3  6 ][ a b ]   [  2 3 ]
[ -1 -2 ][ c d ] = [ -8 2 ].

Our first step is to multiply both sides by

[ 1/3 0 ]
[  0  1 ].

The result is

[  1  2 ][ a b ]   [ 2/3 1 ]
[ -1 -2 ][ c d ] = [ -8  2 ].

Next we multiply by

[ 1 0 ]
[ 1 1 ]

and obtain

[ 1 2 ][ a b ]   [  2/3  1 ]
[ 0 0 ][ c d ] = [ -22/3 3 ].

But the product

[ 1 2 ][ a b ]
[ 0 0 ][ c d ]

will always have a bottom row consisting of zeros. This says that not only are we not going to find a solution in this manner, but that there cannot be a solution. Any solution to

[  1  2 ][ a b ]   [ 2/3 1 ]
[ -1 -2 ][ c d ] = [ -8  2 ]

would also be a solution to

[ 1 2 ][ a b ]   [  2/3  1 ]
[ 0 0 ][ c d ] = [ -22/3 3 ],

and the latter clearly has no solution.

Exercises: Please go to the MapleTA site for Assignment 1.1.

Algebraic Properties of Matrix Multiplication

We begin with a property which matrix multiplication does not have. Matrix multiplication is not commutative. That is, for some matrices A and B, AB ≠ BA.

[ 0 1 ][ 1 0 ]   [ 1 1 ]         [ 1 0 ][ 0 1 ]   [ 0 1 ]
[ 1 0 ][ 1 1 ] = [ 1 0 ]   but   [ 1 1 ][ 1 0 ] = [ 1 1 ]

So, when we multiply both sides of an equation by a matrix, if we left multiply on one side, then we must left multiply on the other.

Associativity

We are so used to the associative law being true that we rarely ever think about it. Consider a matrix equation solved in the previous section.

[  3  9 ][ a b ]   [ 15  30 ]
[ -2 -4 ][ c d ] = [ -8 -14 ]


Our first step in the solution was to left multiply by

[ 1/3 0 ]
[  0  1 ]:

[ 1/3 0 ] ([  3  9 ][ a b ])   [ 1/3 0 ][ 15  30 ]
[  0  1 ] ([ -2 -4 ][ c d ]) = [  0  1 ][ -8 -14 ]

And it should be clear how we use the associative law, on the left side, to produce the next equation.

([ 1/3 0 ][  3  9 ]) [ a b ]   [ 1/3 0 ][ 15  30 ]
([  0  1 ][ -2 -4 ]) [ c d ] = [  0  1 ][ -8 -14 ]

Theorem 1.2(Matrix Multiplication is Associative): If A, B and C are three matrices such that AB and BC are defined, then (AB)C and A(BC) are defined and equal.

Proving associativity is, in many instances, pure drudgery, and this instance is no exception. Therefore, the proof will not be given. We leave the proof of the 2 by 2 case as an exercise. It is a matter of multiplying out each of the following products and determining that they are the same:

([ a b ][ e f ]) [ i j ]        [ a b ] ([ e f ][ i j ])
([ c d ][ g h ]) [ k l ]  and   [ c d ] ([ g h ][ k l ])

Distributive Law

Theorem 1.3(Matrix multiplication distributes over addition): If A is a j-by-k matrix and B and C are k-by-n matrices then A(B + C) = AB + AC.

The proof for the 2 by 2 case is also left as an exercise and involves determining that the following is indeed an equality.


[ a1,1 a1,2 ] ([ b1,1 b1,2 ]   [ c1,1 c1,2 ])   [ a1,1 a1,2 ][ b1,1 b1,2 ]   [ a1,1 a1,2 ][ c1,1 c1,2 ]
[ a2,1 a2,2 ] ([ b2,1 b2,2 ] + [ c2,1 c2,2 ]) = [ a2,1 a2,2 ][ b2,1 b2,2 ] + [ a2,1 a2,2 ][ c2,1 c2,2 ]

Some Matrices have Inverses

We have already identified

[ 1 0 ]
[ 0 1 ]

as the identity matrix for 2 by 2 matrices.

Definition: Let I be the identity matrix and A be an n-by-n matrix. A is said to be invertible if there is a matrix B such that AB = BA = I. B is said to be the inverse of A and we write B = A^-1.

We have seen that each elementary matrix has an inverse. But not all matrices have inverses and this can be easily seen. To find an inverse of

[ 0 0 ]
[ 0 1 ]

we would need to solve

[ 0 0 ][ a b ]   [ 1 0 ]
[ 0 1 ][ c d ] = [ 0 1 ].

But multiplication by

[ 0 0 ]
[ 0 1 ]

produces a top row which is the zero vector:

[ 0 0 ][ a b ]   [ 0 0 ]
[ 0 1 ][ c d ] = [ c d ],

and so

[ 0 0 ]
[ 0 1 ]

has no inverse.


Most matrices have inverses. Interestingly, the probability of choosing a matrix randomly and getting one that is not invertible is zero. But since we rarely choose randomly, we get matrices having no inverses often.

Finding the Right Inverse of a Matrix

For any n by n matrices A and B we can form the product AB. If AB = I we will say that B is a right inverse of A. We could also say that A is a left inverse of B. We shall eventually see that AB = I means that BA = I. That is, B being a right inverse of A means that A and B are inverses. Until this is established we will continue to make a distinction between inverses and right inverses.

Let us assume that

[ 1 3 ]
[ 2 7 ]

has a right inverse. Then the matrix equation

[ 1 3 ][ a b ]   [ 1 0 ]
[ 2 7 ][ c d ] = [ 0 1 ]

has the right inverse as a solution. To find the inverse, we solve this equation in the same manner as the previous equations.

Strategy

Left multiply by:                  Results:

Replace 2 with 0:

[  1 0 ]                           [ 1 3 ][ a b ]   [  1 0 ]
[ -2 1 ]                           [ 0 1 ][ c d ] = [ -2 1 ]

Replace 3 with 0:

[ 1 -3 ]                           [ 1 0 ][ a b ]   [  7 -3 ]
[ 0  1 ]                           [ 0 1 ][ c d ] = [ -2  1 ]

The inverse turns out to be

[  7 -3 ]
[ -2  1 ].


This example was very simple since it only took two steps. The important thing is to understand how the elementary multipliers were chosen. In both instances we are trying to replace a non-zero number with a zero. To do the job, that of producing a zero, we used a Type 0 elementary matrix. We will see that to produce a one, we will generally choose a Type 1 elementary matrix.

The reader should check that we have indeed found the inverse by performing the following multiplication.

[ 1 3 ][  7 -3 ]   [ 1 0 ]
[ 2 7 ][ -2  1 ] = [ 0 1 ]

Admittedly, we rigged the previous example to come out easily. Two-step problems are rather rare. But a 2 by 2 matrix need never take more than four steps.

Example 17: Find the inverse for the matrix

[ 2 3 ]
[ 5 3 ].

Strategy

Left multiply by:                  Results:

Replace 2 with 1:

[ 1/2 0 ]                          [ 1 3/2 ][ a b ]   [ 1/2 0 ]
[  0  1 ]                          [ 5  3  ][ c d ] = [  0  1 ]

Replace 5 with 0:

[  1 0 ]                           [ 1  3/2 ][ a b ]   [  1/2 0 ]
[ -5 1 ]                           [ 0 -9/2 ][ c d ] = [ -5/2 1 ]

Replace -9/2 with 1:

[ 1   0  ]                         [ 1 3/2 ][ a b ]   [ 1/2   0  ]
[ 0 -2/9 ]                         [ 0  1  ][ c d ] = [ 5/9 -2/9 ]

Replace 3/2 with 0:

[ 1 -3/2 ]                         [ 1 0 ][ a b ]   [ -1/3  1/3 ]
[ 0   1  ]                         [ 0 1 ][ c d ] = [  5/9 -2/9 ]

The inverse of

[ 2 3 ]      [ -1/3  1/3 ]
[ 5 3 ]  is  [  5/9 -2/9 ].


In the next example we will find the inverse of a 3 by 3 matrix and begin to simplify the notation. We will not write down the elementary matrix at each step and we will combine the coefficient and target matrix in one array.

Example 18: Find the inverse of the matrix

[ 2 2 -2 ]
[ 1 2  0 ].
[ 2 4  1 ]

As before, finding an inverse is simply a specialized case of solving a general matrix equation of the form AX = B. The B in this case is the identity. And we proceed to left multiply both sides by elementary matrices until the coefficient matrix A has been converted into the identity.

We now find the inverse by solving

[ 2 2 -2 ][ a b c ]   [ 1 0 0 ]
[ 1 2  0 ][ d e f ] = [ 0 1 0 ].
[ 2 4  1 ][ g h i ]   [ 0 0 1 ]

But we capture all the action by the changes in this combined array

     [ 2 2 -2 | 1 0 0 ]
S1 = [ 1 2  0 | 0 1 0 ].
     [ 2 4  1 | 0 0 1 ]

Row Action                                 New Array

Row(S2,1) <- (1/2)*Row(S1,1)               [ 1 1 -1 | 1/2 0 0 ]
                                           [ 1 2  0 |  0  1 0 ]
                                           [ 2 4  1 |  0  0 1 ]

Row(S3,2) <- Row(S2,2) - Row(S2,1)         [ 1 1 -1 |  1/2 0 0 ]
                                           [ 0 1  1 | -1/2 1 0 ]
                                           [ 2 4  1 |   0  0 1 ]

Row(S4,3) <- Row(S3,3) - 2*Row(S3,1)       [ 1 1 -1 |  1/2 0 0 ]
                                           [ 0 1  1 | -1/2 1 0 ]
                                           [ 0 2  3 |  -1  0 1 ]

Row(S5,1) <- Row(S4,1) - Row(S4,2)         [ 1 0 -2 |   1 -1 0 ]
                                           [ 0 1  1 | -1/2 1 0 ]
                                           [ 0 2  3 |  -1  0 1 ]

Row(S6,3) <- Row(S5,3) - 2*Row(S5,2)       [ 1 0 -2 |   1 -1 0 ]
                                           [ 0 1  1 | -1/2 1 0 ]
                                           [ 0 0  1 |   0 -2 1 ]

Row(S7,1) <- Row(S6,1) + 2*Row(S6,3)       [ 1 0 0 |   1  -5 2 ]
                                           [ 0 1 1 | -1/2  1 0 ]
                                           [ 0 0 1 |   0  -2 1 ]

Row(S8,2) <- Row(S7,2) - Row(S7,3)         [ 1 0 0 |   1  -5  2 ]
                                           [ 0 1 0 | -1/2  3 -1 ]
                                           [ 0 0 1 |   0  -2  1 ]

We conclude that the inverse of

[ 2 2 -2 ]      [   1  -5  2 ]
[ 1 2  0 ]  is  [ -1/2  3 -1 ].
[ 2 4  1 ]      [   0  -2  1 ]
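The bookkeeping in Example 18 is exactly what a small row-reduction routine performs. In this sketch invert_by_reduction is a hypothetical name of ours, and NumPy's built-in inverse is used only as a cross-check:

```python
import numpy as np

def invert_by_reduction(A):
    """Row-reduce [A | I]; the right half ends up as A's inverse."""
    n = A.shape[0]
    S = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(S[col:, col]))  # non-zero pivot (A assumed invertible)
        S[[col, pivot]] = S[[pivot, col]]              # type two: swap rows
        S[col] /= S[col, col]                          # type one: leading 1
        for r in range(n):
            if r != col:
                S[r] -= S[r, col] * S[col]             # type zero: clear the column
    return S[:, n:]

A = np.array([[2, 2, -2], [1, 2, 0], [2, 4, 1]])
print(invert_by_reduction(A))   # [[ 1. -5. 2.], [-0.5 3. -1.], [ 0. -2. 1.]]
print(np.linalg.inv(A))         # agrees with the hand computation above
```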


Exercises:

Problem 1.12 Prove the associative law.

Problem 1.13 Prove the distributive law.

Problem 1.14 Find the right inverse for each of the following matrices and prove that your answer is correct.

[ 1 2 ]
[ 2 5 ]

Problem 1.15 Find the right inverse for each of the following matrices and prove that your answer is correct.

[ 1 2 2 ]     [ 1 2  2 ]     [  2  3 -4 ]
[ 0 1 2 ]     [ 1 2 -1 ]     [ -1 -2  9 ]
[ 2 5 7 ]     [ 2 5  1 ]     [ -2  3 -4 ]

Problem 1.16 The following matrices do not have right inverses. Attempt to find the inverse and see how the process breaks down.

Problem 1.17 Find the right inverse of the following matrix. Try to draw a conclusion from your result.


Exactly Which Matrices Have Inverses?


For this discussion we consider only square matrices, that is, n-by-n matrices. Some square matrices have inverses and others do not. Our goal is to get a better grasp of the differences between these two types of matrices. We begin with an example in which we attempt to find an inverse and fail.

Example 19: We want to find the inverse of the matrix

[ 1  2  1 ]
[ 2 -3 -1 ].
[ 3 -1  0 ]

That is, we wish to solve the matrix equation

[ 1  2  1 ][ a b c ]   [ 1 0 0 ]
[ 2 -3 -1 ][ d e f ] = [ 0 1 0 ].
[ 3 -1  0 ][ g h i ]   [ 0 0 1 ]

Left-multiply by:                  Result:

[  1 0 0 ]                         [ 1  2  1 ][ a b c ]   [  1 0 0 ]
[ -2 1 0 ]                         [ 0 -7 -3 ][ d e f ] = [ -2 1 0 ]
[  0 0 1 ]                         [ 3 -1  0 ][ g h i ]   [  0 0 1 ]

[  1 0 0 ]                         [ 1  2  1 ][ a b c ]   [  1 0 0 ]
[  0 1 0 ]                         [ 0 -7 -3 ][ d e f ] = [ -2 1 0 ]
[ -3 0 1 ]                         [ 0 -7 -3 ][ g h i ]   [ -3 0 1 ]

[ 1  0 0 ]                         [ 1  2  1 ][ a b c ]   [  1  0 0 ]
[ 0  1 0 ]                         [ 0 -7 -3 ][ d e f ] = [ -2  1 0 ]
[ 0 -1 1 ]                         [ 0  0  0 ][ g h i ]   [ -1 -1 1 ]

We have just produced a row of zeros on the left side of the equation while the corresponding row on the right side has non-zero entries. By the definition of multiplication, the bottom row of the product on the left side is a linear combination using all zero coefficients, and so is a row of zeros. Therefore, the final equation has no solution. Further, a solution to any of these equations is a solution to them all. It follows that none of them has a solution. So

[ 1  2  1 ]
[ 2 -3 -1 ]
[ 3 -1  0 ]

does not have an inverse.


It seems that if we are attempting to solve a matrix system and in the process produce a coefficient matrix with a row of zeros, then the matrix equation does not have a solution. Unless, of course, the matrix on the right side has a corresponding row of zeros. We need to develop some theory to support this idea. We first introduce the idea of semi-reduced form.

Semi-Reduced Form

Example 20:

[ 1 1 2 1 ]      [ 1 1 2 1 ]      [ 1 1 2 1 ]
[ 0 1 1 1 ]      [ 0 1 1 1 ]      [ 0 1 1 1 ]
[ 0 0 1 1 ]      [ 0 0 1 1 ]      [ 0 0 1 1 ]
[ 0 0 1 1 ]      [ 0 0 0 2 ]      [ 0 0 0 0 ]

Not semi-reduced      Not semi-reduced      Semi-reduced

Definition: Suppose A is an n-by-n matrix. A is said to be in semi-reduced form if each of the following is true.

1. The first non-zero entry of any row is a 1; this 1 is called the leading 1 of the row.

2. A column containing the leading one of a row has all zeros beneath this leading one.

3. Any zero beneath a leading one has all zeros to its left.

4. A row with a leading one does not have any row of all zeros above it.

Example 21: Notice the matrix

[ 1  2  1 ]
[ 2 -3 -1 ]
[ 3 -1  0 ]

from the above example is not in semi-reduced form. By a sequence of left-multiplications by elementary matrices we change it into a matrix in semi-reduced form.


[ 1  0   0 ][  1 0 0 ][  1 0 0 ][ 1  2  1 ]   [ 1  2  1  ]
[ 0 -1/7 0 ][  0 1 0 ][ -2 1 0 ][ 2 -3 -1 ] = [ 0  1 3/7 ]
[ 0  0   1 ][ -3 0 1 ][  0 0 1 ][ 3 -1  0 ]   [ 0 -7 -3  ]

[ 1 0 0 ][ 1  2  1  ]   [ 1 2  1  ]
[ 0 1 0 ][ 0  1 3/7 ] = [ 0 1 3/7 ]
[ 0 7 1 ][ 0 -7 -3  ]   [ 0 0  0  ]

The next theorem simply formalizes the fact that this process is repeatable for all n-by-m matrices.

Theorem 1.4(Every Matrix can be Semi-reduced): For every matrix A, there is a finite sequence of elementary matrices E1, E2, ..., Ek such that the product Ek ... E2E1A is in semi-reduced form.

Proof: Let A be a matrix. Choose the first column containing a non-zero element. If there is none, then we have a zero matrix which is semi-reduced. We can move this non-zero element to the first row by multiplication by a type two matrix. This element is now the leading non-zero element of the first row. We can change this leading non-zero entry to a one by multiplying by a type one elementary matrix. Any non-zero element below this leading one, and by below we mean in the same column, can be changed to zero by multiplication by a type zero elementary matrix. We now have a matrix with a leading one in the first row. Beneath this leading one we have all zeros. Further, to the left of any one of these zeros we have only zeros.

We now identify the next column with a non-zero entry other than in the first row. By choosing elementary matrices that refrain from involving the top row, we can cause the second row to have a leading non-zero entry of one, with all zeros beneath, and zeros to the left of any of these zeros. We simply continue in this fashion until we arrive at a semi-reduced form.

Reduced Form

We now extend the idea of semi-reduced form to fully reduced or simply reduced form.
Example 22:

[ 1 0 1 1 ]      [ 1 0 0 1 ]      [ 1 0 0 0 ]
[ 0 1 0 1 ]      [ 0 1 0 1 ]      [ 0 1 0 0 ]
[ 0 0 1 0 ]      [ 0 0 0 1 ]      [ 0 0 1 0 ]
[ 0 0 0 0 ]      [ 0 0 0 0 ]      [ 0 0 0 0 ]

not reduced      not reduced      reduced

Definition: A square matrix is in reduced form if it is in semi-reduced form and each leading one has all zeros in the other positions in its column.

Theorem 1.5(Every Matrix can be Reduced): For every square matrix A, there is a finite sequence of elementary matrices E1, E2, ..., Ek such that the product Ek ... E2E1A is in reduced form.

Proof: We can assume that the matrix A has been placed in semi-reduced form. We begin by selecting the last row with a leading one. This one has all zeros to its left as well as all zeros below it. If there are non-zeros above it, each can be changed to a zero by left-multiplying by the proper elementary matrix. Since all values in the row to the left of the leading one are zeros, employing this row with a type 0 elementary matrix does not change any columns to the left of the column containing the current leading one. Choose the next row with a leading one and continue until the proper form is achieved.
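Theorems 1.4 and 1.5 together describe an algorithm, and the dichotomy of the next theorem can be observed by running it. A sketch, where reduced_form is a hypothetical name of ours:

```python
import numpy as np

def reduced_form(A, tol=1e-12):
    """Left-multiply by elementary operations until A is in reduced form."""
    R = A.astype(float).copy()
    n = R.shape[0]
    row = 0
    for col in range(n):
        pivot = row + np.argmax(np.abs(R[row:, col]))   # find a non-zero entry
        if abs(R[pivot, col]) < tol:
            continue                                    # column already clear
        R[[row, pivot]] = R[[pivot, row]]               # type two: move it up
        R[row] /= R[row, col]                           # type one: make a leading 1
        for r in range(n):
            if r != row:
                R[r] -= R[r, col] * R[row]              # type zero: clear the column
        row += 1
    return R

R = reduced_form(np.array([[1, 2, 1], [2, -3, -1], [3, -1, 0]]))
print(R)   # the last row is all zeros: no inverse, as Example 19 found
```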

We can now divide square matrices in reduced form into two types: either they are the identity or else they have at least one row of zeros. We distinguish this fact as the following theorem.


Theorem 1.6(Characterizing Invertible Matrices): If A is a square matrix and Ek ... E2E1A is in reduced form, then exactly one of the following is true.

1) Ek ... E2E1A is the identity, in which case A has an inverse.

2) Ek ... E2E1A has a row of zeros, in which case A does not have a right inverse.

Proof: If Ek ... E2E1A = I then by successively left multiplying both sides of the equation by the inverses of the elementary matrices we see that

E1^-1 E2^-1 ... Ek^-1 Ek ... E2 E1 A = E1^-1 E2^-1 ... Ek^-1 I.

The left side simplifies to A. The right side simplifies to E1^-1 E2^-1 ... Ek^-1:

A = E1^-1 E2^-1 ... Ek^-1.

It follows that Ek ... E2E1 is both a left and right inverse of A.

In the case where Ek ... E2E1A has a row of zeros we can argue by contradiction that A cannot have a right inverse. Suppose AB = I and Ek ... E2E1A has a row of zeros. Then

Ek ... E2E1AB = Ek ... E2E1I = Ek ... E2E1.

It follows that Ek ... E2E1 has a row of zeros and therefore does not have an inverse. This is of course a contradiction, since Ek ... E2E1 is a product of elementary matrices and so does have an inverse.

Here is the situation. If A is an n by n matrix, A can be reduced to R. If R = I then A is a product of elementary matrices. In this case A has an inverse. If R has a row of zeros, then A cannot have a right inverse and therefore has no inverse.

Corollary: If A does not have an inverse then A = ER where E is a product of elementary matrices and R is reduced and has a row of zeros.

We now know that to say that A has an inverse simply means A can be written as a product of elementary matrices. We can now quickly clean up some remaining questions concerning inverses.


Theorem 1.7: A square matrix has an inverse if and only if it can be factored into elementary matrices.

Theorem 1.8: If A has an inverse and AB = AC then B = C.

Proof: Assume that A has an inverse and AB = AC. Multiplying both sides of the equation by A^-1 we get A^-1(AB) = A^-1(AC), that is, (A^-1A)B = (A^-1A)C. Therefore B = C.

Theorem 1.9: If AB = I then BA = I.

Proof: Case 1: Suppose A^-1 exists. Then AA^-1 = I. So AB = AA^-1 and by the previous theorem B = A^-1. Therefore BA = A^-1A = I.

Case 2: A does not have an inverse. By the corollary A = ER where E is a product of elementary matrices and R has a row of zeros. So

AB = I => ERB = I => RB = E^-1.

But RB has a row of zeros, which contradicts E^-1 having an inverse.

Theorem 1.10: If A and B have inverses then AB has an inverse and (AB)^-1 = B^-1A^-1.

Burden of proof: To algebraically intuitive people what needs to be done to prove this theorem is obvious. To others it is a puzzle. We know that if a matrix C has an inverse then it is unique. For if CX = CY then X = Y. (AB)^-1 represents the unique inverse of AB. To prove that B^-1A^-1 is that matrix we try it out. Hence we start with (AB)(B^-1A^-1) and see if we can get I.

Proof: Assume A and B have inverses denoted by A^-1 and B^-1 respectively. Then AA^-1 = I and BB^-1 = I. We wish to show that (AB)^-1 = B^-1A^-1.

(AB)(B^-1A^-1) = A(B(B^-1A^-1)) = A((BB^-1)A^-1) = A(IA^-1) = AA^-1 = I


Example 23: Does the matrix

[ 1 3 ]
[ 2 8 ]

have an inverse? To answer this question we find its reduced form.

[ 1 -3 ][ 1  0  ][  1 0 ][ 1 3 ]   [ 1 0 ]
[ 0  1 ][ 0 1/2 ][ -2 1 ][ 2 8 ] = [ 0 1 ]

The reduced form is the identity and therefore the matrix has an inverse. In fact we see that

[ 1 3 ]   [ 1 0 ][ 1 0 ][ 1 3 ]
[ 2 8 ] = [ 2 1 ][ 0 2 ][ 0 1 ]

and

[ 1 3 ]^-1   [ 1 -3 ][ 1  0  ][  1 0 ]   [  4 -3/2 ]
[ 2 8 ]    = [ 0  1 ][ 0 1/2 ][ -2 1 ] = [ -1  1/2 ].

The matrices that have inverses are the ones which can be written as a product of elementary matrices. Further, if a matrix has an inverse, then it can be factored into elementary matrices. What could be simpler? If every matrix could be factored into elementary matrices, we would have the "fundamental theorem of linear algebra". If we need a fundamental theorem of linear algebra, we have to settle for the following.
2 8] 0 lLo 2] l2 1 j - -1 112 The matrices that have inverses are the ones which can be written as a product of elementary matrices. Further, if a matrix has an inverse, then it can be factored into elementary matrices. What could be simpler? If every matrix could be factored into elementary matrices, we would have the "fundamental theorem of linear algebra". If we need a fundamental theorem of linear algebra, we have to settle for the following.

Theorem 1.11: Almost every matrix can be written as a product of elementary matrices.

Using the Inverse to Solve a Matrix Equation

Now that we are familiar with the idea of inverting a matrix, we use inverses to solve some matrix equations. The process is analogous to solving (2/3)x = 3/5 by multiplying both sides of the equation by 3/2, the inverse of 2/3.


Recall that we determined the inverse of

[ 1 3 ]         [  4 -3/2 ]
[ 2 8 ]  to be  [ -1  1/2 ].

Once the inverse is known we can solve an equation of the form

[ 1 3 ][ a ]   [ 9 ]
[ 2 8 ][ b ] = [ 6 ]

by left-multiplying both sides by the inverse, that is

[  4 -3/2 ][ 1 3 ][ a ]   [  4 -3/2 ][ 9 ]
[ -1  1/2 ][ 2 8 ][ b ] = [ -1  1/2 ][ 6 ],

which yields

[ 1 0 ][ a ]   [ 27 ]
[ 0 1 ][ b ] = [ -6 ].
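In code the same solution is a single left multiplication by the inverse. A minimal sketch using the inverse found in Example 23:

```python
import numpy as np

A = np.array([[1., 3.], [2., 8.]])
A_inv = np.array([[4., -1.5], [-1., 0.5]])

b = np.array([9., 6.])
print(A_inv @ b)              # [27. -6.] -- left multiply by the inverse
print(np.linalg.solve(A, b))  # same answer without forming the inverse
```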


Exercises:


Problem 1.18 Write each of the following matrices as a product of elementary matrices.

[ 1  3 ]      [ 2 3 ]      [ 3 6 ]
[ 2 -1 ]      [ 4 1 ]      [ 2 1 ]

Problem 1.19 Determine the inverse of each of the above matrices by using the elementary factors.

Problem 1.20 Use the results of the above problem to solve the following equations.

[ 1  3 ][ a ]   [ 2 ]
[ 2 -1 ][ b ] = [ 3 ]

[ 3 6 ][ a ]   [ 5 ]
[ 2 1 ][ b ] = [ 3 ]

[ 2 3 ][ a ]   [ 5 ]
[ 4 1 ][ b ] = [ 3 ]

Problem 1.21 Write each of the following matrices as a product of elementary matrices. Use this factorization to find the inverse.

[ 2 0 1 ]      [ 1 1 1 ]
[ 1 1 1 ]      [ 1 1 0 ]
[ 1 0 0 ]      [ 1 0 0 ]

Problem 1.22 Write a short analysis of the following statement. If A is a 2-by-2 matrix with an inverse then each row of

I = [ 1 0 ]
    [ 0 1 ]

can be written as a linear combination of the rows of A.


Solving Matrix Equations III

We need to consider a new class of matrix equations in which the coefficient matrix does not have an inverse.

Example 24: Consider the equation

[ 1 2 1 ][ x ]   [ 3 ]
[ 2 3 3 ][ y ] = [ 5 ].
[ 2 3 3 ][ z ]   [ 5 ]

In our usual fashion, we reduce the coefficient matrix.

[ 1 2 1 | 3 ]      [ 1  2 1 |  3 ]      [ 1 2  1 | 3 ]      [ 1 0  3 | 1 ]
[ 2 3 3 | 5 ]  =>  [ 0 -1 1 | -1 ]  =>  [ 0 1 -1 | 1 ]  =>  [ 0 1 -1 | 1 ]
[ 2 3 3 | 5 ]      [ 0 -1 1 | -1 ]      [ 0 0  0 | 0 ]      [ 0 0  0 | 0 ]

Returning to equation form we write

[ 1 0  3 ][ x ]   [ 1 ]
[ 0 1 -1 ][ y ] = [ 1 ].
[ 0 0  0 ][ z ]   [ 0 ]

Multiplying out the left side we see

[ x + 3z ]   [ 1 ]
[ y - z  ] = [ 1 ].
[    0   ]   [ 0 ]

This says that x = -3z + 1 and y = z + 1. This means that we have many solutions to the system. Each choice of a value for z produces a valid solution. For example, letting z = 2, we get a solution vector [-5 3 2].
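The one-parameter family of solutions can be verified directly. A minimal sketch using the coefficient matrix and target vector of Example 24:

```python
import numpy as np

A = np.array([[1, 2, 1], [2, 3, 3], [2, 3, 3]])
b = np.array([3, 5, 5])

for z in [0, 1, 2]:                       # each choice of z yields a solution
    v = np.array([-3 * z + 1, z + 1, z])  # x = -3z + 1, y = z + 1
    print(v, A @ v)                       # A @ v equals b every time
```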

A second example, similar to the first, produces a different result.

Example 25: Retaining the coefficient matrix but changing the target vector gives the equation

[ 1 2 1 ][ x ]   [ 3 ]
[ 2 3 3 ][ y ] = [ 5 ].
[ 2 3 3 ][ z ]   [ 6 ]

Once more reducing:

[ 1 2 1 | 3 ]      [ 1 0  3 | 1 ]
[ 2 3 3 | 5 ]  =>  [ 0 1 -1 | 1 ]
[ 2 3 3 | 6 ]      [ 0 0  0 | 1 ]

Once more returning to equation form results in

[ x + 3z ]   [ 1 ]
[ y - z  ] = [ 1 ].
[    0   ]   [ 1 ]

Any solution to the original equation is a solution to the final equation, and that clearly has no solution.


The lesson here is that when a square coefficient matrix does not have an inverse, the equation may have many solutions or no solutions. The row of zeros that indicates no inverse in the reduced coefficient matrix is the key. The corresponding entry in the target vector position must be zero if there are to be solutions.

In the event that the target vector is the zero vector, we always have at least one solution and sometimes many solutions.

Example 26: Solve

[ 1 2 3 ][ x ]   [ 0 ]
[ 1 1 1 ][ y ] = [ 0 ].
[ 3 4 5 ][ z ]   [ 0 ]

Reducing

[ 1 2 3 0 ]
[ 1 1 1 0 ]
[ 3 4 5 0 ]

does not affect the target vector in any way. In fact, one does not usually include the zero column.

[ 1 2 3 ]      [ 1  2  3 ]      [ 1  2  3 ]      [ 1 0 -1 ]
[ 1 1 1 ]  =>  [ 0 -1 -2 ]  =>  [ 0  1  2 ]  =>  [ 0 1  2 ]
[ 3 4 5 ]      [ 0 -2 -4 ]      [ 0 -2 -4 ]      [ 0 0  0 ]

Therefore each solution to

[ 1 0 -1 ][ x ]   [ 0 ]
[ 0 1  2 ][ y ] = [ 0 ]
[ 0 0  0 ][ z ]   [ 0 ]

is a solution to our original problem. That is, x - z = 0 and y + 2z = 0, or x = z and y = -2z, produces a solution for each choice of z.


Exercises:

Problem 1.24: Find all solutions to

[ 1 2 2 ][ x ]   [ 1 ]
[ 2 3 5 ][ y ] = [ 0 ].
[ 3 5 7 ][ z ]   [ 1 ]

Problem 1.25: Find all solutions to

[ 1 3 1 ][ x ]   [ 1 ]
[ 2 1 1 ][ y ] = [ 2 ].
[ 3 4 2 ][ z ]   [ 3 ]

Problem 1.26: Find all solutions to

[ 1 -1 2 ][ x ]   [ 1 ]
[ 2  3 1 ][ y ] = [ 1 ].
[ 3  2 3 ][ z ]   [ 1 ]

Problem 1.27: Find all solutions to

[ 2 -1 -3 ][ x ]   [  1 ]
[ 1  2  3 ][ y ] = [  3 ].
[ 3  1  0 ][ z ]   [ -2 ]

Problem 1.28: Find all solutions to

[ 1 1 1 ][ x ]   [ 1 ]
[ 2 3 1 ][ y ] = [ 2 ].
[ 1 2 0 ][ z ]   [ 1 ]


A Different Perspective on Multiplication

We have introduced multiplication as linear combinations of row vectors of the right operand. The following example leads us to a different understanding.

Example 27: The product

[ 1 2 ][ 5 ]     [ 1 ]     [ 2 ]   [ 19 ]
[ 3 4 ][ 7 ] = 5 [ 3 ] + 7 [ 4 ] = [ 43 ]

can be thought of as a linear combination of the columns of the left operand. This duality is a very valuable tool.

We look at another example.

Example 28:

[ a b ][ x y ]   [ ax + bz  ay + bw ]
[ c d ][ z w ] = [ cx + dz  cy + dw ]

Reading by rows,

[ ax + bz  ay + bw ]   [ a[x y] + b[z w] ]
[ cx + dz  cy + dw ] = [ c[x y] + d[z w] ],

while reading the first column by itself,

[ ax + bz ]   [ xa + zb ]     [ a ]     [ b ]
[ cx + dz ] = [ xc + zd ] = x [ c ] + z [ d ].

This says that the first column of the product

[ a b ][ x y ]
[ c d ][ z w ]

can be regarded as a linear combination of the columns of the left operand with coefficients from the first column of the right operand. The second column of the product is entirely analogous. This gives us a natural way to answer the linear combination question.


We revisit Example 4, where we used the graphing technique to express one vector as a combination of two other vectors, in the next example.

Example 29: Find the coefficients x and y so that [3 4] = x[3 -1] + y[-1 2]. No graphs this time.

  [  3 ]     [ -1 ]   [ 3 ]       [ 3x ]   [ -y ]   [ 3 ]       [  3 -1 ][ x ]   [ 3 ]
x [ -1 ] + y [  2 ] = [ 4 ]  =>   [ -x ] + [ 2y ] = [ 4 ]  =>   [ -1  2 ][ y ] = [ 4 ]

We are back to a matrix equation.

What we see from this example is that a linear combination question, that is, a question of the type "can [1 2 4] be written as a linear combination of [1 3 -1], [2 3 -1] and [2 -1 2]?", can be turned into a matrix equation. We have the techniques to solve matrix equations if there is a solution.
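Once a linear combination question has been turned into a matrix equation, a library solver answers it immediately. A minimal sketch for the equation of Example 29:

```python
import numpy as np

# Columns of the coefficient matrix are the vectors being combined.
M = np.array([[ 3., -1.],
              [-1.,  2.]])
target = np.array([3., 4.])

coeffs = np.linalg.solve(M, target)
print(coeffs)   # [2. 3.] -- the exact values estimated graphically in Example 4
```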
