
Linear Algebra, Math 19620, Section 42

Spring Quarter 2024

Day 2: RREF, Matrix Algebra and Linear Systems (Sections 1.2 and 1.3)

Outline

• Goal: Solving systems with Gaussian elimination: identify rref matrices and reduce matrices to rref
• Goal: Describe and explain the relationship between the number of equations, number of variables,
number of solutions and rref
• Goal: Computational skills: Add/subtract matrices, multiply matrices by scalars, multiply matrices
by vectors, and write a system as a matrix equation

Last time we solved three systems of linear equations (Examples 2,3,4 from Day 1) and then saw how we
could keep track of the steps in the process using matrices. Once we arrived at a point where we could easily
substitute to get the solution, we did so. We now see how we could continue our elimination process to get the
final solution(s). While this isn’t necessary for small systems, it provides an algorithm that solves any system,
and it also helps to develop some insight into the relationships between the number of equations/number of
variables/number of solutions.
Example 2 revisited: We started with
\[
\begin{cases}
x + y + z = 2\\
x - y + z = 2\\
2x - 3y + 4z = 6,
\end{cases}
\]

wrote its augmented matrix
\[
\left[\begin{array}{ccc|c}
1 & 1 & 1 & 2\\
1 & -1 & 1 & 2\\
2 & -3 & 4 & 6
\end{array}\right],
\]
reduced it to
\[
\left[\begin{array}{ccc|c}
1 & 1 & 1 & 2\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 1
\end{array}\right],
\]
then read off z = 1, y = 0, and substituted in x + y + z = 2 to obtain x = 1. We now show how one can
finish the process with row operations.
First we eliminate z from R1 by doing the row operation −R3 + R1. This leads us to
\[
\left[\begin{array}{ccc|c}
1 & 1 & 0 & 1\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 1
\end{array}\right].
\]
Next we eliminate y from R1 by doing the row operation −R2 + R1. This leads to
\[
\left[\begin{array}{ccc|c}
1 & 0 & 0 & 1\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 1
\end{array}\right].
\]
We can now read off the solution x = 1, y = 0, z = 1.

Example 3 revisited: We started with
\[
\begin{cases}
2x - y + z = 1\\
x + y - 3z = 4\\
-3y + 7z = -7
\end{cases}
\]

and found a reduced augmented matrix
\[
\left[\begin{array}{ccc|c}
1 & 1 & -3 & 4\\
0 & 1 & -\frac{7}{3} & \frac{7}{3}\\
0 & 0 & 0 & 0
\end{array}\right].
\]
There is no way to eliminate both z and y from R1. We treat z as the free variable and then do −R2 + R1
to eliminate y from R1, giving
\[
\left[\begin{array}{ccc|c}
1 & 0 & -\frac{2}{3} & \frac{5}{3}\\
0 & 1 & -\frac{7}{3} & \frac{7}{3}\\
0 & 0 & 0 & 0
\end{array}\right].
\]
This augmented matrix corresponds to the system
\[
\begin{cases}
x - \frac{2}{3}z = \frac{5}{3}\\
y - \frac{7}{3}z = \frac{7}{3}.
\end{cases}
\]
We now let z = t and see that $y = \frac{7}{3} + \frac{7}{3}t$ and $x = \frac{5}{3} + \frac{2}{3}t$. So there are infinitely many solutions and the solution
set is
\[
\left\{\left(\tfrac{5}{3} + \tfrac{2}{3}t,\ \tfrac{7}{3} + \tfrac{7}{3}t,\ t\right) \,\middle|\, t \in \mathbb{R}\right\}.
\]
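A quick way to gain confidence in a parametric solution like this one is to substitute it back into the original system for several values of the parameter. A minimal Python sketch (the helper names are ours, and the system's signs are the ones recovered in the worked solution above), using exact fractions so no round-off can hide an error:

```python
from fractions import Fraction

def solution(t):
    """The one-parameter family read off above: x = 5/3 + (2/3)t, y = 7/3 + (7/3)t, z = t."""
    t = Fraction(t)
    return (Fraction(5, 3) + Fraction(2, 3) * t,
            Fraction(7, 3) + Fraction(7, 3) * t,
            t)

def residuals(x, y, z):
    """Left-hand side minus right-hand side for each equation of the original system."""
    return (2*x - y + z - 1,        # 2x - y + z = 1
            x + y - 3*z - 4,        # x + y - 3z = 4
            -3*y + 7*z + 7)         # -3y + 7z = -7

# every value of the parameter t gives an exact solution
for t in range(-3, 4):
    print(t, residuals(*solution(t)))   # all residuals are (0, 0, 0)
```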

Definition. A matrix is in reduced row echelon form (rref) if


• all rows consisting entirely of 0’s are at the bottom
• all other rows start with 0’s followed by a 1 (note that the first row can - and often does - have no 0’s
before the leading 1)
• the leading one (aka the pivot) in each row is to the right of the leading one in the row above it
• all entries above or below a leading 1 in a column are 0.

Remark. The point of rref is that once we have an augmented matrix in rref we can write down the solutions
very quickly.
Example 1. Which of the following matrices are in reduced row echelon form? If a matrix is not in rref,
what row operation(s) will get it into rref?
a)
\[
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{bmatrix};
\]
this is in rref.
b)
\[
\begin{bmatrix}
1 & 0 & 0 & 2\\
0 & 1 & -1 & 3\\
0 & 0 & 1 & 0
\end{bmatrix};
\]
this is not in rref because the −1 in row 2, column 3 should be a 0. To get it into rref we do R3 + R2.

c)
\[
\begin{bmatrix}
1 & 0 & 0 & 2\\
0 & 1 & 0 & 3\\
0 & 0 & 0 & 0\\
0 & 0 & 1 & 2
\end{bmatrix};
\]
this is not in rref because the row of 0's is not at the bottom. To get it into rref we switch R3 and R4.
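The four bullet points of the definition can be checked mechanically. A small Python sketch (the function name `is_rref` is ours), run on the three matrices of Example 1:

```python
def is_rref(M):
    """Check the four rref conditions on a matrix given as a list of rows."""
    m = len(M)
    last_pivot = -1          # column of the previous leading 1
    zero_row_seen = False
    for i, row in enumerate(M):
        nz = [j for j, x in enumerate(row) if x != 0]
        if not nz:
            zero_row_seen = True            # zero rows must all be at the bottom
            continue
        if zero_row_seen:
            return False                    # nonzero row below a zero row
        j = nz[0]
        if row[j] != 1 or j <= last_pivot:
            return False                    # leading entry must be 1, strictly to the right
        if any(M[k][j] != 0 for k in range(m) if k != i):
            return False                    # pivot column must be 0 elsewhere
        last_pivot = j
    return True

a = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
b = [[1, 0, 0, 2], [0, 1, -1, 3], [0, 0, 1, 0]]
c = [[1, 0, 0, 2], [0, 1, 0, 3], [0, 0, 0, 0], [0, 0, 1, 2]]
print(is_rref(a), is_rref(b), is_rref(c))   # True False False
```

As in the example, (b) fails because of the −1 above the pivot in column 3 and (c) fails because a zero row sits above a nonzero row.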

Example 2. Write down the augmented matrix for each of the following systems. Then reduce them to rref
and hence solve the system.
a)
\[
\begin{cases}
x + 2y = 4\\
2x + y + 3z = 11\\
3x + 3y + z = 10
\end{cases}
\]
Solution:
\[
\left[\begin{array}{ccc|c}
1 & 2 & 0 & 4\\
2 & 1 & 3 & 11\\
3 & 3 & 1 & 10
\end{array}\right]
\xrightarrow[-3R_1 + R_3]{-2R_1 + R_2}
\left[\begin{array}{ccc|c}
1 & 2 & 0 & 4\\
0 & -3 & 3 & 3\\
0 & -3 & 1 & -2
\end{array}\right]
\xrightarrow[-R_2 + R_3]{-\frac{1}{3}R_2}
\left[\begin{array}{ccc|c}
1 & 2 & 0 & 4\\
0 & 1 & -1 & -1\\
0 & 0 & -2 & -5
\end{array}\right]
\]
\[
\xrightarrow{-\frac{1}{2}R_3}
\left[\begin{array}{ccc|c}
1 & 2 & 0 & 4\\
0 & 1 & -1 & -1\\
0 & 0 & 1 & \frac{5}{2}
\end{array}\right]
\xrightarrow{R_3 + R_2}
\left[\begin{array}{ccc|c}
1 & 2 & 0 & 4\\
0 & 1 & 0 & \frac{3}{2}\\
0 & 0 & 1 & \frac{5}{2}
\end{array}\right]
\xrightarrow{-2R_2 + R_1}
\left[\begin{array}{ccc|c}
1 & 0 & 0 & 1\\
0 & 1 & 0 & \frac{3}{2}\\
0 & 0 & 1 & \frac{5}{2}
\end{array}\right].
\]
Hence $x = 1$, $y = \frac{3}{2}$, $z = \frac{5}{2}$.
b)
\[
\begin{cases}
2x - y + z = 1\\
3x + 2y - 4z = 4\\
-6x + 3y - 3z = 2
\end{cases}
\]
Solution:
\[
\left[\begin{array}{ccc|c}
2 & -1 & 1 & 1\\
3 & 2 & -4 & 4\\
-6 & 3 & -3 & 2
\end{array}\right]
\xrightarrow[3R_1 + R_3]{-\frac{3}{2}R_1 + R_2}
\left[\begin{array}{ccc|c}
2 & -1 & 1 & 1\\
0 & \frac{7}{2} & -\frac{11}{2} & \frac{5}{2}\\
0 & 0 & 0 & 5
\end{array}\right]
\xrightarrow{\frac{1}{2}R_1,\ \frac{2}{7}R_2,\ \frac{1}{5}R_3}
\left[\begin{array}{ccc|c}
1 & -\frac{1}{2} & \frac{1}{2} & \frac{1}{2}\\
0 & 1 & -\frac{11}{7} & \frac{5}{7}\\
0 & 0 & 0 & 1
\end{array}\right]
\]
\[
\xrightarrow[-\frac{1}{2}R_3 + R_1]{-\frac{5}{7}R_3 + R_2}
\left[\begin{array}{ccc|c}
1 & -\frac{1}{2} & \frac{1}{2} & 0\\
0 & 1 & -\frac{11}{7} & 0\\
0 & 0 & 0 & 1
\end{array}\right]
\xrightarrow{\frac{1}{2}R_2 + R_1}
\left[\begin{array}{ccc|c}
1 & 0 & -\frac{2}{7} & 0\\
0 & 1 & -\frac{11}{7} & 0\\
0 & 0 & 0 & 1
\end{array}\right].
\]
There is no solution since the last row corresponds to 0x + 0y + 0z = 1, which is not possible.
c)
\[
\begin{cases}
x + y + 2z = 3\\
4x - 4y - 2z = -1\\
2x + 2y - z = 6.
\end{cases}
\]
Solution:
\[
\left[\begin{array}{ccc|c}
1 & 1 & 2 & 3\\
4 & -4 & -2 & -1\\
2 & 2 & -1 & 6
\end{array}\right]
\xrightarrow[-2R_1 + R_3]{-4R_1 + R_2}
\left[\begin{array}{ccc|c}
1 & 1 & 2 & 3\\
0 & -8 & -10 & -13\\
0 & 0 & -5 & 0
\end{array}\right]
\xrightarrow{-\frac{1}{5}R_3}
\left[\begin{array}{ccc|c}
1 & 1 & 2 & 3\\
0 & -8 & -10 & -13\\
0 & 0 & 1 & 0
\end{array}\right]
\]
\[
\xrightarrow{10R_3 + R_2}
\left[\begin{array}{ccc|c}
1 & 1 & 2 & 3\\
0 & -8 & 0 & -13\\
0 & 0 & 1 & 0
\end{array}\right]
\xrightarrow{-\frac{1}{8}R_2}
\left[\begin{array}{ccc|c}
1 & 1 & 2 & 3\\
0 & 1 & 0 & \frac{13}{8}\\
0 & 0 & 1 & 0
\end{array}\right]
\xrightarrow{-2R_3 + R_1}
\left[\begin{array}{ccc|c}
1 & 1 & 0 & 3\\
0 & 1 & 0 & \frac{13}{8}\\
0 & 0 & 1 & 0
\end{array}\right]
\]
\[
\xrightarrow{-R_2 + R_1}
\left[\begin{array}{ccc|c}
1 & 0 & 0 & \frac{11}{8}\\
0 & 1 & 0 & \frac{13}{8}\\
0 & 0 & 1 & 0
\end{array}\right].
\]
Hence $x = \frac{11}{8}$, $y = \frac{13}{8}$, $z = 0$.
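The elimination steps used in these solutions can be automated. Here is a minimal Python sketch of Gaussian elimination to rref (the `rref` helper is ours), using exact `Fraction` arithmetic so answers come out as 3/2 and 5/2 rather than rounded decimals; it is checked on system (a), with the signs as read above:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (list of rows) to reduced row echelon form with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0                                   # next pivot row
    for col in range(cols):
        pr = next((i for i in range(r, rows) if M[i][col] != 0), None)
        if pr is None:
            continue                        # no pivot in this column
        M[r], M[pr] = M[pr], M[r]           # swap a usable row up
        piv = M[r][col]
        M[r] = [x / piv for x in M[r]]      # scale the leading entry to 1
        for i in range(rows):               # clear the rest of the column
            if i != r and M[i][col] != 0:
                f = M[i][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

# Augmented matrix of system (a): x + 2y = 4, 2x + y + 3z = 11, 3x + 3y + z = 10
aug = [[1, 2, 0, 4],
       [2, 1, 3, 11],
       [3, 3, 1, 10]]
R = rref(aug)
print(R)   # last column gives x = 1, y = 3/2, z = 5/2
```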
Example 3. The following matrices are each the rref of an augmented matrix associated with a linear
system. How many solutions does each system have and what are they?
a)
\[
\left[\begin{array}{ccc|c}
1 & 2 & 3 & 2\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{array}\right];
\]
infinitely many solutions: $\{(x, y, z) = (2 - 2s - 3t,\ s,\ t) \mid s, t \in \mathbb{R}\}$.
b)
\[
\left[\begin{array}{ccc|c}
0 & 1 & 0 & 2\\
0 & 0 & 1 & 3
\end{array}\right];
\]
infinitely many solutions: $\{(x, y, z) = (t, 2, 3) \mid t \in \mathbb{R}\}$.
c)
\[
\left[\begin{array}{cc|c}
1 & 0 & 3\\
0 & 1 & 2\\
0 & 0 & 0
\end{array}\right];
\]
one solution: $x = 3$, $y = 2$.
d)
\[
\left[\begin{array}{cccc|c}
1 & 5 & 0 & 2 & 2\\
0 & 0 & 1 & 3 & 1\\
0 & 0 & 0 & 0 & 1
\end{array}\right];
\]
no solution.
e)
\[
\left[\begin{array}{ccccc|c}
1 & 0 & 3 & 0 & -1 & 2\\
0 & 1 & -2 & 0 & 4 & 0\\
0 & 0 & 0 & 1 & 2 & 3
\end{array}\right];
\]
infinitely many solutions: $\{(x, y, z, v, w) = (2 - 3s + t,\ 2s - 4t,\ s,\ 3 - 2t,\ t) \mid s, t \in \mathbb{R}\}$.

Definition. The rank of a matrix is the number of leading 1's in its reduced row echelon form.

Remark. Find the rank of each matrix in the previous example. Notice that if A is an m × n matrix, then
rank A ≤ m and rank A ≤ n.
This is because there cannot be more than one leading 1 in each row, and hence rank A ≤ m. Also, since
the leading 1 in each row must be to the right of the leading 1 in the row above it, we cannot have more
leading 1's than the number of columns, so rank A ≤ n.
(The ranks in the previous example are a) 1, b) 2, c) 2, d) 2, e) 3.)
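These ranks can be checked by counting pivots. A Python sketch (the `rank` helper is ours), applied to the coefficient part of each of the five augmented matrices above, i.e. with the last (augmented) column dropped, since the ranks listed refer to the coefficient matrix of each system; the matrix entries carry the signs as we read them:

```python
from fractions import Fraction

def rank(M):
    """Rank = number of leading 1's in rref; forward elimination suffices to count pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for col in range(cols):
        pr = next((i for i in range(r, rows) if M[i][col] != 0), None)
        if pr is None:
            continue                        # no pivot in this column
        M[r], M[pr] = M[pr], M[r]
        for i in range(r + 1, rows):
            if M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Augmented matrices a)-e) from the example above
aug = [
    [[1, 2, 3, 2], [0, 0, 0, 0], [0, 0, 0, 0]],
    [[0, 1, 0, 2], [0, 0, 1, 3]],
    [[1, 0, 3], [0, 1, 2], [0, 0, 0]],
    [[1, 5, 0, 2, 2], [0, 0, 1, 3, 1], [0, 0, 0, 0, 1]],
    [[1, 0, 3, 0, -1, 2], [0, 1, -2, 0, 4, 0], [0, 0, 0, 1, 2, 3]],
]
ranks = [rank([row[:-1] for row in M]) for M in aug]
print(ranks)   # [1, 2, 2, 2, 3]
```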

Theorem 4. Let a linear system have m equations and n unknowns (variables). Let r be the rank of the
coefficient matrix, A.
a) If r = m then there must be at least one solution.
b) If r = n then there cannot be infinitely many solutions (so there must be either 1 or 0).
c) If the system has exactly one solution then n ≤ m.
d) If r = m = n then there is exactly one solution.
Proof. First we note that the coefficient matrix will be an m ⇥ n matrix and

number of leading 1’s + number of free variables = number of columns,

so
r + (number of free variables) = n.

a) If r = m then, since r ≤ n, we must have m ≤ n. Then the number of free variables is n − m ≥ 0. Since
there is a leading 1 in every row of the coefficient matrix, we cannot have a row of the form $[\,0\ 0\ \cdots\ 0 \mid 1\,]$
in the augmented matrix, and so the situation that leads to no solution does not arise.
(More specifically, we have two cases. The first is n = m. In this case we have a square matrix with a
leading 1 in every row and so the reduced row echelon form of A is
\[
\begin{bmatrix}
1 & 0 & \cdots & 0\\
0 & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 1
\end{bmatrix}.
\]
It follows that there must be exactly one solution since the reduced row echelon form of the augmented
matrix will be
\[
\left[\begin{array}{cccc|c}
1 & 0 & \cdots & 0 & c_1\\
0 & 1 & \cdots & 0 & c_2\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 1 & c_m
\end{array}\right],
\]
for some $c_1, \ldots, c_m$, which has solution $x_1 = c_1,\ x_2 = c_2,\ \ldots,\ x_m = c_m$.
The second case is n > m. In this case there is at least one free variable and so there are infinitely
many solutions.)
b) If r = n, then there are no free variables, and hence there cannot be infinitely many solutions.
c) If the system has exactly one solution then there can be no free variables and so r = n. Since also
r ≤ m, we have n ≤ m.
d) This follows from a) and b). Suppose that r = m = n. Then, by a) there must be at least one solution,
and by b) there cannot be infinitely many solutions. Hence there must be exactly one solution.
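The bookkeeping identity r + (number of free variables) = n from the proof can be checked numerically. A small Python sketch (the `pivot_count` helper is ours), applied to the coefficient matrices of systems (a), (b), (c) from Example 2, with the signs as read above. Note that the count of non-pivot columns for system (b) is 1 even though that system turns out to be inconsistent; the identity only counts columns of the coefficient matrix:

```python
from fractions import Fraction

def pivot_count(M):
    """Number of pivots (leading 1's in rref) of a matrix, via forward elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for col in range(cols):
        pr = next((i for i in range(r, rows) if M[i][col] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        for i in range(r + 1, rows):
            if M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Coefficient matrices of systems (a), (b), (c) from Example 2
systems = {
    "a": [[1, 2, 0], [2, 1, 3], [3, 3, 1]],
    "b": [[2, -1, 1], [3, 2, -4], [-6, 3, -3]],
    "c": [[1, 1, 2], [4, -4, -2], [2, 2, -1]],
}
for name, A in systems.items():
    r, n = pivot_count(A), len(A[0])
    print(name, "rank:", r, "free variables:", n - r)
```

Systems (a) and (c) have r = m = n = 3, so by part d) of the theorem each has exactly one solution, which matches the worked answers.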

Definition.(Matrix sums and scalar


 mutiplication)
a11 a12 b11 b12
Let A = and B = . Then we define the sum of two matrices by
a21 a31 b21 b22

a11 + b11 a12 + b12
A+B = .
a21 + b21 a22 + b22

Thus, to add two matrices we add corresponding components. We can extend this definition to any size
matrix. We write $A = [a_{ij}]$ to denote the matrix whose entry in row i, column j, is $a_{ij}$. So
\[
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}.
\]

Then, if $B = [b_{ij}]$, we define the sum A + B by
\[
A + B = [a_{ij} + b_{ij}] = \begin{bmatrix}
a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n}\\
a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn}
\end{bmatrix}.
\]

Notice that we can only define A + B when A and B are of the same size.
If we now let k be a constant then we define scalar multiplication of A by k by
\[
kA = [k a_{ij}] = \begin{bmatrix}
ka_{11} & ka_{12} & \cdots & ka_{1n}\\
ka_{21} & ka_{22} & \cdots & ka_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
ka_{m1} & ka_{m2} & \cdots & ka_{mn}
\end{bmatrix}.
\]
 
Example 5. Let
\[
A = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix}, \qquad
B = \begin{bmatrix} -1 & 0\\ 2 & 3 \end{bmatrix}.
\]
Find A + B, A + 2B, 3A − B.
Solution:
\[
A + B = \begin{bmatrix} 1 + (-1) & 2 + 0\\ 3 + 2 & 4 + 3 \end{bmatrix} = \begin{bmatrix} 0 & 2\\ 5 & 7 \end{bmatrix};
\qquad
A + 2B = \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} + \begin{bmatrix} -2 & 0\\ 4 & 6 \end{bmatrix} = \begin{bmatrix} -1 & 2\\ 7 & 10 \end{bmatrix};
\]
\[
3A - B = 3\begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} - \begin{bmatrix} -1 & 0\\ 2 & 3 \end{bmatrix}
= \begin{bmatrix} 3 & 6\\ 9 & 12 \end{bmatrix} - \begin{bmatrix} -1 & 0\\ 2 & 3 \end{bmatrix}
= \begin{bmatrix} 4 & 6\\ 7 & 9 \end{bmatrix}.
\]
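The two definitions translate directly into code. A minimal Python sketch (the helper names `mat_add` and `scal_mul` are ours) reproducing the three computations of Example 5:

```python
def mat_add(A, B):
    """Entrywise sum; defined only when A and B have the same size."""
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B))
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal_mul(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * x for x in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[-1, 0], [2, 3]]
print(mat_add(A, B))                             # [[0, 2], [5, 7]]
print(mat_add(A, scal_mul(2, B)))                # [[-1, 2], [7, 10]]
print(mat_add(scal_mul(3, A), scal_mul(-1, B)))  # [[4, 6], [7, 9]]
```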

Definition. (Matrix-vector multiplication)


A vector is a matrix with only one column. The set of all column vectors with n components (entries) is
called $\mathbb{R}^n$. Thus a vector $\vec{x}$ in $\mathbb{R}^n$ looks like
\[
\vec{x} = \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}.
\]
Now let A be an m × n matrix with columns $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$, and let
\[
\vec{x} = \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}.
\]
Then we define the product of A and $\vec{x}$ by
\[
A\vec{x} = x_1\vec{v}_1 + x_2\vec{v}_2 + \cdots + x_n\vec{v}_n.
\]
(An expression such as the right-hand side of this equation is called a linear combination of the vectors
$\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$.)

Example.
\[
\begin{bmatrix} 1 & 0\\ 3 & -2\\ -1 & 4 \end{bmatrix}
\begin{bmatrix} 2\\ 1 \end{bmatrix}
= 2\begin{bmatrix} 1\\ 3\\ -1 \end{bmatrix} + 1\begin{bmatrix} 0\\ -2\\ 4 \end{bmatrix}
= \begin{bmatrix} 2\\ 4\\ 2 \end{bmatrix};
\]
\[
\begin{bmatrix} 2 & 1 & 0\\ 3 & -1 & 4 \end{bmatrix}
\begin{bmatrix} 2\\ -1\\ 1 \end{bmatrix}
= 2\begin{bmatrix} 2\\ 3 \end{bmatrix} - \begin{bmatrix} 1\\ -1 \end{bmatrix} + \begin{bmatrix} 0\\ 4 \end{bmatrix}
= \begin{bmatrix} 3\\ 11 \end{bmatrix}.
\]
Question. If A is a 3 × 6 matrix, what size vectors can it multiply and what is the size of the resulting
vector?
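The column-combination definition can be sketched in a few lines of Python (the helper `mat_vec` is ours; the matrix entries carry the signs as we read them from the worked examples). The size check in the code also bears on the Question above: an m × n matrix multiplies vectors with n entries and returns a vector with m entries.

```python
def mat_vec(A, x):
    """Compute A x as the linear combination x1*col1 + ... + xn*coln of the columns of A."""
    m, n = len(A), len(A[0])
    assert len(x) == n            # an m x n matrix multiplies vectors with n entries...
    result = [0] * m              # ...and returns a vector with m entries
    for j in range(n):
        for i in range(m):
            result[i] += x[j] * A[i][j]   # add x_j times column j
    return result

# The two worked examples above:
print(mat_vec([[1, 0], [3, -2], [-1, 4]], [2, 1]))     # [2, 4, 2]
print(mat_vec([[2, 1, 0], [3, -1, 4]], [2, -1, 1]))    # [3, 11]
```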
Remark 6. The book defines matrix multiplication rather differently (using the rows of the matrix and the
dot product) and then proves that the two definitions are equivalent (in Theorem 1.3.8). We will see this
later. Our definition is chosen as it is the most natural one when it comes to extending the definition to
matrix-matrix multiplication, as we will see next week.
