First Exam Coverage
ES 21 Notes
MATRICES

Definition
A matrix is a rectangular array of numbers or functions arranged in rows and columns, usually designated by a capital letter and enclosed by brackets, parentheses, or double bars. A matrix may be denoted by:

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [  :    :         :  ]
        [ am1  am2  ...  amn ]

Unless otherwise stated, we assume that all our matrices are composed of real numbers.

The horizontal groups of elements are called the rows of the matrix. The ith row of A is

    [ ai1  ai2  ...  ain ],   1 <= i <= m

The vertical groups of elements are called the columns of the matrix. The jth column of A is

    [ a1j ]
    [ a2j ]
    [  :  ]
    [ amj ],   1 <= j <= n

The size of a matrix is denoted by m x n (read "m by n"), where m is the number of rows and n is the number of columns.

We refer to aij as the entry or the element in the ith row and jth column of the matrix. We may often write a given matrix as A = [aij].

SOME SPECIAL TYPES OF MATRICES

A.  Row Matrix or Row Vector is a matrix consisting of only one row.
    Example: B = [ b1  b2  ...  bj  ...  bn ]

B.  Column Matrix or Column Vector is a matrix consisting of only one column.
    Example:
        C = [ c1 ]
            [ c2 ]
            [  : ]
            [ ci ]
            [  : ]
            [ cm ]

C.  Square Matrix is a matrix in which the number of rows equals the number of columns.

    The Order of a Square Matrix is the number of rows (or columns) of the matrix. Thus, we can refer to a 3 x 3 matrix simply as a square matrix of order 3.

    The Principal Diagonal or Main Diagonal of a square matrix consists of the elements a11, a22, a33, ..., ann.

    The Trace of a square matrix is the sum of the elements on the main diagonal of the matrix.

D.  Upper Triangular Matrix is a square matrix all elements of which below the principal diagonal are zero (aij = 0 for i > j).
    Example:
        U = [ u11  u12  u13 ]
            [  0   u22  u23 ]
            [  0    0   u33 ]

E.  Lower Triangular Matrix is a square matrix all elements of which above the principal diagonal are zero (aij = 0 for i < j).
    Example:
        L = [ l11   0    0  ]
            [ l21  l22   0  ]
            [ l31  l32  l33 ]

F.  Diagonal Matrix is a square matrix that is upper triangular and lower triangular at the same time. The only nonzero elements are the elements on the principal diagonal (aij = 0 for i ≠ j).
    Example:
        D = [ d11   0    0  ]
            [  0   d22   0  ]
            [  0    0   d33 ]

G.  Scalar Matrix is a diagonal matrix whose diagonal elements are equal.
    Example:
        S = [ s  0  0 ]
            [ 0  s  0 ]
            [ 0  0  s ]

H.  Identity Matrix, represented by In, is a diagonal matrix where all the elements along the main diagonal are equal to 1 (unity).
    Example:
        I3 = [ 1  0  0 ]
             [ 0  1  0 ]
             [ 0  0  1 ]

I.  Null Matrix, represented by O, is a matrix in which all the elements are zero.

        O = [ 0  0  ...  0 ]
            [ 0  0  ...  0 ]
            [ :  :       : ]
            [ 0  0  ...  0 ]

J.  Symmetric Matrix is a square matrix whose elements satisfy aij = aji.
    Example:
        S = [ 1  2  4 ]
            [ 2  2  5 ]
            [ 4  5  3 ]
ES 21 Notes
Page 1 of 12
K.  Skew Symmetric Matrix is a square matrix whose elements satisfy aij = -aji (so the elements on the main diagonal are zero).
    Example:
        T = [  0   2   4 ]
            [ -2   0   5 ]
            [ -4  -5   0 ]
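The definitions above translate directly into code. The following Python sketch is ours, not part of the original notes; the helper names and the upper-triangular matrix U are illustrative. It checks the matrices S and T given as examples:

```python
# Plain lists-of-lists stand in for matrices; entries of S and T follow the notes.

def trace(A):
    """Sum of the principal-diagonal elements of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

def is_upper_triangular(A):
    """aij = 0 for i > j (entries below the principal diagonal are zero)."""
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(len(A)) if i > j)

def is_symmetric(A):
    """aij = aji for all i, j."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(A):
    """aij = -aji for all i, j (forces zeros on the diagonal)."""
    n = len(A)
    return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

S = [[1, 2, 4], [2, 2, 5], [4, 5, 3]]      # symmetric example from the notes
T = [[0, 2, 4], [-2, 0, 5], [-4, -5, 0]]   # skew-symmetric example from the notes
U = [[1, 7, 3], [0, 2, 5], [0, 0, 4]]      # upper triangular (made-up entries)
```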
EQUALITY OF MATRICES

Two matrices A = [aij] and B = [bij] are equal if and only if the following conditions are satisfied:

a)  They have an equal number of rows.
b)  They have an equal number of columns.
c)  All elements of A agree with the corresponding elements of B (aij = bij for all i and j).

Example: The matrices

    A = [ x  y  1 ]        B = [ 2  1  1 ]
        [ a  2  b ]   and      [ 3  2  4 ]
        [ 3  z  c ]            [ 3  2  1 ]

are equal if and only if x = 2, y = 1, z = 2, a = 3, b = 4 and c = 1.

ELEMENTARY OPERATIONS ON MATRICES

A.  SCALAR MULTIPLICATION

If A = [aij] is an m x n matrix and k is a real number (or a scalar), then the scalar multiple of A by k is the m x n matrix C = [cij] where cij = k aij for all i and j. In other words, the matrix C is obtained by multiplying each element of A by the scalar k.

Examples:

    3 [ 2  0  -1  3 ] = [  6  0  -3   9 ]
      [ 4  3   2  5 ]   [ 12  9   6  15 ]

    4 [ 2  1  3 ] = [  8   4  12 ]
      [ 5  2  1 ]   [ 20   8   4 ]
      [ 2  4  7 ]   [  8  16  28 ]

B.  MATRIX ADDITION

If A = [aij] and B = [bij] are matrices of the same size m x n, then the sum A + B is another m x n matrix C = [cij] where cij = aij + bij for i = 1 to m and j = 1 to n. Matrix addition is accomplished by adding algebraically the corresponding elements of A and B.

Example:

    A = [  1  -2  4 ]        B = [ 3   2  -2 ]
        [ -3   2  1 ]            [ 4  -1   1 ]

    A + B = [ 1 + 3     -2 + 2     4 + (-2) ] = [ 4  0  2 ]
            [ -3 + 4     2 + (-1)  1 + 1    ]   [ 1  1  2 ]

Note: We can only add or subtract matrices with the same number of rows and columns.

C.  MATRIX SUBTRACTION

If A and B are m x n matrices, the difference between A and B, denoted A - B, is obtained from the addition of A and (-1)B:

    A - B = A + (-1)B

Matrix subtraction is accomplished by subtracting from the elements of the first matrix the corresponding elements of the second matrix.

Example:

    A = [ 3  -4   2 ]        B = [ 2  -1   4 ]
        [ 5   7  -4 ]            [ 3   8  -3 ]

    A - B = [ 3 - 2    -4 - (-1)    2 - 4     ] = [ 1  -3  -2 ]
            [ 5 - 3     7 - 8      -4 - (-3)  ]   [ 2  -1  -1 ]

D.  MATRIX MULTIPLICATION

If A = [aij] is an m x n matrix and B = [bij] is an n x p matrix, then the product of A and B, AB = C = [cij], is an m x p matrix where

    cij = sum over k = 1 to n of (aik bkj),   for i = 1 to m and j = 1 to p.

The formula tells us that in order to get the element cij of the matrix C, take the elements of the ith row of A (the premultiplier) and the elements of the jth column of B (the postmultiplier). Afterwards, obtain the sum of the products of corresponding elements of the two vectors.

Note: The product is defined only if the number of columns of the first factor A (premultiplier) is equal to the number of rows of the second factor B (postmultiplier). If this is satisfied, we say that the matrices are conformable in the order AB.

Example:

    A = [ 1  2 ]        B = [ -2  4 ]
        [ 3  4 ]            [ -3  1 ]

    AB = [ 1(-2) + 2(-3)    1(4) + 2(1) ] = [  -8   6 ]
         [ 3(-2) + 4(-3)    3(4) + 4(1) ]   [ -18  16 ]

We obtain

    BA = [ -2(1) + 4(3)    -2(2) + 4(4) ] = [ 10  12 ]
         [ -3(1) + 1(3)    -3(2) + 1(4) ]   [  0  -2 ]

Note: Although AB and BA are both defined, it is not necessary that AB = BA.

Example: Let

    A = [ 2  1 ]        B = [ 6  2  3 ]        C = [ 1  3 ]
        [ 5  3 ]            [ 1  4  5 ]            [ 2  1 ]
        [ 4  6 ]

1.  A x B is a 3 x 3 matrix, while B x A is a 2 x 2 matrix.
2.  A x C is a 3 x 2 matrix, but C x A is not defined.
3.  B x C is not defined, but C x B is defined (2 x 3).

MATRIX EXPONENTIATION

For a square matrix A, the power An is defined as A · A · A · ... · A (n factors).

E.  MATRIX TRANSPOSITION

If A = [aij] is an m x n matrix, then the transpose of A, denoted by AT, is the n x m matrix defined by (AT)ij = aji.
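The elementary operations above can be sketched in a few lines of Python. This is our own sketch (function names are ours); the matrices reproduce the worked multiplication example, showing that AB and BA differ:

```python
def add(A, B):
    """Elementwise sum; assumes A and B have the same size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(k, A):
    """Scalar multiple: every element multiplied by k."""
    return [[k * a for a in row] for row in A]

def matmul(A, B):
    """cij = sum over k of aik * bkj; defined only when cols(A) == rows(B)."""
    assert len(A[0]) == len(B), "matrices not conformable in the order AB"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[-2, 4], [-3, 1]]
AB = matmul(A, B)
BA = matmul(B, A)
```

Note that AB comes out as [[-8, 6], [-18, 16]] while BA is [[10, 12], [0, -2]]: matrix multiplication is not commutative.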
The transpose of A is obtained by interchanging the rows and the columns of A.

Example: If

    A = [  1   2  -2 ]
        [ -1   3   4 ]
        [  2   1   3 ]
        [  3  -2   5 ]

then

    AT = [  1  -1   2   3 ]
         [  2   3   1  -2 ]
         [ -2   4   3   5 ]

Note: The transpose of a symmetric matrix is equal to the matrix itself.

PROPERTIES AND THEOREMS ON MATRIX OPERATIONS:

MATRIX ADDITION
    A + O = A                          Existence of Additive Identity
    A + (-A) = O                       Existence of Additive Inverse
    A + B = B + A                      Commutative Property
    (A + B) + C = A + (B + C)          Associative Property

SCALAR MULTIPLICATION
    0 A = O
    1 A = A
    kl(A) = k(lA) = l(kA)
    (k + l) A = kA + lA
    k(A + B) = kA + kB

MATRIX MULTIPLICATION
    A(BC) = (AB)C                      Associative Property
    A(B + C) = AB + AC                 Left Distributive Property
    (A + B)C = AC + BC                 Right Distributive Property
    AI = IA = A                        Existence of Multiplicative Identity
    kl(AB) = (kA)(lB) = (lA)(kB)

Note: In general, matrix multiplication is NOT commutative; that is, AB ≠ BA.

MATRIX TRANSPOSITION
    (AT)T = A
    (A + B)T = AT + BT
    (kA)T = k AT
    (AB)T = BT AT

In general, (A1 A2 A3 ... An-1 An)T = AnT An-1T ... A3T A2T A1T.

DETERMINANTS

Another very important number associated with a square matrix A is the determinant of A, which we will now define. This unique number associated with a matrix A is useful in the solution of linear equations.

Permutation:
Let S = {1, 2, 3, ..., n} be the set of integers from 1 to n, arranged in increasing order. A rearrangement a1a2a3...an of the elements of S is called a permutation of S.

By the Fundamental Principle of Counting, we can put any one of the n elements of S in the first position, any one of the remaining (n-1) elements in the second position, any one of the remaining (n-2) elements in the third position, and so on until the nth position. Thus there are n(n-1)(n-2)...(3)(2)(1) = n! permutations of S. We refer to the set of all permutations of S as Sn.

Examples:
If S = {1, 2, 3}, then S3 = {123, 132, 213, 231, 312, 321}.
If S = {1, 2, 3, 4}, then there are 4! = 24 elements of S4: 1234, 1243, 1324, 1342, 1423, 1432, 2134, 2143, 2314, 2341, 2413, 2431, 3124, 3142, 3214, 3241, 3412, 3421, 4123, 4132, 4213, 4231, 4312, 4321.

Odd and Even Permutations
A permutation a1a2a3...an is said to have an inversion if a larger number precedes a smaller one. If the total number of inversions in the permutation is even, then we say that the permutation is even; otherwise it is odd.

Examples: ODD and EVEN Permutations
S1 has only one permutation, namely 1, which is even since there are no inversions.
In the permutation 35241: 3 precedes 2 and 1; 5 precedes 2, 4 and 1; 2 precedes 1; and 4 precedes 1. There is a total of 7 inversions, thus the permutation is odd.
S3 has 3! = 6 permutations: 123, 231 and 312 are even, while 132, 213 and 321 are odd.
For any Sn, where n > 1, Sn contains n!/2 even permutations and n!/2 odd permutations.

DEFINITION:
Let A = [aij] be a square matrix of order n. The determinant of A, denoted by det(A) or |A|, is defined by

    det(A) = |A| = sum of (±) a1j1 a2j2 ... anjn

where the summation is over all permutations j1 j2 ... jn of the set S = {1, 2, ..., n}. The sign is taken as (+) if the permutation is even and (-) if the permutation is odd.

Examples:

If A = [a11] is a 1 x 1 matrix, then det(A) = |A| = a11.

If A = [ a11  a12 ]
       [ a21  a22 ]

then to get |A| we write down the terms a1- a2- and replace the dashes with all the possible permutations of S = {1, 2}, namely 12 (even) and 21 (odd). Thus |A| = a11 a22 - a12 a21.

If A = [ a11  a12  a13 ]
       [ a21  a22  a23 ]
       [ a31  a32  a33 ]

then to compute |A| we write down the six terms a1- a2- a3-, a1- a2- a3-, a1- a2- a3-, a1- a2- a3-, a1- a2- a3-, a1- a2- a3-. Replace the dashes with all the elements of S3, affix a (+) or (-) sign, and get the sum of the six terms.

If A is a square matrix of order n, there will be n! terms in the determinant of A, with n!/2 positive terms and n!/2 negative terms.

Example:

    | 2  -1   3 |
    | 3  -2   1 | = (2)(-2)(-2) + (-1)(1)(0) + (3)(3)(1)
    | 0   1  -2 |       - (0)(-2)(3) - (1)(1)(2) - (-2)(3)(-1)
                 = 8 + 0 + 9 - 0 - 2 - 6
                 = 9
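The inversion count that decides the sign of each term can be sketched as follows. This Python fragment is ours (the function names are illustrative); it reproduces the 35241 example and the split of S3 into even and odd permutations:

```python
from itertools import permutations

def inversions(p):
    """Number of pairs (i, j) with i < j and p[i] > p[j], i.e. a larger
    number preceding a smaller one."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j])

def is_even(p):
    """A permutation is even when its total inversion count is even."""
    return inversions(p) % 2 == 0

# The permutation 35241 has 7 inversions, hence it is odd.
n7 = inversions((3, 5, 2, 4, 1))

# S3 splits into 3!/2 = 3 even permutations (123, 231, 312) and 3 odd ones.
evens = [p for p in permutations((1, 2, 3)) if is_even(p)]
```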
METHODS IN GETTING THE DETERMINANT

A.  DIAGONAL METHOD

This method is applicable only to matrices with size less than or equal to 3 x 3.

1.  2 x 2 Matrices

    | a11  a12 |
    | a21  a22 | = a11 a22 - a21 a12

    Example:
        | 1  4 |
        | 2  3 | = (1)(3) - (2)(4) = -5

2.  3 x 3 Matrices
    Copy the first two columns to the right of the matrix, then add the products along the three left-to-right diagonals and subtract the products along the three right-to-left diagonals:

    | a11  a12  a13 |  a11  a12
    | a21  a22  a23 |  a21  a22
    | a31  a32  a33 |  a31  a32

    |A| = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
          - a31 a22 a13 - a32 a23 a11 - a33 a21 a12

B.  METHOD OF COFACTORS

Complementary Minor, det(Mij) or |Mij|
The complementary minor, or simply minor, of an element aij of the matrix A is the determinant of the submatrix Mij obtained after eliminating the ith row and the jth column of A.

Algebraic Complement or Cofactor, Aij
The algebraic complement or cofactor of an element aij of the matrix A is the signed minor obtained from the formula

    Aij = (-1)^(i+j) |Mij|

DETERMINANT USING THE COFACTOR METHOD
The determinant of a square matrix may be obtained using expansion about a row or expansion about a column. The following formulas may be used in getting the determinant:

    det(A) = ai1 Ai1 + ai2 Ai2 + ... + ain Ain = sum over k = 1 to n of (aik Aik)
    (expansion about the ith row)

    det(A) = a1j A1j + a2j A2j + ... + anj Anj = sum over k = 1 to n of (akj Akj)
    (expansion about the jth column)

Note: We may choose any row or any column in getting the determinant of a given matrix.

Example: To evaluate

    | 2  -1  4  1 |
    | 3   2  4  0 |
    | 0   3  1  0 |
    | 2   0  0  0 |

it is best to expand about the fourth row because it has the most zeros. The optimal course of action is to expand about the row or column that has the largest number of zeros, because in that case the cofactors Aij of those aij which are zero need not be evaluated, since aij Aij = (0) Aij = 0.

THEOREMS ON DETERMINANTS

1.  The determinant of a square matrix A = [aij] is equal to the determinant of its transpose AT (i.e. |A| = |AT|).

    Example:
        | 2x   y |                  | 2x  3x |
        | 3x  6y | = 12xy - 3xy  =  |  y  6y | = 12xy - 3xy = 9xy

2.  If a square matrix A = [aij] contains a row (or a column) whose elements are all equal to zero, then |A| = 0.

    Example:
        | x  2  0 |
        | x  y  z | = 0        (the third row is all zeros)
        | 0  0  0 |

3.  If a row (or column) of a square matrix A = [aij] is multiplied by a constant k, then the determinant of the resulting matrix B = [bij] is equal to k times the determinant of A (i.e. |B| = k|A|).

4.  As a corollary to the third theorem, if A has a row (or column) with a common factor k, then this k may be factored out of the determinant of A, where a simplified matrix B is formed (i.e. |A| = k|B|).

    Example:
        | 2xy^2  3x |       | 2y^2   3 |         | 2y^2  3 |
        | 4x^2y  6y | = x * | 4x^2y 6y | = 2xy * | 2x^2  3 | = 2xy(6y^2 - 6x^2)

5.  If two rows (or columns) of a square matrix A = [aij] are interchanged to form a new matrix B = [bij], then |B| = -|A|.

    Example:
        | a  b |                 | c  d |
        | c  d | = ad - bc ;     | a  b | = cb - da = -(ad - bc)

6.  If two rows (or columns) of a matrix A = [aij] are identical, then |A| = 0.

    Example:
        | x  2  0 |
        | x  y  z | = 0        (rows 1 and 3 are identical)
        | x  2  0 |

7.  As a corollary to the sixth theorem, if the elements in a row (or column) of a square matrix A = [aij] are multiples of the corresponding elements of another row (or column) of A, then |A| = 0.

    Example:
        | 2  3  2 |
        | 4  6  4 | = 0        (row 2 is twice row 1, so the determinant
        | a  b  c |             vanishes whatever the third row is)

8.  If B = [bij] is a square matrix of order n that is derived from another square matrix A = [aij] of order n by adding correspondingly the elements of a row (or column) to a multiple of the elements of another row (or column), then |B| = |A|.

    Example:
        | a11 + k a31   a12 + k a32   a13 + k a33 |     | a11  a12  a13 |
        | a21           a22           a23         |  =  | a21  a22  a23 |
        | a31           a32           a33         |     | a31  a32  a33 |

9.  If the elements of one row (or column) of a square matrix A = [aij] of order n may be expressed as binomials, such that two square matrices B = [bij] and C = [cij], both of order n, are formed after splitting the binomial elements, then |A| = |B| + |C|.

    Example:
        | a11        a12        a13       |     | a11  a12  a13 |     | a11  a12  a13 |
        | b21 + c21  b22 + c22  b23 + c23 |  =  | b21  b22  b23 |  +  | c21  c22  c23 |
        | a31        a32        a33       |     | a31  a32  a33 |     | a31  a32  a33 |
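The cofactor expansion can be sketched recursively. This Python fragment is ours (names are illustrative); for simplicity it always expands about the first row, which the notes allow since any row or column may be chosen:

```python
def minor(A, i, j):
    """Submatrix Mij: delete row i and column j (0-based indices here)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion about the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # aij * Aij with Aij = (-1)^(i+j) |Mij|, summed along row 0
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

# The 3x3 determinant worked out in the notes evaluates to 9.
A = [[2, -1, 3], [3, -2, 1], [0, 1, -2]]
```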
10. The determinant of the product of two square matrices A = [aij] and B = [bij] of the same order n is equal to the product of the determinant of A and the determinant of B (i.e. |AB| = |A||B|).

    Example:
        A = [ 1  2 ],  |A| = -2        B = [ 2  0 ],  |B| = 6
            [ 3  4 ]                       [ 1  3 ]

        AB = [  4   6 ],   |AB| = 48 - 60 = -12 = (-2)(6) = |A||B|
             [ 10  12 ]

11. The determinant of a triangular matrix is equal to the product of the elements on its principal diagonal.

    Example:
        | x  2  0 |
        | 0  x  1 | = (x)(x)(4) = 4x^2
        | 0  0  4 |

12. The determinant of an identity matrix is equal to 1.

ADJOINT OF A MATRIX

The adjoint of a square matrix A = [aij] of order n is the square matrix with the same order n, denoted by adj(A) = [Aji], where Aij is the cofactor of the element aij of matrix A. The adjoint of a matrix is the transpose of the matrix of cofactors of the elements of A.

Input: square matrix
Output: square matrix (with the same size as the original matrix)
Notation: adj A, adj B

Step 1: Get the cofactors of all the elements in the original matrix.
Recall: the cofactor of an element aij can be denoted as Aij and is defined by

    Aij = (-1)^(i+j) |Mij|

Step 2: Set up the adjoint matrix by taking the transpose of the matrix of cofactors:

    adj A = [Aij]T

Example:
    If A = [ a  b ]    then adj(A) = [  d  -b ]
           [ c  d ]                  [ -c   a ]

INVERSE OF A MATRIX

The inverse of a square matrix A = [aij] of order n is the matrix B = [bij] of the same order n such that AB = BA = In. We denote the inverse matrix of A by A^-1. Thus, we define the inverse of A as the matrix A^-1 such that

    A(A^-1) = (A^-1)A = In

Notation: A^-1, B^-1

If the inverse of a matrix exists, we say that the matrix is invertible or nonsingular. Otherwise, we say that the matrix is noninvertible or singular. Not all matrices have an inverse; however, if the inverse of a matrix exists, it is unique.

Matrix Inversion Using the Adjoint and the Determinant

Matrix inversion applies only to square matrices, and the inverse can be produced using the adjoint matrix and the determinant:

    A^-1 = adj A / |A|

The proof of this formula needs the knowledge of the following theorem: the sum of the products of the elements in one row (or column) and the cofactors of the elements of another row (or column) of a given square matrix is zero.

From the above formula for the inverse, it is highly suggested that the determinant be computed first. If it so happens that the matrix is singular (i.e., the determinant is zero), then the inverse of the matrix does not exist. Note that it is a waste of effort to still produce the adjoint if the matrix is singular. Therefore, it is advised that you first check for singularity.

Example 1: Set up the inverse of the given matrix.
Using the diagonal method to compute the determinant of the given matrix:

    |A| = 35 + 2 - 14 + 5 - 14 - 14 = 0

Since matrix A is singular, as evidenced by its zero determinant, it can thus be concluded that the inverse of A (or A^-1) does not exist.

Example 2: Set up the inverse of the given matrix.

    A = [ 1  -1  1 ]
        [ 1   2  3 ]
        [ 4  -2  3 ]

    |A| = 6 - 12 - 2 - 8 + 6 + 3 = -7

Since the determinant is not zero, matrix A is nonsingular. In this case, the inverse exists and there is a need to set up the adjoint.

Getting the cofactors of all the elements in the original matrix, using Aij = (-1)^(i+j) |Mij|:

    A11 = (-1)^2 |  2  3 | = 6 + 6 = 12        A12 = (-1)^3 | 1  3 | = -(3 - 12) = 9
                 | -2  3 |                                  | 4  3 |

    A13 = (-1)^4 | 1   2 | = -2 - 8 = -10      A21 = (-1)^3 | -1  1 | = -(-3 + 2) = 1
                 | 4  -2 |                                  | -2  3 |

    A22 = (-1)^4 | 1  1 | = 3 - 4 = -1         A23 = (-1)^5 | 1  -1 | = -(-2 + 4) = -2
                 | 4  3 |                                   | 4  -2 |
    A31 = (-1)^4 | -1  1 | = -3 - 2 = -5       A32 = (-1)^5 | 1  1 | = -(3 - 1) = -2
                 |  2  3 |                                  | 1  3 |

    A33 = (-1)^6 | 1  -1 | = 2 + 1 = 3
                 | 1   2 |

Consequently,

    [Aij] = [ 12   9  -10 ]    and thus    adj A = [Aij]T = [  12   1  -5 ]
            [  1  -1   -2 ]                                 [   9  -1  -2 ]
            [ -5  -2    3 ]                                 [ -10  -2   3 ]

so

    A^-1 = adj A / |A| = (1/-7) [  12   1  -5 ]
                                [   9  -1  -2 ]
                                [ -10  -2   3 ]

SOLUTION TO SYSTEMS OF LINEAR EQUATIONS

In general, we can think of a system of linear equations as a set of m equations that contains n unknowns. There are several forms in which a system of equations can be written.

We can have the equation form:

    a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
    a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
    a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3
      :
    am1 x1 + am2 x2 + am3 x3 + ... + amn xn = bm

where the aij are constant coefficients of the unknowns xj, and the bi are constants.

Or we can transform that to the matrix form:

    [ a11  a12  a13  ...  a1n ] [ x1 ]   [ b1 ]
    [ a21  a22  a23  ...  a2n ] [ x2 ]   [ b2 ]
    [ a31  a32  a33  ...  a3n ] [ x3 ] = [ b3 ]
    [  :    :    :         :  ] [  : ]   [  : ]
    [ am1  am2  am3  ...  amn ] [ xn ]   [ bm ]

Referring to the matrix form, we can actually rewrite the system of equations as a compact matrix operation:

    AX = B

where:
    A    Coefficient Matrix
    X    Column Matrix of Unknowns/Variables
    B    Column Matrix of Constants

SOLUTION TO A SYSTEM OF n LINEAR EQUATIONS WITH n UNKNOWNS

A.  USING THE INVERSE METHOD

The Inverse Method may be applied only to a system of linear equations in which the number of independent equations is equal to the number of unknowns. If the number of equations is equal to the number of unknowns, the equation AX = B will have a matrix of coefficients that is square.

If the matrix of coefficients A is nonsingular, the solution to the system is unique. On the other hand, if A is singular, the system has either infinitely many solutions or no solution at all.

Derivation of the solution for the xi's:

    AX = B
    A^-1 A X = A^-1 B
    (A^-1 A) X = A^-1 B
    I X = A^-1 B
    X = A^-1 B

Take note that the derivation assumes that A^-1 exists. If A^-1 does not exist, we cannot find the solution to the system AX = B by this method.

Example: Determine the values of x1, x2 and x3 in the following system of equations.

    x1 - x2 + x3 = 1
    x1 + 2x2 + 3x3 = 6
    4x1 - 2x2 + 3x3 = 5

Solution:
The above system of equations can be written in matrix form:

    [ 1  -1  1 ] [ x1 ]   [ 1 ]
    [ 1   2  3 ] [ x2 ] = [ 6 ]
    [ 4  -2  3 ] [ x3 ]   [ 5 ]

We can write this in matrix form AX = B and let X = A^-1 B, where:

    A = [ 1  -1  1 ]        B = [ 1 ]
        [ 1   2  3 ]            [ 6 ]
        [ 4  -2  3 ]            [ 5 ]

Getting A^-1 (set up in the preceding example):

    A^-1 = (1/-7) [  12   1  -5 ]
                  [   9  -1  -2 ]
                  [ -10  -2   3 ]

To get x1, x2 and x3, multiply A^-1 by B:

    X = [ x1 ] = (1/-7) [  12   1  -5 ] [ 1 ] = (1/-7) [ 12 + 6 - 25  ]
        [ x2 ]          [   9  -1  -2 ] [ 6 ]          [ 9 - 6 - 10   ]
        [ x3 ]          [ -10  -2   3 ] [ 5 ]          [ -10 - 12 + 15]

Performing the operation A^-1 B yields the solution matrix:

    X = [ x1 ]   [ 1 ]
        [ x2 ] = [ 1 ]
        [ x3 ]   [ 1 ]

Make it a habit to check whether all the computed values of the unknowns satisfy all the given equations. Checking is done by substituting the values x1 = 1, x2 = 1 and x3 = 1 into the original equations:

    Equation 1:  1(1) - 1(1) + 1(1) =? 1    Satisfied
    Equation 2:  1(1) + 2(1) + 3(1) =? 6    Satisfied
    Equation 3:  4(1) - 2(1) + 3(1) =? 5    Satisfied

Since all the equations are satisfied, (x1, x2, x3) = (1, 1, 1) is indeed the solution to the system.

SOLUTION TO A SYSTEM OF EQUATIONS USING CRAMER'S RULE

Recall that a system of n equations in n unknowns can be modeled as a matrix operation AX = B.
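The adjoint-based inverse method can be sketched end to end in Python. This fragment is ours (names are illustrative); exact rational arithmetic via the standard fractions module keeps the check clean. It reproduces the worked system, where det(A) = -7 and X = (1, 1, 1):

```python
from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjoint(A):
    """Transpose of the matrix of cofactors Aij = (-1)^(i+j) |Mij|."""
    n = len(A)
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
           for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

def inverse(A):
    """A^-1 = adj(A) / |A|; raises if A is singular."""
    d = det(A)
    if d == 0:
        raise ValueError("singular matrix: no inverse")
    return [[Fraction(e, d) for e in row] for row in adjoint(A)]

def matvec(A, b):
    return [sum(a * x for a, x in zip(row, b)) for row in A]

# Worked example from the notes: AX = B with solution X = (1, 1, 1).
A = [[1, -1, 1], [1, 2, 3], [4, -2, 3]]
B = [1, 6, 5]
X = matvec(inverse(A), B)
```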
    [ a11  a12  a13  ...  a1n ] [ x1 ]   [ b1 ]
    [ a21  a22  a23  ...  a2n ] [ x2 ]   [ b2 ]
    [ a31  a32  a33  ...  a3n ] [ x3 ] = [ b3 ]
    [  :    :    :         :  ] [  : ]   [  : ]
    [ an1  an2  an3  ...  ann ] [ xn ]   [ bn ]

The solution of the system of equations can be determined by using the formula:

    xi = |Ai| / |A|

where:
    A     coefficient matrix
    xi    ith variable
    B     right-hand-side constants
    Ai    matrix resulting from replacing the ith column of A by the column vector of constants B

Notice that regardless of the variable xi that is computed, the denominator of the above formula is fixed at |A|. Therefore, it is suggested that the determinant of the coefficient matrix be the first to be computed.

Example: Using Cramer's Rule, determine the values of x1, x2 and x3 that simultaneously satisfy the following system of equations.

    [ 1  2   1 ] [ x1 ]   [ 1 ]
    [ 3  3   1 ] [ x2 ] = [ 2 ]
    [ 1  2  -1 ] [ x3 ]   [ 3 ]

Solution:
Compute the determinant of A first:

    A = [ 1  2   1 ]
        [ 3  3   1 ]        |A| = -3 + 2 + 6 - 3 - 2 + 6 = 6
        [ 1  2  -1 ]

The right-hand-side matrix B is

    B = [ 1 ]
        [ 2 ]
        [ 3 ]

To set up the matrix A1, all you have to do is replace the first column of A by B. Doing what has just been described results in:

    A1 = [ 1  2   1 ]
         [ 2  3   1 ]       |A1| = -3 + 6 + 4 - 9 - 2 + 4 = 0
         [ 3  2  -1 ]

Now, let us compute the value of x1 by using the formula:

    x1 = |A1| / |A| = 0/6 = 0

Applying the same process to solve for x2 and x3:

    x2 = (1/6) | 1  1   1 |             x3 = (1/6) | 1  2  1 |
               | 3  2   1 | = 6/6 = 1              | 3  3  2 | = -6/6 = -1
               | 1  3  -1 |                        | 1  2  3 |

Thus (x1, x2, x3) = (0, 1, -1).

SOLUTION TO A SYSTEM OF LINEAR EQUATIONS USING LU FACTORIZATION

Direct LU Factorization:
In theory, any square matrix A may be factored into a product of lower and upper triangular matrices, A = L*U.

Let us take the case of a 4th order matrix:

    [ a11 a12 a13 a14 ]   [ l11  0   0   0  ]   [ 1  u12 u13 u14 ]
    [ a21 a22 a23 a24 ] = [ l21 l22  0   0  ] * [ 0   1  u23 u24 ]
    [ a31 a32 a33 a34 ]   [ l31 l32 l33  0  ]   [ 0   0   1  u34 ]
    [ a41 a42 a43 a44 ]   [ l41 l42 l43 l44 ]   [ 0   0   0   1  ]

Notice that the diagonal elements of the upper triangular matrix have been set to values of 1 for reasons of simplicity. (LU factorization is not unique.)

From matrix multiplication, we know that:

    a11 = l11(1) + 0(0) + 0(0) + 0(0)               or  l11 = a11
    a21 = l21(1) + l22(0) + 0(0) + 0(0)             or  l21 = a21
    a31 = l31(1) + l32(0) + l33(0) + 0(0)           or  l31 = a31
    a41 = l41(1) + l42(0) + l43(0) + l44(0)         or  l41 = a41

    a12 = l11(u12) + 0(1) + 0(0) + 0(0)             or  u12 = a12 / l11
    a13 = l11(u13) + 0(u23) + 0(1) + 0(0)           or  u13 = a13 / l11
    a14 = l11(u14) + 0(u24) + 0(u34) + 0(1)         or  u14 = a14 / l11

    a22 = l21(u12) + l22(1) + 0(0) + 0(0)           or  l22 = a22 - l21(u12)
    a32 = l31(u12) + l32(1) + l33(0) + 0(0)         or  l32 = a32 - l31(u12)
    a42 = l41(u12) + l42(1) + l43(0) + l44(0)       or  l42 = a42 - l41(u12)

    a23 = l21(u13) + l22(u23) + 0(1) + 0(0)         or  u23 = [a23 - l21(u13)] / l22
    a24 = l21(u14) + l22(u24) + 0(u34) + 0(1)       or  u24 = [a24 - l21(u14)] / l22

    a33 = l31(u13) + l32(u23) + l33(1) + 0(0)       or  l33 = a33 - l31(u13) - l32(u23)
    a43 = l41(u13) + l42(u23) + l43(1) + l44(0)     or  l43 = a43 - l41(u13) - l42(u23)

    a34 = l31(u14) + l32(u24) + l33(u34) + 0(1)     or  u34 = [a34 - l31(u14) - l32(u24)] / l33

    a44 = l41(u14) + l42(u24) + l43(u34) + l44(1)   or  l44 = a44 - l41(u14) - l42(u24) - l43(u34)

How do we get the solution to a system of equations using the LU Decomposition Method?

Recall: A system of equations can be written as a compact matrix operation AX = B. If we factor the coefficient matrix A as L*U and substitute into AX = B, we can generate the equation L(UX) = B. Momentarily define UX = Y, which suggests LY = B. From this transformation, we have actually decomposed AX = B into two systems of equations.
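Cramer's rule is short enough to sketch directly. This Python fragment is ours (names are illustrative); it uses the diagonal (Sarrus) method for the 3x3 determinants and reproduces the worked example, whose solution is (0, 1, -1):

```python
from fractions import Fraction

def det3(M):
    """3x3 determinant by the diagonal (Sarrus) method."""
    a, b, c = M
    return (a[0]*b[1]*c[2] + a[1]*b[2]*c[0] + a[2]*b[0]*c[1]
            - c[0]*b[1]*a[2] - c[1]*b[2]*a[0] - c[2]*b[0]*a[1])

def cramer(A, B):
    """xi = |Ai| / |A|, where Ai is A with its ith column replaced by B."""
    d = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:i] + [bi] + row[i + 1:] for row, bi in zip(A, B)]
        xs.append(Fraction(det3(Ai), d))
    return xs

# Worked example from the notes: |A| = 6.
A = [[1, 2, 1], [3, 3, 1], [1, 2, -1]]
B = [1, 2, 3]
```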
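The LU formulas just derived (unit diagonal on U, as in the notes) can be sketched as follows, together with the two-stage forward/back substitution. This is our own sketch; it assumes no zero pivot arises, and the function names are illustrative:

```python
from fractions import Fraction

def lu_crout(A):
    """Factor A = L * U with 1s on the diagonal of U, following the
    column-by-column formulas in the notes. Assumes no zero pivot."""
    n = len(A)
    L = [[Fraction(0)] * n for _ in range(n)]
    U = [[Fraction(1) if i == j else Fraction(0) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):      # column j of L:  lij = aij - sum(lik ukj)
            L[i][j] = Fraction(A[i][j]) - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):  # row j of U:  uji = (aji - sum(ljk uki)) / ljj
            U[j][i] = (Fraction(A[j][i]) - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def solve_lu(A, B):
    """Stage 1: forward substitution L y = B; stage 2: back substitution U x = y."""
    n = len(A)
    L, U = lu_crout(A)
    y = []
    for i in range(n):
        y.append((Fraction(B[i]) - sum(L[i][k] * y[k] for k in range(i))) / L[i][i])
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

# Worked example from the notes: solution (x1, x2, x3) = (-1, -2, 3).
A = [[4, -2, 1], [3, -5, -1], [-1, -2, 1]]
B = [3, 4, 8]
```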
Two-stage solution:
I.   Solve for Y in the equation LY = B using forward substitution.
II.  Solve for X in the equation UX = Y using back substitution.

Example: Determine the values of the xi's in:

    [  4  -2   1 ] [ x1 ]   [ 3 ]
    [  3  -5  -1 ] [ x2 ] = [ 4 ]
    [ -1  -2   1 ] [ x3 ]   [ 8 ]

Knowing that

    A = [  4  -2   1 ]
        [  3  -5  -1 ]
        [ -1  -2   1 ]

therefore

    L = [  4    0    0  ]        U = [ 1  -1/2  1/4 ]
        [  3  -7/2   0  ]            [ 0    1   1/2 ]
        [ -1  -5/2  5/2 ]            [ 0    0    1  ]

Stage 1: Forward substitution using LY = B

    4 y1 = 3                               y1 = 3/4
    3 y1 - (7/2) y2 = 4                    y2 = -1/2
    -y1 - (5/2) y2 + (5/2) y3 = 8          y3 = 3

Note that the computed values of the yi's here are not yet the solution, since the original system of equations is in terms of the xi's.

Stage 2: Back substitution using UX = Y

    x3 = 3                                 x3 = 3
    x2 + (1/2) x3 = -1/2                   x2 = -1/2 - 3/2 = -2
    x1 - (1/2) x2 + (1/4) x3 = 3/4         x1 = 3/4 - 1 - 3/4 = -1

This time (x1, x2, x3) = (-1, -2, 3) is the solution to the original system of equations.

AUGMENTED MATRIX OF A AND B

If A is an m x n matrix and B is an m x p matrix, then the augmented matrix of A and B, denoted by [A : B], is the matrix formed by the elements of A and B separated by pipes.

Example: If

    A = [ 1  2  5 ]        B = [ 2  1 ]
        [ 7  1  6 ]            [ 0  2 ]
        [ 3  3  8 ]            [ 7  4 ]

then [A : B] is

    [ 1  2  5 | 2  1 ]
    [ 7  1  6 | 0  2 ]
    [ 3  3  8 | 7  4 ]

The augmented matrix associated with a system of linear equations AX = B is the matrix [A : B]. For example, we can now rewrite the system of equations

    [ 2  1  3 ] [ x ]   [ 1 ]
    [ 1  2  6 ] [ y ] = [ 1 ]
    [ 2  4  1 ] [ z ]   [ 1 ]

as simply

    [ 2  1  3 | 1 ]
    [ 1  2  6 | 1 ]
    [ 2  4  1 | 1 ]

ECHELON FORM OF A MATRIX

An m x n matrix A is said to be in row echelon form if it satisfies the following properties:

1.  All rows whose elements are all zeros, if they exist, are at the bottom of the matrix.
2.  If at least one element of a row is not equal to zero, the first nonzero element is 1, and this is called the leading entry of the row.
3.  If two successive rows of the matrix have leading entries, the leading entry of the lower row must appear to the right of the leading entry of the row above it.

An m x n matrix A is said to be in reduced row echelon form if, in addition to the first three properties, it satisfies a fourth property:

4.  If a column contains a leading entry of some row, then all the other entries of that column must be zero.

The following matrices are not in row echelon form. (Why not?)

    A = [ 1  2  0  1  3  2 ]    B = [ 1  0  0  0  0 ]    C = [ 1  2  2  4 ]
        [ 0  1  0  1  3  5 ]        [ 0  0  1  0  2 ]        [ 0  1  6  7 ]
        [ 0  0  0  1  9  2 ]        [ 0  1  0  0  1 ]        [ 0  0  1  8 ]
        [ 0  0  0  0  0  0 ]        [ 0  0  0  1  0 ]        [ 0  1  0  0 ]
        [ 0  0  0  0  0  1 ]        [ 0  0  0  0  0 ]

(A has a zero row above a nonzero row; in B and C, a leading entry fails to appear to the right of the leading entry of the row above it.)

The following matrices are in row echelon form but not in reduced row echelon form.

    D = [ 1  2  3  4 ]    E = [ 1  0  5  6  2 ]    F = [ 1  0  8  0 ]
        [ 0  1  2  3 ]        [ 0  1  0  0  8 ]        [ 0  1  0  3 ]
        [ 0  0  1  2 ]        [ 0  0  1  0  0 ]        [ 0  0  1  0 ]
                                                       [ 0  0  0  0 ]
                                                       [ 0  0  0  0 ]

The following matrices are in reduced row echelon form (hence, also in row echelon form).

    G = [ 1  0  0  0 ]    H = [ 1  0  0  0  2 ]    J = [ 0  1  0  0  3 ]
        [ 0  1  0  0 ]        [ 0  0  1  0  3 ]        [ 0  0  1  0  0 ]
        [ 0  0  1  0 ]        [ 0  0  0  1  0 ]        [ 0  0  0  1  0 ]
        [ 0  0  0  1 ]                                 [ 0  0  0  0  0 ]

ELEMENTARY ROW (COLUMN) OPERATIONS ON MATRICES

An elementary row (column) operation on a matrix A is any one of the following operations:

Type I.    Interchange any two rows (columns).
Type II.   Multiply a row (column) by a nonzero constant k.
Type III.  Add to the elements of a row k times the corresponding elements of another row.

Example: Let

    A = [  1  0   2  0 ]
        [ -3  1  -6  4 ]
        [  2  8   2  2 ]

Interchanging rows 1 and 3 of A (R1 <-> R3), we obtain

    B = [  2  8   2  2 ]
        [ -3  1  -6  4 ]
        [  1  0   2  0 ]
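The echelon-form properties can be checked mechanically. The following Python sketch is ours (names are illustrative); the matrices F and G reproduce two of the examples above:

```python
def leading_index(row):
    """Column of the first nonzero entry, or None for a zero row."""
    for j, v in enumerate(row):
        if v != 0:
            return j
    return None

def is_ref(A):
    """The three row echelon form properties."""
    leads = [leading_index(r) for r in A]
    seen_zero = False
    for l in leads:                      # 1. zero rows at the bottom
        if l is None:
            seen_zero = True
        elif seen_zero:
            return False
    if any(l is not None and A[i][l] != 1 for i, l in enumerate(leads)):
        return False                     # 2. each leading entry is 1
    nz = [l for l in leads if l is not None]
    return all(a < b for a, b in zip(nz, nz[1:]))  # 3. leads move right

def is_rref(A):
    """Property 4: a column holding a leading entry is zero elsewhere."""
    if not is_ref(A):
        return False
    for i, row in enumerate(A):
        l = leading_index(row)
        if l is not None and any(A[k][l] != 0 for k in range(len(A)) if k != i):
            return False
    return True

F = [[1, 0, 8, 0], [0, 1, 0, 3], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
G = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```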
Multiplying row 3 of A by 1/2 (R3 -> (1/2)R3), we obtain

    C = [  1  0   2  0 ]
        [ -3  1  -6  4 ]
        [  1  4   1  1 ]

Adding 3 times the elements in row 1 to the elements in row 2 (R2 -> R2 + 3R1), we obtain

    [ 1  0  2  0 ]
    [ 0  1  0  4 ]
    [ 2  8  2  2 ]

ROW (COLUMN) EQUIVALENT MATRICES

An m x n matrix A is row (column) equivalent to an m x n matrix B if B can be obtained from A by applying a finite sequence of elementary row (column) operations.

ELEMENTARY ROW OPERATIONS AS APPLIED TO A SYSTEM OF EQUATIONS [A : B]

As applied to the augmented matrix [A : B] of a system of equations, the three elementary row operations correspond to the following:

    TYPE I     rearranging the order of the equations
    TYPE II    multiplying both sides of an equation by a constant
    TYPE III   adding a multiple of one equation to another

From this observation, we can see that, as applied to a system, the elementary row operations do not alter the solution of the system.

THEOREMS ON MATRIX EQUIVALENCE

1.  Every nonzero m x n matrix A = [aij] is row (column) equivalent to a matrix in row (column) echelon form.

2.  Every nonzero m x n matrix A = [aij] is row (column) equivalent to a matrix in reduced row (column) echelon form.

3.  Let AX = B and CX = D be two systems of m linear equations in n unknowns. If the augmented matrices [A : B] and [C : D] are row equivalent, then the linear systems are equivalent (i.e., they have exactly the same solutions).

4.  As a corollary to the third theorem, if A and B are row equivalent matrices, then the homogeneous systems AX = 0 and BX = 0 are equivalent.

SOLUTIONS TO A SYSTEM OF m EQUATIONS IN n UNKNOWNS

In general, a system of m equations in n unknowns may be written in matrix form:

    [ a11  a12  a13  ...  a1n ] [ x1 ]   [ b1 ]
    [ a21  a22  a23  ...  a2n ] [ x2 ]   [ b2 ]
    [ a31  a32  a33  ...  a3n ] [ x3 ] = [ b3 ]
    [  :    :    :         :  ] [  : ]   [  : ]
    [ am1  am2  am3  ...  amn ] [ xn ]   [ bm ]

This system may now be represented in the augmented notation:

    [ a11  a12  a13  ...  a1n | b1 ]
    [ a21  a22  a23  ...  a2n | b2 ]
    [ a31  a32  a33  ...  a3n | b3 ]
    [  :    :    :         :  |  : ]
    [ am1  am2  am3  ...  amn | bm ]

Applying the theorems on equivalent matrices, we now have the following methods of solution:

GAUSSIAN ELIMINATION METHOD

The objective of the Gaussian Elimination Method is to transform the augmented matrix [A : B] into a matrix [A* : B*] in row echelon form by applying a series of elementary row transformations. Getting the solution of the system [A* : B*] using back substitution will also give the solution of the original system [A : B].

To reduce any matrix to row echelon form, apply the following steps:

1.  Find the leftmost nonzero column.
2.  If the first row has a zero in the column of step 1, interchange it with a row that has a nonzero entry in the same column.
3.  Obtain zeros below the leading entry by adding suitable multiples of the top row to the rows below it.
4.  Cover the top row and repeat the same process starting with step 1 applied to the leftover submatrix. Repeat this process with the rest of the rows.
5.  For each row, obtain a leading entry of 1 by dividing each row by its leading entry.

Example: The linear system

    x + 2y + 3z = 9
    2x - y + z = 8
    3x - z = 3

has the associated augmented matrix

    [A : B] = [ 1   2   3 | 9 ]
              [ 2  -1   1 | 8 ]
              [ 3   0  -1 | 3 ]

which can be transformed into a matrix in row echelon form:

    [A* : B*] = [ 1  2  3 | 9 ]
                [ 0  1  1 | 2 ]
                [ 0  0  1 | 3 ]

Using back substitution we have

    z = 3
    y + z = 2           y = 2 - 3 = -1
    x + 2y + 3z = 9     x = 9 - 2(-1) - 3(3) = 2

thus we have the solution (x, y, z) = (2, -1, 3).

GAUSS-JORDAN REDUCTION METHOD

On the other hand, a second method called the Gauss-Jordan Reduction Method gets rid of the back substitution phase. The objective of the Gauss-Jordan Reduction Method is to transform the augmented matrix [A : B] into a matrix [A* : B*] in reduced row echelon form by applying a series of elementary row transformations. Doing this will automatically give the solution of the system [A* : B*], which also provides the solution of the original system [A : B].

To reduce any matrix to reduced row echelon form, apply the following steps (SINE):
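The Gaussian elimination steps above can be sketched as follows. This Python fragment is ours (names are illustrative); it assumes a nonsingular n x n system, uses exact fractions, and reproduces the worked example with solution (2, -1, 3):

```python
from fractions import Fraction

def gaussian_solve(A, B):
    """Reduce [A : B] to row echelon form, then back-substitute.
    Assumes a nonsingular square system."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(b)] for row, b in zip(A, B)]
    for i in range(n):
        if M[i][i] == 0:                       # interchange if pivot is zero
            for k in range(i + 1, n):
                if M[k][i] != 0:
                    M[i], M[k] = M[k], M[i]
                    break
        M[i] = [v / M[i][i] for v in M[i]]     # leading entry 1
        for k in range(i + 1, n):              # zeros below the leading entry
            M[k] = [a - M[k][i] * b for a, b in zip(M[k], M[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):               # back substitution
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x

# Worked example: x + 2y + 3z = 9, 2x - y + z = 8, 3x - z = 3.
A = [[1, 2, 3], [2, -1, 1], [3, 0, -1]]
B = [9, 8, 3]
```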
1.  Search - search the ith column of the augmented matrix from the ith row to the nth row for the maximum pivot, i.e., the element with the largest absolute value.
2.  Interchange - assuming the maximum pivot occurs in the jth row, interchange the ith row and the jth row so that the maximum pivot will now occur in the diagonal position.
3.  Normalize - normalize the new ith row by dividing it by the maximum pivot in the diagonal position.
4.  Eliminate - eliminate the ith column from the first up to the nth equation, except in the ith equation itself, using the transformations.

Example: The linear system

    x + 2y + 3z = 9
    2x - y + z = 8
    3x - z = 3

has the associated augmented matrix

    [A : B] = [ 1   2   3 | 9 ]
              [ 2  -1   1 | 8 ]
              [ 3   0  -1 | 3 ]

which can be transformed into a matrix in reduced row echelon form:

    [A* : B*] = [ 1  0  0 |  2 ]
                [ 0  1  0 | -1 ]
                [ 0  0  1 |  3 ]

thus we have the solution (x, y, z) = (2, -1, 3).

SUBMATRIX AND RANK

SUBMATRIX
A submatrix of A = [aij] is any matrix obtained by eliminating some rows and/or columns of the matrix A.

Example: Let

    A = [ 2   3  0  1  8 ]
        [ 3   0  3  8  2 ]
        [ 1   6  9  5  2 ]
        [ 4  11  5  7  2 ]

The following are some submatrices of A:

    [ 2   3  0  1  8 ]
    [ 3   0  3  8  2 ]    (remove the third row)
    [ 4  11  5  7  2 ]

    [  3  1 ]
    [  0  8 ]             (remove the first, third and fifth columns)
    [  6  5 ]
    [ 11  7 ]

    [ 2   3  1 ]
    [ 1   6  5 ]          (remove the second row and the third and fifth columns)
    [ 4  11  7 ]

    [ 1 ]
    [ 8 ]                 (only the fourth column remaining)
    [ 5 ]
    [ 7 ]

    [ 3 ]
    [ 0 ]                 (only the second column remaining)
    [ 6 ]
    [ 11 ]

RANK OF A MATRIX

The rank of a matrix A = [aij] is the order of the largest square submatrix of A with a nonzero determinant. We denote the rank of A by rank(A) or simply r(A).

Example: What is the rank of A?

    A = [ 1  2  3    4 ]
        [ 2  1  4    3 ]
        [ 3  0  5  -10 ]

Solution:
Checking out first the determinants of the 3 x 3 submatrices:

    | 1  2  3 |                     | 2  3    4 |
    | 2  1  4 | = 0        but      | 1  4    3 | = -60 ≠ 0
    | 3  0  5 |                     | 0  5  -10 |

Since at least one 3 x 3 submatrix of A has a nonzero determinant, r(A) = 3.

Example: What is the rank of B?

    B = [ 2  3   4   1 ]
        [ 2  4   6   8 ]
        [ 3  6   9  12 ]
        [ 4  8  12  16 ]

Solution:
The determinant of B is equal to zero (THEOREM: proportional rows). It can also be shown that all 3 x 3 submatrices of B have determinants equal to zero, since any three rows of B include at least two proportional rows, e.g.

    | 4   6   8 |
    | 6   9  12 | = 0     (rows are proportional)
    | 8  12  16 |

But at least one 2 x 2 submatrix has a nonzero determinant, e.g.

    | 4  1 |
    | 6  8 | = 32 - 6 = 26 ≠ 0

Therefore r(B) = 2.

THEOREMS ON RANKS

1.  The rank of a matrix is not altered by any sequence of elementary row (column) transformations.
2.  Let A = [aij] and B = [bij] be two m x n matrices. If rank(A) = rank(B), then A and B are equivalent.
3.  If A = [aij] and B = [bij] are n x n matrices and rank(A) = rank(B) = n, then rank(AB) = rank(BA) = n.

Example: What is the rank of C?

    C = [ 1  2  3  4  5 ]
        [ 2  3  4  5  6 ]
        [ 3  4  5  6  7 ]
        [ 4  5  6  7  8 ]
        [ 5  6  7  8  9 ]

Solution:
Operating on the rows of matrix C, we obtain the equivalent matrix C':

    C' = [ 1  2  3  4  5 ]
         [ 1  1  1  1  1 ]    R2' = R2 - R1
         [ 1  1  1  1  1 ]    R3' = R3 - R2
         [ 1  1  1  1  1 ]    R4' = R4 - R3
         [ 1  1  1  1  1 ]    R5' = R5 - R4

We can easily see that all 5 x 5, 4 x 4 and 3 x 3 submatrices of C' have determinants equal to zero (THEOREM: identical rows). But at least one 2 x 2 submatrix of C' has a nonzero determinant, e.g.

    | 1  2 |
    | 1  1 | = 1 - 2 = -1 ≠ 0
First Exam Coverage
ES 21 Notes
Consequently r(C) = 2. But C and C are equivalent
matrices and hence they have equal ranks. Therefore r(C) is
also equal to 2.
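The definition of rank as the order of the largest square submatrix with a nonzero determinant can be tested directly by brute force. A small sketch (the function names are my own), assuming integer entries:

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for c in range(len(m)):
        minor = [row[:c] + row[c + 1:] for row in m[1:]]
        total += (-1) ** c * m[0][c] * det(minor)
    return total

def rank(m):
    """Order of the largest square submatrix with a nonzero determinant."""
    rows, cols = len(m), len(m[0])
    for k in range(min(rows, cols), 0, -1):
        for rs in combinations(range(rows), k):
            for cs in combinations(range(cols), k):
                if det([[m[r][c] for c in cs] for r in rs]) != 0:
                    return k
    return 0

A = [[1, 2, 3, 4], [2, 1, 4, 3], [3, 0, 5, 10]]
C = [[i + j + 1 for j in range(5)] for i in range(5)]  # the 5x5 example C
print(rank(A), rank(C))  # -> 3 2
```

This exhaustive search is only practical for small matrices; in practice rank is computed by row reduction, which the theorem on elementary transformations justifies.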
RANKS AND THE TYPES OF SOLUTION TO A SYSTEM OF EQUATIONS

Recall that for a system of m linear equations in n unknowns AX = B, we can associate to the system of equations the augmented matrix of the system [A:B]. The type of solution may be classified as unique, nonunique or inconsistent. Applying the concept of rank to the augmented matrix [A:B], we have the following propositions:

1. If r(A) = r([A:B]) = n, then the solution to the system is unique.

Example:
[ 1  1  1 :  1 ]    [ 1  0  0 :  3 ]
[ 9  3  1 : 27 ] →  [ 0  1  0 :  1 ]
[ 1 -1  1 : -1 ]    [ 0  0  1 : -3 ]

2. If r(A) = r([A:B]) < n, then the solution to the system is nonunique.

Example:
[ 1  1  1 : 0 ]    [ 1  0  0 :  3 ]
[ 2  1  1 : 3 ] →  [ 0  1  1 : -3 ]
[ 4  2  2 : 6 ]    [ 0  0  0 :  0 ]

3. If r(A) < r([A:B]), then the system has no solution, i.e., it is inconsistent.

Example:
[ 2  1  1 : 3 ]    [ 2  1  1 :  3 ]
[ 4  2  2 : 2 ] →  [ 0  0  0 : -4 ]
[ 6  1  1 : 6 ]    [ 0 -2 -2 : -3 ]

Here the second row reads 0 = -4, so r(A) = 2 < r([A:B]) = 3.

Example 1: Rank and the Type of Solution to a System

For what values of k will the system of equations have
a) a unique solution
b) a nonunique solution
c) no solution

2x + 3y + 3z = 2k
x + 2y + 4z = 5k
y + 5z = 8k²

Solution: In augmented matrix form, we have:

[ 2  3  3 : 2k  ]
[ 1  2  4 : 5k  ]
[ 0  1  5 : 8k² ]

Performing the Gaussian elimination method:

R1 ↔ R2:
[ 1  2  4 : 5k  ]
[ 2  3  3 : 2k  ]
[ 0  1  5 : 8k² ]

R2' = R2 - 2R1:
[ 1   2   4 : 5k  ]
[ 0  -1  -5 : -8k ]
[ 0   1   5 : 8k² ]

R3' = R3 + R2:
[ 1   2   4 : 5k       ]
[ 0  -1  -5 : -8k      ]
[ 0   0   0 : 8k² - 8k ]

Therefore we have the following conclusions:
a) For a unique solution, r(A) = r([A:B]) = n. There is no value of k that will satisfy this, since r(A) = 2 < n = 3.
b) For nonunique solutions, r(A) = r([A:B]) < n. This will be satisfied if r([A:B]) is also equal to 2, which happens when the last element in the third row of the augmented matrix is also equal to zero:
8k² - 8k = 0  ⇒  k = 0, 1.
c) For the system to be inconsistent, r(A) < r([A:B]). This will be satisfied if r([A:B]) = 3 > 2, which happens when the last element in the third row of the augmented matrix is not equal to zero:
8k² - 8k ≠ 0  ⇒  k ≠ 0, 1.

Example 2: For what values of m will the system of equations have
a) a unique solution
b) a nonunique solution
c) no solution

a + b + c = 2
(m + 2)a + c = 0
a + b + m²c = m + 4

Solution: In augmented matrix form, we have:

[ 1      1  1  : 2     ]
[ m + 2  0  1  : 0     ]
[ 1      1  m² : m + 4 ]

Performing the Gaussian elimination method:

R2' = R2 - (m + 2)R1:
[ 1     1        1     : 2         ]
[ 0  -(m + 2) -(m + 1) : -2(m + 2) ]
[ 1     1        m²    : m + 4     ]

R3' = R3 - R1:
[ 1     1        1     : 2         ]
[ 0  -(m + 2) -(m + 1) : -2(m + 2) ]
[ 0     0      m² - 1  : m + 2     ]

Therefore we have the following conclusions:
a) For a unique solution, r(A) = r([A:B]) = n. This will be satisfied if m² - 1 ≠ 0 and m + 2 ≠ 0. Thus we have a unique solution if m ≠ ±1, -2.
b) For nonunique solutions, r(A) = r([A:B]) < n. This will happen when the last element in the third row of matrix A and of the augmented matrix are both equal to zero: m² - 1 = 0 and m + 2 = 0. There is no value of m that will satisfy both equations. The other value of m to be checked is m = -2. Substituting this into the reduced system gives:

[ 1  1  1 : 2 ]
[ 0  0  1 : 0 ]
[ 0  0  3 : 0 ]

Clearly, the resulting system gives a nonunique solution because r(A) = r([A:B]) = 2 < 3. Thus, the system gives nonunique solutions when m = -2.
c) For the system to be inconsistent, r(A) < r([A:B]). This will be satisfied if r([A:B]) = 3 > 2, which happens when the last element in the third row of the augmented matrix is not equal to zero but the last element of the third row of A is equal to zero: m² - 1 = 0 and m + 2 ≠ 0. Thus, the system is inconsistent when m = ±1.
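The rank criteria above can be checked numerically. A sketch (names are my own), assuming exact rational arithmetic; it classifies the system of Example 1 for a given k:

```python
from fractions import Fraction

def row_rank(m):
    """Rank of a matrix via Gaussian elimination with exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in m]
    rank, col, rows, width = 0, 0, len(m), len(m[0])
    while rank < rows and col < width:
        # Find a nonzero pivot in the current column, at or below row `rank`
        piv = next((r for r in range(rank, rows) if m[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rank], m[piv] = m[piv], m[rank]
        # Eliminate the column below the pivot
        for r in range(rank + 1, rows):
            f = m[r][col] / m[rank][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

def classify(k):
    """Classify the system of Example 1 by comparing r(A) and r([A:B])."""
    A = [[2, 3, 3], [1, 2, 4], [0, 1, 5]]
    AB = [row + [rhs] for row, rhs in zip(A, [2 * k, 5 * k, 8 * k * k])]
    rA, rAB = row_rank(A), row_rank(AB)
    if rA == rAB == 3:
        return "unique"
    if rA == rAB:
        return "nonunique"
    return "inconsistent"

print(classify(0), classify(1), classify(2))  # -> nonunique nonunique inconsistent
```

This matches the hand computation: r(A) = 2 for every k, so the system is never unique, and r([A:B]) stays at 2 exactly when 8k² - 8k = 0.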