
Linear Algebra

MMME 21 First Exam Lecture Notes


Prepared by: Magdaleno R. Vasquez Jr., Dr.Eng.
Date released: January 2020

This document may contain errors that will be corrected in class. It is your responsibility
to take note of these corrections. This may be considered as a supplementary material
only and should not be used as substitute to the class discussions and reading materials.
Not everything that you need to know is included in these notes.

Contents
1 Introduction 4
1.1 Mathematical Methods for Mining, Metallurgical, and Mate-
rials Engineering . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.1 Course Overview . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Course Outcomes . . . . . . . . . . . . . . . . . . . 4
1.2 Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Introduction to Linear Algebra . . . . . . . . . . . . . 5
1.2.2 Applications of Linear Algebra . . . . . . . . . . . . . 7

2 Matrix Operations and Properties 8


2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Special Matrices . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Equality of Matrices . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Elementary Operations on Matrices . . . . . . . . . . . . . . 12
2.4.1 Matrix Addition and Subtraction . . . . . . . . . . . . 12
2.4.2 Scalar Multiplication . . . . . . . . . . . . . . . . . . 13
2.4.3 Matrix Subtraction . . . . . . . . . . . . . . . . . . . 13
2.4.4 Matrix Multiplication . . . . . . . . . . . . . . . . . . 14


2.4.5 Matrix Exponentiation . . . . . . . . . . . . . . . . . 15


2.4.6 Matrix Transposition . . . . . . . . . . . . . . . . . . 16
2.5 Properties and Theorems on Matrix Operations . . . . . . . 17
2.5.1 Matrix Addition . . . . . . . . . . . . . . . . . . . . . 17
2.5.2 Scalar Multiplication . . . . . . . . . . . . . . . . . . 17
2.5.3 Matrix Multiplication . . . . . . . . . . . . . . . . . . 17
2.5.4 Matrix Transposition . . . . . . . . . . . . . . . . . . 18

3 Determinants, Adjoints, and Inverses 19


3.1 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.1 Permutation . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.2 Odd and Even Permutations . . . . . . . . . . . . . . 19
3.1.3 Determinant . . . . . . . . . . . . . . . . . . . . . . . 20
3.2 Methods for Computing Determinants . . . . . . . . . . . . 21
3.2.1 Diagonal Method . . . . . . . . . . . . . . . . . . . . 21
3.2.2 Method of Cofactors . . . . . . . . . . . . . . . . . . 23
3.3 Theorems on Determinants . . . . . . . . . . . . . . . . . . 27
3.4 Adjoint of a Matrix . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 Inverse of a Matrix . . . . . . . . . . . . . . . . . . . . . . . 32

4 Solutions to Systems of Equations 36


4.1 Inverse Method . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 Cramer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.3 LU Factorization . . . . . . . . . . . . . . . . . . . . . . . . 41
4.3.1 Elementary Row (Column) Operations on Matrices . 41
4.3.2 Direct LU Factorization . . . . . . . . . . . . . . . . . 42
4.3.3 Solution via LU Decomposition Method . . . . . . . . 44
4.4 Augmented Matrix . . . . . . . . . . . . . . . . . . . . . . . 46
4.4.1 Echelon Form of a Matrix . . . . . . . . . . . . . . . 47
4.4.2 Elementary Row Operations as Applied to a System
of Equations A:B . . . . . . . . . . . . . . . . . . . . 48
4.4.3 Row (Column) Equivalent Matrices . . . . . . . . . . 49
4.4.4 Theorems on Matrix Equivalence . . . . . . . . . . . 49
4.4.5 Solutions to a System of “m” Equations with “m” Un-
knowns . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.5 Gaussian Elimination Method . . . . . . . . . . . . . . . . . 50
4.6 Gauss-Jordan Reduction Method . . . . . . . . . . . . . . . 52
4.7 Rank of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . 53
4.7.1 Theorems on Ranks . . . . . . . . . . . . . . . . . . 54


4.7.2 Ranks and the Types of Solution to a System of Equations . . . 55

5 Eigenvalues 60
5.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.2 Eigenspaces . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3 Characteristic Polynomial . . . . . . . . . . . . . . . . . . . 63
5.4 Eigenvalues of a Triangular Matrix . . . . . . . . . . . . . . 66

6 Applications 69
6.1 Curve Fitting . . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.2 Spring-mass System . . . . . . . . . . . . . . . . . . . . . . 70
6.3 Stresses and Strains . . . . . . . . . . . . . . . . . . . . . . 72


1 Introduction

1.1 Mathematical Methods for Mining, Metallurgical, and


Materials Engineering

1.1.1 Course Overview

The course allows students to recognize mathematical techniques neces-


sary in engineering practice, solve problems involving linear algebra, basic
differential equations and Laplace/Fourier transforms, and apply mathe-
matical techniques relevant in mining, metallurgical, and materials engi-
neering.

1. Course Number: MMME 21


2. Course Title Mathematical Methods for Mining,
Metallurgical and Materials Engineering
3. Course Description: Solutions to systems of linear equations, first-
and higher-order ordinary differential,
Laplace transforms and mathematical
applications used in the mining, metallurgical,
and materials engineering fields
4. Prerequisite: Math 23 Elementary Analysis III
5. Course Credit: 3 units
6. Number of Hours: 3 h per week
7. Meeting Type: Lecture

1.1.2 Course Outcomes

Upon completion of the course, students must be able to:


1. Recognize basic mathematical techniques necessary in engineering


practice;

2. Solve problems involving linear algebra, basic differential equations


and Laplace/Fourier Transforms;

3. Apply mathematical techniques in solutions for problems in mining,


metallurgical and materials engineering.

1.2 Linear Algebra

1.2.1 Introduction to Linear Algebra

What is linear algebra?

• Linear: having to do with lines, planes, etc.

• Algebra: solving equations involving unknowns.

Linear algebra is the branch of mathematics concerning linear equations


such as
a1 x 1 + · · · + an x n = b

or system of linear equations

Eq. 1 : a11 x1 + a12x2 + a13 x3 + · · · + a1n xn = b1


Eq. 2 : a21 x1 + a22x2 + a23 x3 + · · · + a2n xn = b2
Eq. 3 : a31 x1 + a32x2 + a33 x3 + · · · + a3n xn = b3
.. .. ..
. . .
Eq. m : am1 x1 + am2 x2 + am3 x3 + · · · + amn xn = bm

and their representations through matrices and vector spaces. It is impor-


tant to understand how to solve systems of linear equations algebraically.


The term “algebra” was coined by the 9th century mathematician Abu
Ja’far Muhammad ibn Musa al-Khwarizmi. It comes from the Arabic word
al-jebr, meaning reunion of broken parts.

At the basic level, solving systems of equations is straightforward. For


instance consider the following illustrations:

1. The admission fee at a small fair is 150 for children and 400 for
adults. On a certain day, 2200 people entered the fair and 505,000
was collected. How many children and how many adults attended?
Answer: 1,500 children and 700 adults

2. The sum of the digits of a two-digit number is 7. When the digits are
reversed, the number is increased by 27. Find the number.
Answer: 25

3. Find the equation of the parabola that passes through the points
(-1, 9), (1, 5), and (2, 12).
Answer: y = 3x2 − 2x + 4

However, in actual applications, one needs to be more clever and inge-


nious. Sometimes, it is important to gain insight about the solution without
having to solve the equations. For example, does the solution exist? What
is the significance of the solution? What is the physical interpretation of the
solution? Is there still a solution if I change this value to another value?


The main goal is to present a library of linear algebra tools, and more
importantly, to teach a conceptual framework for understanding which tools
should be applied in a given context.

If a computer program can find the answer faster than you can, then your
question is just an algorithm; this is not real problem solving. The subtle
part of the course lies in understanding what computation to ask the
computer to do for you; it is far less important to know how to perform
computations that a computer can do better than you anyway.

1.2.2 Applications of Linear Algebra

Majority, if not all, engineering undergraduates have to take a course in


linear algebra. The reason for this is that most engineering problems, no
matter how complicated, can be reduced to linear algebra:

AX = B or AX = λX or AX ≈ B

Take note that nearly all engineering or scientific computations involve lin-
ear algebra. Consequently, linear algebra algorithms have been highly
optimized.

Linear algebra can be used in:

1. Linear Programming - an optimization technique for a system of lin-


ear constraints and a linear objective function.

2. Study of complex systems

3. Numerical modeling

4. Solution of system of linear equations, for instance,

x + 2y − 3z = 2 x = 1
2x − 3y + 4z = 1 −→ y = −1
3x + 4y − 5z = 4 z = −1


2 Matrix Operations and Properties

2.1 Definitions

A matrix is a rectangular array of numbers or functions arranged in rows


and columns usually designated by a capital letter and enclosed by brack-
ets, parentheses, or double bars. A matrix may be denoted by:

 
        [ a11  a12  · · ·  a1n ]
        [ a21  a22  · · ·  a2n ]
    A = [  ..   ..           .. ]                                       (1)
        [ am1  am2  · · ·  amn ]

Unless stated, we assume that all our matrices are composed of real num-
bers.

The horizontal groups of elements are called the rows of the matrix. The
ith row of A is

[ai1 ai2 ··· ain ] (1 ≤ i ≤ m) (2)

The vertical groups of elements are called the columns of the matrix. The
jth column of A is

 
    [ a1j ]
    [ a2j ]
    [  .. ]          (1 ≤ j ≤ n)                                        (3)
    [ amj ]

The size of a matrix is denoted by “m×n” (m by n) where m is the number


of rows and n is the number of columns.

We refer to aij as the entry or the element in the ith row and jth column
of the matrix.

We may often write a given matrix as

A = [aij ] (4)

2.2 Special Matrices

1. Row matrix or row vector – is a matrix that consists of only one row.
h i
B = b1 b 2 · · · bj · · · bn (5)

2. Column matrix or column vector – is a matrix that consists of only


one column.
 
c1
 c2 
 
 . 
 .. 
C=  (6)
 
 ci 
 .. 
 
 . 
cm

3. Square matrix – is a matrix in which the number of rows equals to


the number of columns.
Order of a square matrix – is the number of rows or columns of the
matrix. Thus, we can just refer to a 3×3 matrix as a square matrix of
order 3.


Principal diagonal or main diagonal of a square matrix – consists


of the elements a11 , a22 , a33 , · · · , ann .
The trace of a square matrix – is the sum of the elements along the
main diagonal of the matrix.

4. Upper triangular matrix – a square matrix where all elements below


the principal diagonal are zero (aij = 0 for i > j).
 
u11 u12 u13
U =  0 u22 u23  (7)
 

0 0 u33

5. Lower triangular matrix – a square matrix where all elements above


the principal diagonal are zero (aij = 0 for i < j).
 
l11 0 0
L = l21 l22 0  (8)
 

l31 l32 l33

6. Diagonal matrix – is a square matrix where the only non-zero ele-


ments are on the principal diagonal (aij = 0 for i ≠ j).
 
d11 0 0
D =  0 d22 0  (9)
 

0 0 d33

7. Scalar matrix – is a diagonal matrix whose elements are equal.


 
s 0 0
S = 0 s 0 (10)
 

0 0 s


8. Identity matrix – represented by In , a diagonal matrix where all the


elements along the main diagonal are equal to 1 or unity. n is the
size of the square matrix.
 
1 0 0
I3 = 0 1 0 (11)
 

0 0 1

9. Null matrix – represented by O is a matrix in which all the elements


are zero.
 
0 0 ··· 0
0 0 · · · 0
 
O=  .. .. . . . .. 
 (12)
. . .
0 0 ··· 0

10. Symmetric matrix – is a square matrix whose elements aij and aji
are equal.
 
1 2 4
S = 2 2 −5 (13)
 

4 −5 3

11. Skew symmetric matrix – is a square matrix whose elements are


defined by aij = −aji .
 
0 2 −4
T = −2 0 −5 (14)
 

4 5 0


2.3 Equality of Matrices

Two matrices A = [aij ] and B = [bij ] are equal if and only if the following
conditions are satisfied:

1. Both have equal number of rows.

2. Both have equal number of columns.

3. All elements in A agree with the elements in B. (aij = bij , for all i and
j.)

Example: The matrices


   
x 1 c−2 2 1 −1
A = a y + 1 2  and B = x + 1 b − 2 2 
   

3 b z 3 4 c+1

are equal if and only if x = 2, y = 1, z = 2, a = 3, b = 4, and c = 1.

2.4 Elementary Operations on Matrices

2.4.1 Matrix Addition and Subtraction

If A = [aij ] and B = [bij ] are matrices of the same size m × n, then the sum
A + B is another m × n matrix C = [cij ] where cij = aij + bij for i = 1 to
m and j = 1 to n. Matrix addition is accomplished by adding algebraically
corresponding elements in A and B.

Example: " # " #


1 −2 4 3 2 −2
A= B=
−3 2 1 4 −1 1


" # " #
1 −2 4 3 2 −2
A+B = +
−3 2 1 4 −1 1
" #
1+3 −2 + 2 4 + (−2)
=
−3 + 4 2 + (−1) 1+1
" #
4 0 2
=
1 1 2

2.4.2 Scalar Multiplication

If A = [aij ] is an m × n matrix and k is a real number (or a scalar), then


the scalar multiple of A by k is the m × n matrix C = [cij ] where cij = kaij
for all i and j. In other words, the matrix C is obtained by multiplying each
element of the matrix by the scalar k.

Examples: " # " #


2 0 −1 3 −6 0 3 −9
−3 =
4 −3 2 −5 −12 9 −6 15
   
2 −1 3 8 −4 12
4 5 2 1 =  20 8 4 
   

−2 4 7 −8 16 28

2.4.3 Matrix Subtraction

If A and B are m × n matrices, the difference between A and B denoted


as A − B is obtained from the addition of A and (−1)B.

A–B = A + (−1)B (15)

Matrix subtraction is accomplished by subtracting from the elements of the


first matrix the elements of the second matrix correspondingly.


Example: " # " #


3 4 −2 2 −1 4
A= B=
5 7 −4 −3 8 3

" # " #
3 4 −2 2 −1 4
A−B = + (−1)
5 7 −4 −3 8 3
" #
3 − 2 4 + 1 −2 − 4
=
5 + 3 7 − 8 −4 − 3
" #
1 5 −6
=
8 −1 −7

NOTE: We can only add or subtract matrices with the same number of
rows and columns.
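
The elementary operations above are easy to verify numerically. The following is a minimal sketch assuming the NumPy library is available; the matrices reuse the addition and scalar-multiplication examples above.

    # Numerical check of matrix addition, scalar multiplication, and subtraction
    import numpy as np

    A = np.array([[1, -2, 4], [-3, 2, 1]])
    B = np.array([[3, 2, -2], [4, -1, 1]])

    print(A + B)                                             # [[4 0 2], [1 1 2]]
    print(-3 * np.array([[2, 0, -1, 3], [4, -3, 2, -5]]))    # scalar multiple
    print(A - B)                                             # defined only because A and B have the same size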

2.4.4 Matrix Multiplication

If A = [aij ] is an m × n matrix and B = (bij ) is an n × p matrix, then the


product of A and B, AB = C = [cij ] is an m × p matrix where

                 n
          cij =  Σ  aik bkj                                             (16)
                k=1

for i = 1 to m and j = 1 to p.

The formula tells us that in order to get the element cij of the matrix C, get
the elements of the ith row of A (the pre-multiplier) and the elements on
the jth column of B (the post-multiplier). Afterwards, obtain the sum of the
products of corresponding elements on the two vectors.

NOTE: The product is defined only if the number of columns of the first


factor A (pre-multiplier) is equal to the number of rows of the second factor


B (post-multiplier). If this is satisfied, we say that the matrices are conformable in the order AB.

2.4.5 Matrix Exponentiation

For a positive integer n, the power A^n of a square matrix A is defined as A × A × A × · · · × A (n factors).

Examples: " # " #


1 2 2 −4
A= B=
3 4 3 −1

" # " #
1(2) + 2(3) 1(−4) + 2(−1) 8 −6
AB = =
3(2) + 4(3) 3(−4) + 4(−1) 18 −16
" # " #
2(1) + −4(3) 2(2) + −4(4) −10 −12
BA = =
3(1) + −1(3) 3(2) + −1(4) 0 2

NOTE: Although AB and BA are defined it is not necessary that AB = BA.

Example:
 
2 −1 " # " #
6 2 −3 1 3
A= 5 3 B= C=
 
−1 4 5 2 −1
−4 6

1. A × B is a 3 × 3 matrix while B × A is a 2 × 2 matrix.

2. A × C is a 3 × 2 matrix but C × A is not defined.

3. B × C is not defined but C × B is defined 2 × 3.


Show:

          [  2  −1 ]   [  6  2  −3 ]   [  13   0  −11 ]
  A × B = [  5   3 ] × [ −1  4   5 ] = [  27  22    0 ]
          [ −4   6 ]                   [ −30  16   42 ]

          [  6  2  −3 ]   [  2  −1 ]   [ 34  −18 ]
  B × A = [ −1  4   5 ] × [  5   3 ] = [ −2   43 ]
                          [ −4   6 ]
How about A × C =? and C × B =?
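
The products and the conformability rule can be checked with a short script; this is a sketch assuming NumPy is available, and the names A, B, C refer to the matrices of the example above.

    # Matrix multiplication and conformability
    import numpy as np

    A = np.array([[2, -1], [5, 3], [-4, 6]])    # 3 x 2
    B = np.array([[6, 2, -3], [-1, 4, 5]])      # 2 x 3
    C = np.array([[1, 3], [2, -1]])             # 2 x 2

    print(A @ B)      # 3 x 3, matches the product shown above
    print(B @ A)      # 2 x 2: [[34, -18], [-2, 43]]
    print(A @ C)      # 3 x 2, defined (columns of A = rows of C)
    # C @ A raises ValueError: C has 2 columns but A has 3 rows,
    # so the product C x A is not defined.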

2.4.6 Matrix Transposition

If A = [aij ] is an m × n matrix, then the transpose of A, denoted by AT =


[a0ij ] is an n × m matrix defined by a0ij = aji . The transpose of A is obtained
by interchanging the rows and the columns of A.

Example:
 
         [  1   2  −2 ]                   [  1  −1   2   5 ]
  If A = [ −1   3   4 ]    then    AT  =  [  2   3  −1   3 ]
         [  2  −1   3 ]                   [ −2   4   3  −2 ]
         [  5   3  −2 ]

NOTE: The transpose of a symmetric matrix is equal to itself.


2.5 Properties and Theorems on Matrix Operations

2.5.1 Matrix Addition

A + O = A Existence of Additive Identity


A + (−A) = O Existence of Additive Inverse
A + B = B + A Commutative Property
(A + B) + C = A + (B + C) Associative Property

2.5.2 Scalar Multiplication

0×A=A×0=O Multiplicative Property of Zero


1 × A = A × 1 = A Multiplicative Identity Property
kl(A) = k(lA) = l(kA) Associative Property of Multiplication
(k + l)A = kA + lA Distributive Property
k(A + B) = kA + kB Distributive Property

2.5.3 Matrix Multiplication

A(BC) = (AB)C Associative Property


A(B + C) = AB + AC Left Distributive Property
(A + B)C = AC + BC Right Distributive Property
AI = IA = A Existence of Multiplicative Identity
kl(AB) = (kA)(lB) = (lA)(kB)

NOTE: In general, matrix multiplication is not commutative. That is, AB ≠ BA.


2.5.4 Matrix Transposition

(AT )T = A
(A + B)T = AT + B T
(kA)T = kAT
(AB)T = B T AT

In general (A1 A2 A3 . . . An−1 An )T = ATn ATn−1 . . . AT3 AT2 AT1 .


3 Determinants, Adjoints, and Inverses

3.1 Determinants

Another very important number associated with a square matrix A is the


determinant of A which we will now define. This unique number associated
to a matrix A is useful in solving a system of linear equations.

3.1.1 Permutation

Let S = {1, 2, 3, . . . , n} be the set of integers from 1 to n, arranged in


increasing order. A rearrangement a1 a2 a3 . . . an of the elements in S is
called a permutation of S.

By the Fundamental Principle of Counting we can put any one of the n


elements of S in the first position, any one of the remaining (n−1) elements
in the second position, any one of the remaining (n − 2) elements in the
third position, and so on until the nth position. Thus, there are n(n − 1)(n −
2) . . . 3×2×1 = n! permutations of S. We refer to the set of all permutations
of S by Sn .

Examples:
If S = {1, 2, 3} then S3 = {123, 132, 213, 231, 312, 321}.
If S = {1, 2, 3, 4} then there are 4! = 24 elements of S4 . Can you list all the
elements of S4 ?

3.1.2 Odd and Even Permutations

A permutation a1 a2 a3 . . . an is said to have an inversion if a larger number


precedes a smaller one. If the total number of inversion in the permutation
is even, then we say that the permutation is even, otherwise it is odd.


Examples: Odd and Even Permutation

S1 has only one permutation; that is 1, which is even since there are no
inversions.

In the permutation 35241, 3 precedes 2 and 1, 5 precedes 2, 4 and 1, 2


precedes 1 and 4 precedes 1. There is a total of 7 inversions, thus the
permutation is odd.

S3 has 3! = 6 permutations: 123, 231, and 312 are even while 132, 213,
and 321 are odd.

S4 has 4! = 24 permutations: 1234, 1243, 1324, 1342, 1423, 1432, 2134,


2143, 2314, 2341, 2413, 2431, 3124, 3142, 3214, 3241, 3412, 3421,
4123, 4132, 4213, 4231, 4312, 4321.

For any Sn , where n > 1, it contains n!/2 even permutations and n!/2 odd
permutations.

3.1.3 Determinant

Let A = [aij ] be a square matrix of order n. The determinant of A denoted


by det(A) or |A| is defined by

   det(A) = |A| = Σ (±) a1j1 a2j2 · · · anjn                            (17)

where the sum is over all permutations j1 j2 . . . jn of the set S = 1, 2, . . . , n.


The sign is taken as (+) if the permutation is even and (–) if the permuta-
tion is odd.

Examples:

If A = [a11 ] is a 1 × 1 matrix then det(A) or |A| = a11 .

   If A = [ a11  a12 ]
          [ a21  a22 ],

then to get |A| we write down the terms a1− a2− and replace the dashes with
all possible permutations of S = {1, 2}, namely 12 (even) and 21 (odd).
Thus |A| = a11 a22 − a12 a21 .

   If A = [ a11  a12  a13 ]
          [ a21  a22  a23 ]
          [ a31  a32  a33 ],

then to compute |A| we write down the six terms
a1− a2− a3− , a1− a2− a3− , a1− a2− a3− , a1− a2− a3− , a1− a2− a3− , a1− a2− a3− .
Replace the dashes with all the elements of S3 , affix a (+) or (−) sign, and
get the sum of the six terms.

If A is a square matrix of order n, there will be n! terms in the determinant
of A, with n!/2 positive terms and n!/2 negative terms. Example,

   | 2  1  3 |
   | 3  2  1 | = (2)(2)(2) + (3)(1)(3) + (0)(1)(1) − (0)(2)(3) − (2)(1)(1) − (3)(1)(2)
   | 0  1  2 |
               = 8 + 9 + 0 − 0 − 2 − 6
               = 9
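
As a quick numerical check of this example (a sketch assuming NumPy is available):

    import numpy as np

    A = np.array([[2, 1, 3],
                  [3, 2, 1],
                  [0, 1, 2]])
    print(np.linalg.det(A))    # approximately 9.0, matching the hand computation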

3.2 Methods for Computing Determinants

3.2.1 Diagonal Method

This method is applicable to matrices with size less than or equal to 3.

1. 2×2 matrix
      det [ a11  a12 ]  =  a11 a22 − a12 a21                            (18)
          [ a21  a22 ]



Example:
      det [  1  2 ]  =  (1)(1) − (−2)(2) = 5
          [ −2  1 ]

Compute the determinant:
      det [  1   2 ]  =                 det [ 1  2 ]  =
          [ −2  −1 ]                        [ 2  1 ]

2. 3×3 matrix

 
       [ a11  a12  a13 ]      a11 a22 a33 + a12 a23 a31 + a13 a21 a32
   det [ a21  a22  a23 ]  =                                                     (19)
       [ a31  a32  a33 ]     − a31 a22 a13 − a32 a23 a11 − a33 a21 a12

   a11  a12  a13 | a11  a12
   a21  a22  a23 | a21  a22      (the first two columns are repeated so that the six
   a31  a32  a33 | a31  a32       diagonal products can be read off directly)

Examples:
 
      [ 1  2  3 ]     (1)(1)(2) + (2)(3)(3) + (3)(2)(1)
  det [ 2  1  3 ]  =                                        =  6
      [ 3  1  2 ]    − (3)(1)(3) − (1)(3)(1) − (2)(2)(2)

      [ 1  −2  3 ]     (1)(−1)(2) + (−2)(3)(3) + (3)(2)(−1)
  det [ 2  −1  3 ]  =                                          =  −6
      [ 3  −1  2 ]    − (3)(−1)(3) − (−1)(3)(1) − (2)(2)(−2)


Compute the determinant:

      [  1   2  −3 ]                    [ 1   2  3 ]
  det [  2  −1   3 ]  =             det [ 2  −1  3 ]  =
      [ −3   1   2 ]                    [ 3   1  2 ]

3.2.2 Method of Cofactors

Complementary Minor The complementary minor (det(Mij ) or Mij ) or


simply minor of an element aij of the matrix A is that determinant of the
sub-matrix Mij obtained after eliminating the ith row and jth column of A.

Algebraic Complement or Cofactor The algebraic complement or


cofactor (Aij ) of an element aij of the matrix A is that signed minor ob-
tained from the formula (−1)i+j |Mij |. The signs of the cofactors follow a
“checkerboard pattern.” Namely, (−1)i+j has alternating signs
 
   [ +  −  +  · · · ]
   [ −  +  −  · · · ]
   [ +  −  +  · · · ]
   [ ..  ..  ..     ]

The determinant of a square matrix may be obtained using expansion about
a row or expansion about a column. The following formulas may be used
in getting the determinant:

                                                       n
   det(A) = ai1 Ai1 + ai2 Ai2 + · · · + ain Ain   =    Σ  aik Aik               (20)
                                                      k=1

This is called cofactor expansion along the ith row.


                                                       n
   det(A) = a1j A1j + a2j A2j + · · · + anj Anj   =    Σ  akj Akj               (21)
                                                      k=1

This is called cofactor expansion along the jth column.

We may choose any row or any column in getting the determinant of a


given matrix.

Example: Determinant of a 2×2 matrix A,


" #
a11 a12
A=
a21 a22

The minors are

   M11 = [a22 ]        M21 = [a12 ]
   M12 = [a21 ]        M22 = [a11 ]

The minors are all 1×1 matrices. As we have seen, the determinant of a
1×1 matrix is just the number inside it, so the cofactors are

   A11 = + det(M11 ) = a22        A21 = − det(M21 ) = −a12
   A12 = − det(M12 ) = −a21       A22 = + det(M22 ) = a11

Expanding the cofactors along the first column, we find that

det(A) = a11 A11 + a21 A21 = a11 a22 − a21 a12 (22)

This agrees with the diagonal method (Eq. 18, p. 21).


Example: Determinant of a 3×3 matrix A


 
a11 a12 a13
A = a21 a22 a23 
 

a31 a32 a33

Expanding the first row


 
   M11 = [ a22  a23 ]            A11 = + det [ a22  a23 ]
         [ a32  a33 ]                        [ a32  a33 ]

   M12 = [ a21  a23 ]            A12 = − det [ a21  a23 ]
         [ a31  a33 ]                        [ a31  a33 ]

   M13 = [ a21  a22 ]            A13 = + det [ a21  a22 ]
         [ a31  a32 ]                        [ a31  a32 ]

The determinant of A is

   det(A) = a11 A11 + a12 A12 + a13 A13

          = a11 det [ a22  a23 ]  −  a12 det [ a21  a23 ]  +  a13 det [ a21  a22 ]
                    [ a32  a33 ]             [ a31  a33 ]             [ a31  a32 ]

          = a11 (a22 a33 − a32 a23 ) − a12 (a21 a33 − a31 a23 ) + a13 (a21 a32 − a31 a22 )
          = a11 a22 a33 − a11 a32 a23 − a12 a21 a33 + a12 a31 a23 + a13 a21 a32 − a13 a31 a22

This is similar to the diagonal method for 3×3 matrix (Eq. 19, p. 22).

Example: Consider again the matrix


 
1 2 3
2 1 3
 

3 1 2

and solve for the determinant using the method of cofactors.


Solution: Calculating the elements using (−1)^(i+j) |Mij |

   A11 = (−1)^(1+1) | 1  3 | = −1     A12 = (−1)^(1+2) | 2  3 | = 5      A13 = (−1)^(1+3) | 2  1 | = −1
                    | 1  2 |                           | 3  2 |                           | 3  1 |

   A21 = (−1)^(2+1) | 2  3 | = −1     A22 = (−1)^(2+2) | 1  3 | = −7     A23 = (−1)^(2+3) | 1  2 | = 5
                    | 1  2 |                           | 3  2 |                           | 3  1 |

   A31 = (−1)^(3+1) | 2  3 | = 3      A32 = (−1)^(3+2) | 1  3 | = 3      A33 = (−1)^(3+3) | 1  2 | = −3
                    | 1  3 |                           | 2  3 |                           | 2  1 |

Hence, the cofactor matrix is


 
−1 5 −1
−1 −7 5 
 

3 3 −3

The determinant of the matrix is thus using cofactor expansion along the
first row,  
1 2 3
det 2 1 3 = (1)(−1) + (2)(5) + (3)(−1) = 6
 

3 1 2
Along the second row,
 
1 2 3
det 2 1 3 = (2)(−1) + (1)(−7) + (3)(5) = 6
 

3 1 2

Along the third row,


 
1 2 3
det 2 1 3 = (3)(3) + (1)(3) + (2)(−3) = 6
 

3 1 2


How about cofactor expansion along the first column?


 
1 2 3
det 2 1 3 = (1)(−1) + (2)(−1) + (3)(3) = 6
 

3 1 2

Along the second column,


 
1 2 3
det 2 1 3 = (2)(5) + (1)(−7) + (1)(3) = 6
 

3 1 2

Along the third column,


 
1 2 3
det 2 1 3 = (3)(−1) + (3)(5) + (2)(−3) = 6
 

3 1 2

All determinants are equal, as expected.

Show:  
1 0 3 0
2 1 4 −1
det   = −13
 
3 2 4 0
0 3 −1 0
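
Cofactor expansion translates directly into a short recursive routine. The sketch below expands along the first row, as in Eq. (20); the function name det_cofactor is ours and is not part of the notes.

    # Determinant by cofactor expansion along the first row (pure Python)
    def det_cofactor(M):
        n = len(M)
        if n == 1:                                   # 1 x 1 matrix
            return M[0][0]
        total = 0
        for j in range(n):
            # minor: delete row 0 and column j
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += M[0][j] * (-1) ** j * det_cofactor(minor)
        return total

    A = [[1, 0, 3, 0],
         [2, 1, 4, -1],
         [3, 2, 4, 0],
         [0, 3, -1, 0]]
    print(det_cofactor(A))                                   # -13, as claimed above
    print(det_cofactor([[1, 2, 3], [2, 1, 3], [3, 1, 2]]))   # 6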

3.3 Theorems on Determinants

1. If a square matrix A = [aij ] contains a row (or a column) that has


elements all equal to zero, then |A| = 0.
Example:
1 −2 0

2 3 0 = 0


−2 4 0

2. The determinant of a square matrix A = [aij ] is equal to the determi-


nant of its transpose AT = [aji ] (i.e. |A| = |AT |).


Example:
1 −2 3 1 4 −1

4 2 −3 = −2 2 4


−1 4 2 3 −3 2

3. If a row (or column) of a square matrix A = [aij ] is multiplied by a


constant k, then the determinant of the resulting matrix B = [bij ] is
equal to k times the determinant of A (i.e. |B| = k|A|).
Example:

1 a b 1 a b

If 2 x y = 5 then 4 2x 2y = 2(5) = 10


4 2a 2b 4 2a 2b

4. As a corollary to the third theorem, if A has a row (or column) that has
a common factor k, then this k may be factored out of the determinant
of A, leaving a simplified matrix B (i.e. |A| = k|B|).
Example:

   | 2xy²   3x   2x  |         | y    3x   2x  |            | y   3x   2x  |
   | 4x²y   6y   8y  |  = 2xy  | 2x   6y   8y  |  = 2xy(2)  | x   3y   4y  |
   | 6xy    18   2xy |         | 3    18   2xy |            | 3   18   2xy |

5. If two rows (or columns) of a square matrix A = [aij ] were inter-


changed to form a new matrix B = [bij ], then |B| = −|A|.
Example:
   | xy      x − 2   0 |           | x + 2   0       0 |
   | x + 2   0       0 |  =  −     | xy      x − 2   0 |
   | x − y   z + y   1 |           | x − y   z + y   1 |

6. If two rows (or columns) of a matrix A = [aij ] are identical then |A| =
0.


Example:

   | 2x       3x − y       5z      xyz   |         | 2x      3x − y   5z      xyz |
   | 2x − y   3x           2z      x     |         | 2x − y  3x       2z      x   |
   | 6x²      9x² − 3xy    15xz    3x²yz |  =  3x  | 2x      3x − y   5z      xyz |  =  0
   | 4z       3y           x − z   xy²   |         | 4z      3y       x − z   xy² |

7. As a corollary to the sixth theorem, if the elements in a row (or col-


umn) of a square matrix A = [aij ] are multiples of the corresponding
elements of another row or column of the matrix A, then |A| = 0.
Example:
2 −3 2

−4 6 −4 = 0


6y 1 2xy

8. If B = [bij ] is a square matrix of order n that is derived from another


square matrix A = [aij ] of order n, by adding correspondingly the
elements of a row (or column) to a multiple of the elements of another
row (or column), then |B| = |A|.
Example:

a11 a12 a13 a11 + ka31 a12 + ka32 a13 + ka33

a21 a22 a23 = a21 a22 a23



a31 a32 a33 a31 a32 a33

9. If the elements of one row (or column) of a square matrix A = [aij ] of


order n may be expressed as binomials such that two square matri-
ces B = [bij ] and C = [cij ] both of order n, are formed after splitting
the binomial elements, then |A| = |B| + |C|.
Example:

a11 a 12 a 13
a11 a12 a13 a11 a12 a13

b21 + c21 b22 + c22 b23 + c23 = b21 b22 b23 + c21 c22 c23


a31 a32 a33 a31 a32 a33 a31 a32 a33


10. The determinant of the product of two square matrices A = [aij ]


and B = [bij ] of the same order n is equal to the product of the
determinant of A and the determinant of B.
Example:

2x −3 2 3 −4a 5

If −4 6z −4 = 2 and a 6b 5 = 3 then


6y 1 2xy 2a a + b 2b2


2x −3 2 3 −4a 5

−4 6z −4 × a 6b 5 =2×3=6


6y 1 2xy 2a a + b 2b2

11. The determinant of a triangular matrix is equal to the product of the


elements in its principal diagonal.
Example:
x + 2 0 0

xy x − 2 0 = x2 − 4


x − y z + y 1

12. The determinant of an identity matrix is equal to 1.
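
Several of these theorems can be spot-checked numerically. The sketch below assumes NumPy is available; matrix B is an arbitrary choice, not taken from the notes.

    import numpy as np

    A = np.array([[1, 2, 3], [2, 1, 3], [3, 1, 2]], dtype=float)
    B = np.array([[2, 0, 1], [1, -1, 3], [0, 2, 1]], dtype=float)
    detA, detB = np.linalg.det(A), np.linalg.det(B)

    print(np.isclose(np.linalg.det(A.T), detA))                 # Theorem 2: |A^T| = |A|
    print(np.isclose(np.linalg.det(A @ B), detA * detB))        # Theorem 10: |AB| = |A||B|
    print(np.isclose(np.linalg.det(np.eye(3)), 1.0))            # Theorem 12: |I| = 1
    print(np.isclose(np.linalg.det(A[[1, 0, 2], :]), -detA))    # Theorem 5: a row swap flips the sign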

3.4 Adjoint of a Matrix

The adjoint of a square matrix A = [aij ] of order n is that square matrix


with the same order n denoted by adj(A) = [Aji ] where Aij is the cofactor
of the element aij of matrix A. The adjoint of a matrix is the transpose of
the matrix of cofactors of the elements of A.

Input: Square Matrix


Output: Square Matrix (with the same size as the original matrix)
Notation: adjA or adj(A)

Step 1: Get the cofactors of all the elements in the original matrix.
Recall: the cofactor of an element aij can be denoted as Aij and is defined


by:

Aij = (−1)i+j |Mij | (23)

Step 2: Set up the adjoint matrix by taking the transpose of the matrix of
cofactors.

adjA = [Aij ]T (24)

Example:

   If A = [ a  b ]      then    adjA = [  d  −b ]
          [ c  d ]                     [ −c   a ]

Example: Consider again the matrix


 
1 2 3
2 1 3
 

3 1 2

and determine the adjoint of the matrix.

Solution: Recall the cofactor matrix has been determined previously which
is (see page 26)  
−1 5 −1
−1 −7 5 
 

3 3 −3
Hence, the adjoint of the matrix is
   
1 2 3 −1 −1 3
Adj 2 1 3 =  5 −7 3 
   

3 1 2 −1 5 −3
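
The two-step recipe (cofactors, then transpose) is easy to code. This is a sketch assuming NumPy is available; the helper name adjoint is ours.

    import numpy as np

    def adjoint(A):
        n = A.shape[0]
        cof = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor A_ij
        return cof.T                                                 # adj(A) = [A_ij]^T

    A = np.array([[1, 2, 3], [2, 1, 3], [3, 1, 2]], dtype=float)
    print(np.round(adjoint(A)))   # [[-1 -1  3], [ 5 -7  3], [-1  5 -3]], as above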


3.5 Inverse of a Matrix

Let A be an n × n (square) matrix. We say that A is invertible if there is an


n × n matrix B such that AB = In and BA = In . In this case, the matrix B
is called the inverse of A, and we write B = A−1 .

The inverse of a square matrix A = [aij ] of order n is that matrix B = [bij ]


of the same order n such that AB = BA = In . We denote the inverse
matrix of A by A−1 . Thus, we define the inverse of A as that matrix A−1
such that

A(A−1 ) = (A−1 )A = In (25)

Not all matrices have an inverse. However, if the inverse of a matrix exists,
it is unique. If the inverse of a matrix exists, we say that the matrix is in-
vertible or non-singular. Otherwise, we say that the matrix is non-invertible
or singular.

Facts about invertible matrices. Let A and B be invertible n × n matrices.

1. A−1 is invertible, and its inverse is (A−1 )−1 = A.

2. AB is invertible, and its inverse is (AB)−1 = B −1 A−1 (note the order).

Computing the inverse of matrix A,

adjA
A−1 = (26)
|A|


Example: Inverse of a 2×2 matrix

   [ a11  a12 ]^(−1)      adj(A)                1             [  a22  −a12 ]
   [ a21  a22 ]       =  --------  =  -------------------     [ −a21   a11 ]
                            |A|        a11 a22 − a21 a12

Example:
" #−1 " #
4 7 1 6 −7
=
2 6 4 × 6 − 2 × 7 −2 4
" #
1 6 −7
=
10 −2 4
" #
0.6 −0.7
=
−0.2 0.4

Check your answer noting that A(A−1 ) = In


" #" # " # " #
4 7 0.6 −0.7 2.4 − 1.4 −2.8 + 2.8 1 0
= = = I2
2 6 −0.2 0.4 1.2 − 1.2 −1.4 + 2.4 0 1

Or (A−1 )A = In
" #" # " # " #
0.6 −0.7 4 7 2.4 − 1.4 4.2 − 4.2 1 0
= = = I2
−0.2 0.4 2 6 −0.8 + 0.8 −1.4 + 2.4 0 1

Example: Consider again the matrix


 
1 2 3
2 1 3
 

3 1 2

and determine the inverse of the matrix.


Solution: The adjoint of the matrix (see page 31) is


 
−1 −1 3
 5 −7 3 
 

−1 5 −3

while the determinant is 6 (see page 26). Then the inverse is


   
         [ −1  −1   3 ]     [ −1/6  −1/6   1/2 ]
   (1/6) [  5  −7   3 ]  =  [  5/6  −7/6   1/2 ]
         [ −1   5  −3 ]     [ −1/6   5/6  −1/2 ]

Example: Set up the inverse of the matrix


 
1 1 1
A = 2 5 −2
 

1 7 −7

Solution:
Using the diagonal method to solve for the determinant of A, |A|

1 1 1

|A| = 2 5 −2 = −35 − 2 + 14 − 5 + 14 + 14 = 0


1 7 −7

Since matrix A is singular, as evidenced by its zero determinant, it can


thus be concluded that the Inverse of A (or A−1 ) does not exist.

Example: Set-up the inverse of the given matrix


 
1 −1 1
A = 1 2 3
 

4 −2 3

Solution:
|A| = 6 − 12 − 2 − 8 + 6 + 3 = −7


Since the determinant is not zero, then matrix A is said to be non-singular.


In this case, the inverse exists and there is a need to set up the adjoint.

Aij = (−1)i+j |Mij |



   A11 = (−1)^2 |  2  3 | = 12      A12 = (−1)^3 | 1  3 | = 9       A13 = (−1)^4 | 1   2 | = −10
                | −2  3 |                        | 4  3 |                        | 4  −2 |

   A21 = (−1)^3 | −1  1 | = 1       A22 = (−1)^4 | 1  1 | = −1      A23 = (−1)^5 | 1  −1 | = −2
                | −2  3 |                        | 4  3 |                        | 4  −2 |

   A31 = (−1)^4 | −1  1 | = −5      A32 = (−1)^5 | 1  1 | = −2      A33 = (−1)^6 | 1  −1 | = 3
                |  2  3 |                        | 1  3 |                        | 1   2 |

The matrix of cofactors is


 
12 9 −10
[Aij ] =  1 −1 −2 
 

−5 −2 3

The adjoint is
                      [  12   1  −5 ]
   adjA = [Aij ]^T =  [   9  −1  −2 ]
                      [ −10  −2   3 ]
Consequently,  
12 1 −5
adjA 1 
A−1 = =  9 −1 −2

|A| −7
−10 −2 3
Why use inverse? Because with matrices we can’t divide. However we
can multiply by an inverse.
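
A short check of the last example, assuming NumPy is available: the built-in inverse should satisfy A(A^-1) = I, and multiplying it by |A| = −7 should reproduce adj(A).

    import numpy as np

    A = np.array([[1, -1, 1], [1, 2, 3], [4, -2, 3]], dtype=float)
    print(np.round(np.linalg.det(A)))            # -7, so A is non-singular
    A_inv = np.linalg.inv(A)
    print(np.round(A_inv * -7))                  # reproduces adj(A) computed above
    print(np.allclose(A @ A_inv, np.eye(3)))     # True: A(A^-1) = I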


4 Solutions to Systems of Equations

In general, we can think of a system of linear equations as a set of “m”


equations that contains “n” unknowns. There are several forms by which a
system of equations can be written. For instance, we have the system of
linear equations shown below:

Eq. 1 : a11 x1 + a12x2 + a13 x3 + · · · + a1n xn = b1


Eq. 2 : a21 x1 + a22x2 + a23 x3 + · · · + a2n xn = b2
Eq. 3 : a31 x1 + a32x2 + a33 x3 + · · · + a3n xn = b3
.. .. ..
. . .
Eq. m : am1 x1 + am2 x2 + am3 x3 + · · · + amn xn = bm

where aij are constant coefficients of the unknowns xi while the bi are
constants.

This can be transformed to a matrix form:


    
a11 a12 a13 · · · a1n x1 b1
 a21 a22 a23 · · · a2n   x2   b2 
    
    
 a31 a32 a33 · · · a3n   x3  =  b3 
 .. .. .. ..   ..   .. 
    
..
 . . . . .  .   . 
am1 am2 am3 · · · amn xn bm

Referring to the matrix form, we can actually rewrite the system of equa-
tions as a compact matrix operation:

AX = B. (27)

where A is coefficient matrix, X is the column matrix of unknowns/vari-


ables, and B is the column matrix of constants.


4.1 Inverse Method

The inverse method may be applied only to a system of linear equations


in which the number of independent equations is equal to the number of
unknowns. If the number of equations is equal to the number of unknowns,
the equation AX = B will have a matrix of coefficients that is square.

If the matrix of coefficients A is non-singular, the solution to the system is


unique. On the other hand, if A is singular, the system has either a
non-unique solution or no solution at all.

Derivation of the solution for the xi 's:

AX = B
A−1 × AX = A−1 × B
(A−1 A) × X = A−1 × B
I × X = A−1 × B
X = A−1 B

NOTE: the derivation assumes that A−1 exists. If A−1 does not exist, we
can not find the solution to the system AX = B.

Example: Determine the values of x1 , x2 , and x3 in the following system of


equations.
x1 − x2 + x3 = 1
x1 + 2x2 + 3x3 = 6
4x1 − 2x2 + 3x3 = 5

Solution:
In matrix form, the system of equations becomes,
    
1 −1 1 x1 1
1 2 3 x2  = 6
    

4 −2 3 x3 5


The above system is of the form AX = B. Hence, the solution is X =


A−1 B.  
1 −1 1
Let A = 1 2 3 then |A| = −7
 

4 −2 3
 
12 1 −5
adjA 1 
A−1 = =  9 −1 −2

|A| −7
−10 −2 3
Solving for X = A−1 B
    
12 1 −5 1 1
1 
X=  9 −1 −2 6 = 1
   
−7
−10 −2 3 5 1

Hence, the solution is    


x1 1
X = x2  = 1
   

x3 1
You can check your solution:
Equation 1: 1 − 1 + 1 = 1 ✓
Equation 2: 1 + 2(1) + 3(1) = 6 ✓
Equation 3: 4(1) − 2(1) + 3(1) = 5 ✓
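
The same computation in code, as a sketch assuming NumPy is available (in practice np.linalg.solve is preferred over forming the inverse explicitly):

    import numpy as np

    A = np.array([[1, -1, 1], [1, 2, 3], [4, -2, 3]], dtype=float)
    B = np.array([1, 6, 5], dtype=float)

    X = np.linalg.inv(A) @ B        # inverse method: X = A^-1 B
    print(X)                        # [1. 1. 1.]
    print(np.linalg.solve(A, B))    # same solution without forming A^-1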

4.2 Cramer’s Rule

Recall that a system of equation with “n” equations in “n” unknowns can
be modeled as a matrix operation AX = B.


    
a11 a12 a13 · · · a1n x1 b1
 a21 a22 a23 · · · a2n   x2   b2 
    
    
 a31 a32 a33 · · · a3n 
  x3  =  b 3 
   
 .. .. .. . .  . 

.. ..  
 . . . .  ..   .. 

am1 am2 am3 · · · amn xn bm

Let   A    coefficient matrix
      xi   ith variable
      B    right-hand-side constants
      Ai   matrix resulting from replacing the ith column of A
           by the column vector of constants B

The solution of the system of equations can be determined by using the


formula:

|Ai |
xi = (28)
|A|

Notice that regardless of the variable i that is computed, the denominator


of the above formula is fixed at |A|. Therefore, it is suggested that the
determinant of the coefficient matrix be the first to be computed.

Example: Use Cramer’s Rule to solve for x1 , x2 , and x3 of the system


    
1 2 1 x1 1
3 3 1  x2  = 2
    

1 2 −1 x3 3

Solution:
Compute for the determinant |A|,
 
1 2 1
|A| = 3 3 1  = −3 + 2 + 6 − 3 − 2 + 6 = 6
 

1 2 −1


Solving for x0i s, replace ith column with column B


 
1 2 1
A1 =  2 3 1
 

3 2 −1

The determinant of A1 is
 
1 2 1
|A1 | =  2 3 1  = −3 + 6 + 4 − 9 + 4 − 2 = 0
 

3 2 −1

Solving for x1
|A1 | 0
x1 = = =0
|A| 6
Using the same procedure for solving x2 and x3 ,
 
1 1 1
A2 = 3 2 1
 

1 3 −1

The determinant of A2 is
 
1 1 1
|A2 | = 3 2 1  = −2 + 9 + 1 − 2 − 3 + 3 = 6
 

1 3 −1

Solving for x2
|A2 | 6
x2 = = =1
|A| 6
For x3 ,  
1 2 1
A3 = 3 3 2
 

1 2 3


The determinant of A3 is
 
1 2 1
|A3 | = 3 3 2  = 9 + 4 + 6 − 3 − 4 − 18 = −6
 

1 2 3

Solving for x3
|A3 | −6
x3 = = = −1
|A| 6
Hence, the solution is    
x1 0
x2  =  1 
   

x3 −1
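
Cramer's rule is a few lines of code: replace the ith column of A by B and take the ratio of determinants. A sketch assuming NumPy is available; the function name cramer is ours.

    import numpy as np

    def cramer(A, B):
        detA = np.linalg.det(A)
        if np.isclose(detA, 0.0):
            raise ValueError("coefficient matrix must be non-singular")
        x = np.empty(len(B))
        for i in range(len(B)):
            Ai = A.copy()
            Ai[:, i] = B                      # column i replaced by the constants
            x[i] = np.linalg.det(Ai) / detA   # x_i = |A_i| / |A|
        return x

    A = np.array([[1, 2, 1], [3, 3, 1], [1, 2, -1]], dtype=float)
    B = np.array([1, 2, 3], dtype=float)
    print(cramer(A, B))                       # [ 0.  1. -1.], matching the example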

4.3 LU Factorization

4.3.1 Elementary Row (Column) Operations on Matrices

An elementary row (column) operation on a matrix A is any one of the


following operations:

Type I    Interchange any two rows (columns).
Type II   Multiply a row (column) by a non-zero constant k.
Type III  Add k times the elements of another row (column) to the
          corresponding elements of a row (column).

Example: Let  
1 0 −2 0
A = −3 1 6 −4
 

2 −8 2 2
Interchanging rows 1 and 3 of A (R1 ↔ R3 ), we obtain
 
2 −8 2 2
B = −3 1 6 −4
 

1 0 −2 0


Multiplying row 3 by 1/2 (R3' → (1/2)R3 ), we obtain


 
1 0 −2 0
C = −3 1 6 −4
 

1 −4 1 1

Adding 3 times the elements in row 1 to the elements in row 2 (R2' → R2 + 3R1 ), we obtain
 
1 0 −2 0
D = 0 1 0 −4
 

2 −8 2 2
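
The same three operations expressed with NumPy indexing (a sketch assuming NumPy is available):

    import numpy as np

    A = np.array([[1, 0, -2, 0], [-3, 1, 6, -4], [2, -8, 2, 2]], dtype=float)

    B = A[[2, 1, 0], :]          # Type I : interchange rows 1 and 3
    C = A.copy()
    C[2] = 0.5 * C[2]            # Type II : multiply row 3 by 1/2
    D = A.copy()
    D[1] = D[1] + 3 * D[0]       # Type III: R2' = R2 + 3 R1
    print(B, C, D, sep="\n\n")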

4.3.2 Direct LU Factorization

In theory any square matrix A may be factored into a product of lower and
upper triangular matrices.

Let us take the case of a 4th order matrix.


     
   [ a11 a12 a13 a14 ]   [ l11   0    0    0  ]   [ 1  u12  u13  u14 ]
   [ a21 a22 a23 a24 ]   [ l21  l22   0    0  ]   [ 0   1   u23  u24 ]
   [ a31 a32 a33 a34 ] = [ l31  l32  l33   0  ] · [ 0   0    1   u34 ]
   [ a41 a42 a43 a44 ]   [ l41  l42  l43  l44 ]   [ 0   0    0    1  ]

Notice that the diagonal elements of the upper triangular matrix have been
set to values of 1 for reason of simplicity. Also, LU Factorization is not
unique.


From matrix multiplication, we know that:

   a11 = l11 (1) + 0(0) + 0(0) + 0(0)                   or   l11 = a11
   a21 = l21 (1) + l22 (0) + 0(0) + 0(0)                or   l21 = a21
   a31 = l31 (1) + l32 (0) + l33 (0) + 0(0)             or   l31 = a31
   a41 = l41 (1) + l42 (0) + l43 (0) + l44 (0)          or   l41 = a41

   a12 = l11 (u12 ) + 0(1) + 0(0) + 0(0)                or   u12 = a12 /l11
   a13 = l11 (u13 ) + 0(u23 ) + 0(1) + 0(0)             or   u13 = a13 /l11
   a14 = l11 (u14 ) + 0(u24 ) + 0(u34 ) + 0(1)          or   u14 = a14 /l11

   a22 = l21 (u12 ) + l22 (1) + 0(0) + 0(0)             or   l22 = a22 − l21 (u12 )
   a32 = l31 (u12 ) + l32 (1) + l33 (0) + 0(0)          or   l32 = a32 − l31 (u12 )
   a42 = l41 (u12 ) + l42 (1) + l43 (0) + l44 (0)       or   l42 = a42 − l41 (u12 )

   a23 = l21 (u13 ) + l22 (u23 ) + 0(1) + 0(0)          or   u23 = (a23 − l21 (u13 ))/l22
   a24 = l21 (u14 ) + l22 (u24 ) + 0(u34 ) + 0(1)       or   u24 = (a24 − l21 (u14 ))/l22

   a33 = l31 (u13 ) + l32 (u23 ) + l33 (1) + 0(0)       or   l33 = a33 − l31 (u13 ) − l32 (u23 )
   a43 = l41 (u13 ) + l42 (u23 ) + l43 (1) + l44 (0)    or   l43 = a43 − l41 (u13 ) − l42 (u23 )

   a34 = l31 (u14 ) + l32 (u24 ) + l33 (u34 ) + 0(1)    or   u34 = (a34 − l31 (u14 ) − l32 (u24 ))/l33
   a44 = l41 (u14 ) + l42 (u24 ) + l43 (u34 ) + l44 (1) or   l44 = a44 − l41 (u14 ) − l42 (u24 ) − l43 (u34 )


4.3.3 Solution via LU Decomposition Method

Recall: A system of equations can be written as a compact matrix opera-


tion AX = B.

If we factor out the coefficient matrix A as L · U and substitute to AX = B,


we can generate the equation L(U X) = B.

Momentarily define U X = Y which suggests LY = B. From this transfor-


mation, we have actually decomposed AX = B to two systems of equa-
tions.

Two-stage solution:

1. Solve for Y in the equation LY = B using forward substitution.

2. Solve for X in the equation U X = Y using back substitution.

Example: Solve for xi .


    
−4 2 1 x1 3
−3 5 −1 x2  = 4
    

1 2 1 x3 8

Solution: Let  
−4 2 1
A = −3 5 −1
 

1 2 1
Transforming to L and U .

For U , use row operations to zero the elements below the diagonal,

   [ −4  2   1 ]                             [ −4   2     1  ]
   [ −3  5  −1 ]   R2' = R2 − (3/4)R1   ⇒    [  0  7/2  −7/4 ]
   [  1  2   1 ]   R3' = R3 − (−1/4)R1       [  0  5/2   5/4 ]

   [ −4   2     1  ]                           [ −4   2     1  ]
   [  0  7/2  −7/4 ]                      ⇒    [  0  7/2  −7/4 ]  =  U
   [  0  5/2   5/4 ]   R3' = R3 − (5/7)R2      [  0   0    5/2 ]

To get L, let the diagonal elements be equal to 1 and the lower elements
equal to the multipliers used to get U

       [   1    0   0 ]
   L = [  3/4   1   0 ]
       [ −1/4  5/7  1 ]

Hence, one LU pair is

       [   1    0   0 ]         [ −4   2     1  ]
   L = [  3/4   1   0 ]     U = [  0  7/2  −7/4 ]
       [ −1/4  5/7  1 ]         [  0   0    5/2 ]

To check, LU should be equal to A.

   [   1    0   0 ]   [ −4   2     1  ]   [ −4  2   1 ]
   [  3/4   1   0 ] × [  0  7/2  −7/4 ] = [ −3  5  −1 ]
   [ −1/4  5/7  1 ]   [  0   0    5/2 ]   [  1  2   1 ]

Solving now for xi .

Stage 1: Forward substitution using LY = B

   [   1    0   0 ] [ y1 ]   [ 3 ]      y1 = 3                           →  y1 = 3
   [  3/4   1   0 ] [ y2 ] = [ 4 ]      (3/4)y1 + y2 = 4                 →  y2 = 7/4
   [ −1/4  5/7  1 ] [ y3 ]   [ 8 ]      −(1/4)y1 + (5/7)y2 + y3 = 8      →  y3 = 15/2

Stage 2: Back substitution using U X = Y

   [ −4   2     1  ] [ x1 ]   [  3   ]     (5/2)x3 = 15/2                →  x3 = 3
   [  0  7/2  −7/4 ] [ x2 ] = [ 7/4  ]     (7/2)x2 − (7/4)x3 = 7/4       →  x2 = 2
   [  0   0    5/2 ] [ x3 ]   [ 15/2 ]     −4x1 + 2x2 + x3 = 3           →  x1 = 1


Hence, the answer is

   [ x1 ]   [ 1 ]
   [ x2 ] = [ 2 ]
   [ x3 ]   [ 3 ]

Alternative LU decomposition

       [ −4   0    0  ]        [ 1  −1/2  −1/4 ]
   L = [ −3  7/2   0  ]    U = [ 0   1    −1/2 ]
       [  1  5/2  5/2 ]        [ 0   0     1   ]

Solving for xi using the alternative LU decomposition

Stage 1: Forward substitution using LY = B

   [ −4   0    0  ] [ y1 ]   [ 3 ]     −4y1 = 3                          →  y1 = −3/4
   [ −3  7/2   0  ] [ y2 ] = [ 4 ]     −3y1 + (7/2)y2 = 4                →  y2 = 1/2
   [  1  5/2  5/2 ] [ y3 ]   [ 8 ]     y1 + (5/2)y2 + (5/2)y3 = 8        →  y3 = 3

Stage 2: Back substitution using U X = Y

   [ 1  −1/2  −1/4 ] [ x1 ]   [ −3/4 ]    x3 = 3                            →  x3 = 3
   [ 0   1    −1/2 ] [ x2 ] = [  1/2 ]    x2 − (1/2)x3 = 1/2                →  x2 = 2
   [ 0   0     1   ] [ x3 ]   [   3  ]    x1 − (1/2)x2 − (1/4)x3 = −3/4     →  x1 = 1

Hence, we get the same answer as in the previous LU decomposition.

   [ x1 ]   [ 1 ]
   [ x2 ] = [ 2 ]
   [ x3 ]   [ 3 ]
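
The whole two-stage procedure fits in a short routine. The sketch below performs a Doolittle-style factorization without pivoting (like the first LU pair above) followed by forward and back substitution; it assumes NumPy is available and the function name lu_solve is ours. Production codes pivot rows for numerical stability.

    import numpy as np

    def lu_solve(A, B):
        n = len(B)
        L, U = np.eye(n), A.astype(float).copy()
        for k in range(n - 1):                    # eliminate below the diagonal
            for i in range(k + 1, n):
                L[i, k] = U[i, k] / U[k, k]       # multiplier stored in L
                U[i, :] -= L[i, k] * U[k, :]
        y = np.zeros(n)
        for i in range(n):                        # stage 1: forward substitution, LY = B
            y[i] = B[i] - L[i, :i] @ y[:i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):            # stage 2: back substitution, UX = Y
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return L, U, x

    A = np.array([[-4, 2, 1], [-3, 5, -1], [1, 2, 1]], dtype=float)
    B = np.array([3, 4, 8], dtype=float)
    L, U, x = lu_solve(A, B)
    print(x)                                      # [1. 2. 3.], as in the example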

4.4 Augmented Matrix

If A is an m × n matrix and B is an m × p matrix, then the augmented matrix
of A and B, denoted by [A : B], is the matrix formed by the elements of A
and B separated by a vertical bar.


Example:    
1 −2 5 −2 1
If A =  7 1 −6 and B = 0 2
   

−3 −3 8 7 −4
 
1 −2 5 | −2 1
then [A : B] =  7 1 −6 | 0 2
 

−3 −3 8 | 7 −4
The augmented matrix associated to a system of linear equations AX = B
is the matrix [A : B]. For example, we can now rewrite the system of
equations:
      
2 −1 3 x 1 2 −1 3 | 1
1 2 6 y  = 1 as simply  1 2 6 | 1
      

−2 4 1 z 1 −2 4 1 | 1

4.4.1 Echelon Form of a Matrix

An m × n matrix A is said to be in row echelon form if it satisfies the


following properties:

1. All rows whose elements are all zeros, if exist, are at the bottom of
the matrix.

2. If at least one element on a row is not equal to zero, the first non-zero
element is 1, and this is called the leading entry of the row.

3. If two successive rows of the matrix have leading entries, the leading
entry of the row below the other row must appear to the right of the
leading entry of the other row.

An m × n matrix A is said to be in reduced row echelon form if added to


the first three properties it satisfies a fourth property:


4. If a column contains a leading entry of some row, then all the other
entries must be zero.

Examples:
The following matrices are not in row echelon form. Why not?
   
       [ 1  2  0  −1   3  2 ]        [ 1  0   0  0 ]        [ 1  2  2  −4 ]
       [ 0  1  0   1  −3  5 ]        [ 0  1   2  0 ]        [ 0  1  6  −7 ]
   A = [ 0  0  0   1   9  2 ]    B = [ 0  0  −1  0 ]    C = [ 0  0  1   8 ]
       [ 0  0  0   0   0  0 ]        [ 0  0   0  1 ]        [ 0  1  0   0 ]
       [ 0  0  0   0   0  1 ]        [ 0  0   0  0 ]

The following matrices are in row echelon form but not in reduced row
echelon form.
 
  1 0 8 0
1 2 3 4  
1 0 −5 6 2 0 1 0 −3
 
0 1 2 3  
D=  E = 0 1 0 0 −8 F =  0 0 1 0
   
0 0 1 2  
0 0 1 0 0 0 0 0 0 
 
0 0 0 1
0 0 0 0

The following matrices are in reduced row echelon form. Hence, in row
echelon form.
 
  1 0 −3
1 0 0 0  
0 1 0 0 2 0 1 0 
 
0 1 0 0  
G=  H = 0 0 1 0 −3 J =  0 0 0
    
0 0 1 0  
0 0 0 1 0 0 0 0
 
0 0 0 1
 
0 0 0

4.4.2 Elementary Row Operations as Applied to a System of Equa-


tions A:B

As applied to the augmented matrix [A : B] of a system of equations, the
three elementary row operations correspond to the following:


TYPE I    Rearranging the order of the equations
TYPE II   Multiplying both sides of an equation by a non-zero constant
TYPE III  Adding a multiple of one equation to another equation

From this observation, we can see that applying these operations to [A : B] does
not alter the solution of the system.

4.4.3 Row (Column) Equivalent Matrices

An m × n matrix A is row (column) equivalent to an m × n matrix B if B


can be obtained from A by applying a finite sequence of elementary row
operations.

4.4.4 Theorems on Matrix Equivalence

1. Every nonzero m × n matrix A = [aij ] is row (column) equivalent to a


matrix in row (column) echelon form.

2. Every nonzero m × n matrix A = [aij ] is row (column) equivalent to a


matrix in reduced row (column) echelon form.

3. Let AX = B and CX = D be two systems of “m” linear equations


with “n” unknowns. If the augmented matrices [A : B] and [C : D]
are row equivalent, then the linear systems are equivalent (i.e. they
have exactly the same solutions).

4. As a corollary to the third theorem, if A and B are row equivalent


matrices, then the homogeneous systems AX = 0 and BX = 0 are
equivalent.


4.4.5 Solutions to a System of “m” Equations with “m” Unknowns

In general a system of “m” equations with “n” unknowns may be written in


matrix form:
    
a11 a12 a13 · · · a1n x1 b1
 a21 a22 a23 · · · a2n   x2   b2 
    
    
 a31 a32 a33 · · · a3n   x3  =  b3 
 .. .. .. ..   ..   .. 
    
..
 . . . . .  .   . 
am1 am2 am3 · · · amn xn bm

This system may now be represented by the augmented notation:


 
a11 a12 a13 · · · a1n | b1
 a21 a22 a23 · · · a2n | b2 
 
 
 a31 a32 a33 · · · a3n | b3 
 .. .. .. .. .. 
 
..
 . . . . . | . 
am1 am2 am3 · · · amn | bm

Applying the theorems on equivalent matrices we now have the following


methods of solution.

4.5 Gaussian Elimination Method

The objective of the Gaussian Elimination Method is to transform the aug-


mented matrix [A : B] to the matrix [A∗ : B ∗ ] in row echelon form by
applying a series of elementary row transformations. Getting the solution
of the system [A∗ : B ∗ ] using back substitution will also give the solution to
the original system [A : B].

To reduce any matrix to row echelon form, apply the following steps:

1. Find the leftmost non-zero column.


2. If the 1st row has a zero in the column of step 1, interchange it with
one that has a non-zero entry in the same column.

3. Obtain zeros below the leading entry by adding suitable multiples of
the top row to the rows below it.

4. Cover the top row and repeat the same process starting with step 1
applied to the leftover submatrix. Repeat this process with the rest of
the rows. For each row obtain leading entry 1 by dividing each row
by their corresponding leading entry.

Example: Solve the system of linear equations

x +2y +3z = 9
2x −y +z = 8
3x −z = 3

The augmented matrix is


 
1 2 3 | 9
[A : B] = 2 −1 1 | 8
 

3 0 −1 | 3

Transforming into row echelon form


 
1 2 3 | 9
[A∗ : B ∗ ] = 0 1 1 | 2
 

0 0 1 | 3

Use back substitution to solve

z=3 −→ z = 3
y+z =2 −→ y = −1
x + 2y + 3z = 9 −→ x = 2


Hence the solution is    


x 2
y  = −1
   

z 3
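
The elimination and back-substitution steps can be written compactly; the sketch below assumes NumPy is available, swaps rows only when a zero pivot is met (step 2 above), and the function name gaussian_elimination is ours.

    import numpy as np

    def gaussian_elimination(A, B):
        n = len(B)
        M = np.hstack([A.astype(float), B.reshape(-1, 1).astype(float)])   # [A : B]
        for k in range(n):
            if np.isclose(M[k, k], 0.0):              # interchange with a nonzero pivot row
                swap = k + np.argmax(np.abs(M[k:, k]))
                M[[k, swap]] = M[[swap, k]]
            M[k] = M[k] / M[k, k]                     # leading entry 1
            for i in range(k + 1, n):
                M[i] -= M[i, k] * M[k]                # zeros below the leading entry
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):                # back substitution
            x[i] = M[i, n] - M[i, i + 1:n] @ x[i + 1:]
        return x

    A = np.array([[1, 2, 3], [2, -1, 1], [3, 0, -1]])
    B = np.array([9, 8, 3])
    print(gaussian_elimination(A, B))                 # [ 2. -1.  3.]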

4.6 Gauss-Jordan Reduction Method

A second method called the Gauss-Jordan Reduction Method gets rid of


the back substitution phase. The objective of the Gauss-Jordan Reduction
Method is to transform the augmented matrix [A : B] to the matrix [A∗ : B ∗ ]
in reduced row echelon form by applying a series of elementary row trans-
formations. Doing this will automatically give the solution of the system
[A∗ : B ∗ ] which also provides the solution to the original system [A : B].

To reduce any matrix to reduced row echelon form, apply the following
steps (SINE):

1. Search – search the ith column of the augmented matrix from the
ith row to the nth row for the maximum pivot, i.e. element with the
largest absolute value.

2. Interchange – assuming the maximum pivot occurs in the jth row,


interchange the ith row and the jth row so that the maximum pivot
will now occur in the diagonal position.

3. Normalize – normalize the new ith row by dividing it by the maximum


pivot on the diagonal position.

4. Eliminate – eliminate the ith column from the first up to the nth equa-
tion, except in the ith equation itself using the transformations.


Example: The linear system

x + 2y + 3z = 9
2x − y + z = 8
3x − z =3

has the augmented matrix


 
1 2 3 | 9
[A : B] = 2 −1 1 | 8
 

3 0 −1 | 3

which can be transformed into a matrix in reduced row echelon form


 
1 0 0 | 2
[A∗ : B ∗ ] = 0 1 0 | −1
 

0 0 1 | 3

Thus, the solution is    


x 2
y  = −1
   

z 3
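
Computer algebra systems perform this reduction directly. A sketch assuming the SymPy library is available; its rref() method carries out the Gauss-Jordan reduction in exact arithmetic.

    from sympy import Matrix

    aug = Matrix([[1, 2, 3, 9],
                  [2, -1, 1, 8],
                  [3, 0, -1, 3]])
    rref_form, pivot_columns = aug.rref()
    print(rref_form)          # last column holds the solution x = 2, y = -1, z = 3
    print(pivot_columns)      # (0, 1, 2)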

4.7 Rank of a Matrix

The rank of a matrix A = [aij ] is the order of the largest square submatrix
of A with a non-zero determinant. We denote the rank of A by rank(A) or
simply r(A).
Example: What is the rank of A?
 
1 2 3 4
A = 2 1 4 3 
 

3 0 5 −10


Solution:
Checking out first the determinants of the 3×3 submatrices:

   | 1  2  3 |         | 1  2   4  |          | 1  3   4  |          | 2  3   4  |
   | 2  1  4 | = 0     | 2  1   3  | = 36     | 2  4   3  | = 24     | 1  4   3  | = −60
   | 3  0  5 |         | 3  0  −10 |          | 3  5  −10 |          | 0  5  −10 |

Since at least one 3×3 submatrix of A has a non-zero determinant, then


r(A) = 3.

What is the rank of B?


 
1 2 3 4
 2 −4 6 −8
 
 
3 6 9 12 
−4 8 −12 16

The determinant of B is equal to zero (THEOREM: Proportional rows). It


can also be shown that 3×3 submatrices of B will have determinants equal
to zero.
2 −4 −8

3 6 12 = 0


−4 8 16

But at least one 2×2 submatrix has a non-zero determinant

   | 2  −4 |
   | 3   6 | = 24 ≠ 0

Therefore, the r(B) = 2.
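
Both ranks can be confirmed numerically (a sketch assuming NumPy is available):

    import numpy as np

    A = np.array([[1, 2, 3, 4], [2, 1, 4, 3], [3, 0, 5, -10]])
    B = np.array([[1, 2, 3, 4], [2, -4, 6, -8], [3, 6, 9, 12], [-4, 8, -12, 16]])
    print(np.linalg.matrix_rank(A))   # 3
    print(np.linalg.matrix_rank(B))   # 2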

4.7.1 Theorems on Ranks

1. The rank of a matrix is not altered by any sequence of elementary


row (column) transformations.

2. Let A = [aij ] and B = [bij ] be two m × n matrices, if r(A) = r(B) then


A and B are equivalent.

3. If A = [aij ] and B = [bij ] are m × n matrices, and r(A) = r(B) = n,


then r(AB) = r(BA) = n.

Example: What is the rank of C?


 
1 2 3 4 5
2 3 4 5 6
 
 
C=
3 4 5 6 7
4 5 6 7 8
 
5 6 7 8 9

Solution: Operating on the rows of matrix C, we obtain the equivalent matrix C'

        [ 1  2  3  4  5 ]
        [ 1  1  1  1  1 ]   R2' = R2 − R1
   C' = [ 1  1  1  1  1 ]   R3' = R3 − R2
        [ 1  1  1  1  1 ]   R4' = R4 − R3
        [ 1  1  1  1  1 ]   R5' = R5 − R4
We can easily see that all 5×5, 4×4, and 3×3 submatrices of C' have
determinants equal to zero (THEOREM: identical rows). But at least
one 2×2 submatrix of C' has a non-zero determinant:

   | 1  2 |
   | 1  1 | = −1 ≠ 0

Consequently r(C') = 2. But C and C' are equivalent matrices and hence
they have equal ranks. Therefore r(C) is also equal to 2.

4.7.2 Ranks and the Types of Solution to a System of Equations

Recall that for the system of “m” linear equations in “n” unknowns AX = B.
We can associate the system of equation to the augmented matrix of the


system [A : B].

The type of solution may be classified as unique, non-unique or inconsis-


tent. Applying the concept of rank to the augmented matrix [A : B], we
have the following propositions:

1. If r(A) = r([A : B]) = n then the solution to the system is unique.


Example:
   
1 1 1 | −1 1 0 0 | −3
9 3 1 | −27 → 0 1 0 | −1
   

1 −1 1 | 1 0 0 1 | 3

2. If r(A) = r([A : B]) < n, then the solution to the system is non-
unique.
Example:
   [ 1   1  −1 |  0 ]        [ 1  0   0 |  1 ]
   [ 2  −1   1 |  3 ]   →    [ 0  1  −1 | −1 ]
   [ 4  −2   2 |  6 ]        [ 0  0   0 |  0 ]

3. If r(A) < r([A : B]), then the system has no solution, i.e. it is inconsistent.
Example:
   
2 −1 1 | 3 1 0 0 | 45
4 2 −2 | 2 → 0 1 0 | − 32 
   

6 1 −1 | 6 0 0 0 | −3

Example 1: Rank and the Type of Solution to a System

For what values of k will the system of equations have

1. a unique solution

2. a non-unique solution

3. no solution


   2x − 3y + 3z = 2k
    x − 2y + 4z = 5k
       − y + 5z = 8k²

Solution: In augmented matrix form,


 
2 −3 3 | 2k
1 −2 4 | 5k 
 

0 −1 5 | 8k 2

Performing Gaussian Elimination Method:


   
   [ 2  −3  3 | 2k  ]   R1 ↔ R2    [ 1  −2  4 | 5k  ]   R2' = R2 − 2R1
   [ 1  −2  4 | 5k  ]     −→       [ 2  −3  3 | 2k  ]        −→
   [ 0  −1  5 | 8k² ]              [ 0  −1  5 | 8k² ]

   [ 1  −2   4 | 5k  ]   R3' = R3 + R2    [ 1  −2   4 | 5k       ]
   [ 0   1  −5 | −8k ]        −→          [ 0   1  −5 | −8k      ]
   [ 0  −1   5 | 8k² ]                    [ 0   0   0 | 8k² − 8k ]
Therefore, we have the following conclusions:

1. For a unique solution, r(A) = r[A : B] = n


There will be no value of k that will satisfy this since r(A) = 2 < n = 3.

2. For non-unique solutions, r(A) = r[A : B] < n.


This will be satisfied if r[A : B] is also equal to 2. This will happen
when the last element in the third row of the augmented matrix is
also equal to zero.

   8k² − 8k = 0   ⇒   k = 0, 1

3. For the system to be inconsistent, r(A) < r[A : B]. This will be
satisfied if r[A : B] = 3 > 2. This will happen when the last element


in the third row of the augmented matrix is not equal to zero.

   8k² − 8k ≠ 0   ⇒   k ≠ 0, 1

Example 2:
For what values of m will the system of equations have

1. a unique solution

2. a non-unique solution

3. no solution

a + b + c = 2
(m + 2)a + c = 0
2
a + b + m c = m+4
Solution: In augmented matrix form
 
1 1 1 | 2
m + 2 0 1 | 0 
 

1 1 m2 | m + 4

Performing Gaussian Elimination Method:


 
1 1 1 | 2 R20 = R2 − (m + 2)R1
m + 2 0 1 | 0  −→
 
2 0
1 1 m | m+4 R3 = R3 − R1
 
1 1 1 | 2
0 −m − 2 −m − 1 | −2m − 4
 

0 0 m2 − 1 | m + 2
Therefore we have the following conclusions:

1. For a unique solution, r(A) = r[A : B] = n


This will be satisfied if m² − 1 ≠ 0 and −m − 2 ≠ 0. Thus we have a
unique solution if m ≠ ±1, −2.


2. For non-unique solutions, r(A) = r[A : B] < n.


This will happen when the last element in the third row of matrix A
and the augmented matrix are both equal to zero.

   m² − 1 = 0   and   m + 2 = 0

There is no value of m that will satisfy both equations. The other value
of m to be checked is m = −2. Substituting m = −2 into the system
gives:
1 1 1 | 2
0 0 1 | 0
 

0 0 3 | 0
Clearly, we could see that the resulting system gives a non-unique
solution because, r(A) = r[A : B] = 2 < 3. Thus, the system gives
non-unique solutions when m = −2.

3. For the system to be inconsistent, r(A) < r[A : B].

This will be satisfied if r[A : B] = 3 > 2, which happens when the last
element in the third row of the augmented matrix is not equal to zero
while the last element of the third row of A is equal to zero:

m² − 1 = 0 and m + 2 ≠ 0

Thus, the system is inconsistent when m = ±1.


5 Eigenvalues

5.1 Definitions

Let A be an n × n matrix,

1. An eigenvector of A is a nonzero vector x in Rn such that Ax = λx,


for some scalar λ.

2. An eigenvalue of A is a scalar λ such that the equation Ax = λx has


a nontrivial solution.

If Ax = λx for x ≠ 0, we say that λ is the eigenvalue for x, and that x is an


eigenvector for λ.

The German prefix “eigen” roughly translates to “self” or “own”. An eigen-


vector of A is a vector that is taken to a multiple of itself by the matrix
transformation T (x) = Ax, which perhaps explains the terminology. On
the other hand, “eigen” is often translated as “characteristic”; we may think
of an eigenvector as describing an intrinsic, or characteristic, property of
A.

NOTES: Eigenvalues and eigenvectors are only for square matrices.


Eigenvectors are by definition nonzero. Eigenvalues may be equal to zero.

Example: Which are eigenvectors? What are their eigenvalues?

A = [  2  2 ]    and vectors    v = [ 1 ]    w = [ 2 ]
    [ −4  8 ]                       [ 1 ]        [ 1 ]

Solution:

Av = [  2  2 ] [ 1 ]  =  [ 4 ]  =  4v
     [ −4  8 ] [ 1 ]     [ 4 ]

Vasquez (January 2020) 60/74


First Exam Notes MMME 21

Hence, v is an eigenvector of A with eigenvalue λ = 4. On the other hand,

Aw = [  2  2 ] [ 2 ]  =  [ 6 ]   which is not a scalar multiple of w.
     [ −4  8 ] [ 1 ]     [ 0 ]

Hence, w is not an eigenvector of A.
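A quick numerical check of this example (a minimal sketch, not in the original notes, assuming NumPy):

# Minimal sketch (assumes NumPy): verify that v is an eigenvector of A and w is not.
import numpy as np

A = np.array([[2., 2.], [-4., 8.]])
v = np.array([1., 1.])
w = np.array([2., 1.])

print(A @ v)                 # [4. 4.]  -> equals 4*v, so v is an eigenvector, lambda = 4
print(A @ w)                 # [6. 0.]  -> not a scalar multiple of w
print(np.linalg.eigvals(A))  # the full set of eigenvalues of A (4 and 6)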

NOTE: An n × n matrix A has at most n eigenvalues.

5.2 Eigenspaces

Let A be an n × n matrix, and let λ be a scalar. The eigenvectors with


eigenvalue λ, if any, are the nonzero solutions of the equation Ax = λx.
We can rewrite this equation as follows:

Ax = λx (29)
Ax − λx = 0 (30)
Ax − λIn x = 0 (31)
(A − λIn )x = 0 (32)

Therefore, the eigenvectors of A with eigenvalue λ, if any, are the nontrivial


solutions of the matrix equation (A − λIn )x = 0, i.e., the nonzero vectors
in Nul(A − λIn ). If this equation has no nontrivial solutions, then λ is not
an eigenvalue of A.

The above observation is important because it says that finding the eigen-
vectors for a given eigenvalue means solving a homogeneous system of
equations. For instance, if

A = [  7   1   3 ]
    [ −3   2  −3 ]
    [ −3  −2  −1 ]


then an eigenvector with eigenvalue λ is a non-trivial solution of the matrix


equation

[  7   1   3 ] [ x ]       [ x ]
[ −3   2  −3 ] [ y ]  = λ  [ y ]
[ −3  −2  −1 ] [ z ]       [ z ]
This translates to the system of equations

7x + y + 3z = λx                  (7 − λ)x + y + 3z = 0
−3x + 2y − 3z = λy      =⇒        −3x + (2 − λ)y − 3z = 0
−3x − 2y − z = λz                 −3x − 2y − (1 + λ)z = 0

This is the same as the homogeneous matrix equation

[ 7 − λ    1        3    ] [ x ]
[  −3    2 − λ     −3    ] [ y ]  =  0
[  −3     −2     −1 − λ  ] [ z ]

i.e., (A − λI3 )x = 0

Let A be an n × n matrix, and let λ be an eigenvalue of A. The λ-


eigenspace of A is the solution set of (A − λIn )x = 0, i.e., the subspace
Nul(A − λIn ).

The λ-eigenspace is a subspace because it is the null space of a matrix,


namely, the matrix A − λIn . This subspace consists of the zero vector and
all eigenvectors of A with eigenvalue λ.

NOTE: Since a nonzero subspace is infinite, every eigenvalue has infinitely


many eigenvectors. (For example, multiplying an eigenvector by a nonzero
scalar gives another eigenvector.) On the other hand, there can be at
most n linearly independent eigenvectors of an n × n matrix, since Rn has
dimension n.

Let A be an n × n matrix and let λ be a number.

1. λ is an eigenvalue of A if and only if (A − λIn )x = 0 has a nontrivial


solution, if and only if Nul(A − λIn ) ≠ 0.

2. In this case, finding a basis for the λ-eigenspace of A means finding


a basis for Nul(A − λIn ), which can be done by finding the parametric
vector form of the solutions of the homogeneous system of equations
(A − λIn )x = 0 (see the numerical sketch after this list).

3. The dimension of the λ-eigenspace of A is equal to the number of


free variables in the system of equations (A − λIn )x = 0 which is the
number of columns of A − λIn without pivots.

4. The eigenvectors with eigenvalue λ are the nonzero vectors in Nul(A−


λIn ), or equivalently, the nontrivial solutions of (A − λIn )x = 0.
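The following sketch (not from the original notes) illustrates the list above numerically, assuming Python with NumPy and SciPy; scipy.linalg.null_space returns an orthonormal basis of Nul(A − λIn ), so the number of columns it returns is the dimension of the λ-eigenspace:

# Minimal sketch (assumes NumPy/SciPy): a basis of each lambda-eigenspace of A
# is a basis of Nul(A - lambda*I), computed here from an SVD-based null space.
import numpy as np
from scipy.linalg import null_space

A = np.array([[ 7.,  1.,  3.],
              [-3.,  2., -3.],
              [-3., -2., -1.]])
n = A.shape[0]

for lam in np.linalg.eigvals(A):      # numerically computed eigenvalues of A
    # loose tolerance, since lam is only a floating-point approximation
    basis = null_space(A - lam * np.eye(n), rcond=1e-10)
    print(lam, basis.shape[1])        # eigenvalue and dimension of its eigenspace
    print(basis)                      # columns: a basis of the lambda-eigenspace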

5.3 Characteristic Polynomial

Let A be an n × n matrix. The characteristic polynomial of A is the


function f (λ) given by

f (λ) = det(A − λIn ) (33)

Example: Find the characteristic polynomial of the matrix

A = [ 5  2 ]
    [ 2  1 ]


Solution:

f (λ) = det(A − λIn ) = det( [ 5  2 ]  −  [ λ  0 ] )
                           ( [ 2  1 ]     [ 0  λ ] )

      = det [ 5 − λ     2    ]
            [   2     1 − λ  ]

      = (5 − λ)(1 − λ) − (2)(2)

      = λ² − 6λ + 1

Example: Find the characteristic polynomial of the matrix

A = [  0    6   8 ]
    [ 1/2   0   0 ]
    [  0   1/2  0 ]

Solution:

f (λ) = det(A − λIn ) = det( [  0    6   8 ]     [ λ  0  0 ] )
                           ( [ 1/2   0   0 ]  −  [ 0  λ  0 ] )
                           ( [  0   1/2  0 ]     [ 0  0  λ ] )

      = det [ −λ     6     8  ]
            [ 1/2   −λ     0  ]
            [  0    1/2   −λ  ]

      = −λ³ + 0 + 8(1/2)(1/2) − 0 − 0 − 6(1/2)(−λ)

      = −λ³ + 3λ + 2
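Both characteristic polynomials can be reproduced symbolically. The sketch below is not part of the original notes and assumes SymPy; note that SymPy's charpoly uses the convention det(λI − A), which differs from det(A − λIn ) by a factor of (−1)^n:

# Minimal sketch (assumes SymPy): characteristic polynomials of the two examples.
# SymPy computes det(lam*I - A), i.e. (-1)**n * det(A - lam*I).
from sympy import Matrix, Rational, symbols

lam = symbols('lam')

A1 = Matrix([[5, 2], [2, 1]])
print(A1.charpoly(lam).as_expr())    # lam**2 - 6*lam + 1   (n = 2: same sign)

A2 = Matrix([[0, 6, 8],
             [Rational(1, 2), 0, 0],
             [0, Rational(1, 2), 0]])
print(A2.charpoly(lam).as_expr())    # lam**3 - 3*lam - 2 = -(-lam**3 + 3*lam + 2)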

Theorem: Eigenvalues are roots of the characteristic polynomial


Let A be an n × n matrix, and let f (λ) = det(A − λIn ) be its characteristic
polynomial. Then a number λ0 is an eigenvalue of A if and only if f (λ0 ) = 0.

Example: Find the eigenvalues and eigenvectors of the matrix

A = [ 5  2 ]
    [ 2  1 ]


Solution: Since the characteristic polynomial is

f (λ) = λ² − 6λ + 1

Then the eigenvalues are the roots of f (λ) = λ² − 6λ + 1:



λ = (6 ± √(36 − 4)) / 2 = 3 ± 2√2

Hence the eigenvalues are 3 + 2√2 and 3 − 2√2. To compute the eigenvectors,
we solve the homogeneous system of equations (A − λI2 )x = 0 for each
eigenvalue λ. When λ = 3 + 2√2, we have

A − (3 + 2√2)I2 = [ 2 − 2√2      2      ]
                  [    2      −2 − 2√2  ]

                = [ −4   4 + 4√2 ]      R1 → R1(2 + 2√2)
                  [  2  −2 − 2√2 ]

                = [ −4   4 + 4√2 ]      R2 → R2 + R1/2
                  [  0      0    ]

                = [ 1   −1 − √2 ]       R1 → R1(−1/4)
                  [ 0      0    ]


The parametric form of the general solution is x = (1 + √2)y, so the
(3 + 2√2)-eigenspace is the line spanned by

[ 1 + √2 ]
[   1    ]

We compute in the same way that the (3 − 2√2)-eigenspace is the line
spanned by

[ 1 − √2 ]
[   1    ]
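A numerical cross-check of this example (a sketch only, assuming NumPy; not part of the original notes):

# Minimal sketch (assumes NumPy): eigenvalues and eigenvectors of A = [[5, 2], [2, 1]].
import numpy as np

A = np.array([[5., 2.], [2., 1.]])
vals, vecs = np.linalg.eig(A)    # column vecs[:, i] corresponds to vals[i]

for lam, v in zip(vals, vecs.T):
    # rescale so the second entry is 1, for comparison with (1 +/- sqrt(2), 1)
    print(lam, v / v[1])
# expected (in some order): 5.828... with (2.414..., 1) and 0.171... with (-0.414..., 1)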

When the determinant of A − λI is written out, the resulting expression is a


monic polynomial in λ. A monic polynomial is one in which the coefficient
of the leading (the highest-degree) term is 1. It is called the character-
istic polynomial of A and will be of degree n if A is n × n. The zeros of
the characteristic polynomial of A – i.e., the solutions of the characteristic
equation, det(A − λI) = 0 – are the eigenvalues of A.


5.4 Eigenvalues of a Triangular Matrix

It is easy to compute the determinant of an upper- or lower-triangular ma-


trix; this makes it easy to find its eigenvalues as well.

Corollary: If A is an upper- or lower-triangular matrix, then the eigenvalues


of A are its diagonal entries.

Example: Find the eigenvalues and eigenvectors of

A = [ 1  2  3 ]
    [ 0  4  5 ]
    [ 0  0  6 ]

Solution: The characteristic polynomial is

| 1 − λ    2      3    |
|   0    4 − λ    5    |  =  (1 − λ)(4 − λ)(6 − λ)
|   0      0    6 − λ  |

Then, the eigenvalues are


λ = 1, 4, 6

For each eigenvalue λi , we find a corresponding eigenvector by solving

(A − λi I3 )x = 0

For λ1 = 1:

A − λ1 I3 = [ 1  2  3 ]         [ 1  0  0 ]     [ 0  2  3 ]
            [ 0  4  5 ]  − (1)  [ 0  1  0 ]  =  [ 0  3  5 ]
            [ 0  0  6 ]         [ 0  0  1 ]     [ 0  0  5 ]


In augmented form and solving using Gaussian elimination,


   
(A − λ1 I3 )x = 0  =⇒  [ 0  2  3 | 0 ]     [ 0  1  0 | 0 ]
                       [ 0  3  5 | 0 ]  →  [ 0  0  1 | 0 ]
                       [ 0  0  5 | 0 ]     [ 0  0  0 | 0 ]

[ x1 ]     [ x1 ]
[ x2 ]  =  [ 0  ]
[ x3 ]     [ 0  ]

The general solution:  X = [ x1 ]
                           [ 0  ]
                           [ 0  ]

The solution set:  x1 [ 1 ]
                      [ 0 ]
                      [ 0 ]

For λ2 = 4:

A − λ2 I3 = [ 1  2  3 ]         [ 1  0  0 ]     [ −3  2  3 ]
            [ 0  4  5 ]  − (4)  [ 0  1  0 ]  =  [  0  0  5 ]
            [ 0  0  6 ]         [ 0  0  1 ]     [  0  0  2 ]

In augmented form and solving using Gaussian elimination,


   
(A − λ2 I3 )x = 0  =⇒  [ −3  2  3 | 0 ]     [ 1  −2/3  0 | 0 ]
                       [  0  0  5 | 0 ]  →  [ 0    0   1 | 0 ]
                       [  0  0  2 | 0 ]     [ 0    0   0 | 0 ]

[ x1 ]     [ (2/3)x2 ]
[ x2 ]  =  [   x2    ]
[ x3 ]     [   0     ]

The general solution:  X = [ (2/3)x2 ]
                           [   x2    ]
                           [   0     ]

  
The solution set:  x2 [ 2/3 ]
                      [  1  ]
                      [  0  ]

For λ3 = 6:

A − λ3 I3 = [ 1  2  3 ]         [ 1  0  0 ]     [ −5   2  3 ]
            [ 0  4  5 ]  − (6)  [ 0  1  0 ]  =  [  0  −2  5 ]
            [ 0  0  6 ]         [ 0  0  1 ]     [  0   0  0 ]

In augmented form and solving using Gaussian elimination,


   
(A − λ3 I3 )x = 0  =⇒  [ −5   2  3 | 0 ]     [ 1  0  −8/5 | 0 ]
                       [  0  −2  5 | 0 ]  →  [ 0  1  −5/2 | 0 ]
                       [  0   0  0 | 0 ]     [ 0  0    0  | 0 ]

[ x1 ]     [ (8/5)x3 ]
[ x2 ]  =  [ (5/2)x3 ]
[ x3 ]     [   x3    ]

The general solution:  X = [ (8/5)x3 ]
                           [ (5/2)x3 ]
                           [   x3    ]

The solution set:  x3 [ 8/5 ]
                      [ 5/2 ]
                      [  1  ]

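The three eigenpairs found above can be verified directly. A minimal sketch (assuming NumPy; not part of the original notes) checks that Av = λv for each basis vector of the solution sets:

# Minimal sketch (assumes NumPy): verify A v = lambda v for the eigenvectors above.
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [0., 0., 6.]])

pairs = [(1.0, np.array([1.0, 0.0, 0.0])),
         (4.0, np.array([2.0 / 3.0, 1.0, 0.0])),
         (6.0, np.array([8.0 / 5.0, 5.0 / 2.0, 1.0]))]

for lam, v in pairs:
    print(lam, np.allclose(A @ v, lam * v))   # prints True for every pair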

6 Applications

6.1 Curve Fitting

Find the equation of the parabola (y = Ax² + Bx + C) that passes through


the points (-1, 9), (1, 5), and (2, 12).

Solution:
A(−1)² + B(−1) + C = 9
A(1)² + B(1) + C = 5
A(2)² + B(2) + C = 12
In matrix form,

[ 1 −1 1 ] [ A ]     [ 9  ]
[ 1  1 1 ] [ B ]  =  [ 5  ]
[ 4  2 1 ] [ C ]     [ 12 ]
In augmented matrix form,

[ 1 −1 1 | 9  ]
[ 1  1 1 | 5  ]
[ 4  2 1 | 12 ]

Solving using Gaussian elimination,

[ 1 0 1 | 7  ]
[ 0 1 0 | −2 ]
[ 0 0 1 | 4  ]

The solution is

[ A ]     [  3 ]
[ B ]  =  [ −2 ]
[ C ]     [  4 ]

The parabolic equation is

y = 3x² − 2x + 4
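The same fit can be reproduced numerically. The sketch below (not part of the original notes, assuming NumPy) solves the 3×3 system directly and, as an alternative, uses a polynomial fit:

# Minimal sketch (assumes NumPy): fit y = A x**2 + B x + C through the three points.
import numpy as np

x = np.array([-1., 1., 2.])
y = np.array([9., 5., 12.])

M = np.column_stack([x ** 2, x, np.ones_like(x)])   # rows: [x_i**2, x_i, 1]
A, B, C = np.linalg.solve(M, y)
print(A, B, C)              # 3.0 -2.0 4.0  ->  y = 3x**2 - 2x + 4

print(np.polyfit(x, y, 2))  # the same coefficients via a degree-2 polynomial fit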


6.2 Spring-mass System

Consider a spring-mass system with two masses, as shown in the accompanying figure (not reproduced in these notes).

Assume each of the two mass-displacements to be denoted by x1 and x2 ,


and let us assume each spring has the same spring constant k. Then
by applying Newton’s second and third law of motion to develop a force-
balance for each mass we have

m1 d²x1/dt² = −kx1 + k(x2 − x1)
m2 d²x2/dt² = −k(x2 − x1)
Rewriting the equations

m1 d²x1/dt² + kx1 − k(x2 − x1) = 0
m2 d²x2/dt² + k(x2 − x1) = 0
Let m1 = 10, m2 = 20, and k = 15, hence

10 d²x1/dt² + 15x1 − 15(x2 − x1) = 0
20 d²x2/dt² + 15(x2 − x1) = 0
From vibration theory, the solutions can be of the form

xi = Ai sin(ωt − φ)

Vasquez (January 2020) 70/74


First Exam Notes MMME 21

where

Ai   amplitude of the vibration of mass i
ω    frequency of vibration
φ    phase shift

then

d²xi/dt² = −Ai ω² sin(ωt − φ)

Substituting xi and d²xi/dt²,

−10A1ω² − 15(−2A1 + A2) = 0
−20A2ω² − 15(A1 − A2) = 0

Rearranging

(−10ω² + 30)A1 − 15A2 = 0
−15A1 + (−20ω² + 15)A2 = 0

Simplifying

(−ω² + 3)A1 − 1.5A2 = 0
−0.75A1 + (−ω² + 0.75)A2 = 0

In matrix form,

[ −ω² + 3      −1.5     ] [ A1 ]     [ 0 ]
[  −0.75    −ω² + 0.75  ] [ A2 ]  =  [ 0 ]
Rewriting,

[  3      −1.5 ] [ A1 ]        [ A1 ]     [ 0 ]
[ −0.75   0.75 ] [ A2 ]  − ω²  [ A2 ]  =  [ 0 ]

Let ω² = λ,

[A] = [  3      −1.5 ]        [X] = [ A1 ]
      [ −0.75   0.75 ]              [ A2 ]


Then

[A][X] − λ[X] = 0
[A][X] = λ[X]

λ is the eigenvalue and [X] is the eigenvector corresponding to λ. If we


know λ, we can calculate the natural frequency of the vibration

ω = √λ

Why are the natural frequencies of vibration important? Because you do
not want to drive the spring-mass system with a forcing frequency close to
a natural frequency, as this would make the amplitude very large and make
the system unstable.
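A short numerical sketch (assuming NumPy; not part of the original notes) computes the eigenvalues λ of [A] above and the corresponding natural frequencies ω = √λ:

# Minimal sketch (assumes NumPy): natural frequencies of the two-mass system above.
import numpy as np

A = np.array([[ 3.0, -1.5 ],
              [-0.75, 0.75]])

lam, X = np.linalg.eig(A)   # eigenvalues lam = omega**2; eigenvectors = mode shapes
omega = np.sqrt(lam)        # natural frequencies

print(lam)                  # approx [3.42, 0.33] (order may differ)
print(omega)                # approx [1.85, 0.57]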

This analysis can be extended to systems in which the springs model atomic bonds.

6.3 Stresses and Strains

For the stress (strain) tensor, the eigenvalues represent principal stresses
(strains), and eigenvectors represent principal axes (i.e., faces with zero
shear stress (strain)).

Example: Find the principal stresses and principal axes for

[σ] = [ 80  30 ]
      [ 30  40 ]

Solution: The given stress tensor can be represented graphically by a plane stress element (figure not reproduced in these notes).


The principal stresses are the eigenvalues (λi ) of the stress tensor and
can be solved by

det(σ − λI) = 0

det [ 80 − λ     30    ]  =  0
    [   30     40 − λ  ]

(80 − λ)(40 − λ) − 30² = 0
λ² − 120λ + 2300 = 0

Hence,

[ λ1 ]     [ 96.05 ]
[ λ2 ]  =  [ 23.95 ]
The principal directions are the eigenvectors and can be solved using

(σ − λi I)ni = 0

For λ1 = 96.05,

[ 80 − 96.05      30       ] [ x1 ]     [ 0 ]
[    30       40 − 96.05   ] [ y1 ]  =  [ 0 ]

(80 − 96.05)x1 + 30y1 = 0

Vasquez (January 2020) 73/74


First Exam Notes MMME 21

Any (x1, y1) that satisfies this equation is an eigenvector. Therefore, we
can choose y1 = 1 and solve for x1, then normalize the vector as follows:

(80 − 96.05)x1 + 30 = 0
x1 = 30/16.05

n1 = (1 / √((30/16.05)² + 1²)) [ 30/16.05 ]  =  [ 0.88 ]
                               [    1     ]     [ 0.47 ]

The second eigenvector is

n2 = [ −0.47 ]
     [  0.88 ]

Take note that the two eigenvectors should be orthogonal; hence n1 · n2 = 0.
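These results can be checked numerically. A minimal sketch (assuming NumPy; not part of the original notes) uses np.linalg.eigh, which is intended for symmetric matrices such as stress tensors and returns orthonormal eigenvectors:

# Minimal sketch (assumes NumPy): principal stresses and axes of the stress tensor.
import numpy as np

sigma = np.array([[80., 30.],
                  [30., 40.]])

vals, vecs = np.linalg.eigh(sigma)  # symmetric case: eigenvalues in ascending order

print(vals)                         # approx [23.94, 96.06] -> principal stresses
print(vecs)                         # columns: orthonormal principal directions
print(vecs[:, 0] @ vecs[:, 1])      # approx 0 -> the principal axes are orthogonal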
