
EE2007 Engineering Mathematics II

Linear Algebra

K.V. Ling

ekvling@ntu.edu.sg
Rm: S2-B2a-22, School of EEE
Nanyang Technological University

4 August 2016
Scope and Textbook
- Topic 1: Systems of Linear Equations and Matrices
- Topic 2: Linear Combinations and Linear Independence
- Topic 3: Vector Spaces
- Topic 4: Eigenvalues and Eigenvectors
- References:
  DeFranza and Gagliardi
  Introduction to Linear Algebra with Applications
  McGraw-Hill, 2009
  NTU LWNL: QA184.2.D316

David C. Lay, Steven R. Lay and Judi J. McDonald


Linear Algebra and Its Applications, Global Edition, 5/E
Pearson, 2016
NTU LWNL QA184.2.L426

EE2007/KV Ling/Aug 2016 2/ 141


Free Resources on the Web

Some examples:
- Professor Gilbert Strang’s Video Lectures on Linear Algebra
  (ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/)
- Free Linear Algebra textbook (joshua.smcvt.edu/linearalgebra)
- Linear Algebra at UC Davis (www.math.ucdavis.edu/~linear/)

EE2007/KV Ling/Aug 2016 3/ 141


What you will learn

Many diverse applications are modeled by systems of equations. In this module we will:
- develop systematic methods for solving systems of linear equations, and
- determine whether the equations will have (a) a unique, (b) many, or (c) no solution.

EE2007/KV Ling/Aug 2016 4/ 141


Example: Electrical Networks
Find the node voltages v1 , v2 and v3 of the circuit

(3/4)v1 − (1/2)v2 − (1/4)v3 = 3
−(1/2)v1 + (7/8)v2 − (1/8)v3 = 0
(1/4)v1 − (3/8)v2 + (1/8)v3 = 0

EE2007/KV Ling/Aug 2016 5/ 141
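The slides solve systems like this by hand in what follows. As a quick numerical cross-check (a sketch added to these notes, not part of the original slides, and assuming NumPy is available), the node equations can be handed to a linear solver:

```python
import numpy as np

# Node-voltage equations from the slide, with the fractions written as floats.
A = np.array([[ 3/4, -1/2, -1/4],
              [-1/2,  7/8, -1/8],
              [ 1/4, -3/8,  1/8]])
b = np.array([3.0, 0.0, 0.0])

v = np.linalg.solve(A, b)   # node voltages v1, v2, v3: approximately [4.8, 2.4, -2.4]
```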


Example: Resource allocation
A florist offers three sizes of flower arrangements containing roses, daisies, and chrysanthemums.
Each small arrangement contains one rose, three daisies, and three chrysanthemums. Each
medium arrangement contains two roses, four daisies and six chrysanthemums. Each large
arrangement contains four roses, eight daisies and six chrysanthemums. One day, the florist
noted that she used a total of 24 roses, 50 daisies and 48 chrysanthemums in filling orders for
these three types of arrangements. How many arrangements of each type did she make?
Solution: Let there be x1 , x2 and x3 orders of small, medium and large arrangements
respectively. Thus

x1 +2x2 +4x3 = 24
3x1 +4x2 +8x3 = 50
3x1 +6x2 +6x3 = 48

EE2007/KV Ling/Aug 2016 6/ 141
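A hedged sketch (not from the slides, NumPy assumed): the florist system in matrix form Ax = b, solved numerically and rounded back to whole arrangements.

```python
import numpy as np

# Coefficient matrix: columns are roses/daisies/chrysanthemums per arrangement size.
A = np.array([[1.0, 2.0, 4.0],
              [3.0, 4.0, 8.0],
              [3.0, 6.0, 6.0]])
b = np.array([24.0, 50.0, 48.0])

x = np.linalg.solve(A, b)
counts = [round(xi) for xi in x]   # small, medium, large arrangements
```

The solver returns 2 small, 3 medium and 4 large arrangements, which indeed uses 24 roses, 50 daisies and 48 chrysanthemums.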


Example: Photosynthesis
The chemical equation is
aCO2 + bH2 O → cO2 + dC6 H12 O6
where a, b, c and d are some positive whole numbers. Balance the chemical equation.
This gives us the system of three linear equations in four variables:

C:  a − 6d = 0
O:  2a + b − 2c − 6d = 0
H:  2b − 12d = 0

Any positive integers a, b, c and d that satisfy all three equations are a solution. For example,
a = 6, b = 6, c = 6 and d = 1. The general solution is a = b = c = 6d.

EE2007/KV Ling/Aug 2016 7/ 141
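The balancing problem is a homogeneous system Ax = 0 in x = (a, b, c, d). As a sketch (not in the slides; NumPy assumed), its one-dimensional null space can be read off from the SVD, recovering the direction (6, 6, 6, 1), i.e. a = b = c = 6d:

```python
import numpy as np

A = np.array([[1.0, 0.0,  0.0,  -6.0],    # C: a - 6d = 0
              [2.0, 1.0, -2.0,  -6.0],    # O: 2a + b - 2c - 6d = 0
              [0.0, 2.0,  0.0, -12.0]])   # H: 2b - 12d = 0

# The last right-singular vector spans the null space of this rank-3 matrix.
_, _, Vt = np.linalg.svd(A)
null = Vt[-1] / Vt[-1][-1]   # rescale so that d = 1: approximately (6, 6, 6, 1)
```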


Gaussian Elimination

- a systematic way to solve systems of linear equations
- determines whether the equations will have (a) a unique, (b) many, or (c) no solution.

EE2007/KV Ling/Aug 2016 8/ 141


Example: Two Equations with Two Unknowns
To develop this idea, consider the set of equations

x + 2y = 6
2x − y = 2
Since both equations represent straight lines, a solution exists provided that the lines intersect. In this
case, the lines intersect at the unique point (2, 2).
Of course, two lines need not intersect at a single point. They could be parallel (no solution), or they
could coincide and hence “intersect” at every point on the line (many solutions).

EE2007/KV Ling/Aug 2016 9/ 141


System of Linear Equations
We generalise to the case of m linear equations in n unknowns.
Given a system of m linear equations in n variables (also referred to as an m × n linear
system):

a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2
  ..                               ..
am1 x1 + am2 x2 + . . . + amn xn = bm

There are three possible outcomes:


I unique solution
I many solutions
I no solution

EE2007/KV Ling/Aug 2016 10/ 141


Matrix Notation
Coefficient matrix and augmented matrix

For example, the system

x1 − 2x2 + x3 = 0
x1       − x3 = 2
     x2 − 4x3 = 4

can be recorded as

[ 1 −2  1 | 0 ]
[ 1  0 −1 | 2 ]
[ 0  1 −4 | 4 ]

The matrix
[ 1 −2  1 ]
[ 1  0 −1 ]
[ 0  1 −4 ]
is called the coefficient matrix, and the 3 × 4 array above (the coefficients together with the
right-hand sides) is called the augmented matrix of the system.

EE2007/KV Ling/Aug 2016 11/ 141


Elementary Row Operations (EROs)
There are three types of ERO:
- Interchanging any two rows, Ri ↔ Rj
- Multiplying any row by a nonzero constant, Rj ← αRj , α ≠ 0
- Adding a multiple of one row to another, Rj ← Rj + βRi
Performing any one of the EROs does not change the solution set of the linear system.

DEFINITION
Two linear systems are equivalent if they have the same solution set.

EE2007/KV Ling/Aug 2016 12/ 141


Gaussian Elimination Method
This is an algorithm used to solve linear systems.

The method essentially converts an m × n linear system into an equivalent system in
triangular form, and then uses back substitution to obtain the solution.

For example

x1 − 2x2 +  x3 = −1        x1 − 2x2 + x3 = −1
2x1 − 3x2 − x3 =  3    ⇒        x2 − 3x3 = 5
x1 − 2x2 + 2x3 =  1                  x3 = 2

giving x1 = 19, x2 = 11 and x3 = 2.

EE2007/KV Ling/Aug 2016 13/ 141
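The two phases just described can be sketched in code (an illustrative addition, not from the slides): naive Gaussian elimination without pivoting, followed by back substitution, applied to the example system above.

```python
# Naive Gaussian elimination with back substitution (no pivoting safeguards).
def solve_by_gauss(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # forward elimination to triangular form
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

x = solve_by_gauss([[1.0, -2.0,  1.0],
                    [2.0, -3.0, -1.0],
                    [1.0, -2.0,  2.0]],
                   [-1.0, 3.0, 1.0])   # [19.0, 11.0, 2.0], as on the slide
```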


Solving a Linear System / Recording GE Steps

x1 − 2x2 + x3 = 0        [ 1 −2  1 | 0 ]
x1       − x3 = 2        [ 1  0 −1 | 2 ]
     x2 − 4x3 = 4        [ 0  1 −4 | 4 ]

Eliminating x1 in equation (2) gives R2 ← R2 − R1

x1 − 2x2 + x3 = 0        [ 1 −2  1 | 0 ]
    2x2 − 2x3 = 2        [ 0  2 −2 | 2 ]
     x2 − 4x3 = 4        [ 0  1 −4 | 4 ]

Eliminating x2 in equation (3) gives R3 ← R3 − (1/2)R2

x1 − 2x2 + x3 = 0        [ 1 −2  1 | 0 ]
    2x2 − 2x3 = 2        [ 0  2 −2 | 2 ]
         −3x3 = 3        [ 0  0 −3 | 3 ]

By back substitution,

x3 = −1,   x2 = (1/2)(2 + 2x3 ) = 0,   x1 = −x3 + 2x2 = 1.

EE2007/KV Ling/Aug 2016 14/ 141


The Key Questions

Given a system of linear equations, how can we tell, systematically, whether the system has
- a unique solution,
- infinitely many solutions, or
- no solution?

EE2007/KV Ling/Aug 2016 15/ 141


Gaussian Elimination, Unique Solution

 x + y +  z = 4        [  1  1 1 |  4 ]
−x − y +  z = −2       [ −1 −1 1 | −2 ]
2x − y + 2z = 2        [  2 −1 2 |  2 ]

R2 ← R2 + R1       [ 1  1 1 |  4 ]
R3 ← R3 − 2R1      [ 0  0 2 |  2 ]
                   [ 0 −3 0 | −6 ]

           [ 1  1 1 |  4 ]    R2 ← −(1/3)R2    [ 1 1 1 | 4 ]
R2 ↔ R3    [ 0 −3 0 | −6 ]                     [ 0 1 0 | 2 ]
           [ 0  0 2 |  2 ]    R3 ← (1/2)R3     [ 0 0 1 | 1 ]

Answer: x = 1, y = 2, z = 1

EE2007/KV Ling/Aug 2016 16/ 141


Gaussian Elimination, No Solution
Write down the row operation used at each step shown:

[  1 −1   2  5 ]     [ 1 −1   2  5 ]
[  2  1   0  2 ]  ∼  [ 0  3  −4 −8 ]
[  1  8  −1  3 ]     [ 0  9  −3 −2 ]
[ −1 −5 −12  4 ]     [ 0 −6 −10  9 ]

   [ 1 −1   2  5 ]     [ 1 −1  2  5 ]
   [ 0  3  −4 −8 ]     [ 0  3 −4 −8 ]
∼  [ 0  0   9 22 ]  ∼  [ 0  0  9 22 ]
   [ 0  0 −18 −7 ]     [ 0  0  0 37 ]

The last row essentially says that

0x1 + 0x2 + 0x3 = 37

which indicates that there is no solution.

EE2007/KV Ling/Aug 2016 17/ 141
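Numerically, the same inconsistency can be detected by comparing ranks, anticipating the summary slide later in this topic. A sketch (not from the slides; NumPy assumed):

```python
import numpy as np

# The system has no solution exactly when the augmented matrix has larger
# rank than its coefficient part.
Ab = np.array([[ 1, -1,   2, 5],
               [ 2,  1,   0, 2],
               [ 1,  8,  -1, 3],
               [-1, -5, -12, 4]], dtype=float)

rank_A  = np.linalg.matrix_rank(Ab[:, :3])   # coefficient matrix: rank 3
rank_Ab = np.linalg.matrix_rank(Ab)          # augmented matrix: rank 4
inconsistent = rank_Ab > rank_A              # True: no solution
```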


Gaussian Elimination, Many Solutions

[ 4 −8 −3  2 13 ]              [ 2 −4 −2  2  6 ]
[ 3 −4 −1 −3  5 ]   R1 ↔ R3    [ 3 −4 −1 −3  5 ]
[ 2 −4 −2  2  6 ]              [ 4 −8 −3  2 13 ]

                    [ 1 −2 −1  1  3 ]
R1 ← (1/2)R1        [ 3 −4 −1 −3  5 ]
                    [ 4 −8 −3  2 13 ]

R2 ← R2 − 3R1       [ 1 −2 −1  1  3 ]
R3 ← R3 − 4R1       [ 0  2  2 −6 −4 ]
                    [ 0  0  1 −2  1 ]

Answer: {(3t − 2, t − 3, 2t + 1, t) | t ∈ R}

EE2007/KV Ling/Aug 2016 18/ 141


Gaussian Elimination, Many Solutions
Parametric Description of Solution Sets

The augmented matrix
[ 1 −2 −1  1 |  3 ]
[ 0  2  2 −6 | −4 ]
[ 0  0  1 −2 |  1 ]
represents the equations

x1 − 2x2 − x3 +  x4 = 3
     2x2 + 2x3 − 6x4 = −4
            x3 − 2x4 = 1

So we have 3 equations with 4 unknowns. If we choose x4 to be a free variable, then by back
substitution, we can write

x4 = free = t
x3 = 1 + 2x4 = 1 + 2t
x2 = (1/2)(−4 + 6x4 − 2x3 ) = −3 + t
x1 = 3 − x4 + x3 + 2x2 = −2 + 3t

EE2007/KV Ling/Aug 2016 19/ 141


Gaussian Elimination, Many Solutions
Parametric Description of Solution Sets
Alternatively, we could have chosen x3 as the free variable, giving

x3 = free = s
x4 = (1/2)(x3 − 1) = (1/2)s − 1/2
x2 = (1/2)(−4 + 6x4 − 2x3 ) = −7/2 + (1/2)s
x1 = 3 − x4 + x3 + 2x2 = −7/2 + (3/2)s

We can also write the solution set as

[ x1 ]   [ −7/2 ]     [ 3/2 ]
[ x2 ] = [ −7/2 ] + s [ 1/2 ] ,   s ∈ R
[ x3 ]   [   0  ]     [  1  ]
[ x4 ]   [ −1/2 ]     [ 1/2 ]

Note that choosing either x3 or x4 as the free variable gives the same solution set.
This can be verified by the substitution s = 1 + 2t.
EE2007/KV Ling/Aug 2016 20/ 141
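The substitution s = 1 + 2t can be checked mechanically. A small sketch (added to these notes, not from the slides):

```python
# The two parametric descriptions of the solution set, written as functions
# of the free variable; s = 1 + 2t should map one onto the other.
def sol_t(t):    # x4 = t free
    return (-2 + 3*t, -3 + t, 1 + 2*t, t)

def sol_s(s):    # x3 = s free
    return (-7/2 + 3*s/2, -7/2 + s/2, s, -1/2 + s/2)

same = all(sol_t(t) == sol_s(1 + 2*t) for t in range(-5, 6))   # True
```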
Use software to perform Gaussian Elimination

 x1 −6x2 −4x3 = −5
2x1 −10x2 −9x3 = −4
−x1 +6x2 +5x3 = 3

[Screenshots: solving with MS Excel and with MATLAB]

Answer: x1 = −1, x2 = 2, x3 = −2

EE2007/KV Ling/Aug 2016 21/ 141
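In the same spirit, the MATLAB step `A\b` can be written in NumPy (a sketch added to these notes, not shown on the slide):

```python
import numpy as np

# The 3 x 3 system from the slide, solved with NumPy's equivalent of A\b.
A = np.array([[ 1.0,  -6.0, -4.0],
              [ 2.0, -10.0, -9.0],
              [-1.0,   6.0,  5.0]])
b = np.array([-5.0, -4.0, 3.0])

x = np.linalg.solve(A, b)   # approximately [-1, 2, -2]
```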


Practice
- Is the system consistent?
 
1 5 2 −6
 0 4 −7 2 
0 0 5 0
- Is (3, 4, −2) a solution of the following system?
 
5 −1 2 7
 −2 6 9 0 
−7 5 −3 −7
- For what values of h and k is the system consistent?
 
2 −1 h
−6 3 k

EE2007/KV Ling/Aug 2016 22/ 141


Row Echelon (RE) Form
The leading nonzero term in each (non-zero) row is called a pivot.
A matrix is in row echelon form if all entries in a column below a pivot are zeros.
For example, these matrices are in RE form (■ denotes a pivot, ∗ an arbitrary entry):

[ ■ ∗ ∗ ∗ ∗ ]   [ 0 ■ ∗ ∗ ∗ ]   [ 0 0 ■ ∗ ∗ ]
[ 0 0 ■ ∗ ∗ ]   [ 0 0 ■ ∗ ∗ ]   [ 0 0 0 ■ ∗ ]
[ 0 0 0 ■ ∗ ]   [ 0 0 0 0 0 ]   [ 0 0 0 0 ■ ]
[ 0 0 0 0 0 ]   [ 0 0 0 0 0 ]   [ 0 0 0 0 0 ]

EE2007/KV Ling/Aug 2016 23/ 141


Reduced Row Echelon (RRE) Form
The matrix is in reduced row echelon form (RRE) if, in addition, each pivot is a 1 and all
other entries in this column are 0.
For example, the following matrices are in RRE form:
   
  1 0 0  1 0 0  0 0
1  0 0   0 1 0    0 1 0  0
    

 0 0 1 0    0 0 1    0 0 1  0  
   
0 0 0 1   0 0 0 0   0 0 0 0 1  
0 0 0 0 0 0 0 0 0 0
Note that the RRE form of a matrix is unique
The process of transforming a matrix to RRE form is called
Gauss-Jordan Elimination.

EE2007/KV Ling/Aug 2016 24/ 141


Example: Reduce the matrix to RE and RRE form

[ 0  3 −6  6 4 −5 ]     [ 1 −3  4 −3 2  5 ]     [ 1 −3  4 −3 2  5 ]
[ 3 −7  8 −5 8  9 ]  ∼  [ 0  2 −4  4 2 −6 ]  ∼  [ 0  1 −2  2 1 −3 ]
[ 1 −3  4 −3 2  5 ]     [ 0  3 −6  6 4 −5 ]     [ 0  0  0  0 1  4 ]

              [ 1 0 −2 3 0 −24 ]
∼  · · ·  ∼   [ 0 1 −2 2 0  −7 ]
              [ 0 0  0 0 1   4 ]

1. Start with the leftmost nonzero column. This is the pivot column.
2. Interchange rows, if necessary, so that the top entry of the pivot column is nonzero.
3. Use EROs to reduce the matrix to an RE form.
4. Start with the rightmost pivot and work to the left. Use EROs to reduce the matrix to the RRE form.

EE2007/KV Ling/Aug 2016 25/ 141
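Steps 1–4 above translate directly into code. A small, unoptimized sketch (an addition to these notes, not from the slides), applied to the same matrix:

```python
# Reduced-row-echelon routine following the slide's steps: pick the leftmost
# usable pivot column, swap a nonzero entry to the top, scale the pivot to 1,
# and clear every other entry in that column.
def rref(M):
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if pivot is None:
            continue                               # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]            # bring pivot row up
        M[r] = [v / M[r][c] for v in M[r]]         # scale pivot to 1
        for i in range(rows):                      # clear the rest of column c
            if i != r and abs(M[i][c]) > 1e-12:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

R = rref([[0,  3, -6,  6, 4, -5],
          [3, -7,  8, -5, 8,  9],
          [1, -3,  4, -3, 2,  5]])
# R reproduces the RRE form shown on the slide.
```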


Gauss-Jordan Elimination
Reduce the augmented matrix to RRE form

 x1 −  x2 − 2x3 +  x4 = −3
2x1 −  x2 − 3x3 + 2x4 = −1
−x1 + 2x2 +  x3 + 3x4 = 18
 x1 +  x2 −  x3 + 2x4 = 8
Answer: x1 = 1, x2 = 2, x3 = 3, x4 = 4
EE2007/KV Ling/Aug 2016 26/ 141
Many Solutions

3x1 −  x2 +  x3 − 8x4 = 4
 x1 + 2x2 −  x3 − 5x4 = 3
−x1 − 3x2 + 2x3 + 3x4 = −2

Answer: S = {(1 + 2t, 3 + 5t, 4 + 7t, t)|t ∈ R}

EE2007/KV Ling/Aug 2016 27/ 141


Inconsistent Linear System, No Solution

 x1 +  x2 +  x3 = 2
3x1 −  x2 −  x3 = 2
 x1 + 3x2 + 3x3 = 5

The third row of the RE form corresponds to the equation 0·x1 + 0·x2 + 0·x3 = 1, hence the
system is inconsistent and has no solution.

EE2007/KV Ling/Aug 2016 28/ 141


Condition for a Consistent Linear System
Re-do the previous example, but now with an unknown RHS:

 x1 +  x2 +  x3 = a
3x1 −  x2 −  x3 = b
 x1 + 3x2 + 3x3 = c

So we need c − a + (b − 3a)/2 = 0 for consistency.

EE2007/KV Ling/Aug 2016 29/ 141


Summary: Conditions for Unique, Many, and No Solution
Represent a linear system with m equations and n unknowns as an augmented matrix [A|b].
Transform [A|b] to RE form and denote the result as [AR |bR ], i.e.,

[A|b] ∼ · · · ∼ [AR |bR ]

1. If [AR |bR ] and AR have the same number of non-zero rows:
   1.1 [Unique solution] The number of non-zero rows is equal to n [rank(A|b) = rank(A) = n]
   1.2 [Many solutions] The number of non-zero rows is less than n [rank(A|b) = rank(A) < n]
2. [No solution] AR has a row of zeros whose corresponding term in bR is not zero
   [rank(A|b) > rank(A)].

Here we denote by rank(A) the number of non-zero rows in AR . A more formal definition of
rank will come later.

EE2007/KV Ling/Aug 2016 30/ 141


Introduction to Matrix Algebra
Notations and Definitions
Let A be an m × n matrix. Then each entry of A can be uniquely specified by using the row
and column indices of its location, as shown below:

    [ a11 · · · a1j · · · a1n ]
    [  ..        ..       ..  ]
A = [ ai1 · · · aij · · · ain ]
    [  ..        ..       ..  ]
    [ am1 · · · amj · · · amn ]

We can define addition and multiplication so that algebra can be performed with matrices,
just like numbers. This extends the application of matrices beyond a means for representing
linear systems.

EE2007/KV Ling/Aug 2016 31/ 141


Row and Column Vectors of a Matrix
A vector is just an n × 1 matrix. For a given matrix A, it is convenient to refer to its row
vectors and its column vectors. For example, let
 
1 2 −1
A= 3 0 1 
4 −1 2
Then the column vectors of A are
     
1 2 −1
 3   0  and  1 
4 −1 2
while the row vectors of A, written vertically, are
     
1 3 4
 2   0  and  −1 
−1 1 2
EE2007/KV Ling/Aug 2016 32/ 141
Addition, Scalar Multiplication and Equality of Matrices
Consider two m × n matrices A and B.

A+B means aij + bij , 1 ≤ i ≤ m and 1 ≤ j ≤ n.

cA means caij , 1 ≤ i ≤ m and 1 ≤ j ≤ n, where c is a real number

A=B means aij = bij , 1 ≤ i ≤ m and 1 ≤ j ≤ n.

EE2007/KV Ling/Aug 2016 33/ 141


Example: Addition and Scalar Multiplication of Matrices
Perform the operations: (a) A + B and (b) 2A − 3B if
   
2 0 1 −2 3 −1
A =  4 3 −1  and B =  3 5 6 
−3 6 5 4 2 1

EE2007/KV Ling/Aug 2016 34/ 141


Properties of Matrix Addition and Scalar Multiplication
Let A, B, and C be m × n matrices and c and d be real numbers. Then
1. A + B = B + A
2. A + (B + C ) = (A + B) + C
3. c(A + B) = cA + cB
4. (c + d)A = cA + dA
5. c(dA) = (cd)A
6. A + 0 = 0 + A = A, where 0 denotes an m × n matrix with all zero entries
7. −A = (−1)A, thus A + (−A) = (−A) + A = 0

EE2007/KV Ling/Aug 2016 35/ 141


Review: Dot Products of Vectors
Given two vectors

    [ u1 ]            [ v1 ]
    [ u2 ]            [ v2 ]
u = [ .. ]   and  v = [ .. ]
    [ un ]            [ vn ]

the dot product of u and v is defined by

u · v = u1 v1 + u2 v2 + · · · + un vn = Σ_{i=1}^{n} ui vi

EE2007/KV Ling/Aug 2016 36/ 141


Matrix Multiplication – the Dot Product View
Let A be an m × n matrix and B an n × p matrix; then the product AB is an m × p matrix.
The ij term of AB is the dot product of the ith row vector of A with the jth column vector of
B, so that

(AB)ij = ai1 b1j + ai2 b2j + · · · + ain bnj = Σ_{k=1}^{n} aik bkj

Example

    [  1 3  0 ]             [  3 −2  5 ]
A = [  2 1 −3 ] ,   and B = [ −1  4 −2 ]
    [ −4 6  2 ]             [  1  0  3 ]

     [ (1)(3) + (3)(−1) + (0)(1)   −2 + 12 + 0    5 − 6 + 0  ]   [   0 10  −1 ]
AB = [        6 − 1 − 3            −4 + 4 + 0    10 − 2 − 9  ] = [   2  0  −1 ]
     [      −12 − 6 + 2            8 + 24 + 0   −20 − 12 + 6 ]   [ −16 32 −26 ]

EE2007/KV Ling/Aug 2016 37/ 141
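The dot-product formula can be checked entry by entry against a library multiply. A sketch (not from the slides; NumPy assumed):

```python
import numpy as np

A = np.array([[ 1, 3,  0],
              [ 2, 1, -3],
              [-4, 6,  2]])
B = np.array([[ 3, -2,  5],
              [-1,  4, -2],
              [ 1,  0,  3]])

# (AB)_ij as the dot product of row i of A with column j of B.
C = [[sum(A[i, k] * B[k, j] for k in range(3)) for j in range(3)]
     for i in range(3)]

assert (A @ B == np.array(C)).all()   # matches NumPy's matmul
```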


Matrix Multiplication is not Commutative, AB ≠ BA

                     [  2 0  1 ]          [ −2 3 −1 ]
For example, let A = [  4 3 −1 ]  and B = [  3 5  6 ] .
                     [ −3 6  5 ]          [  4 2  1 ]

                             [  0  8 −1 ]         [ 11  3 −10 ]
It can be verified that AB = [ −3 25 13 ]  ≠ BA = [  8 51  28 ]
                             [ 44 31 44 ]         [ 13 12   7 ]

Sometimes, it can happen that AB is defined but BA is not. For example, with

    [ 1 3  0 ]        [  3 −2  5 ]
A = [ 2 1 −3 ]    B = [ −1  4 −2 ]
                      [  1  0  3 ]

AB (a 2 × 3 matrix times a 3 × 3 matrix) is defined, but BA (a 3 × 3 matrix times a 2 × 3
matrix) is NOT defined, since the number of columns of B does not match the number of
rows of A.

If AB = BA, we say the two matrices A and B commute.
EE2007/KV Ling/Aug 2016 38/ 141
Example
Find all 2 × 2 matrices that commute with the matrix

A = [ 1 0 ]
    [ 1 1 ]

Let S = [ a b ]  be such that SA = AS. Then ...
        [ c d ]

Answer: S = [ a 0 ] ,   a, c ∈ R
            [ c a ]

EE2007/KV Ling/Aug 2016 39/ 141


Properties of Matrix Multiplication

Let A, B and C be matrices with sizes such that the given expressions are all defined, and let c
be a real number.
1. A(BC ) = (AB)C
2. c(AB) = (cA)B = A(cB)
3. A(B + C ) = AB + AC
4. (B + C )A = BA + CA

EE2007/KV Ling/Aug 2016 40/ 141


Transpose of a Matrix
If A is an m × n matrix, the transpose of A, denoted by AT , is the n × m matrix with ij term
(AT )ij = aji where 1 ≤ i ≤ n and 1 ≤ j ≤ m.
For example, the transpose of the matrix
   
1 2 −3 1 0 −1
A=  0 1 4  is AT =  2 1 2 
−1 2 1 −3 4 1
In some textbooks, the notation A′ is also used to denote the transpose of A.

Using the transpose notation, the dot product of two (column) vectors can be written as

u · v = uT v = u1 v1 + u2 v2 + . . . + un vn = Σ_{i=1}^{n} ui vi

EE2007/KV Ling/Aug 2016 41/ 141


Properties of Matrix Transpose

Suppose A and B are m × n matrices, C is an n × p matrix, and c is a scalar.


1. (A + B)T = AT + B T
2. (AC )T = C T AT
3. (AT )T = A
4. (cA)T = cAT
5. If AT = A, then A is known as a symmetric matrix.

EE2007/KV Ling/Aug 2016 42/ 141


Inverse of a Square Matrix
Definition:
Let A be an n × n matrix. The inverse of A, denoted as A−1 , if it exists, is a matrix such that

AA−1 = I = A−1 A

When the inverse of a matrix A exists, we say A is invertible or non-singular. Otherwise, we


say A is non-invertible or singular.

The inverse of a matrix, if it exists, is unique.

EE2007/KV Ling/Aug 2016 43/ 141


Properties of Inverse
Theorem
Let A and B be n × n invertible matrices. Then AB is invertible and
(AB)−1 = B −1 A−1
Proof:

(B −1 A−1 )(AB) = I and (AB)(B −1 A−1 ) = I ⇒ (AB)−1 = B −1 A−1

The above result can be extended to (ABC )−1 = C −1 B −1 A−1 and, in general,
(A1 A2 . . . Ak )−1 = (Ak )−1 (Ak−1 )−1 . . . (A1 )−1 .

EE2007/KV Ling/Aug 2016 44/ 141


Inverse of a 2 × 2 Matrix via the Definition
You already knew and memorised the result, but can you construct it?
Theorem
The inverse of the matrix A = [ a b ; c d ] exists if and only if ad − bc ≠ 0. In this case, the
inverse is the matrix

A−1 = 1/(ad − bc) [  d −b ]
                  [ −c  a ]

Proof:
Let B = [ w x ; y z ]. We want B = A−1 . By the definition of the inverse matrix, we want to
find w , x, y , z such that AB = I . Thus

AB = [ aw + by   ax + bz ] = [ 1 0 ] ,
     [ cw + dy   cx + dz ]   [ 0 1 ]

and we use EROs to solve the resulting 4 × 4 system.

EE2007/KV Ling/Aug 2016 45/ 141


Inverse of 2 × 2 Matrix, cont’d

aw + by = 1        [ a 0 b 0 | 1 ]
ax + bz = 0   ⇒    [ 0 a 0 b | 0 ]
cw + dy = 0        [ c 0 d 0 | 0 ]
cx + dz = 1        [ 0 c 0 d | 1 ]

   [ 1 0 b/a 0   | 1/a ]     [ 1 0 b/a      0        | 1/a  ]
∼  [ 0 1 0   b/a | 0   ]  ∼  [ 0 1 0        b/a      | 0    ]
   [ c 0 d   0   | 0   ]     [ 0 0 d − bc/a 0        | −c/a ]
   [ 0 c 0   d   | 1   ]     [ 0 0 0        d − bc/a | 1    ]

z = a/(ad − bc);   y = −c/(ad − bc);   x = −bz/a = −b/(ad − bc);   w = . . . = d/(ad − bc)

EE2007/KV Ling/Aug 2016 46/ 141


Inverse of n × n Matrix via Method of Augmented Matrix
Instead of finding the inverse of an n × n matrix A by definition, an alternative is to form the
augmented matrix [A|I ] and then row reducing the n × 2n augmented matrix. If, in the
reduction process, A is transformed to the identity matrix, then the resulting augmented part of
the matrix is the inverse.

[A|I ] ∼ ... ∼ [I |A−1 ]

Why does this method work?

EE2007/KV Ling/Aug 2016 47/ 141


Find the inverse of a matrix by the augmented matrix

                 [  1  1 −2 ]
Find A−1 if A =  [ −1  2  0 ] .
                 [  0 −1  1 ]

Solution:

[  1  1 −2 | 1 0 0 ]     [ 1  1 −2 | 1 0 0 ]     [ 1 1 −2 | 1 0 0 ]
[ −1  2  0 | 0 1 0 ]  ∼  [ 0  3 −2 | 1 1 0 ]  ∼  [ 0 3 −2 | 1 1 0 ]
[  0 −1  1 | 0 0 1 ]     [ 0 −1  1 | 0 0 1 ]     [ 0 0  1 | 1 1 3 ]

          [ 3 0 0 | 6 3 12 ]            [ 1 0 0 | 2 1 4 ]
· · · ∼   [ 0 3 0 | 3 3  6 ]   · · · ∼  [ 0 1 0 | 1 1 2 ]
          [ 0 0 1 | 1 1  3 ]            [ 0 0 1 | 1 1 3 ]

EE2007/KV Ling/Aug 2016 48/ 141
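The [A|I] ∼ . . . ∼ [I|A⁻¹] procedure can be sketched in code (an illustrative addition, not from the slides; it assumes A is invertible and does no pivoting safeguards beyond a basic row swap):

```python
# Gauss-Jordan inversion via the augmented matrix [A | I].
def inverse_by_augmentation(A):
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]              # form [A | I]
    for c in range(n):
        if M[c][c] == 0:                          # swap in a usable pivot row
            k = next(i for i in range(c + 1, n) if M[i][c] != 0)
            M[c], M[k] = M[k], M[c]
        M[c] = [v / M[c][c] for v in M[c]]        # scale pivot to 1
        for i in range(n):                        # clear the rest of the column
            if i != c and M[i][c]:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]                 # right half is A^-1

Ainv = inverse_by_augmentation([[1.0, 1.0, -2.0],
                                [-1.0, 2.0, 0.0],
                                [0.0, -1.0, 1.0]])
# Ainv reproduces the result on the slide: [[2,1,4],[1,1,2],[1,1,3]].
```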


Other Ways of Finding the Inverse of a Matrix

Coming soon ...

see slide 58 [Adjoint matrix and inverse]

EE2007/KV Ling/Aug 2016 49/ 141


Ax = b, Matrix Form of Linear System
The linear system

 x1 −  6x2 − 4x3 = −5
2x1 − 10x2 − 9x3 = −4      can be written as Ax = b
−x1 +  6x2 + 5x3 = 3

where

    [  1  −6 −4 ]       [ x1 ]       [ −5 ]
A = [  2 −10 −9 ] , x = [ x2 ] , b = [ −4 ]
    [ −1   6  5 ]       [ x3 ]       [  3 ]

EE2007/KV Ling/Aug 2016 50/ 141


Solving Ax = b
via x = A−1 b

Theorem
If the n × n matrix A is invertible, then for every vector b, with n components, the linear system
Ax = b has the unique solution x = A−1 b.
Proof:

Ax = b ⇒ A−1 Ax = A−1 b ⇒ x = A−1 b

From the previous slide, given A, we can compute

      [ 2     3   7   ]
A−1 = [ −1/2 1/2 1/2  ]
      [ 1     0   1   ]

Then the solution is x = A−1 b ⇒ x1 = −1, x2 = 2, x3 = −2

EE2007/KV Ling/Aug 2016 51/ 141


Non-trivial Solution of a Homogeneous System¹

Find all vectors x such that Ax = 0, where

    [ 1 2 1 ]
A = [ 1 3 0 ] .
    [ 1 1 2 ]

Solution:

[ 1 2 1 | 0 ]     [ 1 2  1 | 0 ]
[ 1 3 0 | 0 ]  ∼  [ 0 1 −1 | 0 ]
[ 1 1 2 | 0 ]     [ 0 0  0 | 0 ]

Thus x2 = x3 , and x1 = −2x2 − x3 = −3x3 . Let x3 = t; the solution set in vector form is

        [ −3 ]
x = t   [  1 ] ,   t ∈ R
        [  1 ]

¹A homogeneous linear system is a system of the form Ax = 0. The vector x = 0 is always
a solution to the homogeneous system Ax = 0 and is called the trivial solution.

EE2007/KV Ling/Aug 2016 52/ 141


Determinant of a 2 × 2 Matrix

The determinant of the matrix A = [ a b ; c d ], denoted by |A|, is given by

|A| = det A = | a b | = ad − bc
              | c d |

EE2007/KV Ling/Aug 2016 53/ 141


Determinant of a 3 × 3 Matrix
The determinant of the matrix

    [ a11 a12 a13 ]
A = [ a21 a22 a23 ]
    [ a31 a32 a33 ]

is

|A| = a11 | a22 a23 |  −  a12 | a21 a23 |  +  a13 | a21 a22 |
          | a32 a33 |         | a31 a33 |         | a31 a32 |

EE2007/KV Ling/Aug 2016 54/ 141


Determinant of a 3 × 3 Matrix
In the computation of the 3 × 3 determinant, each 2 × 2 determinant is obtained by deleting
the row and the column of the corresponding entry a1j . In other words, the determinant of a
3 × 3 matrix is calculated by using an expansion along the first row. With an adjustment of
signs as shown in the pattern below, the determinant can be computed by using an expansion
along any row or column.

[ + − + ]
[ − + − ]
[ + − + ]

Hint: To reduce computation, expand along a row/column which has many zeros.

EE2007/KV Ling/Aug 2016 55/ 141


Example: Determinant of a 3 × 3 Matrix
                          [ 2  1 −1 ]
Compute det(A) where A =  [ 3  1  4 ] .
                          [ 5 −3  3 ]
Solution:
Expand along row 1:

det(A) = |A| = 2 |  1 4 |  −  1 | 3 4 |  +  (−1) | 3  1 |  =  55
                 | −3 3 |       | 5 3 |          | 5 −3 |

Expand along row 2:

det(A) = |A| = −3 |  1 −1 |  +  1 | 2 −1 |  −  4 | 2  1 |  =  55
                  | −3  3 |       | 5  3 |       | 5 −3 |

Expand along row 3:

det(A) = |A| = 5 | 1 −1 |  −  (−3) | 2 −1 |  +  3 | 2 1 |  =  55
                 | 1  4 |          | 3  4 |       | 3 1 |
EE2007/KV Ling/Aug 2016 56/ 141
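The first-row expansion generalises directly to a recursive routine. A sketch (added to these notes, not from the slides; fine for small matrices, but O(n!) in general, so the ERO method later in the slides scales far better):

```python
# Determinant by cofactor expansion along the first row.
def det(A):
    if len(A) == 1:
        return A[0][0]
    total = 0
    for j in range(len(A)):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # delete row 0, col j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

d = det([[2,  1, -1],
         [3,  1,  4],
         [5, -3,  3]])   # 55, agreeing with all three expansions above
```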
Determinant of an n × n Matrix
The generalisation of the 3 × 3 determinant to the n × n case can be expressed as follows:

1. Expand along the ith row of A:

   det(A) = Σ_{j=1}^{n} (−1)^(i+j) aij Mij

2. Expand along the jth column of A:

   det(A) = Σ_{i=1}^{n} (−1)^(i+j) aij Mij

The minor Mij is defined to be the determinant of the (n − 1) × (n − 1) matrix that results
from A by removing the ith row and the jth column. The expression (−1)^(i+j) Mij is known
as the cofactor.

                           [ 2  1 −1 ]
Thus, for the 3 × 3 matrix [ 3  1  4 ] , several minors are
                           [ 5 −3  3 ]

M11 = |  1 4 | ,   M12 = | 3 4 | ,   M13 = | 3  1 |
      | −3 3 |           | 5 3 |           | 5 −3 |
EE2007/KV Ling/Aug 2016 57/ 141
Adjoint Matrix and Inverse

A−1 = (1/det(A)) adj(A)   where   adj(A) = [ C11 C12 . . . C1n ] T
                                           [ C21 C22 . . . C2n ]
                                           [  ..  ..       ..  ]
                                           [ Cn1 Cn2 . . . Cnn ]

is the transpose of the cofactor matrix and is called the adjoint matrix of A.

Example. Let A = [ a b ; c d ]. Then

adj(A) = [ C11 C12 ] T  =  [ (+1)M11 (−1)M12 ] T  =  [  d −c ] T  =  [  d −b ]
         [ C21 C22 ]       [ (−1)M21 (+1)M22 ]       [ −b  a ]       [ −c  a ]

EE2007/KV Ling/Aug 2016 58/ 141


Determinant of a Triangular Matrix
Theorem: Determinant of a triangular matrix
If A is an n × n triangular matrix, then the determinant of A is the product of the terms on the
diagonal. That is

det(A) = a11 a22 . . . ann

Proof: By induction.
Let n = 1, A = [a11 ]. Then |A| = a11 . Next, assume the case n is true, i.e.,
det(An×n ) = a11 a22 . . . ann for any n × n triangular matrix An×n . Now consider the
(n + 1) × (n + 1) triangular matrix

A(n+1)×(n+1) = [ An×n       ∗          ]
               [   0   a(n+1)(n+1)     ]

Expanding along the last row,

det(A(n+1)×(n+1) ) = (−1)^((n+1)+(n+1)) a(n+1)(n+1) det(An×n ) = a11 a22 . . . ann a(n+1)(n+1) .
EE2007/KV Ling/Aug 2016 59/ 141
Properties of Determinants
Let A be a square matrix.
1. If two rows(columns) of A are interchanged to produce a matrix B, then det(B) = − det(A).
2. If a multiple of one row(column) of A is added to another row(column) to produce a matrix B,
then det(B) = det(A).
3. If a row(column) of A is multiplied by a real number α to produce a matrix B, then
det(B) = α det(A).
With the above properties, an alternative method to find the determinant of a matrix is to
row(column)-reduce the matrix to triangular form and at each step, record the changes to the
determinant due to the row (column)-reduce operation. See example in next slide

EE2007/KV Ling/Aug 2016 60/ 141


Find determinant via ERO

                                       [  0  1  3 −1 ]
                                       [  2  4 −6  1 ]
Find the determinant of the matrix A = [  0  3  9  2 ]
                                       [ −2 −4  1 −3 ]

         R4 ←R4 +R2   | 0 1  3 −1 |   R1 ↔R2        | 2 4 −6  1 |
det(A)       =        | 2 4 −6  1 |      =     (−1) | 0 1  3 −1 |
                      | 0 3  9  2 |                 | 0 3  9  2 |
                      | 0 0 −5 −2 |                 | 0 0 −5 −2 |

  R3 ←R3 −3R2        | 2 4 −6  1 |   R3 ↔R4              | 2 4 −6  1 |
      =         (−1) | 0 1  3 −1 |      =      (−1)(−1)  | 0 1  3 −1 |
                     | 0 0  0  5 |                       | 0 0 −5 −2 |
                     | 0 0 −5 −2 |                       | 0 0  0  5 |

= (−1)(−1)(2)(1)(−5)(5) = −50

EE2007/KV Ling/Aug 2016 61/ 141


More Properties of Determinant
Let A and B be n × n matrices and α a real number.
1. det(AB) = det(A) det(B)
2. det(αA) = αn det(A)
3. det(AT ) = det(A)
4. If A has a row (or column) of all zeros, then det(A) = 0.
5. If A has two equal rows (or columns), then det(A) = 0.
6. If A has a row (or column) that is a multiple of another row (or column) then det(A) = 0.
7. A square matrix A is invertible if and only if det(A) ≠ 0

EE2007/KV Ling/Aug 2016 62/ 141


Elementary Matrices
DEFINITION
An elementary matrix is any matrix that can be obtained from the identity matrix by performing a
single elementary row operation.

Thus, by performing the three types of elementary row operations on a 3 × 3 identity matrix, we obtain
the corresponding 3 × 3 elementary matrices, and their determinants

R1 ↔ R3                 R2 ← αR2 , α ≠ 0           R3 ← R3 + βR1

      [ 0 0 1 ]                 [ 1 0 0 ]                  [ 1 0 0 ]
E13 = [ 0 1 0 ]       E2 (α) =  [ 0 α 0 ]        E13 (β) = [ 0 1 0 ]
      [ 1 0 0 ]                 [ 0 0 1 ]                  [ β 0 1 ]

det(E13 ) = −1        det(E2 (α)) = α            det(E13 (β)) = 1

EE2007/KV Ling/Aug 2016 63/ 141


Elementary Matrices and Elementary Row Operations
Elementary matrices can be used to perform or record row operations.
In other words, if E is the elementary matrix corresponding to the elementary row operation R,
then
   R
A ∼ B  ⇔  B = EA

Thus, if we perform a series of elementary row operations R1 , R2 , . . . , Rk represented by
E1 , E2 , . . . , Ek , then

   R1     R2          Rk
A  ∼  B1  ∼  B2 · · ·  ∼  B  ⇔  B = Ek . . . E2 E1 A

EE2007/KV Ling/Aug 2016 64/ 141


Example: Elementary Matrices and ERO

    [  1 2 −1 ]  E1 :R2 ←R2 −3R1   E2 :R3 ←R3 +R1   E3 :R3 ←R3 +3R2       [ 1  2 −1 ]
A = [  3 5  0 ]        ∼        [·]      ∼       [·]       ∼         B =  [ 0 −1  3 ]
    [ −1 1  1 ]                                                           [ 0  0  9 ]

The elementary matrices corresponding to these row operations are


     
1 0 0 1 0 0 1 0 0
E1 =  −3 1 0  , E2 =  0 1 0  , E3 =  0 1 0 
0 0 1 1 0 1 0 3 1
and it can be verified that
 
1 2 −1
B = E3 E2 E1 A =  0 −1 3 
0 0 9

EE2007/KV Ling/Aug 2016 65/ 141


Determinant via Elementary Matrices
The example in the previous slide suggests that we can also compute the determinant of an
n × n matrix by EROs and elementary matrices.
To see this, continue with the example, where after the EROs we arrived at

               [ 1  2 −1 ]
E3 E2 E1 A = B = [ 0 −1  3 ]
               [ 0  0  9 ]

⇒ det(E3 E2 E1 A) = det(B) = (1)(−1)(9) = −9
⇒ det(E3 ) det(E2 ) det(E1 ) det(A) = −9

Thus

det(A) = −9 / (det(E3 ) det(E2 ) det(E1 )) = −9

since, from the properties of elementary matrices, det(E1 ) = det(E2 ) = det(E3 ) = 1.

EE2007/KV Ling/Aug 2016 66/ 141


Elementary Matrices are Invertible
Let E be an n × n elementary matrix. Then E is invertible. Moreover, its inverse is also an
elementary matrix.
As an illustration, consider the row operation R1 ← R1 + 2R2 . The corresponding elementary
matrix is

    [ 1 2 0 ]
E = [ 0 1 0 ]
    [ 0 0 1 ]

Then E is invertible, and its inverse is given by (how do we find E −1 ?)

       [ 1 −2 0 ]
E −1 = [ 0  1 0 ]
       [ 0  0 1 ]

EE2007/KV Ling/Aug 2016 67/ 141


LU Factorization
In certain cases, an m × n matrix A can be written as A = LU, where L is a lower triangular
matrix and U is an upper triangular matrix. We call this an LU factorization of A.

For example

[ −3 −2 ]   [ −1 0 ] [ 3 2 ]
[  3  4 ] = [  1 2 ] [ 0 1 ]

When such a factorization of A exists, we can solve the linear system Ax = b using forward
and back substitution:

Ax = b ⇒ LUx = b ⇒ Ly = b (forward substitution), then Ux = y (back substitution)

EE2007/KV Ling/Aug 2016 68/ 141


Example: LU factorization of A

    [  3  6 −3 ]  E1 :R1 ←R1 /3   E2 :R2 ←R2 −6R1   E3 :R3 ←R3 +R1        [ 1 2 −1 ]
A = [  6 15 −5 ]       ∼       [·]       ∼       [·]       ∼         U =  [ 0 3  1 ]
    [ −1 −2  6 ]                                                          [ 0 0  5 ]

Thus, E3 E2 E1 A = U ⇒ A = (E3 E2 E1 )−1 U, where

                                       [  3 0 0 ]
L = (E3 E2 E1 )−1 = E1−1 E2−1 E3−1  =  [  6 1 0 ]
                                       [ −1 0 1 ]

In other words, A can be factorised as

    [  3  6 −3 ]   [  3 0 0 ] [ 1 2 −1 ]
A = [  6 15 −5 ] = [  6 1 0 ] [ 0 3  1 ]
    [ −1 −2  6 ]   [ −1 0 1 ] [ 0 0  5 ]
EE2007/KV Ling/Aug 2016 69/ 141
Solving Ax = b by LU Factorization
With b = [ 3 11 9 ]T , we have

     [  3  6 −3 ] [ x1 ]   [ 3  ]
Ax = [  6 15 −5 ] [ x2 ] = [ 11 ]
     [ −1 −2  6 ] [ x3 ]   [ 9  ]

which can be equivalently written as

      [  3 0 0 ] [ 1 2 −1 ] [ x1 ]   [ 3  ]         Ly = b
LUx = [  6 1 0 ] [ 0 3  1 ] [ x2 ] = [ 11 ]   ⇒
      [ −1 0 1 ] [ 0 0  5 ] [ x3 ]   [ 9  ]         Ux = y

(with y = Ux), so that we can first solve for y by forward substitution and then for x by back
substitution.

EE2007/KV Ling/Aug 2016 70/ 141


Solving Ax = b by LU Factorization
continued ...
Let y = [ y1 y2 y3 ]T . Then Ly = b gives

[  3 0 0 ] [ y1 ]   [ 3  ]     [ y1 ]   [ 1  ]
[  6 1 0 ] [ y2 ] = [ 11 ]  ⇒  [ y2 ] = [ 5  ]
[ −1 0 1 ] [ y3 ]   [ 9  ]     [ y3 ]   [ 10 ]

and

         [ 1 2 −1 ] [ x1 ]   [ 1  ]     [ x1 ]   [ 1 ]
Ux = y ⇒ [ 0 3  1 ] [ x2 ] = [ 5  ]  ⇒  [ x2 ] = [ 1 ]
         [ 0 0  5 ] [ x3 ]   [ 10 ]     [ x3 ]   [ 2 ]

EE2007/KV Ling/Aug 2016 71/ 141
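The forward and back substitutions above can be sketched in code (an illustrative addition, not from the slides), using the factors L and U from the example:

```python
# Triangular solves for the LU example: L y = b (forward), then U x = y (back).
def forward_sub(L, b):
    y = []
    for i in range(len(b)):
        y.append((b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i])
    return y

def back_sub(U, y):
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[3.0, 0.0, 0.0], [6.0, 1.0, 0.0], [-1.0, 0.0, 1.0]]
U = [[1.0, 2.0, -1.0], [0.0, 3.0, 1.0], [0.0, 0.0, 5.0]]
b = [3.0, 11.0, 9.0]

y = forward_sub(L, b)   # [1.0, 5.0, 10.0], as on the slide
x = back_sub(U, y)      # [1.0, 1.0, 2.0]
```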


Is LU Factorization Unique?

Consider the matrix A = [ −3 −2 ; 3 4 ]. Let’s carry out the following operations on A:

A = [ −3 −2 ]  E1 :R2 ←R2 +R1   U1 = [ −3 −2 ]  E2 :R1 ←R1 /3   U2 = [ −1 −2/3 ]
    [  3  4 ]        ∼               [  0  2 ]        ∼              [  0   2  ]

Thus

U1 = E1 A ⇒ A = E1−1 U1 = L1 U1 = [  1 0 ] [ −3 −2 ]
                                  [ −1 1 ] [  0  2 ]

and

U2 = E2 E1 A ⇒ A = (E2 E1 )−1 U2 = L2 U2 = [  3 0 ] [ −1 −2/3 ]
                                           [ −3 1 ] [  0   2  ]

which demonstrates that LU factorization is not unique.

EE2007/KV Ling/Aug 2016 72/ 141


PLU Factorization
Sometimes, we may need to interchange rows in order to reduce A to LU form, as illustrated
by the following example:

    [ 0 2 −2 ]  E1 :R1 ↔R3   E2 :R2 ←R2 −R1   E3 :R3 ←R3 −R2        [ 1 2  0 ]
A = [ 1 4  3 ]      ∼     [·]      ∼       [·]       ∼         U =  [ 0 2  3 ]
    [ 1 2  0 ]                                                      [ 0 0 −5 ]

Notice that E1 is a permutation matrix while E2 and E3 are lower triangular. So
A = (E3 E2 E1 )−1 U = E1−1 (E2−1 E3−1 ) U = PLU, with P = E1−1 and L = E2−1 E3−1 .

EE2007/KV Ling/Aug 2016 73/ 141


Vectors in Rn

Euclidean n-space, denoted by Rn , or simply n-space, is defined by

Rn = { [ x1 x2 . . . xn ]T | xi ∈ R, for i = 1, 2, . . . , n }

The entries of a vector are called the components of the vector.

For example, R2 and R3 .

EE2007/KV Ling/Aug 2016 74/ 141


Addition and Scalar Multiplication of Vectors
You should already know this
Example: Let

    [  1 ]       [ −1 ]       [ 4 ]
u = [ −2 ] , v = [  4 ] , w = [ 2 ]
    [  3 ]       [  3 ]       [ 6 ]

Find (2u + v) − 3w.

  [  1 ]   [ −1 ]     [ 4 ]   [ 1 ]   [ 12 ]   [ −11 ]
2 [ −2 ] + [  4 ] − 3 [ 2 ] = [ 0 ] − [  6 ] = [  −6 ]
  [  3 ]   [  3 ]     [ 6 ]   [ 9 ]   [ 18 ]   [  −9 ]

EE2007/KV Ling/Aug 2016 75/ 141


Algebraic Properties of Vectors in Rn
You should already know this
Let u, v and w be vectors in Rn , and let c and d be scalars. The following algebraic properties
hold.
1. Commutative: u + v = v + u
2. Associative: (u + v) + w = u + (v + w)
3. Additive identity: The vector 0 satisfies 0 + u = u + 0 = u
4. Additive inverse: For every u, the vector −u satisfies u + (−u) = −u + u = 0
5. c(u + v) = cu + cv
6. (c + d)u = cu + du
7. c(du) = (cd)u
8. (1)u = u

EE2007/KV Ling/Aug 2016 76/ 141


Linear Combination

Linear Combination:
Let S = {v1 , v2 , . . . , vk } be a set of vectors in Rn and let c1 , c2 , . . . , ck be scalars. An
expression of the form

c1 v1 + c2 v2 + · · · + ck vk = Σ_{i=1}^{k} ci vi

is called a linear combination of the vectors of S. Any vector v that can be written in this
form is also called a linear combination of the vectors of S.

EE2007/KV Ling/Aug 2016 77/ 141


Linear Combinations
                                 [ −1 ]
Determine whether the vector v = [  1 ]  is a linear combination of the vectors
                                 [ 10 ]

     [ 1 ]        [ −2 ]            [ −6 ]
v1 = [ 0 ] , v2 = [  3 ] , and v3 = [  7 ] .
     [ 1 ]        [ −2 ]            [  5 ]

Solution:
We just need to check whether c1 v1 + c2 v2 + c3 v3 = v has a solution or not. In other words,
whether

              [ c1 ]                        [ 1 −2 −6 ] [ c1 ]   [ −1 ]
[ v1 v2 v3 ]  [ c2 ] = v ,  or equivalently [ 0  3  7 ] [ c2 ] = [  1 ]
              [ c3 ]                        [ 1 −2  5 ] [ c3 ]   [ 10 ]

has a solution or not. Yes: c1 = 1, c2 = −2, c3 = 1.

EE2007/KV Ling/Aug 2016 78/ 141
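As the slide notes, the linear-combination test is just a linear solve with the given vectors as columns. A sketch (added to these notes, not from the slides; NumPy assumed):

```python
import numpy as np

# Columns of V are v1, v2, v3; v is a linear combination of them iff Vc = v
# has a solution.
V = np.array([[1, -2, -6],
              [0,  3,  7],
              [1, -2,  5]], dtype=float)
v = np.array([-1.0, 1.0, 10.0])

c = np.linalg.solve(V, v)   # approximately [1, -2, 1]
check = V @ c               # reproduces v
```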


Linear Combinations
                                 [ −5 ]
Determine whether the vector v = [ 11 ]  is a linear combination of the vectors
                                 [ −7 ]

     [  1 ]       [ 0 ]             [ 2 ]
v1 = [ −2 ] , v2 = [ 5 ] , and v3 = [ 0 ] .
     [  2 ]       [ 5 ]             [ 8 ]

Solution:
We just need to check whether c1 v1 + c2 v2 + c3 v3 = v has a solution or not.

[  1 0 2 ] [ c1 ]   [ −5 ]     [  1 0 2 | −5 ]     [ 1 0 2 | −5 ]
[ −2 5 0 ] [ c2 ] = [ 11 ]  ⇒  [ −2 5 0 | 11 ]  ∼  [ 0 5 4 |  1 ]
[  2 5 8 ] [ c3 ]   [ −7 ]     [  2 5 8 | −7 ]     [ 0 0 0 |  2 ]

which has no solution.

EE2007/KV Ling/Aug 2016 79/ 141


Spanning Set

The set of all linear combinations of v1 , v2 , . . . , vk is called the span of v1 , v2 , . . . , vk and is


denoted by span(v1 , v2 , . . . , vk ) or span(S).

In other words

span(S) = {c1 v1 + c2 v2 + · · · + ck vk |c1 , c2 , . . . , ck ∈ R}

EE2007/KV Ling/Aug 2016 80/ 141


Example: Spanning Set

Show that span( [ 2 ; −1 ] , [ 1 ; 3 ] ) is R2 .
Solution:
We need to show that

c1 [  2 ] + c2 [ 1 ] = [ x ]
   [ −1 ]      [ 3 ]   [ y ]

has a solution for all x and y .

[  2 1 | x ]  ∼  [ 2 1 | x      ]
[ −1 3 | y ]     [ 0 7 | x + 2y ]

has a solution for all x and y .

EE2007/KV Ling/Aug 2016 81/ 141


Example: Spanning Set
  
Let u = [1, 0, 3]^T and v = [−1, 1, −3]^T . What is span(u, v)?
Solution:
Let [x, y, z]^T be a vector in span(u, v). Then

[ x ]      [ 1 ]      [ −1 ]        [ 1 −1 | x ]   [ 1 −1 | x      ]
[ y ] = c1 [ 0 ] + c2 [  1 ]   ⇒   [ 0  1 | y ] ∼ [ 0  1 | y      ]
[ z ]      [ 3 ]      [ −3 ]        [ 3 −3 | z ]   [ 0  0 | z − 3x ]

has a solution exactly when z − 3x = 0. Thus span(u, v) is the plane z − 3x = 0.

Geometrically, the set of all linear combinations of u and v is just the plane through the
origin with u and v as direction vectors.

EE2007/KV Ling/Aug 2016 82/ 141


Example: Spanning Set
Let
v1 = [2, −1, 0]^T ,  v2 = [1, 3, −2]^T ,  v3 = [1, 1, 4]^T
Show that v = [−4, 4, −6]^T is in span(v1 , v2 , v3 ).
Solution: We have to show that v can be written as a linear combination of v1 , v2 and v3 , i.e.,
c1 v1 + c2 v2 + c3 v3 = v
Solving this linear system, we obtain
c1 = −2, c2 = 1, c3 = −1
Hence v is in span(v1 , v2 , v3 ).
EE2007/KV Ling/Aug 2016 83/ 141
Example: Span

    
Show that span([1, 1, 1]^T , [1, 0, 2]^T , [1, 1, 0]^T ) = R³
Solution:
A vector in R³ has the form [x, y, z]^T . We need to show that

c1 [1, 1, 1]^T + c2 [1, 0, 2]^T + c3 [1, 1, 0]^T = [x, y, z]^T  has a solution for all x, y and z.

[ 1 1 1 | x ]   [ 1  1  1 | x          ]
[ 1 0 1 | y ] ∼ [ 0 −1  0 | y − x      ]
[ 1 2 0 | z ]   [ 0  0 −1 | z + y − 2x ]

has a solution for all x, y, and z, since there is a pivot in every row.

EE2007/KV Ling/Aug 2016 84/ 141


Example: Span

    
Show that span([−1, 2, 1]^T , [4, 1, −3]^T , [−6, 3, 5]^T ) ≠ R³
Solution:
A vector in R³ has the form [x, y, z]^T . Since

c1 [−1, 2, 1]^T + c2 [4, 1, −3]^T + c3 [−6, 3, 5]^T = [x, y, z]^T

⇒ [ −1  4 −6 | x ]   [ −1 4 −6 | x           ]
  [  2  1  3 | y ] ∼ [  0 9 −9 | y + 2x      ]
  [  1 −3  5 | z ]   [  0 0  0 | 9z + 7x − y ]

has a solution only if 9z + 7x − y = 0, the three vectors do not span R³.
EE2007/KV Ling/Aug 2016 85/ 141
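The two span examples above reduce to a rank computation (a sketch assuming numpy is available): three vectors span R³ exactly when the matrix having them as columns has rank 3.

```python
import numpy as np

# Columns are the vectors from the two examples above
spans_R3 = np.array([[1., 1, 1],
                     [1., 0, 1],
                     [1., 2, 0]])      # pivot in every row -> spans R^3
no_span  = np.array([[-1., 4, -6],
                     [ 2., 1,  3],
                     [ 1., -3, 5]])    # rank 2 -> only a plane in R^3

print(np.linalg.matrix_rank(spans_R3))  # 3
print(np.linalg.matrix_rank(no_span))   # 2
```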
Linear Independence
The set of vectors S = {v1 , v2 , . . . , vm } in Rn is linearly independent provided that the only
solution to the equation
c1 v1 + c2 v2 + · · · + cm vm = 0
is the trivial solution c1 = c2 = · · · = cm = 0. Otherwise, the set S is linearly dependent.

In other words, to determine whether the set of vectors v1 , . . . , vn is LI, we form

[ v1 . . . vn ] [ c1 ; . . . ; cn ] = 0  and solve for c1 , . . . , cn .

If the only solution is c1 = . . . = cn = 0, then v1 , . . . , vn are LI; otherwise they are LD.

EE2007/KV Ling/Aug 2016 86/ 141


Example: Linear Independence
Determine whether the vectors
     
v1 = [1, 0, 1, 2]^T , v2 = [0, 1, 1, 2]^T , and v3 = [1, 1, 1, 3]^T  are linearly independent or linearly dependent.

Solution: Form the equation c1 v1 + c2 v2 + c3 v3 = 0 and solve for c1 , c2 and c3 .


   
1 0 1 0 1 0 1 0
 0 1 1 0   0 1 1 0 
  ∼ ··· ∼  
 1 1 1 0   0 0 1 0 
2 2 3 0 0 0 0 0
Since the only solution is c1 = c2 = c3 = 0, the vectors are linearly independent.

EE2007/KV Ling/Aug 2016 87/ 141


Example: Linear Independence
Determine whether the vectors
       
v1 = [1, 0, 2]^T , v2 = [−1, 1, 2]^T , v3 = [−2, 3, 1]^T , v4 = [2, 1, 1]^T  are linearly independent

Solution: Form the equation c1 v1 + c2 v2 + c3 v3 + c4 v4 = 0 and solve for c1 , c2 , c3 and c4 .


   
1 −1 −2 2 1 −1 −2 2
 0 1 3 1  ∼ ··· ∼  0 1 3 1 
2 2 1 1 0 0 −7 −7
Since there are non-trivial solutions (one free variable), the vectors are linearly dependent. Indeed, any four vectors in R³ must be linearly dependent.

EE2007/KV Ling/Aug 2016 88/ 141
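Linear independence can be checked the same way (a sketch assuming numpy is available): stack the vectors as columns; they are independent exactly when the rank equals the number of columns.

```python
import numpy as np

# Slide 87: three vectors in R^4 -> rank 3 = number of vectors -> LI
A = np.column_stack([[1., 0, 1, 2], [0., 1, 1, 2], [1., 1, 1, 3]])
print(np.linalg.matrix_rank(A) == A.shape[1])   # True

# Slide 88: four vectors in R^3 -> rank at most 3 < 4 -> LD
B = np.column_stack([[1., 0, 2], [-1., 1, 2], [-2., 3, 1], [2., 1, 1]])
print(np.linalg.matrix_rank(B) == B.shape[1])   # False
```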


Orthogonality

Two vectors u and v are orthogonal if uT v = 0.

Example
 
Let u = [2, 1, 1]^T . Find a v that is (i) orthogonal, and (ii) not orthogonal to u.

EE2007/KV Ling/Aug 2016 89/ 141
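One possible answer to the example above (an illustrative sketch assuming numpy; the vectors chosen are just one of infinitely many valid choices):

```python
import numpy as np

u = np.array([2, 1, 1])
v_orth = np.array([1, -1, -1])   # 2*1 + 1*(-1) + 1*(-1) = 0
v_not = np.array([1, 0, 0])      # 2*1 + 0 + 0 = 2 != 0

print(u @ v_orth)  # 0  -> orthogonal to u
print(u @ v_not)   # 2  -> not orthogonal to u
```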


Vector Form of a Linear System
A linear system with m equations and n variables


  a11 x1 + a12 x2 + · · · + a1n xn = b1
  a21 x1 + a22 x2 + · · · + a2n xn = b2
      ...
  am1 x1 + am2 x2 + · · · + amn xn = bm

can be written in matrix form as Ax = b.


The matrix equation can also be written in the equivalent form
       
a11 a12 a1n b1
 a21   a22   a2n   b2 
x1  + x + · · · + x ..  =  .. 
       
..  2 ..  n
 .   .   .   . 
am1 am2 amn bm
This last equation is called the vector form of the linear system.

EE2007/KV Ling/Aug 2016 90/ 141


Consistency of Ax = b
Until now, consistency means Ax = b has a solution (either unique or many solutions).
Here is another way of looking at consistency of Ax = b.

THEOREM
The linear system Ax = b is consistent if and only if the vector b can be expressed as a linear
combination of the column vectors of A.

 
Ax = b ⇒ [ v1 v2 . . . vn ] [ x1 ; x2 ; . . . ; xn ] = b
       ⇒ x1 v1 + x2 v2 + . . . + xn vn = b

EE2007/KV Ling/Aug 2016 91/ 141


Linear Systems having Unique Solution

Let Ax = b be a consistent m × n linear system. The solution is unique if and only if the
column vectors of A are linearly independent.

EE2007/KV Ling/Aug 2016 92/ 141


Summary: Different Views of the Same Coin
If A is an m × n matrix, with columns a1 , . . . , an , and if b is in Rm , the matrix equation
Ax = b
has the same solution set as the vector equation
x1 a1 + x2 a2 + · · · + xn an = b
which, in turn, has the same solution set as the system of linear equations whose augmented
matrix is
 
a1 a2 · · · an b

EE2007/KV Ling/Aug 2016 93/ 141


Summary: Different Views of the Same Coin
Cont’d
Thus, a system of linear equations may be viewed in three different but equivalent ways:
I as a matrix equation
I as a vector equation, or
I as a system of linear equations

Whenever you construct a mathematical model of a problem in real life, you are free to choose
whichever viewpoint is most natural. Then you may switch from one formulation of a
problem to another whenever it is convenient.

EE2007/KV Ling/Aug 2016 94/ 141


Photosynthesis Application Revisited
Recall that in slide 7, we set up a system of equations to balance the chemical equation
aCO2 + bH2 O → cO2 + dC6 H12 O6
We could instead set up a vector equation that describes the number of atoms of each type
present in a reaction.
         
  C :     [ 1 ]     [ 0 ]     [ 0 ]     [ 6  ]
  O :   a [ 2 ] + b [ 1 ] = c [ 2 ] + d [ 6  ]
  H :     [ 0 ]     [ 2 ]     [ 0 ]     [ 12 ]
which is the same system of three linear equations in four variables shown in slide 7

 C: a − 6d =0
O : 2a + b − 2c − 6d =0
H: 2b − 12d =0

EE2007/KV Ling/Aug 2016 95/ 141


Application: Resource allocation
A coffee merchant sells three blends of coffee. A bag of the house blend contains 300g of
Colombian beans and 200g of French roast beans. A bag of the special blend contains 200g of
Colombian beans, 200g of Kenyan beans and 100g of French roast beans. A bag of the
gourmet blend contains 100g of Colombians, 200g of Kenyan beans and 200g of French roast
beans. The merchant has on hand 30kg of Colombian beans, 15kg of Kenyan beans and 25kg
of French roast beans. If he wishes to use up all of the beans, how many bags of each type of
blend can be made?
Solution:
Let there be x1 , x2 and x3 bags of the house blend, special blend and gourmet blend
respectively. Thus
         
  C :      [ 300 ]      [ 200 ]      [ 100 ]   [ 30000 ]
  F :   x1 [ 200 ] + x2 [ 100 ] + x3 [ 200 ] = [ 25000 ]
  K :      [  0  ]      [ 200 ]      [ 200 ]   [ 15000 ]

[x1 = 65, x2 = 30, x3 = 45]

EE2007/KV Ling/Aug 2016 96/ 141
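The blend system can be solved numerically (a sketch assuming numpy is available), with the gram amounts per bag as the columns of the coefficient matrix:

```python
import numpy as np

# Rows: Colombian, French roast, Kenyan (grams); columns: house, special, gourmet
A = np.array([[300., 200., 100.],
              [200., 100., 200.],
              [  0., 200., 200.]])
b = np.array([30000., 25000., 15000.])

x = np.linalg.solve(A, b)   # bags of each blend
print(x)   # [65. 30. 45.]
```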


Application: Resource allocation
A florist offers three sizes of flower arrangements containing roses, daisies, and chrysanthemums.
Each small arrangement contains one rose, three daisies, and three chrysanthemums. Each
medium arrangement contains two roses, four daisies and six chrysanthemums. Each large
arrangement contains four roses, eight daisies and six chrysanthemums. One day, the florist
noted that she used a total of 24 roses, 50 daisies and 48 chrysanthemums in filling orders for
these three types of arrangements. How many arrangements of each type did she make?
Solution: Let there be x1 , x2 and x3 orders of small, medium and large arrangements
respectively. Thus
         
  R :      [ 1 ]      [ 2 ]      [ 4 ]   [ 24 ]
  D :   x1 [ 3 ] + x2 [ 4 ] + x3 [ 8 ] = [ 50 ]
  C :      [ 3 ]      [ 6 ]      [ 6 ]   [ 48 ]

[x1 = 2, x2 = 3, x3 = 4]

EE2007/KV Ling/Aug 2016 97/ 141


Application of Linear Systems: Electrical Networks
Find the node voltages of the circuit

EE2007/KV Ling/Aug 2016 98/ 141


Summary: Linear System, Linear Independence, Invertibility of Matrices
and Determinants
Let A be a square matrix. Then the following statements are equivalent.
1. The matrix A is invertible
2. The determinant of the matrix A is nonzero
3. The linear system Ax = b has a unique solution for every vector b
4. The homogeneous linear system Ax = 0 has only the trivial solution
5. The column vectors of A are linearly independent
6. The matrix A is row equivalent to the identity matrix

EE2007/KV Ling/Aug 2016 99/ 141


Null Space, Column Space and Row Space of a Matrix
Definition: Let A be an m × n matrix.
1. The null space of A, denoted by N(A), is the set of all vectors in Rn such that Ax = 0
N(A) = {x|Ax = 0}
2. The column space of A, denoted by col(A), is the set of all linear combinations of the
column vectors of A
col(A) = {v|v = c1 v1 + . . . + cn vn } = span(v1 , v2 , . . . , vn )
3. The row space of A, denoted by row (A), is the set of all linear combinations of the row
vectors of A
row (A) = {r|r = c1 r1 + . . . + cm rm } = span(r1 , r2 , . . . , rm )

What are the dimensions of x, vi and ri ?

x : n × 1; vi : m × 1; ri : n × 1
EE2007/KV Ling/Aug 2016 100/ 141
Example: Column, Row and Null Space of a Matrix

Let A = [ −1 0 3 1 ; 2 3 −3 1 ; 2 −2 −2 −1 ], then

1. col(A) = span([−1, 2, 2]^T , [0, 3, −2]^T , [3, −3, −2]^T , [1, 1, −1]^T )

2. row (A) = span([−1, 0, 3, 1]^T , [2, 3, −3, 1]^T , [2, −2, −2, −1]^T )

3. N(A) = span([1, 1, 1, −2]^T )
EE2007/KV Ling/Aug 2016 101/ 141
Example: Column, Row and Null Space of a Matrix
cont.
Null space of A can be determined as follows:
   
[ −1  0  3  1 ]             [ 1 0 −3 −1 ]
[  2  3 −3  1 ]  ∼ . . . ∼  [ 0 1  1  1 ]
[  2 −2 −2 −1 ]             [ 0 0  2  1 ]

The last row gives 2x3 = −x4 . Let x3 = α, then x4 = −2α, and by back substitution,
x2 = −x3 − x4 = α,  x1 = 3x3 + x4 = α
Hence, the solution for Ax = 0 is

x = [ x1 ; x2 ; x3 ; x4 ] = α [ 1 ; 1 ; 1 ; −2 ]
EE2007/KV Ling/Aug 2016 102/ 141
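The null-space result can be verified numerically (a sketch assuming numpy is available): check that Av = 0 and that the nullity is 1, so N(A) is exactly the span of v.

```python
import numpy as np

A = np.array([[-1., 0, 3, 1],
              [ 2., 3, -3, 1],
              [ 2., -2, -2, -1]])
v = np.array([1., 1, 1, -2])

print(np.allclose(A @ v, 0))                   # True: v is in N(A)
print(A.shape[1] - np.linalg.matrix_rank(A))   # 1, so N(A) = span(v)
```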
Example: Null Space and Column Space
Let
   
A = [ 1 −1 −2 ; −1 2 3 ; 2 −2 −2 ]   and   b = [3, 2, −2]^T
Determine whether b is in col(A), and find N(A).
Solution:
1. Find out whether c1 [1, −1, 2]^T + c2 [−1, 2, −2]^T + c3 [−2, 3, −2]^T = [3, 2, −2]^T has a solution. If yes, then b is in col(A).
2. Solve Ax = 0, and the solution(s) is the Null Space of A.
[Yes; N(A) = {0}]

EE2007/KV Ling/Aug 2016 103/ 141


Subspaces, Basis, Dimension, and Rank
In our study of vectors, we have seen that geometry and algebra are linked: we often use
geometric intuition and reasoning to obtain algebraic results, and the power of algebra often
allows us to extend our findings beyond geometric settings.

The notion of subspace is an algebraic generalisation of the geometric examples of lines and
planes through the origin.

The concept of a basis for a subspace is linked to direction vectors of such lines and planes.
The concept of basis gives us a precise definition of dimension.

EE2007/KV Ling/Aug 2016 104/ 141


Subspaces

DEFINITION:
A subspace of Rn is any collection S of vectors in Rn such that
1. The zero vector is in S
2. If u and v are in S, then u + v is in S (i.e., S is closed under vector addition)
3. If u is in S and c is a scalar, then cu is in S (i.e., S is closed under scalar
multiplication)

Note that properties (2) and (3) can equivalently be stated as:
S is closed under linear combinations,
i.e., If u1 , u2 , . . . , uk are in S and c1 , c2 , . . . , ck are scalars,
then c1 u1 + c2 u2 + · · · + ck uk is in S

EE2007/KV Ling/Aug 2016 105/ 141


Example: Subspaces
  
Show that W = { [a, a + 1]^T | a ∈ R } is not a subspace.
Solution: Consider two vectors in W , e.g.,

u = [ a1 ; a1 + 1 ]  and  v = [ a2 ; a2 + 1 ]

Since u + v = [ a1 + a2 ; a1 + a2 + 2 ] is NOT in W , W is not a subspace.
Alternatively, we could also note that W is not a subspace since the zero vector is
not in W .

EE2007/KV Ling/Aug 2016 106/ 141


Example: Subspaces
Every line and plane through the origin in Rn is a subspace in Rn .
Solution:
The line that passes through the origin and the point with position vector u in Rn is
L = {x|x = tu, t ∈ R}
Given any v = t1 u, w = t2 u ∈ L, and any scalar α, we have
v + w = t1 u + t2 u = (t1 + t2 )u ∈ L
and αv = α(t1 u) = (αt1 )u ∈ L, i.e. L is closed under vector addition and scalar multiplication.

Similarly, the plane that passes through the origin and two points with position vectors u1 and u2 in Rn is
P = {x|x = su1 + tu2 , s, t ∈ R}
It can be shown that P is closed under vector addition and scalar multiplication.

EE2007/KV Ling/Aug 2016 107/ 141


Example: Subspaces

Every line and plane that does not pass through the origin in Rn is not a subspace in Rn .

Proof:
The zero vector is not included, hence not a subspace.

EE2007/KV Ling/Aug 2016 108/ 141


Basis

Definition: (Basis for a Vector Space)


A subset B of a vector space V is a basis for V provided that
1. B is a linearly independent set of vectors in V
2. span(B) = V

Theorem: (Non-uniqueness of Bases)


Let B = {v1 , . . . , vn } be a basis for a vector space V and c a nonzero scalar. Then
Bc = {cv1 , v2 , . . . , vn } is also a basis for V .

EE2007/KV Ling/Aug 2016 109/ 141


Example: Show that a given set of vectors is a Basis
Show that the set
     
 1 1 0 
S =  1 , 1 , 1  is a basis for R3 .
0 1 −1
 

Solution: We need to show that (1) S spans R3 and (2) S is LI. Requirement (1) means
       
1 1 0 a
c1  1  + c2  1  + c3  1  =  b  has a solution for every choice of a, b, c.
0 1 −1 c
Requirement (2) means
       
1 1 0 0
c1 1 + c2 1 + c3
     1 = 0  has solution c1 = c2 = c3 = 0.
 
0 1 −1 0

EE2007/KV Ling/Aug 2016 110/ 141


Example: Show that a given set of vectors is a Basis
cont.

Combining requirements (1) and (2), we just need to show that

[ 1 1  0 ] [ c1 ]   [ 0 ]
[ 1 1  1 ] [ c2 ] = [ 0 ]        (call the coefficient matrix B)
[ 0 1 −1 ] [ c3 ]   [ 0 ]

has only the solution c1 = c2 = c3 = 0. In other words, the matrix B is non-singular, i.e. det(B) ≠ 0.

EE2007/KV Ling/Aug 2016 111/ 141
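The determinant test can be carried out numerically (a sketch assuming numpy is available):

```python
import numpy as np

B = np.array([[1., 1, 0],
              [1., 1, 1],
              [0., 1, -1]])

d = np.linalg.det(B)
print(round(d))   # -1, so B is non-singular and S is a basis for R^3
```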


Example: Finding a Basis
         
 1 0 1 1 −1 
Let S =  0  ,  1  ,  1  ,  2  ,  1  . Find a basis for the span of S.
1 1 2 1 −2
 

Solution: This means finding a set of independent vectors from S.


 
1 1 1 1 −1
 
1 0 1 1 −1
reduce to RE 
 0 1 1 2 1 ∼ ··· ∼  0 1 1 2 1 
 
1 1 2 1 −2 0 0 0 -2 −2
     
1 0 1
So the basis is { 0 , 1 , 2 }.
    
1 1 1

EE2007/KV Ling/Aug 2016 112/ 141


Example: Basis for Row Space
Find a basis for the row space of
 
1 2 3 4
A= 5 6 7 8 
10 12 14 16
Solution: Use ERO and reduce A to B.

    [ 1 2 3  4 ]   [ 1 0 −1 −2 ]
A ∼ [ 0 4 8 12 ] ∼ [ 0 1  2  3 ] = B
    [ 0 0 0  0 ]   [ 0 0  0  0 ]

There are only 2 independent rows. So, the first two rows of A form a basis for
row (A). We can also choose the first two rows of B as a basis for row (A).

EE2007/KV Ling/Aug 2016 113/ 141


Summary: Finding a Basis from a set of vectors
equivalently, find the linearly independent vectors from a set of vectors

Given a set S = {v1 , v2 , . . . , vn }, here are two “recipes” to find a basis for span(S):
Method 1:
1. Form a matrix A whose columns are the vectors
2. Reduce A to RE form
3. The vectors from S that correspond to the pivoting columns of the RE matrix form a
basis for span(S).
Method 2:
1. Form a matrix B whose rows are the vectors
2. Reduce B to RE form
3. The vectors from S that correspond to the non-zero rows of the RE matrix
form a basis for span(S).

EE2007/KV Ling/Aug 2016 114/ 141
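Method 1 can be sketched with sympy (assuming it is available): `rref` returns the pivot columns directly.

```python
from sympy import Matrix

# The five vectors of slide 112, as columns
A = Matrix([[1, 0, 1, 1, -1],
            [0, 1, 1, 2,  1],
            [1, 1, 2, 1, -2]])

_, pivots = A.rref()                 # pivot column indices of the RREF
print(pivots)                        # (0, 1, 3)
basis = [A.col(j) for j in pivots]   # columns 1, 2 and 4 of the original matrix
print(len(basis))                    # 3
```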


Dimension

Definition: (Dimension of a Vector Space)


The dimension of the vector space V , denoted by dim(V ), is the number of vectors in any
basis of V .

Note that if a vector space V has a basis with n vectors, then every basis has n vectors. In
other words, the dimension is a property of V and does not depend on the choice of basis.

EE2007/KV Ling/Aug 2016 115/ 141


Dimension, Rank and Nullity
Definition:(Dimension)
If S is a subspace of Rn , then the number of vectors in a basis for S is called the dimension of
S, denoted as dim(S).

dim(Rn ) = n, since the standard basis for Rn has n vectors, and we define dim(0) to be 0.

Definition:(Rank)
The rank of a matrix A is the dimension of its row or column space and is denoted as rank(A).

Definition:(Nullity)
The nullity of a matrix A is the dimension of its null space and is denoted as
nullity (A).

EE2007/KV Ling/Aug 2016 116/ 141


Rank of a Matrix
The row and column space of a matrix A have the same dimension.
Hence we have rank(AT ) = rank(A).

In addition, we have
1. rank(AB) ≤ rank(B)
2. rank(AB) = rank(B), if A−1 exists
3. rank(AB) = rank(A), if B −1 exists
4. rank(A + B) ≤ rank(A) + rank(B)
5. rank(A) ≤ min(m, n), where A is an m × n matrix

EE2007/KV Ling/Aug 2016 117/ 141


The Rank Theorem

Theorem:
If A is an m × n matrix, then

rank(A) + nullity (A) = n

Proof:
Let R be the RRE of A, and suppose rank(A) = r . Then R has r leading variables and (n − r )
free variables in the solution of Ax = 0, i.e. dim(N(A)) = n − r . Thus we have
rank(A) + nullity (A) = r + (n − r ) = n

EE2007/KV Ling/Aug 2016 118/ 141


The Rank Theorem  
1 2 3 4
Example: Find the rank and nullity of the matrix A =  5 6 7 8 .
15 18 21 24
Solution:  
1 2 3 4
Use ERO, A ∼  0 −4 −8 −12 , so rank(A) = 2.
0 0 0 0
Since A is a 3 × 4 matrix, the Rank Theorem gives
rank(A) + nullity (A) = n = 4, giving nullity (A) = 2.
Alternatively, if we take the Ax = 0 method to find nullity, we will have 2
independent equations with 4 unknowns, so there are two free variables, hence
nullity (A) = 2.

EE2007/KV Ling/Aug 2016 119/ 141
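The example can be confirmed numerically (a sketch assuming numpy is available):

```python
import numpy as np

A = np.array([[ 1.,  2,  3,  4],
              [ 5.,  6,  7,  8],
              [15., 18, 21, 24]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank      # Rank Theorem: rank + nullity = n
print(rank, nullity)             # 2 2
```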


Eigenvalues and Eigenvectors

Definition: Eigenvalues and Eigenvectors


Let A be an n × n matrix. A number λ is called an eigenvalue of A if there exists a nonzero
vector v in Rn such that

Av = λv

Every nonzero vector v satisfying this equation is called an eigenvector of A


corresponding to the eigenvalue λ.

Example:
Let A = [ 1 2 ; 0 −1 ]. Since A [1, 0]^T = [1, 0]^T , [1, 0]^T is an eigenvector of A
corresponding to the eigenvalue λ1 = 1. The other eigenvector of A is [1, −1]^T ,
corresponding to the eigenvalue λ2 = −1.
EE2007/KV Ling/Aug 2016 120/ 141
Finding eigenvalues and eigenvectors of an n × n matrix
By definition, if λ is an eigenvalue and v is the corresponding eigenvector of the matrix A, then
Av = λv ⇒ (λI − A)v = 0
Since v ≠ 0, we must have
det(λI − A) = 0, which we can solve to find all the eigenvalues of A.
Once the eigenvalues are found, we then substitute the particular eigenvalue λi and solve the
following linear system for the corresponding eigenvector vi
(λi I − A)vi = 0

In other words, vi is in the null space of (λi I − A), which we sometimes refer to as the
eigenspace corresponding to λi .

EE2007/KV Ling/Aug 2016 121/ 141


Example: Eigenvalues and Eigenvectors
Find the eigenvalues and eigenvectors of the matrix A = [ 0 1 ; 1 0 ].
Solution:
Step 1: Find the eigenvalues

det(λI − A) = | λ −1 ; −1 λ | = λ² − 1 = 0 ⇒ λ = ±1
Step 2: Find the corresponding eigenvectors
Substitute λ1 = 1, solve for v1 :

(λ1 I − A)v1 = 0 ⇒ [ 1 −1 ; −1 1 ] [ x1 ; x2 ] = 0 ⇒ x1 = x2 ⇒ v1 = [ 1 ; 1 ]

Similarly, substitute λ2 = −1, solve for v2 :

(λ2 I − A)v2 = 0 ⇒ [ −1 −1 ; −1 −1 ] [ x1 ; x2 ] = 0 ⇒ x1 = −x2 ⇒ v2 = [ 1 ; −1 ]
EE2007/KV Ling/Aug 2016 122/ 141
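The hand computation matches numpy's `eig` (a sketch assuming numpy is available; note that numpy normalises eigenvectors to unit length, so they are scalar multiples of the ones found above):

```python
import numpy as np

A = np.array([[0., 1], [1., 0]])
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are the eigenvectors

print(np.sort(eigvals.real))          # [-1.  1.]
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))   # True for each pair
```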
Geometric Interpretation of Eigenvalues and Eigenvectors

see MATLAB demo: eigshow

EE2007/KV Ling/Aug 2016 123/ 141


Example: Complex Eigenvalues
 
0 0 0
Find the eigenvalues of A =  0 0 −1 
0 1 0
Solution:
The characteristic equation is

det(λI − A) = | λ 0 0 ; 0 λ 1 ; 0 −1 λ | = λ(λ² + 1) = 0

Thus the eigenvalues are λ1 = 0, λ2 = i, λ3 = −i.
The corresponding eigenvectors are [1, 0, 0]^T , [0, 1, −i]^T , and [0, 1, i]^T .

EE2007/KV Ling/Aug 2016 124/ 141


Example: one eigenvalue, more than one eigenvector
Find the eigenvalues of the matrix
 
A = [ 1 0 0 0 ; 0 1 5 −10 ; 1 0 2 0 ; 1 0 0 3 ]  and find the corresponding eigenvectors.
Solution:
First, find the eigenvalues:

|λI − A| = | λ−1 0 0 0 ; 0 λ−1 −5 10 ; −1 0 λ−2 0 ; −1 0 0 λ−3 | = (λ − 1)²(λ − 2)(λ − 3) = 0
Hence, the eigenvalues are λ = 1, 1, 2, 3.

EE2007/KV Ling/Aug 2016 125/ 141


Example: one eigenvalue, more than one eigenvector
cont.
Let’s find the eigenvector(s) for λ = 1 through (λI − A)v1 = 0. Solving for the eigenvector by the
augmented matrix method:

[  0 0  0  0 ]   [  1 0  0  2 ]   [ 1 0  0  2 ]
[  0 0 −5 10 ] ∼ [  0 0  1 −2 ] ∼ [ 0 0  1 −2 ]
[ −1 0 −1  0 ]   [ −1 0 −1  0 ]   [ 0 0 −1  2 ]
[ −1 0  0 −2 ]   [  0 0  0  0 ]   [ 0 0  0  0 ]

⇒ [ x1 ; x2 ; x3 ; x4 ] = t [ 0 ; 1 ; 0 ; 0 ] + s [ −2 ; 0 ; 2 ; 1 ] = t v11 + s v12

Thus there are two eigenvectors corresponding to λ = 1:
A(t v11 + s v12 ) = (t v11 + s v12 )
Set s = 0, t = 1 ⇒ Av11 = v11 ; set s = 1, t = 0 ⇒ Av12 = v12 .

EE2007/KV Ling/Aug 2016 126/ 141


Use MATLAB to find eigenvalues and eigenvectors

EE2007/KV Ling/Aug 2016 127/ 141
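The slide above refers to MATLAB's `eig`; an equivalent sketch in Python/numpy, applied to the 4 × 4 matrix of the previous example:

```python
import numpy as np

A = np.array([[1., 0, 0,   0],
              [0., 1, 5, -10],
              [1., 0, 2,   0],
              [1., 0, 0,   3]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.sort(eigvals.real))   # [1. 1. 2. 3.]
```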


Diagonalisation, A = PDP^−1
An n × n matrix A is diagonalisable if and only if A has n linearly independent eigenvectors.
Moreover, if D = P^−1 AP, with D a diagonal matrix, then the diagonal entries of D are the
eigenvalues of A and the column vectors of P are the corresponding eigenvectors.

Proof:
Suppose A has n linearly independent eigenvectors v1 , v2 , . . . , vn corresponding to the eigenvalues
λ1 , λ2 , . . . , λn . Then

Avi = λi vi ⇒ AP = PD, where P = [ v1 v2 . . . vn ] and D = diag(λ1 , λ2 , . . . , λn ),

so A = PDP^−1 .

EE2007/KV Ling/Aug 2016 128/ 141


Mini-Tutorial:
 
[ λ1 v1 . . . λn vn ] = [ v1 . . . vn ] diag(λ1 , . . . , λn )

Consider A = [ λ1 4λ2 7λ3 ; 2λ1 5λ2 8λ3 ; 3λ1 6λ2 9λ3 ]. Then we can re-write A as either

A = [ λ1 [1, 2, 3]^T   λ2 [4, 5, 6]^T   λ3 [7, 8, 9]^T ]

or

A = [ 1 4 7 ; 2 5 8 ; 3 6 9 ] [ λ1 0 0 ; 0 λ2 0 ; 0 0 λ3 ]

EE2007/KV Ling/Aug 2016 129/ 141


Example: Diagonalisation of a matrix
Diagonalise
 
1 0 0
A =  6 −2 0 
7 −4 2
Solution:
It can be shown that the eigenvalues and the corresponding eigenvectors of A are
     
λ1 = 1, v1 = [1, 2, 1]^T ;  λ2 = −2, v2 = [0, 1, 1]^T ;  λ3 = 2, v3 = [0, 0, 1]^T

Thus

P = [ 1 0 0 ; 2 1 0 ; 1 1 1 ],  D = [ 1 0 0 ; 0 −2 0 ; 0 0 2 ]  and A = PDP^−1 .
EE2007/KV Ling/Aug 2016 130/ 141
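The factorisation can be verified directly (a sketch assuming numpy is available):

```python
import numpy as np

A = np.array([[1.,  0, 0],
              [6., -2, 0],
              [7., -4, 2]])
P = np.array([[1., 0, 0],
              [2., 1, 0],
              [1., 1, 1]])
D = np.diag([1., -2., 2.])

# A = P D P^{-1} reconstructs the original matrix
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```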
Example: Matrix not diagonalisable
Diagonalise A = [ −1 1 0 ; 0 −1 1 ; 0 0 2 ]
Solution:
The matrix has eigenvalues λ = −1, −1, 2. For the repeated eigenvalue λ = −1 there is only one
independent eigenvector, [1, 0, 0]^T , and for eigenvalue λ = 2 the eigenvector is [1, 3, 9]^T .
The matrix is not diagonalisable because it

EE2007/KV Ling/Aug 2016 131/ 141


Mini-Tutorial: Solution of ẋ(t) = λx(t)

d/dt x(t) = λx(t)
⇒ ∫_{x(0)}^{x(t)} dx/x = λ ∫_0^t dt
⇒ ln( x(t)/x(0) ) = λt
⇒ x(t) = e^{λt} x(0)

How to handle coupled differential equations?


ẋ1 (t) = a1 x1 (t) + a2 x2 (t)
ẋ2 (t) = b1 x1 (t) + b2 x2 (t)

EE2007/KV Ling/Aug 2016 132/ 141


Application: Solution of Systems of Linear Differential Equations
Find the general solution to the system of differential equations
 d
dt y1 (t) = ẏ1 (t) = −y1 (t)
d
dt y2 (t) = ẏ2 (t) = 3y1 (t) + 2y2 (t)

Solution:
Write the system in matrix form ẏ = Ay:

[ ẏ1 ]   [ −1 0 ] [ y1 ]
[ ẏ2 ] = [  3 2 ] [ y2 ]
Hence

ẏ = Ay = PDP^−1 y ⇒ P^−1 ẏ = D (P^−1 y) ⇒ ẇ = Dw,  where w = P^−1 y.

The matrix A = [ −1 0 ; 3 2 ] can be diagonalised with P = [ 1 0 ; −1 1 ] and D = [ −1 0 ; 0 2 ].

EE2007/KV Ling/Aug 2016 133/ 141


The general solution for ẇ = Dw is

w(t) = e^{Dt} w(0) = [ e^{−t} 0 ; 0 e^{2t} ] w(0)

Hence

y(t) = Pw(t) = Pe^{Dt} w(0) = Pe^{Dt} P^−1 y(0) = [ e^{−t}  0 ; −e^{−t} + e^{2t}  e^{2t} ] y(0)

EE2007/KV Ling/Aug 2016 134/ 141
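The closed-form matrix Pe^{Dt}P^−1 can be checked numerically at a sample time (a sketch assuming numpy is available):

```python
import numpy as np

P = np.array([[1., 0], [-1., 1]])
Pinv = np.linalg.inv(P)

def Phi(t):
    """State-transition matrix P e^{Dt} P^{-1} for D = diag(-1, 2)."""
    eDt = np.diag([np.exp(-t), np.exp(2 * t)])
    return P @ eDt @ Pinv

t = 0.3
closed_form = np.array([[np.exp(-t),                  0.],
                        [-np.exp(-t) + np.exp(2 * t), np.exp(2 * t)]])
print(np.allclose(Phi(t), closed_form))   # True
```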


Application: Time response of Circuits
The circuit in the figure can be described by the differential equation
    
[ ẋ1 ]   [ −(1/R1 + 1/R2 )/C1    1/(R2 C1 ) ] [ x1 ]
[ ẋ2 ] = [  1/(R2 C2 )          −1/(R2 C2 ) ] [ x2 ]

where x1 and x2 are voltages across the capacitors C1 and C2 respectively; R1 = 1,


R2 = 2, C1 = 1 and C2 = 0.5.
The initial voltages on the capacitors were x1 (0) = 5 and x2 (0) = 4. Show how the voltages across the
capacitors change with time.

EE2007/KV Ling/Aug 2016 135/ 141


Application: Time response of Circuits
cont.
Solution:
The system is [ ẋ1 ; ẋ2 ] = [ −1.5 0.5 ; 1 −1 ] [ x1 ; x2 ] and the matrix has the corresponding
eigenvalue/eigenvector pairs

λ1 = −0.5, v1 = [ 1 ; 2 ],   λ2 = −2, v2 = [ −1 ; 1 ]

Thus

ẋ = PDP^−1 x, where P = [ v1 v2 ], D = [ λ1 0 ; 0 λ2 ],

and ...

EE2007/KV Ling/Aug 2016 136/ 141


Application: Time response of Circuits
cont.

x(t) = Pe^{Dt} P^−1 x(0) = [ v1 v2 ] [ e^{λ1 t} 0 ; 0 e^{λ2 t} ] [ c1 ; c2 ]
     = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2

where [ c1 ; c2 ] = P^−1 x(0).

The constants c1 and c2 can be determined from the initial conditions as follows:

x(0) = c1 [ 1 ; 2 ] + c2 [ −1 ; 1 ] = [ 5 ; 4 ]

giving c1 = 3 and c2 = −2.

EE2007/KV Ling/Aug 2016 137/ 141


Application: Long Term Behaviour of Stable Dynamical System

Let A = [ 0.95 0.03 ; 0.05 0.97 ]. Analyse the long-term behaviour of the dynamical system
xk+1 = Axk , with x1 = [ 0.6 ; 0.4 ].
Solution:
The long-term behaviour of x can be found as follows:
xk+1 = Axk = A(Axk−1 ) = A . . . Ax1 = A^k x1 ;  let k → ∞, x∞ = A^∞ x1
But, how to compute A∞ ? The trick is to diagonalise A.
Then
x∞ = A∞ x1 = (PDP −1 )∞ x1
= (PDP −1 )(PDP −1 ) . . . (PDP −1 )x1 = PD ∞ P −1 x1

EE2007/KV Ling/Aug 2016 138/ 141


Application: Long Term Behaviour of Stable Dynamical System
cont.
Matrix A has the following eigenvalue/eigenvector pairs:

λ1 = 1, v1 = [ 3 ; 5 ],  and λ2 = 0.92, v2 = [ 1 ; −1 ]

Hence, A can be diagonalised with P = [ 3 1 ; 5 −1 ], i.e.,

A = PDP^−1 , where D = [ 1 0 ; 0 0.92 ]

Thus, as k → ∞

x∞ = P [ 1 0 ; 0 0 ] P^−1 x1 = [ 0.375 ; 0.625 ]

EE2007/KV Ling/Aug 2016 139/ 141
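The limit can also be checked by simply iterating the dynamical system (a sketch assuming numpy is available):

```python
import numpy as np

A = np.array([[0.95, 0.03],
              [0.05, 0.97]])
x = np.array([0.6, 0.4])

for _ in range(2000):   # |0.92|^k -> 0, so x_k approaches the steady state
    x = A @ x

print(np.round(x, 3))   # [0.375 0.625]
```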


MRT Early Bird Incentive Scheme
In July 2013, city S started experimenting with the “early bird” incentive scheme to encourage
a change in the morning peak-hour commuting pattern on the MRT system. The authority
estimated that each month 30% of those who tried the “early bird” scheme would decide to go
back to travel during the morning peak-hour, and 20% of those who travelled during the
morning peak-hour would switch to the “early bird” scheme.
Suppose the total population of the commuters using the MRT is 100,000 and remains
constant. In July 2013, 20,000 commuters joined the “early bird” scheme.
How many commuters will participate in the “early bird” scheme in the long run?

EE2007/KV Ling/Aug 2016 140/ 141


Solution:
The problem can be modelled by the following dynamical system:
     
[ Early Bird ] :  xk+1 = [ 0.7 0.2 ; 0.3 0.8 ] xk ,  x1 = [ 20000 ; 80000 ]
[ Peak Hour ]

The matrix [ 0.7 0.2 ; 0.3 0.8 ] can be diagonalised with

P = [ −1 1 ; 1 1.5 ],  and D = [ 0.5 0 ; 0 1 ]

Thus, as k → ∞

[ Early Bird ] :  x∞ = P [ 0 0 ; 0 1 ] P^−1 x1 = [ 40000 ; 60000 ]
[ Peak Hour ]

EE2007/KV Ling/Aug 2016 141/ 141
