Linear Algebra
K.V. Ling
ekvling@ntu.edu.sg
Rm: S2-B2a-22, School of EEE
Nanyang Technological University
4 August 2016
Scope and Textbook
- Topic 1: Systems of Linear Equations and Matrices
- Topic 2: Linear Combinations and Linear Independence
- Topic 3: Vector Spaces
- Topic 4: Eigenvalues and Eigenvectors
- Reference:
  DeFranza and Gagliardi,
  Introduction to Linear Algebra with Applications,
  McGraw-Hill, 2009
  NTU LWNL: QA184.2.D316
Some examples:
- Professor Gilbert Strang's Video Lectures on Linear Algebra
  (ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/)
- Free Linear Algebra textbook (joshua.smcvt.edu/linearalgebra)
- Linear Algebra at UC Davis (www.math.ucdavis.edu/~linear/)
Many diverse applications are modeled by systems of equations. In this module we will:
- develop systematic methods for solving systems of linear equations, and
- determine whether a system of equations has (a) a unique solution, (b) infinitely many solutions, or (c) no solution.
(3/4) v1 − (1/2) v2 − (1/4) v3 = 3
−(1/2) v1 + (7/8) v2 − (1/8) v3 = 0
(1/4) v1 − (3/8) v2 + (1/8) v3 = 0
x1 + 2x2 + 4x3 = 24
3x1 + 4x2 + 8x3 = 50
3x1 + 6x2 + 6x3 = 48
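As a quick numerical check of a system like the one above, the following sketch (using numpy; not part of the original notes) solves the 3 × 3 system directly:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#   x1 + 2x2 + 4x3 = 24
#  3x1 + 4x2 + 8x3 = 50
#  3x1 + 6x2 + 6x3 = 48
A = np.array([[1.0, 2.0, 4.0],
              [3.0, 4.0, 8.0],
              [3.0, 6.0, 6.0]])
b = np.array([24.0, 50.0, 48.0])

x = np.linalg.solve(A, b)   # unique solution, since det(A) != 0
print(x)                    # x1 = 2, x2 = 3, x3 = 4
```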
Any positive integers a, b, c and d that satisfy all three equations form a solution. For example,
a = 6, b = 6, c = 6 and d = 1. The general solution is a = b = c = 6d.
The system

x1 − 2x2 + x3 = 0
x1      − x3 = 2
     x2 − 4x3 = 4

can be recorded as

[ 1 −2  1 | 0 ]
[ 1  0 −1 | 2 ]
[ 0  1 −4 | 4 ]

The matrix [ 1 −2 1 ; 1 0 −1 ; 0 1 −4 ] is called the coefficient matrix, and
[ 1 −2 1 0 ; 1 0 −1 2 ; 0 1 −4 4 ] is called the augmented matrix of the system.
DEFINITION
Two linear systems are equivalent if they have the same solution set.
The method essentially converts an m × n linear system into an equivalent system in
triangular form, and then uses back substitution to obtain the solution.
For example,

 x1 − 2x2 +  x3 = −1        x1 − 2x2 + x3 = −1
2x1 − 3x2 −  x3 =  3   ⇒         x2 − 3x3 =  5
 x1 − 2x2 + 2x3 =  1                   x3 =  2
Given a system of linear equations, how can we tell, systematically, whether the system has
- a unique solution,
- infinitely many solutions, or
- no solution?
R2 ← R2 + R1     [ 1  1  1 |  4 ]
R3 ← R3 − 2R1    [ 0  0  2 |  2 ]
                 [ 0 −3  0 | −6 ]

                 [ 1  1  1 |  4 ]    R2 ← −(1/3)R2    [ 1  1  1 | 4 ]
R2 ↔ R3          [ 0 −3  0 | −6 ]                     [ 0  1  0 | 2 ]
                 [ 0  0  2 |  2 ]    R3 ← (1/2)R3     [ 0  0  1 | 1 ]

Answer: x = 1, y = 2, z = 1
Answer: x1 = −1, x2 = 2, x3 = −2
∼ ··· ∼  [ 1 0  3 0 −2 | −24 ]
         [ 0 1 −2 2  0 |  −7 ]
         [ 0 0  0 0  1 |   4 ]
1. Start with the leftmost nonzero column. This is the pivot column.
2. Interchange rows, if necessary, such that the top of the pivot column is nonzero.
3. Use ERO to reduce the matrix to a RE form.
4. Start with the rightmost pivot and work to the left. Use ERO to reduce the matrix to a RRE form.
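The four steps above are exactly what a computer-algebra rref routine carries out. A minimal sketch with sympy follows; the starting augmented matrix is reconstructed to be consistent with the worked example above (answer x = 1, y = 2, z = 1), so treat it as illustrative:

```python
from sympy import Matrix

# Augmented matrix of a 3x3 system whose solution is x = 1, y = 2, z = 1
# (reconstructed to match the worked example above).
M = Matrix([[1,  1, 1,  4],
            [-1, -1, 1, -2],
            [2, -1, 2,  2]])

rre, pivots = M.rref()   # reduced row echelon form and pivot-column indices
print(rre)               # identity block on the left, solution column on the right
print(pivots)            # (0, 1, 2)
```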
Answer: x1 = 1, x2 = 2, x3 = 3, x4 = 4
EE2007/KV Ling/Aug 2016 26/ 141
Many Solutions
3x1 −x2 +x3 −8x4 = 4
x1 +2x2 −x3 −5x4 = 3
−x1 −3x2 +2x3 +3x4 = −2
The third row of the RE form corresponds to the equation 0x1 + 0x2 + 0x3 = 1, hence the system is inconsistent and has no solution.
So we need c − a + (b − 3a)/2 = 0 for consistency.
A = [  1 3  0 ]          [  3 −2  5 ]
    [  2 1 −3 ] ,  B  =  [ −1  4 −2 ]
    [ −4 6  2 ]          [  1  0  3 ]

AB = [ (1)(3) + (3)(−1) + (0)(1)   −2 + 12 + 0    5 − 6 + 0     ]   [   0  10  −1 ]
     [ 6 − 1 − 3                   −4 + 4 + 0     10 − 2 − 9    ] = [   2   0  −1 ]
     [ −12 − 6 + 2                 8 + 24 + 0     −20 − 12 + 6  ]   [ −16  32 −26 ]
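The entry-by-entry arithmetic above can be checked with numpy (a sketch; not part of the original notes):

```python
import numpy as np

A = np.array([[1, 3, 0],
              [2, 1, -3],
              [-4, 6, 2]])
B = np.array([[3, -2, 5],
              [-1, 4, -2],
              [1, 0, 3]])

AB = A @ B   # row-by-column products, as computed by hand above
print(AB)
# [[  0  10  -1]
#  [  2   0  -1]
#  [-16  32 -26]]
```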
Answer: S = { [ a 0 ; c a ] | a, c ∈ R }  (rows separated by ";")
Let A, B and C be matrices with sizes such that the given expressions are all defined, and let c be
a real number.
1. A(BC ) = (AB)C
2. c(AB) = (cA)B = A(cB)
3. A(B + C ) = AB + AC
4. (B + C )A = BA + CA
Using the transpose notation, the dot product of two (column) vectors can be written as

u · v = u^T v = u1 v1 + u2 v2 + · · · + un vn = Σ_{i=1}^{n} ui vi
AA^−1 = I = A^−1 A
The above result can be extended to (ABC)^−1 = C^−1 B^−1 A^−1 and, in general,
(A1 A2 · · · Ak)^−1 = Ak^−1 Ak−1^−1 · · · A1^−1.
Proof:
         [ w x ]
Let B  = [ y z ].  We want B = A^−1. By the definition of the inverse matrix, we want to find
w, x, y, z such that AB = I. Thus

AB = [ aw + by   ax + bz ]   [ 1 0 ]
     [ cw + dy   cx + dz ] = [ 0 1 ],

and we use ERO to solve the resulting 4 × 4 system in the unknowns (w, x, y, z):

[ 1 0 b/a  0  | 1/a ]      [ 1 0  b/a        0       | 1/a  ]
[ 0 1  0  b/a |  0  ]  ∼   [ 0 1   0        b/a      |  0   ]
[ c 0  d   0  |  0  ]      [ 0 0 d − bc/a    0       | −c/a ]
[ 0 c  0   d  |  1  ]      [ 0 0   0      d − bc/a   |  1   ]

Back substitution then gives

z = a/(ad − bc),   y = −c/(ad − bc),   x = −bz/a = −b/(ad − bc),   w = d/(ad − bc)
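The closed-form inverse derived above, A^−1 = (1/(ad − bc)) [ d −b ; −c a ], can be checked numerically. A numpy sketch (the sample entries are arbitrary, chosen so that ad − bc ≠ 0):

```python
import numpy as np

def inv2x2(a, b, c, d):
    """Closed-form inverse of [[a, b], [c, d]], valid when ad - bc != 0."""
    det = a * d - b * c
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])        # ad - bc = 8 - 3 = 5, so A is invertible
B = inv2x2(2.0, 3.0, 1.0, 4.0)

print(A @ B)                      # identity matrix (up to rounding)
```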
 x1 −  6x2 − 4x3 = −5
2x1 − 10x2 − 9x3 = −4     can be written as Ax = b
−x1 +  6x2 + 5x3 =  3

where

A = [  1  −6 −4 ]        [ x1 ]        [ −5 ]
    [  2 −10 −9 ] ,  x = [ x2 ] ,  b = [ −4 ]
    [ −1   6  5 ]        [ x3 ]        [  3 ]
Theorem
If the n × n matrix A is invertible, then for every vector b, with n components, the linear system
Ax = b has the unique solution x = A−1 b.
Proof: Since A is invertible, x = A^−1 b satisfies Ax = A(A^−1 b) = b, so a solution exists.
Conversely, if Ax = b, multiplying both sides by A^−1 gives x = A^−1 b, so the solution is unique.
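Applying the theorem to the system Ax = b above gives a quick numerical check (a numpy sketch):

```python
import numpy as np

A = np.array([[1.0, -6.0, -4.0],
              [2.0, -10.0, -9.0],
              [-1.0, 6.0, 5.0]])
b = np.array([-5.0, -4.0, 3.0])

x = np.linalg.inv(A) @ b   # x = A^-1 b; unique since A is invertible
print(x)                   # x1 = -1, x2 = 2, x3 = -2
```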
A homogeneous linear system is a system of the form Ax = 0. The vector x = 0 is always
a solution to the homogeneous system Ax = 0 and is called the trivial solution.
is

|A| = a11 | a22 a23 |  −  a12 | a21 a23 |  +  a13 | a21 a22 |
          | a32 a33 |         | a31 a33 |         | a31 a32 |
Hint: To reduce computation, expand along a row/column which has many zeros.
If the last row of A(n+1)×(n+1) is zero except for the corner entry a(n+1)(n+1), i.e.

A(n+1)×(n+1) = [ An×n        *        ]
               [ 0 · · · 0   a(n+1)(n+1) ]

then expanding along the last row gives det(A(n+1)×(n+1)) = a(n+1)(n+1) det(An×n).
Thus, by performing the three types of elementary row operations on a 3 × 3 identity matrix, we obtain
the corresponding 3 × 3 elementary matrices, and their determinants:

R1 ↔ R3                 R2 ← αR2, α ≠ 0          R3 ← R3 + βR1

E13 = [ 0 0 1 ]         E2(α) = [ 1 0 0 ]        E31(β) = [ 1 0 0 ]
      [ 0 1 0 ]                 [ 0 α 0 ]                 [ 0 1 0 ]
      [ 1 0 0 ]                 [ 0 0 1 ]                 [ β 0 1 ]

det(E13) = −1           det(E2(α)) = α           det(E31(β)) = 1
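These determinant values can be confirmed numerically (a sketch; α = 5 and β = 2 are arbitrary sample values, not from the notes):

```python
import numpy as np

alpha, beta = 5.0, 2.0   # arbitrary sample values

E13 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=float)   # R1 <-> R3
E2 = np.array([[1, 0, 0], [0, alpha, 0], [0, 0, 1]])             # R2 <- alpha*R2
E31 = np.array([[1, 0, 0], [0, 1, 0], [beta, 0, 1]])             # R3 <- R3 + beta*R1

print(np.linalg.det(E13))   # -1: a row swap flips the sign of the determinant
print(np.linalg.det(E2))    #  5: scaling a row scales the determinant by alpha
print(np.linalg.det(E31))   #  1: adding a multiple of a row changes nothing
```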
For example,

[ −3 −2 ]   [ −1 0 ] [ 3 2 ]
[  3  4 ] = [  1 2 ] [ 0 1 ]
When such a factorization of A exists, we can solve the linear system Ax = b using forward and
back substitution:

Ax = b  ⇒  LUx = b  ⇒  Ly = b  (forward substitution)
                       Ux = y  (back substitution)
Ax = [  3  6 −3 ] [ x1 ]   [  3 ]
     [  6 15 −5 ] [ x2 ] = [ 11 ]
     [ −1 −2  6 ] [ x3 ]   [  9 ]

which can be equivalently written as

LUx = [  3 0 0 ] [ 1 2 −1 ] [ x1 ]   [  3 ]        Ly = b
      [  6 1 0 ] [ 0 3  1 ] [ x2 ] = [ 11 ]   ⇒
      [ −1 0 1 ] [ 0 0  5 ] [ x3 ]   [  9 ]        Ux = y

where y denotes the product Ux.
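A minimal sketch of the forward/back substitution for this example (numpy; the substitution loops are written out explicitly rather than calling a library solver):

```python
import numpy as np

L = np.array([[3.0, 0.0, 0.0],
              [6.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0]])
U = np.array([[1.0, 2.0, -1.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
b = np.array([3.0, 11.0, 9.0])

# Forward substitution: solve Ly = b from the top row down
y = np.zeros(3)
for i in range(3):
    y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]

# Back substitution: solve Ux = y from the bottom row up
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print(x)   # solution of Ax = b with A = LU
```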
Linear Combination:
Let S = {v1, v2, . . . , vk} be a set of vectors in Rn and let c1, c2, . . . , ck be scalars. An
expression of the form

c1 v1 + c2 v2 + · · · + ck vk = Σ_{i=1}^{k} ci vi

is called a linear combination of the vectors of S. Any vector v that can be written in this
form is also called a linear combination of the vectors of S.
Geometrically, the set of all linear combinations of u and v is the plane through the
origin with u and v as direction vectors.
If the only solution of c1 v1 + · · · + cn vn = 0 is c1 = · · · = cn = 0, then v1, . . . , vn are LI; otherwise they are LD.
Example
        [ 2 ]
Let u = [ 1 ].  Find a v that is (i) orthogonal, and (ii) not orthogonal, to u.
        [ 1 ]
THEOREM
The linear system Ax = b is consistent if and only if the vector b can be expressed as a linear
combination of the column vectors of A.
Ax = b  ⇒  [ v1 v2 · · · vn ] [ x1 ]
                              [ x2 ]
                              [  ⋮ ] = b
                              [ xn ]

        ⇒  x1 v1 + x2 v2 + · · · + xn vn = b
Let Ax = b be a consistent m × n linear system. The solution is unique if and only if the
column vectors of A are linearly independent.
Whenever you construct a mathematical model of a problem in real life, you are free to choose
whichever viewpoint that is most natural. Then you may switch from one formulation of a
problem to another whenever it is convenient.
x : n × 1; vi : m × 1; ri : n × 1
Example: Column, Row and Null Space of a Matrix

Let A = [ −1  0  3  1 ]
        [  2  3 −3  1 ]
        [  2 −2 −2 −1 ],  then

1. col(A) = span( [ −1 ]  [  0 ]  [  3 ]  [  1 ]
                  [  2 ], [  3 ], [ −3 ], [  1 ] )
                  [  2 ]  [ −2 ]  [ −2 ]  [ −1 ]

2. row(A) = span( [ −1 ]  [  2 ]  [  2 ]
                  [  0 ]  [  3 ]  [ −2 ]
                  [  3 ], [ −3 ], [ −2 ] )
                  [  1 ]  [  1 ]  [ −1 ]

3. N(A) = span( [  1 ]
                [  1 ]
                [  1 ]
                [ −2 ] )
Example: Column, Row and Null Space of a Matrix (cont.)
The null space of A can be determined as follows:

[ −1  0  3  1 ]            [ 1 0 −3 −1 ]
[  2  3 −3  1 ]  ∼ ··· ∼   [ 0 1  1  1 ]
[  2 −2 −2 −1 ]            [ 0 0  2  1 ]
The last row gives 2x3 = −x4 . Let x3 = α, then x4 = −2α, and by back substitution,
x2 = −x3 − x4 = α, x1 = 3x3 + x4 = α
Hence, the solution for Ax = 0 is

    [ x1 ]       [  1 ]
x = [ x2 ] = α   [  1 ]
    [ x3 ]       [  1 ]
    [ x4 ]       [ −2 ]
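The computed null-space vector can be verified directly (a numpy sketch):

```python
import numpy as np

A = np.array([[-1, 0, 3, 1],
              [2, 3, -3, 1],
              [2, -2, -2, -1]])

v = np.array([1, 1, 1, -2])   # the basis vector for N(A) found above
print(A @ v)                  # the zero vector, so v is indeed in N(A)
```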
Example: Null Space and Column Space
Let

A = [  1 −1 −2 ]          [  3 ]
    [ −1  2  3 ]  and b = [  2 ]
    [  2 −2 −2 ]          [ −2 ]
Determine whether b is in col(A), and find N(A).
Solution:
1. Find out whether

      [  1 ]      [ −1 ]      [ −2 ]   [  3 ]
   c1 [ −1 ] + c2 [  2 ] + c3 [  3 ] = [  2 ]
      [  2 ]      [ −2 ]      [ −2 ]   [ −2 ]

   has a solution. If yes, then b is in col(A).
2. Solve Ax = 0; the solution set is the null space of A.
[Yes; N(A) = {0}]
The notion of subspace is an algebraic generalisation of the geometric examples of lines and
planes through the origin.
The concept of a basis for a subspace is linked to direction vectors of such lines and planes.
The concept of basis gives us a precise definition of dimension.
DEFINITION:
A subspace of Rn is any collection S of vectors in Rn such that
1. The zero vector is in S
2. If u and v are in S, then u + v is in S (i.e., S is closed under vector addition)
3. If u is in S and c is a scalar, then cu is in S (i.e., S is closed under scalar
multiplication)
Note that properties (2) and (3) can equivalently be stated as:
S is closed under linear combinations,
i.e., If u1 , u2 , . . . , uk are in S and c1 , c2 , . . . , ck are scalars,
then c1 u1 + c2 u2 + · · · + ck uk is in S
Similarly, the plane that passes through the origin and two points with position vectors u1 and u2 in Rn is
P = {x|x = su1 + tu2 , s, t ∈ R}
It can be shown that P is closed under vector addition and scalar multiplication.
Every line and plane that does not pass through the origin in Rn is not a subspace of Rn.
Proof:
The zero vector is not in such a line or plane, hence it is not a subspace.
Solution: We need to show that (1) S spans R3 and (2) S is LI. Requirement (1) means

   [ 1 ]      [ 1 ]      [  0 ]   [ a ]
c1 [ 1 ] + c2 [ 1 ] + c3 [  1 ] = [ b ]   has a solution for every choice of a, b, c.
   [ 0 ]      [ 1 ]      [ −1 ]   [ c ]

Requirement (2) means

   [ 1 ]      [ 1 ]      [  0 ]   [ 0 ]
c1 [ 1 ] + c2 [ 1 ] + c3 [  1 ] = [ 0 ]   has only the trivial solution c1 = c2 = c3 = 0.
   [ 0 ]      [ 1 ]      [ −1 ]   [ 0 ]
Given a set S = {v1 , v2 , . . . , vn }, here are two “recipes” to find a basis for span(S):
Method 1:
1. Form a matrix A whose columns are the vectors
2. Reduce A to RE form
3. The vectors from S that correspond to the pivot columns of the RE matrix form a
basis for span(S).
Method 2:
1. Form a matrix B whose rows are the vectors
2. Reduce B to RE form
3. The non-zero rows of the RE matrix form a basis for span(S).
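Method 1 can be sketched with sympy's rref; the set S here is a small made-up example (its second vector is twice the first, so the set is dependent) chosen to show a non-trivial pivot pattern:

```python
from sympy import Matrix

# Hypothetical S = {[1,2,1], [2,4,2], [0,1,1]}: the 2nd vector is 2x the 1st
A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [1, 2, 1]])   # columns of A are the vectors of S

_, pivots = A.rref()
print(pivots)             # (0, 2): vectors 1 and 3 of S form a basis for span(S)
```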
Note that if a vector space V has a basis with n vectors, then every basis has n vectors. In
other words, the dimension is a property of V and does not depend on the choice of basis.
dim(Rn) = n, since the standard basis for Rn has n vectors, and we define dim({0}) to be 0.
Definition (Rank):
The rank of a matrix A is the common dimension of its row space and column space, and is denoted rank(A).
Definition (Nullity):
The nullity of a matrix A is the dimension of its null space, and is denoted nullity(A).
In addition, we have
1. rank(AB) ≤ rank(B)
2. rank(AB) = rank(B), if A−1 exists
3. rank(AB) = rank(A), if B −1 exists
4. rank(A + B) ≤ rank(A) + rank(B)
5. rank(A) ≤ min(m, n), where A is an m × n matrix
Theorem:
If A is an m × n matrix, then rank(A) + nullity(A) = n.
Proof:
Let R be the RRE form of A, and suppose rank(A) = r. Then the solution of Ax = 0 has r leading
variables and (n − r) free variables, i.e. dim(N(A)) = n − r. Thus we have

rank(A) + nullity(A) = r + (n − r) = n
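The theorem can be checked on the 3 × 4 matrix from the null-space example earlier (a sympy sketch):

```python
from sympy import Matrix

A = Matrix([[-1, 0, 3, 1],
            [2, 3, -3, 1],
            [2, -2, -2, -1]])   # the 3 x 4 matrix from the null-space example

r = A.rank()
nullity = len(A.nullspace())    # number of basis vectors of N(A)
print(r, nullity, A.cols)       # rank + nullity equals the number of columns
```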
Av = λv
Example:
Let A = [ 1  2 ]
        [ 0 −1 ].

Since A [ 1 ] = [ 1 ], the vector [ 1 ] is an eigenvector of A corresponding to the
        [ 0 ]   [ 0 ]             [ 0 ]

eigenvalue λ1 = 1. The other eigenvector of A is [  1 ], corresponding to the eigenvalue λ2 = −1.
                                                 [ −1 ]
Finding eigenvalues and eigenvectors of n × n matrix
By definition, if λ is an eigenvalue and v is the corresponding eigenvector of the matrix A, then
Av = λv ⇒ (λI − A)v = 0
Since v 6= 0, we must have
det(λI − A) = 0, which we can solve to find all the eigenvalues of A.
Once the eigenvalues are found, we then substitute the particular eigenvalue λi and solve the
following linear system for the corresponding eigenvector vi
(λi I − A)vi = 0
In other words, vi is in the null space of (λi I − A); this null space is sometimes referred to as
the eigenspace corresponding to λi.
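This procedure is what numpy.linalg.eig implements; a sketch using the 2 × 2 example from earlier:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, -1.0]])

lam, V = np.linalg.eig(A)   # eigenvalues and (column) eigenvectors
print(lam)                  # the eigenvalues 1 and -1

# Each column of V satisfies A v = lambda v
for i in range(2):
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i])
```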
Thus there are two eigenvectors corresponding to λ = 1:

A(t v11 + s v12) = (t v11 + s v12)

Setting s = 0, t = 1 gives Av11 = v11; setting s = 1, t = 0 gives Av12 = v12.
Proof:
Suppose A has n linearly independent eigenvectors v1, v2, . . . , vn corresponding to the eigenvalues
λ1, λ2, . . . , λn. Then

Avi = λi vi  ⇒  AP = PD

where P = [ v1 v2 · · · vn ] and D = [ λ1  0  · · ·  0 ]
                                     [  0 λ2  · · ·  0 ]
                                     [  ⋮   ⋮   ⋱    ⋮ ]
                                     [  0  0  · · · λn ]
d/dt x(t) = λ x(t)

⇒  ∫_{x(0)}^{x(t)} dx/x = λ ∫_{0}^{t} dt

⇒  ln( x(t) / x(0) ) = λt

⇒  x(t) = e^{λt} x(0)
Solution:
Write the system in the form ẏ = Ay:

[ ẏ1 ]   [ −1 0 ] [ y1 ]
[ ẏ2 ] = [  3 2 ] [ y2 ]
Hence

ẏ = Ay = P D P^−1 y  ⇒  P^−1 ẏ = D (P^−1 y)  ⇒  ẇ = Dw

where ẇ = P^−1 ẏ and w = P^−1 y.
The matrix A = [ −1 0 ] can be diagonalised with P = [  1 0 ] and D = [ −1 0 ].
               [  3 2 ]                              [ −1 1 ]         [  0 2 ]

Hence

y(t) = P w(t) = P e^{Dt} w(0) = P e^{Dt} P^−1 y(0) = [ e^{−t}             0      ] y(0)
                                                     [ −e^{−t} + e^{2t}  e^{2t}  ]
x(t) = P e^{Dt} P^−1 x(0) = [ v1 v2 ] [ e^{λ1 t}    0     ] [ c1 ]
                                      [    0     e^{λ2 t} ] [ c2 ]

     = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2

where [ c1 ] = P^−1 x(0).
      [ c2 ]
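The closed-form y(t) from the worked example can be cross-checked numerically: P e^{Dt} P^−1 is computed from the diagonalisation and compared with the matrix derived by hand (a numpy sketch; t = 0.7 is an arbitrary sample time):

```python
import numpy as np

P = np.array([[1.0, 0.0],
              [-1.0, 1.0]])
lam = np.array([-1.0, 2.0])   # eigenvalues of A = [[-1, 0], [3, 2]]

t = 0.7                       # arbitrary sample time
expDt = np.diag(np.exp(lam * t))
M = P @ expDt @ np.linalg.inv(P)   # the matrix mapping y(0) to y(t)

# Hand-derived closed form from the notes
M_hand = np.array([[np.exp(-t), 0.0],
                   [-np.exp(-t) + np.exp(2*t), np.exp(2*t)]])
print(np.allclose(M, M_hand))   # True
```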