
Chapter 3

Vector Spaces
Chapter 3 Vector Spaces

Section 3.1
Euclidean n-Space
Euclid of Alexandria (about 320-260 B.C.)

Euclid of Alexandria is the most important mathematician of antiquity, best known for his treatise on mathematics, The Elements, a textbook on plane geometry that summarized the works of the Golden Age of Greek Mathematics. However, little is known of Euclid's life except that he taught at Alexandria in Egypt.
Geometric vectors (Discussion 3.1.1)

A (nonzero) vector can be represented geometrically by a directed line segment or an arrow.
The direction of the arrow specifies the direction of the vector and the length of the arrow describes its magnitude.
The zero vector, denoted by 0, is represented by a point, or a degenerate vector with zero length and no direction.
Two vectors are regarded as equal if they have the same length and direction.

For example, for the four arrows u, v, w, x pictured in the slides: u = v, u ≠ w and u ≠ x.
Geometric vectors (Discussion 3.1.1)

(a) The addition u + v of two vectors u and v:
Place the initial point of v at the end point of u; then u + v is the arrow from the initial point of u to the end point of v.
Note that u + v = v + u.
Geometric vectors (Discussion 3.1.1)

(b) The negative −u of a vector u:
The vector −u has the same length as u but the reverse direction.
Geometric vectors (Discussion 3.1.1)

(c) The difference u − v of two vectors u and v:
Note that u − v = u + (−v).
Geometric vectors (Discussion 3.1.1)

(d) The scalar multiple cu of a vector u, where c is a real number (for example, (1/2)u, 2u and (−1.5)u):
If c is positive, the vector cu has the same direction as u and its length is c times the length of u.
If c is negative, the vector cu has the reverse direction of u and its length is |c| times the length of u.
0u = 0 is the zero vector.
(−1)u = −u is the negative of u.
Coordinate systems: xy-plane (Discussion 3.1.2.1)

Suppose we position a vector u in the xy-plane such that its initial point is at the origin (0, 0).
The coordinates (u1, u2) of the end point of u are called the components of u and we write u = (u1, u2).
Algebraically, a vector in the xy-plane can be identified as a point on the plane.
Coordinate systems: xy-plane (Discussion 3.1.2.1)

(a) The addition of two vectors:
Let u = (u1, u2) and v = (v1, v2).
So u + v = (u1 + v1, u2 + v2).
Coordinate systems: xy-plane (Discussion 3.1.2.1)

(b) The scalar multiple of a vector:
Let u = (u1, u2) and c a real constant.
So cu = (cu1, cu2).
Coordinate systems: xyz-space (Discussion 3.1.2.2)

Suppose we position a vector u in the xyz-space such that its initial point is at the origin (0, 0, 0).
The coordinates (u1, u2, u3) of the end point of u are called the components of u and we write u = (u1, u2, u3).
Algebraically, a vector in the xyz-space can be identified as a point in the space.
Coordinate systems: xyz-space (Discussion 3.1.2.2)

(a) The addition of two vectors:
Let u = (u1, u2, u3) and v = (v1, v2, v3).
Then u + v = (u1 + v1, u2 + v2, u3 + v3).

(b) The scalar multiple of a vector:
Let u = (u1, u2, u3) and c a real constant.
Then cu = (cu1, cu2, cu3).
n-vectors (Definition 3.1.3)

An n-vector or ordered n-tuple of real numbers has the form
(u1, u2, ..., ui, ..., un)
where u1, u2, ..., un are real numbers.
The number ui in the i th position of an n-vector is called the i th component or the i th coordinate of the n-vector.
Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be two n-vectors.
1. u = v if and only if ui = vi for all i = 1, 2, ..., n.
2. The addition u + v of u and v is defined by
u + v = (u1 + v1, u2 + v2, ..., un + vn).
n-vectors (Definition 3.1.3)

3. For a real number c, the scalar multiple cu of u is defined by
cu = (cu1, cu2, ..., cun).
4. The n-vector (0, 0, ..., 0) is called the zero vector and is denoted by 0.
5. The negative of u is defined by (−1)u and is denoted by −u, i.e.
−u = (−u1, −u2, ..., −un).
6. The subtraction u − v of u and v is defined by u + (−v), i.e.
u − v = (u1 − v1, u2 − v2, ..., un − vn).
Examples (Example 3.1.4)

Let u = (1, 2, 3, 4) and v = (−1, −2, −3, 0). Then
u + v = (1 + (−1), 2 + (−2), 3 + (−3), 4 + 0) = (0, 0, 0, 4),
u − v = (1 − (−1), 2 − (−2), 3 − (−3), 4 − 0) = (2, 4, 6, 4),
3u = (3 · 1, 3 · 2, 3 · 3, 3 · 4) = (3, 6, 9, 12),
3u + 4v = (3 · 1 + 4 · (−1), 3 · 2 + 4 · (−2), 3 · 3 + 4 · (−3), 3 · 4 + 4 · 0) = (−1, −2, −3, 12).
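These componentwise operations are exactly what NumPy arrays provide. A quick sketch of ours (not part of the notes) reproduces Example 3.1.4:

    import numpy as np

    u = np.array([1, 2, 3, 4])
    v = np.array([-1, -2, -3, 0])

    print(u + v)          # [0 0 0 4]
    print(u - v)          # [2 4 6 4]
    print(3 * u)          # [ 3  6  9 12]
    print(3 * u + 4 * v)  # [-1 -2 -3 12]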
Row and column vectors (Notation 3.1.5)

The features in the definition of n-vectors are similar to those of matrices.
We can identify an n-vector (u1, u2, ..., un) with a 1 × n matrix
( u1 u2 ··· un ) (a row vector)
or with an n × 1 matrix
( u1 u2 ··· un )ᵀ (a column vector).
Warning: Do not use two different sets of notation for n-vectors within the same context.
Some basic properties (Theorem 3.1.6)

Let u, v, w be n-vectors and c, d real numbers.


1. u + v = v + u.
2. u + (v + w) = (u + v) + w.
3. u + 0 = u = 0 + u.
4. u + (−u) = 0.
5. c(du) = (cd )u.
6. c(u + v) = cu + cv.
7. (c + d )u = cu + du.
8. 1u = u.
Euclidean n-space (Definition 3.1.7)

The set of all n-vectors of real numbers is called the Euclidean n-space or simply n-space.
We use ℝ to denote the set of all real numbers and ℝⁿ to denote the Euclidean n-space.
u ∈ ℝⁿ if and only if u = (u1, u2, ..., un) for some u1, u2, ..., un ∈ ℝ.
In set notation, we write
ℝⁿ = { (u1, u2, ..., un) | u1, u2, ..., un ∈ ℝ }.
Subsets (Example 3.1.8.1)

Let B = { (u1, u2, u3, u4) ∈ ℝ⁴ | u1 = 0 and u2 = u4 },
or simply B = { (u1, u2, u3, u4) | u1 = 0 and u2 = u4 }.
It means that B is a subset of ℝ⁴ such that
(u1, u2, u3, u4) ∈ B if and only if u1 = 0 and u2 = u4.
For example,
(0, 0, 0, 0), (0, 0, 10, 0), (0, 1, 3, 1), (0, π, π, π) ∈ B
and (1, 2, 3, 4), (0, 10, 0, 0), (0, 1, 3, 2), (π, π, π, π) ∉ B.
Explicitly, we can write B = { (0, a, b, a) | a, b ∈ ℝ }.
Solution sets (Example 3.1.8.2)

If a system of linear equations has n variables, then its solution set is a subset (possibly empty) of ℝⁿ.
For example, a general solution of the linear system
x + y + z = 0
x − y + 2z = 1
can be expressed in vector form:
(x, y, z) = ( 1/2 − (3/2)t, −1/2 + (1/2)t, t ), where t ∈ ℝ.
The solution set can be written as
{ (x, y, z) | x + y + z = 0 and x − y + 2z = 1 } (implicit)
or { ( 1/2 − (3/2)t, −1/2 + (1/2)t, t ) | t ∈ ℝ } (explicit).
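As a quick check, SymPy reproduces this explicit description; a sketch of ours (the solver uses z itself as the free parameter):

    from sympy import symbols, linsolve

    x, y, z = symbols('x y z')
    # x + y + z = 0 and x - y + 2z = 1, written as expressions equal to zero
    sol = linsolve([x + y + z, x - y + 2*z - 1], [x, y, z])
    print(sol)   # {(1/2 - 3*z/2, z/2 - 1/2, z)} (up to term ordering)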
Lines in ℝ² (Example 3.1.8.3 (a))

A line in ℝ² can be expressed implicitly in set notation by
{ (x, y) | ax + by = c }
where a, b, c are real constants and a, b are not both zero.
Explicitly, the line can also be expressed as
{ ( (c − bt)/a, t ) | t ∈ ℝ } if a ≠ 0,
or { ( t, (c − at)/b ) | t ∈ ℝ } if b ≠ 0.
Planes in ℝ³ (Example 3.1.8.3 (b))

A plane in ℝ³ can be expressed implicitly in set notation by
{ (x, y, z) | ax + by + cz = d }
where a, b, c, d are real constants and a, b, c are not all zero.
Explicitly, the plane can also be expressed as
{ ( (d − bs − ct)/a, s, t ) | s, t ∈ ℝ } if a ≠ 0;
{ ( s, (d − as − ct)/b, t ) | s, t ∈ ℝ } if b ≠ 0;
or { ( s, t, (d − as − bt)/c ) | s, t ∈ ℝ } if c ≠ 0.
Lines in ℝ³ (Example 3.1.8.3 (c))

A line in ℝ³ is usually represented explicitly in set notation by
{ (a0 + at, b0 + bt, c0 + ct) | t ∈ ℝ }
= { (a0, b0, c0) + t(a, b, c) | t ∈ ℝ }.
Here, (a0, b0, c0) is a point on the line and (a, b, c) is the direction of the line.
Lines in ℝ³ (Example 3.1.8.3 (c))

A line in ℝ³ cannot be represented by a single equation as in the case of ℝ².
Instead, it can be regarded as the intersection of two non-parallel planes and hence written implicitly as
{ (x, y, z) | a1x + b1y + c1z = d1 and a2x + b2y + c2z = d2 }
for some suitable choice of constants a1, b1, c1, d1 and a2, b2, c2, d2.
Finite sets (Notation 3.1.9 & Example 3.1.10)

Let S be a finite set.
We use |S| to denote the number of elements contained in S.

For example, let S1 = { 1, 2, 3, 4 }, S2 = { (1, 2, 3, 4) } and S3 = { (1, 2, 3), (2, 3, 4) }.
Then |S1| = 4, |S2| = 1 and |S3| = 2.
Chapter 3 Vector Spaces

Section 3.2
Linear Combinations and
Linear Spans
Linear combinations (Definition 3.2.1)

Let u1, u2, ..., uk be vectors in ℝⁿ.

For any real numbers c1, c2, ..., ck, the vector
c1u1 + c2u2 + ··· + ckuk
is called a linear combination of u1, u2, ..., uk.
Examples (Example 3.2.2.1 (a))

Let u1 = (2, 1, 3), u2 = (1, −1, 2) and u3 = (3, 0, 5).
Is v = (3, 3, 4) a linear combination of u1, u2 and u3?
Write v = au1 + bu2 + cu3, i.e.
(3, 3, 4) = a(2, 1, 3) + b(1, −1, 2) + c(3, 0, 5)
= (2a + b + 3c, a − b, 3a + 2b + 5c).
So 2a + b + 3c = 3
a − b = 3
3a + 2b + 5c = 4.
Examples (Example 3.2.2.1 (a))

Applying Gaussian elimination to the augmented matrix:

    ( 2  1  3 | 3 )      ( 2   1     3    | 3   )
    ( 1 −1  0 | 3 )  →   ( 0  −3/2  −3/2  | 3/2 )
    ( 3  2  5 | 4 )      ( 0   0     0    | 0   )

By using back-substitution, we obtain a general solution
(a, b, c) = (2 − t, −1 − t, t)
where t is an arbitrary parameter.
For example, we have particular solutions
(2, −1, 0), (1, −2, 1), etc.
So we can write v as linear combinations
v = 2u1 − u2 + 0u3, v = u1 − 2u2 + u3, etc.
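The whole computation is a single call in SymPy; the following sketch (ours) sets up the system from Example 3.2.2.1 (a):

    from sympy import Matrix, symbols, linsolve

    a, b, c = symbols('a b c')
    u1, u2, u3 = Matrix([2, 1, 3]), Matrix([1, -1, 2]), Matrix([3, 0, 5])
    v = Matrix([3, 3, 4])

    # Solve a*u1 + b*u2 + c*u3 = v; the coefficient matrix has u1, u2, u3 as columns.
    A = u1.row_join(u2).row_join(u3)
    print(linsolve((A, v), [a, b, c]))   # {(2 - c, -1 - c, c)}: a one-parameter family

Replacing v by w = (1, 2, 4) makes the same call return the empty set, matching Example 3.2.2.1 (b).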
Examples (Example 3.2.2.1 (b))

Let u1 = (2, 1, 3), u2 = (1, −1, 2) and u3 = (3, 0, 5).
Is w = (1, 2, 4) a linear combination of u1, u2 and u3?
Write w = au1 + bu2 + cu3, i.e.
(1, 2, 4) = a(2, 1, 3) + b(1, −1, 2) + c(3, 0, 5)
= (2a + b + 3c, a − b, 3a + 2b + 5c).
So 2a + b + 3c = 1
a − b = 2
3a + 2b + 5c = 4.
Examples (Example 3.2.2.1 (b))

    ( 2  1  3 | 1 )      ( 2   1     3    | 1   )
    ( 1 −1  0 | 2 )  →   ( 0  −3/2  −3/2  | 3/2 )
    ( 3  2  5 | 4 )      ( 0   0     0    | 3   )
(Gaussian elimination)

The system is inconsistent.
So w is not a linear combination of u1, u2 and u3.
Examples (Example 3.2.2.2)

Let e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1).
For any (x, y, z) ∈ ℝ³,
(x, y, z) = xe1 + ye2 + ze3.
So every vector in ℝ³ is a linear combination of e1, e2 and e3.
Geometrically, e1, e2 and e3 are called the directional vectors of the x-axis, y-axis and z-axis, respectively, of ℝ³.
Linear spans (Definition 3.2.3 & Example 3.2.4.1)

Let S = { u1, u2, ..., uk } be a set of vectors in ℝⁿ.
The set of all linear combinations of u1, u2, ..., uk,
{ c1u1 + c2u2 + ··· + ckuk | c1, c2, ..., ck ∈ ℝ },
is called the linear span of S (or the linear span of u1, u2, ..., uk) and is denoted by span(S) (or span{ u1, u2, ..., uk }).

Let u1 = (2, 1, 3), u2 = (1, −1, 2) and u3 = (3, 0, 5). Then (by Example 3.2.2.1)
v = (3, 3, 4) ∈ span{ u1, u2, u3 }
and w = (1, 2, 4) ∉ span{ u1, u2, u3 }.
Examples (Example 3.2.4.2-3)

Let S = { (1, 0, 0, −1), (0, 1, 1, 0) } ⊆ ℝ⁴.
Every element in span(S) is of the form
a(1, 0, 0, −1) + b(0, 1, 1, 0) = (a, b, b, −a)
where a and b are real numbers.
So span(S) = { (a, b, b, −a) | a, b ∈ ℝ }.

Let V = { (2a + b, a, 3b − a) | a, b ∈ ℝ } ⊆ ℝ³.
For any a, b ∈ ℝ,
(2a + b, a, 3b − a) = a(2, 1, −1) + b(1, 0, 3).
So V = span{ (2, 1, −1), (1, 0, 3) }.
Examples (Example 3.2.4.4)

Show that span{ (1, 0, 1), (1, 1, 0), (0, 1, 1) } = ℝ³.
Solution: We need to verify that for any (x, y, z) ∈ ℝ³, there exist real numbers a, b, c such that
a(1, 0, 1) + b(1, 1, 0) + c(0, 1, 1) = (x, y, z).
This is equivalent to showing that the linear system
a + b = x
b + c = y
a + c = z
(where a, b, c are variables) is consistent for all x, y, z ∈ ℝ.
Examples (Example 3.2.4.4)

    ( 1 1 0 | x )      ( 1 1 0 | x         )
    ( 0 1 1 | y )  →   ( 0 1 1 | y         )
    ( 1 0 1 | z )      ( 0 0 2 | z − x + y )
(Gaussian elimination)

The system is consistent regardless of the values of x, y, z.
So we have shown that
span{ (1, 0, 1), (1, 1, 0), (0, 1, 1) } = ℝ³.
Examples (Example 3.2.4.5)

Show that
span{ (1, 1, 1), (1, 2, 0), (2, 1, 3), (2, 3, 1) } ≠ ℝ³.

Solution: For any (x, y, z) ∈ ℝ³, we solve the vector equation
a(1, 1, 1) + b(1, 2, 0) + c(2, 1, 3) + d(2, 3, 1) = (x, y, z),
where a, b, c, d are variables.
The linear system is
a + b + 2c + 2d = x
a + 2b + c + 3d = y
a + 3c + d = z.
Examples (Example 3.2.4.5)

    ( 1 1 2 2 | x )      ( 1 1  2 2 | x          )
    ( 1 2 1 3 | y )  →   ( 0 1 −1 1 | y − x      )
    ( 1 0 3 1 | z )      ( 0 0  0 0 | z + y − 2x )
(Gaussian elimination)

The system is inconsistent if z + y − 2x ≠ 0.
For example, if (x, y, z) = (1, 0, 0), then z + y − 2x ≠ 0,
i.e. a(1, 1, 1) + b(1, 2, 0) + c(2, 1, 3) + d(2, 3, 1) = (1, 0, 0)
has no solution and hence
(1, 0, 0) ∉ span{ (1, 1, 1), (1, 2, 0), (2, 1, 3), (2, 3, 1) }.
So span{ (1, 1, 1), (1, 2, 0), (2, 1, 3), (2, 3, 1) } ≠ ℝ³.
When span(S) = ℝⁿ (Discussion 3.2.5)

Let S = { u1, u2, ..., uk } ⊆ ℝⁿ
where ui = (ai1, ai2, ..., ain) for i = 1, 2, ..., k.
For any v = (v1, v2, ..., vn) ∈ ℝⁿ, v ∈ span(S) if and only if the vector equation
c1u1 + c2u2 + ··· + ckuk = v
has a solution for c1, c2, ..., ck,
i.e. the following linear system is consistent:
a11c1 + a21c2 + ··· + ak1ck = v1
a12c1 + a22c2 + ··· + ak2ck = v2
⁞
a1nc1 + a2nc2 + ··· + aknck = vn.
When span(S) = ℝⁿ (Discussion 3.2.5)

Let A be the matrix whose columns are u1, u2, ..., uk:

         ( a11 a21 ··· ak1 )
         ( a12 a22 ··· ak2 )
    A =  (  ⁞   ⁞        ⁞ )
         ( a1n a2n ··· akn )

1. If a row-echelon form of A has no zero row, then the linear system is always consistent regardless of the values of v1, v2, ..., vn, and hence span(S) = ℝⁿ.
2. If a row-echelon form of A has at least one zero row, then the linear system is not always consistent and hence span(S) ≠ ℝⁿ.
(See Question 43 of Exercise 2 of the textbook.)
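In practice, "no zero row in a row-echelon form" is the same as saying the matrix has rank n. A small NumPy check of ours implements Discussion 3.2.5:

    import numpy as np

    def spans_Rn(vectors):
        """True if the given vectors span R^n, where n is their length."""
        A = np.column_stack(vectors)
        # No zero row in a row-echelon form of A  <=>  rank(A) = n.
        return np.linalg.matrix_rank(A) == A.shape[0]

    print(spans_Rn([(1, 0, 1), (1, 1, 0), (0, 1, 1)]))             # True
    print(spans_Rn([(1, 1, 1), (1, 2, 0), (2, 1, 3), (2, 3, 1)]))  # False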
Examples (Example 3.2.6)

    ( 1 1 0 )      ( 1 1 0 )
    ( 0 1 1 )  →   ( 0 1 1 )
    ( 1 0 1 )      ( 0 0 2 )
(Gaussian elimination)
So span{ (1, 0, 1), (1, 1, 0), (0, 1, 1) } = ℝ³.

    ( 1 1 2 2 )      ( 1 1  2 2 )
    ( 1 2 1 3 )  →   ( 0 1 −1 1 )
    ( 1 0 3 1 )      ( 0 0  0 0 )
(Gaussian elimination)
So span{ (1, 1, 1), (1, 2, 0), (2, 1, 3), (2, 3, 1) } ≠ ℝ³.
When span(S) = ℝⁿ (Theorem 3.2.7 & Example 3.2.8)

Let S = { u1, u2, ..., uk } ⊆ ℝⁿ. If k < n, then span(S) ≠ ℝⁿ.

For the n × k matrix with columns u1, u2, ..., uk,
the number of nonzero rows in a row-echelon form
= the number of leading entries
= the number of pivot columns
≤ k < n,
so a row-echelon form always has at least one zero row.
In particular,
1. one vector cannot span ℝ²;
2. one vector or two vectors cannot span ℝ³.
Some basic results (Theorem 3.2.9)

Let S = { u1, u2, ..., uk } ⊆ ℝⁿ.
1. 0 ∈ span(S).
2. For any v1, v2, ..., vr ∈ span(S) and c1, c2, ..., cr ∈ ℝ,
c1v1 + c2v2 + ··· + crvr ∈ span(S).

Proof:
1. 0 = 0u1 + 0u2 + ··· + 0uk ∈ span(S).
2. Write v1 = a11u1 + a12u2 + ··· + a1kuk,
v2 = a21u1 + a22u2 + ··· + a2kuk,
⁞
vr = ar1u1 + ar2u2 + ··· + arkuk.
Some basic results (Theorem 3.2.9)

Then c1v1 + c2v2 + ··· + crvr
= c1(a11u1 + a12u2 + ··· + a1kuk)
+ c2(a21u1 + a22u2 + ··· + a2kuk)
+ ··· + cr(ar1u1 + ar2u2 + ··· + arkuk)
= (c1a11 + c2a21 + ··· + crar1)u1
+ (c1a12 + c2a22 + ··· + crar2)u2
+ ··· + (c1a1k + c2a2k + ··· + crark)uk,
which is a linear combination of u1, u2, ..., uk.
Hence c1v1 + c2v2 + ··· + crvr ∈ span(S).
When span(S1) ⊆ span(S2) (Theorem 3.2.10)

Let S1 = { u1, u2, ..., uk } and S2 = { v1, v2, ..., vm } be subsets of ℝⁿ.
Then span(S1) ⊆ span(S2) if and only if each ui is a linear combination of v1, v2, ..., vm.
(Please read our textbook for a proof of the result.)
Examples (Example 3.2.11.1)

Let u1 = (1, 0, 1), u2 = (1, 1, 2), u3 = (−1, 2, 1)
and v1 = (1, 2, 3), v2 = (2, −1, 1).
Show that span{ u1, u2, u3 } = span{ v1, v2 }.

Solution: (For two sets A and B, if we want to show that A = B, we need to show that A ⊆ B and B ⊆ A.)
To show span{ u1, u2, u3 } ⊆ span{ v1, v2 }:
It suffices to show that u1, u2, u3 are linear combinations of v1, v2. (See Theorem 3.2.10.)
Examples (Example 3.2.11.1)

u1 is a linear combination of v1, v2
⇔ u1 = av1 + bv2 for some real numbers a and b
⇔ (1, 0, 1) = a(1, 2, 3) + b(2, −1, 1) for some real numbers a and b
⇔ (1, 0, 1) = (a + 2b, 2a − b, 3a + b) for some real numbers a and b
⇔ the linear system
a + 2b = 1
2a − b = 0
3a + b = 1
is consistent.
Examples (Example 3.2.11.1)

Similarly,
u2 is a linear combination of v1, v2
⇔ the linear system
a + 2b = 1
2a − b = 1
3a + b = 2
is consistent;
and
u3 is a linear combination of v1, v2
⇔ the linear system
a + 2b = −1
2a − b = 2
3a + b = 1
is consistent.
Examples (Example 3.2.11.1)

a + 2b = 1      a + 2b = 1      a + 2b = −1
2a − b = 0      2a − b = 1      2a − b = 2
3a + b = 1      3a + b = 2      3a + b = 1
The row operations required to solve the three systems are the same, so we can work them out together:

    ( 1  2 | 1 1 −1 )      ( 1  2 |  1  1 −1 )
    ( 2 −1 | 0 1  2 )  →   ( 0 −5 | −2 −1  4 )
    ( 3  1 | 1 2  1 )      ( 0  0 |  0  0  0 )
(Gaussian elimination)

The three systems are consistent, i.e. all ui are linear combinations of v1 and v2.
So span{ u1, u2, u3 } ⊆ span{ v1, v2 }.
Examples (Example 3.2.11.1)

To show span{ v1, v2 } ⊆ span{ u1, u2, u3 }:
To check that v1, v2 are linear combinations of u1, u2, u3, we need to show that the following two linear systems are consistent:
a + b − c = 1      a + b − c = 2
b + 2c = 2         b + 2c = −1
a + 2b + c = 3     a + 2b + c = 1

    ( 1 1 −1 | 1  2 )      ( 1 1 −1 | 1  2 )
    ( 0 1  2 | 2 −1 )  →   ( 0 1  2 | 2 −1 )
    ( 1 2  1 | 3  1 )      ( 0 0  0 | 0  0 )
(Gaussian elimination)
Examples (Example 3.2.11.1)

The two systems are consistent, i.e. all vi are linear combinations of u1, u2, u3.
So span{ v1, v2 } ⊆ span{ u1, u2, u3 }.

Since we have shown span{ u1, u2, u3 } ⊆ span{ v1, v2 } and span{ v1, v2 } ⊆ span{ u1, u2, u3 },
span{ u1, u2, u3 } = span{ v1, v2 }.
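A rank argument automates this: span of one list of vectors is contained in span of another exactly when appending the first list to the second does not increase the rank. A NumPy sketch of ours:

    import numpy as np

    def span_contained(U, V):
        """True if span of the vectors in U is contained in span of the vectors in V."""
        Vm = np.column_stack(V)
        both = np.column_stack([Vm, np.column_stack(U)])
        # Appending vectors already in span(V) leaves the rank unchanged.
        return np.linalg.matrix_rank(both) == np.linalg.matrix_rank(Vm)

    U = [(1, 0, 1), (1, 1, 2), (-1, 2, 1)]
    V = [(1, 2, 3), (2, -1, 1)]
    print(span_contained(U, V) and span_contained(V, U))   # True: the spans are equal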
Examples (Example 3.2.11.2)

Let u1 = (1, 0, 0, 1), u2 = (0, 1, −1, 2), u3 = (2, 1, −1, 4),
v1 = (1, 1, 1, 1), v2 = (−1, 1, −1, 1), v3 = (−1, 1, 1, −1).
To check whether span{ u1, u2, u3 } ⊆ span{ v1, v2, v3 }:

    ( 1 −1 −1 | 1  0  2 )      ( 1 −1 −1 |  1  0  2 )
    ( 1  1  1 | 0  1  1 )  →   ( 0  2  2 | −1  1 −1 )
    ( 1 −1  1 | 0 −1 −1 )      ( 0  0  2 | −1 −1 −3 )
    ( 1  1 −1 | 1  2  4 )      ( 0  0  0 |  0  0  0 )
(Gaussian elimination)

All three systems are consistent, i.e. all ui are linear combinations of v1, v2, v3.
So span{ u1, u2, u3 } ⊆ span{ v1, v2, v3 }.
Examples (Example 3.2.11.2)

u1 = (1, 0, 0, 1), u2 = (0, 1, −1, 2), u3 = (2, 1, −1, 4),
v1 = (1, 1, 1, 1), v2 = (−1, 1, −1, 1), v3 = (−1, 1, 1, −1).
To check whether span{ v1, v2, v3 } ⊆ span{ u1, u2, u3 }:

    ( 1  0  2 | 1 −1 −1 )      ( 1 0 2 | 1 −1 −1 )
    ( 0  1  1 | 1  1  1 )  →   ( 0 1 1 | 1  1  1 )
    ( 0 −1 −1 | 1 −1  1 )      ( 0 0 0 | 2  0  2 )
    ( 1  2  4 | 1  1 −1 )      ( 0 0 0 | 0  0  0 )
(Gaussian elimination)

Since not all systems are consistent, some vi are not linear combinations of u1, u2, u3.
So span{ v1, v2, v3 } ⊄ span{ u1, u2, u3 }.
Redundant vectors (Theorem 3.2.12 & Example 3.2.13)

Let u1, u2, ..., uk ∈ ℝⁿ.
If uk is a linear combination of u1, u2, ..., uk−1, then
span{ u1, u2, ..., uk−1 } = span{ u1, u2, ..., uk−1, uk }.
In this case uk is a redundant vector.
(Please read our textbook for a proof of the result.)

For example, let u1 = (1, 1, 0, 2), u2 = (1, 0, 0, 1), u3 = (0, 1, 0, 1).
It is easy to check that u3 = u1 − u2.
So span{ u1, u2, u3 } = span{ u1, u2 }.
Geometrical interpretation (Discussion 3.2.14.1)

Let u be a nonzero vector in ℝ² or ℝ³.
Then span{ u } = { cu | c ∈ ℝ } is a line through the origin.
In ℝ², if u = (u1, u2), then
span{ u } = { (cu1, cu2) | c ∈ ℝ } = { (x, y) | u2x − u1y = 0 }.
In ℝ³, if u = (u1, u2, u3), then
span{ u } = { (cu1, cu2, cu3) | c ∈ ℝ }.
Geometrical interpretation (Discussion 3.2.14.2)

Let u and v be nonzero vectors in ℝ² or ℝ³.
If u and v are not parallel, then
span{ u, v } = { su + tv | s, t ∈ ℝ }
is a plane containing the origin.
In ℝ², span{ u, v } = ℝ².
Geometrical interpretation (Discussion 3.2.14.2)

In ℝ³, if u = (u1, u2, u3) and v = (v1, v2, v3), then
span{ u, v } = { (su1 + tv1, su2 + tv2, su3 + tv3) | s, t ∈ ℝ }
= { (x, y, z) | ax + by + cz = 0 }
where (a, b, c) is (any) one non-trivial solution of the linear system
u1a + u2b + u3c = 0
v1a + v2b + v3c = 0.
See Example 5.2.11 for the geometrical interpretation of the vector (a, b, c).
Lines in ℝ² and ℝ³ (Discussion 3.2.15.1)

Let L be a line in ℝ² or ℝ³.
Pick a point x on L (where x is regarded as a vector joining the origin to the point) and a nonzero vector u such that span{ u } is a line L0 through the origin and parallel to L.
Explicitly,
L = { x + w | w ∈ L0 }
= { x + w | w ∈ span{ u } }
= { x + tu | t ∈ ℝ }.
Planes in ℝ³ (Discussion 3.2.15.2)

Let P be a plane in ℝ³.
Pick a point x on P (where x is regarded as a vector joining the origin to the point) and two nonzero vectors u and v such that span{ u, v } is a plane P0 containing the origin and parallel to P.
Explicitly,
P = { x + w | w ∈ P0 }
= { x + w | w ∈ span{ u, v } }
= { x + su + tv | s, t ∈ ℝ }.
Lines and planes in ℝⁿ (Discussion 3.2.15)

We can generalize the idea of lines and planes to ℝⁿ:
Take x, u ∈ ℝⁿ where u is a nonzero vector.
The set L = { x + w | w ∈ span{ u } } is called a line in ℝⁿ.

Take x, u, v ∈ ℝⁿ where u, v are nonzero vectors and u is not a multiple of v.
The set P = { x + w | w ∈ span{ u, v } } is called a plane in ℝⁿ.
Lines and planes in ℝⁿ (Discussion 3.2.15)

Take x, u1, u2, ..., ur ∈ ℝⁿ.
The set Q = { x + w | w ∈ span{ u1, u2, ..., ur } } is called a k-plane in ℝⁿ,
where k is the "dimension" of span{ u1, u2, ..., ur }, which will be studied in Section 3.6.
Chapter 3 Vector Spaces

Section 3.3
Subspaces
Subspaces (Discussion 3.3.1)

At the end of the last section, we learnt that linear spans are used to define lines and planes in ℝⁿ.
Furthermore, a linear span of vectors in ℝⁿ always behaves like a smaller "space" sitting inside ℝⁿ.
For example, in ℝ³, span{ (1, 0, 0), (0, 1, 0) } is the xy-plane, which behaves like an ℝ² sitting inside ℝ³.
To make it easier for us to refer to such sets of vectors, we give a new name to describe them.
Subspaces (Definition 3.3.2)

Let V be a subset of ℝⁿ.
V is called a subspace of ℝⁿ if V = span(S) where S = { u1, u2, ..., uk } for some vectors u1, u2, ..., uk ∈ ℝⁿ.
More precisely, we say that
V is a subspace spanned by S;
or V is a subspace spanned by u1, u2, ..., uk.
We also say that
S spans V;
or u1, u2, ..., uk span V.
Trivial subspaces (Remark 3.3.3.1-2)

Let 0 be the zero vector of ℝⁿ.
The set { 0 } = span{ 0 } is a subspace of ℝⁿ and is known as the zero space.

Let e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, ..., 0, 1) be vectors in ℝⁿ.
Any vector u = (u1, u2, ..., un) ∈ ℝⁿ can be written as
u = u1e1 + u2e2 + ··· + unen.
Thus ℝⁿ = span{ e1, e2, ..., en } is a subspace of ℝⁿ.
Examples (Example 3.3.4.1-2)

Let V1 = { (a + 4b, a) | a, b ∈ ℝ } ⊆ ℝ².
For any a, b ∈ ℝ, (a + 4b, a) = a(1, 1) + b(4, 0).
So V1 = span{ (1, 1), (4, 0) } is a subspace of ℝ².

Let V2 = { (x, y, z) | x + y − z = 0 } ⊆ ℝ³.
The equation x + y − z = 0 has a general solution
(x, y, z) = (−s + t, s, t) = s(−1, 1, 0) + t(1, 0, 1)
where s and t are arbitrary parameters.
So V2 = span{ (−1, 1, 0), (1, 0, 1) } is a subspace of ℝ³.
How to determine whether a subset is a subspace?

By Definition 3.2.3, if S = { u1, u2, ..., uk }, then span(S) is the set containing all the vectors written in the form
c1u1 + c2u2 + ··· + ckuk.
If all the vectors in a subset V of ℝⁿ can be written as
c1u1 + c2u2 + ··· + ckuk,
where c1, c2, ..., ck are arbitrary parameters and u1, u2, ..., uk are constant vectors (their coordinates do not involve the arbitrary parameters), then V is a subspace of ℝⁿ.
(This is what we have done in the previous slide.)
How to determine whether a subset is a subspace?

By the definition, to show that a subset W is not a subspace, one needs to show that not all vectors in W can be written as c1u1 + c2u2 + ··· + ckuk for some constant vectors u1, u2, ..., uk.
But it is difficult to show directly that something cannot be expressed in a certain form.
Instead, we try to show that W does not satisfy some known properties of subspaces.
A subspace version of Theorem 3.2.9

We replace span(S) in Theorem 3.2.9 by a subspace V:
Let V be a subspace of ℝⁿ.
1. 0 ∈ V.
2. For any v1, v2, ..., vr ∈ V and c1, c2, ..., cr ∈ ℝ,
c1v1 + c2v2 + ··· + crvr ∈ V.

For example, if a subset W does not contain the zero vector, then W is not a subspace;
or if there exist v ∈ W and c ∈ ℝ such that cv ∉ W, then W is not a subspace;
or if there exist u, v ∈ W such that u + v ∉ W, then W is not a subspace.
Examples (Example 3.3.4.3-4)

Let V3 = { (1, a) | a ∈ ℝ } ⊆ ℝ².
As (0, 0) ≠ (1, a) for any a ∈ ℝ, the zero vector is not contained in V3.
So V3 is not a subspace of ℝ².

Let V4 = { (x, y, z) | x² ≤ y² ≤ z² } ⊆ ℝ³.
Observe that (1, 1, 2), (1, 1, −2) ∈ V4.
Note that (1, 1, 2) + (1, 1, −2) = (2, 2, 0), but (x, y, z) = (2, 2, 0) does not satisfy x² ≤ y² ≤ z².
This means (1, 1, 2) + (1, 1, −2) ∉ V4.
So V4 is not a subspace of ℝ³.
Geometrical interpretation (Remark 3.3.5.1)

The following are all the subspaces of ℝ²:
(a) the zero space { (0, 0) };
(b) lines through the origin;
(c) ℝ².

The following are all the subspaces of ℝ³:
(a) the zero space { (0, 0, 0) };
(b) lines through the origin;
(c) planes containing the origin;
(d) ℝ³.
Solution spaces (Theorem 3.3.6)

The solution set of a homogeneous system of linear equations in n variables is a subspace of ℝⁿ.

Proof: If the homogeneous system has only the trivial solution, then the solution set is { 0 }, which is the zero space.
Suppose the homogeneous system has infinitely many solutions.
Let x1, x2, ..., xn be the variables of the system.
Solution spaces (Theorem 3.3.6)

By solving the system using Gauss-Jordan elimination, a general solution can be expressed in the form
x1 = r11t1 + r12t2 + ··· + r1ktk
x2 = r21t1 + r22t2 + ··· + r2ktk
⁞
xn = rn1t1 + rn2t2 + ··· + rnktk
for some arbitrary parameters t1, t2, ..., tk, where r11, r12, ..., rnk are real numbers.
Solution spaces (Theorem 3.3.6)

We can rewrite a general solution as
(x1, x2, ..., xn)ᵀ = t1(r11, r21, ..., rn1)ᵀ + t2(r12, r22, ..., rn2)ᵀ + ··· + tk(r1k, r2k, ..., rnk)ᵀ.

So the solution set is
span{ (r11, r21, ..., rn1), (r12, r22, ..., rn2), ..., (r1k, r2k, ..., rnk) }
and hence is a subspace of ℝⁿ.
Examples (Example 3.3.7.1)

The linear system
x − 2y + 3z = 0
2x − 4y + 6z = 0
3x − 6y + 9z = 0
has a general solution
(x, y, z) = (2s − 3t, s, t) = s(2, 1, 0) + t(−3, 0, 1)
where s, t are arbitrary parameters.
The solution set is { (2s − 3t, s, t) | s, t ∈ ℝ } = span{ (2, 1, 0), (−3, 0, 1) }.
It is a plane in ℝ³ containing the origin.
Examples (Example 3.3.7.2)

The linear system
x − 2y + 3z = 0
−3x + 7y − 8z = 0
−2x + 4y − 6z = 0
has a general solution
(x, y, z) = (−5t, −t, t) = t(−5, −1, 1)
where t is an arbitrary parameter.
The solution set is { (−5t, −t, t) | t ∈ ℝ } = span{ (−5, −1, 1) }.
It is a line in ℝ³ through the origin.
Examples (Example 3.3.7.3)

The linear system
x − 2y + 3z = 0
−3x + 7y − 8z = 0
4x + y + 2z = 0
has only the trivial solution (x, y, z) = (0, 0, 0).
The solution set is { (0, 0, 0) }, the zero space.
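SymPy's nullspace method returns exactly such a basis of the solution space; a small sketch of ours for the three systems above:

    from sympy import Matrix

    A1 = Matrix([[1, -2, 3], [2, -4, 6], [3, -6, 9]])
    A2 = Matrix([[1, -2, 3], [-3, 7, -8], [-2, 4, -6]])
    A3 = Matrix([[1, -2, 3], [-3, 7, -8], [4, 1, 2]])

    # Each call returns a list of basis vectors of the solution space of Ax = 0.
    print(A1.nullspace())   # two vectors: a plane through the origin
    print(A2.nullspace())   # one vector:  a line through the origin
    print(A3.nullspace())   # []:          the zero space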
An alternative definition (Remark 3.3.8)

Let V be a non-empty subset of ℝⁿ.
V is called a subspace of ℝⁿ if and only if
for all u, v ∈ V and c, d ∈ ℝ, cu + dv ∈ V.
(This is the definition of subspaces in abstract linear algebra.)

The two definitions of subspaces are the same. (See Question 3.31.)

This definition sometimes gives us a neater way to show that a set of vectors is a subspace.
For example, we give another proof of Theorem 3.3.6, i.e. that the solution set of a homogeneous system of linear equations in n variables is a subspace of ℝⁿ.
An alternative definition (Remark 3.3.8 & Theorem 3.3.6)

A subset V of ℝⁿ is a subspace of ℝⁿ if and only if
(i) V is non-empty and (ii) for all u, v ∈ V and c, d ∈ ℝ, cu + dv ∈ V.

Let Ax = 0 be the homogeneous linear system and V the solution set of the system. Observe that V ⊆ ℝⁿ.
• Since x = 0 is a solution to Ax = 0, 0 ∈ V and hence V is non-empty.
• Take any u, v ∈ V, i.e. Au = 0 and Av = 0.
For any c, d ∈ ℝ,
A(cu + dv) = A(cu) + A(dv) = cAu + dAv = c0 + d0 = 0.
Thus x = cu + dv is a solution to Ax = 0 and hence cu + dv ∈ V.
By Remark 3.3.8, V is a subspace of ℝⁿ.
An exercise (Question 3.24(a))

(Recall the subspace version of Theorem 3.2.9: if V is a subspace of ℝⁿ, then 0 ∈ V, and c1v1 + c2v2 + ··· + crvr ∈ V for any v1, v2, ..., vr ∈ V and c1, c2, ..., cr ∈ ℝ.)

Let V and W be two subspaces of ℝⁿ. Show that V ∩ W is a subspace of ℝⁿ.
Observe that V ∩ W = { u ∈ ℝⁿ | u ∈ V and u ∈ W } ⊆ ℝⁿ.
• Since V, W are subspaces of ℝⁿ,
0 ∈ V and 0 ∈ W ⇒ 0 ∈ V ∩ W.
So V ∩ W is nonempty.
• Take any u, v ∈ V ∩ W.
Since V, W are subspaces of ℝⁿ, for any c, d ∈ ℝ,
cu + dv ∈ V and cu + dv ∈ W ⇒ cu + dv ∈ V ∩ W.
By Remark 3.3.8, V ∩ W is a subspace of ℝⁿ.
Chapter 3 Vector Spaces

Section 3.4
Linear Independence
Redundant vectors (Discussion 3.4.1)

Let u1 = (1, 1, 0, 2), u2 = (1, 0, 0, 1), u3 = (0, 1, 0, 1).
Since u3 = u1 − u2, span{ u1, u2, u3 } = span{ u1, u2 }; u3 is a redundant vector.

Given a subspace V = span{ u1, u2, ..., uk }, how do we know whether there are redundant vectors among u1, u2, ..., uk?
We can answer the question by using the concept of linear dependence (or independence).
Linear independence (Definition 3.4.2)

Let S = { u1, u2, ..., uk } ⊆ ℝⁿ.
Consider the vector equation
c1u1 + c2u2 + ··· + ckuk = 0 (*)
where c1, c2, ..., ck are variables.
Note that c1 = 0, c2 = 0, ..., ck = 0 satisfies (*) and hence is a solution to (*). This solution is called the trivial solution.
1. S is called a linearly independent set and u1, u2, ..., uk are said to be linearly independent if (*) has only the trivial solution.
Linear independence (Definition 3.4.2)

2. S is called a linearly dependent set and u1, u2, ..., uk are said to be linearly dependent if (*) has non-trivial solutions,
i.e. there exist real numbers a1, a2, ..., ak, not all of them zero, such that a1u1 + a2u2 + ··· + akuk = 0.

We shall learn that in span{ u1, u2, ..., uk } there are no redundant vectors among u1, u2, ..., uk if and only if u1, u2, ..., uk are linearly independent.
(See Theorem 3.4.4 and Remark 3.4.5.)
Examples (Example 3.4.3.1)

Determine whether the vectors (1, −2, 3), (5, 6, −1), (3, 2, 1) are linearly independent.
Solution:
c1(1, −2, 3) + c2(5, 6, −1) + c3(3, 2, 1) = (0, 0, 0)
⇔ c1 + 5c2 + 3c3 = 0
−2c1 + 6c2 + 2c3 = 0
3c1 − c2 + c3 = 0.
By Gaussian elimination, we find that there are infinitely many solutions.
So the vectors are linearly dependent.
Examples (Example 3.4.3.2)

Determine whether the vectors (1, 0, 0, 1), (0, 2, 1, 0), (1, −1, 1, 1) are linearly independent.
Solution:
c1(1, 0, 0, 1) + c2(0, 2, 1, 0) + c3(1, −1, 1, 1) = (0, 0, 0, 0)
⇔ c1 + c3 = 0
2c2 − c3 = 0
c2 + c3 = 0
c1 + c3 = 0.
By Gaussian elimination, we find that there is only the trivial solution.
So the vectors are linearly independent.
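Linear independence of k vectors is equivalent to the matrix with those vectors as columns having rank k. A NumPy sketch of this test (ours, not from the notes):

    import numpy as np

    def linearly_independent(vectors):
        """True if the given vectors (taken as columns) are linearly independent."""
        A = np.column_stack(vectors)
        # c1 u1 + ... + ck uk = 0 has only the trivial solution iff rank(A) = k.
        return np.linalg.matrix_rank(A) == A.shape[1]

    print(linearly_independent([(1, -2, 3), (5, 6, -1), (3, 2, 1)]))         # False
    print(linearly_independent([(1, 0, 0, 1), (0, 2, 1, 0), (1, -1, 1, 1)])) # True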
Examples (Example 3.4.3.3)

Let S = { u } ⊆ ℝⁿ.
That S is linearly dependent means that there exists a real number a ≠ 0 such that au = 0.
For any a ≠ 0, au = 0 ⇔ u = a⁻¹0 = 0.
So S is linearly dependent if and only if u = 0.
Examples (Example 3.4.3.4)

Let S = { u, v } ⊆ ℝⁿ.
That S is linearly dependent means that there exist real numbers c, d, not both zero, such that cu + dv = 0.
When c ≠ 0, cu + dv = 0 ⇔ u = −c⁻¹dv.
When d ≠ 0, cu + dv = 0 ⇔ v = −d⁻¹cu.
So S is linearly dependent if and only if
u = av for some real number a
or v = bu for some real number b.
Examples (Example 3.4.3.5)

Let S = { u1, u2, ..., uk } ⊆ ℝⁿ.
Suppose 0 ∈ S, say, ui = 0 for some i = 1, 2, ..., k.
Then c1 = 0, ..., ci−1 = 0, ci = 1, ci+1 = 0, ..., ck = 0 is a non-trivial solution to the equation
c1u1 + ··· + ci−1ui−1 + ciui + ci+1ui+1 + ··· + ckuk = 0.
So S is linearly dependent.
Linear independence (Theorem 3.4.4)

Let S = { u1, u2, ..., uk } ⊆ ℝⁿ where k ≥ 2.
1. S is linearly dependent if and only if at least one vector ui ∈ S can be written as a linear combination of the other vectors in S,
i.e. ui = a1u1 + ··· + ai−1ui−1 + ai+1ui+1 + ··· + akuk for some real numbers a1, ..., ai−1, ai+1, ..., ak.
2. S is linearly independent if and only if no vector in S can be written as a linear combination of the other vectors in S.
(The statements 1 and 2 are logically equivalent. We only need to prove one of them.)
Linear independence (Theorem 3.4.4)

In the following, we prove statement 1:
(⇒) Suppose S is linearly dependent, i.e. there exist real numbers b1, b2, ..., bk, not all of them zero, such that b1u1 + b2u2 + ··· + bkuk = 0.
Let bi be one of the nonzero coefficients.
Then
biui = −(b1u1 + ··· + bi−1ui−1 + bi+1ui+1 + ··· + bkuk)
implies
ui = −bi⁻¹(b1u1 + ··· + bi−1ui−1 + bi+1ui+1 + ··· + bkuk)
= a1u1 + ··· + ai−1ui−1 + ai+1ui+1 + ··· + akuk
where aj = −bi⁻¹bj for j = 1, ..., i−1, i+1, ..., k.
Linear independence (Theorem 3.4.4)

(⇐) Suppose there exists ui such that
ui = a1u1 + ··· + ai−1ui−1 + ai+1ui+1 + ··· + akuk
for some real numbers a1, ..., ai−1, ai+1, ..., ak.
Then c1 = a1, ..., ci−1 = ai−1, ci = −1, ci+1 = ai+1, ..., ck = ak is a non-trivial solution to the equation
c1u1 + ··· + ci−1ui−1 + ciui + ci+1ui+1 + ··· + ckuk = 0.
So S is linearly dependent.
Redundant vectors (Remark 3.4.5)

1. If a set of vectors is linearly dependent, then there exists at least one redundant vector in the set.
2. If a set of vectors is linearly independent, then there is no redundant vector in the set.
Examples (Example 3.4.6.1)

Let S1 = { (1, 0), (0, 4), (2, 4) } ⊆ ℝ².
S1 is linearly dependent.
(The equation c1(1, 0) + c2(0, 4) + c3(2, 4) = (0, 0) has non-trivial solutions.)
We see that (2, 4) = 2(1, 0) + (0, 4),
i.e. (2, 4) can be expressed as a linear combination of (1, 0) and (0, 4).
Examples (Example 3.4.6.2)

Let S2 = { (−1, 0, 0), (0, 3, 0), (0, 0, 7) } ⊆ ℝ³.
S2 is linearly independent.
(The equation c1(−1, 0, 0) + c2(0, 3, 0) + c3(0, 0, 7) = (0, 0, 0) has only the trivial solution.)
(−1, 0, 0) cannot be expressed as a linear combination of (0, 3, 0) and (0, 0, 7).
(0, 3, 0) cannot be expressed as a linear combination of (−1, 0, 0) and (0, 0, 7).
(0, 0, 7) cannot be expressed as a linear combination of (−1, 0, 0) and (0, 3, 0).
Linear independence (Theorem 3.4.7 & Example 3.4.8)

Let S = { u1, u2, ..., uk } ⊆ ℝⁿ.
If k > n, then S is linearly dependent.

In particular,
1. in ℝ², a set of three or more vectors must be linearly dependent;
2. in ℝ³, a set of four or more vectors must be linearly dependent.
Proof of the theorem (Theorem 3.4.7)

Proof: Let ui = (ai1, ai2, ..., ain) for i = 1, 2, ..., k.
Then c1u1 + c2u2 + ··· + ckuk = 0
⇔ a11c1 + a21c2 + ··· + ak1ck = 0
a12c1 + a22c2 + ··· + ak2ck = 0
⁞
a1nc1 + a2nc2 + ··· + aknck = 0.
The system has k unknowns and n equations.
Since k > n, (by Remark 1.5.4.2) the system has non-trivial solutions.
So S is linearly dependent.
Geometrical interpretation (Discussion 3.4.9.1)

In ℝ² or ℝ³, two vectors u, v are linearly dependent if and only if they lie on the same line (when they are placed with their initial points at the origin).
(The slides picture two dependent cases, where u and v lie on the same line, and one independent case, where they do not.)
Geometrical interpretation (Discussion 3.4.9.1)

In ℝ³, three vectors u, v, w are linearly dependent if and only if they lie on the same line or the same plane (when they are placed with their initial points at the origin).
(The slides picture two dependent cases, one with all three vectors in a plane P = span{ u, v, w } and one with w in P′ = span{ u, v }, and one independent case.)
Add an independent vector (Theorem 3.4.10)

Let u1, u2, ..., uk be linearly independent vectors in ℝⁿ.
If uk+1 is not a linear combination of u1, u2, ..., uk, then u1, u2, ..., uk, uk+1 (a new vector is added) are linearly independent.
(Please read our textbook for a proof of the result.)


Chapter 3 Vector Spaces

Section 3.5
Bases
Vector spaces and subspaces (Discussion 3.5.1)

We adopt the following conventions:
1. A set V is called a vector space if either V = ℝⁿ or V is a subspace of ℝⁿ.
2. Let W be a vector space, say, W = ℝⁿ or W is a subspace of ℝⁿ.
A set V is called a subspace of W if V is a vector space and V ⊆ W,
i.e. V is a subspace of ℝⁿ which lies completely inside W.
Examples (Example 3.5.2)

Let U = span{ (1, 1, 1) }, V = span{ (1, 1, −1) }
and W = span{ (1, 0, 0), (0, 1, 1) }.
Since U, V, W are subspaces of ℝ³, they are vector spaces.
As (1, 1, 1) = (1, 0, 0) + (0, 1, 1), (by Theorem 3.2.10)
U = span{ (1, 1, 1) } ⊆ span{ (1, 0, 0), (0, 1, 1) } = W.
So U is a subspace of W.
As (1, 1, −1) ∉ span{ (1, 0, 0), (0, 1, 1) } = W, V ⊄ W.
So V is not a subspace of W.
Bases (Definition 3.5.4 & Discussion 3.5.3)

Let V be a vector space and S = { u1, u2, ..., uk } a subset of V.
Then S is called a basis (plural bases) for V if
1. S is linearly independent and
2. S spans V.

We shall learn that a basis for V can be used to build a coordinate system for V. (See Theorem 3.5.7 and Definition 3.5.8.)
Examples (Example 3.5.5.1)

Show that S = { (1, 2, 1), (2, 9, 0), (3, 3, 4) } is a basis for ℝ³.
Solution:
(a) c1(1, 2, 1) + c2(2, 9, 0) + c3(3, 3, 4) = (0, 0, 0)
⇔ c1 + 2c2 + 3c3 = 0
2c1 + 9c2 + 3c3 = 0
c1 + 4c3 = 0.
The system has only the trivial solution.
So S is linearly independent.
Examples (Example 3.5.5.1)

(b)
    ( 1 2 3 )      ( 1 2 3    )
    ( 2 9 3 )  →   ( 0 5 −3   )
    ( 1 0 4 )      ( 0 0 −1/5 )
(Gaussian elimination)
There are no zero rows.
Thus (by Discussion 3.2.5) span(S) = ℝ³.

By (a) and (b), S is a basis for ℝ³.
Examples (Example 3.5.5.2)

Let V = span{ (1, 1, 1, 1), (1, −1, −1, 1), (1, 0, 0, 1) }
and S = { (1, 1, 1, 1), (1, −1, −1, 1) }.
Show that S is a basis for V.
Solution:
(a) c1(1, 1, 1, 1) + c2(1, −1, −1, 1) = (0, 0, 0, 0)
⇔ c1 + c2 = 0
c1 − c2 = 0
c1 − c2 = 0
c1 + c2 = 0.
The system has only the trivial solution.
So S is linearly independent.
Examples (Example 3.5.5.2)

V = span{ (1, 1, 1, 1), (1, −1, −1, 1), (1, 0, 0, 1) },
S = { (1, 1, 1, 1), (1, −1, −1, 1) }.
(b) Since (1, 0, 0, 1) = (1/2)(1, 1, 1, 1) + (1/2)(1, −1, −1, 1),
(by Theorem 3.2.12) span(S) = V.

By (a) and (b), S is a basis for V.
Examples (Example 3.5.5.3-4)

Is S = { (1, 1, 1, 1), (0, 0, 1, 2), (−1, 0, 0, 1) } a basis for ℝ⁴?
Solution: Since three vectors cannot span ℝ⁴ (see Theorem 3.2.7), S is not a basis for ℝ⁴.

Let V = span(S) with S = { (1, 1, 1), (0, 0, 1), (1, 1, 0) }.
Is S a basis for V?
Solution: S is linearly dependent, since (1, 1, 1) = (0, 0, 1) + (1, 1, 0).
So S is not a basis for V.
Some remarks (Remark 3.5.6)

1. A basis for V is a set of the smallest size that can span V. (See Theorem 3.6.1.2.)
2. For convenience, the empty set, ∅, is defined to be the basis for the zero space.
3. Except for the zero space, any vector space has infinitely many different bases.
Coordinate systems (Theorem 3.5.7)

Let S = { u1, u2, ..., uk } be a basis for a vector space V.
Then every vector v ∈ V can be expressed in the form
v = c1u1 + c2u2 + ··· + ckuk
in exactly one way, where c1, c2, ..., ck are real numbers.

Proof: Since S spans V, every vector v ∈ V can be expressed in the form
v = c1u1 + c2u2 + ··· + ckuk
for some real numbers c1, c2, ..., ck.
It remains to show that the expression is unique.
Coordinate systems (Theorem 3.5.7)

Suppose v can be expressed in two ways:
v = c1u1 + c2u2 + ··· + ckuk and v = d1u1 + d2u2 + ··· + dkuk
where c1, c2, ..., ck, d1, d2, ..., dk are real numbers.
(c1u1 + c2u2 + ··· + ckuk) − (d1u1 + d2u2 + ··· + dkuk) = v − v = 0
⇒ (c1 − d1)u1 + (c2 − d2)u2 + ··· + (ck − dk)uk = 0. (#)
Since u1, u2, ..., uk are linearly independent, (#) can only have the trivial solution
c1 − d1 = 0, c2 − d2 = 0, ..., ck − dk = 0,
i.e. c1 = d1, c2 = d2, ..., ck = dk.
So the expression is unique.
Coordinate systems (Definition 3.5.8)

Let S = { u1, u2, ..., uk } be a basis for a vector space V.
A vector v ∈ V can be expressed uniquely in the form
v = c1u1 + c2u2 + ··· + ckuk.
The coefficients c1, c2, ..., ck are called the coordinates of v relative to the basis S.
The vector
(v)S = (c1, c2, ..., ck) ∈ ℝᵏ
is called the coordinate vector of v relative to S.
(Here, we assume that the vectors of S are in a fixed order so that u1 is the 1st vector, u2 is the 2nd vector, etc. In some textbooks, such a basis is called an ordered basis.)
Examples (Example 3.5.9.1 (a))

Let S = { (1, 2, 1), (2, 9, 0), (3, 3, 4) }, which is a basis for ℝ³.
Find the coordinate vector of v = (5, −1, 9) relative to S.

Solution: Solving
a(1, 2, 1) + b(2, 9, 0) + c(3, 3, 4) = (5, −1, 9),
we obtain only one solution a = 1, b = −1, c = 2,
i.e. v = (1, 2, 1) − (2, 9, 0) + 2(3, 3, 4).
The coordinate vector of v relative to S is
(v)S = (1, −1, 2).
Examples (Example 3.5.9.1 (b))

Let S = { (1, 2, 1), (2, 9, 0), (3, 3, 4) }, which is a basis for ℝ³.
Find a vector w such that (w)S = (−1, 3, 2).

Solution: Since (w)S = (−1, 3, 2),
w = −(1, 2, 1) + 3(2, 9, 0) + 2(3, 3, 4)
= (11, 31, 7).
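Finding (v)S amounts to solving one linear system whose coefficient matrix has the basis vectors as columns; a NumPy sketch of ours covering both parts of Example 3.5.9.1:

    import numpy as np

    # Columns are the basis vectors (1, 2, 1), (2, 9, 0), (3, 3, 4) of S.
    A = np.column_stack([(1, 2, 1), (2, 9, 0), (3, 3, 4)]).astype(float)

    v = np.array([5., -1., 9.])
    print(np.linalg.solve(A, v))     # [ 1. -1.  2.] = (v)_S

    w = A @ np.array([-1., 3., 2.])  # reverse direction: (w)_S = (-1, 3, 2) gives w
    print(w)                         # [11. 31.  7.]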
Examples (Example 3.5.9.2 (a))

Let v = (2, 3) ∈ ℝ² and S1 = { (1, 0), (0, 1) }.
Since (2, 3) = 2(1, 0) + 3(0, 1),
(v)S1 = (2, 3).
Examples (Example 3.5.9.2 (b))

Let v = (2, 3) ∈ ℝ² and S2 = { (1, −1), (1, 1) }.
Since (2, 3) = −(1/2)(1, −1) + (5/2)(1, 1),
(v)S2 = (−1/2, 5/2).
Examples (Example 3.5.9.2 (c))

Let v = (2, 3) ∈ ℝ² and S3 = { (1, 0), (1, 1) }.
Since (2, 3) = −(1, 0) + 3(1, 1),
(v)S3 = (−1, 3).
Standard basis for ℝⁿ (Example 3.5.9.3)

Let E = { e1, e2, ..., en }
where e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, ..., 0, 1) are vectors in ℝⁿ.
(Recall Remark 3.3.3.2.)
For any u = (u1, u2, ..., un) ∈ ℝⁿ,
u = u1e1 + u2e2 + ··· + unen.
Thus ℝⁿ = span(E) and hence E spans ℝⁿ.
Standard basis for ℝⁿ (Example 3.5.9.3)

On the other hand,
c1e1 + c2e2 + ··· + cnen = 0 (†)
⇒ (c1, c2, ..., cn) = (0, 0, ..., 0)
⇒ c1 = 0, c2 = 0, ..., cn = 0.
Since the vector equation (†) has only the trivial solution, E is linearly independent.
Thus E is a basis for ℝⁿ, which is known as the standard basis for ℝⁿ.
For u = (u1, u2, ..., un) ∈ ℝⁿ,
(u)E = (u1, u2, ..., un) = u.
Coordinate systems (Remark 3.5.10)

Let S be a basis for a vector space V.
1. For any u, v ∈ V, u = v if and only if (u)S = (v)S.
2. For any v1, v2, ..., vr ∈ V and c1, c2, ..., cr ∈ ℝ,
(c1v1 + c2v2 + ··· + crvr)S = c1(v1)S + c2(v2)S + ··· + cr(vr)S.
Coordinate systems (Theorem 3.5.11)

Let S be a basis for a vector space V where |S| = k.
Let v1, v2, ..., vr ∈ V.
1. v1, v2, ..., vr are linearly dependent vectors in V if and only if (v1)S, (v2)S, ..., (vr)S are linearly dependent vectors in ℝᵏ;
equivalently, v1, v2, ..., vr are linearly independent vectors in V if and only if (v1)S, (v2)S, ..., (vr)S are linearly independent vectors in ℝᵏ.
2. span{ v1, v2, ..., vr } = V if and only if span{ (v1)S, (v2)S, ..., (vr)S } = ℝᵏ.
(Please read our textbook for a proof of the result.)


Chapter 3 Vector Spaces

Section 3.6
Dimensions
Size of bases (Theorem 3.6.1 & Remark 3.6.2)

Let V be a vector space which has a basis with k vectors.
1. Any subset of V with more than k vectors is always linearly dependent.
2. Any subset of V with fewer than k vectors cannot span V.

This means that every basis for V has the same size k.
Proof of the theorem (Theorem 3.6.1)

Proof: Let S be a basis for V with |S| = k.
1. Let T = { v1, v2, ..., vr } ⊆ V where r > k.
Since (v1)S, (v2)S, ..., (vr)S are vectors in ℝᵏ, (by Theorem 3.4.7) (v1)S, (v2)S, ..., (vr)S are linearly dependent.
Then (by Theorem 3.5.11.1) T is linearly dependent.
2. Let T′ = { w1, w2, ..., wt } ⊆ V where t < k.
Since (w1)S, (w2)S, ..., (wt)S are vectors in ℝᵏ, (by Theorem 3.2.7) (w1)S, (w2)S, ..., (wt)S cannot span ℝᵏ.
Then (by Theorem 3.5.11.2) T′ cannot span V.


Dimensions (Definition 3.6.3 & Example 3.6.4.1-3)

The dimension of a vector space V, denoted by dim(V), is defined to be the number of vectors in a basis for V.
The dimension of the zero space is defined to be 0.
dim(ℝⁿ) = n.
(Note that the standard basis E = { e1, e2, ..., en } for ℝⁿ has n vectors.)
Except for { 0 } and ℝ², subspaces of ℝ² are lines through the origin, which are of dimension 1.
Except for { 0 } and ℝ³, subspaces of ℝ³ are either lines through the origin, which are of dimension 1, or planes containing the origin, which are of dimension 2.
An example (Example 3.6.4.4)

Find a basis for, and determine the dimension of, the subspace W = { (x, y, z) | y = z } of ℝ³.
Solution: Every vector of W is of the form
(x, y, y) = x(1, 0, 0) + y(0, 1, 1).
So W = span{ (1, 0, 0), (0, 1, 1) }.
On the other hand,
c1(1, 0, 0) + c2(0, 1, 1) = (0, 0, 0) ⇒ c1 = 0, c2 = 0.
Thus { (1, 0, 0), (0, 1, 1) } is linearly independent.
So { (1, 0, 0), (0, 1, 1) } is a basis for W and dim(W) = 2.
Solution spaces (Discussion 3.6.5)

How do we determine the dimension of the solution space of a homogeneous linear system Ax = 0?
By Gaussian or Gauss-Jordan elimination, reduce the augmented matrix (A | 0) to (R | 0), where R is a row-echelon form of A.
First, set the variables corresponding to the non-pivot columns of R to be arbitrary parameters.
Then equate the other variables accordingly.
Solution spaces (Discussion 3.6.5)

Then a general solution to the system can be written as
x = t1u1 + t2u2 + ··· + tkuk
where t1, t2, ..., tk are arbitrary parameters and u1, u2, ..., uk are fixed vectors.
In this way, the vectors u1, u2, ..., uk are always linearly independent.
So { u1, u2, ..., uk } is a basis for the solution space.
Note that the dimension of the solution space is equal to
k = the number of arbitrary parameters
= the number of non-pivot columns in R.
An example (Example 3.6.6)

Consider the homogeneous linear system
2v + 2w − x + z = 0
−v − w + 2x − 3y + z = 0
x + y + z = 0
v + w − 2x − z = 0.

    (  2  2 −1  0  1 | 0 )      ( 1 1 0 0 1 | 0 )
    ( −1 −1  2 −3  1 | 0 )  →   ( 0 0 1 0 1 | 0 )
    (  0  0  1  1  1 | 0 )      ( 0 0 0 1 0 | 0 )
    (  1  1 −2  0 −1 | 0 )      ( 0 0 0 0 0 | 0 )
(Gauss-Jordan elimination)
An example (Example 3.6.6)

We have a general solution
(v, w, x, y, z) = (−s − t, s, −t, 0, t) = s(−1, 1, 0, 0, 0) + t(−1, 0, −1, 0, 1)
where s, t are arbitrary parameters.
So { (−1, 1, 0, 0, 0), (−1, 0, −1, 0, 1) } is a basis for the solution space, and the dimension of the solution space is 2.
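The same basis can be read off with SymPy; a sketch of ours under the variable order (v, w, x, y, z):

    from sympy import Matrix

    A = Matrix([[ 2,  2, -1,  0,  1],
                [-1, -1,  2, -3,  1],
                [ 0,  0,  1,  1,  1],
                [ 1,  1, -2,  0, -1]])

    basis = A.nullspace()   # [(-1, 1, 0, 0, 0)^T, (-1, 0, -1, 0, 1)^T]
    print(basis)
    print(len(basis))       # 2, the dimension of the solution space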
A useful result (Theorem 3.6.7)

Let V be a vector space of dimension k and S a subset of V. The following are equivalent:
1. S is a basis for V, i.e. S is linearly independent and S spans V.
2. S is linearly independent and |S| = k.
3. S spans V and |S| = k.
(Please read our textbook for a proof of the result.)

If we want to check that S is a basis for V, we only need to check any two of the three conditions:
(i) S is linearly independent; (ii) S spans V; (iii) |S| = k.
An example (Example 3.6.8)

Let u1 = (2, 0, −1), u2 = (4, 0, 7) and u3 = (−1, 1, 4).
Show that { u1, u2, u3 } is a basis for ℝ³.

Solution:
c1u1 + c2u2 + c3u3 = 0
⇒ c1(2, 0, −1) + c2(4, 0, 7) + c3(−1, 1, 4) = (0, 0, 0)
⇒ c1 = 0, c2 = 0, c3 = 0.
So { u1, u2, u3 } is linearly independent.
Since dim(ℝ³) = 3, (by Theorem 3.6.7) { u1, u2, u3 } is a basis for ℝ³.
Dimensions of subspaces (Theorem 3.6.9)

Let U be a subspace of a vector space V.
Then dim(U) ≤ dim(V).
Furthermore, if U ≠ V, dim(U) < dim(V).
Proof: Let S be a basis for U.
Since U ⊆ V, S is a linearly independent subset of V.
Then dim(U) = |S| ≤ dim(V).
(By Theorem 3.6.1.1, a subset T of V with |T| > dim(V) must be linearly dependent.)
Assume dim(U) = dim(V).
As S is linearly independent and |S| = dim(V), (by Theorem 3.6.7) S is a basis for V.
But then U = span(S) = V.
Hence if U ≠ V, dim(U) < dim(V).
An example (Example 3.6.10)

Let V be a plane in ℝ³ containing the origin.
V is a vector space of dimension 2.
Suppose U is a subspace of V such that U ≠ V.
Then (by Theorem 3.6.9) dim(U) < 2.
U is either the zero space { (0, 0, 0) } (of dimension 0) or a line through the origin (of dimension 1).
Invertible matrices (Theorem 3.6.11)

Let A be an n × n matrix. The following statements are equivalent:
1. A is invertible.
2. The linear system Ax = 0 has only the trivial solution.
3. The reduced row-echelon form of A is an identity matrix.
4. A can be expressed as a product of elementary matrices.
5. det(A) ≠ 0.
6. The rows of A form a basis for ℝⁿ.
7. The columns of A form a basis for ℝⁿ.
Invertible matrices (Theorem 3.6.11)

We have already learnt that statements 1-5 are equivalent (see Theorem 2.4.7 and Theorem 2.5.19).

To prove 7 ⇔ 1:
Let A = ( a1 a2 ··· an ) where ai is the i th column of A.
{ a1, a2, ..., an } is a basis for ℝⁿ
⇔ span{ a1, a2, ..., an } = ℝⁿ (by Theorem 3.6.7)
⇔ a row-echelon form of A has no zero row (by Discussion 3.2.5)
⇔ A is invertible. (by Remark 2.4.10)
Invertible matrices (Theorem 3.6.11)

The rows of A are the columns of Aᵀ.
Since A is invertible if and only if Aᵀ is invertible (see Theorem 2.3.9), "1 ⇔ 6" follows from "1 ⇔ 7".
Examples (Example 3.6.12)

1. Let u1 = (1, 1, 1), u2 = (−1, 1, 2) and u3 = (1, 0, 1).

    | 1 −1 1 |
    | 1  1 0 | = 3 ≠ 0  ⇒ { u1, u2, u3 } is a basis for ℝ³.
    | 1  2 1 |

2. Let u1 = (1, 1, 1, 1), u2 = (1, −1, 1, −1), u3 = (0, 1, −1, 0) and u4 = (2, 1, 1, 0).

    | 1  1  1  1 |
    | 1 −1  1 −1 | = 0  ⇒ { u1, u2, u3, u4 } is not a basis for ℝ⁴.
    | 0  1 −1  0 |
    | 2  1  1  0 |
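Criterion 5 gives a one-line numerical basis test; a NumPy sketch of ours for the two determinants above (det A = det Aᵀ, so rows or columns may be used):

    import numpy as np

    A3 = np.column_stack([(1, 1, 1), (-1, 1, 2), (1, 0, 1)]).astype(float)
    A4 = np.array([[1, 1, 1, 1],
                   [1, -1, 1, -1],
                   [0, 1, -1, 0],
                   [2, 1, 1, 0]], dtype=float)

    print(np.linalg.det(A3))   # 3.0 (nonzero): a basis for R^3
    print(np.linalg.det(A4))   # ~0.0: not a basis for R^4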
Chapter 3 Vector Spaces

Section 3.7
Transition Matrices
Coordinate vectors (Notation 3.7.1)

Let S = { u1, u2, ..., uk } be a basis for a vector space V.
For v ∈ V, recall that if v = c1u1 + c2u2 + ··· + ckuk, then the row vector
(v)S = (c1, c2, ..., ck)
is called the coordinate vector of v relative to S.
From now on, we also define the column vector
[v]S = (c1, c2, ..., ck)ᵀ
to be the coordinate vector of v relative to S.
Transition matrices (Discussion 3.7.2)

Let S = { u1, u2, ..., uk } and T = { v1, v2, ..., vk } be two bases for a vector space V.
Take any vector w ∈ V.
Since S is a basis for V,
w = c1u1 + c2u2 + ··· + ckuk
for some real constants c1, c2, ..., ck,
i.e. [w]S = (c1, c2, ..., ck)ᵀ.
Transition matrices (Discussion 3.7.2)

Since T is a basis for V, we can write
u1 = a11v1 + a21v2 + ··· + ak1vk,
u2 = a12v1 + a22v2 + ··· + ak2vk,
⁞
uk = a1kv1 + a2kv2 + ··· + akkvk
for some real constants a11, a12, ..., akk,
i.e. [u1]T = (a11, a21, ..., ak1)ᵀ, [u2]T = (a12, a22, ..., ak2)ᵀ, ..., [uk]T = (a1k, a2k, ..., akk)ᵀ.
Transition matrices (Discussion 3.7.2)

Then w = c1u1 + c2u2 + ··· + ckuk
= c1(a11v1 + a21v2 + ··· + ak1vk)
+ c2(a12v1 + a22v2 + ··· + ak2vk)
+ ··· + ck(a1kv1 + a2kv2 + ··· + akkvk)
= (c1a11 + c2a12 + ··· + cka1k)v1
+ (c1a21 + c2a22 + ··· + cka2k)v2
+ ··· + (c1ak1 + c2ak2 + ··· + ckakk)vk,
i.e. [w]T = (c1a11 + c2a12 + ··· + cka1k, c1a21 + c2a22 + ··· + cka2k, ..., c1ak1 + c2ak2 + ··· + ckakk)ᵀ.
Question: How is [w]T related to [w]S?
Transition matrices (Discussion 3.7.2 & Definition 3.7.3)

[w]T = (c1a11 + c2a12 + ··· + cka1k, c1a21 + c2a22 + ··· + cka2k, ..., c1ak1 + c2ak2 + ··· + ckakk)ᵀ

         ( a11 a12 ··· a1k ) ( c1 )
       = ( a21 a22 ··· a2k ) ( c2 )
         (  ⁞   ⁞       ⁞  ) (  ⁞ )
         ( ak1 ak2 ··· akk ) ( ck )

       = ( [u1]T [u2]T ··· [uk]T ) [w]S.

Let P = ( [u1]T [u2]T ··· [uk]T ). Then [w]T = P [w]S.
The matrix P is called the transition matrix from S to T.
Examples (Example 3.7.4.1 (a))

Let S = { u1, u2, u3 },
where u1 = (1, 0, −1), u2 = (0, −1, 0), u3 = (1, 0, 2),
and T = { v1, v2, v3 },
where v1 = (1, 1, 1), v2 = (1, 1, 0), v3 = (−1, 0, 0).
Both S and T are bases for ℝ³.
Find the transition matrix from S to T.

Solution: First we need to find a11, a12, ..., a33 such that
a11v1 + a21v2 + a31v3 = u1,
a12v1 + a22v2 + a32v3 = u2,
a13v1 + a23v2 + a33v3 = u3.
Examples (Example 3.7.4.1 (a))
Form the augmented matrix with columns v1, v2, v3 | u1, u2, u3 and apply Gauss-Jordan elimination:

    ( 1 1 −1 |  1  0 1 )      ( 1 0 0 | −1  0  2 )
    ( 1 1  0 |  0 −1 0 )  →   ( 0 1 0 |  1 −1 −2 )
    ( 1 0  0 | −1  0 2 )      ( 0 0 1 | −1 −1 −1 )

We have u1 = −v1 + v2 − v3,
u2 = −v2 − v3,
u3 = 2v1 − 2v2 − v3.
So the transition matrix from S to T is

        ( −1  0  2 )
    P = (  1 −1 −2 ) .
        ( −1 −1 −1 )
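Numerically, the columns of P solve V P = U, where U and V have the basis vectors of S and T as columns; a NumPy sketch of ours:

    import numpy as np

    U = np.column_stack([(1, 0, -1), (0, -1, 0), (1, 0, 2)]).astype(float)  # basis S
    V = np.column_stack([(1, 1, 1), (1, 1, 0), (-1, 0, 0)]).astype(float)   # basis T

    P = np.linalg.solve(V, U)   # column i is [u_i]_T: the transition matrix from S to T
    print(P)                    # [[-1. 0. 2.], [1. -1. -2.], [-1. -1. -1.]]

    w_S = np.array([2., -1., 2.])
    print(P @ w_S)              # [ 2. -1. -3.] = [w]_T, as in Example 3.7.4.1 (b)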
Examples (Example 3.7.4.1 (b))

S = { u1, u2, u3 },
where u1 = (1, 0, −1), u2 = (0, −1, 0), u3 = (1, 0, 2),
T = { v1, v2, v3 },
where v1 = (1, 1, 1), v2 = (1, 1, 0), v3 = (−1, 0, 0).
Let w such that (w)S = (2, −1, 2). Find (w)T.

Solution: P is the transition matrix from S to T.

−1 0 2 2 2
[w ]T = P [w ]S = 1 −1 −2 −1 = −1
−1 −1 −1 2 −3
So (w)T = (2, −1, −3).
Examples (Example 3.7.4.2)

Let S = { u1, u2 }, where u1 = (1, 1), u2 = (1, −1),
and T = { v1, v2 }, where v1 = (1, 0), v2 = (1, 1).
We have u1 = v2,
u2 = 2v1 − v2.
Thus the transition matrix from S to T is

    P = ( 0  2 )
        ( 1 −1 ) .

On the other hand, v1 = (1/2)u1 + (1/2)u2,
v2 = u1.
Thus the transition matrix from T to S is

    Q = ( 1/2 1 )
        ( 1/2 0 ) .

Note that Q = P⁻¹.
Transition matrices (Theorem 3.7.5)

Let S and T be two bases for a vector space and let P be the transition matrix from S to T.
1. P is invertible.
2. P⁻¹ is the transition matrix from T to S.

Proof: Let Q be the transition matrix from T to S.
If we can show that QP = I, then (by Theorem 2.4.12) P is invertible and P⁻¹ = Q.
Transition matrices (Theorem 3.7.5)

An observation:
Given a matrix A = (aij)m×n, let ei be the column vector with 1 in the i th entry and 0 in all other entries.
Then, for each row j of A,
aj1·0 + ··· + aj,i−1·0 + aji·1 + aj,i+1·0 + ··· + ajn·0 = aji,
so Aei = (a1i, a2i, ..., ami)ᵀ = the i th column of A.
Transition matrices (Theorem 3.7.5)

Suppose S = { u1, u2, ..., uk }.
Then [u1]S = e1, [u2]S = e2, ..., [uk]S = ek.
For i = 1, 2, ..., k, since P is the transition matrix from S to T and Q is the transition matrix from T to S,
the i th column of QP = QPei = QP [ui]S = Q [ui]T = [ui]S = ei.
So QP = ( e1 e2 ··· ek ) = I.
An example (Example 3.7.6)

Let S = { u1, u2, u3 },
where u1 = (1, 0, −1), u2 = (0, −1, 0), u3 = (1, 0, 2),
and T = { v1, v2, v3 },
where v1 = (1, 1, 1), v2 = (1, 1, 0), v3 = (−1, 0, 0).
The transition matrix from S to T is

        ( −1  0  2 )
    P = (  1 −1 −2 )
        ( −1 −1 −1 )

(see Example 3.7.4.1).
An example (Example 3.7.6)

Then the transition matrix from T to S is

           ( −1  0  2 )⁻¹   (  1/3  2/3 −2/3 )
    P⁻¹ =  (  1 −1 −2 )   = ( −1   −1    0   ) .
           ( −1 −1 −1 )     (  2/3  1/3 −1/3 )

If (w)T = (2, −1, −3), then

                      (  1/3  2/3 −2/3 ) (  2 )   (  2 )
    [w]S = P⁻¹ [w]T = ( −1   −1    0   ) ( −1 ) = ( −1 ) .
                      (  2/3  1/3 −1/3 ) ( −3 )   (  2 )
