
MATH231: Linear Algebra, Lecture 1

Tran Duc Anh, email: ducanh@hnue.edu.vn


September 2021

Contents
A. Definition of vector space

B. Linear combination and linear independence

C. Maximal linearly independent subset and the rank of a system of vectors

A. Definition of vector space


Firstly, R is the set of real numbers; we call this set the real number field. Let V be
a non-empty set. The elements of V are denoted by ~u, ~v, ~w, etc. Suppose V is endowed with
two operations + and · as follows:

• For all ~u, ~v ∈ V, the sum ~u + ~v makes sense and is an element of V.

• For each real number λ and each element ~u ∈ V, the multiplication λ · ~u makes sense
and is an element of V.

Definition 1 (Vector space). We say V is a vector space over the real number field R (or
briefly, V is an R-vector space, or a vector space over R) if the operations + and · above
satisfy the following properties:

• For all ~u, ~v ∈ V, we have ~u + ~v = ~v + ~u (commutativity of the addition).

• For all ~u, ~v, ~w ∈ V, we have (~u + ~v) + ~w = ~u + (~v + ~w) (associativity).

• There exists a "neutral" element in V with respect to the operation +, denoted by ~0,
satisfying ~u + ~0 = ~0 + ~u = ~u for every ~u ∈ V.

• For each element ~u ∈ V, there exists an "opposite" element of ~u, denoted by −~u,
satisfying ~u + (−~u) = (−~u) + ~u = ~0.

• For each λ ∈ R and each ~u, ~v ∈ V, we have λ · (~u + ~v) = λ · ~u + λ · ~v (distributivity of
the operation · with respect to the operation + on the right).

• For all λ, µ ∈ R and every ~u ∈ V, we have (λ + µ) · ~u = λ · ~u + µ · ~u (distributivity of
the operation · with respect to the operation + on the left).

• For all λ, µ ∈ R and each ~u ∈ V, we have (λ · µ) · ~u = λ · (µ · ~u) (associativity of the
multiplication).

• For every ~u ∈ V, we have 1 · ~u = ~u (normalisation property).

Note 2. If V is an R-vector space, we can briefly say V is a vector space. The elements of V
will be called vectors. The element ~0 is called the zero vector. The multiplication · of V is called
the multiplication with scalars, or the multiplication with real numbers.

Example 3. We endow Rn with an R-vector space structure as follows. Each element of
Rn has n coordinates. Consider two such elements of Rn: ~u = (x1, x2, . . . , xn) and ~v =
(y1, y2, . . . , yn). The sum ~u + ~v is defined by ~u + ~v = (x1 + y1, x2 + y2, . . . , xn + yn) ∈ Rn.
The multiplication with real numbers is defined as follows: for λ ∈ R and ~u = (x1, x2, . . . , xn) ∈
Rn, we define λ · ~u = (λx1, λx2, . . . , λxn).
With these two operations, Rn becomes a vector space over the real number field.
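
To make these two operations concrete, here is a minimal sketch in Python with NumPy (the numeric vectors are illustrative, not taken from the lecture):

```python
# Coordinatewise addition and scalar multiplication in R^3 (Example 3).
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # ~u = (x1, x2, x3)
v = np.array([4.0, 5.0, 6.0])   # ~v = (y1, y2, y3)

print(u + v)    # coordinatewise sum: [5. 7. 9.]
print(2.5 * u)  # multiplication with the scalar 2.5: [2.5 5.  7.5]
```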

Example 4. Denote by R[x] the set of all polynomials in one variable x with real coefficients.
This means each element of R[x] is of the form f(x) = a0 + a1x + a2x^2 + . . . + akx^k with k
a natural number and the ai real numbers. Let f(x) and g(x) be two polynomials. We can define
the sum f(x) + g(x) in the usual way and obtain a polynomial. Similarly, if λ is a real number
and f(x) is a polynomial, then λ · f(x) is also a polynomial. Therefore, (R[x], +, ·) is a vector
space over R.
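
As an illustration (a sketch, not part of the lecture), a polynomial a0 + a1x + . . . + akx^k can be stored as its coefficient list [a0, a1, . . . , ak]; the helper names poly_add and poly_scale below are ours:

```python
# Addition and scalar multiplication in R[x] via coefficient lists (Example 4).
def poly_add(f, g):
    n = max(len(f), len(g))
    f = f + [0.0] * (n - len(f))   # pad the shorter polynomial with zeros
    g = g + [0.0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def poly_scale(lam, f):
    return [lam * a for a in f]

# (1 + 2x) + (3 + x^2) = 4 + 2x + x^2
print(poly_add([1.0, 2.0], [3.0, 0.0, 1.0]))  # [4.0, 2.0, 1.0]
print(poly_scale(2.0, [1.0, 2.0]))            # [2.0, 4.0]
```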

B. Linear combination and linear independence


Definition 5. Let V be a vector space. Let ~v1 , ~v2 , . . . , ~vk be k vectors in V and a1 , a2 , . . . , ak ∈
R. The expression
a1~v1 + a2~v2 + . . . + ak~vk
is called a linear combination of the vectors ~v1 , ~v2 , . . . , ~vk . The real numbers ai are called the
coefficients (or weights) of the linear combination.

Definition 6. The vectors ~v1, ~v2, . . . , ~vk ∈ V are said to be linearly independent if whenever
a linear combination satisfies a1~v1 + a2~v2 + . . . + ak~vk = ~0, the coefficients must be a1 = a2 = . . . =
ak = 0. If these vectors are not linearly independent, we say they are linearly
dependent.

Note 7. A linear combination a1~v1 + a2~v2 + . . . + ak~vk whose coefficients are all zero, i.e.
a1 = a2 = . . . = ak = 0, is called a trivial combination.

Example 8. Consider a vector ~v1 ∈ V. The vector ~v1 is linearly independent iff ~v1 ≠ ~0.

Example 9. Consider two vectors ~v1, ~v2 ∈ V. These two vectors are linearly independent
iff ~v1 ∦ ~v2. Therefore, the two vectors ~v1, ~v2 are linearly dependent iff they are parallel
(one is a scalar multiple of the other). This means that linear dependence generalizes the
notion of parallel vectors from high school.

Example 10. Consider three vectors ~v1 , ~v2 , ~v3 ∈ R3 . These three vectors are linearly depen-
dent iff they are coplanar.
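
Examples 8–10 can be checked numerically: vectors are linearly independent iff the matrix having them as columns has rank equal to the number of vectors. A minimal sketch with NumPy (matrix_rank works up to a numerical tolerance, so this is a numerical test rather than a proof):

```python
# Example 10 numerically: three coplanar vectors in R^3 are linearly dependent.
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])  # v3 = v1 + v2, so the three are coplanar

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2 < 3, hence linearly dependent
```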

Definition 11 (vector subspace). Let V be a vector space and W ⊂ V a non-empty subset.
We say W is a vector subspace of V if it satisfies two conditions:

• For all ~u, ~v ∈ W, we have ~u + ~v ∈ W (we say W is closed under the addition of V).

• For all λ ∈ R and ~u ∈ W, we have λ · ~u ∈ W (we say W is closed under the multiplication
with real numbers).
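
As a quick illustration (a numeric spot-check on sample vectors, not a proof of the two conditions), take W = {(x, y, 0) : x, y ∈ R} inside R^3; the helper in_W below is ours:

```python
# Spot-checking the two subspace conditions for W = {(x, y, 0)} in R^3.
import numpy as np

def in_W(w):
    return w[2] == 0.0  # membership test for this particular W

u = np.array([1.0, 2.0, 0.0])
v = np.array([3.0, -1.0, 0.0])

print(in_W(u + v))    # True: closed under the addition
print(in_W(5.0 * u))  # True: closed under the multiplication with real numbers
```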

Definition 12. Let A ⊂ V be a subset. Suppose the elements of A are ~v1, ~v2, . . . , ~vk.
Define the set span A to be the set of all linear combinations of vectors in A. Then span A
is a vector subspace of V, and we call span A the vector subspace generated by A (or the linear
hull of A).
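
Whether a given vector belongs to span A can be tested numerically by solving a linear system. A minimal sketch with NumPy (the vectors are illustrative; lstsq finds the best coefficients, and we check that they reproduce ~v exactly):

```python
# Is v in span{v1, v2}? Solve A a = v in the least-squares sense and check.
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v  = np.array([2.0, 3.0, 5.0])

A = np.column_stack([v1, v2])
a, _, _, _ = np.linalg.lstsq(A, v, rcond=None)
print(np.allclose(A @ a, v))  # True: v = 2*v1 + 3*v2, so v is in span{v1, v2}
```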
Note 13. In the textbook, the authors use the notation ⟨A⟩ instead of span A. We will use
both notations in this course.
From the definition of the linear hull, we have the following simple proposition.
Proposition 14. Suppose ~v1, ~v2, . . . , ~vk are k linearly independent vectors in the vector space V.
Suppose ~vk+1 ∈ V is a vector with ~vk+1 ∉ span{~v1, ~v2, . . . , ~vk}. Then the k + 1 vectors

~v1, ~v2, . . . , ~vk+1

are linearly independent.

Proof. Suppose ~v1, ~v2, . . . , ~vk+1 are linearly dependent. Then there exists a nontrivial linear
combination

a1~v1 + a2~v2 + . . . + ak+1~vk+1 = ~0

where the ai are real numbers, not all zero.
If ak+1 ≠ 0, then we obtain

~vk+1 = −(a1/ak+1)~v1 − (a2/ak+1)~v2 − . . . − (ak/ak+1)~vk ∈ span{~v1, ~v2, . . . , ~vk}.

This contradicts the assumption that ~vk+1 ∉ span{~v1, ~v2, . . . , ~vk}.
Hence ak+1 = 0, and we obtain a nontrivial linear combination

a1~v1 + a2~v2 + . . . + ak~vk = ~0.

This is also impossible, since, by assumption, ~v1, ~v2, . . . , ~vk are linearly independent.
Therefore ~v1, ~v2, . . . , ~vk+1 are linearly independent.
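
A quick numeric illustration of Proposition 14 (not a proof): appending a vector lying outside the span raises the rank of the column matrix by one, so independence is preserved.

```python
# Proposition 14 on a concrete example in R^3.
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0])  # v3 is not in span{v1, v2}

print(np.linalg.matrix_rank(np.column_stack([v1, v2])))      # 2: independent
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 3: still independent
```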
Proposition 15. Let X, Y ⊂ V be two subsets of a vector space V. Suppose every vector of
Y is a linear combination of vectors in X. Then we have

span Y ⊂ span X.

Proof. Each element of span Y is a linear combination of vectors in Y, i.e., an expression of
the form a1~v1 + a2~v2 + . . . + ak~vk with ai real numbers and ~vi vectors in Y.
Each vector ~vi is also a linear combination of vectors in X. Therefore we can expand the
expression a1~v1 + a2~v2 + . . . + ak~vk to obtain a linear combination of vectors in X.
It means span Y ⊂ span X.

C. Maximal linearly independent subset and the rank of a system of vectors

Consider k vectors ~v1, ~v2, . . . , ~vk ∈ V. Denote A = {~v1, ~v2, . . . , ~vk}. We call A a system of
vectors in V.

Definition 16. A subset B ⊂ A is called a maximal linearly independent subset of A if the
vectors in B are linearly independent and if, whenever we put another vector from A\B into B,
the set B is no longer linearly independent.
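
A maximal linearly independent subset can be extracted greedily: go through the vectors of A and keep a vector exactly when it is not yet in the span of the vectors kept so far (by Proposition 14, the kept vectors stay independent). A sketch with NumPy; the function name maximal_independent_subset is ours, and the rank test is numerical:

```python
# Greedy extraction of a maximal linearly independent subset of a system A.
import numpy as np

def maximal_independent_subset(vectors):
    kept = []
    for v in vectors:
        # Keep v iff it increases the rank, i.e. v is outside span(kept).
        if np.linalg.matrix_rank(np.column_stack(kept + [v])) == len(kept) + 1:
            kept.append(v)
    return kept

A = [np.array([1.0, 0.0, 1.0]),
     np.array([2.0, 0.0, 2.0]),   # a multiple of the first vector
     np.array([0.0, 1.0, 0.0])]
print(len(maximal_independent_subset(A)))  # 2
```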
Proposition 17. Suppose B is a maximal linearly independent subset of A.
(a) Then each vector ~v ∈ A is a linear combination of vectors in B, i.e. ~v ∈ span B.
(b) Therefore
span B = span A.

Proof. (a) Suppose B = {~v1, ~v2, . . . , ~vm}. Consider an arbitrary vector ~vj ∈ A. If
~vj ∉ span B, then {~v1, ~v2, . . . , ~vm, ~vj} is linearly independent by Proposition 14; therefore B
would not be a maximal linearly independent subset of A, a contradiction. So ~vj ∈ span B.

(b) Since B ⊂ A, we have span B ⊂ span A. On the other hand, each vector of A is a
linear combination of vectors in B by part (a), so span A ⊂ span B by Proposition 15.
Therefore, span B = span A.
Proposition 18 (Technical lemma). Let B and C be two maximal linearly independent subsets
of A. Then B and C have the same number of elements.

Ideas of the proof. For a detailed proof, see Lemma 2.3.3, page 36 of the textbook. Here I
would like to sum up the basic ideas.
Suppose B = {~u1, ~u2, . . . , ~ur} and C = {~v1, ~v2, . . . , ~vs}. Since B is a maximal linearly
independent subset of A, each vector ~vi ∈ C is a linear combination of vectors in B by Proposition
17. Consider the vector ~v1 ∈ C. Suppose

~v1 = a1~u1 + a2~u2 + . . . + ar~ur.

Since ~v1 ≠ ~0, there exists a coefficient ai ≠ 0. We may suppose a1 ≠ 0.
Then we obtain

~u1 = (1/a1)~v1 − (a2/a1)~u2 − . . . − (ar/a1)~ur.
It means ~u1 is a linear combination of {~v1, ~u2, . . . , ~ur}. From this, we can prove that
{~v1, ~u2, ~u3, . . . , ~ur} is a maximal linearly independent subset(1) of A.
This means we can successively take the vectors ~vi of C and, for each one, pick a corresponding
vector ~uj of B such that replacing ~uj by ~vi still yields a maximal linearly independent subset
with r elements, i.e., with the same number of elements as B. This process can be repeated
until all the elements of C have been used, so we deduce s ≤ r.
Since the roles of B and C are symmetric, their numbers of elements are equal.
Definition 19. For each system of vectors ~v1, ~v2, . . . , ~vk ∈ V, we call the number of elements
of a maximal linearly independent subset of the system the rank of the system of vectors, and
denote it by rank{~v1, ~v2, . . . , ~vk} or, more briefly, rk{~v1, ~v2, . . . , ~vk}.
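
For vectors in Rn, the rank of a system coincides with the rank of the matrix whose columns are these vectors, so it can be computed numerically (a sketch; the vectors are illustrative):

```python
# rank{v1, v2, v3} computed via the column matrix (up to numerical tolerance).
import numpy as np

system = [np.array([1.0, 2.0, 3.0]),
          np.array([2.0, 4.0, 6.0]),   # 2 * the first vector
          np.array([1.0, 0.0, 0.0])]
print(np.linalg.matrix_rank(np.column_stack(system)))  # 2
```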
Note 20. In some books, the notation for the rank of a system of vectors may be different:
rang{~v1, ~v2, . . . , ~vk}, or rg{~v1, ~v2, . . . , ~vk}. In the textbook, the authors use hạng{~v1, ~v2, . . . , ~vk}.
Corollary 21. From the definition of rank, we have
rank{~v1 , ~v2 , . . . , ~vk } ≤ k.

(1) This statement is left as an exercise for students.
