M. G. P. Prasad, IITG
Class Notes for the Lectures on the Topic of Vector Spaces (MA102)
1 Vector Spaces
Definition 1.1. Let G be a non-empty set. If a binary operation ∗ on G satisfies the
following properties:
1. Closure Property: a ∗ b ∈ G for all a, b ∈ G.
2. Associative Property: (a ∗ b) ∗ c = a ∗ (b ∗ c) for all a, b, c ∈ G.
3. Identity Property: There exists an element e ∈ G such that e ∗ a = a ∗ e = a for all a ∈ G.
4. Inverse Property: For every a ∈ G, there exists an inverse element a−1 ∈ G such that
a−1 ∗ a = a ∗ a−1 = e ,
then (G, ∗) is called a group.
Here the element e is called the identity element of G with respect to the binary
operation ∗.
Examples of Groups: (Z, +), (Q, +), (R, +) and (C, +) are groups under the usual addition.
(Q \ {0}, ·), (R \ {0}, ·) and (C \ {0}, ·) are groups under the usual multiplication.
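As a quick computational illustration (a sketch added to these notes, not part of the original text), the four group properties can be checked by brute force for a small finite example such as (Z5 , +), i.e. the set {0, 1, 2, 3, 4} under addition modulo 5. The Python code below uses only the standard library; the names G, op and e are ours.

# Brute-force check of the group axioms for (Z_5, +), i.e. addition modulo 5.
# This works only for small finite sets; it is an illustration, not a proof technique.
G = range(5)                        # the set {0, 1, 2, 3, 4}
op = lambda a, b: (a + b) % 5       # the binary operation * (addition mod 5)

assert all(op(a, b) in G for a in G for b in G)                                   # closure
assert all(op(op(a, b), c) == op(a, op(b, c)) for a in G for b in G for c in G)   # associativity
e = 0
assert all(op(e, a) == a == op(a, e) for a in G)                                  # identity
assert all(op(a, (5 - a) % 5) == e for a in G)                                    # inverses
print("(Z_5, +) satisfies the group axioms")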
Definition 1.2. A group (G, ∗) is said to be an abelian (or commutative) group if its
binary operation satisfies a ∗ b = b ∗ a for all a, b ∈ G.
Definition 1.3. Let F be a non-empty set together with two binary operations + (addition,
say) and ∗ (multiplication, say) such that
1. (F, +) is an abelian group,
2. (F \ {0}, ∗) is an abelian group, and
3. multiplication distributes over addition:
a ∗ (b + c) = a ∗ b + a ∗ c for all a, b, c ∈ F .
Then (F, +, ∗) is called a field.
In a field F = (F, +, ∗), the identity element with respect to the addition + is denoted
by 0 and the identity element with respect to the multiplication ∗ is denoted by 1.
The binary operation multiplication is normally denoted by · and a · b may be simply
written as ab.
Examples of Fields: With usual addition + and multiplication ·, each of the sets: Q, R
and C forms a field.
Definition 1.4. Let V be a non-empty set and let F be a scalar field such that the operations
of vector addition and scalar multiplication are defined on them.
We say that V is a vector space (or linear space) over the field F if
1. u + v ∈ V for all u, v ∈ V (closure under vector addition).
2. u + v = v + u for all u, v ∈ V (commutativity of vector addition).
3. (u + v) + w = u + (v + w) for all u, v, w ∈ V (associativity of vector addition).
4. There exists a zero vector 0 ∈ V such that v + 0 = v for all v ∈ V .
5. For every v ∈ V , there exists a vector −v ∈ V such that v + (−v) = 0.
6. α v ∈ V for all v ∈ V and α ∈ F (closure under scalar multiplication).
7. α (u + v) = α u + α v for all u, v ∈ V and α ∈ F .
8. (α + β) v = α v + β v for all v ∈ V and α, β ∈ F .
9. (αβ) v = α (β v) for all v ∈ V and α, β ∈ F .
10. 1 v = v for all v ∈ V , where 1 is the multiplicative identity of F .
The elements of V are called the vectors and the elements of F are called the scalars.
When there is no confusion between V and F , we simply write V is a vector space without
mentioning the field F . Whenever it is desirable to specify the field F , we shall say V is a
vector space over the field F .
In the MA102 course, the field F is normally taken as the real field R. That is, if the field
is not mentioned, then one has to take F = R. Sometimes we take the complex field C,
and in that case it will be explicitly mentioned. If a field other than R and C is taken then
it will be explicitly specified.
A vector space V over the real field R is called a real vector space.
A vector space V over the complex field C is called a complex vector space.
Notation: The vector spaces are denoted by the capital letters like U , V , W , V1 , V2 ,
etc. The elements of a vector space V are denoted by the small letters like u, v, w, v1 ,
v2 , etc. The zero vector in V is denoted by 0 or by ~0. The elements of the scalar field F
are denoted by the small letters a, b, c, d, c1 , c2 , etc. or by Greek letters like α, β, γ, etc.
Example 2: V = Cn and F = R.
Let z = (z1 , z2 , · · · , zn ) and w = (w1 , w2 , · · · , wn ) be any two vectors in V .
Vector Addition:
z + w = ((z1 + w1 ), (z2 + w2 ), · · · , (zn + wn )) for z, w ∈ V .
Scalar Multiplication:
α z = (αz1 , αz2 , · · · , αzn ) for any z ∈ V and α ∈ F .
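The following short Python (numpy) sketch, added here for illustration, carries out these two operations for n = 3; the particular vectors and the scalar are arbitrary choices, and the scalar is kept real because F = R in this example.

import numpy as np

z = np.array([1 + 2j, 3 - 1j, 0 + 4j])    # a vector in C^3
w = np.array([2 - 1j, 1 + 1j, 5 + 0j])    # another vector in C^3
alpha = 2.5                               # a REAL scalar, since F = R here

print(z + w)       # vector addition: (z1 + w1, ..., zn + wn)
print(alpha * z)   # scalar multiplication: (alpha z1, ..., alpha zn)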
Example 3: V = Rn and F = Q.
Let x = (x1 , x2 , · · · , xn ) and y = (y1 , y2 , · · · , yn ) be any two vectors in V .
Vector Addition:
x + y = ((x1 + y1 ), (x2 + y2 ), · · · , (xn + yn )) for x, y ∈ V .
Scalar Multiplication:
α x = (αx1 , αx2 , · · · , αxn ) for any x ∈ V and α ∈ F .
Theorem 1.1. In any vector space V over the field F , the following holds:
1. 0 u = 0 for any u ∈ V .
2. α 0 = 0 for any α ∈ F .
3. (−1) u = −u for any u ∈ V .
4. If cu = 0 then c = 0 or u = 0.
Proof.
Let 0 denote the scalar zero in the field F . Let 0 denote the vector zero in the vector space
V.
Proof of 1:
To show: 0 u = 0 for any u ∈ V .
0 · u = (0 + 0) · u = 0 · u + 0 · u .
Now, adding the additive inverse −(0 · u) to both sides,
0 = (0 · u) + (−(0 · u)) = ((0 · u) + (0 · u)) + (−(0 · u)) = (0 · u) + 0 = (0 · u) .
Thus,
(0 · u) = 0 .
Proof of 2:
To show: α 0 = 0 for any scalar α ∈ F .
α · 0 = α · (0 + 0) = α · 0 + α · 0 .
Now, adding the additive inverse −(α · 0) to both sides,
0 = (α · 0) + (−(α · 0)) = ((α · 0) + (α · 0)) + (−(α · 0)) = (α · 0) + 0 = (α · 0) .
Thus,
(α · 0) = 0 .
Proof of 3:
To show: (−1) · u = −u for any vector u ∈ V .
0 = 0 · u = (1 − 1) · u = 1 · u + ((−1) · u) = u + ((−1) · u) .
Adding −u to both sides of the above identity gives −u = (−1) · u, as required.
Proof of 4:
Given that c · u = 0. To show: Either c = 0 or u = 0.
If c = 0, then we are done with the proof.
Suppose c ≠ 0. Then, we have to show u = 0.
Since c ≠ 0, its inverse element with respect to multiplication is well defined and is given
by (1/c). Now,
u = 1 · u = ((1/c) c) · u = (1/c)(c · u) = (1/c) 0 = 0 .
This completes the proof.
2 Subspaces
Definition 2.1. Let V be a vector space over the field F . A subspace of V is a subset W
of V such that W is itself a vector space over the (same) field F with the (same) operations
of vector addition and scalar multiplication on V . It is denoted by W ≤ V .
Examples: The subset W = {(x, 0, 0) : x ∈ R} (a copy of R inside R3 ) is a subspace of the vector space V = R3 .
In any vector space V , V is a subspace of V itself. The zero subspace W = {0} is a
subspace of V . These two subspaces V and {0} are called trivial subspaces of the vector
space V .
Theorem 2.1. Let V be a vector space over the field F . A non-empty subset W of V is a
subspace of V if and only if
• u + w ∈ W for any two vectors u and w in W .
• α w ∈ W for any vector w ∈ W and for any scalar α ∈ F .
Proof.
Proof of =⇒ :
Given that W is a subspace of the vector space V .
To show: u + w ∈ W for each u, w ∈ W and αw ∈ W for each w ∈ W and for each α ∈ F .
Since W is a vector space with the (same) operations of vector addition and scalar multi-
plication of V over the (same) field F , it follows that if u ∈ W , w ∈ W and α ∈ F then
u + w ∈ W and αw ∈ W .
Proof of ⇐= :
Given that u + w ∈ W for u, w in W and αw ∈ W for each w ∈ W and for each α ∈ F .
To show: W is a subspace of V .
Since W ≠ ∅, there exists w ∈ W . Since w ∈ W , (−1)w ∈ W and (−1)w + w = 0 ∈ W
by the given condition.
If w ∈ W and α ∈ F , then αw ∈ W by the given condition.
Other conditions in the definition of a vector space are satisfied by W , since they hold
true in V . Thus, W is a subspace of V .
Theorem 2.2. Let V be a vector space over the field F . A non-empty subset W of V is
a subspace of V if and only if αu + w ∈ W for any two vectors u and w in W and for any
scalar α ∈ F .
Proof.
Proof of =⇒ :
Given that W is a subspace of the vector space V .
To show: αu + w ∈ W for each u, w ∈ W and α ∈ F .
Since W is a subspace, it follows that αu ∈ W and hence αu + w ∈ W .
Proof of ⇐= :
Given that αu + w ∈ W for u, w in W and for α ∈ F .
To show: W is a subspace of V .
Since W ≠ ∅, there exists w ∈ W . Since w ∈ W , (−1)w + w = 0 ∈ W by the given
condition.
If w ∈ W and α ∈ F , then αw + 0 = αw ∈ W .
In particular, −w ∈ W .
If w1 and w2 are in W , then w1 + w2 = 1w1 + w2 ∈ W . Other conditions in the definition of
a vector space are satisfied by W , since they hold true in V . Thus, W is a subspace of
V .
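The one-step criterion αu + w ∈ W can also be sanity-checked numerically. The Python (numpy) sketch below is ours and is only an illustration, not a proof: the candidate subset W = {(x, y, z) ∈ R3 : x + y + z = 0}, the sampling scheme and the tolerance are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def in_W(v, tol=1e-9):
    # Membership test for the illustrative subset W (sum of coordinates equal to 0).
    return abs(v.sum()) < tol

def random_W_vector():
    # Sample a vector of W: pick x, y freely and set z = -(x + y).
    x, y = rng.normal(size=2)
    return np.array([x, y, -(x + y)])

for _ in range(1000):
    u, w = random_W_vector(), random_W_vector()
    alpha = rng.normal()
    assert in_W(alpha * u + w)      # alpha*u + w stayed in W in every sampled case

print("criterion of Theorem 2.2 held for all samples")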
Example 1: Let V denote the space of all functions from the field F to F and P denote the
space of all polynomials over the field F . Then, P is a subspace of the vector space V.
Example 2: Let P denote the space of all polynomials over the real field. Let n ∈ N. Let
Pn denote the subset of P consisting of all polynomials of degree at most n. Then, Pn is a
subspace of the vector space P.
Example 3: Let Mn,n (F ) denote the space of all n × n matrices over the field F . Let
Sn,n (F ) denote the subset of Mn,n (F ) consisting of all symmetric matrices of size n × n.
Then, Sn,n (F ) is a subspace of the vector space Mn,n (F ).
Example 4: Let Mn,n (C) denote the space of all n × n complex matrices over the complex
field C. Let Hn,n (C) denote the subset of Mn,n (C) consisting of all Hermitian (or self-adjoint)
matrices of size n × n. Observe that if A is a Hermitian matrix over the complex field then
the diagonal entries of A are real. For example, if we take α = i ∈ C and a Hermitian matrix
A with some non-zero diagonal entry, then i A does not
belong to Hn,n (C), because its diagonal entries are not all real. Therefore, Hn,n (C) is NOT a
subspace of Mn,n (C) over the complex field C. However, it is worth noting that Hn,n (R)
is a subspace of Mn,n (R) over the real field R.
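A small numpy illustration (ours; the matrix A below is just one Hermitian example) of the observation above: multiplying by a real scalar preserves the Hermitian property, while multiplying by the complex scalar i destroys it.

import numpy as np

A = np.array([[2, 1 + 1j],
              [1 - 1j, 3]])           # Hermitian: A equals its conjugate transpose

def is_hermitian(M):
    return np.allclose(M, M.conj().T)

print(is_hermitian(A))          # True
print(is_hermitian(3.5 * A))    # True  -- real scalars keep the set closed
print(is_hermitian(1j * A))     # False -- the diagonal entries 2i, 3i are not real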
Example: Let A be an m × n real matrix, and let
W = {x ∈ Mn,1 (R) : Ax = 0} .
Then W is a subspace of Mn,1 (R) over the real field R. That is, the set of all solutions of
a system of homogeneous linear equations is a vector subspace.
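A numerical sketch (ours; the matrix A and the two particular solutions are illustrative choices) of this fact, using the one-step criterion of Theorem 2.2:

import numpy as np

A = np.array([[1., 2., -1.],
              [2., 4., -2.]])          # a 2 x 3 real matrix (rank 1)

u = np.array([1., 0., 1.])             # A u = 0
w = np.array([-2., 1., 0.])            # A w = 0
assert np.allclose(A @ u, 0) and np.allclose(A @ w, 0)

alpha = 7.3
print(np.allclose(A @ (alpha * u + w), 0))   # True: alpha*u + w is again a solution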
Example 9: Let S be a non-empty subset of R. Let
{f : S → R}
denote the space of all functions from the set S to R over the real field. It is denoted by
RS .
Let (a, b) ⊆ R. Let
C((a, b), R) = {f : (a, b) → R : f is continuous on (a, b)} .
Then C((a, b), R) is a subspace of the vector space R(a,b) of all functions from the interval
(a, b) to R.
Let
C 2 ((a, b), R) = {f : (a, b) → R : f ′′ exists and is continuous on (a, b)} .
Then C 2 ((a, b), R) (the space of twice continuously differentiable functions on (a, b)) is a sub-
space of the vector space C((a, b), R) and also a subspace of R(a,b) .
Example 10: Let
V = {f : R → R : f ′′ exists on R }
denote the vector space of twice differentiable functions over the real field. Let a and b be two
real constants. Then, the set W = {f ∈ V : f ′′ + a f ′ + b f = 0}, consisting of all solutions
of the homogeneous linear differential equation f ′′ + a f ′ + b f = 0, is a subspace of V .
Theorem 2.3. Let V be a vector space over the field F . The intersection of any collection
of subspaces of V is a subspace of V .
Proof. Let {Wα : α ∈ J} be a collection of subspaces of V . Let W = ∩α∈J Wα .
To show: W is a subspace of V .
Since 0 ∈ Wα for each α ∈ J, we have 0 ∈ ∩α∈J Wα = W . Therefore W ≠ ∅.
Let w1 , w2 be in W and let c ∈ F . To show: cw1 + w2 ∈ W .
Since each Wα is a subspace of V , the vector cw1 + w2 ∈ Wα . Thus,
cw1 + w2 ∈ Wα for each α ∈ J =⇒ cw1 + w2 ∈ ∩α∈J Wα = W .
Therefore, W is a subspace of V .
Theorem 2.4. Let V be a vector space over the field F . Let W1 and W2 be subspaces of
V . Then, the set-theoretic union W1 ∪ W2 is a subspace of V if and only if W1 ⊆ W2 or
W2 ⊆ W1 . That is, one of the subspaces is contained in the other.
Proof.
Proof of =⇒:
Given that W1 ∪ W2 is a subspace of the vector space V . To show: W1 ⊆ W2 or W2 ⊆ W1 .
Suppose, to the contrary, that neither W1 ⊆ W2 nor W2 ⊆ W1 holds. We will arrive at a contradiction.
Since W1 ⊈ W2 , there exists x ∈ W1 \ W2 . That is, x is in W1 but not in W2 .
Since W2 ⊈ W1 , there exists y ∈ W2 \ W1 . That is, y is in W2 but not in W1 .
Since W1 ∪ W2 is a subspace, we have x + y ∈ W1 ∪ W2 .
If x + y ∈ W1 , then write y = (x + y) + (−x).
Since both (x + y) and −x are elements of W1 , their vector addition (x + y) + (−x) = y
is an element of W1 , which contradicts the fact that y ∉ W1 .
If x + y ∈ W2 , then, since −y ∈ W2 , their vector addition (x + y) + (−y) = x is an element of
W2 , which contradicts the fact that x ∉ W2 . Therefore, it follows that W1 ⊆ W2
or W2 ⊆ W1 holds.
Proof of ⇐=:
Given that W1 and W2 are subspaces of the vector space V . Also given that W1 ⊆ W2 or
W2 ⊆ W1 . To show: W1 ∪ W2 is a subspace of V .
If W1 ⊆ W2 then W1 ∪ W2 = W2 which is a subspace of V .
If W2 ⊆ W1 then W1 ∪ W2 = W1 which is a subspace of V .
This completes the proof.
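A concrete illustration (added here, not from the original notes): take W1 to be the x-axis and W2 the y-axis in R2 , so that neither contains the other. The short Python check below shows that the sum of a vector from each lies in neither axis, so W1 ∪ W2 is not closed under addition.

import numpy as np

x = np.array([1.0, 0.0])      # x lies in W1 (the x-axis) but not in W2
y = np.array([0.0, 1.0])      # y lies in W2 (the y-axis) but not in W1

s = x + y                     # s = (1, 1)
in_W1 = (s[1] == 0)           # membership in the x-axis
in_W2 = (s[0] == 0)           # membership in the y-axis
print(bool(in_W1 or in_W2))   # False: x + y is outside W1 ∪ W2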
Important Note: Let V be a vector space over the field F . Let S be an infinite subset of
V . Each linear combination of every finite set of vectors of S is an element of the span of
S which is defined as the set of all (finite) linear combinations of S.
span (S) = { α1 v1 + α2 v2 + · · · + αm vm : vi ∈ S, αi ∈ F, m is a non-negative integer } .
That is, a linear combination of vectors of a (finite or infinite) set S means a linear com-
bination of a finite set of vectors of S. We never take an infinite linear combination of
vectors like α1 v1 + α2 v2 + · · · (an infinite sum) in a Linear Algebra course.
Theorem 2.5. Let V be a vector space over the field F . Let S be a non-empty subset of
V . Then, the span of S is a subspace of V .
Proof. Let u and w be any two vectors in the span of S.
Since u ∈ span (S), there exist vectors u1 , . . ., um in S and the scalars α1 , . . ., αm such
that
u = α1 u1 + α2 u2 + · · · + αm um .
Since w ∈ span (S), there exist vectors w1 , . . ., wn in S and the scalars β1 , . . ., βn such
that
w = β1 w1 + β2 w2 + · · · + βn wn .
Now,
u + w = α1 u1 + α2 u2 + · · · + αm um + β1 w1 + β2 w2 + · · · + βn wn
is a linear combination of a finite set of vectors in S and hence u + w ∈ span (S).
Let c be any scalar in F .
c u = cα1 u1 + cα2 u2 + · · · + cαm um = γ1 u1 + γ2 u2 + · · · + γm um
where γk = cαk , 1 ≤ k ≤ m. Since c u is a linear combination of a finite set of vectors in S,
it follows that cu ∈ span (S).
Therefore, the span of S is a subspace of V .
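For subsets of Rn , membership in span (S) can be tested numerically by a rank comparison: a vector b lies in the span of the vectors of S exactly when appending b does not increase the rank. The sketch below (ours; the spanning vectors and the test vector are illustrative choices) uses numpy.

import numpy as np

S = np.column_stack([[1, -1, 0], [1, 3, -1]])    # spanning vectors as columns
b = np.array([5, 3, -2])                         # vector to be tested

in_span = np.linalg.matrix_rank(np.column_stack([S, b])) == np.linalg.matrix_rank(S)
print(in_span)                                   # True: b = 3(1, -1, 0) + 2(1, 3, -1)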
Theorem 2.6. Let V be a vector space over the field F . Let S be a non-empty subset of
V . Then the subspace spanned by S is the intersection of all subspaces of V which contain
S.
Proof. Let
W = ∩ {Uα : Uα is a subspace of V that contains S} .
To show: span (S) = W .
First we observe that span (S) is a subspace that contains S and hence it is one of the Uα ’s.
Therefore, W ⊆ span (S).
Secondly note that span (S) only has linear combinations of vectors in S, so every vector in
span (S) has to be in every vector subspace Uα that contains all of S. That is, span (S) ⊆ Uα
for each α. Since W = ∩Uα is a subspace that contains S, it follows that span S ⊆ W .
Therefore span (S) = W .
Corollary 2.1. Let V be a vector space over the field F . Let S be a non-empty subset of
V . Then the subspace spanned by S is the smallest subspace of V which contains S.
Theorem 2.7. Let S and T be two subsets of a vector space V . Then
1. If S ⊂ T then span (S) ⊂ span (T ).
2. span (S ∪ T ) = span (S) + span (T ).
3. span (span (S)) = span (S).
Theorem 2.8. Let V be a vector space over the field F . Let W1 , W2 , . . ., Wk be subspaces
of V . Then, the sum of the subspaces W1 , W2 , . . ., Wk , given by
W1 + W2 + · · · + Wk := {w1 + w2 + · · · + wk : wi ∈ Wi , 1 ≤ i ≤ k} ,
is a subspace of the vector space V which contains each of the subspaces Wi . Further, it is
the subspace spanned by W1 ∪ · · · ∪ Wk .
Proof.
Let W = W1 + W2 + · · · + Wk .
Claim 1: W is a subspace of V
The zero vector 0 belongs to each Wi and hence 0 = 0 + · · · + 0 is in W . So, W is non-
empty.
If u ∈ W , w ∈ W and c ∈ F , then we have to show that cu + w ∈ W .
Since u ∈ W , u = u1 + u2 + · · · + uk where ui ∈ Wi for 1 ≤ i ≤ k.
Since w ∈ W , w = w1 + w2 + · · · + wk where wi ∈ Wi for 1 ≤ i ≤ k.
Then, we have
cu + w = (cu1 + w1 ) + (cu2 + w2 ) + · · · + (cuk + wk ) .
Since Wi is a subspace of V for each i, the element (cui + wi ) ∈ Wi for each i. Therefore,
it follows that (cu + w) ∈ W .
Claim 2: (W1 ∪ · · · ∪ Wk ) ⊆ W
We show that Wi ⊆ W for each i.
For any vector u ∈ Wi , u can be written as the sum of the zero vectors of Wj , j ≠ i, and the
vector u, as given by u = 0 + · · · + 0 + u + 0 + · · · + 0. Therefore u ∈ W and hence Wi ⊆ W
for each i. Consequently, it follows that W1 ∪ · · · ∪ Wk ⊆ W .
Definition. The sum W = W1 + W2 + · · · + Wk of the subspaces W1 , . . ., Wk is said to be
a direct sum if every vector w ∈ W can be written uniquely as w = w1 + w2 + · · · + wk
with wi ∈ Wi for each i,
OR, equivalently,
Wi ∩ (W1 + W2 + · · · + Wi−1 ) = {0} for each i = 2, . . . , k .
If the sum is direct, then we write it as W1 ⊕ W2 ⊕ · · · ⊕ Wk .
Theorem 2.9. Let V be a vector space over the field F . Let W1 , W2 , . . ., Wk be subspaces
of V . Let W = W1 + · · · + Wk . Then, the following statements are equivalent.
1. W = W1 ⊕ · · · ⊕ Wk .
Example 2: Let P3 denote the space of all real polynomials of degree at most 3. Let
W1 = {P ∈ P3 : P (x) = P (−x) for all x ∈ R} and W2 = {P ∈ P3 : P (x) =
−P (−x) for all x ∈ R}. Then P3 = W1 ⊕ W2 because P3 = W1 + W2 and W1 ∩ W2 = {0}.
Example 3: Let Mn,n (R) denote the space of all n × n real matrices. Let W1 = {A ∈
Mn,n (R) : A = AT } and W2 = {A ∈ Mn,n (R) : A = −AT }. Then, Mn,n (R) = W1 ⊕ W2
because Mn,n (R) = W1 + W2 and W1 ∩ W2 = {0}.
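A numpy sketch (ours; the matrix is generated randomly) of the decomposition behind Example 3: every real square matrix A splits as A = (A + AT )/2 + (A − AT )/2, a symmetric part in W1 plus a skew-symmetric part in W2 .

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))

S = (A + A.T) / 2             # symmetric part, lies in W1
K = (A - A.T) / 2             # skew-symmetric part, lies in W2

print(np.allclose(S, S.T))    # True
print(np.allclose(K, -K.T))   # True
print(np.allclose(S + K, A))  # True: A = S + K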
Example 4: Let M2,2 (R) denote the space of all 2 × 2 real matrices. Let
W1 = { [ a  b ; −b  a ] ∈ M2,2 (R) : a ∈ R, b ∈ R } and
W2 = { [ c  d ; d  −c ] ∈ M2,2 (R) : c ∈ R, d ∈ R } ,
where [ p  q ; r  s ] denotes the 2 × 2 matrix with rows (p, q) and (r, s).
Then, M2,2 (R) = W1 ⊕ W2 because M2,2 (R) = W1 + W2 and W1 ∩ W2 = {0}.
Definition 2.5. Let U and V be two vector spaces over a field F . On the cartesian product
U × V = {(u, v) : u ∈ U, v ∈ V }
define vector addition and scalar multiplication componentwise:
(u1 , v1 ) + (u2 , v2 ) = (u1 + u2 , v1 + v2 ) and α (u, v) = (α u, α v)
for u1 , u2 , u ∈ U , v1 , v2 , v ∈ V and α ∈ F . Then U × V is a vector space over the field F .
3 Linear Dependence and Independence
Definition. Let V be a vector space over a field F . Vectors v1 , v2 , . . ., vk in V are said to
be linearly dependent if there exist scalars c1 , c2 , . . ., ck in F , not all zero, such that
c1 v1 + c2 v2 + · · · + ck vk = 0 .
If the above equation holds only when c1 = c2 = · · · = ck = 0, then the vectors v1 , v2 , . . ., vk
are said to be linearly independent.
Alternative Way of Writing the Definition of Linear Dependence and Linear Independence:
Definition 3.3. Let V be a vector space over a field F . A subset S of V is said to be
linearly dependent if there exist distinct vectors v1 , v2 , . . ., vn in S and scalars c1 , c2 , . . .,
cn in F with at least one ci ≠ 0 such that
c1 v1 + c2 v2 + · · · + cn vn = 0 .
The following are easy consequences of the above definition of linear dependence (or
independence).
Example of linearly dependent set: Let u = (1, −1, 0), v = (1, 3, −1) and w = (5, 3, −2).
Let S = {u, v, w}.
Example of linearly independent set: Let u = (6, 2, 3, 4), v = (0, 5, −3, 1) and
w = (0, 0, 7, −2). Let S = {u, v, w}.
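Both examples can be verified numerically with a rank computation (a sketch of ours using numpy): the vectors are linearly independent exactly when the rank of the matrix having them as rows equals the number of vectors.

import numpy as np

S1 = np.array([[1, -1, 0], [1, 3, -1], [5, 3, -2]])          # first example
S2 = np.array([[6, 2, 3, 4], [0, 5, -3, 1], [0, 0, 7, -2]])  # second example

print(np.linalg.matrix_rank(S1))   # 2 < 3, so S1 is linearly dependent (w = 3u + 2v)
print(np.linalg.matrix_rank(S2))   # 3 = 3, so S2 is linearly independent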
Theorem 3.1. The non-zero vectors v1 , v2 , · · · , vm are linearly dependent if and only if
one of them, say, vi is a linear combination of the preceding vectors:
vi = β1 v1 + · · · + βi−1 vi−1 .
Proof.
Proof of =⇒:
Given that the non-zero vectors v1 , v2 , · · · , vm are linearly dependent. Then, there exist
scalars α1 , α2 , . . ., αm , not all of them zero, such that
α1 v1 + α2 v2 + · · · + αm vm = 0 .
Let i be the largest integer for which αi ≠ 0. That is, αi+1 = · · · = αm = 0. Therefore,
α1 v1 + α2 v2 + · · · + αi vi = 0 with αi ≠ 0 .
Note that i ≥ 2: if i = 1, then α1 v1 = 0 with α1 ≠ 0 would force v1 = 0, which is not the case.
Then,
vi = (−α1 /αi ) v1 + (−α2 /αi ) v2 + · · · + (−αi−1 /αi ) vi−1 .
Proof of ⇐=:
Given that vi = β1 v1 + · · · + βi−1 vi−1 .
Then β1 v1 + · · · + βi−1 vi−1 + (−1) vi = 0 is a non-trivial linear relation, since the coefficient
of vi is −1 ≠ 0.
This gives that v1 , . . ., vi are linearly dependent and hence v1 , . . ., vm are linearly dependent.
Note: The above theorem does NOT say that every vector of a linearly dependent set can be
written as a linear combination of the other vectors. It says that at least one vector of a lin-
early dependent set can be written as a linear combination of the other vectors. For example,
S = {(1, 0), (0, 1), (2, 0)} is a linearly dependent set in R2 . The vector (0, 1) cannot be
written as a linear combination of the other two vectors, whereas (2, 0) = 2(1, 0) + 0(0, 1).
4 Basis
Definition 4.1. Let V be a vector space. Let B be a subset of V . The set B is said to
be a basis for the vector space V if (i) B is linearly independent and (ii) span (B) = V .
Definition 4.2. A vector space V is said to be finite dimensional if V has a basis B which
is a finite set. That is, it has a finite basis.
Example 1: In the vector space Rn (or F n where F is a field), consider the set B =
{e1 , e2 , . . . , en } where e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . ., en = (0, 0, 0, . . . , 1).
Then this set B is a basis for Rn (or F n ). This particular basis is called the standard basis
for Rn (or F n ).
Example 2: In the vector space P(R) of all real polynomials over the real field, consider
the set B = {1, x, x2 , . . .}. Then the set B is a basis for P(R).
Example 3: In the vector space C over the real field R, the set B = {1, i} is a basis for C
over R.
Example 4: In the vector space C over the complex field C, the set B = {1} is a basis for
C over C.
In the vector space M2 (R) of all 2 × 2 real matrices over the real field, the set
B = { E11 = [ 1  0 ; 0  0 ], E12 = [ 0  1 ; 0  0 ], E21 = [ 0  0 ; 1  0 ], E22 = [ 0  0 ; 0  1 ] }
is a basis for M2 (R).
Theorem 4.1. Let V be a vector space which is spanned by a finite set of vectors v1 , v2 ,
. . ., vm . Then any linearly independent set of vectors in V is a finite set and contains no
more than m vectors. That is, every subset S of V which contains more than m vectors is
linearly dependent.
Proof. To show: If S = {u1 , u2 , · · · , un } is any set of n distinct vectors in V where n > m,
then S is linearly dependent.
To show: There exist scalars α1 , · · · , αn (not all of them 0) such that α1 u1 + · · · + αn un = 0 .
Since v1 , · · · , vm span V , there exist scalars aij such that
uj = a1j v1 + a2j v2 + · · · + amj vm for j = 1, 2, . . . , n .
Then, for any scalars α1 , · · · , αn ,
α1 u1 + · · · + αn un = (a11 α1 + · · · + a1n αn ) v1 + · · · + (am1 α1 + · · · + amn αn ) vm .
The homogeneous system ai1 α1 + · · · + ain αn = 0, i = 1, . . . , m, has m equations in n > m
unknowns, and hence it has a non-trivial solution (α1 , . . . , αn ). For such a choice of scalars,
α1 u1 + · · · + αn un = 0 with not all αj equal to 0, and therefore S is linearly dependent.
Theorem 4.2. Let S be a linearly independent subset of a vector space V and let u ∈ V be
a vector which is not in span (S). Then, S ∗ = S ∪ {u} is linearly independent.
Proof.
To show: S ∗ = S ∪ {u} is linearly independent.
Suppose v1 , v2 , . . ., vm are distinct vectors in S. Consider the linear combination
c1 v1 + c2 v2 + · · · + cm vm + cm+1 u = 0 .
Note that u ≠ 0, since u ∉ span (S) and 0 ∈ span (S).
If cm+1 ≠ 0, then u = (−c1 /cm+1 ) v1 + · · · + (−cm /cm+1 ) vm , which shows that u ∈ span (S),
a contradiction. Therefore cm+1 = 0.
Since S is a linearly independent subset of V , we have c1 = · · · = cm = 0.
Therefore, S ∗ = S ∪ {u} is linearly independent.
This completes the proof.
Theorem 4.3. Let V be a vector space spanned by a finite set of vectors S = {v1 , v2 , . . . , vm }.
Then, S contains a set B which is a basis for V .
Theorem 4.4. Let V be a finite dimensional vector space. Then, any two bases of V have
the same (finite) number of vectors.
Proof.
Since V is a finite dimensional vector space, it has a (finite) basis B = {v1 , v2 , . . . , vm }.
Recall: Let V be a vector space which is spanned by a finite set of vectors v1 , v2 , . . ., vm .
Then any linearly independent set of vectors in V is a finite set and contains no more than
m vectors.
By the above result, every basis of V is finite and contains no more than m vectors. Thus
if C = {u1 , u2 , . . . , un } is a basis for V then n ≤ m.
By the same argument, it follows that m ≤ n. Therefore m = n.
This completes the proof.
Definition 4.3. Let V be a finite dimensional vector space. Then the dimension of V is
defined to be the number of vectors in a basis for V and it is denoted by dim V .
Examples:
• The dimension of the space Pn (R) of all real polynomials of degree at most n over
the real field is n + 1.
• The empty set spans the zero vector space. Therefore, the dimension of {0} is zero.
Definition 4.4. A vector space V is called infinite dimensional if it is NOT finite dimen-
sional. That is, it does not have a finite set as a basis.
Example: The space P(R) of all real polynomials over the real field has a basis as
B = {1, x, x2 , x3 , . . .} which is an infinite set. Observe that no finite set is a basis for
P(R). Therefore, P(R) is an infinite dimensional vector space.
Theorem 4.5. Let V be a finite dimensional vector space and let dim V = n. Then
1. any subset of V which contains more than n vectors is linearly dependent;
2. no subset of V which contains fewer than n vectors can span V .
Proof.
Proof of (1):
Let S = {u1 , u2 , . . . , um } be a subset of V such that m > n.
To show: S is linearly dependent
Recall: Let V be a vector space which is spanned by a finite set of vectors v1 , v2 , . . ., vn .
Then any linearly independent set of vectors in V is a finite set and contains no more than
n vectors. That is, every subset S of V which contains more than n vectors is linearly
dependent.
Using the above mentioned result, it follows that S is linearly dependent.
Proof of (2):
Theorem 4.6. In a finite dimensional vector space V , every non-empty linearly indepen-
dent set of vectors is part of a basis.
Proof. Let the dimension of the vector space be n.
Let S0 be a non-empty linearly independent set of vectors of V .
If S0 spans V then S0 is a basis for V and we are done.
If S0 does not span V , then we find a vector v1 ∈ V but not in the span of S0 and define
a set S1 = S0 ∪ {v1 }. If S1 spans V then S1 is a basis for V and we are done.
If S1 does not span V , then we find a vector v2 ∈ V but not in the span of S1 and define
a set S2 = S1 ∪ {v2 }.
If we continue in this way, then (in not more than n steps) we reach a set
Sm = Sm−1 ∪ {vm } = S0 ∪ {v1 , v2 , . . . , vm }
which is a basis for V .
Example: Let V = R3 and S = {u1 = (0, 1, −1), u2 = (1, 0, −1)}. Note that S is a
linearly independent set in V = R3 . It can be extended as B = S ∪ {u3 = (0, 0, 1)} so
that B is a basis for V .
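A one-line numerical confirmation (ours, using numpy) that the extended set B is a basis: the matrix with u1 , u2 , u3 as rows has rank 3.

import numpy as np

B = np.array([[0, 1, -1],
              [1, 0, -1],
              [0, 0, 1]])
print(np.linalg.matrix_rank(B))    # 3, so the three vectors form a basis of R^3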
Proof (that every vector has a unique coordinate representation with respect to a basis
B = {v1 , v2 , . . . , vn } of V ).
Since v ∈ V and span (B) = V , there exist scalars α1 , α2 , · · · , αn such that
v = α1 v1 + α2 v2 + · · · + αn vn .
Suppose that v = β1 v1 + β2 v2 + · · · + βn vn is another such representation. Then,
0 = v − v = (α1 v1 + · · · + αn vn ) − (β1 v1 + · · · + βn vn ) = (α1 − β1 ) v1 + · · · + (αn − βn ) vn .
Since B is linearly independent, this forces
αj − βj = 0 for each j = 1, · · · , n .
Therefore,
αj = βj for each j = 1, · · · , n .
Theorem 4.9. Let A be an n × n square matrix (whose entries are real numbers). If the
row vectors A1 , · · · , An of A form a linearly independent set in Rn then A is invertible.
Proof. Let W denote the subspace spanned by the row vectors A1 , · · · , An . Since A1 , · · · ,
An are linearly independent, the dimension of W is n. Since W ⊆ Rn and dim W = n = dim Rn ,
we get W = Rn . Let ei = (0, 0, · · · , 1, · · · , 0) (with 1 in the i-th position) for 1 ≤ i ≤ n.
Since each ei ∈ Rn = W , there exist scalars bij such that
ei = bi1 A1 + bi2 A2 + · · · + bin An , 1 ≤ i ≤ n .
Writing B = (bij ), these equations say that BA = I, the identity matrix. Hence A is
invertible.
Important Theorem:
Theorem 4.11. If W1 and W2 are finite dimensional subspaces of a vector space V , then
the subspace W1 + W2 is finite dimensional and
dim (W1 + W2 ) = dim (W1 ) + dim (W2 ) − dim (W1 ∩ W2 ) .
Proof. Recall the Result: If W is a subspace of a finite dimensional vector space V , every
linearly independent set of W is finite and is part of a (finite) basis for W .
Since W1 ∩ W2 is a subspace of a finite dimensional vector space W1 (and W2 ), W1 ∩ W2
has a finite basis {u1 , . . . , uk } (say). Then it is part of a basis of W1 , say
{u1 , . . . , uk , v1 , . . . , vm } ,
and it is part of a basis of W2 , say
{u1 , . . . , uk , x1 , . . . , xn } .
Claim: the set {u1 , . . . , uk , v1 , . . . , vm , x1 , . . . , xn } is a basis of W1 + W2 .
Clearly this set spans W1 + W2 . Once the claim is proved, the dimension formula follows, since
dim (W1 + W2 ) = k + m + n = (k + m) + (k + n) − k = dim (W1 ) + dim (W2 ) − dim (W1 ∩ W2 ) .
To prove that the set is linearly independent, suppose that
α1 u1 + · · · + αk uk + β1 v1 + · · · + βm vm + γ1 x1 + · · · + γn xn = 0 . (1)
Then
− (α1 u1 + · · · + αk uk + β1 v1 + · · · + βm vm ) = (γ1 x1 + · · · + γn xn ) ,
which shows that γ1 x1 + · · · + γn xn belongs to W1 .
As γ1 x1 + · · · + γn xn ∈ W2 also, it belongs to W1 ∩ W2 , and it follows that
γ1 x1 + · · · + γn xn = c1 u1 + · · · + ck uk
for some scalars c1 , . . . , ck . Since the set {u1 , . . . , uk , x1 , . . . , xn } is linearly independent,
γi = 0 for each i = 1, 2, . . . , n .
Substituting γi = 0 in (1) gives α1 u1 + · · · + αk uk + β1 v1 + · · · + βm vm = 0, and the linear
independence of {u1 , . . . , uk , v1 , . . . , vm } gives that all αj and βj are zero as well. Hence
the set {u1 , . . . , uk , v1 , . . . , vm , x1 , . . . , xn } is linearly independent, which proves the claim
and the dimension formula.
Example: Let V be a finite dimensional vector space with dimension 5 and let W1 and W2
be distinct (in the sense that neither one is a subset of the other) four dimensional
subspaces of V . Find the dimension of W1 ∩ W2 .
Since W1 and W2 are distinct, the subspace W1 + W2 , which contains the set W1 ∪ W2 ,
properly contains W1 and hence has dimension > 4. Since W1 + W2 ⊆ V , the dimension of
W1 + W2 is 5 = dim V .
By the above result, it follows that
dim (W1 ∩ W2 ) = dim (W1 ) + dim (W2 ) − dim (W1 + W2 ) = 4 + 4 − 5 = 3 .
Further observe that since dim (W1 ∩ W2 ) = 3 > 0, we have W1 ∩ W2 ≠ {0} and hence V
is NOT the direct sum of W1 and W2 . That is, V ≠ W1 ⊕ W2 .
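A numerical companion (ours) to this example: two "generic" 4-dimensional subspaces of R5 are produced from random basis vectors, the dimensions are computed as numpy matrix ranks, and the formula of Theorem 4.11 then gives dim (W1 ∩ W2 ) = 3.

import numpy as np

rng = np.random.default_rng(2)
B1 = rng.normal(size=(5, 4))       # columns span W1 (rank 4 with probability 1)
B2 = rng.normal(size=(5, 4))       # columns span W2

dim_W1 = np.linalg.matrix_rank(B1)                      # 4
dim_W2 = np.linalg.matrix_rank(B2)                      # 4
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))    # dim(W1 + W2) = 5

dim_int = dim_W1 + dim_W2 - dim_sum                     # dimension formula
print(dim_W1, dim_W2, dim_sum, dim_int)                 # 4 4 5 3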
Note: “Finite sequence” means a finite set whose elements are numbered as: 1st element,
2nd element, · · · , n-th element. That is, the elements are arranged in a particular/specific
order.
For example, let V be a finite dimensional vector space with dim (V ) = n. Suppose that
B is a basis for V . An ordered basis of V is a basis B = {v1 , v2 , . . . , vn } with a specific
order of its elements. The first element of B is v1 , the second element of B is v2 , etc.
Examples:
Let V = R2 and F = R.
The set S = {(1, 0), (0, 1)} forms a basis. Now the set S can be arranged in a specific order
with its elements numbered, so that B1 = {e1 = (1, 0), e2 = (0, 1)} forms an ordered basis for
R2 . The set B2 = {u1 = (0, 1), u2 = (1, 0)} forms another ordered basis for R2 . Note
that these two bases of R2 are not to be taken as the same. That is, B1 ≠ B2 as ordered
bases.
The set B3 = {v1 = (1, 1), v2 = (1, −1)} is also an ordered basis for R2 .
The set B4 = {v1 = (1, 2), v2 = (4, 3)} is also an ordered basis for R2 .
Definition 5.2. Let V be a finite dimensional vector space and let dim (V ) = n.
Let B = {u1 , u2 , · · · , un } be an ordered basis for V .
Every vector v ∈ V can be written uniquely as v = c1 u1 + c2 u2 + · · · + cn un . The scalars
c1 , c2 , . . ., cn are called the coordinates of v with respect to the ordered basis B, and the
column vector [v]B = (c1 , c2 , · · · , cn )T is called the coordinate vector of v with respect to B.
Example 1:
Let V = R2 . Let B = {u1 = (1, 1), u2 = (1, −1)} be an ordered basis for R2 .
Find the coordinates of the vector v = (4, 2) with respect to the ordered basis B.
v = c1 u1 + c2 u2 =⇒ (4, 2) = c1 (1, 1) + c2 (1, −1)
=⇒ c1 + c2 = 4 and c1 − c2 = 2 .
Solving it, we get
c1 = 3 and c2 = 1 .
Therefore,
[v]B = (3, 1)T .
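Computationally (a sketch of ours using numpy), the coordinates are obtained by solving the linear system whose coefficient matrix has the basis vectors of B as columns:

import numpy as np

U = np.array([[1.,  1.],
              [1., -1.]])     # columns are u1 = (1, 1) and u2 = (1, -1)
v = np.array([4., 2.])

c = np.linalg.solve(U, v)     # solve U c = v for the coordinates
print(c)                      # [3. 1.], i.e. [v]_B = (3, 1)^T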
Example 2:
Let V = R2 . Let B ∗ = {u∗1 = (1, 2), u∗2 = (4, 3)} be an ordered basis for R2 .
Find the coordinates of the vector v = (4, 2) with respect to the ordered basis B ∗ .
[v]B∗ = (−4/5, 6/5)T .
Question: How do the coordinates change when we change from one ordered basis to another?
For a given vector v ∈ V , how do we get the new coordinates of v while changing from one
ordered basis to another ordered basis for V ? Is there any simple mechanism to do it?
Let B = {u1 , u2 , · · · , un } and B ∗ = {u∗1 , u∗2 , · · · , u∗n } be two ordered bases for V .
For each i, express ui in terms of the vectors of B ∗ : there exist unique scalars a1i , a2i , · · · , ani
such that ui = a1i u∗1 + a2i u∗2 + · · · + ani u∗n , that is, [ui ]B∗ = (a1i , a2i , · · · , ani )T .
Form a matrix A as
A = (aij ), the n × n matrix whose (i, j)-th entry is aij and whose j-th column is [uj ]B∗ .
Let v ∈ V . Let [v]B = (α1 , · · · , αn )T and [v]B∗ = (β1 , · · · , βn )T , so that
v = β1 u∗1 + β2 u∗2 + · · · + βn u∗n . On the other hand,
v = α1 u1 + · · · + αn un
  = α1 (a11 u∗1 + a21 u∗2 + · · · + an1 u∗n ) + · · · + αn (a1n u∗1 + a2n u∗2 + · · · + ann u∗n )
  = (a11 α1 + a12 α2 + · · · + a1n αn ) u∗1 + · · · + (an1 α1 + an2 α2 + · · · + ann αn ) u∗n .
Comparing the two expressions for v (and using the uniqueness of coordinates with respect
to B ∗ ), it gives that
βj = aj1 α1 + aj2 α2 + · · · + ajn αn for 1 ≤ j ≤ n .
That is,
β = Aα .
Since B and B ∗ are linearly independent sets, α = 0 if and only if β = 0; in particular,
Aα = 0 only when α = 0, which shows that A is invertible. Therefore,
α = A−1 β .
[v]B∗ = A [v]B
[v]B = A−1 [v]B∗
Thus
A = [ [u1 ]B∗  [u2 ]B∗  · · ·  [un ]B∗ ] = AB∗ ←B .
Procedure (as illustrated in the worked example below):
Step 1: Express each basis vector ui of B in terms of the basis vectors of B ∗ , that is,
find [ui ]B∗ for each i = 1, 2, . . . , n.
Step 2: Form the change of basis matrix AB∗ ←B from the ordered basis B to the ordered basis
B ∗ as
AB∗ ←B = (aij ), the n × n matrix whose j-th column is [uj ]B∗ = (a1j , a2j , · · · , anj )T .
Step 3: Computing the coordinates of the vector v ∈ V with respect to the ordered basis B ∗
Let v ∈ V be given, with coordinate vector α = (α1 , . . . , αn )T with respect to the ordered
basis B. That is,
v = α1 u1 + α2 u2 + · · · + αn un .
We want to compute the coordinate vector β = (β1 , . . . , βn )T of v with respect to the ordered
basis B ∗ . By the relation derived above,
β = AB∗ ←B α .
That is, [v]B∗ = AB∗ ←B [v]B .
Example:
Let V = R2 . Let B = {u1 = (1, 1), u2 = (1, −1)} and B ∗ = {u∗1 = (1, 2), u∗2 = (4, 3)}
be two ordered bases for R2 .
Find the coordinates with respect to the basis B ∗ of the vector v whose coordinate vector
with respect to B is [v]B = (3, 1)T .
Step 1: Expressing each basis vector of B in terms of basis vectors of B ∗
[u1 ]B∗ = (1/5, 1/5)T and [u2 ]B∗ = (−7/5, 3/5)T .
Step 2: Forming the Change of Basis Matrix A B∗ ←B from B to B ∗
The change of basis matrix A B∗ ←B from the ordered basis B to the ordered basis B ∗ is
given by
AB∗ ←B = [ [u1 ]B∗  [u2 ]B∗ ] = [ 1/5  −7/5 ; 1/5  3/5 ] .
Step 3: Computing the coordinates of the vector v ∈ V with respect to the ordered basis B ∗
By the previous result,
[v]B∗ = AB∗ ←B [v]B = [ 1/5  −7/5 ; 1/5  3/5 ] (3, 1)T = (−4/5, 6/5)T .
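The three steps can be reproduced with numpy (a sketch of ours): solving U∗ X = U column by column gives the change of basis matrix, which is then applied to [v]B .

import numpy as np

U      = np.column_stack([[1, 1], [1, -1]])   # basis B as columns
U_star = np.column_stack([[1, 2], [4, 3]])    # basis B* as columns

A = np.linalg.solve(U_star, U)                # column j of A is [u_j]_{B*}
print(A)                                      # [[ 0.2 -1.4]
                                              #  [ 0.2  0.6]], i.e. [[1/5, -7/5], [1/5, 3/5]]

v_B = np.array([3.0, 1.0])
print(A @ v_B)                                # [-0.8  1.2], i.e. (-4/5, 6/5)^T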
Similarly, writing AB←B∗ for the change of basis matrix from the ordered basis B ∗ to the
ordered basis B, we have
[v]B = AB←B∗ [v]B∗
[v]B∗ = (AB←B∗ )−1 [v]B ,
so that AB←B∗ = (AB∗ ←B )−1 .
Procedure for computing a change of basis matrix by row reduction (illustrated for AB∗ ←B ):
Let V be an n-dimensional vector space and let B = {u1 , u2 , · · · , un } and B ∗ =
{u∗1 , u∗2 , · · · , u∗n } be two ordered bases for V .
Let B = {u1 = (1, 1), u2 = (1, −1)} and B ∗ = {u∗1 = (1, 2), u∗2 = (4, 3)} be two ordered
bases for the vector space R2 .
Find the change of basis matrix AB∗ ←B from the ordered basis B to the ordered basis B ∗ .
Step 1: Form the matrix whose first two columns are the vectors of B ∗ and whose last two
columns are the vectors of B (a vertical bar separates the two blocks):
A = [ u∗1  u∗2 | u1  u2 ] = [ 1  4 | 1  1 ; 2  3 | 1  −1 ] .
Step 2: Reduce A to reduced row echelon form R by performing elementary row operations
A = [ 1  4 | 1  1 ; 2  3 | 1  −1 ]
  → [ 1  4 | 1  1 ; 0  −5 | −1  −3 ]
  → [ 1  4 | 1  1 ; 0  1 | 1/5  3/5 ]
  → [ 1  0 | 1/5  −7/5 ; 0  1 | 1/5  3/5 ] = R .
Step 3: Take/ extract the matrix AB∗ ←B from the matrix R
AB∗ ←B = [ 1/5  −7/5 ; 1/5  3/5 ] .