
Vector Spaces

By now, you must have studied algebraic structures consisting of one set and one or two binary compositions only.

For example:

1. one set and one binary composition, as in a Group, and

2. one set and two binary compositions, as in a Ring, Integral Domain, Field, etc.

Now here we shall study a structure consisting of two sets (a field and an abelian group) and a composition combining an element of the field with an element of the abelian group. In fact, this is the most fundamental and basic structure in which the concept of distance, and hence the concept of limits etc., can be introduced, leading to the study of analysis in a much wider perspective. Thus it serves as a link between algebraic and topological structures.

Some Definitions 

➤ Ring
Definition: A non-empty set R with two binary operations + and • is called a ring if

1. R is an abelian group under addition,

2. R is a semi-group under multiplication,

3. the distributive laws hold:
a • (b + c) = a • b + a • c
(a + b) • c = a • c + b • c
Example 
(i) (Z, +, •) is a ring, where Z = {0, ±1, ±2, ...}
(ii) (Q, +, •), where Q is the set of rational numbers.
(iii) (R, +, •), where R is the set of real numbers.
(iv) (Zn, +, •), where Zn is the set of residue classes modulo n.

➤ Field
Definition: A non-empty set F is called a field if
1. F is an abelian group under addition,
2. F - {0} is an abelian group under multiplication,
3. the right distributive law holds in F,
i.e. for a, b, c ∈ F,
(a + b)c = ac + bc (the left law then follows since multiplication is commutative).
Example 
(i) (R, +, •) is a field
(ii) (C, +, •) is a field
(iii) (Q, +, •) is a field
(iv) (Z, +, •) is not a field, as (Z - {0}, •) is not a group under multiplication.

1. Vectors: Let V be a non-empty set whose elements α, β, γ, ... or v1, v2, v3, ... etc. will be called vectors.
2. Scalars: Let (F, +, •) be a field whose elements a, b, c, ... will be called scalars.
3. Internal Binary Composition (Vector Addition): A map + : V × V → V is called an internal composition in V and is denoted by '+', i.e., for every α ∈ V, β ∈ V, α + β ∈ V.
Example: In the set Z of integers, addition and multiplication are internal binary compositions.
4. External Binary Composition (Scalar Multiplication): A map ∘ : F × V → V is called an external composition in V over F and is denoted by '∘' or '•',
i.e., ∀ a ∈ F and ∀ α ∈ V, a ∘ α ∈ V.

Example: In the set of matrices, scalar multiplication of matrices is an external binary operation over the field ℝ of real numbers.
The following defines the notion of a vector space V where K is the
field of scalars.
Definition: Let V be a nonempty set with two operations:
(i) Vector Addition: This assigns to any u, v ∈ V a sum u + v in V.
(ii) Scalar Multiplication: This assigns to any u ∈ V, k ∈ K a
product ku ∈ V.
Then V is called a vector space (over the field K) if the following
axioms hold for any vectors u, v, w ∈ V:
[A1] (u + v) + w = u + (v + w)
[A2] There is a vector in V, denoted by 0 and called the zero vector,
such that, for any u ∈ V, u + 0 = 0 + u = u
[A3] For each u ∈ V, there is a vector in V, denoted by - u and
called the negative of u, such that u + ( - u) = ( - u) + u = 0
[A4] u + v = v + u
[M1] k(u + v) = ku + kv, for any scalar k ∈ K.
[M2] (a + b)u = au + bu, for any scalar a, b ∈ K.
[M3] (ab)u = a(bu), for any scalar a, b ∈ K.
[M4] 1 u = u, for the unit scalar 1 ∈ K.
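These axioms are mechanical to check for a concrete example. Below is a minimal sketch (not part of the original text; the helper names add and smul are ours) verifying all eight axioms numerically for ℝ² with the usual operations; the sample values are chosen so the floating-point comparisons are exact.

```python
# Minimal sketch: checking the vector-space axioms for R^2 with the
# usual componentwise operations. Helper names are illustrative only.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(k, u):
    return (k * u[0], k * u[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
a, b = 2.0, -3.0
zero = (0.0, 0.0)
neg_u = smul(-1.0, u)

assert add(add(u, v), w) == add(u, add(v, w))             # [A1] associativity
assert add(u, zero) == u                                  # [A2] zero vector
assert add(u, neg_u) == zero                              # [A3] negatives
assert add(u, v) == add(v, u)                             # [A4] commutativity
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # [M1]
assert smul(a + b, u) == add(smul(a, u), smul(b, u))      # [M2]
assert smul(a * b, u) == smul(a, smul(b, u))              # [M3]
assert smul(1.0, u) == u                                  # [M4]
```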
Notation: Generally, the vector space V over the field F is
expressed by V(F). When the field F itself is clear, then it is
expressed simply by V only.
Throughout the text, V(F) will mean a vector space V of vectors over a field F of scalars. If F = ℝ (the field of real numbers), then V(ℝ) is called a Real Vector Space. Similarly V(ℂ) is called the Complex Vector Space and V(ℚ) is called the Rational Vector Space. The set of three dimensional vectors of geometry is called the Three Dimensional Vector Space and is written as V3(F) or simply V3.

Important Remarks:

1. Here by vector we do not mean the vector quantity which we have defined in vector algebra as a directed line segment. Here the words vector and scalar will be used in a more general sense.
2. Four operations have been used in the vector space V(F): two internal operations (+) and (•) in the field F, a third internal operation (+) in the set V, and a fourth external operation (•) in V over F.
Though of these four operations two are expressed by the symbol (+) and the remaining two by the symbol (•), it should be clear from the context which symbol (+) or (•) is being used for which operation.
For example, if a, b ∈ F, then a + b ∈ F expresses the addition in the field F, which is often called scalar addition, and if α, β ∈ V then α + β ∈ V expresses the addition in the set V, which is generally called vector addition. Similarly a • b ∈ F expresses the multiplication in the field F, and a • α ∈ V expresses the external operation of scalar multiplication of the vector space.
3. Two types of zero elements occur in every vector space. The zero element, i.e. the additive identity of V, is indicated by the bold face symbol '0' and is called the zero vector, while the zero element of the field F is generally indicated by '0' and is called the zero scalar. In later text we shall indicate both of these by the same symbol '0', and the context will make it clear which zero we intend.
4. The vector space which contains only the zero element is called the Null space or Trivial space and is written as {0}.

Examples of Vector space


Example 1: Every field is a vector space over its subfield.

Let F be a field and H be its subfield. Then F is an abelian group for the usual addition.

Also a ∈ H, x ∈ F ⇒ a ∈ F, x ∈ F ⇒ ax ∈ F.
Hence the usual multiplication in F can be taken as the external composition.
Now we verify the various postulates of a vector space in F(H):
V1. (F, +) is an abelian group.

V2. For every a ∈ H and α, β ∈ F,

a(α + β) = aα + aβ       [by distributivity in F]

V3. For every a, b ∈ H and α ∈ F,

(a + b)α = aα + bα      [by distributivity in F]

V4. For every a, b ∈ H and α ∈ F,

(ab)α = a(bα)      [by multiplicative associativity]

V5. If 1 is the unity element of F, then

1α = α, ∀ α ∈ F    [∵ 1 is the multiplicative identity in F]

Therefore F(H) is a vector space.

Special Case: Since every field is also a subfield of itself, F(F) is a vector space. Consequently, ℂ(ℂ), ℝ(ℝ), ℚ(ℚ) are also vector spaces for the ordinary addition and multiplication.
Remark: ℂ(ℝ) and ℝ(ℚ) are vector spaces, but ℝ(ℂ) and ℚ(ℝ) are not vector spaces, because they are not closed for scalar multiplication.

Example 2: The set Fn of all ordered n-tuples of a field F is a


vector space over the field F. 

Let Fn = {(a1, a2, ..., an) | a1, a2, ..., an ∈ F}

and α = (a1, a2, ..., an), β = (b1, b2, ..., bn) and a ∈ F.
We define the addition of n-tuples as

α + β = (a1 + b1, a2 + b2, ..., an + bn)    ...(1)

and scalar multiplication of n-tuples as
aα = a(a1, a2, ..., an) = (aa1, aa2, ..., aan)    ...(2)
Clearly, α, β ∈ Fn ⇒ α + β ∈ Fn, and a ∈ F, α ∈ Fn ⇒ aα ∈ Fn.

Therefore Fn is closed for the above defined addition and scalar multiplication operations.

Verification of Space axioms: 


V1. It can easily be verified that Fn is an abelian group for the addition defined above, with 0 = (0, 0, ..., 0) ∈ Fn being the additive identity and -α = (-a1, -a2, ..., -an) ∈ Fn being the additive inverse of α = (a1, a2, ..., an) ∈ Fn.
V2. Scalar multiplication is distributive over vector addition:
Let a ∈ F and α, β ∈ Fn; then
a(α + β) = a{(a1, a2, ..., an) + (b1, b2, ..., bn)}
= a(a1 + b1, a2 + b2, ..., an + bn)    [by (1)]
= {a(a1 + b1), a(a2 + b2), ..., a(an + bn)}    [by (2)]
= (aa1 + ab1, aa2 + ab2, ..., aan + abn)    [by distributivity in F]
= (aa1, aa2, ..., aan) + (ab1, ab2, ..., abn)
= a(a1, a2, ..., an) + a(b1, b2, ..., bn)
= aα + aβ
Therefore scalar multiplication is distributive over vector addition in Fn.
V3. Scalar addition is distributive over scalar multiplication:
Let a, b ∈ F and α ∈ Fn; then
(a + b)α = (a + b)(a1, a2, ..., an)
= {(a + b)a1, (a + b)a2, ..., (a + b)an}    [by (2)]
= (aa1 + ba1, aa2 + ba2, ..., aan + ban)    [by distributivity in F]
= (aa1, aa2, ..., aan) + (ba1, ba2, ..., ban)
= a(a1, a2, ..., an) + b(a1, a2, ..., an)    [by (2)]
= aα + bα
Therefore scalar addition is distributive over scalar multiplication in Fn.
V4. Associativity for Scalar Multiplication:

Let a, b ∈ F and α ∈ Fn; then

(ab)α = (ab)(a1, a2, ..., an)

= ((ab)a1, (ab)a2, ..., (ab)an)    [by (2)]

= (a(ba1), a(ba2), ..., a(ban))    [by associativity in F]

= a(ba1, ba2, ..., ban)    [by (2)]

= a{b(a1, a2, ..., an)}    [by (2)]

= a(bα)
Therefore scalar multiplication is associative in Fn.
V5. Let 1 be the unity element of the field F; then for α ∈ Fn,

1α = 1(a1, a2, ..., an) = (1a1, 1a2, ..., 1an)    [by (2)]

= (a1, a2, ..., an) = α

From the above discussion, it is clear that Fn satisfies all the axioms for a vector space, therefore Fn(F) is a vector space.

Remarks: 

1. The above vector space is also expressed by Vn(F).

2. If F = ℝ (the field of real numbers), then the corresponding vector space ℝn is called the Real Euclidean n-space. Similarly, ℂn is called the Complex Euclidean n-space.
3. The set of all ordered pairs on F, i.e. F2 = {(a1, a2) | a1, a2 ∈ F}, is a vector space. Similarly the set of all ordered triads on F, i.e. F3 = {(a1, a2, a3) | a1, a2, a3 ∈ F}, is a vector space. These vector spaces are denoted by V2(F) and V3(F) respectively.
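As a quick computational illustration (a sketch of ours, not from the text), the operations (1) and (2) for F = ℝ and n = 3 can be written directly as componentwise tuple operations:

```python
# Sketch: the F^n operations (1) and (2) for F = R, n = 3,
# written as plain-Python helpers (names are ours).
def vec_add(alpha, beta):
    return tuple(a + b for a, b in zip(alpha, beta))

def vec_scale(a, alpha):
    return tuple(a * x for x in alpha)

alpha, beta = (1, 2, 3), (4, 5, 6)
print(vec_add(alpha, beta))   # (5, 7, 9)  -- componentwise addition (1)
print(vec_scale(2, alpha))    # (2, 4, 6)  -- scalar multiplication (2)
```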

Example 3: The set M of all m × n real matrices (having real numbers as their elements) is a vector space over the field ℝ of real numbers with respect to addition of matrices and scalar multiplication of matrices.

Let M = {[aij]m×n | aij ∈ ℝ}. If A, B ∈ M,
where A = [aij]m×n, B = [bij]m×n and a ∈ ℝ, then matrix addition and scalar multiplication in M are defined as follows:

A + B = [aij] + [bij] = [aij + bij]m×n    ...(1)

and aA = a[aij] = [aaij]m×n    ...(2)

Verification of space axioms:


V1. (M, +) is an additive abelian group:
In group theory, we have already seen that (M, +) is a commutative group where the null matrix Om×n is the zero element and -A = [-aij]m×n ∈ M is the additive inverse of any matrix A = [aij] ∈ M. From the theory of matrices, the following can easily be verified:
V2. Scalar multiplication is distributive over vector addition.

V3. Scalar addition is distributive over scalar multiplication.

V4. Scalar multiplication is associative.

V5. 1 is the unity element in ℝ, therefore for every A ∈ M,

1A = 1[aij] = [1aij] = [aij] = A
From the above discussion, it is clear that M(ℝ) is a vector space.
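A short sketch (assuming NumPy is available; not part of the original text) showing that the entrywise operations (1) and (2) satisfy the space axioms V2-V5 for concrete 2 × 3 real matrices:

```python
# Sketch: matrix addition and scalar multiplication are entrywise,
# so the space axioms reduce to arithmetic in R, entry by entry.
import numpy as np

A = np.array([[1., 2., 0.], [3., -1., 4.]])   # a 2 x 3 real matrix
B = np.array([[0., 1., 1.], [2., 2., -3.]])
a, b = 2.0, -1.0

assert np.allclose(a * (A + B), a * A + a * B)   # V2
assert np.allclose((a + b) * A, a * A + b * A)   # V3
assert np.allclose((a * b) * A, a * (b * A))     # V4
assert np.allclose(1.0 * A, A)                   # V5
```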

Example 4: The set V of all real valued continuous (or differentiable, or integrable) functions defined on the closed interval [0, 1] is a vector space over the field ℝ of real numbers, with addition and scalar multiplication of functions defined by:
(f + g)(x) = f(x) + g(x) ∀ f, g ∈ V; x ∈ [0,1]    ...(1)
(af)(x) = af(x) ∀ f ∈ V, a ∈ ℝ; x ∈ [0,1]   ...(2)

Let V = {f | f : [0,1] → ℝ is a continuous function}.

From calculus, we know that the sum of two continuous functions is also a continuous function, i.e. f ∈ V, g ∈ V ⇒ (f + g) ∈ V
⇒ V is closed for the addition of functions defined above. Also we know that a ∈ ℝ, f ∈ V ⇒ af ∈ V ⇒ V is closed for the scalar multiplication of functions.
Verification of the space axioms in V(ℝ):
V1. In group theory, we have already seen that (V, +) is an additive abelian group.

V2. Scalar multiplication is distributive over vector addition:


Let f, g ∈ V and a ∈ ℝ; then
[a(f + g)](x) = a[(f + g)(x)]     [by (2)]
= a[f(x) + g(x)]    [by (1)]
= af(x) + ag(x)    [by distributivity in ℝ]
= (af)(x) + (ag)(x)    [by (2)]
= (af + ag)(x) ∀ x ∈ [0,1]    [by (1)]
⇒ a(f + g) = af + ag, ∀ f, g ∈ V and ∀ a ∈ ℝ

V3. Scalar addition is distributive over Scalar Multiplication.


Let f ∈ V and a, b ∈ ℝ; then
[(a + b)f](x) = (a + b)f(x)      [by (2)]
= af(x) + bf(x)    [by distributivity in ℝ]
= (af)(x) + (bf)(x)    [by (2)]
= (af + bf)(x) ∀ x ∈ [0,1]    [by (1)]
⇒ (a + b)f = af + bf
V4. Scalar multiplication is associative:
Let a, b ∈ ℝ and f ∈ V; then
[(ab)f](x) = (ab)f(x)      [by (2)]
= a[bf(x)]    [by associativity in ℝ]
= a[(bf)(x)]       [by (2)]
= [a(bf)](x) ∀ x ∈ [0,1]     [by (2)]

Therefore scalar multiplication is associative in V.


V5. If 1 is the unit real number and f ∈ V, then
(1f)(x) = 1f(x)     [by (2)]
= f(x)      [∵ 1 is the multiplicative identity in ℝ]
⇒ 1f = f        ∀ f ∈ V
From the above discussion, it is clear that V(ℝ) satisfies all the axioms for a vector space, therefore V(ℝ) is a vector space.
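The pointwise operations (1) and (2) are easy to model with Python callables. A minimal sketch of ours (the helper names f_add and f_scale are illustrative, not from the text):

```python
# Sketch: pointwise operations (1) and (2) on functions [0,1] -> R,
# represented as Python callables.
import math

def f_add(f, g):
    return lambda x: f(x) + g(x)      # (f + g)(x) = f(x) + g(x)

def f_scale(a, f):
    return lambda x: a * f(x)         # (af)(x) = a f(x)

f = math.sin
g = lambda x: x ** 2
h = f_add(f_scale(2.0, f), g)         # h = 2f + g, again a function
print(h(0.5))                         # 2*sin(0.5) + 0.25
```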

Example 5: The set P(x) of all polynomials in one variable x


over a field F is vector space over F with respect to addition of
polynomials and multiplication of polynomials by an element
of F. 

The addition of polynomials and scalar multiplication in P(x) are defined as follows:
if p(x) = ∑ anxⁿ ∈ P(x) and q(x) = ∑ bnxⁿ ∈ P(x), then
p(x) + q(x) = ∑ (an + bn)xⁿ     ...(1)
and for a ∈ F, ap(x) = ∑ (aan)xⁿ     ...(2)
Then p(x) + q(x) = ∑ (an + bn)xⁿ ∈ P(x)     [∵ an ∈ F, bn ∈ F ⇒ (an + bn) ∈ F]
Again if a ∈ F,
then ap(x) = ∑ (aan)xⁿ ∈ P(x)    [∵ a ∈ F, an ∈ F ⇒ aan ∈ F]
Therefore P(x) is closed for the above defined addition and scalar multiplication operations.
Verification of space axioms in P(x):
V1. In the Ring theory, it has already been shown that (P(x), +) is
an additive abelian group.
V2. Scalar multiplication is distributive over vector addition:
Let a ∈ F and p(x), q(x) ∈ P(x); then
a[p(x) + q(x)] = a ∑ (an + bn)xⁿ = ∑ (aan + abn)xⁿ = ap(x) + aq(x)    [by (1) and (2)]

V3. Scalar addition is distributive over scalar multiplication:

Let a, b ∈ F and p(x) ∈ P(x); then
(a + b)p(x) = ∑ (a + b)anxⁿ = ∑ (aan + ban)xⁿ = ap(x) + bp(x)

V4. Associativity for scalar multiplication:

Let a, b ∈ F and p(x) ∈ P(x); then
(ab)p(x) = ∑ (ab)anxⁿ = ∑ a(ban)xⁿ = a[bp(x)]

V5. If 1 is the unity element of the field F and p(x) ∈ P(x), then
1p(x) = ∑ 1·anxⁿ = ∑ anxⁿ = p(x)
From the above discussion, it is clear that P(x) satisfies all the
axioms for the vector space, therefore P(x) is a vector space.
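Representing a polynomial by its list of coefficients [a0, a1, a2, ...] makes the operations (1) and (2) one-liners. A minimal sketch of ours (not part of the text):

```python
# Sketch: polynomials over R as coefficient lists [a0, a1, a2, ...].
# zip_longest pads the shorter list with zeros, so degrees may differ.
from itertools import zip_longest

def poly_add(p, q):
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(a, p):
    return [a * c for c in p]

p = [1, 0, 2]      # 1 + 2x^2
q = [3, 4]         # 3 + 4x
print(poly_add(p, q))      # [4, 4, 2]  i.e. 4 + 4x + 2x^2
print(poly_scale(5, p))    # [5, 0, 10] i.e. 5 + 10x^2
```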

Example 6: The set ℝ+ of positive real numbers is a vector space over the field ℝ of real numbers for the operations 'vector addition' ⊕ and 'scalar multiplication' ⊙ defined as follows:
x ⊕ y = xy and a ⊙ x = xᵃ, ∀ x, y ∈ ℝ+ and ∀ a ∈ ℝ.

Since the product of two positive real numbers is definitely a positive real number, i.e. x, y ∈ ℝ+ ⇒ x ⊕ y = xy ∈ ℝ+,

and every real power of a positive number is always a positive real number, i.e. a ∈ ℝ, x ∈ ℝ+ ⇒ a ⊙ x = xᵃ ∈ ℝ+,

therefore ℝ+ is closed for the above defined vector addition and scalar multiplication.
Verification of the space axioms in ℝ+(ℝ):
V1. It can easily be seen that (ℝ+, ⊕) is an abelian group,
for 1 ⊕ x = x ⊕ 1 = x·1 = x                    [∵ x·1 = x, ∀ x ∈ ℝ+]
so 1 is the zero vector, and for every x ∈ ℝ+ there exists 1/x ∈ ℝ+ such that
x ⊕ (1/x) = x·(1/x) = 1.

V2. Scalar multiplication is distributive over vector addition:

Let x, y ∈ ℝ+ and a ∈ ℝ; then
a ⊙ (x ⊕ y) = (xy)ᵃ = xᵃyᵃ = (a ⊙ x) ⊕ (a ⊙ y)
V3. Distributivity over scalar addition:
Let a, b ∈ ℝ and x ∈ ℝ+; then
(a + b) ⊙ x = x^(a+b) = xᵃ·xᵇ = (a ⊙ x) ⊕ (b ⊙ x)

V4. Scalar multiplication is associative:

Let a, b ∈ ℝ and x ∈ ℝ+; then
(ab) ⊙ x = x^(ab) = (xᵇ)ᵃ = a ⊙ (b ⊙ x)

V5. 1 ⊙ x = x¹ = x, ∀ x ∈ ℝ+.
From the above discussion, it is clear that ℝ+ satisfies all the axioms for a vector space, therefore ℝ+(ℝ) is a vector space.
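Because the operations here are non-standard, a numeric spot-check is reassuring. A minimal sketch of ours (helper names oplus and odot; math.isclose guards against floating-point rounding):

```python
# Sketch: the R+ operations x (+) y = xy and a (.) x = x^a, with a
# numeric spot-check of the axioms verified above.
import math

def oplus(x, y):          # "vector addition" on R+
    return x * y

def odot(a, x):           # "scalar multiplication" by a real a
    return x ** a

x, y, a, b = 2.0, 5.0, 3.0, -1.5
assert math.isclose(odot(a, oplus(x, y)), oplus(odot(a, x), odot(a, y)))  # V2
assert math.isclose(odot(a + b, x), oplus(odot(a, x), odot(b, x)))        # V3
assert math.isclose(odot(a * b, x), odot(a, odot(b, x)))                  # V4
assert odot(1.0, x) == x                                                  # V5
assert oplus(1.0, x) == x                     # 1 is the zero vector of R+
assert math.isclose(oplus(x, 1.0 / x), 1.0)   # 1/x is the negative of x
```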

General properties of Vector Space


If V is a vector space over the field F and 0 ∈ V is its zero vector, then the following properties are obvious because V is an abelian group:

(i) For any α, β ∈ V, α + β = α ⇒ β = 0      [uniqueness of the zero element]
(ii) α ∈ V, β ∈ V, α + β = 0 ⇒ β = -α     [uniqueness of inverse]
(iii) For any α, β, γ ∈ V, α + β = α + γ ⇒ β = γ     [cancellation law]
Now we shall establish a few more properties of V:
Theorem: Let V be a vector space over a field F. If 0 is the zero vector in V and 0 is the additive identity in F, then
(i) a0 = 0, ∀ a ∈ F

(ii) 0α = 0, ∀ α ∈ V
(iii) a(-α) = -(aα), ∀ a ∈ F, α ∈ V
(iv) (-a)α = -(aα), ∀ a ∈ F, α ∈ V
(v) a(α - β) = aα - aβ, ∀ a ∈ F, α, β ∈ V
(vi) For any a ∈ F, α ∈ V,
aα = 0 ⇒ a = 0 or α = 0

Proof:

(i) ∵ 0 is the zero element in V, therefore 0 + 0 = 0

⇒ a(0 + 0) = a0      ∀ a ∈ F
⇒ a0 + a0 = a0     ...... [by V2]
⇒ a0 + a0 = a0 + 0   ...... [∵ 0 is the zero element in V]
⇒ a0 = 0     [cancellation law in V]
(ii) ∵ 0 ∈ F is the additive identity in F, therefore
0 + 0 = 0

⇒ (0 + 0)α = 0α     ∀ α ∈ V
⇒ 0α + 0α = 0α     ...... [by V3]
⇒ 0α + 0α = 0α + 0     ...... [∵ 0 is the zero element in V]
⇒ 0α = 0   [cancellation law in V]
(iii) For any a ∈ F and α ∈ V,
a[α + (-α)] = aα + a(-α)    ...... [by V2]
⇒ a0 = aα + a(-α)
⇒ 0 = aα + a(-α)      ...... [by (i)]
⇒ a(-α) is the additive inverse of aα
⇒ a(-α) = -(aα)

(iv) For any a ∈ F and α ∈ V,

[a + (-a)]α = aα + (-a)α    ...... [by V3]
⇒ 0α = aα + (-a)α
⇒ 0 = aα + (-a)α    ....... [by (ii)]
⇒ (-a)α is the additive inverse of aα
⇒ (-a)α = -(aα)
(v)    For any a ∈ F and α, β ∈ V,
a(α - β) = a[α + (-β)]

= aα + a(-β)    .......[by V2]

= aα + [-(aβ)]    ...... [by (iii)]

= aα - aβ

(vi)    Let a ∈ F and α ∈ V such that a ≠ 0 and aα = 0.

Now since the multiplicative inverse of every non-zero element exists in F,

therefore a ∈ F, a ≠ 0 ⇒ a⁻¹ ∈ F

aα = 0 ⇒ a⁻¹(aα) = a⁻¹0

⇒    (a⁻¹a)α = 0    .......[by V4 and (i)]

⇒ 1α = 0

⇒ α = 0        ......[by V5]

Again let aα = 0 and α ≠ 0; then to prove that a = 0.

If possible, let a ≠ 0; then a⁻¹ ∈ F and

aα = 0    ⇒    a⁻¹(aα) = 0
⇒    (a⁻¹a)α = 0
⇒    1α = 0
⇒    α = 0
which is contrary to the assumption α ≠ 0; therefore a = 0.

Hence aα = 0  ⇒ a = 0 or α = 0.

Example 7: Let V(F) be a vector space, then :


(i)    For any a, b ∈ F and α ∈ V, α ≠ 0,

aα = bα ⇒ a = b

(ii)    For any α, β ∈ V and a ∈ F, a ≠ 0

aα = aβ ⇒ α = β

(i) aα = bα ⇒ aα - bα = 0
⇒ (a - b)α = 0    .... [by V3]

⇒ [a + (-b)]α = 0

⇒ a + (-b) = 0    [∵ α ≠ 0, by theorem (vi)]

⇒ a = -(-b)    [by additive inverse]

⇒ a = b
(ii) aα = aβ ⇒ aα - aβ = 0

⇒ a(α - β) = 0    [by V2]

⇒ α - β = 0       [∵ a ≠ 0, therefore by theorem (vi)]

⇒ α = β

Example 8: If V(F) is a vector space, then prove that:

(i) By theorem (iv), we have


(ii) 

(iii) 

Example 9: Let V be the set of all ordered pairs of real


numbers and F be the field of real numbers. Examine whether
V(F) is a vector space or not for the following defined
operations :
(a) (a, b) + (c, d) = (0, b + d);     p(a, b) = (pa, pb)
(b) (a, b) + (c, d) = (a + c, b + d);    p(a, b) = (0, pb)
(c) (a, b) + (c, d) = (a + c, b + d);    p(a, b) = (p²a, p²b)
(d) (a, b) + (c, d) = (a, b);    p(a, b) = (pa, pb)
(e) (a, b) + (c, d) = (a + c, b + d);    p(a, b) = (|p|a, |p|b)    ∀ a, b, c, d, p ∈ ℝ

(a) It can easily be seen that the additive identity for the defined vector addition does not exist. If any element (c, d) ∈ V is taken as the additive identity for vector addition, then
(a, b) + (c, d) = (0, b + d) ≠ (a, b) whenever a ≠ 0.
Therefore (V, +) is not an abelian group.

Consequently V(F) is not a vector space.


(b) In this case, (V, +) is an abelian group but space axiom V 5 is
not satisfied, because
1(a, b) = (0, 1b) = (0, b) ≠ (a, b)
Therefore V(F) is not a vector space.
(c) In this case, (V, +) is an abelian group but space axiom V3 is not satisfied, because
(p + q)(a, b) = ((p + q)²a, (p + q)²b) ≠ (p²a, p²b) + (q²a, q²b) = p(a, b) + q(a, b) in general.

Therefore V(F) is not a vector space.


(d) In this case the defined vector addition is not commutative, because for
(a, b), (c, d) ∈ V,
(a, b) + (c, d) = (a, b)
and (c, d) + (a, b) = (c, d)
⇒ (a, b) + (c, d) ≠ (c, d) + (a, b) whenever (a, b) ≠ (c, d)
Therefore (V, +) is not an abelian group.

Consequently, V(F) is not a vector space.


(e) In this case, (V, +) is an abelian group but space axiom V3 is not satisfied, because for p, q ∈ ℝ,
(p + q)(a, b) = (|p + q|a, |p + q|b)
≠ (|p|a, |p|b) + (|q|a, |q|b)      [∵ |p + q| ≠ |p| + |q| in general]
⇒ (p + q)(a, b) ≠ p(a, b) + q(a, b)
Therefore V(F) is not a vector space.
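A short sketch of ours (illustrative helper names, not from the text) spot-checking why cases (b) and (e) fail:

```python
# Sketch: numeric counterexamples for Example 9, cases (b) and (e).
def smul_b(p, ab):                 # case (b): p(a, b) = (0, pb)
    a, b = ab
    return (0, p * b)

print(smul_b(1, (3, 4)))           # (0, 4) != (3, 4): axiom V5 (1u = u) fails

def add_e(uv, cd):                 # case (e): usual addition
    return (uv[0] + cd[0], uv[1] + cd[1])

def smul_e(p, ab):                 # case (e): p(a, b) = (|p|a, |p|b)
    a, b = ab
    return (abs(p) * a, abs(p) * b)

p, q, u = 1, -1, (3, 4)
lhs = smul_e(p + q, u)                     # (p + q)(a, b) = (0, 0)
rhs = add_e(smul_e(p, u), smul_e(q, u))    # = (6, 8)
print(lhs, rhs)                    # V3 fails since |p + q| != |p| + |q|
```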

Theorem: Let V be a vector space over a field K.

(i) For any scalar k ∈ K and 0 ∈ V, k0 = 0.

(ii)  For 0 ∈ K and  any vector u ∈ V, 0u = 0.

(iii) If ku = 0, where k ∈ K and    u ∈ V, then k = 0 or u = 0.

(iv) For any k ∈ K  and any u ∈   V, (- k)u = k(- u) = - ku.

Examples of Vector Spaces

Space Kn
Let K be an arbitrary field. The notation Kn is frequently used to denote the set of all n-tuples of elements in K. Here Kn is a vector space over K using the following operations:
(i) Vector Addition:
(a1, a2, ..., an) + (b1, b2, ..., bn) = (a1 + b1, a2 + b2, ..., an + bn)

(ii) Scalar Multiplication:

k(a1, a2, ..., an) = (ka1, ka2, ..., kan)
The zero vector in Kn is the n-tuple of zeros,
0 = (0, 0, ..., 0)

and the negative of a vector is defined by
-(a1, a2, ..., an) = (-a1, -a2, ..., -an)

Polynomial Space P(t)
Let P(t) denote the set of all polynomials of the form
p(t) = a0 + a1t + a2t² + ... + astˢ    (s = 0, 1, 2, ...)
where the coefficients ai belong to a field K. Then P(t) is a vector space over K using the following operations:
(i) Vector Addition: Here p(t) + q(t) in P(t) is the usual operation of
addition of polynomials.
(ii) Scalar Multiplication: Here kp(t) in P(t) is the usual operation
of the product of a scalar k and a polynomial p(t).
The zero polynomial 0 is the zero vector in P(t).

Vector Subspaces 
Let V(F) be a vector space and W ⊂ V. We have already seen that the set W is closed for the binary operation + if for any
α, β ∈ W, α + β ∈ W,

and in that case the restriction of + to W is a binary operation in W.

Similarly we say that a subset W of the vector space V(F) is closed for the external composition if for any
a ∈ F and α ∈ W, aα ∈ W.

In such a case the restriction of the mapping F × V → V to F × W is a mapping from F × W to W and is therefore an external composition for W. We shall therefore say that if W is closed for the external composition in V, then the external composition in V induces an external composition in W. If W itself is a vector space for these induced compositions, then we say that W is a subspace of V.

Definition: Let V(F) be a vector space. A non void subset W of V is


a subspace of V, if W itself is a vector space over F for the
restrictions to W of the addition and multiplication by scalar defined
for V.

Improper or Trivial subspaces

Every vector space V(F) has at least two subspaces, viz.
(i) V itself
(ii) {0}, the zero vector space
These subspaces are called improper subspaces.

Proper subspaces: The subspaces other than the improper subspaces are called proper subspaces.
Example: The set of real numbers  is a subspace of the vector
space 
Example: The set   is a subspace of the
vector space  .

Criteria for a Subspace

Theorem: The necessary and sufficient conditions for a non-void subset W of a vector space V(F) to be a subspace of V(F) are that W is closed under vector addition and scalar multiplication:
i.e., α ∈ W, β ∈ W ⇒ α + β ∈ W ...(1)
a ∈ F, α ∈ W ⇒ aα ∈ W ...(2)
Proof. 
The condition is necessary (⇒): Let W be a non empty subspace
of V(F). By definition of subspace W itself is also a vector space
over F wrt vector addition and scalar multiplication in V. So it
implies that W is closed wrt these two binary compositions.
The condition is sufficient (⇐): Let W be closed for vector
addition and scalar multiplication i.e. (1) and (2) hold. Now we have
to prove that W is a subspace of V(F).
∵ W ≠ φ, therefore let α ∈ W and let 1 ∈ F be the unity element of the field F.
∵ 1 ∈ F ⇒ -1 ∈ F
∴ -1 ∈ F, α ∈ W ⇒ (-1)α = -α ∈ W      [by (2)]

Therefore every element of W has its additive inverse in W.

Now α ∈ W, -α ∈ W ⇒ α + (-α) = 0 ∈ W      [by (1)]

Therefore the zero vector of V is also the zero vector of W.

Since the elements of W are also the elements of V, they obey the commutative as well as associative laws which hold in V.
Therefore (W, +) is an abelian group.
Moreover, since V satisfies the remaining axioms V2, V3, V4, V5 and W is a subset of V, these axioms hold in W as well.
Therefore W is a subspace of the vector space V(F).
Theorem: The necessary and sufficient conditions for a non-void subset W of a vector space V(F) to be a subspace of V(F) are:
(i) α ∈ W, β ∈ W ⇒ α - β ∈ W
(ii) a ∈ F, α ∈ W ⇒ aα ∈ W
Proof. 
The condition is necessary (⇒) :
Let W be a subspace of V(F). Then W itself is a vector space with respect to the operations defined on V. Therefore (W, +) is an abelian group.
Consequently,
α ∈ W, β ∈ W ⇒ α ∈ W, -β ∈ W ⇒ α + (-β) = α - β ∈ W.

Therefore condition (i) is necessary.

Again W, being a subspace of V, is closed for scalar multiplication. Therefore
a ∈ F, α ∈ W ⇒ aα ∈ W.

Therefore condition (ii) is also necessary.


The condition is sufficient (⇐) : 
Now let W be a non-empty subset of V(F) which satisfies the given conditions; then to prove that W is a subspace of V(F).
∵ W ≠ φ, therefore let α ∈ W; then by condition (i),
α ∈ W, α ∈ W ⇒ α - α = 0 ∈ W.

Therefore the zero vector of V is also the zero vector of W.

Again by condition (i),
0 ∈ W, α ∈ W ⇒ 0 - α = -α ∈ W.

Therefore the additive inverse of each element exists in W.

Now α ∈ W, β ∈ W ⇒ α ∈ W, -β ∈ W
⇒ α - (-β) ∈ W      [by condition (i)]
⇒ (α + β) ∈ W

Therefore W is closed for vector addition.

Since W ⊂ V and vector addition is associative and commutative in


V, so also in W
Hence (W, +) is an abelian group.
Again by condition (ii), W is closed for scalar multiplication, and W ⊂ V, therefore W also satisfies the remaining axioms V2, V3, V4, V5. Consequently, W is a subspace of V(F).

Theorem: The necessary and sufficient condition for a non void


subset W of a vector space V(F) to be a subspace of V(F) is:
a, b ∈ F and α, β ∈ W ⇒ (aα + bβ) ∈ W
Proof: The condition is necessary (⇒):
Let W be a subspace of V(F). Then W is closed for scalar multiplication. Therefore
a ∈ F, α ∈ W ⇒ aα ∈ W and b ∈ F, β ∈ W ⇒ bβ ∈ W.

Again W, being a subspace, is closed for vector addition, therefore
aα ∈ W, bβ ∈ W ⇒ (aα + bβ) ∈ W.
Therefore the given condition is necessary.

The condition is sufficient (⇐): Now let W be a non-empty subset of V(F) which satisfies the given condition. Taking a = 1, b = 1, the given condition yields
α, β ∈ W ⇒ 1α + 1β = α + β ∈ W     [by V5]
Therefore W is closed for vector addition.
Now taking a = -1, b = 0, again by the given condition,
α, β ∈ W ⇒ (-1)α + 0β = -α ∈ W.

Therefore the additive inverse of each element exists in W.

Finally, taking a = 0, b = 0, by the given condition,
α, β ∈ W ⇒ 0α + 0β = 0 ∈ W.

Therefore the zero vector of V also belongs to W.

Since W ⊂ V and vector addition is associative and commutative in V, so it is in W also.
Hence (W, +) is an abelian group.
Again taking b = 0, we see that if a ∈ F and α, β ∈ W, then by the given condition,
a, 0 ∈ F and α, β ∈ W ⇒ aα + 0β = aα ∈ W.

Therefore W is closed for scalar multiplication. But W ⊂ V, therefore W satisfies the remaining axioms V2, V3, V4, V5.
Consequently W is a subspace of V(F).
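The criterion aα + bβ ∈ W is also easy to spot-check numerically. Below is a sketch of ours (not from the text) that samples random vectors from the plane W = {(a, b, c) : a - 3b + 4c = 0} of Example 11 below and confirms closure under aα + bβ:

```python
# Sketch: random spot-check of the subspace criterion for the plane
# W = {(a, b, c) : a - 3b + 4c = 0} in R^3.
import random

def in_W(v):
    a, b, c = v
    return abs(a - 3 * b + 4 * c) < 1e-9    # tolerance for float rounding

def random_W_vector():
    # pick b, c freely; a is then forced by a = 3b - 4c
    b, c = random.uniform(-5, 5), random.uniform(-5, 5)
    return (3 * b - 4 * c, b, c)

for _ in range(1000):
    alpha, beta = random_W_vector(), random_W_vector()
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    combo = tuple(a * x + b * y for x, y in zip(alpha, beta))
    assert in_W(combo)   # closure under a*alpha + b*beta never fails
```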
Intersection of two subspaces 
Theorem: The intersection of two subspaces W 1 and W2 of vector
space V(F) is also a subspace of V(F).
Proof:
Let α, β ∈ W1 ∩ W2; then α, β ∈ W1 and α, β ∈ W2.
Since W1 and W2 are subspaces, therefore for any a, b ∈ F,
aα + bβ ∈ W1 and aα + bβ ∈ W2.

Therefore a, b ∈ F and α, β ∈ W1 ∩ W2 ⇒ aα + bβ ∈ W1 ∩ W2

⇒ W1 ∩ W2 is also a subspace of V(F).

Generalisation: The intersection of an arbitrary family of


subspaces of a vector space is also a subspace.
Proof: Let V(F) be a vector space and {W1, W2, ...} be a family of its subspaces; then to prove that ∩Wi is also a subspace of V(F).
Let α, β ∈ ∩Wi; then α, β ∈ Wi for every i.

Now since every Wi is a subspace of V(F), therefore for any a, b ∈ F,
aα + bβ ∈ Wi for every i ⇒ aα + bβ ∈ ∩Wi

⇒ ∩Wi is also a subspace of V(F).

Union of two subspaces


The union of two subspaces of a vector space V(F) is not
necessarily a subspace of V(F). This result can be proved with the
help of the following counter example:
if W1 = {(a, 0, 0) | a ∈ ℝ} and W2 = {(0, b, 0) | b ∈ ℝ}
are subspaces of the vector space V3(ℝ),
but their union
W1 ∪ W2 = {α | α = (a, 0, 0) or (0, b, 0)}
is not a subspace of V3(ℝ), because if
α = (1, 0, 0) ∈ W1 ∪ W2 and β = (0, 1, 0) ∈ W1 ∪ W2, then α + β = (1, 1, 0) ∉ W1 ∪ W2, i.e. W1 ∪ W2 is not closed for vector addition.
Theorem: The union of two subspaces W 1 and W2 of a vector


space V(F) is a subspace iff either W 1 ⊂ W2 or W2 ⊂ W1
Proof: Let W1 and W2 be subspaces of a vector space V(F).

If W1 ⊂ W2 or W2 ⊂ W1, then

W1 ⊂ W2 or W2 ⊂ W1 ⇒ W1 ∪ W2 = W2 or W1.
But W1 and W2 are subspaces, therefore W1 ∪ W2 is also a subspace.

Conversely:
Let W1 ∪ W2 be a subspace of the vector space V(F).

Also suppose, if possible, that W1 ⊄ W2 and W2 ⊄ W1; then there exist α1, α2 ∈ V such that
α1 ∈ W1, α1 ∉ W2 and α2 ∈ W2, α2 ∉ W1.
Now α1, α2 ∈ W1 ∪ W2 ⇒ α1 + α2 ∈ W1 ∪ W2 (being a subspace)
⇒ α1 + α2 ∈ W1 or α1 + α2 ∈ W2.
If α1 + α2 ∈ W1, then (α1 + α2) - α1 = α2 ∈ W1, a contradiction; if α1 + α2 ∈ W2, then (α1 + α2) - α2 = α1 ∈ W2, again a contradiction.
Hence W1 ⊂ W2 or W2 ⊂ W1.


Example 10: Prove that the set W = {(a, b, 0) | a, b ∈ F} is a subspace of the vector space V3(F).

∵ V3(F) = {(a, b, c) | a, b, c ∈ F} and W = {(a, b, 0) | a, b ∈ F},

therefore clearly W ⊂ V3(F).
Let α = (a1, b1, 0) and β = (a2, b2, 0) be any two elements of W and p, q ∈ F; then
pα + qβ = p(a1, b1, 0) + q(a2, b2, 0) = (pa1 + qa2, pb1 + qb2, 0) ∈ W.
Therefore p, q ∈ F and α, β ∈ W ⇒ pα + qβ ∈ W.
Therefore W is a subspace of the vector space V3(F).

Example 11: Show that the set W = {(a, b, c) | a - 3b + 4c = 0; a, b, c ∈ ℝ} of 3-tuples is a subspace of the vector space V3(ℝ).

∵ V3(ℝ) = {(a, b, c) | a, b, c ∈ ℝ}
and W = {(a, b, c) | a - 3b + 4c = 0; a, b, c ∈ ℝ}, therefore clearly W ⊂ V3(ℝ).
Let α = (a1, b1, c1) ∈ W and β = (a2, b2, c2) ∈ W, so that
a1 - 3b1 + 4c1 = 0 and a2 - 3b2 + 4c2 = 0     .....(1)
Now if p, q ∈ ℝ, then pα + qβ = (pa1 + qa2, pb1 + qb2, pc1 + qc2), and
(pa1 + qa2) - 3(pb1 + qb2) + 4(pc1 + qc2) = p(a1 - 3b1 + 4c1) + q(a2 - 3b2 + 4c2) = p·0 + q·0 = 0     [by (1)]
Therefore p, q ∈ ℝ and α, β ∈ W ⇒ pα + qβ ∈ W.
∴ W is a subspace of the vector space V3(ℝ).

Example 12: Examine which of the following sets is a subspace of the vector space V3(ℝ):
(a) W1 = {(x, 2y, 3z) | x, y, z ∈ ℝ}
(b) W2 = {(x, x, x) | x ∈ ℝ}
(c) W3 = {(x, y, z) | x, y, z ∈ ℚ}

∵ V3(ℝ) = {(x, y, z) | x, y, z ∈ ℝ},

therefore observing the 3-tuples of W1, W2, W3 and V3(ℝ), it is clear that
W1 ⊂ V3(ℝ), W2 ⊂ V3(ℝ) and W3 ⊂ V3(ℝ).

(a) Let α = (x1, 2y1, 3z1) ∈ W1 and β = (x2, 2y2, 3z2) ∈ W1,

where x1, x2, y1, y2, z1, z2 are all real numbers.
If a, b ∈ ℝ, then
aα + bβ = (ax1 + bx2, 2(ay1 + by2), 3(az1 + bz2)) ∈ W1.
∴ a, b ∈ ℝ and α, β ∈ W1 ⇒ aα + bβ ∈ W1.


Therefore W 1 is a subspace of the vector space V 3(ℝ).

(b) Let α = (x1, x1, x1) ∈ W2 and β = (x2, x2, x2) ∈ W2,
where x1, x2 ∈ ℝ. If a, b ∈ ℝ, then
aα + bβ = (ax1 + bx2, ax1 + bx2, ax1 + bx2) ∈ W2.
Therefore W2 is a subspace of the vector space V3(ℝ).
(c) Let α = (3, 5, 4) ∈ W3; then for a = √2 ∈ ℝ,
aα = (3√2, 5√2, 4√2) ∉ W3      [∵ 3√2 etc. are not rational]

Consequently, W3 is not closed for scalar multiplication.

Therefore W3 is not a subspace of the vector space V3(ℝ).

Example 13: If V(F) is the vector space of all n × n matrices over the field F, then prove that the set W of all n × n symmetric matrices over F is a subspace of V(F).

Clearly W = {A : Aᵀ = A, A ∈ V} is a non-empty subset of V(F).

Let A, B ∈ W ⇒ Aᵀ = A and Bᵀ = B [being symmetric]    ...(1)

If a, b ∈ F, then aA ∈ V, bB ∈ V ⇒ aA + bB ∈ V,
and
(aA + bB)ᵀ = (aA)ᵀ + (bB)ᵀ = aAᵀ + bBᵀ = aA + bB     [by (1)]

⇒ aA + bB is a symmetric matrix ⇒ (aA + bB) ∈ W.

∴ a, b ∈ F and A, B ∈ W ⇒ aA + bB ∈ W.
Therefore W is a subspace of V(F).
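A short numeric confirmation of this closure (a sketch assuming NumPy; not part of the text):

```python
# Sketch: closure of the symmetric matrices under a*A + b*B.
import numpy as np

A = np.array([[1., 2.], [2., 5.]])     # A^T = A
B = np.array([[0., -3.], [-3., 7.]])   # B^T = B
a, b = 2.0, -4.0

C = a * A + b * B
print(np.array_equal(C, C.T))          # True: C is again symmetric
```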

Example 14: If M2(F) is the vector space of all 2 × 2 matrices over the field F, then prove that the set W of all matrices of the

form   over F will be a subspace of M2(F).


Let

and p ∈ F, then

Therefore W is closed for the vector addition and scalar multiplication in M2(F). Hence W is a subspace of M2(F).

Example 15: If a1, a2, a3 are fixed elements of a field F, then prove that the set W = {(x1, x2, x3) | x1, x2, x3 ∈ F; a1x1 + a2x2 + a3x3 = 0} is a subspace of the vector space V3(F).

Let α = (x1, x2, x3) ∈ W and β = (y1, y2, y3) ∈ W,

so that a1x1 + a2x2 + a3x3 = 0 and a1y1 + a2y2 + a3y3 = 0   ...(1)
If a, b ∈ F, then
aα + bβ = (ax1 + by1, ax2 + by2, ax3 + by3)    ...(2)
and a1(ax1 + by1) + a2(ax2 + by2) + a3(ax3 + by3)
= a(a1x1 + a2x2 + a3x3) + b(a1y1 + a2y2 + a3y3)    (by the properties of the field F)
= a0 + b0
= 0 + 0 = 0    ...(3)
Therefore by (2) and (3),
aα + bβ = (ax1 + by1, ax2 + by2, ax3 + by3) ∈ W.
Therefore W is a subspace of V3(F).

Example 16: Let V(F) be the vector space of all polynomials in an indeterminate x over F; then prove that the set S of all polynomials of degree ≤ n over F is a subspace of V(F), where n is an arbitrary positive integer.

Let f(x), g(x) ∈ S, so that f(x) and g(x) are polynomials over F of degree ≤ n. Now if a and b are any scalars in F, then af(x) + bg(x) will also be a polynomial of degree ≤ n.

Thus f(x), g(x) ∈ S and a, b ∈ F ⇒ af(x) + bg(x) ∈ S.


Hence S is a subspace of V.

Example 17: Let S be the set of all solutions (x, y, z) of the simultaneous equations ax + by + cz = 0 and dx + ey + fz = 0; a, b, c, d, e, f ∈ ℝ. Show that S is a subspace of ℝ³ over ℝ.

If (x1, y1, z1) and (x2, y2, z2) are solutions of the given equations, then
a(x1 + x2) + b(y1 + y2) + c(z1 + z2) = (ax1 + by1 + cz1) + (ax2 + by2 + cz2) = 0 + 0 = 0,
and similarly d(x1 + x2) + e(y1 + y2) + f(z1 + z2) = 0.
Hence if (x1, y1, z1) ∈ S and (x2, y2, z2) ∈ S, then (x1 + x2, y1 + y2, z1 + z2) is also in S.
Similarly, for any α ∈ ℝ, a(αx1) + b(αy1) + c(αz1) = α(ax1 + by1 + cz1) = 0 and d(αx1) + e(αy1) + f(αz1) = 0, so (αx1, αy1, αz1) ∈ S.

Hence S is a subspace of ℝ³ over ℝ.

Example 18: Examine which of the following sets of vectors α = (a1, a2, ..., an) in ℝn are subspaces of ℝn (n ≥ 3).

(a) Let W1 = {α | α ∈ ℝn and a1 ≥ 0}.
Let α = (a1, a2, ..., an) ∈ W1 and β = (b1, b2, ..., bn) ∈ W1;
then by the given condition a1 ≥ 0, b1 ≥ 0.

Now for a, b ∈ ℝ,
aα + bβ = (aa1 + bb1, aa2 + bb2, ..., aan + bbn).

But aa1 + bb1 is not necessarily non-negative;

for example, if we take a = -2, b = -1, then for a1 = 4, b1 = 3,
aa1 + bb1 = (-2)4 + (-1)3 = -11 < 0.
Therefore aα + bβ ∉ W1. Hence W1 is not a subspace.

(b) Let W2 = {α | α ∈ ℝn and a1 + 3a2 = a3}.

Let α = (a1, a2, ..., an) ∈ W2 and β = (b1, b2, ..., bn) ∈ W2.
Then by the given condition,
a1 + 3a2 = a3 and b1 + 3b2 = b3    ...(1)
Now for a, b ∈ ℝ,
aα + bβ = (aa1 + bb1, aa2 + bb2, ..., aan + bbn), and
(aa1 + bb1) + 3(aa2 + bb2) = a(a1 + 3a2) + b(b1 + 3b2)
= aa3 + bb3    [by (1)]

∴ a, b ∈ ℝ and α, β ∈ W2 ⇒ aα + bβ ∈ W2.
Therefore W2 is a subspace of ℝn.

(c) Let W3 = {α | α ∈ ℝn and a2 = a1²}.
Let α = (a1, a2, ..., an) ∈ W3 and β = (b1, b2, ..., bn) ∈ W3.
Then by the given condition,
a2 = a1² and b2 = b1². Now for a, b ∈ ℝ,
aα + bβ = (aa1 + bb1, aa2 + bb2, ..., aan + bbn).
But (aa1 + bb1)² is not necessarily equal to aa2 + bb2;
for example, when a2 = 4, a1 = 2, b2 = 9, b1 = 3, a = 2, b = -2,
then aa2 + bb2 = 8 - 18 = -10, whereas (aa1 + bb1)² = (4 - 6)² = 4.
Therefore aα + bβ ∉ W3. Hence W3 is not a subspace of ℝn.

(d) Let W4 = {α | α ∈ ℝn and a1a2 = 0}.
Let α = (a1, a2, ..., an) ∈ W4 and β = (b1, b2, ..., bn) ∈ W4.
Then by the given condition a1a2 = 0 and b1b2 = 0.

Now for a, b ∈ ℝ,
aα + bβ = (aa1 + bb1, aa2 + bb2, ..., aan + bbn).
But (aa1 + bb1)(aa2 + bb2) is not necessarily zero;

for example, taking a1 = 0, a2 = 2, b1 = 1, b2 = 0, a = 2, b = 3,
(aa1 + bb1)(aa2 + bb2) = (0 + 3)(4 + 0) = 12 ≠ 0.
Therefore aα + bβ ∉ W4. Hence W4 is not a subspace of ℝn.

(e) Let W5 = {α | α ∈ ℝn and a2 is rational}.
Let α = (a1, a2, ..., an) ∈ W5 and β = (b1, b2, ..., bn) ∈ W5.
Then by the given condition a2 and b2 are rational numbers.

Now for a, b ∈ ℝ,
aα + bβ = (aa1 + bb1, aa2 + bb2, ..., aan + bbn).
But aa2 + bb2 is not necessarily a rational number;

for example,
if we take a = √2, b = √3, a2 = 3, b2 = 4, then
aa2 + bb2 = 3√2 + 4√3, which is not rational.
Therefore aα + bβ ∉ W5. Hence W5 is not a subspace of ℝn.
Linear Combination

Let V be a vector space over a field K. A vector v in V is a linear combination of vectors u1, u2, ..., um in V if there exist scalars a1, a2, ..., am in K such that
v = a1u1 + a2u2 + ... + amum

Alternatively, v is a linear combination of u1, u2, ..., um if there is a solution to the vector equation
x1u1 + x2u2 + ... + xmum = v

where x1, x2, ..., xm are unknown scalars.

Example: (Linear Combinations in ℝn) Suppose we want to


express v = (3, 7, - 4) in ℝ3 as a linear combination of the vectors
u1 = (1, 2, 3), u2 = (2, 3, 7), u3 = (3, 5, 6)
We seek scalars x, y, z such that v = xu1 + yu2 + zu3; that is,
x + 2y + 3z = 3
2x + 3y + 5z = 7
3x + 7y + 6z = -4

(For notational convenience, we have written the vectors in ℝ3 as columns, because it is then easier to find the equivalent system of linear equations.) Reducing the system to echelon form yields
x + 2y + 3z = 3
-y - z = 1
-4z = -12

Back-substitution yields the solution

x = 2, y = -4, z = 3
Thus v = 2u1 - 4u2 + 3u3.
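The same computation can be delegated to a linear solver. A sketch (assuming NumPy) with the vectors u1, u2, u3 as the columns of a matrix:

```python
# Sketch: find the coefficients of the linear combination by solving
# the linear system whose columns are u1, u2, u3.
import numpy as np

U = np.array([[1, 2, 3],
              [2, 3, 5],
              [3, 7, 6]], dtype=float)   # columns are u1, u2, u3
v = np.array([3, 7, -4], dtype=float)

x = np.linalg.solve(U, v)
print(x)   # [ 2. -4.  3.]  =>  v = 2*u1 - 4*u2 + 3*u3
```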

Spanning Set

Let V be a vector space over K. Vectors u1, u2, ..., um in V are said to span V, or to form a spanning set of V, if every v in V is a linear combination of the vectors u1, u2, ..., um, that is, if there exist scalars a1, a2, ..., am in K such that
v = a1u1 + a2u2 + ... + amum.
The following remarks follow directly from the definition.

Remark

1. Suppose u1, u2, ..., um span V. Then, for any vector w, the vectors w, u1, u2, ..., um also span V.
2. Suppose u1, u2, ..., um span V and suppose uk is a linear combination of some of the other u's. Then the u's without uk also span V.
3. Suppose u1, u2, ..., um span V and suppose one of the u's is the zero vector. Then the u's without the zero vector also span V.

Example: Consider the vector space V = ℝ3.


(a) We claim that the following vectors form a spanning set of ℝ3 :
e1 = (1,0,0),   e2 = (0,1,0), e 3 = (0, 0, 1)
Specifically, if v = (a, b, c) is any vector in ℝ3,
then v = ae 1 + be2 + ce3
For example, v = (5, -6, 2) = 5e1 - 6e2 + 2e3.
The set of all linear combinations of finitely many vectors of a subset S of V is called the linear span of S and is denoted by L(S).
Note: L(ϕ) = {0}
Example: If S = {(1, 0, 0)} ⊂ V3(ℝ),
then L(S) = {(a, 0, 0) | a ∈ ℝ}
which is the set of all points of x-axis.
Example: If S = {(1, 0, 0), (0, 1, 0)} ⊂ V3(ℝ), then
L(S) = {(a, b, 0) | a, b ∈ ℝ}
which is the set of all points of xy plane.
Example: If S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} ⊂ V3(ℝ),
then for every vector α = (a, b, c) of V 3(ℝ)

(a, b, c) = a(1, 0, 0) + b(0, 1, 0) + c(0, 0, 1)


⇒ (a, b, c) ∈ L(S) ⇒ V3(ℝ) ⊂ L(S)
but L(S) ⊂ V3(ℝ)
∴ L(S) = V3(ℝ)
Theorem: The linear span L(S) of any non empty subset S of a
vector space V(F) is the smallest subspace of V(F) containing S
i.e.,    L(S) = {S}

Proof. Let α, β ∈ L(S); then
α = a1α1 + a2α2 + ... + anαn and β = b1β1 + b2β2 + ... + bmβm,
where a1, ..., an, b1, ..., bm ∈ F
and {α1, ..., αn}, {β1, ..., βm} are finite subsets of S.
If a, b are any arbitrary elements of the field F, then
aα + bβ = (aa1)α1 + ... + (aan)αn + (bb1)β1 + ... + (bbm)βm,
which shows that aα + bβ is a LC of the vectors of a finite subset of S.

Hence aα + bβ ∈ L(S).

∴ a, b ∈ F and α, β ∈ L(S) ⇒ aα + bβ ∈ L(S)

Therefore L(S) is a subspace of V(F).

Again, for every vector α of S,
α = 1·α ∈ L(S), so that S ⊂ L(S).
Now let W be any subspace of V containing S (S ⊂ W); then every element of L(S) must be in W, for W, being a subspace, is closed for vector addition and scalar multiplication. Therefore L(S) ⊂ W.
Consequently, L(S) = {S},
i.e. L(S) is the smallest subspace of V containing S.
Linear Sum of two subspaces
Let W1 and W2 be two subspaces of V(F). Then the linear sum of the subspaces W1 and W2 is the set of all possible sums (α1 + α2), where α1 ∈ W1 and α2 ∈ W2. The linear sum is denoted by W1 + W2.

Symbolically, W1 + W2 = {α1 + α2 | α1 ∈ W1, α2 ∈ W2}

Since 0 ∈ W1 and 0 ∈ W2, therefore 0 = 0 + 0 ∈ W1 + W2, so W1 + W2 is non-empty.

Theorem: The linear sum of two subspaces of a vector space is


also a subspace.
Proof: Let W1 and W2 be two subspaces of V(F).
Let α = α1 + α2 ∈ W1 + W2 and β = β1 + β2 ∈ W1 + W2,
where α1, β1 ∈ W1 and α2, β2 ∈ W2.
Now since W1 and W2 are subspaces, therefore for a, b ∈ F,
aα1 + bβ1 ∈ W1 and aα2 + bβ2 ∈ W2.
Again,
aα + bβ = a(α1 + α2) + b(β1 + β2) = (aα1 + bβ1) + (aα2 + bβ2) ∈ W1 + W2.
Therefore a, b ∈ F
and α, β ∈ W1 + W2 ⇒ aα + bβ ∈ W1 + W2.
∴ (W1 + W2) is also a subspace of V(F).

Theorem: If W1 and W2 are subspaces of a vector space V(F), then their sum is generated by their union, i.e.
W1 + W2 = L(W1 ∪ W2)

Proof: Since W1 and W2 are subspaces of V(F), therefore 0 ∈ W1 and 0 ∈ W2.

Let α1 ∈ W1; then α1 = α1 + 0 ∈ W1 + W2

⇒ W1 ⊂ (W1 + W2)    ...(1)

Similarly W2 ⊂ (W1 + W2)     ...(2)
(1) and (2) ⇒ W1 ∪ W2 ⊂ (W1 + W2)     ...(3)
But by the previous theorem, (W1 + W2) is a subspace of V(F) containing W1 ∪ W2.

To prove W1 + W2 = L(W1 ∪ W2):

Let α = α1 + α2 ∈ W1 + W2, where α1 ∈ W1 and α2 ∈ W2;
then clearly α1, α2 ∈ W1 ∪ W2.
Now since α = 1·α1 + 1·α2
⇒ α1 + α2 can be expressed as a LC of the vectors α1, α2 ∈ W1 ∪ W2
⇒ α ∈ L(W1 ∪ W2), i.e. W1 + W2 ⊂ L(W1 ∪ W2)    .... (4)
But we know that L(W1 ∪ W2) is the smallest subspace of V(F) containing W1 ∪ W2, and by (3)
W1 ∪ W2 ⊂ (W1 + W2),
therefore
L(W1 ∪ W2) ⊂ (W1 + W2)    .... (5)
(4) and (5) ⇒ W1 + W2 = L(W1 ∪ W2).
Theorem: If S and T are subsets of a vector space V(F), then:
(a) S ⊂ T ⇒ L(S) ⊂ L(T)
(b) S ⊂ L(T) ⇒ L(S) ⊂ L(T)
(c) L(S ∪ T) = L(S) + L(T)
(d) S is a subspace of V ⇔ L(S) = S
(e) L{L(S)} = L(S)

Proof: 

(a) Let α = a1α1 + a2α2 + ... + anαn ∈ L(S), where ai ∈ F and {α1, α2, ..., αn} is a finite subset of S.
Now since S ⊂ T, therefore
S ⊂ T ⇒ αi ∈ T    (i = 1, 2, ..., n)

Therefore α is a LC of elements of a finite subset of T ⇒ α ∈ L(T).
Hence L(S) ⊂ L(T).

(b) Let α = a1α1 + a2α2 + ... + anαn ∈ L(S), where ai ∈ F and {α1, α2, ..., αn} is a finite subset of S.
Now since S ⊂ L(T),
therefore each αi ∈ L(T), i.e. each αi is a LC of elements of a finite subset of T.

Therefore α is itself a LC of elements of a finite subset of T ⇒ α ∈ L(T).
Hence L(S) ⊂ L(T).

(c) Let α = a1α1 + a2α2 + ... + anαn + b1β1 + b2β2 + ... + bmβm be any element of the set L(S ∪ T), where the ai, bj are elements of the field F and {α1, ..., αn, β1, ..., βm} is a finite subset of S ∪ T such that
α1, ..., αn ∈ S and β1, ..., βm ∈ T.
Then β = a1α1 + ... + anαn ∈ L(S) and γ = b1β1 + ... + bmβm ∈ L(T),
so α = β + γ ∈ L(S) + L(T)
⇒ L(S ∪ T) ⊂ L(S) + L(T)    ...(1)

Now let α = β + γ ∈ L(S) + L(T)

⇒ β ∈ L(S) and γ ∈ L(T)
⇒ β is a LC of elements of a finite subset of S and γ is a LC of elements of a finite subset of T
⇒ β + γ is a LC of finitely many elements of S ∪ T
⇒ β + γ ∈ L(S ∪ T)
⇒ L(S) + L(T) ⊂ L(S ∪ T)    ...(2)
Therefore (1) and (2) ⇒ L(S ∪ T) = L(S) + L(T).

(d) First suppose S is a subspace of V; then to show that L(S) = S.

Let α = a1α1 + a2α2 + ... + anαn ∈ L(S), where ai ∈ F and {α1, α2, ..., αn} is a finite subset of S.

Now since S is a subspace of V(F), it is closed for vector addition and scalar multiplication. Consequently
a1α1 + a2α2 + ... + anαn ∈ S, i.e. α ∈ S.

Therefore α ∈ L(S) ⇒ α ∈ S,

i.e. L(S) ⊂ S     ...(1)
But in each case, S ⊂ L(S)     ...(2)
Therefore (1) and (2) ⇒ L(S) = S.
Conversely: Let L(S) = S, then to show that S is a subspace of
V(F). By theorem L(S) is a subspace of V(F), therefore S(= L(S))
will also be subspace of V(F).

(e) Since L(S) is a subspace of V(F), therefore by (d) above,

L{L (S)} = L(S).

Example 1: Is the vector α = (2, -5, 3) ∈ V3(ℝ) a LC of the following vectors?
α1 = (1, -3, 2), α2 = (2, -4, -1), α3 = (1, -5, 7)

Let α = a1α1 + a2α2 + a3α3.
Now (2, -5, 3) = a1(1, -3, 2) + a2(2, -4, -1) + a3(1, -5, 7)
⇒ a1 + 2a2 + a3 = 2    ...(1)
-3a1 - 4a2 - 5a3 = -5    ...(2)
2a1 - a2 + 7a3 = 3    ...(3)
(1) and (2) ⇒ 2a2 - 2a3 = 1, i.e. a2 - a3 = 1/2,

and
(1) and (3) ⇒ 5a2 - 5a3 = 1, i.e. a2 - a3 = 1/5,

which is not possible. Therefore equations (1), (2), (3) are inconsistent. Consequently it is not possible to express α as a LC of α1, α2, α3.

Example 2: If possible, express the vector α = (0, 4, 20) ∈ V3(ℝ) as a LC of the following vectors:
α1 = (2, 1, -1), α2 = (-1, 0, 3), α3 = (0, 1, 5)
Let α = a1α1 + a2α2 + a3α3; comparing components:
2a1 - a2 = 0    ...(1)
a1 + a3 = 4   ...(2)
-a1 + 3a2 + 5a3 = 20    ...(3)
Solving (1), (2), (3), we obtain a1 = 1, a2 = 2, a3 = 3, all in the field ℝ.
Therefore the vector α can be expressed as the LC α = α1 + 2α2 + 3α3.

Example 3: In the vector space V 3(ℝ), let α1 = (1, 2, 1); α 2 = (3,
1,5); α 3 = (3, -4, 7). Then prove that the subspaces spanned by
S = {α 1, α2} and T = {α 1, α2, α3} are the same.

Since the linear span L(T) of T is the set of LCs of the vectors α1, α2, α3, let
α = a1α1 + a2α2 + a3α3 ∈ L(T)    ...(1)
First we express α3 as a LC of α1 and α2: let (3, -4, 7) = b1(1, 2, 1) + b2(3, 1, 5)
⇒ b1 + 3b2 = 3, 2b1 + b2 = -4, b1 + 5b2 = 7 ⇒ b1 = -3, b2 = 2    ...(2)
Substituting these values of b1 and b2 in (2), we get

(3, -4, 7) = -3(1, 2, 1) + 2(3, 1, 5)

⇒ a3(3, -4, 7) = -3a3(1, 2, 1) + 2a3(3, 1, 5)   .... (3)
Now by (1) and (3),
α = a1α1 + a2α2 + a3(-3α1 + 2α2) = (a1 - 3a3)α1 + (a2 + 2a3)α2 ∈ L(S).
Hence L(T) ⊂ L(S); also S ⊂ T ⇒ L(S) ⊂ L(T). Therefore L(S) = L(T).

Example 4: Show that the following vectors span the vector


space ℝ3:

Let

then

 are scalars such that


Subspaces

Definition: Let V be a vector space over a field K and let W be a


subset of V. Then W is a subspace of V if W is itself a vector space
over K with respect to the operations of vector addition and scalar
multiplication on V.
Theorem: Suppose W is a subset of a vector space V. Then W is a subspace of V if the following two conditions hold:
(a) The zero vector 0 belongs to W.
(b) For every u, v ∈ W, k ∈ K: 

1. The sum u + v ∈ W. 


2. The multiple ku ∈ W.

Example: Consider the vector space V = ℝ3.


(a) Let U consist of all vectors in ℝ3 whose entries are equal; that is
U = {(a, b, c) : a = b = c}
For example, (1, 1, 1), (- 3, - 3, - 3), (7, 7, 7), (- 2, - 2, - 2) are
vectors in U. Geometrically, U is the line through the origin O and
the point (1, 1, 1). Clearly O = (0, 0, 0) belongs to U, because all
entries in O are equal. Further, suppose u and v are arbitrary
vectors in U, say u = (a, a, a) and v = (b, b, b). Then, for any scalar
k ∈ R, the following are also vectors in U : u + v = (a + b, a + b, a +
b) and ku = (ka, ka, ka)

Thus, U is a subspace of ℝ3.

Linear Dependence and Independence

Let V be a vector space over a field K. The following defines the notion of linear dependence and independence of vectors over K.

Definition: We say that the vectors v1, v2, ..., vm in V are linearly dependent if there exist scalars a1, a2, ..., am in K, not all of them 0, such that
a1v1 + a2v2 + ... + amvm = 0
Otherwise, we say that the vectors are linearly independent.
The above definition may be restated as follows. Consider the vector equation
x1v1 + x2v2 + ... + xmvm = 0    ... (*)
where the x's are unknown scalars. This equation always has the zero solution x1 = 0, x2 = 0, ..., xm = 0. Suppose this is the only solution; that is, suppose we can show:
x1v1 + x2v2 + ... + xmvm = 0 implies x1 = 0, x2 = 0, ..., xm = 0
Then the vectors v1, v2, ..., vm are linearly independent. On the other hand, suppose the equation (*) has a nonzero solution; then the vectors are linearly dependent.
A set S = {v 1, v2,......vm} of vectors in V is linearly dependent or
independent according to whether the vectors v 1, v2,......vm are
linearly dependent or independent.
An infinite set S of vectors is linearly dependent or independent
according to whether there do or do not exist vectors  v1,
v2,......vk in S that are linearly dependent.
The following remarks follow directly from the above definition.

Remark: Suppose 0 is one of the vectors v1, v2, ..., vm, say v1 = 0.

Then the vectors must be linearly dependent, because we have the following linear combination where the coefficient of v1 ≠ 0:
1v1 + 0v2 + ... + 0vm = 1·0 + 0 + ... + 0 = 0
Remark: Suppose v is a nonzero vector. Then v, by itself, is


linearly independent, because kv = 0, v ≠ 0 implies k = 0

Remark: Suppose two of the vectors v1, v2, ..., vm are equal, or one is a scalar multiple of the other, say v1 = kv2. Then the vectors must be linearly dependent, because we have the following linear combination where the coefficient of v1 ≠ 0:
v1 - kv2 + 0v3 + ... + 0vm = 0
Remark: Two vectors v 1 and v2 are linearly dependent if and only if


one of them is a multiple of the other.
Remark: If the set {v1, v2, ..., vm} is linearly independent, then any rearrangement of the vectors is also linearly independent.

Remark: If a set S of vectors is linearly independent, then any


subset of S is linearly independent. Alternatively, if S contains a
linearly dependent subset, then S is linearly dependent.
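For vectors in ℝn, linear independence is equivalent to the matrix having the vectors as rows possessing full row rank, which gives a one-line computational test. A sketch (assuming NumPy; the function name independent is ours):

```python
# Sketch: vectors are LI exactly when the matrix with them as rows
# has rank equal to the number of vectors.
import numpy as np

def independent(vectors):
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent([(1, 2, 3), (2, 5, 7), (1, 3, 5)]))   # True
print(independent([(1, 1, 0), (1, 3, 2), (4, 9, 5)]))   # False: 3u + 5v = 2w
```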

Theorems on Linear Dependence and Independence


Theorem: A set of vectors which contains at least one zero vector is LD.

Proof: Let S = {α1, α2, ..., αn} be a set of n vectors of any vector space V(F) in which α1 = 0 and all the remaining vectors are non-zero.
Now choose the scalars 1, 0, 0, ..., 0 (not all zero); then
1α1 + 0α2 + 0α3 + ... + 0αn = 1·0 + 0 + ... + 0 = 0     ..... (1)
Therefore from (1), a LC of the vectors of S equals 0 in which the coefficient of α1 is the unit element 1 ≠ 0 of F.

Hence S is LD.

Theorem: Every non-empty subset of a LI set of vectors is also LI.

Proof: Let S = {α1, α2, ..., αn} be a LI set of n vectors of any vector space V(F), and let S' = {α1, α2, ..., αm}, m ≤ n, be a non-empty subset of S.
Suppose a1, a2, ..., am ∈ F are such that
a1α1 + a2α2 + ... + amαm = 0.
Then a1α1 + ... + amαm + 0αm+1 + ... + 0αn = 0.
Now since α1, α2, ..., αn are LI, all the coefficients are zero; in particular
a1 = a2 = ... = am = 0.
Therefore the subset S' is also LI.

Theorem: Any superset of a LD set of vectors is also LD.

Proof: Let S = {α1, α2, ..., αn} be a LD set of n vectors of a vector space V(F). Then there exist a1, a2, ..., an ∈ F (not all zero) such that
a1α1 + a2α2 + ... + anαn = 0    ....(1)
Let S' = {α1, ..., αn, β1, ..., βm} be a superset of S;
then by (1),
a1α1 + ... + anαn + 0β1 + ... + 0βm = 0

[∵ the scalars a1, a2, ..., an are not all zero]

⇒ S' is also a LD set.

Theorem: If V(F) is a vector space and S = {α1, α2, ..., αn} is a set of some non-zero vectors of V, then S is LD iff some element of S can be expressed as a LC of the others.

Proof: Let S = {α1, α2, ..., αn} be LD. Then there exist a1, a2, ..., an ∈ F (not all zero) such that a1α1 + a2α2 + ... + anαn = 0    ...(1)
Let ak ≠ 0; then (1) can be written as
αk = -ak⁻¹(a1α1 + ... + ak-1αk-1 + ak+1αk+1 + ... + anαn).

Therefore αk ∈ S is expressible as a LC of the remaining vectors.

Conversely: Let α1 ∈ S be expressible as a LC of the remaining vectors of S. Therefore let
α1 = b2α2 + b3α3 + ... + bnαn
⇒ (-1)α1 + b2α2 + ... + bnαn = 0

⇒ α1, α2, ..., αn are LD    (∵ -1 ≠ 0)
⇒ S is LD

Theorem: If V(F) is a vector space and S = {α1, α2, ..., αn} is a set of some non-zero vectors of V, then S is LD iff some vector of S, say αk where 2 ≤ k ≤ n, can be expressed as a LC of the preceding ones.

Proof: Let αk ∈ S be a vector expressible as a LC of the preceding vectors of S.

Therefore let there exist scalars a1, a2, ..., ak-1 ∈ F such that
αk = a1α1 + a2α2 + ... + ak-1αk-1
⇒ a1α1 + ... + ak-1αk-1 + (-1)αk + 0αk+1 + ... + 0αn = 0
⇒ S is LD    (∵ -1 ≠ 0)
Conversely: Let S be LD.

The set {α1} is clearly LI because α1 ≠ 0.

Now consider the set {α1, α2}.
If this is LD, then there exist a1, a2 ∈ F such that
a1α1 + a2α2 = 0, where both a1, a2 are not zero.  ... (1)
But a2 ≠ 0, because if a2 = 0 then a1α1 = 0 with a1 ≠ 0 ⇒ α1 = 0, contrary to assumption.

Therefore by (1),
α2 = -(a1/a2)α1
⇒ α2 can be expressed as a LC of its preceding vector α1.

Now if the set {α1, α2} is LI, then we consider the set {α1, α2, α3}.
If this is LD then, along the above lines, it can easily be shown that α3 can be expressed as a LC of its preceding vectors α1 and α2.
If {α1, α2, α3} is LI, then we proceed as above.
In general, let k ∈ N be such that 2 ≤ k ≤ n and the set {α1, α2, ..., αk-1} is LI.
Now consider the set {α1, α2, ..., αk-1, αk}.
If this is LD, then
b1α1 + b2α2 + ... + bk-1αk-1 + bkαk = 0     ..... (2)
where the bi ∈ F (i = 1, 2, ..., k) are scalars, at least one of them non-zero. But bk ≠ 0, because if bk = 0 then the set {α1, α2, ..., αk-1} would be LD, which is contrary to our earlier assumption.
Therefore by (2),
αk = -bk⁻¹(b1α1 + b2α2 + ... + bk-1αk-1)

⇒ αk can be expressed as a LC of its preceding vectors.

Example 1: Show that the following vectors of V 3(ℝ) are LI :

Let a1, a2, a3 ∈ ℝ be the real numbers such that

From the resulting equations (1), (2), (3) we obtain a1 = 0, a2 = 0, a3 = 0.
Therefore the given vectors α1, α2, α3 are LI.

Example 2: Show that the following vectors of V3(ℝ) are LD:

α1 = (1, 3, 2); α2 = (1, -7, -8); α3 = (2, 1, -1)

Let a1, a2, a3 ∈ ℝ be real numbers such that
a1α1 + a2α2 + a3α3 = 0
⇒ a1 + a2 + 2a3 = 0, 3a1 - 7a2 + a3 = 0, 2a1 - 8a2 - a3 = 0.
Then a1 = 3, a2 = 1, a3 = -2 is a non-zero solution of the above equations. Consequently,
3α1 + α2 - 2α3 = 0 with not all coefficients zero, so α1, α2, α3 are LD.

Example 3: Show that the following matrices in the vector space M(ℝ) of all 2 × 2 matrices are LI:

Let   such that 


Consequently the given matrices are LI.

Example 4: Show that the vectors α = (a1, a2) and β = (b1, b2) of V2(ℝ) are LD iff a1b2 - a2b1 = 0.

Let a, b ∈ ℝ be scalars such that aα + bβ = 0
⇒ aa1 + bb1 = 0    .... (1)
aa2 + bb2 = 0    ... (2)

The necessary and sufficient condition for the existence of a non-zero solution (a, b) of the homogeneous equations (1) and (2) is that the determinant of coefficients vanishes, i.e. a1b2 - a2b1 = 0.


Example 5: Show that in the vector space V(F) of all
polynomials over a field F, the infinite set S = {1, x, x 2, x3, ...} is
LI.

Let Sn = {x^m1, x^m2, ..., x^mn} be any finite subset of the given set S, where m1, m2, ..., mn are distinct non-negative integers.
Let a1, a2, ..., an ∈ F be such that
a1x^m1 + a2x^m2 + ... + anx^mn = 0 (the zero polynomial)
⇒ a1 = a2 = ... = an = 0    [by equality of polynomials]

⇒ Sn is LI
⇒ every finite subset of S is LI
⇒ S is also LI.

Example 6: If α1, α2, α3 are LI vectors in a vector space V(ℂ), then show that the following vectors are also LI:
(a) α1 + α2, α2 + α3, α3 + α1

(a) Let a1, a2, a3 ∈ ℂ be such that
a1(α1 + α2) + a2(α2 + α3) + a3(α3 + α1) = 0
⇒ (a1 + a3)α1 + (a1 + a2)α2 + (a2 + a3)α3 = 0  .... (1)
But α1, α2, α3 are LI, therefore by (1),
a1 + a3 = 0, a1 + a2 = 0, a2 + a3 = 0 ⇒ a1 = a2 = a3 = 0.
Consequently the vectors α1 + α2, α2 + α3, α3 + α1 are also LI.
(b) Again let b1, b2, b3 ∈ ℂ be such that a similar LC of the second set of vectors equals 0    ...(2)
But α1, α2, α3 are LI, therefore by (2), b1 = b2 = b3 = 0.

Consequently the vectors of part (b) are also LI.

Example 7: If V is the vector space of all functions defined from ℝ to ℝ, then show that f, g, h ∈ V are LI, where:
(a) f(x) = e^(2x), g(x) = x², h(x) = x
(b) f(x) = sin x, g(x) = cos x, h(x) = x

(a) Let there exist a1, a2, a3 ∈ ℝ such that

a1f + a2g + a3h = 0, where 0 is the zero function,

i.e. a1e^(2x) + a2x² + a3x = 0 for every x ∈ ℝ.
Putting x = 0, 1, -1 in turn:
a1 = 0    ...(1)
a1e² + a2 + a3 = 0    ...(2)
a1e⁻² + a2 - a3 = 0    ...(3)

From (1), (2), (3): a1 = 0, a2 = 0, a3 = 0

⇒ f, g, h are LI.

(b) Let there exist b1, b2, b3 ∈ ℝ such that

b1f + b2g + b3h = 0, where 0 is the zero function,
i.e. b1 sin x + b2 cos x + b3x = 0 for every x ∈ ℝ.
Putting x = 0, π/2, π in turn:
b2 = 0    ...(4)
b1 + b3(π/2) = 0    ...(5)
-b2 + b3π = 0    ...(6)

From (4), (5), (6): b1 = 0, b2 = 0, b3 = 0

⇒ f, g, h are LI.
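Part (a) can also be checked numerically: if a1f + a2g + a3h were the zero function, it would vanish at every sample point, so the 3 × 3 matrix of values at three points would be singular. A nonzero determinant therefore certifies independence (a sketch of ours; the sample points match those used above):

```python
# Sketch: certify LI of f, g, h by a nonzero determinant of values
# at three sample points (x = 0, 1, -1).
import math

f = lambda x: math.exp(2 * x)
g = lambda x: x ** 2
h = lambda x: x

xs = [0.0, 1.0, -1.0]
M = [[f(x), g(x), h(x)] for x in xs]

# 3x3 determinant by cofactor expansion along the first row
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
     - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
     + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
print(det != 0)   # True => f, g, h are linearly independent
```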
Example 8: If α1, α2, α3 are vectors of V(F) and a1, a2 ∈ F, then show that the set {α1, α2, α3} is LD if the set {α1 + a1α2 + a2α3, α2, α3} is LD.

Let {α1 + a1α2 + a2α3, α2, α3} be a LD set; then there exist a, b, c ∈ F (not all zero) such that
a(α1 + a1α2 + a2α3) + bα2 + cα3 = 0
⇒ aα1 + (aa1 + b)α2 + (aa2 + c)α3 = 0    ...(1)
Now if in the above relation (1) the coefficients of the vectors α1, α2, α3 are not all zero, then the set {α1, α2, α3} will be LD.
If a ≠ 0, then the coefficient of α1 in (1) is non-zero, so for any values of b and c the set {α1, α2, α3} is LD.
If a = 0, then at least one of b and c is not zero
(because if all three were zero, the given set would not be LD).

Consequently, at least one of the coefficients (aa1 + b) and (aa2 + c) is non-zero,
and therefore the set {α1, α2, α3} is LD.

From the above discussion, it is clear that if the set {α1 + a1α2 + a2α3, α2, α3} is LD, then the set {α1, α2, α3} is always LD.

Example: (a) Let u = (1, 1, 0), v = (1, 3, 2), w = (4, 9, 5). Then u, v, w are linearly dependent, because 3u + 5v - 2w = 3(1, 1, 0) + 5(1, 3, 2) - 2(4, 9, 5) = (0, 0, 0) = 0.
(b) We show that the vectors u = (1, 2, 3), v = (2, 5, 7), w = (1, 3, 5) are linearly independent. We form the vector equation xu + yv + zw = 0, where x, y, z are unknown scalars. This yields the system
x + 2y + z = 0, 2x + 5y + 3z = 0, 3x + 7y + 5z = 0.
Reduction to echelon form and back-substitution yield x = 0, y = 0, z = 0. We have shown that xu + yv + zw = 0 implies x = 0, y = 0, z = 0.
Accordingly, u, v, w are linearly independent.
Lemma: Suppose two or more nonzero vectors v1, v2, ..., vm are linearly dependent. Then one of the vectors is a linear combination of the preceding vectors; that is, there exists k > 1 such that
vk = c1v1 + c2v2 + ... + ck-1vk-1.
Theorem: The nonzero rows of a matrix in echelon form are
linearly independent.
Proof: Every non-zero row of a matrix in reduced row-echelon form
contains a leading 1, and the other entries in that column are
zeroes. Then any linear combination of those other non-zero rows
must contain a zero in that position, so the original non-zero row
cannot be a linear combination of those other rows. This is true no
matter which non-zero row you start with, so the non-zero rows of
the matrix must be linearly independent.

Basis and Dimension

Basis of a Vector Space: 


Definition: Any subset S of a vector space V(F) is called a basis of V(F) if
(i) S is LI, and
(ii) S generates V, i.e. V = L(S),
i.e. each vector of V is a LC of a finite number of elements of S.
Therefore any LI subset of V which generates V is a basis of V.

Remark: 

 The zero vector cannot be an element of a basis, because then the vectors of the basis would not be LI.

 There may exist a number of bases for a given vector space.

Examples of Bases
Example 1: (Finite Basis) Let Vn(F) be a vector space; then S = {e1, e2, ..., en} is a basis of Vn(F), where e1 = (1, 0, 0, ..., 0); e2 = (0, 1, 0, ..., 0); ...; en = (0, 0, ..., 0, 1).
Earlier we have already proved that S is LI.
Again, for every vector α = (a1, a2, ..., an) of Vn,
there exist a1, a2, ..., an ∈ F
such that α = a1e1 + a2e2 + ... + anen
⇒ each vector of Vn is a LC of elements of S.

Therefore S is a basis of the vector space Vn(F).

Remarks: 

1. The above basis of Vn(F) is called the Standard basis.

2. The standard basis of V2(F) is {(1, 0), (0, 1)}.
3. The standard basis of V3(F) is {(1, 0, 0); (0, 1, 0); (0, 0, 1)}, etc.

Example 2: (Finite basis) Every vector space F(F) has a basis


{1} where 1 ∈ F is unit element. 

Clearly, the set {1} consists of a single non-zero element.

Therefore {1} is LI.
Again, for every a ∈ F, a = a·1, i.e.
every element of F is a LC of {1}.
Consequently {1} is a basis of F(F).

Remark: Corresponding to every non-zero element a ∈ F, {a} is a basis of F(F).

Example 3: (Infinite basis): Let F[x] be the vector space of all polynomials over the field F. Then the set S = {1, x, x², ..., xⁿ, ...} is a basis of F[x].

Let S' = {x^m1, x^m2, ..., x^mk} be any finite subset of the set S, and let
a1, a2, ..., ak ∈ F be such that
a1x^m1 + a2x^m2 + ... + akx^mk = 0 ⇒ a1 = a2 = ... = ak = 0
⇒ S' is LI
⇒ every finite subset of S is LI
⇒ S is also LI.
Now we will show that S generates F[x], i.e. every element (polynomial) of F[x] can be expressed as a LC of a finite number of elements of S.
Let f(x) = a0 + a1x + a2x² + ... + akx^k ∈ F[x].
Then there exist a0, a1, a2, ..., ak ∈ F such that
f(x) = a0·1 + a1·x + a2·x² + ... + ak·x^k.

Therefore S is a basis of F[x].

Remark: F[x] has no finite basis.

Dimension of a Vector Space

Definition: The number of elements in a finite basis of a vector space V(F) is called the dimension of the vector space V(F).
The dimension of the vector space V(F) is denoted by dim V or Dim V.
If dim V = n, then V is called an n-dimensional vector space.

Remark: The dimension of the zero space {0} is taken as zero (0).


Example: dim F(F) = 1, because every basis of F is a singleton set containing a non-zero element of F.
Example: dim Vn(F) = n, because {e1, e2, ..., en} is a basis of Vn(F) containing n elements.
Similarly, dim ℝ²(ℝ) = 2, dim ℝ³(ℝ) = 3, etc.

➤ Finite Dimensional Vector Space (FDVS):


Definition: A vector space V(F) is said to be a Finite Dimensional Vector Space if it has a finite basis S such that V = L(S),
i.e. every vector of V is generated by S.
Remark: Throughout the text, we shall be using finite dimensional vector spaces (FDVS) only.
Remark: A FDVS is also called a finitely generated vector space.
Example: The vector space V3(ℝ) is finite dimensional because S = {(1, 0, 0); (0, 1, 0); (0, 0, 1)} is a finite subset of V3 such that V3 = L(S).

➤ Infinite dimensional vector space: 


Definition: The vector space which is not finite dimensional is
called infinite dimensional vector space.
Example: The vector space F[x] of all the polynomials in x over
any field F is infinite dimensional because there does not exist any
finite subset of F[x] which can generate F[x].

Basis of a Finite Dimensional Vector Space:

Theorem: [Existence theorem]:


Every finite dimensional vector space has a basis.
Proof. Let V(F) be a FDVS.
Then by definition, there exists a finite subset
S = {α1, α2, ..., αn} of V which spans V, i.e. V = L(S).
Without loss of generality, we may suppose that 0 ∉ S, because its contribution in any LC of elements of S is zero.
If S is LI, then S itself is a basis of V and the theorem is established.

If S is LD, then by an earlier theorem there exists αk, 2 ≤ k ≤ n, in S such that either αk = 0 or αk is a LC of its preceding vectors α1, α2, ..., αk-1.

Let us remove αk from S and denote the remaining set by S1.

By an earlier theorem, S1 also spans V, i.e. V = L(S1).

Now if S1 is LI, then it is a basis of V.
But if S1 is LD, then again there exists αl ∈ S1 (l > k) such that either αl = 0 or αl is a LC of its preceding vectors in S1.
Again let us remove αl from S1 and denote the remaining set by S2, i.e. S2 = S - {αk, αl}, which again spans V, i.e. V = L(S2).
Proceeding in this way, after a finite number of steps either we obtain a basis of V in the meanwhile, or we are left with a set consisting of a single non-zero element which generates V and is LI, and thus is a basis of V.
Consequently, V(F) has a basis.
Another Form: If a finite set S generates a FDVS V(F), then there
exists a subset of S which is the basis of V(F).

Theorem: (Replacement theorem):
Let V(F) be a vector space which is generated by a finite set S = {v1, v2, ..., vn} of V; then any LI set of vectors of V contains not more than n elements.
Proof: Let S = {v1, v2, ..., vn} generate the vector space V, and let

S' = {u1, u2, ..., um} be any LI set in V; then we have to prove that m ≤ n.
In order to prove this result, it is sufficient to show that for m > n the set S' would be LD.
Now since L(S) = V, any v ∈ V is a LC of v1, v2, ..., vn.

In particular, u1 ∈ S' ⊂ V
⇒ u1 is a LC of v1, v2, ..., vn
⇒ {u1, v1, v2, ..., vn} = S1 (say) is LD, and L(S1) = V (since the set S1 is a spanning set for V).
Therefore some vk ∈ S1 is a LC of its preceding vectors u1, v1, ..., vk-1.
Let us remove this vector vk from S1 and denote the remaining set by S2, i.e.
S2 = {u1, v1, ..., vk-1, vk+1, ..., vn}.

Since L(S1) = V, any v ∈ V is a LC of vectors of S1, and in any such LC the vector vk may be replaced by a LC of u1, v1, ..., vk-1.

Thus any v ∈ V is a LC of vectors belonging to S2.
∴ L(S2) = V
Next, u2 ∈ S' ⊂ V and S2 generates V, therefore u2 is a LC of vectors of S2
⇒ the set S3 = {u2, u1, v1, ..., vk-1, vk+1, ..., vn} is LD, so some v-vector of S3 is a LC of its preceding vectors (it cannot be u1 or u2, since S' is LI).
Therefore remove this vector from the set S3 and denote the remaining set by S4, which generates V.
Proceeding in this manner, we find that each step consists in the exclusion of a v and the inclusion of a u, and the resulting set after each step generates V.
If m > n, then after n steps we obtain the set {un, un-1, ..., u1}, which generates V, and therefore un+1 is a LC of the preceding vectors u1, u2, ..., un, leading us to the conclusion that the set
{u1, u2, ..., un, un+1} is LD,
which contradicts the hypothesis that S' is LI.
Hence m > n is impossible, i.e. m ≤ n.

Corollary. [Invariance of dimension]:


Any two bases of a FDVS have the same number of elements.
Proof: Let V(F) be a FDVS which has two bases
S1 = {α1, α2, ..., αm} and S2 = {β1, β2, ..., βn}.
To prove m = n:
Now S1 is a basis ⇒ S1 is LI and L(S1) = V    ...(i)
and S2 is a basis ⇒ S2 is LI and L(S2) = V    ...(ii)
(i) and (ii)  ⇒  L(S1) = V and S2 is LI.
Therefore by the above theorem, n ≤ m    ...(1)
Also, L(S2) = V and S1 is LI, therefore
m ≤ n      ...(2)
(1) and (2) ⇒    m = n.

Example: For the vector space V3(F), the sets


B1 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
and B2 = {(1, 0, 0), (1, 1, 0), (1, 1, 1)}
are bases, as can easily be seen. Both these bases contain the
same number of elements, i.e. 3.

Some properties of FDVS


Theorem : [Extension Theorem]:
Every LI subset of a FDVS V(F) is either a basis of V or can be
extended to form a basis of V.
Proof: Let V(F) be a FDVS whose basis is B = {β1, β2, ..., βn}.
Therefore dim V = n.
Let S = {α1, α2, ..., αm} be a LI subset of V. Since B is a basis of the
space V, therefore B is LI and L(B) = V.
Consider B' = {α1, α2, ..., αm, β1, β2, ..., βn};
then clearly L(B') = V and B' is LD, because every αi can be
expressed as a LC of β1, β2, ..., βn.
Therefore there will exist βk in B' which is expressible as a LC of
its preceding vectors α1, α2, ..., αm, β1, β2, ..., βk-1.
(The vector so expressible cannot be any of α1, α2, ..., αm, because S is LI.)
Now remove βk from B' and denote the remaining set by B''.

Obviously, L(B'') = V.
Now if B'' is LI, then this will be a basis of V and is the required
extension set.
If B'' is LD, then we repeat the process till, after a finite number of
steps, we obtain a LI set containing α1, α2, ..., αm and generating V,
i.e. a basis of V.
Since dim V = n, every basis of V will contain n elements.
Thus exactly (n - m) elements of B will be adjoined to S
so as to form a basis of V,
which is the extended form of S. Hence either S is already a basis
(when m = n) or it can be extended (when m < n) by adjoining (n -
m) elements of B to form a basis of V.
Another form: "Any LI subset of a FDVS V is a part of a basis."
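The proof's procedure can likewise be sketched computationally. In the sympy sketch below (the LI set S and the basis B are assumed for illustration), the vectors of B are adjoined one at a time and kept only when they enlarge the span, so exactly (n - m) of them survive.

from sympy import Matrix

S = [Matrix([1, 1, 0])]                                         # an assumed LI subset of R^3
B = [Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 0, 1])]   # a basis of R^3

extended = list(S)
for b in B:
    # keep b only if it is LI of the vectors collected so far
    if Matrix.hstack(*(extended + [b])).rank() == len(extended) + 1:
        extended.append(b)

print([list(v) for v in extended])   # [[1, 1, 0], [1, 0, 0], [0, 0, 1]]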
Theorem: In an n-dimensional vector space V(F), any set of (n + 1)
or more vectors of V is LD.
Proof: Let V(F) be a vector space and dim V = n.
Therefore every basis of V will contain exactly n elements.
Let S be a subset of V containing (n + 1) or more vectors.
Let, if possible, S be LI; then either it is already a basis or it can be
extended to form a basis of V.
Thus in both cases a basis of V would contain (n + 1) or more
vectors, which contradicts the hypothesis that V is n-
dimensional.
Therefore S is LD, and so is every superset of it.

Theorem: In an n-dimensional vector space V(F), any set of n LI


vectors is a basis of V.
Proof: We have dim V = n.
Let S = {α1, α2, ..., αn} be a LI subset of the n-dimensional space V(F).
S, being a LI subset of V, is
either (i) a basis of V,
or (ii) extendable to form a basis of V.
In case (ii), the basis of V (the extended form of S) would contain more
than n vectors, which contradicts the hypothesis that dim V = n.
Hence case (i) is true, i.e. S is a basis of V.

Theorem: In an n-dimensional vector space V(F), any set of n


vectors spanning V is a basis.
Proof: We have dim V = n.
Let S = {α1, α2, ..., αn} be a subset of V such that L(S) = V.
If S is LI, then it will form a basis of V.
If S is LD, then there exists a proper subset S' of S spanning V, which
will form a basis of V. This basis S' would contain fewer than n
elements, which contradicts the hypothesis that dim V = n.
Hence S is LI and forms a basis of V.

Dimension of a Subspace

Theorem: If W is a subspace of a FDVS V(F), then :


(a) dim W ≤ dim V
(b) W = V ⇔ dim W = dim V.
(c) dim W < dim V, whenever W is a proper subspace of V.
Proof: (a) Let V(F) be a finite n-dimensional vector space and W
be a subspace of V.
Since dim V = n, any subset of V containing (n + 1) or
more vectors is LD;
consequently a LI set of vectors in W contains at most n
elements.
Let S = {α1, α2, ..., αm}, m ≤ n, be a maximal LI set of vectors in W.
Now if α is any element of W, then the set
S1 = {α1, α2, ..., αm, α} is LD (S being a maximal LI set of vectors of W).
Therefore α ∈ W is a LC of the vectors α1, α2, ..., αm
⇒    L(S) = W
⇒ S is a basis of the subspace W
⇒ dim W = m, where m ≤ n
⇒ dim W ≤ dim V

(b) (⇒) : Let W = V, then


W = V ⇒ W is a subspace of V and V is a subspace of W.
⇒ dim W ≤ dim V and dim V ≤ dim W    [by (a)]

⇒ dim W = dim V.

Conversely (⇐) : Let dim W = dim V = n (say)


Let B = {α1, α2,......αn} be the basis of the space W, then
L(B) = W    ...(1)
But B being a LI subset of V containing n vectors, will also
generate V i.e.
L(B) = V    ...(2)

(1) and (2) ⇒    W = V = L(B)

Therefore    W = V ⇔ dim W = dim V.

(c) When W is a proper subspace of V, then there exists a vector α


∈ V which is not in W, and as such α cannot be expressed as a LC of
the vectors α1, α2, ..., αm of a basis of W.
Consequently the set {α1, α2, ..., αm, α} forms a LI subset of V,
therefore a basis of V will contain more than m vectors.
∴ dim W < dim V.

Dimension of a Linear Sum


Theorem: If W1 and W2 are two subspaces of a FDVS V(F), then:
dim(W1 + W2) = dim W 1 + dim W2 - dim(W 1 ∩ W2)

Proof: Let dim(W1 ∩ W2) = k and let S = {γ1, γ2, ..., γk} be a basis


of W1 ∩ W2; then S ⊂ W1 and S ⊂ W2.
Now since S is LI and S ⊂ W1, therefore by the extension theorem,
S can be extended to form a basis of W1.
Let B1 = {γ1, ..., γk, α1, ..., αm} be a basis of W1.
Then dim W1 = k + m.
Similarly, let B2 = {γ1, ..., γk, β1, ..., βn} be a basis of W2.
Then dim W2 = k + n.
∴ dim W1 + dim W2 - dim(W1 ∩ W2) = (k + m) + (k + n) - k = k + m +
n

To prove dim(W1 + W2) = k + m + n:


For this, we shall show that
S1 = {γ1, ..., γk, α1, ..., αm, β1, ..., βn}
is a basis of (W1 + W2).

S1 is LI: Let there exist c1, ..., ck, a1, ..., am, b1, ..., bn ∈ F such


that
c1γ1 + ... + ckγk + a1α1 + ... + amαm + b1β1 + ... + bnβn = 0    ...(1)
Now since b1β1 + ... + bnβn = -(c1γ1 + ... + ckγk + a1α1 + ... + amαm) ∈ W1,
and also b1β1 + ... + bnβn ∈ W2, we get b1β1 + ... + bnβn ∈ W1 ∩ W2.

Therefore this can be expressed as a LC of the elements of the


basis S of W1 ∩ W2.
Hence let d1, d2, ..., dk ∈ F such that
b1β1 + ... + bnβn = d1γ1 + ... + dkγk    ...(2)
⇒ d1γ1 + ... + dkγk - b1β1 - ... - bnβn = 0; since B2 is LI, this gives
b1 = 0, b2 = 0, ..., bn = 0.
Substituting these values in (1):
c1γ1 + ... + ckγk + a1α1 + ... + amαm = 0, and since B1 is LI,


⇒ c1 = 0, ..., ck = 0, a1 = 0, ..., am = 0, b1 = 0, ..., bn = 0
⇒ the set S1 is LI.

To show L(S1) = (W1 + W2):


(W1 + W2) is a subspace of V(F) and each element of S1 belongs to
(W1 + W2).
Hence L(S1) ⊂ (W1 + W2)    ...(6)
Again let α ∈ (W1 + W2),
then α = an element of W1 + an element of W2
= LC of elements of the basis B1 of W1 + LC of elements of the basis B2 of
W2 = LC of elements of S1
⇒    α ∈ L(S1)
⇒    (W1 + W2) ⊂ L(S1)    ...(7)
Therefore (6) and (7)
⇒ L(S1) = (W1 + W2)
Consequently, S1 is a basis of (W1 + W2), and therefore
dim(W1 + W2) = k + m + n = dim W1 + dim W2 - dim(W1 ∩ W2).
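A quick numerical check of this formula (a sympy sketch; the two subspaces of ℝ4 are assumed for illustration): dim(W1 + W2) is the rank of the columns of W1 and W2 taken together, and dim(W1 ∩ W2) can be read off from the nullspace of [W1 | -W2], since each nullvector pairs a combination of W1-columns with an equal combination of W2-columns.

from sympy import Matrix

W1 = Matrix([[1, 0], [0, 1], [0, 0], [0, 0]])   # columns span W1
W2 = Matrix([[0, 0], [1, 0], [0, 1], [0, 0]])   # columns span W2

dim_W1, dim_W2 = W1.rank(), W2.rank()
dim_sum = Matrix.hstack(W1, W2).rank()              # dim(W1 + W2)
dim_int = len(Matrix.hstack(W1, -W2).nullspace())   # dim(W1 ∩ W2)

print(dim_sum, dim_W1 + dim_W2 - dim_int)           # 3 3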

Direct sum of two spaces.


(a) External Direct Sum.

Let V(F) and V'(F) be two vector spaces. Then the product set
V × V' = {(α, α') : α ∈ V, α' ∈ V'}
is a vector space over F for the compositions defined as
(α1, α1') + (α2, α2') = (α1 + α2, α1' + α2') and a(α, α') = (aα, aα'), a ∈ F.

This vector space V × V' over F is called the external direct sum of


the two vector spaces V(F) and V'(F) and is written as V × V'.
(b) Internal Direct Sum or Direct Sum.
If W1 and W2 be two subspaces of a vector space V(F), then
V(F) is said to be the direct sum of its subspaces W1 and W2 if
every α ∈ V can be uniquely expressed as
α = α1 + α2, where α1 ∈ W1, α2 ∈ W2.

We denote the direct sum of W1 and W2 by W1 ⊕ W2.


Then V = W1 ⊕ W2
⇒ V = W1 + W2
Complementary subspaces
If V = W1 ⊕ W2, then W1 and W2 are said to be complementary
subspaces, i.e. any α ∈ V is uniquely expressible as
α = α1 + α2, α1 ∈ W1, α2 ∈ W2.
Disjoint subspaces
Two subspaces W 1 and W2 of the vector space V(F) are said to be
disjoint if W1 ∩ W2 = {0} = zero space.
Theorem: The necessary and sufficient conditions for a vector
space V(F) to be a direct sum of its two subspaces W1 and W2 are:
(i) V = W1 + W2 and (ii) W1 ∩ W2 = {0}.
Proof: The conditions are necessary (⇒):
Let V = W1 ⊕ W2.
By definition of direct sum, each element α ∈ V is uniquely
expressed as
α = α1 + α2, α1 ∈ W1, α2 ∈ W2; hence V = W1 + W2    ...(1)
Let, if possible, α ∈ W1 ∩ W2.
Evidently α = α + 0 (α ∈ W1, 0 ∈ W2) and α = 0 + α (0 ∈ W1, α ∈ W2).
Since the sum for α is unique, hence α = 0. But α ∈ W1 ∩ W2 is
arbitrary,
∴ W1 ∩ W2 = {0}.    ...(2)
The conditions are sufficient (⇐):
Let V = W1 + W2 and W1 ∩ W2 = {0}.
Let α ∈ V be arbitrary.
Then there exist α1 ∈ W1, α2 ∈ W2
such that α = α1 + α2.


Now we have to show that this representation is unique.
If possible, let also
α = β1 + β2, β1 ∈ W1, β2 ∈ W2.
Then α1 + α2 = β1 + β2 ⇒ α1 - β1 = β2 - α2.
But α1 - β1 ∈ W1 and β2 - α2 ∈ W2, W1 and W2
being subspaces of V,
so α1 - β1 = β2 - α2 ∈ W1 ∩ W2 = {0}     (given)
⇒ α1 = β1 and α2 = β2, i.e. the expression for α is unique.
Hence V = W1 ⊕ W2

Dimension of direct sum.

Theorem: If a FDVS V(F) is a direct sum of its two subspaces


W1 and W2, then
dim V = dim W1 + dim W2.
Proof: We have
V = W1 ⊕ W2,
i.e. V = W1 + W2 and W1 ∩ W2 = {0}, so that dim(W1 ∩ W2) = 0.
By the theorem on the dimension of a linear sum, we have
dim(W1 + W2) = dim W1 + dim W2 - dim(W1 ∩ W2)    ...(3)
Here we have W1 + W2 = V and dim(W1 ∩ W2) = 0, hence
dim V = dim W1 + dim W2.

Example 1: Prove that the set S = {(1,2, 1), (2, 1, 0), (1,-1, 2)}
forms a basis of the vector space V 3(ℝ).

All the 3 elements of S are in V3(ℝ), therefore S ⊂ V3(ℝ).


To show that S is LI:
Let a1, a2, a3 ∈ ℝ such that
a1(1, 2, 1) + a2(2, 1, 0) + a3(1, -1, 2) = (0, 0, 0)
⇒ a1 + 2a2 + a3 = 0    ...(1)
2a1 + a2 - a3 = 0    ...(2)
a1 + 2a3 = 0    ...(3)
By (3), a1 = -2a3; substituting in (1) gives 2a2 = a3, and in (2) gives
a2 = 5a3, whence a1 = 0, a2 = 0, a3 = 0.

⇒ Vectors of S are LI.

Hence S is a set of 3 LI vectors of V3(ℝ), and since dim V3(ℝ) = 3,
S forms a basis of V3(ℝ).
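Equivalently, a sympy one-liner settles this example: S is LI iff the matrix whose rows are the vectors of S has a non-zero determinant.

from sympy import Matrix

A = Matrix([[1, 2, 1], [2, 1, 0], [1, -1, 2]])
print(A.det())   # -9, non-zero, so the rows are LI and S is a basis of V3(R)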

Example 2: If  V(ℝ) be the vector space of all ordered pairs of


complex numbers over the real field ℝ, then prove that the set
S = {(1, 0); (i, 0); (0, 1); (0, i)} is a basis of V(ℝ). To show that S
is LI:

Let a1, a2, a3, a4 ∈ ℝ such that
a1(1, 0) + a2(i, 0) + a3(0, 1) + a4(0, i) = (0, 0)
⇒ (a1 + ia2, a3 + ia4) = (0, 0)
⇒ a1 + ia2 = 0 and a3 + ia4 = 0
⇒ a1 = a2 = a3 = a4 = 0, a1, a2, a3, a4 being real
⇒ S is LI.

To show that V = L(S):


Let α = (a + ib, c + id) ∈ V, where a, b, c, d ∈ ℝ, then
α = (a + ib, c + id)

= a(1, 0) + b(i, 0) + c(0, 1) + d(0, i)


⇒ α ∈ V is LC of elements of S
⇒ V = L(S).
Hence S is a basis of V(ℝ).

Example 3: Prove that the set S = {a + ib, c + id} is a basis set


of the vector space ℂ(ℝ) iff (ad - bc) ≠ 0.

To show that S is LI:

Let a1, a2 ∈ ℝ such that

a1(a + ib) + a2(c + id) = 0    ...(1)
⇒ (a1a + a2c) + i(a1b + a2d) = 0
⇒ a1a + a2c = 0 and a1b + a2d = 0.
This homogeneous system in a1, a2 has only the zero solution
a1 = 0, a2 = 0 iff the determinant of its coefficients is non-zero.


Therefore S will be LI iff (ad - bc) ≠ 0.
To show that ℂ(ℝ) = L(S):

Let α = e + if ∈ ℂ(ℝ) and let x, y ∈ ℝ be such that

α = e + if = x(a + ib) + y(c + id)


⇒ e + if = (xa + yc) + i(xb + yd)
⇒ e = ax + cy, f = bx + dy.
This system has a (unique) solution x, y for every e, f iff (ad - bc) ≠ 0.

Therefore each element of ℂ(ℝ) can be expressed as a LC of


elements of S iff (ad - bc) ≠ 0 ⇒ ℂ(ℝ) = L(S) iff (ad - bc) ≠ 0.
Hence S is a basis of ℂ(ℝ) iff (ad - bc) ≠ 0.
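The criterion of this example is the determinant test in disguise. A sympy sketch, identifying a + ib with the real column (a, b):

from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')
print(Matrix([[a, c], [b, d]]).det())    # a*d - b*c: S is a basis iff ad - bc ≠ 0

# a concrete instance: S = {1 + i, 1 - i}
print(Matrix([[1, 1], [1, -1]]).det())   # -2 ≠ 0, so {1 + i, 1 - i} is a basis of C(R)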

Example 4: If the set S = {α, β, γ} be a basis of the vector space


V3(ℝ), then prove that the given set S' will
also be a basis of V3(ℝ).

Since there are 3 elements in the basis S of V3(ℝ),


the set of any 3 LI vectors of V3(ℝ) will be its basis.

To show that S’ is LI :

Let a1, a2, a3 ∈ ℝ such that

  ...(1)

But α, β, γ are LI, therefore

Therefore S' is also LI. Now since the basis S of V3(ℝ) contains 3


vectors, S' is a set of 3 LI vectors. As such, S' will also
be a basis of V3(ℝ).

Example 5: Show that the dimension of the vector space V(ℝ)


of all 2 × 2 real matrices is 4.
Let us consider the following 4 matrices of order 2 × 2:
E11 = [1 0; 0 0], E12 = [0 1; 0 0], E21 = [0 0; 1 0], E22 = [0 0; 0 1].

We will show that their set S = {E11, E12, E21, E22} is LI and generates


V(ℝ).

Let a, b, c, d ∈ ℝ such that
aE11 + bE12 + cE21 + dE22 = O (the zero matrix)
⇒ [a b; c d] = [0 0; 0 0] ⇒ a = b = c = d = 0 ⇒ S is LI.

Again, any element [a b; c d] of V can be written, for a, b, c, d ∈ ℝ,


as
[a b; c d] = aE11 + bE12 + cE21 + dE22

⇒ each element of V is a LC of elements of S ⇒ V = L(S)


⇒ S is a basis of V(ℝ).
But the number of elements in S is 4, therefore dim V = 4.
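A sympy sketch of the same count: flattening each unit matrix into a column of length 4 gives a 4 × 4 matrix of rank 4, so the four unit matrices are LI.

from sympy import Matrix

units = [Matrix(2, 2, lambda i, j: 1 if (i, j) == pos else 0)
         for pos in [(0, 0), (0, 1), (1, 0), (1, 1)]]
flat = Matrix.hstack(*[m.reshape(4, 1) for m in units])
print(flat.rank())   # 4, hence dim V = 4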

Example 6: Show that the set S = {(1, 0, 0); (1, 1, 0); (1, 1, 1); (0,
1, 0)} spans the vector space V3(ℝ) but is not a basis set.

Here we will show that S is not LI, yet V3(ℝ) = L(S).


Let a1, a2, a3, a4 ∈ ℝ such that
a1(1, 0, 0) + a2(1, 1, 0) + a3(1, 1, 1) + a4(0, 1, 0) = (0, 0, 0)
⇒ a1 + a2 + a3 = 0, a2 + a3 + a4 = 0, a3 = 0    ...(A)

The above system (A) has 4 unknown quantities and 3 equations.

Therefore there exists a non-zero solution. Consequently, S is not LI.


Hence S is not a basis set of V3(ℝ).
Again, for any vector (a, b, c) of V3(ℝ), there exist a1,
a2, a3, a4 ∈ ℝ such that
(a, b, c) = a1(1, 0, 0) + a2(1, 1, 0) + a3(1, 1, 1) + a4(0, 1, 0)    ...(B)
e.g. a1 = a - b, a2 = b - c, a3 = c, a4 = 0 satisfy (B); therefore values
of a1, a2, a3, a4 exist for the relation (B).

Therefore the set S spans V3(ℝ), but it is not a basis set. Hence
proved.

Example 7: Show that the set S = {1, x, x 2, ..., xn} is a basis of


the vector space F[x] of all polynomials of degree at most n,
over the field ℝ of real numbers.

To show that S is LI:


Let a0, a1, ..., an ∈ ℝ such that
a0·1 + a1x + a2x2 + ... + anxn = 0 (the zero polynomial)
⇒ a0 = a1 = ... = an = 0, by equality of polynomials, so S is LI.

Again F[x] = L(S),

since any p(x) = a0 + a1x + a2x2 + ... + anxn ∈ F[x]

⇒ p(x) is a LC of elements 1, x, x2, ..., xn of S.


Therefore S is a basis of F[x].
First we state two equivalent ways to define a basis of a vector
space V.

Definition A: A set S = {u1, u2, ..., u n} of vectors is a basis of V if it


has the following two properties: (1) S is linearly independent. (2) S
spans V.
Definition B: A set S = {u1, u2, ..., un} of vectors is a basis of V if
every v ∈ V can be written uniquely as a linear combination of the
basis vectors.
Theorem: Let V be a vector space such that one basis has m
elements and another basis has n elements. Then m = n.
A vector space V is said to be of finite dimension n or n-
dimensional, written
dim V = n,
if V has a basis with n elements. The above theorem tells us that all bases of
V have the same number of elements, so this definition is well
defined.
The vector space {0} is defined to have dimension 0.
Suppose a vector space V does not have a finite basis. Then V is
said to be of infinite dimension or to be infinite-dimensional.

Lemma: Suppose {v1, v2, ..., vn} spans V, and suppose {w1, w2, ...,


wm} is linearly independent.

Then m ≤ n, and V is spanned by a set of the form
{w1, w2, ..., wm, vi1, vi2, ..., vi(n-m)},
i.e. by the m vectors w together with (n - m) of the vectors v.

➤ Examples of Bases 

Vector space Kn: Consider the following n vectors in Kn:


e1 = (1, 0, 0, ..., 0, 0), e2 = (0, 1, 0, ..., 0, 0), ..., en = (0, 0, 0,
..., 0, 1)
These vectors are linearly independent. (For example, they form a
matrix in echelon form.)
Furthermore, any vector u = (a1, a2, ..., an) in Kn can be written as a
linear combination of the above vectors. Specifically,
u = a1e1 + a2e2 + ... + anen.
Accordingly, the vectors form a basis of Kn called the usual or
standard basis of Kn. Thus (as one might expect), Kn has dimension
n. In particular, any other basis of Kn has n elements.

➤ Theorem on Bases
The following three theorems will be used frequently.

Theorem : Let V be a vector space of finite dimension n. Then:


(i) Any n + 1 or more vectors in V are linearly dependent.
The statement dim V = n means there exists a basis {v1, ..., vn} for
V. Let vn+1 be any further vector of V. Then by the definition of a basis, there
exist a1, ..., an ∈ F (F being the field over which V is defined) such
that
vn+1 = a1v1 + a2v2 + ... + anvn.
So, {v1, ..., vn+1} is linearly dependent.

(ii) Any linearly independent set S = {u1, u2, ..., un} with n


elements is a basis of V.
We just need to show that S spans V, that is, that each element of V
can be expressed as a linear combination of the elements of S. Let
w be an element of V. Since S is a maximal linearly independent
set, the elements w, u1, u2, ..., un are linearly dependent. Hence
there exist λ0, λ1, ..., λn in K, not all 0, such that
λ0w + λ1u1 + ... + λnun = 0    ...(1)
It is clear that λ0 ≠ 0:
otherwise the elements u1, u2, ..., un would be linearly dependent.
Thus, we obtain from (1)
w = -(1/λ0)(λ1u1 + ... + λnun).
This shows that w is a linear combination of the elements of S.


Thus, S is a basis of V.
(iii) Any spanning set T = {v1, v2, ..., vn} of V with n elements is a
basis of V.
Suppose v = a1v1 + a2v2 + ... + anvn and
v = b1v1 + b2v2 + ... + bnvn. Subtracting these equations side
from side, we obtain 0 = (a1 - b1)v1 + ... + (an - bn)vn.
Since the set {v1, v2, ..., vn} is linearly independent we have ai -
bi = 0, which means ai = bi for each i = 1, 2, ..., n.
Hence T is a basis of V.
Theorem: Suppose S spans a vector space V. Then :
(i) Any maximum number of linearly independent vectors in S form
a basis of V.
(ii) Suppose one deletes from S every vector that is a linear
combination of preceding vectors in S. Then the remaining vectors
form a basis of V.
Theorem: Let V be a vector space of finite dimension and let S =
{u1, u2, ...... ,un} be a set of linearly independent vectors in V. Then
S is part of a basis of V; that is, S may be extended to a basis of V.

Example: (a) The following four vectors in ℝ4 form a matrix in


echelon form:
(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)
Thus, the vectors are linearly independent, and, because dim ℝ4 =
4, the four vectors form a basis of ℝ4.
(b) The following n + 1 polynomials in Pn(t) are of increasing
degree:
1, t - 1, (t - 1)2, ..., (t - 1)n

Therefore, no polynomial is a linear combination of preceding


polynomials; hence, the polynomials are linearly independent.
Furthermore, they form a basis of Pn(t), because dim Pn(t) = n + 1.

➤ Quotient Space

Definition :

Let V(F) be a vector space and W be its subspace. Then for any α
∈ V, the set W + α = {w + α | w ∈ W} is called the right coset of W
wrt α in V.
Also the set α + W = {α + w | w ∈ W} is called the left coset of
W wrt α in V.

Since V is an abelian group wrt addition, therefore
W + α = α + W,
i.e. each right coset of W is equal to its corresponding left coset.


Therefore this is simply called a coset.
The set of all the cosets of W in V is denoted by V/W, i.e.
V/W = {W + α | α ∈ V}.
Some properties of cosets:
Let W be a subspace of the vector space V(F), then:

(i) W + 0 = 0 + W = W, i.e. W is the right as well as the left coset of


itself.

Addition and scalar multiplication for cosets: If W is a subspace of the vector space
V(F), then the addition and scalar multiplication of cosets in V/W
are defined as follows:
(W + α) + (W + β) = W + (α + β) and a(W + α) = W + aα, ∀ α, β ∈ V, a ∈ F.

Theorem: If W is a subspace of vector space V(F), then the set


V/W of all cosets of W in V is a vector space over F for the vector
addition and scalar multiplication defined as follows:
(W + α) + (W + β) = W + (α + β), a(W + α) = W + aα.

Proof: Vector addition is well defined:


For any α, α', β, β' ∈ V:
W + α = W + α' and W + β = W + β'
⇒ α - α' ∈ W and β - β' ∈ W
⇒ (α + β) - (α' + β') ∈ W    [W being a subspace]
⇒ W + (α + β) = W + (α' + β')
Therefore vector addition is well defined in V/W.

Scalar multiplication is well defined:


For any α, α' ∈ V, a ∈ F:
W + α = W + α' ⇒ α - α' ∈ W ⇒ a(α - α') ∈ W ⇒ aα - aα' ∈ W
⇒ W + aα = W + aα'.
Therefore scalar multiplication is also well defined in V/W.

Again for each α, β ∈ V, a ∈ F,
(W + α) + (W + β) = W + (α + β) ∈ V/W and a(W + α) = W + aα ∈ V/W.
Therefore V/W is closed for both the above operations.

Verification of Space axioms: 


V1. (V/W, +) is an abelian group.
In Group theory it has already been shown that (V/W, +) is an
abelian group.
V2. Scalar multiplication is distributive over addition of cosets:
a[(W + α) + (W + β)] = a[W + (α + β)] = W + a(α + β) = W + (aα + aβ)
= (W + aα) + (W + aβ) = a(W + α) + a(W + β)

Therefore scalar multiplication is distributive over addition of cosets


in V/W.
V3. Scalar addition is distributive over scalar multiplication:
(a + b)(W + α) = W + (a + b)α = W + (aα + bα) = (W + aα) + (W + bα)
= a(W + α) + b(W + α)
Therefore scalar addition is distributive over scalar multiplication in
V/W.
V4. Associativity for scalar multiplication:
(ab)(W + α) = W + (ab)α = W + a(bα) = a(W + bα) = a[b(W + α)]

Therefore scalar multiplication is associative in V/W.


V5. Let 1 ∈ F be the unit element of F; then for W + α ∈ V/W,
1(W + α) = W + 1α = W + α.

From the above discussion, it is clear that


V/W satisfies all the space axioms. Hence V/W is a vector space
over the field F. V/W is called the Quotient space or Factor
space of V relative to W.
Dimension of a Quotient Space
Theorem: If W be a subspace of a FDVS V(F), then:

dim(V/W) = dimV - dimW.

Proof. Let dim V = n and dim W = m. Clearly, m ≤ n.


Let S = {α1, α2, ..., αm} be a basis of W. Since S is a LI subset of
V, it can be extended to form a basis of V. Therefore
let the extended set S' = {α1, ..., αm, β1, ..., βn-m} be the
basis for V.
Now we will show that the set of (n - m) cosets
C = {W + β1, W + β2, ..., W + βn-m} is a basis of V/W.
C is LI:
Let a1, a2, ..., an-m ∈ F such that
a1(W + β1) + ... + an-m(W + βn-m) = W (the zero of V/W)
⇒ W + (a1β1 + ... + an-mβn-m) = W
⇒ a1β1 + ... + an-mβn-m ∈ W
⇒ a1β1 + ... + an-mβn-m = b1α1 + ... + bmαm for some bi ∈ F
[since any element of W can be expressed as a LC of elements of
its basis S]
⇒ a1 = ... = an-m = 0, S' being LI; hence C is LI.

V/W = L(C): Let W + α be any element of V/W where α ∈ V. But S' is


a basis of V, therefore α = LC of elements of S':
α = c1α1 + ... + cmαm + d1β1 + ... + dn-mβn-m, where ci, dj ∈ F.
Then
W + α = W + (d1β1 + ... + dn-mβn-m) = d1(W + β1) + ... + dn-m(W + βn-m),
since c1α1 + ... + cmαm ∈ W, S being the basis of W.

⇒ V/W = L(C) ∴ C
is a basis of V/W.
⇒  dim(V/W) = n - m = dim V - dim W.

Co-ordinate Representation of a vector 


With the help of a basis, we can determine the co-ordinates of
any vector α ∈ V, quite similar to the natural co-ordinates of α.
For example, if {e1, e2, ..., en} is the standard (natural) basis of Vn(F),
where e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, 0, ..., 1),
then any α = (a1, a2, ..., an) ∈ Vn can be expressed as a LC of the
basis vectors, i.e.
α = a1e1 + a2e2 + ... + anen.

If B = {α1, α2, ..., αn} is an arbitrary basis of the FDVS Vn(F), then


any vector α of Vn can be uniquely expressed as a LC of the basis
vectors α1, α2, ..., αn.

The n-tuple of scalars (a1, a2, ..., an) used in this LC is called


the co-ordinates of α relative to the basis B.
But in the basis B, the basis vectors can be written in any order. By
changing the order of the vectors, the co-ordinates of α are
changed.
Ordered Basis: If the vectors of B are kept in a particular order in a
well defined way, then B is called an Ordered Basis for V. The co-
ordinates of any vector of the space are unique wrt an ordered basis.
➤ Co-ordinates of a Vector.
Definition:
Let V(F) be a FDVS. Let B = (α1, α2, ..., αn) be an ordered basis
for V. Again let α ∈ V; then ∃ a unique n-tuple (x1, x2, ..., xn) of
scalars such that
α = x1α1 + x2α2 + ... + xnαn.

The n-tuple (x1, x2, ..., xn) is called the n-tuple of co-


ordinates of α relative to the ordered basis B. The scalar xi is
called the ith co-ordinate of α relative to the ordered basis B.

Example 1: Show that S = {(1, 1, 1), (0, 1, 1), (0, 0, 1)} is a basis
for the space V3(ℝ). Also find the co-ordinates of α = (3, 1, -4) ∈
V3(ℝ) relative to this basis.

Clearly S ⊂ V3(ℝ).
S is LI: Let there exist a1, a2, a3 ∈ ℝ such that
a1(1, 1, 1) + a2(0, 1, 1) + a3(0, 0, 1) = (0, 0, 0)
⇒ a1 = 0, a1 + a2 = 0, a1 + a2 + a3 = 0 ⇒ a1 = a2 = a3 = 0,
so S is LI, and being a set of 3 LI vectors of V3(ℝ), it is a basis.
Co-ordinates of α: let (3, 1, -4) = x1(1, 1, 1) + x2(0, 1, 1) + x3(0, 0, 1)
⇒ x1 = 3, x1 + x2 = 1, x1 + x2 + x3 = -4 ⇒ x1 = 3, x2 = -2, x3 = -5.
Therefore the required co-ordinates are (3, -2, -5).
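Finding co-ordinates is just solving a linear system, which sympy can do directly (a sketch of this example, with the basis vectors written as the columns of a matrix):

from sympy import Matrix

B = Matrix([[1, 0, 0], [1, 1, 0], [1, 1, 1]])   # basis vectors as columns
alpha = Matrix([3, 1, -4])
print(B.solve(alpha).T)   # Matrix([[3, -2, -5]]): co-ordinates (3, -2, -5)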

Example 2: Find the co-ordinates of the vector (a, b, c) of V3(ℝ)


relative to its basis S = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}.

Let a1, a2, a3 ∈ ℝ such that
(a, b, c) = a1(1, 1, 1) + a2(1, 1, 0) + a3(1, 0, 0)
⇒ a1 + a2 + a3 = a, a1 + a2 = b, a1 = c ⇒ a1 = c, a2 = b - c, a3 = a - b.
Therefore the required co-ordinates are (c, b - c, a - b).

Example 3: Find the co-ordinates of the vector p(x) = 2x2 - 5x +


6 of P2[x], the space of all polynomials over ℝ of degree ≤ 2, relative to
its basis S = {1, x - 1, x2 - x + 1}.
Let a1, a2, a3 ∈ ℝ such that
2x2 - 5x + 6 = a1·1 + a2(x - 1) + a3(x2 - x + 1) = a3x2 + (a2 - a3)x + (a1 - a2 + a3).
Comparing coefficients: a3 = 2, a2 - a3 = -5, a1 - a2 + a3 = 6
⇒ a3 = 2, a2 = -3, a1 = 1.
Therefore the required co-ordinates are (1, -3, 2).

Example 4: If V(ℝ) be the vector space over the real field of all

2 × 2 matrices and the given set {B1, B2, B3, B4} be a basis

of it, then find the co-ordinates of the given matrix relative to the


basis.

Let a1, a2, a3, a4 ∈ ℝ such that the given matrix equals
a1B1 + a2B2 + a3B3 + a4B4.

Solving the equations, a1 = -7, a2 = 11, a3 = -21, a4 = 30.


Therefore the required co-ordinates are (-7, 11, -21, 30).
Example 5: Let V3(ℝ) be a FDVS. Find the co-ordinate vector of
v = (3, 1, -4) relative to the following basis: v1 = (0, 0, 1); v2 = (0, 1,
1); v3 = (1, 1, 1)

Let a1, a2, a3 ∈ ℝ such that
v = a1v1 + a2v2 + a3v3,

i.e. (3, 1, -4) = a1(0, 0, 1) + a2(0, 1, 1) +
a3(1, 1, 1)
⇒ a3 = 3, a2 + a3 = 1, a1 + a2 + a3 = -4
⇒ a1 = -5, a2 = -2, a3 = 3
∴ The required co-ordinates of v wrt the given basis are [v] = (-5,
-2, 3)
Linear Transformation

We begin with a definition.

Definition: Let V and U be vector spaces over the same field K. A


mapping F: V → U is called a linear mapping or linear
transformation if it satisfies the following two conditions:

1. For any vectors v, w ∈ V, F(v + w) = F(v) + F(w).


2. For any scalar k and vector v ∈ V, F(kv) = kF(v).

Namely, F: V → U is linear if it “preserves” the two basic operations


of a vector space, that of vector addition and that of scalar
multiplication.
Substituting k = 0 into condition (2), we obtain F(0) = 0. Thus, every
linear mapping takes the zero vector into the zero vector.

Now for any scalars a, b ∈ K and any vector v, w ∈ V,


we obtain F(av + bw) = F(av) + F(bw) = aF(v) + bF(w)
More generally, for any scalars ai ∈ K and any vectors vi ∈ V, we
obtain the following basic property of linear mappings:
F(a1v1 + a2v2 + ... + anvn) = a1F(v1) + a2F(v2) + ... + anF(vn).
Remark: A linear mapping F: V→ U is completely characterized by


the condition
F(av + bw) = aF(v) + bF(w)     (*)
and so this condition is sometimes used as its definition.
Remark: The term linear transformation rather than linear mapping
is frequently used for linear mappings of the form F : ℝn → ℝm.
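As a small symbolic check of the characterizing condition (*), here is a sympy sketch using the map F(x, y) = (2x + 3y, 4x - 5y) that appears in a later example:

from sympy import Matrix, symbols

a, b, x1, y1, x2, y2 = symbols('a b x1 y1 x2 y2')
A = Matrix([[2, 3], [4, -5]])               # F(v) = A*v
v, w = Matrix([x1, y1]), Matrix([x2, y2])
print((A*(a*v + b*w) - (a*(A*v) + b*(A*w))).expand())   # Matrix([[0], [0]])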

Some definitions

1. Linear Functional: If t is a linear transformation from V(F) to


F(F) i.e. t: V → F, then t is called a Linear Functional on V.
2. Linear Operator: If t is a linear transformation from V to V
i.e., t: V → V, then t is called a Linear Operator on V.
3. Homomorphic Image: If t: V→V’ is an onto homomorphism,
then V’ is called the Homomorphic image of V. 
Examples of Linear Transformation:
Example: Let V(F) and V'(F) be two vector spaces. Then
t : V → V', t(α) = 0' ∀ α ∈ V
is a linear transformation from V to V',
since for any a, b ∈ F and α, β ∈ V
t(aα + bβ) = 0'

= a0' + b0'
= at(α) + bt(β)
This is called the zero linear mapping.

Example: Let V(F) be a vector space, then


t : V → V, t(α) = α ∀ α ∈ V
is a linear transformation from V to V (i.e. a linear operator on V),
since for any a, b ∈ F and α, β ∈ V
t(aα + bβ) = aα + bβ
= at(α) + bt(β)

Example: Let V3(F) and V2(F) be two vector spaces over the field
F, then the given map t : V3 → V2

is a linear transformation from V3 to V2, since for any a, b ∈ F and


(a1, a2, a3), (b1, b2, b3) ∈ V3
Example: t : V2(ℝ) → ℝ(ℝ), t(a1, a2) = a1 + a2 is a linear
transformation from V2(ℝ) to ℝ(ℝ), i.e. a linear functional on V2.
Since for any a, b ∈ R and (a 1, a2); (b1, b2) ∈ V2

Example: t : V2(F) → F(F), t(a1, a2) = a1 is a linear functional on


V2(F), since for any a, b ∈ F and (a1, a2); (b1, b2) ∈ V2.

Properties of Linear mapping

Theorem: If t is a linear mapping from the vector space V(F) to


V'(F), then:
(a) t(0) = 0' where 0 and 0' are the zero vectors of V and V'
respectively.
(b) t(-α) = -t(α),    ∀ α ∈ V
(c) t(α - β) = t(α) - t(β),    ∀ α, β ∈ V

Proof:
(a) Let α ∈ V. Since 0 is the zero element in V, therefore
α + 0 = α
⇒ t(α + 0) = t(α)
⇒ t(α) + t(0) = t(α)    [∵ t is a linear transformation]
⇒ t(α) + t(0) = t(α) + 0'    [∵ 0' is the zero element in V']
⇒ t(0) = 0'    [by cancellation law, (V', +) being an abelian group]

(b) ∵ α ∈ V ⇒ -α ∈ V is the additive inverse of α, therefore


α + (-α) = 0
⇒ t[α + (-α)] = t(0) = 0'    [by (a)]
⇒ t(α) + t(-α) = 0'    [∵ t is a linear transformation]
⇒ t(-α) is the additive inverse of t(α) in V'
⇒ t(-α) = -t(α)

(c) t(α - β) = t[α + (-β)]

= t(α) + t(-β)    [t being a linear mapping]

= t(α) - t(β)    [by (b)]

Kernel of a Homomorphism : Definition:

Let V(F) and V’(F) be two vector spaces and t be a homomorphism


from V to V'. Then the set K of all those elements of V whose t-
images are the zero vector 0' of V' is called the kernel of t.
It is generally denoted by Ker t or ker t or Ker(t),
i.e. Ker t = K = {α ∈ V | t(α) = 0'}

Remark: The kernel of the linear transformation of t is called Null


space of t and is denoted by N(t).
Theorem: The kernel of a linear transformation is a subspace.

Proof: Let V(F) and V'(F) be the two vector spaces and 0 and 0'
be their zero vectors respectively. Let t be a linear transformation
from V to V' and K be the kernel of t, i.e. Ker t = K = {α ∈ V | t(α) =
0'}.
To show that K is a subspace of V:

Therefore let α, β ∈ K, then by the definition of kernel,


t(α) = 0’ and t(β) = 0’    ...(1)
Now if a and b be any two elements of the field F, then
t(aα + bβ) = at(α) + bt(β)    [∵ t is a linear transformation]
= a(0') + b(0’)    [by (1)]
= 0’ + 0’
or t(aα + bβ) = 0'

⇒ aα + bβ ∈ K    [by definition of the kernel]
Therefore α, β ∈ K and a, b ∈ F ⇒ aα + bβ ∈ K
∴ K is a subspace of V.
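When t is given by a matrix, its kernel is the nullspace of that matrix, so the subspace just constructed can be computed directly. A sympy sketch, using the projection f(a1, a2, a3) = (a1, a2) treated in a later example:

from sympy import Matrix

F = Matrix([[1, 0, 0], [0, 1, 0]])   # represents f(a1, a2, a3) = (a1, a2)
print(F.nullspace())                 # [Matrix([[0], [0], [1]])], i.e. Ker f = {(0, 0, a3)}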

Isomorphism of Vector Spaces : Definition


Let V(F) and V'(F) be the two vector spaces, then a map
t : V → V' is called an isomorphism, if t is :
(i) one-one

(ii) onto and

(iii) a linear mapping i.e.

t(aα + bβ) = at(α) + bt(β), ∀ a, b ∈ F, ∀ α, β ∈ V


Under these conditions, the vector spaces V and V' are
called isomorphic, and symbolically this is expressed as
V ≅ V'.
Again, the vector space V' is also called the isomorphic image of V.
Example 1: Prove that the mapping t : V2(ℝ) → V3(ℝ) which is
defined by t(a, b) = (a, b, 0) is a linear transformation from V2 to
V3.

Let α = (a1, b1) and β = (a2, b2) be any two elements of V2(ℝ) and a,
b ∈ ℝ, then

Therefore t is a linear transformation from V2 to V3.

Example 2: Show that the mapping t : V3(ℝ) → V2(ℝ) where t(a,


b, c) = (c, a + b) is a linear mapping.

Let α = (a1, b1, c1) and β = (a2, b2, c2) be any two elements of the
space V3(ℝ) and a, b ∈ ℝ, then

Therefore t is a linear transformation from V3 to V2.


Example 3: Show that the mapping f : V3(F) → V2(F) which is
defined by f(a1, a2, a3) = (a1, a2) is a homomorphism from
V3 onto V2. Also find its kernel.

Let α = (a1, a2, a3) and β = (b1, b2, b3) be any two elements of the
space V3(F) and a, b ∈ F, then

∴ f is a homomorphism from the space V3 to V2.

Onto: Let (a1, a2) ∈ V2; then corresponding to (a1, a2), there exists


(a1, a2, 0) ∈ V3 for which f(a1, a2, 0) = (a1, a2)

⇒ there exists an f-preimage of each element of V2 in V3.


∴ f is onto homomorphism.

Kernel: Let Ker f = K; then K is the set of all those vectors
of V3 which map onto the zero vector (0, 0) of V2, i.e.
K = {(0, 0, a3) | a3 ∈ F}.
Example 4: If P[x] denotes the vector space over the field of


real numbers ℝ of all polynomials in x of degree ≤ n, then
prove that the mapping f : P[x] → P[x] where

 is a linear transformation.

Let p(x) = a0 + a1x + ... + anxn and
q(x) = b0 + b1x + ... + bnxn be any two elements of P[x],
where a0, a1, ..., an; b0, b1, ..., bn are all real numbers. Then

Again, for any a ∈ ℝ, 

Therefore f is a linear transformation.


Example 5: Show that the mapping f : V3(ℝ) → V3(ℝ) which is
defined by f(a1, a2, a3) = (a1, a2, a·a3), where a ≠ 0 is fixed in ℝ, is an
isomorphism on V3(ℝ).

One-one: Let α = (a1, a2, a3) and β = (b1, b2, b3) be any two


elements of the space V3(ℝ). Then

f(α) = f(β) ⇒ (a1, a2, a·a3) = (b1, b2, a·b3)
⇒ a1 = b1, a2 = b2, a3 = b3    [∵ a ≠ 0]
⇒ α = β.
∴ f is one-one.
Onto: For every α = (a1, a2, a3) ∈ V3(ℝ) there

exists (a1, a2, a3/a) ∈ V3(ℝ) for which f(a1, a2, a3/a) = (a1, a2, a3).

∴ f is onto.
f is a linear transformation: For any a’, b’ ∈ ℝ

∴ f is a linear transformation.
Consequently, f is an isomorphism on V3(ℝ).
Example 6: Show that the mapping f : V2(ℝ) → V2(ℝ), where
f(x, y) = (x cos θ - y sin θ, x sin θ + y cos θ), is an isomorphism
on V2(ℝ).

One-one: Let α = (a1, a2) and β = (b1, b2) be any two elements of the


vector space V2(ℝ); then
f(α) = f(β) gives α = β (on solving the resulting equations).
∴ f is one-one.
Onto: Since for every (x cos θ - y sin θ, x sin θ + y cos θ) ∈ V2(ℝ)
there exists (x, y) ∈ V2(ℝ) such that f(x, y) = (x cos θ - y sin θ, x sin θ +
y cos θ),
∴ f is onto.
f is a linear transformation : For any a, b ∈ ℝ

∴ f is a linear transformation.
Consequently, f is an isomorphism on V 2(ℝ).

Example 7: If V(F) be the vector space of all n × n matrices


over the field F and M ∈ V be a given matrix, then prove that
the mapping
f : V → V, f(A) = AM + MA, ∀ A ∈ V
is a linear transformation.

Let A, B ∈ V, then
f(A + B) = (A + B)M + M(A + B)
= AM + BM + MA + MB    [by distributivity of matrix multiplication]
= (AM + MA) + (BM + MB)    [by commutativity and associativity of +]
= f(A) + f(B)    ...(1)
Again, for any a ∈ F
f(aA) = (aA)M + M(aA)
= a(AM) + a(MA)
= a(AM + MA)    By scalar multiplication of Matrices]
= af(A)    ...(2)
From (1) and (2), f is a linear transformation on V(F).
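The two identities (1) and (2) can also be verified symbolically. A sympy sketch for the 2 × 2 case, with the matrix entries as arbitrary symbols:

from sympy import Matrix, symbols

a = symbols('a')
A = Matrix(2, 2, symbols('a1:5'))
B = Matrix(2, 2, symbols('b1:5'))
M = Matrix(2, 2, symbols('m1:5'))

f = lambda X: X*M + M*X
print((f(A + B) - (f(A) + f(B))).expand())   # the 2x2 zero matrix
print((f(a*A) - a*f(A)).expand())            # the 2x2 zero matrix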

Example 8: If the mapping t : V(F) → V'(F) is a one-one onto


linear map, then show that t⁻¹ : V' → V will also be a linear map.

Let α', β' ∈ V'. Since t is one-one onto, there exist unique α, β ∈ V


such that
t(α) = α', i.e. t⁻¹(α') = α    ...(1)
t(β) = β', i.e. t⁻¹(β') = β    ...(2)
∵ t is linear, therefore
t(aα + bβ) = at(α) + bt(β)    [by (1)]

= aα' + bβ'    [by (2)]
⇒ t⁻¹(aα' + bβ') = aα + bβ = at⁻¹(α') + bt⁻¹(β')    ...(3)
and this holds for any
a, b ∈ F.    ...(4)
From (3) and (4), it is clear that t⁻¹ is also a linear transformation.

Example: (a) Let F : ℝ3 → ℝ3 be the "projection" mapping into the


xy-plane; that is, F is the mapping defined by F(x, y, z) = (x, y, 0).
We show that F is linear. Let v = (a, b, c) and w = (a', b', c'). Then
F(v + w) = F(a + a', b + b', c + c') = (a + a', b + b', 0)
= (a, b, 0) + (a', b', 0) = F(v) + F(w)
and, for any scalar k,


F(kv) = F(ka, kb, kc) = (ka, kb, 0) = k(a, b, 0) = kF(v)
Thus, F is linear.

Matrices as Linear Mappings 

Let A be any real m × n matrix. Recall that A determines a


mapping FA : Kn → Km defined by FA(v) = Av (where the vectors in Kn and
Km are written as columns). We show FA is linear. By matrix
multiplication,
FA(v + w) = A(v + w) = Av + Aw = FA(v) + FA(w)
and FA(kv) = A(kv) = k(Av) = kFA(v).
In other words, using A to represent the mapping, we have


A(v + w) = Av + Aw and A(kv) = k(Av)
Thus, the matrix mapping A is linear.

Vector Space Isomorphism

Definition: Two vector spaces V and U over K are isomorphic,


written V ≅ U, if there exists a bijective (one-to-one and onto) linear
mapping F : V → U. The mapping F is then called an isomorphism
between V and U.
Consider any vector space V of dimension n and let S be any basis
of V. Then the mapping
v ↦ [v]S,
which maps each vector v ∈ V into its coordinate vector [v]S, is an


isomorphism between V and K n.

Some Theorem on Space Morphism 

Theorem: If W be a subspace of a vector space V(F), then the


quotient space V/W, is a homomorphic image of V with kernel W.

Proof. Let a mapping f from V to V/W be defined as follows:
f(α) = W + α, ∀ α ∈ V.
To prove (a) f is an onto linear transformation and (b) Ker f = W.


(a) If α, β ∈ V and a, b ∈ F, then
f(aα + bβ) = W + (aα + bβ)
= (W + aα) + (W + bβ)
= a(W + α) + b(W + β)
= af(α) + bf(β)
∴ f is a linear transformation.

Again for every W + α ∈ V/W, there exist α ∈ V such that

f(α) = W + α
∴ f is onto i.e. f(V) = V/W
Therefore f : V → V/W is onto homomorphism
⇒ V/W is homomorphic image of V.
(b) Ker f = {α ∈ V | f(α) = W}    [∵ W is the zero element of V/W]

= {α ∈ V | W + α = W}

= {α ∈ V | α ∈ W} = W

Theorem : [Fundamental theorem on Space Homomorphism] :


If f be a linear transformation from a vector space V(F) onto V'(F),
then
V/Ker f ≅ V'.
Proof: Let Ker f = K.

Now define a linear transformation ϕ from V/K to V' as follows:


ϕ(K + α) = f(α), ∀ α ∈ V    ...(1)
ϕ is well defined: Here it will be shown that for any two elements
K + α and K + β of V/K,
K + α = K + β ⇒ α - β ∈ K ⇒ f(α - β) = 0'
⇒ f(α) - f(β) = 0'    [∵ f is a linear transformation]
⇒ f(α) = f(β), i.e. ϕ(K + α) = ϕ(K + β).

∴ ϕ is well defined.

ϕ is one-one: Let K + α, K + β ∈ V/K, then


ϕ(K + α) = ϕ(K + β) ⇒ f(α) = f(β)    [by (1)]
⇒ f(α - β) = 0' ⇒ α - β ∈ K ⇒ K + α = K + β.
∴ ϕ is one-one.
ϕ is onto: ∵ f is onto, therefore for every α' ∈ V', there exists α ∈ V
such that
f(α) = α', i.e. ϕ(K + α) = α'    [by (1)]
Therefore for α' ∈ V' there exists K + α ∈ V/K such that ϕ(K + α) = α'.

∴ ϕ is onto.
ϕ is a linear transformation: For any a, b ∈ F,
ϕ[a(K + α) + b(K + β)] = ϕ[K + (aα + bβ)] = f(aα + bβ)
= af(α) + bf(β) = aϕ(K + α) + bϕ(K + β)

∴ ϕ is a linear transformation.

Hence ϕ is an isomorphism from V/K to V'.


Consequently, V/K = V/Ker f ≅ V'.

Theorem : [Isomorphism theorem for Vector Spaces]


Two FDVS V and V’ over the same field F are isomorphic iff they
are of the same dimensions
i.e. V ≅ V' ⇔ dim V = dim V'.

Proof: Let V(F) and V'(F) be the two vector spaces and dim V = n.

Firstly, let V ≅ V'; then there exists a map f from V to V' which is a
one-one onto linear transformation.
Let B = {α1, α2, ..., αn} be a basis of the space V.
If B' = {f(α1), f(α2), ..., f(αn)},
then to show that B' is a basis of V'.

B' is LI: Let a1, a2, ..., an ∈ F be such that


a1f(α1) + a2f(α2) + ... + anf(αn) = 0'
⇒ f(a1α1 + a2α2 + ... + anαn) = 0'    [∵ f is a linear transformation]
⇒ a1α1 + a2α2 + ... + anαn = 0, where 0 ∈ V is the zero element
[∵ f is one-one and f(0) = 0']

Therefore a1 = a2 = ... = an = 0, B being LI; hence B' is LI.

L(B') = V': Let α' ∈ V'. Since f is onto,


therefore for every α' ∈ V' there exists α ∈ V such that f(α) = α'.
Writing α = a1α1 + ... + anαn, we get
α' = f(α) = a1f(α1) + ... + anf(αn)

⇒ α' ∈ V' is a LC of vectors of B'


⇒ L(B') = V'

Hence B’ is a basis of V’. Consequently dim V’ = n


∴ dim V = n = dim V’

Therefore V ≅ V' ⇒ dim V = dim V'.

Conversely: Let dim V = dim V'.


Let B = {α1, α2, ..., αn} and B' = {α1', α2', ..., αn'} be bases of V
and V' respectively.

If α ∈ V, then there exist a1, a2, ..., an ∈ F


such that
α = a1α1 + a2α2 + ... + anαn.
Define a map f from V to V' as follows:


f : V → V' such that
f(α) = a1α1' + a2α2' + ... + anαn'    ...(1)

To prove that f is an isomorphism:


f is one-one: Let α, β ∈ V where
α = a1α1 + ... + anαn, β = b1α1 + ... + bnαn.
Then f(α) = f(β)
⇒ a1α1' + ... + anαn' = b1α1' + ... + bnαn'
⇒ ai = bi for each i (B' being LI) ⇒ α = β.
∴ f is one-one.
f is onto: Let α' = a1α1' + ... + anαn' be any element of V'; then
there exists α = a1α1 + ... + anαn ∈ V such that f(α) = α'.

∴ f is onto.
f is a linear transformation: For any a, b ∈ F,
f(aα + bβ) = (aa1 + bb1)α1' + ... + (aan + bbn)αn'
= a(a1α1' + ... + anαn') + b(b1α1' + ... + bnαn') = af(α) + bf(β)    [by (1)]

∴ f is a linear transformation.

Therefore  f is an isomorphism from V to V ’ .

Consequently, V ≅ V'.
Hence V ≅ V' ⇔ dim V = dim V'.

Theorem: Every n-dimensional vector space V(F) is isomorphic to


Vn(F) i.e.
V(F) ≅ Vn(F)

Proof: dim V(F) = n; therefore let B = {α1, α2, ..., αn} be any basis of


the space V(F). Then every vector of V can be expressed as a LC
of elements of B. Therefore for any α ∈ V there exist a1, a2, ..., an ∈
F such that
α = a1α1 + a2α2 + ... + anαn.
Now define a map f from V(F) to Vn(F) as follows:
f(α) = (a1, a2, ..., an).
To prove that f is an isomorphism: 


f is one-one: Let α, β ∈ V where
α = a1α1 + ... + anαn, β = b1α1 + ... + bnαn;
then f(α) = f(β)
⇒ (a1, ..., an) = (b1, ..., bn) ⇒ ai = bi for each i ⇒ α = β.
∴ f is one-one.

f is onto: Let (a1, a2, ..., an) ∈ Vn(F); then there exists the element


α = a1α1 + ... + anαn in V such that
f(α) = (a1, a2, ..., an).
∴ f is onto.

f is a linear transformation: Let α, β ∈ V (as above) and a, b ∈ F; then
f(aα + bβ) = (aa1 + bb1, ..., aan + bbn)
= a(a1, ..., an) + b(b1, ..., bn) = af(α) + bf(β).

∴ f is a linear transformation.
Hence f is an isomorphism from V(F) to V n(F).
Consequently, V(F) ≅ Vn(F).

Range space of a linear transformation: Definition:

Let V(F) and V'(F) be two vector space and t: V → V’ be a linear


transformation. Then the image set of V under t is called the range
space of t.
This is expressed by R(t) or Im(t).

Theorem: If t is a linear transformation from a vector space V(F) to


V’(F), then the range of t is a subspace of V’.
Proof: Let the range of t be R(t); then
R(t) = {t(α) | α ∈ V}.
Clearly, R(t) ⊂ V' and R(t) ≠ φ.


Let β1, β2 ∈ R(t); then there exist α1, α2 ∈ V such that
t(α1) = β1 and t(α2) = β2.
But V(F) is a vector space, therefore aα1 + bα2 ∈ V for any a, b ∈ F.
Therefore aβ1 + bβ2 = at(α1) + bt(α2) = t(aα1 + bα2) ∈ R(t).
∴ R(t) is a vector subspace of V'.

Definition: A linear transformation t: U→V is called non-singular if


t(u) = 0 implies u = 0.
NOTE: Moreover, U and V must be of the same dimension; otherwise t :
U → V may not be surjective.
Example 1: A linear transformation t : U → V is an
isomorphism iff it is non singular. 

To show that t is an isomorphism ⇒ t is non-singular: t is one-


one, therefore 0 ∈ U is the only vector whose image is 0 ∈ V, i.e. only
the zero vector is in the null space of t. Hence t is non-singular.
Conversely: if t is non-singular ⇒ t is an isomorphism.
Let t(v1) = t(v2)
⇒ t(v1) - t(v2) = 0 ⇒ t(v1 - v2) = 0 ⇒ v1 - v2 = 0    [t being non-singular]
⇒ v1 = v2.
Hence t is one-one.

t being one-one and U, V of the same dimension, t is onto also.

∴ t is an isomorphism.

Example 2: If t : U → V is a linear transformation; then t is non


singular if and only if the image under t of every LI set in U is a LI set in V.
First let t be non-singular, i.e. only the zero vector is in the null space
of t.

Let B = {v1, v2, ..., vn} be a LI subset of U; then its image set under
t is B' = {t(v1), t(v2), ..., t(vn)}.
Let a1t(v1) + ... + ant(vn) = 0 ⇒ t(a1v1 + ... + anvn) = 0    [t is linear]
⇒ a1v1 + ... + anvn = 0    [t is non-singular]
⇒ a1 = a2 = ... = an = 0    [B is LI]
Hence the image B' of B under t is LI.

Conversely: Let the t-image of any LI set of U be LI in V.


Consider a non-zero vector v of U; then the set
S = {v} is a LI set in U.

Now the t-image S' of S contains only the element t(v):

S' = {t(v)}.

S' is LI in V, therefore for v ≠ 0, t(v) ≠ 0,


because the set containing the zero element alone is LD, not LI.
Consequently, the null space of t is the zero subspace, i.e. t is
non-singular.

Kernel and Image of a Mapping

We begin by defining two concepts.


Definition Let F : V → U be a linear mapping. The kernel of F,
written Ker F, is the set of elements in V that map into zero vector 0
in U; that is,
Ker F = {v ∈ V : F(v) = 0}
The image (or range) of F, written Im F, is the set of image points
in U; that is,
Im F = {u ∈ U : there exists v ∈ V for which F(v) = u}

Theorem: Let F : V → U be a linear mapping. Then the kernel of F


is a subspace of V and the image of F is a subspace of U.
Now suppose that v 1, v2, ..., vm span a vector space V and that F : V
→ U is linear. We show that F(v 1), F(v2), ..., F(v m) span Im F. Let u
∈ Im F. Then there exists v ∈ V such that F(v) = u. Because the v i’s
span V and v ∈ V, there exist scalars a1, a2, ..., am for which
v = a1v1 + a2v2 + ... + amvm.
Therefore,
u = F(v) = a1F(v1) + a2F(v2) + ... + amF(vm).
Thus, the vectors F(v1), F(v2), ..., F(vm) span Im F.

Proposition: Suppose v1, v2, ..., vm span a vector space V, and


suppose F : V → U is linear. Then F(v1), F(v2), ..., F(vm) span Im
F.

Example: (a) Let F : ℝ3 → ℝ3 be the projection of a vector v into


the xy-plane, i.e.
F(x, y, z) = (x, y, 0)
Clearly the image of F is the entire xy-plane—that is, points of the
form (x, y, 0). Moreover, the kernel of F is the z-axis—that is, points
of the form (0, 0, c). That is,
Im F = {(a, b, c) : c = 0} = xy-plane and Ker F = {(a, b, c) : a = 0, b =
0} = z-axis

Kernel and Image of Matrix Mappings

Consider, say, a 3 × 4 matrix A and the usual basis {e1, e2, e3, e4} of
K4 (written as columns):

Recall that A may be viewed as a linear mapping A : K 4 → K3,


where the vectors in K 4 and K3 are viewed as column vectors. Now
the usual basis vectors span K 4 so their images Ae 1, Ae2, Ae3, Ae4,
span the image of A. But the vectors Ae 1, Ae2, Ae3, Ae4 are
precisely the columns of A:

Thus, the image of A is precisely the column space of A.


On the other hand, the kernel of A consists of all vectors v for
which Av = 0. This means that the kernel of A is the solution space
of the homogeneous system AX = 0, called the null space of A. 

Proposition: Let A be any m × n matrix over a field K viewed as a


linear map A : Kn → Km. Then
Ker A = nullsp(A) and Im A = colsp(A)
Here colsp(A) denotes the column space of A, and nullsp(A)
denotes the null space of A.

Rank and Nullity of a Linear Mapping

Let F : V → U be a linear mapping. The rank of F is defined to be


the dimension of its image, and the nullity of F is defined to be the
dimension of its kernel; namely,
rank(F) = dim(Im F) and nullity(F) = dim(Ker F)

Theorem: Let V be of finite dimension, and let F : V → U be linear.


Then
dim V = dim(Ker F) + dim(Im F) = nullity(F) + rank(F)
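A quick sympy check of this identity, using the matrix of the mapping F : ℝ4 → ℝ3 treated in a later example, F(x, y, z, t) = (x - y + z + t, 2x - 2y + 3z + 4t, 3x - 3y + 4z + 5t):

from sympy import Matrix

A = Matrix([[1, -1, 1, 1], [2, -2, 3, 4], [3, -3, 4, 5]])
rank, nullity = A.rank(), len(A.nullspace())
print(rank, nullity, rank + nullity)   # 2 2 4, and the domain R^4 has dimension 4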

Matrix Representation of a Linear Operator

Let T be a linear operator (transformation) from a vector space V


into itself, and suppose S = {u1, u2, ..., un} is a basis of V. Now
T(u1), T(u2), ..., T(un) are vectors in V, and so each is a linear
combination of the vectors in the basis S; say,
T(u1) = a11u1 + a12u2 + ... + a1nun, ..., T(un) = an1u1 + an2u2 + ... + annun.
Definition
The transpose of the above matrix of coefficients, denoted by mS(T)
or [T]S, is called the matrix representation of T relative to the basis
S, or simply the matrix of T in the basis S.
Example: Let F: ℝ2 → ℝ2 be the linear operator defined by F(x, y)
= (2x + 3y, 4x - 5y).

(a) Find the matrix representation of F relative to the basis S = {u 1,


u2} = {(1, 2), (2, 5)}.

1. First find F(u1), and then write it as a linear combination of the
basis vectors u1 and u2. (For notational convenience, we use
column vectors.) We have
F(u1) = F(1, 2) = (8, -6) and (8, -6) = x(1, 2) + y(2, 5),
i.e. x + 2y = 8, 2x + 5y = -6.

Solve the system to obtain x = 52, y = -22. Hence, F(u1) =


52u1 - 22u2.
2. Next find F(u2), and then write it as a linear combination of
u1 and u2:
F(u2) = F(2, 5) = (19, -17) and (19, -17) = x(1, 2) + y(2, 5),
i.e. x + 2y = 19, 2x + 5y = -17.
Solve the system to get x = 129, y = -55. Thus F(u2) = 129u1 -
55u2.
Now write the coordinates of F(u1) and F(u2) as columns to
obtain the matrix
[F]S = [52 129; -22 -55].
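The two systems solved above can be handled in one stroke with sympy (a sketch reproducing this example):

from sympy import Matrix

U = Matrix([[1, 2], [2, 5]])    # basis vectors u1, u2 as columns
F = Matrix([[2, 3], [4, -5]])   # F(x, y) = (2x + 3y, 4x - 5y)
mS = Matrix.hstack(U.solve(F*U[:, 0]), U.solve(F*U[:, 1]))
print(mS)   # Matrix([[52, 129], [-22, -55]])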

Example: Let F : ℝ4 → ℝ3 be the linear mapping defined by


F(x, y, z, t) = (x - y + z + t, 2x - 2y + 3z + 4t, 3x - 3y + 4z + 5t)
(a) Find a basis and the dimension of the image of F.
First find the images of the usual basis vectors of ℝ4:
F(1, 0, 0, 0) = (1, 2, 3),    F(0, 0, 1, 0) = (1, 3, 4)
F(0, 1, 0, 0) = (-1, -2, -3), F(0, 0, 0, 1) = (1, 4, 5)
By the above proposition, the image vectors span Im F. Hence, form the
matrix M whose rows are these image vectors and row reduce to
echelon form; the non-zero rows obtained are (1, 2, 3) and (0, 1, 1).
Thus, (1, 2, 3) and (0, 1, 1) form a basis of Im F. Hence, dim(Im F)
= 2 and rank(F) = 2.

Change of Basis
Let V be an n-dimensional vector space over a field K. We have
shown that once we have selected a basis S of V, every vector v ∈
V can be represented by means of an n-tuple [v]S in Kn, and every
linear operator T in A(V) can be represented by an n × n matrix over
K.

Definition
Let S = {u 1, u2, ...., u n} be a basis of a vector space V, and let S’ =
{v1, v2, ..., vn} be another basis. (For reference, we will call S the
“old” basis and S’ the “new" basis.) Because S is a basis, each
vector in the “new” basis S' can be written uniquely as a linear
combination of the vectors in S; say, 

Let P be the transpose of the above matrix of coefficients; that is, let P


= [pij], where pij = aji. Then P is called the change-of-basis matrix
(or transition matrix) from the "old" basis S to the "new" basis S'.
The following remarks are in order.
Remark: The above change-of-basis matrix P may also be viewed
as the matrix whose columns are, respectively, the coordinate
column vectors of the "new" basis vectors vj relative to the "old"
basis S; namely,
P = [[v1]S, [v2]S, ..., [vn]S]
Remark: Analogously, there is a change-of-basis matrix Q from the
"new" basis S' to the "old" basis S. Similarly, Q may be viewed as
the matrix whose columns are, respectively, the coordinate column
vectors of the "old" basis vectors uj relative to the "new" basis S';
namely,
Q = [[u1]S', [u2]S', ..., [un]S']

Remark : Because the vectors v 1, v2, ..., vn in the new basis S’ are
linearly independent, the matrix P is invertible. Similarly, Q is
invertible. In fact, we have the following proposition.

Proposition: Let P and Q be the above change-of-basis matrices.


Then Q = P -1.
Now suppose S = {u1, u2, ..., un} is a basis of a vector space V, and
suppose P = [pij] is any nonsingular matrix. Then the n vectors
vj = p1ju1 + p2ju2 + ... + pnjun,    j = 1, 2, ..., n,
corresponding to the columns of P, are linearly independent. Thus


they form another basis S' of V. Moreover, P will be the change-of-
basis matrix from S to the new basis S'.

Example: Consider the following two bases of ℝ2:

(a) Find the change-of-basis matrix P from S to the "new" basis S'.
Write each of the new basis vectors of S' as a linear combination of
the original basis vectors u1 and u2 of S. We have

Thus
Note that the coordinates of v1 and v2 are the columns, not rows, of
the change-of-basis matrix P.
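Computationally, P is obtained by solving U·P = V, where the columns of U and V hold the old and new basis vectors. A sympy sketch (the two bases here are assumed purely for illustration):

from sympy import Matrix

U = Matrix([[1, 0], [0, 1]])   # "old" basis S as columns (here the usual basis)
V = Matrix([[1, 1], [2, 3]])   # "new" basis S' as columns, chosen arbitrarily
P = U.solve(V)                 # change-of-basis matrix from S to S'
Q = V.solve(U)                 # change-of-basis matrix from S' to S
print(P * Q)                   # the identity matrix, confirming Q = P**-1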

Rank-Nullity Theorem 

If t is a linear transformation defined from a vector space V(F) to


V'(F), where V(F) is finite dimensional, then:

Rank(t) + Nullity(t) = dim V.

Proof. Let V(F) be an n-dimensional vector space and let R and K be


the range and the kernel of t respectively.

Define a mapping ϕ from R to V/K as follows:

ϕ[t(α)] = K + α, ∀ α ∈ V.

ϕ is well defined and one-one:
t(α) = t(β) ⇔ t(α - β) = 0 ⇔ α - β ∈ K ⇔ K + α = K + β.
∴ ϕ is one-one.
ϕ is onto: Let K + α ∈ V/K; then there exists t(α) ∈ R such that
ϕ[t(α)] = K + α
Therefore a preimage of each element of V/K exists in R.

∴ ϕ is onto.
ϕ is a linear transformation: For any a, b ∈ F and t(α), t(β) ∈ R,
ϕ[at(α) + bt(β)] = ϕ[t(aα + bβ)]    [∵ t is a linear transformation]
= K + (aα + bβ) = a(K + α) + b(K + β) = aϕ[t(α)] + bϕ[t(β)]

∴ ϕ is a linear transformation.
Hence ϕ is an isomorphism from R to V/K.
⇒ R ≅ V/K
⇒ dim R = dim(V/K)
⇒ dim R = dim V - dim K
⇒ dim R + dim K = dim V
⇒ Rank(t) + Nullity(t) = dim V.

Example 1: For the given 3 × 4 matrix A,

find a basis for nullspace(A) and verify the Rank-Nullity Theorem.

We must find all solutions to Ax = 0. Reducing the augmented


matrix of this system yields a row-echelon form with two leading ones.

Consequently, there are two free variables, x3 = t1 and x4 = t2, so
that
x2 = 7t1 + 7t2, x1 = -9t1 - 10t2.

Hence,
nullspace(A) = {t1(-9, 7, 1, 0) + t2(-10, 7, 0, 1) : t1, t2 ∈ ℝ}

Since the two vectors in this spanning set are not proportional, they
are linearly independent. Consequently, a basis for nullspace(A) is
{(-9, 7, 1, 0), (-10, 7, 0, 1)}, so that nullity(A) = 2. In this problem, A
is a 3 × 4 matrix, and so, in the Rank-Nullity Theorem, n = 4. Further,
from the foregoing row-echelon form of the augmented matrix of
the system Ax = 0, we see that rank(A) = 2. Hence, rank(A) +
nullity(A) = 2 + 2 = 4 = n, and the Rank-Nullity Theorem is
verified.

Example: Find rank (A) and nullity (A) for A

rank(A): It is enough to put A in row-echelon form and count


the number of leading ones.
The reader will verify that a row-echelon form of A

has three leading ones,


therefore rank(A) = 3.
nullity(A): For this, we need to find a basis for the solution
set of Ax = 0. From its reduced row-echelon form,

the corresponding system

shows that the leading variables are x1, x2, and


x4. Hence, the free variables are x3 and x5. We can write the
solution in parametric form accordingly.

Thus, nullity(A) = 2.

Example: Find the rank and nullity of the given matrix A.


Note this matrix has 5 columns. Reducing A to row-echelon form,
we see that rank(A) = 2 (2 leading 1's). Therefore nullity(A) = 5 - 2
= 3.
