
Chapter 1

Vector Spaces

1.1 Introduction
Definition 1.1 Let V be a non-empty set on which two operations are defined, addition and multiplication by scalars (numbers). By addition we mean a rule for associating with each pair of objects u, v ∈ V an object u + v, called the sum of u and v; by scalar multiplication we mean a rule for associating with each scalar k and each object u ∈ V an object ku, called the scalar multiple of u by k. V is called a vector space if the following conditions are satisfied for all u, v, w ∈ V and all scalars a and b:

1. If u, v ∈ V then u + v ∈ V.

2. (u + v) + w = u + (v + w).

3. There is an element 0 such that u + 0 = 0 + u = u.

4. For every u ∈ V there is an element −u ∈ V such that


u + (−u) = (−u) + u = 0.

5. u + v = v + u

6. If u ∈ V and a is a scalar then au ∈ V.

7. a · (u + v) = a · u + a · v

8. (a + b) · v = a · v + b · v

9. a · (b · v) = (ab) · v

10. 1 · v = v.
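The ten axioms above can be spot-checked numerically for a concrete candidate space. A small Python sketch (illustrative only, not part of the text) verifying them for R2 with the usual componentwise operations:

```python
# Spot-check the vector-space axioms for R^2, with vectors as tuples.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def smul(k, u):
    return (k * u[0], k * u[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
a, b = 2.0, -3.0
zero = (0.0, 0.0)

assert add(add(u, v), w) == add(u, add(v, w))             # 2: associativity
assert add(u, zero) == u                                  # 3: additive identity
assert add(u, smul(-1, u)) == zero                        # 4: additive inverse
assert add(u, v) == add(v, u)                             # 5: commutativity
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # 7: distributivity
assert smul(a + b, u) == add(smul(a, u), smul(b, u))      # 8: distributivity
assert smul(a, smul(b, u)) == smul(a * b, u)              # 9: compatibility
assert smul(1, u) == u                                    # 10: unit scalar
```

Of course, a finite numerical check is not a proof; the axioms must hold for all vectors and scalars.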

Dr. V. V. Acharya

We denote a · v by av. The elements of the vector space V are called vectors. The set of scalars is generally R or C. A vector space in which the scalars are real numbers is called a real vector space, while a vector space in which the scalars are complex numbers is called a complex vector space.¹

Proposition 1.1 Let V be a vector space. Then the element 0 is unique. Also, for every u ∈ V, −u is unique.

Proof. Suppose there exists an element w such that

u + w = w + u = u for all u ∈ V (1.1)


and also u + 0 = 0 + u = u for all u ∈ V. (1.2)

Then using (1.1), we get 0 + w = w + 0 = 0, while using (1.2), we get w + 0 = 0 + w = w. Hence, we get 0 = w.
The proof of uniqueness of additive inverse is left as an exercise.
Proposition 1.2 Let V be a vector space and v ∈ V and a be a scalar.
Then
1. a0 = 0.

2. 0v = 0.

3. av = 0 implies either a = 0 or v = 0.

4. (−1)v = −v.
Proof.
1. a0 = a(0 + 0) = a0 + a0. Adding the additive inverse of a0 to both sides, we get 0 = a0.

2. 0v = (0 + 0)v = 0v + 0v. Adding the additive inverse of 0v to both sides, we get 0 = 0v.

3. Suppose av = 0 and a ≠ 0. Multiplying both sides by a⁻¹, i.e. by 1/a, we get 1v = (1/a)0 = 0, i.e. v = 0.
¹ More precisely, addition is a function from V × V to V and scalar multiplication is a map f : R × V → V given by (a, v) ↦ a · v. One may replace R by C or, more generally, by a field. A function from V × V to V is called a binary operation defined on V. Quite often we denote a binary operation by ⋆.

4. v + (−1)v = (1 + (−1))v = 0v = 0. Adding the additive inverse of v to both sides, we get (−1)v = −v.

Example 1.1

1. Note that C is a real vector space (i.e. a vector space over R) as well
as a complex vector space (i.e. a vector space over C).

2. Let V = {(x1, . . . , xn) | xi ∈ R} be the set of all n-tuples of real numbers. Define the sum and scalar multiplication as follows:

(x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn),

c(x1, . . . , xn) = (cx1, . . . , cxn), c ∈ R.

Then V is a vector space over R. We denote V by Rn.

3. Let V = R[x], the set of all polynomials with real coefficients. Then V is a vector space over R under the usual addition and multiplication of a polynomial by a scalar.

4. Let m and n be positive integers. Let Mm×n be the set of all m × n matrices over R. The sum of two vectors A and B in Mm×n is defined by

(A + B)ij = Aij + Bij.

The product of a scalar c ∈ R and the matrix A is defined by

(cA)ij = c(Aij).

5. The space of real-valued functions on a set. Let S be any non-empty set. Let V be the set of all functions from the set S into R. Define for all f, g ∈ V and c ∈ R

(f + g)(t) = f(t) + g(t), (cf)(t) = c(f(t)).

Then V is a vector space over R.²


² In fact, in V we can define multiplication of vectors as (fg)(t) = f(t)g(t). Hence, we say that this is an ‘algebra.’
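The pointwise operations of Example 1.1(5) translate directly into code; a minimal Python sketch (the particular functions f and g are arbitrary illustrative choices):

```python
# Pointwise addition and scalar multiplication of real-valued functions,
# here on S = R, as in Example 1.1(5).
def f_add(f, g):
    return lambda t: f(t) + g(t)

def f_smul(c, f):
    return lambda t: c * f(t)

f = lambda t: t * t        # f(t) = t^2
g = lambda t: 3 * t        # g(t) = 3t
h = f_add(f, f_smul(2.0, g))   # h(t) = t^2 + 6t

assert h(1.0) == 7.0
assert h(2.0) == 16.0

# The footnote's pointwise product, which makes this space an algebra:
fg = lambda t: f(t) * g(t)
assert fg(2.0) == 24.0
```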

1.2 Subspaces
Definition 1.2 Let V be a vector space over the field F . A vector v ∈ V
is said to be a linear combination of the vectors v1 , . . . , vn ∈ V if there exist
c1 , . . . , cn ∈ F such that v = c1 v1 + · · · + cn vn .

Definition 1.3 Let V be a vector space. A subspace of V is a subset W of V which is itself a vector space with the operations of vector addition and scalar multiplication on V.

Theorem 1.1 Let V be a vector space and W be a non-empty subset of V. Then the following are equivalent:

1. W is a subspace of V,

2. v1 , v2 ∈ W and c ∈ F implies cv1 + v2 ∈ W,

3. v1 , v2 ∈ W and c1 , c2 ∈ F implies c1 v1 + c2 v2 ∈ W,

4. v, v1 , v2 ∈ W and c ∈ F implies v1 + v2 ∈ W and cv ∈ W.

Example 1.2 Let V = M2×2(R), the space of 2 × 2 matrices with real entries. If W is the set of all symmetric matrices, then it is easy to see that W is a subspace of V. Similarly, the set of all 2 × 2 diagonal matrices is a subspace of V. If U denotes the set of all 2 × 2 upper triangular matrices and L denotes the set of all lower triangular matrices, then both U and L are subspaces of V. Prove that U ∩ L = L ∩ W = U ∩ W. Can you identify this subspace?
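Theorem 1.1's criterion gives a concrete closure test. A Python sketch (illustrative, not part of the text) checking that the symmetric 2 × 2 matrices of Example 1.2 are closed under the combination cv1 + v2:

```python
# Check criterion (2) of Theorem 1.1 for W = symmetric 2x2 matrices:
# v1, v2 in W and c a scalar should give c*v1 + v2 in W.
def is_symmetric(m):
    return m[0][1] == m[1][0]

def comb(c, a, b):
    # c*a + b, entrywise, for 2x2 matrices stored as tuples of tuples
    return tuple(tuple(c * a[i][j] + b[i][j] for j in range(2))
                 for i in range(2))

v1 = ((1, 2), (2, 5))
v2 = ((0, -3), (-3, 4))
assert is_symmetric(v1) and is_symmetric(v2)
assert is_symmetric(comb(7, v1, v2))    # closure under c*v1 + v2
```

A single numerical check does not prove closure, but the entrywise computation (cv1 + v2)ij = c(v1)ij + (v2)ij makes the general argument visible.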

Example 1.3 Let Pn denote the set of all polynomials of degree at most n with real coefficients, together with the zero polynomial. Then Pn is a subspace of R[x]. In fact, Pn is a subspace of Pm if n ≤ m.

Theorem 1.2 Let V be a vector space. The intersection of any collection of subspaces of V is a subspace of V.

Proof. We note that 0 is in every subspace. Hence if W denotes the intersection of some collection of subspaces of V then 0 ∈ W. Thus, W is non-empty. If u, v ∈ W then u and v are in every subspace of the collection and so is u + v. Hence, u + v ∈ W. Similarly, if c is a scalar and u ∈ W then cu ∈ W. Hence, W is a subspace of V.

Definition 1.4 Let V be a vector space. A vector w ∈ V is said to be a linear combination of the vectors v1, v2, . . . , vn if there exist scalars c1, c2, . . . , cn such that

w = c1 v1 + c2 v2 + · · · + cn vn.

The expression c1 v1 + c2 v2 + · · · + cn vn is called a linear combination of the vectors v1, v2, . . . , vn.

Theorem 1.3 Let V be a vector space and v1, v2, . . . , vn ∈ V. Then the set W of all linear combinations of v1, v2, . . . , vn is a subspace of V. Further, W is the smallest subspace of V that contains v1, v2, . . . , vn, in the sense that every other subspace of V that contains v1, v2, . . . , vn must contain W.
Proof. We note that W = {c1 v1 + · · · + cn vn | c1, . . . , cn are scalars}. Thus, if u, w ∈ W then u = c1 v1 + · · · + cn vn and w = d1 v1 + · · · + dn vn, where the ci, di are scalars. Then u + w = (c1 + d1)v1 + · · · + (cn + dn)vn ∈ W and αu = (αc1)v1 + · · · + (αcn)vn ∈ W. Hence, W is a subspace of V.
Any subspace V1 of V that contains v1, v2, . . . , vn must contain every linear combination c1 v1 + · · · + cn vn, where the ci are scalars. Thus, every element of W is in V1. Hence, W ⊆ V1. This proves that W is the smallest subspace of V that contains v1, v2, . . . , vn.

Definition 1.5 The subspace W that we have in the theorem above is called the subspace spanned by v1, v2, . . . , vn, and we say that v1, v2, . . . , vn span W. We denote W by span{v1, v2, . . . , vn}.

In view of the above theorem, more generally we have the following:

Definition 1.6 Let V be a vector space and S ⊂ V. The subspace spanned by S is defined to be the intersection of all subspaces of V which contain S.

Remark 1.1

1. If S is a finite set of vectors, S = {v1, . . . , vn}, the subspace spanned by S is also called the subspace spanned by the vectors v1, . . . , vn.

2. If S = ∅ then the subspace spanned by S equals {0}.



Theorem 1.4 Let V be a vector space. Suppose S = {v1, . . . , vn} and S′ = {w1, . . . , wm} are two sets of vectors in V. Then

span{v1, . . . , vn} = span{w1, . . . , wm}

if and only if each vector in S is a linear combination of vectors in S′ and each vector in S′ is a linear combination of vectors in S.

Proof. If span{v1, . . . , vn} = span{w1, . . . , wm} then each vector in S is a linear combination of vectors in S′ and each vector in S′ is a linear combination of vectors in S, for vi ∈ span{v1, . . . , vn} = span{w1, . . . , wm} and similarly wi ∈ span{w1, . . . , wm} = span{v1, . . . , vn}.
Conversely, suppose each vector in S is a linear combination of vectors in S′ and each vector in S′ is a linear combination of vectors in S. Then each vi lies in span{w1, . . . , wm}, and since span{v1, . . . , vn} is the smallest subspace containing v1, . . . , vn, we get span{v1, . . . , vn} ⊆ span{w1, . . . , wm}. By symmetry, span{w1, . . . , wm} ⊆ span{v1, . . . , vn}. Hence the two spans are equal.
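Deciding whether a given vector lies in a span reduces to solving a linear system for the coefficients. A Python sketch with hypothetical vectors in R3, using exact rational arithmetic (illustrative, not part of the text):

```python
from fractions import Fraction

# Is w in span{v1, v2} in R^3?  Solve c1*v1 + c2*v2 = w by elimination
# on the augmented system [v1 v2 | w] (columns are the spanning vectors).
def in_span2(v1, v2, w):
    rows = [[Fraction(v1[i]), Fraction(v2[i]), Fraction(w[i])]
            for i in range(3)]
    for col in range(2):
        piv = next((r for r in range(col, 3) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[col], rows[piv] = rows[piv], rows[col]
        for r in range(3):
            if r != col and rows[r][col] != 0:
                f = rows[r][col] / rows[col][col]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[col])]
    # Consistent iff no row reads 0 = nonzero.
    return all(not (row[0] == 0 and row[1] == 0 and row[2] != 0)
               for row in rows)

assert in_span2((1, 0, 1), (0, 1, 1), (2, 3, 5))      # w = 2*v1 + 3*v2
assert not in_span2((1, 0, 1), (0, 1, 1), (2, 3, 6))
```

Exact fractions avoid the floating-point pitfalls of testing a pivot against zero.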

Definition 1.7 Let V be a vector space and S1, . . . , Sk be non-empty subsets of V. The set of all sums v1 + · · · + vk of vectors vi ∈ Si is called the sum of the subsets S1, . . . , Sk and is denoted by S1 + · · · + Sk or by Σ_{i=1}^k Si.

Theorem 1.5 If W1 , . . . , Wk are subspaces of V, then the sum

W = W1 + · · · + Wk

is a subspace of V which contains each of the subspaces Wi .

Proof. We note that 0 ∈ Wi for 1 ≤ i ≤ k. Hence, 0 ∈ W. Thus, W is non-empty. Now if u, v ∈ W then u = u1 + · · · + uk and v = v1 + · · · + vk, where ui, vi ∈ Wi for 1 ≤ i ≤ k. Hence, u + v = (u1 + v1) + · · · + (uk + vk) ∈ W, since each ui + vi ∈ Wi (each Wi being a subspace of V). Similarly, if c is a scalar then cu = cu1 + · · · + cuk ∈ W as cui ∈ Wi for 1 ≤ i ≤ k. Thus, W is a subspace of V.

Remark 1.2 Note that the union of W1, . . . , Wk is a subset of W. In fact, W is the smallest subspace which contains the union of W1, . . . , Wk. Hence, W is the subspace spanned by the union of W1, . . . , Wk.

1.3 Linear Dependence


Definition 1.8 Let V be a vector space. A subset S of V is said to be linearly dependent (or simply, dependent) if there exist distinct vectors v1, . . . , vn in S and scalars c1, . . . , cn ∈ F, not all of which are zero, such that

c1 v1 + · · · + cn vn = 0.

A set which is not linearly dependent is called linearly independent.

If the set S contains only finitely many vectors v1, v2, . . . , vn, we sometimes say that v1, v2, . . . , vn are dependent (or independent) instead of saying that S is dependent (or independent).

Remark 1.3

1. Any set which contains a linearly dependent set is linearly dependent.

2. Any subset of a linearly independent set is linearly independent.

3. A set which contains the zero vector is linearly dependent, since 1 · 0 = 0.

4. A set S of vectors is linearly independent if and only if each finite subset of S is linearly independent, i.e., if and only if for any distinct vectors v1, v2, . . . , vn ∈ S, c1 v1 + · · · + cn vn = 0 implies each ci = 0.
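The definition translates into a computational test: v1, . . . , vn are independent iff c1 v1 + · · · + cn vn = 0 has only the trivial solution, i.e. iff Gaussian elimination on the matrix whose rows are the vectors produces no zero row. A Python sketch with illustrative vectors:

```python
from fractions import Fraction

# Rank of a list of vectors via Gaussian elimination (exact arithmetic).
def rank(vectors):
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    # Independent iff the rank equals the number of vectors.
    return rank(vectors) == len(vectors)

assert independent([(1, 0, 0), (1, 1, 0), (1, 1, 1)])
assert not independent([(1, 2, 3), (2, 4, 6)])       # v2 = 2*v1
assert not independent([(0, 0, 0), (1, 5, 9)])       # contains the zero vector
```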

Definition 1.9 Let V be a vector space. A basis for V is a linearly independent set of vectors in V which spans the vector space V. The space V is finite dimensional if it has a finite basis.

Theorem 1.6 Let V be a vector space which is spanned by a finite set of vectors v1, v2, . . . , vm. Then any independent set of vectors in V is finite and contains no more than m elements.

Proof. To prove the theorem it suffices to show that every subset S of V which contains more than m vectors is linearly dependent. Let S be such a set. In S there are distinct vectors w1, w2, . . . , wn, where n > m. Since v1, . . . , vm span V, there exist scalars aij such that wj = a1j v1 + · · · + amj vm, i.e. wj = Σ_{i=1}^m aij vi.

For any scalars x1, . . . , xn we have

x1 w1 + · · · + xn wn = Σ_{j=1}^n xj ( Σ_{i=1}^m aij vi ) = Σ_{i=1}^m ( Σ_{j=1}^n aij xj ) vi.

Since n > m, the homogeneous system Σ_{j=1}^n aij xj = 0, 1 ≤ i ≤ m, has more unknowns than equations, so there exist scalars x1, . . . , xn, not all zero, satisfying it. Hence x1 w1 + · · · + xn wn = 0. This shows that S is a linearly dependent set.

Corollary 1.1 If V is a finite-dimensional vector space, then any two bases of V have the same (finite) number of elements.

Proof. Since V is finite-dimensional, it has a finite basis {v1, v2, . . . , vm} with m elements. By the above theorem, every basis of V is finite and contains no more than m elements. Thus, if w1, . . . , wn is another basis, then n ≤ m. Interchanging the roles of the two bases, the same argument gives m ≤ n. Hence m = n.
This corollary allows us to define the dimension of a finite-dimensional vector space.

Definition 1.10 Let V be a finite-dimensional vector space. The dimension of V is defined to be the number of elements in a basis for V. We denote it by dim V.

Corollary 1.2 Let V be a finite-dimensional vector space and let n = dim V. Then

1. Any subset of V which contains more than n vectors is linearly dependent;

2. No subset of V which contains fewer than n vectors can span V;

3. If W is a subspace of V, then W is a finite dimensional vector space.

Lemma 1.1 Let S be a linearly independent subset of a vector space V. Suppose w is a vector in V which is not in the subspace spanned by S. Then the set obtained by adjoining w to S is linearly independent.

1.4 Dimension and Subspaces


Before we start the discussion about dimensions of subspaces, we observe that if V is a finite dimensional vector space then V has a finite basis, say {v1, . . . , vn}. If v ∈ V then there exist unique scalars c1, . . . , cn such that

v = c1 v1 + · · · + cn vn.

For, if v = k1 v1 + · · · + kn vn, then

c1 v1 + · · · + cn vn = k1 v1 + · · · + kn vn.

Hence, (c1 − k1)v1 + · · · + (cn − kn)vn = 0. Since {v1, . . . , vn} is a basis, it is linearly independent. Hence, (c1 − k1) = · · · = (cn − kn) = 0, i.e., ci = ki for every i.

Definition 1.11 Let V be a vector space with ordered basis B = {v1, . . . , vn}. If v ∈ V and c1, . . . , cn are scalars such that

v = c1 v1 + · · · + cn vn,

then we say that (c1, . . . , cn) are the coordinates of v with respect to the ordered basis B. We denote the coordinates of v by [v]B.

Remark 1.4 We represent the coordinates of a vector either by a row matrix or by a column matrix.

Example 1.4 Consider the vector space R3 with basis {e1, e2, e3}, where e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1). Now if v ∈ R3 then v = ae1 + be2 + ce3 for some scalars a, b, c, and the coordinates of v are (a, b, c). Thus, our old notion of coordinates is the same as the one just defined for a general vector space.
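For a non-standard basis, finding coordinates means solving v = c1 v1 + · · · + cn vn. A Python sketch for the (illustrative, hypothetical) ordered basis B = {(1, 1), (1, −1)} of R2, where the system has a closed-form solution:

```python
# Coordinates of v in R^2 with respect to the basis B = {(1,1), (1,-1)}:
# solving c1*(1,1) + c2*(1,-1) = (x, y) gives c1 = (x+y)/2, c2 = (x-y)/2.
def coords(v):
    x, y = v
    return ((x + y) / 2, (x - y) / 2)

c1, c2 = coords((5.0, 1.0))
assert (c1, c2) == (3.0, 2.0)            # [v]_B = (3, 2)
# Reconstruct v from its coordinates: c1*(1,1) + c2*(1,-1)
assert (c1 * 1 + c2 * 1, c1 * 1 + c2 * (-1)) == (5.0, 1.0)
```

The uniqueness of (c1, c2) is exactly the uniqueness argument given above.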

Let V be a finite dimensional vector space and W be a subspace of V. Then by Theorem 1.6, it follows that W is a finite dimensional vector space.

Lemma 1.2 Let V be a finite dimensional vector space and S be a linearly independent subset of the vector space V. Suppose w is a vector in V which is not in the subspace spanned by S. Then the set obtained by adjoining w to S is linearly independent.

Proof. Suppose, to the contrary, that S ∪ {w} is a linearly dependent set. Then

c1 v1 + c2 v2 + · · · + cn vn + bw = 0,

where c1, c2, . . . , cn, b are scalars, not all zero, and v1, . . . , vn are vectors in S. If b ≠ 0 then w = −(1/b)(c1 v1 + c2 v2 + · · · + cn vn), which lies in the span of S, a contradiction. If b = 0 then c1 v1 + c2 v2 + · · · + cn vn = 0 with not all ci zero. But S is a linearly independent set, hence each ci = 0, again a contradiction. Hence, S ∪ {w} is a linearly independent set.
Theorem 1.7 (Extension Theorem) Let V be a finite dimensional vector
space and S be a linearly independent subset of V. Then S can be extended
to a basis of V.
Proof. If S spans V then S is a basis of V, as S is a linearly independent set. If S does not span V then there exists a vector w1 which does not belong to the span of S. Then, by the above lemma, S1 = S ∪ {w1} is a linearly independent set. If S1 spans V then we are done; otherwise we continue this process. Since V is finite dimensional, a linearly independent set in V has at most dim V elements, so this process terminates and we get a basis of V.

Theorem 1.8 Let V be a finite dimensional vector space and W1, W2 be subspaces of V. Then

dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).

Remark 1.5 Before we proceed to the proof of this theorem, we observe that W1 + W2 and W1 ∩ W2 are subspaces of V, and as V is a finite dimensional vector space, all these subspaces are finite dimensional.

Proof. Let {u1, . . . , uk} be a basis of W1 ∩ W2. Hence, it is a linearly independent set in W1 as well as in W2. Extend it to a basis of W1 as well as to a basis of W2. Let {u1, . . . , uk, v1, . . . , vr} be a basis of W1 and {u1, . . . , uk, w1, . . . , ws} be a basis of W2. Thus, dim(W1 ∩ W2) = k, dim(W1) = k + r, dim(W2) = k + s. Since every element in W1 + W2 is a sum of an element of W1 and an element of W2, every element of W1 + W2 is a linear combination of the elements {u1, . . . , uk, v1, . . . , vr, w1, . . . , ws}. We claim that this set is linearly independent. Suppose there exist scalars c1, . . . , ck, d1, . . . , dr, e1, . . . , es such that

c1 u1 + · · · + ck uk + d1 v1 + · · · + dr vr + e1 w1 + · · · + es ws = 0. (1.3)

Hence,

c1 u1 + · · · + ck uk + d1 v1 + · · · + dr vr = −(e1 w1 + · · · + es ws).

Now the LHS of the above equation is an element of W1 while the RHS is an element of W2. Thus, −(e1 w1 + · · · + es ws) ∈ W1 ∩ W2. Hence, there exist scalars a1, . . . , ak such that

−(e1 w1 + · · · + es ws) = a1 u1 + · · · + ak uk,

i.e. a1 u1 + · · · + ak uk + e1 w1 + · · · + es ws = 0. But {u1, . . . , uk, w1, . . . , ws} is a basis of W2, hence a1 = · · · = ak = e1 = · · · = es = 0. Substituting these values of the ei in equation (1.3), we get c1 u1 + · · · + ck uk + d1 v1 + · · · + dr vr = 0. But {u1, . . . , uk, v1, . . . , vr} is a basis of W1, hence

c1 = · · · = ck = d1 = · · · = dr = 0.

Thus, the set {u1, . . . , uk, v1, . . . , vr, w1, . . . , ws} is a linearly independent set which spans W1 + W2 and hence it is a basis of W1 + W2. Counting its elements gives dim(W1 + W2) = k + r + s = (k + r) + (k + s) − k, which is the required formula.
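The dimension formula can be verified numerically for concrete subspaces, computing each dimension as the rank of a spanning set. A Python sketch for two coordinate planes in R3 (an illustrative choice):

```python
from fractions import Fraction

# Verify dim(W1 + W2) = dim W1 + dim W2 - dim(W1 ∩ W2) for two planes in R^3.
def rank(vectors):
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

W1 = [(1, 0, 0), (0, 1, 0)]      # spans the xy-plane
W2 = [(0, 1, 0), (0, 0, 1)]      # spans the yz-plane
dim_sum = rank(W1 + W2)          # the union of spanning sets spans W1 + W2
assert rank(W1) == 2 and rank(W2) == 2
assert dim_sum == 3              # W1 + W2 = R^3
# Hence dim(W1 ∩ W2) = 2 + 2 - 3 = 1 (here, the y-axis).
assert rank(W1) + rank(W2) - dim_sum == 1
```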

Remark 1.6 Consider our usual Euclidean space R3. Let L be a line passing through the origin and P be a plane passing through the origin. Then L + P = P if and only if the line L lies in the plane P, i.e. L ∩ P = L. If the line L does not lie in the plane P then L ∩ P = {(0, 0, 0)} and L + P is the whole space R3.

Remark 1.7 If W1 ∩ W2 = {0} then we say that W1 + W2 is the direct sum of W1 and W2. In this case every element of W1 + W2 can be written as a sum of elements of W1 and W2 in a unique way. We denote the direct sum of W1 and W2 by W1 ⊕ W2. We observe that dim(W1 ⊕ W2) = dim(W1) + dim(W2).
If V is a vector space and W1, W2 are subspaces of V such that V = W1 ⊕ W2, then we say that V is a direct sum of W1 and W2. Further, every vector v ∈ V can be written as a sum of vectors from W1 and W2 in a unique way. For, if v = w1 + w2 and v = w3 + w4, where w1, w3 ∈ W1 and w2, w4 ∈ W2, then w1 + w2 = w3 + w4. Hence, w1 − w3 = w4 − w2. But the LHS is in W1 and the RHS is in W2. Since W1 ∩ W2 = {0}, we get w1 = w3 and w2 = w4.

Example 1.5 Let L and U be the space of all lower and upper triangular matrices in Mn×n. Note that L ∩ U is the space of all diagonal matrices, which we denote by D. Then dim(L) = dim(U) = n(n + 1)/2 and dim(D) = n.

Hence, dim(L + U) = n². Hence, L + U = Mn×n. Note that Mn×n is not a direct sum of L and U.
Let A and S denote the set of all skew-symmetric matrices and symmetric matrices in Mn×n, i.e.

A = {[aij] | aij = −aji for all i, j} and S = {[aij] | aij = aji for all i, j};

then it is easy to see that both A and S are subspaces of Mn×n. Further, dim(A) = n(n − 1)/2 and dim(S) = n(n + 1)/2. Observe that

Mn×n = A ⊕ L = A ⊕ U = A ⊕ S.
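The decomposition Mn×n = A ⊕ S is constructive: M = (M + Mᵀ)/2 + (M − Mᵀ)/2, with the first summand symmetric and the second skew-symmetric. A Python sketch for a 2 × 2 example (illustrative, not part of the text):

```python
# Split a square matrix into its symmetric and skew-symmetric parts:
# M = (M + M^T)/2 + (M - M^T)/2, witnessing M_nxn = S ⊕ A.
def transpose(m):
    return [list(r) for r in zip(*m)]

def sym_part(m):
    t = transpose(m)
    n = len(m)
    return [[(m[i][j] + t[i][j]) / 2 for j in range(n)] for i in range(n)]

def skew_part(m):
    t = transpose(m)
    n = len(m)
    return [[(m[i][j] - t[i][j]) / 2 for j in range(n)] for i in range(n)]

M = [[1.0, 2.0], [4.0, 3.0]]
S, A = sym_part(M), skew_part(M)
assert S == [[1.0, 3.0], [3.0, 3.0]]       # symmetric: S = S^T
assert A == [[0.0, -1.0], [1.0, 0.0]]      # skew-symmetric: A = -A^T
assert [[S[i][j] + A[i][j] for j in range(2)] for i in range(2)] == M
```

Uniqueness of the splitting follows from A ∩ S = {0}, as in Remark 1.7.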

1.5 Linear Transformation


Definition 1.12 Let V and W be vector spaces. A linear transformation T from V into W is a function T : V → W such that

T(cv1 + v2) = cT(v1) + T(v2) for all v1, v2 ∈ V and c ∈ F.

Theorem 1.9 Let V and W be vector spaces and T be a linear transformation from V to W. Then T(V) is a subspace of W.
Proof. Let x, y ∈ T(V). Hence, x = T(v1) and y = T(v2) for some v1, v2 ∈ V. Now, x + cy = T(v1) + cT(v2) = T(v1 + cv2) ∈ T(V). Therefore T(V) is a subspace of W.

Theorem 1.10 Let V be a finite dimensional vector space over the field F and let v1, v2, . . . , vn be an ordered basis for V. Let W be a vector space over the same field F and let w1, w2, . . . , wn be any vectors in W. Then there is a unique linear transformation T from V into W such that

T(vi) = wi, i = 1, . . . , n.

Proof: Let v ∈ V. Hence there exist unique scalars c1, . . . , cn ∈ F such that

v = c1 v1 + c2 v2 + · · · + cn vn.

Define T(v) = c1 w1 + c2 w2 + · · · + cn wn. Since v1, v2, . . . , vn is an ordered basis, v = c1 v1 + c2 v2 + · · · + cn vn is the unique representation of v. Hence (c1, . . . , cn) is uniquely determined, and T is well-defined. It is easy to see that T is a linear transformation and T(vi) = wi for each i.

It remains to show that T is unique. Suppose U is a linear transformation from V to W such that U(vi) = wi for i = 1, . . . , n. Then

U(v) = U(c1 v1 + · · · + cn vn) = c1 U(v1) + · · · + cn U(vn) = c1 w1 + · · · + cn wn = T(v).

Thus T(v) = U(v) for all v ∈ V. Hence, T = U.
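Theorem 1.10 is the standard recipe for defining a linear map: prescribe the images of a basis and extend linearly. A Python sketch for V = R2 with the standard basis and arbitrarily chosen images w1, w2 in R3 (illustrative, not part of the text):

```python
# A linear map is pinned down by its values on a basis (Theorem 1.10).
# Prescribe T(e1) = w1, T(e2) = w2 and extend linearly:
# T(c1, c2) = c1*w1 + c2*w2.
w1, w2 = (1, 0, 2), (0, 1, -1)

def T(v):
    c1, c2 = v
    return tuple(c1 * a + c2 * b for a, b in zip(w1, w2))

assert T((1, 0)) == w1 and T((0, 1)) == w2
# Linearity in action: T(3*e1 + 2*e2) = 3*w1 + 2*w2.
assert T((3, 2)) == (3, 2, 4)
```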
Definition 1.13 Let V and W be vector spaces and let T be a linear transformation from V into W. The null space of T is the set of all vectors v ∈ V such that T(v) = 0. We denote the null space of T by N(T). Thus,

N(T) = {v ∈ V | T(v) = 0}.

If V is finite dimensional, the rank of T is the dimension of the range space of T and the nullity of T is the dimension of the null space of T.
Theorem 1.11 Let V and W be vector spaces and let T be a linear transformation from V into W. The null space of T, N(T), is a subspace of V.
Proof. We note that T(0) = 0. Hence, 0 ∈ N(T), so N(T) ≠ ∅. If x, y ∈ N(T) then T(x) = 0 and T(y) = 0, so T(x + cy) = T(x) + cT(y) = 0 + 0 = 0. Thus x, y ∈ N(T) implies x + cy ∈ N(T). Therefore, N(T) is a subspace of V.
Theorem 1.12 (Rank-Nullity Theorem) Let V and W be vector spaces over the field F and let T be a linear transformation from V into W. Suppose that V is a finite dimensional vector space. Then

rank(T) + nullity(T) = dim(V). (1.4)
Proof: Let dim(V) = n and let nullity(T) = dim(N(T)) = r. Suppose v1, v2, . . . , vr is a basis of N(T). Therefore, v1, v2, . . . , vr is a linearly independent set in V, and we can extend it to form a basis of V: there exist vectors vr+1, vr+2, . . . , vn such that the set v1, v2, . . . , vr, vr+1, . . . , vn is a basis of V.
Let v ∈ V. Hence,

v = c1 v1 + · · · + cr vr + cr+1 vr+1 + · · · + cn vn.

Therefore,

T(v) = c1 T(v1) + · · · + cr T(vr) + cr+1 T(vr+1) + · · · + cn T(vn)
     = 0 + cr+1 T(vr+1) + · · · + cn T(vn),

since T(vi) = 0 for 1 ≤ i ≤ r.

Therefore, T(V) is spanned by T(vr+1), . . . , T(vn).
Now we show that T(vr+1), T(vr+2), . . . , T(vn) are linearly independent. Suppose the set is linearly dependent. Then there exist scalars cr+1, cr+2, . . . , cn, not all zero, such that

cr+1 T(vr+1) + cr+2 T(vr+2) + · · · + cn T(vn) = 0.

Hence T(cr+1 vr+1 + · · · + cn vn) = 0, so cr+1 vr+1 + · · · + cn vn ∈ N(T). Hence there exist scalars c1, . . . , cr such that cr+1 vr+1 + · · · + cn vn = c1 v1 + · · · + cr vr, i.e.

cr+1 vr+1 + · · · + cn vn − c1 v1 − · · · − cr vr = 0.

Thus ci = 0 for all i, as {v1, v2, . . . , vn} is a basis and hence a linearly independent set. This contradicts the assumption that not all of cr+1, . . . , cn are zero.
Therefore, rank(T) = dim(T(V)) = n − r, i.e. rank(T) = dim(V) − nullity(T). Hence, rank(T) + nullity(T) = dim(V).
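The identity (1.4) can be checked on a concrete map. A Python sketch for T : R3 → R2 given by T(x, y, z) = (x + y, y + z) (an illustrative choice), computing the rank of its matrix by elimination; that the row rank equals dim T(V) is the content of Theorem 1.14:

```python
from fractions import Fraction

# Rank-nullity check for T: R^3 -> R^2, T(x, y, z) = (x + y, y + z),
# whose matrix is [[1, 1, 0], [0, 1, 1]].
def rank(rows_in):
    rows = [[Fraction(x) for x in r] for r in rows_in]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

A = [[1, 1, 0], [0, 1, 1]]
rk = rank(A)                 # rank(T) = dim T(V)
nullity = 3 - rk             # by the rank-nullity theorem
assert rk == 2 and nullity == 1
# An explicit kernel vector: T(1, -1, 1) = (0, 0).
x, y, z = 1, -1, 1
assert (x + y, y + z) == (0, 0)
```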

Theorem 1.13 Let V and W be finite dimensional vector spaces over the field F such that dim(V) = dim(W). If T is a linear transformation from V into W, then the following are equivalent:

1. T is invertible.

2. T is non-singular, i.e., N(T) = {0}.

3. T is onto, i.e., the range of T is W.

Theorem 1.14 If A is an m × n matrix with entries in the field F, then

row rank(A) = column rank(A).

Theorem 1.15 Let V and W be vector spaces such that dim(V) = m and dimF(W) = n. Then the space L(V, W) of all linear transformations from V into W is finite dimensional and has dimension mn.

Proof: Let v1, v2, . . . , vm be an ordered basis of V and w1, w2, . . . , wn be an ordered basis of W. For 1 ≤ p ≤ n and 1 ≤ q ≤ m, define the linear transformation E^pq by

E^pq(vi) = 0 if i ≠ q, and E^pq(vq) = wp.

Since p can be any of 1, 2, . . . , n and q any of 1, 2, . . . , m, there are mn such E^pq's. We first show that the set of these mn transformations is linearly independent. Suppose

Σ_{p,q} cpq E^pq = 0, where 1 ≤ p ≤ n, 1 ≤ q ≤ m.

Applying both sides to vi gives

0 = Σ_{p,q} cpq E^pq(vi) = c1i w1 + c2i w2 + · · · + cni wn.

But w1, w2, . . . , wn are linearly independent, hence c1i = c2i = · · · = cni = 0. Since this holds for every i ∈ {1, 2, . . . , m}, all the cpq are zero, so the E^pq are linearly independent.
Next we show that L(V, W) is the linear span of the E^pq. Let T ∈ L(V, W), i.e. T is a linear transformation from V to W. Suppose

T(v1) = a11 w1 + · · · + a1n wn,
T(v2) = a21 w1 + · · · + a2n wn,
...
T(vm) = am1 w1 + · · · + amn wn.

Put U = Σ_{q,p} aqp E^pq. Then for each i,

U(vi) = Σ_p aip E^pi(vi) = ai1 w1 + · · · + ain wn = T(vi).

Hence U = T, so T is a linear combination of the E^pq. Therefore, L(V, W) is the linear span of the E^pq. Since the E^pq are also linearly independent, they form a basis of L(V, W). This implies that dim L(V, W) = the number of E^pq's = mn.
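In matrix terms, the E^pq of the proof are the "matrix units": identifying L(R^m, R^n) with n × m matrices, E^pq is the matrix with a 1 in row p, column q and 0 elsewhere. A Python sketch for m = 3, n = 2, so mn = 6 (illustrative, not part of the text):

```python
# Matrix units E^{pq}: a 1 in row p, column q, zeros elsewhere.
# Here L(R^3, R^2) is identified with 2x3 matrices.
def E(p, q, rows=2, cols=3):
    return [[1 if (i, j) == (p, q) else 0 for j in range(cols)]
            for i in range(rows)]

# Any 2x3 matrix is the combination sum_{p,q} a[p][q] * E^{pq}:
a = [[5, -1, 2], [0, 7, 3]]
recon = [[0] * 3 for _ in range(2)]
for p in range(2):
    for q in range(3):
        unit = E(p, q)
        for i in range(2):
            for j in range(3):
                recon[i][j] += a[p][q] * unit[i][j]
assert recon == a
# There are 2*3 = 6 = mn such units, matching dim L(V, W) = mn.
assert len([(p, q) for p in range(2) for q in range(3)]) == 6
```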

Exercise Set
1. Let S = {(x, y)|x, y ∈ R}. In each case determine whether or not S is
a vector space with the indicated operations.

(a) (x1 , y1 ) + (x2 , y2 ) = (x1 + x2 , y1 + y2 ), k(x1 , y1 ) = (kx1 , 0)


(b) (x1 , y1 ) + (x2 , y2 ) = (x1 + x2 + 1, y1 + y2 + 1), k(x1 , y1 ) = (kx1 , ky1 )
(c) (x1 , y1 ) + (x2 , y2 ) = (x1 + y1 + 1, x2 + y2 + 1), k(x1 , y1 ) = (kx1 , ky1 )

(d) (x1 , y1 ) + (x2 , y2 ) = (x1 , x2 + y2 ), k(x1 , y1 ) = (kx1 , ky1 )


(e) (x1 , y1 ) + (x2 , y2 ) = (|x1 + x2 |, |y1 + y2 |), k(x1 , y1 ) = (|kx1 |, |ky1 |)
(f) (x1 , y1 ) + (x2 , y2 ) = (y1 + y2 , x1 + x2 ), k(x1 , y1 ) = (ky1 , kx1 )

2. Let V be the set of positive real numbers. Define x + y = xy and kx = x^k. Show that V is a vector space.

3. Show that the intersection of two subspaces of a given vector space is a subspace. In fact, show that the intersection of any collection of subspaces of a given vector space is a subspace.

4. Give an example to show that union of two subspaces of a vector space


need not be a subspace. Give a necessary and sufficient condition so
that the union of two subspaces is a subspace.

5. Determine in each of the following whether or not the given subset is


a subspace of the given vector space:

(a) S = {(x, y, z)|z = 0}, V = R3 .


(b) S = {(x, y, z)|x = 2y + 3z}, V = R3 .
(c) S = {(x, y)|y = x + 2}, V = R2 .
(d) S = {(x, y)|xy = 1}, V = R2.
(e) S = { [ a b ; c d ] | b = 0, c = 0 }, V = M2×2.
(f) S = { [ a b ; c d ] | b = a − d }, V = M2×2.
(g) S = { [ a b ; c d ] | b = 0, c = 1 }, V = M2×2.
(h) S be the set of all polynomials of degree two with integer coefficients, V = P2.
(i) S be the set of all polynomials with even powers of x, V = Pn .
(j) S be the set of all matrices with determinant zero, V = Mn×n .
(k) S be the set of all polynomials with sum of their coefficients 3,
V = P2
(l) S be the set of all symmetric matrices of order n × n, V = Mn×n .

6. In each of the following express x as a linear combination of the remaining vectors:

(a) x = (1, −2, 5), u = (1, 1, 1), v = (1, 2, 3) and w = (2, −1, 1).
(b) x = (1, −2, 5), u = (1, 1, 1), v = (1, 2, 3) and w = (2, 3, 4).
(c) x = [ 2 4 ; −1 −4 ], A = [ 1 1 ; 1 0 ], B = [ 0 0 ; 1 1 ], C = [ 0 2 ; 0 −1 ].
(d) x = [ 2 1 ; −1 3 ], A = [ 2 0 ; 0 3 ], B = [ 0 1 ; 1 0 ].

7. In each of the following express p as a linear combination of the remaining polynomials:

(a) p = 1 + 2x − x2 , p1 = 1 + x, p2 = 1 − x, p3 = x2 .
(b) p = 2 − x3 , p1 = 1 + x, p2 = x3 , p3 = 1 − x.
(c) p = 1 + 2x − 4x2 , p1 = 1 + x, p2 = 1 − x2 , p3 = x2 .

8. For which values of k will the vector (1, −2, k) be a linear combination
of the vectors (3, 0, −2) and (2, −1, −5)?

9. Find all the subspaces of R and R2 .

10. For which values of k will the vectors (k, 1, 1), (1, k, 1), (1, 1, k) form a
linearly independent set?

11. Which of the following vectors are in the linear span of
v1 = (2, −1, 3), v2 = (4, 1, 2), v3 = (8, −1, 8)?
(i) (2, 2, −1) (ii) (1, 0, 0) (iii) (2, 1, 1) (iv) (0, −3, 4)

12. Which of the following sets are linearly independent subsets in the
indicated vector space.

(a) {(1, −1, −1), (4, −3, −1), (3, −1, 3)} in R3 .
(b) {(4, −4, 8, 0), (2, 2, 4, 0), (6, 0, 0, 2), (6, 3, −3, 0)} in R4 .
(c) {1 − 2x + 3x2, 5 + 6x − x2, 3 + 2x + x2} in P2.
(d) { [ 1 1 ; 1 0 ], [ 1 1 ; 0 1 ], [ 1 0 ; 1 1 ], [ 0 1 ; 1 1 ] } in M2×2.

13. Which of the following sets of vectors form a basis for the indicated
space:

(a) {(0, 1, 2), (0, 2, 3), (0, 3, 1)} in R3.
(b) { [ 1 2 ; 1 −2 ], [ 0 −2 ; −2 0 ], [ 0 2 ; 3 1 ], [ 3 0 ; −3 6 ] } in M2×2.
(c) {(2, 1, −3), (3, −2, 2), (5, −3, 1)} in R3 .
(d) {(1, 0, 0, 1), (0, 2, 1, 1), (0, 3, 0, 0), (−1, 0, 1, 0)} in R4 .
(e) {1 + x + x2 , x + x2 , x − x2 } in P2 .

14. Determine bases and the dimensions of the following subspaces of the respective vector spaces:

(a) S = {(x, y, z)|3x − 2y + 5z = 0} in R3 .


(b) S = {(x, y, z, w)|w = 0} in R4 .
(c) S = {(x, y, z)|x = y + z} in R3.
(d) S = {a0 + a1 x + a2 x2 | a0 = a1} in P2.
(e) S = { [ a 0 ; 0 b ] | a, b ∈ R } in M2×2.
