LINEAR ALGEBRA
~
;4sian 'B""ks 'iJJ'ioaie t..iHliie'J
7128, Mahavir Lane, Vardan House, Ansari Road,
Darya Ganj, New Delhi-ll0002
Registered and Editorial office
7/28, Mahavir Lane, Vardan H()Use, Ansari Road, Darya Ganj, New Delhi-I 10 002.
E-Mail: asian@nda.vsnl.netin;purobi@asianbooksindia.com
World Wide Web: http://ww\... asianbooksindia.com
Phones: 23287577, 23282098, 23271887, 23259161
Fax: 91-11-23262021
Sales Offices
Bangalore 103, Swiss Complex No. 33, Race Course Road, Bangalore-560 001
Phones: 22200438, Fax: 91-80-22256583, Email: asianblr@blr.vsnl.net.in
Chennai Palani Murugan Building No. 21, West Cott Road, Royapettah,
Chennai-600 014 Phones: 28486927, 28486928, Email: asianmds@vsnl.net
Delhi 7/28, Mahavir Lane, Vardan House, Ansari Road, Darya Ganj,
New Delhi-110 002 Phones: 23287577, 23282098, 23271887, 23259161,
Fax: 91-11-23262021
Email: asian@nda.vsnl.net.in; purobi@asianbooksindia.com
Guwahati 6, G.N.B. Road, Panbazar, Guwahati, Assam-781 001
Phones: 0361-2513020, 2635729 Email: asianghy1@sancharnet.in
Hyderabad 3-5-1101/1/B IInd Floor, Opp. Blood Bank, Narayanguda,
Hyderabad-500 029 Phones: 24754941, 24750951, Fax: 91-40-24751152
Email: hydasian@hd2.vsnl.net.in, hydasian@eth.net
Kolkata 10A, Hospital Street, Calcutta-700 072 Phones: 22153040,
Fax: 91-33-22159899 Email: calasian@cal.vsnl.net.in
Mumbai 5, Himalaya House, 79 Palton Road, Mumbai-400 001
Phones: 22619322, 22623572, Fax: 91-22-22623137
Email: asianbk@hathway.com
Showroom: 3 & 4, Shilpin Centre, 40, G.D. Ambekar Marg,
Wadala, Mumbai-400 031; Phones: 24157611, 24157612
© Publisher
1st Published: 2007
ISBN: 978-81-8412-019-6
All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted in any form or by any means, electronic, mechanical,
photocopying, recording and/or otherwise, without the prior written permission of
the publisher.
Published by Kamal Jagasia for Asian Books Pvt. Ltd., 7/28, Mahavir Lane,
Vardan House, Ansari Road, Darya Ganj, New Delhi-110 002
Laser Typeset at P.N. Computers, Shahdara, Delhi-110 032
Printed at Gopaljee Enterprises, Delhi
To
My wife Rama,
and our little sons Utsav and Utkarsh,
whose support, encouragement and
faithful prayers made this book possible
PREFACE
The main aim of writing this book is to provide a text for undergraduate
students in science and engineering. The book is designed to meet the
requirements of the syllabi of various engineering institutes at the
undergraduate level. It is written to facilitate teaching linear algebra in
one semester or in a part of it.
The presentation of material in this book presupposes a knowledge
of determinants of matrices, gained from class XII. Throughout the text,
material is presented in a very smooth and easily understandable manner.
In each chapter, theories are illustrated by numerous examples. In each
section, exercises of both theoretical and computational nature are included.
Theoretical exercises provide experience in constructing logical and
deductive arguments while computational exercises illustrate fundamental
techniques useful in the field of business, science and engineering.
The first chapter deals with solving linear systems of equations with the
help of elementary row operations. The aim of keeping this topic in the
first chapter is to provide an efficient development of the basic theory
used in all forthcoming chapters. In Chapter 2, vector spaces, linear
dependence and independence, basis and dimension are explained. Chapter
3 treats linear transformations, their rank and nullity, inverse of linear
transformations and linear functionals. Chapter 4 treats the representation
of linear transformation in terms of matrix, representation of matrix in
terms of linear transformation, rank and column space of a matrix. Chapter
5 includes a complete treatment of eigenvalues and eigenvectors of
matrices, and diagonalization of matrices.
Although all possible care has been taken while preparing this text,
there may be a possibility of error. I will be thankful to the readers for any
such information so that it may be incorporated in the subsequent
edition. I also invite constructive suggestions for further improvement.
I believe that this book will fulfill the needs of first-year science and
engineering students at the undergraduate level in linear algebra.
Balram Dubey
"This Page is Intentionally Left Blank"
ACKNOWLEDGEMENTS
Balram Dubey
"This Page is Intentionally Left Blank"
CONTENTS
Preface vii
1.1 FIELDS
We assume that F denotes either the set of real numbers ℝ or the set of
complex numbers C. Suppose that F has two operations called addition (+)
and multiplication (·). The elements of the set F have the following properties:
I. F is closed with respect to addition:
x +y E F for all x, y in F.
2. Addition is associative:
x+(y+z) = (x+y)+z
for all x, y and z in F.
3. There is a unique element 0 (zero) in F such that
x + 0 = x = 0 + x for all x in F.
The element 0 is known as the additive identity of F.
4. To each x in F, there corresponds a unique element (−x) in F such that
x + (−x) = 0 = (−x) + x.
The element −x is known as the additive inverse of x in F.
5. Addition is commutative:
x+y=y+x
for all x and y in F.
6. F is closed with respect to multiplication:
xy E F for all x and y in F.
2 Introductory Linear Algebra
7. Multiplication is associative:
x (yz) = (xy) z
for all x, y and z in F.
8. There is a unique non-zero element 1 (one) in F such that
x·l=x=l·x
for every x in F.
The element 1 is known as the multiplicative identity of F.
Now, throughout our discussion in this textbook, we shall assume that
F = ℝ or F = C unless otherwise stated.
A linear equation in the unknowns x1, x2, ..., xn is an equation of the form
a1x1 + a2x2 + ... + anxn = b,
where a1, a2, ..., an and b are in F (ℝ or C), usually known in advance.
The equations
x1 + 2x2 − x3 = 5 and 5x1 + 2x3 = 0
are both linear.
The equations
x1 − x2 = x1x2 and √x1 + 2x2 = 1
are not linear because of the presence of the term x1x2 in the first equation
and √x1 in the second equation.
A system of linear equations is a collection of more than one linear
equation involving the same variables x1, x2, ..., xn (say).
Consider the following system of m linear equations in n unknowns:
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
......
am1x1 + am2x2 + ... + amnxn = bm        ...(1.1)
Let
A = [a11  a12  ...  a1n]        x = [x1]        b = [b1]
    [a21  a22  ...  a2n]            [x2]            [b2]
    [ :    :         : ]    ,       [ :]    ,       [ :]
    [am1  am2  ...  amn]            [xn]            [bm]  .
(b) both equations are straight lines intersecting at a unique point,
and in (c) both equations are straight lines coinciding with each other.
This shows that the system of equations in (a) has no solution, in (b)
has exactly one solution, and in (c) there are infinitely many solutions. This
example illustrates the following general fact for linear systems of equations.
System of Linear Equations 5
2. The zero rows (i.e., rows containing only zeros), if any, occur below all
the non-zero rows.
3. Let there be r non-zero rows. If the leading entry of the ith row occurs
in column k_i (i = 1, 2, ..., r), then k1 < k2 < ... < kr.
For example, the matrices

3. [1  *  *  *]        4. [1  *  *  *]
   [0  2  *  *]           [0  1  *  *]
   [0  0  0  3]           [0  0  1  *]

where the starred entries (*) may take any values including zero.
5. [0  1  *  0  0  0  *  *  0  *]
   [0  0  0  1  0  0  *  *  0  *]
   [0  0  0  0  1  0  *  *  0  *]
   [0  0  0  0  0  1  *  *  0  *]
   [0  0  0  0  0  0  0  0  1  *]
   [0  0  0  0  0  0  0  0  0  0]

where the starred entries (*) may have any values including zero.
Remark 1. Row echelon form of a matrix is not unique, but the row-reduced
echelon form of a matrix is unique.
Remark 2. The basic differences between row echelon form and row-reduced
echelon form of a matrix are the following:
(i) The leading entry in row echelon form of a matrix is any non-zero
number, while in row-reduced echelon form of a matrix the leading
entry is always 1.
(ii) In row echelon form of a matrix, if a column contains the leading
entry of any row, then all entries below the leading entry are zero,
and all entries above the leading entry may be any number (including
zero). But in row-reduced echelon form of a matrix, if a column contains
a leading 1, then all other entries in that column are zero.
Remark 2 shows that if a matrix is in row-reduced echelon form, then it
is also in row echelon form. However, if a matrix is in row echelon form, then
it need not be in row-reduced echelon form.
Example 7. Obtain the row-reduced echelon form of the matrix

A = [1  -4  -6]
    [3   6   1]
    [2   8   5]

Solution.

A ~ [1  -4  -6]
    [0  18  19] , R2 → R2 − 3R1
    [2   8   5]

  ~ [1  -4  -6]
    [0  18  19]
    [0  16  17] , R3 → R3 − 2R1

  ~ [1  -4  -6]
    [0   2   2] , R2 → R2 − R3
    [0  16  17]

  ~ [1  -4  -6]
    [0   1   1] , R2 → R2/2
    [0  16  17]

  ~ [1  -4  -6]
    [0   1   1]
    [0   0   1] , R3 → R3 − 16R2

  ~ [1   0  -2] , R1 → R1 + 4R2
    [0   1   1]
    [0   0   1]

  ~ [1   0   0] , R1 → R1 + 2R3
    [0   1   1]
    [0   0   1]

  ~ [1   0   0]
    [0   1   0] , R2 → R2 − R3
    [0   0   1]

which is the row-reduced echelon form of A.
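The elimination procedure used above can also be sketched in code. The following Python sketch is an illustration of ours (not from the book); the function name `rref` is our own choice, and exact rational arithmetic via the standard `fractions` module avoids floating-point error:

```python
from fractions import Fraction

def rref(rows):
    """Row-reduce a matrix (a list of row lists) to row-reduced echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a non-zero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]   # make the leading entry 1
        for r in range(len(m)):                          # clear the rest of the column
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == len(m):
            break
    return m

A = [[1, -4, -6], [3, 6, 1], [2, 8, 5]]
print(rref(A))  # the 3x3 identity matrix (entries as Fractions), as in Example 7
```

The exact sequence of row operations differs from the hand computation above, but the end result — the row-reduced echelon form — is the same, since that form is unique (Remark 1).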
Suppose that m < n. Then r < n. It means that we are solving r equations
in n unknowns and can solve for r unknowns in terms of the remaining
n − r unknowns. These n − r unknowns are free variables that can take any
real values. If we take any free variable to be non-zero, we get a non-zero
solution to Rx = 0, and thus a non-zero solution to Ax = 0.
The following corollary is an immediate consequence of the above
theorem.
Corollary: If A is an m × n matrix and Ax = 0 has only the trivial solution, then
m ≥ n.
Theorem 1.3. x = 0 is the only solution to Ax = 0 if and only if the row-rank
of A equals the number of unknowns.
Proof: Let R denote the row-reduced echelon form of A. Then
x = 0 is the only solution to Ax = 0
⇔ x = 0 is the only solution to Rx = 0
⇔ every column of R contains a leading 1, i.e., there are no free variables
⇔ row-rank of A = the number of unknowns.
B = (A, b) = [1  -1   1 |  2]
             [2   1  -1 |  3]
             [1   2   4 |  7]

  ~ [1  -1   1 |  2]
    [0   3  -3 | -1] , R2 → R2 − 2R1
    [0   3   3 |  5] , R3 → R3 − R1

  ~ [1  -1   1 |   2 ]
    [0   1  -1 | -1/3] , R2 → R2/3
    [0   1   1 |  5/3] , R3 → R3/3

  ~ [1   0   0 |  5/3] , R1 → R1 + R2
    [0   1  -1 | -1/3]
    [0   0   2 |   2 ] , R3 → R3 − R2

  ~ [1   0   0 |  5/3]
    [0   1  -1 | -1/3]
    [0   0   1 |   1 ] , R3 → R3/2

  ~ [1   0   0 |  5/3]
    [0   1   0 |  2/3] , R2 → R2 + R3
    [0   0   1 |   1 ]

Thus the system has the unique solution
x1 = 5/3,  x2 = 2/3,  x3 = 1.
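The Gauss-Jordan process used above can be automated. Below is a small Python sketch of ours (the function name is our own) for a square system with a unique solution; it reproduces the solution x1 = 5/3, x2 = 2/3, x3 = 1 with the right-hand side b = (2, 3, 7), an assumption consistent with the stated solution:

```python
from fractions import Fraction

def solve_unique(A, b):
    """Gauss-Jordan elimination on the augmented matrix (A | b)."""
    m = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    n = len(m)
    for col in range(n):
        pr = next(r for r in range(col, n) if m[r][col] != 0)   # pivot row
        m[col], m[pr] = m[pr], m[col]
        piv = m[col][col]
        m[col] = [x / piv for x in m[col]]                      # leading 1
        for r in range(n):                                      # clear the column
            if r != col:
                f = m[r][col]
                m[r] = [a - f * p for a, p in zip(m[r], m[col])]
    return [row[-1] for row in m]

A = [[1, -1, 1], [2, 1, -1], [1, 2, 4]]
b = [2, 3, 7]
print(solve_unique(A, b))  # [Fraction(5, 3), Fraction(2, 3), Fraction(1, 1)]
```

The sketch assumes a unique solution exists; a singular coefficient matrix would make the pivot search fail, which is exactly the situation treated in Theorem 1.3.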
Example 9. Determine whether the following system of equations is
consistent or not. If yes, find all possible solutions.

B = (A, b) = [ 1   2   3 |  5]
             [-1  -4   3 |  0]
             [-1  -6   9 |  5]

  ~ [1   2   3 |  5]
    [0  -2   6 |  5] , R2 → R2 + R1
    [0  -4  12 | 10] , R3 → R3 + R1

  ~ [1   2   3 |  5]
    [0  -2   6 |  5]
    [0   0   0 |  0] , R3 → R3 − 2R2

  ~ [1   2   3 |   5 ]
    [0   1  -3 | -5/2] , R2 → −R2/2
    [0   0   0 |   0 ]

  ~ [1   0   9 |  10 ] , R1 → R1 − 2R2
    [0   1  -3 | -5/2]
    [0   0   0 |   0 ]

Here the row-rank of the coefficient matrix equals the row-rank of the
augmented matrix, so the system is consistent. Taking x3 = t as a free
variable, the solutions are
x1 = 10 − 9t,  x2 = −5/2 + 3t,  x3 = t,  t ∈ ℝ.
Similarly, for the system with augmented matrix

B = (A, b) = [ 1  -2  -1   3 |  1]
             [-2   4   5  -5 |  0]
             [ 3  -6  -6   8 |  1]

  ~ [1  -2  -1   3 |  1]
    [0   0   3   1 |  2] , R2 → R2 + 2R1
    [0   0  -3  -1 | -2] , R3 → R3 − 3R1

  ~ [1  -2  -1   3 |  1]
    [0   0   3   1 |  2]
    [0   0   0   0 |  0] , R3 → R3 + R2

  ~ [1  -2  -1   3  |  1 ]
    [0   0   1  1/3 | 2/3] , R2 → R2/3
    [0   0   0   0  |  0 ]

  ~ [1  -2   0  10/3 | 5/3] , R1 → R1 + R2
    [0   0   1   1/3 | 2/3]
    [0   0   0    0  |  0 ]

The system is consistent. Taking x2 = s and x4 = t as free variables,
x1 = 5/3 + 2s − (10/3)t,  x3 = 2/3 − t/3,  s, t ∈ ℝ.
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Which of the following matrices are in row-reduced echelon form?
2. Determine the row-reduced echelon form for each of the matrices given
below.

(i) [ 1  -1   2  -1]        (ii) [ 1  -2   3  -1]
    [-1   2   4   3]             [ 2   1   0   2]
    [ 1   2  -1   2]             [ 3  -1   3   1]

3. Determine the solution set of the following systems of homogeneous
equations:
(i) 4x1 + 2x2 − 5x3 = 0,        (ii) −x1 + x2 + x3 + x4 = 0,
    3x1 + 6x2 + x3 = 0,               x1 − x2 + x3 + x4 = 0,
    2x1 + 8x2 + 5x3 = 0.             −x1 + x2 + 3x3 + 3x4 = 0,
                                      x1 − x2 + 5x3 + 5x4 = 0.
5. Let A = [~ ~ ~ ~ ~l
00012
Determine all b for which the system AX = b is consistent.
6. Let R1 and R2 be 2 × 3 real matrices that are in row-reduced echelon
form such that the systems R1x = 0 and R2x = 0 are equivalent. Without
making use of any result, show that R1 = R2.
7. Let AX = b be a consistent non-homogeneous system. Prove or disprove:
If X1 and X2 are solutions of the system AX = b, then so is X1 + X2.
8. If X1 is a solution of the non-homogeneous system AX = b and Y1 is
a solution of the corresponding homogeneous system AX = 0, then
show that X1 + Y1 is a solution of the system AX = b. Moreover, if X0
is any solution of the system AX = b, then there is a solution Y0 of
the system AX = 0 such that X0 = X1 + Y0.
9. Prove or disprove: If two matrices have the same row-rank, then they
are row equivalent.
possible rank for an n × n matrix. Thus, we can state the following theorem.
Theorem 1.4. (Existence of Inverse)
For an n × n matrix A, the inverse A⁻¹ exists if and only if the row-rank of
A = n. Hence A is non-singular if row-rank of A = n, and is singular if row-
rank of A < n.
From Theorem 1.3, we note that the homogeneous linear system of
equations
Ax = 0
has only the trivial solution if and only if the row-rank of A equals the
number of unknowns. If A is an n × n matrix, then from Theorem 1.4 it
follows that
A is non-singular ⇔ row-rank of A equals n
⇔ Ax = 0 has only the trivial solution.
Thus, we can state the following theorem.
Theorem 1.5. If A is an n × n matrix, the homogeneous linear system of equations
Ax = 0
has a nontrivial solution if and only if A is singular.
A = [1  -1   1]
    [2   1  -1]
    [1   2   4]

Note that the matrix A is the coefficient matrix of Example 8. We wish
to find A⁻¹, if it exists. We will write

B = [A | I] = [1  -1   1 | 1  0  0]
              [2   1  -1 | 0  1  0]
              [1   2   4 | 0  0  1]

  ~ [1  -1   1 |  1  0  0]
    [0   3  -3 | -2  1  0] , R2 → R2 − 2R1
    [0   3   3 | -1  0  1] , R3 → R3 − R1

  ~ [1  -1   1 |   1    0    0 ]
    [0   1  -1 | -2/3  1/3   0 ] , R2 → R2/3
    [0   1   1 | -1/3   0   1/3] , R3 → R3/3

  ~ [1   0   0 |  1/3  1/3   0 ] , R1 → R1 + R2
    [0   1  -1 | -2/3  1/3   0 ]
    [0   0   2 |  1/3 -1/3  1/3] , R3 → R3 − R2

  ~ [1   0   0 |  1/3  1/3   0 ]
    [0   1  -1 | -2/3  1/3   0 ]
    [0   0   1 |  1/6 -1/6  1/6] , R3 → R3/2

  ~ [1   0   0 |  1/3  1/3   0 ]
    [0   1   0 | -1/2  1/6  1/6] , R2 → R2 + R3
    [0   0   1 |  1/6 -1/6  1/6]

Thus, A⁻¹ = [ 1/3   1/3    0 ]
            [-1/2   1/6   1/6]
            [ 1/6  -1/6   1/6]
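The same (A | I) procedure can be written as a short program. This is an illustrative Python sketch of ours (not the book's), again with exact fractions; it returns None when the row-rank of A is less than n, in line with Theorem 1.4:

```python
from fractions import Fraction

def inverse(A):
    """Invert A by row-reducing (A | I); return None if A is singular."""
    n = len(A)
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pr = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pr is None:
            return None                      # row-rank < n: A is singular
        m[col], m[pr] = m[pr], m[col]
        piv = m[col][col]
        m[col] = [x / piv for x in m[col]]   # leading 1
        for r in range(n):                   # clear the column
            if r != col:
                f = m[r][col]
                m[r] = [a - f * p for a, p in zip(m[r], m[col])]
    return [row[n:] for row in m]            # right half is A^(-1)

A = [[1, -1, 1], [2, 1, -1], [1, 2, 4]]
print(inverse(A))  # rows (1/3, 1/3, 0), (-1/2, 1/6, 1/6), (1/6, -1/6, 1/6)
```

A singular input simply makes the routine return None rather than raising an error, which mirrors the conclusion "A⁻¹ does not exist" reached by hand.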
Example 12. Let us consider the matrix

A = [1  -1  0]
    [2   1  3]
    [1   2  3]

In order to find A⁻¹, if it exists, we write

B = [A | I] = [1  -1  0 | 1  0  0]
              [2   1  3 | 0  1  0]
              [1   2  3 | 0  0  1]

  ~ [1  -1  0 |  1  0  0]
    [0   3  3 | -2  1  0] , R2 → R2 − 2R1
    [0   3  3 | -1  0  1] , R3 → R3 − R1

  ~ [1  -1  0 |  1   0  0]
    [0   3  3 | -2   1  0]
    [0   0  0 |  1  -1  1] , R3 → R3 − R2

At this stage, we note that row-rank of A < 3 = the order of the matrix
A. Hence, by Theorem 1.4, it follows that A is singular, implying that A⁻¹
does not exist.
If we continue the reduction,

  ~ [1   0  1 |  1/3  1/3  0] , R1 → R1 + R2
    [0   1  1 | -2/3  1/3  0] , R2 → R2/3
    [0   0  0 |   1   -1   1]

At this stage, we also note that the row-reduced echelon form of
A ≠ I, and hence A⁻¹ does not exist.
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Let a, b, c, d all be nonzero, and let the entries not written be all 0's. Find
the inverses of the matrices given below, if they exist.
2. For the following matrices A, determine whether A-I exists. If A-I exists,
find it.
[~
-1 0
[-~ ~]
-1
(/)
-7 -2
3 (ii)
4
0
0
0
5
-6
j]
~l
0 0
-2
H -:]
0
(iii) (iv) 5
[:
a 3
a
aZ a
-4
(,) [: 0 ;] (,,) [j ~ ~l
(w) 0
1
(,;,{~ -: -;] (viii)
[-1 -13 21]4
-3
(a) [~ -i]
0 (x) [! ~ ;]
3. Let A be an n × n matrix. Prove the following:
(i) If A is invertible and AB = 0 for some matrix B, then B = 0.
(ii) If A is not invertible, there exists an n × m matrix B such that
AB = 0, but B ≠ 0.
4. If A and B are invertible, are A + B, A - B and -A invertible? Justify
your answer.
2.1 VECTOR SPACES

7. α(x + y) = α(x1 + y1, x2 + y2, ..., xn + yn)
            = (α(x1 + y1), α(x2 + y2), ..., α(xn + yn))
            = (αx1 + αy1, αx2 + αy2, ..., αxn + αyn)
            = (αx1, αx2, ..., αxn) + (αy1, αy2, ..., αyn)
            = α(x1, x2, ..., xn) + α(y1, y2, ..., yn)
            = αx + αy ∀ x, y ∈ Vn, α ∈ ℝ.
8. (α + β)x = ((α + β)x1, (α + β)x2, ..., (α + β)xn)
            = (αx1 + βx1, αx2 + βx2, ..., αxn + βxn)
            = (αx1, αx2, ..., αxn) + (βx1, βx2, ..., βxn)
            = α(x1, x2, ..., xn) + β(x1, x2, ..., xn)
            = αx + βx ∀ α, β ∈ ℝ, x ∈ Vn.
9. α(βx) = α(βx1, βx2, ..., βxn)
         = (α(βx1), α(βx2), ..., α(βxn))
         = ((αβ)x1, (αβ)x2, ..., (αβ)xn)
         = (αβ)(x1, x2, ..., xn) = (αβ)x ∀ α, β ∈ ℝ, x ∈ Vn.
10. 1·x = (1·x1, 1·x2, ..., 1·xn)
        = (x1, x2, ..., xn)
        = x ∀ x ∈ Vn.
Thus, all ten axioms of a vector space are satisfied. Hence Vn is a real
vector space.
Example 2. Let V be the set of pairs (x1, x2) of real numbers, and let F be
the field of real numbers. Define
(x1, x2) + (y1, y2) = (x1 + y1, 0),
α(x1, x2) = (αx1, 0).
Example 4. Let W be the set of all points in ℝ² of the form (3x, 2 + 5x), where
x ∈ ℝ. Is W a vector space with the usual addition of vectors and multiplication
of vectors by scalars?
Solution. No. For example, take x = 1 so that u = (3, 7) ∈ W.
For α = 3, we have αu = (9, 21).
If αu were in W, then
αu = (9, 21) = (3x, 2 + 5x).
That would imply x = 3 and x = 19/5, which is impossible, and hence
αu is not in W. Thus, W is not a vector space.
Several other examples are given in Exercises. Readers are advised to
verify all ten axioms of a vector space.
In the following proposition, we shall list some important properties of
a vector space. These properties can easily be verified using ten axioms
given in the definition of a vector space. These are left as an exercise for
the readers (see Exercise 6).
Proposition 2.1. For each u in a vector space V and scalar α in F, the
following statements are true (0_V denotes the zero vector in V):
1. 0u = 0_V.
2. α0_V = 0_V.
3. (−1)u = −u.
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Show that the set ℝ of real numbers is a vector space over the field
ℝ under the usual addition of real numbers and multiplication of real
numbers by real numbers.
2. Let C be the field of complex numbers and Cⁿ be the set of ordered
n-tuples of elements of C of the form
u = (u1, u2, ..., un),  v = (v1, v2, ..., vn),
where the ui, vi are in C. Define + and · in Cⁿ as follows:
u + v = (u1 + v1, u2 + v2, ..., un + vn),
αu = (αu1, αu2, ..., αun), α ∈ C.
Then show that Cⁿ is a vector space over C. In particular, C is a vector
space over C.
Is V a vector space over ℝ under the usual vector addition and multiplication
of vectors by scalars?
2.2 SUBSPACES
In this section, readers will be introduced to some basic concepts of
vector spaces.
Example 10. The vector space V2 (= ℝ²) is not a subspace of V3 (= ℝ³), as
V2 is not even a subset of V3; the vectors in V3 all have three entries,
whereas the vectors in V2 have only two entries.
Example 11. Consider the set
αu = (2, 2) ∉ W.
In the following theorem we shall establish an important property of
a subspace.
Theorem 2.3. Let V be a vector space over the field F. The intersection
of any collection of subspaces of V is a subspace of V.
Proof: Let {Wi} be any collection of subspaces of V and let
W = ∩Wi.
Let u, v ∈ W and let α be a scalar. Then u, v ∈ Wi for each i, and since
each Wi is a subspace,
αu + v ∈ Wi for each i
⇒ αu + v ∈ ∩Wi = W.
= h1(x) + h2(x),
where h1(x) = (1/2)(f(x) + f(−x)),  h2(x) = (1/2)(f(x) − f(−x)).
It is easy to see that
h1(−x) = h1(x) and h2(−x) = −h2(x)
⇒ h1 ∈ Ve and h2 ∈ Vo.
Hence f = h1 + h2 ∈ Ve + Vo.
Thus V ⊆ Ve + Vo, and certainly Ve + Vo ⊆ V.
Therefore, V = Ve + Vo.
(iii) To show that Ve ∩ Vo = {0_V}:
Since Ve and Vo are subspaces of V,
0_V ∈ Ve and 0_V ∈ Vo
⇒ 0_V ∈ Ve ∩ Vo.
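The even/odd decomposition f = h1 + h2 is easy to check numerically. A small Python sketch (our own illustration; the function and variable names are ours):

```python
def even_odd_parts(f):
    """Split f into an even part h1 and an odd part h2, so that f = h1 + h2."""
    h1 = lambda x: (f(x) + f(-x)) / 2   # even: h1(-x) == h1(x)
    h2 = lambda x: (f(x) - f(-x)) / 2   # odd:  h2(-x) == -h2(x)
    return h1, h2

f = lambda x: x**3 + 2*x**2 + 5         # neither even nor odd
h1, h2 = even_odd_parts(f)
for x in [0.0, 1.5, -2.0]:
    assert h1(x) + h2(x) == f(x)        # f decomposes exactly
    assert h1(-x) == h1(x) and h2(-x) == -h2(x)
print(h1(2.0), h2(2.0))  # 13.0 8.0
```

Here h1 picks out the even terms 2x² + 5 and h2 the odd term x³, matching the uniqueness of the decomposition proved above.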
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Let W be a non-empty subset of a vector space V. Show that W is a
subspace of V if and only if
αu + βv ∈ W for all u, v ∈ W and all scalars α, β.
2. Let W = {(x, 2x + 1) : x ∈ ℝ}. Is W a subspace of V2?
(iv) W = { [a b; c 0] ∈ M2×2 : a, b, c ∈ ℝ }
(v) W = { [a 0; 0 b] ∈ M2×2 : a, b ∈ ℝ }
(vi) W = { a0 + a1x + a2x² ∈ P2 : a1 = 0; a0, a2 ∈ ℝ }.
10. Let W1 and W2 be subspaces of a vector space V such that W1 + W2
= V and W1 ∩ W2 = {0_V}. Prove that for each vector u in V there are
unique vectors u1 in W1 and u2 in W2 such that u = u1 + u2. (In such
a situation, V is said to be the direct sum of the subspaces W1 and
W2, and it is written as V = W1 ⊕ W2.)
Let u = Σ_{i=1}^{n} αi ui and v = Σ_{j=1}^{m} βj vj for some ui, vj ∈ S and
for some scalars αi, βj. Then for any scalar α,
αu + v = Σ_{i=1}^{n} (ααi) ui + Σ_{j=1}^{m} βj vj
⇒ αu + v ∈ [S]
⇒ [S] is a subspace of V.
Theorem 2.5. If S is a non-empty subset of a vector space V, then [S] is
the smallest subspace of V containing S.
Proof: From Theorem 2.4, [S] is a subspace of V.
First of all, we shall show that
S ⊆ [S].
To prove this, let u ∈ S
⇒ u = 1·u
⇒ u is expressed as a finite linear combination of the elements of S
⇒ u ∈ [S].
Thus, S ⊆ [S].
Example 16. Let S = {(x, 3x, 2x) : x ∈ ℝ}. Find a vector v in V3 (= ℝ³) such
that S = [v]. Hence conclude that S is a subspace of V3.
Solution. S = {x(1, 3, 2) : x ∈ ℝ}
Row-reducing the corresponding augmented matrix B, we find that the
row-rank of the coefficient matrix ≠ row-rank of the augmented matrix.
⇒ The system is inconsistent, and hence (1, 1, 0) does not lie in [S].
⇒ v = (1/α)u + (−α1/α)u1 + ... + (−αn/α)un
⇒ v ∈ [S ∪ {u}].
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Let S = {(1, 1, 0), (2, 2, 1), (0, 0, 2)}. Determine which of the following
vectors are in [S]:
(a) (3,4,5) (b) (3,3, 5)
(c) (0,0,0) (d) (3,3, 1)
(e) (−11, 7, −1)    (f) (1, 2, 3)
2. Let S = {x³, x + x², 1 + x + x²}. Determine which of the following
polynomials are in [S]:
(a) 2x³ + x + 1      (b) 3x³ + x − 1
(c) x³ + 2x + 1      (d) 2x³ + x² + 2x + 1
3. Let S = {(2a, 0, −a) : a ∈ ℝ}. Show that S is a subspace of V3.
(Hint: Use the method discussed in Example 16.)
4. Let S = {(5a + 2b, a, b) : a, b ∈ ℝ}. Find vectors u and v such that
S = [u, v]. Why does this show that S is a subspace of V3?
(Hint: Use the method discussed in Example 17.)
5. Let S = {(a − 3b, b − a, a, b) : a, b ∈ ℝ}. Show that S is a subspace
of V4. (Hint: Use the method discussed in Example 17.)
6. Let S be a non-empty subset of a vector space V. Prove that
(a) S is a subspace of V iff S = [S].
(b) [[S]] = [S].
7. Let V be a vector space and u, v ∈ V.
Prove that [u, v] = [u − v, u + v].
8. For what value(s) of h will y be in the subspace of V3 spanned by
u1, u2, u3 if
u1 = (1, −1, −2), u2 = (5, −4, −7), u3 = (−3, 1, 0), and y = (−4, 3, h)?
9. Let u1 = (1, 4, −2), u2 = (−2, −3, 7) and b = (4, 1, h). For what value(s)
of h is b in the plane spanned by u1, u2?
10. Let u = (2, −1) and v = (2, 1). Show that the vector b = (h, k) is in
[u, v] for all h and k.
11. Determine the subspace of V3 spanned by each of the following sets:
(a) {(1, 1, 1), (0, 1, 2), (1, 0, −1)}
(b) {(2, 1, 0), (2, 0, −1)}
(c) {(1, 2, 3), (1, 3, 5)}
(d) {(1, 2, 3), (1, 3, 5), (1, 2, 4)}.
if
α1u1 + α2u2 + ... + αnun = 0_V ⇒ each αi = 0.
Definition. An infinite subset S of a vector space V is said to be linearly
independent if every finite subset of S is linearly independent. The set S
is said to be linearly dependent if it is not linearly independent.
Remark: Throughout this book we shall use the notation LD for linearly
dependent and LI for linearly independent.
How to check LD or LI of a given set S
Let S = {u1, u2, ..., un} be a subset of a vector space V. To check whether
the set S is LD or LI, consider the equation
α1u1 + α2u2 + ... + αnun = 0_V.
If it is possible to find at least one αi ≠ 0 that satisfies the above equation,
then S is LD. If the above equation has only the trivial solution, namely,
α1 = α2 = ... = αn = 0, then S is LI.
Remarks:
(a) A non-zero vector is always LI, for if u ≠ 0_V, then
αu = 0_V ⇒ α = 0.
(b) The zero vector is always LD, for 1·0_V = 0_V, and hence any set containing
the zero vector is always LD.
(c) By convention, we take the empty set Φ as LI.
Now we shall give some examples of LD and LI sets.
Example 20. Let e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1). Then the set
S = {e1, e2, e3} is LI in V3 over ℝ, as
α1e1 + α2e2 + α3e3 = 0_V = (0, 0, 0)
⇒ (α1, α2, α3) = (0, 0, 0)
⇒ α1 = α2 = α3 = 0.
Example 21. Let e1 = (1, 0, 0, ..., 0), e2 = (0, 1, 0, ..., 0), ..., en = (0, 0, ..., 0, 1).
Note that each ei has n entries. Then, as in Example 20, it can easily be
checked that the set S = {e1, e2, ..., en} is LI in Vn over ℝ.
Example 22. Suppose that we want to check whether the set
S = {(1, 2, 1), (−1, 1, 0), (5, −1, 2)}
is LD or LI in V3 over ℝ.
For this, we let α1, α2, α3 be scalars (reals) such that
α1(1, 2, 1) + α2(−1, 1, 0) + α3(5, −1, 2) = (0, 0, 0)
⇒ α1 − α2 + 5α3 = 0,
  2α1 + α2 − α3 = 0,
  α1 + 2α3 = 0.
To solve the above system of linear equations, we write the augmented
matrix associated with the above system as

B = [1  -1   5 | 0]
    [2   1  -1 | 0]
    [1   0   2 | 0]
  ~ [1  -1    5   | 0]
    [0   3  -11   | 0] , R2 → R2 − 2R1
    [0   1   -3   | 0] , R3 → R3 − R1

  ~ [1  -1    5   | 0]
    [0   1  -11/3 | 0] , R2 → R2/3
    [0   1   -3   | 0]

  ~ [1  -1    5   | 0]
    [0   1  -11/3 | 0]
    [0   0   2/3  | 0] , R3 → R3 − R2

  ~ [1  -1    5   | 0]
    [0   1  -11/3 | 0]
    [0   0    1   | 0] , R3 → (3/2)R3

and further row operations give

  ~ [1   0   0 | 0]
    [0   1   0 | 0]
    [0   0   1 | 0]

⇒ α1 = α2 = α3 = 0, and hence the set S is LI.
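The LD/LI test above amounts to computing the row-rank of the matrix whose rows are the given vectors: the set is LI exactly when the rank equals the number of vectors. An illustrative Python sketch of ours, applied to the set of Example 22 and to a dependent set:

```python
from fractions import Fraction

def is_linearly_independent(vectors):
    """Vectors are LI iff forward elimination yields rank == number of vectors."""
    m = [[Fraction(x) for x in v] for v in vectors]
    rank, ncols = 0, len(m[0])
    for col in range(ncols):
        pr = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[rank], m[pr] = m[pr], m[rank]
        for r in range(rank + 1, len(m)):          # eliminate below the pivot
            f = m[r][col] / m[rank][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank == len(vectors)

print(is_linearly_independent([(1, 2, 1), (-1, 1, 0), (5, -1, 2)]))   # True
print(is_linearly_independent([(1, 0, 2), (0, -2, 5), (2, -6, 19)]))  # False
```

The second set is LD because (2, −6, 19) = 2(1, 0, 2) + 3(0, −2, 5), the situation treated in Example 29 below.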
Example 23. Check whether the set S = {1 − x, x − x², 1 − x²} is LD or LI in P2 over ℝ.
Solution: Let a, b, c be scalars such that
a(1 − x) + b(x − x²) + c(1 − x²) = 0, the zero polynomial
⇒ (a + c) + (b − a)x − (b + c)x² = 0
⇒ a + c = 0, b − a = 0, b + c = 0.
On solving, we get
b = a,
c =-a,
and a is an arbitrary real number.
Thus, the above system of equations has infinitely many solutions and
therefore a nontrivial solution, which implies that the set S is LD.
Example 24. Show that the set S = {sin x, sin 2x, ..., sin nx} is a LI subset of
C[−π, π] for every positive integer n.
Example 26. If S = {u1, u2, ..., un} is a subset of a vector space V and u ∈ V
is such that u ∈ [S], then S ∪ {u} is LD.
Solution: Since u ∈ [S]
⇒ u = α1u1 + α2u2 + ... + αnun for some scalars αi.
⇒ ∃ a scalar α ≠ 0 such that αv = 0_V
⇒ v = 0_V.
Conversely, let v = 0_V; then 1·0_V = 0_V implies that {0_V} is LD.
LO.
(b) Let {VpV2} is LD.
LO.
=> 3 scalars a and 13 such that atleast one of a and 13 is not zero,
say a:1:- 0, and aV I + I3v2 = 0v
=> 3 scalars (Xi' (Xl' (X3 such that atleast one of (Xi is non-zero,
say,
(XI *" 0, and (XlVI + (XlV2 + <X3 V3 =0v·
=> VIE[Vl,Vl ]
Remark: This particular basis S = {1, x, x², ..., xⁿ} is called the standard
basis for Pn.
(b) If W ≠ {0_V}, some subset of S is a basis for W; that is, we can find
a subset A of S such that A is LI and [A] = W = [S].
Proof: Given that
⇒ There exist scalars α1, ..., αk−1, αk+1, ..., αn such that
vk = α1v1 + α2v2 + ... + αk−1vk−1 + αk+1vk+1 + ... + αnvn   ...(2.3)
To prove that
⇒ v = c1v1 + c2v2 + ... + ck−1vk−1 + ck+1vk+1 + ... + cnvn,
for some scalars ci
⇒ v = c1v1 + c2v2 + ... + ck−1vk−1 + 0·vk + ck+1vk+1 + ... + cnvn.
(b) If the original set S is LI, then it is already a basis for W as [S] =
W. If S is LD, then one of the vectors in S can be expressed as a linear
combination of the remaining vectors of S. Then, by part (a), the set S1
formed from S by deleting that vector spans W. If S1 is LI, we are done.
Otherwise, one of the vectors of S1 is a linear combination of the remaining
vectors of S1. Form the set S2 by deleting that vector of S1. Then S2 spans
W. If S2 is LI, we are done. Otherwise, continue the above process until the
spanning set is LI and hence is a basis for W = [S]. If the spanning set
is eventually reduced to one vector, then this vector will be non-zero as
W ≠ {0_V}, and hence linearly independent.
This completes the proof of part (b).
Example 29. Let S = {(1, 0, 2), (0, −2, 5), (2, −6, 19)} be a subset of V3.
Determine whether the set S is LI or LD. If the set S is LD, find a LI
subset A of S such that [A] = [S]. (Note that this LI set A is a basis for
[S].)
Solution: It is standard to check (readers should verify) that the set S is
LD. In fact
(2, −6, 19) = 2(1, 0, 2) + 3(0, −2, 5).
Then, by the spanning set theorem,
[(1, 0, 2), (0, −2, 5)] = [(1, 0, 2), (0, −2, 5), (2, −6, 19)].
One can check that the set A = {(1, 0, 2), (0, −2, 5)} is LI.
Thus, A is a basis for [S].
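The spanning set theorem can also be seen computationally: deleting the dependent vector does not change the row-rank, and hence does not change the span. An illustrative Python sketch of ours (the helper `rank` is our own name):

```python
from fractions import Fraction

def rank(vectors):
    """Row-rank of the matrix whose rows are the given vectors."""
    m = [[Fraction(x) for x in v] for v in vectors]
    r, ncols = 0, len(m[0])
    for col in range(ncols):
        pr = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):       # eliminate below the pivot
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

S = [(1, 0, 2), (0, -2, 5), (2, -6, 19)]
A = S[:2]                    # drop the dependent vector (2, -6, 19)
print(rank(S), rank(A))      # 2 2, so [A] = [S]
```

Equal ranks confirm that A spans the same subspace as S, and since rank(A) equals the number of vectors in A, the set A is LI — a basis for [S].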
Now in the following theorem, we shall establish an important result
which is a basic tool in basis of a vector space.
Theorem 2.9. Let S = {v1, v2, ..., vm} span a vector space V. Then any LI
set of vectors in V contains no more than m elements.
Proof: In order to prove this theorem, it is enough to show that any set
in V which contains more than m vectors is LD. Let W be such a set; that
is, let
W = {u1, u2, ..., un} be a set in V with n > m.
Since S spans V,
⇒ each uj (1 ≤ j ≤ n), which is in V, can be expressed as a linear
combination of the elements of S, say
uj = Σ_{i=1}^{m} A_ij v_i,  1 ≤ j ≤ n.
Then, for scalars α1, α2, ..., αn,
α1u1 + α2u2 + ... + αnun = Σ_{j=1}^{n} αj uj
                        = Σ_{j=1}^{n} αj ( Σ_{i=1}^{m} A_ij v_i )
                        = Σ_{i=1}^{m} ( Σ_{j=1}^{n} A_ij αj ) v_i.
We know that if A is an m × n matrix and m < n, then the homogeneous
linear system Ax = 0 has a non-trivial solution; that is, if the number of
unknowns is more than the number of equations (n > m), the equation Ax = 0
has a solution x other than x = 0 (see Theorem 1.2).
Using this fact, we note that the homogeneous linear system of equations
Σ_{j=1}^{n} A_ij αj = 0,  i = 1, 2, ..., m,
has a non-trivial solution (α1, α2, ..., αn). Multiplying the above equations on
the right by v1, v2, ..., vm, respectively, and adding, we get
Proof: If W = {0_V}, then certainly dim W = 0 ≤ dim V. So assume W ≠ {0_V}
and let
S0 = {v1, v2, ..., vk} be any LI set in W.
If S0 spans W, then S0 is a basis for W and we are done.
Let S2 = {u1, ..., uk, v1, ..., v_{m−k}} be a basis for W1.
Similarly, let
S3 = {u1, ..., uk, w1, w2, ..., w_{n−k}} be a basis for W2.
Let us consider the set
S4 = {u1, ..., uk, v1, ..., v_{m−k}, w1, ..., w_{n−k}}.
Suppose that
Σ_{i=1}^{k} a_i u_i + Σ_{i=1}^{m−k} b_i v_i + Σ_{i=1}^{n−k} c_i w_i = 0_V
⇒ Σ_{i=1}^{k} a_i u_i + Σ_{i=1}^{m−k} b_i v_i = − Σ_{i=1}^{n−k} c_i w_i.   ...(2.6)
Clearly
Σ_{i=1}^{k} a_i u_i + Σ_{i=1}^{m−k} b_i v_i ∈ W1, as S2 is a basis for W1,
and − Σ_{i=1}^{n−k} c_i w_i = Σ_{i=1}^{k} 0·u_i + Σ_{i=1}^{n−k} (−c_i) w_i ∈ W2, as S3 is a basis for W2.
⇒ Σ_{i=1}^{k} a_i u_i + Σ_{i=1}^{m−k} b_i v_i = 0_V
Is S = {(1, 2, 1), (−1, 1, 0), (5, −1, 2)} a basis for V3 over ℝ?
Solution. To show that S is a basis for V3, one needs to show that (i) S
is LI and (ii) S spans V3.
(i) Let a, b, c be scalars (reals) such that
a(1, 2, 1) + b(−1, 1, 0) + c(5, −1, 2) = (0, 0, 0)
⇒ a − b + 5c = 0,
  2a + b − c = 0,
  a + 2c = 0.
On solving we obtain a = 0 = b = c.
⇒ S is LI.
(ii) Since S is LI and the number of elements in S = 3 = dim V3,
⇒ S spans V3, by Theorem 2.10.
Thus S is a basis for V3.
Clearly the set B = {(−1, 1, 0), (−1, 0, 1)} spans S, and it is easy to check
that B is LI. Hence B is a basis for S and dim S = 2.
{u1 + u2, u2 + u3, u3 + u4, ..., u_{n−1} + u_n, u_n + u1} is also LI in Vn.
Write w1 = v1, w_k = Σ_{i=1}^{k} c_{ik} v_i, 2 ≤ k ≤ n, where c_{ik} > 0
for all i (1 ≤ i ≤ k).
In many physical systems, vector spaces are transformed into other vector
spaces by means of an appropriate transformation. In this chapter, readers
will be introduced to linear transformations, which send each element of a
vector space into another vector space while preserving some basic properties.
Then
D(p(x) + q(x)) = D((a0 + b0) + (a1 + b1)x + ... + (an + bn)xⁿ)
             = (a1 + b1) + 2(a2 + b2)x + ... + n(an + bn)x^{n−1}
             = (a1 + 2a2x + ... + n an x^{n−1}) + (b1 + 2b2x + ... + n bn x^{n−1})
             = D(p(x)) + D(q(x)).
If α is any scalar, then
D(α p(x)) = D((αa0) + (αa1)x + ... + (αan)xⁿ)
          = (αa1) + 2(αa2)x + ... + n(αan)x^{n−1}
          = α(a1 + 2a2x + ... + n an x^{n−1})
          = α D(p(x)).
Thus D is a linear transformation.
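Representing p(x) = a0 + a1x + ... + an xⁿ by its coefficient list, the linearity of D can be checked directly. A Python sketch of our own (the helper names `D`, `add`, `scale` are ours):

```python
def D(p):
    """Differentiate a polynomial given as coefficients [a0, a1, ..., an]."""
    return [i * a for i, a in enumerate(p)][1:] or [0]

def add(p, q):
    """Coefficient-wise sum, padding the shorter list with zeros."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    """Multiply every coefficient by the scalar c."""
    return [c * a for a in p]

p, q, alpha = [5, 0, 3, 1], [1, 2, 0, 4], 7      # p(x) = 5 + 3x^2 + x^3, etc.
assert D(add(p, q)) == add(D(p), D(q))           # D(p + q) = D(p) + D(q)
assert D(scale(alpha, p)) == scale(alpha, D(p))  # D(alpha p) = alpha D(p)
print(D(p))  # [0, 6, 3], i.e. p'(x) = 6x + 3x^2
```

The two assertions are exactly the additivity and homogeneity conditions verified symbolically above.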
Example 4. Let T: V2 → V2 be a map defined by
T(u1, u2) = (u2 + 4, u1).
Then T is not a linear transformation, because for u = (u1, u2) ∈ V2 and
α ∈ ℝ, we note that
T(αu) = T(αu1, αu2) = (αu2 + 4, αu1), and
αT(u) = α(u2 + 4, u1) = (αu2 + 4α, αu1).
This shows that T(αu) ≠ αT(u) if α ≠ 1.
Hence T is not a linear transformation.
Definition. If V is a vector space over the field F, then a linear transformation
T from V into itself is called a linear operator.
In the following theorems, we shall give some basic properties of a
linear transformation.
Theorem 3.1. Let U and V be vector spaces over the same field F, and
T: U ~ V be a linear transformation. Then
(a) T(Ou) = Ov, where 0u and 0v are zero elements of U and V,
respectively.
(b) T(-u) = - T(u) for all u in U.
Proof: Since T is a linear transformation, for all u in U and for all
scalars α,
T(αu) = αT(u).
Substituting α = 0, we obtain (a), and for α = −1, we obtain (b).
Theorem 3.2. Let U and V be vector spaces over the same field F. Then
the map T: U → V is a linear transformation iff
T(αu1 + u2) = αT(u1) + T(u2) for all u1, u2 in U and α in F.
Proof: First, suppose that T is a linear transformation. Then for all u1, u2
in U and α in F, we have
T(αu1 + u2) = T(αu1) + T(u2)
            = αT(u1) + T(u2).
Conversely, suppose that T(αu1 + u2) = αT(u1) + T(u2) for all u1, u2
in U and α in F.
Then for α = 1, we obtain
T(u1 + u2) = T(u1) + T(u2);
and for u2 = 0_U, we obtain
T(αu1 + 0_U) = αT(u1) + T(0_U)
⇒ T(αu1) = αT(u1) + 0_V = αT(u1).
Hence T is a linear transformation.
The proof of the following theorem is similar to that of Theorem 3.2 and
hence is left as an exercise for the reader (see Exercise 1).
Theorem 3.3. Let U and V be vector spaces over the same field F. Then
T is a linear transformation iff
T(a1u1 + a2u2) = a1T(u1) + a2T(u2) for all u1, u2 in U and a1, a2 in F.
In fact, by induction on n one can show that for a linear transformation
T, we always have
T(a1u1 + a2u2 + ... + anun) = a1T(u1) + a2T(u2) + ... + anT(un)
for all u1, u2, ..., un in U and a1, a2, ..., an in F.
Remark: Theorem 3.2 or 3.3 can directly be used to show that the given
map is a linear transformation.
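As an illustrative aside, the single condition of Theorem 3.2 can also be probed numerically. The Python sketch below (sample vectors and scalars are arbitrary choices) tests T(au1 + u2) = aT(u1) + T(u2); the map of Example 4 fails, while the map T(x1, x2) = (x1 + x2, 0) passes:

```python
def is_linear_on_samples(T, samples, scalars):
    """Probe T(a*u1 + u2) == a*T(u1) + T(u2) on sample pairs.

    A failure proves T is not linear; success only suggests linearity."""
    for u1 in samples:
        for u2 in samples:
            for a in scalars:
                lhs = T(tuple(a * x + y for x, y in zip(u1, u2)))
                rhs = tuple(a * s + t for s, t in zip(T(u1), T(u2)))
                if lhs != rhs:
                    return False
    return True

samples = [(0, 0), (1, 2), (-3, 5)]
scalars = [0, 1, -2, 3]

T_linear = lambda u: (u[0] + u[1], 0)   # linear
T_shift  = lambda u: (u[1] + 4, u[0])   # Example 4: not linear

print(is_linear_on_samples(T_linear, samples, scalars))  # True
print(is_linear_on_samples(T_shift, samples, scalars))   # False
```

Note that such a check can only disprove linearity; proving it still requires the algebraic argument of Theorem 3.2.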
In the following theorem we shall show that a linear transformation is
completely determined by its values on the elements of a basis.
Theorem 3.4. Let U be a finite-dimensional vector space over the field F
and let {u1, u2, ..., un} be an ordered basis for U. Let V be a vector space
62 Introductory Linear Algebra
over the same field F and let v1, v2, ..., vn be n vectors (not necessarily
distinct) in V. Then there exists a unique linear transformation T: U ~ V
such that
T(ui) = vi for i = 1, 2, ..., n.
Proof: Let u ∈ U be any element, and given that B = {u1, u2, ..., un} is a basis
for U.
⇒ ∃ scalars a1, a2, ..., an such that
u = a1u1 + a2u2 + ... + anun.
For this vector u in U, we define
T(u) = a1v1 + a2v2 + ... + anvn.
Clearly
T(u1) = T(1.u1 + 0.u2 + ... + 0.un) = 1.v1 + 0.v2 + ... + 0.vn = v1,
T(u2) = T(0.u1 + 1.u2 + 0.u3 + ... + 0.un) = 0.v1 + 1.v2 + 0.v3 + ... + 0.vn = v2.
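Theorem 3.4 can also be replayed computationally: express u in the given basis and map its coordinates onto the prescribed images. A Python sketch for U = V2, using Cramer's rule for the coordinates (the basis and image vectors are illustrative choices, not taken from the theorem):

```python
from fractions import Fraction

def T_from_basis(basis, images):
    """Theorem 3.4 sketch for U = V2: given a basis {u1, u2} and target
    vectors v1, v2, return the unique linear T with T(ui) = vi."""
    (p, q), (r, s) = basis
    det = Fraction(p * s - q * r)      # nonzero since {u1, u2} is a basis
    def T(u):
        x, y = u
        a1 = (x * s - y * r) / det     # u = a1*u1 + a2*u2 (Cramer's rule)
        a2 = (p * y - q * x) / det
        return tuple(a1 * v1 + a2 * v2 for v1, v2 in zip(*images))
    return T

# illustrative data: basis {(1, 2), (2, 1)} with T(1, 2) = (1, 1), T(2, 1) = (0, 1)
T = T_from_basis([(1, 2), (2, 1)], [(1, 1), (0, 1)])
print(T((1, 2)) == (1, 1), T((2, 1)) == (0, 1))   # True True
```

By construction T reproduces the prescribed values on the basis, and linearity of the coordinate map makes T linear, mirroring the proof above.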
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Prove Theorem 3.3.
2. Which of the following functions are linear transformations?
(a) T : V2 → V2 defined by T(x1, x2) = (1 + x1, x2).
(b) T : V2 → V2 defined by T(x1, x2) = (x1 + x2, 0).
(c) T : V3 → V2 defined by T(x1, x2, x3) = (x1 − x2, 0).
(d) T : V2 → V2 such that T(3, 4) = (1, 2), T(1, 3) = (1, 0), and
T(2, 1) = (1, −1).
(e) T : V3 → V2 such that T(1, −1, 1) = (1, 0) and T(1, 1, 1) = (0, 1).
(f) T : V3 → V3 such that
T(1, −1, 2) = (1, 2, 4), T(2, 3, 1) = (2, 1, 3) and T(3, 4, −2) = (3, 0, 2).
Since au1 + u2 ∈ U as U is a vector space, it follows that
av1 + v2 ∈ R(T)
⇒ R(T) is a subspace of V.
(b) Since T(0u) = 0v ⇒ 0u ∈ N(T) ⇒ N(T) ≠ φ.
Let u1, u2 ∈ N(T) ⇒ T(u1) = 0v, T(u2) = 0v.
Let B1 be extended so that B2 = {u1, u2, ..., uk, uk+1, ..., un} is a basis
for U.
We wish to show that
B = {T(uk+1), T(uk+2), ..., T(un)} is a basis for R(T).
To do this, we shall show that
(i) B is LI, and (ii) B spans R(T).
To show (i), let αk+1, αk+2, ..., αn be scalars such that
Σ_{i=k+1}^n αi T(ui) = 0v
⇒ T(Σ_{i=k+1}^n αi ui) = 0v, as T is linear
⇒ Σ_{i=k+1}^n αi ui ∈ N(T)
⇒ Σ_{i=k+1}^n αi ui = Σ_{i=1}^k βi ui, as B1 is a basis for N(T)
⇒ Σ_{i=1}^k βi ui + Σ_{i=k+1}^n (−αi) ui = 0u
⇒ βi = 0 for i = 1, 2, ..., k; and
αi = 0 for i = k + 1, k + 2, ..., n, as B2 is LI.
This shows that the set B is LI.
Now to prove (ii), we note that any element in R(T) is of the form T(u)
for some u in U.
Since B2 is a basis for U and u is in U, therefore
u = Σ_{i=1}^n ci ui
⇒ T(u) = T(Σ_{i=1}^k ci ui + Σ_{i=k+1}^n ci ui).
Then, dim R(T) ≤ dim U, because dim R(T) + dim N(T) = dim U < ∞.
Remark 2. If T : U → V is a linear transformation and dim U < ∞, then
dim R(T) ≤ min{dim U, dim V}.
Proof: Since R(T) is a subspace of V
⇒ dim R(T) ≤ dim V.
From Remark 1, dim R(T) ≤ dim U.
Hence dim R(T) ≤ min{dim U, dim V}.
Remark 3. If T : U → V is a linear transformation and dim U < ∞, then
dim R(T) = dim U ⇔ T is one-one.
Proof: Since dim R(T) + dim N(T) = dim U,
dim R(T) = dim U ⇔ dim N(T) = 0
⇔ N(T) = {0u}
⇔ T is one-one.
Remark 4. If dimR(T) <dimU, then T is not one-one. It follows from
Remark 3.
Remark 5. T is onto ⇔ R(T) = V
⇒ dim R(T) = dim V.
Remark 6. dim R (T) = dim U = dim V ~ T is one-one and onto.
=> dimN(T)=O
One may note here that T is one-one.
Now we have
R(T) = {T(x1, x2, x3) : (x1, x2, x3) ∈ V3}
= {(x1, x1, x1, 0) + (0, −x2, −x2, 0) + (0, 0, −x3, x3) : x1, x2, x3
are any real numbers}
= {x1(1, 1, 1, 0) + x2(0, −1, −1, 0) + x3(0, 0, −1, 1) : x1, x2, x3
are any real numbers}
= [{(1, 1, 1, 0), (0, −1, −1, 0), (0, 0, −1, 1)}]
Clearly, the set
S = {(1, 1, 1, 0), (0, −1, −1, 0), (0, 0, −1, 1)}
spans R(T), and it is straightforward to verify that S is LI.
Thus dim N(T) + dim R(T) = 0 + 3 = 3 = dim V3, which verifies
the rank-nullity theorem.
Remark for Example 8. Suppose that we do not wish to verify the rank-
nullity theorem. In that case, one need not check the linear independence
of the set S. After finding dim N(T), one may directly use the rank-nullity
theorem to find dim R(T). For example, in the above case dim N(T) = 0.
Hence
dim R(T) = dim V3 − dim N(T) = 3.
= [{x − x^3}]
Remark for Example 9. It may be noted here that the set S = {1, x, x^3}
spans R(T) and S is LI. Hence dim R(T) = 3.
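The rank-nullity bookkeeping can also be checked mechanically. The sketch below assumes a map consistent with the range computation of Example 8, namely T(x1, x2, x3) = (x1, x1 − x2, x1 − x2 − x3, x3) (an inference, since the example's defining formula appears earlier in the text); its matrix relative to the standard bases has columns T(e1), T(e2), T(e3):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Columns are T(e1), T(e2), T(e3) for the assumed map T: V3 -> V4.
A = [[1, 0, 0],
     [1, -1, 0],
     [1, -1, -1],
     [0, 0, 1]]

dim_R = rank(A)        # dim R(T)
dim_N = 3 - dim_R      # rank-nullity: dim N(T) = dim V3 - dim R(T)
print(dim_R, dim_N)    # 3 0
```

The pivot count recovers dim R(T) = 3 and hence dim N(T) = 0, matching the remark above.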
EXERCISES _______________________________
1. If T : V2 → V2 be a linear transformation such that T(1, 2) = (1, 1) and
T(2, 1) = (0, 1); find R(T).
2. If T : P2 → V2 be a linear transformation such that T(1) = (1, 0),
T(x) = (1, 1) and T(x^2) = (2, 2); find R(T).
3. Let T : Vm → Vn be a linear transformation with m > n. Is T one-one?
Justify your answer.
4. Let T : Vm → Vn be a linear transformation with m < n. Is T onto? Justify
your answer.
Theorem 3.9. Let U and V be vector spaces over the field F and
dim U = n < ∞ and dim V = m < ∞. Then dim L(U, V) = mn < ∞.
Proof. Let
B1 = {u1, u2, ..., un} and B2 = {v1, v2, ..., vm}
be ordered bases for U and V, respectively. For each pair of integers (p, q)
with 1 ≤ p ≤ m, 1 ≤ q ≤ n, we define a linear transformation
E^{p,q} : U → V by
E^{p,q}(uj) = 0v if j ≠ q, and E^{p,q}(uj) = vp if j = q;
that is, E^{p,q}(uj) = δ_{jq} vp, 1 ≤ j ≤ n.
Here δ_{jq} is the Kronecker delta, and it may be noted that such a linear
transformation exists uniquely by Theorem 3.4. We claim that the set
B = {E^{p,q} : 1 ≤ p ≤ m, 1 ≤ q ≤ n}
forms a basis for L (U, V). It may be noted that B has mn elements.
First of all, we show that B spans L(U, V).
Let T ∈ L(U, V) be any element.
⇒ T(uj) = Σ_{p=1}^m A_{pj} vp, for some scalars A_{pj}, 1 ≤ j ≤ n.
Define S = Σ_{p=1}^m Σ_{q=1}^n A_{pq} E^{p,q}. Then
S(uj) = Σ_{p=1}^m Σ_{q=1}^n A_{pq} E^{p,q}(uj)
= Σ_{p=1}^m A_{pj} vp
= T(uj), 1 ≤ j ≤ n
⇒ S = T.
Hence B spans L(U, V).
Finally, we show that B is LI.
Let S = Σ_{p=1}^m Σ_{q=1}^n A_{pq} E^{p,q} = 0, the zero transformation.
⇒ S(uj) = Σ_{p=1}^m Σ_{q=1}^n A_{pq} E^{p,q}(uj) = 0(uj) = 0v, 1 ≤ j ≤ n
⇒ Σ_{p=1}^m Σ_{q=1}^n A_{pq} δ_{jq} vp = 0v
⇒ Σ_{p=1}^m A_{pj} vp = 0v
⇒ A_{pj} = 0 for all p and j, as B2 is LI.
Hence B is LI, and so B is a basis for L(U, V).
= S(x2, x1)
= (x2, 0).
(iii) (TS)(x1, x2) = T(S(x1, x2))
= T(x1, 0)
= (0, x1).
(iv) T^2(x1, x2) = T(T(x1, x2))
= T(x2, x1)
= (x1, x2).
(v) S^2(x1, x2) = S(S(x1, x2))
= S(x1, 0)
= (x1, 0).
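The computations (ii)-(v) are easy to replay in code. The operator definitions below are inferred from the computations themselves (T swaps the two coordinates, S projects onto the first), since the defining formulas appear earlier in the text:

```python
# Inferred operators on V2:
#   T(x1, x2) = (x2, x1)  (swap),  S(x1, x2) = (x1, 0)  (project).
T = lambda u: (u[1], u[0])
S = lambda u: (u[0], 0)

compose = lambda f, g: (lambda u: f(g(u)))   # (fg)(u) = f(g(u))

u = (3, 7)
print(compose(S, T)(u))   # ST: (7, 0)
print(compose(T, S)(u))   # TS: (0, 3)
print(compose(T, T)(u))   # T^2 = identity: (3, 7)
print(compose(S, S)(u))   # S^2 = S: (3, 0)
```

Note that ST ≠ TS here, which is the usual situation for composed operators.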
EXERCISES _______________________________
1. Let T and S be linear operators defined on V2 by
T(x1, x2) = (0, x2), S(x1, x2) = (−x1, −x2).
Find a general rule like the one defining S and T for the linear
transformations
S + T, ST, TS, S^2, T^2.
2. Let T : V3 → V2 be a linear transformation defined by
T(x1, x2, x3) = (x1 − x2, x2 − x3).
= aT⁻¹(v1) + T⁻¹(v2)
⇒ T⁻¹ is linear.
Step II: To show that T⁻¹ : V → U is one-one.
Hence T⁻¹ is onto.
In the following theorem, we shall show that non-singular linear
transformations preserve linear independence.
Theorem 3.12. Let U and V be vector spaces over the field F and let
T : U → V be a linear transformation. Then T is one-one if and only if
T maps linearly independent subsets of U onto linearly independent subsets
of V.
Proof: Let T be one-one.
Let S = {u1, u2, ..., un} be LI in U.
Let a1, a2, ..., an be scalars such that
Σ_{i=1}^n ai T(ui) = 0v
⇒ T(Σ_{i=1}^n ai ui) = 0v, as T is linear
⇒ Σ_{i=1}^n ai ui = 0u, as T is one-one
⇒ ai = 0 for all i = 1, 2, ..., n; as S is LI
⇒ T(S) = {T(u1), T(u2), ..., T(un)} is LI in V.
Conversely, let T map every LI set in U onto an LI set in V.
Let u ∈ U be such that u ≠ 0u
⇒ {u} is LI in U
⇒ {T(u)} is LI in V
⇒ T(u) ≠ 0v.
Thus, u ≠ 0u ⇒ T(u) ≠ 0v,
i.e., T(u) = 0v ⇒ u = 0u
⇒ T is one-one.
Theorem 3.13. Let U and V be vector spaces over the field F, and let
T : U ~ V be a linear transformation. Suppose that dim U = dim V < 00.
Then T is one-one iff T is onto.
Proof: We have
T is one-one ⇔ N(T) = {0u}
⇔ dim N(T) = 0
⇔ dim R(T) = dim U (since dim N(T) + dim R(T) = dim U)
⇔ dim R(T) = dim V, as given
⇔ R(T) = V
⇔ T is onto.
T(a0 + a1x + a2x^2) = a0 + (a1 − a2)x + (a0 + a1 + a2)x^2.
Is T non-singular? If yes, find a rule for T⁻¹ like the one which defines T.
Solution: To show that T is non-singular, it is enough to show that T is
one-one, as T : P2 → P2 is linear and dim P2 = 3 < ∞.
We have
N(T) = {p(x) ∈ P2 : T(p(x)) = 0, the zero polynomial in P2}
= {a0 + a1x + a2x^2 : a0 + (a1 − a2)x + (a0 + a1 + a2)x^2 = 0}
= {a0 + a1x + a2x^2 : a0 = 0, a1 − a2 = 0, a0 + a1 + a2 = 0}
= {a0 + a1x + a2x^2 : a0 = 0 = a1 = a2}
⇒ N(T) = {0}, where 0 is the zero polynomial in P2
⇒ T is one-one.
In order to find T⁻¹, let
T⁻¹(a0 + a1x + a2x^2) = b0 + b1x + b2x^2; ai, bi ∈ ℜ.
On solving, we obtain
T⁻¹(a0 + a1x + a2x^2) = a0 + (1/2)(a1 + a2 − a0)x + (1/2)(a2 − a1 − a0)x^2.
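The rule obtained for T⁻¹ can be verified by composing it with T on coefficient triples (a0, a1, a2); a short Python check using exact rational arithmetic:

```python
from fractions import Fraction as F

# Polynomials in P2 as coefficient triples (a0, a1, a2) for a0 + a1 x + a2 x^2.
def T(p):
    a0, a1, a2 = p
    return (a0, a1 - a2, a0 + a1 + a2)

def T_inv(p):
    a0, a1, a2 = p
    return (a0, F(a1 + a2 - a0, 2), F(a2 - a1 - a0, 2))

p = (1, -2, 5)
print(T(T_inv(p)) == p and T_inv(T(p)) == p)   # True: T_inv inverts T
```

Both compositions return the original triple, confirming that the displayed rule is indeed the inverse of T.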
Example 12. Let T be a linear operator defined on V3 such that
T(e1) = e1 + e2,
T(e2) = e1 − e2 + e3,
T(e3) = 3e1 + 4e3,
where {e1, e2, e3} is the standard basis for V3.
Thus,
T⁻¹(x1, x2, x3)
EXERCISES _______________________________
1. In all problems of Exercise 9 in Sec 3.2, check whether T is non-singular
(that is, one-one and onto) or not. If T is non-singular, find T⁻¹.
2. Let T : V3 → V3 be a linear transformation such that
T(1, 0, 0) = (−1, 0, 0), T(0, 1, 0) = (0, 0, −1) and T(0, 0, 1) = (0, 1, −1).
Is T non-singular? If so, find T⁻¹.
3. Let T : V3 → V3 be a linear transformation such that
T(1, 1, 0) = (1, 0, 0), T(0, 1, 0) = (0, 0, 1) and T(0, 1, −1) = (0, 1, 0).
Is T non-singular? If so, find T⁻¹.
4. Let T be a linear operator on a finite-dimensional vector space V. Suppose
that rank of T^2 = rank of T. Prove that R(T) ∩ N(T) = {0v}.
T^2(x1, x2, x3) = T(T(x1, x2, x3)) = T(0, x2, x3) = (0, x2, x3)
= T(x1, x2, x3) for all (x1, x2, x3) in V3
⇒ T^2 = T.
Example 16. Let T be an idempotent operator on a vector space V. Then
N(T) = R(I − T).
Solution. Let u ∈ N(T) ⇒ T(u) = 0v.
Now (I − T)(u) = I(u) − T(u) = u ⇒ u ∈ R(I − T).
Hence N(T) ⊆ R(I − T).
Let v ∈ R(I − T)
⇒ D^{n+1}(pn(x)) = 0.
Hence D is nilpotent and the degree of nilpotence of D is n + 1.
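The nilpotence of D is easy to observe on coefficient lists: each differentiation drops the degree by one, so n + 1 applications annihilate any polynomial in Pn. A small sketch, representing the zero polynomial as the empty list:

```python
def D(p):
    """Derivative on coefficient lists: p = [a0, a1, ..., an]."""
    return [i * p[i] for i in range(1, len(p))]

n = 4
p = [1, 2, 0, -3, 5]          # a degree-n polynomial in Pn, n = 4
q = p
for _ in range(n + 1):        # apply D exactly n + 1 times
    q = D(q)
print(q)                      # []: the zero polynomial, so D^(n+1) = 0 on Pn
```

Applying D only n times leaves the nonzero constant n!·an, which is why the degree of nilpotence is exactly n + 1 and not less.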
Example 18. Let S and T be nilpotent linear operators on a vector space
V such that ST = TS. Then show that ST is nilpotent. Find the degree of
nilpotence of ST.
Solution: Since S and T are nilpotent
⇒ ∃ n ≥ 1, m ≥ 1 such that S^n = 0, T^m = 0.
Given ST= TS
6. Let T be a linear operator on a vector space V3 defined by T(x1, x2, x3)
= (0, x1, x1 + x2). Is T nilpotent? If so, what is the degree of nilpotence
of T?
7. Let T be a linear operator on a vector space P3 defined by
T(a0 + a1x + a2x^2 + a3x^3) = (a0 − a1)x^2 + (a0 + a1)x^3.
Prove that
(a) T is nilpotent of degree 2.
(b) For a non-zero scalar λ, I + λT is non-singular.
Example 19. Let f : ℜ^2 → ℜ be defined by
f(x1, x2) = a1x1 + a2x2; a1, a2 ∈ ℜ.
Then f is a linear functional.
Definition. Let V be a vector space over the field F. Then L(V, F), the set
of linear functionals on V, is called the dual space of V, and L(V, F) is
denoted by V*, that is,
L(V, F) = V* = dual space of V
= {f : f is a linear functional on V},
and dim V* = dim L(V, F) = dim V · dim F = n·1 = n = dim V.
Theorem 3.14. Let V be an n-dimensional vector space over the field F. Let
B = {v1, v2, ..., vn} be an ordered basis for V. Then
(a) there exists a unique basis B* = {f1, f2, ..., fn} for V* such that
fi(vj) = δij;
(b) for each linear functional f on V,
f = Σ_{i=1}^n f(vi) fi;
(c) for each vector v in V,
v = Σ_{i=1}^n fi(v) vi.
Proof. (a) Given that B = {v1, v2, ..., vn} is an ordered basis for V. Then from
Theorem 3.4, for each i = 1, 2, ..., n, there exists a unique linear functional
fi on V such that
fi(vj) = δij, that is, fi(vj) = 0 if i ≠ j and fi(vi) = 1.
To see that {f1, f2, ..., fn} is LI, let
Σ_{i=1}^n αi fi = 0, the zero functional
⇒ (Σ_{i=1}^n αi fi)(vj) = 0(vj) = 0, j = 1, 2, ..., n
n
~ 2,(X;Jj(Vj )=O
;=1
n
~ 2,(X;Oij=O
;=1
Now,
f(vj) = Σ_{i=1}^n αi fi(vj)
= Σ_{i=1}^n αi δij
= αj for all j = 1, 2, ..., n.
Hence f = Σ_{i=1}^n f(vi) fi.
(c) Given that v ∈ V be any element.
⇒ v = Σ_{i=1}^n βi vi for some scalars βi.
Applying fj gives fj(v) = βj, j = 1, 2, ..., n.
Hence, v = Σ_{i=1}^n fi(v) vi.
S⁰ = {f ∈ V* : f(u) = 0 for all u in S}
is called the annihilator of S.
Theorem 3.15. Let S be a subset (not necessarily a subspace) of
a vector space V. Then S⁰ is a subspace of V*.
Proof: We have
S⁰ = {f ∈ V* : f(u) = 0 for all u in S}.
Since 0(u) = 0 for all u in S, therefore 0 ∈ S⁰. (Here 0 is the zero functional
in V*.) Thus S⁰ ≠ φ.
Let f, g ∈ S⁰ and α be any scalar. Then for all u in S,
(αf + g)(u) = (αf)(u) + g(u)
= αf(u) + g(u)
= α·0 + 0 = 0
⇒ αf + g ∈ S⁰.
Thus, S⁰ is a subspace of V*.
Theorem 3.16. Let V be a finite-dimensional vector space over the field F
and S be a subspace of V. Then
dim S + dim S⁰ = dim V.
Proof. Let dim V = n and dim S = k.
Let B1 = {u1, u2, ..., uk} be a basis for S.
Let B1 be extended so that
B2 = {u1, ..., uk, uk+1, ..., un} is a basis for V.
Let B2* = {f1, f2, ..., fn} be the basis for V* which is dual to the basis
B2 for V.
Hence fi(uj) = δij.
We claim that B = {fk+1, fk+2, ..., fn} is a basis for S⁰.
We prove our claim in the following three steps.
Step I: To show that fj ∈ S⁰ for all j = k + 1, k + 2, ..., n. That is, to show
that fj(u) = 0 for all u in S and for all j = k + 1, k + 2, ..., n.
Let u ∈ S be any element.
⇒ u = Σ_{i=1}^k αi ui
⇒ fj(u) = fj(Σ_{i=1}^k αi ui), j = k + 1, k + 2, ..., n
= Σ_{i=1}^k αi δij = 0, as i ≠ j
⇒ fj ∈ S⁰ for all j = k + 1, k + 2, ..., n.
Step II: B is obviously LI, as B2* is LI (since every subset of a LI set is LI).
Step III: To show that B spans S⁰.
Let f ∈ S⁰ ⊆ V*
⇒ f = Σ_{i=1}^n f(ui) fi, by part (b) of Theorem 3.14
= Σ_{i=1}^k f(ui) fi + Σ_{i=k+1}^n f(ui) fi
= 0 + Σ_{i=k+1}^n f(ui) fi, since f ∈ S⁰ ⇒ f(ui) = 0 for i = 1, 2, ..., k
= Σ_{i=k+1}^n f(ui) fi.
Hence B spans S⁰, and dim S⁰ = n − k.
Therefore, dim S + dim S" = k + n - k = n = dim V.
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Let V be a vector space over the field F. Then prove that
(a) S = {0} ⇒ S⁰ = V*. (Here 0 is the zero vector of V.)
(b) S = V ⇒ S⁰ = {0}. (Here 0 is the zero functional in V*.)
2. Let V be an n-dimensional vector space over the field F. If u (≠ 0v) ∈ V,
then show that there exists f ∈ V* such that f(u) ≠ 0.
3. Let V be an n-dimensional vector space over the field F. If u ∈ V be
such that f(u) = 0 for all f ∈ V*, then show that u = 0v.
4. Let S1 and S2 be two subspaces of a finite-dimensional vector space V.
Prove the following:
(a) S1 = S2 ⇔ S1⁰ = S2⁰
(b) (S1 + S2)⁰ = S1⁰ ∩ S2⁰
(c) (S1 ∩ S2)⁰ = S1⁰ + S2⁰.
5. If f is a linear functional on V3 such that
f(1, 0, 0) = 1, f(0, 1, 0) = −1, f(0, 0, 1) = 1, and if u = (x, y, z), find f(u).
6. If f is a linear functional on V3 such that
f(1, 0, 1) = −1, f(0, 1, −1) = 1, f(2, −1, 0) = 2, and if u = (x, y, z), find
f(u).
MATRIX REPRESENTATION OF
A LINEAR TRANSFORMATION
4.1 COORDINATES
Let V be a finite-dimensional vector space, and B = {u1, u2, ..., un} an
ordered basis for V.
Let u ∈ V be any element.
⇒ There exist scalars α1, α2, ..., αn such that
u = α1u1 + α2u2 + ... + αnun.
Then αi is called the i-th coordinate of u relative to the ordered basis B. The
coordinate matrix of u relative to the ordered basis B is denoted by [u]B.
Example 1. Let B = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} be the ordered standard basis
for V3. Suppose that we wish to find the coordinate matrix of the vector
u = (1, 4, 5) relative to the basis B. Then there exist scalars α1, α2, α3 such
that
u = (1, 4, 5) = α1(1, 0, 0) + α2(0, 1, 0) + α3(0, 0, 1)
= (α1, α2, α3)
⇒ α1 = 1, α2 = 4, α3 = 5.
Matrix Represenation of a Linear Transformation 93
Hence [u]B = [1, 4, 5]^T.
In general, it can easily be seen that the coordinate matrix of the vector
u = (u1, u2, ..., un)
relative to the ordered standard basis
B = (e1, e2, ..., en) for Vn is [u]B = [u1, u2, ..., un]^T.
Example 2. Let B = {(2, 1, 0), (2, 1, 1), (2, 2, 1)} be an ordered basis for V3.
Then find [u]B for u = (1, 2, 0).
Solution: Let α1, α2, α3 be scalars. Then
(1, 2, 0) = α1(2, 1, 0) + α2(2, 1, 1) + α3(2, 2, 1)
= (2α1 + 2α2 + 2α3, α1 + α2 + 2α3, α2 + α3)
⇒ 2α1 + 2α2 + 2α3 = 1,
α1 + α2 + 2α3 = 2,
α2 + α3 = 0.
On solving, we get
α1 = 1/2, α2 = −3/2, α3 = 3/2.
Hence [u]B = [1/2, −3/2, 3/2]^T.
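The coordinates in Example 2 amount to solving a 3 × 3 linear system, which can be done by Cramer's rule. A Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

def coords_3x3(basis, u):
    """Coordinates of u in an ordered basis of V3, by Cramer's rule."""
    det3 = lambda M: (M[0][0]*(M[1][1]*M[2][2]-M[1][2]*M[2][1])
                    - M[0][1]*(M[1][0]*M[2][2]-M[1][2]*M[2][0])
                    + M[0][2]*(M[1][0]*M[2][1]-M[1][1]*M[2][0]))
    A = [[basis[j][i] for j in range(3)] for i in range(3)]  # columns = basis vectors
    d = det3(A)
    out = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = u[i]          # replace column k by u
        out.append(Fraction(det3(Ak), d))
    return out

B = [(2, 1, 0), (2, 1, 1), (2, 2, 1)]
print([str(c) for c in coords_3x3(B, (1, 2, 0))])   # ['1/2', '-3/2', '3/2']
```

The result reproduces [u]B = [1/2, −3/2, 3/2]^T from Example 2.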
Example 3. Suppose that we wish to find an ordered basis for V4 relative
to which the vector u = (−1, 3, 2, 1) has the coordinate matrix
[4, 1, −2, 7]^T.
Solution: Let B = (u1, u2, u3, u4) be an ordered basis for V4.
Given that [u]B = [4, 1, −2, 7]^T
⇒ u = (−1, 3, 2, 1) = 4u1 + 1·u2 − 2u3 + 7u4
= 4(0, 1, 0, 0) + 1·(−1, −1, 0, 0) − 2(0, 0, −1, 3) + 7(0, 0, 0, 1).
1. Determine the coordinates of (i) (1, 2, 1), (ii) (3, 1, 2) and (iii) (4, −2, 2)
with respect to the ordered basis B = {(2, 1, 0), (2, 1, 1), (2, 2, 1)} for
the vector space V3 over ℜ.
2. Show that B = {x + 1, x^2 + x − 1, x^2 − x + 1} is a basis for P2 over
ℜ. Hence determine the coordinates of (i) 2x − 1, (ii) 1 + x^2, and (iii)
x^2 + 5x − 1 with respect to the ordered basis B.
3. Show that B = {(1, 0, −1), (1, 1, 1), (1, 0, 0)} is a basis for V3 over
ℜ. What are the coordinates of the vector (a, b, c) with respect to
the ordered basis B?
4. Let V be the real vector space of all polynomial functions from ℜ
into ℜ of degree at most 2, that is, the space of all polynomial functions
f of the form
f(x) = a0 + a1x + a2x^2.
Let t be a fixed real number and define
g1(x) = 1, g2(x) = x + t, g3(x) = (x + t)^2.
Prove that B = {g1, g2, g3} is a basis for V. If
f(x) = a0 + a1x + a2x^2,
find the coordinates of f with respect to this ordered basis B.
5. Show that the vectors
u1 = (1, 1, 0, 0), u2 = (0, 1, 1, 0),
u3 = (0, 0, 1, 1), u4 = (0, 0, 0, 1)
form a basis for V4 over ℜ. Find the coordinates of each of the standard
basis vectors with respect to the ordered basis B = {u1, u2, u3, u4}.
6. Find an ordered basis for V3 relative to which the vector u = (1, −1, 1)
has the coordinate matrix [1, 2, 3]^T.
7. Find an ordered basis for P2 relative to which the polynomial 1 + x^2
has the coordinate matrix [1, −1/2, 1/2]^T.
T(uj) = a1j v1 + a2j v2 + ... + amj vm, j = 1, 2, ..., n;
and the matrix of T relative to B1 and B2, denoted by (T : B1, B2), is defined
as the m × n matrix
(T : B1, B2) = [a11 a12 ... a1n
               a21 a22 ... a2n
               ...
               am1 am2 ... amn].
Note that the first column of (T : B1, B2) is the coordinate matrix of T(u1)
relative to B2 (i.e., [T(u1)]B2), the second column of (T : B1, B2) is [T(u2)]B2,
and so on; finally, the nth column of (T : B1, B2) is [T(un)]B2.
Example 4. Let D : Pn → Pn be the differentiation operator and let
B = {1, x, x^2, ..., x^n} be the standard ordered basis for Pn. Suppose that
we wish to find (D : B, B), the matrix of D relative to B.
To see this, we note that
D(1) = 0 = 0·1 + 0·x + 0·x^2 + ... + 0·x^n
D(x) = 1 = 1·1 + 0·x + 0·x^2 + ... + 0·x^n
D(x^2) = 2x = 0·1 + 2·x + 0·x^2 + ... + 0·x^n
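The pattern continues: the coordinate vector of D(x^j) = j x^(j−1) has a single entry j in position j − 1, so (D : B, B) can be generated mechanically. A sketch:

```python
def diff_matrix(n):
    """(D : B, B) for B = {1, x, ..., x^n}: column j holds the coordinates
    of D(x^j) = j x^(j-1) relative to B."""
    M = [[0] * (n + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):
        M[j - 1][j] = j
    return M

for row in diff_matrix(3):
    print(row)
# [0, 1, 0, 0]
# [0, 0, 2, 0]
# [0, 0, 0, 3]
# [0, 0, 0, 0]
```

The matrix has 1, 2, ..., n on its superdiagonal and zeros elsewhere; its zero bottom row reflects the fact that differentiation lowers degree.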
⇒ (T : B2, B1) = [0 0
                 1 1].
(c) To compute (T : B1, B1)
Since T(1, 0) = (0, 0) and T(0, 1) = (0, 1),
⇒ (T : B1, B1) = [0 0
                 0 1].
(d) To compute (T : B2, B2)
Since T(x1, x2) = (0, x2),
T(1, 1) = (0, 1) = (1/2)(1, 1) + (1/2)(−1, 1),
T(−1, 1) = (0, 1) = (1/2)(1, 1) + (1/2)(−1, 1).
⇒ (T : B2, B2) = [1/2 1/2
                 1/2 1/2].
Example 6. Let T : V3 → V3 be a linear operator such that
T(1, 0, 0) = (−1, 0, 0), T(0, 1, 0) = (0, 0, −1) and T(0, 0, 1) = (0, 1, −1).
Let B1 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} be the standard ordered basis
for V3 and
B2 = {(−1, 0, 0), (0, 0, −1), (0, 1, −1)} an ordered basis for V3.
Then determine
(T : B1, B2), (T : B2, B1), (T : B1, B1), and (T : B2, B2).
Solution: First of all, we shall find a general formula that defines T explicitly.
Let (XI' x 2' x 3 ) be any element in V3 . Then
(x1, x2, x3) = x1(1, 0, 0) + x2(0, 1, 0) + x3(0, 0, 1)
⇒ T(x1, x2, x3) = x1T(1, 0, 0) + x2T(0, 1, 0) + x3T(0, 0, 1)
= x1(−1, 0, 0) + x2(0, 0, −1) + x3(0, 1, −1)
= (−x1, x3, −x2 − x3).
(a) To compute (T : B1, B2)
Hence
(T : B1, B2) = [1 0 0
               0 1 0
               0 0 1].
Remark: To compute (T : B1, B2), the general formula for T is not
required, and hence it is not used here.
(b) To compute (T : B2, B1)
We have
T(x1, x2, x3) = (−x1, x3, −x2 − x3)
⇒ T(−1, 0, 0) = (1, 0, 0) = 1·(1, 0, 0) + 0·(0, 1, 0) + 0·(0, 0, 1),
T(0, 0, −1) = (0, −1, 1) = 0·(1, 0, 0) − 1·(0, 1, 0) + 1·(0, 0, 1), and
T(0, 1, −1) = (0, −1, 0) = 0·(1, 0, 0) − 1·(0, 1, 0) + 0·(0, 0, 1).
Hence
(T : B2, B1) = [1  0  0
               0 −1 −1
               0  1  0].
(d) To compute (T : B2, B2)
We have T(x1, x2, x3) = (−x1, x3, −x2 − x3).
Hence
(T : B2, B2) = [−1  0  0
                0  0  1
                0 −1 −1].
EXERCISES _ _ _ _ _ _ _ _ _ _ _ _ _ _ __
1. Let D be the linear differential operator defined on P3, the space of all
polynomial functions from ℜ into ℜ of degree at most 3. Let
B = {1, x, x^2, x^3}
be an ordered basis for P3. Determine (D : B, B).
2. Let T be the linear operator on V2 defined by T(x1, x2) = (x2, −x1).
(a) Determine (T : B, B), where B is the standard ordered basis for V2
over ℜ.
(b) Determine (T : B, B), where B = {(1, 2), (−1, 1)} is an ordered basis
for V2 over ℜ.
(c) Determine (T : B1, B2), where
B1 = {(1, −1), (1, 2)}, B2 = {(1, 0), (3, 4)} are two ordered bases
for V2 over ℜ.
(d) If B is any ordered basis for V2 over ℜ and (T : B, B) = A = (Aij),
then prove that A12 A21 ≠ 0.
3. Let T be the linear operator on V3 defined by
T(x1, x2, x3) = (3x1 + x3, −2x1 + x2, −x1 + 2x2 + 4x3).
(a) Determine (T : B, B), where B is the standard ordered basis for V3
over ℜ.
(b) Determine (T : B, B), where B = {(1, 0, 1), (−1, 2, 1), (2, 1, 1)} is an
ordered basis for V3 over ℜ.
(c) Determine (T : B1, B2), where
B1 = {(1, 0, 1), (−1, 2, 1), (2, 1, 1)},
and B2 = {(1, 0, 0), (0, 1, 1), (1, 0, 1)}
are ordered bases for V3 over ℜ.
4. Let T : V3 → V3 be a linear operator such that
T(1, 0, 0) = (1, 0, 1), T(0, 1, 0) = (0, 1, 1)
and T(0, 0, 1) = (1, 0, 0).
If B = {(1, 0, 1), (0, 1, 1), (1, 0, 0)} be an ordered basis for V3 over ℜ,
determine (T : B, B).
Hence, we have
T(u1) = a11 v1 + a21 v2 + ... + am1 vm,
T(u2) = a12 v1 + a22 v2 + ... + am2 vm,
...
T(un) = a1n v1 + a2n v2 + ... + amn vm.
From Theorem 3.4, it follows that T (obtained from the above process)
is a unique linear transformation such that B = (T : B1, B2).
Now to find a general formula that defines T, we extend T on the whole
space U. To do this, let u be any element in U.
⇒ u = a1u1 + a2u2 + ... + anun,
for some scalars a1, a2, ..., an, as B1 is a basis for U
⇒ T(u) = a1T(u1) + a2T(u2) + ... + anT(un), as T is linear.
After substituting the values of T(u1), T(u2), ..., T(un) in the above equation,
a little simplification yields
T(u) = β1v1 + β2v2 + ... + βmvm,
where β1, ..., βm are some scalars.
This is the required linear transformation.
Example 7. Let
A = [1 1 0 0
     0 1 1 0
     0 1 0 1].
Then find the linear transformation T : V4 → V3 such that
A = (T : B1, B2), where B1 and B2 are standard ordered bases for V4 and V3,
respectively.
Solution: We have
B1 = {e1, e2, e3, e4},
B2 = {e1, e2, e3}.
It may be noted that the ei appearing in B1 and B2 have different meanings,
that is, e1 = (1, 0, 0, 0) in B1 and e1 = (1, 0, 0) in B2, and so on.
Further, it should be noted that the first column of A is the coordinate
matrix of T(e1) in the ordered basis B2, the second column of A is the coordinate
matrix of T(e2) in B2, and so on. Therefore, we have
T(e1) = 1·e1 + 0·e2 + 0·e3 = (1, 0, 0),
T(e2) = 1·e1 + 1·e2 + 1·e3 = (1, 1, 1),
T(e3) = 0·e1 + 1·e2 + 0·e3 = (0, 1, 0),
T(e4) = 0·e1 + 0·e2 + 1·e3 = (0, 0, 1).
Now let (x1, x2, x3, x4) be any element in V4.
⇒ (x1, x2, x3, x4) = x1e1 + x2e2 + x3e3 + x4e4
⇒ T(x1, x2, x3, x4) = x1T(e1) + x2T(e2) + x3T(e3) + x4T(e4)
= x1(1, 0, 0) + x2(1, 1, 1) + x3(0, 1, 0) + x4(0, 0, 1)
= (x1 + x2, x2 + x3, x2 + x4).
This is the required linear transformation.
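With standard ordered bases, applying T coincides with multiplying the coordinate column by A = (T : B1, B2). A quick numerical check of Example 7:

```python
# A = (T : B1, B2) from Example 7, standard bases for V4 and V3.
A = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 1, 0, 1]]

matvec = lambda M, x: tuple(sum(M[i][j] * x[j] for j in range(len(x)))
                            for i in range(len(M)))

# The formula derived in Example 7:
T = lambda x: (x[0] + x[1], x[1] + x[2], x[1] + x[3])

x = (2, -1, 4, 3)
print(matvec(A, x))   # (1, 3, 2)
print(T(x))           # (1, 3, 2): agrees with A acting on coordinates
```

Both computations agree, as they must: with standard bases the coordinate matrix of a vector is the vector itself.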
°1 °1 0,
0]
1 1 I
find the linear transformation T : V4 → V3 such that A = (T : B1, B2),
where
B1 = {(1, 1, 1, 0), (0, 1, 1, 1), (0, 0, 1, 1), (1, −1, 0, 0)},
B2 = {(1, 1, 1), (0, 1, 1), (1, 0, 1)}
be ordered bases for V4 and V3, respectively.
A { -l ~l
Let B1 = {1, x, x^2} and B2 = {1, x + 1, x^2 + x, x^3 − 1} be ordered bases
for P2 and P3, respectively. Determine a linear transformation T : P2
→ P3 such that A = (T : B1, B2).
4. Consider the matrix
o
I
11
I .
1 OJ
(i) If B = {x + 1, x^2 + x + 1, x^2 − x} be an ordered basis for P2, determine
a linear operator T on P2 such that A = (T : B, B).
(ii) If B1 = {1, x, x^2} is the standard ordered basis for P2 and B2 =
{x + 1, x^2 + x + 1, x^2 − x} is another ordered basis for P2, determine
a linear operator T on P2 such that A = (T : B1, B2).
5. Consider the matrix
A = [~ ~ ~l.
-I 1
If B1 = {(1, 0, −1), (0, 0, 1), (−1, 1, 1)} is an ordered basis for V3, and B2
= {(1,
A = [a11 a12 ... a1n
     a21 a22 ... a2n
     ...
     am1 am2 ... amn].
Let ri = (ai1, ai2, ..., ain), i = 1, 2, ..., m, be the ith row of A in Vn (= ℜ^n).
Let
cj = [a1j
      a2j
      ...
      amj], j = 1, 2, ..., n,
be the jth column of A in Vm (= ℜ^m).
1. The row-space of A, denoted by R(A), is the subspace of Vn spanned
by the row vectors r1, r2, ..., rm, that is,
R(A) = [r1, r2, ..., rm].
2. The column space of A, denoted by C(A), is the subspace of Vm
spanned by the column vectors c1, c2, ..., cn, that is,
A are the zero rows. Then certainly the row space of A is spanned by the
vectors ρ1, ρ2, ..., ρr. Hence it is enough to show that B is LI.
Let the leading 1 of ρi be in column ki. Then we have
(i) k1 < k2 < ... < kr,
(ii) a_{i,ki} = 1
⇒ αi = 0, 1 ≤ i ≤ r, as a_{i,ki} = 1.
This shows that the set B is LI, and hence it is a basis for the row
space of A.
With the help of the following theorem, one can find a basis for the row-space
of A and hence R(A).
Theorem 4.1. Let U be the row-reduced echelon form of a matrix A. Then
R(A) = R(U) and N(A) = N(U).
Moreover, if U has r non-zero rows containing leading 1's, then they
form a basis for the row space R(A), so that dim R(A) is r.
Proof: Since A and U are row-equivalent, by Theorem 1.1, Ax = 0 and
Ux = 0 have the same solution set. Thus, N(A) = N(U).
Let ri be the ith row of the m × n matrix A. Then
R(A) = [r1, r2, ..., rm].
If we apply the three elementary row operations to A, then A is transformed
into matrices Ai of the following form:
A1 = [r1, ..., kri, ..., rm], A2 = [r1, ..., rj, ..., ri, ..., rm] for i < j,
A3 = [r1, ..., ri + krj, ..., rm].
Here we note that the row vectors of A1, A2 and A3 are linear combinations
of the row vectors of A. By applying the inverse elementary row operations to
A1, A2 and A3, these matrices can be changed back into A. This shows that the row
vectors of A can also be written as linear combinations of the rows of the Ai's.
Hence if two matrices are row-equivalent, they must have the same row
space. Since A and U are row-equivalent, therefore R(A) = R(U).
Since U has r non-zero rows, from Lemma 4.1 it follows that the set
of r non-zero rows of U forms a basis for the row space of A.
Thus, the number of non-zero rows in U = row-rank of A.
In the following theorem, we show that the row-rank of a matrix A is the same
as the column rank of A.
Theorem 4.2. For any matrix A,
dim R(A) = dim C(A).
Proof: Let dim R(A) = r.
Let U be the row-reduced echelon form of A. Then
r = the number of non-zero rows in U.
If we can show that the r columns of A corresponding to the leading 1's
in U form a basis for C(A), then it would imply that dim C(A) = r = dim R(A).
We shall prove it in two parts: (i) they are LI, and (ii) they span C(A).
(i) To show that the r columns of A corresponding to the leading 1's in U
are LI.
Let Ā be the submatrix of A whose columns are those columns of
A that correspond to the r leading 1's in U.
Let Ū be the submatrix of U corresponding to the r columns containing
leading 1's in U. Then clearly Ū is the row-reduced echelon form
of Ā. Hence Āx = 0 and Ūx = 0 have the same solution set, as
Ā and Ū are row-equivalent. But the columns of Ū containing the
Remark. For any m x n matrix A, rank (A) :s; min (m, n).
It follows from the fact that dim R(A) :s; m and dim C(A) :s; n.
Example 9. Consider the matrix
A = [-; -~ -; -~l'
3-6-68
Note that the matrix A is the coefficient matrix of Example 10 in Chapter
1, and the row-reduced echelon form of A is
-2 0 10
3
U = 0 0
3
o 0 0 0
Let
1. A = [~o ~ ! ~l
0 0 0
2. A
_
_ [00 01
=
0 0
o 0 0 0
3. A" [1 ~~ ~l 4. A = [- ~ -1 ~l
-1 3 4
:il 6. A" [~ ~ rl
8. A = [~ 0 0 -:l
-2 0 0 2
10. A = [~ ~ ~l.
000
EIGENVALUES AND
EIGENVECTORS
A =[~ ~l
Then by definition N(A) = {x ∈ V2 : Ax = 0}.
We note that
A = [1  3 −2
     2 −1  3
     0 −1  1].
To find N(A), we need to know the solutions of Ax = 0. The augmented
matrix associated with the system Ax = 0 is given by
B = [1  3 −2 | 0
     2 −1  3 | 0
     0 −1  1 | 0]
~ [1  3 −2 | 0
   0 −7  7 | 0
   0 −1  1 | 0], R2 → R2 − 2R1
~ [1  3 −2 | 0
   0 −1  1 | 0
   0 −1  1 | 0], R2 → R2/7
~ [1  0  1 | 0
   0 −1  1 | 0
   0  0  0 | 0], R1 → R1 + 3R2, R3 → R3 − R2.
This shows that the system Ax = 0 has infinitely many solutions, given
by
x1 + x3 = 0,
−x2 + x3 = 0,
x3 is a free variable
⇒ x1 = −x3,
x2 = x3,
and x3 is a free variable.
Hence
N(A) = {x = (x1, x2, x3) ∈ V3 : Ax = 0}
= {(−x3, x3, x3) : x3 ∈ ℜ}
= {x3(−1, 1, 1) : x3 ∈ ℜ}
= [{(−1, 1, 1)}].
We note that the set S = {(−1, 1, 1)} spans N(A) and S is LI. Thus, S
is a basis for N(A).
EXERCISES _______________________________
In the following problems, for the given matrix A, determine the null space
of A and hence find a basis for the null space of A.
j
0 0
l. A=
[-)
[-3 6-1 -7]
I -2
-7] I
2 3 -1
-I 2. A= [-: I -1
1
0 -1 1
I
2 -4 5 8 -4
I
1 0 0
3. A = 2 0 3
[1[' -1 4] 4.
A~[~ -1] 2 -I
-1
2 -2 8 2 0
Now we shall turn our attention to eigenvalues of a matrix A, which
from here onwards is always a square matrix.
We note that
λ is an eigenvalue of A ⇔ ∃ x (≠ 0) such that Ax = λx
⇔ (A − λI)x = 0 has a non-trivial solution
⇔ A − λI is singular
⇔ det(A − λI) = 0.
Remark 1. For an n × n matrix A, det(A − λI) is a polynomial in λ of degree
n. This polynomial is known as the characteristic polynomial of A, that is,
characteristic polynomial of A = det(A − λI).
Remark 2. From Theorem 5.1, it follows that λ = 0 is an eigenvalue of A
if and only if A is singular. That is, A is non-singular if and only if all
eigenvalues of A are non-zero.
Theorem 5.2. Let λ1, λ2, ..., λk be distinct eigenvalues of a square matrix
A of order n. Let vi be an eigenvector corresponding to λi, i = 1, 2, ..., k.
Then the set
Sk = {v1, v2, ..., vk} is LI.
Proof: We prove this theorem by the method of induction.
Since v1 is an eigenvector corresponding to λ1
⇒ v1 ≠ 0
⇒ S1 = {v1} is LI.
Now assume that the set
Sr = {v1, v2, ..., vr} is LI.
Then to show that Sr+1 is LI.
⇒ α1 = 0 = α2 = ... = αr, as λi ≠ λr+1 for i = 1, 2, ..., r.
Hence equation (5.2) reduces to
αr+1 vr+1 = 0
⇒ αr+1 = 0, as vr+1 ≠ 0.
Thus, the set Sr+1 is LI.
Hence by induction, it follows that Sk is LI.
Theorem 5.3. Let λ1, λ2, ..., λn be n distinct eigenvalues of a square matrix
A of order n. Let vi be an eigenvector corresponding to λi, i = 1, 2, ..., n.
Then the set
B = {v1, v2, ..., vn}
is a basis for the domain space of A, and the matrix of the linear transformation
A relative to the ordered basis B is
(A : B, B) = [λ1 0  ... 0
              0  λ2 ... 0
              ...
              0  0  ... λn].
Proof: From Theorem 5.2, it follows that B is LI. Since A is of order n,
the dimension of the domain space of A is n. Therefore, B is a basis for
the domain space of A.
Since vi is an eigenvector of A corresponding to λi
⇒ Avi = λi vi, i = 1, 2, ..., n
⇒ Av1 = λ1v1 = λ1v1 + 0·v2 + 0·v3 + ... + 0·vn,
Av2 = λ2v2 = 0·v1 + λ2v2 + 0·v3 + ... + 0·vn,
...
Hence
(A : B, B) = [λ1 0  ... 0
              0  λ2 ... 0
              ...
              0  0  ... λn].
This completes the proof of the theorem.
Theorem 5.4. Let A." A. 2, ... ,A." be n eigenvalues (need not necessarily be
distinct) of a matrix A of order n x n. Then
(a) det A = A.I A.2 ...A. n = product of the n eigenvalues
(b) tr(A)=A. 1 +A.2+···+A.n = sum of the n eigenvalues,
where tr (A) = trace of A = sum of the diagonal elements of A.
Proof: (a) Since the eigenvalues λ_1, λ_2, ..., λ_n are the roots of the characteristic
polynomial, we can write
det(A - λI) = (λ_1 - λ)(λ_2 - λ) ... (λ_n - λ).
For λ = 0, we get
det(A) = λ_1 λ_2 ... λ_n.
This proves part (a).
(b) We have
(λ_1 - λ)(λ_2 - λ) ... (λ_n - λ) = det(A - λI)
= | a_11 - λ   a_12      ...  a_1n     |
  | a_21       a_22 - λ  ...  a_2n     |
  | ...        ...       ...  ...      |
  | a_n1       a_n2      ...  a_nn - λ |   ...(5.4)
The left hand side and the right hand side of the above equation are
polynomials in λ of degree n of the form
p(λ) = (-1)^n λ^n + a_{n-1} λ^{n-1} + ... + a_1 λ + a_0.
By expanding both sides of equation (5.4) and comparing the coefficient
a_{n-1} of λ^{n-1}, we get
λ_1 + λ_2 + ... + λ_n = a_11 + a_22 + ... + a_nn.
This proves part (b).
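Both identities of Theorem 5.4 are easy to check numerically; a minimal sketch with NumPy on an arbitrary 2 x 2 matrix:

```python
import numpy as np

A = np.array([[7.0, 4.0],
              [-3.0, -1.0]])   # eigenvalues are 1 and 5

eigenvalues = np.linalg.eigvals(A)

# (a) det A = product of the eigenvalues (= 5 here)
assert np.isclose(np.linalg.det(A), np.prod(eigenvalues))

# (b) tr A = sum of the eigenvalues (= 6 here)
assert np.isclose(np.trace(A), np.sum(eigenvalues))
```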
Proposition 5.1. Let λ be an eigenvalue of A. If f is any polynomial, then
f(A)x = f(λ)x for some x ≠ 0.
Proof: Since λ is an eigenvalue of A
⇒ ∃ x (≠ 0) such that Ax = λx.
Then A^2 x = A(Ax) = A(λx) = λ(Ax) = λ^2 x, and in general
A^n x = λ^n x. ...(5.5)
Hence, if f(t) = c_m t^m + ... + c_1 t + c_0, then
f(A)x = c_m A^m x + ... + c_1 Ax + c_0 x
= (c_m λ^m + ... + c_1 λ + c_0) x
= f(λ)x. ...(5.6)
From the above proposition we note the following obvious remarks.
Remark 1. If λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n. It
follows from equation (5.5).
Remark 2. Let λ be an eigenvalue of A. If x is an eigenvector of A
corresponding to λ, then x is an eigenvector of A^n corresponding to the
eigenvalue λ^n. It also follows from equation (5.5).
Remark 3. Let λ be an eigenvalue of A and f be any polynomial. Then f(λ)
is an eigenvalue of f(A). It follows from equation (5.6).
Remark 4. Let x be an eigenvector of A corresponding to the eigenvalue
λ. Then x is also an eigenvector of f(A) corresponding to the eigenvalue f(λ),
where f is any polynomial. It also follows from equation (5.6).
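Remark 3 can be illustrated numerically; a minimal sketch with NumPy, using the arbitrary polynomial f(t) = t^2 + 2t + 3 on a 2 x 2 matrix with eigenvalues 1 and 5:

```python
import numpy as np

A = np.array([[7.0, 4.0],
              [-3.0, -1.0]])        # eigenvalues 1 and 5

f = lambda t: t**2 + 2*t + 3        # an arbitrary polynomial
fA = A @ A + 2*A + 3*np.eye(2)      # f(A), evaluated on the matrix

lam = np.sort(np.linalg.eigvals(A))   # eigenvalues of A:    [1, 5]
mu = np.sort(np.linalg.eigvals(fA))   # eigenvalues of f(A)

# Remark 3: the eigenvalues of f(A) are f(lambda): f(1) = 6, f(5) = 38.
assert np.allclose(mu, f(lam))
```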
Theorem 5.5. (Cayley-Hamilton Theorem) Every square matrix satisfies its
own characteristic polynomial.
Proof: Let A be a square matrix of order n x n. Let
p(λ) = λ^n + a_{n-1} λ^{n-1} + ... + a_1 λ + a_0
be the characteristic polynomial of A.
Then we have to show that p(A) = O, the zero matrix.
Since p(λ) is the characteristic polynomial of A,
p(λ) = det(λI - A).
From elementary matrix theory we know that
(det(λI - A)) I = (λI - A) adj(λI - A). ...(5.7)
Each entry of adj(λI - A), the adjoint of λI - A, is a cofactor of λI - A and
hence a polynomial in λ of degree at most (n - 1). Suppose that
adj(λI - A) = B_0 + B_1 λ + B_2 λ^2 + ... + B_{n-1} λ^{n-1},
where each B_i is an n x n matrix.
Thus, equation (5.7) can be written as
p(λ) I = (λI - A)(B_0 + B_1 λ + ... + B_{n-1} λ^{n-1}).
Comparing the coefficients of like powers of λ on both sides:
(λ^n) : B_{n-1} = I,
(λ^k, 1 ≤ k ≤ n-1) : B_{k-1} - A B_k = a_k I,
(λ^0) : -A B_0 = a_0 I.
Multiplying these equations by A^n, A^k and I respectively and adding, the
left hand side telescopes to the zero matrix, while the right hand side is
A^n + a_{n-1} A^{n-1} + ... + a_1 A + a_0 I = p(A).
Hence p(A) = O.
Remark. If A is a triangular matrix, then
det(A - λI) = (a_11 - λ)(a_22 - λ) ... (a_nn - λ) = 0,
so the eigenvalues of a triangular matrix are precisely its diagonal entries.
Example 4. Determine the eigenvalues and the corresponding eigenspaces of
the matrix
A = [ 7   4 ]
    [-3  -1 ].
Solution: Let λ be an eigenvalue of A.
⇒ det(A - λI) = | 7-λ   4    |
                | -3    -1-λ | = 0
⇒ λ^2 - 6λ + 5 = 0
⇒ λ = 1, 5.
These are the eigenvalues of A.
To find the eigenspace corresponding to λ = 1:
We have, N(A - λI) = {x : (A - λI)x = 0}.
For λ = 1, N(A - I) = {x : (A - I)x = 0}.
Now (A - I)x = 0 gives 6x_1 + 4x_2 = 0, that is, x_2 = -(3/2)x_1. Thus,
N(A - I) = [(-2, 3)] = eigenspace corresponding to λ = 1.
Similarly, (A - 5I)x = 0 gives 2x_1 + 4x_2 = 0, that is, x_1 = -2x_2, so
N(A - 5I) = [(-2, 1)] = eigenspace corresponding to λ = 5.
Example 5. Determine the eigenvalues and the corresponding eigenspaces of
the matrix
A = [ -9   4   4 ]
    [ -8   3   4 ]
    [-16   8   7 ].
Solution: Let λ be an eigenvalue of A. Then
det(A - λI) = | -9-λ   4     4   |
              | -8     3-λ   4   |
              | -16    8     7-λ | = 0
⇒ (λ + 1)^2 (λ - 3) = 0
⇒ λ = -1, -1, 3 are the eigenvalues of A.
Now to find the eigenspace corresponding to λ = -1:
We have, N(A + I) = {x : (A + I)x = 0}.
In order to solve (A + I)x = 0, we note that the augmented matrix
associated with this system is
B = (A + I, 0) = [ -8   4   4   0 ]
                 [ -8   4   4   0 ]
                 [-16   8   8   0 ].
Each row is a multiple of the first, so the system reduces to the single
equation -8x_1 + 4x_2 + 4x_3 = 0, that is, 2x_1 = x_2 + x_3, with x_2 and
x_3 free variables. Thus,
N(A + I) = {((x_2 + x_3)/2, x_2, x_3) : x_2, x_3 ∈ ℝ} = [(1, 2, 0), (1, 0, 2)]
= eigenspace corresponding to λ = -1.
Next, to find the eigenspace corresponding to λ = 3, we consider the
augmented matrix
B = (A - 3I, 0) = [-12   4   4   0 ]
                  [ -8   0   4   0 ]
                  [-16   8   4   0 ].
Dividing each row by 4 (R_1 → R_1/4, R_2 → R_2/4, R_3 → R_3/4) and
row-reducing, we arrive at the equivalent system
-x_1 + x_2 = 0,
-2x_2 + x_3 = 0, and
x_3 is a free variable
⇒ x_1 = x_2 = (1/2)x_3,
and x_3 is a free variable. Thus,
N(A - 3I) = {((1/2)x_3, (1/2)x_3, x_3) : x_3 ∈ ℝ} = [(1, 1, 2)]
= eigenspace corresponding to λ = 3.
Hence, set of eigenvectors = [(1, 1, 2)] - {(0, 0, 0)}.
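The eigenspace computations above can be double-checked numerically; a minimal sketch with NumPy, confirming the eigenvalues and that (1, 1, 2) is an eigenvector for λ = 3:

```python
import numpy as np

A = np.array([[-9.0, 4.0, 4.0],
              [-8.0, 3.0, 4.0],
              [-16.0, 8.0, 7.0]])

# Characteristic roots: -1 (twice) and 3.
assert np.allclose(np.sort(np.linalg.eigvals(A)), [-1.0, -1.0, 3.0])

# (1, 1, 2) spans the eigenspace for lambda = 3: A v = 3 v.
v = np.array([1.0, 1.0, 2.0])
print(A @ v)   # [3. 3. 6.]
```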
Example 6. Determine eigenvalues and eigenvectors for the matrix
A = [ 1   0   0 ]
    [ 0   1   1 ]
    [ 0   1   1 ].
Solution: Note that λ = 0, 1 and 2 are the eigenvalues of A.
To find the eigenspace for λ = 0:
We have, N(A - 0.I) = N(A) = {x : Ax = 0}.
The augmented matrix associated with Ax = 0 is given by
B = (A, 0) = [ 1   0   0   0 ]
             [ 0   1   1   0 ]
             [ 0   1   1   0 ]
⇒ x_1 = 0,
x_2 + x_3 = 0, and
x_3 is a free variable.
Thus,
N(A) = {(x_1, x_2, x_3) : x_1 = 0, x_2 = -x_3, x_3 is a free variable}
= {(0, -x_3, x_3) : x_3 ∈ ℝ}
= {x_3 (0, -1, 1) : x_3 ∈ ℝ} = [(0, -1, 1)]
= eigenspace corresponding to λ = 0.
And the set of eigenvectors corresponding to λ = 0 is
[(0, -1, 1)] - {(0, 0, 0)}.
To find the eigenspace corresponding to λ = 1:
We have, N(A - I) = {x : (A - I)x = 0}.
B = (A - I, 0) = [ 0   0   0   0 ]
                 [ 0   0   1   0 ]
                 [ 0   1   0   0 ]
⇒ x_2 = 0 = x_3 and x_1 is a free variable.
N(A - I) = {(x_1, x_2, x_3) : x_2 = 0 = x_3, x_1 ∈ ℝ}
= {(x_1, 0, 0) : x_1 ∈ ℝ}
= {x_1 (1, 0, 0) : x_1 ∈ ℝ} = [(1, 0, 0)]
= eigenspace corresponding to λ = 1.
Set of eigenvectors corresponding to λ = 1 is
[(1, 0, 0)] - {(0, 0, 0)}.
Similarly, readers can verify that
N(A - 2I) = {(x_1, x_2, x_3) : x_1 = 0, x_2 = x_3, x_3 ∈ ℝ}
= {(0, x_3, x_3) : x_3 ∈ ℝ}
= {x_3 (0, 1, 1) : x_3 ∈ ℝ}
= [(0, 1, 1)]
= eigenspace corresponding to λ = 2,
and the set of corresponding eigenvectors is [(0, 1, 1)] - {(0, 0, 0)}.
Example 7. For the matrix A given in Example 6 of this chapter, verify the
Cayley-Hamilton Theorem.
Solution: The characteristic polynomial of A is
p(λ) = λ^3 - 3λ^2 + 2λ
⇒ p(A) = A^3 - 3A^2 + 2A
= [ 1   0   0 ]   [ 3   0   0 ]   [ 2   0   0 ]
  [ 0   4   4 ] - [ 0   6   6 ] + [ 0   2   2 ]
  [ 0   4   4 ]   [ 0   6   6 ]   [ 0   2   2 ]
= [ 0   0   0 ]
  [ 0   0   0 ]
  [ 0   0   0 ].
Thus p(A) = O, a zero matrix, which verifies the Cayley-Hamilton Theorem.
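The verification can be reproduced numerically; a minimal sketch with NumPy for the matrix of Example 6:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

# p(lambda) = lambda^3 - 3 lambda^2 + 2 lambda, so p(A) must be the zero matrix.
pA = A @ A @ A - 3 * (A @ A) + 2 * A
print(pA)   # 3x3 zero matrix
assert np.allclose(pA, np.zeros((3, 3)))
```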
Eigenvalues and Eigenvectors 125
EXERCISES
In problems 1-14, matrix A is given. Determine the eigenvalues and the
corresponding eigenspaces of A.
1. A=G ~]
3. A = [; ~]
5. A =[~ =~ ~l
6 -6 4
7. A =[; ~ -~l
4 8 15
9. A{~ -~ ~l
11. A =[~o =~ ~l
4-1
13. A=[i ; ~i il
15. Let A be an invertible matrix. Prove that if λ is an eigenvalue of A, then
λ^{-1} is an eigenvalue of A^{-1}. (Note that λ cannot be zero as A is invertible.)
16. Prove that A and A^T have the same eigenvalues. Do they necessarily
have the same eigenvectors? Justify your answer.
17. For the matrix A = [ a   b ]
                       [ c   d ], prove the following:
(i) A has two distinct real eigenvalues if (a - d)^2 + 4bc > 0.
(ii) A has one eigenvalue if (a - d)^2 + 4bc = 0.
(iii) A has no real eigenvalue if (a - d)^2 + 4bc < 0.
(iv) A has only real eigenvalues if b = c.
18. Show that if λ is an eigenvalue of an idempotent matrix A (i.e., A^2 = A),
then λ must be either 0 or 1.
If A ~ B, then A = P^{-1}BP for some invertible matrix P, so that
PAP^{-1} = P(P^{-1}BP)P^{-1} = (PP^{-1}) B (PP^{-1}) = B
⇒ (P^{-1})^{-1} A P^{-1} = B
⇒ Q^{-1}AQ = B, where Q = P^{-1}
⇒ B ~ A.
Thus, we simply say that A and B are similar.
Proposition 5.2. Similar matrices have the same characteristic polynomial, but
the converse need not be true.
Proof: Let A and B be similar matrices ⇒ ∃ an invertible matrix P such that
A = P^{-1}BP.
Now the characteristic polynomial of A is
det(A - λI) = det(P^{-1}BP - λI)
= det(P^{-1}BP - λP^{-1}P)
= det(P^{-1}(B - λI)P)
= det(P^{-1}).det(B - λI).det(P)
= (det P)^{-1}.det(B - λI).det P
= det(B - λI)
= characteristic polynomial of B.
To see that the converse is not necessarily true, consider the two
matrices
A = [ 1   0 ]         [ 1   1 ]
    [ 0   1 ] and B = [ 0   1 ].
Then it is easy to see that
characteristic polynomial of A = λ^2 - 2λ + 1
= characteristic polynomial of B.
However, A and B are not similar: since P^{-1}IP = I for every invertible P,
the only matrix similar to the identity matrix is the identity itself, and B ≠ I.
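This counterexample is easy to check with a short computation; a minimal sketch with NumPy:

```python
import numpy as np

A = np.eye(2)                      # the identity matrix
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Same characteristic polynomial: both have eigenvalues {1, 1}.
assert np.allclose(np.linalg.eigvals(A), np.linalg.eigvals(B))

# But A and B are not similar: P^{-1} A P = P^{-1} I P = I for every
# invertible P, so the only matrix similar to I is I itself -- and B != I.
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])         # an arbitrary invertible matrix
assert np.allclose(np.linalg.inv(P) @ A @ P, np.eye(2))
assert not np.allclose(B, np.eye(2))
```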
where P = [v_1 v_2 ... v_n] and D = diag(λ_1, λ_2, ..., λ_n). Then
PD = [λ_1 v_1  λ_2 v_2  ...  λ_n v_n]. ...(5.10)
But D = P^{-1}AP ⇒ PD = AP.
Hence from (5.9) and (5.10) we get
[Av_1  Av_2  ...  Av_n] = [λ_1 v_1  λ_2 v_2  ...  λ_n v_n].
Equating columns, we get
Av_i = λ_i v_i, i = 1, 2, ..., n. ...(5.11)
Since P is invertible, its columns must be LI and therefore each column
v_i must be non-zero. The equation (5.11) shows that λ_1, λ_2, ..., λ_n are
eigenvalues and v_1, v_2, ..., v_n are corresponding LI eigenvectors of A.
Conversely, let v_1, v_2, ..., v_n be n LI eigenvectors of A corresponding
to the eigenvalues λ_1, λ_2, ..., λ_n, respectively. Now we construct a matrix
P whose columns are v_1, v_2, ..., v_n; and we also construct a diagonal matrix
D with diagonal entries λ_1, λ_2, ..., λ_n. Then from (5.9) - (5.11), we note
that
AP = PD. ...(5.12)
Since the columns v_1, v_2, ..., v_n of P are LI, therefore P is invertible.
Thus, equation (5.12) yields
P^{-1}AP = D
⇒ A is diagonalizable.
Remark 2. Not all matrices are diagonalizable. For example, take
A = [ 0   1 ]
    [ 0   0 ].
If A is diagonalizable, then for some invertible matrix P we note that
P^{-1}AP = [ λ_1   0  ]
           [ 0    λ_2 ],
where λ_1 = 0 and λ_2 = 0 are the eigenvalues of A
⇒ A must be the zero matrix as P is invertible, which is a contradiction.
Theorem 5.6. An n x n matrix with n distinct eigenvalues is diagonalizable.
Proof: Let v_1, v_2, ..., v_n be eigenvectors corresponding to the n distinct
eigenvalues of an n x n matrix A.
⇒ The set {v_1, v_2, ..., v_n} is LI by Theorem 5.2.
⇒ A is diagonalizable by Theorem 5.5.
Now we emphasize the important steps involved in diagonalization of
a matrix.
How to Diagonalize an n x n Matrix A?
Step 1: Find the eigenvalues λ_1, λ_2, ..., λ_n of A.
Step 2: Find n LI eigenvectors v_1, v_2, ..., v_n of A corresponding to
λ_1, λ_2, ..., λ_n. If this is possible, go to Step 3; otherwise stop the
process and conclude that A is not diagonalizable.
Step 3: Construct the matrix P with v_i as its i-th column, that is,
P = [v_1 v_2 ... v_n].
Step 4: Construct the diagonal matrix D with diagonal entries
λ_1, λ_2, ..., λ_n. In fact, D = P^{-1}AP.
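Steps 1-4 translate directly into a few lines of NumPy; a minimal sketch using `np.linalg.eig` (which returns the eigenvalues together with a matrix whose columns are eigenvectors):

```python
import numpy as np

def diagonalize(A, tol=1e-10):
    """Return (P, D) with inv(P) @ A @ P == D, or None if A is not
    diagonalizable (fewer than n linearly independent eigenvectors)."""
    eigenvalues, P = np.linalg.eig(A)            # Steps 1 and 2
    if np.linalg.matrix_rank(P, tol=tol) < A.shape[0]:
        return None                              # columns of P are not LI
    D = np.diag(eigenvalues)                     # Step 4: D = diag(lambda_i)
    return P, D

A = np.array([[7.0, 4.0],
              [-3.0, -1.0]])
P, D = diagonalize(A)
assert np.allclose(np.linalg.inv(P) @ A @ P, D)  # D = P^{-1} A P
```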
Example 8. We consider the matrix
A = [ 7   4 ]
    [-3  -1 ]
as given in Example 4 of this chapter. As we have seen in Example 4,
λ = 1, 5 are the eigenvalues of A. Since these eigenvalues are distinct,
therefore, by Theorem 5.6, A is diagonalizable. Taking the eigenvectors
(-2, 3) and (-2, 1) as columns, we get
P = [-2  -2 ]         [ 1   0 ]
    [ 3   1 ] and D = [ 0   5 ].
In fact, one can verify that
P^{-1}AP = [ 1   0 ]
           [ 0   5 ] = D.
Remark 1. As in Example 4, we have seen that
S_1 = [(-2, 3)] - {(0, 0)} = set of eigenvectors corresponding to λ = 1,
S_2 = [(-2, 1)] - {(0, 0)} = set of eigenvectors corresponding to λ = 5.
One can ask a natural question here: will any pair of eigenvectors from
S_1 and S_2 diagonalize A? The answer is yes, because any two vectors v_1
and v_2 (v_1 from S_1 and v_2 from S_2) are LI, and hence the matrix whose
columns are v_1 and v_2 will diagonalize A.
For example, let us take
v_1 = (-4, 6) and v_2 = (6, -3).
Then P = [-4   6 ]         [ 1   0 ]
         [ 6  -3 ] and D = [ 0   5 ].
Example 9. Now we consider the matrix
A = [ -9   4   4 ]
    [ -8   3   4 ]
    [-16   8   7 ]
as given in Example 5. Its eigenvalues are λ = -1, -1, 3. Although the
eigenvalues are not distinct, the eigenspace corresponding to λ = -1 is
two-dimensional with basis {(1, 2, 0), (1, 0, 2)}, and (1, 1, 2) is an
eigenvector for λ = 3. These three eigenvectors are LI, so A is
diagonalizable with
P = [ 1   1   1 ]         [-1   0   0 ]
    [ 2   0   1 ] and D = [ 0  -1   0 ]
    [ 0   2   2 ]         [ 0   0   3 ].
Example 10. Consider the matrix
A = [ 4   0   0 ]
    [ 1   4   0 ]
    [ 0   0   5 ].
We wish to check whether A is diagonalizable or not. One can note that
λ = 4, 4, 5 are the eigenvalues of A.
Since the eigenvalues are not distinct, therefore, no conclusion can be
made at this stage.
Now, N(A - 4I) = {x : (A - 4I)x = 0}
= {(x_1, x_2, x_3) : x_1 = 0 = x_3, x_2 ∈ ℝ}
= [(0, 1, 0)],
N(A - 5I) = {x : (A - 5I)x = 0}
= {(x_1, x_2, x_3) : x_1 = 0 = x_2, x_3 ∈ ℝ}
= [(0, 0, 1)].
Hence every eigenvector of A is either a multiple of v_1 = (0, 1, 0) or of
v_2 = (0, 0, 1). Thus it is not possible to have three linearly independent
eigenvectors of A, i.e., one cannot construct a basis for V_3 with the
eigenvectors of A. Hence A is not diagonalizable.
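The failure can also be seen by comparing algebraic and geometric multiplicity; a minimal sketch with NumPy (the triangular matrix below has eigenvalues 4, 4, 5 and a one-dimensional eigenspace for λ = 4, matching this example):

```python
import numpy as np

A = np.array([[4.0, 0.0, 0.0],
              [1.0, 4.0, 0.0],
              [0.0, 0.0, 5.0]])

# lambda = 4 has algebraic multiplicity 2 ...
eigenvalues = np.linalg.eigvals(A)
assert sorted(np.round(eigenvalues).astype(int)) == [4, 4, 5]

# ... but N(A - 4I) is only one-dimensional: geometric multiplicity
# = n - rank(A - 4I) = 3 - 2 = 1 < 2, so A is not diagonalizable.
geometric_multiplicity = 3 - np.linalg.matrix_rank(A - 4 * np.eye(3))
print(geometric_multiplicity)   # 1
```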
EXERCISES
1. In problems 1-14 of Sec 5.2, determine whether the given matrix A is
diagonalizable. If so, find an invertible matrix P over ℝ and a diagonal
matrix D over ℝ such that P^{-1}AP = D.
2. Is the matrix A = [~ =~ :] diagonalizable? If so, find an invertible
matrix P and a diagonal matrix D such that P^{-1}AP = D.
4. Prove that the matrix
A = [~ ~
     0
where a is any real number, is not diagonalizable.
5. Without any calculation, justify why these matrices are diagonalizable.
[~ ~] [~
0 0
(a) 2
0
(b) 2
0
;]
~]
0 0
[~ ~]
2
(c) 2
3
(d)
[-i 0
0
2
0
~ [~ ~]
0 0
0 0
6. Consider the matrix
A b 0
o c
Find condition(s) on a, b and c such that A is diagonalizable.