
MA 106: Spring 2014: Tutorial Sheet 3

1. Describe all subspaces of R1×1 , R1×2 , R1×3 and R1×4 .


Solution (i) The only subspaces of R1×1 are {0} and R1×1.
(ii) The subspaces of R1×2 are: {(0, 0)}, {(x1, x2) ∈ R1×2 : a1 x1 + a2 x2 = 0, where (a1, a2) ≠ (0, 0)}, and R1×2.
(iii) The subspaces of R1×3 are: {(0, 0, 0)}, {(x1, x2, x3) ∈ R1×3 : a1 x1 + a2 x2 + a3 x3 = 0, where (a1, a2, a3) ≠ (0, 0, 0)}, {(x1, x2, x3) ∈ R1×3 : a1 x1 + a2 x2 + a3 x3 = b1 x1 + b2 x2 + b3 x3 = 0, where (b1, b2, b3) is not a multiple of (a1, a2, a3)}, and R1×3.
(iv) The subspaces of R1×4 are: {(0, 0, 0, 0)}, {(x1, x2, x3, x4) ∈ R1×4 : a1 x1 + a2 x2 + a3 x3 + a4 x4 = 0, where (a1, a2, a3, a4) ≠ (0, 0, 0, 0)}, {(x1, x2, x3, x4) ∈ R1×4 : a1 x1 + a2 x2 + a3 x3 + a4 x4 = b1 x1 + b2 x2 + b3 x3 + b4 x4 = 0, where (b1, b2, b3, b4) is not a multiple of (a1, a2, a3, a4)}, {(x1, x2, x3, x4) ∈ R1×4 : a1 x1 + a2 x2 + a3 x3 + a4 x4 = b1 x1 + b2 x2 + b3 x3 + b4 x4 = c1 x1 + c2 x2 + c3 x3 + c4 x4 = 0, where none of (a1, a2, a3, a4), (b1, b2, b3, b4) and (c1, c2, c3, c4) is a linear combination of the other two}, and R1×4.
The subspaces of R1×4 of dimensions 1, 2, 3 are called, respectively, lines, planes, and hyperplanes through the origin.
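As a quick numerical illustration of the one-equation case (the coefficient vector below is an arbitrary choice, not from the sheet), SciPy can confirm that the solution set of a single nonzero homogeneous equation in R1×3 is 2-dimensional, i.e. a plane through the origin:

import numpy as np
from scipy.linalg import null_space

# The plane {x in R^{1x3} : a1*x1 + a2*x2 + a3*x3 = 0} for a nonzero (a1, a2, a3).
a = np.array([[1.0, -2.0, 3.0]])   # one nonzero linear functional (arbitrary choice)
basis = null_space(a)              # orthonormal basis of its solution space
print(basis.shape[1])              # prints 2: the subspace is a plane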

2. Given a set of n linearly independent vectors {v1, v2, . . . , vn} in a vector space V, show that
for any nonzero scalar α, the set {v1, v2, . . . , vi−1, vi + αvj, vi+1, . . . , vn} with i ≠ j is linearly
independent.
Solution Let
∑_{k=1}^{i−1} αk vk + αi (vi + αvj) + ∑_{k=i+1}^{n} αk vk = 0.
Write the linear combination above as ∑_{k=1}^{n} βk vk. By linear independence of the vk's, βk = 0 for all k. Since βk = αk for k ≠ j, we get αk = 0 for all k ≠ j; in particular αi = 0, and then βj = αj + ααi = αj gives αj = 0 as well. Hence αk = 0 for all k.
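A quick numerical sanity check (the vectors, the indices i, j and the scalar α below are arbitrary choices, not part of the problem): replacing vi by vi + αvj leaves the rank, and hence linear independence, unchanged.

import numpy as np
from numpy.linalg import matrix_rank

V = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [2.0, 1.0, 0.0, 1.0]])    # rows v1, v2, v3: linearly independent
i, j, alpha = 0, 2, 5.0                 # i != j, alpha nonzero
W = V.copy()
W[i] += alpha * V[j]                    # replace v_i by v_i + alpha * v_j
print(matrix_rank(V), matrix_rank(W))   # both print 3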

3. Let A be an n × n matrix. For i = 1, 2, . . . , n, let A[i, i] denote the submatrix formed by
the first i rows and first i columns of A. The matrix A is said to be strongly nonsingular if
det(A[i, i]) ≠ 0 for all i. (‘Strongly nonsingular’ is a nonce word cooked up for the purpose of
this exercise.) If A is strongly nonsingular, show that A can be reduced to a diagonal matrix
by row operations of type I only.
Solution Let A = (aij). Since a11 ≠ 0, we can use type I operations to make all entries
in the first column below the (1, 1) entry zero. Call the resulting matrix B = (bij). These
operations change none of the leading minors, so det(B[2, 2]) = det(A[2, 2]) ≠ 0. Since b21 = 0
we must have b22 ≠ 0. Now use type I operations to make all entries in column 2 below
and above the (2, 2) entry 0. Proceed by induction.
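The elimination described above is easy to carry out numerically. Here is a minimal NumPy sketch (the helper name diagonalize_type_I and the sample matrix are illustrative, not from the sheet):

import numpy as np

def diagonalize_type_I(A):
    # Reduce a strongly nonsingular matrix to diagonal form using only
    # type I row operations R_i <- R_i + c * R_k.
    B = A.astype(float).copy()
    n = B.shape[0]
    for k in range(n):
        # strong nonsingularity guarantees the pivot B[k, k] is nonzero
        for i in range(n):
            if i != k and B[i, k] != 0.0:
                B[i, :] -= (B[i, k] / B[k, k]) * B[k, :]   # type I operation
    return B

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0],
              [2.0, 1.0, 5.0]])        # leading minors 2, 2, 8: all nonzero
print(diagonalize_type_I(A))           # prints the diagonal matrix diag(2, 1, 4)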

4. (a) Let W1 and W2 be subspaces of a vector space V . Define the sum of W1 and W2 by
W1 + W2 = {w1 + w2 : w1 ∈ W1 and w2 ∈ W2 }. Show that W1 + W2 = L(W1 ∪ W2 ).
(b) A sum W1 + W2 is said to be direct if W1 ∩ W2 = {0}, and we denote it by W1 ⊕ W2 .
Suppose V = W1 ⊕ W2 . Show that for every v ∈ V , there are unique w1 ∈ W1 and
w2 ∈ W2 such that v = w1 + w2 . Deduce that dim(W1 ⊕ W2 ) = dim(W1 ) + dim(W2 ).
(c) Let V and W be finite dimensional vector spaces. The set V × W of ordered pairs
{(v, w) : v ∈ V and w ∈ W } is a vector space under componentwise operations. Set
W1 = {(v, 0) : v ∈ V } and W2 = {(0, w) : w ∈ W }. Then W1 and W2 are subspaces of
V × W . Show that V × W = W1 ⊕ W2 .
Solution

(a) Set W = {u + v|u ∈ W1 , v ∈ W2 }. It is easy to check that W is a subspace. It is also
easily seen that any subspace containing W1 and W2 contains W . The result follows.
(b) Let u1 + w1 = u2 + w2 , where ui ∈ W1 and wi ∈ W2 for i = 1, 2. Then

u1 − u2 = w2 − w1 ∈ W1 ∩ W2 = {0},

yielding u1 = u2 and w1 = w2 .
Now suppose B1 is a basis for W1 and B2 is a basis for W2 . We claim that B1 ∪ B2 is a
basis of W1 ⊕ W2 . That it is spanning is easy to see.
Now assume ∑_{v∈B1} av v + ∑_{u∈B2} au u = 0 for scalars av, au. Then ∑_{v∈B1} av v =
−∑_{u∈B2} au u ∈ W1 ∩ W2 = {0}. Since B1 and B2 are linearly independent sets, it follows
that all the av and au are 0.
(c) That W1 and W2 are subspaces is clear. We have (v, w) = (v, 0) + (0, w), showing that
V = W1 + W2 . Clearly, W1 ∩ W2 = {(0, 0)}, showing that the sum is direct.
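A small numerical illustration of the dimension formula in (b) (the basis vectors below are arbitrary, chosen so that W1 ∩ W2 = {0}): stacking bases of W1 and W2 gives a matrix whose rank is dim(W1) + dim(W2).

import numpy as np
from numpy.linalg import matrix_rank

B1 = np.array([[1.0, 0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0, 1.0]])    # basis of W1 (as rows)
B2 = np.array([[0.0, 0.0, 1.0, 0.0]])    # basis of W2; here W1 and W2 meet only in 0
stacked = np.vstack([B1, B2])
print(matrix_rank(B1), matrix_rank(B2), matrix_rank(stacked))   # prints 2 1 3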

5. (a) Let A and B be n×n matrices. If A is invertible, show that rank(AB) = rank(B) =
rank(BA).
(b) Let A be an m×n matrix and let B be an n×p matrix. Show that

rank(A) + rank(B) − n ≤ rank(AB) ≤ min{ rank(A), rank(B)}.

Solution

(a) For convenience, we denote ranks of matrices and linear maps by r(A) or r(T). Similarly,
we denote nullity by n(A) or n(T).
Since R(AB) ⊆ R(B) we have r(AB) ≤ r(B). Now R(B) = R(A−1 (AB)) ⊆ R(AB),
yielding r(B) ≤ r(AB). Hence r(AB) = r(B).
The same argument with the column space gives r(BA) = r(B).
(b) Since R(AB) ⊆ R(B) and C(AB) ⊆ C(A) we have r(AB) ≤ r(A) and r(AB) ≤ r(B).
Recall the linear maps TA : Rn → Rm , TB : Rp → Rn , and TAB : Rp → Rm discussed
in class. We have TAB = TA TB . Consider the restriction of the map TA to Im(TB ), i.e.,
consider the map fA : Im(TB ) → Rm given by fA (v) = TA (v).
We have (why?) r(AB) = r(fA). By the rank-nullity theorem, r(fA) = r(TB) − n(fA).
The null space of fA is Im(TB) ∩ N(TA) (why?), and its dimension is at most
n(TA) = n − r(TA). Thus r(fA) ≥ r(TB) − (n − r(TA)) = r(TB) + r(TA) − n.
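The two inequalities in (b) are easy to spot-check numerically; a small NumPy experiment (the sizes and random integer entries below are arbitrary):

import numpy as np
from numpy.linalg import matrix_rank

rng = np.random.default_rng(0)
m, n, p = 5, 4, 6
A = rng.integers(-2, 3, size=(m, n)).astype(float)
B = rng.integers(-2, 3, size=(n, p)).astype(float)
rA, rB, rAB = matrix_rank(A), matrix_rank(B), matrix_rank(A @ B)
assert rA + rB - n <= rAB <= min(rA, rB)   # the inequalities of part (b)
print(rA, rB, rAB)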

6. Let Ax = b be a linear system, where A has m rows and n columns.

(a) Suppose a sequence of elementary row operations reduces Ax = b to R1 x = b1, and
suppose that another sequence of elementary row operations reduces Ax = b to R2 x = b2,
where R1 and R2 are both in row echelon form. Show that the sets of pivotal and free
columns of R1 and R2 are the same, and so the particular solution of Ax = b as well as
the basic solutions of Ax = 0, as defined in class, are also the same for both systems.
(b) Show that there is a unique row canonical form of A.

Solution

(a) Let U be a matrix in ref with n columns, k of which are pivotal. Denote the pivotal
columns by P = {j1 < j2 < · · · < jk } and the free columns by F = {1, 2, . . . , n} \ P .
Denote the first k row vectors of U by R1 , R2 , . . . , Rk .
Observe the following:

(i) Let 0 ≠ a = (a1, . . . , an) ∈ R(U). Then the first nonzero entry of a occurs in a
pivotal column.
(ii) For every pivotal column jl, there is a vector in R(U), namely Rl, whose first nonzero
entry is in column jl.
(iii) By observations (i) and (ii), we can describe P as the set of all j ∈ {1, 2, . . . , n} such
that there exists a nonzero row vector a ∈ R(U) whose first nonzero entry occurs in
column j.
So the set of pivotal (and free) columns of U is determined by R(U).
Since R(R1) = R(R2), the matrices R1 and R2 have the same pivotal and free columns.
Now R1 x = b1 and R2 x = b2 have the same solution set, since both systems are obtained
from Ax = b by elementary row operations, which do not change the solution set. Also, if
two solutions agree in the free variables, they agree everywhere, because the pivotal
variables are determined by the free ones. The particular solution and the basic solutions
have this property, so they are the same for both systems.
(b) Let U be the rcf of A, with k nonzero row vectors R1, . . . , Rk and pivotal columns
P = {j1 < · · · < jk}. By part (a), P is uniquely determined by R(A). We need to show
that R1, . . . , Rk are also uniquely determined by R(A). A little thought reveals that Ri
can be characterized as the only row vector a = (a1, . . . , an) ∈ R(U) = R(A) satisfying
• a_{j_i} = 1,
• a_{j_l} = 0 for l ∈ {1, 2, . . . , k} \ {i}.
Hence each Ri is determined by R(A), and so the rcf of A is unique.
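One can also see the uniqueness statement of (b) at work numerically with SymPy (the matrices below are arbitrary examples): multiplying A on the left by an invertible matrix changes neither its row space nor, therefore, its row canonical form or pivotal columns.

import sympy as sp

A = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 7],
               [1, 2, 1, 4]])
P = sp.Matrix([[1, 1, 0],
               [0, 1, 2],
               [1, 0, 1]])           # invertible (det = 3), so R(P*A) = R(A)
R1, pivots1 = A.rref()
R2, pivots2 = (P * A).rref()
assert R1 == R2 and pivots1 == pivots2
print(pivots1)                       # (0, 2): the pivotal columns
print(R1)                            # the common row canonical form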

7. Let U be a k × n matrix in row canonical form with pivotal columns j1 < · · · < jk and with
row vectors R1, . . . , Rk. Let v = [α1, . . . , αn] be a row vector. Show that v ∈ R(U), the row
space of U, if and only if v = ∑_{i=1}^{k} α_{j_i} Ri.
Solution This follows from the observations in the last question: a vector v ∈ R(U) can be
written as v = ∑_{i=1}^{k} ci Ri, and comparing the entries in the pivotal columns gives
ci = α_{j_i} for each i, since Ri has a 1 in column ji and 0 in every other pivotal column.
Conversely, any vector of the form ∑_{i=1}^{k} α_{j_i} Ri clearly lies in R(U).
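A short SymPy check of the criterion (the matrix U and the test vectors are arbitrary; the helper reconstruct is purely illustrative):

import sympy as sp

U = sp.Matrix([[1, 2, 0, 3],
               [0, 0, 1, 1]])              # already in row canonical form
_, pivots = U.rref()                       # pivotal columns: (0, 2)

def reconstruct(v):
    # sum_i alpha_{j_i} * R_i, where alpha_{j_i} are the pivotal-column entries of v
    return sum((v[j] * U.row(i) for i, j in enumerate(pivots)), sp.zeros(1, U.cols))

v_in = 2 * U.row(0) - 5 * U.row(1)         # lies in R(U)
v_out = sp.Matrix([[0, 1, 0, 0]])          # does not lie in R(U)
print(reconstruct(v_in) == v_in)           # True
print(reconstruct(v_out) == v_out)         # False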

8. Let S1, . . . , S5 denote the subspaces of the vector space of all n×n real matrices consisting,
respectively, of the diagonal, upper triangular, trace-zero, symmetric, and skew-symmetric
matrices. Find the dimensions of S1, . . . , S5.
Solution A basis of S1 is given by {eii | 1 ≤ i ≤ n}, where eij denotes the matrix with 1 in
the (i, j) position and 0 elsewhere; so the dimension of S1 is n.
A basis of S2 is given by {eij | i ≤ j}, so the dimension of S2 is n(n + 1)/2.
The entries of an n × n matrix A = (aij) have to satisfy a single nontrivial linear equation,
namely a11 + · · · + ann = 0, for A to have trace 0. Thus the dimension of S3 is n² − 1.
A basis of S4 is given by {eij + eji | i ≤ j}, so the dimension of S4 is n(n + 1)/2.
A basis of S5 is given by {eij − eji | i < j}, so the dimension of S5 is n(n − 1)/2.
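These counts are easy to confirm for a specific size (n = 4 below; the choice is arbitrary) by flattening a spanning set of each subspace into rows and computing its rank:

import numpy as np
from numpy.linalg import matrix_rank

n = 4
E = lambda i, j: np.eye(n)[:, [i]] @ np.eye(n)[[j], :]    # matrix unit e_ij

diag      = [E(i, i) for i in range(n)]
upper     = [E(i, j) for i in range(n) for j in range(i, n)]
symmetric = [E(i, j) + E(j, i) for i in range(n) for j in range(i, n)]
skew      = [E(i, j) - E(j, i) for i in range(n) for j in range(i + 1, n)]
trace0    = [E(i, j) for i in range(n) for j in range(n) if i != j] + \
            [E(i, i) - E(n - 1, n - 1) for i in range(n - 1)]

dim = lambda mats: matrix_rank(np.array([M.ravel() for M in mats]))
print(dim(diag), dim(upper), dim(trace0), dim(symmetric), dim(skew))
# prints 4 10 15 10 6, i.e. n, n(n+1)/2, n^2 - 1, n(n+1)/2, n(n-1)/2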

9. Let V = {(x1 , x2 , . . .) : xn ∈ R for all n ∈ N} denote the vector space of all real sequences.
Let U denote the subspace of all real sequences converging to 0. A sequence in V is said to
be eventually zero if there is some n0 ∈ N such that xn = 0 for all n ≥ n0 . Let W denote
the subspace of all eventually zero sequences. Clearly W ⊆ U ⊆ V . Show that W is infinite
dimensional. Can you think of bases of U, V and W ?
Solution For i = 1, 2, 3, . . ., let ei be the vector in V with 1 in the ith coordinate and 0’s
elsewhere. Set B = {ei |i ≥ 1}. It is obvious that B is a basis of W and hence (why?) V, U, W
are infinite dimensional.
The existence of bases for U and V is shown using Zorn's lemma; no explicit bases are known.

10. Find the rank and the nullity of the following linear transformations.
(i) T : R1×2 −→ R1×3 defined by T ([x1 , x2 ]) = [x1 , x1 + x2 , x2 ].
(ii) T : R1×4 −→ R1×3 defined by T ([x1 , x2 , x3 , x4 ]) = [x1 − x4 , x2 + x3 , x3 − x4 ].
Solution We denote the image of a transformation T by R(T) and its null space by N(T).

(i) N (T ) = {0}, so Rank = 2 and Nullity = 0.
(ii) N (T ) = {[s, −s, s, s] : s ∈ R}, so Rank = 3 and Nullity = 1.
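These ranks and nullities can be read off from the matrices of the two maps, written here so that T(x) = x M on row vectors (the matrices below are just the coefficient arrays of the maps in the problem):

import numpy as np
from numpy.linalg import matrix_rank

M1 = np.array([[1, 1, 0],
               [0, 1, 1]])               # T([x1, x2]) = [x1, x1 + x2, x2]
M2 = np.array([[ 1, 0,  0],
               [ 0, 1,  0],
               [ 0, 1,  1],
               [-1, 0, -1]])             # T([x1, x2, x3, x4]) = [x1 - x4, x2 + x3, x3 - x4]
for M in (M1, M2):
    r = matrix_rank(M)
    print("rank =", r, " nullity =", M.shape[0] - r)   # nullity = dim(domain) - rank
# prints rank = 2, nullity = 0 and then rank = 3, nullity = 1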

11. Find the matrix of the linear operator T : R3×1 → R3×1 defined by T([x1, x2, x3]t) =
[x1 + x3, 2x1 − x2, x3/2]t w.r.t. the ordered basis (i) E = (e1, e2, e3) of R3×1 as well as w.r.t.
the ordered basis (ii) F = (e1 + e2, e2, 6e1 + 8e2 − 3e3) of R3×1.
 
Solution (i) M_E^E(T) =
[ 1   0   1  ]
[ 2  −1   0  ]
[ 0   0  1/2 ].
(ii) Let F := (f1, f2, f3), where f1 = e1 + e2, f2 = e2 and f3 = 6e1 + 8e2 − 3e3. Then
M_E^F =
[ 1  0   6 ]
[ 1  1   8 ]
[ 0  0  −3 ].
Since e1 = f1 − f2, e2 = f2 and e3 = 2f1 + 2f2/3 − f3/3,
M_F^E =
[  1  0    2  ]
[ −1  1   2/3 ]
[  0  0  −1/3 ].
Hence M_F^F(T) = M_F^E M_E^E(T) M_E^F =
[ 1   0   0  ]
[ 0  −1   0  ]
[ 0   0  1/2 ].
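A quick NumPy check of the computation in (ii), using the matrices found above:

import numpy as np

M_EE = np.array([[1, 0, 1],
                 [2, -1, 0],
                 [0, 0, 0.5]])           # M_E^E(T)
M_EF = np.array([[1, 0, 6],
                 [1, 1, 8],
                 [0, 0, -3]])            # columns: f1, f2, f3 in E-coordinates
M_FE = np.linalg.inv(M_EF)               # expresses e1, e2, e3 in F-coordinates
print(np.round(M_FE @ M_EE @ M_EF, 10))  # the diagonal matrix with entries 1, -1, 1/2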

12. Find the matrix of the linear transformation T : R3×1 → R4×1 defined by T([x1, x2, x3]t) =
[x1 + x2, x2 + x3, x3 + x1, x1 + x2 + x3]t w.r.t. the ordered bases (i) E = (e1, e2, e3) of R3×1 and
F = (e1, e2, e3, e4) of R4×1, as well as w.r.t. the ordered bases (ii) Ẽ = (e1 + e2, e2 + e3, e3 + e1)
of R3×1 and F̃ = (e1 + e2 + e3, e2 + e3 + e4, e3 + e4 + e1, e4 + e1 + e2) of R4×1. Also, find the
transition matrices M_E^{Ẽ} and M_{F̃}^{F}, and verify that M_{F̃}^{Ẽ}(T) = M_{F̃}^{F} M_F^E(T) M_E^{Ẽ}.
 
Solution (i) M_F^E(T) =
[ 1  1  0 ]
[ 0  1  1 ]
[ 1  0  1 ]
[ 1  1  1 ].
(ii) Let ẽ1 = e1 + e2, ẽ2 = e2 + e3, ẽ3 = e3 + e1, and f̃1 = e1 + e2 + e3, f̃2 = e2 + e3 + e4, f̃3 =
e3 + e4 + e1, f̃4 = e4 + e1 + e2. Then T(ẽ1) = T(e1) + T(e2) = (e1 + e3 + e4) + (e1 + e2 + e4) =
f̃3 + f̃4, T(ẽ2) = (e1 + e2 + e4) + (e2 + e3 + e4) = f̃2 + f̃4 and T(ẽ3) = (e2 + e3 + e4) + (e1 + e3 + e4) =
f̃2 + f̃3. Hence
M_{F̃}^{Ẽ}(T) =
[ 0  0  0 ]
[ 0  1  1 ]
[ 1  0  1 ]
[ 1  1  0 ].
Also, M_E^{Ẽ} =
[ 1  0  1 ]
[ 1  1  0 ]
[ 0  1  1 ].
Write F = (f1, f2, f3, f4) = (e1, e2, e3, e4). Since f1 = (1/3)(f̃1 − 2f̃2 + f̃3 + f̃4), f2 =
(1/3)(f̃1 + f̃2 − 2f̃3 + f̃4), f3 = (1/3)(f̃1 + f̃2 + f̃3 − 2f̃4) and f4 = (1/3)(−2f̃1 + f̃2 + f̃3 + f̃4),
M_{F̃}^{F} = (1/3) ×
[  1   1   1  −2 ]
[ −2   1   1   1 ]
[  1  −2   1   1 ]
[  1   1  −2   1 ].
We can verify that M_{F̃}^{Ẽ}(T) = M_{F̃}^{F} M_F^E(T) M_E^{Ẽ}.
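The identity can indeed be confirmed numerically with a direct NumPy computation using the matrices found above (the tilde bases are written out as Etilde and Ftilde in the comments):

import numpy as np

M_FE_T = np.array([[1, 1, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])                 # M_F^E(T)
M_E_Et = np.array([[1, 0, 1],
                   [1, 1, 0],
                   [0, 1, 1]])                 # M_E^{Etilde}
M_Ft_F = np.array([[ 1,  1,  1, -2],
                   [-2,  1,  1,  1],
                   [ 1, -2,  1,  1],
                   [ 1,  1, -2,  1]]) / 3      # M_{Ftilde}^F
print(np.round(M_Ft_F @ M_FE_T @ M_E_Et))
# equals M_{Ftilde}^{Etilde}(T) = [[0,0,0],[0,1,1],[1,0,1],[1,1,0]] (up to rounding)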
