
MATH1030 C,D

Suggested Solution to HW5 (8th edition)

Frank Li Hangfan
March 19, 2015

Section 3.6

1(a). The reduced row echelon form of the matrix is:

\begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}

Recall that both the row space and the null space are unaffected by elementary row operations. Hence the
row space is spanned by (1, 0, 2) and (0, 1, 0), while the null space is spanned by (−2, 0, 1)^T (the solutions
to the homogeneous system corresponding to the row echelon form satisfy y = 0 and x = −2z, with z ∈ R free).
Now, the rank of the matrix is 2, hence to find a basis for the column space it suffices to find two linearly
independent columns of the original matrix. As the textbook notes, we may select those that contain leading
ones in the row echelon form. Hence the first two columns (1, 2, 4)^T and (3, 1, 7)^T will do.
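These facts can be checked numerically. The original matrix is not reproduced above, but it can be reconstructed from the quoted columns together with the relation a3 = 2a1 read off the reduced form (a hypothetical reconstruction, used here only for illustration):

```python
# Hypothetical reconstruction of the matrix in 1(a): columns
# a1 = (1,2,4)^T, a2 = (3,1,7)^T and a3 = 2*a1, as read off the rref.
A = [[1, 3, 2],
     [2, 1, 4],
     [4, 7, 8]]

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# (-2, 0, 1)^T spans the null space: A v should be the zero vector.
null_vec = [-2, 0, 1]
print(matvec(A, null_vec))   # → [0, 0, 0]
```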

1(b). In the same way as (a), we can calculate that the set

{(1, 0, 0, −10/7), (0, 1, 0, −2/7), (0, 0, 1, 0)}

is a basis for the row space. Since the reduced row echelon form of the matrix involves one free variable,
the null space has dimension 1. Setting the free variable x4 = t, we find that {(10/7, 2/7, 0, 1)^T} is a basis for
the null space. The dimension of the column space equals the rank of the matrix, which is 3. Thus the
column space must be R^3 and we can take as our basis the standard basis {e1, e2, e3}.
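The null-space vector can be verified against the row-space basis with exact arithmetic: every row of the reduced form must annihilate it. A quick check using Python's `fractions`:

```python
from fractions import Fraction as F

rows = [[F(1), F(0), F(0), F(-10, 7)],
        [F(0), F(1), F(0), F(-2, 7)],
        [F(0), F(0), F(1), F(0)]]
null_vec = [F(10, 7), F(2, 7), F(0), F(1)]

# Each row of the reduced form dotted with the null-space vector is 0.
dots = [sum(r * x for r, x in zip(row, null_vec)) for row in rows]
print(dots)   # all three dot products are 0
```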
1(c). In the same way as (a): the set {(1, 0, 0, −0.65), (0, 1, 0, 1.05), (0, 0, 1, 0.75)} is a basis for the row
space, the set {(0.65, −1.05, −0.75, 1)^T} is a basis for the null space, and the basis for the column
space is {e1, e2, e3}.

2(c). The dimension is 2, because two of the vectors can be written as linear combinations of the other two.
3(a). The reduced row echelon form of A is given by:

U = \begin{pmatrix} 1 & 2 & 0 & 5 & -3 & 0 \\ 0 & 0 & 1 & -1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}

The second, fourth and fifth columns correspond to the free variables, and u2 = 2u1, u4 = 5u1 − u3,
u5 = −3u1 + 2u3.

3(b). The lead variables correspond to columns 1,3 and 6. Thus, a1 , a3 and a6 form a basis for the column
space of A. The remaining column vectors satisfy the following dependency relations:

a2 = 2a1 , a4 = 5a1 − a3 , a5 = −3a1 + 2a3
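The principle used here, that row operations preserve dependency relations among columns, can be illustrated directly: left-multiplying U by any invertible matrix B yields a matrix BU whose columns satisfy exactly the same relations. A sketch with an arbitrary invertible B (chosen only for illustration):

```python
U = [[1, 2, 0, 5, -3, 0],
     [0, 0, 1, -1, 2, 0],
     [0, 0, 0, 0, 0, 1]]
B = [[1, 0, 0],   # an invertible matrix, chosen arbitrarily
     [2, 1, 0],
     [0, 1, 1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = matmul(B, U)
col = lambda M, j: [row[j] for row in M]
a = [col(A, j) for j in range(6)]

# The relations u2 = 2u1, u4 = 5u1 - u3, u5 = -3u1 + 2u3 carry over to A.
print(a[1] == [2 * x for x in a[0]])                         # True
print(a[3] == [5 * x - y for x, y in zip(a[0], a[2])])       # True
print(a[4] == [-3 * x + 2 * y for x, y in zip(a[0], a[2])])  # True
```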

8(a). Since N (A) = {0}, we know that Ax=0 has only the trivial solution x=0, which tells us that the
columns of A are linearly independent. But they cannot span Rm , since the column space of A has
dimension n < m.

8(b). If b is not in CS(A), then obviously Ax=b has no solutions. Suppose, then, that b ∈ CS(A). Clearly
Ax=b has at least one solution, by definition of CS(A). So assume that

Ax = Ay = b

Then
A(x − y) = Ax − Ay = b − b = 0
so x − y ∈ N (A). Since N (A) = {0}, it follows that x = y and the solution is unique.
10. Yes. Since rank(A) = n, the n columns of A span an n-dimensional subspace, so the columns of A are
linearly independent. Hence Ax=0 has only the trivial solution x=0, from which we infer that Ac=Ad
implies A(c − d) = 0 and therefore c=d. If rank(A) < n the conclusion fails: for example, if A is the
all-zero matrix, then Ac=Ad for every pair of vectors c and d.
12(a). If A and B are row equivalent, then they have the same row space and consequently the same rank.
Since the dimension of the column space equals the rank it follows that the two column spaces will
have the same dimension.

12(b). If A and B are row equivalent, then they will have the same row space, however, their column spaces
are in general not the same. For example if:
A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}

and

B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
then A and B are row equivalent but the column space of A is equal to Span(e1 ) while the column
space of B is Span(e2 ).
14. The column vectors of A and U satisfy the same dependency relations. By inspection one can see that:

u3 = 2u1 + u2 , u4 = u1 + 4u2

Therefore,

a3 = 2a1 + a2 = (−2, 7, 11, 1)^T

and

a4 = a1 + 4a2 = (13, −7, 30, −3)^T

16. If A is 5 by 8 with rank 5, then the column space of A will be R5 . So by the consistency theorem,
the system Ax=b will be consistent for any b in R5 . Since A has 8 columns, its reduced row echelon
form will involve 3 free variables. A consistent system with free variables must have infinitely many
solutions.

18(a). Since A is 5 by 3 with rank 3, its nullity is 0. Therefore N (A) = {0}.

18(b). If c1 y1 + c2 y2 + c3 y3 = 0 then
c1 Ax1 + c2 Ax2 + c3 Ax3 = 0
and it follows that c1 x1 + c2 x2 + c3 x3 is in N(A). However, we know from part (a) that N (A) = {0}.
Therefore:
c 1 x1 + c 2 x2 + c 3 x3 = 0
Since x1 , x2 , x3 are linearly independent it follows that c1 = c2 = c3 = 0 and hence y1 , y2 , y3 are linearly
independent.

18(c). Since dimR5 = 5 it takes 5 linearly independent vectors to span the vector space. The vectors y1 , y2 , y3
do not span R5 and hence cannot form a basis for R5 .

19. Given A is m by n with rank n and y=Ax where x ̸= 0. If y=0, then

x1 a1 + ... + xn an = 0

But this would imply that the column vectors of A are linearly dependent. Since A has rank n we
know that its column vectors must be linearly independent. Therefore y cannot be equal to 0.

20. If the system Ax=b is consistent, then b is in the column space of A. Therefore the column space of
(A|b) will equal the column space of A. Since the rank of a matrix is equal to the dimension of the
column space, it follows that the rank of (A|b) equals the rank of A.
Conversely, if (A|b) and A have the same rank, then b must be in the column space of A: if b were not
in the column space of A, then the rank of (A|b) would equal rank(A) + 1.
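The criterion rank(A|b) = rank(A) if and only if the system is consistent is easy to test computationally with exact Gaussian elimination. A minimal sketch; the matrix and right-hand sides below are hypothetical examples chosen for illustration:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gauss-Jordan elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [2, 4], [3, 6]]     # rank 1
b_good = [1, 2, 3]               # equals the first column: consistent
b_bad = [1, 0, 0]                # not in the column space

aug = lambda A, b: [row + [x] for row, x in zip(A, b)]
print(rank(aug(A, b_good)) == rank(A))   # True  -> consistent
print(rank(aug(A, b_bad)) == rank(A))    # False -> inconsistent
```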
22(a). If x ∈ N (A), then
BAx = B0 = 0
and hence x ∈ N (BA). Thus N(A) is a subspace of N(BA). On the other hand, if x ∈ N (BA), then

B(Ax) = BAx = 0

and hence Ax ∈ N (B). But N(B)={0} since B is nonsingular. Therefore Ax=0 and hence x ∈ N (A).
Thus BA and A have the same nullspace. It follows from the Rank-Nullity Theorem that

rank(A) = n − dimN (A) = n − dimN (BA) = rank(BA)

22(b). By part (a), left multiplication by a nonsingular matrix does not alter the rank. Thus

rank(A) = rank(A^T) = rank(C^T A^T) = rank((AC)^T) = rank(AC)

24. If N (A − B) = Rn then the nullity of A-B is n and consequently the rank of A-B must be 0. Therefore

A − B = 0, A = B

Section 4.1
3. The verification that L is nonlinear is easy:

L(αx + y) = αx + y + a ̸= α(x + a) + (y + a) = αL(x) + L(y)

The geometric effect is to shift the line containing x away from the origin, the distance and direction
of the shift determined by the length and direction of a. In the event that a and x are collinear, the
shifted line still passes through the origin, but the transformation is nonetheless nonlinear, as shown
by the preceding step.

4. Since (7, 5)^T = 4(1, 2)^T + 3(1, −1)^T, we have

L((7, 5)T ) = 4L((1, 2)T ) + 3L((1, −1)T ) = 4(−2, 3)T + 3(5, 2)T

Thus, L((7, 5)T ) = (7, 18)T
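The arithmetic can be replayed in a couple of lines; everything here is just linearity applied to the given values:

```python
# (7,5) = 4*(1,2) + 3*(1,-1), so L((7,5)) = 4*L((1,2)) + 3*L((1,-1)).
c1, c2 = 4, 3
assert (c1 * 1 + c2 * 1, c1 * 2 + c2 * (-1)) == (7, 5)

L12, L1m1 = (-2, 3), (5, 2)    # the given values L((1,2)^T) and L((1,-1)^T)
result = tuple(c1 * u + c2 * v for u, v in zip(L12, L1m1))
print(result)   # → (7, 18)
```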


6 (a) Answer: We know that if L is a linear transformation from a vector space V to a vector space
W, then L(0v) = 0w.
Here L(x) = (x1, x2, 1)^T, so for x = 0 (that is, x1 = x2 = 0) we get
L(x) = (0, 0, 1)^T ≠ 0. Thus, this is not a linear transformation.

(b) Answer: This is a linear transformation. Since L(x) = (x1, x2, x1 + 2x2)^T, for any scalars α, β we can deduce:
L(αx + βy) = L((αx1 + βy1, αx2 + βy2)^T) = (αx1 + βy1, αx2 + βy2, αx1 + βy1 + 2(αx2 + βy2))^T =
α(x1, x2, x1 + 2x2)^T + β(y1, y2, y1 + 2y2)^T = αL(x) + βL(y). Thus, it is a linear
transformation.

(c) Answer: This is a linear transformation. Since L(x) = (x1, 0, 0)^T, for any scalars α, β we can deduce:
L(αx + βy) = L((αx1 + βy1, αx2 + βy2)^T) = (αx1 + βy1, 0, 0)^T = α(x1, 0, 0)^T + β(y1, 0, 0)^T =
αL(x) + βL(y). Thus, it is a linear transformation.

(d) Answer:If L is a linear transformation from a vector space V to another vector space W, then:

L(v1 + v2 ) = L(v1 ) + L(v2 ).


Here, if we pick v1 = (1, 2)^T, v2 = (2, 1)^T in R^2, then as L(x) = (x1, x2, x1^2 + x2^2)^T we have:
L(v1) = (1, 2, 5)^T, L(v2) = (2, 1, 5)^T, and L(v1 + v2) = L((3, 3)^T) = (3, 3, 18)^T. Thus L(v1 + v2) ≠
L(v1) + L(v2) = (3, 3, 10)^T, a counterexample, so this is not a linear transformation.
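The counterexample is easy to verify mechanically:

```python
def L(x):
    x1, x2 = x
    return (x1, x2, x1**2 + x2**2)

v1, v2 = (1, 2), (2, 1)
s = tuple(a + b for a, b in zip(v1, v2))          # v1 + v2 = (3, 3)
lhs = L(s)                                        # L(v1 + v2)
rhs = tuple(a + b for a, b in zip(L(v1), L(v2)))  # L(v1) + L(v2)
print(lhs, rhs)   # → (3, 3, 18) (3, 3, 10)
```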

8 (a) Answer: Let A, B be any elements in Rn×n , and α, β be any two scalars. We have:

L(αA+βB) = C(αA+βB)+(αA+βB)C = αCA+βCB+αAC+βBC = α(CA+AC)+β(CB+BC)

= αL(A) + βL(B)
Thus, it is a linear transformation.

(b) Answer: Let A, B be any elements in Rn×n , and α, β be any two scalars. We have:

L(αA + βB) = C 2 (αA + βB) = αC 2 A + βC 2 B = αL(A) + βL(B)

Thus, it is a linear transformation.

(c) Answer: Here L(A) = A^2 C. For a counterexample, take any A with A^2 C ≠ 0 and any scalar
α with α^2 ≠ α:

L(αA) = α^2 A^2 C ≠ α A^2 C = αL(A)

Thus, it is not a linear transformation.
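A concrete instance (with hypothetical 2×2 choices of A and C, picked only for illustration) exhibits the failure of homogeneity:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

C = [[1, 0], [0, 1]]        # a fixed C; the identity, for simplicity
A = [[1, 1], [0, 1]]

L = lambda M: matmul(matmul(M, M), C)    # L(A) = A^2 C

alpha = 2
lhs = L([[alpha * x for x in row] for row in A])   # L(2A) = 4 A^2 C
rhs = [[alpha * x for x in row] for row in L(A)]   # 2 L(A) = 2 A^2 C
print(lhs, rhs)   # lhs is twice rhs, so they differ
```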

9 (a) Answer: Let p(x), q(x) be any two polynomials in P2 , and α, β be any two scalars. We have:

L(p(x)) = xp(x), L(q(x)) = xq(x)

L(αp(x) + βq(x)) = x(αp(x) + βq(x)) = αxp(x) + βxq(x) = αL(p(x)) + βL(q(x))


Thus, this is a linear transformation from P2 to P3 .

(b) Answer: Let p(x), q(x) be any two polynomials in P2 , and α, β be any two scalars. We have:

L(p(x)) = x2 + p(x), L(q(x)) = x2 + q(x)

L(αp(x) + βq(x)) = x^2 + αp(x) + βq(x), while αL(p(x)) + βL(q(x)) = α(x^2 + p(x)) + β(x^2 + q(x)) =
(α + β)x^2 + αp(x) + βq(x). These agree only when α + β = 1, so L(αp(x) + βq(x)) is not always
equal to αL(p(x)) + βL(q(x)). This is not a linear transformation.
(c) Answer: Let p(x), q(x) be any two polynomials in P2 , and α, β be any two scalars. We have:

L(p(x)) = p(x) + xp(x) + x2 p′ (x), L(q(x)) = q(x) + xq(x) + x2 q ′ (x)

L(αp(x) + βq(x)) = αp(x) + βq(x) + x(αp(x) + βq(x)) + x2 (αp(x) + βq(x))′ =


αp(x) + αxp(x) + αx2 p′ (x) + βq(x) + βxq(x) + βx2 q ′ (x) = αL(p(x)) + βL(q(x))
Thus this is a linear transformation from P2 to P3 .
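The same computation can be checked with polynomials represented as coefficient lists in ascending powers; the helper functions below are ad hoc, written only to mirror L(p) = p + xp + x^2 p′:

```python
def padd(p, q):
    """Add two coefficient lists, padding to equal length."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def smul(s, p):
    return [s * c for c in p]

def xmul(p):            # multiply by x: shift coefficients up one degree
    return [0] + p

def deriv(p):           # formal derivative
    return [i * c for i, c in enumerate(p)][1:]

def L(p):               # L(p) = p + x*p + x^2 * p'
    return padd(padd(p, xmul(p)), xmul(xmul(deriv(p))))

p, q = [3, 1, 2], [0, -1, 5]      # arbitrary elements c + bx + ax^2
a, b = 2, -4
lhs = L(padd(smul(a, p), smul(b, q)))
rhs = padd(smul(a, L(p)), smul(b, L(q)))
print(lhs == rhs)   # → True
```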

12. In the case n=1


L(α1 v1 ) = α1 L(v1 )
Let us assume the result is true for any linear combination of k vectors and apply L to a linear
combination of k+1 vectors.

L(α1 v1 + ... + αk vk + αk+1 vk+1 ) = L(α1 v1 + ... + αk vk ) + L(αk+1 vk+1 )

= α1 L(v1 ) + ... + αk L(vk ) + αk+1 L(vk+1 )


The result follows then by mathematical induction.
13 Proof: Because v1 , v2 , . . . , vn is a basis for V, we know that any vector v in V can be written as:

v = α1 v1 + α2 v2 + . . . + αn vn , where αi is a scalar for i = 1, 2, . . . , n.
thus
L1 (v) = L1 (α1 v1 + α2 v2 + · · · + αn vn ) = α1 L1 (v1 ) + α2 L1 (v2 ) + · · · + αn L1 (vn )
since L1 is a linear transformation. Because L1 (vi ) = L2 (vi ) for i = 1, 2, . . . , n, we have:

L1 (v) = α1 L2 (v1 ) + α2 L2 (v2 ) + · · · + αn L2 (vn )

As L2 is a linear transformation as well, we have

α1 L2 (v1 ) + α2 L2 (v2 ) + · · · + αn L2 (vn ) = L2 (α1 v1 + α2 v2 + · · · + αn vn ) = L2 (v)

Thus L1 (v) = L2 (v) for all v ∈ V, which proves L1 = L2 .

16. If v1 , v2 ∈ V , then

L(αv1 + βv2 ) = L2 (L1 (αv1 + βv2 )) = L2 (αL1 (v1 ) + βL1 (v2 ))

= αL2 (L1 (v1 )) + βL2 (L1 (v2 )) = αL(v1 ) + βL(v2 )


Therefore, L is a linear transformation.
17 (a) Answer: Here L(x) = x, so L is the identity operator on R^3. Only the zero vector is mapped
to 0, and every vector lies in the range:

ker(L) = {0}, L(R^3) = R^3

(b) Answer: Setting L(x) = 0 gives x1 = 0 and x2 = 0 with x3 free, so

ker(L) = Span(e3)

Since L(x) = (x1, x2, 0)^T, the range consists of all vectors whose third component is 0, so
L(R^3) = Span(e1, e2).

(c) Answer: Setting L(x) = 0 gives x1 = 0 with x2, x3 free, so

ker(L) = Span(e2, e3)

Since L(x) = (x1, x1, x1)^T, the range consists of all vectors whose three components are equal,
so L(R^3) = Span((1, 1, 1)^T).

19 Write an arbitrary element of P3 in the form p(x) = c + bx + ax^2, where a, b, c are scalars.
(a) Answer: p(x) = c + bx + ax2 , thus we have:

L(p(x)) = xp′ (x) = bx + 2ax2

If p(x) is in the kernel of L, then 2ax2 + bx = 0 for all x, thus a=0 and b=0. So every polynomial
in the kernel of L is of the form p(x) = c. Thus,

ker(L) = Span(1) = P1

To determine the range of L, we again consider an arbitrary polynomial p(x) = c + bx + ax^2 and
apply L to it: L(p(x)) = xp′(x) = bx + 2ax^2. Thus the range of L is the set of all polynomials
of the form bx + 2ax^2, so
Range(L) = Span(x, x^2)
(b) Answer: p(x) = c + bx + ax2 , thus we have:

L(p(x)) = c + bx + ax2 − b − 2ax = (c − b) + (b − 2a)x + ax2

So L(p(x)) = 0 for all x iff:

c − b = 0, b − 2a = 0, a = 0

which means a = b = c = 0. Hence,
ker(L) = {0}
Since ker(L) = {0}, L is one-to-one, so the range L(P3) has the same dimension as P3, namely 3.
A 3-dimensional subspace of P3 must be all of P3. That means
Range(L) = P3 or L(P3) = P3
(c) Answer: p(x) = c + bx + ax2 , thus we have:

L(p(x)) = cx + a + b + c

So L(p(x)) = 0 for all x iff:

c = 0, a + b + c = 0

which means b = −a and c = 0. Hence p(x) is in the kernel iff p(x) = ax^2 − ax = a(x^2 − x).
Thus,
Ker(L) = Span(x^2 − x)

To determine the range of L, we again consider an arbitrary polynomial p(x) = c + bx + ax^2 and
apply L to it: L(p(x)) = cx + (a + b + c). The coefficients c and a + b + c can take any pair of
values independently (choose c first, then a), so every polynomial of degree at most 1 is attained.
That means

Range(L) = P2 or L(P3) = P2

21. Suppose L is one-to-one and v ∈ ker(L).

L(v) = 0w , L(0v ) = 0w

Since L is one-to-one, it follows that v = 0v . Therefore ker(L) = {0v }.


Conversely, suppose ker(L) = {0v } and L(v1 ) = L(v2 ). Then L(v1 − v2 ) = L(v1 ) − L(v2 ) = 0w .
Therefore v1 − v2 ∈ ker(L) and hence

v1 − v2 = 0v , v1 = v2

So L is one-to-one.

22. To show that L maps R3 onto R3 we must show that for any vector y ∈ R3 , there exists a vector
x ∈ R3 such that L(x)=y. This is equivalent to showing that the linear system

x1 = y1 , x1 + x2 = y2 , x1 + x2 + x3 = y3

is consistent. This system is consistent since the coefficient matrix is non-singular.
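Equivalently, a preimage can be written down explicitly by forward substitution, which exhibits L as onto:

```python
def L(x):
    x1, x2, x3 = x
    return (x1, x1 + x2, x1 + x2 + x3)

def preimage(y):
    """Solve L(x) = y by forward substitution."""
    y1, y2, y3 = y
    return (y1, y2 - y1, y3 - y2)

y = (4, 7, 2)                 # an arbitrary target vector
print(L(preimage(y)))   # → (4, 7, 2)
```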


25. (a). If p = ax2 + bx + c ∈ P3 , then
D(p) = 2ax + b
Thus,
D(P3 ) = Span(1, x) = P2
The operator is not one-to-one, for if p1 (x) = ax2 + bx + c1 and p2 (x) = ax2 + bx + c2 where
c1 ̸= c2 , then D(p1 ) = D(p2 ).
(b). The subspace S consists of all polynomials of the form ax2 +bx. If p1 = a1 x2 +b1 x, p2 = a2 x2 +b2 x
and D(p1 ) = D(p2 ), then
2a1 x + b1 = 2a2 x + b2
and it follows that a1 = a2 , b1 = b2 . Thus, p1 = p2 and hence D is one-to-one. D does not map S
onto P3 since D(S) = P2
That’s all. Thank you!
