
Apr 2006 Selected Problems, Chapter 4 Math 230 (Mackey)

Revised: March 2008.

§4.1, p.183, #6a Is the following map L from R2 to R3 a linear transformation?

L(x) = (x1, x2, 1)^T

Solution: If L is a linear transformation, L must respect both vector addition and scalar multiplication.
This means it must be true that L(x + y) = L(x) + L(y) and L(αx) = αL(x) for all x, y ∈ R2 and
for all α ∈ R. We begin our analysis by investigating whether L respects addition.

L(x + y) = L((x1, x2)^T + (y1, y2)^T)    substituting for x, y, since they are in R2,
         = L((x1 + y1, x2 + y2)^T)       by definition of addition in R2,
         = (x1 + y1, x2 + y2, 1)^T       by definition of L.

On the other hand,

L(x) + L(y) = L((x1, x2)^T) + L((y1, y2)^T)    substituting for x, y, since they are in R2,
            = (x1, x2, 1)^T + (y1, y2, 1)^T    by definition of L,
            = (x1 + y1, x2 + y2, 2)^T          by definition of addition in R3.

Now we see that L(x + y) ≠ L(x) + L(y), since their third coordinates are not equal. Hence L is
not a linear transformation.
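The failure of additivity can be confirmed numerically. The following Python sketch (using numpy; the function name L is our own encoding of the map, not part of the text) evaluates both sides for a sample pair of vectors:

```python
import numpy as np

def L(x):
    # The map in question: appends a constant 1 as the third coordinate.
    return np.array([x[0], x[1], 1.0])

# Any nonzero choice of vectors in R^2 exposes the failure.
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

lhs = L(x + y)        # third coordinate is 1
rhs = L(x) + L(y)     # third coordinate is 1 + 1 = 2
print(lhs, rhs)
```

The two sides disagree in the third coordinate, exactly as the hand computation shows.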

§4.1, p. 184, #17c Determine the kernel and range of the following linear operator in R3 .

L(x) = (x1, x1, x1)^T

Solution: The kernel of L, denoted ker(L), consists of all those vectors in R3 that are killed by L,
that is, all vectors that get mapped to the zero vector by L. Thus our strategy for finding ker(L)
is always to solve L(x) = 0 for x. In the present case, we have
L(x) = 0  =⇒  (x1, x1, x1)^T = (0, 0, 0)^T.

Clearly, this tells us x1 = 0, and there are no restrictions on x2 and x3 . Thus


ker(L) = { (0, x2, x3)^T : x2, x3 ∈ R } = { x2 (0, 1, 0)^T + x3 (0, 0, 1)^T : x2, x3 ∈ R }.

This shows that a basis for ker(L) is { (0, 1, 0)^T, (0, 0, 1)^T } and so dim(ker(L)) = 2, making ker(L) a
2-dimensional subspace of R3.

The range of L, denoted range(L), consists of all vectors in R3 that are hit by L, that is, all
vectors v in the codomain of L for which there exists a corresponding vector u in the domain of
L such that L(u) = v. Our strategy for finding range(L) is to describe L(x) in terms of x1, x2, x3.
In this case, L(x) = (x1, x1, x1)^T = x1 (1, 1, 1)^T. This shows that range(L) = { x1 (1, 1, 1)^T : x1 ∈ R }, so a basis for
range(L) is { (1, 1, 1)^T }. Hence dim(range(L)) = 1, making range(L) a 1-dimensional subspace of R3.
Finally, we note that dim(range(L)) + dim(ker(L)) = 1 + 2 = 3 = dim(R3 ), as expected.
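This computation can be double-checked numerically. In the sketch below (Python with numpy; we encode L by its standard matrix, each row (1, 0, 0)), the rank gives dim(range(L)) and rank-nullity gives dim(ker(L)):

```python
import numpy as np

# Standard matrix of L(x) = (x1, x1, x1)^T.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

rank = np.linalg.matrix_rank(A)   # dim(range(L))
nullity = A.shape[1] - rank       # dim(ker(L)) by rank-nullity

# The claimed basis vectors of ker(L) really are killed by L.
k1 = np.array([0.0, 1.0, 0.0])
k2 = np.array([0.0, 0.0, 1.0])
print(rank, nullity, A @ k1, A @ k2)
```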

§4.2, p. 198, #14a Let L : P3 → P2 be the linear transformation given by


L(p(x)) = p′(x) + p(0)
Find the matrix representation for L with respect to the bases S := [x^2, x, 1] and T := [2, 1 − x] of
P3 and P2 respectively.
Solution: Let A denote the matrix representation of L relative to the specified bases S and T. The
columns of A correspond to the coordinate vectors representing L(x^2), L(x), and L(1) in the basis
T. So we begin by using the definition of L to calculate these images. Since L(p(x)) = p′(x) + p(0),
we get
L(x^2) = 2x + 0 = 2x, (1)
L(x) = 1 + 0 = 1 (2)
and L(1) = 0 + 1 = 1. (3)
Then

    A = [ [L(x^2)]_T  [L(x)]_T  [L(1)]_T ] = [ [2x]_T  [1]_T  [1]_T ].
Next, we must find [2x]_T and [1]_T. Recall that [2x]_T stands for the coordinate vector of 2x relative
to the basis T of P2. To find the coordinates of 2x we must determine how to express 2x
as a linear combination of the “vectors” (polynomials) in basis T.
Sometimes, as here, this may be easy to see directly by inspection. (But we also offer a general
approach for more complex situations.) Observe that
2x = 1 · 2 + (−2)(1 − x). (4)
Hence, relative to the basis T, the polynomial 2x is represented by the coordinate vector (1, −2)^T.
This will be the first column of A, the matrix representing L in the specified bases.
What if we are faced with a more complex situation, or if we can’t see (4) simply by inspection?
Well, we are after scalars c1 , c2 such that
c1 · 2 + c2 · (1 − x) = 2x. (5)
The coordinate vector of 2x relative to the basis T := [2, 1 − x] will be [ cc12 ]T , and this in turn will
become the first column of A. We need to solve (5) for c1 , c2 . Expand the left hand side and collect
like terms:
(2c1 + c2 ) − c2 x = 2x,
=⇒ (2c1 + c2 ) + (−c2 − 2)x = 0,
=⇒ (2c1 + c2 )1 + (−c2 − 2)x = 0.
But now we have a linear combination of the linearly independent polynomials 1 and x equal to
the zero polynomial. This means that
2c1 + c2 = 0
−c2 − 2 = 0

We must solve this linear system for the unknowns c1 , c2 . Make sure to write the system in standard
form before proceeding:

2c1 + c2 = 0
−c2 = 2

The system is already in echelon form. Back substitution yields c2 = −2 and c1 = 1. Thus

    [2x]_T = (c1, c2)^T = (1, −2)^T.
Next, (2) and (3) tell us that we must express 1 as a linear combination of the polynomials in
the basis T. Clearly, 1 = (1/2) · 2 + 0 · (1 − x), so that [1]_T = (1/2, 0)^T.
Thus

    A = [ [2x]_T  [1]_T  [1]_T ] = [  1  1/2  1/2 ]
                                   [ −2   0    0  ].
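As a numerical cross-check of A (not part of the original solution), we can encode polynomials of P2 by their coefficients in the power basis [1, x] (an encoding chosen here for convenience) and solve for each column of A:

```python
import numpy as np

# Coefficients in the power basis [1, x] of P2.
# Images under L(p) = p'(x) + p(0) of the S-basis x^2, x, 1:
images = [np.array([0.0, 2.0]),   # L(x^2) = 2x
          np.array([1.0, 0.0]),   # L(x)   = 1
          np.array([1.0, 0.0])]   # L(1)   = 1

# Columns of B are the T-basis polynomials 2 and 1 - x in the power basis.
B = np.array([[2.0, 1.0],
              [0.0, -1.0]])

# Each column of A is the T-coordinate vector of the corresponding image.
A = np.column_stack([np.linalg.solve(B, v) for v in images])
print(A)   # columns (1, -2), (1/2, 0), (1/2, 0)
```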

§4.2, p.198, #18a Let E = [u1 , u2 , u3 ] and F = [b1 , b2 ] be bases for R3 and R2 , respectively,
where u1 = (1, 0, −1)^T, u2 = (1, 2, 1)^T, u3 = (−1, 1, 1)^T, b1 = (1, −1)^T, and b2 = (2, −1)^T. Let
L : R3 → R2 be given by L(x) = (x3, x1)^T. Find the matrix representation of L with respect to the
bases E and F .
Solution: Let A denote the matrix representation of L relative to the specified bases E and F .
The columns of A correspond to the coordinate vectors representing L(u1 ), L(u2 ), and L(u3 ) in the
basis F . We begin by using the definition of L to calculate these images. By definition,
    L((x1, x2, x3)^T) = (x3, x1)^T. (6)

Note that coordinates in (6) are with respect to the standard bases of R3 and R2 . Using (6) we get
    L(u1) = L((1, 0, −1)^T) = (−1, 1)^T, (7)
    L(u2) = L((1, 2, 1)^T) = (1, 1)^T, (8)
    and L(u3) = L((−1, 1, 1)^T) = (1, −1)^T. (9)

We must rewrite each of L(u1 ), L(u2 ), and L(u3 ) as linear combinations of the vectors in the basis
F = [b1 , b2 ]. We begin with L(u1 ). We want to find scalars c1 , c2 such that

c1 b1 + c2 b2 = L(u1 ) (10)
    =⇒ c1 (1, −1)^T + c2 (2, −1)^T = (−1, 1)^T (11)

    =⇒ [  1   2 ] [ c1 ]  =  [ −1 ]
       [ −1  −1 ] [ c2 ]     [  1 ]. (12)

If you use the column-wise view of matrix multiplication, you will be able to see by inspection that
c1 = −1 and c2 = 0 is a solution to this system. Moreover, we can argue that this solution must be
unique, because the coefficient matrix is invertible, since its columns come from the basis F , so the
columns are linearly independent. (Observe that this argument justifying the invertibility of the
coefficient matrix can be applied just as easily even if we were dealing with a higher dimensional
co-domain!) If you didn’t see the solution by inspection, set up the augmented matrix, reduce to echelon
form, then do back substitution:
    [  1   2 | −1 ]
    [ −1  −1 |  1 ]   R2 ← R2 + R1

    [ 1  2 | −1 ]
    [ 0  1 |  0 ]
From R2 we see that c2 = 0, and then substituting in R1 yields c1 = −1. Thus the first column of
A has been found:
    [L(u1)]_F = (c1, c2)^T = (−1, 0)^T. (13)

Similarly, we find [L(u2 )]F and [L(u3 )]F by setting up and solving appropriate linear systems.
Observe that the coefficient matrix will be the same, only the RHS will change.
    [L(u2)]_F :  [  1   2 |  1 ]                  [L(u3)]_F :  [  1   2 |  1 ]
                 [ −1  −1 |  1 ]  R2 ← R2 + R1                 [ −1  −1 | −1 ]  R2 ← R2 + R1

                 [ 1  2 | 1 ]                                  [ 1  2 | 1 ]
                 [ 0  1 | 2 ]                                  [ 0  1 | 0 ]
Back substitution yields
    [L(u2)]_F = (−3, 2)^T,    [L(u3)]_F = (1, 0)^T.

Thus

    A = [ [L(u1)]_F  [L(u2)]_F  [L(u3)]_F ] = [ −1  −3  1 ]
                                              [  0   2  0 ]
is the matrix encoding of L relative to the basis E in the domain, and F in the co-domain.
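All three columns can be verified at once numerically. In the sketch below (Python with numpy; the names L, E, and B are our own encodings), column j of A is obtained by solving B c = L(u_j), where the columns of B are the F-basis vectors:

```python
import numpy as np

def L(x):
    # L(x) = (x3, x1)^T in standard coordinates.
    return np.array([x[2], x[0]])

E = [np.array([1.0, 0.0, -1.0]),   # u1
     np.array([1.0, 2.0, 1.0]),    # u2
     np.array([-1.0, 1.0, 1.0])]   # u3

# Columns of B are the F-basis vectors b1 and b2.
B = np.array([[1.0, 2.0],
              [-1.0, -1.0]])

# Column j of A solves B c = L(u_j), i.e. c = [L(u_j)]_F.
A = np.column_stack([np.linalg.solve(B, L(u)) for u in E])
print(A)   # columns (-1, 0), (-3, 2), (1, 0)
```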

Chapter 4, Test A, p.207 Answer true if the statement is always true, and false otherwise. In the
case of a true statement, explain or prove your answer. In the case of a false statement, give an
example to show that the statement is not always true.
#1 Let L : Rn → Rn be a linear transformation. If L(x1 ) = L(x2 ), then x1 = x2 .
Solution: We begin by using what we are given, that L is a linear transformation and that
L(x1 ) = L(x2 ).

L(x1 ) = L(x2 ) by assumption,


=⇒ L(x1 ) − L(x2 ) = 0 subtracting L(x2 ) from both sides,
=⇒ L(x1 − x2 ) = 0 since L is linear.

Thus L kills the vector x1 − x2. If the only vector that L kills is the zero vector (in other words,
if ker(L) = {0}), then we can conclude that x1 − x2 = 0 and hence x1 = x2. However, if ker(L)
is nontrivial, then there will be a nonzero vector u mapped to zero by L. In this case, we can set
x1 − x2 = u ≠ 0, and then it will follow that x1 ≠ x2. This suggests a counterexample.

Let L : R2 → R2 be the projection onto the x1-axis, that is, L((x1, x2)^T) = (x1, 0)^T. Then

    L((0, 1)^T) = (0, 0)^T   and   L((0, 2)^T) = (0, 0)^T,

but (0, 1)^T ≠ (0, 2)^T. Thus we conclude that the statement is false in general.
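The counterexample is easy to verify numerically. Below, P is the standard matrix of the projection (an encoding we choose; the projection itself is the one from the text):

```python
import numpy as np

# Projection onto the x1-axis; its kernel is the whole x2-axis.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

x1 = np.array([0.0, 1.0])
x2 = np.array([0.0, 2.0])

same_image = np.allclose(P @ x1, P @ x2)   # L(x1) == L(x2)
distinct_inputs = not np.allclose(x1, x2)  # yet x1 != x2
print(same_image, distinct_inputs)
```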

#2 If L1 and L2 are linear operators on a vector space V , then L1 + L2 is also a linear operator
on V , where L1 + L2 is defined as

(L1 + L2 )(x) := L1 (x) + L2 (x), for all x ∈ V . (14)

Solution: We must investigate whether L1 + L2 respects vector addition and scalar multiplication.
Read (14) with care: the plus sign on the LHS is between maps, while the plus sign on the RHS is
between vectors.
1. We want to know if L1 + L2 respects vector plus, that is

(L1 + L2 )(x1 + x2 ) = (L1 + L2 )(x1 ) + (L1 + L2 )(x2 ), for all x1 , x2 ∈ V . (15)

Start with the LHS. Let x1 , x2 ∈ V .


(L1 + L2)(x1 + x2) = L1(x1 + x2) + L2(x1 + x2)               by definition (14) of L1 + L2,
                   = L1(x1) + L1(x2) + L2(x1) + L2(x2)       since L1 and L2 are linear,
                   = L1(x1) + L2(x1) + L1(x2) + L2(x2)       since vector + is commutative,
                   = (L1(x1) + L2(x1)) + (L1(x2) + L2(x2))   since vector + is associative,
                   = (L1 + L2)(x1) + (L1 + L2)(x2)           by definition (14) of L1 + L2.

Thus (15) holds, and L1 + L2 respects vector +.

2. Next we check whether

(L1 + L2 )(αx) = α(L1 + L2 )(x), for all x ∈ V , and for all α ∈ R. (16)

So start with the LHS. Let α be any scalar, and x any vector in V .
(L1 + L2)(αx) = L1(αx) + L2(αx)      by definition (14) of L1 + L2,
              = αL1(x) + αL2(x)      since L1 and L2 are both linear,
              = α(L1(x) + L2(x))     since · distributes over + in any vector space,
              = α(L1 + L2)(x)        by definition (14) of L1 + L2.

Thus (16) holds, and L1 + L2 respects scalar multiplication.


Since L1 + L2 respects both operations, L1 + L2 is always a linear transformation. In other words,
the sum of two linear transformations defined on V is also a linear transformation on V .
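The argument can be illustrated concretely. In the sketch below (Python with numpy), two linear operators on R3 are represented by random matrices A1 and A2 (names of our choosing), and both linearity properties of their sum are checked on sample inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear operators on R^3, given by matrices (any matrices work).
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((3, 3))

def L_sum(x):
    # (L1 + L2)(x) := L1(x) + L2(x)
    return A1 @ x + A2 @ x

x, y = rng.standard_normal(3), rng.standard_normal(3)
alpha = 2.5

additive = np.allclose(L_sum(x + y), L_sum(x) + L_sum(y))
homogeneous = np.allclose(L_sum(alpha * x), alpha * L_sum(x))
print(additive, homogeneous)
```

Of course a numerical check on sample inputs is only an illustration; the proof above is what establishes the claim for all inputs.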

#3 Let L : V → V be a linear operator. If x ∈ ker(L), then L(v + x) = L(v), for all v ∈ V .


Solution: This is patently true, as we show. Let v ∈ V . Then we have
L(v + x) = L(v) + L(x) since L is linear,
= L(v) + 0 since x ∈ ker(L),
= L(v) since 0 is the additive identity.
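A quick numerical illustration (with a matrix A of our own choosing whose kernel contains (−1, −1, 1)^T):

```python
import numpy as np

# A linear operator on R^3 with a nontrivial kernel.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
x = np.array([-1.0, -1.0, 1.0])
assert np.allclose(A @ x, 0)       # x really is in ker(L)

v = np.array([2.0, -3.0, 5.0])     # an arbitrary vector in R^3
print(np.allclose(A @ (v + x), A @ v))   # L(v + x) == L(v)
```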

#4 If L1 rotates each vector x in R2 by 60 degrees and then reflects the resulting vector about
the x-axis, and L2 is a transformation that does the same two operations, but in the reverse order,
then L1 = L2 .

Solution: Since the direction of the rotation has not been specified, we choose a direction, say
counterclockwise. Let R denote the linear transformation that rotates each vector in R2 by 60
degrees counterclockwise, and S denote the linear transformation that reflects each vector in R2
about the x−axis. Then L1 := S ◦ R, while L2 := R ◦ S. Thus L1 = L2 if and only if S ◦ R = R ◦ S.
Let’s find the matrix representations of R and S relative to the standard basis for R2 .
    [R] = [ cos 60  −sin 60 ],    [S] = [ 1   0 ]. (17)
          [ sin 60   cos 60 ]           [ 0  −1 ]

Now S ◦ R = R ◦ S if and only if the matrices that represent them are equal, that is, if and only if
the matrix products [S][R] and [R][S] are equal. Now,
    [S][R] = [ 1   0 ] [ cos 60  −sin 60 ]  =  [  cos 60  −sin 60 ].
             [ 0  −1 ] [ sin 60   cos 60 ]     [ −sin 60  −cos 60 ]

On the other hand,


    [R][S] = [ cos 60  −sin 60 ] [ 1   0 ]  =  [ cos 60   sin 60 ].
             [ sin 60   cos 60 ] [ 0  −1 ]     [ sin 60  −cos 60 ]

Since [S][R] ≠ [R][S], we conclude that S ◦ R ≠ R ◦ S, and hence L1 ≠ L2.
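The non-commutativity can be confirmed numerically:

```python
import numpy as np

t = np.radians(60)
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotate 60 degrees counterclockwise
S = np.array([[1.0, 0.0],
              [0.0, -1.0]])               # reflect about the x-axis

SR = S @ R   # rotate first, then reflect (L1)
RS = R @ S   # reflect first, then rotate (L2)
print(np.allclose(SR, RS))
```

The products differ precisely because sin 60 ≠ 0; a rotation by 0 or 180 degrees would commute with the reflection.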

#7 Let E = [x1, x2, . . . , xn] be an ordered basis for Rn. If L1 : Rn → Rn and L2 : Rn → Rn have
the same matrix representation with respect to E, then L1 = L2.
Solution: The notation L1 = L2 means that L1 and L2 represent the same function, that is,
L1 and L2 yield the same output for every possible input. Thus we need to determine whether
L1(v) = L2(v) for all v ∈ Rn. Since L1 and L2 are represented by the same matrix, call it A, with
respect to the basis E, we have

    [L1(v)]_E = A[v]_E   and   [L2(v)]_E = A[v]_E,   for all v ∈ Rn.

Thus [L1 (v)]E = [L2 (v)]E for all v ∈ Rn . This means that for every v, L1 (v) and L2 (v) are expressed
by the same linear combination of the basis vectors x1 , x2 , . . . , xn . Thus L1 (v) = L2 (v) for every
v ∈ Rn . Consequently, L1 = L2 and the statement is true.
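The relation [L(v)]_E = A[v]_E is equivalent to L(v) = U A U^{-1} v, where U is the matrix whose columns are the basis vectors of E. The sketch below (Python with numpy; the helper name make_operator is ours) builds two operators from the same representation and checks that they agree:

```python
import numpy as np

rng = np.random.default_rng(1)

U = rng.standard_normal((3, 3))    # columns form a basis E (invertible here)
A = rng.standard_normal((3, 3))    # a shared matrix representation

def make_operator(basis, matrix):
    # [L(v)]_E = A [v]_E  translates to  L(v) = U A U^{-1} v.
    Uinv = np.linalg.inv(basis)
    return lambda v: basis @ (matrix @ (Uinv @ v))

L1 = make_operator(U, A)
L2 = make_operator(U, A.copy())    # built separately, same representation

v = rng.standard_normal(3)
print(np.allclose(L1(v), L2(v)))
```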
