Solution: If L is a linear transformation, L must respect both vector addition and scalar multiplication.
This means it must be true that L(x + y) = L(x) + L(y) and L(αx) = αL(x) for all x, y ∈ R2 and
for all α ∈ R. We begin our analysis by investigating whether L respects addition.
L(x) + L(y) = (x1 + y1, x2 + y2, 2)T by definition of addition.
Now we see that L(x + y) ≠ L(x) + L(y) since their third coordinates are not equal. Hence L is
not a linear transformation.
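As a quick sanity check, the failure of additivity can be verified numerically. The sketch below assumes L(x) = (x1, x2, 1)T, which is consistent with the mismatched third coordinates above; any map with a constant nonzero third coordinate fails in the same way.

```python
# Assumed map: L(x) = (x1, x2, 1)^T (the problem statement is not shown above).
def L(x):
    x1, x2 = x
    return (x1, x2, 1)

def add(u, v):
    """Componentwise vector addition."""
    return tuple(a + b for a, b in zip(u, v))

x, y = (3, -1), (2, 5)

lhs = L(add(x, y))       # L(x + y): third coordinate is 1
rhs = add(L(x), L(y))    # L(x) + L(y): third coordinate is 2

print(lhs, rhs, lhs == rhs)  # (5, 4, 1) (5, 4, 2) False
```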
§4.1, p. 184, #17c Determine the kernel and range of the following linear operator in R3 .
L(x) = (x1 , x1 , x1 )T
Solution: The kernel of L, denoted ker(L), consists of all those vectors in R3 that are killed by L,
that is, all vectors that get mapped to the zero vector by L. Thus our strategy for finding ker(L)
is always to solve L(x) = 0 for x. In the present case, we have
L(x) = 0 =⇒ (x1, x1, x1)T = (0, 0, 0)T =⇒ x1 = 0, with x2 and x3 free.
Hence ker(L) = {(0, x2, x3)T : x2, x3 ∈ R}, which is the 2-dimensional subspace of R3 spanned by
(0, 1, 0)T and (0, 0, 1)T.
The range of L, denoted range(L), consists of all vectors in R3 that are hit by L, that is, all
vectors v in the codomain of L for which there exists a corresponding vector u in the domain of
L such that L(u) = v. Our strategy for finding range(L) is to describe L(x) in terms of x1, x2, x3.
In this case, L(x) = (x1, x1, x1)T = x1 (1, 1, 1)T. This shows that range(L) = {x1 (1, 1, 1)T : x1 ∈ R},
so a basis for range(L) is {(1, 1, 1)T}. Hence dim(range(L)) = 1, making range(L) a 1-dimensional
subspace of R3.
Finally, we note that dim(range(L)) + dim(ker(L)) = 1 + 2 = 3 = dim(R3 ), as expected.
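The kernel and range found above can be spot-checked in a few lines; this is a quick numerical sketch, not part of the assigned solution.

```python
# L(x) = (x1, x1, x1)^T on R^3, as defined in the problem.
def L(x):
    x1, _, _ = x
    return (x1, x1, x1)

# Vectors with first coordinate 0 lie in ker(L)...
assert L((0, 4, -7)) == (0, 0, 0)

# ...while every image is a multiple of (1, 1, 1)^T.
x = (3, 4, -7)
assert L(x) == (3, 3, 3)  # 3 * (1, 1, 1)^T

# dim(ker) + dim(range) = 2 + 1 = 3 = dim(R^3), as noted in the text.
print("checks passed")
```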
We must solve this linear system for the unknowns c1 , c2 . Make sure to write the system in standard
form before proceeding:
2c1 + c2 = 0
−c2 = 2
The system is already in echelon form. Back substitution yields c2 = −2 and then c1 = 1. Thus
[2x]T = (c1, c2)T = (1, −2)T.
Next, (2) and (9) tell us that we must express 1 as a linear combination of the polynomials in
the basis T. Clearly, 1 = (1/2) · 2 + 0 · (1 − x), so that [1]T = (1/2, 0)T.
Thus
A = [ [2x]T  [1]T  [1]T ] = [  1  1/2  1/2 ].
                            [ −2   0    0 ]
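The coordinate vectors can be double-checked by expanding the claimed combinations of the basis polynomials. Below, a polynomial a + bx is represented by the coefficient pair (a, b), and the basis T = [2, 1 − x] is taken from the computation above.

```python
# Basis polynomials as (constant term, coefficient of x).
TWO = (2.0, 0.0)          # the polynomial 2
ONE_MINUS_X = (1.0, -1.0)  # the polynomial 1 - x

def combine(c1, c2):
    """Return c1*2 + c2*(1 - x) as a coefficient pair."""
    return (c1 * TWO[0] + c2 * ONE_MINUS_X[0],
            c1 * TWO[1] + c2 * ONE_MINUS_X[1])

# [2x]_T = (1, -2):  1*2 + (-2)*(1 - x) = 2x
assert combine(1, -2) == (0.0, 2.0)
# [1]_T = (1/2, 0):  (1/2)*2 + 0*(1 - x) = 1
assert combine(0.5, 0) == (1.0, 0.0)
print("coordinates verified")
```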
§4.2, p.198, #18a Let E = [u1 , u2 , u3 ] and F = [b1 , b2 ] be bases for R3 and R2 , respectively,
where u1 = (1, 0, −1)T , u2 = (1, 2, 1)T , u3 = (−1, 1, 1)T , b1 = (1, −1)T , and b2 = (2, −1)T . Let
L : R3 → R2 be given by L(x) = (x3 , x1 )T . Find the matrix representation of L with respect to the
bases E and F .
Solution: Let A denote the matrix representation of L relative to the specified bases E and F .
The columns of A correspond to the coordinate vectors representing L(u1 ), L(u2 ), and L(u3 ) in the
basis F . We begin by using the definition of L to calculate these images. By definition,
L((x1, x2, x3)T) = (x3, x1)T.  (6)
Note that coordinates in (6) are with respect to the standard bases of R3 and R2 . Using (6) we get
L(u1) = L((1, 0, −1)T) = (−1, 1)T  (7)
L(u2) = L((1, 2, 1)T) = (1, 1)T  (8)
and L(u3) = L((−1, 1, 1)T) = (1, −1)T.  (9)
We must rewrite each of L(u1 ), L(u2 ), and L(u3 ) as linear combinations of the vectors in the basis
F = [b1 , b2 ]. We begin with L(u1 ). We want to find scalars c1 , c2 such that
c1 b1 + c2 b2 = L(u1 ) (10)
=⇒ c1 (1, −1)T + c2 (2, −1)T = (−1, 1)T  (11)

=⇒ [  1   2 ] [ c1 ]  =  [ −1 ].  (12)
   [ −1  −1 ] [ c2 ]     [  1 ]
If you use the column-wise view of matrix multiplication, you will be able to see by inspection that
c1 = −1 and c2 = 0 is a solution to this system. Moreover, we can argue that this solution must be
unique, because the coefficient matrix is invertible, since its columns come from the basis F , so the
columns are linearly independent. (Observe that this argument justifying the invertibility of the
coefficient matrix can be applied just as easily even if we were dealing with a higher-dimensional
co-domain!) If you didn’t see the solution by inspection, set up the augmented matrix as usual, reduce to echelon
form, then do back substitution:
[  1   2 | −1 ]
[ −1  −1 |  1 ]   R2 ← R2 + R1

[  1   2 | −1 ]
[  0   1 |  0 ]
From R2 we see that c2 = 0, and then substituting in R1 yields c1 = −1. Thus the first column of
A has been found:
[L(u1)]F = (c1, c2)T = (−1, 0)T.  (13)
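The elimination and back substitution above are mechanical enough to script. This is a minimal sketch for this particular 2×2 system, with coefficient matrix [b1 b2] and right-hand side L(u1):

```python
# Augmented matrix rows for  [ 1  2 | -1 ], [ -1 -1 | 1 ].
r1 = [1.0, 2.0, -1.0]
r2 = [-1.0, -1.0, 1.0]

# R2 <- R2 + R1 puts the system in echelon form: [0, 1, 0].
r2 = [a + b for a, b in zip(r2, r1)]

# Back substitution.
c2 = r2[2] / r2[1]                  # c2 = 0
c1 = (r1[2] - r1[1] * c2) / r1[0]   # c1 = -1

print(c1, c2)  # -1.0 0.0
```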
Similarly, we find [L(u2 )]F and [L(u3 )]F by setting up and solving appropriate linear systems.
Observe that the coefficient matrix will be the same, only the RHS will change.
For [L(u2)]F :                       For [L(u3)]F :
[  1   2 |  1 ]                      [  1   2 |  1 ]
[ −1  −1 |  1 ]   R2 ← R2 + R1       [ −1  −1 | −1 ]   R2 ← R2 + R1

[  1   2 |  1 ]                      [  1   2 |  1 ]
[  0   1 |  2 ]                      [  0   1 |  0 ]
Back substitution yields
[L(u2)]F = (−3, 2)T,   [L(u3)]F = (1, 0)T.
Thus
A = [ [L(u1)]F  [L(u2)]F  [L(u3)]F ] = [ −1  −3  1 ]
                                       [  0   2  0 ]
is the matrix encoding of L relative to the basis E in the domain, and F in the co-domain.
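As a check on the final answer, each column of A should reproduce the corresponding image when expanded against the basis F. The sketch below verifies this for the concrete vectors of the problem:

```python
# Basis F for R^2 and the basis vectors u_i of E, from the problem statement.
b1, b2 = (1, -1), (2, -1)
u = [(1, 0, -1), (1, 2, 1), (-1, 1, 1)]
A = [[-1, -3, 1],
     [0, 2, 0]]  # columns are [L(u_i)]_F

def L(x):
    """L(x) = (x3, x1)^T."""
    return (x[2], x[0])

# Column i of A gives coordinates c1, c2 with c1*b1 + c2*b2 = L(u_i).
for i, ui in enumerate(u):
    c1, c2 = A[0][i], A[1][i]
    expanded = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
    assert expanded == L(ui)

print("all three columns verified")
```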
Chapter 4, Test A, p.207 Answer true if the statement is always true, and false otherwise. In the
case of a true statement, explain or prove your answer. In the case of a false statement, give an
example to show that the statement is not always true.
#1 Let L : Rn → Rn be a linear transformation. If L(x1 ) = L(x2 ), then x1 = x2 .
Solution: We begin by using what we are given, that L is a linear transformation and that
L(x1) = L(x2). Then 0 = L(x1) − L(x2) = L(x1 − x2), where the last equality holds because L is linear.
Thus L kills the vector x1 − x2.
if ker(L) = {0}), then we can conclude that x1 − x2 = 0 and hence x1 = x2 . However, if ker(L)
is nontrivial, then there will be a nonzero vector u mapped to zero by L. In this case, we can set
x1 − x2 = u ≠ 0, and then it will follow that x1 ≠ x2. This suggests a counterexample.
Let L : R2 → R2 be the projection onto the x1-axis, that is, L((x1, x2)T) = (x1, 0)T. Then
L((0, 1)T) = (0, 0)T, and L((0, 2)T) = (0, 0)T,
but (0, 1)T ≠ (0, 2)T. Thus we conclude that the statement is false in general.
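The counterexample is easy to confirm directly with a quick sketch of the projection map:

```python
# Projection of R^2 onto the x1-axis.
def L(x):
    x1, x2 = x
    return (x1, 0)

a, b = (0, 1), (0, 2)
assert L(a) == L(b) == (0, 0)  # equal images...
assert a != b                  # ...from distinct inputs, so L is not injective
print("counterexample confirmed")
```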
#2 If L1 and L2 are linear operators on a vector space V , then L1 + L2 is also a linear operator
on V , where L1 + L2 is defined as
(L1 + L2)(x) = L1(x) + L2(x) for all x ∈ V.  (14)
Solution: We must investigate whether L1 + L2 respects vector addition and scalar multiplication.
Read (14) with care —the plus sign on the LHS is between maps while the plus sign on the RHS is
between vectors.
1. We want to know if L1 + L2 respects scalar multiplication, that is
(L1 + L2)(αx) = α(L1 + L2)(x), for all x ∈ V and for all α ∈ R.  (16)
So start with the LHS. Let α be any scalar, and x any vector in V .
(L1 + L2)(αx) = L1(αx) + L2(αx)     by definition (14) of L1 + L2,
             = αL1(x) + αL2(x)      since L1 and L2 are both linear,
             = α(L1(x) + L2(x))     since · distributes over + in any vector space,
             = α(L1 + L2)(x)        by definition (14) of L1 + L2.
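The pointwise definition and the chain of equalities above can be exercised on concrete linear maps; the two operators chosen below are illustrative assumptions, not part of the problem.

```python
# Two linear operators on R^2, chosen for illustration:
# L1 doubles a vector, L2 swaps its coordinates.
def L1(x):
    return (2 * x[0], 2 * x[1])

def L2(x):
    return (x[1], x[0])

def L_sum(x):
    """(L1 + L2)(x) = L1(x) + L2(x), the pointwise definition (14)."""
    a, b = L1(x), L2(x)
    return (a[0] + b[0], a[1] + b[1])

alpha, x = 3, (5, -2)
lhs = L_sum((alpha * x[0], alpha * x[1]))         # (L1 + L2)(αx)
rhs = (alpha * L_sum(x)[0], alpha * L_sum(x)[1])  # α(L1 + L2)(x)
assert lhs == rhs
print(lhs)  # (24, 3)
```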
#4 If L1 rotates each vector x in R2 by 60 degrees and then reflects the resulting vector about
the x-axis, and L2 is a transformation that does the same two operations, but in the reverse order,
then L1 = L2 .
Solution: Since the direction of the rotation has not been specified, we choose a direction, say
counterclockwise. Let R denote the linear transformation that rotates each vector in R2 by 60
degrees counterclockwise, and S denote the linear transformation that reflects each vector in R2
about the x-axis. Then L1 := S ◦ R, while L2 := R ◦ S. Thus L1 = L2 if and only if S ◦ R = R ◦ S.
Let’s find the matrix representations of R and S relative to the standard basis for R2 .
[R] = [ cos 60  −sin 60 ],    [S] = [ 1   0 ].  (17)
      [ sin 60   cos 60 ]           [ 0  −1 ]
Now S ◦ R = R ◦ S if and only if the matrices that represent them are equal, that is, if and only if
the matrix products [S][R] and [R][S] are equal. Now,

[S][R] = [ 1   0 ] [ cos 60  −sin 60 ] = [  cos 60  −sin 60 ],
         [ 0  −1 ] [ sin 60   cos 60 ]   [ −sin 60  −cos 60 ]

while

[R][S] = [ cos 60  −sin 60 ] [ 1   0 ] = [ cos 60   sin 60 ].
         [ sin 60   cos 60 ] [ 0  −1 ]   [ sin 60  −cos 60 ]

Since sin 60 = √3/2 ≠ 0, the off-diagonal entries disagree, so [S][R] ≠ [R][S]. Hence L1 ≠ L2, and
the statement is false.
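The non-commutativity is easy to confirm numerically; here is a quick sketch using 2×2 matrices as nested lists:

```python
import math

# Rotation by 60 degrees counterclockwise and reflection about the x-axis.
t = math.radians(60)
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
S = [[1, 0],
     [0, -1]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

SR = matmul(S, R)   # reflect after rotating
RS = matmul(R, S)   # rotate after reflecting

# The (0, 1) entries differ in sign: -sin 60 versus +sin 60.
print(SR != RS)  # True
```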
Thus [L1 (v)]E = [L2 (v)]E for all v ∈ Rn . This means that for every v, L1 (v) and L2 (v) are expressed
by the same linear combination of the basis vectors x1 , x2 , . . . , xn . Thus L1 (v) = L2 (v) for every
v ∈ Rn . Consequently, L1 = L2 and the statement is true.