
Paul Skoufranis

August 10, 2012

Contents

1 Chapter One
2 Chapter Two . . . . . . 14
3 Chapter Four . . . . . 45
4 Chapter Five . . . . . 46
5 Chapter Six . . . . . . 63

Chapter One

1.2 Question 13) Let V denote the set of ordered pairs of real numbers. If (a1, a2) and (b1, b2) are elements of V and c ∈ R, define

(a1, a2) + (b1, b2) = (a1 + b1, a2 b2)

and

c · (a1, a2) = (ca1, a2).

Is V a vector space over R with these operations? Justify your answer.

Solution: The set V with the above operations is not a vector space. We will provide two ways to see this:

1. Property (VS 8) in the definition of a vector space fails for V with these operations. To see this, we notice that if ~v = (1, 2), then

(1 + 1) · ~v = 2 · ~v = (2, 2)

yet

1 · ~v + 1 · ~v = 1 · (1, 2) + 1 · (1, 2) = (1, 2) + (1, 2) = (2, 4)

so (1 + 1) · ~v ≠ 1 · ~v + 1 · ~v. Hence V is not a vector space.

2. Suppose V is a vector space. Then the cancellation law for vector addition must hold in V. However, we notice that

(1, 0) + (-1, 0) = (1 - 1, 0(0)) = (0, 0) = (1 - 1, 0(1)) = (1, 0) + (-1, 1).

Therefore (1, 0) + (-1, 0) = (1, 0) + (-1, 1) yet (-1, 0) ≠ (-1, 1), which contradicts the cancellation law. Therefore V is not a vector space.

Thus we have demonstrated that V is not a vector space in two different ways.
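Both failures above can be spot-checked numerically. A minimal sketch (the helper names `add` and `smul` are ours, not the text's):

```python
# Nonstandard operations on R^2 from Question 13:
# addition multiplies the second coordinates; scalars act only on the first.
def add(x, y):
    return (x[0] + y[0], x[1] * y[1])

def smul(c, x):
    return (c * x[0], x[1])

v = (1, 2)

# (VS 8) fails: (1 + 1) * v != 1 * v + 1 * v
lhs = smul(1 + 1, v)               # (2, 2)
rhs = add(smul(1, v), smul(1, v))  # (2, 4)
print(lhs, rhs, lhs == rhs)

# Cancellation fails: (1,0) + (-1,0) == (1,0) + (-1,1) even though (-1,0) != (-1,1)
print(add((1, 0), (-1, 0)), add((1, 0), (-1, 1)))
```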

1.2 Question 17) Let V = {(a1, a2) | a1, a2 ∈ F}, where F is a field. Define addition of elements of V coordinatewise, and for c ∈ F and (a1, a2) ∈ V , define

c · (a1, a2) = (ca1, 0).

Is V a vector space over F with these operations? Justify your answer.

Solution: The set V with the above operations is not a vector space. The easiest way to see this is to note (using the known properties of fields) that 1 · (1, 1) = (1, 0). Since 1 ≠ 0 in any field, (1, 1) ≠ (1, 0), so 1 · (1, 1) ≠ (1, 1). Hence property (VS 5) in the definition of a vector space fails for V with these operations.
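The (VS 5) failure can likewise be checked directly; `smul17` is an illustrative name:

```python
# Question 17's scalar action on F^2 zeroes out the second coordinate.
def smul17(c, x):
    return (c * x[0], 0)

# (VS 5) requires 1 * v == v; it fails at v = (1, 1).
v = (1, 1)
print(smul17(1, v), v, smul17(1, v) == v)
```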

1.2 Question 21) Let V and W be vector spaces over a field F with respect to the operations +V , +W , ·V , and ·W . Let

Z = {(~v, ~w) | ~v ∈ V, ~w ∈ W }.

Prove that Z is a vector space with the operations

(~v1, ~w1) + (~v2, ~w2) = (~v1 +V ~v2, ~w1 +W ~w2)

and

c · (~v, ~w) = (c ·V ~v, c ·W ~w)

for all ~v, ~v1, ~v2 ∈ V , ~w, ~w1, ~w2 ∈ W , and c ∈ F.

Proof: To show that Z is a vector space over F with the above operations, we will show that Z satisfies the definition of a vector space. It is clear that the operations + and · are well-defined (that is, if ~v, ~v1, ~v2 ∈ V , ~w, ~w1, ~w2 ∈ W , and c ∈ F, then (~v1, ~w1) + (~v2, ~w2) ∈ Z and c · (~v, ~w) ∈ Z). Therefore, it suffices to check the eight vector space properties:

1. Let ~z1 = (~v1, ~w1) ∈ Z and ~z2 = (~v2, ~w2) ∈ Z be arbitrary. Then

~z1 + ~z2 = (~v1 +V ~v2, ~w1 +W ~w2)    by definition
         = (~v2 +V ~v1, ~w2 +W ~w1)    since V and W are vector spaces and thus have property (VS 1)
         = ~z2 + ~z1    by definition

2. Let ~z1 = (~v1, ~w1) ∈ Z, ~z2 = (~v2, ~w2) ∈ Z, and ~z3 = (~v3, ~w3) ∈ Z be arbitrary. Then

(~z1 + ~z2) + ~z3 = (~v1 +V ~v2, ~w1 +W ~w2) + (~v3, ~w3)    by definition
 = ((~v1 +V ~v2) +V ~v3, (~w1 +W ~w2) +W ~w3)    by definition
 = (~v1 +V (~v2 +V ~v3), ~w1 +W (~w2 +W ~w3))    since V and W are vector spaces and thus have property (VS 2)
 = (~v1, ~w1) + (~v2 +V ~v3, ~w2 +W ~w3)    by definition
 = ~z1 + (~z2 + ~z3)    by definition

3. Let ~0 = (~0V , ~0W ) ∈ Z where ~0V is the zero vector in V and ~0W is the zero vector in W . Since V and W are vector spaces, ~v +V ~0V = ~v and ~w +W ~0W = ~w for all vectors ~v ∈ V and ~w ∈ W . Thus for all ~z = (~v, ~w) ∈ Z,

~z + ~0 = (~v +V ~0V , ~w +W ~0W ) = (~v, ~w) = ~z.

Therefore ~0 is indeed a zero vector for Z.

4. Let ~z = (~v, ~w) ∈ Z be arbitrary. Since V and W are vector spaces, ~v has an additive inverse, denoted -~v, in V , and ~w has an additive inverse, denoted -~w, in W . Let ~z2 = (-~v, -~w). Then it is clear that ~z2 ∈ Z and

~z + ~z2 = (~v +V (-~v), ~w +W (-~w)) = (~0V , ~0W ) = ~0

as desired. Hence ~z2 is an additive inverse of ~z. Since ~z = (~v, ~w) ∈ Z was arbitrary, every element of Z has an additive inverse.

5. Let ~z = (~v, ~w) ∈ Z be arbitrary. Then

1 · ~z = (1 ·V ~v, 1 ·W ~w)    by definition
      = (~v, ~w) = ~z    since V and W are vector spaces and thus have property (VS 5)

6. Let a, b ∈ F and ~z = (~v, ~w) ∈ Z be arbitrary. Then

a · (b · ~z) = a · (b ·V ~v, b ·W ~w)    by definition
 = (a ·V (b ·V ~v), a ·W (b ·W ~w))    by definition
 = (ab ·V ~v, ab ·W ~w)    since V and W are vector spaces and thus have property (VS 6)
 = ab · ~z    by definition

Therefore, since a, b ∈ F and ~z = (~v, ~w) ∈ Z were arbitrary, Z has property (VS 6).

7. Let a ∈ F, ~z1 = (~v1, ~w1) ∈ Z, and ~z2 = (~v2, ~w2) ∈ Z be arbitrary. Then

a · (~z1 + ~z2) = a · (~v1 +V ~v2, ~w1 +W ~w2)    by definition
 = (a ·V (~v1 +V ~v2), a ·W (~w1 +W ~w2))    by definition
 = ((a ·V ~v1) +V (a ·V ~v2), (a ·W ~w1) +W (a ·W ~w2))    since V and W are vector spaces and thus have property (VS 7)
 = (a ·V ~v1, a ·W ~w1) + (a ·V ~v2, a ·W ~w2)    by definition
 = a · ~z1 + a · ~z2    by definition

Therefore, since a ∈ F and ~z1, ~z2 ∈ Z were arbitrary, Z has property (VS 7).

8. Let a, b ∈ F and ~z = (~v, ~w) ∈ Z be arbitrary. Then

(a + b) · ~z = ((a + b) ·V ~v, (a + b) ·W ~w)    by definition
 = ((a ·V ~v) +V (b ·V ~v), (a ·W ~w) +W (b ·W ~w))    since V and W are vector spaces and thus have property (VS 8)
 = (a ·V ~v, a ·W ~w) + (b ·V ~v, b ·W ~w)    by definition
 = a · ~z + b · ~z    by definition

Therefore, since a, b ∈ F and ~z = (~v, ~w) ∈ Z were arbitrary, Z has property (VS 8).

Therefore, by the definition of a vector space, Z is a vector space over F with the operations given.
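The proof above is the real content, but the componentwise operations on Z are easy to model and spot-check on sample vectors. A sketch with V = W = R^2 modeled as tuples (all helper names are ours):

```python
# Componentwise operations on Z = V x W, here with V = W = R^2.
# A spot-check of a few axioms on sample vectors, not a proof.
def vadd(x, y):
    return tuple(a + b for a, b in zip(x, y))

def vsmul(c, x):
    return tuple(c * a for a in x)

def zadd(z1, z2):
    return (vadd(z1[0], z2[0]), vadd(z1[1], z2[1]))

def zsmul(c, z):
    return (vsmul(c, z[0]), vsmul(c, z[1]))

z1 = ((1, 2), (3, 4))
z2 = ((5, 6), (7, 8))
zero = ((0, 0), (0, 0))

print(zadd(z1, z2) == zadd(z2, z1))                                # (VS 1)
print(zadd(z1, zero) == z1)                                        # (VS 3)
print(zsmul(2, zsmul(3, z1)) == zsmul(6, z1))                      # (VS 6)
print(zsmul(5, zadd(z1, z2)) == zadd(zsmul(5, z1), zsmul(5, z2)))  # (VS 7)
```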

1.3 Question 8) Determine whether the following sets are subspaces of R^3 under the operations of addition and scalar multiplication defined on R^3. Justify your answers.

(a) W1 = {(a1, a2, a3) ∈ R^3 | a1 = 3a2, a3 = -a2}

W1 is a subspace. To see this, we need only verify the three properties of being a subspace.

(a) Since ~0 = (0, 0, 0), 0 = 3(0), and 0 = -0, ~0 ∈ W1.

(b) Suppose (a1, a2, a3) ∈ W1 and (b1, b2, b3) ∈ W1 are arbitrary. Then a1 = 3a2, a3 = -a2, b1 = 3b2, and b3 = -b2 by the definition of W1. Therefore (a1 + b1) = 3(a2 + b2) and (a3 + b3) = -(a2 + b2) so (a1, a2, a3) + (b1, b2, b3) = (a1 + b1, a2 + b2, a3 + b3) ∈ W1 by the definition of W1. Since (a1, a2, a3) ∈ W1 and (b1, b2, b3) ∈ W1 were arbitrary, W1 is closed under addition.

(c) Suppose (a1, a2, a3) ∈ W1 and b ∈ R are arbitrary. Then a1 = 3a2 and a3 = -a2 by the definition of W1. Therefore ba1 = 3(ba2) and ba3 = -(ba2) so b · (a1, a2, a3) = (ba1, ba2, ba3) ∈ W1 by the definition of W1. Since (a1, a2, a3) ∈ W1 and b ∈ R were arbitrary, W1 is closed under scalar multiplication.

Hence W1 is a subspace of R^3.

(b) W2 = {(a1, a2, a3) ∈ R^3 | a1 = a3 + 2}

W2 is not a subspace of R^3. To see this, we notice that since 0 ≠ 0 + 2, ~0 = (0, 0, 0) ∉ W2. Hence W2 cannot be a subspace of R^3.

(c) W3 = {(a1, a2, a3) ∈ R^3 | 2a1 - 7a2 + a3 = 0}

W3 is a subspace. To see this, we need only verify the three properties of being a subspace.

(a) Since ~0 = (0, 0, 0) and 2(0) - 7(0) + 0 = 0, ~0 ∈ W3.

(b) Suppose (a1, a2, a3) ∈ W3 and (b1, b2, b3) ∈ W3 are arbitrary. Then 2a1 - 7a2 + a3 = 0 and 2b1 - 7b2 + b3 = 0 by the definition of W3. Therefore 2(a1 + b1) - 7(a2 + b2) + (a3 + b3) = 0 so (a1, a2, a3) + (b1, b2, b3) = (a1 + b1, a2 + b2, a3 + b3) ∈ W3 by the definition of W3. Since (a1, a2, a3) ∈ W3 and (b1, b2, b3) ∈ W3 were arbitrary, W3 is closed under addition.

(c) Suppose (a1, a2, a3) ∈ W3 and b ∈ R are arbitrary. Then 2a1 - 7a2 + a3 = 0 by the definition of W3. Therefore 2(ba1) - 7(ba2) + (ba3) = 0 so b · (a1, a2, a3) = (ba1, ba2, ba3) ∈ W3 by the definition of W3. Since (a1, a2, a3) ∈ W3 and b ∈ R were arbitrary, W3 is closed under scalar multiplication.

Hence W3 is a subspace of R^3.

(d) W4 = {(a1, a2, a3) ∈ R^3 | a1 - 4a2 - a3 = 0}

W4 is a subspace. To see this, we need only verify the three properties of being a subspace.

(a) Since ~0 = (0, 0, 0) and (0) - 4(0) - (0) = 0, ~0 ∈ W4.

(b) Suppose (a1, a2, a3) ∈ W4 and (b1, b2, b3) ∈ W4 are arbitrary. Then a1 - 4a2 - a3 = 0 and b1 - 4b2 - b3 = 0 by the definition of W4. Therefore (a1 + b1) - 4(a2 + b2) - (a3 + b3) = 0 so (a1, a2, a3) + (b1, b2, b3) = (a1 + b1, a2 + b2, a3 + b3) ∈ W4 by the definition of W4. Since (a1, a2, a3) ∈ W4 and (b1, b2, b3) ∈ W4 were arbitrary, W4 is closed under addition.

(c) Suppose (a1, a2, a3) ∈ W4 and b ∈ R are arbitrary. Then a1 - 4a2 - a3 = 0 by the definition of W4. Therefore (ba1) - 4(ba2) - (ba3) = 0 so b · (a1, a2, a3) = (ba1, ba2, ba3) ∈ W4 by the definition of W4. Since (a1, a2, a3) ∈ W4 and b ∈ R were arbitrary, W4 is closed under scalar multiplication.

Hence W4 is a subspace of R^3.

(e) W5 = {(a1, a2, a3) ∈ R^3 | a1 + 2a2 - 3a3 = 1}

W5 is not a subspace of R^3. To see this, we notice that since 0 + 2(0) - 3(0) ≠ 1, ~0 = (0, 0, 0) ∉ W5. Hence W5 cannot be a subspace of R^3.

(f) W6 = {(a1, a2, a3) ∈ R^3 | 5a1² - 3a2² + 6a3² = 0}

W6 is not a subspace of R^3. To see this, we notice that (0, √2, 1) ∈ W6 and (0, -√2, 1) ∈ W6 as 5(0)² - 3(±√2)² + 6(1)² = 0 - 6 + 6 = 0. However (0, √2, 1) + (0, -√2, 1) = (0, 0, 2) and 5(0)² - 3(0)² + 6(2)² ≠ 0, so (0, 0, 2) ∉ W6. Hence W6 is not closed under addition so W6 is not a subspace of R^3.
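Since the counterexample involves √2, a numerical membership test with a small tolerance illustrates the closure failure; `in_w6` is an illustrative name:

```python
import math

# Membership test for W6 = {(a1, a2, a3) : 5a1^2 - 3a2^2 + 6a3^2 = 0},
# with a tolerance to absorb floating-point error from sqrt(2).
def in_w6(a1, a2, a3, tol=1e-9):
    return abs(5 * a1**2 - 3 * a2**2 + 6 * a3**2) < tol

u = (0.0, math.sqrt(2), 1.0)
v = (0.0, -math.sqrt(2), 1.0)
s = tuple(a + b for a, b in zip(u, v))  # the sum (0, 0, 2)

print(in_w6(*u), in_w6(*v), in_w6(*s))  # True True False
```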

1.3 Question 11) Is the set W = {f(x) ∈ P(F) | f(x) = 0 or f(x) has degree n} a subspace of P(F) if n ≥ 1? Justify your answer.

Solution: The set W is never a subspace of P(F). To see this, let f(x) = x^n + 1 and g(x) = (-1)x^n (where 1 is the unit of F and -1 is the additive inverse of 1 in F). Then clearly f, g ∈ W as f and g are polynomials of degree n with coefficients in F. However (f + g)(x) = 1, so f + g is the constant function 1, which has degree 0 < n and thus is not an element of W. Hence W is not closed under addition so W is not a subspace of P(F).

1.3 Question 14) Let S be a non-empty set and let F be a field. Let C(S, F) denote the set of all functions f ∈ F(S, F) such that f(s) = 0 for all but a finite number of elements s ∈ S. Prove that C(S, F) is a subspace of F(S, F).

Solution: To see that C(S, F) is a subspace of F(S, F), we need only verify the three properties of being a subspace.

1. Since the zero vector ~0 of F(S, F) is zero on every element of S, ~0 ∈ C(S, F) by the definition of C(S, F).

2. Let f, g ∈ C(S, F) be arbitrary. Since f, g ∈ C(S, F), by the definition of C(S, F) there exist finite sets S1, S2 ⊆ S such that f(s) = 0 for all s ∈ S \ S1 and g(s) = 0 for all s ∈ S \ S2. Let S3 = S1 ∪ S2. Hence S3 is a finite subset of S. Moreover, if s ∈ S \ S3, then s ∉ S1 and s ∉ S2 so (f + g)(s) = f(s) + g(s) = 0 + 0 = 0. Hence f + g is zero except on the finite subset S3 of S. Hence f + g ∈ C(S, F) by the definition of C(S, F).

3. Let f ∈ C(S, F) and a ∈ F be arbitrary. Since f ∈ C(S, F), by the definition of C(S, F) there exists a finite set S1 ⊆ S such that f(s) = 0 for all s ∈ S \ S1. Therefore (af)(s) = a(f(s)) = a(0) = 0 for all s ∈ S \ S1. Hence af is zero except on the finite subset S1 of S. Hence af ∈ C(S, F) by the definition of C(S, F).

Hence C(S, F) is a subspace of F(S, F).

1.3 Question 18) Prove that a subset W of a vector space V is a subspace of V if and only if ~0 ∈ W and a~x + ~y ∈ W whenever a ∈ F and ~x, ~y ∈ W.

Solution: We proceed with each direction separately. Suppose that W is a subspace of V. Therefore ~0 ∈ W by condition (a) of Theorem 1.3 of the text. Next let a ∈ F and ~x, ~y ∈ W be arbitrary. Then a~x ∈ W by condition (c) of Theorem 1.3 of the text. Since a~x, ~y ∈ W, condition (b) of Theorem 1.3 of the text implies that a~x + ~y ∈ W. Thus the first direction has been proven.

For the other direction, suppose W is a subset of a vector space V that has the properties that ~0 ∈ W and a~x + ~y ∈ W whenever a ∈ F and ~x, ~y ∈ W. To verify that W is a subspace of V, we must verify the three properties of being a subspace.

1. The zero vector ~0 ∈ W by assumption.

2. Let ~x, ~y ∈ W be arbitrary. Therefore 1~x + ~y ∈ W by the assumptions on W (by letting a = 1). Since 1~x = ~x by the vector space axiom (VS 5) on V, ~x + ~y ∈ W.

3. Let ~x ∈ W and let a ∈ F be arbitrary. Therefore, since ~0 ∈ W by our assumptions on W, a~x + ~0 ∈ W by the assumptions on W (by letting ~y = ~0). Since a~x + ~0 = a~x by the vector space axiom (VS 3) on V, a~x ∈ W.

Hence W is a subspace of V.

1.5 Question 9) Let ~u and ~v be distinct vectors in a vector space V. Show that {~v, ~u} is linearly dependent if and only if ~u or ~v is a multiple of the other.

Solution: First suppose {~v, ~u} is linearly dependent. Then there exist scalars a, b ∈ F not both zero such that a~v + b~u = ~0. We split the proof into two cases:

Case 1: a ≠ 0. Since a~v + b~u = ~0 implies a~v = -b~u and since a ≠ 0 (and thus is invertible), ~v = (-a⁻¹b)~u. Hence ~v is a multiple of ~u.

Case 2: b ≠ 0. Since a~v + b~u = ~0 implies b~u = -a~v and since b ≠ 0 (and thus is invertible), ~u = (-b⁻¹a)~v. Hence ~u is a multiple of ~v.

Therefore, since the two cases cover all possible cases (as a and b are not both zero), ~u or ~v is a multiple of the other.

For the other direction, suppose ~u or ~v is a multiple of the other. Again we split the proof into two cases:

Case 1: ~u is a multiple of ~v. Then there exists a scalar a ∈ F such that ~u = a~v. Hence 1~u + (-a)~v = ~0. As 1~u + (-a)~v = ~0 is a non-trivial linear combination of ~u and ~v that gives the zero vector, {~u, ~v} is linearly dependent.

Case 2: ~v is a multiple of ~u. Then there exists a scalar a ∈ F such that ~v = a~u. Hence 1~v + (-a)~u = ~0. As 1~v + (-a)~u = ~0 is a non-trivial linear combination of ~v and ~u that gives the zero vector, {~v, ~u} is linearly dependent.

Therefore, since the two cases cover all possible cases, {~v, ~u} is linearly dependent.

1.5 Question 13a) Let V be a vector space over a field F of characteristic not equal to two. Let ~u and ~v be distinct vectors in V. Prove that {~u, ~v} is linearly independent if and only if {~u + ~v, ~u - ~v} is linearly independent.

Solution: First suppose {~u, ~v} is linearly independent. Suppose a, b ∈ F are such that

a(~u + ~v) + b(~u - ~v) = ~0.

Therefore

(a + b)~u + (a - b)~v = ~0.

Since {~u, ~v} is linearly independent, a + b = 0 and a - b = 0. By adding and subtracting these equations, we obtain that 2a = 0 and 2b = 0. As F does not have characteristic two (that is, 2 = 1 + 1 is invertible), a = 0 and b = 0. Hence {~u + ~v, ~u - ~v} is linearly independent.

Now suppose {~u + ~v, ~u - ~v} is linearly independent. Suppose a, b ∈ F are such that

a~u + b~v = ~0.

Since F does not have characteristic two, 2 is invertible and thus

(2⁻¹a + 2⁻¹b)(~u + ~v) + (2⁻¹a - 2⁻¹b)(~u - ~v) = a~u + b~v = ~0

(as 2⁻¹ + 2⁻¹ = 2(2⁻¹) = 1). Since {~u + ~v, ~u - ~v} is linearly independent, 2⁻¹a + 2⁻¹b = 0 and 2⁻¹a - 2⁻¹b = 0. By adding and subtracting these equations, we obtain that a = 0 and b = 0. Hence {~u, ~v} is linearly independent.
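A quick computation over F_2 (the excluded case) shows why the characteristic-two hypothesis matters: there ~u + ~v and ~u - ~v coincide, so {~u + ~v, ~u - ~v} collapses to a single vector. A sketch, modeling F_2^2 with arithmetic mod 2 (`add2` is our name):

```python
# In F_2 we have -1 = 1, so subtraction and addition agree.
def add2(x, y):
    return tuple((a + b) % 2 for a, b in zip(x, y))

u, v = (1, 0), (0, 1)          # linearly independent in F_2^2
s = add2(u, v)                 # u + v
d = add2(u, v)                 # u - v is the same computation mod 2
print(s, d, s == d)            # the "two" vectors u+v and u-v coincide
```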

1.5 Question 15) Let S = {~v1, ~v2, . . . , ~vn} be a finite set of vectors in a vector space V. Suppose S is linearly dependent. Prove that either ~v1 = ~0 or ~vk+1 ∈ span{~v1, ~v2, . . . , ~vk} for some k (where 1 ≤ k < n).

Solution: We will present two solutions:

Solution One: Since S is linearly dependent, there exist a1, . . . , an ∈ F not all zero such that ~0 = a1~v1 + · · · + an~vn. Since not all of the aj's are zero, there exists an index k such that ak ≠ 0 and aj = 0 for all j > k. We split the proof into two cases:

Case 1: k = 1. Then aj = 0 for all j ≥ 2 and a1 ≠ 0. Therefore

~0 = a1~v1 + · · · + an~vn = a1~v1 + 0~v2 + · · · + 0~vn = a1~v1.

Since a1 ≠ 0, a1~v1 = ~0 implies that ~v1 = ~0.

Case 2: k > 1. Then aj = 0 for all j > k and ak ≠ 0. Therefore

~0 = a1~v1 + · · · + an~vn = a1~v1 + · · · + ak~vk + 0~vk+1 + · · · + 0~vn = a1~v1 + · · · + ak~vk.

Since ak ≠ 0, the above implies that

~vk = (-a1 ak⁻¹)~v1 + · · · + (-ak-1 ak⁻¹)~vk-1 ∈ span({~v1, . . . , ~vk-1}).

Therefore, if S is linearly dependent, either ~v1 = ~0 or ~vk+1 ∈ span({~v1, . . . , ~vk}) for some k ≥ 1.

Solution Two: For each n ∈ N, let Pn be the mathematical statement that if S = {~v1, . . . , ~vn} is a linearly dependent set, then either ~v1 = ~0 or ~vk+1 ∈ span({~v1, . . . , ~vk}) for some 1 ≤ k ≤ n - 1. We will prove the result by using the Principle of Mathematical Induction.

Base Case: n = 1. If S = {~v1} is linearly dependent, we obtain that ~v1 = ~0 as desired (as a single vector is linearly dependent if and only if it is the zero vector).


Inductive Step: Suppose Pn is true; that is, if S = {~v1, . . . , ~vn} is a linearly dependent set, then either ~v1 = ~0 or ~vk+1 ∈ span({~v1, . . . , ~vk}) for some 1 ≤ k ≤ n - 1. We desire to prove that if S = {~v1, . . . , ~vn+1} is a linearly dependent set, then either ~v1 = ~0 or ~vk+1 ∈ span({~v1, . . . , ~vk}) for some 1 ≤ k ≤ n.

Suppose S = {~v1, . . . , ~vn+1} is a linearly dependent set. Then there exist scalars ai ∈ F not all zero such that

~0 = a1~v1 + · · · + an~vn + an+1~vn+1.

We split the proof into two cases:

Case 1: an+1 = 0. If an+1 = 0, then

~0 = a1~v1 + · · · + an~vn.

Therefore, as not all of the ai's for i = 1, . . . , n are zero (or else all of the aj's would be zero), T = {~v1, . . . , ~vn} is a linearly dependent set. Hence, by the inductive hypothesis, either ~v1 = ~0 or ~vk+1 ∈ span({~v1, . . . , ~vk}) for some 1 ≤ k ≤ n - 1.

Case 2: an+1 ≠ 0. If an+1 ≠ 0, then by rearranging the equation

~0 = a1~v1 + · · · + an~vn + an+1~vn+1

we obtain that

~vn+1 = (-a1 an+1⁻¹)~v1 + · · · + (-an an+1⁻¹)~vn ∈ span({~v1, . . . , ~vn}).

Therefore, as the two cases cover all possible cases, either ~v1 = ~0 or ~vk+1 ∈ span({~v1, . . . , ~vk}) for some 1 ≤ k ≤ n.

Hence, by the Principle of Mathematical Induction, the result follows.

1.6 Question 9) The vectors ~u1 = (1, 1, 1, 1), ~u2 = (0, 1, 1, 1), ~u3 = (0, 0, 1, 1), and ~u4 = (0, 0, 0, 1) form a basis for F^4. Find the unique representation of an arbitrary vector (a1, a2, a3, a4) in F^4 as a linear combination of ~u1, ~u2, ~u3, and ~u4.

Solution: Let (a1, a2, a3, a4) ∈ F^4 be arbitrary. We desire to determine the constants a, b, c, d ∈ F such that

(a1, a2, a3, a4) = a~u1 + b~u2 + c~u3 + d~u4.

By looking at the first entry in the vectors, we see that a = a1. By looking at the second entry in the vectors, we see that a2 = a + b so b = a2 - a1. By continuing this process, we see that

(a1, a2, a3, a4) = a1~u1 + (a2 - a1)~u2 + (a3 - a2)~u3 + (a4 - a3)~u4

which completes the problem.
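The coefficient formula can be verified on a sample vector; `represent` and `combine` are illustrative names:

```python
# The basis u1, ..., u4 of F^4 from Question 9.
u = [(1, 1, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1), (0, 0, 0, 1)]

def represent(a):
    # Coefficients a1, a2 - a1, a3 - a2, a4 - a3 from the solution above.
    return (a[0], a[1] - a[0], a[2] - a[1], a[3] - a[2])

def combine(coeffs):
    # Form the linear combination sum_i coeffs[i] * u[i].
    return tuple(sum(c * ui[j] for c, ui in zip(coeffs, u)) for j in range(4))

a = (3, -1, 4, 2)
print(represent(a), combine(represent(a)) == a)
```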

1.6 Question 11) Let ~u and ~v be distinct vectors of a vector space V. Show that if {~u, ~v} is a basis for V and a and b are non-zero scalars, then both {~u + ~v, a~u} and {a~u, b~v} are also bases for V.

Solution: Since {~u, ~v} is a basis for V, we know that dim(V) = 2. Therefore, to show that a set of two vectors is a basis for V, it suffices to show that either the two vectors span V or that the two vectors are linearly independent. We will demonstrate both ways (one for each set).

To see that {~u + ~v, a~u} is a basis for V, we will show that {~u + ~v, a~u} is linearly independent. To see this, suppose α, β ∈ F are such that

α(~u + ~v) + β(a~u) = ~0.

Therefore

(α + βa)~u + α~v = ~0.

Since {~u, ~v} is a linearly independent subset of V, the above equation implies that α + βa = 0 and α = 0. Therefore α = 0 and βa = 0. Since a ≠ 0, βa = 0 implies β = 0. Hence α = β = 0 so {~u + ~v, a~u} is a linearly independent subset of V. Since dim(V) = 2, we automatically have (by Corollary 2 in Section 1.6) that span({~u + ~v, a~u}) = V and thus {~u + ~v, a~u} is a basis of V.

To see that {a~u, b~v} is a basis for V, we will show that span({a~u, b~v}) = V. To see this, let ~w ∈ V be arbitrary. Then, since {~u, ~v} is a basis for V, there exist scalars α, β ∈ F such that

~w = α~u + β~v.

Therefore, as a, b ≠ 0, a⁻¹ ∈ F, b⁻¹ ∈ F, and

~w = (αa⁻¹)(a~u) + (βb⁻¹)(b~v).

Hence ~w ∈ span({a~u, b~v}). Therefore, since ~w ∈ V was arbitrary and clearly span({a~u, b~v}) ⊆ V (as the span of vectors in V is a subspace of V), we obtain that span({a~u, b~v}) = V. Therefore, since dim(V) = 2, we automatically have (by Corollary 2 in Section 1.6) that {a~u, b~v} is linearly independent and thus {a~u, b~v} is a basis for V.

1.6 Question 25) Let V, W, and Z be as in Exercise 21 of Section 1.2. If V and W are vector spaces over F of dimension m and n respectively, determine the dimension of Z.

Solution: Let V, W, and Z be vector spaces over a field F as in Exercise 21 of Section 1.2. Suppose further that V and W are finite dimensional vector spaces with dimensions m and n respectively. Let {~v1, . . . , ~vm} be a basis for V and let {~w1, . . . , ~wn} be a basis for W. We claim that

β = {(~v1, ~0W), (~v2, ~0W), . . . , (~vm, ~0W), (~0V, ~w1), (~0V, ~w2), . . . , (~0V, ~wn)}

is a basis of Z. To see this, we must show that β is linearly independent and span(β) = Z. To see that β is linearly independent, suppose a1, a2, . . . , am, b1, b2, . . . , bn ∈ F are such that

a1(~v1, ~0W) + a2(~v2, ~0W) + · · · + am(~vm, ~0W) + b1(~0V, ~w1) + b2(~0V, ~w2) + · · · + bn(~0V, ~wn) = ~0Z.

Hence

(a1~v1 + a2~v2 + · · · + am~vm, b1~w1 + · · · + bn~wn) = ~0Z = (~0V, ~0W)

by the properties of Z given in Exercise 21 of Section 1.2. Hence a1~v1 + a2~v2 + · · · + am~vm = ~0V and b1~w1 + · · · + bn~wn = ~0W. Since {~v1, . . . , ~vm} is a basis for V, {~v1, . . . , ~vm} is linearly independent and thus a1~v1 + a2~v2 + · · · + am~vm = ~0V implies aj = 0 for all j ∈ {1, . . . , m}. Since {~w1, . . . , ~wn} is a basis for W, {~w1, . . . , ~wn} is linearly independent and thus b1~w1 + · · · + bn~wn = ~0W implies bj = 0 for all j ∈ {1, . . . , n}. Hence all of the ai's and bj's are zero so β is linearly independent.

To see that span(β) = Z, we note clearly span(β) ⊆ Z. To see the other inclusion, let ~z ∈ Z be arbitrary. By the definition of Z we can write ~z = (~v, ~w) where ~v ∈ V and ~w ∈ W. Since {~v1, . . . , ~vm} is a basis for V, there exist scalars a1, . . . , am ∈ F such that a1~v1 + a2~v2 + · · · + am~vm = ~v. Since {~w1, . . . , ~wn} is a basis for W, there exist scalars b1, . . . , bn ∈ F such that b1~w1 + · · · + bn~wn = ~w. Hence

~z = (~v, ~w)
  = (a1~v1 + a2~v2 + · · · + am~vm, b1~w1 + · · · + bn~wn)
  = a1(~v1, ~0W) + a2(~v2, ~0W) + · · · + am(~vm, ~0W) + b1(~0V, ~w1) + b2(~0V, ~w2) + · · · + bn(~0V, ~wn) ∈ span(β).

Hence, as ~z ∈ Z was arbitrary, span(β) = Z. Hence β is a basis for Z. Since β has n + m elements, Z is a finite dimensional vector space with dim(Z) = n + m.

1.6 Question 26) For a fixed a ∈ R, determine the dimension of the subspace of Pn(R) defined by {f ∈ Pn(R) | f(a) = 0}.

Solution: Let W = {f ∈ Pn(R) | f(a) = 0}. We claim that dim(W) = n. To see this, for each i = 0, 1, . . . , n - 1, let fi ∈ Pn(R) be the polynomial

fi(x) = (x - a)x^i = x^(i+1) - ax^i

(where x^0 = 1). We claim that {f0, f1, . . . , fn-1} is a basis for W. To begin, we must show that each fi is an element of W. To see this, we notice that fi(a) = 0 by construction. Hence {f0, f1, . . . , fn-1} ⊆ W.

Next we need to show that {f0, f1, . . . , fn-1} is linearly independent. To see this, suppose α0, α1, . . . , αn-1 ∈ R are such that

α0 f0 + α1 f1 + · · · + αn-1 fn-1 = ~0

(where ~0 is the zero polynomial). By substituting the definition of each fi, we see that

~0 = αn-1 x^n + (αn-2 - aαn-1)x^(n-1) + (αn-3 - aαn-2)x^(n-2) + · · · + (α0 - aα1)x + (-aα0).

Therefore, as {1, x, x², . . . , x^n} is linearly independent, we obtain that αn-1 = 0 and αj-1 = aαj for all j = 1, . . . , n - 1. Since αn-1 = 0, we obtain that αn-2 = aαn-1 = 0. Similarly αn-3 = 0 and, by repeating this argument, we obtain that αi = 0 for all i = 0, 1, . . . , n - 1. Hence {f0, f1, . . . , fn-1} is linearly independent.

Lastly, we need to show that span({f0, f1, . . . , fn-1}) = W. To see this, let f ∈ W be arbitrary. Therefore f(a) = 0. Hence we can write f(x) = (x - a)g(x) where g(x) is a polynomial with real coefficients of degree at most n - 1. Therefore, there exist scalars α0, α1, . . . , αn-1 ∈ R such that

g(x) = αn-1 x^(n-1) + αn-2 x^(n-2) + · · · + α1 x + α0.

Hence

f(x) = (x - a)(αn-1 x^(n-1) + αn-2 x^(n-2) + · · · + α1 x + α0)
     = αn-1 (x - a)x^(n-1) + αn-2 (x - a)x^(n-2) + · · · + α1 (x - a)x + α0 (x - a)
     = αn-1 fn-1(x) + αn-2 fn-2(x) + · · · + α1 f1(x) + α0 f0(x)

so f ∈ span({f0, f1, . . . , fn-1}). Since f ∈ W was arbitrary and span({f0, f1, . . . , fn-1}) ⊆ W (as W is a vector space and {f0, f1, . . . , fn-1} ⊆ W), we obtain that span({f0, f1, . . . , fn-1}) = W as claimed.

Hence {f0, f1, . . . , fn-1} is a linearly independent set that generates W, so {f0, f1, . . . , fn-1} is a basis for W. Hence dim(W) = n as claimed.
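For a concrete n and a, the claimed basis can be checked with a little linear algebra: each fi vanishes at a, and the coefficient matrix of the fi in the basis {1, x, . . . , x^n} has full row rank. A sketch with n = 4 and a = 2 (the variable names are ours):

```python
import numpy as np

# Rows of F are the coefficient vectors of f_i(x) = x^(i+1) - a*x^i,
# i = 0, ..., n-1, in the monomial basis {1, x, ..., x^n}.
n, a = 4, 2.0
F = np.zeros((n, n + 1))
for i in range(n):
    F[i, i + 1] = 1.0   # coefficient of x^(i+1)
    F[i, i] = -a        # coefficient of x^i

powers = np.array([a**j for j in range(n + 1)])  # (1, a, a^2, ..., a^n)
print(F @ powers)                                # each f_i(a); all zeros
print(np.linalg.matrix_rank(F) == n)             # full row rank: independent
```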

1.6 Question 28) Let V be a finite-dimensional vector space over C with dimension n. Prove that if V is now regarded as a vector space over R, then dimR(V) = 2n.

Solution: Let {~v1, ~v2, . . . , ~vn} be a basis for V when viewed as a vector space over C. Therefore

V = spanC({~v1, ~v2, . . . , ~vn}) = {α1~v1 + · · · + αn~vn | αj ∈ C}

and, if α1, . . . , αn ∈ C are such that α1~v1 + · · · + αn~vn = ~0, then αj = 0 for all j = 1, . . . , n.

To show that dimR(V) = 2n, we claim that {~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn} is a basis for V when viewed as a vector space over R. To see this, we first notice that the R-span of {~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn} is

spanR({~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn}) = {a1~v1 + b1(i~v1) + · · · + an~vn + bn(i~vn) | aj, bj ∈ R}
 = {(a1 + b1 i)~v1 + · · · + (an + bn i)~vn | aj, bj ∈ R}
 = {α1~v1 + · · · + αn~vn | αj ∈ C} = V

where the last two sets are equal since every complex number can be written uniquely as a + bi where a, b ∈ R. Therefore spanR({~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn}) = V.

To see that {~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn} is a linearly independent set when V is viewed as a vector space over R, suppose there exist a1, . . . , an, b1, . . . , bn ∈ R such that

a1~v1 + b1(i~v1) + · · · + an~vn + bn(i~vn) = ~0.

For each j, let αj = aj + ibj ∈ C. Therefore, the above equation implies that

α1~v1 + · · · + αn~vn = ~0.

However, since {~v1, ~v2, . . . , ~vn} was a linearly independent subset of V when V was viewed as a vector space over C, we obtain that aj + ibj = αj = 0 for all j. Therefore, since aj, bj ∈ R, the equations aj + ibj = 0 imply that aj = 0 and bj = 0 for all j. Hence {~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn} is a linearly independent set when V is viewed as a vector space over R.

Therefore {~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn} is a linearly independent set that generates V when V is viewed as a vector space over R. Hence {~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn} is a basis of V when V is viewed as a vector space over R. Therefore, since {~v1, i~v1, ~v2, i~v2, . . . , ~vn, i~vn} contains 2n vectors, dimR(V) = 2n.

1.6 Question 29a) Prove that if W1 and W2 are finite-dimensional subspaces of a vector space V, then the subspace W1 + W2 is finite dimensional and dim(W1 + W2) = dim(W1) + dim(W2) - dim(W1 ∩ W2).

Solution: Let W1 and W2 be finite dimensional subspaces of a vector space V and let Y = W1 + W2. Thus Y is a subspace of V (see the file on direct sums). Since W1 ∩ W2 ⊆ W1, by the definition of a subspace and the fact that W1 ∩ W2 and W1 are subspaces of V, W1 ∩ W2 is a subspace of the finite dimensional vector space W1. Hence W1 ∩ W2 is a finite dimensional vector space.

Let α = {~v1, . . . , ~vn} be a basis for W1 ∩ W2. Since W1 ∩ W2 is a subspace of W1 and W1 ∩ W2 is a subspace of W2, we can extend α to bases β = {~v1, . . . , ~vn, ~w1, . . . , ~wk} and γ = {~v1, . . . , ~vn, ~z1, . . . , ~zm} for W1 and W2 respectively by a Corollary to the Replacement Theorem. Therefore dim(W1 ∩ W2) = n, dim(W1) = n + k, and dim(W2) = n + m.

We claim that δ = {~v1, . . . , ~vn, ~w1, . . . , ~wk, ~z1, . . . , ~zm} is a basis for Y. To see this, we must show that span(δ) = Y and δ is linearly independent. To see that span(δ) = Y, we notice that

Y = W1 + W2 = {~w + ~z | ~w ∈ W1, ~z ∈ W2}
 = {(a1~v1 + · · · + an~vn + b1~w1 + · · · + bk~wk) + (c1~v1 + · · · + cn~vn + d1~z1 + · · · + dm~zm) | ai, bi, ci, di ∈ F}    (as β and γ are bases for W1 and W2 respectively)
 = span(δ)

as desired.

To see that δ is linearly independent, suppose a1, . . . , an, b1, . . . , bk, c1, . . . , cm ∈ F are such that

a1~v1 + · · · + an~vn + b1~w1 + · · · + bk~wk + c1~z1 + · · · + cm~zm = ~0V.

Thus

a1~v1 + · · · + an~vn + b1~w1 + · · · + bk~wk = (-c1)~z1 + · · · + (-cm)~zm.

However, since β and γ are bases for W1 and W2 respectively, and W1 and W2 are subspaces, we obtain that a1~v1 + · · · + an~vn + b1~w1 + · · · + bk~wk ∈ W1 and (-c1)~z1 + · · · + (-cm)~zm ∈ W2. Hence the above equality implies that

(-c1)~z1 + · · · + (-cm)~zm ∈ W1 ∩ W2.

Since α is a basis for W1 ∩ W2, there exist scalars d1, . . . , dn ∈ F such that

d1~v1 + · · · + dn~vn = (-c1)~z1 + · · · + (-cm)~zm.

Thus

d1~v1 + · · · + dn~vn + c1~z1 + · · · + cm~zm = ~0V.

However, since γ is a basis, γ is a linearly independent set of vectors and thus the above equation implies that di = 0 and ci = 0 for all i. Therefore, as ci = 0 for all i, the equation

a1~v1 + · · · + an~vn + b1~w1 + · · · + bk~wk + c1~z1 + · · · + cm~zm = ~0V

implies

a1~v1 + · · · + an~vn + b1~w1 + · · · + bk~wk = ~0V.

However, since β is a basis, β is a linearly independent set of vectors and thus the above equation implies that ai = 0 for all i and bi = 0 for all i. Hence δ is a linearly independent set. Hence δ is a basis for Y.

Since δ has n + k + m vectors, dim(Y) = n + k + m. Therefore

dim(Y) + dim(W1 ∩ W2) = (n + k + m) + n = (n + k) + (n + m) = dim(W1) + dim(W2)

as desired.
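The dimension formula can be spot-checked numerically: if W1 and W2 are the column spaces of matrices A and B, then dim(W1 + W2) is the rank of the block matrix [A B]. A sketch in R^4 where W1 ∩ W2 = span{e2} by inspection (the variable names are ours):

```python
import numpy as np

# Spot-check of dim(W1 + W2) = dim(W1) + dim(W2) - dim(W1 ∩ W2) in R^4,
# where W1 and W2 are the column spaces of A and B.
A = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.]])  # W1 = span{e1, e2}
B = np.array([[0., 0.], [1., 0.], [0., 1.], [0., 0.]])  # W2 = span{e2, e3}

dim_w1 = np.linalg.matrix_rank(A)
dim_w2 = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))  # dim(W1 + W2)
dim_cap = 1  # W1 ∩ W2 = span{e2} by inspection

print(dim_sum == dim_w1 + dim_w2 - dim_cap)  # 3 == 2 + 2 - 1
```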

1.6 Question 29b) Let W1 and W2 be finite dimensional subspaces of a vector space V such that V = W1 + W2. Deduce that V is the direct sum of W1 and W2 if and only if dim(V) = dim(W1) + dim(W2).

Solution: Let W1 and W2 be finite dimensional subspaces of a vector space V such that V = W1 + W2. First suppose V is the direct sum of W1 and W2. Hence W1 ∩ W2 = {~0} by the definition of the direct sum of vector spaces. Since V is finite dimensional and dim(V) = dim(W1) + dim(W2) - dim(W1 ∩ W2) by Section 1.6 Question 29a), and since dim({~0}) = 0, we obtain that dim(V) = dim(W1) + dim(W2) as desired.

Now suppose dim(V) = dim(W1) + dim(W2). By Section 1.6 Question 29a), V is finite dimensional and dim(V) = dim(W1) + dim(W2) - dim(W1 ∩ W2). Hence dim(V) = dim(W1) + dim(W2) implies dim(W1 ∩ W2) = 0. Thus W1 ∩ W2 = {~0} as dim(W1 ∩ W2) = 0. Since W1 ∩ W2 = {~0} and V = W1 + W2, V is the direct sum of W1 and W2 by the definition of the direct sum.

1.6 Question 33b) Let 1 and 2 be disjoint bases for subspace W1 and W2 , respectively, of a vector space V . Prove that if 1 2 is a basis for V , then V = W1 W2 .

Solution: To show that V = W1 W2 , it suffices to show that V = W1 + W2 and W1 W2 = {~0}.

To see that V = W1 + W2 , let ~v V be arbitrary. Since 1 2 is a basis for V , there exists vectors

~x1 , . . . , ~xn 1 and ~y1 , . . . , ~ym 2 and scalars a1 , . . . , an , b1 , . . . , bm F such that

~v = a1 ~x1 + + an ~xn + b1 ~y1 + + bm ~ym .

Since 1 is a basis for W1 and ~x1 , . . . , ~xn 1 , a1 ~x1 + + an ~xn W1 as W1 is a subspace. Since 2 is a

basis for W2 and ~y1 , . . . , ~ym 2 , b1 ~y1 + + bm ~ym W2 as W2 is a subspace. Hence

~v = (a1 ~x1 + + an ~xn ) + (b1 ~y1 + + bm ~ym ) W1 + W2 .

Hence, as ~v V was arbitrary, V = W1 + W2 .

To see that W1 W2 = {~0}, let ~v W1 W2 be arbitrary. Then ~v W1 and ~v W2 . Since 1 is a basis

for W1 , there exists distinct vectors ~x1 , . . . , ~xn 1 and scalars a1 , . . . , an F such that

~v = a1 ~x1 + + an ~xn .

Since 2 is a basis for W2 , there exists distinct vectors ~y1 , . . . , ~ym 2 and scalars b1 , . . . , bm F such that

~v = b1 ~y1 + + bm ~ym .

Subtracting the second expression from the first, we obtain

a1~x1 + ··· + an~xn + (−b1)~y1 + ··· + (−bm)~ym = ~0.

However, since ~x1, ..., ~xn are distinct vectors in β1, since ~y1, ..., ~ym are distinct vectors in β2, and since β1 and β2 are disjoint, ~x1, ..., ~xn, ~y1, ..., ~ym are distinct vectors in β1 ∪ β2. Therefore, since β1 ∪ β2 is a basis for V and thus is linearly independent, the above equation implies aj = 0 and bj = 0 for all j. Hence ~v = ~0. Thus W1 ∩ W2 = {~0}. Hence V = W1 ⊕ W2.

1.6 Question 34a) Prove that if W1 is any subspace of a finite dimensional vector space V, then there exists a subspace W2 of V such that V = W1 ⊕ W2.

Solution: Let W1 be a subspace of a finite dimensional vector space V. Let α = {~v1, ..., ~vk} be a basis for W1. By a corollary of the Replacement Theorem, we can extend α to a basis β = {~v1, ..., ~vn} for V. Let W2 = span({~vk+1, ..., ~vn}) so that W2 is a subspace of V. We claim that V = W1 ⊕ W2. To see this, we notice that {~v1, ..., ~vk} and {~vk+1, ..., ~vn} are bases for W1 and W2 respectively that are disjoint with union equal to a basis for V. Therefore, V = W1 ⊕ W2 by Section 1.6 Question 33b).


Chapter Two

2.1 Question 2) Let T : R3 → R2 be defined by T(a1, a2, a3) = (a1 − a2, 2a3).

Solution: We will divide the solution into its various parts.

T is a linear transformation: To see that T is linear, it suffices to show that T(α~x + ~y) = αT(~x) + T(~y) for all α ∈ R and ~x, ~y ∈ R3. Therefore, if ~x = (x1, x2, x3) and ~y = (y1, y2, y3), then

T(α~x + ~y) = T(αx1 + y1, αx2 + y2, αx3 + y3)
= ((αx1 + y1) − (αx2 + y2), 2(αx3 + y3))
= α(x1 − x2, 2x3) + (y1 − y2, 2y3)
= αT(~x) + T(~y)

as desired. Hence T is a linear map.

Basis for ker(T) and nullity of T: To compute a basis for ker(T), we first need to compute ker(T). To compute ker(T), we notice that (a1, a2, a3) ∈ ker(T) if and only if T(a1, a2, a3) = ~0 if and only if (a1 − a2, 2a3) = (0, 0) if and only if a1 = a2 and a3 = 0. Hence ker(T) = {(a1, a2, a3) ∈ R3 | a1 = a2, a3 = 0} = {(a, a, 0) ∈ R3 | a ∈ R}.

Let ~v = (1, 1, 0). Clearly ~v ∈ ker(T) and span({~v}) = ker(T). Since {~v} is also a linearly independent set (as ~v ≠ ~0), {~v} is a basis for ker(T). Hence the nullity of T (which is the dimension of ker(T)) is 1.

Basis for Im(T) and rank of T: To compute a basis for Im(T), we first need to compute Im(T). To compute Im(T), we note that ~e1 = (1, 0, 0), ~e2 = (0, 1, 0), and ~e3 = (0, 0, 1) form a basis for R3. Therefore, Theorem 2.2 implies that

Im(T) = span({T(~e1), T(~e2), T(~e3)}) = span({(1, 0), (−1, 0), (0, 2)}).

Since span({(1, 0), (0, 2)}) = R2 and Im(T) ⊆ R2, Im(T) = R2. Therefore, the rank of T (which is the dimension of the image of T) is 2.

Verification that the Dimension Theorem holds for T : Since T maps from R3 to R2 and dim(R3 ) = 3,

the Dimension Theorem implies that 3 = rank(T ) + nullity(T ). Since rank(T ) = 2, nullity(T ) = 1, and

1 + 2 = 3, the Dimension Theorem holds.

Is T one-to-one and/or onto?: Since ker(T) ≠ {~0}, T is not one-to-one. Since Im(T) = R2, T is onto.
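As a sanity check (not part of the textbook solution), the rank and nullity above can be verified from the standard matrix of T, using a small Gaussian elimination over the rationals.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # number of pivots found so far
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Standard matrix of T(a1, a2, a3) = (a1 - a2, 2*a3)
A = [[1, -1, 0],
     [0,  0, 2]]
T = lambda v: tuple(sum(row[j] * v[j] for j in range(3)) for row in A)

assert rank(A) == 2            # rank(T) = 2
assert 3 - rank(A) == 1        # nullity(T) = 1, consistent with the Dimension Theorem
assert T((1, 1, 0)) == (0, 0)  # (1, 1, 0) lies in ker(T)
```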

2.1 Question 3) Let T : R2 → R3 be defined by T(a1, a2) = (a1 + a2, 0, 2a1 − a2).

Solution: We will divide the solution into its various parts.

T is a linear transformation: To see that T is linear, it suffices to show that T(α~x + ~y) = αT(~x) + T(~y) for all α ∈ R and ~x, ~y ∈ R2. Therefore, if ~x = (x1, x2) and ~y = (y1, y2), then

T(α~x + ~y) = T(αx1 + y1, αx2 + y2)
= ((αx1 + y1) + (αx2 + y2), 0, 2(αx1 + y1) − (αx2 + y2))
= α(x1 + x2, 0, 2x1 − x2) + (y1 + y2, 0, 2y1 − y2)
= αT(~x) + T(~y)

as desired. Hence T is a linear map.

Basis for ker(T) and nullity of T: To compute a basis for ker(T), we first need to compute ker(T). To compute ker(T), we notice that (a1, a2) ∈ ker(T) if and only if T(a1, a2) = ~0 if and only if (a1 + a2, 0, 2a1 − a2) = (0, 0, 0) if and only if a1 = −a2 and a2 = 2a1 if and only if a1 = 0 and a2 = 0. Hence ker(T) = {~0}. Therefore ∅ is a basis for ker(T) (since span(∅) = {~0} and the empty set is vacuously linearly independent). Hence the nullity of T (which is the dimension of ker(T)) is 0.


Basis for Im(T) and rank of T: To compute a basis for Im(T), we first need to compute Im(T). To compute Im(T), we note that ~e1 = (1, 0) and ~e2 = (0, 1) form a basis for R2. Therefore, Theorem 2.2 implies that

Im(T) = span({T(~e1), T(~e2)}) = span({(1, 0, 2), (1, 0, −1)}).

Since {(1, 0, 2), (1, 0, −1)} is a linearly independent set (since neither vector is a multiple of the other), {(1, 0, 2), (1, 0, −1)} is a basis for Im(T). Therefore, the rank of T (which is the dimension of the image of T) is 2.

Verification that the Dimension Theorem holds for T : Since T maps from R2 to R3 and dim(R2 ) = 2,

the Dimension Theorem implies that 2 = rank(T ) + nullity(T ). Since rank(T ) = 2, nullity(T ) = 0, and

0 + 2 = 2, the Dimension Theorem holds.

Is T one-to-one and/or onto?: Since ker(T) = {~0}, T is one-to-one. Since Im(T) ≠ R3, T is not onto.

2.1 Question 5) Let T : P2(R) → P3(R) be defined by T(f(x)) = xf(x) + f′(x).

Solution: We will divide the solution into its various parts.

T is a linear transformation: To see that T is linear, it suffices to show that T(αf + g) = αT(f) + T(g) for all α ∈ R and f, g ∈ P2(R). Therefore, if f, g ∈ P2(R) and α ∈ R, then

T(αf + g)(x) = x((αf + g)(x)) + (αf + g)′(x)
= αxf(x) + xg(x) + αf′(x) + g′(x)
= α(xf(x) + f′(x)) + (xg(x) + g′(x))
= αT(f)(x) + T(g)(x)

as desired. Hence T is a linear map.

Basis for ker(T) and nullity of T: To compute a basis for ker(T), we first need to compute ker(T). To compute ker(T), we notice that a2x2 + a1x + a0 ∈ ker(T) if and only if T(a2x2 + a1x + a0) = ~0 if and only if x(a2x2 + a1x + a0) + (a2x2 + a1x + a0)′ = 0 if and only if a2x3 + a1x2 + (a0 + 2a2)x + a1 = 0 if and only if a2 = a1 = a0 = 0. Hence ker(T) = {~0}. Therefore ∅ is a basis for ker(T) (since span(∅) = {~0} and the empty set is vacuously linearly independent). Hence the nullity of T (which is the dimension of ker(T)) is 0.

Basis for Im(T) and rank of T: To compute a basis for Im(T), we first need to compute Im(T). To compute Im(T), we note that {1, x, x2} is a basis for P2(R). Therefore, Theorem 2.2 implies that

Im(T) = span({T(1), T(x), T(x2)}) = span({x, x2 + 1, x3 + 2x}).

We claim that {x, x2 + 1, x3 + 2x} is linearly independent and thus a basis for Im(T). To see that {x, x2 + 1, x3 + 2x} is linearly independent, suppose a1, a2, a3 ∈ R are such that a1x + a2(x2 + 1) + a3(x3 + 2x) = 0. Therefore a3x3 + a2x2 + (a1 + 2a3)x + a2 = 0, which clearly implies that a1 = a2 = a3 = 0. Hence {x, x2 + 1, x3 + 2x} is linearly independent and thus a basis for Im(T). Therefore, the rank of T (which is the dimension of the image of T) is 3.

Verification that the Dimension Theorem holds for T : Since T maps from P2 (R) to P3 (R) and dim(P2 (R)) =

3, the Dimension Theorem implies that 3 = rank(T ) + nullity(T ). Since rank(T ) = 3, nullity(T ) = 0, and

0 + 3 = 3, the Dimension Theorem holds.

Is T one-to-one and/or onto?: Since ker(T) = {~0}, T is one-to-one. Since Im(T) ≠ P3(R), T is not onto.


2.1 Question 10) Suppose that T : R2 → R2 is linear, T(1, 0) = (1, 4), and T(1, 1) = (2, 5). What is T(2, 3)? Is T one-to-one?

Solution: Since T : R2 → R2 is linear, T(1, 0) = (1, 4), and T(1, 1) = (2, 5), we obtain that

T(0, 1) = T((1, 1) − (1, 0)) = T(1, 1) − T(1, 0) = (2, 5) − (1, 4) = (1, 1).

Therefore, since T is linear,

T(x1, x2) = x1T(1, 0) + x2T(0, 1) = x1(1, 4) + x2(1, 1) = (x1 + x2, 4x1 + x2)

for all (x1, x2) ∈ R2. Therefore

T(2, 3) = (2 + 3, 2(4) + 3) = (5, 11).

Moreover, T is one-to-one. To see this, we notice that (x1, x2) ∈ ker(T) if and only if T(x1, x2) = ~0 if and only if (x1 + x2, 4x1 + x2) = (0, 0) if and only if x1 = −x2 and x2 = −4x1 if and only if x1 = x2 = 0. Therefore ker(T) = {(0, 0)} and thus T is one-to-one by Theorem 2.4.
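The formula derived above is easy to check directly; a minimal sketch:

```python
# Formula for T obtained from T(1,0) = (1,4) and T(0,1) = (1,1) by linearity.
def T(x1, x2):
    return (x1 + x2, 4 * x1 + x2)

assert T(1, 0) == (1, 4)    # matches the given value
assert T(1, 1) == (2, 5)    # matches the given value
assert T(2, 3) == (5, 11)   # the requested value
assert T(0, 0) == (0, 0)    # the zero vector is the only obvious kernel element
```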

2.1 Question 11) Prove that there exists a linear transformation T : R2 → R3 such that T(1, 1) = (1, 0, 2) and T(2, 3) = (1, −1, 4). What is T(8, 11)?

Solution: Notice {(1, 1), (2, 3)} is a linearly independent subset of R2 as neither vector is a multiple of the other. Hence, as dim(R2) = 2, {(1, 1), (2, 3)} is a basis for R2. Therefore, by Theorem 2.6 of the text, there exists a unique linear map T : R2 → R3 such that T(1, 1) = (1, 0, 2) and T(2, 3) = (1, −1, 4).

To compute T (8, 11), we notice that

(8, 11) = 2(1, 1) + 3(2, 3).

Therefore, since T is linear,

T(8, 11) = T(2(1, 1) + 3(2, 3)) = 2T(1, 1) + 3T(2, 3) = 2(1, 0, 2) + 3(1, −1, 4) = (5, −3, 16)

as desired.
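A quick check of the arithmetic (the coefficients 2 and 3, and the final value), assuming the given images T(1, 1) = (1, 0, 2) and T(2, 3) = (1, −1, 4):

```python
# Solve (8, 11) = a*(1, 1) + b*(2, 3): a + 2b = 8 and a + 3b = 11.
b = 11 - 8       # subtracting the equations gives b = 3
a = 8 - 2 * b    # back-substituting gives a = 2
assert (a, b) == (2, 3)

# Given images from the problem statement.
T11, T23 = (1, 0, 2), (1, -1, 4)
T_8_11 = tuple(a * u + b * v for u, v in zip(T11, T23))
assert T_8_11 == (5, -3, 16)
```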

2.1 Question 12) Is there a linear transformation T : R3 → R2 such that T(1, 0, 3) = (1, 1) and T(−2, 0, −6) = (2, 1)?

Solution: No; there is no such linear transformation. To see this, suppose T : R3 → R2 were a linear map such that T(1, 0, 3) = (1, 1) and T(−2, 0, −6) = (2, 1). Then

(2, 1) = T(−2, 0, −6) = T(−2(1, 0, 3)) = −2T(1, 0, 3) = −2(1, 1) = (−2, −2)

which is impossible as 2 ≠ −2 in R. Hence there is no linear transformation with the above properties.

2.1 Question 13) Let V and W be vector spaces, let T : V → W be linear, and let {~w1, ..., ~wk} be a linearly independent subset of R(T). Prove that if S = {~v1, ..., ~vk} is chosen so that T(~vi) = ~wi for i = 1, 2, ..., k, then S is linearly independent.

Solution: Let V and W be vector spaces, let T : V → W be linear, and let {~w1, ..., ~wk} be a linearly independent subset of R(T). Fix a set S = {~v1, ..., ~vk} such that T(~vi) = ~wi for i = 1, 2, ..., k. To see that S is linearly independent, suppose there exist scalars a1, a2, ..., ak ∈ F such that

a1~v1 + ··· + ak~vk = ~0V.

Since T is linear, applying T to both sides yields

~0W = T(~0V) = T(a1~v1 + ··· + ak~vk) = a1T(~v1) + ··· + akT(~vk) = a1~w1 + ··· + ak~wk.

However, since {~w1, ..., ~wk} is linearly independent, the above equation implies ai = 0 for all i = 1, 2, ..., k. Hence S is linearly independent by the definition of a linearly independent set.

2.1 Question 14a) Let V and W be vector spaces and let T : V → W be linear. Prove T is one-to-one if and only if T carries linearly independent subsets of V onto linearly independent subsets of W.

Solution: Suppose T is one-to-one. Let S be an arbitrary linearly independent subset of V. We claim that T(S) is a linearly independent subset of W. To see this, suppose there exist distinct vectors ~w1, ..., ~wk ∈ T(S) and scalars a1, ..., ak ∈ F such that

~0W = a1~w1 + ··· + ak~wk.

Since ~w1, ..., ~wk ∈ T(S) are distinct vectors and since T is one-to-one, there exist distinct vectors ~v1, ..., ~vk ∈ S such that T(~vj) = ~wj for all j = 1, ..., k (the ~vj's must be distinct since T is one-to-one and the ~wj's are distinct). Since T is linear, the above equation implies

T(a1~v1 + ··· + ak~vk) = a1T(~v1) + ··· + akT(~vk) = a1~w1 + ··· + ak~wk = ~0W.

Therefore a1~v1 + ··· + ak~vk ∈ ker(T). Since T is one-to-one and linear, ker(T) = {~0V} so a1~v1 + ··· + ak~vk = ~0V. Therefore, since ~v1, ..., ~vk ∈ S are distinct vectors and S is a linearly independent subset of V, aj = 0 for all j = 1, 2, ..., k. Hence T(S) is a linearly independent subset of W.

For the other direction, we will prove the contrapositive. Suppose T is not one-to-one. We claim that there exists a linearly independent subset S of V such that T(S) is not a linearly independent subset of W. Since T is linear and not one-to-one, ker(T) ≠ {~0V}. Hence there exists a non-zero vector ~v ∈ V such that ~v ∈ ker(T). Hence {~v} is a linearly independent subset of V such that {T(~v)} = {~0W} is linearly dependent. Hence we have proven the claim.

2.1 Question 14b) Let V and W be vector spaces and let T : V → W be linear. Suppose that T is one-to-one and that S is a subset of V. Prove that S is linearly independent if and only if T(S) is linearly independent.

Solution: Let V and W be vector spaces, let S be a subset of V, and let T : V → W be a one-to-one linear map. Suppose S is a linearly independent subset of V. Then T(S) is linearly independent by Question 14a), which was proven above.

Now suppose that T(S) is linearly independent. To see that S is linearly independent, suppose ~v1, ..., ~vk ∈ S are distinct vectors and a1, ..., ak ∈ F are scalars such that

a1~v1 + ··· + ak~vk = ~0V.

Since T is a linear map, the above equation implies

~0W = T(~0V) = T(a1~v1 + ··· + ak~vk) = a1T(~v1) + ··· + akT(~vk).

Since ~v1, ..., ~vk ∈ S are distinct vectors and since T is one-to-one, T(~v1), ..., T(~vk) ∈ T(S) are distinct vectors. Since T(S) is linearly independent, the above equation implies aj = 0 for all j = 1, ..., k. Hence S is linearly independent.


2.1 Question 14c) Let V and W be vector spaces and let T : V → W be linear. Suppose β = {~v1, ..., ~vn} is a basis for V and T is one-to-one and onto. Prove that T(β) = {T(~v1), ..., T(~vn)} is a basis for W.

Solution: Let T : V → W be a linear map that is one-to-one and onto. Let β = {~v1, ..., ~vn} be a basis for V and let ~wj = T(~vj) for all j = 1, ..., n. We desire to show that T(β) = {~w1, ..., ~wn} is a basis for W. To see this, we first claim that T(β) is linearly independent. To see this, suppose there exist scalars α1, ..., αn ∈ F such that

α1~w1 + ··· + αn~wn = ~0W.

Therefore

~0W = α1T(~v1) + ··· + αnT(~vn) = T(α1~v1 + ··· + αn~vn).

Since T is one-to-one, ker(T) = {~0V} so the above equation tells us α1~v1 + ··· + αn~vn = ~0V. Since β is a basis for V, β is linearly independent so α1~v1 + ··· + αn~vn = ~0V implies αj = 0 for all j = 1, ..., n. Hence T(β) is linearly independent.

To show that T(β) is a basis, we need to show that span(T(β)) = W. To see that span(T(β)) = W, let ~w ∈ W be arbitrary. Since T is onto, Im(T) = W so there exists a vector ~v ∈ V such that T(~v) = ~w. Since β is a basis for V, there exist scalars α1, ..., αn ∈ F such that

~v = α1~v1 + ··· + αn~vn.

Hence

~w = T(~v) = α1~w1 + ··· + αn~wn ∈ span(T(β)).

Hence, as ~w ∈ W was arbitrary, span(T(β)) = W. Therefore T(β) is a basis for W as desired.

2.1 Question 17a) Let V and W be finite-dimensional vector spaces and let T : V → W be linear. Prove that if dim(V) < dim(W), then T cannot be onto.

Solution: Let V and W be finite dimensional vector spaces with dim(V) < dim(W) and let T : V → W be a linear map. Therefore, by the Dimension Theorem,

dim(Im(T)) ≤ dim(ker(T)) + dim(Im(T)) = dim(V) < dim(W).

Therefore, since dim(Im(T)) < dim(W), Im(T) ≠ W so T is not onto.

2.1 Question 17b) Let V and W be finite-dimensional vector spaces and let T : V → W be linear. Prove that if dim(V) > dim(W), then T cannot be one-to-one.

Solution: Let V and W be finite dimensional vector spaces with dim(V) > dim(W) and let T : V → W be a linear map. Since Im(T) is a subspace of W, dim(Im(T)) ≤ dim(W). Therefore, by the Dimension Theorem,

dim(V) = dim(ker(T)) + dim(Im(T)) ≤ dim(ker(T)) + dim(W).

Hence dim(ker(T)) ≥ dim(V) − dim(W) > 0 so ker(T) ≠ {~0}. Hence T is not one-to-one.

2.1 Question 20) Let V and W be vector spaces with subspaces V1 and W1 respectively. If T : V → W is linear, prove that T(V1) is a subspace of W and that {~x ∈ V | T(~x) ∈ W1} is a subspace of V.

Solution: First we will demonstrate that if V1 is a subspace of V, then T(V1) is a subspace of W. To prove T(V1) is a subspace of W, we need to demonstrate the three properties from Theorem 1.3.


1. To see that ~0W ∈ T(V1), recall that since V1 is a subspace of V, ~0V ∈ V1. Since T is linear, ~0W = T(~0V) ∈ T(V1) as desired.

2. Suppose ~w1, ~w2 ∈ T(V1) are arbitrary. By the definition of T(V1) there exist vectors ~v1, ~v2 ∈ V1 such that T(~v1) = ~w1 and T(~v2) = ~w2. Since T is linear, ~w1 + ~w2 = T(~v1) + T(~v2) = T(~v1 + ~v2). However, since V1 is a subspace of V and thus closed under addition and since ~v1, ~v2 ∈ V1, ~v1 + ~v2 ∈ V1. Hence ~w1 + ~w2 = T(~v1 + ~v2) ∈ T(V1). Therefore T(V1) is closed under addition.

3. Suppose ~w ∈ T(V1) and α ∈ F are arbitrary. By the definition of T(V1) there exists a vector ~v ∈ V1 such that T(~v) = ~w. Since T is linear, α~w = αT(~v) = T(α~v). However, since V1 is a subspace of V and thus closed under scalar multiplication and since ~v ∈ V1, α~v ∈ V1. Hence α~w = T(α~v) ∈ T(V1). Therefore T(V1) is closed under scalar multiplication.

Thus, as we have checked the three properties, T(V1) is a subspace of W.

Let W1 be a subspace of W. To see that Z = {~x ∈ V | T(~x) ∈ W1} is a subspace of V, we need to demonstrate the three properties from Theorem 1.3.

1. To see that ~0V ∈ Z, notice T(~0V) = ~0W as T is linear and ~0W ∈ W1 as W1 is a subspace of W. Hence T(~0V) ∈ W1 so ~0V ∈ Z by the definition of Z.

2. Let ~v1, ~v2 ∈ Z be arbitrary. By the definition of Z, T(~v1) ∈ W1 and T(~v2) ∈ W1. Since W1 is a subspace of W, W1 is closed under addition so T(~v1) + T(~v2) ∈ W1. Since T is linear, T(~v1 + ~v2) = T(~v1) + T(~v2) ∈ W1. Hence ~v1 + ~v2 ∈ Z by the definition of Z. Therefore Z is closed under addition.

3. Let ~v ∈ Z and α ∈ F be arbitrary. By the definition of Z, T(~v) ∈ W1. Since W1 is a subspace of W, W1 is closed under scalar multiplication so αT(~v) ∈ W1. Since T is linear, T(α~v) = αT(~v) ∈ W1. Hence α~v ∈ Z by the definition of Z. Therefore Z is closed under scalar multiplication.

Thus, as we have checked the three properties, {~x ∈ V | T(~x) ∈ W1} is a subspace of V.

For Question 21, let V be the vector space of sequences as described in Example 5 of Section 1.2. Define the functions T, U : V → V by

T(a1, a2, a3, ...) = (a2, a3, a4, ...)

and

U(a1, a2, a3, ...) = (0, a1, a2, ...).

T and U are called the left shift and right shift operators on V, respectively.

2.1 Question 21a) Prove that T and U are linear.

Solution: To see that T and U are linear, let α ∈ F be arbitrary and let

~v = (a1, a2, a3, ...) and ~w = (b1, b2, b3, ...)

be arbitrary elements of V. Then

T(α~v + ~w) = T((αa1 + b1, αa2 + b2, αa3 + b3, ...))
= (αa2 + b2, αa3 + b3, αa4 + b4, ...)
= α(a2, a3, a4, ...) + (b2, b3, b4, ...)
= αT(~v) + T(~w)

and

U(α~v + ~w) = U((αa1 + b1, αa2 + b2, αa3 + b3, ...))
= (0, αa1 + b1, αa2 + b2, ...)
= α(0, a1, a2, ...) + (0, b1, b2, ...)
= αU(~v) + U(~w).

Hence T and U are linear.


2.1 Question 21b) Prove that T is onto, but not one-to-one.

Solution: To see that T is onto, let (a1, a2, a3, ...) ∈ V be arbitrary. Then (0, a1, a2, a3, ...) ∈ V and

T(0, a1, a2, a3, ...) = (a1, a2, a3, ...).

Hence (a1, a2, a3, ...) ∈ Im(T). Since (a1, a2, a3, ...) ∈ V was arbitrary, T is onto.

To see that T is not one-to-one, notice (1, 0, 0, 0, ...) ∈ V is a non-zero vector such that

T(1, 0, 0, 0, ...) = (0, 0, 0, 0, ...) = ~0V.

Hence ker(T) ≠ {~0V} so T is not one-to-one.

2.1 Question 21c) Prove that U is one-to-one, but not onto.

Solution: To see that U is one-to-one, suppose (a1, a2, a3, ...) ∈ ker(U). Therefore

(0, 0, 0, 0, ...) = ~0V = U(a1, a2, a3, ...) = (0, a1, a2, a3, ...).

Therefore, it is easy to see that aj = 0 for all j ∈ N and thus (a1, a2, a3, ...) = ~0V. Hence ker(U) = {~0V} so U is one-to-one.

To see that U is not onto, we claim that (1, 0, 0, 0, ...) ∉ Im(U). To see this, we notice for all (a1, a2, a3, ...) ∈ V that

U(a1, a2, a3, ...) = (0, a1, a2, ...) ≠ (1, 0, 0, 0, ...).

Hence (1, 0, 0, 0, ...) ∉ Im(U) so U is not onto.
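The shift behavior is easy to model on a finite window of a sequence; a small sketch (the window length and the trailing-zero convention are artifacts of the illustration, not part of the problem):

```python
# Finite-window model of the shifts: only the first N terms of each sequence.
N = 6
T = lambda a: a[1:] + (0,)   # left shift: (a1, a2, ...) -> (a2, a3, ...)
U = lambda a: (0,) + a[:-1]  # right shift: (a1, a2, ...) -> (0, a1, ...)

# Pick a sequence whose last windowed entry is 0, so truncation loses nothing.
a = (1, 2, 3, 4, 5, 0)

assert T(U(a)) == a                        # T undoes U, so U is one-to-one
assert U(T(a)) != a                        # the first coordinate is lost
assert T((1, 0, 0, 0, 0, 0)) == (0,) * N   # non-zero kernel: T is not one-to-one
assert U(a)[0] == 0                        # every image of U starts with 0: U is not onto
```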

2.1 Question 26a) Let T : V → V be the projection on W1 along W2. Prove that T is linear and W1 = {~x ∈ V | T(~x) = ~x}.

Solution: To see that T is linear, let α ∈ F and ~x, ~y ∈ V be arbitrary. Since V = W1 ⊕ W2, there exist vectors ~w1, ~w1′ ∈ W1 and ~w2, ~w2′ ∈ W2 such that ~x = ~w1 + ~w2 and ~y = ~w1′ + ~w2′. Hence T(~x) = ~w1 and T(~y) = ~w1′ by the definition of the projection on W1 along W2. However, notice

α~x + ~y = α(~w1 + ~w2) + (~w1′ + ~w2′) = (α~w1 + ~w1′) + (α~w2 + ~w2′).

Since W1 and W2 are subspaces, α~w1 + ~w1′ ∈ W1 and α~w2 + ~w2′ ∈ W2. Therefore, since V = W1 ⊕ W2 and thus every vector in V can be written uniquely as a sum of a vector in W1 and a vector in W2,

α~x + ~y = (α~w1 + ~w1′) + (α~w2 + ~w2′)

is the unique decomposition of α~x + ~y as a vector from W1 plus a vector from W2. Therefore

T(α~x + ~y) = α~w1 + ~w1′

by the definition of T. Therefore T(α~x + ~y) = αT(~x) + T(~y). Hence T is linear.

To see that W1 = {~x ∈ V | T(~x) = ~x}, notice if ~y ∈ {~x ∈ V | T(~x) = ~x} then ~y ∈ Im(T). Therefore, by the definition of T, ~y ∈ W1. Hence {~x ∈ V | T(~x) = ~x} ⊆ W1. To see the other inclusion, let ~x ∈ W1 be arbitrary. Therefore ~x = ~x + ~0V is the unique decomposition of ~x as the sum of a vector from W1 and a vector from W2. Hence T(~x) = ~x by the definition of T. Therefore W1 ⊆ {~x ∈ V | T(~x) = ~x}. Hence {~x ∈ V | T(~x) = ~x} = W1 as desired.


2.1 Question 26b) Let T : V → V be the projection on W1 along W2. Prove that W1 = Im(T) and W2 = ker(T).

Solution: Let T : V → V be the projection on W1 along W2. To see that W1 = Im(T), let ~x ∈ W1 be arbitrary. Therefore ~x = ~x + ~0V is the unique decomposition of ~x as the sum of a vector from W1 and a vector from W2. Hence T(~x) = ~x by the definition of T. Therefore ~x = T(~x) ∈ Im(T). Hence W1 ⊆ Im(T). However, by the definition of T, it is clear that Im(T) ⊆ W1. Hence Im(T) = W1.

To see that W2 = ker(T), let ~x ∈ W2 be arbitrary. Therefore ~x = ~0V + ~x is the unique decomposition of ~x as the sum of a vector from W1 and a vector from W2. Hence T(~x) = ~0V by the definition of T. Therefore ~x ∈ ker(T). Hence W2 ⊆ ker(T). To see the other inclusion, let ~x ∈ ker(T). Since V = W1 ⊕ W2, there exist vectors ~w1 ∈ W1 and ~w2 ∈ W2 such that ~x = ~w1 + ~w2 and T(~x) = ~w1. Since ~x ∈ ker(T), ~w1 = T(~x) = ~0V. Hence ~x = ~w1 + ~w2 = ~0V + ~w2 = ~w2 ∈ W2. Thus ker(T) ⊆ W2 so ker(T) = W2 as desired.

2.1 Question 27a) Suppose that W is a subspace of a finite dimensional vector space V. Prove that there exists a subspace W′ and a function T : V → V such that T is a projection on W along W′.

Solution: Let W be a subspace of a finite dimensional vector space V. Let α = {~v1, ..., ~vk} be a basis for W. By a corollary of the Replacement Theorem, we can extend α to a basis β = {~v1, ..., ~vn} for V. Let W′ = span({~vk+1, ..., ~vn}) so that W′ is a subspace of V. We claim that V = W ⊕ W′. To see this, we need to show that V = W + W′ and W ∩ W′ = {~0V}. To see that V = W + W′, notice W + W′ ⊆ V. To verify the other inclusion, let ~v ∈ V be arbitrary. Since β is a basis for V, there exist scalars a1, ..., an ∈ F such that

~v = a1~v1 + ··· + an~vn = (a1~v1 + ··· + ak~vk) + (ak+1~vk+1 + ··· + an~vn).

However, since W and W′ are subspaces, a1~v1 + ··· + ak~vk ∈ W and ak+1~vk+1 + ··· + an~vn ∈ W′. Hence ~v ∈ W + W′. Therefore, as ~v was arbitrary, V = W + W′.

To see that W ∩ W′ = {~0V}, notice it is clear that {~0V} ⊆ W ∩ W′ as W and W′ are subspaces. For the other inclusion, suppose ~x ∈ W ∩ W′. Thus ~x ∈ W and ~x ∈ W′. Since α is a basis for W, span(α) = W. Therefore, since ~x ∈ W, ~x ∈ span(α) so there exist scalars a1, ..., ak ∈ F such that ~x = a1~v1 + ··· + ak~vk. Since ~x ∈ W′ = span({~vk+1, ..., ~vn}), there exist scalars ak+1, ..., an ∈ F such that ~x = ak+1~vk+1 + ··· + an~vn. Hence

a1~v1 + ··· + ak~vk = ~x = ak+1~vk+1 + ··· + an~vn

so, by rearranging this equation, we obtain

a1~v1 + ··· + ak~vk + (−ak+1)~vk+1 + ··· + (−an)~vn = ~0V.

However, since β is a basis for V, the above equation implies that aj = 0 for all j. Hence

~x = a1~v1 + ··· + ak~vk = ~0V

as desired. Thus W ∩ W′ = {~0V} so V = W ⊕ W′.

To complete the proof, we desire to show that if V = W ⊕ W′, there always exists a projection on W along W′. To see this, we define the function T : V → V by T(~w + ~y) = ~w for all ~w ∈ W and ~y ∈ W′. To complete the proof, we need to verify that T is well-defined; that is, if ~w + ~y = ~w′ + ~y′ where ~w, ~w′ ∈ W and ~y, ~y′ ∈ W′, then ~w = ~w′. However, this follows from Question 30 of Section 1.3. Hence the proof is complete.


2.1 Question 27b) Give an example of a subspace W of a vector space V such that there are two projections on W along two (distinct) subspaces.

Solution: Let V = R2 with the usual vector space operations and let W be the x-axis. Let W1 be the y-axis and let

W2 = {(a, a) | a ∈ R}.

It is trivial to verify that V = W ⊕ W1 = W ⊕ W2. From Question 27a) there exist a projection T : V → V on W along W1 and a projection R : V → V on W along W2. We claim that T ≠ R. To see this, let ~v = (1, 1). Then ~v = (1, 0) + (0, 1) is the unique decomposition of ~v as the sum of a vector from W and a vector from W1. Therefore T(~v) = (1, 0). However ~v = (0, 0) + (1, 1) is the unique decomposition of ~v as the sum of a vector from W and a vector from W2 so R(~v) = (0, 0). Therefore T(~v) ≠ R(~v) so T and R are distinct maps.

2.1 Question 28) Let T : V → V be a linear map. Prove that the subspaces {~0}, V, Im(T), and ker(T) are all T-invariant.

Solution: Let T : V → V be a linear map. To see that {~0} is T-invariant, notice that T(~0) = ~0 as T is linear. Therefore T({~0}) = {~0} so {~0} is T-invariant. Clearly T(V) ⊆ V so V is T-invariant.

To see that Im(T) is T-invariant, we desire to show that T(Im(T)) ⊆ Im(T). Let ~v ∈ Im(T) be arbitrary. Then T(~v) ∈ Im(T) by the definition of the image of T. Hence T(Im(T)) ⊆ Im(T) so Im(T) is T-invariant.

Finally, to see that ker(T) is T-invariant, we desire to show that T(ker(T)) ⊆ ker(T). Let ~v ∈ ker(T) be arbitrary. Then T(~v) = ~0 as ~v ∈ ker(T). Since ker(T) is a subspace of V, ~0 ∈ ker(T) so T(~v) ∈ ker(T). Hence T(ker(T)) ⊆ ker(T) so ker(T) is T-invariant.

2.1 Question 35a) Let V be a finite dimensional vector space and T : V → V be a linear map. Suppose that V = Im(T) + ker(T). Prove V = Im(T) ⊕ ker(T).

Proof: Since V = Im(T) + ker(T), to prove that V = Im(T) ⊕ ker(T) it suffices to prove Im(T) ∩ ker(T) = {~0}. By the Dimension Theorem dim(V) = dim(Im(T)) + dim(ker(T)). Since V is finite dimensional, ker(T) and Im(T) are finite dimensional subspaces of V so Question 29a) of Section 1.6 implies

dim(Im(T) + ker(T)) = dim(Im(T)) + dim(ker(T)) − dim(Im(T) ∩ ker(T)).

Since V = Im(T) + ker(T), dim(V) = dim(Im(T)) + dim(ker(T)) − dim(Im(T) ∩ ker(T)). Combining the two expressions for dim(V), we obtain dim(Im(T) ∩ ker(T)) = 0. Hence Im(T) ∩ ker(T) = {~0} as desired.

2.1 Question 35b) Let V be a finite dimensional vector space and T : V → V be a linear map. Suppose that Im(T) ∩ ker(T) = {~0}. Prove V = Im(T) ⊕ ker(T).

Proof: Since Im(T) ∩ ker(T) = {~0}, to prove that V = Im(T) ⊕ ker(T) it suffices to prove V = Im(T) + ker(T). By the Dimension Theorem dim(V) = dim(Im(T)) + dim(ker(T)). Moreover, since Im(T) ∩ ker(T) = {~0}, dim(Im(T) ∩ ker(T)) = 0. Since V is finite dimensional, ker(T) and Im(T) are finite dimensional subspaces of V so Question 29a) of Section 1.6 implies

dim(Im(T) + ker(T)) = dim(Im(T)) + dim(ker(T)) − dim(Im(T) ∩ ker(T)) = dim(Im(T)) + dim(ker(T)) = dim(V).

Since Im(T) + ker(T) is the sum of subspaces of V, Im(T) + ker(T) is a subspace of V with dimension equal to that of V. Hence Theorem 1.11 implies V = Im(T) + ker(T).


2.1 Question 37) A function T : V → W between vector spaces V and W is called additive if T(~x + ~y) = T(~x) + T(~y) for all ~x, ~y ∈ V. Prove that if V and W are vector spaces over the field of rational numbers, then any additive function from V into W is a linear transformation.

Proof: Let V and W be vector spaces over Q and let T : V → W be an additive map. To prove that T is linear, we must prove that T(q~v) = qT(~v) for all q ∈ Q and ~v ∈ V. First we notice that

T(~0V) = T(~0V + ~0V) = T(~0V) + T(~0V)

so T(~0V) = ~0W by the Cancellation Law for the vector space W.

Next we claim that T(n~v) = nT(~v) for all n ∈ N and ~v ∈ V. To see this, we will proceed by the Principle of Mathematical Induction on n.

Base Case: For n = 1, it is clear that T(1~v) = T(~v) = 1T(~v) as desired.

Inductive Step: Suppose that the result holds for some fixed n ∈ N; that is, T(n~v) = nT(~v) for all ~v ∈ V. We claim that T((n + 1)~v) = (n + 1)T(~v) for all ~v ∈ V. To see this, fix ~v ∈ V. Since T(n~v + ~v) = T(n~v) + T(~v) by our assumptions on T and T(n~v) = nT(~v) by the inductive hypothesis, we obtain that

T((n + 1)~v) = T(n~v + ~v) = T(n~v) + T(~v) = nT(~v) + T(~v) = (n + 1)T(~v)

as desired.

Hence, by the Principle of Mathematical Induction, we obtain that T(n~v) = nT(~v) for all n ∈ N and ~v ∈ V.

Next we claim that T((1/m)~v) = (1/m)T(~v) for all m ∈ N and ~v ∈ V. To see this, fix m ∈ N and ~v ∈ V. Therefore, by the result proven above,

T(~v) = T(m(1/m)~v) = mT((1/m)~v).

Hence, by dividing both sides of the above equation by m, we obtain that T((1/m)~v) = (1/m)T(~v) as desired.

Finally, we claim that T(−~v) = −T(~v) for all ~v ∈ V. To see this, fix ~v ∈ V. We notice (since T(~x + ~y) = T(~x) + T(~y) for all vectors ~x, ~y ∈ V) that

T(~v) + T(−~v) = T(~v + (−~v)) = T(~0V) = ~0W

and thus T(−~v) = −T(~v) by the uniqueness of additive inverses in a vector space.

To prove that T(q~v) = qT(~v) for all q ∈ Q and ~v ∈ V, we split the proof into three cases:

Case 1: q = 0. If q = 0, then T(q~v) = T(~0V) = ~0W = qT(~v) for all vectors ~v ∈ V.

Case 2: q > 0. If q > 0, then there exist n, m ∈ N such that q = n/m. Therefore

T(q~v) = T(n(1/m)~v) = nT((1/m)~v) = n(1/m)T(~v) = qT(~v)

for all vectors ~v ∈ V.

Case 3: q < 0. If q < 0, then there exist n, m ∈ N such that q = −n/m. Therefore

T(q~v) = T(−n(1/m)~v) = −T(n(1/m)~v) = −nT((1/m)~v) = −n(1/m)T(~v) = qT(~v)

(by using the above results) for all vectors ~v ∈ V.

Thus, as we have covered all possible q ∈ Q, T(q~v) = qT(~v) for all q ∈ Q and ~v ∈ V so T is a linear map as desired.


2.1 Question 38) Let T : C → C be the function T(z) = z̄ (complex conjugation). Prove that T is additive but not linear over the complex numbers.

Proof: To see that T is additive, let z, w ∈ C be arbitrary. Then there exist a, b, c, d ∈ R such that z = a + bi and w = c + di. Then

T(z + w) = T((a + c) + (b + d)i)
= (a + c) − (b + d)i
= (a − bi) + (c − di)
= T(a + bi) + T(c + di)
= T(z) + T(w).

Hence T is additive.

To see that T is not complex linear, we notice that iT(i) = i(−i) = 1 yet T(i · i) = T(−1) = −1. Thus iT(i) ≠ T(i · i) so T is not linear.
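Both claims can be checked directly with Python's built-in complex numbers; a minimal sketch:

```python
# T(z) = conjugate(z): additive over C, but not C-linear.
T = lambda z: z.conjugate()

# Additivity on a few sample pairs.
for z, w in [(1 + 2j, 3 - 4j), (0.5j, 2 + 1j), (-1 - 1j, 1j)]:
    assert T(z + w) == T(z) + T(w)

# Failure of homogeneity over C: i*T(i) = 1 but T(i*i) = -1.
assert 1j * T(1j) == 1
assert T(1j * 1j) == -1
assert 1j * T(1j) != T(1j * 1j)
```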

For 2.2 Question 2), let β and γ be the standard ordered bases for Rn and Rm respectively. For each linear transformation T : Rn → Rm, compute [T]γβ.

2.2 Question 2a) T : R2 → R3 defined by T(a1, a2) = (2a1 − a2, 3a1 + 4a2, a1).

Solution: Let β = {(1, 0), (0, 1)} and γ = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} be the standard bases. Moreover we know that

[T]γβ = [ [T(1, 0)]γ  [T(0, 1)]γ ].

However, we notice that T(1, 0) = (2, 3, 1) and T(0, 1) = (−1, 4, 0). Hence

[T]γβ =
[ 2  −1 ]
[ 3   4 ]
[ 1   0 ]
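The pattern used throughout this question — the columns of [T]γβ are the images of the standard basis vectors — can be automated; a small sketch (the helper `matrix_of` is introduced here for illustration, not from the text):

```python
# Columns of the matrix of T are the images of the standard basis vectors.
def matrix_of(T, n):
    cols = [T(*[1 if j == i else 0 for j in range(n)]) for i in range(n)]
    return [list(row) for row in zip(*cols)]  # transpose columns into rows

T = lambda a1, a2: (2 * a1 - a2, 3 * a1 + 4 * a2, a1)
assert matrix_of(T, 2) == [[2, -1], [3, 4], [1, 0]]
```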

2.2 Question 2b) T : R3 → R2.

Solution: Let β = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} and γ = {(1, 0), (0, 1)} be the standard bases. Moreover we know that

[T]γβ = [ [T(1, 0, 0)]γ  [T(0, 1, 0)]γ  [T(0, 0, 1)]γ ].

However, we notice that T(1, 0, 0) = (2, 1), T(0, 1, 0) = (3, 0), and T(0, 0, 1) = (−1, 1). Hence

[T]γβ =
[ 2  3  −1 ]
[ 1  0   1 ]


2.2 Question 2c) T : R3 → R.

Solution: Let β = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} and γ = {1} be the standard bases. Moreover we know that

[T]γβ = [ [T(1, 0, 0)]γ  [T(0, 1, 0)]γ  [T(0, 0, 1)]γ ].

2.2 Question 2d) T : R3 → R3.

Solution: Let β = γ = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} be the standard bases. Moreover we know that

[T]γβ = [ [T(1, 0, 0)]γ  [T(0, 1, 0)]γ  [T(0, 0, 1)]γ ].

However, we notice that T(1, 0, 0) = (2, −1, 1), T(0, 1, 0) = (0, 4, 0), and T(0, 0, 1) = (1, 5, 1). Hence

[T]γβ =
[  2  0  1 ]
[ −1  4  5 ]
[  1  0  1 ]
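The columns of this matrix can be checked against a formula for T; the formula below is inferred from the images of the standard basis vectors stated above, so treat it as an assumption:

```python
# Assumed formula consistent with T(e1) = (2, -1, 1), T(e2) = (0, 4, 0), T(e3) = (1, 5, 1).
T = lambda a1, a2, a3: (2 * a1 + a3, -a1 + 4 * a2 + 5 * a3, a1 + a3)

cols = [T(1, 0, 0), T(0, 1, 0), T(0, 0, 1)]
assert cols == [(2, -1, 1), (0, 4, 0), (1, 5, 1)]  # the columns of [T]
```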

2.2 Question 2e) T : Rn → Rn defined by T(a1, a2, ..., an) = (a1, a1, ..., a1).

Solution: Let β = γ = {~e1, ..., ~en} be the standard bases. Moreover we know that

[T]γβ = [ [T(~e1)]γ ··· [T(~en)]γ ].

However, we notice that T(~e1) = (1, 1, ..., 1) = ~e1 + ··· + ~en and T(~ej) = ~0 for j = 2, ..., n. Hence [T]γβ is the n × n matrix whose first column consists entirely of 1's and whose remaining entries are 0:

[T]γβ =
[ 1  0  ···  0 ]
[ 1  0  ···  0 ]
[ ⋮  ⋮       ⋮ ]
[ 1  0  ···  0 ]

2.2 Question 2f) T : Rn → Rn defined by T(a1, a2, ..., an) = (an, an−1, ..., a1).

Solution: Let β = γ = {~e1, ..., ~en} be the standard bases. Moreover we know that

[T]γβ = [ [T(~e1)]γ ··· [T(~en)]γ ].

However, we notice that T(~ej) = ~en+1−j for j = 1, ..., n. Hence [T]γβ is the n × n matrix with 1's on the anti-diagonal and 0's elsewhere:

[T]γβ =
[ 0  ···  0  1 ]
[ 0  ···  1  0 ]
[ ⋮       ⋮  ⋮ ]
[ 1  ···  0  0 ]

Solution: Let β = {~e1, . . . , ~en} and γ = {1} be the standard bases. Moreover we know that

[T]_β^γ = [ [T(~e1)]_γ  · · ·  [T(~en)]_γ ]

However, we notice that T(~e1) = 1 = T(~en) and T(~ej) = 0 for all j = 2, . . . , n - 1. Hence

[T]_β^γ = [ 1  0  · · ·  0  1 ]

which completes the problem.

2.2 Question 3) Let T : R^2 → R^3 be defined by T(a1, a2) = (a1 - a2, a1, 2a1 + a2). Let β be the standard ordered basis for R^2 and let γ = {(1, 1, 0), (0, 1, 1), (2, 2, 3)}. Compute [T]_β^γ. If α = {(1, 2), (2, 3)}, compute [T]_α^γ.

Solution: First we know that

[T]_β^γ = [ [T(1, 0)]_γ  [T(0, 1)]_γ ]

However, we notice that T(1, 0) = (1, 1, 2) and T(0, 1) = (-1, 0, 1). Thus, to complete the problem, we need to compute [(1, 1, 2)]_γ and [(-1, 0, 1)]_γ. To find [(1, 1, 2)]_γ, we need to find the scalars a, b, c ∈ R such that

(1, 1, 2) = a(1, 1, 0) + b(0, 1, 1) + c(2, 2, 3)

which is easily solved to give a = -1/3, b = 0, and c = 2/3. Similarly, to find [(-1, 0, 1)]_γ, we need to find the scalars a, b, c ∈ R such that

(-1, 0, 1) = a(1, 1, 0) + b(0, 1, 1) + c(2, 2, 3)

which is easily solved to give a = -1, b = 1, and c = 0. Hence

[T]_β^γ = [ [(1, 1, 2)]_γ  [(-1, 0, 1)]_γ ] =
[ -1/3  -1 ]
[   0    1 ]
[  2/3   0 ]

as desired.

Next, to compute [T]_α^γ, we could repeat the procedure given above. However, let us examine another way to solve the problem. To begin, we notice that [T]_α^γ = [T]_β^γ [I]_α^β where [I]_α^β is the change of basis matrix that takes α-coordinates to β-coordinates. Therefore

[I]_α^β = [ [(1, 2)]_β  [(2, 3)]_β ] =
[ 1  2 ]
[ 2  3 ]

Hence

[T]_α^γ = [T]_β^γ [I]_α^β =
[ -1/3  -1 ] [ 1  2 ]   [ -7/3  -11/3 ]
[   0    1 ] [ 2  3 ] = [   2      3  ]
[  2/3   0 ]            [  2/3    4/3 ]
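As a sanity check on the arithmetic in this question, the product [T]_β^γ [I]_α^β can be recomputed exactly with Python's fractions module (a quick sketch; matmul is our own helper, not anything from the text):

```python
from fractions import Fraction as F

def matmul(A, B):
    # exact matrix product over the rationals
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T_gamma_beta = [[F(-1, 3), F(-1)],
                [F(0),     F(1)],
                [F(2, 3),  F(0)]]
I_beta_alpha = [[F(1), F(2)],      # columns are [(1,2)]_beta and [(2,3)]_beta
                [F(2), F(3)]]

T_gamma_alpha = matmul(T_gamma_beta, I_beta_alpha)
assert T_gamma_alpha == [[F(-7, 3), F(-11, 3)],
                         [F(2),     F(3)],
                         [F(2, 3),  F(4, 3)]]
```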

2.2 Question 4) Define T : M_{2×2}(F) → P_2(F) by

T( [ a  b ] ) = (a + b) + (2d)x + bx^2
   [ c  d ]

Let β = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] } (where [a b; c d] denotes the 2 × 2 matrix with rows (a, b) and (c, d)) and γ = {1, x, x^2}. Compute [T]_β^γ.

Solution: We know that

[T]_β^γ = [ [T([1 0; 0 0])]_γ  [T([0 1; 0 0])]_γ  [T([0 0; 1 0])]_γ  [T([0 0; 0 1])]_γ ]

However, we notice that

T([1 0; 0 0]) = 1,  T([0 1; 0 0]) = 1 + x^2,  T([0 0; 1 0]) = 0,  T([0 0; 0 1]) = 2x.

Hence

[T]_β^γ = [ [1]_γ  [1 + x^2]_γ  [0]_γ  [2x]_γ ] =
[ 1  1  0  0 ]
[ 0  0  0  2 ]
[ 0  1  0  0 ]
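A quick numerical check of these columns (illustrative only; our function T returns the coefficient tuple of the image polynomial in the basis {1, x, x^2}):

```python
def T(a, b, c, d):
    # coefficients of (a + b) + (2d)x + b*x^2 in the basis {1, x, x^2}
    return (a + b, 2 * d, b)

# apply T to the four standard matrix units E11, E12, E21, E22
units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
cols = [T(*E) for E in units]
matrix = [[col[i] for col in cols] for i in range(3)]  # columns -> rows
assert matrix == [[1, 1, 0, 0],
                  [0, 0, 0, 2],
                  [0, 1, 0, 0]]
```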

For Question 5 in Section 2.2, let

α = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] },  β = {1, x, x^2},  and  γ = {1}.

2.2 Question 5a) Define T : M_{2×2}(F) → M_{2×2}(F) by T(A) = A^t. Compute [T]_α.

Solution: We know that

[T]_α = [ [T([1 0; 0 0])]_α  [T([0 1; 0 0])]_α  [T([0 0; 1 0])]_α  [T([0 0; 0 1])]_α ]

However, we notice that

T([1 0; 0 0]) = [1 0; 0 0],  T([0 1; 0 0]) = [0 0; 1 0],  T([0 0; 1 0]) = [0 1; 0 0],  T([0 0; 0 1]) = [0 0; 0 1].

Hence

[T]_α =
[ 1  0  0  0 ]
[ 0  0  1  0 ]
[ 0  1  0  0 ]
[ 0  0  0  1 ]
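One way to sanity-check this matrix: acting with it on the coordinate vector (a, b, c, d) of an arbitrary A should return the coordinates (a, c, b, d) of A^t. A short Python sketch (the names are ours):

```python
def transpose_coords(a, b, c, d):
    # A = [a b; c d] has alpha-coordinates (a, b, c, d);
    # A^t = [a c; b d] has alpha-coordinates (a, c, b, d)
    return (a, c, b, d)

M = [[1, 0, 0, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]

v = (5, 6, 7, 8)  # an arbitrary matrix [5 6; 7 8]
Mv = tuple(sum(M[i][j] * v[j] for j in range(4)) for i in range(4))
assert Mv == transpose_coords(*v) == (5, 7, 6, 8)
```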

2.2 Question 5b) Define T : P_2(R) → M_{2×2}(R) by

T(f(x)) = [ f′(0)  2f(1) ]
          [   0    f″(3) ]

Compute [T]_β^α.

Solution: We know that

[T]_β^α = [ [T(1)]_α  [T(x)]_α  [T(x^2)]_α ]

However, we notice that

T(1) = [ 0  2; 0  0 ],  T(x) = [ 1  2; 0  0 ],  T(x^2) = [ 0  2; 0  2 ].

Hence

[T]_β^α =
[ 0  1  0 ]
[ 2  2  2 ]
[ 0  0  0 ]
[ 0  0  2 ]

2.2 Question 5c) Define T : M_{2×2}(F) → F by T(A) = tr(A). Compute [T]_α^γ.

Solution: We know that

[T]_α^γ = [ [T([1 0; 0 0])]_γ  [T([0 1; 0 0])]_γ  [T([0 0; 1 0])]_γ  [T([0 0; 0 1])]_γ ]

However, we notice that

T([1 0; 0 0]) = 1,  T([0 1; 0 0]) = 0,  T([0 0; 1 0]) = 0,  T([0 0; 0 1]) = 1.

Hence

[T]_α^γ = [ [1]_γ  [0]_γ  [0]_γ  [1]_γ ] = [ 1  0  0  1 ]

2.2 Question 5d) Define T : P_2(R) → R by T(f(x)) = f(2). Compute [T]_β^γ.

Solution: We know that

[T]_β^γ = [ [T(1)]_γ  [T(x)]_γ  [T(x^2)]_γ ]

However, we notice that T(1) = 1, T(x) = 2, and T(x^2) = 4. Hence

[T]_β^γ = [ [1]_γ  [2]_γ  [4]_γ ] = [ 1  2  4 ]

2.2 Question 5e) If

A = [ 1  -2 ]
    [ 0   4 ]

compute [A]_α.

Solution: It is incredibly clear that

A = [ 1  -2 ] = 1 [1 0; 0 0] + (-2) [0 1; 0 0] + 0 [0 0; 1 0] + 4 [0 0; 0 1]
    [ 0   4 ]

and thus

[A]_α =
[  1 ]
[ -2 ]
[  0 ]
[  4 ]

2.2 Question 5f) If f(x) = 3 - 6x + x^2, compute [f]_β.

Solution: It is incredibly clear that

f(x) = 3 - 6x + x^2 = 3(1) + (-6)x + (1)x^2

and thus

[f]_β =
[  3 ]
[ -6 ]
[  1 ]

2.2 Question 5g) For a ∈ F, compute [a]_γ.

Solution: Since a = a(1), it is clear that [a]_γ = [a], which completes the problem.

2.2 Question 8) Let V be an n-dimensional vector space with an ordered basis β. Define T : V → F^n by T(~x) = [~x]_β. Prove that T is linear.

Solution: Let β = {~v1, . . . , ~vn}. To prove that T is linear, let λ ∈ F and ~x, ~y ∈ V be arbitrary. Since β is a basis for V, there exist unique scalars a1, . . . , an, b1, . . . , bn ∈ F such that

~x = a1~v1 + · · · + an~vn  and  ~y = b1~v1 + · · · + bn~vn.

Hence

λ~x + ~y = (λa1 + b1)~v1 + · · · + (λan + bn)~vn

is the unique decomposition of λ~x + ~y in terms of β. Therefore, by the definition of coordinates,

[~x]_β = (a1, . . . , an),  [~y]_β = (b1, . . . , bn),  and  [λ~x + ~y]_β = (λa1 + b1, . . . , λan + bn).

Hence

T(λ~x + ~y) = [λ~x + ~y]_β = (λa1 + b1, . . . , λan + bn) = λ(a1, . . . , an) + (b1, . . . , bn) = λ[~x]_β + [~y]_β = λT(~x) + T(~y).

Thus T is linear.

2.2 Question 9) Let V be the vector space of complex numbers over the field R. Define T : V → V by T(z) = \overline{z}, where \overline{z} is the complex conjugate of z. Prove that T is linear and compute [T]_β where β = {1, i}. (Recall from discussion that T is not linear if V is regarded as a vector space over the field C.)

Solution: To prove that T is linear, we need to show that for all z, w ∈ C and a ∈ R that T(az + w) = aT(z) + T(w). Fix z, w ∈ C and a ∈ R. Write z = b + ci and w = d + ei where b, c, d, e ∈ R. Then

T(az + w) = T(a(b + ci) + (d + ei))
          = T((ab + d) + i(ac + e))
          = \overline{(ab + d) + i(ac + e)}
          = (ab + d) - i(ac + e)
          = a(b - ic) + (d - ie)
          = a\overline{z} + \overline{w} = aT(z) + T(w)

so T is linear.

To compute [T]_β where β = {1, i}, we note that

[T]_β = [ [T(1)]_β  [T(i)]_β ]

However, T(1) = 1 and T(i) = -i. Hence

[T]_β = [ [1]_β  [-i]_β ] =
[ 1   0 ]
[ 0  -1 ]
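Numerically, multiplying the β-coordinates of z by this matrix should reproduce the conjugate; a quick check with Python's complex type (illustration only, names are ours):

```python
# In beta = {1, i} coordinates, conjugation a + bi -> a - bi is
# multiplication by the matrix [[1, 0], [0, -1]].
def conj_coords(a, b):
    M = [[1, 0], [0, -1]]
    return tuple(M[i][0] * a + M[i][1] * b for i in range(2))

z = complex(3, 4)
a, b = conj_coords(z.real, z.imag)
assert complex(a, b) == z.conjugate()
```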

2.2 Question 10) Let V be a vector space with the ordered basis β = {~v1, ~v2, . . . , ~vn}. Define ~v0 = ~0. By Theorem 2.6 (page 72 of the text), there exists a linear transformation T : V → V such that T(~vj) = ~vj + ~v_{j-1} for j = 1, 2, . . . , n. Compute [T]_β.

Solution: To compute [T]_β where β = {~v1, ~v2, . . . , ~vn}, we note that

[T]_β = [ [T(~v1)]_β  · · ·  [T(~vn)]_β ]

Since T(~vj) = ~vj + ~v_{j-1} for j = 1, 2, . . . , n (where ~v0 = ~0), we see that

[T]_β = [ [~v1]_β  [~v1 + ~v2]_β  · · ·  [~v_{n-1} + ~vn]_β ] =
[ 1  1  0  · · ·  0  0 ]
[ 0  1  1  · · ·  0  0 ]
[ 0  0  1  · · ·  0  0 ]
[ :  :  :         :  : ]
[ 0  0  0  · · ·  1  1 ]
[ 0  0  0  · · ·  0  1 ]

that is, the matrix with 1s on the diagonal and superdiagonal and 0s elsewhere.
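The shape of this matrix can be checked column by column in a short Python sketch (illustrative only; indices are 0-based in the code):

```python
# Build [T]_beta for T(v_j) = v_j + v_{j-1} (with v_0 = 0): the matrix
# has 1s on the diagonal and the superdiagonal, 0s elsewhere.
n = 5
M = [[1 if j == i or j == i + 1 else 0 for j in range(n)] for i in range(n)]

# column j is the coordinate vector of T(v_{j+1}) = v_{j+1} + v_j
for j in range(n):
    col = [M[i][j] for i in range(n)]
    assert col == [1 if i == j or i == j - 1 else 0 for i in range(n)]
```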

2.2 Question 11) Let V be an n-dimensional vector space, and let T : V → V be a linear map. Suppose W is a T-invariant subspace having dimension k. Show that there exists a basis β for V such that [T]_β has the form

[ A  B ]
[ 0  C ]

where A is a k × k matrix, 0 is the (n - k) × k zero matrix, B is a k × (n - k) matrix, and C is an (n - k) × (n - k) matrix.

Solution: Let {~v1, . . . , ~vk} be any basis for W. By a corollary of the Replacement Theorem, we can extend {~v1, . . . , ~vk} to a basis β = {~v1, . . . , ~vk, ~x1, . . . , ~x_{n-k}} for V. We claim that this ordered basis works. To see this, it suffices to show that T(~vi) is a linear combination of the first k elements of β for all 1 ≤ i ≤ k. However, this is trivial since T(~vi) ∈ W (as ~vi ∈ W and W is T-invariant) so T(~vi) is a linear combination of vectors from {~v1, . . . , ~vk}.


2.2 Question 13) Let V and W be vector spaces and let T and U be non-zero linear transformations from V into W. If Im(T) ∩ Im(U) = {~0_W}, prove that {T, U} is a linearly independent subset of L(V, W).

Solution: To see that {T, U} is a linearly independent subset of L(V, W), suppose a, b ∈ F are such that aT + bU = 0 (where 0 represents the zero linear transformation). Since T is non-zero, there exists a ~v ∈ V such that T(~v) ≠ ~0_W. Therefore

~0_W = 0(~v) = (aT + bU)(~v) = aT(~v) + bU(~v).

Thus, as T and U are linear, the above equation implies T(a~v) = U((-b)~v). Since T(a~v) ∈ Im(T), U((-b)~v) ∈ Im(U), and Im(T) ∩ Im(U) = {~0_W}, we obtain T(a~v) = U((-b)~v) = ~0_W. Therefore aT(~v) = ~0_W. However, since T(~v) ≠ ~0_W, aT(~v) = ~0_W implies a = 0. By applying similar arguments, b = 0. Hence {T, U} is a linearly independent subset of L(V, W).

2.2 Question 16) Let V and W be vector spaces such that dim(V) = dim(W), and let T : V → W be a linear map. Show that there exist ordered bases β and γ for V and W respectively such that [T]_β^γ is a diagonal matrix.

Solution: Let n = dim(V) = dim(W) and let k = dim(ker(T)). Let {~v1, ~v2, . . . , ~vk} be a basis for ker(T). By a Corollary of the Replacement Theorem, we can extend {~v1, ~v2, . . . , ~vk} to a basis β = {~x1, . . . , ~x_{n-k}, ~v1, . . . , ~vk} for V. Let ~wj = T(~xj) for all j = 1, . . . , n - k. By the proof of the Dimension Theorem, {~w1, ~w2, . . . , ~w_{n-k}} is linearly independent in W. By a Corollary of the Replacement Theorem, we can extend {~w1, ~w2, . . . , ~w_{n-k}} to a basis γ = {~w1, ~w2, . . . , ~wn} for W. Since T(~xj) = ~wj for all j = 1, . . . , n - k and T(~vj) = ~0_W for all j = 1, . . . , k, it is easy to see that

[T]_β^γ = [ I_{n-k}     0_{n-k,k} ]
          [ 0_{k,n-k}   0_k       ]

that is, the matrix whose first n - k diagonal entries are 1 and whose remaining entries are 0, which is a diagonal matrix as desired.

2.3 Question 9) Find linear transformations U, T : F^2 → F^2 such that UT = T_0 (the zero transformation) but TU ≠ T_0. Use your answer to find matrices A and B such that AB = 0 and BA ≠ 0.

Solution: Define U, T : F^2 → F^2 by U(a1, a2) = (a2, 0) and T(a1, a2) = (a1, 0). Then for all (a1, a2) ∈ F^2 we have

UT(a1, a2) = U(T(a1, a2)) = U(a1, 0) = (0, 0)

and

TU(a1, a2) = T(U(a1, a2)) = T(a2, 0) = (a2, 0)

which is non-zero when a2 ≠ 0. Hence UT = T_0 yet TU ≠ T_0 as desired.

To find the matrices A and B such that AB = 0 and BA ≠ 0, let β = {(1, 0), (0, 1)} be the standard ordered basis for F^2. Then it is easy to see

[U]_β = [ 0  1 ]
        [ 0  0 ]

and

[T]_β = [ 1  0 ]
        [ 0  0 ]

Therefore, if A = [U]_β and B = [T]_β, then

AB = [ 0  1 ] [ 1  0 ] = [ 0  0 ]
     [ 0  0 ] [ 0  0 ]   [ 0  0 ]

and

BA = [ 1  0 ] [ 0  1 ] = [ 0  1 ] ≠ 0
     [ 0  0 ] [ 0  0 ]   [ 0  0 ]

as desired.
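These two products can be verified directly (a small sketch; matmul is our own helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 1], [0, 0]]   # [U]_beta
B = [[1, 0], [0, 0]]   # [T]_beta

assert matmul(A, B) == [[0, 0], [0, 0]]   # AB = 0
assert matmul(B, A) == [[0, 1], [0, 0]]   # BA != 0
```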

2.3 Question 11) Let V be a vector space and let T : V → V be linear. Prove that T^2 = T_0 if and only if Im(T) ⊆ ker(T).

Solution: Suppose T^2 = T_0. We desire to show that Im(T) ⊆ ker(T). Let ~v ∈ Im(T) be arbitrary. By the definition of the image of T, there exists a vector ~w ∈ V such that ~v = T(~w). Hence

T(~v) = T(T(~w)) = T^2(~w) = T_0(~w) = ~0

so that ~v ∈ ker(T). Hence, as ~v ∈ Im(T) was arbitrary, Im(T) ⊆ ker(T) as desired.

Now suppose Im(T) ⊆ ker(T). To show that T^2 = T_0 we need to show that T^2(~v) = ~0 for all ~v ∈ V. Let ~v ∈ V be arbitrary. Then T(~v) ∈ Im(T) ⊆ ker(T). Therefore, as T(~v) ∈ ker(T), T(T(~v)) = ~0. Hence T^2(~v) = ~0 as desired. Hence T^2 = T_0.

For Question 12 in Section 2.3, let V, W, and Z be vector spaces and let T : V → W and U : W → Z be linear maps.

2.3 Question 12a) Prove that if UT is one-to-one then T is one-to-one. Must U also be one-to-one?

Solution: Suppose UT is one-to-one. We will provide two proofs that T is one-to-one.

Proof 1: If T were not one-to-one, then ker(T) ≠ {~0_V} so there would exist a non-zero vector ~v ∈ V such that T(~v) = ~0_W. Hence UT(~v) = U(T(~v)) = U(~0_W) = ~0_Z so that ~v ∈ ker(UT). Since ~v was non-zero, ker(UT) ≠ {~0_V}, which contradicts the fact that UT is one-to-one. Hence T must be one-to-one.

Proof 2: Suppose ~v1, ~v2 ∈ V are vectors such that T(~v1) = T(~v2). Therefore UT(~v1) = U(T(~v1)) = U(T(~v2)) = UT(~v2). However, since UT is one-to-one, UT(~v1) = UT(~v2) implies that ~v1 = ~v2. Hence T must be one-to-one. (Note that we did not need T and U to be linear for this second proof.)

For the second claim, U need not be one-to-one. To see this, we will provide an example. Let T : R^2 → R^3 be defined by T(a1, a2) = (a1, a2, 0) and define U : R^3 → R^2 by U(a1, a2, a3) = (a1, a2). Then UT : R^2 → R^2 is such that UT(a1, a2) = U(a1, a2, 0) = (a1, a2) so UT is the identity map and thus one-to-one. However U is not one-to-one as (0, 0, 1) ∈ ker(U). Hence if UT is one-to-one, U need not be one-to-one.

2.3 Question 12b) Prove that if UT is onto then U is onto. Must T also be onto?

Solution: Suppose UT is onto. To show that U is onto, we need to show for any vector ~z ∈ Z there exists a vector ~w ∈ W such that U(~w) = ~z. Let ~z ∈ Z be arbitrary. Then, since UT is onto, there exists a vector ~v ∈ V such that UT(~v) = ~z. Hence U(T(~v)) = UT(~v) = ~z, so ~w = T(~v) works. Hence, as ~z ∈ Z was arbitrary, U is onto. (Note that we did not need U and T to be linear in this proof.)

For the second claim, T need not be onto. To see this, we will provide an example. Let T : R^2 → R^3 be defined by T(a1, a2) = (a1, a2, 0) and define U : R^3 → R^2 by U(a1, a2, a3) = (a1, a2). Then UT : R^2 → R^2 is such that UT(a1, a2) = U(a1, a2, 0) = (a1, a2) so UT is the identity map and thus onto. However T is not onto as (0, 0, 1) ∉ Im(T). Hence if UT is onto, T need not be onto.

2.3 Question 12c) Prove that if U and T are one-to-one and onto then UT is also one-to-one and onto.

Solution: Let T and U be one-to-one and onto. We claim that UT is one-to-one and onto. To see that UT is one-to-one, we will provide two proofs:

Proof 1: Suppose ~v ∈ ker(UT). Then ~0_Z = UT(~v) = U(T(~v)). Therefore T(~v) ∈ ker(U). Since U is one-to-one, ker(U) = {~0_W} so T(~v) = ~0_W. Therefore ~v ∈ ker(T). However, since T is one-to-one, ker(T) = {~0_V}. Hence ~v = ~0_V. Therefore ker(UT) = {~0_V}. Hence UT is one-to-one.

Proof 2: Suppose ~v1, ~v2 ∈ V are vectors such that UT(~v1) = UT(~v2). Therefore U(T(~v1)) = U(T(~v2)). However, since U is one-to-one, U(T(~v1)) = U(T(~v2)) implies that T(~v1) = T(~v2). Therefore, since T is one-to-one, T(~v1) = T(~v2) implies ~v1 = ~v2. Hence UT must be one-to-one. (Note that we did not need T and U to be linear for this second proof.)

To see that UT is onto, we need to show that for each vector ~z ∈ Z there exists a vector ~v ∈ V such that UT(~v) = ~z. Let ~z ∈ Z be arbitrary. Since U is onto, there exists a vector ~w ∈ W such that U(~w) = ~z. Since T is onto, there exists a vector ~v ∈ V such that T(~v) = ~w. Therefore UT(~v) = U(T(~v)) = U(~w) = ~z. Hence, as ~z ∈ Z was arbitrary, UT is onto as desired.

2.3 Question 13) Let A and B be n × n matrices. Recall that the trace of A is defined by

tr(A) = Σ_{i=1}^n A_{i,i}

Prove that tr(AB) = tr(BA) and that tr(A) = tr(A^t).

Solution: Write A = [A_{i,j}]_{i,j} and B = [B_{i,j}]_{i,j}. Then

AB = [ Σ_{k=1}^n A_{i,k} B_{k,j} ]_{i,j}  and  BA = [ Σ_{k=1}^n B_{i,k} A_{k,j} ]_{i,j}.

Hence

tr(AB) = Σ_{i=1}^n Σ_{k=1}^n A_{i,k} B_{k,i}  and  tr(BA) = Σ_{i=1}^n Σ_{k=1}^n B_{i,k} A_{k,i}.

Since A_{i,k} B_{k,i} = B_{k,i} A_{i,k} (as the entries of A and B are scalars), interchanging the roles of the summation indices i and k shows these two expressions are the same, so tr(AB) = tr(BA).

Finally, A^t = [A_{j,i}]_{i,j} so

tr(A^t) = Σ_{i=1}^n A_{i,i} = tr(A)

as desired.
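Both identities can be spot-checked on concrete matrices (illustrative only; any choice of A and B works, and matmul/tr are our own helpers):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
B = [[0, 1, 0], [2, 0, 3], [1, 1, 1]]

assert tr(matmul(A, B)) == tr(matmul(B, A))            # tr(AB) = tr(BA)
At = [[A[j][i] for j in range(3)] for i in range(3)]
assert tr(At) == tr(A)                                 # tr(A^t) = tr(A)
```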


For Question 2 in Section 2.4, for each of the given linear transformations T, determine whether T is invertible and justify your answer.

2.4 Question 2a) T : R^2 → R^3 defined by T(a1, a2) = (a1 - 2a2, a2, 3a1 + 4a2).

Solution: T cannot be invertible by Theorem 2.19 since dim(R^2) = 2 ≠ 3 = dim(R^3).

2.4 Question 2b) T : R^2 → R^3 defined by T(a1, a2) = (3a1 - a2, a2, 4a1).

Solution: T cannot be invertible by Theorem 2.19 since dim(R^2) = 2 ≠ 3 = dim(R^3).

2.4 Question 2c) T : R^3 → R^3 defined by T(a1, a2, a3) = (3a1 - 2a3, a2, 3a1 + 4a2).

Solution: T is an isomorphism. To see this, we notice that dim(R^3) = 3 = dim(R^3) so Theorem 2.5 implies that T is one-to-one if and only if T is onto. Therefore, to show that T is an isomorphism, we need only show that T is one-to-one. To see that T is one-to-one, suppose that (a1, a2, a3) ∈ ker(T). Then (0, 0, 0) = T(a1, a2, a3) = (3a1 - 2a3, a2, 3a1 + 4a2). Hence a2 = 0, 3a1 + 4a2 = 0 so a1 = 0, and 3a1 - 2a3 = 0 so a3 = 0. Hence (a1, a2, a3) = (0, 0, 0) so ker(T) = {~0}. Hence T is one-to-one and thus T is an isomorphism.

Solution: T cannot be invertible by Theorem 2.19 since dim(P_2(R)) = 3 ≠ 4 = dim(P_3(R)).

2.4 Question 2e) T : M_{2×2}(R) → P_2(R) defined by

T( [ a  b ] ) = a + 2bx + (c + d)x^2
   [ c  d ]

Solution: T cannot be invertible by Theorem 2.19 since dim(M_{2×2}(R)) = 4 ≠ 3 = dim(P_2(R)).

2.4 Question 2f) T : M_{2×2}(R) → M_{2×2}(R) defined by

T( [ a  b ] ) = [ a + b    a   ]
   [ c  d ]     [   c    c + d ]

Solution: T is an isomorphism. To see this, we notice that dim(M_{2×2}(R)) = 4 = dim(M_{2×2}(R)) so Theorem 2.5 implies that T is one-to-one if and only if T is onto. Therefore, to show that T is an isomorphism, we need only show that T is one-to-one. To see that T is one-to-one, suppose that

[ a  b ] ∈ ker(T)
[ c  d ]

Then

[ a + b    a   ]   [ 0  0 ]
[   c    c + d ] = [ 0  0 ]

so a = 0, a + b = 0 (whence b = 0), c = 0, and c + d = 0 (whence d = 0). Hence ker(T) = {0}, so T is one-to-one and thus T is an isomorphism.


2.4 Question 4) Let A and B be n × n invertible matrices. Prove that AB is invertible and (AB)^{-1} = B^{-1}A^{-1}.

Solution: Let A and B be invertible n × n matrices. Therefore there exist n × n matrices A^{-1} and B^{-1} such that A^{-1}A = I_n = AA^{-1} and B^{-1}B = I_n = BB^{-1}. To show that AB is invertible, we need to show that there exists an n × n matrix C such that C(AB) = I_n = (AB)C. To see this, let C = B^{-1}A^{-1}. Therefore

C(AB) = (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}I_n B = B^{-1}B = I_n

and

(AB)C = (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AI_n A^{-1} = AA^{-1} = I_n

Hence AB is invertible and (AB)^{-1} = C = B^{-1}A^{-1}.
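A quick exact-arithmetic check of the identity on a concrete pair of 2 × 2 matrices (inv2 is our own helper using the standard 2 × 2 inverse formula; any invertible pair would do):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # exact inverse of an invertible 2x2 matrix via the adjugate formula
    (a, b), (c, d) = M
    det = F(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 5]]
B = [[2, 1], [1, 1]]

assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))  # (AB)^-1 = B^-1 A^-1
```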

2.4 Question 5) Let A be an invertible n × n matrix. Prove that A^t is invertible and (A^t)^{-1} = (A^{-1})^t.

Solution: Let A be an invertible n × n matrix. Therefore there exists an n × n matrix A^{-1} such that A^{-1}A = I_n = AA^{-1}. To show that A^t is invertible, we need to show that there exists an n × n matrix C such that CA^t = I_n = A^t C. To see this, let C = (A^{-1})^t. Then

CA^t = (A^{-1})^t A^t = (AA^{-1})^t = (I_n)^t = I_n

and

A^t C = A^t (A^{-1})^t = (A^{-1}A)^t = (I_n)^t = I_n

Hence A^t is invertible and (A^t)^{-1} = C = (A^{-1})^t.

Solution: Let A, B ∈ M_n(F) be such that A is invertible and AB = 0. Then

0 = A^{-1}(0) = A^{-1}(AB) = (A^{-1}A)B = I_n B = B

as desired.

2.4 Question 9) Let A and B be n × n matrices such that AB is invertible. Prove that A and B are invertible. Give an example to show that arbitrary matrices A and B need not be invertible if AB is invertible.

Solution: By Corollary 2 of Theorem 2.18, an n × n matrix C is invertible if and only if L_C is invertible. Suppose A and B are n × n matrices such that AB is invertible. Therefore L_{AB} is invertible. Moreover, by Theorem 2.15(e), L_{AB} = L_A L_B. Hence, since L_{AB} is invertible, L_A L_B is one-to-one and onto. Therefore L_A is onto and L_B is one-to-one by Question 12 in Section 2.3. Therefore, since L_A, L_B : F^n → F^n and F^n is a finite dimensional vector space, Theorem 2.5 implies that L_A is one-to-one and L_B is onto. Hence L_A and L_B are bijective linear maps and thus L_A and L_B are invertible. Therefore A and B are invertible by Theorem 2.18.

Next we desire to give an example of non-invertible matrices A and B such that AB is invertible. Let

A = [ 1  0  0 ]      and      B = [ 1  0 ]
    [ 0  1  0 ]                   [ 0  1 ]
                                  [ 0  0 ]

Then

AB = [ 1  0 ]
     [ 0  1 ]

which is invertible. However, since A and B are not square matrices, A and B cannot be invertible (as L_A and L_B cannot be invertible since they map one vector space to another of a different dimension).
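The example can be checked directly (a small sketch; matmul is our own helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0, 0],
     [0, 1, 0]]        # 2x3, not square, hence not invertible
B = [[1, 0],
     [0, 1],
     [0, 0]]           # 3x2, not square, hence not invertible

assert matmul(A, B) == [[1, 0], [0, 1]]   # AB = I_2 is invertible
# BA is 3x3 but is NOT the identity (its last row is zero)
assert matmul(B, A) != [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```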

2.4 Question 10a and 10b) Let A and B be n × n matrices such that AB = I_n. Prove A and B are invertible with A = B^{-1}.

Solution: Since I_n is invertible and AB = I_n, Question 9 implies A and B are invertible. Moreover

A^{-1} = A^{-1}I_n = A^{-1}(AB) = (A^{-1}A)B = I_n B = B

and

B^{-1} = I_n B^{-1} = (AB)B^{-1} = A(BB^{-1}) = AI_n = A

as desired.

2.4 Question 12) Prove for any finite dimensional vector space V of dimension n with an ordered basis β that the linear map φ_β : V → F^n defined by φ_β(~v) = [~v]_β is an isomorphism.

Solution: Let β = {~v1, . . . , ~vn}. To see that φ_β is an isomorphism, we note that since dim(V) = n = dim(F^n), it suffices to prove that φ_β is one-to-one. Suppose ~v ∈ ker(φ_β). Then ~0_{F^n} = φ_β(~v) = [~v]_β. Therefore, by the definition of coordinates,

~v = 0~v1 + · · · + 0~vn = ~0_V.

Hence ker(φ_β) = {~0_V} so φ_β is an isomorphism.

2.4 Question 15) Let V and W be n-dimensional vector spaces and let T : V → W be a linear transformation. Suppose that β is a basis for V. Prove that T is an isomorphism if and only if T(β) is a basis for W.

Solution: Suppose T is an isomorphism. Therefore T is one-to-one and onto. Hence T(β) is a basis for W by Question 14c of Section 2.1.

Now suppose T is a linear map such that T(β) is a basis for W. Write β = {~v1, . . . , ~vn} and let ~wj = T(~vj) for all j = 1, . . . , n. Since dim(V) = dim(W), to show that T is an isomorphism, we need only show that T is one-to-one or T is onto by Theorem 2.5. We will demonstrate both proofs (although you only need to do one).

T is one-to-one: Suppose ~v ∈ ker(T). Then, as β is a basis for V, there exist λ1, . . . , λn ∈ F such that

~v = λ1~v1 + · · · + λn~vn

Hence, as ~v ∈ ker(T),

~0_W = T(~v) = λ1~w1 + · · · + λn~wn

However, since T(β) is a basis for W, {~w1, . . . , ~wn} is linearly independent and thus λj = 0 for all j = 1, . . . , n. Therefore ~v = λ1~v1 + · · · + λn~vn = ~0_V. Hence ker(T) = {~0_V} so T is one-to-one.

T is onto: Let ~w ∈ W be arbitrary. Since T(β) is a basis for W, there exist λ1, . . . , λn ∈ F such that

~w = λ1~w1 + · · · + λn~wn

Let

~v = λ1~v1 + · · · + λn~vn ∈ V

Then T(~v) = λ1~w1 + · · · + λn~wn = ~w. Hence ~w ∈ Im(T). Since ~w ∈ W was arbitrary, Im(T) = W. Hence T is onto.

Hence, as T is a one-to-one (or onto) linear map between vector spaces of the same dimension, T is an isomorphism as desired.

2.4 Question 16) Let B be an n × n invertible matrix. Define Φ : M_n(F) → M_n(F) by Φ(A) = B^{-1}AB. Prove that Φ is an isomorphism.

Solution: To prove that Φ is an isomorphism, we first need to check that Φ is linear. To see this, let λ ∈ F and A1, A2 ∈ M_n(F) be arbitrary. Then

Φ(λA1 + A2) = B^{-1}(λA1 + A2)B = λ(B^{-1}A1B) + (B^{-1}A2B) = λΦ(A1) + Φ(A2).

Thus, as λ ∈ F and A1, A2 ∈ M_n(F) were arbitrary, Φ is linear.

Since both the domain and codomain of Φ are finite dimensional vector spaces of the same dimension (specifically n^2), to prove that Φ is an isomorphism it suffices to prove that Φ is one-to-one. To see that Φ is one-to-one, suppose A ∈ ker(Φ). Then 0 = Φ(A) = B^{-1}AB. Therefore

0 = B(0)B^{-1} = B(B^{-1}AB)B^{-1} = (BB^{-1})A(BB^{-1}) = I_n A I_n = A.

Hence ker(Φ) = {0} so Φ is one-to-one and thus an isomorphism.

2.4 Question 17) Let V and W be finite-dimensional vector spaces, let T : V → W be an isomorphism, and let V0 be a subspace of V. Prove that T(V0) is a subspace of W such that dim(T(V0)) = dim(V0).

Solution: By Section 2.1 Question 20, T(V0) is a subspace of W. Therefore it remains only to show that dim(T(V0)) = dim(V0). To show this, we will repeat a proof that we did in the previous question.

Let β = {~v1, . . . , ~vn} be a basis for V0 and let ~wj = T(~vj) for all j = 1, . . . , n. We desire to show that T(β) = {~w1, . . . , ~wn} is a basis for T(V0). To see this, we first claim that T(β) is linearly independent. To see this, suppose there exist scalars λ1, . . . , λn ∈ F such that

λ1~w1 + · · · + λn~wn = ~0_W

Therefore

~0_W = λ1T(~v1) + · · · + λnT(~vn) = T(λ1~v1 + · · · + λn~vn)

Since T is one-to-one (being an isomorphism), ker(T) = {~0_V} so the above equation tells us λ1~v1 + · · · + λn~vn = ~0_V. Since β is a basis for V0, β is linearly independent so λ1~v1 + · · · + λn~vn = ~0_V implies λj = 0 for all j = 1, . . . , n. Hence T(β) is linearly independent.

To show that T(β) is a basis, we need to show that span(T(β)) = T(V0). To see this, let ~w ∈ T(V0) be arbitrary. By the definition of T(V0) there exists a vector ~v ∈ V0 such that T(~v) = ~w. Since β is a basis for V0, there exist scalars λ1, . . . , λn ∈ F such that

~v = λ1~v1 + · · · + λn~vn

Hence

~w = T(~v) = λ1~w1 + · · · + λn~wn ∈ span(T(β))

Hence, as ~w ∈ T(V0) was arbitrary, span(T(β)) = T(V0). Therefore T(β) is a basis for T(V0) as desired. Since T(β) has n vectors, dim(T(V0)) = n = dim(V0) as desired.


2.4 Question 20) Let T : V → W be a linear transformation from an n-dimensional vector space V to an m-dimensional vector space W. Let β and γ be ordered bases for V and W respectively. Prove that rank(T) = rank(L_A) and that nullity(T) = nullity(L_A) where A = [T]_β^γ.

Solution: Since L_A : F^n → F^m, the Dimension Theorem implies that rank(L_A) + nullity(L_A) = dim(F^n) = n. Since T : V → W, the Dimension Theorem implies that rank(T) + nullity(T) = dim(V) = n. Hence rank(L_A) + nullity(L_A) = rank(T) + nullity(T). Therefore, it is easy to see that rank(T) = rank(L_A) if and only if nullity(T) = nullity(L_A). Hence we need only show one of these equalities.

Define φ_β : V → F^n and φ_γ : W → F^m by φ_β(~v) = [~v]_β and φ_γ(~w) = [~w]_γ for all ~v ∈ V and ~w ∈ W. By Question 12 of Section 2.4, φ_β and φ_γ are isomorphisms.

We claim that L_A ◦ φ_β = φ_γ ◦ T. To show this, it suffices to show that L_A(φ_β(~v)) = φ_γ(T(~v)) for all ~v ∈ V. However, if ~v ∈ V then

L_A(φ_β(~v)) = L_A([~v]_β) = A[~v]_β = [T]_β^γ [~v]_β = [T(~v)]_γ = φ_γ(T(~v))

where the fourth equality holds by Theorem 2.14 and the remaining equalities hold by definition.

To complete the problem, we need only show that rank(T) = rank(L_A) or show that nullity(T) = nullity(L_A). We will present a proof of each (and you would only need to provide one of them).

Proof that nullity(T) = nullity(L_A): To prove this direction, we first claim that ker(L_A) = φ_β(ker(T)). To see this, let β = {~v1, . . . , ~vn}, let (a1, . . . , an) ∈ F^n be arbitrary, and let ~v = a1~v1 + · · · + an~vn so that φ_β(~v) = (a1, . . . , an). Then (a1, . . . , an) ∈ ker(L_A) if and only if L_A((a1, . . . , an)) = ~0_{F^m} if and only if L_A(φ_β(~v)) = ~0_{F^m} if and only if φ_γ(T(~v)) = ~0_{F^m} if and only if T(~v) = ~0_W (since φ_γ is invertible so ker(φ_γ) = {~0_W}). Hence φ_β(~v) = (a1, . . . , an) is in the kernel of L_A if and only if ~v is in the kernel of T. Thus ker(L_A) = φ_β(ker(T)) as claimed. Hence nullity(T) = nullity(L_A) by Question 17 of Section 2.4.

Proof that rank(T) = rank(L_A): First we claim that Im(L_A) = φ_γ(Im(T)). To see this, we notice that

Im(L_A) = {L_A(~x) | ~x ∈ F^n}             (by definition)
        = {L_A(φ_β(~v)) | ~v ∈ V}          (since φ_β is onto)
        = {φ_γ(T(~v)) | ~v ∈ V}            (by the claim above)
        = {φ_γ(~w) | ~w ∈ Im(T)}           (by the definition of Im(T))
        = φ_γ(Im(T)).

Since φ_γ is an isomorphism, Question 17 of Section 2.4 implies rank(L_A) = dim(φ_γ(Im(T))) = dim(Im(T)) = rank(T), as desired.

2.4 Question 24) Let V and Z be vector spaces over a field F, let W be a subspace of V, and let q : V → V/W be the canonical quotient map. Let T : V → Z be a linear map such that W ⊆ ker(T). Prove the following:

1. There exists a unique linear map R : V/W → Z such that T = R ◦ q. (Hint: The equation T = R ◦ q tells you how to define R. Check that this definition is well-defined and gives a linear map with the desired property. Then show any other linear map with this property must be R.)

2. The map R from part (1) is one-to-one if and only if W = ker(T).

3. The map R from part (1) is onto if and only if T is onto.

Proof of 1): We desire to define the map R : V/W → Z by R(~v + W) = T(~v) for all ~v ∈ V. However, to define R in this way, it is necessary to check that R is well-defined; that is, if ~v1 + W = ~v2 + W for some ~v1, ~v2 ∈ V, then T(~v1) = T(~v2). Thus suppose ~v1, ~v2 ∈ V are such that ~v1 + W = ~v2 + W. Then ~v1 - ~v2 ∈ W by the proof given in discussion. Since W ⊆ ker(T), we obtain that

~0_Z = T(~v1 - ~v2) = T(~v1) - T(~v2).

Hence T(~v1) = T(~v2) so R is a well-defined map.

To see that R is a linear map, let λ ∈ F and ~v1, ~v2 ∈ V be arbitrary. Then

R(λ(~v1 + W) + (~v2 + W)) = R((λ~v1 + ~v2) + W)        (by the operations in V/W)
                          = T(λ~v1 + ~v2)              (by the definition of R)
                          = λT(~v1) + T(~v2)           (since T is linear)
                          = λR(~v1 + W) + R(~v2 + W)   (by the definition of R).

To see that T = R ◦ q, notice for all ~v ∈ V that

(R ◦ q)(~v) = R(q(~v)) = R(~v + W) = T(~v).

Therefore, as ~v ∈ V was arbitrary, R ◦ q = T as desired.

To see that R : V/W → Z is the unique linear map such that R ◦ q = T, suppose S : V/W → Z is another map such that S ◦ q = T. To see that S = R, let ~v ∈ V be arbitrary. Then

S(~v + W) = S(q(~v)) = (S ◦ q)(~v) = T(~v) = R(~v + W).

Therefore, as ~v ∈ V was arbitrary, S = R as desired.

Proof of 2): Let ~v ∈ V be arbitrary. Notice that ~v + W ∈ ker(R) if and only if R(~v + W) = ~0_Z if and only if T(~v) = ~0_Z if and only if ~v ∈ ker(T).

Suppose ker(T) ≠ W. Then, as W ⊆ ker(T), there exists a vector ~v ∈ ker(T) such that ~v ∉ W. Hence ~v + W ≠ ~0_V + W yet ~v + W ∈ ker(R) as ~v ∈ ker(T). Hence R is not one-to-one if ker(T) ≠ W.

Suppose R is not one-to-one. Then there exists a vector ~v + W ∈ ker(R) such that ~v + W ≠ ~0_V + W. Hence ~v ∉ W. However T(~v) = ~0_Z since ~v + W ∈ ker(R). Therefore ~v ∈ ker(T) \ W so ker(T) ≠ W.

Proof of 3): Suppose T is onto. Let ~z ∈ Z be arbitrary. Since T is onto, ~z ∈ Im(T) so there exists a vector ~v ∈ V such that T(~v) = ~z. Hence R(~v + W) = T(~v) = ~z so ~z ∈ Im(R). Since ~z ∈ Z was arbitrary, R is onto.

Suppose R is onto. Let ~z ∈ Z be arbitrary. Since R is onto, ~z ∈ Im(R) so there exists a vector ~v ∈ V such that R(~v + W) = ~z. Hence T(~v) = R(~v + W) = ~z so ~z ∈ Im(T). Since ~z ∈ Z was arbitrary, T is onto.

2.5 Question 4) Let T be the linear operator on R^2 defined by T(a, b) = (2a + b, a - 3b), let β be the standard ordered basis for R^2, and let β′ = {(1, 1), (1, 2)}. Use Theorem 2.23 and the fact that

[ 1  1 ]^{-1}   [  2  -1 ]
[ 1  2 ]      = [ -1   1 ]

to find [T]_{β′}.

Solution: By Theorem 2.23, [T]_{β′} = ([I]_{β′}^{β})^{-1} [T]_β [I]_{β′}^{β} where [I]_{β′}^{β} is the change of basis matrix that takes β′-coordinates to β-coordinates. To compute [T]_β, we must compute [T(1, 0)]_β and [T(0, 1)]_β (where β is the standard ordered basis for R^2). However T(1, 0) = (2, 1) and T(0, 1) = (1, -3). Thus, as β is the standard basis,

[T]_β = [ 2   1 ]
        [ 1  -3 ]

Moreover

[I]_{β′}^{β} = [ [(1, 1)]_β  [(1, 2)]_β ] = [ 1  1 ]
                                            [ 1  2 ]

so, by the given fact,

([I]_{β′}^{β})^{-1} = [  2  -1 ]
                      [ -1   1 ]

Hence

[T]_{β′} = [  2  -1 ] [ 2   1 ] [ 1  1 ]   [  3   5 ] [ 1  1 ]   [  8  13 ]
           [ -1   1 ] [ 1  -3 ] [ 1  2 ] = [ -1  -4 ] [ 1  2 ] = [ -5  -9 ]
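The whole computation can be verified in a few lines (illustrative only; matmul is our own helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Q      = [[1, 1], [1, 2]]    # the change of basis matrix [I] from beta' to beta
Q_inv  = [[2, -1], [-1, 1]]
T_beta = [[2, 1], [1, -3]]

assert matmul(Q, Q_inv) == [[1, 0], [0, 1]]   # the given fact checks out

T_beta_prime = matmul(matmul(Q_inv, T_beta), Q)
assert T_beta_prime == [[8, 13], [-5, -9]]
```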

2.5 Question 8) Prove the following generalization of Theorem 2.23. Let T : V → W be a linear transformation from a finite-dimensional vector space V to a finite-dimensional vector space W. Let β and β′ be ordered bases for V and let γ and γ′ be ordered bases for W. Then [T]_{β′}^{γ′} = P^{-1} [T]_β^γ Q where Q is the matrix that changes β′-coordinates into β-coordinates and P is the matrix that changes γ′-coordinates into γ-coordinates.

Solution: The main tool is Theorem 2.11. To begin, we recall that P = [I_W]_{γ′}^{γ} and Q = [I_V]_{β′}^{β}. Hence P^{-1} = [I_W]_{γ}^{γ′} so, by Theorem 2.11,

P^{-1} [T]_β^γ Q = [I_W]_{γ}^{γ′} [T]_β^γ [I_V]_{β′}^{β} = [I_W ◦ T ◦ I_V]_{β′}^{γ′} = [T]_{β′}^{γ′}

as desired.

2.5 Question 10) Prove that if A and B are similar n × n matrices then tr(A) = tr(B).

Solution: By Question 13 of Section 2.3, we know that tr(CD) = tr(DC) for all n × n matrices C and D. However, if A and B are similar n × n matrices, there exists an invertible n × n matrix Q such that A = QBQ^{-1}. Therefore

tr(A) = tr(QBQ^{-1}) = tr((QB)(Q^{-1})) = tr((Q^{-1})(QB)) = tr(I_n B) = tr(B)

which completes the proof.

2.5 Question 11a) Let V be a finite dimensional vector space with ordered bases α, β, and γ. Prove that if Q and R are the change of coordinate matrices that change α-coordinates into β-coordinates and β-coordinates into γ-coordinates respectively, then RQ is the change of coordinate matrix that changes α-coordinates into γ-coordinates.

Solution: Let I : V → V be the identity map. Then Q = [I]_α^β and R = [I]_β^γ. Hence RQ = [I]_β^γ [I]_α^β = [I]_α^γ, which is the change of coordinate matrix that changes α-coordinates into γ-coordinates.


2.5 Question 11b) Let V be a finite dimensional vector space with ordered bases α and β. Prove that if Q changes α-coordinates into β-coordinates, then Q^{-1} changes β-coordinates into α-coordinates.

Solution: Let I : V → V be the identity map. Then Q = [I]_α^β. Let R = [I]_β^α, which is the change of coordinate matrix that changes β-coordinates into α-coordinates. Then

QR = [I]_α^β [I]_β^α = [I]_β^β = I_n

and

RQ = [I]_β^α [I]_α^β = [I]_α^α = I_n.

Hence Q^{-1} = R = [I]_β^α, which changes β-coordinates into α-coordinates.

2.5 Question 13) Let V be a finite dimensional vector space over a field F and let β = {~x1, . . . , ~xn} be an ordered basis for V. Let Q be an n × n invertible matrix with entries from F. Define

~yj = Σ_{i=1}^n Q_{i,j} ~xi

for all 1 ≤ j ≤ n and set β′ = {~y1, . . . , ~yn}. Prove that β′ is a basis for V and that Q is the change of coordinate matrix changing β′-coordinates to β-coordinates.

Solution: Since and 0 have the same number of elements, it suffices, by a corollary of the Replacement Theorem, to prove that 0 is linearly independent. To begin, suppose a1 , . . . , an F are such that

a1 ~y1 + + an ~yn = ~0V .

Then, by substituting the definition of the ~yj s, we obtain that

!

n

n

n

n

X

X

X

X

~0V =

aj

Qi,j ~xi =

Qi,j aj ~vi .

j=1

i=1

i=1

j=1

Pn

Therefore, since is a basis for V and thus linearly independent, the above equation implies j=1 Qi,j aj = 0

for all i. However, this implies that LQ (a1 , a2 , . . . , an ) = 0 (i.e. multiplying the column vector (a1 , a2 , . . . , an )T

by Q is the zero column vector). Since Q is invertible, LQ is one-to-one and thus (a1 , a2 , . . . , an ) =

(0, 0, . . . , 0). Hence aj = 0 for all j. Thus 0 is a linearly independent set and thus a basis.

To see that Q is the change of coordinate matrix changing 0 -coordinates to -coordinates, notice that

[~yj ] = (Q1,j , Q2,j , . . . , Qn,j )T .

Hence Q = [Qi,j ] = [I] 0 as desired.
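As a concrete sketch of Question 13 (a hypothetical 2 × 2 example, not from the text), the following pure-Python snippet takes β to be the standard basis of R^2, builds β' from an invertible Q via ~y_j = Σ_i Q_{i,j} ~x_i, and checks that multiplying β'-coordinates by Q yields β-coordinates:

```python
# Hypothetical example: beta = standard basis of R^2, Q invertible.
Q = [[2, 1],
     [3, -1]]  # invertible since det = -5 != 0

x = [[1, 0], [0, 1]]  # beta: standard basis vectors of R^2

# y_j = sum_i Q[i][j] x_i (here these are simply the columns of Q)
y = [[sum(Q[i][j] * x[i][k] for i in range(2)) for k in range(2)]
     for j in range(2)]

a = [4, -7]                                             # beta'-coordinates of v
v = [a[0] * y[0][k] + a[1] * y[1][k] for k in range(2)] # v = a_1 y_1 + a_2 y_2
Qa = [sum(Q[i][j] * a[j] for j in range(2)) for i in range(2)]

print(v, Qa)   # both are [1, 19]: Q converts beta'-coordinates to beta-coordinates
```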

2.5 Question 14) Let A, B ∈ M_{m×n}(F). Suppose there exists an invertible m × m matrix P with entries in F and an invertible n × n matrix Q with entries in F such that B = P^{-1}AQ. Prove there exists an n-dimensional vector space V over F and an m-dimensional vector space W over F, ordered bases β and β' for V and γ and γ' for W , and a linear transformation T : V → W such that A = [T]_β^γ and B = [T]_{β'}^{γ'}.

Solution: Let V = F^n, let W = F^m, let β be the standard basis for V , let γ be the standard basis for W , and let T : V → W be the linear map T = L_A. Therefore, by Theorem 2.15a of the text, [T]_β^γ = A.

By Section 2.5 Question 13, there exist bases β' for V and γ' for W such that Q = [I]_{β'}^β and P = [I]_{γ'}^γ. Thus

[T]_{β'}^{γ'} = ([I]_{γ'}^γ)^{-1} [T]_β^γ [I]_{β'}^β = P^{-1}AQ = B

as desired.


2.6 Question 9) Prove that a function T : F^n → F^m is linear if and only if there exist f_1, f_2, . . . , f_m ∈ (F^n)* such that T(~x) = (f_1(~x), f_2(~x), . . . , f_m(~x)) for all ~x ∈ F^n.

Solution: First suppose that T : F^n → F^m is linear. Let β be the standard ordered basis for F^m and let β* = {g_1, . . . , g_m} be the dual basis. Then, as T is linear, f_j = g_j ◦ T ∈ (F^n)* for all j ∈ {1, . . . , m}. However, if ~x ∈ F^n and T(~x) = (a_1, . . . , a_m), then

a_j = g_j(a_1, . . . , a_m) = g_j(T(~x)) = f_j(~x)

for all j ∈ {1, . . . , m}. Hence T(~x) = (f_1(~x), f_2(~x), . . . , f_m(~x)). Since ~x ∈ F^n was arbitrary, T(~x) = (f_1(~x), f_2(~x), . . . , f_m(~x)) for all ~x ∈ F^n.

For the other direction, suppose there exist f_1, f_2, . . . , f_m ∈ (F^n)* such that T(~x) = (f_1(~x), f_2(~x), . . . , f_m(~x)) for all ~x ∈ F^n. To see that T is linear, let c ∈ F and ~x, ~y ∈ F^n be arbitrary. Then, as each f_j is linear,

T(c~x + ~y) = (f_1(c~x + ~y), . . . , f_m(c~x + ~y))
= (cf_1(~x) + f_1(~y), cf_2(~x) + f_2(~y), . . . , cf_m(~x) + f_m(~y))
= c(f_1(~x), f_2(~x), . . . , f_m(~x)) + (f_1(~y), f_2(~y), . . . , f_m(~y))
= cT(~x) + T(~y).

Hence T is linear.

2.6 Question 14) Prove that if W is a subspace of a finite dimensional vector space V , then dim(W) + dim(W^0) = dim(V) where W^0 = {f ∈ V* | f(~x) = 0 for all ~x ∈ W}.

Solution: Let W be a subspace of a finite dimensional vector space V . Let α = {~v_1, . . . , ~v_k} be a basis for W . Thus dim(W) = k. By a corollary of the Replacement Theorem, α extends to a basis β = {~v_1, . . . , ~v_n} for V . Thus dim(V) = n. Let β* = {f_1, . . . , f_n} be the dual basis of β.

We claim that {f_{k+1}, . . . , f_n} is a basis for W^0. To begin, we claim that each element of {f_{k+1}, . . . , f_n} is an element of W^0. To see this, let ~w ∈ W be arbitrary. Therefore, as α is a basis for W , there exist scalars a_1, . . . , a_k ∈ F such that ~w = a_1~v_1 + · · · + a_k~v_k. Therefore, for all j ∈ {k + 1, . . . , n},

f_j(~w) = f_j(a_1~v_1 + · · · + a_k~v_k) = a_1 f_j(~v_1) + · · · + a_k f_j(~v_k) = 0

by the definition of the dual basis. Hence f_j(~w) = 0 for all j ∈ {k + 1, . . . , n} and for all ~w ∈ W . Hence f_j ∈ W^0 for all j ∈ {k + 1, . . . , n}.

To see that {f_{k+1}, . . . , f_n} is a basis for W^0, we note that β* is a linearly independent set and thus {f_{k+1}, . . . , f_n} is linearly independent. Hence it suffices to show that span({f_{k+1}, . . . , f_n}) = W^0. Clearly span({f_{k+1}, . . . , f_n}) ⊆ W^0 as {f_{k+1}, . . . , f_n} ⊆ W^0 and W^0 is a subspace of V*. Let f ∈ W^0 be arbitrary. Since β* is a basis for V* and f ∈ V*, there exist scalars a_1, . . . , a_n ∈ F such that f = a_1 f_1 + · · · + a_n f_n. However, since f ∈ W^0 and α is contained in W , f(~v_j) = 0 for all j ∈ {1, . . . , k}. Therefore

0 = f(~v_j) = a_1 f_1(~v_j) + · · · + a_n f_n(~v_j) = a_j

(by the definition of the dual basis) for all j ∈ {1, . . . , k}. Hence f = a_{k+1} f_{k+1} + · · · + a_n f_n ∈ span({f_{k+1}, . . . , f_n}). Therefore, as f ∈ W^0 was arbitrary, span({f_{k+1}, . . . , f_n}) = W^0 as desired. Hence {f_{k+1}, . . . , f_n} is a basis for W^0.

Therefore dim(W^0) = n − k so dim(V) = n = k + (n − k) = dim(W) + dim(W^0) as desired.


2.6 Question 19) Let V be a non-zero vector space and let W be a proper subspace of V (that is, W ≠ V ). Prove that there exists a non-zero linear functional f ∈ V* such that f(~x) = 0 for all ~x ∈ W .

Solution: Let α be a basis for W . By Theorem 1.13 there exists a basis β of V containing α. Since V ≠ W and α is a basis of W , there exists a vector ~y ∈ β \ α. By Question 34 in Section 2.1, there exists a linear map f : V → F such that f(~y) = 1 yet f(~x) = 0 for all ~x ∈ β \ {~y}. Therefore f is a non-zero linear functional as f(~y) = 1. However, as f(~x) = 0 for all ~x ∈ β \ {~y}, f(~x) = 0 for all ~x ∈ α. Therefore, as α is a basis for W and f is linear, f(~x) = 0 for all ~x ∈ W as desired.


Chapter Four

4.4 Question 5) Suppose that M ∈ M_{n×n}(F) can be written in the form

M = [ A  B ; 0_{p×m}  I_p ]

where A ∈ M_{m×m}(F), B ∈ M_{m×p}(F), 0_{p×m} ∈ M_{p×m}(F) is the zero matrix, and I_p ∈ M_{p×p}(F) is the p × p identity matrix (and thus n = m + p). Prove that det(M) = det(A).

Solution: We will proceed by the Principle of Mathematical Induction on p. When p = 0, the result is trivial. Therefore, suppose we have proven that the result holds for some p ∈ N ∪ {0} and we desire to prove the result for p + 1. Suppose M ∈ M_{n×n}(F) can be written in the form

M = [ A  B ; 0_{(p+1)×m}  I_{p+1} ]

where A ∈ M_{m×m}(F), B ∈ M_{m×(p+1)}(F), 0_{(p+1)×m} ∈ M_{(p+1)×m}(F) is the zero matrix, and I_{p+1} ∈ M_{(p+1)×(p+1)}(F) is the (p + 1) × (p + 1) identity matrix. The last row of M has a single non-zero entry, namely a 1 in the (n, n) position. By expanding along the last row of M , we obtain that

det(M) = (-1)^{n+n} (1) det [ A  B~ ; 0_{p×m}  I_p ]

where A ∈ M_{m×m}(F), B~ ∈ M_{m×p}(F) is the matrix obtained by removing the (p + 1)st column of B, 0_{p×m} ∈ M_{p×m}(F) is the zero matrix, and I_p ∈ M_{p×p}(F) is the p × p identity matrix. Therefore, by the inductive hypothesis, we obtain that

det(M) = det [ A  B~ ; 0_{p×m}  I_p ] = det(A)

as desired. Hence, by the Principle of Mathematical Induction, the result is true.

4.4 Question 6) Suppose that M ∈ M_{n×n}(F) can be written in the form

M = [ A  B ; 0_{p×m}  C ]

where A ∈ M_{m×m}(F), B ∈ M_{m×p}(F), 0_{p×m} ∈ M_{p×m}(F) is the zero matrix, and C ∈ M_{p×p}(F) (and thus n = m + p). Prove that det(M) = det(A)det(C).

Solution: It is possible to verify using the definition of matrix multiplication that

M = [ A  B ; 0_{p×m}  C ] = [ I_m  0_{m×p} ; 0_{p×m}  C ] [ A  B ; 0_{p×m}  I_p ].

Therefore, since the determinant of a product of matrices is the product of the determinants, we see that

det(M) = det [ I_m  0_{m×p} ; 0_{p×m}  C ] det [ A  B ; 0_{p×m}  I_p ].

From Question 5 as proven above, we see that

det [ A  B ; 0_{p×m}  I_p ] = det(A)

and, by an argument similar to Question 5 (expanding along the first row instead of the last),

det [ I_m  0_{m×p} ; 0_{p×m}  C ] = det(C).

Hence det(M) = det(A)det(C) as desired.
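A quick numeric sanity check of Question 6 (a hypothetical 4 × 4 example, not from the text), using a small cofactor-expansion determinant in pure Python:

```python
# Check det([A B; 0 C]) = det(A)det(C) on a concrete 2+2 block example.
def det(M):
    # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

A = [[1, 2], [3, 4]]   # det(A) = -2
C = [[5, 1], [2, 1]]   # det(C) = 3
B = [[7, 8], [9, 6]]   # arbitrary upper-right block

M = [A[0] + B[0], A[1] + B[1], [0, 0] + C[0], [0, 0] + C[1]]
print(det(M), det(A) * det(C))   # both equal -6
```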


Chapter Five

For Question 3 of Section 5.1, for each matrix A ∈ M_{n×n}(F), determine all eigenvalues of A and for each eigenvalue λ of A, find the set of eigenvectors corresponding to λ. If possible, find a basis of F^n consisting of eigenvectors of A and determine an invertible matrix Q and a diagonal matrix D such that Q^{-1}AQ = D.

5.1 Question 3a) A = [ 1  2 ; 3  2 ] for F = R.

Solution: We desire to find the eigenvalues and eigenvectors of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det(λI - A) = det [ λ-1  -2 ; -3  λ-2 ] = (λ - 1)(λ - 2) - 6 = λ^2 - 3λ - 4 = (λ - 4)(λ + 1).

Thus the eigenvalues of A are 4 and -1.

Now we shall compute the eigenspaces. We notice that

E_4 = ker(4I - A) = ker [ 3  -2 ; -3  2 ] = ker [ 3  -2 ; 0  0 ] = span({(2, 3)})

where the third equality comes from a simple row reduction. Therefore {(2, 3)} is a basis for E_4.

Next we notice that

E_{-1} = ker(-I - A) = ker [ -2  -2 ; -3  -3 ] = ker [ 1  1 ; 0  0 ] = span({(1, -1)})

where the third equality comes from a simple row reduction. Therefore {(1, -1)} is a basis for E_{-1}.

Combining these two bases, we see that β = {(2, 3), (1, -1)} is an eigenbasis for A (and thus a basis for R^2 consisting of eigenvectors). Finally, to find the desired matrices Q and D, we notice that, if σ is the standard basis for R^2, then Q^{-1}AQ = D where

D = [ 4  0 ; 0  -1 ]

and Q is the change of basis matrix that takes β-coordinates to σ-coordinates. Therefore, by using the definition of the change of basis matrix,

Q = [ 2  1 ; 3  -1 ]

which completes the problem.
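The eigenpairs in Question 3a can be checked numerically: A~v = λ~v for each claimed eigenpair, and AQ = QD (which is equivalent to Q^{-1}AQ = D without computing an inverse). A minimal pure-Python sketch:

```python
# Verify the eigenpairs of A = [1 2; 3 2] and the relation AQ = QD.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N))) for j in range(len(N[0]))]
            for i in range(len(M))]

A = [[1, 2], [3, 2]]
print(matvec(A, [2, 3]))    # [8, 12] = 4 * (2, 3)
print(matvec(A, [1, -1]))   # [-1, 1] = -1 * (1, -1)

Q = [[2, 1], [3, -1]]
D = [[4, 0], [0, -1]]
print(matmul(A, Q) == matmul(Q, D))   # True
```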

5.1 Question 3b) A = [ 0  -2  -3 ; -1  1  -1 ; 2  2  5 ] for F = R.

Solution: We desire to find the eigenvalues and eigenvectors of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det(λI - A) = det [ λ  2  3 ; 1  λ-1  1 ; -2  -2  λ-5 ]
= λ(λ - 1)(λ - 5) + (2)(1)(-2) + (3)(1)(-2) - (3)(λ - 1)(-2) - (2)(1)(λ - 5) - λ(1)(-2)
= (λ^3 - 6λ^2 + 5λ) - 4 - 6 + (6λ - 6) - (2λ - 10) + 2λ
= λ^3 - 6λ^2 + 11λ - 6.

Therefore, to find the eigenvalues of A, we need to factor f_A. One way is to notice that f_A(1) = 0 (we would try 1 as 1 divides 6 (see the Rational Roots Theorem)) and then use long division of polynomials. Another way is to notice that

f_A(λ) = (λ^3 - 6λ^2 + 5λ) + (6λ - 6)
= λ(λ - 5)(λ - 1) + 6(λ - 1)
= (λ - 1)(λ(λ - 5) + 6)
= (λ - 1)(λ^2 - 5λ + 6) = (λ - 1)(λ - 2)(λ - 3).

Thus the eigenvalues of A are 1, 2, and 3.

Now we shall compute the eigenspaces. We notice that

E_3 = ker(3I - A) = ker [ 3  2  3 ; 1  2  1 ; -2  -2  -2 ] = ker [ 1  0  1 ; 0  1  0 ; 0  0  0 ] = span({(1, 0, -1)})

where the second equality comes from a simple row reduction. Therefore {(1, 0, -1)} is a basis for E_3.

Next we notice that

E_2 = ker(2I - A) = ker [ 2  2  3 ; 1  1  1 ; -2  -2  -3 ] = ker [ 1  1  0 ; 0  0  1 ; 0  0  0 ] = span({(-1, 1, 0)})

where the second equality comes from a simple row reduction. Therefore {(-1, 1, 0)} is a basis for E_2.

Finally, we notice that

E_1 = ker(I - A) = ker [ 1  2  3 ; 1  0  1 ; -2  -2  -4 ] = ker [ 1  0  1 ; 0  1  1 ; 0  0  0 ] = span({(-1, -1, 1)})

where the second equality comes from a simple row reduction. Therefore {(-1, -1, 1)} is a basis for E_1.

Combining these three bases, we see that β = {(1, 0, -1), (-1, 1, 0), (-1, -1, 1)} is an eigenbasis for A (and thus a basis for R^3 consisting of eigenvectors) (note that eigenvectors corresponding to distinct eigenvalues are automatically linearly independent). Finally, to find the desired matrices Q and D, we notice that, if σ is the standard basis for R^3, then Q^{-1}AQ = D where

D = [ 3  0  0 ; 0  2  0 ; 0  0  1 ]

and Q is the change of basis matrix that takes β-coordinates to σ-coordinates. Therefore, by using the definition of the change of basis matrix,

Q = [ 1  -1  -1 ; 0  1  -1 ; -1  0  1 ]

which completes the problem.
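A numeric check of Question 3b: assuming A = [[0, -2, -3], [-1, 1, -1], [2, 2, 5]], each claimed eigenpair satisfies A~v = λ~v.

```python
# Verify the three eigenpairs of the 3x3 matrix in Question 3b.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[0, -2, -3], [-1, 1, -1], [2, 2, 5]]
pairs = [(3, [1, 0, -1]), (2, [-1, 1, 0]), (1, [-1, -1, 1])]
for lam, v in pairs:
    print(matvec(A, v) == [lam * x for x in v])   # True for each pair
```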


5.1 Question 3c) A = [ i  1 ; 2  -i ] for F = C.

Solution: We desire to find the eigenvalues and eigenvectors of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det(λI - A) = det [ λ-i  -1 ; -2  λ+i ] = (λ^2 + 1) - 2 = λ^2 - 1 = (λ - 1)(λ + 1).

Thus the eigenvalues of A are 1 and -1.

Now we shall compute the eigenspaces. We notice that

E_1 = ker(I - A) = ker [ 1-i  -1 ; -2  1+i ] = ker [ 1-i  -1 ; 0  0 ] = span({(1, 1 - i)})

where the third equality comes from a simple row reduction. Therefore {(1, 1 - i)} is a basis for E_1.

Next we notice that

E_{-1} = ker(-I - A) = ker [ -1-i  -1 ; -2  -1+i ] = ker [ 1+i  1 ; 0  0 ] = span({(1, -1 - i)})

where the third equality comes from a simple row reduction. Therefore {(1, -1 - i)} is a basis for E_{-1}.

Combining these two bases, we see that β = {(1, 1 - i), (1, -1 - i)} is an eigenbasis for A (and thus a basis for C^2 consisting of eigenvectors). Finally, to find the desired matrices Q and D, we notice that, if σ is the standard basis for C^2, then Q^{-1}AQ = D where

D = [ 1  0 ; 0  -1 ]

and Q is the change of basis matrix that takes β-coordinates to σ-coordinates. Therefore, by using the definition of the change of basis matrix,

Q = [ 1  1 ; 1-i  -1-i ]

which completes the problem.
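A numeric check of Question 3c using Python's built-in complex numbers (exact for Gaussian-integer arithmetic): A~v = λ~v for both claimed eigenpairs.

```python
# Verify the eigenpairs of A = [i 1; 2 -i] over C.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1j, 1], [2, -1j]]
pairs = [(1, [1, 1 - 1j]), (-1, [1, -1 - 1j])]
for lam, v in pairs:
    print(matvec(A, v) == [lam * x for x in v])   # True for each pair
```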

5.1 Question 3d) A = [ 2  0  -1 ; 4  1  -4 ; 2  0  -1 ] for F = R.

Solution: We desire to find the eigenvalues and eigenvectors of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det [ λ-2  0  1 ; -4  λ-1  4 ; -2  0  λ+1 ]
= (λ - 2)(λ - 1)(λ + 1) + 0 + 0 - 1(λ - 1)(-2) - 0 - 0
= (λ - 1)((λ - 2)(λ + 1) + 2)
= (λ - 1)(λ^2 - λ)
= λ(λ - 1)^2.

Thus the eigenvalues of A are 1 and 0.

Now we shall compute the eigenspaces. We notice that

E_1 = ker(I - A) = ker [ -1  0  1 ; -4  0  4 ; -2  0  2 ] = ker [ 1  0  -1 ; 0  0  0 ; 0  0  0 ] = span({(1, 0, 1), (0, 1, 0)})

where the second equality comes from a simple row reduction. Therefore {(1, 0, 1), (0, 1, 0)} is a basis for E_1 (as they are clearly linearly independent).

Next we notice that

E_0 = ker(0I - A) = ker(A) = ker [ 2  0  -1 ; 4  1  -4 ; 2  0  -1 ] = ker [ 2  0  -1 ; 0  1  -2 ; 0  0  0 ] = span({(1, 4, 2)})

where the last equality comes from a simple row reduction. Therefore {(1, 4, 2)} is a basis for E_0.

Combining these two bases, we see that β = {(1, 0, 1), (0, 1, 0), (1, 4, 2)} is an eigenbasis for A (and thus a basis for R^3 consisting of eigenvectors) (note that eigenvectors corresponding to distinct eigenvalues are automatically linearly independent). Finally, to find the desired matrices Q and D, we notice that, if σ is the standard basis for R^3, then Q^{-1}AQ = D where

D = [ 1  0  0 ; 0  1  0 ; 0  0  0 ]

and Q is the change of basis matrix that takes β-coordinates to σ-coordinates. Therefore, by using the definition of the change of basis matrix,

Q = [ 1  0  1 ; 0  1  4 ; 1  0  2 ]

which completes the problem.
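A numeric check of Question 3d, including the repeated eigenvalue 1 with its two independent eigenvectors:

```python
# Verify the eigenpairs of A = [2 0 -1; 4 1 -4; 2 0 -1].
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[2, 0, -1], [4, 1, -4], [2, 0, -1]]
pairs = [(1, [1, 0, 1]), (1, [0, 1, 0]), (0, [1, 4, 2])]
for lam, v in pairs:
    print(matvec(A, v) == [lam * x for x in v])   # True for each pair
```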

For Question 4 of Section 5.1, for each linear operator T on V , find the eigenvalues of T and an ordered basis β for V such that [T]_β is a diagonal matrix.

5.1 Question 4a) V = R^2 and T(a, b) = (-2a + 3b, -10a + 9b).

Solution: There are several ways to solve this problem. We will demonstrate one way (all solutions are computationally similar). The main idea is to apply Question 6 (which is proven below). Let σ be the standard ordered basis of R^2. Then

[T]_σ = [ -2  3 ; -10  9 ].

Hence (by Question 6), λ is an eigenvalue of T if and only if λ is an eigenvalue of A = [T]_σ. Thus we desire to find the eigenvalues of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det(λI - A) = det [ λ+2  -3 ; 10  λ-9 ] = (λ + 2)(λ - 9) + 30 = λ^2 - 7λ + 12 = (λ - 3)(λ - 4).

Thus the eigenvalues of A and T are 3 and 4.

By my addition to Question 6, to compute a basis of eigenvectors of T we will compute a basis of eigenvectors for A. Thus we will compute the eigenspaces of A. We notice that

E_4(A) = ker(4I - A) = ker [ 6  -3 ; 10  -5 ] = ker [ 2  -1 ; 0  0 ] = span({(1, 2)})

where the third equality comes from a simple row reduction. Therefore {(1, 2)} is a basis for E_4(A).

Next we notice that

E_3(A) = ker(3I - A) = ker [ 5  -3 ; 10  -6 ] = ker [ 5  -3 ; 0  0 ] = span({(3, 5)})

where the third equality comes from a simple row reduction. Therefore {(3, 5)} is a basis for E_3(A).

Therefore, by Question 6, the σ-coordinates of a basis of eigenvectors of T are (1, 2) and (3, 5) (where I should really write these as column vectors). Since σ was the standard basis, (1, 2) and (3, 5) are eigenvectors of T with eigenvalues 4 and 3 respectively. Therefore, if β = {(1, 2), (3, 5)}, then β is a basis for R^2 such that

[T]_β = [ 4  0 ; 0  3 ]

is a diagonal matrix.
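A numeric check of Question 4a, applying T directly rather than through its matrix: T(1, 2) should be 4(1, 2) and T(3, 5) should be 3(3, 5).

```python
# Apply T(a, b) = (-2a + 3b, -10a + 9b) to the claimed eigenvectors.
def T(a, b):
    return (-2 * a + 3 * b, -10 * a + 9 * b)

print(T(1, 2))   # (4, 8) = 4 * (1, 2)
print(T(3, 5))   # (9, 15) = 3 * (3, 5)
```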

5.1 Question 4d) V = P_1(R) and T(ax + b) = (-6a + 2b)x + (-6a + b).

Solution: There are several ways to solve this problem. We will demonstrate one way (all solutions are computationally similar). The main idea is to apply Question 6 (which is proven below). Let σ = {x, 1}, which is an ordered basis of V . Then

[T]_σ = [ -6  2 ; -6  1 ].

Hence (by Question 6), λ is an eigenvalue of T if and only if λ is an eigenvalue of A = [T]_σ. Thus we desire to find the eigenvalues of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det(λI - A) = det [ λ+6  -2 ; 6  λ-1 ] = (λ + 6)(λ - 1) + 12 = λ^2 + 5λ + 6 = (λ + 3)(λ + 2).

Thus the eigenvalues of A and T are -2 and -3.

By my addition to Question 6, to compute a basis of eigenvectors of T we will compute a basis of eigenvectors for A. Thus we will compute the eigenspaces of A. We notice that

E_{-2}(A) = ker(-2I - A) = ker [ 4  -2 ; 6  -3 ] = ker [ 2  -1 ; 0  0 ] = span({(1, 2)})

where the third equality comes from a simple row reduction. Therefore {(1, 2)} is a basis for E_{-2}(A).

Next we notice that

E_{-3}(A) = ker(-3I - A) = ker [ 3  -2 ; 6  -4 ] = ker [ 3  -2 ; 0  0 ] = span({(2, 3)})

where the third equality comes from a simple row reduction. Therefore {(2, 3)} is a basis for E_{-3}(A).

Therefore, by Question 6, the σ-coordinates of a basis of eigenvectors of T are (1, 2) and (2, 3) (where I should really write these as column vectors). Since σ = {x, 1}, x + 2 and 2x + 3 are eigenvectors of T with eigenvalues -2 and -3 respectively. Therefore, if β = {x + 2, 2x + 3}, then β is a basis for V such that

[T]_β = [ -2  0 ; 0  -3 ]

is a diagonal matrix.
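A numeric check of Question 4d on polynomial coefficients: representing ax + b as the pair (a, b), T(x + 2) should be -2(x + 2) and T(2x + 3) should be -3(2x + 3).

```python
# T(ax + b) = (-6a + 2b)x + (-6a + b), acting on coefficient pairs (a, b).
def T(a, b):
    return (-6 * a + 2 * b, -6 * a + b)

print(T(1, 2))   # (-2, -4), i.e. -2x - 4 = -2(x + 2)
print(T(2, 3))   # (-6, -9), i.e. -6x - 9 = -3(2x + 3)
```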


5.1 Question 4f) V = P_3(R) and T(f(x)) = f(x) + f(2)x.

Solution: There are several ways to solve this problem. We will demonstrate one way (all solutions are computationally similar). The main idea is to apply Question 6 (which is proven below). Let σ = {1, x, x^2, x^3}, which is an ordered basis of V . Then T(1) = 1 + x, T(x) = x + 2x = 3x, T(x^2) = x^2 + 4x, and T(x^3) = x^3 + 8x. Therefore

[T]_σ = [ 1  0  0  0 ; 1  3  4  8 ; 0  0  1  0 ; 0  0  0  1 ].

Hence (by Question 6), λ is an eigenvalue of T if and only if λ is an eigenvalue of A = [T]_σ. Thus we desire to find the eigenvalues of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det [ λ-1  0  0  0 ; -1  λ-3  -4  -8 ; 0  0  λ-1  0 ; 0  0  0  λ-1 ]
= det [ λ-1  0 ; -1  λ-3 ] det [ λ-1  0 ; 0  λ-1 ]
= (λ - 1)(λ - 3)(λ - 1)(λ - 1)

(see Question 6 in Section 4.4 for the block-determinant step). Thus the eigenvalues of A and T are 1 and 3.

By my addition to Question 6, to compute a basis of eigenvectors of T we will compute a basis of eigenvectors for A. Thus we will compute the eigenspaces of A. We notice that

E_3(A) = ker(3I - A) = ker [ 2  0  0  0 ; -1  0  -4  -8 ; 0  0  2  0 ; 0  0  0  2 ] = ker [ 1  0  0  0 ; 0  0  1  0 ; 0  0  0  1 ; 0  0  0  0 ] = span({(0, 1, 0, 0)})

where the second equality comes from a simple row reduction. Therefore {(0, 1, 0, 0)} is a basis for E_3(A).

Next we notice that

E_1(A) = ker(I - A) = ker [ 0  0  0  0 ; -1  -2  -4  -8 ; 0  0  0  0 ; 0  0  0  0 ] = span({(-8, 0, 0, 1), (0, -4, 0, 1), (0, 0, -2, 1)}).

Therefore {(-8, 0, 0, 1), (0, -4, 0, 1), (0, 0, -2, 1)} is a basis for E_1(A) (as they are clearly linearly independent).

Therefore, by Question 6 (and the fact that eigenvectors corresponding to distinct eigenvalues are linearly independent), the σ-coordinates of a basis of eigenvectors of T are (0, 1, 0, 0), (-8, 0, 0, 1), (0, -4, 0, 1), and (0, 0, -2, 1) (where I should really write these as column vectors). Since σ = {1, x, x^2, x^3}, the polynomials x, -8 + x^3, -4x + x^3, and -2x^2 + x^3 are eigenvectors of T with eigenvalues 3, 1, 1, and 1 respectively. Therefore, if β = {x, -8 + x^3, -4x + x^3, -2x^2 + x^3}, then β is a basis for V such that

[T]_β = [ 3  0  0  0 ; 0  1  0  0 ; 0  0  1  0 ; 0  0  0  1 ]

is a diagonal matrix.
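A numeric check of the eigenpairs computed from A = [T]_σ above: A~v = λ~v for the four coordinate vectors.

```python
# Verify the eigenpairs of the 4x4 matrix [T]_sigma.
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1, 0, 0, 0], [1, 3, 4, 8], [0, 0, 1, 0], [0, 0, 0, 1]]
pairs = [(3, [0, 1, 0, 0]), (1, [-8, 0, 0, 1]),
         (1, [0, -4, 0, 1]), (1, [0, 0, -2, 1])]
for lam, v in pairs:
    print(matvec(A, v) == [lam * x for x in v])   # True for each pair
```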


5.1 Question 4h) V = M_{2×2}(R) and T( [ a  b ; c  d ] ) = [ d  b ; c  a ].

Solution: There are several ways to solve this problem. We will demonstrate one way (all solutions are computationally similar). The main idea is to apply Question 6 (which is proven below). Let

σ = { ~v_1 = [ 1  0 ; 0  0 ], ~v_2 = [ 0  1 ; 0  0 ], ~v_3 = [ 0  0 ; 1  0 ], ~v_4 = [ 0  0 ; 0  1 ] },

which is an ordered basis of V . Then T(~v_1) = ~v_4, T(~v_2) = ~v_2, T(~v_3) = ~v_3, and T(~v_4) = ~v_1. Therefore

[T]_σ = [ 0  0  0  1 ; 0  1  0  0 ; 0  0  1  0 ; 1  0  0  0 ].

Hence (by Question 6), λ is an eigenvalue of T if and only if λ is an eigenvalue of A = [T]_σ. Thus we desire to find the eigenvalues of A. To find the eigenvalues, we compute the characteristic polynomial:

f_A(λ) = det [ λ  0  0  -1 ; 0  λ-1  0  0 ; 0  0  λ-1  0 ; -1  0  0  λ ]
= (λ - 1)^2 det [ λ  -1 ; -1  λ ]
= (λ - 1)^2 (λ^2 - 1) = (λ - 1)^3 (λ + 1)

(where the missing step follows from using the Cofactor Expansion on rows/columns containing only one non-zero entry). Thus the eigenvalues of A and T are 1 and -1.

By my addition to Question 6, to compute a basis of eigenvectors of T we will compute a basis of eigenvectors for A. Thus we will compute the eigenspaces of A. We notice that

E_{-1}(A) = ker(-I - A) = ker [ -1  0  0  -1 ; 0  -2  0  0 ; 0  0  -2  0 ; -1  0  0  -1 ] = ker [ 1  0  0  1 ; 0  1  0  0 ; 0  0  1  0 ; 0  0  0  0 ] = span({(1, 0, 0, -1)})

where the third equality comes from a simple row reduction. Therefore {(1, 0, 0, -1)} is a basis for E_{-1}(A).

Next we notice that

E_1(A) = ker(I - A) = ker [ 1  0  0  -1 ; 0  0  0  0 ; 0  0  0  0 ; -1  0  0  1 ] = span({(1, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0)}).

Therefore {(1, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0)} is a basis for E_1(A) (as they are clearly linearly independent).

Therefore, by Question 6 (and the fact that eigenvectors corresponding to distinct eigenvalues are linearly independent), the σ-coordinates of a basis of eigenvectors of T are (1, 0, 0, -1), (1, 0, 0, 1), (0, 1, 0, 0), and (0, 0, 1, 0) (where I should really write these as column vectors). Since σ = {~v_1, ~v_2, ~v_3, ~v_4}, the matrices ~v_1 - ~v_4, ~v_1 + ~v_4, ~v_2, and ~v_3 are eigenvectors of T with eigenvalues -1, 1, 1, and 1 respectively. Therefore, if β = {~v_1 - ~v_4, ~v_1 + ~v_4, ~v_2, ~v_3}, then β is a basis for V such that

[T]_β = [ -1  0  0  0 ; 0  1  0  0 ; 0  0  1  0 ; 0  0  0  1 ]

is a diagonal matrix.
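A numeric check of Question 4h, applying T directly to 2 × 2 matrices (stored as [[a, b], [c, d]]): T swaps the diagonal entries, so ~v_1 - ~v_4 and ~v_1 + ~v_4 are eigenvectors with eigenvalues -1 and 1.

```python
# T([a b; c d]) = [d b; c a], acting on nested-list matrices.
def T(M):
    (a, b), (c, d) = M
    return [[d, b], [c, a]]

S = [[1, 0], [0, -1]]   # v1 - v4: expect T(S) = -S
P = [[1, 0], [0, 1]]    # v1 + v4: expect T(P) = P
print(T(S) == [[-1, 0], [0, 1]])   # True, i.e. T(S) = -S
print(T(P) == P)                   # True
```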


5.1 Question 5) Let T be a linear operator on a vector space V , and let λ be an eigenvalue of T . Prove that a vector ~v ∈ V is an eigenvector of T corresponding to λ if and only if ~v ≠ ~0 and ~v ∈ ker(λI - T).

Solution: Suppose ~v ∈ V is an eigenvector of T corresponding to λ. Hence, by the definition of an eigenvector, ~v ≠ ~0 and T(~v) = λ~v. Using the second equation,

(λI - T)(~v) = λ~v - T(~v) = λ~v - λ~v = ~0.

Hence ~v ≠ ~0 and ~v ∈ ker(λI - T).

Conversely, suppose ~v ≠ ~0 and ~v ∈ ker(λI - T). Since ~v ∈ ker(λI - T), (λI - T)(~v) = ~0. Hence λ~v - T(~v) = ~0 so T(~v) = λ~v. Since ~v ≠ ~0, ~v is an eigenvector of T corresponding to λ.

5.1 Question 6) Let T be a linear operator on a finite dimensional vector space V , and let β be an ordered basis for V . Prove that λ is an eigenvalue of T if and only if λ is an eigenvalue of [T]_β. (My addition: Show that ~v ∈ V is an eigenvector of T with eigenvalue λ if and only if [~v]_β is an eigenvector of [T]_β with eigenvalue λ.)

Solution: We will examine each part separately (with my addition being proved along the way). Suppose that λ is an eigenvalue of T . Therefore there exists an eigenvector ~v ∈ V such that T(~v) = λ~v. Therefore

λ[~v]_β = [λ~v]_β = [T(~v)]_β = [T]_β [~v]_β .

Therefore, since ~v ≠ ~0, we obtain that [~v]_β is not the zero vector. Hence [~v]_β is a non-zero vector such that [T]_β [~v]_β = λ[~v]_β. Hence λ is an eigenvalue of [T]_β, and [~v]_β is an eigenvector of [T]_β with eigenvalue λ whenever ~v is an eigenvector of T with eigenvalue λ.

Conversely, suppose λ is an eigenvalue of [T]_β. Therefore there exists an eigenvector ~w ∈ F^n such that [T]_β ~w = λ~w (where we write ~w as a column vector). Write β = {~v_1, . . . , ~v_n} and ~w = (a_1, . . . , a_n) (so not all of the a_j's are zero). Let ~v = a_1~v_1 + · · · + a_n~v_n. Therefore ~v ∈ V and [~v]_β = ~w (so ~v ≠ ~0_V ). Therefore

[T(~v)]_β = [T]_β [~v]_β = [T]_β ~w = λ~w = λ[~v]_β = [λ~v]_β .

Therefore, since T(~v) and λ~v have the same coordinates, we obtain that T(~v) = λ~v. Therefore, since ~v ≠ ~0_V , λ is an eigenvalue of T , and ~v is an eigenvector of T with eigenvalue λ whenever [~v]_β is an eigenvector of [T]_β with eigenvalue λ.

For Question 7 of Section 5.1, let T be a linear operator on a finite dimensional vector space V . We define the determinant of T , denoted det(T), as follows: Choose any ordered basis β for V and define det(T) = det([T]_β).

5.1 Question 7a) Prove that the preceding definition is independent of the choice of an ordered basis β for V . That is, prove that if β and γ are two ordered bases for V , then det([T]_β) = det([T]_γ).

Solution: Let β and γ be any two bases for V . Therefore, if Q is the change of coordinate matrix from β-coordinates to γ-coordinates, then [T]_β = Q^{-1} [T]_γ Q. Therefore, since det(AB) = det(A)det(B) = det(B)det(A) = det(BA) for all A, B ∈ M_{n×n}(F), we obtain that

det([T]_β) = det(Q^{-1} [T]_γ Q) = det(Q^{-1} ([T]_γ Q)) = det(([T]_γ Q) Q^{-1}) = det([T]_γ)

as desired. Hence det(T) does not depend on the basis chosen for V .


5.1 Question 7b) Prove that T is invertible if and only if det(T) ≠ 0.

Solution: Let β be any basis for V . By Theorem 2.18, T is invertible if and only if [T]_β is invertible. However, [T]_β is invertible if and only if det([T]_β) ≠ 0. Since det([T]_β) = det(T), by combining the above if and only ifs, we obtain that T is invertible if and only if det(T) ≠ 0.

5.1 Question 7c) Prove that if T is invertible, then det(T^{-1}) = 1/det(T).

Solution: Let β be a basis for V . If T is invertible, then [T]_β is invertible with ([T]_β)^{-1} = [T^{-1}]_β. Hence

det(T^{-1}) = det([T^{-1}]_β) = det(([T]_β)^{-1}) = 1/det([T]_β) = 1/det(T)

as desired.

5.1 Question 7d) Prove that if U : V → V is also linear, then det(TU) = det(T)det(U).

Solution: Let β be a basis for V . Then, if U : V → V is linear, TU is linear and

det(TU) = det([TU]_β) = det([T]_β [U]_β) = det([T]_β)det([U]_β) = det(T)det(U)

as desired.

5.1 Question 7e) Prove that det(T - λI_V) = det([T]_β - λI) for any scalar λ and any ordered basis β for V .

Solution: Since [T - λI_V]_β = [T]_β - λ[I_V]_β = [T]_β - λI for any basis β for V , we obtain that

det(T - λI_V) = det([T - λI_V]_β) = det([T]_β - λI)

as desired.

5.1 Question 8a) Prove that a linear operator T on a finite dimensional vector space is invertible if and only if zero is not an eigenvalue of T .

Solution: Let T : V → V be an invertible linear operator. Suppose to the contrary that zero is an eigenvalue of T . Then there must exist a non-zero vector ~v ∈ V such that T(~v) = 0~v = ~0_V . Hence ~v ∈ ker(T). However, since T is invertible, ker(T) = {~0_V } and thus ~v = ~0_V , which contradicts the fact that ~v is a non-zero vector. Therefore zero cannot be an eigenvalue of T if T is invertible.

Conversely, suppose T : V → V is a linear map such that zero is an eigenvalue of T . Therefore there exists a non-zero vector ~v ∈ V such that T(~v) = 0~v = ~0_V . Hence ~v is a non-zero vector in the kernel of T so ker(T) ≠ {~0_V }. Hence T cannot be invertible.

5.1 Question 8b) Let T be an invertible linear operator. Prove that a non-zero scalar λ is an eigenvalue of T if and only if λ^{-1} is an eigenvalue of T^{-1}.

Solution: Let T : V → V be an invertible linear map. Suppose that λ is a non-zero eigenvalue of T . Hence there exists a non-zero vector ~v ∈ V such that T(~v) = λ~v. Since λ is non-zero, T(~v) = λ~v implies that λ^{-1} T(~v) = ~v. Therefore, since T is invertible,

T^{-1}(~v) = T^{-1}(λ^{-1} T(~v)) = λ^{-1} T^{-1}(T(~v)) = λ^{-1} I_V(~v) = λ^{-1} ~v.

Hence, as ~v is non-zero, λ^{-1} is an eigenvalue of T^{-1}.

Conversely, let T : V → V be an invertible linear map and suppose that λ is a non-zero scalar such that λ^{-1} is an eigenvalue of T^{-1}. Since T^{-1} : V → V is also an invertible linear map, by the above proof we have that (λ^{-1})^{-1} = λ is an eigenvalue of (T^{-1})^{-1} = T as desired.
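A numeric illustration of Question 8b (a hypothetical example, not from the text): A = [1 2; 3 2] has eigenvalues 4 and -1 with eigenvectors (2, 3) and (1, -1), so A^{-1} should have eigenvalues 1/4 and -1 with the same eigenvectors. Exact rational arithmetic avoids floating-point noise.

```python
# Invert a 2x2 matrix via the adjugate formula and check its eigenpairs.
from fractions import Fraction as F

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[F(1), F(2)], [F(3), F(2)]]
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # det(A) = -4
Ainv = [[A[1][1] / d, -A[0][1] / d],
        [-A[1][0] / d, A[0][0] / d]]

print(matvec(Ainv, [2, 3]) == [F(1, 4) * x for x in [2, 3]])    # True
print(matvec(Ainv, [1, -1]) == [F(-1) * x for x in [1, -1]])    # True
```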

5.1 Question 13b) Let T be a linear operator on a finite dimensional vector space V over a field F, let β be an ordered basis for V , and let A = [T]_β. Let φ : V → F^n be the isomorphism defined by φ(~v) = [~v]_β for all ~v ∈ V . Prove that a vector ~y ∈ F^n is an eigenvector of A corresponding to λ if and only if φ^{-1}(~y) is an eigenvector of T corresponding to λ.

Solution: Notice a vector ~y ∈ F^n is an eigenvector of A corresponding to λ if and only if ~y ≠ ~0_{F^n} and A~y = λ~y. Since φ is an isomorphism, ~y ≠ ~0_{F^n} if and only if φ^{-1}(~y) ≠ ~0_V . Moreover, A~y = [T]_β [φ^{-1}(~y)]_β = [T(φ^{-1}(~y))]_β and λ~y = λ[φ^{-1}(~y)]_β = [λφ^{-1}(~y)]_β, so A~y = λ~y if and only if T(φ^{-1}(~y)) = λφ^{-1}(~y) (again as φ is an isomorphism). Hence ~y is an eigenvector of A corresponding to λ if and only if φ^{-1}(~y) ≠ ~0_V and T(φ^{-1}(~y)) = λφ^{-1}(~y), which holds if and only if φ^{-1}(~y) is an eigenvector of T corresponding to λ.

5.1 Question 14) For any square matrix A, prove that A and A^t have the same characteristic polynomial (and hence the same eigenvalues).

Solution: Let A be a square matrix. Then

f_{A^t}(x) = det(xI - A^t) = det((xI)^t - A^t) = det((xI - A)^t) = det(xI - A) = f_A(x)

as desired.

5.1 Question 15a) Let T be a linear operator on a vector space V and let ~x ∈ V be an eigenvector of T corresponding to the eigenvalue λ. For any positive integer m, prove that ~x is an eigenvector of T^m corresponding to the eigenvalue λ^m.

Solution: Let T be a linear operator on a vector space V and let ~x ∈ V be an eigenvector of T corresponding to the eigenvalue λ. Then ~x ≠ ~0 and T(~x) = λ~x. Therefore

T^m(~x) = T^{m-1}(T(~x)) = T^{m-1}(λ~x) = λT^{m-1}(~x) = · · · = λ^m ~x.

Hence ~x is an eigenvector of T^m corresponding to the eigenvalue λ^m.
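A numeric illustration of Question 15a (a hypothetical example): ~v = (2, 3) is an eigenvector of A = [1 2; 3 2] with eigenvalue 4, so A^3 ~v = 4^3 ~v = 64 ~v.

```python
# Apply A three times to the eigenvector (2, 3).
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1, 2], [3, 2]]
v = [2, 3]
for _ in range(3):
    v = matvec(A, v)
print(v)   # [128, 192] = 64 * (2, 3)
```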

5.1 Question 16a) Prove that similar matrices have the same trace.

Solution: Let A, B ∈ M_n(F) be similar matrices. Then there exists an invertible matrix Q ∈ M_n(F) such that A = Q^{-1}BQ. Therefore

tr(A) = tr(Q^{-1}BQ) = tr(Q^{-1}(BQ)) = tr((BQ)Q^{-1}) = tr(BQQ^{-1}) = tr(B)

(by Section 2.3 Question 13) as desired.


5.1 Question 17a and 17b) Let T be the linear operator on M_n(R) defined by T(A) = A^t. Show that ±1 are the only possible eigenvalues of T . Describe the eigenvectors corresponding to each eigenvalue of T .

Solution: Let T be the linear operator on M_n(R) defined by T(A) = A^t. Suppose B ∈ M_n(R) is an eigenvector of T with eigenvalue λ. Then B ≠ 0 and λB = T(B) = B^t. By taking the transpose of both sides, we obtain that

B = (B^t)^t = (λB)^t = λB^t = λ^2 B.

Therefore, since B ≠ 0, the above equation implies λ^2 = 1. Hence λ = ±1.

The eigenvectors of T with eigenvalue 1 are all non-zero matrices A such that A = A^t; that is, E_1 is the set of all symmetric matrices. The eigenvectors of T with eigenvalue -1 are all non-zero matrices A such that A^t = -A; that is, E_{-1} is the set of all skew-symmetric matrices.
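A numeric illustration of Question 17 (a hypothetical example): the symmetric and skew-symmetric parts of any matrix are eigenvectors of T(A) = A^t with eigenvalues 1 and -1 respectively.

```python
# Split a 2x2 matrix into symmetric and skew parts and check T's action.
def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[1, 2], [5, 3]]
S = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]  # symmetric part
K = [[(A[i][j] - A[j][i]) / 2 for j in range(2)] for i in range(2)]  # skew part

print(transpose(S) == S)                                  # True: T(S) = S
print(transpose(K) == [[-x for x in row] for row in K])   # True: T(K) = -K
```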

5.1 Question 18a) Let A, B ∈ M_n(C). Prove that if B is invertible, then there exists a scalar c ∈ C such that A + cB is not invertible.

Solution: Let A, B ∈ M_n(C) with B invertible. Consider the matrix B^{-1}A ∈ M_n(C). Since t ↦ det(tI + B^{-1}A) is a non-constant polynomial in t, by the Fundamental Theorem of Algebra there exists a scalar c ∈ C such that det(cI + B^{-1}A) = 0. Therefore

det(A + cB) = det(B(B^{-1}A + cI)) = det(B)det(B^{-1}A + cI) = det(B) · 0 = 0.

Hence A + cB is not invertible.

5.1 Question 19) Let A and B be similar n × n matrices. Prove that there exists an n-dimensional vector space V , a linear operator T on V , and ordered bases β and γ for V such that A = [T]_β and B = [T]_γ.

Solution: Let A and B be similar n × n matrices. Therefore there exists an invertible matrix Q ∈ M_n(F) such that B = Q^{-1}AQ. Let V = F^n, let β be the standard basis for V , and let T : V → V be the linear map T = L_A. Therefore, by Theorem 2.15a of the text, [T]_β = A.

By Section 2.5 Question 13 there exists a basis γ for V such that Q = [I]_γ^β. Thus

[T]_γ = ([I]_γ^β)^{-1} [T]_β [I]_γ^β = Q^{-1}AQ = B

as desired.

5.1 Question 20) Let A be an n × n matrix with characteristic polynomial

f(t) = (-1)^n t^n + a_{n-1} t^{n-1} + · · · + a_1 t + a_0 .

Prove that f(0) = a_0 = det(A). Deduce that A is invertible if and only if a_0 ≠ 0.

Solution: We recall that f(t) = det(A - tI). Therefore a_0 = f(0) = det(A - 0I) = det(A). Since A is invertible if and only if det(A) ≠ 0, A is invertible if and only if a_0 ≠ 0.

5.1 Question 21a and 21b) Let A be an n × n matrix with characteristic polynomial

f_A(t) = (-1)^n t^n + a_{n-1} t^{n-1} + · · · + a_1 t + a_0 .

Prove that f_A(t) = (A_{1,1} - t)(A_{2,2} - t) · · · (A_{n,n} - t) + q(t) where q(t) is a polynomial of degree at most n - 2. Use this to show that tr(A) = (-1)^{n-1} a_{n-1}.


Solution: To prove that fA (t) = (A1,1 t)(A2,2 t) (An,n t) + q(t) where q(t) is a polynomial of degree

at most n 2, we proceed by induction on n; that is, let Pn be the statement that if A Mn (F) and fA

is the characteristic polynomial of A, then fA (t) = (A1,1 t)(A2,2 t) (An,n t) + q(t) where q(t) is a

polynomial of degree at most n 2.

Base Case: n = 1 For n = 1, f (t) = A1,1 t. Hence the result follows.

Base Case: n = 2 For n = 2, f (t) = (A1,1 t)(A2,2 t) A1,2 A2,1 . As A1,2 A2,1 is a polynomial of

degree n 2 = 0, the result follows.

Inductive Step Suppose that the result is true for some fixed n N with n 2: that is, suppose if

A Mn (F) and fA is the characteristic polynomial of A, then fA (t) = (A1,1 t)(A2,2 t) (An,n t) + q(t)

where q(t) is a polynomial of degree at most n 2. We desire to prove the result for n + 1.

Let A Mn+1 (F) be arbitrary. Let B be the n n matrix with entries in F obtained by removing the

(n + 1)st rows and columns of A. Therefore, by the Cofactor Expansion of the Determinant,

fA (t) = det(A tIn+1 ) = (An+1,n+1 t)det(B tIn ) +

n+1

X

^

an+1,j (1)n+1+j det((A

tIn+1 )n+1,j )

j=2

^

where (A

tIn+1 )n+1,j is the matrix obtained from A tIn+1 by removing the (n + 1)st row and j th column.

By the induction hypothesis,

det(B tIn ) = (A1,1 t)(A2,2 t) (An,n t) + q(t)

^

where q(t) is a polynomial in t of degree at most n 2. However, for each 2 j n, (A

tIn+1 )n+1,j has

^

at most n 1 entries with a t-term so det((A tIn+1 )n+1,j ) is a polynomial in t with degree at most n 1.

Therefore

fA (t) = (A1,1 t) (An,n t)(An+1,n+1 t)+(An+1,n+1 t)q(t)+

n+1

X

^

an+1,j (1)n+1+j det((A

tIn+1 )n+1,j ).

j=2

n+1

X

^

an+1,j (1)n+1+j det((A

tIn+1 )n+1,j )

j=2

Hence, by the Principle of Mathematical Induction, the result is true.

Since $f_A(t) = (A_{1,1} - t)(A_{2,2} - t)\cdots(A_{n,n} - t) + q(t)$ where $q(t)$ is a polynomial of degree at most $n - 2$ for all $n \times n$ matrices $A$ with entries in $\mathbb{F}$, it is clear that the coefficient of $t^{n-1}$ in $f_A$ is $a_{n-1} = (-1)^{n-1}(A_{1,1} + A_{2,2} + \cdots + A_{n,n}) = (-1)^{n-1}\operatorname{tr}(A)$ as desired.
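To illustrate the conclusion, here is a small sketch (hypothetical helper names, not from the text; polynomials are stored as coefficient lists $[a_0, a_1, \ldots]$) that computes $f_A(t) = \det(A - tI)$ by Laplace expansion and checks that the coefficient of $t^{n-1}$ equals $(-1)^{n-1}\operatorname{tr}(A)$:

```python
def p_add(p, q):
    # Add two polynomials given as coefficient lists [a_0, a_1, ...].
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def p_mul(p, q):
    # Multiply two polynomials given as coefficient lists.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def p_det(M):
    # Determinant of a square matrix whose entries are polynomials,
    # by Laplace expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    total = [0]
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = p_mul(M[0][j], p_det(minor))
        if j % 2 == 1:
            term = [-c for c in term]
        total = p_add(total, term)
    return total

def charpoly(A):
    # f_A(t) = det(A - t I), returned as [a_0, ..., a_n].
    n = len(A)
    M = [[[A[i][j]] + ([-1] if i == j else []) for j in range(n)] for i in range(n)]
    return p_det(M)

A = [[1, 4], [2, 3]]
f = charpoly(A)                                    # t^2 - 4t - 5 as [-5, -4, 1]
assert f[1] == (-1) ** (2 - 1) * (A[0][0] + A[1][1])
```

The same check passes for larger matrices, since only the product of diagonal terms can contribute to the $t^{n-1}$ coefficient.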

5.1 Question 22a) Let $T$ be a linear operator on a vector space $V$ over a field $\mathbb{F}$ and let $g(t)$ be a polynomial with coefficients from $\mathbb{F}$. Prove that if $\vec{x}$ is an eigenvector of $T$ with corresponding eigenvalue $\lambda$, then $g(T)(\vec{x}) = g(\lambda)\vec{x}$. That is, $\vec{x}$ is an eigenvector of $g(T)$ with corresponding eigenvalue $g(\lambda)$.

Solution: Recall that if $g(t) = a_n t^n + a_{n-1}t^{n-1} + \cdots + a_1 t + a_0$ then $g(T) = a_n T^n + a_{n-1}T^{n-1} + \cdots + a_1 T + a_0 I$. Moreover, if $\vec{x}$ is an eigenvector of $T$ with corresponding eigenvalue $\lambda$, then $\vec{x} \neq \vec{0}$ and $T^m(\vec{x}) = \lambda^m\vec{x}$ by Section 5.1 Question 15a. Hence
$$g(T)(\vec{x}) = a_n T^n(\vec{x}) + a_{n-1}T^{n-1}(\vec{x}) + \cdots + a_1 T(\vec{x}) + a_0 I(\vec{x}) = a_n\lambda^n\vec{x} + a_{n-1}\lambda^{n-1}\vec{x} + \cdots + a_1\lambda\vec{x} + a_0\vec{x} = (a_n\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0)\vec{x} = g(\lambda)\vec{x}.$$
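The identity $g(T)(\vec{x}) = g(\lambda)\vec{x}$ is easy to check numerically on a matrix; a minimal sketch (hypothetical helper names, not from the text) using $A = \begin{pmatrix}1 & 4\\ 2 & 3\end{pmatrix}$, whose eigenvector $(1,1)$ has eigenvalue $5$ (see Question 7 of Section 5.2), and $g(t) = t^2 + 2t + 3$:

```python
def matmul(A, B):
    # Product of two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def poly_of_matrix(coeffs, A):
    # Evaluate g(A) by Horner's rule; coeffs are listed from highest degree down.
    n = len(A)
    R = [[0] * n for _ in range(n)]
    for c in coeffs:
        R = matmul(R, A)
        for i in range(n):
            R[i][i] += c
    return R

A = [[1, 4], [2, 3]]          # (1, 1) is an eigenvector with eigenvalue 5
g = [1, 2, 3]                 # g(t) = t^2 + 2t + 3, so g(5) = 38
assert matvec(poly_of_matrix(g, A), [1, 1]) == [38, 38]
```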


5.1 Question 23) Use Question 22 to prove that if f (t) is the characteristic polynomial of a diagonalizable

linear operator T , then f (T ) = 0 (the zero operator).

Solution: We will provide two proofs of this result; one involving Question 22 and one alternate proof.

Proof 1: Using Question 22) Let $T$ be a diagonalizable linear operator on a finite dimensional vector space $V$. Then there exists a basis $\beta$ of $V$ consisting of eigenvectors of $T$. By Question 22, every eigenvector of $T$ with eigenvalue $\lambda$ is an eigenvector of $f(T)$ with eigenvalue $f(\lambda)$. However, since $f$ is the characteristic polynomial of $T$, if $\lambda$ is an eigenvalue of $T$ then $f(\lambda) = 0$. Hence $\beta$ is a basis of $V$ consisting of eigenvectors of $f(T)$ all of which have eigenvalue $0$. Hence $[f(T)]_\beta = 0$ so $f(T) = 0$ (the zero operator).

Proof 2: Diagonalizing T) Let $T$ be a diagonalizable linear operator on a finite dimensional vector space $V$. Then there exists a basis $\beta$ for $V$ such that if $D = [T]_\beta$, then
$$D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}$$
where $\lambda_1, \ldots, \lambda_n \in \mathbb{F}$ are the eigenvalues of $T$ (counting algebraic multiplicity). Moreover,
$$D^k = \begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{pmatrix}$$
for all $k \in \mathbb{N}$. Therefore, if $f(t) = (-1)^n t^n + a_{n-1}t^{n-1} + \cdots + a_1 t + a_0$ is the characteristic polynomial of $T$,
$$(-1)^n D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0 I_n = \begin{pmatrix} f(\lambda_1) & 0 & \cdots & 0 \\ 0 & f(\lambda_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f(\lambda_n) \end{pmatrix} = 0$$
since each eigenvalue $\lambda_j$ is a root of the characteristic polynomial. However
$$[f(T)]_\beta = [(-1)^n T^n + a_{n-1}T^{n-1} + \cdots + a_1 T + a_0 I]_\beta = (-1)^n D^n + a_{n-1}D^{n-1} + \cdots + a_1 D + a_0 I_n = 0$$
so $f(T) = 0$ as desired.


5.2 Question 7) For
$$A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix},$$
find an expression for $A^n$, where $n$ is an arbitrary positive integer.

Solution: The trick of this problem is to diagonalize $A$. The reason for this is that if $D$ is a diagonal matrix and we desire to compute $D^n$, it is easy to see that $D^n$ is the matrix where each diagonal entry of $D$ has been raised to the $n^{th}$ power. Moreover, if $A = QDQ^{-1}$, then
$$A^n = (QDQ^{-1})^n = (QDQ^{-1})(QDQ^{-1})\cdots(QDQ^{-1}) = Q(DI_2DI_2\cdots I_2D)Q^{-1} = QD^nQ^{-1}.$$
Therefore, since $D^n$ is easy to compute and matrix multiplication of three matrices is simple, the problem will be easy to solve if we can diagonalize $A$.

To diagonalize $A$, we need to compute all eigenvalues of $A$ and a basis of eigenvectors. First we compute the eigenvalues. To find the eigenvalues, we compute the characteristic polynomial:
$$\chi_A(\lambda) = \det(\lambda I - A) = \det\begin{pmatrix} \lambda - 1 & -4 \\ -2 & \lambda - 3 \end{pmatrix} = (\lambda - 1)(\lambda - 3) - 8 = \lambda^2 - 4\lambda - 5 = (\lambda - 5)(\lambda + 1).$$
Thus the eigenvalues of $A$ are $5$ and $-1$.

Now we shall compute the eigenspaces. We notice that
$$E_5 = \ker(5I - A) = \ker\begin{pmatrix} 4 & -4 \\ -2 & 2 \end{pmatrix} = \ker\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} = \operatorname{span}(\{(1,1)\})$$
where the third equality comes from a simple row reduction. Therefore $\{(1,1)\}$ is a basis for $E_5$.

Next we notice that
$$E_{-1} = \ker(-I - A) = \ker\begin{pmatrix} -2 & -4 \\ -2 & -4 \end{pmatrix} = \ker\begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} = \operatorname{span}(\{(-2,1)\})$$
where the third equality comes from a simple row reduction. Therefore $\{(-2,1)\}$ is a basis for $E_{-1}$.

Combining these two bases, we see that $\beta = \{(1,1), (-2,1)\}$ is an eigenbasis for $A$ (and thus a basis for $\mathbb{R}^2$ consisting of eigenvectors). Finally, to find the desired matrices $Q$ and $D$, we notice that, if $\gamma$ is the standard basis for $\mathbb{R}^2$, then $Q^{-1}AQ = D$ where
$$D = \begin{pmatrix} 5 & 0 \\ 0 & -1 \end{pmatrix}$$
and $Q$ is the change of basis matrix that takes $\beta$-coordinates to $\gamma$-coordinates. Therefore, by using the definition of the change of basis matrix,
$$Q = \begin{pmatrix} 1 & -2 \\ 1 & 1 \end{pmatrix}.$$
Therefore, to compute $A^n$, we saw above that we needed to compute $Q^{-1}$ and $D^n$. However, it is easy to see that
$$D^n = \begin{pmatrix} 5^n & 0 \\ 0 & (-1)^n \end{pmatrix}$$
and, using the formula for the inverse of a $2 \times 2$ matrix,
$$Q^{-1} = \frac{1}{3}\begin{pmatrix} 1 & 2 \\ -1 & 1 \end{pmatrix}.$$
Hence
$$A^n = QD^nQ^{-1} = \begin{pmatrix} 1 & -2 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 5^n & 0 \\ 0 & (-1)^n \end{pmatrix}\cdot\frac{1}{3}\begin{pmatrix} 1 & 2 \\ -1 & 1 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 5^n + 2(-1)^n & 2 \cdot 5^n - 2(-1)^n \\ 5^n - (-1)^n & 2 \cdot 5^n + (-1)^n \end{pmatrix}.$$
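The closed form for $A^n$ can be checked against repeated matrix multiplication; a minimal sketch (hypothetical helper names, not from the text; the entries $5^n + 2(-1)^n$ etc. are always divisible by $3$, so integer division is exact):

```python
def matmul(A, B):
    # Product of two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def power_closed_form(n):
    # A^n = (1/3) [[5^n + 2(-1)^n, 2*5^n - 2(-1)^n], [5^n - (-1)^n, 2*5^n + (-1)^n]]
    s, m = 5 ** n, (-1) ** n
    return [[(s + 2 * m) // 3, (2 * s - 2 * m) // 3],
            [(s - m) // 3, (2 * s + m) // 3]]

A = [[1, 4], [2, 3]]
P = A
for n in range(1, 8):
    assert P == power_closed_form(n)   # closed form agrees with A, A^2, ..., A^7
    P = matmul(P, A)
```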

5.2 Question 8) Suppose $A \in M_n(\mathbb{F})$ has two distinct eigenvalues $\lambda_1$ and $\lambda_2$ such that $\dim(E_{\lambda_1}) = n - 1$. Prove that $A$ is diagonalizable.

Solution: Recall that an $n \times n$ matrix is diagonalizable if and only if the sum of the dimensions of the eigenspaces is (at least) $n$. Since $\lambda_2$ is an eigenvalue of $A$, $\dim(E_{\lambda_2}) \geq 1$. Hence
$$n \geq \dim(E_{\lambda_1}) + \dim(E_{\lambda_2}) \geq (n - 1) + 1 = n.$$
Hence $\dim(E_{\lambda_1}) + \dim(E_{\lambda_2}) = n$ so $A$ is diagonalizable.

5.2 Question 10) Let $T$ be a linear operator on a finite dimensional vector space $V$ with the distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$ and corresponding algebraic multiplicities $m_1, m_2, \ldots, m_k$. Suppose that $\beta$ is a basis for $V$ such that $[T]_\beta$ is an upper triangular matrix. Prove that the diagonal entries of $[T]_\beta$ are $\lambda_1, \ldots, \lambda_k$ and that each $\lambda_i$ occurs $m_i$ times (for $1 \leq i \leq k$).

Solution: Let $\beta$ be a basis for $V$ such that $[T]_\beta$ is an upper triangular matrix. Let $\mu_1, \mu_2, \ldots, \mu_n$ be the diagonal entries of $[T]_\beta$. Thus it is easy to see that
$$\chi_T(\lambda) = \det(\lambda I - [T]_\beta) = (\lambda - \mu_1)\cdots(\lambda - \mu_n).$$
Therefore the characteristic polynomial splits. Therefore the sum of the algebraic multiplicities, $\sum_{j=1}^{k} m_j$, is $n$ and, as $\lambda_1, \lambda_2, \ldots, \lambda_k$ are the distinct eigenvalues of $T$ with algebraic multiplicities $m_1, m_2, \ldots, m_k$, we must have that
$$(\lambda - \mu_1)\cdots(\lambda - \mu_n) = \chi_T(\lambda) = (\lambda - \lambda_1)^{m_1}\cdots(\lambda - \lambda_k)^{m_k}.$$
Hence, as $\mu_1, \mu_2, \ldots, \mu_n$ are the diagonal entries of $[T]_\beta$, the above equation implies that the diagonal entries of $[T]_\beta$ are $\lambda_1, \ldots, \lambda_k$ and that each $\lambda_i$ occurs $m_i$ times (for $1 \leq i \leq k$).

5.2 Question 12a) Let $T$ be an invertible linear operator on a finite dimensional vector space $V$. Recall that for any eigenvalue $\lambda$ of $T$, $\lambda^{-1}$ is an eigenvalue of $T^{-1}$. Prove that the eigenspace of $T$ corresponding to $\lambda$ is the same as the eigenspace of $T^{-1}$ corresponding to $\lambda^{-1}$.

Solution: Let $T : V \to V$ be an invertible linear map. Suppose that $\lambda$ is a non-zero eigenvalue of $T$ and let $\vec{v} \in E_\lambda(T)$. Since $\lambda$ is non-zero, $T(\vec{v}) = \lambda\vec{v}$ implies that $\lambda^{-1}T(\vec{v}) = \vec{v}$. Therefore, since $T$ is invertible,
$$T^{-1}(\vec{v}) = T^{-1}(\lambda^{-1}T(\vec{v})) = \lambda^{-1}T^{-1}(T(\vec{v})) = \lambda^{-1}I_V(\vec{v}) = \lambda^{-1}\vec{v}.$$
Hence $\vec{v} \in E_{\lambda^{-1}}(T^{-1})$ so $E_\lambda(T) \subseteq E_{\lambda^{-1}}(T^{-1})$.

By reversing the roles of $T$ and $T^{-1}$ in the above argument, we see that $E_{\lambda^{-1}}(T^{-1}) \subseteq E_\lambda(T)$. Hence the eigenspace of $T$ corresponding to $\lambda$ is the same as the eigenspace of $T^{-1}$ corresponding to $\lambda^{-1}$.

5.2 Question 12b) Let $T$ be an invertible linear operator on a finite dimensional vector space $V$. Prove that if $T$ is diagonalizable, then $T^{-1}$ is diagonalizable.

Solution: Recall that a linear operator is diagonalizable if and only if there exists a basis of the space consisting of eigenvectors of the operator. Since $T$ and $T^{-1}$ share the same eigenspaces by Section 5.2 Question 12a, any basis of eigenvectors of $T$ is a basis of eigenvectors of $T^{-1}$. Hence if $T$ is diagonalizable, then $T^{-1}$ is diagonalizable.


5.2 Question 13c) Prove that if $A \in M_n(\mathbb{F})$ is diagonalizable, then $A^t$ is also diagonalizable.

Solution: Although Question 13b) of Section 5.2 can be used to easily prove this result, here is a proof that requires less work. Recall that if $A$ is diagonalizable then there exists a diagonal matrix $D \in M_n(\mathbb{F})$ and an invertible matrix $Q \in M_n(\mathbb{F})$ such that $A = QDQ^{-1}$. Therefore
$$A^t = (QDQ^{-1})^t = (Q^{-1})^t D^t Q^t = (Q^t)^{-1}D^t Q^t$$
where $(Q^{-1})^t = (Q^t)^{-1}$ by Question 5 of Section 2.4. Therefore, since $Q^t$ is invertible and $D^t$ is a diagonal matrix, $A^t$ is similar to a diagonal matrix and thus $A^t$ is diagonalizable.

For Questions 17 to 19 of Section 5.2, two linear operators $T$ and $U$ on a finite dimensional vector space $V$ are called simultaneously diagonalizable if there exists an ordered basis $\beta$ for $V$ such that both $[T]_\beta$ and $[U]_\beta$ are diagonal matrices. Similarly, $A, B \in M_n(\mathbb{F})$ are simultaneously diagonalizable if there exists an invertible matrix $Q \in M_n(\mathbb{F})$ such that both $Q^{-1}AQ$ and $Q^{-1}BQ$ are diagonal matrices.

5.2 Question 17a) Prove that if $T$ and $U$ are simultaneously diagonalizable linear operators on a finite dimensional vector space $V$, then the matrices $[T]_\gamma$ and $[U]_\gamma$ are simultaneously diagonalizable matrices for any ordered basis $\gamma$.

Solution: Let $T$ and $U$ be simultaneously diagonalizable linear operators on a finite dimensional vector space $V$. Therefore there exists a basis $\beta$ such that both $[T]_\beta$ and $[U]_\beta$ are diagonal matrices.

Let $\gamma$ be any ordered basis for $V$. Let $Q = [I]_\beta^\gamma$ be the change of basis matrix. Then $Q \in M_n(\mathbb{F})$ is an invertible matrix such that
$$Q^{-1}[T]_\gamma Q = [T]_\beta \quad \text{and} \quad Q^{-1}[U]_\gamma Q = [U]_\beta$$
are both diagonal matrices. Hence $[T]_\gamma$ and $[U]_\gamma$ are simultaneously diagonalizable matrices. Since $\gamma$ was an arbitrary ordered basis, the result follows.

5.2 Question 17b) Prove that if $A$ and $B$ are simultaneously diagonalizable matrices, then $L_A$ and $L_B$ are simultaneously diagonalizable linear operators.

Solution: Let $A$ and $B$ be simultaneously diagonalizable matrices. Therefore there exists an invertible matrix $Q \in M_n(\mathbb{F})$ such that both $Q^{-1}AQ$ and $Q^{-1}BQ$ are diagonal matrices.

Let $\gamma$ be the standard basis for $\mathbb{F}^n$. Therefore $[L_A]_\gamma = A$ and $[L_B]_\gamma = B$. By Question 13 of Section 2.5, there exists a basis $\beta$ for $\mathbb{F}^n$ such that $Q = [I]_\beta^\gamma$. Therefore
$$[L_A]_\beta = ([I]_\beta^\gamma)^{-1}[L_A]_\gamma [I]_\beta^\gamma = Q^{-1}AQ \quad \text{and} \quad [L_B]_\beta = ([I]_\beta^\gamma)^{-1}[L_B]_\gamma [I]_\beta^\gamma = Q^{-1}BQ$$
are both diagonal matrices. Hence $L_A$ and $L_B$ are simultaneously diagonalizable linear operators.

5.2 Question 18a) Prove that if $T$ and $U$ are simultaneously diagonalizable then $T$ and $U$ commute (i.e. $TU = UT$).

Solution: Let $T$ and $U$ be simultaneously diagonalizable on a finite dimensional vector space $V$. Therefore, there exists a basis $\beta$ of $V$ such that $[T]_\beta$ and $[U]_\beta$ are diagonal matrices. Since diagonal matrices commute,
$$[TU]_\beta = [T]_\beta[U]_\beta = [U]_\beta[T]_\beta = [UT]_\beta.$$
Therefore, since $\beta$ is a basis, $TU = UT$ as desired.


5.2 Question 18b) Show that if $A$ and $B$ are simultaneously diagonalizable matrices, then $A$ and $B$ commute.

Solution: Let $A$ and $B$ be simultaneously diagonalizable matrices. Then there exists an invertible matrix $Q \in M_n(\mathbb{F})$ such that both $Q^{-1}AQ$ and $Q^{-1}BQ$ are diagonal matrices. Since diagonal matrices commute,
$$Q^{-1}ABQ = Q^{-1}A(QQ^{-1})BQ = (Q^{-1}AQ)(Q^{-1}BQ) = (Q^{-1}BQ)(Q^{-1}AQ) = Q^{-1}B(QQ^{-1})AQ = Q^{-1}BAQ.$$
Hence
$$AB = (QQ^{-1})AB(QQ^{-1}) = Q(Q^{-1}ABQ)Q^{-1} = Q(Q^{-1}BAQ)Q^{-1} = (QQ^{-1})BA(QQ^{-1}) = BA$$
as desired.

5.2 Question 19) Let $T$ be a diagonalizable linear operator on a finite dimensional vector space and let $m$ be any positive integer. Prove that $T$ and $T^m$ are simultaneously diagonalizable.

Solution: Let $T$ be a diagonalizable linear operator on a finite dimensional vector space $V$ and let $m$ be any positive integer. Therefore there exists a basis $\beta$ for $V$ such that $[T]_\beta$ is a diagonal matrix. Hence $[T^m]_\beta = ([T]_\beta)^m$ is a power of a diagonal matrix and thus a diagonal matrix. Hence $T$ and $T^m$ are simultaneously diagonalizable.
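Using the $Q$ and $D$ found in Question 7 of Section 5.2 above, one can see this concretely: the same $Q$ diagonalizes both $A$ and $A^m$. A sketch with exact rational arithmetic (hypothetical helper names, not from the text):

```python
from fractions import Fraction as Fr

def matmul(A, B):
    # Product of two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A  = [[Fr(1), Fr(4)], [Fr(2), Fr(3)]]
Q  = [[Fr(1), Fr(-2)], [Fr(1), Fr(1)]]
Qi = [[Fr(1, 3), Fr(2, 3)], [Fr(-1, 3), Fr(1, 3)]]    # Q^{-1}

A3 = matmul(A, matmul(A, A))                           # A^3
D  = matmul(Qi, matmul(A, Q))
D3 = matmul(Qi, matmul(A3, Q))
assert D == [[Fr(5), Fr(0)], [Fr(0), Fr(-1)]]          # Q^{-1} A   Q = diag(5, -1)
assert D3 == [[Fr(125), Fr(0)], [Fr(0), Fr(-1)]]       # Q^{-1} A^3 Q = diag(5^3, (-1)^3)
```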


Chapter Six

6.1 Question 9a) Let $\beta$ be a basis for a finite dimensional inner product space. Prove that if $\langle \vec{x}, \vec{z}\rangle = 0$ for all $\vec{z} \in \beta$, then $\vec{x} = \vec{0}$.

Solution: Let $\beta$ be a basis for a finite dimensional inner product space $V$. Suppose $\vec{x} \in V$ is such that $\langle \vec{x}, \vec{z}\rangle = 0$ for all $\vec{z} \in \beta$. Since $\beta$ is a basis for $V$, there exist vectors $\vec{z}_1, \ldots, \vec{z}_n \in \beta$ and scalars $a_1, a_2, \ldots, a_n \in \mathbb{F}$ such that $\vec{x} = a_1\vec{z}_1 + \cdots + a_n\vec{z}_n$. Hence
$$\langle \vec{x}, \vec{x}\rangle = \langle \vec{x}, a_1\vec{z}_1 + \cdots + a_n\vec{z}_n\rangle = \overline{a_1}\langle \vec{x}, \vec{z}_1\rangle + \cdots + \overline{a_n}\langle \vec{x}, \vec{z}_n\rangle = \overline{a_1}\,0 + \cdots + \overline{a_n}\,0 = 0.$$
Hence $\langle \vec{x}, \vec{x}\rangle = 0$ so $\vec{x} = \vec{0}_V$ as desired.

6.1 Question 9b) Let $\beta$ be a basis for a finite dimensional inner product space. Prove that if $\langle \vec{x}, \vec{z}\rangle = \langle \vec{y}, \vec{z}\rangle$ for all $\vec{z} \in \beta$, then $\vec{x} = \vec{y}$.

Solution: Let $\beta$ be a basis for a finite dimensional inner product space $V$. Suppose $\vec{x}, \vec{y} \in V$ are such that $\langle \vec{x}, \vec{z}\rangle = \langle \vec{y}, \vec{z}\rangle$ for all $\vec{z} \in \beta$. Then
$$\langle \vec{x} - \vec{y}, \vec{z}\rangle = \langle \vec{x}, \vec{z}\rangle - \langle \vec{y}, \vec{z}\rangle = 0$$
for all $\vec{z} \in \beta$. Hence $\vec{x} - \vec{y} = \vec{0}_V$ by Question 9a of Section 6.1, so $\vec{x} = \vec{y}$.

6.1 Question 10) Let $V$ be an inner product space, and suppose that $\vec{x}$ and $\vec{y}$ are orthogonal vectors in $V$. Prove that $\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2$.

Solution: Let $\vec{x}, \vec{y} \in V$ be orthogonal vectors. Therefore $\langle \vec{x}, \vec{y}\rangle = 0$ (and thus $\langle \vec{y}, \vec{x}\rangle = \overline{\langle \vec{x}, \vec{y}\rangle} = 0$). Hence
$$\|\vec{x} + \vec{y}\|^2 = \langle \vec{x} + \vec{y}, \vec{x} + \vec{y}\rangle = \langle \vec{x}, \vec{x}\rangle + \langle \vec{y}, \vec{x}\rangle + \langle \vec{x}, \vec{y}\rangle + \langle \vec{y}, \vec{y}\rangle = \|\vec{x}\|^2 + \|\vec{y}\|^2$$
as desired.

6.1 Question 11) Prove the parallelogram law on an inner product space $V$; that is, show that
$$\|\vec{x} + \vec{y}\|^2 + \|\vec{x} - \vec{y}\|^2 = 2\|\vec{x}\|^2 + 2\|\vec{y}\|^2$$
for all $\vec{x}, \vec{y} \in V$.

Solution: The proof is the following direct computation:
$$\|\vec{x} + \vec{y}\|^2 + \|\vec{x} - \vec{y}\|^2 = \langle \vec{x} + \vec{y}, \vec{x} + \vec{y}\rangle + \langle \vec{x} - \vec{y}, \vec{x} - \vec{y}\rangle = \langle \vec{x}, \vec{x}\rangle + \langle \vec{x}, \vec{y}\rangle + \langle \vec{y}, \vec{x}\rangle + \langle \vec{y}, \vec{y}\rangle + \langle \vec{x}, \vec{x}\rangle - \langle \vec{x}, \vec{y}\rangle - \langle \vec{y}, \vec{x}\rangle + \langle \vec{y}, \vec{y}\rangle = 2\langle \vec{x}, \vec{x}\rangle + 2\langle \vec{y}, \vec{y}\rangle = 2\|\vec{x}\|^2 + 2\|\vec{y}\|^2$$
as desired.
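Both Question 10 and Question 11 are easy to check numerically for the standard inner product on $\mathbb{R}^3$; a minimal sketch (hypothetical helper names, not from the text):

```python
def inner(x, y):
    # Standard (real) inner product on R^n.
    return sum(a * b for a, b in zip(x, y))

def nsq(x):
    # ||x||^2
    return inner(x, x)

# Question 10 (Pythagoras): x and y below are orthogonal.
x, y = [1, 2, 2], [2, 1, -2]
assert inner(x, y) == 0
assert nsq([a + b for a, b in zip(x, y)]) == nsq(x) + nsq(y)

# Question 11 (parallelogram law) holds for arbitrary u, v.
u, v = [3, 1, 4], [2, -2, 1]
s = nsq([a + b for a, b in zip(u, v)]) + nsq([a - b for a, b in zip(u, v)])
assert s == 2 * nsq(u) + 2 * nsq(v)
```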


6.1 Question 12) Let $\{\vec{v}_1, \ldots, \vec{v}_k\}$ be an orthogonal set in $V$, and let $a_1, \ldots, a_k$ be scalars. Prove that $\left\|\sum_{j=1}^{k} a_j\vec{v}_j\right\|^2 = \sum_{j=1}^{k}|a_j|^2\|\vec{v}_j\|^2$.

Solution: We will provide two proofs to this question.

Proof 1: To prove this result, we will proceed by induction on $k$ and apply Question 10.

Base Case: Clearly $\|a_1\vec{v}_1\|^2 = (|a_1|\|\vec{v}_1\|)^2 = |a_1|^2\|\vec{v}_1\|^2$ as desired.

Inductive Step: Suppose the result holds for some fixed $k$; that is, if $\{\vec{v}_1, \ldots, \vec{v}_k\}$ is an orthogonal set in $V$ and $a_1, \ldots, a_k$ are scalars then $\left\|\sum_{j=1}^{k} a_j\vec{v}_j\right\|^2 = \sum_{j=1}^{k}|a_j|^2\|\vec{v}_j\|^2$. We desire to prove the result for $k+1$. Therefore, suppose $\{\vec{v}_1, \ldots, \vec{v}_{k+1}\}$ is an orthogonal set in $V$ and $a_1, \ldots, a_{k+1}$ are scalars. Let $\vec{x} = \sum_{j=1}^{k} a_j\vec{v}_j$. Since $\{\vec{v}_1, \ldots, \vec{v}_{k+1}\}$ is an orthogonal set, $\langle \vec{v}_j, \vec{v}_{k+1}\rangle = 0$ for all $j = 1, \ldots, k$ so
$$\langle \vec{x}, a_{k+1}\vec{v}_{k+1}\rangle = \sum_{j=1}^{k} a_j\overline{a_{k+1}}\langle \vec{v}_j, \vec{v}_{k+1}\rangle = \sum_{j=1}^{k} a_j\overline{a_{k+1}}(0) = 0.$$
Hence $\vec{x}$ and $a_{k+1}\vec{v}_{k+1}$ are orthogonal vectors. Therefore, by Question 10 as proven above,
$$\left\|\sum_{j=1}^{k+1} a_j\vec{v}_j\right\|^2 = \|\vec{x} + a_{k+1}\vec{v}_{k+1}\|^2 = \|\vec{x}\|^2 + \|a_{k+1}\vec{v}_{k+1}\|^2.$$
However, $\|a_{k+1}\vec{v}_{k+1}\|^2 = |a_{k+1}|^2\|\vec{v}_{k+1}\|^2$ by the base case and $\|\vec{x}\|^2 = \left\|\sum_{j=1}^{k} a_j\vec{v}_j\right\|^2 = \sum_{j=1}^{k}|a_j|^2\|\vec{v}_j\|^2$ by the inductive hypothesis. Hence
$$\left\|\sum_{j=1}^{k+1} a_j\vec{v}_j\right\|^2 = \sum_{j=1}^{k}|a_j|^2\|\vec{v}_j\|^2 + |a_{k+1}|^2\|\vec{v}_{k+1}\|^2 = \sum_{j=1}^{k+1}|a_j|^2\|\vec{v}_j\|^2$$
as desired.

Hence, by the Principle of Mathematical Induction, the result holds.

Proof 2: Let $\{\vec{v}_1, \ldots, \vec{v}_k\}$ be an orthogonal set in $V$, and let $a_1, \ldots, a_k$ be scalars. A direct computation shows
$$\left\|\sum_{j=1}^{k} a_j\vec{v}_j\right\|^2 = \left\langle \sum_{i=1}^{k} a_i\vec{v}_i, \sum_{j=1}^{k} a_j\vec{v}_j\right\rangle = \sum_{i,j=1}^{k} a_i\overline{a_j}\langle \vec{v}_i, \vec{v}_j\rangle = \sum_{j=1}^{k} a_j\overline{a_j}\|\vec{v}_j\|^2 = \sum_{j=1}^{k}|a_j|^2\|\vec{v}_j\|^2$$
as desired.


6.1 Question 15a) Prove that if $V$ is an inner product space, then $|\langle \vec{x}, \vec{y}\rangle| = \|\vec{x}\|\|\vec{y}\|$ if and only if one of the vectors $\vec{x}$ or $\vec{y}$ is a multiple of the other.

Solution: First we notice that if $\vec{x} = a\vec{y}$ for some $a \in \mathbb{F}$ then
$$|\langle \vec{x}, \vec{y}\rangle| = |\langle a\vec{y}, \vec{y}\rangle| = |a|\|\vec{y}\|^2 = \|a\vec{y}\|\|\vec{y}\| = \|\vec{x}\|\|\vec{y}\|$$
as desired. Similarly, if $\vec{y} = a\vec{x}$ for some $a \in \mathbb{F}$ then
$$|\langle \vec{x}, \vec{y}\rangle| = |\langle \vec{x}, a\vec{x}\rangle| = |\overline{a}|\|\vec{x}\|^2 = \|\vec{x}\|\|a\vec{x}\| = \|\vec{x}\|\|\vec{y}\|$$
as desired. Therefore, if one of the vectors $\vec{x}$ or $\vec{y}$ is a multiple of the other then $|\langle \vec{x}, \vec{y}\rangle| = \|\vec{x}\|\|\vec{y}\|$.

For the converse, suppose $|\langle \vec{x}, \vec{y}\rangle| = \|\vec{x}\|\|\vec{y}\|$. If $\vec{y} = \vec{0}$ then clearly $\vec{y} = 0\vec{x}$ so $\vec{y}$ is a multiple of $\vec{x}$. Hence we may assume that $\vec{y} \neq \vec{0}$ (and thus $\|\vec{y}\| \neq 0$). Therefore, we may define
$$a = \frac{\langle \vec{x}, \vec{y}\rangle}{\|\vec{y}\|^2} \in \mathbb{F} \quad \text{so that} \quad |a| = \frac{|\langle \vec{x}, \vec{y}\rangle|}{\|\vec{y}\|^2} = \frac{\|\vec{x}\|}{\|\vec{y}\|}.$$
Let $\vec{z} = \vec{x} - a\vec{y}$. Then
$$\langle \vec{z}, \vec{y}\rangle = \langle \vec{x}, \vec{y}\rangle - a\langle \vec{y}, \vec{y}\rangle = \langle \vec{x}, \vec{y}\rangle - \frac{\langle \vec{x}, \vec{y}\rangle}{\|\vec{y}\|^2}\|\vec{y}\|^2 = 0$$
by our choice of $a$. Hence $\vec{z}$ and $\vec{y}$ are orthogonal vectors. Therefore $\vec{z}$ and $a\vec{y}$ are orthogonal vectors so, by Section 6.1 Question 10,
$$\|\vec{x}\|^2 = \|\vec{z} + a\vec{y}\|^2 = \|\vec{z}\|^2 + \|a\vec{y}\|^2 = \|\vec{z}\|^2 + |a|^2\|\vec{y}\|^2 = \|\vec{z}\|^2 + \|\vec{x}\|^2.$$
It is clear that the above equation implies $\|\vec{z}\| = 0$. Thus $\vec{z} = \vec{0}$ so $\vec{x} = a\vec{y}$. Hence $\vec{x}$ is a multiple of $\vec{y}$ as desired.

6.1 Question 17) Let $T$ be a linear operator on an inner product space $V$ and suppose that $\|T(\vec{x})\| = \|\vec{x}\|$ for all $\vec{x} \in V$. Prove that $T$ is one-to-one.

Solution: To see that $T$ is one-to-one, suppose $\vec{x} \in \ker(T)$. Then $T(\vec{x}) = \vec{0}_V$ so
$$\|\vec{x}\| = \|T(\vec{x})\| = \|\vec{0}\| = 0.$$
Since $\|\vec{x}\| = 0$, $\vec{x} = \vec{0}_V$. Hence $\ker(T) = \{\vec{0}_V\}$ so $T$ is one-to-one.

6.1 Question 19a) Let $V$ be an inner product space. Prove that $\|\vec{x} - \vec{y}\|^2 = \|\vec{x}\|^2 - 2\operatorname{Re}(\langle \vec{x}, \vec{y}\rangle) + \|\vec{y}\|^2$ for all $\vec{x}, \vec{y} \in V$.

Solution: This question is another direct computation:
$$\|\vec{x} - \vec{y}\|^2 = \langle \vec{x} - \vec{y}, \vec{x} - \vec{y}\rangle = \langle \vec{x}, \vec{x}\rangle - \langle \vec{x}, \vec{y}\rangle - \langle \vec{y}, \vec{x}\rangle + \langle \vec{y}, \vec{y}\rangle = \|\vec{x}\|^2 - \big(\langle \vec{x}, \vec{y}\rangle + \overline{\langle \vec{x}, \vec{y}\rangle}\big) + \|\vec{y}\|^2 = \|\vec{x}\|^2 - 2\operatorname{Re}(\langle \vec{x}, \vec{y}\rangle) + \|\vec{y}\|^2$$
as desired.


6.1 Question 19b) Let $V$ be an inner product space. Prove $|\|\vec{x}\| - \|\vec{y}\|| \leq \|\vec{x} - \vec{y}\|$ for all $\vec{x}, \vec{y} \in V$.

Solution: By the triangle inequality,
$$\|\vec{x}\| = \|(\vec{x} - \vec{y}) + \vec{y}\| \leq \|\vec{x} - \vec{y}\| + \|\vec{y}\|.$$
Hence $\|\vec{x}\| - \|\vec{y}\| \leq \|\vec{x} - \vec{y}\|$. By reversing the roles of $\vec{x}$ and $\vec{y}$ in the above computation, we obtain that $\|\vec{y}\| - \|\vec{x}\| \leq \|\vec{y} - \vec{x}\| = \|\vec{x} - \vec{y}\|$. Hence, by combining these two inequalities, the result follows.

6.1 Question 20a) Let $V$ be an inner product space over $\mathbb{R}$. Prove that $\langle \vec{x}, \vec{y}\rangle = \frac{1}{4}\|\vec{x} + \vec{y}\|^2 - \frac{1}{4}\|\vec{x} - \vec{y}\|^2$ for all $\vec{x}, \vec{y} \in V$.

Solution: This question is another direct computation solved by expanding the right-hand side:
$$\tfrac{1}{4}\|\vec{x} + \vec{y}\|^2 - \tfrac{1}{4}\|\vec{x} - \vec{y}\|^2 = \tfrac{1}{4}\big(\langle \vec{x}, \vec{x}\rangle + \langle \vec{x}, \vec{y}\rangle + \langle \vec{y}, \vec{x}\rangle + \langle \vec{y}, \vec{y}\rangle\big) - \tfrac{1}{4}\big(\langle \vec{x}, \vec{x}\rangle - \langle \vec{x}, \vec{y}\rangle - \langle \vec{y}, \vec{x}\rangle + \langle \vec{y}, \vec{y}\rangle\big) = \tfrac{1}{2}\langle \vec{x}, \vec{y}\rangle + \tfrac{1}{2}\langle \vec{y}, \vec{x}\rangle = \langle \vec{x}, \vec{y}\rangle$$
as desired (using $\langle \vec{y}, \vec{x}\rangle = \langle \vec{x}, \vec{y}\rangle$ since the inner product space is over $\mathbb{R}$).

6.1 Question 20b) Let $V$ be an inner product space over $\mathbb{C}$. Prove that $\langle \vec{x}, \vec{y}\rangle = \frac{1}{4}\sum_{k=1}^{4} i^k\|\vec{x} + i^k\vec{y}\|^2$ for all $\vec{x}, \vec{y} \in V$.

Solution: This question is another direct computation solved by expanding the right-hand side:
$$\tfrac{1}{4}\sum_{k=1}^{4} i^k\|\vec{x} + i^k\vec{y}\|^2 = \tfrac{1}{4}\sum_{k=1}^{4} i^k\big(\langle \vec{x}, \vec{x}\rangle + \overline{i^k}\langle \vec{x}, \vec{y}\rangle + i^k\langle \vec{y}, \vec{x}\rangle + \langle \vec{y}, \vec{y}\rangle\big) = \tfrac{1}{4}\sum_{k=1}^{4}\big(i^k\langle \vec{x}, \vec{x}\rangle + \langle \vec{x}, \vec{y}\rangle + (-1)^k\langle \vec{y}, \vec{x}\rangle + i^k\langle \vec{y}, \vec{y}\rangle\big) = \langle \vec{x}, \vec{y}\rangle$$
(since $\sum_{k=1}^{4} i^k = 0 = \sum_{k=1}^{4}(-1)^k$) as desired.
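The complex polarization identity can be verified numerically with the standard inner product on $\mathbb{C}^n$ (linear in the first slot, conjugate-linear in the second); a minimal sketch (hypothetical helper names, not from the text):

```python
def inner(x, y):
    # Standard inner product on C^n: linear in the first slot, conjugate-linear in the second.
    return sum(a * b.conjugate() for a, b in zip(x, y))

def nsq(x):
    # ||x||^2 is always real.
    return inner(x, x).real

def polarization(x, y):
    # <x, y> = (1/4) * sum_{k=1}^{4} i^k * ||x + i^k y||^2
    return sum((1j ** k) * nsq([a + (1j ** k) * b for a, b in zip(x, y)])
               for k in range(1, 5)) / 4

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]
assert abs(polarization(x, y) - inner(x, y)) < 1e-9
```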

6.1 Question 21) Let $A$ be an $n \times n$ matrix and define $\operatorname{Re}(A) = \frac{1}{2}(A + A^*)$ and $\operatorname{Im}(A) = \frac{1}{2i}(A - A^*)$. Prove that $(\operatorname{Re}(A))^* = \operatorname{Re}(A)$, $(\operatorname{Im}(A))^* = \operatorname{Im}(A)$, and $A = \operatorname{Re}(A) + i\operatorname{Im}(A)$. Moreover, prove that if $A = B_1 + iB_2$ where $B_1^* = B_1$ and $B_2^* = B_2$, then $B_1 = \operatorname{Re}(A)$ and $B_2 = \operatorname{Im}(A)$.

Solution: To see that $\operatorname{Re}(A)$ and $\operatorname{Im}(A)$ are self-adjoint, we notice that
$$(\operatorname{Re}(A))^* = \left(\tfrac{1}{2}(A + A^*)\right)^* = \tfrac{1}{2}(A^* + A) = \operatorname{Re}(A)$$
and
$$(\operatorname{Im}(A))^* = \left(\tfrac{1}{2i}(A - A^*)\right)^* = -\tfrac{1}{2i}(A^* - A) = \tfrac{1}{2i}(A - A^*) = \operatorname{Im}(A)$$
so $\operatorname{Re}(A)$ and $\operatorname{Im}(A)$ are self-adjoint. Moreover,
$$\operatorname{Re}(A) + i\operatorname{Im}(A) = \tfrac{1}{2}(A + A^*) + \tfrac{1}{2}(A - A^*) = A.$$

Suppose $A = B_1 + iB_2$ where $B_1^* = B_1$ and $B_2^* = B_2$. Then
$$\operatorname{Re}(A) = \tfrac{1}{2}(A + A^*) = \tfrac{1}{2}((B_1 + iB_2) + (B_1 + iB_2)^*) = \tfrac{1}{2}((B_1 + iB_2) + (B_1^* - iB_2^*)) = \tfrac{1}{2}((B_1 + iB_2) + (B_1 - iB_2)) = B_1$$
and
$$\operatorname{Im}(A) = \tfrac{1}{2i}(A - A^*) = \tfrac{1}{2i}((B_1 + iB_2) - (B_1 + iB_2)^*) = \tfrac{1}{2i}((B_1 + iB_2) - (B_1 - iB_2)) = B_2$$
as desired.
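The decomposition $A = \operatorname{Re}(A) + i\operatorname{Im}(A)$ can be checked directly for a concrete complex matrix; a minimal sketch (hypothetical helper names, not from the text):

```python
def adjoint(A):
    # Conjugate transpose A* of a square matrix given as a list of rows.
    n = len(A)
    return [[A[i][j].conjugate() for i in range(n)] for j in range(n)]

def re_part(A):
    # Re(A) = (A + A*) / 2
    As = adjoint(A)
    return [[(A[i][j] + As[i][j]) / 2 for j in range(len(A))] for i in range(len(A))]

def im_part(A):
    # Im(A) = (A - A*) / (2i)
    As = adjoint(A)
    return [[(A[i][j] - As[i][j]) / 2j for j in range(len(A))] for i in range(len(A))]

A = [[1 + 1j, 2 - 1j], [3j, 4 + 0j]]
R, I = re_part(A), im_part(A)
assert adjoint(R) == R and adjoint(I) == I          # Re(A) and Im(A) are self-adjoint
assert all(abs(R[i][j] + 1j * I[i][j] - A[i][j]) < 1e-9
           for i in range(2) for j in range(2))     # A = Re(A) + i Im(A)
```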


6.2 Question 6) Let $V$ be an inner product space, and let $W$ be a finite dimensional subspace of $V$. If $\vec{x} \notin W$, prove that there exists a $\vec{y} \in V$ such that $\vec{y} \in W^\perp$, but $\langle \vec{x}, \vec{y}\rangle \neq 0$.

Solution: Let $V$ be an inner product space, and let $W$ be a finite dimensional subspace of $V$. Suppose $\vec{x} \notin W$. By Theorem 6.6, there exist unique vectors $\vec{w} \in W$ and $\vec{y} \in W^\perp$ such that $\vec{x} = \vec{w} + \vec{y}$. Suppose to the contrary that $\langle \vec{x}, \vec{y}\rangle = 0$. Therefore
$$0 = \langle \vec{x}, \vec{y}\rangle = \langle \vec{w} + \vec{y}, \vec{y}\rangle = \langle \vec{w}, \vec{y}\rangle + \langle \vec{y}, \vec{y}\rangle.$$
Since $\vec{w} \in W$ and $\vec{y} \in W^\perp$, $\langle \vec{w}, \vec{y}\rangle = 0$ so the above equation implies that $\langle \vec{y}, \vec{y}\rangle = 0$. Hence $\vec{y} = \vec{0}_V$ by the definition of the inner product. However, this implies that $\vec{x} = \vec{w} + \vec{y} = \vec{w} \in W$ which is a contradiction. Hence $\langle \vec{x}, \vec{y}\rangle \neq 0$ as desired.

6.2 Question 7) Let $\beta$ be a basis for a subspace $W$ of an inner product space $V$, and let $\vec{z} \in V$. Prove that $\vec{z} \in W^\perp$ if and only if $\langle \vec{z}, \vec{v}\rangle = 0$ for every $\vec{v} \in \beta$.

Solution: First suppose that $\vec{z} \in W^\perp$. Since $\beta$ is a basis for $W$, if $\vec{v} \in \beta$ then $\vec{v} \in W$ so $\langle \vec{z}, \vec{v}\rangle = 0$ by the definition of $W^\perp$. Hence we have proven one direction.

Now suppose that $\langle \vec{z}, \vec{v}\rangle = 0$ for every $\vec{v} \in \beta$. To show that $\vec{z} \in W^\perp$, we must show that $\langle \vec{z}, \vec{w}\rangle = 0$ for all $\vec{w} \in W$. Let $\vec{w} \in W$ be arbitrary. Therefore, since $\beta$ is a basis for $W$, there exist $\vec{v}_1, \ldots, \vec{v}_n \in \beta$ and scalars $a_1, \ldots, a_n \in \mathbb{F}$ such that $\vec{w} = \sum_{j=1}^{n} a_j\vec{v}_j$. Therefore $\langle \vec{z}, \vec{v}_j\rangle = 0$ for all $j = 1, \ldots, n$ by assumption and thus
$$\langle \vec{z}, \vec{w}\rangle = \sum_{j=1}^{n}\overline{a_j}\langle \vec{z}, \vec{v}_j\rangle = \sum_{j=1}^{n}\overline{a_j}(0) = 0$$
as desired. Hence, as $\vec{w} \in W$ was arbitrary, $\vec{z} \in W^\perp$ by the definition of the orthogonal complement.

6.2 Question 11) Let $A$ be an $n \times n$ matrix with complex entries. Prove that $AA^* = I_n$ if and only if the rows of $A$ form an orthonormal basis for $\mathbb{C}^n$.

Solution: Let $\vec{v}_i$ be the $i^{th}$ row of $A$. Therefore the $i^{th}$ column of $A^*$ is the vector whose entries are the complex conjugates of the entries of $\vec{v}_i$. Therefore it is easy to verify that $AA^* = [\langle \vec{v}_i, \vec{v}_j\rangle]_{i,j}$. Therefore $AA^* = I_n$ if and only if $\langle \vec{v}_i, \vec{v}_j\rangle = \delta_{i,j}$ if and only if the rows of $A$ form an orthonormal basis for $\mathbb{C}^n$.
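For instance, the rows of the unitary matrix $\frac{1}{\sqrt{2}}\begin{pmatrix}1 & i\\ 1 & -i\end{pmatrix}$ are orthonormal, and the entries of $AA^*$ are exactly the pairwise inner products of the rows; a minimal sketch (hypothetical helper names, not from the text):

```python
import math

def inner(u, v):
    # Standard inner product on C^n (conjugate-linear in the second slot).
    return sum(a * b.conjugate() for a, b in zip(u, v))

s = 1 / math.sqrt(2)
A = [[s, s * 1j], [s, -s * 1j]]

# Rows are orthonormal ...
assert abs(inner(A[0], A[0]) - 1) < 1e-12
assert abs(inner(A[1], A[1]) - 1) < 1e-12
assert abs(inner(A[0], A[1])) < 1e-12

# ... which is exactly the statement that (A A*)_{i,j} = <v_i, v_j> = delta_{i,j}.
AAstar = [[inner(A[i], A[j]) for j in range(2)] for i in range(2)]
assert all(abs(AAstar[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```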

6.2 Question 13) Let $V$ be an inner product space, let $S$ and $S_0$ be subsets of $V$, and let $W$ be a finite dimensional subspace of $V$. Prove the following results:

(a) $S_0 \subseteq S$ implies that $S^\perp \subseteq S_0^\perp$.
(b) $S \subseteq (S^\perp)^\perp$ so $\operatorname{span}(S) \subseteq (S^\perp)^\perp$.
(c) $W = (W^\perp)^\perp$.
(d) $V = W \oplus W^\perp$.

Solution: a) Suppose $S_0 \subseteq S$. Let $\vec{x} \in S^\perp$ be arbitrary. Then $\langle \vec{x}, \vec{z}\rangle = 0$ for all $\vec{z} \in S$. Therefore, since $S_0 \subseteq S$, $\langle \vec{x}, \vec{z}\rangle = 0$ for all $\vec{z} \in S_0$. Hence $\vec{x} \in S_0^\perp$ by the definition of $S_0^\perp$. Therefore, as $\vec{x} \in S^\perp$ was arbitrary, $S^\perp \subseteq S_0^\perp$.

b) Let $\vec{x} \in S$ be arbitrary. If $\vec{z} \in S^\perp$, then $\langle \vec{z}, \vec{x}\rangle = 0$. Hence $\langle \vec{x}, \vec{z}\rangle = 0$ for all $\vec{z} \in S^\perp$. Hence $\vec{x} \in (S^\perp)^\perp$ by the definition of $(S^\perp)^\perp$. Therefore, since $\vec{x} \in S$ was arbitrary, $S \subseteq (S^\perp)^\perp$. Since $S \subseteq (S^\perp)^\perp$ and $(S^\perp)^\perp$ is a subspace, $\operatorname{span}(S) \subseteq (S^\perp)^\perp$.


c) Let $W$ be a subspace of $V$. By part b), $W \subseteq (W^\perp)^\perp$. To see the other inclusion, suppose to the contrary that $W \neq (W^\perp)^\perp$. Then there exists a vector $\vec{x} \in (W^\perp)^\perp$ such that $\vec{x} \notin W$. By Section 6.2 Question 6, there exists a vector $\vec{y} \in V$ such that $\vec{y} \in W^\perp$ but $\langle \vec{x}, \vec{y}\rangle \neq 0$. However, since $\vec{x} \in (W^\perp)^\perp$, $\vec{y} \in W^\perp$, and $\langle \vec{x}, \vec{y}\rangle \neq 0$, we have a contradiction to the definition of $(W^\perp)^\perp$. Hence $W = (W^\perp)^\perp$ as desired.

d) To show that $V = W \oplus W^\perp$, it suffices to show that $V = W + W^\perp$ and $W \cap W^\perp = \{\vec{0}\}$. To see that $V = W + W^\perp$, let $\vec{v} \in V$ be arbitrary. By Theorem 6.6 of the text, there exist vectors $\vec{w} \in W$ and $\vec{z} \in W^\perp$ such that $\vec{v} = \vec{w} + \vec{z}$. Hence $\vec{v} \in W + W^\perp$. Therefore, since $\vec{v} \in V$ was arbitrary, $V = W + W^\perp$.

To see that $W \cap W^\perp = \{\vec{0}\}$, let $\vec{x} \in W \cap W^\perp$ be arbitrary. Therefore $\vec{x} \in W$ and $\vec{x} \in W^\perp$. Since $\vec{x} \in W^\perp$, $\langle \vec{x}, \vec{y}\rangle = 0$ for all $\vec{y} \in W$. In particular, since $\vec{x} \in W$, $\langle \vec{x}, \vec{x}\rangle = 0$. Hence $\vec{x} = \vec{0}$ as desired. Thus $W \cap W^\perp = \{\vec{0}\}$ so $V = W \oplus W^\perp$.

6.2 Question 14) Let $W_1$ and $W_2$ be subspaces of a finite dimensional inner product space. Prove that $(W_1 + W_2)^\perp = W_1^\perp \cap W_2^\perp$ and $(W_1 \cap W_2)^\perp = W_1^\perp + W_2^\perp$.

Solution: Let $W_1$ and $W_2$ be subspaces of a finite dimensional inner product space. To see that $(W_1 + W_2)^\perp = W_1^\perp \cap W_2^\perp$, we will prove both inclusions. First we recall that $W_1 \subseteq W_1 + W_2$ and $W_2 \subseteq W_1 + W_2$. Therefore, by Section 6.2 Question 13a), $(W_1 + W_2)^\perp \subseteq W_1^\perp$ and $(W_1 + W_2)^\perp \subseteq W_2^\perp$. Hence $(W_1 + W_2)^\perp \subseteq W_1^\perp \cap W_2^\perp$.

To prove the other inclusion, suppose $\vec{x} \in W_1^\perp \cap W_2^\perp$. To see that $\vec{x} \in (W_1 + W_2)^\perp$, let $\vec{z} \in W_1 + W_2$ be arbitrary. Then there exist vectors $\vec{w}_1 \in W_1$ and $\vec{w}_2 \in W_2$ such that $\vec{z} = \vec{w}_1 + \vec{w}_2$. Since $\vec{x} \in W_1^\perp \cap W_2^\perp$, $\langle \vec{x}, \vec{w}_1\rangle = 0 = \langle \vec{x}, \vec{w}_2\rangle$. Hence
$$\langle \vec{x}, \vec{z}\rangle = \langle \vec{x}, \vec{w}_1\rangle + \langle \vec{x}, \vec{w}_2\rangle = 0 + 0 = 0.$$
Hence, as $\vec{z} \in W_1 + W_2$ was arbitrary, $\vec{x} \in (W_1 + W_2)^\perp$. Hence $(W_1 + W_2)^\perp = W_1^\perp \cap W_2^\perp$.

Since $W_1^\perp$ and $W_2^\perp$ are subspaces, by applying the first part of this proof to these subspaces, we obtain that
$$(W_1^\perp + W_2^\perp)^\perp = (W_1^\perp)^\perp \cap (W_2^\perp)^\perp = W_1 \cap W_2$$
where the second equality comes from Section 6.2 Question 13c). Hence
$$(W_1 \cap W_2)^\perp = ((W_1^\perp + W_2^\perp)^\perp)^\perp = W_1^\perp + W_2^\perp$$
by Section 6.2 Question 13c) as $W_1^\perp + W_2^\perp$ is a subspace.

6.2 Question 15) Let $V$ be a finite dimensional inner product space over $\mathbb{F}$. Let $\beta = \{\vec{v}_1, \ldots, \vec{v}_n\}$ be an orthonormal basis of $V$. For any $\vec{x}, \vec{y} \in V$ prove that $\langle \vec{x}, \vec{y}\rangle = \sum_{i=1}^{n}\langle \vec{x}, \vec{v}_i\rangle\overline{\langle \vec{y}, \vec{v}_i\rangle}$. Conclude that for any $\vec{x}, \vec{y} \in V$ that $\langle [\vec{x}]_\beta, [\vec{y}]_\beta\rangle' = \langle \vec{x}, \vec{y}\rangle$ where $\langle \cdot, \cdot\rangle'$ is the standard inner product on $\mathbb{F}^n$.

Solution: Let $V$ be a finite dimensional inner product space over $\mathbb{F}$, let $\beta = \{\vec{v}_1, \ldots, \vec{v}_n\}$ be an orthonormal basis of $V$, and let $\vec{x}, \vec{y} \in V$ be arbitrary. Therefore $\vec{x} = \sum_{i=1}^{n}\langle \vec{x}, \vec{v}_i\rangle\vec{v}_i$ and $\vec{y} = \sum_{i=1}^{n}\langle \vec{y}, \vec{v}_i\rangle\vec{v}_i$. Thus
$$\langle \vec{x}, \vec{y}\rangle = \left\langle \sum_{i=1}^{n}\langle \vec{x}, \vec{v}_i\rangle\vec{v}_i, \sum_{j=1}^{n}\langle \vec{y}, \vec{v}_j\rangle\vec{v}_j\right\rangle = \sum_{i,j=1}^{n}\langle \vec{x}, \vec{v}_i\rangle\overline{\langle \vec{y}, \vec{v}_j\rangle}\langle \vec{v}_i, \vec{v}_j\rangle = \sum_{i,j=1}^{n}\langle \vec{x}, \vec{v}_i\rangle\overline{\langle \vec{y}, \vec{v}_j\rangle}\delta_{i,j} = \sum_{i=1}^{n}\langle \vec{x}, \vec{v}_i\rangle\overline{\langle \vec{y}, \vec{v}_i\rangle}$$
as desired.

Since
$$[\vec{x}]_\beta = \begin{pmatrix}\langle \vec{x}, \vec{v}_1\rangle \\ \vdots \\ \langle \vec{x}, \vec{v}_n\rangle\end{pmatrix} \quad \text{and} \quad [\vec{y}]_\beta = \begin{pmatrix}\langle \vec{y}, \vec{v}_1\rangle \\ \vdots \\ \langle \vec{y}, \vec{v}_n\rangle\end{pmatrix},$$
we obtain that $\langle [\vec{x}]_\beta, [\vec{y}]_\beta\rangle' = \langle \vec{x}, \vec{y}\rangle$ where $\langle \cdot, \cdot\rangle'$ is the standard inner product on $\mathbb{F}^n$.


6.2 Question 16) Let $V$ be an inner product space and let $S = \{\vec{v}_1, \ldots, \vec{v}_n\}$ be an orthonormal subset of $V$. Prove that for any $\vec{x} \in V$ we have $\|\vec{x}\|^2 \geq \sum_{k=1}^{n}|\langle \vec{x}, \vec{v}_k\rangle|^2$. Prove that this inequality is an equality if and only if $\vec{x} \in \operatorname{span}(S)$.

Solution: Let $W = \operatorname{span}(S)$. Therefore $W$ is a subspace of $V$ with $S$ as an orthonormal basis. Therefore, by Theorem 6.6, there exists a vector $\vec{z} \in W^\perp$ such that $\vec{x} = \vec{z} + \sum_{k=1}^{n}\langle \vec{x}, \vec{v}_k\rangle\vec{v}_k$. Hence, as $\{\vec{z}, \vec{v}_1, \ldots, \vec{v}_n\}$ is an orthogonal set, by Section 6.1 Question 12 we have
$$\|\vec{x}\|^2 = \|\vec{z}\|^2 + \sum_{k=1}^{n}|\langle \vec{x}, \vec{v}_k\rangle|^2 \geq \sum_{k=1}^{n}|\langle \vec{x}, \vec{v}_k\rangle|^2.$$
Moreover, the above inequality is an equality if and only if $\|\vec{z}\| = 0$ if and only if $\vec{z} = \vec{0}$ if and only if $\vec{x} = \sum_{k=1}^{n}\langle \vec{x}, \vec{v}_k\rangle\vec{v}_k$ if and only if $\vec{x} \in W = \operatorname{span}(S)$ as desired.
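Bessel's inequality is immediate to check for the standard inner product on $\mathbb{R}^3$ with the orthonormal set $S = \{e_1, e_2\}$; a minimal sketch (hypothetical helper names, not from the text):

```python
def inner(x, y):
    # Standard (real) inner product on R^n.
    return sum(a * b for a, b in zip(x, y))

S = [[1, 0, 0], [0, 1, 0]]       # orthonormal subset of R^3
x = [3, -4, 2]
assert sum(inner(x, v) ** 2 for v in S) <= inner(x, x)     # 25 <= 29

xw = [3, -4, 0]                  # lies in span(S), so equality holds
assert sum(inner(xw, v) ** 2 for v in S) == inner(xw, xw)
```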

6.2 Question 17) Let $T$ be a linear operator on an inner product space $V$. If $\langle T(\vec{x}), \vec{y}\rangle = 0$ for all $\vec{x}$ and $\vec{y}$ in some fixed basis $\beta$ of $V$, then $T$ is the zero operator.

Solution: Let $T$ be a linear operator on an inner product space $V$. Suppose $\langle T(\vec{x}), \vec{y}\rangle = 0$ for all $\vec{x}$ and $\vec{y}$ in some fixed basis $\beta$ of $V$. Let $\vec{x}, \vec{y} \in V$ be arbitrary. Since $\beta$ is a basis for $V$, there exist vectors $\vec{v}_1, \ldots, \vec{v}_n \in \beta$ and scalars $a_1, \ldots, a_n, b_1, \ldots, b_n \in \mathbb{F}$ such that $\vec{x} = a_1\vec{v}_1 + \cdots + a_n\vec{v}_n$ and $\vec{y} = b_1\vec{v}_1 + \cdots + b_n\vec{v}_n$. Therefore
$$\langle T(\vec{x}), \vec{y}\rangle = \sum_{i,j=1}^{n} a_i\overline{b_j}\langle T(\vec{v}_i), \vec{v}_j\rangle = 0.$$
Therefore, since $\vec{x}, \vec{y} \in V$ were arbitrary, $\langle T(\vec{x}), \vec{y}\rangle = 0$ for all $\vec{x}, \vec{y} \in V$. In particular, for each fixed $\vec{x} \in V$, by letting $\vec{y} = T(\vec{x}) \in V$ we obtain that $0 = \langle T(\vec{x}), T(\vec{x})\rangle$ so $T(\vec{x}) = \vec{0}$ for all $\vec{x} \in V$. Therefore $T = 0$ as desired.

6.3 Question 6) Let $T$ be a linear operator on an inner product space $V$. Let $U_1 = T + T^*$ and $U_2 = TT^*$. Prove that $U_1^* = U_1$ and $U_2^* = U_2$.

Solution: To see that $U_1$ is self-adjoint, we notice
$$U_1^* = (T + T^*)^* = T^* + (T^*)^* = T^* + T = T + T^* = U_1.$$
To see that $U_2$ is self-adjoint, we notice
$$U_2^* = (TT^*)^* = (T^*)^*T^* = TT^* = U_2$$
as desired.

6.3 Question 8) Let $V$ be a finite dimensional inner product space and let $T$ be a linear operator on $V$. Prove that if $T$ is invertible then $T^*$ is invertible and $(T^*)^{-1} = (T^{-1})^*$.

Solution: Let $V$ be a finite dimensional inner product space and let $T$ be an invertible linear operator on $V$. Then
$$T^*(T^{-1})^* = (T^{-1}T)^* = I^* = I \quad \text{and} \quad (T^{-1})^*T^* = (TT^{-1})^* = I^* = I.$$
Hence $T^*$ is invertible and $(T^*)^{-1} = (T^{-1})^*$.


6.3 Question 10) Let $T$ be a linear operator on an inner product space $V$. Prove that $\|T(\vec{x})\| = \|\vec{x}\|$ for all $\vec{x} \in V$ if and only if $\langle T(\vec{x}), T(\vec{y})\rangle = \langle \vec{x}, \vec{y}\rangle$ for all $\vec{x}, \vec{y} \in V$.

Solution: Let $T$ be a linear operator on an inner product space $V$. Suppose $\|T(\vec{x})\| = \|\vec{x}\|$ for all $\vec{x} \in V$. If $\mathbb{F} = \mathbb{R}$, then by Section 6.1 Question 20a),
$$\langle \vec{x}, \vec{y}\rangle = \tfrac{1}{4}\|\vec{x} + \vec{y}\|^2 - \tfrac{1}{4}\|\vec{x} - \vec{y}\|^2 = \tfrac{1}{4}\|T(\vec{x} + \vec{y})\|^2 - \tfrac{1}{4}\|T(\vec{x} - \vec{y})\|^2 = \tfrac{1}{4}\langle T(\vec{x}) + T(\vec{y}), T(\vec{x}) + T(\vec{y})\rangle - \tfrac{1}{4}\langle T(\vec{x}) - T(\vec{y}), T(\vec{x}) - T(\vec{y})\rangle = \tfrac{1}{2}\langle T(\vec{x}), T(\vec{y})\rangle + \tfrac{1}{2}\langle T(\vec{y}), T(\vec{x})\rangle = \langle T(\vec{x}), T(\vec{y})\rangle$$
for all $\vec{x}, \vec{y} \in V$. Therefore the result is true in the case that $\mathbb{F} = \mathbb{R}$. If $\mathbb{F} = \mathbb{C}$, then by Section 6.1 Question 20b),
$$\langle \vec{x}, \vec{y}\rangle = \tfrac{1}{4}\sum_{k=1}^{4} i^k\|\vec{x} + i^k\vec{y}\|^2 = \tfrac{1}{4}\sum_{k=1}^{4} i^k\|T(\vec{x} + i^k\vec{y})\|^2 = \tfrac{1}{4}\sum_{k=1}^{4} i^k\langle T(\vec{x}) + i^kT(\vec{y}), T(\vec{x}) + i^kT(\vec{y})\rangle = \tfrac{1}{4}\sum_{k=1}^{4}\big(i^k\langle T(\vec{x}), T(\vec{x})\rangle + \langle T(\vec{x}), T(\vec{y})\rangle + (-1)^k\langle T(\vec{y}), T(\vec{x})\rangle + i^k\langle T(\vec{y}), T(\vec{y})\rangle\big) = \langle T(\vec{x}), T(\vec{y})\rangle$$
(since $\sum_{k=1}^{4} i^k = 0 = \sum_{k=1}^{4}(-1)^k$) for all $\vec{x}, \vec{y} \in V$.

To prove the other direction, suppose $\langle T(\vec{x}), T(\vec{y})\rangle = \langle \vec{x}, \vec{y}\rangle$ for all $\vec{x}, \vec{y} \in V$. By letting $\vec{y} = \vec{x}$ in the above equation, we obtain that $\|T(\vec{x})\| = \|\vec{x}\|$ for all $\vec{x} \in V$ as desired.

6.3 Question 11) For a linear operator $T$ on an inner product space $V$, prove that $T^*T = 0$ implies $T = 0$. Is the same result true if we assume that $TT^* = 0$?

Solution: Suppose $T$ is a linear operator on an inner product space $V$ such that $T^*T = 0$. Then for all $\vec{x} \in V$,
$$\|T(\vec{x})\|^2 = \langle T(\vec{x}), T(\vec{x})\rangle = \langle (T^*T)(\vec{x}), \vec{x}\rangle = \langle \vec{0}, \vec{x}\rangle = 0.$$
Hence $T(\vec{x}) = \vec{0}$ for all $\vec{x} \in V$ so $T = 0$.

If $TT^* = 0$, then, since $(T^*)^* = T$, we have $(T^*)^*T^* = 0$, so $T^* = 0$ by the above proof. Hence $T = (T^*)^* = 0^* = 0$.

6.3 Question 12) Let $V$ be an inner product space and let $T$ be a linear operator on $V$. Prove that $\operatorname{Im}(T)^\perp = \ker(T^*)$. Moreover, prove that if $V$ is finite dimensional then $\operatorname{Im}(T) = \ker(T^*)^\perp$.

Solution: To see that $\operatorname{Im}(T)^\perp = \ker(T^*)$, we will prove both inclusions. Suppose $\vec{x} \in \ker(T^*)$. Thus $T^*(\vec{x}) = \vec{0}$. To see that $\vec{x} \in \operatorname{Im}(T)^\perp$, let $\vec{z} \in \operatorname{Im}(T)$ be arbitrary. Therefore there exists a $\vec{y} \in V$ such that $\vec{z} = T(\vec{y})$. Hence
$$\langle \vec{x}, \vec{z}\rangle = \langle \vec{x}, T(\vec{y})\rangle = \langle T^*(\vec{x}), \vec{y}\rangle = \langle \vec{0}, \vec{y}\rangle = 0.$$
Therefore, since $\vec{z} \in \operatorname{Im}(T)$ was arbitrary, $\vec{x} \in \operatorname{Im}(T)^\perp$. Hence $\ker(T^*) \subseteq \operatorname{Im}(T)^\perp$ as $\vec{x} \in \ker(T^*)$ was arbitrary.

To prove the other inclusion, let $\vec{x} \in \operatorname{Im}(T)^\perp$ be arbitrary. Since $T(\vec{y}) \in \operatorname{Im}(T)$ for all $\vec{y} \in V$,
$$0 = \langle \vec{x}, T(\vec{y})\rangle = \langle T^*(\vec{x}), \vec{y}\rangle$$
for all $\vec{y} \in V$. In particular, by letting $\vec{y} = T^*(\vec{x})$, we obtain that $\|T^*(\vec{x})\|^2 = 0$. Hence $T^*(\vec{x}) = \vec{0}$ so $\vec{x} \in \ker(T^*)$. Therefore, since $\vec{x} \in \operatorname{Im}(T)^\perp$ was arbitrary, $\operatorname{Im}(T)^\perp \subseteq \ker(T^*)$. Hence $\operatorname{Im}(T)^\perp = \ker(T^*)$.

In the case of finite dimensions, Question 13c of Section 6.2 implies that $(W^\perp)^\perp = W$ for any subspace $W$ of a finite dimensional inner product space $V$. Therefore, since $\operatorname{Im}(T)$ is a subspace of $V$, if $V$ is finite dimensional then $\operatorname{Im}(T) = (\operatorname{Im}(T)^\perp)^\perp = \ker(T^*)^\perp$ as desired.

6.4 Question 4) Let $T$ and $U$ be self-adjoint operators on an inner product space $V$. Prove that $TU$ is self-adjoint if and only if $TU = UT$.

Solution: Let $T$ and $U$ be self-adjoint operators on an inner product space $V$. If $TU$ is self-adjoint, then
$$TU = (TU)^* = U^*T^* = UT$$
as desired. If $TU = UT$ then
$$(TU)^* = U^*T^* = UT = TU$$
so $TU$ is self-adjoint as desired.

6.4 Question 5) Prove that if $N$ is a normal linear operator on an inner product space $V$, then $N + cI$ is a normal linear operator for any $c \in \mathbb{F}$ (where $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$).

Solution: To see that $N + cI$ is a normal operator, we notice that
$$(N + cI)^*(N + cI) = (N^* + \overline{c}I)(N + cI) = N^*N + cN^* + \overline{c}N + |c|^2I = NN^* + cN^* + \overline{c}N + |c|^2I = (N + cI)(N^* + \overline{c}I) = (N + cI)(N + cI)^*.$$
Hence $N + cI$ is normal.

6.4 Question 6) Let V be a complex inner product space and let T be a linear operator on V . De1

fine Re(T ) = 21 (T + T ) and Im(T ) = 2i

(T T ). Prove that Re(T ) and Im(T ) are self-adjoint and

that T = Re(T ) + iIm(T ). Moreover, show that if U1 and U2 are self-adjoint linear maps on V such

that T = U1 + iU2 then U1 = Re(T ) and U2 = Im(T ). Finally, prove that T is normal if and only if

Re(T )Im(T ) = Im(T )Re(T ).

Solution: To see that Re(T ) and Im(T ) are self-adjoint,

1

(Re(T )) =

(T + T ) =

2

and

(Im(T )) =

we notice that

1

(T + T ) = Re(T )

2

1

1

1

(T T ) =

(T T ) = (T T ) = Im(T )

2i

2i

2i

Suppose T = U1 + iU2 where U1 = U1 and U2 U2 . Then

Re(T ) =

1

1

1

1

(T +T ) = ((U1 +iU2 )+(U1 +iU2 ) ) = ((U1 +iU2 )+(U1 iU2 )) = ((U1 +iU2 )+(U1 iU2 )) = U1

2

2

2

2

and

Im(T ) =

1

1

1

(T T ) = ((U1 + iU2 ) (U1 + iU2 ) ) = ((U1 + iU2 ) (U1 iU2 )) = U2

2i

2i

2i

71

as desired.

For the final part of the question, notice T* = (Re(T ) + iIm(T ))* = Re(T ) − iIm(T ) as Re(T ) and
Im(T ) are self-adjoint. Therefore T*T = T T* is true if and only if

(Re(T ) − iIm(T ))(Re(T ) + iIm(T )) = (Re(T ) + iIm(T ))(Re(T ) − iIm(T ))

if and only if

(Re(T ))^2 + (Im(T ))^2 − iIm(T )Re(T ) + iRe(T )Im(T ) = (Re(T ))^2 + (Im(T ))^2 + iIm(T )Re(T ) − iRe(T )Im(T )

if and only if

2iRe(T )Im(T ) = 2iIm(T )Re(T )

if and only if Re(T )Im(T ) = Im(T )Re(T ) as desired.
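All three claims of this question can be checked numerically (an illustration, not part of the original solution); in NumPy the adjoint is the conjugate transpose, and the matrix T below is an arbitrary seeded example:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

ReT = (T + T.conj().T) / 2
ImT = (T - T.conj().T) / 2j

# both parts are self-adjoint and they reassemble T
assert np.allclose(ReT, ReT.conj().T)
assert np.allclose(ImT, ImT.conj().T)
assert np.allclose(T, ReT + 1j * ImT)

# T is normal exactly when Re(T) and Im(T) commute;
# for this generic T both sides of the equivalence are False
normal = np.allclose(T @ T.conj().T, T.conj().T @ T)
commute = np.allclose(ReT @ ImT, ImT @ ReT)
print(normal == commute)
```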

6.4 Question 9) Let T be a normal operator on a finite dimensional inner product space V . Prove
that ker(T ) = ker(T*) and Im(T ) = Im(T*).

Solution: Let T be a normal operator on a finite dimensional inner product space V . To see that
ker(T ) = ker(T*), we notice that if ~x ∈ V then

‖T (~x)‖^2 = ⟨T (~x), T (~x)⟩ = ⟨T*T (~x), ~x⟩ = ⟨T T*(~x), ~x⟩ = ⟨T*(~x), T*(~x)⟩ = ‖T*(~x)‖^2 .

Therefore ~x ∈ ker(T ) if and only if T (~x) = ~0 if and only if ‖T (~x)‖ = 0 if and only if ‖T*(~x)‖ = 0 if and only
if T*(~x) = ~0 if and only if ~x ∈ ker(T*). Hence ker(T ) = ker(T*).

To see that Im(T ) = Im(T*), we notice that since V is a finite dimensional vector space, Question 12 of
Section 6.3 implies

Im(T ) = Im((T*)*) = (ker(T*))^⊥ = (ker(T ))^⊥ = Im(T*)

as desired.
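The key identity ‖T (~x)‖ = ‖T*(~x)‖ for normal T can be seen numerically (an illustration, not part of the original solution). Below, a normal matrix with a nontrivial kernel is built as U diag(d) U* for a unitary U, with one eigenvalue set to 0; all concrete values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# build a normal matrix with nontrivial kernel: U diag(d) U* with U unitary
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)                      # Q factor of a complex matrix is unitary
d = np.array([0.0, 2.0, 1j, -1.0 + 1j])     # one zero eigenvalue
N = U @ np.diag(d) @ U.conj().T

assert np.allclose(N @ N.conj().T, N.conj().T @ N)   # N is normal

# ||N x|| = ||N* x|| for every x, so ker(N) = ker(N*)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
print(np.isclose(np.linalg.norm(N @ x), np.linalg.norm(N.conj().T @ x)))

# the eigenvector for eigenvalue 0 is killed by both N and N*
v = U[:, 0]
print(np.allclose(N @ v, 0), np.allclose(N.conj().T @ v, 0))
```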

6.4 Question 11) Assume that T is a linear operator on a complex (not necessarily finite dimensional)
inner product space V with an adjoint T*. Prove the following results:

(a) If T is self-adjoint, then ⟨T (~x), ~x⟩ is real for all ~x ∈ V .

(b) If T satisfies ⟨T (~x), ~x⟩ = 0 for all ~x ∈ V , then T = 0.

(c) If ⟨T (~x), ~x⟩ is real for all ~x ∈ V , then T* = T .

Solution: a) Suppose T is self-adjoint. Then for all ~x ∈ V

⟨T (~x), ~x⟩ = ⟨~x, T*(~x)⟩ = ⟨~x, T (~x)⟩ = \overline{⟨T (~x), ~x⟩}.

Therefore, since a complex number λ is real if and only if λ = λ̄, ⟨T (~x), ~x⟩ is real for all ~x ∈ V .

b) Suppose T satisfies ⟨T (~x), ~x⟩ = 0 for all ~x ∈ V . Let ~x, ~y ∈ V be arbitrary. Therefore, since V is a
complex inner product space,

0 = (1/4)(⟨T (~x + ~y ), ~x + ~y ⟩ − ⟨T (~x − ~y ), ~x − ~y ⟩ + i⟨T (~x + i~y ), ~x + i~y ⟩ − i⟨T (~x − i~y ), ~x − i~y ⟩)
  = (1/4)(⟨T (~x), ~x⟩ + ⟨T (~x), ~y ⟩ + ⟨T (~y ), ~x⟩ + ⟨T (~y ), ~y ⟩) − (1/4)(⟨T (~x), ~x⟩ − ⟨T (~x), ~y ⟩ − ⟨T (~y ), ~x⟩ + ⟨T (~y ), ~y ⟩)
    + (i/4)(⟨T (~x), ~x⟩ − i⟨T (~x), ~y ⟩ + i⟨T (~y ), ~x⟩ + ⟨T (~y ), ~y ⟩) − (i/4)(⟨T (~x), ~x⟩ + i⟨T (~x), ~y ⟩ − i⟨T (~y ), ~x⟩ + ⟨T (~y ), ~y ⟩)
  = ⟨T (~x), ~y ⟩.

Hence, since ~x, ~y ∈ V were arbitrary, ⟨T (~x), ~y ⟩ = 0 for all ~x, ~y ∈ V . Hence T = 0 by Question 17 in Section
6.2.

c) Suppose ⟨T (~x), ~x⟩ is real for all ~x ∈ V . Then

⟨T (~x), ~x⟩ = \overline{⟨T (~x), ~x⟩} = \overline{⟨~x, T*(~x)⟩} = ⟨T*(~x), ~x⟩

for all ~x ∈ V . By subtracting, we obtain that

0 = ⟨T (~x), ~x⟩ − ⟨T*(~x), ~x⟩ = ⟨(T − T*)(~x), ~x⟩

for all ~x ∈ V . Hence T − T* = 0 by part b), so T* = T as desired.
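The polarization identity used in part b) can be checked numerically (an illustration, not part of the original solution). Note that NumPy's `np.vdot` conjugates its first argument, so the inner product ⟨u, v⟩ that is conjugate-linear in the second slot, as in the text, is `np.vdot(v, u)`; the matrix and vectors below are arbitrary seeded examples:

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

def ip(u, v):
    # <u, v>, conjugate-linear in the second argument
    return np.vdot(v, u)

def q(z):
    # the "diagonal" value <T(z), z>
    return ip(T @ z, z)

# the combination from part b) recovers <T(x), y> from diagonal values alone
recovered = (q(x + y) - q(x - y) + 1j * q(x + 1j * y) - 1j * q(x - 1j * y)) / 4
print(np.isclose(recovered, ip(T @ x, y)))
```

This is exactly why ⟨T (~x), ~x⟩ = 0 for all ~x forces ⟨T (~x), ~y ⟩ = 0 for all ~x, ~y: every off-diagonal value is a combination of diagonal ones.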
