
(b) We want to claim that y = ct + d, where t, y are defined in the question and c, d are a solution of the normal equations.

But this is an immediate result of dividing the second equation by m.

24. (a) Check

T(cσ + τ)(k) = Σ_{i=k}^∞ (cσ + τ)(i) = c Σ_{i=k}^∞ σ(i) + Σ_{i=k}^∞ τ(i) = cT(σ)(k) + T(τ)(k).

(b) For k ≤ n we have

T(eₙ)(k) = Σ_{i=k}^∞ eₙ(i) = 1 = Σ_{i=1}^n e_i(k).

And for k > n we have

T(eₙ)(k) = Σ_{i=k}^∞ eₙ(i) = 0 = Σ_{i=1}^n e_i(k).
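The computations in (b) can be sanity-checked numerically. The sketch below is not part of the original solution; it models V by length-N truncations of the sequences (N = 8 is an arbitrary choice, and indices are 0-based rather than 1-based):

```python
import numpy as np

# Truncated model of V (sequences with finite support) as length-N vectors.
# T(sigma)(k) = sum_{i >= k} sigma(i); here indices run 0..N-1.
N = 8

def T(sigma):
    # Suffix sums: T(sigma)[k] = sigma[k] + sigma[k+1] + ... + sigma[N-1].
    return np.cumsum(sigma[::-1])[::-1]

def e(n):
    v = np.zeros(N)
    v[n] = 1.0
    return v

# (b): T(e_n) = e_0 + e_1 + ... + e_n (1 in positions k <= n, 0 afterwards)
n = 4
lhs = T(e(n))
rhs = sum(e(i) for i in range(n + 1))
print(np.allclose(lhs, rhs))  # True
```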

(c) Suppose that T∗ exists. We try to compute T∗(e₁). For every i we have

T∗(e₁)(i) = ⟨e_i, T∗(e₁)⟩ = ⟨T(e_i), e₁⟩ = ⟨Σ_{j=1}^i e_j, e₁⟩ = 1.

This means that T∗(e₁)(i) = 1 for all i. This is impossible, since a sequence with infinitely many nonzero entries is not an element of V.

6.4 Normal and Self-Adjoint Operators
1. (a) Yes. Check TT∗ = T² = T∗T.

(b) No. The two matrices (1 1; 0 1) and (1 0; 1 1) are adjoints of each other, but they have (1, 0) and (0, 1) as their unique normalized eigenvectors, respectively.

(c) No. Consider T(a, b) = (2a, b) as a mapping from R² to R² and β to be the basis {(1, 1), (1, 0)}. We have T is normal with T∗ = T, but [T]β = (1 0; 1 2) is not normal. Furthermore, the converse is also not true. We may let T(a, b) = (b, b) be a mapping from R² to R² and β be the basis {(1, 0), (1, 1)}. This time T is not normal, with T∗(a, b) = (0, a + b). However, [T]β = (0 0; 0 1) is a normal matrix. (Note that neither basis β here is orthonormal.)

(d) Yes. This comes from Theorem 6.10.
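The first counterexample in (c) is easy to verify with a quick computation. The following sketch (assuming NumPy, not part of the original solution) conjugates the standard matrix of T(a, b) = (2a, b) into the non-orthonormal basis {(1, 1), (1, 0)}:

```python
import numpy as np

# T(a, b) = (2a, b) is self-adjoint (its standard matrix is diagonal), hence normal.
A = np.diag([2.0, 1.0])

# beta = {(1, 1), (1, 0)} is a basis that is NOT orthonormal; columns of Q are its vectors.
Q = np.array([[1.0, 1.0],
              [1.0, 0.0]])
M = np.linalg.inv(Q) @ A @ Q          # M = [T]_beta

def is_normal(X):
    return np.allclose(X @ X.conj().T, X.conj().T @ X)

print(np.allclose(M, [[1, 0], [1, 2]]))  # True -- matches the matrix in the text
print(is_normal(A))                      # True -- T itself is normal
print(is_normal(M))                      # False -- [T]_beta is not, beta not orthonormal
```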

(e) Yes. See the Lemma before Theorem 6.17.

(f) Yes. We have I∗ = I and O∗ = O, where I and O are the identity and zero operators.

(g) No. The mapping T(a, b) = (−b, a) is normal since T∗(a, b) = (b, −a). But it's not diagonalizable since the characteristic polynomial of T does not split.

(h) Yes. This comes from Theorem 6.16 and Theorem 6.17.

2. Use one orthonormal basis β to check whether [T]β is normal, self-adjoint, or neither. If it's an operator on a real inner product space, use Theorem 6.17. If it's an operator on a complex inner product space, use Theorem 6.16. To find an orthonormal basis of eigenvectors of T for V, just find an orthonormal basis for each eigenspace and take the union of them as the desired basis. Usually we'll take β to be the standard basis.

(a) Pick β to be the standard basis and get that [T]β = (2 −2; −2 5). So it's self-adjoint. And the basis is

{(1/√5)(1, −2), (1/√5)(2, 1)}.

(b) Pick β to be the standard basis and get that [T]β = (−1 1 0; 0 5 0; 4 −2 5). So it's neither normal nor self-adjoint.

(c) Pick β to be the standard basis and get that [T]β = (2 i; 1 2). So it's normal but not self-adjoint. And the basis is

{(1/√2, 1/2 − (1/2)i), (1/√2, −1/2 + (1/2)i)}.

(d) Pick an orthonormal basis β = {1, √3(2t − 1), √5(6t² − 6t + 1)} by Exercise 6.2.2(c) and get that [T]β = (0 2√3 0; 0 0 2√15; 0 0 0). So it's neither normal nor self-adjoint.
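The classifications in 2(a)-(c) can be double-checked numerically; the sketch below (assuming NumPy, not part of the original solution) tests normality and self-adjointness of each matrix directly:

```python
import numpy as np

def is_normal(X):
    return np.allclose(X @ X.conj().T, X.conj().T @ X)

def is_self_adjoint(X):
    return np.allclose(X, X.conj().T)

A = np.array([[2.0, -2.0], [-2.0, 5.0]])                        # 2(a)
B = np.array([[-1, 1, 0], [0, 5, 0], [4, -2, 5]], dtype=float)  # 2(b)
C = np.array([[2, 1j], [1, 2]])                                 # 2(c)

print(is_self_adjoint(A))                # True
print(is_normal(B), is_self_adjoint(B))  # False False
print(is_normal(C), is_self_adjoint(C))  # True False
```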

(e) Pick β to be the standard basis and get that [T]β = (1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1). So it's self-adjoint. And the basis is

{(1, 0, 0, 0), (0, 0, 0, 1), (1/√2)(0, 1, 1, 0), (1/√2)(0, 1, −1, 0)}.

(f) Pick β to be the standard basis and get that [T]β = (0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0). So it's self-adjoint. And the basis is

{(1/√2)(1, 0, 1, 0), (1/√2)(0, 1, 0, 1), (1/√2)(1, 0, −1, 0), (1/√2)(0, 1, 0, −1)}.

3. Just see Exercise 1(c).

4. Use the fact (TU)∗ = U∗T∗ = UT.

5. Observe that (T − cI)∗ = T∗ − c̄I and check

(T − cI)(T − cI)∗ = (T − cI)(T∗ − c̄I) = TT∗ − c̄T − cT∗ + |c|²I

and

(T − cI)∗(T − cI) = (T∗ − c̄I)(T − cI) = T∗T − c̄T − cT∗ + |c|²I.

They are the same because TT∗ = T∗T.

6. (a) Observe the fact

T₁∗ = ((1/2)(T + T∗))∗ = (1/2)(T∗ + T) = T₁

and

T₂∗ = ((1/2i)(T − T∗))∗ = −(1/2i)(T∗ − T) = T₂.

(b) Observe that T∗ = U₁∗ − iU₂∗ = U₁ − iU₂. This means that

U₁ = (1/2)(T + T∗) = T₁

and

U₂ = (1/2i)(T − T∗) = T₂.
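Exercise 5 (T normal implies T − cI normal) can be spot-checked on a random normal matrix. The sketch below (assuming NumPy, not part of the original solution) manufactures a normal matrix as UDU∗ with U unitary and D diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_normal(X):
    return np.allclose(X @ X.conj().T, X.conj().T @ X)

# Build a random normal matrix T = U D U* (U unitary, D diagonal, complex entries).
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)                       # unitary factor
D = np.diag(rng.standard_normal(4) + 1j * rng.standard_normal(4))
T = U @ D @ U.conj().T

c = 1.5 - 2.0j
print(is_normal(T))                  # True
print(is_normal(T - c * np.eye(4)))  # True -- T - cI is normal as well
```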

(c) Calculate that

T₁T₂ − T₂T₁ = (1/4i)(T² − TT∗ + T∗T − (T∗)²) − (1/4i)(T² + TT∗ − T∗T − (T∗)²) = (1/2i)(T∗T − TT∗).

It equals T₀ if and only if T is normal.

7. (a) We check

⟨x, (T_W)∗(y)⟩ = ⟨T_W(x), y⟩ = ⟨T(x), y⟩ = ⟨x, T∗(y)⟩ = ⟨x, T(y)⟩ = ⟨x, T_W(y)⟩

for all x and y in W, since T is self-adjoint and T(y) lies in W. So (T_W)∗ = T_W.

(b) Let y be an element in W⊥. We check

⟨x, T∗(y)⟩ = ⟨T(x), y⟩ = 0

for all x ∈ W, since T(x) is also an element in W by the fact that W is T-invariant. So T∗(y) ∈ W⊥, and W⊥ is T∗-invariant.

(c) We check

⟨x, (T_W)∗(y)⟩ = ⟨T_W(x), y⟩ = ⟨T(x), y⟩ = ⟨x, T∗(y)⟩ = ⟨x, (T∗)_W(y)⟩

for all x and y in W. This means that (T_W)∗ = (T∗)_W.

(d) Since T is normal, we have TT∗ = T∗T. Also, since W is both T- and T∗-invariant, we have (T_W)∗ = (T∗)_W by the previous argument. This means that

T_W(T_W)∗ = T_W(T∗)_W = (T∗)_W T_W = (T_W)∗T_W.

8. By Theorem 6.15(a) we know that T(x) = 0 if and only if T∗(x) = 0. So we get that N(T) = N(T∗). Also, by Exercise 6.3.12 we know that

R(T) = N(T∗)⊥ = N(T)⊥ = R(T∗).

9. By Theorem 6.16 we know that T is diagonalizable. By Exercise 5.4.24 we know that T_W is also diagonalizable. This means that there's a basis for W consisting of eigenvectors of T. If x is an eigenvector of T, then x is also an eigenvector of T∗ since T is normal. This means that there's a basis for W consisting of eigenvectors of T∗. So W is also T∗-invariant.
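The conclusions of Exercise 8, N(T) = N(T∗) and R(T) = R(T∗), can be observed on a singular normal matrix. A sketch assuming NumPy (not part of the original solution; the subspace equality is tested by checking that stacking the columns of T and T∗ does not raise the rank):

```python
import numpy as np

rng = np.random.default_rng(1)

# A singular normal matrix: unitary U, diagonal D with one zero eigenvalue.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)
D = np.diag([0.0, 2.0, 3.0 + 1j, -1.0])
T = U @ D @ U.conj().T
Ts = T.conj().T

x = U[:, 0]                    # eigenvector for the eigenvalue 0
print(np.allclose(T @ x, 0))   # True: x is in N(T)
print(np.allclose(Ts @ x, 0))  # True: x is in N(T*) as well

# R(T) = R(T*): the combined column space has the same rank as either one.
r = np.linalg.matrix_rank(T)
print(r, np.linalg.matrix_rank(np.hstack([T, Ts])))  # 3 3
```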

10. Directly calculate that

‖T(x) ± ix‖² = ⟨T(x) ± ix, T(x) ± ix⟩ = ⟨T(x), T(x)⟩ ∓ i⟨T(x), x⟩ ± i⟨x, T(x)⟩ + ⟨ix, ix⟩ = ‖T(x)‖² + ‖x‖²,

where the cross terms cancel since ⟨x, T(x)⟩ = ⟨T∗(x), x⟩ = ⟨T(x), x⟩. Also, T ± iI is injective since T(x) ± ix = 0 if and only if T(x) = 0 and x = 0. Now T − iI is invertible by the fact that V is finite-dimensional. Finally we may calculate that

⟨x, [(T − iI)⁻¹]∗(T + iI)(y)⟩ = ⟨(T − iI)⁻¹(x), (T + iI)(y)⟩ = ⟨(T − iI)⁻¹(x), (T∗ + iI)(y)⟩ = ⟨(T − iI)⁻¹(x), (T − iI)∗(y)⟩ = ⟨(T − iI)(T − iI)⁻¹(x), y⟩ = ⟨x, y⟩

for all x and y. So [(T − iI)⁻¹]∗(T + iI) = I, and hence [(T − iI)⁻¹]∗ = (T + iI)⁻¹.

11. (a) We prove it by showing the value is equal to its own conjugate. We have

⟨T(x), x⟩ = ⟨x, T∗(x)⟩ = ⟨x, T(x)⟩,

and ⟨x, T(x)⟩ is the conjugate of ⟨T(x), x⟩.

(b) As Hint, we compute

0 = ⟨T(x + y), x + y⟩ = ⟨T(x), x⟩ + ⟨T(x), y⟩ + ⟨T(y), x⟩ + ⟨T(y), y⟩ = ⟨T(x), y⟩ + ⟨T(y), x⟩.

That is, ⟨T(x), y⟩ = −⟨T(y), x⟩. Also, replace y by iy and get ⟨T(x), iy⟩ = −⟨T(iy), x⟩, that is, −i⟨T(x), y⟩ = −i⟨T(y), x⟩ and hence ⟨T(x), y⟩ = ⟨T(y), x⟩. This can only happen when ⟨T(x), y⟩ = 0 for all x and y. So T is the zero mapping.
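Both identities of Exercise 10 are easy to test on a random Hermitian matrix. A sketch assuming NumPy (not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random self-adjoint (Hermitian) matrix A and a random complex vector x.
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# ||A x + i x||^2 = ||A x||^2 + ||x||^2
lhs = np.linalg.norm(A @ x + 1j * x) ** 2
rhs = np.linalg.norm(A @ x) ** 2 + np.linalg.norm(x) ** 2
print(np.isclose(lhs, rhs))   # True

# [(A - iI)^{-1}]* = (A + iI)^{-1}
I = np.eye(4)
L = np.linalg.inv(A - 1j * I).conj().T
R = np.linalg.inv(A + 1j * I)
print(np.allclose(L, R))      # True
```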

(c) If ⟨T(x), x⟩ is real for all x, then ⟨T(x), x⟩ equals its own conjugate, so

⟨T(x), x⟩ = ⟨x, T(x)⟩ = ⟨T∗(x), x⟩.

This means that ⟨(T − T∗)(x), x⟩ = 0 for all x. By the previous argument, applied to T − T∗, we get the desired conclusion T = T∗.

12. Since the characteristic polynomial splits, we may apply Schur's Theorem and get an orthonormal basis β = {v₁, v₂, ..., vₙ} such that [T]β = {A_{i,j}} is upper triangular. We already know that v₁ is an eigenvector. Pick t to be the maximum integer such that v₁, v₂, ..., v_t are all eigenvectors, with respect to eigenvalues λ_i. If t = n then we've done. If not, we will find some contradiction. Since the basis β is orthonormal, for i ≤ t we know that

A_{i,t+1} = ⟨T(v_{t+1}), v_i⟩ = ⟨v_{t+1}, T∗(v_i)⟩ = ⟨v_{t+1}, λ̄_i v_i⟩ = 0

by Theorem 6.15(c). Thus we know that

T(v_{t+1}) = Σ_{i=1}^{t+1} A_{i,t+1} v_i = A_{t+1,t+1} v_{t+1}.

This means that v_{t+1} is also an eigenvector. This is a contradiction. So β is an orthonormal basis consisting of eigenvectors of T, and by Theorem 6.17 we know that T is self-adjoint.

13. If A is Gramian, we have A is symmetric since Aᵗ = (BᵗB)ᵗ = BᵗB = A. Also, let λ be an eigenvalue with unit eigenvector x. Then we have Ax = λx and

λ = ⟨Ax, x⟩ = ⟨BᵗBx, x⟩ = ⟨Bx, Bx⟩ ≥ 0.

Conversely, if A is symmetric, we know that L_A is a self-adjoint operator. So we may find an orthonormal basis β such that [L_A]β is diagonal with its ii-entry the nonnegative eigenvalue λ_i. Denote D to be a diagonal matrix with its ii-entry √λ_i. So D² = [L_A]β and

A = [I]_β^α [L_A]β [I]_α^β = ([I]_β^α D)(D [I]_α^β),

where α is the standard basis. Since the basis β is orthonormal, we have [I]_β^α = ([I]_α^β)ᵗ. So we find a matrix B = D[I]_α^β such that A = BᵗB.
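Exercise 13's two directions can be illustrated numerically: a Gramian matrix BᵗB is symmetric with nonnegative eigenvalues, and a factor can be rebuilt from the spectral decomposition exactly as in the solution. A sketch assuming NumPy (not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(3)

B = rng.standard_normal((4, 4))
A = B.T @ B                       # a Gramian matrix

print(np.allclose(A, A.T))        # True: A is symmetric
w, Q = np.linalg.eigh(A)          # eigenvalues (ascending), Q orthogonal
print(np.all(w >= -1e-12))        # True: eigenvalues are nonnegative

# Conversely, rebuild a factor from A = Q diag(w) Q^t:
E = np.diag(np.sqrt(np.clip(w, 0, None)))
B2 = E @ Q.T                      # plays the role of B = D [I]_alpha^beta
print(np.allclose(B2.T @ B2, A))  # True
```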

14. We use induction on the dimension n of V, which works since V is finite-dimensional. If n = 1, U and T will be diagonalized simultaneously by any orthonormal basis. Suppose the statement is true for n ≤ k − 1, and consider the case n = k. Pick one arbitrary eigenspace W = E_λ of T for some eigenvalue λ. Note that W is T-invariant naturally and U-invariant since

TU(w) = UT(w) = λU(w)

for all w ∈ W. If W = V, then we may apply Theorem 6.17 to the operator U and get an orthonormal basis β consisting of eigenvectors of U; those vectors will also be eigenvectors of T. If W is a proper subspace of V, we know that W⊥ is also T- and U-invariant by Exercise 6.4.7. We may apply the induction hypothesis to T_W and U_W, which are self-adjoint by Exercise 6.4.7(a), and get an orthonormal basis β₁ for W consisting of eigenvectors of T_W and U_W. So those vectors are also eigenvectors of T and U. Again, by applying the induction hypothesis we get an orthonormal basis β₂ for W⊥ consisting of eigenvectors of T and U. Finally, we know that β = β₁ ∪ β₂ is an orthonormal basis for V consisting of eigenvectors of T and U.

15. Let T = L_A and U = L_B, which are self-adjoint, and denote α to be the standard basis. Applying the previous exercise, we find some orthonormal basis β such that [T]β and [U]β are diagonal. Now we have that

[T]β = [I]_α^β A [I]_β^α and [U]β = [I]_α^β B [I]_β^α

are diagonal. Pick P = [I]_β^α and get the desired result.

16. By Schur's Theorem, A = P⁻¹BP for some upper triangular matrix B and invertible matrix P. Since the characteristic polynomials of A and B are the same, the characteristic polynomial of A would be

f(t) = Π_{i=1}^n (B_ii − t),

since B is upper triangular. Now we want to say that f(B) = O first. Let C = f(B) and {e_i} be the standard basis. We have Ce₁ = 0 since (B₁₁I − B)e₁ = 0. For i > 1, we have Ce_i = 0 since (B_ii I − B)e_i is a linear combination of e₁, e₂, ..., e_{i−1}, and so this vector will vanish after multiplying the matrix Π_{j=1}^{i−1}(B_jj I − B). So we get that f(B) = C = O. Finally, we have

f(A) = f(P⁻¹BP) = P⁻¹f(B)P = O.
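Exercise 16 is the Cayley-Hamilton theorem, and the product formula f(A) = Π(λᵢI − A) = O can be checked in floating point. A sketch assuming NumPy (not part of the original solution; the eigenvalues are taken numerically instead of via Schur's Theorem):

```python
import numpy as np

rng = np.random.default_rng(4)

A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
eigs = np.linalg.eigvals(A)

# f(t) = prod_i (lambda_i - t), so f(A) = prod_i (lambda_i I - A) = O.
F = np.eye(4, dtype=complex)
for lam in eigs:
    F = F @ (lam * np.eye(4) - A)

print(np.allclose(F, 0))   # True (up to floating-point error)
```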

17. (a) By Theorem 6.16 and Theorem 6.17 we get an orthonormal basis α = {v₁, v₂, ..., vₙ} consisting of eigenvectors, where v_i is the eigenvector with respect to the eigenvalue λ_i. For each x ∈ V, we may write it as x = Σ_{i=1}^n a_i v_i. Compute

⟨T(x), x⟩ = ⟨Σ_{i=1}^n a_i λ_i v_i, Σ_{i=1}^n a_i v_i⟩ = Σ_{i=1}^n |a_i|² λ_i.

The value is greater than [no less than] zero for an arbitrary set of a_i's if and only if λ_i is greater than [no less than] zero for all i.

(b) Denote β to be {e₁, e₂, ..., eₙ}. For each x ∈ V, we may write it as x = Σ_{i=1}^n a_i e_i. Compute

⟨T(x), x⟩ = ⟨Σ_{i=1}^n (Σ_{j=1}^n A_ij a_j) e_i, Σ_{i=1}^n a_i e_i⟩ = Σ_{i=1}^n (Σ_{j=1}^n A_ij a_j) ā_i = Σ_{i,j} A_ij a_j ā_i.

So the value is greater than zero for every nonzero x if and only if Σ_{i,j} A_ij a_j ā_i > 0 for every nonzero (a₁, a₂, ..., aₙ).

(c) Since T is self-adjoint, we may use the result of the previous exercise and write A = P∗DP for some matrix P and some diagonal matrix D. Now if T is positive semidefinite, we have all eigenvalues of T are nonnegative. So the ii-entry of D is nonnegative by the previous argument. We may define a new diagonal matrix E whose ii-entry is √(D_ii). Thus we have E² = D and A = (P∗E)(EP). Pick B to be EP and get the partial result. Conversely, if A = B∗B and y = (a₁, a₂, ..., aₙ) is a vector in Fⁿ, then we have

y∗Ay = Σ_{i,j} A_ij a_j ā_i and y∗Ay = y∗B∗By = (By)∗(By) = ‖By‖² ≥ 0,

so T is positive semidefinite by (b).
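Part (c) can be illustrated numerically: for A = B∗B every value y∗Ay is a squared norm, so the eigenvalues of A are nonnegative. A sketch assuming NumPy (not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(5)

B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B.conj().T @ B               # A = B*B is positive semidefinite

for _ in range(20):
    y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    q = (y.conj() @ A @ y).real  # the quadratic form y*Ay
    assert q >= -1e-10
    assert np.isclose(q, np.linalg.norm(B @ y) ** 2)

print("y*Ay = ||By||^2 >= 0 for all sampled y")
```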

(d) Since T is self-adjoint, there's a basis β consisting of eigenvectors of T. For all x ∈ β, say T(x) = λx; then we have U²(x) = T²(x) = λ²x. And this means that

0 = (U² − λ²I)(x) = (U + λI)(U − λI)(x).

If λ = 0, then we have U²(x) = 0 and so U(x) = 0 = T(x), since

⟨U(x), U(x)⟩ = ⟨U∗U(x), x⟩ = ⟨U²(x), x⟩ = 0.

By the previous argument we may assume that λ > 0. But det(U + λI) cannot be zero, otherwise the negative value −λ would be an eigenvalue of U. So we have U + λI is invertible and (U − λI)(x) = 0. Hence we get U(x) = λx = T(x). Finally, since U and T meet on the basis β, we have U = T.

(e) We have T and U are diagonalizable since they are self-adjoint. Also, by the fact TU = UT and Exercise 5.4.25, we may find a basis β consisting of eigenvectors of both U and T. Say x ∈ β is an eigenvector of T and U with respect to λ and µ, which are positive since T and U are positive definite. So all eigenvalues of TU are positive since TU(x) = λµx. Finally, TU = UT is self-adjoint by Exercise 6.4.4, and so TU is positive definite.

(f) Follow the notation of Exercise 6.4.17(b) and denote y = (a₁, a₂, ..., aₙ). We have

⟨T(Σ_{i=1}^n a_i e_i), Σ_{i=1}^n a_i e_i⟩ = ⟨Σ_{i=1}^n (Σ_{j=1}^n A_ij a_j) e_i, Σ_{i=1}^n a_i e_i⟩ = Σ_{i,j} A_ij a_j ā_i = y∗Ay = ⟨L_A(y), y⟩.

18. (a) We have T∗T and TT∗ are self-adjoint, since (T∗T)∗ = T∗T∗∗ = T∗T. If λ is an eigenvalue with the unit eigenvector x, then we have T∗T(x) = λx and hence

λ = ⟨T∗T(x), x⟩ = ⟨T(x), T(x)⟩ ≥ 0.

We get that T∗T is positive semidefinite by Exercise 6.4.17(a). By a similar way we get the same result for TT∗.

(b) We prove that N(T∗T) = N(T). If x ∈ N(T), we have T∗T(x) = T∗(0) = 0. If x ∈ N(T∗T), we have

⟨T(x), T(x)⟩ = ⟨T∗T(x), x⟩ = 0,

and so T(x) = 0. So the statement is true.
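Part (d), the uniqueness of the positive semidefinite square root, can be tested by rebuilding T from T² through the spectral decomposition. A sketch assuming NumPy (not part of the original solution; the added identity just keeps the example well-conditioned):

```python
import numpy as np

rng = np.random.default_rng(6)

def psd_sqrt(S):
    # The positive semidefinite square root of a PSD matrix S = Q diag(w) Q^t.
    w, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T

M = rng.standard_normal((4, 4))
T = M @ M.T + np.eye(4)          # a positive definite T
U = psd_sqrt(T @ T)              # the PSD operator with U^2 = T^2

print(np.allclose(U, T))         # True: U = T, as part (d) asserts
```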

Now we get that null(T∗T) = null(T) and null(TT∗) = null(T∗), since T∗∗ = T. Also, we have rank(T) = rank(T∗) by the fact

rank([T]β) = rank(([T]β)∗) = rank([T∗]β)

for some orthonormal basis β. Finally, by the Dimension Theorem we get the result

rank(T∗T) = rank(T) = rank(T∗) = rank(TT∗).

19. (a) It comes from that

⟨(T + U)(x), x⟩ = ⟨T(x), x⟩ + ⟨U(x), x⟩ > 0

and (T + U)∗ = T∗ + U∗ = T + U.

(b) It comes from that

⟨(cT)(x), x⟩ = c⟨T(x), x⟩ > 0

and (cT)∗ = c̄T∗ = cT, since c > 0 is real.

(c) It comes from that

⟨T⁻¹(x), x⟩ = ⟨y, T(y)⟩ = ⟨T(y), y⟩ > 0, where y = T⁻¹(x),

(the second equality holds because ⟨T(y), y⟩ is real) and (T⁻¹)∗ = (T∗)⁻¹ = T⁻¹, since (T⁻¹)∗T∗ = (TT⁻¹)∗ = I.

20. Check the conditions one by one:

• ⟨x + z, y⟩′ = ⟨T(x + z), y⟩ = ⟨T(x) + T(z), y⟩ = ⟨T(x), y⟩ + ⟨T(z), y⟩ = ⟨x, y⟩′ + ⟨z, y⟩′.

• ⟨cx, y⟩′ = ⟨T(cx), y⟩ = ⟨cT(x), y⟩ = c⟨T(x), y⟩ = c⟨x, y⟩′.

• ⟨y, x⟩′ = ⟨T(y), x⟩ = ⟨y, T(x)⟩ = the conjugate of ⟨T(x), y⟩ = the conjugate of ⟨x, y⟩′.

• If x is not zero, then ⟨x, x⟩′ = ⟨T(x), x⟩ > 0.
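The rank identities of Exercise 18 can be confirmed on a deliberately rank-deficient matrix. A sketch assuming NumPy (not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(7)

# A 4x4 matrix of rank 2, built as a product of a 4x2 and a 2x4 factor.
T = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
Ts = T.conj().T

ranks = [np.linalg.matrix_rank(X) for X in (Ts @ T, T, Ts, T @ Ts)]
print(ranks)   # [2, 2, 2, 2] -- rank(T*T) = rank(T) = rank(T*) = rank(TT*)
```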

21. Note that ⟨x, y⟩′ = ⟨T(x), y⟩ is an inner product by the previous exercise. Thus T⁻¹ is the unique operator such that ⟨x, y⟩ = ⟨T⁻¹(x), y⟩′ for all x and y, since ⟨T⁻¹(x), y⟩′ = ⟨TT⁻¹(x), y⟩ = ⟨x, y⟩. Also, T⁻¹ is positive definite by Exercise 6.4.19(c).

22. (a) For brevity, denote V₁ and V₂ to be the spaces with inner products ⟨⋅,⋅⟩ and ⟨⋅,⋅⟩′ respectively. As Hint, for each y let f_y(x) = ⟨x, y⟩′ be a function from V₁ to F. We have that f_y(x) is linear for x on V₁. By Theorem 6.8 we have f_y(x) = ⟨x, T(y)⟩ for some unique vector T(y). To see T is linear, we may check that

⟨x, T(y + z)⟩ = ⟨x, y + z⟩′ = ⟨x, y⟩′ + ⟨x, z⟩′ = ⟨x, T(y)⟩ + ⟨x, T(z)⟩ = ⟨x, T(y) + T(z)⟩

and

⟨x, T(cy)⟩ = ⟨x, cy⟩′ = c̄⟨x, y⟩′ = c̄⟨x, T(y)⟩ = ⟨x, cT(y)⟩

for all x, y, and z. Also, the operator T is self-adjoint, since for all x and y

⟨T(y), x⟩ = the conjugate of ⟨x, T(y)⟩ = the conjugate of ⟨x, y⟩′ = ⟨y, x⟩′ = ⟨y, T(x)⟩.

Then T is positive definite on V₁ since ⟨T(x), x⟩ = ⟨x, T(x)⟩ = ⟨x, x⟩′ > 0 if x is not zero. Now we know that 0 cannot be an eigenvalue of T. So T is invertible, and by the same argument we get that T⁻¹ is positive definite on V₂, since ⟨T⁻¹(x), x⟩′ = ⟨x, x⟩ > 0 for nonzero x.

(b) First, we check whether UT is self-adjoint with respect to the inner product ⟨x, y⟩′ or not. Denote F to be the operator UT regarded as an operator on V₂. Compute that

⟨F(x), y⟩′ = ⟨UT(x), y⟩′ = ⟨TUT(x), y⟩ = ⟨x, TUT(y)⟩ = ⟨T(x), UT(y)⟩ = ⟨x, UT(y)⟩′ = ⟨x, F(y)⟩′,

where the middle equality holds because TUT is self-adjoint with respect to ⟨⋅,⋅⟩. This means that UT is self-adjoint with respect to the new inner product. And so there's some orthonormal basis consisting of eigenvectors of UT, and all the eigenvalues are real by the Lemma before Theorem 6.17. These two properties are independent of the choice of the inner product.

Similarly, the function ⟨x, y⟩′′ := ⟨T⁻¹(x), y⟩ is also an inner product by the previous exercise, since T⁻¹ is positive definite. Denote F′ to be the operator TU regarded as an operator with respect to this new inner product. Compute that

⟨F′(x), y⟩′′ = ⟨TU(x), y⟩′′ = ⟨T⁻¹TU(x), y⟩ = ⟨U(x), y⟩ = ⟨x, U(y)⟩ = ⟨T⁻¹(x), TU(y)⟩ = ⟨x, TU(y)⟩′′ = ⟨x, F′(y)⟩′′.

By the same argument we get the conclusion for TU.
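The conclusion of the last exercise, that a product of positive definite operators is diagonalizable with real eigenvalues even though it is generally not self-adjoint with respect to the original inner product, can be observed numerically. A sketch assuming NumPy (not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(8)

def random_pd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)   # positive definite

T, U = random_pd(4), random_pd(4)
F = U @ T                            # generally NOT symmetric

eigs = np.linalg.eigvals(F)
print(np.allclose(eigs.imag, 0))     # True: the eigenvalues are real

# Diagonalizable: the eigenvector matrix is invertible (here: well-conditioned).
w, S = np.linalg.eig(F)
print(np.linalg.cond(S) < 1e8)       # True
```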