
Math 416 Homework 13. Solutions.

1. For each of the following inner products, write down an orthonormal basis for P_3(R) using
   the Gram–Schmidt Algorithm applied to the basis (1, x, x^2, x^3).

   (a) \langle f, g \rangle = \int_0^1 f(t) g(t) \, dt,
   (b) \langle f, g \rangle = \int_{-1}^1 f(t) g(t) \, dt,
   (c) \langle f, g \rangle = \int_{-\infty}^{\infty} e^{-t^2} f(t) g(t) \, dt.

   Which one was easiest?

Solution:
In all of these, we write w_k = x^k for k = 0, 1, 2, 3.

(a) Let us first notice that

        \int_0^1 x^n \, dx = \frac{1}{n+1}.

    We compute v_0 = 1. Then

        v_1 = w_1 - \frac{\langle w_1, v_0 \rangle}{\langle v_0, v_0 \rangle} v_0
            = x - \frac{1/2}{1} \cdot 1 = x - 1/2,

        v_2 = w_2 - \frac{\langle w_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1
                  - \frac{\langle w_2, v_0 \rangle}{\langle v_0, v_0 \rangle} v_0
            = x^2 - \frac{\langle x^2, x - 1/2 \rangle}{\langle x - 1/2, x - 1/2 \rangle} (x - 1/2) - \frac{1/3}{1}
            = x^2 - \frac{1/12}{1/12} (x - 1/2) - \frac{1}{3} = x^2 - x + 1/6.

Finally,

    v_3 = w_3 - \frac{\langle w_3, v_2 \rangle}{\langle v_2, v_2 \rangle} v_2
              - \frac{\langle w_3, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1
              - \frac{\langle w_3, v_0 \rangle}{\langle v_0, v_0 \rangle} v_0
        = x^3 - \frac{1/120}{1/180} v_2 - \frac{3/40}{1/12} v_1 - \frac{1}{4}
        = x^3 - \frac{3}{2} (x^2 - x + 1/6) - \frac{9}{10} (x - 1/2) - \frac{1}{4}
        = x^3 - \frac{3}{2} x^2 + \frac{3}{5} x - \frac{1}{20}.
So we have

    v_0 = 1, \quad v_1 = x - 1/2, \quad v_2 = x^2 - x + 1/6, \quad v_3 = x^3 - \frac{3}{2} x^2 + \frac{3}{5} x - \frac{1}{20}.

We can rescale all of these to make them unit vectors, and then the set is orthonormal.
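As a sanity check (not part of the course, and using Python purely for illustration), the whole computation in (a) can be reproduced with exact rational arithmetic: on [0, 1] the monomials satisfy \langle x^i, x^j \rangle = 1/(i+j+1), so polynomials can be stored as coefficient lists and Gram–Schmidt runs with no integration at all.

```python
from fractions import Fraction as F

def inner(p, q):
    # <p, q> = integral_0^1 p(t) q(t) dt for coefficient lists (x^0, x^1, ...):
    # monomials satisfy <x^i, x^j> = 1/(i + j + 1)
    return sum(a * b * F(1, i + j + 1)
               for i, a in enumerate(p) for j, b in enumerate(q))

def gram_schmidt(ws, inner):
    # Orthogonalize (without normalizing) the list ws with respect to `inner`
    vs = []
    for w in ws:
        v = list(w)
        for u in vs:
            c = inner(w, u) / inner(u, u)
            v = [a - c * b for a, b in zip(v, u)]
        vs.append(v)
    return vs

# The monomial basis (1, x, x^2, x^3) as coefficient lists
monomials = [[F(int(i == k)) for i in range(4)] for k in range(4)]
vs = gram_schmidt(monomials, inner)
# vs[3] comes out to [-1/20, 3/5, -3/2, 1], i.e. x^3 - 3x^2/2 + 3x/5 - 1/20,
# matching the hand computation above
```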

(b) Here we can exploit the fact that the domain is symmetric: if f, g have opposite parity
    (i.e. their product is an odd function), then the integral is zero and they are orthogonal.
    Therefore \langle 1, x \rangle = 0, and we can choose v_0 = w_0 = 1 and v_1 = w_1 = x. Then

        v_2 = w_2 - \frac{\langle w_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 - \frac{\langle w_2, v_0 \rangle}{\langle v_0, v_0 \rangle} v_0,

    but since 2 + 1 is odd we can ignore the middle term and obtain

        v_2 = x^2 - \frac{2/3}{2} = x^2 - 1/3.    (1)
Note that v_2 remains an even polynomial, and so when we write

    v_3 = w_3 - \frac{\langle w_3, v_2 \rangle}{\langle v_2, v_2 \rangle} v_2 - \frac{\langle w_3, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 - \frac{\langle w_3, v_0 \rangle}{\langle v_0, v_0 \rangle} v_0,

we again only have to keep the terms where the powers sum to an even number, and thus

    v_3 = w_3 - \frac{\langle w_3, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 = x^3 - \frac{2/5}{2/3} x = x^3 - \frac{3}{5} x.    (2)

This gives the orthogonal set

    \{1, \; x, \; x^2 - 1/3, \; x^3 - 3x/5\},

which we can normalize.
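The parity shortcut can also be verified mechanically. In the same coefficient-list style as in part (a) (again just a sketch, stdlib only): on [-1, 1] we have \langle x^i, x^j \rangle = 2/(i+j+1) when i + j is even and 0 when it is odd, which already encodes the parity argument.

```python
from fractions import Fraction as F

def inner_sym(p, q):
    # <p, q> = integral_{-1}^{1} p(t) q(t) dt: monomial pairs of odd total
    # degree integrate to 0, even ones to 2/(i + j + 1)
    return sum(a * b * F(2, i + j + 1)
               for i, a in enumerate(p) for j, b in enumerate(q)
               if (i + j) % 2 == 0)

# The claimed orthogonal set {1, x, x^2 - 1/3, x^3 - 3x/5}
vs = [[F(1), F(0), F(0), F(0)],
      [F(0), F(1), F(0), F(0)],
      [F(-1, 3), F(0), F(1), F(0)],
      [F(0), F(-3, 5), F(0), F(1)]]
```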


(c) Note that we can exploit the same parity trick as in part (b)! In fact, the argument can be
    repeated word for word up to (1), where we now obtain

        v_2 = x^2 - \frac{\int_{-\infty}^{\infty} e^{-t^2} t^2 \, dt}{\int_{-\infty}^{\infty} e^{-t^2} \, dt} \cdot 1 = x^2 - \frac{\sqrt{\pi}/2}{\sqrt{\pi}} = x^2 - 1/2.

And we can again repeat every word up to (2), and obtain

    v_3 = w_3 - \frac{\langle w_3, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1 = x^3 - \frac{\int_{-\infty}^{\infty} e^{-t^2} t^4 \, dt}{\int_{-\infty}^{\infty} e^{-t^2} t^2 \, dt} \, x = x^3 - \frac{3\sqrt{\pi}/4}{\sqrt{\pi}/2} x = x^3 - \frac{3}{2} x.

This gives the orthogonal set

    \{1, \; x, \; x^2 - 1/2, \; x^3 - 3x/2\},

which we can normalize.
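For the Gaussian weight the integrals are no longer rational, but the moments \int t^n e^{-t^2} dt are classical: 0 for odd n, and \sqrt{\pi} (2k-1)!!/2^k for n = 2k. A quick float-based check of the orthogonal set above (a sketch, not part of the solution):

```python
import math

def inner_gauss(p, q):
    # <p, q> = integral over R of e^{-t^2} p(t) q(t) dt, via Gaussian moments:
    # m_n = 0 for odd n, m_{2k} = sqrt(pi) * (2k-1)!! / 2^k
    def moment(n):
        if n % 2 == 1:
            return 0.0
        k = n // 2
        dfact = math.prod(range(1, 2 * k, 2))  # (2k-1)!!; empty product is 1
        return math.sqrt(math.pi) * dfact / 2 ** k
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

# The claimed orthogonal set {1, x, x^2 - 1/2, x^3 - 3x/2}
vs = [[1, 0, 0, 0], [0, 1, 0, 0], [-0.5, 0, 1, 0], [0, -1.5, 0, 1]]
```

These are, up to scaling, the Hermite polynomials, which is exactly what one expects for the weight e^{-t^2}.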

2. The question about which was easiest depends on how one approaches the problem. If one steps
back from the problem for a bit before computing, as we did above, then (b) and (c) are basically
equally difficult and much easier than (a). If one just dives in and starts computing, then (b)
and especially (c) will give you a sad.

3. Let β be a basis for a subspace W of an inner product space V , and let z ∈ V . Show that
z ∈ W ⊥ if and only if hz, vi = 0 for all v ∈ β.

Solution: The forward direction is easy: z \in W^\perp means that \langle z, w \rangle = 0 for all
w \in W, and since \beta \subseteq W, this implies \langle z, v \rangle = 0 for all v \in \beta.
For the opposite direction, write \beta = \{v_i : i \in I\} for some index set I, and suppose
\langle z, v_i \rangle = 0 for all i \in I. Any w \in W = Span(\beta) is a finite linear
combination w = \sum_i a_i v_i, and then

    \langle z, w \rangle = \left\langle z, \sum_i a_i v_i \right\rangle = \sum_i a_i \langle z, v_i \rangle = 0.

So z is orthogonal to everything in the span of \beta, which is W, and hence z \in W^\perp.
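The heart of the argument is just linearity of the inner product. A toy illustration in R^3 with the dot product (the particular vectors are invented for the example): z is orthogonal to both elements of \beta, hence to every linear combination of them.

```python
def dot(u, v):
    # standard inner product on R^3
    return sum(a * b for a, b in zip(u, v))

# A basis beta for a plane W in R^3, and a z orthogonal to both basis vectors
beta = [(1, 0, 1), (0, 1, 1)]
z = (1, 1, -1)

# Every w in W is a combination sum_i a_i v_i, and <z, w> = sum_i a_i <z, v_i> = 0;
# sample a grid of combinations a*v1 + b*v2 to illustrate
combos = [tuple(a * c1 + b * c2 for c1, c2 in zip(*beta))
          for a in range(-2, 3) for b in range(-2, 3)]
```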

4. In all of the questions below, V is a finite-dimensional inner product space, W is a subspace
   of V, and S is a subset of V.

   (a) Show that S^\perp is a subspace of V.
   (b) Show that Span(S) = (S^\perp)^\perp.
   (c) Show that (W^\perp)^\perp = W.
   (d) Show that V = W \oplus W^\perp.

Solution:
(a) Recall that
        S^\perp = \{x \in V : \langle x, y \rangle = 0 \ \forall y \in S\}.
    First, 0 \in S^\perp, so S^\perp is nonempty. Now let y, z \in S^\perp, so that
    \langle x, y \rangle = \langle x, z \rangle = 0 for all x \in S. Then we see that

        \langle x, ay + bz \rangle = a \langle x, y \rangle + b \langle x, z \rangle = 0

    for all x \in S, so ay + bz \in S^\perp. Since S^\perp is a nonempty subset of V that is
    closed under addition and scalar multiplication, it is a subspace of V.
(b) Consider the set S. Either it is linearly independent or not. If it is, then S is a basis for
    Span(S). If it is not, remove vectors until it is, so that some \beta \subseteq S is a basis
    for Span(S). By Problem 3 above, Span(S)^\perp = \beta^\perp = S^\perp. Therefore we only need
    to prove that Span(S) = (Span(S)^\perp)^\perp, and since Span(S) is a subspace of V, this
    follows from (c) below.
(c) Let us write Y = W^\perp, and recall that x \in Y if and only if \langle x, w \rangle = 0 for
    all w \in W. Therefore, if w \in W, then w \in Y^\perp. This shows that
    W \subseteq Y^\perp = (W^\perp)^\perp.
    Conversely, suppose x \notin W. Let (v_1, \dots, v_k) be an orthonormal basis for W (this
    exists since W is finite-dimensional), and write

        w = \sum_{i=1}^{k} \langle x, v_i \rangle v_i.

    Then w \in W and z = x - w \in W^\perp. Moreover z \neq 0, since z = 0 would mean
    x = w \in W. From this, we see that

        \langle x, z \rangle = \langle w + z, z \rangle = \langle w, z \rangle + \langle z, z \rangle = 0 + \|z\|^2 > 0.

    So there is a z \in W^\perp such that \langle x, z \rangle \neq 0, which means
    x \notin (W^\perp)^\perp. Since x \notin W \implies x \notin (W^\perp)^\perp, this means that
    (W^\perp)^\perp \subseteq W, and we are done.
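The step \langle x, z \rangle = \|z\|^2 > 0 is worth seeing concretely. A small numeric sketch in R^3 (W is the xy-plane, and x is an arbitrary vector outside it):

```python
def dot(u, v):
    # standard inner product on R^3
    return sum(a * b for a, b in zip(u, v))

# Orthonormal basis for W = xy-plane in R^3, and a vector x not in W
vs = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
x = (2.0, -1.0, 3.0)

# Projection w = sum_i <x, v_i> v_i, and the residual z = x - w in W-perp
w = tuple(sum(dot(x, v) * v[k] for v in vs) for k in range(3))
z = tuple(a - b for a, b in zip(x, w))
```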
(d) Let us say that dim V = n and dim W = k \le n for specificity.
    We form an orthonormal basis for V in the following way. Let \beta be any basis for W, and
    convert it to \beta', an orthonormal basis for W (apply Gram–Schmidt). Since \beta' is a
    linearly independent subset of V, we can extend \beta' to a basis \gamma for V. Notice that
    the first k vectors in \gamma are exactly those in \beta', followed by vectors not in W.
    Again apply Gram–Schmidt, this time to \gamma, to obtain an orthonormal basis \gamma' for V.
    We claim (think about why this is true!) that applying Gram–Schmidt does not change the first
    k vectors, so we now have an orthonormal basis

        \gamma' = (z_1, z_2, \dots, z_k, \dots, z_n)

    where (z_1, \dots, z_k) is an orthonormal basis for W. Now, given any v \in V, write the
    expansion

        v = \sum_{i=1}^{n} \langle v, z_i \rangle v_i \quad \text{with } v_i = z_i, \text{ i.e. } v = \sum_{i=1}^{n} \langle v, z_i \rangle z_i,

    and split this as v = w + w', where

        w = \sum_{i=1}^{k} \langle v, z_i \rangle z_i, \qquad w' = \sum_{i=k+1}^{n} \langle v, z_i \rangle z_i.

    It is clear that w \in W, since it is a linear combination of vectors in \beta'. Moreover,
    for all z_i with i > k, we have z_i \perp W (since \gamma' is an orthonormal basis!), so
    w' \in W^\perp. Thus we have written v as the sum of a vector in W and a vector in W^\perp.
    To see that this decomposition is unique, use the fact that W \cap W^\perp = \{0\}, as proved
    in class: if
        v = w + w' = q + q',
    with w, q \in W and w', q' \in W^\perp, then

        0 = (w - q) + (w' - q'),    i.e.    w - q = q' - w'.

    The left-hand side is in W and the right-hand side is in W^\perp, and therefore both are 0.
    Thus w = q and w' = q'.

5. Let \beta = (v_1, \dots, v_n) be an orthonormal basis for V. Show that for any x, y \in V,

       \langle x, y \rangle = \sum_{i=1}^{n} \langle x, v_i \rangle \langle y, v_i \rangle.

Solution: From class, we have

    x = \sum_{i=1}^{n} \langle x, v_i \rangle v_i, \qquad y = \sum_{i=1}^{n} \langle y, v_i \rangle v_i,

so

    \langle x, y \rangle = \left\langle \sum_{i=1}^{n} \langle x, v_i \rangle v_i, \; \sum_{j=1}^{n} \langle y, v_j \rangle v_j \right\rangle
                         = \sum_{i=1}^{n} \sum_{j=1}^{n} \langle x, v_i \rangle \langle y, v_j \rangle \langle v_i, v_j \rangle.

Since \langle v_i, v_j \rangle \neq 0 only if i = j (in which case it equals 1), the double sum
collapses to a single sum, and we get

    \langle x, y \rangle = \sum_{i=1}^{n} \langle x, v_i \rangle \langle y, v_i \rangle.
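This identity is easy to test numerically. A sketch in R^2 with the dot product, using a rotated copy of the standard basis as the orthonormal basis (the angle and vectors are arbitrary choices for the example):

```python
import math

def dot(u, v):
    # standard inner product on R^2
    return sum(a * b for a, b in zip(u, v))

# An orthonormal basis of R^2: the standard basis rotated by an angle t
t = 0.7
basis = [(math.cos(t), math.sin(t)), (-math.sin(t), math.cos(t))]
x, y = (3.0, -2.0), (1.0, 4.0)

# <x, y> should equal sum_i <x, v_i><y, v_i>
lhs = dot(x, y)
rhs = sum(dot(x, v) * dot(y, v) for v in basis)
```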

6. Let V be an inner product space, and choose y, z \in V. Define T : V \to V by

       T(x) = \langle x, y \rangle z.

   Show that T is linear. Then show that T^* exists, and compute an explicit formula for it.
   Compute the rank of T and T^*.

Solution: First we show that T is linear. We have

    T(a_1 x_1 + a_2 x_2) = \langle a_1 x_1 + a_2 x_2, y \rangle z = (a_1 \langle x_1, y \rangle + a_2 \langle x_2, y \rangle) z = a_1 T(x_1) + a_2 T(x_2).

Since T : V \to V is linear, T^* : V \to V exists, is unique, is linear, and satisfies

    \langle Tx, w \rangle = \langle x, T^* w \rangle.

So we compute

    \langle Tx, w \rangle = \langle \langle x, y \rangle z, w \rangle = \langle x, y \rangle \langle z, w \rangle = \langle x, \langle z, w \rangle y \rangle = \langle x, \langle w, z \rangle y \rangle.

Therefore T^* w = \langle w, z \rangle y.

Now we compute the rank of T. Notice that if q \in R(T), then q is a scalar multiple of the
vector z. This means that T has rank at most one. Moreover, if we choose x = y, then
T(y) = \langle y, y \rangle z, and as long as y \neq 0 and z \neq 0, this is a nonzero vector. So
R(T) is exactly one-dimensional, and T has rank one. (If y = 0 or z = 0, then T = 0 and the rank
is zero.)
The rank of T^* is the same, by the same argument (T^* w is a scalar multiple of y, etc.).
(A theorem in the book that we did not get to in class is that T and T^* always have the same
rank; knowing this, we could prove it that way as well.)
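Both the adjoint formula and the rank computation can be spot-checked in R^3 with the dot product (the particular vectors y, z, x, w below are arbitrary choices for the example):

```python
def dot(u, v):
    # standard inner product on R^3
    return sum(a * b for a, b in zip(u, v))

y, z = (1.0, 2.0, 0.0), (0.0, 1.0, -1.0)

def T(x):
    # T(x) = <x, y> z
    return tuple(dot(x, y) * c for c in z)

def T_star(w):
    # the adjoint derived above: T*(w) = <w, z> y
    return tuple(dot(w, z) * c for c in y)

# Arbitrary test vectors for the identity <Tx, w> = <x, T*w>
x, w = (3.0, -1.0, 2.0), (0.5, 2.0, 1.0)
```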

7. Let T be a linear operator on an inner product space V. Let U_1 = T + T^* and U_2 = T T^*.
   Show that U_1, U_2 are both self-adjoint.

Solution: We write

    \langle U_1 x, y \rangle = \langle (T + T^*) x, y \rangle = \langle Tx + T^* x, y \rangle = \langle Tx, y \rangle + \langle T^* x, y \rangle
                             = \langle x, T^* y \rangle + \langle x, Ty \rangle = \langle x, Ty + T^* y \rangle = \langle x, (T + T^*) y \rangle = \langle x, U_1 y \rangle.

Therefore U_1^* = U_1.
We proceed in a similar manner for U_2:

    \langle U_2 x, y \rangle = \langle T T^* x, y \rangle = \langle T^* x, T^* y \rangle = \langle x, T T^* y \rangle = \langle x, U_2 y \rangle,

so U_2^* = U_2.
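Over R with the standard dot product, T^* is the matrix transpose, so the claim says that A + A^T and A A^T are symmetric matrices. A 2x2 spot check with hand-rolled helpers (a sketch; the matrix A is an arbitrary example):

```python
def transpose(A):
    # swap rows and columns of a matrix given as a list of rows
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    # matrix product, row-by-column
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add(A, B):
    # entrywise sum of two matrices
    return [[a + b for a, b in zip(r, s)] for r, s in zip(A, B)]

A = [[1.0, 2.0], [3.0, 4.0]]
U1 = add(A, transpose(A))     # T + T*  ->  A + A^T
U2 = matmul(A, transpose(A))  # T T*    ->  A A^T
```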

8. Let V be a finite-dimensional inner product space, and T a linear operator on V . Show that
if T is invertible, then T ∗ is invertible, and, moreover, that (T ∗ )−1 = (T −1 )∗ .

Solution: Since T is invertible, T^{-1} exists and is linear. As we showed in class, this means
that (T^{-1})^* exists and is also linear; moreover,

    \langle T^{-1} x, y \rangle = \langle x, (T^{-1})^* y \rangle.

However, we also know that T T^{-1} = T^{-1} T = id_V, and thus

    id_V = (id_V)^* = (T T^{-1})^* = (T^{-1})^* T^*,    and similarly    id_V = (T^{-1} T)^* = T^* (T^{-1})^*.

So (T^{-1})^* is a two-sided inverse of T^*; therefore T^* is invertible and (T^*)^{-1} = (T^{-1})^*.
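In matrix form over R the statement reads (A^T)^{-1} = (A^{-1})^T. A quick 2x2 check (a sketch; the matrix A is an arbitrary invertible example), using the adjugate formula for the inverse:

```python
def transpose(A):
    # swap rows and columns of a matrix given as a list of rows
    return [list(r) for r in zip(*A)]

def inv2(A):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [5.0, 3.0]]   # det = 1, so A is invertible
lhs = inv2(transpose(A))        # (A^T)^{-1}
rhs = transpose(inv2(A))        # (A^{-1})^T
```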
