FALL 2007/08
PROBLEM SET 9 SOLUTIONS
In the following, V will denote a finite-dimensional vector space over R that is also an inner product
space, with inner product denoted ⟨·, ·⟩. The norm induced by ⟨·, ·⟩ will be denoted ‖·‖.
W + W⊥ = V. Clearly W + W⊥ ⊆ V. For the converse, let v ∈ V and let {w₁, …, wᵣ} be
an orthonormal basis of W. Consider

    x := ⟨v, w₁⟩w₁ + ⟨v, w₂⟩w₂ + ⋯ + ⟨v, wᵣ⟩wᵣ

and

    y := v − x.

Clearly x ∈ W. We claim that y ∈ W⊥. For any w ∈ W, we may write

    w = ⟨w, w₁⟩w₁ + ⟨w, w₂⟩w₂ + ⋯ + ⟨w, wᵣ⟩wᵣ

since {w₁, …, wᵣ} is an orthonormal basis of W. So

    ⟨y, w⟩ = ⟨v − x, w⟩
           = ⟨v, w⟩ − ⟨x, w⟩
           = ⟨v, Σᵢ₌₁ʳ ⟨w, wᵢ⟩wᵢ⟩ − ⟨Σᵢ₌₁ʳ ⟨v, wᵢ⟩wᵢ, w⟩
           = Σᵢ₌₁ʳ ⟨v, wᵢ⟩⟨w, wᵢ⟩ − Σᵢ₌₁ʳ ⟨v, wᵢ⟩⟨wᵢ, w⟩
           = 0

since ⟨w, wᵢ⟩ = ⟨wᵢ, w⟩. Hence v = x + y ∈ W + W⊥.
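The decomposition v = x + y above is easy to sanity-check numerically. The NumPy sketch below (the random subspace W and vector v are illustrative choices, not from the problem set) projects v onto an orthonormal basis of W and verifies that the remainder y is orthogonal to W:

```python
import numpy as np

rng = np.random.default_rng(0)

# W = span of an orthonormal set {w1, w2} in R^4, obtained here by
# QR-factorizing a random 4x2 matrix (an illustrative choice).
W_basis, _ = np.linalg.qr(rng.standard_normal((4, 2)))  # columns are w1, w2

v = rng.standard_normal(4)

# x = <v,w1>w1 + <v,w2>w2 (the component of v in W), and y = v - x
x = W_basis @ (W_basis.T @ v)
y = v - x

# y is orthogonal to every basis vector of W, and v = x + y
print(np.allclose(W_basis.T @ y, 0))  # True
print(np.allclose(x + y, v))          # True
```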
(b) Show that W = (W⊥)⊥.

Solution. By Problem 1(b), we have W ⊆ (W⊥)⊥. But by (a), we have

    W ⊕ W⊥ = V = W⊥ ⊕ (W⊥)⊥,

and so

    dim(W) + dim(W⊥) = dim(W⊥) + dim((W⊥)⊥),

giving

    dim(W) = dim((W⊥)⊥).

Hence W = (W⊥)⊥.
3. Let A ∈ Rᵐˣⁿ and regard A : Rⁿ → Rᵐ as a linear transformation in the usual way. Show that

    nullsp(Aᵀ) = (colsp(A))⊥

and

    rowsp(A) = (nullsp(A))⊥.

Hence deduce that

    Rᵐ = nullsp(Aᵀ) ⊕ colsp(A)

and

    Rⁿ = rowsp(A) ⊕ nullsp(A).
Solution. Clearly colsp(A) = {Ax ∈ Rᵐ | x ∈ Rⁿ}. So if v ∈ (colsp(A))⊥, then

    ⟨v, Ax⟩ = 0

for all x ∈ Rⁿ. So

    ⟨Aᵀv, x⟩ = (Aᵀv)ᵀx = vᵀAx = ⟨v, Ax⟩ = 0

for all x ∈ Rⁿ. In particular, we may pick x = Aᵀv to get

    ‖Aᵀv‖² = ⟨Aᵀv, Aᵀv⟩ = 0,

giving us

    Aᵀv = 0.

Hence v ∈ nullsp(Aᵀ). Conversely, if v ∈ nullsp(Aᵀ), then

    ⟨v, Ax⟩ = vᵀAx = (Aᵀv)ᵀx = 0ᵀx = 0
for all x ∈ Rⁿ. So v ∈ (colsp(A))⊥. This proves the first equality. For the second equality, just
note that

    rowsp(A) = colsp(Aᵀ)

and apply the first equality (with Aᵀ in place of A) to get

    (colsp(Aᵀ))⊥ = nullsp(A),

and then apply Problem 2(b) to get

    rowsp(A) = ((rowsp(A))⊥)⊥ = (nullsp(A))⊥.
The two direct sum decompositions are then immediate consequences of Problem 2(a).
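These identities can also be verified numerically. In the sketch below the random matrix A is a hypothetical example, and the SVD-based construction of orthonormal bases for colsp(A) and nullsp(Aᵀ) is our choice of tool, not the solution's method:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.standard_normal((m, n))  # a generic example, so rank(A) = 3

# Orthonormal bases via the SVD: the first r columns of U span colsp(A),
# and the remaining columns span nullsp(A^T).
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))

col = U[:, :r]    # basis of colsp(A)
lnull = U[:, r:]  # basis of nullsp(A^T)

# nullsp(A^T) = (colsp(A))^perp, and dimensions add up to m,
# consistent with R^m = nullsp(A^T) (+) colsp(A)
print(np.allclose(A.T @ lnull, 0))         # True
print(np.allclose(col.T @ lnull, 0))       # True
print(col.shape[1] + lnull.shape[1] == m)  # True
```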
5. Let B₁ = {x₁, …, xₙ} and B₂ = {y₁, …, yₙ} be two bases of Rⁿ. B₁ and B₂ are not necessarily
orthonormal bases.

(a) Show that there exists an orthogonal matrix Q ∈ Rⁿˣⁿ such that

    Qxᵢ = yᵢ,  i = 1, …, n,

if and only if

    ⟨xᵢ, xⱼ⟩ = ⟨yᵢ, yⱼ⟩,  i, j = 1, …, n.
Solution. By a theorem we proved in the lectures, an orthogonal matrix Q preserves the
inner product and so satisfies

    ⟨yᵢ, yⱼ⟩ = ⟨Qxᵢ, Qxⱼ⟩ = ⟨xᵢ, xⱼ⟩.

It remains to show that if the latter condition is satisfied, then such an orthogonal matrix
may be defined. First, observe that since B₁ and B₂ are bases, the equations

    Qxᵢ = yᵢ,  i = 1, …, n,

define the matrix Q uniquely.¹ We only need to verify that Q is orthogonal. Let x ∈ Rⁿ
and let

    x = Σᵢ₌₁ⁿ αᵢxᵢ

be the representation of x in terms of B₁. Then
    ‖Qx‖² = ⟨Qx, Qx⟩
          = ⟨Σᵢ₌₁ⁿ αᵢQxᵢ, Σⱼ₌₁ⁿ αⱼQxⱼ⟩
          = ⟨Σᵢ₌₁ⁿ αᵢyᵢ, Σⱼ₌₁ⁿ αⱼyⱼ⟩
          = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ αᵢαⱼ⟨yᵢ, yⱼ⟩
          = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ αᵢαⱼ⟨xᵢ, xⱼ⟩
          = ⟨Σᵢ₌₁ⁿ αᵢxᵢ, Σⱼ₌₁ⁿ αⱼxⱼ⟩
          = ⟨x, x⟩
          = ‖x‖².
Since this is true for all x ∈ Rⁿ, i.e., Q preserves norms, the aforementioned theorem from the
lectures implies that Q is orthogonal.
(b) Show that if B₁ is an orthonormal basis and if Q ∈ Rⁿˣⁿ is an orthogonal matrix, then

    {Qx₁, …, Qxₙ}

is also an orthonormal basis.
Solution. Clearly Qx₁, …, Qxₙ are linearly independent, since if

    α₁Qx₁ + ⋯ + αₙQxₙ = 0,

then

    Q(α₁x₁ + ⋯ + αₙxₙ) = 0,

and so

    α₁x₁ + ⋯ + αₙxₙ = Qᵀ0 = 0,

and so α₁ = ⋯ = αₙ = 0 by the linear independence of x₁, …, xₙ. Hence {Qx₁, …, Qxₙ}
is a basis (since the set has size n = dim(Rⁿ)). Orthonormality follows from the fact
that Q preserves the inner product and that x₁, …, xₙ form an orthonormal basis:

    ⟨Qxᵢ, Qxⱼ⟩ = ⟨xᵢ, xⱼ⟩ = { 0 if i ≠ j,
                              1 if i = j.
¹ In fact, Q is just the matrix whose coordinate representation with respect to B₁ and B₂ is the identity matrix, i.e.,
[Q]_{B₁,B₂} = I. Explicitly, if we let X and Y be the matrices whose columns are x₁, …, xₙ and y₁, …, yₙ respectively,
then Q = Y X⁻¹.
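The footnote's formula Q = Y X⁻¹ can be illustrated numerically. In this hedged sketch the bases are hypothetical random ones: B₂ is constructed by rotating B₁ with a random orthogonal matrix, so that ⟨xᵢ, xⱼ⟩ = ⟨yᵢ, yⱼ⟩ holds by construction, and we then check that Q = Y X⁻¹ sends xᵢ to yᵢ and is orthogonal:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Any basis B1 = {x1, ..., xn} (columns of X); set Y = Q0 X for a random
# orthogonal Q0, so <xi,xj> = <yi,yj> holds by construction.
X = rng.standard_normal((n, n))
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
Y = Q0 @ X

# The matrix sending xi to yi, as in the footnote: Q = Y X^{-1}
Q = Y @ np.linalg.inv(X)

print(np.allclose(Q @ X, Y))            # True: Q xi = yi for each i
print(np.allclose(Q.T @ Q, np.eye(n)))  # True: Q is orthogonal
```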