
Math 110, Spring 2019.

Homework 3 solutions.
Prob 1. Let U = {p ∈ P4(ℝ) : ∫_{−2}^{2} p(x) dx = 0}.

(a) Find a basis for U.


(b) Extend your basis in part (a) to a basis of P4(ℝ).
(c) Find a subspace W of P4(ℝ) such that P4(ℝ) = U ⊕ W.

Solution. First let us prove the following


Lemma. Let p1(x), . . ., pk(x) ∈ Pn(ℝ) be polynomials of degrees d1 < d2 < · · · < dk, respectively. Then the list (p1(x), . . ., pk(x)) is linearly independent.
Proof. Suppose Σ_{j=1}^{k} αj pj(x) ≡ 0 for some α1, . . ., αk ∈ ℝ. If αk ≠ 0, then the highest-degree term in Σ_{j=1}^{k} αj pj(x) is the same as the highest-degree term in αk pk(x), which has degree dk; but the zero polynomial has no term of degree dk, so αk must equal zero. Now, if αk−1 ≠ 0, the highest-degree term in Σ_{j=1}^{k−1} αj pj(x) is the same as the highest-degree term in αk−1 pk−1(x), which has degree dk−1. Hence αk−1 = 0. Continuing in this way, we see that all αj must in fact equal zero. Hence the initial list of polynomials is linearly independent. This proves the Lemma.
(a) Suppose a polynomial p(x) = Σ_{j=0}^{4} aj x^j belongs to U. Integrating p(x) over [−2, 2] and dividing the result by 4, we obtain

    a0 + (4/3) a2 + (16/5) a4 = 0.

Note that the polynomials x, 3x² − 4, x³, 5x⁴ − 16 all satisfy this condition. These polynomials have distinct degrees 1 through 4, so they are linearly independent by the Lemma above.
Note that U ≠ P4(ℝ) since, e.g., the constant function 1 does not belong to U. Thus U is a proper subspace of P4(ℝ), hence its dimension cannot be equal to the dimension of P4(ℝ), which is equal to 5. Hence dim U ≤ 4. But the list x, 3x² − 4, x³, 5x⁴ − 16 of length four is linearly independent, so by 2.39, we conclude that this is a basis for U.
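As an optional sanity check (not part of the graded solution), both claims of part (a) can be verified symbolically with sympy:

```python
import sympy as sp

x = sp.symbols('x')

# Candidate basis of U from part (a): one polynomial of each degree 1 through 4
basis = [x, 3*x**2 - 4, x**3, 5*x**4 - 16]

# Membership in U: each polynomial should integrate to zero over [-2, 2]
integrals = [sp.integrate(p, (x, -2, 2)) for p in basis]

# Linear independence: a combination vanishing identically must be trivial
a1, a2, a3, a4 = sp.symbols('a1 a2 a3 a4')
combo = sp.expand(a1*basis[0] + a2*basis[1] + a3*basis[2] + a4*basis[3])
sol = sp.solve([combo.coeff(x, k) for k in range(5)], [a1, a2, a3, a4])

print(integrals)  # [0, 0, 0, 0]
print(sol)        # only the trivial solution: all coefficients zero
```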
(b) If we add the constant 1 to this basis of U, we will obtain a list of five polynomials of all possible degrees from 0 to 4, which is therefore linearly independent by the Lemma. Since it contains 5 = dim P4(ℝ) vectors, we conclude by 2.39 that this list forms a basis of P4(ℝ).
(c) Take W := span(1). We already know that the list 1, x, 3x² − 4, x³, 5x⁴ − 16 is linearly independent, hence the zero polynomial can be expressed in only one way as a linear combination of these polynomials, hence the sum W + U = W + span(x, 3x² − 4, x³, 5x⁴ − 16) = span(1, x, 3x² − 4, x³, 5x⁴ − 16) is direct; and since this list is a basis of P4(ℝ) by part (b), the direct sum equals P4(ℝ).
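Parts (b) and (c) can likewise be checked with sympy via the coefficient matrix of the extended list (a sketch, not required by the problem):

```python
import sympy as sp

x = sp.symbols('x')
polys = [sp.Integer(1), x, 3*x**2 - 4, x**3, 5*x**4 - 16]

# Rows = polynomials, columns = coefficients of x^0, ..., x^4
M = sp.Matrix([[sp.expand(p).coeff(x, k) for k in range(5)] for p in polys])
print(M.rank())  # 5, so the five polynomials form a basis of P4(R)

# The constant 1 is not in U, consistent with U ∩ W = {0}
print(sp.integrate(sp.Integer(1), (x, -2, 2)))  # 4, nonzero
```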
Prob 2. Suppose v1 , . . . , vm are linearly independent in V and w ∈ V . Prove that

dim span(v1 − w, v2 − w, . . . , vm − w) ≥ m − 1.

Proof. Let U := span(v1 − w, v2 − w, . . . , vm − w) and let W := span(w). Note that

U + W = span(w, v1 − w, v2 − w, . . . , vm − w)

contains the vectors v1 through vm, which are linearly independent. Hence dim(U + W) ≥ m. Also, dim W ≤ 1, since W is the span of a single vector.
Now, m ≤ dim(U + W ) = dim U + dim W − dim(U ∩ W ), where the equality is due to 2.43. This implies

dim U = dim(U + W ) + dim(U ∩ W ) − dim(W ) ≥ dim(U + W ) − dim(W ) ≥ m − 1.
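A quick numerical illustration of the bound (with arbitrarily chosen vectors; the edge case w = v1 shows the bound can actually be attained):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
V = rng.standard_normal((m, 6))    # rows v1, ..., vm; generically independent
assert np.linalg.matrix_rank(V) == m

# Generic w: the shifted list v1 - w, ..., vm - w usually keeps full rank m
w = rng.standard_normal(6)
dim_generic = np.linalg.matrix_rank(V - w)

# Edge case w = v1: then v1 - w = 0, and the rank drops to exactly m - 1
dim_edge = np.linalg.matrix_rank(V - V[0])

print(dim_generic, dim_edge)
```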

Prob 3. Does the ‘inclusion-exclusion formula’ hold for three subspaces, i.e., is it always true that

dim(U1 + U2 + U3) = dim(U1) + dim(U2) + dim(U3)
                    − dim(U1 ∩ U2) − dim(U1 ∩ U3) − dim(U2 ∩ U3)
                    + dim(U1 ∩ U2 ∩ U3)?

Prove this formula or provide a counterexample.


Counterexample. Here is a fairly straightforward counterexample. Let V = ℝ², u1 := (1, 0), u2 := (0, 1), u3 := (1, 1), and let Uj := span(uj). Each Uj is 1-dimensional, being the span of a single nonzero vector. Next,

U1 + U2 + U3 = span(u1, u2, u3) ⊇ span(u1, u2) = V,

and since U1 + U2 + U3 ⊆ V as well, in fact U1 + U2 + U3 = V, so dim(U1 + U2 + U3) = 2.

Note that none of the vectors uj is in fact a scalar multiple of another, so any two of these vectors are linearly
independent. Hence the zero vector can be represented only as a trivial linear combination of any pair of
these vectors. By 1.44, this implies that

U1 ∩ U2 = U1 ∩ U3 = U2 ∩ U3 = {0}.

Finally, U1 ∩ U2 ∩ U3 is a subspace of, say, U1 ∩ U2 , which is the zero subspace. Hence

U1 ∩ U2 ∩ U3 = {0}

as well. But then

dim(U1 + U2 + U3) = 2 ≠ 3 = dim(U1) + dim(U2) + dim(U3) − dim(U1 ∩ U2)
                            − dim(U1 ∩ U3) − dim(U2 ∩ U3) + dim(U1 ∩ U2 ∩ U3).
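The dimensions in this counterexample are easy to verify numerically, using ranks to compute dimensions of sums of spans:

```python
import numpy as np

u1, u2, u3 = np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.])

# dim(U1 + U2 + U3) = rank of the matrix with rows u1, u2, u3
lhs = np.linalg.matrix_rank(np.vstack([u1, u2, u3]))

# dim(Ui ∩ Uj) = dim Ui + dim Uj - dim(Ui + Uj) = 1 + 1 - 2 = 0 for each pair
pair_sums = [np.linalg.matrix_rank(np.vstack(p)) for p in [(u1, u2), (u1, u3), (u2, u3)]]

# Right-hand side of the would-be inclusion-exclusion formula
rhs = 1 + 1 + 1 - 0 - 0 - 0 + 0

print(lhs, pair_sums, rhs)  # 2 [2, 2, 2] 3
```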

Prob 4. Let a, b ∈ ℝ. Define T : P(ℝ) → ℝ² by

    T p := (2p(4) + 5p′(2) + a p(1) p(3), ∫_{−1}^{2} x³ p(x) dx + b cos p(0)).

Show that T is linear if and only if a = b = 0.


Proof. Suppose T is linear. Then we must have T(0) = 0 by 3.11. Plugging in p ≡ 0, we obtain T(0) = (0, b), which therefore must equal (0, 0). This means b = 0. Now, by linearity of T again, we must have T(2) = 2T(1). Plugging in p ≡ 2 and p ≡ 1, we obtain T(2) = (4 + 4a, 2∫_{−1}^{2} x³ dx) and T(1) = (2 + a, ∫_{−1}^{2} x³ dx). Now T(2) = 2T(1) implies 4 + 4a = 2(2 + a), hence a = 0.
Conversely, suppose a = b = 0, hence T p = (2p(4) + 5p′(2), ∫_{−1}^{2} x³ p(x) dx) for all p ∈ P(ℝ). Then, by the linearity of point evaluation, integration, and differentiation, we get

    T(λp) = (2λp(4) + 5λp′(2), ∫_{−1}^{2} x³ λp(x) dx) = λ(2p(4) + 5p′(2), ∫_{−1}^{2} x³ p(x) dx) = λT(p)

for any λ ∈ ℝ. By the linearity of point evaluation, integration, and differentiation again,
    T(p + q) = (2(p + q)(4) + 5(p + q)′(2), ∫_{−1}^{2} x³ (p + q)(x) dx)
             = (2(p(4) + q(4)) + 5(p′(2) + q′(2)), ∫_{−1}^{2} x³ (p(x) + q(x)) dx)
             = (2p(4) + 2q(4) + 5p′(2) + 5q′(2), ∫_{−1}^{2} x³ p(x) dx + ∫_{−1}^{2} x³ q(x) dx)
             = (2p(4) + 5p′(2), ∫_{−1}^{2} x³ p(x) dx) + (2q(4) + 5q′(2), ∫_{−1}^{2} x³ q(x) dx)
             = T(p) + T(q).

So T is a linear map.
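The two forced values a = 0 and b = 0 can be recovered symbolically as well. Here `T` is our own helper applying the map to a polynomial expression (a sketch using sympy, not part of the solution):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

def T(p, a_val, b_val):
    # Hypothetical helper: apply the map T to a polynomial expression p in x
    first = 2*p.subs(x, 4) + 5*sp.diff(p, x).subs(x, 2) + a_val*p.subs(x, 1)*p.subs(x, 3)
    second = sp.integrate(x**3 * p, (x, -1, 2)) + b_val*sp.cos(p.subs(x, 0))
    return (sp.expand(first), sp.expand(second))

# T(0) = (0, b): linearity forces b = 0
t_zero = T(sp.Integer(0), a, b)

# With b = 0, T(2) - 2 T(1) = (2a, 0): linearity forces a = 0
t2 = T(sp.Integer(2), a, 0)
t1 = T(sp.Integer(1), a, 0)
gap = (sp.expand(t2[0] - 2*t1[0]), sp.expand(t2[1] - 2*t1[1]))

print(t_zero, gap)  # (0, b) (2*a, 0)
```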

Prob 5. Suppose T ∈ L(V, W), v1, . . ., vm ∈ V, and the list T v1, T v2, . . ., T vm is linearly independent (in W). Prove that v1, . . ., vm must be linearly independent in V. What is the contrapositive of this statement?

Proof. Given that the list T v1, T v2, . . ., T vm is linearly independent in W, suppose that a linear combination α1 v1 + · · · + αm vm is equal to zero (the zero vector of V). By the linearity of T, we have

0 = T (0) = T (α1 v1 + · · · + αm vm ) = α1 T v1 + · · · + αm T vm .

The linear independence of T v1 , T v2 , . . . , T vm now implies that α1 = · · · = αm = 0. This shows that the list
v1 , v2 , . . ., vm is linearly independent in V .

The contrapositive of this statement is as follows: Suppose that the list v1 , . . . , vm is linearly dependent in
V . Then the list T v1 , T v2 , . . . , T vm is linearly dependent in W .
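With linear maps represented as matrices, both the statement and its contrapositive are easy to illustrate (a hypothetical example, not a proof):

```python
import numpy as np

# A linear map T : R^3 -> R^4, represented by an arbitrarily chosen matrix
T = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])

# Independent v1, v2, v3 (columns of v): their images are independent here,
# and the statement then guarantees the v_j themselves are independent
v = np.eye(3)
assert np.linalg.matrix_rank(T @ v) == 3
assert np.linalg.matrix_rank(v) == 3

# Contrapositive: a dependent list (v3 = v1 + v2) has dependent images
v_dep = np.array([[1., 0., 1.],
                  [0., 1., 1.],
                  [0., 0., 0.]])
rank_images = np.linalg.matrix_rank(T @ v_dep)
print(rank_images)  # 2, so the images are dependent as well
```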

Prob 6. Suppose V is a nonzero finite-dimensional vector space and W is infinite-dimensional. Prove that
L(V, W ) is infinite-dimensional.
Proof. Let us prove the contrapositive, namely, that the finite dimensionality of L(V, W ) implies the finite
dimensionality of W . If dim L(V, W ) = N , then any N + 1 linear maps from V to W are linearly dependent.
Consider N + 1 arbitrary vectors wj ∈ W , j = 1, . . . , N + 1. Take any basis (vk ) of V . By 3.5, there exist
linear maps Tj ∈ L(V, W ) such that Tj (v1 ) = wj for all j = 1, . . . , N + 1 (note that we do not care where
the basis vectors other than v1 are being sent). Since the list of maps T1 , . . ., TN +1 is linearly dependent in
L(V, W ), the list T1 v, . . ., TN +1 v is linearly dependent in W for any v ∈ V . Therefore, the list

w1 (= T1 (v1 )), w2 (= T2 (v1 )), . . . , wN +1 (= TN +1 (v1 ))

is linearly dependent in W .
Thus, any N + 1 vectors in W form a linearly dependent list. Hence W contains no linearly independent
lists of length greater than N . Hence dim W ≤ N . This concludes the proof.
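The key step of the proof — a linearly dependent list of maps yields a linearly dependent list of images at every vector — can be illustrated with matrices (a hypothetical example with V = ℝ² and W = ℝ³):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three maps R^2 -> R^3 as matrices, with the dependence T3 = 2*T1 - T2
T1 = rng.standard_normal((3, 2))
T2 = rng.standard_normal((3, 2))
T3 = 2*T1 - T2

# The same dependence relation holds for the images at any vector v
v = rng.standard_normal(2)
residual = np.linalg.norm(2*(T1 @ v) - (T2 @ v) - (T3 @ v))

images = np.column_stack([T1 @ v, T2 @ v, T3 @ v])
print(residual, np.linalg.matrix_rank(images))  # residual 0; rank at most 2
```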
