
CORRECTION OF HOMEWORK

In all the following, we let $F$ denote either $\mathbb{R}$, the field of real numbers, or $\mathbb{C}$, the field of complex numbers. We shall identify polynomials and polynomial functions. We let $F[X]$ be the set of polynomials with coefficients in $F$, i.e. the set of functions $P$ such that
$$P(X) = a_0 + a_1 X + a_2 X^2 + \cdots + a_n X^n = \sum_{i=0}^n a_i X^i$$
for some $a_0, a_1, \dots, a_n$ in $F$. For such a function, if $a_n \neq 0$, we call $n$ the degree of $P$. Finally, we let $F_n[X]$ be the set of elements in $F[X]$ of degree less than or equal to $n$.

1. Exercise 9 page 123

Here, we give a possible correction of this exercise. See at the bottom for other possibilities of proofs. On $F[X]$, we define the following transformations $T, D \colon F[X] \to F[X]$ by
$$(T(P))(X) = \int_0^X P(x)\,dx = \sum_{i=0}^n \frac{a_i}{i+1}\, X^{i+1}, \tag{1.1}$$
$$(D(P))(X) = \sum_{i=0}^n i\,a_i X^{i-1}, \tag{1.2}$$
where we have expressed
$$P = a_0 + a_1 X + a_2 X^2 + \cdots + a_n X^n. \tag{1.3}$$
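As a concrete sketch (our own, not part of the correction), the maps $T$ and $D$ of (1.1)–(1.2) can be implemented on coefficient lists; exact rational arithmetic keeps the coefficients $a_i/(i+1)$ precise.

```python
from fractions import Fraction

# Represent P = a0 + a1 X + ... + an X^n as the coefficient list [a0, ..., an].

def T(p):
    """Integration map (1.1): sends a_i X^i to a_i/(i+1) X^(i+1)."""
    return [Fraction(0)] + [Fraction(a) / (i + 1) for i, a in enumerate(p)]

def D(p):
    """Differentiation map (1.2): sends a_i X^i to i*a_i X^(i-1)."""
    return [i * Fraction(a) for i, a in enumerate(p)][1:] or [Fraction(0)]

# Example: for P = 1 + X, T(P) = X + X^2/2, and D(T(P)) recovers P.
p = [Fraction(1), Fraction(1)]
assert T(p) == [0, 1, Fraction(1, 2)]
assert D(T(p)) == p
```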

Show that T is nonsingular on $F[X]$, but is not invertible.

$T$ is nonsingular if and only if its null space is the zero space, $\mathrm{null}(T) = \{0\}$. But if $P$ as in (1.3) is such that $T(P) = 0$, then we deduce that
$$a_0 X + \frac{a_1}{2} X^2 + \cdots + \frac{a_n}{n+1} X^{n+1} = 0,$$
and since $(X, X^2, X^3, \dots, X^{n+1})$ is linearly independent, we deduce that
$$a_0 = \frac{a_1}{2} = \cdots = \frac{a_n}{n+1} = 0,$$
from which it follows that $P$ expressed as (1.3) is $0$. Consequently, $\mathrm{null}(T) = \{0\}$ and $T$ is nonsingular.

To prove that $T$ is not invertible, we must prove that it is not onto (or at least, prove something that would imply it). Here, we remark that every polynomial $Q$ in the range of $T$ satisfies $Q(0) = 0$, and so it suffices to find a polynomial $P$ such that $P(0) \neq 0$. For example, $P = X + 1$ (or $P = 1$, or $P = X^2 + 1$, ...) is not in the range of $T$.

Find the null space of D.
Date: Wednesday March 25, 2008.

Suppose that $P$ given by (1.3) satisfies $D(P) = 0$. Applying the definition of $D$ in (1.2), we get that
$$a_1 + 2a_2 X + \cdots + n a_n X^{n-1} = 0,$$
and since $(1, X, X^2, \dots, X^{n-1})$ is linearly independent, we get that $a_1 = 2a_2 = \cdots = n a_n = 0$. Consequently, $P = a_0$ is constant, so $\mathrm{null}(D) \subset \mathrm{Span}(1) = F_0[X]$. Conversely, every constant polynomial $P = a_0$ satisfies $D(P) = 0$. In conclusion, $\mathrm{null}(D) = \mathrm{Span}(1)$.

Show that $DT = \mathrm{Id}_{F[X]}$ and that $TD \neq \mathrm{Id}_{F[X]}$.

Let $P$ be any polynomial in $F[X]$ written as in (1.3). Applying the definition of $T$ in (1.1), we get that
$$T(P) = a_0 X + \frac{a_1}{2} X^2 + \cdots + \frac{a_n}{n+1} X^{n+1},$$
and applying the rule in (1.2),
$$(D(T(P)))(X) = a_0 + 2\,\frac{a_1}{2}\, X + \cdots + (n+1)\,\frac{a_n}{n+1}\, X^n = P.$$
Since $P$ was an arbitrary polynomial, $DT(P) = P$ for all polynomials, hence $DT = \mathrm{Id}_{F[X]}$. To prove that $TD \neq \mathrm{Id}_{F[X]}$, we need only find a polynomial $Q$ such that $TD(Q) \neq Q$. And here, we can (for example) compute $TD(X+1) = T(1) = X \neq X+1$.

Show that
$$T(T(f)g) = (Tf)(Tg) - T(f(Tg)) \tag{1.4}$$
for all $f, g \in F[X]$.

One way to answer this question uses the following very useful principle in linear algebra: "to verify that a linear relation/equality holds, it suffices to test it with a basis". Here, we want to check (1.4), which is linear both in $f$ and in $g$; hence it suffices to prove it when $f$ and $g$ are arbitrary elements of the basis $B = (1, X, X^2, X^3, \dots)$.

To see how this works, we first prove the following claim: if (1.4) holds for every polynomial $g$ and every $f$ that is an element of the basis $B$, then (1.4) holds for every polynomial $f$ and every polynomial $g$. Indeed, let us assume the hypothesis of the claim, let $P$ be a polynomial given by (1.3), and let $g$ be another polynomial. We get, using linearity, that
$$T(T(P)g) = T\big(T(a_0 + a_1 X + \cdots + a_n X^n)\,g\big) = a_0 T(T(1)g) + a_1 T(T(X)g) + \cdots + a_n T(T(X^n)g),$$
and using the hypothesis for each term,
$$T(T(P)g) = a_0\big((T1)(Tg) - T(1(Tg))\big) + \cdots + a_n\big((TX^n)(Tg) - T(X^n(Tg))\big) = (TP)(Tg) - T(P(Tg)),$$
which proves that the claim is true. Applying a similar reasoning in $g$, we see that it suffices to prove (1.4) in the special case when $f = X^j$ for some $j \ge 0$ and $g = X^k$ for some $k \ge 0$.

So let us fix $j, k \ge 0$ and let $f = X^j$, $g = X^k$. But this is now a simple matter of computation:
$$T\big(T(X^j)X^k\big) = T\Big(\frac{1}{j+1}\,X^{j+1} X^k\Big) = \frac{1}{j+1}\, T(X^{j+k+1}) = \frac{1}{(j+1)(j+k+2)}\, X^{j+k+2},$$
and
$$(TX^j)(TX^k) - T\big(X^j (TX^k)\big) = \frac{1}{(j+1)(k+1)}\, X^{j+1} X^{k+1} - \frac{1}{k+1}\, T(X^{j+k+1}) = \Big(\frac{1}{(j+1)(k+1)} - \frac{1}{(k+1)(j+k+2)}\Big) X^{j+k+2} = \frac{1}{(j+1)(j+k+2)}\, X^{j+k+2}, \tag{1.5}$$
hence
$$T\big(T(X^j)X^k\big) = (TX^j)(TX^k) - T\big(X^j(TX^k)\big).$$
Since we made no hypothesis on $j, k \ge 0$, we get that (1.4) is true whenever $f$ and $g$ are elements of the basis $B$, and consequently, by linearity, (1.4) holds for all polynomials $f$ and $g$.

Finally, state and prove a similar statement for $D$.

We claim that for all polynomials $f$ and $g$, there holds
$$D(fg) = D(f)g + f D(g). \tag{1.6}$$
Here again, since this statement is linear in $f$ and in $g$, it suffices to prove (1.6) when $f$ and $g$ are elements of the basis $B$, say $f = X^j$ and $g = X^k$ for $j, k \ge 0$. We compute
$$D(X^j X^k) = D(X^{j+k}) = (j+k)\, X^{j+k-1},$$
and
$$D(X^j)X^k + X^j D(X^k) = j X^{j-1+k} + k X^{j+k-1} = (j+k)\, X^{j+k-1},$$
hence (1.6) holds whenever $f$ and $g$ are elements of the basis $B$, and consequently for all polynomials.

Suppose that $V$ is a nonzero subspace of $F[X]$ such that $TV \subset V$ (i.e. for all $f \in V$, there holds $Tf \in V$). Show that $V$ is not finite dimensional.

To prove this, we prove the contrapositive, namely: if $V$ is finite dimensional, then either $V = \{0\}$ or there exists $f \in V$ such that $Tf$ does not belong to $V$. Let us suppose that $V$ is finite dimensional and $V \neq \{0\}$, and let $B$ be a basis of $V$. Since $V \neq \{0\}$, $B$ is not empty, and there exists a polynomial in $B$ of maximal degree $P$ with $\deg P = n \ge 0$; then $V = \mathrm{Span}(B) \subset F_n[X]$ (i.e. every polynomial in $V$ has degree lower or equal to $n$). But then, $\deg(TP) = \deg P + 1$, so $T(P)$ is of degree $n+1$, and hence does not belong to $F_n[X]$, and not to $V$. This finishes the proof.
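The identity (1.4) can also be spot-checked on monomials, mirroring the computation (1.5). Below is a small sketch with our own coefficient-list helpers (not part of the correction).

```python
from fractions import Fraction

# Check identity (1.4), T(T(f)g) = (Tf)(Tg) - T(f(Tg)), on monomials f = X^j, g = X^k.
# Polynomials are coefficient lists [a0, a1, ...].

def T(p):  # integration map (1.1)
    return [Fraction(0)] + [Fraction(a) / (i + 1) for i, a in enumerate(p)]

def mul(p, q):  # polynomial product
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * b
    return r

def X(n):  # the monomial X^n
    return [Fraction(0)] * n + [Fraction(1)]

for j in range(5):
    for k in range(5):
        lhs = T(mul(T(X(j)), X(k)))
        rhs_a = mul(T(X(j)), T(X(k)))
        rhs_b = T(mul(X(j), T(X(k))))
        assert lhs == [a - b for a, b in zip(rhs_a, rhs_b)], (j, k)
```

Both sides come out to $X^{j+k+2}/((j+1)(j+k+2))$, as in (1.5).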

Suppose $V$ is a finite dimensional subspace of $F[X]$. Prove that there exists $m \ge 0$ such that $D^m V = \{0\}$.

Since $V$ is finite dimensional, as above, $V \subset F_n[X]$ for some $n \ge 0$ (i.e. every polynomial in $V$ has degree lower or equal to $n$). Now, for every polynomial $P$ in $F_n[X]$ (and in particular in $V$), we have that $D^{n+1} P = 0$; indeed, $D^n P$ is a polynomial of degree lower or equal to $0$, hence a constant, and hence $D(D^n P) = 0$. Consequently, $D^{n+1} V = \{0\}$.

Remark 1. (1) Another possibility to prove all the claims above was to use the tools from Calculus, since (because we have identified polynomials and polynomial functions) any polynomial is in particular a differentiable function. Then, the formulas $\deg D(P) = \deg P - 1$ (for $P$ not constant) and $\deg(TP) = \deg P + 1$ allowed very short proofs of the questions above.
(2) Introducing the notion of the valuation $\mathrm{Val}(P)$, which is the smallest index $i$ such that $a_i \neq 0$, where $a_i$ is the coefficient of $X^i$ in the expansion of $P$ in (1.3), we see that $\mathrm{Val}(PQ) = \mathrm{Val}(P) + \mathrm{Val}(Q)$, and that $\mathrm{Val}(P+Q) \ge \min(\mathrm{Val}(P), \mathrm{Val}(Q))$. Besides, $\mathrm{Val}(TP) = \mathrm{Val}\,P + 1$ and $\mathrm{Val}\,D(P) = \mathrm{Val}\,P - 1$. Thus $\mathrm{Val}$ has some properties similar to the degree, and considering the valuation instead of the degree could provide alternate proofs.

2. Exercises p126-127

In this section, we will use the following notations: for $S = (x_1, x_2, \dots, x_{n+1})$, $n+1$ distinct points in $F$, we denote by $P_i$ the $i$-th Lagrange interpolation polynomial associated to $S$. We have the explicit formula for $P_i$:
$$P_i(X) = \frac{X - x_1}{x_i - x_1}\,\frac{X - x_2}{x_i - x_2} \cdots \frac{X - x_{i-1}}{x_i - x_{i-1}}\,\frac{X - x_{i+1}}{x_i - x_{i+1}} \cdots \frac{X - x_{n+1}}{x_i - x_{n+1}}. \tag{2.1}$$
This is the only polynomial $P$ such that $\deg(P) \le n$ and $P(x_k) = \delta_{ik}$.

2.1. Exercise 1. Find a polynomial $f$ of degree lower or equal to $3$ such that $f(-1) = -6$, $f(0) = 2$, $f(1) = -2$ and $f(2) = 6$.

Following what we did in class, $f$ is uniquely given by
$$f = -6 P_1 + 2 P_2 - 2 P_3 + 6 P_4,$$
where the $P_i$ are the Lagrange interpolation polynomials associated to $S = (-1, 0, 1, 2)$. Then it suffices to compute the $P_i$ to find that¹
$$f = 2 - 2X - 6X^2 + 4X^3. \tag{2.2}$$
Another possibility to find $f$ directly, without computing the $P_i$, is as follows.

¹At this point, it is convenient to take some time to check that $f$ indeed satisfies the interpolation problem.
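The computation of Exercise 1 can be reproduced numerically. The sketch below is our own implementation of formula (2.1) (the helper names are not from the correction); it recovers $f = 2 - 2X - 6X^2 + 4X^3$.

```python
from fractions import Fraction

# Build the interpolant of degree <= 3 through (-1,-6), (0,2), (1,-2), (2,6)
# from the Lagrange polynomials (2.1), as a coefficient list [a0, a1, a2, a3].

def polymul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def lagrange(points):
    """Coefficients of the unique interpolating polynomial through `points`."""
    coeffs = [Fraction(0)] * len(points)
    for i, (xi, yi) in enumerate(points):
        term = [Fraction(yi)]
        for j, (xj, _) in enumerate(points):
            if j != i:
                # multiply by the factor (X - xj)/(xi - xj) from (2.1)
                term = polymul(term, [Fraction(-xj, xi - xj), Fraction(1, xi - xj)])
        for k, a in enumerate(term):
            coeffs[k] += a
    return coeffs

f = lagrange([(-1, -6), (0, 2), (1, -2), (2, 6)])
assert f == [2, -2, -6, 4]   # f = 2 - 2X - 6X^2 + 4X^3, as in the text
```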

We introduce the mapping $\mathrm{Eval}_S \colon \mathbb{R}_3[X] \to \mathbb{R}^4$ defined by $\mathrm{Eval}_S(P) = (P(-1), P(0), P(1), P(2))$, where $S = (-1, 0, 1, 2)$. Introducing $B = (1, X, X^2, X^3)$, a basis of $\mathbb{R}_3[X]$, and $BC$, the canonical basis of $\mathbb{R}^4$, we can compute the matrix of $\mathrm{Eval}_S$ in the bases $B$ and $BC$:
$$[\mathrm{Eval}_S]_{BC,B} = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 2 & 4 & 8 \end{pmatrix}.$$
We saw in class (and it is not difficult to check, using the matrix of the application) that this is an isomorphism. We want to find the unique polynomial $P = a_0 + a_1 X + a_2 X^2 + a_3 X^3$ such that $\mathrm{Eval}_S(P) = (-6, 2, -2, 6)$, or, in coordinates,
$$\begin{pmatrix} 1 & -1 & 1 & -1 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 1 & 2 & 4 & 8 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix}_B = \begin{pmatrix} -6 \\ 2 \\ -2 \\ 6 \end{pmatrix}_{BC}.$$
Then, it is easy to solve this system to get that
$$\begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix}_B = \begin{pmatrix} 2 \\ -2 \\ -6 \\ 4 \end{pmatrix}_{BC},$$
which gives (2.2).

2.2. Exercise 2. Let $\alpha$, $\beta$, $\gamma$ and $\delta$ be real numbers. Prove that it is possible to find a polynomial $f \in \mathbb{R}[X]$ of degree not more than $2$ such that $f(-1) = \alpha$, $f(1) = \beta$, $f(3) = \gamma$ and $f(0) = \delta$ if and only if
$$3\alpha + 6\beta - \gamma - 8\delta = 0. \tag{2.3}$$
In this case, we have a degenerate interpolation problem, since the degree allowed is smaller than $\mathrm{Card}(S) - 1$. There are at least three different and interesting ways to attack that problem.

(1) One can set $P = a_0 + a_1 X + a_2 X^2$ and reduce the question to solving a system: we can rephrase the question as proving that the system
$$\begin{aligned} P(-1) &= a_0 - a_1 + a_2 = \alpha \\ P(1) &= a_0 + a_1 + a_2 = \beta \\ P(3) &= a_0 + 3a_1 + 9a_2 = \gamma \\ P(0) &= a_0 = \delta \end{aligned}$$
has a solution if and only if (2.3) holds.

(2) One can use the Lagrange interpolation theorem. Using the Lagrange theorem, there exists a unique polynomial $g$ of degree lower or equal to $2$ that solves the (partial) interpolation problem $g(-1) = \alpha$, $g(1) = \beta$ and $g(3) = \gamma$, and we know $g$ explicitly in terms of the Lagrange polynomials associated to $S = (-1, 1, 3)$. Hence, $g$ is a solution of the full problem if and only if $g(0) = \delta$. And conversely, if there is a solution to the full interpolation problem, it must be $g$ (that we now know explicitly). Writing out the condition $g(0) = \delta$ again gives (2.3).
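Condition (2.3) can be spot-checked numerically: taking $g$ to be the degree-$\le 2$ interpolant through $(-1, \alpha)$, $(1, \beta)$, $(3, \gamma)$ and setting $\delta = g(0)$, the relation $3\alpha + 6\beta - \gamma - 8\delta = 0$ must hold. The helper below is our own sketch.

```python
from fractions import Fraction
from itertools import product

def interp_eval(points, x):
    """Evaluate the Lagrange interpolant through `points` at x, per (2.1)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# For every small integer choice of (alpha, beta, gamma), condition (2.3)
# holds with delta = g(0).
for alpha, beta, gamma in product(range(-2, 3), repeat=3):
    delta = interp_eval([(-1, alpha), (1, beta), (3, gamma)], 0)
    assert 3 * alpha + 6 * beta - gamma - 8 * delta == 0
```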

(3) One can use our knowledge of matrices. Indeed, we can rephrase the question as follows: prove that $(\alpha, \beta, \gamma, \delta)$ is in the range of the application $\mathrm{Eval}_S \colon \mathbb{R}_2[X] \to \mathbb{R}^4$ defined by $\mathrm{Eval}_S(P) = (P(-1), P(1), P(3), P(0))$. But if $B = (1, X, X^2)$ and $BC$ is the canonical basis of $\mathbb{R}^4$, we get that
$$M = [\mathrm{Eval}_S]_{BC,B} = \begin{pmatrix} 1 & -1 & 1 \\ 1 & 1 & 1 \\ 1 & 3 & 9 \\ 1 & 0 & 0 \end{pmatrix},$$
and the vector $(\alpha, \beta, \gamma, \delta)$ is in the range of $\mathrm{Eval}_S$ if and only if
$$\begin{pmatrix} \alpha \\ \beta \\ \gamma \\ \delta \end{pmatrix} \in \mathrm{Col}(M).$$
Column-reducing $M$, we get (2.3).

2.3. Exercise 3. Let
$$A = \begin{pmatrix} 2 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
and $P = (X-2)(X-3)(X-1) = X^3 - 6X^2 + 11X - 6$.

Show that $P(A) = 0$.

We will give two proofs. (1) Computing directly, remarking that for a polynomial applied to a diagonal matrix one can directly compute the image of each diagonal coefficient by $P$, one gets that
$$P(A) = \begin{pmatrix} P(2) & 0 & 0 & 0 \\ 0 & P(2) & 0 & 0 \\ 0 & 0 & P(3) & 0 \\ 0 & 0 & 0 & P(1) \end{pmatrix} = 0,$$
since $2$, $3$ and $1$ are roots of $P$.

Let $P_1$, $P_2$ and $P_3$ be the Lagrange polynomials associated to $S = (2, 3, 1)$; compute $E_i = P_i(A)$.

One can compute the $P_i$ explicitly, and we will see how to do it very efficiently. Indeed, let $B = (1, X, X^2)$ be a basis of $\mathbb{R}_2[X]$, introduce $\mathrm{Eval}_S \colon \mathbb{R}_2[X] \to \mathbb{R}^3$ defined by $\mathrm{Eval}_S(Q) = (Q(2), Q(3), Q(1))$, and let $BC$ be the canonical basis of $\mathbb{R}^3$. We want to find
$$P_1 = \begin{pmatrix} a \\ b \\ c \end{pmatrix}_B, \qquad P_2 = \begin{pmatrix} d \\ e \\ f \end{pmatrix}_B, \qquad P_3 = \begin{pmatrix} g \\ h \\ i \end{pmatrix}_B$$
such that $\mathrm{Eval}_S(P_i) = E_i$, where $E_i$ is the $i$-th vector in $BC$.
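The fact that $P(A) = A^3 - 6A^2 + 11A - 6I$ vanishes for $A = \mathrm{diag}(2, 2, 3, 1)$ can be verified by brute force; the plain-list matrix helpers below are our own sketch.

```python
# Direct check that P(A) = A^3 - 6A^2 + 11A - 6I = 0 for A = diag(2, 2, 3, 1).

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def diag(*entries):
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

A = diag(2, 2, 3, 1)
A2 = matmul(A, A)
A3 = matmul(A2, A)
I = diag(1, 1, 1, 1)

# P(A) = A^3 - 6A^2 + 11A - 6I, entry by entry.
PA = [[A3[i][j] - 6 * A2[i][j] + 11 * A[i][j] - 6 * I[i][j] for j in range(4)]
      for i in range(4)]
assert PA == diag(0, 0, 0, 0)
```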

Following the definitions, this amounts to
$$[\mathrm{Eval}_S]_{BC,B}\,\big((P_1)_B \ (P_2)_B \ (P_3)_B\big) = I_3,$$
or equivalently
$$M \begin{pmatrix} a & d & g \\ b & e & h \\ c & f & i \end{pmatrix} = I_3, \qquad \text{where } M = [\mathrm{Eval}_S]_{BC,B} = \begin{pmatrix} 1 & 2 & 4 \\ 1 & 3 & 9 \\ 1 & 1 & 1 \end{pmatrix}.$$
Hence, it suffices to compute $M^{-1}$ to read off the coefficients of the $P_i$. And this can be done efficiently, row-reducing the matrix $M$. This gives
$$M^{-1} = \begin{pmatrix} a & d & g \\ b & e & h \\ c & f & i \end{pmatrix} = \begin{pmatrix} -3 & 1 & 3 \\ 4 & -\tfrac{3}{2} & -\tfrac{5}{2} \\ -1 & \tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix},$$
hence²
$$P_1 = -3 + 4X - X^2, \qquad P_2 = 1 - \tfrac{3}{2} X + \tfrac{1}{2} X^2, \qquad P_3 = 3 - \tfrac{5}{2} X + \tfrac{1}{2} X^2. \tag{2.4}$$
Now that we know the $P_i$ explicitly, we can compute
$$E_1 = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{pmatrix}, \qquad E_2 = \begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&1&0 \\ 0&0&0&0 \end{pmatrix}, \qquad E_3 = \begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&1 \end{pmatrix}. \tag{2.5}$$

(2) Another possibility was to use again the computation of polynomials of diagonal matrices and the fact that the $P_i$ solve a known interpolation problem ($P_i(x_k) = \delta_{ik}$) to get that
$$E_1 = \begin{pmatrix} P_1(2)&0&0&0 \\ 0&P_1(2)&0&0 \\ 0&0&P_1(3)&0 \\ 0&0&0&P_1(1) \end{pmatrix} = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{pmatrix},$$
and similarly $E_2 = \mathrm{diag}(0, 0, 1, 0)$ and $E_3 = \mathrm{diag}(0, 0, 0, 1)$, and we recover (2.5).

²Once again, at this point it is convenient to pause to check that the $P_i$ satisfy the interpolation problem.
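The explicit $P_i$ above can be sanity-checked in two ways: they should interpolate $\delta_{ik}$ on $S = (2, 3, 1)$, and evaluating them on the diagonal of $A$ should reproduce the $E_i$. The helpers are our own sketch.

```python
from fractions import Fraction

# The Lagrange polynomials of Exercise 3, as coefficient lists [a0, a1, a2].
P1 = [Fraction(-3), Fraction(4), Fraction(-1)]          # -3 + 4X - X^2
P2 = [Fraction(1), Fraction(-3, 2), Fraction(1, 2)]     # 1 - 3/2 X + 1/2 X^2
P3 = [Fraction(3), Fraction(-5, 2), Fraction(1, 2)]     # 3 - 5/2 X + 1/2 X^2

def peval(p, x):
    return sum(a * x**i for i, a in enumerate(p))

# Interpolation check: P_i(x_k) = delta_ik on S = (2, 3, 1).
S = (2, 3, 1)
for i, P in enumerate((P1, P2, P3)):
    for k, x in enumerate(S):
        assert peval(P, x) == (1 if i == k else 0)

# Since A = diag(2, 2, 3, 1), P_i(A) = diag(P_i(2), P_i(2), P_i(3), P_i(1)).
diag_of = lambda P: [peval(P, x) for x in (2, 2, 3, 1)]
assert diag_of(P1) == [1, 1, 0, 0]   # E1
assert diag_of(P2) == [0, 0, 1, 0]   # E2
assert diag_of(P3) == [0, 0, 0, 1]   # E3
```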

2.4. Exercise 4. Let $P$ be as above, let $T$ be any linear transformation on $\mathbb{R}^4$ such that $P(T) = 0$, let $P_1$, $P_2$, $P_3$ be the Lagrange polynomials defined in Exercise 3 above, and let $E_i = P_i(T)$. Prove that
$$E_1 + E_2 + E_3 = \mathrm{Id}_{\mathbb{R}^4}, \qquad E_i E_j = \delta_{ij} E_i, \qquad T = 2E_1 + 3E_2 + E_3. \tag{2.6}$$

This could be done by direct computations. A more theoretical proof is below. We introduce the linear transformation ("substitution $X \to T$") $\mathrm{Sub}_T \colon \mathbb{R}[X] \to L(\mathbb{R}^4)$ defined by $\mathrm{Sub}_T(Q) = Q(T)$, i.e.
$$\mathrm{Sub}_T(a_0 + a_1 X + a_2 X^2 + \cdots + a_n X^n) = a_0\,\mathrm{Id}_{\mathbb{R}^4} + a_1 T + a_2 T^2 + \cdots + a_n T^n.$$
We remind that this is a linear transformation satisfying the multiplicative property $\mathrm{Sub}_T(QR) = \mathrm{Sub}_T(Q)\,\mathrm{Sub}_T(R)$.

To prove the first formula, we need to show that $\mathrm{Sub}_T(P_1 + P_2 + P_3) = \mathrm{Id}_{\mathbb{R}^4} = \mathrm{Sub}_T(1)$. If one has computed the $P_i$ explicitly, one can compute $P_1 + P_2 + P_3 = 1$ directly. If one has not computed the $P_i$ before, one can still see that $Q = P_1 + P_2 + P_3$ is the only polynomial that satisfies the following interpolation problem: $Q$ is of degree lower or equal to $2$ and $Q(2) = Q(3) = Q(1) = 1$. But the polynomial $1$ satisfies the same interpolation problem; by unicity of the solution of the Lagrange interpolation problem, we see that $Q = 1$. Evaluating both sides at $T$, we get $E_1 + E_2 + E_3 = \mathrm{Id}_{\mathbb{R}^4}$.

To prove the second formula, we first prove that $E_i E_j = 0$ if $i \neq j$, i.e. that $\mathrm{Sub}_T(P_i P_j) = 0$. Using the explicit formula (2.1) for the $P_i$, we see that if $\{i, j, k\} = \{1, 2, 3\}$ (but not necessarily in that order), then, writing $S = (x_1, x_2, x_3) = (2, 3, 1)$,
$$P_i = \frac{P}{(X - x_i)(x_i - x_j)(x_i - x_k)},$$
and hence
$$P_i P_j = \frac{P}{(X - x_i)(x_i - x_j)(x_i - x_k)}\cdot\frac{P}{(X - x_j)(x_j - x_i)(x_j - x_k)} = \frac{(X - x_k)\,P}{(x_i - x_j)(x_i - x_k)(x_j - x_i)(x_j - x_k)} = Q_{ij}\, P$$
for a polynomial $Q_{ij}$. Consequently, using the multiplicative property of $\mathrm{Sub}_T$,
$$E_i E_j = P_i(T) P_j(T) = Q_{ij}(T) P(T) = 0,$$
where we have used that $P(T) = 0$ by hypothesis. Now, to consider the case $i = j$, we start from the fact that $E_1 + E_2 + E_3 = \mathrm{Id}_{\mathbb{R}^4}$. Multiplying both sides by $E_i$, all the terms on the left hand side vanish except $E_i^2$ (since $E_i E_j = 0$ for $j \neq i$), and on the right hand side we get $E_i$. Hence $E_i^2 = E_i$, and consequently $E_i E_j = \delta_{ij} E_i$.
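For the diagonal example $T = A = \mathrm{diag}(2, 2, 3, 1)$, all three formulas of (2.6) can be verified numerically. Since the $E_i$ act diagonally here, it suffices to track the diagonals; the helpers are our own sketch.

```python
from fractions import Fraction

# Check (2.6) for T = A = diag(2, 2, 3, 1): E1+E2+E3 = Id, EiEj = delta_ij Ei,
# and 2E1 + 3E2 + E3 = A, working with diagonals only.

def peval(p, x):
    return sum(Fraction(a) * x**i for i, a in enumerate(p))

# Coefficients of P1, P2, P3 from Exercise 3.
polys = ([Fraction(-3), Fraction(4), Fraction(-1)],
         [Fraction(1), Fraction(-3, 2), Fraction(1, 2)],
         [Fraction(3), Fraction(-5, 2), Fraction(1, 2)])

diagA = (2, 2, 3, 1)
E = [[peval(p, x) for x in diagA] for p in polys]   # diagonals of E1, E2, E3

assert [sum(col) for col in zip(*E)] == [1, 1, 1, 1]        # E1+E2+E3 = Id
for i in range(3):
    for j in range(3):
        prod = [E[i][k] * E[j][k] for k in range(4)]
        assert prod == (E[i] if i == j else [0, 0, 0, 0])   # EiEj = delta_ij Ei
assert [2*E[0][k] + 3*E[1][k] + E[2][k] for k in range(4)] == list(diagA)
```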

Finally, to prove the last equality, we just need to prove that $\mathrm{Sub}_T(2P_1 + 3P_2 + P_3) = \mathrm{Sub}_T(X)$. But we can check that $2P_1 + 3P_2 + P_3 = X$, either by direct computations, or because these polynomials satisfy the same interpolation problem (degree lower or equal to $2$, with values $2$, $3$, $1$ at $2$, $3$, $1$ respectively). Applying $\mathrm{Sub}_T$, we get $T = 2E_1 + 3E_2 + E_3$.

2.5. Exercise 6. Suppose that $T$ is a linear transformation $F[X] \to F$ such that
$$T(fg) = T(f)T(g). \tag{2.7}$$
Show that $T = 0$ or that there exists $t \in F$ such that $T(f) = f(t)$ for all polynomials $f$.

The goal of the exercise is to explain that, just as to verify a linear identity it suffices to check it on a basis (or on any family that generates the space under the operations we are given, addition and scalar multiplication), in the case of an algebra it suffices to check an identity on a generating set (in the sense of algebras: a set such that any element can be expressed in terms of linear combinations of products of elements in the set).

First, we remark, using (2.7), that $T(1) = T(1 \times 1) = T(1)T(1) = T(1)^2$, so $T(1)$ is a root of $X^2 - X = X(X-1)$ in $F$. Hence:

(1) Either $T(1) = 0$. Then, for every $j \ge 1$, $T(X^j) = T(1 \times X^j) = T(1)T(X^j) = 0$. So $T$ coincides with the $0$ transformation on the basis $B = (1, X, X^2, \dots)$, hence on the whole space, and $T = 0$.

(2) Or $T(1) = 1$. Then, we let $t = T(X)$. Using (2.7), we get that $T(X^2) = T(X)T(X) = t^2$, and more generally that $T(X^j) = t^j$ for $j \ge 1$, and hence for all $j \ge 0$. In other words, we know exactly the image of all elements of the basis $B = (1, X, X^2, \dots)$. Then, using linearity, we get, for $P$ given as in (1.3), that
$$T(P) = T(a_0 + a_1 X + a_2 X^2 + \cdots + a_n X^n) = a_0 T(1) + a_1 T(X) + \cdots + a_n T(X^n) = a_0 + a_1 t + a_2 t^2 + \cdots + a_n t^n = P(t),$$
so $T(P) = P(t)$.

In conclusion, either $T = 0$, or there exists an element $t = T(X)$ such that $T(P) = P(t)$ for all polynomials $P$.

Brown University
E-mail address: Benoit.Pausader@math.brown.edu
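Conversely, evaluation at a point really does satisfy hypothesis (2.7). The sketch below (names and the sample point $t = 5$ are our own choices, not from the exercise) checks multiplicativity of the evaluation functional on a sample product.

```python
# Illustration of Exercise 6: the evaluation functional T(f) = f(t) is linear
# and satisfies T(fg) = T(f)T(g); here t = 5 is an arbitrary sample point.

t = 5

def T(p):
    """Evaluation functional on coefficient lists: [a0, a1, ...] -> P(t)."""
    return sum(a * t**i for i, a in enumerate(p))

def polymul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

f = [1, 2, 0, 3]   # 1 + 2X + 3X^3
g = [-1, 0, 4]     # -1 + 4X^2
assert T([1]) == 1                      # T(1) = 1
assert T(polymul(f, g)) == T(f) * T(g)  # T(fg) = T(f)T(g), as in (2.7)
```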