
# Taylor Polynomials

## Based on lecture notes by James McKernan

If $f \colon A \longrightarrow \mathbb{R}^m$ is a differentiable function, and we are given a point $p \in A$, one can use the derivative to write down the best linear approximation to $f$ at $p$. It is natural to wonder if one can do better using quadratic, or even higher degree, polynomial approximations.
Blackboard 1. Let $I \subset \mathbb{R}$ be an open interval and let $f \colon I \longrightarrow \mathbb{R}$ be a $C^k$-function. Given a point $a \in I$, let
$$P_{a,k}f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \dots + \frac{f^{(k)}(a)}{k!}(x-a)^k = \sum_{i=0}^{k} \frac{f^{(i)}(a)}{i!}(x-a)^i.$$
Then $P_{a,k}f(x)$ is the $k$th Taylor polynomial of $f$, centred at $a$. The remainder is the difference
$$R_{a,k}f(x) = f(x) - P_{a,k}f(x).$$
Note that we have chosen $P_{a,k}f$ so that the first $k$ derivatives of $P_{a,k}f$ at $a$ are precisely the same as those of $f$. In other words, the first $k$ derivatives at $a$ of the remainder are all zero. The remainder measures how well the Taylor polynomial approximates $f(x)$, and so it is very useful to estimate $R_{a,k}f(x)$.
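The definition above is easy to evaluate numerically. The sketch below (my own illustration, not part of the notes) builds $P_{0,5}f$ for $f(x) = e^x$, whose derivatives at $0$ are all $1$, and checks the remainder at $x = 0.5$:

```python
import math

def taylor_poly(derivs, a, x):
    # Evaluate P_{a,k} f(x) = sum_{i=0}^{k} f^(i)(a)/i! * (x - a)^i,
    # given derivs = [f(a), f'(a), ..., f^(k)(a)].
    return sum(d / math.factorial(i) * (x - a) ** i
               for i, d in enumerate(derivs))

# f(x) = e^x: every derivative at a = 0 equals 1.
k = 5
p = taylor_poly([1.0] * (k + 1), a=0.0, x=0.5)
r = math.exp(0.5) - p   # the remainder R_{0,5} f(0.5), roughly 2.3e-5
print(p, r)
```

Already for $k = 5$ the polynomial matches $e^{0.5}$ to about five decimal places.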
Theorem 2 (Taylor's Theorem with remainder). Let $I \subset \mathbb{R}$ be an open interval and let $f \colon I \longrightarrow \mathbb{R}$ be a $C^{k+1}$-function. Let $a$ and $b$ be two points in $I$.
Then there is a $\xi$ between $a$ and $b$ such that
$$R_{a,k}f(b) = \frac{f^{(k+1)}(\xi)}{(k+1)!}(b-a)^{k+1}.$$
Before proving this we will need:

Theorem 3 (Mean value theorem). If $f \colon [a,b] \longrightarrow \mathbb{R}$ is continuous, and differentiable at every point of $(a,b)$, then we may find $c \in (a,b)$ such that
$$f(b) - f(a) = f'(c)(b-a).$$
Proof of Theorem 2. If $a = b$ then take $\xi = a$. The result is clear in this case.
Otherwise, if we put
$$M = \frac{R_{a,k}f(b)}{(b-a)^{k+1}},$$
then
$$R_{a,k}f(b) = M(b-a)^{k+1}.$$
We want to show that there is some $\xi$ between $a$ and $b$ such that
$$M = \frac{f^{(k+1)}(\xi)}{(k+1)!}.$$
If we let
$$g(x) = R_{a,k}f(x) - M(x-a)^{k+1},$$
then
$$g^{(k+1)}(x) = f^{(k+1)}(x) - (k+1)!\,M.$$

Then we are looking for $\xi$ such that
$$g^{(k+1)}(\xi) = 0.$$
Now the first $k$ derivatives of $g$ at $a$ are all zero,
$$g^{(i)}(a) = 0 \qquad \text{for} \qquad 0 \le i \le k.$$
By choice of $M$,
$$g(b) = 0.$$
So by the mean value theorem, applied to $g(x)$, there is a $\xi_1$ between $a$ and $b$ such that
$$g'(\xi_1) = 0.$$
Again by the mean value theorem, applied to $g'(x)$, there is a $\xi_2$ between $a$ and $\xi_1$ such that
$$g''(\xi_2) = 0.$$
Continuing in this way, by induction we may find $\xi_i$, $1 \le i \le k+1$, between $a$ and $\xi_{i-1}$ such that
$$g^{(i)}(\xi_i) = 0.$$
Let $\xi = \xi_{k+1}$.

For example, take $f(x) = x^{1/2}$. Then
$$\begin{aligned}
f'(x) &= \frac{1}{2}\, x^{-1/2} \\
f''(x) &= -\frac{1}{2^2}\, x^{-3/2} \\
f'''(x) &= \frac{3}{2^3}\, x^{-5/2} \\
f^{(4)}(x) &= -\frac{1 \cdot 3 \cdot 5}{2^4}\, x^{-7/2} \\
f^{(5)}(x) &= \frac{1 \cdot 3 \cdot 5 \cdot 7}{2^5}\, x^{-9/2} \\
f^{(6)}(x) &= -\frac{1 \cdot 3 \cdot 5 \cdot 7 \cdot 9}{2^6}\, x^{-11/2},
\end{aligned}$$
and in general
$$f^{(k)}(x) = (-1)^{k-1} \frac{(2k-3)!!}{2^k}\, x^{-(2k-1)/2}.$$
At $a = 9/4$ we have $x^{-(2k-1)/2} = (3/2)^{-(2k-1)} = 2^{2k-1}/3^{2k-1}$, so
$$f^{(k)}(9/4) = (-1)^{k-1} \frac{(2k-3)!!}{2^k} \cdot \frac{2^{2k-1}}{3^{2k-1}} = (-1)^{k-1} \frac{(2k-3)!!\, 2^{k-1}}{3^{2k-1}}.$$
Let's write down the Taylor polynomial centred at $a = 9/4$:
$$P_{9/4,5}f(x) = f(9/4) + f'(9/4)(x-9/4) + \frac{f''(9/4)}{2}(x-9/4)^2 + \frac{f'''(9/4)}{3!}(x-9/4)^3 + \frac{f^{(4)}(9/4)}{4!}(x-9/4)^4 + \frac{f^{(5)}(9/4)}{5!}(x-9/4)^5.$$
So,
$$P_{9/4,5}f(x) = \frac{3}{2} + \frac{1}{3}(x-9/4) - \frac{1}{3^3}(x-9/4)^2 + \frac{2}{3^5}(x-9/4)^3 - \frac{1 \cdot 3 \cdot 5 \cdot 2^3}{4!\; 3^7}(x-9/4)^4 + \frac{1 \cdot 3 \cdot 5 \cdot 7 \cdot 2^4}{5!\; 3^9}(x-9/4)^5.$$

If we plug in $x = 2$, so that $x - 9/4 = -1/4$, we get an approximation to $f(2) = \sqrt{2}$:
$$P_{9/4,3}f(2) = \frac{3}{2} + \frac{1}{3}\left(-\frac{1}{4}\right) - \frac{1}{3^3}\left(\frac{1}{4}\right)^2 - \frac{2}{3^5}\left(\frac{1}{4}\right)^3 = \frac{10997}{7776} \approx 1.41422\dots.$$

By Theorem 2, for some $\xi$ between $2$ and $9/4$,
$$|R_{9/4,3}f(2)| = \frac{|f^{(4)}(\xi)|}{4!}\left(\frac{1}{4}\right)^4 = \frac{1 \cdot 3 \cdot 5}{2^4\; 4!}\, \xi^{-7/2} \left(\frac{1}{4}\right)^4 < \frac{1 \cdot 3 \cdot 5}{2^4\; 4!} \left(\frac{1}{2}\right)^{7/2} \left(\frac{1}{4}\right)^4 < 2 \times 10^{-5},$$
since $\xi > 2$. In fact
$$|R_{9/4,3}f(2)| = \left|\frac{10997}{7776} - \sqrt{2}\right| \approx 9.7 \times 10^{-6}.$$
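The arithmetic above is easy to re-check with exact rationals; the following is a verification sketch, not part of the notes:

```python
from fractions import Fraction as F
import math

# P_{9/4,3} f(2) = 3/2 + (1/3)(-1/4) - (1/3^3)(1/4)^2 - (2/3^5)(1/4)^3
p3 = (F(3, 2) + F(1, 3) * F(-1, 4)
      - F(1, 3**3) * F(1, 4)**2
      - F(2, 3**5) * F(1, 4)**3)
print(p3)                                # the fraction 10997/7776
err = abs(float(p3) - math.sqrt(2))      # actual size of the remainder
print(err)                               # about 9.7e-6
```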

Blackboard 4. Let $A \subset \mathbb{R}^n$ be an open subset which is convex (if $\vec{a}$ and $\vec{b}$ belong to $A$, then so does every point on the line segment between them). Suppose that $f \colon A \longrightarrow \mathbb{R}$ is $C^k$.
Given $\vec{a} \in A$, the $k$th Taylor polynomial of $f$ centred at $\vec{a}$ is
$$P_{\vec{a},k}f(\vec{x}) = f(\vec{a}) + \sum_{1 \le i \le n} \frac{\partial f}{\partial x_i}(\vec{a})(x_i - a_i) + \frac{1}{2} \sum_{1 \le i,j \le n} \frac{\partial^2 f}{\partial x_i \partial x_j}(\vec{a})(x_i - a_i)(x_j - a_j) + \dots + \frac{1}{k!} \sum_{1 \le i_1, i_2, \dots, i_k \le n} \frac{\partial^k f}{\partial x_{i_1} \partial x_{i_2} \dots \partial x_{i_k}}(\vec{a})(x_{i_1} - a_{i_1})(x_{i_2} - a_{i_2}) \dots (x_{i_k} - a_{i_k}).$$

The remainder is the difference
$$R_{\vec{a},k}f(\vec{x}) = f(\vec{x}) - P_{\vec{a},k}f(\vec{x}).$$
Theorem 5. Let $A \subset \mathbb{R}^n$ be an open subset which is convex. Suppose that $f \colon A \longrightarrow \mathbb{R}$ is $C^{k+1}$, and let $\vec{a}$ and $\vec{b}$ belong to $A$.
Then there is a vector $\vec{\xi}$ on the line segment between $\vec{a}$ and $\vec{b}$ such that
$$R_{\vec{a},k}f(\vec{b}) = \frac{1}{(k+1)!} \sum_{1 \le i_1, i_2, \dots, i_{k+1} \le n} \frac{\partial^{k+1} f}{\partial x_{i_1} \partial x_{i_2} \dots \partial x_{i_{k+1}}}(\vec{\xi})(b_{i_1} - a_{i_1})(b_{i_2} - a_{i_2}) \dots (b_{i_{k+1}} - a_{i_{k+1}}).$$

Proof. As $A$ is open and convex, we may find $\epsilon > 0$ so that the parametrised line
$$\vec{r} \colon (-\epsilon, 1+\epsilon) \longrightarrow \mathbb{R}^n \qquad \text{given by} \qquad \vec{r}(t) = \vec{a} + t(\vec{b} - \vec{a}),$$
is contained in $A$. Let
$$g \colon (-\epsilon, 1+\epsilon) \longrightarrow \mathbb{R},$$
be the composition of $\vec{r}(t)$ and $f(\vec{x})$.

Claim 6.
$$P_{0,k}g(t) = P_{\vec{a},k}f(\vec{r}(t)).$$
Proof of (6). This is just the chain rule;
$$g'(t) = \sum_{1 \le i \le n} \frac{\partial f}{\partial x_i}(\vec{r}(t))(b_i - a_i),$$
$$g''(t) = \sum_{1 \le i,j \le n} \frac{\partial^2 f}{\partial x_i \partial x_j}(\vec{r}(t))(b_i - a_i)(b_j - a_j),$$
and so on.

So the result follows by the one variable result.
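The chain-rule identity in the claim can be sanity-checked numerically. The sketch below uses a sample function $f(x,y) = x^2 y$ and endpoints of my own choosing (none of this is from the notes), comparing the chain-rule sum for $g'(t)$ against a finite-difference estimate:

```python
def f(x, y):
    return x * x * y          # sample f; df/dx = 2xy, df/dy = x^2

a, b = (1.0, 2.0), (3.0, 5.0)  # hypothetical endpoints of the segment

def r(t):                      # r(t) = a + t(b - a)
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def g(t):                      # g = f composed with r
    return f(*r(t))

t = 0.3
x, y = r(t)
# sum_i df/dx_i (r(t)) * (b_i - a_i):
chain = 2 * x * y * (b[0] - a[0]) + x * x * (b[1] - a[1])
# central difference approximation of g'(t):
fd = (g(t + 1e-6) - g(t - 1e-6)) / 2e-6
print(chain, fd)               # the two agree to high accuracy
```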

We can write out the first few terms of the Taylor series of $f$ and get something interesting. Let $\vec{h} = \vec{x} - \vec{a}$. Then
$$P_{\vec{a},2}f(\vec{x}) = f(\vec{a}) + \sum_{1 \le i \le n} \frac{\partial f}{\partial x_i}(\vec{a})\, h_i + \frac{1}{2} \sum_{1 \le i,j \le n} \frac{\partial^2 f}{\partial x_i \partial x_j}(\vec{a})\, h_i h_j.$$

The middle term is the same as multiplying the row vector formed by the gradient of $f$,
$$\nabla f(\vec{a}) = \left( \frac{\partial f}{\partial x_1}(\vec{a}), \frac{\partial f}{\partial x_2}(\vec{a}), \dots, \frac{\partial f}{\partial x_n}(\vec{a}) \right),$$
and the column vector given by $\vec{h}$. The last term is the same as multiplying the matrix with entries
$$\frac{\partial^2 f}{\partial x_i \partial x_j}(\vec{a}),$$
on the left by $\vec{h}$ and on the right by the column vector given by $\vec{h}$, and dividing by $2$.
The matrix
$$Hf(\vec{a}) = (h_{ij}) = \left( \frac{\partial^2 f}{\partial x_i \partial x_j}(\vec{a}) \right),$$
is called the Hessian of $f(\vec{x})$.
We then have
$$P_{\vec{a},2}f(\vec{x}) = f(\vec{a}) + \nabla f(\vec{a}) \cdot \vec{h} + \frac{1}{2}\, \vec{h} \cdot (Hf(\vec{a})\, \vec{h}).$$
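This gradient-plus-Hessian form is straightforward to evaluate directly. The closing sketch below (my own example, not from the notes) computes $P_{\vec{a},2}f$ for $f(x,y) = e^x \sin y$ centred at $\vec{a} = (0, \pi/2)$:

```python
import math

def f(x, y):
    return math.exp(x) * math.sin(y)   # sample function, my choice

a = (0.0, math.pi / 2)
# gradient at a: (e^x sin y, e^x cos y), which is (1, 0) here
grad = (math.exp(a[0]) * math.sin(a[1]), math.exp(a[0]) * math.cos(a[1]))
# Hessian at a: [[e^x sin y, e^x cos y], [e^x cos y, -e^x sin y]]
hess = ((math.exp(a[0]) * math.sin(a[1]),  math.exp(a[0]) * math.cos(a[1])),
        (math.exp(a[0]) * math.cos(a[1]), -math.exp(a[0]) * math.sin(a[1])))

def p2(x, y):
    h = (x - a[0], y - a[1])
    lin = grad[0] * h[0] + grad[1] * h[1]           # grad f(a) . h
    quad = sum(hess[i][j] * h[i] * h[j]
               for i in range(2) for j in range(2)) # h . (Hf(a) h)
    return f(*a) + lin + 0.5 * quad

x, y = 0.1, math.pi / 2 + 0.1
print(p2(x, y), f(x, y))   # the two agree to roughly |h|^3
```

Near $\vec{a}$ the quadratic approximation tracks $f$ to third order in $|\vec{h}|$, which is exactly what Theorem 5 with $k = 2$ predicts.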