
ANALYTICAL SOLUTION OF TAYLOR'S SERIES IN COMPUTATIONAL SCIENCE AND APPROXIMATION

To understand the applications of Taylor polynomials to real life, we should first grasp what these series can represent.

I will stray from the calculus aspect of Taylor series to make a key observation: a Taylor polynomial can turn functions involving complex concepts such as trigonometry and logarithms into expressions involving only multiplication, division, addition, and subtraction.

This is what makes it possible to calculate approximate values of almost every important function on calculators and computers.

Taylor Series Approximation


In some cases, in engineering or real-world technical problems, we are not interested in finding the exact solution of a problem. If we have a good enough approximation, we can consider the problem solved.

For example, if we want to compute the trigonometric function f(x) = sin(x) with a handheld calculator, we have two options:
 use the actual trigonometric function sin(x), if the calculator has the function embedded (available if it is a scientific calculator)
 use a polynomial as an approximation of the sin(x) function and compute the result with any calculator, or even by hand
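As a sketch of the second option: a truncated Taylor series for sin(x) computed with nothing but arithmetic (the function name `sin_approx` is ours, for illustration):

```python
import math

def sin_approx(x, terms=6):
    """Approximate sin(x) with the first few terms of its Taylor
    series about 0: x - x^3/3! + x^5/5! - ...
    Uses only multiplication, division, addition and subtraction."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

# Compare against the built-in sine near x = 1:
print(sin_approx(1.0))   # close to math.sin(1.0) = 0.8414709848...
print(math.sin(1.0))
```

With six terms (up to x^11/11!) the agreement near x = 1 is already far better than any handheld display can show.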

In the calculator era, we often don't realize how deeply nontrivial it is to get an arbitrarily good approximation for a number like e^e, or better yet, e^sin(√2). It turns out that, in the grand scheme of things, e^x is not a very nasty function at all. Since it is analytic, i.e. has a Taylor series, we can compute its values by summing the first few terms of its Taylor expansion at some point.

This makes plenty of sense for computing, say, e^(1/2):

    e^(1/2) = 1 + 1/2 + (1/2)²/2! + (1/2)³/3! + ...

is obviously going to converge very quickly: 1/(4!·2⁴) < 1/100 and 1/(5!·2⁵) < 1/1000, so we know, for instance, that we can get e^(1/2) to 2 decimal places by summing the first 5 terms of the Taylor expansion.
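A few lines of Python reproduce this estimate (an illustrative sketch, stdlib only):

```python
import math

x = 0.5
partial = 0.0
for n in range(5):                        # first 5 terms: n = 0 .. 4
    partial += x ** n / math.factorial(n)
    print(n, partial)

# The first neglected terms bound the error, as in the text:
print(1 / (math.factorial(4) * 2 ** 4))   # 1/384, already below 1/100
print(abs(math.exp(0.5) - partial))       # agrees to 2 decimal places
```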

1. Evaluating definite Integrals: Some functions have no antiderivative which can be expressed in terms of
familiar functions. This makes evaluating definite integrals of these functions difficult because the Fundamental
Theorem of Calculus cannot be used. If we have a polynomial representation of a function, we can oftentimes use that
to evaluate a definite integral.
2. Understanding asymptotic behaviour: Sometimes, a Taylor series can tell us useful information about how a
function behaves in an important part of its domain.
3. Understanding the growth of functions
4. Solving differential equations
    e^x = 1 + x + (1/2)x² + (1/6)x³ + (1/24)x⁴ + ...

    e^(ix) = 1 + ix − (1/2)x² − i(1/6)x³ + (1/24)x⁴ + ...

    = (1 − (1/2)x² + (1/24)x⁴ − ...) + i(x − (1/6)x³ + (1/120)x⁵ − ...)

    = cos x + i sin x

so

    e^(ix) = cos x + i sin x
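Euler's identity can be sanity-checked numerically with partial sums of the exponential series (the helper name `exp_series` is ours):

```python
import math

def exp_series(z, terms=20):
    """Partial sum of the Taylor series for e^z; works for complex z."""
    return sum(z ** n / math.factorial(n) for n in range(terms))

x = 1.3
lhs = exp_series(1j * x)                  # e^(ix) via the series
rhs = complex(math.cos(x), math.sin(x))   # cos x + i sin x
print(lhs, rhs)   # the two agree to many decimal places
```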
Polynomial functions are easy to understand, but complicated functions are not; many can only be captured by infinite polynomials. Such infinite polynomials become manageable when written as series: complicated functions are easily represented using Taylor series. This representation makes some properties of a function, such as its asymptotic behavior, easy to study. Differential equations, too, are made easier with Taylor series. The Taylor series is an essential theoretical tool in computational science and approximation.

INTRODUCTION
A Taylor series is an expansion of a function into an infinite series in a variable x, or into a finite series plus a remainder term [1]. The coefficients of the expansion, that is, of the successive terms of the series, involve the successive derivatives of the function. The function to be expanded must have an nth derivative on the interval of expansion. The series resulting from the Taylor expansion is referred to as the Taylor series. When the series is finite, the only concern is the magnitude of the remainder. Given the interval of expansion a ≤ ξ ≤ b, the Lagrangian form of the remainder is

    R_n = ((x − a)^n / n!) f^(n)(ξ)    (1)

where a is the reference point and f^(n)(ξ) is the nth derivative evaluated at the intermediate point ξ. When the expanded function is such that

    lim_{n→∞} R_n = 0,    (2)

its Taylor series becomes

    f(x) = Σ_{n=0}^{∞} ((x − a)^n / n!) f^(n)(a).    (3)

The Taylor series expresses the value of a function in terms of data at a single point. Setting D = d/dx, the derivative operator, the Taylor expansion becomes

    f(x + h) = Σ_{n=0}^{∞} (h^n D^n / n!) f(x) = e^(hD) f(x) [2]    (4)

The Taylor series can also be written for a function of a complex variable.
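A small numerical sketch of equation (4), taking f = sin so that the derivatives cycle through sin, cos, −sin, −cos (the helper name `shifted_sin` is ours):

```python
import math

def shifted_sin(x, h, terms=10):
    """Approximate sin(x + h) from derivatives of sin at x, using
    f(x + h) = sum_n (h^n D^n / n!) f(x), as in equation (4).
    The derivatives of sin cycle with period four."""
    derivs = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]
    return sum(h ** n * derivs[n % 4] / math.factorial(n)
               for n in range(terms))

print(shifted_sin(1.0, 0.2))   # close to math.sin(1.2)
print(math.sin(1.2))
```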
EVALUATING DEFINITE INTEGRALS
Some functions have no antiderivative which can be expressed in terms of familiar functions. This makes evaluating definite integrals of these functions difficult, because the fundamental theorem of calculus cannot be used. However, a series representation of the function eases things up. Suppose we want to evaluate the definite integral

    ∫₀¹ sin(x²) dx    (5)

This integrand has no antiderivative expressible in terms of familiar functions. However, we know how to find its Taylor series:

    sin t = t − t³/3! + t⁵/5! − t⁷/7! + ...    (6)

If we substitute t = x², then

    sin(x²) = x² − x⁶/3! + x¹⁰/5! − x¹⁴/7! + ...    (7)

The Taylor series can then be integrated term by term:

    ∫₀¹ sin(x²) dx = [x³/3 − x⁷/(7·3!) + x¹¹/(11·5!) − x¹⁵/(15·7!) + ...]₀¹    (8)

This is an alternating series, and by adding the terms, the series converges to 0.31026 [1].
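A quick numerical check of the series value (the midpoint-rule cross-check is our addition, not from [1]):

```python
import math

# Integrated series: 1/3 - 1/(7*3!) + 1/(11*5!) - 1/(15*7!) + ...
total = 0.0
for n in range(6):
    total += (-1) ** n / ((4 * n + 3) * math.factorial(2 * n + 1))
print(total)   # about 0.31026

# Cross-check with a simple midpoint rule on the same integral:
N = 100_000
mid = sum(math.sin(((k + 0.5) / N) ** 2) for k in range(N)) / N
print(mid)
```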

CONCLUSION
We have probed the complexity of the Taylor series and shown evidence of its extensive and very effective applications. Its effectiveness in error determination, function optimization, definite-integral evaluation, and limit determination is evidence that the Taylor series is an enormously useful tool in the physical sciences and in computational science, as well as an effective way of representing complicated functions.

1 Introduction

One way to deal with a nasty, analytically intractable function like tan(log_π x^37.3) is to approximate it with a benign one. There are lots of ways to do this, almost all of which¹ define "benign" as "polynomial." The reason for this is that polynomials are easy to work with: to write down, to take derivatives of, to make more or less complex, etc. In general, polynomials look like this:

    Pn(x) = a0 + a1x + a2x² + ... + an xⁿ = Σ_{i=0}^{n} ai xⁱ

where n is the "degree" of the polynomial: that is, the highest power of x that appears in it. (Polynomials can be in other variables besides x, of course, or even in multiple variables.)

A simple pictorial example (figure omitted here) shows how one could use two simple polynomials, a line and a parabola, to approximate another, more complicated function f(x) in the region near a specific point a. Note that both of these fitting functions are close to f(x) near the point a, that the quality of their fit to f(x) degrades as one moves away from a, and that the higher-degree polynomial (the parabola) appears to do a better job across a wider range.

¹One notable exception is Fourier decomposition, which uses combinations of sinusoids to approximate functions that are periodic in time.

A Taylor series is a specific mathematical recipe for constructing a polynomial Pn(x) of degree n that approximates a given function f(x) near a point a. Here is the formula:

    Pn(x) = f(a) + (x − a)f′(a) + ((x − a)²/2!) f′′(a) + ((x − a)³/3!) f′′′(a) + ... + ((x − a)ⁿ/n!) f⁽ⁿ⁾(a)

where f′(x), f′′(x), ..., f⁽ⁿ⁾(x) are the first, second, ..., nth derivatives of the function f(x) with respect to x. This formula is on the CSCI 3656 formula sheet.

As an example, consider f(x) = eˣ. We could fit a line to this function near some point x = a by building a degree-one Taylor series like so:

    P1(x) = f(a) + (x − a)f′(a) = eᵃ + (x − a)eᵃ

(This makes use of the fact that the derivative of e^(kx) with respect to x is k·e^(kx).) If we wanted the line that fits eˣ near a = 0, then, we'd simply plug in that value:

    P1(x) = e⁰ + (x − 0)e⁰ = 1 + x

(Please draw a rough plot of eˣ and 1 + x so this makes sense to you.) If we wanted to fit a line to a different region of eˣ instead, say near a = 1, we'd simply plug that a-value in:

    P1(x) = e¹ + (x − 1)e¹ = 2.718 + 2.718(x − 1) = 2.718x

(Again, please draw a picture of this for yourself.) If we wanted to fit a parabola to eˣ near a = 0, we'd have to use one more term in the Taylor series:

    P2(x) = e⁰ + (x − 0)e⁰ + ((x − 0)²/2!) e⁰ = 1 + x + x²/2

2 Why Bother

There are several reasons why you need to understand Taylor series and know how to build them:

• Because many, many numerical computation methods are based on these kinds of series.
• If you have a table of values of a function (e.g., eˣ for x = 0.1, 0.2, ..., 0.9), you can use Taylor series to calculate its value at some in-between point (e.g., e^0.21).
• If working with a function would unnecessarily complicate your life and you can get away with something simpler, a Taylor series is often a good thing to try. In many graphics applications, for instance, the true effects of light falling on a complicated surface are both horrendously expensive to compute and effectively invisible to the human eye, so practitioners approximate those surfaces with simple curves instead.
• The notion of a series whose "goodness" increases with successive terms will help you understand error in numerical methods.

3 Taylor Series and Error

A Taylor-series approximation, in general, is good near the point where you built it. If I use P2(x) = 1 + x + x²/2 to obtain an estimate for e⁰, for instance, my answer is perfect. (Thought question: is this always true?) As I move away from 0, the approximation gets worse:

    e⁰ = 1            P2(0) = 1
    e^0.1 = 1.1052    P2(0.1) = 1.1050
    e^0.5 = 1.6487    P2(0.5) = 1.6250
    e² = 7.389        P2(2) = 5
    e⁵ = 148.4        P2(5) = 18.5

The inherent error in a Taylor-series approximation is also related to the match between the complexity of the approximation and the complexity of the underlying function. If f(x) is a line and you fit a degree-one Taylor polynomial P1(x) to it, your answer will be exact, and not only at the point where you built the polynomial, but everywhere. If f(x) is a parabola, you'll need a degree-two Taylor polynomial for a perfect global fit. (If f(x) is a line and you try to fit a degree-two Taylor polynomial to it, what do you think will happen? Please try this and see.) In general, you need the degree n of Pn(x) to be at least as large as the "degree" of f(x) in order to get a perfect global fit, and the error in your results will depend on how much smaller n is than the "degree" of f(x). The quotes here are important; the word "degree" only makes sense for polynomials, and if f(x) were a polynomial, we wouldn't be bothering with Taylor series at all. Nonetheless, the generalized notion of the degree of a function as a way to assess (and compare) complexity is useful in developing intuition about how all of this works. We'll talk more about this later and make the associated ideas more clear.

Here is a formula that encodes all of those concepts. The error in an nth-degree Taylor-series polynomial approximation to a function f(x) is

    Rn(x) = f(x) − Pn(x) ≈ ((x − a)^(n+1)/(n + 1)!) f⁽ⁿ⁺¹⁾(c_x)

This formula is also on the CSCI 3656 formula sheet. With the exception of the c_x term, which is confusing and to which we'll come back below, this is pretty easy to pick apart and understand. The first piece of the formula, (x − a)^(n+1)/(n + 1)!, captures the "the fit is perfect close to a and degrades as you move away from a" idea. The second piece, the (n+1)st derivative of f, captures the "the degree of Pn(x) has to be at least as large as the 'degree' of f(x)" argument. (Thought experiment: what is the (n+1)st derivative of a degree-n polynomial?) And the whole thing looks suspiciously like the next term you'd have added to the series to get P_{n+1}(x). This is a common heuristic in numerical methods: you can estimate the error in your results using the next term that you would have added to make things better. This is treated in more depth in the CSCI 3656 Error Notes (Research Report on Curricula and Teaching CU-CS-CT004-02).

Consider the task of fitting a wavy function f(x) with a Taylor series near a point a (figure omitted here). If I wrote down a degree-two Taylor-series approximation to this function near the point a, the resulting P2(x) would be a good fit to f(x) in some regions and a bad fit in others, specifically in the wavy region near the right identified with the dashed line. Moreover, error estimates must always be pessimistic, so any calculation of the error in my P2(x) must be done using the worst possible conditions. That means that the Rn(x) equation should be evaluated at the worst possible point, that is, the x that makes it largest. The c_x term in the Rn(x) formula captures these ideas. It is a "worst case" factor: a placeholder that means "find the x that makes this worst and plug it in." For example, the Taylor-series fits to eˣ near x = 0 that are given above have the following error:

    Rn(x) = ((x − 0)^(n+1)/(n + 1)!) e^(c_x)

These fits were constructed at a = 0, but they are used at some other x, so in order to evaluate the error that they may contain, we need to find the "worst-case" value in the interval [0, x]. In this case, the Rn(x) function is monotonic upwards across this interval and the answer is relatively easy: the error is biggest at the right-hand end of the interval, so c_x = x and

    Rn(x) = ((x − 0)^(n+1)/(n + 1)!) eˣ

In general, it's not always easy to look at an Rn(x) function and see what x makes it biggest; a good general strategy is to evaluate it at the endpoints of the interval, check (using derivatives) to see if there's a maximum inside the interval and, if so, evaluate the Rn(x) function there too, then take the largest value of the whole lot. Of course, all of this is somewhat of an academic exercise, since these calculations presuppose that we know a fair bit about our answer. Nonetheless, this formula does have some practical utility. For instance, if we want to know how many terms to put into the Taylor series to obtain a particular level of accuracy, the Rn(x) formula can help. If I wanted to build a Taylor series for eˣ around x = 0 that was accurate to two decimal places out to x = 1, for instance, I would have to use enough terms for this to be true:

    Rn(1) = 2.718/(n + 1)! ≤ 0.01

which translates, in this case, to n ≥ 5. All of the material in this section is covered in much more detail in any calculus text.
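The worked error bound above is easy to reproduce numerically; a short sketch, stdlib Python only:

```python
import math

# P2(x) = 1 + x + x^2/2, the degree-two fit to e^x at a = 0:
def P2(x):
    return 1 + x + x ** 2 / 2

for x in (0, 0.1, 0.5, 2, 5):
    print(x, math.exp(x), P2(x))   # the fit degrades away from 0

# How many terms for 2-decimal accuracy on [0, 1]?
# Need R_n(1) = e / (n + 1)! <= 0.01, as in the text:
n = 0
while math.e / math.factorial(n + 1) > 0.01:
    n += 1
print(n)   # 5
```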

Taylor Series
A Taylor Series is an expansion of some function into an infinite sum of terms, where each term has a larger exponent like x, x², x³, etc.

Example: The Taylor Series for eˣ

eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + x⁵/5! + ...

says that the function eˣ

is equal to the infinite sum of terms 1 + x + x²/2! + x³/3! + ... etc.

(Note: ! is the Factorial Function.)

Does it really work? Let's try it:

Example: eˣ for x=2

Well, we already know the answer is e² = 2.71828... × 2.71828... = 7.389056...

But let's try more and more terms of our infinite series:

Terms                                         Result
1+2                                           3
1+2+2²/2!                                     5
1+2+2²/2!+2³/3!                               6.3333...
1+2+2²/2!+2³/3!+2⁴/4!                         7
1+2+2²/2!+2³/3!+2⁴/4!+2⁵/5!                   7.2666...
1+2+2²/2!+2³/3!+2⁴/4!+2⁵/5!+2⁶/6!             7.3555...
1+2+2²/2!+2³/3!+2⁴/4!+2⁵/5!+2⁶/6!+2⁷/7!       7.3809...
1+2+2²/2!+2³/3!+2⁴/4!+2⁵/5!+2⁶/6!+2⁷/7!+2⁸/8! 7.3873...

It starts out really badly, but it then gets better and better!

Try using "2^n/fact(n)" and n=0 to 20 in the Sigma Calculator and see what you get.
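The same experiment can be run in a few lines of Python (stdlib only):

```python
import math

total = 0.0
for n in range(9):                 # terms n = 0 .. 8, up to 2^8/8!
    total += 2 ** n / math.factorial(n)
    print(n, total)                # reproduces the table above

print(math.exp(2))                 # 7.389056..., the value approached
```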

Here are some common Taylor Series:

Taylor Series expansion                  As Sigma Notation

eˣ = 1 + x + x²/2! + x³/3! + ...         Σ_{n=0}^∞ xⁿ/n!

sin x = x − x³/3! + x⁵/5! − ...          Σ_{n=0}^∞ (−1)ⁿ x^(2n+1)/(2n+1)!

cos x = 1 − x²/2! + x⁴/4! − ...          Σ_{n=0}^∞ (−1)ⁿ x^(2n)/(2n)!

(There are many more.)

Approximations
We can use the first few terms of a Taylor Series to get an approximate value for a function.

Here we show better and better approximations for cos(x). The red line is cos(x), the blue is
the approximation (try plotting it yourself) :

1 − x²/2!

1 − x²/2! + x⁴/4!

1 − x²/2! + x⁴/4! − x⁶/6!

1 − x²/2! + x⁴/4! − x⁶/6! + x⁸/8!
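These truncations are easy to tabulate instead of plotting; a small sketch (the helper name `cos_approx` is ours):

```python
import math

def cos_approx(x, terms):
    """Partial sum of cos's Taylor series: 1 - x^2/2! + x^4/4! - ..."""
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

x = 1.0
for terms in (2, 3, 4, 5):     # the four truncations shown above
    print(terms, cos_approx(x, terms))
print(math.cos(x))             # 0.5403023...
```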

You can also see the Taylor Series in action at Euler's Formula for Complex Numbers.
What is this Magic?
How can we turn a function into a series of power terms like this?

Well, it isn't really magic. First we say we want to have this expansion:

f(x) = c0 + c1(x-a) + c2(x-a)² + c3(x-a)³ + ...

Then we choose a value "a", and work out the values c0 , c1 , c2 , ... etc

And it is done using derivatives (so we must know the derivative of our function)

Quick review: a derivative gives us the slope of a function at any point.

These basic derivative rules can help us:

 The derivative of a constant is 0


 The derivative of ax is a (example: the derivative of 2x is 2)
 The derivative of xⁿ is nxⁿ⁻¹ (example: the derivative of x³ is 3x²)

We will use the little mark ’ to mean "derivative of".

OK, let's start:

To get c0, choose x=a so all the (x-a) terms become zero, leaving us with:

f(a) = c0

So c0 = f(a)

To get c1, take the derivative of f(x):

f’(x) = c1 + 2c2(x-a) + 3c3(x-a)² + ...

With x=a all the (x-a) terms become zero:

f’(a) = c1

So c1 = f’(a)

To get c2, do the derivative again:

f’’(x) = 2c2 + 3×2×c3(x-a) + ...


With x=a all the (x-a) terms become zero:

f’’(a) = 2c2

So c2 = f’’(a)/2

In fact, a pattern is emerging. Each term is

 the next higher derivative ...


 ... divided by all the exponents so far multiplied together (for which we can use factorial
notation, for example 3! = 3×2×1)

And we get:

f(x) = f(a) + (f'(a)/1!)(x-a) + (f''(a)/2!)(x-a)² + (f'''(a)/3!)(x-a)³ + ...

Now we have a way of finding our own Taylor Series:

For each term: take the next derivative, divide by n!, multiply by (x-a)ⁿ
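The recipe can be sketched in code: given a list of derivative functions [f, f′, f′′, ...], build each coefficient f⁽ⁿ⁾(a)/n! and multiply by (x−a)ⁿ. (The helper name `taylor_poly` is ours, for illustration.)

```python
import math

def taylor_poly(derivs, a):
    """Build a Taylor polynomial about a from a list of callables
    [f, f', f'', ...]: the nth term is f^(n)(a)/n! * (x - a)^n."""
    coeffs = [d(a) / math.factorial(n) for n, d in enumerate(derivs)]
    def P(x):
        return sum(c * (x - a) ** n for n, c in enumerate(coeffs))
    return P

# Degree-4 fit to cos about a = 0, using its derivative cycle:
derivs = [math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t),
          math.sin, math.cos]
P4 = taylor_poly(derivs, 0.0)
print(P4(0.5), math.cos(0.5))   # close agreement near a = 0
```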

Example: Taylor Series for cos(x)

Start with:

f(x) = f(a) + (f'(a)/1!)(x-a) + (f''(a)/2!)(x-a)² + (f'''(a)/3!)(x-a)³ + ...

The derivative of cos is −sin, and the derivative of sin is cos, so:

 f(x) = cos(x)
 f'(x) = −sin(x)
 f''(x) = −cos(x)
 f'''(x) = sin(x)
 etc...

And we get:

cos(x) = cos(a) − (sin(a)/1!)(x-a) − (cos(a)/2!)(x-a)² + (sin(a)/3!)(x-a)³ + ...

Now put a=0, which is nice because cos(0)=1 and sin(0)=0:

cos(x) = 1 − (0/1!)(x-0) − (1/2!)(x-0)² + (0/3!)(x-0)³ + (1/4!)(x-0)⁴ + ...

Simplify:

cos(x) = 1 − x2/2! + x4/4! − ...

Try that for sin(x) yourself, it will help you to learn.


Or try it on another function of your choice.

The key thing is to know the derivatives of your function f(x).
