KEITH CONRAD
1. Introduction
The two basic concepts of calculus, differentiation and integration, are defined in terms of limits (Newton quotients and Riemann sums). In addition to these is a third fundamental limit process: infinite series. The label series is just another name for a sum. An infinite series is a “sum” with infinitely many terms, such as
(1.1) $1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots + \frac{1}{n^2} + \cdots.$
The idea of an infinite series is familiar from decimal expansions, for instance the expansion
$\pi = 3.14159265358979\ldots$
can be written as
$\pi = 3 + \frac{1}{10} + \frac{4}{10^2} + \frac{1}{10^3} + \frac{5}{10^4} + \frac{9}{10^5} + \frac{2}{10^6} + \frac{6}{10^7} + \frac{5}{10^8} + \frac{3}{10^9} + \frac{5}{10^{10}} + \frac{8}{10^{11}} + \cdots,$
so $\pi$ is an “infinite sum” of fractions. Decimal expansions like this show that an infinite series is not a paradoxical idea, although it may not be clear how to deal with nondecimal infinite series like (1.1) at the moment.
Infinite series provide two conceptual insights into the nature of the basic functions met in high school (rational functions, trigonometric and inverse trigonometric functions, exponential and logarithmic functions). First of all, these functions can be expressed in terms of infinite series, and in this way all these functions can be approximated by polynomials, which are the simplest kinds of functions. That simpler functions can be used as approximations to more complicated functions lies behind the method which calculators and computers use to calculate approximate values of functions. The second insight we will have using infinite series is the close relationship between functions which seem at first to be quite different, such as exponential and trigonometric functions. Two other applications we will meet are a proof by calculus that there are infinitely many primes and a proof that $e$ is irrational.
2. Definitions and basic examples
Before discussing infinite series we discuss finite ones. A finite series is a sum
$a_1 + a_2 + a_3 + \cdots + a_N,$
where the $a_i$'s are real numbers. In terms of $\Sigma$-notation, we write
$a_1 + a_2 + a_3 + \cdots + a_N = \sum_{n=1}^{N} a_n.$
For example,
(2.1) $\sum_{n=1}^{N} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{N}.$
The sum in (2.1) is called a harmonic sum, for instance
$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} = \frac{137}{60}.$
A very important class of finite series, more important than the harmonic ones, are the geometric series
(2.2) $1 + r + r^2 + \cdots + r^N = \sum_{n=0}^{N} r^n.$
An example is
$\sum_{n=0}^{10} \left(\frac{1}{2}\right)^n = \sum_{n=0}^{10} \frac{1}{2^n} = 1 + \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^{10}} = \frac{2047}{1024} \approx 1.999023.$
The geometric series (2.2) can be summed up exactly, as follows.
Theorem 2.1. When $r \neq 1$, the series (2.2) is
$1 + r + r^2 + \cdots + r^N = \frac{1 - r^{N+1}}{1 - r} = \frac{r^{N+1} - 1}{r - 1}.$
Proof. Let
$S_N = 1 + r + \cdots + r^N.$
Then
$rS_N = r + r^2 + \cdots + r^{N+1}.$
These sums overlap in $r + \cdots + r^N$, so subtracting $rS_N$ from $S_N$ gives
$(1 - r)S_N = 1 - r^{N+1}.$
When $r \neq 1$ we can divide and get the indicated formula for $S_N$.
Example 2.2. For any $N \ge 0$,
$1 + 2 + 2^2 + \cdots + 2^N = \frac{2^{N+1} - 1}{2 - 1} = 2^{N+1} - 1.$
Example 2.3. For any $N \ge 0$,
$1 + \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^N} = \frac{1 - (1/2)^{N+1}}{1 - 1/2} = 2 - \frac{1}{2^N}.$
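The closed forms in Theorem 2.1 and Examples 2.2 and 2.3 are easy to check numerically. Here is a quick Python sketch of ours (the function names `geom_sum` and `geom_closed_form` are our own, not from the text):

```python
def geom_sum(r, N):
    """Compute 1 + r + r^2 + ... + r^N directly, term by term."""
    return sum(r**n for n in range(N + 1))

def geom_closed_form(r, N):
    """Closed form (1 - r^(N+1)) / (1 - r) from Theorem 2.1 (valid for r != 1)."""
    return (1 - r**(N + 1)) / (1 - r)

# Example 2.2 with N = 10: 1 + 2 + ... + 2^10 = 2^11 - 1 = 2047
print(geom_sum(2, 10))
# Example 2.3 with N = 10: 1 + 1/2 + ... + 1/2^10 = 2 - 1/2^10
print(geom_sum(0.5, 10))
```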
It is sometimes useful to adjust the indexing on a sum, for instance
(2.3) $1 + 2 \cdot 2 + 3 \cdot 2^2 + 4 \cdot 2^3 + \cdots + 100 \cdot 2^{99} = \sum_{n=1}^{100} n2^{n-1} = \sum_{n=0}^{99} (n+1)2^n.$
Comparing the two $\Sigma$'s in (2.3), notice how the renumbering of the indices affects the expression being summed: if we subtract 1 from the bounds of summation on the $\Sigma$ then we add 1 to the index “on the inside” to maintain the same overall sum. As another example,
$3^2 + \cdots + 9^2 = \sum_{n=3}^{9} n^2 = \sum_{n=0}^{6} (n+3)^2.$
Whenever the index is shifted down (or up) by a certain amount on the $\Sigma$ it has to be shifted up (or down) by a compensating amount inside the expression being summed so that we are still adding the same numbers. This is analogous to the effect of an additive change of variables in an integral, e.g.,
$\int_0^1 (x+5)^2\,dx = \int_5^6 u^2\,du = \int_5^6 x^2\,dx,$
where $u = x + 5$ (and $du = dx$). When the variable inside the integral is changed by an additive constant, the bounds of integration are changed additively in the opposite direction.
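Index shifts like those in (2.3) can be sanity-checked numerically; this small Python sketch (our own, not from the text) compares the two $\Sigma$'s in (2.3) and in the squares example:

```python
# Left sum in (2.3): sum of n * 2^(n-1) for n = 1..100
left = sum(n * 2**(n - 1) for n in range(1, 101))

# Right sum in (2.3): sum of (n+1) * 2^n for n = 0..99
right = sum((n + 1) * 2**n for n in range(0, 100))

print(left == right)  # True: the two indexings describe the same sum

# Same idea for 3^2 + ... + 9^2
print(sum(n**2 for n in range(3, 10)) == sum((n + 3)**2 for n in range(0, 7)))  # True
```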
Now we define the meaning of infinite series, such as (1.1). The basic idea is that we look at the sum of the first $N$ terms, called a partial sum, and see what happens in the limit as $N \to \infty$.
Definition 2.4. Let $a_1, a_2, a_3, \ldots$ be an infinite sequence of real numbers. The infinite series $\sum_{n \ge 1} a_n$ is defined to be
$\sum_{n \ge 1} a_n = \lim_{N \to \infty} \sum_{n=1}^{N} a_n.$
If the limit exists in $\mathbf{R}$ then we say $\sum_{n \ge 1} a_n$ is convergent. If the limit does not exist or is $\pm\infty$ then $\sum_{n \ge 1} a_n$ is called divergent.
Notice that we are not really adding up all the terms in an infinite series at once. We only add up a finite number of the terms and then see how things behave in the limit as the (finite) number of terms tends to infinity: an infinite series is defined to be the limit of its sequence of partial sums.
Example 2.5. Using Example 2.3,
$\sum_{n \ge 0} \frac{1}{2^n} = \lim_{N \to \infty} \sum_{n=0}^{N} \frac{1}{2^n} = \lim_{N \to \infty} \left(2 - \frac{1}{2^N}\right) = 2.$
So $\sum_{n \ge 0} 1/2^n$ is a convergent series and its value is 2.
Building on this example we can compute exactly the value of any infinite geometric series.
Theorem 2.6. For $x \in \mathbf{R}$, the (infinite) geometric series $\sum_{n \ge 0} x^n$ converges if $|x| < 1$ and diverges if $|x| \ge 1$. If $|x| < 1$ then
$\sum_{n \ge 0} x^n = \frac{1}{1 - x}.$
Proof. For $N \ge 0$, Theorem 2.1 tells us
$\sum_{n=0}^{N} x^n = \begin{cases} (1 - x^{N+1})/(1 - x), & \text{if } x \neq 1, \\ N + 1, & \text{if } x = 1. \end{cases}$
When $|x| < 1$, $x^{N+1} \to 0$ as $N \to \infty$, so
$\sum_{n \ge 0} x^n = \lim_{N \to \infty} \sum_{n=0}^{N} x^n = \lim_{N \to \infty} \frac{1 - x^{N+1}}{1 - x} = \frac{1}{1 - x}.$
When $|x| > 1$ the numerator $1 - x^{N+1}$ does not converge as $N \to \infty$, so $\sum_{n \ge 0} x^n$ diverges. (Specifically, the limit is $\infty$ if $x > 1$ and the partial sums oscillate between values tending to $\infty$ and $-\infty$ if $x < -1$.)
What if $|x| = 1$? If $x = 1$ then the $N$th partial sum is $N + 1$, so the series $\sum_{n \ge 0} 1^n$ diverges (to $\infty$). If $x = -1$ then $\sum_{n=0}^{N} (-1)^n$ oscillates between 1 and 0, so again the series $\sum_{n \ge 0} (-1)^n$ is divergent.
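The three behaviors in Theorem 2.6 are visible in the partial sums themselves; here is a Python sketch of ours (the name `partial_sums` is our own):

```python
def partial_sums(x, N):
    """Return the partial sums S_0, S_1, ..., S_N of the geometric series in x."""
    sums, total = [], 0.0
    for n in range(N + 1):
        total += x**n
        sums.append(total)
    return sums

# |x| < 1: partial sums approach 1/(1 - x)
print(partial_sums(0.5, 30)[-1])   # close to 2
print(partial_sums(-0.5, 30)[-1])  # close to 2/3
# x = -1: partial sums oscillate between 1 and 0 and never settle
print(partial_sums(-1, 5))         # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```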
We will see that Theorem 2.6 is the fundamental example of an infinite series. Many important infinite series will be analyzed by comparing them to a geometric series (for a suitable choice of $x$).
If we start summing a geometric series not at 1, but at a higher power of $x$, then we can still get a simple closed formula for the series, as follows.
Corollary 2.7. If $|x| < 1$ and $m \ge 0$ then
$\sum_{n \ge m} x^n = x^m + x^{m+1} + x^{m+2} + \cdots = \frac{x^m}{1 - x}.$
Proof. The $N$th partial sum (for $N \ge m$) is
$x^m + x^{m+1} + \cdots + x^N = x^m(1 + x + \cdots + x^{N-m}).$
As $N \to \infty$, the sum inside the parentheses becomes the standard geometric series, whose value is $1/(1 - x)$. Multiplying by $x^m$ gives the asserted value.
Example 2.8. If $|x| < 1$ then $x + x^2 + x^3 + \cdots = x/(1 - x)$.
A divergent geometric series can diverge in different ways: the partial sums may tend to $\infty$, or tend to both $\infty$ and $-\infty$, or oscillate between 1 and 0. The label “divergent series” does not always mean the partial sums tend to $\infty$. All “divergent” means is “not convergent.” Of course in a particular case we may know the partial sums do tend to $\infty$, and then we would say “the series diverges to $\infty$.”
The convergence of a series is determined by the behavior of the terms $a_n$ for large $n$. If we change (or omit) any initial set of terms in a series we do not change the convergence or divergence of the series.
Here is the most basic general feature of convergent infinite series.
Theorem 2.9. If the series $\sum_{n \ge 1} a_n$ converges then $a_n \to 0$ as $n \to \infty$.
Proof. Let $S_N = a_1 + a_2 + \cdots + a_N$ and let $S = \sum_{n \ge 1} a_n$ be the limit of the partial sums $S_N$ as $N \to \infty$. Then as $N \to \infty$,
$a_N = S_N - S_{N-1} \to S - S = 0.$
While Theorem 2.9 is formulated for convergent series, its main importance is as a “divergence test”: if the general term in an infinite series does not tend to 0 then the series diverges. For example, Theorem 2.9 gives another reason that a geometric series $\sum_{n \ge 0} x^n$ diverges if $|x| \ge 1$, because in this case $x^n$ does not tend to 0: $|x^n| = |x|^n \ge 1$ for all $n$.
It is an unfortunate fact of life that the converse of Theorem 2.9 is generally false: if $a_n \to 0$ we have no guarantee that $\sum_{n \ge 1} a_n$ converges. Here is the standard illustration.
Example 2.10. Consider the harmonic series $\sum_{n \ge 1} \frac{1}{n}$. Although the general term $\frac{1}{n}$ tends to 0, it turns out that
$\sum_{n \ge 1} \frac{1}{n} = \infty.$
To show this we will compare $\sum_{n=1}^{N} 1/n$ with $\int_1^N dt/t = \log N$. When $n \le t$, $1/t \le 1/n$. Integrating this inequality from $n$ to $n + 1$ gives $\int_n^{n+1} dt/t \le 1/n$, so
$\sum_{n=1}^{N} \frac{1}{n} \ge \sum_{n=1}^{N} \int_n^{n+1} \frac{dt}{t} = \int_1^{N+1} \frac{dt}{t} = \log(N + 1).$
Therefore the $N$th partial sum of the harmonic series is bounded below by $\log(N + 1)$. Since $\log(N + 1) \to \infty$ as $N \to \infty$, so does the $N$th harmonic sum, so the harmonic series diverges.
We will see later that the lower bound $\log(N + 1)$ for the $N$th harmonic sum is rather sharp, so the harmonic series diverges “slowly” since the logarithm function diverges slowly.
The divergence of the harmonic series is not just a counterexample to the converse of Theorem
2.9, but can be exploited in other contexts. Let’s use it to prove something about prime numbers!
Theorem 2.11. There are infinitely many prime numbers.
Proof. (Euler) We will argue by contradiction: assuming there are only finitely many prime numbers we will contradict the divergence of the harmonic series.
Consider the product of $1/(1 - 1/p)$ as $p$ runs through all prime numbers. Sums are denoted with a $\Sigma$ and products are denoted with a $\Pi$ (capital pi), so our product is written as
$\prod_p \frac{1}{1 - 1/p} = \frac{1}{1 - 1/2} \cdot \frac{1}{1 - 1/3} \cdot \frac{1}{1 - 1/5} \cdots.$
Since we assume there are finitely many primes, this is a finite product. Now expand each factor into a geometric series:
$\prod_p \frac{1}{1 - 1/p} = \prod_p \left(1 + \frac{1}{p} + \frac{1}{p^2} + \frac{1}{p^3} + \cdots\right).$
Each geometric series is greater than any of its partial sums. Pick any integer $N \ge 2$ and truncate each geometric series at the $N$th term, giving the inequality
(2.4) $\prod_p \frac{1}{1 - 1/p} > \prod_p \left(1 + \frac{1}{p} + \frac{1}{p^2} + \frac{1}{p^3} + \cdots + \frac{1}{p^N}\right).$
On the right side of (2.4) we have a finite product of finite sums, so we can compute this using the distributive law (“super FOIL” in the terminology of high school algebra). That is, take one term from each factor, multiply them together, and add up all these products. What numbers do we get in this way on the right side of (2.4)?
A product of reciprocal integers is a reciprocal integer, so we obtain a sum of $1/n$ as $n$ runs over certain positive integers. Specifically, the $n$'s we meet are those whose prime factorization doesn't
involve any prime with exponent greater than $N$. Indeed, for any positive integer $n > 1$, if we factor $n$ into primes as
$n = p_1^{e_1} p_2^{e_2} \cdots p_r^{e_r}$
then $e_i < n$ for all $i$. (Since $p_i \ge 2$, $n \ge 2^{e_i}$, and $2^m > m$ for any $m \ge 0$, so $n > e_i$.) Thus if $n \le N$ then any of the prime factors of $n$ appears in $n$ with exponent less than $n \le N$, so $1/n$ is a number which shows up when multiplying out (2.4). Here we need to know we are working with a product over all primes.
Since $1/n$ occurs in the expansion of the right side of (2.4) for $n \le N$ and all terms in the expansion are positive, we have the lower bound estimate
(2.5) $\prod_p \frac{1}{1 - 1/p} > \sum_{n=1}^{N} \frac{1}{n}.$
Since the left side is independent of $N$ while the right side tends to $\infty$ as $N \to \infty$, this inequality is a contradiction, so there must be infinitely many primes.
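An inequality like (2.5) can be checked numerically for any finite set of primes. The Python sketch below (our own illustration, with names of our choosing) uses the primes up to 10 and $N = 10$: every $n \le 10$ factors entirely into these primes, so the product on the left must exceed the harmonic sum on the right.

```python
from fractions import Fraction

primes = [2, 3, 5, 7]  # all primes up to 10

# Left side of (2.5): product of 1/(1 - 1/p) over these primes
product = Fraction(1)
for p in primes:
    product *= 1 / (1 - Fraction(1, p))

# Right side: the harmonic sum for N = 10; every n <= 10 has all its
# prime factors among `primes`, so 1/n appears when the product is expanded
harmonic = sum(Fraction(1, n) for n in range(1, 11))

print(product > harmonic)               # True: (2.5) holds here
print(float(product), float(harmonic))  # about 4.375 versus about 2.929
```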
We end this section with some algebraic properties of convergent series: termwise addition and
scaling.
Theorem 2.12. If $\sum_{n \ge 1} a_n$ and $\sum_{n \ge 1} b_n$ converge then $\sum_{n \ge 1} (a_n + b_n)$ converges and
$\sum_{n \ge 1} (a_n + b_n) = \sum_{n \ge 1} a_n + \sum_{n \ge 1} b_n.$
If $\sum_{n \ge 1} a_n$ converges then for any number $c$ the series $\sum_{n \ge 1} ca_n$ converges and
$\sum_{n \ge 1} ca_n = c \sum_{n \ge 1} a_n.$
Proof. Let $S_N = \sum_{n=1}^{N} a_n$ and $T_N = \sum_{n=1}^{N} b_n$. Set $S = \sum_{n \ge 1} a_n$ and $T = \sum_{n \ge 1} b_n$, so $S_N \to S$ and $T_N \to T$ as $N \to \infty$. Then by known properties of limits of sequences, $S_N + T_N \to S + T$ and $cS_N \to cS$ as $N \to \infty$. These are the conclusions of the theorem.
Example 2.13. We have
$\sum_{n \ge 0} \left(\frac{1}{2^n} + \frac{1}{3^n}\right) = \frac{1}{1 - 1/2} + \frac{1}{1 - 1/3} = \frac{7}{2}$
and
$\sum_{n \ge 0} \frac{5}{4^n} = 5 \sum_{n \ge 0} \frac{1}{4^n} = 5 \cdot \frac{1}{1 - 1/4} = \frac{20}{3}.$
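Both values in Example 2.13 are easy to confirm with truncated sums; a quick Python sketch of ours (the helper name `tail_sum` is hypothetical):

```python
def tail_sum(term, N=60):
    """Approximate an infinite series by its Nth partial sum."""
    return sum(term(n) for n in range(N + 1))

mixed = tail_sum(lambda n: 1 / 2**n + 1 / 3**n)   # should be near 7/2
scaled = tail_sum(lambda n: 5 / 4**n)             # should be near 20/3
print(mixed, scaled)
```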
3. Positive series
In this section we describe several ways to show that an infinite series $\sum_{n \ge 1} a_n$ converges when $a_n > 0$ for all $n$.
There is something special about series where all the terms are positive. What is it? The sequence of partial sums is increasing: $S_{N+1} = S_N + a_{N+1}$, so $S_{N+1} > S_N$ for every $N$. An increasing sequence has two possible limiting behaviors: it converges or it tends to $\infty$. (That is, divergence can only happen in one way: the partial sums tend to $\infty$.) We can tell if an increasing sequence converges by showing it is bounded above. Then it converges and its limit is an upper bound on the whole sequence. If the terms in an increasing sequence are not bounded above then the sequence is unbounded and tends to $\infty$. Compare this to the sequence of partial sums $\sum_{n=0}^{N} (-1)^n$, which are bounded but do not converge (they oscillate between 1 and 0).
Let's summarize this property of positive series. If $a_n > 0$ for every $n$ and there is a constant $C$ such that all partial sums satisfy $\sum_{n=1}^{N} a_n < C$, then $\sum_{n \ge 1} a_n$ converges and $\sum_{n \ge 1} a_n \le C$. On the other hand, if there is no upper bound on the partial sums then $\sum_{n \ge 1} a_n = \infty$. We will use this over and over again to explain various convergence tests for positive series.
Theorem 3.1 (Integral Test). Suppose $a_n = f(n)$ where $f$ is a continuous decreasing positive function. Then $\sum_{n \ge 1} a_n$ converges if and only if $\int_1^\infty f(x)\,dx$ converges.
Proof. We will compare the partial sum $\sum_{n=1}^{N} a_n$ and the “partial integral” $\int_1^N f(x)\,dx$. Since $f(x)$ is a decreasing function,
$n \le x \le n + 1 \Longrightarrow a_{n+1} = f(n+1) \le f(x) \le f(n) = a_n.$
Integrating these inequalities between $n$ and $n + 1$ gives
(3.1) $a_{n+1} \le \int_n^{n+1} f(x)\,dx \le a_n,$
since the interval $[n, n+1]$ has length 1. Therefore
(3.2) $\sum_{n=1}^{N} a_n \ge \sum_{n=1}^{N} \int_n^{n+1} f(x)\,dx = \int_1^{N+1} f(x)\,dx$
and
(3.3) $\sum_{n=1}^{N} a_n = a_1 + \sum_{n=2}^{N} a_n \le a_1 + \sum_{n=2}^{N} \int_{n-1}^{n} f(x)\,dx = f(1) + \int_1^{N} f(x)\,dx.$
Combining (3.2) and (3.3),
(3.4) $\int_1^{N+1} f(x)\,dx \le \sum_{n=1}^{N} a_n \le f(1) + \int_1^{N} f(x)\,dx.$
Assume that $\int_1^\infty f(x)\,dx$ converges. Since $f(x)$ is a positive function,
$\int_1^N f(x)\,dx < \int_1^\infty f(x)\,dx.$
Thus by the second inequality in (3.4),
$\sum_{n=1}^{N} a_n < f(1) + \int_1^\infty f(x)\,dx,$
so the partial sums are bounded above. Therefore the series $\sum_{n \ge 1} a_n$ converges.
Conversely, assume $\sum_{n \ge 1} a_n$ converges. Then the partial sums $\sum_{n=1}^{N} a_n$ are bounded above (by the whole series $\sum_{n \ge 1} a_n$). Every partial integral of $\int_1^\infty f(x)\,dx$ is bounded above by one of these partial sums according to the first inequality of (3.4), so the partial integrals are bounded above and thus the improper integral $\int_1^\infty f(x)\,dx$ converges.
Example 3.2. We were implicitly using the integral test with $f(x) = 1/x$ when we proved the harmonic series diverged in Example 2.10. For the harmonic series, (3.4) becomes
(3.5) $\log(N + 1) \le \sum_{n=1}^{N} \frac{1}{n} \le 1 + \log N.$
Thus the harmonic series diverges “logarithmically.” The following table illustrates these bounds when $N$ is a power of 10.

N        $\log(N+1)$    $\sum_{n=1}^{N} 1/n$    $1 + \log N$
10       2.398          2.929                   3.303
100      4.615          5.187                   5.605
1000     6.909          7.485                   7.908
10000    9.210          9.788                   10.210

Table 1
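Table 1's entries can be regenerated in a few lines of Python (a sketch of ours, not part of the original notes; the helper name `harmonic` is our own):

```python
import math

def harmonic(N):
    """Nth harmonic sum 1 + 1/2 + ... + 1/N."""
    return sum(1 / n for n in range(1, N + 1))

# Each harmonic sum sits between the two logarithmic bounds of (3.5)
for N in (10, 100, 1000, 10000):
    lower, upper = math.log(N + 1), 1 + math.log(N)
    print(f"{N:>6} {lower:8.3f} {harmonic(N):8.3f} {upper:8.3f}")
```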
Example 3.3. Generalizing the harmonic series, consider
$\sum_{n \ge 1} \frac{1}{n^p} = 1 + \frac{1}{2^p} + \frac{1}{3^p} + \frac{1}{4^p} + \cdots$
where $p > 0$. This is called a $p$-series. Its convergence is related to the integral $\int_1^\infty dx/x^p$. Since the integrand $1/x^p$ has an antiderivative that can be computed explicitly, we can compute the improper integral:
$\int_1^\infty \frac{dx}{x^p} = \begin{cases} \dfrac{1}{p-1}, & \text{if } p > 1, \\ \infty, & \text{if } 0 < p \le 1. \end{cases}$
Therefore $\sum_{n \ge 1} 1/n^p$ converges when $p > 1$ and diverges when $0 < p \le 1$. For example, $\sum_{n \ge 1} 1/n^2$ converges and $\sum_{n \ge 1} 1/\sqrt{n}$ diverges.
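The contrast between $p > 1$ and $p \le 1$ shows up quickly in the partial sums; here is a small Python illustration of ours (the name `p_series_partial` is hypothetical):

```python
def p_series_partial(p, N):
    """Partial sum of the p-series: 1/1^p + 1/2^p + ... + 1/N^p."""
    return sum(n**-p for n in range(1, N + 1))

# p = 2: going from 1000 terms to 100000 terms barely moves the sum
print(p_series_partial(2, 1000), p_series_partial(2, 100000))
# p = 1/2: the partial sums keep growing without bound
print(p_series_partial(0.5, 1000), p_series_partial(0.5, 100000))
```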
Example 3.4. The series $\sum_{n \ge 2} \frac{1}{n \log n}$ diverges by the integral test since
$\int_2^\infty \frac{dx}{x \log x} = \log \log x \,\Big|_2^\infty = \infty.$
(We stated the integral test with a lower bound of 1 but it obviously adapts to series where the first index of summation is something other than 1, as in this example.)
Example 3.5. The series $\sum_{n \ge 2} \frac{1}{n(\log n)^2}$, with terms just slightly smaller than in the previous example, converges. Using the integral test,
$\int_2^\infty \frac{dx}{x(\log x)^2} = -\frac{1}{\log x} \,\Big|_2^\infty = \frac{1}{\log 2},$
which is finite.
Because the convergence of a positive series is intimately tied up with the boundedness of its
partial sums, we can determine the convergence or divergence of one positive series from another
if we have an inequality in one direction between the terms of the two series. This leads to the
following convergence test.
Theorem 3.6 (Comparison test). Let $0 < a_n \le b_n$ for all $n$. If $\sum_{n \ge 1} b_n$ converges then $\sum_{n \ge 1} a_n$ converges. If $\sum_{n \ge 1} a_n$ diverges then $\sum_{n \ge 1} b_n$ diverges.
Proof. Since $a_n \le b_n$,
(3.6) $\sum_{n=1}^{N} a_n \le \sum_{n=1}^{N} b_n.$
If $\sum_{n \ge 1} b_n$ converges then the partial sums on the right side of (3.6) are bounded, so the partial sums on the left side are bounded, and therefore $\sum_{n \ge 1} a_n$ converges. If $\sum_{n \ge 1} a_n$ diverges then $\sum_{n=1}^{N} a_n \to \infty$, so $\sum_{n=1}^{N} b_n \to \infty$, so $\sum_{n \ge 1} b_n$ diverges.
Example 3.7. If $0 \le x < 1$ then $\sum_{n \ge 1} x^n/n$ converges since $x^n/n \le x^n$, so the series is bounded above by the geometric series $\sum_{n \ge 1} x^n$, which converges. Notice that, unlike the geometric series, we are not giving any kind of closed form expression for $\sum_{n \ge 1} x^n/n$. All we are saying is that the series converges.
The series $\sum_{n \ge 1} x^n/n$ does not converge for $x \ge 1$: if $x = 1$ it is the harmonic series, and if $x > 1$ then the general term $x^n/n$ tends to $\infty$ (not 0), so the series diverges to $\infty$.
Example 3.8. The integral test is a special case of the comparison test, where we compare $a_n$ with $\int_n^{n+1} f(x)\,dx$ and with $\int_{n-1}^{n} f(x)\,dx$.
Just as convergence of a series does not depend on the behavior of the initial terms of the series, what is important in the comparison test is an inequality $a_n \le b_n$ for large $n$. If this inequality happens not to hold for small $n$ but does hold for all large $n$ then the comparison test still works. The following two examples use this “extended comparison test” when the form of the comparison test in Theorem 3.6 does not strictly apply (because the inequality $a_n \le b_n$ doesn't hold for small $n$).
Example 3.9. If $0 \le x < 1$ then $\sum_{n \ge 1} nx^n$ converges. This is obvious for $x = 0$. We can't get convergence for $0 < x < 1$ by a comparison with $\sum_{n \ge 1} x^n$ because the inequality goes the wrong way: $nx^n > x^n$, so convergence of $\sum_{n \ge 1} x^n$ doesn't help us show convergence of $\sum_{n \ge 1} nx^n$. However, let's write
$nx^n = nx^{n/2} x^{n/2}.$
Since $0 < x < 1$, $nx^{n/2} \to 0$ as $n \to \infty$, so $nx^{n/2} \le 1$ for large $n$ (depending on the specific value of $x$, e.g., for $x = 1/2$ the inequality is false at $n = 3$ but is correct for $n \ge 4$). Thus for all large $n$ we have $nx^n \le x^{n/2}$, so $\sum_{n \ge 1} nx^n$ converges by comparison with the geometric series $\sum_{n \ge 1} x^{n/2} = \sum_{n \ge 1} (\sqrt{x})^n$. We have not obtained any kind of closed formula for $\sum_{n \ge 1} nx^n$, however.
If $x \ge 1$ then $\sum_{n \ge 1} nx^n$ diverges since $nx^n \ge 1$ for all $n$, so the general term does not tend to 0.
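The claim about where $nx^{n/2} \le 1$ starts to hold for $x = 1/2$ is easy to verify directly; a Python check of ours (the name `coefficient` is hypothetical):

```python
def coefficient(n, x=0.5):
    """The factor n * x^(n/2) that Example 3.9 compares against 1."""
    return n * x**(n / 2)

# For x = 1/2 the inequality n * x^(n/2) <= 1 fails at n = 3 ...
print(coefficient(3))  # about 1.06, just above 1
# ... but holds from n = 4 on, and the factor keeps shrinking
print(coefficient(4), coefficient(10), coefficient(50))
```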
Example 3.10. If $x \ge 0$ then the series $\sum_{n \ge 0} x^n/n!$ converges. This is obvious for $x = 0$. For $x > 0$ we will use the comparison test and the lower bound estimate $n! > n^n/e^n$. Let's recall that this lower bound on $n!$ comes from Euler's integral formula
$n! = \int_0^\infty t^n e^{-t}\,dt > \int_n^\infty t^n e^{-t}\,dt > n^n \int_n^\infty e^{-t}\,dt = \frac{n^n}{e^n}.$
Then for positive $x$,
(3.7) $\frac{x^n}{n!} < \frac{x^n}{n^n/e^n} = \left(\frac{ex}{n}\right)^n.$
Here $x$ is fixed and we should think about what happens as $n$ grows. The ratio $ex/n$ tends to 0 as $n \to \infty$, so for large $n$ we have
(3.8) $\frac{ex}{n} < \frac{1}{2}$
(any positive number less than 1 will do here; we just use 1/2 for concreteness). Raising both sides of (3.8) to the $n$th power,
$\left(\frac{ex}{n}\right)^n < \frac{1}{2^n}.$
Comparing this to (3.7) gives $x^n/n! < 1/2^n$ for large $n$, so the terms of the series $\sum_{n \ge 0} x^n/n!$ drop off faster than the terms of the convergent geometric series $\sum_{n \ge 0} 1/2^n$ when we go out far enough. The extended comparison test now implies convergence of $\sum_{n \ge 0} x^n/n!$ if $x > 0$.
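The lower bound $n! > n^n/e^n$ used above can be spot-checked numerically; a short Python sketch of ours (the name `lower_bound` is our own):

```python
import math

def lower_bound(n):
    """The estimate n^n / e^n coming from Euler's integral formula."""
    return n**n / math.e**n

# n! exceeds n^n / e^n for every n checked
for n in (1, 5, 10, 20):
    print(n, math.factorial(n), lower_bound(n))
```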
Example 3.11. The series $\sum_{n \ge 1} \frac{1}{5n^2 + n}$ converges since
$\frac{1}{5n^2 + n} < \frac{1}{5n^2}$
and $\sum_{n \ge 1} \frac{1}{5n^2}$ converges by Example 3.3.
Let's try to apply the comparison test to $\sum_{n \ge 1} 1/(5n^2 - n)$. The rate of growth of the $n$th term is like $1/(5n^2)$, but the inequality now goes the wrong way:
$\frac{1}{5n^2} < \frac{1}{5n^2 - n}$
for all $n$. So we can't quite use the comparison test to show $\sum_{n \ge 1} 1/(5n^2 - n)$ converges (which it does). The following limiting form of the comparison test will help here.
Theorem 3.12 (Limit comparison test). Let $a_n, b_n > 0$ and suppose the ratio $a_n/b_n$ has a positive limit. Then $\sum_{n \ge 1} a_n$ converges if and only if $\sum_{n \ge 1} b_n$ converges.
Proof. Let $L = \lim_{n \to \infty} a_n/b_n$. For all large $n$,
$\frac{L}{2} < \frac{a_n}{b_n} < 2L,$
so
(3.9) $\frac{L}{2} b_n < a_n < 2L b_n$
for large $n$. Since $L/2$ and $2L$ are positive, the convergence of $\sum_{n \ge 1} b_n$ is the same as convergence of $\sum_{n \ge 1} (L/2)b_n$ and $\sum_{n \ge 1} (2L)b_n$. Therefore the first inequality in (3.9) and the comparison test show convergence of $\sum_{n \ge 1} a_n$ implies convergence of $\sum_{n \ge 1} b_n$. The second inequality in (3.9) and the comparison test show convergence of $\sum_{n \ge 1} b_n$ implies convergence of $\sum_{n \ge 1} a_n$. (Although (3.9) may not be true for all $n$, it is true for large $n$, and that suffices to apply the extended comparison test.)
The way to use the limit comparison test is to replace terms in a series by simpler terms which
grow at the same rate (at least up to a scaling factor). Then analyze the series with those simpler
terms.
Example 3.13. We return to $\sum_{n \ge 1} \frac{1}{5n^2 - n}$. As $n \to \infty$, the $n$th term grows like $1/(5n^2)$:
$\frac{1/(5n^2 - n)}{1/(5n^2)} \to 1.$
Since $\sum_{n \ge 1} 1/(5n^2)$ converges, so does $\sum_{n \ge 1} 1/(5n^2 - n)$. In fact, if $a_n$ is any sequence whose growth is like $1/n^2$ up to a scaling factor then $\sum_{n \ge 1} a_n$ converges. In particular, this is a simpler way to settle the convergence in Example 3.11 than by using the inequalities in Example 3.11.
Example 3.14. To determine if
$\sum_{n \ge 1} \sqrt{\frac{n^3 - n^2 + 5n}{4n^6 + n}}$
converges we replace the $n$th term with the expression
$\sqrt{\frac{n^3}{4n^6}} = \frac{1}{2n^{3/2}},$
which grows at the same rate. The series $\sum_{n \ge 1} 1/(2n^{3/2})$ converges, so the original series converges.
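“Grows at the same rate” in Example 3.14 means the ratio of the two expressions tends to 1, which we can watch numerically (a Python sketch of ours; the names `term` and `simple_term` are hypothetical):

```python
import math

def term(n):
    """The nth term of the series in Example 3.14."""
    return math.sqrt((n**3 - n**2 + 5 * n) / (4 * n**6 + n))

def simple_term(n):
    """The comparison term 1 / (2 n^(3/2))."""
    return 1 / (2 * n**1.5)

# The ratio term(n) / simple_term(n) approaches 1 as n grows
for n in (10, 100, 10000):
    print(n, term(n) / simple_term(n))
```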
The limit comparison test underlies an important point: the convergence of a series $\sum_{n \ge 1} a_n$ is not assured just by knowing if $a_n \to 0$, but it is assured if $a_n \to 0$ rapidly enough. That is, the rate at which $a_n$ tends to 0 is often (but not always) a means of explaining the convergence of $\sum_{n \ge 1} a_n$. The difference between someone who has an intuitive feeling for convergence of series and someone who can only determine convergence on a case-by-case basis using memorized rules in a mechanical way is probably indicated by how well the person understands the connection between convergence of a series and rates of growth (really, decay) of the terms in the series. Armed with the convergence of a few basic series (like geometric series and $p$-series), the convergence of most other series encountered in a calculus course can be determined by the limit comparison test if you understand well the rates of growth of the standard functions. One exception is if the series has a log term, in which case it might be useful to apply the integral test.
4. Series with mixed signs
All our previous convergence tests apply to series with positive terms. They also apply to series whose terms are eventually positive (just omit any initial negative terms to get a positive series) or eventually negative (negate the series, which doesn't change the convergence status, and now we're reduced to an eventually positive series). What can we do for series whose terms are not eventually all positive or eventually all negative? These are the series with infinitely many positive terms and infinitely many negative terms.
Example 4.1. Consider the alternating harmonic series
$\sum_{n \ge 1} \frac{(-1)^{n-1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots.$
The usual harmonic series diverges, but the alternating signs might have different behavior. The following table collects some of the partial sums.
N        $\sum_{n=1}^{N} (-1)^{n-1}/n$
10       .6456
100      .6882
1000     .6926
10000    .6930

Table 2
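Table 2 can be regenerated directly; a Python sketch of ours (the name `alt_harmonic` is our own) reproduces the partial sums to about four decimal places:

```python
def alt_harmonic(N):
    """Partial sum of the alternating harmonic series through n = N."""
    return sum((-1)**(n - 1) / n for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    print(N, alt_harmonic(N))
```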
12 KEITH CONRAD
These partial sums are not clear evidence in favor of convergence: after 10,000 terms we only get apparent stability in the first 2 decimal digits. The harmonic series diverges slowly, so perhaps the alternating harmonic series diverges too (and more slowly).
Definition 4.2. An infinite series where the terms alternate in sign is called an alternating series.
An alternating series starting with a positive term can be written as $\sum_{n \ge 1} (-1)^{n-1} a_n$ where $a_n > 0$ for all $n$. If the first term is negative then a reasonable general notation is $\sum_{n \ge 1} (-1)^n a_n$ where $a_n > 0$ for all $n$. Obviously negating an alternating series of one type turns it into the other type.
Theorem 4.3 (Alternating series test). An alternating series whose $n$th term in absolute value tends monotonically to 0 as $n \to \infty$ is a convergent series.
Proof. We will work with alternating series which have an initial positive term, so of the form $\sum_{n \ge 1} (-1)^{n-1} a_n$. Since successive partial sums alternate adding and subtracting an amount which, in absolute value, steadily tends to 0, the relation between the partial sums is indicated by the following inequalities:
$s_2 < s_4 < s_6 < \cdots < s_5 < s_3 < s_1.$
In particular, the even-indexed partial sums are an increasing sequence which is bounded above (by $s_1$, say), so the partial sums $s_{2m}$ converge. Since $|s_{2m+1} - s_{2m}| = a_{2m+1} \to 0$, the odd-indexed partial sums converge to the same limit as the even-indexed partial sums, so the sequence of all partial sums has a single limit, which means $\sum_{n \ge 1} (-1)^{n-1} a_n$ converges.
Example 4.4. The alternating harmonic series $\sum_{n \ge 1} \frac{(-1)^{n-1}}{n}$ converges.
Example 4.5. While the $p$-series $\sum_{n \ge 1} 1/n^p$ converges only for $p > 1$, the alternating $p$-series $\sum_{n \ge 1} (-1)^{n-1}/n^p$ converges for the wider range of exponents $p > 0$ since it is an alternating series satisfying the hypotheses of the alternating series test.
Example 4.6. We saw $\sum_{n \ge 1} x^n/n$ converges for $0 \le x < 1$ (comparison to the geometric series) in Example 3.7. If $-1 \le x < 0$ then $\sum_{n \ge 1} x^n/n$ converges by the alternating series test: the negativity of $x^n$ makes the terms $x^n/n$ alternate in sign. To apply the alternating series test we need
$\left|\frac{x^{n+1}}{n+1}\right| < \left|\frac{x^n}{n}\right|$
for all $n$, which is equivalent after some algebra to
$|x| < \frac{n+1}{n}$
for all $n$, and this is true since $|x| \le 1 < (n+1)/n$.
Example 4.7. In Example 3.10 the series $\sum_{n \ge 0} x^n/n!$ was seen to converge for $x \ge 0$. If $x < 0$ the series is an alternating series. Does it fit the hypotheses of the alternating series test? We need
$\left|\frac{x^{n+1}}{(n+1)!}\right| < \left|\frac{x^n}{n!}\right|$
for all $n$, which is equivalent to
$|x| < n + 1$
for all $n$. For each particular $x < 0$ this may not be true for all $n$ (if $x = -3$ it fails for $n = 1$ and $2$), but it is definitely true for all sufficiently large $n$. Therefore the terms of $\sum_{n \ge 0} x^n/n!$ form an “eventually alternating” series. Since what matters for convergence is the long-term behavior and not the initial terms, we can apply the alternating series test to see $\sum_{n \ge 0} x^n/n!$ converges by just ignoring an initial piece of the series.
When a series has infinitely many positive and negative terms which are not strictly alternating, convergence may not be easy to check. For example, the trigonometric series
$\sum_{n \ge 1} \frac{\sin(nx)}{n} = \sin x + \frac{\sin(2x)}{2} + \frac{\sin(3x)}{3} + \frac{\sin(4x)}{4} + \cdots$
turns out to be convergent for every number $x$, but the arrangement of positive and negative terms in this series can be quite subtle in terms of $x$. The proof that this series converges uses the technique of summation by parts, which won't be discussed here.
While it is generally a delicate matter to show a nonalternating series with mixed signs converges,
there is one useful property such series might have which does imply convergence. It is this: if the
series with all terms made positive converges then so does the original series. Let’s give this idea
a name and then look at some examples.
Definition 4.8. A series $\sum_{n \ge 1} a_n$ is absolutely convergent if $\sum_{n \ge 1} |a_n|$ converges.
Don't confuse $\sum_{n \ge 1} |a_n|$ with $\left|\sum_{n \ge 1} a_n\right|$; absolute convergence refers to convergence if we drop the signs from the terms in the series, not from the series overall. The difference between $\sum_{n \ge 1} a_n$ and $\left|\sum_{n \ge 1} a_n\right|$ is at most a single sign, while there is a substantial difference between $\sum_{n \ge 1} a_n$ and $\sum_{n \ge 1} |a_n|$ if the $a_n$'s have many mixed signs; that is the difference which absolute convergence involves.
Example 4.9. We saw $\sum_{n \ge 1} x^n/n$ converges if $0 \le x < 1$ in Example 3.7. Since $|x^n/n| = |x|^n/n$, the series $\sum_{n \ge 1} x^n/n$ is absolutely convergent if $|x| < 1$. Note the alternating harmonic series $\sum_{n \ge 1} (-1)^{n-1}/n$ is not absolutely convergent, however.
Example 4.10. In Example 3.10 we saw $\sum_{n \ge 0} x^n/n!$ converges for all $x \ge 0$. Since $|x^n/n!| = |x|^n/n!$, the series $\sum_{n \ge 0} x^n/n!$ is absolutely convergent for all $x$.
Example 4.11. By Example 3.9, $\sum_{n \ge 1} nx^n$ converges absolutely if $|x| < 1$.
Example 4.12. The two series
$\sum_{n \ge 1} \frac{\sin(nx)}{n^2}, \qquad \sum_{n \ge 1} \frac{\cos(nx)}{n^2},$
where the $n$th term has denominator $n^2$, are absolutely convergent for any $x$ since the absolute value of the $n$th term is at most $1/n^2$ and $\sum_{n \ge 1} 1/n^2$ converges.
The relevance of absolute convergence is twofold: 1) we can use the many convergence tests for
positive series to determine if a series is absolutely convergent, and 2) absolute convergence implies
ordinary convergence.
We just illustrated the first point several times. Let's show the second point.
Theorem 4.13 (Absolute convergence test). Every absolutely convergent series is a convergent
series.
Proof. For all $n$ we have
$-|a_n| \le a_n \le |a_n|,$
so
$0 \le a_n + |a_n| \le 2|a_n|.$
Since $\sum_{n \ge 1} |a_n|$ converges, so does $\sum_{n \ge 1} 2|a_n|$ (to twice the value). Therefore by the comparison test we obtain convergence of $\sum_{n \ge 1} (a_n + |a_n|)$. Subtracting $\sum_{n \ge 1} |a_n|$ from this shows $\sum_{n \ge 1} a_n$ converges.
Thus the series $\sum_{n \ge 1} x^n/n$ and $\sum_{n \ge 1} nx^n$ converge for $|x| < 1$, and $\sum_{n \ge 0} x^n/n!$ converges for all $x$, by our treatment of these series for positive $x$ alone in Section 3. The argument we gave in Examples 4.6 and 4.7 for the convergence of $\sum_{n \ge 1} x^n/n$ and $\sum_{n \ge 0} x^n/n!$ when $x < 0$, using the alternating series test, can be avoided. (But we do need the alternating series test to show $\sum_{n \ge 1} x^n/n$ converges at $x = -1$.)
A series which converges but is not absolutely convergent is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series $\sum_{n \ge 1} (-1)^{n-1}/n$.
The distinction between absolutely convergent series and conditionally convergent series might seem kind of minor, since it involves whether or not a particular convergence test (the absolute convergence test) works on that series. But the distinction is actually profound, and is nicely illustrated by the following example of Dirichlet (1837).
Example 4.14. Let $L = 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + \cdots$ be the alternating harmonic series. Consider the rearrangement of the terms where we follow a positive term by two negative terms:
$1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \cdots.$
If we add each positive term to the negative term following it, we obtain
$\frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \cdots,$
which is precisely half the alternating harmonic series (multiply $L$ by $1/2$ and see what you get termwise). Therefore, by rearranging the terms of the alternating harmonic series we obtained a new series whose sum is $(1/2)L$ instead of $L$. If $(1/2)L = L$, doesn't that mean $1 = 2$? What happened?
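The effect of Dirichlet's rearrangement is visible numerically. The Python sketch below (our own, with hypothetical names) sums the pattern “one positive term, two negative terms” and compares it with half of the alternating harmonic series.

```python
def rearranged_sum(blocks):
    """Sum `blocks` groups of the pattern 1/(2k-1) - 1/(4k-2) - 1/(4k)."""
    total = 0.0
    for k in range(1, blocks + 1):
        total += 1 / (2 * k - 1) - 1 / (4 * k - 2) - 1 / (4 * k)
    return total

def alt_harmonic(N):
    """Partial sum of the alternating harmonic series through n = N."""
    return sum((-1)**(n - 1) / n for n in range(1, N + 1))

L = alt_harmonic(10**6)       # close to the full sum L
print(rearranged_sum(10**5))  # close to L/2, not L
```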
This example shows addition of terms in an infinite series is not generally commutative, unlike with finite series. In fact, this feature is characteristic of conditionally convergent series, as seen in the following amazing theorem.
Theorem 4.15 (Riemann, 1854). If an infinite series is conditionally convergent then it can be rearranged to sum up to any desired value. If an infinite series is absolutely convergent then all of its rearrangements converge to the same sum.
For instance, the alternating harmonic series can be rearranged to sum up to 1, to $-5.673$, to $\pi$, or to any other number you wish. That we rearranged it in Example 4.14 to sum up to half its usual value was special only in the sense that we could make the rearrangement achieving that effect quite explicit.
Theorem 4.15, whose proof we omit, is called Riemann's rearrangement theorem. It indicates that absolutely convergent series are better behaved than conditionally convergent ones. The ordinary rules of algebra for finite series generally extend to absolutely convergent series but not to conditionally convergent series.
INFINITE SERIES 15
5. Power series
The simplest functions are polynomials: c_0 + c_1 x + · · · + c_d x^d. In differential calculus we learn
how to locally approximate many functions by linear functions (the tangent line approximation).
We now introduce an idea which will let us give an exact representation of functions in terms of
“polynomials of inﬁnite degree.” The precise name for this idea is a power series.
Definition 5.1. A power series is a function of the form f(x) = Σ_{n≥0} c_n x^n.
For instance, a polynomial is a power series where c_n = 0 for large n. The geometric series Σ_{n≥0} x^n is a power series. It only converges for |x| < 1. We have also met other power series, like

Σ_{n≥1} x^n/n,   Σ_{n≥1} n x^n,   Σ_{n≥0} x^n/n!.

These three series converge on the respective intervals [−1, 1), (−1, 1) and (−∞, ∞).
For any power series Σ_{n≥0} c_n x^n it is a basic task to find those x where the series converges. It turns out in general, as in all of our examples, that a power series converges on an interval. Let's see why.
Theorem 5.2. If a power series Σ_{n≥0} c_n x^n converges at x = b ≠ 0 then it converges absolutely for |x| < |b|.
Proof. Since Σ_{n≥0} c_n b^n converges, the general term tends to 0, so |c_n b^n| ≤ 1 for large n. If |x| < |b| then

|c_n x^n| = |c_n b^n| |x/b|^n,

which is at most |x/b|^n for large n. The series Σ_{n≥0} |x/b|^n converges since it's a geometric series and |x/b| < 1. Therefore by the (extended) comparison test Σ_{n≥0} |c_n x^n| converges, so Σ_{n≥0} c_n x^n is absolutely convergent.
Example 5.3. If a power series converges at −7 then it converges on the interval (−7, 7). We can't say for sure whether it converges at 7.
Example 5.4. If a power series diverges at −7 then it diverges for |x| > 7: if it were to converge at a number x with |x| > 7 then it would converge on the interval (−|x|, |x|), so in particular at −7, a contradiction.
What these two examples illustrate, as consequences of Theorem 5.2, is that the set of x where a power series Σ_{n≥0} c_n x^n converges is an interval centered at 0, so of the form

(−r, r),  [−r, r),  (−r, r],  [−r, r]

for some r. We call r the radius of convergence of the power series. The only difference between these different intervals is the presence or absence of the endpoints. All possible types of interval of convergence can occur:
Σ_{n≥0} x^n has interval of convergence (−1, 1), Σ_{n≥1} x^n/n has interval of convergence [−1, 1), Σ_{n≥1} (−1)^n x^n/n has interval of convergence (−1, 1] and Σ_{n≥1} x^n/n^2 (a new example we have not met before) has interval of convergence [−1, 1].
The radius of convergence and the interval of convergence are closely related but should not be confused. The interval is the actual set where the power series converges. The radius is simply the half-length of this set (and doesn't tell us whether or not the endpoints are included). If we don't care about convergence behavior on the boundary of the interval of convergence then we can get by just knowing the radius of convergence: the series always converges (absolutely) on the inside of the interval of convergence.
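Endpoint behavior really does have to be checked separately from the radius. As a quick numerical illustration (a sketch, not a proof), here is Σ_{n≥1} x^n/n at its two endpoints: the partial sums settle down at x = −1 but keep growing at x = 1.

```python
def partial_sum(x, N):
    """Partial sum of sum_{n>=1} x^n / n through n = N."""
    return sum(x ** n / n for n in range(1, N + 1))

# x = -1: successive partial sums barely move (the series converges)
settle = abs(partial_sum(-1.0, 2000) - partial_sum(-1.0, 1000))
# x = 1: this is the harmonic series, and the partial sums keep growing
growth = partial_sum(1.0, 2000) - partial_sum(1.0, 1000)
print(settle, growth)
```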
For the practical computation of the radius of convergence in basic examples it is convenient to
use a new convergence test for positive series.
Theorem 5.5 (Ratio test). If a_n > 0 for all n, assume

q = lim_{n→∞} a_{n+1}/a_n

exists. If q < 1 then Σ_{n≥1} a_n converges. If q > 1 then Σ_{n≥1} a_n diverges. If q = 1 then no conclusion can be drawn.
Proof. Assume q < 1. Pick s between q and 1: q < s < 1. For all large n, say for n ≥ n_0, a_{n+1}/a_n < s, so a_{n+1} < s·a_n. Then

a_{n_0+1} < s·a_{n_0},  a_{n_0+2} < s·a_{n_0+1} < s^2·a_{n_0},  a_{n_0+3} < s·a_{n_0+2} < s^3·a_{n_0},

and more generally a_{n_0+k} < s^k·a_{n_0} for any k ≥ 0. Writing n_0 + k as n we have

n ≥ n_0  ⟹  a_n < s^{n−n_0}·a_{n_0} = (a_{n_0}/s^{n_0})·s^n.

Therefore a_n is bounded above by s^n up to a scaling factor. Hence Σ_{n≥1} a_n converges by comparison to a convergent geometric series (s < 1).
In the other direction, if q > 1 then pick s between 1 and q: 1 < s < q. An argument similar to the previous one shows a_n grows at least as quickly as a constant multiple of s^n, but this time Σ_{n≥1} s^n diverges since s > 1. So Σ_{n≥1} a_n diverges too.
When q = 1 we can't make a definite conclusion, since both Σ_{n≥1} 1/n and Σ_{n≥1} 1/n^2 have q = 1.
Example 5.6. We give a proof that Σ_{n≥1} n x^n has radius of convergence 1 which is shorter than Examples 3.9 and 4.11. Take a_n = |n x^n| = n|x|^n. Then

a_{n+1}/a_n = ((n + 1)/n)·|x| → |x|

as n → ∞. Thus if |x| < 1 the series Σ_{n≥1} n x^n is absolutely convergent by the ratio test. If |x| > 1 this series is divergent. Notice the ratio test does not tell us what happens to Σ_{n≥1} n x^n when |x| = 1; we have to check x = 1 and x = −1 individually.
Example 5.7. A similar argument shows Σ_{n≥1} x^n/n has radius of convergence 1 since

|x^{n+1}/(n + 1)| / |x^n/n| = (n/(n + 1))·|x| → |x|

as n → ∞.
Example 5.8. We show the series Σ_{n≥0} x^n/n! converges absolutely for all x. Using the ratio test we look at

|x^{n+1}/(n + 1)!| / |x^n/n!| = |x|/(n + 1) → 0

as n → ∞. Since this limit is less than 1 for all x, we are done by the ratio test. The radius of convergence is infinite.
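The limits in Examples 5.6–5.8 can be watched numerically by computing the ratio a_{n+1}/a_n at a large n. A Python sketch:

```python
import math

def ratio(a, n):
    """The ratio a(n+1)/a(n) that appears in the ratio test."""
    return a(n + 1) / a(n)

x = 0.5
r1 = ratio(lambda n: n * x ** n, 1000)                # tends to |x| = 0.5
r2 = ratio(lambda n: x ** n / math.factorial(n), 50)  # tends to 0
# both 1/n and 1/n^2 have q = 1, so the ratio test is silent about them
r3 = ratio(lambda n: 1 / n, 1000)
r4 = ratio(lambda n: 1 / n ** 2, 1000)
print(r1, r2, r3, r4)
```

Note that one series with q = 1 diverges (the harmonic series) and the other converges, which is exactly why the test draws no conclusion there.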
Power series are important for two reasons: they give us much greater ﬂexibility to deﬁne new
kinds of functions and many standard functions can be expressed in terms of a power series.
Intuitively, if a known function f(x) has a power series representation on some interval around 0, say

f(x) = Σ_{n≥0} c_n x^n = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + · · ·

for |x| < r, then we can guess a formula for c_n in terms of the behavior of f(x). To begin we have

f(0) = c_0.
Now we think about power series as something like "infinite degree polynomials." We know how to differentiate a polynomial by differentiating each power of x separately. Let's assume this works for power series too in the interval of convergence:

f'(x) = Σ_{n≥1} n c_n x^{n−1} = c_1 + 2c_2 x + 3c_3 x^2 + 4c_4 x^3 + · · · ,

so we expect

f'(0) = c_1.
Now let's differentiate again:

f''(x) = Σ_{n≥2} n(n − 1) c_n x^{n−2} = 2c_2 + 6c_3 x + 12c_4 x^2 + · · · ,

so

f''(0) = 2c_2.

Differentiating a third time (formally) and setting x = 0 gives

f^(3)(0) = 6c_3.
The general rule appears to be

f^(n)(0) = n!·c_n,

so we should have

c_n = f^(n)(0)/n!.
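The rule f^(n)(0) = n!·c_n can be checked mechanically for polynomials, where termwise differentiation is certainly valid. A small Python sketch, representing a polynomial by its coefficient list:

```python
import math

def derive(coeffs):
    """Termwise derivative: [c0, c1, c2, ...] -> [c1, 2*c2, 3*c3, ...]."""
    return [n * c for n, c in enumerate(coeffs)][1:]

def nth_deriv_at_zero(coeffs, n):
    """f^(n)(0), found by differentiating termwise n times and reading
    off the constant term."""
    for _ in range(n):
        coeffs = derive(coeffs)
    return coeffs[0] if coeffs else 0

c = [5, -2, 7, 4]  # f(x) = 5 - 2x + 7x^2 + 4x^3
checks = [nth_deriv_at_zero(c, n) == math.factorial(n) * c[n]
          for n in range(len(c))]
print(checks)
```

Theorem 5.9 below is what lets us apply the same termwise reasoning to genuine power series, not just polynomials.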
This procedure really is valid, according to the following theorem whose long proof is omitted.
Theorem 5.9. Any function represented by a power series in an open interval (−r, r) is inﬁnitely
diﬀerentiable in (−r, r) and its derivatives can be computed by termwise diﬀerentiation of the power
series.
This means our previous calculations are justified: if a function f(x) can be written in the form Σ_{n≥0} c_n x^n in an interval around 0 then we must have

(5.1) c_n = f^(n)(0)/n!.
In particular, a function has at most one expression as a power series Σ_{n≥0} c_n x^n around 0. And a function which is not infinitely differentiable around 0 will definitely not have a power series representation Σ_{n≥0} c_n x^n. For instance, |x| has no power series representation of this form since it is not differentiable at 0.
Remark 5.10. We can also consider power series f(x) = Σ_{n≥0} c_n (x − a)^n, whose interval of convergence is centered at a. In this case the coefficients are given by the formula c_n = f^(n)(a)/n!. For simplicity we will focus on power series "centered at 0" only.
Remark 5.11. Even if a power series converges on a closed interval [−r, r], the power series for its derivative need not converge at the endpoints. Consider f(x) = Σ_{n≥1} (−1)^n x^{2n}/n. It converges for |x| ≤ 1 but the derivative series is Σ_{n≥1} 2(−1)^n x^{2n−1}, which converges for |x| < 1.
Remark 5.12. Because a power series can be diﬀerentiated (inside its interval of convergence)
termwise, we can discover solutions to a diﬀerential equation by inserting a power series with
unknown coeﬃcients into the diﬀerential equation to get relations between the coeﬃcients. Then
a few initial coeﬃcients, once chosen, should determine all the higher ones. After we compute the
radius of convergence of this new power series we will have found a solution to the diﬀerential
equation in a speciﬁc interval. This idea goes back to Newton. It does not necessarily provide us
with all solutions to a diﬀerential equation, but it is one of the standard methods to ﬁnd some
solutions.
Example 5.13. If |x| < 1 then Σ_{n≥0} x^n = 1/(1 − x). Differentiating both sides, for |x| < 1 we have

Σ_{n≥1} n x^{n−1} = 1/(1 − x)^2.

Multiplying by x gives us

Σ_{n≥1} n x^n = x/(1 − x)^2,

so we have finally found a "closed form" expression for an infinite series we first met back in Example 3.9.
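The closed form is easy to sanity-check numerically, say at x = 0.3 (a sketch: 200 terms is far more than needed at this x):

```python
x = 0.3
series = sum(n * x ** n for n in range(1, 200))  # partial sum of sum n*x^n
closed = x / (1 - x) ** 2
print(series, closed)  # the two values agree to many decimal places
```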
Example 5.14. If there is a power series representation Σ_{n≥0} c_n x^n for e^x then (5.1) shows c_n = 1/n!. The only possible way to write e^x as a power series is

Σ_{n≥0} x^n/n! = 1 + x + x^2/2 + x^3/3! + x^4/4! + · · · .

Is this really a valid formula for e^x? Well, we checked before that this power series converges for all x. Moreover, calculating the derivative of this series reproduces the series again, so this series is a function satisfying y'(x) = y(x). The only solutions to this differential equation are Ce^x and we can recover C by setting x = 0. Since the power series has constant term 1 (so its value at x = 0 is 1), its C is 1, so this power series is e^x:

e^x = Σ_{n≥0} x^n/n!.
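Here is a quick numerical check of e^x = Σ x^n/n! against Python's math.exp (a sketch; 30 terms is plenty for moderate x):

```python
import math

def exp_series(x, terms=30):
    """Partial sum of sum_{n>=0} x^n / n!."""
    return sum(x ** n / math.factorial(x_n := n) for n in range(terms)) if False else \
           sum(x ** n / math.factorial(n) for n in range(terms))

for x in (0.0, 1.0, -2.5, 3.7):
    print(x, exp_series(x), math.exp(x))  # the two columns agree closely
```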
Loosely speaking, this means the polynomials

1,  1 + x,  1 + x + x^2/2,  1 + x + x^2/2 + x^3/6,  . . .

are good approximations to e^x. (Where they are good will depend on how far x is from 0.) In particular, setting x = 1 gives an infinite series representation of the number e:

(5.2) e = Σ_{n≥0} 1/n! = 1 + 1 + 1/2! + 1/3! + 1/4! + · · · .
Formula (5.2) can be used to verify an interesting fact about e.
Theorem 5.15. The number e is irrational.
Proof. (Fourier) For any n,

e = (1 + 1 + 1/2! + 1/3! + · · · + 1/n!) + (1/(n + 1)! + 1/(n + 2)! + · · ·)
  = (1 + 1 + 1/2! + 1/3! + · · · + 1/n!) + (1/n!)·(1/(n + 1) + 1/((n + 2)(n + 1)) + · · ·).

The second term in parentheses is positive and bounded above by the geometric series

1/(n + 1) + 1/(n + 1)^2 + 1/(n + 1)^3 + · · · = 1/n.

Therefore

0 < e − (1 + 1 + 1/2! + 1/3! + · · · + 1/n!) ≤ 1/(n·n!).

Write the sum 1 + 1 + 1/2! + · · · + 1/n! as a fraction with common denominator n!, say as p_n/n!. Clear the denominator n! to get

(5.3) 0 < n!e − p_n ≤ 1/n.

So far everything we have done involves no unproved assumptions. Now we introduce the rationality assumption. If e is rational, then n!e is an integer when n is large (since the denominator of e divides n! for large n). But that makes n!e − p_n an integer located in the open interval (0, 1), which is absurd. We have a contradiction, so e is irrational.
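Inequality (5.3) can be observed numerically: with p_n = Σ_{k=0}^{n} n!/k! (an integer), the quantity n!e − p_n stays strictly between 0 and 1/n. A Python sketch (floating-point e is accurate enough here for small n):

```python
import math

def gap(n):
    """n! * e - p_n, where p_n/n! is the partial sum 1 + 1 + 1/2! + ... + 1/n!."""
    p_n = sum(math.factorial(n) // math.factorial(k) for k in range(n + 1))
    return math.factorial(n) * math.e - p_n

for n in (2, 5, 10, 14):
    print(n, gap(n), 1 / n)  # 0 < gap(n) <= 1/n, squeezing toward 0
```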
We return to the general task of representing functions by their power series. Even when a
function is inﬁnitely diﬀerentiable for all x, its power series could have a ﬁnite radius of convergence.
Example 5.16. Viewing 1/(1 + x^2) as 1/(1 − (−x^2)) we have the geometric series formula

1/(1 + x^2) = Σ_{n≥0} (−x^2)^n = Σ_{n≥0} (−1)^n x^{2n}

when |−x^2| < 1, or equivalently |x| < 1. The series has a finite interval of convergence (−1, 1) even though the function 1/(1 + x^2) has no bad behavior at the endpoints: it is infinitely differentiable at every real number x.
Example 5.17. Let f(x) = Σ_{n≥1} x^n/n. This converges for −1 ≤ x < 1. For |x| < 1 we have

f'(x) = Σ_{n≥1} x^{n−1} = Σ_{n≥0} x^n = 1/(1 − x).

When |x| < 1, an antiderivative of 1/(1 − x) is −log(1 − x). Since f(x) and −log(1 − x) have the same value (zero) at x = 0 they are equal:

−log(1 − x) = Σ_{n≥1} x^n/n
for |x| < 1. Replacing x with −x and negating gives

(5.4) log(1 + x) = Σ_{n≥1} (−1)^{n−1} x^n/n
for |x| < 1. The right side converges at x = 1 (alternating series) and diverges at x = −1. It
seems plausible, since the series equals log(1 + x) on (−1, 1), that this should remain true at the
boundary: does Σ_{n≥1} (−1)^{n−1}/n equal log 2? (Notice this is the alternating harmonic series.) We
will return to this later.
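Formula (5.4) can be tested numerically inside (−1, 1) (a sketch with a fixed number of terms, so accuracy degrades as |x| approaches 1):

```python
import math

def log1p_series(x, terms=80):
    """Partial sum of sum_{n>=1} (-1)^(n-1) x^n / n."""
    return sum((-1) ** (n - 1) * x ** n / n for n in range(1, terms + 1))

for x in (0.5, -0.5, 0.9):
    print(x, log1p_series(x), math.log(1 + x))  # close agreement
```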
In practice, if we want to show an infinitely differentiable function equals its associated power series Σ_{n≥0} (f^(n)(0)/n!) x^n on some interval around 0 we need some kind of error estimate for the difference between f(x) and the polynomial Σ_{n=0}^{N} (f^(n)(0)/n!) x^n. When the estimate goes to 0 as N → ∞ we will have proved f(x) is equal to its power series.
Theorem 5.18. If f(x) is infinitely differentiable then for all N ≥ 0

f(x) = Σ_{n=0}^{N} (f^(n)(0)/n!) x^n + R_N(x),

where

R_N(x) = (1/N!) ∫_0^x f^(N+1)(t) (x − t)^N dt.
Proof. When N = 0 the desired result says

f(x) = f(0) + ∫_0^x f'(t) dt.

This is precisely the fundamental theorem of calculus!
We obtain the N = 1 case from this by integration by parts. Set u = f'(t) and dv = dt. Then du = f''(t) dt and we (cleverly!) take v = t − x (rather than just t). Then

∫_0^x f'(t) dt = f'(t)(t − x) |_{t=0}^{t=x} − ∫_0^x f''(t)(t − x) dt
             = f'(0)x + ∫_0^x f''(t)(x − t) dt,

so

f(x) = f(0) + f'(0)x + ∫_0^x f''(t)(x − t) dt.
Now apply integration by parts to this new integral with u = f''(t) and dv = (x − t) dt. Then du = f^(3)(t) dt and (cleverly) use v = −(1/2)(x − t)^2. The result is

∫_0^x f''(t)(x − t) dt = (f''(0)/2) x^2 + (1/2) ∫_0^x f^(3)(t)(x − t)^2 dt,

so

f(x) = f(0) + f'(0)x + (f''(0)/2!) x^2 + (1/2) ∫_0^x f^(3)(t)(x − t)^2 dt.

Integrating by parts several more times gives the desired result.
We call R_N(x) a remainder term.
Corollary 5.19. If f is infinitely differentiable then the following conditions at the number x are equivalent:

• f(x) = Σ_{n≥0} (f^(n)(0)/n!) x^n,
• R_N(x) → 0 as N → ∞.

Proof. The difference between f(x) and the Nth partial sum of its power series is R_N(x), so f(x) equals its power series precisely when R_N(x) → 0 as N → ∞.
Example 5.20. In (5.4) we showed log(1 + x) equals Σ_{n≥1} (−1)^{n−1} x^n/n for |x| < 1. What about at x = 1? For this purpose we want to show R_N(1) → 0 as N → ∞ when f(x) = log(1 + x). To estimate R_N(1) we need to compute the higher derivatives of f(x):

f'(x) = 1/(1 + x),  f''(x) = −1/(1 + x)^2,  f^(3)(x) = 2/(1 + x)^3,  f^(4)(x) = −6/(1 + x)^4,

and in general

f^(n)(x) = (−1)^{n−1} (n − 1)!/(1 + x)^n

for n ≥ 1. Therefore

R_N(1) = (1/N!) ∫_0^1 ((−1)^N N!/(1 + t)^{N+1}) (1 − t)^N dt = (−1)^N ∫_0^1 (1 − t)^N/(1 + t)^{N+1} dt,

so

|R_N(1)| ≤ ∫_0^1 dt/(1 + t)^{N+1} = 1/N − 1/(N·2^N) → 0

as N → ∞. Since the remainder term tends to 0, log 2 = Σ_{n≥1} (−1)^{n−1}/n.
Example 5.21. Let f(x) = sin x. Its power series is

Σ_{n≥0} (−1)^n x^{2n+1}/(2n + 1)! = x − x^3/3! + x^5/5! − x^7/7! + · · · ,

which converges for all x by the ratio test. We will show sin x equals its power series for all x. This is obvious if x = 0.
Since sin x has derivatives ±cos x and ±sin x, which are bounded by 1, for x > 0

|R_N(x)| = (1/N!) |∫_0^x f^(N+1)(t)(x − t)^N dt|
         ≤ (1/N!) ∫_0^x |x − t|^N dt
         ≤ (1/N!) ∫_0^x x^N dt
         = x^{N+1}/N!.

This tends to 0 as N → ∞, so sin x does equal its power series for x > 0.
If x < 0 then we can similarly estimate |R_N(x)| and show it tends to 0 as N → ∞, but we can also use a little trick because we know sin x is an odd function: if x < 0 then −x > 0 so

sin x = −sin(−x) = −Σ_{n≥0} (−1)^n (−x)^{2n+1}/(2n + 1)!

from the power series representation of the sine function at positive numbers. Since (−x)^{2n+1} = −x^{2n+1},

sin x = −Σ_{n≥0} (−1)^n (−x^{2n+1})/(2n + 1)! = Σ_{n≥0} (−1)^n x^{2n+1}/(2n + 1)!,
which shows sin x equals its power series for x < 0 too.
Example 5.22. By similar work we can show cos x equals its power series representation everywhere:

cos x = Σ_{n≥0} (−1)^n x^{2n}/(2n)! = 1 − x^2/2! + x^4/4! − x^6/6! + · · · .
The power series for sin x and cos x look like the odd and even degree terms in the power series for e^x. Is the exponential function related to the trigonometric functions? This idea is suggested by their power series. Since e^x has a series without the (−1)^n factors, we can get a match between these series by working not with e^x but with e^{ix}, where i is a square root of −1. We have not defined infinite series (or even the exponential function) using complex numbers. However, assuming we could make sense of this then we would expect to have

e^{ix} = Σ_{n≥0} (ix)^n/n!
       = 1 + ix − x^2/2! − i·x^3/3! + x^4/4! + i·x^5/5! + · · ·
       = (1 − x^2/2! + x^4/4! − · · ·) + i·(x − x^3/3! + x^5/5! − · · ·)
       = cos x + i sin x.
For example, setting x = π would say

e^{iπ} = −1.

Replacing x with −x in the formula for e^{ix} gives

e^{−ix} = cos x − i sin x.

Adding and subtracting the formulas for e^{ix} and e^{−ix} lets us express the real trigonometric functions sin x and cos x in terms of (complex) exponential functions:

cos x = (e^{ix} + e^{−ix})/2,   sin x = (e^{ix} − e^{−ix})/(2i).
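Python's cmath module computes complex exponentials, so the identities above can be checked directly (numerically, of course, not as a proof):

```python
import cmath
import math

x = 0.7
z = cmath.exp(1j * x)
print(z, complex(math.cos(x), math.sin(x)))  # e^{ix} = cos x + i sin x
print(cmath.exp(1j * math.pi))               # e^{i*pi}, very close to -1
cos_x = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
sin_x = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2j)
print(cos_x.real, sin_x.real)  # recover cos(0.7) and sin(0.7)
```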
Evidently a deeper study of inﬁnite series should make systematic use of complex numbers!
The following question is natural: who needs all the remainder estimate business from Theorem
5.18 and Corollary 5.19? After all, why not just compute the power series for f(x) and ﬁnd its
radius of convergence? That has to be where f(x) equals its power series. Alas, this is false.
Example 5.23. Let

f(x) = e^{−1/x^2} if x ≠ 0,  and f(0) = 0.

It can be shown that f(x) is infinitely differentiable at x = 0 and f^(n)(0) = 0 for all n, so the power series for f(x) has every coefficient equal to 0, which means the power series is the zero function. But f(x) ≠ 0 if x ≠ 0, so f(x) is not equal to its power series anywhere except at x = 0.
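A numerical look at Example 5.23 (a sketch): e^{−1/x^2} vanishes at 0 faster than any power of x, which is the mechanism forcing every Taylor coefficient at 0 to vanish, yet the function is visibly nonzero away from 0.

```python
import math

def f(x):
    """e^(-1/x^2) for x != 0, extended by f(0) = 0."""
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

print(f(1.0))              # about 0.37: not the zero function
print(f(0.1) / 0.1 ** 10)  # astronomically small: flatter than x^10 at 0
```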
Example 5.23 shows that even if a function f(x) is infinitely differentiable for all x and the power series Σ_{n≥0} (f^(n)(0)/n!) x^n converges for all x, f(x) need not equal its power series anywhere except at x = 0 (where they must agree since the constant term of the power series is f(0)). This is why there is nontrivial content in saying a function can be represented by its power series on some interval, and why remainder estimates are needed in general.
2
KEITH CONRAD
For example,
N
(2.1)
n=1
1 1 1 1 = 1 + + + ··· + . n 2 3 N
The sum in (2.1) is called a harmonic sum, for instance 1+ 1 1 1 1 137 + + + = . 2 3 4 5 60
A very important class of ﬁnite series, more important than the harmonic ones, are the geometric series
N
(2.2) An example is
10 n=0
1 + r + r + ··· + r
2
N
=
n=0
rn .
1 2
n
10
=
n=0
1 1 1 1 2047 = 1 + + + · · · + 10 = ≈ 1.999023. n 2 2 4 2 1024
The geometric series (2.2) can be summed up exactly, as follows. Theorem 2.1. When r = 1, the series (2.2) is 1 + r + r2 + · · · + rN = Proof. Let SN = 1 + r + · · · + r N . Then rSN = r + r2 + · · · + rN +1 . These sums overlap in r + · · · + rN , so subtracting rSN from SN gives (1 − r)SN = 1 − rN +1 . When r = 1 we can divide and get the indicated formula for SN . Example 2.2. For any N ≥ 0, 1 + 2 + 22 + · · · + 2N = Example 2.3. For any N ≥ 0, 1+ 1 1 1 1 − (1/2)N +1 1 + + ··· + N = =2− N. 2 4 2 1 − 1/2 2 2N +1 − 1 = 2N +1 − 1. 2−1 rN +1 − 1 1 − rN +1 = . 1−r r−1
It is sometimes useful to adjust the indexing on a sum, for instance
100 99
(2.3)
1 + 2 · 2 + 3 · 22 + 4 · 23 + · · · + 100 · 299 =
n=1
n2n−1 =
(n + 1)2n .
n=0
notice how the renumbering of the indices aﬀects the expression being summed: if we subtract 1 from the bounds of summation on the Σ then we add 1 to the index “on the inside” to maintain the same overall sum. and see what happens in the limit as N → ∞. a2 . We only add up a ﬁnite number of the terms and then see how things behave in the limit as the (ﬁnite) number of terms tends to inﬁnity: an inﬁnite series is deﬁned to be the limit of its sequence of partial sums. If the limit does not exist or is ±∞ Notice that we are not really adding up all the terms in an inﬁnite series at once. Theorem 2. For N ≥ 0. xn = 1−x n≥0 n≥0 x n converges if x < 1 and diverges Proof.3). When the variable inside the integral is changed by an additive constant.INFINITE SERIES 3 Comparing the two Σ’s in (2. be an inﬁnite sequence of real numbers. 1 = lim 2n N →∞ N n=0 n≥0 1 1 = lim 2 − N = 2. For x ∈ R. the (inﬁnite) geometric series if x ≥ 1. if x = 1. Deﬁnition 2. n=1 If the limit exists in R then we say then n≥1 an is called divergent.3.1). Let a1 .g. Using Example 2.5. if x = 1. where u = x + 5 (and du = dx). . such as (1. 1 6 6 (x + 5)2 dx = 0 5 u2 du = 5 x2 dx. e. The inﬁnite series n≥1 an is deﬁned to be N an = lim n≥1 N →∞ an . Example 2. 2n Building on this example we can compute exactly the value of any inﬁnite geometric series. a3 . n≥1 an is convergent. N + 1.4. The basic idea is that we look at the sum of the ﬁrst N terms. 9 6 32 + · · · + 9 2 = n=3 n2 = (n + 3)2 . If x < 1 then 1 . . . As another example. n=0 Whenever the index is shifted down (or up) by a certain amount on the Σ it has to be shifted up (or down) by a compensating amount inside the expression being summed so that we are still adding the same numbers..1 tells us N xn = n=0 (1 − xN +1 )/(1 − x). Theorem 2. This is analogous to the eﬀect of an additive change of variables in an integral. the bounds of integration are changed additively in the opposite direction.6. Now we deﬁne the meaning of inﬁnite series. 
called a partial sum. . n N →∞ 2 2 So n≥0 1 is a convergent series and its value is 2.
While Theorem 2. If x < 1 and m ≥ 0 then xn = xm + xm+1 + xm+2 + · · · = n≥m xm .” The convergence of a series is determined by the behavior of the terms an for large n. aN = SN − SN −1 → S − S = 0. The label “divergent series” does not always mean the partial sums tend to ∞. Then as N → ∞. N →∞ 1−x 1−x n When x > 1 the numerator 1 − xN does not converge as N → ∞. as follows. 1−x Proof. A divergent geometric series can diverge in diﬀerent ways: the partial sums may tend to ∞ or tend to both ∞ and −∞ or oscillate between 1 and 0. because in this case xn does not tend to 0: xn  = xn ≥ 1 for all n. so n≥0 x diverges.9 is formulated for convergent series. . its main importance is as a “divergence test”: if the general term in an inﬁnite series does not tend to 0 then the series diverges. and then we would say “the series diverges to ∞.” Of course in a particular case we may know the partial sums do tend to ∞. xN +1 → 0 as N → ∞.9.7. Let SN = a1 + a2 + · · · + aN and let S = n≥1 an be the limit of the partial sums SN as N → ∞. Example 2. Multiplying by xm gives the asserted value. whose value is 1/(1 − x).) What if x = 1? If x = 1 then the N th partial sum is N + 1 so the series n≥0 1n diverges (to ∞). If x < 1 then x + x2 + x3 + · · · = x/(1 − x). If the series n≥1 an converges then an → 0 as n → ∞. For example. If we change (or omit) any initial set of terms in a series we do not change the convergence of divergence of the series. the sum inside the parentheses becomes the standard geometric series. then we can still get a simple closed formula for the series. If x = −1 then N (−1)n oscillates between 1 and 0. We will see that Theorem 2. Many important inﬁnite series will be analyzed by comparing them to a geometric series (for a suitable choice of x). Theorem 2. but at a higher power of x.8. the limit is ∞ if x > 1 and the partials sums oscillate between values tending to ∞ and −∞ if x < −1. Corollary 2. As N → ∞. 
so N x = lim n≥0 n N →∞ xn = lim n=0 1 − xN +1 1 = .6 is the fundamental example of an inﬁnite series.9 gives another reason that a geometric series n≥0 xn diverges if x ≥ 1. Proof. so again the series n≥0 (−1)n is n=0 divergent. If we start summing a geometric series not at 1. Theorem 2.4 KEITH CONRAD When x < 1. All “divergent” means is “not convergent. The N th partial sum (for N ≥ m) is xm + xm+1 + · · · + xN = xm (1 + x + · · · + xN −m ). Here is the most basic general feature of convergent inﬁnite series. (Speciﬁcally.
9. so our product is written as 1 1 1 1 = · · ··· .4)? A product of reciprocal integers is a reciprocal integer. so does the N th harmonic sum. Since log(N + 1) → ∞ as N → ∞. so the harmonic series diverges “slowly” since the logarithm function diverges slowly. take one term from each factor. Speciﬁcally. multiply them together.10. but can be exploited in other contexts. so N n+1 1 ≥ n n=1 n dt = t N +1 1 dt = log(N + 1). Sums are denoted with a Σ and products are denoted with a Π (capital pi). Now expand each factor into a geometric series: 1 = 1 − 1/p 1+ p p 1 1 1 + + + ··· p p2 p3 . so we obtain a sum of 1/n as n runs over certain positive integers.4) p 1 > 1 − 1/p 1+ p 1 1 1 1 + 2 + 3 + ··· + N p p p p . the n’s we meet are those whose prime factorization doesn’t . so the harmonic series diverges.11. We will see later that the lower bound log(N + 1) for the N th harmonic sum is rather sharp. Proof.4) we have a ﬁnite product of ﬁnite sums. n+1 gives n dt/t ≤ 1/n. Consider the product of 1/(1 − 1/p) as p runs through all prime numbers. n When n ≤ t. (Euler) We will argue by contradiction: assuming there are only ﬁnitely many prime numbers we will contradict the divergence of the harmonic series. 1/t ≤ 1/n. Although the general term tends to 0 it n n turns out that n≥1 1 = ∞. Here is the standard illustration. Consider the harmonic series n≥1 1 1 . Example 2. t Therefore the N th partial sum of the harmonic series is bounded below by log(N + 1). Pick any integer N ≥ 2 and truncate each geometric series at the N th term. giving the inequality (2.INFINITE SERIES 5 It is an unfortunate fact of life that the converse of Theorem 2. and add up all these products. Integrating To show this we will compare this inequality from n to n + 1 N n=1 N N n=1 1/n with 1 dt/t = log N . Each geometric series is greater than any of its partial sums. so we can compute this using the distributive law (“super FOIL” in the terminology of high school algebra). 
What numbers do we get in this way on the right side of (2. On the right side of (2. 1 − 1/p 1 − 1/2 1 − 1/3 1 − 1/5 p Since we assume there are ﬁnitely many primes. The divergence of the harmonic series is not just a counterexample to the converse of Theorem 2.9 is generally false: if an → 0 we have no guarantee that n≥0 an converges. That is. this is a ﬁnite product. There are inﬁnitely many prime numbers. Let’s use it to prove something about prime numbers! Theorem 2.
) We can tell if an increasing sequence converges by showing it is bounded above. n≥1 n≥1 can If n≥1 an converges then for any number c the series can = c n≥1 n≥1 converges and an . We have 1 1 + n n 2 3 5 =5 4n = 1 1 7 + = 1 − 1/2 1 − 1/3 2 n≥0 and n≥0 n≥0 1 1 20 =5 = . Then it converges and its limit is an upper bound on the whole sequence.13. These are the conclusions of the theorem. We end this section with some algebraic properties of convergent series: termwise addition and scaling. n ≥ 2ei . so SN → S n=1 n=1 and TN → T as N → ∞.6 KEITH CONRAD involve any prime with exponent greater than N . Since 1/n occurs in the expansion of the right side of (2. Proof.5) p 1 > 1 − 1/p N n=1 1 . so SN +1 > SN for every N . we have the lower bound estimate (2. Let SN = N an and TN = N bn . so 1/n is a number which shows up when multiplying out (2. What is it? The sequence of partial sums is increasing: SN +1 = SN + aN . Here we need to know we are working with a product over all primes. this inequality is a contradiction so there must be inﬁnitely many primes. if we factor n into primes as n = pe1 pe2 · · · per r 1 2 then ei < n for all i. (Since pi ≥ 2.) Thus if n ≤ N any of the prime factors of n appear in n with exponent less than n ≤ N . divergence can only happen in one way: the partial sums tend to ∞.4) for n ≤ N and all terms in the expansion are positive. n Since the left side is independent of N while the right side tends to ∞ as N → ∞. If n≥1 an and n≥1 bn converge then an + n≥1 n≥1 (an + bn ) converges and (an + bn ) = n≥1 bn .12. An increasing sequence has two possible limiting behaviors: it converges or it tends to ∞. If the terms in an increasing sequence are not bounded above then the sequence . Set S = n≥1 an and T = n≥1 bn . Then by known properties of limits of sequences.4). n 4 1 − 1/4 3 3. Theorem 2. Example 2. Positive series In this section we describe several ways to show that an inﬁnite series n≥1 an converges when an > 0 for all n. 
and 2m > m for any m ≥ 0 so n > ei . (That is. SN + TN → S + T and cSN → cS as N → ∞. Indeed. for any positive integer n > 1. There is something special about series where all the terms are positive.
Suppose an = f (n) where f is a continuous decreasing positive ∞ function. Let’s summarize this property of positive series.5) log(N + 1) ≤ n=1 1 ≤ 1 + log N. We will compare the partial sum Since f (x) is a decreasing function. (3. Compare this to the sequence of partial sums N (−1)n . n ≤ x ≤ n + 1 =⇒ an+1 = f (n + 1) ≤ f (x) ≤ f (n) = an . We were implicitly using the integral test with f (x) = 1/x when we proved the harmonic series diverged in Example 2.2) n=1 an ≥ n=1 n f (x) dx = 1 f (x) dx and N N N n N (3. so the partial integrals are bounded above ∞ and thus the improper integral 1 f (x) dx converges. an < f (1) + n=1 1 f (x) dx. Then n≥1 an converges if and only if 1 f (x) dx converges. If an > 0 for every n and there is a constant C such that all partial sums satisfy N an < C then n≥1 an converges and n≥1 an ≤ C. assume n≥1 an converges.4) N ∞ f (x) dx < ∞ 1 f (x) dx.4) becomes N (3. Proof. so the partial sums are bounded above. Combinining (3. Thus by the second inequality in (3. For the harmonic series. Every partial integral of 1 f (x) dx is bounded above by one of these partial sums according to the ﬁrst inequality of (3.INFINITE SERIES 7 is unbounded and tends to ∞. On n=1 the other hand. Then the partial sums N an are bounded above (by n=1 ∞ the whole series n≥1 an ). Example 3. which n=0 are bounded but do not converge (they oscillate between 1 and 0).1 (Integral Test).2. Integrating these inequalities between n and n + 1 gives n+1 (3.1) an+1 ≤ n f (x) dx ≤ an .4). N 1 Assume that 1 f (x) dx converges. N n=1 an and the “partial integral” N 1 f (x) dx.10. n + 1] has length 1. if there is no upper bound on the partial sums then n≥1 an = ∞. Therefore N N n+1 N +1 (3.2) and (3. n . N +1 N N (3.4) 1 ∞ f (x) dx ≤ n=1 an ≤ f (1) + 1 f (x) dx. Therefore the series n≥1 an converges. Theorem 3. Conversely. We will use this over and over again to explain various convergence tests for positive series. since the interval [n. 
Since f (x) is a positive function.3).3) n=1 an = a1 + n=2 an ≤ a1 + n=2 n−1 f (x) dx = f (1) + 1 f (x) dx.
The series n≥2 1 . Since the integrand 1/xp has an antiderivative that can be computed explicitly. Its convergence is related to the integral 1 dx/xp . as in this example.908 10000 9. n≥1 bn converges then n≥1 an .5.788 10.909 7. with terms just slightly smaller than in the previous n(log n)2 ∞ 2 example. Because the convergence of a positive series is intimately tied up with the boundedness of its partial sums.210 9.8 KEITH CONRAD Thus the harmonic series diverges “logarithmically. dx = p−1 xp ∞. If n≥1 an diverges then n≥1 bn diverges. This leads to the following convergence test. Using the integral test.187 5. 2 (We stated the integral test with a lower bound of 1 but it obviously adapts to series where the ﬁrst index of summation is something other than 1. Let 0 < an ≤ bn for all n.4.3.485 7.615 5. if p > 1. For example. converges. This is called a pseries. we can compute the improper integral: 1 ∞ . Example 3. log 2 which is ﬁnite.303 100 4. consider 1 1 1 1 = 1 + p + p + p + ··· p n 2 3 4 n≥1 where p > 0. we can determine the convergence or divergence of one positive series from another if we have an inequality in one direction between the terms of the two series. N N log(N + 1) n=1 1/n 1 + log N 10 2. if 0 < p ≤ 1. If converges.398 2. Theorem 3. √ converges and n≥1 1/ n diverges.) Example 3. Generalizing the harmonic series. The series n≥2 n≥1 1/n 2 ∞ 1 diverges by the integral test since n log n ∞ 2 dx = log log x x log x ∞ = ∞.” The following table illustrates these bounds when N is a power of 10.210 Table 1 Example 3.929 3. 1 Therefore n≥1 1/np converges when p > 1 and diverges when 0 < p ≤ 1.6 (Comparison test). 1 dx =− x(log x) log x ∞ = 2 1 .605 1000 6.
Because the convergence of a positive series is intimately tied up with the boundedness of its partial sums, we can determine the convergence or divergence of one positive series from another if we have an inequality in one direction between the terms of the two series. This leads to the following convergence test.

Theorem 3.6 (Comparison test). Let $0 < a_n \le b_n$ for all $n$. If $\sum_{n\ge1} b_n$ converges then $\sum_{n\ge1} a_n$ converges. If $\sum_{n\ge1} a_n$ diverges then $\sum_{n\ge1} b_n$ diverges.

Proof. Since $a_n \le b_n$,
$$(3.6)\qquad \sum_{n=1}^N a_n \le \sum_{n=1}^N b_n.$$
If $\sum_{n\ge1} b_n$ converges then the partial sums on the right side of (3.6) are bounded, so the partial sums on the left side are bounded, and therefore $\sum_{n\ge1} a_n$ converges. If $\sum_{n\ge1} a_n$ diverges then $\sum_{n=1}^N a_n \to \infty$, so $\sum_{n=1}^N b_n \to \infty$, so $\sum_{n\ge1} b_n$ diverges.

The integral test is a special case of the comparison test, where we compare $a_n$ with $\int_n^{n+1} f(x)\,dx$ and with $\int_{n-1}^{n} f(x)\,dx$.

Example 3.7. If $0 \le x < 1$ then $\sum_{n\ge1} x^n/n$ converges since $x^n/n \le x^n$, so the series is bounded above by the geometric series $\sum_{n\ge1} x^n$, which converges. Notice that, unlike the geometric series, we are not giving any kind of closed form expression for $\sum_{n\ge1} x^n/n$. All we are saying is that the series converges.

Example 3.8. The series $\sum_{n\ge1} x^n/n$ does not converge for $x \ge 1$: if $x = 1$ it is the harmonic series, and if $x > 1$ then the general term $x^n/n$ tends to $\infty$ (not 0), so the series diverges to $\infty$.

Example 3.9. If $0 \le x < 1$ then $\sum_{n\ge1} nx^n$ converges. This is obvious for $x = 0$. We can't get convergence for $0 < x < 1$ by a comparison with $\sum_{n\ge1} x^n$ because the inequality goes the wrong way: $nx^n > x^n$, so convergence of $\sum_{n\ge1} x^n$ doesn't help us show convergence of $\sum_{n\ge1} nx^n$. However, let's write $nx^n = nx^{n/2}\cdot x^{n/2}$. Since $0 < x < 1$, $nx^{n/2} \to 0$ as $n \to \infty$, so $nx^{n/2} \le 1$ for large $n$ (depending on the specific value of $x$, e.g., for $x = 1/2$ the inequality is false at $n = 3$ but is correct for $n \ge 4$). Thus for all large $n$ we have $nx^n \le x^{n/2}$, so $\sum_{n\ge1} nx^n$ converges by comparison with the geometric series $\sum_{n\ge1} x^{n/2} = \sum_{n\ge1} (\sqrt{x})^n$. If $x \ge 1$ then $\sum_{n\ge1} nx^n$ diverges since $nx^n \ge 1$ for all $n$, so the general term does not tend to 0. We have not obtained any kind of closed formula for $\sum_{n\ge1} nx^n$ when it converges.

Just as convergence of a series does not depend on the behavior of the initial terms of the series, what is important in the comparison test is an inequality $a_n \le b_n$ for large $n$. If this inequality happens not to hold for small $n$ but does hold for all large $n$ then the comparison test still works. The following two examples use this "extended comparison test" when the form of the comparison test in Theorem 3.6 does not strictly apply (because the inequality $a_n \le b_n$ doesn't hold for small $n$).
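The "inequality only for large $n$" phenomenon in Example 3.9 is concrete enough to verify by machine. A small Python sketch at $x = 1/2$ (the range bounds are ours):

```python
# In Example 3.9, n*x^n <= x^(n/2) is equivalent to n*x^(n/2) <= 1.
# At x = 1/2 the inequality fails at n = 3 but holds from n = 4 on.
x = 0.5
fails = [n for n in range(1, 50) if n * x ** (n / 2) > 1]
print(fails)
assert fails == [3]
assert all(n * x ** (n / 2) <= 1 for n in range(4, 200))
```

So an initial exception does not matter: from some point on the comparison holds, which is all the extended comparison test needs.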
Example 3.10. If $x \ge 0$ then the series $\sum_{n\ge0} x^n/n!$ converges. This is obvious for $x = 0$. For $x > 0$ we will use the comparison test and the lower bound estimate $n! > n^n/e^n$. Let's recall that this lower bound on $n!$ comes from Euler's integral formula
$$n! = \int_0^\infty t^n e^{-t}\,dt > \int_n^\infty t^n e^{-t}\,dt > n^n \int_n^\infty e^{-t}\,dt = \frac{n^n}{e^n}.$$
Then for positive $x$,
$$(3.7)\qquad \frac{x^n}{n!} < \frac{x^n}{n^n/e^n} = \Big(\frac{ex}{n}\Big)^n.$$
Here $x$ is fixed and we should think about what happens as $n$ grows. The ratio $ex/n$ tends to 0 as $n \to \infty$, so for large $n$ we have
$$(3.8)\qquad \frac{ex}{n} < \frac12$$
(any positive number less than 1 will do here; we just use 1/2 for concreteness). Raising both sides of (3.8) to the $n$th power gives $(ex/n)^n < 1/2^n$ for large $n$. Comparing this to (3.7) gives $x^n/n! < 1/2^n$ for large $n$, so the terms of $\sum_{n\ge0} x^n/n!$ drop off faster than the terms of the convergent geometric series $\sum_{n\ge0} 1/2^n$ when we go out far enough. The extended comparison test now implies convergence of $\sum_{n\ge0} x^n/n!$ if $x > 0$.

Example 3.11. The series $\sum_{n\ge1} \frac{1}{5n^2+n}$ converges since $\frac{1}{5n^2+n} < \frac{1}{5n^2}$ and $\sum_{n\ge1} \frac{1}{5n^2}$ converges by Example 3.4.

Let's try to apply the comparison test to $\sum_{n\ge1} \frac{1}{5n^2-n}$, but the inequality now goes the wrong way:
$$\frac{1}{5n^2} < \frac{1}{5n^2-n}$$
for all $n$. So we can't quite use the comparison test to show $\sum_{n\ge1} 1/(5n^2-n)$ converges (which it does). The rate of growth of the $n$th term is like $1/(5n^2)$, and that suggests convergence. The following limiting form of the comparison test will help here.

Theorem 3.12 (Limit comparison test). Let $a_n, b_n > 0$ and suppose the ratio $a_n/b_n$ has a positive limit. Then $\sum_{n\ge1} a_n$ converges if and only if $\sum_{n\ge1} b_n$ converges.

Proof. Let $L = \lim_{n\to\infty} a_n/b_n$. For all large $n$,
$$\frac{L}{2} < \frac{a_n}{b_n} < 2L,$$
so
$$(3.9)\qquad \frac{L}{2}\,b_n < a_n < 2L\,b_n$$
for large $n$. Since $L/2$ and $2L$ are positive, the convergence of $\sum_{n\ge1} b_n$ is the same as the convergence of $\sum_{n\ge1} (L/2)b_n$ and $\sum_{n\ge1} (2L)b_n$. Therefore the first inequality in (3.9) and the comparison test show convergence of $\sum_{n\ge1} b_n$ implies convergence of $\sum_{n\ge1} a_n$. The second inequality in (3.9) and the comparison test show convergence of $\sum_{n\ge1} a_n$ implies convergence of $\sum_{n\ge1} b_n$. (Although (3.9) may not be true for all $n$, it is true for large $n$, and that suffices to apply the extended comparison test.)

Example 3.13. We return to $\sum_{n\ge1} \frac{1}{5n^2-n}$. As $n \to \infty$, the $n$th term grows like $1/(5n^2)$:
$$\frac{1/(5n^2-n)}{1/(5n^2)} = \frac{5n^2}{5n^2-n} \to 1.$$
Since $\sum_{n\ge1} 1/(5n^2)$ converges, so does $\sum_{n\ge1} 1/(5n^2-n)$. (The limit comparison test is also a simpler way to settle the convergence in Example 3.11 than by using the inequalities there.)

The way to use the limit comparison test is to replace terms in a series by simpler terms which grow at the same rate (at least up to a scaling factor), and then analyze the series with those simpler terms. In particular, if $a_n$ is any sequence whose growth is like $1/n^2$ up to a scaling factor, then $\sum_{n\ge1} a_n$ converges.

Example 3.14. To determine if
$$\sum_{n\ge1} \sqrt{\frac{n^3-n^2+5n}{4n^6+n}}$$
converges, we replace the $n$th term with the expression
$$\sqrt{\frac{n^3}{4n^6}} = \frac{1}{2n^{3/2}},$$
which grows at the same rate. The series $\sum_{n\ge1} 1/(2n^{3/2})$ converges, so the original series converges.

The limit comparison test underlies an important point: the convergence of a series $\sum_{n\ge1} a_n$ is not assured just by knowing if $a_n \to 0$, but it is assured if $a_n \to 0$ rapidly enough. That is, the rate at which $a_n$ tends to 0 is often (but not always) a means of explaining the convergence of $\sum_{n\ge1} a_n$. Armed with the convergence of a few basic series (like geometric series and $p$-series), the convergence of most other series encountered in a calculus course can be determined by the limit comparison test, if you understand well the rates of growth of the standard functions. (One exception is if the series has a log term, in which case it might be useful to apply the integral test.) The difference between someone who has an intuitive feeling for convergence of series and someone who can only determine convergence on a case-by-case basis using memorized rules in a mechanical way is probably indicated by how well the person understands the connection between convergence of a series and rates of growth (really, decay) of the terms in the series.

4. Series with mixed signs

All our previous convergence tests apply to series with positive terms. They also apply to series whose terms are eventually positive (just omit any initial negative terms to get a positive series) or eventually negative (negate the series, which doesn't change the convergence status, and now we're reduced to an eventually positive series). What can we do for series whose terms are not eventually all positive or eventually all negative? These are the series with infinitely many positive terms and infinitely many negative terms.

Example 4.1. Consider the alternating harmonic series
$$\sum_{n\ge1} \frac{(-1)^{n-1}}{n} = 1 - \frac12 + \frac13 - \frac14 + \frac15 - \frac16 + \cdots.$$
The usual harmonic series diverges, but the alternating signs might have different behavior. The following table collects some of the partial sums.

    N        sum_{n=1}^N (-1)^(n-1)/n
    10       .6456
    100      .6882
    1000     .6926
    10000    .6930

Table 2
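Table 2 can be regenerated in a few lines. A Python sketch (the helper name `alt_harmonic` is ours); the partial sums creep toward roughly 0.6931:

```python
def alt_harmonic(N):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... up to N terms."""
    return sum((-1) ** (n - 1) / n for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    print(N, alt_harmonic(N))
```

The convergence is slow: going from 1,000 to 10,000 terms changes only the fourth decimal place.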
These partial sums are not clear evidence in favor of convergence: after 10,000 terms we only get apparent stability in the first 2 decimal digits. The harmonic series diverges slowly, so perhaps the alternating harmonic series diverges too (and slower).

Definition 4.2. An infinite series where the terms alternate in sign is called an alternating series.

An alternating series starting with a positive term can be written as $\sum_{n\ge1} (-1)^{n-1}a_n$ where $a_n > 0$ for all $n$. If the first term is negative then a reasonable general notation is $\sum_{n\ge1} (-1)^n a_n$ where $a_n > 0$ for all $n$. Obviously negating an alternating series of one type turns it into the other type. We will work with alternating series which have an initial positive term, so of the form $\sum_{n\ge1} (-1)^{n-1}a_n$.

Theorem 4.3 (Alternating series test). An alternating series whose $n$th term in absolute value tends monotonically to 0 as $n \to \infty$ is a convergent series.

Proof. Since successive partial sums alternate adding and subtracting an amount which steadily tends to 0, the relation between the partial sums is indicated by the following inequalities:
$$s_2 < s_4 < s_6 < \cdots < s_5 < s_3 < s_1.$$
In particular, the even-indexed partial sums are an increasing sequence which is bounded above (by $s_1$, say), so the partial sums $s_{2m}$ converge. Since $|s_{2m+1} - s_{2m}| = |a_{2m+1}| \to 0$, the odd-indexed partial sums converge to the same limit as the even-indexed partial sums, so the sequence of all partial sums has a single limit, which means $\sum_{n\ge1} (-1)^{n-1}a_n$ converges.

Example 4.4. The alternating harmonic series $\sum_{n\ge1} \frac{(-1)^{n-1}}{n}$ converges.

Example 4.5. While the $p$-series $\sum_{n\ge1} 1/n^p$ converges only for $p > 1$, the alternating $p$-series $\sum_{n\ge1} (-1)^{n-1}/n^p$ converges for the wider range of exponents $p > 0$, since it is an alternating series satisfying the hypotheses of the alternating series test.

Example 4.6. We saw $\sum_{n\ge1} x^n/n$ converges for $0 \le x < 1$ (comparison to the geometric series) in Example 3.7. If $-1 \le x < 0$ then $\sum_{n\ge1} x^n/n$ converges by the alternating series test: the negativity of $x$ makes the terms $x^n/n$ alternate in sign. To apply the alternating series test we need
$$\frac{|x|^{n+1}}{n+1} < \frac{|x|^n}{n}$$
for all $n$, which is equivalent after some algebra to $|x| < \frac{n+1}{n}$ for all $n$, and this is true since $|x| \le 1 < (n+1)/n$.

Example 4.7. In Example 3.10 the series $\sum_{n\ge0} x^n/n!$ was seen to converge for $x \ge 0$. If $x < 0$ the series is an alternating series. Does it fit the hypotheses of the alternating series test? We need
$$\frac{|x|^{n+1}}{(n+1)!} < \frac{|x|^n}{n!}$$
for all $n$, which is equivalent to $|x| < n + 1$ for all $n$. For each particular $x < 0$ this may not be true for all $n$ (if $x = -3$ it fails for $n = 1$ and 2), but it is definitely true for all sufficiently large $n$. Therefore the terms of $\sum_{n\ge0} x^n/n!$ form an "eventually alternating" series whose terms eventually decrease to 0 in absolute value, and ignoring an initial piece of a series does not affect its convergence, so $\sum_{n\ge0} x^n/n!$ converges for $x < 0$ too.
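The interlacing of partial sums used in the proof of Theorem 4.3 can be observed directly for the alternating harmonic series. A short Python check (the indexing is ours: `s[k]` holds $s_{k+1}$):

```python
# Partial sums s_1, ..., s_8 of the alternating harmonic series.
s = []
total = 0.0
for n in range(1, 9):
    total += (-1) ** (n - 1) / n
    s.append(total)

# Even-indexed partial sums increase, odd-indexed ones decrease,
# and every even-indexed sum sits below every odd-indexed sum.
assert s[1] < s[3] < s[5] < s[7]      # s_2 < s_4 < s_6 < s_8
assert s[6] < s[4] < s[2] < s[0]      # s_7 < s_5 < s_3 < s_1
assert max(s[1], s[3], s[5], s[7]) < min(s[6], s[4], s[2], s[0])
```

The two monotone sequences squeeze toward a common limit, exactly as in the proof.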
Since what matters for convergence is the long-term behavior and not the initial terms, we can apply the alternating series test to see $\sum_{n\ge0} x^n/n!$ converges by just ignoring an initial piece of the series.

When a series has infinitely many positive and negative terms which are not strictly alternating, convergence may not be easy to check. For example, the trigonometric series
$$\sum_{n\ge1} \frac{\sin(nx)}{n} = \sin x + \frac{\sin(2x)}{2} + \frac{\sin(3x)}{3} + \frac{\sin(4x)}{4} + \cdots$$
turns out to be convergent for every number $x$, but the arrangement of positive and negative terms in this series can be quite subtle in terms of $x$. The proof that this series converges uses the technique of summation by parts, which won't be discussed here.

While it is generally a delicate matter to show a nonalternating series with mixed signs converges, there is one useful property such series might have which does imply convergence. It is this: if the series with all terms made positive converges then so does the original series. Let's give this idea a name and then look at some examples.

Definition 4.8. A series $\sum_{n\ge1} a_n$ is absolutely convergent if $\sum_{n\ge1} |a_n|$ converges.

Don't confuse $\sum_{n\ge1} |a_n|$ with $|\sum_{n\ge1} a_n|$; absolute convergence refers to convergence if we drop the signs from the terms in the series, not from the series overall. The difference between $\sum_{n\ge1} a_n$ and $|\sum_{n\ge1} a_n|$ is at most a single sign, while there is a substantial difference between $\sum_{n\ge1} a_n$ and $\sum_{n\ge1} |a_n|$ if the $a_n$'s have many mixed signs; that is the difference which absolute convergence involves.

Example 4.9. We saw $\sum_{n\ge1} x^n/n$ converges if $0 \le x < 1$ in Example 3.7 and if $-1 \le x < 0$ in Example 4.6. Since $|x^n/n| = |x|^n/n$, the series $\sum_{n\ge1} x^n/n$ is absolutely convergent if $|x| < 1$. Note the alternating harmonic series $\sum_{n\ge1} (-1)^{n-1}/n$ is not absolutely convergent.

Example 4.10. The two series
$$\sum_{n\ge1} \frac{\sin(nx)}{n^2}, \qquad \sum_{n\ge1} \frac{\cos(nx)}{n^2},$$
where the $n$th term has denominator $n^2$, are absolutely convergent for any $x$ since the absolute value of the $n$th term is at most $1/n^2$ and $\sum_{n\ge1} 1/n^2$ converges.

Example 4.11. Since $|nx^n| = n|x|^n$, the series $\sum_{n\ge1} nx^n$ converges absolutely if $|x| < 1$, by Example 3.9 applied to $|x|$.

Example 4.12. In Example 3.10 we saw $\sum_{n\ge0} x^n/n!$ converges for all $x \ge 0$. Since $|x^n/n!| = |x|^n/n!$, the series $\sum_{n\ge0} x^n/n!$ is absolutely convergent for all $x$.
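Example 4.10 at $x = 1$ makes a nice numerical illustration of absolute convergence: the partial sums of $\sum |\sin(n)|/n^2$ stay below $\sum 1/n^2 = \pi^2/6$, and the signed sum is trapped by the absolute one. A Python sketch (the cutoff 100,000 is ours):

```python
import math

# Example 4.10 with x = 1: sum of sin(n)/n^2 converges absolutely,
# since sum of |sin(n)|/n^2 is dominated by sum of 1/n^2 (= pi^2/6).
abs_partial = 0.0
signed_partial = 0.0
for n in range(1, 100001):
    term = math.sin(n) / n ** 2
    abs_partial += abs(term)
    signed_partial += term

assert abs_partial < math.pi ** 2 / 6        # dominated by the p-series
assert abs(signed_partial) <= abs_partial    # triangle inequality
print(signed_partial, abs_partial)
```

Both partial sums are increasing-in-bound and bounded, which is the whole mechanism behind the comparison test for absolute convergence.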
The relevance of absolute convergence is twofold: 1) we can use the many convergence tests for positive series to determine if a series is absolutely convergent, and 2) absolute convergence implies ordinary convergence. We just illustrated the first point several times. Let's show the second point.

Theorem 4.13 (Absolute convergence test). Every absolutely convergent series is a convergent series.

Proof. For all $n$ we have $-|a_n| \le a_n \le |a_n|$, so $0 \le a_n + |a_n| \le 2|a_n|$. Since $\sum_{n\ge1} |a_n|$ converges, so does $\sum_{n\ge1} 2|a_n|$ (to twice the value). Therefore by the comparison test we obtain convergence of $\sum_{n\ge1} (a_n + |a_n|)$. Subtracting $\sum_{n\ge1} |a_n|$ from this shows $\sum_{n\ge1} a_n$ converges.

Thus the series $\sum_{n\ge1} x^n/n$ and $\sum_{n\ge1} nx^n$ converge for $|x| < 1$, and $\sum_{n\ge0} x^n/n!$ converges for all $x$, by our treatment of these series for positive $x$ alone in Section 3. The argument we gave in Examples 4.6 and 4.7 for the convergence of $\sum_{n\ge1} x^n/n$ and $\sum_{n\ge0} x^n/n!$ when $x < 0$, using the alternating series test, can be avoided. (But we do need the alternating series test to show $\sum_{n\ge1} x^n/n$ converges at $x = -1$.)

A series which converges but is not absolutely convergent is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series $\sum_{n\ge1} (-1)^{n-1}/n$.

The distinction between absolutely convergent series and conditionally convergent series might seem kind of minor, since it involves whether or not a particular convergence test (the absolute convergence test) works on that series. But the distinction is actually profound, and is nicely illustrated by the following example of Dirichlet (1837).

Example 4.14. Let $L = 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + \cdots$ be the alternating harmonic series. Consider the rearrangement of the terms where we follow a positive term by two negative terms:
$$1 - \frac12 - \frac14 + \frac13 - \frac16 - \frac18 + \frac15 - \frac{1}{10} - \frac{1}{12} + \cdots$$
If we add each positive term to the negative term following it, we obtain
$$\frac12 - \frac14 + \frac16 - \frac18 + \frac{1}{10} - \frac{1}{12} + \cdots,$$
which is precisely half the alternating harmonic series (multiply $L$ by 1/2 and see what you get termwise). Therefore, by rearranging the terms of the alternating harmonic series we obtained a new series whose sum is $(1/2)L$ instead of $L$. If $(1/2)L = L$, doesn't that mean $1 = 2$? What happened? This example shows addition of terms in an infinite series is not generally commutative, unlike with finite series. In fact, this feature is characteristic of the conditionally convergent series, as seen in the following amazing theorem.

Theorem 4.15 (Riemann, 1854). If an infinite series is conditionally convergent then it can be rearranged to sum up to any desired value. If an infinite series is absolutely convergent then all of its rearrangements converge to the same sum.

This result, whose proof we omit, is called Riemann's rearrangement theorem. It indicates that absolutely convergent series are better behaved than conditionally convergent ones. The ordinary rules of algebra for finite series generally extend to absolutely convergent series but not to conditionally convergent series. That we rearranged the alternating harmonic series in Example 4.14 to sum up to half its usual value was special only in the sense that we could make the rearrangement achieving that effect quite explicit. The alternating harmonic series can be rearranged to sum up to 1, to $-5.673$, to $\pi$, or to any other number you wish.
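The explicit rearrangement above (one positive term, then two negative terms) can be watched converging to half the usual value. A Python sketch (the block structure in `rearranged` is ours):

```python
import math

def rearranged(k):
    """Partial sum of 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ... using k
    blocks of (one positive term, two negative terms)."""
    total = 0.0
    for j in range(1, k + 1):
        total += 1 / (2 * j - 1) - 1 / (4 * j - 2) - 1 / (4 * j)
    return total

# The rearranged series heads to half of log 2, not to log 2.
print(rearranged(100000), math.log(2) / 2)
assert abs(rearranged(100000) - math.log(2) / 2) < 1e-5
```

The same terms, added in a different order, visibly settle on a different sum.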
5. Power series

The simplest functions are polynomials: $c_0 + c_1x + \cdots + c_dx^d$. In differential calculus we learn how to locally approximate many functions by linear functions (the tangent line approximation). We now introduce an idea which will let us give an exact representation of functions in terms of "polynomials of infinite degree." The precise name for this idea is a power series.

Definition 5.1. A power series is a function of the form $f(x) = \sum_{n\ge0} c_nx^n$.

For instance, a polynomial is a power series where $c_n = 0$ for large $n$.

Example 5.2. The geometric series $\sum_{n\ge0} x^n$ is a power series. It only converges for $|x| < 1$.

Example 5.3. We have also met other power series, like
$$\sum_{n\ge1} \frac{x^n}{n}, \qquad \sum_{n\ge1} nx^n, \qquad \sum_{n\ge0} \frac{x^n}{n!}.$$
These three series converge on the respective intervals $[-1,1)$, $(-1,1)$ and $(-\infty,\infty)$.

For any power series $\sum_{n\ge0} c_nx^n$ it is a basic task to find those $x$ where the series converges. It turns out in general, as in all of our examples, that a power series converges on an interval. Let's see why.

Theorem 5.4. If a power series $\sum_{n\ge0} c_nx^n$ converges at $x = b \ne 0$ then it converges absolutely for $|x| < |b|$.

Proof. Since $\sum_{n\ge0} c_nb^n$ converges, the general term tends to 0, so $|c_nb^n| \le 1$ for large $n$. If $|x| < |b|$ then
$$|c_nx^n| = |c_nb^n|\,\Big|\frac{x}{b}\Big|^n,$$
which is at most $|x/b|^n$ for large $n$. The series $\sum_{n\ge0} |x/b|^n$ converges since it's a geometric series and $|x/b| < 1$. Therefore by the (extended) comparison test $\sum_{n\ge0} |c_nx^n|$ converges, so $\sum_{n\ge0} c_nx^n$ is absolutely convergent.

For example, if a power series converges at $-7$ then it converges on the interval $(-7,7)$. We can't say for sure how it behaves at 7 itself. If a power series diverges at $-7$ then it diverges for $|x| > 7$: if it were to converge at a number $x$ with $|x| > 7$ then it would converge on the interval $(-|x|,|x|)$, so in particular at $-7$, a contradiction.

What these two examples illustrate, as consequences of Theorem 5.4, is that the values of $x$ where a power series $\sum_{n\ge0} c_nx^n$ converges form an interval centered at 0, so of the form
$$(-r,r), \quad [-r,r), \quad (-r,r], \quad [-r,r]$$
for some $r$. We call $r$ the radius of convergence of the power series. The interval is the actual set where the power series converges, and the only difference between these different intervals is the presence or absence of the endpoints. All possible types of interval of convergence can occur: $\sum_{n\ge0} x^n$ has interval of convergence $(-1,1)$, $\sum_{n\ge1} x^n/n$ has interval of convergence $[-1,1)$, $\sum_{n\ge1} (-1)^nx^n/n$ has interval of convergence $(-1,1]$ and $\sum_{n\ge1} x^n/n^2$ (a new example we have not met before) has interval of convergence $[-1,1]$.

The radius of convergence and the interval of convergence are closely related but should not be confused. The radius is simply the half-length of the interval of convergence (and doesn't tell us whether or not the endpoints are included). If we don't care about convergence behavior on the boundary of the interval of convergence then we can get by just knowing the radius of convergence: the series always converges (absolutely) on the inside of the interval of convergence.
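The interval-of-convergence behavior of the geometric series is easy to see numerically: inside $(-1,1)$ the partial sums settle down, outside they blow up. A Python sketch (the helper name `geom_partial` and the cutoffs are ours):

```python
def geom_partial(x, N):
    """Partial sum 1 + x + ... + x^N of the geometric power series."""
    total = 0.0
    for n in range(N + 1):
        total += x ** n
    return total

# Inside the interval of convergence the partial sums stabilize...
assert abs(geom_partial(0.5, 60) - 2.0) < 1e-12       # limit 1/(1 - 0.5)
assert abs(geom_partial(-0.5, 60) - 2 / 3) < 1e-12    # limit 1/(1 + 0.5)
# ...outside it they blow up.
assert geom_partial(1.5, 60) > 1e9
```

The sharp change in behavior at $|x| = 1$ is the radius of convergence showing itself.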
For the practical computation of the radius of convergence in basic examples it is convenient to use a new convergence test for positive series.

Theorem 5.5 (Ratio test). If $a_n > 0$ for all $n$, assume
$$q = \lim_{n\to\infty} \frac{a_{n+1}}{a_n}$$
exists. If $q < 1$ then $\sum_{n\ge1} a_n$ converges. If $q > 1$ then $\sum_{n\ge1} a_n$ diverges. If $q = 1$ then no conclusion can be drawn.

Proof. Assume $q < 1$. Pick $s$ between $q$ and 1: $q < s < 1$. For all large $n$, $a_{n+1}/a_n < s$, say for $n \ge n_0$, so $a_{n+1} < sa_n$. Then
$$a_{n_0+1} < sa_{n_0}, \qquad a_{n_0+2} < sa_{n_0+1} < s^2a_{n_0}, \qquad a_{n_0+3} < sa_{n_0+2} < s^3a_{n_0},$$
and more generally $a_{n_0+k} < s^ka_{n_0}$ for any $k \ge 0$. Writing $n_0 + k$ as $n$, we have
$$n \ge n_0 \Longrightarrow a_n < s^{n-n_0}a_{n_0} = \frac{a_{n_0}}{s^{n_0}}\,s^n.$$
Therefore we can bound the growth of $a_n$ from above by $s^n$ up to a scaling factor. Hence $\sum_{n\ge1} a_n$ converges by comparison to a convergent geometric series ($s < 1$). In the other direction, if $q > 1$ then pick $s$ between 1 and $q$. An argument similar to the previous one shows $a_n$ grows at least as quickly as a constant multiple of $s^n$, but this time $\sum_{n\ge1} s^n$ diverges since $s > 1$. So $\sum_{n\ge1} a_n$ diverges too. When $q = 1$ we can't make a definite conclusion since both $\sum_{n\ge1} 1/n$ and $\sum_{n\ge1} 1/n^2$ have $q = 1$.

Example 5.6. We show the series $\sum_{n\ge0} x^n/n!$ converges absolutely for all $x$. Using the ratio test with $a_n = |x|^n/n!$,
$$\frac{|x|^{n+1}/(n+1)!}{|x|^n/n!} = \frac{|x|}{n+1} \to 0$$
as $n \to \infty$. Since this limit is less than 1 for all $x$, we are done by the ratio test. The radius of convergence is infinite.

Example 5.7. A similar argument shows $\sum_{n\ge1} x^n/n$ has radius of convergence 1, since
$$\frac{|x|^{n+1}/(n+1)}{|x|^n/n} = \frac{n}{n+1}\,|x| \to |x|$$
as $n \to \infty$. Thus the series converges absolutely for $|x| < 1$ and diverges for $|x| > 1$; we have to check $x = 1$ and $x = -1$ individually.

Example 5.8. We give a proof that $\sum_{n\ge1} nx^n$ has radius of convergence 1 which is shorter than Examples 3.9 and 4.11. Take $a_n = |nx^n| = n|x|^n$. Then
$$\frac{a_{n+1}}{a_n} = \frac{n+1}{n}\,|x| \to |x|$$
as $n \to \infty$. Thus if $|x| < 1$ the series $\sum_{n\ge1} nx^n$ is absolutely convergent by the ratio test. If $|x| > 1$ this series is divergent. Notice the ratio test does not tell us what happens to $\sum_{n\ge1} nx^n$ when $|x| = 1$; we have to check $x = 1$ and $x = -1$ individually.
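The ratio test is also easy to watch numerically. For $a_n = n|x|^n$ at $x = 0.9$, the ratios $(1 + 1/n)|x|$ drift down toward $|x| = 0.9 < 1$, consistent with convergence. A Python sketch (the sample points are ours):

```python
# Ratio test data for a_n = n * x^n at x = 0.9 (as in Example 5.8):
# a_{n+1}/a_n = (1 + 1/n) * x tends to x = 0.9 < 1.
x = 0.9
ratios = [((n + 1) * x ** (n + 1)) / (n * x ** n) for n in (10, 100, 1000)]
print(ratios)
assert all(r > 0.9 for r in ratios)
assert abs(ratios[-1] - 0.9) < 1e-3

# Consistent with convergence: the partial sums stabilize.
partial_2000 = sum(n * x ** n for n in range(1, 2001))
partial_3000 = sum(n * x ** n for n in range(1, 3001))
assert abs(partial_2000 - partial_3000) < 1e-12
```

The early ratios exceed 1 for small $n$, which is another reminder that only the limiting behavior matters.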
Power series are important for two reasons: they give us much greater ﬂexibility to deﬁne new kinds of functions and many standard functions can be expressed in terms of a power series.
Intuitively, if a known function $f(x)$ has a power series representation on some interval around 0, say
$$f(x) = \sum_{n\ge0} c_nx^n = c_0 + c_1x + c_2x^2 + c_3x^3 + c_4x^4 + \cdots$$
for $|x| < r$, then we can guess a formula for $c_n$ in terms of the behavior of $f(x)$. To begin we have
$$f(0) = c_0.$$
Now we think about power series as something like "infinite degree polynomials." We know how to differentiate a polynomial by differentiating each power of $x$ separately. Let's assume this works for power series too in the interval of convergence:
$$f'(x) = \sum_{n\ge1} nc_nx^{n-1} = c_1 + 2c_2x + 3c_3x^2 + 4c_4x^3 + \cdots,$$
so we expect
$$f'(0) = c_1.$$
Now let's differentiate again:
$$f''(x) = \sum_{n\ge2} n(n-1)c_nx^{n-2} = 2c_2 + 6c_3x + 12c_4x^2 + \cdots,$$
so $f''(0) = 2c_2$. Differentiating a third time (formally) and setting $x = 0$ gives $f^{(3)}(0) = 6c_3$. The general rule appears to be $f^{(n)}(0) = n!\,c_n$, so we should have
$$c_n = \frac{f^{(n)}(0)}{n!}.$$
This procedure really is valid, according to the following theorem, whose long proof is omitted.

Theorem 5.9. Any function represented by a power series in an open interval $(-r,r)$ is infinitely differentiable in $(-r,r)$, and its derivatives can be computed by termwise differentiation of the power series.

This means our previous calculations are justified: if a function $f(x)$ can be written in the form $\sum_{n\ge0} c_nx^n$ in an interval around 0 then we must have
$$(5.1)\qquad c_n = \frac{f^{(n)}(0)}{n!}.$$
In particular, a function has at most one expression as a power series $\sum_{n\ge0} c_nx^n$ around 0. And a function which is not infinitely differentiable around 0 will definitely not have a power series representation $\sum_{n\ge0} c_nx^n$. For instance, $|x|$ has no power series representation of this form since it is not differentiable at 0.

Remark 5.10. We can also consider power series $f(x) = \sum_{n\ge0} c_n(x-a)^n$, whose interval of convergence is centered at $a$. In this case the coefficients are given by the formula $c_n = f^{(n)}(a)/n!$. For simplicity we will focus on power series "centered at 0" only.
Example 5.11. If $|x| < 1$ then
$$\frac{1}{1-x} = \sum_{n\ge0} x^n.$$
Differentiating both sides, for $|x| < 1$ we have
$$\frac{1}{(1-x)^2} = \sum_{n\ge1} nx^{n-1}.$$
Multiplying by $x$ gives us
$$\sum_{n\ge1} nx^n = \frac{x}{(1-x)^2},$$
so we have finally found a "closed form" expression for an infinite series we first met back in Example 3.9.

Example 5.12. If there is a power series representation for $e^x$ then (5.1) shows its coefficients must be $c_n = 1/n!$. That is, the only possible way to write $e^x$ as a power series is
$$\sum_{n\ge0} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots.$$
Is this really a valid formula for $e^x$? Well, we checked before that this power series converges for all $x$. Moreover, calculating the derivative of this series reproduces the series again, so this series is a function satisfying $y'(x) = y(x)$. The only solutions to this differential equation are $Ce^x$, and we can recover $C$ by setting $x = 0$. Since the power series has constant term 1 (so its value at $x = 0$ is 1), its $C$ is 1, so this power series is $e^x$:
$$e^x = \sum_{n\ge0} \frac{x^n}{n!}.$$
Loosely speaking, this means the polynomials
$$1 + x, \qquad 1 + x + \frac{x^2}{2}, \qquad 1 + x + \frac{x^2}{2} + \frac{x^3}{6}, \quad \ldots$$
are good approximations to $e^x$. (Where they are good will depend on how far $x$ is from 0.) In particular, setting $x = 1$ gives an infinite series representation of the number $e$:
$$(5.2)\qquad e = \sum_{n\ge0} \frac{1}{n!} = 1 + 1 + \frac{1}{2} + \frac{1}{3!} + \frac{1}{4!} + \cdots.$$

Remark 5.13. Because a power series can be differentiated (inside its interval of convergence) termwise, we can discover solutions to a differential equation by inserting a power series with unknown coefficients into the differential equation to get relations between the coefficients. Then a few initial coefficients, once chosen, should determine all the higher ones. After we compute the radius of convergence of this new power series we will have found a solution to the differential equation in a specific interval. This idea goes back to Newton. It does not necessarily provide us with all solutions to a differential equation, but it is one of the standard methods to find some solutions.

Remark 5.14. Even if a power series converges on a closed interval $[-r,r]$, the power series for its derivative need not converge at the endpoints. Consider $f(x) = \sum_{n\ge1} (-1)^nx^{2n}/n$. It converges for $|x| \le 1$, but the derivative series is $\sum_{n\ge1} 2(-1)^nx^{2n-1}$, which converges only for $|x| < 1$.
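The factorial series (5.2) converges to $e$ remarkably fast, which is worth seeing before we use (5.2) for theoretical purposes. A quick Python check (the cutoff 18 is ours):

```python
import math

# Partial sums of (5.2): e = sum of 1/n!, which converges very fast.
term, total = 1.0, 0.0
for n in range(18):
    total += term      # term holds 1/n! at this point
    term /= (n + 1)

print(total, math.e)
assert abs(total - math.e) < 1e-14
```

Eighteen terms already agree with `math.e` to the limits of double precision, in sharp contrast to the slow creep of the alternating harmonic series seen earlier.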
Formula (5.2) can be used to verify an interesting fact about $e$.

Theorem 5.15. The number $e$ is irrational.

Proof. (Fourier) For any $n$,
$$e = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots + \frac{1}{n!} + \frac{1}{(n+1)!} + \frac{1}{(n+2)!} + \cdots
  = \Big(1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots + \frac{1}{n!}\Big) + \frac{1}{n!}\Big(\frac{1}{n+1} + \frac{1}{(n+2)(n+1)} + \cdots\Big).$$
The second term in parentheses is positive and bounded above by the geometric series
$$\frac{1}{n+1} + \frac{1}{(n+1)^2} + \frac{1}{(n+1)^3} + \cdots = \frac{1}{n}.$$
Therefore
$$0 < e - \Big(1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots + \frac{1}{n!}\Big) \le \frac{1}{n\cdot n!}.$$
Write the sum $1 + 1 + 1/2! + \cdots + 1/n!$ as a fraction with common denominator $n!$, say as $p_n/n!$. Clear the denominator $n!$ to get
$$(5.3)\qquad 0 < n!\,e - p_n \le \frac{1}{n}.$$
So far everything we have done involves no unproved assumptions. Now we introduce the rationality assumption. If $e$ is rational, then $n!\,e$ is an integer when $n$ is large (since the denominator of $e$ is a factor of $n!$ for large $n$). But that makes $n!\,e - p_n$ an integer located in the open interval $(0,1)$, which is absurd. We have a contradiction, so $e$ is irrational.

We return to the general task of representing functions by their power series.

Example 5.16. Let $f(x) = \sum_{n\ge1} x^n/n$. For $|x| < 1$ we have
$$f'(x) = \sum_{n\ge1} x^{n-1} = \sum_{n\ge0} x^n = \frac{1}{1-x}.$$
When $|x| < 1$, an antiderivative of $1/(1-x)$ is $-\log(1-x)$. Since $f(x)$ and $-\log(1-x)$ have the same value (zero) at $x = 0$, they are equal:
$$-\log(1-x) = \sum_{n\ge1} \frac{x^n}{n}$$
for $|x| < 1$. Replacing $x$ with $-x$ and negating gives
$$(5.4)\qquad \log(1+x) = \sum_{n\ge1} \frac{(-1)^{n-1}}{n}\,x^n$$
for $|x| < 1$. The right side converges at $x = 1$ (alternating series) and diverges at $x = -1$. It seems plausible, since the series equals $\log(1+x)$ on $(-1,1)$, that this should remain true at the boundary: does $\sum_{n\ge1} (-1)^{n-1}/n$ equal $\log 2$? (Notice this is the alternating harmonic series.) We will return to this later.

Example 5.17. Viewing $1/(1+x^2)$ as $1/(1-(-x^2))$, we have the geometric series formula
$$\frac{1}{1+x^2} = \sum_{n\ge0} (-x^2)^n = \sum_{n\ge0} (-1)^nx^{2n}$$
when $|{-x^2}| < 1$, or equivalently $|x| < 1$. The series has a finite interval of convergence $(-1,1)$ even though the function $1/(1+x^2)$ has no bad behavior at the endpoints: it is infinitely differentiable at every real number $x$. So even when a function is infinitely differentiable for all $x$, its power series could have a finite radius of convergence.

If we want to show an infinitely differentiable function equals its associated power series $\sum_{n\ge0} (f^{(n)}(0)/n!)x^n$ on some interval around 0, we need some kind of error estimate for the difference between $f(x)$ and the polynomial $\sum_{n=0}^N (f^{(n)}(0)/n!)x^n$. When the estimate goes to 0 as $N \to \infty$ we will have proved $f(x)$ is equal to its power series.

Theorem 5.18. If $f(x)$ is infinitely differentiable then for all $N \ge 0$,
$$f(x) = \sum_{n=0}^N \frac{f^{(n)}(0)}{n!}x^n + R_N(x),$$
where
$$R_N(x) = \frac{1}{N!}\int_0^x f^{(N+1)}(t)(x-t)^N\,dt.$$

Proof. When $N = 0$ the desired result says
$$f(x) = f(0) + \int_0^x f'(t)\,dt.$$
This is precisely the fundamental theorem of calculus! We obtain the $N = 1$ case from this by integration by parts. Set $u = f'(t)$ and $dv = dt$. Then $du = f''(t)\,dt$ and we (cleverly!) take $v = t - x$ (rather than just $t$). Then
$$\int_0^x f'(t)\,dt = f'(t)(t-x)\Big|_{t=0}^{t=x} - \int_0^x f''(t)(t-x)\,dt = f'(0)x + \int_0^x f''(t)(x-t)\,dt,$$
so
$$f(x) = f(0) + f'(0)x + \int_0^x f''(t)(x-t)\,dt.$$
Now apply integration by parts to this new integral with $u = f''(t)$ and $dv = (x-t)\,dt$. Then $du = f^{(3)}(t)\,dt$ and (cleverly) use $v = -(1/2)(x-t)^2$. The result is
$$\int_0^x f''(t)(x-t)\,dt = \frac{f''(0)}{2}x^2 + \frac{1}{2}\int_0^x f^{(3)}(t)(x-t)^2\,dt,$$
so
$$f(x) = f(0) + f'(0)x + \frac{f''(0)}{2}x^2 + \frac{1}{2!}\int_0^x f^{(3)}(t)(x-t)^2\,dt.$$
Integrating by parts several more times gives the desired result.

We call $R_N(x)$ a remainder term. The difference between $f(x)$ and the $N$th partial sum of its power series is $R_N(x)$, so $f(x)$ equals its power series precisely when $R_N(x) \to 0$ as $N \to \infty$.

Corollary 5.19. If $f$ is infinitely differentiable then the following conditions at the number $x$ are equivalent:
• $f(x) = \sum_{n\ge0} (f^{(n)}(0)/n!)x^n$,
• $R_N(x) \to 0$ as $N \to \infty$.

In practice, then, showing a function equals its power series on an interval comes down to estimating the remainder term.
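The integral formula for $R_N(x)$ in Theorem 5.18 can be tested numerically: for $f = \exp$, $x = 1$ and $N = 3$, the integral $(1/3!)\int_0^1 e^t(1-t)^3\,dt$ should equal $e$ minus the degree-3 Taylor polynomial at 1. A Python sketch with a homemade Simpson rule (the function names are ours):

```python
import math

def simpson(g, a, b, m=1000):
    """Composite Simpson rule on [a, b] with 2m subintervals."""
    h = (b - a) / (2 * m)
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * i - 1) * h) for i in range(1, m + 1))
    s += 2 * sum(g(a + 2 * i * h) for i in range(1, m))
    return s * h / 3

# Theorem 5.18 for f = exp, x = 1, N = 3: the (N+1)st derivative of exp
# is exp, so R_3(1) = (1/3!) * integral of e^t (1 - t)^3 over [0, 1].
N = 3
remainder_integral = simpson(lambda t: math.exp(t) * (1 - t) ** N, 0, 1) / math.factorial(N)
remainder_direct = math.e - sum(1 / math.factorial(n) for n in range(N + 1))
print(remainder_integral, remainder_direct)
assert abs(remainder_integral - remainder_direct) < 1e-10
```

The two computations of the remainder agree to ten decimal places, which is a concrete check that the integral form of $R_N$ really does measure the gap between $f$ and its Taylor polynomial.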
Example 5.20. In (5.4) we showed $\log(1+x)$ equals $\sum_{n\ge1} (-1)^{n-1}x^n/n$ for $|x| < 1$. What about at $x = 1$? For this purpose we want to show $R_N(1) \to 0$ as $N \to \infty$ when $f(x) = \log(1+x)$. To estimate $R_N(1)$ we need to compute the higher derivatives of $f(x)$:
$$f'(x) = \frac{1}{1+x}, \quad f''(x) = -\frac{1}{(1+x)^2}, \quad f^{(3)}(x) = \frac{2}{(1+x)^3}, \quad f^{(4)}(x) = -\frac{6}{(1+x)^4},$$
and in general
$$f^{(n)}(x) = \frac{(-1)^{n-1}(n-1)!}{(1+x)^n}$$
for $n \ge 1$. Therefore
$$R_N(1) = \frac{1}{N!}\int_0^1 \frac{(-1)^N N!}{(1+t)^{N+1}}(1-t)^N\,dt = (-1)^N\int_0^1 \frac{(1-t)^N}{(1+t)^{N+1}}\,dt,$$
so
$$|R_N(1)| \le \int_0^1 \frac{dt}{(1+t)^{N+1}} = \frac{1}{N} - \frac{1}{N2^N} \to 0$$
as $N \to \infty$. Since the remainder term tends to 0,
$$\log 2 = \sum_{n\ge1} \frac{(-1)^{n-1}}{n}.$$

Example 5.21. Let $f(x) = \sin x$. Its power series is
$$\sum_{n\ge0} (-1)^n\frac{x^{2n+1}}{(2n+1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots,$$
which converges for all $x$ by the ratio test. We will show $\sin x$ equals its power series for all $x$. This is obvious if $x = 0$. Since $\sin x$ has derivatives $\pm\cos x$ and $\pm\sin x$, which are bounded by 1, for $x > 0$
$$|R_N(x)| = \Big|\frac{1}{N!}\int_0^x f^{(N+1)}(t)(x-t)^N\,dt\Big| \le \frac{1}{N!}\int_0^x |x-t|^N\,dt \le \frac{1}{N!}\int_0^x x^N\,dt = \frac{x^{N+1}}{N!}.$$
This tends to 0 as $N \to \infty$, so $\sin x$ does equal its power series for $x > 0$. If $x < 0$ then we can similarly estimate $R_N(x)$ and show it tends to 0 as $N \to \infty$, but we can also use a little trick because we know $\sin x$ is an odd function: if $x < 0$ then $-x > 0$, so
$$\sin x = -\sin(-x) = -\sum_{n\ge0} (-1)^n\frac{(-x)^{2n+1}}{(2n+1)!}$$
from the power series representation of the sine function at positive numbers. Since $(-x)^{2n+1} = -x^{2n+1}$,
$$\sin x = -\sum_{n\ge0} (-1)^n\frac{-x^{2n+1}}{(2n+1)!} = \sum_{n\ge0} (-1)^n\frac{x^{2n+1}}{(2n+1)!},$$
which shows $\sin x$ equals its power series for $x < 0$ too.

By similar work we can show $\cos x$ equals its power series representation everywhere:
$$\cos x = \sum_{n\ge0} (-1)^n\frac{x^{2n}}{(2n)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots.$$

Example 5.22. Is the exponential function related to the trigonometric functions? This idea is suggested because of their power series: the power series for $\sin x$ and $\cos x$ look like the odd and even degree terms in the power series for $e^x$. Since $e^x$ has a series without the $(-1)^n$ factors, we can get a match between these series by working not with $e^x$ but with $e^{ix}$, where $i$ is a square root of $-1$. We have not defined infinite series (or even the exponential function) using complex numbers. However, assuming we could make sense of this, then we would expect to have
$$e^{ix} = \sum_{n\ge0} \frac{(ix)^n}{n!} = 1 + ix - \frac{x^2}{2!} - i\frac{x^3}{3!} + \frac{x^4}{4!} + i\frac{x^5}{5!} + \cdots
= \Big(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots\Big) + i\Big(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots\Big) = \cos x + i\sin x.$$
For example, setting $x = \pi$ would say
$$e^{i\pi} = -1.$$
Replacing $x$ with $-x$ in the formula for $e^{ix}$ gives $e^{-ix} = \cos x - i\sin x$. Adding and subtracting the formulas for $e^{ix}$ and $e^{-ix}$ lets us express the real trigonometric functions $\sin x$ and $\cos x$ in terms of (complex) exponential functions:
$$\cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad \sin x = \frac{e^{ix} - e^{-ix}}{2i}.$$
Evidently a deeper study of infinite series should make systematic use of complex numbers!

The following question is natural: who needs all the remainder estimate business from Theorem 5.18 and Corollary 5.19? After all, why not just compute the power series for $f(x)$ and find its radius of convergence? That has to be where $f(x)$ equals its power series. Alas, this is false.

Example 5.23. Let
$$f(x) = \begin{cases} e^{-1/x^2}, & \text{if } x \ne 0, \\ 0, & \text{if } x = 0. \end{cases}$$
It can be shown that $f(x)$ is infinitely differentiable at $x = 0$ and $f^{(n)}(0) = 0$ for all $n$, so the power series for $f(x)$ has every coefficient equal to 0, which means the power series is the zero function. But $f(x) \ne 0$ if $x \ne 0$, so $f(x)$ is not equal to its power series anywhere except at $x = 0$.

Example 5.23 shows that if a function $f(x)$ is infinitely differentiable for all $x$ and the power series $\sum_{n\ge0} (f^{(n)}(0)/n!)x^n$ converges for all $x$, $f(x)$ need not equal its power series anywhere except at $x = 0$ (where they must agree, since the constant term of the power series is $f(0)$). The need for remainder estimates in general is important: this is why there is nontrivial content in saying a function can be represented by its power series on some interval.
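Although Example 5.22 is only formal here, the identities it produces are numerically solid and can be checked with Python's complex arithmetic (a sketch; the sample points are ours):

```python
import cmath
import math

# Numerical check of e^(ix) = cos x + i sin x, including e^(i*pi) = -1.
for x in (0.0, 1.0, 2.5, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert abs(lhs - rhs) < 1e-12
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# sin and cos recovered from complex exponentials, as in the text.
x = 0.7
sin_from_exp = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j
cos_from_exp = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
assert abs(sin_from_exp - math.sin(x)) < 1e-12
assert abs(cos_from_exp - math.cos(x)) < 1e-12
```

Of course this does not define $e^{ix}$ rigorously; it only confirms that the formal power series manipulation lands on identities that hold to machine precision.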