Chapter 1
Review of Complex Numbers
Complex numbers are defined in terms of the imaginary unit, i, having the
property
i^2 = −1. (1.1)
A general complex number has the form
z = x + iy, (1.2)
where x, y are real numbers. We also often write
z = Re z + i Im z, (1.3)
where Re z is the “real part of z,” and Im z is the “imaginary part of z.” Complex
numbers are added and multiplied just like real numbers: If
z_1 = x_1 + iy_1, (1.4a)
z_2 = x_2 + iy_2, (1.4b)
then
z_1 + z_2 = (x_1 + x_2) + i(y_1 + y_2), (1.5a)
z_1 z_2 = x_1 x_2 + iy_1 x_2 + ix_1 y_2 + i^2 y_1 y_2
= x_1 x_2 − y_1 y_2 + i(x_1 y_2 + x_2 y_1). (1.5b)
The complex conjugate of a number is obtained by reversing the sign of i: If
z = x + iy, we deﬁne the complex conjugate of z by
z^* = x − iy. (1.6)
1 Version of August 22, 2011
Figure 1.1: Geometrical interpretation of a complex number z = x + iy.
(Sometimes the notation z̄ is used for the complex conjugate of z.) Note that
Re z = (z + z^*)/2, (1.7a)
Im z = (z − z^*)/(2i). (1.7b)
Note also that
zz^* = x^2 + y^2 (1.8)
is purely real and nonnegative, so we deﬁne the modulus, or magnitude, or
absolute value of z by
|z| = √(zz^*) = √[(Re z)^2 + (Im z)^2], (1.9)
where the positive square root is implied.
We give a simple geometrical interpretation to complex numbers, by thinking
of them as two-dimensional vectors, as sketched in Fig. 1.1. Here the length of
the vector is the magnitude of the complex number,
r = |z|, (1.10)
and the angle the vector makes with the real axis is θ, where
tan θ = y/x; (1.11)
the quadrant θ lies in is determined by the signs of x and y. We call
θ = arg z (1.12)
the argument or phase of z. The above geometrical picture is sometimes called
an Argand diagram.
Figure 1.2: Geometrical interpretation of complex conjugation.
There is an arbitrariness in the choice of the argument θ of a complex number
z, for one can always add an arbitrary multiple of 2π to θ without changing z,
θ → θ + 2πn, n an integer, z → z. (1.13)
It is often convenient to define a single-valued argument function arg z. By
convention, the principal value of arg z is that phase angle which satisﬁes the
inequality
−π < arg z ≤ π. (1.14)
(Note that radian measure is always employed.) For every z there is a unique
arg z lying in this range.
The geometrical signiﬁcance of complex conjugation is shown in Fig. 1.2.
Complex conjugation corresponds to reflection in the x-axis.
From the Argand diagram we can write down the “polar representation” of
a complex number,
z = r cos θ + ir sin θ
= r(cos θ + i sin θ), (1.15)
so if we have two complex numbers,
z_1 = r_1(cos θ_1 + i sin θ_1), (1.16a)
z_2 = r_2(cos θ_2 + i sin θ_2), (1.16b)
the product is
z_1 z_2 = r_1 r_2 {cos θ_1 cos θ_2 − sin θ_1 sin θ_2 + i [cos θ_1 sin θ_2 + cos θ_2 sin θ_1]}
= r_1 r_2 [cos(θ_1 + θ_2) + i sin(θ_1 + θ_2)]. (1.17)
That is, the moduli of the complex numbers multiply,
|z_1 z_2| = |z_1| |z_2|, (1.18a)
while the arguments add,
arg(z_1 z_2) = arg z_1 + arg z_2. (1.18b)
The latter statement is to be understood as modulo 2π, i.e., equality up to the
addition of an arbitrary integer multiple of 2π. In particular, note that
|z (1/z)| = |1/z| |z| = 1, (1.19a)
while
0 = arg[z (1/z)] = arg(1/z) + arg z, (1.19b)
implying that
|1/z| = 1/|z|, (1.20a)
arg(1/z) = −arg z. (1.20b)
1.1 De Moivre’s Theorem
From the above, if we choose a unit vector,
z = cos θ + i sin θ, (1.21)
successive powers follow a simple pattern:
z^2 = cos 2θ + i sin 2θ, (1.22a)
z^3 = cos 3θ + i sin 3θ, (1.22b)
. . . ,
z^n = cos nθ + i sin nθ, (1.22c)
or
(cos θ + i sin θ)^n = cos nθ + i sin nθ, (1.23)
where n is a positive integer. This is called De Moivre’s theorem.
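A quick numerical check of (1.23) (a sketch in Python; the angle θ = 0.7 and the power n = 5 are arbitrary sample values):

```python
import cmath

theta = 0.7  # arbitrary sample angle (radians)
n = 5        # arbitrary sample positive integer

# Left side of (1.23): (cos θ + i sin θ)^n
lhs = (cmath.cos(theta) + 1j * cmath.sin(theta)) ** n
# Right side of (1.23): cos nθ + i sin nθ
rhs = cmath.cos(n * theta) + 1j * cmath.sin(n * theta)

assert abs(lhs - rhs) < 1e-12
```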
1.2 Roots
Suppose we wish to ﬁnd all the nth roots of unity, that is, all solutions to the
equation
z^n = 1, (1.24)
Figure 1.3: The eight 8th roots of unity.
where n is a positive integer. If we take the polar form,
z = ρ(cos φ + i sin φ), (1.25)
this means
ρ^n (cos nφ + i sin nφ) = 1, (1.26)
which implies
ρ = 1, (1.27a)
nφ = 2πk, (1.27b)
where k is any integer. Thus the nth roots of unity have the form
z = cos(2πk/n) + i sin(2πk/n). (1.28)
These are distinct for
k = 0, 1, 2, . . . , n − 1; (1.29)
outside of these values of k, the roots repeat. Thus there are n distinct nth
roots of unity. For example, for n = 8, the roots are as shown in Fig. 1.3, in the
complex plane.
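The roots (1.28) are easy to tabulate numerically; a minimal sketch for the n = 8 case of Fig. 1.3, using Python's cmath:

```python
import cmath

n = 8
# z_k = cos(2πk/n) + i sin(2πk/n) = exp(2πik/n), k = 0, ..., n-1
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# Each is an nth root of unity ...
assert all(abs(z ** n - 1) < 1e-12 for z in roots)
# ... and the n roots are distinct.
assert all(abs(roots[i] - roots[j]) > 1e-6
           for i in range(n) for j in range(i + 1, n))
```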
Chapter 2
Inﬁnite Series
2.1 Sequences
A sequence of complex numbers {z_n}_{n=1}^∞ is a countably infinite set of numbers,
z_1, z_2, z_3, . . . , z_n, . . . . (2.1)
That is, for every positive integer k, there is a number, the kth term of the
sequence, z_k, in the set {z_n}_{n=1}^∞. Mathematically, a sequence is a complex-valued
function defined on the positive integers.
We say that the sequence possesses a limit l,
lim_{n→∞} z_n = l, or z_n → l as n → ∞, (2.2)
if, for every ǫ > 0, no matter how small, there exists a number N for which
|z_n − l| < ǫ for all n > N. (2.3)
(The number N will depend on ǫ.) That is, {z_n}_{n=N+1}^∞ all lie within a circle of
radius ǫ centered on the point l in the complex plane.
A necessary and sufficient condition for a sequence {z_n}_{n=1}^∞ to converge to
a limit is Cauchy's criterion: A sequence {z_n}_{n=1}^∞ possesses a limit if and only
if for every ǫ > 0, no matter how small, it is possible to find a number N such
that
|z_n − z_m| < ǫ for all n, m > N. (2.4)
(Note that the difference n − m may be arbitrarily large.) Thus, all elements
of the sequence {z_n}_{n=N+1}^∞ lie within a disk of radius ǫ. Briefly, we say that the
Cauchy condition is
|z_n − z_m| → 0 for all n, m sufficiently large. (2.5)
Sequences having this property are called Cauchy sequences. Every Cauchy
sequence of complex numbers possesses a limit (which is, of course, a complex
number); this property means that the complex numbers form a complete space.
7 Version of August 27, 2011
2.2 Series
Suppose we have a sequence {a_k}_{k=1}^∞ from which we construct the finite sums
s_n = Σ_{k=1}^n a_k, n = 1, 2, 3, . . . . (2.6)
The set of all these sums, {s_n}_{n=1}^∞, itself forms a sequence. If this latter sequence
, itself forms a sequence. If this latter sequence
has a limit S,
s
n
→ S as n → ∞, (2.7)
then we say that the inﬁnite series
∞
k=1
a
k
= lim
n→∞
n
k=1
a
k
(2.8)
possesses the limit S (or converges to S),
∞
k=1
a
k
= S. (2.9)
By the Cauchy criterion, this will be true if and only if
|Σ_{k=m}^n a_k| < ǫ (2.10)
for any fixed ǫ > 0, whenever n ≥ m > N, N a number depending on ǫ.
Obviously, a necessary condition for
Σ_{k=1}^∞ a_k (2.11)
to converge is a_k → 0 as k → ∞. However, this is not sufficient, as the
following example shows.
2.3 Examples
2.3.1 Harmonic series
Consider the sum of the reciprocals of the integers,
1 + 1/2 + 1/3 + . . . = Σ_{n=1}^∞ 1/n. (2.12)
Note that if the nth term of the series is denoted a_n = 1/n, we have for the sum
of n adjacent terms
a_{n+1} + . . . + a_{2n} = 1/(n + 1) + . . . + 1/(2n) > n [1/(2n)] = 1/2, (2.13)
no matter how large n is. This violates Cauchy’s criterion, so the harmonic
series diverges.
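The estimate (2.13) can be watched numerically (an illustrative sketch; the sample values of n are arbitrary):

```python
# The block of terms a_{n+1} + ... + a_{2n} in (2.13) stays above 1/2
# for every n >= 2, so the partial sums never settle down.
for n in (2, 10, 1000, 10 ** 5):
    block = sum(1.0 / k for k in range(n + 1, 2 * n + 1))
    assert block > 0.5
```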
2.3.2 Geometric series
Consider the series
Σ_{m=0}^∞ a r^m, (2.14)
where a is a constant and r ≥ 0. For r ≠ 1, the nth partial sum is
s_n = Σ_{m=0}^n a r^m = a (1 − r^{n+1})/(1 − r), (2.15)
so
S = lim_{n→∞} s_n = a/(1 − r) if r < 1, (2.16)
while the series diverges if r ≥ 1.
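A numerical sketch of (2.15) and (2.16), with arbitrary sample values a = 3, r = 1/2:

```python
a, r = 3.0, 0.5  # arbitrary sample values with 0 <= r < 1
n = 50

s_n = sum(a * r ** m for m in range(n + 1))
# Closed form (2.15) for the partial sum
closed = a * (1 - r ** (n + 1)) / (1 - r)
assert abs(s_n - closed) < 1e-12
# Limit (2.16): a/(1 - r)
assert abs(s_n - a / (1 - r)) < 1e-10
```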
2.4 Absolute and Conditional Convergence
Suppose we have a convergent series Σ_{n=1}^∞ a_n. If Σ_{n=1}^∞ |a_n| also converges, we
say that the original series converges absolutely. Otherwise, the original series
is conditionally convergent. (That is, it converges because of sign alternations.)
A sufficient condition for (at least) conditional convergence is provided by the
following theorem, due to Leibniz:
If the terms of a series are of alternating sign and in addition their absolute
values tend to zero, |a_n| → 0, monotonically, i.e., |a_n| > |a_{n+1}| for sufficiently
large n, then
Σ_{n=1}^∞ a_n converges. (2.17)
In absolutely convergent series one can rearrange the terms without affecting
the value of the sum. With conditionally convergent series, one cannot rearrange
terms; in fact, such rearrangements can make a conditionally convergent series
converge to any desired value, or diverge!
2.4.1 Example
Consider the conditionally convergent series formed from the divergent harmonic
series by alternating every other sign:
1 − 1/2 + 1/3 − 1/4 + 1/5 − 1/6 + . . . = ln 2, (2.18)
which converges to the natural logarithm of 2. Multiply this equation term by
term by 1/2:
1/2 − 1/4 + 1/6 − 1/8 + 1/10 − . . . = (1/2) ln 2. (2.19)
Add these two series:
1 + 1/3 − 1/2 + 1/5 + 1/7 − 1/4 + 1/9 + 1/11 − 1/6 + . . . = (3/2) ln 2. (2.20)
Since the reciprocal of each integer occurs exactly once in the last series, we
would be tempted to rearrange the series to obtain
1 − 1/2 + 1/3 − 1/4 + 1/5 − 1/6 + . . . = ln 2, (2.21)
which is identical to the original series. There is an obvious contradiction here!
In order to obtain the rearrangement (2.21), we have to go further and further
out in the series (2.20), which apparently is not permissible.
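The rearranged series (2.20) can be summed numerically in the three-term groups as written (a sketch; the number of groups is an arbitrary choice, large enough that the remaining error is small):

```python
import math

# Sum the rearranged series (2.20) in groups of three:
# (1 + 1/3 - 1/2) + (1/5 + 1/7 - 1/4) + (1/9 + 1/11 - 1/6) + ...
total = 0.0
odd, even = 1, 2
for _ in range(200000):  # number of three-term groups
    total += 1.0 / odd + 1.0 / (odd + 2) - 1.0 / even
    odd += 4
    even += 2

assert abs(total - 1.5 * math.log(2)) < 1e-4
```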
2.4.2 A Theorem About Absolutely Convergent Series
Not only can absolutely convergent series be rearranged without changing their
value, but they can be multiplied together term by term: If two series
S = Σ_{i=1}^∞ u_i, (2.22a)
T = Σ_{i=1}^∞ v_i (2.22b)
are both absolutely convergent, the series
P = Σ_{i=1}^∞ Σ_{j=1}^∞ u_i v_j, (2.23)
formed from the product of their terms written in any order, is absolutely
convergent, and has a value equal to the product of the individual series,
P = ST. (2.24)
2.5 Convergence Tests
The following tests can determine whether a given series is absolutely convergent
or not.
2.5.1 Comparison test
If b_n > 0 for all n and Σ_{n=1}^∞ b_n is convergent, and if |a_n| ≤ b_n for all n, then
Σ_{n=1}^∞ a_n is absolutely convergent. (2.25a)
Also, if |a_n| ≥ b_n > 0 for all n, and Σ_{n=1}^∞ b_n diverges, then
Σ_{n=1}^∞ a_n is not absolutely convergent. (2.25b)
2.5.2 Root test
The series Σ_{n=1}^∞ a_n converges absolutely if from a certain term onward
|a_n|^{1/n} ≤ q < 1, (2.26)
where q ≥ 0 is independent of n.
Proof: If the inequality holds, |a_n| ≤ q^n. But Σ_{n=1}^∞ q^n converges for q < 1,
it being the geometric series, so by Sec. 2.5.1, Σ_{n=1}^∞ |a_n| converges.
2.5.3 Ratio test
The series Σ_{n=1}^∞ a_n converges absolutely if from a certain term onward
|a_{n+1}/a_n| ≤ q < 1, (2.27)
where q ≥ 0 is independent of n.
Proof: Without loss of generality, we may assume the inequality holds for
all n; otherwise, we renumber the {a_n} sequence so that 1 labels the first term
for which the inequality (2.27) holds. Then
|a_n/a_1| = |a_n/a_{n−1}| |a_{n−1}/a_{n−2}| |a_{n−2}/a_{n−3}| · · · |a_2/a_1| ≤ q^{n−1}. (2.28)
Convergence is again assured by comparison with the geometric series. (Whether
these tests are satisfied by the first few terms of a series is immaterial, since a
finite number of terms of an infinite series has no effect on the convergence.)
Example
When does Σ_{n=1}^∞ n q^n converge? If we use the root test, we examine
lim_{n→∞} |a_n|^{1/n} = q lim_{n→∞} n^{1/n} = q,¹ (2.29a)
while if we use the ratio test, we look at
lim_{n→∞} |a_{n+1}/a_n| = q lim_{n→∞} (n + 1)/n = q. (2.29b)
In either case, we see that the series is absolutely convergent if q < 1, and
divergent otherwise.
¹Because ln n^{1/n} = (1/n) ln n, which tends to zero as n → ∞, n^{1/n} → 1.
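A numerical sketch for a sample q = 1/2; the closed form q/(1 − q)^2 used for comparison follows from differentiating the geometric series term by term:

```python
q = 0.5  # arbitrary sample value with q < 1
s = sum(n * q ** n for n in range(1, 200))

# Exact sum q/(1 - q)^2, from term-by-term differentiation
# of the geometric series
assert abs(s - q / (1 - q) ** 2) < 1e-12
```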
The following are refinements of the ratio test, which fails (that is, fails to
reveal whether the tested series converges or not) when
lim_{n→∞} |a_{n+1}/a_n| = 1. (2.30)
For example, this indeterminate limit results for the case a_n = 1/n, which
yields a divergent series, but also for a_n = 1/(n ln^2 n), which corresponds to a
convergent sum (see Sec. 2.5.8).
2.5.4 Kummer’s test
Choose a sequence of positive constants b_n. If
b_n |a_n/a_{n+1}| − b_{n+1} ≥ C > 0, (2.31)
for all n ≥ N, where N and C are fixed numbers, then
Σ_{n=1}^∞ a_n converges absolutely. (2.32)
On the other hand, if
b_n |a_n/a_{n+1}| − b_{n+1} ≤ 0, (2.33)
and
Σ_{n=1}^∞ b_n^{−1} diverges, (2.34)
then
Σ_{n=1}^∞ |a_n| diverges. (2.35)
Proof: If the inequality (2.31) holds, take l ≥ N, so that
C |a_{l+1}| ≤ b_l |a_l| − b_{l+1} |a_{l+1}|. (2.36)
So we have the inequality
Σ_{l=N+1}^n |a_l| ≤ (b_N |a_N|)/C − (b_n |a_n|)/C ≤ (b_N |a_N|)/C. (2.37)
Hence, the nth partial sum, for n > N, is
s_n = Σ_{i=1}^n |a_i| ≤ Σ_{i=1}^N |a_i| + (b_N |a_N|)/C. (2.38)
The right-hand side of this inequality is a constant, independent of n. Therefore,
the positive sequence of increasing terms {s_n} is bounded above, and
consequently possesses a limit. The series is absolutely convergent.
If the inequality (2.33) holds,
|a_n| ≥ (|a_N| b_N)/b_n, n > N, (2.39)
so since Σ_{n=1}^∞ b_n^{−1} diverges, so does Σ_{n=1}^∞ |a_n|.
2.5.5 Raabe’s test
Raabe's criterion for absolute convergence is
n (|a_n/a_{n+1}| − 1) ≥ K > 1, (2.40)
for all n ≥ N, where N and K are fixed. And if
n (|a_n/a_{n+1}| − 1) ≤ 1, (2.41)
then
Σ_{n=1}^∞ |a_n| diverges. (2.42)
Proof: In Kummer's test put b_n = n.
2.5.6 Gauss’ test
If
|a_n/a_{n+1}| = 1 + h/n + B(n)/n^2, (2.43)
where h is a constant and the function B(n) is bounded as n → ∞, then
Σ_{n=1}^∞ |a_n| converges for h > 1 and diverges for h ≤ 1.
Proof: For h ≠ 1 we can use Raabe's test:
lim_{n→∞} n [h/n + B(n)/n^2] = h. (2.44)
For h = 1, Raabe's test is indeterminate. In that case use Kummer's test with
b_n = n ln n: for large n,
n ln n [1 + h/n + B(n)/n^2] − (n + 1) ln(n + 1)
≈ n ln n [1 + h/n + B(n)/n^2] − (n + 1)[ln n + 1/n]
≈ [h + B(n)/n] ln n − ln n − 1
≈ (h − 1) ln n − 1 < 0, if h ≤ 1. (2.45)
Figure 2.1: Bounds on a monotone series provided by an integral.
Because
Σ_{n=2}^∞ 1/(n ln n) diverges (2.46)
(see homework), the series Σ_{n=1}^∞ |a_n| diverges.
2.5.7 Integral test
If f(x) is a continuous, monotonically decreasing real function of x such that
f(n) = |a_n|, (2.47)
then Σ_{n=1}^∞ a_n converges if
∫_1^∞ dx f(x) < ∞, (2.48)
and diverges otherwise.
Proof: It is geometrically obvious that
∫_1^∞ dx f(x) < Σ_{n=1}^∞ f(n) < ∫_1^∞ dx f(x) + f(1), (2.49)
for this follows merely from the geometrical meaning of the integral as the area
under the curve of the function. See Fig. 2.1.
2.5.8 Examples
• The Riemann zeta function is defined by the series
Σ_{n=1}^∞ 1/n^α = ζ(α). (2.50)
We can test for convergence using Gauss' test, by examining
|a_n/a_{n+1}| = [(n + 1)/n]^α ≈ 1 + α/n for large n. (2.51)
Thus the series converges if α > 1, and diverges if α ≤ 1.
• Consider the series
Σ_{n=2}^∞ 1/(ln n)^α. (2.52)
Let's use Raabe's test:
|a_n/a_{n+1}| = [ln(n + 1)/ln n]^α = [(ln n + ln(1 + 1/n))/ln n]^α ≈ 1 + α/(n ln n). (2.53)
Because
n [α/(n ln n)] = α/ln n → 0 as n → ∞, (2.54)
we conclude that the series is divergent.
• To test for convergence of
Σ_{n=2}^∞ 1/[n (ln n)^α], (2.55)
let us use the integral test:
∫_2^∞ dx/[x (ln x)^α] = ∫_{ln 2}^∞ d(ln x)/(ln x)^α
= [1/(1 − α)] (ln x)^{1−α} |_{x=2}^∞ (α ≠ 1), or ln(ln x) |_{x=2}^∞ (α = 1)
= finite for α > 1, ∞ for α ≤ 1. (2.56)
Thus the series converges if α > 1 and diverges for other real α.
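A numerical sketch contrasting α = 2 (convergent, bounded above by the integral estimate (2.49) adapted to lower limit 2) with α = 1 (divergent, but only like ln ln N, so very slowly):

```python
import math

def partial(alpha, N):
    """Partial sum of the series (2.55) from n = 2 to N - 1."""
    return sum(1.0 / (n * math.log(n) ** alpha) for n in range(2, N))

# α = 2: bounded above by f(2) + ∫_2^∞ dx/(x ln²x) = 1/(2 ln²2) + 1/ln 2
assert partial(2.0, 200000) < 1.0 / (2 * math.log(2) ** 2) + 1.0 / math.log(2)

# α = 1: the partial sums keep growing between checkpoints
assert partial(1.0, 200000) - partial(1.0, 20000) > 0.1
```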
2.6 Series of Functions
2.6.1 Continuity
A (complex-valued) function f(z) of a complex variable is continuous at z_0 if
f(z) → f(z_0) as z → z_0 (2.57)
from any direction. That is, given ǫ > 0 we may find a δ > 0 such that
|f(z) − f(z_0)| < ǫ whenever |z − z_0| < δ. (2.58)
In other words, z lies within a circle of radius δ around z_0.
Figure 2.2: Uniform convergence of the partial sum f_n(x) to the limit f(x). For
all x, f_n(x) is within a band of width 2ǫ about f(x).
2.6.2 Uniform Convergence
Consider the infinite series
f(z) = Σ_{i=1}^∞ g_i(z) (2.59)
constructed from the sequence of functions {g_i}_{i=1}^∞. The condition that this
series converge is expressed in terms of the partial sums,
f_n(z) = Σ_{i=1}^n g_i(z), (2.60)
thusly: given ǫ > 0 we can find an integer N so that for n > N
|f_{n+p}(z) − f_n(z)| < ǫ for all p > 0. (2.61)
This is Cauchy's criterion. In general the N required for this to occur will depend
on the point z. If, however, Eq. (2.61) holds for all z if n > N independent of
z, we say that the series converges uniformly throughout the region of interest.
Equivalently, there exists a function f(z) such that
|f(z) − f_n(z)| < ǫ for all n > N, N independent of z. (2.62)
That is, the partial sum f_n is everywhere uniformly close to f, the limiting
function. This situation is illustrated in Fig. 2.2 for a real function of a real
variable.
Contrast absolute and uniform convergence through the following examples.
The series
Σ_{n=1}^∞ (−1)^n/(n + z^2) (2.63)
is only conditionally convergent, because asymptotically the terms become (−1)^n/n.
On the other hand, for real z it is uniformly convergent because
|Σ_{n=N+1}^{N+p} (−1)^n/(n + z^2)| < 1/(N + z^2) ≤ 1/N, (2.64)
which is the Cauchy criterion with ǫ = 1/N.
In contrast, consider, for real z, the series
S(z) = Σ_{n=0}^∞ z^2/(1 + z^2)^n, (2.65)
which converges absolutely. For z = 0, S(0) = 0; and for z ≠ 0,
S(z) = z^2 Σ_{n=0}^∞ 1/(1 + z^2)^n = z^2/[1 − 1/(1 + z^2)] = 1 + z^2. (2.66)
Thus S(z) is discontinuous at z = 0. The following theorem shows that this
series cannot be uniformly convergent there.
Theorem
If a series of continuous functions of z is uniformly convergent for all values of
z in a given closed domain, the sum is continuous throughout the domain.
Proof: Let
f_n(z) = Σ_{i=1}^n g_i(z). (2.67)
Since
f_n(z) → f(z) uniformly, (2.68)
we can find, for any ǫ > 0, a value of n such that
|f_n(z) − f(z)| < ǫ for all z (2.69)
throughout the domain. Then
|f(z) − f(z′)| = |f(z) − f_n(z) + f_n(z′) − f(z′) + f_n(z) − f_n(z′)|
≤ |f(z) − f_n(z)| + |f(z′) − f_n(z′)| + |f_n(z) − f_n(z′)|. (2.70)
Since the f_n's are continuous, we can find a δ for any given ǫ such that
|f_n(z) − f_n(z′)| < ǫ whenever |z − z′| < δ. (2.71)
Therefore,
|f(z) − f(z′)| < 3ǫ whenever |z − z′| < δ. (2.72)
QED.
Even if the limit function is continuous, convergence to it need not be uniform,
as the following example shows:
Figure 2.3: Sketch of the function f_n(x) given by Eq. (2.73).
Example
Consider the sequence of continuous functions,
f_n(x) = { nx, 0 ≤ x ≤ 1/n,
(2/n − x) n, 1/n ≤ x ≤ 2/n,
0, otherwise. (2.73)
This function is sketched in Fig. 2.3. Note that the maximum of the function
f_n(x) is 1. On the other hand, for all x,
lim_{n→∞} f_n(x) = 0, (2.74)
which is certainly a continuous limit function. But the convergence to this limit
is not uniform, for there is always a point, x = 1/n, for which
|0 − f_n(1/n)| = 1 (2.75)
no matter how large n is. So the convergence is nonuniform.
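A sketch of the tent functions (2.73), checking pointwise decay at a fixed x against the fixed peak height at the moving point x = 1/n:

```python
def f(n, x):
    """The tent function of Eq. (2.73)."""
    if 0 <= x <= 1.0 / n:
        return n * x
    if 1.0 / n <= x <= 2.0 / n:
        return (2.0 / n - x) * n
    return 0.0

for n in (10, 100, 1000):
    assert f(n, 0.5) == 0.0                  # fixed x: already past the tent
    assert abs(f(n, 1.0 / n) - 1.0) < 1e-9  # moving peak: always height 1
```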
Properties of Uniformly Convergent Series
Consider the series of functions of a real variable,
f(x) = Σ_{n=1}^∞ g_n(x). (2.76)
1. If the g_n are continuous, we can integrate term by term if Σ_n g_n is
uniformly convergent over the domain of integration:
∫_a^b dx f(x) = Σ_{n=1}^∞ ∫_a^b dx g_n(x). (2.77)
2. If the g_n and g′_n = (d/dx) g_n are continuous, and Σ_n g′_n is uniformly
convergent, then we can differentiate term by term:
f′(x) = Σ_{n=1}^∞ g′_n(x). (2.78)
Condition for Uniform Convergence
The following condition is suﬃcient, but not necessary, to ensure that a series
is uniformly convergent.
If |g_n(z)| < a_n, where {a_n} is a sequence of constants such that Σ_{n=1}^∞ a_n
converges, then Σ_{n=1}^∞ g_n(z) converges uniformly and absolutely.
Proof: The hypothesis implies
|Σ_{n=N}^{N+p} g_n(z)| < Σ_{n=N}^{N+p} a_n, (2.79)
so that if N is chosen so that Σ_{n=N}^{N+p} a_n < ǫ, then
|Σ_{n=N}^{N+p} g_n(z)| < ǫ ∀z. (2.80)
2.7 Power Series
By a power series, we mean a series of the form,
Σ_{n=0}^∞ c_n z^n = c_0 + c_1 z + c_2 z^2 + . . . , (2.81)
where the c_n's form a sequence of complex constants, and z is a complex variable.
If a power series converges for one point, z = z_0, it converges uniformly and
absolutely for all z satisfying
|z| ≤ η, (2.82)
where η is any positive number less than |z_0|.
Proof: Since Σ_{n=0}^∞ c_n z_0^n converges, it must be true that the terms are
bounded,
|c_n z_0^n| < M, (2.83)
where M is independent of n (but not of z_0). Hence if Eq. (2.82) is satisfied,
Σ_{n=0}^∞ |c_n z^n| ≤ Σ_{n=0}^∞ |c_n| η^n < M Σ_{n=0}^∞ (η/|z_0|)^n < ∞, (2.84)
since η/|z_0| < 1. This proves absolute convergence. Uniform convergence follows
from the theorem above.
2.7.1 Radius of Convergence
Use the root test to determine where the power series
Σ_{n=0}^∞ c_n z^n (2.85)
Figure 2.4: Circle of convergence of a power series. The series (2.85) converges
inside the circle, and diverges outside. The radius of convergence ρ is given by
Eq. (2.87).
converges. That test says if
lim_{n→∞} |c_n|^{1/n} |z| < 1, the series converges, (2.86a)
while if
lim_{n→∞} |c_n|^{1/n} |z| > 1, the series diverges. (2.86b)
Therefore, the power series converges within a circle of convergence of radius ρ,
the radius of convergence, where
ρ = 1/[lim_{n→∞} |c_n|^{1/n}], (2.87)
and diverges outside that circle, as shown in Fig. 2.4. More detailed examination
is required to determine whether or not the series converges on the circle of
convergence.
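The root-test formula (2.87) can be sketched numerically by evaluating 1/|c_n|^{1/n} at a large finite n (three sample coefficient sequences; the third has infinite radius of convergence, so the estimate simply grows with n):

```python
import math

def rho_estimate(c, n):
    """Finite-n version of (2.87): 1 / |c_n|^(1/n)."""
    return 1.0 / abs(c(n)) ** (1.0 / n)

# Geometric series Σ z^n: c_n = 1, radius 1
assert abs(rho_estimate(lambda n: 1.0, 1000) - 1.0) < 1e-9
# Σ 2^n z^n: c_n = 2^n, radius 1/2
assert abs(rho_estimate(lambda n: 2.0 ** n, 1000) - 0.5) < 1e-9
# Σ z^n/n!: radius infinite; the estimate keeps growing with n
assert rho_estimate(lambda n: 1.0 / math.factorial(n), 170) > 50.0
```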
2.7.2 Properties of Power Series Within the Circle of Convergence
1. The function defined by the power series is continuous. [This follows from
the theorem in Sec. 2.6.2.]
2. It may be differentiated or integrated term by term. [This follows from
the theorem above, together with the fact that if Σ_{n=0}^∞ c_n z^n converges, so
does Σ_{n=0}^∞ n c_n z^{n−1}, by the ratio test,
|(n + 1) c_{n+1} z^n/(n c_n z^{n−1})| = [(n + 1)/n] |c_{n+1}/c_n| |z|. (2.88)
Now if z lies within the circle of convergence,
lim_{n→∞} |c_{n+1}/c_n| |z| < 1. (2.89)
Since lim_{n→∞} (n + 1)/n = 1, convergence of the differentiated series is
assured.]
3. Two such power series may be multiplied together term by term, within the
smaller of the two circles of convergence. [This follows from the theorem
in Sec. 2.4.2.]
4. The power series is unique. [It suffices to show that if
f(z) = Σ_{n=0}^∞ c_n z^n = 0 ∀z, then c_n = 0 ∀n. (2.90)
Indeed,
f(0) = c_0 = 0, (2.91a)
f′(0) = c_1 = 0, (2.91b)
. . . ,
f^{(n)}(0) = n! c_n = 0.] (2.91c)
2.7.3 Taylor Expansion
The Taylor expansion for a real function of a real variable is obtained from the
above argument. If we write a function as a power series,
f(x) = Σ_{n=0}^∞ c_n x^n, (2.92)
then
c_n = (1/n!) f^{(n)}(0). (2.93)
Hence, the power series is the Taylor series of the function it represents,
f(x) = Σ_{n=0}^∞ (1/n!) f^{(n)}(0) x^n. (2.94)
2.7.4 Hypergeometric Function
The hypergeometric function F is defined by the power series
F(a, b; c; z) = Σ_{n=0}^∞ A_n z^n = Σ_{n=0}^∞ [(a)_n (b)_n/(c)_n] z^n/n!. (2.95)
Here the coefficients are defined in terms of the Pochhammer symbol,
(a)_n = a(a + 1)(a + 2) · · · (a + n − 1) = Γ(a + n)/Γ(a). (2.96)
To determine convergence, we examine
|A_n z^n/(A_{n+1} z^{n+1})| = [Γ(a + n) Γ(b + n)/Γ(c + n)] [Γ(c + n + 1)/(Γ(a + n + 1) Γ(b + n + 1))] [(n + 1)!/n!] |z^n/z^{n+1}|
= [(1 + c/n)(1 + 1/n)]/[(1 + a/n)(1 + b/n)] (1/|z|)
= (1/|z|) [1 + (1/n)(c + 1 − a − b) + O(1/n^2)], (2.97)
where O(1/n^2) means that the next term goes to zero as n → ∞ at least as
fast as 1/n^2. According to the ratio test, the radius of convergence of this series
is |z| = 1; that is, the series diverges for |z| > 1, and converges uniformly and
absolutely for any z such that |z| ≤ η < 1. The remaining question is what
happens on the circle of convergence, |z| = 1. According to Gauss' test, the
series is then absolutely convergent if c > a + b [if the constants are complex,
if Re (c − a − b) > 0]. For the point z = 1 the series is certainly divergent if
this condition is not satisfied; however, if −1 < Re (c − a − b) ≤ 0 the series
is conditionally convergent on the unit circle except for the exceptional point
z = 1. On the other hand, if Re (c − a − b) ≤ −1 the series is divergent on the
unit circle because the terms in the series increase in magnitude.
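A numerical sketch of the series (2.95), accumulated via the term ratio A_{n+1}/A_n = (a + n)(b + n)/[(c + n)(1 + n)] implied by (2.97); it is checked against the known special case F(1, 1; 2; z) = −ln(1 − z)/z:

```python
import math

def F(a, b, c, z, terms=60):
    """Partial sum of the hypergeometric series (2.95), built up from
    the term ratio A_{n+1}/A_n = (a + n)(b + n) / ((c + n)(1 + n))."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (1 + n)) * z
    return total

# Known special case: F(1, 1; 2; z) = -ln(1 - z)/z for |z| < 1
z = 0.5
assert abs(F(1, 1, 2, z) - (-math.log(1 - z) / z)) < 1e-12
```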
Chapter 3
Elementary Transcendental Functions
3.1 Exponential Function
Define, for all complex z, the exponential function by
exp(z) = e^z = Σ_{n=0}^∞ (1/n!) z^n. (3.1)
By the ratio test,
[n!/(n + 1)!] |z| = |z|/(n + 1) → 0 ∀z, (3.2)
the series converges everywhere. By the theorem of Sec. 2.7, that means
that the series converges uniformly in any finite closed region.
Note that the following property holds:
exp(z_1 + z_2) = Σ_{n=0}^∞ (1/n!) (z_1 + z_2)^n
= Σ_{n=0}^∞ Σ_{m=0}^n (1/n!) [n!/(m! (n − m)!)] z_1^m z_2^{n−m}
= Σ_{k=0}^∞ (1/k!) z_1^k Σ_{l=0}^∞ (1/l!) z_2^l
= exp(z_1) exp(z_2). (3.3)
Then, by induction,
(e^z)^n = e^{nz}, (3.4)
where n is any positive integer.
23 Version of September 7, 2011
Hyperbolic and trigonometric functions are defined in terms of the exponential
function:
sinh z = (e^z − e^{−z})/2, cosh z = (e^z + e^{−z})/2, (3.5a)
sin z = (e^{iz} − e^{−iz})/(2i), cos z = (e^{iz} + e^{−iz})/2, (3.5b)
so that
i sin z = sinh iz, (3.6a)
cos z = cosh iz, (3.6b)
for all complex z.
Note that
e^{iz} = cos z + i sin z. (3.7)
Therefore, the polar representation of a complex number,
z = r(cos θ + i sin θ) = r e^{iθ}, (3.8)
becomes a most useful and compact representation. In particular,
z^n = r^n e^{inθ} (3.9)
implies De Moivre's formula,
cos nθ + i sin nθ = (cos θ + i sin θ)^n. (3.10)
3.1.1 Deﬁnition of π
There exists a positive number π such that
1. e^{πi/2} = i, and (3.11a)
2. e^z = 1 if and only if z = 2πin, (3.11b)
where n is an integer.
Hence exp(z) is periodic with period 2πi,
exp(z + 2πi) = exp(z) exp(2πi) = exp(z). (3.12)
Figure 3.1: Cut plane for deﬁning the logarithm.
3.2 The Natural Logarithm
If z = r e^{iθ}, we define
ln z ≡ log z ≡ ln r + iθ, (3.13)
where ln r is defined as the inverse of the exponential function for real positive
r,
r = e^{ln r}. (3.14)
Thus we have
z = e^ζ, where ζ = ln r + iθ = log z. (3.15)
Recall that θ = arg z is a multivalued function, because θ is only defined up
to an arbitrary multiple of 2π. [This is just the periodic property (3.12).] Recall
further that we defined the principal value of the argument as that which
satisfies
−π < arg z ≤ π. (3.16)
Correspondingly, we say that the single-valued logarithm function (also denoted
log z) is defined in the cut plane shown in Fig. 3.1. In measuring θ from the +x
axis, one is not allowed to cross the cut along the −x axis. (Where the cut is
placed is an arbitrary convention.) The correspondingly defined single-valued
functions arg z and
log z = ln |z| + i arg z, (3.17)
with
−π < Im log z ≤ π, (3.18)
are also referred to as the principal values of the argument and logarithm,
respectively.
Now we deﬁne complex powers of complex numbers as follows:
ζ^z ≡ e^{z log ζ}, (3.19)
where log ζ is defined in the cut plane. Then
(e^ξ)^z = e^{z log e^ξ} = e^{z(Re ξ + i Im ξ)} = e^{ξz} (3.20)
when
arg e^ξ = Im ξ (3.21)
lies between
−π < Im ξ ≤ π. (3.22)
If this is not so,
log e^ξ = ξ + 2πin, (3.23)
where n is so chosen that
−π < Im(ξ + 2πin) ≤ π, (3.24)
and
(e^ξ)^z = e^{z(ξ + 2πin)}. (3.25)
For example,
√z = z^{1/2} = e^{(1/2) log z} (3.26)
is defined as a single-valued function only in the cut plane
−π < arg z ≤ π. (3.27)
3.3 Inverse Hyperbolic and Trigonometric Functions
The inverse hyperbolic and trigonometric functions are defined in terms of the
logarithm:
arcsinh z = log[z + (z^2 + 1)^{1/2}], (3.28a)
arccosh z = log[z + (z^2 − 1)^{1/2}], (3.28b)
arctanh z = (1/2) log[(1 + z)/(1 − z)], (3.28c)
which are defined in the cut planes shown in Fig. 3.2.
arcsin z = −i arcsinh iz = −i log[iz + (1 − z^2)^{1/2}], (3.29a)
arccos z = −i arccosh z = −i log[z + (z^2 − 1)^{1/2}], (3.29b)
arctan z = −i arctanh iz = (i/2) log[(1 − iz)/(1 + iz)] = (i/2) log[(i + z)/(i − z)], (3.29c)
which are defined in the cut planes shown in Fig. 3.3. Note that the branch
Figure 3.2: Cut planes for defining the inverse hyperbolic functions (arcsinh z,
arccosh z, arctanh z). The thick lines represent the cuts.
Figure 3.3: Cut planes for defining the inverse trigonometric functions (arcsin z,
arccos z, arctan z).
lines (cuts) are chosen so as not to cross the region where both the range and
the domain of the functions are real, because for real x,
sin x, cos x ∈ [−1, 1], (3.30a)
tan x ∈ (−∞, ∞), (3.30b)
sinh x ∈ (−∞, ∞), (3.30c)
cosh x ∈ [1, ∞), (3.30d)
tanh x ∈ (−1, 1). (3.30e)
An alternative notation for the inverse functions is provided by the superscript
−1, as for example,
arcsinh z = sinh^{−1} z, (3.31)
which does not mean 1/sinh z.
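The logarithmic formulas can be spot-checked against the library inverse functions, which use the same principal branches (a sketch at one arbitrary sample point away from the cuts):

```python
import cmath

z = 0.3 + 0.2j  # arbitrary sample point away from the cuts

# (3.29a): arcsin z = -i log[iz + (1 - z²)^(1/2)]
arcsin_formula = -1j * cmath.log(1j * z + (1 - z ** 2) ** 0.5)
assert abs(arcsin_formula - cmath.asin(z)) < 1e-12

# (3.28c): arctanh z = (1/2) log[(1 + z)/(1 - z)]
arctanh_formula = 0.5 * cmath.log((1 + z) / (1 - z))
assert abs(arctanh_formula - cmath.atanh(z)) < 1e-12
```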
Chapter 4
Bernoulli Polynomials
4.1 Bernoulli Numbers
The “generating function” for the Bernoulli numbers is
x/(e^x − 1) = Σ_{n=0}^∞ (B_n/n!) x^n. (4.1)
That is, we are to expand the left-hand side of this equation in powers of x, i.e.,
a Taylor series about x = 0. The coefficient of x^n in this expansion is B_n/n!.
Note that we can write the left-hand side of this expression in an alternative
form:
x/(e^x − 1) = x/[e^{x/2} (e^{x/2} − e^{−x/2})] = x e^{−x/2}/[2 sinh(x/2)]
= (x/2) [cosh(x/2) − sinh(x/2)]/sinh(x/2)
= (x/2) coth(x/2) − x/2. (4.2)
Note that (x/2) coth(x/2) is an even function of x, while x/2 is odd. Therefore we
conclude that all but one of the Bernoulli numbers of odd order are zero:
B_1 = −1/2, (4.3a)
B_{2k+1} = 0, k = 1, 2, 3, . . . . (4.3b)
By writing x = iy and noting that
coth(iy/2) = −i cot(y/2), (4.4)
29 Version of September 16, 2011
we conclude that
(iy/2) coth(iy/2) = (y/2) cot(y/2) = Σ_{n=0}^∞ B_{2n} (iy)^{2n}/(2n)!, (4.5)
or
(y/2) cot(y/2) = Σ_{n=0}^∞ (−1)^n B_{2n} y^{2n}/(2n)!. (4.6)
By straightforward expansion in powers of x we can read off the first few
Bernoulli numbers:
(x/2) coth(x/2) = (x/2) cosh(x/2)/sinh(x/2)
= [1 + (1/2!)(x/2)^2 + (1/4!)(x/2)^4 + (1/6!)(x/2)^6 + (1/8!)(x/2)^8 + . . .]
÷ [1 + (1/3!)(x/2)^2 + (1/5!)(x/2)^4 + (1/7!)(x/2)^6 + (1/9!)(x/2)^8 + . . .].
Expanding the reciprocal of the denominator by the geometric series,
1/(1 + u) = 1 − u + u^2 − u^3 + u^4 − . . . , u = (1/3!)(x/2)^2 + (1/5!)(x/2)^4 + . . . ,
and collecting powers of x through x^8, we find
(x/2) coth(x/2) = 1 + (x^2/2!)(1/6) + (x^4/4!)(−1/30) + (x^6/6!)(1/42) + (x^8/8!)(−1/30) + . . . .
(4.7)
So by comparison with Eq. (4.1) we find
B_0 = 1, B_2 = 1/6, B_4 = −1/30, B_6 = 1/42, B_8 = −1/30. (4.8)
What is the radius of convergence of the series
z
e
z
−1
= −
z
2
+
∞
n=0
B
2n
(2n)!
z
2n
? (4.9)
Recall that a power series converges everywhere within its circle of convergence, and diverges outside that circle. Since a uniformly convergent series must converge to a continuous function, the power series must converge to a well-behaved function within the circle of convergence. That is, the limit function must have a singularity somewhere on the circle of convergence, but must be singularity-free within it. The precise theorem, proved in Chapter 5, is that the radius of convergence of a power series is the distance from the origin to the nearest singularity of the function the series represents.

  n    B_{2n}          Asymptotic value             Relative error
  0    1               -2                           300%
  1    1/6             1/π^2                        39%
  2    -1/30           -3/π^4                       7.6%
  3    1/42            45/(2π^6)                    1.7%
  4    -1/30           -315/π^8                     0.41%
  5    5/66            14175/(2π^10)                0.099%
  6    -691/2730       -467775/(2π^12)              0.025%
  7    7/6             42567525/(4π^14)             0.0061%
  8    -3617/510       -638512875/π^16              0.0015%
  9    43867/798       97692469875/(2π^18)          0.00038%
 10    -174611/330     -9280784638125/(2π^20)       0.0000095%

Table 4.1: The Bernoulli numbers B_{2n} for n from 0 to 10, compared with the asymptotic values (4.12). The last column shows the relative error of the asymptotic estimate. Note that the latter rather rapidly approaches the true value.
In this case, it is clear that the generating function is singular wherever e^z = 1, except for z = 0. Thus the closest singularities to the origin occur at z = ±2πi, so that the radius of convergence is 2π. On the other hand,
\[
(2\pi)^2 = \rho^2 = \lim_{n\to\infty} \frac{|B_{2n}|}{(2n)!}\,\frac{[2(n+1)]!}{|B_{2(n+1)}|}
= \lim_{n\to\infty} (2n+2)(2n+1)\left|\frac{B_{2n}}{B_{2n+2}}\right|, \qquad (4.10)
\]
from which we can infer that the Bernoulli numbers grow rapidly with n,
\[
|B_{2n}| \sim \frac{(2n)!}{(2\pi)^{2n}}, \qquad n \to \infty. \qquad (4.11)
\]
We cannot deduce the sign or overall constant from this analysis: the true asymptotic behavior of B_{2n} is
\[
B_{2n} \sim 2\,(-1)^{n+1}\,\frac{(2n)!}{(2\pi)^{2n}}. \qquad (4.12)
\]
Table 4.1 shows the relative accuracy of the asymptotic approximation (4.12).
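As an illustrative check (not part of the original notes), the estimate (4.12) can be compared directly against the exact values quoted in Eq. (4.8) and Table 4.1:

```python
import math

# exact Bernoulli numbers B_{2n}, taken from Eq. (4.8) and Table 4.1
exact = {1: 1/6, 2: -1/30, 3: 1/42, 4: -1/30, 5: 5/66, 10: -174611/330}

for n, B in exact.items():
    # asymptotic estimate (4.12): B_{2n} ~ 2 (-1)^{n+1} (2n)! / (2 pi)^{2n}
    approx = 2 * (-1)**(n + 1) * math.factorial(2 * n) / (2 * math.pi)**(2 * n)
    print(f"n={n:2d}  relative error = {abs(approx - B) / abs(B):.2%}")
```

The printed relative errors shrink rapidly with n, matching the last column of Table 4.1.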
4.2 Bernoulli Polynomials
The Bernoulli polynomials are defined by the generating function
\[
F(x, s) = \frac{x e^{xs}}{e^x - 1} = \sum_{n=0}^{\infty} B_n(s)\,\frac{x^n}{n!}, \qquad (4.13)
\]
that is, according to Eq. (2.94),
\[
B_n(s) = \left(\frac{\partial}{\partial x}\right)^n F(x, s)\bigg|_{x=0}. \qquad (4.14)
\]
From the properties of F(x, s) we can deduce all the properties of these polynomials:
1. Note that
\[
F(x, 0) = \frac{x}{e^x - 1} = \sum_{n=0}^{\infty} B_n\,\frac{x^n}{n!}. \qquad (4.15)
\]
Therefore, we conclude that the Bernoulli polynomials at zero are equal to the Bernoulli numbers,
\[
B_n(0) = B_n. \qquad (4.16)
\]
2. Next we notice that
\[
F(x, 1) = \frac{x e^x}{e^x - 1} = \frac{x}{1 - e^{-x}} = \frac{-x}{e^{-x} - 1} = F(-x, 0), \qquad (4.17)
\]
so that by comparing corresponding terms in the generating-function expansion, we find
\[
B_n(1) = (-1)^n B_n(0) = (-1)^n B_n. \qquad (4.18)
\]
3. If we differentiate the generating function with respect to its second argument, we obtain the relation
\[
\frac{\partial}{\partial s} F(x, s) = \frac{x^2 e^{xs}}{e^x - 1} = \sum_{n=0}^{\infty} B_n'(s)\,\frac{x^n}{n!}. \qquad (4.19)
\]
But obviously
\[
\frac{x^2 e^{xs}}{e^x - 1} = x F(x, s) = \sum_{n=0}^{\infty} B_n(s)\,\frac{x^{n+1}}{n!}, \qquad (4.20)
\]
so equating coefficients of x^n/n! we conclude that
\[
B_n'(s) = n B_{n-1}(s). \qquad (4.21)
\]
(Note that B_0'(s) = 0 is consistent with this if B_{-1}(s) is finite.)
Again, by direct power-series expansion of the generating function we can read off the first few Bernoulli polynomials:
\[
F(x, s) \approx x\,\frac{1 + xs + \frac{1}{2}(xs)^2 + \frac{1}{6}(xs)^3}{x + \frac{1}{2}x^2 + \frac{1}{3!}x^3}
\approx 1 + x\left(s - \frac{1}{2}\right) + x^2\left(\frac{s^2}{2} - \frac{s}{2} - \frac{1}{6} + \frac{1}{4}\right) + \ldots, \qquad (4.22)
\]
from which we read off
\[
B_0(s) = 1, \qquad (4.23a)
\]
\[
B_1(s) = s - \frac{1}{2}, \qquad (4.23b)
\]
\[
B_2(s) = s^2 - s + \frac{1}{6}. \qquad (4.23c)
\]
By keeping two more terms in the expansion we find
\[
B_3(s) = s^3 - \frac{3}{2}s^2 + \frac{1}{2}s, \qquad (4.23d)
\]
\[
B_4(s) = s^4 - 2s^3 + s^2 - \frac{1}{30}. \qquad (4.23e)
\]
Note that the properties (4.16) and (4.18) are satisfied. Note further that we can use the property (4.21) to derive higher Bernoulli polynomials from lower ones. Thus from Eq. (4.23c) we know that
\[
B_3'(s) = 3s^2 - 3s + \frac{1}{2}. \qquad (4.24)
\]
The expression (4.23d) for B_3(s) is recovered when it is recalled that B_3 = 0.
4.3 Euler-Maclaurin Summation Formula
Using the above recursion relation (4.21) we can deduce a very important formula which gives a precise relation between a discrete sum and a continuous integral. First note that since B_0 = B_0(x) = 1 we can write
\[
\int_0^1 f(x) B_0(x)\,dx = \int_0^1 f(x)\,dx, \qquad (4.25)
\]
valid for any function f. But now we can integrate by parts using
\[
B_1'(x) = B_0(x): \qquad (4.26)
\]
\[
\int_0^1 f(x)\,dx = \int_0^1 f(x) B_1'(x)\,dx
= f(x) B_1(x)\Big|_{x=0}^{1} - \int_0^1 f'(x) B_1(x)\,dx
= \frac{1}{2}[f(1) + f(0)] - \int_0^1 f'(x) B_1(x)\,dx. \qquad (4.27)
\]
Here we have used the facts that
\[
B_1(0) = B_1 = -\frac{1}{2}, \qquad (4.28a)
\]
\[
B_1(1) = -B_1 = \frac{1}{2}. \qquad (4.28b)
\]
Now we can continue integrating by parts by noting that
\[
B_1(x) = \frac{1}{2} B_2'(x), \qquad (4.29)
\]
so that
\[
\int_0^1 f(x)\,dx = \frac{1}{2}[f(1) + f(0)] - \frac{1}{2}\left[f'(1) B_2(1) - f'(0) B_2(0)\right] + \frac{1}{2}\int_0^1 f''(x) B_2(x)\,dx
\]
\[
= \frac{1}{2}[f(1) + f(0)] - \frac{1}{2} B_2\left[f'(1) - f'(0)\right] + \frac{1}{2}\int_0^1 f''(x) B_2(x)\,dx. \qquad (4.30)
\]
A general pattern is emerging. Let us assume the following formula holds for some integer k (we have just proved it for k = 1):
\[
\int_0^1 f(x)\,dx = \frac{1}{2}[f(1) + f(0)]
- \sum_{m=1}^{k} \frac{B_{2m}}{(2m)!}\left[f^{(2m-1)}(1) - f^{(2m-1)}(0)\right]
+ \frac{1}{(2k)!}\int_0^1 f^{(2k)}(x) B_{2k}(x)\,dx. \qquad (4.31)
\]
We shall then prove that the same formula holds for k → k + 1, thereby establishing this formula, the Euler-Maclaurin summation formula, for all k. We proceed as follows. Note that
\[
B_{2k}(x) = \frac{B_{2k+1}'(x)}{2k+1} = \frac{B_{2k+2}''(x)}{(2k+1)(2k+2)}, \qquad (4.32)
\]
so that by integrating by parts, we rewrite the last term in Eq. (4.31) as
\[
\frac{1}{(2k)!}\int_0^1 f^{(2k)}(x)\,\frac{1}{(2k+1)(2k+2)}\, B_{2k+2}''(x)\,dx
\]
\[
= \frac{1}{(2k+2)!}\left[f^{(2k)}(1) B_{2k+2}'(1) - f^{(2k)}(0) B_{2k+2}'(0) - \int_0^1 f^{(2k+1)}(x) B_{2k+2}'(x)\,dx\right]
\]
\[
= \frac{1}{(2k+2)!}\left[-f^{(2k+1)}(1) B_{2k+2}(1) + f^{(2k+1)}(0) B_{2k+2}(0) + \int_0^1 f^{(2k+2)}(x) B_{2k+2}(x)\,dx\right], \qquad (4.33)
\]
where we have noted that for k > 0
\[
B_{2k+2}'(0) = (2k+2) B_{2k+1}(0) = 0, \qquad (4.34a)
\]
\[
B_{2k+2}'(1) = (2k+2) B_{2k+1}(1) = -(2k+2) B_{2k+1}(0) = 0. \qquad (4.34b)
\]
Hence
\[
\int_0^1 f(x)\,dx = \frac{1}{2}[f(1) + f(0)]
- \sum_{m=1}^{k+1} \frac{B_{2m}}{(2m)!}\left[f^{(2m-1)}(1) - f^{(2m-1)}(0)\right]
+ \frac{1}{(2k+2)!}\int_0^1 f^{(2k+2)}(x) B_{2k+2}(x)\,dx. \qquad (4.35)
\]
This is exactly Eq. (4.31) with k replaced by k + 1; so since the formula is true for k = 1 it is true for all integers k ≥ 1. Notice that the last term in this formula, the remainder, can also be written in the form
\[
-\frac{1}{(2k+3)!}\int_0^1 f^{(2k+3)}(x) B_{2k+3}(x)\,dx. \qquad (4.36)
\]
Now consider the integral (N a positive integer)
\[
\int_0^N f(s)\,ds = \sum_{k=0}^{N-1}\int_k^{k+1} f(s)\,ds = \sum_{k=0}^{N-1}\int_0^1 f(k+t)\,dt, \qquad (4.37)
\]
where we have introduced a local variable t. For the latter integral, we can use the Euler-Maclaurin sum formula, which here reads
\[
\int_0^1 f(k+t)\,dt = \frac{1}{2}[f(k+1) + f(k)]
- \sum_{m=1}^{n} \frac{B_{2m}}{(2m)!}\left[f^{(2m-1)}(k+1) - f^{(2m-1)}(k)\right]
+ \frac{1}{(2n)!}\int_0^1 f^{(2n)}(k+t) B_{2n}(t)\,dt. \qquad (4.38)
\]
Now when we sum the first term here on the right-hand side over k we obtain
\[
\sum_{k=0}^{N-1} \frac{1}{2}[f(k+1) + f(k)] = \sum_{k=0}^{N} f(k) - \frac{1}{2}[f(0) + f(N)], \qquad (4.39)
\]
while the second term when summed on k involves
\[
\sum_{k=0}^{N-1}\left[f^{(2m-1)}(k+1) - f^{(2m-1)}(k)\right] = f^{(2m-1)}(N) - f^{(2m-1)}(0). \qquad (4.40)
\]
Thus we find
\[
\int_0^N f(s)\,ds = \sum_{k=0}^{N} f(k) - \frac{1}{2}[f(0) + f(N)]
- \sum_{m=1}^{n} \frac{B_{2m}}{(2m)!}\left[f^{(2m-1)}(N) - f^{(2m-1)}(0)\right]
+ \frac{1}{(2n)!}\int_0^1 \sum_{k=0}^{N-1} f^{(2n)}(t+k)\, B_{2n}(t)\,dt. \qquad (4.41)
\]
Equivalently, we can write this as a relation between a finite sum and an integral, with a remainder R_n:
\[
\sum_{k=0}^{N} f(k) = \int_0^N f(s)\,ds + \frac{1}{2}[f(0) + f(N)]
+ \sum_{m=1}^{n} \frac{B_{2m}}{(2m)!}\left[f^{(2m-1)}(N) - f^{(2m-1)}(0)\right] + R_n, \qquad (4.42)
\]
where the remainder
\[
R_n = -\frac{1}{(2n)!}\int_0^1 \sum_{k=0}^{N-1} f^{(2n)}(t+k)\, B_{2n}(t)\,dt \qquad (4.43)
\]
is often assumed to vanish as n → ∞. Note that the remainder can also be written as
\[
R_n = -\frac{1}{(2n)!}\int_0^N f^{(2n)}(t)\, B_{2n}(t - \lfloor t \rfloor)\,dt, \qquad (4.44)
\]
where ⌊t⌋ signifies the greatest integer less than or equal to t.
4.3.1 Examples
1. Use the Euler-Maclaurin formula to evaluate the sum \(\sum_{n=0}^{N}\cos(2\pi n/N)\):
\[
\sum_{n=0}^{N} \cos\frac{2\pi n}{N} = \int_0^N dn\,\cos\frac{2\pi n}{N} + \frac{1}{2}(1 + 1) + 0 = 1, \qquad (4.45)
\]
because
\[
f^{(2m-1)}(0) = f^{(2m-1)}(N) = 0 \qquad (4.46)
\]
and
\[
\int_0^N dn\,\cos\frac{2\pi n}{N} = \frac{N}{2\pi}\int_0^{2\pi} dx\,\cos x = 0. \qquad (4.47)
\]
Of course, the sum may be carried out directly,
\[
\sum_{n=0}^{N} \cos\frac{2\pi n}{N} = \frac{1}{2}\sum_{n=0}^{N}\left(e^{2\pi i n/N} + e^{-2\pi i n/N}\right)
= \frac{1}{2}\left(\frac{1 - e^{2\pi i(N+1)/N}}{1 - e^{2\pi i/N}} + \frac{1 - e^{-2\pi i(N+1)/N}}{1 - e^{-2\pi i/N}}\right)
= \frac{1}{2}(1 + 1) = 1. \qquad (4.48)
\]
2. The following sum occurs, for example, in computing the vacuum energy in a cosmological model:
\[
\sum_{l=0}^{\infty} (2l+1)\, e^{-l(l+1)t}. \qquad (4.49)
\]
How does this behave as t → 0? We will answer this question by using the Euler-Maclaurin formula, assuming that the remainder R_n tends to zero as n → ∞. Thus we will write the limiting form of that sum formula as
\[
\sum_{l=0}^{\infty} f(l) = \int_0^{\infty} dl\, f(l) + \frac{1}{2}[f(\infty) + f(0)]
+ \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!}\left[f^{(2k-1)}(\infty) - f^{(2k-1)}(0)\right]. \qquad (4.50)
\]
Here
\[
f(l) = (2l+1)\, e^{-l(l+1)t}, \qquad (4.51)
\]
so that
\[
f(\infty) = f^{(2k-1)}(\infty) = 0, \qquad (4.52)
\]
while a very simple calculation shows
\[
f(0) = 1, \qquad (4.53a)
\]
\[
f'(0) = 2 - t, \qquad (4.53b)
\]
\[
f'''(0) = -12t + 12t^2 - t^3, \qquad (4.53c)
\]
\[
f^{(5)}(0) = 120t^2 - 180t^3 + 30t^4 - t^5, \qquad (4.53d)
\]
\[
f^{(7)}(0) = -1680t^3 + 3360t^4 - 840t^5 + 56t^6 - t^7, \qquad (4.53e)
\]
\[
f^{(2k-1)}(0) = O(t^4), \qquad k \ge 5. \qquad (4.53f)
\]
Thus Eq. (4.50) yields
\[
\sum_{l=0}^{\infty} (2l+1)\, e^{-l(l+1)t} = \int_0^{\infty} dl\,(2l+1)\, e^{-l(l+1)t} + \frac{1}{2}
- \frac{B_2}{2!}\, f'(0) - \frac{B_4}{4!}\, f'''(0) - \ldots
\]
\[
= \frac{1}{t}\int_0^{\infty} du\, e^{-u} + \frac{1}{2} + \frac{1}{2}\left(\frac{1}{6}\right)(t - 2)
+ \frac{1}{4!}\left(-\frac{1}{30}\right)\left[12t + O(t^2)\right] + O(t^2)
\]
\[
= \frac{1}{t} + \frac{1}{3} + \frac{t}{15} + \frac{4}{315}\, t^2 + \frac{1}{315}\, t^3 + \ldots. \qquad (4.54)
\]
Here the integral was evaluated by making the substitution u = l(l+1)t, du = (2l+1)t dl, and in the last line we have displayed the next two terms in this asymptotic expansion for small t.
3. The Riemann zeta function (2.50) is defined by
\[
\zeta(\alpha) = \sum_{n=1}^{\infty} \frac{1}{n^{\alpha}}, \qquad \operatorname{Re}\alpha > 1. \qquad (4.55)
\]
Suppose we approximate this by the first M terms in the sum occurring in the Euler-Maclaurin formula (4.42):
\[
\zeta(\alpha, M) = \frac{1}{\alpha - 1} + \frac{1}{2} - \sum_{m=1}^{M} \frac{B_{2m}}{(2m)!}\, f^{(2m-1)}(1), \qquad (4.56)
\]
where f(n) = n^{-α}, and the first two terms here come from the integral and the ½ f(1) terms in the EM formula. It is easy to see that
\[
f^{(2m-1)}(1) = -\frac{\Gamma(\alpha + 2m - 1)}{\Gamma(\alpha)}. \qquad (4.57)
\]
Given the asymptotic behavior of the Bernoulli numbers in (4.12), it is apparent that the limit M → ∞ of ζ(α, M) does not exist. This limit is an example of an asymptotic series. However, in Table 4.2 we compare the sum of the first N terms of the series in (4.55) with the first N terms in the series defined by (4.56), that is, ζ(α, N), for α = 3, where ζ(3) = 1.2020569. The original series converges monotonically to the correct limiting value, but not spectacularly fast: for N = 9 terms, the relative error is about -0.5%. The asymptotic series is divergent; however, the N = 1 term is in error by only 4%, and the average of the N = 1 and N = 2 values is larger than the true value by only +0.5%. This illustrates a characteristic feature of asymptotic series: a few terms in the series approximate the function rather well, but as more and more terms are included the series deviates from the true value by an ever-increasing amount.
  N    Σ_{n=1}^{N} n^{-3}    ζ(3, N)      ½[ζ(3, N) + ζ(3, N+1)]
  1    1                     1.25         1.208
  2    1.125                 1.1667       1.208
  3    1.1620                1.25         1.175
  4    1.1777                1.1          1.308
  5    1.1857                1.5167       0.694
  6    1.1903                -0.1286
  7    1.1932                8.6214
  8    1.1952                -51.6619
  9    1.1965                470.564

Table 4.2: Two approximations compared for ζ(3) = 1.20206...: N terms in the defining series (4.55), and N terms (without the remainder) in the Euler-Maclaurin sum (4.56). The former converges monotonically to the limit from below, while the latter diverges, yet approximates the true value to better than 1% for low values of N.
Chapter 5
Analytic Functions

5.1 The Derivative
Let f(z) be a complex-valued function of the complex variable z. The derivative of f is defined as
\[
f'(z) = \frac{df}{dz} = \lim_{\delta z \to 0} \frac{f(z + \delta z) - f(z)}{\delta z} = \lim_{\delta z \to 0} \frac{\delta f}{\delta z}, \qquad (5.1)
\]
if the limit exists and is independent of the way in which δz approaches zero. This is illustrated in Fig. 5.1.
5.1.1 Examples
What is the derivative of z^n?
\[
\frac{d}{dz} z^n = \lim_{\delta z \to 0} \frac{(z + \delta z)^n - z^n}{\delta z}
= \lim_{\delta z \to 0} \frac{n z^{n-1}\,\delta z + O(\delta z^2)}{\delta z} = n z^{n-1}. \qquad (5.2)
\]
Figure 5.1: In the complex plane, δz, as indicated by the arrows in the figure, can approach zero from any direction.
41 Version of October 1, 2011
Then, since e^z is represented by a power series which converges everywhere, and therefore converges uniformly in any closed bounded (compact) region, it is also differentiable everywhere,
\[
\frac{d}{dz} e^z = \frac{d}{dz} \sum_{n=0}^{\infty} \frac{1}{n!} z^n = \sum_{n=1}^{\infty} \frac{1}{(n-1)!} z^{n-1} = e^z. \qquad (5.3)
\]
The derivative of the exponential function is the function itself.
5.2 Analyticity
Whenever f'(z_0) exists, f is said to be analytic (or regular, or holomorphic) at the point z_0. The function is analytic throughout a region in the complex plane if f' exists for every point in that region. Any point at which f' does not exist is called a singularity or singular point of the function f.
If f(z) is analytic everywhere in the complex plane, it is called entire.
Examples
• 1/z is analytic except at z = 0, so the function is singular at that point.
• The functions z^n, with n a nonnegative integer, and e^z are entire functions.
5.3 The Cauchy-Riemann Conditions
The Cauchy-Riemann conditions are necessary and sufficient conditions for a function to be analytic at a point.
Suppose f(z) is analytic at z_0. Then f'(z_0) may be obtained by taking δz to zero through purely real, or through purely imaginary, values, for example. If δz = δx, δx real, we have, upon writing f in terms of its real and imaginary parts, f = u + iv,
\[
f'(z_0) = \left(\frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x}\right)_{z=z_0}. \qquad (5.4)
\]
On the other hand, if δz = iδy, δy real, we have similarly
\[
f'(z_0) = \left(\frac{\partial u}{i\,\partial y} + i\,\frac{\partial v}{i\,\partial y}\right)_{z=z_0}
= \left(-i\,\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y}\right)_{z=z_0}. \qquad (5.5)
\]
Since the derivative is independent of how the limit is taken, we can equate these two expressions, meaning that they must have equal real and imaginary parts,
\[
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}. \qquad (5.6)
\]
These are the Cauchy-Riemann conditions.
These conditions are not only necessary; if the partial derivatives are continuous, they are also sufficient to ensure analyticity. Write
\[
f(z + \delta z) - f(z) = u(x + \delta x, y + \delta y) - u(x, y) + i[v(x + \delta x, y + \delta y) - v(x, y)]
\]
\[
= u(x + \delta x, y + \delta y) - u(x, y + \delta y) + u(x, y + \delta y) - u(x, y)
+ i[v(x + \delta x, y + \delta y) - v(x, y + \delta y) + v(x, y + \delta y) - v(x, y)]
\]
\[
= \delta x\,\frac{\partial u}{\partial x} + \delta y\,\frac{\partial u}{\partial y}
+ i\left(\delta x\,\frac{\partial v}{\partial x} + \delta y\,\frac{\partial v}{\partial y}\right), \qquad (5.7)
\]
which becomes, if the Cauchy-Riemann conditions hold,
\[
f(z + \delta z) - f(z) = \delta x\,\frac{\partial u}{\partial x} - \delta y\,\frac{\partial v}{\partial x}
+ i\left(\delta x\,\frac{\partial v}{\partial x} + \delta y\,\frac{\partial u}{\partial x}\right)
= (\delta x + i\,\delta y)\left(\frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x}\right), \qquad (5.8)
\]
so since δz = δx + iδy, we see that
\[
\frac{\delta f}{\delta z} \to \frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x} \qquad (5.9)
\]
independently of how δz → 0, so
\[
f'(z) = \frac{\partial u}{\partial x} + i\,\frac{\partial v}{\partial x} \qquad (5.10)
\]
exists.
Example
Consider the function z^* of z; that is, if z = x + iy, z^* = x - iy. The Cauchy-Riemann conditions never hold, since
\[
\frac{\partial x}{\partial x} = 1 \neq \frac{\partial(-y)}{\partial y} = -1, \qquad (5.11)
\]
so z^* is nowhere an analytic function of z.
5.4 Contour Integrals
Suppose we have a smooth path in the complex plane, extending from the point a to the point b. Suppose we choose points z_1, z_2, ..., z_{n-1} lying on the curve, and connect them by straight-line segments. Likewise connect a = z_0 with z_1 and b = z_n with z_{n-1}. See Fig. 5.2. Then the contour integral of a function f is defined by the following limit,
\[
\int_{a\,(C)}^{b} f(z)\,dz = \lim_{\substack{\Delta z_i \to 0 \\ n \to \infty}} \sum_{i=1}^{n} f(z_i)\,\Delta z_i, \qquad \Delta z_i = z_i - z_{i-1}, \qquad (5.12)
\]
Figure 5.2: Path C in the complex plane approximated by a series of straight-line segments through the points a = z_0, z_1, z_2, z_3, ..., b = z_n.
and the limit taken is one in which the number n of straight-line segments goes to infinity, while the length of the largest one goes to zero. Whenever this limit exists, independently of how it is taken, the integral exists. Note that in general the integral depends on the path C, as well as on the endpoints.
Example
Consider
\[
\oint_K \frac{dz}{z}, \qquad (5.13)
\]
where K is a circle about the origin, of radius r. (The circle on the integral sign signifies that the path of integration is closed.) From the polar representation of complex numbers, we may write
\[
z = re^{i\theta}, \qquad (5.14a)
\]
so since r is fixed on K, we have
\[
dz = re^{i\theta}\, i\,d\theta. \qquad (5.14b)
\]
Let us assume that the integration is carried out in a positive (counterclockwise) sense, so then
\[
\oint_K \frac{dz}{z} = i\int_0^{2\pi} d\theta = 2\pi i, \qquad (5.15)
\]
which is independent of the value of r.
5.5 Cauchy's Theorem
Cauchy's theorem states that if f(z) is analytic at all points on and inside a closed contour C, then the integral of the function around that contour vanishes,
\[
\oint_C f(z)\,dz = 0. \qquad (5.16)
\]
Figure 5.3: The integral around the contour C may be replaced by the sum of integrals around the subcontours C_i.
Proof: Subdivide the region inside the contour in the manner shown in Fig. 5.3. Obviously
\[
\oint_C f(z)\,dz = \sum_i \oint_{C_i} f(z)\,dz, \qquad (5.17)
\]
where C_i is the closed path around one of the mesh elements, since the contribution from the side common to two adjacent subcontours evidently cancels, leaving only the contribution from the exterior boundary. Now because f is analytic throughout the region, we may write for small δz
\[
f(z + \delta z) = f(z) + \delta z\, f'(z) + O(\delta z^2), \qquad (5.18)
\]
where O(δz²) means only that the remainder goes to zero faster than δz. We apply this result by assuming that we have a fine mesh subdividing C; we are interested in the limit in which the largest mesh element goes to zero. Let z_i be a representative point within the ith mesh element (for example, the center). Then
\[
\oint_{C_i} f(z)\,dz = f(z_i)\oint_{C_i} dz + f'(z_i)\oint_{C_i} (z - z_i)\,dz + \oint_{C_i} O\big((z - z_i)^2\big)\,dz. \qquad (5.19)
\]
Now it is easily seen that for an arbitrary contour C_i
\[
\oint_{C_i} dz = \oint_{C_i} (z - z_i)\,dz = 0, \qquad (5.20)
\]
so if the length of the cell is ε,
\[
\oint_{C_i} f(z)\,dz = O(\varepsilon^3) = A_i\,O(\varepsilon), \qquad (5.21)
\]
which is to say that the integral around the ith cell goes to zero faster than the area A_i of the ith cell. Thus the integral required is
\[
\oint_C f(z)\,dz = \sum_i A_i\,O(\varepsilon) = A\,O(\varepsilon), \qquad (5.22)
\]
Figure 5.4: A multiply connected region R consisting of the area within a triangle but outside of a circular region. The closed contour C cannot be continuously deformed to a point without crossing into the disk, which is outside the region R.
where A is the finite area contained within the contour C. As the subdivision becomes finer and finer, ε → 0, and so
\[
\oint_C f(z)\,dz = 0. \qquad (5.23)
\]
To state a more general form of Cauchy's theorem, we need the concept of a simply connected region. A simply connected region R is one in which any closed contour C lying in R may be continuously shrunk to a point without ever leaving R. Fig. 5.4 is an illustration of a multiply connected region: C lies entirely within R, yet it cannot be shrunk to a point because of the excluded region inside it.
We can now restate Cauchy's theorem as follows: If f is analytic in a simply connected region R, then
\[
\oint_C f(z)\,dz = 0 \qquad (5.24)
\]
for any closed contour C in R.
That simple connectivity is required here is seen by the example of the function 1/z, which is analytic in any region excluding the origin.
Here is another proof of Cauchy's theorem, as given in the book by Morse and Feshbach. If the closed contour C lies in a simply connected region where f'(z) exists, then
\[
\oint_C f(z)\,dz = 0. \qquad (5.25)
\]
Proof: Let us choose the origin to lie in the region of analyticity (if it does not, change variables so that z = 0 lies within C). Define
\[
F(\lambda) = \lambda \oint_C f(\lambda z)\,dz. \qquad (5.26)
\]
Figure 5.5: Distortion of the contour C to a small contour γ encircling the singularity at z_0.

Then the derivative of this function of λ is
\[
F'(\lambda) = \oint_C f(\lambda z)\,dz + \lambda \oint_C z f'(\lambda z)\,dz
= \oint_C f(\lambda z)\,dz + z f(\lambda z)\Big|_{z=\text{beginning of } C}^{z=\text{end of } C} - \oint_C f(\lambda z)\,dz = 0, \qquad (5.27)
\]
where we have integrated by parts, the boundary term vanishing because the function f is single-valued. Thus F(λ) is constant. But
\[
F(0) = \lim_{\lambda \to 0} \lambda \oint_C f(\lambda z)\,dz = \lim_{\lambda \to 0} \oint_{\lambda C} f(z)\,dz = 0, \qquad (5.28)
\]
because f(0) is bounded, f being analytic at the origin. (We have deformed the contour to an infinitesimal one about the origin.) Thus we conclude that F(1) = 0. This proves the theorem.
5.6 Cauchy's Integral Formula
If f(z) is analytic on and within the closed contour C, and z_0 lies within C, then the value of f at z_0 is given in terms of its boundary values by
\[
f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - z_0}\,dz, \qquad (5.29)
\]
where the contour is traversed in the positive (counterclockwise) sense.
where the contour is traversed in the positive (counterclockwise) sense.
Proof: f(z)/(z − z
0
) is not analytic within C, so choose a contour inside of
which this function is analytic, as shown in Fig. 5.5. Here we have connected
the contour C to the small contour γ by two overlapping lines C
′
, C
′′
which are
traversed in opposite senses. Now f(z)/(z −z
0
) is analytic on the inside of the
contour C +C
′
+C
′′
+γ. (By inside, we mean that if you follow the path in the
direction indicated by the arrows, the inside is only your left, and the outside
is on your right.) Thus, by Cauchy’s theorem
_
C+C
′
+C
′′
+γ
f(z)
z −z
0
dz = 0. (5.30)
Now because we choose the lines C′, C′′ as overlapping, and since f is continuous in the neighborhood of those lines, those two integrals cancel,
\[
\int_{C' + C''} \frac{f(z)}{z - z_0}\,dz = 0. \qquad (5.31)
\]
And since the circle γ may be chosen arbitrarily small,
\[
\oint_{\gamma} \frac{f(z)}{z - z_0}\,dz = f(z_0)\oint_{\gamma} \frac{dz}{z - z_0} = -2\pi i\, f(z_0), \qquad (5.32)
\]
since γ is traversed in a negative or clockwise sense. Thus the theorem (5.29) is proved.
(Implicit in the above is the assumption that the contour does not cross itself or wind around z_0 more than once. If this happens, Cauchy's formula is modified. See homework.)
It is now easily shown from the definition of the derivative that if f is analytic on and within C, we may express the derivative by
\[
f'(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^2}\,dz, \qquad (5.33)
\]
and in fact the nth derivative is given by
\[
f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz. \qquad (5.34)
\]
That is, if f is analytic, so is its derivative. An analytic function is infinitely differentiable, a property which is not true for a differentiable function of a real variable.
5.7 Morera's Theorem
The converse of Cauchy's theorem is the following:
If f(z) is continuous in a region R, and for all contours C lying in R
\[
\oint_C f(z)\,dz = 0, \qquad (5.35)
\]
then f(z) is analytic throughout R.
Proof: If f satisfies the above hypotheses, then the integral
\[
\int_{z_1}^{z_2} f(z)\,dz = F(z_2) - F(z_1) \qquad (5.36)
\]
is a function of the endpoints only, and not of the path, as is evident from Fig. 5.6. But now the function F has a unique derivative,
\[
F'(z) = f(z), \qquad (5.37)
\]
so that F(z) is analytic. Hence, so is its derivative f(z). QED.
Figure 5.6: Two paths C_1 and C_2 connecting the point z_1 with the point z_2. Because \(\oint_{C_1 - C_2} f(z)\,dz = 0\), we conclude that \(\int_{z_1}^{z_2} f(z)\,dz\) along C_1 equals \(\int_{z_1}^{z_2} f(z)\,dz\) along C_2.
Figure 5.7: Path of integration in the cut t plane used in defining the logarithm in Eq. (5.38).
5.8 The Logarithm
An alternative definition to that given in Sec. 3.2 is given by the path integral
\[
\log z = \int_1^z \frac{dt}{t}, \qquad (5.38)
\]
over any contour connecting 1 with z which does not cross the cut line shown in Fig. 5.7. The cut is present so that the contour cannot encircle the singularity of the integrand at t = 0. Because the arg function must be single-valued, the cut supplies the restriction
\[
-\pi < \arg(z) \le \pi. \qquad (5.39)
\]
The equality on the right means that for negative z we approach the cut from above.
Since the integral is path-independent, we may choose the path to consist of a segment along the positive real axis from 1 to |z|, followed by an arc of a circle of radius |z|, as also shown in Fig. 5.7. Then the logarithm may be written as
\[
\log z = \int_1^{|z|} \frac{dt}{t} + \int_0^{\theta} \frac{|z|\, i\, e^{i\theta'}\,d\theta'}{|z|\, e^{i\theta'}}
= \log|z| + i\theta = \log|z| + i\arg z, \qquad (5.40)
\]
which coincides with the previous definition.
The logarithm is analytic in the cut plane, and its derivative is
\[
\frac{d}{dz}\log z = \frac{1}{z}. \qquad (5.41)
\]
If ξ = log z, define the inverse function by z = exp ξ. Since ξ = 0 when z = 1, we have
\[
\exp(0) = 1. \qquad (5.42)
\]
Also we have
\[
\frac{d}{d\xi}\exp\xi = \frac{dz}{d\xi} = \frac{dz}{d\log z} = z = \exp\xi. \qquad (5.43)
\]
These two properties uniquely define the exponential function.
5.9 A Theorem for Functions Represented by Series
Let us suppose that the function Φ defined by the series
\[
\Phi(z) = \sum_{n=0}^{\infty} f_n(z) \qquad (5.44)
\]
converges uniformly on a closed contour C, and that each f_n is analytic on and within C. Then, on and within C,
\[
\Phi(z) = \sum_{n=0}^{\infty} f_n(z) \qquad (5.45)
\]
converges and Φ is analytic.
Proof: Since a uniformly convergent series may be integrated term by term, we have for z_0 within C
\[
\frac{1}{2\pi i}\oint_C \frac{\Phi(z)}{z - z_0}\,dz = \sum_{n=0}^{\infty} \frac{1}{2\pi i}\oint_C \frac{f_n(z)}{z - z_0}\,dz = \sum_{n=0}^{\infty} f_n(z_0), \qquad (5.46)
\]
by Cauchy's integral formula. So this last sum exists; call it
\[
\Phi(z_0) = \sum_{n=0}^{\infty} f_n(z_0). \qquad (5.47)
\]
Now Φ′(z_0) exists as well:
\[
\Phi'(z_0) = \frac{1}{2\pi i}\oint_C \frac{\Phi(z)}{(z - z_0)^2}\,dz = \sum_{n=0}^{\infty} f_n'(z_0), \qquad (5.48)
\]
so Φ is analytic within C.
Chapter 6
Taylor and Laurent Expansions; Analytic Continuation

6.1 Taylor Expansion
Let f(z) be analytic within and on a circle C with center at z_0. Let z be a point within the circle. Then Cauchy's integral formula can be written as
\[
f(z) = \frac{1}{2\pi i}\oint_C \frac{f(z')}{z' - z}\,dz'
= \frac{1}{2\pi i}\oint_C \frac{f(z')}{(z' - z_0) - (z - z_0)}\,dz'. \qquad (6.1)
\]
Because z lies inside the circle,
\[
|z' - z_0| > |z - z_0|, \qquad (6.2)
\]
we can expand the denominator,
\[
\frac{1}{(z' - z_0) - (z - z_0)} = \frac{1}{z' - z_0}\,\frac{1}{1 - \frac{z - z_0}{z' - z_0}}
= \frac{1}{z' - z_0} \sum_{n=0}^{\infty} \left(\frac{z - z_0}{z' - z_0}\right)^n. \qquad (6.3)
\]
This series converges absolutely and uniformly for z′ on the circle and z fixed inside, so it may be integrated term by term:
\[
f(z) = \sum_{n=0}^{\infty} (z - z_0)^n\, \frac{1}{2\pi i}\oint_C \frac{f(z')}{(z' - z_0)^{n+1}}\,dz'
= \sum_{n=0}^{\infty} (z - z_0)^n\, \frac{1}{n!}\, f^{(n)}(z_0), \qquad (6.4)
\]
using the result of Eq. (5.34).
53 Version of October 12, 2011
Figure 6.1: Circle of convergence for the Taylor series (6.4). Here z_0 is the point about which the Taylor expansion is performed, ξ_0 is the closest singularity of f to z_0, and ρ = |ξ_0 - z_0| is the radius of convergence. The Taylor series converges within the circle of convergence, and diverges outside the circle of convergence. It may either diverge or converge on the circle of convergence.
This Taylor series will converge inside a circle having radius equal to the distance from z_0 to the nearest singularity, and diverge outside such a circle, as illustrated in Fig. 6.1.
Proof: For |z - z_0| < ρ, we can choose C in the above derivation to have radius r, where |z - z_0| < r < ρ, so the above expansion converges. For |z - z_0| > ρ, suppose it were true that the Taylor series converged. Then, according to the theorem in Sec. 2.7, it would converge at z = ξ_0, to an analytic function (Sec. 5.9). This is contrary to the assertion that ξ_0 is a singular point. QED.
Example
Consider the function
\[
f(z) = \frac{1}{1 - z}, \qquad (6.5)
\]
which is analytic except at z = 1. The Taylor series about the origin,
\[
\frac{1}{1 - z} = 1 + z + z^2 + \ldots, \qquad (6.6)
\]
converges only for |z| < 1.
converges only for z < 1. We may obtain a larger circle of convergence by
expanding about some other point, say z = −1:
1
1 −z
=
1
2 −(z + 1)
=
1
2
1
1 −
z+1
2
=
1
2
_
1 +
z + 1
2
+
_
z + 1
2
_
2
+ . . .
_
, (6.7)
which converges inside a circle of radius 2, centered about z = −1. In both
cases the singularity at z = 1 lies on the circle of convergence.
6.2 Analytic Continuation
The process of extending a power series representation of an analytic function is called analytic continuation. It can be done whenever there are only isolated singular points. The general idea is as follows.
Suppose we have a power series about z_0,
\[
f(z) = \sum_{n=0}^{\infty} a_n (z - z_0)^n, \qquad (6.8)
\]
which has radius of convergence ρ. (That is, it converges if |z - z_0| < ρ and diverges if |z - z_0| > ρ.) The function f has a singular point somewhere on the circle of convergence. Since this power series represents an analytic function inside its circle of convergence, it can, by the above, be Taylor expanded about any other point lying within the circle of convergence, say z_1,
\[
f(z) = \sum_{n=0}^{\infty} b_n (z - z_1)^n. \qquad (6.9)
\]
In general,¹ the circle of convergence of this series will lie partly outside the original circle. Thus f is now defined in a larger domain. In the new region, f may be expanded once again, and usually the new circle of convergence will lie partly outside both the first two circles, so again the domain of definition is extended. And so on. The idea is sketched in Fig. 6.2.
Entire functions may be represented by power series (Taylor expansions) valid everywhere, since they have no singular points.
6.3 Laurent Expansion
Let f(z) be analytic in the annulus defined by two concentric circles C_1 and C_2, both centered on z_0, including the bounding circles. See Fig. 6.3. If z lies in the annulus, Cauchy's integral formula says (the interior boundary C_1 must be traversed in a clockwise sense, hence the minus sign)
\[
f(z) = \frac{1}{2\pi i}\oint_{C_2} \frac{f(z')}{z' - z}\,dz' - \frac{1}{2\pi i}\oint_{C_1} \frac{f(z')}{z' - z}\,dz'
\]
\[
= \frac{1}{2\pi i}\oint_{C_2} \frac{f(z')\,dz'}{(z' - z_0) - (z - z_0)} - \frac{1}{2\pi i}\oint_{C_1} \frac{f(z')\,dz'}{(z' - z_0) - (z - z_0)}. \qquad (6.10)
\]
For the C_2 integral, |z - z_0| < |z' - z_0|, so we expand in (z - z_0)/(z' - z_0); for the C_1 integral, |z - z_0| > |z' - z_0|, so we expand in (z' - z_0)/(z - z_0). Thus we have
\[
f(z) = \frac{1}{2\pi i}\oint_{C_2} f(z') \sum_{n=0}^{\infty} \frac{(z - z_0)^n}{(z' - z_0)^{n+1}}\,dz'
\]
¹But not always. See Whittaker and Watson, §5.501.
Figure 6.2: The process of analytic continuation of a function defined by a power series. The original series is a Taylor expansion about the point z_0, which converges inside a circle having radius equal to the distance to the nearest singularity ξ_0. If the function is instead expanded about the point z_1, it converges in a different circle, having radius equal to the distance from z_1 to the singular point closest to z_1, namely ξ_1. Instead the function can be expanded about z_2, lying outside the first circle of convergence, but inside the second, which will define the function in a different circle of convergence, with radius of convergence equal to the distance to the singularity closest to z_2, namely ξ_2. This process may be repeated indefinitely; f is defined in the union of all such circles of convergence.
Figure 6.3: Annular region defined by two concentric circles.
\[
+ \frac{1}{2\pi i}\oint_{C_1} f(z') \sum_{n=0}^{\infty} \frac{(z' - z_0)^n}{(z - z_0)^{n+1}}\,dz'. \qquad (6.11)
\]
Now \(\oint_C f(z')(z' - z_0)^k\,dz'\), where k is a positive or negative integer, has the same value for all contours circling z_0 once and lying in the annulus, since f(z')(z' - z_0)^k is analytic there. Therefore the two sums above may be combined to yield
\[
f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n, \qquad (6.12)
\]
where the expansion coefficients are
\[
a_n = \frac{1}{2\pi i}\oint_C \frac{f(z')}{(z' - z_0)^{n+1}}\,dz', \qquad (6.13)
\]
where C is any contour lying in the annulus. This is called the Laurent expansion. It generalizes the Taylor expansion in the case when there are singularities interior to C_1. (When there are no such singularities, the terms for negative n are identically zero.)
Example
The function
\[
\exp\left[\frac{x}{2}\left(z - \frac{1}{z}\right)\right] \qquad (6.14)
\]
is analytic except at z = 0. So it has a Laurent expansion about zero:
\[
\exp\left[\frac{x}{2}\left(z - \frac{1}{z}\right)\right] = \sum_{n=-\infty}^{\infty} a_n z^n, \qquad (6.15)
\]
where
\[
a_n = \frac{1}{2\pi i}\oint_C e^{\frac{x}{2}\left(z' - \frac{1}{z'}\right)}\,\frac{dz'}{z'^{\,n+1}}. \qquad (6.16)
\]
We make this last integral more explicit by choosing C to be a circle of unit radius, z' = e^{iθ}, so
\[
a_n = \frac{1}{2\pi i}\int_0^{2\pi} e^{ix\sin\theta}\, e^{-in\theta}\, i\,d\theta
= \frac{1}{2\pi}\int_0^{2\pi} \cos(n\theta - x\sin\theta)\,d\theta, \qquad (6.17)
\]
because
\[
\int_0^{2\pi} \sin(n\theta - x\sin\theta)\,d\theta = 0, \qquad (6.18)
\]
owing to the integrand changing sign under the substitution θ → 2π - θ. This function,
\[
\exp\left[\frac{x}{2}\left(z - \frac{1}{z}\right)\right] = \sum_{n=-\infty}^{\infty} z^n J_n(x), \qquad (6.19)
\]
is the generating function for the Bessel functions of integer order, J_n(x). Thus we have derived the following integral representation of the Bessel functions,
\[
J_n(x) = \frac{1}{2\pi}\int_0^{2\pi} \cos(n\theta - x\sin\theta)\,d\theta. \qquad (6.20)
\]
6.3.1 Example
Here is another example, which shows that the Laurent expansion holds for functions with branch points and branch lines, provided those lie entirely inside the inner annular boundary. Consider the function
\[
\sqrt{z^2 - 1}. \qquad (6.21)
\]
This function has branch points at z = +1 and at z = -1, and a branch line connecting these two points. Because it is a square-root singularity, the branch line for |z| > 1 cancels, as may be seen by considering the net phase change when both branch points are encircled:
\[
\arg\left(\sqrt{z^2 - 1}\right)\Big|_{\arg z = 0}^{\arg z = 2\pi} = 0 \bmod 2\pi. \qquad (6.22)
\]
This means that we may take \(\sqrt{z^2} = z\), and we can immediately write down the expansion for large |z|:
\sqrt{z^2 - 1} = z\left(1 - \frac{1}{z^2}\right)^{1/2} = z\left(1 - \frac{1}{2z^2} - \frac{1}{8z^4} - \dots\right) = -\sum_{n=0}^{\infty} \frac{(2n-3)!!}{2^n\, n!}\, \frac{1}{z^{2n-1}},  (6.23)
where we have used the double factorial notation,

(2k+1)!! = (2k+1)(2k-1)(2k-3)\cdots 3\cdot 1,  (6.24)

and, from the recursion formula

(2k+1)!! = (2k+1)(2k-1)!!,  (6.25)

identify

(-1)!! = 1, \quad (-3)!! = -1.  (6.26)

The Laurent expansion (6.23) converges for |z| > 1.
6.4 Classification of Singularities

Suppose in the neighborhood of z_0 a function f(z) may be written as

f(z) = \phi(z) + \frac{a_{-1}}{z - z_0} + \frac{a_{-2}}{(z-z_0)^2} + \dots + \frac{a_{-n}}{(z-z_0)^n},  (6.27)
where \phi(z) is analytic in the neighborhood of z_0, and a_{-1}, a_{-2}, \dots, a_{-n} are complex constants. When the above expansion holds true, f is said to have a pole of order n at z = z_0. When n = 1, the singularity is called a simple pole. When f has a pole of order n at z_0,

(z - z_0)^n f(z)  (6.28)

is analytic at z = z_0. If the function

(z - z_0)^m f(z)  (6.29)

is not analytic at z = z_0 no matter how large the integer m is, we say that f has an essential singularity at z_0. (This definition applies to functions which are single-valued without the introduction of branch lines.)
If an essential singularity is "isolated," that is, if in a sufficiently small neighborhood of z_0, f is analytic except at z_0, then f may be expanded in a Laurent series converging in an annulus:

f(z) = \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n, \quad \Delta > |z - z_0| > \delta,  (6.30)

where \delta is arbitrarily small, and \Delta is the distance to the next singularity. (The proof of this statement is provided in the homework.)
6.4.1 Weierstrass–Picard Theorem

In the neighborhood of an essential singularity, f(z) becomes arbitrarily close to every complex value. This theorem, due to Weierstrass, was greatly sharpened by Picard.

Picard's Theorem

In any neighborhood of an essential singularity, the function assumes every finite value, with one possible exception, an infinite number of times.
Example: Consider

e^{1/z} = \sum_{n=0}^{\infty} \frac{1}{n!\, z^n},  (6.31)

which has an essential singular point at z = 0. Let \alpha be any complex number except 0. For what z's is

\alpha = e^{1/z}?  (6.32)
Recalling the 2\pi i periodicity of the exponential function, we see

\log\alpha = \frac{1}{z} + 2\pi i n, \quad n = \text{integer},  (6.33)

or

z = \frac{1}{\log\alpha - 2\pi i n}.  (6.34)

Thus in any neighborhood of 0 there are an infinite number of these z's.
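This is easy to see concretely (a numerical aside, not in the original notes; the target value alpha below is an arbitrary choice): the solutions (6.34) march toward z = 0 as |n| grows, while exp(1/z) continues to hit alpha exactly.

```python
import cmath

# Arbitrary nonzero target value (any alpha != 0 works).
alpha = 2.5 - 1.0j

# Solutions of alpha = exp(1/z), from Eq. (6.34):
# z = 1/(log(alpha) - 2*pi*i*n). As |n| grows, z -> 0,
# yet exp(1/z) still equals alpha.
for n in (1, 10, 1000):
    z = 1 / (cmath.log(alpha) - 2j * cmath.pi * n)
    print(abs(z), cmath.exp(1 / z))
```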
6.4.2 Branch Points and Cuts

Recall that log z was defined in the cut plane shown in Fig. 3.1. The location of the cut line is arbitrary, but the location of the end point, z = 0, is not. This branch point is a singular point of log z:

\frac{d}{dz}\log z = \frac{1}{z},  (6.35)

which does not exist at z = 0. This type of singularity is neither a pole nor an essential singularity. Once the cut is specified, thus defining log z, the function is not analytic on the branch cut or branch line; in fact, it is discontinuous across the cut:

\text{disc}(\log z) = \log\rho e^{i\pi} - \log\rho e^{-i\pi} = 2i\pi.  (6.36)

The same applies to square roots, and to all nonintegral powers, which are defined in terms of the logarithm,

\sqrt{z} = z^{1/2} = e^{\frac{1}{2}\log z}.  (6.37)
Here the discontinuity across the branch line is

\text{disc}(\sqrt{z}) = \sqrt{\rho}\, e^{i\pi/2} - \sqrt{\rho}\, e^{-i\pi/2} = \sqrt{\rho}\left(e^{i\pi/2} - e^{-i\pi/2}\right) = 2i\sqrt{\rho}.  (6.38)
6.5 Liouville's Theorem

First we prove Cauchy's inequality. Recall the integral representation for the derivative of an analytic function, Eq. (5.34),

f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_C \frac{f(z)}{(z-z_0)^{n+1}}\, dz  (6.39)

if z_0 is inside C and f is analytic on and within C. If C is a circle of radius r centered about z_0,

z = z_0 + r e^{i\theta},  (6.40)
we write this integral more explicitly as

f^{(n)}(z_0) = \frac{n!}{2\pi} \int_0^{2\pi} \frac{f\left(z_0 + re^{i\theta}\right)}{r^n e^{in\theta}}\, d\theta  (6.41)
or

\left|f^{(n)}(z_0)\right| \le \frac{n!}{2\pi} \int_0^{2\pi} \frac{\left|f\left(z_0 + re^{i\theta}\right)\right|}{r^n}\, d\theta \le \frac{n!\, M}{r^n},  (6.42)

where M is the maximum value attained by |f| on C.
Now Liouville's theorem (also really due to Cauchy) states: An entire bounded function is constant.

Proof: Since f(z) is entire, the Taylor series converges everywhere,

f(z) = \sum_{n=0}^{\infty} \frac{1}{n!} f^{(n)}(0)\, z^n.  (6.43)
But from Cauchy's inequality,

\left|f^{(n)}(0)\right| \le \frac{M\, n!}{R^n},  (6.44)

where R is the radius of an arbitrarily large circle about the origin, and M may be taken as the bound on |f|,

|f(z)| \le M \quad \forall z.  (6.45)

Hence by taking R \to \infty, we see that

f^{(n)}(0) = 0, \quad n > 0,  (6.46)

and so

f(z) = f(0).  (6.47)

QED.

Example: Although e^z is entire, it is certainly not bounded.
Chapter 7
The Calculus of Residues
If f(z) has a pole of order m at z = z_0, it can be written as in Eq. (6.27), or

f(z) = \phi(z) + \frac{a_{-1}}{z - z_0} + \frac{a_{-2}}{(z-z_0)^2} + \dots + \frac{a_{-m}}{(z-z_0)^m},  (7.1)

where \phi(z) is analytic in the neighborhood of z = z_0.
Now we have seen that if C encircles z_0 once in a positive sense,

\oint_C dz\, \frac{1}{(z-z_0)^n} = 2\pi i\, \delta_{n,1},  (7.2)

where the Kronecker \delta-symbol is defined by

\delta_{m,n} = \begin{cases} 0, & m \ne n, \\ 1, & m = n. \end{cases}  (7.3)
Proof: By Cauchy's theorem we may take C to be a circle centered on z_0. On the circle, write z = z_0 + re^{i\theta}. Then the integral in Eq. (7.2) is

\frac{i}{r^{n-1}} \int_0^{2\pi} d\theta\, e^{i(1-n)\theta},  (7.4)

which evidently integrates to zero if n \ne 1, but is 2\pi i if n = 1. QED.
Thus if we integrate the function (7.1) on a contour C which encloses z_0, while \phi(z) is analytic on and within C, we find

\oint_C f(z)\, dz = 2\pi i\, a_{-1}.  (7.5)

Because the coefficient of the (z - z_0)^{-1} power in the Laurent expansion of f plays a special role, we give it a name, the residue of f(z) at the pole.
If C contains a number of poles of f, replace the contour C by contours \alpha, \beta, \gamma, \dots encircling the poles singly, as shown in Fig. 7.1.

63 Version of October 26, 2011

Figure 7.1: Integration of a function f around the contour C which contains only poles of f may be reduced to the integrals around subcontours \alpha, \beta, \gamma, etc., each of which contains but a single pole of f.

The contour integral around C may be distorted to a sum of disjoint ones around \alpha, \beta, \dots, so
\oint_C f(z)\, dz = \oint_\alpha f(z)\, dz + \oint_\beta f(z)\, dz + \dots,  (7.6)

and since each small contour integral gives 2\pi i times the residue of the single pole interior to that contour, we have established the residue theorem: If f is analytic on and within a contour C except for a number of poles within,

\oint_C f(z)\, dz = 2\pi i \sum_{\text{poles within } C} \text{residues},  (7.7)

where the sum is carried out over all the poles contained within C. This result is very usefully employed in evaluating definite integrals, as the following examples show.
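Before the worked examples, here is a quick numerical sanity check of the residue theorem (an aside, not in the original notes): a contour integral around the unit circle, evaluated by brute-force quadrature, reproduces 2\pi i times the enclosed residues.

```python
import cmath

def contour_integral(f, center=0.0, radius=1.0, steps=20000):
    """Numerically integrate f around the circle |z - center| = radius,
    traversed in the positive sense, by the midpoint rule in the angle."""
    h = 2 * cmath.pi / steps
    total = 0.0
    for k in range(steps):
        theta = (k + 0.5) * h
        z = center + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * h
        total += f(z) * dz
    return total

# f has a simple pole at 0.3 (residue 2) inside the unit circle and one
# at 5 (outside), so the integral should be 2*pi*i * 2 = 4*pi*i.
f = lambda z: 2 / (z - 0.3) + 1 / (z - 5)
print(contour_integral(f))
```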
7.1 Example 1

Consider the following integral over an angle:

I = \int_0^{2\pi} \frac{d\theta}{1 - 2p\cos\theta + p^2}, \quad 0 < p < 1.  (7.8)

Let us introduce a complex variable according to

z = e^{i\theta}, \quad dz = ie^{i\theta}\, d\theta = iz\, d\theta,  (7.9)

so that

\cos\theta = \frac{1}{2}\left(z + \frac{1}{z}\right).  (7.10)
Therefore, we can rewrite the angular integral as an integral around a closed contour C which is a unit circle about the origin:

I = \oint_C \frac{dz}{iz}\, \frac{1}{1 - p\left(z + \frac{1}{z}\right) + p^2} = \oint_C \frac{dz}{i}\, \frac{1}{z - p(z^2+1) + p^2 z} = \frac{1}{i} \oint_C dz\, \frac{1}{(1-pz)(z-p)}.  (7.11)
The integrand exhibits two poles, one at z = 1/p > 1 and one at z = p < 1. Only the latter is inside the contour C, so since

\frac{1}{1-pz}\, \frac{1}{z-p} = \left[\frac{1}{z-p} + \frac{p}{1-pz}\right] \frac{1}{1-p^2},  (7.12)

we have from the residue theorem

I = 2\pi i\, \frac{1}{i}\, \frac{1}{1-p^2} = \frac{2\pi}{1-p^2}.  (7.13)

Note that we could have obtained the residue without partial fractioning by evaluating the coefficient of 1/(z-p) at z = p:

\left.\frac{1}{1-pz}\right|_{z=p} = \frac{1}{1-p^2}.  (7.14)
This observation is generalized in the following.
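As a numerical check of Eq. (7.13) (an aside, not in the original notes), the angular integral (7.8) can be evaluated directly by the midpoint rule, which is extremely accurate for smooth periodic integrands, and compared with the residue-theorem answer; p = 0.5 below is a sample value.

```python
import math

# Midpoint-rule evaluation of the angular integral (7.8) for p = 0.5,
# compared with the residue-theorem result 2*pi/(1 - p**2) of Eq. (7.13).
p = 0.5
steps = 100000
h = 2 * math.pi / steps
I = sum(h / (1 - 2 * p * math.cos((k + 0.5) * h) + p * p) for k in range(steps))
print(I, 2 * math.pi / (1 - p * p))  # both ~8.37758
```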
7.2 A Formula for the Residue

If f(z) has a pole of order m at z = z_0, the residue of that pole is

a_{-1} = \frac{1}{(m-1)!}\, \frac{d^{m-1}}{dz^{m-1}} \left[(z-z_0)^m f(z)\right] \Big|_{z=z_0}.  (7.15)

The proof follows immediately from Eq. (7.1).
7.3 Example 2

This time we consider an integral along the real line,

I = \int_{-\infty}^{\infty} dx\, \frac{1}{(x^2+1)^3} = \lim_{R\to\infty} \int_{-R}^{R} dx\, \frac{1}{(x^2+1)^3},  (7.16)
where we have made explicit the meaning of the upper and lower limits. We relate this to a contour integral as sketched in Fig. 7.2. Thus we have

\oint_C \frac{dz}{(z^2+1)^3} = \int_{-R}^{R} \frac{dx}{(x^2+1)^3} + \int_\Gamma \frac{dz}{(z^2+1)^3},  (7.17)

Figure 7.2: The closed contour C consists of the portion of the real axis between −R and R, and the semicircle Γ of radius R in the upper half plane. Also shown in the figure are the locations of the poles of the integrand in Eq. (7.17), at z = ±i.
where we are to understand that the limit R → ∞ is to be taken at the end of the calculation. It is easy to see that the integral over the large semicircle vanishes in this limit:

\int_\Gamma \frac{dz}{(z^2+1)^3} = \int_0^{\pi} \frac{Ri\, e^{i\theta}\, d\theta}{(R^2 e^{2i\theta} + 1)^3} \to 0, \quad R \to \infty.  (7.18)

Hence the integral desired is just the closed contour integral,

I = \oint_C \frac{dz}{(z^2+1)^3} = 2\pi i\, (\text{residue at } i).  (7.19)
By the formula (7.15) the desired residue is

a_{-1} = \frac{1}{2!}\, \frac{d^2}{dz^2} \left[(z-i)^3\, \frac{1}{(z-i)^3 (z+i)^3}\right] \Big|_{z=i} = \frac{1}{2!}\, \frac{d^2}{dz^2}\, \frac{1}{(z+i)^3} \Big|_{z=i} = \frac{1}{2!}\, \frac{(-3)(-4)}{(z+i)^5} \Big|_{z=i} = \frac{3}{16i},  (7.20)

so

I = \frac{3\pi}{8}.  (7.21)
7.4 Jordan's Lemma

The evaluation of a class of integrals depends upon this lemma. If f(z) → 0 uniformly with respect to arg z as |z| → ∞ for 0 ≤ arg z ≤ π, and f(z) is analytic when |z| > c > 0 and 0 ≤ arg z ≤ π, then for α > 0,

\lim_{\rho\to\infty} \int_{\Gamma_\rho} e^{i\alpha z} f(z)\, dz = 0,  (7.22)

where \Gamma_\rho is a semicircle of radius \rho above the real axis with center at the origin. (Cf. Fig. 7.2.)
Proof: Putting in polar coordinates,

\int_{\Gamma_\rho} e^{i\alpha z} f(z)\, dz = \int_0^{\pi} e^{i\alpha(\rho\cos\theta + i\rho\sin\theta)}\, f\left(\rho e^{i\theta}\right) \rho e^{i\theta}\, i\, d\theta.  (7.23)
If we take the absolute value of this equation, we obtain the inequality

\left|\int_{\Gamma_\rho} e^{i\alpha z} f(z)\, dz\right| \le \int_0^{\pi} e^{-\alpha\rho\sin\theta} \left|f\left(\rho e^{i\theta}\right)\right| \rho\, d\theta < \varepsilon \int_0^{\pi} e^{-\alpha\rho\sin\theta} \rho\, d\theta,  (7.24)

if |f(\rho e^{i\theta})| < \varepsilon for all \theta when \rho is sufficiently large. (This is what we mean by going to zero uniformly for large \rho.) Now when

0 \le \theta \le \frac{\pi}{2}, \quad \sin\theta \ge \frac{2\theta}{\pi},  (7.25)
which is easily verified geometrically. Therefore, the integral on the right-hand side of Eq. (7.24) is bounded as follows,

\int_0^{\pi} e^{-\alpha\rho\sin\theta} \rho\, d\theta < 2\rho \int_0^{\pi/2} e^{-2\alpha\rho\theta/\pi}\, d\theta = \frac{\pi}{\alpha}\left(1 - e^{-\alpha\rho}\right).  (7.26)

Hence

\left|\int_{\Gamma_\rho} e^{i\alpha z} f(z)\, dz\right| < \frac{\varepsilon\pi}{\alpha}\left(1 - e^{-\alpha\rho}\right)  (7.27)

may be made as small as we like by merely choosing \rho large enough (so that \varepsilon \to 0). QED.
7.5 Example 3

Consider the integral

I = \int_0^{\infty} \frac{\cos x}{x^2 + a^2}\, dx.  (7.28)

The associated contour integral is

\oint_C \frac{e^{iz}}{z^2 + a^2}\, dz = \int_{-R}^{R} \frac{e^{ix}}{x^2 + a^2}\, dx + \int_\Gamma \frac{e^{iz}}{z^2 + a^2}\, dz,  (7.29)

where the contour Γ is a large semicircle of radius R centered on the origin in the upper half plane, as in Fig. 7.2. (The only difference here is that the pole inside the contour C is at ia.) The second integral on the right-hand side vanishes as R → ∞ by Jordan's lemma. (Note carefully that this would not be true if we replaced e^{iz} by cos z in the above.) Because only the even part of e^{ix} survives symmetric integration,
I = \frac{1}{2} \int_{-\infty}^{\infty} \frac{e^{ix}}{x^2 + a^2}\, dx = \frac{1}{2} \oint_C \frac{e^{iz}}{z^2 + a^2}\, dz = \frac{1}{2}\, 2\pi i\, \frac{1}{2ia}\, e^{i(ia)} = \frac{\pi}{2a}\, e^{-a}.  (7.30)

(Note that if C were closed in the lower half plane, the contribution from the infinite semicircle would not vanish. Why?)
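A quadrature check of Eq. (7.30), for the sample value a = 1 (a numerical aside, not in the original notes): the oscillatory tail beyond the cutoff contributes only at the 1e-5 level.

```python
import math

# Quadrature check of Eq. (7.30): the integral of cos(x)/(x**2+a**2)
# over [0, inf) equals (pi/(2a))*exp(-a). Cutoff at x = 200 leaves a
# tail of order 1/200**2 ~ 2.5e-5 (by integration by parts).
a = 1.0
cutoff, steps = 200.0, 400000
h = cutoff / steps
I = sum(h * math.cos((k + 0.5) * h) / (((k + 0.5) * h) ** 2 + a * a)
        for k in range(steps))
print(I, math.pi / (2 * a) * math.exp(-a))  # both ~0.57786
```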
7.6 Cauchy Principal Value

To this point we have assumed that the path of integration never encounters any singularities of the integrated function. On the contrary, however, let us now suppose that f(x) has simple poles on the real axis, and try to attach meaning to

\int_{-\infty}^{\infty} f(x)\, dx.  (7.31)

For simplicity, suppose f(z) has a simple pole at only one point on the real axis,

f(z) = \phi(z) + \frac{a_{-1}}{z - x_0},  (7.32)

where \phi(z) is analytic on the entire real axis. Then we define the (Cauchy) principal value of the integral as

P \int_{-\infty}^{\infty} f(x)\, dx = \lim_{\delta\to 0^+} \left[\int_{-\infty}^{x_0-\delta} f(x)\, dx + \int_{x_0+\delta}^{\infty} f(x)\, dx\right],  (7.33)

which means that the immediate neighborhood of the singularity is to be omitted symmetrically. The limit exists because f(x) ≈ a_{-1}/(x - x_0) near x = x_0, which is odd about x_0.
We can apply the residue theorem to such integrals by considering a deformed (indented) contour, as shown in Fig. 7.3. For simplicity, suppose the function falls off rapidly enough in the upper half plane so that

\int_\Gamma f(z)\, dz = 0,  (7.34)

where Γ is the "infinite" semicircle in the upper half plane. Then the integral around the closed contour shown in the figure is

\oint_C f(z)\, dz = P \int_{-\infty}^{\infty} f(x)\, dx - i\pi a_{-1},  (7.35)

Figure 7.3: Contour which avoids the singularity along the real axis by passing above the pole.

Figure 7.4: Contour which avoids the singularity along the real axis by passing below the pole.
where the second term comes from an explicit calculation in which the simple pole is half encircled in a negative sense (giving −1/2 the result if the pole were fully encircled in the positive sense). On the other hand, from the residue theorem,

\oint_C f(z)\, dz = 2\pi i \sum_{\text{poles} \in \text{UHP}} (\text{residues}),  (7.36)

where UHP stands for upper half plane. Alternatively, we could consider a differently deformed contour, shown in Fig. 7.4. Now we have

\oint_C f(z)\, dz = P \int_{-\infty}^{\infty} f(x)\, dx + i\pi a_{-1} = 2\pi i \left[\sum_{\text{poles}\in\text{UHP}} (\text{residues}) + a_{-1}\right],  (7.37)

so in either case

P \int_{-\infty}^{\infty} f(x)\, dx = 2\pi i \sum_{\text{poles}\in\text{UHP}} (\text{residues}) + \pi i\, a_{-1},  (7.38)

where the sum is over the residues of the poles above the real axis, and a_{-1} is the residue of the simple pole on the real axis.
Figure 7.5: The closed contour C for the integral in Eq. (7.41): the real axis from −R to R closed by the semicircle Γ in the upper half plane, with poles of the integrand at −k + iǫ and k − iǫ.
Equivalently, instead of deforming the contour to avoid the singularity, one can displace the singularity, x_0 → x_0 ± iǫ. Then

\int_{-\infty}^{\infty} dx\, \frac{g(x)}{x - x_0 \mp i\epsilon} = P \int_{-\infty}^{\infty} dx\, \frac{g(x)}{x - x_0} \pm i\pi g(x_0),  (7.39)

if g is a regular function on the real axis. [Proof: Homework.]
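Equation (7.39) can be illustrated numerically (an aside, not in the original notes) with the sample choice g(x) = exp(−x²) and x_0 = 0: the principal-value term then vanishes by symmetry, and the integral should approach iπ·g(0) = iπ as ǫ → 0.

```python
import math

# Numerical illustration of Eq. (7.39) with g(x) = exp(-x**2), x0 = 0.
# The integral of g(x)/(x - i*eps) should approach i*pi*g(0) = i*pi.
eps = 0.01
L, steps = 10.0, 400000
h = 2 * L / steps
re = im = 0.0
for k in range(steps):
    x = -L + (k + 0.5) * h
    g = math.exp(-x * x)
    # 1/(x - i*eps) = (x + i*eps)/(x**2 + eps**2)
    re += h * g * x / (x * x + eps * eps)
    im += h * g * eps / (x * x + eps * eps)
print(re, im)  # re ~ 0 by symmetry; im ~ pi, up to O(eps) corrections
```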
7.7 Example 4

Consider the integral

I = \int_{-\infty}^{\infty} \frac{e^{iqx}}{q^2 - k^2 + i\epsilon}\, dq, \quad x > 0,  (7.40)

which is important in quantum mechanics. We can replace this integral by the contour integral

\oint_C \frac{e^{iqx}}{q^2 - k^2 + i\epsilon}\, dq, \quad x > 0,  (7.41)

where the closed contour C is shown in Fig. 7.5. The integral over the "infinite" semicircle Γ is zero according to Jordan's lemma. By redefining ǫ, but not changing its sign, we write the integral as (k = +\sqrt{k^2})
I = \oint_C dq \left[\frac{1}{q - (k - i\epsilon)}\, \frac{1}{q + (k - i\epsilon)}\right] e^{iqx} = 2\pi i\, \frac{e^{iqx}}{q - (k - i\epsilon)} \Big|_{q = -(k - i\epsilon)} = -\frac{\pi i}{k}\, e^{-ikx},  (7.42)

in the end taking ǫ → 0.
7.8 Example 5

We will consider two ways of evaluating

I = \int_0^{\infty} \frac{dx}{1 + x^3}.  (7.43)

Figure 7.6: Contour C used in the evaluation of the integral (7.44). Shown also is the branch line of the logarithm along the +z axis, and the poles of the integrand.

The integrand is not even, so we cannot extend the lower limit to −∞. How can contour methods be applied?
7.8.1 Method 1

Consider the related integral

\oint_C \frac{\log z}{1 + z^3}\, dz,  (7.44)

over the contour shown in Fig. 7.6. Here we have chosen the branch line of the logarithm to lie along the +z axis; the discontinuity across it is

\text{disc}\,\log z = \log\rho - \log\rho e^{2i\pi} = -2i\pi.  (7.45)

The integral over the large circle is zero, as is the integral over the little circle:

\lim_{\rho\to\infty,\,0} \int_0^{2\pi} \frac{\log\rho e^{i\theta}}{1 + \rho^3 e^{3i\theta}}\, \rho e^{i\theta}\, i\, d\theta = 0.  (7.46)

Therefore,

I = -\frac{1}{2\pi i} \oint_C \frac{\log z}{1 + z^3}\, dz = -\sum_{\text{poles inside } C} (\text{residues}).  (7.47)
To find the sum of the residues, we note that the poles occur at the three cube roots of −1, namely e^{i\pi/3}, e^{i\pi}, and e^{5i\pi/3}, so

\sum(\text{residues}) = \log e^{i\pi/3}\, \frac{1}{e^{i\pi/3} - e^{i\pi}}\, \frac{1}{e^{i\pi/3} - e^{i5\pi/3}} + \log e^{i\pi}\, \frac{1}{e^{i\pi} - e^{i\pi/3}}\, \frac{1}{e^{i\pi} - e^{i5\pi/3}} + \log e^{5i\pi/3}\, \frac{1}{e^{5i\pi/3} - e^{i\pi/3}}\, \frac{1}{e^{5i\pi/3} - e^{i\pi}}.

With e^{i\pi/3} = (1 + \sqrt{3}i)/2, e^{i\pi} = -1, and e^{5i\pi/3} = (1 - \sqrt{3}i)/2, the three terms become

\sum(\text{residues}) = \frac{i\pi}{3}\, \frac{1}{\sqrt{3}i}\, \frac{2}{3 + \sqrt{3}i} + i\pi\, \frac{4}{9 + 3} + \frac{5i\pi}{3} \left(-\frac{1}{\sqrt{3}i}\right) \frac{2}{3 - \sqrt{3}i} = \frac{\pi(3 - \sqrt{3}i)}{18\sqrt{3}} + \frac{i\pi}{3} - \frac{5\pi(3 + \sqrt{3}i)}{18\sqrt{3}} = -\frac{2\pi}{3\sqrt{3}},  (7.48)

or

I = \frac{2\pi}{3\sqrt{3}}.  (7.49)
7.8.2 Method 2

An alternative method which is simpler algebraically is the following. Consider

\oint_C \frac{dz}{z^3 + 1},  (7.50)

where the contour C is shown in Fig. 7.7. The integral over the arc of the circle at "infinity," C_2, evidently vanishes as the radius of that circle goes to infinity. The integral over C_1 is the integral I. The integral over C_3 is

\int_{C_3} \frac{dz}{z^3 + 1} = \int_{\infty}^{0} \frac{d(xe^{2i\pi/3})}{(xe^{2i\pi/3})^3 + 1} = -e^{2i\pi/3}\, I,  (7.51)

since (e^{2i\pi/3})^3 = 1. Thus

\oint_C \frac{dz}{z^3 + 1} = I\left(1 - e^{2\pi i/3}\right) = -I e^{i\pi/3}\, 2i\sin\frac{\pi}{3}.  (7.52)

Figure 7.7: Contour used in the evaluation of Eq. (7.50): C_1 runs along the positive real axis, C_2 is the arc at infinity, and C_3 returns to the origin along the ray at angle 2π/3.
The only pole of 1/(z^3 + 1) contained within C is at z = e^{i\pi/3}, the residue of which is

\frac{1}{e^{i\pi/3} - e^{i\pi}}\, \frac{1}{e^{i\pi/3} - e^{i5\pi/3}} = \frac{e^{-2\pi i/3}}{e^{-i\pi/3} - e^{i\pi/3}}\, \frac{e^{-3i\pi/3}}{e^{-2i\pi/3} - e^{2i\pi/3}},  (7.53)

so

I = -\frac{2\pi i\, e^{-6\pi i/3}}{(2i)^3 \left(\sin\frac{\pi}{3}\right)^2 \sin\frac{2\pi}{3}},  (7.54)

or, since \sin\frac{\pi}{3} = \sin\frac{2\pi}{3} = \frac{\sqrt{3}}{2},

I = \frac{\pi}{4}\left(\frac{2}{\sqrt{3}}\right)^3 = \frac{2\pi}{3\sqrt{3}},  (7.55)

the same result (7.49) as found by method 1.
7.9 Example 6

Consider

I = \int_0^{\infty} \frac{x^{\mu-1}}{1 + x}\, dx, \quad 0 < \mu < 1.  (7.56)
We may use the contour integral

\oint_C \frac{(-z)^{\mu-1}}{1 + z}\, dz = \int_0^{\infty} \frac{e^{-i\pi(\mu-1)} x^{\mu-1}\, dx}{1 + x} - \int_0^{\infty} \frac{e^{i\pi(\mu-1)} x^{\mu-1}\, dx}{1 + x},  (7.57)

where C is the same contour shown in Fig. 7.6, and because μ lies between zero and one it is easily seen that the large circle at infinity and the small circle about the origin both give vanishing contributions. The pole now is at z = −1, so

\oint_C \frac{(-z)^{\mu-1}\, dz}{1 + z} = 2\pi i,  (7.58)

where the phase is measured from the negative real z axis. Thus

2\pi i = \left(e^{-i\pi(\mu-1)} - e^{i\pi(\mu-1)}\right) I = 2iI\sin\pi\mu,  (7.59)
Figure 7.8: Contour C used in integral K, Eq. (7.62). Here the two lines making an angle of π/4 with respect to the real axis pass through ±1/2, and are closed with vertical lines at x = ±R, where we will take the limit R → ∞.

or

I = \frac{\pi}{\sin\pi\mu}.  (7.60)
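As a cross-check of Eq. (7.60) (an aside, not in the original notes): the integral (7.56) is the Euler beta function B(μ, 1−μ) = Γ(μ)Γ(1−μ), so the contour-integral result is equivalent to Euler's reflection formula, which can be spot-checked numerically.

```python
import math

# Eq. (7.60) says the integral of x**(mu-1)/(1+x) over [0, inf) equals
# pi/sin(pi*mu); since that integral is B(mu, 1-mu) = Gamma(mu)*Gamma(1-mu),
# this is the reflection formula. Spot check at several mu:
for mu in (0.1, 0.3, 0.5, 0.7):
    lhs = math.gamma(mu) * math.gamma(1 - mu)
    rhs = math.pi / math.sin(math.pi * mu)
    print(mu, lhs, rhs)
```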
7.10 Example 7

Here we demonstrate a method of evaluating the Gaussian integral,

J = \int_{-\infty}^{\infty} e^{-x^2}\, dx.  (7.61)

Consider the contour integral

K = \oint_C e^{i\pi z^2} \csc\pi z\, dz,  (7.62)

where C is the contour shown in Fig. 7.8. The equations for the two lines making angles of π/4 with respect to the real axis are

z = \pm\frac{1}{2} + \rho e^{i\pi/4},  (7.63)

so

z^2 = \frac{1}{4} \pm \rho e^{i\pi/4} + i\rho^2.  (7.64)
Within the contour the only pole of csc πz is at z = 0, which has residue 1/π, so by the residue theorem

K = 2\pi i\, \frac{1}{\pi} = 2i.  (7.65)
Directly, however,

K = \int_{-\infty}^{\infty} e^{i\pi/4}\, d\rho\, \exp\left[i\pi\left(i\rho^2 + \rho e^{i\pi/4} + \frac{1}{4}\right)\right] \csc\pi\left(\rho e^{i\pi/4} + \frac{1}{2}\right) - \int_{-\infty}^{\infty} e^{i\pi/4}\, d\rho\, \exp\left[i\pi\left(i\rho^2 - \rho e^{i\pi/4} + \frac{1}{4}\right)\right] \csc\pi\left(\rho e^{i\pi/4} - \frac{1}{2}\right),  (7.66)

since the vertical segments give exponentially vanishing contributions as R → ∞. Combining these two integrals, we encounter
\exp\left(i\pi\rho e^{i\pi/4}\right) \csc\pi\left(\rho e^{i\pi/4} + \frac{1}{2}\right) - \exp\left(-i\pi\rho e^{i\pi/4}\right) \csc\pi\left(\rho e^{i\pi/4} - \frac{1}{2}\right) = 2\, \frac{\exp\left(i\pi\rho e^{i\pi/4}\right) + \exp\left(-i\pi\rho e^{i\pi/4}\right)}{\exp\left(i\pi\rho e^{i\pi/4}\right) + \exp\left(-i\pi\rho e^{i\pi/4}\right)} = 2,  (7.67)

since e^{\pm i\pi/2} = \pm i. Hence
K = 2 e^{i\pi/4}\, e^{i\pi/4} \int_{-\infty}^{\infty} d\rho\, e^{-\pi\rho^2} = \frac{2i}{\sqrt{\pi}} \int_{-\infty}^{\infty} dx\, e^{-x^2},  (7.68)

so comparing with Eq. (7.65) we have for the Gaussian integral (7.61)

J = \sqrt{\pi}.  (7.69)
7.11 Example 8

Our final example is the integral

I = \int_0^{\infty} \frac{x\, dx}{1 - e^x}.  (7.70)

If we make the substitution e^x = t, this is the same as

I = \int_1^{\infty} \frac{\log t}{1 - t}\, \frac{dt}{t} = \int_1^{\infty} dt\, \log t \left[\frac{1}{t} + \frac{1}{1 - t}\right].  (7.71)
If we make the further substitution in the first form of Eq. (7.71),

u = \frac{1}{t}, \quad \frac{du}{u} = -\frac{dt}{t},  (7.72)

we have

I = \int_0^1 \frac{\log\frac{1}{u}}{1 - \frac{1}{u}}\, \frac{du}{u} = \int_0^1 \frac{\log u}{1 - u}\, du.  (7.73)
If we average the two forms (7.71) and (7.73) we have

I = \frac{1}{2} \int_1^{\infty} dt\, \frac{\log t}{t} + \frac{1}{2} \int_0^{\infty} dt\, \frac{\log t}{1 - t}.  (7.74)

The two integrals here separately are divergent, but the sum is finite. We regulate the two integrals by putting in a large-t cutoff:

I = \frac{1}{2} \lim_{\Lambda\to\infty} \left[\int_1^{\Lambda} dt\, \frac{\log t}{t} + \int_0^{\Lambda} dt\, \frac{\log t}{1 - t}\right].  (7.75)
The first integral here is elementary,

\int_1^{\Lambda} dt\, \frac{\log t}{t} = \frac{1}{2}\log^2 t\, \Big|_1^{\Lambda} = \frac{1}{2}\log^2\Lambda,  (7.76)
while the second is evaluated by considering

K = \oint_C dz\, \frac{\log^2 z}{1 - z},  (7.77)

where again C is the contour shown in Fig. 7.6. Now, however, the sole pole is on the positive real axis, so no singularities are contained within C, and hence by Cauchy's theorem K = 0.
This time the contribution of the large circle is not zero:

\int_0^{2\pi} \Lambda e^{i\theta}\, i\, d\theta\, \frac{\log^2 \Lambda e^{i\theta}}{1 - \Lambda e^{i\theta}} = -i \int_0^{2\pi} d\theta\, \left[\log\Lambda + i\theta\right]^2 = -i \left[2\pi\log^2\Lambda + 2i\,\frac{1}{2}(2\pi)^2 \log\Lambda - \frac{1}{3}(2\pi)^3\right].  (7.78)
The discontinuity of the log^2 across the branch line is

\log^2 x - \log^2\left(x e^{2i\pi}\right) = \log^2 x - (\log x + 2i\pi)^2 = -4i\pi\log x + 4\pi^2.  (7.79)
Finally, notice that there is a contribution from the pole at z = 1 below the real axis (see Fig. 7.9): Explicitly, the contribution from the small semicircle below the pole is

\int_{2\pi}^{\pi} d\theta\, \frac{i\rho e^{i\theta}}{-\rho e^{i\theta}} \left[4i\pi\log\left(1 + \rho e^{i\theta}\right) - 4\pi^2\right] = -4i\pi^3,  (7.80)
as ρ → 0. The desired integral is obtained by taking the imaginary part,

\text{Im}\, K = -4\pi \int_0^{\Lambda} dt\, \frac{\log t}{1 - t} - \left[2\pi\log^2\Lambda - \frac{8\pi^3}{3}\right] - 4\pi^3 = 0,  (7.81)

so

\int_0^{\Lambda} dt\, \frac{\log t}{1 - t} = -\frac{1}{2}\log^2\Lambda - \frac{\pi^2}{3}.  (7.82)
Figure 7.9: Portion of integral K, Eq. (7.77), corresponding to the integration below the cut on the real axis. The pole of the integrand at z = 1 contributes here because log(1 − iǫ) = 2iπ. Thus the contribution of the small semicircle to K is +iπ(2iπ)^2 = −4iπ^3, in agreement with Eq. (7.80).
Thus averaging this with Eq. (7.76) we obtain

I = -\frac{\pi^2}{6}.  (7.83)
A slight check of this procedure comes from computing the real part of K:

\text{Re}\, K = 4\pi^2\, P \int_0^{\Lambda} \frac{dt}{1 - t} + 4\pi^2 \log\Lambda = 0.  (7.84)
Chapter 8

Summation Techniques, Padé Approximants, and Continued Fractions
8.1 Accelerated Convergence

Conditionally convergent series, such as

1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \dots = \sum_{n=1}^{\infty} (-1)^{n+1}\frac{1}{n} = \ln 2,  (8.1)

converge very slowly. The same is true for absolutely convergent series, such as

\sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2) = \frac{\pi^2}{6}.  (8.2)
If we call the partial sum for the latter

\sum_{n=1}^{N} \frac{1}{n^2} = S_N,  (8.3)

the difference between the limit S and the Nth partial sum is

S - S_N = \sum_{n=N+1}^{\infty} \frac{1}{n^2} \approx \int_N^{\infty} \frac{dn}{n^2} = \frac{1}{N},  (8.4)

which means that it takes 10^6 terms to get 6-figure accuracy.
Thus, to evaluate a convergent series, the last thing you want to do is actually
literally carry out the sum. We need a method to accelerate the convergence,
and get good accuracy from a few terms in the series. There are several standard
methods.
79 Version of November 14, 2011
8.1.1 Shanks' Transformation

The Shanks transformation is good for alternating series, or oscillating partial sums, such as Eq. (8.1). For the series

S = \sum_{n=1}^{\infty} a_n,  (8.5)

consider the Nth partial sum

S_N = \sum_{n=1}^{N} a_n.  (8.6)

Let us suppose that, for sufficiently large N,

S_N = S + A b^N,  (8.7)
where −1 < b < 0, so that as N → ∞, S_N → S. We will take this as an ansatz for all N, to obtain an estimate for the limit S. Then successive partial sums satisfy

S_{N-1} = S + A b^{N-1},  (8.8a)
S_N = S + A b^N,  (8.8b)
S_{N+1} = S + A b^{N+1},  (8.8c)

so that

b = \frac{S_{N+1} - S}{S_N - S} = \frac{S_N - S}{S_{N-1} - S},  (8.9)

which may be immediately solved for S,

S_{(N)} = \frac{S_{N+1} S_{N-1} - S_N^2}{S_{N+1} + S_{N-1} - 2 S_N},  (8.10)

where now we've inserted the (N) subscript on the left to indicate that this is an estimate for the limit, based on the N−1st, Nth, and N+1st partial sums.
For the series (8.1) the first 5 partial sums are

S_1 = 1, \quad S_2 = \frac{1}{2} = 0.5, \quad S_3 = \frac{5}{6} = 0.833, \quad S_4 = \frac{7}{12} = 0.5833, \quad S_5 = \frac{47}{60} = 0.7833,  (8.11)

which oscillate around the correct limit ln 2 = 0.693147, but are not good approximations. Using the Shanks transformation (8.10) we obtain much better approximants:

S_{(1)} = \frac{7}{10} = 0.700, \quad S_{(2)} = \frac{29}{42} = 0.690, \quad S_{(3)} = \frac{25}{36} = 0.6944,  (8.12)
which use only the first 3, 4, and 5 terms in the original series. We can do even better by iterating the Shanks transformation,

S^{[2]}_{(N)} = \frac{S_{(N+1)} S_{(N-1)} - S_{(N)}^2}{S_{(N+1)} + S_{(N-1)} - 2 S_{(N)}},  (8.13)

and then we find, using the same data (only 5 terms in the series),

S^{[2]}_{(2)} = \frac{165}{238} = 0.693277,  (8.14)

an error of only 0.02%! For a more detailed comparison of Shanks estimates for this series, see Table 8.2 on page 373 of Bender and Orszag.
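The Shanks transformation (8.10), and its iterate (8.13), are a few lines of code; working in exact fractions (an aside, not in the original notes) reproduces the numbers (8.12) and (8.14).

```python
from fractions import Fraction

def shanks(seq):
    """One Shanks transformation, Eq. (8.10), applied to a list of
    partial sums; returns the (shorter) list of accelerated estimates."""
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
            (seq[i + 1] + seq[i - 1] - 2 * seq[i])
            for i in range(1, len(seq) - 1)]

# Partial sums of 1 - 1/2 + 1/3 - ... (Eq. (8.1)), kept as exact fractions.
S = []
total = Fraction(0)
for n in range(1, 6):
    total += Fraction((-1) ** (n + 1), n)
    S.append(total)

S1 = shanks(S)   # [7/10, 29/42, 25/36], as in Eq. (8.12)
S2 = shanks(S1)  # [165/238], as in Eq. (8.14)
print(S1, S2, float(S2[0]))  # 165/238 ~ 0.693277, vs ln 2 = 0.693147
```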
8.1.2 Richardson Extrapolation

For monotone series, Richardson extrapolation is often very useful. In this case we are considering partial sums S_N which approach their limit S monotonically, and we assume an asymptotic form for large N,

S_N \sim S + \frac{a}{N} + \frac{b}{N^2} + \frac{c}{N^3} + \dots.  (8.15)

The first Richardson extrapolation consists of keeping only the first correction term,

S_N = S + \frac{a}{N}, \quad S_{N+1} = S + \frac{a}{N+1},  (8.16)

which may be solved for the limit,

S^{[1]}_{(N)} = (N+1) S_{N+1} - N S_N,  (8.17)

where again we've inserted on the left a superscript [1] indicating the first Richardson extrapolation, and a subscript (N) to indicate that the approximant comes from the Nth and N+1st partial sums.
We consider as an example Eq. (8.2). Here the first 4 partial sums are

S_1 = 1, \quad S_2 = \frac{5}{4} = 1.25, \quad S_3 = \frac{49}{36} = 1.361, \quad S_4 = \frac{205}{144} = 1.424,  (8.18)

to be compared with π^2/6 = 1.644934. The first three Richardson extrapolants are much better:

S^{[1]}_{(1)} = \frac{3}{2} = 1.5, \quad S^{[1]}_{(2)} = \frac{19}{12} = 1.58, \quad S^{[1]}_{(3)} = \frac{29}{18} = 1.611.  (8.19)

Iteration of these results by inserting S^{[1]}_{(N)} in (8.17) yields further improvement: 5/3 = 1.667, but this iteration improves only slowly with N.
To do better we keep the first two terms in (8.15). This gives the second Richardson extrapolant,

S^{[2]}_{(N)} = \frac{1}{2} \left[(N+2)^2 S_{N+2} - 2(N+1)^2 S_{N+1} + N^2 S_N\right].  (8.20)

When applied to the series (8.2), the first three terms in the series yield nearly 1% accuracy:

S^{[2]}_{(1)} = \frac{13}{8} = 1.625.  (8.21)
For further numerical details, see Table 8.4 on page 377 of Bender and Orszag.
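The first Richardson extrapolation (8.17) is likewise immediate to implement (an aside, not in the original notes), and reproduces the values in Eq. (8.19).

```python
from fractions import Fraction

def richardson1(S):
    """First Richardson extrapolation, Eq. (8.17):
    S[1]_(N) = (N+1)*S_{N+1} - N*S_N, where S[0] holds S_1."""
    return [(n + 2) * S[n + 1] - (n + 1) * S[n] for n in range(len(S) - 1)]

# Partial sums of the series sum 1/n**2, Eq. (8.2), as exact fractions.
S, total = [], Fraction(0)
for n in range(1, 5):
    total += Fraction(1, n * n)
    S.append(total)

print(richardson1(S))  # [3/2, 19/12, 29/18], as in Eq. (8.19)
```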
8.2 Summing Divergent Series

The series encountered in physics, typically perturbation expansions, are usually divergent. How can one extract a meaningful number from such series, which represent physical processes and so should reflect reality?

On the surface, it would seem impossible to attach any meaning to such obviously divergent series as

1 + 1 + 1 + 1 + 1 + \dots,  (8.22a)
1 - 1 + 1 - 1 + 1 - \dots.  (8.22b)

However, as we will now see, perfectly finite numbers can be associated with these series. Again there are various procedures, of which we give a sampling. Throughout, we are considering a divergent series of the form

\sum_{n=0}^{\infty} a_n.  (8.23)
8.2.1 Euler Summation

Suppose

\sum_{n=0}^{\infty} a_n x^n = f(x)  (8.24)

converges if |x| < 1. Then we define the limit of the series (8.23) by

S = \lim_{x\to 1} f(x).  (8.25)

Thus, for the series (8.22b),

S = \sum_{n=0}^{\infty} (-1)^n,  (8.26)

f(x) is

f(x) = \sum_{n=0}^{\infty} (-1)^n x^n = \frac{1}{1 + x},  (8.27)
so S = 1/2. To lend more credence to this result, we note that it is reproduced by the Shanks transformation. The partial sums of the series are

S_0 = 1, \quad S_1 = 0, \quad S_2 = 1, \quad S_3 = 0, \dots,  (8.28)

so

S = \frac{S_{N+1} S_{N-1} - S_N^2}{S_{N+1} + S_{N-1} - 2 S_N} = \frac{1}{2}  (8.29)

for all N.
What if we apply Euler summation to the series

1 + 0 - 1 + 1 + 0 - 1 + 1 + 0 - 1 + 1 + 0 - 1 + \dots?  (8.30)

Now

f(x) = 1 - x^2 + x^3 - x^5 + x^6 - x^8 + x^9 - \dots = \sum_{n=0}^{\infty} x^{3n} - x^2 \sum_{n=0}^{\infty} x^{3n} = \frac{1 - x^2}{1 - x^3} = \frac{1 + x}{1 + x + x^2},  (8.31)

so the sum of (8.30) is

S = f(1) = \frac{2}{3}.  (8.32)

Thus the process of summation is not (infinitely) associative. In this case the Shanks transformation does not work.
8.2.2 Borel Summation

Now we use the Euler representation of the Gamma function, or the factorial,

n! = \int_0^{\infty} dt\, t^n e^{-t}.  (8.33)

Then we formally interchange summation and integration:

S = \sum_{n=0}^{\infty} a_n \frac{1}{n!} \int_0^{\infty} dt\, t^n e^{-t} = \int_0^{\infty} dt\, e^{-t} \sum_{n=0}^{\infty} \frac{1}{n!} a_n t^n,  (8.34)

which defines the sum if

g(t) = \sum_{n=0}^{\infty} \frac{1}{n!} a_n t^n  (8.35)

exists.
Thus for (8.22b),

g(t) = \sum_{n=0}^{\infty} \frac{(-1)^n t^n}{n!} = e^{-t},  (8.36)

and so

S = \int_0^{\infty} dt\, e^{-2t} = \frac{1}{2},  (8.37)

which coincides with the result found by Euler summation. In general, Borel summation is more powerful than Euler summation, but if both Euler and Borel sums exist, they are equal.
In fact, we can prove that any summation that is both

1. linear, meaning that if

\sum_{n=0}^{\infty} a_n = A, \quad \sum_{n=0}^{\infty} b_n = B,  (8.38a)

then

\sum_{n=0}^{\infty} (\alpha a_n + \beta b_n) = \alpha A + \beta B,  (8.38b)

and

2. satisfies

\sum_{n=0}^{\infty} a_n = a_0 + \sum_{n=1}^{\infty} a_n,  (8.39)

is unique. In fact, from these two properties alone (which are satisfied by both Euler and Borel summation) we can find the value of the sum. Thus, for example,

1 - 1 + 1 - 1 + 1 - 1 + \dots = S = 1 - (1 - 1 + 1 - 1 + 1 - 1 + \dots) = 1 - S,  (8.40)

implies S = 1/2. Slightly more complicated is

S = (1 + 0 - 1 + 1 + 0 - 1 + 1 + 0 - 1 + \dots)
  = 1 + (0 - 1 + 1 + 0 - 1 + 1 + 0 - 1 + \dots)
  = 1 + 0 + (-1 + 1 + 0 - 1 + 1 + 0 - 1 + 1 + 0 - \dots),  (8.41)

where adding the three lines gives

3S = 2 + (0 + 0 + 0 + 0 + 0 + \dots) = 2,  (8.42)

or S = 2/3 as before.

But there are sums resistant to such schemes. An example is (8.22a), because the above process leads to

S = 1 + (1 + 1 + 1 + \dots) = 1 + S,  (8.43)

which is only satisfied by S = ∞. Yet such a series can be summed.
8.2.3 Zeta-function Summation

Recall that the zeta function is defined by

\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}, \quad \text{Re}\, s > 1.  (8.44)

In fact, \zeta(s) can be analytically continued to all s \ne 1, so we can use that function to define the sum almost everywhere in the complex s plane. In particular, for s = 0:

1 + 1 + 1 + 1 + \dots = \zeta(0) = -\frac{1}{2}.  (8.45)

Even a more divergent sum can be evaluated this way:

\sum_{n=1}^{\infty} n = \zeta(-1) = -\frac{1}{12}.  (8.46)

Note the remarkable fact that these sums are not only finite, but negative, even though each term in the sum is positive!
8.2.4 Casimir Effect

Here we give a physical example of the utility of this last mode of summation. The physics is that of a pair of parallel metallic plates, separated by a distance a in the vacuum. Because the plates modify the properties of the vacuum, there is a change in the zero-point energy of the electromagnetic field, which feels the plates because they are conductors. The result is an attraction between the plates, the famous Casimir effect, predicted by Casimir in 1948 (the same year that Schwinger discovered how to renormalize quantum electrodynamics), and now verified by many experiments at the percent level. The zero-point energy (per unit area) of modes confined by the plane boundaries at z = 0 and z = a is

E = \frac{1}{2} \sum \hbar\omega = \frac{\hbar c}{2} \sum_{n=1}^{\infty} \int \frac{d^2k}{(2\pi)^2} \sqrt{k^2 + \left(\frac{n\pi}{a}\right)^2},  (8.47)

where in the mode sum we have integrated over the two transverse wavenumbers k_x and k_y, and summed over the discrete modes, which, say, must vanish at z = 0 and z = a, that is, be given by an (unnormalized) mode function

\phi(z) = \sin\frac{n\pi z}{a}.  (8.48)
Now we write the square root as an integral, putting its argument in the exponential:

\sqrt{k^2 + \left(\frac{n\pi}{a}\right)^2} = \frac{1}{\Gamma\left(-\frac{1}{2}\right)} \int_0^{\infty} \frac{ds}{s}\, s^{-1/2}\, e^{-(k^2 + (n\pi/a)^2)s},  (8.49)
and then interchange the two integrals:

E = \frac{\hbar c}{2} \sum_{n=1}^{\infty} \int_0^{\infty} \frac{ds}{s^{3/2}}\, e^{-(n\pi/a)^2 s} \left[\int_{-\infty}^{\infty} \frac{dk}{2\pi}\, e^{-k^2 s}\right]^2 \frac{1}{-2\sqrt{\pi}},  (8.50)

where 1/\Gamma(-\frac{1}{2}) = -1/(2\sqrt{\pi}). Here we have recognized that the two-dimensional integral over k = (k_x, k_y) can be broken into the product of two one-dimensional integrals because

e^{-(k_x^2 + k_y^2)s} = e^{-k_x^2 s}\, e^{-k_y^2 s}.  (8.51)
These one-dimensional integrals are simply Gaussians, so the squared factor in (8.50) is simply 1/(4\pi s). The remaining s-integral is again a gamma function:

E = -\frac{\hbar c}{16\pi^{3/2}} \sum_{n=1}^{\infty} \int_0^{\infty} \frac{ds}{s^{5/2}}\, e^{-(n\pi/a)^2 s} = -\frac{\hbar c}{16\pi^{3/2}}\, \Gamma\left(-\frac{3}{2}\right) \sum_{n=1}^{\infty} \left(\frac{n\pi}{a}\right)^3 = -\frac{\hbar c\, \pi^2}{1440\, a^3},  (8.52)
where we have used the facts that
\Gamma\left(-\frac32\right) = \frac{4}{3}\sqrt{\pi}, \qquad \zeta(-3) = \frac{1}{120}, \qquad (8.53)
together with the zeta-function continuation embodied in Eq. (8.44). When
multiplied by 2, for the two polarization states of the photon, this is exactly
Casimir’s result, which implies an attractive force per unit area between the
plates,
P = -\frac{\partial}{\partial a}E = -\frac{\hbar c\,\pi^2}{240\,a^4} = -1.30\times 10^{-27}\,\mathrm{N\,m^2}/a^4. \qquad (8.54)
8.3 Padé Approximants
Consider a partial Taylor sum,
T_{N+M}(z) = \sum_{n=0}^{N+M} a_n z^n, \qquad (8.55)
which is an (N+M)th degree polynomial. Write this in a rational form,
P^N_M(z) = \frac{\sum_{n=0}^{N} A_n z^n}{\sum_{m=0}^{M} B_m z^m}, \qquad (8.56)
which is called the [N, M]th Padé approximant. Here the coefficients are determined
from the Taylor series coefficients as follows: We set B_0 = 1, and
determine the (N+M+1) coefficients A_0, A_1, ..., A_N and B_1, B_2, ..., B_M by
requiring that when the rational function (8.56) is expanded in a Taylor series
about z = 0, the first N+M+1 coefficients match those of the original Taylor
expansion (8.55).
Example
Consider the exponential function
e^z = 1 + z + \frac{1}{2}z^2 + \cdots. \qquad (8.57)
The [1, 1] Padé of this is of the form
P^1_1(z) = \frac{A_0 + A_1 z}{1 + B_1 z}, \qquad (8.58)
which, when expanded in a series about z = 0, reads
P^1_1(z) \approx A_0 + (A_1 - B_1 A_0)z + (B_1^2 A_0 - A_1 B_1)z^2. \qquad (8.59)
Matching this with Eq. (8.57), we obtain the equations
A_0 = 1, \qquad (8.60a)
A_1 - B_1 A_0 = 1, \qquad (8.60b)
B_1(B_1 A_0 - A_1) = \frac12, \qquad (8.60c)
so we learn immediately that
A_0 = 1, \qquad B_1 = -\frac12, \qquad A_1 = \frac12, \qquad (8.61)
so the [1, 1] Padé is
P^1_1(z) = \frac{1 + \frac12 z}{1 - \frac12 z}. \qquad (8.62)
How good is this? For example, at z = 1,
P^1_1(1) = 3, \qquad (8.63)
which is 10% larger than the exact answer e = 2.718281828..., and is not quite
as good as the result obtained from the first three terms in the Taylor series,
\left.1 + z + \frac12 z^2\right|_{z=1} = 2.5, \qquad (8.64)
about 8% low. However, in higher orders, Padé approximants rapidly outstrip
Taylor approximants. Table 8.1 compares the numerical accuracy of P^N_M with
T_{N+M}.
Note that typically the Padé approximant, obtained from a partial Taylor
sum, is more accurate than the latter. This comes at a price, however; the Padé,
being a rational expression, has poles, which are not present in the original
function. Thus, e^z is an entire function, while the [1, 1] Padé approximant of
this function has a pole at z = 2.
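The matching conditions (8.60a)-(8.60c) are easy to automate. Here is a minimal sketch in Python of the [1, 1] case, using exact rational arithmetic; the helper name pade11 is ours, not a standard routine:

```python
from fractions import Fraction

def pade11(a0, a1, a2):
    """[1,1] Pade coefficients from Taylor coefficients a0, a1, a2.

    Matching (A0 + A1 z)/(1 + B1 z) = a0 + a1 z + a2 z^2 + ... through
    order z^2 gives A0 = a0, B1 = -a2/a1, A1 = a1 + B1 a0 (assuming a1 != 0).
    """
    b1 = -Fraction(a2) / Fraction(a1)
    return Fraction(a0), Fraction(a1) + b1 * Fraction(a0), b1

# Exponential function: 1 + z + z^2/2 + ...
A0, A1, B1 = pade11(1, 1, Fraction(1, 2))
print(A0, A1, B1)            # 1 1/2 -1/2, as in Eq. (8.61)
print((A0 + A1) / (1 + B1))  # P^1_1(1) = 3, as in Eq. (8.63)
```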
Example
Here's another example:
\frac{1}{z}\log(1+z) = 1 - \frac{z}{2} + \frac{z^2}{3} - \frac{z^3}{4} + \frac{z^4}{5} - \frac{z^5}{6} + \cdots. \qquad (8.65)
  T_{N+M}(1)           P^N_M(1)                  Relative error of Padé
  T_3(1) = 2.667       P^1_2(1) = 2.667          −1.9%
  T_4(1) = 2.708       P^2_2(1) = 2.71429        −0.15%
  T_5(1) = 2.717       P^2_3(1) = 2.71875        +0.017%
  T_6(1) = 2.71806     P^3_3(1) = 2.71831        +0.00103%
  T_7(1) = 2.71825     P^3_4(1) = 2.71827957     −0.000083%

Table 8.1: Comparison of partial Taylor series with successive Padé approximants
for the exponential function, evaluated at z = 1. Note that precisely the
same data is incorporated in T_{N+M} and in P^N_M.
  Approximant    z = 0.5        z = 1         z = 2
  Exact          0.810930216    0.69314718    0.549306
  P^3_3          0.810930365    0.69315245    0.549403
  P^3_4          0.810930203    0.69314642    0.549285

Table 8.2: Padé approximations for the function (1/z) log(1+z) compared with
the exact values. Note that the Taylor series for this function has a radius of
convergence of unity, yet the Padé approximations converge rapidly even beyond
the circle of convergence.
It is a simple algebraic task to expand the form of an [N, M] Padé in a Taylor
series and compute the Padé coefficients by matching with the above. This
can, of course, be easily implemented in a symbolic program. For example, in
Mathematica,
P^N_M(z) = PadeApproximant[f[z], {z, 0, {N, M}}]. \qquad (8.66)
Doing so here yields
P^3_3(z) = \frac{1 + \frac{17}{14}z + \frac{1}{3}z^2 + \frac{1}{140}z^3}{1 + \frac{12}{7}z + \frac{6}{7}z^2 + \frac{4}{35}z^3}. \qquad (8.67)
Table 8.2 shows representative numerical values for P^3_3 and P^3_4. The Padé
approximants rapidly converge to the correct value even well beyond the circle
of convergence of the original series. Note further in this example that

• P^N_N is larger than the function, and decreases monotonically toward it, and
• P^N_{N+1} is smaller than the function, and increases monotonically toward it.

This bounding behavior is typical of a class of functions. For more detail see
C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists
and Engineers (McGraw-Hill, New York, 1978), pp. 383ff.
Field Theory Examples
The following function occurs in the field theory of a massless particle in zero
dimensions,
Z(\delta) = \int_{-\infty}^\infty \frac{dx}{\sqrt{\pi}}\, e^{-(x^2)^{1+\delta}} = \frac{2}{\sqrt{\pi}}\,\frac{1}{2+2\delta}\int_0^\infty \frac{dt}{t}\, t^{1/(2+2\delta)}\, e^{-t} = \frac{2}{\sqrt{\pi}}\,\frac{1}{2+2\delta}\,\Gamma\left(\frac{1}{2+2\delta}\right) = \frac{2}{\sqrt{\pi}}\,\Gamma\left(\frac{3+2\delta}{2+2\delta}\right), \qquad (8.68)
where the gamma function was defined by Euler as
\Gamma(z) = \int_0^\infty \frac{dt}{t}\, t^z\, e^{-t}, \qquad (8.69)
and satisfies the identity
\Gamma(z+1) = z\Gamma(z). \qquad (8.70)
The gamma function generalizes the factorial to complex values:
\Gamma(n+1) = n!, \qquad n = 0, 1, 2, \ldots. \qquad (8.71)
Because the gamma function Γ(z) has poles when z = −N, N = 0, 1, 2, ..., this
function has an infinite number of singularities between δ = −3/2 and δ = −1.
Thus the radius of convergence of the Taylor series about δ = 0 is 1. Yet low-order
Padé approximants for E(δ) = −log Z(δ) give an excellent approximation well outside
of this radius, as Table 8.3 shows.
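The closed form (8.68) makes the exact entries in the last column of Table 8.3 easy to reproduce; a minimal Python check (the function names are ours):

```python
import math

def Z(delta):
    # Eq. (8.68): Z(delta) = (2/sqrt(pi)) Gamma((3+2 delta)/(2+2 delta))
    return 2 / math.sqrt(math.pi) * math.gamma((3 + 2 * delta) / (2 + 2 * delta))

def E(delta):
    # the quantity approximated in Table 8.3
    return -math.log(Z(delta))

for d in (-0.5, 0.5, 1.0):
    print(d, E(d))
# E(-0.5) ~ -0.12078224, E(0.5) ~ -0.00759060, E(1.0) ~ -0.02251040,
# matching the last column of Table 8.3.
```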
The “partition function” for a zero-dimensional field theory with a mass µ
is given by the function
Z(\delta) = \mu\sqrt{\frac{2}{\pi}}\int_0^\infty dx\, e^{-\frac{\mu^2}{2}x^2 - \lambda(x^2)^{1+\delta}}. \qquad (8.72)
We consider two cases. If µ² > 0, the power series in δ again has radius of
convergence 1, but the Padé approximants are accurate far beyond this radius,
as shown in Table 8.4.
If, on the other hand, µ² < 0 (which corresponds to the “Higgs mechanism” in
particle physics), the Taylor series converges nowhere, yet the Padé approximant
is still quite good, as seen in Table 8.5.
  δ      T_10(δ)       T_20(δ)        P^3_2(δ)      P^5_4(δ)           E(δ)
  −2.0   −1266.97      −2.0 × 10^6    −0.651267     −0.692962          −0.693147
  −0.5   −0.120055     −0.120781      −0.120831     −0.12078223848     −0.12078223764
  0.5    −0.00781712   −0.00759091    −0.00759097   −0.0075905958951   −0.0075905958949
  1.0    −0.367098     −0.516940      −0.0225167    −0.022510401233    −0.022510401213
  2.0    −465.821      −688611        −0.0458145    −0.04575620415     −0.04575620349
  5.0    −5.5 × 10^6   −7.8 × 10^13   −0.0786672    −0.078172915       −0.078172899

Table 8.3: Approximations to the function (8.68). What is approximated is
E(δ) = −log Z(δ). The Padé approximants based on 6 and 10 terms in the
Taylor series of this function are far more accurate than the 10 and 20 term
truncated Taylor series, and are even remarkably accurate far outside the circle
of convergence, where the Taylor series is meaningless.
  δ      T_8(δ)     P^4_4(δ)   Z(δ)
  0.5    1.04631    1.04630    1.04630
  1.0    1.07719    1.07436    1.07436
  2.0    1.81047    1.10647    1.10649
  5.0    745.176    1.14253    1.14285

Table 8.4: Comparison of Z(δ), Eq. (8.72), µ² > 0, with the 8-term truncated
power series, and the corresponding [4, 4] Padé. Here we have taken µ² = 1,
λ = 1.
  δ      T_8(δ)        P^4_4(δ)   Z(δ)
  0.1    0.94808       0.94790    0.94790
  0.5    137.697       0.88388    0.88381
  1.0    40109.3       0.87323    0.87253
  2.0    1.1 × 10^7    0.88334    0.87974
  5.0    1.8 × 10^10   0.91830    0.90517

Table 8.5: Comparison of Z(δ), Eq. (8.72), µ² < 0, with the 8-term truncated
power series, and the corresponding [4, 4] Padé. Here we have taken µ² = −1,
λ = 1.
8.4 Continued Fractions
8.4.1 Number Theory
The most familiar way of representing real numbers is in terms of a decimal
fraction, which is nonterminating and nonrepeating if the number is irrational.
However, there are other representations which, if less familiar, can be very
useful. For example, the base of the natural logarithms e can be written in the
form of a continued fraction,
e = 2 +
1
1 +
1
2+
1
1+
1
1+
1
4+...
. (8.73a)
Because this built-up form is cumbersome to write, we could write this as
e = 2 + 1/(1 + 1/(2 + 1/(1 + 1/(1 + 1/(4 + 1/(1 + 1/(1 + 1/(6 + 1/(1 + 1/(1 + ···)))))))))), \qquad (8.73b)
or even more compactly as
e = 2 + \frac{1}{1+}\,\frac{1}{2+}\,\frac{1}{1+}\,\frac{1}{1+}\,\frac{1}{4+}\,\frac{1}{1+}\,\frac{1}{1+}\,\frac{1}{6+}\cdots. \qquad (8.73c)
The form seen here is the representation of a real number x in the form
x = a_0 + \cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cfrac{1}{a_4+\cdots}}}}, \qquad (8.74)
where the numbers a_n are integers called partial quotients. The rational number
formed by including only the first n+1 partial quotients a_0, a_1, ..., a_n is called
the nth convergent of x. So the continued fraction is given by the set of a_n's:
e = \{2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, 12, \ldots\}, \qquad (8.75)
and the successive convergents, which rapidly approach e = 2.718281828..., are
\left\{2, 3, \frac{8}{3}, \frac{11}{4}, \frac{19}{7}, \frac{87}{32}, \frac{106}{39}, \frac{193}{71}, \frac{1264}{465}, \frac{1457}{536}, \frac{2721}{1001}, \frac{23225}{8544}, \frac{25946}{9545}, \frac{49171}{18089}, \ldots\right\}
= \{2, 3, 2.666666667, 2.750000000, 2.714285714, 2.718750000, 2.717948718,
2.718309859, 2.718279570, 2.718283582, 2.718281718, 2.718281835,
2.718281823, 2.718281829, 2.718281828, \ldots\}. \qquad (8.76)
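The convergents in Eq. (8.76) follow from the standard recurrence h_n = a_n h_{n−1} + h_{n−2}, k_n = a_n k_{n−1} + k_{n−2}; a minimal Python sketch (the function name is ours):

```python
from fractions import Fraction

def convergents(quotients):
    # h_n/k_n with seeds h_{-1} = 1, h_{-2} = 0 and k_{-1} = 0, k_{-2} = 1.
    h_prev, h_prev2 = 1, 0
    k_prev, k_prev2 = 0, 1
    out = []
    for a in quotients:
        h_prev, h_prev2 = a * h_prev + h_prev2, h_prev
        k_prev, k_prev2 = a * k_prev + k_prev2, k_prev
        out.append(Fraction(h_prev, k_prev))
    return out

e_quotients = [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8]
cs = convergents(e_quotients)
print([str(c) for c in cs[:6]])  # ['2', '3', '8/3', '11/4', '19/7', '87/32']
print(float(cs[-1]))             # 2.7182818..., rapidly approaching e
```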
The partial quotients of x are determined by successively determining the
unique integer that provides a bound for x for a given truncation of the continued
fraction. Thus in the above example, where in each case 0 < r < 1,
2 < e, \qquad (8.77a)
\frac52 < 2 + \frac{1}{1+r} < 3, \qquad (8.77b)
\frac83 < 2 + \cfrac{1}{1+\cfrac{1}{2+r}} < \frac{11}{4}, \qquad (8.77c)
\frac{19}{7} < 2 + \cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{1+r}}} < \frac{11}{4}, \qquad (8.77d)
\frac{19}{7} < 2 + \cfrac{1}{1+\cfrac{1}{2+\cfrac{1}{1+\cfrac{1}{1+r}}}} < \frac{30}{11}, \qquad (8.77e)
and so on. The successive convergents are the upper and lower bounds corre
sponding to r = 0.
The continued-fraction representation of real numbers can be generated using
your favorite symbolic program. For example, in Mathematica the first n partial
quotients of x are given by
ContinuedFraction[x, n], (8.78)
and the ﬁrst n convergents are given by
Convergents[x, n]. (8.79)
Let us conclude this subsection with the following comments.
• Evidently, a rational number is represented by a terminating continued
fraction. For example,
\frac{12357}{1234567890} = \{0, 99908, 2, 1, 1, 1, 1, 3, 3, 2, 1, 2\} \qquad (8.80)
exactly.
• A quadratic irrational, that is, an irrational solution of a quadratic equation
with integer coefficients, is represented by an eventually repeating pattern of partial
quotients. For example,
\sqrt{137} = \{11, 1, 2, 2, 1, 1, 2, 2, 1, 22, 1, 2, 2, 1, 1, 2, 2, 1, 22, \ldots\}. \qquad (8.81)
• A transcendental number is represented by a nonrepeating pattern. That
pattern is simple in the case of e, but not so for the case of π:
π = \{3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2, 1, 84, 2, 1, 1, 15,
3, 13, 1, 4, 2, 6, 6, 99, 1, 2, 2, 6, 3, 5, 1, 1, 6, 8, 1, 7, 1, 2, 3, 7, \ldots\}. \qquad (8.82)
The ﬁrst few convergents are
π = {3, 3.14285714, 3.141509434, 3.141592920, 3.141592653, 3.141592654, . . .},
(8.83)
so ten-figure accuracy requires 6 terms. However, there are other continued-fraction
representations for π that have simple patterns:
π = \cfrac{4}{1+\cfrac{1^2}{2+\cfrac{3^2}{2+\cfrac{5^2}{2+\cfrac{7^2}{2+\cdots}}}}}, \qquad (8.84)
which has rather poor partial sums:
π = \{4, 2.6667, 3.4667, 2.89524, 3.33968, \ldots\}; \qquad (8.85)
π = 3 + \cfrac{1^2}{6+\cfrac{3^2}{6+\cfrac{5^2}{6+\cfrac{7^2}{6+\cdots}}}}, \qquad (8.86)
where the convergents are somewhat better,
π = \{3, 3.16667, 3.13333, 3.14524, 3.13968, 3.14271, \ldots\}; \qquad (8.87)
π = \cfrac{4}{1+\cfrac{1^2}{3+\cfrac{2^2}{5+\cfrac{3^2}{7+\cdots}}}}, \qquad (8.88)
which is comparable,
π = \{4, 3, 3.16667, 3.13725, 3.14234, \ldots\}. \qquad (8.89)
All of these are much worse than the rapid convergence of the standard
convergents. But it is the existence of simple patterns that is perhaps
remarkable.
8.4.2 Continued Fraction Representation of Functions
If a function is represented by a power series about the origin,
f(x) = \sum_{n=0}^\infty a_n x^n, \qquad (8.90)
we can also write it in a continued-fraction form. The standard approach here
is to write
f(x) = \cfrac{b_0}{1+\cfrac{b_1 x}{1+\cfrac{b_2 x}{1+\cfrac{b_3 x}{1+\cfrac{b_4 x}{1+\cdots}}}}} = \frac{b_0}{1+}\,\frac{b_1 x}{1+}\,\frac{b_2 x}{1+}\,\frac{b_3 x}{1+}\,\frac{b_4 x}{1+}\cdots. \qquad (8.91)
Evidently, there is a one-to-one correspondence between the Taylor-series coefficients
{a_n} and the continued-fraction coefficients {b_n}, which may be determined
by expanding the continued fraction in a power series for small x. The
theory of such a representation is discussed also in the book by Bender and
Orszag.
Let us consider a function with the property f(0) = 1; this is merely a
convenient choice of normalization. Then the relation between the continued-fraction
coefficients and the series coefficients is easily found to be
a_0 = b_0 = 1, \qquad (8.92a)
a_1 = -b_1, \qquad (8.92b)
a_2 = b_1(b_1+b_2), \qquad (8.92c)
a_3 = -b_1\left[b_2 b_3 + (b_1+b_2)^2\right], \qquad (8.92d)
a_4 = b_1\left[b_2 b_3(b_3+b_4) + 2(b_1+b_2)b_2 b_3 + (b_1+b_2)^3\right], \qquad (8.92e)
and so on. This constitutes a nonlinear mapping from the set of numbers {b_n}
to the set {a_n} or vice versa.
This mapping seems to be quite remarkable in that the sequence of b_n's is
typically much simpler than the sequence of a_n's. Here are some examples:
Example 1
Let b_n = n, that is, b_1 = 1, b_2 = 2, etc. Then by computing the first few a_n's
from the above formulæ we find
b_n = n \;\Rightarrow\; |a_n| = (2n-1)!!. \qquad (8.93)
Example 2
Let the continued-fraction sequence be {b_n} = {1, 1, 2, 2, 3, 3, 4, 4, ...}. Then
the power series coefficients are given by the factorial,
|a_n| = n!. \qquad (8.94)
Example 3
What if b_n = n^2? The first few a_n are
a_1 = -1, \qquad (8.95a)
a_2 = 5, \qquad (8.95b)
a_3 = -61, \qquad (8.95c)
a_4 = 1385. \qquad (8.95d)
These are recognized as the first few Euler numbers, defined by the generating
function
\frac{1}{\cosh t} = \sum_{n=0}^\infty E_n \frac{t^n}{n!}, \qquad (8.96)
E
0
= 1, (8.97a)
E
2
= −1, (8.97b)
E
4
= 5, (8.97c)
E
6
= −61, (8.97d)
E
4
= 1385, (8.97e)
and we conclude
a
n
= E
2n
. (8.98)
Example 4
This suggests that we ask what sequence of b_n's corresponds to the Bernoulli
numbers. It takes a bit of playing around to find the correct normalization,
which matters since the transformation is nonlinear. If we take
a_n = 6B_{2n+2}, \qquad (8.99)
we find that the corresponding continued-fraction coefficients are given by
b_n = \frac{n(n+1)^2(n+2)}{4(2n+1)(2n+3)}. \qquad (8.100)
Although the latter seems a bit complicated, it is a closed algebraic expression.
It further grows with n only as a low power. Neither of these features holds for the
Bernoulli numbers, which grow more rapidly than exponentially, and have no
closed-form representation.
These ideas are provocative, yet the general significance of these results
remains elusive. There appears also to be some deep connection to field theory.
See C. M. Bender and K. A. Milton, J. Math. Phys. 35, 364 (1994) for more
details.
Chapter 9
Asymptotic Expansions
We will illustrate the notions with a couple of carefully chosen examples. For
more detail, you are referred to C. M. Bender and S. A. Orszag, Advanced Mathematical
Methods for Scientists and Engineers: Asymptotic Methods and Perturbation
Theory (Springer, 1999).
9.1 The Airy Function
The Airy function, which occurs, for example, in various radiation problems, is
defined by the integral
\pi\,\mathrm{Ai}(\zeta) = \int_0^\infty dt\,\cos\left(\zeta t + \frac13 t^3\right) = \frac12\int_{-\infty}^\infty dt\, e^{i(\zeta t + t^3/3)}. \qquad (9.1)
Let z = it; then this integral can also be given as
\mathrm{Ai}(\zeta) = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty} dz\, e^{\zeta z - z^3/3}, \qquad (9.2)
where the path of integration is along the imaginary axis. Now, to this point,
this integral has only a formal existence, since the magnitude of the integrand
is unity. However, if we distort the contour to C, as shown in Fig. 9.1, which
passes through the origin, but is asymptotic to the lines arg z = ±2π/3, we
obtain a convergent integral since
z^3 = \left(\rho\, e^{\pm i2\pi/3}\right)^3 = \rho^3 > 0. \qquad (9.3)
This deformation of the contour is permissible because the contributions of the
arcs at inﬁnity, connecting the ends of C to the imaginary axis, are negligible.
97 Version of November 15, 2011
Figure 9.1: Contour C used to define the Airy function in Eq. (9.5).
That is, if z = Re^{iθ}, R → ∞, we have
\mathrm{Re}\, z^3 = R^3\cos 3\theta > 0 \quad \text{if } \frac{2\pi}{3} \ge \theta > \frac{\pi}{2} \quad \text{or if } -\frac{2\pi}{3} \le \theta < -\frac{\pi}{2}, \qquad (9.4)
so the integrand is exponentially small there.¹
Thus the final definition of the Airy function is
\mathrm{Ai}(\zeta) = \frac{1}{2\pi i}\int_C dz\, e^{\zeta z - z^3/3}. \qquad (9.5)
We now want to find a useful approximation to this integral valid for ζ
large. To do so we note that the exponent, and its first two derivatives, are, as
functions of z,
\phi(z) = \zeta z - \frac13 z^3, \qquad (9.6a)
\phi'(z) = \zeta - z^2, \qquad (9.6b)
\phi''(z) = -2z, \qquad (9.6c)
so that φ(z) has vanishing derivative when z = ±√ζ. (For definiteness, we shall
suppose that ζ is real.) Since the integrand in the integral defining the Airy
function is entire, we can deform C so that it passes through one of these points,
say z = −√ζ, as shown in Fig. 9.2. The reason we choose the contour C to
pass through the stationary point z = −√ζ is that there the second derivative
is positive, so that a curve whose tangent is parallel to the imaginary axis will
pass through a maximum rather than a minimum. In particular, let us choose
C so that φ is real everywhere along the path. Then for ξ = z + √ζ small, we
can expand
\phi(z) \approx \phi(z=-\sqrt{\zeta}) + \phi''(z=-\sqrt{\zeta})\,\frac{\xi^2}{2} = -\frac23\zeta^{3/2} + 2\zeta^{1/2}\,\frac{\xi^2}{2}. \qquad (9.7)
Requiring Im φ = 0 implies ξ be either real or imaginary. We choose the latter,
as indicated in Fig. 9.2, so that φ will have a maximum at z = −√ζ on the
path. This path is called the path of steepest descents.
¹This argument fails in the immediate vicinity of the imaginary axis, reflecting the ill-defined
nature of Eq. (9.2). A distortion so that |arg z| > π/2 must be supplied in any case.
Figure 9.2: Deformed contour C which passes through the saddle point.
Note that in the perpendicular direction, along the real axis, the function
is a minimum at the stationary point. Thus, the stationary point is a saddle
point, and this method is also referred to as the saddle point method.
The reason for choosing C to be the path of steepest descents is that, for
large ζ, most of the contribution comes from the immediate neighborhood of
the saddle point. Then we can make use of the approximation above, so that
we approximate the Airy function by
\mathrm{Ai}(\zeta) \sim \frac{1}{2\pi i}\, e^{-\frac23\zeta^{3/2}}\int_C d\xi\, e^{\sqrt{\zeta}\,\xi^2}, \qquad (9.8)
where the integral is just a Gaussian one,
\int_C d\xi\, e^{\sqrt{\zeta}\,\xi^2} = \zeta^{-1/4}\int_{-i\infty}^{i\infty} du\, e^{u^2} = i\zeta^{-1/4}\int_{-\infty}^{\infty} dt\, e^{-t^2} = i\zeta^{-1/4}\sqrt{\pi}. \qquad (9.9)
Thus we obtain the leading asymptotic behavior of the Airy function,
\mathrm{Ai}(\zeta) \sim \frac{1}{2\sqrt{\pi}}\,\zeta^{-1/4}\, e^{-\frac23\zeta^{3/2}}, \qquad \zeta \to \infty. \qquad (9.10)
This result is actually valid for complex values of ζ subject to the restriction
|\arg\zeta| < \pi. \qquad (9.11)
This asymptotic approximation is really quite good for modest ζ, as Fig. 9.3
shows.
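The comparison in Fig. 9.3 can be reproduced without special-function libraries, since Ai is entire and its Taylor series about 0 follows from Ai″ = ζ Ai. A minimal Python sketch; the truncation order and function names are our choices:

```python
import math

def airy_series(x, nmax=120):
    # Taylor coefficients: a_{n+3} = a_n/((n+3)(n+2)), from Ai'' = x Ai,
    # seeded with Ai(0) = 3^(-2/3)/Gamma(2/3) and Ai'(0) = -3^(-1/3)/Gamma(1/3).
    a = [3 ** (-2 / 3) / math.gamma(2 / 3),
         -(3 ** (-1 / 3)) / math.gamma(1 / 3),
         0.0]
    s = a[0] + a[1] * x
    xp = x * x
    for n in range(3, nmax):
        a.append(a[n - 3] / (n * (n - 1)))
        xp *= x
        s += a[n] * xp
    return s

def airy_asymptotic(x):
    # Leading asymptotic term, Eq. (9.10).
    return x ** -0.25 * math.exp(-2 / 3 * x ** 1.5) / (2 * math.sqrt(math.pi))

for x in (1.0, 2.0, 5.0):
    exact, approx = airy_series(x), airy_asymptotic(x)
    print(x, exact, approx, (approx - exact) / exact)
# The relative error is under 10% already at x = 1 and shrinks rapidly,
# consistent with Fig. 9.3.
```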
9.1.1 Asymptotic series
Let us calculate the corrections to this result. We return to Eq. (9.7) and keep
the next term in ξ:
\phi(z) = -\frac23\zeta^{3/2} + \zeta^{1/2}\xi^2 - \frac13\xi^3, \qquad (9.12)
which is exact in this case. Thus the Airy function is exactly represented by the
integral
\mathrm{Ai}(\zeta) = \frac{1}{2\pi i}\int_{-i\infty}^{i\infty} d\xi\, e^{-\frac23\zeta^{3/2}}\, e^{\zeta^{1/2}\xi^2}\, e^{-\frac13\xi^3}. \qquad (9.13)
Figure 9.3: The Airy function Ai(x) compared with the asymptotic approximation
(9.10), denoted f(x), and the relative error of the latter, denoted r(x). The
error is less than 10% even for x as small as 1.
We approximate this by expanding the last exponential, since for large ζ the
integrand is dominated by small ξ. Expanding out to fourth order, and omitting
odd terms, we have after substituting ξ = iuζ^{−1/4}:
\mathrm{Ai}(\zeta) \sim \frac{\zeta^{-1/4}}{2\pi}\, e^{-\frac23\zeta^{3/2}}\int_{-\infty}^\infty du\, e^{-u^2}\left[1 - \frac{1}{18}\frac{u^6}{\zeta^{3/2}} + \frac{1}{24}\,\frac{1}{81}\,\frac{u^{12}}{\zeta^3} + \cdots\right]. \qquad (9.14)
The integrals may be evaluated starting from
\int_{-\infty}^\infty du\, e^{-\lambda u^2} = \sqrt{\frac{\pi}{\lambda}}, \qquad (9.15)
so
\int_{-\infty}^\infty du\, u^{2k}\, e^{-\lambda u^2} = \left(-\frac{d}{d\lambda}\right)^k\int_{-\infty}^\infty du\, e^{-\lambda u^2} = \sqrt{\pi}\,\frac{(2k-1)!!}{2^k}\,\frac{1}{\lambda^{(2k+1)/2}}. \qquad (9.16)
Thus, the two leading corrections to the asymptotic expression for the Airy
function given in Eq. (9.10) are
\mathrm{Ai}(\zeta) \sim \frac{1}{2\sqrt{\pi}}\,\zeta^{-1/4}\, e^{-\frac23\zeta^{3/2}}\left[1 - \frac{5}{48}\frac{1}{\zeta^{3/2}} + \frac{385}{4608}\frac{1}{\zeta^3} + \cdots\right], \qquad (9.17)
which is the beginning of an asymptotic series expansion in powers of ζ^{−3/2}.
9.2 Synchrotron Radiation
A charged particle moving in a circular orbit emits electromagnetic radiation
called (for the machine in which such radiation was first observed) synchrotron
radiation. For details of the theory, see, for example, J. Schwinger, L. L. DeRaad,
Jr., K. A. Milton, and W.-y. Tsai, Classical Electrodynamics (Perseus,
1998), p. 401 ff. In particular, the power radiated in the mth harmonic of the
frequency of revolution of the charged particle moving in a circle with speed
v = βc is, in part, proportional to
J'_{2m}(2m\beta) = -\int_0^\pi \frac{d\phi}{\pi}\,\sin\phi\,\sin 2m(\beta\sin\phi - \phi). \qquad (9.18)
In the ultrarelativistic limit when β → 1, most of the radiation occurs for large
harmonic numbers, m ≫ 1, and the main contribution comes from the region
near φ = 0. Therefore, we may expand the integrand in Eq. (9.18) as follows:
\sin\phi\,\sin 2m(\beta\sin\phi - \phi) \approx \phi\,\sin 2m\left[\beta\left(\phi - \frac{\phi^3}{3!}\right) - \phi\right]
= \phi\,\sin\left[2m\left(-\phi(1-\beta) - \frac16\beta\phi^3\right)\right]
\approx -\phi\,\sin\left[m\left((1-\beta^2)\phi + \frac13\phi^3\right)\right]
= -\sqrt{1-\beta^2}\,x\,\sin\left[m(1-\beta^2)^{3/2}\left(x + \frac13 x^3\right)\right], \qquad (9.19)
where we have introduced the change of scale
\phi = \sqrt{1-\beta^2}\,x. \qquad (9.20)
As a result, in this limit, Eq. (9.18) can be approximated by²
J'_{2m}(2m\beta) \sim (1-\beta^2)\int_0^\infty \frac{dx}{\pi}\, x\,\sin\left[m(1-\beta^2)^{3/2}\left(x + \frac13 x^3\right)\right] = \frac{1-\beta^2}{\pi}\,\mathrm{Im}\int_0^\infty dx\, x\, e^{im(1-\beta^2)^{3/2}(x+x^3/3)}. \qquad (9.21)
For m fixed and β approaching unity in such a way that m(1−β²)^{3/2} ≪ 1, the
significant contribution to Eq. (9.21) comes from the region where x is large,
and Eq. (9.21) reduces to
J'_{2m}(2m\beta) \sim (1-\beta^2)\int_0^\infty \frac{dx}{\pi}\, x\,\sin\left[\frac{m}{3}(1-\beta^2)^{3/2}x^3\right] = \int_0^\infty \frac{d\phi}{\pi}\,\phi\,\sin\left(\frac{m}{3}\phi^3\right), \qquad (9.22)
where all reference to the speed of the particle has disappeared. By changing
variables, we may write this as
J'_{2m}(2m) \sim -\mathrm{Im}\int_0^\infty \frac{d\phi}{\pi}\,\phi\, e^{-im\phi^3/3} = -\mathrm{Im}\left(\frac{3}{m}\right)^{2/3}\frac{e^{-i\pi/3}}{\pi}\int_0^\infty dt\left(\frac13 t^{-2/3}\right)t^{1/3}\, e^{-t}
= -\mathrm{Im}\left(\frac{3}{m}\right)^{2/3}\frac{\Gamma(2/3)}{3\pi}\, e^{-i\pi/3} = \frac{3^{1/6}}{2\pi}\,\frac{\Gamma(2/3)}{m^{2/3}}, \qquad \text{for } m \gg 1. \qquad (9.23)
In the above evaluation, we have used Cauchy's theorem to perform a change of
contour, as shown in Fig. 9.4, and have used the definition of the gamma function
(8.69). Notice that Eq. (9.23) is valid for m either integer or half-integer.
However, for sufficiently large m, the parameter m(1−β²)^{3/2} becomes large,
and the integrand in Eq. (9.21) undergoes rapid oscillations in x except near
the stationary points, which satisfy
\frac{d}{dx}\left(x + \frac13 x^3\right) = 1 + x^2 = 0; \qquad (9.24)
²Evidently, this integral is related to that defining the Airy function, Eq. (9.1).
Figure 9.4: Change of contour used in evaluating Eq. (9.23).
Figure 9.5: Stationary phase contour for evaluation of (9.21).
that is, the stationary phase points are located at
x = \pm i. \qquad (9.25)
By extending the region of integration from −∞ to +∞, we evaluate Eq. (9.21)
asymptotically by following the standard procedure of the saddle point method
(or the method of steepest descents). We deform the contour of integration so
that it passes through the stationary point x = i, because then the dominant
contribution comes from the vicinity of that point. (See Fig. 9.5.) In the
neighborhood of x = i, we let
x = i + \xi, \qquad (9.26)
where ξ is real, to take advantage of the saddle point character. For arbitrary ξ,
x + \frac13 x^3 = (i+\xi) + \frac13(i+\xi)^3 = i\left(\frac23 + \xi^2\right) + \frac13\xi^3, \qquad (9.27)
so that for small ξ, if we drop the cubic term in ξ, the exponential factor in
Eq. (9.21) becomes
e^{-\frac23 m(1-\beta^2)^{3/2}}\; e^{-m(1-\beta^2)^{3/2}\xi^2}, \qquad (9.28)
which falls off exponentially on both sides of x = i. The resulting Gaussian
integral in (9.21) leads to the following asymptotic form:
J'_{2m}(2m\beta) \sim \frac12\,\frac{(1-\beta^2)^{1/4}}{\sqrt{\pi m}}\, e^{-\frac23 m(1-\beta^2)^{3/2}}, \qquad m(1-\beta^2)^{3/2} \gg 1. \qquad (9.29)
Thus, for very large harmonic numbers, the power spectrum³ decreases exponentially,
in contrast to the behavior for smaller values of m, where it increases
like m^{1/3}. The transition between these two regimes occurs near the critical
harmonic number, m_c, for which
m_c(1-\beta^2)^{3/2} \equiv 1, \qquad (9.31)
or
m_c = (1-\beta^2)^{-3/2} = \left(\frac{E}{\mu c^2}\right)^3, \qquad (9.32)
which uses the relativistic connection between the energy and the rest mass
µ, E = µc²(1−β²)^{−1/2}. The bulk of the radiation is emitted with harmonic
numbers near m_c. The qualitative shape of the spectrum is shown in Fig. 9.6.
9.2.1 First correction
Corrections to the formula (9.29) may be computed by retaining the ξ³ term,
but treating it as small, so the correction may be obtained by Taylor expanding
the exponential:
J'_{2m}(2m\beta) \sim \frac{1-\beta^2}{2\pi}\,\mathrm{Im}\, e^{-\frac23 m(1-\beta^2)^{3/2}}\int_{-\infty}^\infty d\xi\, e^{-m(1-\beta^2)^{3/2}\xi^2}\,(i+\xi)
\times\left[1 + im(1-\beta^2)^{3/2}\frac{\xi^3}{3} - \frac12 m^2(1-\beta^2)^3\frac{\xi^6}{9} + \cdots\right]
= \frac{1-\beta^2}{2\pi}\, e^{-\frac23 m(1-\beta^2)^{3/2}}\,(1-\beta^2)^{-3/4}\, m^{-1/2}\int_{-\infty}^\infty dt\, e^{-t^2}
\times\left[1 + \frac13\frac{t^4}{m(1-\beta^2)^{3/2}} - \frac{1}{18}\frac{t^6}{m(1-\beta^2)^{3/2}} + \cdots\right]. \qquad (9.33)
Here we noted that the imaginary part only receives the contribution of the
even terms in ξ, which are all that survive symmetric integration. Finally, the
Gaussian integrals are evaluated according to
\int_{-\infty}^\infty dt\, t^{2n}\, e^{-t^2} = \int_0^\infty \frac{dx}{\sqrt{x}}\, x^n\, e^{-x} = \Gamma\left(n+\frac12\right), \qquad (9.34)
where
\Gamma\left(\frac52\right) = \frac{3\sqrt{\pi}}{4}, \qquad \Gamma\left(\frac72\right) = \frac{15\sqrt{\pi}}{8}. \qquad (9.35)
³The power radiated into the mth harmonic by a particle of charge e moving in a circle of
radius R with angular frequency ω_0 is given by
P_m = \frac{e^2}{R}\, m\omega_0\left[2\beta^2 J'_{2m}(2m\beta) - (1-\beta^2)\int_0^{2m\beta} dx\, J_{2m}(x)\right]. \qquad (9.30)
The two terms in the square brackets have similar asymptotic behavior.
Figure 9.6: Sketch of power emitted into the mth harmonic as a function of m.
What is actually plotted is 2mJ'_{2m}(2mβ) for β = 0.99. In this case m_c = 356.
Thus
J'_{2m}(2m\beta) = \frac{(1-\beta^2)^{1/4}}{2\sqrt{m\pi}}\, e^{-\frac23 m(1-\beta^2)^{3/2}}\times\left[1 + \frac{7}{48}\frac{1}{m(1-\beta^2)^{3/2}} + O\left(\frac{1}{m^2(1-\beta^2)^3}\right)\right]. \qquad (9.36)
Chapter 10
Linear Operators,
Eigenvalues, and Green’s
Operator
We begin with a reminder of facts which should be known from previous courses.
10.1 Inner Product Space
A vector space V is a collection of objects {x} for which addition is defined.
That is, if x, y ∈ V, then x + y ∈ V, and this addition satisfies the usual commutative
and associative properties of addition:
x + y = y + x, \qquad x + (y + z) = (x + y) + z. \qquad (10.1)
There is a zero vector 0, with the property
0 + x = x + 0 = x, \qquad (10.2)
and the inverse of x, denoted −x, has the property
x - x \equiv x + (-x) = 0. \qquad (10.3)
Vectors may be multiplied by complex numbers (“scalars”) in the usual way.
That is, if λ is a complex number, and x ∈ V , then λx ∈ V . Multiplication by
scalars is distributive over addition:
λ(x + y) = λx + λy. (10.4)
Scalar multiplication is also associative: If λ and µ are two complex numbers,
λ(µx) = (λµ)x. (10.5)
107 Version of November 16, 2011
An inner product space is a vector space possessing an inner product. If x
and y are two vectors, the inner product
\langle x, y\rangle \qquad (10.6)
is a complex number. The inner product has the following properties:
\langle x, y+\alpha z\rangle = \langle x, y\rangle + \alpha\langle x, z\rangle, \qquad (10.7a)
\langle x+\beta y, z\rangle = \langle x, z\rangle + \beta^*\langle y, z\rangle, \qquad (10.7b)
\langle x, y\rangle = \langle y, x\rangle^*, \qquad (10.7c)
\langle x, x\rangle > 0 \ \text{if } x \ne 0, \qquad (10.7d)
where α and β are scalars. Because of the properties (10.7a) and (10.7b), we
say that the inner product is linear in the second factor and antilinear in the
first. Because of the last property (10.7d), we define the norm of the vector by
\|x\| = \sqrt{\langle x, x\rangle}. \qquad (10.8)
10.2 The Cauchy-Schwarz Inequality
An important result is the Cauchy-Schwarz inequality,¹ which has an obvious
meaning for, say, three-dimensional vectors. It reads, for any two vectors x and
y,
|\langle x, y\rangle| \le \|x\|\,\|y\|, \qquad (10.9)
where equality holds if and only if x and y are linearly dependent.
Proof: For arbitrary λ we have
0 \le \langle x-\lambda y, x-\lambda y\rangle = \|x\|^2 - \lambda\langle x, y\rangle - \lambda^*\langle y, x\rangle + |\lambda|^2\|y\|^2. \qquad (10.10)
Because the inequality is trivial if y = 0, we may assume y ≠ 0, and so we may
choose
\lambda = \frac{\langle y, x\rangle}{\|y\|^2}. \qquad (10.11)
Then the inequality (10.10) reads
0 \le \|x\|^2 - \frac{2}{\|y\|^2}|\langle x, y\rangle|^2 + \frac{|\langle y, x\rangle|^2}{\|y\|^2} = \|x\|^2 - \frac{|\langle x, y\rangle|^2}{\|y\|^2}, \qquad (10.12)
from which Eq. (10.9) follows. Evidently strict inequality holds in Eq. (10.10) unless
x = \lambda y. \qquad (10.13)
¹The name Bunyakovskii should also be added.
From the Cauchy-Schwarz inequality, the triangle inequality follows:
\|x+y\| \le \|x\| + \|y\|. \qquad (10.14)
Proof:
\|x+y\|^2 = \langle x+y, x+y\rangle = \|x\|^2 + \|y\|^2 + 2\,\mathrm{Re}\,\langle x, y\rangle
\le \|x\|^2 + \|y\|^2 + 2|\langle x, y\rangle|
\le \|x\|^2 + \|y\|^2 + 2\|x\|\,\|y\| = (\|x\| + \|y\|)^2. \qquad (10.15)
QED
10.3 Hilbert Space
A Hilbert space H is an inner product space that is complete. Recall from
Chapter 2 that a complete space is one in which any Cauchy sequence of vectors
has a limit in the space. That is, if we have a Cauchy sequence of vectors, i.e.,
for any ǫ > 0,
\{x_n\}_{n=1}^\infty: \quad \|x_n - x_m\| < \epsilon \quad \forall\ n, m > N(\epsilon), \qquad (10.16)
then the sequence has a limit in H; that is, there is an x ∈ H for which for any
ǫ > 0 there is an N(ǫ) so large that
\|x - x_n\| < \epsilon \quad \forall\ n > N(\epsilon). \qquad (10.17)
We will mostly be talking about Hilbert spaces in the following.
Suppose we have a countable set of orthonormal vectors {e_i}, i = 1, 2, ...,
in H. Orthonormality means
\langle e_i, e_j\rangle = \delta_{ij}. \qquad (10.18)
The set is said to be complete if any vector x in H can be expanded in terms of
the e_i's:²
x = \sum_{i=1}^\infty \langle e_i, x\rangle\, e_i. \qquad (10.19)
Here convergence is defined in the sense of the norm as described above. Geometrically,
the inner product ⟨e_i, x⟩ is a kind of direction cosine of the vector x,
or a projection of the vector x on the basis vector e_i.
²If the space is finite dimensional, then the sum runs up to the dimensionality of the space.
Example
Consider the space of all functions that are square integrable on the closed
interval [−π, π]:
\int_{-\pi}^{\pi} |f(x)|^2\, dx < \infty. \qquad (10.20)
The functions (not the values of the functions) are the vectors in the space, and
the inner product is defined by
\langle f, g\rangle = \int_{-\pi}^{\pi} f(x)^*\, g(x)\, dx. \qquad (10.21)
It is evident that this definition of the inner product satisfies all the properties
(10.7a)-(10.7d). This space, called L²(−π, π), is in fact a Hilbert space. A
complete set of orthonormal vectors is
\{f_n\}: \quad f_n(x) = \frac{1}{\sqrt{2\pi}}\, e^{inx}, \quad n = 0, \pm1, \pm2, \ldots, \qquad (10.22)
whose inner products satisfy
\langle f_n, f_m\rangle = \delta_{n,m}. \qquad (10.23)
The expansion
f = \sum_{n=-\infty}^{\infty} \langle f_n, f\rangle\, f_n \qquad (10.24)
is the Fourier expansion of f:
\langle f_n, f\rangle = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi} f(x)\, e^{-inx}\, dx = a_n, \qquad (10.25)
where in terms of the Fourier coefficients a_n,
f(x) = \sum_{n=-\infty}^{\infty} a_n\,\frac{1}{\sqrt{2\pi}}\, e^{inx}. \qquad (10.26)
This Fourier series does not, in general, converge pointwise, but it does converge
“in the mean”:
\left\|f(x) - \frac{1}{\sqrt{2\pi}}\sum_{n=-N}^{N} a_n e^{inx}\right\| \to 0 \quad \text{as } N \to \infty, \qquad (10.27)
that is,
\lim_{N\to\infty}\int_{-\pi}^{\pi} dx\left|f(x) - \frac{1}{\sqrt{2\pi}}\sum_{n=-N}^{N} a_n e^{inx}\right|^2 = 0. \qquad (10.28)
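Mean convergence (10.28) can be seen numerically even for a discontinuous function such as a square wave, where pointwise convergence fails at the jump. A minimal Python sketch; the grid size and the test function are our choices:

```python
import cmath
import math

M = 400  # midpoint grid on [-pi, pi]
xs = [-math.pi + (j + 0.5) * 2 * math.pi / M for j in range(M)]
dx = 2 * math.pi / M

def f(x):  # square wave: discontinuous at x = 0
    return 1.0 if x >= 0 else -1.0

def coeff(n):  # a_n of Eq. (10.25), by quadrature
    return sum(f(x) * cmath.exp(-1j * n * x) for x in xs) * dx / math.sqrt(2 * math.pi)

def mean_square_error(N):  # the integral appearing in Eq. (10.28)
    c = {n: coeff(n) for n in range(-N, N + 1)}
    total = 0.0
    for x in xs:
        s = sum(c[n] * cmath.exp(1j * n * x) for n in c) / math.sqrt(2 * math.pi)
        total += abs(f(x) - s) ** 2
    return total * dx

print(mean_square_error(4), mean_square_error(32))  # decreases as N grows
```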
10.4 Linear Operators
A linear operator T on a vector space V is a rule assigning to each f ∈ V a
unique vector Tf ∈ V. It has the linearity property,
T(\alpha f + \beta g) = \alpha Tf + \beta Tg, \qquad (10.29)
where α, β are scalars. In an inner product space, the adjoint (or Hermitian
conjugate) of T is defined by
\langle f, Tg\rangle = \langle T^\dagger f, g\rangle, \quad \forall\ f, g \in V. \qquad (10.30)
T is self-adjoint (or Hermitian) if
T^\dagger = T. \qquad (10.31)
10.4.1 Sturm-Liouville Problem
Consider the space of twice continuously differentiable real functions defined on
a segment of the real line,
x_0 \le x \le x_1, \qquad (10.32)
an incomplete subset of the Hilbert space L²(x_0, x_1). Under what conditions is
the differential operator
L = p(x)\frac{d^2}{dx^2} + q(x)\frac{d}{dx} + r(x), \qquad (10.33)
where p, q, and r are real functions, self-adjoint?
Let u, v be functions in the space. In terms of the L² inner product,
\langle u, Lv\rangle = \int_{x_0}^{x_1} dx\, u(x)\, Lv(x)
= \int_{x_0}^{x_1} dx\, u(x)\left[p(x)\frac{d^2}{dx^2}v(x) + q(x)\frac{d}{dx}v(x) + r(x)v(x)\right]
= u(x)p(x)v'(x)\Big|_{x_0}^{x_1} - \int_{x_0}^{x_1} dx\,[u(x)p(x)]'\,v'(x)
+ u(x)q(x)v(x)\Big|_{x_0}^{x_1} - \int_{x_0}^{x_1} dx\,[u(x)q(x)]'\,v(x)
+ \int_{x_0}^{x_1} dx\, u(x)r(x)v(x)
= \left\{u(x)p(x)v'(x) + u(x)q(x)v(x) - [u(x)p(x)]'\,v(x)\right\}\Big|_{x_0}^{x_1}
+ \int_{x_0}^{x_1} dx\left\{[u(x)p(x)]''\,v(x) - [u(x)q(x)]'\,v(x) + u(x)r(x)v(x)\right\}
= \left\{p(x)\left[u(x)v'(x) - u'(x)v(x)\right] + \left[q(x) - p'(x)\right]u(x)v(x)\right\}\Big|_{x_0}^{x_1}
+ \int_{x_0}^{x_1} dx\left\{p(x)u''(x) + \left[2p'(x) - q(x)\right]u'(x) + \left[p''(x) - q'(x) + r(x)\right]u(x)\right\}v(x). \qquad (10.34)
The last integral here equals, for all v,
$$\langle Lu, v\rangle = \int_{x_0}^{x_1} dx\,[Lu(x)]\,v(x) \qquad (10.35)$$
if and only if
$$2p' - q = q, \qquad p'' - q' + r = r, \qquad (10.36)$$
which imply the single condition
$$p'(x) = q(x). \qquad (10.37)$$
If this condition holds for all x in the interval $[x_0, x_1]$, the integrated term is
$$p(x)\left[u(x)v'(x) - u'(x)v(x)\right]\Big|_{x_0}^{x_1}. \qquad (10.38)$$
Only if this is zero is L Hermitian:
$$\langle u, Lv\rangle = \langle Lu, v\rangle. \qquad (10.39)$$
The vanishing of the integrated term may be achieved in various ways:
1. The function p may vanish at both boundaries:
$$p(x_0) = p(x_1) = 0, \quad \text{and } u, v \text{ bounded for } x = x_0, x_1. \qquad (10.40)$$
Thus, for example, the Legendre differential operator
$$(1 - x^2)\frac{d^2}{dx^2} - 2x\frac{d}{dx} \qquad (10.41)$$
is self-adjoint on the interval [−1, 1].
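A quick numerical sketch of this claim; the test functions u, v and the quadrature are assumptions chosen for illustration. Since p(x) = 1 − x² vanishes at both endpoints, ⟨u, Lv⟩ and ⟨Lu, v⟩ should agree with no boundary condition beyond boundedness.

```python
# Check that L = (1 - x^2) d^2/dx^2 - 2x d/dx is symmetric on [-1, 1]
# for assumed polynomial test functions (derivatives written out by hand).

def simpson(f, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i*h)
    return s * h / 3

u  = lambda x: x**2
Lu = lambda x: (1 - x**2)*2 - 2*x*(2*x)        # L acting on u = x^2
v  = lambda x: x**2 + x
Lv = lambda x: (1 - x**2)*2 - 2*x*(2*x + 1)    # L acting on v = x^2 + x

lhs = simpson(lambda x: u(x)*Lv(x), -1.0, 1.0)  # <u, Lv>
rhs = simpson(lambda x: Lu(x)*v(x), -1.0, 1.0)  # <Lu, v>
print(lhs, rhs)   # both equal -16/15
```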
2. The functions in the space satisfy homogeneous boundary conditions:
(a) The functions vanish at the boundaries,
$$u(x_0) = u(x_1) = 0, \quad v(x_0) = v(x_1) = 0. \qquad (10.42)$$
These are called homogeneous Dirichlet boundary conditions.
(b) The derivatives of the functions vanish at the boundaries,
$$u'(x_0) = u'(x_1) = 0, \quad v'(x_0) = v'(x_1) = 0. \qquad (10.43)$$
These are called homogeneous Neumann boundary conditions.
(c) Homogeneous mixed boundary conditions are a linear combination
of these conditions,
$$u'(x_0) + \alpha(x_0)u(x_0) = 0, \qquad (10.44a)$$
$$u'(x_1) + \alpha(x_1)u(x_1) = 0, \qquad (10.44b)$$
where α is some function, the same for all functions u in the space.
3. A third possibility is that the solutions may satisfy periodic boundary
conditions,
$$u(x_0) = u(x_1) \quad \text{and} \quad u'(x_0) = u'(x_1). \qquad (10.45)$$
This only works when the function p is also periodic,
$$p(x_0) = p(x_1). \qquad (10.46)$$
Conditions such as the above, which ensure the vanishing of the integrated
term (or, in higher dimensions, surface terms), are called self-adjoint boundary
conditions. When they hold true, the differential equation
$$\frac{d}{dx}\left[p(x)\frac{d}{dx}u(x)\right] + r(x)u(x) = 0 \qquad (10.47)$$
is self-adjoint. This equation is called the Sturm-Liouville equation.
10.5 Eigenvectors
If T is a (linear) operator and f ≠ 0 is a vector such that
$$Tf = \lambda f, \qquad (10.48)$$
where λ is a complex number, then we say that f is an eigenvector ("characteristic
vector") belonging to the operator T, and λ is the corresponding eigenvalue.
The following theorem is most important: the eigenvalues of a Hermitian
operator are real, and the eigenvectors belonging to distinct eigenvalues are orthogonal.
The proof is quite simple. If
$$Tf = \lambda f, \quad Tg = \mu g, \qquad (10.49)$$
then
$$\langle g, Tf\rangle = \lambda\langle g, f\rangle = \langle Tg, f\rangle = \mu^*\langle g, f\rangle. \qquad (10.50)$$
Thus if g and f are the same, we conclude that
$$\lambda = \lambda^*, \qquad (10.51)$$
i.e., the eigenvalue λ is real, while if λ ≠ µ, we must have
$$\langle g, f\rangle = 0. \qquad (10.52)$$
10.5.1 Bessel Functions
The Bessel operator is
$$B_\nu = \frac{d^2}{dx^2} + \frac{1}{x}\frac{d}{dx} - \frac{\nu^2}{x^2}, \qquad (10.53)$$
where ν is a real number. This is Hermitian in the space of real functions
satisfying homogeneous boundary conditions (Dirichlet, Neumann, or mixed),
where the inner product is defined by
$$\langle u, v\rangle = \int_a^b x\,dx\,u(x)v(x). \qquad (10.54)$$
Proof: Note that
$$xB_\nu = \frac{d}{dx}\,x\,\frac{d}{dx} - \frac{\nu^2}{x} \qquad (10.55)$$
is of the Sturm-Liouville form, (10.47), with p(x) = x, which is Hermitian with
the $L^2(a, b)$ inner product. Then
$$\langle u, B_\nu v\rangle = \int_a^b dx\,u(x)\,xB_\nu v(x) = \int_a^b dx\,[xB_\nu u(x)]\,v(x) = \langle B_\nu u, v\rangle. \qquad (10.56)$$
When a = 0, the lower limit of the integrated term is zero automatically if
the functions are finite at x = 0; see Eq. (10.38). Suppose we demand that
Dirichlet conditions hold at x = b, i.e., that the functions must vanish there.
Then we seek solutions to the following Hermitian eigenvalue problem,
$$B_\nu \psi_{\nu n} = \lambda_{\nu n}\psi_{\nu n}, \qquad (10.57)$$
with the boundary conditions
$$\psi_{\nu n}(b) = 0, \quad \psi_{\nu n}(0) \text{ finite}. \qquad (10.58)$$
Here n enumerates the eigenvalues. The solutions to this problem are the Bessel
functions, which satisfy the differential equation
$$\left(\frac{d^2}{dz^2} + \frac{1}{z}\frac{d}{dz} + 1 - \frac{\nu^2}{z^2}\right)J_\nu(z) = 0, \qquad (10.59)$$
and which are finite at the origin, z = 0. This is the same as the eigenvalue equation
(10.57) provided we change the variable, $z = \sqrt{-\lambda_{\nu n}}\,x$. That is,
$$\psi_{\nu n}(x) = J_\nu\!\left(\sqrt{-\lambda_{\nu n}}\,x\right). \qquad (10.60)$$
The solutions we seek are Bessel functions of a real variable, so the acceptable
eigenvalues satisfy
$$\lambda_{\nu n} < 0, \qquad (10.61)$$
[Footnote: The second solution to Eq. (10.59), the so-called Neumann function $N_\nu(z)$ (it is also
denoted by $Y_\nu(z)$ and is more properly attributed to Weber), is not regular at the origin.]
so we write
$$-\lambda_{\nu n} = k_{\nu n}^2. \qquad (10.62)$$
Finally, we impose the boundary condition at x = b:
$$0 = \psi_{\nu n}(b) = J_\nu(k_{\nu n}b), \qquad (10.63)$$
that is, $k_{\nu n}b$ must be a zero of $J_\nu$. There are an infinite number of such zeros, as
Fig. 10.1 illustrates. Let the nth zero of $J_\nu$ be denoted by $\alpha_{\nu n}$, n = 1, 2, 3, . . . .
For example, the first three zeros of $J_0$ are
$$\alpha_{01} = 2.404826, \quad \alpha_{02} = 5.520078, \quad \alpha_{03} = 8.653728, \qquad (10.64)$$
while the first three zeros of $J_1$ (other than 0) are
$$\alpha_{11} = 3.83171, \quad \alpha_{12} = 7.01559, \quad \alpha_{13} = 10.17347. \qquad (10.65)$$
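The zeros quoted in Eq. (10.64) can be reproduced with a few lines of code. This sketch computes $J_0$ from its Maclaurin series (an adequate method at these modest arguments; the grid and tolerances are assumptions), brackets sign changes, and refines them by bisection:

```python
# Locate the first three zeros alpha_{0n} of J_0 by bisection.
def J0(x):
    # Maclaurin series J_0(x) = sum_m (-1)^m (x/2)^{2m} / (m!)^2
    term, s = 1.0, 1.0
    for m in range(1, 60):
        term *= -(x/2)**2 / m**2
        s += term
    return s

def bisect(f, a, b, tol=1e-10):
    fa = f(a)
    while b - a > tol:
        m = 0.5*(a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5*(a + b)

zeros, x = [], 0.5
while len(zeros) < 3 and x < 12.0:    # coarse scan for sign changes
    if J0(x) * J0(x + 0.5) < 0:
        zeros.append(bisect(J0, x, x + 0.5))
    x += 0.5
print(zeros)   # approximately [2.404826, 5.520078, 8.653728]
```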
Then the eigenvalues of the Bessel operator are
$$\lambda_{\nu n} = -\left(\frac{\alpha_{\nu n}}{b}\right)^2, \qquad (10.66)$$
and the eigenfunctions are
$$J_\nu\!\left(\frac{\alpha_{\nu n}x}{b}\right). \qquad (10.67)$$
Because of the Hermiticity of $B_\nu$, these have the following orthogonality property,
from Eq. (10.52):
$$\int_0^b dx\,x\,J_\nu\!\left(\frac{\alpha_{\nu n}x}{b}\right)J_\nu\!\left(\frac{\alpha_{\nu m}x}{b}\right) = 0, \quad n \ne m. \qquad (10.68)$$
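A numerical sketch of the orthogonality relation (10.68) for ν = 0 and b = 1, using the zeros $\alpha_{01}$, $\alpha_{02}$ quoted in Eq. (10.64); the series evaluation of $J_0$ and the quadrature details are assumptions for illustration.

```python
# Verify int_0^1 x J0(a1 x) J0(a2 x) dx ~ 0 for distinct zeros a1, a2 of J0.
def J0(x):
    term, s = 1.0, 1.0
    for m in range(1, 60):
        term *= -(x/2)**2 / m**2
        s += term
    return s

def simpson(f, a, b, n=2000):
    h = (b - a)/n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2)*f(a + i*h)
    return s*h/3

a1, a2 = 2.404826, 5.520078                       # zeros from Eq. (10.64)
off  = simpson(lambda x: x*J0(a1*x)*J0(a2*x), 0.0, 1.0)  # should vanish
diag = simpson(lambda x: x*J0(a1*x)**2,       0.0, 1.0)  # strictly positive
print(off, diag)
```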
10.6 Dual Vectors. Dirac Notation
It is often convenient to think of the inner product as being composed by the
multiplication of two different kinds of vectors. Thus, in a two-dimensional vector
space we have column vectors,
$$v = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}, \qquad (10.69)$$
and row vectors,
$$v^\dagger = (v_1^*, v_2^*). \qquad (10.70)$$
As the notation indicates, the row vector $v^\dagger$ is the adjoint, the complex conjugate
of the transpose, of the column vector v. The inner product is then formed by
the rules of matrix multiplication,
$$\langle v, u\rangle = v^\dagger u = v_1^* u_1 + v_2^* u_2. \qquad (10.71)$$
We generalize this notion to abstract vectors as follows. Denote a "right"
vector (Dirac called it a "ket") by $|\lambda\rangle$, where λ is a name, or number, or set
[Figure 10.1: Plot of the Bessel functions of the first kind, $J_0$, $J_1$, and $J_2$, as functions of x.]
of numbers, labeling the vector. For example, if $|\lambda\rangle$ is an eigenvector of some
operator, λ might be the corresponding eigenvalue.
The dual (or "conjugate") vector to $|\lambda\rangle$ is
$$\langle\lambda| = \left(|\lambda\rangle\right)^\dagger, \qquad (10.72)$$
which is a "left" vector, or "bra" vector. For every right vector there is a unique
left vector, and vice versa, in an inner product space. The inner product of $|\alpha\rangle$
with $\langle\beta|$ is denoted $\langle\beta|\alpha\rangle$. Note that the double vertical line has coalesced into
a single line. This notation is a bracket notation, hence Dirac's nomenclature.
With row and column vectors there is not only an inner product, but an
outer product as well:
$$vu^\dagger = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}(u_1^*, u_2^*) = \begin{pmatrix} v_1 u_1^* & v_1 u_2^* \\ v_2 u_1^* & v_2 u_2^* \end{pmatrix}. \qquad (10.73)$$
The result is a matrix, or operator. So it is with abstract left and right vectors.
We may define a dyadic by
$$|\alpha\rangle\langle\beta|, \qquad (10.74)$$
which is an operator. When it acts on the right vector $|\gamma\rangle$ it produces another
vector,
$$|\alpha\rangle\langle\beta|\,|\gamma\rangle = |\alpha\rangle\langle\beta|\gamma\rangle, \qquad (10.75)$$
where $\langle\beta|\gamma\rangle$ is a complex number, the inner product of $\langle\beta|$ and $|\gamma\rangle$; evidently
the properties of an operator are satisfied.
10.6.1 Basis Vectors
Let $|n\rangle$, n = 1, 2, . . ., be a complete, orthonormal set of vectors, that is, let them
satisfy the properties
$$\langle m|n\rangle = \delta_{mn}, \qquad (10.76a)$$
and, if $|\lambda\rangle$ is any vector in the space,
$$|\lambda\rangle = \sum_{n=1}^{\infty} |n\rangle\langle n|\lambda\rangle. \qquad (10.76b)$$
This is just a rewriting of the statement in Eq. (10.19). Since $|\lambda\rangle$ is an arbitrary
vector, we must have
$$\sum_{n=1}^{\infty} |n\rangle\langle n| = I, \qquad (10.77)$$
where I is the identity operator. This operator expression is the completeness
relation for the vectors $\{|n\rangle\}$.
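The completeness relation (10.77) can be made concrete in a finite-dimensional space. This sketch sums the outer products $|n\rangle\langle n|$ over an orthonormal basis of $\mathbb{C}^2$ (the particular basis is an assumed, illustrative choice) and recovers the identity matrix:

```python
import math

# Completeness sum over an orthonormal basis of C^2, with list-based vectors.
def dag(v):                 # adjoint: row of complex conjugates
    return [z.conjugate() for z in v]

def outer(v, w):            # |v><w| as a 2x2 matrix
    wd = dag(w)
    return [[v[i]*wd[j] for j in range(2)] for i in range(2)]

s = 1/math.sqrt(2)
n1 = [s,  s*1j]             # assumed orthonormal basis vectors
n2 = [s, -s*1j]

# sum_n |n><n|
I = [[sum(outer(n, n)[i][j] for n in (n1, n2)) for j in range(2)]
     for i in range(2)]
print(I)   # the 2x2 identity matrix
```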
10.7 $L^2(V)$
As we have seen, an important example of a Hilbert space is the space of all
functions square-integrable in some region. For example, suppose we consider
complex-valued functions f(r), where r = (x, y, z), r ∈ V , V being some
volume in three-dimensional space, such that
$$\int_V (d\mathbf{r})\,|f(\mathbf{r})|^2 < \infty, \qquad (10.78)$$
where the volume element is (dr) = dx dy dz. We call this Hilbert space $L^2(V)$.
Vectors in this space are functions: the function f corresponds to $|f\rangle$, which
we write as
$$f(\mathbf{r}) \longrightarrow |f\rangle. \qquad (10.79)$$
The inner product is
$$\langle f|g\rangle = \int_V (d\mathbf{r})\,f^*(\mathbf{r})g(\mathbf{r}). \qquad (10.80)$$
It is most convenient to define the "function" $\delta(\mathbf{r}-\mathbf{r}_0)$, the Dirac delta
function, by the property
$$f(\mathbf{r}_0) = \int_V (d\mathbf{r})\,\delta(\mathbf{r}-\mathbf{r}_0)f(\mathbf{r}) \qquad (10.81)$$
for all f, provided $\mathbf{r}_0$ lies within V. Regarding δ as a function (it is actually a
linear functional, defined by the integral equation above), we denote the corresponding
vector in Hilbert space by $|\mathbf{r}_0\rangle$:
$$\delta(\mathbf{r}-\mathbf{r}_0) \longrightarrow |\mathbf{r}_0\rangle. \qquad (10.82)$$
(Actually, $|\mathbf{r}_0\rangle$ is not a vector in $L^2(V)$, because it is not a square-integrable
function.) Pictorially, $|\mathbf{r}_0\rangle$ represents a function which is localized at $\mathbf{r} = \mathbf{r}_0$,
i.e., it vanishes if $\mathbf{r} \ne \mathbf{r}_0$, but with the property
$$\langle\mathbf{r}_0|f\rangle = \int_V (d\mathbf{r})\,\delta(\mathbf{r}-\mathbf{r}_0)f(\mathbf{r}) = f(\mathbf{r}_0); \qquad (10.83)$$
the number $\langle\mathbf{r}_0|f\rangle$ is the value of f at $\mathbf{r}_0$. Also note that
$$\langle\mathbf{r}_0|\mathbf{r}_1\rangle = \int_V (d\mathbf{r})\,\delta(\mathbf{r}-\mathbf{r}_0)\delta(\mathbf{r}-\mathbf{r}_1) = \delta(\mathbf{r}_0-\mathbf{r}_1). \qquad (10.84)$$
In the above, we always assume that $\mathbf{r}_0$ and $\mathbf{r}_1$ lie in the volume V.
It may be useful to recognize that in quantum mechanics $|\mathbf{r}_0\rangle$ is an eigenvector
of the position operator. It represents a state in which the particle has a
definite position, namely $\mathbf{r}_0$.
Now notice that if the completeness relation (10.77) is multiplied on the
right by $|\mathbf{r}'\rangle$ and on the left by $\langle\mathbf{r}|$, it reads
$$\sum_{n=1}^{\infty} \langle\mathbf{r}|n\rangle\langle n|\mathbf{r}'\rangle = \delta(\mathbf{r}-\mathbf{r}'). \qquad (10.85)$$
If we define $\psi_n(\mathbf{r}) = \langle\mathbf{r}|n\rangle$ as the values of what is now a complete set of
functions, then
$$\sum_{n=1}^{\infty} \psi_n^*(\mathbf{r}')\psi_n(\mathbf{r}) = \delta(\mathbf{r}-\mathbf{r}'). \qquad (10.86)$$
Implicit in what we are saying here is the assumption that the set of vectors
$|\mathbf{r}\rangle$, $\mathbf{r} \in V$, is complete:
$$\langle g|f\rangle = \int_V (d\mathbf{r})\,g^*(\mathbf{r})f(\mathbf{r}) = \int_V (d\mathbf{r})\,\langle g|\mathbf{r}\rangle\langle\mathbf{r}|f\rangle, \qquad (10.87)$$
which must mean, since $|g\rangle$ and $\langle f|$ are arbitrary,
$$I = \int_V (d\mathbf{r})\,|\mathbf{r}\rangle\langle\mathbf{r}|. \qquad (10.88)$$
(Because the vectors are continuously, not discretely, labeled, the sum in Eq.
(10.77) is replaced by an integral.) This will not be true if there are other
variables in the problem, such as spin, but in that case the inner product is not
given in terms of an integral over r alone.
10.8 Green’s Operator
We have now reached the taking-off point for the discussion of Green's functions.
We will in this section sketch the general type of problem we wish to consider.
In the next chapter we will fill in the details, by considering physical examples.
Let L be a self-adjoint linear operator in a Hilbert space. We wish to find
the solutions $|\psi\rangle$ to the following vector equation,
$$(L - \lambda)|\psi\rangle = |S\rangle, \qquad (10.89)$$
where $|S\rangle$ is a prescribed vector, the "source," and λ is a real number not equal
to any of the (real) eigenvalues of L.
Suppose the eigenvectors of L, which satisfy
$$L|n\rangle = \lambda_n|n\rangle, \qquad (10.90)$$
are complete, and are orthonormalized,
$$\sum_n |n\rangle\langle n| = I. \qquad (10.91)$$
We may then expand $|\psi\rangle$ in terms of these,
$$|\psi\rangle = \sum_n |n\rangle\langle n|\psi\rangle. \qquad (10.92)$$
When we insert this expansion into Eq. (10.89) and use the eigenvalue equation
(10.90), we obtain
$$\sum_n (\lambda_n - \lambda)|n\rangle\langle n|\psi\rangle = |S\rangle. \qquad (10.93)$$
Now multiply this equation on the left by $\langle n'|$, and use the orthonormality
property
$$\langle n'|n\rangle = \delta_{n'n}, \qquad (10.94)$$
to find (relabeling n′ → n)
$$(\lambda_n - \lambda)\langle n|\psi\rangle = \langle n|S\rangle, \qquad (10.95)$$
or, provided λ ≠ $\lambda_n$,
$$\langle n|\psi\rangle = \frac{\langle n|S\rangle}{\lambda_n - \lambda}. \qquad (10.96)$$
Then from Eq. (10.92) we deduce
$$|\psi\rangle = \sum_n \frac{|n\rangle\langle n|}{\lambda_n - \lambda}|S\rangle, \qquad (10.97)$$
which means we have solved for $|\psi\rangle$ in terms of the presumably known eigenvectors
and eigenvalues of L. We write this more compactly as
$$|\psi\rangle = G|S\rangle, \qquad (10.98)$$
where G, the Green's operator, is
$$G = \sum_n \frac{|n\rangle\langle n|}{\lambda_n - \lambda}; \qquad (10.99)$$
the sum ranges over all the eigenvectors of L.
We regard Eq. (10.98) as the definition of G: the response of a linear system
is linear in the source. Eq. (10.99) is the eigenvector expansion of G.
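A minimal sketch of the Green's operator (10.99) in a two-dimensional space; the matrix L, its eigensystem, the value of λ, and the source S are all assumed for illustration. Applying G to S and then applying L − λ recovers S, i.e., Eq. (10.101) in miniature.

```python
import math

# Assumed real symmetric L = [[2,1],[1,2]]: eigenvalues 1 and 3,
# eigenvectors (1,-1)/sqrt(2) and (1,1)/sqrt(2).
s = 1/math.sqrt(2)
eigs = [(1.0, [s, -s]), (3.0, [s, s])]
lam = 0.5                      # lambda, not an eigenvalue

# G = sum_n |n><n| / (lambda_n - lambda), Eq. (10.99)
G = [[sum(v[i]*v[j]/(ln - lam) for ln, v in eigs) for j in range(2)]
     for i in range(2)]

# check: psi = G S solves (L - lambda) psi = S, Eq. (10.89)
L = [[2.0, 1.0], [1.0, 2.0]]
S = [1.0, -2.0]
psi = [sum(G[i][j]*S[j] for j in range(2)) for i in range(2)]
res = [sum((L[i][j] - (lam if i == j else 0.0))*psi[j] for j in range(2))
       for i in range(2)]
print(res)   # recovers S = [1.0, -2.0]
```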
Two properties of G follow immediately from the above:
1. From Eq. (10.99), since both λ and $\lambda_n$ are real, we see that G is Hermitian,
$$G^\dagger = G. \qquad (10.100)$$
2. From either of Eqs. (10.98) or (10.99) we see that G satisfies the operator
equation
$$(L - \lambda)G = I. \qquad (10.101)$$
The case of functions is the most important one. Then if we use Eq. (10.88),
the inhomogeneous equation (10.89) becomes
$$\langle\mathbf{r}|(L - \lambda)\int_V (d\mathbf{r}')\,|\mathbf{r}'\rangle\langle\mathbf{r}'|\psi\rangle = \langle\mathbf{r}|S\rangle. \qquad (10.102)$$
Suppose
$$\langle\mathbf{r}|L|\mathbf{r}'\rangle = \hat{L}\,\delta(\mathbf{r}-\mathbf{r}'), \qquad (10.103)$$
where $\hat{L}$ is a differential operator (the usual case), and let us further write
$$\langle\mathbf{r}'|\psi\rangle = \psi(\mathbf{r}'), \quad \langle\mathbf{r}|S\rangle = S(\mathbf{r}). \qquad (10.104)$$
Then the inhomogeneous equation (10.102) reads
$$(\hat{L} - \lambda)\psi(\mathbf{r}) = S(\mathbf{r}). \qquad (10.105)$$
The solution to Eq. (10.105) is given by applying $\langle\mathbf{r}|$ to Eq. (10.98):
$$\langle\mathbf{r}|\psi\rangle = \langle\mathbf{r}|G\int_V (d\mathbf{r}')\,|\mathbf{r}'\rangle\langle\mathbf{r}'|S\rangle, \qquad (10.106)$$
or
$$\psi(\mathbf{r}) = \int_V (d\mathbf{r}')\,G(\mathbf{r},\mathbf{r}')S(\mathbf{r}'), \qquad (10.107)$$
where we have written the Green's function as
$$G(\mathbf{r},\mathbf{r}') = \langle\mathbf{r}|G|\mathbf{r}'\rangle. \qquad (10.108)$$
The eigenfunction expansion of $G(\mathbf{r},\mathbf{r}')$ is
$$G(\mathbf{r},\mathbf{r}') = \sum_n \frac{\psi_n^*(\mathbf{r}')\psi_n(\mathbf{r})}{\lambda_n - \lambda}, \qquad (10.109)$$
where the eigenfunctions, satisfying Eq. (10.86), are $\psi_n(\mathbf{r}) = \langle\mathbf{r}|n\rangle$. Now the
properties of $G(\mathbf{r},\mathbf{r}')$ are:
1. The reciprocity relation:
$$G(\mathbf{r},\mathbf{r}') = G^*(\mathbf{r}',\mathbf{r}), \qquad (10.110)$$
which follows immediately from the eigenfunction expansion (10.109), or
from Eq. (10.100):
$$\langle\mathbf{r}|G^\dagger|\mathbf{r}'\rangle = \langle\mathbf{r}'|G|\mathbf{r}\rangle^* = \langle\mathbf{r}|G|\mathbf{r}'\rangle. \qquad (10.111)$$
2. The differential equation satisfied by the Green's function is
$$(\hat{L} - \lambda)G(\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}'), \qquad (10.112)$$
which follows from Eqs. (10.107), (10.109), or (10.101).
3. Now we have an additional property. If the $\psi_n(\mathbf{r})$ satisfy homogeneous
boundary conditions, for example, $\psi_n(\mathbf{r}) = 0$ on the surface of V, then $G(\mathbf{r},\mathbf{r}')$
satisfies the same conditions, for example it vanishes when $\mathbf{r}$ or $\mathbf{r}'$ lies on the
surface of V.
Note that the eigenfunction expansion of $G(\mathbf{r},\mathbf{r}')$,
$$G_\lambda(\mathbf{r},\mathbf{r}') = \sum_n \frac{\psi_n^*(\mathbf{r}')\psi_n(\mathbf{r})}{\lambda_n - \lambda}, \qquad (10.113)$$
where now the parameter λ has been made explicit in G, says that $G_\lambda$ has
simple poles at each of the eigenvalues $\lambda_n$, and that the residue of the pole of
$G_\lambda$ at λ = $\lambda_n$ is
$$\operatorname{Res} G_\lambda(\mathbf{r},\mathbf{r}')\Big|_{\lambda=\lambda_n} = -\psi_n^*(\mathbf{r}')\psi_n(\mathbf{r}). \qquad (10.114)$$
If the eigenvalue is degenerate, that is, if there is more than one eigenfunction
corresponding to a given eigenvalue, one obtains a sum over all the $\psi_n^*\psi_n$ corresponding
to $\lambda_n$.
Thus, if G may be determined by means other than an eigenfunction
expansion, such as by directly solving the differential equation (10.112), then from it
the eigenvalues and normalized eigenfunctions of $\hat{L}$ may be determined. We will
illustrate this eigenfunction decomposition in the next chapter.
Chapter 11
Green’s Functions
11.1 Onedimensional Helmholtz Equation
Suppose we have a string driven by an external force, periodic with frequency
ω. The differential equation (here f is some prescribed function)
$$\left(\frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)U(x,t) = f(x)\cos\omega t \qquad (11.1)$$
represents the oscillatory motion of the string, with amplitude U, which is tied
down at both ends (here l is the length of the string):
$$U(0,t) = U(l,t) = 0. \qquad (11.2)$$
We seek a solution of the form (thus we are ignoring transients)
$$U(x,t) = u(x)\cos\omega t, \qquad (11.3)$$
so u(x) satisfies
$$\left(\frac{d^2}{dx^2} + k^2\right)u(x) = f(x), \quad k = \omega/c. \qquad (11.4)$$
The solution to this inhomogeneous Helmholtz equation is expressed in terms of
the Green's function $G_k(x,x')$ as
$$u(x) = \int_0^l dx'\,G_k(x,x')f(x'), \qquad (11.5)$$
where the Green's function satisfies the differential equation
$$\left(\frac{d^2}{dx^2} + k^2\right)G_k(x,x') = \delta(x-x'). \qquad (11.6)$$
123 Version of December 3, 2011
As we saw in the previous chapter, the Green's function can be written down
in terms of the eigenfunctions of $d^2/dx^2$, with the specified boundary conditions,
$$\left(\frac{d^2}{dx^2} - \lambda_n\right)u_n(x) = 0, \qquad (11.7a)$$
$$u_n(0) = u_n(l) = 0. \qquad (11.7b)$$
The normalized solutions to these equations are
$$u_n(x) = \sqrt{\frac{2}{l}}\sin\frac{n\pi x}{l}, \quad \lambda_n = -\left(\frac{n\pi}{l}\right)^2, \quad n = 1, 2, \ldots. \qquad (11.8)$$
The factor $\sqrt{2/l}$ is a normalization factor. From the general theorem about
eigenfunctions of a Hermitian operator given in Sec. 10.5, we have
$$\frac{2}{l}\int_0^l dx\,\sin\frac{n\pi x}{l}\sin\frac{m\pi x}{l} = \delta_{nm}. \qquad (11.9)$$
Thus the Green's function for this problem is given by the eigenfunction expansion
$$G_k(x,x') = \sum_{n=1}^{\infty}\frac{\frac{2}{l}\sin\frac{n\pi x}{l}\sin\frac{n\pi x'}{l}}{k^2 - \left(\frac{n\pi}{l}\right)^2}. \qquad (11.10)$$
But this form is not usually very convenient for calculation.
Therefore we solve the differential equation (11.6) directly. When x ≠ x′ the
inhomogeneous term is zero. Since
$$G_k(0,x') = G_k(l,x') = 0, \qquad (11.11)$$
we must have
$$x < x': \quad G_k(x,x') = a(x')\sin kx, \qquad (11.12a)$$
$$x > x': \quad G_k(x,x') = b(x')\sin k(x-l). \qquad (11.12b)$$
We determine the unknown functions a and b by noting that the derivative of G
must have a discontinuity at x = x′, which follows from the differential equation
(11.6). Integrating that equation just over that discontinuity, we find
$$\int_{x'-\epsilon}^{x'+\epsilon} dx\left(\frac{d^2}{dx^2} + k^2\right)G_k(x,x') = 1, \qquad (11.13)$$
or
$$\frac{d}{dx}G_k(x,x')\bigg|_{x=x'-\epsilon}^{x=x'+\epsilon} = 1, \qquad (11.14)$$
because $2\epsilon\,G_k(x',x') \to 0$ as $\epsilon \to 0$. Although $\frac{d}{dx}G_k(x,x')$ is discontinuous at
x = x′, G(x, x′) is continuous there:
$$\begin{aligned}
G(x'+\epsilon,x') - G(x'-\epsilon,x') &= \int_{x'-\epsilon}^{x'+\epsilon} dx\,\frac{d}{dx}G(x,x') \\
&= \int_{x'-\epsilon}^{x'} dx\,\frac{d}{dx}G(x,x') + \int_{x'}^{x'+\epsilon} dx\,\frac{d}{dx}G(x,x') \\
&= \epsilon\left[\frac{d}{dx}G(x,x')\bigg|_{x=x'-\xi} + \frac{d}{dx}G(x,x')\bigg|_{x=x'+\bar\xi}\right],
\end{aligned} \qquad (11.15)$$
where, by the mean value theorem, $0 < \xi \le \epsilon$ and $0 < \bar\xi \le \epsilon$. Therefore
$$G(x,x')\Big|_{x=x'-\epsilon}^{x=x'+\epsilon} = O(\epsilon) \to 0 \quad \text{as } \epsilon \to 0. \qquad (11.16)$$
Now using the continuity of G and the discontinuity of G′, we find two
equations for the coefficient functions a and b:
$$a(x')\sin kx' = b(x')\sin k(x'-l), \qquad (11.17a)$$
$$a(x')k\cos kx' + 1 = b(x')k\cos k(x'-l). \qquad (11.17b)$$
It is easy to solve for a and b. The determinant of the coefficient matrix is
$$D = \begin{vmatrix} \sin kx' & -\sin k(x'-l) \\ k\cos kx' & -k\cos k(x'-l) \end{vmatrix} = -k\sin kl, \qquad (11.18)$$
independent of x′. Then the solutions are
$$a(x') = \frac{1}{D}\begin{vmatrix} 0 & -\sin k(x'-l) \\ -1 & -k\cos k(x'-l) \end{vmatrix} = \frac{\sin k(x'-l)}{k\sin kl}, \qquad (11.19a)$$
$$b(x') = \frac{1}{D}\begin{vmatrix} \sin kx' & 0 \\ k\cos kx' & -1 \end{vmatrix} = \frac{\sin kx'}{k\sin kl}. \qquad (11.19b)$$
Thus we find a closed form for the Green's function in the two regions:
$$x < x': \quad G_k(x,x') = \frac{\sin k(x'-l)\sin kx}{k\sin kl}, \qquad (11.20a)$$
$$x > x': \quad G_k(x,x') = \frac{\sin kx'\sin k(x-l)}{k\sin kl}, \qquad (11.20b)$$
or, compactly,
$$G_k(x,x') = \frac{1}{k\sin kl}\sin kx_<\,\sin k(x_>-l), \qquad (11.21)$$
where we have introduced the notation:
$x_<$ is the lesser of x, x′; $x_>$ is the greater of x, x′. (11.22)
Note that $G_k(x,x') = G_k(x',x)$, as is demanded on general grounds, as a consequence
of the reciprocity relation (10.110).
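The closed form (11.21) can be checked numerically. This sketch (with assumed values of l, k, and x′ chosen for illustration) verifies the boundary conditions (11.11) and the unit jump (11.14) in the derivative at x = x′, using one-sided finite differences:

```python
import math

l, k, xp = 1.0, 2.3, 0.4      # assumed illustrative parameters (sin kl != 0)

def G(x):
    # closed form (11.21): G = sin(k x_<) sin(k (x_> - l)) / (k sin kl)
    xl, xg = min(x, xp), max(x, xp)
    return math.sin(k*xl)*math.sin(k*(xg - l))/(k*math.sin(k*l))

h = 1e-6
# one-sided derivatives just above and just below x', then their difference
jump = (G(xp + 2*h) - G(xp + h))/h - (G(xp - h) - G(xp - 2*h))/h
print(G(0.0), G(l), jump)     # 0, 0, and approximately 1
```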
Let us analyze the analytic structure of $G_k(x,x')$ as a function of k. We see
that simple poles occur where
$$kl = n\pi, \quad n = \pm 1, \pm 2, \ldots. \qquad (11.23)$$
There is no pole at k = 0. For k near nπ/l, we have
$$\sin kl = \sin n\pi + (kl - n\pi)\cos n\pi + \ldots = (kl - n\pi)(-1)^n. \qquad (11.24)$$
If we simply sum over all the poles of $G_k$, we obtain
$$\begin{aligned}
G_k(x,x') &= \sum_{\substack{n=-\infty\\ n\ne 0}}^{\infty} (-1)^n\,\frac{\sin\frac{n\pi x_<}{l}\,\sin\frac{n\pi}{l}(x_> - l)}{\frac{n\pi}{l}(kl - n\pi)} \\
&= \sum_{\substack{n=-\infty\\ n\ne 0}}^{\infty} \frac{\sin\frac{n\pi x}{l}\,\sin\frac{n\pi x'}{l}}{n\pi\left(k - \frac{n\pi}{l}\right)} \\
&= \sum_{n=1}^{\infty} \sin\frac{n\pi x}{l}\,\sin\frac{n\pi x'}{l}\,\frac{1}{n\pi}\left(\frac{1}{k - \frac{n\pi}{l}} - \frac{1}{k + \frac{n\pi}{l}}\right) \\
&= \sum_{n=1}^{\infty} \frac{2}{l}\,\sin\frac{n\pi x}{l}\,\sin\frac{n\pi x'}{l}\,\frac{1}{k^2 - \left(\frac{n\pi}{l}\right)^2}.
\end{aligned} \qquad (11.25)$$
This is in fact equal to $G_k$ as given by the eigenfunction expansion (11.10),
because the difference is an entire function vanishing at infinity, which must be
zero by Liouville's theorem; see Sec. 6.5.
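A numerical sketch comparing the closed form (11.21) with the truncated eigenfunction expansion (11.10); the parameter values are assumptions for illustration, and the series converges only algebraically, so a fairly large N is used.

```python
import math

l, k, x, xp = 1.0, 1.0, 0.3, 0.7    # assumed illustrative parameters

# closed form (11.21)
closed = math.sin(k*min(x, xp))*math.sin(k*(max(x, xp) - l))/(k*math.sin(k*l))

def series(N):
    # truncated eigenfunction expansion (11.10)
    return sum((2.0/l)*math.sin(n*math.pi*x/l)*math.sin(n*math.pi*xp/l)
               / (k**2 - (n*math.pi/l)**2) for n in range(1, N + 1))

print(closed, series(2000))   # the two agree as N grows
```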
11.2 Types of Boundary Conditions
Three types of second-order, homogeneous differential equations are commonly
encountered in physics (the dimensionality of space is not important):
$$\text{Hyperbolic:}\quad \left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)u(\mathbf{r},t) = 0, \qquad (11.26a)$$
$$\text{Elliptic:}\quad \left(\nabla^2 + k^2\right)u(\mathbf{r}) = 0, \qquad (11.26b)$$
$$\text{Parabolic:}\quad \left(\nabla^2 - \frac{1}{\kappa}\frac{\partial}{\partial t}\right)T(\mathbf{r},t) = 0. \qquad (11.26c)$$
The first of these equations is the wave equation, the second is the Helmholtz
equation, which includes Laplace's equation as a special case (k = 0), and the
third is the diffusion equation. The types of boundary conditions, specified
on which kind of boundaries, necessary to uniquely specify a solution to these
equations are given in Table 11.1. Here by Cauchy boundary conditions we
mean that both the function u and its normal derivative ∂u/∂n are specified on
the boundary. Here
$$\frac{\partial u}{\partial n} = \hat{\mathbf{n}}\cdot\nabla u, \qquad (11.27)$$
where $\hat{\mathbf{n}}$ is a(n outwardly directed) normal vector to the surface. As we have
seen previously, Dirichlet boundary conditions refer to specifying the function
u on the surface, Neumann boundary conditions refer to specifying the normal
derivative ∂u/∂n on the surface, and mixed boundary conditions refer to
specifying a linear combination, αu + β∂u/∂n, on the surface. If the specified
boundary values are zero, we say that the boundary conditions are homogeneous;
otherwise, they are inhomogeneous.

Type of Equation    Type of Boundary Condition       Type of Boundary
Hyperbolic          Cauchy                           Open
Elliptic            Dirichlet, Neumann, or mixed     Closed
Parabolic           Dirichlet, Neumann, or mixed     Open

Table 11.1: Boundary conditions required for the three types of second-order
differential equations. The boundary conditions referred to in the first and third
cases are actually initial conditions.
Example.
To determine the vibrations of a string, described by
$$\left(\frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)u = 0, \qquad (11.28)$$
we must specify
$$u(x,0), \quad \frac{\partial u}{\partial t}(x,0) \qquad (11.29)$$
at some initial time (t = 0). The line t = 0 is an open surface in the (ct, x)
plane.
11.3 Expression of Field in Terms of Green's Function
Typically, one determines the eigenfunctions of a differential operator subject
to homogeneous boundary conditions. That means that the Green's functions
obey the same conditions; see Sec. 10.8. But suppose we seek a solution of
$$(L - \lambda)\psi = S \qquad (11.30)$$
subject to inhomogeneous boundary conditions. It cannot then be true that
$$\psi(\mathbf{r}) = \int_V (d\mathbf{r}')\,G(\mathbf{r},\mathbf{r}')S(\mathbf{r}'). \qquad (11.31)$$
To see how to deal with this situation, let us consider the example of the
three-dimensional Helmholtz equation,
$$(\nabla^2 + k^2)\psi(\mathbf{r}) = S(\mathbf{r}). \qquad (11.32)$$
We seek the solution ψ(r) subject to arbitrary inhomogeneous Dirichlet, Neumann,
or mixed boundary conditions on a surface Σ enclosing the volume V of
interest. The Green's function G for this problem satisfies
$$(\nabla^2 + k^2)G(\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}'), \qquad (11.33)$$
subject to homogeneous boundary conditions of the same type as ψ satisfies.
Now multiply Eq. (11.32) by G, Eq. (11.33) by ψ, subtract, and integrate over
the appropriate variables:
$$\begin{aligned}
&\int_V (d\mathbf{r}')\left[G(\mathbf{r},\mathbf{r}')(\nabla'^2+k^2)\psi(\mathbf{r}') - \psi(\mathbf{r}')(\nabla'^2+k^2)G(\mathbf{r},\mathbf{r}')\right] \\
&\qquad = \int_V (d\mathbf{r}')\left[G(\mathbf{r},\mathbf{r}')S(\mathbf{r}') - \psi(\mathbf{r}')\delta(\mathbf{r}-\mathbf{r}')\right].
\end{aligned} \qquad (11.34)$$
Here we have interchanged r and r′ in Eqs. (11.32) and (11.33), and have used
the reciprocity relation,
$$G(\mathbf{r},\mathbf{r}') = G(\mathbf{r}',\mathbf{r}). \qquad (11.35)$$
(We have assumed that the eigenfunctions, and hence the Green's function, are
real.) Now we use Green's theorem to establish
$$-\oint_\Sigma d\boldsymbol{\sigma}\cdot\left[G(\mathbf{r},\mathbf{r}')\nabla'\psi(\mathbf{r}') - \psi(\mathbf{r}')\nabla'G(\mathbf{r},\mathbf{r}')\right] + \int_V (d\mathbf{r}')\,G(\mathbf{r},\mathbf{r}')S(\mathbf{r}') = \begin{cases} \psi(\mathbf{r}), & \mathbf{r}\in V, \\ 0, & \mathbf{r}\notin V, \end{cases} \qquad (11.36)$$
where in the surface integral dσ is the outwardly directed surface element, and
r′ lies on the surface Σ. This generalizes the simple relation given in Eq. (11.31).
How do we use this result? We always suppose G satisfies homogeneous
boundary conditions on Σ. If ψ satisfies the same conditions, then for r ∈ V
Eq. (11.31) holds. But suppose ψ satisfies inhomogeneous Dirichlet boundary
conditions on Σ,
$$\psi(\mathbf{r}')\Big|_{\mathbf{r}'\in\Sigma} = \psi_0(\mathbf{r}'), \qquad (11.37)$$
a specified function on the surface. Then we impose homogeneous Dirichlet
conditions on G,
$$G(\mathbf{r},\mathbf{r}')\Big|_{\mathbf{r}'\in\Sigma} = 0. \qquad (11.38)$$
Then the first surface term in Eq. (11.36) is zero, but the second contributes.
For example, if S(r) = 0 inside V, we have for r ∈ V
$$\psi(\mathbf{r}) = \oint_\Sigma d\boldsymbol{\sigma}\cdot\left[\nabla'G(\mathbf{r},\mathbf{r}')\right]\psi_0(\mathbf{r}'), \qquad (11.39)$$
which expresses ψ in terms of its boundary values.
If ψ satisfies inhomogeneous Neumann conditions on Σ,
$$\frac{\partial\psi}{\partial n'}(\mathbf{r}')\bigg|_{\mathbf{r}'\in\Sigma} = N(\mathbf{r}'), \qquad (11.40)$$
a specified function, then we use the Green's function which respects homogeneous
Neumann conditions,
$$\frac{\partial}{\partial n'}G(\mathbf{r},\mathbf{r}')\bigg|_{\mathbf{r}'\in\Sigma} = 0, \qquad (11.41)$$
so again, if S = 0 inside V, we have within V
$$\psi(\mathbf{r}) = -\oint_\Sigma d\sigma\,G(\mathbf{r},\mathbf{r}')N(\mathbf{r}'). \qquad (11.42)$$
Finally, if ψ satisfies inhomogeneous mixed boundary conditions,
$$\left[\frac{\partial}{\partial n'}\psi(\mathbf{r}') + \alpha(\mathbf{r}')\psi(\mathbf{r}')\right]\bigg|_{\mathbf{r}'\in\Sigma} = F(\mathbf{r}'), \qquad (11.43)$$
then, when G satisfies homogeneous boundary conditions of the same type,
$$\left[\frac{\partial}{\partial n'} + \alpha(\mathbf{r}')\right]G(\mathbf{r},\mathbf{r}')\bigg|_{\mathbf{r}'\in\Sigma} = 0, \qquad (11.44)$$
we have for r ∈ V
$$\psi(\mathbf{r}) = \int_V (d\mathbf{r}')\,G(\mathbf{r},\mathbf{r}')S(\mathbf{r}') - \oint_\Sigma d\sigma\,G(\mathbf{r},\mathbf{r}')F(\mathbf{r}'). \qquad (11.45)$$
11.4 Helmholtz Equation Inside a Sphere
Here we wish to find the Green's function for Helmholtz's equation, which satisfies
$$(\nabla^2 + k^2)G_k(\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}'), \qquad (11.46)$$
in the interior of a spherical region of radius a, with homogeneous Dirichlet
boundary conditions on the surface,
$$G_k(\mathbf{r},\mathbf{r}')\Big|_{r=a} = 0. \qquad (11.47)$$
We will use two methods.
11.4.1 Eigenfunction Method
We know that the eigenfunctions of the Laplacian are
$$j_l(kr)\,Y_l^m(\theta,\phi), \qquad (11.48)$$
in spherical polar coordinates r, θ, φ; that is,
$$(\nabla^2 + k^2)\,j_l(kr)\,Y_l^m(\theta,\phi) = 0. \qquad (11.49)$$
Here $j_l$ is the spherical Bessel function,
$$j_l(x) = \sqrt{\frac{\pi}{2x}}\,J_{l+1/2}(x), \qquad (11.50)$$
and the $Y_l^m$ are the spherical harmonics,
$$Y_l^m(\theta,\phi) = \left[\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}\right]^{1/2} P_l^m(\cos\theta)\,e^{im\phi}, \qquad (11.51)$$
where $P_l^m$ is the associated Legendre function. Here l is a nonnegative integer,
and m is an integer in the range −l ≤ m ≤ l. For example, the first few
spherical Bessel functions (which are simpler than the cylinder functions, the
Bessel functions of integer order) are
$$j_0(x) = \frac{\sin x}{x}, \qquad (11.52a)$$
$$j_1(x) = \frac{\sin x}{x^2} - \frac{\cos x}{x}, \qquad (11.52b)$$
$$j_2(x) = \left(\frac{3}{x^3} - \frac{1}{x}\right)\sin x - \frac{3}{x^2}\cos x, \qquad (11.52c)$$
and in general
$$j_l(x) = x^l\left(-\frac{1}{x}\frac{d}{dx}\right)^l\frac{\sin x}{x}. \qquad (11.53)$$
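The explicit forms (11.52a)–(11.52c) can be cross-checked against the standard spherical Bessel recurrence $j_{l+1}(x) = \frac{2l+1}{x}\,j_l(x) - j_{l-1}(x)$, which is quoted here as an assumed known identity (it is not derived in the text):

```python
import math

# Explicit forms (11.52a)-(11.52c)
def j0(x): return math.sin(x)/x
def j1(x): return math.sin(x)/x**2 - math.cos(x)/x
def j2(x): return (3/x**3 - 1/x)*math.sin(x) - 3/x**2*math.cos(x)

x = 1.7   # assumed sample point
# recurrence j_2 = (3/x) j_1 - j_0
print(j2(x), 3/x*j1(x) - j0(x))   # the two expressions agree
```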
The associated Legendre function is given by
$$P_l^m(\cos\theta) = (-1)^m\sin^m\theta\left(\frac{d}{d\cos\theta}\right)^{l+m}\frac{(\cos^2\theta - 1)^l}{2^l\,l!}. \qquad (11.54)$$
For example, the first few spherical harmonics are
$$Y_0^0 = \frac{1}{\sqrt{4\pi}}, \qquad (11.55a)$$
$$Y_1^1 = -\sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{i\phi}, \qquad (11.55b)$$
$$Y_1^0 = \sqrt{\frac{3}{4\pi}}\,\cos\theta, \qquad (11.55c)$$
$$Y_1^{-1} = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{-i\phi}, \qquad (11.55d)$$
$$Y_2^2 = \sqrt{\frac{15}{32\pi}}\,\sin^2\theta\,e^{2i\phi}, \qquad (11.55e)$$
$$Y_2^1 = -\sqrt{\frac{15}{8\pi}}\,\cos\theta\sin\theta\,e^{i\phi}, \qquad (11.55f)$$
$$Y_2^0 = \sqrt{\frac{5}{16\pi}}\,(3\cos^2\theta - 1), \qquad (11.55g)$$
$$Y_2^{-1} = \sqrt{\frac{15}{8\pi}}\,\cos\theta\sin\theta\,e^{-i\phi}, \qquad (11.55h)$$
$$Y_2^{-2} = \sqrt{\frac{15}{32\pi}}\,\sin^2\theta\,e^{-2i\phi}. \qquad (11.55i)$$
The eigenfunctions must vanish at r = a, so if $\beta_{ln}$ is the nth zero of $j_l$,
$$j_l(\beta_{ln}) = 0, \quad n = 1, 2, 3, \ldots, \qquad (11.56)$$
the desired eigenfunctions are
$$\psi_{nlm}(r,\theta,\phi) = A_{nl}\,j_l\!\left(\frac{\beta_{ln}r}{a}\right)Y_l^m(\theta,\phi), \qquad (11.57)$$
and the eigenvalues are
$$\lambda_{ln} = -k_{ln}^2 = -\left(\frac{\beta_{ln}}{a}\right)^2. \qquad (11.58)$$
The normalization constant $A_{nl}$ is determined by the requirement that
$$\int r^2\,dr\,d\Omega\,\left|\psi_{nlm}(r,\theta,\phi)\right|^2 = 1, \qquad (11.59)$$
where dΩ = sin θ dθ dφ is the element of solid angle. Since the spherical harmonics
are normalized so that [Ω = (θ, φ) represents a point on the unit sphere]
$$\int d\Omega\,Y_{l'}^{m'*}(\Omega)Y_l^m(\Omega) = \delta_{ll'}\delta_{mm'}, \qquad (11.60)$$
the normalization constant is determined by the requirement
$$|A_{nl}|^2\int_0^a r^2\,dr\left[j_l\!\left(\frac{\beta_{ln}r}{a}\right)\right]^2 = 1. \qquad (11.61)$$
Now
$$\int_0^a r^2\,dr\,j_l(\beta_{ln}r/a)\,j_l(\beta_{lm}r/a) = \delta_{nm}\,\frac{1}{2}a^3 j_{l+1}^2(\beta_{ln}), \qquad (11.62)$$
which for n ≠ m follows from the orthogonality property (10.68). So
$$|A_{nl}| = \sqrt{\frac{2}{a^3}}\,\frac{1}{j_{l+1}(\beta_{ln})}, \qquad (11.63)$$
and the Green's function has the eigenfunction expansion
$$G_k(\mathbf{r},\mathbf{r}') = \sum_{nlm}\frac{2}{a^3}\,\frac{1}{j_{l+1}^2(\beta_{ln})}\,\frac{Y_l^m(\Omega)Y_l^{m*}(\Omega')\,j_l(\beta_{ln}r/a)\,j_l(\beta_{ln}r'/a)}{k^2 - (\beta_{ln}/a)^2}, \qquad (11.64)$$
where Ω = (θ, φ), Ω′ = (θ′, φ′).
This result can be simplified by carrying out the sum on m, using the addition
theorem for spherical harmonics,
$$\frac{4\pi}{2l+1}\sum_{m=-l}^{l} Y_l^{m*}(\Omega')Y_l^m(\Omega) = P_l(\cos\gamma), \qquad (11.65)$$
where $P_l(\cos\gamma) = P_l^0(\cos\gamma)$ is Legendre's polynomial, and γ is the angle between
the directions represented by Ω and Ω′, or
$$\cos\gamma = \cos\theta\cos\theta' + \sin\theta\sin\theta'\cos(\phi - \phi'). \qquad (11.66)$$
Then we obtain
$$G_k(\mathbf{r},\mathbf{r}') = \frac{2}{a^3}\sum_{nl}\frac{2l+1}{4\pi}P_l(\cos\gamma)\,\frac{1}{j_{l+1}^2(\beta_{ln})}\,\frac{j_l(\beta_{ln}r/a)\,j_l(\beta_{ln}r'/a)}{k^2 - (\beta_{ln}/a)^2}. \qquad (11.67)$$
This leads us to the second method.
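A numerical sketch verifying the addition theorem (11.65) for l = 1, built from the explicit harmonics (11.55b)–(11.55d); the sample angles are assumptions for illustration.

```python
import math, cmath

# Spherical harmonics Y_1^m from Eqs. (11.55b)-(11.55d)
def Y1(m, th, ph):
    if m == 1:
        return -math.sqrt(3/(8*math.pi))*math.sin(th)*cmath.exp(1j*ph)
    if m == 0:
        return math.sqrt(3/(4*math.pi))*math.cos(th) + 0j
    if m == -1:
        return math.sqrt(3/(8*math.pi))*math.sin(th)*cmath.exp(-1j*ph)

th, ph, thp, php = 0.7, 1.1, 2.0, -0.4    # assumed sample angles
# left-hand side of (11.65) for l = 1
lhs = (4*math.pi/3)*sum(Y1(m, thp, php).conjugate()*Y1(m, th, ph)
                        for m in (-1, 0, 1))
# cos(gamma) from Eq. (11.66)
cosg = math.cos(th)*math.cos(thp) + math.sin(th)*math.sin(thp)*math.cos(ph - php)
print(lhs.real, cosg)   # both equal cos(gamma)
```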
11.4.2 Discontinuity (Direct) Method
Let us adopt the angular dependence found above:
$$G_k(\mathbf{r},\mathbf{r}') = \sum_{l=0}^{\infty}\frac{2l+1}{4\pi}P_l(\cos\gamma)\,g_l(r,r'), \qquad (11.68)$$
where we will call $g_l$ the reduced Green's function. Because $Y_l^m$ is an eigenfunction
of the angular part of the Laplacian operator,
$$\nabla^2 Y_l^m(\Omega) = -\frac{l(l+1)}{r^2}Y_l^m(\Omega), \qquad (11.69)$$
and the delta function can be written as
$$\delta(\mathbf{r}-\mathbf{r}') = \frac{1}{rr'}\,\delta(r-r')\,\delta(\Omega-\Omega'), \qquad (11.70)$$
we see that, because of the orthonormality of the spherical harmonics, Eq. (11.60),
the Green's function equation (11.46) corresponds to the following equation
satisfied by the reduced Green's function, the inhomogeneous "spherical Bessel
equation":
$$\left[\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} - \frac{l(l+1)}{r^2} + k^2\right]g_l(r,r') = \frac{1}{rr'}\,\delta(r-r'). \qquad (11.71)$$
We solve this equation directly. For 0 < r′ < a,
$$0 \le r < r': \quad g_l(r,r') = a(r')\,j_l(kr), \qquad (11.72a)$$
$$r' < r \le a: \quad g_l(r,r') = b(r')\,j_l(kr) + c(r')\,n_l(kr). \qquad (11.72b)$$
Only $j_l$ appears in the first form because the solution must be finite at r = 0,
while the second solution to the spherical Bessel equation,
$$n_l(x) = \sqrt{\frac{\pi}{2x}}\,N_{l+1/2}(x), \qquad (11.73)$$
where $N_\nu$ is the Neumann function, is singular at x = 0. For example,
$$n_0(x) = -\frac{\cos x}{x}, \qquad (11.74)$$
and in general
$$n_l(x) = -x^l\left(-\frac{1}{x}\frac{d}{dx}\right)^l\frac{\cos x}{x}. \qquad (11.75)$$
To determine the functions a, b, and c, we proceed as follows. The boundary
condition at r = a, $g_l(a,r') = 0$, implies
$$0 = b(r')\,j_l(ka) + c(r')\,n_l(ka), \qquad (11.76)$$
or
$$\frac{b(r')}{c(r')} = -\frac{n_l(ka)}{j_l(ka)}. \qquad (11.77)$$
Thus we can write, in the outer region,
$$a \ge r > r': \quad g_l(r,r') = A(r')\left[j_l(kr)\,n_l(ka) - n_l(kr)\,j_l(ka)\right]. \qquad (11.78)$$
The next condition we impose is that of the continuity of $g_l$ at r = r′:
$$a(r')\,j_l(kr') = A(r')\left[j_l(kr')\,n_l(ka) - n_l(kr')\,j_l(ka)\right]. \qquad (11.79)$$
On the other hand, the derivative of $g_l$ is discontinuous at r = r′, as we may
see by integrating Eq. (11.71) over a tiny interval around r = r′:
$$\frac{d}{dr}g_l(r,r')\bigg|_{r=r'-\epsilon}^{r=r'+\epsilon} = \frac{1}{r'^2}, \qquad (11.80)$$
which implies
$$k\,a(r')\,j_l'(kr') - k\,A(r')\left[j_l'(kr')\,n_l(ka) - n_l'(kr')\,j_l(ka)\right] = -\frac{1}{r'^2}. \qquad (11.81)$$
Now multiply Eq. (11.79) by $k\,j_l'(kr')$, and Eq. (11.81) by $j_l(kr')$, and subtract:
$$\frac{j_l(kr')}{r'^2} = -k\,A(r')\,j_l(ka)\left[j_l(kr')\,n_l'(kr') - n_l(kr')\,j_l'(kr')\right]. \qquad (11.82)$$
Now $j_l$, $n_l$ are the independent solutions of the spherical Bessel equation
$$\left[\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d}{dr}\right) - \frac{l(l+1)}{r^2} + k^2\right]u = 0, \qquad (11.83)$$
the Wronskian of which,
$$\Delta(r) \equiv j_l(kr)\,n_l'(kr) - n_l(kr)\,j_l'(kr), \qquad (11.84)$$
has the form
$$\Delta(r) = \frac{\text{const.}}{r^2}, \qquad (11.85)$$
as we saw in Problem 4 of Assignment 8. We can determine the constant by
considering the asymptotic forms of $j_l$, $n_l$:
$$j_l(kr) \sim \frac{\sin(kr - l\pi/2)}{kr}, \quad kr \gg 1, \qquad (11.86a)$$
$$n_l(kr) \sim -\frac{\cos(kr - l\pi/2)}{kr}, \quad kr \gg 1, \qquad (11.86b)$$
which imply
$$\Delta(r) = \frac{1}{k^2r^2}\left[\sin^2(kr - l\pi/2) + \cos^2(kr - l\pi/2)\right] = \frac{1}{(kr)^2}. \qquad (11.87)$$
Thus, since the right-hand side of Eq. (11.82) is proportional to the Wronskian,
we find the function A:
$$A(r') = -k\,\frac{j_l(kr')}{j_l(ka)}, \qquad (11.88)$$
and then from Eq. (11.79) we find the function a:
$$a(r') = -\frac{k}{j_l(ka)}\left[j_l(kr')\,n_l(ka) - n_l(kr')\,j_l(ka)\right]. \qquad (11.89)$$
Hence the Green's function is explicitly
$$r < r':\quad g_l(r,r') = -k\,\frac{j_l(kr)}{j_l(ka)}\,[\,j_l(kr')\,n_l(ka) - n_l(kr')\,j_l(ka)\,],\qquad(11.90a)$$
$$r > r':\quad g_l(r,r') = -k\,\frac{j_l(kr')}{j_l(ka)}\,[\,j_l(kr)\,n_l(ka) - n_l(kr)\,j_l(ka)\,],\qquad(11.90b)$$
or
$$g_l(r,r') = -k\,j_l(kr_<)\,j_l(kr_>)\left[\frac{n_l(ka)}{j_l(ka)} - \frac{n_l(kr_>)}{j_l(kr_>)}\right],\qquad(11.91)$$
where $r_<$ is the lesser of $r$, $r'$, and $r_>$ is the greater.
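The closed form (11.90)–(11.91) can be checked directly (the values $l=2$, $k=3.1$, $a=1$, $r'=0.6$ below are arbitrary test choices): the Green's function vanishes at $r=a$, is continuous at $r=r'$, and its radial derivative jumps by $1/r'^2$ there, as Eq. (11.80) requires.

```python
from scipy.special import spherical_jn, spherical_yn

# Closed form (11.91); l, k, a, rp are arbitrary test values.
l, k, a, rp = 2, 3.1, 1.0, 0.6
j  = lambda x: spherical_jn(l, x)
y  = lambda x: spherical_yn(l, x)
jp = lambda x: spherical_jn(l, x, derivative=True)
yp = lambda x: spherical_yn(l, x, derivative=True)

def gl(r, rprime):
    rl, rg = min(r, rprime), max(r, rprime)     # r_<, r_>
    return -k * j(k*rl) * j(k*rg) * (y(k*a)/j(k*a) - y(k*rg)/j(k*rg))

assert abs(gl(a, rp)) < 1e-12          # Dirichlet condition g_l(a, r') = 0
eps = 1e-6
assert abs(gl(rp + eps, rp) - gl(rp - eps, rp)) < 1e-4   # continuity at r = r'

# Radial derivatives of (11.90a,b) at r = r'; the jump is 1/r'^2, Eq. (11.80).
dg_in  = -k*k * jp(k*rp) * (j(k*rp)*y(k*a) - y(k*rp)*j(k*a)) / j(k*a)
dg_out = -k*k * j(k*rp) * (jp(k*rp)*y(k*a) - yp(k*rp)*j(k*a)) / j(k*a)
assert abs((dg_out - dg_in) - 1.0/rp**2) < 1e-10
```

The jump works out to $k^2\Delta = k^2/(kr')^2 = 1/r'^2$, exactly the Wronskian mechanism used in (11.82).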
From this closed form we may extract the eigenvalues and eigenfunctions of the spherical Bessel differential operator appearing in Eq. (11.83). The poles of $g_l$ occur where $j_l(ka)$ has zeroes, all of which are real, at $ka = \beta_{ln}$, the $n$th zero of $j_l$, or
$$k^2 = \left(\frac{\beta_{ln}}{a}\right)^2.\qquad(11.92)$$
In the neighborhood of this zero,
$$j_l(ka) = (ka - \beta_{ln})\,j_l'(\beta_{ln}).\qquad(11.93)$$
But at the zero the Wronskian is
$$\frac{1}{(\beta_{ln})^2} = -n_l(\beta_{ln})\,j_l'(\beta_{ln}).\qquad(11.94)$$
Now from the recursion relation
$$J_\lambda'(z) = \frac{\lambda}{z}J_\lambda(z) - J_{\lambda+1}(z),\qquad(11.95)$$
we see that the derivative of the spherical Bessel function (11.50) satisfies, at the zero,
$$j_l'(\beta_{ln}) = -j_{l+1}(\beta_{ln}).\qquad(11.96)$$
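Both zero identities can be verified numerically at, say, the first zero of $j_2$ (near $5.76$; the bracket $[5,7]$ is chosen by inspection):

```python
from scipy.optimize import brentq
from scipy.special import spherical_jn, spherical_yn

# Locate the first zero of j_2 (beta ~ 5.76) and check (11.94) and (11.96).
l = 2
beta = brentq(lambda t: spherical_jn(l, t), 5.0, 7.0)

jp = spherical_jn(l, beta, derivative=True)
assert abs(1.0/beta**2 + spherical_yn(l, beta) * jp) < 1e-10   # Eq. (11.94)
assert abs(jp + spherical_jn(l + 1, beta)) < 1e-10             # Eq. (11.96)
```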
Thus the residue of the pole of $g_l$ at $k = \beta_{ln}/a$ is
$$\frac{1}{a^2\beta_{ln}}\,\frac{j_l(\beta_{ln}r_</a)\,j_l(\beta_{ln}r_>/a)}{j_{l+1}^2(\beta_{ln})}.\qquad(11.97)$$
Now $j_l$ is an even or odd function of $z$ depending on whether $l$ is even or odd. So if $\beta_{ln}$ is a zero of $j_l$, so is $-\beta_{ln}$, and hence if we add the contributions of these two poles, we get the corresponding contribution to $g_l$:
$$g_l(r,r') \sim \frac{1}{a^2\beta_{ln}}\,\frac{j_l(\beta_{ln}r/a)\,j_l(\beta_{ln}r'/a)}{[j_{l+1}(\beta_{ln})]^2}\left[\frac{1}{k - \beta_{ln}/a} - \frac{1}{k + \beta_{ln}/a}\right].\qquad(11.98)$$
Summing up the contribution of all such pairs of poles, we obtain
$$g_l(r,r') = \frac{2}{a^3}\sum_{n=1}^\infty \frac{j_l(\beta_{ln}r/a)\,j_l(\beta_{ln}r'/a)}{[j_{l+1}(\beta_{ln})]^2}\,\frac{1}{k^2 - (\beta_{ln}/a)^2},\qquad(11.99)$$
which is the eigenfunction expansion displayed in Eq. (11.67).
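As a cross-check (with arbitrarily chosen parameters $l=1$, $a=1$, $k=1.3$), the eigenfunction series (11.99) can be summed numerically and compared with the closed form (11.91); the zeros $\beta_{ln}$ are found by scanning $j_l$ for sign changes and refining with a root finder.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn, spherical_yn

# Compare the eigenfunction expansion (11.99) with the closed form (11.91).
l, a, k, r, rp = 1, 1.0, 1.3, 0.4, 0.75
jl = lambda x: spherical_jn(l, x)

# Zeros of j_l up to 4000, by bracketing sign changes on a fine grid.
xs = np.linspace(0.5, 4000.0, 400000)
vals = jl(xs)
betas = [brentq(jl, xs[i], xs[i+1])
         for i in np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]]

series = sum(2.0/a**3 * jl(b*r/a) * jl(b*rp/a)
             / spherical_jn(l+1, b)**2 / (k**2 - (b/a)**2)
             for b in betas)

r_l, r_g = min(r, rp), max(r, rp)
closed = -k * jl(k*r_l) * jl(k*r_g) * (spherical_yn(l, k*a)/jl(k*a)
                                       - spherical_yn(l, k*r_g)/jl(k*r_g))
assert abs(series - closed) < 1e-3
```

The partial sums converge like $1/\beta_{ln}^2$, so a few hundred zeros already reproduce the closed form to the tolerance asserted above.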
11.5 Helmholtz Equation in Unbounded Space

Again we are solving the equation
$$(\nabla^2 + k^2)\,G_k(\mathbf{r},\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}'),\qquad(11.100)$$
but now in unbounded space. The solution to this equation is an outgoing spherical wave:
$$G_k(\mathbf{r},\mathbf{r}') = G_k(\mathbf{r}-\mathbf{r}') = -\frac{1}{4\pi}\frac{e^{ik|\mathbf{r}-\mathbf{r}'|}}{|\mathbf{r}-\mathbf{r}'|}.\qquad(11.101)$$
This may be directly verified. Consider a small sphere $S$, of radius $\epsilon$, centered on $\mathbf{r}'$:
$$\int_S (d\mathbf{r})\,(\nabla^2 + k^2)\,G_k(\mathbf{r}-\mathbf{r}') \approx \int_S (d\boldsymbol{\rho})\,\nabla_\rho^2\left(-\frac{1}{4\pi}\frac{e^{ik\rho}}{\rho}\right) = \int d\Omega\,\rho^2\,\frac{d}{d\rho}\left(-\frac{1}{4\pi}\frac{e^{ik\rho}}{\rho}\right)\bigg|_{\rho=\epsilon} \to 1,\qquad(11.102)$$
as $\epsilon \to 0$. Evidently, for $\mathbf{r} \ne \mathbf{r}'$, $G_k$ satisfies the Helmholtz equation, $(\nabla^2 + k^2)G_k = 0$.
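The second statement can also be checked crudely by finite differences at an arbitrary off-source point (the point and step size below are hypothetical test choices):

```python
import numpy as np

# Finite-difference check that G_k(R) = -e^{ikR}/(4 pi R), Eq. (11.101),
# satisfies the homogeneous Helmholtz equation away from r = r'.
k, h = 2.0, 1e-3
G = lambda x, y, z: -np.exp(1j*k*np.sqrt(x*x + y*y + z*z)) / (
        4*np.pi*np.sqrt(x*x + y*y + z*z))

x0, y0, z0 = 0.3, -0.5, 0.7          # an arbitrary point with r != r'
lap = (G(x0+h, y0, z0) + G(x0-h, y0, z0) + G(x0, y0+h, z0) + G(x0, y0-h, z0)
       + G(x0, y0, z0+h) + G(x0, y0, z0-h) - 6*G(x0, y0, z0)) / h**2

# (nabla^2 + k^2) G = 0 up to O(h^2) discretization error
assert abs(lap + k*k*G(x0, y0, z0)) < 1e-5
```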
Alternatively, we may construct $G_k$ from the eigenfunction expansion (10.109),
$$G_k(\mathbf{r}-\mathbf{r}') = \sum_n \frac{\psi_n^*(\mathbf{r}')\,\psi_n(\mathbf{r})}{\lambda_n - \lambda},\qquad(11.103)$$
where $\lambda = -k^2$, $\lambda_n = -k'^2$, and the eigenfunctions are solutions of
$$(\nabla^2 + k'^2)\,\psi_{\mathbf{k}'}(\mathbf{r}) = 0,\qquad(11.104)$$
that is, they are plane waves,
$$\psi_{\mathbf{k}'}(\mathbf{r}) = \frac{1}{(2\pi)^{3/2}}\,e^{i\mathbf{k}'\cdot\mathbf{r}}.\qquad(11.105)$$
Here the $(2\pi)^{-3/2}$ factor is for normalization:
$$\int (d\mathbf{k}')\,\psi_{\mathbf{k}'}(\mathbf{r})^*\,\psi_{\mathbf{k}'}(\mathbf{r}') = \delta(\mathbf{r}-\mathbf{r}'),\qquad(11.106a)$$
$$\int (d\mathbf{r})\,\psi_{\mathbf{k}'}(\mathbf{r})^*\,\psi_{\mathbf{k}}(\mathbf{r}) = \delta(\mathbf{k}-\mathbf{k}'),\qquad(11.106b)$$
where we have noted that the spectrum of eigenvalues is continuous,
$$\sum_n \to \int (d\mathbf{k}).\qquad(11.107)$$
Thus the eigenfunction expansion for the Green's function has the form
$$G_k(\mathbf{r}-\mathbf{r}') = \int \frac{(d\mathbf{k}')}{(2\pi)^3}\,\frac{e^{-i\mathbf{k}'\cdot\mathbf{r}'}\,e^{i\mathbf{k}'\cdot\mathbf{r}}}{k^2 - k'^2}.\qquad(11.108)$$
Let us evaluate this integral in spherical coordinates, where we write
$$(d\mathbf{k}') = k'^2\,dk'\,d\phi'\,d\mu',\qquad \mu' = \cos\theta',\qquad(11.109)$$
where we have chosen the $z$ axis to lie along the direction of $\mathbf{r}-\mathbf{r}'$. The integration over the angles is easy:
$$G_k(\mathbf{r}-\mathbf{r}') = \frac{1}{(2\pi)^3}\int_0^\infty dk'\,k'^2\int_0^{2\pi}d\phi'\int_{-1}^1 d\mu'\,\frac{e^{ik'|\mathbf{r}-\mathbf{r}'|\mu'}}{k^2 - k'^2} = \frac{1}{(2\pi)^2}\,\frac{1}{2}\int_{-\infty}^\infty dk'\,\frac{k'^2}{k^2 - k'^2}\,\frac{1}{ik'\rho}\left(e^{ik'\rho} - e^{-ik'\rho}\right),\qquad(11.110)$$
Figure 11.1: Contour in the $k'$ plane used to evaluate the integral (11.110). The integral is closed in the upper (lower) half-plane if the exponent is positive (negative). The poles of the integrand, at $\pm k$, are avoided by passing above the one on the left and below the one on the right.
defining $\rho = |\mathbf{r}-\mathbf{r}'|$, where we have replaced $\int_0^\infty$ by $\frac{1}{2}\int_{-\infty}^\infty$ because the integrand is even in $k'$. We evaluate this integral by contour methods. Because now $k$ can coincide with an eigenvalue $k'$, we must choose the contour appropriately to define the Green's function. Suppose we choose the contour as shown in Fig. 11.1, passing below the pole at $k$ and above the pole at $-k$. We close the contour in the upper half plane for the $e^{ik\rho}$ term and in the lower half plane for the $e^{-ik\rho}$ term. Then by Jordan's lemma, we immediately evaluate the integral:
$$G_k(\mathbf{r}-\mathbf{r}') = \frac{1}{(2\pi)^2}\,\frac{1}{2}\left(-\frac{2\pi i}{2k}\,\frac{k\,e^{ik\rho}}{i\rho} + \frac{2\pi i}{-2k}\,\frac{k\,e^{ik\rho}}{i\rho}\right) = -\frac{1}{4\pi}\frac{e^{ik\rho}}{\rho},\qquad(11.111)$$
which coincides with Eq. (11.101). If a different contour defining the integral had been chosen, we would have obtained a different Green's function, not one corresponding to outgoing spherical waves. Boundary conditions uniquely determine the contour.
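The contour prescription can be checked numerically: pushing the poles to $\pm(k+i\delta)$ implements the contour of Fig. 11.1, and a brute-force quadrature of (11.110) then reproduces $-e^{i(k+i\delta)\rho}/4\pi\rho$. The cutoff, grid, and parameter values below are crude, arbitrary choices, adequate only for this spot-check.

```python
import numpy as np

# Evaluate (11.110) with k -> k + i*delta (the contour prescription of
# Fig. 11.1) and compare with the outgoing wave -e^{i k rho}/(4 pi rho).
k, delta, rho = 2.0, 0.05, 1.0
kc = k + 1j*delta
kp = np.linspace(-1000.0, 1000.0, 2_000_001)
dk = kp[1] - kp[0]

# (e^{ik'rho} - e^{-ik'rho})/(i k' rho) = 2 sin(k' rho)/(k' rho)
integrand = kp**2 / (kc**2 - kp**2) * 2.0 * np.sinc(kp * rho / np.pi)

G_num = integrand.sum() * dk / (2 * (2*np.pi)**2)
G_exact = -np.exp(1j*kc*rho) / (4*np.pi*rho)
assert abs(G_num - G_exact) < 1e-3
```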
Note that
$$G_k(\mathbf{r},\mathbf{r}') = G_k(\mathbf{r}',\mathbf{r}),\qquad(11.112)$$
even though $G_k$ is complex. The self-adjointness property (10.110) implied by the eigenfunction expansion is only formal, and is spoiled by the contour choice.
11.6 Green's Function for the Scalar Wave Equation

The inhomogeneous scalar wave equation,
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\psi(\mathbf{r},t) = \rho(\mathbf{r},t),\qquad(11.113)$$
requires boundary and initial conditions. The boundary conditions may be Dirichlet, Neumann, or mixed. The initial conditions are Cauchy (see Sec. 11.2). Thus, we might specify at an initial time $t = t_0$ both $\psi(\mathbf{r},t_0)$ and $\frac{\partial}{\partial t}\psi(\mathbf{r},t_0)$ at every point $\mathbf{r}$ in the region being considered.
The corresponding Green's function $G(\mathbf{r},t;\mathbf{r}',t')$ satisfies
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)G(\mathbf{r},t;\mathbf{r}',t') = \delta(\mathbf{r}-\mathbf{r}')\,\delta(t-t').\qquad(11.114)$$
It must satisfy the homogeneous form of the boundary conditions satisfied by $\psi$. Thus, if $\psi$ has a specified value everywhere on the bounding surface, the corresponding Green's function must vanish on the surface. In classical physics it is customary to adopt as initial conditions
$$\left.\begin{array}{c} G(\mathbf{r},t;\mathbf{r}',t') \\[1ex] \dfrac{\partial G}{\partial t}(\mathbf{r},t;\mathbf{r}',t') \end{array}\right\} = 0 \quad\text{if } t < t'.\qquad(11.115)$$
These then define the so-called retarded Green's functions. They ensure that an effect occurs after its cause. (In fact, however, this time asymmetry of the Green's function, which is not present in the wave equation, is not necessary; and in fact it is impossible to maintain in relativistic quantum mechanics.)

With such a Green's function, what takes the place of the self-adjointness property given in Sec. 10.8? Since the second time derivative is invariant under $t \to -t$, we have, in addition to the inhomogeneous equation (11.114),
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)G(\mathbf{r},-t;\mathbf{r}'',-t'') = \delta(\mathbf{r}-\mathbf{r}'')\,\delta(t-t'').\qquad(11.116)$$
Multiply Eq. (11.116) by $G(\mathbf{r},t;\mathbf{r}',t')$, Eq. (11.114) by $G(\mathbf{r},-t;\mathbf{r}'',-t'')$, subtract, and integrate over the volume being considered, and over $t$ from $-\infty$ to $T$, where $T > t', t''$:
$$\int_{-\infty}^T dt\int_V (d\mathbf{r})\,\Big[G(\mathbf{r},t;\mathbf{r}',t')\,\nabla^2 G(\mathbf{r},-t;\mathbf{r}'',-t'') - G(\mathbf{r},-t;\mathbf{r}'',-t'')\,\nabla^2 G(\mathbf{r},t;\mathbf{r}',t')$$
$$\qquad - G(\mathbf{r},t;\mathbf{r}',t')\,\frac{1}{c^2}\frac{\partial^2}{\partial t^2}G(\mathbf{r},-t;\mathbf{r}'',-t'') + G(\mathbf{r},-t;\mathbf{r}'',-t'')\,\frac{1}{c^2}\frac{\partial^2}{\partial t^2}G(\mathbf{r},t;\mathbf{r}',t')\Big]$$
$$= -G(\mathbf{r}',-t';\mathbf{r}'',-t'') + G(\mathbf{r}'',t'';\mathbf{r}',t').\qquad(11.117)$$
Now use Green's theorem, together with the corresponding identity,
$$\frac{\partial}{\partial t}\left(A\frac{\partial}{\partial t}B - B\frac{\partial}{\partial t}A\right) = A\frac{\partial^2}{\partial t^2}B - B\frac{\partial^2}{\partial t^2}A,\qquad(11.118)$$
to conclude that
$$G(\mathbf{r}'',t'';\mathbf{r}',t') - G(\mathbf{r}',-t';\mathbf{r}'',-t'')$$
$$= \int_{-\infty}^T dt\int_\Sigma d\boldsymbol{\sigma}\cdot\Big[G(\mathbf{r},t;\mathbf{r}',t')\,\boldsymbol{\nabla}G(\mathbf{r},-t;\mathbf{r}'',-t'') - G(\mathbf{r},-t;\mathbf{r}'',-t'')\,\boldsymbol{\nabla}G(\mathbf{r},t;\mathbf{r}',t')\Big]$$
$$- \int_V (d\mathbf{r})\,\frac{1}{c^2}\Big[G(\mathbf{r},t;\mathbf{r}',t')\,\frac{\partial}{\partial t}G(\mathbf{r},-t;\mathbf{r}'',-t'') - G(\mathbf{r},-t;\mathbf{r}'',-t'')\,\frac{\partial}{\partial t}G(\mathbf{r},t;\mathbf{r}',t')\Big]\bigg|_{t=-\infty}^{t=T}.\qquad(11.119)$$
The surface integral vanishes, since both Green's functions satisfy the same homogeneous boundary conditions on $\Sigma$. (The boundary conditions are time independent.) The second integral is also zero because, from Eq. (11.115),
$$\left.\begin{array}{c} G(\mathbf{r},-\infty;\mathbf{r}',t') \\[1ex] \dfrac{\partial G}{\partial t}(\mathbf{r},-\infty;\mathbf{r}',t') \end{array}\right\} = 0,\qquad(11.120a)$$
since $-\infty < t'$, and
$$\left.\begin{array}{c} G(\mathbf{r},-T;\mathbf{r}'',-t'') \\[1ex] \dfrac{\partial G}{\partial t}(\mathbf{r},-T;\mathbf{r}'',-t'') \end{array}\right\} = 0,\qquad(11.120b)$$
since $-T < -t''$. Thus the reciprocity relation here is
$$G(\mathbf{r},t;\mathbf{r}',t') = G(\mathbf{r}',-t';\mathbf{r},-t).\qquad(11.121)$$
How do we express a solution to the wave equation (11.113) in terms of the Green's function? The procedure is the same as that given earlier. The field, and the Green's function, satisfy
$$\nabla'^2\psi(\mathbf{r}',t') - \frac{1}{c^2}\frac{\partial^2}{\partial t'^2}\psi(\mathbf{r}',t') = \rho(\mathbf{r}',t'),\qquad(11.122a)$$
$$\nabla'^2 G(\mathbf{r},t;\mathbf{r}',t') - \frac{1}{c^2}\frac{\partial^2}{\partial t'^2}G(\mathbf{r},t;\mathbf{r}',t') = \delta(\mathbf{r}-\mathbf{r}')\,\delta(t-t').\qquad(11.122b)$$
Note that the differentiations on $G$ are with respect to the second set of arguments (this equation follows from the reciprocity relation). Again multiply the first equation by $G(\mathbf{r},t;\mathbf{r}',t')$, the second by $\psi(\mathbf{r}',t')$, subtract, integrate over the volume, and over $t'$ from $t_0 < t$ to $t^+$, where $t^+$ means $t+\epsilon$, $\epsilon \to 0$ through
positive values. Then, for $\mathbf{r} \in V$,
$$\int_{t_0}^{t^+} dt'\int_V (d\mathbf{r}')\,\Big[G(\mathbf{r},t;\mathbf{r}',t')\,\nabla'^2\psi(\mathbf{r}',t') - \psi(\mathbf{r}',t')\,\nabla'^2 G(\mathbf{r},t;\mathbf{r}',t')$$
$$\qquad - \frac{1}{c^2}\Big(G(\mathbf{r},t;\mathbf{r}',t')\,\frac{\partial^2}{\partial t'^2}\psi(\mathbf{r}',t') - \psi(\mathbf{r}',t')\,\frac{\partial^2}{\partial t'^2}G(\mathbf{r},t;\mathbf{r}',t')\Big)\Big]$$
$$= -\psi(\mathbf{r},t) + \int_{t_0}^{t^+} dt'\int_V (d\mathbf{r}')\,G(\mathbf{r},t;\mathbf{r}',t')\,\rho(\mathbf{r}',t').\qquad(11.123)$$
Now we again use Green's theorem and the identity (11.118) to conclude
$$\psi(\mathbf{r},t) = \int_{t_0}^{t^+} dt'\int_V (d\mathbf{r}')\,G(\mathbf{r},t;\mathbf{r}',t')\,\rho(\mathbf{r}',t')$$
$$- \int_{t_0}^{t^+} dt'\int_\Sigma d\boldsymbol{\sigma}\cdot\Big[G(\mathbf{r},t;\mathbf{r}',t')\,\boldsymbol{\nabla}'\psi(\mathbf{r}',t') - \psi(\mathbf{r}',t')\,\boldsymbol{\nabla}' G(\mathbf{r},t;\mathbf{r}',t')\Big]$$
$$- \frac{1}{c^2}\int_V (d\mathbf{r}')\,\Big[G(\mathbf{r},t;\mathbf{r}',t_0)\,\frac{\partial}{\partial t_0}\psi(\mathbf{r}',t_0) - \psi(\mathbf{r}',t_0)\,\frac{\partial}{\partial t_0}G(\mathbf{r},t;\mathbf{r}',t_0)\Big].\qquad(11.124)$$
This is our result. The interpretation is as follows:

1. The first integral represents the effect of the sources $\rho$ distributed throughout the volume $V$.

2. The second integral represents the boundary conditions. If, for example, $\psi$ satisfies inhomogeneous Neumann boundary conditions on $\Sigma$,
$$\hat{\mathbf{n}}\cdot\boldsymbol{\nabla}\psi\big|_\Sigma = f(\mathbf{r}')\qquad(11.125)$$
is specified, then we use homogeneous Neumann boundary conditions for $G$,
$$\hat{\mathbf{n}}\cdot\boldsymbol{\nabla}G(\mathbf{r},t;\mathbf{r}',t')\big|_\Sigma = 0.\qquad(11.126)$$
Then the second integral reads
$$-\int_{t_0}^{t^+} dt'\int_\Sigma d\boldsymbol{\sigma}\cdot G(\mathbf{r},t;\mathbf{r}',t')\,\boldsymbol{\nabla}'\psi(\mathbf{r}',t').\qquad(11.127)$$
That is, $-\hat{\mathbf{n}}\cdot\boldsymbol{\nabla}'\psi(\mathbf{r}',t')$ represents a surface source distribution. Other types of boundary conditions are as discussed earlier.
3. The third integral represents the effect of the initial conditions, where
$$\psi(\mathbf{r}',t_0),\qquad \frac{\partial}{\partial t_0}\psi(\mathbf{r}',t_0)\qquad(11.128)$$
are specified. They correspond to impulsive sources at $t = t_0$:
$$\rho_{\text{init}}(\mathbf{r}',t') = -\frac{1}{c^2}\left[\frac{\partial}{\partial t_0}\psi(\mathbf{r}',t_0)\,\delta(t'-t_0) + \psi(\mathbf{r}',t_0)\,\delta'(t'-t_0)\right].\qquad(11.129)$$
We verify this statement by integrating by parts, and letting the lower limit of the $t'$ integration be $t_0 - \epsilon$.
11.7 Wave Equation in Unbounded Space

We now wish to solve Eq. (11.114),
$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)G(\mathbf{r},t;\mathbf{r}',t') = \delta(\mathbf{r}-\mathbf{r}')\,\delta(t-t'),\qquad(11.130)$$
in unbounded space, by noting that then $G$ is a function of $\mathbf{R} = \mathbf{r}-\mathbf{r}'$ and $T = t-t'$ only,
$$G(\mathbf{r},t;\mathbf{r}',t') = G(\mathbf{r}-\mathbf{r}', t-t') = G(\mathbf{R},T).\qquad(11.131)$$
Then we can introduce a Fourier transform in space and time,
$$g(\mathbf{k},\omega) = \int (d\mathbf{R})\,dT\,e^{i\mathbf{k}\cdot\mathbf{R}}\,e^{-i\omega T}\,G(\mathbf{R},T).\qquad(11.132)$$
The Fourier transform of the Green's function equation is (we have set $c = 1$ temporarily for convenience)
$$(-k^2 + \omega^2)\,g(\mathbf{k},\omega) = 1,\qquad(11.133)$$
where we write $k^2 = \mathbf{k}\cdot\mathbf{k}$, which has the immediate solution
$$g(\mathbf{k},\omega) = \frac{1}{\omega^2 - k^2}.\qquad(11.134)$$
Thus the Green's function has the formal representation
$$G(\mathbf{R},T) = \int \frac{(d\mathbf{k})}{(2\pi)^3}\,\frac{d\omega}{2\pi}\,e^{-i\mathbf{k}\cdot\mathbf{R}}\,e^{i\omega T}\,\frac{1}{\omega^2 - k^2}.\qquad(11.135)$$
The $\omega$ integral here is not well defined until we impose the boundary condition (11.115),
$$G(\mathbf{R},T) = 0 \quad\text{if } T < 0.\qquad(11.136)$$
This will be true if the poles are located above the real axis, as shown in Fig. 11.2. Here the contour is closed in the upper half plane if $T > 0$, and in the lower half plane if $T < 0$. In both cases, by Jordan's lemma, the infinite semicircle gives no contribution. We have
$$\int_{-\infty}^\infty \frac{d\omega}{2\pi}\,e^{i\omega T}\,\frac{1}{(\omega-k)(\omega+k)} = \begin{cases} i\left(\dfrac{e^{ikT}}{2k} - \dfrac{e^{-ikT}}{2k}\right), & T > 0,\\[2ex] 0, & T < 0.\end{cases}\qquad(11.137)$$
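The $\omega$ integral can be checked by quadrature with the poles explicitly displaced to $\pm k + i\epsilon$ as in Fig. 11.2; for finite $\epsilon$ the residue theorem gives the $T>0$ result an extra damping factor $e^{-\epsilon T}$, which the crude numerical parameters below reproduce.

```python
import numpy as np

# Evaluate (11.137) with poles at +-k + i*eps (Fig. 11.2): nonzero, damped
# oscillation for T > 0; essentially zero for T < 0.
k, eps = 1.0, 0.05
w = np.linspace(-400.0, 400.0, 2_000_001)
dw = w[1] - w[0]

def contour_integral(T):
    f = np.exp(1j*w*T) / ((w - k - 1j*eps) * (w + k - 1j*eps))
    return f.sum() * dw / (2*np.pi)

T = 1.5
exact = 1j * np.exp(-eps*T) * (np.exp(1j*k*T) - np.exp(-1j*k*T)) / (2*k)
assert abs(contour_integral(T) - exact) < 1e-3
assert abs(contour_integral(-T)) < 1e-3        # retarded: vanishes for T < 0
```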
142 Version of December 3, 2011 CHAPTER 11. GREEN’S FUNCTIONS
Figure 11.2: Contour in the $\omega$ plane used to evaluate the integral (11.135); the poles lie at $-k+i\epsilon$ and $k+i\epsilon$, above the real axis.
Thus, if $T > 0$,
$$G(\mathbf{R},T) = \frac{1}{(2\pi)^3}\int_0^\infty k^2\,dk\,2\pi\int_{-1}^1 d\mu\,e^{-ikR\mu}\,\frac{i}{2k}\left(e^{ikT} - e^{-ikT}\right)$$
$$= \frac{i}{(2\pi)^2}\int_0^\infty \frac{k\,dk}{2ikR}\left(e^{ikR} - e^{-ikR}\right)\left(e^{ikT} - e^{-ikT}\right)$$
$$= \frac{1}{(2\pi)^2}\,\frac{1}{2R}\,\frac{1}{2}\int_{-\infty}^\infty dk\left(e^{ik(R+T)} + e^{-ik(R+T)} - e^{ik(R-T)} - e^{ik(T-R)}\right)$$
$$= \frac{1}{2\pi}\,\frac{1}{2R}\left[\delta(R+T) - \delta(R-T)\right].\qquad(11.138)$$
But $R$ and $T$ are both positive, so $R + T$ can never vanish. Thus we are left with
$$G(\mathbf{R},T) = -\frac{1}{4\pi}\frac{1}{R}\,\delta(R-T),\qquad(11.139)$$
or, restoring $c$,
$$G(\mathbf{r}-\mathbf{r}', t-t') = -\frac{1}{4\pi}\frac{1}{|\mathbf{r}-\mathbf{r}'|}\,\delta\!\left(\frac{|\mathbf{r}-\mathbf{r}'|}{c} - (t-t')\right).\qquad(11.140)$$
The effect at the observation point $\mathbf{r}$ at time $t$ is due to the action at the source point $\mathbf{r}'$ at time
$$t' = t - \frac{|\mathbf{r}-\mathbf{r}'|}{c}.\qquad(11.141)$$
Physically, this means that the "signal" propagates with speed $c$.
Let us make this more concrete by considering a simple example, a point "charge" moving with velocity $\mathbf{v}(t) = \frac{d}{dt}\mathbf{r}(t)$:
$$\rho(\mathbf{r},t) = q\,\delta(\mathbf{r} - \mathbf{r}(t)).\qquad(11.142)$$
There are no effects from the infinite surface, nor from the infinite past, so we have from Eq. (11.124)
$$\psi(\mathbf{r},t) = \int_{-\infty}^{t^+} dt'\int_V (d\mathbf{r}')\,G(\mathbf{r}-\mathbf{r}', t-t')\,\rho(\mathbf{r}',t') = -\frac{q}{4\pi}\int_{-\infty}^{t^+} dt'\,\frac{1}{|\mathbf{r}-\mathbf{r}(t')|}\,\delta\!\left(\frac{|\mathbf{r}-\mathbf{r}(t')|}{c} - (t-t')\right).\qquad(11.143)$$
If we let $R(t') = |\mathbf{r}-\mathbf{r}(t')|$ be the distance from the source to the observation point at time $t' = t - R(t')/c$, we write this as
$$\psi(\mathbf{r},t) = -\frac{q}{4\pi}\int_{-\infty}^{t^+} dt'\,\frac{1}{R(t')}\,\delta\!\left(\frac{R(t')}{c} - (t-t')\right).\qquad(11.144)$$
Let $\tau = R(t')/c + t'$, where $\tau = t$ determines the "retarded time" $t'$, so
$$d\tau = dt'\left(1 + \frac{1}{c}\frac{dR}{dt'}\right),\qquad(11.145)$$
where
$$\frac{dR}{dt'} = \frac{1}{2R}\frac{d}{dt'}\,\mathbf{R}\cdot\mathbf{R} = -\frac{\mathbf{R}\cdot\mathbf{v}}{R},\qquad(11.146)$$
that is,
$$d\tau = dt'\left(1 - \frac{\mathbf{R}\cdot\mathbf{v}}{Rc}\right).\qquad(11.147)$$
Thus the field is evaluated as
$$\psi(\mathbf{r},t) = -\frac{q}{4\pi}\int_{-\infty}^{t+R(t)/c} \frac{d\tau}{R(\tau)}\,\frac{1}{\left(1 - \dfrac{\mathbf{R}\cdot\mathbf{v}}{Rc}\right)(\tau)}\,\delta(\tau - t) = -\frac{q}{4\pi}\,\frac{1}{R(t) - \mathbf{R}(t)\cdot\mathbf{v}(t)/c}.\qquad(11.148)$$
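The Jacobian factor $1 - \mathbf{R}\cdot\mathbf{v}/Rc$ in (11.148) can be checked numerically for a hypothetical example not in the text: a charge in uniform motion, $\mathbf{r}(t') = (vt', 0, 0)$, observed at $\mathbf{r} = (0, b, 0)$. The delta function in (11.144) is smeared into a narrow Gaussian for the quadrature, and the retarded time is found by root finding.

```python
import numpy as np
from scipy.optimize import brentq

# Check (11.144) -> (11.148) for a uniformly moving charge (c = 1).
c, v, b, q, t = 1.0, 0.6, 1.0, 1.0, 0.0
R = lambda tp: np.sqrt((v*tp)**2 + b**2)         # R(t') = |r - r(t')|

# Retarded time: t - t' = R(t')/c (here tret = -1.25 exactly)
tret = brentq(lambda tp: t - tp - R(tp)/c, -50.0, t)

sig = 1e-3                                       # Gaussian smearing width
tp = np.linspace(tret - 0.05, tret + 0.05, 200001)
dtp = tp[1] - tp[0]
delta = np.exp(-(R(tp)/c - (t - tp))**2/(2*sig**2)) / (sig*np.sqrt(2*np.pi))
psi_num = -q/(4*np.pi) * (delta / R(tp)).sum() * dtp

Rr = R(tret)
RdotV = -v * (v * tret)                          # R.v with R = (-v t', b, 0)
psi_exact = -q/(4*np.pi) / (Rr - RdotV/c)        # Eq. (11.148)
assert abs(psi_num - psi_exact) < 1e-4 * abs(psi_exact)
```

The quadrature automatically picks up the factor $1/(1-\mathbf{R}\cdot\mathbf{v}/Rc)$ from the slope of the delta function's argument, which is the content of the change of variables (11.145)–(11.147).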