
Fourier Series 1

Chapter One

FOURIER SERIES

1. Definitions and Examples

A trigonometric polynomial of degree N is an expression of the form


$$
p_N(x) = \sum_{k=0}^{N} (a_k \cos kx + b_k \sin kx),
$$

where a0 , a1 , . . . and b0 , b1 , . . . are constants. The value of the constant b0 is obviously irrelevant
since sin 0x ≡ 0; for convenience we set b0 = 0.
A Fourier series extends the definition of a trigonometric polynomial in a natural way, just
as the Maclaurin series extends ordinary polynomials:

IDefinition: We say that a function f (x) admits a Fourier series expansion between x = −π
and x = π if the equation

$$
f(x) = \sum_{k=0}^{\infty} (a_k \cos kx + b_k \sin kx) \qquad (1)
$$

holds in the interval (−π, π). J

Note that all terms in the sum on the right-hand side of (1) have period 2π, hence the sum of
the Fourier series (1) is also periodic with period 2π. Therefore, if the expansion (1) converges
in the fundamental interval (−π, π), then it must converge for all values of x, and the function
represented by (1) is—by construction, if you wish—periodic with period 2π.
Now, if the expansion (1) does hold (for certain values of x), it would be desirable to have
a formula that gives the coefficients ak and bk in terms of the function f . This can be done
by borrowing a simple procedure from linear algebra. So, before we go into that, let’s see an
example of this procedure, even if at first sight you might think it has little to do with Fourier
expansions.
IExample 1 Show that the vectors l = [ 1 2 3 ]T, m = [ 1 1 −1 ]T and n = [ 5 −4 1 ]T
form an orthogonal basis for R3; hence expand the vector b = [ 1 1 0 ]T in terms of this basis.
Solution: By inspection, we find immediately that

$$
l \cdot m = 1 + 2 - 3 = 0, \qquad l \cdot n = 5 - 8 + 3 = 0, \qquad n \cdot m = 5 - 4 - 1 = 0.
$$

So, l, m and n are at 90◦ to each other, and hence they form an orthogonal basis.† Therefore,
for every vector b in R3 , one may write

b = a1 l + a2 m + a3 n

and this expansion is unique. For the second part of the problem, to find the coefficients
{a1 , a2 , a3 }, we form the scalar product of b with l, m and n, and “read out” the result. For
instance,
b . l = (a1 l + a2 m + a3 n) . l =
= a1 l . l + a2 m . l + a3 n . l.

But m . l = 0 and n . l = 0; hence

b . l = a1 l . l + 0 + 0,

and finally
$$
a_1 = \frac{b \cdot l}{l \cdot l}.
$$
Similarly, considering the scalar products b . m and b . n, we get

$$
a_2 = \frac{b \cdot m}{m \cdot m}, \qquad a_3 = \frac{b \cdot n}{n \cdot n}.
$$

It follows that
$$
b = \frac{b \cdot l}{l \cdot l}\, l + \frac{b \cdot m}{m \cdot m}\, m + \frac{b \cdot n}{n \cdot n}\, n.
$$

This formula is applicable to any b and any orthogonal triplet {l, m, n} in R3 . In particular,
with the data of this example we get:
           
$$
b \cdot l = 3, \quad l \cdot l = 14; \qquad b \cdot m = 2, \quad m \cdot m = 3; \qquad b \cdot n = 1, \quad n \cdot n = 42,
$$

so that

$$
b = \frac{3}{14} \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + \frac{2}{3} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + \frac{1}{42} \begin{bmatrix} 5 \\ -4 \\ 1 \end{bmatrix}.
$$

So, a1 = 3/14, a2 = 2/3 and a3 = 1/42. As expected, the right-hand side adds up to
[ 1 1 0 ]T = b. J

† Remember, orthogonal does not mean necessarily orthonormal. See, for instance, Linear
Algebra by Fraleigh and Beauregard (1995), Chapter 6.
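The coefficient formula of example 1 is easy to check numerically. Below is a minimal sketch in Python (NumPy assumed available); the helper name `orthogonal_coefficients` is ours, introduced only for illustration.

```python
import numpy as np

# Coefficients of b in an orthogonal (not necessarily orthonormal) basis:
# a_i = (b . v_i) / (v_i . v_i). The helper name is ours, for illustration.
def orthogonal_coefficients(b, basis):
    return [float(np.dot(b, v) / np.dot(v, v)) for v in basis]

l = np.array([1.0, 2.0, 3.0])
m = np.array([1.0, 1.0, -1.0])
n = np.array([5.0, -4.0, 1.0])
b = np.array([1.0, 1.0, 0.0])

a1, a2, a3 = orthogonal_coefficients(b, [l, m, n])
reconstructed = a1 * l + a2 * m + a3 * n   # should reproduce b
```

The computed coefficients match the values 3/14, 2/3 and 1/42 found above.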

It’s easy to see how this procedure may be extended to a vector space of any dimension. For
instance, if b is a vector in Rn and {v1 , v2 , v3 , . . . , vn } is an orthogonal n-tuple, we find that

$$
b = \frac{b \cdot v_1}{v_1 \cdot v_1}\, v_1 + \frac{b \cdot v_2}{v_2 \cdot v_2}\, v_2 + \cdots + \frac{b \cdot v_n}{v_n \cdot v_n}\, v_n = \sum_{k=1}^{n} \frac{b \cdot v_k}{v_k \cdot v_k}\, v_k. \qquad (2)
$$

This is an important result: make sure you understand it.


If you followed so far, you should be wondering whether this idea is also applicable to an
infinite-dimensional vector space. Even if you don’t, the answer is: Yes, this method may also
be applied in infinite dimensions, but with care.
You have seen in first-year linear algebra that every basis for Rn contains exactly n inde-
pendent vectors; vice-versa, every set of n independent vectors in Rn is a basis for Rn . No set
of 6 vectors can be a basis for R8 , for instance, because the numbers do not match. However, if
you take a basis for R∞ and “throw away” two vectors, obviously you no longer have a basis,
but the set still has an infinity of vectors. So, “counting vectors” may not be used to determine
if an independent set is a basis for R∞ .
Another important point is that in the case of trigonometric expansions like (1) we no longer
deal with polynomials but with series, and so convergence may not be taken for granted.
A rigorous theory of Fourier analysis can be very subtle. Intuitive “proofs” invariably
contain a hidden logical pitfall, and some of the best mathematicians of the 19th century fell
into them. Many correct results have been discovered using arguments that today would be
unacceptable.
However, in this course we’ll follow an intuitive approach without worrying more than
necessary about the rigor of our proofs. Mathematicians call this the heuristic approach, from
the Greek word for “to find”.
We shall concentrate on possible applications of our results, and be satisfied to know that
they may be justified in a rigorous way, if necessary.

ORTHOGONALITY RELATIONS

Let’s now see how the method of example 1 may be adapted to Fourier polynomials and series.
Before we start, you must take a look at the so-called orthogonality relations for the
functions {cos kx, sin kx}. These are:
$$
\int_{-\pi}^{\pi} \sin kx \cos nx \, dx = 0 \quad \text{always};
$$
$$
\int_{-\pi}^{\pi} \sin kx \sin nx \, dx = \begin{cases} 0 & \text{if } k \neq n, \\ \pi & \text{if } k = n \neq 0, \\ 0 & \text{if } k = n = 0; \end{cases}
$$
$$
\int_{-\pi}^{\pi} \cos kx \cos nx \, dx = \begin{cases} 0 & \text{if } k \neq n, \\ \pi & \text{if } k = n \neq 0, \\ 2\pi & \text{if } k = n = 0. \end{cases}
$$

They may all be derived using certain important trigonometric identities, which you should
remember from your high-school days:


$$
\int \sin A \cos B \, dx = \tfrac{1}{2} \int [\sin(A - B) + \sin(A + B)] \, dx,
$$
$$
\int \sin A \sin B \, dx = \tfrac{1}{2} \int [\cos(A - B) - \cos(A + B)] \, dx,
$$
$$
\int \cos A \cos B \, dx = \tfrac{1}{2} \int [\cos(A - B) + \cos(A + B)] \, dx.
$$

As an exercise, verify the orthogonality relations. Many examples in this chapter will require
the calculation of integrals like the ones above.
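As a further sanity check, the orthogonality relations can be verified numerically. The sketch below uses a uniform Riemann sum over a full period, which for trigonometric integrands of moderate degree is exact to machine precision; the grid size is an arbitrary choice of ours.

```python
import numpy as np

# Verify the orthogonality relations on (-pi, pi) with a uniform
# midpoint Riemann sum; over a full period this quadrature is
# essentially exact for trigonometric polynomials.
N = 4096
x = -np.pi + (np.arange(N) + 0.5) * (2 * np.pi / N)
dx = 2 * np.pi / N

def inner(f, g):
    return float(np.sum(f * g) * dx)

one = np.ones_like(x)
i_sc = inner(np.sin(2 * x), np.cos(3 * x))   # expect 0  (always)
i_ss = inner(np.sin(2 * x), np.sin(3 * x))   # expect 0  (k != n)
i_s2 = inner(np.sin(2 * x), np.sin(2 * x))   # expect pi (k = n != 0)
i_c0 = inner(one, one)                       # expect 2*pi (k = n = 0)
```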

IDefinition: If f (x) and g(x) are two real functions for which $\int_{-\pi}^{\pi} f(x)\, g(x)\, dx$ exists, we call
this integral the inner product of f and g over the interval (−π, π). J

The inner product over an arbitrary interval (a, b) is defined in the obvious way; however, in
this chapter we'll use, most of the time, the inner product over (−π, π).
To keep the notation as light as possible, we also introduce the symbol
$$
\langle f \mid g \rangle \stackrel{\text{def}}{=} \int_{-\pi}^{\pi} f(x)\, g(x)\, dx. \qquad (3)
$$

Note that the inner product has many properties in common with the scalar (or dot) product
you saw in mechanics and linear algebra. In particular:
• <f | g> = <g | f > ⇒ the inner product is commutative;
• <f | (g + h)> = <f | g> + <f | h> ⇒ the inner product is linear in each argument;
• <f | f > is never negative, and <f | f > = 0 if f (x) ≡ 0.
Now consider the set W of all linear combinations of cos kx and sin kx, including (possibly)
infinite linear combinations. Obviously W is a vector space, because every linear combination
of linear combinations is itself a linear combination. Intuitively, W is infinite-dimensional, as
one would expect, but the proof is fairly subtle and in these notes we shall leave it out.
The elements of the vector space W are more abstract than the vectors you saw in first-year
mechanics: it will help if you do not picture them in your mind as some kind of an arrow. The
space W is populated by functions of x, defined over a certain interval.
In the space W we introduce the inner product (3). Using the inner-product notation (3),
the orthogonality relations become
< sin kx | cos nx> = 0 [always],
< sin kx | sin nx> = < cos kx | cos nx> = 0 [k ≠ n],
and
< sin kx | sin kx> = < cos kx | cos kx> = π [k ≠ 0],
< cos kx | cos kx> = 2π [k = 0].
Now, suppose that the series (1) converges to f (x) in (−π, π), and that term-by-term integration
of the series is allowed. We take the inner product of both sides of (1) with cos nx, and we get:

$$
\langle f \mid \cos nx \rangle = \sum_{k=0}^{\infty} \bigl( a_k \langle \cos kx \mid \cos nx \rangle + b_k \langle \sin kx \mid \cos nx \rangle \bigr).
$$

However, < sin kx | cos nx> = 0 always, so we quickly shed half of the right-hand side:

$$
\langle f \mid \cos nx \rangle = \sum_{k=0}^{\infty} a_k \langle \cos kx \mid \cos nx \rangle.
$$

Furthermore, on the right-hand side, all terms of the sum are zero except one: the single term
where the running index k is equal to n. In other words, we have that

<f | cos nx> = an < cos nx | cos nx> + an infinity of zeroes, [for every n].

Note that this is exactly the same procedure we used in example 1 to find a1 , a2 and a3 . We
may now solve for an :
$$
a_n = \frac{\langle f \mid \cos nx \rangle}{\langle \cos nx \mid \cos nx \rangle}. \qquad (4)
$$
If you prefer to avoid the inner-product notation, you may expand this result as
Z π  Z π

 1
f (x) cos nx dx 
 2π f (x) dx if n = 0,
−π −π
an = Z π = Z
2 

 1 π
cos nx dx  f (x) cos nx dx if n = 1, 2, 3, . . .
−π π −π

In order to find the coefficients bk , we take the inner product of both sides of (1) with sin nx,
and we get:

$$
\langle f \mid \sin nx \rangle = \sum_{k=0}^{\infty} \bigl( a_k \langle \cos kx \mid \sin nx \rangle + b_k \langle \sin kx \mid \sin nx \rangle \bigr).
$$

Proceeding like before, we note immediately that < cos kx | sin nx> = 0 always, which removes
half of the right-hand side. We get the equation:

$$
\langle f \mid \sin nx \rangle = \sum_{k=0}^{\infty} b_k \langle \sin kx \mid \sin nx \rangle,
$$

but again, on the right-hand side, all terms of the sum are zero except one: the term with
k = n. Hence,

<f | sin nx> = bn < sin nx | sin nx> + an infinity of zeroes,

and finally:
$$
b_n = \frac{\langle f \mid \sin nx \rangle}{\langle \sin nx \mid \sin nx \rangle}. \qquad (5)
$$
Without using the inner-product notation, this may be written
$$
b_n = \frac{\displaystyle\int_{-\pi}^{\pi} f(x) \sin nx \, dx}{\displaystyle\int_{-\pi}^{\pi} \sin^2 nx \, dx} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin nx \, dx \qquad \text{if } n = 1, 2, 3, \ldots
$$

(remember that b0 = 0 by definition). Observe that if we write out our final result in full, using
inner-product notation, we get a formula that has the same structure as (2):


$$
f(x) = \sum_{k=0}^{\infty} \frac{\langle f \mid \cos kx \rangle}{\langle \cos kx \mid \cos kx \rangle} \cos kx + \sum_{k=1}^{\infty} \frac{\langle f \mid \sin kx \rangle}{\langle \sin kx \mid \sin kx \rangle} \sin kx.
$$

The obvious differences are that the dimension is now infinite, and the inner product (3) has
replaced the dot product. A less conspicuous difference is that we quietly assumed that term-by-term
integration was allowed (swapping the symbols Σ and ∫), and convergence was assured.
While this method certainly works for trigonometric polynomials, it must be stressed that
in general one may not take a method that is known to be valid for polynomials and transfer it
onto infinite series with the certainty that it will work for them, too. Convergence, as a rule,
should always be examined.
Unfortunately, there are examples of trigonometric series like (1) that do converge, but not
to an integrable function. In that case, the procedure we have outlined would fall apart.
Historically, we have followed the path taken by Daniel Bernoulli (1700-1782) in his solution
of the vibrating string problem, and then—systematically—by Fourier (1768-1830).
Incorrect proofs of convergence of Fourier series were published by Fourier himself, by
Poisson (1820) and Cauchy (1826). Finally, the first rigorous proof that Fourier’s insight was
correct was obtained by Dirichlet (1829), who was one generation younger than Fourier.

CONVERGENCE OF FOURIER SERIES

Dirichlet was able to find a set of sufficient conditions for the convergence of a Fourier series.
Different, and perhaps better, sets of sufficient conditions were later found by Dini (1845-1918)
and others.
In order to get Dirichlet’s conditions, we reverse the logical path that we described before.
We start by defining a set of numbers ak and bk as follows:

$$
a_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, dx, \qquad a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos kx \, dx, \qquad b_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin kx \, dx, \qquad (6)
$$

where k = 1, 2, 3, . . . and f (x) is an integrable function defined over (−π, π).

IDefinition: The coefficients ak and bk appearing in (6) are called the Fourier coefficients, and
the corresponding expression (1) is called the Fourier Series, of the function f . J

This definition is acceptable even if the Fourier series does not converge, let alone that it con-
verges to f (x)—although, of course, we hope that this is the case. For this reason, to indicate
that (1) is the Fourier series corresponding to the function f (x), we write


$$
f(x) \sim \sum_{k=0}^{\infty} (a_k \cos kx + b_k \sin kx).
$$

IExample 2 Find the Fourier series of the function

$$
f(x) = \begin{cases} 1 & \text{if } -\pi < x \leq 0, \\ 2 & \text{if } 0 < x < \pi, \end{cases}
$$

pictured on the right. [Figure: the graph of f , equal to 1 on (−π, 0] and to 2 on (0, π).]
Solution: Using (6), we find:

$$
a_0 = \frac{1}{2\pi} \int_{-\pi}^{0} 1 \, dx + \frac{1}{2\pi} \int_{0}^{\pi} 2 \, dx = \frac{3}{2},
$$
and hence
$$
a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos kx \, dx = \frac{1}{\pi} \int_{-\pi}^{0} \cos kx \, dx + \frac{2}{\pi} \int_{0}^{\pi} \cos kx \, dx = 0,
$$
$$
b_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin kx \, dx = \frac{1}{\pi} \int_{-\pi}^{0} \sin kx \, dx + \frac{2}{\pi} \int_{0}^{\pi} \sin kx \, dx = \frac{1 - \cos k\pi}{k\pi}.
$$

Since cos kπ = (−1)k , we see that bk = 0 if k is even, bk = 2/kπ if k is odd. So, finally,
$$
f(x) \sim \frac{3}{2} + \frac{2}{\pi} \left( \sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \frac{\sin 7x}{7} + \cdots \right) = \frac{3}{2} + \frac{2}{\pi} \sum_{k \text{ odd}} \frac{\sin kx}{k}.
$$

The question obviously arises: for what values of x does this series converge and, if so, does it
converge to f (x)?
It should be clear that the answer to the last question is no, because when x = 0 the series certainly
converges to 3/2, but it does not equal f (x) there, since f (0) = 1. However, x = 0 is a special
point because f (x) is discontinuous there; perhaps the series converges at other points where
f (x) has a more regular behavior.
A little experimenting with the free package GNUPLOT seems to indicate that the series
converges. The next two pictures show the graph of the Fourier series truncated after three, and
four, nonzero terms (respectively).
[Figure: two plots of partial sums. Left: y = 3/2 + (2/π)(sin x + sin 3x/3). Right: y = 3/2 + (2/π)(sin x + sin 3x/3 + sin 5x/5).]

At this point you should do some experimenting on your own; GNUPLOT is easy to use and may
be downloaded for free from www.gnuplot.info, together with a user’s manual.
For example, the picture on the right shows the graph of the Fourier series truncated after
eleven nonzero terms, i.e.,

$$
y = \frac{3}{2} + \frac{2}{\pi} \left( \sin x + \frac{\sin 3x}{3} + \cdots + \frac{\sin 19x}{19} \right).
$$

[Figure: the partial sum with eleven nonzero terms.]

The improvement is clearly visible. Compare this picture with the one at the beginning of the
example, which shows the limit toward which the series should converge.
It appears that the series is struggling, so to speak, to cope with the discontinuity at x = 0,
but apart from some "ripple effect," the series is approximating f (x). J
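Instead of (or in addition to) plotting, the coefficients of example 2 can be cross-checked numerically. The sketch below (grid size arbitrary, our choice) approximates the integrals in (6) by a midpoint Riemann sum, and evaluates a truncated series at x = 0, where every sine term vanishes and the sum is exactly 3/2.

```python
import numpy as np

# f from example 2: 1 on (-pi, 0], 2 on (0, pi).
N = 200000
x = -np.pi + (np.arange(N) + 0.5) * (2 * np.pi / N)   # midpoint grid
dx = 2 * np.pi / N
f = np.where(x <= 0, 1.0, 2.0)

a0 = float(np.sum(f) * dx / (2 * np.pi))              # expect 3/2
b1 = float(np.sum(f * np.sin(x)) * dx / np.pi)        # expect 2/pi
b2 = float(np.sum(f * np.sin(2 * x)) * dx / np.pi)    # expect 0

def partial_sum(t, terms=11):
    # 3/2 + (2/pi) * sum of sin(k t)/k over the first `terms` odd k
    ks = 2 * np.arange(terms) + 1
    return 1.5 + (2 / np.pi) * float(np.sum(np.sin(ks * t) / ks))
```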

We need some criteria to decide if a Fourier expansion converges (and what is the limit, in case
it does). Continuity, for example, is not enough: there are examples of continuous functions for
which the Fourier series does not converge. Asking for differentiability, on the other hand, seems
to be too much. As we mentioned before, many sets of sufficient conditions for convergence of
Fourier series are known. Unfortunately, no set of necessary and sufficient conditions has been
discovered yet.
Dirichlet’s conditions, which we’ll now see, are adequate for most practical purposes, with-
out being unduly complicated. But first, let’s revise some definitions that you have already
encountered in second year.
If a function f (x) is not continuous at x = x0 , it may still happen that the limit from the
left and from the right of f (x) exist at x = x0 , though of course they would be different. We
write
$$
\lim_{x \to x_0-} f(x) = f(x_0-), \qquad \lim_{x \to x_0+} f(x) = f(x_0+),
$$

for the limit from the left, and from the right, respectively.

IDefinition: A function is said to have a jump discontinuity at a point x0 if it’s continuous in


a neighborhood of x0 (with x0 removed), and the limit from the left and from the right, of f (x)
as x approaches x0 , exist and are finite:

$$
\lim_{x \to x_0-} f(x) = a = \text{finite}, \qquad \lim_{x \to x_0+} f(x) = b = \text{finite}.
$$

The quantity b − a is called the extent of the jump discontinuity. J

For instance, the function f (x) of example 2 has a jump discontinuity at x = 0 because the limit
from the left is 1 and the limit from the right is 2, and the extent of the jump is 2 − 1 = 1.
On the other hand, the functions sin 1/x and e1/x are discontinuous at x = 0, but neither
one has a jump discontinuity. In the case of sin 1/x the limit from the right and the limit from
the left do not exist; in the case of e1/x , the limits do exist, and the limit from the left is 0, but
the limit from the right is infinity.

IDefinition: A function f is said to be piecewise continuous on an interval [a, b] if it is contin-


uous there except for a finite number of jump discontinuities. J

Piecewise continuity is a useful property. For example, it’s possible to show that if f (x) is
piecewise continuous over (a, b) then it is also integrable, i.e.,

$$
\int_a^b f(x)\, dx \quad \text{exists.}
$$

Recall now that in calculus the term “monotonic” stands for “either increasing or decreasing”.
So, the following definition comes naturally.

IDefinition: A function f is said to be piecewise monotonic on the interval [a, b] if it is possible


to break [a, b] into a finite number of pieces, such that inside each piece the function f is either
increasing or decreasing or constant. J

DIRICHLET CONDITIONS

Dirichlet proved that if f (x) is piecewise continuous and piecewise monotonic on [−π, π], then
its Fourier series:
• converges to f (x) at each point where f is continuous;
• converges to the average of the limit from the right and the limit from the left at
each point where f has a jump discontinuity;
• at the endpoints, i.e., for x = ±π, converges to the average of f (−π+) and f (π−).
We have already seen this theorem in action in example 2: the point x = 0 is a jump discontinuity,
the limit from the right is 2 and the limit from the left is 1; at x = 0 the Fourier series clearly
adds up to 3/2, which is the average of 1 and 2. Note also that at the endpoints we have
limx→π− f (x) = 2 and limx→−π+ f (x) = 1; substituting x = ±π makes every term sin kx vanish,
so the series again adds up to 3/2, as expected.
Interesting numerical series can sometimes be obtained by evaluating Fourier series at specific
points.
IExample 3 For instance, evaluating the series of example 2 at x = π/2 we observe that

$$
\sin\frac{\pi}{2} = 1, \quad \sin\frac{3\pi}{2} = -1, \quad \sin\frac{5\pi}{2} = 1, \quad \sin\frac{7\pi}{2} = -1, \quad \ldots, \quad \sin\frac{(2n+1)\pi}{2} = (-1)^n.
$$

Hence, knowing that f (π/2) = 2, we deduce that


$$
2 = \frac{3}{2} + \frac{2}{\pi} \left( 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots \right).
$$

Simplifying, we get a famous result known as Leibniz formula:



$$
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}.
$$

This formula had been independently discovered several times before Leibniz (1673); it seems
certain that it was known to Indian scholars before 1500. In principle, it could be used to
calculate π, if the rate of convergence weren’t so painfully slow. J
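The slow convergence is easy to quantify: the error of four times the n-term partial sum behaves roughly like 1/n, so each extra decimal digit of π costs about ten times as many terms. A short sketch:

```python
import math

# Partial sums of 1 - 1/3 + 1/5 - 1/7 + ... ; four times the partial
# sum approaches pi, with error roughly 1/n.
def leibniz_partial(n):
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

err_100 = abs(4 * leibniz_partial(100) - math.pi)
err_10000 = abs(4 * leibniz_partial(10000) - math.pi)
# 100x more terms buys only about 100x more accuracy
```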

IExample 4 Find the Fourier series of f (x) = x for x in the interval (−π, π).
Solution: Since f is continuous and increasing, the series will converge to f (x) for every x in
(−π, π), and to zero, which is the average of f (π−) and f (−π+) at the endpoints. Moreover,
since f (x) is odd, by symmetry all the coefficients ak are zero.
For the bk we find that
$$
b_k = \frac{1}{\pi} \int_{-\pi}^{\pi} x \sin kx \, dx = \frac{2}{\pi} \int_{0}^{\pi} x \sin kx \, dx = \frac{2}{k\pi} \Bigl[ -x \cos kx \Bigr]_0^{\pi} + \frac{2}{k\pi} \int_0^{\pi} \cos kx \, dx.
$$
The last integral on the right-hand side is always zero, and the remaining term may be simplified
recalling that
cos kπ = (−1)k .
It follows that
$$
b_k = -\frac{2}{k\pi} \cdot \pi (-1)^k = \frac{2(-1)^{k+1}}{k}.
$$
Thus

$$
x = 2 \sum_{k=1}^{\infty} \frac{(-1)^{k+1} \sin kx}{k} \qquad [\text{for all } x \text{ in } (-\pi, \pi).]
$$
Two partial sums of this Fourier series are shown in the pictures that follow, the first one
truncated after −(sin 4x)/4, the second one truncated after −(sin 20x)/20. Again, you should
experiment with GNUPLOT and draw some plots on your own.

Note that for x = ±π every term sin kx vanishes, and so the series converges to zero at the
endpoints, as expected. J
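The closed form bk = 2(−1)^(k+1)/k just derived can be cross-checked numerically; the sketch below (grid size our choice) uses a midpoint Riemann sum for the coefficient integral.

```python
import numpy as np

# Check b_k = 2*(-1)**(k+1)/k for f(x) = x on (-pi, pi), via a
# midpoint Riemann sum for b_k = (1/pi) * integral of x*sin(kx).
N = 100000
x = -np.pi + (np.arange(N) + 0.5) * (2 * np.pi / N)
dx = 2 * np.pi / N

def b_numeric(k):
    return float(np.sum(x * np.sin(k * x)) * dx / np.pi)

b1n, b2n, b3n = b_numeric(1), b_numeric(2), b_numeric(3)
# expected: 2, -1, 2/3
```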

PERIODIC EXTENSION
If we compare the graph of the Fourier series in the last example with the graph of g(x) = x
for all x, we note immediately that the two coincide only in the interval (−π, π). Outside the
interval (−π, π), which we call the fundamental interval, the two graphs separate from each
other.
If a function f is not defined outside (−π, π), then the Fourier series formed from f converges
to the so-called periodic extension of f , which is the function obtained by repeating the definition
of f outside the fundamental interval by the recursive formula
f (x + 2π) = f (x).

Using this formula once, one finds f (x) for x in the interval (π, 3π); using it again, one derives
f (x) in the interval (3π, 5π), and so on. Using it backwards, intervals to the left of (−π, π) are
covered. Only functions which are periodic with period 2π over the whole x axis may be equal
to their own Fourier series. If a function is not periodic, then outside the fundamental interval
its Fourier series (assuming it exists) converges to the periodic extension.
If a function f is periodic with period 2π and integrable on (−π, π), then

$$
\int_{-\pi}^{\pi} f(x)\, dx = \int_{a}^{a+2\pi} f(x)\, dx
$$
for any real a; as an exercise, prove this. Similarly, the orthogonality relations remain valid
if the interval (−π, π) is replaced by any interval of length 2π: for instance, (0, 2π) but also
(1947, 1947 + 2π) or (a, a + 2π) for any real a.
Indeed, many authors refer to (0, 2π) as the "fundamental interval", instead of (−π, π) as
we did in these notes. It's basically a matter of taste. However, some care should be taken
when a portion of the graph of a non-periodic function is “cut out” and expanded into a Fourier
series, because the periodic extension would be different. For instance, going back to the last
example, if we expand f (x) = x for 0 < x ≤ 2π in a Fourier series, we get a saw-tooth graph
that oscillates between 0 and 2π: quite different from the previous one, which oscillates between
−π and π.
Loosely speaking, a function is “naturally” periodic if it’s periodic and is described by a
single formula for every x between −∞ and +∞. For example, the functions 5/(13−5 cos x) and
sin 5x/ sin x, which we’ll meet again in chapter 2, are naturally periodic. For such functions, one
may calculate the Fourier coefficients by considering any interval of length 2π, and the Fourier
series (if it converges) will converge to the same limit. If, on the other hand, a periodic function
has been created by selecting an interval of length 2π of the graph of a non-periodic function,
and extending it by repetition, then it matters where the selection was made.
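The rule f (x + 2π) = f (x) can be implemented directly: reduce any x into the fundamental interval by subtracting a multiple of 2π, then evaluate f there. A minimal sketch (the function name is ours, for illustration):

```python
import math

# Reduce x into the fundamental interval (-pi, pi] by shifting by a
# multiple of 2*pi, then apply f there: this is the periodic extension.
def periodic_extension(f, x):
    y = math.fmod(x + math.pi, 2 * math.pi)
    if y <= 0:
        y += 2 * math.pi
    return f(y - math.pi)
```

For f (x) = x this reproduces the saw-tooth of example 4: a point like x = π + 0.5 is mapped back to 0.5 − π before f is applied.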

IExample 5 Compare the graphs of the periodic functions f , g and h defined as follows:

f (x) = x² if 0 < x < 2π,  f (x) = f (x − 2π) for all x;
g(x) = x² if −π < x < π,  g(x) = g(x − 2π) for all x;
h(x) = 0 if −π < x < 0,  h(x) = x² if 0 < x < π,  h(x) = h(x − 2π) for all x.

Solution: The three functions coincide with x² and with each other inside the interval (0, π).
The fundamental interval is (0, 2π) for f , but it is (−π, π) for g and h.

[Figure: the graphs of f , g and h over several periods; f rises to 4π², g and h to π².]


The picture explains itself. Note that the three graphs are drawn on the same scale. J

IExample 6 Find the Fourier coefficients of the function f (x) defined in example 5.
Solution: We find immediately:
$$
a_0 = \frac{1}{2\pi} \int_0^{2\pi} x^2 \, dx = \frac{4\pi^2}{3}
$$

and

$$
a_k = \frac{1}{\pi} \int_0^{2\pi} x^2 \cos kx \, dx. \qquad [k > 0]
$$

Integrating by parts (twice), we find that

$$
a_k = \frac{1}{\pi} \left[ \frac{x^2 \sin kx}{k} \right]_0^{2\pi} + \frac{2}{\pi} \left[ \frac{x \cos kx}{k^2} \right]_0^{2\pi} - \frac{2}{\pi} \left[ \frac{\sin kx}{k^3} \right]_0^{2\pi}
$$

Substituting cos 2kπ = cos 0 = 1 and sin 2kπ = sin 0 = 0, we find that

$$
a_k = \frac{4}{k^2} \qquad [k > 0].
$$
Similarly, we find that

$$
b_k = \frac{1}{\pi} \int_0^{2\pi} x^2 \sin kx \, dx.
$$
Integration by parts yields

$$
b_k = \frac{1}{\pi} \left[ \frac{-x^2 \cos kx}{k} \right]_0^{2\pi} + \frac{1}{\pi} \left[ \frac{2x \sin kx}{k^2} \right]_0^{2\pi} + \frac{1}{\pi} \left[ \frac{2 \cos kx}{k^3} \right]_0^{2\pi}.
$$

Simplifying, we find that

$$
b_k = -\frac{4\pi}{k},
$$
and finally that

$$
f(x) = \frac{4\pi^2}{3} + \sum_{k=1}^{\infty} \frac{4 \cos kx}{k^2} - \sum_{k=1}^{\infty} \frac{4\pi \sin kx}{k}.
$$

This series converges to x² only if 0 < x < 2π; it converges to 2π², i.e., the average of the limit
from the right and the limit from the left, if x = 0 or x = 2π. J
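The coefficients of example 6 can be cross-checked numerically; the sketch below (grid size arbitrary) approximates the integrals over (0, 2π) by a midpoint Riemann sum.

```python
import numpy as np

# Check a_k = 4/k**2 and b_k = -4*pi/k for f(x) = x**2 on (0, 2*pi).
N = 400000
x = (np.arange(N) + 0.5) * (2 * np.pi / N)
dx = 2 * np.pi / N

def a(k):
    return float(np.sum(x**2 * np.cos(k * x)) * dx / np.pi)

def b(k):
    return float(np.sum(x**2 * np.sin(k * x)) * dx / np.pi)

a3, b3 = a(3), b(3)    # expect 4/9 and -4*pi/3
```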

IExample 7 Find the Fourier series of the function g(x) defined in example 5.
Solution: We have

$$
a_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} x^2 \, dx = \frac{\pi^2}{3}.
$$
We then consider

$$
a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} x^2 \cos kx \, dx. \qquad [k > 0]
$$

Integrating by parts (twice), we find that


$$
a_k = \frac{1}{\pi} \left[ \frac{x^2 \sin kx}{k} \right]_{-\pi}^{\pi} + \frac{2}{\pi} \left[ \frac{x \cos kx}{k^2} \right]_{-\pi}^{\pi} - \frac{2}{\pi} \left[ \frac{\sin kx}{k^3} \right]_{-\pi}^{\pi}
$$

Substituting cos(±kπ) = (−1)^k and sin(±kπ) = 0 and simplifying, we find that

$$
a_k = \frac{4(-1)^k}{k^2} \qquad [k > 0].
$$
The coefficients bk are all equal to zero. As an exercise, convince yourself of this: it may be seen
by integration (the details are similar), but also (and much more easily) by symmetry, as you
learnt in first-year calculus. We’ll revise symmetry in the next section.
So, finally, we get the Fourier series

$$
g(x) = \frac{\pi^2}{3} + 4 \sum_{k=1}^{\infty} \frac{(-1)^k \cos kx}{k^2}.
$$

This series represents g(x) and converges to x² in the fundamental interval (−π < x < π). Note
that g(x) is continuous throughout.
Corollary: We can derive a couple of interesting results from this example. Substituting x = 0
in the last equation, we get

$$
0 = \frac{\pi^2}{3} + \sum_{k=1}^{\infty} \frac{4(-1)^k}{k^2}.
$$

Therefore,

$$
\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k^2} = \frac{\pi^2}{12}.
$$
Similarly, substituting x = π we get

$$
\pi^2 = \frac{\pi^2}{3} + \sum_{k=1}^{\infty} \frac{4(-1)^k}{k^2} \cos k\pi = \frac{\pi^2}{3} + \sum_{k=1}^{\infty} \frac{4}{k^2},
$$

because cos kπ = (−1)^k. Simple manipulations then yield immediately

$$
\sum_{k=1}^{\infty} \frac{1}{k^2} = \frac{\pi^2}{6}.
$$

Both results were first discovered by Euler, by a completely different approach. J
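Both sums are easy to check by computing partial sums; the sketch below uses 100,000 terms (an arbitrary cutoff) and compares against the closed forms π²/6 and π²/12.

```python
import math

# Partial sums of the two series in the corollary:
# sum 1/k^2 -> pi^2/6, and sum (-1)^(k+1)/k^2 -> pi^2/12.
N = 100000
basel = sum(1 / k**2 for k in range(1, N + 1))
alternating = sum((-1) ** (k + 1) / k**2 for k in range(1, N + 1))
```

Note that the alternating series converges far faster: its tail is bounded by the first omitted term, about 1/N², while the tail of the positive series is about 1/N.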

IExample 8 Find the Fourier coefficients of the function h(x) defined in example 5.
Solution: We have:

$$
a_0 = \frac{1}{2\pi} \int_0^{\pi} x^2 \, dx = \frac{\pi^2}{6}
$$

and then, integrating by parts,

$$
a_k = \frac{1}{\pi} \int_0^{\pi} x^2 \cos kx \, dx = \frac{2(-1)^k}{k^2} \qquad [k > 0]
$$

and

$$
b_k = \frac{1}{\pi} \int_0^{\pi} x^2 \sin kx \, dx = \begin{cases} \dfrac{\pi}{k} - \dfrac{4}{\pi k^3} & \text{if } k \text{ is odd}, \\[1ex] -\dfrac{\pi}{k} & \text{if } k \text{ is even}. \end{cases}
$$
As an exercise, fill in the details of this calculation. J

2. Sine and Cosine Series

Suppose we wish to find the Fourier series of an even integrable function on (−π, π). Recall that
a function f (x) is called even if

f (x) = f (−x) for all x,

while a function g(x) is called odd if

g(x) = −g(−x) for all x.

Obviously, if g is odd and continuous, then g(0) = 0 always. The elementary formulas
$$
\int_{-c}^{c} f(x)\, dx = 2 \int_0^c f(x)\, dx \qquad \text{and} \qquad \int_{-c}^{c} g(x)\, dx = 0,
$$

which you saw in first-year calculus, are applicable to any integrable even function f or odd g,
and are often very useful.
Naturally, not every function is even or odd, but if a function f (x) is defined for every x,
then it always has an “even part” and an “odd part”. This follows from the trivial identity

$$
f(x) \equiv \frac{f(x) + f(-x)}{2} + \frac{f(x) - f(-x)}{2}.
$$

Now, on the right-hand side, the first term is even and the second term is odd; convince yourself
of this. For example, the even part of ex is cosh x, the odd part is sinh x, and the two parts add
up to ex .
As an exercise, show that the product of two even functions or two odd functions is even,
while the product of an even and an odd function is odd.
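The even/odd decomposition translates directly into code; for f = exp, the two parts should reproduce cosh and sinh, as noted above. A minimal sketch:

```python
import math

# Even and odd parts of f, from the identity above; for f = exp
# these are cosh and sinh, respectively.
def even_part(f, x):
    return (f(x) + f(-x)) / 2

def odd_part(f, x):
    return (f(x) - f(-x)) / 2
```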
So, for an even function f we have, by symmetry:
$$
a_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, dx = \frac{1}{\pi} \int_0^{\pi} f(x)\, dx,
$$

and, then, also by symmetry,

$$
a_k = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos kx \, dx = \frac{2}{\pi} \int_0^{\pi} f(x) \cos kx \, dx, \qquad b_k = 0.
$$

We see that the Fourier series of an even function consists of cosine terms only. Similarly, if f
is an odd integrable function on (−π, π), then by symmetry

$$
a_k = 0, \qquad b_k = \frac{2}{\pi} \int_0^{\pi} f(x) \sin kx \, dx:
$$

the Fourier series of an odd function has sine terms only.



IExample 9 Find the Fourier series of the function f (x) = x |x| on (−π, π).
Solution: First of all, note that f (x) is odd. This may be seen in two ways, either by observing
that f (x) is the product of x and |x|: since the first factor is an odd function and the second
factor is an even function, the product is odd. Or, from the definition, observing that

f (−x) = −x · | − x| = −x |x| = −f (x).

Having established that f (x) is odd, we deduce that for every k:

$$
a_k = 0, \qquad b_k = \frac{2}{\pi} \int_0^{\pi} x\,|x| \sin kx \, dx.
$$
But clearly, if x ranges from 0 to π, then |x| = x and so x |x| = x². It follows that

$$
b_k = \frac{2}{\pi} \int_0^{\pi} x^2 \sin kx \, dx = \frac{4(\cos k\pi - 1)}{\pi k^3} - \frac{2\pi \cos k\pi}{k};
$$

as an exercise, verify this. Finally, substituting cos kπ = (−1)^k, and noting that

$$
\cos k\pi - 1 = \begin{cases} -2 & \text{if } k \text{ is an odd integer}, \\ 0 & \text{if } k \text{ is an even integer}, \end{cases}
$$

we get that

$$
b_k = \begin{cases} -\dfrac{8}{\pi k^3} + \dfrac{2\pi}{k} & \text{if } k \text{ is odd}, \\[1ex] -\dfrac{2\pi}{k} & \text{if } k \text{ is even}. \end{cases}
$$
Therefore:

$$
f(x) = \sum_{k \text{ odd}} \left( \frac{2\pi}{k} - \frac{8}{\pi k^3} \right) \sin kx - \sum_{k \text{ even}} \frac{2\pi}{k} \sin kx.
$$

[Figure: the odd periodic extension of x|x|, oscillating between −π² and π².]
The Fourier series converges to the function sketched above. Note that f (x) = −f (−x) and
f (±π) = 0, which is the average between the limit from the left and the limit from the right.
Note also that f (x) coincides with x² inside (0, π) and with −x² inside (−π, 0). J
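The closed form for bk in example 9 can be verified against direct numerical integration; since f is odd, it is enough to integrate over (0, π). A sketch (grid size our choice):

```python
import numpy as np

# Check b_k = 4*(cos(k*pi) - 1)/(pi*k**3) - 2*pi*cos(k*pi)/k for
# f(x) = x*|x| on (-pi, pi); f is odd, so integrate over (0, pi) only.
N = 200000
x = (np.arange(N) + 0.5) * (np.pi / N)
dx = np.pi / N

def b_numeric(k):
    return float(2 / np.pi * np.sum(x**2 * np.sin(k * x)) * dx)

def b_formula(k):
    c = np.cos(k * np.pi)
    return 4 * (c - 1) / (np.pi * k**3) - 2 * np.pi * c / k
```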

HALF RANGE EXPANSIONS


Suppose a function f (x), piecewise continuous and piecewise monotonic, is defined only in the
interval (0, π). We cannot, of course, find the Fourier series of this function, since we have no
information on the values of f (x) between −π and 0 and, to calculate the Fourier coefficients
ak and bk we need to integrate over the entire interval (−π, π).
However, we have the freedom to define f (x) in any manner we please on (−π, 0). As long
as the extended function is piecewise continuous and piecewise monotonic on [−π, π], we may
then form the Fourier series of such a new function, and obtain a series that represents the
original function f (x) between 0 and π, and represents whatever extension we have chosen in (−π, 0).
In practice, there are only two useful choices:
we may extend f (x) in such a way that the ex-
tended function is even, and we may extend f (x)
in such a way that the extended function is odd.
Suppose then that f (x) is initially given only
on (0, π), as shown in the picture (the unshaded
area). Extend f (x) to (−π, 0) by the formula

f (x) = f (−x).

We then obtain an even function on (−π, π), whose Fourier series has only cosine terms, and
which coincides with our original function on (0, π). For x between −π and 0, the graph of such
a function will be just the mirror image of the graph between 0 and π, reflected in the y axis.

IDefinition: The Fourier series obtained in this way is called a Fourier-cosine series. It is
periodic with period 2π, and represents the even periodic extension of the original function. J

IExample 10 Find the Fourier-cosine expansion of f (x) = x for 0 < x < π.


Solution: The coefficients of the expansion are
$$
a_0 = \frac{1}{\pi} \int_0^{\pi} x \, dx = \frac{\pi}{2}, \qquad a_k = \frac{2}{\pi} \int_0^{\pi} x \cos kx \, dx = \begin{cases} 0 & \text{if } k = 2, 4, 6, 8, \ldots \\ -4/k^2\pi & \text{if } k = 1, 3, 5, 7, \ldots \end{cases}
$$

It follows that

$$
x = \frac{\pi}{2} - \frac{4}{\pi} \sum_{k \text{ odd}} \frac{\cos kx}{k^2}. \qquad [0 \leq x \leq \pi]
$$
You should plot a few partial sums of this series, and compare its convergence rate with that of
example 4. This one is clearly better; the reason has to do with the fact that the function of
this example is continuous (why?), whereas the one of example 4 is not. J
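The difference in convergence rate can be made concrete numerically. In the sketch below (evaluation point and truncation length are arbitrary choices of ours), both truncated series for x are evaluated at x = 1: the cosine coefficients decay like 1/k², the sine coefficients of example 4 only like 1/k, and the truncation errors reflect this.

```python
import numpy as np

# Truncated Fourier-cosine series of example 10 and full-range sine
# series of example 4, both representing x on (0, pi).
def cosine_series(t, terms):
    ks = 2 * np.arange(terms) + 1                  # odd k only
    return np.pi / 2 - (4 / np.pi) * float(np.sum(np.cos(ks * t) / ks**2))

def sine_series(t, terms):
    ks = np.arange(1, terms + 1)
    return 2 * float(np.sum((-1.0) ** (ks + 1) * np.sin(ks * t) / ks))

err_cos = abs(cosine_series(1.0, 20) - 1.0)
err_sin = abs(sine_series(1.0, 20) - 1.0)
# err_cos is far smaller than err_sin at the same truncation length
```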
The obvious alternative to a Fourier-cosine ex-
pansion is the Fourier-sine. With the same as-
sumptions as before, one extends f (x) to the
interval (−π, 0) by the formula

f (x) = −f (−x),

as shown in the picture on the right. Once


again, the unshaded area shows where f (x) is
originally defined.

The function obtained in this way is then odd; its plot between −π and 0 will be the mirror
image of the mirror image of the original plot, reflected first on the y axis and then on the x
axis.

IDefinition: The Fourier series obtained in this way is called a Fourier-sine series. It is periodic
with period 2π, and represents the odd periodic extension of the original function. J

Let us see an example of the same function expanded both ways.

IExample 11 Consider a function defined in (0, π) as follows:

$$
f(x) = \begin{cases} 0 & \text{if } x < \pi/4, \\ 5 & \text{if } \pi/4 \leq x \leq 3\pi/4, \\ 0 & \text{if } x > 3\pi/4. \end{cases}
$$
The plot of f (x) is clearly a rectangle of height 5 and width π/2, resting in the middle of the
interval (0, π). Expanding f (x) in a Fourier-cosine series, we find that
$$
a_0 = \frac{1}{\pi} \int_{\pi/4}^{3\pi/4} 5 \, dx = \frac{5}{2}, \qquad a_k = \frac{2}{\pi} \int_{\pi/4}^{3\pi/4} 5 \cos kx \, dx = \frac{10 \bigl( \sin 3k\pi/4 - \sin k\pi/4 \bigr)}{k\pi}.
$$

Expanding f (x) in a Fourier-sine series we find that

$$
b_k = \frac{2}{\pi} \int_{\pi/4}^{3\pi/4} 5 \sin kx \, dx = \frac{10 \bigl( \cos k\pi/4 - \cos 3k\pi/4 \bigr)}{k\pi}.
$$
Routine calculations yield, respectively:

$$
f(x) = \frac{5}{2} - \frac{10}{\pi} \left( \cos 2x - \frac{\cos 6x}{3} + \frac{\cos 10x}{5} - \frac{\cos 14x}{7} + \frac{\cos 18x}{9} - \cdots \right)
$$

and

$$
f(x) = \frac{10\sqrt{2}}{\pi} \left( \sin x - \frac{\sin 3x}{3} - \frac{\sin 5x}{5} + \frac{\sin 7x}{7} + \frac{\sin 9x}{9} - \frac{\sin 11x}{11} - \frac{\sin 13x}{13} + \cdots \right).
$$
Both series converge to the same f(x) for x between 0 and π, but outside this interval they converge to different limits. The plots of the two (truncated) series are shown side by side in the following picture. The Fourier-cosine on the left has been truncated after (cos 18x)/9; the Fourier-sine on the right has been truncated after −(sin 19x)/19.
[Figure: the truncated Fourier-cosine series (left) and Fourier-sine series (right), plotted for −2π ≤ x ≤ 2π.]

The pictures suggest that the convergence rate of the Fourier-cosine series is much better than that of the Fourier-sine. As usual, you are encouraged to try some more plotting on your own. Keep, for example, terms as high as (cos 50x)/25 or (sin 49x)/49, respectively; what do you see? J
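As a quick numerical sanity check (a sketch of my own, not part of the notes), one can sum many terms of both expansions at a point inside the plateau, where f(x) = 5, and watch both partial sums approach that value:

```python
import numpy as np

def rect_cosine(x, n):
    # Fourier-cosine partial sum: a0 + sum a_k cos(kx), with a_k as computed above.
    k = np.arange(1, n + 1)
    ak = 10 * (np.sin(3 * k * np.pi / 4) - np.sin(k * np.pi / 4)) / (k * np.pi)
    return 5 / 2 + np.sum(ak * np.cos(k * x))

def rect_sine(x, n):
    # Fourier-sine partial sum: sum b_k sin(kx), with b_k as computed above.
    k = np.arange(1, n + 1)
    bk = 10 * (np.cos(k * np.pi / 4) - np.cos(3 * k * np.pi / 4)) / (k * np.pi)
    return np.sum(bk * np.sin(k * x))

x0 = np.pi / 2          # middle of the plateau, where f(x0) = 5
print(rect_cosine(x0, 1000), rect_sine(x0, 1000))
```

Both numbers sit very close to 5; repeating the experiment near the jumps at π/4 and 3π/4 shows much slower convergence.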

IExample 12 Find the Fourier-sine expansion of f (x) = cos x for 0 < x < π.
Solution: We find immediately:
$$b_k = \frac{2}{\pi}\int_0^{\pi}\cos x\sin kx\,dx = \frac{1}{\pi}\int_0^{\pi}\bigl[\sin(k+1)x + \sin(k-1)x\bigr]\,dx.$$

It follows (note that the case k = 1 needs to be treated separately):
$$b_k = \begin{cases} 0 & \text{if } k \text{ is odd,} \\ 4k/\pi(k^2 - 1) & \text{if } k \text{ is even.} \end{cases}$$

So, finally, we have:
$$\cos x = \frac{4}{\pi}\sum_{k\ \mathrm{even}}\frac{k\sin kx}{k^2 - 1} = \frac{8}{\pi}\sum_{n=1}^{\infty}\frac{n\sin 2nx}{4n^2 - 1}; \qquad [0 < x < \pi]$$

the two series on the right-hand side are obviously equivalent (convince yourself of this). Between
−π and 0 this series converges to − cos(−x), which is equal to − cos x. Hence the extended
function is discontinuous at 0, ±π, ±2π, etc.
Note that we have expanded cos x into a series of sines. Needless to say, the expansion of
cos x into a series of cosines consists of one term only! J
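A short numerical check (my own sketch) confirms both the convergence to cos x inside (0, π) and the odd extension to −cos x on (−π, 0):

```python
import numpy as np

def sine_series_for_cos(x, n_terms):
    # Partial sum of (8/pi) * sum_{n>=1} n sin(2nx) / (4n^2 - 1).
    n = np.arange(1, n_terms + 1)
    return (8 / np.pi) * np.sum(n * np.sin(2 * n * x) / (4 * n**2 - 1))

for x0 in (0.5, 1.0, 2.5):
    print(f"x = {x0}: series = {sine_series_for_cos(x0, 5000):.6f},"
          f" cos x = {np.cos(x0):.6f}")
```

At a negative point such as x = −1 the same partial sum lands near −cos 1, the value of the odd periodic extension.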

IExample 13 Find the Fourier-cosine expansion of f (x) = sin6 x for 0 < x < π.
Solution: There are two ways of doing this problem, the hard way and the easy way. The hard
way begins with calculating the integrals
$$\int_0^{\pi}\sin^6 x\cos kx\,dx,$$

until one realizes that only four of them are different from zero, so perhaps there must have
been an easier option.
And there was. You probably saw in high school the formula for the n-th power of the
binomial (a + b):
$$(a+b)^n = a^n + \binom{n}{1}a^{n-1}b + \binom{n}{2}a^{n-2}b^2 + \binom{n}{3}a^{n-3}b^3 + \cdots + b^n. \qquad [\text{for } n \in \mathbb{N}]$$
The binomial coefficients $\binom{n}{k}$ are given by expressions like
$$\binom{n}{1} = n, \quad \binom{n}{2} = \frac{n(n-1)}{2!}, \quad \binom{n}{3} = \frac{n(n-1)(n-2)}{3!}, \quad \binom{n}{4} = \frac{n(n-1)(n-2)(n-3)}{4!},$$

and so on. They may be arranged into Pascal’s triangle, which you also saw at school.

Now, combining Euler’s formula with the binomial formula, we get that
µ ¶6 6 µ ¶
6 eix − e−ix 1 X 6
sin x = = (−1)k ei(6−k)kx · e−ikx =
i2 −64 k
k=0
i6x i4x i2x
e − 6e − 20 + 15e−i2x − 6e−i4x + e−i6x
+ 15e
= =
−64
10 − 15 cos 2x + 6 cos 4x − cos 6x
= .
32
This is the answer; it’s a terminating Fourier expansion. Any even power of sin x and any power
of cos x may be expanded as a linear combination of cosines; any odd power of sin x may be
expanded as a linear combination of sines. Convince yourself of this. J
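The terminating expansion is easy to check numerically; this little script (mine, not from the notes) confirms the identity to machine precision:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1001)
lhs = np.sin(x) ** 6
rhs = (10 - 15 * np.cos(2 * x) + 6 * np.cos(4 * x) - np.cos(6 * x)) / 32
print("max |sin^6 x - series| =", np.max(np.abs(lhs - rhs)))
```

The maximum deviation is at the level of floating-point round-off, as expected for an exact identity.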

3. Change of Interval of Expansion

If a function f(x) is periodic with period 2L, where L ≠ π, the Fourier series is easily obtained by a change of variable. Suppose f(x) = f(x + 2L) for every x; then the substitution
$$u = \frac{\pi x}{L} \quad\longleftrightarrow\quad x = \frac{Lu}{\pi}$$
maps x into u and x + 2L into π(x + 2L)/L = u + 2π. Therefore, in terms of the new variable u, the function f is periodic with period 2π. Simple calculations yield immediately:
$$f(x) \sim \sum_{k=0}^{\infty}\left(a_k\cos\frac{k\pi x}{L} + b_k\sin\frac{k\pi x}{L}\right) \qquad [-L < x < L]$$
where
$$a_0 = \frac{1}{2L}\int_{-L}^{L}f(x)\,dx, \qquad a_k = \frac{1}{L}\int_{-L}^{L}f(x)\cos\frac{k\pi x}{L}\,dx \qquad [k = 1, 2, \ldots]$$
and
$$b_k = \frac{1}{L}\int_{-L}^{L}f(x)\sin\frac{k\pi x}{L}\,dx. \qquad [k = 1, 2, \ldots]$$
Note that if the series above converges, its sum (regarded as a function of all x) is periodic, with
period 2L.

IExample 14 Find the Fourier coefficients of the function f(x) that is periodic with a period of 6 units, and coincides with |x² − 1| if −3 < x < 3.
Solution: [Figure: graph of f(x) = |x² − 1| on the fundamental interval (−3, 3), produced with the free package gnuplot.] We note that L = 3, and that by symmetry all the coefficients bk are zero. Also by symmetry, as f(x) is even:
$$\int_{-3}^{3}f(x)\cos\frac{\pi}{3}kx\,dx = 2\int_{0}^{3}f(x)\cos\frac{\pi}{3}kx\,dx.$$
The integrations may be done by splitting the range, because

$$|x^2 - 1| = \begin{cases} 1 - x^2 & \text{if } x \text{ is between } -1 \text{ and } 1, \\ x^2 - 1 & \text{everywhere else.} \end{cases}$$
It follows that
$$a_0 = \frac{1}{3}\int_0^1(1 - x^2)\,dx + \frac{1}{3}\int_1^3(x^2 - 1)\,dx = \frac{22}{9},$$
and that
$$a_k = \frac{2}{3}\int_0^1(1 - x^2)\cos\frac{\pi}{3}kx\,dx + \frac{2}{3}\int_1^3(x^2 - 1)\cos\frac{\pi}{3}kx\,dx = \frac{72\sin\frac{\pi}{3}k}{k^3\pi^3} - \frac{24\cos\frac{\pi}{3}k}{k^2\pi^2} + \frac{36(-1)^k}{k^2\pi^2}.$$
The calculations are routine (integration by parts) but tedious; as an exercise, verify them. The final result for ak may be simplified by noting that cos(πk/3) takes the values 1/2, −1/2, −1, −1/2, 1/2, 1 as k ranges from 1 to 6 and then repeats the same values in cycles of 6. Similarly, sin(πk/3) takes the values √3/2, √3/2, 0, −√3/2, −√3/2, 0 as k ranges from 1 to 6 and then repeats the same values cyclically. J
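To avoid slogging through the integration by parts, the closed form for ak can be cross-checked against a numerical quadrature. The sketch below (the trapezoid sum and names are my own) compares the two for the first few k:

```python
import numpy as np

def a_k_closed(k):
    # The closed form derived above, with L = 3.
    return (72 * np.sin(k * np.pi / 3) / (k**3 * np.pi**3)
            - 24 * np.cos(k * np.pi / 3) / (k**2 * np.pi**2)
            + 36 * (-1) ** k / (k**2 * np.pi**2))

x = np.linspace(-3, 3, 200001)
f = np.abs(x**2 - 1)

def a_k_numeric(k):
    # (1/L) * integral of f(x) cos(k pi x / 3) over (-3, 3), by the trapezoid rule.
    g = f * np.cos(k * np.pi * x / 3)
    return np.sum((g[1:] + g[:-1]) / 2 * np.diff(x)) / 3

for k in range(1, 7):
    print(k, a_k_numeric(k), a_k_closed(k))
```

The two columns agree to several decimal places, which is a cheap way of catching sign or factor errors in a long hand computation.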

Fourier-cosine and Fourier-sine series of functions defined over an interval of length L are obtained in the same way. For example, if f(x) is defined on (0, L), then:
$$f(x) \sim \sum_{k=0}^{\infty}a_k\cos\frac{k\pi x}{L}, \qquad [0 < x < L]$$
where
$$a_0 = \frac{1}{L}\int_0^L f(x)\,dx, \qquad a_k = \frac{2}{L}\int_0^L f(x)\cos\frac{k\pi x}{L}\,dx \qquad [k = 1, 2, \ldots]$$
and
$$f(x) \sim \sum_{k=1}^{\infty}b_k\sin\frac{k\pi x}{L} \qquad [0 < x < L]$$
where
$$b_k = \frac{2}{L}\int_0^L f(x)\sin\frac{k\pi x}{L}\,dx. \qquad [k = 1, 2, \ldots]$$
Regarded as a function defined for all x, the cosine series is even, periodic with period 2L; the sine series is odd, periodic with period 2L.

IExample 15 Consider the function defined as f (x) = x for x in (0, 10). Find the expansion
of f (x): (a) In a Fourier series, (b) In a Fourier-cosine series, (c) In a Fourier-sine series.
Sketch, in each case, the graph of the corresponding series.
Solution: (a) The period of the expanded function is 2L = 10, therefore L = 5. It follows:
$$a_0 = \frac{1}{10}\int_0^{10}x\,dx = 5,$$
$$a_k = \frac{1}{5}\int_0^{10}x\cos\frac{\pi}{5}kx\,dx = 0 \qquad [k > 0],$$

and finally
$$b_k = \frac{1}{5}\int_0^{10}x\sin\frac{\pi}{5}kx\,dx = -\frac{10}{\pi k}.$$
Therefore
$$x = 5 - \frac{10}{\pi}\sum_{k=1}^{\infty}\frac{\sin\frac{\pi}{5}kx}{k} \qquad [0 < x < 10].$$

Outside the interval (0, 10) the series converges to the periodic extension of f (x), as shown in
the following picture. The expanded function is neither even nor odd.

[Figure: the sawtooth periodic extension of f(x) = x, period 2L = 10, plotted for −30 ≤ x ≤ 30.]

(b) For the Fourier-cosine expansion we find that L = 10. It follows that
$$a_0 = \frac{1}{10}\int_0^{10}x\,dx = 5,$$
and also that
$$a_k = \frac{2}{10}\int_0^{10}x\cos\frac{\pi}{10}kx\,dx = \begin{cases} 0 & \text{if } k \text{ is even positive,} \\ -\dfrac{40}{\pi^2 k^2} & \text{if } k \text{ is odd.} \end{cases}$$
We obtain the expansion
$$x = 5 - \frac{40}{\pi^2}\sum_{k\ \mathrm{odd}}\frac{\cos\frac{\pi}{10}kx}{k^2} \qquad [0 < x < 10],$$

which also converges if x is between −10 and 0, but to −x. In other words, the series converges to |x| for all x between −10 and 10. The period of this expansion is 2L = 20, as the following picture shows.
[Figure: the triangle-wave even periodic extension, equal to |x| on (−10, 10), period 2L = 20, plotted for −30 ≤ x ≤ 30.]

(c) Finally, we turn to the Fourier-sine expansion. We find again, like in part (b), that L = 10 and hence that
$$b_k = \frac{2}{10}\int_0^{10}x\sin\frac{\pi}{10}kx\,dx = -\frac{20(-1)^k}{\pi k}. \qquad [k = 1, 2, 3, 4, \ldots]$$

The corresponding expansion is
$$x = \frac{20}{\pi}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\sin\frac{\pi}{10}kx}{k} \qquad [0 < x < 10].$$
But because f(x) = x is an odd function, the series above converges to x in the whole interval (−10, 10), as shown.
[Figure: the odd periodic extension of f(x) = x, period 2L = 20, plotted for −30 ≤ x ≤ 30.]

Outside this interval, the series converges to the periodic extension with period 2L = 20. J
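All three expansions can be tested at once at a point inside (0, 10). In this sketch (function names are mine) each partial sum should land near x₀ = 4:

```python
import numpy as np

def full_series(x, n):        # part (a): period 10
    k = np.arange(1, n + 1)
    return 5 - (10 / np.pi) * np.sum(np.sin(np.pi * k * x / 5) / k)

def cosine_series(x, n):      # part (b): period 20; only odd k appear
    k = np.arange(1, n + 1, 2)
    return 5 - (40 / np.pi**2) * np.sum(np.cos(np.pi * k * x / 10) / k**2)

def sine_series(x, n):        # part (c): period 20
    k = np.arange(1, n + 1)
    return (20 / np.pi) * np.sum((-1.0) ** (k + 1) * np.sin(np.pi * k * x / 10) / k)

x0 = 4.0                      # inside (0, 10), where all three converge to x0
print(full_series(x0, 20000), cosine_series(x0, 2001), sine_series(x0, 20000))
```

Evaluating the cosine series at −4 instead returns +4, i.e. |x|, while the other two follow their own periodic extensions, exactly as the pictures indicate.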

4. Best Approximation; Parseval’s Relation

Suppose we want to find the "best" trigonometric polynomial of given degree N to approximate a certain square-integrable function f(x). (We say a function is square integrable on an interval (a, b) if f and f² are both integrable on (a, b).) We are not assuming, at this stage, that the Fourier series for f converges at all.
How do we decide what “best” means? Well, if a function f (x) is approximated by another
function p(x), then the error E(x) of the approximation is defined as
$$E(x) = \bigl|f(x) - p(x)\bigr|.$$

The problem is that the error E depends on x, and so it’s a local property. But the goodness
of the approximation is a global property: it is attached to the whole interval (a, b). How do we
give a global meaning to a local quantity?
We might decide that the “best” approximation is for us the one that minimizes the maxi-
mum error. In practice, this is not unreasonable: if an engineer needs to be absolutely sure that
the error will never, under any circumstances, exceed a certain tolerance, that’s the way forward.
This approach may not always be advisable, though. Suppose, for instance, that the error is
larger than the average in a narrow neighborhood of a point x0 , but very small everywhere else:
in this case, this criterion underestimates the goodness of the approximation.
At the opposite extreme, one might want to minimize simply the average error, rather than
the maximum. In other words, one might be prepared to accept a relatively large error in a
few rare places, as long as on the average, the approximation is good. It’s a less demanding
criterion than the first one. Again, it’s conceivable that one could make this choice, in certain
circumstances.
A third possibility is to minimize the average square error. Because such a procedure gives
more relative weight to large errors (if the error is doubled, the square error is four times as

much), it may be seen as a compromise between the first method and the second. In practice,
this is the most common definition of “best approximation”.
So, suppose we have this problem: f(x) is to be approximated by a trigonometric polynomial of fixed order N over the interval (−π, π). We write
$$f(x) = \sum_{k=0}^{N}(\alpha_k\cos kx + \beta_k\sin kx) + E_N(x),$$

where EN (x) is the error of the approximation, and the coefficients αk and βk are undetermined
parameters; we treat them as variables. Once again, the value of β0 is immaterial, so we set it
to zero.
We want to minimize the average square error over (−π, π), which by definition is
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|f(x) - \sum_{k=0}^{N}(\alpha_k\cos kx + \beta_k\sin kx)\Bigr|^2 dx.$$

We use the methods of second year calculus. The expression above is a function of 2N + 1 independent variables α0, ..., αN and β1, ..., βN. The minimum is found by imposing that the partial derivatives with respect to all the variables be separately equal to zero.
This requirement yields 2N + 1 equations:
$$\frac{\partial}{\partial\alpha_n}\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|f(x) - \sum_{k=0}^{N}(\alpha_k\cos kx + \beta_k\sin kx)\Bigr|^2 dx\right] = 0 \qquad [n = 0, 1, \ldots N]$$
and
$$\frac{\partial}{\partial\beta_n}\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\Bigl|f(x) - \sum_{k=0}^{N}(\alpha_k\cos kx + \beta_k\sin kx)\Bigr|^2 dx\right] = 0. \qquad [n = 1, 2, \ldots N]$$

Let’s concentrate on the first set of equations. By Leibniz’s rule, we get:


Z " N
#2
1 π
∂ X
f (x) − (αk cos kx + βk sin kx) dx = 0
2π −π ∂αn
k=0

Hence, by the chain rule, we get immediately that


Z π · N
X ¸
1
2 f (x) − (αk cos kx + βk sin kx) · (− cos nx) dx = 0
2π −π k=0

Simplifying, we get:
Z π Z π N h
X i
f (x) cos nx dx = αk cos kx cos nx + βk sin kx cos nx dx dx
−π −π k=0

This equation is easier to understand if written using the inner product (3):
$$\langle f(x)\mid\cos nx\rangle = \sum_{k=0}^{N}\bigl[\alpha_k\langle\cos kx\mid\cos nx\rangle + \beta_k\langle\sin kx\mid\cos nx\rangle\bigr].$$

By the orthogonality relations, all products of the form < sin kx | cos nx> are zero, and among the products of the form < cos kx | cos nx>, only the one where k = n is not zero. Therefore, we get immediately
$$\langle f(x)\mid\cos nx\rangle = \alpha_n\langle\cos nx\mid\cos nx\rangle,$$
and hence
$$\alpha_n = \frac{\langle f(x)\mid\cos nx\rangle}{\langle\cos nx\mid\cos nx\rangle}. \qquad [n = 0, 1, \ldots N]$$
In other words, the parameters αk coincide with the Fourier coefficients ak found in (4).
In exactly the same way, from the second set of equations we get
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\partial}{\partial\beta_n}\Bigl[f(x) - \sum_{k=0}^{N}(\alpha_k\cos kx + \beta_k\sin kx)\Bigr]^2 dx = 0,$$
and hence (skipping a few intermediate steps)
$$\int_{-\pi}^{\pi}f(x)\sin nx\,dx = \int_{-\pi}^{\pi}\sum_{k=0}^{N}\bigl[\alpha_k\cos kx\sin nx + \beta_k\sin kx\sin nx\bigr]\,dx.$$

In terms of inner products, this may be written
$$\langle f(x)\mid\sin nx\rangle = \sum_{k=0}^{N}\bigl[\alpha_k\langle\cos kx\mid\sin nx\rangle + \beta_k\langle\sin kx\mid\sin nx\rangle\bigr],$$
and (again) by the orthogonality relations, we eventually get
$$\beta_n = \frac{\langle f(x)\mid\sin nx\rangle}{\langle\sin nx\mid\sin nx\rangle}. \qquad [n = 1, 2, \ldots N]$$

As you perhaps expected, the parameters βn coincide with the Fourier coefficients bn of (5).

IConclusion: The trigonometric polynomial formed by taking the Fourier coefficients is the
one that provides the “best” approximation in the sense of the mean square error. J

Note that this result remains true even if the Fourier series for f does not converge, as long
as the Fourier coefficients are defined. The method described above is borrowed from statistics,
where it’s usually called the “least-squares method”. It is certainly one of the most important
concepts in modern applied mathematics.
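The minimization is easy to see numerically. In the sketch below (my own setup: f(x) = x on (−π, π), whose sine coefficients are bk = 2(−1)^{k+1}/k, approximated by a degree-3 sine polynomial), perturbing the Fourier coefficients in random directions only ever increases the discretized mean square error:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
f = x                                   # expand f(x) = x; its b_k are 2(-1)^(k+1)/k

def mean_square_error(beta):
    # Discretized (1/2pi) * integral of (f - p)^2 with p = sum beta_k sin(kx).
    p = sum(b * np.sin((j + 1) * x) for j, b in enumerate(beta))
    return np.mean((f - p) ** 2)

fourier = np.array([2 * (-1) ** (k + 1) / k for k in (1, 2, 3)])
best = mean_square_error(fourier)
rng = np.random.default_rng(0)
worse = min(mean_square_error(fourier + 0.2 * rng.standard_normal(3))
            for _ in range(200))
print(best, "<", worse)
```

No random perturbation beats the Fourier coefficients, in agreement with the conclusion above; by Parseval's identity the minimum itself equals π²/3 − (1/2)(4 + 1 + 4/9).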
GIBBS’ PHENOMENON
Even if the mean square error tends to zero, it does not necessarily follow that the approximation
gets better at every point. This is a fairly difficult concept to understand, so much so that it
was first noticed almost a century after Fourier’s seminal papers.
The point you must bear in mind is that in this section we are dealing with two kinds of
“error”. There is the mean square error and the truncation error. The former is the result of
an averaging process over a whole interval, and for this reason is called a global quantity; it
describes “how good”, loosely speaking, the approximation is over the whole range. The second
one is simply the difference between the true value of f (x) at a point x, and the value of a

Fourier series truncated after a certain number of terms—obviously, if a series has an infinite
number of terms, no computing device will ever be able to add them all. This second kind of
error, the truncation error, does change from point to point, and so it is called a local property.
So, it may happen that if more and more terms of a Fourier series are added, there are
points where the local error actually gets bigger, even if “as a whole” the quality of the fit gets
better. The fact that, here and there, the approximation gets worse is compensated by a general
improvement of the approximation “almost everywhere” else.
Let’s consider a typical example illustrating this phenomenon. Consider the Fourier series
of the function
$$f(x) = \begin{cases} -1 & \text{if } -\pi < x < 0, \\ +1 & \text{if } 0 < x < \pi, \end{cases}$$
which is (as you should easily verify)
$$f(x) \sim \frac{4}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \frac{\sin 7x}{7} + \frac{\sin 9x}{9} + \cdots\right).$$

[Figure: three truncated Fourier series of the square wave, plotted on (−3, 3), labelled n = 10, n = 25 and n = 100.]
The pictures above show the graphs of three truncated Fourier series, of increasing order.
The first one is truncated after sin 9x/9, the second one after sin 25x/25, the third one after
sin 99x/99. Comparing the first graph with the second one, we see that the local error has
decreased almost everywhere, except near the points x = −π, x = 0 and x = π, where it has
actually increased. So, even if the graph in the middle looks better than the one on the left “on
the average”, there are three narrow regions where this is not true.
The same comments may be made if one compares the graph in the middle and the one
on the right. There is simply nothing that can be done about the “spikes” at x = 0 and
x = ±π. Increasing the order of the approximation makes them narrower, but not shorter; they
are a built-in feature of the method. This surprising property of Fourier series is called Gibbs’
phenomenon, after J.W. Gibbs, who reported it in 1899 to the journal Nature.
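The "spikes" can be measured directly. In the sketch below (grid and names are my own choices), the peak of the partial sum just to the right of the jump at x = 0 settles near 1.18 — roughly 18% above the true value 1 — no matter how many terms are kept:

```python
import numpy as np

def square_partial(x, n_odd):
    # Partial sum (4/pi) * sum of sin(kx)/k over the first n_odd odd values of k.
    k = np.arange(1, 2 * n_odd, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=1)

x = np.linspace(1e-4, 0.5, 20000)       # fine grid just to the right of the jump
for n in (10, 50, 250):
    print(f"{n:4d} odd terms: peak = {np.max(square_partial(x, n)):.4f}")
```

The peak moves closer to x = 0 as n grows, but its height barely changes: the overshoot gets narrower, not shorter.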

PARSEVAL’S IDENTITY

Let’s move forward. Suppose the Fourier expansion of a function f (x) converges. Then, expan-
sion (1) is valid, and squaring it we get:


X ∞
X
f (x) · f (x) = (ak cos kx + bk sin kx) · (an cos nx + bn sin nx).
k=0 n=0

Assume that the two series on the right-hand side may be multiplied as if they were ordinary, finite sums, and that the resulting double series is integrable. If this step is allowed then, using again the inner product (3), we get:
$$\langle f\mid f\rangle = \sum_{k=0}^{\infty}\sum_{n=0}^{\infty}\Bigl(a_k a_n\langle\cos kx\mid\cos nx\rangle + a_k b_n\langle\cos kx\mid\sin nx\rangle + b_k a_n\langle\sin kx\mid\cos nx\rangle + b_k b_n\langle\sin kx\mid\sin nx\rangle\Bigr).$$

This horrible expression simplifies very quickly because, by the orthogonality relations,
$$\langle\cos kx\mid\sin nx\rangle = \langle\sin kx\mid\cos nx\rangle = 0$$
for all k and n. So, the second and third term in the sum on the right-hand side vanish immediately. Moreover, recall that
$$\langle\cos kx\mid\cos nx\rangle \ne 0 \quad\text{only if } k = n,$$
and similarly
$$\langle\sin kx\mid\sin nx\rangle \ne 0 \quad\text{only if } k = n.$$
In the end, the two remaining double sums collapse into ordinary sums, and we get
$$\langle f\mid f\rangle = \sum_{k=0}^{\infty}\Bigl(a_k^2\langle\cos kx\mid\cos kx\rangle + b_k^2\langle\sin kx\mid\sin kx\rangle\Bigr).$$

Finally, going back to the orthogonality relations, and recalling that
$$\langle\cos kx\mid\cos kx\rangle = \begin{cases} 2\pi & \text{if } k = 0 \\ \pi & \text{if } k = 1, 2, \ldots \end{cases} \qquad \langle\sin kx\mid\sin kx\rangle = \begin{cases} 0 & \text{if } k = 0 \\ \pi & \text{if } k = 1, 2, \ldots \end{cases}$$
we deduce that
$$\langle f\mid f\rangle = 2\pi a_0^2 + \pi\sum_{k=1}^{\infty}\bigl(a_k^2 + b_k^2\bigr).$$
This may be written
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}[f(x)]^2\,dx = a_0^2 + \tfrac{1}{2}\sum_{k=1}^{\infty}\bigl(a_k^2 + b_k^2\bigr), \qquad (7)$$
a famous result known as Parseval's identity.


It must be stressed that this is not a proof, only a plausibility argument. However, it is
possible to show that the Dirichlet conditions, the same ones that guarantee the convergence of
a Fourier series, are also sufficient to prove Parseval’s identity.
Many interesting results may be derived from Parseval’s identity.

IExample 16 Go back to example 2, where it was established that
$$\frac{3}{2} + \frac{2}{\pi}\sum_{k\ \mathrm{odd}}\frac{\sin kx}{k} = \begin{cases} 1 & \text{if } -\pi < x \le 0 \\ 2 & \text{if } 0 < x < \pi. \end{cases}$$

We observe that
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}[f(x)]^2\,dx = \frac{1}{2\pi}\int_{-\pi}^{0}1\,dx + \frac{1}{2\pi}\int_{0}^{\pi}4\,dx = \frac{5}{2}.$$
Parseval's identity (7) then yields
$$\frac{5}{2} = \frac{9}{4} + \frac{1}{2}\cdot\frac{4}{\pi^2}\sum_{k\ \mathrm{odd}}\frac{1}{k^2}.$$
Simplifying, we get
$$\sum_{k\ \mathrm{odd}}\frac{1}{k^2} = \frac{\pi^2}{8}.$$

Technically we should stop here, but there is an interesting follow-up to this example. Since clearly
$$\sum_{k=1}^{\infty}\frac{1}{k^2} = \sum_{k\ \mathrm{odd}}\frac{1}{k^2} + \sum_{k\ \mathrm{even}}\frac{1}{k^2},$$
and
$$\sum_{k\ \mathrm{even}}\frac{1}{k^2} = \frac{1}{4}\sum_{k=1}^{\infty}\frac{1}{k^2},$$
it follows immediately that
$$\left(1 - \frac{1}{4}\right)\sum_{k=1}^{\infty}\frac{1}{k^2} = \sum_{k\ \mathrm{odd}}\frac{1}{k^2}.$$
Substituting on the right-hand side the value we found for the sum, we get
$$\frac{3}{4}\sum_{k=1}^{\infty}\frac{1}{k^2} = \frac{\pi^2}{8} \quad\Longrightarrow\quad \sum_{k=1}^{\infty}\frac{1}{k^2} = \frac{4}{3}\cdot\frac{\pi^2}{8} = \frac{\pi^2}{6},$$

in agreement with example 7. J

IExample 17 In example 7 it was shown that
$$x^2 = \frac{\pi^2}{3} + \sum_{k=1}^{\infty}\frac{4(-1)^k}{k^2}\cos kx. \qquad [-\pi < x < \pi]$$
We find that
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}[f(x)]^2\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}x^4\,dx = \frac{\pi^4}{5};$$
hence, by Parseval's identity (7), we get that
$$\frac{\pi^4}{5} = \frac{\pi^4}{9} + \frac{1}{2}\sum_{k=1}^{\infty}\frac{16}{k^4},$$
which gives immediately
$$\sum_{k=1}^{\infty}\frac{1}{k^4} = \frac{\pi^4}{90}.$$

Corollary: Since clearly
$$\sum_{k\ \mathrm{even}}\frac{1}{k^4} = \frac{1}{16}\sum_{k=1}^{\infty}\frac{1}{k^4} = \frac{1}{16}\cdot\frac{\pi^4}{90},$$
substituting back we find immediately that
$$\sum_{k\ \mathrm{odd}}\frac{1}{k^4} = \frac{15}{16}\cdot\frac{\pi^4}{90} = \frac{\pi^4}{96}.$$
J
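Both numerical series are easy to confirm with a few lines of Python (a quick check of my own, not part of the notes):

```python
import math

odd = range(1, 100001, 2)
s2 = sum(1.0 / k**2 for k in odd)   # the sum from example 16
s4 = sum(1.0 / k**4 for k in odd)   # the sum from the corollary
print(s2, math.pi**2 / 8)
print(s4, math.pi**4 / 96)
```

The 1/k⁴ sum agrees to essentially full precision after a modest number of terms; the 1/k² sum converges much more slowly, its tail shrinking only like 1/(2K).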

IExample 18 What numerical series may be obtained by applying Parseval’s identity to the
expansions of example 15?
Solution: To apply Parseval’s identity to part (a) of example 15, we find, first of all, that
Z 10
1 100
<f | f > = x2 dx = ;
10 0 3

therefore

100 1 X 100
= 25 + .
3 2 π2 k2
k=1

After simplifications, this may be written

X 1 ∞
π2
= ,
6 k2
k=1

a famous result that has already been found more than once in these notes (see example 7).
From part (b) we get:
$$\langle f\mid f\rangle = \frac{1}{20}\int_{-10}^{10}x^2\,dx = \frac{100}{3};$$
hence,
$$\frac{100}{3} = 25 + \frac{1}{2}\sum_{k\ \mathrm{odd}}\frac{1600}{\pi^4 k^4}.$$
After simplifications, this yields
$$\sum_{k\ \mathrm{odd}}\frac{1}{k^4} = \frac{\pi^4}{96};$$

see the corollary of the preceding example.


Finally, from part (c) we get
$$\frac{100}{3} = \frac{1}{2}\sum_{k=1}^{\infty}\frac{400}{\pi^2 k^2},$$
which yields again
$$\sum_{k=1}^{\infty}\frac{1}{k^2} = \frac{\pi^2}{6},$$

as in part (a). J

5. The Complex-Exponential Form of Fourier Series

Expanding cos kx and sin kx by means of Euler's formula, the Fourier series (1) may be written
$$f(x) = \sum_{k=0}^{\infty}\left(a_k\cdot\frac{e^{ikx} + e^{-ikx}}{2} + b_k\cdot\frac{e^{ikx} - e^{-ikx}}{2i}\right).$$
Simple manipulations yield
$$f(x) = a_0 + \sum_{k=1}^{\infty}\left(\frac{a_k - ib_k}{2}\right)e^{ikx} + \sum_{k=1}^{\infty}\left(\frac{a_k + ib_k}{2}\right)e^{-ikx}.$$
Defining
$$c_0 = a_0, \qquad c_k = (a_k - ib_k)/2, \qquad c_{-k} = (a_k + ib_k)/2,$$
[where k is positive: k = 1, 2, 3, ...], we may rewrite the last equation in the form
$$f(x) = \sum_{k=-\infty}^{\infty}c_k e^{ikx}. \qquad (8)$$

The complex coefficients ck may be written in a very simple form. For k = 0 we get immediately
$$c_0 = a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,e^{-i0\cdot x}\,dx.$$
For k = 1, 2, 3, ... one has, by definition:
$$c_k = \frac{a_k - ib_k}{2} = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\frac{\cos kx - i\sin kx}{2}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,e^{-ikx}\,dx$$
and
$$c_{-k} = \frac{a_k + ib_k}{2} = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,\frac{\cos kx + i\sin kx}{2}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,e^{+ikx}\,dx.$$
We now see that the last three results may be combined into just one, rather elegant equation:
$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,e^{-ikx}\,dx, \qquad (9)$$

which is valid for all k, i.e., positive, negative or zero.


For a function with period 2L rather than 2π, simple manipulations (verify this) yield
$$f(x) = \sum_{k=-\infty}^{\infty}c_k e^{ik\pi x/L}, \quad\text{where}\quad c_k = \frac{1}{2L}\int_{-L}^{L}f(x)\,e^{-ik\pi x/L}\,dx. \qquad (10)$$

Equation (8) [and also its variant (10)] is called the complex form of Fourier series; it plays an
important role from a theoretical point of view, and we shall need it later on, when we’ll discuss
the Fourier transform.

IExample 19 Find the complex form of the Fourier series of the function f (x) that coincides
with |x| for x between −π and π, and is periodic with period 2π.
Solution: We need to find
$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi}|x|\,e^{-ikx}\,dx$$
for k ranging from −∞ to ∞. This may be written
$$c_k = \frac{1}{2\pi}\int_0^{\pi}x\,e^{-ikx}\,dx - \frac{1}{2\pi}\int_{-\pi}^{0}x\,e^{-ikx}\,dx.$$

This formula holds for all k, including 0. If k ≠ 0 we use integration by parts (it doesn't matter if k is positive or negative):
$$c_k = \frac{1}{2\pi}\left[\frac{e^{-ikx}}{k^2} - \frac{x\,e^{-ikx}}{ik}\right]_0^{\pi} - \frac{1}{2\pi}\left[\frac{e^{-ikx}}{k^2} - \frac{x\,e^{-ikx}}{ik}\right]_{-\pi}^{0} =$$
$$= \frac{1}{2\pi}\left[\frac{e^{-ik\pi}}{k^2} - \frac{\pi e^{-ik\pi}}{ik} - \frac{1}{k^2}\right] - \frac{1}{2\pi}\left[\frac{1}{k^2} - \frac{e^{ik\pi}}{k^2} - \frac{\pi e^{ik\pi}}{ik}\right].$$
Substituting
$$e^{-ik\pi} = e^{ik\pi} = (-1)^k$$
and simplifying, we get:
$$c_k = \frac{(-1)^k - 1}{\pi k^2}. \qquad [k \ne 0]$$
This expression may be simplified once more:
$$c_k = \begin{cases} -\dfrac{2}{\pi k^2} & \text{if } k \text{ is odd (positive or negative)} \\ 0 & \text{if } k \text{ is even (positive or negative) but not zero.} \end{cases}$$

If k = 0 integration by parts is not applicable (why?) but we can find c0 directly:
$$c_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi}|x|\,dx = \frac{1}{\pi}\int_0^{\pi}x\,dx = \frac{\pi}{2}.$$
Putting everything together:
$$|x| = -\frac{2}{\pi}\sum_{\substack{k\ \mathrm{odd,}\\ \mathrm{negative}}}\frac{e^{ikx}}{k^2} + \frac{\pi}{2} - \frac{2}{\pi}\sum_{\substack{k\ \mathrm{odd,}\\ \mathrm{positive}}}\frac{e^{ikx}}{k^2} \qquad [-\pi \le x \le \pi].$$

This is the Fourier series for f (x) in complex form. Note that it holds at the end-points too
because the limits from the right and from the left are equal. It may be simplified further and

reduced to a real Fourier series by substituting k with −m in the first sum and then combining
the two sums. Do it, as an exercise. J
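The closed form for the ck can be cross-checked numerically. In this sketch (the trapezoid quadrature and names are mine), each coefficient of |x| is computed directly from formula (9):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
f = np.abs(x)

def c_numeric(k):
    # (1/2pi) * integral of |x| e^{-ikx} over (-pi, pi), by the trapezoid rule.
    g = f * np.exp(-1j * k * x)
    return np.sum((g[1:] + g[:-1]) / 2 * np.diff(x)) / (2 * np.pi)

def c_closed(k):
    return np.pi / 2 if k == 0 else ((-1) ** k - 1) / (np.pi * k**2)

for k in (-3, -2, -1, 0, 1, 2, 3):
    print(k, c_numeric(k), c_closed(k))
```

The imaginary parts come out at round-off level, as they must for an even real function, and the even-k coefficients vanish.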

All the properties of “usual”, real Fourier series—except one— apply to Fourier series in the
complex form.
The exception is the definition (3) of inner product, which must be slightly adjusted. The
reason is that we need to preserve the property that

<f | f > ≥ 0,

where the equality must hold only if the function f is identically zero. If definition (3) is not modified, this property is lost when complex variables are allowed. For example, the function defined as f(x) = √3 x − iπ, over (−π, π), yields
$$\int_{-\pi}^{\pi}[f(x)]^2\,dx = \int_{-\pi}^{\pi}\bigl(3x^2 - \pi^2 - i2\sqrt{3}\,\pi x\bigr)\,dx = 0,$$

but certainly f (x) is not zero for every x. The problem goes away if we define the inner product
of two (possibly) complex functions f and g over (−π, π) in this way:
$$\langle f\mid g\rangle \stackrel{\mathrm{def}}{=} \int_{-\pi}^{\pi}\overline{f}\,g\,dx, \qquad (11)$$
where $\overline{f}$ denotes the complex conjugate of f. Obviously, if f is real, then this definition coincides with the old one (3); but if f has an imaginary part, then
$$\langle f\mid f\rangle = \int_{-\pi}^{\pi}|f|^2\,dx \ge 0.$$

There is a small price to pay, though. While the real inner product (3) is commutative, meaning that <f | g> = <g | f>, the complex inner product (11) is only semi-commutative:
$$\langle f\mid g\rangle = \overline{\langle g\mid f\rangle},$$
and sesquilinear: if M and N are two constants, then
$$\langle f\mid Mg + Nh\rangle = M\langle f\mid g\rangle + N\langle f\mid h\rangle, \quad\text{but}\quad \langle Mf + Ng\mid h\rangle = \overline{M}\langle f\mid h\rangle + \overline{N}\langle g\mid h\rangle.$$

How does all this affect the complex form (8) of Fourier series? Equation (9) may be written
$$c_k = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)\,e^{-ikx}\,dx = \frac{\langle e^{ikx}\mid f\rangle}{2\pi}. \qquad [\text{for all } k]$$
It's also easy to see that
$$\langle e^{ikx}\mid e^{inx}\rangle = \begin{cases} 0 & \text{if } k \ne n, \\ 2\pi & \text{if } k = n. \end{cases}$$
Combining the last two results we get:
$$c_k = \frac{\langle e^{ikx}\mid f\rangle}{\langle e^{ikx}\mid e^{ikx}\rangle} \qquad (12)$$

and the complex form (8) of Fourier series becomes:
$$f(x) = \sum_{k=-\infty}^{\infty}c_k e^{ikx} = \sum_{k=-\infty}^{\infty}\frac{\langle e^{ikx}\mid f\rangle}{2\pi}\,e^{ikx} = \sum_{k=-\infty}^{\infty}\frac{\langle e^{ikx}\mid f\rangle}{\langle e^{ikx}\mid e^{ikx}\rangle}\,e^{ikx}.$$
Parseval's identity also takes a very nice form:
$$\frac{1}{2\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx = \sum_{k=-\infty}^{\infty}|c_k|^2 \qquad (13)$$

for a function with period 2π. As usual, replace π with L in (13) if the period is 2L.

IExample 20 Apply Parseval’s identity in complex form to the function f (x) such that f (x) =
0 if x is between −5 and 0, f (x) = 1 if x is between 0 and 5, and f (x + 10) = f (x) for all x.
Solution: Note that f(x) is not defined for x = 0, 5, 10, 15, .... These are jump-discontinuities and the Fourier series will converge to 1/2 at these points. We find that L = 5; it follows that
$$c_k = \frac{1}{10}\int_0^5 e^{-ik\pi x/5}\,dx.$$
Hence,
$$c_0 = \frac{1}{10}\int_0^5 dx = \frac{1}{2}$$
and
$$c_k = \frac{1}{10}\int_0^5 e^{-ik\pi x/5}\,dx = \frac{e^{-ik\pi} - 1}{-i2k\pi}. \qquad [k \ne 0]$$
Substituting e^{−ikπ} = (−1)^k, we find that
$$c_k = \begin{cases} \dfrac{1}{i\pi k} & \text{if } k \text{ is odd (positive or negative)} \\ 0 & \text{if } k \text{ is even (positive or negative) but not zero.} \end{cases}$$
Now, we calculate
$$\frac{1}{10}\int_{-5}^{5}|f(x)|^2\,dx = \frac{1}{10}\int_0^5 dx = \frac{1}{2}.$$
Parseval’s identity then yields

X ¯ ¯ ¯ ¯2 X ¯¯ 1 ¯¯2
1 ¯ 1 ¯2 ¯1¯
= ¯ ¯ ¯ ¯ + ¯ ¯
2 ¯ ikπ ¯ + ¯2¯ ¯ ikπ ¯ ,
k=odd, k=odd,
negative positive

and hence X
1 1
=2·
4 k2 π2
k=odd,
positive

Simplifying, we recover the identity
$$\frac{\pi^2}{8} = \sum_{k\ \mathrm{odd}}\frac{1}{k^2},$$

which was found in example 16. This is not surprising, since the graph of f (x) in both cases is
a rectangle. J
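As a final numerical check (a sketch of my own), the right-hand side of the Parseval relation above can be summed directly and compared with the left-hand side, 1/2:

```python
import math

# |c_0|^2 plus the two (equal) odd-k tails of example 20.
rhs = 0.25 + 2 * sum(1 / (math.pi * k) ** 2 for k in range(1, 100001, 2))
print(rhs, "should approach", 0.5)
```

With 50 000 odd terms the discrepancy is already around 10⁻⁶, limited only by the slowly decaying 1/k² tail.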

Equation (12) is interesting: it replaces (4) and (5) of section 1 with just one equation. It's also possible to show that the functions
$$v_k = e^{ikx} \qquad\text{where } k = 0, \pm 1, \pm 2, \pm 3, \ldots$$

are linearly independent and span an abstract vector space H. The “best approximation” we
discussed in the preceding section may then be viewed as a projection over a subspace W of
H, spanned by a certain number of vk ’s. [See, for instance, Linear Algebra by Fraleigh and
Beauregard (1995), especially the section on projections.] Using this analogy with linear algebra
it’s possible to deduce many useful properties of Fourier series. Unfortunately, this discussion
would lead us too far away, and in this course we must stop here.

6. Application: Heat Flow in an Insulated Rod

In the remainder of this chapter we’ll see examples of a method—the method of separation of
variables—that enables one to solve many important linear partial differential equations (PDEs
for short) of mathematical physics.
Consider a uniform, homogeneous thin rod, having the shape of a cylinder, but so thin that we may regard its diameter as virtually zero, compared to the length. Suppose the sides of the rod are thermally insulated, so that heat may escape the rod only through the ends. Suppose finally the specific heat is approximately constant, at least in the temperature range that can be reached by the rod. [Figure: the rod, lying along the x axis from x = 0 to x = L.]
If the initial temperature (which we call u) in some parts of the rod is higher or lower than in other parts, or if the ends are kept at a different temperature from the rod, then heat will move from hotter regions to colder regions, as the second principle of thermodynamics dictates. Heat arriving at a given point will raise the temperature there, departing heat will lower it.
Under these conditions, the temperature u will depend only on x and t, say u = u(x, t),
where x denotes the distance from one end of the rod, and t is time. It may be shown that
the evolution of the temperature u is described with reasonable accuracy by the so-called one-
dimensional heat equation
ut = αuxx (14)
where α is a positive constant. You could quibble about the name, since u is a function of
two variables and so the problem is, mathematically, two-dimensional. But obviously, in this
example, heat physically moves in one dimension only, hence the name.

We’ll see as we go along that, to solve the problem completely, we’ll need to know:
(1) The initial temperature distribution inside the rod,
(2) Whether heat may escape through the ends and, if this is the case,
(3) How the temperature of the ends changes with time.
The first condition is called an initial condition, and may be stated in the form

u(x, 0) = f (x),

where f (x) is a known, given function of x. If heat may escape through the ends, then obviously
we also need to know how the temperature of the ends changes. The simplest case, and the only
one we’ll consider in this section, occurs when the ends are kept at a constant temperature by
a thermostat: for example, they are exposed to air and wind is blowing, or they are surrounded
by melting ice. We may always shift the temperature scale so that zero corresponds to the
temperature of the ends (if u satisfies the heat equation (14), then u + C, where C = constant,
is also a solution). So, let us assume that u = 0 at all times at the ends. If L is the length of
the rod, this requirement takes the form of two equations:
$$u(0, t) = 0, \qquad u(L, t) = 0 \qquad\text{for all } t,$$
and these are called boundary conditions.

THE METHOD OF SEPARATION OF VARIABLES

We begin by looking for solutions of (14) of the form u = X(x) · T (t), where X is a function of
x alone, and T is a function of t alone. When one takes the partial derivative with respect to x
of a function of this form, t is treated as a parameter, therefore

$$u_{xx} = X''(x)\cdot T(t).$$

Similarly, we also get:
$$u_t = X(x)\cdot\dot{T}(t),$$
where the dot represents differentiation with respect to time. Substituting back these results into (14), we get:
$$X(x)\cdot\dot{T}(t) = \alpha X''(x)\cdot T(t) \quad\Longrightarrow\quad \frac{X''(x)}{X(x)} = \frac{\dot{T}(t)}{\alpha T(t)}.$$
Now, note that the left-hand side of the last equation does not depend on t; on the other hand,
the right-hand side does not depend on x. They must be equal, therefore neither one may
depend on t or x: in other words, they must be equal to a constant. This constant is called the
separation constant. Hence, writing
$$\frac{X''(x)}{X(x)} = \frac{\dot{T}(t)}{\alpha T(t)} = \lambda = \text{constant},$$
we get two ordinary differential equations (as opposed to a partial differential equation):
$$X'' = \lambda X, \qquad \dot{T} = \alpha\lambda T.$$

Consider the X equation first. It is a second order linear ODE with constant coefficients and if
λ is negative it’s identical to the equation for simple harmonic motion. More generally, its real
solutions may take three forms:
 √ √
 A cosh λx + B sinh λx if λ > 0,
X = Ax + B if λ = 0,
 √ √
A cos −λx + B sin −λx if λ < 0.

Here A and B are free real constants, which we are now going to match with the boundary conditions. Begin with the case λ > 0: requiring that u(0, t) = 0 at all times yields X(0) = 0, which gives A cosh 0 + B sinh 0 = 0. As cosh 0 = 1 and sinh 0 = 0, this yields A = 0. The other boundary condition then becomes B sinh(√λ L) = 0: but sinh(√λ L) ≠ 0, so we get B = 0 as well. So, if λ > 0 the only solution that matches both boundary conditions is the trivial solution, which is zero everywhere.
The second case is even easier: requiring 0A + B = 0 gives B = 0, and requiring AL = 0
yields A = 0: same conclusion.
So, we come to the third possibility, λ < 0. To get rid of negatives, we write λ = −c², where c is a new real constant. The equation X'' = −c²X has solutions of the form X = A cos cx + B sin cx. Again, we match the solution to the first boundary condition, X(0) = 0, and we get A cos 0 + B sin 0 = 0; this yields A = 0. The second boundary condition, X(L) = 0, becomes simply B sin cL = 0, and this one finally does have non-trivial solutions, only if cL is
an integer multiple of π:

$$cL = k\pi \quad\Longrightarrow\quad c = \frac{k\pi}{L}, \qquad X = B\sin\frac{k\pi x}{L}. \qquad [k = 1, 2, \ldots]$$

Sixth time lucky. We found, at last, non-trivial solutions that fit both boundary conditions.
Indeed, a clever student should have asked, right at the start, how did we know that such
solutions exist? (Now that we found them, we don’t need an existence theorem anymore.)
Note that the set of all functions satisfying the boundary conditions forms a vector space, since any linear combination of functions that are zero for x = 0 and for x = L is also zero at the same points. The equation X'' = λX is then an eigenvalue problem on such a space; non-trivial solutions are called eigenfunctions (rather than eigenvectors, the term you used in second year). Note also that the eigenvalues may be written λ_k = −k²π²/L²; they are infinite in number
but may be counted, and are unbounded. Every eigenspace Ek is clearly one-dimensional, since
the scale factor B may take any value. So, we have solved an eigenvalue problem in an infinite-
dimensional vector space.
We now turn to the equation for T (t) which, for a given eigenvalue, is

    Ṫ(t)/(αT(t)) = λk = −k²π²/L²,   [k = 1, 2, 3, . . .]

which gives immediately

    Ṫ(t) = −(αk²π²/L²) T(t).

This equation is elementary; solutions have the form

    T(t) = (constant) · e^(−αtk²π²/L²).

So far we found that any expression of the form

    u(x, t) = Bk sin(kπx/L) e^(−αtk²π²/L²),
where Bk is an arbitrary scale factor and k is an integer, is a solution of the heat equation (14)
that satisfies the boundary conditions.
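A single separated mode is easy to check directly against the PDE. The sketch below (Python; the values alpha = 0.1, L = 2.0, k = 3 and Bk = 1.0 are arbitrary illustrative choices, not taken from the text) verifies numerically, with central differences, that one such mode satisfies ut = α uxx together with the boundary conditions:

```python
import math

# Arbitrary illustrative values (not fixed by the text).
alpha, L, k, Bk = 0.1, 2.0, 3, 1.0

def u(x, t):
    """One separated mode: Bk * sin(k*pi*x/L) * exp(-alpha*t*(k*pi/L)**2)."""
    return Bk * math.sin(k * math.pi * x / L) * math.exp(-alpha * t * (k * math.pi / L) ** 2)

# Central-difference check that u_t = alpha * u_xx at a sample point.
x0, t0, h = 0.7, 1.3, 1e-5
u_t  = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
assert abs(u_t - alpha * u_xx) < 1e-4

# The boundary conditions u(0, t) = u(L, t) = 0 hold (up to round-off in sin).
assert abs(u(0.0, t0)) < 1e-12 and abs(u(L, t0)) < 1e-12
```

The same check works for any k, because the exponential rate −α(kπ/L)² is exactly the eigenvalue αλk.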
We still have to meet the initial condition; we try to solve this problem by forming the most
general linear combination of eigenfunctions that we can construct: this is the series

    u(x, t) = Σ_{k=1}^∞ Bk sin(kπx/L) e^(−αtk²π²/L²),

where the Bk are undetermined parameters. We impose that u(x, 0) = f(x) at time t = 0. Since
e⁰ = 1, this requirement yields immediately

    f(x) = Σ_{k=1}^∞ Bk sin(kπx/L).

But this is easy: all we need to do is to set the factors Bk equal to the coefficients of the
Fourier-sine expansion of f (x), and that completes the problem.
Or does it? There are some questions that a good student should ask at this point. The
first one is, “is the solution unique? How do we know that there isn’t another function, different
from our u, that satisfies the PDE (14), and the boundary conditions, and the initial condition?”
Good question; it may be shown that the solution exists and is unique.
Another question could be, “how do we know that an infinite linear combination of eigenfunctions
may be handled just as if it were a finite combination?” Good question too, but again it may
be shown that the series we used is uniformly convergent, and so all the manipulations we did
were in fact allowed.
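The matching step is nothing more than computing the Fourier-sine coefficients of f, namely Bk = (2/L) ∫₀^L f(x) sin(kπx/L) dx. A minimal numerical sketch in Python (the midpoint-rule quadrature and the sample length L = 3.0 are illustrative choices, not from the text):

```python
import math

def sine_coefficient(f, L, k, n=10_000):
    """B_k = (2/L) * integral_0^L f(x)*sin(k*pi*x/L) dx, via the midpoint rule."""
    h = L / n
    return (2.0 / L) * h * sum(
        f((i + 0.5) * h) * math.sin(k * math.pi * (i + 0.5) * h / L)
        for i in range(n)
    )

# For f(x) = 1 the odd coefficients are 4/(k*pi) and the even ones vanish
# (this is the expansion used in example 21 below).
L = 3.0  # arbitrary rod length
assert abs(sine_coefficient(lambda x: 1.0, L, 1) - 4 / math.pi) < 1e-6
assert abs(sine_coefficient(lambda x: 1.0, L, 2)) < 1e-6
assert abs(sine_coefficient(lambda x: 1.0, L, 3) - 4 / (3 * math.pi)) < 1e-6
```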

IExample 21 A rod of length L, initially at temperature u = 1 degree, is allowed to exchange
heat with the environment through its ends, in accordance with the equation ut = 0.1 uxx . If
the temperature outside remains constant at zero degrees, find how the temperature changes
inside the rod.
Solution: We have that α = 0.1 and f (x) ≡ 1 in (0, L). We must expand f (x) into a Fourier-sine
series over the interval (0, L). Simple calculations yield
    bk = 4/(kπ)   if k is odd,
    bk = 0        if k is even.

Hence,

    f(x) = Σ_{k odd} (4/(kπ)) sin(kπx/L)

and finally

    u(x, t) = Σ_{k odd} (4/(kπ)) e^(−0.1tk²π²/L²) sin(kπx/L).

Three plots are shown in the pictures that follow, corresponding to three different times.

The plots were obtained by setting

    u ≈ (4/π) e^(−0.1tπ²/L²) sin(πx/L) + (4/(3π)) e^(−0.9tπ²/L²) sin(3πx/L) + · · · + (4/(19π)) e^(−36.1tπ²/L²) sin(19πx/L),

i.e., truncating the series after 10 non-zero terms. Note that the exponentials tend to zero very
fast as k → ∞, except at the very beginning, when t is small. Only the plot for time t = 0 is
clearly inadequate, due to the poor convergence of the series (note the “ripples”); for t = 7 the
plot of u(x, 7) is virtually identical to a sinusoid. J
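The truncated series of this example is easy to evaluate numerically. A sketch in Python (the length L = 1 is an arbitrary choice; the text leaves L general), using the same 10 non-zero terms as the plots:

```python
import math

def u(x, t, L=1.0, terms=10):
    """Truncated solution of example 21: odd k up to k = 2*terms - 1."""
    total = 0.0
    for k in range(1, 2 * terms, 2):  # k = 1, 3, ..., 19
        total += (4 / (k * math.pi)) \
                 * math.exp(-0.1 * t * (k * math.pi / L) ** 2) \
                 * math.sin(k * math.pi * x / L)
    return total

# At t = 0 the series approximates f(x) = 1 (Gibbs ripples near the ends);
# as t grows, every mode decays and the rod cools towards zero.
assert abs(u(0.5, 0.0) - 1.0) < 0.05   # about 0.97 with 10 terms
assert u(0.5, 1.0) < u(0.5, 0.0)       # cooling at the midpoint
assert u(0.0, 0.3) == 0.0              # boundary held at zero, exactly
```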

It may be shown that if a system cannot exchange heat through its boundary, then the gradient
of u is zero on the boundary. For one-dimensional problems the gradient coincides with ux (since
y and z do not appear in the problem), so the boundary conditions for a thermally insulated
rod are
ux (0, t) = 0, ux (L, t) = 0 for all t.

IExample 22 Suppose the rod of example 21 is thermally insulated and has an initial tem-
perature distribution f(x) = sin⁴(πx/L) (the central region of the rod is warmer than the ends).
Find how the temperature changes inside the rod.
Solution: Proceeding like before, we seek solutions of the heat equation that have the form
u = X(x) · T(t), where X is a function of x alone, and T is a function of t alone. The boundary
conditions are that ux(0, t) = 0 and ux(L, t) = 0 at all times; the initial condition is u(x, 0) =
sin⁴(πx/L) for x in (0, L).
Separating the variables, we come again to the equation

    X = A cosh(√λ x) + B sinh(√λ x)   if λ > 0,
    X = Ax + B                        if λ = 0,
    X = A cos(√−λ x) + B sin(√−λ x)   if λ < 0.


Bear in mind that ux(x, t) = X′(x)T(t). Hence, in order to fit the boundary conditions, we must
now impose that X′(0) = 0 and X′(L) = 0. This is the key difference from example 21.
Proceeding as before, we discard the possibility that λ be positive. However, λ = 0 is now
acceptable, because it gives X = B, which satisfies the condition X′ = 0. Finally, if λ is negative,
we write (again) X = A cos cx + B sin cx, where c = √−λ. But now X′ = −cA sin cx + cB cos cx;
requiring that X′(0) = 0 yields B = 0, and requiring that X′(L) = 0 yields c = kπ/L, with
k = 1, 2, . . .

So, we found eigenfunctions of the form X₀ = constant and Xk = Ak cos(kπx/L), with
k = 1, 2, . . . However, since cos 0x is constant anyway, we may include the first eigenfunction in
the general formula, and write

    Xk = Ak cos(kπx/L).   [k = 0, 1, 2, . . .]

The calculation of T(t) is done in exactly the same way, so we won't repeat it here. We find
that

    T(t) = (constant) · e^(−αtk²π²/L²),

and hence that

    u(x, t) = Σ_{k=0}^∞ Ak e^(−αtk²π²/L²) cos(kπx/L).


Imposing that u(x, 0) = f (x) at time t = 0, we see that the coefficients Ak must coincide with
the coefficients of the expansion of f (x) into a Fourier-cosine series over (0, L).
Since in this example f(x) = sin⁴(πx/L), we need to expand this function into a Fourier-
cosine series. Well, recall example 13, and convince yourself that

    sin⁴(πx/L) = (1/8) (3 − 4 cos(2πx/L) + cos(4πx/L));

it follows that A₀ = 3/8, A₂ = −1/2, A₄ = 1/8, and all other Ak's are zero. So, finally,
substituting the numerical value α = 0.1, we get

    u(x, t) = 3/8 − (1/2) e^(−0.4tπ²/L²) cos(2πx/L) + (1/8) e^(−1.6tπ²/L²) cos(4πx/L).
This is not an infinite series, hence there is no question of convergence. The following pictures
show three plots, corresponding to the same times as in example 21.

Notice the different behavior of the solutions. In example 21 the rod was allowed to lose heat
through the ends; hence, it was reasonable to expect that the rod would cool down, though
the central region would always be warmer than the ends. And we see that the solution u(x, t)
confirms this expectation, because u is a linear combination of decaying exponentials, which
tend to zero as time increases. In the end, thermal equilibrium will be reached only when the
whole rod is at zero temperature.
On the other hand, in the present example, heat cannot escape from the system, so the
equilibrium temperature is determined only by how much heat is stored in the rod. At ther-
mal equilibrium, the temperature must be uniform, and the pictures show that this happens.

The solution u is the sum of two decaying exponentials plus the constant term 3/8, so clearly
limt→∞ u(x, t) = 3/8 for every x. Note also that 3/8 is equal to the average value of u(x, 0). J
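Both facts are easy to confirm numerically: the mean of the initial data over the rod must equal the equilibrium temperature, and the closed-form solution must tend to it. A sketch in Python (the length L = 2.0 is an arbitrary choice, since the text leaves L general):

```python
import math

# Mean of the initial temperature f(x) = sin^4(pi*x/L) over (0, L),
# by the midpoint rule; the insulated rod must equilibrate to this value.
L, n = 2.0, 100_000
h = L / n
mean = (1 / L) * h * sum(math.sin(math.pi * (i + 0.5) * h / L) ** 4 for i in range(n))
assert abs(mean - 3 / 8) < 1e-9

# The closed-form solution of example 22 tends to the constant term 3/8:
def u(x, t):
    return (3 / 8
            - 0.5   * math.exp(-0.4 * t * math.pi ** 2 / L ** 2) * math.cos(2 * math.pi * x / L)
            + 0.125 * math.exp(-1.6 * t * math.pi ** 2 / L ** 2) * math.cos(4 * math.pi * x / L))

assert abs(u(0.3, 200.0) - 3 / 8) < 1e-12  # both exponentials are negligible by now
```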

DIRICHLET AND NEUMANN PROBLEMS


The preceding examples represent two common situations. In example 21, it was required that
the unknown function u took a given value on the boundary (zero, in that case); in example 22
the partial derivative ux was prescribed to be zero on the boundary.
Recall that in both examples the physical meaning of u was the temperature in a rod, and
the conditions of example 21 meant that the boundary of the system was in contact with a
thermostat, i.e., a device that would keep the temperature constant at the ends, by absorbing
or releasing heat. On the other hand, the conditions of example 22 meant that no heat could
go through the boundary of the system.
Boundary conditions like these ones are not restricted to heat conduction problems, but are
common in several other PDEs in mathematical physics, and special names have been given to
them.

IDefinition: Conditions like the ones in example 21, where u is prescribed on the boundary,
are called Dirichlet boundary conditions, after J.P.G. Lejeune-Dirichlet (1805-1859). Conditions
like in example 22, where the normal component of ∇u is prescribed on the boundary, are called
Neumann boundary conditions, after C.G. Neumann (1832-1925). J

Several other mathematicians made great contributions to this field, notably Green, Riemann,
Kelvin and Poisson, but somehow the terms “Dirichlet problem” and “Neumann problem” have
become standard, so you should learn them.
Also, it’s common to distinguish between a homogeneous Dirichlet problem, if u = 0 on
the boundary, and a non-homogeneous one, if the prescribed value of u is not zero. Broadly
speaking, if one can solve a homogeneous problem, then the solution of the corresponding non-
homogeneous problem can also be found, but we’ll not discuss this point in these notes.
A similar distinction is made between homogeneous and non-homogeneous Neumann prob-
lems, where the requirements are placed not on u, but on the component of ∇u normal to the
boundary. Dirichlet problems and Neumann problems, in one or more dimensions, appear in
many fields of mathematical physics beside heat propagation: for example in potential theory,
hydrodynamics, wave propagation, diffusion theory.
7. Application: Laplace Equation in a Square

Heat propagation in two dimensions is described by the two-dimensional heat equation

ut = α(uxx + uyy ),

where u(x, y, t) represents the temperature. This equation is applicable, for example, to a thin,
flat lamina: so thin that its thickness, compared with the other dimensions, may be treated as
zero.
This is a PDE in three variables and we shall not discuss it here. However, one may be
interested only in its equilibrium solutions, i.e., the temperature distribution inside the lamina
once thermal equilibrium has been reached. Naturally, there would have to be temperature
differences along the boundary, or else the equilibrium distribution would be uniform, which
would not be very interesting.

We may find the equilibrium temperature distribution simply by looking for solutions of
the heat equation that no longer depend on time: in other words, we set u = u(x, y), ut = 0,
and look for solutions of
uxx + uyy = 0, (15)
which is called the two-dimensional Laplace equation. The Laplace equation, in the form (15) or
in its three-dimensional form, appears in many fields of mathematical physics, notably electrostatics.
It is so important that a special name has been given to functions that are solutions of the Laplace
equation: they are called harmonic functions.
We consider one of its simplest examples, the Laplace equation in a square. The method
of separation of variables may be easily adapted to solve this problem.

IExample 23 Three sides of a square are kept at constant zero temperature; the fourth side
is kept by a thermostat at a constant temperature of 1 degree. Find the equilibrium temperature
distribution inside the lamina.
Solution: We may choose to measure distances in any units we want. Hence, let's choose our
unit of length in such a way that the side of the square is exactly π units long. We set our x
and y axes so that the square occupies 0 ≤ x ≤ π, 0 ≤ y ≤ π, with u = 1 on the top side
(y = π) and u = 0 on the other three sides. The boundary conditions are:

    u(0, y) = 0,   u(π, y) = 0,   for all y in (0, π)
    u(x, 0) = 0,   u(x, π) = 1.   for all x in (0, π)

There is no initial condition, of course, since u does not depend on time. By the method of
separation of variables, we look for solutions of Laplace equation that have the form

u = X(x) · Y (y).

Simple calculations yield X′′Y + XY′′ = 0. Separating the variables, we get the equation

    X′′(x)/X(x) = −Y′′(y)/Y(y) = λ.


Now, λ may not be a function of y because X′′/X does not depend on y, and may not be a
function of x because Y′′/Y does not depend on x: hence, λ = constant. We get two ordinary
differential equations:

    X′′ = λX,   Y′′ = −λY.
We start with the equation for X because it is subject to two Dirichlet homogeneous boundary
conditions, so we already know what to do. Reasoning exactly as we did for the heat equation,
we deduce that λ must be negative and X must have the form

X = (constant) · sin kx. [k = 1, 2, . . .]

In this way the boundary conditions on the vertical walls of the square are satisfied, for all y.
Also, we note that λ = X′′/X = −k².

So, the equation for Y becomes

    Y′′ = −(−k²)Y = k²Y,   [k = 1, 2, . . .]

which has solutions of the form

    Y = A cosh ky + B sinh ky.

We may fit immediately the boundary condition on the lower side, because if y = 0, then
Y = A cosh 0 + B sinh 0 = A: so, we set A = 0 for all the Y ’s. With only one boundary
condition left, we form the most general linear combination of solutions that meet the other
three conditions: this is the series

    u(x, y) = Σ_{k=1}^∞ Bk sin kx sinh ky.

We now impose that u(x, π) = 1. This yields

    Σ_{k=1}^∞ Bk sin kx sinh kπ = 1   for all x in (0, π).

To determine the parameters Bk , we compare this expansion with the Fourier-sine expansion of
f (x) ≡ 1. This was done in example 21; we simply copy the result here:
    Σ_{k odd} (4/(kπ)) sin kx = 1   for all x in (0, π).


By comparison, we get immediately that

    Bk sinh kπ = 0        if k is even,
    Bk sinh kπ = 4/(kπ)   if k is odd.

Therefore Bk = 0 if k is even, and Bk = 4/(kπ sinh kπ) if k is odd. Substituting back, we find
that the solution is

    u(x, y) = Σ_{k odd} (4/(kπ sinh kπ)) sin kx sinh ky.
The picture, produced by the free package gnuplot, shows a three-dimensional plot of the series
above, truncated after k = 99. Convergence appears to be problematic (Gibbs’ phenomenon) in
a narrow strip near y = π. J
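The series is straightforward to evaluate numerically. A sketch in Python, truncated at k = 99 as in the plot; as a sanity check (an observation of ours, not from the text), rotating the square four times and summing the four boundary-value problems gives boundary data ≡ 1, whose solution is u ≡ 1, so the value at the centre must be exactly 1/4:

```python
import math

def u(x, y, kmax=99):
    """Truncated solution of example 23 (odd k only, up to kmax)."""
    total = 0.0
    for k in range(1, kmax + 1, 2):
        total += (4.0 / (k * math.pi * math.sinh(k * math.pi))) \
                 * math.sin(k * x) * math.sinh(k * y)
    return total

assert abs(u(math.pi / 2, math.pi / 2) - 0.25) < 1e-3  # centre value is 1/4 by symmetry
assert u(1.0, 0.0) == 0.0                              # cold bottom side, exactly
assert 0.8 < u(math.pi / 2, 3.0) < 1.0                 # approaching 1 near the hot side
```

Note that sinh(kπ) stays below the double-precision overflow threshold for k ≤ 99, so the ratio sinh ky / sinh kπ can be formed directly; for much larger k it would be safer to compute the ratio as an exponential difference.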

The method of the preceding example may be easily adapted to some problems where part of
the boundary is thermally insulated, as the next example shows.

IExample 24 In a square lamina having side length of π units, two opposite sides are kept at
constant zero temperature. Another side is thermally insulated, and the fourth side is kept by
a thermostat at a temperature given by the equation f(x) = x(π² − x²), where x is the distance
from a vertex. Find the equilibrium temperature distribution inside the lamina.
Solution: The boundary conditions are as follows. The vertical sides (where x = 0 and x = π,
respectively) are kept at constant zero temperature. The side lying on the x axis is thermally
insulated, which means the directional derivative of u in the normal direction to the side is
zero. Since this side is horizontal, this simply means that uy = 0. Finally, on the top side we
are given that

    u(x, π) = f(x) = x(π² − x²).   [0 < x < π]

For the first part, we proceed like in the previous example.


The boundary conditions on X are the same, therefore

    X = (constant) · sin kx.   [k = 1, 2, . . .]

We find again that

    Y = A cosh ky + B sinh ky,

but this time we impose that Y′(0) = 0. We find that B = 0 and A is free. We then form the
most general linear combination of solutions that satisfy the boundary conditions on three sides:
in this way we get

    u(x, y) = Σ_{k=1}^∞ Ak sin kx cosh ky.
Expanding f(x) into a Fourier-sine series over (0, π) one gets

    bk = (2/π) ∫₀^π x(π² − x²) sin kx dx = 12(−1)^(k+1)/k³;

verify this. It follows that

    f(x) = 12 Σ_{k=1}^∞ ((−1)^(k+1)/k³) sin kx.   [0 ≤ x ≤ π]
Substituting y = π into our series solution, we get

    u(x, π) = Σ_{k=1}^∞ Ak sin kx cosh kπ.

This expression must be identically equal to f(x) for x between 0 and π. By comparison, we
get immediately:

    Ak = 12(−1)^(k+1)/(k³ cosh kπ),

and finally

    u(x, y) = 12 Σ_{k=1}^∞ ((−1)^(k+1)/k³) sin kx (cosh ky / cosh kπ).

The picture (produced by gnuplot) is a sketch of the series above, truncated after the 11th term.

It clearly shows that u = 0 along the y axis (and on the opposite side, which is hidden),
but u 6= 0 along the x axis. If you look carefully, you can also see that the surface u(x, y) is flat
along the line where it intersects the xu plane. This makes sense, since uy = 0 there, i.e., the
variation of u in the direction of the y axis is zero. Note also that Gibbs’ phenomenon does not
occur, thanks to the fact that f (x) is continuous throughout. J
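The coefficient formula bk = 12(−1)^(k+1)/k³ can be checked numerically, as the text suggests ("verify this"). A midpoint-rule sketch in Python (the rule and the sample size are our illustrative choices):

```python
import math

def bk_numeric(k, n=20_000):
    """(2/pi) * integral_0^pi x*(pi^2 - x^2)*sin(kx) dx, via the midpoint rule."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * (math.pi ** 2 - x ** 2) * math.sin(k * x)
    return (2 / math.pi) * h * total

# Compare against the closed form 12*(-1)**(k+1) / k**3 for the first few k.
for k in range(1, 6):
    assert abs(bk_numeric(k) - 12 * (-1) ** (k + 1) / k ** 3) < 1e-5
```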

8. Application: The Vibrating String

An “ideal” string clamped under tension at the ends, as shown in the picture, vibrates transver-
sally in accordance with the equation

    utt = c² uxx.   (16)

[The picture shows a string clamped at x = 0 and x = L, displaced transversally by u.]

The derivation of this equation hinges on some drastic simplifications. It is assumed, for
instance, that the string has zero thickness but not zero mass, that it's perfectly elastic and
uniform, that energy is not dissipated, that weight is negligible, and that the amplitude of
vibrations is extremely small.
In spite of these approximations, the vibrating string equation is remarkably useful, and
may also serve as a first step towards more realistic models.
Physically, c has the dimensions of a velocity: if for instance distances are measured in
kilometers and time in hours, then c is expressed in km/hr.
The method of separation of variables gives us another opportunity to apply Fourier series.
We call u the transversal displacement of the string from equilibrium, as shown in the picture.
We’ll find, as we go along, that a complete solution of this problem requires two boundary
conditions which have the form

u(0, t) = 0 u(L, t) = 0 at all times t,

and two initial conditions:

u(x, 0) = f (x), ut (x, 0) = g(x) [0 < x < L].

The first initial condition describes the initial shape of the string, and the second one gives the
initial velocity of each element of the string in the direction of the u axis.
We look for solutions of (16) that have the form u = X(x) · T (t), and we get:

X′′ = λX,   T̈ = λc²T.

There is nothing new here, so we simply write the eigenfunctions:

    Xk = (constant) · sin(kπx/L).   [k = 1, 2, . . .]
The equation for T then becomes

    T̈ = −(k²π²c²/L²) T.
This is a second-order linear homogeneous ODE, hence its general solution is a linear combi-
nation of two independent solutions. Note the difference with the heat equation: in that case,
proceeding in the same way, we found a first-order equation for T , and so one initial condition

was sufficient to complete the solution of the problem. Here, the general solution for T has the
form

    T = A cos(kπct/L) + B sin(kπct/L),
where A and B are arbitrary parameters.
Proceeding in the same way, we form the most general linear combination of solutions of (16)
that fit the boundary conditions. We write

    u(x, t) = Σ_{k=1}^∞ (Ak cos(kπct/L) + Bk sin(kπct/L)) sin(kπx/L),

and differentiating with respect to time, we get:

    ut(x, t) = Σ_{k=1}^∞ (−(kπc/L) Ak sin(kπct/L) + (kπc/L) Bk cos(kπct/L)) sin(kπx/L).

Setting t = 0 and imposing the initial conditions, we get:

    Σ_{k=1}^∞ Ak sin(kπx/L) = f(x),   Σ_{k=1}^∞ (kπc/L) Bk sin(kπx/L) = g(x).

So, the coefficients Ak and Bk may be found by matching the Fourier coefficients of f(x) and
g(x). In most applications, the string is displaced from equilibrium and released from rest: think,
for example, of pulling it with your fingers and letting it go. In such a case, clearly g ≡ 0.

IExample 25 A string of length π units, clamped at the ends, vibrates in accordance with
the equation utt = 0.01 uxx. It is initially pulled into the shape u(x, 0) = sin⁹x and is released
from rest. Determine its subsequent motion.

Solution: The numerical value of c is c = √0.01 = 0.1. Because g(x) ≡ 0, all the coefficients
Bk are zero. We only need to expand f(x) into a Fourier-sine series, but that is easy: by the
method of example 13, one gets immediately that

    sin⁹x = (126 sin x − 84 sin 3x + 36 sin 5x − 9 sin 7x + sin 9x)/256.
Therefore, the solution is

    u(x, t) = (63/128) sin x cos ct − (21/64) sin 3x cos 3ct + (9/64) sin 5x cos 5ct − (9/256) sin 7x cos 7ct + (1/256) sin 9x cos 9ct.
Every element of the string moves transversally (up and down, in the picture) with period 2π/c;
substituting c = 0.1, we see that the period is about 62.8 units of time. The following picture
shows six snapshots of u(x, t) at regular intervals of 6 units.

Note that, substituting t = 10π into the solution above (i.e., ct = π), all the cosines become −1:
it follows that

    u(x, 10π) = −(63/128) sin x + (21/64) sin 3x − (9/64) sin 5x + (9/256) sin 7x − (1/256) sin 9x
              = −u(x, 0)   for every x.

In other words, at such a time (which is equal to half a period) the shape of the string is exactly
the mirror image of the initial one. This corresponds approximately to 31.4 units, or just after
the last snapshot was taken. The same configuration will return at times t = 30π, 50π, 70π, etc.
In the same way, substituting t = 5π into the solution (i.e., ct = π/2), all the cosines become
zero, and hence
u(x, 5π) = 0 for every x.
This event occurs after a quarter period, and repeats itself at times t = 15π, 25π, 35π, etc.
Practically, for an infinitesimal amount of time, the string is straight but moving.
The string goes through configurations where it is instantaneously at rest (zero kinetic
energy, maximum potential) and ones where it is instantaneously straight (maximum kinetic,
minimum potential energy). The time interval between each pair of such configurations is a
quarter period. J
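The symmetries just described are easy to confirm numerically. A sketch in Python of the solution of example 25 (c = 0.1), checking the initial shape, the mirror image at half a period, and the straight configuration at a quarter period:

```python
import math

c = 0.1
# (coefficient, mode number) pairs of the solution of example 25
modes = [(63 / 128, 1), (-21 / 64, 3), (9 / 64, 5), (-9 / 256, 7), (1 / 256, 9)]

def u(x, t):
    return sum(a * math.sin(k * x) * math.cos(k * c * t) for a, k in modes)

x0 = 1.1  # arbitrary sample point in (0, pi)
assert abs(u(x0, 0.0) - math.sin(x0) ** 9) < 1e-12   # initial shape sin^9(x)
assert abs(u(x0, 10 * math.pi) + u(x0, 0.0)) < 1e-9  # mirror image at half a period
assert abs(u(x0, 5 * math.pi)) < 1e-9                # straight string at a quarter period
```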

The vibrating string equation is our third, and last, example of application of Fourier series
to solution of PDEs. The classic PDEs of mathematical physics are grouped into three great
families, called parabolic, elliptic and hyperbolic. We have seen the simplest instance from
each family. In each case, one may extend the model to higher dimensions, or make it more
realistic by dropping certain approximations. For example, the vibrating string equation may
be modified to account for energy dissipation, without becoming more difficult.
As their names suggest, parabolic, elliptic and hyperbolic PDEs have a very different math-
ematical character, but we are not going to discuss this aspect. They are also very different
from a practical point of view.
Parabolic equations, like the heat equation, describe systems that are evolving towards
equilibrium: in other words, systems where entropy is increasing. The heat equation, which we
have seen in its simplest form, belongs in this family. It was discovered by Fourier, who proceeded
to solve it in two and three dimensions, including non-homogeneous problems. For this reason
it is sometimes called “Fourier equation”; it is also called “diffusion equation” because it may
be applied, with no change, to describe the diffusion of a substance in a solvent.
Elliptic equations, like the Laplace equation, describe equilibrium or steady-state conditions:
no variable depends on time, nothing changes, nothing moves. Hence, no initial conditions are
required. They find application in virtually every field of physics and engineering.
Finally, the vibrating string equation, also known as “d’Alembert equation”, is the simplest
of hyperbolic PDEs. Broadly speaking, hyperbolic PDEs describe waves. The two-dimensional
d’Alembert equation is used to model the vibration of a drum and the propagation of water
waves. Light and sound waves propagate in accordance with utt = c²(uxx + uyy + uzz), which is
the three-dimensional d’Alembert equation. Generalizing even further, one may enter the field
of nonlinear wave theory, which has been growing dramatically in recent years.

The vibrating string equation occupies a special place in the history of mathematics. It was the
first application of PDEs to mechanics; d’Alembert (1747) derived it and solved it by an elegant
change of variables. But in 1753 Daniel Bernoulli published another solution, using for the first
time what today would be called a Fourier series (Fourier wasn’t born yet).

This started an intense debate. Euler, among others, was not convinced. Since there was no
question that d’Alembert’s solution was correct, most scholars felt that D. Bernoulli’s solution
had to be wrong somewhere. For us it’s clear that both solutions were right because they
reproduced the same functions in different ways; but to 18th-century mathematicians this was
unacceptable. Strange as it may seem, the modern concept of function—which you can find in
your first-year calculus book—was unknown to the men who discovered calculus.
Fourier, coming half a century later, saw that D. Bernoulli had a valid intuition, and showed
how it could be applied to a wider class of problems. He invented the method of separation of
variables and the Σ notation for sums.
Initially (1807) Fourier’s papers raised some strong objections too, in part because the the-
ory of series convergence was still poorly understood. For instance, results like our example 15,
where a function of x is expanded in three different series converging to the same limit if x is
between 0 and 10, but to different limits if x is outside that interval, had never been seen before.
They seemed so strange that many mathematicians thought them impossible.
But the time was ripe for moving forward. All the conceivable objections to Fourier’s
method were eventually explained away, and by the time (1822) his book on “analytic theory of
heat” went to press his heresies had become orthodoxies.†
Out of the discussion that flourished around Fourier series and Fourier transform, the
concepts of function, integral and series emerged in the form we still use today.

† From The Norton History of the Mathematical Sciences by I. Grattan-Guinness, W.W. Nor-
ton & Co. (1998), Chapter 8.

PROBLEMS

Fourier Series of Functions Defined Over the Fundamental Interval


1. Find the Fourier series of the following functions for x between −π and π.
(a) f (x) = 1 if −π < x ≤ 0, f (x) = −2 if 0 < x ≤ π.
(b) f (x) = x if −π < x ≤ 0, f (x) = 2x if 0 < x ≤ π.
(c) f (x) = 0 if −π < x ≤ 0, f (x) = x if 0 < x ≤ π.
(d) f (x) = x2 − πx for −π < x ≤ π.
(e) f (x) = 0 if −π < x ≤ 0, f (x) = sin x if 0 < x ≤ π.
Orthogonality
2. For what value of the parameter α are the following pairs of functions orthogonal over (−π, π)?
(a) f (x) = 1 + x3 , g(x) = αx + x2 .
(b) f (x) = x − α, g(x) = ex .

(c) f (x) = π + x, g(x) = x − α.
Fourier Series of Symmetric Functions
3. Find the Fourier series of the following symmetric functions for x between −π and π.
(a) f (x) = −1 if −π < x ≤ 0, f (x) = 1 if 0 < x ≤ π.
(b) f (x) = x2 for −π < x ≤ π.
(c) f (x) = x4 for −π < x ≤ π.
(d) f (x) = x(x2 − π 2 ) for −π < x ≤ π.
(e) f (x) = x cos x for −π < x ≤ π.
(f) f (x) = x7 for −π < x ≤ π.
(g) f (x) = sin(x/3) for −π < x ≤ π.
(h) f (x) = cosh x for −π < x ≤ π.
(i) f (x) = − cosh x if −π < x ≤ 0, f (x) = cosh x if 0 < x ≤ π.
Symmetries of Functions
4. Show that if f(x) is integrable and periodic with period 2π, then ∫_a^(a+2π) f(x) dx = ∫_0^(2π) f(x) dx.
5. Show that if f (x) is integrable and periodic with period 2π, and f (x) = f (π − x) for all x,
then the Fourier coefficients ak = 0 for odd k, and the coefficients bk = 0 for even k.
6. Draw a picture of the following functions, defined over (−π, π):
f (x) = x (π − |x|) g(x) = |x| (π − |x|) h(x) = x (π − x) k(x) = |x| (π − x).
Which one is even or odd?
Numerical Applications
7. Using the results of problem 3, prove the following identities:

(a) π/4 = 1 − 1/3 + 1/5 − 1/7 + · · ·
(b) π√2/4 = 1 + 1/3 − 1/5 − 1/7 + 1/9 + 1/11 − 1/13 − 1/15 + · · ·
(c) π²/12 = 1 − 1/2² + 1/3² − 1/4² + 1/5² − · · ·
(d) π³/32 = 1 − 1/3³ + 1/5³ − 1/7³ + 1/9³ − 1/11³ + · · ·
(e) 1/4 = 3/(2·4) − 5/(4·6) + 7/(6·8) − 9/(8·10) + · · ·
(f) π/(2√3) = 1/(1 − 1/9) − 3/(3² − 1/9) + 5/(5² − 1/9) − 7/(7² − 1/9) + · · ·
(g) π/(2 sinh π) = 1/(2² + 1) − 1/(3² + 1) + 1/(4² + 1) − 1/(5² + 1) + 1/(6² + 1) − 1/(7² + 1) + · · ·
Half Range Expansions
8. Expand these functions into a Fourier-sine series over (0, π):
(a) sin⁷x (b) x(π − x) (c) f(x) = x if 0 < x < π/2; f(x) = π − x if π/2 < x < π. (d) x sin x
9. Expand these functions into a Fourier-cosine series over (0, π):
(a) sin⁸x (b) x(π − x) (c) f(x) = x if 0 < x < π/2; f(x) = π − x if π/2 < x < π. (d) sin 3x.
10. Let f (x) = x2 − 4x for 0 < x < 4. Expand f (x) into a Fourier-sine series over (0, 4).
11. Let f (x) = x sin πx for 0 < x < 7. Expand f (x) into a Fourier-cosine series over (0, 7).
Complex Form of Fourier Series
12. Find the complex Fourier coefficients for the following functions defined between −π and π:
(a) e^(mx), where m is a real constant.
(b) e^(−x) sin x.
(c) (x² − π²)².
13. Find the complex Fourier expansion of the function f (x) defined as follows:
f (x) = sin x if −1 < x < 1, f (x + 2) = f (x) for all x.
14. Show that if f (x) is integrable and periodic with period 2π/7, then its Fourier coefficients
of order k are zero unless k = 0, 7, 14, 21, 28, 35, 42 . . .
In the same way one may prove a similar result for functions with period 2π/n, where n =
2, 3, 4, . . .
Parseval’s Identity
15. Using the result of problem 12(c), prove that π⁸/9450 = Σ_{k=1}^∞ 1/k⁸.
16. Using the result of problem 8(a), prove that (1/π) ∫₀^π sin¹⁴x dx = 429/2048.
17. Using the result of problem 8(b), prove that π⁶/960 = Σ_{k odd} 1/k⁶.
18. Using the result of problem 1(e), prove that π²/16 − 1/2 = Σ_{k even} 1/(k² − 1)².

The One-Dimensional Heat Equation


19. A rod of length L = 2 units, initially at temperature u(x, 0) = x(2 − x), loses heat through
its ends, which are kept at a constant temperature of 0 degrees, in accordance with the equation
ut = 4uxx . Find the temperature u(x, t) for t > 0 and 0 < x < 2.
20. A rod of length L = π units, initially at temperature u(x, 0) = sin x, is thermally insulated
[which means ux = 0] at the ends. Inside the rod the temperature u varies in accordance with
the equation ut = 3uxx . Find the temperature u(x, t) for t > 0 and 0 < x < π.
21. A rod of length L = 4 units loses heat through its ends, which are kept at a constant
temperature of 0 degrees, in accordance with the equation ut = 2uxx . Initially, half of the rod

has a temperature of 100 degrees, while the other half is at 0 degrees: u(x, 0) = 100 if 0 < x < 2,
and u(x, 0) = 0 if 2 < x < 4. Find the temperature u(x, t) for t > 0 and 0 < x < 4.
22. Show that the solution of the equation ut = auxx − bu, where a and b are positive constants,
with the usual initial condition u(x, 0) = f (x) [f given] and boundary conditions u(0, t) =
u(L, t) = 0, is u(x, t) = e−bt · v(x, t), where v(x, t) is the solution of the equation vt = avxx
satisfying the given initial and boundary conditions.
23. A rod of length L = π units, initially at temperature u(x, 0) = x sin x, loses heat through its
ends, which are kept at a constant temperature of 0 degrees, in accordance with the equation
ut − 5uxx = 0. Find the temperature u(x, t) for t > 0 and 0 < x < π.
24. A rod of length L = 1 unit loses heat through its end at x = 0, which is kept at a constant
temperature of 0 degrees, in accordance with the equation ut − uxx = 0. The other end, at
x = 1, is thermally insulated. The initial temperature of the rod is given by f(x) = x/10, where
0 ≤ x ≤ 1. Find the temperature as a function of x and t.
25. A rod of length L = 2 units loses heat through its ends at x = 0 and x = 2, which are kept
at a constant temperature of 0 degrees, in accordance with the equation ut − uxx = 0. The
initial temperature of the rod is given by f(x), where f(x) = x/10 if 0 ≤ x ≤ 1; f(x) = 1/5 − x/10
if 1 ≤ x ≤ 2. Find the temperature as a function of x and t.
Laplace Equation in a Square
26. Find the solution of Laplace equation inside the square 0 ≤ x ≤ π, 0 ≤ y ≤ π, given that
u(0, y) = 0 and u(π, y) = 0 for 0 < y < π; u(x, 0) = 0 and u(x, π) = cos⁵x sin x for 0 < x < π.
27. Find the solution of Laplace equation inside the square 0 ≤ x ≤ L, 0 ≤ y ≤ L, given that
u(0, y) = 0 and u(L, y) = 0 for 0 < y < L; u(x, 0) = 0 and u(x, L) = x(L − x) for 0 < x < L.
28. Find the solution of Laplace equation inside the square 0 ≤ x ≤ π, 0 ≤ y ≤ π, with boundary
conditions ux (0, y) = 0 and ux (π, y) = 0 for 0 < y < π; u(x, 0) = 0 and u(x, π) = sin 3x for
0 < x < π. This equation may represent, for instance, the temperature inside a square where
two sides are thermally insulated and one side is kept at zero degrees.
29. Solve the one-dimensional Laplace equation uxx = 0 over 0 ≤ x ≤ L, with boundary
conditions u(0) = a and u(L) = b. Hint: This is so easy it’s almost embarrassing.
The Vibrating String
30. A string clamped at its ends vibrates in accordance with the equation utt = c²uxx . Initially
the string is at rest and u(x, 0) = f (x). Show that the solution may be written u(x, t) =
(1/2) f (x + ct) + (1/2) f (x − ct). This is called the d’Alembert solution of the vibrating string problem.

ANSWERS

1 (a) −1/2 − Σ_{k odd} 6/(πk) · sin kx. (b) π/4 − Σ_{k odd} 2/(πk²) · cos kx − 3 Σ_{k=1}^∞ (−1)^k/k · sin kx.
(c) π/4 − Σ_{k odd} 2/(πk²) · cos kx − Σ_{k=1}^∞ (−1)^k/k · sin kx.
(d) π²/3 + Σ_{k=1}^∞ 4(−1)^k/k² · cos kx + Σ_{k=1}^∞ 2π(−1)^k/k · sin kx. (e) 1/π − Σ_{k even} 2/(π(k² − 1)) · cos kx + (1/2) sin x.
2 (a) α = −5/3π², (b) α = π coth π − 1, (c) α = π/5.
50 Tutorial Problems Fourier Series

3 (a) Σ_{k odd} 4/(πk³) · sin kx (b) π²/3 + 4 Σ_{k=1}^∞ (−1)^k/k² · cos kx (c) π⁴/5 + 8 Σ_{k=1}^∞ (π²/k² − 6/k⁴)(−1)^k · cos kx
(d) 12 Σ_{k=1}^∞ (−1)^k/k³ · sin kx (e) −(1/2) sin x + 2 Σ_{k=2}^∞ (−1)^k k/(k² − 1) · sin kx
(f) 2 Σ_{k=1}^∞ (π⁶/k − 42π⁴/k³ + 840π²/k⁵ − 5040/k⁷)(−1)^{k+1} · sin kx (g) Σ_{k=1}^∞ √3 k(−1)^{k+1}/(π(k² − 1/9)) · sin kx.
(h) sinh π/π + (2 sinh π/π) Σ_{k=1}^∞ (−1)^k/(k² + 1) · cos kx. (i) (2/π) Σ_{k=1}^∞ k(1 − (−1)^k cosh π)/(k² + 1) · sin kx.
4 Hint: ∫_a^{a+2π} · · · = ∫_a^0 · · · + ∫_0^{2π} · · · + ∫_{2π}^{a+2π} · · ·
5 Hint: Substitute x = π/2 + u and note that f is an even function of u (why?). Expand f into
a Fourier series of u and finally substitute back u = x − π/2.
6 f is odd, g is even, h and k are neither. They all coincide over (0, π).
7 (a) Substitute x = π/2 into 3(a), (b) Substitute x = π/4 into 3(a), (c) Substitute x = 0
into 3(b), (d) Substitute x = π/2 into 3(d), (e) Substitute x = π/2 into 3(e),
(f) Substitute x = π/2 into 3(g), (g) Substitute x = 0 into 3(h).
8 (a) (35 sin x − 21 sin 3x + 7 sin 5x − sin 7x)/64 (b) (8/π) Σ_{k odd} sin kx/k³
(c) (4/π)(sin x − sin 3x/3² + sin 5x/5² − sin 7x/7² + · · ·) (d) (π/2) sin x − (8/π) Σ_{k even} k sin kx/(k² − 1)²

9 (a) (35 − 56 cos 2x + 28 cos 4x − 8 cos 6x + cos 8x)/128 (b) π²/6 − 4 Σ_{k even} cos kx/k²
(c) π/4 − (8/π)(cos 2x/2² + cos 6x/6² + cos 10x/10² + cos 14x/14² + · · ·) (d) 2/(3π) − (12/π) Σ_{k even} cos kx/(k² − 9)

10 f (x) = −(128/π³) Σ_{k odd} sin(kπx/4)/k³.
11 f (x) = 1/π + Σ_{k=1}^6 98(−1)^k cos(kπx/7)/(π(49 − k²)) − cos(πx)/(2π) + Σ_{k=8}^∞ 98(−1)^k cos(kπx/7)/(π(49 − k²))
12 (a) ck = (−1)^k sinh mπ/(π(m − ik)) for all k; (b) ck = (−1)^k sinh π/(π(k² − 2 − i2k)) for all k;
(c) c0 = 8π⁴/15, ck = −24(−1)^k/k⁴ for |k| > 0.

13 sin x = Σ_{k=−∞}^∞ [ i(−1)^k kπ sin 1/(k²π² − 1) ] · e^{ikπx}; note that c0 = 0.
14 Hint: Show that ∫_0^{2π} f (x) e^{ikx} dx = ∫_0^{2π} f (x) e^{ikx} dx · e^{ik2π/7} for k = 0, 1, 2 . . .
19 u(x, t) = (32/π³) Σ_{k odd} e^{−k²π²t} sin(kπx/2)/k³ 20 u(x, t) = 2/π + (4/π) Σ_{k even} e^{−3k²t} cos kx/(k² − 1).
21 u(x, t) = (200/π) Σ_{k odd} e^{−k²π²t/8} sin(kπx/4)/k + (200/π) Σ_{k odd} e^{−k²π²t/2} sin(kπx/2)/k.
22 Hint: Look for a solution of the form u(x, t) = v(x, t) · w(t), etc. This equation describes a
rod that radiates heat, besides losing heat through the ends.
23 u = (π/2) e^{−5t} sin x − (8/π) Σ_{k even} k e^{−5k²t} sin kx/(k² − 1)²
24 u = (4/(5π²)) ( e^{−π²t/4} sin(πx/2)/1 − e^{−9π²t/4} sin(3πx/2)/9 + e^{−25π²t/4} sin(5πx/2)/25 − e^{−49π²t/4} sin(7πx/2)/49 + · · · )
25 The solution is the same as in the preceding problem. Why? Draw a picture of f (x). What
symmetry do you see?
26 u = (5 sin 2x sinh 2y)/(32 sinh 2π) + (4 sin 4x sinh 4y)/(32 sinh 4π) + (sin 6x sinh 6y)/(32 sinh 6π).
27 u = (8L²/π³) Σ_{k odd} sin(kπx/L) · sinh(kπy/L)/(k³ sinh kπ).
28 u = 2y/(3π²) − (12/π) Σ_{k even} cos kx · sinh ky/((k² − 9) sinh kπ).
29 u(x) = a + x (b − a)/L.

Chapter Two

THE CALCULUS OF RESIDUES

1. Revision

Here is a list of results and definitions that you are supposed to know from first and second year:
• A function f (z) of the complex variable z is called holomorphic at a point z0 if, and only
if, its derivative f 0 (z) exists at z0 and in some open neighbourhood of z0 . (Open sets are the
ones that consist only of internal points.) We say a function is holomorphic in a region if it is
holomorphic at every point of that region.
• The existence of f 0 (z) in an open neighborhood of a point may be determined by means of
Cauchy-Riemann Equations which, in most cases, provide a simple method for deciding if (and
where) a function is holomorphic.
• A function is called entire if it’s holomorphic everywhere in the complex plane.
• The Cauchy-Goursat Theorem, also known as “Cauchy Integral Theorem”: If f (z) is
holomorphic on and inside the closed contour† C, then

∮_C f (z) dz = 0.

• Cauchy’s Integral Formula: If f is holomorphic on and inside the closed contour C, and z is
a point inside C, then

f (z) = 1/(i2π) ∮_C f (ζ)/(ζ − z) dζ.    (17)

Note that ζ in (17) is a dummy integration variable; it could be replaced by any other letter
except z.
• Cauchy’s integral formula may be differentiated any number of times with respect to z:

f′(z) = 1/(i2π) ∮_C f (ζ)/(ζ − z)² dζ,   f″(z) = 2/(i2π) ∮_C f (ζ)/(ζ − z)³ dζ,   f‴(z) = 3!/(i2π) ∮_C f (ζ)/(ζ − z)⁴ dζ,

and so on. Generally, for k = 0, 1, 2, . . ., we have:

f^(k)(z) = k!/(i2π) ∮_C f (ζ)/(ζ − z)^{k+1} dζ.    (18)

In other words, a holomorphic function has derivatives of any order.

† By a “contour” we mean a continuous chain of a finite number of simple smooth curves.
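Formula (18) is easy to test numerically: parametrize C as a circle around z0 and approximate the contour integral by a sum over equally spaced points. The sketch below (Python; the helper name contour_derivative is ours, not from the notes) recovers derivatives of e^z this way.

```python
import cmath

def contour_derivative(f, z0, k, r=1.0, n=2000):
    # f^(k)(z0) = k!/(i 2 pi) * integral over C of f(w)/(w - z0)^(k+1) dw,
    # where C is the circle w = z0 + r e^(i phi), sampled at n equally spaced points
    fact = 1
    for j in range(1, k + 1):
        fact *= j
    total = 0j
    for j in range(n):
        phi = 2 * cmath.pi * j / n
        w = z0 + r * cmath.exp(1j * phi)
        dw = 1j * r * cmath.exp(1j * phi) * (2 * cmath.pi / n)   # dw = i r e^(i phi) dphi
        total += f(w) / (w - z0) ** (k + 1) * dw
    return fact * total / (2j * cmath.pi)

# every derivative of e^z at 0 equals 1
print(abs(contour_derivative(cmath.exp, 0, 3) - 1))   # essentially zero
```

Because the integrand is smooth and periodic in φ, this simple equally-spaced sum converges extremely fast.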



The last two results may be extended to multiply connected regions bounded by closed contours.
For example, suppose the closed contours C1 and C2 are contained entirely inside a larger
contour C, and let a function f be holomorphic on the contours and in the shaded region
between them, as shown in the picture. Then the Cauchy-Goursat theorem may be re-written
as follows:

∮_C f (z) dz + ∮_{C1} f (z) dz + ∮_{C2} f (z) dz = 0,

where the direction of integration around each contour is such that the shaded region is on the
left as the contour is traversed in such a direction. This means the outer contour C is covered
counterclockwise, but the inner contours C1 and C2 are covered clockwise.
However, it is generally agreed that a contour integral is taken as negative if the path of
integration is traversed clockwise. As in these notes we follow this convention, the last result
must be written

∮_C f (z) dz − ∮_{C1} f (z) dz − ∮_{C2} f (z) dz = 0,    (19)

with the understanding that all contours are now covered counterclockwise. This formula may
be trivially extended to any number of contours.

2. Complex Sequences and Series

Concepts here are virtually the same as in the real case, bearing in mind that, by definition,

|z| = √(x² + y²),

where x and y are the real and imaginary part of z, respectively. Remember also that

z² = x² − y² + i2xy, and so, in general, |z|² ≠ z²,

whereas, in the real field, you learnt that |x|² = x² and |y|² = y².
In particular, note the following.

IDefinition: A sequence of complex numbers z1 , z2 , z3 , z4 , . . . , zn , . . . is said to converge to a


limit c if, and only if, for any ε > 0, as small as desired, it’s always possible to find an index
N ∈ N such that |zn − c| ≤ ε for all n after N . J

This statement is usually written

lim_{n→∞} zn = c,   or   zn −→ c.

Note: If, as usual, we write zn = xn + iyn for the components of zn , and similarly c = a + ib,
then it’s easy to see that the complex sequence {zn } converges to the complex limit c if, and
only if, the component sequences converge to their corresponding real limits:

zn −→ c   ⇐⇒   xn −→ a and yn −→ b.

This follows immediately from the identity

|zn − c|² = (xn − a)² + (yn − b)²;

convince yourself of this.
SIMPLE TESTS FOR CONVERGENCE

• The Comparison Test: If |zn | ≤ an for all n, and Σ an converges, then Σ zn converges
absolutely.

• The Ratio Test: If lim |zk+1 /zk | = L, then Σ zk converges absolutely if L < 1, and diverges
if L > 1. The test fails (i.e., the series may converge or diverge) if L = 1.

• The Root Test: If lim |zk |^{1/k} = L, then Σ zk converges absolutely if L < 1, and diverges if
L > 1. Again, the test fails if L = 1.
The proofs of these tests for complex series are identical to the ones you learnt in first year for
real series, and will not be repeated here.
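As an illustration (a sketch of ours, not part of the notes), one can compute the ratio |z_{k+1}/z_k| for the terms of two familiar series and watch the limit L emerge:

```python
import math

def term_ratio(term, k):
    # |a_(k+1) / a_k| for the k-th term of a series
    return abs(term(k + 1) / term(k))

z = 3 + 4j                                     # test point with |z| = 5

exp_term = lambda k: z**k / math.factorial(k)  # terms of the exponential series
geo_term = lambda k: (0.5 + 0.5j)**k           # terms of a geometric series

# exponential series: the ratio |z|/(k+1) tends to 0, so L = 0 and the series
# converges for every z; geometric series: the ratio is constant, L = |z| < 1 here
print(term_ratio(exp_term, 100))   # about 5/101
print(term_ratio(geo_term, 100))   # about 0.7071
```
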
3. Power Series

Following Lagrange (1797), we say that a function f (z) is analytic at a point z0 if it may be
represented, inside a neighborhood of z0 , as a power series of the form

f (z) = Σ_{k=0}^∞ ak (z − z0)^k.    (20)

Power series have certain useful properties which are listed here without proof.†
• If the series (20) converges for some z 6= z0 , then it converges absolutely at all points that
are nearer to z0 than z itself.
• There is a circle with center at z0 (called the circle of convergence of the series) such that:
(i) The series converges absolutely for all z inside it;
(ii) The series diverges outside it;
(iii) On the boundary—the circle of convergence itself—the series may converge or diverge.
The circle of convergence may be infinite, in which case, obviously, it coincides with the whole
plane.
The radius of the circle of convergence is called radius of convergence.
• A power series may be differentiated term-by-term inside its circle of convergence:

d/dz Σ_{k=0}^∞ ak (z − z0)^k = Σ_{k=1}^∞ ak · k(z − z0)^{k−1},    (21)

as long as |z − z0 | < r. Also, since the derivative of a power series is another power series, this
procedure may be repeated any number of times:
d²/dz² Σ_{k=0}^∞ ak (z − z0)^k = Σ_{k=2}^∞ ak · k(k − 1)(z − z0)^{k−2},

d³/dz³ Σ_{k=0}^∞ ak (z − z0)^k = Σ_{k=3}^∞ ak · k(k − 1)(k − 2)(z − z0)^{k−3},

† None of the proofs would be particularly difficult, but they are not indispensable for this
course. See, for example, Mathematics of Physics and Modern Engineering by Sokolnikoff and
Redheffer.

and so on and so forth. Note that these formulas are applicable only to points z that are strictly
inside the circle of convergence; for boundary points, term-by-term differentiation may or may
not be allowed.
• If P is a path, not necessarily closed, contained in the circle of convergence, then

∫_P Σ_{k=0}^∞ ak (z − z0)^k dz = Σ_{k=0}^∞ ak ∫_P (z − z0)^k dz,

i.e., term-by-term integration is permissible inside the circle of convergence.

TAYLOR SERIES

We have seen [equation (21)] that an analytic function of the form (20) may be differentiated
term-by-term and therefore has a derivative inside the circle of convergence. As we recalled in
section 1, functions with this property are called holomorphic: so, every analytic function is
holomorphic.
We wish now to show that the converse of this statement is also true: in other words, we
shall prove that if a function f (z) is holomorphic at a point z0 , then it is also analytic.

ITheorem (Taylor Series Theorem).

Let f (z) be holomorphic at all points on and inside a circle Γ centered at z0 . Then at each
point z inside Γ, f (z) may be expanded into a power series of the form

f (z) = Σ_{k=0}^∞ ak (z − z0)^k    (22)

where the coefficients ak are given by:

ak = f^(k)(z0)/k! .
Proof: Consider a circle C centered at z0 , contained entirely inside Γ. Then, by Cauchy’s
integral formula, we get that

f (z) = 1/(i2π) ∮_C f (w)/(w − z) dw.    (23)

We note the identity

1/(w − z) = 1/[(w − z0) − (z − z0)] = 1/(w − z0) · 1/[1 − (z − z0)/(w − z0)].    (24)

Recall the geometric sum, which you saw in high school:

(1 − β^n)/(1 − β) = 1 + β + β² + β³ + · · · + β^{n−1}.    [β ≠ 1]
56 The Calculus of Residues

From this, it follows immediately that

1/(1 − β) = Σ_{k=0}^{n−1} β^k + β^n/(1 − β).    (25)

Substituting

β = (z − z0)/(w − z0)

into (25), we get:

1/[1 − (z − z0)/(w − z0)] = Σ_{k=0}^{n−1} [(z − z0)/(w − z0)]^k + [(z − z0)/(w − z0)]^n / [1 − (z − z0)/(w − z0)],

and hence, going back to (24), we obtain:

1/(w − z) = Σ_{k=0}^{n−1} (z − z0)^k/(w − z0)^{k+1} + (z − z0)^n/[(w − z0)^n (w − z)].
k=0

Substituting this result further back into (23) we get

f (z) = Σ_{k=0}^{n−1} [ 1/(i2π) ∮_C f (w)/(w − z0)^{k+1} dw ] (z − z0)^k + Rn ,

where

Rn = (z − z0)^n/(i2π) ∮_C f (w)/[(w − z0)^n (w − z)] dw
is the “remainder” of the expansion. Assigning a symbol to the expressions in square brackets:
ak ≝ 1/(i2π) ∮_C f (w)/(w − z0)^{k+1} dw    (26)

we can write

f (z) = Σ_{k=0}^{n−1} ak (z − z0)^k + Rn ,

and to complete the proof of (22) we only need to show that the remainder Rn tends to zero.
In other words, we must prove that

(z − z0)^n/(i2π) ∮_C f (w)/[(w − z0)^n (w − z)] dw −→ 0.

To this end we note, first of all, that

| (z − z0)^n/(i2π) ∮_C f (w)/[(w − z0)^n (w − z)] dw |  ≤  (|z − z0|^n/2π) ∮_C |f (w)|/(|w − z0|^n |w − z|) |dw|.

In the integrals above, w covers the circle C centered at z0 . Since f (z) is holomorphic inside
Γ, it is also continuous, and so |f (w)| attains a maximum and a minimum over C, as you
saw in first year. Say M is the maximum: then clearly

|Rn | ≤ (|z − z0|^n M/2π) ∮_C |dw|/(|w − z0|^n |w − z|).

Also, if r is the radius of C, then

w = z0 + re^{iφ},    [0 ≤ φ < 2π]

and hence

dw = ire^{iφ} dφ,    |dw| = r dφ,    |w − z0| = r = constant.

So, substituting back, we see that

|Rn | ≤ (|z − z0|^n M/(2π r^n)) ∮_C r dφ/|w − z| = (1/2π) · |(z − z0)/r|^n · M r ∮_C dφ/|w − z|.

But z is a point inside C (this is the last crucial observation), hence,

|(z − z0)/r| < 1;

a quick look at the picture above will convince you of this. Because w moves on C, and
z is fixed, it’s also clear from the picture that |w − z| is always greater than zero and less than
the diameter of C. Therefore

M r ∮_C dφ/|w − z| < B

(an upper bound that does not depend on n) and finally

lim_{n→∞} |Rn | ≤ lim_{n→∞} (1/2π) · |(z − z0)/r|^n · B = (1/2π) · 0 · B = 0.

As the remainder of the expansion (26) tends to zero, we may conclude that

lim_{n→∞} Σ_{k=0}^{n} ak (z − z0)^k = f (z),

which is equivalent to the statement (22) that we wanted to prove.


Corollary: Taylor’s coefficients. The contour integrals appearing in (26) should look familiar to
you because, up to a factor k!, they are the same as in formula (18):

ak = 1/(i2π) ∮_C f (w)/(w − z0)^{k+1} dw = (1/k!) · k!/(i2π) ∮_C f (w)/(w − z0)^{k+1} dw = (1/k!) · f^(k)(z0).

Therefore, we may also write

f (z) = Σ_{k=0}^∞ [ f^(k)(z0)/k! ] (z − z0)^k,

which means that Taylor’s theorem, which you learnt in first year for real variables, is applicable
to complex functions without change, as long as they possess the first derivative in an open
neighborhood—i.e., as long as they are holomorphic. J

At the beginning of this section, we observed, without proof, that a power series may be differ-
entiated any number of times inside its circle of convergence: therefore, an analytic function is
certainly holomorphic inside its circle of convergence.
Now, we have finished proving that if a function is holomorphic in a region Γ, then it may
be expanded in a power series (i.e., its Taylor series) about any point inside Γ: therefore, a
holomorphic function is analytic.

IConclusion: A function is analytic if, and only if, it is holomorphic. The two words are
completely equivalent and we’ll make, from now on, no distinction between them. J

But remember, this holds for complex functions only. It is not true for real functions; there
are examples of real functions that are not analytic, and yet have continuous derivatives of any
order. All the comments made in this section apply to complex functions only.
A function cannot be analytic/holomorphic just at a single, isolated point. If f (z) may be
expanded as a power series within a certain circle, then it may be expanded as a power series
about any other point inside the same circle. If u + iv has continuous partial derivatives ux , uy ,
vx and vy satisfying Cauchy-Riemann conditions in an open region, then it’s analytic everywhere
inside the same region.
If the circle of convergence of the Taylor series (22) is not infinite, then there must be on
the boundary at least one point where f is not analytic. To see this, you only need to look
at the picture in the proof of Taylor’s theorem. If f is analytic in a region that completely surrounds the circle
C (which is precisely the case shown in the picture), then the radius of C may be increased
without affecting the proof, therefore the Taylor series will converge at all the points inside the
new, greater C. This procedure, enlarging C, may be continued until a point is reached where
f (z) is not analytic; going further would invalidate the key assumption of the theorem.

IConclusion: The radius of convergence of the Taylor series of f (z) about z0 is always equal
to the distance from z0 to the nearest point where f (z) is not analytic. J

On the other hand, the circle of convergence of the Taylor series (22) may be infinite, in which
case obviously f is an entire function.
It is possible for a function to be non-analytic over a whole arc, or even a closed line, but
we’ll not look into that. With a view to practical applications, we need to consider only functions
that fail to be analytic at isolated points.

IDefinition: If f (z) is not analytic at a point s, but there exists an open region R surrounding
s such that f (z) is analytic everywhere else in R, then we say that s is an isolated singular point
of f , or “singularity” for short. J

Clearly, the shape of R in the preceding definition does not matter. Since the singular point
s is internal to R, one can always find a positive number r such that f (z) is analytic for all z
such that 0 < |z − s| < r. The region defined by this double-inequality is an open disk with the
center removed and is called, for obvious reasons, a “punctured disk”.
As in the real case, the power series expansion of a given function f (z) in some region
|z − z0 | < r is unique, hence it coincides with the Taylor series. This fact can often be used
to find a Taylor series without direct use of (22), starting from simple, basic expansions. Since
the derivatives of the “elementary functions” are the same in the real and complex cases, all
the familiar expansions that you saw in first year may be carried over to the complex case. In
practice, almost all Taylor series of elementary calculus may be obtained from three “building
blocks”: the exponential series, the geometric series and the binomial series.
THE EXPONENTIAL SERIES
The function f (z) = e^z is entire, because its Taylor series

e^z = Σ_{k=0}^∞ z^k/k! ,    (27)

converges absolutely for every z (convince yourself of this, using the ratio test or the root test).
Combining (27) with the definitions

cosh z = (e^z + e^{−z})/2   and   sinh z = (e^z − e^{−z})/2

we get immediately that

cosh z = Σ_{k=0}^∞ z^{2k}/(2k)!   and   sinh z = Σ_{k=0}^∞ z^{2k+1}/(2k + 1)! ;    (28)

verify this (all the odd powers cancel out in the first expansion, and all the even ones cancel in
the second expansion). On the other hand, starting with the expansion

e^{iz} = Σ_{k=0}^∞ i^k z^k/k! = 1 + iz − z²/2! − iz³/3! + z⁴/4! + iz⁵/5! − z⁶/6! − iz⁷/7! + · · ·

and recalling that cos z = (e^{iz} + e^{−iz})/2 and sin z = (e^{iz} − e^{−iz})/(2i), we get:

cos z = 1 − z²/2! + z⁴/4! − z⁶/6! + · · ·    sin z = z − z³/3! + z⁵/5! − z⁷/7! + · · ·

i.e.,

cos z = Σ_{k=0}^∞ (−1)^k z^{2k}/(2k)!    sin z = Σ_{k=0}^∞ (−1)^k z^{2k+1}/(2k + 1)!    (29)

IExample 26 Find the Taylor series of f (z) = sinh(2z − 3) about z0 = 1


Solution: Use the identity

sinh(2z − 3) = sinh[2(z − 1) − 1] = sinh(2(z − 1)) · cosh 1 − cosh(2(z − 1)) · sinh 1.


Then, replacing in equation (28) z with 2(z − 1), we get immediately:

sinh(2z − 3) = cosh 1 · Σ_{k=0}^∞ 2^{2k+1}(z − 1)^{2k+1}/(2k + 1)! − sinh 1 · Σ_{k=0}^∞ 2^{2k}(z − 1)^{2k}/(2k)! .

A solution of this form is acceptable because both series above are absolutely convergent, so the
powers of (z − 1) may be rearranged in ascending order, if desired. J
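The expansion is easy to check numerically at a test point (a sketch; the helper name sinh23_series is ours, not part of the notes):

```python
import cmath

def sinh23_series(z, terms=60):
    # partial sum of  cosh(1)*sinh(2(z-1)) - sinh(1)*cosh(2(z-1)) = sinh(2z - 3):
    # odd powers of 2(z-1) carry cosh(1), even powers carry -sinh(1)
    u = 2 * (z - 1)
    total, fact = 0j, 1              # fact holds m! as the loop runs
    for m in range(terms):
        coeff = cmath.cosh(1) if m % 2 else -cmath.sinh(1)
        total += coeff * u**m / fact
        fact *= m + 1
    return total

z = 1.7 + 0.3j
print(abs(sinh23_series(z) - cmath.sinh(2*z - 3)))   # essentially zero
```
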

THE GEOMETRIC SERIES


The series

1/(1 − z) = Σ_{k=0}^∞ z^k    (30)

converges absolutely for |z| < 1. Therefore, the circle of convergence of (30) is the unit circle
with center at the origin. As we mentioned before, there must be (at least) a singular point, on
the circle of convergence, where the function is not analytic; and indeed, f (z) = 1/(1 − z) is not
defined for z = 1.
Replacing in (30) z with −z, we get that

1/(1 + z) = Σ_{k=0}^∞ (−1)^k z^k,    [for |z| < 1]

but now the singular point on the circle of convergence is at z = −1. Integrating this identity
we get

∫_0^z dζ/(1 + ζ) = Σ_{k=0}^∞ (−1)^k ∫_0^z ζ^k dζ

log(1 + z) − log 1 = Σ_{k=0}^∞ (−1)^k z^{k+1}/(k + 1).

It follows that for any point z inside the unit circle C with center at the origin,

log(1 + z) = z − z²/2 + z³/3 − z⁴/4 + · · · ;    (30)

this result extends to complex variables the well-known Mercator’s† formula that you saw in first
year. Naturally, the logarithm used here is the complex logarithm: recall that, by definition,

log(1 + z) = ln |1 + z| + i arg(1 + z),

where arg(1 + z) is a continuous function of z inside the circle C mentioned above. Therefore

−π/2 < arg(1 + z) < π/2.

Similarly, replacing z with −z² in (30), we get

1/(1 + z²) = Σ_{k=0}^∞ (−1)^k z^{2k},

† N Mercator (1620-1687), German; not related to the famous cartographer G Mercator.



and integrating this identity we get

∫_0^z dζ/(1 + ζ²) = Σ_{k=0}^∞ (−1)^k ∫_0^z ζ^{2k} dζ.

It follows immediately:

arctan z = Σ_{k=0}^∞ (−1)^k z^{2k+1}/(2k + 1).    [|z| < 1]

This result extends Gregory’s formula for arctan x, which you saw in first year, to complex
variables.
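Gregory’s series is easy to verify numerically at a complex point inside the unit circle (a sketch; arctan_series is our own helper name):

```python
import cmath

def arctan_series(z, n=200):
    # partial sum of Gregory's series  sum (-1)^k z^(2k+1)/(2k+1),  valid for |z| < 1
    return sum((-1)**k * z**(2*k + 1) / (2*k + 1) for k in range(n))

z = 0.4 + 0.3j                                 # |z| = 0.5 < 1
print(abs(arctan_series(z) - cmath.atan(z)))   # essentially zero
```
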

IExample 27 Find the Taylor series of f (z) = z/(z² − 5z + 6) about z0 = 0.

Solution: Use partial fractions. By Heaviside’s “cover-up” method, one gets that

z/[(z − 2)(z − 3)] = 3/(z − 3) − 2/(z − 2),    (31)

and f (z) is singular at z = 2 and z = 3. For an expansion about z0 = 0, the nearest singular
point is at z = 2, hence the radius of convergence will be 2. It follows that

3/(z − 3) = −1/(1 − z/3) = −Σ_{k=0}^∞ z^k/3^k,    [|z/3| < 1]

and

2/(z − 2) = −1/(1 − z/2) = −Σ_{k=0}^∞ z^k/2^k.    [|z/2| < 1]

Therefore,

z/[(z − 2)(z − 3)] = Σ_{k=0}^∞ (1/2^k − 1/3^k) z^k.

The series converges in the disk of radius 2 shown in the picture. As an exercise, verify this; use
the ratio test. J
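A numerical spot check of the expansion at a point inside the circle of convergence (a sketch; taylor_f is our own helper name):

```python
def taylor_f(z, n=120):
    # partial sum of  z/(z^2 - 5z + 6) = sum (1/2^k - 1/3^k) z^k,  valid for |z| < 2
    return sum((0.5**k - (1/3)**k) * z**k for k in range(n))

z = 0.9 + 0.8j                                # a point with |z| < 2
print(abs(taylor_f(z) - z/(z*z - 5*z + 6)))   # essentially zero
```
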

THE BINOMIAL SERIES


If n is not a whole number, the binomial formula for (a + b)^n , that we applied in example 13,
may not be used as it stands. However, Newton found the series (in modern notation)

(1 + x)^α = Σ_{k=0}^∞ (α choose k) x^k,

where α is an arbitrary number, and

(α choose 0) ≝ 1,    (α choose k) ≝ α(α − 1) · · · (α − k + 1)/k!    [k = 1, 2, 3, . . .]
It’s easy to derive Newton’s binomial series using Taylor’s theorem, and the process may be
extended naturally to complex numbers. Do it, as an exercise, and show also that the series
converges absolutely in the circle |z| < 1. Incidentally, if α is a positive integer, the series
terminates and coincides with the binomial formula of high school; convince yourself of this.

IExample 28 Find the Taylor series of f (z) = 1/√(1 + z) about z0 = 0.
Solution:

1/√(1 + z) = (1 + z)^{−1/2} = Σ_{k=0}^∞ (−1/2 choose k) z^k =
= 1 − z/2 + 3z²/(4 · 2!) − 3·5 z³/(8 · 3!) + 3·5·7 z⁴/(16 · 4!) − 3·5·7·9 z⁵/(32 · 5!) + 3·5·7·9·11 z⁶/(64 · 6!) − · · ·

Obviously, the radius of convergence is 1 (why?). J
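A quick numerical check of the binomial expansion (the helper names binom and inv_sqrt_series are our own, hypothetical ones):

```python
import cmath

def binom(alpha, k):
    # generalized binomial coefficient  alpha(alpha-1)...(alpha-k+1)/k!
    c = 1.0
    for j in range(k):
        c *= (alpha - j) / (j + 1)
    return c

def inv_sqrt_series(z, n=80):
    # partial sum of  (1 + z)^(-1/2) = sum binom(-1/2, k) z^k,  valid for |z| < 1
    return sum(binom(-0.5, k) * z**k for k in range(n))

z = 0.3 + 0.2j
print(abs(inv_sqrt_series(z) - 1/cmath.sqrt(1 + z)))   # essentially zero
```
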

IExample 29 Find the Taylor series of f (z) = arsinh z.

Solution: Using the identity

arsinh z = ∫_0^z dζ/√(1 + ζ²)

and the result of the preceding example, we get immediately:

arsinh z = ∫_0^z Σ_{k=0}^∞ (−1/2 choose k) ζ^{2k} dζ = Σ_{k=0}^∞ (−1/2 choose k) z^{2k+1}/(2k + 1).

The binomial coefficients appearing here are the same as in the preceding example, so they shall
not be repeated. In the same way one may also obtain the Taylor series for arcsin z. J

SERIES MULTIPLICATION AND DIVISION

It may be shown that two series of the form Σ_{k=0}^∞ ak (z − z0)^k and Σ_{k=0}^∞ bk (z − z0)^k may be
multiplied in the natural way within their common region of convergence. We omit the proof.

IExample 30 Expand f (z) = 2/[(2 − z)(1 − z)²] as the product of two series.
Solution: Note that

2/(2 − z) = 1 + z/2 + z²/4 + z³/8 + z⁴/16 + · · ·

(this is a geometric series), and

1/(1 − z)² = 1 + 2z + 3z² + 4z³ + 5z⁴ + · · ·

(this is a binomial series). Substituting these expansions into f (z), we get

f (z) = (1 + z/2 + z²/4 + z³/8 + z⁴/16 + · · ·) · (1 + 2z + 3z² + 4z³ + 5z⁴ + · · ·).

Both series converge in the unit disk, hence the expression above is certainly valid in the unit
disk. Collecting the powers of z, one gets

f (z) = 1 + (2 + 1/2)z + (3 + 2/2 + 1/4)z² + (4 + 3/2 + 2/4 + 1/8)z³ + (5 + 4/2 + 3/4 + 2/8 + 1/16)z⁴ + · · ·

It would be easy to write a program that prints out all the coefficients. The first few terms of
the expansion are

f (z) = 1 + 5z/2 + 17z²/4 + 49z³/8 + 129z⁴/16 + · · ·
This problem may also be done by series division (read on). Alternatively, we could split f (z)
first into partial fractions (Heaviside’s method), and expand afterwards; that approach would
avoid series multiplication altogether. J
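As suggested above, a short program can print as many coefficients of the product as desired. A sketch (cauchy_product is our own helper name; exact rational arithmetic via the fractions module):

```python
from fractions import Fraction

def cauchy_product(a, b):
    # c_k = sum_{j=0..k} a_j * b_(k-j): coefficient of z^k in the product series
    n = min(len(a), len(b))
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

N = 8
a = [Fraction(1, 2**k) for k in range(N)]   # 2/(2 - z)   = 1 + z/2 + z^2/4 + ...
b = [Fraction(k + 1) for k in range(N)]     # 1/(1 - z)^2 = 1 + 2z + 3z^2 + ...
print(cauchy_product(a, b)[:5])             # coefficients 1, 5/2, 17/4, 49/8, 129/16
```
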

Multiplication of analytic functions in the way done in the preceding example, i.e., by collect-
ing similar powers, is called Cauchy product. Power series may also be divided, provided the
denominator is not zero at the center of the expansion.

IExample 31 Expand f (z) = log(1 + z)/(1 + z + z²) in a power series about z = 0.

Solution: Write

log(1 + z)/(1 + z + z²) = A + Bz + Cz² + Dz³ + Ez⁴ + · · · ,

where A, B, etc. are coefficients to be determined. In this example the denominator on the
left-hand side is not an infinite series, but the method works in the same way. Writing first

log(1 + z) = (1 + z + z²) · (A + Bz + Cz² + Dz³ + Ez⁴ + · · ·)

and expanding on the left-hand side by means of (30), we get:

z − z²/2 + z³/3 − z⁴/4 + · · · = (1 + z + z²) · (A + Bz + Cz² + Dz³ + Ez⁴ + · · ·).

Collecting powers of z on the right-hand side, we get

z − z²/2 + z³/3 − z⁴/4 + · · · = A + (A + B)z + (A + B + C)z² + (B + C + D)z³ + (C + D + E)z⁴ + · · ·

By comparison, we get an infinity of equations:

0 = A
1 = A + B
−1/2 = A + B + C
1/3 = B + C + D
−1/4 = C + D + E
etc.

These equations yield all the coefficients recursively, with no need for manipulations. We find
immediately that A = 0, B = 1, C = −3/2, D = 5/6, E = 5/12, F = −21/20, and so on. J
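The recursion is easy to mechanize. A sketch (divide_series is our own helper name; it assumes the denominator series has a nonzero constant term):

```python
from fractions import Fraction

def divide_series(num, den, n):
    # solve den * c = num for the unknown coefficients c, one power of z at a time
    # (requires den[0] != 0)
    c = []
    for k in range(n):
        s = num[k] if k < len(num) else Fraction(0)
        for j in range(1, min(k, len(den) - 1) + 1):
            s -= den[j] * c[k - j]
        c.append(s / den[0])
    return c

# log(1 + z) = z - z^2/2 + z^3/3 - ...   divided by   1 + z + z^2
num = [Fraction(0)] + [Fraction((-1)**(k + 1), k) for k in range(1, 8)]
den = [Fraction(1), Fraction(1), Fraction(1)]
print(divide_series(num, den, 6))   # 0, 1, -3/2, 5/6, 5/12, -21/20
```
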

4. Laurent Series

A Taylor series like (22) is a series of positive powers of (z − z0 ). If we generalize this idea, and
accept series with negative powers, we get a Laurent series.
As we have seen, the Taylor series (22) converges at those points z that are nearer to z0
than the nearest singularity of f (z). A Laurent series does not carry this restriction—at a price,
as we’ll see.

IExample 32 Find an expansion in powers of z for the function f (z) = z/(z² − 5z + 6) of
example 27, that converges for z = 4.
Solution: We may start from the partial fractions expansion (31), obtained in example 27:

f (z) = 3/(z − 3) − 2/(z − 2).

We consider the two partial fractions separately. Simple manipulations yield:


3/(z − 3) = (3/z)/(1 − 3/z) = (3/z) Σ_{k=0}^∞ (3/z)^k ;    [|3/z| < 1]

the series on the right is a simple geometric series that converges absolutely if |z| > 3; since we
seek convergence for z = 4, that’s exactly what we want.
The second partial fraction is handled in the same way:

2/(z − 2) = (2/z)/(1 − 2/z) = (2/z) Σ_{k=0}^∞ (2/z)^k ;    [|2/z| < 1]

same comments. Now we have two series: the first one converges absolutely outside the circle
with center at the origin and radius 3; the second one converges absolutely outside the circle
with center at the origin and radius 2.
Therefore, outside the larger disk both expansions converge absolutely, and may be combined:

f (z) = (3/z) Σ_{k=0}^∞ (3/z)^k − (2/z) Σ_{k=0}^∞ (2/z)^k = Σ_{ℓ=1}^∞ (3^ℓ − 2^ℓ)/z^ℓ = Σ_{m=−∞}^{−1} (3^{−m} − 2^{−m}) z^m.    [|3/z| < 1]

This expansion contains only negative powers of z, and is valid outside the circle |z| = 3.
The region of convergence, as shown in the picture, is the plane with a circular hole of radius 3
and center at the origin. J
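A numerical check of this Laurent expansion at a few points outside the circle |z| = 3 (laurent_outside is our own helper name):

```python
def laurent_outside(z, n=200):
    # partial sum of  sum_{l>=1} (3^l - 2^l)/z^l,  valid for |z| > 3
    return sum((3**l - 2**l) / z**l for l in range(1, n))

for z in (4.0, 5.0, -6.0):                  # all outside the circle |z| = 3
    exact = z / (z*z - 5*z + 6)
    print(abs(laurent_outside(z) - exact))  # all essentially zero
```
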

If one orders the terms of a series in ascending powers, then a typical Taylor series will have a
“beginning” but not an “end”. The series in the last example, though, has an end but not a
beginning. Things can be even more strange, as the next examples show.

IExample 33 Find an expansion in powers of z for the function f (z) = z/(z² − 5z + 6) of
example 27, that converges for z = e.
Solution: The number e ≈ 2.71828, the basis of natural logarithms, is greater than 2 but smaller
than 3. Hence, we use the following geometric expansions, both of which have already been
obtained:

3/(z − 3) = −1/(1 − z/3) = −Σ_{k=0}^∞ z^k/3^k,    [|z/3| < 1]

2/(z − 2) = (2/z)/(1 − 2/z) = Σ_{k=1}^∞ 2^k/z^k.    [|2/z| < 1]

The second of the series above converges absolutely outside the circle |z| = 2; the first one
converges absolutely inside the circle |z| = 3. The point z = e lies between these circles, hence
both expansions are valid for z = e.
Combining them, we get the equation

f (z) = −Σ_{k=1}^∞ 2^k/z^k − Σ_{k=0}^∞ z^k/3^k.    [2 < |z| < 3]

This series doesn’t have a first term, nor a last term, contains all the powers of z (positive,
negative and zero) and converges absolutely in the annulus shown in the picture. J

IExample 34 Find an expansion in powers of z − 2 for the function f (z) = z/(z² − 5z + 6) of
example 27, that converges in the vicinity of z = 2.
Solution: Since f (z) is meaningless at z = 2, it is obvious that the required expansion will not
be a Taylor series. Once again, we start from the partial fractions expansion (31)

z/[(z − 2)(z − 3)] = 3/(z − 3) − 2/(z − 2),

and note that the second term on the right-hand side is already a power of z − 2, so we leave it
as it is. We expand only the first term in powers of z − 2:

3/(z − 3) = −3/[1 − (z − 2)] = −3 Σ_{k=0}^∞ (z − 2)^k.    [|z − 2| < 1]

This is a straight geometric expansion, which converges absolutely inside the circle with radius 1,
centered at z = 2. So, finally,

f (z) = −2/(z − 2) − 3 Σ_{k=0}^∞ (z − 2)^k ;    [0 < |z − 2| < 1]

the region of convergence is the punctured disk with radius 1, centered at 2. J

The series derived in examples 32–34 are all typical Laurent series, i.e., power series containing
negative powers. While the region of convergence of a Taylor series is always a circle, for a
proper Laurent series it’s never a circle. Specifically, in example 32 it was the plane outside a
circle; in example 33 it was a ring; and finally in example 34 it was a punctured disk.
Having seen three examples of Laurent expansion of the same function, let’s see the theorem
that deals with such expansions.
Suppose we need to expand a function f in a power series about z0 , convergent at a point z,
but there are points, between z and z0 , where f is not analytic. This is the situation encountered
in examples 32–34. Then we may use the following theorem.

ITheorem (Laurent Series Theorem).

Let f (z) be analytic on the concentric circles C1 and C2 , centered at z0 , and throughout the
open region between them. Say, z is a point in such a region. Then

f (z) = Σ_{k=0}^∞ ak (z − z0)^k + Σ_{k=1}^∞ bk (z − z0)^{−k}    (32)

where

ak = 1/(i2π) ∮_{C1} f (w)/(w − z0)^{k+1} dw    [k = 0, 1, 2, . . .]

and

bk = 1/(i2π) ∮_{C1} f (w)(w − z0)^{k−1} dw.    [k = 1, 2, 3, . . .]

Proof: Apply Cauchy’s integral formula to the (multiply connected) region bounded by C1 and
C2 , oriented as shown in the picture. We get:

f (z) = 1/(i2π) ∮_{C1} f (w)/(w − z) dw − 1/(i2π) ∮_{C2} f (w)/(w − z) dw,

where both circles are considered counterclockwise. The minus sign in front of the second integral
accounts for the fact that C2 is actually covered clockwise. The last result may be written

f (z) = I1 + I2 ,

where (note the change of sign in the second term)

I1 = 1/(i2π) ∮_{C1} f (w)/(w − z) dw,    I2 = 1/(i2π) ∮_{C2} f (w)/(z − w) dw.

Proceeding as in the proof of Taylor’s theorem [go back to (25) and (26)], we find that

    I1 = Σ_{k=0}^{n−1} a_k (z − z0)^k + R_n,

and R_n −→ 0 as n tends to infinity. Next, we consider I2: using again (25) and simplifying (you should be able to fill in the details) yields:

    1/(z − w) = Σ_{k=1}^n (w − z0)^{k−1}/(z − z0)^k + (w − z0)^n/[(z − z0)^n (z − w)].

Thus one eventually gets that

    I2 = Σ_{k=1}^n b_k (z − z0)^{−k} + Q_n,

where

    Q_n = (1/(i 2π (z − z0)^n)) ∮_{C2} f(w)(w − z0)^n/(z − w) dw.
The proof that Qn −→ 0 as n → ∞ is very similar to that for Rn , and is left as an exercise.
At this point, observe that f(w)(w − z0)^{k−1} is analytic on C1, on C2 and in the open region between them; therefore

    (1/(i 2π)) ∮_{C2} f(w)(w − z0)^{k−1} dw = (1/(i 2π)) ∮_{C1} f(w)(w − z0)^{k−1} dw;

remember, all contour integrals are taken counterclockwise. Therefore, combining I1 and I2 and letting n → ∞, we obtain (32).
Corollary: If f (z) is analytic on and inside C1 , then bk = 0 for all k, and (32) reduces to Taylor’s
series. J

It’s possible to show that the Laurent expansion of a given function f (z) in a given region is
unique. This result can often be used, exploiting known expansions, to find the Laurent series.

IExample 35 Expand f (z) = e2z /(z − 1)2 about the (singular) point z0 = 1 and determine
the region of convergence.
Solution: Use the identity

    e^{2z}/(z − 1)² = e² · e^{2(z−1)}/(z − 1)² = [e²/(z − 1)²] · Σ_{k=0}^∞ 2^k (z − 1)^k/k!
                   = e² · Σ_{k=0}^∞ 2^k (z − 1)^{k−2}/k!.

Therefore (substitute k = 2 + m, etc):

    f(z) = e²/(z − 1)² + 2e²/(z − 1) + 4e² · Σ_{m=0}^∞ 2^m (z − 1)^m/(m + 2)!.

Since the exponential series converges over the whole plane, the result above is valid in the
punctured plane with the point z = 1 removed. J

IExample 36 Find two different Laurent expansions about z0 = 1, with different regions of
validity, for the function f (z) = 1/z(z − 1)2 .
Solution: This example is very similar to example 27, the only difference being the square in the
denominator.
Preliminary observations: f has two singular points, namely 0 and 1. Any expansion about
1 will certainly not be a Taylor series.
• To get a Laurent series in the punctured disk 0 < |z − 1| < 1, use the expansion

    1/z = 1/[1 + (z − 1)] = Σ_{k=0}^∞ (−1)^k (z − 1)^k ;    [|z − 1| < 1]

it follows immediately that



    f(z) = (1/z) · (z − 1)^{−2} = Σ_{k=0}^∞ (−1)^k (z − 1)^{k−2}.    [|z − 1| < 1]

• To get the other Laurent series about z0 , use the expansion



    1/z = 1/[1 + (z − 1)] = [1/(z − 1)] · 1/[1 + 1/(z − 1)] = Σ_{k=0}^∞ (−1)^k/(z − 1)^{k+1}.    [|z − 1| > 1]

It follows immediately that



    f(z) = (1/z) · 1/(z − 1)² = Σ_{k=0}^∞ (−1)^k/(z − 1)^{k+3},

and this expansion is valid in the region outside the disk with radius 1, centered at z0 = 1 (the
complex plane with a hole, if you wish). Note that while both expansions are proper Laurent
series, the first one has only two negative powers, followed by an infinity of positive powers; the
second one has no positive powers. J

IExample 37 Find the Laurent series about z = 2 for the function f (z) = z/(z 2 − 5z + 6) of
example 27, that converges for arbitrarily large |z|.
Solution: This is yet another variation of example 27. It is a combination of example 32 and
example 34. We start like in example 34, observing that in the equation

    z/[(z − 2)(z − 3)] = 3/(z − 3) − 2/(z − 2),

the second term on the right-hand side is already a power of z − 2, so we do not touch it. The
first term may be written

    3/(z − 3) = 3/(z − 2 − 1) = [3/(z − 2)] · 1/[1 − 1/(z − 2)].

A straightforward geometric series expansion yields



    3/(z − 3) = [3/(z − 2)] · Σ_{k=0}^∞ 1/(z − 2)^k .    [|z − 2| > 1]

It follows immediately that



    z/[(z − 2)(z − 3)] = [3/(z − 2)] · Σ_{k=0}^∞ 1/(z − 2)^k − 2/(z − 2)
                      = 3 · Σ_{k=1}^∞ 1/(z − 2)^k − 2/(z − 2)
                      = 1/(z − 2) + Σ_{k=2}^∞ 3/(z − 2)^k .    [|z − 2| > 1]

[figure: the region of convergence |z − 2| > 1, with the points 0, 2 and 3 marked on the real axis]

The region of convergence is the whole plane with a circular hole, as shown in the picture, of radius 1 and center at z = 2. J
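This last expansion is easy to sanity-check numerically; the following Python sketch (ours, not part of the text) compares a partial sum with z/((z − 2)(z − 3)) at a point with |z − 2| > 1:

```python
# compare a partial sum of 1/(z-2) + sum_{k>=2} 3/(z-2)**k
# with z/((z-2)*(z-3)) at a point outside the hole |z - 2| <= 1
z = 4.5 + 1.0j                 # here |z - 2| is about 2.69 > 1
w = z - 2
partial = 1/w + sum(3/w**k for k in range(2, 80))
exact = z/((z - 2)*(z - 3))    # truncation error is of order 3*|w|**(-80)
```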

5. Classification of Singularities

Let z0 be an isolated singular point of f(z). Then it is possible that f(z) may be expanded as a Laurent series in a punctured disk centered at z0. Suppose this is the case.

IDefinition: If the Laurent expansion has the form

    f(z) = Σ_{k=1}^n b_k/(z − z0)^k + Σ_{k=0}^∞ a_k (z − z0)^k ;    [b_n ≠ 0]

[in other words, if the Laurent series has only a finite number of negative powers, starting with
(z − z0 )−n ], then we say that f (z) has a pole of order n at z0 .
In many books the first sum is called the principal part of f (z), and the second one is called
the analytic part. J

Make sure you understand this point. We have seen that a function may have several
Laurent expansions about the same point z0 ; obviously, each expansion will be valid in a different
region—and it will be unique in that region. The expansion that matters, as far as this definition
goes, is the one valid in a punctured disk centered at z0 .

IExample 38 The function

    f(z) = (z² − 7z + 18)/(z − 2) = [(z² − 4z + 4) − 3(z − 2) + 8]/(z − 2) = 8/(z − 2) − 3 + (z − 2)

has a pole of order 1 at z = 2. J

Poles of order 1, 2 and 3 are sometimes called "simple", "double" and "triple" poles, respectively.

IExample 39 The function

    e^z/z³ = (1/z³) · (1 + z + z²/2! + z³/3! + z⁴/4! + ···) = 1/z³ + 1/z² + 1/(2z) + 1/3! + z/4! + z²/5! + z³/6! + ···

has a pole of order 3 at z = 0. J

IExample 40 Show that the function

    f(z) = 1/(1 − cos z)

has a 2nd-order pole at z = 0.
Solution: The denominator may be written

    1 − cos z = 1 − Σ_{k=0}^∞ (−1)^k z^{2k}/(2k)! = z²/2! − z⁴/4! + z⁶/6! − ···

We factor out from the denominator the leading power, which is z 2 , and write
    f(z) = (1/z²) · 1/(1/2! − z²/4! + z⁴/6! − z⁶/8! + ···) = (1/z²) · 1/g(z),
where

    g(z) def= 1/2! − z²/4! + z⁴/6! − z⁶/8! + ···
Note that g(z) is an even function; it is also analytic at z = 0, and since g(0) is not zero, then
1/g(z) may be expanded in a power series of the form:
    1/g(z) = A + B z² + C z⁴ + D z⁶ + ···
Clearly A 6= 0 (and it’s easy to see that A = 2); but the numerical values of the coefficients A,
B, C, etc. are not needed. The simple fact that we may write
    f(z) = (1/z²) · (A + B z² + C z⁴ + D z⁶ + ···) = A/z² + B + C z² + D z⁴ + ···    [A ≠ 0]
in a punctured disk centered at 0 means f (z) has a double pole. How big is this punctured
disk? Easy: it must reach to the nearest singularity of f (z) which, in this case, means the
nearest point where 1/(1 − cos z) → ∞. There are two nearest points, namely z = ±2π, so the
punctured disk has radius 2π. J
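As a numerical illustration (ours, not the book's): since f(z) = A/z² + B + ··· with A = 2 near the origin, the product z² f(z) must approach 2 as z → 0.

```python
import cmath

# z^2 * f(z) = z^2/(1 - cos z) should approach the leading coefficient A = 2
vals = [z*z/(1 - cmath.cos(z)) for z in (0.1, 0.01 + 0.01j, 1e-3)]
```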

The order of a zero of an analytic function is defined in a similar way to the order of a pole.
However, a zero is not a singular point.

IDefinition: If f(z) is analytic at z0 and, in a disk centered at z0, it has a Taylor expansion of the form

    f(z) = Σ_{k=n}^∞ a_k (z − z0)^k ,    [a_n ≠ 0]

then we say that f (z) has a zero of order n at z0 . J



Obviously, if f is a polynomial, then a zero of order n is what in second year you called a
“multiple root of order n”. Zeroes may be viewed as poles of negative order, and vice-versa.

IExample 41 The function 1 − cos z (see the preceding example) has a double zero at 0. J

IExample 42 The function

f (z) = 5 sin z − 4 sin 2z + sin 3z

has a zero at z = 0; find its order.


Solution: Since

    f(z) = 5·(z − z³/3! + z⁵/5! − ···) − 4·(2z − 8z³/3! + 32z⁵/5! − ···) + (3z − 27z³/3! + 243z⁵/5! − ···)
         = z⁵ − z⁷/3 + ···,
we see that f (z) has a fifth-order zero at 0. J
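A fifth-order zero means f(x)/x⁵ tends to the leading Taylor coefficient, here 1, as x → 0; a short numerical check (ours):

```python
import math

def f(x):
    # the function of example 42
    return 5*math.sin(x) - 4*math.sin(2*x) + math.sin(3*x)

# near 0, f(x) = x**5 - x**7/3 + ..., so f(x)/x**5 is close to 1
ratio = f(0.1)/0.1**5
```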

If f (z) has a pole of order n at a point z0 , then clearly f (z) may be written as the product of
1/(z − z0 )n times an analytic function. Also, it’s easy to show that in this case |f (z)| tends to
infinity as z approaches z0 .
Many books introduce at this point the word “meromorphic” to describe a function that,
inside a certain region, is either analytic or has (at worst) poles of finite order; you’ll probably
find this word if you do some reading on your own.
There are two kinds of singular points that we still need to examine. The first one is the
essential singularity.

IDefinition: If, in a punctured disk centered at z0, a function f(z) has a Laurent expansion of the form

    f(z) = Σ_{k=1}^∞ b_k/(z − z0)^k + Σ_{k=0}^∞ a_k (z − z0)^k ,

where an infinite number of coefficients bk are not zero [in other words, the Laurent series does
not have a beginning], then we say that f (z) has an essential singularity at z0 . J

It helps if you think of an essential singularity as a “pole of infinite order”, but be careful: it’s
possible to show that |f (z)| does not tend to any limit as z approaches an essential singularity.
This is very different from the behavior of f (z) near a pole. There’s also an interesting theorem†
saying that in any neighborhood of an essential singularity (no matter how small), f (z) gets
arbitrarily close to any chosen value. Quite difficult to picture in your mind!

IExample 43 The function e^{1/(z−1)} has an essential singularity at z0 = 1 because

    e^{1/(z−1)} = 1 + (z − 1)^{−1} + (z − 1)^{−2}/2! + (z − 1)^{−3}/3! + ···;
incidentally, this expansion is valid in the whole plane punctured at 1. J
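Away from the singular point the expansion can be checked against exp(1/(z − 1)) directly; a small Python sketch (ours):

```python
import cmath

z = 1.2 + 0.4j
w = 1/(z - 1)
# build the partial sum 1 + w + w**2/2! + ... + w**29/29!
term, total = 1 + 0j, 1 + 0j
for k in range(1, 30):
    term *= w/k
    total += term
exact = cmath.exp(w)
```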

† The Casorati-Weierstrass theorem.



Finally, there are some functions that simply cannot be expanded in a power series (whether
Taylor or Laurent) about a singular point. The simplest example is perhaps f (z) = z 1/2 , which
cannot be expanded about z0 = 0, although it’s analytic everywhere else. What makes this
function different from all the other examples seen so far is the fact that z^{1/2} is two-valued: for any z (except the origin) there are two numbers, ζ and −ζ, such that ζ² = (−ζ)² = z. In order to make z^{1/2} one-valued, we may cut the complex plane along a path that branches out from the origin; such a path is called a "branch cut".
For example, if we cut the complex plane along the negative x axis, then in practice we
agree that the argument of every point must be an angle between −π and +π. So, if we calculate
i1/2 and (−i)1/2 , we get, respectively:
    i^{1/2} = (e^{iπ/2})^{1/2} = e^{iπ/4} = (1 + i)/√2,
    (−i)^{1/2} = (e^{−iπ/2})^{1/2} = e^{−iπ/4} = (1 − i)/√2.

[figure: branch cut along the negative x axis; the argument φ runs from −π to π]
But if we cut the plane along the positive x axis, and so decree that the arguments run from 0
to 2π, we get instead:
    i^{1/2} = (e^{iπ/2})^{1/2} = e^{iπ/4} = (1 + i)/√2,
    (−i)^{1/2} = (e^{i3π/2})^{1/2} = e^{i3π/4} = (−1 + i)/√2.

[figure: branch cut along the positive x axis; the argument φ runs from 0 to 2π]

IDefinition: A singular point that lies at the end of a branch cut is called a branch point. J

Branch cuts are completely arbitrary: sometimes it’s convenient to cut the plane one way,
sometimes another way. However, once chosen, they must be obeyed: for example, path integrals
may not trespass a branch cut. We’ll learn more about branch cuts as we go along.

6. The Residue Theorem

Suppose f (z) is analytic in a punctured disk R centered at the singular point z0 , and may be
represented in R by a Laurent series of the form (32).

IDefinition: The coefficient b1 of the expansion (32) is called the residue of f at z0, and is usually written res_{z=z0} f(z). J

By the Laurent series theorem (see page 66) we find that

    b_k = (1/(i 2π)) ∮_{C1} f(w)(w − z0)^{k−1} dw,    [k = 1, 2, 3, . . .]

and so, clearly, that

    res_{z=z0} f(z) def= b1 = (1/(i 2π)) ∮_C f(z) dz,    (33)
where C is any circle centered at z0 , contained entirely in the punctured disk R.
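Formula (33) is also a practical recipe for computing residues numerically: discretize the circle C with the trapezoidal rule, which converges extremely fast for integrands analytic near C. A Python sketch (ours; the helper name is an invention for illustration):

```python
import cmath

def residue_numeric(f, z0, r=0.5, n=2048):
    # approximate (1/(i 2pi)) * contour integral of f over |z - z0| = r;
    # with dz = i*(z - z0)*dtheta the factors i and 2*pi cancel, leaving
    # a plain average of f(z)*(z - z0) over equally spaced points
    total = 0j
    for k in range(n):
        z = z0 + r*cmath.exp(2j*cmath.pi*k/n)
        total += f(z)*(z - z0)
    return total/n

# the residue of e^z/z at z = 0 is e^0 = 1
r0 = residue_numeric(lambda z: cmath.exp(z)/z, 0)
```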

ITheorem (Residue Theorem) Let f(z) be analytic on and inside a closed contour C except for a finite number n of isolated singular points z1, z2, . . . zn. Then

    ∮_C f(z) dz = i 2π · Σ_{k=1}^n res_{z=z_k} f(z).

Proof: Enclose each z_k by a sufficiently small circle C_k, centered at z_k, so that no circle intersects another circle. This is certainly possible, as shown in the picture. Apply (19) in the region bounded by the outer contour C and by the small circles C1, C2, . . . Cn. Then (recall that all integrals are taken counterclockwise):

    ∮_C f(z) dz − Σ_{k=1}^n ∮_{C_k} f(z) dz = 0.

[figure: the contour C enclosing small circles C1, . . . , C4 around the singular points z1, . . . , z4]

But, by (33),

    ∮_{C_k} f(z) dz = i 2π · res_{z=z_k} f(z),    [k = 1, 2, . . . , n]

and the statement of the theorem follows. J

FINDING RESIDUES

In order to apply the residue theorem, we must be able to find the residue at an isolated singular point of a function f(z). So, let's see a few examples of calculation of residues.
• (i) Before we start, note that if f (z) is analytic at a point a, then clearly its residue at a is
zero, but the converse is not true: for example,
    res_{z=0} [1/z²] = 0,

but 1/z² has a 2nd-order pole (and so it's not analytic) at z = 0.


• (ii) Note that if

    f(z) = g(z)/(z − a),

where g(z) is another function, analytic at a, and g(a) ≠ 0, then f(z) has a 1st-order pole at a and

    res_{z=a} f(z) = g(a);    (34)

convince yourself of this.



IExample 44 The residue at z = 3 of f (z) = (tanh z)/(z − 3) is tanh 3. J

On the other hand, if g is analytic at a and g(a) = 0, then f(z) = g(z)/(z − a) is also analytic at a. In this case, a is not a singularity (some books would call it a "removable singularity").

IExample 45 Let

    f(z) = sin πz/(z − 1).

Here, a = 1 and g(z) = sin πz; we find that g(1) = 0, hence z = 1 is not a proper singularity of f(z). Using the identity sin πz = − sin π(z − 1), we find that

    lim_{z→1} sin πz/(z − 1) = lim_{z→1} [− sin π(z − 1)/(z − 1)] = −π,

which yields f(1) = −π. Therefore,

    f(z) = −π + a1 (z − 1) + a2 (z − 1)² + a3 (z − 1)³ + ···

Note that this expansion is valid for every z; hence f(z) is an entire function. J
IExample 46 Find res_{z=0} (sin z/z²).
Solution: Define g(z) = sin z/z, and observe that

    g(z) = (1/z) · (z − z³/3! + z⁵/5! − z⁷/7! + ···) = 1 − z²/3! + z⁴/5! − z⁶/7! + ···

We see that g(0) = 1, and therefore res_{z=0} [sin z/z²] = res_{z=0} [g(z)/z] = g(0) = 1. J

IExample 47 The residue at z = 0 of f (z) = (cosh z − 1)/z 3 is 1/2, because the function
g(z) = (cosh z − 1)/z 2 is entire and g(0) = 1/2 (de l’Hospital’s theorem, if you need to ask). J

IExample 48 Find the residue of f (z) = (z + 1)/z(z − 2)2 at its singular points.
Solution: The singular points are z = 0 and z = 2. Now, at z = 0 we have that g(z) =
(z + 1)/(z − 2)2 is analytic and g(0) = 1/4, hence the residue there is 1/4. However, by Heaviside’s
method, we get immediately that
    f(z) = (1/4)/z − (1/4)/(z − 2) + (3/2)/(z − 2)²,

which not only confirms that res_{z=0} f(z) = 1/4, but also yields that res_{z=2} f(z) = −1/4. J
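The decomposition can be verified at an arbitrary test point with a couple of lines of Python (ours):

```python
# both sides of the partial fraction decomposition of example 48
z = 0.7 + 1.3j
lhs = (z + 1)/(z*(z - 2)**2)
rhs = 0.25/z - 0.25/(z - 2) + 1.5/(z - 2)**2
```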

IExample 49 Find the residue of f (z) = (2z 2 − 3z + 5)/(z 2 − z) at z = 0 and z = 1.


Solution: Partial fractions; by Heaviside’s method we get that

    1/(z² − z) = 1/[z(z − 1)] = 1/(z − 1) − 1/z.

It follows that

    f(z) = (2z² − 3z + 5)/(z − 1) − (2z² − 3z + 5)/z.
Now, the first term on the right-hand side is singular at z = 1 and analytic at z = 0, while the
second term is analytic at z = 1 and singular at z = 0. Therefore,

    res_{z=1} f(z) = 4,    res_{z=0} f(z) = −5. J

IExample 50 Find the singular points of f(z) = 6e^{iπz}/(9z² − 1), and hence the corresponding residues.
Solution: Since g(z) = 6e^{iπz} is entire, the singular points of f are the points where 9z² − 1 = 0, namely z = ±1/3. By partial fractions we get:

    1/(9z² − 1) = (1/6)/(z − 1/3) − (1/6)/(z + 1/3).

Therefore,

    f(z) = e^{iπz}/(z − 1/3) − e^{iπz}/(z + 1/3).

It follows that

    res_{z=1/3} f(z) = e^{iπ/3} = (1 + i√3)/2,    res_{z=−1/3} f(z) = −e^{−iπ/3} = (−1 + i√3)/2. J

• (iii) If, in a neighborhood of a point a,

    f(z) = p(z)/q(z),

where both p(z) and q(z) are analytic, q(a) = 0 and q′(a) ≠ 0: then the residue at a is

    res_{z=a} f(z) = p(a)/q′(a).    (35)

To see this, simply expand p and q in a Taylor series, but remember that q(a) = 0:

    f(z) = [p(a) + p′(a)(z − a) + ½ p″(a)(z − a)² + ···]/[q′(a)(z − a) + ½ q″(a)(z − a)² + ···]

Factoring out 1/(z − a) from the denominator, we get immediately that

    f(z) = 1/(z − a) · [p(a) + p′(a)(z − a) + ½ p″(a)(z − a)² + ···]/[q′(a) + ½ q″(a)(z − a) + ···]

Since the last term on the right-hand side is analytic at a, then we let z → a and deduce that

    res_{z=a} f(z) = p(a)/q′(a),

which is formula (35).


IExample 51 Find res_{z=3} [(z² − 2z + 5)/(z⁴ − 4z³ + 7z + 6)].
Solution: Let p(z) = z² − 2z + 5, q(z) = z⁴ − 4z³ + 7z + 6, and apply (35). It follows that q′(z) = 4z³ − 12z² + 7. Since q(3) = 0 and q′(3) = 7 ≠ 0, f(z) has a simple pole at z = 3, and finally

    res_{z=3} [(z² − 2z + 5)/(z⁴ − 4z³ + 7z + 6)] = p(3)/q′(3) = 8/7. J
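A numerical cross-check of formula (35) against a direct contour integral around z = 3 (a Python sketch of ours; the radius 0.3 keeps the other zeroes of q outside the circle):

```python
import cmath

p = lambda z: z*z - 2*z + 5
q = lambda z: z**4 - 4*z**3 + 7*z + 6

by_formula = p(3)/(4*3**3 - 12*3**2 + 7)   # p(3)/q'(3) = 8/7

# trapezoidal approximation of (1/(i 2pi)) * contour integral over |z - 3| = 0.3
n, r = 2048, 0.3
acc = 0j
for k in range(n):
    z = 3 + r*cmath.exp(2j*cmath.pi*k/n)
    acc += p(z)/q(z)*(z - 3)
by_contour = acc/n
```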

IExample 52 The function f(z) = cot z = cos z/sin z is singular at the zeros of sin z, i.e., at z = 0, z = ±π, z = ±2π, etc. At any of these points, say z = mπ, let p(z) = cos z, q(z) = sin z and use (35). It follows that q′(mπ) = cos mπ = (−1)^m. Hence, all the singular points are simple poles, and therefore res_{z=mπ} cot z = cos mπ/cos mπ = 1. J

Warning: Before applying formula (35), you must check (a) that the pole is first-order, but also
(b) that p and q satisfy the requirements given above. Checking only that the pole is first-order
may lead you into error, if you are not careful, as the next example shows.

IExample 53 Find res_{z=0} sin z/(1 − cos z).
Solution: First of all, let's do it right. From first principles,

    sin z/(1 − cos z) = [z − z³/3! + z⁵/5! − z⁷/7! + ···]/[1 − (1 − z²/2! + z⁴/4! − z⁶/6! + ···)]
                      = [z − z³/3! + z⁵/5! − z⁷/7! + ···]/[z²/2! − z⁴/4! + z⁶/6! − ···]

It’s clear that both sin z and 1 − cos z have a zero at z = 0, but the denominator has a 2nd-
order zero, while the numerator has a simple zero. Therefore, the order of the pole is 2 − 1 = 1.
Simplifying, it follows that
à !
1 2 1 4 1 6
sin z 1 1 − 3! z + 5! z − 7! z + ···
= · 1 1 2 1 4 ,
1 − cos z z 2! − 4! z + 6! z − · · ·

which is of the form

    f(z) = g(z)/z,

where g(z) is the expression in big brackets. Since clearly g(z) is analytic at z = 0 and g(0) = 2, we get immediately that

    res_{z=0} sin z/(1 − cos z) = 2.

Now, let's do it the wrong way: say, p(z) = sin z and q(z) = 1 − cos z. By formula (35), it would appear that res_{z=0} sin z/(1 − cos z) = p(0)/q′(0) = lim_{z→0} sin z/sin z = 1. Where is the
mistake? It is true that the pole at z = 0 is 1st-order, and that p and q are analytic everywhere,
but it is not true that q 0 (0) 6= 0. Hence (35) is not applicable in this way. J
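Since the pole is simple, z · f(z) must tend to the residue as z → 0; a quick numerical check (ours) confirms the value 2 rather than 1:

```python
import cmath

# for a simple pole at 0, z*f(z) -> residue; here the residue is 2
vals = [z*cmath.sin(z)/(1 - cmath.cos(z)) for z in (0.1, 0.01, 0.001 + 0.001j)]
```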

We conclude this section with some results that may be useful at poles of order greater than 1.
• (iv) If a function f (z) has a pole of order n at a, then it may be written as
    f(z) = g(z)/(z − a)^n,

where g is analytic at a. Therefore, if we know the Taylor series for g:

    g(z) = Σ_{k=0}^∞ a_k (z − a)^k ,

we recognize immediately that res_{z=a} f(z) = a_{n−1}.



IExample 54 The function f(z) = ∛(1 + z)/z⁵ clearly has a 5th-order pole at z = 0, and g(z) = ∛(1 + z), which admits a straightforward binomial series expansion:

    ∛(1 + z) = Σ_{k=0}^∞ (1/3 choose k) z^k .

Hence,

    res_{z=0} f(z) = (1/3 choose 4).

Expanding the right-hand side above yields: res_{z=0} f(z) = −80/(81 · 4!) = −10/243. J

However, we don't need the whole series for g: we need only one coefficient. Recall Taylor's formula [go back to (22)]:

    g(z) = Σ_{k=0}^∞ [g^{(k)}(a)/k!] (z − a)^k .

It follows immediately that

    res_{z=a} f(z) = g^{(n−1)}(a)/(n − 1)!,

and finally that

    res_{z=a} f(z) = [1/(n − 1)!] · d^{n−1}/dz^{n−1} [(z − a)^n f(z)]_{z=a}    (36)

IExample 55 The function f(z) = 1/(z² − 1)² has 2nd-order poles at z = ±1, because

    1/(z² − 1)² = 1/[(z − 1)²(z + 1)²].

Applying (36) with n = 2, we get that

    res_{z=1} [1/(z² − 1)²] = (1/1!) · d/dz [(z − 1)²/(z² − 1)²]_{z=1} = [−2/(z + 1)³]_{z=1} = −1/4. J

7. Use of the Residue Theorem in Evaluating Real Integrals

Several families of definite integrals may be evaluated quickly by computing residues. Among them are some important ones that cannot be done by the methods of first year calculus, because their antiderivatives cannot be written in closed form.

TRIGONOMETRIC INTEGRALS

These are integrals of the form

    ∫_0^{2π} F(sin φ, cos φ) dφ,

where F is a rational function. As a rule, the antiderivative of a rational function of sines and
cosines (of the same angle) may always be written down in an elementary way† ; however, this
may require a substantial amount of work with pencil and paper.
To calculate such a definite integral by the calculus of residues, simply substitute z = e^{iφ}, i.e., φ = (log z)/i. It follows that

    dφ = dz/(iz),    cos φ = (z + z^{−1})/2,    sin φ = (z − z^{−1})/(i 2).

As φ ranges from 0 to 2π, obviously z covers the unit circle counterclockwise. The definite, real
integral may thus be replaced by the contour integral, which may then be found by means of
the residue theorem, considering only the singular points inside the unit circle. Integrals over
(−π, π), or any other interval of length 2π, can be evaluated in the same way.

IExample 56 Find ∫_0^{2π} dφ/(5 + 4 sin φ).
Solution: Note, first of all, that this integral exists, since the denominator 5 + 4 sin φ is never zero. Proceeding as outlined before, we get:
    ∫_0^{2π} dφ/(5 + 4 sin φ) = ∮_{|z|=1} 1/[5 + 4(z − z^{−1})/(i 2)] · dz/(iz) = ∮_{|z|=1} dz/(2z² + 5iz − 2)
                              = ∮_{|z|=1} dz/[2(z − z1)(z − z2)],

where z1 and z2 are the solutions of 2z² + 5iz − 2 = 0. Simple calculations yield

    z_{1,2} = [−i5 ± √(−25 + 16)]/4 = (−i5 ± i3)/4 = −i/2 or −i2.

The only zero inside the unit circle is at z1 = −i/2; it follows that

    res_{z=−i/2} 1/[2(z − z1)(z − z2)] = 1/[2(i2 − i/2)] = 1/(i3).

Thus

    ∫_0^{2π} dφ/(5 + 4 sin φ) = i 2π · 1/(i3) = 2π/3. J
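The value 2π/3 is easy to confirm with a direct Riemann sum (Python sketch, ours); on a full period of a smooth integrand the midpoint rule is extremely accurate:

```python
import math

n = 20000
h = 2*math.pi/n
s = h*sum(1/(5 + 4*math.sin((k + 0.5)*h)) for k in range(n))
expected = 2*math.pi/3
```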

If the integrand is an even function of φ, the method may be used on integrals over a half period.

† See, for example, Differential and Integral Calculus by N. Piskunov.




IExample 57 Find ∫_0^π dφ/(5 + 3 cos φ).
Solution: By symmetry, one gets immediately that

    ∫_0^π dφ/(5 + 3 cos φ) = ½ ∫_{−π}^π dφ/(5 + 3 cos φ).

Proceeding like in the preceding example, we get:

    ∫_0^π dφ/(5 + 3 cos φ) = (1/i) ∮_{|z|=1} dz/(3z² + 10z + 3) = (1/i) ∮_{|z|=1} dz/[3(z + 1/3)(z + 3)]
                           = i 2π · (1/i) · res_{z=−1/3} 1/[3(z + 1/3)(z + 3)] = π/4. J

IExample 58 Find the Fourier series of f(φ) = 5/(13 − 5 cos φ).
Solution: Since f(φ) is an even function, we only need to find the coefficients a_k, namely

    a0 = (1/2π) ∫_0^{2π} 5 dφ/(13 − 5 cos φ),    a_k = (1/π) ∫_0^{2π} 5 cos kφ dφ/(13 − 5 cos φ).    [k = 1, 2, . . .]

We note that

    ∫_0^{2π} 5 cos kφ dφ/(13 − 5 cos φ) = Re[ ∫_0^{2π} 5 e^{ikφ} dφ/(13 − 5 cos φ) ],    [k = 0, 1, 2, . . .]

so we work on the integral on the right-hand side. Substituting e^{iφ} = z, it follows that

    ∫_0^{2π} 5 e^{ikφ} dφ/(13 − 5 cos φ) = ∮ 5z^k/[13 − (5/2)(z + z^{−1})] · dz/(iz) = (2/(−i)) ∮ 5z^k dz/(5z² − 26z + 5),

where the complex integral extends over the unit circle. Factorizing 5z² − 26z + 5 = 5(z − 1/5)(z − 5), the last integral on the right becomes

    (2/(−i)) ∮ z^k dz/[(z − 1/5)(z − 5)] = (2/(−i)) · i 2π · res_{z=1/5} z^k/[(z − 1/5)(z − 5)] = −4π · (1/5^k)/(−24/5) = (5π/6) · (1/5^k).

Therefore,

    a0 = (1/2π) · (5π/6) = 5/12,    a_k = (1/π) · (5π/6) · (1/5^k) = 5/(6 · 5^k).    [k = 1, 2, . . .]

So, finally,

    5/(13 − 5 cos φ) = 5/12 + (5/6) Σ_{k=1}^∞ cos kφ/5^k .

Note that, in practice, we have found the complex form of the Fourier series without ever using the name. Since f(φ) is even, by symmetry c_k = c_{−k}; the working is identical. J
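A partial sum of the series reproduces f(φ) to machine accuracy; a two-line numerical check (ours):

```python
import math

phi = 1.234
partial = 5/12 + (5/6)*sum(math.cos(k*phi)/5**k for k in range(1, 40))
target = 5/(13 - 5*math.cos(phi))
```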

Substituting z = e^{iφ}, one gets also the following identities

    cos mφ = (e^{imφ} + e^{−imφ})/2 = (z^m + z^{−m})/2,
    sin mφ = (e^{imφ} − e^{−imφ})/(i 2) = (z^m − z^{−m})/(i 2),

which sometimes might be useful.

IExample 59 Find ∫_0^{2π} (sin 5φ/sin φ) dφ.
Solution: We get:

    ∫_0^{2π} (sin 5φ/sin φ) dφ = ∮_{|z|=1} [(z⁵ − z^{−5})/(i 2)]/[(z − z^{−1})/(i 2)] · dz/(iz) = ∮_{|z|=1} (z^{10} − 1)/[(z² − 1) iz⁵] dz
                              = ∮_{|z|=1} (z⁸ + z⁶ + z⁴ + z² + 1)/(iz⁵) dz.

The integrand has a pole of order 5 at z = 0, and clearly

    res_{z=0} [(z⁸ + z⁶ + z⁴ + z² + 1)/(iz⁵)] = 1/i.

Therefore,

    ∫_0^{2π} (sin 5φ/sin φ) dφ = i 2π · (1/i) = 2π. J
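Numerically (a Python sketch of ours): sin 5φ/sin φ extends continuously across the zeros of sin φ, and a midpoint sum, which never lands on those zeros, reproduces 2π:

```python
import math

n = 20000
h = 2*math.pi/n
# midpoints (k + 0.5)*h avoid the zeros of sin(phi) exactly, for even n
s = h*sum(math.sin(5*x)/math.sin(x) for x in ((k + 0.5)*h for k in range(n)))
```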

IMPROPER INTEGRALS OF RATIONAL REAL FUNCTIONS

These are integrals of the form ∫_{−∞}^∞ [N(x)/D(x)] dx, where N(x) and D(x) are real polynomials with no common factors. It's easy to see that such integrals converge if
(i) The denominator D(x) has no real zeroes,
(ii) The degree of D is at least 2 more than the degree of N.
By the fundamental theorem of algebra, D(x) has no more zeroes than its own degree, but it
could have fewer (if some zeroes are multiple roots). Since D has real coefficients, it’s possible
to show that if D(a + ib) = 0, then D(a − ib) = 0 too: in other words, all complex zeroes come
in complex-conjugate pairs. We are assuming that no zeroes lie on the real axis; hence half of
the zeroes will lie in the upper half-plane [y > 0], and the other half will be their mirror images
in the lower half-plane [y < 0].
In order to evaluate these integrals, we integrate N(z)/D(z) counterclockwise around the contour Γ shown in the picture, making sure that R be large enough to surround all the zeroes in the upper half plane.

[figure: the contour Γ, consisting of the segment from −R to R on the x axis and the semicircle C_R in the upper half-plane]

The residue theorem then yields that

    ∫_{−R}^R N(x)/D(x) dx + ∫_{C_R} N(z)/D(z) dz = i 2π Σ_{k=1}^n res_{z=z_k} [N(z)/D(z)],

where z1, z2, . . . zn are the singular points of N/D (i.e., the zeroes of D) in the upper half-plane. We shall now show that the integral over the semicircle C_R tends to zero as R tends to infinity. Hence, letting R → ∞ one gets

    ∫_{−∞}^∞ N(x)/D(x) dx = i 2π Σ_{k=1}^n res_{z=z_k} [N(z)/D(z)].    (37)

ITheorem The integral over the semicircle CR approaches zero as R → ∞.


Proof: This is quite easy. Recall that m ≥ n + 2. Let

    N(z) = a_n z^n + a_{n−1} z^{n−1} + ··· + a1 z + a0,    [a_n ≠ 0]
    D(z) = b_m z^m + b_{m−1} z^{m−1} + ··· + b1 z + b0 ;    [b_m ≠ 0]

it follows that

    N(z)/D(z) = (z^n/z^m) · [a_n + a_{n−1} z^{−1} + a_{n−2} z^{−2} + ··· + a0 z^{−n}]/[b_m + b_{m−1} z^{−1} + b_{m−2} z^{−2} + ··· + b0 z^{−m}] = (1/z^{m−n}) · g(z).

Clearly g(z) −→ an /bm as |z| tends to infinity (i.e., as z −1 tends to zero): so, for |z| large
enough, we have that |g(z)| < B, where B is some uniform upper bound. On the semicircle CR
we have:
    z = Re^{iφ}  =⇒  dz = iRe^{iφ} dφ.
In conclusion we may write
    |∫_{C_R} N(z)/D(z) dz| = |∫_{C_R} g(z)/z^{m−n} dz| ≤ ∫_0^π [|g(Re^{iφ})|/R^{m−n}] |iRe^{iφ}| dφ ≤ ∫_0^π (B/R^{m−n}) R dφ = πBR/R^{m−n}.

Therefore, the last term on the right approaches 0 as R tends to infinity because, by assumption,
m − n is at least 2 (or greater). J
IExample 60 Apply (37) to calculate ∫_{−∞}^∞ dx/(x² + 1).
Solution: This integral is, of course, elementary even by the standards of first year calculus; we merely wish here to illustrate the use of (37).
Because

    1/(z² + 1) = 1/[(z − i)(z + i)],

the integrand has simple poles at z = ±i. Only z = +i is in the upper half-plane, and

    res_{z=i} 1/[(z − i)(z + i)] = 1/(i2).

Therefore by (37),

    ∫_{−∞}^∞ dx/(x² + 1) = i 2π · 1/(i2) = π. J
In general, if N(x)/D(x) is an even function, then (by symmetry)

    ∫_0^∞ N(x)/D(x) dx = ½ ∫_{−∞}^∞ N(x)/D(x) dx,

so in this case integrals over (0, ∞) can be done as well. For instance, the last example yields immediately that

    ∫_0^∞ dx/(x² + 1) = π/2.

On the other hand, if N(x)/D(x) is an odd function, then obviously ∫_{−∞}^∞ [N(x)/D(x)] dx = 0.

IExample 61 Find ∫_{−∞}^∞ x² dx/(x⁴ + 1).
Solution: The singular points of z²/(z⁴ + 1) are the roots of the equation z⁴ = −1. Simple calculations yield z = e^{±iπ/4} or z = e^{±i3π/4}; specifically, the roots are:

    z1 = e^{iπ/4},    z2 = e^{i3π/4},    z3 = e^{−iπ/4},    z4 = e^{−i3π/4}.

Only z1 and z2 lie in the upper half-plane; z3 and z4 are their mirror images under the real axis. Thus, by (37), we get:

    ∫_{−∞}^∞ x² dx/(x⁴ + 1) = i 2π · res_{z=z1} [z²/(z⁴ + 1)] + i 2π · res_{z=z2} [z²/(z⁴ + 1)].

The residues are best evaluated by (35):

    res_{z=z1} [z²/(z⁴ + 1)] = [z²/(4z³)]_{z=z1} = 1/(4z1) = e^{−iπ/4}/4 = (1 − i)/(4√2),
    res_{z=z2} [z²/(z⁴ + 1)] = [z²/(4z³)]_{z=z2} = 1/(4z2) = e^{−i3π/4}/4 = (−1 − i)/(4√2).

So, finally,

    ∫_{−∞}^∞ x² dx/(x⁴ + 1) = i 2π · (−i 2/(4√2)) = π/√2.
As a partial check on the working, the value of integrals of this kind must always turn out to be
real. J
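Indeed it is real, and a crude numerical check (ours) agrees: truncate the integral at ±R and add the analytic tail estimate ∫_R^∞ x^{−2} dx = 1/R on each side.

```python
import math

R, n = 400.0, 200000
h = R/n
# even integrand: integrate over (0, R) by the midpoint rule and double
body = 2*h*sum(x*x/(x**4 + 1) for x in ((k + 0.5)*h for k in range(n)))
total = body + 2/R          # tail: x^2/(x^4 + 1) ~ 1/x^2 for large x
expected = math.pi/math.sqrt(2)
```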
IExample 62 Find ∫_{−∞}^∞ (x + 1) dx/(x⁶ + 1).
Solution: First of all note that, by symmetry,

    ∫_{−∞}^∞ x dx/(x⁶ + 1) = 0;

hence

    ∫_{−∞}^∞ (x + 1) dx/(x⁶ + 1) = ∫_{−∞}^∞ dx/(x⁶ + 1) = i 2π · Σ res_{z=z_k} [1/(z⁶ + 1)],

where the sum extends to all singular points of 1/(z⁶ + 1) in the upper half-plane. There are six singular points, as shown:

    z1 = e^{iπ/6},    z2 = i,    z3 = e^{i5π/6},
    z4 = e^{−iπ/6},    z5 = −i,    z6 = e^{−i5π/6}.

[figure: the six roots of z⁶ = −1 on the unit circle; the contour C_R encloses the three roots in the upper half-plane]

Only the first three are in the upper half-plane. For all of them we may apply (35):

    res_{z=z_k} [1/(z⁶ + 1)] = 1/(6z_k⁵).    [k = 1, 2, 3]

Therefore

    ∫_{−∞}^∞ (x + 1) dx/(x⁶ + 1) = (i 2π/6) · (1/e^{i5π/6} + 1/i⁵ + 1/e^{i25π/6}) = 2π/3. J

FOURIER INTEGRALS

In this section we use the term "Fourier Integral" to describe only integrals of the form

    ∫_{−∞}^∞ f(x) cos ωx dx    or    ∫_{−∞}^∞ f(x) sin ωx dx,

where f(x) = N(x)/D(x), and N and D are polynomials, i.e., f(x) is a rational function. Strictly speaking this is not correct, since the proper definition of Fourier integral puts no restriction on f(x), other than that the integrals above converge. But we'll return to Fourier integrals in chapter 3, and then we'll study them from a more general point of view.
We'll assume also that the degree of D is at least 1 more than the degree of N; under these assumptions such integrals can be shown to converge.
As a rule, these integrals are handled by means of the identity

    ∫_{−∞}^∞ N(x) cos ωx/D(x) dx + i ∫_{−∞}^∞ N(x) sin ωx/D(x) dx = ∫_{−∞}^∞ e^{iωx} N(x)/D(x) dx;

real and imaginary parts are separated after the right-hand side has been calculated.
The integration is extended to the same contour Γ pictured on page 80, and the details are very similar. Using the key assumption that

    degree(N) < degree(D),

it may be shown that the integral over the semicircle C_R approaches zero as R tends to infinity; this result is not trivial and is called Jordan's Lemma. We'll see its proof at a later stage.
Assuming Jordan's lemma, and proceeding in exactly the same way as before, one eventually derives that

    ∫_{−∞}^∞ N(x) e^{iωx}/D(x) dx = i 2π Σ res_{z=z_k} [N(z) e^{iωz}/D(z)],    (38)

where (again) the sum on the right-hand side extends to all singular points in the upper half-plane. At this point, real and imaginary parts are separated.
IExample 63 Find ∫_{−∞}^∞ x sin x dx/(x² + 2).
Solution: Consider

    ∫_{−∞}^∞ x e^{ix} dx/(x² + 2);

the only singular point of z e^{iz}/(z² + 2) in the upper half-plane is the simple pole at z = i√2 and, by (35), the residue there is

    res_{z=i√2} [z e^{iz}/(z² + 2)] = [z e^{iz}/(2z)]_{z=i√2} = e^{−√2}/2.

Then, by (38), it follows that

    ∫_{−∞}^∞ x sin x dx/(x² + 2) = Im[ i 2π · e^{−√2}/2 ] = π e^{−√2}.

Comments:
(i) Since the integrand is even, ∫_0^∞ x sin x dx/(x² + 2) = ½ π e^{−√2}.
(ii) This example differs from all the other ones seen so far in that the antiderivative of the integrand may not be written down in a simple way as a combination of sines, cosines, polynomials, etc. So, the calculus of residues provides the answer where the methods of first year calculus fail. This example may be done by other methods, e.g., by the Laplace transform (do it, as an exercise), but the calculus of residues is probably better. J
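The residue step can be cross-checked numerically (Python sketch, ours), again by a discretized contour integral around the pole:

```python
import cmath

# residue of z*e^{iz}/(z^2 + 2) at z = i*sqrt(2); expected e^{-sqrt(2)}/2
z0 = 1j*cmath.sqrt(2)
n, r = 2048, 0.5
acc = 0j
for k in range(n):
    z = z0 + r*cmath.exp(2j*cmath.pi*k/n)
    acc += z*cmath.exp(1j*z)/(z*z + 2)*(z - z0)
res = acc/n
expected = cmath.exp(-cmath.sqrt(2))/2
```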
IExample 64 Find ∫_{−∞}^∞ cos² x dx/(x² − 2x + 5).
Solution: Note the identity cos² x = (1 + cos 2x)/2. Therefore, we calculate

    ½ ∫_{−∞}^∞ (1 + e^{i2x})/(x² − 2x + 5) dx

and at the end we'll take the real part of the result.
The integrand has two singular points, namely 1 + i2 and 1 − i2; only the first one lies in the upper half-plane. Continuing along the same lines of the preceding examples, we get:

    ∫_{−∞}^∞ (1 + e^{i2x})/(x² − 2x + 5) dx = i 2π · res_{z=1+i2} [(1 + e^{i2z})/(z² − 2z + 5)] = i 2π · [(1 + e^{i2z})/(2z − 2)]_{z=1+i2} = i 2π · (1 + e^{i2−4})/(i4).

We have used (35), (37) and (38). Now it follows that

    ½ ∫_{−∞}^∞ (1 + e^{i2x})/(x² − 2x + 5) dx = iπ · (1 + e^{−4}(cos 2 + i sin 2))/(i4) = π · (1 + e^{−4} cos 2 + i e^{−4} sin 2)/4,

and finally that ∫_{−∞}^∞ cos² x dx/(x² − 2x + 5) = π (1 + e^{−4} cos 2)/4.
R∞
Corollary: ∫_{−∞}^∞ sin 2x dx/(x² − 2x + 5) = (π e^{−4} sin 2)/2. J

The improper integrals considered in this section and the preceding one were all known to converge, so it was legitimate to evaluate them as we did. However, it must be stressed that an improper integral like ∫_{−∞}^∞ f(x) dx has a meaning only if both limits

    lim_{Q→−∞} ∫_Q^a f(x) dx,    lim_{R→∞} ∫_a^R f(x) dx

exist separately and are finite for any choice of the real number a.
R∞
IExample 65 (Trivial, perhaps.) It would be meaningless to write that $\int_{-\infty}^{\infty} x\,dx = 0$, on the
strength that
\[
\lim_{R\to\infty}\int_{-R}^{R} x\,dx = 0
\]
(though the equation above is correct). The problem is that
\[
\lim_{Q\to-\infty}\int_Q^{13} x\,dx = -\infty, \qquad \lim_{R\to\infty}\int_{13}^{R} x\,dx = +\infty,
\]
and neither limit is finite. The point x = 13 was chosen for no particular reason. J
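The point of the example can be made concrete with a tiny computation (an illustration added here, not from the text): a symmetric truncation of the integral of x is always 0, while a lopsided truncation grows without bound.

```python
def integral_of_x(a, b):
    # Exact value of the integral of x over [a, b], from the antiderivative x²/2.
    return (b * b - a * a) / 2

radii = (10.0, 100.0, 1000.0)
symmetric = [integral_of_x(-R, R) for R in radii]
lopsided = [integral_of_x(-R, 2 * R) for R in radii]
print(symmetric)  # [0.0, 0.0, 0.0]
print(lopsided)   # 3R²/2 each time: grows without bound
```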

We’ll come back to this when we discuss the Cauchy principal value of a divergent integral, in
the next section.

ITheorem (Jordan’s Lemma). Say z = Reiφ , and suppose that f (z) −→ 0 “uniformly” in
φ for all positive angles φ. In other words, for any fixed tolerance ε > 0, as small as one may
want, there is a semicircle CM in the upper half plane such that |f (z)| < ε for all z outside CM ;
the radius of CM depends on ε.
Under these assumptions, if CR denotes the upper half of the circle |z| = R, as illustrated
in the picture on page 80, and ω is any positive real number, then
\[
\int_{C_R} f(z)e^{i\omega z}\,dz \longrightarrow 0 \quad\text{as } R\to\infty. \tag{39}
\]
Proof: Begin with the trivial inequality
\[
0 \le \left|\int_{C_R} f(z)e^{i\omega z}\,dz\right|
\le \int_{C_R} |f(z)|\cdot|e^{i\omega z}|\cdot|dz|.
\]
Let
\[
f_R = \max_{z\text{ on }C_R} |f(z)|;
\]
by assumption, fR → 0 as R → ∞. On the semicircle CR we have that
\[
z = Re^{i\phi} \implies |dz| = |iRe^{i\phi}\,d\phi| = R\,d\phi. \qquad [0\le\phi\le\pi]
\]
Substituting (by Euler’s formula) $z = R(\cos\phi+i\sin\phi)$ into $e^{i\omega z}$, we get:
\[
\left|e^{i\omega z}\right| = \left|e^{i\omega R(\cos\phi+i\sin\phi)}\right|
= \left|e^{-\omega R\sin\phi}\cdot e^{i\omega R\cos\phi}\right| = e^{-\omega R\sin\phi}.
\]
So far, we have shown that
\[
0 \le \left|\int_{C_R} f(z)e^{i\omega z}\,dz\right|
\le \int_0^{\pi} f_R\cdot e^{-\omega R\sin\phi}\cdot R\,d\phi
= Rf_R \int_0^{\pi} e^{-\omega R\sin\phi}\,d\phi.
\]
Now note that sin φ = sin(π − φ). Therefore, by symmetry, the last inequality yields
\[
0 \le \left|\int_{C_R} f(z)e^{i\omega z}\,dz\right|
\le 2Rf_R \int_0^{\pi/2} e^{-\omega R\sin\phi}\,d\phi.
\]

[Figure: the graphs of y = sin x and the chord y = 2x/π on the interval 0 ≤ x ≤ π/2.]
But, as the picture clearly shows graphically,
\[
\sin\phi \ge \frac{2\phi}{\pi}, \qquad [0\le\phi\le\pi/2]
\]
Therefore,
\[
e^{-\omega R\sin\phi} \le e^{-2\omega R\phi/\pi}
\]

and finally
\[
\left|\int_{C_R} f(z)e^{i\omega z}\,dz\right|
\le 2Rf_R \int_0^{\pi/2} e^{-2\omega R\phi/\pi}\,d\phi
= 2Rf_R \left[\frac{e^{-2\omega R\phi/\pi}}{-2\omega R/\pi}\right]_0^{\pi/2}
= \pi f_R\left[\frac{1-e^{-\omega R}}{\omega}\right];
\]

and since fR approaches zero as R tends to infinity, the last term on the right also approaches
zero, hence so does the first term on the left (in first year you called this “the sandwich theorem”).
This completes the proof. J
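The shrinking of the semicircle integral can be observed directly. The sketch below (not from the text) takes f(z) = 1/z, so that f_R = 1/R and ω = 1, and checks that the arc integral stays below the bound πf_R/ω from the proof; the grid size is an arbitrary choice.

```python
import cmath, math

def arc_integral(R, n=20_000):
    # Midpoint rule in φ for ∫ over the upper semicircle of e^{iz}/z dz.
    h = math.pi / n
    total = 0j
    for k in range(n):
        z = R * cmath.exp(1j * (k + 0.5) * h)
        total += cmath.exp(1j * z) / z * 1j * z * h   # f(z)·e^{iz}·dz, with dz = iz dφ
    return total

vals = [abs(arc_integral(R)) for R in (10.0, 40.0, 160.0)]
print(vals)  # decreasing, and each below the Jordan bound π/R
```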

8. The Principal Value of an Integral

If a rational function of the form N (x)/D(x), where numerator and denominator are real poly-
nomials, has a pole on the real axis—in other words, if D(x) has a real root—then the improper
integral $\int_{-\infty}^{\infty} N(x)\,dx/D(x)$ is meaningless because it may be split as the sum of two divergent
integrals.
There are, however, applications where it’s convenient to use a more general definition of
integral. Consider, for example,
\[
\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx.
\]

It may be shown that this integral converges. We also know (Euler’s formula) that sin x is the
imaginary part of eix , so it is still correct to write
\[
\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx
= \int_{-\infty}^{\infty} \mathrm{Im}\left[\frac{e^{ix}}{x}\right] dx.
\]
However, if we swap the symbols for “integral” and “imaginary part” we get nonsense:
\[
\mathrm{Im}\left[\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\,dx\right]
\text{ is meaningless because }
\int_{-\infty}^{\infty} \frac{\cos x}{x}\,dx \text{ is meaningless.}
\]

To get out of this impasse, we introduce (after Cauchy) the concept of principal value.

IDefinition: Suppose a function f (x) has a simple pole at z0 in the interval (a, b). Then we
say that
\[
\lim_{\varepsilon\to0^+}\left[\int_a^{z_0-\varepsilon} f(x)\,dx + \int_{z_0+\varepsilon}^b f(x)\,dx\right]
\]
is the principal value of the integral $\int_a^b f(x)\,dx$. J

To avoid confusion with the “orthodox” definition, we also need a new symbol, so we write†
\[
-\!\!\!\!\int_a^b f(x)\,dx = \text{Principal Value of}\left[\int_a^b f(x)\,dx\right].
\]

Naturally, if an integral exists in the usual sense, then
\[
\int_a^b f(x)\,dx \ \text{exists} \implies
-\!\!\!\!\int_a^b f(x)\,dx = \int_a^b f(x)\,dx, \tag{40}
\]
but the converse is not true in general.

† Following Handbook of Mathematical Functions, edited by Abramowitz and Stegun, National
Bureau of Standards (1964).

IExample 66 The value of
\[
-\!\!\!\!\int_{-R}^{R} \frac{\cos x\,dx}{x}
\]
is zero, because the integrand is an odd function: hence, by symmetry, we have that
\[
\int_{-R}^{-\varepsilon} \frac{\cos x\,dx}{x} = -\int_{\varepsilon}^{R} \frac{\cos x\,dx}{x}.
\]
Therefore,
\[
\lim_{\varepsilon\to0}\left[\int_{-R}^{-\varepsilon} \frac{\cos x\,dx}{x}
+ \int_{\varepsilon}^{R} \frac{\cos x\,dx}{x}\right] = \lim_{\varepsilon\to0}\,[0] = 0,
\]
and this holds for every R. J
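A second, fresh illustration of the definition (not from the text): for the principal value of the integral of 1/x over (−1, 2), the two ε-truncated pieces give ln ε + (ln 2 − ln ε) = ln 2, independently of ε. The grid size below is an ad-hoc choice.

```python
import math

def piece(a, b, n=100_000):
    # Midpoint rule for ∫_a^b dx/x over an interval that excludes the pole at 0.
    h = (b - a) / n
    return sum(1.0 / (a + (k + 0.5) * h) for k in range(n)) * h

vals = [piece(-1.0, -eps) + piece(eps, 2.0) for eps in (1e-1, 1e-2, 1e-3)]
print(vals)  # each ≈ ln 2 ≈ 0.6931, for every ε
```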

By now, you should be wondering what is the use of such a strange concept. Here’s why.
Suppose a function f (z) has a number of poles in the upper half-plane, and a simple pole
z0 on the real axis, between a and b. We integrate f over a closed contour Γ like the one
shown in the picture: starting at a, travelling on the real axis until we reach a distance ε from
z0 , skirting the pole on a circle Cε of radius ε, coming back to the real axis, staying on it all the
way to b, and finally closing the contour by a path P that surrounds all other singular points.
[Figure: the contour Γ, running along the real axis from a to b, indented by a small semicircle Cε of radius ε around the pole z0 , and closed by a path P through the upper half-plane.]
Well, since the troublesome pole z0 is outside the contour, we may apply the residue theorem
to our contour, which consists of four pieces:
\[
\int_a^{z_0-\varepsilon} f(z)\,dz + \int_{C_\varepsilon} f(z)\,dz
+ \int_{z_0+\varepsilon}^b f(z)\,dz + \int_P f(z)\,dz
= i2\pi\cdot\sum_k \mathop{\mathrm{res}}_{z=z_k} f(z),
\]

where Cε is the semicircle that skirts (and leaves out) the pole. On Cε , we have z = z0 + εeiφ ,
hence dz = iεeiφ dφ, and the angle φ ranges from π to 0. Recall that we assume z0 is a simple pole:
therefore
\[
f(z) = \frac{g(z)}{z-z_0},
\]
and g is analytic at z0 . Note that, obviously, g(z0 ) = resz=z0 f (z); see equation (34).
Now let ε → 0. It follows that
\[
\int_{C_\varepsilon} f(z)\,dz
= \int_\pi^0 \frac{g\!\left(z_0+\varepsilon e^{i\phi}\right)}{\varepsilon e^{i\phi}}\,
i\varepsilon e^{i\phi}\,d\phi
= -i\int_0^\pi g\!\left(z_0+\varepsilon e^{i\phi}\right) d\phi.
\]
So, as ε goes to zero, the last term on the right tends to −i πg(z0 ) = −i π resz=z0 f (z); therefore
\[
\int_{C_\varepsilon} f(z)\,dz \longrightarrow -i\pi\,\mathop{\mathrm{res}}_{z=z_0} f(z).
\]

Also as ε tends to zero, by definition
\[
\left[\int_a^{z_0-\varepsilon} f(z)\,dz + \int_{z_0+\varepsilon}^b f(z)\,dz\right]
\longrightarrow -\!\!\!\!\int_a^b f(z)\,dz.
\]

Assuming only that P does not go over any singular point of f , we get that
\[
\int_P f(z)\,dz + -\!\!\!\!\int_a^b f(z)\,dz - i\pi\,\mathop{\mathrm{res}}_{z=z_0} f(z)
= i2\pi\cdot\sum_k \mathop{\mathrm{res}}_{z=z_k} f(z),
\]
where the sum is extended to all the singular points inside the contour. In many applications,
the integral on P is either zero or may be made as small as we want (by pushing P to infinity,
for example); but we don’t need to assume that. In general, we get:
\[
\int_P f(z)\,dz + -\!\!\!\!\int_a^b f(z)\,dz
= i2\pi\cdot\sum_k \mathop{\mathrm{res}}_{z=z_k} f(z) + i\pi\,\mathop{\mathrm{res}}_{z=z_0} f(z). \tag{41}
\]

Loosely speaking, the pole z0 that lies on the real axis counts as if it were “half in, half out” (if
it were truly above the axis, the residue would be multiplied by i 2π, and if it were truly below,
the residue would be multiplied by 0).
This result may be extended in a natural way to the case where there are several poles on
the real axis (as long as they are simple poles), or to the case where P also goes over a singular
point (as long as P has a tangent line there).
IExample 67 Find $\int_{-\infty}^{\infty} \sin\pi x\,dx/(x^3+1)$.
Solution: The integrand is regular all along the real axis, in spite of the fact that x3 + 1 = 0 if
x = −1. The point is that sin πx is also zero if x = −1, and
\[
\lim_{x\to-1} \frac{\sin\pi x}{x^3+1} = -\frac{\pi}{3},
\]
as you should immediately check. The two poles of sin πz /(z 3 + 1) are at
\[
z = e^{\pm i\pi/3} = \frac{1\pm i\sqrt{3}}{2},
\]
away from the real axis. Furthermore, as |x| → ∞, the integrand goes to zero with the speed of
|x|−3 , so the integral is absolutely convergent. In order to find its value, we may write
\[
\sin\pi z = \mathrm{Im}\left[e^{i\pi z}\right]
\]
and use Jordan’s lemma, but this creates a problem because
\[
\frac{e^{i\pi z}}{z^3+1} = \frac{\cos\pi z + i\sin\pi z}{z^3+1},
\]
and cos πz/(z 3 + 1) is singular not only at $z = (1\pm i\sqrt{3})/2$ but also at z = −1. So, we take the
principal value.
We write
\[
\int_{-\infty}^{\infty} \frac{\sin\pi z}{z^3+1}\,dz
= \mathrm{Im}\left[\,-\!\!\!\!\int_{-\infty}^{\infty} \frac{e^{i\pi z}}{z^3+1}\,dz\right]
\]
and integrate ei πz /(z 3 + 1) along the contour shown in the picture, which includes the pole in
the upper half plane at $(1+i\sqrt{3})/2$ and goes over the pole at −1.
[Figure: a large semicircle CR of radius R in the upper half-plane, indented at the real pole z = −1, enclosing the pole at (1 + i√3)/2.]
By Jordan’s lemma,
\[
\int_{C_R} \frac{e^{i\pi z}}{z^3+1}\,dz \to 0
\]
as R → ∞, and therefore, by (41):


\[
-\!\!\!\!\int_{-\infty}^{\infty} \frac{e^{i\pi z}}{z^3+1}\,dz
= i2\pi\cdot\mathop{\mathrm{res}}_{z=(1+i\sqrt{3})/2}\left[\frac{e^{i\pi z}}{z^3+1}\right]
+ i\pi\cdot\mathop{\mathrm{res}}_{z=-1}\left[\frac{e^{i\pi z}}{z^3+1}\right]
= i2\pi\cdot\left[\frac{e^{i\pi z}}{3z^2}\right]_{z=(1+i\sqrt{3})/2}
+ i\pi\cdot\left[\frac{e^{i\pi z}}{3z^2}\right]_{z=-1}.
\]
Routine calculations yield
\[
i2\pi\cdot\frac{e^{i\pi(1+i\sqrt{3})/2}}{3(1+i\sqrt{3})^2/4}
= \pi e^{-\pi\sqrt{3}/2}\cdot\frac{1+i\sqrt{3}}{3}
\]
and
\[
i\pi\cdot\frac{e^{i\pi(-1)}}{3(-1)^2} = -i\pi\cdot\frac{1}{3}.
\]
So finally, combining these results and taking the imaginary part, we find:
\[
\int_{-\infty}^{\infty} \frac{\sin\pi z}{z^3+1}\,dz
= \pi\cdot\frac{\sqrt{3}\,e^{-\pi\sqrt{3}/2} - 1}{3}.
\]

Note that it would be wrong to consider the real part of the same result and “derive” a value
for $\int_{-\infty}^{\infty} \cos\pi x\,dx/(x^3+1)$: such an integral is meaningless. J
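Since the integrand of example 67 is smooth at x = −1 and decays like |x|⁻³, ordinary quadrature converges quickly, giving an independent check of the residue answer (this check, with its ad-hoc truncation radius and grid, is not from the text).

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule on [a, b] with n subintervals.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

approx = midpoint(lambda x: math.sin(math.pi * x) / (x ** 3 + 1), -200.0, 200.0, 400_000)
exact = math.pi * (math.sqrt(3) * math.exp(-math.pi * math.sqrt(3) / 2) - 1) / 3
print(approx, exact)  # both ≈ -0.9277
```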

IExample 68 Calculate $\int_{-\infty}^{\infty} \sin\omega x\,dx/x$, for ω real and positive.
Solution: It’s possible to show that this integral exists, though the proof is more subtle than in
the preceding example because, as |x| → ∞, the integral is not absolutely convergent.
We have seen (go back to example 46) that
\[
g(z) = \frac{\sin\omega z}{z}
\]
is analytic everywhere. Hence we may write
\[
\int_{-\infty}^{\infty} \frac{\sin\omega x}{x}\,dx
= -\!\!\!\!\int_{-\infty}^{\infty} \frac{\sin\omega x}{x}\,dx
= -\!\!\!\!\int_{-\infty}^{\infty} \mathrm{Im}\left[\frac{e^{i\omega x}}{x}\right] dx
= \mathrm{Im}\left[\,-\!\!\!\!\int_{-\infty}^{\infty} \frac{e^{i\omega x}}{x}\,dx\right];
\]

the last step is the only one where the principal value is essential.
Proceeding like in the preceding example and using (41), we get:
\[
-\!\!\!\!\int_{-\infty}^{\infty} \frac{e^{i\omega x}}{x}\,dx
= i\pi\cdot\mathop{\mathrm{res}}_{z=0}\left[\frac{e^{i\omega z}}{z}\right] = i\pi.
\]
So finally, taking the imaginary part, we obtain
\[
-\!\!\!\!\int_{-\infty}^{\infty} \frac{\sin\omega x}{x}\,dx = \pi.
\]

Strictly speaking, the example ends here, but it has an interesting follow-up. What happens if
ω is negative? Well, suppose ω < 0 and write ω = −|ω|. Then clearly sin ωx = − sin |ω|x, and
so
\[
\int_{-\infty}^{\infty} \frac{\sin\omega x}{x}\,dx
= -\int_{-\infty}^{\infty} \frac{\sin|\omega|x}{x}\,dx = -\pi. \qquad [\omega<0]
\]
Finally, if ω is zero, then sin ωx = 0 for every x, and the integral is zero. We have found that
\[
\int_{-\infty}^{\infty} \frac{\sin\omega x}{x}\,dx =
\begin{cases}
\pi & \text{if } \omega \text{ is positive,}\\
0 & \text{if } \omega \text{ is zero,}\\
-\pi & \text{if } \omega \text{ is negative.}
\end{cases}
\]

The interesting point is that the integrand is a continuous function of ω, but the integral is
discontinuous. J
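The jump in ω is easy to watch numerically (an added illustration; the truncation radius and grid size are arbitrary choices, and the slow 1/x tail limits the accuracy).

```python
import math

def sine_integral(omega, R=400.0, n=400_000):
    # Midpoint rule for ∫_{-R}^{R} sin(ωx)/x dx; the integrand tends to ω at x = 0.
    h = 2 * R / n
    s = 0.0
    for k in range(n):
        x = -R + (k + 0.5) * h
        s += omega if x == 0 else math.sin(omega * x) / x
    return s * h

results = {w: sine_integral(w) for w in (2.0, 0.0, -2.0)}
print(results)  # ≈ π for ω = 2, exactly 0 for ω = 0, ≈ −π for ω = −2
```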
IExample 69 Evaluate $\int_{-\infty}^{\infty} \sin x\,dx/x(x^2+4)$.
Solution: As in the preceding examples, we use (40) and write
\[
\int_{-\infty}^{\infty} \frac{\sin x}{x(x^2+4)}\,dx
= \int_{-\infty}^{\infty} \mathrm{Im}\left[\frac{e^{ix}}{x(x^2+4)}\right] dx
= \mathrm{Im}\left[\,-\!\!\!\!\int_{-\infty}^{\infty} \frac{e^{ix}}{x(x^2+4)}\,dx\right].
\]

The integrand on the right has simple poles at z = 0 and z = ±2i; only z = 2i lies in the upper
half-plane, but z = 0 lies on the path of integration. So, considering again a contour like the
one on page 80, and applying (41), we deduce that
\[
-\!\!\!\!\int_{-\infty}^{\infty} \frac{e^{ix}}{x(x^2+4)}\,dx
= i2\pi\cdot\mathop{\mathrm{res}}_{z=i2}\left[\frac{e^{iz}}{z^3+4z}\right]
+ i\pi\cdot\mathop{\mathrm{res}}_{z=0}\left[\frac{e^{iz}}{z^3+4z}\right]
= \frac{i2\pi e^{-2}}{-12+4} + \frac{i\pi}{4} = \frac{i\pi(1-e^{-2})}{4}.
\]

Finally, taking the imaginary part of the last term on the right-hand side, we get that
\[
\int_{-\infty}^{\infty} \frac{\sin x\,dx}{x(x^2+4)} = \frac{\pi(1-e^{-2})}{4}. \qquad\text{J}
\]
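Because the integrand of example 69 is smooth at x = 0 (with value 1/4 there) and decays like |x|⁻³, a direct quadrature check is quick (the truncation and grid below are ad-hoc choices, not from the text).

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule on [a, b] with n subintervals.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: 0.25 if x == 0 else math.sin(x) / (x * (x * x + 4))
approx = midpoint(f, -100.0, 100.0, 200_000)
exact = math.pi * (1 - math.exp(-2)) / 4
print(approx, exact)  # both ≈ 0.6791
```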

IExample 70 Find $\int_{-\infty}^{\infty} (1-\cos x)\,dx/x^2$.
Solution: Write
\[
\int_{-\infty}^{\infty} \frac{1-\cos x}{x^2}\,dx
= \int_{-\infty}^{\infty} \mathrm{Re}\left[\frac{1-e^{ix}}{x^2}\right] dx
= \mathrm{Re}\left[\,-\!\!\!\!\int_{-\infty}^{\infty} \frac{1-e^{ix}}{x^2}\,dx\right].
\]

In the last term on the right-hand side, the integrand has a simple pole at z = 0 (why?), and
\[
\mathop{\mathrm{res}}_{z=0}\left[\frac{1-e^{iz}}{z^2}\right] = -i;
\]
therefore
\[
-\!\!\!\!\int_{-\infty}^{\infty} \frac{1-e^{ix}}{x^2}\,dx = i\pi\cdot(-i) = \pi.
\]
Taking the real part of the right-hand side then gives $\int_{-\infty}^{\infty} (1-\cos x)\,dx/x^2 = \pi$. This example
is also a standard application of the Laplace transform. Compare the amount of work needed
to do the problem in that way, and the work done here.
Corollary: You may deduce immediately that $\int_{-\infty}^{\infty} \sin^2 x\,dx/x^2 = \pi$, if you recall the identity
$\sin^2 x = \frac12(1-\cos 2x)$ and substitute 2x with y. J
IExample 71 Find $\int_{-\infty}^{\infty} \sin^3 x\,dx/x^3$ (this integral clearly converges absolutely).
Solution: Recall the identity $\sin^3 x = \frac34\sin x - \frac14\sin 3x$, which follows from Euler’s formula. So,
we write
\[
\int_{-\infty}^{\infty} \frac{\sin^3 x}{x^3}\,dx
= \int_{-\infty}^{\infty} \mathrm{Im}\left[\frac{\frac34 e^{ix} - \frac14 e^{i3x} - \frac12}{x^3}\right] dx
= \mathrm{Im}\left[\,-\!\!\!\!\int_{-\infty}^{\infty} \frac{\frac34 e^{ix} - \frac14 e^{i3x} - \frac12}{x^3}\,dx\right].
\]

Hey, not so fast. What’s that −1/2 doing over there?
Well, the reason is that the function $\left(\frac34 e^{ix} - \frac14 e^{i3x}\right)/x^3$ has a third-order pole at x = 0:
the denominator has a triple zero, and the numerator tends to 1/2. To invoke (41), we need
an integrand with only simple pole(s) on the contour. That is why we subtract 1/2 from the
numerator, which creates a double zero there—verify this, it’s a revision problem on Taylor
series—and since we are planning to take the imaginary part anyway, adding or removing real
terms will have no effect in the end. After this adjustment the integrand has a simple pole on
the path of integration, and we may proceed as usual.
Close the path like on page 80, and use (41): it follows that
\[
-\!\!\!\!\int_{-\infty}^{\infty} \frac{\frac34 e^{ix} - \frac14 e^{i3x} - \frac12}{x^3}\,dx
= i\pi\cdot\mathop{\mathrm{res}}_{z=0}\left[\frac{3e^{iz} - e^{i3z} - 2}{4z^3}\right]
= i\pi\cdot\frac34,
\]
(check this), and finally $\int_{-\infty}^{\infty} \sin^3 x\,dx/x^3 = 3\pi/4$. J
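The absolutely convergent integral of example 71 is easy to confirm by direct quadrature (an added check; truncation radius and grid size are arbitrary choices).

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule on [a, b] with n subintervals.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: 1.0 if x == 0 else (math.sin(x) / x) ** 3   # limit value 1 at x = 0
approx = midpoint(f, -100.0, 100.0, 200_000)
print(approx, 3 * math.pi / 4)  # both ≈ 2.3562
```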

9. Some Further Real Integrals

Apart from the standard types seen so far, many other important real integrals can be evaluated
by complex variables methods. The choice of contour (and often the choice of integrand) usually
requires a good deal of ingenuity. We give some examples:
IExample 72 Calculate $\int_{-\infty}^{\infty} dx/\cosh x$.
Solution: This integral is trivial. Indeed, one gets immediately that
\[
\int_{-\infty}^{\infty} \frac{dx}{\cosh x}
= \int_{-\infty}^{\infty} \frac{\cosh x\,dx}{\cosh^2 x}
= \int_{-\infty}^{\infty} \frac{d(\sinh x)}{1+\sinh^2 x}
= \Big[\arctan(\sinh x)\Big]_{-\infty}^{\infty} = \pi.
\]

However, we want to learn something new, so we integrate f (z) = 1/ cosh z over the rectangle
Γ shown in the picture. It is clear that f (z) is singular at points where cosh z = 0, and since
cosh iy = cos y, this means z = ±i π/2, z = ±i3π/2, etc. The singular points are all simple
poles, and Γ surrounds only the one at z = i π/2.
[Figure: the rectangular contour Γ with corners at −R, R, R + iπ and −R + iπ; the poles ±iπ/2, ±i3π/2, i5π/2, . . . lie on the imaginary axis, and only iπ/2 is inside Γ.]
Now, it’s easy to see that the integrals over the “short” vertical sides of Γ tend to zero
as R → ∞:
\[
\left|\frac{1}{\cosh(R+iy)}\right|
= \left|\frac{1}{\cosh R\cos y + i\sinh R\sin y}\right|
= \frac{1}{\sqrt{\sinh^2 R + \cos^2 y}}
\le \frac{1}{\sinh|R|}.
\]

So, each integral would be less than π/ sinh |R|, a quantity that goes to zero as R tends to
infinity.
On the top side, we have that
\[
\cosh(x+i\pi) = \cosh x\cos\pi + i\sinh x\sin\pi = -\cosh x.
\]
Therefore, as R → ∞, we see that
\[
\oint_\Gamma \frac{dz}{\cosh z}
\longrightarrow \int_{-\infty}^{\infty} \frac{dx}{\cosh x}
+ \int_{\infty}^{-\infty} \frac{dx}{-\cosh x}
= 2\int_{-\infty}^{\infty} \frac{dx}{\cosh x}.
\]

By the residue theorem, we have that
\[
\oint_\Gamma \frac{dz}{\cosh z}
= i2\pi\cdot\mathop{\mathrm{res}}_{z=i\pi/2}\left[\frac{1}{\cosh z}\right]
= \frac{i2\pi}{\sinh i\pi/2} = \frac{i2\pi}{i\sin\pi/2};
\]
therefore,
\[
2\int_{-\infty}^{\infty} \frac{dx}{\cosh x} = 2\pi,
\]
and finally $\int_{-\infty}^{\infty} dx/\cosh x = \pi$, as expected. J

The next example is similar, except that it cannot be done in an elementary way.


IExample 73 Find $\int_{-\infty}^{\infty} e^{mx}\,dx/\cosh x$, where m is possibly irrational, and −1 < m < 1.
Solution: Note that the condition |m| < 1 is necessary and sufficient for the integral to converge,
since for large x, one has that $\cosh x \approx \frac12 e^{|x|}$.
Use the same rectangular contour Γ of example 72. Again, the integrals over the vertical
sides of Γ approach zero as R tends to infinity. On the upper side, we find that
\[
\frac{e^{mz}}{\cosh z} = \frac{e^{m(i\pi+x)}}{\cosh(i\pi+x)}
= \frac{e^{im\pi}e^{mx}}{\cos\pi\cosh x} = \frac{e^{im\pi}e^{mx}}{-\cosh x}.
\]

Proceeding quickly like in the preceding example, we deduce that as R tends to infinity,
\[
\int_{-R}^{R} \frac{e^{mx}}{\cosh x}\,dx
+ e^{im\pi}\int_{R}^{-R} \frac{e^{mx}}{-\cosh x}\,dx
\longrightarrow i2\pi\cdot\mathop{\mathrm{res}}_{z=i\pi/2}\left[\frac{e^{mz}}{\cosh z}\right].
\]
Using (35) we find that
\[
\mathop{\mathrm{res}}_{z=i\pi/2}\left[\frac{e^{mz}}{\cosh z}\right]
= \left[\frac{e^{mz}}{\sinh z}\right]_{z=i\pi/2}
= \frac{e^{im\pi/2}}{i\sin\pi/2} = \frac{e^{im\pi/2}}{i}.
\]

Hence, letting R tend to infinity, we find that
\[
\int_{-\infty}^{\infty} \frac{e^{mx}}{\cosh x}\,dx
+ e^{im\pi}\int_{\infty}^{-\infty} \frac{e^{mx}}{-\cosh x}\,dx
= i2\pi\cdot\frac{e^{im\pi/2}}{i} = 2\pi e^{im\pi/2},
\]
and finally that
\[
\int_{-\infty}^{\infty} \frac{e^{mx}}{\cosh x}\,dx
= \frac{2\pi e^{im\pi/2}}{1+e^{im\pi}}
= \frac{2\pi}{e^{im\pi/2}+e^{-im\pi/2}}
= \frac{\pi}{\cos m\pi/2}. \qquad\text{J}
\]
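The exponentially decaying integrand makes example 73 ideal for a quick numerical confirmation (this check, with its ad-hoc truncation and grid, is not from the text; m = 0 recovers example 72).

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule on [a, b] with n subintervals.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

checks = []
for m in (0.0, 0.5, -0.5):
    approx = midpoint(lambda x: math.exp(m * x) / math.cosh(x), -60.0, 60.0, 120_000)
    checks.append((m, approx, math.pi / math.cos(m * math.pi / 2)))
print(checks)  # each approx matches π/cos(mπ/2)
```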

Many useful integrals may be transformed into the one considered in this example. One among
many is $\int_0^{\infty} dx/(1+x^n)$, where n ∈ N is at least 2, and possibly large. The case n = 2 is best
done by the methods of first year calculus, and one finds that $\int_0^{\infty} dx/(1+x^2) = \pi/2$. In section
7 we found, using the residue theorem, that $\int_0^{\infty} dx/(1+x^6) = \pi/3$. However, if n is irrational,
first year methods fail, and even the methods of section 7 become laborious.
On the other hand, substituting x = eu , which gives dx = eu du, and maps x = 0 into
u = −∞, x = ∞ into u = ∞, one gets that
\[
\int_0^{\infty} \frac{dx}{1+x^n} = \int_{-\infty}^{\infty} \frac{e^u}{1+e^{nu}}\,du;
\]

another substitution, nu = 2v, yields du = (2/n) dv, and hence
\[
\int_{-\infty}^{\infty} \frac{e^u}{1+e^{nu}}\,du
= \frac{2}{n}\int_{-\infty}^{\infty} \frac{e^{2v/n}}{1+e^{2v}}\,dv
= \frac{2}{n}\int_{-\infty}^{\infty} \frac{e^{2v/n}e^{-v}}{e^{-v}+e^{v}}\,dv
= \frac{1}{n}\int_{-\infty}^{\infty} \frac{e^{mv}}{\cosh v}\,dv,
\]
where m = 2/n − 1. This is the integral of the last example; therefore
\[
\int_0^{\infty} \frac{dx}{1+x^n}
= \frac{1}{n}\cdot\frac{\pi}{\cos[(2/n-1)\pi/2]}
= \frac{\pi/n}{\cos(\pi/n-\pi/2)} = \frac{\pi/n}{\sin\pi/n}.
\]

Note that if n is a large integer, say n = 300, the integral may be done by “first year methods”,
i.e., partial fractions. The calculation of $\int_0^{\infty} dx/(x^{300}+1)$ by partial fractions is left as an
exercise for the reader. J
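The closed form just derived is easy to test with the text's own substitution x = eᵘ, which turns the integrand into an exponentially decaying one that a uniform grid handles well. The exponent n = 2.5 and the truncation below are arbitrary test choices, not from the text.

```python
import math

def midpoint(f, a, b, m):
    # Composite midpoint rule on [a, b] with m subintervals.
    h = (b - a) / m
    return sum(f(a + (k + 0.5) * h) for k in range(m)) * h

n = 2.5  # arbitrary non-integer exponent
approx = midpoint(lambda u: math.exp(u) / (1 + math.exp(n * u)), -60.0, 60.0, 240_000)
exact = (math.pi / n) / math.sin(math.pi / n)
print(approx, exact)  # both ≈ 1.3213
```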
IExample 74 Find $\int_{-\infty}^{\infty} x\,dx/\sinh x$.
Solution: Integrate the function f (z) = z/ sinh z over the contour Γ accompanying example 72.
However, one must be careful here, because f has a simple pole at z = i π, even if it is analytic
at z = 0. So, use principal value integration, bearing in mind that, by (40),
\[
\int_{-\infty}^{\infty} \frac{x}{\sinh x}\,dx = -\!\!\!\!\int_{-\infty}^{\infty} \frac{x}{\sinh x}\,dx,
\]
for the simple reason that the left-hand side exists in the usual sense.

for the simple reason that the left-hand side exists in the usual sense.

Considering the vertical sides of Γ we note that
\[
\left|\frac{1}{\sinh(R+iy)}\right|
= \left|\frac{1}{\sinh R\cos y + i\cosh R\sin y}\right|
= \frac{1}{\sqrt{\sinh^2 R + \sin^2 y}} \le \frac{1}{\sinh R},
\]
hence the integrals over the vertical sides go to zero as R tends to infinity.
On the top side, we have that
\[
\sinh(x+i\pi) = \sinh x\cos\pi + i\cosh x\sin\pi = -\sinh x.
\]

Therefore, as R → ∞, we see that
\[
\int_{-R}^{R} \frac{x}{\sinh x}\,dx
+ -\!\!\!\!\int_{R}^{-R} \frac{x+i\pi}{-\sinh x}\,dx
\longrightarrow i\pi\cdot\mathop{\mathrm{res}}_{z=i\pi}\left[\frac{z}{\sinh z}\right];
\]
the residue is multiplied by i π (instead of i 2π) because it lies on the contour.
Now we note that by symmetry
\[
-\!\!\!\!\int_{-R}^{R} \frac{dx}{\sinh x} = 0,
\]
and that by (35)
\[
\mathop{\mathrm{res}}_{z=i\pi}\left[\frac{z}{\sinh z}\right]
= \left[\frac{z}{\cosh z}\right]_{z=i\pi}
= \frac{i\pi}{\cos\pi} = -i\pi.
\]
It follows that
\[
2\int_{-\infty}^{\infty} \frac{x}{\sinh x}\,dx = i\pi\cdot(-i\pi),
\]
and finally that $\int_{-\infty}^{\infty} x\,dx/\sinh x = \pi^2/2$. J
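Like the previous hyperbolic integrals, example 74 has an exponentially decaying, smooth integrand (value 1 at x = 0), so a direct numerical check converges fast (an added sketch; the truncation and grid are arbitrary).

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule on [a, b] with n subintervals.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: 1.0 if x == 0 else x / math.sinh(x)   # limit value 1 at x = 0
approx = midpoint(f, -60.0, 60.0, 120_000)
print(approx, math.pi ** 2 / 2)  # both ≈ 4.9348
```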

FRESNEL INTEGRALS
Integrate the function $f(z) = e^{iz^2}$, which is analytic over the whole plane, over the wedge-shaped
contour Γ pictured on the right. Clearly,
\[
\oint_\Gamma e^{iz^2}\,dz = 0,
\]
because the integrand has no singular points.
[Figure: the wedge-shaped contour Γ, from O to R along the real axis, then along the arc CR of |z| = R up to the angle φ = π/4, then back to O along the segment P on the ray φ = π/4.]
The contour consists of three pieces:
\[
\oint_\Gamma e^{iz^2}\,dz
= \int_0^R e^{ix^2}\,dx + \int_{C_R} e^{iz^2}\,dz + \int_P e^{iz^2}\,dz.
\]
We show that as R → ∞ the middle term tends to zero. We have, on CR
\[
z = Re^{i\phi} \implies dz = iRe^{i\phi}\,d\phi, \qquad [0\le\phi\le\pi/4]
\]

and hence z 2 = R2 (cos 2φ + i sin 2φ). Substituting back, and proceeding quickly like in the proof
of Jordan’s lemma, we get
\[
\left|\int_{C_R} e^{iz^2}\,dz\right|
\le \int_0^{\pi/4} e^{-R^2\sin 2\phi}\,R\,d\phi
= \frac{R}{2}\int_0^{\pi/2} e^{-R^2\sin\theta}\,d\theta,
\]
where φ has been substituted by θ/2 in the last step. Now, the inequality
\[
\sin\theta \ge \frac{2\theta}{\pi}, \qquad [0\le\theta\le\pi/2]
\]
which we also used in the proof of Jordan’s lemma, yields
\[
\left|\int_{C_R} e^{iz^2}\,dz\right|
\le \frac{R}{2}\int_0^{\pi/2} e^{-2R^2\theta/\pi}\,d\theta
= \frac{R}{2}\cdot\frac{\pi\left(1-e^{-R^2}\right)}{2R^2}
= \frac{\pi\left(1-e^{-R^2}\right)}{4R},
\]
and the last term on the right-hand side goes to zero as R tends to infinity.
The integral over P may be simplified by observing that, on P , we have
\[
z = re^{i\pi/4},
\]
where r goes from R back to 0. Hence,
\[
dz = e^{i\pi/4}\,dr, \qquad z^2 = r^2 e^{i\pi/2} = ir^2.
\]
It follows that
\[
\int_P e^{iz^2}\,dz
= \int_R^0 e^{-r^2} e^{i\pi/4}\,dr
\longrightarrow -e^{i\pi/4}\int_0^{\infty} e^{-r^2}\,dr. \qquad [\text{as } R\to\infty]
\]

Simple manipulations yield
\[
\int_0^{\infty} e^{-r^2}\,dr
= \int_0^{\infty} \frac{t^{-1/2}e^{-t}}{2}\,dt = \frac{(-\frac12)!}{2}.
\]
You should recall that
\[
(-\tfrac12)! = \sqrt{\pi};
\]
the proof is a standard application of the Laplace transform.
So, we find that
\[
\int_0^{\infty} e^{ix^2}\,dx - e^{i\pi/4}\cdot\frac{\sqrt{\pi}}{2} = 0;
\]
separating real and imaginary part, it follows immediately that
\[
\int_0^{\infty} \cos x^2\,dx = \int_0^{\infty} \sin x^2\,dx = \frac{\sqrt{\pi}}{2\sqrt{2}}. \tag{42}
\]
These famous results are known as Fresnel integrals, after the French physicist who first used
them in the study of light propagation.
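A direct quadrature of cos x² fails here (the partial integrals keep oscillating), but the rotated-path argument itself is easy to check numerically: approximate the Gaussian integral and rotate by e^{iπ/4}, whose real and imaginary parts should both equal √π/(2√2). The truncation and grid below are ad-hoc choices, not from the text.

```python
import cmath, math

n, R = 200_000, 10.0
h = R / n
# Midpoint approximation of the Gaussian integral ∫_0^∞ e^{-r²} dr ≈ √π/2.
gauss = sum(math.exp(-((k + 0.5) * h) ** 2) for k in range(n)) * h
rotated = cmath.exp(1j * math.pi / 4) * gauss
print(gauss, rotated.real, rotated.imag)  # ≈ 0.8862, 0.6267, 0.6267
```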

PROBLEMS

Revision: Cauchy Integral formula


31. Find the value of the following integrals:
(a) $\oint_{|z|=2} \frac{\sin(z+3)\,dz}{z^2-5z-6}$  (b) $\oint_{|z-i|=3} \frac{e^{iz}\,dz}{z^2+1}$  (c) $\oint_{|z|=2} \frac{\sin z\cdot\sin(2-z)\,dz}{z(z-1)}$
32. Find the value of the following integrals:
(a) $\oint_{|z|=4} \frac{dz}{(z^2+9)(z+9)}$  (b) $\oint_{|z|=3} \frac{\sinh\frac{\pi}{3}(z+i)\,dz}{z-1}$  (c) $\oint_{|z|=5} \frac{(2z-1)\,dz}{z^3-9z^2+8z}$
33. Find $\oint_C \frac{e^{z\ln 2}\,dz}{z^2-6z}$ in the case where:
(a) C is the circle |z − 2| = 1; (b) C is the circle |z − 2| = 3; (c) C is the circle |z − 2| = 5.
34. Find the following integrals using Cauchy’s generalized formula:
(a) $\oint_{|z|=2} \frac{e^{iz}\,dz}{(z+i)^4}$  (b) $\oint_{|z|=1} \frac{\tan z\,dz}{(z-\pi/6)^3}$  (c) $\oint_{|z-1/2|=1} \frac{\sinh z\,dz}{z^3(z-1)}$
Power Series
35. Expand the following functions into Taylor series centered at 0 (i.e., into McLaurin series):
(a) $\sqrt{1+z}+\sqrt{1-z}$  (b) $\int_0^z \frac{\sinh\omega}{\omega}\,d\omega$  (c) $\frac{z+4}{z^2-3z+2}$  (d) $\cos z\cosh z$
36. Using series multiplication (or Taylor’s theorem), find the first four non-zero terms of the
McLaurin series of f (z) = tan z.
37. Using series multiplication (or Taylor’s theorem), determine the coefficients of the expansion
of f (z) = ez /(1 − z) in a power series centered about z0 = 0.
Laurent Series
38. Find the Laurent series for f (z) = (2z − 3)/(z 2 − 3z + 2) in a punctured disk about each
of its two singular points.
39. Find the Laurent expansion of f (z) = (2z + 1)/(z 2 + z − 2) in the given regions:
(a) The disk |z| < 1 (b) The annulus 1 < |z| < 2 (c) The domain 2 < |z| < ∞.
40. Expand in a Laurent series in a neighborhood of the origin:
(a) (sinh z)/z 2 (b) (sinh2 z)/z (c) 1/(z 9 + z 7 ) (d) sinh (z + 1)/z
41. Expand 1/(z 2 + 1) in a punctured disk centered at z = i. Find the radius of convergence.
42. Expand 1/(z − i3) in a Laurent series that is valid: (a) inside the circle centered at z = −4
with radius 5, and (b) outside the circle centered at z = −4 with radius 5.
43. Expand (z 4 + 1)/z(z + 1)2 in a Laurent series inside a punctured disk centered at z = −1.
Find the radius of convergence.
Singular Points and Zeroes
44. Find the zeroes of the following functions and establish their order:
(a) z 2 sin z (b) 1 + cosh z (c) ez − eiz (d) (z 2 − π 2 ) cos z/2
45. Determine the character of the singular points of the following functions.
(a) $1/(1-\cos z)$  (b) $e^{-1/z^3}$  (c) $1/(z^5+2z^4+z^3)$

46. Show that the point z = 0 is not a singular point of the function f (z) = 1/ sin z − 1/ sinh z.
What is it then?
Residues
47. Find the residues of the following functions at all their singular points.
(a) $\frac{1}{z^3-z^5}$  (b) $z^6 e^{1/z}$  (c) $\frac{e^z}{z^4+9z^2}$  (d) $\frac{\sin\pi z}{(6z-1)(z-1)^2}$
48. Find the residues of the following functions at all their singular points.
(a) $\frac{e^z}{z(z-2)^4}$  (b) $\frac{z^2-1}{z^6+2z^5+z^4}$  (c) $\frac{1-\cos z}{z^3(z-3)}$  (d) $\frac{\sin z}{e^z-1}$
49. Find the following residues.
(a) $\mathop{\mathrm{res}}\limits_{z=1}\left[z^4 e^{1/(z-1)}\right]$  (b) $\mathop{\mathrm{res}}\limits_{z=0}\left[\frac{1}{2\sinh z-2z}\right]$  (c) $\mathop{\mathrm{res}}\limits_{z=0}\left[\frac{\sin 2z-2z}{(1-\cos z)^2}\right]$  (d) $\mathop{\mathrm{res}}\limits_{z=0}\left[\frac{\sin 3z-3\sin z}{\sin^2 z-z\sin z}\right]$
50. (a) $\mathop{\mathrm{res}}\limits_{z=z_0}\left[f(z)+g(z)\right] = \mathop{\mathrm{res}}\limits_{z=z_0} f(z) + \mathop{\mathrm{res}}\limits_{z=z_0} g(z)$: true or false?
(b) $\mathop{\mathrm{res}}\limits_{z=z_0}\left[f(z)\cdot g(z)\right] = \mathop{\mathrm{res}}\limits_{z=z_0} f(z) \cdot \mathop{\mathrm{res}}\limits_{z=z_0} g(z)$: true or false?

Trigonometric Integrals
51. Calculate the following definite integrals.
(a) $\int_0^{2\pi} \frac{dt}{5+3\sin t}$  (b) $\int_0^{2\pi} \frac{dt}{(2+\cos t)^2}$  (c) $\int_0^{2\pi} \frac{dt}{4-\sin^2 t}$
(d) $\int_0^{2\pi} \cos^{7000} t\,dt$  (e) $\int_0^{\pi} \frac{dt}{2-\cos t}$  (f) $\int_0^{2\pi} \frac{9\cos 7t\,dt}{10-6\cos t}$
52. Calculate (a) $\int_{-\pi}^{\pi} \frac{\cos 3t}{5-4\cos t}\,dt$  (b) $\int_{-\pi}^{\pi} \frac{\cos 8t\,dt}{2-\cos t}$  (c) $\int_{-\pi}^{\pi} \frac{1+\cos 3t}{1+\cos t}\,dt$
Rational Integrals
53. Apply the calculus of residues to find the following integrals:
(a) $\int_{-\infty}^{\infty} \frac{dx}{x^4+1}$  (b) $\int_{-\infty}^{\infty} \frac{2\,dx}{x^8+1}$  (c) $\int_{-\infty}^{\infty} \frac{x^2\,dx}{(x^2+9)(x^2+4)^2}$
(d) $\int_{-\infty}^{\infty} \frac{dx}{(x^2+1)(x^2+2x+2)}$  (e) $\int_{-\infty}^{\infty} \frac{dx}{(x^2+2x+2)^2}$  (f) $\int_{-\infty}^{\infty} \frac{x\,dx}{(x^2-2x+4)^3}$

Fourier Integrals
54. Calculate the following definite integrals.
(a) $\int_{-\infty}^{\infty} \frac{x\cos x\,dx}{x^2-2x+10}$  (b) $\int_{-\infty}^{\infty} \frac{x\sin x\,dx}{x^2+4x+20}$  (c) $\int_{-\infty}^{\infty} \frac{\cos x\,dx}{(x^2+1)(x^2+4)}$
(d) $\int_{-\infty}^{\infty} \frac{\cos\sqrt{2}\,x}{1+x^4}\,dx$  (e) $\int_{-\infty}^{\infty} \frac{2x^3\sin\omega x}{(1+x^2)^2}\,dx$  (f) $\int_{-\infty}^{\infty} \frac{x^2\cos x}{(x^2+1)^2}\,dx$

Principal Value Integrals
55. Calculate the following definite integrals.
(a) $\int_{-\infty}^{\infty} \frac{\sin x\,dx}{x(x^2+1)}$  (b) $\int_{-\infty}^{\infty} \frac{(\cos 5x-\cos 7x)\,dx}{x^2}$  (c) $\int_{-\infty}^{\infty} \frac{\sin 3x\,dx}{x(x^2+4)^2}$
(d) $\int_{-\infty}^{\infty} \frac{\sin x-x}{x^3}\,dx$  (e) $\int_{-\infty}^{\infty} \frac{\cos\pi x\,dx}{4x^2-8x+3}$  (f) $\int_{-\infty}^{\infty} \frac{3\sin\pi x\,dx}{x^3-1}$
56. Find
(a) $\int_{-\infty}^{\infty} \frac{\sin 2x-2\cos 2\sin x}{x-2}\,dx$  (b) $\int_{-\infty}^{\infty} \frac{\sin x-\cos x}{4x-\pi}\,dx$  (c) $\int_{-\infty}^{\infty} \frac{\cos x-\cos 1}{x^4-1}\,dx$
57. Find $\int_{-\infty}^{\infty} \frac{3x+\sin x-\sin 4x}{x^3}\,dx$.
58. Show that $\sin^4 x = \frac18(3-4\cos 2x+\cos 4x)$, hence use this result to calculate $\int_{-\infty}^{\infty} \frac{\sin^4 x}{x^2}\,dx$.
Hyperbolic Integrals
59. Calculate the following integrals. You may use the contour of examples 72–74.
(a) $\int_{-\infty}^{\infty} \frac{x^2}{\cosh x}\,dx$  (b) $\int_{-\infty}^{\infty} \frac{\cosh mx}{\cosh x}\,dx$ [ |m| < 1 ]  (c) $\int_{-\infty}^{\infty} \frac{\cos x}{\cosh x}\,dx$
(d) $\int_{-\infty}^{\infty} \frac{\sin x\sinh x}{\cosh 2x}\,dx$  (e) $\int_{-\infty}^{\infty} \frac{\sinh mx}{\sinh x}\,dx$ [ |m| < 1 ]  (f) $\int_{-\infty}^{\infty} \frac{x^3}{\sinh x}\,dx$
60. Find $\int_{-\infty}^{\infty} \frac{x\sin x}{\cosh x}\,dx$. You’ll need, at some stage, the result of problem 59(c).
61. Find: (b) $\int_{-\infty}^{\infty} \frac{\cos x\,dx}{4\sinh^2 x+1}$  (c) $\int_{-\infty}^{\infty} \frac{\cos x\,dx}{4\cosh^2 x-1}$
62. Show that $\mathop{\mathrm{res}}\limits_{z=i\pi/2}\left[\dfrac{\cos^2 z}{\cosh^2 z}\right] = i\sinh\pi$ (but note the 2nd-order pole).
Hence, using this result, find $\int_{-\infty}^{\infty} \frac{\cos^2 x}{\cosh^2 x}\,dx$.

Fresnel-like Integrals
63. Find:
(a) $\int_0^{\infty} \cos x^3\,dx$  (b) $\int_0^{\infty} \sin x^3\,dx$  (c) $\int_0^{\infty} \cos x^4\,dx$  (d) $\int_0^{\infty} \sin x^4\,dx$

ANSWERS

31 (a) $-i(2\pi/7)\sin 2$ (b) $-2\pi\sinh 1$ (c) $i2\pi\sin^2 1$
32 (a) $-i\pi/45$ (b) $i\pi\sinh\pi/3 - 3\pi\cosh\pi/3$ (c) $-i15\pi/28$.
33 (a) 0; (b) $-i\pi/3$; (c) $i21\pi$.
34 (a) $-\pi e/3$; (b) $i8\pi/3\sqrt{3}$; (c) $i2\pi(\sinh 1-1)$.
35 (a) $2\sum_{k\ \mathrm{even}}\binom{1/2}{k}z^k$  (b) $\sum_{k\ \mathrm{odd}}\frac{z^k}{k\cdot k!}$  (c) $\sum_{k=0}^{\infty}\left(5-\frac{3}{2^k}\right)z^k$


(d) $\sum_{k=0}^{\infty}\frac{(-4)^k z^{4k}}{(4k)!}$  Hint: Show first that $\cos z\cosh z = \frac12\left[\cosh(1+i)z+\cosh(1-i)z\right]$
36 $\tan z = z + \frac{2z^3}{3!} + \frac{16z^5}{5!} + \frac{272z^7}{7!} + \cdots$
37 $\frac{e^z}{1-z} = \sum a_k z^k$, where $a_0 = 1$, $a_1 = 2$, and in general $a_k = 1+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\cdots+\frac{1}{k!}$.
38 About z = 1: $f(z) = \frac{1}{z-1} - \sum_{k=0}^{\infty}(z-1)^k$. About z = 2: $f(z) = \frac{1}{z-2} + \sum_{k=0}^{\infty}(-1)^k(z-2)^k$.
39 (a) $\sum_{k=0}^{\infty}\left[\frac{(-1)^k}{2^{k+1}}-1\right]z^k$; this is a Taylor series. (b) $\sum_{k=1}^{\infty}\frac{1}{z^k}+\sum_{k=0}^{\infty}\frac{(-1)^k z^k}{2^{k+1}}$. (c) $\sum_{k=0}^{\infty}\frac{1+(-2)^k}{z^{k+1}}$.
40 (a) $\frac{1}{z}+\sum_{k\ \mathrm{odd}}\frac{z^k}{(k+2)!}$ (b) $\sum_{k\ \mathrm{odd}}\frac{2^k z^k}{(k+1)!}$; this is a Taylor series. (c) $\sum_{k=0}^{\infty}(-1)^k z^{2k-7}$
(d) $\sum_{k\ \mathrm{even}}\frac{\sinh 1}{k!\,z^k}+\sum_{k\ \mathrm{odd}}\frac{\cosh 1}{k!\,z^k}$.
41 $\frac14\sum_{k=-1}^{\infty}\frac{i^k(z-i)^k}{2^k}$; the radius of convergence is 2.
42 (a) $-\sum_{k=0}^{\infty}\frac{(z+4)^k}{(4+i3)^{k+1}}$; this is a Taylor series. (b) $\sum_{k=0}^{\infty}\frac{(4+i3)^k}{(z+4)^{k+1}}$
43 $-\frac{2}{(z+1)^2}+\frac{2}{z+1}-4-\sum_{k=2}^{\infty}(z+1)^k$; the radius of convergence is 1.
44 (a) z = 0 is a triple zero; z = kπ [where k is integer ≠ 0] are simple zeroes.
(b) z = ±ikπ, where k is odd [all double]. (c) z = (i − 1)kπ, where k is integer; all simple.
(d) z = ±π are double zeroes; z = ±kπ where k = 3, 5, 7, . . . are simple zeroes.
45 (a) Double poles at z = kπ, where k is even. (b) Essential singularity at z = 0.
(c) Triple pole at 0, double pole at −1.
46 It’s a simple zero.
47 (a) $\mathop{\mathrm{res}}\limits_{z=1} f = \mathop{\mathrm{res}}\limits_{z=-1} f = -1/2$; $\mathop{\mathrm{res}}\limits_{z=0} f = 1$.
(b) $\mathop{\mathrm{res}}\limits_{z=0} f = 1/5040$.
(c) $\mathop{\mathrm{res}}\limits_{z=0} f = 1/9$, $\mathop{\mathrm{res}}\limits_{z=i3} f = ie^{i3}/54$, $\mathop{\mathrm{res}}\limits_{z=-i3} f = -ie^{-i3}/54$.
(d) $\mathop{\mathrm{res}}\limits_{z=1/6} f = 3/25$, $\mathop{\mathrm{res}}\limits_{z=1} f = -\pi/5$.
48 (a) $\mathop{\mathrm{res}}\limits_{z=0} f = 1/16$, $\mathop{\mathrm{res}}\limits_{z=2} f = e^2/48$.
(b) $\mathop{\mathrm{res}}\limits_{z=-1} f = -2$, $\mathop{\mathrm{res}}\limits_{z=0} f = 2$.
(c) $\mathop{\mathrm{res}}\limits_{z=0} f = -1/6$, $\mathop{\mathrm{res}}\limits_{z=3} f = (1-\cos 3)/27$.
(d) $\mathop{\mathrm{res}}\limits_{z=i2k\pi} f = i\sinh 2k\pi$, where k is integer; however, z = 0 is a removable singularity.
49 (a) 501/120 Hint: Expand $e^{1/(z-1)}$ and $z^4$ into a power series about z = 1.
(b) −3/20 Hint: Expand sinh z into McLaurin series.
(c) −16/3 Hint: Expand sin 2z and cos z into McLaurin series.

(d) 24 Hint: Begin by showing that $3\sin z - \sin 3z = -4\sin^3 z$. Then, McLaurin etc.
50 (a) True, (b) False.
51 (a) $\pi/2$ (b) $4\pi/3\sqrt{3}$ (c) $\pi/\sqrt{3}$; you probably need the identity $7\pm4\sqrt{3} = (2\pm\sqrt{3})^2$.
(d) $2\pi\,7000!\big/\left[2^{7000}(3500!)^2\right]$; use the binomial formula. (e) $\pi/\sqrt{3}$ (f) $\pi/972$
52 (a) $\pi/12$, (b) $2\pi(2-\sqrt{3})^8/\sqrt{3}$, (c) $6\pi$.
53 (a) $\pi/\sqrt{2}$, (b) $\pi(\sin\pi/8+\cos\pi/8)$, (c) $\pi/100$, (d) $2\pi/5$, (e) $\pi/2$, (f) $\pi/24\sqrt{3}$.
54 (a) $\pi e^{-3}(\cos 1-3\sin 1)/3$, (b) $\pi e^{-4}(2\cos 2+\sin 2)/2$, (c) $\pi e^{-2}(2e-1)/12$,
(d) $\pi e^{-1}(\cos 1+\sin 1)/\sqrt{2}$, (e) $\pi(2-\omega)e^{-\omega}$ (f) 0.
55 (a) $\pi(1-e^{-1})$ (b) $2\pi$ (c) $\pi(1-4e^{-6})/16$ (d) $-\pi/2$ (e) $\pi/2$ (f) $-\pi(1-\sqrt{3}e^{-\pi\sqrt{3}/2})$
56 (a) $-\pi$, (b) $\pi/2\sqrt{2}$, (c) $\frac12\pi\left(\cos 1-\sin 1-e^{-1}\right)$.
57 $15\pi/2$.
58 $\pi/2$.
59 (a) $\dfrac{\pi^3}{4}$, (b) $\dfrac{2\pi\cos m\pi/2}{1+\cos m\pi} = \dfrac{\pi}{\cos m\pi/2}$, (c) $\dfrac{\pi}{\cosh\pi/2}$, (d) $\dfrac{\pi(\sinh 3\pi/4-\sinh\pi/4)}{\sqrt{2}\,(1+\cosh\pi)}$.
Note: You must switch to principal-value integration for questions (e) and (f). Why?
(e) $\dfrac{\pi\sin m\pi}{1+\cos m\pi} = \dfrac{\pi}{\cot m\pi/2}$,
(f) $\dfrac{\pi^4}{4}$ (you need the result of example 74; follow the same lines).
60 $\dfrac{\pi^2\sinh\pi/2}{1+\cosh\pi}$.
61 (b) $\dfrac{\pi}{\sqrt{3}}\cdot\dfrac{\cosh 5\pi/6-\cosh\pi/6}{\cosh\pi-1}$, (c) $\dfrac{\pi}{\sqrt{3}}\cdot\dfrac{\cosh 2\pi/3-\cosh\pi/3}{\cosh\pi-1}$.
Note: These results may also be written (b) $\dfrac{\pi\sinh\pi/3}{\sqrt{3}\sinh\pi/2}$ and (c) $\dfrac{\pi\sinh\pi/6}{\sqrt{3}\sinh\pi/2}$, respectively.
Convince yourself of this.
62 $1+\dfrac{\pi}{\sinh\pi}$. An easy way to find the residue is by expanding $\cos^2 z$ and $\cosh^2 z$ into Taylor
series about z = iπ/2; go up to fourth order.
63 (a) $\left(\frac13\right)!\cos\frac{\pi}{6}$ (b) $\left(\frac13\right)!\sin\frac{\pi}{6}$ (c) $\left(\frac14\right)!\cos\frac{\pi}{8}$ (d) $\left(\frac14\right)!\sin\frac{\pi}{8}$
The Fourier Integral 101

Chapter Three

THE FOURIER INTEGRAL

1. Introduction

The Fourier integral is used to analyze functions that are defined over infinite intervals, or over
“semi-infinite” ones—a loose word for describing intervals with a beginning but no end.
A naive approach may go as follows. If f is defined between −∞ and +∞, one may always
restrict the definition of f to a finite interval (−L, L); if f is also piecewise continuous and
smooth, then we have seen in chapter 1 that its Fourier series will converge to f (x) at every
point where f is continuous, and to the average of the limit from the right and the limit from
the left at every point of jump-discontinuity.
However, outside the interval (−L, L) the Fourier series will converge to the periodic exten-
sion of f . Therefore, if f is not periodic, then the Fourier series will reproduce f only between
−L and L; outside those limits, the series will converge, but not to f (x).
An obvious way to work around this problem is to take L sufficiently large to include all the
points where the Fourier series is likely to be applied, so that one may ignore the non-convergence
in the “tails”.
At this point one may naturally ask what happens if L is allowed to tend to infinity in the
formulas, after f has been restricted to the finite interval (−L, L) and then expanded into a
Fourier series. Unfortunately, attractive as it may be, this plan leads to questions of convergence.
For example, the simple function f (x) = 1 may be expanded into a Fourier series over any interval
(−L, L), no matter how large L is, but it’s not integrable over (−∞, ∞).
Actually, the idea works but the theorems supporting it are very subtle, so much so that
they have provided fertile ground for research throughout the 20th century.
On the other hand, it’s not likely that—as engineers—you’ll ever need to know the rigorous
proofs of such theorems, justifying all the steps. What we’ll do instead is to convince ourselves
that it is “reasonable” that one should be able to build up a proof in a certain way, and leave
the task of filling in the details to mathematical analysts†. We’ll be satisfied to follow a heuristic
approach, like we did in chapter 1.
So, let us begin with the complex form of Fourier series (10), modified for a function f (x)
with a period of length 2L instead of 2π:
\[
f(x) = \sum_{k=-\infty}^{\infty} c_k e^{ik\pi x/L},
\]

† Who seem to enjoy the job anyway.



where
\[
c_k = \frac{1}{2L}\int_{-L}^{L} f(x)\,e^{-ik\pi x/L}\,dx.
\]
In the integral above, x is a dummy variable, and now we relabel it s. Besides reducing the
danger of confusion, this allows us to combine two formulas into one:
\[
f(x) = \sum_{k=-\infty}^{\infty}\left[\frac{1}{2L}\int_{-L}^{L} f(s)e^{-ik\pi s/L}\,ds\right]e^{ik\pi x/L}.
\]
We now introduce the discrete variable
\[
\alpha_k = \frac{\pi}{L}k \qquad [k = \ldots-3,-2,-1,0,1,2,3,\ldots]
\]
Note that the spacing ∆α between consecutive values of α is constant:
\[
\Delta\alpha = \alpha_{k+1}-\alpha_k = \frac{\pi}{L}.
\]
Substituting back, we get:
\[
f(x) = \sum_{k=-\infty}^{\infty}\left[\frac{1}{2\pi}\cdot\frac{\pi}{L}\int_{-L}^{L} f(s)e^{-i\alpha_k s}\,ds\right]e^{i\alpha_k x}
= \frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\left[\int_{-L}^{L} f(s)e^{-i\alpha_k s}\,ds\right]e^{i\alpha_k x}\,\Delta\alpha.
\]
Clearly as L → ∞, the spacing ∆α tends to zero; the sum on the right-hand side has the same
form as a Riemann sum, and hence it is reasonable to expect that it will differ less and less, as
∆α → 0, from the corresponding integral:
\[
\sum_{k=-\infty}^{\infty}\left[\int_{-L}^{L} f(s)e^{-i\alpha_k s}\,ds\right]e^{i\alpha_k x}\,\Delta\alpha
\longrightarrow \int_{-\infty}^{\infty}\left[\int_{-L}^{L} f(s)e^{-i\alpha s}\,ds\right]e^{i\alpha x}\,d\alpha.
\]

We have reached the point of the proof where a careful analysis of convergence is required.
Assuming that this step is justified, we write:
\[ f(x) = \lim_{L\to\infty} \frac{1}{2\pi}\int_{-\infty}^{\infty} \left[ \int_{-L}^{L} f(s)\,e^{-i\alpha(s-x)}\,ds \right] d\alpha. \]

Finally, defining a function F (α) as follows:


\[ F(\alpha) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-i\alpha s} f(s)\,ds, \]
we get
\[ f(x) = \int_{-\infty}^{\infty} e^{i\alpha x} F(\alpha)\,d\alpha, \]
with the understanding that
\[ \int_{-\infty}^{\infty} \ldots \quad\text{means}\quad \lim_{L\to\infty}\int_{-L}^{L} \ldots \]
This is not the standard definition of improper integral, but it’s rather similar to Cauchy’s
principal value. See also the comments attached to example 65.
This result is usually referred to as Fourier’s integral theorem. It was stated by Fourier in
1811 in a form equivalent to this one, but the search for rigorous proofs kept pure mathematicians
busy for the next 150 years, and forced a careful reappraisal of set theory and measure theory,
among other things.
Note that s in the integral defining F (α) is a dummy variable; the standard practice is to
change it back to x. We also replace the symbol α with k.

THREE WAYS OF WRITING FOURIER’S INTEGRAL THEOREM


Unfortunately there is no agreement in the literature about where to put the factor 2π in Fourier’s
integral theorem. There are at least three sensible ways of doing it, and good arguments may
be found for each one, so let’s see them all together: the first one is the one given before.
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-ikx} f(x)\,dx \;\Longleftrightarrow\; f(x) = \int_{-\infty}^{\infty} e^{ikx} F(k)\,dk. \qquad (43.1) \]
\[ F(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-ikx} f(x)\,dx \;\Longleftrightarrow\; f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{ikx} F(k)\,dk. \qquad (43.2) \]
\[ F(k) = \int_{-\infty}^{\infty} e^{-i2\pi kx} f(x)\,dx \;\Longleftrightarrow\; f(x) = \int_{-\infty}^{\infty} e^{i2\pi kx} F(k)\,dk. \qquad (43.3) \]

As an exercise, you should verify that these definitions are equivalent. Naturally, for a given
f (x), the corresponding F (k) is slightly different in the three cases.
The second definition has the advantage of being symmetric. For instance, if you change the
dummy variable k into −k, you find that F and f are switched around. It is probably the only
form of Fourier's theorem that a theoretical physicist will be prepared to use. Its disadvantage
is that a factor 1/√(2π) must be carried through every step of the calculations, which could be
annoying when working with pencil and paper.
The third definition, (43.3), has the same symmetry properties as (43.2), and more. How-
ever, it clutters the exponentials and, perhaps for this reason, it’s the least popular of the three
among engineers.
IExample 75 Expand as a Fourier integral the function f (x), where f (x) = 3 if 0 < x < 1;
f (x) = 0 everywhere else on the x axis.
Solution: The graph of f(x) is shown in the picture on the right: a rectangle of height 3 over the interval (0, 1); note that f(x) is not defined for x = 0 and x = 1. Using formula (43.1), we find immediately that
\[ F(k) = \frac{1}{2\pi}\int_0^1 3e^{-ikx}\,dx = \frac{3}{2\pi}\left[\frac{e^{-ikx}}{-ik}\right]_0^1 = \frac{3}{2\pi}\cdot\frac{1-e^{-ik}}{ik}. \]
Therefore,
\[ f(x) = \frac{3}{2\pi}\int_{-\infty}^{\infty} \frac{1-e^{-ik}}{ik}\, e^{ikx}\,dk = \begin{cases} 3 & \text{if } 0<x<1, \\ \tfrac{3}{2} & \text{if } x=0 \text{ or } x=1, \\ 0 & \text{everywhere else.} \end{cases} \]
The convergence to 3/2 is a consequence of the fact that f(x) has jump-discontinuities at x = 0
and x = 1, and at such points the average between the limit from the left and the limit from the
right is precisely 3/2. This shows an interesting feature of the Fourier integral: it can “reconstruct”
in a sensible way the graph of a function at isolated points where the function was originally
not defined. This property is used in computerized enhancement of electronic signals, such as
music or pictures.
Corollary: Substituting x = 0 in the last equation, we recover a familiar result:
\[ \frac{3}{2\pi}\int_{-\infty}^{\infty} \frac{1-e^{-ik}}{ik}\,dk = \frac{3}{2\pi}\int_{-\infty}^{\infty} \frac{1-\cos k + i\sin k}{ik}\,dk = \frac{3}{2} \;\Longrightarrow\; \int_{-\infty}^{\infty} \frac{\sin k}{k}\,dk = \pi; \]

see also example 68. J
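These notes contain no code, but results like this one are easy to check numerically. The sketch below (an addition to the notes; it assumes Python with NumPy and SciPy are available) compares the closed form of F(k) from example 75 against direct numerical integration of definition (43.1):

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(k):
    # F(k) = (1/2π) ∫_0^1 3 e^{-ikx} dx, computed as separate real and imaginary parts
    re, _ = quad(lambda x: 3 * np.cos(k * x) / (2 * np.pi), 0, 1)
    im, _ = quad(lambda x: -3 * np.sin(k * x) / (2 * np.pi), 0, 1)
    return re + 1j * im

def F_closed(k):
    # the result of example 75: (3/2π) · (1 − e^{-ik}) / (ik)
    return 3 / (2 * np.pi) * (1 - np.exp(-1j * k)) / (1j * k)

for k in (0.5, 1.0, 3.7, -2.2):
    assert abs(F_numeric(k) - F_closed(k)) < 1e-10
print("ok")
```

The agreement holds for positive and negative k alike, as expected of a transform defined over the whole real axis.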



IExample 76 Expand as a Fourier integral the function f (x), where f (x) = x if 0 < x < 5;
f (x) = 0 everywhere else on the x axis.
Solution: Proceed as in example 75. We find immediately that
\[ F(k) = \frac{1}{2\pi}\int_0^5 x\,e^{-ikx}\,dx = \frac{-1+e^{-i5k}+i5k\,e^{-i5k}}{2\pi k^2} \quad\text{(by parts).} \]
Therefore,
\[ \int_{-\infty}^{\infty} \frac{-1+(1+i5k)\,e^{-i5k}}{2\pi k^2}\, e^{ikx}\,dk = \begin{cases} 0 & \text{if } x \text{ is negative,} \\ x & \text{if } 0 \le x < 5, \\ 5/2 & \text{if } x = 5, \\ 0 & \text{if } x > 5. \end{cases} \]
(The picture shows the graph of f(x): a ramp rising from 0 to 5 on the interval (0, 5), and zero elsewhere.)
Corollary: Substituting x = 5 on the left-hand side, we get
\[ \int_{-\infty}^{\infty} \frac{-e^{i5k}+1+i5k}{2\pi k^2}\,dk = \frac{5}{2} \;\Longrightarrow\; \int_{-\infty}^{\infty} \frac{1-\cos 5k}{k^2}\,dk = 5\pi. \]

See also example 70. J

2. Fourier Transform

We have mentioned that Fourier’s integral theorem may be written in several equivalent ways;
each one has its own merits but leads to a slightly different definition of Fourier transform. We
must now choose one.

IDefinition: We call the function


\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-ikx} f(x)\,dx, \qquad (44) \]
which we introduced in (43.1), the Fourier Transform of f(x). J

The main advantage of this choice is the striking similarity between the equation
\[ f(x) = \int_{-\infty}^{\infty} F(k)\,e^{ikx}\,dk \qquad (45) \]
and the complex form (8) of Fourier series
\[ f(x) = \sum_{k=-\infty}^{\infty} c_k\, e^{ikx}, \]

which makes it easy to remember.


We’ll use the same convention that you learnt in second year with Laplace transforms, i.e.,
we reserve lower-case letters for direct functions, and capital letters for the corresponding Fourier
transforms. It is uncommon for Fourier and Laplace transforms to be applied to the same
problem, though it may happen—in which case, an alternative notation must be used.

We’ll also denote the Fourier transformation by the calligraphic letter F: in other words,
\[ \mathcal{F}\big[f(x)\big] \overset{\text{def}}{=} F(k) \qquad\text{and}\qquad \mathcal{F}^{-1}\big[F(k)\big] \overset{\text{def}}{=} f(x). \]

ORTHOGONALITY AND PARSEVAL’S EQUATION


It’s possible to introduce an inner product over (−∞, ∞) in the same way as we did with
formula (11) in chapter 1: if f (x) and g(x) are functions that possess a Fourier transform, then
we set
\[ \langle f \,|\, g\rangle \overset{\text{def}}{=} \int_{-\infty}^{\infty} \overline{f}\, g\,dx. \qquad (46) \]
−∞

With this definition of inner product, it is possible to show that
\[ \frac{1}{2\pi}\,\langle f \,|\, g\rangle = \langle F \,|\, G\rangle, \]
and hence, setting f ≡ g, that
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} |f|^2\,dx = \int_{-\infty}^{\infty} |F|^2\,dk. \qquad (47) \]

This result is Parseval’s equation for Fourier’s integral. Note its similarity with Parseval’s
equation for Fourier’s series: see (13) in chapter 1.

IExample 77 Say f (x) = 3 if 0 < x < 1; f (x) = 0 everywhere else on the x axis (this is the
function introduced in example 75).
Say also g(x) = 1 if |x| < 1/6; g(x) = 0 everywhere else. Both f and g have a rectangular
graph, and the product f · g is different from zero only where they overlap, in (0, 1/6).
Using the result of example 75, we find that
\[ F(k) = \frac{3}{2\pi}\cdot\frac{1-e^{-ik}}{ik} = \frac{3}{2\pi}\cdot\frac{e^{-ik/2}\left(e^{ik/2}-e^{-ik/2}\right)}{ik} = \frac{3e^{-ik/2}}{2\pi}\cdot\frac{2\sin(k/2)}{k}. \]

The transform of g(x) is found in a very similar way:


\[ G(k) = \frac{1}{2\pi}\int_{-1/6}^{1/6} e^{-ikx}\,dx = \frac{1}{2\pi}\cdot\frac{e^{-ik/6}-e^{ik/6}}{-ik} = \frac{2\sin(k/6)}{2\pi k}. \]

Since f and g are real, their inner product is simply


\[ \langle f \,|\, g\rangle = \int_0^{1/6} 3\cdot 1\,dx = \frac{1}{2}. \]

However, F and G are complex, so we have


\[ \langle F \,|\, G\rangle = \int_{-\infty}^{\infty} \overline{F}\,G\,dk = \int_{-\infty}^{\infty} \frac{3e^{ik/2}}{2\pi}\cdot\frac{2\sin(k/2)}{k}\cdot\frac{2\sin(k/6)}{2\pi k}\,dk = \int_{-\infty}^{\infty} \frac{12\,e^{ik/2}\sin(k/2)\sin(k/6)}{4\pi^2 k^2}\,dk. \]

Recall that in this section we interpret
\[ \int_{-\infty}^{\infty}\ldots \quad\text{as a short-hand for}\quad \lim_{L\to\infty}\int_{-L}^{L}\ldots \]
(which technically is not correct). So, we may use symmetry (the imaginary part of the integrand is odd in k, so it integrates to zero) and continue with
\[ \langle F \,|\, G\rangle = \int_{-\infty}^{\infty} \frac{12\cos(k/2)\sin(k/2)\sin(k/6)}{4\pi^2 k^2}\,dk = \frac{3}{2\pi^2}\int_{-\infty}^{\infty} \frac{\sin k\,\sin(k/6)}{k^2}\,dk = \]
\[ = \frac{3}{4\pi^2}\int_{-\infty}^{\infty} \frac{\cos(5k/6)-\cos(7k/6)}{k^2}\,dk = \frac{3}{4\pi^2}\,\mathrm{Re}\left[\int_{-\infty}^{\infty} \frac{e^{i5k/6}-e^{i7k/6}}{k^2}\,dk\right]. \]

From here we proceed exactly as in example 70; we find immediately that
\[ \int_{-\infty}^{\infty} \frac{e^{i5k/6}-e^{i7k/6}}{k^2}\,dk = i\pi\cdot\underset{z=0}{\mathrm{res}}\left[\frac{e^{i5z/6}-e^{i7z/6}}{z^2}\right] = i\pi\cdot\left[\frac{i5}{6}-\frac{i7}{6}\right] = \frac{\pi}{3}. \]
Therefore,
\[ \langle F \,|\, G\rangle = \mathrm{Re}\left[\frac{3}{4\pi^2}\cdot\frac{\pi}{3}\right] = \frac{1}{4\pi}, \]
and finally
\[ \frac{1}{2\pi}\,\langle f \,|\, g\rangle = \langle F \,|\, G\rangle, \]

as expected. J

What about Parseval’s equation? Let’s use the same functions of this example.

IExample 78 Say f (x) = 3 if 0 < x < 1; f (x) = 0 everywhere else on the x axis. It follows
that
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} |f(x)|^2\,dx = \frac{1}{2\pi}\int_0^1 9\,dx = \frac{9}{2\pi}. \]
We already found that
\[ F(k) = \frac{3e^{-ik/2}}{2\pi}\cdot\frac{2\sin(k/2)}{k}, \]
hence
\[ \int_{-\infty}^{\infty} |F(k)|^2\,dk = \frac{9}{4\pi^2}\int_{-\infty}^{\infty} \frac{4\sin^2(k/2)}{k^2}\,dk = \frac{9}{2\pi^2}\int_{-\infty}^{\infty} \frac{\sin^2\beta}{\beta^2}\,d\beta \]
(substituting β = k/2). The last integral was calculated in example 70:
\[ \int_{-\infty}^{\infty} \frac{\sin^2\beta}{\beta^2}\,d\beta = \pi. \]
Therefore,
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} |f(x)|^2\,dx = \int_{-\infty}^{\infty} |F(k)|^2\,dk, \]

which verifies Parseval’s equation. J
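As a quick numerical sanity check (an addition to the notes, assuming Python with NumPy and SciPy), one can compare the two sides of Parseval's equation for this f. The cutoff at |k| = 300 and the loose tolerance account for the slowly decaying tail of |F(k)|²:

```python
import numpy as np
from scipy.integrate import quad

# left-hand side: (1/2π) ∫_0^1 9 dx = 9/(2π)
lhs = 9 / (2 * np.pi)

# |F(k)|² = (9/4π²) · (sin(k/2)/(k/2))²; note that np.sinc(x) = sin(πx)/(πx)
F2 = lambda k: 9 / (4 * np.pi**2) * np.sinc(k / (2 * np.pi))**2

rhs, _ = quad(F2, -300, 300, limit=5000)
assert abs(lhs - rhs) < 1e-2
```

The remaining discrepancy is exactly the mass of |F(k)|² beyond the cutoff, of order 1/300.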



IExample 79 Let g(x) = 1 if |x| < 1/6; g(x) = 0 everywhere else. Recall that
\[ G(k) = \frac{2\sin(k/6)}{2\pi k}. \]
We observe that
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} |g(x)|^2\,dx = \frac{1}{2\pi}\int_{-1/6}^{1/6} dx = \frac{1}{6\pi}, \]
and
\[ \int_{-\infty}^{\infty} |G(k)|^2\,dk = \frac{1}{\pi^2}\int_{-\infty}^{\infty} \frac{\sin^2(k/6)}{k^2}\,dk = \frac{1}{6\pi^2}\int_{-\infty}^{\infty} \frac{\sin^2\beta}{\beta^2}\,d\beta = \frac{1}{6\pi^2}\cdot\pi \]
(substituting β = k/6). The last step follows again from example 70. The two results are equal,
as expected. J

IExample 80 Find the Fourier transform of f (x) = 1/ cosh x and verify Parseval’s equation.
Solution: By definition,
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{-ikx}}{\cosh x}\,dx. \]
This is very similar to example 73, so we'll go through it quickly. We integrate over the rectangular contour Γ with vertices at ±R and ±R + iπ (shown in the picture on the right), and by the same procedure of example 73 we find immediately that
\[ \int_{-\infty}^{\infty} \frac{e^{-ikx}}{\cosh x}\,dx + \int_{\infty}^{-\infty} \frac{e^{-ikx+k\pi}}{\cosh x\,\cos\pi}\,dx = i2\pi\cdot\underset{z=i\pi/2}{\mathrm{res}}\left[\frac{e^{-ikz}}{\cosh z}\right] = i2\pi\cdot\frac{e^{k\pi/2}}{i}. \]
It follows immediately that
\[ \int_{-\infty}^{\infty} \frac{e^{-ikx}}{\cosh x}\,dx\cdot\left(1+e^{k\pi}\right) = 2\pi e^{k\pi/2}, \]
and hence that
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{-ikx}}{\cosh x}\,dx = \frac{2\pi e^{k\pi/2}}{2\pi\left(1+e^{k\pi}\right)} = \frac{1}{2\cosh(k\pi/2)}. \]

To verify Parseval's equation, we observe that
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} |f(x)|^2\,dx = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{dx}{\cosh^2 x} = \frac{1}{2\pi}\Big[\tanh x\Big]_{-\infty}^{\infty} = \frac{1}{\pi}, \]
and
\[ \int_{-\infty}^{\infty} |F(k)|^2\,dk = \int_{-\infty}^{\infty} \frac{dk}{4\cosh^2(k\pi/2)} = \frac{1}{4}\cdot\frac{2}{\pi}\int_{-\infty}^{\infty} \frac{d\beta}{\cosh^2\beta} = \frac{1}{2\pi}\Big[\tanh\beta\Big]_{-\infty}^{\infty} = \frac{1}{\pi} \]

(substituting β = kπ/2). The two results are equal, as expected. J
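A numerical spot check of this transform (again a sketch added to the notes, Python with SciPy assumed); since the integrand decays like e^{−|x|}, a finite range of ±40 is plenty:

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(k):
    # (1/2π) ∫ e^{-ikx}/cosh x dx; the sine part vanishes because 1/cosh x is even
    val, _ = quad(lambda x: np.cos(k * x) / np.cosh(x), -40, 40)
    return val / (2 * np.pi)

for k in (0.0, 0.5, 1.3, -2.0):
    assert abs(F_numeric(k) - 1 / (2 * np.cosh(k * np.pi / 2))) < 1e-8
print("ok")
```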

IExample 81 Find the Fourier transform of f(x) = e^{−x²}.
Solution: By definition,
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-x^2-ikx}\,dx. \]

Completing the square in the exponential, the integral on the right-hand side becomes
\[ \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-x^2-ikx+k^2/4-k^2/4}\,dx = \frac{e^{-k^2/4}}{2\pi}\int_{-\infty}^{\infty} e^{-(x+ik/2)^2}\,dx. \]

Recall that this improper integral is first reduced to an ordinary integral between −L and L,
and then L is allowed to go to infinity. It seems obvious that we should substitute x + ik/2 with
a new variable w, but there is a little problem.
This is a complex integral, and by substituting w = x + ik/2, the endpoints of the path become w = −L + ik/2 and w = L + ik/2 respectively, as shown in the picture. Hence, we substitute w = x + ik/2 as planned, but instead of going from the starting point to the finish along a straight horizontal line, we modify the path.
We integrate along the path γ leading from −L + ik/2 to −L, hence to +L along the real
axis, and finally to +L+ik/2. It is easy to show that the contribution from the vertical segments
tends to zero as L goes to infinity; convince yourself of this. Therefore
\[ \frac{e^{-k^2/4}}{2\pi}\int_{-L+ik/2}^{L+ik/2} e^{-w^2}\,dw \;\longrightarrow\; \frac{e^{-k^2/4}}{2\pi}\int_{-L}^{L} e^{-w^2}\,dw \quad\text{as } L\to\infty. \]

So, taking the limit as L → ∞, we find that


\[ \mathcal{F}\big[e^{-x^2}\big] = \frac{e^{-k^2/4}}{2\pi}\int_{-\infty}^{\infty} e^{-w^2}\,dw = \frac{e^{-k^2/4}}{2\pi}\int_0^{\infty} \frac{e^{-t}\,dt}{\sqrt{t}} = \frac{e^{-k^2/4}}{2\pi}\cdot(-\tfrac12)! = \frac{e^{-k^2/4}}{2\pi}\cdot\sqrt{\pi} = \frac{e^{-k^2/4}}{2\sqrt{\pi}} \]
(the middle step follows from the substitution w = √t).
Corollary: For a positive constant c, we have
\[ \mathcal{F}\big[e^{-cx^2}\big] = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-cx^2-ikx}\,dx = \frac{e^{-k^2/4c}}{2\sqrt{\pi c}} \]
(substitute √c·x = s and carry on.) J
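The corollary is easy to verify numerically; a minimal sketch (added to the notes, Python with SciPy assumed):

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(k, c):
    # (1/2π) ∫ e^{-cx²} e^{-ikx} dx; only the cosine part survives (even integrand)
    val, _ = quad(lambda x: np.exp(-c * x * x) * np.cos(k * x), -30, 30)
    return val / (2 * np.pi)

for c in (1.0, 0.3, 4.0):
    for k in (0.0, 1.0, 2.5):
        expected = np.exp(-k * k / (4 * c)) / (2 * np.sqrt(np.pi * c))
        assert abs(F_numeric(k, c) - expected) < 1e-8
print("ok")
```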

COMPLEX CONJUGATION AND JORDAN’S LEMMA

Very often, one must use the calculus of residues to calculate a Fourier transform; we already used residues in examples 77–80.
The standard procedure in these cases is to consider the integral ∫_{−L}^{L} e^{−ikx} f(x) dx and form a closed contour with a semi-circle of radius L. By the residue theorem, then, one gets that
\[ \oint e^{-ikz} f(z)\,dz = i2\pi\cdot\sum_n \underset{z=z_n}{\mathrm{res}}\left[e^{-ikz} f(z)\right], \]
n

where the sum on the right extends to all singular points of f (z) enclosed between the real axis
and the semicircle. At this point Jordan’s lemma (39) is usually invoked, which in practice allows
one to disregard the integral over the semicircle. But there is a little problem here. Jordan’s
lemma, as we saw it in chapter 2, says essentially that if ω is a positive constant and f (z) → 0
uniformly as |z| → ∞ in the upper half-plane, then
\[ \int_{C_L} f(z)\, e^{i\omega z}\,dz \;\longrightarrow\; 0 \quad\text{as } L\to\infty, \]

where CL is a semicircle of radius L in the upper half-plane. The key word is “positive”.
In this section we deal with integrands of the form f(x)e^{−ikx}, where k itself may be positive, negative or zero. For negative k, then −k is positive and Jordan's lemma may be applied in the form (39), along the lines of chapter 2. However, if k is positive this procedure does not work because −k is negative.
There are two ways out of this situation, both easy. The first one is to close the contour with a semicircle in the negative half-plane, as shown in the picture. It's then possible to show that the integral over the semicircle in the lower half-plane does go to zero as L → ∞; however, the contour is covered clockwise and so the path integral ∮ e^{−ikz} f(z) dz will equal −i2π times the sum of the residues in the lower half-plane.
The second way out is possibly even easier: use the fact that if f(x) is real, then (by definition, really)
\[ F(-k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx} f(x)\,dx = \overline{F(k)}, \qquad [\,f(x)\in\mathbb{R}\,] \]
and so, for real f,
\[ F(k) = \overline{F(-k)}, \qquad (48) \]
where as usual the overline means complex conjugation. In this way F(k) for positive k may be found without recalculation.

IExample 82 Find the Fourier transform of f(x) = c/(x² + c²), where c is real and positive.
Solution: By definition,
\[ F(k) = \frac{c}{2\pi}\int_{-\infty}^{\infty} \frac{e^{-ikx}\,dx}{x^2+c^2}. \]

Consider first the case where k is negative. The integrand has two singular points, at z = ±ic.
Certainly −k is positive, so we may apply Jordan’s lemma in the form (39). Closing the contour
with a semicircle in the upper half-plane, which surrounds the singular point at ic, we find immediately that
\[ F(k) = \frac{c}{2\pi}\cdot i2\pi\,\underset{z=ic}{\mathrm{res}}\left[\frac{e^{-ikz}}{z^2+c^2}\right] = ic\cdot\frac{e^{kc}}{i2c} = \tfrac12 e^{kc}. \qquad [\text{for negative } k] \]

So much for negative k. What about k ≥ 0? We note that
\[ F(-k) = \tfrac12 e^{-kc}, \]
but the right-hand side of this equation is real, therefore
\[ \overline{F(-k)} = \tfrac12 e^{-kc}. \]
By (48), F(k) = \overline{F(-k)}, hence
\[ F(k) = \begin{cases} \tfrac12 e^{kc} & \text{if } k \text{ is negative,} \\ \tfrac12 e^{-kc} & \text{if } k \text{ is positive.} \end{cases} \]
This result may be written more concisely:
\[ F(k) = \tfrac12 e^{-|k|c}. \]

As an exercise, we quickly check that the inverse Fourier transformation gives back the original function:
\[ \mathcal{F}^{-1}[F(k)] = \int_{-\infty}^{\infty} \tfrac12 e^{-|k|c+ikx}\,dk = \int_0^{\infty} \cos kx\cdot e^{-kc}\,dk = \mathrm{Re}\left[\int_0^{\infty} e^{ikx-kc}\,dk\right] = \]
\[ = \mathrm{Re}\left[\frac{e^{ikx-kc}}{ix-c}\right]_0^{\infty} = \mathrm{Re}\left[\frac{1}{c-ix}\right] = \frac{c}{x^2+c^2} = f(x), \]

as expected. J
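This transform too can be spot-checked numerically (a sketch added to the notes, Python with SciPy assumed):

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(k, c):
    # (c/2π) ∫ e^{-ikx}/(x²+c²) dx; the even integrand leaves only the cosine part
    val, _ = quad(lambda x: np.cos(k * x) / (x * x + c * c), -np.inf, np.inf)
    return c * val / (2 * np.pi)

for c in (1.0, 3.0):
    for k in (-2.0, -0.5, 0.0, 0.5, 2.0):
        assert abs(F_numeric(k, c) - 0.5 * np.exp(-abs(k) * c)) < 1e-4
print("ok")
```

Note the symmetry F(k) = F(−k), which the example derived from (48) and the reality of f.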

IExample 83 Find the Fourier transform of f(x) = 1/(x² − 4x + 5).
Solution: This example is similar to the preceding one, so we'll go through it fairly quickly. By definition,
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{-ikx}\,dx}{x^2-4x+5}. \]
The integrand has two simple poles, at z = 2±i; only the point 2+i lies in the upper half-plane.
If k is negative, then −k is positive; closing the contour with a semicircle in the upper
half-plane and using Jordan’s lemma (39), it follows that:
\[ F(k) = \frac{1}{2\pi}\cdot i2\pi\,\underset{z=2+i}{\mathrm{res}}\left[\frac{e^{-ikz}}{z^2-4z+5}\right] = i\cdot\frac{e^{-ik(2+i)}}{2(2+i)-4} = \tfrac12 e^{k-i2k}. \qquad [k<0] \]

For positive k, we use (48): we get that
\[ F(k) = \overline{F(-k)} = \overline{\tfrac12 e^{-k+i2k}} = \tfrac12 e^{-k-i2k}. \qquad [k>0] \]
These two results may be combined into one:
\[ F(k) = \tfrac12 e^{-|k|-i2k}. \]

As an exercise, check that the inverse-Fourier transform returns the original function. J

IExample 84 Find the Fourier transform of f(x) = x/(x⁴ + 4).
Solution: By definition,
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{x\,e^{-ikx}\,dx}{x^4+4}. \]
The integrand has four singular points, at z = 1 + i, z = −1 + i, z = −1 − i and z = 1 − i.
They’re all simple poles; only the first two lie in the upper half-plane.
For negative k we note that −k is positive; hence closing the contour with a semicircle in
the upper half-plane and using Jordan’s lemma (39), we get:
\[ F(k) = \frac{1}{2\pi}\cdot i2\pi\,\underset{z=1+i}{\mathrm{res}}\left[\frac{z\,e^{-ikz}}{z^4+4}\right] + \frac{1}{2\pi}\cdot i2\pi\,\underset{z=-1+i}{\mathrm{res}}\left[\frac{z\,e^{-ikz}}{z^4+4}\right]. \qquad [k<0] \]

The residues are trivial, by (35): it follows that
\[ F(k) = i\,\frac{(1+i)\,e^{-ik(1+i)}}{4(1+i)^3} + i\,\frac{(-1+i)\,e^{-ik(-1+i)}}{4(-1+i)^3} = \frac{i}{4}\left[\frac{e^{-ik+k}}{(1+i)^2} + \frac{e^{ik+k}}{(-1+i)^2}\right] = \]
\[ = \frac{i}{4}\cdot e^k\cdot\frac{e^{-ik}-e^{ik}}{i2} = -\tfrac14\, i\, e^{k}\sin k. \qquad [k<0] \]

For positive k, we note that
\[ F(-k) = -\tfrac14\, i\, e^{-k}\sin(-k) = +\tfrac14\, i\, e^{-k}\sin k. \]
Therefore, using (48), we get that
\[ F(k) = \overline{+\tfrac14\, i\, e^{-k}\sin k} = -\tfrac14\, i\, e^{-k}\sin k. \qquad [k>0] \]
The two results thus found may be combined into one:
\[ F(k) = -\tfrac14\, i\, e^{-|k|}\sin k. \qquad [\text{all } k] \]

As an exercise, check that F −1 [F (k)] = x/(x4 + 4). If you can’t do it, read on: next example
is similar. J

IExample 85 Find the Fourier transform of f (x) = e−|x| cos x, and verify it using (45).
Solution: Since f(x) is even in x, we may exploit its symmetry. We write
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-|x|}\cos x\, e^{-ikx}\,dx = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-|x|}\cos x\,\cos kx\,dx = \]
\[ = \frac{1}{\pi}\int_0^{\infty} e^{-x}\cos x\,\cos kx\,dx = \frac{1}{2\pi}\int_0^{\infty} e^{-x}\left[\cos(k+1)x + \cos(k-1)x\right]dx. \]
2π 0

The best way to calculate this integral is perhaps by re-introducing Euler's formula:
\[ \frac{1}{2\pi}\int_0^{\infty} e^{-x}\left[\cos(k+1)x + \cos(k-1)x\right]dx = \frac{1}{2\pi}\,\mathrm{Re}\left[\int_0^{\infty}\left(e^{-x+i(k+1)x} + e^{-x+i(k-1)x}\right)dx\right] = \]
\[ = \frac{1}{2\pi}\,\mathrm{Re}\left[\frac{1}{1-i(k+1)} + \frac{1}{1-i(k-1)}\right] = \frac{1}{2\pi}\,\mathrm{Re}\left[\frac{1+i(k+1)}{1+(k+1)^2} + \frac{1+i(k-1)}{1+(k-1)^2}\right] = \]
\[ = \frac{1}{2\pi}\left[\frac{1}{1+(k+1)^2} + \frac{1}{1+(k-1)^2}\right]. \]

Simple manipulations now yield
\[ F(k) = \frac{1}{2\pi}\cdot\frac{1+(k-1)^2+1+(k+1)^2}{\left[1+(k+1)^2\right]\left[1+(k-1)^2\right]} = \frac{2+k^2}{\pi(4+k^4)}. \]

So far we have not used Jordan's lemma, hence this result holds for all k. Recall that (45) says that
\[ f(x) = \int_{-\infty}^{\infty} F(k)\,e^{ikx}\,dk; \]
in order to verify it we must show that
\[ \int_{-\infty}^{\infty} \frac{2+k^2}{\pi(4+k^4)}\, e^{ikx}\,dk = e^{-|x|}\cos x. \]

If x is positive, we close the contour in the upper half-plane (note that k, and not x, is the
dummy variable of integration!) and use Jordan’s lemma. The singular points of the integrand
are given by the equation
\[ z^4 = -4, \]
which yields z = 1 + i, z = −1 + i, z = −1 − i and z = 1 − i. Only the first two points lie in
the upper half-plane. Therefore, for positive x:
\[ \int_{-\infty}^{\infty} \frac{(2+k^2)\,e^{ikx}}{\pi(4+k^4)}\,dk = i2\cdot\underset{z=1+i}{\mathrm{res}}\left[\frac{(2+z^2)\,e^{ixz}}{4+z^4}\right] + i2\cdot\underset{z=-1+i}{\mathrm{res}}\left[\frac{(2+z^2)\,e^{ixz}}{4+z^4}\right]. \]

On the right-hand side above we substitute
\[ \underset{z=1+i}{\mathrm{res}}\left[\frac{(2+z^2)\,e^{ixz}}{4+z^4}\right] = \frac{\left(2+(1+i)^2\right)e^{ix(1+i)}}{4(1+i)^3} = \frac{(2+i2)\,e^{ix-x}}{-8+i8} = -\frac{i\,e^{ix-x}}{4}, \]
and
\[ \underset{z=-1+i}{\mathrm{res}}\left[\frac{(2+z^2)\,e^{ixz}}{4+z^4}\right] = \frac{\left(2+(-1+i)^2\right)e^{ix(-1+i)}}{4(-1+i)^3} = \frac{(2-i2)\,e^{-ix-x}}{8+i8} = -\frac{i\,e^{-ix-x}}{4}. \]
It follows that
\[ \int_{-\infty}^{\infty} \frac{(2+k^2)\,e^{ikx}}{\pi(4+k^4)}\,dk = i2\cdot\left[-\frac{i\,e^{ix-x}}{4} - \frac{i\,e^{-ix-x}}{4}\right] = e^{-x}\cos x. \qquad [x>0] \]

In the case where x is negative, we note that if F(k) is real in (45), then
\[ f(x) = \overline{f(-x)}, \]
which is analogous to (48). We find immediately that
\[ f(-x) = e^{x}\cos(-x) = e^{x}\cos x, \]
and hence that
\[ f(x) = \overline{e^{x}\cos x} = e^{x}\cos x. \qquad [x<0] \]
To summarize,
\[ \int_{-\infty}^{\infty} \frac{2+k^2}{\pi(4+k^4)}\, e^{ikx}\,dk = \begin{cases} e^{-x}\cos x & \text{if } x \text{ is positive,} \\ e^{x}\cos x & \text{if } x \text{ is negative.} \end{cases} \]
Combining the two results above, we get that
\[ \int_{-\infty}^{\infty} \frac{(2+k^2)\,e^{ikx}}{\pi(4+k^4)}\,dk = e^{-|x|}\cos x \]

for all real x, as expected. J

3. Fourier-Sine and Fourier-Cosine Integrals

Integrals of even and odd functions may be handled in a natural way. In these cases it is usually
better to abandon complex exponentials and go back to sines and cosines. For example, if f (x)
is even, then by symmetry it follows that
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\,e^{-ikx}\,dx = \frac{1}{\pi}\int_0^{\infty} f(x)\cos kx\,dx. \]

Now, F (k) is also an even function in k, because it depends on k only through cos kx, which is
even. This shows, first of all, that if f (x) is even in x, then F (k) is even in k, a simple property
that’s worth keeping in mind. In a similar way, one may show that if f (x) is odd in x, then
F (k) is odd in k.

We exploit symmetry further: first we note that [if f is even]
\[ f(x) = \int_{-\infty}^{\infty} F(k)\,e^{ikx}\,dk = \int_{-\infty}^{\infty} F(k)\left(\cos kx + i\sin kx\right)dk = 2\int_0^{\infty} F(k)\cos kx\,dk + i\,0. \]

IDefinition: We define a new transform, called Fourier-cosine transform, as
\[ F_C(k) \overset{\text{def}}{=} \frac{2}{\pi}\int_0^{\infty} f(x)\cos kx\,dx. \] J
Fourier's integral theorem then yields immediately
\[ f(x) = \int_0^{\infty} F_C(k)\cos kx\,dk. \qquad [x>0] \]

Needless to say, we have followed the same procedure as in chapter 1, where we introduced Fourier-
cosine series for functions defined only over (0, π). Here too, if f (x) is defined only for positive
x, then its Fourier-cosine transform reproduces the even extension of f over the negative x axis.

IDefinition: Reasoning in exactly the same way, we define the Fourier-sine transform of a function f, defined for positive x, as
\[ F_S(k) \overset{\text{def}}{=} \frac{2}{\pi}\int_0^{\infty} f(x)\sin kx\,dx. \] J
It follows immediately that
\[ f(x) = \int_0^{\infty} F_S(k)\sin kx\,dk. \qquad [x>0] \]

This time, for negative values of x, the integral converges to the odd extension of f (x).
The Fourier-cosine and Fourier-sine transforms are also denoted \(\mathcal{F}_C\) and \(\mathcal{F}_S\), respectively:
\[ \mathcal{F}_C\big[f(x)\big] = F_C(k), \qquad \mathcal{F}_S\big[f(x)\big] = F_S(k). \]

IExample 86 Find the Fourier, Fourier-cosine and Fourier-sine transforms of f (x), if f is


defined as f(x) = 1 − x² for |x| ≤ 1; f(x) = 0 everywhere else.
Solution: Observe that f(x) is even in x: hence
\[ F(k) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-ikx} f(x)\,dx = \frac{1}{\pi}\int_0^1 (1-x^2)\cos kx\,dx = \frac{2(\sin k - k\cos k)}{\pi k^3}. \]
Note that, as expected, F(k) is even in k. It follows that
\[ f(x) = \int_{-\infty}^{\infty} F(k)\,e^{ikx}\,dk = \frac{2}{\pi}\int_{-\infty}^{\infty} \frac{\sin k - k\cos k}{k^3}\, e^{ikx}\,dk = \begin{cases} 1-x^2 & \text{if } |x|\le 1, \\ 0 & \text{if } |x|\ge 1. \end{cases} \]

Note also that f (x) is continuous, hence at x = ±1 the limit from the left and the limit from
the right are equal, and they are both 0.
In this example, the Fourier-cosine transform differs from the Fourier transform only by a factor of 2:
\[ F_C(k) = \frac{2}{\pi}\int_0^{\infty} f(x)\cos kx\,dx = \frac{2}{\pi}\int_0^1 (1-x^2)\cos kx\,dx = \frac{4(\sin k - k\cos k)}{\pi k^3}. \]
It follows that
\[ \frac{4}{\pi}\int_0^{\infty} \frac{\sin k - k\cos k}{k^3}\,\cos kx\,dk = \begin{cases} 1-x^2 & \text{if } |x|\le 1, \\ 0 & \text{if } |x|\ge 1, \end{cases} \]
which is equivalent to the preceding result.
The Fourier-sine transform is
\[ F_S(k) = \frac{2}{\pi}\int_0^{\infty} f(x)\sin kx\,dx = \frac{2}{\pi}\int_0^1 (1-x^2)\sin kx\,dx = \frac{2}{\pi}\left(\frac{1}{k} - \frac{2\sin k}{k^2} + \frac{2(1-\cos k)}{k^3}\right). \]

As expected, F_S(k) is odd in k. It follows that
\[ \frac{2}{\pi}\int_0^{\infty}\left(\frac{1}{k} - \frac{2\sin k}{k^2} + \frac{2(1-\cos k)}{k^3}\right)\sin kx\,dk = \begin{cases} 0 & \text{if } x \le -1, \\ x^2-1 & \text{if } -1 \le x < 0, \\ 0 & \text{if } x = 0, \\ 1-x^2 & \text{if } 0 < x \le 1, \\ 0 & \text{if } x \ge 1. \end{cases} \]

Note that there is a jump-discontinuity at x = 0, and indeed the integral converges to the
average of the limit from the left (which is −1) and the limit from the right (which is +1). J
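A quick numerical check of the transforms just found (a sketch added to the notes, Python with SciPy assumed):

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(k):
    # even f: F(k) = (1/π) ∫_0^1 (1 - x²) cos kx dx
    val, _ = quad(lambda x: (1 - x * x) * np.cos(k * x), 0, 1)
    return val / np.pi

def FS_numeric(k):
    # FS(k) = (2/π) ∫_0^1 (1 - x²) sin kx dx
    val, _ = quad(lambda x: (1 - x * x) * np.sin(k * x), 0, 1)
    return 2 * val / np.pi

for k in (0.5, 2.0, 7.3):
    assert abs(F_numeric(k) - 2 * (np.sin(k) - k * np.cos(k)) / (np.pi * k**3)) < 1e-9
    FS_closed = 2 / np.pi * (1 / k - 2 * np.sin(k) / k**2 + 2 * (1 - np.cos(k)) / k**3)
    assert abs(FS_numeric(k) - FS_closed) < 1e-9
print("ok")
```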

IExample 87 Find the Fourier-sine transform of f (x) = e−cx , where c is a positive constant.
Solution: By definition,
\[ F_S(k) = \frac{2}{\pi}\int_0^{\infty} e^{-cx}\sin kx\,dx = \frac{2}{\pi}\,\mathrm{Im}\left[\int_0^{\infty} e^{-cx+ikx}\,dx\right]. \]
Integrating, we find that
\[ F_S(k) = \frac{2}{\pi}\,\mathrm{Im}\left[\frac{e^{-cx+ikx}}{-c+ik}\right]_0^{\infty} = \frac{2}{\pi}\,\mathrm{Im}\left[\frac{0-1}{-c+ik}\right] = \frac{2k}{\pi(c^2+k^2)}. \]
It follows that
\[ \frac{2}{\pi}\int_0^{\infty} \frac{k}{c^2+k^2}\,\sin kx\,dk = \begin{cases} -e^{-c|x|} & \text{if } x \text{ is negative,} \\ 0 & \text{if } x \text{ is zero,} \\ e^{-c|x|} & \text{if } x \text{ is positive;} \end{cases} \]
note the jump-discontinuity at x = 0. J
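This one, too, is easily confirmed numerically (a sketch added to the notes, Python with SciPy assumed):

```python
import numpy as np
from scipy.integrate import quad

def FS_numeric(k, c):
    # FS(k) = (2/π) ∫_0^∞ e^{-cx} sin kx dx
    val, _ = quad(lambda x: np.exp(-c * x) * np.sin(k * x), 0, np.inf)
    return 2 * val / np.pi

for c in (1.0, 2.5):
    for k in (0.3, 1.0, 4.0):
        assert abs(FS_numeric(k, c) - 2 * k / (np.pi * (c * c + k * k))) < 1e-8
print("ok")
```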

IExample 88 Find the Fourier-cosine transform of sin x/x.
Solution: This is the so-called “sampling function”, which arises frequently in signal processing; it's also known as “sine cardinal” and written sinc x. Simple manipulations yield immediately
\[ F_C(k) = \frac{2}{\pi}\int_0^{\infty} \frac{\sin x\,\cos kx}{x}\,dx = \frac{2}{\pi}\int_0^{\infty} \frac{\sin(k+1)x - \sin(k-1)x}{2x}\,dx. \]

The right-hand side may be simplified immediately using example 68:
\[ \int_0^{\infty} \frac{\sin\omega x}{x}\,dx = \begin{cases} \pi/2 & \text{if } \omega \text{ is positive,} \\ 0 & \text{if } \omega \text{ is zero,} \\ -\pi/2 & \text{if } \omega \text{ is negative.} \end{cases} \]

Now we have to examine several possibilities.


• If k > 1, then both k + 1 and k − 1 are positive; therefore
\[ \frac{2}{\pi}\int_0^{\infty} \frac{\sin(k+1)x - \sin(k-1)x}{2x}\,dx = \frac{2}{\pi}\cdot\frac{\pi/2 - \pi/2}{2} = 0. \]
• If k = 1, then k − 1 = 0; therefore
\[ \frac{2}{\pi}\int_0^{\infty} \frac{\sin(k+1)x - \sin(k-1)x}{2x}\,dx = \frac{2}{\pi}\cdot\frac{\pi/2 - 0}{2} = \frac{1}{2}. \]
• If k is between 0 (inclusive) and 1, then k + 1 is positive but k − 1 is negative; therefore
\[ \frac{2}{\pi}\int_0^{\infty} \frac{\sin(k+1)x - \sin(k-1)x}{2x}\,dx = \frac{2}{\pi}\cdot\frac{\pi/2-(-\pi/2)}{2} = 1. \]

Combining all these results, we get that:
\[ \mathcal{F}_C\left[\frac{\sin x}{x}\right] = \begin{cases} 1 & \text{if } k \text{ is between 0 and 1,} \\ \tfrac12 & \text{if } k = 1, \\ 0 & \text{if } k > 1. \end{cases} \]
The final result might also be written
\[ \mathcal{F}_C\big[\mathrm{sinc}\,x\big] = U(k) - U(k-1), \]

where U is Heaviside’s unit step function. J

IExample 89 Find the Fourier-sine transform of f (x) = 1/ sinh x.


Solution: This is similar to example 74 in chapter 2, and we’ll do it by the same method. First
of all, observe that by symmetry
\[ F_S(k) = \frac{2}{\pi}\int_0^{\infty} \frac{\sin kx\,dx}{\sinh x} = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{\sin kx\,dx}{\sinh x}, \qquad (49) \]
because the integrand is even in x.



We integrate sin kz/sinh z over the contour Γ pictured on the right: a rectangle with vertices at ±L and ±L + iπ. The integrand has first-order poles at z = inπ, where n = ±1, ±2, …
The origin is not a singular point, because sin kz and sinh z go to zero with the same speed as z → 0; convince yourself of this. The pole at z = iπ lies on the contour, and no pole is enclosed. Therefore
\[ \oint_{\Gamma} \frac{\sin kz\,dz}{\sinh z} = i\pi\,\underset{z=i\pi}{\mathrm{res}}\left[\frac{\sin kz}{\sinh z}\right]. \]

This contour integral consists of four pieces, but it’s easy to show that the contribution from
the vertical segments tends to zero as L goes to infinity; go back to example 74 for the details.
Taking the limit for L → ∞ yields
\[ \int_{-\infty}^{\infty} \frac{\sin kx\,dx}{\sinh x} + \int_{\infty}^{-\infty} \frac{\sin k(x+i\pi)\,dx}{\sinh(x+i\pi)} = i\pi\,\underset{z=i\pi}{\mathrm{res}}\left[\frac{\sin kz}{\sinh z}\right], \]
where the integral along the top side is understood in the principal-value sense, since the pole at z = iπ lies on it. Observe that
\[ \sin k(x+i\pi) = \sin kx\,\cos ik\pi + \cos kx\,\sin ik\pi = \sin kx\,\cosh k\pi + i\cos kx\,\sinh k\pi, \]
\[ \sinh(x+i\pi) = \sinh x\,\cosh i\pi + \cosh x\,\sinh i\pi = \sinh x\,\cos\pi + i\cosh x\,\sin\pi = -\sinh x + i\,0. \]
It follows:
\[ \int_{-\infty}^{\infty} \frac{\sin kx\,dx}{\sinh x} + \cosh k\pi \int_{\infty}^{-\infty} \frac{\sin kx\,dx}{-\sinh x} + i\sinh k\pi \int_{\infty}^{-\infty} \frac{\cos kx\,dx}{-\sinh x} = i\pi\,\underset{z=i\pi}{\mathrm{res}}\left[\frac{\sin kz}{\sinh z}\right]. \]

The second integral on the left-hand side is cosh kπ times the first one; the third integral is zero
by symmetry. Simplifying we get:
\[ (1+\cosh k\pi)\int_{-\infty}^{\infty} \frac{\sin kx\,dx}{\sinh x} = i\pi\,\underset{z=i\pi}{\mathrm{res}}\left[\frac{\sin kz}{\sinh z}\right]. \]
Now we evaluate the right-hand side:
\[ i\pi\,\underset{z=i\pi}{\mathrm{res}}\left[\frac{\sin kz}{\sinh z}\right] = i\pi\cdot\frac{\sin ik\pi}{\cosh i\pi} = i\pi\cdot\frac{i\sinh k\pi}{\cos\pi} = \pi\sinh k\pi. \]
Substituting back and simplifying, we find that
\[ \int_{-\infty}^{\infty} \frac{\sin kx\,dx}{\sinh x} = \frac{\pi\sinh k\pi}{1+\cosh k\pi} = \frac{\pi\cdot 2\sinh(\tfrac12 k\pi)\cosh(\tfrac12 k\pi)}{2\cosh^2(\tfrac12 k\pi)} = \frac{\pi\sinh(\tfrac12 k\pi)}{\cosh(\tfrac12 k\pi)} = \pi\tanh(\tfrac12 k\pi). \]
Finally, going back to (49):
\[ F_S(k) = \frac{\pi\tanh(\tfrac12 k\pi)}{\pi} = \tanh(\tfrac12 k\pi). \]
Note that the function in this example, f(x) = 1/sinh x, has a Fourier-sine transform but not a Fourier-cosine (because ∫₀^∞ cos kx dx/sinh x diverges). J
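The hyperbolic result is a pleasant one to test numerically (a sketch added to the notes, Python with SciPy assumed); the integrand is finite at the origin, since sin kx/sinh x → k:

```python
import numpy as np
from scipy.integrate import quad

def FS_numeric(k):
    # sin(kx)/sinh(x) → k as x → 0, so the origin is a removable point
    f = lambda x: k if x == 0.0 else np.sin(k * x) / np.sinh(x)
    val, _ = quad(f, 0, 50)
    return 2 * val / np.pi

for k in (0.4, 1.0, 3.0):
    assert abs(FS_numeric(k) - np.tanh(k * np.pi / 2)) < 1e-8
print("ok")
```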

USING THE LAPLACE TRANSFORM TO FIND A FOURIER TRANSFORM


In some cases where the calculus of residues fails, one may resort to the Laplace transform to
find a Fourier transform.

IExample 90 Find the Fourier-sine transform of e^{−x}/x.
Solution: We need to calculate ∫₀^∞ e^{−x} sin kx/x dx. We change the dummy variable x into t, and write e^{−st} instead of e^{−t}, with the understanding that s = 1. We get:
\[ \int_0^{\infty} \frac{e^{-x}\sin kx}{x}\,dx = \int_0^{\infty} \frac{e^{-st}\sin kt}{t}\,dt. \qquad [\text{if } s=1] \]
This step is clearly superfluous, but it should make you recognize where we're heading: the right-hand side is simply the Laplace transform of sin kt/t, evaluated for s = 1. Last year you saw that
\[ \mathcal{L}\left[\frac{\sin kt}{t}\right] = \mathrm{arccot}\,\frac{s}{k}; \]
hence we now find that
\[ \int_0^{\infty} \frac{e^{-x}\sin kx}{x}\,dx = \mathrm{arccot}\,\frac{1}{k} = \arctan k. \]

Finally, scaling the result by the factor 2/π, we get:
\[ \mathcal{F}_S\left[\frac{e^{-x}}{x}\right] = \frac{2}{\pi}\arctan k. \]

Again, we see a function that has a Fourier-sine transform but not a Fourier-cosine. J
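The Laplace-transform shortcut is easy to confirm by direct numerical integration (a sketch added to the notes, Python with SciPy assumed):

```python
import numpy as np
from scipy.integrate import quad

def FS_numeric(k):
    # e^{-x} sin(kx)/x → k as x → 0, so the integrand is well behaved at the origin
    f = lambda x: k if x == 0.0 else np.exp(-x) * np.sin(k * x) / x
    val, _ = quad(f, 0, np.inf)
    return 2 * val / np.pi

for k in (0.25, 1.0, 5.0):
    assert abs(FS_numeric(k) - 2 * np.arctan(k) / np.pi) < 1e-8
print("ok")
```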

IExample 91 Find the Fourier-sine transform of sin x/x.
Solution: Simple manipulations yield immediately
\[ F_S(k) = \frac{2}{\pi}\int_0^{\infty} \frac{\sin x\,\sin kx}{x}\,dx = \frac{2}{\pi}\int_0^{\infty} \frac{\cos(k-1)x - \cos(k+1)x}{2x}\,dx. \qquad [k\ne 1] \]

Note that this integral diverges if k = 1; convince yourself of this.


We rewrite the right-hand side as
\[ \frac{2}{\pi}\int_0^{\infty} e^{-st}\,\frac{\cos(k-1)t - \cos(k+1)t}{2t}\,dt, \qquad [s=0] \]
which may be viewed as a Laplace transform:
\[ \cdots = \frac{2}{\pi}\,\mathcal{L}\left[\frac{\cos(k-1)t - \cos(k+1)t}{2t}\right]_{s=0}. \]

As you saw in second year, in the context of Laplace transforms division by t corresponds to integration by s:
\[ \mathcal{L}\left[\frac{f(t)}{t}\right] = \int_s^{\infty} F(\sigma)\,d\sigma. \]

Recalling that
\[ \mathcal{L}\big[\cos(k-1)t\big] = \frac{s}{s^2+(k-1)^2} \qquad\text{and}\qquad \mathcal{L}\big[\cos(k+1)t\big] = \frac{s}{s^2+(k+1)^2}, \]
and that at the end of our calculations we must substitute s = 0, we find
\[ F_S(k) = \frac{1}{\pi}\int_0^{\infty}\left(\frac{\sigma}{\sigma^2+(k-1)^2} - \frac{\sigma}{\sigma^2+(k+1)^2}\right)d\sigma = \frac{1}{2\pi}\left[\ln\frac{\sigma^2+(k-1)^2}{\sigma^2+(k+1)^2}\right]_0^{\infty} = \]
\[ = \frac{1}{2\pi}\ln\frac{(k+1)^2}{(k-1)^2} = \frac{1}{\pi}\ln\left|\frac{k+1}{k-1}\right|. \]

Note again that F_S(k) is not defined for k = 1. This example complements example 88, but there is a subtle point. The function f(x) that corresponds to the F_S(k) found here must be odd in x; hence, if the definition of f(x) is extended to the negative real axis, it coincides with − sin x/x. In example 88, on the other hand, it coincides with + sin x/x throughout. J
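The integral in this example converges only conditionally, which makes a direct numerical check awkward. A convenient workaround (a sketch added to the notes, Python with SciPy assumed) is to keep the regularizing factor e^{−sx} with s > 0, for which the same Laplace-transform calculation gives the exact value (1/2π) ln[(s² + (k+1)²)/(s² + (k−1)²)]:

```python
import numpy as np
from scipy.integrate import quad

def FS_reg(k, s):
    # regularized transform: (2/π) ∫_0^∞ e^{-sx} sin x sin kx / x dx
    # sin x · sin kx / x → 0 as x → 0, so the origin is harmless
    f = lambda x: 0.0 if x == 0.0 else np.exp(-s * x) * np.sin(x) * np.sin(k * x) / x
    val, _ = quad(f, 0, np.inf)
    return 2 * val / np.pi

def closed(k, s):
    return np.log((s * s + (k + 1)**2) / (s * s + (k - 1)**2)) / (2 * np.pi)

for k in (0.3, 2.0, 5.0):
    for s in (1.0, 0.3):
        assert abs(FS_reg(k, s) - closed(k, s)) < 1e-7
print("ok")
```

As s → 0, closed(k, s) tends to (1/π) ln|(k+1)/(k−1)|, the value found in the example.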

4. The Uncertainty Principle

The Fourier transform has an interesting property, which we are simply going to note without proof. Simply put, the more a function of x is “localized” in a region or in the vicinity of a certain point, the more the transform is “spread out”, and vice-versa.
Physically, this property is linked to Heisenberg’s uncertainty principle and to communi-
cation theory, but in these notes we are not going to discuss these aspects. We shall simply
illustrate this concept through an example.
IExample 92 Describe the Fourier transform of the function f (x) that is equal to 1/2c if x is
between −c and c; f (x) = 0 everywhere else.
Solution: The graph of f(x) is a rectangle with unit area, centered about the origin. Loosely speaking, if c is large, the rectangle is broad and short, whereas if c is small, the rectangle is narrow and tall. Two possible instances are shown below on the left, corresponding to c = 1/2 and c = 6.

By definition,
\[ F(k) = \frac{1}{2\pi}\int_{-c}^{c} \frac{e^{-ikx}\,dx}{2c} = \frac{1}{2\pi}\,\frac{\sin kc}{kc}. \]
The picture on the right shows, on the same scale, the graphs of F (k) corresponding to the same
values of c. Note immediately that

\[ F(0) = \lim_{k\to 0} \frac{1}{2\pi}\,\frac{\sin kc}{kc} = \frac{1}{2\pi}, \]
regardless of the value of c. Note also that
\[ \lim_{k\to\infty} |F(k)| = 0, \]
again for any c: the k axis is always a horizontal asymptote. However, we see that in the two cases shown in the picture, F(k) approaches zero with different speed. Considering the obvious inequality
\[ |F(k)| \le \frac{1}{2\pi}\left|\frac{1}{kc}\right|, \]
we see immediately that the larger is c, the faster F (k) tends to zero.
It’s also easy to see that the graph of F (k) cuts the k axis an infinite number of times,
given by k = ±π/c, k = ±2π/c, k = ±3π/c, etc. The value of c determines how close to each
other the intersections are: if c is large the first crossing takes place near the origin, but if c is
small, it takes place far away from the origin.
Going back to the original function, we see that the function f (x) is different from zero only
in an interval of width 2c, and in this sense is “localized” there; the smaller is c, the narrower
is the interval. But the opposite is true for the transform F (k): the smaller is c, the more the
graph of F (k) is spread out.
Recall that, by definition,
\[ \int_{-\infty}^{\infty} f(x)\,dx = 1 \]

regardless of the value of c. Any function with this property may be interpreted as a probability
distribution, x being the outcome of some observation. Following this line of thought we find
that (in this example) there is a 100 % certainty that x will never be observed outside the
interval (−c, c). Therefore, the parameter c indicates how accurately one may estimate the
outcome of an observation: the smaller √ is c, the better we may predict x. As an exercise, verify
that the standard deviation of x is c/ 3.
If the value of c varies, the accuracy on x also changes. Decreasing c, the graph of f (x) (a
rectangle, in this example) gets narrower and taller: but then clearly the graph of F (k) spreads
out further and further. We see that if the accuracy is good for x, then for the same value of c
it is poor for k, and vice-versa.
In other words, it is impossible to predict both x and k simultaneously with arbitrary
precision. The improvement on one would come at the expense of the other. J

The same principle applies to all other examples we saw in the preceding section. Example 81,
for instance, is particularly interesting because f and F are very similar. They occur very often
in statistics, where they represent the famous bell-shaped normal distribution (also known as
Gaussian distribution).
The Fourier Integral 121

Note again that the parameter c affects their graphs in opposite ways. The function f(x) =
e^{−cx²} has a maximum at x = 0 and decreases steadily to zero as x goes to ±∞. Its value
is reduced to 5% of the maximum when x = ±√((ln 20)/c); an inflection point is found at
x = ±1/√(2c). Hence, if c is large, then f(x) tends to zero quickly and the inflection point (the
point where the graph “bends”) is close to the origin: this gives the graph the shape of a thin,
narrow bell. On the other hand, if c is small, f(x) goes to zero slowly and the inflection point
is far from the origin, giving the graph the shape of a wide bell.
But the exact contrary applies to F(k), because if c is small, then 1/(4c) is large and vice-versa.
So, we find again that the more f(x) is localized around the origin, the more F(k) is
“spread out”, and vice-versa.
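Under this book's convention F(k) = (1/2π) ∫ f(x) e^{−ikx} dx, a standard Gaussian integral gives F(k) = e^{−k²/(4c)}/(2√(πc)) for f(x) = e^{−cx²}; this is where the 1/(4c) above comes from. A quick numerical check (a sketch, assuming numpy; c = 0.8 is an arbitrary choice):

```python
import numpy as np

c = 0.8                                   # arbitrary sample value
x = np.linspace(-30.0, 30.0, 600_001)     # wide enough that the tails are ~0
dx = x[1] - x[0]
f = np.exp(-c*x**2)

for k in (0.0, 1.0, 3.0):
    Fk = dx*np.sum(f*np.exp(-1j*k*x))/(2*np.pi)       # F(k) = (1/2pi) int f e^{-ikx} dx
    closed = np.exp(-k**2/(4*c))/(2*np.sqrt(np.pi*c))
    assert abs(Fk - closed) < 1e-10

# the value drops to 5% of the maximum exactly at x = sqrt(ln 20 / c)
assert abs(np.exp(-c*(np.log(20)/c)) - 0.05) < 1e-12
```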

5. Dirac’s Delta Function

Engineers often have to deal with very small or very large numbers which may conveniently be
approximated as “zero” or “infinity”, respectively. For instance, when a tennis ball bounces
off the side of a truck, one may treat the mass of the truck as infinite, though this is clearly
incorrect; a fast moving pellet may be treated as having zero mass compared to the target.
There are, however, examples where “very large” and “very small” quantities combine in
such a way that their overall effect is neither small nor large.
Suppose, for instance, 1 liter of concentrated dye is placed at the bottom of an olympic-
size swimming pool. The dye will diffuse into the water; if there is no loss, after a sufficiently
long time all the color will spread uniformly over the pool. The diffusion equation, which is
mathematically identical to the heat equation, describes this process fairly well. The initial
condition would be that the concentration (call it u) of dye in water would initially be zero
everywhere except for the space occupied by the dye at time t = 0, where u would presumably
be constant. However, a swimming pool is 50 m long, and 1 liter occupies the volume of a
cube with a side of only 10 cm. It would be nice to treat the size of the container as zero, but
if we do it, the dye contained in it disappears from the equation.
With an abuse of language, many people would say that the initial concentration u is
“almost infinite” in a region of the pool having “virtually zero” volume, but the total mass of
the dye, which is (concentration)×(volume), being the product of a very large and a very small
quantity, is a reasonable, finite number.
The mathematical technique that handles this problem is Dirac’s delta function. The crucial
point, as we’ll see in this section, is that when one goes through the solution of the diffusion
equation by the method of separation of variables, the initial condition is used in the calculations
always inside integrals, where it is expanded into a Fourier series of some kind, but never by
itself.
To begin with, imagine a sequence of functions {δn(x)} that:
• are defined for all x and integrable over the whole x axis,
• are normalized so that ∫_{−∞}^{∞} δn(x) dx = 1 for each δn, and finally
• have the sifting property: this means that

∫_{−∞}^{∞} f(x) δn(x) dx → f(0)   as n → ∞

for every continuous function f(x).


As a matter of fact, it’s easy to construct many sequences with these properties. The
simplest one, probably, is based on example 92: define
δn(x) = 0 if x < −1/2n,   n if −1/2n ≤ x ≤ 1/2n,   0 if 1/2n < x.

The picture on the right shows some typical elements of this sequence [the graphs of δn for
n = 4, 8, 16, 32]. It's easy to see that for every n

∫_{−∞}^{∞} δn(x) dx = ∫_{−1/2n}^{1/2n} n dx = 1,

and the graph of δn is a rectangle of the same area (1 unit), but with decreasing base and
increasing height. As n goes to infinity, these rectangles get more and more localized around
the origin, and of course δn(0) goes to infinity as well.
Note that if f is a continuous function, then
∫_{−1/2n}^{1/2n} n·m dx ≤ ∫_{−∞}^{∞} δn(x) f(x) dx ≤ ∫_{−1/2n}^{1/2n} n·M dx,

that is,

m ≤ ∫_{−∞}^{∞} δn(x) f(x) dx ≤ M,

where m and M are the minimum and maximum of f, respectively, over the interval from −1/2n
to +1/2n. Hence, if we let n → ∞, the width of this interval gets smaller and smaller, and
both M and m tend to f(0). By the “sandwich theorem” of first-year calculus,

lim_{n→∞} [ ∫_{−∞}^{∞} δn(x) f(x) dx ] = f(0).
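The sifting property can be watched at work numerically. In this sketch (assuming numpy; the test function cos, with f(0) = 1, and the grid sizes are arbitrary choices) the integrals ∫ δn(x) f(x) dx approach f(0) = 1 as n grows:

```python
import numpy as np

def sift(n, f):
    # integral of f(x)*delta_n(x) over the support [-1/(2n), 1/(2n)]
    x = np.linspace(-0.5/n, 0.5/n, 10_001)
    dx = x[1] - x[0]
    v = n*f(x)
    return dx*(v.sum() - 0.5*(v[0] + v[-1]))

f = np.cos                     # a continuous test function with f(0) = 1
vals = [sift(n, f) for n in (1, 10, 100, 1000)]
assert abs(vals[-1] - 1.0) < 1e-6
# the error shrinks monotonically as n grows
assert all(abs(a - 1) >= abs(b - 1) for a, b in zip(vals, vals[1:]))
```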

But there is a problem. The two symbols, limit and integral, may not be swapped around:
lim_{n→∞} [ ∫_{−∞}^{∞} δn(x) f(x) dx ]  ≟  ∫_{−∞}^{∞} [ lim_{n→∞} δn(x) ] f(x) dx

This step is not allowed because the right-hand side is meaningless: lim_{n→∞} δn(x) is not a
function in the usual sense of the word. This is also easy to see: we find three equations

lim_{n→∞} δn(x) = 0   for every x except x = 0,

lim_{n→∞} δn(0) = ∞,

∫_{−∞}^{∞} δn(x) dx = 1   for every n,

that clearly clash. No function can be zero everywhere except at one point, where it is not
defined, and yet have integral equal to 1.
However, we may ignore this problem as long as we understand that the limit must be
taken always after an integration, never before. In this sense we may speak loosely of a “delta
function” as

δ(x) = lim_{n→∞} δn(x):

in reality δ(x) stands for the limit of a sequence of integrals. So, we may write:

∫_{−∞}^{∞} δ(x) f(x) dx = f(0)

for every continuous function f (x). This is the fundamental property of the delta function.
It follows immediately (after a trivial substitution) that
∫_{−∞}^{∞} δ(x − x0) f(x) dx = f(x0).

Note that the domain of integration in the equation above does not have to start from −∞, nor
to reach to +∞, since δ(x − x0 ) is zero everywhere except at x0 : the endpoints a and b, [assume
b > a] may be set freely as long as they straddle the point x0 . In other words we have that
∫_a^b δ(x − x0) f(x) dx = { 0 if x0 < a;   f(x0) if x0 is between a and b;   0 if x0 > b. }

IExample 93 A rod of length L = 1 meter, thermally insulated at the ends and initially at
a temperature of 0 degrees, receives at time t = 0 an amount of 10 units of heat, localized at
the point x = 1/2. Thereafter, the temperature u changes in accordance with the heat equation
ut = 0.1 uxx. Given that the specific heat of the rod is 3 units/meter, find u(x, t).
Solution: The equation and the boundary conditions are the same as in example 22, so we may
use the solution found there:

u(x, t) = Σ_{k=0}^{∞} A_k e^{−0.1 t k²π²/L²} cos(kπx/L),

where we'll substitute L = 1. The coefficients A_k are the Fourier-cosine coefficients of the initial
temperature f(x):

A_0 = (1/L) ∫_0^L f(x) dx,    A_k = (2/L) ∫_0^L f(x) cos(kπx/L) dx    [k = 1, 2, 3, . . .]

and f (x) = u(x, 0). And here we pause for a moment.


The heat is initially distributed over a very narrow interval centered at x = 1/2. This could
mean, for instance, between x = 0.4999995 and x = 0.5000005. Since

(heat capacity) = (specific heat) × (length),

the heat capacity of such an interval would also be very small:

C = 3 × 0.000001 = 3/n,

where n = 1 million. The temperature rise inside the interval is

Δu = ΔH/C,

where ΔH is the heat absorbed. Therefore we find that

f(x) = u(x, 0) = { 0 if |x − 0.5| > 1/2n;   10n/3 if |x − 0.5| < 1/2n, }

where n = 1 million, but also that

∫_{−∞}^{∞} f(x) dx = 10/3,

regardless of n. In other words, the initial temperature is zero everywhere except a very narrow
interval, where all the heat is concentrated and the temperature is very high. So, we are in a
position to use the symbolism of the delta function. We write

f(x) = (10/3) · δ(x − 0.5)
and calculate the Fourier-cosine coefficients in a natural way. Substituting L = 1, we get:
A_0 = (10/3) ∫_0^1 δ(x − 1/2) dx = 10/3,

A_k = (2·10/3) ∫_0^1 δ(x − 1/2) cos kπx dx = (20/3) cos(kπ/2) = { 0 if k is odd;   20(−1)^{k/2}/3 if k is even ≠ 0. }

So, finally, replacing the dummy index k with 2ℓ, we obtain

u(x, t) = 10/3 + (20/3) Σ_{ℓ=1}^{∞} (−1)^ℓ e^{−0.4 t ℓ²π²} cos 2ℓπx.

The following picture shows three snapshots of the solution truncated after 34 non-zero terms:

u(x, t) ≈ 10/3 + (20/3) Σ_{ℓ=1}^{33} (−1)^ℓ e^{−0.4 t ℓ²π²} cos 2ℓπx.

Note the different scales. At t = 0.001 units, the heat is still strongly localized around the
midpoint, where the temperature is about 80 degrees. In the second picture the heat has spread
somewhat toward the endpoints, which however are still virtually at zero degrees. In the third
picture the temperature has nearly evened out over the whole system. The final equilibrium
temperature is, of course, 10/3 ≈ 3.3333 . . . degrees. J
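The truncated series is easy to evaluate directly. The sketch below (assuming numpy) checks two physical features of the solution: the average temperature over the rod stays 10/3 at all times (conservation of heat), and the temperature relaxes to the equilibrium value 10/3 for large t:

```python
import numpy as np

def u(x, t, terms=33):
    """Truncated series of example 93: 10/3 + (20/3) sum (-1)^l e^{-0.4 t l^2 pi^2} cos(2 l pi x)."""
    l = np.arange(1, terms + 1)
    xx = np.asarray(x, dtype=float)[..., None]
    return 10/3 + (20/3)*np.sum(
        (-1.0)**l * np.exp(-0.4*t*l**2*np.pi**2) * np.cos(2*l*np.pi*xx),
        axis=-1)

xs = np.linspace(0.0, 1.0, 1001)
dx = xs[1] - xs[0]
for t in (0.001, 0.01, 0.1):
    vals = u(xs, t)
    mean = dx*(vals.sum() - 0.5*(vals[0] + vals[-1]))   # trapezoidal average over [0,1]
    assert abs(mean - 10/3) < 1e-9                       # total heat is conserved
assert np.max(np.abs(u(xs, 10.0) - 10/3)) < 1e-7         # relaxation to equilibrium
```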

One of the most curious properties of the delta function (or Dirac’s function, as it’s often called)
is that it may be visualized as the derivative of Heaviside’s unit step function. Indeed, if we
consider the “indefinite integral” of δ(x − x0), we get that

∫_{−∞}^{x} δ(ξ − x0) dξ = { 0 if x < x0;   1 if x > x0, }

and this is by definition the unit step function U(x − x0). So, if U(x − x0) is the anti-derivative
of δ(x − x0), then δ(x − x0) should be the derivative of U(x − x0). But be careful: U is a function
in the usual sense of the word, δ is merely a symbol. And yet, it almost makes sense: the graph
of U is flat, except at one point, where U has a jump-discontinuity; hence its derivative is zero
everywhere except at the point of discontinuity, where it is infinite (whatever that means).
Heaviside was the first to realize that he could just postulate the existence of the delta
function and carry on with his “operational calculus”; in the end he would get sensible results.
But he was only a telegraph operator with no university degree; most academics dismissed him
as a crank. Only after Dirac (an engineer who won the Nobel prize for physics) carried the idea
into quantum mechanics, did Heaviside get the credit he deserved.
Eventually a rigorous theory of generalized functions was built by the French mathematician
L. Schwartz, which made sense of the paradoxical properties of δ(x).† We shall not discuss this
theory, which would also require a thorough reappraisal of the concept of integration.
It's not uncommon, in mathematics, for rigorous proofs to follow applications by a large margin.
For example, square roots of negative numbers were first used by G. Cardano (1501–1576)
to work around an obstacle in his solution of cubic equations. By treating these “inexistent”
numbers as if they existed, he could obtain the correct real answers. By his own admission,
he could not fully understand how his method worked, but work it did; a full understanding of
complex numbers came only much later.
To conclude this discussion, it should be mentioned that the family {δn (x)} used in these
notes is not the only one that may be used to introduce the delta function. For example, it’s
easy to show that
δn(x) = n/(π(1 + n²x²))    and    δn(x) = √(n/π) e^{−nx²}

are defined for every x, are normalized to 1 and have the sifting property. In addition, they may
be differentiated anywhere any number of times, unlike the δn ’s we used.
FOURIER TRANSFORM
The Fourier transform of the delta function is calculated in a natural way. By definition,
F[δ(x − x0)] = (1/2π) ∫_{−∞}^{∞} e^{−ikx} δ(x − x0) dx = e^{−ikx0}/(2π).

† Schwartz proposed the word “distributions”, but “generalized functions” seems to have
prevailed.

Once again, this seems to make sense. Since δ(x) may be seen as “infinitely localized”, its
transform is “infinitely spread out”, in accordance with the uncertainty principle.
Formally applying Fourier’s integral theorem we get
δ(x − x0) = lim_{L→∞} ∫_{−L}^{L} e^{ikx} F[δ(x − x0)] dk,

which yields the interesting equation


δ(x − x0) = (1/2π) lim_{L→∞} ∫_{−L}^{L} e^{ik(x−x0)} dk = lim_{L→∞} sin L(x − x0) / (π(x − x0)).

As an exercise, verify that

∫_{−∞}^{∞} sin L(x − x0) / (π(x − x0)) dx = 1    for every L > 0;

go back to example 68.
Fourier-sine and Fourier-cosine transforms of δ(x) may be treated in the same way.
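The exercise above can also be checked numerically; truncating the infinite integral at a large K leaves an error of order 1/(LK). A sketch, assuming numpy (the values L = 2, x0 = 0.7 and K = 5000 are arbitrary choices):

```python
import numpy as np

Lw, x0 = 2.0, 0.7                 # arbitrary sample values of L and x0
K = 5000.0                        # truncation of the infinite integral
x = np.linspace(x0 - K, x0 + K, 2_000_001)
dx = x[1] - x[0]
# sin(L(x-x0))/(pi(x-x0)), written via np.sinc to avoid 0/0 at x = x0
g = (Lw/np.pi)*np.sinc(Lw*(x - x0)/np.pi)
total = dx*(g.sum() - 0.5*(g[0] + g[-1]))
assert abs(total - 1.0) < 1e-3    # truncation error is of order 1/(L K)
```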

6. Application: the Heat Equation

Fourier transforms may be used to solve certain PDEs over infinite domains, just like Fourier
series may be used for PDEs over finite domains.

IExample 94 A long rod, occupying a portion of the x axis from x = 0 to a very distant point
(loosely speaking, “infinity”) is thermally insulated at the near end. Initially the temperature
is given by the equation u(x, 0) = f (x) where
n
C = constant if 0 < x < 1,
f (x) =
0 if x > 1.
Heat propagates through the rod according to the heat equation ut = αuxx , where α is constant;
assume that u(x, t) is bounded as x → ∞ (at all times). Find u(x, t).
Solution: The requirement that u be bounded is a novelty, but it certainly makes sense from a
physical point of view; we'll see its role very soon. By the method of separation of variables,
we look for solutions of the form X(x) · T(t). Proceeding like in chapter 1, we get immediately

Ṫ/(αT) = X″/X = λ.
One boundary condition is X′(0) = 0. If λ is positive, then the only solutions of the equation
X″ = λX that meet this requirement have the form X = A cosh(√λ x) (where A does not depend
on x), but these solutions are unbounded as x goes to infinity. Therefore, λ > 0 is not acceptable.
If λ is zero or negative, then we write λ = −k 2 [where k ≥ 0]; this leads to solutions of the
form
X = A cos kx,
where, again, A does not depend on x. We may now turn to the equation for T , which yields
immediately
T = (constant) · e^{−αtk²}.
Any expression of the form

A e^{−αtk²} cos kx
fits the equation and the boundary conditions. So far, we have followed closely the lines of
example 22. We have now reached a fundamental difference: whereas in example 22 the eigen-
values could be counted (we found that λk = −k 2 π 2 /L2 , and k could be 0, 1, 2, . . .), there is no
such restriction here. The quantity k can take any real positive value: we say that the eigenval-
ues form a continuum, whereas in example 22 they were discrete. So, the most general linear
combination of solutions is not a series, but an integral:
u(x, t) = ∫_0^∞ A(k) e^{−αtk²} cos kx dk.

At time t = 0 this reduces to

u(x, 0) = ∫_0^∞ A(k) cos kx dk;

we match this expansion with f(x). The Fourier-cosine transform of f(x) is

F(k) = (2/π) ∫_0^1 C cos kx dx = (2/π) · (C sin k)/k.

Hence, by Fourier's integral theorem, we find that

f(x) = (2/π) ∫_0^∞ (C sin k)/k · cos kx dk.

By inspection, we get immediately:

A(k) = (2C sin k)/(πk).

So, finally, the solution may be written

u(x, t) = ∫_0^∞ (2C sin k)/(πk) · e^{−αtk²} · cos kx dk.
This is not yet the best form that one may give to the solution; it's more like an intermediate step
toward a more practical formula, which may be deduced by a more sophisticated analysis.
But we'll not go into that; the above integral is usable as it stands.
For instance, by giving C the value of π degrees, setting α = 1 (this is always possible,
by choosing an appropriate unit of time), and using plain Gauss-Hermite numerical integration
with 20 points (which is routine work, really, but let's leave the details out), the free package
GNUPLOT has produced the following pictures.

The scale is the same for all the pictures; the first one shows also the initial temperature
distribution (dashed line). We may see some numerical instability in the first picture, for instance
a small ripple at about x = 5 which shouldn’t be there. Obviously more than 20 points are
required to remove this instability. J
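The integral form of the solution is easy to evaluate with any quadrature rule. For comparison we use the closed form u = (C/2)[erf((1−x)/√(4αt)) + erf((1+x)/√(4αt))], which follows from the heat kernel on the half-line with an insulated end (stated here without derivation; presumably the "more practical formula" alluded to above). A sketch, assuming numpy:

```python
import numpy as np
from math import erf, sqrt, pi

C, alpha = pi, 1.0      # the values used for the pictures

def u_integral(x, t, K=60.0, n=60_000):
    # truncated Fourier-cosine integral; e^{-alpha t k^2} makes K = 60 ample for t >= 0.05
    k = np.linspace(1e-9, K, n)
    dk = k[1] - k[0]
    g = (2*C/pi)*(np.sin(k)/k)*np.exp(-alpha*t*k**2)*np.cos(k*x)
    return dk*(g.sum() - 0.5*(g[0] + g[-1]))

def u_kernel(x, t):
    # assumed closed form: heat kernel on the half-line with insulated end
    s = sqrt(4*alpha*t)
    return (C/2)*(erf((1 - x)/s) + erf((1 + x)/s))

for x, t in [(0.0, 0.05), (0.5, 0.1), (2.0, 0.2)]:
    assert abs(u_integral(x, t) - u_kernel(x, t)) < 1e-4
```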

DIRECT TRANSFORM METHODS


If you have followed so far, you must have noticed that the method of separation of variables
always follows the same lines in a predictable way. It's simple but tortuous; in practice, it's often
possible to cut a few corners, so to speak. For instance, one may think of taking the Fourier
transform of a whole PDE. Let's see how this method works by means of an example.

IExample 95 An infinite thin rod occupies the positive half of the x axis. Its initial temperature
is u = 0 throughout, but at time t = 0 an internal source of heat f is switched on, and from
that moment the temperature changes in accordance with the non-homogeneous heat equation
ut − αuxx = f(x), where 0 < x < ∞. The boundary conditions are u(0, t) = 0 [the near
end is kept at temperature zero] and u(∞, t) = 0 for all t. The initial condition is u(x, 0) = 0 for all
positive x. Find u(x, t), assuming the internal source of heat f is known and does not depend
on time. Finally, consider the special case f(x) = e^{−sx}, where s > 0 is constant.
Solution: Let’s take the transform of the whole equation. We may choose among the Fourier,
Fourier-cosine and Fourier-sine.
The “full” Fourier transform applies to problems where x ranges from −∞ to +∞, whereas
here x is restricted to (0, ∞). So, the Fourier-cosine or Fourier-sine transforms seem to be better
options.
The boundary condition at x = 0 tells us which one to choose: we need a solution that
vanishes at x = 0 for all t, hence we take the Fourier-sine. Had we wanted a solution with zero
derivative at x = 0, we should have chosen the Fourier-cosine.
So, we look for a solution of the form
u(x, t) = ∫_0^∞ U(k, t) sin kx dk,    where    U(k, t) = (2/π) ∫_0^∞ u(x, t) sin kx dx

is the Fourier-sine transform of u, and will be the unknown of the problem. We also introduce
the Fourier-sine transform F(k) of f(x):

f(x) = ∫_0^∞ F(k) sin kx dk    ⇐⇒    F(k) = (2/π) ∫_0^∞ f(x) sin kx dx.

Take the Fourier-sine transform of both sides of the equation; it follows that
(2/π) ∫_0^∞ [ut − αuxx] sin kx dx = (2/π) ∫_0^∞ f(x) sin kx dx.

The right-hand side is, by definition, F (k) and, since f is known, we regard F as known, too.
The left-hand side may be broken into two terms. Considering the first term, we note that
(2/π) ∫_0^∞ ut sin kx dx = (2/π) (∂/∂t) ∫_0^∞ u sin kx dx = Ut,

after swapping integration with respect to x and differentiation with respect to t. In the second
term, after integrating by parts (twice), and using the boundary conditions u(0, t) ≡ 0 and
u(∞, t) ≡ 0, we get:

(2/π) ∫_0^∞ uxx sin kx dx = (2/π)[ux sin kx]_0^∞ − (2/π) k [u cos kx]_0^∞ − (2/π) k² ∫_0^∞ u sin kx dx =
                          = 0 + k·0 − k² U(k, t).

Hence the transformed equation is a simple first-order linear ODE in U :

Ut + αk² U = F.    (50)

The initial condition is obviously


U(t = 0) = (2/π) ∫_0^∞ u(x, 0) sin kx dx = (2/π) ∫_0^∞ 0 · sin kx dx = 0.

The ODE (50) above may be easily solved by the method of variation of parameters or by using
the integrating factor µ = e^{αk²t}. It follows that

∂t [e^{αk²t} U] = e^{αk²t} F,

and hence that

U = e^{−αk²t} ∫_0^t e^{αk²τ} F dτ.

The problem has been formally solved: one may find U by integration, and then (inverting the
Fourier-sine transform) find u.
Let's see how this works if f(x) = e^{−sx}. First of all, we find that

F = (2/π) ∫_0^∞ e^{−sx} sin kx dx = (2/π) · k/(k² + s²).

Substituting back into (50), we get:

Ut + αk² U = (2/π) · k/(k² + s²).

Solving this ODE, we find immediately:

U = (2/π) · (1 − e^{−αk²t}) / (αk(k² + s²)).

Finally, inverting the Fourier-sine transform, we find the solution:

u = (2/π) ∫_0^∞ (1 − e^{−αk²t}) / (αk(k² + s²)) · sin kx dk.

The above integral already provides a workable solution, although once again a careful analysis
could give it a better expression. We stop here. Note, however, that the method of
this example applies with no change to problems where f depends on t as well as x. J
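As a sanity check, the expression for U can be verified against the transformed ODE (50) and the initial condition by finite differences (a sketch, assuming numpy; the values of α, s, k and t are arbitrary choices):

```python
import numpy as np

alpha, s = 0.7, 1.3                  # arbitrary sample parameters

def U(k, t):
    # candidate solution of the transformed problem
    return (2/np.pi)*(1 - np.exp(-alpha*k**2*t))/(alpha*k*(k**2 + s**2))

def F(k):
    # Fourier-sine transform of f(x) = e^{-sx}
    return (2/np.pi)*k/(k**2 + s**2)

k, t, h = 2.1, 0.9, 1e-6
Ut = (U(k, t + h) - U(k, t - h))/(2*h)               # centered difference in t
assert abs(Ut + alpha*k**2*U(k, t) - F(k)) < 1e-7    # the ODE (50) holds
assert abs(U(k, 0.0)) < 1e-15                        # initial condition U(k,0) = 0
```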

7. Application: the Laplace Equation


As a rule, any linear combination of solutions of a linear homogeneous PDE is also a solution of
the same PDE. In general, however, PDEs come with one or more non-homogeneous conditions.
For example, suppose we want to solve the Laplace equation in a rectangle with boundary
conditions on the four sides, as shown in the picture: ∇²u = 0 inside the rectangle 0 < x < a,
0 < y < b, with u = f(x) on the base, u = g(y) on the right-hand side, u = h(x) on the top,
and u = k(y) on the left-hand side. The Laplace equation is homogeneous, but it is combined
with four non-homogeneous boundary conditions, one on each side.
It's then easy to see that the solution u(x, y) may be written down as the sum of four functions:

u = u1 + u2 + u3 + u4,
where each uk is the solution of a simpler problem: for instance, u1 is the solution of ∇2 u1 = 0
that equals f (x) on the base of the rectangle, but is zero on all other sides; u2 is the solution of
∇2 u2 = 0 that equals g(y) on the right-hand vertical side, but is zero on all other sides; and so
on for u3 and u4 . In other words, we break the boundary into four pieces that do not overlap
and look for the solution that satisfies the boundary condition on each piece; finally, we combine
the solutions thus found.
This simple observation is called the superposition principle. Broadly speaking, it should
be used whenever it’s feasible: breaking a complex problem into two or more simpler ones is
almost always a step in the right direction. We shall soon see an example of its use.

IExample 96 Solve the Laplace equation inside the semi-infinite strip 0 < x < ∞, 0 < y < L,
with boundary conditions as shown in the picture [u = 0 on the top side y = L and on the
vertical side x = 0, and u = f(x) on the base y = 0], and the requirement that |u| be bounded
as x tends to infinity.
Solution: The fact that x ranges from zero to infinity and u is prescribed to be = 0 for all y
if x = 0 suggests that we use a Fourier-sine transform, much like in example 95. (Here too, if
the condition on the vertical side were ux = 0, we would use a Fourier-cosine transform.) So,
we take the Fourier-sine transform with respect to x of the equation

uxx + uyy = 0.
We write

U(k, y) = (2/π) ∫_0^∞ u(x, y) sin kx dx

for the Fourier-sine transform of u(x, y), and proceed exactly like in example 95. This yields

(2/π) ∫_0^∞ uxx sin kx dx + (2/π) ∫_0^∞ uyy sin kx dx = 0.
We integrate by parts (twice) the first term, and interchange in the second term integration with
respect to x and differentiation with respect to y. Simplifying, it follows that
−k² U + Uyy = 0,

which is a linear ODE in U . Solutions may be expressed as linear combinations of cosh ky and
sinh ky, or also as linear combinations of eky and e−ky . However, we need a solution that is zero
when y = L, and this is
U = A sinh k(L − y).
Obviously, the scale factor A is at this stage arbitrary. There is no restriction on k, apart from
the fact that changing k into −k merely multiplies U by −1, so we need to consider only k ≥ 0.
Hence, the most general linear combination of eigenfunctions is, like in example 95, not a series
but an integral:
u = ∫_0^∞ U sin kx dk = ∫_0^∞ A(k) sinh k(L − y) sin kx dk.

We have reached the last step: requiring that u(x, 0) = f (x), we match
u(x, 0) = ∫_0^∞ A(k) sinh kL sin kx dk

with the Fourier-sine expansion of f (x):


f(x) = ∫_0^∞ F(k) sin kx dk,

where

F(k) = (2/π) ∫_0^∞ f(x) sin kx dx

is the Fourier-sine transform of f (x). By inspection, we find immediately that

A(k) = F(k)/sinh kL,

and hence that

u(x, y) = ∫_0^∞ F(k) sinh k(L − y)/sinh kL · sin kx dk. J

IExample 97 Find the solution of the preceding problem if the boundary conditions are that
u = 0 on both horizontal sides of the strip, and u = g(y) for 0 < y < L on the vertical side.
Solution: The picture on the right shows the boundary conditions for this problem [u = 0 on
the horizontal sides y = 0 and y = L, u = g(y) on the vertical side x = 0, and ∇²u = 0 inside
the strip]. The fact that u = 0 for y = 0 and y = L suggests that u, regarded as a function of
y, be expanded into a Fourier-sine series. Not an integral, but a series, because the range of y
is finite; if the conditions had been uy = 0 for y = 0 and y = L, we should have chosen a
Fourier-cosine series instead.
So, we write

u(x, y) = Σ_{k=1}^{∞} b_k(x) sin(kπy/L)

and substitute this expression into the Laplace equation. This yields:

Σ_{k=1}^{∞} b″_k(x) sin(kπy/L) − Σ_{k=1}^{∞} b_k(x) · (k²π²/L²) sin(kπy/L) = 0.

It follows that
b″_k(x) − (k²π²/L²) b_k(x) = 0,    [k = 1, 2, . . .]
which, for each coefficient bk (x), is a linear ODE. Solutions may be written as linear combinations
of cosh kπx/L and sinh kπx/L, or also of ekπx/L and e−kπx/L . However, the only solutions that
are bounded as x → ∞ are scalar multiples of e−kπx/L . So, we may write


u(x, y) = Σ_{k=1}^{∞} B_k e^{−kπx/L} sin(kπy/L),

where the scale factors Bk are constant. We determine them by requiring that

u(0, y) = g(y),    that is,    Σ_{k=1}^{∞} B_k sin(kπy/L) = g(y).

By inspection, we recognize that the Bk ’s are just the coefficients of the Fourier-sine series of
g(y):
B_k = (2/L) ∫_0^L g(y) sin(kπy/L) dy,
and this completes the solution of the problem. J
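For a concrete g the whole recipe fits in a few lines. The sketch below (assuming numpy) takes g(y) = y(L − y) with L = 1 — an arbitrary choice of ours — computes the coefficients B_k by numerical quadrature, and checks that the series reproduces g on the boundary x = 0 and decays as x grows:

```python
import numpy as np

L = 1.0
g = lambda y: y*(L - y)              # sample boundary data (our choice)

y = np.linspace(0.0, L, 4001)
dy = y[1] - y[0]
def trap(v):
    return dy*(v.sum() - 0.5*(v[0] + v[-1]))

K = 60                               # number of series terms kept
ks = np.arange(1, K + 1)
B = np.array([(2/L)*trap(g(y)*np.sin(k*np.pi*y/L)) for k in ks])

def u(x, yy):
    return np.sum(B*np.exp(-ks*np.pi*x/L)
                  * np.sin(ks*np.pi*np.asarray(yy)[..., None]/L), axis=-1)

assert np.max(np.abs(u(0.0, y) - g(y))) < 1e-4     # boundary condition at x = 0
assert abs(u(5.0, np.array([0.5]))[0]) < 1e-6      # exponential decay in x
```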

IExample 98 Solve the Laplace equation inside the semi-infinite strip 0 < x < ∞, 0 < y < L,
with boundary conditions as shown in the picture [u = 0 on the top side y = L, u = g(y) on
the vertical side x = 0, and u = f(x) on the base y = 0], and the requirement that |u| be
bounded as x tends to infinity.
Solution: No calculation is necessary. The boundary conditions are the combination of
the ones encountered in the two preceding examples. Hence, the solution of this problem is
simply the sum of the solution of example 96 and the solution of example 97. J

IExample 99 Solve the Laplace equation in the upper half-plane [y > 0], subject to the
boundary condition u(x, 0) = f (x), and the condition that |u| be bounded as y → +∞.
Solution: Since x ranges from −∞ to +∞, we take the full Fourier transform of the Laplace
equation. We assume that the solution may be expressed by the equation

u(x, y) = ∫_{−∞}^{∞} U(k, y) e^{ikx} dk,
where U(k, y) is the Fourier transform of the solution:

U(k, y) = (1/2π) ∫_{−∞}^{∞} u(x, y) e^{−ikx} dx.
Proceeding in a manner similar to example 95, we get immediately a linear ODE for U :

−k² U + U″ = 0.

Solutions may be written as linear combinations of eky and e−ky ; however, in this example k
ranges from −∞ to +∞, and we need to ensure that the solution is bounded as y → +∞.
Therefore, we construct a solution of the form

U = { A e^{ky} if k is negative;   A if k is zero;   A e^{−ky} if k is positive. }

Here, A = A(k) is a scale factor that does not depend on y. The equation above may also be
written

U = A(k) e^{−|k|y}.
Substituting it into the expression for u, it follows:
u(x, y) = ∫_{−∞}^{∞} A(k) e^{ikx} e^{−|k|y} dk.    (50)

Finally, we determine A(k) by matching it to the Fourier integral expansion of f (x):


u(x, 0) = ∫_{−∞}^{∞} A(k) e^{ikx} dk,

u(x, 0) = f(x) = ∫_{−∞}^{∞} F(k) e^{ikx} dk.

We see immediately that A(k) must coincide with the Fourier transform of f (x):
A(k) = (1/2π) ∫_{−∞}^{∞} f(x) e^{−ikx} dx,
and at this point the problem is solved in general. J

IExample 100 Solve the preceding example in the special case where f (x) = 1/(x2 + 1).
Solution: We find immediately:

A(k) = F(k) = (1/2π) ∫_{−∞}^{∞} e^{−ikx}/(x² + 1) dx.

If k < 0 we may use Jordan's lemma, closing the contour in the upper half-plane. We find
immediately that

F(k) = i 2π · res_{z=i} [ (1/2π) e^{−ikz}/(z² + 1) ] = i · [ e^{−ikz}/(2z) ]_{z=i} = ½ e^k.    [k < 0]

For positive k, we find that

F(k) = F(−k) = ½ e^{−k}.    [k > 0]

Therefore,

F(k) = ½ e^{−|k|}.    [all k]

Substituting this expression back into (50), and using symmetry arguments, we find:

u(x, y) = ∫_{−∞}^{∞} ½ e^{−|k|} e^{ikx} e^{−|k|y} dk = ½ ∫_{−∞}^{∞} e^{−|k|} e^{−|k|y} cos kx dk.

It follows (again, by symmetry) that

u(x, y) = ∫_0^∞ e^{−k} e^{−ky} cos kx dk = Re [ ∫_0^∞ e^{−k−ky+ikx} dk ].
0

The integration is trivial; we find that

u(x, y) = Re [ 1/(1 + y − ix) ] = Re [ (1 + y + ix)/((1 + y)² + x²) ] = (1 + y)/((1 + y)² + x²).

As an exercise, check that this function does indeed satisfy the Laplace equation, and that it
becomes equal to f(x) = 1/(x² + 1) when y = 0. J
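The suggested checks can also be done numerically: the sketch below (assuming numpy) verifies the boundary values and applies a five-point finite-difference Laplacian at a few interior points:

```python
import numpy as np

def u(x, y):
    return (1 + y)/((1 + y)**2 + x**2)

# boundary values: u(x, 0) = 1/(x^2 + 1) = f(x)
xs = np.linspace(-5.0, 5.0, 11)
assert np.allclose(u(xs, 0.0), 1/(xs**2 + 1))

# five-point finite-difference Laplacian vanishes at interior points
h = 1e-4
for x0, y0 in [(0.3, 0.7), (-2.0, 1.5), (1.0, 0.1)]:
    lap = (u(x0 + h, y0) + u(x0 - h, y0)
           + u(x0, y0 + h) + u(x0, y0 - h) - 4*u(x0, y0))/h**2
    assert abs(lap) < 1e-4
```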

IExample 101 Solve the Laplace equation inside the first quadrant 0 < x < ∞, 0 < y < ∞,
with boundary conditions as shown in the picture [u = f(y) on the vertical side x = 0, and
uy = 0 on the horizontal side y = 0], and the requirement that |u| be bounded at infinity.
Solution: The fact that uy must be zero for y = 0, and u(0, y) is prescribed to be = f(y),
suggests that we take the Fourier-cosine transform of the Laplace equation with respect to y:

(2/π) ∫_0^∞ (uxx + uyy) cos ky dy = 0.

Defining

U(k, x) = (2/π) ∫_0^∞ u(x, y) cos ky dy,

and proceeding like in the preceding examples (i.e., interchanging integration with respect to
y and differentiation with respect to x, and integrating by parts where necessary—verify these
steps) we get the ordinary differential equation

Uxx − k 2 U = 0,

which may be solved for U . Solutions that are bounded as x goes to infinity have the form

U = A e−kx ,

where A is a scale factor and k ≥ 0. So, we find that


u(x, y) = ∫_0^∞ A(k) e^{−kx} cos ky dk

and

u(0, y) = ∫_0^∞ A(k) cos ky dk.
By Fourier’s integral theorem, we also get that
f(y) = ∫_0^∞ F(k) cos ky dk,

where F (k) is the Fourier-cosine transform of f . So, finally, we get


A(k) = F(k) = (2/π) ∫_0^∞ f(y) cos ky dy,
and this completes the solution. J

8. Inversion of the Laplace Transform

Recall that if f (t) is a function of exponential order and f (t) ≡ 0 for negative t, the Laplace
transform F (s) is defined by the equation
F(s) := ∫_0^∞ f(t) e^{−st} dt.    (51)

In general, s is regarded as a real variable, and F (s) is defined only for s ranging from a certain
point s0 to infinity. For example, the Laplace transform of f (t) = t is defined for s greater than
zero, but the Laplace transform of f (t) = e−7t is defined for s greater than −7.
However, in this section we allow s to take complex values: we write z instead of s, where

z = γ + ik.

If the Laplace transform (51) is defined for s > s0 , then the real part of z, which is γ, must also
be greater than s0 . Note that
F(z) = ∫_0^∞ f(t) e^{−zt} dt = ∫_0^∞ f(t) e^{−γt} e^{−ikt} dt.

But since by definition f(t) ≡ 0 for negative t, we may formally extend the last integral to −∞:

F(z) = ∫_{−∞}^{∞} f(t) e^{−γt} e^{−ikt} dt = (1/2π) ∫_{−∞}^{∞} 2π f(t) e^{−γt} e^{−ikt} dt.

This is clearly a Fourier transform:

F(z) = F[2π e^{−γt} f(t)].

Inverting this Fourier transform by (45), we get:

F⁻¹[F(z)] = 2π e^{−γt} f(t),

which may be written as:

∫_{−∞}^{∞} e^{ikt} F(z) dk = 2π e^{−γt} f(t).    [z = γ + ik]

It follows immediately that

(e^{γt}/2π) ∫_{−∞}^{∞} e^{ikt} F(z) dk = f(t).    [z = γ + ik]

This formula enables one to derive f (t) by means of the integral on the left-hand side, for a
given F (z). It may be put, however, in a better form: first of all, writing
f(t) = (1/2π) ∫_{−∞}^{∞} e^{(γ+ik)t} F(γ + ik) dk

we see that it's convenient to substitute the real variable k with the complex variable z = γ + ik.
Applying such a substitution, we get that

dk = dz/i,

and the path of integration is mapped into a vertical straight line that goes through the point
z = γ on the real axis (because when k = 0, z = γ). So, we finally obtain the equation

f(t) = (1/i2π) ∫_{γ−i∞}^{γ+i∞} e^{zt} F(z) dz,    (52)

which is called Mellin’s inversion formula. As we mentioned earlier, the path of integration is
an infinite vertical straight line in the z plane, and γ must be sufficiently large. We’ll presently
see what this requirement means in practice.
Mellin’s formula is hardly ever used directly in the form (52). However, in many applications
it happens that:
(i) F (z) has a finite number of singular points in the complex plane, and
(ii) tends to zero as |z| goes to infinity in all directions.
For example, consider

L[e^{4t}] = 1/(s − 4),

which is written 1/(z − 4) when s is allowed to take complex values; we see that F (z) has only
one pole, at z = 4, and goes to zero uniformly as |z| tends to infinity. Note that s must be > 4
for the Laplace transform to exist: hence, γ must also be > 4.
Under the conditions listed above, the calculus of residues may be used to evaluate the
integral (52). The procedure is very simple.
[Two pictures: the vertical line from γ − iL to γ + iL, closed by a semicircle of radius L; for
negative t (left picture) the semicircle lies to the right of the line, for positive t (right picture)
to the left, where the singular points are.]
We close the contour by means of a semicircle of radius L, as shown in the pictures above, and
then let L → ∞. It may be shown that if F(z) → 0 uniformly and has only a finite number
of singular points, the contribution from the non-vertical parts of the contour goes to zero as
L tends to infinity. The proof is virtually identical to that of Jordan's lemma (39), and you
should be able to do it as an exercise. The contour corresponding to positive t is usually called
the Bromwich contour.
The pictures tell us how to fix γ. Recall that f(t) ≡ 0 for negative t; therefore we must
choose γ so that the integral (52) is zero for all t < 0. This is easy: we make sure that no
singular points are enclosed by the contour on the left, no matter how large L is. This may
be done by requiring that all the singular points lie to the left of the vertical line through γ.
In other words, γ must be so large that all the singular points are enclosed (when L is large
enough) by the Bromwich contour on the right, and no singular point is enclosed by the contour
on the left. At this point, by the residue theorem we get immediately:

f(t) = \frac{1}{i2\pi} \int_{\gamma - i\infty}^{\gamma + i\infty} e^{zt} F(z) \, dz =
\begin{cases}
0 & \text{if } t \text{ is negative}, \\
\sum \operatorname{res}\left[ e^{zt} F(z) \right] & \text{if } t \text{ is positive},
\end{cases} \qquad (53)

where the sum on the right-hand side extends over all the singular points of e^{zt} F(z).
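For t > 0, the recipe (53) is mechanical enough to automate symbolically. Here is a minimal sketch using the sympy library (the helper's name, and the requirement that the caller list the poles explicitly, are my own choices, not part of the text):

```python
import sympy as sp

z = sp.Symbol('z')
t = sp.Symbol('t', positive=True)   # (53) gives f(t) on the positive-t branch

def inverse_laplace_by_residues(F, poles):
    """Sum the residues of e^{zt} F(z) over the listed singular points,
    i.e. evaluate the t > 0 case of equation (53)."""
    return sp.simplify(sum(sp.residue(sp.exp(z * t) * F, z, p) for p in poles))

# Sanity check on the elementary pair L[e^{4t}] = 1/(s - 4):
f = inverse_laplace_by_residues(1 / (z - 4), [4])
print(f)    # exp(4*t)
```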
► Example 102  Apply (53) to find \mathcal{L}^{-1}\left[ s/(s^2 - 49) \right].
Solution: F(z) has two simple poles, at z = ±7. Therefore

f(t) = \operatorname*{res}_{z=7} \left[ \frac{z e^{zt}}{z^2 - 49} \right]
     + \operatorname*{res}_{z=-7} \left[ \frac{z e^{zt}}{z^2 - 49} \right]
     = \frac{e^{7t}}{2} + \frac{e^{-7t}}{2} = \cosh 7t. ◄

► Example 103  Apply (53) to find \mathcal{L}^{-1}\left[ (s+4)/(s^2 + 2s + 10) \right].
Solution: F(z) has two simple poles, at z = −1 ± i3. Therefore

f(t) = \operatorname*{res}_{z=-1+i3} \left[ \frac{(z+4)\, e^{zt}}{z^2 + 2z + 10} \right]
     + \operatorname*{res}_{z=-1-i3} \left[ \frac{(z+4)\, e^{zt}}{z^2 + 2z + 10} \right]
     = \frac{(3+i3)\, e^{-t+i3t}}{i6} + \frac{(3-i3)\, e^{-t-i3t}}{-i6}.

Simplifying, we find that f(t) = e^{-t} (\sin 3t + \cos 3t). ◄
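Each fraction above is an instance of the simple-pole rule res_{z=z₀}[P(z)e^{zt}/Q(z)] = P(z₀)e^{z₀t}/Q′(z₀). The simplification at the end is easy to get wrong, so here is a quick numerical cross-check (purely illustrative; the sample point t = 0.3 is arbitrary):

```python
import cmath
import math

def f_from_residues(t):
    """Sum of the residues of (z + 4) e^{zt} / (z^2 + 2z + 10) at z = -1 +/- 3i,
    each computed as P(z0) e^{z0 t} / Q'(z0) with Q'(z) = 2z + 2."""
    total = 0
    for z0 in (-1 + 3j, -1 - 3j):
        total += (z0 + 4) * cmath.exp(z0 * t) / (2 * z0 + 2)
    return total.real   # the conjugate pair makes the sum real

t = 0.3
closed_form = math.exp(-t) * (math.sin(3 * t) + math.cos(3 * t))
print(abs(f_from_residues(t) - closed_form))   # round-off level, about 1e-16
```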


► Example 104  Apply (53) to find \mathcal{L}^{-1}\left[ 1/\bigl( (s+1)(s-2)^2 \bigr) \right].
Solution: F(z) has a simple pole at z = −1 and a double pole at z = 2. Hence, by (53) we have
that

f(t) = \operatorname*{res}_{z=-1} \left[ \frac{e^{zt}}{(z+1)(z-2)^2} \right]
     + \operatorname*{res}_{z=2} \left[ \frac{e^{zt}}{(z+1)(z-2)^2} \right].

By (35) we find that

\operatorname*{res}_{z=-1} \left[ \frac{e^{zt}}{(z+1)(z-2)^2} \right] = \frac{e^{-t}}{9},

and by (36) we find that

\operatorname*{res}_{z=2} \left[ \frac{e^{zt}}{(z+1)(z-2)^2} \right]
  = \frac{d}{dz} \left[ \frac{e^{zt}}{z+1} \right]_{z=2}
  = \frac{t e^{2t}}{3} - \frac{e^{2t}}{9}.

So, finally, we find that f(t) = \frac{1}{9} \left( e^{-t} - e^{2t} \right) + \frac{1}{3} t e^{2t}. ◄
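The derivative step at the double pole is the usual source of slips, so it is worth checking symbolically. A sketch with the sympy library (the variable names are mine; sympy is assumed available):

```python
import sympy as sp

z = sp.Symbol('z')
t = sp.Symbol('t', positive=True)

# Simple pole at z = -1: multiply by (z + 1) and let z -> -1.
r1 = sp.limit((z + 1) * sp.exp(z * t) / ((z + 1) * (z - 2)**2), z, -1)
# Double pole at z = 2: differentiate (z - 2)^2 e^{zt} F(z) once, as in (36).
r2 = sp.diff(sp.exp(z * t) / (z + 1), z).subs(z, 2)

expected = (sp.exp(-t) - sp.exp(2 * t)) / 9 + t * sp.exp(2 * t) / 3
print(sp.simplify(r1 + r2 - expected))   # 0
```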
► Example 105  Apply (53) to find \mathcal{L}^{-1}\left[ (s+5)/(s-2)^3 \right].
Solution: F(z) has a triple pole at z = 2. Therefore we must use (36). We find that

f(t) = \operatorname*{res}_{z=2} \left[ \frac{(z+5)\, e^{zt}}{(z-2)^3} \right]
     = \frac{1}{2!} \frac{d^2}{dz^2} \left[ e^{zt} (z+5) \right]_{z=2}
     = \frac{7}{2}\, t^2 e^{2t} + t e^{2t}. ◄
► Example 106  Apply (53) to find \mathcal{L}^{-1}\left[ 1/(s^2 + 1)^3 \right].
Solution: F(z) has two triple poles, at z = i and z = −i. Hence, by (36) we find that

f(t) = \frac{1}{2!} \frac{d^2}{dz^2} \left[ \frac{e^{zt}}{(z+i)^3} \right]_{z=i}
     + \frac{1}{2!} \frac{d^2}{dz^2} \left[ \frac{e^{zt}}{(z-i)^3} \right]_{z=-i}.

It follows that

f(t) = \frac{1}{2!} \left[ \frac{t^2 e^{it}}{-i8} - \frac{6t e^{it}}{16} + \frac{12 e^{it}}{i32} \right]
     + \frac{1}{2!} \left[ \frac{t^2 e^{-it}}{i8} - \frac{6t e^{-it}}{16} + \frac{12 e^{-it}}{-i32} \right]
     = -\tfrac{1}{8} t^2 \sin t - \tfrac{3}{8} t \cos t + \tfrac{3}{8} \sin t. ◄
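With two triple poles the bookkeeping is heavy, and a clean way to close the loop is to transform the answer forward again: the Laplace transform of the f(t) just obtained must reproduce 1/(s² + 1)³. A sympy sketch of that check (illustrative only):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
s = sp.Symbol('s', positive=True)

f = (-t**2 * sp.sin(t) / 8
     - sp.Rational(3, 8) * t * sp.cos(t)
     + sp.Rational(3, 8) * sp.sin(t))
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F - 1 / (s**2 + 1)**3))   # 0
```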

Last but not least, it must be stressed again that this method is applicable if F(z) → 0
uniformly as |z| goes to infinity in all directions. It is not applicable as it stands, for
example, if F(s) = e^{−5s}/(s − 3), because e^{−5z}/(z − 3) is unbounded as z → ∞ along the
negative real axis. On the other hand, the shift theorem yields immediately
\mathcal{L}^{-1}\left[ e^{-5s}/(s-3) \right] = U(t-5)\, e^{3t-15}, where U is Heaviside's unit
step function.

PROBLEMS

Fourier Transforms
64. Find the Fourier transform of the following functions.
(a) f(x) = 2 if −2 < x < 1;  −1 if 1 < x < 3;  0 if x < −2 or x > 3.
(b) f(x) = 0 if x is negative;  e^{−x} if x is positive.
(c) f(x) = sin πx if 0 < x < 1;  0 if x < 0 or x > 1.
(d) f(x) = e^{−|x|} sin x
65. Find the Fourier transform of the following functions.
(a) (sin x + sin |x|) e^{−x}   (b) e^{−|x−2|}
66. Find the Fourier transforms of the following functions by making appropriate use of Jordan's
lemma.
(a) 1/(x² − 2x + 10)   (b) x³/(x⁴ + 64)   (c) 1/(x² + π²)²   (d) x³/(x² + 1)²
67. Find the Fourier transform of f(x) = sin x/x using example 68.
68. Find the Fourier transform of f(x) = sin²x/x.
69. Find the Fourier transform of f(x) = e^{x/2}/cosh x by using the calculus of residues in a
similar manner to example 80.
Fourier-cosine Transforms
70. Find the following Fourier-cosine transforms (always assume k ≥ 0):
(a) f(x) = cos x if 0 < x < 2π;  0 everywhere else;   (b) f(x) = x² e^{−x};
(c) f(x) = x²/(x⁴ + 4);   (d) f(x) = 1/√x.

71. Find the Fourier-cosine transform of f(x) = |x − 3| − |x − 2| + 1.


Fourier-sine Transforms
72. Find the following Fourier-sine transforms (always assume k ≥ 0):

(a) f(x) = x/(1 + x²)   (b) f(x) = 1/(x + x³)

(c) f(x) = x^{−3/2}   (d) f(x) = (1 − cos 3x)/x

73. Find the Fourier-sine transform of f(x) = |x − 3| − |x − 2| + 1.


The Heat Equation Over Infinite Domains
74. Solve the heat equation u_t = αu_xx for t > 0 and x ranging from 0 to ∞, given that
u(0, t) ≡ 0 at all times, |u| → 0 as x → ∞, and initially u(x, 0) = x/(1 + x²).
75. Solve the heat equation u_t = αu_xx for t > 0 and x ranging from 0 to ∞, given that
u_x(0, t) ≡ 0 at all times, |u| → 0 as x → ∞, and initially u(x, 0) = sin x/x.
76. Solve the heat equation u_t = αu_xx for t > 0 and x ranging from −∞ to ∞, given that |u|
tends to zero as x → ±∞, and initially u(x, 0) = |sin πx| if −1 ≤ x ≤ 1; u(x, 0) = 0 elsewhere.
77. Solve the heat equation u_t = αu_xx for t > 0 and x ranging from −∞ to ∞, given that |u|
tends to zero as x → ±∞, and initially u(x, 0) = δ(x).
The Laplace Equation Over Infinite Domains
78. Solve the Laplace equation ∇²u = 0 for x and y ranging from 0 to ∞ (i.e., in the first
quadrant), knowing that u(0, y) ≡ 0 for every y, |u| is bounded as x → ∞, and on the positive
x axis u(x, 0) = 3 if 0 < x < 5; u(x, 0) = 0 elsewhere.
79. Solve the Laplace equation ∇²u = 0 in the semi-infinite strip 0 < x < ∞, 0 < y < 1, under
the conditions that |u| is bounded as x → ∞, u(0, y) = y(1 − y), and:
(a) u ≡ 0 on the top side (y = 1) and the bottom side (y = 0);
(b) u_y ≡ 0 on the top side (y = 1) and the bottom side (y = 0).
80. Solve the Laplace equation ∇²u = 0 in the semi-infinite strip 0 < x < ∞, 0 < y < 3, given
that |u| is bounded as x → ∞, u(0, y) ≡ 0 on the vertical side, u_y ≡ 0 on the bottom side
(y = 0), and u(x, 3) = x/(1 + x²) on the top side (y = 3).
81. Solve the Laplace equation ∇²u = 0 in the upper half-plane 0 < y < ∞, under the conditions
that |u| is bounded as y → ∞, and u(x, 0) = sin x/x on the x axis.

ANSWERS

64 (a) (2e^{i2k} − 3e^{−ik} + e^{−i3k})/(i2πk)   (b) (1 − ik)/(2π(1 + k²))
   (c) (1 + e^{−ik})/(2(π² − k²))   (d) 4k/(i2π(k⁴ + 4))
65 (a) (2 − k² − i2k)/(π(k⁴ + 4))   (b) e^{−i2k}/(π(1 + k²))
 ¡ ¢
e −3|k|−ik  1 i e−|2k| cos 2k if k < 0, e−|k|π 1 + |k|π
2
66 (a) (b) F (k) = (c) .
6  − 1 i e−|2k| cos 2k if k > 0. 4π 3
2

 i(2 − |k|) e
1 −|k|
if k < 0,
4
(d) F (k) =
 − 1 i(2 − |k|) e−|k| if k > 0.
4

67 F(k) = 0 if k < −1;  1/2 if −1 < k < 1;  0 if k > 1.
68 F(k) = 0 if k < −2;  i/4 if −2 < k < 0;  −i/4 if 0 < k < 2;  0 if k > 2.
69 F(k) = 1/(2 cosh(iπ/4 + kπ/2))
70 (a) 2k sin 2πk/(π(k² − 1))   (b) 4(1 − 3k²)/(π(1 + k²)³)   (c) e^{−k} cos(k + π/4)/√2
   (d) √(2/kπ); revise Fresnel integrals, equation (42)
71 (4/π)(cos 2k − cos 3k)/k²

72 (a) e^{−k}   (b) 1 − e^{−k}   (c) 2√(2k/π); see equation (42)
   (d) F(k) = 1 if 0 < k < 3;  0 if k > 3.
73 (4/π)(1/k − (sin 3k − sin 2k)/k²)
74 u = ∫₀^∞ e^{−αk²t − k} sin kx dk

75 u = ∫₀¹ e^{−αk²t} cos kx dk

76 u = ∫_{−∞}^{∞} (1 + cos k) e^{−αk²t + ikx}/(π² − k²) dk
     = 2∫₀^∞ (1 + cos k) e^{−αk²t} cos kx/(π² − k²) dk

77 u = (1/2π) ∫_{−∞}^{∞} e^{−αk²t + ikx} dk = e^{−x²/4αt}/√(4παt)  (revise example 81).

78 u = (6/π) ∫₀^∞ e^{−ky}(1 − cos 5k) sin kx/k dk. This integral may be done using Laplace
transform techniques (do it as an exercise).

79 (a) u = Σ_{k odd} (8/(k³π³)) e^{−kπx} sin kπy   (b) u = 1/6 − Σ_{k even} (4/(k²π²)) e^{−kπx} cos kπy

80 u = ∫₀^∞ sin kx cosh ky/(e^k cosh 3k) dk

81 u = [y(1 − e^{−y} cos x) + x e^{−y} sin x]/(x² + y²)
