
# Lectures on Differential Equations

Philip Korman
Department of Mathematical Sciences
University of Cincinnati
Cincinnati Ohio 45221-0025
Copyright © 2008 by Philip L. Korman
Contents

Introduction

1 First Order Equations
1.1 Integration by Guess-and-Check
1.2 First Order Linear Equations
1.2.1 Background
1.2.2 Integrating Factor
1.3 Separable Equations
1.3.1 Background
1.3.2 The method
1.3.3 Problems
1.4 Some Special Equations
1.4.1 Homogeneous Equations
1.4.2 The Logistic Population Model
1.4.3 Bernoulli's Equation
1.4.4 Riccati's Equation
1.4.5 Parametric Integration
1.4.6 Some Applications
1.5 Exact Equations
1.6 Existence and Uniqueness of Solution
1.7 Numerical Solution by Euler's method
1.7.1 Problems

2 Second Order Equations
2.1 Special Second Order Equations
2.1.1 y is not present in the equation
2.1.2 x is not present in the equation
2.2 Linear Homogeneous Equations with Constant Coefficients
2.2.1 Characteristic equation has two distinct real roots
2.2.2 Characteristic equation has only one (repeated) real root
2.3 Characteristic equation has two complex conjugate roots
2.3.1 Euler's formula
2.3.2 The General Solution
2.3.3 Problems
2.4 Linear Second Order Equations with Variable Coefficients
2.5 Some Applications of the Theory
2.5.1 The Hyperbolic Sine and Cosine Functions
2.5.2 Different Ways to Write a General Solution
2.5.3 Finding the Second Solution
2.5.4 Problems
2.6 Non-homogeneous Equations. The easy case.
2.7 Non-homogeneous Equations. When one needs something extra.
2.8 Variation of Parameters
2.9 Convolution Integral
2.9.1 Differentiating Integrals
2.9.2 Yet Another Way to Compute a Particular Solution
2.10 Applications of Second Order Equations
2.10.1 Vibrating Spring
2.10.2 Problems
2.10.3 Meteor Approaching the Earth
2.10.4 Damped Oscillations
2.11 Further Applications
2.11.1 Forced and Damped Oscillations
2.11.2 Discontinuous Forcing Term
2.11.3 Oscillations of a Pendulum
2.11.4 Sympathetic Oscillations
2.12 Oscillations of a Spring Subject to a Periodic Force
2.12.1 Fourier Series
2.12.2 Vibrations of a Spring Subject to a Periodic Force
2.13 Euler's Equation
2.13.1 Preliminaries
2.13.2 The Important Class of Equations
2.14 Linear Equations of Order Higher Than Two
2.14.1 The Polar Form of Complex Numbers
2.14.2 Linear Homogeneous Equations
2.14.3 Non-Homogeneous Equations
2.14.4 Problems

3 Using Infinite Series to Solve Differential Equations
3.1 Series Solution Near a Regular Point
3.1.1 Maclaurin and Taylor Series
3.1.2 A Toy Problem
3.1.3 Using Series When Other Methods Fail
3.2 Solution Near a Mildly Singular Point
3.2.1 Derivation of $J_0(x)$ by differentiation of the equation
3.3 Moderately Singular Equations
3.3.1 Problems

4 Laplace Transform
4.1 Laplace Transform And Its Inverse
4.1.1 Review of Improper Integrals
4.1.2 Laplace Transform
4.1.3 Inverse Laplace Transform
4.2 Solving The Initial Value Problems
4.2.1 Step Functions
4.3 The Delta Function and Impulse Forces
4.4 Convolution and the Tautochrone curve
4.4.1 Problems

5 Systems of Differential Equations
5.1 The Case of Distinct Eigenvalues
5.1.1 Review of Vectors and Matrices
5.1.2 Linear First Order Systems with Constant Coefficients
5.2 A Pair of Complex Conjugate Eigenvalues
5.2.1 Complex Valued Functions
5.2.2 General Solution
5.2.3 Problems
5.3 Predator-Prey Interaction
5.4 An application to epidemiology
5.5 Lyapunov stability
5.6 Exponential of a matrix
5.6.1 Problems

6 Fourier Series and Boundary Value Problems
6.1 Fourier series for functions of an arbitrary period
6.1.1 Even and odd functions
6.1.2 Further examples and the convergence theorem
6.2 Fourier cosine and Fourier sine series
6.3 Two point boundary value problems
6.3.1 Problems
6.4 Heat equation and separation of variables
6.4.1 Separation of variables
6.5 Laplace's equation
6.6 The wave equation
6.6.1 Non-homogeneous problems
6.6.2 Problems
6.7 Calculating the Earth's temperature
6.7.1 The complex form of the Fourier series
6.7.2 Temperatures inside the Earth and wine cellars
6.8 Laplace's equation in circular domains
6.9 Sturm-Liouville problems
6.9.1 Fourier-Bessel series
6.9.2 Cooling of a cylindrical tank
6.10 Green's function
6.11 Fourier Transform
6.12 Problems on infinite domains
6.12.1 Evaluation of some integrals
6.12.2 Heat equation for $-\infty < x < \infty$
6.12.3 Steady state temperatures for the upper half plane
6.12.4 Using Laplace transform for a semi-infinite string
6.12.5 Problems

7 Numerical Computations
7.1 The capabilities of software systems, like Mathematica
7.2 Solving boundary value problems
7.3 Solving nonlinear boundary value problems
7.4 Direction fields
Introduction

This book is based on several courses that I taught at the University of Cincinnati. Chapters 1-4 are based on the course "Differential Equations" for sophomores in science and engineering. These are actual lecture notes: the theorems are either quoted, or just mentioned without proof, and I have tried to show all of the details when doing problems. I have tried to use plain language, and not to be too wordy. I think an extra word of explanation often has as much potential to confuse a student as to be helpful.

How useful are differential equations? Here is what Isaac Newton said: "It is useful to solve differential equations." And what he knew was just the beginning. Today differential equations are used widely in science and engineering. It is also a beautiful subject, and it lets students see Calculus in action.

It is a pleasure to thank Ken Meyer and Dieter Schmidt for constant encouragement while I was writing this book. I also wish to thank Ken for several specific suggestions, like doing Fourier series early, with applications to periodic vibrations and radio tuning. I wish to thank Roger Chalkley,
Chapter 1
First Order Equations
1.1 Integration by Guess-and-Check
Many problems in differential equations end with a computation of an integral. One even uses the term "integration of a differential equation" instead of "solution". We need to be able to compute integrals quickly.

Recall the product rule

$(fg)' = fg' + f'g$.
Example $\int xe^x\,dx$. We need to find the function whose derivative is $xe^x$. If we try a guess: $xe^x$, then its derivative

$(xe^x)' = xe^x + e^x$

has an extra term $e^x$. To remove this extra term, we subtract $e^x$ from the initial guess, i.e.,

$\int xe^x\,dx = xe^x - e^x + c$.

By differentiation, we verify that this is correct. Of course, integration by parts may also be used.
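The guess-and-check answer is easy to verify numerically as well. A small Python sketch (the helper names `F`, `f` and `num_deriv` are our own, not the text's) compares a difference quotient of the candidate antiderivative with the integrand:

```python
import math

def F(x):
    # candidate antiderivative found by guess-and-check
    return x * math.exp(x) - math.exp(x)

def f(x):
    # the integrand x e^x
    return x * math.exp(x)

def num_deriv(F, x, h=1e-6):
    # symmetric difference quotient approximating F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

for x in [-1.0, 0.5, 2.0]:
    assert abs(num_deriv(F, x) - f(x)) < 1e-5
```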
Example $\int x\cos 3x\,dx$. Starting with the initial guess $\frac{1}{3}x\sin 3x$, whose derivative is $x\cos 3x + \frac{1}{3}\sin 3x$, we compute

$\int x\cos 3x\,dx = \frac{1}{3}x\sin 3x + \frac{1}{9}\cos 3x + c$.

We see that the initial guess is the product $f(x)g(x)$, chosen in such a way that $f(x)g'(x)$ gives the integrand.

Example $\int xe^{5x}\,dx$. Starting with the initial guess $\frac{1}{5}xe^{5x}$, we compute

$\int xe^{5x}\,dx = \frac{1}{5}xe^{5x} - \frac{1}{25}e^{5x} + c$.
Example $\int x^2\sin 3x\,dx$. The initial guess is $-\frac{1}{3}x^2\cos 3x$. Its derivative

$\left(-\frac{1}{3}x^2\cos 3x\right)' = x^2\sin 3x - \frac{2}{3}x\cos 3x$

has an extra term $-\frac{2}{3}x\cos 3x$. To remove this term, we modify our guess: $-\frac{1}{3}x^2\cos 3x + \frac{2}{9}x\sin 3x$. Its derivative

$\left(-\frac{1}{3}x^2\cos 3x + \frac{2}{9}x\sin 3x\right)' = x^2\sin 3x + \frac{2}{9}\sin 3x$

still has an extra term $\frac{2}{9}\sin 3x$. So we make the final adjustment

$\int x^2\sin 3x\,dx = -\frac{1}{3}x^2\cos 3x + \frac{2}{9}x\sin 3x + \frac{2}{27}\cos 3x + c$.

This is easier than integrating by parts twice.
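Because two corrections were made, a numerical check is reassuring. The following Python sketch (helper names are ours) differentiates the final answer and compares it with $x^2\sin 3x$:

```python
import math

def F(x):
    # antiderivative built by repeated guess-and-check
    return (-x**2 * math.cos(3*x) / 3
            + 2 * x * math.sin(3*x) / 9
            + 2 * math.cos(3*x) / 27)

def f(x):
    # the integrand x^2 sin(3x)
    return x**2 * math.sin(3*x)

def num_deriv(F, x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2 * h)

for x in [-2.0, 0.3, 1.7]:
    assert abs(num_deriv(F, x) - f(x)) < 1e-4
```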
Example $\int x\sqrt{x^2+4}\,dx$. We begin by rewriting this integral as

$\int x\left(x^2+4\right)^{1/2}\,dx$.

One usually computes this integral by a substitution $u = x^2+4$, with $du = 2x\,dx$. Forgetting a constant multiple, the integral becomes $\int u^{1/2}\,du$. Ignoring a constant multiple again, this evaluates to $u^{3/2}$. Returning to the original variable, we have our initial guess $\left(x^2+4\right)^{3/2}$. Differentiation

$\frac{d}{dx}\left(x^2+4\right)^{3/2} = 3x\sqrt{x^2+4}$

gives us the integrand with an extra factor of 3. To fix that, we multiply the initial guess by $\frac{1}{3}$:

$\int x\sqrt{x^2+4}\,dx = \frac{1}{3}\left(x^2+4\right)^{3/2} + c$.
Example $\int \frac{1}{(x^2+1)(x^2+4)}\,dx$. Instead of using partial fractions, let us try to split the integrand as

$\frac{1}{x^2+1} - \frac{1}{x^2+4}$.

We see that this is off by a factor. The correct formula is

$\frac{1}{(x^2+1)(x^2+4)} = \frac{1}{3}\left(\frac{1}{x^2+1} - \frac{1}{x^2+4}\right)$.

Then

$\int \frac{1}{(x^2+1)(x^2+4)}\,dx = \frac{1}{3}\tan^{-1}x - \frac{1}{6}\tan^{-1}\frac{x}{2} + c$.
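The splitting identity itself can be spot-checked numerically. A short Python sketch (function names `lhs`, `rhs` are ours) compares the integrand with the guessed splitting, including the correcting factor $\frac{1}{3}$:

```python
def lhs(x):
    # the integrand
    return 1.0 / ((x**2 + 1) * (x**2 + 4))

def rhs(x):
    # the guessed splitting, with the correcting factor 1/3
    return (1.0/3) * (1.0/(x**2 + 1) - 1.0/(x**2 + 4))

for x in [-1.5, 0.0, 0.7, 3.0]:
    assert abs(lhs(x) - rhs(x)) < 1e-12
```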
Sometimes one can guess the splitting twice, as in the following case.

Example $\int \frac{1}{x^2(1-x^2)}\,dx$. Here

$\frac{1}{x^2(1-x^2)} = \frac{1}{x^2} + \frac{1}{1-x^2} = \frac{1}{x^2} + \frac{1}{(1-x)(1+x)} = \frac{1}{x^2} + \frac{1}{2}\,\frac{1}{1-x} + \frac{1}{2}\,\frac{1}{1+x}$.

Then

$\int \frac{1}{x^2(1-x^2)}\,dx = -\frac{1}{x} - \frac{1}{2}\ln(1-x) + \frac{1}{2}\ln(1+x) + c$.
1.2 First Order Linear Equations

1.2.1 Background

Suppose we need to find the function $y(x)$ so that

$y'(x) = x$.

This is a differential equation, because it involves a derivative of the unknown function. This is a first order equation, as it only involves the first derivative. The solution is of course

$y(x) = \frac{x^2}{2} + c$,    (2.1)

where $c$ is an arbitrary constant. We see that differential equations have infinitely many solutions. The formula (2.1) gives us the general solution. Then we can select the one that satisfies an extra initial condition. For example, for the problem

$y'(x) = x, \quad y(0) = 5$    (2.2)

we begin with the general solution given in formula (2.1), and then evaluate it at $x = 0$:

$y(0) = c = 5$.

So that $c = 5$, and the solution of the problem (2.2) is

$y(x) = \frac{x^2}{2} + 5$.

The problem (2.2) is an example of an initial value problem. If the variable $x$ represents time, then the value of $y(x)$ at the initial time is prescribed to be 5. The initial condition may be prescribed at other values of $x$, as in the following example:

$y' = y, \quad y(1) = 2e$.

Here the initial condition is prescribed at $x = 1$; $e$ denotes the Euler number $e \approx 2.718$. Observe that while $y$ and $y'$ are both functions of $x$, we do not spell this out. This problem is still in the Calculus realm. Indeed, we are looking for a function $y(x)$ whose derivative is the same as $y(x)$. This is a property of the function $e^x$, and its constant multiples. I.e., the general solution is

$y(x) = ce^x$,

and then the initial condition gives

$y(1) = ce = 2e$,

so that $c = 2$. The solution is then

$y(x) = 2e^x$.

We see that the main thing is finding the general solution. Selecting $c$ to satisfy the initial condition is usually easy.
Recall from Calculus that

$\frac{d}{dx}\, e^{g(x)} = e^{g(x)} g'(x)$.

In case $g(x)$ is an integral, we have

$\frac{d}{dx}\, e^{\int p(x)\,dx} = p(x)\, e^{\int p(x)\,dx}$,    (2.3)

because the derivative of the integral is $p(x)$.
1.2.2 Integrating Factor

Let us find the general solution of the equation

$y' + p(x)y = g(x)$,    (2.4)

where $p(x)$ and $g(x)$ are given functions. This is a linear equation, as we have a linear function of $y$ and $y'$. Because we know $p(x)$, we can calculate the function

$\mu(x) = e^{\int p(x)\,dx}$,

and its derivative

$\mu'(x) = p(x)\, e^{\int p(x)\,dx} = p(x)\mu$.    (2.5)

We now multiply the equation (2.4) by $\mu(x)$, giving

$\mu y' + \mu y\, p(x) = \mu g(x)$.    (2.6)

Let us use the product rule and the formula (2.5) to calculate the derivative

$\frac{d}{dx}\left[\mu y\right] = \mu y' + \mu' y = \mu y' + \mu y\, p(x)$.

So, we may rewrite (2.6) in the form

$\frac{d}{dx}\left[\mu y\right] = \mu g(x)$.    (2.7)

This allows us to compute the general solution. Indeed, we know the function on the right. By integration we express $\mu(x)y(x) = \int \mu(x) g(x)\,dx$, and then solve for $y(x)$.

In practice one needs to memorize the formula for $\mu(x)$ and the form (2.7). Computation of $\mu(x)$ will involve a constant of integration. We will always set $c = 0$, because the method works for any $c$.
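The recipe above also works numerically: integrating (2.7) from $x_0$ gives $\mu(x)y(x) = y_0 + \int_{x_0}^x \mu g$, since $\mu(x_0) = 1$ with the base point chosen at $x_0$. A minimal Python sketch (the function name `solve_linear` and the trapezoid discretization are our own choices, not from the text):

```python
import math

def solve_linear(p, g, x0, y0, x1, n=10000):
    """Integrating-factor method done numerically:
    mu(x) = exp(int_{x0}^x p),  y(x1) = (y0 + int_{x0}^{x1} mu*g) / mu(x1)."""
    h = (x1 - x0) / n
    P = 0.0                                  # running integral of p
    I = 0.0                                  # running integral of mu*g
    x = x0
    mug_prev = math.exp(P) * g(x0)
    for _ in range(n):
        x_new = x + h
        P += h * (p(x) + p(x_new)) / 2       # trapezoid step for int p
        mug_new = math.exp(P) * g(x_new)
        I += h * (mug_prev + mug_new) / 2    # trapezoid step for int mu*g
        x, mug_prev = x_new, mug_new
    return (y0 + I) / math.exp(P)
```

For instance, for the first example below ($y' + 2xy = x$, $y(0) = 2$), `solve_linear` reproduces the exact value $\frac{1}{2} + \frac{3}{2}e^{-x^2}$ at $x = 1$ to several digits.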
Example Solve

$y' + 2xy = x, \quad y(0) = 2$.

Here $p(x) = 2x$ and $g(x) = x$. Compute

$\mu(x) = e^{\int 2x\,dx} = e^{x^2}$.

Equation (2.7) takes the form

$\frac{d}{dx}\left[e^{x^2} y\right] = x e^{x^2}$.

Integrate both sides, and then perform integration by parts (or guess-and-check):

$e^{x^2} y = \int x e^{x^2}\,dx = \frac{1}{2} e^{x^2} + c$.

Solving for $y$,

$y(x) = \frac{1}{2} + c e^{-x^2}$.

From the initial condition

$y(0) = \frac{1}{2} + c = 2$,

i.e., $c = \frac{3}{2}$.

Answer: $y(x) = \frac{1}{2} + \frac{3}{2} e^{-x^2}$.
Example Solve

$y' + \frac{1}{t}\, y = \cos 2t, \quad y(\pi/2) = 1$.

Here the independent variable is $t$, $y = y(t)$, but the method is of course the same. Compute

$\mu(t) = e^{\int \frac{1}{t}\,dt} = e^{\ln t} = t$,

and then

$\frac{d}{dt}\left[t y\right] = t\cos 2t$.

Integrate both sides, and perform integration by parts:

$t y = \int t\cos 2t\,dt = \frac{1}{2} t\sin 2t + \frac{1}{4}\cos 2t + c$.

Divide by $t$:

$y(t) = \frac{1}{2}\sin 2t + \frac{\cos 2t}{4t} + \frac{c}{t}$.

The initial condition gives

$y(\pi/2) = -\frac{1}{4}\,\frac{1}{\pi/2} + \frac{c}{\pi/2} = 1$,

i.e.,

$c = \frac{\pi}{2} + \frac{1}{4}$,

and so the solution is

$y(t) = \frac{1}{2}\sin 2t + \frac{\cos 2t}{4t} + \frac{\pi/2 + 1/4}{t}$.    (2.8)

This function $y(t)$ gives us a curve, called the integral curve. The initial condition tells us that $y = 1$ when $t = \pi/2$, i.e., the point $(\pi/2, 1)$ lies on the integral curve. What is the maximal interval on which the solution (2.8) is valid? I.e., starting with the initial $t = \pi/2$, how far can we continue the solution to the left and to the right of the initial $t$? We see from (2.8) that the maximal interval is $(0, \infty)$. At $t = 0$ the solution $y(t)$ is undefined.
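As a check, the formula (2.8) can be plugged back into the equation numerically. This Python sketch (helper names ours) verifies both the initial condition and the residual $y' + y/t - \cos 2t$:

```python
import math

c = math.pi / 2 + 0.25

def y(t):
    # the solution (2.8)
    return 0.5 * math.sin(2*t) + math.cos(2*t) / (4*t) + c / t

def residual(t, h=1e-6):
    # how far y fails to satisfy y' + y/t = cos(2t); should be ~0
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + y(t) / t - math.cos(2*t)

assert abs(y(math.pi / 2) - 1.0) < 1e-12
for t in [0.5, 1.0, 3.0]:
    assert abs(residual(t)) < 1e-5
```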
Example Solve

$x\frac{dy}{dx} + 2y = \sin x, \quad y(\pi) = 2$.

Here the equation is not in the form (2.4), for which the theory applies. We divide the equation by $x$:

$\frac{dy}{dx} + \frac{2}{x}\, y = \frac{\sin x}{x}$.

Now the equation is in the right form, with $p(x) = \frac{2}{x}$ and $g(x) = \frac{\sin x}{x}$. As before, we compute, using the properties of logarithms,

$\mu(x) = e^{\int \frac{2}{x}\,dx} = e^{2\ln x} = e^{\ln x^2} = x^2$.

And then

$\frac{d}{dx}\left[x^2 y\right] = x^2\,\frac{\sin x}{x} = x\sin x$.

Integrate both sides, and perform integration by parts:

$x^2 y = \int x\sin x\,dx = -x\cos x + \sin x + c$,

giving us the general solution

$y(x) = -\frac{\cos x}{x} + \frac{\sin x}{x^2} + \frac{c}{x^2}$.

The initial condition implies

$y(\pi) = \frac{1}{\pi} + \frac{c}{\pi^2} = 2$.

Solve for $c$:

$c = 2\pi^2 - \pi$.

Answer: $y(x) = -\dfrac{\cos x}{x} + \dfrac{\sin x}{x^2} + \dfrac{2\pi^2 - \pi}{x^2}$. This solution is valid on the interval $(0, \infty)$ (that is how far it can be continued to the left and to the right, starting from the initial $x = \pi$).
Example Solve

$\frac{dy}{dx} = \frac{1}{y - x}, \quad y(1) = 0$.

We have a problem: not only is this equation not in the right form, it is a nonlinear equation, because $\frac{1}{y-x}$ is not a linear function of $y$, no matter what the number $x$ is. We need a little trick. Let us pretend that $dy$ and $dx$ are numbers, and take reciprocals of both sides of the equation:

$\frac{dx}{dy} = y - x$,

or

$\frac{dx}{dy} + x = y$.

Let us now think of $y$ as the independent variable, and $x$ as a function of $y$, i.e., $x = x(y)$. Then the last equation is linear, with $p(y) = 1$ and $g(y) = y$. We proceed as usual: $\mu(y) = e^{\int 1\,dy} = e^y$, and

$\frac{d}{dy}\left[e^y x\right] = y e^y$.

Integrating,

$e^y x = \int y e^y\,dy = y e^y - e^y + c$,

or

$x(y) = y - 1 + c e^{-y}$.

To find $c$ we need an initial condition. The original initial condition tells us that $y = 0$ for $x = 1$. For the inverse function $x(y)$ this translates to $x(0) = 1$. So that $c = 2$.

Answer: $x(y) = y - 1 + 2e^{-y}$ (see Figure 1.1).

Figure 1.1: The integral curve $x = y - 1 + 2e^{-y}$, with the initial point marked.
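The reciprocal trick can be verified on the answer itself. This Python sketch (helper names ours) checks the initial condition and that the curve satisfies $\frac{dx}{dy} = y - x$:

```python
import math

def x_of_y(y):
    # the integral curve found above
    return y - 1 + 2 * math.exp(-y)

def dx_dy(y, h=1e-6):
    # difference quotient approximating dx/dy
    return (x_of_y(y + h) - x_of_y(y - h)) / (2 * h)

assert abs(x_of_y(0.0) - 1.0) < 1e-12      # initial condition y(1) = 0, i.e. x(0) = 1
for y in [-0.5, 0.0, 1.0, 2.0]:
    assert abs(dx_dy(y) - (y - x_of_y(y))) < 1e-6
```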
The rigorous justification of this method is based on the formula for the derivative of an inverse function, which we recall next. Let $y = y(x)$ be some function, and $y_0 = y(x_0)$. Let $x = x(y)$ be its inverse function. Then $x_0 = x(y_0)$, and we have

$\frac{dx}{dy}(y_0) = \dfrac{1}{\frac{dy}{dx}(x_0)}$.
1.3 Separable Equations

1.3.1 Background

Suppose we have a function $F(y)$, and $y$ in turn depends on $x$, i.e., $y = y(x)$. So that in effect $F$ depends on $x$. To differentiate $F$ with respect to $x$ we use the Chain Rule from Calculus:

$\frac{d}{dx} F(y(x)) = F'(y(x))\,\frac{dy}{dx}$.
1.3.2 The method

Suppose we are given two functions $F(y)$ and $G(x)$, and let us use the corresponding lower case letters to denote their derivatives, i.e., $F'(y) = f(y)$ and $G'(x) = g(x)$. Our goal is to solve the equation (i.e., to find the general solution)

$f(y)\,\frac{dy}{dx} = g(x)$.    (3.1)

This is a nonlinear equation.

We begin by rewriting this equation, using the upper case functions:

$F'(y)\,\frac{dy}{dx} = G'(x)$.

Using the Chain Rule, we rewrite the equation as

$\frac{d}{dx} F(y) = \frac{d}{dx} G(x)$.

If the derivatives of two functions are the same, these functions differ by a constant, i.e.,

$F(y) = G(x) + c$.    (3.2)

This is the desired general solution! If one is lucky, it may be possible to solve this relation for $y$ as a function of $x$. If not, maybe one can solve for $x$ as a function of $y$. If both attempts fail, one can use an implicit plotting routine to draw the integral curves, i.e., the curves given by (3.2).

We now describe a simple procedure, which leads from the equation (3.1) to its solution (3.2). Let us pretend that $\frac{dy}{dx}$ is not a notation for a derivative, but a ratio of two numbers $dy$ and $dx$. Clearing the denominator in (3.1),

$f(y)\,dy = g(x)\,dx$.

We have separated the variables: everything involving $y$ is now on the left, while $x$ appears only on the right. Integrate both sides:

$\int f(y)\,dy = \int g(x)\,dx$,

which gives us immediately the solution (3.2).
Example Solve

$\frac{dy}{dx} = x(1 + y^2)$.

To separate the variables, we multiply by $dx$, and divide by $(1 + y^2)$:

$\int \frac{dy}{1+y^2} = \int x\,dx$.

I.e., the general solution is

$\arctan y = \frac{1}{2} x^2 + c$,

which we can also put into the form

$y = \tan\left(\frac{1}{2} x^2 + c\right)$.
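The general solution can be spot-checked for any value of $c$. A small Python sketch (the chosen value of `c` and the helper names are ours):

```python
import math

c = 0.3   # any constant works; this particular value is just for the check

def y(x):
    # the general solution y = tan(x^2/2 + c)
    return math.tan(x**2 / 2 + c)

def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [0.0, 0.5, 1.0]:
    # the equation y' = x (1 + y^2) should hold
    assert abs(num_deriv(y, x) - x * (1 + y(x)**2)) < 1e-4
```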
Example Solve

$\left(xy^2 + x\right) dx + e^x\,dy = 0$.

This is an example of a differential equation written in differential form. (Dividing through by $e^x\,dx$, we can put it into a familiar form

$\frac{dy}{dx} = -\frac{xy^2 + x}{e^x}$,

although there is no need to do that.)

By factoring, we are able to separate the variables:

$e^x\,dy = -x(y^2 + 1)\,dx$;

$\int \frac{dy}{y^2+1} = -\int x e^{-x}\,dx$;

$\tan^{-1} y = x e^{-x} + e^{-x} + c$.

Answer: $y = \tan\left(x e^{-x} + e^{-x} + c\right)$.
Recall that by the Main Theorem of Calculus, $\frac{d}{dx}\int_a^x f(t)\,dt = f(x)$, for any constant $a$. The integral $\int_a^x f(t)\,dt$ gives us an anti-derivative of $f(x)$, i.e., we may write $\int f(x)\,dx = \int_a^x f(t)\,dt + c$. Here we can let $c$ be an arbitrary constant, and $a$ be fixed, or the other way around.
Example Solve

$\frac{dy}{dx} = e^{x^2} y^2, \quad y(1) = 2$.

Separation of variables:

$\int \frac{dy}{y^2} = \int e^{x^2}\,dx$

gives on the right an integral that cannot be evaluated in elementary functions. We shall change it to a definite integral, as above. We choose $a = 1$, because the initial condition was given at $x = 1$:

$\int \frac{dy}{y^2} = \int_1^x e^{t^2}\,dt + c$;

$-\frac{1}{y} = \int_1^x e^{t^2}\,dt + c$.

When $x = 1$, we have $y = 2$, which means that $c = -\frac{1}{2}$.

Answer: $y = -\dfrac{1}{\int_1^x e^{t^2}\,dt - \frac{1}{2}}$. For any $x$, the integral $\int_1^x e^{t^2}\,dt$ can be quickly computed by a numerical integration method, e.g., by the trapezoidal method.
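Here is how that might look in practice, using the trapezoidal method mentioned above (a Python sketch; function names are ours):

```python
import math

def integral(x, n=2000):
    # trapezoidal approximation of int_1^x e^{t^2} dt
    h = (x - 1.0) / n
    total = (math.exp(1.0) + math.exp(x**2)) / 2
    for k in range(1, n):
        t = 1.0 + k * h
        total += math.exp(t**2)
    return total * h

def y(x):
    # solution of y' = e^{x^2} y^2, y(1) = 2
    return -1.0 / (integral(x) - 0.5)

assert abs(y(1.0) - 2.0) < 1e-9
# check the differential equation at x = 1.05 with a difference quotient
h = 1e-5
dy = (y(1.05 + h) - y(1.05 - h)) / (2 * h)
assert abs(dy - math.exp(1.05**2) * y(1.05)**2) < 1e-3
```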
1.3.3 Problems

I. Integrate by Guess-and-Check:

1. $\int x e^{5x}\,dx$.

2. $\int x\sin 3x\,dx$.

3. $\int x e^{-\frac{1}{2}x}\,dx$. Ans. $e^{-x/2}(-4 - 2x) + c$.

4. $\int x^2\cos 2x\,dx$. Ans. $\frac{1}{2}x\cos 2x + \left(\frac{1}{2}x^2 - \frac{1}{4}\right)\sin 2x + c$.

5. $\int \frac{x}{\sqrt{x^2+1}}\,dx$. Ans. $\sqrt{x^2+1} + c$.

6. $\int \frac{x}{(x^2+1)(x^2+2)}\,dx$. Ans. $\frac{1}{2}\ln\left(x^2+1\right) - \frac{1}{2}\ln\left(x^2+2\right) + c$.

7. $\int \frac{1}{(x^2+1)(x^2+9)}\,dx$. Ans. $\frac{1}{8}\tan^{-1}x - \frac{1}{24}\tan^{-1}\frac{x}{3} + c$.

8. $\int \frac{(\ln x)^5}{x}\,dx$. Ans. $\frac{1}{6}(\ln x)^6 + c$.

9. $\int x^2 e^{x^3}\,dx$. Ans. $\frac{1}{3}e^{x^3} + c$.

10. $\int e^{2x}\sin 3x\,dx$. Ans. $e^{2x}\left(\frac{2}{13}\sin 3x - \frac{3}{13}\cos 3x\right) + c$.
(Hint: Look for the anti-derivative in the form $A e^{2x}\sin 3x + B e^{2x}\cos 3x$, and determine the constants $A$ and $B$ by differentiation.)
II. Find the general solution of the linear problems:

1. $y' + \frac{1}{x} y = \cos x$. Ans. $y = \frac{c}{x} + \sin x + \frac{\cos x}{x}$.

2. $x y' + 2y = e^{-x}$. Ans. $y = \frac{c}{x^2} - \frac{(x+1)e^{-x}}{x^2}$.

3. $x^4 y' + 3x^3 y = x^2 e^x$. Ans. $y = \frac{c}{x^3} + \frac{(x-1)e^x}{x^3}$.

4. $\frac{dy}{dx} = 2x(x^2 + y)$. Ans. $y = c e^{x^2} - x^2 - 1$.

5. $x y' - 2y = x e^{1/x}$. Ans. $y = c x^2 - x^2 e^{1/x}$.

6. $y' + 2y = \sin 3x$. Ans. $y = c e^{-2x} + \frac{2}{13}\sin 3x - \frac{3}{13}\cos 3x$.

7. $x\left(y y' - 1\right) = y^2$. (Hint: Set $v = y^2$. Then $v' = 2y y'$, and one obtains a linear equation for $v = v(x)$.) Ans. $y^2 = -2x + c x^2$.
III. Find the solution of the initial value problem, and state the maximum interval on which this solution is valid:

1. $y' + \frac{1}{x} y = \cos x$, $y(\frac{\pi}{2}) = 1$. Ans. $y = \frac{\cos x + x\sin x}{x}$; $(0, \infty)$.

2. $x y' + (2+x) y = 1$, $y(-2) = 0$. Ans. $y = \frac{1}{x} + \frac{3e^{-x-2}}{x^2} - \frac{1}{x^2}$; $(-\infty, 0)$.

3. $x(y' - y) = e^x$, $y(-1) = \frac{1}{e}$. Ans. $y = e^x\ln|x| + e^x$; $(-\infty, 0)$.

4. $(t+2)\frac{dy}{dt} + y = 5$, $y(1) = 1$. Ans. $y = \frac{5t-2}{t+2}$; $(-2, \infty)$.

5. $t y' - 2y = t^4\cos t$, $y(\pi/2) = 0$. Ans. $y = t^3\sin t + t^2\cos t - \frac{\pi}{2} t^2$; $(-\infty, \infty)$. The solution is valid for all $t$.

6. $t\ln t\,\frac{dr}{dt} + r = 5t e^t$, $r(2) = 0$. Ans. $r = \frac{5e^t - 5e^2}{\ln t}$; $(1, \infty)$.

7. $\frac{dy}{dx} = \frac{1}{y^2 + x}$, $y(2) = 0$.
Hint: Obtain a linear equation for $\frac{dx}{dy}$. Ans. $x = -2 + 4e^y - 2y - y^2$.

8. Find a solution of $y' + y = \sin 2t$ which is a periodic function.
Hint: Look for a solution in the form $y(t) = A\sin 2t + B\cos 2t$, plug this into the equation, and determine the constants $A$ and $B$.
Ans. $y = \frac{1}{5}\sin 2t - \frac{2}{5}\cos 2t$.

9. Show that the equation $y' + y = \sin 2t$ has no other periodic solutions.
Hint: Consider the equation that the difference of any two solutions satisfies.
IV. Solve by separating the variables:

1. $\frac{dy}{dx} = \frac{2}{x(y^3+1)}$. Ans. $\frac{y^4}{4} + y - 2\ln x = c$.

2. $e^x\,dx - y\,dy = 0$, $y(0) = 1$. Ans. $y = \sqrt{2e^x - 1}$.

3. $(x^2 y^2 + y^2)\,dx - yx\,dy = 0$. Ans. $y = e^{\frac{x^2}{2} + \ln x + c} = c x e^{\frac{x^2}{2}}$.

4. $y'(t) = t y^2 (1+t^2)^{-1/2}$, $y(0) = 2$. Ans. $y = \dfrac{2}{3 - 2\sqrt{t^2+1}}$.

5. $(y - xy + x - 1)\,dx + x^2\,dy = 0$, $y(1) = 0$. Ans. $y = \dfrac{e - x e^{1/x}}{e}$.

6. $y' = e^{x^2} y$, $y(2) = 1$. Ans. $y = e^{\int_2^x e^{t^2}\,dt}$.

7. $y' = x y^2 + x y$, $y(0) = 2$. Ans. $y = \dfrac{2 e^{\frac{x^2}{2}}}{3 - 2e^{\frac{x^2}{2}}}$.
1.4 Some Special Equations

1.4.1 Homogeneous Equations

Let $f(t)$ be a given function. If we set here $t = \frac{y}{x}$, we obtain a function $f(\frac{y}{x})$. Then $f(\frac{y}{x})$ is a function of two variables $x$ and $y$, but it depends on them in a special way. One calls such a function homogeneous. For example, $\frac{y-4x}{x-y}$ is a homogeneous function, because we can put it into the form

$\frac{y - 4x}{x - y} = \frac{y/x - 4}{1 - y/x}$,

i.e., here $f(t) = \frac{t-4}{1-t}$.

Our goal is to solve the equation

$\frac{dy}{dx} = f\left(\frac{y}{x}\right)$.    (4.1)

Set $v = \frac{y}{x}$. Since $y$ is a function of $x$, the same is true of $v = v(x)$. Solving for $y$, $y = xv$, and then by the product rule

$\frac{dy}{dx} = v + x\frac{dv}{dx}$.

Switching to $v$ in (4.1),

$v + x\frac{dv}{dx} = f(v)$.    (4.2)

This is a separable equation! Indeed, after taking $v$ to the right, we can separate the variables:

$\int \frac{dv}{f(v) - v} = \int \frac{dx}{x}$.

After solving this for $v(x)$, we can express the original unknown $y = xv(x)$. In practice, one should try to remember the formula (4.2).
Example Solve

$\frac{dy}{dx} = \frac{x^2 + 3y^2}{2xy}, \quad y(1) = -2$.

To see that the equation is homogeneous, we rewrite it as

$\frac{dy}{dx} = \frac{1}{2}\,\frac{x}{y} + \frac{3}{2}\,\frac{y}{x}$.

Set $v = \frac{y}{x}$, or $y = xv$. Observing that $\frac{x}{y} = \frac{1}{v}$, we have

$v + x\frac{dv}{dx} = \frac{1}{2}\,\frac{1}{v} + \frac{3}{2}\, v$.

Simplify:

$x\frac{dv}{dx} = \frac{1}{2}\,\frac{1}{v} + \frac{1}{2}\, v = \frac{1+v^2}{2v}$.

Separating variables,

$\int \frac{2v}{1+v^2}\,dv = \int \frac{dx}{x}$.

We now obtain the general solution, by doing the following steps (observe that $\ln c$ is another way to write an arbitrary constant):

$\ln(1+v^2) = \ln x + \ln c = \ln cx$;
$1 + v^2 = cx$;
$v = \pm\sqrt{cx - 1}$;
$y(x) = xv = \pm x\sqrt{cx - 1}$.

From the initial condition,

$y(1) = \pm\sqrt{c - 1} = -2$.

It follows that we need to select minus, and $c = 5$.

Answer: $y(x) = -x\sqrt{5x - 1}$.
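The answer can be verified against the original (not the rewritten) equation. A Python sketch (helper names ours):

```python
import math

def y(x):
    # the answer y = -x sqrt(5x - 1)
    return -x * math.sqrt(5*x - 1)

def num_deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

assert abs(y(1.0) + 2.0) < 1e-12           # initial condition y(1) = -2
for x in [0.5, 1.0, 2.0]:
    rhs = (x**2 + 3 * y(x)**2) / (2 * x * y(x))
    assert abs(num_deriv(y, x) - rhs) < 1e-5
```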
We mention next an alternative definition: the function $f(x, y)$ is called homogeneous if

$f(tx, ty) = f(x, y)$ for all constants $t$.

If this holds, then setting $t = \frac{1}{x}$, we see that

$f(x, y) = f(tx, ty) = f\left(1, \frac{y}{x}\right)$,

i.e., $f(x, y)$ is some function of $\frac{y}{x}$, as the old definition was saying.
Example Solve

$\frac{dy}{dx} = \frac{y}{x + \sqrt{xy}}$, with $x > 0$.

It is more straightforward to use the new definition to verify that the function $f(x, y) = \frac{y}{x + \sqrt{xy}}$ is homogeneous:

$f(tx, ty) = \frac{ty}{tx + \sqrt{(tx)(ty)}} = \frac{y}{x + \sqrt{xy}} = f(x, y)$.

Letting $y/x = v$, or $y = xv$, we rewrite the equation:

$v + xv' = \frac{xv}{x + \sqrt{x\cdot xv}} = \frac{v}{1 + \sqrt{v}}$.

We proceed to separate the variables:

$x\frac{dv}{dx} = \frac{v}{1+\sqrt{v}} - v = -\frac{v^{3/2}}{1+\sqrt{v}}$;

$\int \frac{1+\sqrt{v}}{v^{3/2}}\,dv = -\int \frac{dx}{x}$;

$-2v^{-1/2} + \ln v = -\ln x + c$.

The integral on the left was evaluated by division. Finally, we replace $v$ by $y/x$:

$-2\sqrt{\frac{x}{y}} + \ln\frac{y}{x} = -\ln x + c$;

$-2\sqrt{\frac{x}{y}} + \ln y = c$.

We have obtained an implicit representation of the solution.

When separating the variables, we had to assume that $v \ne 0$ (in order to divide by $v^{3/2}$). In case $v = 0$, we obtain another solution: $y = 0$.
1.4.2 The Logistic Population Model

Let $y(t)$ denote the number of rabbits on a tropical island at time $t$. The simplest model of population growth is

$y' = ay, \quad y(0) = y_0$.

This model assumes that initially the number of rabbits was equal to some number $y_0 > 0$, while the rate of change of population, i.e., $y'(t)$, is proportional to the number of rabbits. Here $a$ is a given constant. The population of rabbits grows, which results in a faster and faster rate of growth. One expects an explosive growth. Indeed, solving the equation, we get

$y(t) = c e^{at}$.

From the initial condition $y(0) = c = y_0$, which gives us $y(t) = y_0 e^{at}$, i.e., exponential growth. This is the notorious Malthusian model. Is it realistic? Yes, sometimes, for a limited time. If the initial number of rabbits $y_0$ is small, then for a while their number may grow exponentially.

A more realistic model, which can be used for a long time, is the Logistic Model:

$y' = ay - by^2, \quad y(0) = y_0$.

Here $y = y(t)$; $a$, $b$ and $y_0$ are positive constants that are given. If $y_0$ is small, then at first $y(t)$ is small. Then the $by^2$ term is negligible, and we have exponential growth. As $y(t)$ increases, this term is not negligible anymore, and we can expect the rate of growth to get smaller and smaller. (Writing the equation as $y' = (a - by)y$, we can regard the $a - by$ term as the rate of growth.) In case the initial number $y_0$ is large ($y_0 > a/b$), the right hand side of the equation is negative, i.e., $y'(t) < 0$, and the population decreases. We now solve the problem to confirm our guesses.

This problem can be solved by separation of variables. Instead, we use another technique. Divide both sides of the equation by $y^2$:

$y^{-2} y' = a y^{-1} - b$.

Introduce a new unknown function $v(t) = y^{-1}(t) = \frac{1}{y(t)}$. By the generalized power rule $v' = -y^{-2} y'$, so that we can rewrite the last equation as

$-v' = av - b$,

or

$v' + av = b$.

This is a linear equation for $v(t)$! To solve it, we follow the familiar steps, and

$\mu(t) = e^{\int a\,dt} = e^{at}$;

$\frac{d}{dt}\left[e^{at} v\right] = b e^{at}$;

$e^{at} v = b\int e^{at}\,dt = \frac{b}{a} e^{at} + c$;

$v = \frac{b}{a} + c e^{-at}$;
Figure 1.2: The solution of $y' = 5y - 2y^2$, $y(0) = 0.2$
$y(t) = \frac{1}{v} = \dfrac{1}{\frac{b}{a} + c e^{-at}}$.

To find the constant $c$, we use the initial condition:

$y(0) = \dfrac{1}{\frac{b}{a} + c} = y_0$;

$c = \frac{1}{y_0} - \frac{b}{a}$;

$y(t) = \dfrac{1}{\frac{b}{a} + \left(\frac{1}{y_0} - \frac{b}{a}\right) e^{-at}}$.

The problem is solved. Observe that $\lim_{t\to+\infty} y(t) = \frac{a}{b}$, no matter what initial value $y_0$ we take. The number $\frac{a}{b}$ is called the carrying capacity. It tells us the number of rabbits in the long run that our island will support. A typical solution curve, called the logistic curve, is given in Figure 1.2.
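The explicit solution can be checked numerically, using the values behind Figure 1.2 (a Python sketch; helper names are ours):

```python
import math

a, b, y0 = 5.0, 2.0, 0.2   # the values used for Figure 1.2

def y(t):
    # the explicit logistic solution derived above
    return 1.0 / (b/a + (1.0/y0 - b/a) * math.exp(-a*t))

assert abs(y(0.0) - y0) < 1e-12
# as t grows, y(t) approaches the carrying capacity a/b = 2.5
assert abs(y(10.0) - a/b) < 1e-6
# check the equation y' = a y - b y^2 at a few points
h = 1e-6
for t in [0.2, 1.0]:
    dy = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(dy - (a*y(t) - b*y(t)**2)) < 1e-4
```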
1.4.3 Bernoulli's Equation

Let us solve the equation

$y'(t) = p(t) y(t) + g(t) y^n(t)$.

Here $p(t)$ and $g(t)$ are given functions, $n$ is a given constant. We see that the logistic model above is just a particular example of Bernoulli's equation. We divide this equation by $y^n$:

$y^{-n} y' = p(t) y^{1-n} + g(t)$.

Introduce a new unknown function $v(t) = y^{1-n}(t)$. Compute $v' = (1-n) y^{-n} y'$, i.e., $y^{-n} y' = \frac{1}{1-n} v'$, and rewrite the equation as

$v' = (1-n) p(t) v + (1-n) g(t)$.

This is a linear equation for $v(t)$! After solving for $v(t)$, we calculate $y(t) = v^{\frac{1}{1-n}}(t)$.
Example Solve

$y' = y + \frac{t}{\sqrt{y}}$.

Writing this equation in the form $y' = y + t y^{-1/2}$, we see that this is a Bernoulli equation, with $n = -1/2$, i.e., we need to divide through by $y^{-1/2}$. But that is the same as multiplying through by $y^{1/2}$, which we do, obtaining

$y^{1/2} y' = y^{3/2} + t$.

We now let $v(t) = y^{3/2}$, $v'(t) = \frac{3}{2} y^{1/2} y'$, obtaining a linear equation for $v$, which we solve as usual:

$\frac{2}{3} v' = v + t$;  $v' - \frac{3}{2} v = \frac{3}{2} t$;

$\mu(t) = e^{-\int \frac{3}{2}\,dt} = e^{-\frac{3}{2}t}$;

$\frac{d}{dt}\left[e^{-\frac{3}{2}t} v\right] = \frac{3}{2} t e^{-\frac{3}{2}t}$;

$e^{-\frac{3}{2}t} v = \int \frac{3}{2} t e^{-\frac{3}{2}t}\,dt = -t e^{-\frac{3}{2}t} - \frac{2}{3} e^{-\frac{3}{2}t} + c$;

$v = -t - \frac{2}{3} + c e^{\frac{3}{2}t}$.

Returning to the original variable $y$, we have the answer: $y = \left(-t - \frac{2}{3} + c e^{\frac{3}{2}t}\right)^{2/3}$.
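One member of this family can be checked numerically. A Python sketch (the chosen constant `c` and the helper names are ours; `c` is picked so that $y$ stays positive near $t = 0$):

```python
import math

c = 2.0   # an arbitrary constant, chosen so y stays positive near t = 0

def y(t):
    # one member of the family y = (-t - 2/3 + c e^{3t/2})^{2/3}
    return (-t - 2.0/3 + c * math.exp(1.5*t)) ** (2.0/3)

def num_deriv(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

for t in [0.0, 0.5, 1.0]:
    # the equation y' = y + t / sqrt(y) should hold
    rhs = y(t) + t / math.sqrt(y(t))
    assert abs(num_deriv(y, t) - rhs) < 1e-4
```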
1.4.4

Riccatis Equation
Let us try to solve the equation
$$y'(t) + a(t)y(t) + b(t)y^2(t) = c(t).$$
Here $a(t)$, $b(t)$ and $c(t)$ are given functions. In case $c(t) = 0$, this is a Bernoulli equation, which we can solve. For general $c(t)$ one needs some luck to solve this equation. Namely, one needs to guess a particular solution $p(t)$. Then the substitution $y(t) = p(t) + z(t)$ produces a Bernoulli equation for $z(t)$, which we can solve.

There is no general way to find a particular solution, which means that one cannot always solve Riccati's equation. Occasionally one can get lucky.
Example Solve
$$y' + y^2 = t^2 - 2t.$$
We have a quadratic on the right, which suggests that we look for a solution in the form $y = at + b$. Plugging in, we get a quadratic on the left too. Equating the coefficients in $t^2$, in $t$, and the constant terms, we obtain three equations for $a$ and $b$. In general, three equations with two unknowns have no solution, but this is a lucky case: $a = -1$, $b = 1$, i.e., $p(t) = -t + 1$ is a particular solution. Substituting $y(t) = -t + 1 + v(t)$ into the equation, and simplifying, we get
$$v' + 2(1 - t)v = -v^2.$$
This is a Bernoulli equation. As before, we divide through by $v^2$, and then set $z = \frac{1}{v}$, $z' = -\frac{v'}{v^2}$, to get a linear equation:
$$v^{-2}v' + 2(1 - t)v^{-1} = -1; \quad z' - 2(1 - t)z = 1;$$
$$\mu = e^{-\int 2(1-t)\,dt} = e^{t^2 - 2t}; \quad \frac{d}{dt}\left(e^{t^2 - 2t}z\right) = e^{t^2 - 2t};$$
$$e^{t^2 - 2t}z = \int e^{t^2 - 2t}\,dt.$$
The last integral cannot be evaluated through elementary functions (Mathematica can evaluate it through a special function, called Erfi). So we leave this integral unevaluated. We then get $z$ from the last formula, after which we express $v$, and finally $y$. We have obtained a family of solutions:
$$y(t) = -t + 1 + \frac{e^{t^2 - 2t}}{\int e^{t^2 - 2t}\,dt}.$$
(The usual arbitrary constant $c$ is now inside the integral.) Another solution: $y = -t + 1$.
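The lucky guess is quick to verify: with $p(t) = -t + 1$ we have $p' = -1$, so $p' + p^2 = -1 + (1 - t)^2 = t^2 - 2t$, exactly the right-hand side. A one-loop check in Python:

```python
# p(t) = -t + 1 satisfies p' + p^2 = t^2 - 2t, since p'(t) = -1
for t in (-2.0, 0.0, 0.5, 3.0):
    p = -t + 1.0
    assert abs((-1.0 + p * p) - (t * t - 2.0 * t)) < 1e-12
```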
Example Solve
$$y' + 2y^2 = \frac{6}{t^2}. \qquad (4.3)$$
This time we look for a solution in the form $y(t) = a/t$. Plugging in, we determine $a = 2$, i.e., $p(t) = 2/t$ is a particular solution ($a = -3/2$ is also a possibility). The substitution $y(t) = 2/t + v(t)$ produces a Bernoulli equation
$$v' + \frac{8}{t}v + 2v^2 = 0.$$
Solving it as before, we obtain $v(t) = \frac{7}{ct^8 - 2t}$, and $v = 0$. Solutions:
$$y(t) = \frac{2}{t} + \frac{7}{ct^8 - 2t}, \quad \text{and also} \quad y = \frac{2}{t}.$$
Let us outline an alternative approach to this problem. Set $y = \frac{1}{z}$, where $z = z(t)$ is a new unknown function. Plugging into (4.3), then clearing the denominators, we have
$$-\frac{z'}{z^2} + 2\frac{1}{z^2} = \frac{6}{t^2}; \quad -z' + 2 = \frac{6z^2}{t^2}.$$
This is a homogeneous equation, which can be solved as before.

Think what other equations this neat trick would work for. (But do not try to publish your findings; this was done by Jacobi around two hundred years ago.)
There are some important ideas that we have learned in this subsection. Knowledge of one particular solution may help to crack open the equation, and get all of its solutions. Also, the form of this particular solution depends on the function on the right hand side of the equation.
1.4.5* Parametric Integration
Let us solve the problem (here $y = y(x)$)
$$y = \sqrt{1 - y'^2}, \quad y(0) = 1.$$
Unlike the previous problems, this equation is not solved for the derivative $y'(x)$. By solving for $y'(x)$, and then separating the variables, one may indeed find the solution. Instead, let us assume that
$$y'(x) = \sin t,$$
where $t$ is a parameter. From the equation,
$$y = \sqrt{1 - \sin^2 t} = \sqrt{\cos^2 t} = \cos t,$$
if we assume that $\cos t > 0$. Recall differentials: $dy = y'(x)\,dx$, or
$$dx = \frac{dy}{y'(x)} = \frac{-\sin t\,dt}{\sin t} = -dt,$$
i.e.,
$$x = -t + c.$$
We have obtained a family of solutions in a parametric form:
$$x = -t + c, \quad y = \cos t.$$
Solving for $t$, $t = c - x$, i.e., $y = \cos(c - x) = \cos(x - c)$. From the initial condition, $c = 0$, giving us the solution $y = \cos x$. This solution is valid on infinitely many disjoint intervals where $\cos x \geq 0$ (because we see from the equation that $y \geq 0$). This problem admits another solution: $y = 1$.
For the equation
$$y'^5 + y' = x$$
we do not have the option of solving for $y'(x)$. Parametric integration appears to be the only way to solve it. We let $y'(x) = t$, so that from the equation $x = t^5 + t$, and $dx = (5t^4 + 1)\,dt$. Then
$$dy = y'(x)\,dx = t(5t^4 + 1)\,dt.$$
I.e., $\frac{dy}{dt} = t(5t^4 + 1)$, which gives $y = \frac{5}{6}t^6 + \frac{1}{2}t^2 + c$. We have obtained a family of solutions in a parametric form:
$$x = t^5 + t, \quad y = \frac{5}{6}t^6 + \frac{1}{2}t^2 + c.$$
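Along a parametric curve the slope is $y'(x) = \frac{dy/dt}{dx/dt}$; for the family just found this ratio collapses to $t$, which is exactly why $y'^5 + y' = x$ holds. A quick numerical check:

```python
# x = t^5 + t,  y = (5/6) t^6 + (1/2) t^2 + c
for t in (-1.0, 0.3, 2.0):
    dxdt = 5 * t**4 + 1
    dydt = 5 * t**5 + t          # = t (5 t^4 + 1)
    slope = dydt / dxdt          # this is y'(x), and it equals t
    assert abs(slope - t) < 1e-12
    # the equation y'^5 + y' = x, with x = t^5 + t
    assert abs(slope**5 + slope - (t**5 + t)) < 1e-9
```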
1.4.6 Some Applications
Differential equations arise naturally in geometric and physical problems.

Example Find all positive decreasing functions $y = f(x)$ with the following property: the area of the triangle formed by the vertical line going down from the curve, the $x$-axis and the tangent line to this curve is constant, equal to $a > 0$.
Let $(x_0, f(x_0))$ be an arbitrary point on the graph of $y = f(x)$. Draw the triangle in question, formed by the vertical line $x = x_0$, the $x$-axis, and the tangent line to this curve. The tangent line intersects the $x$-axis at some point $x_1$ to the right of $x_0$, because $f(x)$ is decreasing. The slope of the tangent line is $f'(x_0)$, so that the point-slope equation of the tangent line is
$$y = f(x_0) + f'(x_0)(x - x_0).$$
At $x_1$, we have $y = 0$, i.e.,
$$0 = f(x_0) + f'(x_0)(x_1 - x_0).$$
Solve this for $x_1$: $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$. It follows that the horizontal side of our triangle is $-\frac{f(x_0)}{f'(x_0)}$, while the vertical side is $f(x_0)$. The area of the right triangle is then
$$-\frac{1}{2}\frac{f^2(x_0)}{f'(x_0)} = a.$$
(Observe that $f'(x_0) < 0$, so that the area is positive.) The point $x_0$ was arbitrary, so we replace it by $x$, and then we replace $f(x)$ by $y$, and $f'(x)$ by $y'$:
$$-\frac{1}{2}\frac{y^2}{y'} = a; \quad \text{or} \quad -\frac{y'}{y^2} = \frac{1}{2a}.$$
We solve this differential equation by taking antiderivatives of both sides:
$$\frac{1}{y} = \frac{x}{2a} + c; \quad y = \frac{2a}{x + 2ac}.$$
This is a family of hyperbolas. One of them is $y = \frac{2a}{x}$.
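The answer is easy to confirm: for $f(x) = 2a/x$ the triangle's area $-\frac{1}{2}f^2(x)/f'(x)$ comes out to $a$ at every point, as this short check shows.

```python
a = 3.0                       # any a > 0 would do
for x in (0.5, 1.0, 4.0):
    f = 2 * a / x
    fprime = -2 * a / x**2    # the slope is negative, as required
    area = -0.5 * f**2 / fprime
    assert abs(area - a) < 1e-9
```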
[Figure: the triangle formed by the tangent line, the line $x = x_0$, and the $x$-axis, with the point $(x_0, f(x_0))$ on the curve $y = f(x)$ and the intercept $x_1$]
Example A tank holding 10 L (liters) is originally filled with water. A salt-water mixture is pumped into the tank at a rate of 2 L per minute. This mixture contains 0.3 kg of salt per liter. The excess fluid is flowing out of the tank at the same rate (i.e., 2 L per minute). How much salt does the tank contain after 4 minutes?
Let $t$ be the time (in minutes) since the mixture started flowing, and let $y(t)$ denote the amount of salt in the tank at time $t$. The derivative $y'(t)$ gives the rate of change of salt per minute. The salt is pumped in at a rate of 0.6 kg per minute. The density of salt at time $t$ is $\frac{y(t)}{10}$ (i.e., each liter of solution in the tank contains $\frac{y(t)}{10}$ kg of salt). So the salt flows out at the rate $2\cdot\frac{y(t)}{10} = 0.2\,y(t)$ kg/min. The difference of these two rates is $y'(t)$, i.e.,
$$y' = 0.6 - 0.2y.$$
This is a linear differential equation. Initially, there was no salt in the tank, i.e., $y(0) = 0$ is our initial condition. Solving this equation together with the initial condition, we have $y(t) = 3 - 3e^{-0.2t}$. After 4 minutes we have $y(4) = 3 - 3e^{-0.8} \approx 1.65$ kg of salt in the tank.
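The numbers are easy to reproduce from the solution formula (the function name `salt` below is ours):

```python
import math

def salt(t):
    # solution of y' = 0.6 - 0.2 y with y(0) = 0
    return 3.0 - 3.0 * math.exp(-0.2 * t)

assert salt(0.0) == 0.0                 # no salt initially
assert abs(salt(4.0) - 1.65) < 0.01     # about 1.65 kg after 4 minutes
assert abs(salt(1000.0) - 3.0) < 1e-9   # in the long run the tank holds 3 kg
```

The long-run value 3 kg is just the inflow concentration 0.3 kg/L times the tank's 10 L.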
Now suppose a patient has food poisoning, and doctors are pumping in water to flush his stomach out. One can compute similarly the weight of poison left in the stomach at time $t$.
1.5 Exact Equations

Let us begin by recalling partial derivatives. If a function $f(x) = x^2 + a$ depends on a parameter $a$, then $f'(x) = 2x$. If $g(x) = x^2 + y^2$, with a parameter $y$, we have $\frac{dg}{dx} = 2x$. Another way to denote this derivative is $g_x = 2x$. We can also regard $g$ as a function of two variables, $g = g(x, y)$. Then the partial derivative with respect to $x$ is computed by regarding $y$ to be a parameter, $g_x = 2x$. Alternative notation: $\frac{\partial g}{\partial x} = 2x$. Similarly, the partial derivative with respect to $y$ is $g_y = \frac{\partial g}{\partial y} = 2y$. It gives us the rate of change in $y$, when $x$ is kept fixed.
The equation (here $y = y(x)$)
$$y^2 + 2xyy' = 0$$
can be easily solved if we rewrite it in the equivalent form
$$\frac{d}{dx}\left(xy^2\right) = 0.$$
Then $xy^2 = c$, and the general solution is
$$y(x) = \sqrt{\frac{c}{x}}.$$
We wish to play the same game for general equations of the form
$$M(x, y) + N(x, y)y'(x) = 0. \qquad (5.1)$$
Here the functions $M(x, y)$ and $N(x, y)$ are given. In the above example, $M = y^2$ and $N = 2xy$.

Definition The equation (5.1) is called exact if there is a function $\psi(x, y)$, so that we can rewrite (5.1) in the form
$$\frac{d}{dx}\psi(x, y) = 0. \qquad (5.2)$$
The general solution of the exact equation is
$$\psi(x, y) = c. \qquad (5.3)$$
There are two natural questions: when is the equation (5.1) exact, and if it is exact, how does one find $\psi(x, y)$?

Theorem 1 Assume that the functions $M$, $N$, $M_y$ and $N_x$ are continuous in some disc $D: (x - x_0)^2 + (y - y_0)^2 < r^2$. Then the equation (5.1) is exact in $D$ if and only if the following partial derivatives are equal:
$$M_y = N_x \quad \text{for all points } (x, y) \text{ in } D. \qquad (5.4)$$
This theorem says two things: if the equation is exact, then the partials are equal, and conversely, if the partials are equal, then the equation is exact.
Proof: 1. Assume the equation (5.1) is exact, i.e., it can be written in the form (5.2). Performing the differentiation in (5.2), we write it as
$$\psi_x + \psi_y y' = 0.$$
But this is the same equation as (5.1), i.e.,
$$\psi_x = M, \quad \psi_y = N.$$
Taking the second partials,
$$\psi_{xy} = M_y, \quad \psi_{yx} = N_x.$$
We know from Calculus that $\psi_{xy} = \psi_{yx}$, therefore $M_y = N_x$.

2. Assume that $M_y = N_x$. We will show that the equation (5.1) is then exact by producing $\psi(x, y)$. We have just seen that $\psi(x, y)$ must satisfy
$$\psi_x = M(x, y), \quad \psi_y = N(x, y). \qquad (5.5)$$
Take the anti-derivative in $x$ of the first equation:
$$\psi(x, y) = \int M(x, y)\,dx + h(y), \qquad (5.6)$$
where $h(y)$ is an arbitrary function of $y$. To determine $h(y)$, plug the last formula into the second line of (5.5):
$$\psi_y = \int M_y(x, y)\,dx + h'(y) = N(x, y),$$
or
$$h'(y) = N(x, y) - \int M_y(x, y)\,dx \equiv p(x, y). \qquad (5.7)$$
Observe that we have denoted by $p(x, y)$ the right side of the last equation. It turns out that $p(x, y)$ does not really depend on $x$! Indeed,
$$\frac{\partial}{\partial x}p(x, y) = N_x - M_y = 0,$$
because it was given to us that $M_y = N_x$. So $p(x, y)$ is a function of $y$ only; let us denote it $p(y)$. The equation (5.7) takes the form
$$h'(y) = p(y).$$
We integrate this to determine $h(y)$, which we then plug into (5.6) to get $\psi(x, y)$.

The equation
$$M(x, y)\,dx + N(x, y)\,dy = 0$$
is an alternative form of (5.1).
Example Solve
$$e^x \sin y + y^3 - (3x - e^x \cos y)\frac{dy}{dx} = 0.$$
Here $M(x, y) = e^x \sin y + y^3$, $N(x, y) = -3x + e^x \cos y$. Compute
$$M_y = e^x \cos y + 3y^2, \quad N_x = e^x \cos y - 3.$$
The partials are not the same, the equation is not exact, and our theory does not apply.
Example Solve
$$\left(\frac{y}{x} + 6x\right)dx + (\ln x - 2)\,dy = 0.$$
Here $M(x, y) = \frac{y}{x} + 6x$ and $N(x, y) = \ln x - 2$. Compute
$$M_y = \frac{1}{x} = N_x,$$
and so the equation is exact. To find $\psi(x, y)$, we observe that the equations (5.5) take the form
$$\psi_x = \frac{y}{x} + 6x, \quad \psi_y = \ln x - 2.$$
Take the anti-derivative in $x$ of the first equation:
$$\psi(x, y) = y\ln x + 3x^2 + h(y),$$
where $h(y)$ is an arbitrary function of $y$. Plug this into the second equation:
$$\psi_y = \ln x + h'(y) = \ln x - 2,$$
i.e.,
$$h'(y) = -2.$$
Integrating, $h(y) = -2y$, and so $\psi(x, y) = y\ln x + 3x^2 - 2y$, giving us the general solution
$$y\ln x + 3x^2 - 2y = c.$$
We can solve this equation for $y$: $y(x) = \frac{c - 3x^2}{\ln x - 2}$. Observe that when solving for $h(y)$, we chose the integration constant to be zero, because at the next step we set $\psi(x, y)$ equal to $c$, an arbitrary constant.
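As a numerical cross-check of this example, finite differences can stand in for the partial derivatives (the helper names below are ours):

```python
import math

M = lambda x, y: y / x + 6 * x
N = lambda x, y: math.log(x) - 2
psi = lambda x, y: y * math.log(x) + 3 * x**2 - 2 * y

h = 1e-6
for x, y in [(1.0, 2.0), (3.0, -1.0)]:
    # exactness: M_y = N_x
    My = (M(x, y + h) - M(x, y - h)) / (2 * h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    assert abs(My - Nx) < 1e-6
    # psi reproduces M and N: psi_x = M and psi_y = N
    assert abs((psi(x + h, y) - psi(x - h, y)) / (2 * h) - M(x, y)) < 1e-4
    assert abs((psi(x, y + h) - psi(x, y - h)) / (2 * h) - N(x, y)) < 1e-4
```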
Example Find the constant $b$, for which the equation
$$\left(ye^{2xy} + x\right)dx + bxe^{2xy}\,dy = 0$$
is exact, and then solve the equation with that $b$.

Setting equal the partials $M_y$ and $N_x$, we have
$$e^{2xy} + 2xye^{2xy} = be^{2xy} + 2bxye^{2xy}.$$
We see that $b = 1$. When $b = 1$ the equation becomes
$$\left(ye^{2xy} + x\right)dx + xe^{2xy}\,dy = 0,$$
and we know that it is exact. We look for $\psi(x, y)$ as before:
$$\psi_x = ye^{2xy} + x, \quad \psi_y = xe^{2xy}.$$
Just for practice, let us begin this time with the second equation. Taking the anti-derivative in $y$ in the second equation,
$$\psi(x, y) = \frac{1}{2}e^{2xy} + h(x),$$
where $h(x)$ is an arbitrary function of $x$. Plugging this into the first equation,
$$\psi_x = ye^{2xy} + h'(x) = ye^{2xy} + x.$$
This tells us that $h'(x) = x$, $h(x) = \frac{1}{2}x^2$, and then $\psi(x, y) = \frac{1}{2}e^{2xy} + \frac{1}{2}x^2$. Answer:
$$\frac{1}{2}e^{2xy} + \frac{1}{2}x^2 = c, \quad \text{or} \quad y = \frac{1}{2x}\ln(2c - x^2).$$
Exact equations are connected with conservative vector fields. Recall that a vector field $\mathbf{F}(x, y) = \langle M(x, y), N(x, y)\rangle$ is called conservative if there is a function $\psi(x, y)$, called the potential, such that $\mathbf{F}(x, y) = \nabla\psi(x, y)$. Recalling that $\nabla\psi(x, y) = \langle\psi_x, \psi_y\rangle$, we have $\psi_x = M$ and $\psi_y = N$, the same relations that we had for exact equations.
1.6 Existence and Uniqueness of Solution

Let us consider a general initial value problem
$$y' = f(x, y), \quad y(x_0) = y_0,$$
with a given function $f(x, y)$, and given numbers $x_0$ and $y_0$. There are two basic questions: is there a solution of this problem, and if there is, is the solution unique?
Theorem 2 Assume that the functions $f(x, y)$ and $f_y(x, y)$ are continuous in some neighborhood of the initial point $(x_0, y_0)$. Then there exists a solution, and there is only one solution. The solution $y = y(x)$ is defined on some interval $(x_1, x_2)$ that includes $x_0$.
One sees that the conditions of this theorem are not too restrictive, so
that the theorem tends to apply, providing us with existence and uniqueness
of solution. But not always!
Example Solve
$$y' = \sqrt{y}, \quad y(0) = 0.$$
The function $f(x, y) = \sqrt{y}$ is continuous (for $y \geq 0$), but its partial, $f_y(x, y) = \frac{1}{2\sqrt{y}}$, is not even defined at the initial point $(0, 0)$. The theorem does not apply. One checks that the function $y = \frac{x^2}{4}$ solves our initial value problem. But here is another solution: $y = 0$. (Having two different solutions of the same initial value problem is like having two primadonnas in the same theater.)

Observe that the theorem guarantees existence of the solution only on some interval (it is not "happily ever after").
Example Solve for $y = y(t)$
$$y' = y^2, \quad y(0) = 1.$$
Here $f(t, y) = y^2$ and $f_y(t, y) = 2y$ are continuous functions. The theorem applies. By separation of variables, we determine the solution
$$y(t) = \frac{1}{1 - t}.$$
As time $t$ approaches 1, this solution disappears, by going to infinity. This is sometimes called "blow up in finite time."
1.7 Numerical Solution by Euler's Method

We have learned a number of techniques for solving differential equations; however, the sad truth is that most equations cannot be solved analytically. Even a simple looking equation like
$$y' = x + y^3$$
is totally out of reach. Fortunately, if you need a specific solution, say the one satisfying the initial condition $y(0) = 1$, it can be easily approximated (we know that such a solution exists, and is unique).
In general we shall deal with the problem
$$y' = f(x, y), \quad y(x_0) = y_0.$$
Here the function $f(x, y)$ is given (in the example above we had $f(x, y) = x + y^3$), and the initial condition prescribes that the solution is equal to a given number $y_0$ at a given point $x_0$. Fix a step size $h$, and let $x_1 = x_0 + h$, $x_2 = x_0 + 2h, \ldots, x_n = x_0 + nh$. We will approximate $y(x_n)$, the value of the solution at $x_n$. We call this approximation $y_n$. To go from the point $(x_n, y_n)$ to the point $(x_{n+1}, y_{n+1})$ on the graph of the solution $y(x)$, we use the tangent line approximation:
$$y_{n+1} \approx y_n + y'(x_n)(x_{n+1} - x_n) = y_n + y'(x_n)h = y_n + f(x_n, y_n)h.$$
(We have expressed $y'(x_n) = f(x_n, y_n)$ from the differential equation.) The resulting formula is easy to implement; it is just one loop, starting with $(x_0, y_0)$.

One continues the computations until the point $x_n$ goes as far as needed. Decreasing the step size $h$ will improve the accuracy. Smaller $h$'s will require more steps, but with the power of modern computers that is not a problem, particularly for simple examples, as the one above. In that example $x_0 = 0$, $y_0 = 1$. If we choose $h = 0.05$, then $x_1 = 0.05$, and
$$y_1 = y_0 + f(x_0, y_0)h = 1 + (0 + 1^3)\cdot 0.05 = 1.05.$$
Continuing, we have $x_2 = 0.1$, and
$$y_2 = y_1 + f(x_1, y_1)h = 1.05 + (0.05 + 1.05^3)\cdot 0.05 \approx 1.11.$$
Then $x_3 = 0.15$, and
$$y_3 = y_2 + f(x_2, y_2)h = 1.11 + (0.1 + 1.11^3)\cdot 0.05 \approx 1.18.$$
If you need to approximate the solution on the interval $(0, 0.4)$, you will need to make 5 more steps. Of course, it is better to program a computer.
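The loop described above fits in a few lines of Python (a sketch; the function name `euler` is ours). Run on $y' = x + y^3$, $y(0) = 1$ with $h = 0.05$, it reproduces the values just computed:

```python
def euler(f, x0, y0, h, steps):
    """Euler's method for y' = f(x, y), y(x0) = y0: repeat y <- y + f(x, y) h."""
    x, y = x0, y0
    values = [y]
    for _ in range(steps):
        y = y + f(x, y) * h   # tangent line step
        x = x + h
        values.append(y)
    return values

ys = euler(lambda x, y: x + y**3, 0.0, 1.0, 0.05, 8)
print([round(v, 2) for v in ys[:4]])   # [1.0, 1.05, 1.11, 1.18]
```

Halving $h$ (and doubling the number of steps) gives a better approximation at the same points.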
The Euler method uses the tangent line approximation, i.e., the first two terms of the Taylor series. One can use more terms of the Taylor series, and develop more sophisticated methods (which is done in books on numerical methods). But here is a question: if it is so easy to compute a numerical approximation of the solution, why bother learning analytical solutions? The reason is that we seek not just to solve a differential equation, but to understand it. What happens if the initial condition changes? The equation may include some parameters; what happens if they change? What happens to solutions in the long term?
1.7.1 Problems

I. Determine whether the equation is homogeneous, and if it is, solve it:

1. $\frac{dy}{dx} = \frac{y + 2x}{x}$. Ans. $y = cx + 2x\ln x$.

2. $\frac{dy}{dx} = \frac{x^2 - xy + y^2}{x^2}$. Ans. $y = \frac{-x + cx + x\ln x}{c + \ln x}$.

3. $\frac{dy}{dx} = \frac{y^2 + 2x}{y}$.

4. $\frac{dy}{dx} = \frac{y^2 + 2xy}{x^2}$, $y(1) = 2$. Ans. $y = \frac{2x^2}{3 - 2x}$.

5. $xy' - y = x\tan\frac{y}{x}$. Ans. $\sin\frac{y}{x} = cx$.

6. $y' = \frac{x^2 + y^2}{xy}$, $y(1) = 2$. Ans. $y = \sqrt{x^2\ln x^2 + 4x^2}$.

7. $y' = \frac{y^2 + x^{1/2}y^{3/2}}{xy}$, with $x > 0$, $y > 0$. Ans. $2\sqrt{\frac{y}{x}} = \ln x + c$.
II. Solve the following Bernoulli equations:

1. $y' - \frac{1}{x}y = -y^2$, $y(2) = 2$. Ans. $y = \frac{2x}{x^2 - 2}$.

2. $\frac{dy}{dx} = \frac{y^2 + 2x}{y}$. Ans. $y = \sqrt{-1 - 2x + ce^{2x}}$.

3. $y' + x\sqrt[3]{y} = 3y$. Ans. $y = \left(\frac{x}{3} + \frac{1}{6} + ce^{2x}\right)^{3/2}$, and $y = 0$.

Hint: When dividing the equation by $\sqrt[3]{y}$, one needs to check whether $y = 0$ is a solution, which indeed it is.

4. $y' + xy = y^3$.
III. Determine whether the equation is exact, and if it is, solve it:

1. $(2x + 3x^2y)\,dx + (x^3 - 3y^2)\,dy = 0$. Ans. $x^2 + x^3y - y^3 = c$.

2. $(x + \sin y)\,dx + (x\cos y - 2y)\,dy = 0$. Ans. $\frac{1}{2}x^2 + x\sin y - y^2 = c$.

3. $\frac{x}{(x^2 + y^2)^{3/2}}\,dx + \frac{y}{(x^2 + y^2)^{3/2}}\,dy = 0$. Ans. $x^2 + y^2 = c^2$.

4. $\frac{x}{(x^2 + y^2)^{3/2}}\,dx + \frac{y}{(x^2 + y^2)^{3/2}}\,dy = 0$, $y(1) = 3$. What is the $x$ interval on which the solution is valid? Sketch the graph of the solution.

5. $(6xy - \cos y)\,dx + (3x^2 + x\sin y + 1)\,dy = 0$.

6. $(2x - y)\,dx + (2y - x)\,dy = 0$, $y(1) = 2$. Ans. $x^2 + y^2 - xy = 3$.

7. Find the value of $b$ for which the following equation is exact, and then solve the equation, using that value of $b$:
$$(ye^{xy} + 2x)\,dx + bxe^{xy}\,dy = 0.$$
Ans. $b = 1$, $y = \frac{1}{x}\ln(c - x^2)$.
IV. 1. Use parametric integration to solve
$$y'^3 + y' = x.$$
Ans. $x = t^3 + t$, $y = \frac{3}{4}t^4 + \frac{1}{2}t^2 + c$.

2. Use parametric integration to solve
$$y = \ln(1 + y'^2).$$
Ans. $x = 2\tan^{-1}t + c$, $y = \ln(1 + t^2)$. Another solution: $y = 0$.

3. Use parametric integration to solve
$$y' + \sin(y') = x.$$
Ans. $x = t + \sin t$, $y = \frac{1}{2}t^2 + t\sin t + \cos t + c$.

4. A tank has 100 L of water-salt mixture, which initially contains 10 kg of salt. Water is flowing in at a rate of 5 L per minute. The new mixture flows out at the same rate. How much salt remains in the tank after an hour?
Ans. 0.5 kg.

5. A tank has 100 L of water-salt mixture, which initially contains 10 kg of salt. A water-salt mixture is flowing in at a rate of 3 L per minute, and each liter of it contains 0.1 kg of salt. The new mixture flows out at the same rate. How much salt remains in the tank after $t$ minutes?
Ans. 10 kg.
6. Water is being pumped into a patient's stomach at a rate of 0.5 L per minute to flush out 300 grams of alcohol poisoning. The excess fluid is flowing out at the same rate. The stomach holds 3 L. The patient can be discharged when the amount of poison drops to 50 grams. How long should this procedure last?

7. Find all curves $y = f(x)$ with the following property: if you draw a tangent line at any point $(x, f(x))$ on this curve, and continue the tangent line until it intersects the $x$-axis, then the point of intersection is $\frac{x}{2}$.
Ans. $y = cx^2$.

8. Find all positive decreasing functions $y = f(x)$, with the following property: in the triangle formed by the vertical line going down from the curve, the $x$-axis and the tangent line to this curve, the sum of the two sides adjacent to the right angle is constant, equal to $a > 0$.
Ans. $y - a\ln y = x + c$.
9. (i) Apply Euler's method to
$$y' = x(1 + y), \quad y(0) = 1.$$
Take $h = 0.25$ and do four steps, obtaining an approximation for $y(1)$.

(ii) Apply Euler's method to
$$y' = x(1 + y), \quad y(0) = 1.$$
Take $h = 0.2$ and do five steps, obtaining another approximation for $y(1)$.

(iii) Solve the above problem analytically, and determine which one of the two approximations is better.
10*. (From the Putnam competition, 2009) Show that any solution of
$$y' = \frac{x^2 - y^2}{x^2(y^2 + 1)}$$
satisfies $\lim_{x \to \infty} y(x) = \infty$.

Hint: Using partial fractions, rewrite the equation as
$$y' = \frac{1 + 1/x^2}{y^2 + 1} - \frac{1}{x^2}.$$
Assume, on the contrary, that $y(x)$ is bounded when $x$ is large. Then $y'(x)$ exceeds a positive constant for all large $x$, and therefore $y(x)$ tends to infinity, which is a contradiction. (The term $\frac{1}{x^2}$ becomes negligible for large $x$.)
11. Solve
$$x(y' - e^y) + 2 = 0.$$
Hint: Divide the equation by $e^y$, then set $v = e^{-y}$, obtaining a linear equation for $v = v(x)$. Ans. $y = -\ln(x + cx^2)$.

12. Solve the integral equation
$$y(x) = \int_1^x y(t)\,dt + x + 1.$$
Hint: Differentiate the equation, and also evaluate $y(1)$.
Ans. $y = 3e^{x-1} - 1$.
V. 1. Find two solutions of the initial value problem
$$y' = (y - 1)^{1/3}, \quad y(1) = 1.$$
Is it good to have two solutions of the same initial value problem? What went wrong? (I.e., why does the existence and uniqueness theorem not apply?)

2. Find all $y_0$ for which the following problem has a unique solution:
$$y' = \frac{x}{y^2 - 2x}, \quad y(2) = y_0.$$
Hint: Apply the existence and uniqueness theorem.
Ans. All $y_0$ except $\pm 2$.
Chapter 2

Second Order Equations

2.1 Special Second Order Equations

Probably the simplest second order equation is
$$y''(x) = 0.$$
Taking the antiderivative,
$$y'(x) = c_1.$$
We have denoted the arbitrary constant $c_1$, because we expect another arbitrary constant to make an appearance. Indeed, taking another antiderivative, we get the general solution
$$y(x) = c_1x + c_2.$$
We see that general solutions for second order equations depend on two arbitrary constants.

A general second order equation for the unknown function $y = y(x)$ can often be written as
$$y'' = f(x, y, y'),$$
where $f$ is a given function of its three variables. We cannot expect all such equations to be solvable, as we could not even solve all first order equations. In this section we study special second order equations, which are reducible to first order equations, greatly increasing their chances of being solved.
2.1.1 y is not present in the equation

Let us solve for $y(t)$ the equation
$$ty'' - y' = t^2.$$
We see derivatives of $y$ in this equation, but not $y$ itself. We denote $y'(t) = v(t)$, and $v(t)$ is our new unknown function. Clearly, $y''(t) = v'(t)$, and the equation becomes
$$tv' - v = t^2.$$
This is a first order equation for $v(t)$! It is actually a linear first order equation, so that we solve it as before. Once we have $v(t)$, we determine the solution $y(t)$ by integration. Details:
$$v' - \frac{1}{t}v = t;$$
$$\mu(t) = e^{-\int\frac{1}{t}\,dt} = e^{-\ln t} = e^{\ln\frac{1}{t}} = \frac{1}{t};$$
$$\frac{d}{dt}\left(\frac{1}{t}v\right) = 1; \quad \frac{1}{t}v = t + c_1;$$
$$y' = v = t^2 + c_1t; \quad y(t) = \frac{t^3}{3} + c_1\frac{t^2}{2} + c_2.$$
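The answer is easy to verify by hand: with $y = \frac{t^3}{3} + c_1\frac{t^2}{2} + c_2$ we get $y' = t^2 + c_1t$ and $y'' = 2t + c_1$, so $ty'' - y' = 2t^2 + c_1t - t^2 - c_1t = t^2$ identically. The same check in Python:

```python
# check t y'' - y' = t^2 for the family y = t^3/3 + c1 t^2/2 + c2
c1 = 1.7                      # arbitrary constant (c2 drops out entirely)
for t in (-2.0, 0.5, 3.0):
    yp = t**2 + c1 * t        # y'
    ypp = 2 * t + c1          # y''
    assert abs(t * ypp - yp - t**2) < 1e-9
```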
Let us solve the following equation for $y(x)$:
$$y'' + 2xy'^2 = 0.$$
Again, $y$ is missing in the equation. By setting $y' = v$, with $y'' = v'$, we obtain a first order equation. This time the equation for $v(x)$ is not linear, but we can separate the variables. Here are the details:
$$v' + 2xv^2 = 0; \quad \frac{dv}{dx} = -2xv^2.$$
This equation has a solution $v = 0$, giving $y = c$. Assuming $v \neq 0$, we continue:
$$\int\frac{dv}{v^2} = -\int 2x\,dx;$$
$$-\frac{1}{v} = -x^2 - c_1;$$
$$y' = v = \frac{1}{x^2 + c_1}.$$
Let us now assume that $c_1 > 0$. Then
$$y(x) = \int\frac{1}{x^2 + c_1}\,dx = \frac{1}{\sqrt{c_1}}\arctan\frac{x}{\sqrt{c_1}} + c_2.$$
If $c_1 = 0$ or $c_1 < 0$, we get two different formulas for the solution! Indeed, in case $c_1 = 0$, we have $y' = \frac{1}{x^2}$, and the integration gives us $y = -\frac{1}{x} + c_3$, the second family of solutions. In case $c_1 < 0$, say $c_1 = -a^2$ with $a > 0$, we can write
$$y' = \frac{1}{x^2 - a^2} = \frac{1}{2a}\left(\frac{1}{x - a} - \frac{1}{x + a}\right).$$
Integrating, we get the third family of solutions,
$$y = \frac{1}{2a}\ln|x - a| - \frac{1}{2a}\ln|x + a| + c_4.$$
And the fourth family of solutions is $y = c$.
2.1.2 x is not present in the equation

Let us solve for $y(x)$
$$y'' + yy'^3 = 0.$$
All three functions appearing in the equation are functions of $x$, but $x$ itself is not present in the equation.

On the curve $y = y(x)$ the slope $y'$ is a function of $x$, but it is also a function of $y$. We therefore set $y' = v(y)$, and $v(y)$ will be our new unknown function. By the Chain Rule,
$$y''(x) = \frac{d}{dx}v(y) = v'(y)\frac{dy}{dx} = v'v,$$
and our equation takes the form
$$v'v + yv^3 = 0.$$
This is a first order equation! To solve it, we begin by factoring:
$$v\left(v' + yv^2\right) = 0.$$
If the first factor is zero, $y' = v = 0$, we obtain a family of solutions $y = c$. Setting the second factor to zero,
$$\frac{dv}{dy} + yv^2 = 0,$$
we have a separable equation. Separating the variables,
$$-\int\frac{dv}{v^2} = \int y\,dy;$$
$$\frac{1}{v} = y^2/2 + c_1 = \frac{y^2 + 2c_1}{2};$$
$$\frac{dy}{dx} = v = \frac{2}{y^2 + 2c_1}.$$
To find $y(x)$ we need to solve another first order equation. Again, we separate the variables:
$$\int\left(y^2 + 2c_1\right)dy = \int 2\,dx;$$
$$y^3/3 + 2c_1y = 2x + c_2.$$
This gives us the second family of solutions.
2.2 Linear Homogeneous Equations with Constant Coefficients

We wish to find the solution $y = y(t)$ of the equation
$$ay'' + by' + cy = 0,$$
where $a$, $b$ and $c$ are given numbers. This is arguably the most important class of differential equations, because it arises when applying Newton's second law of motion. If $y(t)$ denotes the displacement of an object at time $t$, then this equation relates the displacement with the velocity $y'(t)$ and the acceleration $y''(t)$. This equation is linear, because we only multiply $y$ and its derivatives by constants, and add. The word homogeneous refers to the right hand side of this equation being zero.

Observe that if $y(t)$ is a solution, so is $2y(t)$. Indeed, plug this function into the equation:
$$a(2y)'' + b(2y)' + c(2y) = 2\left(ay'' + by' + cy\right) = 0.$$
The same argument shows that $c_1y(t)$ is a solution for any constant $c_1$. If $y_1(t)$ and $y_2(t)$ are two solutions, a similar argument will show that $y_1(t) + y_2(t)$ and $y_1(t) - y_2(t)$ are also solutions. More generally, $c_1y_1(t) + c_2y_2(t)$ is a solution for any constants $c_1$ and $c_2$. (This is called a linear combination of two solutions.) Indeed,
$$a\left(c_1y_1(t) + c_2y_2(t)\right)'' + b\left(c_1y_1(t) + c_2y_2(t)\right)' + c\left(c_1y_1(t) + c_2y_2(t)\right)$$
$$= c_1\left(ay_1'' + by_1' + cy_1\right) + c_2\left(ay_2'' + by_2' + cy_2\right) = 0.$$
We now try to find a solution of the form $y = e^{rt}$, where $r$ is a constant to be determined. We have $y' = re^{rt}$ and $y'' = r^2e^{rt}$, so that plugging into the equation gives
$$a\left(r^2e^{rt}\right) + b\left(re^{rt}\right) + ce^{rt} = e^{rt}\left(ar^2 + br + c\right) = 0.$$
Dividing by the positive quantity $e^{rt}$, we get
$$ar^2 + br + c = 0.$$
This is a quadratic equation for $r$, called the characteristic equation. If $r$ is a root (solution) of this equation, then $e^{rt}$ solves our differential equation. When solving a quadratic equation, it is possible to encounter two distinct real roots, one (repeated) real root, or two complex conjugate roots. We will look at these cases in turn.
2.2.1 Characteristic equation has two distinct real roots

Call the roots $r_1$ and $r_2$, with $r_2 \neq r_1$. Then $e^{r_1t}$ and $e^{r_2t}$ are two solutions, and their linear combination gives us the general solution
$$y(t) = c_1e^{r_1t} + c_2e^{r_2t}.$$
As we have two constants to play with, one can prescribe two additional conditions for the solution to satisfy.
Example Solve
$$y'' + 4y' + 3y = 0, \quad y(0) = 2, \quad y'(0) = -1.$$
We prescribe that at time zero the displacement is 2, and the velocity is $-1$. These two conditions are usually referred to as initial conditions, and together with the differential equation, they form an initial value problem. The characteristic equation is
$$r^2 + 4r + 3 = 0.$$
Solving it (say by factoring as $(r + 1)(r + 3) = 0$), we get its roots $r_1 = -1$ and $r_2 = -3$. The general solution is then
$$y(t) = c_1e^{-t} + c_2e^{-3t}.$$
Then $y(0) = c_1 + c_2$. Compute $y'(t) = -c_1e^{-t} - 3c_2e^{-3t}$, and therefore $y'(0) = -c_1 - 3c_2$. The initial conditions tell us that
$$c_1 + c_2 = 2, \quad -c_1 - 3c_2 = -1.$$
We have two equations to find the two unknowns $c_1$ and $c_2$. We find that $c_1 = 5/2$ and $c_2 = -1/2$. Answer: $y(t) = \frac{5}{2}e^{-t} - \frac{1}{2}e^{-3t}$.
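Both the equation and the initial conditions can be double-checked numerically (a sketch; the helper names are ours):

```python
import math

def y(t):
    return 2.5 * math.exp(-t) - 0.5 * math.exp(-3 * t)

def yp(t):                    # first derivative
    return -2.5 * math.exp(-t) + 1.5 * math.exp(-3 * t)

def ypp(t):                   # second derivative
    return 2.5 * math.exp(-t) - 4.5 * math.exp(-3 * t)

# initial conditions y(0) = 2, y'(0) = -1
assert abs(y(0.0) - 2.0) < 1e-12 and abs(yp(0.0) + 1.0) < 1e-12
# the equation y'' + 4y' + 3y = 0 at sample times
for t in (0.0, 0.7, 2.0):
    assert abs(ypp(t) + 4 * yp(t) + 3 * y(t)) < 1e-9
```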
Example Solve
$$y'' - 4y = 0.$$
The characteristic equation is
$$r^2 - 4 = 0.$$
Its roots are $r_1 = -2$ and $r_2 = 2$. The general solution is then
$$y(t) = c_1e^{-2t} + c_2e^{2t}.$$
More generally, for the equation
$$y'' - a^2y = 0 \quad (a \text{ is a given constant})$$
the general solution is
$$y(t) = c_1e^{-at} + c_2e^{at}.$$
This should become automatic, because such equations appear often.
Example Find the constant $a$, so that the solution of the initial value problem
$$9y'' - y = 0, \quad y(0) = 2, \quad y'(0) = a$$
is bounded as $t \to \infty$, and find that solution.

We begin by writing down (automatically!) the general solution
$$y(t) = c_1e^{-\frac{1}{3}t} + c_2e^{\frac{1}{3}t}.$$
Compute $y'(t) = -\frac{1}{3}c_1e^{-\frac{1}{3}t} + \frac{1}{3}c_2e^{\frac{1}{3}t}$, and then the initial conditions give
$$y(0) = c_1 + c_2 = 2, \quad y'(0) = -\frac{1}{3}c_1 + \frac{1}{3}c_2 = a.$$
Solving this system of two equations for $c_1$ and $c_2$ (by multiplying the second equation through by 3 and adding it to the first equation), we get $c_2 = 1 + \frac{3}{2}a$ and $c_1 = 1 - \frac{3}{2}a$. The solution of the initial value problem is then
$$y(t) = \left(1 - \frac{3}{2}a\right)e^{-\frac{1}{3}t} + \left(1 + \frac{3}{2}a\right)e^{\frac{1}{3}t}.$$
In order for this solution to stay bounded as $t \to \infty$, the coefficient in front of $e^{\frac{1}{3}t}$ must be zero, i.e., $1 + \frac{3}{2}a = 0$, and $a = -\frac{2}{3}$. The solution then becomes $y(t) = 2e^{-\frac{1}{3}t}$.
Finally, observe that if $r_1$ and $r_2$ are roots of the characteristic equation, then we can factor the characteristic polynomial as
$$ar^2 + br + c = a(r - r_1)(r - r_2). \qquad (2.1)$$
2.2.2 Characteristic equation has only one (repeated) real root

This is the case when we get $r_2 = r_1$ upon solving the characteristic equation. We still have a solution $y_1(t) = e^{r_1t}$. Of course, any constant multiple of this solution is also a solution, but to form the general solution we need another truly different solution, as we saw in the preceding case. It turns out that $y_2(t) = te^{r_1t}$ is that second solution, and the general solution is then
$$y(t) = c_1e^{r_1t} + c_2te^{r_1t}.$$
To justify that $y_2(t) = te^{r_1t}$ is a solution, we observe that in this case formula (2.1) becomes
$$ar^2 + br + c = a(r - r_1)^2.$$
Square out the quadratic on the right as $ar^2 - 2ar_1r + ar_1^2$. Because it is equal to the quadratic on the left, the coefficients of both polynomials in $r^2$, in $r$, and the constant terms are the same. We equate the coefficients in $r$:
$$b = -2ar_1. \qquad (2.2)$$
To plug $y_2(t)$ into the equation, we compute its derivatives: $y_2'(t) = e^{r_1t} + r_1te^{r_1t} = e^{r_1t}(1 + r_1t)$, and similarly $y_2''(t) = e^{r_1t}\left(2r_1 + r_1^2t\right)$. Then
$$ay_2'' + by_2' + cy_2 = ae^{r_1t}\left(2r_1 + r_1^2t\right) + be^{r_1t}(1 + r_1t) + cte^{r_1t}$$
$$= e^{r_1t}(2ar_1 + b) + te^{r_1t}\left(ar_1^2 + br_1 + c\right) = 0.$$
In the last line the first bracket is zero because of (2.2), and the second bracket is zero because $r_1$ is a solution of the characteristic equation.
Example Solve $9y'' + 6y' + y = 0$.
The characteristic equation
$$9r^2 + 6r + 1 = 0$$
has a double root $r = -\frac{1}{3}$. The general solution is then
$$y(t) = c_1e^{-\frac{1}{3}t} + c_2te^{-\frac{1}{3}t}.$$
Example Solve
$$y'' - 4y' + 4y = 0, \quad y(0) = 1, \quad y'(0) = -2.$$
The characteristic equation
$$r^2 - 4r + 4 = 0$$
has a double root $r = 2$. The general solution is then
$$y(t) = c_1e^{2t} + c_2te^{2t}.$$
Here $y'(t) = 2c_1e^{2t} + c_2e^{2t} + 2c_2te^{2t}$, and from the initial conditions
$$y(0) = c_1 = 1, \quad y'(0) = 2c_1 + c_2 = -2.$$
From the first equation $c_1 = 1$, and then $c_2 = -4$. Answer: $y(t) = e^{2t} - 4te^{2t}$.
2.3 Characteristic equation has two complex con-
jugate roots
2.3.1 Eulers formula
Recall the Maclaurens formula
e
z
= 1 +z +
1
2!
z
2
+
1
3!
z
3
+
1
4!
z
4
+
1
5!
z
5
+ . . . .
Plug in z = i , where i =

## 1 the imaginary unit, and is a real number.

Calculating the powers, and separating the real and imaginary parts, we
have
e
i
= 1 + i +
1
2!
(i )
2
+
1
3!
(i )
3
+
1
4!
(i )
4
+
1
5!
(i )
5
+. . .
= 1 + i
1
2!

1
3!
i
3
+
1
4!

4
+
1
5!
i
5
+ . . .
=
_
1
1
2!

2
+
1
4!

4
+ . . .
_
+i
_

1
3!

3
+
1
5!

5
+ . . .
_
= cos +i sin .
We have derived Eulers formula:
e
i
= cos +i sin . (3.1)
Replacing by , we have
e
i
= cos() +i sin() = cos i sin . (3.2)
Adding the last two formulas, we express
cos =
e
i
+e
i
2
. (3.3)
Subtracting from (3.1) the formula (3.2), and dividing by 2i
sin =
e
i
e
i
2i
. (3.4)
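Euler's formula and the two representations (3.3), (3.4) are easy to illustrate numerically with Python's built-in complex arithmetic (an added sketch, not part of the text):

```python
# Numerical illustration of Euler's formula e^{i*theta} = cos(theta) + i*sin(theta),
# and of the cosine/sine formulas (3.3), (3.4).
import cmath
import math

theta = 0.7  # an arbitrary sample angle
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs))  # essentially 0

cos_formula = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_formula = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)
print(abs(cos_formula - math.cos(theta)), abs(sin_formula - math.sin(theta)))
```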
2.3.2 The General Solution

Recall that to solve the equation
$$ay'' + by' + cy = 0$$
we begin with solving the characteristic equation
$$ar^2 + br + c = 0\,.$$
Complex roots come in conjugate pairs: if $p + iq$ is one root, then $p - iq$ is the other. These roots are of course different, so that we have two solutions $z_1 = e^{(p+iq)t}$ and $z_2 = e^{(p-iq)t}$. The problem with these solutions is that they are complex-valued. If we add $z_1 + z_2$, we get another solution. If we divide this new solution by $2$, we get yet another solution. I.e., the function $y_1(t) = \frac{z_1 + z_2}{2}$ is a solution of our equation, and similarly the function $y_2(t) = \frac{z_1 - z_2}{2i}$ is another solution. Using the formula (3.3), compute
$$y_1(t) = \frac{e^{(p+iq)t} + e^{(p-iq)t}}{2} = e^{pt}\,\frac{e^{iqt} + e^{-iqt}}{2} = e^{pt}\cos qt\,.$$
This is a real valued solution of our equation! Similarly,
$$y_2(t) = \frac{e^{(p+iq)t} - e^{(p-iq)t}}{2i} = e^{pt}\,\frac{e^{iqt} - e^{-iqt}}{2i} = e^{pt}\sin qt$$
is our second solution. The general solution is then
$$y(t) = c_1 e^{pt}\cos qt + c_2 e^{pt}\sin qt\,.$$

Example Solve $y'' + 4y' + 5y = 0$.

The characteristic equation
$$r^2 + 4r + 5 = 0$$
can be solved quickly by completing the square:
$$(r + 2)^2 + 1 = 0; \quad (r + 2)^2 = -1; \quad r + 2 = \pm i; \quad r = -2 \pm i\,.$$
Here $p = -2$, $q = 1$, and the general solution is $y(t) = c_1 e^{-2t}\cos t + c_2 e^{-2t}\sin t$.
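A finite-difference spot check (an added sketch, not part of the text) confirms that one of the two basis functions just found indeed solves the equation:

```python
# Check that y(t) = e^{-2t} cos t satisfies y'' + 4y' + 5y = 0.
import math

def y(t):
    return math.exp(-2 * t) * math.cos(t)

def residual(t, h=1e-5):
    yp = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return ypp + 4 * yp + 5 * y(t)

print(max(abs(residual(t / 10)) for t in range(11)))  # should be near 0
```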
Example Solve $y'' + y = 0$.

The characteristic equation
$$r^2 + 1 = 0$$
has roots $\pm i$. Here $p = 0$ and $q = 1$, and the general solution is $y(t) = c_1\cos t + c_2\sin t$. More generally, for the equation
$$y'' + a^2 y = 0 \quad (a \text{ is a given constant})$$
the general solution is
$$y(t) = c_1\cos at + c_2\sin at\,.$$
This should become automatic, because such equations appear often.

Example Solve
$$y'' + 4y = 0, \quad y(\pi/3) = 2, \; y'(\pi/3) = -4\,.$$
The general solution is
$$y(t) = c_1\cos 2t + c_2\sin 2t\,.$$
Compute $y'(t) = -2c_1\sin 2t + 2c_2\cos 2t$. From the initial conditions
$$y(\pi/3) = c_1\cos\frac{2\pi}{3} + c_2\sin\frac{2\pi}{3} = -\frac{1}{2}c_1 + \frac{\sqrt{3}}{2}c_2 = 2$$
$$y'(\pi/3) = -2c_1\sin\frac{2\pi}{3} + 2c_2\cos\frac{2\pi}{3} = -\sqrt{3}\,c_1 - c_2 = -4\,.$$
This gives $c_1 = \sqrt{3} - 1$, $c_2 = \sqrt{3} + 1$. Answer:
$$y(t) = (\sqrt{3} - 1)\cos 2t + (\sqrt{3} + 1)\sin 2t\,.$$
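The computed constants can be checked directly against the $2\times 2$ system (an added sketch, not part of the text):

```python
# Check that c1 = sqrt(3)-1, c2 = sqrt(3)+1 satisfy the two
# initial-condition equations at t = pi/3, and that the resulting
# solution meets the initial data.
import math

c1 = math.sqrt(3) - 1
c2 = math.sqrt(3) + 1
eq1 = -0.5 * c1 + (math.sqrt(3) / 2) * c2   # should equal 2
eq2 = -math.sqrt(3) * c1 - c2               # should equal -4
print(eq1, eq2)

y  = lambda t: c1 * math.cos(2 * t) + c2 * math.sin(2 * t)
yp = lambda t: -2 * c1 * math.sin(2 * t) + 2 * c2 * math.cos(2 * t)
print(y(math.pi / 3), yp(math.pi / 3))  # 2 and -4, up to rounding
```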
2.3.3 Problems

I. Solve the second order equations, with $y$ missing:

1. $2y'' = \frac{1}{y'}$. Ans. $y = \frac{2}{3}(x + c_1)^{3/2} + c_2$.

2. $xy'' + y' = x$. Ans. $y = \frac{x^2}{4} + c_1\ln x + c_2$.

3. $y'' + y' = x^2$. Ans. $y = \frac{x^3}{3} - x^2 + 2x + c_1 e^{-x} + c_2$.

4. $xy'' + 2y' = (y')^2$, $y(1) = 0$, $y'(1) = 1$. Ans. $y = 2\tan^{-1} x - \frac{\pi}{2}$.

II. Solve the second order equations, with $x$ missing:

1. $yy'' + 3(y')^3 = 0$. Ans. $3(y\ln y - y) + c_1 y = x + c_2$, and $y = c$.

2. $yy'' + (y')^2 = 0$. Ans. $y^2 = c_1 + c_2 x$ (this includes the $y = c$ family).

3. $y'' = 2yy'$, $y(0) = 0$, $y'(0) = 1$. Ans. $y = \tan x$.

4$^*$. $y''y = 2x(y')^2$, $y(0) = 1$, $y'(0) = -4$.

Hint: Write
$$y''y - y'^2 = (2x - 1)y'^2; \quad \frac{y''y - y'^2}{y'^2} = 2x - 1; \quad -\left(\frac{y}{y'}\right)' = 2x - 1\,.$$
Integrating, and using the initial conditions,
$$-\frac{y}{y'} = x^2 - x + \frac{1}{4} = \frac{(2x - 1)^2}{4}\,.$$
Ans. $y = e^{\frac{4x}{2x-1}}$.
III. Solve the linear second order equations, with constant coefficients

1. $y'' + 4y' + 3y = 0$. Ans. $y = c_1 e^{-t} + c_2 e^{-3t}$.

2. $y'' - 3y' = 0$. Ans. $y = c_1 + c_2 e^{3t}$.

3. $2y'' + y' - y = 0$. Ans. $y = c_1 e^{-t} + c_2 e^{\frac{1}{2}t}$.

4. $y'' - 3y = 0$. Ans. $y = c_1 e^{\sqrt{3}\,t} + c_2 e^{-\sqrt{3}\,t}$.

5. $3y'' - 5y' - 2y = 0$.

6. $y'' - 9y = 0$, $y(0) = 3$, $y'(0) = 3$. Ans. $y = e^{-3t} + 2e^{3t}$.

7. $y'' + 5y' = 0$, $y(0) = -1$, $y'(0) = -10$. Ans. $y = -3 + 2e^{-5t}$.

8. $y'' + y' - 6y = 0$, $y(0) = 2$, $y'(0) = -3$. Ans. $y = \frac{7}{5}e^{-3t} + \frac{3}{5}e^{2t}$.

9. $4y'' - y = 0$.

10. $3y'' - 2y' - y = 0$, $y(0) = 1$, $y'(0) = -3$. Ans. $y = 3e^{-t/3} - 2e^{t}$.

11. $3y'' - 2y' - y = 0$, $y(0) = 1$, $y'(0) = a$. Then find the value of $a$ for which the solution is bounded, as $t \to \infty$. Ans. $a = -\frac{1}{3}$.

IV. Solve the linear second order equations, with constant coefficients

1. $y'' + 6y' + 9y = 0$. Ans. $y = c_1 e^{-3t} + c_2 t e^{-3t}$.

2. $4y'' - 4y' + y = 0$. Ans. $y = c_1 e^{\frac{1}{2}t} + c_2 t e^{\frac{1}{2}t}$.

3. $y'' - 2y' + y = 0$, $y(0) = 0$, $y'(0) = 2$. Ans. $y = 2te^{t}$.

4. $9y'' - 6y' + y = 0$, $y(0) = 1$, $y'(0) = 2$.

V. Using Euler's formula, compute: 1. $e^{i\pi}$  2. $e^{i\pi/2}$  3. $e^{i\frac{3\pi}{4}}$  4. $e^{2\pi i}$  5. $\sqrt{2}\,e^{i\frac{9\pi}{4}}$

6$^*$. Show that $\sin 3\theta = 3\cos^2\theta\,\sin\theta - \sin^3\theta$, and also $\cos 3\theta = -3\sin^2\theta\,\cos\theta + \cos^3\theta$.

Hint: Begin with $e^{i3\theta} = (\cos\theta + i\sin\theta)^3$. Apply Euler's formula on the left, and cube out on the right. Then equate the real and imaginary parts.
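The hint can be tried out numerically (an added sketch, not part of the text): cubing $\cos\theta + i\sin\theta$ and comparing real and imaginary parts against the claimed identities.

```python
# Numerical check of the triple-angle identities obtained from
# (cos t + i sin t)^3 = cos 3t + i sin 3t.
import math

t = 0.9  # arbitrary sample angle
c, s = math.cos(t), math.sin(t)
cube = complex(c, s) ** 3
print(abs(cube.imag - (3 * c**2 * s - s**3)))   # sin 3t identity, near 0
print(abs(cube.real - (c**3 - 3 * s**2 * c)))   # cos 3t identity, near 0
print(abs(cube.imag - math.sin(3 * t)), abs(cube.real - math.cos(3 * t)))
```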
VI. Solve the linear second order equations, with constant coefficients

1. $y'' + 4y' + 8y = 0$. Ans. $y = c_1 e^{-2t}\cos 2t + c_2 e^{-2t}\sin 2t$.

2. $y'' + 16y = 0$. Ans. $y = c_1\cos 4t + c_2\sin 4t$.

3. $y'' - 4y' + 5y = 0$, $y(0) = 1$, $y'(0) = -2$. Ans. $y = e^{2t}\cos t - 4e^{2t}\sin t$.

4. $y'' + 4y = 0$, $y(0) = 2$, $y'(0) = 0$. Ans. $y = 2\cos 2t$.

5. $9y'' + y = 0$, $y(0) = 0$, $y'(0) = 5$. Ans. $y = 15\sin\frac{1}{3}t$.

6. $y'' + y' + y = 0$. Ans. $y = e^{-\frac{t}{2}}\left(c_1\cos\frac{\sqrt{3}}{2}t + c_2\sin\frac{\sqrt{3}}{2}t\right)$.

7. $4y'' + 8y' + 5y = 0$, $y(\pi) = 0$, $y'(\pi) = 4$. Ans. $y = -8e^{\pi - t}\cos\frac{1}{2}t$.

8. $y'' + y = 0$, $y(\pi/4) = 0$, $y'(\pi/4) = 1$. Ans. $y = \sin(t - \pi/4)$.
2.4 Linear Second Order Equations with Variable Coefficients

Linear Systems

Recall that a system of two equations (here the numbers $a$, $b$, $c$, $d$, $g$ and $h$ are given, while $x$ and $y$ are unknowns)
$$ax + by = g$$
$$cx + dy = h$$
has a unique solution if and only if the determinant of the system is non-zero, i.e., $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc \neq 0$. This is justified by explicitly solving the system:
$$x = \frac{dg - bh}{ad - bc}, \quad y = \frac{ah - cg}{ad - bc}\,.$$
It is also easy to justify that the determinant is zero, i.e., $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = 0$, if and only if its columns are proportional, i.e., $a = \gamma b$ and $c = \gamma d$, for some constant $\gamma$.

General Theory

We consider an initial value problem for linear second order equations
$$y'' + p(t)y' + g(t)y = f(t) \qquad (4.1)$$
$$y(t_0) = \alpha, \quad y'(t_0) = \beta\,.$$
We are given the coefficient functions $p(t)$ and $g(t)$, and the function $f(t)$. The constants $t_0$, $\alpha$ and $\beta$ are also given, i.e., at some initial time $t = t_0$ the values of the solution and its derivative are prescribed. Is there a solution to this problem? If so, is the solution unique, and how far can it be continued?

Theorem 3 Assume that the coefficient functions $p(t)$, $g(t)$ and $f(t)$ are continuous on some interval that includes $t_0$. Then the problem (4.1) has a solution, and only one solution. This solution can be continued to the left and to the right of the initial point $t_0$, so long as the coefficient functions remain continuous.

If the coefficient functions are continuous for all $t$, then the solution can be continued for all $t$, $-\infty < t < \infty$. This is better than what we had for first order equations. Why? Because the equation here is linear. Linearity pays!

Corollary Let $z(t)$ be a solution of (4.1) with the same initial data: $z(t_0) = \alpha$ and $z'(t_0) = \beta$. Then $z(t) = y(t)$ for all $t$.
Let us now study the homogeneous equations (for $y = y(t)$)
$$y'' + p(t)y' + g(t)y = 0\,, \qquad (4.2)$$
with the given coefficient functions $p(t)$ and $g(t)$. Although the equation looks relatively simple, its analytical solution is totally out of reach, in general. (One has to either solve it numerically, or use infinite series.) In this section we shall study some theoretical aspects. In particular, we shall prove that a linear combination of two solutions, that are not constant multiples of one another, gives the general solution (a fact that we had intuitively used for equations with constant coefficients).

We shall need the concept of the Wronskian determinant of two functions $y_1(t)$ and $y_2(t)$, or Wronskian, for short:
$$W(t) = \begin{vmatrix} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{vmatrix} = y_1(t)y_2'(t) - y_1'(t)y_2(t)\,.$$
Sometimes the Wronskian is written as $W(y_1, y_2)(t)$ to stress its dependence on $y_1(t)$ and $y_2(t)$. For example,
$$W(\cos 2t, \sin 2t)(t) = \begin{vmatrix} \cos 2t & \sin 2t \\ -2\sin 2t & 2\cos 2t \end{vmatrix} = 2\cos^2 2t + 2\sin^2 2t = 2\,.$$
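The Wronskian is easy to compute numerically (an added sketch, not part of the text), which makes the constancy of $W(\cos 2t, \sin 2t)$ visible at a glance:

```python
# The Wronskian of cos 2t and sin 2t equals 2 at every t.
import math

def wronskian(f, g, t, h=1e-6):
    # Derivatives by central differences.
    fp = (f(t + h) - f(t - h)) / (2 * h)
    gp = (g(t + h) - g(t - h)) / (2 * h)
    return f(t) * gp - fp * g(t)

f = lambda t: math.cos(2 * t)
g = lambda t: math.sin(2 * t)
print([round(wronskian(f, g, t / 3), 6) for t in range(4)])  # all about 2
```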
Given the Wronskian and one of the functions, one can determine the other one.

Example If $f(t) = t$, and $W(f, g)(t) = t^2 e^t$, find $g(t)$.

Solution: Here $f'(t) = 1$, and so
$$W(f, g)(t) = \begin{vmatrix} t & g(t) \\ 1 & g'(t) \end{vmatrix} = tg'(t) - g(t) = t^2 e^t\,.$$
This is a linear first order equation for $g(t)$. We solve it as usual, obtaining
$$g(t) = te^t + ct\,.$$
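A direct check (an added sketch, not part of the text) confirms the answer for an arbitrary value of the constant $c$:

```python
# Check that g(t) = t e^t + c t satisfies t g'(t) - g(t) = t^2 e^t.
import math

c = 2.5  # arbitrary constant
g  = lambda t: t * math.exp(t) + c * t
gp = lambda t: math.exp(t) + t * math.exp(t) + c   # g'(t), computed by hand
residual = lambda t: t * gp(t) - g(t) - t**2 * math.exp(t)
print(max(abs(residual(t / 4)) for t in range(1, 9)))  # should be near 0
```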
If $g(t) = cf(t)$, with some constant $c$, then we compute that $W(f, g)(t) = 0$ for all $t$. The converse statement is not true. For example, the functions $f(t) = t^2$ and
$$g(t) = \begin{cases} t^2 & \text{if } t \geq 0 \\ -t^2 & \text{if } t < 0 \end{cases}$$
are not constant multiples of one another, but $W(f, g)(t) = 0$. This is seen by computing the Wronskian separately in case $t \geq 0$, and for $t < 0$.

Theorem 4 Let $y_1(t)$ and $y_2(t)$ be two solutions of (4.2), and $W(t)$ is their Wronskian. Then
$$W(t) = ce^{-\int p(t)\,dt}\,, \qquad (4.3)$$
where $c$ is some constant.

This is a remarkable fact. Even though we do not know $y_1(t)$ and $y_2(t)$, we can compute their Wronskian.

Proof: We differentiate the Wronskian $W(t) = y_1(t)y_2'(t) - y_1'(t)y_2(t)$:
$$W' = y_1 y_2'' + y_1' y_2' - y_1' y_2' - y_1'' y_2 = y_1 y_2'' - y_1'' y_2\,.$$
Because $y_1$ is a solution of (4.2), we have $y_1'' + p(t)y_1' + g(t)y_1 = 0$, or $y_1'' = -p(t)y_1' - g(t)y_1$, and similarly $y_2'' = -p(t)y_2' - g(t)y_2$. With these formulas, we continue
$$W' = y_1\left(-p(t)y_2' - g(t)y_2\right) - \left(-p(t)y_1' - g(t)y_1\right)y_2 = -p(t)\left(y_1 y_2' - y_1' y_2\right) = -p(t)W\,.$$
We have obtained a linear first order equation for $W(t)$. Solving it by separation of variables, we get (4.3).
Corollary We see from (4.3) that either $W(t) = 0$ for all $t$, when $c = 0$, or else $W(t)$ is never zero, in case $c \neq 0$.

Theorem 5 Let $y_1(t)$ and $y_2(t)$ be two solutions of (4.2), and $W(t)$ is their Wronskian. Then $W(t) = 0$, if and only if $y_1(t)$ and $y_2(t)$ are constant multiples of each other.

We just saw that if two functions are constant multiples of each other then their Wronskian is zero, while the converse (opposite) statement is not true in general. But if these functions happen to be solutions of (4.2), then the converse is true.

Proof: Assume that the Wronskian of two solutions $y_1(t)$ and $y_2(t)$ is zero. In particular it is zero at any point $t_0$, i.e., $\begin{vmatrix} y_1(t_0) & y_2(t_0) \\ y_1'(t_0) & y_2'(t_0) \end{vmatrix} = 0$. When a $2 \times 2$ determinant is zero, its columns are proportional. Let us assume the second column is $\gamma$ times the first, where $\gamma$ is some number, i.e., $y_2(t_0) = \gamma y_1(t_0)$ and $y_2'(t_0) = \gamma y_1'(t_0)$. Assume, first, that $\gamma \neq 0$. Consider the function $z(t) = y_2(t)/\gamma$. It is a solution of the homogeneous equation (4.2), and it has initial values $z(t_0) = y_1(t_0)$ and $z'(t_0) = y_1'(t_0)$, the same as $y_1(t)$. It follows that $z(t) = y_1(t)$, i.e., $y_2(t) = \gamma y_1(t)$. In case $\gamma = 0$, a similar argument shows that $y_2(t) = 0$, i.e., $y_2(t) = 0 \cdot y_1(t)$.
Definition We say that two solutions $y_1(t)$ and $y_2(t)$ of (4.2) form a fundamental set, if for any other solution $z(t)$, we can find two constants $c_1^0$ and $c_2^0$, so that $z(t) = c_1^0 y_1(t) + c_2^0 y_2(t)$. In other words, the linear combination $c_1 y_1(t) + c_2 y_2(t)$ gives us all solutions of (4.2).

Theorem 6 Let $y_1(t)$ and $y_2(t)$ be two solutions of (4.2), that are not constant multiples of one another. Then they form a fundamental set.

Proof: Let $y(t)$ be a solution of the equation (4.2). Let us try to find the constants $c_1$ and $c_2$, so that $z(t) = c_1 y_1(t) + c_2 y_2(t)$ satisfies the same initial conditions as $y(t)$, i.e.,
$$z(t_0) = c_1 y_1(t_0) + c_2 y_2(t_0) = y(t_0) \qquad (4.4)$$
$$z'(t_0) = c_1 y_1'(t_0) + c_2 y_2'(t_0) = y'(t_0)\,.$$
This is a system of two linear equations to find $c_1$ and $c_2$. The determinant of this system is just the Wronskian of $y_1(t)$ and $y_2(t)$, evaluated at $t_0$. This determinant is not zero, because $y_1(t)$ and $y_2(t)$ are not constant multiples of one another. (This determinant is $W(t_0)$. If $W(t_0) = 0$, then $W(t) = 0$ for all $t$, by the Corollary to Theorem 4, and then by Theorem 5, $y_1(t)$ and $y_2(t)$ would have to be constant multiples of one another.) It follows that the $2 \times 2$ system (4.4) has a unique solution $c_1 = c_1^0$, $c_2 = c_2^0$. The function $z(t) = c_1^0 y_1(t) + c_2^0 y_2(t)$ is then a solution of the same equation (4.2), satisfying the same initial conditions, as does $y(t)$. By the Corollary to Theorem 3, $y(t) = z(t)$ for all $t$. So that any solution $y(t)$ is a particular case of the general solution $c_1 y_1(t) + c_2 y_2(t)$.

Finally, we mention that two functions are called linearly independent, if they are not constant multiples of one another. So that two solutions $y_1(t)$ and $y_2(t)$ form a fundamental set, if and only if they are linearly independent.
2.5 Some Applications of the Theory

We give some practical applications of the theory from the last section. But first, we recall the functions $\sinh t$ and $\cosh t$.

2.5.1 The Hyperbolic Sine and Cosine Functions

One defines
$$\cosh t = \frac{e^t + e^{-t}}{2} \quad \text{and} \quad \sinh t = \frac{e^t - e^{-t}}{2}\,.$$
In particular, $\cosh 0 = 1$, $\sinh 0 = 0$. Observe that $\cosh t$ is an even function, while $\sinh t$ is odd. Compute:
$$\frac{d}{dt}\cosh t = \sinh t \quad \text{and} \quad \frac{d}{dt}\sinh t = \cosh t\,.$$
These formulas are similar to those for cosine and sine. By squaring out, one sees that
$$\cosh^2 t - \sinh^2 t = 1, \quad \text{for all } t\,.$$
We see that the derivatives, and the algebraic properties, of the new functions are similar to those for cosine and sine. (There are other similar formulas.) However, the graphs of $\sinh t$ and $\cosh t$ look totally different: they are not periodic, and they are unbounded.
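The definitions and the identity $\cosh^2 t - \sinh^2 t = 1$ can be confirmed at a few sample points (an added sketch, not part of the text):

```python
# The exponential definitions agree with math.cosh/math.sinh,
# and cosh^2 t - sinh^2 t = 1 at each sample point.
import math

for t in [-2.0, 0.0, 0.5, 3.0]:
    ch = (math.exp(t) + math.exp(-t)) / 2
    sh = (math.exp(t) - math.exp(-t)) / 2
    assert abs(ch - math.cosh(t)) < 1e-12
    assert abs(sh - math.sinh(t)) < 1e-12
    assert abs(ch**2 - sh**2 - 1) < 1e-9
print("identity holds at the sample points")
```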
2.5.2 Different Ways to Write a General Solution

For the equation
$$y'' - a^2 y = 0 \qquad (5.1)$$
you remember that the functions $e^{at}$ and $e^{-at}$ form a fundamental set, and $y(t) = c_1 e^{at} + c_2 e^{-at}$ is the general solution. But $y = \sinh at$ is also a solution, because $y' = a\cosh at$ and $y'' = a^2\sinh at = a^2 y$. Similarly, $\cosh at$ is a solution. It is not a constant multiple of $\sinh at$, so that together they form another fundamental set, and we have another form of the general solution of (5.1):
$$y = c_1\cosh at + c_2\sinh at\,.$$
This is not a new general solution, it can be reduced to the old one, by expressing $\cosh at$ and $\sinh at$ through exponentials. However, the new form is useful.

Example Solve: $y'' - 4y = 0$, $y(0) = 0$, $y'(0) = 5$.

We write the general solution as $y(t) = c_1\cosh 2t + c_2\sinh 2t$. Using that $\cosh 0 = 1$ and $\sinh 0 = 0$, we have $y(0) = c_1 = 0$. With $c_1 = 0$, we update the solution: $y(t) = c_2\sinh 2t$. Now, $y'(t) = 2c_2\cosh 2t$ and $y'(0) = 2c_2 = 5$. I.e., $c_2 = \frac{5}{2}$. Answer: $y(t) = \frac{5}{2}\sinh 2t$.

Yet another form of the general solution of (5.1) is
$$y(t) = c_1 e^{a(t - t_0)} + c_2 e^{-a(t - t_0)}\,,$$
where $t_0$ is any number.

Example Solve: $y'' - 9y = 0$, $y(2) = 1$, $y'(2) = 9$.

We select $t_0 = 2$, writing the general solution as
$$y(t) = c_1 e^{3(t-2)} + c_2 e^{-3(t-2)}\,.$$
Compute $y'(t) = 3c_1 e^{3(t-2)} - 3c_2 e^{-3(t-2)}$, and use the initial conditions
$$c_1 + c_2 = 1$$
$$3c_1 - 3c_2 = 9\,.$$
We see that $c_1 = 2$ and $c_2 = -1$. Answer: $y(t) = 2e^{3(t-2)} - e^{-3(t-2)}$.
We can write the general solution, centered at $t_0$, for other simple equations as well. For the equation
$$y'' + a^2 y = 0$$
the functions $\cos a(t - t_0)$ and $\sin a(t - t_0)$ are both solutions, for any value of $t_0$, and they form a fundamental set (because they are not constant multiples of one another). We can then write the general solution as
$$y = c_1\cos a(t - t_0) + c_2\sin a(t - t_0)\,.$$

Example Solve: $y'' + 4y = 0$, $y(\frac{\pi}{5}) = 2$, $y'(\frac{\pi}{5}) = -6$.

We write the general solution as
$$y = c_1\cos 2(t - \tfrac{\pi}{5}) + c_2\sin 2(t - \tfrac{\pi}{5})\,.$$
Using the initial conditions, we quickly compute $c_1 = 2$ and $c_2 = -3$. Answer: $y = 2\cos 2(t - \tfrac{\pi}{5}) - 3\sin 2(t - \tfrac{\pi}{5})$.
2.5.3 Finding the Second Solution

Next we solve the Legendre equation (for $-1 < t < 1$)
$$(1 - t^2)y'' - 2ty' + 2y = 0\,.$$
It has a solution $y = t$. (Lucky!) We need to find the other one. We divide the equation by $1 - t^2$, to put it into the form (4.2) from the previous section:
$$y'' - \frac{2t}{1 - t^2}\,y' + \frac{2}{1 - t^2}\,y = 0\,.$$
Call the other solution $y(t)$. By Theorem 4, we can calculate the Wronskian of the two solutions:
$$W(t, y) = \begin{vmatrix} t & y(t) \\ 1 & y'(t) \end{vmatrix} = ce^{\int \frac{2t}{1 - t^2}\,dt}\,.$$
We shall set here $c = 1$, because we need just one solution, different from $t$. I.e.,
$$ty' - y = e^{\int \frac{2t}{1 - t^2}\,dt} = e^{-\ln(1 - t^2)} = \frac{1}{1 - t^2}\,.$$
This is a linear equation, which we solve as usual:
$$y' - \frac{1}{t}\,y = \frac{1}{t(1 - t^2)}; \quad \mu(t) = e^{-\int 1/t\,dt} = e^{-\ln t} = \frac{1}{t};$$
$$\frac{d}{dt}\left(\frac{1}{t}\,y\right) = \frac{1}{t^2(1 - t^2)}; \quad \frac{1}{t}\,y = \int \frac{1}{t^2(1 - t^2)}\,dt\,.$$
We have computed the last integral before by guess-and-check, so that
$$y = t\int \frac{1}{t^2(1 - t^2)}\,dt = -1 - \frac{1}{2}\,t\ln(1 - t) + \frac{1}{2}\,t\ln(1 + t)\,.$$
We set the constant of integration $c = 0$, because we need just one solution other than $t$. Answer: $y(t) = c_1 t + c_2\left(-1 + \frac{1}{2}\,t\ln\frac{1 + t}{1 - t}\right)$.
2.5.4 Problems

I. 1. Find the Wronskians of the following functions:

(i) $f(t) = e^{3t}$, $g(t) = e^{-\frac{1}{2}t}$. Ans. $-\frac{7}{2}e^{\frac{5}{2}t}$.

(ii) $f(t) = e^{2t}$, $g(t) = te^{2t}$. Ans. $e^{4t}$.

(iii) $f(t) = e^t\cos 3t$, $g(t) = e^t\sin 3t$. Ans. $3e^{2t}$.

2. If $f(t) = t^2$, and the Wronskian $W(f, g)(t) = t^5 e^t$, find $g(t)$.

Ans. $g(t) = t^3 e^t - t^2 e^t + ct^2$.

3. Let $y_1(t)$ and $y_2(t)$ be any two solutions of
$$y'' - t^2 y = 0\,.$$
Show that $W(y_1(t), y_2(t))(t) = \text{constant}$.

II. For the following equations one solution is given. Using Wronskians, find the second solution, and the general solution.

1. $y'' - 2y' + y = 0$, $y_1(t) = e^t$.

2. $(t - 2)y'' - ty' + 2y = 0$, $y_1(t) = e^t$. Ans. $y = c_1 e^t + c_2\left(t^2 - 2t + 2\right)$.

3. $ty'' + 2y' + ty = 0$, $y_1(t) = \frac{\sin t}{t}$. Ans. $y = c_1\frac{\sin t}{t} + c_2\frac{\cos t}{t}$.

III. Solve the problem, using the hyperbolic sine and cosine

1. $y'' - 4y = 0$, $y(0) = 0$, $y'(0) = \frac{1}{3}$. Ans. $y = \frac{1}{6}\sinh 2t$.

2. $y'' - 9y = 0$, $y(0) = 2$, $y'(0) = 0$. Ans. $y = 2\cosh 3t$.

3. $y'' - y = 0$, $y(0) = 3$, $y'(0) = 5$. Ans. $y = 3\cosh t + 5\sinh t$.

IV. Solve the problem, by using the general solution centered at the initial point

1. $y'' + y = 0$, $y(\pi/8) = 0$, $y'(\pi/8) = 3$. Ans. $y = 3\sin(t - \pi/8)$.

2. $y'' + 4y = 0$, $y(\pi/4) = 0$, $y'(\pi/4) = -4$.
Ans. $y = -2\sin 2(t - \pi/4) = -2\sin(2t - \pi/2) = 2\cos 2t$.

3. $y'' - 2y' - 3y = 0$, $y(1) = 1$, $y'(1) = 7$. Ans. $y = 2e^{3(t-1)} - e^{-(t-1)}$.
V. 1. Show that the functions $y_1 = t$ and $y_2 = \sin t$ cannot both be solutions of
$$y'' + p(t)y' + g(t)y = 0\,,$$
no matter what $p(t)$ and $g(t)$ are.

Hint: Consider $W(y_1, y_2)(0)$, then use Theorems 4 (Corollary) and 5.

2. Assume that the functions $y_1 = 1$ and $y_2 = \cos t$ are both solutions of some equation
$$y'' + p(t)y' + g(t)y = 0\,.$$
What is the equation?

Hint: Use Theorem 4 to determine $p(t)$, then argue that $g(t)$ must be zero.

Ans. $y'' - (\cot t)\,y' = 0$.
2.6 Non-homogeneous Equations. The easy case.

We wish to solve the non-homogeneous equation
$$y'' + p(t)y' + g(t)y = f(t)\,. \qquad (6.1)$$
Here the coefficient functions $p(t)$, $g(t)$ and $f(t)$ are given. The corresponding homogeneous equation is
$$y'' + p(t)y' + g(t)y = 0\,. \qquad (6.2)$$
Suppose we know the general solution of the homogeneous equation: $y(t) = c_1 y_1(t) + c_2 y_2(t)$. Our goal is to get the general solution of the non-homogeneous equation. Suppose we can somehow find a particular solution $Y(t)$ of the non-homogeneous equation, i.e.,
$$Y'' + p(t)Y' + g(t)Y = f(t)\,. \qquad (6.3)$$
From the equation (6.1) we subtract the equation (6.3):
$$(y - Y)'' + p(t)(y - Y)' + g(t)(y - Y) = 0\,.$$
We see that the function $v = y - Y$ is a solution of the homogeneous equation (6.2), and therefore in general $v(t) = c_1 y_1(t) + c_2 y_2(t)$. We now express $y = Y + v$, i.e.,
$$y(t) = Y(t) + c_1 y_1(t) + c_2 y_2(t)\,.$$
In words: the general solution of the non-homogeneous equation is equal to the sum of any particular solution of the non-homogeneous equation, and the general solution of the corresponding homogeneous equation.
Example Solve $y'' + 9y = 4\cos 2t$.

The general solution of the corresponding homogeneous equation $y'' + 9y = 0$ is $y(t) = c_1\sin 3t + c_2\cos 3t$. We look for a particular solution in the form $Y(t) = A\cos 2t$. Plug it in, then simplify:
$$-4A\cos 2t + 9A\cos 2t = 4\cos 2t;$$
$$5A\cos 2t = 4\cos 2t\,,$$
i.e., $A = \frac{4}{5}$, and $Y(t) = \frac{4}{5}\cos 2t$. Answer: $y(t) = \frac{4}{5}\cos 2t + c_1\sin 3t + c_2\cos 3t$.
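The particular solution can be verified by finite differences (an added sketch, not part of the text):

```python
# Check that Y(t) = (4/5) cos 2t is a particular solution of y'' + 9y = 4 cos 2t.
import math

Y = lambda t: 0.8 * math.cos(2 * t)

def residual(t, h=1e-5):
    ypp = (Y(t + h) - 2 * Y(t) + Y(t - h)) / h**2
    return ypp + 9 * Y(t) - 4 * math.cos(2 * t)

print(max(abs(residual(t / 5)) for t in range(10)))  # should be near 0
```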
This was an easy example: the $y'$ term was missing. If the $y'$ term was present, we would need to look for $Y(t)$ in the form $Y(t) = A\cos 2t + B\sin 2t$.

Prescription 1 If the right side of the equation (6.1) has the form $a\cos\alpha t + b\sin\alpha t$, look for a particular solution in the form $Y(t) = A\cos\alpha t + B\sin\alpha t$. More generally, if the right side of the equation has the form $(at^2 + bt + c)\cos\alpha t + (et^2 + gt + h)\sin\alpha t$, look for a particular solution in the form $Y(t) = (At^2 + Bt + C)\cos\alpha t + (Et^2 + Gt + H)\sin\alpha t$. Even more generally, if the polynomials are of higher power, we make the corresponding adjustment.

Example Solve $y'' + 3y' + 9y = 4\cos 2t - \sin 2t$.

We look for a particular solution in the form $Y(t) = A\cos 2t + B\sin 2t$. Plug it in, then combine the like terms:
$$-4A\cos 2t - 4B\sin 2t + 3\left(-2A\sin 2t + 2B\cos 2t\right) + 9\left(A\cos 2t + B\sin 2t\right)$$
$$= 4\cos 2t - \sin 2t;$$
$$(5A + 6B)\cos 2t + (-6A + 5B)\sin 2t = 4\cos 2t - \sin 2t\,.$$
Equating the corresponding coefficients,
$$5A + 6B = 4$$
$$-6A + 5B = -1\,.$$
Solving this system, $A = \frac{26}{61}$ and $B = \frac{19}{61}$. I.e., $Y(t) = \frac{26}{61}\cos 2t + \frac{19}{61}\sin 2t$. The general solution of the corresponding homogeneous equation
$$y'' + 3y' + 9y = 0$$
is $y = c_1 e^{-\frac{3}{2}t}\sin\frac{3\sqrt{3}}{2}t + c_2 e^{-\frac{3}{2}t}\cos\frac{3\sqrt{3}}{2}t$. Answer: $y(t) = \frac{26}{61}\cos 2t + \frac{19}{61}\sin 2t + c_1 e^{-\frac{3}{2}t}\sin\frac{3\sqrt{3}}{2}t + c_2 e^{-\frac{3}{2}t}\cos\frac{3\sqrt{3}}{2}t$.
Example Solve $y'' + 2y' + y = t - 1$.

On the right we see a linear polynomial. We look for a particular solution in the form $Y(t) = At + B$. Plug this in:
$$2A + At + B = t - 1\,.$$
Equating the corresponding coefficients, we have $A = 1$, and $2A + B = -1$, i.e., $B = -3$. So that $Y(t) = t - 3$. The general solution of the corresponding homogeneous equation
$$y'' + 2y' + y = 0$$
is $y = c_1 e^{-t} + c_2 te^{-t}$. Answer: $y(t) = t - 3 + c_1 e^{-t} + c_2 te^{-t}$.

Example Solve $y'' + y' - 2y = t^2$.

On the right we have a quadratic polynomial (two of its coefficients happened to be zero). We look for a particular solution as a quadratic $Y(t) = At^2 + Bt + C$. Plug this in:
$$2A + 2At + B - 2(At^2 + Bt + C) = t^2\,.$$
Equating the coefficients in $t^2$, $t$, and the constant terms, we have
$$-2A = 1$$
$$2A - 2B = 0$$
$$2A + B - 2C = 0\,.$$
From the first equation $A = -\frac{1}{2}$, from the second one $B = -\frac{1}{2}$, and from the third $C = A + \frac{1}{2}B = -\frac{3}{4}$. So that $Y(t) = -\frac{1}{2}t^2 - \frac{1}{2}t - \frac{3}{4}$. The general solution of the corresponding homogeneous equation is $y(t) = c_1 e^{-2t} + c_2 e^t$. Answer: $y(t) = -\frac{1}{2}t^2 - \frac{1}{2}t - \frac{3}{4} + c_1 e^{-2t} + c_2 e^t$.
Prescription 2 If the right side of the equation (6.1) is a polynomial of degree $n$: $a_0 t^n + a_1 t^{n-1} + \ldots + a_{n-1} t + a_n$, look for a particular solution as a polynomial of degree $n$: $A_0 t^n + A_1 t^{n-1} + \ldots + A_{n-1} t + A_n$, with the coefficients to be determined.

And on to the final possibility.

Prescription 3 If the right side of the equation (6.1) is a polynomial of degree $n$, times an exponential: $\left(a_0 t^n + a_1 t^{n-1} + \ldots + a_{n-1} t + a_n\right)e^{\alpha t}$, look for a particular solution as a polynomial of degree $n$ times the same exponential: $\left(A_0 t^n + A_1 t^{n-1} + \ldots + A_{n-1} t + A_n\right)e^{\alpha t}$, with the coefficients to be determined.
Example Solve $y'' + y = te^{-2t}$.

We look for a particular solution in the form $Y(t) = (At + B)e^{-2t}$. Compute $Y'(t) = Ae^{-2t} - 2(At + B)e^{-2t} = (-2At + A - 2B)e^{-2t}$, and $Y''(t) = -2Ae^{-2t} - 2(-2At + A - 2B)e^{-2t} = (4At - 4A + 4B)e^{-2t}$. Plug $Y(t)$ in:
$$4Ate^{-2t} - 4Ae^{-2t} + 4Be^{-2t} + Ate^{-2t} + Be^{-2t} = te^{-2t}\,.$$
Equate the coefficients in $te^{-2t}$, and in $e^{-2t}$:
$$5A = 1$$
$$-4A + 5B = 0\,,$$
which gives $A = 1/5$ and $B = 4/25$, so that $Y(t) = (t/5 + 4/25)e^{-2t}$. Answer: $y(t) = (t/5 + 4/25)e^{-2t} + c_1\sin t + c_2\cos t$.
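The coefficients found by Prescription 3 can be verified by finite differences (an added sketch, not part of the text):

```python
# Check that Y(t) = (t/5 + 4/25) e^{-2t} is a particular solution
# of y'' + y = t e^{-2t}.
import math

Y = lambda t: (t / 5 + 4 / 25) * math.exp(-2 * t)

def residual(t, h=1e-5):
    ypp = (Y(t + h) - 2 * Y(t) + Y(t - h)) / h**2
    return ypp + Y(t) - t * math.exp(-2 * t)

print(max(abs(residual(t / 5)) for t in range(10)))  # should be near 0
```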
Example Solve $y'' - 4y = t^2 + 3e^t$, $y(0) = 0$, $y'(0) = 2$.

Reasoning as in the beginning of this section, we can find $Y(t)$ as a sum of two pieces, $Y(t) = Y_1(t) + Y_2(t)$, where $Y_1(t)$ is any particular solution of
$$y'' - 4y = t^2$$
and $Y_2(t)$ is any particular solution of
$$y'' - 4y = 3e^t\,.$$
Using our prescriptions, $Y_1(t) = -\frac{1}{4}t^2 - \frac{1}{8}$, and $Y_2(t) = -e^t$. The general solution is $y(t) = -\frac{1}{4}t^2 - \frac{1}{8} - e^t + c_1\sinh 2t + c_2\cosh 2t$. We then find $c_2 = \frac{9}{8}$ and $c_1 = \frac{3}{2}$. Answer: $y(t) = -\frac{1}{4}t^2 - \frac{1}{8} - e^t + \frac{3}{2}\sinh 2t + \frac{9}{8}\cosh 2t$.
2.7 Non-homogeneous Equations. When one needs something extra.

Example Solve $y'' + y = \sin t$.

We try $Y = A\sin t + B\cos t$, according to Prescription 1. Plugging it in, we get
$$0 = \sin t\,,$$
which is impossible. Why did we strike out? Because $A\sin t$ and $B\cos t$ are solutions of the corresponding homogeneous equation. Let us multiply the initial guess by $t$, and try $Y = At\sin t + Bt\cos t$. Calculate $Y' = A\sin t + At\cos t + B\cos t - Bt\sin t$ and $Y'' = 2A\cos t - At\sin t - 2B\sin t - Bt\cos t$, and plug $Y$ into our equation, and simplify:
$$2A\cos t - At\sin t - 2B\sin t - Bt\cos t + At\sin t + Bt\cos t = \sin t;$$
$$2A\cos t - 2B\sin t = \sin t\,.$$
We conclude that $A = 0$ and $B = -1/2$, so that $Y = -\frac{1}{2}t\cos t$. Answer: $y = -\frac{1}{2}t\cos t + c_1\sin t + c_2\cos t$.
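The resonant particular solution can be spot-checked numerically (an added sketch, not part of the text):

```python
# Check that Y(t) = -(1/2) t cos t is a particular solution of y'' + y = sin t.
import math

Y = lambda t: -0.5 * t * math.cos(t)

def residual(t, h=1e-5):
    ypp = (Y(t + h) - 2 * Y(t) + Y(t - h)) / h**2
    return ypp + Y(t) - math.sin(t)

print(max(abs(residual(t / 3)) for t in range(1, 10)))  # should be near 0
```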
This example prompts us to change the strategy. We now begin by solving the corresponding homogeneous equation. The Prescriptions from the previous section are now the Initial Guesses for the particular solution.

The Whole Story If any of the functions appearing in the Initial Guess is a solution of the corresponding homogeneous equation, multiply the entire Initial Guess by $t$, and look at the new functions. If still some of them are solutions of the corresponding homogeneous equation, multiply the entire Initial Guess by $t^2$. This is guaranteed to work. (Of course, if none of the functions appearing in the Initial Guess is a solution of the corresponding homogeneous equation, then the Initial Guess works.)

In the preceding example, the Initial Guess involved the functions $\sin t$ and $\cos t$, both solutions of the corresponding homogeneous equation. After we multiplied the Initial Guess by $t$, the new functions $t\sin t$ and $t\cos t$ are not solutions of the corresponding homogeneous equation, and the new guess works.

Example Solve $y'' + 4y' = 2t - 5$.

The fundamental set of the corresponding homogeneous equation consists of the functions $y_1(t) = 1$ and $y_2(t) = e^{-4t}$. The Initial Guess, according to Prescription 2, $Y(t) = At + B$, is a linear combination of the functions $t$ and $1$, and the second of these functions is a solution of the corresponding homogeneous equation. We multiply the Initial Guess by $t$, obtaining $Y(t) = t(At + B) = At^2 + Bt$. This is a linear combination of $t^2$ and $t$, both of which are not solutions of the corresponding homogeneous equation. We plug it in:
$$2A + 4(2At + B) = 2t - 5\,.$$
Equating the coefficients in $t$, and the constant terms, we have
$$8A = 2$$
$$2A + 4B = -5\,.$$
I.e., $A = 1/4$, and $B = -11/8$. The particular solution is $Y(t) = t^2/4 - 11t/8$. Answer: $y(t) = t^2/4 - 11t/8 + c_1 + c_2 e^{-4t}$.
Example Solve $y'' + 2y' + y = te^{-t}$.

The fundamental set of the corresponding homogeneous equation consists of the functions $y_1(t) = e^{-t}$ and $y_2(t) = te^{-t}$. The Initial Guess, according to Prescription 3, $(At + B)e^{-t} = Ate^{-t} + Be^{-t}$, is a linear combination of the same two functions. We multiply the Initial Guess by $t$: $t(At + B)e^{-t} = At^2 e^{-t} + Bte^{-t}$. The new guess is a linear combination of the functions $t^2 e^{-t}$ and $te^{-t}$. The first of these functions is not a solution of the corresponding homogeneous equation, but the second one is. Therefore, we multiply the Initial Guess by $t^2$: $Y = t^2(At + B)e^{-t} = At^3 e^{-t} + Bt^2 e^{-t}$. It is convenient to write $Y = (At^3 + Bt^2)e^{-t}$, and we plug it in:
$$(6At + 2B)e^{-t} - 2(3At^2 + 2Bt)e^{-t} + (At^3 + Bt^2)e^{-t}$$
$$+ 2\left[(3At^2 + 2Bt)e^{-t} - (At^3 + Bt^2)e^{-t}\right] + (At^3 + Bt^2)e^{-t} = te^{-t}\,.$$
We divide both sides by $e^{-t}$, and simplify:
$$6At = t$$
$$2B = 0\,.$$
I.e., $A = 1/6$ and $B = 0$. Answer: $y(t) = \frac{1}{6}t^3 e^{-t} + c_1 e^{-t} + c_2 te^{-t}$.
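The doubly-resonant particular solution can be verified by finite differences (an added sketch, not part of the text):

```python
# Check that Y(t) = t^3 e^{-t} / 6 is a particular solution
# of y'' + 2y' + y = t e^{-t}.
import math

Y = lambda t: t**3 * math.exp(-t) / 6

def residual(t, h=1e-5):
    yp = (Y(t + h) - Y(t - h)) / (2 * h)
    ypp = (Y(t + h) - 2 * Y(t) + Y(t - h)) / h**2
    return ypp + 2 * yp + Y(t) - t * math.exp(-t)

print(max(abs(residual(t / 4)) for t in range(1, 13)))  # should be near 0
```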
2.8 Variation of Parameters

We now present a more general way to find a particular solution of the non-homogeneous equation
$$y'' + p(t)y' + g(t)y = f(t)\,. \qquad (8.1)$$
We assume that the corresponding homogeneous equation
$$y'' + p(t)y' + g(t)y = 0 \qquad (8.2)$$
has a fundamental solution set $y_1(t)$ and $y_2(t)$ (so that $c_1 y_1(t) + c_2 y_2(t)$ is the general solution of (8.2)). We now look for a particular solution of (8.1) in the form
$$Y(t) = u_1(t)y_1(t) + u_2(t)y_2(t)\,, \qquad (8.3)$$
with some functions $u_1(t)$ and $u_2(t)$, that we shall choose to satisfy the following two equations
$$u_1'(t)y_1(t) + u_2'(t)y_2(t) = 0 \qquad (8.4)$$
$$u_1'(t)y_1'(t) + u_2'(t)y_2'(t) = f(t)\,.$$
This is a system of two linear equations to find $u_1'(t)$ and $u_2'(t)$. Its determinant $W(t) = \begin{vmatrix} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{vmatrix}$ is the Wronskian of $y_1(t)$ and $y_2(t)$. We know that $W(t) \neq 0$ for all $t$, because $y_1(t)$ and $y_2(t)$ form a fundamental solution set. By Cramer's rule (or by elimination) we solve
$$u_1'(t) = -\frac{f(t)y_2(t)}{W(t)} \qquad (8.5)$$
$$u_2'(t) = \frac{f(t)y_1(t)}{W(t)}\,.$$
We then compute $u_1(t)$ and $u_2(t)$ by integration. It turns out that then $Y(t) = u_1(t)y_1(t) + u_2(t)y_2(t)$ is a particular solution of the non-homogeneous equation (8.1). To verify that, we compute the derivatives of $Y(t)$, and plug them into our equation (8.1):
$$Y'(t) = u_1'(t)y_1(t) + u_2'(t)y_2(t) + u_1(t)y_1'(t) + u_2(t)y_2'(t) = u_1(t)y_1'(t) + u_2(t)y_2'(t)\,.$$
The first two terms have disappeared, thanks to the first formula in (8.4). Next:
$$Y''(t) = u_1'(t)y_1'(t) + u_2'(t)y_2'(t) + u_1(t)y_1''(t) + u_2(t)y_2''(t) = f(t) + u_1(t)y_1''(t) + u_2(t)y_2''(t)\,,$$
by using the second formula in (8.4). Then
$$Y'' + pY' + gY = f(t) + u_1 y_1'' + u_2 y_2'' + p\left(u_1 y_1' + u_2 y_2'\right) + g\left(u_1 y_1 + u_2 y_2\right)$$
$$= f(t) + u_1\left(y_1'' + py_1' + gy_1\right) + u_2\left(y_2'' + py_2' + gy_2\right) = f(t)\,,$$
which means that $Y(t)$ is a particular solution. (Both brackets are zero, because $y_1(t)$ and $y_2(t)$ are solutions of the homogeneous equation.)

In practice, one begins by writing down the formulas (8.5).
Example $y'' + y = \tan t$.

The fundamental set of the corresponding homogeneous equation is $y_1 = \sin t$ and $y_2 = \cos t$. Their Wronskian is $W(t) = \begin{vmatrix} \sin t & \cos t \\ \cos t & -\sin t \end{vmatrix} = -\sin^2 t - \cos^2 t = -1$, and the formulas (8.5) give
$$u_1'(t) = \tan t\,\cos t = \sin t$$
$$u_2'(t) = -\tan t\,\sin t = -\frac{\sin^2 t}{\cos t}\,.$$
Integrating, $u_1(t) = -\cos t$. Here we set the constant of integration to zero, because we only need one particular solution. Integrating the second formula,
$$u_2(t) = -\int \frac{\sin^2 t}{\cos t}\,dt = -\int \frac{1 - \cos^2 t}{\cos t}\,dt = -\int (\sec t - \cos t)\,dt = -\ln|\sec t + \tan t| + \sin t\,.$$
We have a particular solution
$$Y(t) = -\cos t\,\sin t + \left(-\ln|\sec t + \tan t| + \sin t\right)\cos t = -\cos t\,\ln|\sec t + \tan t|\,.$$
Answer: $y(t) = -\cos t\,\ln|\sec t + \tan t| + c_1\sin t + c_2\cos t$.
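The particular solution produced by variation of parameters can be spot-checked numerically (an added sketch, not part of the text):

```python
# Check that Y(t) = -cos t * ln|sec t + tan t| is a particular solution
# of y'' + y = tan t, on an interval where tan t is defined.
import math

def Y(t):
    return -math.cos(t) * math.log(abs(1 / math.cos(t) + math.tan(t)))

def residual(t, h=1e-5):
    ypp = (Y(t + h) - 2 * Y(t) + Y(t - h)) / h**2
    return ypp + Y(t) - math.tan(t)

print(max(abs(residual(t)) for t in [-1.0, -0.4, 0.3, 1.1]))  # near 0
```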
2.9 Convolution Integral

2.9.1 Differentiating Integrals

If $g(t, s)$ is a function of two variables, then the integral $\int_a^b g(t, s)\,ds$ depends on a parameter $t$ ($s$ is a dummy variable). We differentiate this integral as follows:
$$\frac{d}{dt}\int_a^b g(t, s)\,ds = \int_a^b g_t(t, s)\,ds\,,$$
where $g_t(t, s)$ denotes the partial derivative in $t$. To differentiate the integral $\int_a^t g(s)\,ds$, we use the Fundamental Theorem of Calculus:
$$\frac{d}{dt}\int_a^t g(s)\,ds = g(t)\,.$$
The integral $\int_a^t g(t, s)\,ds$ depends on a parameter $t$, and it has $t$ as its upper limit. One can show that
$$\frac{d}{dt}\int_a^t g(t, s)\,ds = \int_a^t g_t(t, s)\,ds + g(t, t)\,,$$
i.e., in effect we combine the previous two formulas. Let now $z(t)$ and $f(t)$ be some functions; then the last formula gives
$$\frac{d}{dt}\int_a^t z(t - s)f(s)\,ds = \int_a^t z'(t - s)f(s)\,ds + z(0)f(t)\,. \qquad (9.1)$$
2.9.2 Yet Another Way to Compute a Particular Solution

We consider again the non-homogeneous equation
$$y'' + py' + gy = f(t)\,, \qquad (9.2)$$
where $p$ and $g$ are given numbers, and $f(t)$ is a given function. Let $z(t)$ denote the solution of the corresponding homogeneous equation, satisfying $z(0) = 0$ and $z'(0) = 1$, i.e., $z(t)$ satisfies
$$z'' + pz' + gz = 0, \quad z(0) = 0, \; z'(0) = 1\,. \qquad (9.3)$$
Then we can write a particular solution of (9.2) as a convolution integral
$$Y(t) = \int_0^t z(t - s)f(s)\,ds\,. \qquad (9.4)$$
To justify this formula, we compute the derivatives of $Y(t)$, by using the formula (9.1), and the initial conditions $z(0) = 0$ and $z'(0) = 1$:
$$Y'(t) = \int_0^t z'(t - s)f(s)\,ds + z(0)f(t) = \int_0^t z'(t - s)f(s)\,ds\,;$$
$$Y''(t) = \int_0^t z''(t - s)f(s)\,ds + z'(0)f(t) = \int_0^t z''(t - s)f(s)\,ds + f(t)\,.$$
Then
$$Y''(t) + pY'(t) + gY(t) = \int_0^t \left[z''(t - s) + pz'(t - s) + gz(t - s)\right]f(s)\,ds + f(t) = f(t)\,.$$
Here the integral is zero, because $z(t)$ satisfies the homogeneous equation at all values of its argument $t$, including $t - s$.
Example Let us now revisit the equation
\[
y'' + y = \tan t\,.
\]
Solving
\[
z'' + z = 0\,, \quad z(0)=0\,, \; z'(0)=1\,,
\]
we get $z(t) = \sin t$. Then
\[
Y(t) = \int_0^t \sin(t-s)\tan s\,ds\,.
\]
Writing $\sin(t-s) = \sin t\cos s - \cos t\sin s$, and integrating, it is easy to obtain the old solution.
What are the advantages of the integral formula (9.4)? It is certainly easy to remember, and it gives us one of the simplest examples of the Duhamel Principle, one of the fundamental ideas of Mathematical Physics.
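A quick numerical check of the convolution formula (9.4) can be reassuring. The sketch below (ours, not from the text; all function names are our own) compares the integral $\int_0^t \sin(t-s)\tan s\,ds$, computed by the trapezoid rule, against the antiderivative $\sin t - \cos t\,\ln(\sec t + \tan t)$ that results from expanding $\sin(t-s)$ and integrating by hand:

```python
import math

def convolution_solution(t, n=2000):
    # Y(t) = integral_0^t z(t-s) f(s) ds with z = sin, f = tan (formula (9.4)),
    # evaluated by the composite trapezoid rule.
    h = t / n
    total = 0.5 * (math.sin(t) * math.tan(0.0) + math.sin(0.0) * math.tan(t))
    for k in range(1, n):
        s = k * h
        total += math.sin(t - s) * math.tan(s)
    return h * total

def closed_form(t):
    # Obtained by writing sin(t-s) = sin t cos s - cos t sin s and integrating:
    # Y(t) = sin t - cos t * ln(sec t + tan t), with Y(0) = Y'(0) = 0.
    return math.sin(t) - math.cos(t) * math.log(1.0 / math.cos(t) + math.tan(t))

t = 1.0
print(abs(convolution_solution(t) - closed_form(t)))  # close to zero
```

The two values agree to the accuracy of the quadrature, which is one way to see that (9.4) really does produce a particular solution.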
2.10 Applications of Second Order Equations
We present several examples. You will find many more applications in your science and engineering textbooks. The importance of differential equations was already clear to Newton, the great scientist and co-inventor of Calculus.
2.10.1 Vibrating Spring
Let $y = y(t)$ be the displacement of a spring from its natural position. Its motion is governed by Newton's second law
\[
ma = f\,.
\]
The acceleration $a = y''(t)$. We assume that the only force $f$, acting on the spring, is its own restoring force, which by Hooke's law is $f = -ky$, for small displacements. Here the physical constant $k > 0$ describes the stiffness (or the hardness) of the spring. We have
\[
m y'' = -ky\,.
\]
Divide both sides by the mass $m$ of the spring, and denote $k/m = \omega^2$ (or $\omega = \sqrt{k/m}$):
\[
y'' + \omega^2 y = 0\,.
\]
The general solution, $y(t) = c_1\cos\omega t + c_2\sin\omega t$, gives us the harmonic motion of the spring. Let us write the solution $y(t)$ as
\[
y(t) = \sqrt{c_1^2 + c_2^2}\left( \frac{c_1}{\sqrt{c_1^2+c_2^2}}\cos\omega t + \frac{c_2}{\sqrt{c_1^2+c_2^2}}\sin\omega t \right) = A\left( \frac{c_1}{A}\cos\omega t + \frac{c_2}{A}\sin\omega t \right)\,,
\]
where we denoted $A = \sqrt{c_1^2+c_2^2}$. Observe that $\left(\frac{c_1}{A}\right)^2 + \left(\frac{c_2}{A}\right)^2 = 1$, which means that we can find an angle $\delta$, such that $\cos\delta = \frac{c_1}{A}$, and $\sin\delta = \frac{c_2}{A}$. Then our solution takes the form
\[
y(t) = A(\cos\omega t\cos\delta + \sin\omega t\sin\delta) = A\cos(\omega t - \delta)\,.
\]
I.e., the harmonic motion is just a shifted cosine curve of amplitude $A = \sqrt{c_1^2 + c_2^2}$. It is a periodic function of period $\frac{2\pi}{\omega}$. We see that the larger $\omega$ is, the smaller is the period. So $\omega$ gives us the frequency of oscillations. The constants $c_1$ and $c_2$ can be computed, once the initial displacement $y(0)$ and the initial velocity $y'(0)$ are given.
Example Solving the initial value problem
\[
y'' + 4y = 0\,, \quad y(0) = 3\,, \; y'(0) = -8\,,
\]
one gets $y(t) = 3\cos 2t - 4\sin 2t$. This solution is a periodic function, with the amplitude $5$, the frequency $2$, and the period $\pi$.
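The conversion to amplitude-phase form is easy to automate; in the sketch below (ours, with our own function names) the two-argument arctangent recovers $A$ and $\delta$ from $c_1$, $c_2$, and the identity $A\cos(\omega t - \delta) = c_1\cos\omega t + c_2\sin\omega t$ is checked on the example just solved:

```python
import math

def amplitude_phase(c1, c2):
    # Rewrite c1*cos(wt) + c2*sin(wt) as A*cos(wt - delta):
    # A = sqrt(c1^2 + c2^2), cos(delta) = c1/A, sin(delta) = c2/A.
    A = math.hypot(c1, c2)
    delta = math.atan2(c2, c1)
    return A, delta

# The example y = 3*cos(2t) - 4*sin(2t): amplitude 5.
A, delta = amplitude_phase(3.0, -4.0)
w = 2.0
for t in [0.0, 0.7, 2.3]:
    lhs = 3.0 * math.cos(w * t) - 4.0 * math.sin(w * t)
    rhs = A * math.cos(w * t - delta)
    assert abs(lhs - rhs) < 1e-12  # the two forms agree pointwise
print(A)  # 5.0
```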
The equation
\[
y'' + \omega^2 y = f(t) \tag{10.1}
\]
models the case when an external force, whose acceleration is equal to $f(t)$, is applied to the spring. Indeed, the equation of motion is now
\[
m y'' = -ky + m f(t)\,,
\]
from which we get (10.1), dividing by $m$. Let us consider a case of periodic forcing term
\[
y'' + \omega^2 y = a\sin\nu t\,, \tag{10.2}
\]
where $a > 0$ is the amplitude of the external force, and $\nu$ is the forcing frequency. If $\nu \ne \omega$, we look for a particular solution of this non-homogeneous equation in the form $Y(t) = A\sin\nu t$. Plugging in, gives $A = \frac{a}{\omega^2 - \nu^2}$. The general solution of (10.2), which is
\[
y(t) = \frac{a}{\omega^2 - \nu^2}\sin\nu t + c_1\cos\omega t + c_2\sin\omega t\,,
\]
is a superposition of harmonic motion and a response term to the external force. We see that the solution is still bounded, although not periodic anymore (it is called quasiperiodic).
A very important case is when $\nu = \omega$, i.e., when the forcing frequency is the same as the natural frequency. Then the particular solution has the form $Y(t) = At\sin\omega t + Bt\cos\omega t$, i.e., we will have unbounded solutions, as time $t$ increases. This is the case of resonance, when a bounded external force produces an unbounded response. Large displacements will break the spring. Resonance is a serious engineering concern.
Example $y'' + 4y = \sin 2t$, $y(0) = 0$, $y'(0) = 1$.
Both the natural and forcing frequencies are equal to $2$. The fundamental set of the corresponding homogeneous equation consists of $\sin 2t$ and $\cos 2t$. We search for a particular solution in the form $Y(t) = At\sin 2t + Bt\cos 2t$. As before, we compute $Y(t) = -\frac14 t\cos 2t$. Then the general solution is $y(t) = -\frac14 t\cos 2t + c_1\sin 2t + c_2\cos 2t$. Using the initial conditions, we calculate $c_1 = \frac58$ and $c_2 = 0$, so that $y(t) = -\frac14 t\cos 2t + \frac58\sin 2t$. The term $-\frac14 t\cos 2t$ introduces oscillations, whose amplitude $\frac14 t$ increases without bound with time $t$. (It is customary to call this unbounded term a secular term, which seems to imply that the harmonic terms are divine.)
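The secular growth can also be seen numerically. The sketch below (ours) integrates the initial value problem with a hand-rolled classical Runge-Kutta scheme and compares the result against the closed-form solution obtained above:

```python
import math

def rk4_resonance(T, n=4000):
    # Integrate y'' + 4y = sin(2t), y(0)=0, y'(0)=1 as a first-order system
    # (y, v) with the classical fourth-order Runge-Kutta scheme.
    def f(t, y, v):
        return v, math.sin(2 * t) - 4 * y
    h = T / n
    t, y, v = 0.0, 0.0, 1.0
    for _ in range(n):
        k1y, k1v = f(t, y, v)
        k2y, k2v = f(t + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = f(t + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = f(t + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

def exact(t):
    # y = -(1/4) t cos 2t + (5/8) sin 2t, from the example above.
    return -0.25 * t * math.cos(2 * t) + 0.625 * math.sin(2 * t)

print(abs(rk4_resonance(10.0) - exact(10.0)))  # close to zero
```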
2.10.2 Problems
I. Solve the non-homogeneous equations
1. $2y'' - 3y' + y = 2\sin t$.  Ans. $y = c_1 e^{t/2} + c_2 e^{t} + \frac{3\cos t}{5} - \frac{1}{5}\sin t$.
2. $y'' + 4y' + 5y = -\sin 2t + 3\cos 2t$.
Ans. $y = \frac{23}{65}\sin 2t + \frac{11}{65}\cos 2t + c_1 e^{-2t}\cos t + c_2 e^{-2t}\sin t$.
3. $y'' - 2y' + y = t + 2$.  Ans. $y = t + 4 + c_1 e^{t} + c_2 t e^{t}$.
4. $y'' + 4y = t^2 - 3t + 1$.  Ans. $y = \frac18\left(2t^2 - 6t + 1\right) + c_1\cos 2t + c_2\sin 2t$.
5. $y'' - 4y = t e^{3t}$, $y(0) = 0$, $y'(0) = 1$.
Ans. $y = \frac15 t e^{3t} - \frac{13}{50} e^{-2t} + \frac{e^{2t}}{2} - \frac{6}{25} e^{3t}$.
6. $4y'' + 8y' + 5y = 2t - \sin 3t$.
Ans. $y = \frac25 t - \frac{16}{25} + \frac{31}{1537}\sin 3t + \frac{24}{1537}\cos 3t + c_1 e^{-t}\cos\frac{t}{2} + c_2 e^{-t}\sin\frac{t}{2}$.
7. $y'' + y = 2e^{4t} + t^2$.
Ans. $y = \frac{2}{17} e^{4t} + t^2 - 2 + c_1\cos t + c_2\sin t$.
II. Solve the non-homogeneous equations
1. $y'' + y = 2\cos t$.  Ans. $y = t\sin t + c_1\cos t + c_2\sin t$.
2. $y'' + y' - 6y = e^{2t}$.  Ans. $y = \frac15 t e^{2t} + c_1 e^{-3t} + c_2 e^{2t}$.
3. $y'' + 2y' + y = 2e^{3t}$.  Ans. $y = \frac{e^{3t}}{8} + c_1 e^{-t} + c_2 t e^{-t}$.
4. $y'' - 2y' + y = t e^{t}$.  Ans. $y = \frac{t^3 e^{t}}{6} + c_1 e^{t} + c_2 t e^{t}$.
5. $y'' - 4y' = 2 - \cos t$.  Ans. $y = -\frac{t}{2} + \frac{\cos t}{17} + \frac{4\sin t}{17} + c_1 e^{4t} + c_2$.
6. $2y'' - y' - y = t + 1$.  Ans. $y = -t + c_1 e^{-\frac12 t} + c_2 e^{t}$.
III. Write down the form in which one should look for a particular solution, but DO NOT compute the coefficients
1. $y'' - 4y' + 4y = 3t e^{2t} + \sin 4t - t^2$.
2. $y'' - 9y' + 9y = e^{3t} + t e^{4t} + \sin t - 4\cos t$.
IV. Find a particular solution, by using variation of parameters, and then write down the general solution
1. $y'' - 2y' + y = \dfrac{e^{t}}{1 + t^2}$.  Ans. $y = c_1 e^{t} + c_2 t e^{t} - \frac12 e^{t}\ln(1 + t^2) + t e^{t}\tan^{-1} t$.
2. $y'' - 2y' + y = 2e^{t}$.  Ans. $y = t^2 e^{t} + c_1 e^{t} + c_2 t e^{t}$.
3. $y'' + 2y' + y = \dfrac{e^{-t}}{t}$.  Ans. $y = -t e^{-t} + t e^{-t}\ln t + c_1 e^{-t} + c_2 t e^{-t}$.
4. $y'' + y = \sec t$.  Ans. $y = c_1\cos t + c_2\sin t + \ln(\cos t)\cos t + t\sin t$.
5. $y'' + 2y' + y = 2e^{3t}$.  Ans. $y = \frac18 e^{3t} + c_1 e^{-t} + c_2 t e^{-t}$.
V. Verify that the functions $y_1(t)$ and $y_2(t)$ form a fundamental solution set for the corresponding homogeneous equation, and then use variation of parameters, to find the general solution
1. $t^2 y'' - 2y = t^3 - 1$.  $y_1(t) = t^2$, $y_2(t) = t^{-1}$.
Hint: begin by putting the equation into the right form.
Ans. $y = \frac12 + \frac14 t^3 + c_1 t^2 + c_2\frac{1}{t}$.
2. $t y'' - (1+t)y' + y = t^2 e^{3t}$.  $y_1(t) = t + 1$, $y_2(t) = e^{t}$.
Ans. $y = \frac{1}{12} e^{3t}(2t - 1) + c_1(t+1) + c_2 e^{t}$.
3*. $(3t^3 + t)y'' + 2y' - 6ty = 4 - 12t^2$.  $y_1(t) = \frac{1}{t}$, $y_2(t) = t^2 + 1$.
Hint: Use Mathematica to compute the integrals.
Ans. $y = 2t + c_1\frac{1}{t} + c_2(t^2 + 1)$.
VI.
1. Use the convolution integral, to solve
\[
y'' + y = t^2\,, \quad y(0) = 0\,, \; y'(0) = 1\,.
\]
Ans. $y = t^2 - 2 + 2\cos t + \sin t$.
2. Show that $y(t) = \displaystyle\int_0^t \frac{(t-s)^{n-1}}{(n-1)!}\,f(s)\,ds$ gives a solution of the $n$-th order equation
\[
y^{(n)} = f(t)\,.
\]
(I.e., this formula lets you compute $n$ consecutive anti-derivatives at once.)
Hint: Use (9.1).
VII.
1. A spring has natural frequency $\omega = 2$. Its initial displacement is $-1$, and initial velocity is $2$. Find its displacement $y(t)$ at any time $t$. What is the amplitude of the oscillations?
Answer. $y(t) = \sin 2t - \cos 2t$, $A = \sqrt{2}$.
2. A spring of mass $2$ lb is hanging from the ceiling, and its stiffness constant is $k = 18$. Initially the spring is pushed up $3$ inches, and is given initial velocity of $2$ inch/sec, directed downward. Find the displacement of the spring $y(t)$ at any time $t$, and the amplitude of oscillations. Assume that the $y$ axis is directed down from the equilibrium position.
Answer. $y(t) = \frac23\sin 3t - 3\cos 3t$, $A = \frac{\sqrt{85}}{3}$.
3. A spring has natural frequency $\omega = 3$. An outside force, with acceleration $f(t) = 2\cos\nu t$, is applied to the spring. Here $\nu$ is a constant, $\nu \ne 3$. Find the displacement of the spring $y(t)$ at any time $t$. What happens to the amplitude of oscillations in case $\nu$ is close to $3$?
Answer. $y(t) = \frac{2}{9 - \nu^2}\cos\nu t + c_1\cos 3t + c_2\sin 3t$.
4. Assume that $\nu = 3$ in the preceding problem. Find the displacement of the spring $y(t)$ at any time $t$. What happens to the spring in the long run?
Answer. $y(t) = \frac13 t\sin 3t + c_1\cos 3t + c_2\sin 3t$.
5. Consider dissipative (or damped) motion of a spring
\[
y'' + \alpha y' + 9y = 0\,.
\]
Write down the solution, assuming that $\alpha < 6$. What is the smallest value of the dissipation constant $\alpha$, which will prevent the spring from oscillating?
6. Consider forced vibrations of a dissipative spring
\[
y'' + \alpha y' + 9y = \sin 3t\,.
\]
Write down the general solution for
(i) $\alpha = 0$;
(ii) $\alpha \ne 0$.
What does friction do to the resonance?
2.10.3 Meteor Approaching the Earth
Let $r = r(t)$ be the distance of the meteor from the center of the Earth. The motion of the meteor due to the Earth's gravitation is governed by Newton's law of gravitation
\[
m r'' = -\frac{mMG}{r^2}\,. \tag{10.3}
\]
Here $m$ is the mass of the meteor, $M$ denotes the mass of the Earth, and $G$ is the universal gravitational constant. Let $a$ be the radius of the Earth. If an object is sitting on the Earth's surface, then $r = a$, and $r'' = -g$, so that from (10.3)
\[
g = \frac{MG}{a^2}\,.
\]
I.e., $MG = g a^2$, and we can rewrite (10.3) as
\[
r'' = -g\,\frac{a^2}{r^2}\,. \tag{10.4}
\]
We could solve this equation by letting $r' = v(r)$, because the independent variable $t$ is missing. Instead, let us multiply both sides of the equation by $r'$, and write it as
\[
r'' r' + g\,\frac{a^2}{r^2}\, r' = 0\,; \quad \frac{d}{dt}\left[ \frac12 r'^2 - g\,\frac{a^2}{r} \right] = 0\,;
\]
\[
\frac12 r'^2(t) - g\,\frac{a^2}{r(t)} = c\,. \tag{10.5}
\]
I.e., the energy of the meteor $E(t) = \frac12 r'^2(t) - g\frac{a^2}{r(t)}$ remains constant at all time. (That is why the gravitational force field is called conservative.) We can now express $r'(t) = -\sqrt{2c + \frac{2g a^2}{r(t)}}$, and calculate the motion of the meteor $r(t)$ by separation of variables. However, as we are not riding on the meteor, this seems to be not worth the effort. What really concerns us is the velocity of the impact, as the meteor hits the Earth.
Let us assume that the meteor begins its journey with zero velocity $r'(0) = 0$, and at a distance so large that we may assume that $r(0) = \infty$. Then the energy of the meteor at time $t = 0$ is zero, $E(0) = 0$. But the energy remains constant at all time, so that the energy at the time of impact is also zero. At the time of impact $r = a$, and the velocity of impact we denote by $v$, i.e., $r' = -v$. Then from (10.5)
\[
\frac12 v^2 - g\,\frac{a^2}{a} = 0\,,
\]
i.e.,
\[
v = \sqrt{2ga}\,.
\]
Food for thought: the velocity of impact is the same, as it would have been achieved by free fall from height $a$.
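Plugging in rough Earth values (our own illustration; the text keeps $g$ and $a$ symbolic) gives a familiar number:

```python
import math

# Illustrative values (not from the text): g in m/s^2, Earth radius a in meters.
g = 9.8
a = 6.37e6

v = math.sqrt(2 * g * a)  # impact velocity from (10.5) with E = 0
print(v)  # roughly 1.1e4 m/s, of the order of the Earth's escape velocity
```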
Let us now re-visit the harmonic oscillations of a spring:
\[
y'' + \omega^2 y = 0\,.
\]
Similarly to the meteor case, multiply this equation by $y'$:
\[
y'' y' + \omega^2 y y' = 0\,; \quad \frac{d}{dt}\left[ \frac12 y'^2 + \frac12\omega^2 y^2 \right] = 0\,;
\]
\[
E(t) = \frac12 y'^2 + \frac12\omega^2 y^2 = \text{constant}\,.
\]
With the energy $E(t)$ conserved, no wonder the motion of the spring was periodic.
2.10.4 Damped Oscillations
We add an extra term to our model of spring motion:
\[
m y'' = -ky - k_1 y'\,,
\]
where $k_1$ is another positive constant. I.e., we assume there is an additional force, which is directed opposite and is proportional to the velocity of motion $y'$. This can be either air resistance or friction. Denoting $k_1/m = \alpha$ and $k/m = \omega^2$, we have
\[
y'' + \alpha y' + \omega^2 y = 0\,. \tag{10.6}
\]
Let us see what effect the extra term $\alpha y'$ has on the energy of the spring, $E(t) = \frac12 y'^2 + \frac12\omega^2 y^2$. We differentiate the energy, and express from the equation (10.6), $y'' = -\alpha y' - \omega^2 y$, obtaining
\[
E'(t) = y' y'' + \omega^2 y y' = y'(-\alpha y' - \omega^2 y) + \omega^2 y y' = -\alpha y'^2\,.
\]
We see that the energy decreases for all time. We have dissipative motion. We expect the amplitude of oscillations to decrease with time. We call $\alpha$ the dissipation coefficient.
To solve the equation (10.6), we write down its characteristic equation
\[
r^2 + \alpha r + \omega^2 = 0\,.
\]
Its roots are
\[
r = \frac{-\alpha \pm \sqrt{\alpha^2 - 4\omega^2}}{2}\,.
\]
There are several cases to consider.
(i) $\alpha^2 - 4\omega^2 < 0$. (I.e., the dissipation coefficient $\alpha$ is small.) The roots are complex. If we denote $\alpha^2 - 4\omega^2 = -q^2$, the roots are $-\frac{\alpha}{2} \pm i\frac{q}{2}$. The general solution
\[
y(t) = c_1 e^{-\frac{\alpha}{2}t}\sin\frac{q}{2}t + c_2 e^{-\frac{\alpha}{2}t}\cos\frac{q}{2}t
\]
exhibits damped oscillations (the amplitude of oscillations tends to zero, as $t \to \infty$).
(ii) $\alpha^2 - 4\omega^2 = 0$. There is a double real root $-\frac{\alpha}{2}$. The general solution
\[
y(t) = c_1 e^{-\frac{\alpha}{2}t} + c_2 t e^{-\frac{\alpha}{2}t}
\]
tends to zero as $t \to \infty$, without oscillating.
(iii) $\alpha^2 - 4\omega^2 > 0$. The roots are real and distinct. If we denote $\alpha^2 - 4\omega^2 = q^2$, the roots are $r_1 = \frac{-\alpha - q}{2}$ and $r_2 = \frac{-\alpha + q}{2}$. Both are negative, because $q < \alpha$. The general solution
\[
y(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t}
\]
tends to zero as $t \to \infty$, without oscillating. We see that large enough dissipation kills the oscillations.
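The three cases are told apart by the sign of the discriminant $\alpha^2 - 4\omega^2$; a small sketch (ours, with the usual engineering names for the regimes):

```python
def damping_regime(alpha, omega):
    # Classify y'' + alpha*y' + omega^2*y = 0 by the discriminant
    # alpha^2 - 4*omega^2 of the characteristic equation r^2 + alpha*r + omega^2 = 0.
    disc = alpha**2 - 4 * omega**2
    if disc < 0:
        return "underdamped"        # case (i): complex roots, damped oscillations
    if disc == 0:
        return "critically damped"  # case (ii): double root -alpha/2
    return "overdamped"             # case (iii): two negative real roots

print(damping_regime(1.0, 3.0))   # underdamped
print(damping_regime(6.0, 3.0))   # critically damped
print(damping_regime(10.0, 3.0))  # overdamped
```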
2.11 Further Applications
2.11.1 Forced and Damped Oscillations
It turns out that even a little damping is enough to avoid resonance. We consider the model
\[
y'' + \alpha y' + \omega^2 y = \sin\nu t\,. \tag{11.1}
\]
Our theory tells us to look for a particular solution in the form $Y(t) = A_1\cos\nu t + A_2\sin\nu t$. Once the constants $A_1$ and $A_2$ are determined, we can use trigonometric identities to put this solution into the form
\[
Y(t) = A\sin(\nu t - \gamma)\,, \tag{11.2}
\]
with constants $A$ and $\gamma$ depending on $A_1$ and $A_2$. So, let us look for a particular solution directly in the form (11.2). We transform the forcing term as a linear combination of $\sin(\nu t - \gamma)$ and $\cos(\nu t - \gamma)$:
\[
\sin\nu t = \sin((\nu t - \gamma) + \gamma) = \sin(\nu t - \gamma)\cos\gamma + \cos(\nu t - \gamma)\sin\gamma\,.
\]
We now plug $Y(t)$ into the equation (11.1):
\[
-A\nu^2\sin(\nu t - \gamma) + A\alpha\nu\cos(\nu t - \gamma) + A\omega^2\sin(\nu t - \gamma) = \sin(\nu t - \gamma)\cos\gamma + \cos(\nu t - \gamma)\sin\gamma\,.
\]
Equating the coefficients in $\sin(\nu t - \gamma)$ and $\cos(\nu t - \gamma)$, we have
\[
A(\omega^2 - \nu^2) = \cos\gamma\,; \quad A\alpha\nu = \sin\gamma\,. \tag{11.3}
\]
We now square both of these equations, and add:
\[
A^2(\omega^2 - \nu^2)^2 + A^2\alpha^2\nu^2 = 1\,,
\]
which allows us to calculate $A$:
\[
A = \frac{1}{\sqrt{(\omega^2 - \nu^2)^2 + \alpha^2\nu^2}}\,.
\]
To calculate $\gamma$, we divide the second equation in (11.3) by the first:
\[
\tan\gamma = \frac{\alpha\nu}{\omega^2 - \nu^2}\,, \quad \text{i.e.,} \quad \gamma = \tan^{-1}\frac{\alpha\nu}{\omega^2 - \nu^2}\,.
\]
We have computed a particular solution
\[
Y(t) = \frac{1}{\sqrt{(\omega^2 - \nu^2)^2 + \alpha^2\nu^2}}\,\sin(\nu t - \gamma)\,, \quad \text{where } \gamma = \tan^{-1}\frac{\alpha\nu}{\omega^2 - \nu^2}\,.
\]
We now make a physically reasonable assumption that the damping coefficient $\alpha$ is small. Then the characteristic equation of the homogeneous equation corresponding to (11.1),
\[
r^2 + \alpha r + \omega^2 = 0\,,
\]
has a pair of complex roots $-\frac{\alpha}{2} \pm i\beta$, where $\beta = \frac{\sqrt{4\omega^2 - \alpha^2}}{2}$. The general solution of (11.1) is then
\[
y(t) = c_1 e^{-\frac{\alpha}{2}t}\cos\beta t + c_2 e^{-\frac{\alpha}{2}t}\sin\beta t + \frac{1}{\sqrt{(\omega^2 - \nu^2)^2 + \alpha^2\nu^2}}\,\sin(\nu t - \gamma)\,.
\]
The first two terms of the solution are called the transient oscillations, because they quickly tend to zero, as time goes on (sic transit gloria mundi). So the last term, $Y(t)$, describes the oscillations in the long run. We see that oscillations of $Y(t)$ are bounded, no matter what is the frequency $\nu$ of the forcing term. The resonance is gone! Moreover, the largest amplitude occurs not at $\nu = \omega$, but at a slightly smaller value of $\nu$. Indeed, the maximal amplitude happens when the quantity in the denominator, i.e., $(\omega^2 - \nu^2)^2 + \alpha^2\nu^2$, is the smallest. This quantity is a quadratic in $\nu^2$. Its minimum occurs when $\nu^2 = \omega^2 - \frac{\alpha^2}{2}$, i.e., when $\nu = \sqrt{\omega^2 - \frac{\alpha^2}{2}}$.
2.11.2 Discontinuous Forcing Term
We now consider equations with a jumping force function. A simple function with a jump at some number $c$ is the Heaviside step function
\[
u_c(t) = \begin{cases} 0 & \text{if } t \le c \\ 1 & \text{if } t > c \end{cases}\,.
\]
Example Solve for $t > 0$ the problem
\[
y'' + 4y = f(t)\,, \quad y(0) = 0\,, \; y'(0) = 3\,,
\]
where $f(t) = \begin{cases} 0 & \text{if } t \le \pi/4 \\ t + 1 & \text{if } t > \pi/4 \end{cases}$. (We see that no external force is applied before the time $t = \pi/4$, and the force is equal to $t+1$ afterwards. The forcing function can be written as $f(t) = u_{\pi/4}(t)(t+1)$.)
The problem naturally breaks down into two parts. When $t \le \pi/4$, we are solving
\[
y'' + 4y = 0\,.
\]
Its general solution is $y(t) = c_1\cos 2t + c_2\sin 2t$. From the initial conditions we find that $c_1 = 0$, $c_2 = \frac32$, i.e.,
\[
y(t) = \frac32\sin 2t\,, \quad \text{for } t \le \pi/4\,. \tag{11.4}
\]
At later times, when $t > \pi/4$, our equation is
\[
y'' + 4y = t + 1\,. \tag{11.5}
\]
But what are the new initial conditions at the time $t = \pi/4$? Clearly, we can get them from (11.4):
\[
y(\pi/4) = \frac32\,, \quad y'(\pi/4) = 0\,. \tag{11.6}
\]
The general solution of (11.5) is $y(t) = \frac14 t + \frac14 + c_1\cos 2t + c_2\sin 2t$. Calculating $c_1$ and $c_2$ from the initial conditions in (11.6), we find that $y(t) = \frac14 t + \frac14 + \frac18\cos 2t + \left(\frac54 - \frac{\pi}{16}\right)\sin 2t$. The answer:
\[
y(t) = \begin{cases} \frac32\sin 2t & \text{if } t \le \pi/4 \\ \frac14 t + \frac14 + \frac18\cos 2t + \left(\frac54 - \frac{\pi}{16}\right)\sin 2t & \text{if } t > \pi/4 \end{cases}\,.
\]
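As a check (ours, not part of the text), one can integrate straight through the jump numerically and compare with the piecewise formula just assembled:

```python
import math

PI4 = math.pi / 4

def y_exact(t):
    # The piecewise solution assembled in the example above.
    if t <= PI4:
        return 1.5 * math.sin(2 * t)
    return t/4 + 0.25 + 0.125 * math.cos(2*t) + (1.25 - math.pi/16) * math.sin(2*t)

def rk4(T, n=8000):
    # y'' + 4y = f(t), y(0) = 0, y'(0) = 3, with the jumping force f.
    def f(t):
        return 0.0 if t <= PI4 else t + 1.0
    def rhs(t, y, v):
        return v, f(t) - 4 * y
    h = T / n
    t, y, v = 0.0, 0.0, 3.0
    for _ in range(n):
        k1y, k1v = rhs(t, y, v)
        k2y, k2v = rhs(t + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = rhs(t + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = rhs(t + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return y

print(abs(rk4(2.0) - y_exact(2.0)))  # close to zero
```

Note that the solution and its first derivative are continuous at $t = \pi/4$ even though the force jumps there; only $y''$ inherits the jump.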
2.11.3 Oscillations of a Pendulum
Assume that a small ball of mass $m$ is attached to one end of a rigid rod of length $l$, while the other end of the rod is attached to the ceiling. Assume also that the mass of the rod itself is so small, that it can be neglected. Clearly, the ball will move on an arch of a circle of radius $l$. Let $\theta = \theta(t)$ be the angle that the pendulum makes with the vertical line, at the time $t$. We assume that $\theta > 0$ if the pendulum is to the left of the vertical line, and $\theta < 0$ to the right of the vertical. If the pendulum moves by an angle of $\theta$ radians, it covers the distance $l\theta = l\theta(t)$. It follows that $l\theta'(t)$ gives its velocity, and $l\theta''(t)$ the acceleration. The gravity force is acting on the mass. Only the projection of the force on the tangent line to the circle is active, which is $-mg\cos\left(\frac{\pi}{2} - \theta\right) = -mg\sin\theta$. Newton's second law of motion gives
\[
m l\theta''(t) = -mg\sin\theta\,.
\]
(Minus, because the force works to decrease the $\theta$, when $\theta > 0$.) If we denote $g/l = \omega^2$, we have the pendulum equation
\[
\theta''(t) + \omega^2\sin\theta(t) = 0\,.
\]
If the oscillation angle $\theta(t)$ is small, then $\sin\theta(t) \approx \theta(t)$, giving us again the harmonic oscillations, this time as a model of small oscillations of a pendulum.
2.11.4 Sympathetic Oscillations
Suppose we have two pendulums hanging from the ceiling, and they are coupled (connected) through a weightless spring. Let $x_1$ denote the angle the left pendulum makes with the vertical, which we take to be positive if the pendulum is to the left of the vertical, and $x_1 < 0$ if the pendulum is to the right of the vertical. Let $x_2$ be the angle the right pendulum makes with the vertical, with the same assumptions on its sign. We assume that $x_1$ and $x_2$ are small in absolute value, which means that each pendulum separately can be modeled by a harmonic oscillator. For coupled pendulums the model is
\[
x_1'' + \omega^2 x_1 = -k(x_1 - x_2) \tag{11.7}
\]
\[
x_2'' + \omega^2 x_2 = k(x_1 - x_2)\,,
\]
where $k > 0$ is a physical constant, which measures how stiff is the coupling spring. Indeed, if $x_1 > x_2$, then the coupling spring is extended, so that the spring tries to contract, and in doing so it pulls back the left pendulum, while accelerating the right pendulum. (Correspondingly, the forcing term is negative in the first equation, and positive in the second one.) In case $x_1 < x_2$ the spring is compressed, and as it tries to expand, it accelerates the first (left) pendulum, and slows down the second (right) pendulum. We solve the system (11.7) together with the simple initial conditions
\[
x_1(0) = a\,, \; x_1'(0) = 0\,, \; x_2(0) = 0\,, \; x_2'(0) = 0\,, \tag{11.8}
\]
which correspond to the first pendulum beginning with a small given displacement angle $a$, and no initial velocity, while the second pendulum is at rest.
We now add the equations in (11.7), and call $z_1 = x_1 + x_2$. We get
\[
z_1'' + \omega^2 z_1 = 0\,, \quad z_1(0) = a\,, \; z_1'(0) = 0\,.
\]
Solution of this problem is, of course, $z_1(t) = a\cos\omega t$. Subtracting the second equation from the first, and calling $z_2 = x_1 - x_2$, we have $z_2'' + \omega^2 z_2 = -2k z_2$, or
\[
z_2'' + (\omega^2 + 2k) z_2 = 0\,, \quad z_2(0) = a\,, \; z_2'(0) = 0\,.
\]
Denoting $\omega^2 + 2k = \omega_1^2$, i.e., $\omega_1 = \sqrt{\omega^2 + 2k}$, we have $z_2(t) = a\cos\omega_1 t$. Clearly, $z_1 + z_2 = 2x_1$. I.e.,
\[
x_1 = \frac{z_1 + z_2}{2} = \frac{a\cos\omega t + a\cos\omega_1 t}{2} = a\cos\frac{\omega_1 - \omega}{2}t\,\cos\frac{\omega_1 + \omega}{2}t\,, \tag{11.9}
\]
using a trig identity for the last step. Similarly,
\[
x_2 = \frac{z_1 - z_2}{2} = \frac{a\cos\omega t - a\cos\omega_1 t}{2} = a\sin\frac{\omega_1 - \omega}{2}t\,\sin\frac{\omega_1 + \omega}{2}t\,. \tag{11.10}
\]
We now analyze the solutions, given by the formulas (11.9) and (11.10). If $k$ is small, i.e., if the coupling is weak, then $\omega_1$ is close to $\omega$, and so their difference $\omega_1 - \omega$ is small. It follows that both $\cos\frac{\omega_1 - \omega}{2}t$ and $\sin\frac{\omega_1 - \omega}{2}t$ change very slowly with time $t$. We now rewrite the solutions
\[
x_1 = A\cos\frac{\omega_1 + \omega}{2}t\,, \quad \text{and} \quad x_2 = B\sin\frac{\omega_1 + \omega}{2}t\,,
\]
where we regard $A = a\cos\frac{\omega_1 - \omega}{2}t$ and $B = a\sin\frac{\omega_1 - \omega}{2}t$ as the slowly varying amplitudes. So we think of the pendulums as oscillating with the frequency $\frac{\omega_1 + \omega}{2}$, and with slowly varying amplitudes $A$ and $B$. Observe that when $\cos\frac{\omega_1 - \omega}{2}t$ is zero, and the first pendulum is at rest, the amplitude factor of the second pendulum satisfies $|\sin\frac{\omega_1 - \omega}{2}t| = 1$, obtaining its largest possible value. We have therefore a complete exchange of energy: when one of the pendulums is doing the maximal work, the other one is resting. We have sympathy between the pendulums. Observe also that $A^2 + B^2 = a^2$. This means that at all times $t$ the point $(x_1(t), x_2(t))$ lies on a circle of radius $a$ in the $(x_1, x_2)$ plane.
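The product-to-sum steps behind (11.9) and (11.10), and the relation $A^2 + B^2 = a^2$, are easy to verify by sampling; a sketch with arbitrary parameter values (ours):

```python
import math

a, omega, k = 1.0, 3.0, 0.2
omega1 = math.sqrt(omega**2 + 2 * k)

def x1(t):
    # Formula (11.9): slowly varying amplitude times a fast oscillation.
    return a * math.cos((omega1 - omega)/2 * t) * math.cos((omega1 + omega)/2 * t)

def x2(t):
    # Formula (11.10).
    return a * math.sin((omega1 - omega)/2 * t) * math.sin((omega1 + omega)/2 * t)

for t in [0.0, 1.3, 7.7, 20.0]:
    # Product-to-sum identities recover (z1 +- z2)/2:
    assert abs(x1(t) - 0.5 * (a*math.cos(omega*t) + a*math.cos(omega1*t))) < 1e-12
    assert abs(x2(t) - 0.5 * (a*math.cos(omega*t) - a*math.cos(omega1*t))) < 1e-12
    # The slowly varying amplitudes satisfy A^2 + B^2 = a^2:
    A = a * math.cos((omega1 - omega)/2 * t)
    B = a * math.sin((omega1 - omega)/2 * t)
    assert abs(A*A + B*B - a*a) < 1e-12
print("ok")
```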
2.12 Oscillations of a Spring Subject to a Periodic Force
2.12.1 Fourier Series
For vectors in three dimensions the central notion is that of the scalar product (also known as inner product or dot product). Namely, if
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \quad \text{and} \quad y = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}\,,
\]
then their scalar product is
\[
(x, y) = x_1 y_1 + x_2 y_2 + x_3 y_3\,.
\]
The scalar product can be used to compute the length of a vector, $||x|| = \sqrt{(x,x)}$, or the angle $\theta$ between the vectors $x$ and $y$:
\[
\cos\theta = \frac{(x,y)}{||x||\,||y||}\,.
\]
In particular, the vectors $x$ and $y$ are orthogonal if and only if $(x,y) = 0$. If $i$, $j$ and $k$ are the unit coordinate vectors, then $(x,i) = x_1$, $(x,j) = x_2$ and $(x,k) = x_3$. Writing $x = x_1 i + x_2 j + x_3 k$, we express
\[
x = (x,i)i + (x,j)j + (x,k)k\,. \tag{12.1}
\]
This formula gives probably the simplest example of Fourier Series.
We shall now consider functions $f(t)$ that are periodic, with period $2\pi$. Such functions show us everything they have got on any interval of length $2\pi$. So let us consider them on the interval $(-\pi, \pi)$. Given two functions $f(t)$ and $g(t)$, we define their scalar product as
\[
(f, g) = \int_{-\pi}^{\pi} f(t)\,g(t)\,dt\,.
\]
We call the functions orthogonal if $(f,g) = 0$. For example, $(\sin t, \cos t) = \int_{-\pi}^{\pi}\sin t\cos t\,dt = 0$, so that $\sin t$ and $\cos t$ are orthogonal. (Observe that orthogonality of these functions has nothing to do with the angle at which their graphs intersect.) The notion of scalar product allows us to define the norm of a function
\[
||f|| = \sqrt{(f,f)} = \sqrt{\int_{-\pi}^{\pi} f^2(t)\,dt}\,.
\]
For example,
\[
||\sin t|| = \sqrt{\int_{-\pi}^{\pi}\sin^2 t\,dt} = \sqrt{\int_{-\pi}^{\pi}\left(\frac12 - \frac12\cos 2t\right) dt} = \sqrt{\pi}\,.
\]
Similarly, for any positive integer $n$, $||\sin nt|| = \sqrt{\pi}$, $||\cos nt|| = \sqrt{\pi}$, and $||1|| = \sqrt{2\pi}$.
We now consider an infinite set of functions
\[
1,\; \cos t,\; \cos 2t, \ldots, \cos nt, \ldots,\; \sin t,\; \sin 2t, \ldots, \sin nt, \ldots\,.
\]
They are all mutually orthogonal. This is because
\[
(1, \cos nt) = \int_{-\pi}^{\pi}\cos nt\,dt = 0\,;
\]
\[
(1, \sin nt) = \int_{-\pi}^{\pi}\sin nt\,dt = 0\,;
\]
\[
(\cos nt, \cos mt) = \int_{-\pi}^{\pi}\cos nt\cos mt\,dt = 0\,, \quad \text{for all } n \ne m\,;
\]
\[
(\sin nt, \sin mt) = \int_{-\pi}^{\pi}\sin nt\sin mt\,dt = 0\,, \quad \text{for all } n \ne m\,;
\]
\[
(\sin nt, \cos mt) = \int_{-\pi}^{\pi}\sin nt\cos mt\,dt = 0\,, \quad \text{for any } n \text{ and } m\,.
\]
The last three integrals are computed by using trig identities. If we divide these functions by their norms, we shall obtain an orthonormal set of functions
\[
\frac{1}{\sqrt{2\pi}},\; \frac{\cos t}{\sqrt{\pi}},\; \frac{\cos 2t}{\sqrt{\pi}}, \ldots, \frac{\cos nt}{\sqrt{\pi}}, \ldots,\; \frac{\sin t}{\sqrt{\pi}},\; \frac{\sin 2t}{\sqrt{\pi}}, \ldots, \frac{\sin nt}{\sqrt{\pi}}, \ldots\,,
\]
which is similar to the vectors $i$, $j$ and $k$. Similarly to the formula for vectors (12.1), we decompose an arbitrary function $f(t)$ as
\[
f(t) = \alpha_0\frac{1}{\sqrt{2\pi}} + \sum_{n=1}^{\infty}\left( \alpha_n\frac{\cos nt}{\sqrt{\pi}} + \beta_n\frac{\sin nt}{\sqrt{\pi}} \right)\,,
\]
where
\[
\alpha_0 = \left(f(t), \frac{1}{\sqrt{2\pi}}\right) = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi} f(t)\,dt\,;
\]
\[
\alpha_n = \left(f(t), \frac{\cos nt}{\sqrt{\pi}}\right) = \frac{1}{\sqrt{\pi}}\int_{-\pi}^{\pi} f(t)\cos nt\,dt\,;
\]
\[
\beta_n = \left(f(t), \frac{\sin nt}{\sqrt{\pi}}\right) = \frac{1}{\sqrt{\pi}}\int_{-\pi}^{\pi} f(t)\sin nt\,dt\,.
\]
It is customary to denote $a_0 = \alpha_0/\sqrt{2\pi}$, $a_n = \alpha_n/\sqrt{\pi}$, and $b_n = \beta_n/\sqrt{\pi}$, so that the Fourier Series takes the final form
\[
f(t) = a_0 + \sum_{n=1}^{\infty}\left( a_n\cos nt + b_n\sin nt \right)\,,
\]
with
\[
a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\,dt\,; \quad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos nt\,dt\,; \quad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin nt\,dt\,.
\]
Example Let $f(t)$ be a function of period $2\pi$, which is equal to $t + \pi$ on the interval $(-\pi, \pi)$. If you look at the graph of this function, you will see why it has a scary name: the saw-tooth function. It is not defined at the points $\pm\pi, \pm 3\pi, \pm 5\pi, \ldots$, but this does not affect the integrals that we need to compute. Compute
\[
a_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi}(t + \pi)\,dt = \frac{1}{4\pi}(t + \pi)^2\Big|_{-\pi}^{\pi} = \pi\,.
\]
Using guess-and-check,
\[
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}(t + \pi)\cos nt\,dt = \frac{1}{\pi}\left[ (t + \pi)\frac{\sin nt}{n} + \frac{\cos nt}{n^2} \right]_{-\pi}^{\pi} = 0\,,
\]
because $\sin n\pi = 0$, and cosine is an even function. Similarly,
\[
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}(t + \pi)\sin nt\,dt = \frac{1}{\pi}\left[ -(t + \pi)\frac{\cos nt}{n} + \frac{\sin nt}{n^2} \right]_{-\pi}^{\pi} = -\frac{2}{n}\cos n\pi = -\frac{2}{n}(-1)^n = \frac{2}{n}(-1)^{n+1}\,.
\]
Observe that $\cos n\pi$ is equal to $1$ for even $n$, and to $-1$ for odd $n$, i.e., $\cos n\pi = (-1)^n$. We have obtained a Fourier series for the function $f(t)$, which is defined on $(-\infty, \infty)$ (with the exception of the points $\pm\pi, \pm 3\pi, \ldots$). Restricting to the interval $(-\pi, \pi)$, we have
\[
t + \pi = \pi + \sum_{n=1}^{\infty}\frac{2}{n}(-1)^{n+1}\sin nt\,, \quad \text{for } -\pi < t < \pi\,.
\]
It might look like we did not accomplish much by expressing a simple function $t + \pi$ through an infinite series. However, we can now express solutions of differential equations through Fourier series.
(Figure: the saw-tooth function.)
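One can watch the partial sums of this series approach $t + \pi$ at an interior point; a sketch (ours):

```python
import math

def sawtooth_partial_sum(t, N):
    # Partial sum of  t + pi = pi + sum_{n>=1} (2/n)(-1)^{n+1} sin(nt)  on (-pi, pi).
    s = math.pi
    for n in range(1, N + 1):
        s += (2.0 / n) * (-1)**(n + 1) * math.sin(n * t)
    return s

t = 1.0
for N in [10, 100, 1000]:
    print(N, abs(sawtooth_partial_sum(t, N) - (t + math.pi)))
# the error shrinks as N grows (slowly, since the function has jumps)
```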
2.12.2 Vibrations of a Spring Subject to a Periodic Force
As before, the model is
\[
y'' + \omega^2 y = f(t)\,,
\]
where $y = y(t)$ is the displacement, $\omega > 0$ is a constant (the natural frequency), and $f(t)$ is a given function of period $2\pi$, the external force. This equation also models oscillations in electrical circuits. Expressing $f(t)$ by its Fourier series, we have
\[
y'' + \omega^2 y = a_0 + \sum_{n=1}^{\infty}\left( a_n\cos nt + b_n\sin nt \right)\,.
\]
Let us assume that $\omega \ne n$, for any integer $n$ (to avoid resonance). According to our theory, we look for a particular solution in the form $Y(t) = A_0 + \sum_{n=1}^{\infty}(A_n\cos nt + B_n\sin nt)$. Plugging this in, we find
\[
Y(t) = \frac{a_0}{\omega^2} + \sum_{n=1}^{\infty}\left( \frac{a_n}{\omega^2 - n^2}\cos nt + \frac{b_n}{\omega^2 - n^2}\sin nt \right)\,.
\]
The general solution is then
\[
y(t) = \frac{a_0}{\omega^2} + \sum_{n=1}^{\infty}\left( \frac{a_n}{\omega^2 - n^2}\cos nt + \frac{b_n}{\omega^2 - n^2}\sin nt \right) + c_1\cos\omega t + c_2\sin\omega t\,.
\]
We see that the coefficients in the $n$-th harmonics, i.e., in $\cos nt$ and $\sin nt$, are large, provided that the natural frequency $\omega$ is selected close to $n$. That is basically what happens, when one is turning the tuning knob on a radio set. (The knob controls $\omega$, while your favorite station broadcasts at a frequency $n$.)
2.13 Euler's Equation
2.13.1 Preliminaries
What is the meaning of $3^{\sqrt{2}}$? Or, more generally, what is the definition of the function $t^r$, where $r$ is any real number? Here it is: $t^r = e^{\ln t^r} = e^{r\ln t}$. We see that the function $t^r$ is defined only for $t > 0$. The function $|t|^r$ is defined for all $t \ne 0$, but what is the derivative of this function?
More generally, if $f(t)$ is a differentiable function, the function $f(|t|)$ is differentiable for all $t \ne 0$ (because $|t|$ is not differentiable at $t = 0$). Let us define a step function $\mathrm{sign}(t) = \begin{cases} 1 & \text{if } t > 0 \\ -1 & \text{if } t < 0 \end{cases}$. Observe that
\[
\frac{d}{dt}|t| = \mathrm{sign}(t)\,, \quad \text{for all } t \ne 0\,,
\]
as follows by considering separately the cases $t > 0$ and $t < 0$.
By the Chain Rule, we have
\[
\frac{d}{dt} f(|t|) = f'(|t|)\,\mathrm{sign}(t)\,, \quad \text{for all } t \ne 0\,. \tag{13.1}
\]
In particular, $\frac{d}{dt}|t|^r = r|t|^{r-1}\mathrm{sign}(t)$.
Observe also that $t\,\mathrm{sign}(t) = |t|$, and $(\mathrm{sign}(t))^2 = 1$, for all $t \ne 0$.
2.13.2 The Important Class of Equations
Euler's Equation has the form ($y = y(t)$)
\[
t^2 y'' + a t y' + b y = 0\,,
\]
where $a$ and $b$ are given numbers. We look for solution in the form $y = t^r$, with the constant $r$ to be determined. Assume first that $t > 0$. Plugging in,
\[
t^2\, r(r-1) t^{r-2} + a t\, r t^{r-1} + b t^r = 0\,.
\]
Dividing by the positive quantity $t^r$,
\[
r(r-1) + ar + b = 0 \tag{13.2}
\]
gives us a characteristic equation. This is a quadratic equation, and so there are three possibilities with respect to its roots.
Case 1 There are two real and distinct roots $r_1 \ne r_2$. Then $t^{r_1}$ and $t^{r_2}$ are two solutions, which are not constant multiples of each other, and so the general solution (valid for $t > 0$) is
\[
y(t) = c_1 t^{r_1} + c_2 t^{r_2}\,.
\]
If $r_1$ is either an integer, or a fraction with an odd denominator, then $t^{r_1}$ is also defined for $t < 0$. If the same is true for $r_2$, then the above general solution is valid for all $t$. For other $r_1$ or $r_2$, this solution is not even defined for $t < 0$. In such a case, using the differentiation formula (13.1), we see that $y(t) = |t|^{r_1}$ gives a solution of Euler's equation, which is valid for all $t \ne 0$. Just plug it in, and observe that $\mathrm{sign}(t)\,\mathrm{sign}(t) = 1$, and $t\,\mathrm{sign}(t) = |t|$. So, if we need a general solution valid for all $t \ne 0$, we can use
\[
y(t) = c_1|t|^{r_1} + c_2|t|^{r_2}\,.
\]
Example $t^2 y'' + 2t y' - 2y = 0$.
The characteristic equation
\[
r(r-1) + 2r - 2 = 0
\]
has roots $r_1 = -2$ and $r_2 = 1$. Solution: $y(t) = c_1 t^{-2} + c_2 t$, valid for all $t \ne 0$. Another general solution valid for all $t \ne 0$ is $y(t) = c_1|t|^{-2} + c_2|t| = c_1 t^{-2} + c_2|t|$. This is a truly different solution! Why such an unusual complexity? If one divides the equation by $t^2$, then the functions $p(t) = 2/t$ and $g(t) = -2/t^2$ from our general theory are both discontinuous at $t = 0$. We have a singularity at $t = 0$, and in general, solution $y(t)$ is not defined at $t = 0$ (as we saw in the above example). However, when solving initial value problems, it does not matter which form of general solution one takes. For example, if we prescribe some initial conditions at $t = -1$, then both forms of the general solution are valid only on the interval $(-\infty, 0)$, and on that interval both forms are equivalent.
We now turn to the cases of equal roots, and of complex roots. We could proceed similarly to the linear equations with constant coefficients. Instead, we make a change of the independent variable from $t$ to a new variable $s$, by letting $t = e^s$, or $s = \ln t$. By the chain rule,
\[
\frac{dy}{dt} = \frac{dy}{ds}\frac{ds}{dt} = \frac{dy}{ds}\frac{1}{t}\,.
\]
Using the product rule, and then the chain rule,
\[
\frac{d^2 y}{dt^2} = \frac{d}{dt}\left(\frac{dy}{ds}\right)\frac{1}{t} - \frac{dy}{ds}\frac{1}{t^2} = \frac{d^2 y}{ds^2}\frac{ds}{dt}\frac{1}{t} - \frac{dy}{ds}\frac{1}{t^2} = \frac{d^2 y}{ds^2}\frac{1}{t^2} - \frac{dy}{ds}\frac{1}{t^2}\,.
\]
Then Euler's equation becomes
\[
\frac{d^2 y}{ds^2} - \frac{dy}{ds} + a\frac{dy}{ds} + by = 0\,.
\]
This is a linear equation with constant coefficients! We know how to solve it, for any $a$ and $b$. Let us use primes again to denote the derivatives in $s$. Then we have
\[
y'' + (a-1)y' + by = 0\,. \tag{13.3}
\]
Its characteristic equation
\[
r^2 + (a-1)r + b = 0 \tag{13.4}
\]
is exactly the same as (13.2).
Case 2 $r_1$ is a double root of the characteristic equation (13.2), i.e., $r_1$ is a double root of (13.4). Then $y = c_1 e^{r_1 s} + c_2 s e^{r_1 s}$ is the general solution of (13.3). Returning to the original variable $t$, by substituting $s = \ln t$:
\[
y(t) = c_1 t^{r_1} + c_2 t^{r_1}\ln t\,.
\]
This is the general solution of Euler's equation, valid for $t > 0$. More generally,
\[
y(t) = c_1|t|^{r_1} + c_2|t|^{r_1}\ln|t|
\]
gives us the general solution of Euler's equation, valid for all $t \ne 0$.
Case 3 $p \pm iq$ are complex roots of the characteristic equation (13.2). Then $y = c_1 e^{ps}\sin qs + c_2 e^{ps}\cos qs$ is the general solution of (13.3). Returning to the original variable $t$, by substituting $s = \ln t$, we get the general solution of Euler's equation
\[
y(t) = c_1 t^{p}\sin(q\ln t) + c_2 t^{p}\cos(q\ln t)\,,
\]
valid for $t > 0$. Replacing $t$ by $|t|$ will give us the general solution of Euler's equation, valid for all $t \ne 0$.
Example $t^2 y'' - 3t y' + 4y = 0$, $t > 0$.
The characteristic equation $r(r-1) - 3r + 4 = 0$ has a double root $r = 2$. The general solution: $y = c_1 t^2 + c_2 t^2\ln t$.
Example $t^2 y'' - 3t y' + 4y = 0$, $y(1) = 4$, $y'(1) = 7$.
Using the general solution from the preceding problem, we calculate $c_1 = 4$ and $c_2 = -1$. Answer: $y = 4t^2 - t^2\ln t$.
Example $t^2 y'' + t y' + 4y = 0$, $y(-1) = 0$, $y'(-1) = 3$.
The characteristic equation $r(r-1) + r + 4 = 0$ has a pair of complex roots $\pm 2i$. Here $p = 0$, $q = 2$, and the general solution, valid for both positive and negative $t$, is
\[
y(t) = c_1\sin(2\ln|t|) + c_2\cos(2\ln|t|)\,.
\]
From the first initial condition, $y(-1) = c_2 = 0$, so that $y(t) = c_1\sin(2\ln|t|)$. To use the second initial condition, we need to differentiate $y(t)$ at $t = -1$. Observe that for negative $t$, $|t| = -t$, and so $y(t) = c_1\sin(2\ln(-t))$. Then $y'(t) = c_1\cos(2\ln(-t))\,\frac{2}{t}$, and $y'(-1) = -2c_1 = 3$. I.e., $c_1 = -\frac32$, and $y(t) = -\frac32\sin(2\ln|t|)$.
Example $t^2 y'' - 3t y' + 4y = t - 2$.
This is a non-homogeneous equation. We look for a particular solution in the form $Y = At + B$. Plugging this in, we compute $Y = t - \frac12$. The fundamental set of the corresponding homogeneous equation is given by $t^2$ and $t^2\ln t$, as we saw above. The general solution is $y = t - \frac12 + c_1 t^2 + c_2 t^2\ln t$.
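The recipe "solve $r(r-1) + ar + b = 0$, then assemble the solution" is easy to mechanize; the sketch below (ours) also double-checks the answer $y = 4t^2 - t^2\ln t$ from the second example by finite differences:

```python
import math

def euler_char_roots(a, b):
    # Roots of r(r-1) + a r + b = r^2 + (a-1) r + b = 0, cf. (13.2) and (13.4).
    disc = (a - 1)**2 - 4 * b
    if disc >= 0:
        return ((1 - a + math.sqrt(disc)) / 2, (1 - a - math.sqrt(disc)) / 2)
    return (complex((1 - a) / 2,  math.sqrt(-disc) / 2),
            complex((1 - a) / 2, -math.sqrt(-disc) / 2))

# t^2 y'' - 3 t y' + 4 y = 0 has a = -3, b = 4: a double root r = 2.
print(euler_char_roots(-3, 4))  # (2.0, 2.0)

# Check y = 4 t^2 - t^2 ln t numerically against the equation, for t > 0.
def residual(t, h=1e-4):
    y = lambda u: 4 * u**2 - u**2 * math.log(u)
    yp  = (y(t + h) - y(t - h)) / (2 * h)            # central difference for y'
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2    # central difference for y''
    return t**2 * ypp - 3 * t * yp + 4 * y(t)

print(abs(residual(2.0)))  # close to zero
```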
2.14 Linear Equations of Order Higher Than Two
2.14.1 The Polar Form of Complex Numbers
For a complex number x + iy, one can use the point (x, y) to represent it.
This turns the usual plane into the complex plane. The point (x, y) can also
be identified by its polar coordinates (r, θ). We shall always take r > 0.
Then
   z = x + iy = r cos θ + i r sin θ = r(cos θ + i sin θ)
gives us a polar form to represent a complex number z. Using Euler's formula,
we can also write z = r e^{iθ}. For example, -2i = 2e^{i 3π/2}, because the
point (0, -2) has polar coordinates (2, 3π/2). Similarly, 1 + i = √2 e^{iπ/4}, and
-1 = e^{iπ} (real numbers are just a particular case of complex ones).
There are infinitely many ways to represent complex numbers using polar
coordinates: z = r e^{i(θ + 2πm)}, where m is any integer (positive or negative).
We now compute the n-th root(s) of z:
   z^{1/n} = r^{1/n} e^{i(θ/n + 2πm/n)} ,   m = 0, 1, . . . , n - 1 .
Here r^{1/n} is the positive n-th root of the positive number r (the high
school n-th root). When m varies from 0 to n - 1, we get different answers,
and then the roots repeat themselves. There are n complex n-th roots of
any complex number (and in particular, of any real number). All roots lie
on a circle of radius r^{1/n}, and the difference in polar angle between any two
neighbors is 2π/n.
Example  Solve the equation: r^4 + 16 = 0.
We need the four complex roots of -16 = 16 e^{i(π + 2πm)}. We have:
(-16)^{1/4} = 2 e^{i(π/4 + πm/2)}, m = 0, 1, 2, 3. When m = 0, we have 2e^{iπ/4} =
2(cos π/4 + i sin π/4) = √2 + i√2. We compute the other roots similarly. They
come in two complex conjugate pairs: √2 ± i√2 and -√2 ± i√2. In the
complex plane, they all lie on the circle of radius 2, and the difference in polar
angle between any two neighbors is π/2.

[Figure: the four complex roots of -16 on the circle of radius 2]
Example  Solve the equation: r^3 + 8 = 0.
We need the three complex roots of -8. One of them is r1 = -2 = 2e^{iπ},
and the other two lie on the circle of radius 2, at an angle 2π/3 away, i.e.,
r2 = 2e^{iπ/3} = 1 + √3 i, and r3 = 2e^{-iπ/3} = 1 - √3 i. (Alternatively, the
root r = -2 is easy to guess. Then, using the long division, one can factor
r^3 + 8 = (r + 2)(r^2 - 2r + 4), and set the second factor to zero, to find the
other two roots.)
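The roots found in the two examples above can be checked numerically; this sketch (an added illustration, assuming numpy is available) computes them with `numpy.roots`.

```python
# Sketch: compute the complex roots of r^4 + 16 = 0 and r^3 + 8 = 0 numerically.
import numpy as np

quartic = np.roots([1, 0, 0, 0, 16])   # coefficients of r^4 + 16
cubic = np.roots([1, 0, 0, 8])         # coefficients of r^3 + 8

# every root of r^4 + 16 = 0 satisfies r^4 = -16 and lies on the circle |r| = 2
assert all(abs(r**4 + 16) < 1e-8 for r in quartic)
assert all(abs(abs(r) - 2) < 1e-8 for r in quartic)
# r = -2 is among the roots of r^3 + 8 = 0
assert min(abs(r + 2) for r in cubic) < 1e-8
```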
2.14.2 Linear Homogeneous Equations
Let us consider the fourth order equation
   a0 y'''' + a1 y''' + a2 y'' + a3 y' + a4 y = 0 ,    (14.1)
with given numbers a0, a1, a2, a3 and a4. Equations of even higher orders
can be considered similarly. Again, we search for a solution in the form
y(t) = e^{rt}, with a constant r to be determined. Plugging in, and dividing
by the positive exponent e^{rt}, we obtain a characteristic equation
   a0 r^4 + a1 r^3 + a2 r^2 + a3 r + a4 = 0 .    (14.2)
If we have an equation of order n
   a0 y^{(n)} + a1 y^{(n-1)} + . . . + a_{n-1} y' + a_n y = 0 ,    (14.3)
then the corresponding characteristic equation is
   a0 r^n + a1 r^{n-1} + . . . + a_{n-1} r + a_n = 0 .
The Fundamental Theorem of Algebra says that any polynomial of degree
n has n roots in the complex plane, counted according to their multiplicity
(i.e., a double root is counted as two roots, and so on). The characteristic
equation (14.2) has four complex roots.
The theory is similar to the second order case. We need four different
solutions of (14.1), such that no solution is a linear combination of the other
three (for the equation (14.3) we need n different solutions). Every root of
the characteristic equation must pull its weight: if the root is simple, it
brings in one solution; if it is repeated twice, then two solutions (three
solutions, if the root is repeated three times, and so on). The following
cases may occur for the n-th order equation (14.3).
Case 1  r1 is a simple real root. Then it brings e^{r1 t} into the fundamental
set.

Case 2  r1 is a real root repeated s times. Then it brings the following s
solutions into the fundamental set: e^{r1 t}, t e^{r1 t}, . . . , t^{s-1} e^{r1 t}.

Case 3  p + iq and p - iq are simple complex roots. They contribute e^{pt} cos qt
and e^{pt} sin qt.

Case 4  p + iq and p - iq are repeated s times each. They bring the following
2s solutions into the fundamental set: e^{pt} cos qt and e^{pt} sin qt, t e^{pt} cos qt and
t e^{pt} sin qt, . . . , t^{s-1} e^{pt} cos qt and t^{s-1} e^{pt} sin qt.
Example  y'''' - y = 0. The characteristic equation is
   r^4 - 1 = 0 .
We solve it by factoring:
   (r - 1)(r + 1)(r^2 + 1) = 0 .
The roots are 1, -1, i, -i. The general solution: y(t) = c1 e^t + c2 e^{-t} +
c3 cos t + c4 sin t.
Example  y''' - 3y'' + 3y' - y = 0. The characteristic equation is
   r^3 - 3r^2 + 3r - 1 = 0 .
This is a cubic equation. You probably did not study how to solve it by
Cardano's formula. Fortunately, you may remember that the quantity on
the left is an exact cube:
   (r - 1)^3 = 0 .
The root r = 1 is repeated 3 times. The general solution: y(t) = c1 e^t + c2 t e^t +
c3 t^2 e^t.

Let us suppose you did not know the formula for the cube of a difference.
Then one can guess that r = 1 is a root. This means that the cubic polynomial
can be factored, with one factor being r - 1. The other factor is
then found by the long division. The other factor is a quadratic polynomial,
whose roots are easy to find.
Example  y''' - y'' + 3y' + 5y = 0. The characteristic equation is
   r^3 - r^2 + 3r + 5 = 0 .
We need to guess a root. The procedure for guessing a root is a simple one:
try r = 0, r = ±1, r = ±2, and then give up. We see that r = -1 is a root.
It follows that the first factor is r + 1, and the second one is found by the
long division:
   (r + 1)(r^2 - 2r + 5) = 0 .
The roots of the quadratic give us r2 = 1 - 2i and r3 = 1 + 2i. The general
solution: y(t) = c1 e^{-t} + c2 e^t cos 2t + c3 e^t sin 2t.
Example  y'''' + 2y'' + y = 0. The characteristic equation is
   r^4 + 2r^2 + 1 = 0 .
It can be solved by factoring:
   (r^2 + 1)^2 = 0 .
(Or one can set z = r^2, and obtain a quadratic equation for z.) The roots are
i, -i, each repeated twice. The general solution: y(t) = c1 cos t + c2 sin t +
c3 t cos t + c4 t sin t.
Example  y'''' + 16y = 0. The characteristic equation is
   r^4 + 16 = 0 .
Its roots are the four complex roots of -16, computed earlier, i.e., √2 ± i√2
and -√2 ± i√2. The general solution:
   y(t) = c1 e^{√2 t} cos(√2 t) + c2 e^{√2 t} sin(√2 t) + c3 e^{-√2 t} cos(√2 t) + c4 e^{-√2 t} sin(√2 t) .
Example  y^{(5)} + 9y''' = 0. The characteristic equation is
   r^5 + 9r^3 = 0 .
Factoring r^3 (r^2 + 9) = 0, we see that the roots are 0, 0, 0, 3i, -3i. The
general solution: y(t) = c1 + c2 t + c3 t^2 + c4 cos 3t + c5 sin 3t.
2.14.3 Non-Homogeneous Equations
The theory is parallel to the second order case. Again, we need a particular
solution.
Example  y^{(5)} + 9y''' = 3t - sin 2t.
We have just found the general solution of the corresponding homogeneous
equation. We produce a particular solution in the form Y(t) =
Y1(t) + Y2(t), where Y1(t) is a particular solution of y^{(5)} + 9y''' = 3t,
and Y2(t) is a particular solution of y^{(5)} + 9y''' = -sin 2t. We guess that
Y1(t) = At^4, and compute A = 1/72, and that Y2(t) = B cos 2t, with B = -1/40.
We have Y(t) = (1/72) t^4 - (1/40) cos 2t. The general solution:
   y(t) = (1/72) t^4 - (1/40) cos 2t + c1 + c2 t + c3 t^2 + c4 cos 3t + c5 sin 3t .
2.14.4 Problems
I. Solve the non-homogeneous equations with discontinuous forcing function.
1. y'' + 9y = f(t), where f(t) = 0 for t < π and f(t) = t for t > π, y(0) = 0,
y'(0) = -2.
Ans. y(t) = -(2/3) sin 3t, if t ≤ π;
     y(t) = (1/9) t + (π/9) cos 3t - (17/27) sin 3t, if t > π.

2. y'' + y = f(t), where f(t) = 0 for t < π and f(t) = t for t > π, y(0) = 2,
y'(0) = 0.
II. Find the general solution, valid for t > 0.

1. t^2 y'' - 2t y' + 2y = 0.   Ans. y = c1 t + c2 t^2.

2. t^2 y'' + t y' + 4y = 0.   Ans. y = c1 cos(2 ln t) + c2 sin(2 ln t).

3. t^2 y'' + 5t y' + 4y = 0.   Ans. y = c1 t^{-2} + c2 t^{-2} ln t.

4. t^2 y'' + 5t y' + 5y = 0.   Ans. y = c1 t^{-2} cos(ln t) + c2 t^{-2} sin(ln t).

5. t^2 y'' - 3t y' = 0.   Ans. y = c1 + c2 t^4.

6. y'' + (1/(4t^2)) y = 0.   Ans. y = c1 √t + c2 √t ln t.
III. Find the general solution, valid for all t ≠ 0.

1. t^2 y'' + t y' + 4y = 0.   Ans. y = c1 cos(2 ln |t|) + c2 sin(2 ln |t|).

2. 2t^2 y'' - t y' + y = 0.   Ans. y = c1 √|t| + c2 t.

3. 2t^2 y'' - t y' + y = t^2 - 3.
Hint: look for a particular solution as Y = At^2 + Bt + C.
Ans. y = (1/3) t^2 - 3 + c1 √|t| + c2 t.

4. 2t^2 y'' - t y' + y = t^3. Hint: look for a particular solution as Y = At^3.
Ans. y = (1/10) t^3 + c1 √|t| + c2 t.
IV. Solve the initial value problems.

1. t^2 y'' - 2t y' + 2y = 0, y(1) = 2, y'(1) = 5.   Ans. y = -t + 3t^2.

2. t^2 y'' - 3t y' + 4y = 0, y(1) = 1, y'(1) = -2.   Ans. y = t^2 - 4t^2 ln |t|.

3. t^2 y'' + 3t y' - 3y = 0, y(1) = 1, y'(1) = -2.   Ans. y = (3/4) t^{-3} + (1/4) t.

4. t^2 y'' - t y' + 5y = 0, y(1) = 0, y'(1) = 2.   Ans. y = t sin(2 ln t).
V. Solve the polynomial equations.

1. r^3 - 1 = 0.   Ans. Roots are 1, e^{i 2π/3}, e^{i 4π/3}.

2. r^3 + 27 = 0.   Ans. -3, 3/2 - (3√3/2) i, 3/2 + (3√3/2) i.

3. r^4 - 16 = 0.   Ans. ±2 and ±2i.

4. r^3 - 3r^2 + r + 1 = 0.   Ans. 1, 1 - √2, 1 + √2.

5. r^4 + 1 = 0.   Ans. e^{iπ/4}, e^{i 3π/4}, e^{i 5π/4}, e^{i 7π/4}.

6. r^4 + 4 = 0.   Ans. 1 + i, 1 - i, -1 + i, -1 - i.

7. r^4 + 8r^2 + 16 = 0.   Ans. 2i and -2i are both double roots.

8. r^4 + 5r^2 + 4 = 0.   Ans. ±i and ±2i.
VI. Find the general solution.

1. y''' - y = 0.   Ans. y = c1 e^t + c2 e^{-t/2} cos(√3/2 t) + c3 e^{-t/2} sin(√3/2 t).

2. y''' - 3y'' + y' + y = 0.   Ans. y = c1 e^t + c2 e^{(1-√2)t} + c3 e^{(1+√2)t}.

3. y^{(4)} - 8y'' + 16y = 0.   Ans. y = c1 e^{2t} + c2 e^{-2t} + c3 t e^{2t} + c4 t e^{-2t}.

4. y^{(4)} + 8y'' + 16y = 0.   Ans. y = c1 cos 2t + c2 sin 2t + c3 t cos 2t + c4 t sin 2t.

5. y^{(4)} + y = 0.
Ans. y = c1 e^{t/√2} cos(t/√2) + c2 e^{t/√2} sin(t/√2) + c3 e^{-t/√2} cos(t/√2) + c4 e^{-t/√2} sin(t/√2).

6. y''' - y = t^2.   Ans. y = -t^2 + c1 e^t + c2 e^{-t/2} cos(√3/2 t) + c3 e^{-t/2} sin(√3/2 t).

7. y^{(6)} - y'' = 0.   Ans. y = c1 + c2 t + c3 e^t + c4 e^{-t} + c5 cos t + c6 sin t.

8. y^{(8)} - y^{(6)} = sin t.
Ans. y = (1/2) sin t + c1 + c2 t + c3 t^2 + c4 t^3 + c5 t^4 + c6 t^5 + c7 e^t + c8 e^{-t}.

9. y^{(4)} + 4y = 4t^2 - 1.
Ans. y = t^2 - 1/4 + c1 e^t cos t + c2 e^t sin t + c3 e^{-t} cos t + c4 e^{-t} sin t.
VII. Solve the following initial value problems.

1. y''' + 4y' = 0, y(0) = 1, y'(0) = -1, y''(0) = 2.
Ans. y = 3/2 - (1/2) cos 2t - (1/2) sin 2t.

2. y^{(4)} + 4y = 0, y(0) = 1, y'(0) = 1, y''(0) = 2, y'''(0) = -3.
Ans. y = -(1/8) e^{-t} (cos t + 5 sin t) + (3/8) e^{t} (3 cos t + sin t).

3. y''' + 8y = 0, y(0) = 0, y'(0) = 1, y''(0) = -2.
Ans. y = -(1/3) e^{-2t} + (1/3) e^{t} cos(√3 t).
VIII.

1. Solve the following nonlinear equation:
   2y' y''' - 3 (y'')^2 = 0 .
Hint: Write it as
   y'''/y'' = (3/2) (y''/y') ,
then integrate, to get
   y'' = c1 (y')^{3/2} .
Let y' = v, and obtain a first order equation.
Ans. y = 1/(c1 t + c2) + c3, and also y = c4 t + c5.
Chapter 3
Using Infinite Series to Solve Differential Equations
3.1 Series Solution Near a Regular Point
3.1.1 Maclaurin and Taylor Series
Infinitely differentiable functions can often be represented by a series
   f(x) = a0 + a1 x + a2 x^2 + · · · + a_n x^n + · · · .    (1.1)
Letting x = 0, we see that a0 = f(0). If we differentiate (1.1), and then
let x = 0, we have a1 = f'(0). Differentiating (1.1) twice, and then letting
x = 0, gives a2 = f''(0)/2. Continuing this way, we see that a_n = f^{(n)}(0)/n!, giving
us the Maclaurin series
   f(x) = f(0) + f'(0) x + (f''(0)/2!) x^2 + · · · + (f^{(n)}(0)/n!) x^n + · · · = Σ_{n=0}^∞ (f^{(n)}(0)/n!) x^n .    (1.2)
It turns out there is a number R, so that the Maclaurin series converges
inside the interval (-R, R), and diverges when |x| > R. R is called the radius
of convergence. For some f(x), we have R = ∞ (for example, for sin x, cos x,
e^x), while for some series we have R = 0, and in general 0 ≤ R ≤ ∞.
When we apply (1.2) to some specific functions, we get:
   sin x = x - x^3/3! + x^5/5! - x^7/7! + · · · = Σ_{n=0}^∞ (-1)^n x^{2n+1}/(2n+1)! ;
   cos x = 1 - x^2/2! + x^4/4! - x^6/6! + · · · = Σ_{n=0}^∞ (-1)^n x^{2n}/(2n)! ;
   e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + · · · = Σ_{n=0}^∞ x^n/n! ;
   1/(1 - x) = 1 + x + x^2 + · · · + x^n + · · · .
The last series, called the geometric series, converges on the interval (-1, 1),
i.e., R = 1.
The Maclaurin series gives an approximation of f(x), for x close to zero. For
example, sin x ≈ x gives a reasonably good approximation for |x| small. If
we add one more term of the Maclaurin series: sin x ≈ x - x^3/6, then, say on
the interval (-0.3, 0.3), this approximation is excellent.
If one needs a Maclaurin series for sin x^2, one begins with the series for
sin x, and replaces each x by x^2, obtaining
   sin x^2 = x^2 - x^6/3! + x^{10}/5! - · · · = Σ_{n=0}^∞ (-1)^n x^{4n+2}/(2n+1)! .
One can split a Maclaurin series into even and odd powers, writing it in
the form
   Σ_{n=0}^∞ (f^{(n)}(0)/n!) x^n = Σ_{n=0}^∞ (f^{(2n)}(0)/(2n)!) x^{2n} + Σ_{n=0}^∞ (f^{(2n+1)}(0)/(2n+1)!) x^{2n+1} .
In the following series only the odd powers have non-zero coefficients:
   Σ_{n=1}^∞ ((1 - (-1)^n)/n) x^n = (2/1) x + (2/3) x^3 + (2/5) x^5 + · · ·
   = Σ_{n=0}^∞ (2/(2n+1)) x^{2n+1} = 2 Σ_{n=0}^∞ (1/(2n+1)) x^{2n+1} .
All of the series above were centered at 0. We can replace zero by any
number a, obtaining the Taylor series
   f(x) = f(a) + f'(a)(x - a) + (f''(a)/2!)(x - a)^2 + · · · + (f^{(n)}(a)/n!)(x - a)^n + · · ·
   = Σ_{n=0}^∞ (f^{(n)}(a)/n!) (x - a)^n .
It converges on an interval (a - R, a + R), centered at a. The radius of
convergence satisfies 0 ≤ R ≤ ∞, as before. The Taylor series allows us to
approximate f(x) for x close to a. For example, one can usually expect (although
it is not always true) that f(x) ≈ f(2) + f'(2)(x - 2) + (f''(2)/2!)(x - 2)^2
for x close to 2, say for 1.8 < x < 2.2.
3.1.2 A Toy Problem
Let us begin with the equation (here y = y(x))
   y'' + y = 0 ,
for which we know the general solution. Let us call y1(x) the solution of
the initial value problem
   y'' + y = 0,   y(0) = 1,   y'(0) = 0 .    (1.3)
By y2(x) we denote the solution of the same equation, together with the
initial conditions y(0) = 0, y'(0) = 1. Clearly, y1(x) and y2(x) are not
constant multiples of each other. Therefore, they form a fundamental set,
giving us the general solution y(x) = c1 y1(x) + c2 y2(x).
Let us now compute y1(x), the solution of (1.3). From the initial conditions,
we already know the first two terms of its Maclaurin series
   y(x) = y(0) + y'(0) x + (y''(0)/2) x^2 + · · · + (y^{(n)}(0)/n!) x^n + · · · .
To get more terms, we need to compute the derivatives of y(x) at zero.
From the equation (1.3), y''(0) = -y(0) = -1. We now differentiate the
equation (1.3): y''' + y' = 0, and then set x = 0: y'''(0) = -y'(0) = 0.
Differentiating again, we have y''''(0) = -y''(0) = 1, and then
y^{(5)}(0) = -y'''(0) = 0. We see that all derivatives of odd order vanish, while
derivatives of even order alternate between 1 and -1. We write down the
Maclaurin series:
   y1(x) = 1 - x^2/2! + x^4/4! - · · · = cos x .
Similarly, we compute the series for y2(x):
   y2(x) = x - x^3/3! + x^5/5! - · · · = sin x .
We shall solve equations with variable coefficients
   P(x) y'' + Q(x) y' + R(x) y = 0 ,    (1.4)
where the continuous functions P(x), Q(x) and R(x) are given. We shall
always denote by y1(x) the solution of (1.4) satisfying the initial conditions
y(0) = 1, y'(0) = 0, and by y2(x) the solution of (1.4) satisfying the initial
conditions y(0) = 0, y'(0) = 1. If one needs to solve (1.4), together with the
given initial conditions
   y(0) = α,   y'(0) = β ,
then the solution is
   y(x) = α y1(x) + β y2(x) .
3.1.3 Using Series When Other Methods Fail
Let us try to find the general solution of the equation
   y'' + x y' + 2y = 0 .
This equation has variable coefficients, and none of the previous methods
will apply here.
We shall derive a formula for y^{(n)}(0), and use it to calculate y1(x) and
y2(x). From the equation we have y''(0) = -2y(0). We now differentiate
the equation:
   y''' + x y'' + 3y' = 0 ,
which gives y'''(0) = -3y'(0). We differentiate again:
   y'''' + x y''' + 4y'' = 0 ,
and get y''''(0) = -4y''(0). It is clear that, in general, we get a recursive
relation
   y^{(n)}(0) = -n y^{(n-2)}(0),   n = 3, 4, . . . .
Let us begin with the computation of y2(x), for which we use the initial
conditions y(0) = 0, y'(0) = 1. Then from the recursive relation:
   y''(0) = -2y(0) = 0;
   y'''(0) = -3y'(0) = -3 · 1;
   y''''(0) = -4y''(0) = 0.
It is clear that all derivatives of even order are zero. Let us continue with
derivatives of odd order:
   y^{(5)}(0) = -5y'''(0) = (-1)^2 5 · 3 · 1;
   y^{(7)}(0) = -7y^{(5)}(0) = (-1)^3 7 · 5 · 3 · 1.
And in general,
   y^{(2n+1)}(0) = (-1)^n (2n + 1)(2n - 1) · · · 3 · 1 .
Then the Maclaurin series for y2(x) is
   y2(x) = Σ_{n=0}^∞ (y^{(n)}(0)/n!) x^n = x + Σ_{n=1}^∞ (y^{(2n+1)}(0)/(2n+1)!) x^{2n+1}
   = x + Σ_{n=1}^∞ (-1)^n ((2n+1)(2n-1) · · · 3 · 1)/(2n+1)! x^{2n+1}
   = x + Σ_{n=1}^∞ (-1)^n (1/(2n(2n-2) · · · 4 · 2)) x^{2n+1} .
One can also write this solution as y2(x) = Σ_{n=0}^∞ (-1)^n (1/(2^n n!)) x^{2n+1}.
To compute y1(x), we use the initial conditions y(0) = 1, y'(0) = 0. As
before, we see from the recursive relation that all derivatives of odd order
vanish, while the even ones satisfy
   y^{(2n)}(0) = (-1)^n 2n (2n - 2) · · · 4 · 2 ,   for n = 1, 2, 3, . . . .
Then y1(x) = 1 + Σ_{n=1}^∞ (-1)^n (1/((2n-1)(2n-3) · · · 3 · 1)) x^{2n}. The general
solution:
   y(x) = c1 Σ_{n=0}^∞ (-1)^n (1/(2^n n!)) x^{2n+1} + c2 [1 + Σ_{n=1}^∞ (-1)^n (1/((2n-1)(2n-3) · · · 3 · 1)) x^{2n}] .
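The closed form for y2's coefficients can be checked against the recursion y^{(n)}(0) = -n y^{(n-2)}(0) in exact arithmetic; a sketch (an added check, variable names illustrative):

```python
# Sketch: generate y2's derivatives at 0 from y^(n)(0) = -n y^(n-2)(0)
# and compare the Maclaurin coefficients with the closed form (-1)^n/(2^n n!).
from fractions import Fraction
from math import factorial

N = 15
d = [0]*(N + 1)      # d[n] = y2^(n)(0)
d[0], d[1] = 0, 1    # initial conditions for y2
for n in range(2, N + 1):
    d[n] = -n*d[n - 2]

for n in range((N - 1)//2 + 1):
    coeff = Fraction(d[2*n + 1], factorial(2*n + 1))  # coefficient of x^(2n+1)
    assert coeff == Fraction((-1)**n, 2**n * factorial(n))
```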
We shall need a formula for repeated differentiation of a product of two
functions. Starting with the product rule (fg)' = f' g + f g', we have
   (fg)'' = f'' g + 2 f' g' + f g'' ;
   (fg)''' = f''' g + 3 f'' g' + 3 f' g'' + f g''' ,
and in general
   (fg)^{(n)} = f^{(n)} g + n f^{(n-1)} g' + (n(n-1)/2) f^{(n-2)} g'' + · · ·    (1.5)
   + (n(n-1)/2) f'' g^{(n-2)} + n f' g^{(n-1)} + f g^{(n)} .
This formula is similar to the binomial formula for the expansion of
(x + y)^n. Using summation notation, we can write it as
   (fg)^{(n)} = Σ_{k=0}^n C(n, k) f^{(n-k)} g^{(k)} ,
where C(n, k) = n!/(k!(n-k)!) are the binomial coefficients. (Convention: f^{(0)} = f.)
The formula (1.5) simplifies considerably in case f(x) = x, or if f(x) = x^2:
   (x g)^{(n)} = n g^{(n-1)} + x g^{(n)} ;
   (x^2 g)^{(n)} = n(n-1) g^{(n-2)} + 2n x g^{(n-1)} + x^2 g^{(n)} .
We shall use series to solve linear second order equations with variable
coefficients
   P(x) y'' + Q(x) y' + R(x) y = 0 ,
where the continuous functions P(x), Q(x) and R(x) are given. A number
a is called a regular point if P(a) ≠ 0. If P(a) = 0, then x = a is a singular
point.
Example  (2 + x^2) y'' - x y' + 4y = 0. For this equation any a is a regular
point. Let us find a series solution, centered at a = 0, i.e., the Maclaurin
series for the solution y(x). We differentiate both sides of the equation n
times. When we use the formula (1.5) to differentiate the first term, only
three terms are non-zero, because derivatives of order three and higher of
2 + x^2 are zero. When we differentiate x y', only two terms survive. We have
   (n(n-1)/2) 2 y^{(n)} + n (2x) y^{(n+1)} + (2 + x^2) y^{(n+2)} - n y^{(n)} - x y^{(n+1)} + 4 y^{(n)} = 0 .
We set here x = 0. Several terms vanish. Combining the like terms:
   2 y^{(n+2)}(0) + (n^2 - 2n + 4) y^{(n)}(0) = 0 ,
which gives us the recursive relation
   y^{(n+2)}(0) = -((n^2 - 2n + 4)/2) y^{(n)}(0) .
This relation is too involved to get a neat formula for y^{(n)}(0) as a function
of n. However, it can be used to crank out the coefficients, as many as you
wish. To compute y1(x), we use the initial conditions y(0) = 1 and y'(0) = 0.
We see from the recursive relation that all derivatives of odd order are zero.
Setting n = 0 in the recursive relation, we have
   y''(0) = -2 y(0) = -2 .
When n = 2:
   y''''(0) = -2 y''(0) = 4 .
Using these derivatives in the Maclaurin series, we have
   y1(x) = 1 - x^2 + (1/6) x^4 + · · · .
To compute y2(x), we use the initial conditions y(0) = 0 and y'(0) = 1. We
see from the recursive relation that all derivatives of even order are zero,
while when n = 1, we get
   y'''(0) = -(3/2) y'(0) = -3/2 .
Setting n = 3, we have
   y^{(5)}(0) = -(7/2) y'''(0) = 21/4 .
Using these derivatives in the Maclaurin series, we conclude
   y2(x) = x - (1/4) x^3 + (7/160) x^5 + · · · .
The general solution:
   y(x) = c1 y1(x) + c2 y2(x) = c1 [1 - x^2 + (1/6) x^4 + · · ·] + c2 [x - (1/4) x^3 + (7/160) x^5 + · · ·] .
Suppose we wish to solve the above equation together with the initial
conditions y(0) = 2, y'(0) = 3. Then y(0) = c1 y1(0) + c2 y2(0) = c1 = 2,
and y'(0) = c1 y1'(0) + c2 y2'(0) = c2 = 3. It follows that y(x) = 2 y1(x) +
3 y2(x). If we need to approximate y(x) near x = 0, say on the interval
(-0.3, 0.3), then
   y(x) ≈ 2 [1 - x^2 + (1/6) x^4] + 3 [x - (1/4) x^3 + (7/160) x^5]
will provide an excellent approximation.
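The recursive relation really does let one "crank out" the coefficients mechanically; a sketch of that computation in exact arithmetic (an added illustration, names hypothetical):

```python
# Sketch: Taylor coefficients of y1 and y2 for (2 + x^2) y'' - x y' + 4 y = 0
# from the recursion y^(n+2)(0) = -((n^2 - 2n + 4)/2) y^(n)(0).
from fractions import Fraction
from math import factorial

def taylor_coeffs(d0, d1, N=6):
    d = [Fraction(0)]*(N + 1)        # d[n] = y^(n)(0)
    d[0], d[1] = Fraction(d0), Fraction(d1)
    for n in range(N - 1):
        d[n + 2] = -Fraction(n*n - 2*n + 4, 2)*d[n]
    return [d[n]/factorial(n) for n in range(N + 1)]

coeffs1 = taylor_coeffs(1, 0)   # y1: starts 1 - x^2 + x^4/6
coeffs2 = taylor_coeffs(0, 1)   # y2: starts x - x^3/4 + 7 x^5/160
assert coeffs1[:5] == [1, 0, -1, 0, Fraction(1, 6)]
assert coeffs2[:6] == [0, 1, 0, Fraction(-1, 4), 0, Fraction(7, 160)]
```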
Example  Approximate the general solution of Airy's equation
   y'' - x y = 0
near x = 1.
We need to compute the Taylor series about a = 1:
y(x) = Σ_{n=0}^∞ (y^{(n)}(1)/n!) (x - 1)^n. From the equation, y''(1) = y(1). To get higher
derivatives, we differentiate our equation n times, and then set x = 1:
   y^{(n+2)} - n y^{(n-1)} - x y^{(n)} = 0 ;
   y^{(n+2)}(1) = n y^{(n-1)}(1) + y^{(n)}(1),   n = 1, 2, . . . .
To compute y1(x), we use the initial conditions y(1) = 1, y'(1) = 0.
Then y''(1) = y(1) = 1. Setting n = 1 in the recursive relation: y'''(1) =
y(1) + y''(1)... more precisely, y'''(1) = y(1) + y'(1) = 1. When n = 2, y''''(1) = 2y'(1) + y''(1) = 1. Then y^{(5)}(1) =
3y''(1) + y'''(1) = 4. We have
   y1(x) = 1 + (x-1)^2/2 + (x-1)^3/6 + (x-1)^4/24 + (x-1)^5/30 + · · · .
To compute y2(x), we use the initial conditions y(1) = 0, y'(1) = 1.
Then y''(1) = y(1) = 0. Setting n = 1 in the recursive relation: y'''(1) =
y(1) + y'(1) = 1. When n = 2, y''''(1) = 2y'(1) + y''(1) = 2. Then y^{(5)}(1) =
3y''(1) + y'''(1) = 1. We have
   y2(x) = (x - 1) + (x-1)^3/6 + (x-1)^4/12 + (x-1)^5/120 + · · · .
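The same Taylor coefficients fall out of the recursion y^{(n+2)}(1) = n y^{(n-1)}(1) + y^{(n)}(1) in a few lines of exact arithmetic (an added check, names hypothetical):

```python
# Sketch: Taylor coefficients about a = 1 of the two Airy solutions.
from fractions import Fraction
from math import factorial

def airy_coeffs(y0, y1, N=5):
    d = [0]*(N + 1)                  # d[n] = y^(n)(1)
    d[0], d[1] = y0, y1
    d[2] = d[0]                      # from the equation: y''(1) = y(1)
    for n in range(1, N - 1):
        d[n + 2] = n*d[n - 1] + d[n]
    return [Fraction(d[n], factorial(n)) for n in range(N + 1)]

# 1 + (x-1)^2/2 + (x-1)^3/6 + (x-1)^4/24 + (x-1)^5/30
assert airy_coeffs(1, 0) == [1, 0, Fraction(1, 2), Fraction(1, 6), Fraction(1, 24), Fraction(1, 30)]
# (x-1) + (x-1)^3/6 + (x-1)^4/12 + (x-1)^5/120
assert airy_coeffs(0, 1) == [0, 1, 0, Fraction(1, 6), Fraction(1, 12), Fraction(1, 120)]
```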
3.2 Solution Near a Mildly Singular Point
Again we consider the equation
   P(x) y'' + Q(x) y' + R(x) y = 0 ,
with given functions P, Q and R that are continuous near a point a, at
which we want to compute a series solution y(x) = Σ_{n=0}^∞ (y^{(n)}(a)/n!) (x - a)^n. If
P(a) = 0, we have a problem: we cannot compute y''(a) from the equation
(and we have the same problem for higher derivatives). However, if a is
a simple root of P(x), it turns out that we can proceed much as before.
Namely, we assume that P(x) = (x - a) P1(x), with P1(a) ≠ 0. Dividing the
equation by P1(x), and calling q(x) = Q(x)/P1(x), r(x) = R(x)/P1(x), we put it into the
form
   (x - a) y'' + q(x) y' + r(x) y = 0 .
The functions q(x) and r(x) are continuous near a. In case a = 0, we have
   x y'' + q(x) y' + r(x) y = 0 .    (2.1)
For this equation we cannot expect to obtain two linearly independent solutions,
by prescribing y1(0) = 1, y1'(0) = 0 or y2(0) = 0, y2'(0) = 1, the way
we did before, because the equation is singular at x = 0 (the functions q(x)/x
and r(x)/x are discontinuous at x = 0, and so the existence and uniqueness
theorem does not apply).
Example  Let us try to solve: x y'' - y' = 0, y(0) = 0, y'(0) = 1.
Multiplying through by x, we obtain Euler's equation, which we solve,
to get the general solution: y(x) = c1 x^2 + c2. Here y'(0) = 0. The initial
value problem has no solution. However, if we change the initial conditions,
and consider the problem x y'' - y' = 0, y(0) = 1, y'(0) = 0, then we have
infinitely many solutions y = 1 + c1 x^2.
We therefore lower our expectations, and we will be satisfied to compute
just one series solution of (2.1). It turns out that in most cases it is possible
to calculate a series solution of the form Σ_{n=0}^∞ a_n x^n, starting with a0 = 1.
Example  Find a series solution of x y'' + 3y' - 2y = 0.
It is convenient to multiply the equation by x:
   x^2 y'' + 3x y' - 2x y = 0 .
We let y = Σ_{n=0}^∞ a_n x^n. Then y' = Σ_{n=1}^∞ a_n n x^{n-1} and y'' = Σ_{n=2}^∞ a_n n(n-1) x^{n-2}.
Observe that each differentiation kills a term. Plugging this series
into the equation:
   Σ_{n=2}^∞ a_n n(n-1) x^n + Σ_{n=1}^∞ 3 a_n n x^n - Σ_{n=0}^∞ 2 a_n x^{n+1} = 0 .
The third series is not lined up with the other two. We therefore replace
n by n - 1 in that series, obtaining
   Σ_{n=0}^∞ 2 a_n x^{n+1} = Σ_{n=1}^∞ 2 a_{n-1} x^n .
We have
   Σ_{n=2}^∞ a_n n(n-1) x^n + Σ_{n=1}^∞ 3 a_n n x^n - Σ_{n=1}^∞ 2 a_{n-1} x^n = 0 .    (2.2)
We shall use the following fact: if Σ_{n=1}^∞ b_n x^n = 0 for all x, then b_n = 0 for
all n = 1, 2, . . .. We now combine the three series in (2.2) into one, so that
we can set all of the resulting coefficients to zero. The x term is present in
the second and the third series, but not in the first. However, we can start
the first series at n = 1, because at n = 1 its coefficient is zero. I.e., we
have
   Σ_{n=1}^∞ a_n n(n-1) x^n + Σ_{n=1}^∞ 3 a_n n x^n - Σ_{n=1}^∞ 2 a_{n-1} x^n = 0 .
Now for all n ≥ 1, the x^n term is present in all three series, so that we can
combine these series into one series. We therefore just lift the coefficients:
   a_n n(n-1) + 3 a_n n - 2 a_{n-1} = 0 .
We solve for a_n:
   a_n = (2/(n(n+2))) a_{n-1} ,   n ≥ 1 .
Starting with a0 = 1, we compute a1 = 2/(1·3), a2 = (2/(2·4)) a1 = 2^2/((1·2)(3·4)) = 2^3/(2! 4!),
a3 = (2/(3·5)) a2 = 2^4/(3! 5!), . . . , a_n = 2^{n+1}/(n! (n+2)!).
Answer: y = 1 + Σ_{n=1}^∞ (2^{n+1}/(n! (n+2)!)) x^n = Σ_{n=0}^∞ (2^{n+1}/(n! (n+2)!)) x^n .
Example  Find a series solution of x y'' - 3y' - 2y = 0.
This is a small modification of the preceding problem, so that we can quickly
see that the recurrence relation takes the form:
   a_n = (2/(n(n-4))) a_{n-1} ,   n ≥ 1 .    (2.3)
If we start with a0 = 1, and proceed as before, then at n = 4 the denominator
is zero, and the computation stops! To avoid the trouble at n = 4, we look
for the solution in the form y = Σ_{n=4}^∞ a_n x^n, starting with a4 = 1, and using
the above recurrence relation for n ≥ 5. (Plugging y = Σ_{n=4}^∞ a_n x^n into the
equation shows that a_n must satisfy (2.3).) Compute: a5 = (2/(5·1)) a4 = 2/(5·1),
a6 = (2/(6·2)) a5 = 2^2/(6·5·2·1) = 24 · 2^2/(6! 2!), . . . , a_n = 24 · 2^{n-4}/(n! (n-4)!).
Answer: y = x^4 + 24 Σ_{n=5}^∞ (2^{n-4}/(n! (n-4)!)) x^n .
Our experience with the previous two problems can be summarized as
follows.

Theorem 7  Consider the problem (2.1). If q(0) is not a non-positive integer
(i.e., q(0) is not equal to 0, -1, -2, . . .), one can find a series solution
of the form y = Σ_{n=0}^∞ a_n x^n, starting with a0 = 1. In case q(0) = -k,
where k is a non-negative integer, one can find a series solution of the form
y = Σ_{n=k+1}^∞ a_n x^n, starting with a_{k+1} = 1.
Example  x^2 y'' + x y' + (x^2 - ν^2) y = 0.
This is the Bessel equation, of great importance in Mathematical Physics!
It depends on a real parameter ν. It is also called Bessel's equation of order
ν. Its solutions are called Bessel's functions of order ν. We see that a = 0
is not a mildly singular point, for ν ≠ 0. (Zero is a double root of x^2.) But
in case ν = 0, we can cancel x, putting Bessel's equation of order zero into
the form:
   x y'' + y' + x y = 0 ,    (2.4)
where a = 0 is a mildly singular point. We now find the series solution,
centered at a = 0, i.e., the Maclaurin series for y(x).
We put the equation back into the form
   x^2 y'' + x y' + x^2 y = 0 ,
and look for the solution in the form Σ_{n=0}^∞ a_n x^n. Plug this in:
   Σ_{n=2}^∞ a_n n(n-1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=0}^∞ a_n x^{n+2} = 0 .
In the last series we replace n by n - 2:
   Σ_{n=2}^∞ a_n n(n-1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=2}^∞ a_{n-2} x^n = 0 .
None of the series has a constant term. The x term is present only in the
second series. Its coefficient is a1, and so
   a1 = 0 .
The terms x^n, starting with n = 2, are present in all series, so that
   a_n n(n-1) + a_n n + a_{n-2} = 0 ,
[Figure 3.1: The graph of Bessel's function J0(x)]
or
   a_n = -a_{n-2}/n^2 .
This recurrence relation tells us that all odd coefficients are zero, a_{2n+1} = 0,
while for the even ones
   a_{2n} = -a_{2n-2}/(2n)^2 = -a_{2n-2}/(2^2 n^2) .
Starting with a0 = 1, we compute a2 = -a0/(2^2 1^2) = -1/(2^2 1^2), a4 = -a2/(2^2 2^2) =
(-1)^2/(2^{2·2} (1·2)^2), a6 = -a4/(2^2 3^2) = (-1)^3/(2^{2·3} (1·2·3)^2), and in general,
a_{2n} = (-1)^n/(2^{2n} (n!)^2). We then have
   y = 1 + Σ_{n=1}^∞ a_{2n} x^{2n} = Σ_{n=0}^∞ (-1)^n (1/(2^{2n} (n!)^2)) x^{2n} .
We have obtained Bessel's function of order zero of the first kind, with the
customary notation J0(x) = Σ_{n=0}^∞ (-1)^n (1/(2^{2n} (n!)^2)) x^{2n}.
3.2.1* Derivation of J0(x) by differentiation of the equation
We differentiate our equation (2.4) n times, and then set x = 0:
   n y^{(n+1)} + x y^{(n+2)} + y^{(n+1)} + n y^{(n-1)} + x y^{(n)} = 0 ;
   n y^{(n+1)}(0) + y^{(n+1)}(0) + n y^{(n-1)}(0) = 0 .
(It is not always true that x y^{(n+2)} → 0 as x → 0. However, in case of the
initial conditions y(0) = 1, y'(0) = 0 that is true, as was justified in the
author's paper [5].) We get a recursive relation
   y^{(n+1)}(0) = -(n/(n+1)) y^{(n-1)}(0) .
We begin with the initial conditions y(0) = 1, y'(0) = 0. Then all derivatives
of odd order vanish, while
   y^{(2n)}(0) = -((2n-1)/(2n)) y^{(2n-2)}(0) = (-1)^2 ((2n-1)/(2n)) ((2n-3)/(2n-2)) y^{(2n-4)}(0) = . . .
   = (-1)^n ((2n-1)(2n-3) · · · 3 · 1)/(2n(2n-2) · · · 2) y(0) = (-1)^n ((2n-1)(2n-3) · · · 3 · 1)/(2^n n!) .
Then
   y(x) = Σ_{n=0}^∞ (y^{(2n)}(0)/(2n)!) x^{2n} = Σ_{n=0}^∞ (-1)^n (1/(2^{2n} (n!)^2)) x^{2n} .
We have again obtained Bessel's function of order zero of the first kind,
J0(x) = Σ_{n=0}^∞ (-1)^n (1/(2^{2n} (n!)^2)) x^{2n}.
In case of the initial conditions y(0) = 0 and y'(0) = 1, the problem has
no solution (the recurrence relation above is not valid in this case, because
the relation x y^{(n+2)} → 0 as x → 0 is not true here). In fact, the second
solution of Bessel's equation cannot be continuously differentiable at x = 0.
Indeed, the Wronskian of two solutions is equal to c e^{-∫ (1/x) dx} = c/x. We have
W(y1, y2) = y1 y2' - y1' y2 = c/x. The solution J0(x) is bounded at x = 0, and
it has a bounded derivative at x = 0. Therefore, the other solution must be
discontinuous at x = 0. It turns out that the other solution, called Bessel's
function of the second kind, and denoted Y0(x), has a ln x term in its series
representation, see e.g., [1].
3.3 Moderately Singular Equations
We shall deal only with the Maclaurin series, i.e., we take a = 0, with the
general case being similar. We consider the equation
   x^2 y'' + x p(x) y' + q(x) y = 0 ,    (3.1)
where p(x) and q(x) are given infinitely differentiable functions, that can be
represented by the Maclaurin series
   p(x) = p(0) + p'(0) x + (1/2) p''(0) x^2 + · · · ;
   q(x) = q(0) + q'(0) x + (1/2) q''(0) x^2 + · · · .
If it happens that q(0) = 0, then q(x) has a factor of x, and we can divide the
equation (3.1) by x, to obtain a mildly singular equation. So the difference
from the preceding section is that we now allow the case of q(0) ≠ 0. Observe
also that in case p(x) and q(x) are constants, the equation (3.1) is Euler's
equation, which we have studied before. This connection with Euler's
equation is the guiding light of the theory that follows.
We change to a new unknown function v(x), by letting y(x) = x^r v(x),
with a constant r to be specified. With y' = r x^{r-1} v + x^r v' and y'' =
r(r-1) x^{r-2} v + 2r x^{r-1} v' + x^r v'', we plug y in:
   x^{r+2} v'' + x^{r+1} v' (2r + p(x)) + x^r v [r(r-1) + r p(x) + q(x)] = 0 .    (3.2)
We now choose r, to satisfy the following characteristic equation
   r(r-1) + r p(0) + q(0) = 0 .    (3.3)
The quantity in the square bracket in (3.2) is then
   r p'(0) x + r (1/2) p''(0) x^2 + · · · + q'(0) x + (1/2) q''(0) x^2 + · · · ,
i.e., it has a factor of x. We take this factor out, and divide the equation
(3.2) by x^{r+1}, obtaining a mildly singular equation, which we have analyzed
in the previous section.
Example  2x^2 y'' - x y' + (1 + x) y = 0. To put it into the right form,
we divide by 2: x^2 y'' - (1/2) x y' + (1/2 + (1/2) x) y = 0. Here p(x) = -1/2 and
q(x) = 1/2 + (1/2) x. The characteristic equation is then
   r(r-1) - (1/2) r + 1/2 = 0 .
Its roots are r = 1/2 and r = 1.

Case r = 1/2. We know that a substitution y = x^{1/2} v will produce a mildly
singular equation for v(x). The resulting equation for v could be derived
from scratch, but it is easier to use (3.2):
   x^{5/2} v'' + (1/2) x^{3/2} v' + (1/2) x^{3/2} v = 0 ,
or, dividing by x^{3/2}, we have a mildly singular equation
   x v'' + (1/2) v' + (1/2) v = 0 .
We multiply this equation by 2x, for convenience:
   2x^2 v'' + x v' + x v = 0 ,
and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we
get as before
   Σ_{n=2}^∞ 2 a_n n(n-1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=0}^∞ a_n x^{n+1} = 0 .
To line up the powers, we shift n → n - 1 in the last series. The first series we
may begin at n = 1 instead of n = 2, because its coefficient at n = 1 is
zero. We then have
   Σ_{n=1}^∞ 2 a_n n(n-1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=1}^∞ a_{n-1} x^n = 0 .
We can now combine these series into one series. Setting its coefficients to
zero, we have
   2 a_n n(n-1) + a_n n + a_{n-1} = 0 ,
which gives us a recurrence relation
   a_n = -(1/(n(2n-1))) a_{n-1} .
Starting with a0 = 1, compute a1 = -1/(1·1), a2 = -(1/(2·3)) a1 = (-1)^2/((1·2)(1·3)),
a3 = -(1/(3·5)) a2 = (-1)^3/((1·2·3)(1·3·5)), and in general
   a_n = (-1)^n (1/(n! · 1·3·5 · · · (2n-1))) .
We have computed the first solution
   y1(x) = x^{1/2} v(x) = x^{1/2} [1 + Σ_{n=1}^∞ (-1)^n (1/(n! · 1·3·5 · · · (2n-1))) x^n] .
110CHAPTER3. USINGINFINITE SERIES TO SOLVE DIFFERENTIAL EQUATIONS
Case r = 1. We set y = x v. The resulting equation for v from (3.2) is

    x^3 v'' + (3/2) x^2 v' + (1/2) x^2 v = 0,

or, dividing by x^2, we have a mildly singular equation

    x v'' + (3/2) v' + (1/2) v = 0.

We multiply this equation by 2x, for convenience:

    2x^2 v'' + 3x v' + x v = 0,

and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we get as before

    Σ_{n=2}^∞ 2 a_n n(n-1) x^n + Σ_{n=1}^∞ 3 a_n n x^n + Σ_{n=0}^∞ a_n x^{n+1} = 0.

As before, we can start the first series at n = 1, and make a shift n → n-1 in the last series:

    Σ_{n=1}^∞ 2 a_n n(n-1) x^n + Σ_{n=1}^∞ 3 a_n n x^n + Σ_{n=1}^∞ a_{n-1} x^n = 0.

Setting the coefficient of x^n to zero,

    2 a_n n(n-1) + 3 a_n n + a_{n-1} = 0,

gives us the recurrence relation

    a_n = -a_{n-1} / (n(2n+1)).

Starting with a_0 = 1, we compute a_1 = -1/(1·3), a_2 = -a_1/(2·5) = (-1)^2 / ((1·2)(1·3·5)), a_3 = -a_2/(3·7) = (-1)^3 / ((1·2·3)(1·3·5·7)), and in general

    a_n = (-1)^n / (n! · 1·3·5···(2n+1)).

We now have the second solution

    y_2(x) = x [ 1 + Σ_{n=1}^∞ (-1)^n / (n! · 1·3·5···(2n+1)) x^n ].
The general solution is, of course, y(x) = c_1 y_1 + c_2 y_2.
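The two recurrences above are easy to check mechanically. The sketch below (our addition, not part of the text) uses exact rational arithmetic to confirm that the closed-form coefficients (-1)^n / (n! · 1·3·5···(2n∓1)) match what the recurrence relations a_n = -a_{n-1}/(n(2n-1)) and a_n = -a_{n-1}/(n(2n+1)) produce; the helper name double_factorial_odd is ours.

```python
# Check (not from the text) that the closed-form coefficients of y_1 and y_2
# satisfy their recurrence relations, using exact rational arithmetic.
from fractions import Fraction
from math import factorial

def double_factorial_odd(n, shift):
    # product 1*3*5*...*(2n-1) when shift == -1, or 1*3*5*...*(2n+1) when shift == +1
    prod = 1
    for k in range(1, 2 * n + shift + 1, 2):
        prod *= k
    return prod

for shift in (-1, +1):
    a = Fraction(1)                                   # a_0 = 1
    for n in range(1, 13):
        a = -a / (n * (2 * n + shift))                # the recurrence relation
        closed = Fraction((-1) ** n,
                          factorial(n) * double_factorial_odd(n, shift))
        assert a == closed, (shift, n)
print("both closed forms match their recurrences")
```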
Example x^2 y'' + x y' + (x^2 - 1/9) y = 0.
This is Bessel's equation of order 1/3. Here p(x) = 1 and q(x) = x^2 - 1/9. The characteristic equation

    r(r - 1) + r - 1/9 = 0

has roots r = 1/3 and r = -1/3.
Case r = -1/3. We set y = x^{-1/3} v. Compute y' = -(1/3) x^{-4/3} v + x^{-1/3} v', y'' = (4/9) x^{-7/3} v - (2/3) x^{-4/3} v' + x^{-1/3} v''. Plugging this in and simplifying, we get a mildly singular equation

    x v'' + (1/3) v' + x v = 0.

We multiply by 3x:

    3x^2 v'' + x v' + 3x^2 v = 0,

and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we get as before

    Σ_{n=2}^∞ 3 a_n n(n-1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=0}^∞ 3 a_n x^{n+2} = 0.

We shift n → n-2 in the last series:

    Σ_{n=2}^∞ 3 a_n n(n-1) x^n + Σ_{n=1}^∞ a_n n x^n + Σ_{n=2}^∞ 3 a_{n-2} x^n = 0.

The x term is present only in the second series. Its coefficient must be zero, i.e.,

    a_1 = 0.   (3.4)

The term x^n, with n ≥ 2, is present in all three series. Setting its coefficient to zero, we get

    3 a_n n(n-1) + a_n n + 3 a_{n-2} = 0,

which gives us the recurrence relation

    a_n = -3 a_{n-2} / (n(3n-2)).

We see that all odd coefficients are zero (because of (3.4)), while, starting with a_0 = 1, we compute the even ones:

    a_{2n} = -3 a_{2n-2} / (2n(6n-2));
    a_{2n} = (-1)^n 3^n / [(2·4···(2n)) (4·10···(6n-2))].

We have the first solution

    y_1(x) = x^{-1/3} [ 1 + Σ_{n=1}^∞ (-1)^n 3^n / ((2·4···(2n)) (4·10···(6n-2))) x^{2n} ].
Case r = 1/3. We set y = x^{1/3} v. Compute y' = (1/3) x^{-2/3} v + x^{1/3} v', y'' = -(2/9) x^{-5/3} v + (2/3) x^{-2/3} v' + x^{1/3} v''. Plugging this in and simplifying, we get a mildly singular equation

    x v'' + (5/3) v' + x v = 0.

We multiply by 3x:

    3x^2 v'' + 5x v' + 3x^2 v = 0,

and look for a solution in the form v(x) = Σ_{n=0}^∞ a_n x^n. Plugging this in, we conclude as before that a_1 = 0, and obtain the recurrence relation

    a_{2n} = -3 a_{2n-2} / (2n(6n+2)) = -a_{2n-2} / (2n(2n + 2/3)).

We then have the second solution

    y_2(x) = x^{1/3} [ 1 + Σ_{n=1}^∞ (-1)^n / ((2·4···(2n)) ((2 + 2/3)(4 + 2/3)···(2n + 2/3))) x^{2n} ].
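As an independent numerical sanity check (our addition; the truncation level and sample point are arbitrary choices), a partial sum of the series for y_1 should nearly annihilate the Bessel operator x^2 y'' + x y' + (x^2 - 1/9) y, since each term of y_1 is a simple power of x and can be differentiated exactly:

```python
# Numerical check (not from the text): a truncated series for
# y_1(x) = x^(-1/3) (1 + sum ...) nearly satisfies Bessel's equation of order 1/3.
N = 12          # number of series terms kept (arbitrary)
x = 0.5         # sample point (arbitrary)

# coefficients a_{2n} from the recurrence a_{2n} = -3 a_{2n-2} / (2n (6n - 2))
coeffs = [1.0]
for n in range(1, N):
    coeffs.append(-3.0 * coeffs[-1] / ((2 * n) * (6 * n - 2)))

y = yp = ypp = 0.0
for n, a in enumerate(coeffs):
    p = 2 * n - 1.0 / 3.0                 # exponent of x in this term of y_1
    y += a * x ** p
    yp += a * p * x ** (p - 1)
    ypp += a * p * (p - 1) * x ** (p - 2)

residual = x * x * ypp + x * yp + (x * x - 1.0 / 9.0) * y
print(abs(residual))                      # tiny: only truncation and rounding remain
assert abs(residual) < 1e-10
```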
3.3.1 Problems

I. Find the Maclaurin series of the following functions, and state their radius of convergence.

1. sin x^2.   2. 1/(1 + x^2).   3. e^{x^3}.
II. 1. Find the Taylor series of f(x) centered at a.

(i) f(x) = sin x, a = π/2.  (ii) f(x) = e^x, a = 1.  (iii) f(x) = 1/x, a = 1.

2. Show that

    Σ_{n=1}^∞ (1 + (-1)^n)/n^2 x^n = (1/2) Σ_{n=1}^∞ (1/n^2) x^{2n}.

3. Show that

    Σ_{n=0}^∞ (n+3)/(n!(n+1)) x^{n+1} = Σ_{n=1}^∞ (n+2)/((n-1)! n) x^n.

4. Expand the n-th derivative: ((x^2 + x) g(x))^{(n)}.

5. Find the n-th derivative: ((x^2 + x) e^{2x})^{(n)}.
III. Find the general solution, using power series centered at a (find the recurrence relation, and two linearly independent solutions).

1. y'' - x y' - y = 0, a = 0.
Answer. y_1(x) = Σ_{n=0}^∞ x^{2n}/(2^n n!), y_2(x) = Σ_{n=0}^∞ 2^n n!/(2n+1)! x^{2n+1}.

2. y'' - x y' + 2y = 0, a = 0.
Answer. y_1(x) = 1 - x^2, y_2(x) = x - (1/6) x^3 - (1/120) x^5 - ···.

3. y'' - x y' - y = 0, a = 1.
Answer. y_1(x) = 1 + (1/2)(x-1)^2 + (1/6)(x-1)^3 + (1/6)(x-1)^4 + ···,
y_2(x) = (x-1) + (1/2)(x-1)^2 + (1/2)(x-1)^3 + (1/4)(x-1)^4 + ···.

4. (x^2 + 1) y'' + x y' + y = 0, a = 0.
Answer. y^{(n+2)}(0) = -(n^2 + 1) y^{(n)}(0);
y_1(x) = 1 - (1/2) x^2 + (5/24) x^4 - ···, y_2(x) = x - (1/3) x^3 + (1/6) x^5 - ···.
IV. 1. Find the solution of the initial value problem, using power series centered at 2:

    y'' - 2x y = 0, y(2) = 1, y'(2) = 0.

Answer. y = 1 + 2(x-2)^2 + (1/3)(x-2)^3 + (2/3)(x-2)^4 + ···.

2. Find the solution of the initial value problem, using power series centered at 1:

    y'' + x y = 0, y(1) = 2, y'(1) = 3.
V. Find one series solution of the following mildly singular equations (here a = 0).

1. 2x y'' + y' + x y = 0.
Answer. 1 - x^2/(2·3) + x^4/((2·4)(3·7)) - x^6/((2·4·6)(3·7·11)) + ··· = 1 + Σ_{n=1}^∞ (-1)^n x^{2n} / (2^n n! · 3·7·11···(4n-1)).

2. x y'' + y' - y = 0.  Answer. Σ_{n=0}^∞ x^n/(n!)^2.

3. x y'' + 2y' + y = 0.  Answer. Σ_{n=0}^∞ (-1)^n/(n!(n+1)!) x^n.

4. x y'' + y' - 2x y = 0.  Answer. 1 + Σ_{n=1}^∞ x^{2n}/(2^n (n!)^2).
VI. 1. Find one series solution in the form Σ_{n=5}^∞ a_n x^n of the following mildly singular equation (here a = 0):

    x y'' - 4y' + y = 0.

Answer. x^5 + 120 Σ_{n=6}^∞ (-1)^{n-5}/(n!(n-5)!) x^n.

2. Find one series solution of the following mildly singular equation (here a = 0):

    x y'' - 2y' - 2y = 0.

3. Find one series solution of the following mildly singular equation (here a = 0):

    x y'' + y = 0.

Hint: Look for a solution in the form Σ_{n=1}^∞ a_n x^n, starting with a_1 = 1.
Answer. Σ_{n=1}^∞ (-1)^{n-1}/(n!(n-1)!) x^n.
4. Recall that J_0(x) is a solution of

    x y'' + y' + x y = 0.

Show that the energy E(x) = y'^2(x) + y^2(x) is a decreasing function. Conclude that each maximum value of J_0(x) is greater than the absolute value of the minimum value that follows it, which in turn is larger than the next maximum value, and so on (see the graph of J_0(x)).

5. Show that the absolute value of the slope decreases at each consecutive root of J_0(x).
VII. 1. Verify that the Bessel equation of order 1/2,

    x^2 y'' + x y' + (x^2 - 1/4) y = 0,

has a moderate singularity at zero. Write down the characteristic equation, and find its roots.

(i) Corresponding to the root r = 1/2, perform a change of variables y = x^{1/2} v to obtain a mildly singular equation for v(x). Find one solution of that equation, to obtain one of the solutions of the Bessel equation.
Answer. x^{1/2} [ 1 + Σ_{n=1}^∞ (-1)^n x^{2n}/(2n+1)! ] = x^{-1/2} [ x + Σ_{n=1}^∞ (-1)^n x^{2n+1}/(2n+1)! ] = x^{-1/2} sin x.

(ii) Corresponding to the root r = -1/2, perform a change of variables y = x^{-1/2} v to obtain a mildly singular equation for v(x). Find one solution of that equation, to obtain the second solution of the Bessel equation.
Answer. x^{-1/2} cos x.

(iii) Find the general solution.
Answer. y = c_1 x^{-1/2} sin x + c_2 x^{-1/2} cos x.

2. Find the general solution of the Bessel equation of order 3/2:

    x^2 y'' + x y' + (x^2 - 9/4) y = 0.
Chapter 4
Laplace Transform
4.1 Laplace Transform And Its Inverse
4.1.1 Review of Improper Integrals
The mechanics of computing integrals involving infinite limits is similar to that for integrals with finite end-points. For example,

    ∫_0^∞ e^{-2t} dt = -(1/2) e^{-2t} |_0^∞ = 1/2.

Here we did not plug in the upper limit t = ∞, but rather computed the limit as t → ∞ (the limit is zero). This is an example of a convergent integral. On the other hand, the integral

    ∫_1^∞ (1/t) dt = ln t |_1^∞

is divergent, because ln t has an infinite limit as t → ∞. When computing improper integrals, we use the same techniques of integration, in essentially the same way. For example,

    ∫_0^∞ t e^{-2t} dt = [ -(1/2) t e^{-2t} - (1/4) e^{-2t} ] |_0^∞ = 1/4.

Here the anti-derivative is computed by guess-and-check (or by integration by parts). The limit at infinity is computed by L'Hôpital's rule to be zero.
4.1.2 Laplace Transform
Let the function f(t) be defined on the interval [0, ∞). Let s > 0 be a positive parameter. We define the Laplace transform of f(t) as

    F(s) = ∫_0^∞ e^{-st} f(t) dt = L(f(t)),

provided that this integral converges. It is customary to use the corresponding capital letters to denote the Laplace transform (i.e., the Laplace transform of g(t) is denoted by G(s), of h(t) by H(s), etc.). We also use the operator notation for the Laplace transform: L(f(t)).

We now build up a collection of Laplace transforms:

    L(1) = ∫_0^∞ e^{-st} dt = -e^{-st}/s |_0^∞ = 1/s;

    L(t) = ∫_0^∞ e^{-st} t dt = [ -e^{-st} t/s - e^{-st}/s^2 ] |_0^∞ = 1/s^2.

Using integration by parts,

    L(t^n) = ∫_0^∞ e^{-st} t^n dt = -e^{-st} t^n/s |_0^∞ + (n/s) ∫_0^∞ e^{-st} t^{n-1} dt = (n/s) L(t^{n-1}).

With this recurrence relation, we now compute L(t^2) = (2/s) L(t) = 2/s^3, L(t^3) = (3/s) L(t^2) = 3!/s^4, and in general

    L(t^n) = n!/s^{n+1}.
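The formula L(t^n) = n!/s^{n+1} is easy to test numerically. The sketch below (our addition, not part of the text) approximates the improper integral by a composite Simpson rule, truncating at t = 40, where the integrand e^{-st} t^n has already decayed to negligible size; the values s = 2 and n = 3 are arbitrary choices.

```python
# Numerical check (not from the text) of L(t^n) = n!/s^(n+1).
from math import exp, factorial

def simpson(f, a, b, m=4000):
    # composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    total = f(a) + f(b)
    for k in range(1, m):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

s, n = 2.0, 3
approx = simpson(lambda t: exp(-s * t) * t ** n, 0.0, 40.0)
exact = factorial(n) / s ** (n + 1)        # 3!/2^4 = 0.375
print(approx, exact)
assert abs(approx - exact) < 1e-6
```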
The next class of functions are the exponentials e^{at}, where a is some number:

    L(e^{at}) = ∫_0^∞ e^{-st} e^{at} dt = -1/(s-a) e^{-(s-a)t} |_0^∞ = 1/(s-a), provided that s > a.

Here we had to assume that s > a, to obtain a convergent integral.

Next we observe that for any constants c_1 and c_2,

    L(c_1 f(t) + c_2 g(t)) = c_1 F(s) + c_2 G(s),   (1.1)

because a similar property holds for integrals (and the Laplace transform is an integral). This expands considerably the set of functions for which we can write down the Laplace transform. For example,

    L(cosh at) = L((1/2) e^{at} + (1/2) e^{-at}) = (1/2) 1/(s-a) + (1/2) 1/(s+a) = s/(s^2 - a^2), for s > |a|.
Similarly,

    L(sinh at) = a/(s^2 - a^2), for s > |a|.

The Laplace transforms of sin at and cos at we compute in tandem. First, by the formula for exponentials,

    L(e^{iat}) = 1/(s - ia) = (s + ia)/((s - ia)(s + ia)) = (s + ia)/(s^2 + a^2) = s/(s^2 + a^2) + i a/(s^2 + a^2).

Now, cos at = Re(e^{iat}) and sin at = Im(e^{iat}), and it follows that

    L(cos at) = s/(s^2 + a^2), and L(sin at) = a/(s^2 + a^2).

If c is some number, then

    L(e^{ct} f(t)) = ∫_0^∞ e^{-st} e^{ct} f(t) dt = ∫_0^∞ e^{-(s-c)t} f(t) dt = F(s - c).

This is called a shift formula:

    L(e^{ct} f(t)) = F(s - c).   (1.2)

For example,

    L(e^{5t} sin 3t) = 3/((s-5)^2 + 9).

Another example:

    L(e^{-2t} cosh 3t) = (s+2)/((s+2)^2 - 9).

In the last example c = -2, so that s - c = s + 2. Similarly,

    L(e^{t} t^5) = 5!/(s-1)^6.
4.1.3 Inverse Laplace Transform
This is just going from F(s) back to f(t). We denote it by f(t) = L^{-1}(F(s)). We have

    L^{-1}(c_1 F(s) + c_2 G(s)) = c_1 f(t) + c_2 g(t).

This is just the formula (1.1), read backwards. Each of the formulas for the Laplace transform leads to the corresponding formula for its inverse:

    L^{-1}(1/s^{n+1}) = t^n/n!;
    L^{-1}(s/(s^2 + a^2)) = cos at;
    L^{-1}(1/(s^2 + a^2)) = (1/a) sin at;
    L^{-1}(1/(s - a)) = e^{at},

and so on. To compute L^{-1}, one often uses partial fractions, as well as the inverse of the shift formula (1.2),

    L^{-1}(F(s - c)) = e^{ct} f(t),   (1.3)

which we shall also call the shift formula.
Example Find L^{-1}((3s - 5)/(s^2 + 4)).

Breaking this fraction into a sum of two fractions, we have

    L^{-1}((3s - 5)/(s^2 + 4)) = 3 L^{-1}(s/(s^2 + 4)) - 5 L^{-1}(1/(s^2 + 4)) = 3 cos 2t - (5/2) sin 2t.

Example Find L^{-1}(2/(s - 5)^4).

We recognize that a shift by 5 is performed in the function 2/s^4. We begin by inverting this function, L^{-1}(2/s^4) = t^3/3, and then pay for the shift according to the shift formula (1.3):

    L^{-1}(2/(s - 5)^4) = e^{5t} t^3/3.

Example Find L^{-1}((s + 7)/(s^2 - s - 6)).

We factor the denominator, and then use partial fractions:

    (s + 7)/(s^2 - s - 6) = (s + 7)/((s - 3)(s + 2)) = 2/(s - 3) - 1/(s + 2),

which gives

    L^{-1}((s + 7)/(s^2 - s - 6)) = 2 e^{3t} - e^{-2t}.
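Partial fraction decompositions like the one above are cheap to verify by sampling. A small sketch (our addition, not part of the text) checks the identity (s+7)/(s^2-s-6) = 2/(s-3) - 1/(s+2) with exact rational arithmetic at a few points away from the poles s = 3 and s = -2:

```python
# Verifying the partial fraction decomposition used above (not from the text).
from fractions import Fraction

for s in [Fraction(0), Fraction(1), Fraction(5), Fraction(-7, 2)]:
    left = (s + 7) / (s * s - s - 6)
    right = Fraction(2) / (s - 3) - Fraction(1) / (s + 2)
    assert left == right          # exact equality, no rounding involved
print("decomposition confirmed")
```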
Example Find L^{-1}((2s - 1)/(s^2 + 2s + 5)).

One cannot factor the denominator, so we complete the square:

    (2s - 1)/(s^2 + 2s + 5) = (2s - 1)/((s + 1)^2 + 4) = (2(s + 1) - 3)/((s + 1)^2 + 4),

and then we adjust the numerator, so that it involves the same shift (as in the denominator). Without the shift, we have the function (2s - 3)/(s^2 + 4), whose inverse Laplace transform is 2 cos 2t - (3/2) sin 2t. By the shift formula,

    L^{-1}((2s - 1)/(s^2 + 2s + 5)) = 2 e^{-t} cos 2t - (3/2) e^{-t} sin 2t.
4.2 Solving The Initial Value Problems
Integrating by parts,

    L(f'(t)) = ∫_0^∞ e^{-st} f'(t) dt = e^{-st} f(t) |_0^∞ + s ∫_0^∞ e^{-st} f(t) dt.

Let us assume that f(t) does not grow too fast as t → ∞, i.e., |f(t)| ≤ b e^{at}, for some positive constants a and b. If we now choose s > a, then the limit as t → ∞ is zero, while the lower limit gives -f(0). We then have

    L(f'(t)) = -f(0) + s F(s).   (2.1)

To compute the Laplace transform of f''(t), we use the formula (2.1) twice:

    L(f''(t)) = L((f'(t))') = -f'(0) + s L(f'(t)) = -f'(0) - s f(0) + s^2 F(s).   (2.2)

In general,

    L(f^{(n)}(t)) = -f^{(n-1)}(0) - s f^{(n-2)}(0) - ··· - s^{n-1} f(0) + s^n F(s).   (2.3)
Example Solve y'' + 3y' + 2y = 0, y(0) = -1, y'(0) = 4.

We apply the Laplace transform to both sides of the equation. Using the linearity of the Laplace transform, and the fact that L(0) = 0, we have

    L(y'') + 3 L(y') + 2 L(y) = 0.

We now apply the formulas (2.1), (2.2), and then use our initial conditions:

    -y'(0) - s y(0) + s^2 Y(s) + 3(-y(0) + s Y(s)) + 2 Y(s) = 0;
    -4 + s + s^2 Y(s) + 3(1 + s Y(s)) + 2 Y(s) = 0;
    (s^2 + 3s + 2) Y(s) + s - 1 = 0.

We now solve for Y(s):

    Y(s) = (1 - s)/(s^2 + 3s + 2).

To get the solution, it remains to find the inverse Laplace transform y(t) = L^{-1}(Y(s)). For that we factor the denominator, and use partial fractions:

    (1 - s)/(s^2 + 3s + 2) = (1 - s)/((s + 1)(s + 2)) = 2/(s + 1) - 3/(s + 2),

so that y(t) = 2 e^{-t} - 3 e^{-2t}.

Of course, the problem could also be solved without using the Laplace transform. The same will be true for all other problems that we shall consider. The Laplace transform is an alternative solution method, and in addition, it provides a tool that can be used in more involved situations, for example, for partial differential equations.
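To illustrate the remark, the sketch below (our addition, not part of the text) solves the same initial value problem with a classical fourth-order Runge-Kutta stepper and compares with the Laplace-transform answer y(t) = 2e^{-t} - 3e^{-2t}; the step size and end time are arbitrary choices.

```python
# Cross-check (not from the text): integrate y'' + 3y' + 2y = 0,
# y(0) = -1, y'(0) = 4, and compare with y(t) = 2e^(-t) - 3e^(-2t) at t = 3.
from math import exp

def f(state):
    y, v = state                      # v = y'
    return (v, -3.0 * v - 2.0 * y)    # first order system for (y, y')

def rk4_step(state, h):
    k1 = f(state)
    k2 = f((state[0] + h/2*k1[0], state[1] + h/2*k1[1]))
    k3 = f((state[0] + h/2*k2[0], state[1] + h/2*k2[1]))
    k4 = f((state[0] + h*k3[0], state[1] + h*k3[1]))
    return (state[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

h, state = 0.001, (-1.0, 4.0)
for _ in range(3000):                 # integrate to t = 3
    state = rk4_step(state, h)
closed_form = 2 * exp(-3.0) - 3 * exp(-6.0)
print(state[0], closed_form)
assert abs(state[0] - closed_form) < 1e-9
```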
Example Solve y'' - 4y' + 5y = 0, y(0) = 1, y'(0) = -2.

We apply the Laplace transform to both sides of the equation. Using our initial conditions, we get

    2 - s + s^2 Y(s) - 4(-1 + s Y(s)) + 5 Y(s) = 0;
    Y(s) = (s - 6)/(s^2 - 4s + 5).

To invert the Laplace transform, we complete the square in the denominator, and then produce the same shift in the numerator:

    Y(s) = (s - 6)/(s^2 - 4s + 5) = ((s - 2) - 4)/((s - 2)^2 + 1).

The answer is y(t) = e^{2t} cos t - 4 e^{2t} sin t.
Example Solve

    y'' + ω^2 y = 5 cos 2t,  ω ≠ 2,
    y(0) = 1, y'(0) = 0.

We have a spring with natural frequency ω, subjected to an external force of frequency 2. We apply the Laplace transform to both sides of the equation. Using our initial conditions, we get

    -s + s^2 Y(s) + ω^2 Y(s) = 5s/(s^2 + 4);
    Y(s) = 5s/((s^2 + 4)(s^2 + ω^2)) + s/(s^2 + ω^2).

The second term is easy to invert. To find the inverse Laplace transform of the first term, we use guess-and-check (or partial fractions):

    s/((s^2 + 4)(s^2 + ω^2)) = 1/(ω^2 - 4) [ s/(s^2 + 4) - s/(s^2 + ω^2) ].

The answer is y(t) = 5/(ω^2 - 4) (cos 2t - cos ωt) + cos ωt.
When ω approaches 2, the amplitude of the oscillations gets large. To treat the case of resonance, when ω = 2, we need one more formula.

We differentiate in s both sides of the formula

    F(s) = ∫_0^∞ e^{-st} f(t) dt

to obtain

    F'(s) = -∫_0^∞ e^{-st} t f(t) dt = -L(t f(t)),

or

    L(t f(t)) = -F'(s).

For example,

    L(t sin 2t) = -d/ds [ 2/(s^2 + 4) ] = 4s/(s^2 + 4)^2.   (2.4)
Example Solve (a case of resonance)

    y'' + 4y = 5 cos 2t,
    y(0) = 0, y'(0) = 0.

Using the Laplace transform, we have

    s^2 Y(s) + 4 Y(s) = 5s/(s^2 + 4);
    Y(s) = 5s/(s^2 + 4)^2.

Then, using (2.4), y(t) = (5/4) t sin 2t. We see that the amplitude of the oscillations (which is equal to (5/4) t) tends to infinity with time.
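The resonant solution can also be confirmed by direct differentiation: for y = (5/4) t sin 2t one computes y'' = 5 cos 2t - 5t sin 2t, so y'' + 4y reproduces the forcing 5 cos 2t exactly. A short check at sample points (our addition, not part of the text):

```python
# Checking that y = (5/4) t sin 2t satisfies y'' + 4y = 5 cos 2t (not from the text).
from math import sin, cos

for t in [0.0, 0.3, 1.7, 4.2]:
    y = 1.25 * t * sin(2 * t)
    ypp = 5 * cos(2 * t) - 5 * t * sin(2 * t)     # second derivative of y
    assert abs((ypp + 4 * y) - 5 * cos(2 * t)) < 1e-12
print("residual vanishes at all sample points")
```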
Example Solve

    y'''' - y = 0,
    y(0) = 1, y'(0) = 0, y''(0) = 1, y'''(0) = 0.

Applying the Laplace transform, then using our initial conditions, we have

    -y'''(0) - s y''(0) - s^2 y'(0) - s^3 y(0) + s^4 Y(s) - Y(s) = 0;
    (s^4 - 1) Y(s) = s^3 + s = s(s^2 + 1);
    Y(s) = s(s^2 + 1)/(s^4 - 1) = s(s^2 + 1)/((s^2 - 1)(s^2 + 1)) = s/(s^2 - 1).

I.e., y(t) = cosh t.
4.2.1 Step Functions
Sometimes an external force acts only over some finite time interval. One uses step functions to model such forces. The basic step function is the Heaviside function u_c(t), defined for any positive constant c by

    u_c(t) = 0 if t < c,  u_c(t) = 1 if t ≥ c.

[Figure: the Heaviside step function u_c(t), which jumps from 0 to 1 at t = c]
(Oliver Heaviside, 1850 - 1925, was a self-taught English electrical engineer.) Using u_c(t), we can build up other step functions. For example, the function u_2(t) - u_4(t) is equal to 1 for 2 ≤ t < 4, and is zero otherwise. Then, for example, (u_2(t) - u_4(t)) t^2 gives a force, which is equal to t^2 for 2 ≤ t < 4, and is zero for other t. We have

    L(u_c(t)) = ∫_0^∞ e^{-st} u_c(t) dt = ∫_c^∞ e^{-st} dt = e^{-cs}/s;
    L^{-1}(e^{-cs}/s) = u_c(t).

Similarly, we shall compute the Laplace transform of the following shifted function, which begins at t = c (it is zero for 0 < t < c):

    L(u_c(t) f(t - c)) = ∫_0^∞ e^{-st} u_c(t) f(t - c) dt = ∫_c^∞ e^{-st} f(t - c) dt.

In the last integral we change the variable t → z, by setting t - c = z. We have dt = dz, and the integral becomes

    ∫_0^∞ e^{-s(c + z)} f(z) dz = e^{-cs} ∫_0^∞ e^{-sz} f(z) dz = e^{-cs} F(s).

We have established another pair of shift formulas:

    L(u_c(t) f(t - c)) = e^{-cs} F(s);
    L^{-1}(e^{-cs} F(s)) = u_c(t) f(t - c).   (2.5)
Example Solve

    y'' + 9y = u_2(t) - u_4(t), y(0) = 1, y'(0) = 0.

Here the forcing term is equal to 1 for 2 ≤ t < 4, and is zero for other t. Taking the Laplace transform, then solving for Y(s), we have

    s^2 Y(s) - s + 9 Y(s) = e^{-2s}/s - e^{-4s}/s;
    Y(s) = s/(s^2 + 9) + e^{-2s} 1/(s(s^2 + 9)) - e^{-4s} 1/(s(s^2 + 9)).

Using guess-and-check (or partial fractions),

    1/(s(s^2 + 9)) = (1/9)[ 1/s - s/(s^2 + 9) ],

and therefore

    L^{-1}(1/(s(s^2 + 9))) = 1/9 - (1/9) cos 3t.

Using (2.5), we conclude

    y(t) = cos 3t + u_2(t)[ 1/9 - (1/9) cos 3(t - 2) ] - u_4(t)[ 1/9 - (1/9) cos 3(t - 4) ].

Observe that the solution undergoes changes in behavior at t = 2, and at t = 4.
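A numerical cross-check of this example (our addition, not part of the text): integrating the equation directly with RK4 and the discontinuous forcing should reproduce the closed-form answer at t = 5 to roughly the accuracy lost at the two jumps of the forcing.

```python
# Integrate y'' + 9y = u_2(t) - u_4(t), y(0) = 1, y'(0) = 0 (not from the text).
from math import cos

def force(t):
    return (1.0 if t >= 2 else 0.0) - (1.0 if t >= 4 else 0.0)

def deriv(t, y, v):
    return v, -9.0 * y + force(t)

h, t, y, v = 1e-4, 0.0, 1.0, 0.0
while t < 5.0 - 1e-12:
    k1y, k1v = deriv(t, y, v)
    k2y, k2v = deriv(t + h/2, y + h/2*k1y, v + h/2*k1v)
    k3y, k3v = deriv(t + h/2, y + h/2*k2y, v + h/2*k2v)
    k4y, k4v = deriv(t + h, y + h*k3y, v + h*k3v)
    y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
    v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
    t += h

exact = cos(15.0) + (1 - cos(9.0))/9 - (1 - cos(3.0))/9   # the formula at t = 5
print(y, exact)
assert abs(y - exact) < 1e-3
```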
4.3 The Delta Function and Impulse Forces
Imagine a rod, which is so thin that we can consider it to be one-dimensional, and so long that we assume it to extend for -∞ < t < ∞ along the t axis. Assume that the function ρ(t) gives the density of the rod. If we subdivide the rod, using the points t_1, t_2, ... at a distance Δt apart, then the mass of the piece i can be approximated by ρ(t_i) Δt, and Σ_i ρ(t_i) Δt gives an approximation of the total mass. Passing to the limit, we get the exact value of the mass

    m = lim_{Δt → 0} Σ_i ρ(t_i) Δt = ∫_{-∞}^{∞} ρ(t) dt.

Assume now that the rod is moved to a new position in the (t, y) plane, with each point (t, 0) moved to (t, f(t)), where f(t) is a given function. What is the work needed for this move? For the piece i, the work is approximated by f(t_i) ρ(t_i) Δt. (We assume that the gravity g = 1 on our private planet.) The total work is then

    W = lim_{Δt → 0} Σ_i f(t_i) ρ(t_i) Δt = ∫_{-∞}^{∞} ρ(t) f(t) dt.

Now assume that the rod has unit mass, m = 1, and the entire mass is pushed into a single point t = 0. The resulting distribution of mass is called the delta distribution, or the delta function, and is denoted δ(t). It has the following properties:

(i) δ(t) = 0 for t ≠ 0;
(ii) ∫_{-∞}^{∞} δ(t) dt = 1 (unit mass);
(iii) ∫_{-∞}^{∞} δ(t) f(t) dt = f(0).

The last formula holds, because work is needed only to move the mass 1 at t = 0, the distance of f(0). Observe that δ(t) is not a usual function, like the ones studied in Calculus. (If a usual function is equal to zero, except at one point, its integral is zero, over any interval.) One can think of δ(t) as the limit of the functions

    f_ε(t) = 1/(2ε) if -ε < t < ε,  and 0 for other t,

as ε → 0. (Observe that ∫_{-∞}^{∞} f_ε(t) dt = 1.)
For any number t_0, the function δ(t - t_0) gives a translation of the delta function, with unit mass concentrated at t = t_0. Correspondingly, its properties are

(i) δ(t - t_0) = 0 for t ≠ t_0;
(ii) ∫_{-∞}^{∞} δ(t - t_0) dt = 1;
(iii) ∫_{-∞}^{∞} δ(t - t_0) f(t) dt = f(t_0).

Using the properties (i) and (iii), we compute the Laplace transform, for any t_0 ≥ 0:

    L(δ(t - t_0)) = ∫_0^∞ δ(t - t_0) e^{-st} dt = ∫_{-∞}^{∞} δ(t - t_0) e^{-st} dt = e^{-s t_0}.

In particular,

    L(δ(t)) = 1.

Correspondingly, L^{-1}(1) = δ(t), and L^{-1}(e^{-s t_0}) = δ(t - t_0).

Of course, other physical quantities can also be concentrated at a point. In the following example we consider forced vibrations of a spring, with the external force concentrated at t = 2. In other words, an external impulse is applied at t = 2.
Example Solve the initial value problem

    y'' + 2y' + 5y = 6 δ(t - 2), y(0) = 0, y'(0) = 0.

Applying the Laplace transform, then solving for Y(s),

    (s^2 + 2s + 5) Y(s) = 6 e^{-2s};
    Y(s) = 6 e^{-2s}/(s^2 + 2s + 5) = e^{-2s} 6/((s + 1)^2 + 4).

[Figure 4.1: Spring's response to an impulse force]

Recalling that by a shift formula L^{-1}(1/((s + 1)^2 + 4)) = (1/2) e^{-t} sin 2t, we conclude, using (2.5), that

    y(t) = 3 u_2(t) e^{-(t - 2)} sin 2(t - 2).

Before the time t = 2, the external force is zero. Coupled with zero initial conditions, this leaves the spring at rest for t ≤ 2. An impulse force at t = 2 sets the spring in motion, but the vibrations quickly die down, because of the heavy damping; see Figure 4.1.
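The impulse can be mimicked numerically by a tall narrow pulse. The sketch below (our addition, not part of the text; the pulse width, step size, and the simple Euler scheme are arbitrary choices) replaces 6δ(t-2) by the constant 6/ε on [2, 2+ε] and compares the result at t = 3 with the closed-form answer:

```python
# Approximating the delta forcing by a narrow pulse (not from the text).
from math import exp, sin

eps = 1e-3                      # pulse width (arbitrary)
def force(t):
    return 6.0 / eps if 2.0 <= t < 2.0 + eps else 0.0

h, t, y, v = 1e-5, 0.0, 0.0, 0.0
for _ in range(300000):         # forward Euler up to t = 3
    y, v, t = y + h * v, v + h * (force(t) - 2.0 * v - 5.0 * y), t + h

exact = 3.0 * exp(-(3.0 - 2.0)) * sin(2.0 * (3.0 - 2.0))
print(y, exact)
assert abs(y - exact) < 1e-2
```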
4.4 Convolution and the Tautochrone curve
The problem

    y'' + y = 0, y(0) = 0, y'(0) = 1

has the solution y = sin t. If we now add a forcing term g(t) (and start the spring at rest),

    y'' + y = g(t), y(0) = 0, y'(0) = 0,

then the solution is

    y(t) = ∫_0^t sin(t - v) g(v) dv,   (4.1)

as we saw in the section on convolution integrals. Motivated by this formula, we now define the convolution of two functions f(t) and g(t):

    (f * g)(t) = ∫_0^t f(t - v) g(v) dv.

The formula (4.1) can then be written as

    y(t) = sin t * g(t).

Another example:

    t * t^2 = ∫_0^t (t - v) v^2 dv = t ∫_0^t v^2 dv - ∫_0^t v^3 dv = t^4/3 - t^4/4 = t^4/12.

If you compute t^2 * t, the answer is the same. More generally,

    g * f = f * g.

Indeed, making a change of variables v → u, by letting u = t - v, we express

    g * f = ∫_0^t g(t - v) f(v) dv = -∫_t^0 g(u) f(t - u) du = ∫_0^t f(t - u) g(u) du = f * g.
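Convolutions are also easy to approximate numerically, which gives a quick check of t * t^2 = t^4/12 and of commutativity (our addition, not part of the text; the midpoint rule and the sample point t = 1.7 are arbitrary choices):

```python
# Numerical check of the convolution example above (not from the text).
def conv(f, g, t, m=20000):
    h = t / m
    # midpoint rule for integral_0^t f(t-v) g(v) dv
    return sum(f(t - (k + 0.5) * h) * g((k + 0.5) * h) for k in range(m)) * h

t = 1.7
a = conv(lambda u: u, lambda u: u * u, t)       # t * t^2
b = conv(lambda u: u * u, lambda u: u, t)       # t^2 * t
print(a, b, t**4 / 12)
assert abs(a - t**4 / 12) < 1e-6
assert abs(a - b) < 1e-9                        # commutativity
```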
It turns out that the Laplace transform of a convolution is equal to the product of the Laplace transforms:

    L(f * g) = F(s) G(s).

Indeed,

    L(f * g) = ∫_0^∞ e^{-st} ∫_0^t f(t - v) g(v) dv dt = ∫∫_D e^{-st} f(t - v) g(v) dv dt,

where the double integral on the right hand side is taken over the region D of the tv-plane, which is the infinite wedge 0 < v < t in the first quadrant. We now evaluate this double integral by using the reverse order of repeated integrations:

    ∫∫_D e^{-st} f(t - v) g(v) dv dt = ∫_0^∞ g(v) [ ∫_v^∞ e^{-st} f(t - v) dt ] dv.   (4.2)

For the integral in the brackets, we make a change of variables t → u, by letting u = t - v:

    ∫_v^∞ e^{-st} f(t - v) dt = ∫_0^∞ e^{-s(v + u)} f(u) du = e^{-sv} F(s),

and then the right hand side of (4.2) is equal to F(s) G(s).

[Figure: the infinite wedge D, the region 0 < v < t in the (t, v) plane]
We conclude a useful formula:

    L^{-1}(F(s) G(s)) = (f * g)(t).

For example,

    L^{-1}(s^2/(s^2 + 4)^2) = cos 2t * cos 2t = ∫_0^t cos 2(t - v) cos 2v dv.

Using that

    cos 2(t - v) = cos 2t cos 2v + sin 2t sin 2v,

we conclude

    L^{-1}(s^2/(s^2 + 4)^2) = cos 2t ∫_0^t cos^2 2v dv + sin 2t ∫_0^t sin 2v cos 2v dv = (1/2) t cos 2t + (1/4) sin 2t.
Example Consider vibrations of a spring at resonance:

    y'' + y = 3 cos t, y(0) = 0, y'(0) = 0.

Taking the Laplace transform, we compute

    Y(s) = 3 s/(s^2 + 1)^2.

Then

    y(t) = 3 sin t * cos t = (3/2) t sin t.
The Tautochrone curve

[Figure: the tautochrone curve, with a starting point (x, y) and an intermediate position (x_1, v)]

Assume a particle slides down a curve through the origin in the first quadrant of the xy-plane, under the influence of the force of gravity. We wish to find a curve such that the time T it takes to reach the bottom at (0, 0) is the same, regardless of the starting point (x, y). (The initial velocity at the starting point is assumed to be zero.) This historical curve, the tautochrone (which means, loosely, "the same time" in Greek), was found by Christiaan Huygens in 1673. He was motivated by the construction of a clock pendulum whose period is independent of its amplitude.
Let (x_1, v) be any intermediate position of the particle, v < y. Let s = f(v) be the length of the curve from (0, 0) to (x_1, v). Of course, the length s depends also on the time t, and ds/dt gives the speed of the particle. The kinetic energy of the particle is due to the decrease of its potential energy (m is the mass of the particle):

    (1/2) m (ds/dt)^2 = m g (y - v).

By the chain rule, ds/dt = (ds/dv)(dv/dt) = f'(v) dv/dt, giving us

    (1/2) (f'(v) dv/dt)^2 = g(y - v);
    f'(v) dv/dt = -√(2g) √(y - v).
(Minus, because the function v(t) is decreasing.) We separate the variables, and integrate:

    ∫_0^y f'(v)/√(y - v) dv = ∫_0^T √(2g) dt = √(2g) T.

(During the time T, the particle descends from v = y to v = 0.) To find the function f', we need to solve an integral equation. We can rewrite it as a convolution:

    y^{-1/2} * f'(y) = √(2g) T.   (4.3)

Recall that in one of our homework problems we had L(t^{-1/2}) = √(π/s), or, in terms of the variable y,

    L(y^{-1/2}) = √(π/s).   (4.4)

We now apply the Laplace transform to the equation (4.3):

    √(π/s) L(f'(y)) = √(2g) T (1/s).

We now solve for L(f'(y)), and put the answer in the form

    L(f'(y)) = (T/π) √(2g) √(π/s) = √a √(π/s),

where we denote a = 2g T^2/π^2. Using (4.4) again,

    f'(y) = √a y^{-1/2}.   (4.5)
We have ds = √(dx^2 + dy^2), and so f'(y) = ds/dy = √(1 + (dx/dy)^2). We use this in (4.5):

    √(1 + (dx/dy)^2) = √a (1/√y).   (4.6)

This is a first order equation. We could solve it for dx/dy, and then separate the variables. But it is easier to use the parametric integration technique. To do that, we solve this equation for y,

    y = a/(1 + (dx/dy)^2),   (4.7)

and set

    dx/dy = (1 + cos θ)/sin θ,   (4.8)

where θ is a parameter. Using (4.8) in (4.7), we express

    y = a sin^2 θ/(sin^2 θ + (1 + cos θ)^2) = a sin^2 θ/(2 + 2 cos θ) = (a/2)(1 - cos^2 θ)/(1 + cos θ) = (a/2)(1 - cos θ).

Observe that dy = (a/2) sin θ dθ, and then from (4.8)

    dx = (1 + cos θ)/sin θ dy = (a/2)(1 + cos θ) dθ.

We compute x by integration, obtaining

    x = (a/2)(θ + sin θ), y = (a/2)(1 - cos θ).

We have obtained a parametric representation of the tautochrone. This curve was studied in Calculus, under the name of cycloid.
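The defining property can be tested numerically (our addition, not part of the text): timing the descent along the cycloid from two different starting angles should give the same value, which works out to π√(a/(2g)). The parameter values, starting angles, and the midpoint rule are arbitrary choices; the square-root singularity at the starting point is integrable, so the crude quadrature still converges.

```python
# Numerical check of the tautochrone property of the cycloid (not from the text).
from math import sin, cos, sqrt, pi

g, a = 9.8, 2.0

def descent_time(p0, m=120000):
    # t = integral_0^p0 a*cos(p/2) dp / sqrt(2*g*a*(sin^2(p0/2) - sin^2(p/2))),
    # using ds = a*cos(p/2) dp and speed = sqrt(2g (y0 - y)); midpoint rule.
    h = p0 / m
    s0 = sin(p0 / 2) ** 2
    total = 0.0
    for k in range(m):
        p = (k + 0.5) * h
        total += a * cos(p / 2) / sqrt(2 * g * a * (s0 - sin(p / 2) ** 2))
    return total * h

expected = pi * sqrt(a / (2 * g))
t1, t2 = descent_time(1.0), descent_time(2.5)
print(t1, t2, expected)
assert abs(t1 - expected) < 1e-2
assert abs(t2 - expected) < 1e-2
```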
4.4.1 Problems
I. Find the Laplace transform of the following functions.

1. 5 + 2t^3 - e^{-4t}.  Ans. 5/s + 12/s^4 - 1/(s + 4).

2. 2 sin 3t - t^3.  Ans. 6/(s^2 + 9) - 6/s^4.

3. cosh 2t - e^{4t}.

4. e^{2t} cos 3t.  Ans. (s - 2)/((s - 2)^2 + 9).

5. (t^3 - 3t)/t.  Ans. 2/s^3 - 3/s.

6. e^{-3t} t^4.  Ans. 24/(s + 3)^5.
II. Find the inverse Laplace transform of the following functions.

1. 1/(s^2 + 4) - 2/s^3.  Ans. (1/2) sin 2t - t^2.

2. s/(s^2 - 9) - 2/(s + 3).

3. 1/(s^2 + s).  Ans. 1 - e^{-t}.

4. 1/(s^3 + s).  Ans. 1 - cos t.

5. 1/(s^2 - 3s).  Ans. (1/3) e^{3t} - 1/3.

6. 1/((s^2 + 1)(s^2 + 4)).  Ans. (1/3) sin t - (1/6) sin 2t.

7. 1/(s^2 + 2s + 10).  Ans. (1/3) e^{-t} sin 3t.

8. 1/(s^2 + s - 2).  Ans. (1/3) e^{t} - (1/3) e^{-2t}.

9. s/(s^2 + s + 1).  Ans. e^{-t/2} [ cos(√3 t/2) - (1/√3) sin(√3 t/2) ].

10. (s - 1)/(s^2 - s - 2).  Ans. (1/3) e^{2t} + (2/3) e^{-t}.

11. s/(4s^2 - 4s + 5).  Ans. e^{t/2} [ (1/4) cos t + (1/8) sin t ].
III. Using the Laplace transform, solve the following initial value problems.

1. y'' + 3y' + 2y = 0, y(0) = 1, y'(0) = -2.  Ans. y = e^{-2t}.

2. y'' + 2y' + 5y = 0, y(0) = 1, y'(0) = -2.
Ans. y = (1/2) e^{-t} (2 cos 2t - sin 2t).

3. y'' + y = sin 2t, y(0) = 0, y'(0) = 1.  Ans. y = (5/3) sin t - (1/3) sin 2t.

4. y'' + 2y' + 2y = e^{t}, y(0) = 0, y'(0) = 1.
Ans. y = (1/5) e^{t} - (1/5) e^{-t} (cos t - 3 sin t).

5. y'''' - y = 0, y(0) = 0, y'(0) = 1, y''(0) = 0, y'''(0) = 0.
Ans. y = (1/2) sin t + (1/2) sinh t.

6. y'''' - 16y = 0, y(0) = 0, y'(0) = 2, y''(0) = 0, y'''(0) = 8.
Ans. y = sinh 2t.
IV.
1. (a) Let s > 0. Show that

    ∫_{-∞}^{∞} e^{-s x^2} dx = √(π/s).

Hint: Denote I = ∫_{-∞}^{∞} e^{-s x^2} dx. Then

    I^2 = ∫_{-∞}^{∞} e^{-s x^2} dx ∫_{-∞}^{∞} e^{-s y^2} dy = ∫∫ e^{-s(x^2 + y^2)} dA.

This is a double integral over the entire xy-plane. Evaluate it using polar coordinates, to obtain I^2 = π/s.

(b) Show that L(t^{-1/2}) = √(π/s).
Hint: L(t^{-1/2}) = ∫_0^∞ t^{-1/2} e^{-st} dt. Make a change of variables t → x, by letting x = t^{1/2}. Then

    L(t^{-1/2}) = 2 ∫_0^∞ e^{-s x^2} dx = ∫_{-∞}^{∞} e^{-s x^2} dx = √(π/s).

2. Show that ∫_0^∞ (sin x)/x dx = π/2.
Hint: Consider f(t) = ∫_0^∞ (sin tx)/x dx, and calculate the Laplace transform F(s) = (π/2)(1/s).

3. Solve the system of differential equations

    dx/dt = 2x - y, x(0) = 4,
    dy/dt = -x + 2y, y(0) = -2.

Ans. x(t) = e^{t} + 3 e^{3t}, y(t) = e^{t} - 3 e^{3t}.

4. Solve the following non-homogeneous system of differential equations

    x' = 2x - 3y + t, x(0) = 0,
    y' = -2x + y, y(0) = 1.

Ans. x(t) = e^{-t} - (9/16) e^{4t} + (1/16)(4t - 7), y(t) = e^{-t} + (3/8) e^{4t} + (1/8)(4t - 3).
V.
1. A function f(t) is equal to 1 for 1 ≤ t < 5, and is equal to 0 for all other t ≥ 0. Represent f(t) as a difference of two step functions, and find its Laplace transform.
Ans. f(t) = u_1(t) - u_5(t), F(s) = e^{-s}/s - e^{-5s}/s.

2. Find the Laplace transform of t^2 - 2 u_4(t).  Ans. F(s) = 2/s^3 - 2 e^{-4s}/s.

3. Sketch the graph of the function u_2(t) - 2 u_3(t), and find its Laplace transform.

4. A function g(t) is equal to 1 for 0 ≤ t < 5, and is equal to -2 for t > 5. Represent g(t) using step functions, and find its Laplace transform.
Ans. g(t) = 1 - 3 u_5(t), G(s) = 1/s - 3 e^{-5s}/s.

5. Find the inverse Laplace transform of (1/s^2)(2 e^{-s} - 3 e^{-4s}).
Ans. 2 u_1(t)(t - 1) - 3 u_4(t)(t - 4).

6. Find the inverse Laplace transform of e^{-2s} (3s - 1)/(s^2 + 4).
Ans. u_2(t) [ 3 cos 2(t - 2) - (1/2) sin 2(t - 2) ].

7. Find the inverse Laplace transform of e^{-s} 1/(s^2 + s - 6).
Ans. u_1(t) [ (1/5) e^{2t - 2} - (1/5) e^{-3t + 3} ].

8. Find the inverse Laplace transform of e^{-(π/2) s} 1/(s^2 + 2s + 5), and simplify the answer.
Ans. -(1/2) u_{π/2}(t) e^{-t + π/2} sin 2t.

9. Solve y'' + y = 2 u_1(t) - u_5(t), y(0) = 0, y'(0) = 2.

10. Solve y'' + 3y' + 2y = u_2(t), y(0) = 0, y'(0) = -1.
Ans. y(t) = e^{-2t} - e^{-t} + u_2(t) [ 1/2 - e^{-(t - 2)} + (1/2) e^{-2(t - 2)} ].
VI.
1. Show that L(u_c'(t)) = L(δ(t - c)).
This problem shows that u_c'(t) = δ(t - c).

2. Find the Laplace transform of δ(t - 4) - 2 u_4(t).

3. Solve y'' + y = δ(t - π), y(0) = 0, y'(0) = 2.
Ans. y(t) = 2 sin t - u_π(t) sin t.

4. Solve y'' + 2y' + 10y = δ(t - π), y(0) = 0, y'(0) = 0.
Ans. y(t) = -(1/3) u_π(t) e^{-t + π} sin 3t.

5. Solve 4y'' + y = δ(t), y(0) = 0, y'(0) = 0.
Ans. y(t) = (1/2) sin((1/2) t).

6. Solve 4y'' + 4y' + 5y = δ(t - 2), y(0) = 0, y'(0) = 1.
VII.
1. Show that sin t * 1 = 1 - cos t. (Observe that sin t * 1 ≠ sin t.)

2. Show that f(t) * δ(t) = f(t), for any f(t).
(So that the delta function plays the role of unity for convolution.)

3. Find the convolution t * sin at.  Ans. (at - sin at)/a^2.

4. Find the convolution cos t * cos t.
Hint: cos(t - v) = cos t cos v + sin t sin v.  Ans. (1/2) t cos t + (1/2) sin t.

5. Using convolutions, find the inverse Laplace transform of the following functions.

(a) 1/(s^3 (s^2 + 1)).  Ans. (t^2/2) * sin t = t^2/2 + cos t - 1.

(b) s/((s + 1)(s^2 + 9)).  Ans. -e^{-t}/10 + (1/10) cos 3t + (3/10) sin 3t.

(c) s/(s^2 + 1)^2.  Ans. (1/2) t sin t.

(d) 1/(s^2 + 9)^2.  Ans. (1/54) sin 3t - (1/18) t cos 3t.

6. Solve the following initial value problem at resonance:

    y'' + 9y = cos 3t, y(0) = 0, y'(0) = 0.

Ans. y(t) = (1/6) t sin 3t.

7. Solve the initial value problem with a given forcing term g(t):

    y'' + 4y = g(t), y(0) = 0, y'(0) = 0.

Ans. y(t) = (1/2) ∫_0^t sin 2(t - v) g(v) dv.
Chapter 5
Systems of Differential Equations
5.1 The Case of Distinct Eigenvalues
5.1.1 Review of Vectors and Matrices
Recall that given two vectors C_1 = (a_1, a_2, a_3)^T and C_2 = (b_1, b_2, b_3)^T, we add them as C_1 + C_2 = (a_1 + b_1, a_2 + b_2, a_3 + b_3)^T, or multiply by a constant x: x C_1 = (x a_1, x a_2, x a_3)^T. More generally, we can compute the linear combinations

    x_1 C_1 + x_2 C_2 = (x_1 a_1 + x_2 b_1, x_1 a_2 + x_2 b_2, x_1 a_3 + x_2 b_3)^T

for any two constants x_1 and x_2. We shall be dealing only with square matrices

    A = | a_11 a_12 a_13 |
        | a_21 a_22 a_23 |
        | a_31 a_32 a_33 |.

We shall view A as a row of column vectors, A = (C_1 C_2 C_3), where C_1 = (a_11, a_21, a_31)^T, C_2 = (a_12, a_22, a_32)^T, and C_3 = (a_13, a_23, a_33)^T. The product of the matrix A and the vector x = (x_1, x_2, x_3)^T is defined as the vector

    A x = C_1 x_1 + C_2 x_2 + C_3 x_3.

(This definition is equivalent to the more traditional one, that you might have seen before.) We get

    A x = (a_11 x_1 + a_12 x_2 + a_13 x_3, a_21 x_1 + a_22 x_2 + a_23 x_3, a_31 x_1 + a_32 x_2 + a_33 x_3)^T.
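The column-combination definition of Ax can be illustrated with a small sketch (our addition, not part of the text; the matrix and vector are arbitrary): forming C_1 x_1 + C_2 x_2 + C_3 x_3 agrees with the usual row-by-column rule.

```python
# The product Ax as a combination of the columns of A (not from the text).
A = [[2, 1, 0],
     [1, 3, -1],
     [0, -1, 4]]
x = [1, -2, 3]

cols = [[A[i][j] for i in range(3)] for j in range(3)]              # C1, C2, C3
by_columns = [sum(cols[j][i] * x[j] for j in range(3)) for i in range(3)]
by_rows = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]  # usual rule
print(by_columns, by_rows)
assert by_columns == by_rows == [0, -8, 14]
```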
Two vectors C_1 and C_2 are called linearly dependent if one of them is a constant multiple of the other, i.e., C_2 = a C_1, for some number a. (The zero vector is linearly dependent with any other.) Linearly dependent C_1 and C_2 go along the same line. If the vectors C_1 and C_2 do not go along the same line, they are linearly independent. Three vectors C_1, C_2 and C_3 are called linearly dependent if one of them is a linear combination of the others, e.g., if C_3 = a C_1 + b C_2, for some numbers a and b. This means that C_3 lies in the plane determined by C_1 and C_2, so that all three vectors lie in the same plane. If C_1, C_2 and C_3 do not lie in the same plane, they are linearly independent.
5.1.2 Linear First Order Systems with Constant Coefficients
We wish to find the functions x_1(t), x_2(t) and x_3(t), which solve the following system of equations, with given constant coefficients a_11, ..., a_33:

    x_1' = a_11 x_1 + a_12 x_2 + a_13 x_3
    x_2' = a_21 x_1 + a_22 x_2 + a_23 x_3   (1.1)
    x_3' = a_31 x_1 + a_32 x_2 + a_33 x_3,

subject to the given initial conditions

    x_1(t_0) = α, x_2(t_0) = β, x_3(t_0) = γ.

We shall write this system in matrix notation:

    x' = A x, x(t_0) = x_0,   (1.2)

where x(t) = (x_1(t), x_2(t), x_3(t))^T is the unknown vector function, and x_0 = (α, β, γ)^T is the vector of initial conditions. Indeed, on the left in (1.1) we have the components of the vector x'(t) = (x_1'(t), x_2'(t), x_3'(t))^T, while on the right we see the components of the vector A x.
140 CHAPTER 5. SYSTEMS OF DIFFERENTIAL EQUATIONS
left in (1.1) we have components of the vector x

(t) =
_

_
x

1
(t)
x

2
(t)
x

3
(t)
_

_, while on
the right we see the components of the vector Ax.
Let us observe that given two solutions of (1.2), x
1
(t) and x
2
(t), their
linear combination c
1
x
1
(t) +c
2
x
2
(t) is also a solution of (1.2), for any con-
stants c
1
and c
2
. This is because our system is linear. We now search for
solution of (1.2) in the form
x(t) = e
t
, (1.3)
with a constant , and a vector , whose entries are independent of t. Plug-
ging this into (1.2), we have
e
t
= A
_
e
t

_
;
A = .
So that if is an eigenvalue of A, and the corresponding eigenvector, then
(1.3) gives us a solution of the problem (1.2). Observe that this is true for
any square nn matrix A. Let
1
,
2
and
3
be the eigenvalues of our 33
matrix A. There are several cases to consider.
Case 1 The eigenvalues of $A$ are real and distinct. Recall that the corresponding eigenvectors $\xi_1$, $\xi_2$ and $\xi_3$ are then linearly independent. We know that $e^{\lambda_1 t} \xi_1$, $e^{\lambda_2 t} \xi_2$ and $e^{\lambda_3 t} \xi_3$ are solutions of our system, so that their linear combination

$$x(t) = c_1 e^{\lambda_1 t} \xi_1 + c_2 e^{\lambda_2 t} \xi_2 + c_3 e^{\lambda_3 t} \xi_3 \qquad (1.4)$$

gives the general solution of our system. We need to determine $c_1$, $c_2$, and $c_3$ to satisfy the initial conditions, i.e.,

$$x(t_0) = c_1 e^{\lambda_1 t_0} \xi_1 + c_2 e^{\lambda_2 t_0} \xi_2 + c_3 e^{\lambda_3 t_0} \xi_3 = x_0. \qquad (1.5)$$

This is a system of three linear equations with three unknowns $c_1$, $c_2$, and $c_3$. The matrix of this system is non-singular, because its columns are linearly independent (observe that these columns are constant multiples of the linearly independent vectors $\xi_1$, $\xi_2$ and $\xi_3$). Therefore we can find a unique solution triple $c_1$, $c_2$, and $c_3$ of the system (1.5). Then $x(t) = c_1 e^{\lambda_1 t} \xi_1 + c_2 e^{\lambda_2 t} \xi_2 + c_3 e^{\lambda_3 t} \xi_3$ is the desired solution of our initial value problem (1.2).
Example Solve

$$x' = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.$$

This is a $2 \times 2$ system, so that we have only two terms in (1.4). We compute the eigenvalue $\lambda_1 = 1$, with the corresponding eigenvector $\xi_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$, and $\lambda_2 = 3$, with the corresponding eigenvector $\xi_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. The general solution is then

$$x(t) = c_1 e^{t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 e^{3t} \begin{pmatrix} 1 \\ 1 \end{pmatrix},$$

or in components

$$x_1(t) = c_1 e^{t} + c_2 e^{3t}$$
$$x_2(t) = -c_1 e^{t} + c_2 e^{3t}.$$

Turning to the initial conditions,

$$x_1(0) = c_1 + c_2 = 1$$
$$x_2(0) = -c_1 + c_2 = -2.$$

We calculate $c_1 = 3/2$, $c_2 = -1/2$. Ans.

$$x_1(t) = \frac{3}{2} e^{t} - \frac{1}{2} e^{3t}$$
$$x_2(t) = -\frac{3}{2} e^{t} - \frac{1}{2} e^{3t}.$$
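The arithmetic in the example above is easy to check numerically. The following is my own Python sketch (not part of the text): it verifies that the computed solution satisfies the initial condition and, via a central difference, the system $x' = Ax$.

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]  # the matrix of the example

def x(t):
    # the solution found above
    return (1.5 * math.exp(t) - 0.5 * math.exp(3 * t),
            -1.5 * math.exp(t) - 0.5 * math.exp(3 * t))

def residual(t, h=1e-6):
    # central-difference approximation of x'(t), minus A x(t); should be ~0
    xp, xm, xt = x(t + h), x(t - h), x(t)
    dx = [(xp[i] - xm[i]) / (2 * h) for i in range(2)]
    Ax = [sum(A[i][j] * xt[j] for j in range(2)) for i in range(2)]
    return max(abs(dx[i] - Ax[i]) for i in range(2))

print(x(0))           # → (1.0, -2.0), the initial condition
print(residual(0.7) < 1e-5)
```

The same check works verbatim for any of the examples in this chapter, by swapping in the matrix and the claimed solution.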
Case 2 The eigenvalue $\lambda_1$ is double, $\lambda_3 \neq \lambda_1$, however $\lambda_1$ has two linearly independent eigenvectors $\xi_1$ and $\xi_2$. The general solution is given again by the formula (1.4), with $\lambda_2$ replaced by $\lambda_1$. We see from (1.5) that we again get a linear system, which has a unique solution, because its matrix has linearly independent columns. (Linearly independent eigenvectors is the key here!)
Example Solve

$$x' = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 1 \\ 0 \\ -4 \end{pmatrix}.$$

We calculate that the matrix has a double eigenvalue $\lambda_1 = \lambda_2 = 1$, which has two linearly independent eigenvectors $\xi_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$ and $\xi_2 = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$. The other eigenvalue is $\lambda_3 = 4$, with the corresponding eigenvector $\xi_3 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$. The general solution is then

$$x(t) = c_1 e^{t} \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix} + c_2 e^{t} \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} + c_3 e^{4t} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},$$

or in components

$$x_1(t) = c_1 e^{t} + c_3 e^{4t}$$
$$x_2(t) = c_2 e^{t} + c_3 e^{4t}$$
$$x_3(t) = -c_1 e^{t} - c_2 e^{t} + c_3 e^{4t}.$$

Using the initial conditions, we calculate $c_1 = 2$, $c_2 = 1$, and $c_3 = -1$. Ans.

$$x_1(t) = 2 e^{t} - e^{4t}$$
$$x_2(t) = e^{t} - e^{4t}$$
$$x_3(t) = -3 e^{t} - e^{4t}.$$

Recall that if an $n \times n$ matrix $A$ is symmetric (i.e., $a_{ij} = a_{ji}$ for all $i$ and $j$), then all of its eigenvalues are real. The eigenvalues of a symmetric matrix may be repeated, but there is always a full set of $n$ linearly independent eigenvectors. So that one can solve the initial value problem (1.2) for any system with a symmetric matrix.
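The $3 \times 3$ answer above can be verified the same way. A Python sketch of my own (not from the text), checking the initial condition and the equation $x' = Ax$:

```python
import math

A = [[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 2.0]]

def x(t):
    # the answer found above (c1 = 2, c2 = 1, c3 = -1)
    e1, e4 = math.exp(t), math.exp(4 * t)
    return (2 * e1 - e4, e1 - e4, -3 * e1 - e4)

def residual(t, h=1e-6):
    # central-difference x'(t) minus A x(t)
    xp, xm, xt = x(t + h), x(t - h), x(t)
    dx = [(xp[i] - xm[i]) / (2 * h) for i in range(3)]
    Ax = [sum(A[i][j] * xt[j] for j in range(3)) for i in range(3)]
    return max(abs(dx[i] - Ax[i]) for i in range(3))

print(x(0))           # → (1.0, 0.0, -4.0), the initial condition
print(residual(0.5) < 1e-4)
```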
Case 3 The eigenvalue $\lambda_1$ has multiplicity two (i.e., it is a double root of the characteristic equation), $\lambda_3 \neq \lambda_1$, but $\lambda_1$ has only one linearly independent eigenvector $\xi$. We have one solution $e^{\lambda_1 t} \xi$. By analogy with the second order equations, we try $t e^{\lambda_1 t} \xi$ for the second solution. However, this vector function is a scalar multiple of the first solution, i.e., it is linearly dependent with it. We modify our guess:

$$x(t) = t e^{\lambda_1 t} \xi + e^{\lambda_1 t} \eta. \qquad (1.6)$$

It turns out one can choose a vector $\eta$, to obtain a second linearly independent solution. Plugging this into our system (1.2), and using that $A \xi = \lambda_1 \xi$,

$$e^{\lambda_1 t} \xi + \lambda_1 t e^{\lambda_1 t} \xi + \lambda_1 e^{\lambda_1 t} \eta = \lambda_1 t e^{\lambda_1 t} \xi + e^{\lambda_1 t} A \eta.$$

Cancelling a pair of terms, and dividing by $e^{\lambda_1 t}$, we simplify this to

$$(A - \lambda_1 I) \eta = \xi. \qquad (1.7)$$

Even though the matrix $A - \lambda_1 I$ is singular (its determinant is zero), it can be shown that the linear system (1.7) always has a solution $\eta$, called a generalized eigenvector. Using this $\eta$ in (1.6) gives us the second linearly independent solution, corresponding to $\lambda = \lambda_1$.
Example Solve

$$x' = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix} x.$$

This matrix has a double eigenvalue $\lambda_1 = \lambda_2 = 2$, and only one eigenvector $\xi = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. We have one solution $x^{(1)}(t) = e^{2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. The system (1.7) to determine the vector $\eta = \begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix}$ takes the form

$$-\eta_1 + \eta_2 = 1$$
$$-\eta_1 + \eta_2 = 1.$$

The second equation we throw away, because it is a multiple of the first. The first equation has infinitely many solutions. But all we need is just one solution, that is not a multiple of $\xi$. So that we set $\eta_2 = 0$, which gives $\eta_1 = -1$. We have computed the second linearly independent solution

$$x^{(2)}(t) = t e^{2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + e^{2t} \begin{pmatrix} -1 \\ 0 \end{pmatrix}.$$

The general solution is then

$$x(t) = c_1 e^{2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \left( t e^{2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + e^{2t} \begin{pmatrix} -1 \\ 0 \end{pmatrix} \right).$$
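Both steps of this construction can be tested numerically: the generalized-eigenvector equation (1.7), and the fact that the second function really solves the system. A Python sketch of mine, not part of the text:

```python
import math

A = [[1.0, 1.0], [-1.0, 3.0]]   # the matrix of the example
xi, eta = (1.0, 1.0), (-1.0, 0.0)

# (A - 2I) eta should equal xi, which is equation (1.7)
lhs = [A[i][0] * eta[0] + A[i][1] * eta[1] - 2 * eta[i] for i in range(2)]
print(lhs)  # → [1.0, 1.0]

def x2(t):
    # the second solution t e^{2t} xi + e^{2t} eta
    e = math.exp(2 * t)
    return (t * e * xi[0] + e * eta[0], t * e * xi[1] + e * eta[1])

def residual(t, h=1e-6):
    # central-difference x2'(t) minus A x2(t)
    xp, xm, xt = x2(t + h), x2(t - h), x2(t)
    dx = [(xp[i] - xm[i]) / (2 * h) for i in range(2)]
    Ax = [A[i][0] * xt[0] + A[i][1] * xt[1] for i in range(2)]
    return max(abs(dx[i] - Ax[i]) for i in range(2))

print(residual(0.3) < 1e-5)
```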
5.2 A Pair of Complex Conjugate Eigenvalues

5.2.1 Complex Valued Functions

Recall that one differentiates complex valued functions much the same way as the real ones. For example,

$$\frac{d}{dt} e^{it} = i e^{it},$$

where $i = \sqrt{-1}$ is treated the same way as any other constant. Any complex valued function $f(t)$ can be written in the form $f(t) = u(t) + i v(t)$, where $u(t)$ and $v(t)$ are real valued functions. It follows by the definition of the derivative that $f'(t) = u'(t) + i v'(t)$. For example, using Euler's formula,

$$\frac{d}{dt} e^{it} = \frac{d}{dt} (\cos t + i \sin t) = -\sin t + i \cos t = i (\cos t + i \sin t) = i e^{it}.$$

Any complex valued vector function $x(t)$ can also be written as $x(t) = u(t) + i v(t)$, where $u(t)$ and $v(t)$ are real valued vector functions. Again, we have $x'(t) = u'(t) + i v'(t)$. If such an $x(t)$ solves our system $x' = Ax$, with a real matrix $A$, then

$$u'(t) + i v'(t) = A \left( u(t) + i v(t) \right).$$

Separating the real and imaginary parts, we see that both $u(t)$ and $v(t)$ are real valued solutions of our system.
5.2.2 General Solution

So assume that the matrix $A$ has a pair of complex conjugate eigenvalues $p + iq$ and $p - iq$. They need to contribute two linearly independent solutions. The eigenvector corresponding to $p + iq$ is complex valued, which we can write as $\xi + i \eta$, where $\xi$ and $\eta$ are real valued vectors. Then $x(t) = e^{(p + iq)t} (\xi + i \eta)$ is a solution of our system. To get two real valued solutions we shall take the real and imaginary parts of this solution. We have

$$x(t) = e^{pt} (\cos qt + i \sin qt)(\xi + i \eta) = e^{pt} (\cos qt \, \xi - \sin qt \, \eta) + i e^{pt} (\sin qt \, \xi + \cos qt \, \eta).$$

So that

$$u(t) = e^{pt} (\cos qt \, \xi - \sin qt \, \eta)$$
$$v(t) = e^{pt} (\sin qt \, \xi + \cos qt \, \eta)$$

are the two real valued solutions. In the case of a $2 \times 2$ matrix (when there are no other eigenvalues), the general solution is $x(t) = c_1 u(t) + c_2 v(t)$.
Example Solve

$$x' = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}.$$

We calculate the eigenvalues $\lambda_1 = 1 + 2i$ and $\lambda_2 = 1 - 2i$. The eigenvector corresponding to $\lambda_1$ is $\begin{pmatrix} i \\ 1 \end{pmatrix}$. So that we have a complex valued solution $e^{(1 + 2i)t} \begin{pmatrix} i \\ 1 \end{pmatrix}$. We rewrite it as

$$e^{t} (\cos 2t + i \sin 2t) \begin{pmatrix} i \\ 1 \end{pmatrix} = e^{t} \begin{pmatrix} -\sin 2t \\ \cos 2t \end{pmatrix} + i e^{t} \begin{pmatrix} \cos 2t \\ \sin 2t \end{pmatrix}.$$

The real and imaginary parts give us two linearly independent solutions, so that the general solution is

$$x(t) = c_1 e^{t} \begin{pmatrix} -\sin 2t \\ \cos 2t \end{pmatrix} + c_2 e^{t} \begin{pmatrix} \cos 2t \\ \sin 2t \end{pmatrix}.$$

In components

$$x_1(t) = -c_1 e^{t} \sin 2t + c_2 e^{t} \cos 2t$$
$$x_2(t) = c_1 e^{t} \cos 2t + c_2 e^{t} \sin 2t.$$

From the initial conditions

$$x_1(0) = c_2 = 2$$
$$x_2(0) = c_1 = 1,$$

so that $c_1 = 1$, and $c_2 = 2$. Ans.

$$x_1(t) = -e^{t} \sin 2t + 2 e^{t} \cos 2t$$
$$x_2(t) = e^{t} \cos 2t + 2 e^{t} \sin 2t.$$

We see from the form of the solutions that if the matrix $A$ has all eigenvalues which are either negative or have negative real parts, then $\lim_{t \to \infty} x(t) = 0$ (i.e., all components of the vector $x(t)$ tend to zero).
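The complex computation can be checked with Python's complex arithmetic. This sketch is my own (not from the text): it verifies the eigenpair, and that the real part of the complex solution agrees with the formula $e^{t}(-\sin 2t, \cos 2t)$ used above.

```python
import cmath
import math

A = [[1, -2], [2, 1]]
lam, xi = 1 + 2j, (1j, 1)   # eigenvalue and eigenvector from the example

# check that A xi = lam xi
Axi = [A[i][0] * xi[0] + A[i][1] * xi[1] for i in range(2)]
print(all(abs(Axi[i] - lam * xi[i]) < 1e-12 for i in range(2)))

# the real part of e^{lam t} xi should be e^t (-sin 2t, cos 2t)
t = 0.7
z = [cmath.exp(lam * t) * c for c in xi]
u = (-math.exp(t) * math.sin(2 * t), math.exp(t) * math.cos(2 * t))
print(all(abs(z[i].real - u[i]) < 1e-12 for i in range(2)))
```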
5.2.3 Problems

I. Solve the following systems of differential equations:

1. $x' = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 1 \\ -3 \end{pmatrix}$.

Ans. $x_1(t) = 2 e^{2t} - e^{4t}$, $x_2(t) = -2 e^{2t} - e^{4t}$.

2. $x' = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$. Hint: reduce the system to a second order equation for $x_1$.

Ans. $x_1(t) = \frac{1}{2} e^{-t} + \frac{3}{2} e^{t}$, $x_2(t) = -\frac{1}{2} e^{-t} + \frac{3}{2} e^{t}$.

3. $x' = \begin{pmatrix} -2 & 2 & 3 \\ -2 & 3 & 2 \\ -4 & 2 & 5 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}$.

Ans. $x_1(t) = -3 e^{t} - 2 e^{2t} + 5 e^{3t}$, $x_2(t) = -4 e^{2t} + 5 e^{3t}$, $x_3(t) = -3 e^{t} + 5 e^{3t}$.

4. $x' = \begin{pmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} -2 \\ 5 \\ 0 \end{pmatrix}$.

Ans. $x_1(t) = 2 e^{2t} - 4 e^{3t}$, $x_2(t) = 5 e^{t} - 4 e^{2t} + 4 e^{3t}$, $x_3(t) = -4 e^{2t} + 4 e^{3t}$.

5. $x' = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} -2 \\ 3 \\ 2 \end{pmatrix}$.

Ans. $x_1(t) = -3 e^{t} + e^{4t}$, $x_2(t) = 2 e^{t} + e^{4t}$, $x_3(t) = e^{t} + e^{4t}$.

6. $x' = \begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$. Hint: reduce the system to a second order equation for $x_1$.

Ans. $x_1(t) = 2 \cos 2t - \sin 2t$, $x_2(t) = \cos 2t + 2 \sin 2t$.

7. $x' = \begin{pmatrix} 3 & 2 \\ -2 & 3 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

Ans. $x_1(t) = e^{3t} \sin 2t$, $x_2(t) = e^{3t} \cos 2t$.

8. $x' = \begin{pmatrix} -1 & 2 & 2 \\ -1 & -1 & 0 \\ 0 & 2 & 1 \end{pmatrix} x, \quad x(0) = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$.

Ans. $x_1(t) = \cos t + 5 \sin t$, $x_2(t) = -e^{-t} + 2 \cos t - 3 \sin t$, $x_3(t) = e^{-t} + \cos t + 5 \sin t$.

9. Show that all solutions of the system $x' = \begin{pmatrix} -a & b \\ -b & -a \end{pmatrix} x$, with positive constants $a$ and $b$, satisfy

$$\lim_{t \to \infty} x_1(t) = 0, \quad \lim_{t \to \infty} x_2(t) = 0.$$
5.3 Predator-Prey Interaction

In 1925, Vito Volterra's future son-in-law, the biologist Umberto D'Ancona, told him of the following puzzle. During World War I, when ocean fishing almost ceased, the ratio of predators (like sharks) to prey (like tuna) had increased. Why did sharks benefit more from the decreased fishing? (While the object of fishing is tuna, sharks are also caught in the nets.)

The Lotka-Volterra Equations

Let $x(t)$ and $y(t)$ give respectively the numbers of prey (tuna) and predators (sharks) as functions of time $t$. Let us assume that in the absence of sharks tuna will obey the Malthusian model

$$x'(t) = a x(t),$$

with some growth rate $a > 0$. (I.e., it would grow exponentially, $x(t) = x(0) e^{at}$.) In the absence of tuna the number of sharks will decrease exponentially,

$$y'(t) = -c y(t),$$

with some $c > 0$, because its other prey is less nourishing. Clearly, the presence of sharks will decrease the rate of growth of tuna, while tuna is good for sharks. So the model is

$$x'(t) = a x(t) - b x(t) y(t) \qquad (3.1)$$
$$y'(t) = -c y(t) + d x(t) y(t),$$

with two more given positive constants $b$ and $d$. The $x(t) y(t)$ term is proportional to the number of encounters between sharks and tuna. These encounters decrease the growth rate of tuna, and increase the growth rate of sharks. This is the famous Lotka-Volterra model. Alfred J. Lotka was an American mathematician, who developed similar ideas at about the same time. A fascinating story of Vito Volterra's life and work, and of life in Italy in the first half of the 20th century, is told in a very nice book of Judith R. Goodstein [4].
Analysis of the Model

Remember energy being constant for the vibrating spring? We have something similar here. It turns out that

$$a \ln y(t) - b y(t) + c \ln x(t) - d x(t) = C = \text{constant}. \qquad (3.2)$$

To justify that, let us introduce the function $F(x, y) = a \ln y - b y + c \ln x - d x$. We wish to show that $F(x(t), y(t)) = \text{constant}$. Using the chain rule, and expressing the derivatives from our equations (3.1), we have

$$\frac{d}{dt} F(x(t), y(t)) = a \frac{y'(t)}{y(t)} - b y'(t) + c \frac{x'(t)}{x(t)} - d x'(t)$$
$$= a (-c + d x(t)) - b (-c y(t) + d x(t) y(t)) + c (a - b y(t)) - d (a x(t) - b x(t) y(t)) = 0,$$

proving that $F(x(t), y(t))$ does not change with time $t$.

We assume that the initial numbers of both sharks and tuna are known:

$$x(0) = x_0, \quad y(0) = y_0, \qquad (3.3)$$
Figure 5.1: The integral curves of the Lotka-Volterra system (prey on the horizontal axis, predators on the vertical axis)
i.e., $x_0$ and $y_0$ are given numbers. The Lotka-Volterra system, together with the initial conditions (3.3), determines both populations at all times $(x(t), y(t))$. (The existence and uniqueness theorem holds for systems too.) Letting $t = 0$ in (3.2), we calculate the value of $C$:

$$C_0 = a \ln y_0 - b y_0 + c \ln x_0 - d x_0. \qquad (3.4)$$

In the $xy$-plane, the solution $(x(t), y(t))$ gives a parametric curve, with time $t$ being the parameter. The same curve is described by the implicit relation

$$a \ln y - b y + c \ln x - d x = C_0. \qquad (3.5)$$

This curve is just a level curve of the function $F(x, y) = a \ln y - b y + c \ln x - d x$, introduced earlier. How does the graph of $z = F(x, y)$ look? Like a mountain with a single peak, because $F(x, y)$ is a sum of a function of $y$, $a \ln y - b y$, and of a function of $x$, $c \ln x - d x$, and both of these functions are concave (down). It is clear that all level lines of $F(x, y)$ are closed curves. Following the closed curve (3.5), one can see how dramatically the relative fortunes of sharks and tuna change, just as a result of their interaction. We present a picture of three integral curves, computed by Mathematica in the case $a = 0.7$, $b = 0.5$, $c = 0.3$ and $d = 0.2$.
Properties of the Solutions

All solutions are closed curves, and there is a dot (rest point) in the middle. When $x_0 = c/d$ and $y_0 = a/b$, i.e., when we start at the point $(c/d, a/b)$, we calculate from the Lotka-Volterra equations that $x'(0) = 0$ and $y'(0) = 0$. The solution is then $x(t) = c/d$, and $y(t) = a/b$ for all times. In the $xy$-plane this gives us the point $(c/d, a/b)$, called the rest point. All other solutions $(x(t), y(t))$ are periodic, because they represent closed curves. I.e., there is a number $T$, a period, so that $x(t + T) = x(t)$ and $y(t + T) = y(t)$. This period changes from curve to curve, and it is larger the further the solution curve is from the rest point. (The monotonicity of the period was proved only in the mid-eighties by Franz Rothe [6], and Jorg Waldvogel [9].)

Divide the first of the Lotka-Volterra equations by $x(t)$, and then integrate over the period $T$:

$$\frac{x'(t)}{x(t)} = a - b y(t); \quad \int_0^T \frac{x'(t)}{x(t)} \, dt = a T - b \int_0^T y(t) \, dt.$$

But $\int_0^T \frac{x'(t)}{x(t)} \, dt = \ln x(t) \Big|_0^T = 0$, because $x(T) = x(0)$ by periodicity. It follows that

$$\frac{1}{T} \int_0^T y(t) \, dt = a/b.$$

Similarly, we derive

$$\frac{1}{T} \int_0^T x(t) \, dt = c/d.$$

We have a remarkable fact: the averages of both $x(t)$ and $y(t)$ are the same for all solutions. Moreover, these averages are equal to the coordinates of the rest point.
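The constancy of $F$ along trajectories is easy to test numerically. The following Python sketch is mine, not part of the text; the RK4 stepper and the starting point $(1, 1)$ are my choices, while the rates are the ones quoted above for Figure 5.1.

```python
import math

a, b, c, d = 0.7, 0.5, 0.3, 0.2   # the rates used for Figure 5.1

def f(x, y):
    # the Lotka-Volterra right-hand side (3.1)
    return a * x - b * x * y, -c * y + d * x * y

def F(x, y):
    # the conserved quantity (3.2)
    return a * math.log(y) - b * y + c * math.log(x) - d * x

def rk4(x, y, h, n):
    # classical fourth-order Runge-Kutta, n steps of size h
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x0, y0 = 1.0, 1.0
x1, y1 = rk4(x0, y0, 0.01, 2000)   # integrate to t = 20
print(abs(F(x1, y1) - F(x0, y0)))  # drift in F; should be tiny
```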
The Effect of Fishing

Extensive fishing will decrease the growth rate of both tuna and sharks. The new model is

$$x'(t) = (a - \varepsilon) x(t) - b x(t) y(t)$$
$$y'(t) = -(c + \delta) y(t) + d x(t) y(t),$$

where $\varepsilon$ and $\delta$ are two more given positive constants, related to the intensity of fishing. (There are other ways to model fishing.) As before, we compute the average numbers of both tuna and sharks

$$\frac{1}{T} \int_0^T x(t) \, dt = (c + \delta)/d, \quad \frac{1}{T} \int_0^T y(t) \, dt = (a - \varepsilon)/b.$$

We see an increase in the average number of tuna, and a decrease for the sharks, as a result of a moderate amount of fishing ($\varepsilon < a$). Conversely, decreased fishing increases the numbers of sharks, giving us an explanation of U. D'Ancona's data. This result is known as Volterra's principle. It applies also to insecticide treatments. If such treatment destroys both pests and their predators, it may result in an increase of the number of pests!

Biologists have questioned the validity of both the Lotka-Volterra model, and of the way we have accounted for fishing (perhaps they cannot accept the idea of two simple differential equations ruling the oceans). In fact, recently it is more common to model fishing using the system

$$x'(t) = a x(t) - b x(t) y(t) - h_1(t)$$
$$y'(t) = -c y(t) + d x(t) y(t) - h_2(t),$$

with some given positive functions $h_1(t)$ and $h_2(t)$.

I imagine both sharks and tuna do not think much of the Lotka-Volterra equations. But we do, because this system shows that dramatic changes may occur just from the interaction of the species, and not as a result of any external events. You can read more on the Lotka-Volterra model in the nice book of M.W. Braun [2]. That book has other applications of differential equations, e.g., to the discovery of art forgeries, to epidemiology, and to theories of war.
5.4 An application to epidemiology

Suppose a small group of people comes down with an infectious disease. Will this cause an epidemic? What are the parameters that public health officials should try to control? We shall analyze a way to model the spread of an infectious disease.

Let $I(t)$ be the number of infected people at time $t$. Let $S(t)$ be the number of susceptible people, the ones at risk of catching the disease. The following model was proposed in 1927 by W.O. Kermack and A.G. McKendrick:

$$\frac{dS}{dt} = -r S I \qquad (4.1)$$
$$\frac{dI}{dt} = r S I - \gamma I,$$

with some positive constants $r$ and $\gamma$. The first equation reflects the fact that the number of susceptible people decreases, as some people catch the infection (and so join the group of infected people). The rate of decrease is proportional to the number of "encounters" between the infected and susceptible people, which in turn is proportional to the product $SI$. The first term in the second equation tells us that the number of infected people would increase at exactly the same rate, if it was not for the second term. The second term, $-\gamma I$, says that the infection rate is not that bad, because some infected people are removed from the population (people who died from the disease, people who have recovered and developed immunity, and sick people who are isolated from others). The coefficient $\gamma$ is called the removal rate. To the equations (4.1) we add the initial conditions

$$S(0) = S_0, \quad I(0) = I_0, \qquad (4.2)$$

with given numbers $S_0$, the initial number of susceptible people, and $I_0$, the initial number of infected people. Solving the equations (4.1) and (4.2) will give us $(S(t), I(t))$, which is a parametric curve in the $(S, I)$ plane. Alternatively, this curve can be described if we express $I$ as a function of $S$. We express

$$\frac{dI}{dS} = \frac{dI/dt}{dS/dt} = \frac{r S I - \gamma I}{-r S I} = -1 + \frac{\gamma}{r} \frac{1}{S}.$$

We integrate this equation (we denote $\rho = \frac{\gamma}{r}$) to obtain

$$I = I_0 + S_0 - \rho \ln S_0 - S + \rho \ln S. \qquad (4.3)$$
For different values of $(S_0, I_0)$ we get a different integral curve from (4.3), but all of these curves take their maximum value at $S = \rho$, see the picture. The motion on these curves is from right to left, because we see from the first equation in (4.1) that $S(t)$ is a decreasing function of time $t$. We see that if the initial point $(S_0, I_0)$ satisfies $S_0 > \rho$, then the function $I(t)$ grows at first, and then declines. That is the case when an epidemic occurs. If, on the other hand, the initial number of susceptible people is below $\rho$, then the number of infected people $I(t)$ declines to zero, i.e., the initial outbreak has been successfully contained. The number $\rho$ is called the threshold value. To avoid an epidemic, one tries to increase the threshold value $\rho = \gamma / r$ by increasing the removal rate $\gamma$, achieved by isolating sick people. Notice also the following harsh conclusion: if a disease kills people quickly, then the removal rate is high, and such a disease may be easier to contain.
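Formula (4.3) makes the threshold behavior easy to see numerically. In the Python sketch below (mine, not from the text) the values of $r$, $\gamma$, $S_0$ and $I_0$ are illustrative choices only; the code checks that the integral curve peaks near $S = \rho$.

```python
import math

r, gamma = 0.002, 1.0
rho = gamma / r          # threshold value, here 500
S0, I0 = 700.0, 1.0      # S0 > rho, so an epidemic should occur

def I(S):
    # infected as a function of susceptibles, formula (4.3)
    return I0 + S0 - S + rho * math.log(S / S0)

Ss = [S0 * k / 1000 for k in range(1, 1001)]
best = max(Ss, key=I)    # grid point where I is largest
print(best)              # close to rho = 500
print(I(rho) > I0)       # the infection level rises above I0
```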
In some cases it is easy to estimate the number of people who will get sick during an epidemic. Assume that $I_0$ is so small that we can set it to zero, while $S_0$ is a little larger than the threshold value $\rho$, i.e., $S_0 = \rho + \nu$, where $\nu > 0$ is a small value. As time $t \to \infty$, $I(t) \to 0$, while $S(t) \to S_\infty$, its final number. We then conclude from (4.3) that

$$S_\infty - \rho \ln S_\infty = S_0 - \rho \ln S_0. \qquad (4.4)$$

The function $I = -S + \rho \ln S$ takes its global maximum at $S = \rho$. Such a function is nearly symmetric with respect to $\rho$ for $S$ close to $\rho$, as can be seen from a three term Taylor series expansion. It follows from (4.4) that the points $S_0$ and $S_\infty$ lie (approximately) symmetrically with respect to $\rho$, so that $S_\infty = \rho - \nu$. The total number of people who will get sick during an epidemic is then equal to $S_0 - S_\infty = 2\nu$.
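The $2\nu$ estimate can be tested against the exact relation (4.4). A Python sketch of mine (the numerical values of $\rho$ and $\nu$ are arbitrary choices): it solves (4.4) for $S_\infty$ by bisection and compares $S_0 - S_\infty$ with $2\nu$.

```python
import math

rho, nu = 500.0, 5.0
S0 = rho + nu            # start slightly above the threshold

def g(S):
    # g(S_inf) = 0 restates equation (4.4)
    return S - rho * math.log(S) - (S0 - rho * math.log(S0))

# the second root of g lies below rho; g is decreasing on (0, rho),
# with g > 0 near 0 and g < 0 at rho, so bisection applies
lo, hi = 1.0, rho
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
S_inf = (lo + hi) / 2
print(S0 - S_inf)  # total number who get sick; about 2*nu = 10
```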
The solution curves of the system (4.1)
5.5 Lyapunov stability

We consider a nonlinear system of equations (the unknown functions are $x(t)$ and $y(t)$)

$$x' = f(x, y), \quad x(0) = \alpha \qquad (5.1)$$
$$y' = g(x, y), \quad y(0) = \beta,$$

with the given functions $f(x, y)$ and $g(x, y)$, and the given numbers $\alpha$ and $\beta$. A point $(x_0, y_0)$ is called a rest point if

$$f(x_0, y_0) = g(x_0, y_0) = 0.$$

If we solve the system (5.1) with the initial data $x(0) = x_0$ and $y(0) = y_0$, then $x(t) = x_0$ and $y(t) = y_0$, i.e., our system is at rest at all times. Now suppose the initial conditions are perturbed from $(x_0, y_0)$. Will our system come back to rest at $(x_0, y_0)$?

A differentiable function $L(x, y)$ is called a Lyapunov function at $(x_0, y_0)$ if

$$L(x_0, y_0) = 0,$$
$$L(x, y) > 0 \quad \text{for } (x, y) \neq (x_0, y_0) \text{ in some neighborhood of } (x_0, y_0).$$
How do the level lines

$$L(x, y) = c \qquad (5.2)$$

look? If $c = 0$, then (5.2) is satisfied only at $(x_0, y_0)$. If $c > 0$, the level lines are closed curves around $(x_0, y_0)$, and the smaller $c$ is, the closer the level line is to $(x_0, y_0)$.

Along a solution $(x(t), y(t))$ of our system (5.1), the Lyapunov function is a function of $t$: $L(x(t), y(t))$. Now assume that for all solutions $(x(t), y(t))$ starting near $(x_0, y_0)$, we have

$$\frac{d}{dt} L(x(t), y(t)) < 0 \quad \text{for all } t > 0. \qquad (5.3)$$

Then $(x(t), y(t)) \to (x_0, y_0)$, as $t \to \infty$, i.e., we have asymptotic stability of the rest point $(x_0, y_0)$. Using the chain rule, we rewrite (5.3) as

$$\frac{d}{dt} L(x(t), y(t)) = L_x x' + L_y y' = L_x f(x, y) + L_y g(x, y) < 0. \qquad (5.4)$$

One usually assumes that $(x_0, y_0) = (0, 0)$, which can always be accomplished by declaring the point $(x_0, y_0)$ to be the origin. Then $L(x, y) = a x^2 + b y^2$, with positive $a$ and $b$, is often a good choice of a Lyapunov function.
Example The system

$$x' = -2x + x y^2$$
$$y' = -y - 3 x^2 y$$

has a rest point $(0, 0)$. With $L(x, y) = a x^2 + b y^2$, (5.4) takes the form

$$\frac{d}{dt} L(x(t), y(t)) = 2 a x x' + 2 b y y' = 2 a x (-2x + x y^2) + 2 b y (-y - 3 x^2 y).$$

If we choose $a = 3$ and $b = 1$, this simplifies to

$$\frac{d}{dt} L(x(t), y(t)) = -12 x^2 - 2 y^2 < 0.$$

It follows that all solutions of this system tend to $(0, 0)$ as $t \to \infty$. One says that the domain of attraction of the rest point $(0, 0)$ is the entire $xy$-plane.
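The key point in the example is that the cross terms cancel for $a = 3$, $b = 1$. A quick Python sketch of mine (not from the text) confirms the simplification at many random points:

```python
import random

def field(x, y):
    # the system from the example
    return -2 * x + x * y**2, -y - 3 * x**2 * y

def dLdt(x, y):
    # L = 3x^2 + y^2 gives dL/dt = 6x x' + 2y y'
    fx, fy = field(x, y)
    return 6 * x * fx + 2 * y * fy

random.seed(1)
ok = True
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    # claimed simplification: dL/dt = -12x^2 - 2y^2
    ok = ok and abs(dLdt(x, y) - (-12 * x**2 - 2 * y**2)) < 1e-8
print(ok)
```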
Example The system

$$x' = -2x + y^4$$
$$y' = -y + x^5$$

has a rest point $(0, 0)$. If we drop the nonlinear terms, then the equations $x' = -2x$ and $y' = -y$ have all solutions tending to zero as $t \to \infty$. For small $|x|$ and $|y|$, the nonlinear terms are negligible. Therefore, we expect asymptotic stability. With $L(x, y) = x^2 + y^2$, we compute

$$\frac{d}{dt} L(x(t), y(t)) = 2 x x' + 2 y y' = 2 x (-2x + y^4) + 2 y (-y + x^5)$$
$$= -4 x^2 - 2 y^2 + 2 x y^4 + 2 x^5 y \le -2 (x^2 + y^2) + 2 x y^4 + 2 x^5 y < 0,$$

provided that $(x, y)$ belongs to a disc $B_\varepsilon : x^2 + y^2 < \varepsilon^2$, with a sufficiently small $\varepsilon$, as can be seen by using polar coordinates. Hence any solution, with the initial point $(x(0), y(0))$ in $B_\varepsilon$, tends to zero as $t \to \infty$. The rest point $(0, 0)$ is asymptotically stable. Its domain of attraction contains the disc $B_\varepsilon$.
5.6 Exponential of a matrix

In matrix notation a linear system, with a $2 \times 2$ or $3 \times 3$ matrix $A$,

$$x' = A x, \quad x(0) = x_0 \qquad (6.1)$$

looks like a single equation. In case $A$ and $x_0$ are constant, the solution of (6.1) is

$$x(t) = e^{At} x_0. \qquad (6.2)$$

In order to write the solution of our system in the form (6.2), we define the notion of the exponential of a matrix. First, we define powers of a matrix: $A^2 = A \cdot A$, $A^3 = A^2 \cdot A$, and so on. Starting with the Maclaurin series

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!},$$

we define ($I$ is the identity matrix)

$$e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \frac{A^4}{4!} + \cdots = \sum_{n=0}^{\infty} \frac{A^n}{n!}.$$

So that $e^A$ is the sum of infinitely many matrices, i.e., each entry of $e^A$ is an infinite series. It can be shown that all of these series are convergent for any matrix $A$.
Example Let $A = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$, where $a$ and $b$ are constants. Then

$$e^A = \begin{pmatrix} 1 + a + \frac{a^2}{2!} + \frac{a^3}{3!} + \cdots & 0 \\ 0 & 1 + b + \frac{b^2}{2!} + \frac{b^3}{3!} + \cdots \end{pmatrix} = \begin{pmatrix} e^a & 0 \\ 0 & e^b \end{pmatrix}.$$

Example Let $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. Then for any constant $t$

$$e^{At} = \begin{pmatrix} 1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \cdots & -t + \frac{t^3}{3!} - \frac{t^5}{5!} + \cdots \\ t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots & 1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \cdots \end{pmatrix} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}.$$
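The second example can be checked by summing the series directly. A Python sketch of mine (not part of the text): a truncated series for $e^{At}$ is compared with the rotation matrix.

```python
import math

def mat_mul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=30):
    # truncated series I + A + A^2/2! + ... + A^{terms-1}/(terms-1)!
    S = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        P = mat_mul(P, A)
        S = [[S[i][j] + P[i][j] / math.factorial(n) for j in range(2)]
             for i in range(2)]
    return S

t = 1.3
A = [[0.0, -t], [t, 0.0]]   # A t, with A = [[0,-1],[1,0]]
E = expm(A)
R = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
print(max(abs(E[i][j] - R[i][j]) for i in range(2) for j in range(2)))
```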
The last example lets us write the solution of the system

$$x_1' = -x_2, \quad x_1(0) = \alpha$$
$$x_2' = x_1, \quad x_2(0) = \beta$$

in the form

$$\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = e^{At} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix},$$

which is a rotation of the initial vector $\begin{pmatrix} \alpha \\ \beta \end{pmatrix}$ by an angle $t$, counterclockwise. We see that the integral curves of our system are circles in the $(x_1, x_2)$ plane. This makes sense, because the velocity vector $\begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} -x_2 \\ x_1 \end{pmatrix}$ is always perpendicular to the position vector $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$.
5.6.1 Problems

I.

1. (i) Show that the rest point $(0, 0)$ is asymptotically stable for the system

$$x_1' = -2 x_1 + x_2 + x_1 x_2$$
$$x_2' = x_1 - 2 x_2 + x_1^3.$$

(ii) Find the general solution of the corresponding linear system

$$x_1' = -2 x_1 + x_2$$
$$x_2' = x_1 - 2 x_2,$$

and discuss its behavior as $t \to \infty$.

2. The equation (for $y = y(t)$)

$$y' = y^2 (1 - y)$$

has rest points $y = 0$ and $y = 1$. Discuss their stability.

Answer: $y = 1$ is asymptotically stable, $y = 0$ is not.

3. (i) Convert the nonlinear equation

$$y'' + f(y) y' + y = 0$$

into a system, by letting $y = x_1$, and $y' = x_2$. Answer:

$$x_1' = x_2$$
$$x_2' = -x_1 - f(x_1) x_2.$$

(ii) Show that the rest point $(0, 0)$ of this system is asymptotically stable, provided that $f(x_1) > 0$ for all $x_1$. Hint: $L = x_1^2 + x_2^2$. What does this imply for the original equation?
II.

1. Let $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. Show that

$$e^{At} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.$$

2. Let $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$. Show that

$$e^{At} = \begin{pmatrix} 1 & t & \frac{1}{2} t^2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix}.$$

3. Let $A = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{pmatrix}$. Show that

$$e^{At} = \begin{pmatrix} \cos t & -\sin t & 0 \\ \sin t & \cos t & 0 \\ 0 & 0 & e^{2t} \end{pmatrix}.$$
Chapter 6

Fourier Series and Boundary Value Problems

6.1 Fourier series for functions of an arbitrary period
Recall that in Chapter 2 we considered Fourier series for functions of period $2\pi$. Suppose now that $f(x)$ has a period $2L$, where $L > 0$ is any number. Consider an auxiliary function $g(t) = f(\frac{L}{\pi} t)$. Then $g(t)$ has period $2\pi$, and we can represent it by the Fourier series

$$f\left(\frac{L}{\pi} t\right) = g(t) = a_0 + \sum_{n=1}^{\infty} (a_n \cos nt + b_n \sin nt),$$

with

$$a_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} g(t) \, dt = \frac{1}{2\pi} \int_{-\pi}^{\pi} f\left(\frac{L}{\pi} t\right) dt,$$
$$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} g(t) \cos nt \, dt = \frac{1}{\pi} \int_{-\pi}^{\pi} f\left(\frac{L}{\pi} t\right) \cos nt \, dt,$$
$$b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} g(t) \sin nt \, dt = \frac{1}{\pi} \int_{-\pi}^{\pi} f\left(\frac{L}{\pi} t\right) \sin nt \, dt.$$

We now set

$$x = \frac{L}{\pi} t, \quad \text{or} \quad t = \frac{\pi}{L} x.$$

Then

$$f(x) = a_0 + \sum_{n=1}^{\infty} \left( a_n \cos \frac{n\pi}{L} x + b_n \sin \frac{n\pi}{L} x \right), \qquad (1.1)$$

and making the change of variables $t \to x$, by $t = \frac{\pi}{L} x$ and $dt = \frac{\pi}{L} dx$, we express the coefficients as

$$a_0 = \frac{1}{2L} \int_{-L}^{L} f(x) \, dx,$$
$$a_n = \frac{1}{L} \int_{-L}^{L} f(x) \cos \frac{n\pi}{L} x \, dx,$$
$$b_n = \frac{1}{L} \int_{-L}^{L} f(x) \sin \frac{n\pi}{L} x \, dx.$$

The formula (1.1) gives us the desired Fourier series. Observe that we use the values of $f(x)$ only on the interval $(-L, L)$, when computing the coefficients.
Example Let $f(x)$ be a function of period 6, which on the interval $(-3, 3)$ is equal to $x$.

Here $L = 3$ and $f(x) = x$ on the interval $(-3, 3)$. We have

$$f(x) = a_0 + \sum_{n=1}^{\infty} \left( a_n \cos \frac{n\pi}{3} x + b_n \sin \frac{n\pi}{3} x \right).$$

The functions $x$ and $x \cos \frac{n\pi}{3} x$ are odd, and so

$$a_0 = \frac{1}{6} \int_{-3}^{3} x \, dx = 0, \quad a_n = \frac{1}{3} \int_{-3}^{3} x \cos \frac{n\pi}{3} x \, dx = 0.$$

The function $x \sin \frac{n\pi}{3} x$ is even, giving us

$$b_n = \frac{1}{3} \int_{-3}^{3} x \sin \frac{n\pi}{3} x \, dx = \frac{2}{3} \int_{0}^{3} x \sin \frac{n\pi}{3} x \, dx$$
$$= \frac{2}{3} \left[ -\frac{3}{n\pi} x \cos \frac{n\pi}{3} x + \frac{9}{n^2 \pi^2} \sin \frac{n\pi}{3} x \right]_0^3 = -\frac{6}{n\pi} \cos n\pi = \frac{6}{n\pi} (-1)^{n+1},$$

because $\cos n\pi = (-1)^n$. We conclude that

$$f(x) = \sum_{n=1}^{\infty} \frac{6}{n\pi} (-1)^{n+1} \sin \frac{n\pi}{3} x.$$

Restricting to the interval $(-3, 3)$, we have

$$x = \sum_{n=1}^{\infty} \frac{6}{n\pi} (-1)^{n+1} \sin \frac{n\pi}{3} x, \quad \text{for } -3 < x < 3.$$

Outside of the interval $(-3, 3)$ this Fourier series converges not to $x$, but to the periodic extension of $x$, i.e., to the function $f(x)$ we started with.

We see that it is enough to know $f(x)$ on the interval $(-L, L)$, in order to compute its Fourier coefficients. If $f(x)$ is defined only on $(-L, L)$, we can still represent it by its Fourier series (1.1).
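The integration by parts above can be double-checked by numerical quadrature. A Python sketch of my own (not from the text), comparing a composite Simpson's rule against the closed-form $b_n$:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h)
                          for k in range(1, n))
    return s * h / 3

for n in range(1, 6):
    num = (2 / 3) * simpson(lambda x: x * math.sin(n * math.pi * x / 3), 0, 3)
    exact = 6 / (n * math.pi) * (-1) ** (n + 1)
    print(n, abs(num - exact) < 1e-8)
```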
6.1.1 Even and odd functions

Our computations in the preceding example were aided by the nice properties of even and odd functions, which we review next.

Recall that a function $f(x)$ is called even if

$$f(-x) = f(x) \quad \text{for all } x.$$

Examples include $\cos x$, $x^2$, $x^4$, and in general $x^{2n}$, for any even power $2n$. The graph of an even function is symmetric with respect to the $y$ axis. It follows that

$$\int_{-L}^{L} f(x) \, dx = 2 \int_{0}^{L} f(x) \, dx$$

for any even function $f(x)$, and any constant $L$. A function $f(x)$ is called odd if

$$f(-x) = -f(x) \quad \text{for all } x.$$

Examples include $\sin x$, $\tan x$, $x$, $x^3$, and in general $x^{2n+1}$, for any odd power $2n + 1$. (We see that the even functions "eat" the minus, while the odd ones pass it through.) The graph of an odd function is symmetric with respect to the origin. It follows that

$$\int_{-L}^{L} f(x) \, dx = 0$$

for any odd function $f(x)$, and any constant $L$. Products of even and odd functions are either even or odd:

$$\text{even} \cdot \text{even} = \text{even}, \quad \text{even} \cdot \text{odd} = \text{odd}, \quad \text{odd} \cdot \text{odd} = \text{even}.$$
If $f(x)$ is even, then $b_n = 0$ for all $n$ (as integrals of odd functions), and so the Fourier series (1.1) becomes

$$f(x) = a_0 + \sum_{n=1}^{\infty} a_n \cos \frac{n\pi}{L} x,$$

with

$$a_0 = \frac{1}{L} \int_0^L f(x) \, dx, \quad a_n = \frac{2}{L} \int_0^L f(x) \cos \frac{n\pi}{L} x \, dx.$$

If $f(x)$ is odd, then $a_0 = 0$ and $a_n = 0$ for all $n$, and the Fourier series (1.1) becomes

$$f(x) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi}{L} x,$$

with

$$b_n = \frac{2}{L} \int_0^L f(x) \sin \frac{n\pi}{L} x \, dx.$$
Example Let $f(x)$ be a function of period 2, which on the interval $(-1, 1)$ is equal to $|x|$.

Here $L = 1$ and $f(x) = |x|$ on the interval $(-1, 1)$. The function $f(x)$ is even, so that

$$f(x) = a_0 + \sum_{n=1}^{\infty} a_n \cos n\pi x.$$

Observing that $|x| = x$ on the interval $(0, 1)$, we compute the coefficients

$$a_0 = \int_0^1 x \, dx = \frac{1}{2},$$
$$a_n = 2 \int_0^1 x \cos n\pi x \, dx = 2 \left[ \frac{x \sin n\pi x}{n\pi} + \frac{\cos n\pi x}{n^2 \pi^2} \right]_0^1 = \frac{2 (-1)^n - 2}{n^2 \pi^2}.$$

We have

$$|x| = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{2 (-1)^n - 2}{n^2 \pi^2} \cos n\pi x, \quad \text{for } -1 < x < 1.$$

Outside of the interval $(-1, 1)$, this Fourier series converges to the periodic extension of $|x|$, i.e., to the function $f(x)$.
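Since the coefficients decay like $1/n^2$, partial sums of this series converge quickly. A Python sketch of mine (the evaluation point $x = 0.3$ and truncation level are arbitrary choices):

```python
import math

def partial_sum(x, N):
    # partial sum of 1/2 + sum a_n cos(n pi x), with a_n from the example
    s = 0.5
    for n in range(1, N + 1):
        a_n = (2 * (-1) ** n - 2) / (n ** 2 * math.pi ** 2)
        s += a_n * math.cos(n * math.pi * x)
    return s

x = 0.3
print(abs(partial_sum(x, 2000) - abs(x)))  # should be small
```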
The periodic extension of $|x|$ as a function of period 2
6.1.2 Further examples and the convergence theorem

Even and odd functions are very special. A "general" function is neither even nor odd.

Example On the interval $(-2, 2)$ represent the function

$$f(x) = \begin{cases} 1 & \text{for } -2 < x \le 0 \\ x & \text{for } 0 < x < 2 \end{cases}$$

by its Fourier series. This function is neither even nor odd (and it is also not continuous). Here $L = 2$, and the Fourier series is

$$f(x) = a_0 + \sum_{n=1}^{\infty} \left( a_n \cos \frac{n\pi}{2} x + b_n \sin \frac{n\pi}{2} x \right).$$

Compute

$$a_0 = \frac{1}{4} \int_{-2}^{0} 1 \, dx + \frac{1}{4} \int_{0}^{2} x \, dx = 1,$$

where we broke the interval of integration into two pieces, according to the definition of $f(x)$. Similarly

$$a_n = \frac{1}{2} \int_{-2}^{0} \cos \frac{n\pi}{2} x \, dx + \frac{1}{2} \int_{0}^{2} x \cos \frac{n\pi}{2} x \, dx = -\frac{2 \left( 1 - (-1)^n \right)}{n^2 \pi^2},$$
$$b_n = \frac{1}{2} \int_{-2}^{0} \sin \frac{n\pi}{2} x \, dx + \frac{1}{2} \int_{0}^{2} x \sin \frac{n\pi}{2} x \, dx = -\frac{1 + (-1)^n}{n\pi}.$$

On the interval $(-2, 2)$ we have

$$f(x) = 1 - \sum_{n=1}^{\infty} \left[ \frac{2 \left( 1 - (-1)^n \right)}{n^2 \pi^2} \cos \frac{n\pi}{2} x + \frac{1 + (-1)^n}{n\pi} \sin \frac{n\pi}{2} x \right].$$

The quantity $1 - (-1)^n$ is equal to zero if $n$ is even, and to 2 if $n$ is odd, while $1 + (-1)^n$ is equal to zero if $n$ is odd, and to 2 if $n$ is even. Writing the odd $n$ as $n = 2k - 1$ and the even $n$ as $n = 2k$, with $k = 1, 2, 3, \ldots$, we can then rewrite the Fourier series as

$$f(x) = 1 - \sum_{k=1}^{\infty} \left[ \frac{4}{(2k-1)^2 \pi^2} \cos \frac{(2k-1)\pi}{2} x + \frac{1}{k\pi} \sin k\pi x \right], \quad \text{for } -2 < x < 2.$$

Outside of the interval $(-2, 2)$, this series converges to the extension of $f(x)$ as a function of period 4.
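The piecewise integrals in this example are easy to get wrong by a sign, so a numerical cross-check is reassuring. A Python sketch of mine (not from the text), computing $a_n$ and $b_n$ by Simpson's rule on each smooth piece:

```python
import math

def simpson(g, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h)
                          for k in range(1, n))
    return s * h / 3

for n in range(1, 5):
    # f = 1 on (-2, 0), f = x on (0, 2); integrate each piece separately
    an = (simpson(lambda x: math.cos(n * math.pi * x / 2), -2, 0)
          + simpson(lambda x: x * math.cos(n * math.pi * x / 2), 0, 2)) / 2
    bn = (simpson(lambda x: math.sin(n * math.pi * x / 2), -2, 0)
          + simpson(lambda x: x * math.sin(n * math.pi * x / 2), 0, 2)) / 2
    print(n,
          abs(an - (-2 * (1 - (-1) ** n) / (n * math.pi) ** 2)) < 1e-8,
          abs(bn - (-(1 + (-1) ** n) / (n * math.pi))) < 1e-8)
```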
Example On the interval $(-\pi,\pi)$ find the Fourier series of $f(x)=2\sin x+\sin^2x$.

Here $L=\pi$, and the Fourier series takes the form
$$f(x)=a_0+\sum_{n=1}^{\infty}\left(a_n\cos nx+b_n\sin nx\right).$$
Let us spell out several terms of this series:
$$f(x)=a_0+a_1\cos x+a_2\cos 2x+a_3\cos 3x+\cdots+b_1\sin x+b_2\sin 2x+\cdots\,.$$
Using the trig formula $\sin^2x=\frac12-\frac12\cos 2x$, we write
$$f(x)=\frac12-\frac12\cos 2x+2\sin x\,.$$
This is the desired Fourier series! Here $a_0=\frac12$, $a_2=-\frac12$, $b_1=2$, and all other coefficients are zero. We see that this function is its own Fourier series.
Example On the interval $(-2\pi,2\pi)$ find the Fourier series of $f(x)=2\sin x+\sin^2x$.

This time $L=2\pi$, and the Fourier series has the form
$$f(x)=a_0+\sum_{n=1}^{\infty}\left(a_n\cos\frac{n}{2}x+b_n\sin\frac{n}{2}x\right).$$
As before, we rewrite $f(x)$:
$$f(x)=\frac12-\frac12\cos 2x+2\sin x\,.$$
And again this is the desired Fourier series! This time $a_0=\frac12$, $a_4=-\frac12$, $b_2=2$, and all other coefficients are zero.
To discuss the convergence properties of Fourier series, we need the concept of piecewise smooth functions. These are functions that are continuous and differentiable, except for discontinuities at some isolated points. In case a discontinuity happens at some point $x_0$, we assume that the limit from the left $f(x_0-)$ exists, as well as the limit from the right $f(x_0+)$. (I.e., at a point of discontinuity either $f(x_0)$ is not defined, or $f(x_0-)\ne f(x_0+)$.)
Theorem 1 Let $f(x)$ be a piecewise smooth function of period $2L$. Then its Fourier series
$$a_0+\sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{L}x+b_n\sin\frac{n\pi}{L}x\right)$$
converges to $f(x)$ at any point $x$ where $f(x)$ is continuous. If $f(x)$ has a jump at $x$, the Fourier series converges to
$$\frac{f(x-)+f(x+)}{2}\,.$$
We see that at jump points the Fourier series "tries to be fair", and it converges to the average of the limits from the left and from the right.

Let now $f(x)$ be defined on $[-L,L]$, and let us extend it as a function of period $2L$. Unless it so happens that $f(-L)=f(L)$, the new function will have jumps at $x=-L$ and $x=L$. Then the previous theorem implies the next one.
Theorem 2 Let $f(x)$ be a piecewise smooth function on $[-L,L]$, and let $x$ be a point inside $(-L,L)$. Then its Fourier series
$$a_0+\sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{L}x+b_n\sin\frac{n\pi}{L}x\right)$$
converges to $f(x)$ at any point $x$ where $f(x)$ is continuous. If $f(x)$ has a discontinuity at $x$, the Fourier series converges to
$$\frac{f(x-)+f(x+)}{2}\,.$$
At both end points $x=-L$ and $x=L$, the Fourier series converges to
$$\frac{f(-L+)+f(L-)}{2}\,.$$
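As a concrete illustration (our own, not part of the text), take the example above on $(-2,2)$, where the left and right limits at the jump $x=0$ are $1$ and $0$; the partial sums at $x=0$ should approach the average $\frac12$. The coefficient formulas below are the ones computed in that example.

```python
import numpy as np

# Partial sum of the Fourier series of f(x) = 1 on (-2,0], f(x) = x on (0,2),
# evaluated at the jump point x = 0.
def S(x, N):
    n = np.arange(1, N + 1)
    a = 2 * ((-1.0) ** n - 1) / (n ** 2 * np.pi ** 2)
    b = -(1 + (-1.0) ** n) / (n * np.pi)
    return 1 + np.sum(a * np.cos(n * np.pi * x / 2) + b * np.sin(n * np.pi * x / 2))

val = S(0.0, 4000)
```

The value `val` is within a fraction of a percent of $1/2$, exactly as Theorem 2 predicts.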
6.2 Fourier cosine and Fourier sine series
Suppose a function $f(x)$ is defined on the interval $(0,L)$. How do we represent $f(x)$ by a Fourier series? We can compute Fourier series for functions defined on $(-L,L)$, but $f(x)$ lives only on $(0,L)$. We can extend $f(x)$ as an arbitrary function on $(-L,0)$ (i.e., draw randomly any graph on $(-L,0)$). This gives us a function defined on $(-L,L)$, which we represent by its Fourier series, and we use this series only on the interval $(0,L)$, where $f(x)$ lives. Thus there are infinitely many ways to represent $f(x)$ by a Fourier series on the interval $(0,L)$. However, two of these Fourier series stand out: the ones for which the extension produces either an even or an odd function.
Let $f(x)$ be defined on the interval $(0,L)$. We define its even extension to the interval $(-L,L)$ as follows:
$$f_e(x)=\begin{cases} f(x) & \mbox{for } 0<x<L\\ f(-x) & \mbox{for } -L<x<0\end{cases}$$
I.e., we reflect the graph of $f(x)$ with respect to the $y$ axis. The function $f_e(x)$ is even on $(-L,L)$, and as we saw before, its Fourier series has the form
$$f_e(x)=a_0+\sum_{n=1}^{\infty}a_n\cos\frac{n\pi}{L}x\,,$$
with the coefficients
$$a_0=\frac{1}{L}\int_0^Lf_e(x)\,dx=\frac{1}{L}\int_0^Lf(x)\,dx\,,$$
$$a_n=\frac{2}{L}\int_0^Lf_e(x)\cos\frac{n\pi}{L}x\,dx=\frac{2}{L}\int_0^Lf(x)\cos\frac{n\pi}{L}x\,dx\,,$$
because on the interval of integration $(0,L)$, $f_e(x)=f(x)$. We now restrict the series to the interval $(0,L)$, obtaining
$$f(x)=a_0+\sum_{n=1}^{\infty}a_n\cos\frac{n\pi}{L}x\,,\quad\mbox{for } 0<x<L\,, \qquad (2.1)$$
with
$$a_0=\frac{1}{L}\int_0^Lf(x)\,dx\,, \qquad (2.2)$$
$$a_n=\frac{2}{L}\int_0^Lf(x)\cos\frac{n\pi}{L}x\,dx\,. \qquad (2.3)$$
The series (2.1), with the coefficients computed using the formulas (2.2) and (2.3), is called the Fourier cosine series.

Where is $f_e(x)$ now? It disappeared. We used it as an artifact of construction, like scaffolding.
Example On the interval $(0,3)$ find the Fourier cosine series of $f(x)=x+2$.

Compute
$$a_0=\frac13\int_0^3(x+2)\,dx=\frac72\,,$$
$$a_n=\frac23\int_0^3(x+2)\cos\frac{n\pi}{3}x\,dx=\frac{6\left((-1)^n-1\right)}{n^2\pi^2}\,.$$
Then
$$x+2=\frac72+\sum_{n=1}^{\infty}\frac{6\left((-1)^n-1\right)}{n^2\pi^2}\cos\frac{n\pi}{3}x=\frac72-12\sum_{k=1}^{\infty}\frac{1}{(2k-1)^2\pi^2}\cos\frac{(2k-1)\pi}{3}x\,.$$
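A numerical spot-check of these cosine coefficients (our own sketch, using formulas (2.2) and (2.3) with the trapezoid rule):

```python
import numpy as np

# Fourier cosine coefficients of f(x) = x + 2 on (0, 3).
x = np.linspace(0, 3, 300001)
f = x + 2

def integrate(y):
    return np.sum((y[1:] + y[:-1]) / 2) * (x[1] - x[0])

a0 = integrate(f) / 3
a = [2 / 3 * integrate(f * np.cos(n * np.pi * x / 3)) for n in (1, 2, 3)]
```

The results match $a_0=\frac72$ and $a_n=6\left((-1)^n-1\right)/(n^2\pi^2)$, including $a_2=0$.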
Consider again $f(x)$, defined on the interval $(0,L)$. We now define its odd extension to the interval $(-L,L)$ as follows:
$$f_o(x)=\begin{cases} f(x) & \mbox{for } 0<x<L\\ -f(-x) & \mbox{for } -L<x<0\end{cases}$$
I.e., we reflect the graph of $f(x)$ with respect to the origin. If $f(0)\ne 0$, this extension is discontinuous at $x=0$. The function $f_o(x)$ is odd on $(-L,L)$, and as we saw before, its Fourier series has only the sine terms
$$f_o(x)=\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{L}x\,,$$
with the coefficients
$$b_n=\frac{2}{L}\int_0^Lf_o(x)\sin\frac{n\pi}{L}x\,dx=\frac{2}{L}\int_0^Lf(x)\sin\frac{n\pi}{L}x\,dx\,,$$
because on the interval of integration $(0,L)$, $f_o(x)=f(x)$. We again restrict the series to the interval $(0,L)$, obtaining
$$f(x)=\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{L}x\,,\quad\mbox{for } 0<x<L\,, \qquad (2.4)$$
with
$$b_n=\frac{2}{L}\int_0^Lf(x)\sin\frac{n\pi}{L}x\,dx\,. \qquad (2.5)$$
The series (2.4), with the coefficients computed using the formula (2.5), is called the Fourier sine series.
Example On the interval $(0,3)$ find the Fourier sine series of $f(x)=x+2$.

Compute
$$b_n=\frac23\int_0^3(x+2)\sin\frac{n\pi}{3}x\,dx=\frac{4-10(-1)^n}{n\pi}\,.$$
We conclude
$$x+2=\sum_{n=1}^{\infty}\frac{4-10(-1)^n}{n\pi}\sin\frac{n\pi}{3}x\,,\quad\mbox{for } 0<x<3\,.$$
Clearly this series does not converge to $f(x)$ at the end points $x=0$ and $x=3$ of our interval $(0,3)$. But inside $(0,3)$ we do have convergence.

Fourier sine and cosine series were developed by using Fourier series on $(-L,L)$. It follows that inside $(0,L)$ both of these series converge to $f(x)$ at points of continuity, and to $\frac{f(x-)+f(x+)}{2}$ if $f(x)$ is discontinuous at $x$. At both end points $x=0$ and $x=L$, the Fourier sine series converges to $0$, while the Fourier cosine series converges to $f(0+)$ and $f(L-)$, respectively.
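The endpoint behavior is easy to see numerically (a sketch of our own): the sine series above vanishes identically at $x=0$ and $x=3$, while at an interior point it converges to $f$, albeit slowly, since the coefficients only decay like $1/n$.

```python
import numpy as np

# Partial sums of sum (4 - 10(-1)^n)/(n pi) sin(n pi x / 3),
# the Fourier sine series of x + 2 on (0, 3).
n = np.arange(1, 20001)
b = (4 - 10 * (-1.0) ** n) / (n * np.pi)

def S(x):
    return np.sum(b * np.sin(n * np.pi * x / 3))
```

Here `S(0.0)` is exactly $0$ (not $f(0)=2$), while `S(1.5)` is close to $f(1.5)=3.5$.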
6.3 Two point boundary value problems
We shall need to find solutions $y=y(x)$ of the problem
$$y''+\lambda y=0\,,\quad 0<x<L\,,\qquad y(0)=y(L)=0 \qquad (3.1)$$
on some interval $(0,L)$. Here $\lambda$ is some number. Unlike initial value problems, where we prescribe the values of the solution and its derivative at one point, here we prescribe that the solution vanishes at $x=0$ and at $x=L$, which are the end-points (the boundary points) of the interval $(0,L)$. Of course, $y(x)=0$ is a solution of our problem, which is called a trivial solution. We wish to find non-trivial solutions. What are the values of the parameter $\lambda$ for which non-trivial solutions are possible?

The form of the general solution depends on whether $\lambda$ is positive, negative, or zero, i.e., we have three cases to consider.
Case 1. $\lambda<0$. We may write $\lambda=-\omega^2$, with some $\omega>0$ ($\omega=\sqrt{-\lambda}$), and the equation takes the form
$$y''-\omega^2y=0\,.$$
Its general solution is $y=c_1e^{\omega x}+c_2e^{-\omega x}$. The boundary conditions
$$y(0)=c_1+c_2=0\,,\qquad y(L)=e^{\omega L}c_1+e^{-\omega L}c_2=0$$
give us two equations to determine $c_1$ and $c_2$. From the first equation $c_2=-c_1$, and then from the second equation $c_1=0$. So that $c_1=c_2=0$, i.e., the only solution is $y=0$, the trivial solution.
Case 2. $\lambda=0$. The equation takes the form
$$y''=0\,.$$
Its general solution is $y=c_1+c_2x$. The boundary conditions
$$y(0)=c_1=0\,,\qquad y(L)=c_1+c_2L=0$$
give us $c_1=c_2=0$. We struck out again in our search for a non-trivial solution.
Case 3. $\lambda>0$. We may write $\lambda=\omega^2$, with some $\omega>0$ ($\omega=\sqrt{\lambda}$), and the equation takes the form
$$y''+\omega^2y=0\,.$$
Its general solution is $y=c_1\cos\omega x+c_2\sin\omega x$. The first boundary condition
$$y(0)=c_1=0$$
tells us that $c_1=0$. We update the general solution: $y=c_2\sin\omega x$. The second boundary condition gives
$$y(L)=c_2\sin\omega L=0\,.$$
One possibility for this product to be zero is $c_2=0$. This leads again to the trivial solution. What saves us is that $\sin\omega L=0$ for some "lucky" $\omega$'s, namely when $\omega L=n\pi$, or $\omega_n=\frac{n\pi}{L}$, and $\lambda_n=\omega_n^2=\frac{n^2\pi^2}{L^2}$, $n=1,2,3,\ldots$. The corresponding solutions are $c_2\sin\frac{n\pi}{L}x$, or we can simply write $\sin\frac{n\pi}{L}x$, because a constant multiple of a solution is also a solution.

To recapitulate, non-trivial solutions occur for infinitely many $\lambda$'s, $\lambda_n=\frac{n^2\pi^2}{L^2}$, called the eigenvalues; the corresponding solutions $\sin\frac{n\pi}{L}x$ are called the eigenfunctions.
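These eigenvalues can also be seen numerically (our own illustration, not from the text): replacing $y''$ by second differences on a uniform grid turns (3.1) into a matrix eigenvalue problem, whose smallest eigenvalues approximate $\frac{n^2\pi^2}{L^2}$.

```python
import numpy as np

# Dirichlet problem y'' + lambda y = 0, y(0) = y(L) = 0, with L = 1,
# discretized at m interior points with spacing h.
L, m = 1.0, 500
h = L / (m + 1)
D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
      + np.diag(np.ones(m - 1), -1)) / h ** 2   # second-difference matrix
lam = np.sort(np.linalg.eigvalsh(-D2))          # eigenvalues of -y''
```

The first few entries of `lam` are close to $\pi^2$, $4\pi^2$, $9\pi^2$, with an $O(h^2)$ discretization error.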
Next, we search for non-trivial solutions of the problem
$$y''+\lambda y=0\,,\quad 0<x<L\,,\qquad y'(0)=y'(L)=0\,,$$
in which the boundary conditions are different. As before, we see that in case $\lambda<0$ there are no non-trivial solutions. The case $\lambda=0$ turns out to be different: any non-zero constant is a non-trivial solution. So that $\lambda_0=0$ is an eigenvalue, and $y_0=1$ the corresponding eigenfunction. In case $\lambda>0$ we get, as before, infinitely many eigenvalues $\lambda_n=\frac{n^2\pi^2}{L^2}$, with the corresponding eigenfunctions $y_n=\cos\frac{n\pi}{L}x$, $n=1,2,3,\ldots$.
6.3.1 Problems
I. 1. Is the integral $\int_{-1}^{3/2}\tan^{15}x\,dx$ positive or negative?

Hint: Consider first $\int_{-1}^{1}\tan^{15}x\,dx$. Ans. Positive.

2. Show that any function can be written as a sum of an even function and an odd function.

Hint: $f(x)=\frac{f(x)+f(-x)}{2}+\frac{f(x)-f(-x)}{2}$.
3. Let $g(x)=x$ on the interval $(0,3)$.

(i) Find the even extension of $g(x)$. Ans. $g_e(x)=|x|$, defined on $(-3,3)$.

(ii) Find the odd extension of $g(x)$. Ans. $g_o(x)=x$, defined on $(-3,3)$.

4. Let $h(x)=x^3$ on the interval $(0,5)$. Find its even and odd extensions, and state the interval on which they are defined.

5. Let $f(x)=x^2$ on the interval $(0,1)$. Find its even and odd extensions, and state the interval on which they are defined. Ans. $f_o(x)=x|x|$, defined on $(-1,1)$.
6. Assume that $f(x)$ has period $2\pi$. Show that the function $\int_0^xf(t)\,dt$ is also $2\pi$-periodic, if and only if $\int_0^{2\pi}f(t)\,dt=0$.

7. Assume that $f(x)$ has period $T$. Show that for any constant $a$
$$\int_a^{T+a}f'(x)e^{f(x)}\,dx=0\,.$$
II. Find the Fourier series of the given function over the indicated interval.

1. $f(x)=\sin x\cos x+\cos^22x$ on $(-\pi,\pi)$.

Ans. $f(x)=\frac12+\frac12\cos 4x+\frac12\sin 2x$.
2. $f(x)=\sin x\cos x+\cos^22x$ on $(-2\pi,2\pi)$.

Ans. $f(x)=\frac12+\frac12\cos 4x+\frac12\sin 2x$.

3. $f(x)=\sin x\cos x+\cos^22x$ on $(-\pi/2,\pi/2)$.

Ans. $f(x)=\frac12+\frac12\cos 4x+\frac12\sin 2x$.
4. $f(x)=x+x^2$ on $(-\pi,\pi)$.

Ans. $f(x)=\frac{\pi^2}{3}+\sum_{n=1}^{\infty}\left(\frac{4(-1)^n}{n^2}\cos nx+\frac{2(-1)^{n+1}}{n}\sin nx\right)$.
5. (i) $f(x)=\begin{cases} 1 & \mbox{for } 0<x<\pi\\ -1 & \mbox{for } -\pi<x<0\end{cases}$ on $(-\pi,\pi)$.

Ans. $f(x)=\frac{4}{\pi}\left(\sin x+\frac13\sin 3x+\frac15\sin 5x+\frac17\sin 7x+\cdots\right)$.

(ii) Set $x=\frac{\pi}{2}$ in the last series, to conclude that
$$\frac{\pi}{4}=1-\frac13+\frac15-\frac17+\cdots\,.$$
6. $f(x)=1-|x|$ on $(-2,2)$.

Ans. $f(x)=\sum_{n=1}^{\infty}\frac{4}{n^2\pi^2}\left(1-(-1)^n\right)\cos\frac{n\pi}{2}x$.

7. $f(x)=x|x|$ on $(-1,1)$.

Ans. $f(x)=-\sum_{n=1}^{\infty}\frac{2\left(n^2\pi^2-2\right)(-1)^n+4}{n^3\pi^3}\sin n\pi x$.
8. Let $f(x)=\begin{cases} 1 & \mbox{for } -1<x<0\\ 0 & \mbox{for } 0<x<1\end{cases}$ on $(-1,1)$. Sketch the graphs of $f(x)$ and of its Fourier series. Then calculate the Fourier series of $f(x)$ on $(-1,1)$.

Ans. $\frac12-\sum_{k=1}^{\infty}\frac{2}{(2k-1)\pi}\sin(2k-1)\pi x$.

9. Let $f(x)=\begin{cases} x & \mbox{for } -2<x<0\\ 1 & \mbox{for } 0<x<2\end{cases}$ on $(-2,2)$. Sketch the graphs of $f(x)$ and of its Fourier series. Then calculate the Fourier series of $f(x)$ on $(-2,2)$.
III. Find the Fourier cosine series of the given function over the indicated interval.

1. $f(x)=\cos 3x-\sin^23x$ on $(0,\pi)$.

Ans. $-\frac12+\cos 3x+\frac12\cos 6x$.

2. $f(x)=\cos 3x-\sin^23x$ on $(0,\frac{\pi}{3})$.

Ans. $-\frac12+\cos 3x+\frac12\cos 6x$.

3. $f(x)=x$ on $(0,2)$.

Ans. $1+\sum_{n=1}^{\infty}\frac{4\left((-1)^n-1\right)}{n^2\pi^2}\cos\frac{n\pi}{2}x$.

4. $f(x)=\sin x$ on $(0,2)$.

Hint: $\sin ax\cos bx=\frac12\sin(a-b)x+\frac12\sin(a+b)x$.

Ans. $\frac12\left(1-\cos 2\right)+\sum_{n=1}^{\infty}\frac{4\left((-1)^n\cos 2-1\right)}{n^2\pi^2-4}\cos\frac{n\pi}{2}x$.

5. $f(x)=\sin^4x$ on $(0,\frac{\pi}{2})$.

Ans. $\frac38-\frac12\cos 2x+\frac18\cos 4x$.
IV. Find the Fourier sine series of the given function over the indicated interval.

1. $f(x)=5\sin x\cos x$ on $(0,\pi)$.

Ans. $\frac52\sin 2x$.

2. $f(x)=1$ on $(0,3)$.

Ans. $\sum_{n=1}^{\infty}\frac{2}{n\pi}\left(1-(-1)^n\right)\sin\frac{n\pi}{3}x$.

3. $f(x)=x$ on $(0,2)$.

Ans. $\sum_{n=1}^{\infty}\frac{4}{n\pi}(-1)^{n+1}\sin\frac{n\pi}{2}x$.

4. $f(x)=\sin x$ on $(0,2)$.

Hint: $\sin ax\sin bx=\frac12\cos(a-b)x-\frac12\cos(a+b)x$.

Ans. $\sum_{n=1}^{\infty}\frac{2\pi n(-1)^{n+1}\sin 2}{n^2\pi^2-4}\sin\frac{n\pi}{2}x$.

5. $f(x)=\sin^3x$ on $(0,\pi)$.

Ans. $\frac34\sin x-\frac14\sin 3x$.

6. $f(x)=\begin{cases} x & \mbox{for } 0<x<\frac{\pi}{2}\\ \pi-x & \mbox{for } \frac{\pi}{2}<x<\pi\end{cases}$ on $(0,\pi)$.

Ans. $\sum_{n=1}^{\infty}\frac{4}{n^2\pi}\sin\frac{n\pi}{2}\sin nx$.

7. $f(x)=x-1$ on $(0,3)$.

Ans. $-\sum_{n=1}^{\infty}\frac{2+4(-1)^n}{n\pi}\sin\frac{n\pi}{3}x$.
V.

1. Find the eigenvalues and the eigenfunctions of
$$y''+\lambda y=0\,,\quad 0<x<L\,,\qquad y'(0)=y(L)=0\,.$$
Ans. $\lambda_n=\frac{\pi^2\left(n+\frac12\right)^2}{L^2}$, $y_n=\cos\frac{\left(n+\frac12\right)\pi}{L}x$, $n=0,1,2,\ldots$.

2. Find the eigenvalues and the eigenfunctions of
$$y''+\lambda y=0\,,\quad 0<x<L\,,\qquad y(0)=y'(L)=0\,.$$
3. Find the eigenvalues and the eigenfunctions of
$$y''+\lambda y=0\,,\qquad y(-\pi)=y(\pi)\,,\quad y'(-\pi)=y'(\pi)\,.$$
Ans. $\lambda_0=0$ with $y_0=1$, and $\lambda_n=n^2$ with $y_n=a_n\cos nx+b_n\sin nx$.

4*. Show that the fourth order problem (with $a>0$)
$$y''''-a^4y=0\,,\quad 0<x<1\,,\qquad y(0)=y'(0)=y(1)=y'(1)=0$$
has non-trivial solutions (eigenfunctions) if and only if $a$ satisfies
$$\cos a=\frac{1}{\cosh a}\,.$$
Show graphically that there are infinitely many such $a$'s, and calculate the corresponding eigenfunctions.

Hint: The general solution is $y(x)=c_1\cos ax+c_2\sin ax+c_3\cosh ax+c_4\sinh ax$. From the boundary conditions obtain two equations for $c_3$ and $c_4$.
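For problem 4*, the roots of $\cos a=1/\cosh a$ can also be located numerically (a sketch of our own; since $1/\cosh a$ is tiny for large $a$, the positive roots sit close to the points $(2k+1)\frac{\pi}{2}$, which we use as bracketing centers):

```python
import numpy as np

def g(a):
    return np.cos(a) - 1.0 / np.cosh(a)

def bisect(lo, hi, tol=1e-12):
    # standard bisection; assumes g changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

roots = [bisect((2 * k + 1) * np.pi / 2 - 0.5, (2 * k + 1) * np.pi / 2 + 0.5)
         for k in (1, 2, 3)]
```

The first root is $a\approx 4.7300$; at each computed root $\cos a\cosh a=1$ holds, which is the same condition in product form.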
6.4 Heat equation and separation of variables
Suppose we have a rod of length $L$, which is so thin that we may assume it to be one dimensional, extending along the $x$-axis, for $0\le x\le L$. Assume that the surface of the rod is insulated, so that heat can travel only to the left or to the right along the rod. We wish to determine the temperature $u=u(x,t)$ at any point $x$ of the rod, and at any time $t$. Consider an element of the rod of length $\Delta x$, extending over the sub-interval $(x,x+\Delta x)$. The amount of heat (in calories) that this element holds is
$$c\,u(x,t)\Delta x\,.$$
Indeed, the amount of heat must be proportional to the temperature $u=u(x,t)$ and to the length $\Delta x$, and there must be a physical constant $c>0$, which reflects the rod's ability to store heat (and $c$ also makes the physical units right, so that the product is in calories). The rate of change of heat is (here $u_t(x,t)$ is the partial derivative in $t$)
$$c\,u_t(x,t)\Delta x\,.$$
On the other hand, the change in heat occurs because of the heat flow at the end points of the interval $(x,x+\Delta x)$. Because this interval is small, the function $u(x,t)$ is likely to be monotone over this interval, so let us assume that $u(x,t)$ is increasing in $x$ over $(x,x+\Delta x)$ (think of $t$ as fixed). At the right end point $x+\Delta x$, heat flows into our element, because to the right of this point the temperatures are higher. The heat flow per unit time is
$$c_1u_x(x+\Delta x,t)\,,$$
i.e., it is proportional to how steeply the temperatures increase ($c_1>0$ is another physical constant). At the left end $x$, we lose
$$c_1u_x(x,t)$$
of heat per unit time. The balance equation is
$$c\,u_t(x,t)\Delta x=c_1u_x(x+\Delta x,t)-c_1u_x(x,t)\,.$$
We now divide by $c\,\Delta x$, and call $\frac{c_1}{c}=k$:
$$u_t(x,t)=k\,\frac{u_x(x+\Delta x,t)-u_x(x,t)}{\Delta x}\,.$$
And finally, we let $\Delta x\to 0$, obtaining the heat equation
$$u_t=ku_{xx}\,.$$
This is a partial differential equation, or PDE for short.

Suppose now that initially, i.e., at time $t=0$, the temperatures of the rod are given by a function $f(x)$, while the temperatures at the end points $x=0$ and $x=L$ are kept at $0$ degrees Celsius at all times $t$ (one can think that the end points are kept on ice). To determine the temperature $u(x,t)$ at all points $x$, and all times $t$, we need to solve
$$u_t=ku_{xx}\quad\mbox{for } 0<x<L\,,\mbox{ and } t>0 \qquad (4.1)$$
$$u(x,0)=f(x)\quad\mbox{for } 0<x<L$$
$$u(0,t)=u(L,t)=0\quad\mbox{for } t>0\,.$$
Here the second line represents the initial condition, and the third line gives the boundary conditions.
6.4.1 Separation of variables
We search for a solution of (4.1) in the form $u(x,t)=F(x)G(t)$, with the functions $F(x)$ and $G(t)$ to be determined. From the equation (4.1),
$$F(x)G'(t)=kF''(x)G(t)\,.$$
Divide by $kF(x)G(t)$:
$$\frac{G'(t)}{kG(t)}=\frac{F''(x)}{F(x)}\,.$$
On the left we have a function of $t$ only, while on the right we have a function of $x$ only. In order for them to be the same, they must both be equal to the same constant, which we denote by $-\lambda$:
$$\frac{G'(t)}{kG(t)}=\frac{F''(x)}{F(x)}=-\lambda\,.$$
This gives us two differential equations for $F(x)$ and $G(t)$:
$$\frac{G'(t)}{kG(t)}=-\lambda\,, \qquad (4.2)$$
and
$$F''(x)+\lambda F(x)=0\,.$$
From the boundary conditions,
$$u(0,t)=F(0)G(t)=0\,.$$
This implies that $F(0)=0$ (if we set $G(t)=0$, we would get $u=0$, which does not satisfy the initial condition in (4.1)). Similarly, we have $F(L)=0$, using the other boundary condition. So that to determine $F(x)$, we have
$$F''(x)+\lambda F(x)=0\,,\qquad F(0)=F(L)=0\,. \qquad (4.3)$$
We have studied this problem before. Nontrivial solutions occur only for $\lambda=\lambda_n=\frac{n^2\pi^2}{L^2}$. The corresponding solutions are
$$F_n(x)=\sin\frac{n\pi}{L}x \quad\mbox{(and their multiples)}\,.$$
We now plug $\lambda=\lambda_n=\frac{n^2\pi^2}{L^2}$ into the equation (4.2):
$$\frac{G'(t)}{G(t)}=-k\,\frac{n^2\pi^2}{L^2}\,. \qquad (4.4)$$
Solving these equations for all $n$,
$$G_n(t)=b_ne^{-k\frac{n^2\pi^2}{L^2}t}\,,$$
where the $b_n$'s are arbitrary constants. We have constructed infinitely many functions
$$u_n(x,t)=G_n(t)F_n(x)=b_ne^{-k\frac{n^2\pi^2}{L^2}t}\sin\frac{n\pi}{L}x\,,$$
which satisfy the PDE in (4.1), and the boundary conditions. By linearity, their sum
$$u(x,t)=\sum_{n=1}^{\infty}u_n(x,t)=\sum_{n=1}^{\infty}b_ne^{-k\frac{n^2\pi^2}{L^2}t}\sin\frac{n\pi}{L}x \qquad (4.5)$$
also satisfies the PDE in (4.1), and the boundary conditions. We now turn to the initial condition:
$$u(x,0)=\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{L}x=f(x)\,. \qquad (4.6)$$
We need to represent $f(x)$ by its Fourier sine series, for which we choose
$$b_n=\frac{2}{L}\int_0^Lf(x)\sin\frac{n\pi}{L}x\,dx\,. \qquad (4.7)$$
Conclusion: the series $\sum_{n=1}^{\infty}b_ne^{-k\frac{n^2\pi^2}{L^2}t}\sin\frac{n\pi}{L}x$, with the $b_n$'s computed using (4.7), gives the solution of our problem (4.1). We see that going from the Fourier sine series of $f(x)$ to the solution of our problem involves just putting in the additional factors $e^{-k\frac{n^2\pi^2}{L^2}t}$.
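The whole recipe fits in a few lines of code (a minimal sketch under our own naming; `f`, `k`, `L` are the data of (4.1), the coefficients (4.7) are computed by the trapezoid rule, and the series is truncated at `N` terms):

```python
import numpy as np

def heat_solution(f, k, L, N=50, pts=20000):
    """Truncated series (4.5), with b_n approximating (4.7)."""
    xq = np.linspace(0, L, pts + 1)
    w = np.full(pts + 1, L / pts)          # trapezoid weights
    w[0] = w[-1] = L / (2 * pts)
    n = np.arange(1, N + 1)
    b = (2 / L) * np.sin(np.outer(n, np.pi * xq / L)) @ (w * f(xq))
    def u(x, t):
        return np.sum(b * np.exp(-k * n ** 2 * np.pi ** 2 * t / L ** 2)
                      * np.sin(n * np.pi * x / L))
    return u

u = heat_solution(lambda x: x * (1 - x), k=1.0, L=1.0)
```

At $t=0$ this reproduces the initial data, and the temperature decays to zero as $t$ grows, dominated more and more by the first harmonic.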
Example Solve
$$u_t=5u_{xx}\quad\mbox{for } 0<x<2\,,\mbox{ and } t>0$$
$$u(x,0)=2\sin\pi x-3\sin\pi x\cos\pi x\quad\mbox{for } 0<x<2$$
$$u(0,t)=u(2,t)=0\quad\mbox{for } t>0\,.$$
Here $k=5$, and $L=2$. The Fourier sine series has the form $\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{2}x$. Writing
$$2\sin\pi x-3\sin\pi x\cos\pi x=2\sin\pi x-\frac32\sin 2\pi x\,,$$
we see that this function is its own Fourier sine series, with $b_2=2$, $b_4=-\frac32$, and all other coefficients equal to zero. Solution:
$$u(x,t)=2e^{-5\frac{2^2\pi^2}{2^2}t}\sin\pi x-\frac32e^{-5\frac{4^2\pi^2}{2^2}t}\sin 2\pi x=2e^{-5\pi^2t}\sin\pi x-\frac32e^{-20\pi^2t}\sin 2\pi x\,.$$
We see that by the time $t=1$, the first term of the solution totally dominates the second one.
Example Solve
$$u_t=2u_{xx}\quad\mbox{for } 0<x<3\,,\mbox{ and } t>0$$
$$u(x,0)=x-1\quad\mbox{for } 0<x<3$$
$$u(0,t)=u(3,t)=0\quad\mbox{for } t>0\,.$$
Here $k=2$, and $L=3$. We begin by calculating the Fourier sine series
$$x-1=\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{3}x\,,$$
with
$$b_n=\frac23\int_0^3(x-1)\sin\frac{n\pi}{3}x\,dx=-\frac{2+4(-1)^n}{n\pi}\,.$$
Solution:
$$u(x,t)=-\sum_{n=1}^{\infty}\frac{2+4(-1)^n}{n\pi}e^{-2\frac{n^2\pi^2}{9}t}\sin\frac{n\pi}{3}x\,.$$
What is the value of this solution? Again, very quickly (by the time $t=1$) the $n=1$ term dominates all the others, and we have
$$u(x,t)\approx\frac{2}{\pi}e^{-\frac{2\pi^2}{9}t}\sin\frac{\pi}{3}x\,.$$
Initial temperatures were negative for $0<x<1$, and positive for $1<x<3$. Very quickly the temperatures became positive at all points, because the first harmonic $\sin\frac{\pi}{3}x>0$ on the interval $(0,3)$. Then the temperatures tend exponentially to zero, while retaining the shape of the first harmonic.
Let us now solve the problem
$$u_t=ku_{xx}\quad\mbox{for } 0<x<L\,,\mbox{ and } t>0 \qquad (4.8)$$
$$u(x,0)=f(x)\quad\mbox{for } 0<x<L$$
$$u_x(0,t)=u_x(L,t)=0\quad\mbox{for } t>0\,.$$
Compared with (4.1), the boundary conditions are now different. This time we assume that the rod is insulated at the end points $x=0$ and $x=L$. We saw above that the flux at $x=0$ (i.e., the amount of heat flowing per unit time) is proportional to $u_x(0,t)$. Since there is no heat flow at all $t$, $u_x(0,t)=0$, and similarly $u_x(L,t)=0$. We expect that in the long run the temperatures inside the rod will average out, and be equal to the average of the initial temperatures, $\frac{1}{L}\int_0^Lf(x)\,dx$.

As before, we search for a solution in the form $u(x,t)=F(x)G(t)$. Separating variables, we see that $G(t)$ still satisfies (4.4), while for $F(x)$ we have
$$F''(x)+\lambda F(x)=0\,,\qquad F'(0)=F'(L)=0\,.$$
We have studied this problem before. Nontrivial solutions occur only for $\lambda=\lambda_0=0$, and $\lambda=\lambda_n=\frac{n^2\pi^2}{L^2}$. The corresponding solutions are
$$F_0(x)=1\,,\qquad F_n(x)=\cos\frac{n\pi}{L}x \quad\mbox{(and their multiples)}\,.$$
Solving (4.4) for $n=0$, and for all $n=1,2,3,\ldots$,
$$G_0=a_0\,,\qquad G_n(t)=a_ne^{-k\frac{n^2\pi^2}{L^2}t}\,,$$
where $a_0$ and the $a_n$'s are arbitrary constants. We have constructed infinitely many functions
$$u_0(x,t)=G_0(t)F_0(x)=a_0\,,\qquad u_n(x,t)=G_n(t)F_n(x)=a_ne^{-k\frac{n^2\pi^2}{L^2}t}\cos\frac{n\pi}{L}x\,,$$
which satisfy the PDE in (4.8), and the boundary conditions. By linearity, their sum
$$u(x,t)=u_0(x,t)+\sum_{n=1}^{\infty}u_n(x,t)=a_0+\sum_{n=1}^{\infty}a_ne^{-k\frac{n^2\pi^2}{L^2}t}\cos\frac{n\pi}{L}x \qquad (4.9)$$
also satisfies the PDE in (4.8), and the boundary conditions. We now turn to the initial condition:
$$u(x,0)=a_0+\sum_{n=1}^{\infty}a_n\cos\frac{n\pi}{L}x=f(x)\,.$$
We need to represent $f(x)$ by its Fourier cosine series, for which we choose
$$a_0=\frac{1}{L}\int_0^Lf(x)\,dx\,,\qquad a_n=\frac{2}{L}\int_0^Lf(x)\cos\frac{n\pi}{L}x\,dx\,. \qquad (4.10)$$
Conclusion: the series $u(x,t)=a_0+\sum_{n=1}^{\infty}a_ne^{-k\frac{n^2\pi^2}{L^2}t}\cos\frac{n\pi}{L}x$, with the $a_n$'s computed using (4.10), gives the solution of our problem (4.8). Again, going from the Fourier cosine series of $f(x)$ to the solution of our problem involves just putting in the additional factors $e^{-k\frac{n^2\pi^2}{L^2}t}$. As $t\to\infty$, $u(x,t)\to a_0$, which is equal to the average of the initial temperatures.
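A numerical sketch of this last observation (our own, with $f(x)=x$ on $(0,1)$, whose average is $\frac12$):

```python
import numpy as np

# Insulated ends: the cosine-series solution tends to a0, the mean of f.
L, k, N = 1.0, 1.0, 100
xq = np.linspace(0, L, 20001)
fq = xq                       # initial temperatures f(x) = x

def integrate(y):
    return np.sum((y[1:] + y[:-1]) / 2) * (xq[1] - xq[0])

a0 = integrate(fq) / L
n = np.arange(1, N + 1)
a = np.array([2 / L * integrate(fq * np.cos(m * np.pi * xq / L)) for m in n])

def u(x, t):
    return a0 + np.sum(a * np.exp(-k * n ** 2 * np.pi ** 2 * t / L ** 2)
                       * np.cos(n * np.pi * x / L))
```

At $t=0$ the series reproduces $f$, and by $t=1$ the temperature is already indistinguishable from the constant $a_0=\frac12$.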
Example Solve
$$u_t=3u_{xx}\quad\mbox{for } 0<x<\pi/2\,,\mbox{ and } t>0$$
$$u(x,0)=2\cos^2x-3\cos^22x\quad\mbox{for } 0<x<\pi/2$$
$$u_x(0,t)=u_x(\pi/2,t)=0\quad\mbox{for } t>0\,.$$
Here $k=3$, and $L=\pi/2$. The Fourier cosine series has the form $a_0+\sum_{n=1}^{\infty}a_n\cos 2nx$. Writing
$$2\cos^2x-3\cos^22x=-\frac12+\cos 2x-\frac32\cos 4x\,,$$
we see that this function is its own Fourier cosine series, with $a_0=-\frac12$, $a_1=1$, $a_2=-\frac32$, and all other coefficients equal to zero. Solution:
$$u(x,t)=-\frac12+e^{-12t}\cos 2x-\frac32e^{-48t}\cos 4x\,.$$
Example Solve
$$u_t=3u_{xx}-au\quad\mbox{for } 0<x<\pi\,,\mbox{ and } t>0$$
$$u(x,0)=2\cos x+x^2\quad\mbox{for } 0<x<\pi$$
$$u_x(0,t)=u_x(\pi,t)=0\quad\mbox{for } t>0\,,$$
where $a$ is a positive constant. The extra term $-au$ is an example of a lower order term. Its physical significance is that the rod is no longer insulated, and heat freely radiates through its side.

We set $e^{at}u(x,t)=v(x,t)$, or
$$u(x,t)=e^{-at}v(x,t)\,.$$
We have $u_t=-ae^{-at}v+e^{-at}v_t$, and $u_{xx}=e^{-at}v_{xx}$. We conclude that $v(x,t)$ satisfies the problem
$$v_t=3v_{xx}\quad\mbox{for } 0<x<\pi\,,\mbox{ and } t>0$$
$$v(x,0)=2\cos x+x^2\quad\mbox{for } 0<x<\pi$$
$$v_x(0,t)=v_x(\pi,t)=0\quad\mbox{for } t>0\,,$$
which is of the type we considered above. Because $2\cos x$ is its own Fourier cosine series on the interval $(0,\pi)$, we shall expand $x^2$ in a Fourier cosine series, and then add $2\cos x$. We have
$$x^2=a_0+\sum_{n=1}^{\infty}a_n\cos nx\,,$$
where
$$a_0=\frac{1}{\pi}\int_0^{\pi}x^2\,dx=\frac{\pi^2}{3}\,,\qquad a_n=\frac{2}{\pi}\int_0^{\pi}x^2\cos nx\,dx=\frac{4(-1)^n}{n^2}\,.$$
Then
$$2\cos x+x^2=\frac{\pi^2}{3}-2\cos x+\sum_{n=2}^{\infty}\frac{4(-1)^n}{n^2}\cos nx\,,$$
and so
$$v(x,t)=\frac{\pi^2}{3}-2e^{-3t}\cos x+\sum_{n=2}^{\infty}\frac{4(-1)^n}{n^2}e^{-3n^2t}\cos nx\,.$$
Answer. $u(x,t)=e^{-at}v(x,t)=\frac{\pi^2}{3}e^{-at}-2e^{-(3+a)t}\cos x+\sum_{n=2}^{\infty}\frac{4(-1)^n}{n^2}e^{-(3n^2+a)t}\cos nx$.
6.5 Laplace's equation

We now study heat conduction in a thin two-dimensional rectangular plate: $0\le x\le L$, $0\le y\le M$. Assume that both faces of the plate are insulated, so that heat travels only in the $xy$ plane. Let $u(x,y,t)$ denote the temperature at a point $(x,y)$ and a time $t>0$. It is natural to expect that the heat equation in two dimensions takes the form
$$u_t=k\left(u_{xx}+u_{yy}\right). \qquad (5.1)$$
Indeed, one can derive (5.1) similarly to the way we derived the one-dimensional heat equation. The boundary of our plate consists of four line segments. Let us assume that the side lying on the $x$-axis is kept at $1$ degree Celsius, i.e., $u(x,0)=1$ for $0\le x\le L$, while the other three sides are kept at $0$ degrees Celsius (they are kept on ice). I.e., $u(x,M)=0$ for $0\le x\le L$, and $u(0,y)=u(L,y)=0$ for $0\le y\le M$. The heat will flow from the warmer side toward the three sides on ice. While the heat will continue its flow indefinitely, eventually the temperatures will stabilize (we can expect temperatures close to $1$ near the warm side, and close to $0$ near the icy sides). Stable temperatures do not change with time, i.e., $u=u(x,y)$. Then $u_t=0$, and the equation (5.1) takes the form
$$u_{xx}+u_{yy}=0\,. \qquad (5.2)$$
This is the Laplace equation, one of the three main equations of Mathematical Physics. Mathematicians use the notation $\Delta u=u_{xx}+u_{yy}$, while engineers seem to prefer $\nabla^2u=u_{xx}+u_{yy}$. The latter notation has to do with the fact that computing the divergence of the gradient of $u(x,y)$ gives $\nabla\cdot\nabla u=u_{xx}+u_{yy}$. Solutions of the Laplace equation are called harmonic functions.

To find the steady state temperatures $u=u(x,y)$ for our example, we need to solve the problem
$$u_{xx}+u_{yy}=0\quad\mbox{for } 0<x<L\,,\mbox{ and } 0<y<M$$
$$u(x,0)=1\quad\mbox{for } 0<x<L$$
$$u(x,M)=0\quad\mbox{for } 0<x<L$$
$$u(0,y)=u(L,y)=0\quad\mbox{for } 0<y<M\,.$$
We again apply the separation of variables technique, looking for a solution in the form
$$u(x,y)=F(x)G(y)\,,$$
with the functions $F(x)$ and $G(y)$ to be determined. Plugging this into the Laplace equation, we get
$$F''(x)G(y)=-F(x)G''(y)\,.$$
Divide both sides by $F(x)G(y)$:
$$\frac{F''(x)}{F(x)}=-\frac{G''(y)}{G(y)}\,.$$
On the left we have a function of $x$ only, while on the right we have a function of $y$ only. In order for them to be the same, they must both be equal to the same constant, which we denote by $-\lambda$:
$$\frac{F''(x)}{F(x)}=-\frac{G''(y)}{G(y)}=-\lambda\,.$$
From this, and using the boundary conditions of our problem, we obtain
$$F''+\lambda F=0\,,\qquad F(0)=F(L)=0\,, \qquad (5.3)$$
$$G''-\lambda G=0\,,\qquad G(M)=0\,. \qquad (5.4)$$
Nontrivial solutions of (5.3) occur at $\lambda=\lambda_n=\frac{n^2\pi^2}{L^2}$, and they are $F_n(x)=B_n\sin\frac{n\pi}{L}x$. We solve (5.4) with $\lambda=\frac{n^2\pi^2}{L^2}$, obtaining $G_n(y)=\sinh\frac{n\pi}{L}(y-M)$. We conclude that
$$u_n(x,y)=F_n(x)G_n(y)=B_n\sin\frac{n\pi}{L}x\,\sinh\frac{n\pi}{L}(y-M)$$
satisfies the Laplace equation and the three zero boundary conditions. The same is true for their sum
$$u(x,y)=\sum_{n=1}^{\infty}F_n(x)G_n(y)=\sum_{n=1}^{\infty}B_n\sinh\frac{n\pi}{L}(y-M)\sin\frac{n\pi}{L}x\,.$$
It remains to satisfy the boundary condition at the warm side:
$$u(x,0)=\sum_{n=1}^{\infty}B_n\sinh\frac{n\pi}{L}(-M)\sin\frac{n\pi}{L}x=1\,.$$
We need to choose $B_n$ so that $B_n\sinh\frac{n\pi}{L}(-M)$ is the Fourier sine series coefficient of $f(x)=1$, i.e.,
$$B_n\sinh\frac{n\pi}{L}(-M)=\frac{2}{L}\int_0^L\sin\frac{n\pi}{L}x\,dx=\frac{2\left(1-(-1)^n\right)}{n\pi}\,,$$
which gives
$$B_n=-\frac{2\left(1-(-1)^n\right)}{n\pi\sinh\frac{n\pi M}{L}}\,.$$
Therefore
$$u(x,y)=-\sum_{n=1}^{\infty}\frac{2\left(1-(-1)^n\right)}{n\pi\sinh\frac{n\pi M}{L}}\sinh\frac{n\pi}{L}(y-M)\sin\frac{n\pi}{L}x\,.$$
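To evaluate this series numerically it is best to rewrite $-\sinh\frac{n\pi}{L}(y-M)/\sinh\frac{n\pi M}{L}$ as $\sinh\frac{n\pi}{L}(M-y)/\sinh\frac{n\pi M}{L}$ and to express the ratio through decaying exponentials, which avoids overflow for large $n$ (a sketch of our own, with $L=M=1$):

```python
import numpy as np

L, M, N = 1.0, 1.0, 20000
n = np.arange(1, N + 1)
c = 2 * (1 - (-1.0) ** n) / (n * np.pi)   # Fourier sine coefficients of 1

def u(x, y):
    # sinh(n pi (M - y)/L) / sinh(n pi M / L), via decaying exponentials
    r = (np.exp(-n * np.pi * y / L) * (1 - np.exp(-2 * n * np.pi * (M - y) / L))
         / (1 - np.exp(-2 * n * np.pi * M / L)))
    return np.sum(c * r * np.sin(n * np.pi * x / L))
```

The partial sums are $\approx 1$ on the warm side $y=0$ (away from the corners), exactly $0$ on the other three sides, and strictly between $0$ and $1$ inside the plate.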
Example Solve
$$u_{xx}+u_{yy}=0\quad\mbox{for } 0<x<1\,,\mbox{ and } 0<y<2$$
$$u(x,0)=0\quad\mbox{for } 0<x<1$$
$$u(x,2)=0\quad\mbox{for } 0<x<1$$
$$u(0,y)=0\quad\mbox{for } 0<y<2$$
$$u(1,y)=y\quad\mbox{for } 0<y<2\,.$$
We look for a solution $u(x,y)=F(x)G(y)$. After separating variables, it is convenient to use $\lambda$ (instead of $-\lambda$) to denote the common value of the two functions:
$$\frac{F''(x)}{F(x)}=-\frac{G''(y)}{G(y)}=\lambda\,.$$
Using the boundary conditions, we conclude that
$$G''+\lambda G=0\,,\qquad G(0)=G(2)=0\,,$$
$$F''-\lambda F=0\,,\qquad F(0)=0\,.$$
The first equation has non-trivial solutions at $\lambda=\lambda_n=\frac{n^2\pi^2}{4}$, and they are $G_n(y)=B_n\sin\frac{n\pi}{2}y$. We then solve the second equation with $\lambda=\frac{n^2\pi^2}{4}$, obtaining $F_n(x)=\sinh\frac{n\pi}{2}x$. We conclude that
$$u_n(x,y)=F_n(x)G_n(y)=B_n\sinh\frac{n\pi}{2}x\,\sin\frac{n\pi}{2}y$$
satisfies the Laplace equation and the three zero boundary conditions. The same is true for their sum
$$u(x,y)=\sum_{n=1}^{\infty}F_n(x)G_n(y)=\sum_{n=1}^{\infty}B_n\sinh\frac{n\pi}{2}x\,\sin\frac{n\pi}{2}y\,.$$
It remains to satisfy the boundary condition at the fourth side:
$$u(1,y)=\sum_{n=1}^{\infty}B_n\sinh\frac{n\pi}{2}\sin\frac{n\pi}{2}y=y\,.$$
We need to choose $B_n$ so that $B_n\sinh\frac{n\pi}{2}$ is the Fourier sine series coefficient of $y$, i.e.,
$$B_n\sinh\frac{n\pi}{2}=\int_0^2y\sin\frac{n\pi}{2}y\,dy=\frac{4(-1)^{n+1}}{n\pi}\,,$$
which gives
$$B_n=\frac{4(-1)^{n+1}}{n\pi\sinh\frac{n\pi}{2}}\,.$$
Therefore
$$u(x,y)=\sum_{n=1}^{\infty}\frac{4(-1)^{n+1}}{n\pi\sinh\frac{n\pi}{2}}\sinh\frac{n\pi}{2}x\,\sin\frac{n\pi}{2}y\,.$$
Our computations in the above examples were aided by the fact that three out of the four boundary conditions were zero (homogeneous). The most general boundary value problem has the form
$$u_{xx}+u_{yy}=0\quad\mbox{for } 0<x<L\,,\mbox{ and } 0<y<M \qquad (5.5)$$
$$u(x,0)=f_1(x)\,,\quad u(x,M)=f_2(x)\quad\mbox{for } 0<x<L$$
$$u(0,y)=g_1(y)\,,\quad u(L,y)=g_2(y)\quad\mbox{for } 0<y<M\,,$$
with given functions $f_1(x)$, $f_2(x)$, $g_1(y)$ and $g_2(y)$. Because this problem is linear, we can break it into four sub-problems, each time keeping one of the boundary conditions, and setting the other three to zero. Namely, we look for a solution in the form
$$u(x,y)=u_1(x,y)+u_2(x,y)+u_3(x,y)+u_4(x,y)\,,$$
where $u_1$ is found by solving (on the same rectangle)
$$u_{xx}+u_{yy}=0\,,\qquad u(x,0)=f_1(x)\,,\quad u(x,M)=0\,,\quad u(0,y)=u(L,y)=0\,,$$
$u_2$ is computed from
$$u_{xx}+u_{yy}=0\,,\qquad u(x,0)=0\,,\quad u(x,M)=f_2(x)\,,\quad u(0,y)=u(L,y)=0\,,$$
$u_3$ is computed from
$$u_{xx}+u_{yy}=0\,,\qquad u(x,0)=u(x,M)=0\,,\quad u(0,y)=g_1(y)\,,\quad u(L,y)=0\,,$$
and $u_4$ from
$$u_{xx}+u_{yy}=0\,,\qquad u(x,0)=u(x,M)=0\,,\quad u(0,y)=0\,,\quad u(L,y)=g_2(y)\,.$$
We solve each of these four problems using separation of variables, as before.
6.6 The wave equation

We consider vibrations of a guitar string. We assume that the motion of the string is transverse, i.e., it goes only up and down (and not sideways). Let $u(x,t)$ denote the displacement of the string at a point $x$ and time $t$. The motion of an element of the string $(x,x+\Delta x)$ is governed by Newton's law
$$ma=f\,.$$
The acceleration $a=u_{tt}(x,t)$. If $\rho$ denotes the density of the string, then $m=\rho\Delta x$. We assume that the internal tension is the only force acting on this element, and we also assume that the magnitude of the tension $T$ is constant throughout the string. Our final assumption is that the slope of the string $u_x(x,t)$ is small for all $x$ and $t$. Observe that $u_x(x,t)=\tan\theta$, see the figure. Then the vertical component of the force at the right end of our element is
$$T\sin\theta\approx T\tan\theta=Tu_x(x+\Delta x,t)\,,$$
because for small angles $\theta$, $\sin\theta\approx\tan\theta$. At the left end, the vertical component of the force is $-Tu_x(x,t)$, and we have
$$\rho\Delta x\,u_{tt}(x,t)=Tu_x(x+\Delta x,t)-Tu_x(x,t)\,.$$
We divide both sides by $\rho\Delta x$, and denote $\frac{T}{\rho}=c^2$:
$$u_{tt}(x,t)=c^2\,\frac{u_x(x+\Delta x,t)-u_x(x,t)}{\Delta x}\,.$$
Letting $\Delta x\to 0$, we obtain the wave equation
$$u_{tt}(x,t)=c^2u_{xx}(x,t)\,.$$
(Figure: forces acting on an element of a string.)
We consider now vibrations of a string of length $L$, which is fixed at the end points $x=0$ and $x=L$, with a given initial displacement $f(x)$, and a given initial velocity $g(x)$:
$$u_{tt}=c^2u_{xx}\quad\mbox{for } 0<x<L\,,\mbox{ and } t>0$$
$$u(0,t)=u(L,t)=0\quad\mbox{for } t>0$$
$$u(x,0)=f(x)\quad\mbox{for } 0<x<L$$
$$u_t(x,0)=g(x)\quad\mbox{for } 0<x<L\,.$$
As before, we search for the solution in the form $u(x,t)=F(x)G(t)$:
$$F(x)G''(t)=c^2F''(x)G(t)\,,$$
$$\frac{G''(t)}{c^2G(t)}=\frac{F''(x)}{F(x)}=-\lambda\,.$$
Using the boundary conditions, we get
$$F''+\lambda F=0\,,\qquad F(0)=F(L)=0\,,$$
$$G''+\lambda c^2G=0\,.$$
Nontrivial solutions of the first equation occur at $\lambda=\lambda_n=\frac{n^2\pi^2}{L^2}$, and they are $F_n(x)=\sin\frac{n\pi}{L}x$. The second equation then takes the form
$$G''+\frac{n^2\pi^2}{L^2}c^2\,G=0\,.$$
Its general solution is
$$G_n(t)=b_n\cos\frac{n\pi c}{L}t+B_n\sin\frac{n\pi c}{L}t\,,$$
where $b_n$ and $B_n$ are arbitrary constants. The function
$$u(x,t)=\sum_{n=1}^{\infty}F_n(x)G_n(t)=\sum_{n=1}^{\infty}\left(b_n\cos\frac{n\pi c}{L}t+B_n\sin\frac{n\pi c}{L}t\right)\sin\frac{n\pi}{L}x$$
satisfies the wave equation and the boundary conditions. It remains to satisfy the initial conditions. Compute
$$u(x,0)=\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{L}x=f(x)\,,$$
for which we choose
$$b_n=\frac{2}{L}\int_0^Lf(x)\sin\frac{n\pi}{L}x\,dx\,, \qquad (6.1)$$
and
$$u_t(x,0)=\sum_{n=1}^{\infty}B_n\frac{n\pi c}{L}\sin\frac{n\pi}{L}x=g(x)\,,$$
for which we choose
$$B_n=\frac{2}{n\pi c}\int_0^Lg(x)\sin\frac{n\pi}{L}x\,dx\,. \qquad (6.2)$$
Answer.
$$u(x,t)=\sum_{n=1}^{\infty}\left(b_n\cos\frac{n\pi c}{L}t+B_n\sin\frac{n\pi c}{L}t\right)\sin\frac{n\pi}{L}x\,,$$
with the $b_n$'s computed by (6.1), and the $B_n$'s by (6.2).

We see that the motion of the string is periodic in time, similar to the harmonic motion of a spring that we studied before. This is understandable, because we did not account for dissipation of energy in our model.
Example Solve
u
tt
= 9u
xx
for 0 < x < , and t > 0
u(0, t) = u(, t) = 0 for t > 0
u(x, 0) = 2 sinx for 0 < x <
u
t
(x, 0) = 0 for 0 < x < .
Here c = 3 and L = . Because g(x) = 0, all B
n
= 0, while b
n
s are
Fourier sine series coecients of 2 sinx, i.e., b
1
= 2, and all other b
n
= 0.
Answer. u(x, t) = 2 cos 3t sinx.
To interpret this answer, we use a trig identity to write
$$u(x,t) = \sin(x - 3t) + \sin(x + 3t)\,.$$
The graph of $y = \sin(x - 3t)$ in the $xy$-plane is obtained by translating the graph of $\sin x$ by $3t$ units to the right; that is, we have a wave traveling to the right with speed $3$. Similarly, the graph of $\sin(x + 3t)$ is a wave traveling to the left with speed $3$. The solution is the superposition of these two waves. For the general wave equation, $c$ gives the wave speed.
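The product-to-sum identity used here can be confirmed pointwise (a small sanity check, not part of the text):

```python
import math

# 2 cos(3t) sin(x) = sin(x - 3t) + sin(x + 3t): the standing wave is the
# superposition of two traveling waves of speed 3.
for x in [0.0, 0.7, 1.9, 3.1]:
    for t in [0.0, 0.4, 2.5]:
        standing = 2 * math.cos(3 * t) * math.sin(x)
        traveling = math.sin(x - 3 * t) + math.sin(x + 3 * t)
        assert abs(standing - traveling) < 1e-12
```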
6.6.1 Non-homogeneous problems
Let us solve
$$u_{tt} - 4u_{xx} = x \quad \mbox{for } 0 < x < 3, \mbox{ and } t > 0$$
$$u(0,t) = 1 \quad \mbox{for } t > 0$$
$$u(3,t) = 2 \quad \mbox{for } t > 0$$
$$u(x,0) = 0 \quad \mbox{for } 0 < x < 3$$
$$u_t(x,0) = 1 \quad \mbox{for } 0 < x < 3\,.$$
This problem does not fit the above pattern, because it is non-homogeneous. Indeed, the $x$ term on the right makes the equation non-homogeneous, and the boundary conditions are non-homogeneous (non-zero) too. We look for a solution in the form
$$u(x,t) = U(x) + v(x,t)\,.$$
We ask the function $U(x)$ to take care of all of our problems, i.e., to satisfy
$$-4U'' = x$$
$$U(0) = 1\,,\quad U(3) = 2\,.$$
Integrating twice, we find
$$U(x) = -\frac{1}{24}x^3 + \frac{17}{24}x + 1\,.$$
Then the function $v(x,t) = u(x,t) - U(x)$ satisfies
$$v_{tt} - 4v_{xx} = 0 \quad \mbox{for } 0 < x < 3, \mbox{ and } t > 0$$
$$v(0,t) = 0 \quad \mbox{for } t > 0$$
$$v(3,t) = 0 \quad \mbox{for } t > 0$$
$$v(x,0) = -U(x) = \frac{1}{24}x^3 - \frac{17}{24}x - 1 \quad \mbox{for } 0 < x < 3$$
$$v_t(x,0) = 1 \quad \mbox{for } 0 < x < 3\,.$$
This is a homogeneous problem, of the type we considered before! Here $c = 2$ and $L = 3$. Separation of variables, as above, gives
$$v(x,t) = \sum_{n=1}^{\infty}\left(b_n\cos\frac{2n\pi}{3}t + B_n\sin\frac{2n\pi}{3}t\right)\sin\frac{n\pi}{3}x\,,$$
with
$$b_n = \frac{2}{3}\int_0^3\left(\frac{1}{24}x^3 - \frac{17}{24}x - 1\right)\sin\frac{n\pi}{3}x\,dx = -\frac{2}{n\pi} + \frac{27 + 8n^2\pi^2}{2n^3\pi^3}(-1)^n\,,$$
$$B_n = \frac{1}{n\pi}\int_0^3\sin\frac{n\pi}{3}x\,dx = \frac{3 - 3(-1)^n}{n^2\pi^2}\,.$$
Answer.
$$u(x,t) = -\frac{1}{24}x^3 + \frac{17}{24}x + 1 + \sum_{n=1}^{\infty}\left[\left(-\frac{2}{n\pi} + \frac{27 + 8n^2\pi^2}{2n^3\pi^3}(-1)^n\right)\cos\frac{2n\pi}{3}t + \frac{3 - 3(-1)^n}{n^2\pi^2}\sin\frac{2n\pi}{3}t\right]\sin\frac{n\pi}{3}x\,.$$
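The closed form for $b_n$ above can be verified against direct numerical integration (a quick check of the formula, with the quadrature done by the midpoint rule):

```python
import math

def bn(n, steps=20000):
    """b_n = (2/3) * integral_0^3 (x^3/24 - 17x/24 - 1) sin(n pi x / 3) dx (midpoint rule)."""
    h = 3.0 / steps
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        s += (x ** 3 / 24 - 17 * x / 24 - 1) * math.sin(n * math.pi * x / 3)
    return (2.0 / 3.0) * s * h

for n in range(1, 6):
    closed = -2 / (n * math.pi) + (27 + 8 * n ** 2 * math.pi ** 2) / (2 * n ** 3 * math.pi ** 3) * (-1) ** n
    assert abs(bn(n) - closed) < 1e-6
```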
In the non-homogeneous wave equation
$$u_{tt} - c^2u_{xx} = f(x,t) \quad (6.3)$$
the term $f(x,t)$ gives the acceleration of an external force applied to the string. Indeed, the $ma = f$ equation for an element of the string takes the form
$$\rho\,\Delta x\,u_{tt} = T\left(u_x(x + \Delta x, t) - u_x(x,t)\right) + \rho\,\Delta x\,f(x,t)\,.$$
Dividing by $\rho\,\Delta x$, and letting $\Delta x \to 0$ (as before), we obtain (6.3).
Non-homogeneous problems for the heat equation are solved similarly.
Example Let us now solve the problem
$$u_t - 2u_{xx} = 1 \quad \mbox{for } 0 < x < 1, \mbox{ and } t > 0$$
$$u(x,0) = x \quad \mbox{for } 0 < x < 1$$
$$u(0,t) = 0 \quad \mbox{for } t > 0$$
$$u(1,t) = 3 \quad \mbox{for } t > 0\,.$$
Again, we look for a solution in the form
$$u(x,t) = U(x) + v(x,t)\,,$$
with $U(x)$ satisfying
$$-2U'' = 1$$
$$U(0) = 0\,,\quad U(1) = 3\,.$$
Integrating, we calculate
$$U(x) = -\frac{1}{4}x^2 + \frac{13}{4}x\,.$$
The function $v(x,t) = u(x,t) - U(x)$ then satisfies
$$v_t - 2v_{xx} = 0 \quad \mbox{for } 0 < x < 1, \mbox{ and } t > 0$$
$$v(x,0) = x - U(x) = \frac{1}{4}x^2 - \frac{9}{4}x \quad \mbox{for } 0 < x < 1$$
$$v(0,t) = v(1,t) = 0 \quad \mbox{for } t > 0\,.$$
To solve the last problem, we begin by expanding the initial temperature in its Fourier sine series
$$\frac{1}{4}x^2 - \frac{9}{4}x = \sum_{n=1}^{\infty}b_n\sin n\pi x\,,$$
with
$$b_n = 2\int_0^1\left(\frac{1}{4}x^2 - \frac{9}{4}x\right)\sin n\pi x\,dx = \frac{-1 + \left(1 + 4n^2\pi^2\right)(-1)^n}{n^3\pi^3}\,.$$
Then
$$v(x,t) = \sum_{n=1}^{\infty}\frac{-1 + \left(1 + 4n^2\pi^2\right)(-1)^n}{n^3\pi^3}\,e^{-2n^2\pi^2t}\sin n\pi x\,.$$
Answer.
$$u(x,t) = -\frac{1}{4}x^2 + \frac{13}{4}x + \sum_{n=1}^{\infty}\frac{-1 + \left(1 + 4n^2\pi^2\right)(-1)^n}{n^3\pi^3}\,e^{-2n^2\pi^2t}\sin n\pi x\,.$$
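Here too the coefficient formula can be double-checked numerically (midpoint rule; a sanity check on the signs):

```python
import math

def bn(n, steps=20000):
    """b_n = 2 * integral_0^1 (x^2/4 - 9x/4) sin(n pi x) dx (midpoint rule)."""
    h = 1.0 / steps
    s = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        s += (x ** 2 / 4 - 9 * x / 4) * math.sin(n * math.pi * x)
    return 2.0 * s * h

for n in range(1, 6):
    closed = (-1 + (1 + 4 * n ** 2 * math.pi ** 2) * (-1) ** n) / (n ** 3 * math.pi ** 3)
    assert abs(bn(n) - closed) < 1e-6
```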
6.6.2 Problems
I. Solve the following problems, and explain their physical significance.

1. $u_t = 2u_{xx}$ for $0 < x < \pi$, and $t > 0$
$u(x,0) = \sin x - 3\sin x\cos x$ for $0 < x < \pi$
$u(0,t) = u(\pi,t) = 0$ for $t > 0$.
Answer. $u = e^{-2t}\sin x - \frac{3}{2}e^{-8t}\sin 2x$.

2. $u_t = 2u_{xx}$ for $0 < x < 2\pi$, and $t > 0$
$u(x,0) = \sin x - 3\sin x\cos x$ for $0 < x < 2\pi$
$u(0,t) = u(2\pi,t) = 0$ for $t > 0$.
Answer. $u = e^{-2t}\sin x - \frac{3}{2}e^{-8t}\sin 2x$.

3. $u_t = 5u_{xx}$ for $0 < x < 2$, and $t > 0$
$u(x,0) = x$ for $0 < x < 2$
$u(0,t) = u(2,t) = 0$ for $t > 0$.
Answer. $u = \sum_{n=1}^{\infty}\frac{4(-1)^{n+1}}{n\pi}\,e^{-\frac{5n^2\pi^2}{4}t}\sin\frac{n\pi}{2}x$.

4. $u_t = 3u_{xx}$ for $0 < x < \pi$, and $t > 0$
$u(x,0) = \begin{cases} x & \mbox{for } 0 < x < \frac{\pi}{2}\\ \pi - x & \mbox{for } \frac{\pi}{2} < x < \pi\end{cases}$
$u(0,t) = u(\pi,t) = 0$ for $t > 0$.
Answer. $u = \sum_{n=1}^{\infty}\frac{4}{\pi n^2}\,e^{-3n^2t}\sin\frac{n\pi}{2}\,\sin nx$.
5. $u_t = u_{xx}$ for $0 < x < 3$, and $t > 0$
$u(x,0) = x + 2$ for $0 < x < 3$
$u(0,t) = u(3,t) = 0$ for $t > 0$.
Answer. $u = \sum_{n=1}^{\infty}\frac{4 - 10(-1)^n}{n\pi}\,e^{-\frac{n^2\pi^2}{9}t}\sin\frac{n\pi}{3}x$.

6. $u_t = u_{xx}$ for $0 < x < 3$, and $t > 0$
$u(x,0) = x + 2$ for $0 < x < 3$
$u_x(0,t) = u_x(3,t) = 0$ for $t > 0$.
Answer. $u = \frac{7}{2} + \sum_{n=1}^{\infty}\frac{6\left(-1 + (-1)^n\right)}{n^2\pi^2}\,e^{-\frac{n^2\pi^2}{9}t}\cos\frac{n\pi}{3}x$.
7. $u_t = 2u_{xx}$ for $0 < x < \pi$, and $t > 0$
$u(x,0) = \cos^4 x$ for $0 < x < \pi$
$u_x(0,t) = u_x(\pi,t) = 0$ for $t > 0$.
Answer. $u = \frac{3}{8} + \frac{1}{2}e^{-8t}\cos 2x + \frac{1}{8}e^{-32t}\cos 4x$.
8. $u_t = 3u_{xx} + u$ for $0 < x < 2$, and $t > 0$
$u(x,0) = 1 - x$ for $0 < x < 2$
$u_x(0,t) = u_x(2,t) = 0$ for $t > 0$.
Answer. $u = \sum_{k=1}^{\infty}\frac{8}{\pi^2(2k-1)^2}\,e^{\left(-\frac{3(2k-1)^2\pi^2}{4} + 1\right)t}\cos\frac{(2k-1)\pi}{2}x$.
9. $u_{xx} + u_{yy} = 0$ for $0 < x < 2$, and $0 < y < 3$
$u(x,0) = u(x,3) = 5$ for $0 < x < 2$
$u(0,y) = u(2,y) = 5$ for $0 < y < 3$.

10. $u_{xx} + u_{yy} = 0$ for $0 < x < 2$, and $0 < y < 3$
$u(x,0) = u(x,3) = 5$ for $0 < x < 2$
$u(0,y) = u(2,y) = 0$ for $0 < y < 3$.

11. $u_{xx} + u_{yy} = 0$ for $0 < x < \pi$, and $0 < y < 1$
$u(x,0) = 0$ for $0 < x < \pi$
$u(x,1) = 3\sin 2x$ for $0 < x < \pi$
$u(0,y) = u(\pi,y) = 0$ for $0 < y < 1$.
Answer. $u = \frac{3}{\sinh 2}\sin 2x\,\sinh 2y$.

12. $u_{xx} + u_{yy} = 0$ for $0 < x < 2\pi$, and $0 < y < 2$
$u(x,0) = \sin x$ for $0 < x < 2\pi$
$u(x,2) = 0$ for $0 < x < 2\pi$
$u(0,y) = 0$ for $0 < y < 2$
$u(2\pi,y) = y$ for $0 < y < 2$.
Answer. $u = -\frac{1}{\sinh 2}\sinh(y-2)\sin x + \sum_{n=1}^{\infty}\frac{4(-1)^{n+1}}{n\pi\sinh n\pi^2}\,\sinh\frac{n\pi}{2}x\,\sin\frac{n\pi}{2}y$.
13. $u_{tt} - 4u_{xx} = 0$ for $0 < x < \pi$, and $t > 0$
$u(x,0) = \sin 2x$ for $0 < x < \pi$
$u_t(x,0) = -4\sin 2x$ for $0 < x < \pi$
$u(0,t) = u(\pi,t) = 0$ for $t > 0$.
Answer. $u(x,t) = \cos 4t\,\sin 2x - \sin 4t\,\sin 2x$.

14. $u_{tt} - 4u_{xx} = 0$ for $0 < x < 1$, and $t > 0$
$u(x,0) = 0$ for $0 < x < 1$
$u_t(x,0) = x$ for $0 < x < 1$
$u(0,t) = u(1,t) = 0$ for $t > 0$.
Answer. $u = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^2\pi^2}\sin 2n\pi t\,\sin n\pi x$.
15. $u_{tt} - 4u_{xx} = 0$ for $0 < x < 1$, and $t > 0$
$u(x,0) = 3$ for $0 < x < 1$
$u_t(x,0) = x$ for $0 < x < 1$
$u(0,t) = u(1,t) = 0$ for $t > 0$.
Answer. $u = \sum_{n=1}^{\infty}\left[\frac{6}{n\pi}\left(1 - (-1)^n\right)\cos 2n\pi t + \frac{(-1)^{n+1}}{n^2\pi^2}\sin 2n\pi t\right]\sin n\pi x$.
II. Solve the following non-homogeneous problems. You may leave the integrals unevaluated.

1. $u_t = 2u_{xx}$ for $0 < x < \pi$, and $t > 0$
$u(x,0) = 0$ for $0 < x < \pi$
$u(0,t) = 0$ for $t > 0$
$u(\pi,t) = 1$ for $t > 0$.
Hint: $U(x) = \frac{x}{\pi}$.

2. $u_t = 2u_{xx} + 4x$ for $0 < x < 1$, and $t > 0$
$u(x,0) = 0$ for $0 < x < 1$
$u(0,t) = 0$ for $t > 0$
$u(1,t) = 0$ for $t > 0$.
Hint: $U(x) = \frac{1}{3}\left(x - x^3\right)$.

3. $u_{tt} = 4u_{xx} + x$ for $0 < x < 4$, and $t > 0$
$u(x,0) = x$ for $0 < x < 4$
$u_t(x,0) = 0$ for $0 < x < 4$
$u(0,t) = 1$ for $t > 0$
$u(4,t) = 3$ for $t > 0$.
Hint: $U(x) = 1 + \frac{7}{6}x - \frac{1}{24}x^3$.
6.7 Calculating the Earth's temperature
6.7.1 The complex form of the Fourier series
Recall that a function $f(x)$ defined on $(-L, L)$ can be represented by the Fourier series
$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi}{L}x + b_n\sin\frac{n\pi}{L}x\right)\,,$$
with
$$a_0 = \frac{1}{2L}\int_{-L}^{L}f(x)\,dx\,,\quad a_n = \frac{1}{L}\int_{-L}^{L}f(x)\cos\frac{n\pi}{L}x\,dx\,,$$
$$b_n = \frac{1}{L}\int_{-L}^{L}f(x)\sin\frac{n\pi}{L}x\,dx\,.$$
Using Euler's formulas, we may write
$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\,\frac{e^{i\frac{n\pi}{L}x} + e^{-i\frac{n\pi}{L}x}}{2} + b_n\,\frac{e^{i\frac{n\pi}{L}x} - e^{-i\frac{n\pi}{L}x}}{2i}\right)\,.$$
Combining the like terms, we rewrite this as
$$f(x) = \sum_{n=-\infty}^{\infty}c_n\,e^{i\frac{n\pi}{L}x}\,, \quad (7.1)$$
where
$$c_0 = a_0\,,\qquad c_n = \begin{cases}\frac{a_n}{2} - \frac{b_ni}{2} & \mbox{for } n > 0\\[0.3em] \frac{a_{-n}}{2} + \frac{b_{-n}i}{2} & \mbox{for } n < 0\,.\end{cases}$$
We see that $c_{-m} = \bar c_m$, for any integer $m$, where the bar denotes complex conjugate.
Using the formulas for $a_n$ and $b_n$, and Euler's formula, we have for $n > 0$
$$c_n = \frac{1}{L}\int_{-L}^{L}f(x)\,\frac{\cos\frac{n\pi}{L}x - i\sin\frac{n\pi}{L}x}{2}\,dx = \frac{1}{2L}\int_{-L}^{L}f(x)e^{-i\frac{n\pi}{L}x}\,dx\,.$$
The formula for $c_{-n}$ is exactly the same. The series (7.1), with coefficients
$$c_n = \frac{1}{2L}\int_{-L}^{L}f(x)e^{-i\frac{n\pi}{L}x}\,dx\,,\quad n = 0, \pm 1, \pm 2, \ldots\,,$$
is called the complex form of the Fourier series.
Example Find the complex form of the Fourier series of
$$f(x) = \begin{cases}-1 & \mbox{for } -2 < x < 0\\ 1 & \mbox{for } 0 < x < 2\end{cases}$$
on $(-2, 2)$.
With the help of Euler's formula, we compute
$$c_n = \frac{1}{4}\left[\int_{-2}^{0}(-1)e^{-i\frac{n\pi}{2}x}\,dx + \int_0^2 1\cdot e^{-i\frac{n\pi}{2}x}\,dx\right] = \frac{i}{n\pi}\left(-1 + (-1)^n\right)\,,$$
and so
$$f(x) = \sum_{n=-\infty}^{\infty}\frac{i}{n\pi}\left(-1 + (-1)^n\right)e^{i\frac{n\pi}{2}x}\,,$$
where the $n = 0$ term is absent ($c_0 = a_0 = 0$).
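The coefficients of this example can be re-derived numerically from the defining integral (a small check; the quadrature is again the midpoint rule):

```python
import cmath
import math

def cn(n, steps=40000):
    """c_n = (1/4) * integral_{-2}^{2} f(x) e^{-i n pi x / 2} dx for the step function f."""
    h = 4.0 / steps
    total = 0j
    for i in range(steps):
        x = -2.0 + (i + 0.5) * h
        f = -1.0 if x < 0 else 1.0
        total += f * cmath.exp(-1j * n * math.pi * x / 2)
    return total * h / 4.0

for n in range(1, 6):
    closed = 1j / (n * math.pi) * (-1 + (-1) ** n)
    assert abs(cn(n) - closed) < 1e-6
```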
6.7.2 Temperatures inside the Earth and wine cellars
Suppose the average daily temperature is given by the function $f(t)$, $0 \le t \le 365$. We assume this function to be periodic, with the period $T = 365$. What is the temperature $x$ cm inside the Earth, when $t =$ today? We assume that $x$ is not large, so that we may ignore the geothermal effects. We direct the $x$ axis inside the Earth, with $x = 0$ corresponding to the Earth's surface, and solve the heat equation for $u = u(x,t)$:
$$u_t = ku_{xx}\,,\quad u(0,t) = f(t) \quad \mbox{for } x > 0, \mbox{ and } -\infty < t < \infty\,. \quad (7.2)$$
Geologists tell us that $k = 2\cdot 10^{-3}\,\frac{\mathrm{cm}^2}{\mathrm{sec}}$. Observe that, unlike what we had before, the initial condition is now prescribed along the $t$ axis, and the evolution happens along the $x$ axis. This is sometimes referred to as a sideways heat equation. We represent $f(t)$ by its complex Fourier series ($L = \frac{T}{2}$ for $T$-periodic functions)
$$f(t) = \sum_{n=-\infty}^{\infty}c_n\,e^{i\frac{2n\pi}{T}t}\,.$$
Similarly, we expand the solution $u = u(x,t)$:
$$u(x,t) = \sum_{n=-\infty}^{\infty}u_n(x)\,e^{i\frac{2n\pi}{T}t}\,.$$
The coefficients $u_n(x)$ are now complex-valued functions of $x$. Plugging this into (7.2) gives
$$u_n'' = p_n^2\,u_n\,,\quad u_n(0) = c_n\,,\quad \mbox{with } p_n^2 = \frac{2in\pi}{kT}\,. \quad (7.3)$$
Depending on whether $n$ is positive or negative, we have
$$2in = (1 \pm i)^2\,|n|\,,$$
and then
$$p_n = (1 \pm i)\,q_n\,,\quad \mbox{with } q_n = \sqrt{\frac{\pi|n|}{kT}}\,.$$
(It is plus in case $n > 0$, and minus for $n < 0$.) The solution of (7.3) is
$$u_n(x) = a_n\,e^{(1\pm i)q_nx} + b_n\,e^{-(1\pm i)q_nx}\,.$$
We must set here $a_n = 0$, to avoid solutions whose absolute value becomes infinite as $x \to \infty$. Then
$$u_n(x) = c_n\,e^{-(1\pm i)q_nx}\,.$$
Similarly, $u_0(x) = c_0$. We have
$$u(x,t) = \sum_{n=-\infty}^{\infty}c_n\,e^{-q_nx}\,e^{i\left[\frac{2n\pi}{T}t - (\pm)q_nx\right]}\,. \quad (7.4)$$
We write for $n > 0$
$$c_n = |c_n|\,e^{i\gamma_n}\,,$$
and transform (7.4) as follows:
$$u(x,t) = c_0 + \sum_{n=1}^{\infty}e^{-q_nx}\left(c_n\,e^{i\left(\frac{2n\pi}{T}t - q_nx\right)} + c_{-n}\,e^{-i\left(\frac{2n\pi}{T}t - q_nx\right)}\right)$$
$$= c_0 + \sum_{n=1}^{\infty}e^{-q_nx}\left(c_n\,e^{i\left(\frac{2n\pi}{T}t - q_nx\right)} + \bar c_n\,e^{-i\left(\frac{2n\pi}{T}t - q_nx\right)}\right)$$
$$= c_0 + \sum_{n=1}^{\infty}2|c_n|\,e^{-q_nx}\cos\left(\frac{2n\pi}{T}t + \gamma_n - q_nx\right)\,.$$
On the last step we used that $c_{-n} = \bar c_n$, and that $z + \bar z = 2\,\mathrm{Re}(z)$.
We see that the amplitude $2|c_n|e^{-q_nx}$ of the $n$-th wave is damped exponentially with $x$, and that this damping increases with $n$, so that
$$u(x,t) \approx c_0 + 2|c_1|\,e^{-q_1x}\cos\left(\frac{2\pi}{T}t + \gamma_1 - q_1x\right)\,.$$
We see that when $x$ changes, the cosine term is a shift of the function $\cos\frac{2\pi}{T}t$, i.e., a wave. If we select $x$ so that $q_1x = \pi$, we have a complete phase shift, i.e., the coolest temperatures occur in summer, and the warmest in winter. This $x$ is a good depth for a wine cellar. Not only are the seasonal variations very small there, but they also counteract any influence of air flow into the cellar.
The material of this section is based on the book of A. Sommerfeld [8]. I became aware of this application through the wonderful lectures of Henry P. McKean at the Courant Institute, NYU, in the late seventies.
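With the quoted value of $k$, the wine cellar depth determined by $q_1x = \pi$ comes out to a few meters. A back-of-envelope computation (the year length in seconds is the only added assumption):

```python
import math

k = 2e-3                      # cm^2/sec, thermal diffusivity, as quoted in the text
T = 365 * 24 * 3600           # period: one year, in seconds
q1 = math.sqrt(math.pi / (k * T))   # q_1 = sqrt(pi |n| / (k T)) with n = 1
depth_cm = math.pi / q1       # q_1 x = pi gives a complete phase shift
assert 400 < depth_cm < 500   # roughly 4.5 meters underground
```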
6.8 Laplace's equation in circular domains
Polar coordinates $(r, \theta)$ will be appropriate for circular domains, and it turns out that
$$\Delta u = u_{xx} + u_{yy} = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta}\,. \quad (8.1)$$
To justify (8.1), we begin by writing
$$u(x,y) = u(r(x,y), \theta(x,y))\,,$$
with
$$r = r(x,y) = \sqrt{x^2 + y^2}\,,\quad \theta = \theta(x,y) = \arctan\frac{y}{x}\,. \quad (8.2)$$
By the chain rule
$$u_x = u_rr_x + u_\theta\theta_x\,,$$
$$u_{xx} = u_{rr}r_x^2 + 2u_{r\theta}r_x\theta_x + u_{\theta\theta}\theta_x^2 + u_rr_{xx} + u_\theta\theta_{xx}\,.$$
Similarly
$$u_{yy} = u_{rr}r_y^2 + 2u_{r\theta}r_y\theta_y + u_{\theta\theta}\theta_y^2 + u_rr_{yy} + u_\theta\theta_{yy}\,,$$
and so
$$u_{xx} + u_{yy} = u_{rr}\left(r_x^2 + r_y^2\right) + 2u_{r\theta}\left(r_x\theta_x + r_y\theta_y\right) + u_{\theta\theta}\left(\theta_x^2 + \theta_y^2\right) + u_r\left(r_{xx} + r_{yy}\right) + u_\theta\left(\theta_{xx} + \theta_{yy}\right)\,.$$
Straightforward differentiation, using (8.2), shows that
$$r_x^2 + r_y^2 = 1\,,\quad r_x\theta_x + r_y\theta_y = 0\,,\quad \theta_x^2 + \theta_y^2 = \frac{1}{r^2}\,,\quad r_{xx} + r_{yy} = \frac{1}{r}\,,\quad \theta_{xx} + \theta_{yy} = 0\,,$$
and the formula (8.1) follows.
We now consider a circular plate: $x^2 + y^2 < R^2$, or $r < R$ in polar coordinates ($R > 0$ is its radius). The boundary of the plate consists of the points $(R, \theta)$, with $0 \le \theta \le 2\pi$. Assume that the temperatures at the boundary points, $u(R,\theta)$, are prescribed by a given function $f(\theta)$, of period $2\pi$. What are the steady state temperatures $u(r,\theta)$ inside the plate? We assume that the function $u(r,\theta)$ has period $2\pi$ in $\theta$.
We need to solve
$$\Delta u = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0\,,\quad \mbox{for } r < R$$
$$u(R,\theta) = f(\theta)\,.$$
We perform separation of variables, looking for the solution in the form $u(r,\theta) = F(r)G(\theta)$. Plugging this into the equation gives
$$F''(r)G(\theta) + \frac{1}{r}F'(r)G(\theta) = -\frac{1}{r^2}F(r)G''(\theta)\,.$$
We multiply both sides by $r^2$, and divide by $F(r)G(\theta)$:
$$\frac{r^2F''(r) + rF'(r)}{F(r)} = -\frac{G''(\theta)}{G(\theta)} = \lambda\,.$$
This gives
$$G'' + \lambda G = 0\,,\quad G(\theta) \mbox{ is } 2\pi \mbox{ periodic},$$
$$r^2F''(r) + rF'(r) - \lambda F(r) = 0\,.$$
The first problem has non-trivial solutions when $\lambda = \lambda_n = n^2$, and when $\lambda = \lambda_0 = 0$, and they are
$$G_n(\theta) = A_n\cos n\theta + B_n\sin n\theta\,,\quad G_0 = A_0\,,$$
with arbitrary constants $A_0$, $A_n$ and $B_n$. Then the second equation becomes
$$r^2F''(r) + rF'(r) - n^2F(r) = 0\,.$$
This is an Euler equation! Its general solution is
$$F(r) = c_1r^n + c_2r^{-n}\,. \quad (8.3)$$
We have to select $c_2 = 0$, to avoid infinite temperature at $r = 0$, i.e., we select $F_n(r) = r^n$. When $n = 0$, the general solution is $F(r) = c_1\ln r + c_2$. Again, we select $c_1 = 0$, and $F_0(r) = 1$. The function
$$u(r,\theta) = F_0(r)G_0(\theta) + \sum_{n=1}^{\infty}F_n(r)G_n(\theta) = A_0 + \sum_{n=1}^{\infty}r^n\left(A_n\cos n\theta + B_n\sin n\theta\right)$$
satisfies the Laplace equation for $r < R$. Turning to the boundary condition,
$$u(R,\theta) = A_0 + \sum_{n=1}^{\infty}R^n\left(A_n\cos n\theta + B_n\sin n\theta\right) = f(\theta)\,.$$
Expand $f(\theta)$ in its Fourier series
$$f(\theta) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos n\theta + b_n\sin n\theta\right)\,. \quad (8.4)$$
Then we need to select $A_0 = a_0$, $A_nR^n = a_n$ and $B_nR^n = b_n$, i.e., $A_n = \frac{a_n}{R^n}$ and $B_n = \frac{b_n}{R^n}$. We conclude that
$$u(r,\theta) = a_0 + \sum_{n=1}^{\infty}\left(\frac{r}{R}\right)^n\left(a_n\cos n\theta + b_n\sin n\theta\right)\,.$$
Example Solve
$$\Delta u = 0\,,\quad \mbox{for } x^2 + y^2 < 4$$
$$u = x^2 - 3y \quad \mbox{on } x^2 + y^2 = 4\,.$$
Using that $x = 2\cos\theta$, and $y = 2\sin\theta$ on the boundary, we have
$$x^2 - 3y = 4\cos^2\theta - 6\sin\theta = 2 + 2\cos 2\theta - 6\sin\theta\,.$$
Then
$$u(r,\theta) = 2 + 2\left(\frac{r}{2}\right)^2\cos 2\theta - 6\,\frac{r}{2}\sin\theta = 2 + \frac{1}{2}r^2\cos 2\theta - 3r\sin\theta\,.$$
In Cartesian coordinates this solution is (using that $\cos 2\theta = \cos^2\theta - \sin^2\theta$)
$$u(x,y) = 2 + \frac{1}{2}\left(x^2 - y^2\right) - 3y\,.$$
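This answer can be double-checked directly: the displayed $u$ is a harmonic polynomial, and it agrees with $x^2 - 3y$ on the circle of radius $2$ (a small numerical sanity check):

```python
import math

def u(x, y):
    # u_xx + u_yy = 1 + (-1) = 0, so u is harmonic everywhere.
    return 2 + 0.5 * (x ** 2 - y ** 2) - 3 * y

for j in range(21):
    theta = 0.3 * j
    x, y = 2 * math.cos(theta), 2 * math.sin(theta)
    assert abs(u(x, y) - (x ** 2 - 3 * y)) < 1e-12
```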
Next, we solve the exterior problem
$$\Delta u = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0\,,\quad \mbox{for } r > R$$
$$u(R,\theta) = f(\theta)\,.$$
Physically, we have a plate with the disc $r < R$ removed. Outside this disc the plate is so large that we may think of it as extending to infinity. We follow the same steps, and in (8.3) we select $c_1 = 0$ to avoid infinite temperatures as $r \to \infty$. We conclude that
$$u(r,\theta) = a_0 + \sum_{n=1}^{\infty}\left(\frac{R}{r}\right)^n\left(a_n\cos n\theta + b_n\sin n\theta\right)\,,$$
with the coefficients from the Fourier series of $f(\theta)$ in (8.4).
6.9 Sturm-Liouville problems
Let us recall the eigenvalue problem
$$y'' + \lambda y = 0\,,\quad 0 < x < L$$
$$y(0) = y(L) = 0$$
on some interval $(0, L)$. Its eigenfunctions $\sin\frac{n\pi}{L}x$, $n = 1, 2, \ldots$, are the building blocks of the Fourier sine series on $(0, L)$. These eigenfunctions are orthogonal on $(0, L)$, i.e.,
$$\int_0^L\sin\frac{n\pi}{L}x\,\sin\frac{m\pi}{L}x\,dx = 0\quad \mbox{for any } m \ne n\,.$$
Similarly, the eigenfunctions of
$$y'' + \lambda y = 0\,,\quad 0 < x < L$$
$$y'(0) = y'(L) = 0\,,$$
namely $1$, $\cos\frac{n\pi}{L}x$, $n = 1, 2, \ldots$, give rise to the Fourier cosine series on $(0, L)$. It turns out that, under some conditions, solutions of eigenvalue problems lead to their own type of Fourier series on $(0, L)$.
On an interval $(a, b)$ we consider the eigenvalue problem
$$\left(p(x)y'\right)' + \lambda r(x)y = 0\,, \quad (9.1)$$
together with the boundary conditions
$$\alpha y(a) + \beta y'(a) = 0\,,\quad \gamma y(b) + \delta y'(b) = 0\,. \quad (9.2)$$
The given differentiable function $p(x)$ and continuous function $r(x)$ are assumed to be positive on $[a, b]$. The boundary conditions are called separated, with the first one at the left end $x = a$, and the other one at the right end $x = b$. The constants $\alpha$, $\beta$, $\gamma$ and $\delta$ are given; however, we cannot allow both constants in the same condition to be zero, i.e., we assume that $\alpha^2 + \beta^2 \ne 0$ and $\gamma^2 + \delta^2 \ne 0$. Recall that by eigenfunctions we mean non-trivial solutions of (9.1), satisfying the boundary conditions in (9.2).
Theorem 1 Assume that $y(x)$ is an eigenfunction corresponding to an eigenvalue $\lambda$, while $z(x)$ is an eigenfunction corresponding to an eigenvalue $\mu$, and $\lambda \ne \mu$. Then $y(x)$ and $z(x)$ are orthogonal on $[a, b]$ with weight $r(x)$, i.e.,
$$\int_a^b y(x)z(x)\,r(x)\,dx = 0\,.$$
Proof: The eigenfunction $z(x)$ satisfies
$$\left(p(x)z'\right)' + \mu r(x)z = 0 \quad (9.3)$$
$$\alpha z(a) + \beta z'(a) = 0\,,\quad \gamma z(b) + \delta z'(b) = 0\,.$$
We take the equation (9.1) multiplied by $z(x)$, and subtract from it the equation (9.3) multiplied by $y(x)$, obtaining
$$\left(p(x)y'\right)'z(x) - \left(p(x)z'\right)'y(x) + (\lambda - \mu)\,r(x)y(x)z(x) = 0\,.$$
We can rewrite this as
$$\left[p\left(y'z - yz'\right)\right]' + (\lambda - \mu)\,r(x)y(x)z(x) = 0\,.$$
Integrate this over $[a, b]$:
$$\left. p\left(y'z - yz'\right)\right|_a^b + (\lambda - \mu)\int_a^b y(x)z(x)\,r(x)\,dx = 0\,. \quad (9.4)$$
We shall show that
$$p(b)\left(y'(b)z(b) - y(b)z'(b)\right) = 0\,, \quad (9.5)$$
$$p(a)\left(y'(a)z(a) - y(a)z'(a)\right) = 0\,. \quad (9.6)$$
This means that the first term in (9.4) is zero. Then the second term in (9.4) is also zero, and therefore $\int_a^b y(x)z(x)\,r(x)\,dx = 0$, because $\lambda - \mu \ne 0$.
We shall justify (9.5); the proof of (9.6) is similar. Consider first the case when $\delta = 0$. Then the corresponding boundary conditions simplify to read $y(b) = 0$, $z(b) = 0$, and (9.5) follows. In the other case, when $\delta \ne 0$, we can express $y'(b) = -\frac{\gamma}{\delta}y(b)$, $z'(b) = -\frac{\gamma}{\delta}z(b)$, and then
$$y'(b)z(b) - y(b)z'(b) = -\frac{\gamma}{\delta}\,y(b)z(b) + \frac{\gamma}{\delta}\,y(b)z(b) = 0\,.$$
Theorem 2 The eigenvalues of (9.1), (9.2) are real numbers.
Proof: Assume, on the contrary, that an eigenvalue $\lambda$ is not real, i.e., $\bar\lambda \ne \lambda$, and the corresponding eigenfunction $y(x)$ is complex-valued. Taking the complex conjugate of (9.1) and (9.2), we get
$$\left(p(x)\bar y'\right)' + \bar\lambda\,r(x)\bar y = 0$$
$$\alpha\bar y(a) + \beta\bar y'(a) = 0\,,\quad \gamma\bar y(b) + \delta\bar y'(b) = 0\,.$$
It follows that $\bar\lambda$ is also an eigenvalue, and $\bar y$ is the corresponding eigenfunction. By the preceding theorem
$$\int_a^b y(x)\bar y(x)\,r(x)\,dx = \int_a^b|y(x)|^2\,r(x)\,dx = 0\,.$$
The second integral involves a non-negative function, and it can be zero only if $y(x) \equiv 0$. But an eigenfunction cannot be zero. We have a contradiction, which was caused by the assumption that $\lambda$ is not real. So, only real eigenvalues are possible.
Example On the interval $(0, \pi)$ we consider an eigenvalue problem
$$y'' + \lambda y = 0$$
$$y(0) = 0\,,\quad y'(\pi) - y(\pi) = 0\,.$$
Case 1. $\lambda < 0$. We may write $\lambda = -k^2$, with $k > 0$. The general solution is $y = c_1e^{kx} + c_2e^{-kx}$. Using the boundary conditions, we compute $c_1 = c_2 = 0$, i.e., $y = 0$, and we have no negative eigenvalues.
Case 2. $\lambda = 0$. The general solution is $y = c_1x + c_2$. Again we have $c_1 = c_2 = 0$, i.e., $\lambda = 0$ is not an eigenvalue.
Case 3. $\lambda > 0$. We may write $\lambda = k^2$, with $k > 0$. The general solution is $y = c_1\cos kx + c_2\sin kx$, and $c_1 = 0$ by the first boundary condition. The second boundary condition implies that
$$c_2\left(k\cos\pi k - \sin\pi k\right) = 0\,.$$
We need $c_2 \ne 0$ to get a non-trivial solution, therefore the quantity in the bracket must be zero, which implies that
$$\tan\pi k = k\,.$$
This equation has infinitely many solutions $0 < k_1 < k_2 < k_3 < \cdots$, as can be seen by drawing the graphs of $y = k$ and $y = \tan\pi k$ in the $ky$-plane. We obtain infinitely many eigenvalues $\lambda_i = k_i^2$, and the corresponding eigenfunctions $y_i = \sin k_ix$, $i = 1, 2, 3, \ldots$. (Observe that the $-k_i$'s are also solutions of this equation, but they lead to the same eigenvalues and eigenfunctions.)
Using that $\tan\pi k_i = k_i$, we verify that for all $i \ne j$
$$\int_0^{\pi}\sin k_ix\,\sin k_jx\,dx = \frac{1}{k_i^2 - k_j^2}\left(k_j\cos\pi k_j\,\sin\pi k_i - k_i\cos\pi k_i\,\sin\pi k_j\right) = 0\,,$$
which serves to illustrate the first theorem above.
As we mentioned before, the eigenfunctions $y_i(x)$ of (9.1), (9.2) allow us to do Fourier series, i.e., we can represent
$$f(x) = \sum_{j=1}^{\infty}c_jy_j(x)\,.$$
To find the coefficients, we multiply both sides by $y_i(x)r(x)$, and integrate over $[a, b]$:
$$\int_a^b f(x)y_i(x)r(x)\,dx = \sum_{j=1}^{\infty}c_j\int_a^b y_j(x)y_i(x)r(x)\,dx\,.$$
By the first theorem above, for all $j \ne i$ the integrals on the right are zero. So the sum on the right is equal to $c_i\int_a^b y_i^2(x)r(x)\,dx$. Therefore
$$c_i = \frac{\int_a^b f(x)y_i(x)r(x)\,dx}{\int_a^b y_i^2(x)r(x)\,dx}\,. \quad (9.7)$$
For the above example, the Fourier series is
$$f(x) = \sum_{i=1}^{\infty}c_i\sin k_ix\,,$$
with
$$c_i = \frac{\int_0^{\pi}f(x)\sin k_ix\,dx}{\int_0^{\pi}\sin^2k_ix\,dx}\,.$$
Using symbolic software, like Mathematica, it is easy to approximately compute the $k_i$'s, and the integrals for the $c_i$'s.
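The same computation is easy in plain Python: bisection finds the $k_i$ (there is one root of $\tan\pi k = k$ in each interval $(m - \frac12, m + \frac12)$, $m = 1, 2, \ldots$), and quadrature then confirms the orthogonality of the eigenfunctions. A sketch:

```python
import math

def root(m, tol=1e-12):
    """Bisection for the root of tan(pi k) = k in (m - 1/2, m + 1/2)."""
    lo, hi = m - 0.5 + 1e-9, m + 0.5 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.tan(math.pi * mid) - mid < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k1, k2 = root(1), root(2)
assert abs(math.tan(math.pi * k1) - k1) < 1e-6

# Orthogonality of sin(k1 x) and sin(k2 x) on (0, pi), midpoint rule.
steps = 100000
h = math.pi / steps
integral = sum(math.sin(k1 * (i + 0.5) * h) * math.sin(k2 * (i + 0.5) * h)
               for i in range(steps)) * h
assert abs(integral) < 1e-7
```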
6.9.1 Fourier-Bessel series
Let $F = F(r)$ be a solution of the following eigenvalue problem on the interval $(0, R)$:
$$F'' + \frac{1}{r}F' + \lambda F = 0 \quad (9.8)$$
$$F'(0) = 0\,,\quad F(R) = 0\,.$$
Observe that we can rewrite the equation in the form (9.1) as
$$\left(rF'\right)' + \lambda rF = 0\,,$$
so that any two eigenfunctions of (9.8) are orthogonal with weight $r$.
We shall reduce the equation in (9.8) to Bessel's equation of order zero. To this end, we make a change of variables $r \to x$, by letting $r = \frac{1}{\sqrt\lambda}x$. By the chain rule
$$F_r = \sqrt\lambda\,F_x\,,\quad F_{rr} = \lambda F_{xx}\,.$$
Then the problem (9.8) becomes
$$\lambda F_{xx} + \frac{\sqrt\lambda}{\frac{1}{\sqrt\lambda}x}\,F_x + \lambda F = 0$$
$$F_x(0) = 0\,,\quad F\left(\sqrt\lambda\,R\right) = 0\,.$$
We divide the equation by $\lambda$, and use primes again to denote the derivatives in $x$:
$$F'' + \frac{1}{x}F' + F = 0$$
$$F'(0) = 0\,,\quad F\left(\sqrt\lambda\,R\right) = 0\,.$$
This equation is Bessel's equation of order zero. The Bessel function $J_0(x)$, which we considered before, satisfies this equation, as well as the condition $F'(0) = 0$. Recall that the function $J_0(x)$ has infinitely many positive roots $r_1 < r_2 < r_3 < \cdots$. In order to satisfy the second boundary condition, we need
$$\sqrt\lambda\,R = r_i\,,$$
which, after returning to the original variable $r$ (observe that $F(x) = F\left(\sqrt\lambda\,r\right)$), gives us the eigenvalues and eigenfunctions of (9.8):
$$\lambda_i = \frac{r_i^2}{R^2}\,,\quad F_i(r) = J_0\left(\frac{r_i}{R}r\right)\,.$$
The Fourier-Bessel series is then the expansion using the eigenfunctions $F_i(r)$:
$$f(r) = \sum_{i=1}^{\infty}c_iJ_0\left(\frac{r_i}{R}r\right)\,, \quad (9.9)$$
where the $c_i$ are (in view of (9.7))
$$c_i = \frac{\int_0^R f(r)J_0\left(\frac{r_i}{R}r\right)r\,dr}{\int_0^R J_0^2\left(\frac{r_i}{R}r\right)r\,dr}\,. \quad (9.10)$$
6.9.2 Cooling of a cylindrical tank
Suppose we have a cylindrical tank $x^2 + y^2 \le R^2$, $0 \le z \le H$, whose temperatures are independent of $z$, i.e., $u = u(x,y,t)$. The heat equation is then
$$u_t = k\Delta u = k\left(u_{xx} + u_{yy}\right)\,.$$
We assume that the boundary of the cylinder is kept at zero temperature, while the initial temperatures are prescribed to be $f(r)$. Because the initial temperatures do not depend on $\theta$, it is natural to expect that $u = u(x,y,t)$ is independent of $\theta$, i.e., $u = u(r,t)$. Then
$$\Delta u = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = u_{rr} + \frac{1}{r}u_r\,.$$
We need to solve the following problem
$$u_t = k\left(u_{rr} + \frac{1}{r}u_r\right)\quad \mbox{for } r < R, \mbox{ and } t > 0$$
$$u_r(0,t) = 0\,,\quad u(R,t) = 0\quad \mbox{for } t > 0$$
$$u(r,0) = f(r)\,,$$
with a given function $f(r)$. We have added the condition $u_r(0,t) = 0$, because we expect the temperatures to have a critical point in the middle of the tank, for all times $t$. We separate variables, writing $u(r,t) = F(r)G(t)$. We plug this into our equation, then divide both sides by $kF(r)G(t)$:
$$F(r)G'(t) = k\left(F''(r) + \frac{1}{r}F'(r)\right)G(t)\,,$$
$$\frac{G'(t)}{kG(t)} = \frac{F''(r) + \frac{1}{r}F'(r)}{F(r)} = -\lambda\,,$$
which gives
$$F'' + \frac{1}{r}F' + \lambda F = 0$$
$$F'(0) = 0\,,\quad F(R) = 0\,,$$
$$\frac{G'(t)}{kG(t)} = -\lambda\,. \quad (9.11)$$
The eigenvalue problem for $F(r)$ we have solved before, with $\lambda_i = \frac{r_i^2}{R^2}$ and $F_i(r) = J_0\left(\frac{r_i}{R}r\right)$. Using $\lambda = \lambda_i$ in (9.11), we compute
$$G_i(t) = c_i\,e^{-k\frac{r_i^2}{R^2}t}\,.$$
The function
$$u(r,t) = \sum_{i=1}^{\infty}c_i\,e^{-k\frac{r_i^2}{R^2}t}J_0\left(\frac{r_i}{R}r\right) \quad (9.12)$$
satisfies our equation, and the boundary conditions. The initial condition
$$u(r,0) = \sum_{i=1}^{\infty}c_iJ_0\left(\frac{r_i}{R}r\right) = f(r)$$
tells us that we must choose the $c_i$'s to be the coefficients of the Fourier-Bessel series, given by (9.10). Conclusion: the series in (9.12), with the $c_i$'s computed by (9.10), gives the solution to our problem.
6.10 Green's function
We wish to solve the non-homogeneous boundary value problem
$$\left(p(x)y'\right)' + r(x)y = f(x) \quad (10.1)$$
$$y(a) = 0\,,\quad y(b) = 0\,.$$
Here $p(x)$, $r(x)$ and $f(x)$ are given functions, $r(x) > 0$ on $[a, b]$. We shall consider the corresponding homogeneous equation
$$\left(p(x)y'\right)' + r(x)y = 0\,, \quad (10.2)$$
and the corresponding homogeneous boundary value problem
$$\left(p(x)y'\right)' + r(x)y = 0 \quad (10.3)$$
$$y(a) = 0\,,\quad y(b) = 0\,.$$
Recall the concept of the Wronskian determinant of two functions $y_1(x)$ and $y_2(x)$, or Wronskian, for short:
$$W(x) = \begin{vmatrix} y_1(x) & y_2(x)\\ y_1'(x) & y_2'(x)\end{vmatrix} = y_1(x)y_2'(x) - y_1'(x)y_2(x)\,.$$
Lemma 6.10.1 Let $y_1(x)$ and $y_2(x)$ be any two solutions of the homogeneous equation (10.2). Then $p(x)W(x)$ is a constant.
Proof: We need to show that $\left(p(x)W(x)\right)' = 0$. Compute
$$\left(p(x)W(x)\right)' = y_1'\,p(x)y_2' + y_1\left(p(x)y_2'\right)' - y_1'\,p(x)y_2' - \left(p(x)y_1'\right)'y_2$$
$$= y_1\left(p(x)y_2'\right)' - \left(p(x)y_1'\right)'y_2 = -r(x)y_1y_2 + r(x)y_1y_2 = 0\,.$$
On the last step we had expressed $\left(p(x)y_1'\right)' = -r(x)y_1$, and $\left(p(x)y_2'\right)' = -r(x)y_2$, by using the equation (10.2), which both $y_1$ and $y_2$ satisfy.
We make the following fundamental assumption: the homogeneous boundary value problem (10.3) has only the trivial solution $y = 0$. Define $y_1(x)$ to be a non-trivial solution of the homogeneous equation (10.2) together with the condition $y(a) = 0$ (which can be achieved, e.g., by adding a second initial condition $y'(a) = 1$). By our fundamental assumption, $y_1(b) \ne 0$. Similarly, we define $y_2(x)$ to be a non-trivial solution of the homogeneous equation (10.2) together with the condition $y(b) = 0$. By the fundamental assumption, $y_2(a) \ne 0$. The functions $y_1(x)$ and $y_2(x)$ form a fundamental set of the homogeneous equation (10.2) (they are not constant multiples of one another). To find a solution of the non-homogeneous equation (10.1), we use the variation of parameters method, i.e., we look for a solution in the form
$$y(x) = u_1(x)y_1(x) + u_2(x)y_2(x)\,, \quad (10.4)$$
with the functions $u_1(x)$ and $u_2(x)$ to be chosen. We shall require these functions to satisfy
$$u_1(b) = 0\,,\quad u_2(a) = 0\,. \quad (10.5)$$
Then $y(x)$ in (10.4) satisfies our boundary conditions $y(a) = y(b) = 0$. We now put the equation (10.1) into the form we considered before:
$$y'' + \frac{p'(x)}{p(x)}y' + \frac{r(x)}{p(x)}y = \frac{f(x)}{p(x)}\,.$$
Then by the formulas we had
$$u_1'(x) = -\frac{y_2(x)f(x)}{p(x)W(x)} = -\frac{y_2(x)f(x)}{K}\,, \quad (10.6)$$
$$u_2'(x) = \frac{y_1(x)f(x)}{p(x)W(x)} = \frac{y_1(x)f(x)}{K}\,, \quad (10.7)$$
where $W$ is the Wronskian of $y_1(x)$ and $y_2(x)$, and by $K$ we denote the constant that $p(x)W(x)$ is equal to.
Integrating (10.7), and using the condition $u_2(a) = 0$, we get
$$u_2(x) = \int_a^x\frac{y_1(\xi)f(\xi)}{K}\,d\xi\,.$$
Similarly, integrating (10.6), and using the condition $u_1(b) = 0$, we get
$$u_1(x) = \int_x^b\frac{y_2(\xi)f(\xi)}{K}\,d\xi\,.$$
Using these functions in (10.4), we get the solution of our problem
$$y(x) = y_1(x)\int_x^b\frac{y_2(\xi)f(\xi)}{K}\,d\xi + y_2(x)\int_a^x\frac{y_1(\xi)f(\xi)}{K}\,d\xi\,. \quad (10.8)$$
It is customary to define Green's function
$$G(x,\xi) = \begin{cases}\dfrac{y_1(x)y_2(\xi)}{K} & \mbox{for } a \le x \le \xi\\[0.5em] \dfrac{y_2(x)y_1(\xi)}{K} & \mbox{for } \xi \le x \le b\,,\end{cases} \quad (10.9)$$
so that the solution (10.8) can be written as
$$y(x) = \int_a^b G(x,\xi)f(\xi)\,d\xi\,. \quad (10.10)$$
Example Find Green's function and the solution of the problem
$$y'' + y = f(x)$$
$$y(0) = 0\,,\quad y(1) = 0\,.$$
The function $\sin(x - a)$ solves the corresponding homogeneous equation, for any constant $a$. Therefore, we may take $y_1(x) = \sin x$ and $y_2(x) = \sin(x - 1)$. Compute
$$W = y_1(x)y_2'(x) - y_1'(x)y_2(x) = \sin x\cos(x - 1) - \cos x\sin(x - 1) = \sin 1\,,$$
which follows by using that $\cos(x - 1) = \cos x\cos 1 + \sin x\sin 1$ and $\sin(x - 1) = \sin x\cos 1 - \cos x\sin 1$. Then
$$G(x,\xi) = \begin{cases}\dfrac{\sin x\,\sin(\xi - 1)}{\sin 1} & \mbox{for } 0 \le x \le \xi\\[0.5em] \dfrac{\sin(x - 1)\,\sin\xi}{\sin 1} & \mbox{for } \xi \le x \le 1\,,\end{cases}$$
and the solution is
$$y(x) = \int_0^1 G(x,\xi)f(\xi)\,d\xi\,.$$
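For a concrete $f$, the formula (10.10) can be tested against a solution found by hand. Taking $f \equiv 1$ (so the exact solution of $y'' + y = 1$, $y(0) = y(1) = 0$, is $y = 1 - \cos x - \frac{1 - \cos 1}{\sin 1}\sin x$), a sketch:

```python
import math

def G(x, xi):
    """Green's function of y'' + y = f(x), y(0) = y(1) = 0."""
    if x <= xi:
        return math.sin(x) * math.sin(xi - 1) / math.sin(1)
    return math.sin(x - 1) * math.sin(xi) / math.sin(1)

def y(x, steps=20000):
    """y(x) = integral_0^1 G(x, xi) * 1 d(xi), midpoint rule."""
    h = 1.0 / steps
    return sum(G(x, (i + 0.5) * h) for i in range(steps)) * h

C = (1 - math.cos(1)) / math.sin(1)
for x in [0.1, 0.35, 0.5, 0.8]:
    exact = 1 - math.cos(x) - C * math.sin(x)
    assert abs(y(x) - exact) < 1e-6
```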
Example Find Green's function and the solution of the problem
$$x^2y'' - 2xy' + 2y = f(x)$$
$$y(1) = 0\,,\quad y(2) = 0\,.$$
The corresponding homogeneous equation
$$x^2y'' - 2xy' + 2y = 0$$
is an Euler equation, whose general solution is $y(x) = c_1x + c_2x^2$. We then find $y_1(x) = x - x^2$, and $y_2(x) = 2x - x^2$. Compute
$$W = y_1(x)y_2'(x) - y_1'(x)y_2(x) = \left(x - x^2\right)(2 - 2x) - (1 - 2x)\left(2x - x^2\right) = x^2\,.$$
Turning to the construction of $G(x,\xi)$, we observe that our equation is not in the form we considered above. To put it into the right form, we divide the equation by $x^2$:
$$y'' - \frac{2}{x}y' + \frac{2}{x^2}y = \frac{f(x)}{x^2}\,,$$
and then multiply this equation by the integrating factor $\mu = e^{\int\left(-\frac{2}{x}\right)dx} = e^{-2\ln x} = \frac{1}{x^2}$, obtaining
$$\left(\frac{1}{x^2}y'\right)' + \frac{2}{x^4}y = \frac{f(x)}{x^4}\,.$$
Here $p(x) = \frac{1}{x^2}$, and $K = p(x)W(x) = 1$. Then
$$G(x,\xi) = \begin{cases}\left(x - x^2\right)\left(2\xi - \xi^2\right) & \mbox{for } 1 \le x \le \xi\\ \left(2x - x^2\right)\left(\xi - \xi^2\right) & \mbox{for } \xi \le x \le 2\,,\end{cases}$$
and the solution is
$$y(x) = \int_1^2 G(x,\xi)\,\frac{f(\xi)}{\xi^4}\,d\xi\,.$$
Finally, we observe that the same construction works for general separated boundary conditions (9.2). If $y_1(x)$ and $y_2(x)$ are solutions of the homogeneous equation satisfying the boundary conditions at $x = a$ and $x = b$ respectively, then the formula (10.9) gives the Green's function.
6.11 Fourier Transform
Recall the complex form of the Fourier series. A function $f(x)$ defined on $(-L, L)$ can be represented by the series
$$f(x) = \sum_{n=-\infty}^{\infty}c_n\,e^{i\frac{n\pi}{L}x}\,, \quad (11.1)$$
with the coefficients
$$c_n = \frac{1}{2L}\int_{-L}^{L}f(\xi)e^{-i\frac{n\pi}{L}\xi}\,d\xi\,,\quad n = 0, \pm 1, \pm 2, \ldots\,. \quad (11.2)$$
We use (11.2) in (11.1):
$$f(x) = \sum_{n=-\infty}^{\infty}\frac{1}{2L}\int_{-L}^{L}f(\xi)e^{i\frac{n\pi}{L}(x - \xi)}\,d\xi\,. \quad (11.3)$$
Now assume that the interval $(-\infty, \infty)$ along some axis, which we call the $s$-axis, is subdivided into pieces, using the subdivision points $s_n = \frac{n\pi}{L}$. The length of each interval is $\Delta s = \frac{\pi}{L}$. We rewrite (11.3) as
$$f(x) = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\left(\int_{-L}^{L}f(\xi)e^{is_n(x - \xi)}\,d\xi\right)\Delta s\,, \quad (11.4)$$
so that we can regard this as a Riemann sum of a certain integral in $s$ over the interval $(-\infty, \infty)$. Let now $L \to \infty$. Then $\Delta s \to 0$, and the Riemann sum in (11.4) converges to
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(\xi)e^{is(x - \xi)}\,d\xi\,ds\,.$$
This formula is known as the Fourier integral. Our derivation of it is made rigorous in more advanced books. We rewrite this integral as
$$f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{isx}\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(\xi)e^{-is\xi}\,d\xi\right]ds\,. \quad (11.5)$$
We define the Fourier transform by
$$F(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(\xi)e^{-is\xi}\,d\xi\,.$$
The inverse Fourier transform is then
$$f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{isx}F(s)\,ds\,.$$
As with the Laplace transform, we use capital letters to denote Fourier transforms. We shall also use the operator notation $F(s) = \mathcal{F}(f(x))$.
Example Let $f(x) = \begin{cases} 1 & \mbox{for } |x| \le a\\ 0 & \mbox{for } |x| > a\,.\end{cases}$
Then using Euler's formula
$$F(s) = \frac{1}{\sqrt{2\pi}}\int_{-a}^{a}e^{-is\xi}\,d\xi = \frac{2}{\sqrt{2\pi}}\,\frac{e^{ias} - e^{-ias}}{2is} = \sqrt{\frac{2}{\pi}}\,\frac{\sin as}{s}\,.$$
Assume that $f(x) \to 0$, as $x \to \pm\infty$. Using integration by parts
$$\mathcal{F}\left(f'(x)\right) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f'(\xi)e^{-is\xi}\,d\xi = \frac{is}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(\xi)e^{-is\xi}\,d\xi = is\,F(s)\,.$$
It follows that
$$\mathcal{F}\left(f''(x)\right) = is\,\mathcal{F}\left(f'(x)\right) = -s^2F(s)\,.$$
6.12 Problems on infinite domains
6.12.1 Evaluation of some integrals
The following integral occurs often:
$$I = \int_{-\infty}^{\infty}e^{-x^2}\,dx = \sqrt\pi\,. \quad (12.1)$$
Indeed,
$$I^2 = \int_{-\infty}^{\infty}e^{-x^2}\,dx\int_{-\infty}^{\infty}e^{-y^2}\,dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-x^2 - y^2}\,dx\,dy = \int_0^{2\pi}\int_0^{\infty}e^{-r^2}r\,dr\,d\theta = \pi\,,$$
and (12.1) follows. We used polar coordinates to evaluate the double integral.
We shall show that for any $x$
$$F(x) = \int_0^{\infty}e^{-z^2}\cos xz\,dz = \frac{\sqrt\pi}{2}\,e^{-\frac{x^2}{4}}\,. \quad (12.2)$$
Using integration by parts, we compute ($d$ denotes the differential)
$$F'(x) = \int_0^{\infty}e^{-z^2}(-z\sin xz)\,dz = \frac{1}{2}\int_0^{\infty}\sin xz\,d\left(e^{-z^2}\right) = -\frac{x}{2}\int_0^{\infty}e^{-z^2}\cos xz\,dz\,.$$
That is,
$$F'(x) = -\frac{x}{2}F(x)\,. \quad (12.3)$$
Also
$$F(0) = \frac{\sqrt\pi}{2}\,, \quad (12.4)$$
in view of (12.1). Solving the differential equation (12.3) together with the initial condition (12.4), we justify (12.2).
The last integral we need is just a Laplace transform ($a$ is a constant):
$$\int_0^{\infty}e^{-sy}\cos as\,ds = \frac{y}{y^2 + a^2}\,. \quad (12.5)$$
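The formula (12.2) can be tested by truncating the integral and applying a quadrature rule (the tail beyond $z = 10$ is far below the tolerance used):

```python
import math

def F(x, zmax=10.0, steps=200000):
    """Approximate integral_0^infinity e^{-z^2} cos(x z) dz (midpoint rule on (0, zmax))."""
    h = zmax / steps
    total = 0.0
    for i in range(steps):
        z = (i + 0.5) * h
        total += math.exp(-z * z) * math.cos(x * z)
    return total * h

for x in [0.0, 1.0, 2.5]:
    closed = math.sqrt(math.pi) / 2 * math.exp(-x * x / 4)
    assert abs(F(x) - closed) < 1e-7
```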
6.12.2 Heat equation for $-\infty < x < \infty$
We solve the initial value problem
$$u_t = ku_{xx}\quad -\infty < x < \infty\,,\ t > 0 \quad (12.6)$$
$$u(x,0) = f(x)\quad -\infty < x < \infty\,.$$
Here $u(x,t)$ gives the temperature at $x$ and time $t$ of an infinite bar. Initial temperatures are described by the given function $f(x)$, and $k > 0$ is a given constant. The Fourier transform $\mathcal{F}(u(x,t)) = U(s,t)$ depends on $s$ and on $t$, which we regard here as a parameter. Observe that $\mathcal{F}(u_t(x,t)) = U_t(s,t)$ (as follows by writing the definition of the Fourier transform, and differentiating in $t$). Applying the Fourier transform, we have
$$U_t = -ks^2U$$
$$U(s,0) = F(s)\,.$$
Integrating this (we now regard $s$ as a parameter),
$$U(s,t) = F(s)\,e^{-ks^2t}\,.$$
To get the solution, we apply the inverse Fourier transform
$$u(x,t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{isx}F(s)e^{-ks^2t}\,ds = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{is(x - \xi) - ks^2t}f(\xi)\,d\xi\,ds$$
$$= \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}e^{is(x - \xi) - ks^2t}\,ds\right]f(\xi)\,d\xi\,.$$
We denote by $K$ the integral in the brackets, i.e., $K = \int_{-\infty}^{\infty}e^{is(x - \xi) - ks^2t}\,ds$.
To evaluate $K$, we use Euler's formula
$$K = \int_{-\infty}^{\infty}\left[\cos s(x - \xi) + i\sin s(x - \xi)\right]e^{-ks^2t}\,ds = 2\int_0^{\infty}\cos s(x - \xi)\,e^{-ks^2t}\,ds\,,$$
because $\cos s(x - \xi)$ is an even function of $s$, and $\sin s(x - \xi)$ is odd. To evaluate the last integral, we make a change of variables $s \to z$, by setting
$$\sqrt{kt}\,s = z\,,$$
and then use the integral (12.2):
$$K = \frac{2}{\sqrt{kt}}\int_0^{\infty}e^{-z^2}\cos\left(\frac{x - \xi}{\sqrt{kt}}\,z\right)dz = \sqrt{\frac{\pi}{kt}}\,e^{-\frac{(x - \xi)^2}{4kt}}\,.$$
With $K$ evaluated, we conclude
$$u(x,t) = \frac{1}{2\sqrt{\pi kt}}\int_{-\infty}^{\infty}e^{-\frac{(x - \xi)^2}{4kt}}f(\xi)\,d\xi\,. \quad (12.7)$$
Let us now assume that the initial temperature is positive on some small interval, and is zero outside of this interval. Then $u(x,t) > 0$ for all $-\infty < x < \infty$ and $t > 0$; that is, not only do the temperatures become positive far from the heat source, this happens practically instantaneously! This is known as infinite propagation speed, and it points to an imperfection of our model. Observe, however, that the temperatures given by (12.7) are negligible for large $|x|$.
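Formula (12.7) can be exercised on a concrete initial temperature. For $f = 1$ on $(-1, 1)$ and $f = 0$ elsewhere, the integral has a closed form in terms of the error function, which the quadrature below reproduces; the result is positive even far from the source, illustrating the infinite propagation speed just discussed. A sketch with $k = 1$:

```python
import math

def u(x, t, k=1.0, steps=200000):
    """u(x, t) from (12.7) for f = 1 on (-1, 1), f = 0 elsewhere; midpoint rule."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        xi = -1.0 + (i + 0.5) * h
        total += math.exp(-((x - xi) ** 2) / (4 * k * t))
    return total * h / (2 * math.sqrt(math.pi * k * t))

t = 0.5
for x in [0.0, 2.0, 5.0]:
    # Closed form: u = (erf((x+1)/(2 sqrt(kt))) - erf((x-1)/(2 sqrt(kt)))) / 2.
    exact = 0.5 * (math.erf((x + 1) / (2 * math.sqrt(t))) - math.erf((x - 1) / (2 * math.sqrt(t))))
    val = u(x, t)
    assert abs(val - exact) < 1e-8
    assert val > 0   # positive arbitrarily far from the heat source
```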
6.12.3 Steady state temperatures for the upper half plane
We solve the boundary value problem
u
xx
+u
yy
= 0 < x < , y > 0 (12.8)
u(x, 0) = f(x) < x < .
Here u(x, y) gives the steady state temperature at a point (x, y) of an infinite
plate, occupying the upper half of the xy-plane. The given function f(x)
provides the prescribed temperatures at the boundary of the plate. We
are interested in the solution that is bounded as $y \to \infty$. (Without this
assumption the solution is not unique: if u(x, y) is a solution of (12.8), so is
u(x, y) + cy, for any constant c.)
Applying the Fourier transform in x, $U(s, y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} u(\xi, y) e^{-is\xi} \, d\xi$,
we have (observe that $T(u_{yy}(x, y)) = U_{yy}(s, y)$)
$$U_{yy} - s^2 U = 0 \qquad (12.9)$$
$$U(s, 0) = F(s) \, .$$
The general solution of the equation in (12.9) is
$$U(s, y) = c_1 e^{-sy} + c_2 e^{sy} \, .$$
When s > 0, we select $c_2 = 0$, so that U (and therefore u) is bounded as
$y \to \infty$. Then $c_1 = F(s)$, giving us
$$U(s, y) = F(s) e^{-sy} \quad \text{when } s > 0 \, .$$
6.12. PROBLEMS ON INFINITE DOMAINS 215
When s < 0, we select $c_1 = 0$ to get a bounded solution. Then $c_2 = F(s)$,
giving us
$$U(s, y) = F(s) e^{sy} \quad \text{when } s < 0 \, .$$
Combining both cases, we conclude that the bounded solution of (12.9) is
$$U(s, y) = F(s) e^{-|s| y} \, .$$
It remains to compute the inverse Fourier transform:
$$u(x, y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{isx - |s|y} F(s) \, ds
= \frac{1}{2\pi} \int_{-\infty}^{\infty} f(\xi) \left[ \int_{-\infty}^{\infty} e^{is(x-\xi) - |s|y} \, ds \right] d\xi \, .$$
We evaluate the integral in the brackets by using Euler's formula, the fact
that $\cos s(x-\xi)$ is even in s, and $\sin s(x-\xi)$ is odd in s, and the formula
(12.5) on the last step:
$$\int_{-\infty}^{\infty} e^{is(x-\xi) - |s|y} \, ds = 2 \int_0^{\infty} e^{-sy} \cos s(x-\xi) \, ds
= \frac{2y}{(x-\xi)^2 + y^2} \, .$$
The solution of (12.8), known as Poisson's formula, is
$$u(x, y) = \frac{y}{\pi} \int_{-\infty}^{\infty} \frac{f(\xi)}{(x-\xi)^2 + y^2} \, d\xi \, .$$
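Poisson's formula can be checked directly by quadrature. In the Python sketch below (our own code and choice of data, not from the text), the boundary temperature is taken to be the indicator of (-1, 1); for this f the integral can also be evaluated in closed form with the antiderivative $\frac{1}{y}\arctan\frac{\xi - x}{y}$.

```python
import numpy as np
from scipy.integrate import quad

def u_poisson(x, y):
    # Poisson's formula with the boundary temperature f = 1 on (-1, 1), 0 elsewhere
    integrand = lambda xi: 1.0 / ((x - xi)**2 + y**2)
    val, _ = quad(integrand, -1.0, 1.0)
    return y/np.pi * val

def u_closed(x, y):
    # the same integral via the antiderivative arctan((xi - x)/y)/y
    return (np.arctan((x + 1)/y) - np.arctan((x - 1)/y)) / np.pi
```

As y decreases to 0, u_closed(x, y) approaches the boundary value f(x), as it should.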
6.12.4 Using Laplace transform for a semi-infinite string
A string extending for $0 < x < \infty$ is initially at rest. The left end x = 0
undergoes periodic vibrations given by $A \sin \omega t$, with given constants A and
$\omega$. Find the displacements u(x, t) at any point x > 0 and time t, assuming
that the displacements are bounded.
We need to solve the problem
$$u_{tt} = c^2 u_{xx} \, , \quad x > 0$$
$$u(x, 0) = u_t(x, 0) = 0 \, , \quad x > 0$$
$$u(0, t) = A \sin \omega t \, .$$
We take the Laplace transform of the equation in the t variable, i.e., U(x, s) =
L(u(x, t)). Using the initial conditions,
$$s^2 U = c^2 U_{xx} \, , \quad U(0, s) = \frac{A\omega}{s^2 + \omega^2} \, . \qquad (12.10)$$
The general solution of this equation is
$$U(x, s) = c_1 e^{\frac{s}{c} x} + c_2 e^{-\frac{s}{c} x} \, .$$
To get a solution bounded as $x \to +\infty$, we select $c_1 = 0$. Then $c_2 = \frac{A\omega}{s^2 + \omega^2}$
by the initial condition in (12.10), and we have
$$U(x, s) = e^{-\frac{x}{c} s} \, \frac{A\omega}{s^2 + \omega^2} \, .$$
Taking the inverse Laplace transform, we have the answer
$$u(x, t) = A \, u_{x/c}(t) \sin \omega (t - x/c) \, ,$$
where $u_{x/c}(t)$ is the Heaviside function. We see that at any point x, the
solution is zero for t < x/c (the time it takes the signal to travel). For
t > x/c, the motion is identical with that at x = 0, but is delayed in time
by the amount x/c.
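The delayed-signal structure of this answer is easy to tabulate. A minimal Python sketch (our own naming; the Heaviside factor $u_{x/c}(t)$ is coded as an if-test, and the parameter values are ours):

```python
import numpy as np

def u(x, t, A=1.0, omega=2.0, c=1.0):
    # zero until the signal arrives at time x/c, then a delayed
    # copy of the boundary motion A sin(omega t)
    return A*np.sin(omega*(t - x/c)) if t > x/c else 0.0
```

For example, u(1, 3) equals u(0, 2): the point x = 1 repeats the boundary motion with a delay of x/c = 1.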
6.12.5 Problems
I. Find the complex form of the Fourier series for the following functions on
the given interval.
1. f(x) = x on (-2, 2).
Answer. $x = \sum_{n=-\infty, \, n \neq 0}^{\infty} \frac{2i(-1)^n}{n\pi} \, e^{i \frac{n\pi}{2} x}$.
2. $f(x) = e^x$ on (-1, 1).
Answer. $e^x = \sum_{n=-\infty}^{\infty} \frac{(-1)^n (1 + i n \pi)(e - e^{-1})}{2(1 + n^2 \pi^2)} \, e^{i n \pi x}$.
3. $f(x) = \sin^2 x$ on $(-\pi, \pi)$.
Answer. $-\frac{1}{4} e^{i 2x} + \frac{1}{2} - \frac{1}{4} e^{-i 2x}$.
4. $f(x) = \sin 2x \cos 2x$ on $(-\pi/2, \pi/2)$.
Answer. $-\frac{i}{4} e^{i 4x} + \frac{i}{4} e^{-i 4x}$.
5. Suppose a real valued function f(x) is represented by its complex Fourier
series on (-L, L):
$$f(x) = \sum_{n=-\infty}^{\infty} c_n \, e^{i \frac{n\pi}{L} x} \, .$$
Show that $c_{-n} = \bar{c}_n$ for all n.
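Answers like these are easy to spot-check numerically. For Problem 1, the symmetric partial sums of the series should approach f(x) = x at interior points of (-2, 2); here is a sketch in Python (our own code, not from the text):

```python
import numpy as np

def partial_sum(x, N):
    # symmetric partial sum over 0 < |n| <= N of the series from Problem 1,
    # with coefficients c_n = 2i(-1)^n/(n pi)
    n = np.arange(1, N + 1)
    c = 2j * (-1.0)**n / (n * np.pi)
    # the -n term is the conjugate of the n term, so each pair
    # contributes twice the real part
    return float(2.0 * np.sum(np.real(c * np.exp(1j * n * np.pi * x / 2.0))))
```

Convergence is slow (the periodic extension of x has jumps), but at a fixed interior point the partial sums do settle on f(x).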
II. Solve the following problems on circular domains, and describe their
physical significance.
1. $\Delta u = 0$, $r < 3$;  $u(3, \theta) = 4 \cos^2 \theta$.
Answer. $u = 2 + \frac{2}{9} r^2 \cos 2\theta = 2 + \frac{2}{9} (x^2 - y^2)$.
2. $\Delta u = 0$, $r > 3$;  $u(3, \theta) = 4 \cos^2 \theta$.
Answer. $u = 2 + \frac{18}{r^2} \cos 2\theta = 2 + 18 \, \frac{x^2 - y^2}{(x^2 + y^2)^2}$.
3. $\Delta u = 0$, $r < 2$;  $u(2, \theta) = y^2$.
Answer. $u = 2 - \frac{1}{2} (x^2 - y^2)$.
4. $\Delta u = 0$, $r > 2$;  $u(2, \theta) = y^2$.
Answer. $u = 2 - 8 \, \frac{x^2 - y^2}{(x^2 + y^2)^2}$.
5. $\Delta u = 0$, $r < 1$;  $u(1, \theta) = \cos^4 \theta$.
6. $\Delta u = 0$, $r < 1$;  $u(1, \theta) = \theta$.
Hint: Extend $f(\theta) = \theta$ as a $2\pi$ periodic function, equal to $\theta$ on $[0, 2\pi]$. Then
$$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(\theta) \cos n\theta \, d\theta = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \cos n\theta \, d\theta = \frac{1}{\pi} \int_0^{2\pi} \theta \cos n\theta \, d\theta \, ,$$
and compute similarly $a_0$ and the $b_n$'s.
Answer. $u = \pi - \sum_{n=1}^{\infty} \frac{2}{n} r^n \sin n\theta$.
7. $\Delta u = 0$, $r > 3$;  $u(3, \theta) = \theta + 2$.
Answer. $u = \pi + 2 - 2 \sum_{n=1}^{\infty} \frac{3^n}{n \, r^n} \sin n\theta$.
III.
1. Find the eigenvalues and the eigenfunctions of
$$y'' + \lambda y = 0, \quad y(0) + y'(0) = 0, \; y(\pi) + y'(\pi) = 0 \, .$$
Answer. $\lambda_n = n^2$, $y_n = \sin nx - n \cos nx$; and also $\lambda = -1$ with $y = e^{-x}$.
2. Identify graphically the eigenvalues, and find the eigenfunctions of
$$y'' + \lambda y = 0, \quad y(0) + y'(0) = 0, \; y(\pi) = 0 \, .$$
3. Find the eigenvalues and the eigenfunctions of (a is a constant)
$$y'' + a y' + \lambda y = 0, \quad y(0) = y(L) = 0 \, .$$
Answer. $\lambda_n = \frac{a^2}{4} + \frac{n^2 \pi^2}{L^2}$, $y_n = e^{-\frac{a}{2} x} \sin \frac{n\pi}{L} x$.
4. (i) Find the eigenvalues and the eigenfunctions of
$$x^2 y'' + 3x y' + \lambda y = 0, \quad y(1) = y(e) = 0 \, .$$
Answer. $\lambda_n = 1 + n^2 \pi^2$, $y_n = x^{-1} \sin(n\pi \ln x)$.
(ii) Put this equation into the form $(p(x) y')' + \lambda r(x) y = 0$, and verify that
the eigenfunctions are orthogonal with weight r(x).
Hint: Divide the equation by $x^2$, and verify that $x^3$ is the integrating factor,
so that $p(x) = x^3$ and $r(x) = x$.
5. Show that the eigenfunctions of
$$y'''' + \lambda y = 0, \quad y(0) = y''(0) = y(L) = y''(L) = 0 \, ,$$
corresponding to different eigenvalues, are orthogonal on (0, L).
Hint:
$$y'''' z - y z'''' = \frac{d}{dx} \left[ y''' z - y'' z' + y' z'' - y z''' \right] .$$
6. Consider an eigenvalue problem
$$(p(x) y')' + \lambda r(x) y = 0, \quad \alpha y(0) + \beta y'(0) = 0, \; \gamma y(\pi) + \delta y'(\pi) = 0 \, .$$
Assume that the given functions p(x) and r(x) are positive, while $\alpha$ and $\beta$
are non-zero constants of different sign, and $\gamma$ and $\delta$ are non-zero constants
of the same sign. Show that all eigenvalues are positive.
Hint: Multiply the equation by y(x) and integrate over $(0, \pi)$. Do an
integration by parts.
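Stated eigenpairs can be verified directly. For Problem 3, the Python sketch below (our own code; the parameter values a = 1, L = 1, n = 2 are our choices) checks that $y = e^{-ax/2} \sin(n\pi x/L)$ satisfies the equation with $\lambda = a^2/4 + n^2\pi^2/L^2$, approximating the derivatives by central differences.

```python
import numpy as np

a, L, n = 1.0, 1.0, 2
lam = a**2/4.0 + n**2*np.pi**2/L**2   # the claimed eigenvalue

def y(x):
    # the claimed eigenfunction
    return np.exp(-a*x/2.0) * np.sin(n*np.pi*x/L)

def residual(x, h=1e-5):
    # y'' + a y' + lam y, with the derivatives taken by central differences
    d1 = (y(x + h) - y(x - h)) / (2.0*h)
    d2 = (y(x + h) - 2.0*y(x) + y(x - h)) / h**2
    return d2 + a*d1 + lam*y(x)
```

The residual is tiny at interior points, and y vanishes at both endpoints.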
IV. Find Green's function and the solution of the following problems.
1.
$$y'' + y = f(x), \quad a < x < b$$
$$y(a) = 0, \; y(b) = 0 \, .$$
Answer. $G(x, \xi) = \begin{cases} \dfrac{\sin(x-a) \sin(\xi-b)}{\sin(b-a)} & \text{for } a \le x \le \xi \\[2mm] \dfrac{\sin(x-b) \sin(\xi-a)}{\sin(b-a)} & \text{for } \xi \le x \le b \, . \end{cases}$
2.
$$y'' + y = f(x), \quad 0 < x < 2$$
$$y(0) = 0, \; y'(2) + y(2) = 0 \, .$$
Hint: $y_1(x) = \sin x$, $y_2(x) = -\sin(x - 2) + \cos(x - 2)$.
3.
$$x^2 y'' + 4x y' + 2y = f(x), \quad 1 < x < 2$$
$$y(1) = 0, \; y(2) = 0 \, .$$
Answer. $G(x, \xi) = \begin{cases} (x^{-1} - x^{-2})(\xi^{-1} - 2\xi^{-2}) & \text{for } 1 \le x \le \xi \\ (\xi^{-1} - \xi^{-2})(x^{-1} - 2x^{-2}) & \text{for } \xi \le x \le 2 \, , \end{cases}$
and $y(x) = \int_1^2 G(x, \xi) \, \xi^2 f(\xi) \, d\xi$.
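The Green's function in Problem 3 can be tested against an exact solution: for f = 1 the problem $x^2 y'' + 4x y' + 2y = 1$, y(1) = y(2) = 0 is an Euler equation with the constant particular solution 1/2, and is solved exactly by $y = \frac{1}{2} - \frac{3}{2x} + \frac{1}{x^2}$. A Python sketch of the check (our own code):

```python
import numpy as np
from scipy.integrate import quad

def y1(x): return 1.0/x - 1.0/x**2     # vanishes at x = 1
def y2(x): return 1.0/x - 2.0/x**2     # vanishes at x = 2

def G(x, xi):
    # the Green's function from the answer above
    return y1(x)*y2(xi) if x <= xi else y1(xi)*y2(x)

def solve(f, x):
    # y(x) = integral over (1,2) of G(x, xi) xi^2 f(xi); split at the corner xi = x
    left, _  = quad(lambda xi: G(x, xi)*xi**2*f(xi), 1.0, x)
    right, _ = quad(lambda xi: G(x, xi)*xi**2*f(xi), x, 2.0)
    return left + right

exact = lambda x: 0.5 - 1.5/x + 1.0/x**2   # the exact solution for f = 1
```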
V.
1. Find the Fourier transform of the function $f(x) = \begin{cases} 1 - |x| & \text{for } |x| \le 1 \\ 0 & \text{for } |x| > 1 \, . \end{cases}$
Answer. $\sqrt{\frac{2}{\pi}} \, \frac{1}{s^2} (1 - \cos s)$.
2. Find the Fourier transform of the function $f(x) = e^{-|x|}$.
Answer. $\sqrt{\frac{2}{\pi}} \, \frac{1}{s^2 + 1}$.
3. Find the Fourier transform of the function $f(x) = e^{-\frac{x^2}{2}}$.
Answer. $e^{-\frac{s^2}{2}}$. (Hint: use the formula (12.2).)
4. Show that for any constant a
(i) $T(f(x) e^{iax}) = F(s - a)$.
(ii) $T(f(ax)) = \frac{1}{|a|} F\left(\frac{s}{a}\right)$  $(a \neq 0)$.
5. Find a non-trivial solution of the boundary value problem
$$u_{xx} + u_{yy} = 0, \quad -\infty < x < \infty, \; y > 0$$
$$u(x, 0) = 0, \quad -\infty < x < \infty \, .$$
This example shows that physical intuition may fail in an unbounded domain.
6. Use Poisson's formula to solve
$$u_{xx} + u_{yy} = 0, \quad -\infty < x < \infty, \; y > 0$$
$$u(x, 0) = f(x), \quad -\infty < x < \infty \, ,$$
where $f(x) = \begin{cases} 1 & \text{for } |x| \le 1 \\ 0 & \text{for } |x| > 1 \, . \end{cases}$
Answer. $\frac{1}{\pi} \left[ \tan^{-1} \frac{x+1}{y} - \tan^{-1} \frac{x-1}{y} \right]$.
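The transform answers above can be checked by direct quadrature of the definition, with the convention used in this section. For Problem 2, a Python sketch (our own code; the cutoff replacing the infinite integral is our choice, and is harmless since f decays exponentially):

```python
import numpy as np
from scipy.integrate import quad

def fourier_transform(f, s, cutoff=60.0):
    # F(s) = (1/sqrt(2 pi)) integral of f(x) e^{-isx} dx,
    # computed through its real and imaginary parts
    re, _ = quad(lambda x: f(x)*np.cos(s*x), -cutoff, cutoff, limit=400)
    im, _ = quad(lambda x: -f(x)*np.sin(s*x), -cutoff, cutoff, limit=400)
    return (re + 1j*im) / np.sqrt(2.0*np.pi)

f = lambda x: np.exp(-abs(x))
```

For this even f the imaginary part vanishes, and the values agree with $\sqrt{2/\pi}\,/(s^2+1)$.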
Chapter 7
Numerical Computations
7.1 The capabilities of software systems, like Mathematica
Mathematica uses the command DSolve to solve differential equations ana-
lytically (i.e., by a formula). This is not always possible, but Mathematica
does seem to know the solution methods we have studied in Chapters 1 and
2. For example, to solve the equation
$$y' = 2y - \sin^2 x \, , \quad y(0) = 0.3 \, , \qquad (1.1)$$
we enter the commands
sol = DSolve[{y'[x] == 2 y[x] - Sin[x]^2, y[0] == .3}, y[x], x]
z[x_] = y[x] /. sol[[1]]
Plot[z[x], {x, 0, 1}]
Mathematica returns the solution $y(x) = -0.125 \cos(2x) + 0.175 e^{2x} + 0.125 \sin(2x) + 0.25$, and its graph, which is given in Figure 7.1.
If you are new to Mathematica, do not worry about its syntax now. Try
to solve other equations, by making obvious modifications to the above com-
mands. Notice that Mathematica had chosen the point (0, 0.4) to originate
the axes. If one needs the general solution, the command is
DSolve[y'[x] == 2 y[x] - Sin[x]^2, y[x], x]
Mathematica returns
$$y(x) \to e^{2x} c[1] + \frac{1}{8} \left( -\text{Cos}[2x] + \text{Sin}[2x] + 2 \right) .$$
Figure 7.1: The solution of the equation (1.1)
Observe that c[1] is Mathematica's way to write an arbitrary constant c, and
that the answer is returned as a replacement rule (and that was the reason
for an extra command in the preceding example). Second (and higher) order
equations are solved similarly. To solve the following resonant problem
$$y'' + 4y = 8 \sin 2t \, , \quad y(0) = 0, \; y'(0) = -2 \, ,$$
we enter
DSolve[{y''[t] + 4 y[t] == 8 Sin[2t], y[0] == 0, y'[0] == -2}, y[t], t] // Simplify
and Mathematica returns the solution $y(t) = -2t \cos 2t$, which implies un-
bounded oscillations.
When we try to use the DSolve command to solve the nonlinear equation
$$y' = 2y^3 - \sin^2 x \, ,$$
Mathematica thinks for a while, and then it throws this equation back at
us. It cannot do it, and most likely, nobody can. However, we can use
Euler's method to compute a numerical approximation of any solution, if an
initial condition is provided, e.g., the solution of
$$y' = 2y^3 - \sin^2 x \, , \quad y(0) = 0.3 \, . \qquad (1.2)$$
Mathematica can also compute the numerical approximation of this solu-
tion. Instead of Euler's method it uses a more sophisticated method. The
command is NDSolve. We enter the following commands
sol = NDSolve[{y'[x] == 2 y[x]^3 - Sin[x]^2, y[0] == .3}, y, {x, 0, 3}]
z[x_] = y[x] /. sol[[1]]
Plot[z[x], {x, 0, 1}, AxesLabel -> {"x", "y"}]
Figure 7.2: The solution of the equation (1.2)
Mathematica produces the graph of the solution. Mathematica returns the
solution as an interpolating function, i.e., after computing the values of
the solution at a sequence of points, it joins the points on the graph by a
smooth curve. The solution function (it is z(x) in the way we did it) and its
derivatives can be evaluated at any point. It is practically indistinguishable
from the exact solution. When one uses the NDSolve command to solve the
problem (1.1) and plot the solution, one obtains a graph practically identical
to Figure 7.1.
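Readers without Mathematica can reproduce this computation with other tools. Here is a sketch in Python, where SciPy's solve_ivp plays the role of NDSolve for the problem (1.1), checked against the closed-form solution reported above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = 2y - sin^2(x), y(0) = 0.3  -- the problem (1.1)
sol = solve_ivp(lambda x, y: 2.0*y - np.sin(x)**2, (0.0, 1.0), [0.3],
                dense_output=True, rtol=1e-10, atol=1e-12)

def exact(x):
    # the formula produced by DSolve
    return -0.125*np.cos(2*x) + 0.175*np.exp(2*x) + 0.125*np.sin(2*x) + 0.25
```

With dense_output=True, sol.sol(x) can be evaluated at any point, much like Mathematica's interpolating function.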
The NDSolve command can also be used for systems of differential equa-
tions. For example, let x = x(t) and y = y(t) be solutions of
$$x' = -y + y^2 \, , \quad x(0) = 0.2 \qquad (1.3)$$
$$y' = x \, , \quad y(0) = 0.3 \, .$$
Once the solution is computed, the solution functions x = x(t), y = y(t)
define a parametric curve in the xy-plane, which we draw. The commands
and output are given in Figure 7.3. The first command tells Mathematica:
forget everything. This is a good practice with heavy usage. If you play
In[21]:= Clear["Global`*"]
In[22]:= sol = NDSolve[{x'[t] == -y[t] + y[t]^2, y'[t] == x[t], x[0] == 0.2, y[0] == 0.3}, {x, y}, {t, 0, 20}]
Out[22]= {{x -> InterpolatingFunction[{{0., 20.}}, <>], y -> InterpolatingFunction[{{0., 20.}}, <>]}}
In[24]:= ParametricPlot[{x[t] /. sol[[1, 1]], y[t] /. sol[[1, 2]]}, {t, 0, 20}]
Figure 7.3: The solution of the system (1.3)
with other initial conditions, in which |x(0)| and |y(0)| are small, you will
discover that the rest point (0, 0) is a center.
7.2 Solving boundary value problems
Given the functions a(x) and f(x), we wish to find the solution y = y(x) of
the following boundary value problem
$$y'' + a(x) y = f(x), \quad a < x < b \qquad (2.1)$$
$$y(a) = y(b) = 0 \, .$$
The general solution of the equation in (2.1) is of course
$$y(x) = Y(x) + c_1 y_1(x) + c_2 y_2(x) \, , \qquad (2.2)$$
where Y(x) is any particular solution, and $y_1(x)$, $y_2(x)$ are two solutions of the
corresponding homogeneous equation
$$y'' + a(x) y = 0 \, , \qquad (2.3)$$
which are not constant multiples of one another. To compute $y_1(x)$, we shall
use the NDSolve command to solve the homogeneous equation (2.3) with
the initial conditions
$$y_1(a) = 0, \; y_1'(a) = 1 \, . \qquad (2.4)$$
To compute $y_2(x)$, we shall use the NDSolve command with the initial conditions
$$y_2(b) = 0, \; y_2'(b) = -1 \, . \qquad (2.5)$$
(Mathematica has no problem solving the equation backwards on (a, b).)
To find Y(x), we can solve the equation in (2.1) with any initial conditions,
say $Y(a) = 0$, $Y'(a) = 1$. We have computed the general solution (2.2). It
remains to pick the constants $c_1$ and $c_2$ to satisfy the boundary conditions.
Using (2.4),
$$y(a) = Y(a) + c_1 y_1(a) + c_2 y_2(a) = Y(a) + c_2 y_2(a) = 0 \, ,$$
i.e., $c_2 = -\frac{Y(a)}{y_2(a)}$. We assume here that $y_2(a) \neq 0$, otherwise our problem
(2.1) is not solvable for general f(x). Similarly, using (2.5),
$$y(b) = Y(b) + c_1 y_1(b) + c_2 y_2(b) = Y(b) + c_1 y_1(b) = 0 \, ,$$
i.e., $c_1 = -\frac{Y(b)}{y_1(b)}$. The solution of our problem (2.1) is
$$y(x) = Y(x) - \frac{Y(b)}{y_1(b)} \, y_1(x) - \frac{Y(a)}{y_2(a)} \, y_2(x) \, .$$
Mathematica's subroutine, or module, to produce this solution, called lin, is
given in Figure 7.4. Here a = 0 and b = 1.
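The same superposition recipe is easy to reproduce outside Mathematica. A sketch in Python, with SciPy's solve_ivp standing in for NDSolve (the function names and the test problem are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_linear_bvp(a_fun, f_fun, a=0.0, b=1.0):
    """Solve y'' + a(x) y = f(x), y(a) = y(b) = 0, mirroring the module lin:
    two homogeneous solutions plus one particular solution."""
    hom   = lambda x, u: [u[1], -a_fun(x)*u[0]]
    inhom = lambda x, u: [u[1], f_fun(x) - a_fun(x)*u[0]]
    kw = dict(dense_output=True, rtol=1e-10, atol=1e-12)
    y1 = solve_ivp(hom,   (a, b), [0.0,  1.0], **kw).sol   # y1(a) = 0, y1'(a) = 1
    y2 = solve_ivp(hom,   (b, a), [0.0, -1.0], **kw).sol   # y2(b) = 0, solved backwards
    Y  = solve_ivp(inhom, (a, b), [0.0,  1.0], **kw).sol   # a particular solution
    c1 = -Y(b)[0] / y1(b)[0]
    c2 = -Y(a)[0] / y2(a)[0]   # zero here, since Y(a) = 0; kept for generality
    return lambda x: Y(x)[0] + c1*y1(x)[0] + c2*y2(x)[0]

# test problem: y'' + y = 1, y(0) = y(pi/2) = 0, exact solution 1 - cos x - sin x
z = solve_linear_bvp(lambda x: 1.0, lambda x: 1.0, 0.0, np.pi/2)
```

At x = pi/4 the exact solution equals $1 - \sqrt{2}$, which the computed z reproduces to solver accuracy.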
For example, entering the commands in Figure 7.5 produces the graph
of the solution of
$$y'' + e^x y = -3x + 1, \quad 0 < x < 1 \qquad (2.6)$$
$$y(0) = y(1) = 0 \, ,$$
which is given in Figure 7.6.
Clear["Global`*"]
lin := Module[{s1, s2, s3, y1, y2, Y},
  s1 = NDSolve[{y''[x] + a[x] y[x] == 0, y[0] == 0, y'[0] == 1}, y, {x, 0, 1}];
  s2 = NDSolve[{y''[x] + a[x] y[x] == 0, y[1] == 0, y'[1] == -1}, y, {x, 0, 1}];
  s3 = NDSolve[{y''[x] + a[x] y[x] == f[x], y[0] == 0, y'[0] == 1}, y, {x, 0, 1}];
  y1[x_] = y[x] /. s1[[1]];
  y2[x_] = y[x] /. s2[[1]];
  Y[x_] = y[x] /. s3[[1]];
  z[x_] := Y[x] - (Y[1]/y1[1]) y1[x] - (Y[0]/y2[0]) y2[x];
]
Figure 7.4: The solution module for the problem (2.1)
a[x_] = E^x;
f[x_] = -3 x + 1;
lin
Plot[z[x], {x, 0, 1}, AxesLabel -> {"x", "z"}]
Figure 7.5: Solving the problem (2.6)
Figure 7.6: Solution of the problem (2.6)
7.3 Solving nonlinear boundary value problems
Review of Newton's method
Suppose we wish to solve the equation
$$f(x) = 0 \qquad (3.1)$$
with a given function f(x). For example, in case $f(x) = e^{2x} - x - 1$, the
equation
$$e^{2x} - x - 1 = 0$$
has a solution on the interval (-1, 0) (as can be seen by drawing f(x) in
Mathematica), but this solution cannot be expressed by a formula. Newton's
method produces a sequence of iterates $x_n$ to approximate a solution of (3.1).
If the iterate $x_n$ has been already computed, we use a linear approximation
$$f(x) \approx f(x_n) + f'(x_n)(x - x_n) \, .$$
We then replace (3.1) by
$$f(x_n) + f'(x_n)(x - x_n) = 0 \, ,$$
solve this equation for x, and declare this solution x to be the next
approximation, i.e.,
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \, , \quad n = 0, 1, 2, \ldots \, , \; \text{beginning with some } x_0 \, .$$
Newton's method does not always converge, but when it does, the convergence
is usually super fast. Let us denote by $x^*$ the solution, so that $f(x^*) = 0$.
Then $|x_n - x^*|$ gives the error on the n-th step. Under some conditions on
f(x) it can be shown that
f(x) it can be shown that
[x
n+1
x

[ < c[x
n
x

[
2
,
with some constant c > 0. Let us suppose that c = 1 and [x
0
x

[ = 0.1.
Then [x
1
x

[ < [x
0
x

[
2
= 0.1
2
= 0.01, [x
2
x

[ < [x
1
x

[
2
< 0.01
2
=
0.0001, [x
3
x

[ < [x
2
x

[
2
< 0.0001
2
= 0.00000001. We see that x
3
is
practically an exact solution!
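The digit-doubling just described is easy to observe in code. A minimal Python sketch (with our own example equation $x^2 - 2 = 0$, whose root $\sqrt{2}$ is known, so the errors can be measured exactly):

```python
import math

def newton(f, fprime, x0, steps=6):
    # produce the Newton iterates x0, x1, ..., x_steps
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x)/fprime(x))
    return xs

xs = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0)
errors = [abs(x - math.sqrt(2.0)) for x in xs]
print(errors)   # the error is roughly squared at every step
```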
A class of nonlinear boundary value problems
We shall solve the problem
$$y'' + g(y) = e(x), \quad a < x < b \qquad (3.2)$$
$$y(a) = y(b) = 0 \, ,$$
with given functions g(y) and e(x). We shall produce a sequence of iterates
$y_n(x)$ to approximate a solution of (3.2). If the iterate $y_n(x)$ has been
already computed, we use the linear approximation
$$g(y) \approx g(y_n) + g'(y_n)(y - y_n) \, ,$$
and replace (3.2) with the linear problem
$$y'' + g(y_n(x)) + g'(y_n(x))(y - y_n(x)) = e(x), \quad a < x < b \qquad (3.3)$$
$$y(a) = y(b) = 0 \, .$$
The solution of this problem we declare to be our next approximation,
$y_{n+1}(x)$. We rewrite (3.3) as
$$y'' + a(x) y = f(x), \quad a < x < b$$
$$y(a) = y(b) = 0 \, ,$$
with the known coefficient functions
$$a(x) = g'(y_n(x))$$
$$f(x) = -g(y_n(x)) + g'(y_n(x)) \, y_n(x) + e(x) \, ,$$
and call on the procedure lin from the preceding section to solve (3.3), and
produce $y_{n+1}(x)$.
Example We have solved the problem
$$y'' + y^3 = 2 \sin 4x - x, \quad 0 < x < 1 \qquad (3.4)$$
$$y(0) = y(1) = 0 \, .$$
The commands are given in Figure 7.7. (The procedure lin has been exe-
cuted before these commands.) We started with $y_0(x) = 1$ (yold[x] = 1 in
Mathematica). We did five iterations of Newton's method. The solution
(the function z[x]) is plotted in Figure 7.8.
e[x_] = 2 Sin[4 x] - x;
yold[x_] = 1;
g[y_] = y^3;
st = 5;
For[i = 1, i <= st, i++,
  a[x_] = g'[yold[x]];
  f[x_] = e[x] - g[yold[x]] + g'[yold[x]] yold[x];
  lin;
  yold[x_] = z[x];
]
Figure 7.7: Solving the problem (3.4)
Figure 7.8: Solution of the problem (3.4)
The resulting solution is very accurate, and we have verified it by the
following independent calculation. We calculated the slope of the solution
at zero, $z'[0] \approx 0.00756827$, and then we solved the equation in (3.4)
with the initial conditions $y(0) = 0$, $y'(0) = 0.00756827$ (using the NDSolve
command). The graph of this solution y(x) is identical to the one in Figure
7.8.
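An alternative route, for readers without Mathematica, is to discretize the boundary value problem by finite differences and apply Newton's method to the resulting algebraic system. The Python sketch below is our own construction (it replaces the lin-based iteration entirely); applied to the problem (3.4), starting from the same guess $y_0 = 1$, it produces a small solution whose discrete residual is near machine precision.

```python
import numpy as np

def newton_fd_bvp(g, gp, e, N=400, iters=10):
    """Newton's method for y'' + g(y) = e(x), y(0) = y(1) = 0,
    discretized by central differences at N interior points."""
    h = 1.0/(N + 1)
    x = np.linspace(h, 1.0 - h, N)
    D = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2      # discrete y''
    y = np.ones(N)                                   # initial guess y0 = 1
    for _ in range(iters):
        F = D @ y + g(y) - e(x)                      # residual of the system
        J = D + np.diag(gp(y))                       # its Jacobian
        y = y - np.linalg.solve(J, F)
    return x, y

x, y = newton_fd_bvp(g=lambda y: y**3, gp=lambda y: 3.0*y**2,
                     e=lambda x: 2.0*np.sin(4.0*x) - x)
```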
7.4 Direction fields
The equation (y = y(x))
$$y' = \cos 2y + 2 \cos 2x \qquad (4.1)$$
Figure 7.9: The direction field for the equation (4.1)
cannot be solved analytically (like most equations). If we add an initial con-
dition, we can find the corresponding solution, using the NDSolve command.
But this is just one solution. Can we visualize a bigger picture?
The right hand side of the equation (4.1) gives us the slope of the solu-
tion passing through the point (x, y) (if $\cos 2y + 2 \cos 2x > 0$, the solution
is increasing at (x, y)). The vector $\langle 1, \cos 2y + 2 \cos 2x \rangle$ is called the
direction vector. If a solution is increasing at (x, y), this vector points up, and
the faster the rate of increase, the larger the amplitude of the direction
vector. If we plot the direction vectors at many points, the result is called
the direction field, which can tell us at a glance how various solutions are
behaving. In Figure 7.9 the direction field is plotted using Mathematica's
command VectorFieldPlot.
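Readers without Mathematica's VectorFieldPlot can tabulate the direction vectors themselves and feed them, e.g., to matplotlib's quiver. A minimal sketch of the computation in Python (grid and window are our choices, roughly matching Figure 7.9):

```python
import numpy as np

def slope(x, y):
    # the right hand side of (4.1): the slope of the solution through (x, y)
    return np.cos(2.0*y) + 2.0*np.cos(2.0*x)

# direction vectors <1, slope(x, y)> on a grid
X, Y = np.meshgrid(np.linspace(0.0, 6.0, 25), np.linspace(0.0, 5.0, 21))
V = slope(X, Y)   # the second component of each direction vector
```

Passing X, Y, np.ones_like(V), V to matplotlib's quiver would then draw a picture like Figure 7.9.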
How will the solution of (4.1) with the initial condition y(0) = 1 behave?
Imagine a particle placed at the initial point (0, 1). The direction field, or
wind, will take it a little down, but soon the direction of motion will be
up. After a while, a strong downdraft will take the particle much lower,
but eventually it will be going up again. In Figure 7.10, we give the actual
Figure 7.10: The solution of the initial value problem (4.2)
solution of
$$y' = \cos 2y + 2 \cos 2x \, , \quad y(0) = 1 \, , \qquad (4.2)$$
produced using the NDSolve command.