Notes on Diffy Qs
Differential Equations for Engineers
by Jiří Lebl
May 12, 2009

Typeset in LaTeX.
Copyright © 2008–2009 Jiří Lebl
This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/us/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.
Contents

Introduction
  0.1 Notes about these notes
  0.2 Introduction to differential equations

1 First order ODEs
  1.1 Integrals as solutions
  1.2 Slope fields
  1.3 Separable equations
  1.4 Linear equations and the integrating factor
  1.5 Substitution
  1.6 Autonomous equations
  1.7 Numerical methods: Euler's method

2 Higher order linear ODEs
  2.1 Second order linear ODEs
  2.2 Constant coefficient second order linear ODEs
  2.3 Higher order linear ODEs
  2.4 Mechanical vibrations
  2.5 Nonhomogeneous equations
  2.6 Forced oscillations and resonance

3 Systems of ODEs
  3.1 Introduction to systems of ODEs
  3.2 Matrices and linear systems
  3.3 Linear systems of ODEs
  3.4 Eigenvalue method
  3.5 Two dimensional systems and their vector fields
  3.6 Second order systems and applications
  3.7 Multiple eigenvalues
  3.8 Matrix exponentials
  3.9 Nonhomogeneous systems

4 Fourier series and PDEs
  4.1 Boundary value problems
  4.2 The trigonometric series
  4.3 More on the Fourier series
  4.4 Sine and cosine series
  4.5 Applications of Fourier series
  4.6 PDEs, separation of variables, and the heat equation
  4.7 One dimensional wave equation
  4.8 D'Alembert solution of the wave equation
  4.9 Steady state temperature

5 Eigenvalue problems
  5.1 Sturm-Liouville problems
  5.2 Application of eigenfunction series
  5.3 Steady periodic solutions

6 The Laplace transform
  6.1 The Laplace transform
  6.2 Transforms of derivatives and ODEs
  6.3 Convolution

Further Reading
Index
Introduction
0.1 Notes about these notes
These are class notes from teaching Math 286, differential equations, at the University of Illinois at Urbana-Champaign in fall 2008 and spring 2009. These originated from my lecture notes. There is usually a little more "padding" material than I can cover in the time allotted. There are still not enough exercises throughout. Some of the exercises in the notes are things I do explicitly in class depending on time, or let the students work out in class themselves. The book used for the class is Edwards and Penney, Differential Equations and Boundary Value Problems [EP], fourth edition, from now on referenced just as EP. The structure of the notes, therefore, reflects the structure of this book, at least as far as the chapters that are covered in the course. Many examples and applications are taken more or less from this book, though they also appear in many other sources, of course. Other books I have used as sources of information and inspiration are E.L. Ince's classic (and inexpensive) Ordinary Differential Equations [I], and also my undergraduate textbooks, Stanley Farlow's Differential Equations and Their Applications [F], which is now available from Dover, and Berg and McGregor's Elementary Partial Differential Equations [BM]. See the Further Reading section at the end of these notes.
I taught the course with the IODE software (http://www.math.uiuc.edu/iode/). IODE is a free software package that is used either with Matlab (proprietary) or Octave (free software).
Projects and labs from the IODE website are referenced throughout the notes. They need not be
used for this course, but I think it is better to use them. The graphs in the notes were made with
the Genius software (see http://www.jirka.org/genius.html). I have used Genius in class
to show essentially these and similar graphs.
I would like to acknowledge Rick Laugesen. I used his handwritten class notes the first time I taught the course. My organization of these present notes, and the choice of the exact material covered, is heavily influenced by his class notes. Many examples and computations are taken from his notes.
The organization of these notes to some degree requires that they be done in order. Hence, later chapters can be dropped. The dependence of the material covered is roughly given in the following diagram:
Introduction
    ↓
Chapter 1
    ↓
Chapter 2 ──→ Chapter 6
    ↓
Chapter 3
    ↓
Chapter 4
    ↓
Chapter 5
There are some references in chapters 4 and 5 to material from chapter 3 (some linear algebra),
but these references are not absolutely essential and can be skimmed over, so chapter 3 can safely
be dropped, while still covering chapters 4 and 5. The notes are done for two types of courses.
Either at 4 hours a week for a semester (Math 286 at UIUC):
Introduction,
chapter 1 (plus the two IODE labs),
chapter 2,
chapter 3,
chapter 4,
chapter 5.
Or a shorter version (Math 285 at UIUC) of the course at 3 hours a week for a semester:
Introduction,
chapter 1 (plus the two IODE labs),
chapter 2,
chapter 4.
For the shorter version some additional material should be covered. IODE need not be used for
either version. If IODE is not used, some additional material should be covered instead.
There is a short introductory chapter on the Laplace transform (chapter 6) that could be used as additional material. The length of the Laplace transform chapter is about the same as that of the Sturm-Liouville chapter (chapter 5). While the Laplace transform is not normally covered at UIUC in 285/286, I think it is essential that any notes for differential equations at least mention the Laplace and/or Fourier transforms.
0.2 Introduction to differential equations
Note: more than 1 lecture, §1.1 in EP
0.2.1 Differential equations
The laws of physics are generally written down as differential equations. Therefore, all of science
and engineering use differential equations to some degree. Understanding differential equations is
essential to understanding almost anything you will study in your science and engineering classes.
You can think of mathematics as the language of science, and differential equations are one of
the most important parts of this language as far as science and engineering are concerned. As
an analogy, suppose that all your classes from now on were given in Swahili. It would then be important to first learn Swahili, or else you would have a very tough time getting a good grade in your other classes.
You have already seen many differential equations, perhaps without knowing it. And you have even solved simple differential equations when you were taking calculus. Let us see an example you may not have seen.
dx/dt + x = 2 cos t.    (1)
Here x is the dependent variable and t is the independent variable. Equation (1) is a basic example
of a differential equation. In fact it is an example of a first order differential equation, since it
involves only the first derivative of the dependent variable. This equation arises from Newton’s
law of cooling where the ambient temperature oscillates with time.
0.2.2 Solutions of differential equations
Solving the differential equation means finding x in terms of t. That is, we want to find a function
of t, which we will call x, such that when we plug x, t, and dx/dt into (1), the equation holds. It is the same idea as it would be for a normal (algebraic) equation of just x and t. In this case we claim that
x = x(t) = cos t + sin t
is a solution. How do we check? Just plug it back in! First you need to compute dx/dt. We find that dx/dt = −sin t + cos t. Now let us compute the left hand side of (1):
dx/dt + x = (−sin t + cos t) + (cos t + sin t) = 2 cos t.
Yay! We got precisely the right hand side. There is more! We claim x = cos t + sin t + e^{−t} is also a solution. Let us try:
dx/dt = −sin t + cos t − e^{−t}.
Again plugging into the left hand side of (1):
dx/dt + x = (−sin t + cos t − e^{−t}) + (cos t + sin t + e^{−t}) = 2 cos t.
And it works yet again!
So there can be many different solutions. In fact, for this equation all solutions can be written in the form
x = cos t + sin t + C e^{−t}
for some constant C. See Figure 1 for the graph of a few of these solutions. We will see how we can find these solutions a few lectures from now.
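As an aside, this kind of check is also easy to hand to a computer algebra system. Here is a minimal sketch in Python using the SymPy library (my own illustration; the notes themselves do not use SymPy):

```python
# Verify that x = cos(t) + sin(t) + C*exp(-t) solves dx/dt + x = 2 cos(t).
import sympy as sp

t, C = sp.symbols('t C')
x = sp.cos(t) + sp.sin(t) + C * sp.exp(-t)   # the claimed general solution

lhs = sp.diff(x, t) + x                      # left hand side of (1)
print(sp.simplify(lhs))                      # prints: 2*cos(t)
```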
[Figure 1: A few solutions of dx/dt + x = 2 cos t.]
It turns out that solving differential equations can be quite hard. There is no general method that solves any given differential equation. We will generally focus on how to get exact formulas for solutions of differential equations, but we will also spend a little bit of time on getting approximate solutions.
For most of the course we will look at ordinary differential equations or ODEs, by which we mean that there is only one independent variable and derivatives are only with respect to this one variable. If there are several independent variables, we will get partial differential equations or PDEs. We will briefly see these near the end of the course.
Even for ODEs, which are very well understood, it is not a simple question of turning a crank to get answers. It is important to know when it is easy to find solutions and how to do this. Even if you leave much of the actual calculations to computers in real life, you need to understand what they are doing. For example, it is often necessary to simplify or transform your equations into something that a computer can actually understand and solve. You may need to make certain assumptions and changes in your model to achieve this.
To be a successful engineer or scientist, you will be required to solve problems in your job
which you have never seen before. It is important to learn problem solving techniques, so that
you may apply those techniques to new problems. A common mistake is to expect to learn some
prescription for solving all the problems you will encounter in your later career. This course is no
exception to this.
0.2.3 Differential equations in practice
So how do we use differential equations in science and engineering? You have some real
world problem that you want to understand. You make some simplifying assumptions and create
a mathematical model. That is, you translate your real world situation into a set of differential
equations. Then you apply mathematics to get some sort of mathematical solution. There is still
something left to do. You have to interpret the results. You have to figure out what the mathematical
solution says about the real world problem you started with.
[Diagram: Real world problem —abstract→ Mathematical model —solve→ Mathematical solution —interpret→ back to the real world problem.]
Learning how to formulate the mathematical model and how to interpret the results is essentially what your physics and engineering classes do. In this course we will mostly focus on the mathematical analysis. Sometimes we will work with simple real world examples so that we have some intuition and motivation about what we are doing.
Let us look at an example of this process. One of the most basic differential equations is the
standard exponential growth model. Let P denote the population of some bacteria on a petri dish.
Let us suppose that there is enough food and enough space. Then the rate of growth of the bacteria will be proportional to the population. That is, a large population grows quicker. Let t denote time (say in seconds). Hence our model will be
dP/dt = kP
for some positive constant k > 0.
Example 0.2.1: Suppose there are 100 bacteria at time 0 and 200 bacteria at time 10s. How many
bacteria will there be in 1 minute from time 0 (in 60 seconds)?
First we have to solve the equation. We claim that a solution is given by
P(t) = C e^{kt},
where C is a constant. Let us try:
dP/dt = C k e^{kt} = kP.
And it really is a solution.
OK, so what now? We do not know C and we do not know k. Well we know something. We
know that P(0) = 100 and we also know that P(10) = 200. Let us plug these in and see what
happens.
100 = P(0) = C e^{k·0} = C,
200 = P(10) = 100 e^{10k}.
Therefore, 2 = e^{10k}, or k = (ln 2)/10 ≈ 0.069. So we know that
P(t) = 100 e^{(ln 2)t/10} ≈ 100 e^{0.069t}.
At one minute, t = 60, the population is P(60) = 6400. See Figure 2.
OK, let us talk about the interpretation of the results. Does this mean that there must be exactly
6400 bacteria on the plate at 60s? No! We have made assumptions that might not be true. But if
our assumptions are reasonable, then there will be about 6400 bacteria. Also note that P in real life
is a discrete quantity, not any real number, but our model has no problem saying that for example
at 61 seconds, P(61) ≈ 6859.35.
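The arithmetic here is easy to reproduce on a computer; a small Python sketch (my own, not part of the original notes):

```python
# Exponential growth P(t) = 100 * e^{kt} with P(10) = 200.
import math

k = math.log(2) / 10               # from 200 = 100 * e^{10k}
P = lambda t: 100 * math.exp(k * t)

print(P(60))                       # 6400.0, since the population doubles 6 times
print(P(61))                       # approximately 6859.35
```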
[Figure 2: Bacteria growth in the first 60 seconds.]
Normally, the k in P′ = kP will be known, and you will want to solve the equation for different initial conditions. What does that mean? Suppose k = 1 for simplicity. So we want to solve dP/dt = P subject to P(0) = 1000 (the initial condition). Then the solution turns out to be (exercise)
P(t) = 1000 e^{t}.
We will call P(t) = C e^{t} the general solution, as every solution of the equation can be written in this form for some constant C. Then you will need an initial condition to find out what C is, to find the particular solution we are looking for. Generally, when we say particular solution, we just mean some solution.
Let us get to what we will call the four fundamental equations. These appear very often and it is useful to just memorize what their solutions are. These solutions are reasonably easy to guess by recalling properties of exponentials, sines, and cosines. They are also simple to check, which is something that you should always do. There is no need to wonder if you have remembered the solution correctly.
The first such equation is
dy/dx = ky,
for some constant k > 0. Here y is the dependent and x the independent variable. The general solution for this equation is
y(x) = C e^{kx}.
We have already seen that this is a solution above with different variable names.
Next,
dy/dx = −ky,
for some constant k > 0. The general solution for this equation is
y(x) = C e^{−kx}.
Exercise 0.2.1: Check that the y given is really a solution to the equation.
Next, take the second order differential equation
d²y/dx² = −k² y,
for some constant k > 0. The general solution for this equation is
y(x) = C₁ cos(kx) + C₂ sin(kx).
Note that because we have a second order differential equation we have two constants in our general
solution.
Exercise 0.2.2: Check that the y given is really a solution to the equation.
And finally, take the second order differential equation
d²y/dx² = k² y,
for some constant k > 0. The general solution for this equation is
y(x) = C₁ e^{kx} + C₂ e^{−kx},
or
y(x) = D₁ cosh(kx) + D₂ sinh(kx).
For those that do not know, cosh and sinh are defined by
cosh x = (e^{x} + e^{−x})/2,
sinh x = (e^{x} − e^{−x})/2.
These functions are sometimes easier to work with than exponentials. They have some nice familiar properties such as cosh 0 = 1, sinh 0 = 0, d/dx cosh x = sinh x (no, that is not a typo), and d/dx sinh x = cosh x.
Exercise 0.2.3: Check that both forms of the y given are really solutions to the equation.
An interesting note about cosh: The graph of cosh is the exact shape that a hanging chain makes, and it is called a catenary. Contrary to popular belief, this is not a parabola. If you invert the graph of cosh, it is also the ideal arch for supporting its own weight. For example, the Gateway Arch in Saint Louis is an inverted graph of cosh (if it were just a parabola it might fall down). This formula is actually inscribed inside the arch:
y = −127.7 ft · cosh(x/(127.7 ft)) + 757.7 ft.
0.2.4 Exercises
Exercise 0.2.4: Show that x = e^{4t} is a solution to x′′′ − 12x′′ + 48x′ − 64x = 0.
Exercise 0.2.5: Show that x = e^{t} is not a solution to x′′′ − 12x′′ + 48x′ − 64x = 0.
Exercise 0.2.6: Is y = sin t a solution to (dy/dt)² = 1 − y²? Justify.
Exercise 0.2.7: Let y′′ + 2y′ − 8y = 0. Now try a solution of the form y = e^{rx}. Is this a solution for some r? If so, find all such r.
Exercise 0.2.8: Verify that x = C e^{−2t} is a solution to x′ = −2x. Find C to solve the initial condition x(0) = 100.
Exercise 0.2.9: Verify that x = C₁ e^{−t} + C₂ e^{2t} is a solution to x′′ − x′ − 2x = 0. Find C₁ and C₂ to solve the initial condition x(0) = 10.
Exercise 0.2.10: Using properties of derivatives of functions that you know, try to find a solution to (x′)² + x² = 4.
Chapter 1
First order ODEs
1.1 Integrals as solutions
Note: 1 lecture, §1.2 in EP
A first order ODE is an equation of the form
dy/dx = f(x, y),
or just
y′ = f(x, y).
In general, there is no simple formula or procedure one can follow to find solutions. In the next few lectures we will look at special cases where solutions are not difficult to obtain. In this section, let us assume that f is a function of x alone, that is, the equation is
y′ = f(x).    (1.1)
We could just integrate (antidifferentiate) both sides here with respect to x:
∫ y′(x) dx = ∫ f(x) dx + C,
that is,
y(x) = ∫ f(x) dx + C.
This y(x) is actually the general solution. So to solve (1.1) find some antiderivative of f (x) and
then you add an arbitrary constant to get the general solution.
Now is a good time to discuss a point about calculus notation and terminology. Calculus textbooks muddy the waters by talking about the integral as primarily the so-called indefinite integral. The
indefinite integral is really the antiderivative (in fact, the whole one-parameter family of antiderivatives). There really exists only one integral and that is the definite integral. The only reason for the indefinite integral notation is that you can always write an antiderivative as a (definite) integral. That is, by the fundamental theorem of calculus you can always write ∫ f(x) dx + C as
∫_{x₀}^{x} f(t) dt + C.
Hence the terminology integrate, when you may really mean antidifferentiate. Integration is just one way to compute the antiderivative (and it is a way that always works; see the following examples). Integration is defined as the area under the graph; it only happens to also compute antiderivatives. For the sake of consistency, we will keep using the indefinite integral notation when we want an antiderivative, and you should always think of the definite integral.
Example 1.1.1: Find the general solution of y′ = 3x².
We see that the general solution must be y = x³ + C. Let us check: y′ = 3x². We have gotten precisely our equation back.
Normally, we also have an initial condition such as y(x₀) = y₀ for some two numbers x₀ and y₀ (x₀ is usually 0, but not always). We can write the solution as a definite integral in a nice way. Suppose our problem is y′ = f(x), y(x₀) = y₀. Then the solution is
y(x) = ∫_{x₀}^{x} f(s) ds + y₀.    (1.2)
Let us check! y′ = f(x) (by the fundamental theorem of calculus), and by Jupiter, this is a solution. Is it the one satisfying the initial condition? Well, y(x₀) = ∫_{x₀}^{x₀} f(x) dx + y₀ = y₀. And it is!
Do note that the definite integral and indefinite integral (antidifferentiation) are completely
different beasts. The definite integral always evaluates to a number. Therefore, (1.2) is a formula
you can plug into the calculator or a computer and it will be happy to calculate specific values for
you. You will easily be able to plot the solution and work with it just like with any other function.
It is not so crucial to find a closed form for the antiderivative.
Example 1.1.2: Solve
y′ = e^{−x²},  y(0) = 1.
By the preceding discussion, the solution must be
y(x) = ∫_{0}^{x} e^{−s²} ds + 1.
Here is a good way to make fun of your friends taking second semester calculus. Tell them to find
the closed form solution. Ha ha ha (bad math joke). It is not possible (in closed form). There is
absolutely nothing wrong with writing the solution as a definite integral. This particular integral is
in fact very important in statistics.
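To illustrate the point, here is a sketch of how a computer evaluates such a solution, using Python with the SciPy library (my choice of tools; any numerical integrator would do):

```python
# Evaluate y(x) = integral from 0 to x of e^{-s^2} ds, plus 1.
import numpy as np
from scipy.integrate import quad

def y(x):
    integral, _error_estimate = quad(lambda s: np.exp(-s**2), 0, x)
    return integral + 1

print(y(1.0))   # approximately 1.7468, even though no closed form exists
```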
We can also solve equations of the form
y′ = f(y)
using this method. Let us write it in Leibniz notation:
dy/dx = f(y).
Now use the inverse function theorem to switch the roles of x and y:
dx/dy = 1/f(y).
What we are doing seems like algebra with dx and dy. It is tempting to just do algebra with dx
and dy as if they were numbers. And in this case it does work. Be careful, however, as this sort of
hand-waving calculation can lead to trouble, especially when more than one independent variable
is involved. Now we can just integrate
x(y) = ∫ 1/f(y) dy + C.
Next, we try to solve for y.
Example 1.1.3: We guessed y′ = ky has the solution Ce^{kx}. We can actually derive it now. First note that y = 0 is a solution. Henceforth, assume y ≠ 0. We write
dx/dy = 1/(ky).
Now integrate and get
x(y) = x = (1/k) ln |ky| + C′.
We solve for y:
|y| = (1/k) e^{−kC′} e^{kx}.
If we replace (1/k) e^{−kC′} with an arbitrary constant C, we can get rid of the absolute value bars. In this way we also incorporate the solution y = 0, and we get the same general solution as we guessed before, y = C e^{kx}.
Example 1.1.4: Find the general solution of y′ = y².
First note that y = 0 is a solution. We can now assume that y ≠ 0. Write
dx/dy = 1/y².
Now integrate to get
x = −1/y + C.
Solve for y to obtain y = 1/(C − x). So the general solution is
y = 1/(C − x)  or  y = 0.
Note the singularities of the solution. If, for example, C = 1, then the solution blows up as we approach x = 1. It is sometimes hard to tell from just looking at the equation itself how a solution is going to behave. The equation y′ = y² is very nice and defined everywhere, but the solution is only defined on some interval (−∞, C) or (C, ∞).
Classical problems leading to differential equations solvable by integration are problems dealing with velocity, acceleration, and distance. You have surely seen these problems before in your calculus class.
Example 1.1.5: Suppose a car drives at a speed of e^{t/2} meters per second, where t is time in seconds. How far did the car get in 2 seconds? How far in 10 seconds?
Let x denote the distance the car travelled. The equation is
x′ = e^{t/2}.
We can just integrate this equation to get
x(t) = 2 e^{t/2} + C.
We still need to figure out C. We know that when t = 0, then x = 0, that is, x(0) = 0, so
0 = x(0) = 2 e^{0/2} + C = 2 + C.
So C = −2 and hence
x(t) = 2 e^{t/2} − 2.
Now we just plug in to find that at 2 seconds (and 10), the car has travelled
x(2) = 2 e^{2/2} − 2 ≈ 3.44 meters,   x(10) = 2 e^{10/2} − 2 ≈ 294 meters.
Example 1.1.6: Suppose that the car accelerates at a rate of t² m/s². At time t = 0 the car is at the 1 meter mark and is travelling at 10 m/s. Where is the car at time t = 10?
This is actually a second order problem. If x is the distance travelled, then x′ is the velocity, and x′′ is the acceleration. The equation with initial conditions is
x′′ = t²,  x(0) = 1,  x′(0) = 10.
Well, what if we call x′ = v? Then we have the problem
v′ = t²,  v(0) = 10.
Once we solve for v, we can then integrate and find x.
Exercise 1.1.1: Solve for v and then solve for x.
1.1.1 Exercises
Exercise 1.1.2: Solve dy/dx = x² + x for y(1) = 3.
Exercise 1.1.3: Solve dy/dx = sin(5x) for y(0) = 2.
Exercise 1.1.4: Solve dy/dx = 1/(x² − 1) for y(0) = 0.
Exercise 1.1.5: Solve y′ = y³ for y(0) = 1.
Exercise 1.1.6: Solve y′ = (y − 1)(y + 1) for y(0) = 3.
Exercise 1.1.7: Solve dy/dx = 1/(y² + 1) for y(0) = 0.
Exercise 1.1.8: Solve y′′ = sin x for y(0) = 0.
1.2 Slope fields
Note: 1 lecture, §1.3 in EP
At this point it may be good to first try the Lab I and/or Project I from the IODE website:
http://www.math.uiuc.edu/iode/.
As we said, the general first order equation we are studying looks like
y′ = f(x, y).
In general, we cannot really just solve these kinds of equations explicitly. It would be good if we could at least figure out the shape and behavior of the solutions, or even find approximate solutions for any equation.
1.2.1 Slope fields
As you have seen in IODE Lab I (if you did it), this means that at each point in the (x, y)-plane
we get a slope. We can plot the slope at lots of points as a short line with this given slope. See
Figure 1.1.
[Figure 1.1: Slope field of y′ = xy.]
[Figure 1.2: Slope field of y′ = xy with a graph of solutions satisfying y(0) = 0.2, y(0) = 0, and y(0) = −0.2.]
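The notes use IODE (or Genius) for such pictures, but any plotting package works. A minimal sketch in Python with NumPy and Matplotlib (my own substitution for those tools):

```python
# Plot the slope field of y' = xy on [-3, 3] x [-3, 3].
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: x * y                          # right hand side of the ODE
X, Y = np.meshgrid(np.linspace(-3, 3, 20), np.linspace(-3, 3, 20))
S = f(X, Y)                                     # slope at each grid point

# Draw each slope as a short segment of roughly equal length (no arrowheads).
L = np.sqrt(1 + S**2)
plt.quiver(X, Y, 1/L, S/L, angles='xy', pivot='mid',
           headwidth=1, headlength=0, headaxislength=0)
plt.xlabel('x'); plt.ylabel('y')
plt.show()
```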
We call this the slope field of the equation. Then if we are given a specific initial condition y(x₀) = y₀, we can really just look at the location (x₀, y₀) and follow the slopes. See Figure 1.2.
By looking at the slope field we can find out a lot about the behavior of solutions. For example, in Figure 1.2 we can see what the solutions do when the initial conditions are y(0) > 0, y(0) = 0, and y(0) < 0. Note that a small change in the initial condition causes quite different behavior. On the other hand, plotting a few solutions of the equation y′ = −y, we see that no matter where we start, all solutions tend to zero as x tends to infinity. See Figure 1.3.
[Figure 1.3: Slope field of y′ = −y with a graph of a few solutions.]
1.2.2 Existence and uniqueness
We wish to ask two fundamental questions about the problem
y′ = f(x, y),  y(x₀) = y₀.
(i) Does a solution exist?
(ii) Is the solution unique (if it exists)?
What do you think is the answer? The answer seems to be yes to both, does it not? Well, pretty much. But there are cases when the answer to either question can be no.
Since the equations generally come from real life situations, it seems logical that a solution exists. It also has to be unique if we believe our universe is deterministic. If the solution does not exist, or if it is not unique, we have probably not devised the correct model. Hence, it is good to know when things go wrong and why.
Example 1.2.1: Attempt to solve:
y′ = 1/x,  y(0) = 0.
Integrate to find the general solution y = ln |x| + C. Note that the solution does not exist at x = 0. See Figure 1.4.
[Figure 1.4: Slope field of y′ = 1/x.]
[Figure 1.5: Slope field of y′ = 2√|y| with two solutions satisfying y(0) = 0.]
Example 1.2.2: Solve:
y′ = 2√|y|,  y(0) = 0.
Note that y = x² is a solution and y = 0 is a solution (but note that x² is a solution only for x > 0). See Figure 1.5.
It is actually hard to tell from the slope field that the solution will not be unique. Is there any hope? Of course there is. It turns out that the following theorem is true. It is known as Picard's theorem*.
(*Named after the French mathematician Charles Émile Picard (1856–1941).)
Theorem 1.2.1 (Picard's theorem on existence and uniqueness). If f(x, y) is continuous (as a function of two variables) and ∂f/∂y exists and is continuous near some (x₀, y₀), then a solution to
y′ = f(x, y),  y(x₀) = y₀,
exists (at least for some small interval of x's) and is unique.
Note that the problems y′ = 1/x, y(0) = 0 and y′ = 2√|y|, y(0) = 0 do not satisfy the hypotheses of the theorem. But we ought to be careful about this existence business. It is quite possible that the solution only exists for a short while.
Example 1.2.3: Take
y′ = y²,  y(0) = A,
for some constant A.
We know how to solve this equation. First assume that A ≠ 0, so y is not equal to zero at least for some x near 0. Then dx/dy = 1/y², so x = −1/y + C, so y = 1/(C − x). If y(0) = A, then C = 1/A, so
y = 1/((1/A) − x).
Now if A = 0, then y = 0 is a solution.
For example, when A = 1 the solution "blows up" at x = 1. Hence, the solution does not exist for all x even if the equation is nice everywhere. The equation y′ = y² certainly looks nice.
For most of this course we will be interested in equations where existence and uniqueness holds, and in fact holds "globally," unlike for the equation y′ = y².
1.2.3 Exercises
Exercise 1.2.1: Sketch the direction field for y′ = e^{x−y}. How do the solutions behave as x grows? Can you guess a particular solution by looking at the direction field?
Exercise 1.2.2: Sketch the direction field for y′ = x².
Exercise 1.2.3: Sketch the direction field for y′ = y².
Exercise 1.2.4: Is it possible to solve the equation y′ = xy/(cos x) for y(0) = 1? Justify.
1.3 Separable equations
Note: 1 lecture, §1.4 in EP
When the equation is of the form y′ = f(x), we can just integrate: y = ∫ f(x) dx + C. Unfortunately, this method no longer works for the general form of the equation y′ = f(x, y). Integrating both sides yields
y = ∫ f(x, y) dx + C.
Notice the dependence on y in the integral.
1.3.1 Separable equations
On the other hand, what if the equation is separable, that is, if it looks like
y′ = f(x) g(y),
for some functions f(x) and g(y)? Let us write the equation in Leibniz notation:
dy/dx = f(x) g(y).
Then we rewrite the equation as
dy/g(y) = f(x) dx.
Now both sides look like something we can integrate. We obtain
∫ dy/g(y) = ∫ f(x) dx + C.
If we can explicitly solve this integral we can maybe solve for y.
Example 1.3.1: Take the equation
y′ = xy.
First note that y = 0 is a solution, so assume y ≠ 0 from now on. Write the equation as
dy/dx = xy,
then
∫ dy/y = ∫ x dx + C.
We compute the antiderivatives to get
ln |y| = x²/2 + C.
Or
|y| = e^{x²/2 + C} = e^{x²/2} e^{C} = D e^{x²/2},
where D > 0 is some constant. Because y = 0 is a solution and because of the absolute value we can actually write
y = D e^{x²/2}
for any number D (including zero or negative).
We check:
y′ = D x e^{x²/2} = x (D e^{x²/2}) = xy.
Yay!
We should be a little bit more careful about this method. Because we were integrating in two different variables, it does not sound quite right; we seemed to be doing a different operation to each side. Let us work out this method more rigorously. Take
dy/dx = f(x) g(y).
We rewrite the equation as follows. Note that y = y(x) is a function of x, and so is dy/dx!
(1/g(y)) (dy/dx) = f(x).
We integrate both sides with respect to x:
∫ (1/g(y)) (dy/dx) dx = ∫ f(x) dx + C.
We can use the change of variables formula:
∫ 1/g(y) dy = ∫ f(x) dx + C.
And we are done.
1.3.2 Implicit solutions
It is clear that we might sometimes get stuck even if we can do the integration. For example, take the separable equation
y′ = xy/(y² + 1).
We separate variables:
((y² + 1)/y) dy = (y + 1/y) dy = x dx.
Now we integrate to get
y²/2 + ln |y| = x²/2 + C.
Or maybe the easier looking expression:
y² + 2 ln |y| = x² + C.
It is not easy to find the solution explicitly as it is hard to solve for y. We will, therefore, call this
solution an implicit solution. It is easy to check that implicit solutions still satisfy the differential
equation. In this case, we differentiate to get
y′ (2y + 2/y) = 2x.
It is simple to see that the differential equation holds. If you want to compute values for y you
might have to be tricky. For example, you can graph x as a function of y, and then flip your paper.
Computers are also good at some of these tricks, but you have to be careful.
We note above that the equation also has a solution y = 0. In this case, it turns out that the general solution is y² + 2 ln |y| = x² + C together with y = 0. These outlying solutions such as y = 0 are sometimes called singular solutions.
1.3.3 Examples
Example 1.3.2: Solve x²y′ = 1 − x² + y² − x²y², y(1) = 0.
First factor the right hand side to obtain
x²y′ = (1 − x²)(1 + y²).
Now we separate variables, integrate, and solve for y:
y′/(1 + y²) = (1 − x²)/x²,
y′/(1 + y²) = 1/x² − 1,
arctan(y) = −1/x − x + C,
y = tan(−1/x − x + C).
Now solve for the initial condition, 0 = tan(−2 + C), to get C = 2 (or C = 2 + π, etc.). The solution we are seeking is, therefore,
y = tan(−1/x − x + 2).
Example 1.3.3: Suppose Bob made a cup of coffee, and the water was boiling (100 degrees Celsius) at time t = 0. Suppose Bob likes to drink his coffee at 70 degrees. Let the ambient (room) temperature be 26 degrees. Furthermore, suppose Bob measured the temperature of the coffee at 1 minute (t = 1) and found that it dropped to 95 degrees. When should Bob start drinking?
Let T be the temperature of the coffee, and let A be the ambient (room) temperature. Then for some k the temperature of the coffee satisfies
dT/dt = k(A − T).
For our setup A = 26, T(0) = 100, T(1) = 95. We separate variables and integrate (C and D will denote arbitrary constants):
(1/(A − T)) dT/dt = k,
ln |A − T| = −kt + C,
A − T = D e^{−kt},
T = A − D e^{−kt}.
That is, T = 26 − D e^{−kt}. We plug in the first condition: 100 = T(0) = 26 − D, and hence D = −74. Now we have T = 26 + 74 e^{−kt}. We plug in 95 = T(1) = 26 + 74 e^{−k}. Solving for k, we get k = −ln((95 − 26)/74) ≈ 0.07. Now we solve for the time t that gives us a temperature of 70 degrees. That is, we solve 70 = 26 + 74 e^{−0.07t} to get t = −ln((70 − 26)/74)/0.07 ≈ 7.43 minutes. So Bob can begin to drink the coffee at about 7 and a half minutes from the time Bob made it. That is probably about the amount of time it took us to calculate how long it would take.
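The numbers are easy to double-check; a short Python sketch of the computation (mine, not from the notes):

```python
# Cooling coffee: T(t) = 26 + 74 * e^{-k t}, with t in minutes.
import math

k = -math.log((95 - 26) / 74)              # from 95 = T(1); k is about 0.07
t_drink = -math.log((70 - 26) / 74) / k    # solve 70 = 26 + 74 * e^{-k t}

print(k)         # approximately 0.0700
print(t_drink)   # approximately 7.43 (minutes)
```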
Example 1.3.4: Solve y′ = −xy²/3.
First note that y = 0 is a solution (a singular solution). So assume that y ≠ 0 and write
(−3/y²) y′ = x,
3/y = x²/2 + C,
y = 3/(x²/2 + C).
1.3.4 Exercises
Exercise 1.3.1: Solve y′ = x/y.
Exercise 1.3.2: Solve y′ = x²y.
Exercise 1.3.3: Solve dx/dt = (x² − 1) t, for x(0) = 0.
Exercise 1.3.4: Solve dx/dt = x sin(t), for x(0) = 1.
Exercise 1.3.5: Solve dy/dx = xy + x + y + 1. Hint: Factor the right hand side.
Exercise 1.3.6: Find an implicit solution to xy′ = y + 2x²y, where y(1) = 1.
Exercise 1.3.7: Solve x dy/dx − y = 2x²y, for y(0) = 10.
1.4 Linear equations and the integrating factor
Note: more than 1 lecture, §1.5 in EP
One of the most important types of equations we will learn how to solve are the so-called linear equations. In fact, the majority of this course will focus on linear equations. In this lecture we will focus on the first order linear equation. That is, a first order equation is linear if we can put it into the following form:
y′ + p(x) y = f(x).    (1.3)
The word “linear” here means linear in y. The dependence on x can be more complicated.
Solutions of linear equations have nice properties. For example, the solution exists wherever
p(x) and f (x) are defined, and has the same regularity (read: it is just as nice). But most importantly
for us right now, there is a method for solving linear first order equations.
What we will do is to multiply both sides of (1.3) by some function r(x) such that
r(x) y′ + r(x) p(x) y = d/dx [ r(x) y ].
We can then integrate both sides of
d/dx [ r(x) y ] = r(x) f(x).
Note that the right hand side does not depend on y and the left hand side is written as a derivative
of a function. We can then solve for y. The function r(x) is called the integrating factor and the
method is called the integrating factor method.
So we are looking for a function r(x) such that if we differentiate it, we get the same function
back multiplied by p(x). That seems like a job for the exponential function!
r(x) = e^{∫ p(x) dx}.
Let us do the calculation:
y′ + p(x) y = f(x),
e^{∫ p(x) dx} y′ + e^{∫ p(x) dx} p(x) y = e^{∫ p(x) dx} f(x),
d/dx [ e^{∫ p(x) dx} y ] = e^{∫ p(x) dx} f(x),
e^{∫ p(x) dx} y = ∫ e^{∫ p(x) dx} f(x) dx + C,
y = e^{−∫ p(x) dx} ( ∫ e^{∫ p(x) dx} f(x) dx + C ).
Of course, to get a closed form formula for y we need to be able to find a closed form formula
for the two integrals.
Example 1.4.1: Solve
y′ + 2xy = e^{x−x²},  y(0) = −1.
First note that p(x) = 2x and f(x) = e^{x−x²}. The integrating factor is r(x) = e^{∫ p(x) dx} = e^{x²}. We multiply both sides of the equation by r(x) to get
e^{x²} y′ + 2x e^{x²} y = e^{x−x²} e^{x²},
d/dx [ e^{x²} y ] = e^{x}.
We integrate:
e^{x²} y = e^{x} + C,
y = e^{x−x²} + C e^{−x²}.
Next, we solve for the initial condition: −1 = y(0) = 1 + C, so C = −2. The solution is
y = e^{x−x²} − 2 e^{−x²}.
Note that we do not care which antiderivative we take when computing e^{∫ p(x) dx}. You can always add a constant of integration, but those constants will not matter in the end.
Exercise 1.4.1: Try it! Add a constant of integration to the integral in the integrating factor and
show that the solution you get in the end is the same as what we got above.
A piece of advice: Do not try to remember the formula itself; that is way too hard. It is easier to remember the process and repeat it.
Since we cannot always evaluate the integrals in closed form, it is useful to know how to write
the solution in definite integral form. A definite integral is something that you can plug into a
computer or a calculator. Suppose we are given
y′ + p(x) y = f(x),  y(x₀) = y₀.
Look at the solution and write the integrals as definite integrals:
y(x) = e^{−∫_{x₀}^{x} p(s) ds} ( ∫_{x₀}^{x} e^{∫_{x₀}^{t} p(s) ds} f(t) dt + y₀ ).    (1.4)
You should be careful to properly use dummy variables here. If you now plug that into a computer or a calculator, it will be happy to give you numerical answers.
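For instance, here is a sketch (my own Python, assuming SciPy is available) that evaluates formula (1.4) by numerical quadrature for the data of Example 1.4.1 and compares the result to the closed form solution found there:

```python
# Evaluate formula (1.4) for y' + 2x*y = e^{x - x^2}, y(0) = -1.
import numpy as np
from scipy.integrate import quad

p = lambda s: 2 * s
f = lambda s: np.exp(s - s**2)
x0, y0 = 0.0, -1.0

P = lambda a, b: quad(p, a, b)[0]     # P(a, b) = integral of p from a to b

def y(x):
    inner = quad(lambda t: np.exp(P(x0, t)) * f(t), x0, x)[0]
    return np.exp(-P(x0, x)) * (inner + y0)

x = 1.0
print(y(x))                                   # about 0.2642
print(np.exp(x - x**2) - 2 * np.exp(-x**2))   # closed form, same value
```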
Exercise 1.4.2: Check that y(x₀) = y₀ in formula (1.4).
Exercise 1.4.3: Write the solution of the following problem as a definite integral, but try to simplify as far as you can. You will not be able to find the solution in closed form.
y′ + y = e^{x²−x},  y(0) = 10.
Example 1.4.2: The following is a simple application of linear equations, and this type of problem is used often in real life. For example, linear equations are used in figuring out the concentration of chemicals in bodies of water.
A 100 liter tank contains 10 kilograms of salt dissolved in 60 liters of water. Solution of water
and salt (brine) with concentration of 0.1 kg / liter is flowing in at the rate of 5 liters a minute. The
solution in the tank is well stirred and flows out at a rate of 3 liters a minute. How much salt is in
the tank when the tank is full?
Let us come up with the equation. Let x denote the kg of salt in the tank, let t denote the time
in minutes. Then for a small change ∆t in time, the change in x (denoted ∆x) is approximately
∆x ≈ (rate in × concentration in)∆t − (rate out × concentration out)∆t
Taking the limit ∆t → 0 we see that
dx/dt = (rate in × concentration in) − (rate out × concentration out).
We have
rate in = 5,
concentration in = 0.1,
rate out = 3,
concentration out = x/volume = x/(60 + (5 − 3)t).
Our equation is, therefore,
dx/dt = (5 × 0.1) − 3 (x/(60 + 2t)).
Or in the form (1.3):
dx/dt + (3/(60 + 2t)) x = 0.5.
Let us solve. The integrating factor is
r(t) = exp( ∫ 3/(60 + 2t) dt ) = exp( (3/2) ln(60 + 2t) ) = (60 + 2t)^{3/2}.
We multiply both sides of the equation by r(t) to get
(60 + 2t)^{3/2} dx/dt + (60 + 2t)^{3/2} (3/(60 + 2t)) x = 0.5 (60 + 2t)^{3/2},
d/dt [ (60 + 2t)^{3/2} x ] = 0.5 (60 + 2t)^{3/2},
(60 + 2t)^{3/2} x = ∫ 0.5 (60 + 2t)^{3/2} dt + C,
x = (60 + 2t)^{−3/2} ∫ 0.5 (60 + 2t)^{3/2} dt + C (60 + 2t)^{−3/2},
x = (60 + 2t)^{−3/2} (1/10) (60 + 2t)^{5/2} + C (60 + 2t)^{−3/2},
x = (60 + 2t)/10 + C (60 + 2t)^{−3/2}.
Now to figure out C. We know that at t = 0, x = 10. So
10 = x(0) = 60/10 + C (60)^{−3/2} = 6 + C (60)^{−3/2},
or
C = 4 (60^{3/2}) ≈ 1859.03.
We are interested in x when the tank is full. The tank is full when 60 + 2t = 100, that is, when t = 20. So
x(20) = (60 + 40)/10 + C (60 + 40)^{−3/2} ≈ 10 + 1859.03 (100)^{−3/2} ≈ 11.86.
There are about 11.86 kilograms of salt in the tank when it is full. The concentration at the end is approximately 11.86/100 ≈ 0.12 kg/liter, and we started with 1/6, or about 0.167 kg/liter.
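It is reassuring to confirm the answer with a numerical solver; a brief sketch (mine, assuming Python with SciPy):

```python
# Solve dx/dt = 0.5 - 3x/(60 + 2t), x(0) = 10, up to t = 20 (full tank).
from scipy.integrate import solve_ivp

rhs = lambda t, x: [0.5 - 3 * x[0] / (60 + 2 * t)]
sol = solve_ivp(rhs, (0, 20), [10.0], rtol=1e-10, atol=1e-10)

print(sol.y[0, -1])   # approximately 11.86 kg of salt
```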
1.4.1 Exercises
In the exercises, feel free to leave the answer as a definite integral if a closed form solution cannot be found. If you can find a closed form solution, you should give that.
Exercise 1.4.4: Solve y′ + xy = x.
Exercise 1.4.5: Solve y′ + 6y = e^{x}.
Exercise 1.4.6: Solve y′ + 3x²y = sin(x) e^{−x³}, with y(0) = 1.
Exercise 1.4.7: Solve y′ + cos(x) y = cos(x).
Exercise 1.4.8: Solve (1/(x² + 1)) y′ + xy = 3, with y(0) = 0.
Exercise 1.4.9: Suppose there are two lakes. The output of one is flowing into the other. The in and out flow from each lake is 500 liters per hour. The first lake contains 100 thousand liters of water and the second lake contains 200 thousand liters of water. A truck with 500 kg of toxic substance crashes into the first lake. Assume that the water is being continually mixed perfectly by the stream. a) Find the concentration of toxic substance as a function of time (in seconds) in both lakes. b) When will the concentration in the first lake be below 0.01 kg per liter? c) When will the concentration in the second lake be maximal?
Exercise 1.4.10: Newton's law of cooling states that dx/dt = −k(x − A), where x is the temperature, t is time, A is the ambient temperature, and k > 0 is a constant. Suppose that A = A₀ cos(ωt) for some constants A₀ and ω. That is, the ambient temperature oscillates (for example, night and day temperatures). a) Find the general solution. b) In the long term, will the initial conditions make much of a difference? Why or why not?
1.5 Substitution
Note: 1 lecture, §1.6 in EP
Just like when solving integrals, one method is to try to change variables to end up with a
simpler equation that can be solved.
1.5.1 Substitution
The equation
y′ = (x − y + 1)²
is neither separable nor linear. What can we do? How about trying to change variables, so that in
is neither separable nor linear. What can we do? How about trying to change variables, so that in
the new variables the equation is simpler. We will use another variable v, which we will treat as a
function of x. Let us try
v = x − y + 1.
Now we need to figure out y′ in terms of v′, v, and x. We differentiate (in x) to obtain v′ = 1 − y′. So y′ = 1 − v′. We plug this into the equation to get
1 − v′ = v².
In other words, v′ = 1 − v². Such an equation we know how to solve:
1/(1 − v²) dv = dx.
So
(1/2) ln |(v + 1)/(v − 1)| = x + C,
|(v + 1)/(v − 1)| = e^{2x + 2C},
or (v + 1)/(v − 1) = D e^{2x} for some constant D. Note that v = 1 and v = −1 are also solutions.
Now we need to "unsubstitute":
(x − y + 2)/(x − y) = D e^{2x},
and also the two solutions x − y + 1 = 1 (or y = x) and x − y + 1 = −1 (or y = x + 2). We also solve the first equation for y:
x − y + 2 = (x − y) D e^{2x},
x − y + 2 = D x e^{2x} − y D e^{2x},
−y + y D e^{2x} = D x e^{2x} − x − 2,
y (−1 + D e^{2x}) = D x e^{2x} − x − 2,
y = (D x e^{2x} − x − 2) / (D e^{2x} − 1).
Note that D = 0 gives y = x + 2, but no value of D gives the solution y = x.
Substitution in differential equations is applied in much the same way that it is applied in
calculus. You guess. Several different substitutions might work. There are some general things to
look for. We summarize a few of these in a table.
When you see        Try substituting
y y′                v = y²
y² y′               v = y³
(cos y) y′          v = sin y
(sin y) y′          v = cos y
y′ e^{y}            v = e^{y}
Usually you try to substitute in the “most complicated” part of the equation with the hopes of
simplifying it. The above table is just a rule of thumb. You might have to modify your guesses. If
a substitution does not work (it does not make the equation any simpler), try a different one.
1.5.2 Bernoulli equations
There are some forms of equations where there is a general rule for substitution that always works. For example, the so-called Bernoulli equations*:
y′ + p(x) y = q(x) yⁿ.
This equation looks a lot like a linear equation except for the yⁿ. If n = 0 or n = 1, then the equation is linear and we can solve it. Otherwise, the change of coordinates v = y^{1−n} transforms the Bernoulli equation into a linear equation. Note that n need not be an integer.
(*There are several things called Bernoulli equations; this is just one of them. The Bernoullis were a prominent Swiss family of mathematicians. These particular equations are named for Jacob Bernoulli (1654–1705).)
Example 1.5.1: Solve
x y′ + y(x + 1) + x y⁵ = 0,  y(1) = 1.
First we note this is a Bernoulli equation (p(x) = (x + 1)/x and q(x) = −1). We substitute
v = y^{1−5} = y^{−4},  v′ = −4 y^{−5} y′.
In other words, (−y⁵/4) v′ = y′. So
x y′ + y(x + 1) + x y⁵ = 0,
(−x y⁵/4) v′ + y(x + 1) + x y⁵ = 0,
(−x/4) v′ + y^{−4}(x + 1) + x = 0,
(−x/4) v′ + v(x + 1) + x = 0,
and finally
v′ − (4(x + 1)/x) v = 4.
Now the equation is linear. So use the integrating factor method. Let us assume that x > 0, so |x| = x. This assumption is OK because our initial condition is at x = 1. The integrating factor is
r(x) = exp( ∫ −4(x + 1)/x dx ) = e^{−4x − 4 ln x} = e^{−4x} x^{−4} = e^{−4x}/x⁴.
Now
d/dx [ e^{−4x} x^{−4} v ] = 4 e^{−4x} x^{−4},
e^{−4x} x^{−4} v = ∫_{1}^{x} 4 e^{−4s} s^{−4} ds + e^{−4},
v = e^{4x} x⁴ ( 4 ∫_{1}^{x} e^{−4s} s^{−4} ds + e^{−4} ).
Note that the integral in this expression is not possible to find in closed form. But again, as we said before, it is perfectly fine to have a definite integral in our solution. Now unsubstitute:
y^{−4} = e^{4x} x⁴ ( 4 ∫_{1}^{x} e^{−4s} s^{−4} ds + e^{−4} ),
y = e^{−x} / ( x ( 4 ∫_{1}^{x} e^{−4s} s^{−4} ds + e^{−4} )^{1/4} ).
1.5.3 Homogeneous equations
Another type of equation we can solve are the so-called homogeneous equations. Suppose that we can write the differential equation as
y′ = F(y/x).
Here we try the substitution
v = y/x,  and therefore  y′ = v + x v′.
We note that the equation is transformed into
v + x v′ = F(v),  or  x v′ = F(v) − v,  or  v′/(F(v) − v) = 1/x.
Hence an implicit solution is
∫ 1/(F(v) − v) dv = ln |x| + C.
Example 1.5.2: Solve
x²y′ = y² + xy,  y(1) = 1.
First we transform this into the form y′ = (y/x)² + y/x. Now we substitute v = y/x to get the separable equation
x v′ = v² + v − v = v²,
which has the solution
∫ 1/v² dv = ln |x| + C,
−1/v = ln |x| + C,
v = −1/(ln |x| + C).
We unsubstitute:
y/x = −1/(ln |x| + C),
y = −x/(ln |x| + C).
We want y(1) = 1, so
1 = y(1) = −1/(ln |1| + C) = −1/C.
Thus C = −1 and the solution we are looking for is
y = −x/(ln |x| − 1).
1.5.4 Exercises
Exercise 1.5.1: Solve x y′ + y(x + 1) + x y⁵ = 0, with y(1) = 1.
Exercise 1.5.2: Solve 2y y′ + 1 = y² + x, with y(0) = 1.
Exercise 1.5.3: Solve y′ + xy = y⁴, with y(0) = 1.
Exercise 1.5.4: Solve y y′ + x = √(x² + y²).
Exercise 1.5.5: Solve y′ = (x + y − 1)².
Exercise 1.5.6: Solve y′ = (x + y²)/(y √(y² + 1)), with y(0) = 1.
1.6 Autonomous equations
Note: 1 lecture, §2.2 in EP
Let us consider problems of the form
dx/dt = f(x),
where the derivative of solutions depends only on x (the dependent variable). These types of equations are called autonomous equations. If we think of t as time, the naming comes from the fact that the equation is independent of time.
Let us come back to the cooling coffee problem. Newton's law of cooling says that
dx/dt = −k(x − A),
where x is the temperature, t is time, k is some constant, and A is the ambient temperature. See Figure 1.6 for an example.
Note the solution x = A (in the example A = 5). We call these types of solutions equilibrium
solutions. The points on the x axis where f (x) = 0 are called critical points. The point x = A is
a critical point. In fact, each critical point corresponds to an equilibrium solution. Note also, by
looking at the graph, that the solution x = A is “stable” in that small perturbations in x do not lead
to substantially different solutions as t grows. If we change the initial condition a little bit, then as
t → ∞ we get x → A. We call such critical points stable. In this simple example it turns out that
all solutions in fact go to A as t → ∞. If a critical point is not stable we would say it is unstable.
[Figure 1.6: Slope field and some solutions of x′ = −0.3(x − 5).]
[Figure 1.7: Slope field and some solutions of x′ = 0.1x(5 − x).]
Let us consider the logistic equation
dx/dt = kx(M − x),
for some positive k and M. This equation is commonly used to model population if we know the limiting population M, that is, the maximum sustainable population. This scenario leads to less catastrophic predictions on world population. Note that in the real world there is no such thing as negative population, but we will still consider negative x for the purposes of the math.
See Figure 1.7 for an example. Note the two critical points, x = 0 and x = 5. The critical point at x = 5 is stable. On the other hand, the critical point at x = 0 is unstable.
It is not really necessary to find the exact solutions to talk about the long term behavior of the
solutions. For example, from the above we can easily see that
lim_{t→∞} x(t) =
    5            if x(0) > 0,
    0            if x(0) = 0,
    DNE or −∞    if x(0) < 0,
where DNE means "does not exist." From just looking at the slope field we cannot quite decide what happens if x(0) < 0. It could be that the solution does not exist for t all the way to ∞. Think of the equation y′ = y²; we have seen that its solutions may exist only for some finite period of time. The same can happen here. In our example equation above, it will actually turn out that the solution does not exist for all time, but to see that we would have to solve the equation. In any case, the solution does go to −∞, but it may get there rather quickly.
Many times we are interested only in the long term behavior of the solution, and hence we would just be doing way too much work if we tried to solve the equation exactly. It is easier to just look at the phase diagram or phase portrait, which is a simple way to visualize the behavior of autonomous equations. In this case there is one dependent variable x. We draw the x axis, mark all the critical points, and then draw arrows in between: up where f(x) is positive and down where f(x) is negative.
[Phase diagram for x′ = 0.1x(5 − x): a vertical x axis with critical points marked at x = 0 and x = 5; the arrows point down below x = 0, up between x = 0 and x = 5, and down above x = 5.]
Armed with the phase diagram, it is easy to approximately sketch how the solutions are going
to look.
Exercise 1.6.1: Try sketching a few solutions. Check with the graph above if you are getting the
same answers.
Once we draw the phase diagram, we can easily classify critical points as stable or unstable: a critical point is stable if the nearby arrows point toward it, and unstable if they point away from it.
[Diagram: arrows pointing away from an unstable critical point; arrows pointing toward a stable critical point.]
Since any mathematical model we cook up will only be an approximation to the real world, unstable points are generally bad news.
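For equations with a known formula for f(x), this classification can even be automated using the standard derivative test: f′ < 0 at a critical point means stable, f′ > 0 means unstable (f′ = 0 is inconclusive). A small sketch using Python's SymPy library (my own illustration, not part of the notes):

```python
# Classify critical points of x' = f(x) for the logistic example f = 0.1*x*(5 - x).
import sympy as sp

x = sp.symbols('x')
f = sp.Rational(1, 10) * x * (5 - x)

for point in sp.solve(sp.Eq(f, 0), x):
    slope = sp.diff(f, x).subs(x, point)    # sign of f' decides stability
    print(point, 'stable' if slope < 0 else 'unstable')
# prints: 0 unstable, then 5 stable
```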
Let us think about the logistic equation with harvesting. Logistic equations are commonly used
for modelling population. Suppose an alien race really likes to eat humans. They keep a planet
with humans on it and harvest the humans at a rate of h million humans per year. Suppose x is the
number of humans in millions on the planet and t is time in years. Let M be the limiting population
when no harvesting is done. k > 0 is some constant depending on how fast humans multiply. Our
equation becomes
dx/dt = kx(M − x) − h.
Multiply out and solve for the critical points:
dx/dt = −kx² + kMx − h.
The critical points A and B are
A = ( kM + √((kM)² − 4hk) ) / (2k),   B = ( kM − √((kM)² − 4hk) ) / (2k).
Exercise 1.6.2: Draw the phase diagram for different possibilities. Note that these possibilities
are A > B, or A = B, or A and B both complex (i.e. no real solutions).
It turns out that when h = 1, A and B are distinct and positive. The graph we get is given in Figure 1.8. As long as the population stays above B, which is approximately 1.55 million, the population will not die out. If the population ever drops below B, humans will die out, and the fast food restaurant serving them will go out of business.
When h = 1.6, then A = B = 4. There is only one critical point, and it is unstable. When the population starts above 4 million it will tend towards 4 million. If it ever drops below 4 million, humans will die out on the planet. This scenario is not one that we (as the human fast food proprietor) want to be in. A small perturbation of the equilibrium state and we are out of business. There is no room for error. See Figure 1.9.
Finally, if we are harvesting at 2 million humans per year, the population will always plummet towards zero, no matter how well stocked the planet starts. See Figure 1.10.
[Figure 1.8: Slope field and some solutions of x′ = 0.1x(8 − x) − 1.]
[Figure 1.9: Slope field and some solutions of x′ = 0.1x(8 − x) − 1.6.]
[Figure 1.10: Slope field and some solutions of x′ = 0.1x(8 − x) − 2.]
1.6.1 Exercises
Exercise 1.6.3: Let x′ = x². a) Draw the phase diagram, find the critical points, and mark them stable or unstable. b) Sketch typical solutions of the equation. c) Find lim_{t→∞} x(t) for the solution with the initial condition x(0) = −1.
Exercise 1.6.4: Let x′ = sin x. a) Draw the phase diagram for −4π ≤ x ≤ 4π. On this interval mark the critical points stable or unstable. b) Sketch typical solutions of the equation. c) Find lim_{t→∞} x(t) for the solution with the initial condition x(0) = 1.
Exercise 1.6.5: Suppose f(x) is positive for 0 < x < 1 and negative otherwise. a) Draw the phase diagram for x′ = f(x), find the critical points, and mark them stable or unstable. b) Sketch typical solutions of the equation. c) Find lim_{t→∞} x(t) for the solution with the initial condition x(0) = 0.5.
Exercise 1.6.6: Start with the logistic equation dx/dt = kx(M − x). Suppose that we modify our harvesting. That is, we will only harvest an amount proportional to the current population: we harvest hx for some h > 0. a) Construct the differential equation. b) Show that if kM > h, then the equation is still logistic. c) What happens when kM < h?
1.7 Numerical methods: Euler’s method
Note: 1 lecture, §2.4 in EP
At this point it may be good to first try the Lab II and/or Project II from the IODE website:
http://www.math.uiuc.edu/iode/.
The first thing to note is that, as we said before, it is generally very hard, if not impossible, to get a nice formula for the solution of the problem
y′ = f(x, y),  y(x₀) = y₀.
What if we want to find the value of the solution at some particular x? Or perhaps we want to produce a graph of the solution to inspect its behavior?
Euler's method*: We take x₀ and compute the slope k = f(x₀, y₀). The slope is the change in y per unit change in x. We follow the line for an interval of length h. Hence if y = y₀ at x₀, then we will say that y₁ (the approximate value of y at x₁ = x₀ + h) will be y₁ = y₀ + hk. Rinse, repeat! That is, compute x₂ and y₂ using x₁ and y₁. For an example of the first two steps of the method see Figure 1.11.
(*Named after the Swiss mathematician Leonhard Paul Euler (1707–1783). Do note that the correct pronunciation of the name sounds more like "oiler.")
Figure 1.11: First two steps of Euler's method with h = 1 for the equation y′ = y²/3 with initial conditions y(0) = 1.
More abstractly we compute

x_{i+1} = x_i + h,    y_{i+1} = y_i + h f(x_i, y_i).
By connecting the dots we get an approximate graph of the solution. Do note that this is not exactly the solution. See Figure 1.12 on the next page for the plot of the real solution.

* Named after the Swiss mathematician Leonhard Paul Euler (1707 – 1783). Do note the correct pronunciation of the name sounds more like "oiler."
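Since the method is just this one repeated update, it is easy to carry out on a computer. Here is a minimal Python sketch (the function euler and its interface are our own, not from IODE or [EP]), applied to the example discussed below:

    def euler(f, x0, y0, h, steps):
        """Approximate y(x0 + steps*h) for y' = f(x, y), y(x0) = y0,
        using Euler's method with a fixed step size h."""
        x, y = x0, y0
        for _ in range(steps):
            y = y + h * f(x, y)  # follow the slope for an interval of length h
            x = x + h
        return y

    # The example below: y' = y^2/3 with y(0) = 1; approximate y(2) with h = 0.5.
    print(euler(lambda x, y: y**2 / 3, 0.0, 1.0, 0.5, 4))  # 2.2086...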
Figure 1.12: Two steps of Euler's method (step size 1) and the exact solution for the equation y′ = y²/3 with initial conditions y(0) = 1.
Let us see what happens with the equation y′ = y²/3, y(0) = 1. Let us try to approximate y(2) using Euler's method. In Figures 1.11 and 1.12 we have essentially graphically approximated y(2) with step size 1. With step size 1 we have y(2) ≈ 1.926. The real answer is 3. So we are approximately 1.074 off. Let us halve the step size. If you do the computation you will find that y(2) ≈ 2.209, an error of about 0.791. Table 1.1 on the facing page gives the values computed for various step sizes.
Exercise 1.7.1: Solve this equation exactly and show that y(2) = 3.
The difference between the actual solution and the approximate solution is what we call the error. We will usually talk about just the size of the error, and we do not care much about its sign. The main point is that we usually do not know the real solution, so we only have a vague understanding of the error. If we knew the error exactly, what would be the point of doing the approximation?
We notice that except for the first few times, every time we halved the interval the error approximately halved. This halving of the error is a general feature of Euler's method, as it is a first order method. In the IODE Project II you are asked to implement a second order method. A second order method reduces the error to approximately one quarter every time you halve the interval.

Note that to get the error to be within 0.1 of the answer we already had to do 64 steps. To get it to within 0.01 we would have to halve another three or four times, meaning doing 512 to 1024 steps. That is quite a bit to do by hand. The improved Euler method should quarter the error every time you halve the interval, so you would have to do approximately half as many "halvings" to get the same error. This reduction can be a big deal. With 10 halvings (starting at h = 1) you have 1024 steps, whereas with 5 halvings you only have to do 32 steps, assuming that the error was comparable to start with. A computer may not care about this difference for a problem this simple, but suppose each step takes a second to compute (the function may be substantially more difficult to compute than y²/3). Then the difference is 32 seconds versus about 17 minutes. Note: We are not being altogether fair; a second order method would probably double the time to do each step. Even so, it is 1 minute versus 17 minutes. Next, suppose that you have to repeat such a calculation for different parameters a thousand times. You get the idea.

h          Approximate y(2)   Error            Error / Previous error
1          1.92592592593      1.074074074070
0.5        2.20861152999      0.791388470013   0.736809954840
0.25       2.47249414666      0.527505853335   0.666557415634
0.125      2.68033658758      0.319663412423   0.605990266083
0.0625     2.82040079550      0.179599204497   0.561838476090
0.03125    2.90412106479      0.095878935207   0.533849442573
0.015625   2.95035498158      0.049645018422   0.517788587396
0.0078125  2.97472419486      0.025275805142   0.509130743538

Table 1.1: Euler's method approximation of y(2), where y′ = y²/3, y(0) = 1.
Note that we do not know the error! How do you know what is the right step size? Essentially
you keep halving the interval and if you are lucky you can estimate the error from a few of these
calculations and the assumption that the error goes down by a factor of one half each time (if you
are using standard Euler).
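For example, if E_h ≈ 2E_{h/2}, then the exact value is approximately y_{h/2} + E_{h/2} and also y_h + 2E_{h/2}, so y_{h/2} − y_h ≈ E_{h/2}. A minimal sketch of this estimate, reusing the euler function from the sketch above (our own code, not from the text):

    # Rough error estimate from one halving, assuming the error roughly
    # halves with h (standard Euler).
    f = lambda x, y: y**2 / 3
    coarse = euler(f, 0.0, 1.0, 0.125, 16)   # h = 0.125
    fine = euler(f, 0.0, 1.0, 0.0625, 32)    # h = 0.0625
    print(fine - coarse)  # about 0.14, a rough stand-in for the true error 0.1796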
Exercise 1.7.2: In the table above, suppose you do not know the error. Take the approximate values of the function in the last two lines and assume that the error goes down by a factor of 2. Can you estimate the error in the last line from this? Does it agree with the table? Now do it for the first two rows. Does this agree with the table?
Let us talk a little bit more about the example y′ = y²/3, y(0) = 1. Suppose that instead of y(2) we wish to find y(3). The results of this effort are listed in Table 1.2 on the next page for successive halvings of h. What is going on here? Well, you should solve the equation exactly and you will notice that the solution does not exist at x = 3. In fact, the solution blows up.
Another case when things can go bad is if the solution oscillates wildly near some point. Such
an example is given in IODE Project II. In this case, the solution may exist at all points, but even
a better approximation method than Euler would need an insanely small step size to compute the
solution with reasonable precision. And computers might not be able to handle such a small step
size anyway.
In real applications you would not use a simple method such as Euler's. The simplest method that would probably be used in a real application is the standard Runge-Kutta method (we will not describe it here). That is a fourth order method, which means that if you halve the interval, the error generally goes down by a factor of 16.
h Approximate y(3)
1 3.16232281664
0.5 4.54328915766
0.25 6.86078752222
0.125 10.8032064113
0.0625 17.5989264104
0.03125 29.4600446195
0.015625 50.4012144477
0.0078125 87.7576927770
Table 1.2: Attempts to use Euler's method to approximate y(3), where y′ = y²/3, y(0) = 1.
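The table is easy to reproduce with the euler sketch from above (again our own code, not from the text); the computed values never settle down, no matter how small the step:

    # Approximations of y(3) for y' = y^2/3, y(0) = 1, with h = 1, 1/2, ..., 1/128.
    f = lambda x, y: y**2 / 3
    for n in range(8):
        h = 1 / 2**n
        print(h, euler(f, 0.0, 1.0, h, 3 * 2**n))  # keeps growing without bound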
Choosing the right method to use and the right step size can be very tricky. There are several
competing factors to consider.
• Computational time: Each step takes computer time. Even if the function f is simple to
compute, you do it many times over. Large step size means faster computation, but perhaps
not the right precision.
• Roundoff errors: Computers only compute with a certain number of significant digits. Errors
introduced by rounding numbers off during your computations become noticeable when the
step size becomes too small relative to the quantities you are working with. So reducing step
size may in fact make errors worse.
• Stability: Certain equations may be numerically unstable. Small errors lead to large errors
down the line. Or in the worst case the numerical computations might be giving you bogus
numbers that look like a correct answer. Just because the numbers have stabilized after
successive halving, does not mean that you must have the right answer. Or what may happen
is that the numbers may never stabilize no matter how many times you halve the interval.
You have seen just the beginnings of the challenges that appear in real applications. There is ongoing active research by engineers and mathematicians on how to do numerical approximation in the best way. For example, the general purpose method used for the ODE solver in Matlab and Octave (as of this writing) is a method that appeared in the literature only in the 1980s.
1.7.1 Exercises
Exercise 1.7.3: Consider dx/dt = (2t − x)², x(0) = 2. Use Euler's method with step size h = 0.5 to approximate x(1).
Exercise 1.7.4: Consider dx/dt = t − x, x(0) = 1. a) Use Euler's method with step sizes h = 1, 1/2, 1/4, 1/8 to approximate x(1). b) Solve the equation exactly. c) Describe what happens to the errors for each h you used. That is, find the factor by which the error changed each time you halved the interval.
Chapter 2
Higher order linear ODEs
2.1 Second order linear ODEs
Note: less than 1 lecture, first part of §3.1 in EP
Let us consider the general second order linear differential equation

A(x)y″ + B(x)y′ + C(x)y = F(x).

We usually divide through by A to get

y″ + p(x)y′ + q(x)y = f(x),    (2.1)

where p = B/A, q = C/A, and f = F/A. The word linear means that the equation contains no powers nor functions of y, y′, and y″.
In the special case when f(x) = 0 we have a homogeneous equation

y″ + p(x)y′ + q(x)y = 0.    (2.2)

We have already seen some second order linear homogeneous equations.

y″ + k²y = 0    Two solutions are: y₁ = cos kx, y₂ = sin kx.
y″ − k²y = 0    Two solutions are: y₁ = e^{kx}, y₂ = e^{−kx}.
If we know two solutions of a linear homogeneous equation, we know a lot more of them.
Theorem 2.1.1 (Superposition). Suppose y₁ and y₂ are two solutions of the homogeneous equation (2.2). Then

y(x) = C₁y₁(x) + C₂y₂(x)

also solves (2.2) for arbitrary constants C₁ and C₂.
That is, we can add together solutions and multiply by constants to obtain new different solu-
tions. We will prove this theorem because the proof is very enlightening and illustrates how linear
equations work.
Proof: Let y = C₁y₁ + C₂y₂. Then

y″ + py′ + qy = (C₁y₁ + C₂y₂)″ + p(C₁y₁ + C₂y₂)′ + q(C₁y₁ + C₂y₂)
             = C₁y₁″ + C₂y₂″ + C₁py₁′ + C₂py₂′ + C₁qy₁ + C₂qy₂
             = C₁(y₁″ + py₁′ + qy₁) + C₂(y₂″ + py₂′ + qy₂)
             = C₁ · 0 + C₂ · 0 = 0.
The proof becomes even simpler to state if we use the operator notation. An operator is an object that eats functions and spits out functions (kind of like what a function is, but a function eats numbers and spits out numbers). Define the operator L by

Ly = y″ + py′ + qy.

L being linear means that L(C₁y₁ + C₂y₂) = C₁Ly₁ + C₂Ly₂. Hence the proof simply becomes

Ly = L(C₁y₁ + C₂y₂) = C₁Ly₁ + C₂Ly₂ = C₁ · 0 + C₂ · 0 = 0.
Two other solutions to the second equation, y″ − k²y = 0, are y₁ = cosh kx and y₂ = sinh kx. Let us remind ourselves of the definitions: cosh x = (e^x + e^{−x})/2 and sinh x = (e^x − e^{−x})/2. Therefore, these are solutions by superposition, as they are linear combinations of the two exponential solutions.

As sinh and cosh are sometimes more convenient to use than the exponential, let us review some of their properties:
cosh 0 = 1,    sinh 0 = 0,
d/dx cosh x = sinh x,    d/dx sinh x = cosh x,
cosh²x − sinh²x = 1.
Exercise 2.1.1: Derive these properties from the definitions of sinh and cosh in terms of exponentials.
Linear equations have nice and simple answers to the existence and uniqueness question.
Theorem 2.1.2 (Existence and uniqueness). Suppose p, q, f are continuous functions and a, b₀, b₁ are constants. The equation

y″ + p(x)y′ + q(x)y = f(x)

has exactly one solution y(x) satisfying the initial conditions

y(a) = b₀,    y′(a) = b₁.
For example, the equation y″ + y = 0 with y(0) = b₀ and y′(0) = b₁ has the solution

y(x) = b₀ cos x + b₁ sin x.

Or the equation y″ − y = 0 with y(0) = b₀ and y′(0) = b₁ has the solution

y(x) = b₀ cosh x + b₁ sinh x.

Here note that using cosh and sinh allows us to solve for the initial conditions much more easily than if we had used the exponentials.
Note that the initial condition for a second order ODE consists of two equations. So if we have
two arbitrary constants we should be able to solve for the constants and find a solution satisfying
the initial conditions.
Question: Suppose we find two different solutions y₁ and y₂ to the homogeneous equation (2.2). Can every solution be written (using superposition) in the form y = C₁y₁ + C₂y₂?

The answer is affirmative, provided that y₁ and y₂ are different enough in the following sense. We will say y₁ and y₂ are linearly independent if one is not a constant multiple of the other. If you find two linearly independent solutions, then every other solution can be written in the form

y = C₁y₁ + C₂y₂.

In this case, y = C₁y₁ + C₂y₂ is the general solution.
For example, we found the solutions y₁ = sin x and y₂ = cos x for the equation y″ + y = 0. It is obvious that sin and cos are not multiples of each other. If sin x = A cos x for some constant A, we let x = 0 and this would imply A = 0. But then sin x = 0 for all x, which is preposterous. So y₁ and y₂ are linearly independent. Hence

y = C₁ cos x + C₂ sin x

is the general solution to y″ + y = 0.
2.1.1 Exercises
Exercise 2.1.2: Show that y = e^x and y = e^{2x} are linearly independent.
Exercise 2.1.3: Take y″ + 5y = 10x + 5. Can you guess a solution?
Exercise 2.1.4: Prove the superposition principle for nonhomogeneous equations. Suppose that y₁ is a solution to Ly₁ = f(x) and y₂ is a solution to Ly₂ = g(x) (same operator L). Show that y = y₁ + y₂ solves Ly = f(x) + g(x).
Exercise 2.1.5: For the equation x²y″ − xy′ = 0, find two solutions, show that they are linearly independent, and find the general solution. Hint: Try y = x^r.
Note that equations of the form ax²y″ + bxy′ + cy = 0 are called Euler's equations or Cauchy-Euler equations. They are solved by trying y = x^r and solving for r (we can assume that x ≥ 0 for simplicity).
Exercise 2.1.6: Suppose that (b − a)² − 4ac > 0. a) Find a formula for the general solution of ax²y″ + bxy′ + cy = 0. Hint: Try y = x^r and find a formula for r. b) What happens when (b − a)² − 4ac = 0 or (b − a)² − 4ac < 0?
We will revisit the case when (b − a)² − 4ac < 0 later.
Exercise 2.1.7: Suppose that (b − a)² − 4ac = 0. Find a formula for the general solution of ax²y″ + bxy′ + cy = 0. Hint: Try y = x^r ln x for the second solution.
If you have one solution to a second order linear homogeneous equation you can find another
one. This is the reduction of order method.
Exercise 2.1.8: Suppose y₁ is a solution to y″ + p(x)y′ + q(x)y = 0. Show that

y₂(x) = y₁(x) ∫ e^{−∫ p(x) dx} / (y₁(x))² dx

is also a solution.
Let us solve some famous equations.
Exercise 2.1.9 (Chebychev's equation of order 1): Take (1 − x²)y″ − xy′ + y = 0. a) Show that y = x is a solution. b) Use reduction of order to find a second linearly independent solution. c) Write down the general solution.
Exercise 2.1.10 (Hermite's equation of order 2): Take y″ − 2xy′ + 4y = 0. a) Show that y = 1 − 2x² is a solution. b) Use reduction of order to find a second linearly independent solution. c) Write down the general solution.
2.2 Constant coefficient second order linear ODEs
Note: more than 1 lecture, second part of §3.1 in EP
Suppose we have the problem

y″ − 6y′ + 8y = 0,    y(0) = −2,    y′(0) = 6.

This is a second order linear homogeneous equation with constant coefficients. Constant coefficients means that the functions in front of y″, y′, and y are constants, not depending on x.
Think about a function that stays essentially the same when you differentiate it, so that we can take the function and its derivatives, add these together, and end up with zero.
Let us try a solution of the form y = e^{rx}. Then y′ = re^{rx} and y″ = r²e^{rx}. Plug in to get

y″ − 6y′ + 8y = 0,
r²e^{rx} − 6re^{rx} + 8e^{rx} = 0,
r² − 6r + 8 = 0    (divide through by e^{rx}),
(r − 2)(r − 4) = 0.

So if r = 2 or r = 4, then e^{rx} is a solution. So let y₁ = e^{2x} and y₂ = e^{4x}.
Exercise 2.2.1: Check that y₁ and y₂ are solutions.
The functions e^{2x} and e^{4x} are linearly independent. If they were not, we could write e^{4x} = Ce^{2x}, which would imply that e^{2x} = C, which is clearly not possible. Hence, we can write the general solution as

y = C₁e^{2x} + C₂e^{4x}.
We need to solve for C₁ and C₂. To apply the initial conditions we first find y′ = 2C₁e^{2x} + 4C₂e^{4x}. We plug in x = 0 and solve.

−2 = y(0) = C₁ + C₂,
6 = y′(0) = 2C₁ + 4C₂.

Either apply some matrix algebra, or just solve these by high school algebra. For example, divide the second equation by 2 to obtain 3 = C₁ + 2C₂, and subtract the two equations to get 5 = C₂. Then C₁ = −7, as −2 = C₁ + 5. Hence, the solution we are looking for is

y = −7e^{2x} + 5e^{4x}.
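If you want to double-check such a computation, a computer algebra system can solve the whole initial value problem; a minimal sketch, assuming the SymPy library is available:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')
    ode = y(x).diff(x, 2) - 6*y(x).diff(x) + 8*y(x)
    sol = sp.dsolve(ode, y(x), ics={y(0): -2, y(x).diff(x).subs(x, 0): 6})
    print(sol)  # Eq(y(x), 5*exp(4*x) - 7*exp(2*x))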
Let us generalize this example into a method. Suppose that we have an equation

ay″ + by′ + cy = 0,    (2.3)

where a, b, c are constants. Try the solution y = e^{rx} to obtain

ar²e^{rx} + bre^{rx} + ce^{rx} = 0,
ar² + br + c = 0.

The equation ar² + br + c = 0 is called the characteristic equation of the ODE. Solve for r by using the quadratic formula:

r₁, r₂ = (−b ± √(b² − 4ac)) / (2a).

Therefore, we have e^{r₁x} and e^{r₂x} as solutions. There is still a difficulty if r₁ = r₂, but it is not hard to overcome.
Theorem 2.2.1. Suppose that r₁ and r₂ are the roots of the characteristic equation.

(i) If r₁ and r₂ are distinct and real (b² − 4ac > 0), then (2.3) has the general solution

y = C₁e^{r₁x} + C₂e^{r₂x}.

(ii) If r₁ = r₂ (b² − 4ac = 0), then (2.3) has the general solution

y = (C₁ + C₂x) e^{r₁x}.
For another example of the first case, note the equation y″ − k²y = 0. Here the characteristic equation is r² − k² = 0 or (r − k)(r + k) = 0, and hence e^{−kx} and e^{kx} are the two linearly independent solutions.
Example 2.2.1: Find the general solution of

y″ − 8y′ + 16y = 0.

The characteristic equation is r² − 8r + 16 = (r − 4)² = 0. Hence a double root r₁ = r₂ = 4. The general solution is, therefore,

y = (C₁ + C₂x) e^{4x} = C₁e^{4x} + C₂xe^{4x}.
Exercise 2.2.2: Check that e^{4x} and xe^{4x} are linearly independent.
That e^{4x} solves the equation is clear. If xe^{4x} solves the equation, then we know we are done. Let us compute y′ = e^{4x} + 4xe^{4x} and y″ = 8e^{4x} + 16xe^{4x}. Plug in:

y″ − 8y′ + 16y = 8e^{4x} + 16xe^{4x} − 8(e^{4x} + 4xe^{4x}) + 16xe^{4x} = 0.
We should note that in practice, a doubled root rarely happens. If you pick your coefficients truly randomly, you are very unlikely to get a doubled root.
Let us give a short "proof" for why the solution xe^{rx} works when the root is doubled. This case is really a limiting case of the situation where the two roots are distinct and very close. Note that (e^{r₂x} − e^{r₁x})/(r₂ − r₁) is a solution when the roots are distinct. When r₁ goes to r₂ in the limit, this is like taking the derivative of e^{rx} using r as the variable. This limit is xe^{rx}, and hence this is also a solution in the doubled root case.
2.2.1 Complex numbers and Euler’s formula
It may happen that a polynomial has some complex roots. For example, the equation r² + 1 = 0 has no real roots, but it does have two complex roots. Here we review some properties of complex numbers.
Complex numbers may seem a strange concept, especially because of the terminology. There is nothing imaginary or really complicated about complex numbers. A complex number is really just a pair of real numbers, (a, b). We can think of a complex number as a point in the plane. We add complex numbers in the straightforward way. We define multiplication by

(a, b) × (c, d) := (ac − bd, ad + bc).

It turns out that with this multiplication rule, all the standard properties of arithmetic hold. Further, and most importantly, (0, 1) × (0, 1) = (−1, 0).
Generally we just write (a, b) as a + ib, and we treat i as if it were an unknown. You can do arithmetic with complex numbers just as you would do with polynomials. The property we just mentioned becomes i² = −1. So whenever you see i², you can replace it by −1. Also, for example, i and −i are roots of r² + 1 = 0.
Note that engineers often use the letter j instead of i for the square root of −1. We will use the mathematicians' convention and use i.
Exercise 2.2.3: Make sure you understand (that you can justify) the following identities:

• i² = −1, i³ = −i, i⁴ = 1, 1/i = −i,
• (3 − 7i)(−2 − 9i) = −69 − 13i,
• (3 − 2i)(3 + 2i) = 3² − (2i)² = 3² + 2² = 13,
• 1/(3 − 2i) = (1/(3 − 2i)) · ((3 + 2i)/(3 + 2i)) = (3 + 2i)/13 = 3/13 + (2/13)i.
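Incidentally, Python has complex numbers built in, with j playing the role of i (the engineers' convention mentioned above), so such identities are easy to check:

    print((3 - 7j) * (-2 - 9j))  # (-69-13j)
    print((3 - 2j) * (3 + 2j))   # (13+0j)
    print(1 / (3 - 2j))          # (0.2307...+0.1538...j), i.e. 3/13 + (2/13)i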
We can also define the exponential e^{a+ib} of a complex number. We can do this by just writing down the Taylor series and plugging in the complex number. Because most properties of the exponential can be proved by looking at the Taylor series, we note that many properties still hold for the complex exponential. For example, e^{x+y} = e^x e^y. This means that e^{a+ib} = e^a e^{ib}, and hence if we can compute e^{ib} easily, we can compute e^{a+ib}. Here we will use the so-called Euler's formula.
Theorem 2.2.2 (Euler's formula).

e^{iθ} = cos θ + i sin θ    and    e^{−iθ} = cos θ − i sin θ.
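A quick numerical sanity check of the formula (our own snippet, just an illustration):

    import cmath
    import math

    theta = 0.7
    print(cmath.exp(1j * theta))             # (0.7648...+0.6442...j)
    print(math.cos(theta), math.sin(theta))  # 0.7648... 0.6442..., matching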
Exercise 2.2.4: Using Euler's formula, check the identities:

cos θ = (e^{iθ} + e^{−iθ})/2    and    sin θ = (e^{iθ} − e^{−iθ})/(2i).
Exercise 2.2.5: Double angle identities: Start with e^{i(2θ)} = (e^{iθ})². Use Euler's formula on each side and deduce:

cos 2θ = cos²θ − sin²θ    and    sin 2θ = 2 sin θ cos θ.
We also will need some notation. For a complex number a + ib we call a the real part and b
the imaginary part of the number. In notation this is
Re(a + bi) = a and Im(a + bi) = b.
2.2.2 Complex roots
So now suppose that the equation ay″ + by′ + cy = 0 has the characteristic equation ar² + br + c = 0, which has complex roots. That is, by the quadratic formula the roots are (−b ± √(b² − 4ac))/(2a). These are complex if b² − 4ac < 0. In this case we can see that the roots are

r₁, r₂ = −b/(2a) ± i √(4ac − b²)/(2a).
As you can see, we always get a pair of roots of the form α ± iβ. In this case we can still write the solution as

y = C₁e^{(α+iβ)x} + C₂e^{(α−iβ)x}.

However, the exponential is now complex valued. We would need to choose C₁ and C₂ to be complex numbers to obtain a real valued solution (which is what we are after). While there is nothing particularly wrong with this, it can make calculations harder, and it would be nice to find two real valued solutions.
Here we can use Euler's formula. First let

y₁ = e^{(α+iβ)x}    and    y₂ = e^{(α−iβ)x}.

Then note that

y₁ = e^{αx} cos βx + i e^{αx} sin βx,
y₂ = e^{αx} cos βx − i e^{αx} sin βx.

We note that linear combinations of solutions are also solutions. Hence

y₃ = (y₁ + y₂)/2 = e^{αx} cos βx,
y₄ = (y₁ − y₂)/(2i) = e^{αx} sin βx,

are also solutions. Furthermore, they are real valued. It is not hard to see that they are linearly independent (not multiples of each other). Therefore, we have the following theorem.
Theorem 2.2.3. Take the equation

ay″ + by′ + cy = 0.

If the characteristic equation has the roots α ± iβ, then the general solution is

y = C₁e^{αx} cos βx + C₂e^{αx} sin βx.
Example 2.2.2: Find the general solution of y″ + k²y = 0, for a constant k > 0.

The characteristic equation is r² + k² = 0. Therefore, the roots are r = ±ik, and by the theorem we have the general solution

y = C₁ cos kx + C₂ sin kx.
Example 2.2.3: Find the solution of y″ − 6y′ + 13y = 0, y(0) = 0, y′(0) = 10.

The characteristic equation is r² − 6r + 13 = 0. By completing the square we get (r − 3)² + 2² = 0, and hence the roots are r = 3 ± 2i. By the theorem we have the general solution

y = C₁e^{3x} cos 2x + C₂e^{3x} sin 2x.

To find the solution satisfying the initial conditions, we first plug in zero to get

0 = y(0) = C₁e⁰ cos 0 + C₂e⁰ sin 0 = C₁.

Hence C₁ = 0 and so y = C₂e^{3x} sin 2x. We differentiate:

y′ = 3C₂e^{3x} sin 2x + 2C₂e^{3x} cos 2x.

We again plug in the initial condition and obtain 10 = y′(0) = 2C₂, or C₂ = 5. Hence the solution we are seeking is

y = 5e^{3x} sin 2x.
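As a check, the solution can be verified symbolically; a minimal sketch, assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    y = 5 * sp.exp(3*x) * sp.sin(2*x)
    print(sp.simplify(y.diff(x, 2) - 6*y.diff(x) + 13*y))  # 0, so it solves the ODE
    print(y.subs(x, 0), y.diff(x).subs(x, 0))              # 0 10, the initial conditions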
2.2.3 Exercises
Exercise 2.2.6: Find the general solution of 2y″ + 2y′ − 4y = 0.

Exercise 2.2.7: Find the general solution of y″ + 9y′ − 10y = 0.

Exercise 2.2.8: Solve y″ − 8y′ + 16y = 0 for y(0) = 2, y′(0) = 0.

Exercise 2.2.9: Solve y″ + 9y′ = 0 for y(0) = 1, y′(0) = 1.

Exercise 2.2.10: Find the general solution of 2y″ + 50y = 0.

Exercise 2.2.11: Find the general solution of y″ + 6y′ + 13y = 0.
Exercise 2.2.12: Find the general solution of y″ = 0 using the methods of this section.

Exercise 2.2.13: The method of this section applies to equations of other orders than two. We will see higher orders later. Try to solve the first order equation 2y′ + 3y = 0 using the methods of this section.
Exercise 2.2.14: Let us revisit Euler's equations of Exercise 2.1.6 on page 50. Suppose now that (b − a)² − 4ac < 0. Find a formula for the general solution of ax²y″ + bxy′ + cy = 0. Hint: Note that x^r = e^{r ln x}.
2.3 Higher order linear ODEs
Note: 2 lectures, §3.2 and §3.3 in EP
In general, most equations that appear in applications tend to be second order. Higher order
equations do appear from time to time, but it is a general assumption of modern physics that the
world is “second order.”
The basic results about linear ODEs of higher order are essentially exactly the same as for
second order equations with 2 replaced by n. The important new concept here is the concept of
linear independence. This concept is used in many other areas of mathematics and even other
places in this course, and it is useful to understand this in detail.
For constant coefficient ODEs, the methods are slightly harder, but we will not dwell on these.
You can always use the methods for systems of linear equations we will learn later in the course to
solve higher order constant coefficient equations.
So let us start with a general homogeneous linear equation

y^{(n)} + p_{n−1}(x)y^{(n−1)} + ⋯ + p₁(x)y′ + p₀(x)y = 0.    (2.4)
Theorem 2.3.1 (Superposition). Suppose y₁, y₂, . . . , yₙ are solutions of the homogeneous equation (2.4). Then

y(x) = C₁y₁(x) + C₂y₂(x) + ⋯ + Cₙyₙ(x)

also solves (2.4) for arbitrary constants C₁, . . . , Cₙ.
We also have the existence and uniqueness theorem for nonhomogeneous linear equations.
Theorem 2.3.2 (Existence and uniqueness). Suppose p₀ through p_{n−1} and f are continuous functions and a, b₀, b₁, . . . , b_{n−1} are constants. The equation

y^{(n)} + p_{n−1}(x)y^{(n−1)} + ⋯ + p₁(x)y′ + p₀(x)y = f(x)

has exactly one solution y(x) satisfying the initial conditions

y(a) = b₀,    y′(a) = b₁,    . . . ,    y^{(n−1)}(a) = b_{n−1}.
2.3.1 Linear independence
When we had two functions y₁ and y₂, we said they were linearly independent if one was not a multiple of the other. The same idea holds for n functions, in which case it is easier to state as follows. The functions y₁, y₂, . . . , yₙ are linearly independent if

c₁y₁ + c₂y₂ + ⋯ + cₙyₙ = 0

has only the trivial solution c₁ = c₂ = ⋯ = cₙ = 0. If we can write the equation with some nonzero constant, say c₁ ≠ 0, then we can solve for y₁ as a linear combination of the others. If the functions are not linearly independent, we say they are linearly dependent.
Example 2.3.1: Show that e^x, e^{2x}, e^{3x} are linearly independent.

Let us give several ways to show this fact. Most textbooks (including [EP] and [F]) introduce Wronskians, but that is really not necessary here.
Let us write down

c₁e^x + c₂e^{2x} + c₃e^{3x} = 0.

We use rules of exponentials and write z = e^x. Then we have

c₁z + c₂z² + c₃z³ = 0.

The left hand side is a third degree polynomial in z. It can either be identically zero or have at most 3 zeros. Therefore, it is identically zero, c₁ = c₂ = c₃ = 0, and the functions are linearly independent.
Let us try another way. Write

c₁e^x + c₂e^{2x} + c₃e^{3x} = 0.

This equation has to hold for all x. What we could do is divide through by e^{3x} to get

c₁e^{−2x} + c₂e^{−x} + c₃ = 0.

This is true for all x; therefore, let x → ∞. After taking the limit we see that c₃ = 0. Hence our equation becomes

c₁e^x + c₂e^{2x} = 0.

Rinse, repeat!
How about yet another way. Write

c₁e^x + c₂e^{2x} + c₃e^{3x} = 0.

We could evaluate at several different x to get equations for c₁, c₂, and c₃. That might be a lot of computation. We can also take derivatives of both sides and then evaluate. Let us first divide by e^x for simplicity:

c₁ + c₂e^x + c₃e^{2x} = 0.

Set x = 0 to get the equation c₁ + c₂ + c₃ = 0. Now differentiate both sides:

c₂e^x + 2c₃e^{2x} = 0,

and set x = 0 to get c₂ + 2c₃ = 0. Finally divide by e^x again and differentiate to get 2c₃e^x = 0. It is clear that c₃ is zero. Then c₂ must be zero, as c₂ = −2c₃, and c₁ must be zero because c₁ + c₂ + c₃ = 0.
There is no one good way to do it. All of these methods are perfectly valid.
Example 2.3.2: On the other hand, the functions e^x, e^{−x}, and cosh x are linearly dependent. Simply apply the definition of the hyperbolic cosine:

cosh x = (e^x + e^{−x}) / 2.
2.3.2 Constant coefficient higher order ODEs
When we have a higher order constant coefficient homogeneous linear equation. The song and
dance is exactly the same as it was for second order. We just need to find more solutions. If the
equation is n
th
order we need to find n linearly independent solutions. It is best seen by example.
Example 2.3.3: Find the general solution to

y‴ − 3y″ − y′ + 3y = 0.    (2.5)

Try: y = e^{rx}. We plug in and get

r³e^{rx} − 3r²e^{rx} − re^{rx} + 3e^{rx} = 0.

We divide out by e^{rx}. Then

r³ − 3r² − r + 3 = 0.

The trick now is to find the roots. There is a formula for degree 3 and 4 polynomials, but it is very complicated. There is no formula for higher degree polynomials. That does not mean that the roots do not exist. There are always n roots for an nth degree polynomial. They might be repeated and they might be complex. Computers are pretty good at finding roots approximately for reasonable size polynomials.

The best place to start is to plot the polynomial and check where it is zero. Or you can try plugging in. Sometimes it is a good idea to just start plugging in numbers r = −2, −1, 0, 1, 2, . . . and see if you get a hit. There are also signs that you might have missed a root. For example, if you plug −2 into our polynomial you get −15. If you plug in 0 you get 3. That means there is a root between −2 and 0, because the sign changed.
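For instance, a computer can find the roots of our cubic numerically; a minimal sketch, assuming NumPy is available:

    import numpy as np

    # Coefficients of r^3 - 3r^2 - r + 3, highest degree first.
    print(np.roots([1, -3, -1, 3]))  # approximately 3, -1, and 1, in some order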
A good strategy at first is to look for the roots −1, 1, or 0; these are easy to see. When we check our polynomial we note that r₁ = −1 and r₂ = 1 are roots. The last root is then reasonably easy to find. The constant term in a monic polynomial is the product of the negations of all the roots, because r³ − 3r² − r + 3 = (r − r₁)(r − r₂)(r − r₃). In our case we see that

3 = (−r₁)(−r₂)(−r₃) = (1)(−1)(−r₃) = r₃.

You should check that r₃ = 3 really is a root. Hence we know that e^{−x}, e^x, and e^{3x} are solutions to (2.5). They are linearly independent, as can easily be checked, and there are 3 of them, which happens to be exactly the number we need. Hence the general solution is

y = C₁e^{−x} + C₂e^x + C₃e^{3x}.
Suppose we were given some initial conditions y(0) = 1, y′(0) = 2, and y″(0) = 3. This leads to

1 = y(0) = C₁ + C₂ + C₃,
2 = y′(0) = −C₁ + C₂ + 3C₃,
3 = y″(0) = C₁ + C₂ + 9C₃.
It is possible to find the solution by high school algebra, but it would be a pain. The only sensible way to solve a system of equations such as this is to use matrix algebra, see § 3.2. For now we note that the solution is C₁ = −1/4, C₂ = 1, and C₃ = 1/4. With this, the specific solution is

y = −(1/4)e^{−x} + e^x + (1/4)e^{3x}.
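Until we get to § 3.2, a computer can do the matrix algebra for us; a minimal sketch, assuming NumPy is available (the variable names are our own):

    import numpy as np

    # The system for C1, C2, C3 coming from the initial conditions.
    A = np.array([[1.0, 1.0, 1.0],
                  [-1.0, 1.0, 3.0],
                  [1.0, 1.0, 9.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(np.linalg.solve(A, b))  # [-0.25  1.    0.25]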
Next, suppose that we have real roots, but they are repeated. Let us say we have a root r repeated k times. In this case, in the spirit of the second order solution, we note the solutions

e^{rx}, xe^{rx}, x²e^{rx}, . . . , x^{k−1}e^{rx}.
We take a linear combination of these solutions to find the general solution.
Example 2.3.4: Solve

y^{(4)} − 3y‴ + 3y″ − y′ = 0.

We note that the characteristic equation is

r⁴ − 3r³ + 3r² − r = 0.

By inspection we note that r⁴ − 3r³ + 3r² − r = r(r − 1)³. Hence the roots given with multiplicity are r = 0, 1, 1, 1. Thus the general solution is

y = (c₁ + c₂x + c₃x²) e^x + c₄,

where the terms in parentheses come from the root r = 1 and the constant c₄ comes from the root r = 0.
Similarly to the second order case, we can handle complex roots, and we really only need to talk about how to handle repeated complex roots. Complex roots always come in pairs, r = α ± iβ. If the pair is repeated k times, the corresponding solution is

(c₀ + c₁x + ⋯ + c_{k−1}x^{k−1}) e^{αx} cos βx + (d₀ + d₁x + ⋯ + d_{k−1}x^{k−1}) e^{αx} sin βx,

where c₀, . . . , c_{k−1}, d₀, . . . , d_{k−1} are arbitrary constants.
Example 2.3.5: Solve

y^{(4)} − 4y‴ + 8y″ − 8y′ + 4y = 0.

The characteristic equation is

r⁴ − 4r³ + 8r² − 8r + 4 = 0,
(r² − 2r + 2)² = 0,
((r − 1)² + 1)² = 0.

Hence the roots are 1 ± i, with multiplicity 2. Hence the general solution is

y = (c₀ + c₁x) e^x cos x + (d₀ + d₁x) e^x sin x.

The way we solved the characteristic equation above is really by guessing or by inspection. It is not so easy in general. We could also have asked a computer or an advanced calculator for the roots.
2.3.3 Exercises
Exercise 2.3.1: Find the general solution for y‴ − y″ + y′ − y = 0.

Exercise 2.3.2: Find the general solution for y^{(4)} − 5y‴ + 6y″ = 0.

Exercise 2.3.3: Find the general solution for y‴ + 2y″ + 2y′ = 0.
Exercise 2.3.4: Suppose that the characteristic equation for an equation is (r − 1)²(r − 2)² = 0. a) Find such an equation. b) Find its general solution.
Exercise 2.3.5: Suppose that a fourth order equation has the following solution: y = 2e^{4x}x cos x. a) Find such an equation. b) Find the initial conditions which the given solution satisfies.
Exercise 2.3.6: Find the general solution for the equation of Exercise 2.3.5.
Exercise 2.3.7: Let f(x) = e^x − cos x, g(x) = e^x + cos x, and h(x) = cos x. Are f(x), g(x), and h(x) linearly independent? If so, show it; if not, find the linear combination that works.
Exercise 2.3.8: Let f(x) = 0, g(x) = cos x, and h(x) = sin x. Are f(x), g(x), and h(x) linearly independent? If so, show it; if not, find the linear combination that works.
Exercise 2.3.9: Are x, x², and x⁴ linearly independent? If so, show it; if not, find the linear combination that works.

Exercise 2.3.10: Are e^x, xe^x, and x²e^x linearly independent? If so, show it; if not, find the linear combination that works.
2.4 Mechanical vibrations
Note: 2 lectures, §3.4 in EP
We want to look at some applications of linear second order constant coefficient equations.
2.4.1 Some examples
Our first example is a mass on a spring. Suppose we have a mass m > 0 (in kilograms for instance) connected by a spring with spring constant k > 0 (in Newtons per meter perhaps) to a fixed wall. Furthermore, there is some external force F(t) acting on the mass. Finally, there is some friction in the system, and this is measured by a constant c ≥ 0.
Let x be the displacement of the mass (x = 0 is the rest position), with x growing to the right (away from the wall). The force exerted by the spring is proportional to the compression of the spring by Hooke's law. Therefore, it is kx in the negative direction. Similarly the amount of force exerted by friction is proportional to the velocity of the mass. By Newton's second law we know that force equals mass times acceleration, and hence

mx″ + cx′ + kx = F(t).
This is a linear second order constant coefficient ODE. We set up some terminology about this equation. We say the motion is

(i) forced, if F ≢ 0 (F is not identically zero),
(ii) unforced or free, if F ≡ 0,
(iii) damped, if c > 0, and
(iv) undamped, if c = 0.
This system appears in lots of applications, even if it does not at first seem like it. Many real world scenarios can be simplified to a mass on a spring. For example, a bungee jump setup is essentially a spring and mass system (you are the mass). It would be good if someone did the math before you jump off, right? Let us give two other examples.
Here is an example for electrical engineers. Suppose that you have the pictured RLC circuit. There is a resistor with a resistance of R ohms, an inductor with an inductance of L henries, and a capacitor with a capacitance of C farads. There is also an electric source (such as a battery) giving a voltage of E(t) volts at time t (measured in seconds). Let Q(t) be the charge in coulombs on the capacitor and I(t) be the current in the circuit. The relation between the two is
Q′ = I. Furthermore, by elementary principles we have that LI′ + RI + Q/C = E. If we differentiate, we get

LI″(t) + RI′(t) + (1/C)I(t) = E′(t).
This is a nonhomogeneous second order constant coefficient linear equation. Further, as L, R, and C are all positive, this system behaves just like the mass and spring system. The position of the mass is replaced by the current. Mass is replaced by the inductance, damping is replaced by resistance, and the spring constant is replaced by one over the capacitance. The change in voltage becomes the forcing function. Hence for constant voltage this is an unforced motion.
Our next example is going to behave like a mass and spring system only approximately. Suppose we have a mass m on a pendulum of length L. We wish to find an equation for the angle θ(t). Let g be the force of gravity. Elementary physics mandates that the equation is of the form

θ″ + (g/L) sin θ = 0.
This equation can be derived using Newton's second law, where force equals mass times acceleration. The acceleration is Lθ″ and the mass is m. So mLθ″ has to be equal to the tangential component of the force given by gravity. This is mg sin θ in the opposite direction. The m curiously cancels from the equation.

Now we make our approximation. For small θ we have that approximately sin θ ≈ θ. This can be seen by looking at the graph. In Figure 2.1 we can see that for approximately −0.5 < θ < 0.5 (in radians) the graphs of sin θ and θ are almost the same.
Figure 2.1: The graphs of sin θ and θ (in radians).
Therefore, when the swings are small, θ is always small and we can model the behavior by the simpler linear equation

θ″ + (g/L) θ = 0.
Note that the errors that we get from the approximation build up, so over a very long time the behavior might change more substantially. Also we will see that in a mass spring system, the amplitude is independent of the period; this is not true for a pendulum. But for reasonably short periods of time and small swings (for example, if the length of the pendulum is very large), the behavior is reasonably close.
In real world problems it is very often necessary to make these types of simplifications. Therefore, it is good to understand both the mathematics and the physics of the situation to see if the simplification is valid in the context of the questions we are trying to answer.
2.4.2 Free undamped motion
In this section we will only consider free or unforced motion, as we cannot yet solve nonhomogeneous equations. Let us first start with undamped motion, where c = 0. We have the equation

mx″ + kx = 0.

If we divide out by m and let ω₀ be a number such that ω₀² = k/m, we can write the equation as

x″ + ω₀²x = 0.
The general solution to this equation is

x(t) = A cos ω₀t + B sin ω₀t.

First we notice that, by a trigonometric identity, for two other constants C and γ we have

A cos ω₀t + B sin ω₀t = C cos(ω₀t − γ).
It is not hard to compute that C = √(A² + B²) and tan γ = B/A. Therefore, we can write x(t) = C cos(ω₀t − γ), and let C and γ be our arbitrary constants.
Exercise 2.4.1: Justify this identity and verify the equations for C and γ.
While it is generally easier to use the first form with A and B to find these constants given the initial conditions, the second form is much more natural. The constants C and γ have a very nice interpretation. If we look at the form of the solution

x(t) = C cos(ω₀t − γ),
we can see that the amplitude is C, ω₀ is the (angular) frequency, and γ is the so-called phase shift, which just shifts the graph left or right. We call ω₀ the natural (angular) frequency. This type of motion is usually called simple harmonic motion.
A note about the word angular before the frequency: ω₀ is given in radians per unit time, not in cycles per unit time, which is the usual measure of frequency. Because we know one cycle is 2π radians, the usual frequency is given by ω₀/(2π). It is simply a matter of where we put the constant 2π, and that is a matter of taste.

The period of the motion is one over the frequency (in cycles per unit time) and hence 2π/ω₀. That is the amount of time it takes to complete one full oscillation.
Example 2.4.1: Suppose that m = 2 kg and k = 8 N/m. Suppose the whole setup is on a truck which was travelling at 1 m/s and suddenly crashes and hence stops. The mass was rigged 0.5 meters forward from the rest position, gets loose in the crash, and starts oscillating. What is the frequency of the resulting oscillation and what is the amplitude? The units are the mks units (meters-kilograms-seconds).

Well, the setup means that the mass was at half a meter in the positive direction during the crash, and relative to the wall the spring is mounted to, the mass was moving forward (in the positive direction) at 1 m/s. This gives us the initial conditions.

So the equation with initial conditions is

2x″ + 8x = 0,    x(0) = 0.5,    x′(0) = 1.

We can directly compute ω₀ = √(k/m) = √4 = 2. Hence the angular frequency is 2. The usual frequency in Hertz (cycles per second) is 2/(2π) = 1/π ≈ 0.318.
The general solution is

x(t) = A cos 2t + B sin 2t.

Letting x(0) = 0.5 means A = 0.5. Then x′(t) = −sin 2t + 2B cos 2t, and letting x′(0) = 1 we get B = 0.5. Therefore, the amplitude is C = √(A² + B²) = √0.5 ≈ 0.707. The solution is

x(t) = 0.5 cos 2t + 0.5 sin 2t.

A plot is shown in Figure 2.2 on the following page.
For the free undamped motion, a solution of the form

x(t) = A cos ω₀t + B sin ω₀t

corresponds to the initial conditions x(0) = A and x′(0) = ω₀B. This makes it much easier to figure out A and B than the amplitude and phase shift. In the example, we have already found the amplitude C. Let us compute the phase shift. We know that tan γ = B/A = 1. We take the arctangent of 1 and get π/4, approximately 0.785. Unfortunately, if you remember, we still need to check if this γ is in the right quadrant. Since both B and A are positive, γ should be in the first quadrant, and 0.785 radians really is in the first quadrant.

Figure 2.2: Simple undamped oscillation.
Note: Many calculators and computer software packages have not only the atan function for arctangent, but also what is sometimes called atan2. This function takes two arguments, B and A, and returns a γ in the correct quadrant for you.
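For instance, a minimal sketch in Python (our own snippet, using the values from the example above):

    import math

    A, B = 0.5, 0.5              # from the example above
    C = math.hypot(A, B)         # amplitude, sqrt(A^2 + B^2)
    gamma = math.atan2(B, A)     # phase shift, already in the correct quadrant
    print(C, gamma)              # 0.7071... and 0.7853... (that is, pi/4)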
2.4.3 Free damped motion
Let us now focus on damped motion. Let us rewrite the equation

mx″ + cx′ + kx = 0

as

x″ + 2px′ + ω₀²x = 0,

where

ω₀ = √(k/m),    p = c/(2m).
The characteristic equation is

r² + 2pr + ω₀² = 0.

Using the quadratic formula we get that the roots are

r = −p ± √(p² − ω₀²).
The form of the solution depends on whether we get complex or real roots, and this depends on the sign of

p² − ω₀² = (c/(2m))² − k/m = (c² − 4km)/(4m²).
The sign of p² − ω₀² is the same as the sign of c² − 4km.
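So classifying the motion requires only one sign test; a minimal sketch in Python (our own helper, not from the text):

    def damping_type(m, c, k):
        """Classify free damped motion by the sign of c^2 - 4km."""
        d = c*c - 4*k*m
        if d > 0:
            return "overdamped"
        if d == 0:
            return "critically damped"
        return "underdamped"

    print(damping_type(1, 2, 5))  # underdamped, since 4 - 20 < 0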
Overdamping
When c² − 4km > 0, we say the system is overdamped. In this case, there are two distinct real roots r₁ and r₂. Notice that both are negative: √(p² − ω₀²) is always less than p, so −p ± √(p² − ω₀²) is always negative.
Hence the solution is

x(t) = C₁e^{r₁t} + C₂e^{r₂t}.

Figure 2.3: Overdamped motion for several different initial conditions.
Note that since r₁, r₂ are negative, x(t) → 0 as t → ∞. This means that the mass will just tend towards the rest position as time goes to infinity. For a few sample plots for different initial conditions, see Figure 2.3.
Do note that no oscillation happens. In fact, the graph will cross the x axis at most once. To see this fact, we try to solve 0 = C₁e^{r₁t} + C₂e^{r₂t}. So C₁e^{r₁t} = −C₂e^{r₂t}, and hence

−C₁/C₂ = e^{(r₂ − r₁)t}.

This equation has at most one solution t ≥ 0.
Example 2.4.2: Suppose the mass is released from rest. That is, x(0) = x₀ and x′(0) = 0. Then

x(t) = (x₀/(r₁ − r₂)) (r₁e^{r₂t} − r₂e^{r₁t}).

It is not hard to see that this satisfies the initial conditions.
Critical damping
When c² − 4km = 0, we say the system is critically damped. In this case, there is one root of multiplicity 2, and this root is −p. Therefore, our solution is

x(t) = C₁e^{−pt} + C₂te^{−pt}.

The behavior of a critically damped system is very similar to an overdamped system. After all, a critically damped system is in some sense a limit of overdamped systems. Since these equations are really only an approximation to the real world, in reality we are never critically damped; it is only a place you can reach in theory. You are always a little bit underdamped or a little bit overdamped. It is better not to dwell on critical damping.
Underdamping
When c² − 4km < 0, we say the system is underdamped. In this case, the roots are complex:

r = −p ± √(p² − ω₀²) = −p ± √(−1) √(ω₀² − p²) = −p ± iω₁,

where ω₁ = √(ω₀² − p²). Our solution is

x(t) = e^{−pt}(A cos ω₁t + B sin ω₁t),

or

x(t) = Ce^{−pt} cos(ω₁t − γ).

An example plot is given in Figure 2.4. Note that we still have that x(t) → 0 as t → ∞.

Figure 2.4: Underdamped motion with the envelope curves shown.
In the figure we also show the envelope curves Ce^{−pt} and −Ce^{−pt}. The solution is the oscillating curve between the two envelope curves. The envelope curves give the maximum amplitude of the oscillation at any given point in time. For example, if you are bungee jumping, you are really interested in computing the envelope curve so that you do not hit the concrete with your head.

The phase shift γ just shifts the graph left or right, but within the envelope curves (the envelope curves do not change, of course, if γ changes).
Finally note that the angular pseudo-frequency (we do not call it a frequency, since the solution is not really a periodic function) ω₁ becomes smaller as the damping c (and hence p) becomes larger. This makes sense: if we keep increasing c, at some point the solution should start looking like the solution for critical damping or overdamping, which do not oscillate at all. When we change the damping just a little bit, we do not expect the behavior to change dramatically.

On the other hand, when c becomes smaller, ω₁ approaches ω₀ (it is always smaller than ω₀), and the solution looks more and more like the steady periodic motion of the undamped case. The envelope curves become flatter and flatter as p goes to 0.
2.4.4 Exercises
Exercise 2.4.2: Consider a mass and spring system with a mass m = 2, spring constant k = 3,
and damping constant c = 1. a) Set up and find the general solution of the system. b) Is the system
underdamped, overdamped or critically damped? c) If the system is not critically damped, find a c
which makes the system critically damped.
Exercise 2.4.3: Do Exercise 2.4.2 for m = 3, k = 12, and c = 12.
Exercise 2.4.4: Using the mks units (meters-kilograms-seconds), suppose you have a spring with spring constant 4 N/m. You want to use it to weigh items. Assume no friction. You place the mass on the spring and put it in motion. a) You count and find that the frequency is 0.8 Hz (cycles per second). What is the mass? b) Find a formula for the mass m given the frequency ω in Hz.
Exercise 2.4.5: Suppose we add possible friction to Exercise 2.4.4. Further, suppose you do not
know the spring constant, but you have two reference weights 1 kg and 2 kg to calibrate your
setup. You put each in motion on your spring and measure the frequency. For the 1 kg weight you
measured 0.8 Hz, for the 2 kg weight you measured 0.39 Hz. a) Find k (spring constant) and c
(damping constant). b) Find a formula for the mass in terms of the frequency in Hz. c) For an
unknown mass you measured 0.2 Hz, what is the weight?
2.5 Nonhomogeneous equations
Note: 2 lectures, §3.5 in EP
2.5.1 Solving nonhomogeneous equations
You have seen how to solve linear constant coefficient homogeneous equations. Now suppose that we drop the requirement of homogeneity. This usually corresponds to some outside input to the system we are trying to model, like the forcing function for the mechanical vibrations of the last section. That is, we have an equation such as

y″ + 5y′ + 6y = 2x + 1.    (2.6)

Note that we still say this is a constant coefficient equation. We only require constants in front of y″, y′, and y.
We will generally write Ly = 2x + 1 instead, when the operator itself is not important. The way we solve (2.6) is as follows. We find the general solution y_c to the associated homogeneous equation

y″ + 5y′ + 6y = 0.    (2.7)

We also find a single particular solution y_p to (2.6) in some way, and then we know that

y = y_c + y_p

is the general solution to (2.6). We call y_c the complementary solution.
Note that y_p can be any particular solution. Suppose you find a different particular solution ỹ_p. Write the difference w = y_p − ỹ_p. Then plug w into the left hand side of the equation to get

w″ + 5w′ + 6w = (y_p″ + 5y_p′ + 6y_p) − (ỹ_p″ + 5ỹ_p′ + 6ỹ_p) = (2x + 1) − (2x + 1) = 0.
In other words, using the operator notation the calculation becomes simpler: as L is a linear operator, we just write

Lw = L(y_p − ỹ_p) = Ly_p − Lỹ_p = (2x + 1) − (2x + 1) = 0.
So w = y_p − ỹ_p is a solution to (2.7). So any two solutions of (2.6) differ by a solution to the homogeneous equation (2.7). The solution y = y_c + y_p includes all solutions to (2.6), since y_c is the general solution to the homogeneous equation.
The moral of the story is that you can find the particular solution in any old way. You might find a different one by a different method (or by guessing) and still get the right general solution to the whole problem, even if it looks different and the constants you have to choose given the initial conditions are different.
2.5.2 Undetermined coefficients
So the trick is to somehow, in a smart way, guess a solution to (2.6). Note that 2x + 1 is a polynomial, and the left hand side of the equation will be a polynomial if we let y be a polynomial of the same degree. So we will try

y = Ax + B.

We plug in to obtain

y″ + 5y′ + 6y = (Ax + B)″ + 5(Ax + B)′ + 6(Ax + B) = 0 + 5A + 6Ax + 6B = 6Ax + (5A + 6B).

So 6Ax + (5A + 6B) = 2x + 1. Hence A = 1/3 and B = −1/9, and so y_p = (1/3)x − 1/9 = (3x − 1)/9. Solving the complementary problem we get (Exercise!)

y_c = C₁e^{−2x} + C₂e^{−3x}.

Hence the general solution to (2.6) is

y = C₁e^{−2x} + C₂e^{−3x} + (3x − 1)/9.
Now suppose we are further given some initial conditions y(0) = 0 and y′(0) = 1/3. First find

y′ = −2C₁e^{−2x} − 3C₂e^{−3x} + 1/3.

Then

0 = y(0) = C₁ + C₂ − 1/9,
1/3 = y′(0) = −2C₁ − 3C₂ + 1/3.

We solve to get C₁ = 1/3 and C₂ = −2/9. Hence our solution is

y(x) = (1/3)e^{−2x} − (2/9)e^{−3x} + (3x − 1)/9 = (3e^{−2x} − 2e^{−3x} + 3x − 1)/9.
Exercise 2.5.1: Check that y really solves the equation.
Note: A common mistake is to solve for the constants using the initial conditions with y_c alone, and only adding the particular solution y_p after that. That will not work. You need to first compute y = y_c + y_p and only then solve for the constants using the initial conditions.
Similarly, a right hand side consisting of exponentials or sines and cosines can be handled. For example:

y″ + 2y′ + 2y = cos 2x.

Let us just find y_p in this case. We notice that we may have to also guess sin 2x, since derivatives of cosine are sines. So we guess

y = A cos 2x + B sin 2x.
Plug in to the equation and we get

−4A cos 2x − 4B sin 2x − 4A sin 2x + 4B cos 2x + 2A cos 2x + 2B sin 2x = cos 2x.

Since the left hand side must equal the right hand side, we group terms and get −4A + 4B + 2A = 1 and −4B − 4A + 2B = 0. So −2A + 4B = 1 and 2A + B = 0, and hence A = −1/10 and B = 1/5. So

y_p = (−cos 2x + 2 sin 2x)/10.
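A quick symbolic check of this guess; a minimal sketch, assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    yp = (-sp.cos(2*x) + 2*sp.sin(2*x)) / 10
    # Should print 0, confirming yp'' + 2yp' + 2yp = cos 2x.
    print(sp.simplify(yp.diff(x, 2) + 2*yp.diff(x) + 2*yp - sp.cos(2*x)))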
In a similar way, if the right hand side contains exponentials, we guess exponentials. For example, if the equation is (where L is a linear constant coefficient operator)

Ly = e^{3x},

we will guess y = Ae^{3x}. We note also that using the product rule for differentiation gives us a way to combine these guesses. Really, if you can guess a form for y such that Ly has all the terms needed to make up the right hand side, that is a good place to start. For example, for

Ly = (1 + 3x²) e^{−x} cos πx

we will guess

y = (A + Bx + Cx²) e^{−x} cos πx + (D + Ex + Fx²) e^{−x} sin πx.

We will plug in and then hopefully get equations that we can solve for A, B, C, D, E, and F. As you can see, this can make for a very long and tedious calculation very quickly. C'est la vie!
There is one hiccup in all this. It could be that our guess actually solves the associated homogeneous equation. That is, suppose we have

y″ − 9y = e^{3x}.

We would love to guess y = Ae^{3x}, but if we plug this into the left hand side of the equation we get

y″ − 9y = 9Ae^{3x} − 9Ae^{3x} = 0 ≠ e^{3x}.

There is no way we can choose A to make the left hand side be e^{3x}. The trick in this case is to multiply our guess by x until we get rid of duplication with the complementary solution. That is, first we compute y_c (the solution to Ly = 0),

y_c = C₁e^{−3x} + C₂e^{3x},

and we note that the e^{3x} term is a duplicate of our desired guess. We modify our guess to y = Axe^{3x} and notice there is no more duplication. Now we can go forward and try it. Note that y′ = Ae^{3x} + 3Axe^{3x} and y″ = 6Ae^{3x} + 9Axe^{3x}. So

y″ − 9y = 6Ae^{3x} + 9Axe^{3x} − 9Axe^{3x} = 6Ae^{3x}.

This is supposed to equal e^{3x}, hence 6A = 1 and so A = 1/6. Thus we can now write the general solution as

y = y_c + y_p = C₁e^{−3x} + C₂e^{3x} + (1/6)xe^{3x}.
Now what about the case when multiplying by x does not get rid of duplication? For example,

y″ − 6y′ + 9y = e^{3x}.

Note that y_c = C₁e^{3x} + C₂xe^{3x}, so guessing y = Axe^{3x} would not get us anywhere. In this case you want to guess y = Ax²e^{3x}. Basically, you want to multiply your guess by x until all duplication is gone. But no more! Multiplying too many times will also make the process not work.
Finally, what if the right hand side is several terms, such as

Ly = e^{2x} + cos x?

In this case find u that solves Lu = e^{2x} and v that solves Lv = cos x (do each term separately). Then we note that if y = u + v, then Ly = e^{2x} + cos x. This is because L is linear; this is just superposition again. We have Ly = L(u + v) = Lu + Lv = e^{2x} + cos x.
See Edwards and Penney [EP] for more detailed and complete information on undetermined
coefficients.
2.5.3 Variation of parameters
It turns out that undetermined coefficients will work for many basic problems that crop up. But it does not work all the time. It only works when the right hand side of the equation Ly = f(x) has only finitely many linearly independent derivatives, so that you can write a guess that consists of them all. Some equations are a bit tougher. Consider

y″ + y = tan x.

Note that each new derivative of tan x looks completely different and cannot be written as a linear combination of the previous derivatives. We get sec²x, 2 sec²x tan x, etc.
This equation calls for a different method. We present the method of variation of parameters, which will handle any equation of the form Ly = f(x), provided we can solve certain integrals. For simplicity we will restrict ourselves to second order equations, but the method works for higher order equations just as well (the computations just become more tedious).

Let us try to solve the example

Ly = y″ + y = tan x.
First we find the complementary solution of Ly = 0. This is reasonably simple: we get y_c = C₁y₁ + C₂y₂, where y₁ = cos x and y₂ = sin x. To find a solution to the nonhomogeneous equation we try

y_p = y = u₁y₁ + u₂y₂,

where u₁ and u₂ are functions and not constants. We are trying to satisfy Ly = tan x. That gives us one condition on the functions u₁ and u₂. First compute (note the product rule!)

y′ = (u₁′y₁ + u₂′y₂) + (u₁y₁′ + u₂y₂′).
Since we can still impose at our will to simplify computations (we have two unknown functions,
so we are allowed two conditions), we impose that (u
·
1
y
1
+ u
·
2
y
2
) = 0. This makes computing the
second derivative easier.
y
·
= u
1
y
·
1
+ u
2
y
·
2
,
y
··
= (u
·
1
y
·
1
+ u
·
2
y
·
2
) + (u
1
y
··
1
+ u
2
y
··
2
).
Now since y
1
and y
2
are solutions to y
··
+ y = 0, we know that y
··
1
= −y
1
and y
··
2
= −y
2
. (Note: If
the equation was instead y
··
+ ay
·
+ by = 0 we would have y
··
i
= −ay
·
i
− by
i
.)
So
y
··
= (u
·
1
y
·
1
+ u
·
2
y
·
2
) − (u
1
y
1
+ u
2
y
2
).
Now note that
y
··
= (u
·
1
y
·
1
+ u
·
2
y
·
2
) − y,
and hence
y
··
+ y = Ly = u
·
1
y
·
1
+ u
·
2
y
·
2
.
For y to satisfy Ly = f (x) we must have f (x) = u
·
1
y
·
1
+ u
·
2
y
·
2
.
So what we need to solve are the two equations (conditions) we imposed on $u_1$ and $u_2$:
$$u_1' y_1 + u_2' y_2 = 0,$$
$$u_1' y_1' + u_2' y_2' = f(x).$$
We can now solve for $u_1'$ and $u_2'$ in terms of $f(x)$, $y_1$, and $y_2$. You will always get these formulas for any $Ly = f(x)$. There is a general formula for the solution you can just plug into, but it is better to just repeat what we do below. In our case the two equations become
$$u_1' \cos x + u_2' \sin x = 0,$$
$$-u_1' \sin x + u_2' \cos x = \tan x.$$
Hence
$$u_1' \cos x \sin x + u_2' \sin^2 x = 0,$$
$$-u_1' \sin x \cos x + u_2' \cos^2 x = \tan x \cos x = \sin x.$$
And thus
$$u_2' (\sin^2 x + \cos^2 x) = \sin x,$$
$$u_2' = \sin x,$$
$$u_1' = \frac{-\sin^2 x}{\cos x} = -\tan x \sin x.$$
Now we need to integrate $u_1'$ and $u_2'$ to get $u_1$ and $u_2$:
$$u_1 = \int u_1'\,dx = \int -\tan x \sin x \,dx = \frac{1}{2} \ln \left| \frac{\sin x - 1}{\sin x + 1} \right| + \sin x,$$
$$u_2 = \int u_2'\,dx = \int \sin x \,dx = -\cos x.$$
So our particular solution is
$$y_p = u_1 y_1 + u_2 y_2 = \frac{1}{2} \cos x \ln \left| \frac{\sin x - 1}{\sin x + 1} \right| + \cos x \sin x - \cos x \sin x = \frac{1}{2} \cos x \ln \left| \frac{\sin x - 1}{\sin x + 1} \right|.$$
The general solution to $y'' + y = \tan x$ is
$$y = C_1 \cos x + C_2 \sin x + \frac{1}{2} \cos x \ln \left| \frac{\sin x - 1}{\sin x + 1} \right|.$$
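As a check, the two conditions we imposed can be fed to a computer algebra system. A short SymPy sketch (illustrative; it uses the Wronskian-based "general formula" mentioned above rather than repeating the elimination by hand):

```python
import sympy as sp

x = sp.symbols('x')
y1, y2, f = sp.cos(x), sp.sin(x), sp.tan(x)

# Solve the imposed conditions:
#   u1'*y1 + u2'*y2  = 0
#   u1'*y1' + u2'*y2' = f(x)
W = y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x)  # Wronskian (here = 1)
u1p = -y2 * f / W
u2p = y1 * f / W

yp = sp.integrate(u1p, x) * y1 + sp.integrate(u2p, x) * y2
# Check that yp'' + yp = tan x
print(sp.simplify(sp.diff(yp, x, 2) + yp - sp.tan(x)))  # 0 (after simplification)
```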
2.5.4 Exercises
Exercise 2.5.2: Find a particular solution of $y'' - y' - 6y = e^{2x}$.
Exercise 2.5.3: Find a particular solution of $y'' - 4y' + 4y = e^{2x}$.
Exercise 2.5.4: Solve the initial value problem $y'' + 9y = \cos 3x + \sin 3x$ for $y(0) = 2$, $y'(0) = 1$.
Exercise 2.5.5: Set up the form of the particular solution but do not solve for the coefficients for $y^{(4)} - 2y''' + y'' = e^x$.
Exercise 2.5.6: Set up the form of the particular solution but do not solve for the coefficients for $y^{(4)} - 2y''' + y'' = e^x + x + \sin x$.
Exercise 2.5.7: a) Using variation of parameters find a particular solution of $y'' - 2y' + y = e^x$. b) Find a particular solution using undetermined coefficients. c) Are the two solutions you found the same? What is going on?
Exercise 2.5.8: Find a particular solution of $y'' - 2y' + y = \sin(x^2)$. It is OK to leave the answer as a definite integral.
2.6 Forced oscillations and resonance
Note: 2 lectures, §3.6 in EP
Before reading the lecture, it may be good to first try Project III from the IODE website:
http://www.math.uiuc.edu/iode/.
Let us return to the mass on a spring example. We now consider the case of forced oscillations. That is, we consider the equation
$$mx'' + cx' + kx = F(t)$$
for some nonzero $F(t)$. In the mass on a spring example, the setup is again: $m$ is mass, $c$ is friction, $k$ is the spring constant, and $F(t)$ is an external force acting on the mass.
[Diagram: a mass $m$ attached to a spring with constant $k$ and a damper with damping $c$, driven by an external force $F(t)$.]
Usually we are interested in some periodic forcing, such as noncentered rotating parts, or perhaps even loud sounds or other sources of periodic force. Once we learn about Fourier series, we will see that we essentially cover every type of periodic function by considering $F(t) = F_0 \cos \omega t$ (or sine instead of cosine; the calculations are essentially the same).
2.6.1 Undamped forced motion and resonance
First let us consider undamped ($c = 0$) motion, as this is simpler. We have the equation
$$mx'' + kx = F_0 \cos \omega t.$$
This has the complementary solution (solution to the associated homogeneous equation)
$$x_c = C_1 \cos \omega_0 t + C_2 \sin \omega_0 t,$$
where $\omega_0 = \sqrt{\frac{k}{m}}$ is said to be the natural (angular) frequency. It is essentially the frequency at which the system "wants to oscillate" without external interference.
Let us suppose that $\omega_0 \neq \omega$. Now try the solution $x_p = A \cos \omega t$ and solve for $A$. Note that we need not have a sine in our trial solution, since on the left hand side we will only get cosines anyway. If you include a sine it is fine; you will find that its coefficient will be zero (I cannot find a rhyme).
So we solve as in the method of undetermined coefficients with the guess above, and we find that
$$x_p = \frac{F_0}{m(\omega_0^2 - \omega^2)} \cos \omega t.$$
We leave it as an exercise to do the algebra required here.
The general solution is
$$x = C_1 \cos \omega_0 t + C_2 \sin \omega_0 t + \frac{F_0}{m(\omega_0^2 - \omega^2)} \cos \omega t,$$
or written another way,
$$x = C \cos(\omega_0 t - \gamma) + \frac{F_0}{m(\omega_0^2 - \omega^2)} \cos \omega t.$$
Hence it is a superposition of two cosine waves at different frequencies.
Example 2.6.1: Suppose
$$0.5\, x'' + 8x = 10 \cos \pi t,$$
and let us suppose that $x(0) = 0$ and $x'(0) = 0$.
Let us compute. First we read off the parameters: $\omega = \pi$, $\omega_0 = \sqrt{8/0.5} = 4$, $F_0 = 10$, $m = 0.5$. So the general solution is
$$x = C_1 \cos 4t + C_2 \sin 4t + \frac{20}{16 - \pi^2} \cos \pi t.$$
Now solve for $C_1$ and $C_2$ using the initial conditions. It is easy to see that $C_2 = 0$ and $C_1 = \frac{-20}{16 - \pi^2}$. Hence
$$x = \frac{20}{16 - \pi^2} (\cos \pi t - \cos 4t).$$
Notice the “beating” behavior in Figure 2.5.
[Figure 2.5: Graph of $\frac{20}{16-\pi^2}(\cos \pi t - \cos 4t)$.]
First use the trigonometric identity
$$2 \sin\left(\frac{A - B}{2}\right) \sin\left(\frac{A + B}{2}\right) = \cos B - \cos A$$
to get that
$$x = \frac{20}{16 - \pi^2} \left( 2 \sin\left(\frac{4 - \pi}{2}\, t\right) \sin\left(\frac{4 + \pi}{2}\, t\right) \right).$$
Notice that $x$ is now a high frequency wave modulated by a low frequency wave.
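One can also confirm the closed form by integrating the equation numerically. A short sketch assuming NumPy and SciPy (neither is required by these notes):

```python
import numpy as np
from scipy.integrate import solve_ivp

# 0.5 x'' + 8 x = 10 cos(pi t), x(0) = 0, x'(0) = 0, as a first order system
def rhs(t, u):
    x, v = u
    return [v, (10 * np.cos(np.pi * t) - 8 * x) / 0.5]

t = np.linspace(0, 20, 2001)
sol = solve_ivp(rhs, [0, 20], [0, 0], t_eval=t, rtol=1e-9, atol=1e-9)

exact = 20 / (16 - np.pi**2) * (np.cos(np.pi * t) - np.cos(4 * t))
print(np.max(np.abs(sol.y[0] - exact)))  # small; the two curves agree
```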
Now suppose that $\omega_0 = \omega$. Obviously in this case we cannot try the solution $A \cos \omega t$ and use undetermined coefficients, since $\cos \omega t$ solves the homogeneous equation. Therefore, we need to try $x_p = At \cos \omega t + Bt \sin \omega t$. This time we do need the sine term, since two derivatives of $t \cos \omega t$ do contain sines. We write the equation
$$x'' + \omega^2 x = \frac{F_0}{m} \cos \omega t.$$
Plugging into the left hand side we get
$$2B\omega \cos \omega t - 2A\omega \sin \omega t = \frac{F_0}{m} \cos \omega t.$$
Hence $A = 0$ and $B = \frac{F_0}{2m\omega}$. Our particular solution is $\frac{F_0}{2m\omega}\, t \sin \omega t$, and our general solution is
$$x = C_1 \cos \omega t + C_2 \sin \omega t + \frac{F_0}{2m\omega}\, t \sin \omega t.$$
The important term is the last one (the particular solution we found). We can see that this term grows without bound as $t \to \infty$. In fact it oscillates between $\frac{F_0 t}{2m\omega}$ and $\frac{-F_0 t}{2m\omega}$. The first two terms only oscillate between $\pm\sqrt{C_1^2 + C_2^2}$, which becomes smaller and smaller in proportion to the oscillations of the last term as $t$ gets larger. In Figure 2.6 we see the graph with $C_1 = C_2 = 0$, $F_0 = 2$, $m = 1$, $\omega = \pi$.
[Figure 2.6: Graph of $\frac{1}{\pi}\, t \sin \pi t$.]
By forcing the system at just the right frequency we produce very wild oscillations. This kind of behavior is called resonance or sometimes pure resonance, and it is sometimes desired. For example, remember when as a kid you could start swinging by just moving back and forth on the swing seat at the correct "frequency"? You were trying to achieve resonance. The force of each one of your moves was small, but after a while it produced large swings.
On the other hand, resonance can be destructive. After an earthquake some buildings collapse while others may be relatively undamaged. This is due to different buildings having different resonance frequencies. So figuring out the resonance frequency can be very important.
A common (but wrong) example of the destructive force of resonance is the Tacoma Narrows bridge failure. It turns out there was an altogether different phenomenon at play there.*
* K. Billah and R. Scanlan, Resonance, Tacoma Narrows Bridge Failure, and Undergraduate Physics Textbooks, American Journal of Physics, 59(2), 1991, 118–124, http://www.ketchum.org/billah/Billah-Scanlan.pdf
2.6.2 Damped forced motion and practical resonance
Of course in real life things are not as simple as they were above. There is, of course, some damping. Our equation becomes
$$mx'' + cx' + kx = F_0 \cos \omega t, \qquad (2.8)$$
for some $c > 0$. We have solved the homogeneous problem before. We let
$$p = \frac{c}{2m}, \qquad \omega_0 = \sqrt{\frac{k}{m}}.$$
We replace equation (2.8) with
$$x'' + 2px' + \omega_0^2 x = \frac{F_0}{m} \cos \omega t.$$
We find that the roots of the characteristic equation of the associated homogeneous problem are $r_1, r_2 = -p \pm \sqrt{p^2 - \omega_0^2}$. The form of the general solution of the associated homogeneous equation depends on the sign of $p^2 - \omega_0^2$, or equivalently on the sign of $c^2 - 4km$, as we have seen before. That is,
$$x_c = \begin{cases} C_1 e^{r_1 t} + C_2 e^{r_2 t} & \text{if } c^2 > 4km, \\ C_1 e^{-pt} + C_2 t e^{-pt} & \text{if } c^2 = 4km, \\ e^{-pt}(C_1 \cos \omega_1 t + C_2 \sin \omega_1 t) & \text{if } c^2 < 4km. \end{cases}$$
Here $\omega_1 = \sqrt{\omega_0^2 - p^2}$. In any case, we can see that $x_c(t) \to 0$ as $t \to \infty$. Furthermore, there can be no conflicts when trying to solve for the undetermined coefficients by trying $x_p = A \cos \omega t + B \sin \omega t$. Let us plug in and solve for $A$ and $B$. We get (the tedious details are left to the reader)
$$\left((\omega_0^2 - \omega^2)B - 2\omega p A\right) \sin \omega t + \left((\omega_0^2 - \omega^2)A + 2\omega p B\right) \cos \omega t = \frac{F_0}{m} \cos \omega t.$$
We get that
$$A = \frac{(\omega_0^2 - \omega^2)F_0}{m(2\omega p)^2 + m(\omega_0^2 - \omega^2)^2}, \qquad B = \frac{2\omega p F_0}{m(2\omega p)^2 + m(\omega_0^2 - \omega^2)^2}.$$
We also compute $C = \sqrt{A^2 + B^2}$ to be
$$C = \frac{F_0}{m\sqrt{(2\omega p)^2 + (\omega_0^2 - \omega^2)^2}}.$$
Thus our particular solution is
$$x_p = \frac{(\omega_0^2 - \omega^2)F_0}{m(2\omega p)^2 + m(\omega_0^2 - \omega^2)^2} \cos \omega t + \frac{2\omega p F_0}{m(2\omega p)^2 + m(\omega_0^2 - \omega^2)^2} \sin \omega t.$$
Or, in the other notation, we have amplitude $C$ and phase shift $\gamma$, where (if $\omega \neq \omega_0$)
$$\tan \gamma = \frac{B}{A} = \frac{2\omega p}{\omega_0^2 - \omega^2}.$$
Hence we have
$$x_p = \frac{F_0}{m\sqrt{(2\omega p)^2 + (\omega_0^2 - \omega^2)^2}} \cos(\omega t - \gamma).$$
If $\omega = \omega_0$, we see that $A = 0$, $B = C = \frac{F_0}{2m\omega p}$, and $\gamma = \pi/2$.
The exact formula is not as important as the idea. You should not memorize the above formula,
you should remember the ideas involved. Even if you change the right hand side a little bit you
will get a different formula with different behavior. So there is no point in memorizing this specific
formula. You can always recompute it later or look it up if you really need it.
For reasons we will explain in a moment, we call $x_c$ the transient solution and denote it by $x_{tr}$, and we call the $x_p$ we found above the steady periodic solution and denote it by $x_{sp}$. The general solution to our problem is
$$x = x_c + x_p = x_{tr} + x_{sp}.$$
We note that $x_c = x_{tr}$ goes to zero as $t \to \infty$, as all its terms involve an exponential with a negative exponent. Hence for large $t$, the effect of $x_{tr}$ is negligible and we essentially only see $x_{sp}$. Notice that $x_{sp}$ involves no arbitrary constants, and the initial conditions only affect $x_{tr}$. This means that the effect of the initial conditions is negligible after some period of time. Hence the name transient. Because of this behavior, we might as well focus on the steady periodic solution and ignore the transient solution. See Figure 2.7 for a graph of different initial conditions.
[Figure 2.7: Solutions with different initial conditions for parameters $k = 1$, $m = 1$, $F_0 = 1$, $c = 0.7$, and $\omega = 1.1$.]
Notice that the speed at which $x_{tr}$ goes to zero depends on $p$ (and hence $c$). The bigger $p$ is (the bigger $c$ is), the "faster" $x_{tr}$ becomes negligible. So the smaller the damping, the longer the "transient region." This agrees with the observation that when $c = 0$, the initial conditions affect the behavior for all time (i.e. an infinite "transient region").
Let us describe what we mean by resonance when damping is present. Since there were no conflicts when solving with undetermined coefficients, there is no term that goes to infinity. What we look at instead is the maximum value of the amplitude of the steady periodic solution. Let $C$ be the amplitude of $x_{sp}$. If we plot $C$ as a function of $\omega$ (with all other parameters fixed), we can find its maximum. This maximum is said to be practical resonance (we call the $\omega$ that achieves this maximum the practical resonance frequency). A sample plot for three different values of $c$ is given in Figure 2.8. As you can see, the practical resonance amplitude grows as damping gets smaller, and practical resonance can disappear altogether when damping is large.
[Figure 2.8: Graph of $C(\omega)$ showing practical resonance with parameters $k = 1$, $m = 1$, $F_0 = 1$. The top line is with $c = 0.4$, the middle line with $c = 0.8$, and the bottom line with $c = 1.6$.]
To find the maximum, it turns out we need to find the derivative $C'(\omega)$. This is easily computed to be
$$C'(\omega) = \frac{-4\omega(2p^2 + \omega^2 - \omega_0^2)F_0}{m\left( (2\omega p)^2 + (\omega_0^2 - \omega^2)^2 \right)^{3/2}}.$$
This is zero either when $\omega = 0$ or when $2p^2 + \omega^2 - \omega_0^2 = 0$. In other words, when
$$\omega = \sqrt{\omega_0^2 - 2p^2} \quad \text{or} \quad \omega = 0.$$
It can be shown that if $\omega_0^2 - 2p^2$ is positive, then $\sqrt{\omega_0^2 - 2p^2}$ is the practical resonance frequency (that is, the point where $C(\omega)$ is maximal; note that in this case $C'(\omega) > 0$ for small $\omega$). If $\omega = 0$ is the maximum, then there is essentially no practical resonance, since we assume that $\omega > 0$ in our system. In this case the amplitude gets larger as the forcing frequency gets smaller.
If practical resonance occurs, the frequency is smaller than $\omega_0$. As the damping $c$ (and hence $p$) becomes smaller, the practical resonance frequency comes closer to $\omega_0$. So when damping is very small, $\omega_0$ is a good estimate of the resonance frequency. This behavior agrees with the observation that when $c = 0$, $\omega_0$ is the resonance frequency.
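A quick numerical check of this formula, as a small sketch assuming NumPy (illustrative only):

```python
import numpy as np

m, k, F0, c = 1.0, 1.0, 1.0, 0.4   # parameters as in Figure 2.8
p, w0 = c / (2 * m), np.sqrt(k / m)

def C(w):
    """Amplitude of the steady periodic solution."""
    return F0 / (m * np.sqrt((2 * w * p)**2 + (w0**2 - w**2)**2))

w = np.linspace(0.01, 3, 100000)
print(w[np.argmax(C(w))])         # numerical location of the maximum
print(np.sqrt(w0**2 - 2 * p**2))  # sqrt(w0^2 - 2p^2); the two agree
```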
The behavior will be more complicated if the forcing function is not an exact cosine wave, but
for example a square wave. It will be good to come back to this section once you have learned
about the Fourier series.
2.6.3 Exercises
Exercise 2.6.1: Derive a formula for $x_{sp}$ if the equation is $mx'' + cx' + kx = F_0 \sin \omega t$. Assume $c > 0$.
Exercise 2.6.2: Derive a formula for $x_{sp}$ if the equation is $mx'' + cx' + kx = F_0 \cos \omega t + F_1 \cos 3\omega t$. Assume $c > 0$.
Exercise 2.6.3: Take $mx'' + cx' + kx = F_0 \cos \omega t$. Fix $m > 0$ and $k > 0$. Now think of the function $C(\omega)$. For what values of $c$ (solve in terms of $m$, $k$, and $F_0$) will there be no practical resonance (that is, for what values of $c$ is there no maximum of $C(\omega)$ for $\omega > 0$)?
Exercise 2.6.4: Take $mx'' + cx' + kx = F_0 \cos \omega t$. Fix $c > 0$ and $k > 0$. Now think of the function $C(\omega)$. For what values of $m$ (solve in terms of $c$, $k$, and $F_0$) will there be no practical resonance (that is, for what values of $m$ is there no maximum of $C(\omega)$ for $\omega > 0$)?
Exercise 2.6.5: Suppose a water tower in an earthquake acts as a mass-spring system. Assume that the container on top is full and the water does not move around. The container then acts as a mass and the support acts as the spring, where the induced vibrations are horizontal. Suppose that the container with water has a mass of $m = 10{,}000$ kg. It takes a force of 1000 newtons to displace the container 1 meter. For simplicity assume no friction. Suppose that an earthquake induces an external force $F(t) = mA\omega^2 \cos \omega t$.
a) What is the natural frequency of the water tower?
b) If $\omega$ is not the natural frequency, find a formula for the amplitude of the resulting oscillations of the water container.
c) Suppose $A = 1$ and an earthquake with frequency 0.5 cycles per second comes. What is the amplitude of the oscillations? Suppose that if the water tower moves more than 1.5 meters, the tower collapses. Will the tower collapse?
Chapter 3
Systems of ODEs
3.1 Introduction to systems of ODEs
Note: 1 lecture, §4.1 in EP
Often we do not have just one dependent variable and one equation. And as we will see, we
may end up with systems of several equations and several dependent variables even if we start with
a single equation.
If we have several dependent variables, say $y_1, y_2, \ldots, y_n$, we can have a differential equation involving all of them and their derivatives. For example, $y_1'' = f(y_1', y_2', y_1, y_2, x)$. Usually, when we have two dependent variables we have two equations, such as
$$y_1'' = f_1(y_1', y_2', y_1, y_2, x),$$
$$y_2'' = f_2(y_1', y_2', y_1, y_2, x),$$
for some functions $f_1$ and $f_2$. We call the above a system of differential equations. More precisely, it is a second order system. Sometimes a system is easy to solve by solving for one variable and then for the second variable.
Example 3.1.1: Take the first order system
$$y_1' = y_1,$$
$$y_2' = y_1 - y_2,$$
with initial conditions of the form $y_1(0) = 1$, $y_2(0) = 2$.
We note that $y_1 = C_1 e^x$ is the general solution of the first equation. We can then plug this $y_1$ into the second equation and get the equation $y_2' = C_1 e^x - y_2$, which is a linear first order equation that is easily solved for $y_2$. By the method of integrating factor we get
$$e^x y_2 = \frac{C_1}{2} e^{2x} + C_2,$$
or $y_2 = \frac{C_1}{2} e^x + C_2 e^{-x}$. The general solution to the system is, therefore,
$$y_1 = C_1 e^x,$$
$$y_2 = \frac{C_1}{2} e^x + C_2 e^{-x}.$$
We can now solve for $C_1$ and $C_2$ given the initial conditions. We substitute $x = 0$ and find that $C_1 = 1$ and $C_2 = \frac{3}{2}$.
Generally, we will not be so lucky as to be able to solve as in the first example, and we will have to solve for all variables at once.
As an example application, let us think of mass and spring systems again. Suppose we have one spring with constant $k$ but two masses $m_1$ and $m_2$. We can think of the masses as carts, and we will suppose that they ride along with no friction. Let $x_1$ be the displacement of the first cart and $x_2$ be the displacement of the second cart. That is, we put the two carts somewhere with no tension on the spring, and we mark the position of the first and second cart and call those the zero positions. Thus $x_1 = 0$ is a different position on the floor than the position corresponding to $x_2 = 0$.
[Diagram: two carts of masses $m_1$ and $m_2$ connected by a spring with constant $k$.]
The force exerted by the spring on the first cart is $k(x_2 - x_1)$, since $x_2 - x_1$ is how far the spring is stretched (or compressed) from the rest position. The force exerted on the second cart is the opposite, thus the same thing with a negative sign. Using Newton's second law, force equals mass times acceleration:
$$m_1 x_1'' = k(x_2 - x_1),$$
$$m_2 x_2'' = -k(x_2 - x_1).$$
In this system we cannot solve for the $x_1$ variable separately. That we must solve for both $x_1$ and $x_2$ at once is intuitively obvious, since where the first cart goes depends exactly on where the second cart goes and vice versa.
Before we talk about how to handle systems, let us note that in some sense we need only consider first order systems. Take an $n$th order differential equation
$$y^{(n)} = F(y^{(n-1)}, \ldots, y', y, x).$$
Define new variables $u_1, \ldots, u_n$ and write the system
$$u_1' = u_2,$$
$$u_2' = u_3,$$
$$\vdots$$
$$u_{n-1}' = u_n,$$
$$u_n' = F(u_n, u_{n-1}, \ldots, u_2, u_1, x).$$
Now try to solve this system for $u_1, u_2, \ldots, u_n$. Once you have solved for the $u$'s, you can discard $u_2$ through $u_n$ and let $y = u_1$. This $y$ solves the original equation.
A similar process can be applied to a system of higher order differential equations. For example, a system of $k$ differential equations in $k$ unknowns, all of order $n$, can be transformed into a first order system of $n \times k$ equations and $n \times k$ unknowns.
Example 3.1.2: Sometimes we can use this idea in reverse as well. Let us take the system
$$x' = 2y - x, \qquad y' = x,$$
where the independent variable is $t$. We wish to solve for the initial conditions $x(0) = 1$, $y(0) = 0$.
We first notice that if we differentiate the second equation we get $y'' = x'$, and we know what $x'$ is in terms of $x$ and $y$:
$$y'' = x' = 2y - x = 2y - y'.$$
So we now have the equation $y'' + y' - 2y = 0$. We know how to solve this equation and we find that $y = C_1 e^{-2t} + C_2 e^t$. Once we have $y$ we can plug in to get $x$:
$$x = y' = -2C_1 e^{-2t} + C_2 e^t.$$
We solve for the initial conditions: $1 = x(0) = -2C_1 + C_2$ and $0 = y(0) = C_1 + C_2$. Hence $C_1 = -C_2$ and $1 = 3C_2$. So $C_1 = \frac{-1}{3}$ and $C_2 = \frac{1}{3}$. Our solution is
$$x = \frac{2e^{-2t} + e^t}{3}, \qquad y = \frac{-e^{-2t} + e^t}{3}.$$
Exercise 3.1.1: Plug in and check that this really is the solution.
It is useful to go back and forth between systems and higher order equations for other reasons. For example, numerical ODE methods are generally only written for first order systems. It is not very hard to adapt the code for the Euler method for a first order equation to first order systems: we essentially just treat the dependent variable not as a number but as a vector. In many mathematical computer languages there is almost no distinction in syntax. In fact, this is what IODE was doing when you had it solve a second order equation numerically in the IODE Project III, if you have done that project. A sketch of the idea in code follows.
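Here is one possible sketch in Python (not the IODE code; the function name euler_system is ours), run on the system from Example 3.1.2 so that we can compare with the exact solution:

```python
import numpy as np

def euler_system(f, u0, t0, t1, n):
    """Euler's method for u' = f(t, u) with vector-valued u."""
    u, t = np.array(u0, dtype=float), t0
    h = (t1 - t0) / n
    for _ in range(n):
        u = u + h * f(t, u)  # the same formula as in the scalar case
        t = t + h
    return u

# The system x' = 2y - x, y' = x with x(0) = 1, y(0) = 0
f = lambda t, u: np.array([2 * u[1] - u[0], u[0]])
print(euler_system(f, [1, 0], 0, 2, 10000))
# approximately [2.475, 2.457], matching the exact solution at t = 2
```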
The above example is what we call a linear first order system, as none of the dependent variables appear in any functions or with any higher powers than one. It is also autonomous, as the equations do not depend on the independent variable $t$.
For autonomous systems we can easily draw the so-called direction field or vector field. That is, a plot similar to a slope field, but instead of giving a slope at each point we give a direction (and a magnitude). The previous example, $x' = 2y - x$, $y' = x$, says that at the point $(x, y)$ the direction in which we should travel to satisfy the equations is the direction of the vector $(2y - x, x)$, with speed equal to the magnitude of this vector. So we draw the vector $(2y - x, x)$ based at the point $(x, y)$, and we do this for many points on the $xy$-plane. We may want to scale down the size of our vectors to fit many of them on the same direction field. See Figure 3.1.
We can now draw a path of the solution in the plane. That is, suppose the solution is given by $x = f(t)$, $y = g(t)$. We pick an interval of $t$ (say $0 \le t \le 2$ for our example) and plot all the points $(f(t), g(t))$ for $t$ in the selected range. The resulting picture is usually called the phase portrait (or phase plane portrait). The particular curve obtained is called the trajectory or solution curve. An example plot is given in Figure 3.2. In this figure the curve starts at $(1, 0)$ and travels along the vector field for 2 units of $t$. Since we solved this system precisely, we can compute $x(2)$ and $y(2)$. We get $x(2) \approx 2.475$ and $y(2) \approx 2.457$. This point corresponds to the top right end of the plotted solution curve in the figure.
[Figure 3.1: The direction field for $x' = 2y - x$, $y' = x$.]
[Figure 3.2: The direction field for $x' = 2y - x$, $y' = x$, with the trajectory of the solution starting at $(1, 0)$ for $0 \le t \le 2$.]
Notice the similarity to the diagrams we drew for autonomous systems in one dimension. But
now note how much more complicated things become if we allow just one more dimension.
Also note that we can draw phase portraits and trajectories in the xy-plane even if the system is
not autonomous. In this case however we cannot draw the direction field, since the field changes
as t changes. For each t we would get a different direction field.
3.1.1 Exercises
Exercise 3.1.2: Find the general solution of $x_1' = x_2 - x_1 + t$, $x_2' = x_2$.
Exercise 3.1.3: Find the general solution of $x_1' = 3x_1 - x_2 + e^t$, $x_2' = x_1$.
Exercise 3.1.4: Write $ay'' + by' + cy = f(x)$ as a first order system of ODEs.
Exercise 3.1.5: Write $x'' + y^2 y' - x^3 = \sin(t)$, $y'' + (x' + y')^2 - x = 0$ as a first order system of ODEs.
3.2 Matrices and linear systems
Note: 1 and a half lectures, first part of §5.1 in EP
3.2.1 Matrices and vectors
Before we can start talking about linear systems of ODEs, we need to talk about matrices, so let us review these briefly. A matrix is an $m \times n$ array of numbers ($m$ rows and $n$ columns). For example, we denote a $3 \times 5$ matrix as follows:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \end{bmatrix}.$$
By a vector we usually mean a column vector, that is an $n \times 1$ matrix. If we mean a row vector we will explicitly say so (a row vector is a $1 \times n$ matrix). We usually denote matrices by upper case letters and vectors by lower case letters with an arrow, such as $\vec{x}$ or $\vec{b}$. By $\vec{0}$ we mean the vector of all zeros.
It is easy to define some operations on matrices. Note that we want $1 \times 1$ matrices to really act like numbers, so our operations have to be compatible with this viewpoint.
First, we can multiply by a scalar (a number). This means just multiplying each entry by the same number. For example,
$$2 \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end{bmatrix}.$$
Matrix addition is also easy. We add matrices element by element. For example,
$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} + \begin{bmatrix} 1 & 1 & -1 \\ 0 & 2 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 3 & 2 \\ 4 & 7 & 10 \end{bmatrix}.$$
If the sizes do not match, then addition is not defined.
If we denote by 0 the matrix with all zero entries, by $c$, $d$ some scalars, and by $A$, $B$, $C$ some matrices, we have the following familiar rules:
$$A + 0 = A = 0 + A,$$
$$A + B = B + A,$$
$$(A + B) + C = A + (B + C),$$
$$c(A + B) = cA + cB,$$
$$(c + d)A = cA + dA.$$
Another useful operation for matrices is the so-called transpose. This operation just swaps rows and columns of a matrix. The transpose of $A$ is denoted by $A^T$. Example:
$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}.$$
3.2.2 Matrix multiplication
Next let us define matrix multiplication. First we define the so-called dot product (or inner product) of two vectors. Usually this is a row vector multiplied with a column vector of the same size. For the dot product we multiply each pair of entries from the first and the second vector and we sum these products. The result is a single number. For example,
$$\begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} \cdot \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = a_1 b_1 + a_2 b_2 + a_3 b_3.$$
And similarly for larger (or smaller) vectors.
Armed with the dot product we can define the product of matrices. First let us denote by $\mathrm{row}_i(A)$ the $i$th row of $A$ and by $\mathrm{column}_j(A)$ the $j$th column of $A$. For an $m \times n$ matrix $A$ and an $n \times p$ matrix $B$ we can define the product $AB$. We let $AB$ be an $m \times p$ matrix whose $ij$th entry is
$$\mathrm{row}_i(A) \cdot \mathrm{column}_j(B).$$
Note that the sizes must match. Example:
$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 1 \cdot 1 + 2 \cdot 1 + 3 \cdot 1 & 1 \cdot 0 + 2 \cdot 1 + 3 \cdot 0 & 1 \cdot (-1) + 2 \cdot 1 + 3 \cdot 0 \\ 4 \cdot 1 + 5 \cdot 1 + 6 \cdot 1 & 4 \cdot 0 + 5 \cdot 1 + 6 \cdot 0 & 4 \cdot (-1) + 5 \cdot 1 + 6 \cdot 0 \end{bmatrix} = \begin{bmatrix} 6 & 2 & 1 \\ 15 & 5 & 1 \end{bmatrix}.$$
For multiplication we want an analogue of the number 1: the so-called identity matrix. The identity matrix is a square matrix with 1s on the main diagonal and zeros everywhere else. It is usually denoted by $I$. For each size we have a different identity matrix, so sometimes we denote the size as a subscript. For example, $I_3$ is the $3 \times 3$ identity matrix
$$I = I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
We have the following rules for matrix multiplication. Suppose that $A$, $B$, $C$ are matrices of the correct sizes so that the following make sense, and that $c$ is a scalar (number):
$$A(BC) = (AB)C,$$
$$A(B + C) = AB + AC,$$
$$(B + C)A = BA + CA,$$
$$c(AB) = (cA)B = A(cB),$$
$$IA = A = AI.$$
A few warnings are in order, however.
(i) $AB \neq BA$ in general (it may be true by fluke sometimes). That is, matrices do not commute.
(ii) $AB = AC$ does not necessarily imply $B = C$, even if $A$ is not 0.
(iii) $AB = 0$ does not necessarily mean that $A = 0$ or $B = 0$.
For the last two items to hold we would need to essentially "divide" by a matrix. This is where the matrix inverse comes in. Suppose that $A$ is an $n \times n$ matrix and that there exists another $n \times n$ matrix $B$ such that
$$AB = I = BA.$$
Then we call $B$ the inverse of $A$ and we denote $B$ by $A^{-1}$. If the inverse of $A$ exists, then we call $A$ invertible. If $A$ is not invertible, we say $A$ is singular or just say it is not invertible.
If $A$ is invertible, then $AB = AC$ does imply that $B = C$ (in particular, the inverse is unique). We just multiply both sides by $A^{-1}$ to get $A^{-1}AB = A^{-1}AC$, or $IB = IC$, or $B = C$. It is also not hard to see that $(A^{-1})^{-1} = A$.
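A quick way to get a feel for these rules is to experiment on a computer. A small sketch assuming NumPy (illustrative only):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])

print(A @ B)             # matrix product
print(B @ A)             # different result: matrices do not commute
Ainv = np.linalg.inv(A)  # inverse (A must be invertible)
print(A @ Ainv)          # the identity, up to rounding error
```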
3.2.3 The determinant
We can now talk about determinants of square matrices. We define the determinant of a $1 \times 1$ matrix as the value of its single entry. For a $2 \times 2$ matrix we define
$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$$
Before trying to compute the determinant for larger matrices, let us first note the meaning of the determinant. Consider an $n \times n$ matrix as a mapping of $\mathbb{R}^n$ to $\mathbb{R}^n$. For example, a $2 \times 2$ matrix $A$ is a mapping of the plane where $\vec{x}$ gets sent to $A\vec{x}$. The determinant of $A$ is then the factor by which the volume of objects gets changed. For example, if we take the unit square (square of sides 1) in the plane, then $A$ takes the square to a parallelogram of area $|\det(A)|$. The sign of $\det(A)$ denotes a change of orientation (whether the axes get flipped). For example, take
$$A = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}.$$
Then $\det(A) = 1 + 1 = 2$. Now let us see where the square with vertices $(0, 0)$, $(1, 0)$, $(0, 1)$, and $(1, 1)$ gets sent. Obviously $(0, 0)$ gets sent to $(0, 0)$. Now
$$\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}.$$
So it turns out that the image of the square is another square, this one with sides of length $\sqrt{2}$, and therefore of area 2.
If you think back to high school geometry, you may have seen a formula for computing the area of a parallelogram with vertices $(0, 0)$, $(a, c)$, $(b, d)$, and $(a + b, c + d)$. It is precisely
$$\left| \det \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right|.$$
The vertical lines here mean absolute value. The matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ carries the unit square to the given parallelogram.
Now we can define the determinant for larger matrices. We define $A_{ij}$ as the matrix $A$ with the $i$th row and the $j$th column deleted. To compute the determinant of a matrix, pick one row, say the $i$th row, and compute
$$\det(A) = \sum_{j=1}^n (-1)^{i+j} a_{ij} \det(A_{ij}).$$
For example, for the first row we get
$$\det(A) = a_{11} \det(A_{11}) - a_{12} \det(A_{12}) + a_{13} \det(A_{13}) - \cdots \begin{cases} +a_{1n} \det(A_{1n}) & \text{if } n \text{ is odd,} \\ -a_{1n} \det(A_{1n}) & \text{if } n \text{ is even.} \end{cases}$$
We alternately add and subtract the determinants of the submatrices $A_{ij}$ for a fixed $i$ and all $j$. For a $3 \times 3$ matrix, picking the first row, we would get $\det(A) = a_{11} \det(A_{11}) - a_{12} \det(A_{12}) + a_{13} \det(A_{13})$. For example,
$$\det \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} = 1 \cdot \det \begin{bmatrix} 5 & 6 \\ 8 & 9 \end{bmatrix} - 2 \cdot \det \begin{bmatrix} 4 & 6 \\ 7 & 9 \end{bmatrix} + 3 \cdot \det \begin{bmatrix} 4 & 5 \\ 7 & 8 \end{bmatrix}$$
$$= 1(5 \cdot 9 - 6 \cdot 8) - 2(4 \cdot 9 - 6 \cdot 7) + 3(4 \cdot 8 - 5 \cdot 7) = 0.$$
The numbers $(-1)^{i+j} \det(A_{ij})$ are called cofactors of the matrix, and this way of computing the determinant is called the cofactor expansion. It is also possible to compute the determinant by expanding along columns (picking a column instead of a row above).
Note that a common notation for the determinant is a pair of vertical lines:
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \det \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
I personally find this notation confusing, since vertical lines usually mean a positive quantity, while determinants can be negative. So I will not use this notation in these notes.
One of the most important properties of determinants (in the context of this course) is the following theorem.
Theorem 3.2.1. An $n \times n$ matrix $A$ is invertible if and only if $\det(A) \neq 0$.
In fact, there is a formula for the inverse of a $2 \times 2$ matrix:
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
Notice the determinant of the matrix in the denominator of the fraction. The formula only works if the determinant is nonzero; otherwise we would be dividing by zero.
3.2.4 Solving linear systems
One application of matrices we will need is solving systems of linear equations. This is best shown by example. Suppose that we have the following system of linear equations:
$$2x_1 + 2x_2 + 2x_3 = 2,$$
$$x_1 + x_2 + 3x_3 = 5,$$
$$x_1 + 4x_2 + x_3 = 10.$$
Without changing the solution, we could swap equations in this system, multiply any of the equations by a nonzero number, and add a multiple of one equation to another equation. It turns out these operations always suffice to find a solution.
It is easier to write this as a matrix equation. Note that the system can be written as
$$\begin{bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 5 \\ 10 \end{bmatrix}.$$
To solve the system we put the coefficient matrix (the matrix on the left hand side of the equation) together with the vector on the right hand side and get the so-called augmented matrix
$$\left[ \begin{array}{ccc|c} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array} \right].$$
We then apply the following three elementary operations.
(i) Swap two rows.
(ii) Multiply a row by a nonzero number.
(iii) Add a multiple of one row to another row.
We will keep doing these operations until we get into a state where it is easy to read off the answer
or until we get into a contradiction indicating no solution, for example if we come up with an
equation such as 0 = 1.
Let us work through the example. First multiply the first row by $\frac{1}{2}$:
$$\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1 & 10 \end{array} \right].$$
Now subtract the first row from the second and third rows:
$$\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 3 & 0 & 9 \end{array} \right].$$
Multiply the last row by $\frac{1}{3}$ and the second row by $\frac{1}{2}$:
$$\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 3 \end{array} \right].$$
Swap rows 2 and 3:
$$\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array} \right].$$
Subtract the last row from the first, then subtract the second row from the first:
$$\left[ \begin{array}{ccc|c} 1 & 0 & 0 & -4 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end{array} \right].$$
If we think about what equations this augmented matrix represents, we see that $x_1 = -4$, $x_2 = 3$, and $x_3 = 2$. We try these and, voilà, it works.
Exercise 3.2.1: Check that this solution works.
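For comparison, here is how a machine would check it, as a small NumPy sketch (illustrative; NumPy is an assumption here):

```python
import numpy as np

A = np.array([[2., 2., 2.], [1., 1., 3.], [1., 4., 1.]])
b = np.array([2., 5., 10.])

x = np.linalg.solve(A, b)  # solves A x = b directly
print(x)                   # [-4.  3.  2.]
print(A @ x - b)           # the residual, essentially zero
```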
If we write this equation in matrix notation as
$$A\vec{x} = \vec{b},$$
where $A$ is the matrix $\begin{bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end{bmatrix}$ and $\vec{b}$ is the vector $\begin{bmatrix} 2 \\ 5 \\ 10 \end{bmatrix}$, then the solution can also be computed with the inverse:
$$\vec{x} = A^{-1}A\vec{x} = A^{-1}\vec{b}.$$
One last note to make about linear systems of equations is that it is possible that the solution is not unique (or that no solution exists). It is easy to tell if a solution does not exist: if during the row reduction you come up with a row where all the entries except the last one are zero (the last entry in a row corresponds to the right hand side of the equation), then the system is inconsistent and has no solution. For example, for a system of 3 equations and 3 unknowns, if you find a row such as $[\,0\ \ 0\ \ 0\ \ 1\,]$ in the augmented matrix, you know the system is inconsistent.
You generally try to use row operations until the following conditions are satisfied. The first
nonzero entry in each row is called the leading entry.
(i) There is only one leading entry in each column.
(ii) All the entries above and below a leading entry are zero.
(iii) All leading entries are 1.
Such a matrix is said to be in reduced row echelon form. The variables corresponding to columns
with no leading entries are said to be free variables. Free variables mean that we can pick those
variables to be anything we want and then solve for the rest of the unknowns.
Example 3.2.1: The following augmented matrix is in reduced row echelon form:
$$\left[ \begin{array}{ccc|c} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right].$$
If the variables are named $x_1$, $x_2$, and $x_3$, then $x_2$ is the free variable, $x_1 = 3 - 2x_2$, and $x_3 = 1$.
On the other hand, if during the row reduction process you come up with the matrix
$$\left[ \begin{array}{ccc|c} 1 & 2 & 13 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 3 \end{array} \right],$$
there is no need to go further. The last row corresponds to the equation $0x_1 + 0x_2 + 0x_3 = 3$, which is preposterous. Hence, no solution exists.
3.2.5 Computing the inverse
If the coefficient matrix is square and there exists a unique solution $\vec{x}$ to $A\vec{x} = \vec{b}$ for any $\vec{b}$, then $A$ is invertible. In fact, by multiplying both sides by $A^{-1}$ you can see that $\vec{x} = A^{-1}\vec{b}$. So it is useful to compute the inverse if you want to solve the equation for many different right hand sides $\vec{b}$.
The $2 \times 2$ inverse is basically given by a formula, but it is not hard to also compute inverses of larger matrices. While we will not have much occasion to compute inverses of matrices larger than $2 \times 2$ by hand, let us touch on how to do it. Finding the inverse of $A$ is actually just solving a bunch of linear equations. If you can solve $A\vec{x}_k = \vec{e}_k$, where $\vec{e}_k$ is the vector with all zeros except a 1 at the $k$th position, then the inverse is the matrix with the columns $\vec{x}_k$ for $k = 1, \ldots, n$ (exercise: why?). Therefore, to find the inverse we can write a larger $n \times 2n$ augmented matrix $[\,A \mid I\,]$, where $I$ is the identity. If you do row reduction and put the matrix in reduced row echelon form, then the matrix will be of the form $[\,I \mid A^{-1}\,]$ if and only if $A$ is invertible, so you can just read off the inverse $A^{-1}$.
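This procedure is easy to try on a computer. A short SymPy sketch (SymPy is an assumption of this illustration; its rref computes the reduced row echelon form):

```python
import sympy as sp

A = sp.Matrix([[2, 2, 2], [1, 1, 3], [1, 4, 1]])
I = sp.eye(3)

# Row reduce the augmented matrix [A | I] to [I | A^{-1}]
aug, _ = A.row_join(I).rref()
Ainv = aug[:, 3:]
print(Ainv)
print(sp.simplify(A * Ainv))  # the 3x3 identity
```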
3.2.6 Exercises
Exercise 3.2.2: Solve $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \vec{x} = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$ by using the matrix inverse.
Exercise 3.2.3: Compute the determinant of $\begin{bmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{bmatrix}$.
Exercise 3.2.4: Compute the determinant of $\begin{bmatrix} 1 & 2 & 3 & 1 \\ 4 & 0 & 5 & 0 \\ 6 & 0 & 7 & 0 \\ 8 & 0 & 10 & 1 \end{bmatrix}$. Hint: expand along the proper row or column to make the calculations simpler.
Exercise 3.2.5: Compute the inverse of $\begin{bmatrix} 1 & 2 & 3 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}$.
Exercise 3.2.6: For which $h$ is $\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & h \end{bmatrix}$ not invertible? Is there only one such $h$? Are there several? Infinitely many?
Exercise 3.2.7: For which $h$ is $\begin{bmatrix} h & 1 & 1 \\ 0 & h & 0 \\ 1 & 1 & h \end{bmatrix}$ not invertible? Find all such $h$.
Exercise 3.2.8: Solve $\begin{bmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{bmatrix} \vec{x} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$.
Exercise 3.2.9: Solve $\begin{bmatrix} 5 & 3 & 7 \\ 8 & 4 & 4 \\ 6 & 3 & 3 \end{bmatrix} \vec{x} = \begin{bmatrix} 2 \\ 0 \\ 0 \end{bmatrix}$.
Exercise 3.2.10: Solve $\begin{bmatrix} 3 & 2 & 3 & 0 \\ 3 & 3 & 3 & 3 \\ 0 & 2 & 4 & 2 \\ 2 & 3 & 4 & 3 \end{bmatrix} \vec{x} = \begin{bmatrix} 2 \\ 0 \\ 4 \\ 1 \end{bmatrix}$.
3.3 Linear systems of ODEs
Note: less than 1 lecture, second part of §5.1 in EP
First let us talk about matrix or vector valued functions. Such a function is essentially just a matrix whose entries depend on some variable. Let us say the independent variable is $t$. Then a vector valued function $\vec{x}(t)$ is really something like
$$\vec{x}(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}.$$
Similarly a matrix valued function is something such as
$$A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{bmatrix}.$$
We can talk about the derivative $A'(t)$, or $\frac{dA}{dt}$; this is just the matrix valued function whose $ij$th entry is $a_{ij}'(t)$.
Similar differentiation rules apply here. Let $A$ and $B$ be matrix valued functions, let $c$ be a scalar, and let $C$ be a constant matrix. Then
$$(A + B)' = A' + B',$$
$$(AB)' = A'B + AB',$$
$$(cA)' = cA',$$
$$(CA)' = CA',$$
$$(AC)' = A'C.$$
Do note the order in the last two expressions.
A first order linear system of ODEs is a system that can be written as
$$\vec{x}'(t) = P(t)\vec{x}(t) + \vec{f}(t),$$
where $P$ is a matrix valued function and $\vec{x}$ and $\vec{f}$ are vector valued functions. We will often suppress the dependence on $t$ and write only $\vec{x}' = P\vec{x} + \vec{f}$. A solution is, of course, a vector valued function $\vec{x}$ satisfying the equation.
For example, the equations
$$x_1' = 2tx_1 + e^t x_2 + t^2,$$
$$x_2' = \frac{x_1}{t} - x_2 + e^t,$$
can be written as
$$\vec{x}' = \begin{bmatrix} 2t & e^t \\ \frac{1}{t} & -1 \end{bmatrix} \vec{x} + \begin{bmatrix} t^2 \\ e^t \end{bmatrix}.$$
We will mostly concentrate on equations that are not just linear, but in fact constant coefficient equations. That is, the matrix $P$ will be constant, not depending on $t$.
When $\vec{f} = \vec{0}$ (the zero vector), we say the system is homogeneous. For homogeneous linear systems we still have the principle of superposition, just as for single homogeneous equations.
Theorem 3.3.1 (Superposition). Let $\vec{x}' = P\vec{x}$ be a linear homogeneous system of ODEs. Suppose that $\vec{x}_1, \ldots, \vec{x}_n$ are $n$ solutions of the equation. Then
$$\vec{x} = c_1 \vec{x}_1 + c_2 \vec{x}_2 + \cdots + c_n \vec{x}_n \qquad (3.1)$$
is also a solution. Furthermore, if this is a system of $n$ equations ($P$ is $n \times n$) and $\vec{x}_1, \ldots, \vec{x}_n$ are linearly independent, then every solution can be written as (3.1).
Linear independence for vector valued functions is essentially the same as for ordinary functions: $\vec{x}_1, \ldots, \vec{x}_n$ are linearly independent if and only if
$$c_1 \vec{x}_1 + c_2 \vec{x}_2 + \cdots + c_n \vec{x}_n = \vec{0}$$
has only the solution $c_1 = c_2 = \cdots = c_n = 0$.
The linear combination $c_1 \vec{x}_1 + c_2 \vec{x}_2 + \cdots + c_n \vec{x}_n$ can always be written as
$$X(t)\,\vec{c},$$
where $X(t)$ is the matrix with columns $\vec{x}_1, \ldots, \vec{x}_n$ and $\vec{c}$ is the column vector with entries $c_1, \ldots, c_n$. $X(t)$ is called the fundamental matrix, or the fundamental matrix solution.
To solve nonhomogeneous first order linear systems, we apply the same technique as before.
Theorem 3.3.2. Let $\vec{x}' = P\vec{x} + \vec{f}$ be a linear system of ODEs. Suppose $\vec{x}_p$ is one particular solution. Then every solution can be written as
$$\vec{x} = \vec{x}_c + \vec{x}_p,$$
where $\vec{x}_c$ is a solution to the associated homogeneous equation ($\vec{x}' = P\vec{x}$).
So the procedure is exactly the same: we find a particular solution to the nonhomogeneous equation, then we find the general solution to the associated homogeneous equation, and we add the two.
Alright, suppose you have found the general solution of $\vec{x}' = P\vec{x} + \vec{f}$, and now you are given an initial condition of the form $\vec{x}(t_0) = \vec{b}$ for some constant vector $\vec{b}$. Suppose that $X(t)$ is the fundamental matrix solution of the associated homogeneous equation (i.e. the columns of $X$ are solutions). The general solution is written as
$$\vec{x}(t) = X(t)\vec{c} + \vec{x}_p(t).$$
We are then seeking a vector $\vec{c}$ such that
$$\vec{b} = \vec{x}(t_0) = X(t_0)\vec{c} + \vec{x}_p(t_0).$$
In other words, we are solving the nonhomogeneous system of linear equations
$$X(t_0)\vec{c} = \vec{b} - \vec{x}_p(t_0)$$
for $\vec{c}$.
Example 3.3.1: In §3.1 we solved the system
$$x_1' = x_1,$$
$$x_2' = x_1 - x_2,$$
with initial conditions $x_1(0) = 1$, $x_2(0) = 2$.
This is a homogeneous system, so $\vec{f} = \vec{0}$. We write the system as
$$\vec{x}' = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} \vec{x}, \qquad \vec{x}(0) = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
We found that the general solution is $x_1 = c_1 e^t$ and $x_2 = \frac{c_1}{2} e^t + c_2 e^{-t}$. Hence in matrix notation the fundamental matrix solution is
$$X(t) = \begin{bmatrix} e^t & 0 \\ \frac{1}{2} e^t & e^{-t} \end{bmatrix}.$$
It is not hard to see that the columns of this matrix are linearly independent. To see this, just plug in $t = 0$ and note that the two constant vectors are already linearly independent there.
Hence, to solve the initial value problem we solve the equation
$$X(0)\vec{c} = \vec{b},$$
or in other words,
$$\begin{bmatrix} 1 & 0 \\ \frac{1}{2} & 1 \end{bmatrix} \vec{c} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
After a single elementary row operation we find that $\vec{c} = \begin{bmatrix} 1 \\ 3/2 \end{bmatrix}$. Hence our solution is
$$\vec{x}(t) = X(t)\vec{c} = \begin{bmatrix} e^t & 0 \\ \frac{1}{2} e^t & e^{-t} \end{bmatrix} \begin{bmatrix} 1 \\ \frac{3}{2} \end{bmatrix} = \begin{bmatrix} e^t \\ \frac{1}{2} e^t + \frac{3}{2} e^{-t} \end{bmatrix}.$$
This agrees with our previous solution.
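The last step is an ordinary linear solve, so it is easy to hand to a computer. A small NumPy sketch (illustrative only):

```python
import numpy as np

X0 = np.array([[1.0, 0.0], [0.5, 1.0]])  # X(t) evaluated at t = 0
b = np.array([1.0, 2.0])                 # the initial condition

c = np.linalg.solve(X0, b)
print(c)  # [1.  1.5], i.e. c1 = 1 and c2 = 3/2
```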
3.3.1 Exercises
Exercise 3.3.1: Write the system $x_1' = 2x_1 - 3tx_2 + \sin t$, $x_2' = e^t x_1 + 3x_2 + \cos t$ in the form $\vec{x}' = P(t)\vec{x} + \vec{f}(t)$.
Exercise 3.3.2: a) Verify that the system $\vec{x}' = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix} \vec{x}$ has the two solutions $\begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{4t}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix} e^{-2t}$. b) Write down the general solution. c) Write down the general solution in the form $x_1 = ?$, $x_2 = ?$ (i.e. write down a formula for each element of the solution).
Exercise 3.3.3: Verify that $\begin{bmatrix} 1 \\ 1 \end{bmatrix} e^t$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix} e^t$ are linearly independent. Hint: just plug in $t = 0$.
Exercise 3.3.4: Verify that $\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} e^t$ and $\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} e^t$ and $\begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} e^{2t}$ are linearly independent. Hint: you must be a bit more tricky than in the previous exercise.
Exercise 3.3.5: Verify that $\begin{bmatrix} t \\ t^2 \end{bmatrix}$ and $\begin{bmatrix} t^3 \\ t^4 \end{bmatrix}$ are linearly independent.
3.4 Eigenvalue method
Note: 2 lectures, §5.2 in EP
In this section we will learn how to solve linear homogeneous constant coefficient systems of ODEs by the eigenvalue method. Suppose we have a linear constant coefficient homogeneous system
$$\vec{x}' = P\vec{x}.$$
Suppose we try to adapt the method for single constant coefficient equations by trying the function $e^{\lambda t}$. However, $\vec{x}$ is a vector, so we try $\vec{v} e^{\lambda t}$, where $\vec{v}$ is an arbitrary constant vector. We plug into the equation to get
$$\lambda \vec{v} e^{\lambda t} = P\vec{v} e^{\lambda t}.$$
We divide by $e^{\lambda t}$ and notice that we are looking for a $\lambda$ and $\vec{v}$ that satisfy the equation
$$\lambda \vec{v} = P\vec{v}.$$
To solve this equation we need a little bit more linear algebra, which we now review.
3.4.1 Eigenvalues and eigenvectors of a matrix
Let $A$ be a square constant matrix. Suppose there is a scalar $\lambda$ and a nonzero vector $\vec{v}$ such that
$$A\vec{v} = \lambda \vec{v}.$$
We then call $\lambda$ an eigenvalue of $A$ and $\vec{v}$ a corresponding eigenvector.
Example 3.4.1: The matrix $\begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix}$ has an eigenvalue $\lambda = 2$ with corresponding eigenvector $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, because
$$\begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$
If we rewrite the equation for an eigenvalue as
$$(A - \lambda I)\vec{v} = \vec{0},$$
we notice that this has a nonzero solution $\vec{v}$ only if $A - \lambda I$ is not invertible. Were it invertible, we could write $(A - \lambda I)^{-1}(A - \lambda I)\vec{v} = (A - \lambda I)^{-1}\vec{0}$, which implies $\vec{v} = \vec{0}$. Therefore, $A$ has the eigenvalue $\lambda$ if and only if $\lambda$ solves the equation
$$\det(A - \lambda I) = 0.$$
Note that this means we can find an eigenvalue without finding a corresponding eigenvector. An eigenvector will have to be found later, once $\lambda$ is known.
Example 3.4.2: Find all eigenvalues of $\begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}$.
We write
$$\det\left( \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} - \lambda \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right) = \det \begin{bmatrix} 2-\lambda & 1 & 1 \\ 1 & 2-\lambda & 0 \\ 0 & 0 & 2-\lambda \end{bmatrix}$$
$$= (2 - \lambda)\left( (2 - \lambda)^2 - 1 \right) = -(\lambda - 1)(\lambda - 2)(\lambda - 3),$$
and so the eigenvalues are $\lambda = 1$, $\lambda = 2$, and $\lambda = 3$.
Note that for an $n \times n$ matrix, the polynomial we get by computing $\det(A - \lambda I)$ is of degree $n$, and hence in general we have $n$ eigenvalues.
To find an eigenvector corresponding to $\lambda$, we write
$$(A - \lambda I)\vec{v} = \vec{0}$$
and solve for a nontrivial (nonzero) vector $\vec{v}$. If $\lambda$ is an eigenvalue, this is always possible.
Example 3.4.3: Find an eigenvector of $\begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}$ corresponding to the eigenvalue $\lambda = 3$.
We write
$$(A - \lambda I)\vec{v} = \left( \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} - 3 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right) \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} -1 & 1 & 1 \\ 1 & -1 & 0 \\ 0 & 0 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \vec{0}.$$
It is easy to solve this system of linear equations. Write down the augmented matrix
$$\left[ \begin{array}{ccc|c} -1 & 1 & 1 & 0 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \end{array} \right]$$
and perform row operations (exercise: which ones?) until you get
$$\left[ \begin{array}{ccc|c} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right].$$
The equations the entries of $\vec{v}$ have to satisfy are, therefore, $v_1 - v_2 = 0$ and $v_3 = 0$, and $v_2$ is a free variable. We can pick $v_2$ to be arbitrary (but nonzero), let $v_1 = v_2$, and of course $v_3 = 0$. For example, $\vec{v} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$. We try this:
$$\begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \\ 0 \end{bmatrix} = 3 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}.$$
Yay! It worked.
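In practice one usually lets a computer find eigenvalues and eigenvectors. A small NumPy sketch (illustrative; note that eig returns eigenvectors normalized to unit length, so they are scalar multiples of the ones found by hand):

```python
import numpy as np

A = np.array([[2., 1., 1.], [1., 2., 0.], [0., 0., 2.]])
evals, evecs = np.linalg.eig(A)
print(evals)                     # 1, 2, 3 in some order

v = evecs[:, np.argmax(evals)]   # eigenvector for lambda = 3
print(A @ v - 3 * v)             # essentially zero
```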
Exercise 3.4.1 (easy): Are eigenvectors unique? Can you find a different eigenvector for $\lambda = 3$ in the example above? How does it relate to the other eigenvector?
Exercise 3.4.2: Note that when the matrix is $2 \times 2$ you do not need to write down the augmented matrix when computing eigenvectors (if you have computed the eigenvalues correctly). Can you see why? Try it for the matrix $\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$.
3.4.2 The eigenvalue method with distinct real eigenvalues
OK. We have the equation
$$\vec{x}' = P\vec{x}.$$
We find the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of the matrix $P$ and the corresponding eigenvectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. Now we notice that the functions $\vec{v}_1 e^{\lambda_1 t}, \vec{v}_2 e^{\lambda_2 t}, \ldots, \vec{v}_n e^{\lambda_n t}$ are solutions of the equation, and hence $\vec{x} = c_1 \vec{v}_1 e^{\lambda_1 t} + c_2 \vec{v}_2 e^{\lambda_2 t} + \cdots + c_n \vec{v}_n e^{\lambda_n t}$ is a solution.
Theorem 3.4.1. Take $\vec{x}' = P\vec{x}$. If $P$ is $n \times n$ and has $n$ distinct real eigenvalues $\lambda_1, \ldots, \lambda_n$, then there are $n$ linearly independent corresponding eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$, and the general solution to the ODE can be written as
$$\vec{x} = c_1 \vec{v}_1 e^{\lambda_1 t} + c_2 \vec{v}_2 e^{\lambda_2 t} + \cdots + c_n \vec{v}_n e^{\lambda_n t}.$$
Example 3.4.4: Suppose we take the system
$$\vec{x}' = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} \vec{x}.$$
Find the general solution.
We found the eigenvalues 1, 2, 3 earlier, and the eigenvector $\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$ for the eigenvalue 3. In similar fashion we find the eigenvector $\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}$ for the eigenvalue 1 and $\begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}$ for the eigenvalue 2 (exercise: check). Hence our general solution is
$$\vec{x} = c_1 \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} e^t + c_2 \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} e^{2t} + c_3 \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} e^{3t} = \begin{bmatrix} c_1 e^t + c_3 e^{3t} \\ -c_1 e^t + c_2 e^{2t} + c_3 e^{3t} \\ -c_2 e^{2t} \end{bmatrix}.$$
Exercise 3.4.3: Check that this really solves the system.
Note: if you write a homogeneous linear constant coefficient $n$th order equation as a first order system (as we did in §3.1), then the eigenvalue equation
$$\det(P - \lambda I) = 0$$
is essentially the same as the characteristic equation we got in §2.2 and §2.3.
A matrix may very well have complex eigenvalues even if all its entries are real. For example, suppose that we have the system
$$\vec{x}' = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix} \vec{x}.$$
Let us compute the eigenvalues of the matrix $P = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}$:
$$\det(P - \lambda I) = \det \begin{bmatrix} 1-\lambda & 1 \\ -1 & 1-\lambda \end{bmatrix} = (1 - \lambda)^2 + 1 = \lambda^2 - 2\lambda + 2 = 0.$$
From this we note that $\lambda = 1 \pm i$. The corresponding eigenvectors are also complex. For $\lambda = 1 - i$,
$$(P - (1 - i)I)\vec{v} = \vec{0},$$
$$\begin{bmatrix} i & 1 \\ -1 & i \end{bmatrix} \vec{v} = \vec{0}.$$
The equations $iv_1 + v_2 = 0$ and $-v_1 + iv_2 = 0$ are multiples of each other, so we only need to consider one of them. After picking $v_2 = 1$, for example, we have the eigenvector $\vec{v} = \begin{bmatrix} i \\ 1 \end{bmatrix}$. In similar fashion we find that $\begin{bmatrix} -i \\ 1 \end{bmatrix}$ is an eigenvector corresponding to the eigenvalue $1 + i$.
We could write the solution as
$$\vec{x} = c_1 \begin{bmatrix} i \\ 1 \end{bmatrix} e^{(1-i)t} + c_2 \begin{bmatrix} -i \\ 1 \end{bmatrix} e^{(1+i)t} = \begin{bmatrix} c_1 i e^{(1-i)t} - c_2 i e^{(1+i)t} \\ c_1 e^{(1-i)t} + c_2 e^{(1+i)t} \end{bmatrix}.$$
But then we would need to look for complex values $c_1$ and $c_2$ to solve any initial conditions, and even then it is perhaps not completely clear that we get a real solution. We could use Euler's formula here and do the whole song and dance we did before, but we will do something a bit smarter first.
We claim that we did not have to look for the second eigenvector (nor for the second eigenvalue). All complex eigenvalues come in pairs (because the matrix $P$ is real).
First a small side note: the real part of a complex number $z$ can be computed as $\frac{z + \bar{z}}{2}$, where the bar above $z$ means $\overline{a + ib} = a - ib$. This operation is called the complex conjugate. Note that for a real number $a$, $\bar{a} = a$. Similarly we can bar whole vectors or matrices. If a matrix $P$ is real, then $\bar{P} = P$. We note that $\overline{P\vec{x}} = \bar{P}\bar{\vec{x}} = P\bar{\vec{x}}$. Therefore,
$$\overline{(P - \lambda I)\vec{v}} = (P - \bar{\lambda} I)\bar{\vec{v}}.$$
So if $\vec{v}$ is an eigenvector corresponding to the eigenvalue $a + ib$, then $\bar{\vec{v}}$ is an eigenvector corresponding to the eigenvalue $a - ib$.
Now suppose that $a + ib$ is a complex eigenvalue of $P$, and $\vec{v}$ a corresponding eigenvector. Then
$$\vec{x}_1 = \vec{v} e^{(a+ib)t}$$
is a (complex valued) solution of $\vec{x}' = P\vec{x}$. Note that $\overline{e^{(a+ib)t}} = e^{(a-ib)t}$, and hence
$$\vec{x}_2 = \bar{\vec{x}}_1 = \bar{\vec{v}} e^{(a-ib)t}$$
is also a solution. Now take the function
$$\vec{x}_3 = \operatorname{Re} \vec{x}_1 = \operatorname{Re} \vec{v} e^{(a+ib)t} = \frac{\vec{x}_1 + \bar{\vec{x}}_1}{2} = \frac{\vec{x}_1 + \vec{x}_2}{2}.$$
It is also a solution, and it is real valued! Similarly, as $\operatorname{Im} z = \frac{z - \bar{z}}{2i}$ is the imaginary part, we find that
$$\vec{x}_4 = \operatorname{Im} \vec{x}_1 = \frac{\vec{x}_1 - \vec{x}_2}{2i}$$
is also a real valued solution. It turns out that $\vec{x}_3$ and $\vec{x}_4$ are linearly independent.
Returning to our problem, we take
$$\vec{x}_1 = \begin{bmatrix} i \\ 1 \end{bmatrix} e^{(1-i)t} = \begin{bmatrix} i \\ 1 \end{bmatrix} \left( e^t \cos t - i e^t \sin t \right) = \begin{bmatrix} e^t \sin t + i e^t \cos t \\ e^t \cos t - i e^t \sin t \end{bmatrix}.$$
It is easy to see that
$$\operatorname{Re} \vec{x}_1 = \begin{bmatrix} e^t \sin t \\ e^t \cos t \end{bmatrix}, \qquad \operatorname{Im} \vec{x}_1 = \begin{bmatrix} e^t \cos t \\ -e^t \sin t \end{bmatrix}$$
are the solutions we seek.
Exercise 3.4.4: Check that these really are solutions.
The general solution is
$$\vec{x} = c_1 \begin{bmatrix} e^t \sin t \\ e^t \cos t \end{bmatrix} + c_2 \begin{bmatrix} e^t \cos t \\ -e^t \sin t \end{bmatrix} = \begin{bmatrix} c_1 e^t \sin t + c_2 e^t \cos t \\ c_1 e^t \cos t - c_2 e^t \sin t \end{bmatrix}.$$
This solution is real valued for real $c_1$ and $c_2$. We can now solve for any initial conditions we may have.
The process is this: when you have complex eigenvalues, you notice that they always come in pairs. You take one $\lambda = a + ib$ from the pair and find a corresponding eigenvector $\vec{v}$. You note that $\operatorname{Re} \vec{v} e^{(a+ib)t}$ and $\operatorname{Im} \vec{v} e^{(a+ib)t}$ are also solutions of the equation, are real valued, and are linearly independent. You then go on to the next eigenvalue, which is either a real eigenvalue or another complex eigenvalue pair. Hence, you end up with $n$ linearly independent solutions if you had $n$ distinct eigenvalues (real or complex).
You can now find a real valued general solution to any homogeneous system where the matrix has distinct eigenvalues. When you have repeated eigenvalues, matters get a bit more complicated, and we will look at that situation in §3.7.
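This recipe is easy to automate. A small NumPy sketch (illustrative; it assumes $P$ has a complex eigenvalue pair, as in the example above):

```python
import numpy as np

P = np.array([[1., 1.], [-1., 1.]])
evals, evecs = np.linalg.eig(P)

lam, v = evals[0], evecs[:, 0]   # one eigenvalue of the complex pair

def x3(t):  # Re(v e^{lam t}), a real valued solution
    return np.real(v * np.exp(lam * t))

def x4(t):  # Im(v e^{lam t}), another real valued solution
    return np.imag(v * np.exp(lam * t))

# Crude derivative check that x3 solves x' = P x
t, h = 0.7, 1e-6
print((x3(t + h) - x3(t - h)) / (2 * h) - P @ x3(t))  # essentially zero
```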
3.4.4 Exercises
Exercise 3.4.5: Let $A$ be a $3 \times 3$ matrix with an eigenvalue of 3 and a corresponding eigenvector $\vec{v} = \begin{bmatrix} 1 \\ -1 \\ 3 \end{bmatrix}$. Find $A\vec{v}$.
Exercise 3.4.6: a) Find the general solution of $x_1' = 2x_1$, $x_2' = 3x_2$ using the eigenvalue method (first write the system in the form $\vec{x}' = A\vec{x}$). b) Solve the system by solving each equation separately and verify you get the same general solution.
Exercise 3.4.7: Find the general solution of $x_1' = 3x_1 + x_2$, $x_2' = 2x_1 + 4x_2$ using the eigenvalue method.
Exercise 3.4.8: Find the general solution of $x_1' = x_1 - 2x_2$, $x_2' = 2x_1 + x_2$ using the eigenvalue method. Do not use complex exponentials in your solution.
Exercise 3.4.9: a) Compute the eigenvalues and eigenvectors of $A = \begin{bmatrix} 9 & -2 & -6 \\ -8 & 3 & 6 \\ 10 & -2 & -6 \end{bmatrix}$. b) Find the general solution of $\vec{x}' = A\vec{x}$.
Exercise 3.4.10: Compute the eigenvalues and eigenvectors of $\begin{bmatrix} -2 & -1 & -1 \\ 3 & 2 & 1 \\ -3 & -1 & 0 \end{bmatrix}$.
3.5 Two dimensional systems and their vector fields
Note: 1 lecture, should really be in EP §5.2, but is in EP §6.2
Let us take a moment to talk about homogeneous systems in the plane. We want to think about how the vector fields look and how this depends on the eigenvalues. So we have a $2 \times 2$ matrix $P$ and the system
$$\begin{bmatrix} x \\ y \end{bmatrix}' = P \begin{bmatrix} x \\ y \end{bmatrix}. \qquad (3.2)$$
We will be able to visually tell how the vector field looks once we find the eigenvalues and eigenvectors of the matrix.
Case 1. Suppose that the eigenvalues are real and positive. Find the two eigenvectors and plot them in the plane. For example, take the matrix $\begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}$. The eigenvalues are 1 and 2 and the corresponding eigenvectors are $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$. See Figure 3.3.
[Figure 3.3: Eigenvectors of $P$.]
Now suppose that $x$ and $y$ are on the line determined by an eigenvector $\vec{v}$ for an eigenvalue $\lambda$. That is, $\begin{bmatrix} x \\ y \end{bmatrix} = a\vec{v}$ for some scalar $a$. Then
$$\begin{bmatrix} x \\ y \end{bmatrix}' = P \begin{bmatrix} x \\ y \end{bmatrix} = P(a\vec{v}) = a(P\vec{v}) = a\lambda\vec{v}.$$
The derivative is a multiple of $\vec{v}$ and hence points along the line determined by $\vec{v}$. As $\lambda > 0$, the derivative points in the direction of $\vec{v}$ when $a$ is positive and in the opposite direction when $a$ is negative. Let us draw arrows on the lines to indicate the directions. See Figure 3.4.
page.
We fill in the rest of the arrows and we also draw a few solutions. See Figure 3.5 on the next
page. You will notice that the picture looks like a source with arrows coming out from the origin.
Hence we call this type of picture a source or sometimes an unstable node.
Case 2. Suppose both eigenvalues were negative. For example, take the negation of the matrix
in case 1,
,
−1 −1
0 −2
¸
. The eigenvalues are −1 and −2 and the corresponding eigenvectors are the same,
,
1
0
¸
and
,
1
1
¸
. The calculation and the picture are almost the same. The only difference is that the
eigenvalues are negative and hence all arrows are reversed. We get the picture in Figure 3.6 on the
following page. We call this kind of picture a sink or sometimes a stable node.
Case 3. Suppose one eigenvalue is positive and one is negative. For example the matrix
,
1 1
0 −2
¸
.
The eigenvalues are 1 and −2 and the corresponding eigenvectors are the same,
,
1
0
¸
and
,
1
−3
¸
. We
[Figure 3.4: Eigenvectors of $P$ with directions.]
[Figure 3.5: Example source vector field with eigenvectors and solutions.]
[Figure 3.6: Example sink vector field with eigenvectors and solutions.]
-3 -2 -1 0 1 2 3
-3 -2 -1 0 1 2 3
-3
-2
-1
0
1
2
3
-3
-2
-1
0
1
2
3
Figure 3.7: Example saddle vector field with
eigenvectors and solutions.
reverse the arrows on one line (corresponding to the negative eigenvalue) and we obtain the picture
in Figure 3.7. We call this picture a saddle point.
In the next three cases we will assume that the eigenvalues are complex. In that case the eigenvectors are also complex and we cannot just plot them in the plane.
Case 4. Suppose the eigenvalues are purely imaginary. That is, suppose the eigenvalues are $\pm ib$. For example, let $P = \begin{bmatrix} 0 & 1 \\ -4 & 0 \end{bmatrix}$. The eigenvalues turn out to be $\pm 2i$ and the eigenvectors are $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -2i \end{bmatrix}$. We take the eigenvalue $2i$ and its eigenvector $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and note that the real and imaginary parts of $\vec{v} e^{i2t}$ are
\[ \operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{i2t} = \begin{bmatrix} \cos 2t \\ -2 \sin 2t \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{i2t} = \begin{bmatrix} \sin 2t \\ 2 \cos 2t \end{bmatrix} . \]
Which combination of them we take depends on the initial conditions, so we might as well just consider the real part. Notice that it is a parametric equation for an ellipse; the same is true of the imaginary part and, in fact, of any linear combination of the two. It is not difficult to see that this is what happens in general when the eigenvalues are purely imaginary. So when the eigenvalues are purely imaginary, we get ellipses for the solutions. This type of picture is sometimes called a center. See Figure 3.8.
[Figure 3.8: Example center vector field.]

[Figure 3.9: Example spiral source vector field.]
Case 5. Now suppose the complex eigenvalues have positive real part. That is, suppose the eigenvalues are $a \pm ib$ for some $a > 0$. For example, let $P = \begin{bmatrix} 1 & 1 \\ -4 & 1 \end{bmatrix}$. The eigenvalues turn out to be $1 \pm 2i$ and the eigenvectors are $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -2i \end{bmatrix}$. We take $1 + 2i$ and its eigenvector $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and find that the real and imaginary parts of $\vec{v} e^{(1+2i)t}$ are
\[ \operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(1+2i)t} = e^t \begin{bmatrix} \cos 2t \\ -2 \sin 2t \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(1+2i)t} = e^t \begin{bmatrix} \sin 2t \\ 2 \cos 2t \end{bmatrix} . \]
Now note the $e^t$ in front of the solutions. This means that the solutions grow in magnitude while spinning around the origin. Hence we get a spiral source. See Figure 3.9.
Case 6. Finally suppose the complex eigenvalues have negative real part. That is, suppose the eigenvalues are $-a \pm ib$ for some $a > 0$. For example, let $P = \begin{bmatrix} -1 & -1 \\ 4 & -1 \end{bmatrix}$. The eigenvalues turn out to be $-1 \pm 2i$ and the eigenvectors are $\begin{bmatrix} 1 \\ -2i \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$. We take $-1 - 2i$ and its eigenvector $\begin{bmatrix} 1 \\ 2i \end{bmatrix}$ and find that the real and imaginary parts of $\vec{v} e^{(-1-2i)t}$ are
\[ \operatorname{Re} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(-1-2i)t} = e^{-t} \begin{bmatrix} \cos 2t \\ 2 \sin 2t \end{bmatrix}, \qquad \operatorname{Im} \begin{bmatrix} 1 \\ 2i \end{bmatrix} e^{(-1-2i)t} = e^{-t} \begin{bmatrix} -\sin 2t \\ 2 \cos 2t \end{bmatrix} . \]
Now note the $e^{-t}$ in front of the solutions. This means that the solutions shrink in magnitude while spinning around the origin. Hence we get a spiral sink. See Figure 3.10.
[Figure 3.10: Example spiral sink vector field.]
We summarize the behavior of linear homogeneous two dimensional systems in Table 3.1.
Eigenvalues Behavior
real and both positive source / unstable node
real and both negative sink / stable node
real and opposite signs saddle
purely imaginary center point / ellipses
complex with positive real part spiral source
complex with negative real part spiral sink
Table 3.1: Summary of behavior of linear homogeneous two dimensional systems.
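The classification in Table 3.1 depends only on the eigenvalues, so it is mechanical enough to automate. Here is a minimal Python sketch (my own illustration; the function name and tolerance are made up) that classifies a $2 \times 2$ system by the eigenvalues of $P$.

    import numpy as np

    def classify(P, tol=1e-12):
        """Classify the origin of x' = P x for a 2x2 matrix P, following Table 3.1."""
        e1, e2 = np.linalg.eigvals(P)
        if abs(e1.imag) > tol:  # complex conjugate pair
            if abs(e1.real) <= tol:
                return "center"
            return "spiral source" if e1.real > 0 else "spiral sink"
        e1, e2 = e1.real, e2.real
        if e1 > 0 and e2 > 0:
            return "source / unstable node"
        if e1 < 0 and e2 < 0:
            return "sink / stable node"
        if e1 * e2 < 0:
            return "saddle"
        return "degenerate (zero eigenvalue)"

    print(classify(np.array([[1.0, 1.0], [0.0, 2.0]])))   # source
    print(classify(np.array([[0.0, 1.0], [-4.0, 0.0]])))  # center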
3.5.1 Exercises
Exercise 3.5.1: Take the equation $m x'' + c x' + k x = 0$, with $m > 0$, $c \geq 0$, $k > 0$, for the mass-spring system. a) Convert this to a system of first order equations. b) Classify: for which $m$, $c$, $k$ do you get which behavior? c) Can you explain from physical intuition why you do not get all the different kinds of behavior here?
Exercise 3.5.2: Can you find what happens in the case when $P = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$? In this case the eigenvalue is repeated and there is only one eigenvector. What picture does this look like?
Exercise 3.5.3: Can you find what happens in the case when $P = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$? Does this look like any of the pictures we have drawn?
3.6 Second order systems and applications
Note: more than 2 lectures, §5.3 in EP
3.6.1 Undamped mass spring systems
While we did say that we will usually only look at first order systems, it is sometimes more con-
venient to study the system in the way it arises naturally. For example, suppose we have 3 masses
connected by springs between two walls. We could pick any higher number, and the math would
be essentially the same, but for simplicity we pick 3 right now. And let us assume no friction,
that is, the system is undamped. The masses are $m_1$, $m_2$, and $m_3$ and the spring constants are $k_1$, $k_2$, $k_3$, and $k_4$. Let $x_1$ be the displacement from rest position of the first mass, and $x_2$ and $x_3$ the displacement of the second and third mass. We will make, as usual, positive values go right (as $x_1$ grows, mass 1 is moving right). See Figure 3.11.
[Figure 3.11: System of masses and springs.]
This simple system turns up in unexpected places. Note for example that our world really
consists of small particles of matter interacting together. When we try this system with many more
masses, this is a good approximation to how an elastic material will behave. In fact by somehow
taking a limit of the number of masses going to infinity we obtain the continuous one dimensional
wave equation. But we digress.
Let us set up the equations for the three mass system. By Hooke’s law we have that the force
acting on the mass equals the spring compression times the spring constant. By Newton’s second
law we again have that force is mass times acceleration. So if we sum the forces acting on each
mass and put the right sign in front of each depending on the direction in which it is acting, we end
up with the system.
\begin{align*}
m_1 x_1'' &= -k_1 x_1 + k_2 (x_2 - x_1) = -(k_1 + k_2)\, x_1 + k_2 x_2 ,\\
m_2 x_2'' &= -k_2 (x_2 - x_1) + k_3 (x_3 - x_2) = k_2 x_1 - (k_2 + k_3)\, x_2 + k_3 x_3 ,\\
m_3 x_3'' &= -k_3 (x_3 - x_2) - k_4 x_3 = k_3 x_2 - (k_3 + k_4)\, x_3 .
\end{align*}
We define the matrices
\[ M = \begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix} \qquad \text{and} \qquad K = \begin{bmatrix} -(k_1+k_2) & k_2 & 0 \\ k_2 & -(k_2+k_3) & k_3 \\ 0 & k_3 & -(k_3+k_4) \end{bmatrix} . \]
We write the equation simply as
\[ M \vec{x}\,'' = K \vec{x} . \]
At this point we could introduce 3 new variables and write out a system of 6 equations. We claim this simple setup is easier to handle as a second order system. We will call $\vec{x}$ the displacement vector, $M$ the mass matrix, and $K$ the stiffness matrix.
Exercise 3.6.1: Do this setup for 4 masses (find the matrices $M$ and $K$). Do it for 5 masses. Can you find a prescription to do it for $n$ masses?
As before we will want to “divide by M.” In this case this means computing the inverse of M.
All the masses are nonzero and it is easy to compute the inverse, as the matrix is diagonal.
\[ M^{-1} = \begin{bmatrix} \frac{1}{m_1} & 0 & 0 \\ 0 & \frac{1}{m_2} & 0 \\ 0 & 0 & \frac{1}{m_3} \end{bmatrix} . \]
This fact follows readily from how we multiply diagonal matrices. You should verify that $M M^{-1} = M^{-1} M = I$ as an exercise.
We let $A = M^{-1} K$ and we look at the system $\vec{x}\,'' = M^{-1} K \vec{x}$, or
\[ \vec{x}\,'' = A \vec{x} . \]
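As a quick sanity check of the setup, here is a minimal numpy sketch (my own; the mass and spring values are hypothetical) that assembles $M$ and $K$ for three masses and forms $A = M^{-1}K$.

    import numpy as np

    m = np.array([2.0, 1.0, 3.0])        # hypothetical m1, m2, m3
    k = np.array([4.0, 2.0, 2.0, 4.0])   # hypothetical k1, k2, k3, k4

    M = np.diag(m)
    K = np.array([[-(k[0] + k[1]), k[1],            0.0],
                  [k[1],           -(k[1] + k[2]),  k[2]],
                  [0.0,            k[2],            -(k[2] + k[3])]])

    A = np.linalg.inv(M) @ K     # M is diagonal, so this divides row i of K by m_i
    print(np.linalg.eigvals(A))  # with both walls present these come out negative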
Many real world systems can be modeled by this equation. For simplicity we will keep the given
masses-and-springs setup in mind. We try a solution of the form
\[ \vec{x} = \vec{v} e^{\alpha t} . \]
We note that for this guess, $\vec{x}\,'' = \alpha^2 \vec{v} e^{\alpha t}$. We plug into the equation and get
\[ \alpha^2 \vec{v} e^{\alpha t} = A \vec{v} e^{\alpha t} . \]
We can divide by $e^{\alpha t}$ to get $\alpha^2 \vec{v} = A \vec{v}$. Hence if $\alpha^2$ is an eigenvalue of $A$ and $\vec{v}$ is a corresponding eigenvector, we have found a solution.
In our example, and in many others, it turns out that A has negative real eigenvalues (and
possibly a zero eigenvalue). So we will study only this case here. When an eigenvalue λ is negative,
it means that $\alpha^2 = \lambda$ is negative. Hence there is some real number $\omega$ such that $-\omega^2 = \lambda$. Then $\alpha = \pm i \omega$. The solution we guessed was
\[ \vec{x} = \vec{v} (\cos \omega t + i \sin \omega t) . \]
By again taking real and imaginary parts (note that $\vec{v}$ is real), we again find that $\vec{v} \cos \omega t$ and $\vec{v} \sin \omega t$ are linearly independent solutions. If an eigenvalue is zero, it turns out that $\vec{v}$ and $\vec{v} t$ are solutions, where $\vec{v}$ is the corresponding eigenvector.
Exercise 3.6.2: Show that if $A$ has a zero eigenvalue and $\vec{v}$ is a corresponding eigenvector, then $\vec{x} = \vec{v}\,(a + bt)$ is a solution of $\vec{x}\,'' = A\vec{x}$ for arbitrary constants $a$ and $b$.
Theorem 3.6.1. Let $A$ be an $n \times n$ matrix with $n$ distinct real negative eigenvalues, which we denote by $-\omega_1^2, -\omega_2^2, \ldots, -\omega_n^2$, and corresponding eigenvectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$. Then
\[ \vec{x}(t) = \sum_{i=1}^n \vec{v}_i \left( a_i \cos \omega_i t + b_i \sin \omega_i t \right) \]
is the general solution of
\[ \vec{x}\,'' = A \vec{x} , \]
for some arbitrary constants $a_i$ and $b_i$. If $A$ has a zero eigenvalue and all other eigenvalues are distinct and negative, that is $\omega_1 = 0$, then the general solution becomes
\[ \vec{x}(t) = \vec{v}_1 (a_1 + b_1 t) + \sum_{i=2}^n \vec{v}_i \left( a_i \cos \omega_i t + b_i \sin \omega_i t \right) . \]
Note that we can use this solution and the setup from the introduction of this section even when some of the masses and springs are missing. For example, when there are only 2 masses and 2 springs, simply take the equations for the two masses and set the missing spring constants to zero.
3.6.2 Examples
Example 3.6.1: Suppose we have the system in Figure 3.12, with $m_1 = 2$, $m_2 = 1$, $k_1 = 4$, and $k_2 = 2$.
[Figure 3.12: System of masses and springs.]
The equations we write down are
\[ \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}\,'' = \begin{bmatrix} -(4+2) & 2 \\ 2 & -2 \end{bmatrix} \vec{x} , \]
or
\[ \vec{x}\,'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} . \]
We find the eigenvalues of $A$ to be $\lambda = -1, -4$ (exercise). We find corresponding eigenvectors to be $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ respectively (exercise).
We check the theorem and note that $\omega_1 = 1$ and $\omega_2 = 2$. Hence the general solution is
\[ \vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} (a_1 \cos t + b_1 \sin t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} (a_2 \cos 2t + b_2 \sin 2t) . \]
The two terms in the solution represent the two so-called natural or normal modes of oscilla-
tion. And the two (angular) frequencies are the natural frequencies. The two modes are plotted in
Figure 3.13.
[Figure 3.13: The two modes of the mass spring system. In the left plot the masses are moving in unison; in the right plot the masses are moving in opposite directions.]
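Plots like Figure 3.13 are easy to reproduce. A short matplotlib sketch (mine, not from the text) of the two normal modes:

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0.0, 10.0, 500)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    # First mode: frequency 1, masses in unison with amplitudes 1 and 2.
    ax1.plot(t, np.cos(t), label="x1")
    ax1.plot(t, 2 * np.cos(t), label="x2")
    # Second mode: frequency 2, masses in opposite directions.
    ax2.plot(t, np.cos(2 * t), label="x1")
    ax2.plot(t, -np.cos(2 * t), label="x2")
    for ax in (ax1, ax2):
        ax.legend()
    plt.show()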
Let us write the solution as
\[ \vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos(t - \alpha_1) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos(2t - \alpha_2) . \]
The first term,
\[ \vec{x}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} c_1 \cos(t - \alpha_1) = \begin{bmatrix} c_1 \cos(t - \alpha_1) \\ 2 c_1 \cos(t - \alpha_1) \end{bmatrix} , \]
corresponds to the mode where the masses move synchronously in the same direction.
On the other hand the second term,
\[ \vec{x}_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} c_2 \cos(2t - \alpha_2) = \begin{bmatrix} c_2 \cos(2t - \alpha_2) \\ -c_2 \cos(2t - \alpha_2) \end{bmatrix} , \]
corresponds to the mode where the masses move synchronously but in opposite directions.
The general solution is a combination of the two modes. That is, the initial conditions determine
the amplitude and phase shift of each mode.
Example 3.6.2: Let us do another example. In this example we have two toy rail cars. Car 1 of mass 2 kg is travelling at 3 m/s towards the second rail car of mass 1 kg. There is a bumper on the second rail car that engages when the cars hit (it connects the two cars) and does not let go. The bumper acts like a spring of spring constant k = 2 N/m. The second car is 10 meters from a wall. See Figure 3.14.
[Figure 3.14: The crash of two rail cars.]
We want to ask several questions. At what time after the cars link does the impact with the wall happen? What is the speed of car 2 when it hits the wall?
OK, let us first set the system up. Let us assume that time $t = 0$ is the time when the two cars link up. Let $x_1$ be the displacement of the first car from its position at $t = 0$, and let $x_2$ be the displacement of the second car from its original location. Then the time when $x_2(t) = 10$ is exactly the time when the impact with the wall occurs. For this $t$, $x_2'(t)$ is the speed at impact. This system acts just like the system of the previous example but without $k_1$. Hence the equation is
\[ \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}\,'' = \begin{bmatrix} -2 & 2 \\ 2 & -2 \end{bmatrix} \vec{x} , \]
or
\[ \vec{x}\,'' = \begin{bmatrix} -1 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} . \]
We compute the eigenvalues of $A$. It is not hard to see that the eigenvalues are 0 and $-3$ (exercise). Furthermore, corresponding eigenvectors are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -2 \end{bmatrix}$ respectively (exercise). We note that $\omega_2 = \sqrt{3}$ and we use the second part of the theorem to find our general solution to be
\[ \vec{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} (a_1 + b_1 t) + \begin{bmatrix} 1 \\ -2 \end{bmatrix} \left( a_2 \cos \sqrt{3}\, t + b_2 \sin \sqrt{3}\, t \right) = \begin{bmatrix} a_1 + b_1 t + a_2 \cos \sqrt{3}\, t + b_2 \sin \sqrt{3}\, t \\ a_1 + b_1 t - 2 a_2 \cos \sqrt{3}\, t - 2 b_2 \sin \sqrt{3}\, t \end{bmatrix} . \]
We now apply the initial conditions. The cars start at position 0, so $x_1(0) = 0$ and $x_2(0) = 0$. The first car is travelling at 3 m/s, so $x_1'(0) = 3$, and the second car starts at rest, so $x_2'(0) = 0$. The first condition says
\[ \vec{0} = \vec{x}(0) = \begin{bmatrix} a_1 + a_2 \\ a_1 - 2 a_2 \end{bmatrix} . \]
It is not hard to see that this implies $a_1 = a_2 = 0$. We plug in $a_1$ and $a_2$ and differentiate to get
\[ \vec{x}\,'(t) = \begin{bmatrix} b_1 + \sqrt{3}\, b_2 \cos \sqrt{3}\, t \\ b_1 - 2\sqrt{3}\, b_2 \cos \sqrt{3}\, t \end{bmatrix} . \]
So
\[ \begin{bmatrix} 3 \\ 0 \end{bmatrix} = \vec{x}\,'(0) = \begin{bmatrix} b_1 + \sqrt{3}\, b_2 \\ b_1 - 2\sqrt{3}\, b_2 \end{bmatrix} . \]
It is not hard to solve these two equations to find $b_1 = 2$ and $b_2 = \frac{1}{\sqrt{3}}$. Hence the position of our cars is (until the impact with the wall)
\[ \vec{x} = \begin{bmatrix} 2t + \frac{1}{\sqrt{3}} \sin \sqrt{3}\, t \\[4pt] 2t - \frac{2}{\sqrt{3}} \sin \sqrt{3}\, t \end{bmatrix} . \]
Note how the presence of the zero eigenvalue resulted in a term containing $t$. This means that the cars will be travelling in the positive direction as time grows, which is what we expect.
What we are really interested in is the second expression, the one for $x_2$. We have $x_2(t) = 2t - \frac{2}{\sqrt{3}} \sin \sqrt{3}\, t$. See Figure 3.15 for the plot of $x_2$ versus time.
[Figure 3.15: Position of the second car in time (ignoring the wall).]
Just from the graph we can see that the time of impact will be a little more than 5 seconds from time zero. For this you have to solve the equation $10 = x_2(t) = 2t - \frac{2}{\sqrt{3}} \sin \sqrt{3}\, t$. Using a computer (or even a graphing calculator) we find that $t_{\text{impact}} \approx 5.22$ seconds.
As for the speed, we note that $x_2' = 2 - 2 \cos \sqrt{3}\, t$. At the time of impact (5.22 seconds from $t = 0$) we get $x_2'(t_{\text{impact}}) \approx 3.85$.
The maximum speed is the maximum of $2 - 2 \cos \sqrt{3}\, t$, which is 4. We are travelling at almost the maximum speed when we hit the wall.
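The root-finding step above is easy to reproduce. A minimal scipy sketch (my own; the text only says "using a computer") that computes the impact time and the speed at impact:

    import numpy as np
    from scipy.optimize import brentq

    def x2(t):
        return 2 * t - (2 / np.sqrt(3)) * np.sin(np.sqrt(3) * t)

    def v2(t):
        return 2 - 2 * np.cos(np.sqrt(3) * t)

    # The root of x2(t) - 10 lies between t = 4 and t = 6 (see Figure 3.15).
    t_impact = brentq(lambda t: x2(t) - 10, 4, 6)
    print(t_impact, v2(t_impact))   # roughly 5.22 and 3.85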
Now suppose that Bob is a tiny person sitting on car 2. Bob has a martini in his hand and would like not to spill it. Let us suppose Bob would not spill his martini when the first car links up with car 2, but if car 2 hits the wall at any speed greater than zero, Bob will spill his drink. Suppose Bob can move car 2 a few meters back and forth from the wall (he cannot go all the way to the wall, nor can he get out of the way of the first car). Is there a "safe" distance for him to be at? A distance such that the impact with the wall is at zero speed?
Actually, the answer is yes. From looking at Figure 3.15 on the preceding page, we note the "plateau" between $t = 3$ and $t = 4$. There is a point where the speed is zero. We just need to solve $x_2'(t) = 0$. This is when $\cos \sqrt{3}\, t = 1$, or in other words when $t = \frac{2\pi}{\sqrt{3}}, \frac{4\pi}{\sqrt{3}}$, etc. If we plug in the first of these values, we obtain $x_2\!\left(\frac{2\pi}{\sqrt{3}}\right) = \frac{4\pi}{\sqrt{3}} \approx 7.26$. So a "safe" distance is about 7 and a quarter meters from the wall.
Alternatively, Bob could move away from the wall towards the incoming car, where another safe distance is $\frac{8\pi}{\sqrt{3}} \approx 14.51$, and so on, using all the different $t$ such that $x_2'(t) = 0$. Of course $t = 0$ is always a solution here, corresponding to $x_2 = 0$, but that means standing right at the wall.
3.6.3 Forced oscillations
Finally we move to forced oscillations. Suppose that now our system is
\[ \vec{x}\,'' = A \vec{x} + \vec{F} \cos \omega t . \tag{3.3} \]
That is, we are adding periodic forcing to the system in the direction of the vector $\vec{F}$.
Just like before, this system requires us to find one particular solution $\vec{x}_p$ and add it to the general solution of the associated homogeneous system $\vec{x}_c$ to obtain the general solution to (3.3). Suppose that $\omega$ is not one of the natural frequencies of $\vec{x}\,'' = A\vec{x}$. Then we can guess
\[ \vec{x}_p = \vec{c} \cos \omega t , \]
where $\vec{c}$ is an unknown constant vector. Note that we do not need to use sine since there are only second derivatives. We solve for $\vec{c}$ to find $\vec{x}_p$. This is really just the method of undetermined coefficients for systems. Let us differentiate $\vec{x}_p$ twice to get
\[ \vec{x}_p'' = -\omega^2 \vec{c} \cos \omega t . \]
Now plug into the equation:
\[ -\omega^2 \vec{c} \cos \omega t = A \vec{c} \cos \omega t + \vec{F} \cos \omega t . \]
We cancel the cosine and rearrange to obtain
\[ (A + \omega^2 I)\, \vec{c} = -\vec{F} . \]
So
\[ \vec{c} = (A + \omega^2 I)^{-1} (-\vec{F}) . \]
Of course this requires that $(A + \omega^2 I) = (A - (-\omega^2)\, I)$ is invertible. That matrix is invertible if and only if $-\omega^2$ is not an eigenvalue of $A$. That is true if and only if $\omega$ is not a natural frequency of the system.
Example 3.6.3: Let us take the example in Figure 3.12 on page 112 with the same parameters as before: $m_1 = 2$, $m_2 = 1$, $k_1 = 4$, and $k_2 = 2$. Now suppose that there is a force $2 \cos 3t$ acting on the second cart.
The equation is
\[ \vec{x}\,'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos 3t . \]
We have solved the associated homogeneous equation before and found the complementary solution to be
\[ \vec{x}_c = \begin{bmatrix} 1 \\ 2 \end{bmatrix} (a_1 \cos t + b_1 \sin t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} (a_2 \cos 2t + b_2 \sin 2t) . \]
We note that the natural frequencies are 1 and 2. Hence 3 is not a natural frequency and we can try $\vec{c} \cos 3t$. We invert $(A + 3^2 I)$:
\[ \left( \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} + 3^2 I \right)^{-1} = \begin{bmatrix} 6 & 1 \\ 2 & 7 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\[2pt] \frac{-1}{20} & \frac{3}{20} \end{bmatrix} . \]
Hence,
\[ \vec{c} = (A + \omega^2 I)^{-1} (-\vec{F}) = \begin{bmatrix} \frac{7}{40} & \frac{-1}{40} \\[2pt] \frac{-1}{20} & \frac{3}{20} \end{bmatrix} \begin{bmatrix} 0 \\ -2 \end{bmatrix} = \begin{bmatrix} \frac{1}{20} \\[2pt] \frac{-3}{10} \end{bmatrix} . \]
Combining with the general solution of the associated homogeneous problem, we get that the general solution to $\vec{x}\,'' = A\vec{x} + \vec{F} \cos \omega t$ is
\[ \vec{x} = \vec{x}_c + \vec{x}_p = \begin{bmatrix} 1 \\ 2 \end{bmatrix} (a_1 \cos t + b_1 \sin t) + \begin{bmatrix} 1 \\ -1 \end{bmatrix} (a_2 \cos 2t + b_2 \sin 2t) + \begin{bmatrix} \frac{1}{20} \\[2pt] \frac{-3}{10} \end{bmatrix} \cos 3t . \]
The constants $a_1$, $a_2$, $b_1$, and $b_2$ must then be solved for, given the initial conditions.
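The computation of $\vec{c}$ amounts to solving one linear system, which is easy to check numerically. A minimal numpy sketch (mine):

    import numpy as np

    A = np.array([[-3.0, 1.0], [2.0, -2.0]])
    F = np.array([0.0, 2.0])
    omega = 3.0

    # Solve (A + omega^2 I) c = -F rather than forming the inverse explicitly.
    c = np.linalg.solve(A + omega**2 * np.eye(2), -F)
    print(c)   # approximately [0.05, -0.3], that is, [1/20, -3/10]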
If $\omega$ is a natural frequency of the system, resonance occurs, because then you have to try a particular solution of the form
\[ \vec{x}_p = \vec{c}\, t \sin \omega t + \vec{d} \cos \omega t . \]
That is assuming that all eigenvalues of the coefficient matrix are distinct. Note that the amplitude of this solution grows without bound as $t$ grows.
3.6.4 Exercises
Exercise 3.6.3: Find a particular solution to
\[ \vec{x}\,'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix} \vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix} \cos 2t . \]
Exercise 3.6.4: Let us take the example in Figure 3.12 on page 112 with the same parameters as before: $m_1 = 2$, $k_1 = 4$, and $k_2 = 2$, except for $m_2$, which is unknown. Suppose that there is a force $\cos 5t$ acting on the first mass. Find an $m_2$ such that there exists a particular solution where the first mass does not move.
Note: This idea is called dynamic damping. In practice there will be a small amount of damp-
ing and so any transient solution will disappear and after long enough time, the first mass will
always come to a stop.
Exercise 3.6.5: Let us take Example 3.6.2 on page 114, but suppose that at the time of impact, cart 2 is moving to the left at a speed of 3 m/s. a) Find the behavior of the system after linkup. b) Will the second car hit the wall, or will it be moving away from the wall as time goes on? c) At what speed would the first car have to be travelling for the system to essentially stay in place after linkup?
Exercise 3.6.6: Let us take the example in Figure 3.12 on page 112 with parameters $m_1 = m_2 = 1$, $k_1 = k_2 = 1$. Does there exist a set of initial conditions for which the first cart moves but the second cart does not? If so, find those conditions; if not, argue why not.
3.7 Multiple eigenvalues
Note: 1–2 lectures, §5.4 in EP
It may very well happen that a matrix has some “repeated” eigenvalues. That is, the character-
istic equation det(A − λI) = 0 may have repeated roots. As we have said before, this is actually
unlikely to happen for a random matrix. If you take a small perturbation of A (you change the
entries of A slightly) you will get a matrix with distinct eigenvalues. As any system you will want
to solve in practice is an approximation to reality anyway, it is not indispensable to know how to
solve these corner cases. But it may happen on occasion that it is easier or desirable to solve such
a system directly.
3.7.1 Geometric multiplicity
Take the diagonal matrix
\[ A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix} . \]
$A$ has an eigenvalue 3 of multiplicity 2. We usually call the multiplicity of the eigenvalue in the characteristic equation the algebraic multiplicity. In this case, there exist 2 linearly independent eigenvectors, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. This means that the so-called geometric multiplicity of this eigenvalue is 2.
In all the theorems where we required a matrix to have $n$ distinct eigenvalues, we only really needed $n$ linearly independent eigenvectors. For example, $\vec{x}\,' = A\vec{x}$ has the general solution
\[ \vec{x} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} + c_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^{3t} . \]
Let us restate the theorem about real eigenvalues. In the following theorem we will repeat eigen-
values according to (algebraic) multiplicity. So for A above we would say that it has eigenvalues 3
and 3.
Theorem 3.7.1. Take $\vec{x}\,' = P\vec{x}$. If $P$ is $n \times n$ and has $n$ real eigenvalues (not necessarily distinct), $\lambda_1, \ldots, \lambda_n$, and if there are $n$ linearly independent corresponding eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$, then the general solution to the ODE can be written as
\[ \vec{x} = c_1 \vec{v}_1 e^{\lambda_1 t} + c_2 \vec{v}_2 e^{\lambda_2 t} + \cdots + c_n \vec{v}_n e^{\lambda_n t} . \]
The geometric multiplicity of an eigenvalue of algebraic multiplicity n is equal to the number of
linearly independent eigenvectors we can find. It is not hard to see that the geometric multiplicity is
always less than or equal to the algebraic multiplicity. Above we, therefore, handled the case when
these two numbers are equal. If the geometric multiplicity is equal to the algebraic multiplicity we
say the eigenvalue is complete.
The hypothesis of the theorem could, therefore, be stated as saying that if all the eigenvalues
of P are complete then there are n linearly independent eigenvectors and thus we have the given
general solution.
Note that if the geometric multiplicity of an eigenvalue is 2 or greater, then the set of linearly independent eigenvectors is not unique up to multiples as it was before. For example, for the diagonal matrix $A$ above we could also pick eigenvectors $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$, or in fact any pair of two linearly independent vectors.
3.7.2 Defective eigenvalues
If an $n \times n$ matrix has fewer than $n$ linearly independent eigenvectors, it is said to be deficient. Then there is at least one eigenvalue with an algebraic multiplicity that is higher than its geometric multiplicity. We call this eigenvalue defective, and the difference between the two multiplicities we call the defect.
Example 3.7.1: The matrix
\[ \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \]
has an eigenvalue 3 of algebraic multiplicity 2. Let us try to compute the eigenvectors.
\[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \vec{0} . \]
We must have $v_2 = 0$. Hence any eigenvector is of the form $\begin{bmatrix} v_1 \\ 0 \end{bmatrix}$. Any two such vectors are linearly dependent, and hence the geometric multiplicity of the eigenvalue is 1. Therefore, the defect is 1, and we can no longer apply the eigenvalue method directly to a system of ODEs with such a coefficient matrix.
The key observation we will use here is that if $\lambda$ is an eigenvalue of $A$ of algebraic multiplicity $m$, then we will be able to find $m$ linearly independent vectors solving the equation $(A - \lambda I)^m \vec{v} = \vec{0}$. We will call these the generalized eigenvectors.
Let us continue with the example $A = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}$ and the equation $\vec{x}\,' = A\vec{x}$. We have an eigenvalue $\lambda = 3$ of (algebraic) multiplicity 2 and defect 1. We have found one eigenvector $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. We have the solution
\[ \vec{x}_1 = \vec{v}_1 e^{3t} . \]
In this case, let us try (in the spirit of repeated roots of the characteristic equation for a single equation) another solution of the form
\[ \vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t)\, e^{3t} . \]
We differentiate to get
\[ \vec{x}_2' = \vec{v}_1 e^{3t} + 3(\vec{v}_2 + \vec{v}_1 t)\, e^{3t} = (3\vec{v}_2 + \vec{v}_1)\, e^{3t} + 3\vec{v}_1 t e^{3t} . \]
$\vec{x}_2'$ must equal $A\vec{x}_2$, and
\[ A\vec{x}_2 = A(\vec{v}_2 + \vec{v}_1 t)\, e^{3t} = A\vec{v}_2 e^{3t} + A\vec{v}_1 t e^{3t} . \]
By looking at the coefficients of $e^{3t}$ and $te^{3t}$ we see $3\vec{v}_2 + \vec{v}_1 = A\vec{v}_2$ and $3\vec{v}_1 = A\vec{v}_1$. This means that
\[ (A - 3I)\,\vec{v}_1 = \vec{0}, \qquad \text{and} \qquad (A - 3I)\,\vec{v}_2 = \vec{v}_1 . \]
If these two equations are satisfied, then $\vec{x}_2$ is a solution. We know the first equation is satisfied because $\vec{v}_1$ is an eigenvector. If we plug the second equation into the first we find that
\[ (A - 3I)(A - 3I)\,\vec{v}_2 = \vec{0}, \qquad \text{or} \qquad (A - 3I)^2\, \vec{v}_2 = \vec{0} . \]
If we can, therefore, find a $\vec{v}_2$ that solves $(A - 3I)^2\, \vec{v}_2 = \vec{0}$ and such that $(A - 3I)\,\vec{v}_2 = \vec{v}_1$, we are done. This is just a bunch of linear equations to solve, and we are by now very good at that.
We notice that in this simple case $(A - 3I)^2$ is just the zero matrix (exercise). Hence, any vector $\vec{v}_2$ solves $(A - 3I)^2\, \vec{v}_2 = \vec{0}$. We just have to make sure that $(A - 3I)\,\vec{v}_2 = \vec{v}_1$. Write
\[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} . \]
By inspection we see that letting $a = 0$ ($a$ could be anything in fact) and $b = 1$ does the job. Hence we can take $\vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. So our general solution to $\vec{x}\,' = A\vec{x}$ is
\[ \vec{x} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} + c_2 \left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} t \right) e^{3t} = \begin{bmatrix} c_1 e^{3t} + c_2 t e^{3t} \\ c_2 e^{3t} \end{bmatrix} . \]
Let us check that we really do have the solution. First $x_1' = 3c_1 e^{3t} + c_2 e^{3t} + 3c_2 t e^{3t} = 3x_1 + x_2$. Good. Now $x_2' = 3c_2 e^{3t} = 3x_2$. Good.
Note that the system $\vec{x}\,' = A\vec{x}$ can be solved in a simpler way, since $A$ is a triangular matrix. In particular, the equation for $x_2$ does not depend on $x_1$.
Exercise 3.7.1: Solve
\[ \vec{x}\,' = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \vec{x} \]
by first solving for $x_2$ and then for $x_1$ independently. Now check that you got the same solution as we did above.
Let us describe the general algorithm. First, suppose $\lambda$ is an eigenvalue of multiplicity 2 and defect 1. Find an eigenvector $\vec{v}_1$ of $\lambda$. Then find a vector $\vec{v}_2$ such that
\begin{align*}
(A - \lambda I)^2\, \vec{v}_2 &= \vec{0} ,\\
(A - \lambda I)\, \vec{v}_2 &= \vec{v}_1 .
\end{align*}
This gives us two linearly independent solutions
\begin{align*}
\vec{x}_1 &= \vec{v}_1 e^{\lambda t} ,\\
\vec{x}_2 &= (\vec{v}_2 + \vec{v}_1 t)\, e^{\lambda t} .
\end{align*}
This machinery can also be generalized to larger matrices and higher defects. We will not go over the details, but let us just state the ideas. Suppose that $A$ has an eigenvalue $\lambda$ of multiplicity $m$. We find vectors such that
\[ (A - \lambda I)^k\, \vec{v} = \vec{0}, \qquad \text{but} \qquad (A - \lambda I)^{k-1}\, \vec{v} \neq \vec{0} . \]
Such vectors are called generalized eigenvectors. For every eigenvector $\vec{v}_1$ we find a chain of generalized eigenvectors $\vec{v}_2$ through $\vec{v}_k$ such that:
\begin{align*}
(A - \lambda I)\,\vec{v}_1 &= \vec{0} ,\\
(A - \lambda I)\,\vec{v}_2 &= \vec{v}_1 ,\\
&\ \,\vdots \\
(A - \lambda I)\,\vec{v}_k &= \vec{v}_{k-1} .
\end{align*}
We form the linearly independent solutions
\begin{align*}
\vec{x}_1 &= \vec{v}_1 e^{\lambda t} ,\\
\vec{x}_2 &= (\vec{v}_2 + \vec{v}_1 t)\, e^{\lambda t} ,\\
&\ \,\vdots \\
\vec{x}_k &= \left( \vec{v}_k + \vec{v}_{k-1} t + \cdots + \vec{v}_2 \frac{t^{k-2}}{(k-2)!} + \vec{v}_1 \frac{t^{k-1}}{(k-1)!} \right) e^{\lambda t} .
\end{align*}
We proceed to find chains until we form m linearly independent solutions (m is the multiplicity).
You may need to find several chains for every eigenvalue.
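For the $2 \times 2$ worked example above, the chain computation can be checked symbolically. A minimal sympy sketch (mine; sympy is an assumption, not part of the text):

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[3, 1], [0, 3]])
    lam = 3
    N = A - lam * sp.eye(2)          # here N**2 is the zero matrix

    v1 = sp.Matrix([1, 0])           # eigenvector: N v1 = 0
    v2 = sp.Matrix([0, 1])           # generalized eigenvector: N v2 = v1
    assert N * v1 == sp.zeros(2, 1) and N * v2 == v1

    x1 = v1 * sp.exp(lam * t)
    x2 = (v2 + v1 * t) * sp.exp(lam * t)
    # Verify that both solve x' = A x.
    assert sp.simplify(x1.diff(t) - A * x1) == sp.zeros(2, 1)
    assert sp.simplify(x2.diff(t) - A * x2) == sp.zeros(2, 1)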
3.7.3 Exercises
Exercise 3.7.2: Let $A = \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}$. Solve $\vec{x}\,' = A\vec{x}$.
Exercise 3.7.3: Let $A = \begin{bmatrix} 5 & -4 & 4 \\ 0 & 3 & 0 \\ -2 & 4 & -1 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.
Exercise 3.7.4: Let $A = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$ in two different ways and verify you get the same answer.
Exercise 3.7.5: Let $A = \begin{bmatrix} 0 & 1 & 2 \\ -1 & -2 & -2 \\ -4 & 4 & 7 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.
Exercise 3.7.6: Let $A = \begin{bmatrix} 0 & 4 & -2 \\ -1 & -4 & 1 \\ 0 & 0 & -2 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.
Exercise 3.7.7: Let $A = \begin{bmatrix} 2 & 1 & -1 \\ -1 & 0 & 2 \\ -1 & -2 & 4 \end{bmatrix}$. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve $\vec{x}\,' = A\vec{x}$.
Exercise 3.7.8: Suppose that A is a 2 × 2 matrix with a repeated eigenvalue λ. Suppose that there
are two linearly independent eigenvectors. Show that the matrix is diagonal, in particular A = λI.
3.8 Matrix exponentials
Note: 2 lectures, §5.5 in EP
3.8.1 Definition
In this section we present a different way of finding the fundamental matrix solution of a system.
Suppose that we have the constant coefficient equation
\[ \vec{x}\,' = P\vec{x} , \]
as usual. Now suppose that this was one equation ($P$ is a number, or a $1 \times 1$ matrix). Then the solution to this would be
\[ x = e^{Pt} . \]
It turns out the same computation works for matrices when we define $e^{Pt}$ properly. First let us write down the Taylor series for $e^{at}$ for some number $a$:
\[ e^{at} = 1 + at + \frac{(at)^2}{2} + \frac{(at)^3}{6} + \frac{(at)^4}{24} + \cdots = \sum_{k=0}^{\infty} \frac{(at)^k}{k!} . \]
Recall $k! = 1 \cdot 2 \cdot 3 \cdots k$, and $0! = 1$. Now if we differentiate this series, we get
\[ a + a^2 t + \frac{a^3 t^2}{2} + \frac{a^4 t^3}{6} + \cdots = a \left( 1 + at + \frac{(at)^2}{2} + \frac{(at)^3}{6} + \cdots \right) = a e^{at} . \]
Maybe we can try the same trick here. For an $n \times n$ matrix $A$ we define the matrix exponential as
\[ e^A \overset{\text{def}}{=} I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \cdots + \frac{1}{k!} A^k + \cdots \]
Let us not worry about convergence; the series really does always converge. We usually write $Pt$ as $tP$ by convention when $P$ is a matrix. With this small change and by the exact same calculation as above we have that
\[ \frac{d}{dt} \left( e^{tP} \right) = P e^{tP} . \]
Now $P$ and hence $e^{tP}$ is an $n \times n$ matrix. What we are looking for is a vector. We note that in the $1 \times 1$ case we would at this point multiply by an arbitrary constant to get the general solution. In the matrix case we multiply by a column vector $\vec{c}$.
Theorem 3.8.1. Let $P$ be an $n \times n$ matrix. Then the general solution to $\vec{x}\,' = P\vec{x}$ is
\[ \vec{x} = e^{tP} \vec{c} , \]
where $\vec{c}$ is an arbitrary constant vector. In fact $\vec{x}(0) = \vec{c}$.
Let us check:
\[ \frac{d}{dt} \vec{x} = \frac{d}{dt} \left( e^{tP} \vec{c} \right) = P e^{tP} \vec{c} = P \vec{x} . \]
Hence $e^{tP}$ is the fundamental matrix solution of the homogeneous system. If we find a way to compute the matrix exponential, we will have another method of solving constant coefficient homogeneous systems. It also makes it easy to solve for initial conditions. To solve $\vec{x}\,' = A\vec{x}$, $\vec{x}(0) = \vec{b}$, we take the solution
\[ \vec{x} = e^{tA} \vec{b} . \]
This equation follows because $e^{0A} = I$, so $\vec{x}(0) = e^{0A} \vec{b} = \vec{b}$.
We mention a drawback of matrix exponentials. In general $e^{A+B} \neq e^A e^B$. The trouble is that matrices do not commute; that is, in general $AB \neq BA$. If you try to prove $e^{A+B} \neq e^A e^B$ using the Taylor series, you will see why the lack of commutativity becomes a problem. However, it is still true that if $AB = BA$, that is, if $A$ and $B$ commute, then $e^{A+B} = e^A e^B$. We will find this fact useful. Let us restate this as a theorem to make a point.
Theorem 3.8.2. If $AB = BA$, then $e^{A+B} = e^A e^B$. Otherwise, $e^{A+B} \neq e^A e^B$ in general.
3.8.2 Simple cases
In some instances it may work to just plug into the series definition. Suppose the matrix is diagonal. For example, $D = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}$. Then
\[ D^k = \begin{bmatrix} a^k & 0 \\ 0 & b^k \end{bmatrix} , \]
and
\[ e^D = I + D + \frac{1}{2} D^2 + \frac{1}{6} D^3 + \cdots = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} + \frac{1}{2} \begin{bmatrix} a^2 & 0 \\ 0 & b^2 \end{bmatrix} + \frac{1}{6} \begin{bmatrix} a^3 & 0 \\ 0 & b^3 \end{bmatrix} + \cdots = \begin{bmatrix} e^a & 0 \\ 0 & e^b \end{bmatrix} . \]
So by this rationale we have that
\[ e^I = \begin{bmatrix} e & 0 \\ 0 & e \end{bmatrix} \qquad \text{and} \qquad e^{aI} = \begin{bmatrix} e^a & 0 \\ 0 & e^a \end{bmatrix} . \]
This makes exponentials of certain other matrices easy to compute. Notice for example that the matrix $A = \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}$ can be written as $2I + B$ where $B = \begin{bmatrix} 3 & -3 \\ 3 & -3 \end{bmatrix}$. Notice that $2I$ and $B$ commute, and that $B^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$. So $B^k = 0$ for all $k \geq 2$. Therefore, $e^B = I + B$. Suppose we actually want to compute $e^{tA}$. The matrices $2tI$ and $tB$ still commute (exercise: check this) and $e^{tB} = I + tB$, since $(tB)^2 = t^2 B^2 = 0$. We write
\begin{align*}
e^{tA} = e^{2tI + tB} = e^{2tI} e^{tB} &= \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{2t} \end{bmatrix} (I + tB) \\
&= \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{2t} \end{bmatrix} \begin{bmatrix} 1 + 3t & -3t \\ 3t & 1 - 3t \end{bmatrix} = \begin{bmatrix} (1 + 3t)\, e^{2t} & -3t e^{2t} \\ 3t e^{2t} & (1 - 3t)\, e^{2t} \end{bmatrix} .
\end{align*}
So we have found the fundamental matrix solution for the system $\vec{x}\,' = A\vec{x}$. Note that this matrix has a repeated eigenvalue with a defect; there is only one eigenvector for the eigenvalue 2. So we have found a perhaps easier way to handle this case. In fact, if a matrix $A$ is $2 \times 2$ and has an eigenvalue $\lambda$ of multiplicity 2, then either $A$ is diagonal, or $A = \lambda I + B$ where $B^2 = 0$. This is a good exercise.
Exercise 3.8.1: Suppose that $A$ is $2 \times 2$ and $\lambda$ is the only eigenvalue. Show that $(A - \lambda I)^2 = 0$, and therefore that we can write $A = \lambda I + B$, where $B^2 = 0$. Hint: First write down what it means for the eigenvalue to be of multiplicity 2. You will get an equation for the entries. Now compute the square of $B$.
Matrices $B$ such that $B^k = 0$ for some $k$ are called nilpotent. Computation of the matrix exponential for nilpotent matrices is easy: just write down the first $k$ terms of the Taylor series.
3.8.3 General matrices
In general, the exponential is not as easy to compute as above. We cannot usually write a matrix as a sum of commuting matrices where the exponential is simple for each one. But fear not, it is still not too difficult provided we can find enough eigenvectors. First we need the following interesting result about matrix exponentials. For any square matrix $A$ and any invertible matrix $B$, we have
\[ e^{BAB^{-1}} = B e^A B^{-1} . \]
This can be seen by writing down the Taylor series. First note that
\[ (BAB^{-1})^2 = BAB^{-1}BAB^{-1} = BAIAB^{-1} = BA^2B^{-1} . \]
And hence by the same reasoning $(BAB^{-1})^k = BA^kB^{-1}$. Now write down the Taylor series for $e^{BAB^{-1}}$:
\begin{align*}
e^{BAB^{-1}} &= I + BAB^{-1} + \frac{1}{2} (BAB^{-1})^2 + \frac{1}{6} (BAB^{-1})^3 + \cdots \\
&= BB^{-1} + BAB^{-1} + \frac{1}{2} BA^2B^{-1} + \frac{1}{6} BA^3B^{-1} + \cdots \\
&= B \left( I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \cdots \right) B^{-1} \\
&= B e^A B^{-1} .
\end{align*}
Now we will write a general matrix $A$ as $EDE^{-1}$, where $D$ is diagonal. This procedure is called diagonalization. If we can do that, the computation of the exponential becomes easy. Adding $t$ into the mix, we see that
\[ e^{tA} = E e^{tD} E^{-1} . \]
Now to do this we will need $n$ linearly independent eigenvectors of $A$. Otherwise this method does not work and we need to be trickier, but we will not get into such details in this course. We let $E$ be the matrix with the eigenvectors as columns. That is, let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues and let $\vec{v}_1, \ldots, \vec{v}_n$ be the eigenvectors; then $E = [\, \vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n \,]$. Let $D$ be the diagonal matrix with the eigenvalues on the main diagonal. That is,
\[ D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} . \]
Now we write
\begin{align*}
AE &= A [\, \vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n \,] \\
&= [\, A\vec{v}_1 \ A\vec{v}_2 \ \cdots \ A\vec{v}_n \,] \\
&= [\, \lambda_1 \vec{v}_1 \ \lambda_2 \vec{v}_2 \ \cdots \ \lambda_n \vec{v}_n \,] \\
&= [\, \vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n \,]\, D \\
&= ED .
\end{align*}
Now the columns of $E$ are linearly independent as these are the eigenvectors of $A$. Hence $E$ is invertible. Since $AE = ED$, we multiply on the right by $E^{-1}$ and we get
\[ A = EDE^{-1} . \]
This means that $e^A = E e^D E^{-1}$. With $t$ it turns into
\[ e^{tA} = E e^{tD} E^{-1} = E \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix} E^{-1} . \tag{3.4} \]
The formula (3.4), therefore, gives the recipe for computing the fundamental matrix solution $e^{tA}$ for the system $\vec{x}\,' = A\vec{x}$, in the case where we have $n$ linearly independent eigenvectors.
Notice that this computation still works when the eigenvalues and eigenvectors are complex,
though then you will have to compute with complex numbers. Note that it is clear from the defini-
tion that if A is real, then e
tA
is real. So you will only need complex numbers in the computation
and you may need to apply Euler’s formula to simplify the result. If simplified properly the final
matrix will not have any complex numbers in it.
Example 3.8.1: Compute the fundamental matrix solution using the matrix exponential for the system
\[ \begin{bmatrix} x \\ y \end{bmatrix}' = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} . \]
Then compute the particular solution for the initial conditions $x(0) = 4$ and $y(0) = 2$.
Let $A$ be the coefficient matrix $\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$. We first compute (exercise) that the eigenvalues are 3 and $-1$ and corresponding eigenvectors are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. Hence we write
\begin{align*}
e^{tA} &= \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1} \\
&= \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} e^{3t} & 0 \\ 0 & e^{-t} \end{bmatrix} \frac{-1}{2} \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} \\
&= \frac{-1}{2} \begin{bmatrix} e^{3t} & e^{-t} \\ e^{3t} & -e^{-t} \end{bmatrix} \begin{bmatrix} -1 & -1 \\ -1 & 1 \end{bmatrix} \\
&= \frac{-1}{2} \begin{bmatrix} -e^{3t} - e^{-t} & -e^{3t} + e^{-t} \\ -e^{3t} + e^{-t} & -e^{3t} - e^{-t} \end{bmatrix} = \begin{bmatrix} \frac{e^{3t} + e^{-t}}{2} & \frac{e^{3t} - e^{-t}}{2} \\[4pt] \frac{e^{3t} - e^{-t}}{2} & \frac{e^{3t} + e^{-t}}{2} \end{bmatrix} .
\end{align*}
The initial conditions are $x(0) = 4$ and $y(0) = 2$. Hence, by the property that $e^{0A} = I$, the particular solution we are looking for is $e^{tA} \vec{b}$, where $\vec{b} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}$. That is,
\[ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \frac{e^{3t} + e^{-t}}{2} & \frac{e^{3t} - e^{-t}}{2} \\[4pt] \frac{e^{3t} - e^{-t}}{2} & \frac{e^{3t} + e^{-t}}{2} \end{bmatrix} \begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 2e^{3t} + 2e^{-t} + e^{3t} - e^{-t} \\ 2e^{3t} - 2e^{-t} + e^{3t} + e^{-t} \end{bmatrix} = \begin{bmatrix} 3e^{3t} + e^{-t} \\ 3e^{3t} - e^{-t} \end{bmatrix} . \]
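This hand computation is easy to double-check numerically. A minimal scipy sketch (my own check, not part of the text):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0], [2.0, 1.0]])
    b = np.array([4.0, 2.0])

    t = 0.7   # any test value of t
    numeric = expm(t * A) @ b
    by_hand = np.array([3 * np.exp(3 * t) + np.exp(-t),
                        3 * np.exp(3 * t) - np.exp(-t)])
    assert np.allclose(numeric, by_hand)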
3.8.4 Fundamental matrix solutions
We note that if you can compute the fundamental matrix solution in a different way, you can use it to find the matrix exponential $e^{tA}$. The fundamental matrix solution of a system of ODEs is not unique. The exponential is the fundamental matrix solution with the property that at $t = 0$ we get the identity matrix. So we must find the right fundamental matrix solution. Let $X$ be any fundamental matrix solution to $\vec{x}\,' = A\vec{x}$. Then we claim
\[ e^{tA} = X(t)\, [X(0)]^{-1} . \]
Obviously, if we plug $t = 0$ into $X(t)\,[X(0)]^{-1}$ we get the identity. It is not hard to see that we can multiply a fundamental matrix solution on the right by any constant invertible matrix and still get a fundamental matrix solution. All we are doing is changing the arbitrary constants in the general solution $\vec{x}(t) = X(t)\,\vec{c}$.
3.8.5 Approximations
If you think about it, the computation of any fundamental matrix solution X using the eigenvalue
method is just as difficult as computation of e
tA
. So perhaps we did not gain much by this new tool.
However, the Taylor series expansion actually gives us a very easy way to approximate solutions,
which the eigenvalue method did not.
The simplest thing we can do is to just compute the series up to a certain number of terms. There are better ways to approximate the exponential†. In many cases, however, a few terms of the Taylor series give a reasonable approximation for the exponential and may suffice for the application. For example, let us compute the first 4 terms of the series for the matrix $A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$.
\begin{align*}
e^{tA} \approx I + tA + \frac{t^2}{2} A^2 + \frac{t^3}{6} A^3 &= I + t \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} + t^2 \begin{bmatrix} \frac{5}{2} & 2 \\ 2 & \frac{5}{2} \end{bmatrix} + t^3 \begin{bmatrix} \frac{13}{6} & \frac{7}{3} \\ \frac{7}{3} & \frac{13}{6} \end{bmatrix} \\
&= \begin{bmatrix} 1 + t + \frac{5}{2} t^2 + \frac{13}{6} t^3 & 2t + 2t^2 + \frac{7}{3} t^3 \\[2pt] 2t + 2t^2 + \frac{7}{3} t^3 & 1 + t + \frac{5}{2} t^2 + \frac{13}{6} t^3 \end{bmatrix} .
\end{align*}
.
Just like the Taylor series approximation for the scalar version, the approximation will be better
for small t and worse for larger t. For larger t, you will generally have to compute more terms.
Let us see how we stack up against the real solution with t = 0.1. The approximate solution is
approximately (rounded to 8 decimal places)
\[ e^{0.1\,A} \approx I + 0.1\,A + \frac{0.1^2}{2} A^2 + \frac{0.1^3}{6} A^3 = \begin{bmatrix} 1.12716667 & 0.22233333 \\ 0.22233333 & 1.12716667 \end{bmatrix} . \]
And plugging $t = 0.1$ into the real solution (rounded to 8 decimal places) we get
\[ e^{0.1\,A} = \begin{bmatrix} 1.12734811 & 0.22251069 \\ 0.22251069 & 1.12734811 \end{bmatrix} . \]
This is not bad at all. However, if you take the same approximation for $t = 1$, you get (using the Taylor series)
\[ \begin{bmatrix} 6.66666667 & 6.33333333 \\ 6.33333333 & 6.66666667 \end{bmatrix} , \]
while the real value is (again rounded to 8 decimal places)
\[ \begin{bmatrix} 10.22670818 & 9.85882874 \\ 9.85882874 & 10.22670818 \end{bmatrix} . \]
So the approximation is not very good once we get up to $t = 1$. To get a good approximation at $t = 1$ (say, to 2 decimal places) you would need to go up to the 11th power (exercise).
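The truncated series is a few lines of code, which makes the experiment above easy to repeat. A minimal numpy sketch (mine) that reproduces the numbers in this section:

    import numpy as np
    from scipy.linalg import expm

    def expm_taylor(A, t, terms):
        """Approximate e^(tA) by the first `terms` terms of the Taylor series."""
        result = np.eye(A.shape[0])
        power = np.eye(A.shape[0])
        for k in range(1, terms):
            power = power @ (t * A) / k   # builds (tA)^k / k! incrementally
            result = result + power
        return result

    A = np.array([[1.0, 2.0], [2.0, 1.0]])
    print(expm_taylor(A, 0.1, 4))   # matches the 4-term values above
    print(expm_taylor(A, 1.0, 4))   # poor at t = 1
    print(expm(1.0 * A))            # the true value for comparison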
3.8.6 Exercises
Exercise 3.8.2: Find a fundamental matrix solution for the system $x' = 3x + y$, $y' = x + 3y$.
Exercise 3.8.3: Find $e^{At}$ for the matrix $A = \begin{bmatrix} 2 & 3 \\ 0 & 2 \end{bmatrix}$.
† C. Moler and C.F. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later, SIAM Review 45 (1), 2003, 3–49.
Exercise 3.8.4: Find a fundamental matrix solution for the system $x_1' = 7x_1 + 4x_2 + 12x_3$, $x_2' = x_1 + 2x_2 + x_3$, $x_3' = -3x_1 - 2x_2 - 5x_3$. Then find the solution that satisfies $\vec{x}(0) = \begin{bmatrix} 0 \\ 1 \\ -2 \end{bmatrix}$.
Exercise 3.8.5: Compute the matrix exponential $e^A$ for $A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}$.
Exercise 3.8.6: Suppose $AB = BA$ (the matrices commute). Show that $e^{A+B} = e^A e^B$.
Exercise 3.8.7: Use Exercise 3.8.6 to show that $(e^A)^{-1} = e^{-A}$. In particular this means that $e^A$ is invertible even if $A$ is not.
Exercise 3.8.8: Suppose $A$ is a matrix with eigenvalues $-1$, 1, and corresponding eigenvectors $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$. a) Find the matrix $A$ with these properties. b) Find the fundamental matrix solution to $\vec{x}\,' = A\vec{x}$. c) Solve the system with initial conditions $\vec{x}(0) = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$.
Exercise 3.8.9: Suppose that A is an n × n matrix with a repeated eigenvalue λ of multiplicity n.
Suppose that there are n linearly independent eigenvectors. Show that the matrix is diagonal, in
particular A = λI. Hint: Use diagonalization and the fact that the identity matrix commutes with
every other matrix.
3.9 Nonhomogeneous systems
Note: 3 lectures (may have to skip a little), somewhat different from §5.6 in EP
3.9.1 First order constant coefficient
Integrating factor
Let us first focus on the nonhomogeneous first order equation
\[ \vec{x}\,'(t) = A\vec{x}(t) + \vec{f}(t) , \]
where $A$ is a constant matrix. The first method we will look at is the integrating factor method. For simplicity we rewrite the equation as
\[ \vec{x}\,'(t) + P\vec{x}(t) = \vec{f}(t) , \]
where $P = -A$. We multiply both sides of the equation by $e^{tP}$ (being mindful that we are dealing with matrices, which may not commute) to obtain
\[ e^{tP} \vec{x}\,'(t) + e^{tP} P \vec{x}(t) = e^{tP} \vec{f}(t) . \]
We notice that $P e^{tP} = e^{tP} P$. This fact follows by writing down the series definition of $e^{tP}$:
\[ P e^{tP} = P \left( I + tP + \frac{1}{2} (tP)^2 + \cdots \right) = P + tP^2 + \frac{1}{2} t^2 P^3 + \cdots = \left( I + tP + \frac{1}{2} (tP)^2 + \cdots \right) P = e^{tP} P . \]
We have already seen that $\frac{d}{dt} \left( e^{tP} \right) = P e^{tP}$. Hence,
\[ \frac{d}{dt} \left( e^{tP} \vec{x}(t) \right) = e^{tP} \vec{f}(t) . \]
We can now integrate. That is, we integrate each component of the vector separately:
\[ e^{tP} \vec{x}(t) = \int e^{tP} \vec{f}(t)\, dt + \vec{c} . \]
Recall from Exercise 3.8.7 that $(e^{tP})^{-1} = e^{-tP}$. Therefore, we obtain
\[ \vec{x}(t) = e^{-tP} \int e^{tP} \vec{f}(t)\, dt + e^{-tP} \vec{c} . \]
Perhaps the formula is better understood with a definite integral, in which case it is also easy to satisfy the initial conditions. Suppose we have the equation with initial condition
\[ \vec{x}\,'(t) + P\vec{x}(t) = \vec{f}(t), \qquad \vec{x}(0) = \vec{b} . \]
The solution can then be written as
\[ \vec{x}(t) = e^{-tP} \int_0^t e^{sP} \vec{f}(s)\, ds + e^{-tP} \vec{b} . \tag{3.5} \]
Again, the integration means that each component of the vector $e^{sP} \vec{f}(s)$ is integrated separately. It is not hard to see that (3.5) really does satisfy the initial condition $\vec{x}(0) = \vec{b}$:
\[ \vec{x}(0) = e^{-0P} \int_0^0 e^{sP} \vec{f}(s)\, ds + e^{-0P} \vec{b} = I \vec{b} = \vec{b} . \]
Example 3.9.1: Suppose that we have the system
\begin{align*}
x_1' + 5x_1 - 3x_2 &= e^t ,\\
x_2' + 3x_1 - x_2 &= 0 ,
\end{align*}
with initial conditions $x_1(0) = 1$, $x_2(0) = 0$.
Let us write the system as
\[ \vec{x}\,' + \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix} \vec{x} = \begin{bmatrix} e^t \\ 0 \end{bmatrix}, \qquad \vec{x}(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} . \]
We have previously computed $e^{tP}$ for $P = \begin{bmatrix} 5 & -3 \\ 3 & -1 \end{bmatrix}$. We immediately have $e^{-tP}$ as well, just by negating $t$:
\[ e^{tP} = \begin{bmatrix} (1 + 3t)\, e^{2t} & -3t e^{2t} \\ 3t e^{2t} & (1 - 3t)\, e^{2t} \end{bmatrix}, \qquad e^{-tP} = \begin{bmatrix} (1 - 3t)\, e^{-2t} & 3t e^{-2t} \\ -3t e^{-2t} & (1 + 3t)\, e^{-2t} \end{bmatrix} . \]
Instead of computing the whole formula at once, let us do it in stages. First,
\begin{align*}
\int_0^t e^{sP} \vec{f}(s)\, ds &= \int_0^t \begin{bmatrix} (1 + 3s)\, e^{2s} & -3s e^{2s} \\ 3s e^{2s} & (1 - 3s)\, e^{2s} \end{bmatrix} \begin{bmatrix} e^s \\ 0 \end{bmatrix} ds \\
&= \int_0^t \begin{bmatrix} (1 + 3s)\, e^{3s} \\ 3s e^{3s} \end{bmatrix} ds = \begin{bmatrix} t e^{3t} \\[2pt] \frac{(3t - 1)\, e^{3t} + 1}{3} \end{bmatrix} .
\end{align*}
Then
\begin{align*}
\vec{x}(t) &= e^{-tP} \int_0^t e^{sP} \vec{f}(s)\, ds + e^{-tP} \vec{b} \\
&= \begin{bmatrix} (1 - 3t)\, e^{-2t} & 3t e^{-2t} \\ -3t e^{-2t} & (1 + 3t)\, e^{-2t} \end{bmatrix} \begin{bmatrix} t e^{3t} \\[2pt] \frac{(3t - 1)\, e^{3t} + 1}{3} \end{bmatrix} + \begin{bmatrix} (1 - 3t)\, e^{-2t} & 3t e^{-2t} \\ -3t e^{-2t} & (1 + 3t)\, e^{-2t} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
&= \begin{bmatrix} t e^{-2t} \\ -\frac{e^t}{3} + \left( \frac{1}{3} + t \right) e^{-2t} \end{bmatrix} + \begin{bmatrix} (1 - 3t)\, e^{-2t} \\ -3t e^{-2t} \end{bmatrix} = \begin{bmatrix} (1 - 2t)\, e^{-2t} \\ -\frac{e^t}{3} + \left( \frac{1}{3} - 2t \right) e^{-2t} \end{bmatrix} .
\end{align*}
Phew!
Let us check that this really works.
\[ x_1' + 5x_1 - 3x_2 = (4t e^{-2t} - 4e^{-2t}) + 5(1 - 2t)\, e^{-2t} + e^t - (1 - 6t)\, e^{-2t} = e^t . \]
Similarly (exercise) $x_2' + 3x_1 - x_2 = 0$. The initial conditions are also satisfied (exercise).
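A numeric check is also quick. Here is a minimal scipy sketch (mine) that integrates the system and compares with the closed form:

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, x):
        # x1' = -5 x1 + 3 x2 + e^t,  x2' = -3 x1 + x2
        return [-5 * x[0] + 3 * x[1] + np.exp(t),
                -3 * x[0] + x[1]]

    sol = solve_ivp(rhs, (0, 2), [1.0, 0.0], rtol=1e-10, atol=1e-10)
    t = sol.t[-1]
    exact = [(1 - 2 * t) * np.exp(-2 * t),
             -np.exp(t) / 3 + (1 / 3 - 2 * t) * np.exp(-2 * t)]
    assert np.allclose(sol.y[:, -1], exact, atol=1e-6)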
For systems, the integrating factor method only works if $P$ does not depend on $t$; that is, $P$ must be constant. The problem is that in general
\[ \frac{d}{dt}\, e^{\int P(t)\, dt} \neq P(t)\, e^{\int P(t)\, dt} , \]
because matrices generally do not commute.
Eigenvector decomposition
For the next method, we note that the eigenvectors of a matrix give the directions in which the matrix acts like a scalar. If we solve the system along these directions, the computations are simpler, as we can treat the matrix as a scalar. We then put those solutions together to get the general solution.
Take the equation
\[ \vec{x}\,'(t) = A\vec{x}(t) + \vec{f}(t) . \tag{3.6} \]
Assume that $A$ has $n$ linearly independent eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$. Let us write
\[ \vec{x}(t) = \vec{v}_1 \xi_1(t) + \vec{v}_2 \xi_2(t) + \cdots + \vec{v}_n \xi_n(t) . \tag{3.7} \]
That is, we wish to write our solution as a linear combination of the eigenvectors of $A$. If we can solve for the scalar functions $\xi_1$ through $\xi_n$, we have our solution $\vec{x}$. Let us decompose $\vec{f}$ in terms of the eigenvectors as well. Write
\[ \vec{f}(t) = \vec{v}_1 g_1(t) + \vec{v}_2 g_2(t) + \cdots + \vec{v}_n g_n(t) . \tag{3.8} \]
That is, we wish to find $g_1$ through $g_n$ that satisfy (3.8). Since all the eigenvectors of $A$ are independent, the matrix $E = [\, \vec{v}_1 \ \vec{v}_2 \ \cdots \ \vec{v}_n \,]$ is invertible. We see that (3.8) can be written as $\vec{f} = E\vec{g}$, where the components of $\vec{g}$ are the functions $g_1$ through $g_n$. Then $\vec{g} = E^{-1} \vec{f}$. Hence it is always possible to find $\vec{g}$ when there are $n$ linearly independent eigenvectors.
We plug (3.7) into (3.6), and note that $A\vec{v}_k = \lambda_k \vec{v}_k$:
\begin{align*}
\vec{x}\,' &= \vec{v}_1 \xi_1' + \vec{v}_2 \xi_2' + \cdots + \vec{v}_n \xi_n' \\
&= A \left( \vec{v}_1 \xi_1 + \vec{v}_2 \xi_2 + \cdots + \vec{v}_n \xi_n \right) + \vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n \\
&= A\vec{v}_1 \xi_1 + A\vec{v}_2 \xi_2 + \cdots + A\vec{v}_n \xi_n + \vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n \\
&= \vec{v}_1 \lambda_1 \xi_1 + \vec{v}_2 \lambda_2 \xi_2 + \cdots + \vec{v}_n \lambda_n \xi_n + \vec{v}_1 g_1 + \vec{v}_2 g_2 + \cdots + \vec{v}_n g_n \\
&= \vec{v}_1 (\lambda_1 \xi_1 + g_1) + \vec{v}_2 (\lambda_2 \xi_2 + g_2) + \cdots + \vec{v}_n (\lambda_n \xi_n + g_n) .
\end{align*}
If we identify the coefficients of the vectors $\vec{v}_1$ through $\vec{v}_n$, we get the equations
\begin{align*}
\xi_1' &= \lambda_1 \xi_1 + g_1 ,\\
\xi_2' &= \lambda_2 \xi_2 + g_2 ,\\
&\ \,\vdots \\
\xi_n' &= \lambda_n \xi_n + g_n .
\end{align*}
Each one of these equations is independent of the others. They are all linear first order equations and can easily be solved by the standard integrating factor method for single equations. For example, for the $k$th equation we write
\[ \xi_k'(t) - \lambda_k \xi_k(t) = g_k(t) . \]
We use the integrating factor $e^{-\lambda_k t}$ to find that
\[ \frac{d}{dt} \left[ \xi_k(t)\, e^{-\lambda_k t} \right] = e^{-\lambda_k t} g_k(t) . \]
Now we integrate and solve for $\xi_k$ to get
\[ \xi_k(t) = e^{\lambda_k t} \int e^{-\lambda_k t} g_k(t)\, dt + C_k e^{\lambda_k t} . \]
Note that if you are looking for just any particular solution, you can set $C_k$ to be zero. If we leave these constants in, we get the general solution. Write $\vec{x}(t) = \vec{v}_1 \xi_1(t) + \vec{v}_2 \xi_2(t) + \cdots + \vec{v}_n \xi_n(t)$, and we are done.
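The whole procedure is easy to carry out symbolically. Here is a minimal sympy sketch (mine; the matrix and forcing are hypothetical, chosen for illustration) that decomposes $\vec{f}$, solves the decoupled scalar equations with $C_k = 0$, and verifies the resulting particular solution:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, 2], [2, 1]])   # hypothetical example matrix
    f = sp.Matrix([sp.cos(t), 0])     # hypothetical forcing

    # eigenvects() returns (eigenvalue, multiplicity, [eigenvectors]) tuples.
    pairs = A.eigenvects()
    lams = [p[0] for p in pairs]
    E = sp.Matrix.hstack(*[p[2][0] for p in pairs])

    g = E.inv() * f                   # decompose f in the eigenvector basis
    # xi_k = e^(lam t) * integral of e^(-lam t) g_k, taking C_k = 0.
    xi = [sp.exp(lam * t) * sp.integrate(sp.exp(-lam * t) * sp.simplify(gk), t)
          for lam, gk in zip(lams, g)]
    xp = sp.simplify(sum((E[:, k] * xi[k] for k in range(len(xi))), sp.zeros(2, 1)))

    # Verify: xp' - A xp - f should be the zero vector.
    assert sp.simplify(xp.diff(t) - A * xp - f) == sp.zeros(2, 1)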
Again, as always, it is perhaps better to write these integrals as definite integrals. Suppose that we have an initial condition $\vec{x}(0) = \vec{b}$. We take $\vec{a} = E^{-1}\vec{b}$ and note that $\vec{b} = \vec{v}_1 a_1 + \cdots + \vec{v}_n a_n$, just like before. Then if we write
\[ \xi_k(t) = e^{\lambda_k t} \int_0^t e^{-\lambda_k s} g_k(s)\, ds + a_k e^{\lambda_k t} , \]
we will actually get the particular solution $\vec{x}(t) = \vec{v}_1 \xi_1(t) + \vec{v}_2 \xi_2(t) + \cdots + \vec{v}_n \xi_n(t)$ satisfying $\vec{x}(0) = \vec{b}$, because $\xi_k(0) = a_k$.
Example 3.9.2: Let $A = \begin{bmatrix} 1 & 3 \\ 3 & 1 \end{bmatrix}$. Solve $\vec{x}\,' = A\vec{x} + \vec{f}$ where $\vec{f}(t) = \begin{bmatrix} 2e^t \\ 2t \end{bmatrix}$ for $\vec{x}(0) = \begin{bmatrix} 3/16 \\ -5/16 \end{bmatrix}$.
The eigenvalues of $A$ are $-2$ and 4, and corresponding eigenvectors are $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ respectively. This calculation is left as an exercise. We write down the matrix $E$ of the eigenvectors and compute its inverse (using the inverse formula for $2 \times 2$ matrices):
\[ E = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}, \qquad E^{-1} = \frac{1}{2} \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} . \]
We are looking for a solution of the form $\vec{x} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \xi_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix} \xi_2$. We also wish to write $\vec{f}$ in terms of the eigenvectors. That is, we wish to write $\vec{f} = \begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} g_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix} g_2$. Thus
\[ \begin{bmatrix} g_1 \\ g_2 \end{bmatrix} = E^{-1} \begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 2e^t \\ 2t \end{bmatrix} = \begin{bmatrix} e^t - t \\ e^t + t \end{bmatrix} . \]
So $g_1 = e^t - t$ and $g_2 = e^t + t$.
We further want to write $\vec{x}(0)$ in terms of the eigenvectors. That is, we wish to write $\vec{x}(0) = \begin{bmatrix} 3/16 \\ -5/16 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} a_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix} a_2$. Hence
\[ \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = E^{-1} \begin{bmatrix} \frac{3}{16} \\[2pt] \frac{-5}{16} \end{bmatrix} = \begin{bmatrix} \frac{1}{4} \\[2pt] \frac{-1}{16} \end{bmatrix} . \]
So $a_1 = \frac{1}{4}$ and $a_2 = \frac{-1}{16}$. We plug our $\vec{x}$ into the equation and get
\begin{align*}
\begin{bmatrix} 1 \\ -1 \end{bmatrix} \xi_1' + \begin{bmatrix} 1 \\ 1 \end{bmatrix} \xi_2' &= A \begin{bmatrix} 1 \\ -1 \end{bmatrix} \xi_1 + A \begin{bmatrix} 1 \\ 1 \end{bmatrix} \xi_2 + \begin{bmatrix} 1 \\ -1 \end{bmatrix} g_1 + \begin{bmatrix} 1 \\ 1 \end{bmatrix} g_2 \\
&= \begin{bmatrix} 1 \\ -1 \end{bmatrix} (-2\xi_1) + \begin{bmatrix} 1 \\ 1 \end{bmatrix} 4\xi_2 + \begin{bmatrix} 1 \\ -1 \end{bmatrix} (e^t - t) + \begin{bmatrix} 1 \\ 1 \end{bmatrix} (e^t + t) .
\end{align*}
We get the two equations
\begin{align*}
\xi_1' &= -2\xi_1 + e^t - t, \qquad \text{where } \xi_1(0) = a_1 = \tfrac{1}{4} ,\\
\xi_2' &= 4\xi_2 + e^t + t, \qquad \text{where } \xi_2(0) = a_2 = \tfrac{-1}{16} .
\end{align*}
We solve each with the integrating factor method. Computation of the integral is left as an exercise to the student; note that you will need integration by parts.
\[ \xi_1 = e^{-2t} \int e^{2t} (e^t - t)\, dt + C_1 e^{-2t} = \frac{e^t}{3} - \frac{t}{2} + \frac{1}{4} + C_1 e^{-2t} . \]
$C_1$ is the constant of integration. As $\xi_1(0) = \frac{1}{4}$, then $\frac{1}{4} = \frac{1}{3} + \frac{1}{4} + C_1$, and hence $C_1 = -\frac{1}{3}$. Similarly,
\[ \xi_2 = e^{4t} \int e^{-4t} (e^t + t)\, dt + C_2 e^{4t} = -\frac{e^t}{3} - \frac{t}{4} - \frac{1}{16} + C_2 e^{4t} . \]
As $\xi_2(0) = \frac{-1}{16}$, we have $\frac{-1}{16} = \frac{-1}{3} - \frac{1}{16} + C_2$, and hence $C_2 = \frac{1}{3}$. The solution is
\[ \vec{x}(t) = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \left( \frac{e^t - e^{-2t}}{3} + \frac{1 - 2t}{4} \right) + \begin{bmatrix} 1 \\ 1 \end{bmatrix} \left( \frac{e^{4t} - e^t}{3} - \frac{4t + 1}{16} \right) = \begin{bmatrix} \frac{e^{4t} - e^{-2t}}{3} + \frac{3 - 12t}{16} \\[4pt] \frac{e^{-2t} + e^{4t} - 2e^t}{3} + \frac{4t - 5}{16} \end{bmatrix} . \]
That is, $x_1 = \frac{e^{4t} - e^{-2t}}{3} + \frac{3 - 12t}{16}$ and $x_2 = \frac{e^{-2t} + e^{4t} - 2e^t}{3} + \frac{4t - 5}{16}$.
Exercise 3.9.1: Check that $x_1$ and $x_2$ solve the problem. Check both that they satisfy the differential equation and that they satisfy the initial conditions.
Undetermined coefficients
The method of undetermined coefficients also still works. The only difference here is that we will have to take unknown vectors rather than just numbers. The same caveats apply to undetermined coefficients for systems as for single equations. The method does not always work; furthermore, if the right hand side is complicated, you will have lots of variables to solve for. You can think of each element of an unknown vector as an unknown number. So in a system of 3 equations, if you have say 4 unknown vectors (this would not be uncommon), then you already have 12 unknowns that you need to solve for. The method can turn into a lot of tedious work. As this method is essentially the same as it is for single equations, let us just do an example.
Example 3.9.3: Let $A = \begin{bmatrix} -1 & 0 \\ -2 & 1 \end{bmatrix}$. Find a particular solution of $\vec{x}\,' = A\vec{x} + \vec{f}$ where $\vec{f}(t) = \begin{bmatrix} e^t \\ t \end{bmatrix}$.
Note that we can solve this system in an easier way (can you see how?), but for the purposes of the example, let us use the eigenvalue method plus undetermined coefficients.
The eigenvalues of $A$ are $-1$ and 1, and corresponding eigenvectors are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ respectively. Hence our complementary solution is
\[ \vec{x}_c = \alpha_1 \begin{bmatrix} 1 \\ 1 \end{bmatrix} e^{-t} + \alpha_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^t , \]
for some arbitrary constants $\alpha_1$ and $\alpha_2$.
We would want to guess a particular solution of the form
\[ \vec{x} = \vec{a} e^t + \vec{b} t + \vec{c} . \]
However, something of the form $\vec{a} e^t$ appears in the complementary solution. Because we do not yet know the vector, if $\vec{a}$ were a multiple of $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, a conflict would arise. It may very well not, but to be safe we should also try $\vec{b} t e^t$. Here we find the crux of the difference between single equations and systems: you want to try both $\vec{a} e^t$ and $\vec{b} t e^t$ in your solution, not just $\vec{b} t e^t$. Therefore, we try
\[ \vec{x} = \vec{a} e^t + \vec{b} t e^t + \vec{c} t + \vec{d} . \]
Thus we have 8 unknowns. We write $\vec{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$, $\vec{b} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$, $\vec{c} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}$, and $\vec{d} = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix}$. We plug this into the equation. First let us compute $\vec{x}\,'$:
\[ \vec{x}\,' = \left( \vec{a} + \vec{b} \right) e^t + \vec{b} t e^t + \vec{c} . \]
Now $\vec{x}\,'$ must equal $A\vec{x} + \vec{f}$, so
\begin{align*}
A\vec{x} + \vec{f} &= A\vec{a} e^t + A\vec{b} t e^t + A\vec{c} t + A\vec{d} + \vec{f} \\
&= \begin{bmatrix} -a_1 \\ -2a_1 + a_2 \end{bmatrix} e^t + \begin{bmatrix} -b_1 \\ -2b_1 + b_2 \end{bmatrix} t e^t + \begin{bmatrix} -c_1 \\ -2c_1 + c_2 \end{bmatrix} t + \begin{bmatrix} -d_1 \\ -2d_1 + d_2 \end{bmatrix} + \begin{bmatrix} e^t \\ t \end{bmatrix} .
\end{align*}
Now we identify the coefficients of $e^t$, $te^t$, $t$, and the constants:
\begin{align*}
a_1 + b_1 &= -a_1 + 1 ,\\
a_2 + b_2 &= -2a_1 + a_2 ,\\
b_1 &= -b_1 ,\\
b_2 &= -2b_1 + b_2 ,\\
0 &= -c_1 ,\\
0 &= -2c_1 + c_2 + 1 ,\\
c_1 &= -d_1 ,\\
c_2 &= -2d_1 + d_2 .
\end{align*}
We could write this as an $8 \times 9$ augmented matrix and start row reduction, but it is easier to just do this in an ad hoc manner. Immediately we see that $b_1 = 0$, $c_1 = 0$, $d_1 = 0$. Plugging these back in, we get that $c_2 = -1$ and $d_2 = -1$. The remaining equations that tell us something are
\begin{align*}
a_1 &= -a_1 + 1 ,\\
a_2 + b_2 &= -2a_1 + a_2 .
\end{align*}
So $a_1 = \frac{1}{2}$ and $b_2 = -1$. The value of $a_2$ can be arbitrary and still satisfy the equations. We are looking for just a single solution, so presumably the simplest one is when $a_2 = 0$. Therefore,
\[ \vec{x} = \vec{a} e^t + \vec{b} t e^t + \vec{c} t + \vec{d} = \begin{bmatrix} \frac{1}{2} \\ 0 \end{bmatrix} e^t + \begin{bmatrix} 0 \\ -1 \end{bmatrix} t e^t + \begin{bmatrix} 0 \\ -1 \end{bmatrix} t + \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} \frac{1}{2} e^t \\ -t e^t - t - 1 \end{bmatrix} . \]
That is, $x_1 = \frac{1}{2} e^t$, $x_2 = -t e^t - t - 1$. You would add this to the complementary solution to get the general solution of the problem. Notice also that both $\vec{a} e^t$ and $\vec{b} t e^t$ really were needed.
Exercise 3.9.2: Check that $x_1$ and $x_2$ solve the problem. Also try setting $a_2 = 1$ and again check these solutions. What is the difference between the two solutions we can obtain in this way?
As you can see, other than the handling of conflicts, undetermined coefficients works exactly as it did for single equations. However, the computations can get out of hand pretty quickly for systems; the equation we solved above was very simple.
3.9.2 First order variable coefficient
Just as for a single equation, there is the method of variation of parameters. In fact for constant
coefficient systems, this is essentially the same thing as the integrating factor method we discussed
earlier. However this method will work for any linear system, even if it is not constant coefficient,
provided you have somehow solved the associated homogeneous problem.
Suppose we have the equation
\[ \vec{x}\,' = A(t)\, \vec{x} + \vec{f}(t) . \tag{3.9} \]
Further, suppose we have solved the associated homogeneous equation $\vec{x}\,' = A(t)\, \vec{x}$ and found the fundamental matrix solution $X(t)$. The general solution to the associated homogeneous equation is $X(t)\, \vec{c}$ for a constant vector $\vec{c}$. Just like with variation of parameters for a single equation, we try a solution to the nonhomogeneous equation of the form
\[ \vec{x}_p = X(t)\, \vec{u}(t) , \]
where $\vec{u}(t)$ is a vector valued function instead of a constant. Now substitute into (3.9) to obtain
\[ \vec{x}_p'(t) = X'(t)\, \vec{u}(t) + X(t)\, \vec{u}\,'(t) = A(t)\, X(t)\, \vec{u}(t) + \vec{f}(t) . \]
But $X$ is the fundamental matrix solution to the homogeneous problem, so $X'(t) = A(t)\, X(t)$, and thus
\[ X'(t)\, \vec{u}(t) + X(t)\, \vec{u}\,'(t) = X'(t)\, \vec{u}(t) + \vec{f}(t) . \]
Hence $X(t)\, \vec{u}\,'(t) = \vec{f}(t)$. If we compute $[X(t)]^{-1}$, then $\vec{u}\,'(t) = [X(t)]^{-1} \vec{f}(t)$. We integrate to obtain $\vec{u}$ and we have the particular solution $\vec{x}_p = X(t)\, \vec{u}(t)$. Let us write this as a formula:
\[ \vec{x}_p = X(t) \int [X(t)]^{-1}\, \vec{f}(t)\, dt . \]
Note that if $A$ is constant and we let $X(t) = e^{tA}$, then $[X(t)]^{-1} = e^{-tA}$, and hence we get the solution $\vec{x}_p = e^{tA} \int e^{-tA}\, \vec{f}(t)\, dt$, which is precisely what we got using the integrating factor method.
Example 3.9.4: Find a particular solution to
\[ \vec{x}\,' = \frac{1}{t^2 + 1} \begin{bmatrix} t & -1 \\ 1 & t \end{bmatrix} \vec{x} + \begin{bmatrix} t \\ 1 \end{bmatrix} (t^2 + 1) . \tag{3.10} \]
Here $A = \frac{1}{t^2+1} \begin{bmatrix} t & -1 \\ 1 & t \end{bmatrix}$ is most definitely not constant. Perhaps by a lucky guess, we find that $X = \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix}$ solves $X'(t) = A(t)\, X(t)$. Once we know the complementary solution, we can easily find a particular solution to (3.10). First we find
\[ [X(t)]^{-1} = \frac{1}{t^2 + 1} \begin{bmatrix} 1 & t \\ -t & 1 \end{bmatrix} . \]
Next, a particular solution to (3.10) is
\begin{align*}
\vec{x}_p &= X(t) \int [X(t)]^{-1}\, \vec{f}(t)\, dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \int \frac{1}{t^2 + 1} \begin{bmatrix} 1 & t \\ -t & 1 \end{bmatrix} \begin{bmatrix} t \\ 1 \end{bmatrix} (t^2 + 1)\, dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \int \begin{bmatrix} 2t \\ -t^2 + 1 \end{bmatrix} dt \\
&= \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \begin{bmatrix} t^2 \\ -\frac{1}{3} t^3 + t \end{bmatrix} = \begin{bmatrix} \frac{1}{3} t^4 \\[2pt] \frac{2}{3} t^3 + t \end{bmatrix} .
\end{align*}
Adding the complementary solution, we find the general solution to (3.10):
\[ \vec{x} = \begin{bmatrix} 1 & -t \\ t & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} + \begin{bmatrix} \frac{1}{3} t^4 \\[2pt] \frac{2}{3} t^3 + t \end{bmatrix} = \begin{bmatrix} c_1 - c_2 t + \frac{1}{3} t^4 \\[2pt] c_2 + (c_1 + 1)\, t + \frac{2}{3} t^3 \end{bmatrix} . \]
Exercise 3.9.3: Check that $x_1 = \frac{1}{3} t^4$ and $x_2 = \frac{2}{3} t^3 + t$ really solve (3.10).
In variation of parameters, just like in the integrating factor method, we can obtain the general solution by adding in constants of integration. That is, we add $X(t)\, \vec{c}$ for a vector of arbitrary constants. But that is precisely the complementary solution.
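The integral above is straightforward to reproduce symbolically. A minimal sympy sketch (mine) of the variation of parameters formula for Example 3.9.4:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[t, -1], [1, t]]) / (t**2 + 1)
    f = sp.Matrix([t, 1]) * (t**2 + 1)
    X = sp.Matrix([[1, -t], [t, 1]])   # the lucky-guess fundamental matrix

    assert sp.simplify(X.diff(t) - A * X) == sp.zeros(2, 2)  # X' = A X

    u = (X.inv() * f).applyfunc(lambda e: sp.integrate(sp.simplify(e), t))
    xp = sp.simplify(X * u)
    print(xp)   # Matrix([[t**4/3], [2*t**3/3 + t]])

    assert sp.simplify(xp.diff(t) - A * xp - f) == sp.zeros(2, 1)  # solves (3.10)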
3.9.3 Second order constant coefficients
Undetermined coefficients
We already did a simple example of the method of undetermined coefficients for second order systems in § 3.6. This method is essentially the same as undetermined coefficients for first order systems, though there are some simplifications we can make, as we did in § 3.6. Let the equation be
\[ \vec{x}\,'' = A\vec{x} + \vec{F}(t) , \]
where $A$ is a constant matrix. If $\vec{F}(t)$ is of the form $\vec{F}_0 \cos \omega t$, then we can try a solution of the form
\[ \vec{x}_p = \vec{c} \cos \omega t , \]
and you do not need to introduce sines.
If $\vec{F}$ is a sum of cosines, note that we still have the superposition principle. So if $\vec{F}(t) = \vec{F}_0 \cos \omega_0 t + \vec{F}_1 \cos \omega_1 t$, we could try $\vec{a} \cos \omega_0 t$ for the problem $\vec{x}\,'' = A\vec{x} + \vec{F}_0 \cos \omega_0 t$, and we would try $\vec{b} \cos \omega_1 t$ for the problem $\vec{x}\,'' = A\vec{x} + \vec{F}_1 \cos \omega_1 t$. Then we sum the solutions.
However, if there is duplication with the complementary solution, or if the equation is of the form $\vec{x}\,'' = A\vec{x}\,' + B\vec{x} + \vec{F}(t)$, then you need to do the same thing as you do for first order systems. Actually, you will never go wrong by putting more terms than needed into your guess; you will just find that the extra coefficients turn out to be zero. But it is useful to save some time and effort.
Eigenvector decomposition
If we have the system
\[ \vec{x}'' = A\vec{x} + \vec{F}(t), \]
we can do eigenvector decomposition, just like for first order systems.
Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues and $\vec{v}_1, \ldots, \vec{v}_n$ the eigenvectors. Again form the matrix
$E = [\,\vec{v}_1 \;\cdots\; \vec{v}_n\,]$. Write
\[ \vec{x}(t) = \vec{v}_1\,\xi_1(t) + \vec{v}_2\,\xi_2(t) + \cdots + \vec{v}_n\,\xi_n(t). \]
Decompose $\vec{F}$ in terms of the eigenvectors:
\[ \vec{F}(t) = \vec{v}_1\,g_1(t) + \vec{v}_2\,g_2(t) + \cdots + \vec{v}_n\,g_n(t). \]
And again, $\vec{g} = E^{-1}\vec{F}$.
Now plug in and do the same computation as before:
\[
\begin{aligned}
\vec{x}'' &= \vec{v}_1\,\xi_1'' + \vec{v}_2\,\xi_2'' + \cdots + \vec{v}_n\,\xi_n'' \\
&= A\bigl( \vec{v}_1\,\xi_1 + \vec{v}_2\,\xi_2 + \cdots + \vec{v}_n\,\xi_n \bigr) + \vec{v}_1\,g_1 + \vec{v}_2\,g_2 + \cdots + \vec{v}_n\,g_n \\
&= A\vec{v}_1\,\xi_1 + A\vec{v}_2\,\xi_2 + \cdots + A\vec{v}_n\,\xi_n + \vec{v}_1\,g_1 + \vec{v}_2\,g_2 + \cdots + \vec{v}_n\,g_n \\
&= \vec{v}_1\,\lambda_1\,\xi_1 + \vec{v}_2\,\lambda_2\,\xi_2 + \cdots + \vec{v}_n\,\lambda_n\,\xi_n + \vec{v}_1\,g_1 + \vec{v}_2\,g_2 + \cdots + \vec{v}_n\,g_n \\
&= \vec{v}_1\,(\lambda_1\,\xi_1 + g_1) + \vec{v}_2\,(\lambda_2\,\xi_2 + g_2) + \cdots + \vec{v}_n\,(\lambda_n\,\xi_n + g_n).
\end{aligned}
\]
Identify the coefficients of the eigenvectors to get the equations
\[
\begin{aligned}
\xi_1'' &= \lambda_1\,\xi_1 + g_1, \\
\xi_2'' &= \lambda_2\,\xi_2 + g_2, \\
&\;\;\vdots \\
\xi_n'' &= \lambda_n\,\xi_n + g_n.
\end{aligned}
\]
Each one of these equations is independent of the others. Solve each one using the
methods of chapter 2, then write $\vec{x}(t) = \vec{v}_1\,\xi_1(t) + \cdots + \vec{v}_n\,\xi_n(t)$, and we are done; we have a
particular solution. If we have found the general solution for $\xi_1$ through $\xi_n$, then again
$\vec{x}(t) = \vec{v}_1\,\xi_1(t) + \cdots + \vec{v}_n\,\xi_n(t)$ is the general solution.
Example 3.9.5: Let us do the example from § 3.6 using this method. The equation is
\[ \vec{x}'' = \begin{bmatrix} -3 & 1 \\ 2 & -2 \end{bmatrix}\vec{x} + \begin{bmatrix} 0 \\ 2 \end{bmatrix}\cos 3t. \]
The eigenvalues were $-1$ and $-4$, with eigenvectors $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. So $E = \begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$ and
$E^{-1} = \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}$. Therefore,
\[ \begin{bmatrix} g_1 \\ g_2 \end{bmatrix} = E^{-1}\vec{F}(t) = \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 2 & -1 \end{bmatrix}\begin{bmatrix} 0 \\ 2\cos 3t \end{bmatrix} = \begin{bmatrix} \frac{2}{3}\cos 3t \\ \frac{-2}{3}\cos 3t \end{bmatrix}. \]
So after the whole song and dance of plugging in, the equations we get are
\[ \xi_1'' = -\xi_1 + \frac{2}{3}\cos 3t, \qquad \xi_2'' = -4\,\xi_2 - \frac{2}{3}\cos 3t. \]
For each we can try the method of undetermined coefficients: we try $C_1\cos 3t$ for the first equation
and $C_2\cos 3t$ for the second equation. We plug in:
\[ -9C_1\cos 3t = -C_1\cos 3t + \frac{2}{3}\cos 3t, \qquad -9C_2\cos 3t = -4C_2\cos 3t - \frac{2}{3}\cos 3t. \]
Each of these we solve separately: we get $-9C_1 = -C_1 + \frac{2}{3}$ and $-9C_2 = -4C_2 - \frac{2}{3}$, and hence
$C_1 = \frac{-1}{12}$ and $C_2 = \frac{2}{15}$. So our particular solution is
\[ \vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}\Bigl( \frac{-1}{12}\cos 3t \Bigr) + \begin{bmatrix} 1 \\ -1 \end{bmatrix}\Bigl( \frac{2}{15}\cos 3t \Bigr) = \begin{bmatrix} \frac{1}{20} \\ \frac{-3}{10} \end{bmatrix}\cos 3t. \]
This matches what we got previously in § 3.6.
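As a quick sanity check, here is a short sketch, again assuming SymPy, that reproduces the decomposition above and verifies the particular solution of Example 3.9.5:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[-3, 1], [2, -2]])
F = sp.Matrix([0, 2]) * sp.cos(3 * t)

E = sp.Matrix([[1, 1], [2, -1]])  # columns are the eigenvectors
print(E.inv() * F)                # g1 = (2/3) cos 3t, g2 = -(2/3) cos 3t

# The particular solution found above
xp = sp.Matrix([sp.Rational(1, 20), sp.Rational(-3, 10)]) * sp.cos(3 * t)
assert sp.simplify(xp.diff(t, 2) - A * xp - F) == sp.zeros(2, 1)
```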
3.9.4 Exercises
Exercise 3.9.4: Find a particular solution to $x' = x + 2y + 2t$, $y' = 3x + 2y - 4$. a) Using the integrating
factor method, b) using eigenvector decomposition, c) using undetermined coefficients.
Exercise 3.9.5: Find the general solution to $x' = 4x + y - 1$, $y' = x + 4y - e^t$. a) Using the integrating
factor method, b) using eigenvector decomposition, c) using undetermined coefficients.
Exercise 3.9.6: Find the general solution to $x_1'' = -6x_1 + 3x_2 + \cos t$, $x_2'' = 2x_1 - 7x_2 + 3\cos t$. a)
Using eigenvector decomposition, b) using undetermined coefficients.
Exercise 3.9.7: Find the general solution to $x_1'' = -6x_1 + 3x_2 + \cos 2t$, $x_2'' = 2x_1 - 7x_2 + 3\cos 2t$.
a) Using eigenvector decomposition, b) using undetermined coefficients.
Exercise 3.9.8: Take the equation
\[ \vec{x}' = \begin{bmatrix} \frac{1}{t} & -1 \\ 1 & \frac{1}{t} \end{bmatrix}\vec{x} + \begin{bmatrix} t^2 \\ -t \end{bmatrix}. \]
a) Check that
\[ \vec{x}_c = c_1\begin{bmatrix} t\sin t \\ -t\cos t \end{bmatrix} + c_2\begin{bmatrix} t\cos t \\ t\sin t \end{bmatrix} \]
is the complementary solution. b) Use variation of parameters to find a particular solution.
Chapter 4
Fourier series and PDEs
4.1 Boundary value problems
Note: 2 lectures, similar to §3.8 in EP
4.1.1 Boundary value problems
Before we tackle the Fourier series, we need to study the so-called boundary value problems (or
endpoint problems). For example, suppose we have
\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0, \]
for some constant λ, where x(t) is defined for t in the interval [a, b]. Unlike before, when we
specified the value of the solution and its derivative at a single point, we now specify the value of
the solution at two different points. Note that x = 0 is a solution to this equation, so existence of
solutions is not an issue here. Uniqueness is another issue. The general solution to $x'' + \lambda x = 0$
has two arbitrary constants, so it is natural to think that requiring two conditions will
guarantee a unique solution.
Example 4.1.1: However, take λ = 1, a = 0, b = π. That is,
\[ x'' + x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]
Then x = sin t is another solution satisfying both boundary conditions. In fact, write down the
general solution of the differential equation, which is $x = A\cos t + B\sin t$. The condition x(0) = 0
forces A = 0. But letting x(π) = 0 does not give us any more information, as $x = B\sin t$ already
satisfies both conditions. Hence there are infinitely many solutions, $x = B\sin t$ for an arbitrary
constant B.
Example 4.1.2: On the other hand, change to λ = 2:
\[ x'' + 2x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]
Then the general solution is $x = A\cos\sqrt{2}\,t + B\sin\sqrt{2}\,t$. Letting x(0) = 0 still forces A = 0. But
now letting $0 = x(\pi) = B\sin\sqrt{2}\,\pi$, we note that $\sin\sqrt{2}\,\pi \neq 0$ and hence B = 0. So x = 0 is the unique solution
to this problem.
So what is going on? We will be interested in classifying which constants λ imply a nonzero
solution, and we will be interested in finding those solutions. This problem is an analogue of
finding eigenvalues and eigenvectors of matrices.
4.1.2 Eigenvalue problems
In general we will consider more equations, but we will postpone this until chapter 5. For the basic
Fourier series theory we will need only the following three cases.
\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0, \tag{4.1} \]
\[ x'' + \lambda x = 0, \quad x'(a) = 0, \quad x'(b) = 0, \tag{4.2} \]
and
\[ x'' + \lambda x = 0, \quad x(a) = x(b), \quad x'(a) = x'(b). \tag{4.3} \]
A number λ is called an eigenvalue of (4.1) (resp. (4.2) or (4.3)) if and only if there
exists a nonzero solution to (4.1) (resp. (4.2) or (4.3)) for that specific λ. Such a nonzero solution
is called a corresponding eigenfunction.
Note the similarity to eigenvalues and eigenvectors of matrices. The similarity is not just
coincidental. If we think of the equations as differential operators, then we are doing the exact
same thing. For example, let $L = -\frac{d^2}{dt^2}$; then we are looking for eigenfunctions f satisfying certain
endpoint conditions that solve $(L - \lambda)f = 0$. A lot of the formalism from linear algebra can still
apply here, though we will not pursue this line of reasoning too far.
Example 4.1.3: Let us find the eigenvalues and eigenfunctions of
\[ x'' + \lambda x = 0, \quad x(0) = 0, \quad x(\pi) = 0. \]
We will have to handle the cases λ > 0, λ = 0, λ < 0 separately.
First suppose that λ > 0. Then the general solution to $x'' + \lambda x = 0$ is
\[ x = A\cos\sqrt{\lambda}\,t + B\sin\sqrt{\lambda}\,t. \]
The condition x(0) = 0 implies immediately A = 0. Next,
\[ 0 = x(\pi) = B\sin\sqrt{\lambda}\,\pi. \]
If B is zero, then x is not a nonzero solution. So to get a nonzero solution we must have
$\sin\sqrt{\lambda}\,\pi = 0$. Hence $\sqrt{\lambda}\,\pi$ must be an integer multiple of π, or $\sqrt{\lambda} = k$ for a positive integer k.
Hence the positive eigenvalues are $k^2$ for all integers k ≥ 1. The corresponding eigenfunctions can
be taken as $x = \sin kt$. Just as with eigenvectors, all nonzero multiples of an eigenfunction are also eigenfunctions, so
we only need to pick one.
Now suppose that λ = 0. In this case the equation is $x'' = 0$ and the general solution is
$x = At + B$. The condition x(0) = 0 implies that B = 0, and then x(π) = 0 implies that A = 0. This means that
λ = 0 is not an eigenvalue.
Finally, let λ < 0. In this case we have the general solution
\[ x = A\cosh\sqrt{-\lambda}\,t + B\sinh\sqrt{-\lambda}\,t. \]
Letting x(0) = 0 implies that A = 0 (recall cosh 0 = 1 and sinh 0 = 0). So our solution must be
$x = B\sinh\sqrt{-\lambda}\,t$ and satisfy x(π) = 0. This is only possible if B is zero. Why? Because sinh ξ
is only zero for ξ = 0; you should plot sinh to see this. We can also just look at the definition:
$0 = \sinh t = \frac{e^t - e^{-t}}{2}$. Hence $e^t = e^{-t}$, which implies t = −t, and that is only true if t = 0. So there are
no negative eigenvalues.
In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \quad \text{with an eigenfunction} \quad x_k = \sin kt \quad \text{for all integers } k \geq 1. \]
Example 4.1.4: Let us also compute the eigenvalues and eigenfunctions of
\[ x'' + \lambda x = 0, \quad x'(0) = 0, \quad x'(\pi) = 0. \]
Again we will have to handle the cases λ > 0, λ = 0, λ < 0 separately.
First suppose that λ > 0. Then the general solution to $x'' + \lambda x = 0$ is $x = A\cos\sqrt{\lambda}\,t + B\sin\sqrt{\lambda}\,t$,
so
\[ x' = -A\sqrt{\lambda}\,\sin\sqrt{\lambda}\,t + B\sqrt{\lambda}\,\cos\sqrt{\lambda}\,t. \]
The condition x′(0) = 0 implies immediately B = 0. Next,
\[ 0 = x'(\pi) = -A\sqrt{\lambda}\,\sin\sqrt{\lambda}\,\pi. \]
Again, A should not be zero, and $\sin\sqrt{\lambda}\,\pi$ is only zero if $\sqrt{\lambda} = k$ for a positive integer k. Hence the
positive eigenvalues are again $k^2$ for all integers k ≥ 1, and the corresponding eigenfunctions can
be taken as $x = \cos kt$.
Now suppose that λ = 0. In this case the equation is $x'' = 0$ and the general solution is
$x = At + B$, so $x' = A$. The condition x′(0) = 0 implies that A = 0. Setting x′(π) = 0 does not get
us anything new. This means that B could be anything (let us take it to be 1). So λ = 0 is an
eigenvalue and x = 1 is a corresponding eigenfunction.
Finally, let λ < 0. In this case we have the general solution $x = A\cosh\sqrt{-\lambda}\,t + B\sinh\sqrt{-\lambda}\,t$
and hence
\[ x' = A\sqrt{-\lambda}\,\sinh\sqrt{-\lambda}\,t + B\sqrt{-\lambda}\,\cosh\sqrt{-\lambda}\,t. \]
We have already seen (with the roles of A and B switched) that for this to be zero at t = 0 and t = π,
we must have A = B = 0. Hence there are no negative eigenvalues.
In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \quad \text{with an eigenfunction} \quad x_k = \cos kt \quad \text{for all integers } k \geq 1, \]
and there is another eigenvalue,
\[ \lambda_0 = 0 \quad \text{with an eigenfunction} \quad x_0 = 1. \]
We can also do this for a slightly more complicated boundary value problem. This problem
is the one that leads to the general Fourier series.
Example 4.1.5: Let us compute the eigenvalues and eigenfunctions of
\[ x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi). \]
You should notice that we have not specified the values or the derivatives at the endpoints, but
rather that they are the same at the beginning and at the end of the interval.
Let us skip λ < 0. The computations are the same and again we find that there are no negative
eigenvalues.
For λ = 0, the general solution is $x = At + B$. The condition x(−π) = x(π) implies that A = 0
($-A\pi + B = A\pi + B$ implies A = 0). The second condition, x′(−π) = x′(π), says nothing about B, and
hence λ = 0 is an eigenvalue with a corresponding eigenfunction x = 1.
For λ > 0 we get that $x = A\cos\sqrt{\lambda}\,t + B\sin\sqrt{\lambda}\,t$. Now
\[ A\cos(-\sqrt{\lambda}\,\pi) + B\sin(-\sqrt{\lambda}\,\pi) = A\cos\sqrt{\lambda}\,\pi + B\sin\sqrt{\lambda}\,\pi. \]
We remember that $\cos(-\theta) = \cos\theta$ and $\sin(-\theta) = -\sin\theta$. Therefore,
\[ A\cos\sqrt{\lambda}\,\pi - B\sin\sqrt{\lambda}\,\pi = A\cos\sqrt{\lambda}\,\pi + B\sin\sqrt{\lambda}\,\pi, \]
and hence either B = 0 or $\sin\sqrt{\lambda}\,\pi = 0$. Similarly (exercise), if we differentiate x and plug in the
second condition, we find that A = 0 or $\sin\sqrt{\lambda}\,\pi = 0$. Therefore, unless we want A and B to both
be zero (which we do not), we must have $\sin\sqrt{\lambda}\,\pi = 0$. Hence $\sqrt{\lambda}$ is a positive integer and the
eigenvalues are yet again $\lambda = k^2$ for an integer k ≥ 1. In this case, however, $x = A\cos kt + B\sin kt$ is
an eigenfunction for any A and any B. So we have two linearly independent eigenfunctions, sin kt
and cos kt. Remember that for a matrix we could also have had two eigenvectors corresponding to
an eigenvalue if the eigenvalue was repeated.
In summary, the eigenvalues and corresponding eigenfunctions are
\[ \lambda_k = k^2 \quad \text{with the eigenfunctions} \quad \cos kt \text{ and } \sin kt \quad \text{for all integers } k \geq 1, \]
\[ \lambda_0 = 0 \quad \text{with an eigenfunction} \quad x_0 = 1. \]
4.1.3 Orthogonality of eigenfunctions
Something that will be very useful in the next section is the orthogonality property of the
eigenfunctions. This is an analogue of the following fact about eigenvectors of a matrix. A matrix is
called symmetric if $A = A^T$. Eigenvectors for two distinct eigenvalues of a symmetric matrix are
orthogonal. That symmetry is required. We will not prove this fact here. The differential operators
we are dealing with act much like a symmetric matrix. We, therefore, get the following theorem.
Theorem 4.1.1. Suppose that $x_1(t)$ and $x_2(t)$ are two eigenfunctions of the problem (4.1), (4.2), or
(4.3) for two different eigenvalues $\lambda_1$ and $\lambda_2$. Then they are orthogonal in the sense that
\[ \int_a^b x_1(t)\,x_2(t)\,dt = 0. \]
Note that the terminology comes from the fact that the integral is a type of inner product. We
will expand on this in the next section. The theorem has a very short, elegant, and illuminating
proof so let us give it here. First note that we have the following two equations.
\[ x_1'' + \lambda_1 x_1 = 0 \quad \text{and} \quad x_2'' + \lambda_2 x_2 = 0. \]
Multiply the first by $x_2$ and the second by $x_1$ and subtract to get
\[ (\lambda_1 - \lambda_2)\,x_1 x_2 = x_2''\,x_1 - x_2\,x_1''. \]
Now integrate both sides of the equation:
\[
\begin{aligned}
(\lambda_1 - \lambda_2)\int_a^b x_1 x_2\,dt &= \int_a^b \bigl( x_2''\,x_1 - x_2\,x_1'' \bigr)\,dt \\
&= \int_a^b \frac{d}{dt}\bigl( x_2'\,x_1 - x_2\,x_1' \bigr)\,dt \\
&= \Bigl[ x_2'\,x_1 - x_2\,x_1' \Bigr]_{t=a}^{b} = 0.
\end{aligned}
\]
The last equality holds because of the boundary conditions. For example, if we consider (4.1), we
have $x_1(a) = x_1(b) = x_2(a) = x_2(b) = 0$, and so $x_2'\,x_1 - x_2\,x_1'$ is zero at both a and b. As $\lambda_1 \neq \lambda_2$, the
theorem follows.
Exercise 4.1.1 (easy): Finish the theorem (check the last equality in the proof) for the cases (4.2)
and (4.3).
We have seen previously that sin nt was an eigenfunction for the problem $x'' + \lambda x = 0$, x(0) = 0,
x(π) = 0. Hence we have the integral
\[ \int_0^{\pi} (\sin mt)(\sin nt)\,dt = 0, \quad \text{when } m \neq n. \]
Similarly,
\[ \int_0^{\pi} (\cos mt)(\cos nt)\,dt = 0, \quad \text{when } m \neq n. \]
And finally we also get
\[ \int_{-\pi}^{\pi} (\sin mt)(\sin nt)\,dt = 0, \quad \text{when } m \neq n, \]
\[ \int_{-\pi}^{\pi} (\cos mt)(\cos nt)\,dt = 0, \quad \text{when } m \neq n, \]
and
\[ \int_{-\pi}^{\pi} (\cos mt)(\sin nt)\,dt = 0. \]
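These orthogonality relations are easy to confirm symbolically. A minimal sketch, assuming SymPy, with m and n as sample distinct positive integers:

```python
import sympy as sp

t = sp.symbols('t')
m, n = 3, 5  # any distinct positive integers work

print(sp.integrate(sp.sin(m*t) * sp.sin(n*t), (t, -sp.pi, sp.pi)))  # 0
print(sp.integrate(sp.cos(m*t) * sp.cos(n*t), (t, -sp.pi, sp.pi)))  # 0
print(sp.integrate(sp.cos(m*t) * sp.sin(n*t), (t, -sp.pi, sp.pi)))  # 0
print(sp.integrate(sp.sin(n*t)**2, (t, -sp.pi, sp.pi)))             # pi
```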
4.1.4 Fredholm alternative
We now touch on a very useful theorem in the theory of differential equations. The theorem holds
in a more general setting than we are going to state it, but for our purposes the following statement
is sufficient. We will give a slightly more general version in chapter 5.
Theorem 4.1.2 (Fredholm alternative∗). Exactly one of the following statements holds. Either
\[ x'' + \lambda x = 0, \quad x(a) = 0, \quad x(b) = 0 \tag{4.4} \]
has a nonzero solution, or
\[ x'' + \lambda x = f(t), \quad x(a) = 0, \quad x(b) = 0 \tag{4.5} \]
has a unique solution for every continuous function f.
The theorem is also true for the other types of boundary conditions we considered. The theorem
means that if λ is not an eigenvalue, the nonhomogeneous equation (4.5) has a unique solution for
every right hand side. On the other hand if λ is an eigenvalue, then (4.5) need not have a solution
for every f , and furthermore, even if it happens to have a solution, the solution is not unique.
We also want to reinforce the idea here that linear differential operators have much in common
with matrices. So it is no surprise that there is a finite dimensional version of Fredholm alternative
for matrices as well. Let A be an n × n matrix. The Fredholm alternative then states that either
$(A - \lambda I)\vec{x} = \vec{0}$ has a nontrivial solution, or $(A - \lambda I)\vec{x} = \vec{b}$ has a solution for every $\vec{b}$.
A lot of intuition from linear algebra can be applied for linear differential operators, but one
must be careful of course. For example, one obvious difference we have already seen is that
in general a differential operator will have infinitely many eigenvalues, while a matrix has only
finitely many.
∗Named after the Swedish mathematician Erik Ivar Fredholm (1866 – 1927).
4.1.5 Application
Let us consider a physical application of an endpoint problem. Suppose we have a tightly stretched,
quickly spinning elastic string or rope of uniform linear density ρ. Let us put this problem into the
xy-plane. The x axis represents the position on the string. The string rotates at angular velocity
ω, so we will assume that the whole xy-plane rotates along with the string at angular velocity ω. We will assume
that the string stays in this xy-plane and y will measure its deflection from the equilibrium position,
y = 0, on the x axis. Hence, we will find a graph giving the shape of the string. We will
idealize the string to have no volume, to be just a mathematical curve. If we take a small segment
and we look at the tension at the endpoints, we see that this force is tangential, and we will assume
that the magnitude is the same at both endpoints. Hence the magnitude is constant everywhere, and
we will call its magnitude T. If we assume that the deflection is small, then we can use Newton's
second law to get the equation
\[ T\,y'' + \rho\,\omega^2\,y = 0. \]
Let L be the length of the string, with the string fixed at the beginning and end points. Hence,
y(0) = 0 and y(L) = 0. See Figure 4.1.
Figure 4.1: Whirling string.
We rewrite the equation as $y'' + \frac{\rho\omega^2}{T}\,y = 0$. The setup is similar to Example 4.1.3,
except for the interval length being L instead of π. We are looking for eigenvalues of $y'' + \lambda y = 0$,
y(0) = 0, y(L) = 0, where $\lambda = \frac{\rho\omega^2}{T}$. As before, there are no nonpositive eigenvalues. With λ > 0,
the general solution to the equation is $y = A\cos\sqrt{\lambda}\,x + B\sin\sqrt{\lambda}\,x$. The condition y(0) = 0 implies
that A = 0 as before. The condition y(L) = 0 implies that $\sin\sqrt{\lambda}\,L = 0$, and hence $\sqrt{\lambda}\,L = k\pi$ for
some integer k > 0, so
\[ \frac{\rho\omega^2}{T} = \lambda = \frac{k^2\pi^2}{L^2}. \]
What does this say about the shape of the string? It says that for all parameters ρ, ω, T not
satisfying the above equation, the string is in the equilibrium position, y = 0. When
$\frac{\rho\omega^2}{T} = \frac{k^2\pi^2}{L^2}$,
then the string will “pop out” some distance B at the midpoint. We cannot compute B with the
information we have.
Let us assume that ρ and T are fixed and we are changing ω. For most values of ω, the string
is in the equilibrium state. When the angular velocity ω hits a value $\omega = \frac{k\pi}{L}\sqrt{\frac{T}{\rho}}$, then the string will
pop out and will have the shape of a sine wave crossing the x axis k times. When ω changes again,
the string returns to the equilibrium position. You can see that the higher the angular velocity, the
more times the string crosses the x axis when it is popped out.
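To get a feel for the numbers, here is a small sketch, assuming NumPy; the parameter values for ρ, T, and L are made up purely for illustration. It lists the first few critical angular velocities $\omega = \frac{k\pi}{L}\sqrt{T/\rho}$:

```python
import numpy as np

rho, T, L = 0.5, 100.0, 2.0  # hypothetical density (kg/m), tension (N), length (m)

for k in range(1, 5):
    omega = k * np.pi / L * np.sqrt(T / rho)
    print(f"k = {k}: string pops out at omega = {omega:.2f} rad/s")
```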
4.1.6 Exercises
Hint for the following exercises: Note that $\cos\sqrt{\lambda}\,(t - a)$ and $\sin\sqrt{\lambda}\,(t - a)$ are also solutions of
the homogeneous equation.
Exercise 4.1.2: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, x(a) = 0, x(b) = 0.
Exercise 4.1.3: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, x′(a) = 0, x′(b) = 0.
Exercise 4.1.4: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, x′(a) = 0, x(b) = 0.
Exercise 4.1.5: Compute all eigenvalues and eigenfunctions of $x'' + \lambda x = 0$, x(a) = x(b), x′(a) = x′(b).
Exercise 4.1.6: We have skipped the case of λ < 0 for the boundary value problem $x'' + \lambda x = 0$,
x(−π) = x(π), x′(−π) = x′(π). So finish the calculation and show that there are no negative
eigenvalues.
4.2 The trigonometric series
Note: 2 lectures, §9.1 in EP
4.2.1 Periodic functions and motivation
As motivation for studying Fourier series, suppose we have the problem
\[ x'' + \omega_0^2\,x = f(t), \tag{4.6} \]
for some periodic function f(t). We have already solved
\[ x'' + \omega_0^2\,x = F_0\cos\omega t. \tag{4.7} \]
One way to solve (4.6) is to decompose f(t) into a sum of cosines (and sines) and then solve many
problems of the form (4.7). We then use the principle of superposition to sum up all the solutions
we got to get a solution to (4.6).
Before we proceed, let us talk in a little more detail about periodic functions. A function
is said to be periodic with period P if f(t) = f(t + P) for all t. For brevity we will say f(t) is P-periodic.
Note that a P-periodic function is also 2P-periodic, 3P-periodic, and so on. For example,
cos t and sin t are 2π-periodic. So are cos kt and sin kt for all integers k. The constant functions are
an extreme example: they are periodic for any period (exercise).
Normally we will start with a function f (t) defined on some interval [−L, L] and we will want
to extend periodically to make it a 2L-periodic function. We do this extension by defining a new
function F(t) such that for t in [−L, L], F(t) = f (t). For t in [L, 3L], we define F(t) = f (t −2L), for
t in [−3L, −L], F(t) = f (t + 2L), and so on.
Example 4.2.1: Define $f(t) = 1 - t^2$ on [−1, 1]. Now extend periodically to a 2-periodic function.
See Figure 4.2.
You should be careful to distinguish between f (t) and its extension. A common mistake is to
assume that a formula for f (t) holds for its extension. It can be confusing when the formula for
f (t) is periodic, but with perhaps a different period.
Exercise 4.2.1: Define f(t) = cos t on [−π/2, π/2]. Now take the π-periodic extension and sketch
its graph. How does it compare to the graph of cos t?
4.2.2 Inner product and eigenvector decomposition
Suppose we have a symmetric matrix, that is, $A^T = A$. We have said before that the eigenvectors of
A are then orthogonal. Here the word orthogonal means that if $\vec{v}$ and $\vec{w}$ are two distinct eigenvectors
of A, then $(\vec{v}, \vec{w}) = 0$. In this case the inner product $(\vec{v}, \vec{w})$ is the dot product, which can be computed
as $\vec{v}^T\vec{w}$.
Figure 4.2: Periodic extension of the function $1 - t^2$.
To decompose a vector $\vec{v}$ in terms of mutually orthogonal vectors $\vec{w}_1$ and $\vec{w}_2$ we write
\[ \vec{v} = a_1\vec{w}_1 + a_2\vec{w}_2. \]
Let us find the formulas for $a_1$ and $a_2$. First let us compute
\[ (\vec{v}, \vec{w}_1) = (a_1\vec{w}_1 + a_2\vec{w}_2,\, \vec{w}_1) = a_1(\vec{w}_1, \vec{w}_1) + a_2(\vec{w}_2, \vec{w}_1) = a_1(\vec{w}_1, \vec{w}_1). \]
Therefore,
\[ a_1 = \frac{(\vec{v}, \vec{w}_1)}{(\vec{w}_1, \vec{w}_1)}. \]
Similarly,
\[ a_2 = \frac{(\vec{v}, \vec{w}_2)}{(\vec{w}_2, \vec{w}_2)}. \]
You probably remember this formula from vector calculus.
You probably remember this formula from vector calculus.
Example 4.2.2: Write $\vec{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ as a linear combination of $\vec{w}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$ and $\vec{w}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.
First note that $\vec{w}_1$ and $\vec{w}_2$ are orthogonal as $(\vec{w}_1, \vec{w}_2) = 1(1) + (-1)1 = 0$. Then
\[ a_1 = \frac{(\vec{v}, \vec{w}_1)}{(\vec{w}_1, \vec{w}_1)} = \frac{2(1) + 3(-1)}{1(1) + (-1)(-1)} = \frac{-1}{2}, \]
\[ a_2 = \frac{(\vec{v}, \vec{w}_2)}{(\vec{w}_2, \vec{w}_2)} = \frac{2 + 3}{1 + 1} = \frac{5}{2}. \]
Hence
\[ \begin{bmatrix} 2 \\ 3 \end{bmatrix} = \frac{-1}{2}\begin{bmatrix} 1 \\ -1 \end{bmatrix} + \frac{5}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix}. \]
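The same projection formulas are one line each in code. A sketch, assuming NumPy, reproducing Example 4.2.2:

```python
import numpy as np

v = np.array([2.0, 3.0])
w1 = np.array([1.0, -1.0])
w2 = np.array([1.0, 1.0])

a1 = (v @ w1) / (w1 @ w1)  # -1/2
a2 = (v @ w2) / (w2 @ w2)  #  5/2
print(a1, a2, np.allclose(a1 * w1 + a2 * w2, v))  # -0.5 2.5 True
```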
4.2.3 The trigonometric series
Now instead of decomposing a vector in terms of the eigenvectors of a matrix, we will decompose
a function in terms of eigenfunctions of a certain eigenvalue problem. In particular, the eigenvalue
problem we will use for the Fourier series is the following
\[ x'' + \lambda x = 0, \quad x(-\pi) = x(\pi), \quad x'(-\pi) = x'(\pi). \]
We have previously computed that the eigenfunctions are 1, cos kt, sin kt. That is, we will want to
find a representation of a 2π-periodic function f(t) as
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nt + b_n\sin nt. \]
This series is called the Fourier series† or the trigonometric series for f(t). Note that here we have
used the eigenfunction $\frac{1}{2}$ instead of 1. This is for convenience. We could also think of $1 = \cos 0t$,
so that we only need to look at cos kt and sin kt.
Just like for matrices, we will want to find a projection of f(t) onto the subspace generated by
the eigenfunctions. So we will want to define an inner product of functions. For example, to find
$a_n$ we want to compute $(f(t), \cos nt)$. We define the inner product as
\[ (f(t), g(t)) \overset{\mathrm{def}}{=} \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\,g(t)\,dt. \]
With this definition of the inner product, we have seen in the previous section that the eigenfunctions
cos kt (this includes the constant eigenfunction) and sin kt are orthogonal in the sense that
\[ (\cos mt, \cos nt) = 0 \quad \text{for } m \neq n, \]
\[ (\sin mt, \sin nt) = 0 \quad \text{for } m \neq n, \]
\[ (\sin mt, \cos nt) = 0 \quad \text{for all } m \text{ and } n. \]
By elementary calculus we have that $(\cos nt, \cos nt) = 1$ (except for n = 0) and $(\sin nt, \sin nt) = 1$.
For the constant we get that $(1, 1) = 2$. The coefficients are given by
\[ a_n = \frac{(f(t), \cos nt)}{(\cos nt, \cos nt)} = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos nt\,dt, \]
\[ b_n = \frac{(f(t), \sin nt)}{(\sin nt, \sin nt)} = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin nt\,dt. \]
Compare these expressions with the finite dimensional example. The formula above also works
for n = 0, or more simply,
\[ a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\,dt. \]
†Named after the French mathematician Jean Baptiste Joseph Fourier (1768 – 1830).
Let us check the formulas using the orthogonality properties. Suppose for a moment that
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nt + b_n\sin nt. \]
Then for m ≥ 1 we have
\[
\begin{aligned}
(f(t), \cos mt) &= \Bigl( \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nt + b_n\sin nt,\; \cos mt \Bigr) \\
&= \frac{a_0}{2}\,(1, \cos mt) + \sum_{n=1}^{\infty} a_n\,(\cos nt, \cos mt) + b_n\,(\sin nt, \cos mt) \\
&= a_m\,(\cos mt, \cos mt).
\end{aligned}
\]
And hence $a_m = \frac{(f(t),\,\cos mt)}{(\cos mt,\,\cos mt)}$.
Exercise 4.2.2: Carry out the calculation for $a_0$ and $b_m$.
Example 4.2.3: Take the function f(t) = t for t in (−π, π]. Extend f(t) periodically and write it as
a Fourier series. This function is called the sawtooth.
Figure 4.3: The graph of the sawtooth function.
The plot of the extended periodic function is given in Figure 4.3. Now we compute the coefficients.
Let us start with $a_0$:
\[ a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} t\,dt = 0. \]
We will often use the result from calculus that the integral of an odd function over a symmetric
interval is zero. Recall that an odd function is a function ϕ(t) such that ϕ(−t) = −ϕ(t). For example
the function t, the function sin t, or more to the point the function t cos mt are all odd.
\[ a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} t\cos mt\,dt = 0. \]
Let us move to $b_m$. Another useful fact from calculus is that the integral of an even function over a
symmetric interval is twice the integral of the same function over half the interval. Recall that an even
function is a function ϕ(t) such that ϕ(−t) = ϕ(t). For example, t sin mt is even.
\[
\begin{aligned}
b_m &= \frac{1}{\pi}\int_{-\pi}^{\pi} t\sin mt\,dt
= \frac{2}{\pi}\int_0^{\pi} t\sin mt\,dt \\
&= \frac{2}{\pi}\left( \Bigl[ \frac{-t\cos mt}{m} \Bigr]_{t=0}^{\pi} + \frac{1}{m}\int_0^{\pi} \cos mt\,dt \right)
= \frac{2}{\pi}\left( \frac{-\pi\cos m\pi}{m} + 0 \right)
= \frac{-2\cos m\pi}{m} = \frac{2\,(-1)^{m+1}}{m}.
\end{aligned}
\]
We have used the fact that
\[ \cos m\pi = (-1)^m = \begin{cases} 1 & \text{if } m \text{ even}, \\ -1 & \text{if } m \text{ odd}. \end{cases} \]
The series, therefore, is
\[ f(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n}\sin nt. \]
Let us write out the first 3 harmonics of the series for f(t):
\[ f(t) = 2\sin t - \sin 2t + \frac{2}{3}\sin 3t + \cdots \]
The plot of these first three terms of the series, along with a plot of the first 20 terms, is given in
Figure 4.4.
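To see the convergence for yourself, here is a minimal sketch, assuming NumPy, that evaluates partial sums of the sawtooth series; away from the discontinuities at odd multiples of π the sums approach f(t) = t:

```python
import numpy as np

def sawtooth_sum(t, N):
    # partial sum of sum_{n=1}^{N} 2(-1)^(n+1)/n * sin(nt)
    n = np.arange(1, N + 1)
    return np.sin(np.outer(t, n)) @ (2 * (-1.0)**(n + 1) / n)

t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # points inside (-pi, pi)
for N in (10, 100, 1000):
    print(N, np.max(np.abs(sawtooth_sum(t, N) - t)))  # error shrinks with N
```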
Example 4.2.4: Take the function
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t \leq 0, \\ \pi & \text{if } 0 < t \leq \pi. \end{cases} \]
Extend f(t) periodically and write it as a Fourier series. This function or its variants appear often
in applications, and the function is called the square wave.
Figure 4.4: First 3 (left graph) and 20 (right graph) harmonics of the sawtooth function.
Figure 4.5: The graph of the square wave function.
The plot of the extended periodic function is given in Figure 4.5. Now we compute the coefficients.
Let us start with $a_0$:
\[ a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\,dt = \frac{1}{\pi}\int_0^{\pi} \pi\,dt = \pi. \]
Next,
\[ a_m = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos mt\,dt = \frac{1}{\pi}\int_0^{\pi} \pi\cos mt\,dt = 0. \]
And finally,
\[
\begin{aligned}
b_m &= \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin mt\,dt
= \frac{1}{\pi}\int_0^{\pi} \pi\sin mt\,dt \\
&= \Bigl[ \frac{-\cos mt}{m} \Bigr]_{t=0}^{\pi}
= \frac{1 - \cos\pi m}{m} = \frac{1 - (-1)^m}{m}
= \begin{cases} \frac{2}{m} & \text{if } m \text{ is odd}, \\ 0 & \text{if } m \text{ is even}. \end{cases}
\end{aligned}
\]
The series, therefore, is
\[ f(t) = \frac{\pi}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{n}\sin nt = \frac{\pi}{2} + \sum_{k=1}^{\infty} \frac{2}{2k-1}\sin\bigl( (2k-1)\,t \bigr). \]
Let us write out the first 3 harmonics of the series for f(t):
\[ f(t) = \frac{\pi}{2} + 2\sin t + \frac{2}{3}\sin 3t + \cdots \]
The plot of these first three terms of the series, along with a plot of the first 20 harmonics, is given
in Figure 4.6.
Figure 4.6: First 3 (left graph) and 20 (right graph) harmonics of the square wave function.
We have so far skirted the issue of convergence. It turns out that, for example, for the sawtooth
function f(t), the equation
\[ f(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n}\sin nt \]
is only an equality for t where the sawtooth is continuous. That is, we do not get an equality for
t = −π, π and all the other discontinuities of f(t). It is not hard to see that when t is an integer
multiple of π (which includes all the discontinuities), then
\[ \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n}\sin nt = 0. \]
If we redefine f(t) on [−π, π] as
\[ f(t) = \begin{cases} 0 & \text{if } t = -\pi \text{ or } t = \pi, \\ t & \text{otherwise}, \end{cases} \]
and extend periodically, then the series equals the extended f(t) everywhere, including the discontinuities.
We will generally not worry about changing the function at several (finitely many)
points.
We will say more about convergence in the next section. Let us however mention briefly an
effect of the discontinuity. Let us zoom in near the discontinuity in the square wave. Further, let
us plot the first 100 harmonics, see Figure 4.7. You will notice that while the series is a very good
approximation away from the discontinuities, the error (the overshoot) near the discontinuity at
t = π does not seem to be getting any smaller. This behavior is known as the Gibbs phenomenon.
The region where the error is large gets smaller and smaller, however, the more terms in the series
you take.
Figure 4.7: Gibbs phenomenon in action.
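The overshoot is easy to measure numerically. A sketch, assuming NumPy: the maximum of the partial sums just to the left of the jump at t = π stays roughly 0.28 above π (about 9% of the jump size) no matter how many terms we take.

```python
import numpy as np

def square_sum(t, N):
    # pi/2 + sum over odd n up to N of (2/n) sin(nt)
    s = np.full_like(t, np.pi / 2)
    for n in range(1, N + 1, 2):
        s += (2.0 / n) * np.sin(n * t)
    return s

t = np.linspace(2.8, np.pi, 20000)
for N in (101, 1001):
    print(N, square_sum(t, N).max() - np.pi)  # stays near 0.28
```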
We can think of a periodic function as a “signal”, a superposition of many signals of pure
frequency. That is, we could think of, say, the square wave as a tone of a certain frequency. It is
in fact a superposition of many different pure tones whose frequencies are multiples of the base
frequency. On the other hand, a simple sine wave is only the pure tone. The simplest way to make
sound using a computer is with square waves, and the sound is very different from nice pure
tones. If you have played video games from the 1980s or so, you have heard what square waves
sound like.
4.2.4 Exercises
Exercise 4.2.3: Suppose f (t) is defined on [−π, π] as sin 5t + cos 3t. Extend periodically and
compute the Fourier series of f (t).
Exercise 4.2.4: Suppose f (t) is defined on [−π, π] as |t|. Extend periodically and compute the
Fourier series of f (t).
Exercise 4.2.5: Suppose f(t) is defined on [−π, π] as $|t|^3$. Extend periodically and compute the
Fourier series of f(t).
Exercise 4.2.6: Suppose f(t) is defined on [−π, π] as
\[ f(t) = \begin{cases} -1 & \text{if } -\pi < t \leq 0, \\ 1 & \text{if } 0 < t \leq \pi. \end{cases} \]
Extend periodically and compute the Fourier series of f(t).
Exercise 4.2.7: Suppose f(t) is defined on [−π, π] as $t^3$. Extend periodically and compute the
Fourier series of f(t).
Exercise 4.2.8: Suppose f(t) is defined on [−π, π] as $t^2$. Extend periodically and compute the
Fourier series of f(t).
There is another form of the Fourier series using complex exponentials that is sometimes easier
to work with.
Exercise 4.2.9: Let
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nt + b_n\sin nt. \]
Use Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ to show that there exist complex numbers $c_m$ such that
\[ f(t) = \sum_{m=-\infty}^{\infty} c_m\,e^{imt}. \]
Note that the sum now ranges over all the integers, including negative ones. Do not worry about
convergence in this calculation. Hint: It may be better to start from the complex exponential form
and write the series as
\[ c_0 + \sum_{m=1}^{\infty} \bigl( c_m\,e^{imt} + c_{-m}\,e^{-imt} \bigr). \]
4.3 More on the Fourier series
Note: 2 lectures, §9.2 – §9.3 in EP
Before reading the lecture, it may be good to first try Project IV (Fourier series) from the
IODE website: http://www.math.uiuc.edu/iode/. After reading the lecture it may be good
to continue with Project V (Fourier series again).
4.3.1 2L-periodic functions
We have computed the Fourier series for a 2π-periodic function, but what about functions of
different periods? Well, fear not, the computation is a simple case of change of variables. We can just
rescale the independent axis. Suppose that we have a 2L-periodic function f(t) (L is called the
half period). Let $s = \frac{\pi}{L}t$. Then the function
\[ g(s) = f\Bigl( \frac{L}{\pi}s \Bigr) \]
is 2π-periodic. We want to also rescale all our sines and cosines. We will want to write
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t. \]
If we change variables to s, we see that
\[ g(s) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos ns + b_n\sin ns. \]
So we can compute $a_n$ and $b_n$ as before. After we write down the integrals, we change variables
back to t:
\[ a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} g(s)\,ds = \frac{1}{L}\int_{-L}^{L} f(t)\,dt, \]
\[ a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} g(s)\cos ns\,ds = \frac{1}{L}\int_{-L}^{L} f(t)\cos\frac{n\pi}{L}t\,dt, \]
\[ b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} g(s)\sin ns\,ds = \frac{1}{L}\int_{-L}^{L} f(t)\sin\frac{n\pi}{L}t\,dt. \]
The two most common half periods that show up in examples are π and 1 because of their simplicity.
We should stress that we have done no new mathematics; we have only changed variables.
If you understand the Fourier series for 2π-periodic functions, you understand it for 2L-periodic
functions. All that we are doing is moving some constants around, but all the mathematics is the
same.
Example 4.3.1: Let
f (t) = |t| for −1 < t < 1,
extended periodically. The plot of the periodic extension is given in Figure 4.8. Compute the
Fourier series of f (t).
Figure 4.8: Periodic extension of the function f (t).
We will write $f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos n\pi t + b_n\sin n\pi t$. For n ≥ 1 we note that $|t|\cos n\pi t$ is even
and hence
\[
\begin{aligned}
a_n &= \int_{-1}^{1} f(t)\cos n\pi t\,dt
= 2\int_0^1 t\cos n\pi t\,dt \\
&= 2\Bigl[ t\,\frac{1}{n\pi}\sin n\pi t \Bigr]_{t=0}^{1} - 2\int_0^1 \frac{1}{n\pi}\sin n\pi t\,dt \\
&= 0 + \frac{2}{n^2\pi^2}\Bigl[ \cos n\pi t \Bigr]_{t=0}^{1}
= \frac{2\bigl( (-1)^n - 1 \bigr)}{n^2\pi^2}
= \begin{cases} 0 & \text{if } n \text{ is even}, \\ \frac{-4}{n^2\pi^2} & \text{if } n \text{ is odd}. \end{cases}
\end{aligned}
\]
Next we find $a_0$:
\[ a_0 = \int_{-1}^{1} |t|\,dt = 1. \]
Note: You should be able to find this integral by thinking about the integral as the area under the
graph without doing any computation at all. Finally we can find $b_n$. Here, we notice that $|t|\sin n\pi t$
is odd and, therefore,
\[ b_n = \int_{-1}^{1} f(t)\sin n\pi t\,dt = 0. \]
Hence, the series is
\[ f(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{n^2\pi^2}\cos n\pi t. \]
Let us explicitly write down the first few terms of the series up to the 3rd harmonic:
\[ f(t) \approx \frac{1}{2} - \frac{4}{\pi^2}\cos\pi t - \frac{4}{9\pi^2}\cos 3\pi t - \cdots \]
The plot of these few terms and also a plot up to the 20th harmonic is given in Figure 4.9. You
should notice how close the graph is to the real function. You should also notice that there is no
“Gibbs phenomenon” present, as there are no discontinuities.
Figure 4.9: Fourier series of f(t) up to the 3rd harmonic (left graph) and up to the 20th harmonic
(right graph).
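The coefficient formulas are straightforward to check numerically. A sketch, assuming NumPy and SciPy, for the half period L = 1 and the function f(t) = |t| of Example 4.3.1:

```python
import numpy as np
from scipy.integrate import quad

L = 1.0
f = np.abs  # f(t) = |t| on [-1, 1]

def a(n):
    # a_n = (1/L) * integral_{-L}^{L} f(t) cos(n pi t / L) dt
    return quad(lambda t: f(t) * np.cos(n * np.pi * t / L), -L, L)[0] / L

print(a(0))  # 1.0
for n in (1, 2, 3):
    exact = 0.0 if n % 2 == 0 else -4 / (n**2 * np.pi**2)
    print(n, a(n), exact)  # the numerics match the formula above
```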
4.3.2 Convergence
We will need the one sided limits of functions. We will use the following notation:
\[ f(c-) = \lim_{t \uparrow c} f(t), \quad \text{and} \quad f(c+) = \lim_{t \downarrow c} f(t). \]
If you are unfamiliar with this notation, $\lim_{t \uparrow c} f(t)$ means we are taking a limit of f(t) as t approaches c
from below (i.e. t < c), and $\lim_{t \downarrow c} f(t)$ means we are taking a limit of f(t) as t approaches c from above
(i.e. t > c). For example, for the square wave function
\[ f(t) = \begin{cases} 0 & \text{if } -\pi < t \leq 0, \\ \pi & \text{if } 0 < t \leq \pi, \end{cases} \tag{4.8} \]
we have f(0−) = 0 and f(0+) = π.
Let f(t) be a function defined on an interval [a, b]. Suppose that we find finitely many points
$a = t_0, t_1, t_2, \ldots, t_k = b$ in the interval, such that f(t) is continuous on the intervals $(t_0, t_1)$, $(t_1, t_2)$,
…, $(t_{k-1}, t_k)$. Also suppose that the one sided limits $f(t_j-)$ and $f(t_j+)$ exist at each of these points. Then we say f(t)
is piecewise continuous.
If moreover, f(t) is differentiable at all but finitely many points, and $f'(t)$ is piecewise continuous,
then f(t) is said to be piecewise smooth.
Example 4.3.2: The square wave function (4.8) is piecewise smooth on [−π, π] or any other interval.
In such a case we simply say that the function is piecewise smooth.
Example 4.3.3: The function f(t) = |t| is piecewise smooth.
Example 4.3.4: The function $f(t) = \frac{1}{t}$ is not piecewise smooth on [−1, 1] (or any other interval
containing zero). In fact, it is not even piecewise continuous.
Example 4.3.5: The function $f(t) = \sqrt[3]{t}$ is not piecewise smooth on [−1, 1] (or any other interval
containing zero). f(t) is continuous, but the derivative of f(t) is unbounded near zero and hence
not piecewise continuous.
For piecewise smooth functions, the convergence of the Fourier series has an easy answer.
Theorem 4.3.1. Suppose f(t) is a 2L-periodic piecewise smooth function. Let
\[ \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t \]
be the Fourier series for f(t). Then the series converges for all t. If f(t) is continuous near t, then
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t. \]
Otherwise,
\[ \frac{f(t-) + f(t+)}{2} = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t. \]
If we happen to have that $f(t) = \frac{f(t-) + f(t+)}{2}$ at all the discontinuities, the Fourier series converges
to f(t) everywhere. We can always just redefine f(t) by changing the value at each discontinuity
appropriately. Then we can write an equals sign between f(t) and the series without any worry.
We mentioned this fact briefly at the end of the last section.
Note that the theorem does not say how fast the series converges. Think back to the discussion of
the Gibbs phenomenon in the last section. The closer you get to the discontinuity, the more terms you
need to take to get an accurate approximation to the function.
4.3.3 Differentiation and integration of Fourier series
Not only does the Fourier series converge nicely, but it is easy to differentiate and integrate the series.
We can do this just by differentiating or integrating term by term.
Theorem 4.3.2. Suppose
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t \]
is a piecewise smooth continuous function and the derivative $f'(t)$ is piecewise smooth. Then the
derivative can be obtained by differentiating term by term:
\[ f'(t) = \sum_{n=1}^{\infty} \frac{-a_n\,n\pi}{L}\sin\frac{n\pi}{L}t + \frac{b_n\,n\pi}{L}\cos\frac{n\pi}{L}t. \]
It is important that the function is continuous. It can have corners, but no jumps. Otherwise the
differentiated series will fail to converge. For an exercise, take the series obtained for the square
wave and try to differentiate the series. Similarly, we can also integrate a Fourier series.
Theorem 4.3.3. Suppose
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t \]
is a piecewise smooth function. Then the antiderivative is obtained by antidifferentiating term by
term, and so
\[ F(t) = \frac{a_0\,t}{2} + C + \sum_{n=1}^{\infty} \frac{a_n\,L}{n\pi}\sin\frac{n\pi}{L}t + \frac{-b_n\,L}{n\pi}\cos\frac{n\pi}{L}t, \]
where $F'(t) = f(t)$ and C is an arbitrary constant.
Note that the series for F(t) is no longer a Fourier series, as it contains the $\frac{a_0 t}{2}$ term. The
antiderivative of a periodic function need no longer be periodic, and so we should not expect a
Fourier series.
4.3.4 Rates of convergence and smoothness
Let us do an example of a periodic function with one derivative everywhere.
Example 4.3.6: Take the function
\[ f(t) = \begin{cases} (1 - t)\,t & \text{if } 0 < t < 1, \\ (t + 1)\,t & \text{if } -1 < t < 0, \end{cases} \]
and extend to a 2-periodic function. The plot is given in Figure 4.10.
Note that this function has a derivative everywhere, but it does not have two derivatives at all
the integers.
Figure 4.10: Smooth 2-periodic function.
Exercise 4.3.1: Compute $f''(0+)$ and $f''(0-)$.
Let us compute the Fourier series coefficients. The actual computation involves several integrations
by parts and is left to the student.
\[ a_0 = \int_{-1}^{1} f(t)\,dt = \int_{-1}^{0} (t + 1)\,t\,dt + \int_0^1 (1 - t)\,t\,dt = 0, \]
\[ a_n = \int_{-1}^{1} f(t)\cos n\pi t\,dt = \int_{-1}^{0} (t + 1)\,t\cos n\pi t\,dt + \int_0^1 (1 - t)\,t\cos n\pi t\,dt = 0, \]
\[ b_n = \int_{-1}^{1} f(t)\sin n\pi t\,dt = \int_{-1}^{0} (t + 1)\,t\sin n\pi t\,dt + \int_0^1 (1 - t)\,t\sin n\pi t\,dt
= \frac{4\bigl( 1 - (-1)^n \bigr)}{\pi^3 n^3} = \begin{cases} \frac{8}{\pi^3 n^3} & \text{if } n \text{ is odd}, \\ 0 & \text{if } n \text{ is even}. \end{cases} \]
This series converges very fast. If you plot up to the third harmonic, that is, the function
\[ \frac{8}{\pi^3}\sin\pi t + \frac{8}{27\pi^3}\sin 3\pi t, \]
it is almost indistinguishable from the plot of f(t) in Figure 4.10. In fact, the coefficient $\frac{8}{27\pi^3}$ is
already just 0.0096 (approximately). The reason for this behavior is the $n^3$ term in the denominator.
The coefficients $b_n$ in this case go to zero as fast as $\frac{1}{n^3}$ goes to zero.
It is a general fact that if you have one derivative, the Fourier coefficients will go to zero approximately
like $\frac{1}{n^3}$. If you have only a continuous function, then the Fourier coefficients will go
to zero as $\frac{1}{n^2}$, and if you have discontinuities, then the Fourier coefficients will go to zero approximately
as $\frac{1}{n}$. Therefore, we can tell a lot about the smoothness of a function by looking at its
Fourier coefficients.
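This decay-rate heuristic can be observed directly. A sketch, assuming NumPy and SciPy, comparing the sine coefficients of the sawtooth (a jump, decay like 1/n) with the cosine coefficients of |t| (continuous with corners, decay like 1/n²) on [−π, π]:

```python
import numpy as np
from scipy.integrate import quad

def bn_sawtooth(n):  # jump discontinuity: b_n = 2(-1)^(n+1)/n, size ~ 1/n
    return quad(lambda t: t * np.sin(n * t), -np.pi, np.pi, limit=200)[0] / np.pi

def an_abs(n):       # continuous with corners: |a_n| ~ 1/n^2
    return quad(lambda t: abs(t) * np.cos(n * t), -np.pi, np.pi, limit=200)[0] / np.pi

for n in (5, 11, 21, 41):  # odd n so that neither coefficient vanishes
    print(n, abs(bn_sawtooth(n)), abs(an_abs(n)))
```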
To justify this behavior, take for example the function defined by the Fourier series
\[ f(t) = \sum_{n=1}^{\infty} \frac{1}{n^3}\sin nt. \]
When we differentiate term by term we notice
\[ f'(t) = \sum_{n=1}^{\infty} \frac{1}{n^2}\cos nt. \]
Therefore, the coefficients now go down like $\frac{1}{n^2}$, which we said means that we have a continuous
function. That is, the derivative of $f'(t)$ may be defined at most points, but at least at some points
it is not defined. If we differentiate again we find that $f''(t)$ really is not defined at some points as
we get a piecewise differentiable function
\[ f''(t) = \sum_{n=1}^{\infty} \frac{-1}{n}\sin nt. \]
This function is similar to the sawtooth. If we tried to differentiate again we would obtain
\[ \sum_{n=1}^{\infty} -\cos nt, \]
which does not converge!
Exercise 4.3.2: Use a computer to plot f(t), $f'(t)$, and $f''(t)$. That is, plot, say, the first 5 harmonics
of the functions. At what points does $f''(t)$ have discontinuities?
4.3.5 Exercises
Exercise 4.3.3: Let
\[ f(t) = \begin{cases} 0 & \text{if } -1 < t < 0, \\ t & \text{if } 0 \leq t < 1, \end{cases} \]
extended periodically. a) Compute the Fourier series for f(t). b) Write out the series explicitly up
to the 3rd harmonic.
Exercise 4.3.4: Let
\[ f(t) = \begin{cases} -t & \text{if } -1 < t < 0, \\ t^2 & \text{if } 0 \leq t < 1, \end{cases} \]
extended periodically. a) Compute the Fourier series for f(t). b) Write out the series explicitly up
to the 3rd harmonic.
Exercise 4.3.5: Let
\[ f(t) = \begin{cases} \frac{-t}{10} & \text{if } -10 < t < 0, \\ \frac{t}{10} & \text{if } 0 \leq t < 10, \end{cases} \]
extended periodically (period is 20). a) Compute the Fourier series for f(t). b) Write out the series
explicitly up to the 3rd harmonic.
Exercise 4.3.6: Let $f(t) = \sum_{n=1}^{\infty} \frac{1}{n^3}\cos nt$. Is f(t) continuous and differentiable everywhere? Find
the derivative (if it exists) or justify why it does not exist.
Exercise 4.3.7: Let $f(t) = \sum_{n=1}^{\infty} \frac{(-1)^n}{n}\sin nt$. Is f(t) differentiable everywhere? Find the derivative
(if it exists) or justify why it does not exist.
4.4 Sine and cosine series
Note: 2 lectures, §9.3 in EP
4.4.1 Odd and even periodic functions
You may have noticed by now that an odd function has no cosine terms in its Fourier series and
an even function has no sine terms in its Fourier series. This observation is not a coincidence. Let
us look at even and odd periodic functions in more detail.
Recall that a function f(t) is odd if f(−t) = −f(t), and a function f(t) is even if f(−t) = f(t). For
example, cos nt is even and sin nt is odd. Similarly, the function $t^k$ is even if k is even and odd when
k is odd.
Exercise 4.4.1: Take two functions f(t) and g(t) and define their product h(t) = f(t)g(t). a) Suppose
both are odd; is h(t) odd or even? b) Suppose one is even and one is odd; is h(t) odd or even?
c) Suppose both are even; is h(t) odd or even?
If f(t) is odd and g(t) is even, we cannot in general say anything about the sum f(t) + g(t). In fact, the
Fourier series of a function is really a sum of an odd (the sine terms) and an even (the cosine terms)
function.
In this section we are of course interested in odd and even periodic functions. We have previously
defined the 2L-periodic extension of a function defined on the interval [−L, L]. Sometimes
we are only interested in the function on the range [0, L], and it would be convenient to have an odd
(resp. even) function. If the function is odd (resp. even), all the cosine (resp. sine) terms will disappear. What
we can do is take the odd (resp. even) extension of the function to [−L, L] and then extend
periodically to a 2L-periodic function.
Take a function f(t) defined on [0, L]. On (−L, L] define the functions
\[ F_{\mathrm{odd}}(t) \overset{\mathrm{def}}{=} \begin{cases} f(t) & \text{if } 0 \leq t \leq L, \\ -f(-t) & \text{if } -L < t < 0, \end{cases}
\qquad
F_{\mathrm{even}}(t) \overset{\mathrm{def}}{=} \begin{cases} f(t) & \text{if } 0 \leq t \leq L, \\ f(-t) & \text{if } -L < t < 0. \end{cases} \]
And extend $F_{\mathrm{odd}}(t)$ and $F_{\mathrm{even}}(t)$ to be 2L-periodic. Then $F_{\mathrm{odd}}(t)$ is called the odd periodic extension
of f(t), and $F_{\mathrm{even}}(t)$ is called the even periodic extension of f(t).
Exercise 4.4.2: Check that $F_{\mathrm{odd}}(t)$ is odd and that $F_{\mathrm{even}}(t)$ is even.
Example 4.4.1: Take the function f(t) = t(1 − t) defined on [0, 1]. Figure 4.11
shows the plots of the odd and even extensions of f(t).
Figure 4.11: Odd and even 2-periodic extension of f (t) = t(1 − t), 0 ≤ t ≤ 1.
4.4.2 Sine and cosine series
Let f(t) be an odd 2L-periodic function. We write the Fourier series for f(t): we compute the
coefficients $a_n$ (including n = 0) and get
\[ a_n = \frac{1}{L}\int_{-L}^{L} f(t)\cos\frac{n\pi}{L}t\,dt = 0. \]
That is, there are no cosine terms in a Fourier series of an odd function. The integral is zero because
$f(t)\cos\frac{n\pi}{L}t$ is an odd function (the product of an odd and an even function is odd) and the integral
of an odd function over a symmetric interval is always zero. Furthermore, the integral of an even
function over a symmetric interval [−L, L] is twice the integral of the function over the interval
[0, L]. The function $f(t)\sin\frac{n\pi}{L}t$ is the product of two odd functions and hence even, so
\[ b_n = \frac{1}{L}\int_{-L}^{L} f(t)\sin\frac{n\pi}{L}t\,dt = \frac{2}{L}\int_0^{L} f(t)\sin\frac{n\pi}{L}t\,dt. \]
We can now write the Fourier series of f(t) as
\[ \sum_{n=1}^{\infty} b_n\sin\frac{n\pi}{L}t. \]
Similarly, if f(t) is an even 2L-periodic function, then for the same exact reasons as above we find
that $b_n = 0$ and
\[ a_n = \frac{2}{L}\int_0^{L} f(t)\cos\frac{n\pi}{L}t\,dt. \]
The formula still works for n = 0, in which case it becomes
\[ a_0 = \frac{2}{L}\int_0^{L} f(t)\,dt. \]
The Fourier series is then
\[ \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t. \]
An interesting consequence is that the coefficients of the Fourier series of an odd (or even)
function can be computed by just integrating over the half interval. Therefore, we can compute the
odd (or even) extension of a function as a Fourier series by computing certain integrals over the
interval where the original function is defined.
Theorem 4.4.1. Let f(t) be a piecewise smooth function defined on [0, L]. Then the odd extension
of f(t) has the Fourier series
\[ F_{\mathrm{odd}}(t) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi}{L}t, \quad\text{where}\quad b_n = \frac{2}{L}\int_0^{L} f(t)\sin\frac{n\pi}{L}t\,dt. \]
The even extension of f(t) has the Fourier series
\[ F_{\mathrm{even}}(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t, \quad\text{where}\quad a_n = \frac{2}{L}\int_0^{L} f(t)\cos\frac{n\pi}{L}t\,dt. \]
The series $\sum_{n=1}^{\infty} b_n\sin\frac{n\pi}{L}t$ is called the sine series of f(t), and the series $\frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t$
is called the cosine series of f(t). It is often the case that we do not actually care what happens
outside of [0, L]. In this case, we can pick whichever series fits our problem better.
It is not necessary to start with the full Fourier series to obtain the sine and cosine series. The
sine series is really the eigenfunction expansion of f(t) using the eigenfunctions of the eigenvalue
problem $x'' + \lambda x = 0$, x(0) = 0, x(L) = 0. The cosine series is the eigenfunction expansion of f(t)
using the eigenfunctions of the eigenvalue problem $x'' + \lambda x = 0$, x′(0) = 0, x′(L) = 0. We could,
therefore, have gotten the same formulas by defining the inner product
\[ (f(t), g(t)) = \int_0^{L} f(t)\,g(t)\,dt, \]
and following the procedure of § 4.2. This point of view is useful because many times we use
a specific series because our underlying question will lead to a certain eigenvalue problem. In
fact, if the eigenvalue problem is not one of the three we covered so far, you can still do
an eigenfunction expansion, generalizing the results of this chapter. We will deal with such a
generalization in chapter 5.
Example 4.4.2: Find the Fourier series of the even periodic extension of the function $f(t) = t^2$ for
0 ≤ t ≤ π.
We will write
\[ f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nt, \]
where
\[ a_0 = \frac{2}{\pi}\int_0^{\pi} t^2\,dt = \frac{2\pi^2}{3}, \]
and
\[ a_n = \frac{2}{\pi}\int_0^{\pi} t^2\cos nt\,dt
= \frac{2}{\pi}\Bigl[ t^2\,\frac{1}{n}\sin nt \Bigr]_0^{\pi} - \frac{4}{n\pi}\int_0^{\pi} t\sin nt\,dt
= \frac{4}{n^2\pi}\Bigl[ t\cos nt \Bigr]_0^{\pi} - \frac{4}{n^2\pi}\int_0^{\pi} \cos nt\,dt
= \frac{4\,(-1)^n}{n^2}. \]
Note that we have detected the “continuity” of the extension, since the coefficients decay as $\frac{1}{n^2}$. That
is, the even extension of $t^2$ has no jump discontinuities. It will have corners, though, since the
derivative (which will be an odd function and a sine series) has a series whose coefficients
decay only as $\frac{1}{n}$, so it will have jumps.
Explicitly, the first few terms of the series are
\[ \frac{\pi^2}{3} - 4\cos t + \cos 2t - \frac{4}{9}\cos 3t + \cdots \]
Exercise 4.4.3: a) Compute the derivative of the even extension of f(t) above and verify it has
jump discontinuities. Use the actual definition of f(t), not its cosine series! b) Why is it that the
derivative of the even extension of f(t) is the odd extension of $f'(t)$?
4.4.3 Application
We have said that Fourier series ties in to the boundary value problems we studied earlier. Let us
see this connection in more detail.
Suppose we have the boundary value problem for 0 < t < L,
\[ x''(t) + \lambda\,x(t) = f(t), \]
with the Dirichlet boundary conditions x(0) = 0, x(L) = 0. By using the Fredholm alternative
(Theorem 4.1.2), we note that as long as λ is not an eigenvalue of the underlying
homogeneous problem, there exists a unique solution. Note that the eigenfunctions of this
eigenvalue problem are the functions $\sin\frac{n\pi}{L}t$. Therefore, to find the solution, we first find the
Fourier sine series for f(t). We write x as a sine series as well, with unknown coefficients. We
substitute into the equation and solve for the Fourier coefficients of x.
If, on the other hand, we have the Neumann boundary conditions x′(0) = 0, x′(L) = 0, we do
the same procedure using the cosine series. These methods are best seen by examples.
Example 4.4.3: Take the boundary value problem for 0 < t < 1,
\[ x''(t) + 2x(t) = f(t), \]
where f(t) = t on 0 < t < 1. We look for a solution x satisfying the Dirichlet conditions
x(0) = 0, x(1) = 0. We write f(t) as a sine series
\[ f(t) = \sum_{n=1}^{\infty} c_n\sin n\pi t, \quad\text{where}\quad c_n = 2\int_0^1 t\sin n\pi t\,dt = \frac{2\,(-1)^{n+1}}{n\pi}. \]
We write x(t) as
\[ x(t) = \sum_{n=1}^{\infty} b_n\sin n\pi t. \]
We plug in to obtain
\[
\begin{aligned}
x''(t) + 2x(t) &= \sum_{n=1}^{\infty} \bigl[ -b_n\,n^2\pi^2\sin n\pi t \bigr] + 2\sum_{n=1}^{\infty} \bigl[ b_n\sin n\pi t \bigr] \\
&= \sum_{n=1}^{\infty} b_n\,(2 - n^2\pi^2)\sin n\pi t \\
&= f(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n\pi}\sin n\pi t.
\end{aligned}
\]
Therefore,
\[ b_n\,(2 - n^2\pi^2) = \frac{2\,(-1)^{n+1}}{n\pi} \quad\text{or}\quad b_n = \frac{2\,(-1)^{n+1}}{n\pi\,(2 - n^2\pi^2)}. \]
We have thus obtained a Fourier series for the solution
\[ x(t) = \sum_{n=1}^{\infty} \frac{2\,(-1)^{n+1}}{n\pi\,(2 - n^2\pi^2)}\sin n\pi t. \]
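A sketch, assuming SciPy, that cross-checks this series against a direct numerical solution of the boundary value problem; the helper names rhs, bc, and series_x are ours:

```python
import numpy as np
from scipy.integrate import solve_bvp

# x'' + 2x = t, x(0) = x(1) = 0, written as a first order system [x, x']
def rhs(t, y):
    return np.vstack([y[1], t - 2 * y[0]])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

sol = solve_bvp(rhs, bc, np.linspace(0, 1, 11), np.zeros((2, 11)))

def series_x(t, N=200):
    n = np.arange(1, N + 1)
    b = 2 * (-1.0)**(n + 1) / (n * np.pi * (2 - n**2 * np.pi**2))
    return np.sin(np.outer(t, n)) @ b

t = np.linspace(0, 1, 101)
print(np.max(np.abs(series_x(t) - sol.sol(t)[0])))  # agree to several decimals
```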
Example 4.4.4: Similarly we handle the Neumann conditions. Take the same boundary value
problem for 0 < t < 1,
\[ x''(t) + 2x(t) = f(t), \]
where f(t) = t on 0 < t < 1. However, let us now consider the Neumann conditions x′(0) = 0,
x′(1) = 0. We write f(t) as a cosine series
\[ f(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos n\pi t, \]
where
\[ c_0 = 2\int_0^1 t\,dt = 1, \]
and
\[ c_n = 2\int_0^1 t\cos n\pi t\,dt = \frac{2\bigl( (-1)^n - 1 \bigr)}{\pi^2 n^2} = \begin{cases} \frac{-4}{\pi^2 n^2} & \text{if } n \text{ odd}, \\ 0 & \text{if } n \text{ even}. \end{cases} \]
We write x(t) as a cosine series
\[ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos n\pi t. \]
We plug in to obtain
\[
\begin{aligned}
x''(t) + 2x(t) &= \sum_{n=1}^{\infty} \bigl[ -a_n\,n^2\pi^2\cos n\pi t \bigr] + a_0 + 2\sum_{n=1}^{\infty} \bigl[ a_n\cos n\pi t \bigr] \\
&= a_0 + \sum_{n=1}^{\infty} a_n\,(2 - n^2\pi^2)\cos n\pi t \\
&= f(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{\pi^2 n^2}\cos n\pi t.
\end{aligned}
\]
Therefore, $a_0 = \frac{1}{2}$, $a_n = 0$ for n even, and for n odd (n ≥ 1),
\[ a_n\,(2 - n^2\pi^2) = \frac{-4}{\pi^2 n^2} \quad\text{or}\quad a_n = \frac{-4}{n^2\pi^2\,(2 - n^2\pi^2)}. \]
We have thus obtained a Fourier series for the solution
\[ x(t) = \frac{1}{4} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{-4}{n^2\pi^2\,(2 - n^2\pi^2)}\cos n\pi t. \]
4.4.4 Exercises
Exercise 4.4.4: Take $f(t) = (t - 1)^2$ defined on 0 ≤ t ≤ 1. a) Sketch the plot of the even periodic
extension of f. b) Sketch the plot of the odd periodic extension of f.
Exercise 4.4.5: Find the Fourier series of both the odd and even periodic extension of the function
$f(t) = (t - 1)^2$ for 0 ≤ t ≤ 1. Can you tell which extension is continuous from the Fourier series
coefficients?
Exercise 4.4.6: Find the Fourier series of both the odd and even periodic extension of the function
f(t) = t for 0 ≤ t ≤ π.
Exercise 4.4.7: Find the Fourier series of the even periodic extension of the function f(t) = sin t
for 0 ≤ t ≤ π.
Exercise 4.4.8: Let
\[ x''(t) + 4x(t) = f(t), \]
where f(t) = 1 on 0 < t < 1. a) Solve for the Dirichlet conditions x(0) = 0, x(1) = 0. b) Solve for
the Neumann conditions x′(0) = 0, x′(1) = 0.
Exercise 4.4.9: Let
\[ x''(t) + 9x(t) = f(t), \]
for f(t) = sin 2πt on 0 < t < 1. a) Solve for the Dirichlet conditions x(0) = 0, x(1) = 0. b) Solve
for the Neumann conditions x′(0) = 0, x′(1) = 0.
Exercise 4.4.10: Let
\[ x''(t) + 3x(t) = f(t), \quad x(0) = 0, \quad x(1) = 0, \]
where $f(t) = \sum_{n=1}^{\infty} b_n\sin n\pi t$. Write the solution x(t) as a Fourier series, where the coefficients are
given in terms of $b_n$.
4.5 Applications of Fourier series
Note: 2 lectures, §9.4 in EP
4.5.1 Periodically forced oscillation
Let us return to the forced oscillations. We have a mass spring
damping c
m
k
F(t)
system as before, where we have a mass m on a spring with spring
constant k, with damping c, and a force F(t) applied to the mass.
Now suppose that the forcing function F(t) is 2L-periodic for
some L > 0. We have already seen this problem in chapter 2
with a simple F(t). The equation that governs this particular setup is
mx
··
(t) + cx
·
(t) + kx(t) = F(t). (4.9)
We know that the general solution will consist of $x_c$, which solves the associated homogeneous
equation $mx'' + cx' + kx = 0$, and a particular solution of (4.9) which we call $x_p$. Since the complementary
solution $x_c$ will decay as time goes on, we are mostly interested in the part of $x_p$ which
does not decay. We call this $x_p$ the steady periodic solution, as before. What is different now is that we
consider an arbitrary forcing function F(t).
For simplicity, let us suppose that c = 0. The problem with c > 0 is very similar. The equation
\[ mx'' + kx = 0 \]
has the general solution
\[ x(t) = A\cos\omega_0 t + B\sin\omega_0 t, \]
where $\omega_0 = \sqrt{\frac{k}{m}}$. So any solution to $mx''(t) + kx(t) = F(t)$ will be of the form $A\cos\omega_0 t + B\sin\omega_0 t + x_{sp}$,
where $x_{sp}$ is the particular steady periodic solution. The steady periodic solution will always
have the same period as F(t).
In the spirit of the last section and the idea of undetermined coefficients, we will first write
\[ F(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos\frac{n\pi}{L}t + d_n\sin\frac{n\pi}{L}t. \]
Then we write
\[ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{L}t + b_n\sin\frac{n\pi}{L}t, \]
and we plug x into the differential equation and solve for $a_n$ and $b_n$ in terms of $c_n$ and $d_n$. This is
perhaps best seen by example.
Example 4.5.1: Suppose that k = 2 and m = 1. The units are again the mks units (meters-kilograms-seconds).
There is a jetpack strapped to the mass, which fires with a force of 1 newton for
1 second and then is off for 1 second. We want to find the steady periodic solution.
The equation is, therefore,
\[ x'' + 2x = F(t), \]
where F(t) is the step function
\[ F(t) = \begin{cases} 1 & \text{if } 0 < t < 1, \\ 0 & \text{if } -1 < t < 0, \end{cases} \]
extended periodically.
We write
\[ F(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty} c_n\cos n\pi t + d_n\sin n\pi t. \]
It is not hard to see that $c_n = 0$ for n ≥ 1:
\[ c_n = \int_{-1}^{1} F(t)\cos n\pi t\,dt = \int_0^1 \cos n\pi t\,dt = 0. \]
On the other hand,
\[ c_0 = \int_{-1}^{1} F(t)\,dt = \int_0^1 dt = 1. \]
And
\[ d_n = \int_{-1}^{1} F(t)\sin n\pi t\,dt
= \int_0^1 \sin n\pi t\,dt
= \Bigl[ \frac{-\cos n\pi t}{n\pi} \Bigr]_{t=0}^{1}
= \frac{1 - (-1)^n}{\pi n}
= \begin{cases} \frac{2}{\pi n} & \text{if } n \text{ odd}, \\ 0 & \text{if } n \text{ even}. \end{cases} \]
So
\[ F(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{\pi n}\sin n\pi t. \]
We want to try
\[ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos n\pi t + b_n\sin n\pi t. \]
We notice that once we plug into the differential equation $x'' + 2x = F(t)$, it is clear that $a_n = 0$ for
n ≥ 1, as there are no corresponding terms in the series for F(t). Similarly $b_n = 0$ for n even. Hence
we try
\[ x(t) = \frac{a_0}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} b_n\sin n\pi t. \]
We plug into the differential equation and obtain
\[
\begin{aligned}
x'' + 2x &= \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \bigl[ -b_n\,n^2\pi^2\sin n\pi t \bigr] + a_0 + 2\sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \bigl[ b_n\sin n\pi t \bigr] \\
&= a_0 + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} b_n\,(2 - n^2\pi^2)\sin n\pi t \\
&= F(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{\pi n}\sin n\pi t.
\end{aligned}
\]
So $a_0 = \frac{1}{2}$ and
\[ b_n = \frac{2}{\pi n\,(2 - n^2\pi^2)}. \]
The steady periodic solution has the Fourier series
\[ x_{sp}(t) = \frac{1}{4} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{2}{\pi n\,(2 - n^2\pi^2)}\sin n\pi t. \]
We know this is the steady periodic solution as it contains no terms of the complementary solution
and is periodic with the same period as F(t) itself. See Figure 4.12 for the plot of this solution.

Figure 4.12: Plot of the steady periodic solution $x_{sp}$ of Example 4.5.1.
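A sketch, assuming NumPy, that evaluates a partial sum of $x_{sp}$; the values oscillate around the constant term 1/4 with an amplitude of roughly 0.08, consistent with Figure 4.12:

```python
import numpy as np

def x_sp(t, N=99):
    s = np.full_like(t, 0.25)  # the a_0 / 2 term
    for n in range(1, N + 1, 2):
        s += 2.0 / (np.pi * n * (2 - n**2 * np.pi**2)) * np.sin(n * np.pi * t)
    return s

t = np.linspace(0.0, 10.0, 2001)
print(x_sp(t).min(), x_sp(t).max())  # roughly 0.17 and 0.33
```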
4.5.2 Resonance
Just like when the forcing function was a simple cosine, resonance could still happen. Let us
assume c = 0 and we will discuss only pure resonance. Again, take the equation
\[ m\,x''(t) + k\,x(t) = F(t). \]
When we expand F(t) and find that some of its terms coincide with the complementary solution to
$mx'' + kx = 0$, we cannot use those terms in the guess. Just like before, they will disappear when
we plug into the left hand side and we will get a contradictory equation (such as 0 = 1). That is,
suppose
\[ x_c = A\cos\omega_0 t + B\sin\omega_0 t, \]
where ω_0 = Nπ/L for some positive integer N. In this case we have to modify our guess and try
\[ x(t) = \frac{a_0}{2} + t \left( a_N \cos \frac{N\pi}{L} t + b_N \sin \frac{N\pi}{L} t \right) + \sum_{\substack{n=1 \\ n \neq N}}^{\infty} a_n \cos \frac{n\pi}{L} t + b_n \sin \frac{n\pi}{L} t. \]
In other words, we multiply the offending term by t. From then on, we proceed as before. Of course, the solution will not be a Fourier series (it will not even be periodic), since it contains these terms multiplied by t. Further, the terms t ( a_N cos (Nπ/L) t + b_N sin (Nπ/L) t ) will eventually dominate and lead to wild oscillations. As before, this behavior is called pure resonance or just resonance.
Note that there may now be infinitely many resonance frequencies to hit. That is, as we change the frequency of F (we change L), different terms from the Fourier series of F may interfere with the complementary solution and will cause resonance. However, we should note that since everything is an approximation, and in particular c is never actually zero but something very close to zero, only the first few resonance frequencies will matter in practice.
Example 4.5.2: Find the steady periodic solution to the equation
\[ 2x'' + 18\pi^2 x = F(t), \]
where
\[ F(t) = \begin{cases} 1 & \text{if } 0 < t < 1, \\ -1 & \text{if } -1 < t < 0, \end{cases} \]
extended periodically. We note that
\[ F(t) = \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{4}{\pi n} \sin n\pi t. \]
Exercise 4.5.1: Compute the Fourier series of F to verify the expansion above.
The solution must look like
\[ x(t) = c_1 \cos 3\pi t + c_2 \sin 3\pi t + x_p(t) \]
for some particular solution x_p, since the complementary solution has natural frequency ω_0 = √(k/m) = √(18π²/2) = 3π.
We note that if we just tried a Fourier series with sin nπt as usual, we would get duplication when n = 3. Therefore, we pull out that term and multiply it by t. We also have to add a cosine term to get everything right. That is, we must try
\[ x_p(t) = a_3 t \cos 3\pi t + b_3 t \sin 3\pi t + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} b_n \sin n\pi t. \]
Let us compute the second derivative:
\[ x_p''(t) = -6 a_3 \pi \sin 3\pi t - 9\pi^2 a_3 t \cos 3\pi t + 6 b_3 \pi \cos 3\pi t - 9\pi^2 b_3 t \sin 3\pi t + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} (-n^2 \pi^2 b_n) \sin n\pi t. \]
We now plug into the differential equation:
\[ 2x_p'' + 18\pi^2 x_p = -12 a_3 \pi \sin 3\pi t - 18\pi^2 a_3 t \cos 3\pi t + 12 b_3 \pi \cos 3\pi t - 18\pi^2 b_3 t \sin 3\pi t + 18\pi^2 a_3 t \cos 3\pi t + 18\pi^2 b_3 t \sin 3\pi t + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} (-2 n^2 \pi^2 b_n + 18\pi^2 b_n) \sin n\pi t. \]
If we simplify, we obtain
\[ 2x_p'' + 18\pi^2 x_p = -12 a_3 \pi \sin 3\pi t + 12 b_3 \pi \cos 3\pi t + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} (-2 n^2 \pi^2 b_n + 18\pi^2 b_n) \sin n\pi t. \]
This series has to equal the series for F(t). We equate the coefficients and solve for a_3 and b_n:
\[ a_3 = \frac{4/(3\pi)}{-12\pi} = \frac{-1}{9\pi^2}, \qquad b_3 = 0, \]
\[ b_n = \frac{4}{n\pi (18\pi^2 - 2n^2\pi^2)} = \frac{2}{\pi^3 n (9 - n^2)} \quad \text{for } n \text{ odd and } n \neq 3. \]
That is,
\[ x_p(t) = \frac{-1}{9\pi^2}\, t \cos 3\pi t + \sum_{\substack{n=1 \\ n \text{ odd} \\ n \neq 3}}^{\infty} \frac{2}{\pi^3 n (9 - n^2)} \sin n\pi t. \]
When c > 0, you will not have to worry about pure resonance. That is, there will never be
any conflicts and you do not need to multiply any terms by t. There is a corresponding concept of
practical resonance and it is very similar to the ideas we already explored in chapter 2. We will not
go into details here.
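To see the resonant growth numerically, one can evaluate the particular solution of Example 4.5.2 at growing times. The following is a sketch of ours (numpy assumed), not a prescription from the text:

import numpy as np

def x_p(t, terms=50):
    # a_3 t cos(3 pi t) with a_3 = -1/(9 pi^2), plus the series over odd n != 3
    out = -t * np.cos(3 * np.pi * t) / (9 * np.pi**2)
    for n in range(1, 2 * terms, 2):
        if n == 3:
            continue
        out += 2 / (np.pi**3 * n * (9 - n**2)) * np.sin(n * np.pi * t)
    return out

for t in (1.0, 10.0, 100.0):
    print(t, x_p(t))  # the t cos(3 pi t) term dominates for large t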
4.5.3 Exercises
Exercise 4.5.2: Let F(t) = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{1}{n^2} \cos n\pi t. Find the steady periodic solution to x'' + 2x = F(t). Express your solution as a Fourier series.
Exercise 4.5.3: Let F(t) = \sum_{n=1}^{\infty} \frac{1}{n^3} \sin n\pi t. Find the steady periodic solution to x'' + x' + x = F(t). Express your solution as a Fourier series.
Exercise 4.5.4: Let F(t) = \sum_{n=1}^{\infty} \frac{1}{n^2} \cos n\pi t. Find the steady periodic solution to x'' + 4x = F(t). Express your solution as a Fourier series.
Exercise 4.5.5: Let F(t) = t for −1 < t < 1, extended periodically. Find the steady periodic solution to x'' + x = F(t). Express your solution as a Fourier series.
Exercise 4.5.6: Let F(t) = t for −1 < t < 1, extended periodically. Find the steady periodic solution to x'' + π² x = F(t). Express your solution as a Fourier series.
4.6 PDEs, separation of variables, and the heat equation
Note: 2 lectures, §9.5 in EP
Let us recall that a partial differential equation or PDE is an equation containing partial derivatives with respect to several independent variables. Solving PDEs will be our main application of Fourier series.
A PDE is said to be linear if the dependent variable and its derivatives appear only to the first
power and in no functions. We will only talk about linear PDEs here. Together with a PDE, we
usually have specified some boundary conditions, where the value of the solution or its derivatives
is specified along the boundary of a region, and/or some initial conditions where the value of the
solution or its derivatives is specified for some initial time. Sometimes such conditions are mixed
together and we will refer to them simply as side conditions.
We will study three partial differential equations, each one representing a more general class of
equations. First, we will study the heat equation, which is an example of a parabolic PDE. Next,
we will study the wave equation, which is an example of a hyperbolic PDE. Finally, we will study
the Laplace equation, which is an example of an elliptic PDE. Each of our examples will illustrate
behaviour that is typical for the whole class.
4.6.1 Heat on an insulated wire
Let us first study the heat equation. Suppose that we have a wire (or a thin metal rod) that is
insulated except at the endpoints. Let x denote the position along the wire and let t denote time.
See Figure 4.13.
[Figure 4.13: Insulated wire of length L, with position x along the wire and temperature u.]
Now let u(x, t) denote the temperature at point x at time t. It turns out that the equation governing this system is the so-called one-dimensional heat equation:
\[ \frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}, \]
for some k > 0. That is, the change in heat at a specific point is proportional to the second derivative of the heat along the wire. This makes sense: if the heat distribution has a maximum (is concave down), then heat flows away from the maximum, and vice versa.
We will generally use a more convenient notation for partial derivatives. We will write u_t instead of ∂u/∂t, and we will write u_xx instead of ∂²u/∂x². With this notation the equation becomes
\[ u_t = k u_{xx}. \]
For the heat equation, we must also have some boundary conditions. We assume that the wire is of length L and that the ends are either exposed and touching some body of constant heat, or the ends are insulated. If the ends of the wire are, for example, kept at temperature 0, then we must have the conditions
\[ u(0, t) = 0 \quad \text{and} \quad u(L, t) = 0. \]
If, on the other hand, the ends are insulated, we get the conditions
\[ u_x(0, t) = 0 \quad \text{and} \quad u_x(L, t) = 0. \]
In other words, heat is not flowing in nor out of the wire at the ends. Note that we always have two conditions along the x axis, as there are two derivatives in the x direction. These side conditions are called homogeneous.
Furthermore, we will suppose that we know the initial temperature distribution,
\[ u(x, 0) = f(x), \]
for some known function f(x). This initial condition is not a homogeneous side condition.
4.6.2 Separation of variables
First we must note that the principle of superposition still applies. The heat equation is still called linear, since u and its derivatives do not appear to any powers or in any functions. If u_1 and u_2 are solutions and c_1, c_2 are constants, then u = c_1 u_1 + c_2 u_2 is still a solution.
Exercise 4.6.1: Verify the principle of superposition for the heat equation.
Superposition also preserves some of the side conditions. In particular, if u_1 and u_2 are solutions that satisfy u(0, t) = 0 and u(L, t) = 0, and c_1, c_2 are constants, then u = c_1 u_1 + c_2 u_2 is still a solution that satisfies u(0, t) = 0 and u(L, t) = 0. Similarly for the side conditions u_x(0, t) = 0 and u_x(L, t) = 0. In general, superposition preserves all homogeneous side conditions.
The method of separation of variables is to try to find solutions that are sums or products of
functions of one variable. For example, for the heat equation, we try to find solutions of the form
u(x, t) = X(x)T(t).
That the desired solution we are looking for is of this form is too much to hope for. What is perfectly reasonable to ask, however, is to find enough "building-block" solutions of the form u(x, t) = X(x)T(t) using this procedure so that the desired solution to the PDE can somehow be constructed from these building blocks by the use of superposition.
Let us try to solve the heat equation
\[ u_t = k u_{xx} \quad \text{with} \quad u(0, t) = 0, \quad u(L, t) = 0, \quad u(x, 0) = f(x). \]
We guess u(x, t) = X(x)T(t). We plug into the heat equation to obtain
\[ X(x) T'(t) = k X''(x) T(t). \]
We rewrite this as
\[ \frac{T'(t)}{k T(t)} = \frac{X''(x)}{X(x)}. \]
This equation is supposed to hold for all x and all t. But the left hand side does not depend on x, and the right hand side does not depend on t. Therefore, each side must be a constant. Let us call this constant −λ (the minus sign is for convenience later). Thus, we have two equations:
\[ \frac{T'(t)}{k T(t)} = -\lambda = \frac{X''(x)}{X(x)}. \]
Or in other words,
\[ X''(x) + \lambda X(x) = 0, \qquad T'(t) + \lambda k T(t) = 0. \]
The boundary condition u(0, t) = 0 implies X(0)T(t) = 0. We are looking for a nontrivial solution, and so we can assume that T(t) is not identically zero. Hence X(0) = 0. Similarly, u(L, t) = 0 implies X(L) = 0. We are looking for nontrivial solutions X of the eigenvalue problem X'' + λX = 0, X(0) = 0, X(L) = 0. We have previously found that the only eigenvalues are λ_n = n²π²/L², for integers n ≥ 1, where the eigenfunctions are sin (nπ/L) x. Hence, let us pick the solutions
\[ X_n(x) = \sin \frac{n\pi}{L} x. \]
The corresponding T_n must satisfy the equation
\[ T_n'(t) + \frac{n^2 \pi^2}{L^2} k\, T_n(t) = 0. \]
By the method of integrating factor, the solution of this problem is easily seen to be
\[ T_n(t) = e^{-\frac{n^2 \pi^2}{L^2} k t}. \]
It will be useful to note that T_n(0) = 1. Our building-block solutions are
\[ u_n(x, t) = X_n(x) T_n(t) = \left( \sin \frac{n\pi}{L} x \right) e^{-\frac{n^2 \pi^2}{L^2} k t}. \]
We now note that u_n(x, 0) = sin (nπ/L) x. Let us write f(x) using the sine series
\[ f(x) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi}{L} x. \]
That is, we find the Fourier series of the odd periodic extension of f(x). We used the sine series as it corresponds to the eigenvalue problem for X(x) above. Finally, we use superposition to write the solution as
\[ u(x, t) = \sum_{n=1}^{\infty} b_n u_n(x, t) = \sum_{n=1}^{\infty} b_n \left( \sin \frac{n\pi}{L} x \right) e^{-\frac{n^2 \pi^2}{L^2} k t}. \]
Why does this solution work? First note that it is a solution to the heat equation by superposition. It satisfies u(0, t) = 0 and u(L, t) = 0, because x = 0 or x = L makes all the sines vanish. Finally, plugging in t = 0, we notice that T_n(0) = 1, and so
\[ u(x, 0) = \sum_{n=1}^{\infty} b_n u_n(x, 0) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi}{L} x = f(x). \]
Example 4.6.1: Suppose that we have an insulated wire of length 1, such that the ends of the wire are embedded in ice (temperature 0). Let k = 0.003. Suppose that the initial heat distribution is u(x, 0) = 50 x (1 − x). See Figure 4.14.
[Figure 4.14: Initial distribution of temperature in the wire.]
We want to find the temperature function u(x, t). Let us suppose we also want to find at what time t the maximum temperature in the wire drops to one half of the initial maximum of 12.5.
We are solving the following PDE problem:
\[ u_t = 0.003\, u_{xx}, \qquad u(0, t) = u(1, t) = 0, \qquad u(x, 0) = 50 x (1 - x) \ \text{for } 0 < x < 1. \]
We find the sine series for f(x) = 50 x (1 − x) for 0 < x < 1. That is, f(x) = \sum_{n=1}^{\infty} b_n \sin n\pi x, where
\[ b_n = 2 \int_{0}^{1} 50 x (1 - x) \sin n\pi x \, dx = \frac{200}{\pi^3 n^3} - \frac{200 (-1)^n}{\pi^3 n^3} = \begin{cases} 0 & \text{if } n \text{ even}, \\ \frac{400}{\pi^3 n^3} & \text{if } n \text{ odd}. \end{cases} \]
[Figure 4.15: Plot of the temperature of the wire at position x at time t.]
The solution u(x, t), plotted in Figure 4.15 for 0 ≤ t ≤ 100, is given by the following series:
\[ u(x, t) = \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{400}{\pi^3 n^3} (\sin n\pi x)\, e^{-n^2 \pi^2\, 0.003\, t}. \]
Finally, let us answer the question about the maximum temperature. It is relatively obvious that
the maximum temperature will always be at x = 0.5, in the middle of the wire. The plot of u(x, t)
confirms this intuition.
If we plug in x = 0.5, we get
\[ u(0.5, t) = \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{400}{\pi^3 n^3} \left( \sin \frac{n\pi}{2} \right) e^{-n^2 \pi^2\, 0.003\, t}. \]
For n = 3 and higher (remember that we take only odd n), the terms of the series are insignificant compared to the first term. The first term in the series is already a very good approximation of the function, and hence
\[ u(0.5, t) \approx \frac{400}{\pi^3}\, e^{-\pi^2\, 0.003\, t}. \]
The approximation gets better and better as t gets larger as the other terms decay much faster. Let
us plot the function u(0.5, t), the temperature at the midpoint of the wire at time t, in Figure 4.16.
The figure also plots the approximation by the first term.
[Figure 4.16: Temperature at the midpoint of the wire (the bottom curve), and the approximation of this temperature by using only the first term in the series (top curve).]
It would be hard to tell the difference after t = 5 or so between the first term of the series
representation of u(x, t) and the real solution. This behavior is a general feature of solving the heat
equation. If you are interested in behavior for large enough t, only the first one or two terms may
be necessary.
Let us get back to the question of when the maximum temperature is one half of the initial maximum temperature; that is, when the temperature at the midpoint is 12.5/2 = 6.25. We notice from the graph that if we use the approximation by the first term, we will be close enough. Therefore, we solve
\[ 6.25 = \frac{400}{\pi^3}\, e^{-\pi^2\, 0.003\, t}. \]
That is,
\[ t = \frac{\ln \frac{6.25\, \pi^3}{400}}{-\pi^2\, 0.003} \approx 24.5. \]
So the maximum temperature drops to half at about t = 24.5.
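This number is easy to confirm on a computer. Here is a sketch of ours (numpy and scipy assumed) that evaluates a partial sum of the full series at the midpoint and solves u(0.5, t) = 6.25 with a root finder:

import numpy as np
from scipy.optimize import brentq

def u_mid(t, terms=50):
    # partial sum of u(0.5, t) = sum over odd n of
    # 400/(pi^3 n^3) sin(n pi / 2) exp(-n^2 pi^2 0.003 t)
    total = 0.0
    for n in range(1, 2 * terms, 2):
        total += (400 / (np.pi**3 * n**3) * np.sin(n * np.pi / 2)
                  * np.exp(-n**2 * np.pi**2 * 0.003 * t))
    return total

print(u_mid(0))                                   # about 12.5, the initial maximum
print(brentq(lambda t: u_mid(t) - 6.25, 1, 100))  # about 24.5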
4.6.3 Insulated ends
Now suppose the ends of the wire are insulated. In this case, we are solving the equation
\[ u_t = k u_{xx} \quad \text{with} \quad u_x(0, t) = 0, \quad u_x(L, t) = 0, \quad u(x, 0) = f(x). \]
Yet again we try a solution of the form u(x, t) = X(x)T(t). By the same procedure as before we plug into the heat equation and arrive at the following two equations:
\[ X''(x) + \lambda X(x) = 0, \qquad T'(t) + \lambda k T(t) = 0. \]
At this point the story changes slightly. The boundary condition u_x(0, t) = 0 implies X'(0)T(t) = 0. Hence X'(0) = 0. Similarly, u_x(L, t) = 0 implies X'(L) = 0. We are looking for nontrivial solutions X of the eigenvalue problem X'' + λX = 0, X'(0) = 0, X'(L) = 0. We have previously found that the only eigenvalues are λ_n = n²π²/L², for integers n ≥ 0, where the eigenfunctions are cos (nπ/L) x (we include the constant eigenfunction). Hence, let us pick the solutions
\[ X_n(x) = \cos \frac{n\pi}{L} x. \]
The corresponding T_n must satisfy the equation
\[ T_n'(t) + \frac{n^2 \pi^2}{L^2} k\, T_n(t) = 0. \]
For n ≥ 1, as before,
\[ T_n(t) = e^{-\frac{n^2 \pi^2}{L^2} k t}. \]
For n = 0, we have T_0'(t) = 0, and hence T_0(t) = 1. Our building-block solutions will be
\[ u_n(x, t) = X_n(x) T_n(t) = \left( \cos \frac{n\pi}{L} x \right) e^{-\frac{n^2 \pi^2}{L^2} k t} \]
and
\[ u_0(x, t) = 1. \]
We now note that u_n(x, 0) = cos (nπ/L) x. So let us write f using the cosine series
\[ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos \frac{n\pi}{L} x. \]
That is, we find the Fourier series of the even periodic extension of f(x).
We use superposition to write the solution as
\[ u(x, t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n u_n(x, t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \left( \cos \frac{n\pi}{L} x \right) e^{-\frac{n^2 \pi^2}{L^2} k t}. \]
Example 4.6.2: Let us try the same example as before, but with insulated ends. We are solving the following PDE problem:
\[ u_t = 0.003\, u_{xx}, \qquad u_x(0, t) = u_x(1, t) = 0, \qquad u(x, 0) = 50 x (1 - x) \ \text{for } 0 < x < 1. \]
For this problem, we must find the cosine series of u(x, 0). For 0 < x < 1 we have
\[ 50 x (1 - x) = \frac{25}{3} + \sum_{\substack{n=2 \\ n \text{ even}}}^{\infty} \left( \frac{-200}{\pi^2 n^2} \right) \cos n\pi x. \]
The calculation is left to the reader. Hence, the solution to the PDE problem, plotted in Figure 4.17, is given by the series
\[ u(x, t) = \frac{25}{3} + \sum_{\substack{n=2 \\ n \text{ even}}}^{\infty} \left( \frac{-200}{\pi^2 n^2} \right) (\cos n\pi x)\, e^{-n^2 \pi^2\, 0.003\, t}. \]
Note in the graph that the temperature evens out across the wire. Eventually all the terms except the constant die out, and you will be left with a uniform temperature of 25/3 ≈ 8.33 along the entire length of the wire.
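The cosine coefficients quoted above are also easy to confirm numerically; the following is a sanity-check sketch of ours (numpy and scipy assumed):

import numpy as np
from scipy.integrate import quad

f = lambda x: 50 * x * (1 - x)

a0, _ = quad(lambda x: 2 * f(x), 0, 1)
print(a0 / 2, 25 / 3)  # both approximately 8.3333

for n in (2, 4, 6):
    an, _ = quad(lambda x: 2 * f(x) * np.cos(n * np.pi * x), 0, 1)
    print(n, an, -200 / (np.pi**2 * n**2))  # the two columns agree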
4.6.4 Exercises
Exercise 4.6.2: Suppose you have a wire of length 2, with k = 0.001 and an initial temperature distribution of u(x, 0) = 50x. Suppose that both ends are embedded in ice (temperature 0). Find the solution as a series.
Exercise 4.6.3: Find a series solution of
\[ u_t = u_{xx}, \qquad u(0, t) = u(1, t) = 0, \qquad u(x, 0) = 100 \ \text{for } 0 < x < 1. \]
Exercise 4.6.4: Find a series solution of
\[ u_t = u_{xx}, \qquad u_x(0, t) = u_x(\pi, t) = 0, \qquad u(x, 0) = 3 \sin x + \sin 3\pi \ \text{for } 0 < x < \pi. \]
[Figure 4.17: Plot of the temperature of the insulated wire at position x at time t.]
Exercise 4.6.5: Find a series solution of
\[ u_t = u_{xx}, \qquad u_x(0, t) = u_x(\pi, t) = 0, \qquad u(x, 0) = \cos x \ \text{for } 0 < x < \pi. \]
Exercise 4.6.6: Find a series solution of
\[ u_t = u_{xx}, \qquad u(0, t) = 0, \quad u(1, t) = 100, \qquad u(x, 0) = \sin \pi x \ \text{for } 0 < x < 1. \]
Hint: Use the fact that u(x, t) = 100x is a solution satisfying u_t = u_xx, u(0, t) = 0, u(1, t) = 100. Then use superposition.
Exercise 4.6.7: Find the steady state temperature solution as a function of x alone, by letting t → ∞ in the solutions from Exercises 4.6.5 and 4.6.6. Verify that it satisfies the equation u_xx = 0.
Exercise 4.6.8: Use separation of variables to find a nontrivial solution to u_xx + u_yy = 0, where u(x, 0) = 0 and u(0, y) = 0. Hint: Try u(x, y) = X(x)Y(y).
Exercise 4.6.9 (challenging): Suppose that one end of the wire is insulated (say at x = 0) and the other end is kept at zero temperature. That is, find a series solution of
\[ u_t = k u_{xx}, \qquad u_x(0, t) = u(L, t) = 0, \qquad u(x, 0) = f(x) \ \text{for } 0 < x < L. \]
Express any coefficients in the series by integrals of f(x).
4.7 One dimensional wave equation
Note: 1 lecture, §9.6 in EP
Suppose we have a string, such as on a guitar, of length L. Suppose we consider only vibrations in one direction. That is, let x denote the position along the string, let t denote time, and let y denote the displacement of the string from the rest position. See Figure 4.18.
[Figure 4.18: Vibrating string.]
The equation that governs this setup is the so-called one-dimensional wave equation:
\[ y_{tt} = a^2 y_{xx}, \]
for some a > 0. We will assume that the ends of the string are fixed, and hence we get
\[ y(0, t) = 0 \quad \text{and} \quad y(L, t) = 0. \]
Note that we always have two conditions along the x axis, as there are two derivatives in the x direction.
There are also two derivatives along the t direction, and hence we will need two further conditions here. We will need to know the initial position and the initial velocity of the string:
\[ y(x, 0) = f(x) \quad \text{and} \quad y_t(x, 0) = g(x), \]
for some known functions f(x) and g(x).
As the equation is again linear, superposition works just as it did for the heat equation. And
again we will use separation of variables to find enough building-block solutions to get the overall
solution. There is one change however. It will be easier to solve two separate problems and add
their solutions.
The two problems we will solve are
\[ \begin{aligned} & w_{tt} = a^2 w_{xx}, \\ & w(0, t) = w(L, t) = 0, \\ & w(x, 0) = 0 \quad \text{for } 0 < x < L, \\ & w_t(x, 0) = g(x) \quad \text{for } 0 < x < L, \end{aligned} \qquad (4.10) \]
and
\[ \begin{aligned} & z_{tt} = a^2 z_{xx}, \\ & z(0, t) = z(L, t) = 0, \\ & z(x, 0) = f(x) \quad \text{for } 0 < x < L, \\ & z_t(x, 0) = 0 \quad \text{for } 0 < x < L. \end{aligned} \qquad (4.11) \]
The principle of superposition will then imply that y = w + z solves the wave equation, and furthermore y(x, 0) = w(x, 0) + z(x, 0) = f(x) and y_t(x, 0) = w_t(x, 0) + z_t(x, 0) = g(x). Hence, y is a solution to
\[ \begin{aligned} & y_{tt} = a^2 y_{xx}, \\ & y(0, t) = y(L, t) = 0, \\ & y(x, 0) = f(x) \quad \text{for } 0 < x < L, \\ & y_t(x, 0) = g(x) \quad \text{for } 0 < x < L. \end{aligned} \qquad (4.12) \]
The reason for all this complexity is that superposition only works for homogeneous conditions
such as y(0, t) = y(L, t) = 0, y(x, 0) = 0, or y
t
(x, 0) = 0. Therefore, we will be able to use the
idea of separation of variables to find many building-block solutions solving all the homogeneous
conditions. We can then use them to construct a solution solving the remaining nonhomogeneous
condition.
Let us start with (4.10). We try a solution of the form w(x, t) = X(x)T(t) again. We plug into the wave equation to obtain
\[ X(x) T''(t) = a^2 X''(x) T(t). \]
Rewriting, we get
\[ \frac{T''(t)}{a^2 T(t)} = \frac{X''(x)}{X(x)}. \]
Again, the left hand side depends only on t and the right hand side depends only on x. Therefore, both equal a constant, which we will denote by −λ:
\[ \frac{T''(t)}{a^2 T(t)} = -\lambda = \frac{X''(x)}{X(x)}. \]
We solve to get two ordinary differential equations,
\[ X''(x) + \lambda X(x) = 0, \qquad T''(t) + \lambda a^2 T(t) = 0. \]
The condition 0 = w(0, t) = X(0)T(t) implies X(0) = 0, and w(L, t) = 0 implies X(L) = 0. Therefore, the only nontrivial solutions for the first equation are those with λ = λ_n = n²π²/L², and they are
\[ X_n(x) = \sin \frac{n\pi}{L} x. \]
The general solution for T for this particular λ_n is
\[ T_n(t) = A \cos \frac{n\pi a}{L} t + B \sin \frac{n\pi a}{L} t. \]
We also have the condition that w(x, 0) = 0, or X(x)T(0) = 0. This implies that T(0) = 0, which in turn forces A = 0. It will be convenient to pick B = L/(nπa), and hence
\[ T_n(t) = \frac{L}{n\pi a} \sin \frac{n\pi a}{L} t. \]
Our building-block solution will be
\[ w_n(x, t) = \frac{L}{n\pi a} \left( \sin \frac{n\pi}{L} x \right) \left( \sin \frac{n\pi a}{L} t \right). \]
We differentiate in t:
\[ (w_n)_t(x, t) = \left( \sin \frac{n\pi}{L} x \right) \left( \cos \frac{n\pi a}{L} t \right). \]
Hence,
\[ (w_n)_t(x, 0) = \sin \frac{n\pi}{L} x. \]
We expand g(x) in terms of these sines as
\[ g(x) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi}{L} x. \]
Now we can just write down the solution to (4.10) as a series,
\[ w(x, t) = \sum_{n=1}^{\infty} b_n w_n(x, t) = \sum_{n=1}^{\infty} \frac{b_n L}{n\pi a} \left( \sin \frac{n\pi}{L} x \right) \left( \sin \frac{n\pi a}{L} t \right). \]
Exercise 4.7.1: Check that w(x, 0) = 0 and w
t
(x, 0) = g(x).
Similarly we proceed to solve (4.11). We again try z(x, t) = X(x)T(t). The procedure works exactly the same at first. We obtain
\[ X''(x) + \lambda X(x) = 0, \qquad T''(t) + \lambda a^2 T(t) = 0, \]
and the conditions X(0) = 0, X(L) = 0. So again λ = λ_n = n²π²/L² and
\[ X_n(x) = \sin \frac{n\pi}{L} x. \]
The condition for T, however, becomes T'(0) = 0. Thus, instead of A = 0, we get that B = 0, and we can take
\[ T_n(t) = \cos \frac{n\pi a}{L} t. \]
Our building-block solution will be
\[ z_n(x, t) = \left( \sin \frac{n\pi}{L} x \right) \left( \cos \frac{n\pi a}{L} t \right). \]
We expand f(x) in terms of these sines as
\[ f(x) = \sum_{n=1}^{\infty} c_n \sin \frac{n\pi}{L} x. \]
And we write down the solution to (4.11) as the series
\[ z(x, t) = \sum_{n=1}^{\infty} c_n z_n(x, t) = \sum_{n=1}^{\infty} c_n \left( \sin \frac{n\pi}{L} x \right) \left( \cos \frac{n\pi a}{L} t \right). \]
Exercise 4.7.2: Fill in the details in the derivation of the solution of (4.11). Check that the solution
satisfies all the side conditions.
Putting these two solutions together we will state the result as a theorem.
Theorem 4.7.1. Take the equation
\[ \begin{aligned} & y_{tt} = a^2 y_{xx}, \\ & y(0, t) = y(L, t) = 0, \\ & y(x, 0) = f(x) \quad \text{for } 0 < x < L, \\ & y_t(x, 0) = g(x) \quad \text{for } 0 < x < L, \end{aligned} \qquad (4.13) \]
where
\[ f(x) = \sum_{n=1}^{\infty} c_n \sin \frac{n\pi}{L} x \quad \text{and} \quad g(x) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi}{L} x. \]
Then the solution y(x, t) can be written as a sum of the solutions of (4.10) and (4.11). In other words,
\[ y(x, t) = \sum_{n=1}^{\infty} \frac{b_n L}{n\pi a} \left( \sin \frac{n\pi}{L} x \right) \left( \sin \frac{n\pi a}{L} t \right) + c_n \left( \sin \frac{n\pi}{L} x \right) \left( \cos \frac{n\pi a}{L} t \right) = \sum_{n=1}^{\infty} \left( \sin \frac{n\pi}{L} x \right) \left[ \frac{b_n L}{n\pi a} \sin \frac{n\pi a}{L} t + c_n \cos \frac{n\pi a}{L} t \right]. \]
[Figure 4.19: Plucked string.]
Example 4.7.1: Let us try a simple example of a plucked string. Suppose that a string of length 2 is plucked in the middle such that it has the initial shape
\[ f(x) = \begin{cases} 0.1\, x & \text{if } 0 \le x \le 1, \\ 0.1\, (2 - x) & \text{if } 1 \le x \le 2. \end{cases} \]
See Figure 4.19. Further, suppose that a = 1 in the wave equation, for simplicity.
We leave it to the reader to compute the sine series of f(x). The series will be
\[ f(x) = \sum_{n=1}^{\infty} \frac{0.8}{n^2 \pi^2} \left( \sin \frac{n\pi}{2} \right) \sin \frac{n\pi}{2} x. \]
Note that sin (nπ/2) is the sequence 1, 0, −1, 0, 1, 0, −1, ... for n = 1, 2, 3, 4, .... Therefore,
\[ f(x) = \frac{0.8}{\pi^2} \sin \frac{\pi}{2} x - \frac{0.8}{9\pi^2} \sin \frac{3\pi}{2} x + \frac{0.8}{25\pi^2} \sin \frac{5\pi}{2} x - \cdots \]
The solution y(x, t) is given by
\[ y(x, t) = \sum_{n=1}^{\infty} \frac{0.8}{n^2 \pi^2} \left( \sin \frac{n\pi}{2} \right) \left( \sin \frac{n\pi}{2} x \right) \left( \cos \frac{n\pi}{2} t \right) = \frac{0.8}{\pi^2} \left( \sin \frac{\pi}{2} x \right) \left( \cos \frac{\pi}{2} t \right) - \frac{0.8}{9\pi^2} \left( \sin \frac{3\pi}{2} x \right) \left( \cos \frac{3\pi}{2} t \right) + \frac{0.8}{25\pi^2} \left( \sin \frac{5\pi}{2} x \right) \left( \cos \frac{5\pi}{2} t \right) - \cdots \]
A plot for 0 < t < 3 is given in Figure 4.20. Notice that, unlike for the heat equation, the solution does not become "smoother"; in fact, the sharp edges remain. We will see the reason for this behavior in the next section, where we derive the solution to the wave equation in a different way.
Make sure you understand what a plot such as the one in the figure is telling you. For each fixed t, you can think of the function x ↦ y(x, t) as just a function of x. This function gives you the shape of the string at time t.
[Figure 4.20: Shape of the plucked string for 0 < t < 3.]
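To experiment with the plucked string yourself, a partial sum of the solution is again straightforward to evaluate. A minimal sketch of ours (numpy assumed):

import numpy as np

def y(x, t, terms=100):
    # partial sum of sum_n 0.8/(n^2 pi^2) sin(n pi/2) sin(n pi x/2) cos(n pi t/2)
    total = np.zeros_like(np.asarray(x, dtype=float))
    for n in range(1, terms + 1):
        coeff = 0.8 / (n**2 * np.pi**2) * np.sin(n * np.pi / 2)
        total += coeff * np.sin(n * np.pi * x / 2) * np.cos(n * np.pi * t / 2)
    return total

x = np.linspace(0, 2, 9)
print(y(x, 0.0))  # approximately the initial triangular shape f(x)
print(y(x, 2.0))  # half a period later the string is the mirror image, -f(x)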
4.7.1 Exercises
Exercise 4.7.3: Solve
\[ y_{tt} = 9 y_{xx}, \qquad y(0, t) = y(1, t) = 0, \qquad y(x, 0) = \sin 3\pi x + \frac{1}{4} \sin 6\pi x \ \text{for } 0 < x < 1, \qquad y_t(x, 0) = 0 \ \text{for } 0 < x < 1. \]
Exercise 4.7.4: Solve
\[ y_{tt} = 4 y_{xx}, \qquad y(0, t) = y(1, t) = 0, \qquad y(x, 0) = \sin 3\pi x + \frac{1}{4} \sin 6\pi x \ \text{for } 0 < x < 1, \qquad y_t(x, 0) = \sin 9\pi x \ \text{for } 0 < x < 1. \]
Exercise 4.7.5: Derive the solution for a general plucked string of length L, where we raise the
string some distance b at the midpoint and let go, and for any constant a.
Exercise 4.7.6: Suppose that a stringed musical instrument falls on the floor. Suppose that the length of the string is 1 and a = 1. When the instrument hits the ground the string is in rest position, and hence y(x, 0) = 0. However, the string is moving at some velocity at impact (t = 0), say y_t(x, 0) = −1. Find the solution y(x, t) for the shape of the string at time t.
Exercise 4.7.7 (challenging): Suppose that you have a vibrating string and that there is air resistance proportional to the velocity. That is, you have
\[ \begin{aligned} & y_{tt} = a^2 y_{xx} - k y_t, \\ & y(0, t) = y(1, t) = 0, \\ & y(x, 0) = f(x) \quad \text{for } 0 < x < 1, \\ & y_t(x, 0) = 0 \quad \text{for } 0 < x < 1. \end{aligned} \]
Suppose that 0 < k < 2πa. Derive a series solution to the problem. Any coefficients in the series should be expressed as integrals of f(x).
4.8 D’Alembert solution of the wave equation
Note: 1 lecture, different from §9.6 in EP
We have solved the wave equation by using Fourier series. But it is often more convenient to use the so-called d'Alembert solution to the wave equation.* This solution can be derived using Fourier series as well, but it is really an awkward use of those concepts. It is much easier to derive this solution by making a correct change of variables to get an equation that can be solved by simple integration.
* Named after the French mathematician Jean le Rond d'Alembert (1717 – 1783).
Suppose we have the wave equation
\[ y_{tt} = a^2 y_{xx}. \qquad (4.14) \]
And we wish to solve the equation (4.14) given the conditions
\[ \begin{aligned} & y(0, t) = y(L, t) = 0 \quad \text{for all } t, \\ & y(x, 0) = f(x), \quad 0 < x < L, \\ & y_t(x, 0) = g(x), \quad 0 < x < L. \end{aligned} \qquad (4.15) \]
4.8.1 Change of variables
We will transform the equation into a simpler form where it can be solved by simple integration.
We change variables to ξ = x − at, η = x + at, and we use the chain rule:
\[ \frac{\partial}{\partial x} = \frac{\partial \xi}{\partial x} \frac{\partial}{\partial \xi} + \frac{\partial \eta}{\partial x} \frac{\partial}{\partial \eta} = \frac{\partial}{\partial \xi} + \frac{\partial}{\partial \eta}, \]
\[ \frac{\partial}{\partial t} = \frac{\partial \xi}{\partial t} \frac{\partial}{\partial \xi} + \frac{\partial \eta}{\partial t} \frac{\partial}{\partial \eta} = -a \frac{\partial}{\partial \xi} + a \frac{\partial}{\partial \eta}. \]
We compute
\[ y_{xx} = \frac{\partial^2 y}{\partial x^2} = \left( \frac{\partial}{\partial \xi} + \frac{\partial}{\partial \eta} \right) \left( \frac{\partial y}{\partial \xi} + \frac{\partial y}{\partial \eta} \right) = \frac{\partial^2 y}{\partial \xi^2} + 2 \frac{\partial^2 y}{\partial \xi \partial \eta} + \frac{\partial^2 y}{\partial \eta^2}, \]
\[ y_{tt} = \frac{\partial^2 y}{\partial t^2} = \left( -a \frac{\partial}{\partial \xi} + a \frac{\partial}{\partial \eta} \right) \left( -a \frac{\partial y}{\partial \xi} + a \frac{\partial y}{\partial \eta} \right) = a^2 \frac{\partial^2 y}{\partial \xi^2} - 2a^2 \frac{\partial^2 y}{\partial \xi \partial \eta} + a^2 \frac{\partial^2 y}{\partial \eta^2}. \]
In the above computations, we have used the fact from calculus that ∂²y/∂ξ∂η = ∂²y/∂η∂ξ. Then we plug into the wave equation,
\[ 0 = a^2 y_{xx} - y_{tt} = 4a^2 \frac{\partial^2 y}{\partial \xi \partial \eta} = 4a^2\, y_{\xi\eta}. \]
Therefore, the wave equation (4.14) transforms into y_ξη = 0. It is easy to find the general solution to this equation by integrating twice. Let us integrate with respect to η first† and notice that the constant of integration depends on ξ, to get y_ξ = C(ξ). Next, we integrate with respect to ξ and notice that the constant of integration must depend on η. Thus, y = ∫ C(ξ) dξ + B(η). The solution must then be of the following form for some functions A(ξ) and B(η):
\[ y = A(\xi) + B(\eta) = A(x - at) + B(x + at). \]
† We can just as well integrate with respect to ξ first, if we wish.
4.8.2 The formula
We know what any solution must look like, but we need to solve for the given side conditions. We will just give the formula and see that it works. First let F(x) denote the odd extension of f(x), and let G(x) denote the odd extension of g(x). Now define
\[ A(x) = \frac{1}{2} F(x) - \frac{1}{2a} \int_0^x G(s)\, ds, \qquad B(x) = \frac{1}{2} F(x) + \frac{1}{2a} \int_0^x G(s)\, ds. \]
We claim this A(x) and B(x) give the solution. Explicitly, the solution is y(x, t) = A(x − at) + B(x + at), or in other words:
\[ y(x, t) = \frac{1}{2} F(x - at) - \frac{1}{2a} \int_0^{x-at} G(s)\, ds + \frac{1}{2} F(x + at) + \frac{1}{2a} \int_0^{x+at} G(s)\, ds = \frac{F(x - at) + F(x + at)}{2} + \frac{1}{2a} \int_{x-at}^{x+at} G(s)\, ds. \qquad (4.16) \]
Let us check that the d'Alembert formula really works:
\[ y(x, 0) = \frac{1}{2} F(x) - \frac{1}{2a} \int_0^{x} G(s)\, ds + \frac{1}{2} F(x) + \frac{1}{2a} \int_0^{x} G(s)\, ds = F(x). \]
So far so good. Assume for simplicity that F is differentiable. By the fundamental theorem of calculus we have
\[ y_t(x, t) = \frac{-a}{2} F'(x - at) + \frac{1}{2} G(x - at) + \frac{a}{2} F'(x + at) + \frac{1}{2} G(x + at). \]
So
\[ y_t(x, 0) = \frac{-a}{2} F'(x) + \frac{1}{2} G(x) + \frac{a}{2} F'(x) + \frac{1}{2} G(x) = G(x). \]
Yay! We're smoking now. OK, now the boundary conditions. Note that F(x) and G(x) are odd. Also ∫_0^x G(s) ds is an even function of x, because G(x) is odd (to see this fact, do the substitution s = −v). So
\[ y(0, t) = \frac{1}{2} F(-at) - \frac{1}{2a} \int_0^{-at} G(s)\, ds + \frac{1}{2} F(at) + \frac{1}{2a} \int_0^{at} G(s)\, ds = \frac{-1}{2} F(at) - \frac{1}{2a} \int_0^{at} G(s)\, ds + \frac{1}{2} F(at) + \frac{1}{2a} \int_0^{at} G(s)\, ds = 0. \]
Now F(x) and G(x) are 2L-periodic as well, and so is the function ∫_0^x G(s) ds: increasing x by 2L adds ∫_{-L}^{L} G(s) ds = 0 to the integral, since G is odd. Furthermore,
\[ y(L, t) = \frac{1}{2} F(L - at) - \frac{1}{2a} \int_0^{L-at} G(s)\, ds + \frac{1}{2} F(L + at) + \frac{1}{2a} \int_0^{L+at} G(s)\, ds. \]
Since F is odd and 2L-periodic, F(L − at) = F(−L − at) = −F(L + at). Since ∫_0^x G(s) ds is even and 2L-periodic, ∫_0^{L−at} G(s) ds = ∫_0^{L+at} G(s) ds. Hence
\[ y(L, t) = \frac{-1}{2} F(L + at) - \frac{1}{2a} \int_0^{L+at} G(s)\, ds + \frac{1}{2} F(L + at) + \frac{1}{2a} \int_0^{L+at} G(s)\, ds = 0. \]
And voilà, it works.
Example 4.8.1: What the d'Alembert solution says is that the solution is a superposition of two functions (waves) moving in opposite directions at "speed" a. To get an idea of how it works, let us do an example. Suppose that we have the simpler setup
\[ y_{tt} = y_{xx}, \qquad y(0, t) = y(1, t) = 0, \qquad y(x, 0) = f(x), \qquad y_t(x, 0) = 0. \]
Here f(x) is an impulse of height 1 centered at x = 0.5:
\[ f(x) = \begin{cases} 0 & \text{if } 0 \le x < 0.45, \\ 20\, (x - 0.45) & \text{if } 0.45 \le x < 0.5, \\ 20\, (0.55 - x) & \text{if } 0.5 \le x < 0.55, \\ 0 & \text{if } 0.55 \le x \le 1. \end{cases} \]
The graph of this pulse is the top left plot in Figure 4.21 on the next page.
Let F(x) be the odd periodic extension of f(x). Then from (4.16) we know that the solution is given as
\[ y(x, t) = \frac{F(x - t) + F(x + t)}{2}. \]
It is not hard to compute specific values of y(x, t). For example, to compute y(0.1, 0.6) we notice that x − t = −0.5 and x + t = 0.7. Now F(−0.5) = −f(0.5) = −20 (0.55 − 0.5) = −1 and F(0.7) = f(0.7) = 0. Hence y(0.1, 0.6) = (−1 + 0)/2 = −0.5. As you can see, the d'Alembert solution is much easier to actually compute and to plot than the Fourier series solution. See Figure 4.21 for plots of the solution y for several different t.
[Figure 4.21: Plot of the d'Alembert solution for t = 0, t = 0.2, t = 0.4, and t = 0.6.]
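The d'Alembert solution of this example takes only a few lines of code. The following sketch is ours (the helper name odd_periodic_ext is our own choice); it reproduces the hand computation of y(0.1, 0.6) above:

def f(x):
    # the impulse of height 1 centered at x = 0.5
    if 0.45 <= x < 0.5:
        return 20 * (x - 0.45)
    if 0.5 <= x < 0.55:
        return 20 * (0.55 - x)
    return 0.0

def odd_periodic_ext(x):
    # F(x): reduce x to the period [-1, 1), then use oddness
    x = ((x + 1) % 2) - 1
    return f(x) if x >= 0 else -f(-x)

def y(x, t):
    return (odd_periodic_ext(x - t) + odd_periodic_ext(x + t)) / 2

print(y(0.1, 0.6))  # prints -0.5, matching the computation in the text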
4.8.3 Notes
It is perhaps easier and more useful to memorize the procedure rather than the formula itself. The
important thing to remember is that a solution to the wave equation is a superposition of two waves
traveling in opposite directions. That is,
y(x, t) = A(x − at) + B(x + at).
If you think about it, the exact formulas for A and B are not hard to guess once you realize what kind of side conditions y(x, t) is supposed to satisfy. Let us give the formula again, but slightly differently. The best approach is to do this in stages. When g(x) = 0 (and hence G(x) = 0), we have the solution
\[ \frac{F(x - at) + F(x + at)}{2}. \]
On the other hand, when f(x) = 0 (and hence F(x) = 0), we let
\[ H(x) = \int_0^x G(s)\, ds. \]
The solution in this case is
\[ \frac{1}{2a} \int_{x-at}^{x+at} G(s)\, ds = \frac{-H(x - at) + H(x + at)}{2a}. \]
By superposition we get a solution for the general side conditions (4.15) (when neither f(x) nor g(x) is identically zero):
\[ y(x, t) = \frac{F(x - at) + F(x + at)}{2} + \frac{-H(x - at) + H(x + at)}{2a}. \qquad (4.17) \]
Do note the minus sign before the H.
Exercise 4.8.1: Check that the new formula (4.17) satisfies the side conditions (4.15).
Warning: Make sure you use the odd extensions F(x) and G(x), when you have formulas for
f (x) and g(x). The thing is, those formulas in general hold only for 0 < x < L, and are not usually
equal to F(x) and G(x) for other x.
4.8.4 Exercises
Exercise 4.8.2: Using the d'Alembert solution solve y_tt = 4y_xx, 0 < x < π, t > 0, y(0, t) = y(π, t) = 0, y(x, 0) = sin x, and y_t(x, 0) = sin x. Hint: Note that sin x is the odd extension of y(x, 0) and y_t(x, 0).
Exercise 4.8.3: Using the d'Alembert solution solve y_tt = 2y_xx, 0 < x < 1, t > 0, y(0, t) = y(1, t) = 0, y(x, 0) = sin⁵ πx, and y_t(x, 0) = sin³ πx.
Exercise 4.8.4: Take y_tt = 4y_xx, 0 < x < π, t > 0, y(0, t) = y(π, t) = 0, y(x, 0) = x(π − x), and y_t(x, 0) = 0. a) Solve using the d'Alembert formula. (Hint: You can use the sine series for y(x, 0).) b) Find the solution as a function of x for the fixed times t = 0.5, t = 1, and t = 2. Do not use the sine series here.
Exercise 4.8.5: Derive the d'Alembert solution for y_tt = a²y_xx, 0 < x < π, t > 0, y(0, t) = y(π, t) = 0, y(x, 0) = f(x), and y_t(x, 0) = 0, using the Fourier series solution of the wave equation, by applying an appropriate trigonometric identity.
Exercise 4.8.6: The d'Alembert solution still works if there are no boundary conditions and the initial condition is defined on the whole real line. Suppose that y_tt = y_xx (for all x on the real line and t ≥ 0), y(x, 0) = f(x), and y_t(x, 0) = 0, where
\[ f(x) = \begin{cases} 0 & \text{if } x < -1, \\ x + 1 & \text{if } -1 \le x < 0, \\ -x + 1 & \text{if } 0 \le x < 1, \\ 0 & \text{if } x > 1. \end{cases} \]
Solve using the d'Alembert solution. That is, write down a piecewise definition for the solution. Then sketch the solution for t = 0, t = 1/2, t = 1, and t = 2.
4.9 Steady state temperature, Laplacian, and Dirichlet problems
Note: 1 lecture, §9.7 in EP
Suppose we have an insulated wire, a plate, or a 3-dimensional object. We apply certain fixed temperatures on the ends of the wire, the edges of the plate, or on all sides of the 3-dimensional object. We wish to find the steady state temperature distribution; that is, we wish to know what the temperature will be after a long enough period of time.
We are really looking for a solution to the heat equation that is not dependent on time. Let us first do this in one space variable. We are looking for a function u that satisfies
\[ u_t = k u_{xx}, \]
but such that u_t = 0 for all x and t. Hence, we are looking for a function of x alone that satisfies u_xx = 0. It is easy to solve this equation by integration, and we see that u = Ax + B for some constants A and B.
Suppose we have an insulated wire, and we apply constant temperature T_1 at one end (say where x = 0) and T_2 at the other end (at x = L, where L is the length of the wire). Then our steady state solution is
\[ u(x) = \frac{T_2 - T_1}{L}\, x + T_1. \]
This solution agrees with our common sense intuition about how the heat should be distributed in the wire. So in one dimension, the steady state solutions are basically just straight lines.
Things are more complicated in two or more space dimensions. Let us restrict ourselves to two space dimensions for simplicity. The heat equation in two variables is
\[ u_t = k (u_{xx} + u_{yy}), \qquad (4.18) \]
or more commonly written as u_t = kΔu or u_t = k∇²u. Here the Δ and ∇² symbols mean ∂²/∂x² + ∂²/∂y². We will use Δ from now on. The reason for that notation is that you can define Δ to be the right thing for any number of space dimensions, and then the heat equation is always u_t = kΔu. The operator Δ is called the Laplacian.
OK, now that we have the notation out of the way, let us see what an equation for the steady state solution looks like. We are looking for a solution to (4.18) which does not depend on t. Hence we are looking for a function u(x, y) such that
\[ \Delta u = u_{xx} + u_{yy} = 0. \]
This equation is called the Laplace equation.‡ Solutions to the Laplace equation are called harmonic functions and have many nice properties and applications far beyond the steady state heat problem.
‡ Named after the French mathematician Pierre-Simon, marquis de Laplace (1749 – 1827).
Harmonic functions in two variables are no longer just linear (plane graphs). For example, you can check that the functions x² − y² and xy are harmonic. However, if you remember your multivariable calculus, note that if u_xx is positive, so that u is concave up in the x direction, then u_yy must be negative, so that u is concave down in the y direction. Therefore, a harmonic function can never have any "hilltop" or "valley" on its graph. This observation is consistent with our intuitive idea of steady state heat distribution.
Commonly the Laplace equation is part of a so-called Dirichlet problem.§ That is, we have some region in the xy-plane, and we specify certain values along the boundary of the region. We then try to find a solution u defined on this region such that u agrees with the values we specified on the boundary.
For simplicity, we will consider a rectangular region. Also for simplicity, we will specify boundary values to be zero at three of the four edges and only specify an arbitrary function at one edge. As we still have the principle of superposition, you can use this simpler solution to derive the general solution for arbitrary boundary values by solving four different problems, one for each edge, and adding those solutions together. This setup is left as an exercise.
§ Named after the German mathematician Johann Peter Gustav Lejeune Dirichlet (1805 – 1859).
We wish to solve the following problem. Let h and w be the height and width of our rectangle, with one corner at the origin, the rectangle lying in the first quadrant.
\[ \Delta u = 0, \qquad (4.19) \]
\[ u(0, y) = 0 \quad \text{for } 0 < y < h, \qquad (4.20) \]
\[ u(x, h) = 0 \quad \text{for } 0 < x < w, \qquad (4.21) \]
\[ u(w, y) = 0 \quad \text{for } 0 < y < h, \qquad (4.22) \]
\[ u(x, 0) = f(x) \quad \text{for } 0 < x < w. \qquad (4.23) \]
[Diagram: the rectangle with corners (0, 0), (w, 0), (w, h), (0, h); u = 0 on the left, top, and right edges, and u = f(x) on the bottom edge.]
The method we will apply is separation of variables. Again, we will come up with enough building-block solutions satisfying all the homogeneous boundary conditions (all conditions except (4.23)). We notice that superposition still works for the equation and all the homogeneous conditions. Therefore, we can use the Fourier series for f(x) to solve the problem as before.
We try u(x, y) = X(x)Y(y). We plug into the equation to get
\[ X'' Y + X Y'' = 0. \]
We put the Xs on one side and the Ys on the other to get
\[ -\frac{X''}{X} = \frac{Y''}{Y}. \]
The left hand side only depends on x and the right hand side only depends on y. Therefore, there is some constant λ such that λ = −X''/X = Y''/Y. And we get two equations,
\[ X'' + \lambda X = 0, \qquad Y'' - \lambda Y = 0. \]
Furthermore, the homogeneous boundary conditions imply that X(0) = X(w) = 0 and Y(h) = 0. Taking the equation for X, we have already seen that we have a nontrivial solution if and only if λ = λ_n = n²π²/w², and the solution is a multiple of
\[ X_n(x) = \sin \frac{n\pi}{w} x. \]
For these given λ_n, the general solution for Y (one for each n) is
\[ Y_n(y) = A_n \cosh \frac{n\pi}{w} y + B_n \sinh \frac{n\pi}{w} y. \qquad (4.24) \]
We only have one condition on Y_n, and hence we can pick one of the constants A_n or B_n to be whatever is convenient. It will be useful to have Y_n(0) = 1, so we let A_n = 1. Setting Y_n(h) = 0 and solving for B_n, we get
\[ B_n = \frac{-\cosh \frac{n\pi h}{w}}{\sinh \frac{n\pi h}{w}}. \]
After we plug A_n and B_n into (4.24) and simplify, we find
\[ Y_n(y) = \frac{\sinh \frac{n\pi (h - y)}{w}}{\sinh \frac{n\pi h}{w}}. \]
We define u_n(x, y) = X_n(x) Y_n(y), and note that u_n satisfies (4.19)–(4.22). Observe that
\[ u_n(x, 0) = X_n(x) Y_n(0) = \sin \frac{n\pi}{w} x. \]
Suppose that
\[ f(x) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi x}{w}. \]
Then we get a solution of (4.19)–(4.23) of the following form:
\[ u(x, y) = \sum_{n=1}^{\infty} b_n u_n(x, y) = \sum_{n=1}^{\infty} b_n \left( \sin \frac{n\pi}{w} x \right) \left( \frac{\sinh \frac{n\pi (h - y)}{w}}{\sinh \frac{n\pi h}{w}} \right). \]
As u_n satisfies (4.19)–(4.22), and any linear combination (finite or infinite) of the u_n must also satisfy (4.19)–(4.22), we see that u satisfies (4.19)–(4.22). By plugging in y = 0 it is easy to see that u satisfies (4.23) as well.
Example 4.9.1: Suppose that we take w = h = π and we let f(x) = π. We compute the sine series for the function π (we will get the square wave). We find that for 0 < x < π we have
\[ f(x) = \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{4}{n} \sin nx. \]
Therefore, the solution u(x, y), see Figure 4.22, to the corresponding Dirichlet problem is given as
\[ u(x, y) = \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} \frac{4}{n} (\sin nx) \left( \frac{\sinh n(\pi - y)}{\sinh n\pi} \right). \]
[Figure 4.22: Steady state temperature of a square plate with three sides held at zero and one side held at π.]
This scenario corresponds to the steady state temperature on a square plate of width π with three sides held at 0 degrees and one side held at π degrees. If we have arbitrary initial data on all sides, then we solve four problems, each using one piece of nonhomogeneous data. Then we use the principle of superposition to add up all four solutions to obtain a solution to the original problem.
There is another way to visualize the solutions. Take a wire and bend it in just the right way so that it corresponds to the graph of the temperature above the boundary of your region. Then dip the wire in soapy water and let it form a soapy film stretched between the edges of the wire. It turns out that this soap film is precisely the graph of the solution to the Laplace equation. Harmonic functions come up frequently in problems where we are trying to minimize the area of some surface or minimize the energy of some system.
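To plot such a solution yourself, note that computing sinh(n(π − y)) and sinh(nπ) separately overflows for large n; it is safer to evaluate their ratio directly. The following is a sketch of ours (numpy assumed), not part of the text's derivation:

import numpy as np

def u(x, y, terms=101):
    total = 0.0
    for n in range(1, terms + 1, 2):  # odd n only
        # sinh(n (pi - y)) / sinh(n pi) rewritten in an overflow-safe form
        ratio = ((np.exp(-n * y) - np.exp(-n * (2 * np.pi - y)))
                 / (1 - np.exp(-2 * n * np.pi)))
        total += (4 / n) * np.sin(n * x) * ratio
    return total

print(u(np.pi / 2, 0.01))       # near the hot edge y = 0, close to pi
print(u(np.pi / 2, np.pi / 2))  # an interior value, between 0 and pi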
4.9.1 Exercises
Exercise 4.9.1: Let R be the region described by 0 < x < π and 0 < y < π. Solve the problem
∆u = 0, u(x, 0) = sin x, u(x, π) = 0, u(0, y) = 0, u(π, y) = 0.
Exercise 4.9.2: Let R be the region described by 0 < x < 1 and 0 < y < 1. Solve the problem
\[ u_{xx} + u_{yy} = 0, \qquad u(x, 0) = \sin \pi x - \sin 2\pi x, \quad u(x, 1) = 0, \qquad u(0, y) = 0, \quad u(1, y) = 0. \]
Exercise 4.9.3: Let R be the region described by 0 < x < 1 and 0 < y < 1. Solve the problem
\[ u_{xx} + u_{yy} = 0, \qquad u(x, 0) = u(x, 1) = u(0, y) = u(1, y) = C, \]
for some constant C. Hint: Guess, then check your intuition.
Exercise 4.9.4: Let R be the region described by 0 < x < π and 0 < y < π. Solve
∆u = 0, u(x, 0) = 0, u(x, π) = π, u(0, y) = y, u(π, y) = y.
Hint: Try a solution of the form u(x, y) = X(x) + Y(y) (different separation of variables).
Exercise 4.9.5: Use the solution of Exercise 4.9.4 to solve
∆u = 0, u(x, 0) = sin x, u(x, π) = π, u(0, y) = y, u(π, y) = y.
Hint: Use superposition.
Exercise 4.9.6: Let R be the region described by 0 < x < w and 0 < y < h. Solve the problem
\[ u_{xx} + u_{yy} = 0, \qquad u(x, 0) = 0, \quad u(x, h) = f(x), \qquad u(0, y) = 0, \quad u(w, y) = 0. \]
The solution should be in series form, using the Fourier series coefficients of f(x).
Exercise 4.9.7: Let R be the region described by 0 < x < w and 0 < y < h. Solve the problem
\[ u_{xx} + u_{yy} = 0, \qquad u(x, 0) = 0, \quad u(x, h) = 0, \qquad u(0, y) = f(y), \quad u(w, y) = 0. \]
The solution should be in series form, using the Fourier series coefficients of f(y).
Exercise 4.9.8: Let R be the region described by 0 < x < w and 0 < y < h. Solve the problem
\[ u_{xx} + u_{yy} = 0, \qquad u(x, 0) = 0, \quad u(x, h) = 0, \qquad u(0, y) = 0, \quad u(w, y) = f(y). \]
The solution should be in series form, using the Fourier series coefficients of f(y).
Exercise 4.9.9: Let R be the region described by 0 < x < 1 and 0 < y < 1. Solve the problem
\[ u_{xx} + u_{yy} = 0, \qquad u(x, 0) = \sin 9\pi x, \quad u(x, 1) = \sin 2\pi x, \qquad u(0, y) = 0, \quad u(1, y) = 0. \]
Hint: Use superposition.
Exercise 4.9.10: Let R be the region described by 0 < x < 1 and 0 < y < 1. Solve the problem
\[ u_{xx} + u_{yy} = 0, \qquad u(x, 0) = \sin \pi x, \quad u(x, 1) = \sin \pi x, \qquad u(0, y) = \sin \pi y, \quad u(1, y) = \sin \pi y. \]
Hint: Use superposition.
Chapter 5
Eigenvalue problems
5.1 Sturm-Liouville problems
Note: 2 lectures, §10.1 in EP
5.1.1 Boundary value problems
We have encountered several different eigenvalue problems, such as
\[ X''(x) + \lambda X(x) = 0 \]
with different boundary conditions:
X(0) = 0, X(L) = 0 (Dirichlet), or
X'(0) = 0, X'(L) = 0 (Neumann), or
X'(0) = 0, X(L) = 0 (Mixed), or
X(0) = 0, X'(L) = 0 (Mixed), ...
For example, for the insulated wire, Dirichlet conditions correspond to applying a zero temperature at the ends, Neumann means insulating the ends, etc. Other types of endpoint conditions also arise naturally, such as
\[ h X(0) - X'(0) = 0, \qquad h X(L) + X'(L) = 0, \]
for some constant h.
These problems came up, for example, in the study of the heat equation u_t = ku_xx when we were trying to solve the equation by the method of separation of variables. During the process we encountered a certain eigenvalue problem and found the eigenfunctions X_n(x). We then found the eigenfunction decomposition of the initial temperature f(x) = u(x, 0) in terms of the eigenfunctions,
\[ f(x) = \sum_{n=1}^{\infty} c_n X_n(x). \]
Once we had this decomposition and once we found suitable T_n(t) such that T_n(0) = 1, we noted that a solution to the original problem could be written as
\[ u(x, t) = \sum_{n=1}^{\infty} c_n T_n(t) X_n(x). \]
We will try to solve more general problems using this method. We will study second order linear equations of the form
\[ \frac{d}{dx} \left( p(x) \frac{dy}{dx} \right) - q(x) y + \lambda r(x) y = 0. \qquad (5.1) \]
Essentially any second order linear equation of the form a(x)y'' + b(x)y' + c(x)y + λd(x)y = 0 can be written as (5.1) after multiplying by a proper factor.
Example 5.1.1 (Bessel): Consider the equation
\[ x^2 y'' + x y' + \left( \lambda x^2 - n^2 \right) y = 0. \]
Multiply both sides by 1/x to obtain
\[ 0 = \frac{1}{x} \left( x^2 y'' + x y' + \left( \lambda x^2 - n^2 \right) y \right) = x y'' + y' + \left( \lambda x - \frac{n^2}{x} \right) y = \frac{d}{dx} \left( x \frac{dy}{dx} \right) - \frac{n^2}{x} y + \lambda x y. \]
We can state the general Sturm-Liouville problem.¶ We seek nontrivial solutions to
\[ \begin{aligned} & \frac{d}{dx} \left( p(x) \frac{dy}{dx} \right) - q(x) y + \lambda r(x) y = 0, \quad a < x < b, \\ & \alpha_1 y(a) - \alpha_2 y'(a) = 0, \\ & \beta_1 y(b) + \beta_2 y'(b) = 0. \end{aligned} \qquad (5.2) \]
In particular, we seek λs that allow for nontrivial solutions. The λs for which there is a nontrivial solution are called the eigenvalues, and the corresponding nontrivial solutions are called eigenfunctions. Obviously, α_1 and α_2 should not both be zero, and the same for β_1 and β_2.
Theorem 5.1.1. Suppose p(x), p'(x), q(x) and r(x) are continuous on [a, b], and suppose p(x) > 0 and r(x) > 0 for all x in [a, b]. Then the Sturm-Liouville problem (5.2) has an increasing sequence of eigenvalues
\[ \lambda_1 < \lambda_2 < \lambda_3 < \cdots \]
such that
\[ \lim_{n \to \infty} \lambda_n = +\infty, \]
and such that to each λ_n there is (up to a constant multiple) a single eigenfunction y_n(x).
Moreover, if q(x) ≥ 0 and α_1, α_2, β_1, β_2 ≥ 0, then λ_n ≥ 0 for all n.
¶ Named after the French mathematicians Jacques Charles François Sturm (1803 – 1855) and Joseph Liouville (1809 – 1882).
Note: Be careful about the signs. Also be careful about the inequalities for r and p; they must be strict for all x! Problems satisfying the hypotheses of the theorem are called regular Sturm-Liouville problems, and we will only consider such problems here. That is, a regular problem is one where p(x), p'(x), q(x) and r(x) are continuous, p(x) > 0, r(x) > 0, q(x) ≥ 0, and α_1, α_2, β_1, β_2 ≥ 0.
When zero is an eigenvalue, we will usually start labeling the eigenvalues at 0 rather than 1, for convenience.
Example 5.1.2: The problem y'' + λy = 0, 0 < x < L, y(0) = 0, y(L) = 0 is a regular Sturm-Liouville problem. Here p(x) = 1, q(x) = 0, r(x) = 1, and we have p(x) = 1 > 0 and r(x) = 1 > 0. The eigenvalues are λ_n = n²π²/L² and the eigenfunctions are y_n(x) = sin (nπ/L) x. All eigenvalues are nonnegative, as predicted by the theorem.
Exercise 5.1.1: Find eigenvalues and eigenfunctions for
\[ y'' + \lambda y = 0, \qquad y'(0) = 0, \qquad y'(1) = 0. \]
Identify the p, q, r, α_j, β_j. Can you use the theorem to make the search for eigenvalues easier?
Example 5.1.3: Find eigenvalues and eigenfunctions of the problem
\[ y'' + \lambda y = 0, \quad 0 < x < 1, \qquad h y(0) - y'(0) = 0, \quad y'(1) = 0, \quad h > 0. \]
These equations give a regular Sturm-Liouville problem.
Exercise 5.1.2: Identify p, q, r, α_j, β_j in the example above.
First note that λ ≥ 0 by Theorem 5.1.1. Therefore, the general solution (without boundary conditions) is
\[ y(x) = A \cos \sqrt{\lambda}\, x + B \sin \sqrt{\lambda}\, x \quad \text{if } \lambda > 0, \]
\[ y(x) = A x + B \quad \text{if } \lambda = 0. \]
Let us see if λ = 0 is an eigenvalue: we must satisfy 0 = hB − A and A = 0, hence B = 0 (as h > 0); therefore, 0 is not an eigenvalue (there is no corresponding eigenfunction).
Now let us try λ > 0. We plug in the boundary conditions:
\[ 0 = h A - \sqrt{\lambda}\, B, \qquad 0 = -A \sqrt{\lambda} \sin \sqrt{\lambda} + B \sqrt{\lambda} \cos \sqrt{\lambda}. \]
Note that if A = 0, then B = 0, and vice versa; hence both are nonzero. So B = hA/√λ, and 0 = −A√λ sin √λ + (hA/√λ) √λ cos √λ. As A ≠ 0, we get
\[ 0 = -\sqrt{\lambda} \sin \sqrt{\lambda} + h \cos \sqrt{\lambda}, \]
or
\[ \frac{h}{\sqrt{\lambda}} = \tan \sqrt{\lambda}. \]
Now use a computer to find λ_n. There are tables available, though using a computer or a graphing calculator will probably be far more convenient nowadays. The easiest method is to plot the functions h/x and tan x and see for which x they intersect. There will be an infinite number of intersections. Denote by √λ_1 the first intersection, by √λ_2 the second intersection, etc. For example, when h = 1, we get √λ_1 ≈ 0.86 and √λ_2 ≈ 3.43. A plot for h = 1 is given in Figure 5.1. The appropriate eigenfunction (letting A = 1 for convenience, so that B = h/√λ_n) is
\[ y_n(x) = \cos \sqrt{\lambda_n}\, x + \frac{h}{\sqrt{\lambda_n}} \sin \sqrt{\lambda_n}\, x. \]
[Figure 5.1: Plot of 1/x and tan x.]
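Here is one way to carry out that computer search; this sketch is ours and assumes scipy is available. Each branch of tan between consecutive poles contains exactly one intersection with h/x:

import numpy as np
from scipy.optimize import brentq

h = 1.0
g = lambda x: np.tan(x) - h / x   # roots are x = sqrt(lambda_n)

roots = []
for k in range(4):
    lo = k * np.pi + 1e-6              # just past a zero of tan
    hi = k * np.pi + np.pi / 2 - 1e-6  # just before the next pole
    roots.append(brentq(g, lo, hi))

print(roots)                  # approximately [0.86, 3.43, 6.44, 9.53]
print([r**2 for r in roots])  # the eigenvalues lambda_n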
5.1.2 Orthogonality
We have seen the notion of orthogonality before. For example, we have shown that the functions sin nx are orthogonal for distinct n on [0, π]. For general Sturm-Liouville problems we will need a more general setup. Let r(x) be a weight function (any function, though generally we will assume it is positive) on [a, b]. Then two functions f(x), g(x) are said to be orthogonal with respect to the weight function r(x) when
\[ \int_a^b f(x)\, g(x)\, r(x)\, dx = 0. \]
In this setting, we define the inner product as
\[ (f, g) \overset{\text{def}}{=} \int_a^b f(x)\, g(x)\, r(x)\, dx, \]
and then say f and g are orthogonal whenever (f, g) = 0. The results and concepts are again analogous to finite dimensional linear algebra.
The idea of the given inner product is that those x where r(x) is greater have more weight. Nontrivial (nonconstant) r(x) arise naturally, for example from a change of variables. Hence, you could think of a change of variables such that dξ = r(x) dx.
We have the following orthogonality property of eigenfunctions of a regular Sturm-Liouville problem.
Theorem 5.1.2. Suppose we have a regular Sturm-Liouville problem
\[ \begin{aligned} & \frac{d}{dx} \left( p(x) \frac{dy}{dx} \right) - q(x) y + \lambda r(x) y = 0, \\ & \alpha_1 y(a) - \alpha_2 y'(a) = 0, \\ & \beta_1 y(b) + \beta_2 y'(b) = 0. \end{aligned} \]
Let y_j and y_k be two distinct eigenfunctions for two distinct eigenvalues λ_j and λ_k. Then
\[ \int_a^b y_j(x)\, y_k(x)\, r(x)\, dx = 0, \]
that is, y_j and y_k are orthogonal with respect to the weight function r.
The proof is very similar to that of the analogous theorem from § 4.1. It can also be found in many books, including, for example, Edwards and Penney [EP].
5.1.3 Fredholm alternative
We also have the Fredholm alternative theorem we talked about before, for all regular Sturm-Liouville problems. We state it here for completeness.
Theorem 5.1.3 (Fredholm alternative). Suppose that we have a regular Sturm-Liouville problem. Then either
\[ \begin{aligned} & \frac{d}{dx} \left( p(x) \frac{dy}{dx} \right) - q(x) y + \lambda r(x) y = 0, \\ & \alpha_1 y(a) - \alpha_2 y'(a) = 0, \\ & \beta_1 y(b) + \beta_2 y'(b) = 0, \end{aligned} \]
has a nonzero solution, or
\[ \begin{aligned} & \frac{d}{dx} \left( p(x) \frac{dy}{dx} \right) - q(x) y + \lambda r(x) y = f(x), \\ & \alpha_1 y(a) - \alpha_2 y'(a) = 0, \\ & \beta_1 y(b) + \beta_2 y'(b) = 0, \end{aligned} \]
has a unique solution for any f(x) continuous on [a, b].
This theorem is used in much the same way as we did before in § 4.4. It is used when solving more general nonhomogeneous boundary value problems. The theorem does not actually help us solve the problem, but it tells us when a unique solution exists, so that we know when to spend time looking for a solution. To solve the problem we decompose f(x) and y(x) in terms of the eigenfunctions of the homogeneous problem, and then solve for the coefficients of the series for y(x).
5.1.4 Eigenfunction series
What we want to do with the eigenfunctions once we have them is to compute the eigenfunction decomposition of an arbitrary function f(x). That is, we wish to write
\[ f(x) = \sum_{n=1}^{\infty} c_n y_n(x), \qquad (5.3) \]
where the y_n(x) are the eigenfunctions. We wish to find out if we can represent any function f(x) in this way, and if so, we wish to calculate the c_n (and of course we would want to know if the sum converges). OK, so imagine we could write f(x) as above. We will assume convergence and the ability to integrate the series term by term. Because of orthogonality we have
\[ (f, y_m) = \int_a^b f(x)\, y_m(x)\, r(x)\, dx = \sum_{n=1}^{\infty} c_n \int_a^b y_n(x)\, y_m(x)\, r(x)\, dx = c_m \int_a^b y_m(x)\, y_m(x)\, r(x)\, dx = c_m (y_m, y_m). \]
Hence,
\[ c_m = \frac{(f, y_m)}{(y_m, y_m)} = \frac{\int_a^b f(x)\, y_m(x)\, r(x)\, dx}{\int_a^b \left( y_m(x) \right)^2 r(x)\, dx}. \qquad (5.4) \]
Note that the y_m are known only up to a constant multiple, so we could have picked a scalar multiple of each eigenfunction such that (y_m, y_m) = 1 (if we had an arbitrary eigenfunction ỹ_m, we divide it by √(ỹ_m, ỹ_m)). In the case that (y_m, y_m) = 1, we would have the simpler form c_m = (f, y_m), as we essentially had for the Fourier series. The following theorem holds more generally, but the statement given is enough for our purposes.
Theorem 5.1.4. Suppose f is a piecewise smooth continuous function on [a, b]. If y_1, y_2, ... are the eigenfunctions of a regular Sturm-Liouville problem, then there exist real constants c_1, c_2, ... given by (5.4) such that (5.3) converges and holds for a < x < b.
Example 5.1.4: Take the simple Sturm-Liouville problem
\[ y'' + \lambda y = 0, \quad 0 < x < \frac{\pi}{2}, \qquad y(0) = 0, \quad y'\left( \frac{\pi}{2} \right) = 0. \]
The above is a regular problem, and furthermore we actually know by Theorem 5.1.1 that λ ≥ 0.
Suppose λ = 0. Then the general solution is y(x) = Ax + B. We plug in the boundary conditions to get 0 = y(0) = B and 0 = y'(π/2) = A; hence λ = 0 is not an eigenvalue.
The general solution, therefore, is
\[ y(x) = A \cos \sqrt{\lambda}\, x + B \sin \sqrt{\lambda}\, x. \]
Plugging in the boundary conditions, we get 0 = y(0) = A and 0 = y'(π/2) = √λ B cos (√λ π/2). Since B cannot be zero, we must have cos (√λ π/2) = 0. This means that √λ (π/2) must be an odd integer multiple of π/2, i.e., (2n − 1)(π/2) = √λ_n (π/2). Hence
\[ \lambda_n = (2n - 1)^2. \]
We can take B = 1, and hence our eigenfunctions are
\[ y_n(x) = \sin (2n - 1)x. \]
We finally compute
\[ \int_0^{\pi/2} \left( \sin (2n - 1)x \right)^2 dx = \frac{\pi}{4}. \]
So any piecewise smooth function on [0, π/2] can be written as
\[ f(x) = \sum_{n=1}^{\infty} c_n \sin (2n - 1)x, \]
where
\[ c_n = \frac{(f, y_n)}{(y_n, y_n)} = \frac{\int_0^{\pi/2} f(x) \sin (2n - 1)x \, dx}{\int_0^{\pi/2} \left( \sin (2n - 1)x \right)^2 dx} = \frac{4}{\pi} \int_0^{\pi/2} f(x) \sin (2n - 1)x \, dx. \]
Note that the series converges to an odd 2π-periodic (not π-periodic!) extension of f(x).
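As a concrete experiment, one can compute these coefficients for a sample function, say f(x) = x (our choice, purely for illustration), and check that the partial sums approximate f on (0, π/2). A sketch of ours (numpy and scipy assumed):

import numpy as np
from scipy.integrate import quad

f = lambda x: x

def c(n):
    val, _ = quad(lambda x: f(x) * np.sin((2 * n - 1) * x), 0, np.pi / 2)
    return 4 / np.pi * val

def partial_sum(x, terms=200):
    return sum(c(n) * np.sin((2 * n - 1) * x) for n in range(1, terms + 1))

for x in (0.3, 0.7, 1.2):
    print(x, partial_sum(x))  # each should be close to f(x) = x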
Exercise 5.1.3 (challenging): In the above example, the function is defined on 0 < x < π/2, yet the series converges to an odd 2π-periodic extension of f(x). Find out how the extension is defined for π/2 < x < π.
5.1.5 Exercises
Exercise 5.1.4: Find eigenvalues and eigenfunctions of
y'' + λy = 0,   y(0) − y'(0) = 0,   y(1) = 0.
Exercise 5.1.5: Expand the function f (x) = x on 0 ≤ x ≤ 1 using the eigenfunctions of the system
y'' + λy = 0,   y'(0) = 0,   y(1) = 0.
Exercise 5.1.6: Suppose that you had a Sturm-Liouville problem on the interval [0, 1] and came up with y_n(x) = sin(γnx), where γ > 0 is some constant. Decompose f (x) = x, 0 < x < 1, in terms of these eigenfunctions.
Exercise 5.1.7: Find eigenvalues and eigenfunctions of
y^{(4)} + λy = 0,   y(0) = 0,   y'(0) = 0,   y(1) = 0,   y'(1) = 0.
This problem is not a Sturm-Liouville problem, but the idea is the same.
Exercise 5.1.8 (more challenging): Find eigenvalues and eigenfunctions for
d/dx (e^x y') + λ e^x y = 0,   y(0) = 0,   y(1) = 0.
Hint: First write the system as a constant coefficient system to find general solutions. Do note that
Theorem 5.1.1 on page 212 guarantees λ ≥ 0.
5.2 Application of eigenfunction series
Note: 1 lecture, §10.2 in EP
The eigenfunction series can arise even from higher order equations. Suppose we have an
elastic beam (say made of steel). We will study the transversal vibrations of the beam. That is,
assume the beam lies along the x axis and let y(x, t) measure the displacement of the point x on the
beam at time t. See Figure 5.2.

Figure 5.2: Transversal vibrations of a beam.
The equation that governs this setup is
a⁴ ∂⁴y/∂x⁴ + ∂²y/∂t² = 0,
for some constant a (a⁴ = EI/ρ in EP).
Suppose the beam is of length 1 simply supported (hinged) at the ends. Suppose the beam is
displaced by some function f (x) at time t = 0 and then let go (initial velocity is 0). Then y satisfies:
a⁴ y_xxxx + y_tt = 0   (0 < x < 1, t > 0),
y(0, t) = y_xx(0, t) = 0,
y(1, t) = y_xx(1, t) = 0,
y(x, 0) = f (x),   y_t(x, 0) = 0.   (5.5)
Again we try y(x, t) = X(x)T(t) and plug in to get a⁴ X^{(4)} T + XT'' = 0, or as usual
X^{(4)}/X = −T''/(a⁴T) = λ.
We note that we want T'' + λa⁴T = 0. Let us assume that λ > 0. We can argue that we expect vibration and not exponential growth nor decay in the t direction (there is no friction in our model, for instance). Similarly λ = 0 will not occur.
Exercise 5.2.1: Try to justify λ > 0 just from the equations.
Write ω⁴ = λ, so that we do not need to write the fourth root all the time. For X we get the equation X^{(4)} − ω⁴X = 0. The general solution is
X(x) = Ae^{ωx} + Be^{−ωx} + C sin ωx + D cos ωx.
Now 0 = X(0) = A + B + D, 0 = X''(0) = ω²(A + B − D). Hence, D = 0 and A + B = 0, or B = −A. So we have
X(x) = Ae^{ωx} − Ae^{−ωx} + C sin ωx.
Now 0 = X(1) = A(e^ω − e^{−ω}) + C sin ω, and 0 = X''(1) = Aω²(e^ω − e^{−ω}) − Cω² sin ω. This means that C sin ω = 0 and A(e^ω − e^{−ω}) = 2A sinh ω = 0. If ω > 0, then sinh ω ≠ 0 and so A = 0. This means that C ≠ 0, else we do not have an eigenvalue. Also ω must be an integer multiple of π, hence ω = nπ and n ≥ 1 (as ω > 0). We can take C = 1. So the eigenvalues are λ_n = n⁴π⁴ and the eigenfunctions are sin nπx.
Now T'' + n⁴π⁴a⁴T = 0. The general solution is T(t) = A sin(n²π²a²t) + B cos(n²π²a²t). But T'(0) = 0, hence we must have A = 0, and we can take B = 1 to make T(0) = 1 for convenience. So our solutions are T_n(t) = cos(n²π²a²t).
Since the eigenfunctions are just sines again, we can decompose the function f (x) on 0 < x < 1 using the sine series. That is, on 0 < x < 1 we have (you know how to do this by now)
f (x) = ∑_{n=1}^∞ b_n sin nπx.
Then the solution to (5.5) is
y(x, t) = ∑_{n=1}^∞ b_n X_n(x) T_n(t) = ∑_{n=1}^∞ b_n (sin nπx)(cos n²π²a²t).
The point is that X_n T_n is a solution that satisfies all the homogeneous conditions (that is, all conditions except the initial position). And since T_n(0) = 1, we have
y(x, 0) = ∑_{n=1}^∞ b_n X_n(x) T_n(0) = ∑_{n=1}^∞ b_n X_n(x) = ∑_{n=1}^∞ b_n sin nπx = f (x).
So y(x, t) solves (5.5).
Note that the natural (circular) frequencies of the system are n²π²a². These frequencies are all integer multiples of the fundamental frequency π²a², so we will get a nice musical note. The exact frequencies and their amplitudes are what we call the timbre of the note.
The timbre of a beam is different from that of a vibrating string, where we get “more” of the lower frequencies since we get all integer multiples, 1, 2, 3, 4, 5, . . . For a steel beam we get only the square multiples 1, 4, 9, 16, 25, . . . That is why when you hit a steel beam you hear a very pure sound. The sound of a xylophone or vibraphone is, therefore, very different from a guitar or piano.
Example 5.2.1: Let us assume that f (x) = x(x − 1)/10. On 0 < x < 1 we have (you know how to do this by now)
f (x) = ∑_{n=1, n odd}^∞ (−4/(5π³n³)) sin nπx.
Hence, the solution to (5.5) with the given initial position f (x) is
y(x, t) = ∑_{n=1, n odd}^∞ (−4/(5π³n³)) (sin nπx)(cos n²π²a²t).
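Since the solution is an explicit series, a truncated partial sum is easy to evaluate numerically. Here is a small sketch in Python with numpy (an assumed dependency); a = 1 and the number of terms are sample choices.

import numpy as np

a = 1.0                                   # the constant from (5.5)
x = np.linspace(0.0, 1.0, 201)

def y(x, t, terms=50):
    # Partial sum over odd n of -4/(5 pi^3 n^3) sin(n pi x) cos(n^2 pi^2 a^2 t).
    total = np.zeros_like(x)
    for n in range(1, 2*terms, 2):        # odd n only
        bn = -4.0 / (5.0 * np.pi**3 * n**3)
        total += bn * np.sin(n*np.pi*x) * np.cos(n**2 * np.pi**2 * a**2 * t)
    return total

print(y(x, 0.0).min())                    # about -0.025, the minimum of f(x) = x(x-1)/10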
5.2.1 Exercises
Exercise 5.2.2: Suppose you have a beam of length 5 with free ends. Let y be the transverse
deviation of the beam at position x on the beam (0 < x < 5). You know that the constants are such
that this satisfies the equation y_tt + 4y_xxxx = 0. Suppose you know that the initial shape of the beam
is the graph of x(5 − x), and the initial velocity is uniformly equal to 2 (same for each x) in the
positive y direction. Set up the equation together with the boundary and initial conditions. Just set
up, do not solve.
Exercise 5.2.3: Suppose you have a beam of length 5 with one end free and one end fixed (the
fixed end is at x = 5). Let u be the longitudinal deviation of the beam at position x on the beam
(0 < x < 5). You know that the constants are such that this satisfies the equation u_tt = 4u_xx. Suppose you know that the initial displacement of the beam is (x − 5)/50, and the initial velocity is −(x − 5)/100 in the positive u direction. Set up the equation together with the boundary and initial conditions.
Just set up, do not solve.
Exercise 5.2.4: Suppose the beam is L units long, everything else kept the same as in (5.5). What is the equation and the series solution?
Exercise 5.2.5: Suppose you have
a⁴ y_xxxx + y_tt = 0   (0 < x < 1, t > 0),
y(0, t) = y_xx(0, t) = 0,
y(1, t) = y_xx(1, t) = 0,
y(x, 0) = f (x),   y_t(x, 0) = g(x).
That is, you also have an initial velocity. Find a series solution. Hint: Use the same idea as we did for the wave equation.
5.3 Steady periodic solutions
Note: 1–2 lectures, §10.3 in EP
5.3.1 Forced vibrating string.
Suppose that we have a guitar string of length L. We have studied the wave equation problem in
this case, where x was the position on the string, t was time and y was the displacement of the
string. See Figure 5.3.
Figure 5.3: Vibrating string.
The problem is governed by the equations
y_tt = a² y_xx,
y(0, t) = 0,   y(L, t) = 0,
y(x, 0) = f (x),   y_t(x, 0) = g(x).   (5.6)
We saw previously that the solution is of the form
y = ∑_{n=1}^∞ ( A_n cos(nπa/L t) + B_n sin(nπa/L t) ) sin(nπ/L x),
where A_n and B_n were determined by the initial conditions. The natural frequencies of the system are the (circular) frequencies nπa/L for integers n ≥ 1.
But these are free vibrations. What if there is an external force acting on the string? Say there are air vibrations (noise), for example from a second string, or perhaps a jet engine. For simplicity, assume a nice pure sound and assume the force is uniform at every position on the string. Let us say F(t) = F_0 cos ωt as force per unit mass. Then our wave equation becomes (remember that acceleration is force divided by mass, and F is already per unit mass)
y_tt = a² y_xx + F_0 cos ωt,   (5.7)
with the same boundary conditions of course.
We will want to find the solution here that satisfies the above equation and
y(0, t) = 0,   y(L, t) = 0,   y(x, 0) = 0,   y_t(x, 0) = 0.   (5.8)
That is, the string is initially at rest. First we find a particular solution y_p of (5.7) that satisfies y(0, t) = y(L, t) = 0. We define the functions f and g as
f (x) = −y_p(x, 0),   g(x) = −∂y_p/∂t (x, 0).
We then find a solution y_c of (5.6). If we add the two solutions, we find that y = y_c + y_p solves (5.7) with the initial conditions.
Exercise 5.3.1: Check that y = y_c + y_p solves (5.7) and the side conditions (5.8).
So the big issue here is to find the particular solution y_p. We look at the equation and we make an educated guess
y_p(x, t) = X(x) cos ωt.
We plug in to get
−ω²X cos ωt = a²X'' cos ωt + F_0 cos ωt,
or −ω²X = a²X'' + F_0 after cancelling the cosine. We know how to find a general solution to this equation (it is a nonhomogeneous constant coefficient equation) and we get that the general solution is
X(x) = A cos(ω/a x) + B sin(ω/a x) − F_0/ω².
The endpoint conditions imply that X(0) = X(L) = 0, so
0 = X(0) = A − F_0/ω²,   or   A = F_0/ω²,
and
0 = X(L) = (F_0/ω²) cos(ωL/a) + B sin(ωL/a) − F_0/ω².
Assuming that sin(ωL/a) is not zero we can solve for B to get
B = −F_0 (cos(ωL/a) − 1) / (ω² sin(ωL/a)).   (5.9)
Therefore,
X(x) = (F_0/ω²) ( cos(ω/a x) − [(cos(ωL/a) − 1)/sin(ωL/a)] sin(ω/a x) − 1 ).
The particular solution y_p we are looking for is
y_p(x, t) = (F_0/ω²) ( cos(ω/a x) − [(cos(ωL/a) − 1)/sin(ωL/a)] sin(ω/a x) − 1 ) cos ωt.
Exercise 5.3.2: Check that y_p works.
Now we get to the point that we skipped. Suppose that sin(ωL/a) = 0. What this means is that ω is equal to one of the natural frequencies of the system, i.e. a multiple of πa/L. We notice that if ω is not equal to a multiple of the base frequency, but is very close, then the coefficient B in (5.9) seems to become very large. But let us not jump to conclusions just yet. When ω = nπa/L for n even, then cos(ωL/a) = 1 and hence we really get that B = 0. So resonance occurs only when both cos(ωL/a) = −1 and sin(ωL/a) = 0, that is, when ω = nπa/L for odd n.
We could again solve for the resonance solution if we wanted to, but it is, in the right sense, the
limit of the solutions as ω gets close to a resonance frequency. In real life, pure resonance never
occurs anyway.
The above calculation explains why a string will begin to vibrate if the identical string is
plucked close by. In the absence of friction this vibration would get louder and louder as time
goes on. On the other hand, you are unlikely to get large vibration if the forcing frequency is not
close to a resonance frequency even if you have a jet engine running close to the string. That is,
the amplitude will not keep increasing unless you tune to just the right frequency.
Similar resonance phenomena occur when you break a wine glass using human voice (yes, this is possible, but not easy†) if you happen to hit just the right frequency. Remember that a glass has a much purer sound, i.e. it is more like a vibraphone, so there are far fewer resonance frequencies to hit.
When the forcing function is more complicated, you decompose it in terms of the Fourier series
and apply the above result. You may also need to solve the above problem if the forcing function
is a sine rather than a cosine, but if you think about it, the solution is almost the same.
Example 5.3.1: Let us do the computation for specific values. Suppose F_0 = 1, ω = 1, L = 1, and a = 1. Then
y_p(x, t) = ( cos x − [(cos 1 − 1)/sin 1] sin x − 1 ) cos t.
Call B = (cos 1 − 1)/sin 1 for simplicity.
Then plug in t = 0 to get
f (x) = −y_p(x, 0) = −cos x + B sin x + 1,
and after differentiating in t we see that g(x) = −∂y_p/∂t (x, 0) = 0.
Hence to find y_c we need to solve the problem
y_tt = y_xx,
y(0, t) = 0,   y(1, t) = 0,
y(x, 0) = −cos x + B sin x + 1,
y_t(x, 0) = 0.
†Mythbusters, episode 31, Discovery Channel, originally aired May 18, 2005.
Note that the formula that we use to define y(x, 0) is not odd, hence it is not a simple matter of
plugging in to apply the D’Alembert formula directly! You must define F to be the odd, 2-periodic
extension of y(x, 0). Then our solution would look like
y(x, t) = (F(x + t) + F(x − t))/2 + ( cos x − [(cos 1 − 1)/sin 1] sin x − 1 ) cos t.   (5.10)
Figure 5.4: Plot of y(x, t) = (F(x + t) + F(x − t))/2 + ( cos x − [(cos 1 − 1)/sin 1] sin x − 1 ) cos t.
It is not hard to compute specific values for an odd extension of a function and hence (5.10) is
a wonderful solution to the problem. For example it is very easy to have a computer do it, unlike
series solutions. A plot is given in Figure 5.4.
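For instance, here is a minimal sketch in Python (assuming numpy is available) of how a computer might evaluate (5.10). The odd 2-periodic extension F takes only a few lines; B is the constant named in the example.

import numpy as np

B = (np.cos(1.0) - 1.0) / np.sin(1.0)

def F(s):
    # Odd 2-periodic extension of y(x,0) = -cos x + B sin x + 1.
    s = ((np.asarray(s) + 1.0) % 2.0) - 1.0      # reduce to [-1, 1)
    return np.sign(s) * (-np.cos(np.abs(s)) + B*np.sin(np.abs(s)) + 1.0)

def y(x, t):
    yp = (np.cos(x) - B*np.sin(x) - 1.0) * np.cos(t)
    return (F(x + t) + F(x - t)) / 2.0 + yp

x = np.linspace(0.0, 1.0, 101)
print(np.max(np.abs(y(x, 2.5))))                 # one snapshot of the string at t = 2.5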
5.3.2 Underground temperature oscillations
Let u(x, t) be the temperature at a certain location at depth x underground at time t. See Figure 5.5
on the following page.
Figure 5.5: Underground temperature.
The temperature u satisfies the heat equation u_t = k u_xx, where k is the diffusivity of the soil. We know the temperature at the surface u(0, t) from weather records. Let us assume for simplicity that
u(0, t) = T_0 + A_0 cos ωt,
where T_0 is some base temperature; then t = 0 is midsummer (we could put a negative sign above to make it midwinter). A_0 is picked properly to make this the typical variation for the year. That is, the hottest temperature is T_0 + A_0 and the coldest is T_0 − A_0. For simplicity, we will assume that T_0 = 0. ω is picked depending on the units of t, such that when t = 1 year, then ωt = 2π.
It seems reasonable that the temperature at depth x will also oscillate with the same frequency. This, in fact, will be the steady periodic solution, independent of the initial conditions. So we are looking for a solution of the form
u(x, t) = V(x) cos ωt + W(x) sin ωt
for the problem
u_t = k u_xx,   u(0, t) = A_0 cos ωt.   (5.11)
We will employ the complex exponential here to make calculations simpler. Suppose we have a complex valued function
h(x, t) = X(x) e^{iωt}.
We will look for an h such that Re h = u. To find an h whose real part satisfies (5.11), we look for an h such that
h_t = k h_xx,   h(0, t) = A_0 e^{iωt}.   (5.12)
Exercise 5.3.3: Suppose h satisfies (5.12). Use Euler’s formula for the complex exponential to
check that u = Re h satisfies (5.11).
Substitute h into (5.12):
iωX e^{iωt} = kX'' e^{iωt}.
Hence,
kX'' − iωX = 0,
or
X'' − α²X = 0,
where α = ±√(iω/k). Note that ±√i = ±(1 + i)/√2, so you could simplify to α = ±(1 + i)√(ω/(2k)). Hence the general solution is
X(x) = Ae^{−(1+i)√(ω/(2k)) x} + Be^{(1+i)√(ω/(2k)) x}.
We assume that an X(x) that solves the problem must be bounded as x → ∞ since u(x, t) should be bounded (we are not worrying about the earth core!). If you use Euler's formula to expand the complex exponentials, you will note that the second term is unbounded (if B ≠ 0), while the first term is always bounded. Hence B = 0.
Exercise 5.3.4: Use Euler's formula to show that e^{(1+i)√(ω/(2k)) x} is unbounded as x → ∞, while e^{−(1+i)√(ω/(2k)) x} is bounded as x → ∞.
Furthermore, X(0) = A_0 since h(0, t) = A_0 e^{iωt}. Thus A = A_0. This means that
h(x, t) = A_0 e^{−(1+i)√(ω/(2k)) x} e^{iωt} = A_0 e^{−(1+i)√(ω/(2k)) x + iωt} = A_0 e^{−√(ω/(2k)) x} e^{i(ωt − √(ω/(2k)) x)}.
We will need to get the real part of h, so we apply Euler's formula to get
h(x, t) = A_0 e^{−√(ω/(2k)) x} ( cos(ωt − √(ω/(2k)) x) + i sin(ωt − √(ω/(2k)) x) ).
Then finally
u(x, t) = Re h(x, t) = A_0 e^{−√(ω/(2k)) x} cos(ωt − √(ω/(2k)) x).
Yay!
Notice the phase is different at different depths. At depth x the phase is delayed by x√(ω/(2k)). For example in cgs units (centimeters, grams, seconds) we have k = 0.005 (a typical value for soil), and ω = 2π/(seconds in a year) = 2π/31,557,341 ≈ 1.99 × 10⁻⁷. Then if we compute where the phase shift is x√(ω/(2k)) = π, we find the depth in centimeters where the seasons are reversed. That is, we get the depth at which summer is the coldest and winter is the warmest. We get approximately 700 centimeters, which is approximately 23 feet below ground.
But be careful. The temperature swings decay rapidly as you dig deeper. The amplitude of the temperature swings is A_0 e^{−√(ω/(2k)) x}. This decays very quickly as x grows. Let us again take the typical parameters as above. We will also assume that our surface temperature swing is ±15° Celsius, that is, A_0 = 15. Then the maximum temperature variation at 700 centimeters is only ±0.66° Celsius.
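These numbers take one line each to check. A quick sketch in Python, using only the standard math module and the cgs values from above:

from math import pi, sqrt, exp

k = 0.005                     # soil diffusivity, cgs units
omega = 2*pi / 31_557_341     # one year in seconds
A0 = 15.0                     # +/- 15 degree Celsius surface swing

depth = pi / sqrt(omega / (2*k))              # phase shift of pi: seasons reversed
print(depth)                                  # roughly 700 centimeters
print(A0 * exp(-sqrt(omega / (2*k)) * 700))   # swing at 700 cm: about +/- 0.66 degrees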
You need not dig very deep to get an effective “refrigerator.” This is why wines are kept in a cellar; you need a consistent temperature. The temperature differential could also be used for energy. A home could be heated or cooled by taking advantage of the above fact. Even without the earth core you could heat a home in the winter and cool it in the summer. There is also the earth core, so temperature presumably gets higher the deeper you dig. We did not take that into account above.
5.3.3 Exercises
Exercise 5.3.5: Suppose that the forcing function for the vibrating string is F_0 sin ωt. Derive the particular solution y_p.
Exercise 5.3.6: Take the forced vibrating string. Suppose that L = 1, a = 1. Suppose that the forcing function is the square wave that is 1 on the interval 0 < x < 1 and −1 on the interval −1 < x < 0. Find the particular solution. Hint: you may want to use the result of Exercise 5.3.5.
Exercise 5.3.7: The units are cgs (centimeters, grams, seconds). For k = 0.005, ω = 1.991 × 10⁻⁷, A_0 = 20. Find the depth at which the temperature variation is half (±10 degrees) of what it is on the surface.
Exercise 5.3.8: Derive the solution for underground temperature oscillation without assuming that T_0 = 0.
Chapter 6
The Laplace transform
6.1 The Laplace transform
Note: 2 lectures, §10.1 in EP
6.1.1 The transform
In this chapter we will discuss the Laplace transform†. The Laplace transform turns out to be a very
efficient method to solve certain ODE problems. In particular, the transform can take a differential
equation and turn it into an algebraic equation. If the algebraic equation can be solved, applying
the inverse transform gives us our desired solution. The Laplace transform is also useful in the
analysis of certain systems such as electrical circuits, NMR spectroscopy, signal processing and
others. Finally, understanding the Laplace transform will also help with understanding the related
Fourier transform, which, however, requires more understanding of complex numbers. We will not
cover the Fourier transform.
The Laplace transform also gives a lot of insight into the nature of the equations we are dealing
with. It can be seen as converting between the time and the frequency domain. For example, take
the standard equation
mx''(t) + cx'(t) + kx(t) = f (t).
We can think of t as time and f (t) as incoming signal. The Laplace transform will convert the
equation from a differential equation in time to an algebraic (no derivatives) equation, where the
new independent variable s is the frequency.
We can think of the Laplace transform as a black box. It eats functions and spits out functions in a new variable. We write ℒ{ f (t)} = F(s). It is common to write lower case letters for functions in the time domain and upper case letters for functions in the frequency domain. We will use the same letter to denote that one function is the Laplace transform of the other; for example, F(s) is the Laplace transform of f (t). Let us define the transform.
†Just like the Laplace equation and the Laplacian, also named after Pierre-Simon, marquis de Laplace (1749–1827).
ℒ{ f (t)} = F(s) := ∫_0^∞ e^{−st} f (t) dt.
We note that we are only considering t ≥ 0 in the transform. Of course, if we think of t as time there is no problem; we are generally interested in finding out what will happen in the future (the Laplace transform is one place where it is safe to ignore the past). Let us compute the simplest transforms.
Example 6.1.1: Suppose f (t) = 1, then
ℒ{1} = ∫_0^∞ e^{−st} dt = [e^{−st}/(−s)]_{t=0}^∞ = 1/s.
Of course, the limit only exists if s > 0. So ℒ{1} is only defined for s > 0.
Example 6.1.2: Suppose f (t) = e^{−at}, then
ℒ{e^{−at}} = ∫_0^∞ e^{−st} e^{−at} dt = ∫_0^∞ e^{−(s+a)t} dt = [e^{−(s+a)t}/(−(s + a))]_{t=0}^∞ = 1/(s + a).
Of course, the limit only exists if s + a > 0. So ℒ{e^{−at}} is only defined for s + a > 0.
Example 6.1.3: Suppose f (t) = t, then using integration by parts
ℒ{t} = ∫_0^∞ e^{−st} t dt = [−te^{−st}/s]_{t=0}^∞ + (1/s) ∫_0^∞ e^{−st} dt = 0 + (1/s)[e^{−st}/(−s)]_{t=0}^∞ = 1/s².
Of course, again, the limit only exists if s > 0.
Example 6.1.4: A common function is the unit step function, which is sometimes called the Heaviside function†. This function is generally given as
u(t) = { 0 if t < 0,  1 if t ≥ 0 }.
†The function is named after Oliver Heaviside (1850–1925). Only by coincidence is the function “heavy” on “one side.”
Let us find the Laplace transform of u(t − a), where a ≥ 0 is some constant. That is, the function that is 0 for t < a and 1 for t ≥ a.
ℒ{u(t − a)} = ∫_0^∞ e^{−st} u(t − a) dt = ∫_a^∞ e^{−st} dt = [e^{−st}/(−s)]_{t=a}^∞ = e^{−as}/s,
where of course s > 0 (and a ≥ 0 as we said before).
By applying similar procedures we can compute the transforms of many elementary functions.
Many basic transforms are listed in Table 6.1.
f (t)        ℒ{ f (t)} = F(s)
C            C/s
t            1/s²
t²           2/s³
t³           6/s⁴
t^n          n!/s^{n+1}
e^{−at}      1/(s + a)
sin ωt       ω/(s² + ω²)
cos ωt       s/(s² + ω²)
sinh ωt      ω/(s² − ω²)
cosh ωt      s/(s² − ω²)
u(t − a)     e^{−as}/s

Table 6.1: Some Laplace transforms (C, ω, and a are constants).
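Computer algebra systems know these transforms, so the table rows can also be spot-checked mechanically. A sketch using Python's sympy library (an assumed dependency, not part of the text):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

# Check a few rows of Table 6.1.
for f in (1, t, t**3, sp.exp(-a*t), sp.sin(w*t), sp.cosh(w*t)):
    F = sp.laplace_transform(f, t, s, noconds=True)
    print(f, '->', sp.simplify(F))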
Exercise 6.1.1: Verify Table 6.1.
Since the transform is defined by an integral, we can use the linearity properties of the integral. For example, suppose C is a constant, then
ℒ{C f (t)} = ∫_0^∞ e^{−st} C f (t) dt = C ∫_0^∞ e^{−st} f (t) dt = Cℒ{ f (t)}.
So we can “pull” a constant out of the transform. Similarly we have linearity. Since linearity is very important we state it as a theorem.
Theorem 6.1.1 (Linearity of Laplace transform). Suppose that A, B, and C are constants, then
ℒ{A f (t) + Bg(t)} = Aℒ{ f (t)} + Bℒ{g(t)},
and in particular
ℒ{C f (t)} = Cℒ{ f (t)}.
Exercise 6.1.2: Verify the theorem. That is, show that ℒ{A f (t) + Bg(t)} = Aℒ{ f (t)} + Bℒ{g(t)}.
These rules together with Table 6.1 on the previous page already make it easy to find the Laplace transform of a whole lot of functions. It is a common mistake to think that the Laplace transform of a product is the product of the transforms. In general
ℒ{ f (t)g(t)} ≠ ℒ{ f (t)}ℒ{g(t)}.
It must also be noted that not all functions have a Laplace transform. For example, the function 1/t does not have a Laplace transform as the integral diverges. Similarly, tan t or e^{t²} do not have Laplace transforms.
6.1.2 Existence and uniqueness
Let us consider in more detail when the Laplace transform exists. First let us consider functions of exponential order. f (t) is of exponential order as t goes to infinity if
| f (t)| ≤ Me^{ct},
for some constants M and c, for sufficiently large t (say for all t > t_0 for some t_0). The simplest way to check this condition is to try and compute
lim_{t→∞} f (t)/e^{ct}.
If the limit exists and is finite (usually zero), then f (t) is of exponential order.
Exercise 6.1.3: Use L’Hopital’s rule from calculus to show that a polynomial is of exponential order. Hint: Note that a sum of two exponential order functions is also of exponential order. Then show that t^n is of exponential order for any n.
For an exponential order function we have existence and uniqueness of the Laplace transform.
Theorem 6.1.2 (Existence). Let f (t) be continuous and of exponential order for a certain constant c. Then F(s) = ℒ{ f (t)} is defined for all s > c.
You may have existence of the transform for other functions that are not of exponential order, but that will not be relevant to us. Before dealing with uniqueness, let us also note that for exponential order functions the Laplace transform decays at infinity:
lim_{s→∞} F(s) = 0.
Theorem 6.1.3 (Uniqueness). Let f (t) and g(t) be continuous and of exponential order. Suppose
that there exists a constant C, such that F(s) = G(s) for all s > C. Then f (t) = g(t) for all t ≥ 0.
Both theorems hold for piecewise continuous functions as well. Recall that piecewise continuous means that the function is continuous except perhaps at a discrete set of points where it has jump discontinuities, like the Heaviside function. Uniqueness, however, does not “see” values at the discontinuities. So you can only conclude that f (t) = g(t) outside of discontinuities. For example, the unit step function is sometimes defined using u(0) = 1/2. This new step function, however, has the exact same Laplace transform as the one we defined earlier, where u(0) = 1.
6.1.3 The inverse transform
As we said, the Laplace transform will allow us to convert a differential equation into an algebraic equation which we can solve. Once we do solve the algebraic equation in the frequency domain we will want to get back to the time domain, as that is what we are really interested in. We, therefore, need to also be able to get back. If we have a function F(s), to be able to find f (t) such that ℒ{ f (t)} = F(s), we need to first know if such a function is unique. It turns out we are in luck by Theorem 6.1.3. So we can without fear make the following definition.
If F(s) = ℒ{ f (t)} for some function f (t), we define the inverse Laplace transform as
ℒ⁻¹{F(s)} := f (t).
There is an integral formula for the inverse, but it is not as simple as the transform itself (it requires complex numbers). The best way to compute the inverse is to use Table 6.1 on page 231.
Example 6.1.5: Take F(s) = 1/(s + 1). Find the inverse Laplace transform.
We look at the table and we find
ℒ⁻¹{1/(s + 1)} = e^{−t}.
We note that because the Laplace transform is linear, the inverse Laplace transform is also linear. That is,
ℒ⁻¹{AF(s) + BG(s)} = Aℒ⁻¹{F(s)} + Bℒ⁻¹{G(s)}.
We can of course also just pull out constants. Let us demonstrate how linearity is used by the
following example.
Example 6.1.6: Take F(s) = (s² + s + 1)/(s³ + s). Find the inverse Laplace transform.
First we use the method of partial fractions to write F in a form where we can use Table 6.1 on page 231. We factor the denominator as s(s² + 1) and write
(s² + s + 1)/(s³ + s) = A/s + (Bs + C)/(s² + 1).
Hence A(s² + 1) + s(Bs + C) = s² + s + 1. Therefore, A + B = 1, C = 1, A = 1, and so B = 0. In other words,
F(s) = (s² + s + 1)/(s³ + s) = 1/s + 1/(s² + 1).
By linearity of the Laplace transform (and thus of its inverse) we get that
ℒ⁻¹{(s² + s + 1)/(s³ + s)} = ℒ⁻¹{1/s} + ℒ⁻¹{1/(s² + 1)} = 1 + sin t.
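Both the partial fraction step and the inverse transform can be automated. A sketch of this example with sympy (an assumed dependency):

import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = (s**2 + s + 1) / (s**3 + s)

print(sp.apart(F, s))                          # 1/s + 1/(s**2 + 1)
print(sp.inverse_laplace_transform(F, s, t))   # 1 + sin(t), times sympy's unit step factor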
A useful property is the so-called shifting property or the first shifting property:
ℒ{e^{−at} f (t)} = F(s + a),
where F(s) is the Laplace transform of f (t).
Exercise 6.1.4: Derive this property from the definition.
The shifting property can be used when the denominator is a more complicated quadratic that may come up in the method of partial fractions. You always want to write such quadratics as (s + a)² + b by completing the square and then using the shifting property.
Example 6.1.7: Find ℒ⁻¹{1/(s² + 4s + 8)}.
First we complete the square to make the denominator (s + 2)² + 4. Next we find
ℒ⁻¹{1/(s² + 4)} = (1/2) sin 2t.
Putting it all together with the shifting property we find
ℒ⁻¹{1/(s² + 4s + 8)} = ℒ⁻¹{1/((s + 2)² + 4)} = (1/2) e^{−2t} sin 2t.
In general, we will want to be able to apply the inverse Laplace transform to rational functions, that is, functions of the form
F(s)/G(s),
where F(s) and G(s) are polynomials. Since normally (for the functions that we are considering) the Laplace transform goes to zero as s → ∞, it is not hard to see that the degree of F(s) will always be smaller than that of G(s). Such rational functions are called proper rational functions and we will always be able to apply the method of partial fractions. Of course this means we will need to be able to factor the denominator into linear and quadratic terms, which involves finding the roots of the denominator.
6.1.4 Exercises
Exercise 6.1.5: Find the Laplace transform of 3 + t⁵ + sin πt.
Exercise 6.1.6: Find the Laplace transform of a + bt + ct² for some constants a, b, and c.
Exercise 6.1.7: Find the Laplace transform of Acos ωt + Bsin ωt.
Exercise 6.1.8: Find the Laplace transform of cos² ωt.
Exercise 6.1.9: Find the inverse Laplace transform of 4/(s² − 9).
Exercise 6.1.10: Find the inverse Laplace transform of 2s/(s² − 1).
Exercise 6.1.11: Find the inverse Laplace transform of 1/((s − 1)²(s + 1)).
6.2 Transforms of derivatives and ODEs
Note: 2 lectures, §7.2–7.3 in EP
6.2.1 Transforms of derivatives
Let us see how the Laplace transform is used for differential equations. First let us try to find the Laplace transform of a function that is a derivative. That is, suppose g(t) is a continuously differentiable function of exponential order.
ℒ{g'(t)} = ∫_0^∞ e^{−st} g'(t) dt = [e^{−st} g(t)]_{t=0}^∞ − ∫_0^∞ (−s) e^{−st} g(t) dt = −g(0) + sℒ{g(t)}.
We can keep doing this procedure for higher derivatives. The results are listed in Table 6.2. The
procedure also works for piecewise smooth functions, that is functions which are piecewise con-
tinuous with a piecewise continuous derivative. The fact that the function is of exponential order
is used to show that the limits appearing above exist. We will not worry much about this fact.
f (t)        ℒ{ f (t)} = F(s)
g'(t)        sG(s) − g(0)
g''(t)       s²G(s) − sg(0) − g'(0)
g'''(t)      s³G(s) − s²g(0) − sg'(0) − g''(0)

Table 6.2: Laplace transforms of derivatives (G(s) = ℒ{g(t)} as usual).
Exercise 6.2.1: Verify Table 6.2.
6.2.2 Solving ODEs with the Laplace transform
If you notice, the Laplace transform turns differentiation essentially into multiplication by s. Let
us see how to apply this to differential equations.
Example 6.2.1: Take the equation
x''(t) + x(t) = cos 2t,   x(0) = 0,   x'(0) = 1.
We will take the Laplace transform of both sides. By X(s) we will, as usual, denote the Laplace transform of x(t).
ℒ{x''(t) + x(t)} = ℒ{cos 2t},
s²X(s) − sx(0) − x'(0) + X(s) = s/(s² + 4).
We can plug in the initial conditions now (this will make computations more streamlined) to obtain
s²X(s) − 1 + X(s) = s/(s² + 4).
We now solve for X(s),
X(s) = s/((s² + 1)(s² + 4)) + 1/(s² + 1).
We use partial fractions (exercise) to write
X(s) = (1/3) s/(s² + 1) − (1/3) s/(s² + 4) + 1/(s² + 1).
Now take the inverse Laplace transform to obtain
x(t) = (1/3) cos t − (1/3) cos 2t + sin t.
The procedure is as follows. You take an ordinary differential equation in the time variable t. You apply the Laplace transform to transform the equation into an algebraic (non differential) equation in the frequency domain. All the x(t), x'(t), x''(t), and so on, will be converted to X(s), sX(s) − x(0), s²X(s) − sx(0) − x'(0), and so on. If the differential equation we started with was a constant coefficient linear equation, it is generally pretty easy to solve for X(s), and we will obtain some expression for X(s). Then taking the inverse transform, if possible, we find x(t).
It should be noted that since not every function has a Laplace transform, not every equation can
be solved in this manner.
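The whole procedure is mechanical enough that a computer algebra system can carry it out. Here is a sketch of Example 6.2.1 with sympy (an assumed dependency); we enter the transformed equation by hand and let the machine do the algebra and the inverse transform.

import sympy as sp

s, t = sp.symbols('s t', positive=True)
X = sp.symbols('X')

# Transformed equation with x(0) = 0 and x'(0) = 1 already plugged in:
#   s^2 X - 1 + X = s/(s^2 + 4)
Xs = sp.solve(sp.Eq(s**2*X - 1 + X, s/(s**2 + 4)), X)[0]
x = sp.inverse_laplace_transform(Xs, s, t)
print(sp.expand(x))   # cos(t)/3 - cos(2t)/3 + sin(t), times sympy's unit step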
6.2.3 Using the Heaviside function
Before we move on to more general functions than those we could solve before, we want to consider the Heaviside function. See Figure 6.1 on the following page for the graph.
u(t) = { 0 if t < 0,  1 if t ≥ 0 }.
This function is useful for putting together functions, or cutting functions off. Most commonly it is used as u(t − a) for some constant a. This just shifts the graph to the right by a. That is, it is a function which is zero when t < a and 1 when t ≥ a. Suppose for example that f (t) is a “signal” and you started receiving the signal sin t at time t = π. The function f (t) should then be defined as
f (t) = { 0 if t < π,  sin t if t ≥ π }.
Figure 6.1: Plot of the Heaviside (unit step) function u(t).
Using the Heaviside function, f (t) can be written as
f (t) = u(t − π) sin t.
Similarly the step function which is 1 on the interval [1, 2) and zero everywhere else can be written
as
u(t − 1) − u(t − 2).
The Heaviside function is useful for defining functions piecewise. If you want the function t when t is in [0, 1], the function −t + 2 when t is in [1, 2], and zero otherwise, you can use the expression
t (u(t) − u(t − 1)) + (−t + 2) (u(t − 1) − u(t − 2)).
Hence it is useful to know how the Heaviside function interacts with the Laplace transform. We have already seen that
ℒ{u(t − a)} = e^{−as}/s.
This can be generalized into a shifting property or second shifting property:
ℒ{ f (t − a) u(t − a)} = e^{−as} ℒ{ f (t)}.   (6.1)
Example 6.2.2: Suppose that the forcing function is not periodic. For example, suppose that we had a mass-spring system
x''(t) + x(t) = f (t),   x(0) = 0,   x'(0) = 0,
where f (t) = 1 if 1 ≤ t < 3 and zero otherwise. We could imagine a mass and spring system where
a rocket was fired for 2 seconds starting at t = 1. Or perhaps an RLC circuit, where the voltage was
being raised at a constant rate for 2 seconds starting at t = 1 and then held steady again starting at
t = 3.
We can write f (t) = u(t − 1) − u(t − 3). We transform the equation and we plug in the initial conditions as before to obtain
s²X(s) + X(s) = e^{−s}/s − e^{−3s}/s.
We solve for X(s) to obtain
X(s) = e^{−s}/(s(s² + 1)) − e^{−3s}/(s(s² + 1)).
We leave it as an exercise to the reader to show that
ℒ⁻¹{1/(s(s² + 1))} = 1 − cos t.
In other words ℒ{1 − cos t} = 1/(s(s² + 1)). So using (6.1) we find
ℒ⁻¹{e^{−s}/(s(s² + 1))} = ℒ⁻¹{e^{−s} ℒ{1 − cos t}} = (1 − cos(t − 1)) u(t − 1).
Similarly
ℒ⁻¹{e^{−3s}/(s(s² + 1))} = ℒ⁻¹{e^{−3s} ℒ{1 − cos t}} = (1 − cos(t − 3)) u(t − 3).
Hence, the solution is
x(t) = (1 − cos(t − 1)) u(t − 1) − (1 − cos(t − 3)) u(t − 3).
The plot of this solution is given in Figure 6.2 on the next page.
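The solution is also simple to evaluate directly. A small sketch in Python with numpy (an assumed dependency):

import numpy as np

def x(t):
    # x(t) = (1 - cos(t-1)) u(t-1) - (1 - cos(t-3)) u(t-3)
    t = np.asarray(t, dtype=float)
    first = np.where(t >= 1.0, 1.0 - np.cos(t - 1.0), 0.0)
    second = np.where(t >= 3.0, 1.0 - np.cos(t - 3.0), 0.0)
    return first - second

t = np.linspace(0.0, 20.0, 2001)
print(x(t).min(), x(t).max())   # compare with Figure 6.2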
6.2.4 Transforms of integrals
A feature of the Laplace transform is that it is also able to easily deal with integral equations. That is, equations in which integrals rather than derivatives of functions appear. The basic property, which can be proven by applying the definition and again doing integration by parts, is the following:
ℒ{∫_0^t f (τ) dτ} = (1/s) F(s).
It is sometimes useful for computing the inverse transform to write
∫_0^t f (τ) dτ = ℒ⁻¹{(1/s) F(s)}.
Figure 6.2: Plot of x(t).
Example 6.2.3: To compute the inverse transform of 1/(s(s² + 1)) we could proceed by applying this integration rule:
ℒ⁻¹{(1/s) · 1/(s² + 1)} = ∫_0^t ℒ⁻¹{1/(s² + 1)} dτ = ∫_0^t sin τ dτ = 1 − cos t.
If an equation contains an integral of the unknown function, the equation is called an integral equation. For example, take the equation
t² = ∫_0^t e^τ x(τ) dτ.
If we apply the Laplace transform we obtain (where X(s) = ℒ{x(t)})
2/s³ = (1/s) ℒ{e^t x(t)} = (1/s) X(s − 1).
Or
X(s − 1) = 2/s²,   or   X(s) = 2/(s + 1)².
We use the shifting property to obtain
x(t) = 2e^{−t} t.
More complicated integral equations can also be solved using the convolution that we will learn
next.
6.2.5 Exercises
Exercise 6.2.2: Using the Heaviside function write down the piecewise function that is 0 for t < 0, t² for t in [0, 1], and t for t > 1.
Exercise 6.2.3: Using the Laplace transform solve
mx'' + cx' + kx = 0,   x(0) = a,   x'(0) = b,
where m > 0, c > 0, k > 0, and c² − 4km > 0 (system is overdamped).
Exercise 6.2.4: Using the Laplace transform solve
mx'' + cx' + kx = 0,   x(0) = a,   x'(0) = b,
where m > 0, c > 0, k > 0, and c² − 4km < 0 (system is underdamped).
Exercise 6.2.5: Using the Laplace transform solve
mx'' + cx' + kx = 0,   x(0) = a,   x'(0) = b,
where m > 0, c > 0, k > 0, and c² = 4km (system is critically damped).
Exercise 6.2.6: Solve x'' + x = u(t − 1) for initial conditions x(0) = 0 and x'(0) = 0.
Exercise 6.2.7: Show the differentiation of the transform property. Suppose ℒ{ f (t)} = F(s), then show
ℒ{−t f (t)} = F'(s).
Hint: differentiate under the integral sign.
6.3 Convolution
Note: 1 or 1.5 lectures, §7.2 in EP
6.3.1 The convolution
We have said that the Laplace transform of a product is not the product of the transforms. All hope is not lost however. There exists a very important type of product that does work. Take two functions f (t) and g(t) defined for t ≥ 0. Define the convolution† of f (t) and g(t) as
( f ∗ g)(t) := ∫_0^t f (τ) g(t − τ) dτ.   (6.2)
So the convolution of two functions of t is another function of t.
Example 6.3.1: Take f (t) = e^t and g(t) = t for t ≥ 0. Then
( f ∗ g)(t) = ∫_0^t e^τ (t − τ) dτ = e^t − t − 1,
where we of course did one integration by parts.
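Convolutions like this one are ordinary definite integrals, so a computer algebra system can evaluate them directly. A sketch with sympy (an assumed dependency):

import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
# f(tau) g(t - tau) with f(t) = e^t and g(t) = t
conv = sp.integrate(sp.exp(tau) * (t - tau), (tau, 0, t))
print(sp.simplify(conv))   # exp(t) - t - 1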
Example 6.3.2: Take f (t) = sin ω_0 t and g(t) = cos ω_0 t for t ≥ 0. Then
( f ∗ g)(t) = ∫_0^t (sin ω_0 τ)(cos ω_0 (t − τ)) dτ.
Now we use the identity
cos θ sin ψ = (1/2)(sin(θ + ψ) − sin(θ − ψ)).
Hence,
( f ∗ g)(t) = ∫_0^t (1/2)(sin(ω_0 t) − sin(ω_0 t − 2ω_0 τ)) dτ
            = [(1/2) τ sin(ω_0 t) + (1/(4ω_0)) cos(2ω_0 τ − ω_0 t)]_{τ=0}^t
            = (1/2) t sin(ω_0 t).
Of course the formula only holds for t ≥ 0. We did assume that f and g are zero (or just not defined) for negative t.
†For those who have seen convolution defined before, you may have seen it defined as ( f ∗ g)(t) = ∫_{−∞}^∞ f (τ) g(t − τ) dτ. This definition agrees with (6.2) if you define f (t) and g(t) to be zero for t < 0. When discussing the Laplace transform the definition we gave is sufficient. Convolution does occur in many other applications, however, where you may have to use the more general definition with infinities.
The convolution has many properties that make it behave like a product. Let c be a constant
and f , g, and h be functions then
f ∗ g = g ∗ f ,
(c f ) ∗ g = f ∗ (cg) = c( f ∗ g),
( f ∗ g) ∗ h = f ∗ (g ∗ h).
The most interesting property for us, and the main result of this section, is the following theorem.
Theorem 6.3.1. Let f (t) and g(t) be of exponential order, then
ℒ{( f ∗ g)(t)} = ℒ{∫_0^t f (τ) g(t − τ) dτ} = ℒ{ f (t)} ℒ{g(t)}.
In other words, the Laplace transform of a convolution is the product of the Laplace transforms.
The simplest way to use this result is in reverse.
Example 6.3.3: Suppose we have the function of s defined by
1/((s + 1)s²) = (1/(s + 1)) · (1/s²).
We recognize the two entries of Table 6.1. That is,
ℒ⁻¹{1/(s + 1)} = e^{−t}   and   ℒ⁻¹{1/s²} = t.
Therefore,
ℒ⁻¹{(1/(s + 1)) · (1/s²)} = ∫_0^t τ e^{−(t−τ)} dτ = e^{−t} + t − 1,
where the calculation of the integral of course involved an integration by parts.
6.3.2 Solving ODEs
The next example will demonstrate the full power of the convolution and Laplace transform. We
will be able to give a solution to the forced oscillation problem for any forcing function as a definite
integral.
Example 6.3.4: Find the solution to
x'' + ω_0² x = f (t),   x(0) = 0,   x'(0) = 0,
for an arbitrary function f (t).
We first apply the Laplace transform to the equation. Denote the transform of x(t) by X(s) and the transform of f (t) by F(s) as usual.
s²X(s) + ω_0²X(s) = F(s),
or in other words
X(s) = F(s) · 1/(s² + ω_0²).
We know
ℒ⁻¹{1/(s² + ω_0²)} = (sin ω_0 t)/ω_0.
Therefore,
x(t) = ∫_0^t f (τ) (sin ω_0 (t − τ))/ω_0 dτ,
or if we reverse the order,
x(t) = ∫_0^t (sin ω_0 τ)/ω_0 · f (t − τ) dτ.
Let us notice one more thing with this example. We can now also see how the Laplace transform handles resonance. Suppose that f (t) = cos ω_0 t. Then
x(t) = ∫_0^t (sin ω_0 τ)/ω_0 · (cos ω_0 (t − τ)) dτ = (1/ω_0) ∫_0^t (cos ω_0 τ)(sin ω_0 (t − τ)) dτ.
We have already computed the convolution of sine and cosine in Example 6.3.2. Hence
x(t) = (1/ω_0) ((1/2) t sin ω_0 t) = (1/(2ω_0)) t sin ω_0 t.
Note the t in front of the sine. This solution will, therefore, grow without bound as t gets large,
meaning we get resonance.
Using convolution you can also find a solution as a definite integral for arbitrary forcing func-
tion f (t) for any constant coefficient equation. A definite integral is usually enough for most
practical purposes. It is usually not hard to numerically evaluate a definite integral.
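For instance, here is a sketch of evaluating the convolution solution from Example 6.3.4 numerically in Python with scipy (an assumed dependency); the natural frequency and the forcing are sample choices.

import numpy as np
from scipy.integrate import quad

w0 = 2.0                        # sample natural frequency
f = lambda tau: np.exp(-tau)    # sample forcing function

def x(t):
    # x(t) = integral from 0 to t of f(tau) sin(w0 (t - tau)) / w0 dtau
    value, _ = quad(lambda tau: f(tau) * np.sin(w0 * (t - tau)) / w0, 0.0, t)
    return value

print(x(1.0))                   # displacement at t = 1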
6.3.3 Volterra integral equation
One of the most common integral equations is the Volterra integral equation
§
:
x(t) = f (t) +

t
0
g(t − τ)x(τ) dτ,
§
Named for the Italian mathematician Vito Volterra (1860 – 1940).
where f (t) and g(t) are known functions and x(t) is an unknown. To solve this equation we apply the Laplace transform to get
X(s) = F(s) + G(s)X(s),
where X(s), F(s), and G(s) are the Laplace transforms of x(t), f (t), and g(t) respectively. We find
X(s) = F(s)/(1 − G(s)).
If we can now find the inverse Laplace transform, we obtain the result.
Example 6.3.5: Solve
x(t) = e^{−t} + ∫_0^t sinh(t − τ) x(τ) dτ.
We apply the Laplace transform to obtain
X(s) = 1/(s + 1) + (1/(s² − 1)) X(s),
or
X(s) = [1/(s + 1)] / [1 − 1/(s² − 1)] = (s − 1)/(s² − 2) = s/(s² − 2) − 1/(s² − 2).
It is not hard to apply Table 6.1 on page 231 to find
x(t) = cosh √2 t − (1/√2) sinh √2 t.
6.3.4 Exercises
Exercise 6.3.1: Let f (t) = t² for t ≥ 0, and g(t) = u(t − 1). Compute f ∗ g.
Exercise 6.3.2: Let f (t) = t for t ≥ 0, and g(t) = sin t for t ≥ 0. Compute f ∗ g.
Exercise 6.3.3: Find the solution to
mx'' + cx' + kx = f (t),   x(0) = 0,   x'(0) = 0,
for an arbitrary function f (t), where m > 0, c > 0, k > 0, and c² − 4km > 0 (system is overdamped). Write the solution as a definite integral.
Exercise 6.3.4: Find the solution to
mx'' + cx' + kx = f (t),   x(0) = 0,   x'(0) = 0,
for an arbitrary function f (t), where m > 0, c > 0, k > 0, and c² − 4km < 0 (system is underdamped). Write the solution as a definite integral.
Exercise 6.3.5: Find the solution to
mx'' + cx' + kx = f (t),   x(0) = 0,   x'(0) = 0,
for an arbitrary function f (t), where m > 0, c > 0, k > 0, and c² = 4km (system is critically damped). Write the solution as a definite integral.
Exercise 6.3.6: Solve
x(t) = e^{−t} + ∫_0^t cos(t − τ) x(τ) dτ.
Exercise 6.3.7: Solve
x(t) = cos t + ∫_0^t cos(t − τ) x(τ) dτ.
Further Reading
[BM] Paul W. Berg and James L. McGregor, Elementary Partial Differential Equations, Holden-
Day, San Francisco, CA, 1966.
[EP] C.H. Edwards and D.E. Penney, Differential Equations and Boundary Value Problems:
Computing and Modelling, 4th edition, Prentice Hall, 2008.
[F] Stanley J. Farlow, An Introduction to Differential Equations and Their Applications,
McGraw-Hill, Inc., Princeton, NJ, 1994.
[I] E.L. Ince, Ordinary Differential Equations, Dover Publications, Inc., New York, NY, 1956.
Index
acceleration, 16
addition of matrices, 87
algebraic multiplicity, 119
amplitude, 65
angular frequency, 65
antiderivative, 14
antidifferentiate, 14
associated homogeneous equation, 70
atan2, 66
augmented matrix, 91
autonomous equation, 36
autonomous system, 85
Bernoulli equation, 33
boundary conditions for a PDE, 181
boundary value problem, 143
catenary, 11
Cauchy-Euler equation, 50
center, 107
cgs units, 227, 228
characteristic equation, 52
Chebychev’s equation of order 1, 50
cofactor, 90
cofactor expansion, 90
column vector, 87
commute, 89
complementary solution, 70
complete eigenvalue, 119
complex conjugate, 102
complex number, 53
complex roots, 54
constant coefficient, 51, 96
convolution, 242
cosine series, 170
critical point, 36
critically damped, 67
d’Alembert solution to the wave equation, 198
damped, 66
damped motion, 62
defect, 120
defective eigenvalue, 120
deficient matrix, 120
dependent variable, 7
determinant, 89
diagonal matrix, 111
matrix exponential of, 125
diagonalization, 126
differential equation, 7
direction field, 85
Dirichlet boundary conditions, 171, 211
Dirichlet problem, 205
displacement vector, 111
distance, 16
dot product, 88, 151
dynamic damping, 118
eigenfunction, 144, 212
eigenfunction decomposition, 211, 216
eigenvalue, 99, 212
eigenvalue of a boundary value problem, 144
eigenvector, 99
eigenvector decomposition, 133, 140
ellipses (vector field), 107
elliptic PDE, 181
endpoint problem, 143
equilibrium solution, 36
Euler’s equation, 50
Euler’s equations, 56
Euler’s formula, 53
Euler’s method, 41
even function, 155, 168
even periodic extension, 168
existence and uniqueness, 20, 48, 57
exponential growth model, 9
exponential of a matrix, 124
exponential order, 232
extend periodically, 151
first order differential equation, 7
first order linear equation, 27
first order linear system of ODEs, 95
first order method, 42
first shifting property, 234
forced motion, 62
systems, 116
Fourier series, 153
fourth order method, 43
Fredholm alternative
simple case, 148
Sturm-Liouville problems, 215
free motion, 62
free variable, 93
fundamental matrix, 96
fundamental matrix solution, 96, 125
general solution, 10
generalized eigenvectors, 120, 122
Genius software, 5
geometric multiplicity, 119
Gibbs phenomenon, 158
half period, 160
harmonic function, 204
harvesting, 38
heat equation, 181
Heaviside function, 230
Hermite’s equation of order 2, 50
homogeneous equation, 34
homogeneous linear equation, 47
homogeneous side conditions, 182
homogeneous system, 96
Hooke’s law, 62, 110
hyperbolic PDE, 181
identity matrix, 88
imaginary part, 54
implicit solution, 24
inconsistent system, 93
indefinite integral, 14
independent variable, 7
initial condition, 10
initial conditions for a PDE, 181
inner product, 88
inner product of functions, 153, 215
integral equation, 240, 244
integrate, 14
integrating factor, 27
integrating factor method, 27
systems, 131
inverse Laplace transform, 233
invertible matrix, 89
IODE
Lab I, 18
Lab II, 41
Project I, 18
Project II, 41
Project III, 76
Project IV, 160
Project V, 160
IODE software, 5
la vie, 72
Laplace equation, 181, 204
Laplace transform, 229
Laplacian, 204
leading entry, 93
Leibniz notation, 15, 22
linear equation, 27, 47
linear first order system, 85
linear operator, 70
linear PDE, 181
linearity of Laplace transform, 231
linearly dependent, 57
linearly independent, 49, 57
logistic equation, 37
with harvesting, 38
mass matrix, 111
mathematical model, 9
mathematical solution, 9
matrix, 87
matrix exponential, 124
matrix inverse, 89
matrix valued function, 95
method of partial fractions, 233
Mixed boundary conditions, 211
mks units, 65, 175
multiplication of complex numbers, 53
multiplicity, 60
multiplicity of an eigenvalue, 119
natural (angular) frequency, 65
natural frequency, 76, 113
natural mode of oscillation, 113
Neumann boundary conditions, 171, 211
Newton’s law of cooling, 31, 36
Newton’s second law, 62, 63, 84, 110
nilpotent, 126
normal mode of oscillation, 113
odd function, 155, 168
odd periodic extension, 168
ODE, 8
one-dimensional heat equation, 181
one-dimensional wave equation, 191
ordinary differential equation, 8
orthogonal
functions, 147, 153
vectors, 151
with respect to a weight, 214
orthogonality, 147
overdamped, 67
parabolic PDE, 181
parallelogram, 90
partial differential equation, 8, 181
particular solution, 10, 70
PDE, 8, 181
period, 65
periodic, 151
phase diagram, 37
phase portrait, 37, 86
phase shift, 65
Picard’s theorem, 20
piecewise continuous, 163
piecewise smooth, 163
practical resonance, 80, 180
product of matrices, 88
projection, 153
proper rational function, 234
pure resonance, 78, 178
quadratic formula, 52
real part, 54
real world problem, 9
reduced row echelon form, 93
reduction of order method, 50
regular Sturm-Liouville problem, 213
repeated roots, 59
resonance, 78, 117, 178, 244
RLC circuit, 62
row vector, 87
saddle point, 106
sawtooth, 154
scalar, 87
scalar multiplication, 87
second order differential equation, 11
second order linear differential equation, 47
second order method, 42
second shifting property, 238
separable, 22
separation of variables, 182
shifting property, 234, 238
side conditions for a PDE, 181
simple harmonic motion, 65
sine series, 170
singular matrix, 89
singular solution, 24
sink, 105
slope field, 18
solution, 7
solution curve, 86
source, 105
spiral sink, 108
spiral source, 107
square wave, 81, 155
stable critical point, 36
stable node, 105
steady periodic solution, 80, 175
steady state temperature, 189, 204
stiffness matrix, 111
Sturm-Liouville problem, 212
superposition, 47, 57, 96, 182
symmetric matrix, 147, 151
system of differential equations, 83
tedious, 72, 73, 79, 136
timbre, 220
trajectory, 86
transient solution, 80
transpose, 88
trigonometric series, 153
undamped, 64
undamped motion, 62
systems, 110
underdamped, 68
undetermined coefficients, 71
for systems, 116
second order systems, 139
systems, 136
unforced motion, 62
unit step function, 230
unstable critical point, 36
unstable node, 105
variation of parameters, 73
systems, 138
vector, 87
vector field, 85
vector valued function, 95
velocity, 16
Volterra integral equation, 244
wave equation, 181, 198
weight function, 214

Therefore. Let us plug these in and see what happens.2. Sometimes we will work solve Mathematical Mathematical with simple real world examples so that we have model solution some intuition and motivation about what we are doing. One of the most basic differential equations is the standard exponential growth model. You have to interpret the results. Let us suppose that there is enough food and enough space. There is still something left to do. Let us look at an example of this process. 100 = P(0) = Cek0 = C.2.069. That is. Let us try. So we know that P(t) = 100 e(ln 2)t/10 ≈ 100 e0.069t . Let P denote the population of some bacteria on a petri dish. where C is a constant. you translate your real world situation into a set of differential equations. We claim that a solution is given by P(t) = Cekt .1: Suppose there are 100 bacteria at time 0 and 200 bacteria at time 10s. How many bacteria will there be in 1 minute from time 0 (in 60 seconds)? First we have to solve the equation. so what now? We do not know C and we do not know k. Let t denote time (say in seconds). In this course we will mostly focus on the mathematical analysis. dP = Ckekt = kP. Example 0. Then you apply mathematics to get some sort of mathematical solution. a large population growth quicker. .0. 2 = e10k or ln 2 10 = k ≈ 0. You have to figure out what the mathematical solution says about the real world problem you started with. Then the rate of growth of bacteria will be proportional to population. OK. Hence our model will be dP = kP dt for some positive constant k > 0. I. dt And it really is a solution.e. 200 = P(10) = 100 ek10 . INTRODUCTION TO DIFFERENTIAL EQUATIONS 9 world problem that you want to understand. You make some simplifying assumptions and create a mathematical model. Well we know something. Learning how to formulate the mathematical Real world problem model and how to interpret the results is essentially what your physics and engineering classes abstract interpret do. We know that P(0) = 100 and we also know that P(10) = 200.

not any real number. t = 60. 0 10 20 30 40 50 60 6000 6000 5000 5000 4000 4000 3000 3000 We will call P(t) = Cet the general solution. The general solution for this equation is y(x) = Cekx . dy = −ky. Also note that P in real life is a discrete quantity. Generally when we say particular solution onds. First such equation is. They are also simple to check. What does that mean? Suppose k = 1 for simplicity. and you will want to solve the equation for different initial conditions. Normally.35. Then the solution turns out to be (exercise) P(t) = 1000 et . But if our assumptions are reasonable. We have already seen that this is a solution above with different variable names. The general solution for this equation is y(x) = Ce−kx . the population is P(60) = 6400. . then there will be about 6400 bacteria. P(61) ≈ 6859. Here y is the dependent and x the independent variable. we just mean some solution. and cosines. There is no need to wonder if you have remembered the solution correctly. dx for some constant k > 0. OK. sines.10 INTRODUCTION At one minute. These solutions are reasonably easy to guess by recalling properties of exponentials. 1000 1000 0 0 0 10 20 30 40 50 60 2000 2000 Let us get to what we will call the 4 fundamental equations. but our model has no problem saying that for example at 61 seconds. let us talk about the interpretation of the results. the k in P = kP will be known. Next. which is something that you should always do. Then you will need an initial condition to find out what C is to find the particular solution we are looking Figure 2: Bacteria growth in the first 60 secfor. See Figure 2. These appear very often and it is useful to just memorize what their solutions are. dx for some constant k > 0. Does this mean that there must be exactly 6400 bacteria on the plate at 60s? No! We have made assumptions that might not be true. dy = ky. as every solution of the equation can be written in this form for some constant C. So we want to solve dP = P subject to P(0) = 1000 (the inidt tial condition).

0. Contrary to popular belief this is not a parabola.2.2. 11 Note that because we have a second order differential equation we have two constants in our general solution. dx2 for some constant k > 0. This formula is actually inscribed inside the arch: y = −127. An interesting note about cosh: The graph of cosh is the exact shape a hanging chain will make and it is called a catenary.7 ft) + 757. For those that do not know. take the second order differential equation d2 y = −k2 y.7 ft · cosh(x/127. The general solution for this equation is y(x) = C1 cos(kx) + C2 sin(kx). Next. If you invert the graph of cosh it is also the ideal arch for supporting its own weight. Exercise 0. dx Exercise 0.2. 2 e x − e−x .2. the gateway arch in Saint Louis is an inverted graph of cosh (if it were just a parabola it might fall down). For example. The general solution for this equation is y(x) = C1 ekx + C2 e−kx . INTRODUCTION TO DIFFERENTIAL EQUATIONS Exercise 0.2: Check that the y given is really a solution to the equation. or y(x) = D1 cosh(kx) + D2 sinh(kx). cosh and sinh are defined by cosh x = e x + e−x . They have some nice familiar d properties such as cosh 0 = 1.3: Check that both forms of the y given are really solutions to the equation. sinh 0 = 0. sinh x = 2 These functions are sometimes easier to work with than exponentials. take the second order differential equation d2 y = k2 y. And finally. dx2 for some constant k > 0. and dx cosh x = sinh x (no that is not a typo) and d sinh x = cosh x. .1: Check that the y given is really a solution to the equation.7 ft.

Find C to solve the initial condition x(0) = 100.10: Using properties of derivatives of functions that you know try to find a solution to (x )2 + x2 = 4.2.6: Is y = sin t a solution to dy 2 dt = 1 − y2 ? Justify.7: Let y + 2y − 8y = 0. Exercise 0.2. Now try a solution y = erx . Find C1 and C2 to solve the initial condition x(0) = 10.2. Exercise 0.2.2.9: Verify that x = C1 e−t + C2 e2t is a solution to x − x − 2x = 0. find all such r.4 Exercises Exercise 0. Exercise 0. Is this solution for some r? If so.4: Show that x = e4t is a solution to x − 12x + 48x − 64x = 0.2. Exercise 0.2. Exercise 0.5: Show that x = et is not a solution to x − 12x + 48x − 64x = 0. .8: Verify that x = Ce−2t is a solution to x = −2x.12 INTRODUCTION 0. Exercise 0.2.

Chapter 1 First order ODEs 1. f (x) dx + C. let us assume that f is a function of x alone. dx or just y = f (x. there is no simple formula or procedure one can follow to find solutions. In the next few lectures we will look at special cases where solutions are not difficult to obtain. y). We could just integrate (antidifferentiate) both sides here with respect to x.2 in EP A first order ODE is an equation of the form dy = f (x. y). In general.1) find some antiderivative of f (x) and then you add an arbitrary constant to get the general solution. y (x) dx = that is y(x) = f (x) dx + C. that is. Calculus textbooks muddy the waters by talking about integral as primarily the so-called indefinite integral. The 13 . §1. (1. the equation is y = f (x). Now is a good time to discuss a point about calculus notation and terminology.1 Integrals as solutions Note: 1 lecture.1) This y(x) is actually the general solution. So to solve (1. In this section.

We see that the general solution must be y = x3 + C. Normally. And it is! 0 Do note that the definite integral and indefinite integral (antidifferentiation) are completely different beasts. this is a solution. we also have an initial condition such as y(x0 ) = y0 for some two numbers x0 and y0 (x0 is usually 0. by fundamental theorem of calculus you can always write f (x) dx + C as x f (t) dt + C. Example 1.2: Solve y = e−x . By the preceeding discussion. the terminology integrate when you may really mean antidifferentiate.2) is a formula you can plug into the calculator or a computer and it will be happy to calculate specific values for you. It is not possible (in closed form). Tell them to find the closed form solution. This particular integral is in fact very important in statistics. The only reason for the indefinite integral notation is that you can always write an antiderivative as a (definite) integral. it only happens to also compute antiderivatives. That is.1.14 CHAPTER 1. For sake of consistency. Example 1. see the following example).1. You will easily be able to plot the solution and work with it just like with any other function. the solution must be y(x) = 0 e−s ds + 1.1: Find the general solution of y = 3x2 . There is absolutely nothing wrong with writing the solution as a definite integral. Integration is defined as the area under the graph. Integration is just one way to compute the antiderivative (and it is a way that always works. Ha ha ha (bad math joke). Let us check: y = 3x2 . (1. FIRST ORDER ODES indefinite integral is really the antiderivative (in fact the whole one parameter family of antiderivatives). There really exists only one integral and that is the definite integral. x 2 y(0) = 1. x0 Hence. y(x0 ) = y0 . The definite integral always evaluates to a number. we will keep using the indefinite integral notation when we want an antiderivative.2) Let us check! y = f (x) (by fundamental theorem of calculus) and by Jupiter. It is not so crucial to find a closed form for the antiderivative. Then the solution is x y(x) = x0 f (s) ds + y0 . Is x0 it the one satisfying the initial condition? Well. We can write the solution as a definite integral in a nice way. y(x0 ) = x f (x) dx + y0 = y0 . 2 Here is a good way to make fun of your friends taking second semester calculus. but not always). and you should always think of the definite integral. Suppose our problem is y = f (x). (1. Therefore. . We have gotten precisely our equation back.

Let us write it in Leibniz notation dy = f (y) dx Now use the inverse function theorem to switch roles of x and y. We write dx 1 = . Example 1.1. dy ky Now integrate and get x(y) = x = we solve for y kekC ekx = |y|. y = Cekx . as this sort of hand-waving calculation can lead to trouble.4: Find the general solution of y = y2 . k 1 dy + C f (y) . Example 1. First note that y = 0 is a solution.3: We guessed y = ky has solution Cekx . Now we can just integrate x(y) = Next. dx 1 = dy f (y) 15 What we are doing seems like algebra with dx and dy. First note that y = 0 is a solution. we try to solve for y. It is tempting to just do algebra with dx and dy as if they were numbers.1. Be careful. If we replace kekC with an arbitrary constant C we can get rid of the absolute value bars. INTEGRALS AS SOLUTIONS We can also solve equations of the form y = f (y) using this method. and we get the same general solution as we guessed before. In this way we also incorporate the solution y = 0. And in this case it does work. Henceforth. especially when more than one independent variable is involved. We can actually do it now.1.1. however. Write 1 ln|ky| + C . We can now assume that y 1 dx = 2 dy y 0. assume y 0.

1. . then the solution blows up as we approach x = 1.1: Solve for v and then solve for x. y So the general solution is y= 1 or y = 0. Well.1. x(10) = 2e10/2 − 2 ≈ 294 meters. Now we just plug in to get that at 2 seconds (and 10). Classical problems leading to differential equations solvable by integration are problems dealing with velocity. the car has travelled x(2) = 2e2/2 − 2 ≈ 3. You have surely seen these problems before in your calculus class. Exercise 1. If x is the distance travelled. If for example C = 1. and x is the acceleration. where t is time in seconds. At time t = 0 the car is at the 1 meter mark and is travelling at 10 m/s. x (0) = 10. then x is the velocity. It is hard to tell from just looking at the equation itself how the solution is going to behave sometimes. v = t2 . FIRST ORDER ODES −1 + C. C−x CHAPTER 1. Where is the car at time t = 10. we can then integrate and find x.44 meters. C−x Note the singularities of the solution. Note that we still need to figure out C. So C = −2 and hence x(t) = et/2 − 2. what if we call x = v and then we have the problem v(0) = 10. Well this is actually a second order problem. Let x denote the distance the car travelled. Once we solve for v. But we know that when t = 0 then x = 0. but the solution is only defined on some interval (−∞. that is: x(0) = 0 so 0 = x(0) = 2e0/2 + C = 2 + C.5: Suppose a car drives at a speed et/2 meters per second. C) or (C. The equation is x = et/2 . The equation with initial conditions is x = t2 . How far did the car get in 2 seconds? How far in 10 seconds.6: Suppose that the car accelerates at the rate t2 m/s2 . x(0) = 1. acceleration and distance. The equation y = y2 is very nice and defined everywhere.16 Now integrate to get x= Solve for y = 1 . Example 1. We can just integrate this equation to get that x(t) = 2et/2 + C. Example 1. ∞).1.

= sin 5x for y(0) = 2.7: Solve dy dx = 1 y2 +1 for y(0) = 0.1 Exercises Exercise 1.1. = 1 x2 −1 for y(0) = 0.5: Solve y = y3 for y(0) = 1.1. Exercise 1. Exercise 1.4: Solve dy dx dy dx dy dx = x2 + x for y(1) = 3.8: Solve y = sin x for y(0) = 0. Exercise 1. Exercise 1.2: Solve Exercise 1.1. .1.1.6: Solve y = (y − 1)(y + 1) for y(0) = 3.3: Solve Exercise 1. INTEGRALS AS SOLUTIONS 17 1.1.1.1.1.1.

Then if we are given a specific initial condition y(x0 ) = y0 . y).3 in EP At this point it may be good to first try the Lab I and/or Project I from the IODE website: http://www. For example.edu/iode/.uiuc. y(0) = 0 .2. See Figure 1.1. See Figure 1. we can really just look at the location (x0 .2. It would be good if we could at least figure out the shape and behavior of the solutions or even find approximate solutions for any equation. the general first order equation we are studying looks like y = f (x.2 Slope fields Note: 1 lecture. y)-plane we get a slope. and y(0) = −0.1 Slope fields As you have seen in IODE Lab I (if you did it). y0 ) and follow the slopes. We can plot the slope at lots of points as a short line with this given slope.2: Slope field of y = xy with a graph of solutions satisfying y(0) = 0. in Figure 1.2.math. As we said. By looking at the slope field we can find out a lot about the behavior of solutions. this means that at each point in the (x.18 CHAPTER 1.2. y(0) = 0. We call this the slope field of the equation.1: Slope field of y = xy. In general we cannot really just solve these kinds of equations explicitly. FIRST ORDER ODES 1. -3 3 -2 -1 0 1 2 3 3 3 -3 -2 -1 0 1 2 3 3 2 2 2 2 1 1 1 1 0 0 0 0 -1 -1 -1 -1 -2 -2 -2 -2 -3 -3 -2 -1 0 1 2 3 -3 -3 -3 -2 -1 0 1 2 3 -3 Figure 1. 1. Figure 1. §1.2 we can see what the solutions do when the initial conditions are y(0) > 0.

1. or if it does is not unique. It also has to be unique if we believe our universe is deterministic. it is good to know when things go wrong and why. we see that no matter where we start.1: Attempt to solve: 1 y = . Note that a small change in the initial condition causes quite different behavior.3. (i) Does a solution exist? (ii) Is the solution unique (if it exists)? What do you think is the answer? The answer seems to be yes to both does it not? Well. y(x0 ) = y0 . plotting a few solutions of the of the equation y = −y. Integrate to find the general solution y = ln |x| +C Note that the solution does not exist at x = 0. 1. . If the solution does not exist. See Figure 1. all solutions tend to zero as x tends to infinity. But there are cases when the answer to either question can be no.2.3: Slope field of y = −y with a graph of a few solutions.2. SLOPE FIELDS 19 and y(0) < 0. Hence.2. pretty much.4 on the next page. we have probably not devised the correct model. Since generally the equations come from real life situation. See Figure 1. -3 3 -2 -1 0 1 2 3 3 2 2 1 1 0 0 -1 -1 -2 -2 -3 -3 -2 -1 0 1 2 3 -3 Figure 1. then it seems logical that a solution exists. On the other hand. y).2 Existence and uniqueness We wish to ask two fundamental questions about the problem y = f (x. x y(0) = 0. Example 1.

5.5: Slope field of y = 2 |y| with two solutions satisfying y(0) = 0.1 (Picard’s theorem on existence and uniqueness). y(x0 ) = y0 . Example 1. then a solution to ∂y y = f (x. If f (x. It turns out that the following theorem is true.3: y = y2 . Named after the French mathematician Charles Émile Picard (1856 – 1941) .4: Slope field of y = 1 . Theorem 1. It is known as Picard’s theorem∗ . Example 1. Is there any hope? Of course there is. Note that y = x2 is a solution and y = 0 is a solution (but note x2 is a solution only for x > 0). ∗ y(0) = A.20 -3 3 -2 -1 0 1 2 3 3 3 -3 -2 CHAPTER 1. It is actually hard to tell from the slope field that the solution will not be unique. It is quite possible that the solution only exists for a short while.2: Solve: y = 2 |y|. Note that y = 1 . y) is continuous (as a function of two variables) and ∂ f exists and is continuous near some (x0 . y0 ). But we ought to x be careful about this existence business.2. exists (at least for some small interval of x’s) and is unique. y(0) = 0 and y = 2 |y|. FIRST ORDER ODES -1 0 1 2 3 3 2 2 2 2 1 1 1 1 0 0 0 0 -1 -1 -1 -1 -2 -2 -2 -2 -3 -3 -2 -1 0 1 2 3 -3 -3 -3 -2 -1 0 1 2 3 -3 Figure 1. x Figure 1.2. for some constant A. y(0) = 0 do not satisfy the theorem. y(0) = 0.2. See Figure 1. y).

2.2.2.1.1: Sketch direction field for y = e x−y . Exercise 1. For the most of this course we will be interested in equations where existence and uniqueness holds.2. Exercise 1. So x = y12 .3: Sketch direction field for y = y2 . when A = 1 the solution “blows up” at x = 1. the solution does not exist for all x even if the equation is nice everywhere. so y is not equal to zero at least 1 1 for some x near 0. SLOPE FIELDS 21 We know how to solve this equation. 1.2. .2.3 Exercises Exercise 1. How do the solutions behave as x grows? Can you guess a particular solution by looking at the direction field? Exercise 1. then C = A so y y= 1 A 1 . so x = −1 + C. so y = C−x . If y(0) = A. then y = 0 is a solution. −x Now if A = 0. Hence. y = y2 certainly looks nice. First assume that A 0.4: Is it possible to solve the equation y = xy cos x for y(0) = 1? Justify.2: Sketch direction field for y = x2 . and in fact will hold “globally” unlike for the y = y2 . For example.

g(y) Now both sides look like something we can integrate. .4 in EP When the equation is of the form y = f (x). We obtain dy = g(y) f (x) dx + C. 2 0 from now on. if it looked like y = f (x)g(y). Unfortunately this method no longer works for the general form of the equation y = f (x. §1.3 Separable equations Note: 1 lecture. we can just integrate: y = f (x) dx + C. what if the equation is separable.22 CHAPTER 1. that is.3. FIRST ORDER ODES 1. Write the equation as x dx + C. f (x. so assume y then dy = y We compute the antiderivatives to get ln |y| = x2 + C.1: Take the equation y = xy First note that y = 0 is a solution. Integrating both sides yields y= Notice dependence on y in the integral.3.1 Separable equations On the other hand. y). for some functions f (x) and g(y). dy dx = xy. dx Then we rewrite the equation as dy = f (x) dx. If we can explicitly solve this integral we can maybe solve for y. Let us write the equation in Leibniz notation dy = f (x)g(y). y) dx + C. 1. Example 1.

3.1. 1 dy dx = g(y) dx We can use the change of variables formula. f (x) dx + C.3. that does not sound right. We seemed to be doing a different operation to each side. Because y = 0 is a solution and because of the absolute value we actually can write: x2 y = De 2 . f (x) dx + C. Note that y = y(x) is a function of x and so is 1 dy = f (x) g(y) dx We integrate both sides with respect to x. For example.2 Implicit solutions It is clear that we might sometimes get stuck even if we can do the integration. take the separable equation xy y = 2 . dy ! dx 1. where D > 0 is some constant. SEPARABLE EQUATIONS Or x2 x2 x2 23 |y| = e 2 +C = e 2 eC = De 2 . y y . Because we were integrating in two different variables. Let us see work out this method more rigorously. We check: x2 x2 y = Dxe 2 = x(De 2 ) = xy. Yay! We should be a little bit more careful about the method. dy = f (x)g(y) dx We rewrite the equation as follows. for any number D (including zero or negative). y +1 We separate variables y2 + 1 1 dy = y + dy = x dx. 1 dy = g(y) And we are done.

FIRST ORDER ODES It is not easy to find the solution explicitly as it is hard to solve for y. We note above that the equation also has a solution y = 0. you can graph x as a function of y. y(1) = 0. For example.3. Now we separate variables. y = tan x .. If you want to compute values for y you might have to be tricky. 1. 2 2 Or maybe the easier looking expression: y2 + 2 ln |y| = x2 + C.. Computers are also good at some of these tricks. therefore. but you have to be careful. We will.3 Examples Example 1. These outlying solutions such as y = 0 are sometimes called singular solutions. It is easy to check that implicit solutions still satisfy the differential equation. y It is simple to see that the differential equation holds.). and then flip your paper. etc. therefore. 0 = tan(−2 + C) to get C = 2 (or 2 + π. The solution we are seeking is. call this solution an implicit solution. it turns out that the general solution is y2 + 2 ln |y| = x2 + C together with y = 0. In this case. First factor the right hand side to obtain x2 y = (1 − x2 )(1 + y2 ).3. In this case.2: Solve x2 y = 1 − x2 + y2 − x2 y2 . CHAPTER 1. we differentiate to get y 2y + 2 = 2x. −1 −x+2 . integrate and solve for y 1 − x2 y = 1 + y2 x2 y 1 = 2 −1 2 1+y x −1 arctan(y) = − x+C x −1 y = tan − x+C x Now solve for the initial condition.24 Now we integrate to get y2 x2 + ln |y| = + C.

Then for some k the temperature of coffee is: dT = k(A − T ).3.4 Exercises x Exercise 1. Solving for k we get k = − ln(95 − 26)/74 ≈ 0. Example 1. suppose Bob measured the temperature of the coffee at 1 minute (t = 60) and found that it dropped to 95 degrees. That is we solve 70 = 26 + 74e−0. Exercise 1.07t to get t = − ln(70−26)/74 ≈ 7.07. Exercise 1.3. y3 2 1 y = x2 . So Bob can begin to drink the coffee at 0. y2 1 x2 = + C. So assume that y −3 y = x.3: Solve dx dt = (x2 − 1) t. Now we have T = 26 + 74e−kt . We plug in 95 = T (1) = 26 + 74e−k . Furthermore. let A be the ambient (room) temperature.3. A − T = De−kt . 3 First note that y = 0 is a solution (a singular solution). SEPARABLE EQUATIONS 25 Example 1. Let the Ambient (room) temperature be 26 degrees. and the water was boiling (100 degrees Celsius) at time t = 0.4: Solve y = −xy . A − T dt ln A − T = −kt + C.07 about 7 and a half minutes from the time Bob made it. ( 2 + C)1/3 2 0 and write 1. T (1) = 95. We separate variables and integrate (C and D will denote arbitrary constants) 1 dT = k. Probably about the amount of time it took us to calculate how long it would take. T = A − De−kt .1: Solve y = y . Now to solve for which t gives me 70 degrees. Suppose Bob likes to drink his coffee at 70 degrees.3: Suppose Bob made a cup of coffee. .2: Solve y = x2 y.3. We plug in the first condition 100 = T (0) = 26 − D and hence D = −74. When should Bob start drinking? Let T be the temperature of coffee.3. dt For our setup A = 26.3. That is T = 26 − De−kt .3. T (0) = 100.1.43 minutes. for x(0) = 0.

3.26 Exercise 1. dy Exercise 1. Exercise 1.3. = xy + x + y + 1. Hint: Factor the right hand side. for y(0) = 10. FIRST ORDER ODES = x sin(t).4: Solve Exercise 1.7: Solve x dx − y = 2x2 y.5: Solve dx dt dy dx CHAPTER 1. .6: Find an implicit solution to xy = y + 2x2 y.3.3. for x(0) = 1. where y(1) = 1.

the solution exists wherever p(x) and f (x) are defined.4. The function r(x) is called the integrating factor and the method is called the integrating factor method. That seems like a job for the exponential function! r(x) = e Let us do the calculation.4 Linear equations and the integrating factor Note: more than 1 lecture. f (x). That is a first order equation is linear if we can put it into the following form: y + p(x)y = f (x). to get a closed form formula for y we need to be able to find a closed form formula for the two integrals. For example. Of course. .1. We can then solve for y. there is a method for solving linear first order equations. In this lecture we will focus on the first order linear equation. e p(x)dx y= y = e− e p(x)dx p(x)dx f (x) dx + C . But most importantly for us right now. Solutions of linear equations have nice properties. What we will do is to multiply both sides of (1. LINEAR EQUATIONS AND THE INTEGRATING FACTOR 27 1. f (x) dx + C.3) by some function r(x) such that r(x)y + r(x)p(x)y = We can then integrate both sides of d r(x)y = r(x) f (x). e p(x)dx p(x)dx d r(x)y . we get the same function back multiplied by p(x). In fact the majority of this course will focus on linear equations. §1. dx Note that the right hand side does not depend on y and the left hand side is written as a derivative of a function. dx y + e p(x)dx p(x)y = e d e p(x)dx y = e dx e p(x)dx p(x)dx p(x)dx f (x). The dependence on x can be more complicated. y + p(x)y = f (x). So we are looking for a function r(x) such that if we differentiate it. (1.5 in EP One of the most important types of equations we will learn how to solve are so-called linear equations.3) The word “linear” here means linear in y. and has the same regularity (read: it is just as nice).

2 2 2 2 2 2 2 = e x . p(x)dx 2 2 . An advice: Do not try to remember the formula itself. Suppose we are given y + p(x)y = f (x) y(x0 ) = y0 . FIRST ORDER ODES y(0) = −1. that is way too hard.4. but those constants will not matter in the end.4. we solve for the initial condition −1 = y(0) = 1 + C. p(x) dx First note that p(x) = 2x and f (x) = e x−x . The integrating factor is r(x) = e multiply both sides of the equation by r(x) to get e x y + 2xe x y = e x−x e x . Note that we do not care which antiderivative we take when computing e add a constant of integration.28 Example 1.4) You should be careful to properly use dummy variables here.4. (1. y(x) = e − x x0 p(s) ds x x0 t e x0 p(s) ds f (t) dt + y0 . Exercise 1. Since we cannot always evaluate the integrals in closed form. We 2 Next.1: Solve y + 2xy = e x−x 2 2 CHAPTER 1. It is easier to remember the process and repeat it. You can always Exercise 1. Look at the solution and write the integrals as definite integrals. If you now plug that into a computer of a calculator. A definite integral is something that you can plug into a computer or a calculator. The solution is y = e x−x − 2e x . y = e x−x + Ce x .4). so C = −2. d x2 e y = ex . dx We integrate e x y = e x + C. .1: Try it! Add a constant of integration to the integral in the integrating factor and show that the solution you get in the end is the same as what we got above.2: Check that y(x0 ) = y0 in formula (1. it is useful to know how to write the solution in definite integral form. it will be happy to give you numerical answers.

The integrating factor is r(t) = exp 3 3 dt = exp ln(60 + 2t) = (60 + 2t)3/2 60 + 2t 2 x x = volume 60 + (5 − 3)t . Let x denote the kg of salt in the tank. Solution of water and salt (brine) with concentration of 0. therefore. but try to simplify as far as you can. the change in x (denoted ∆x) is approximately ∆x ≈ (rate in × concentration in)∆t − (rate out × concentration out)∆t Taking the limit ∆t → 0 we see that dx = (rate in × concentration in) − (rate out × concentration out) dt We have rate in = 5 concentration in = 0.1 kg / liter is flowing in at the rate of 5 liters a minute.1 rate out = 3 concentration out = Our equation is. You will not be able to find the solution in closed form. Then for a small change ∆t in time.5 dt 60 + 2t Let us solve.1) − 3 dt 60 + 2t Or in the form (1. let t denote the time in minutes. A 100 liter tank contains 10 kilograms of salt dissolved in 60 liters of water. Example 1. LINEAR EQUATIONS AND THE INTEGRATING FACTOR 29 Exercise 1.2: The following is a simple application of linear equations and this type of a problem is used often in real life.4.4.3: Write the solution of the following problem as a definite integral.4. How much salt is in the tank when the tank is full? Let us come up with the equation. y + y = e x −x 2 y(0) = 10. linear equations are used in figuring out the concentration of chemicals in bodies of water. x dx = (5 × 0. For example.1.3) dx 3 + x = 0. The solution in the tank is well stirred and flows out at a rate of 3 liters a minute.

5(100)−3/2 ≈ 19. We know that at t = 0.6: Solve y + 3x2 y = sin(x) e−x .5(60 + 2t)3/2 dt (60 + 2t)3/2 x = CHAPTER 1. So x(20) = 60 + 40 + C(60 + 40)−3/2 ≈ 20 − 929.4. If you can find a closed form solution. or when t = 20.4.1 Exercises In the exercises.5(60 + 2t)3/2 dt + C(60 + 2t)−3/2 x = (60 + 2t)−3/2 2 x = 0.8: Solve 1 x2 +1 y + xy = 3.4.07 5 1 6 60 + C(60)−3/2 = 12 + C(60)−3/2 5 The concentration at the end is approximately 0. FIRST ORDER ODES 0. Exercise 1. or 0. So we note that the tank is full when 60 + 2t = 100. you should give that.4: Solve y + xy = x. with y(0) = 0. x = 10.5: Solve y + 6y = e x . So 10 = x(0) = or C = −2(603/2 ) ≈ −929. Exercise 1. Exercise 1.5(60 + 2t)−3/2 (60 + 2t)5/2 + C(60 + 2t)−3/2 5 60 + 2t x= + C(60 + 2t)−3/2 5 Now to figure out C.4. 3 Exercise 1.5(60 + 2t)3/2 dt 60 + 2t d (60 + 2t)3/2 x = 0. . with y(0) = 1.5(60 + 2t)3/2 dt + C 0. feel free to leave answer as a definite integral if a closed form solution cannot be found.30 We multiply both sides of the equation to get (60 + 2t)3/2 dx 3 + (60 + 2t)3/2 x = 0.4.4.5 We are interested in x when the tank is full.19 kg/liter and we started with kg/liter. Exercise 1.167 1.7: Solve y + cos(x)y = cos(x).

LINEAR EQUATIONS AND THE INTEGRATING FACTOR 31 Exercise 1. The output of one is flowing to the other. dt t is time.9: Suppose there are two lakes.4. A is the ambient temperature. c) When will the concentration in the second lake be maximal. The first lake contains 100 thousand liters of water and the second lake contains 200 thousand liters of water. Suppose that A = A0 cos ω t for some constants A0 and ω.4. a) Find the general solution. b) When will the concentration in the first lake be below 0. Assume that the water is being continually mixed perfectly by the stream. and k > 0 is a constant. A truck with 500 kg of toxic substance crashes into the first lake. will the initial conditions make much of a difference? Why or why not. That is the ambient temperature oscillates (for example night and day temperatures). Exercise 1. The in and out flow from each lake is 500 liters per hour.01 kg per liter. .4.1.10: Newton’s law of cooling states that dx = −k(x − A) where x is the temperature. a) Find the concentration of toxic substance as a function of time (in seconds) in both lakes. b) In the long term.

v = 1 − v2 . Now we need to “unsubstitute. We also solve the first equation for y. y(−1 + De2x ) = Dxe2x − x − 2. What can we do? How about trying to change variables. We plug this into the equation to get 1 − v = v2 . We will use another variable v. We differentiate (in x) to obtain v = 1 − y . one method is to try to change variables to end up with a simpler equation that can be solved.32 CHAPTER 1. Let us try v = x − y + 1. x − y + 2 = Dxe2x − yDe2x . Now we need to figure out y in terms of v .6 in EP Just like when solving integrals. which we will treat as a function of x. De2x − 1 .1 Substitution The equation y = (x − y + 1)2 . v and x. In other words. 1. 1 dv = dx.5 Substitution Note: 1 lecture. y= Dxe2x − x − 2 .5. is neither separable nor linear. FIRST ORDER ODES 1. v+1 v−1 x − y + 2 = (x − y)De2x . 1 − v2 So 1 v+1 = x+C ln 2 v−1 v+1 = e2x+2C . So y = 1 − v .” x−y+2 = De2x x−y and also the two solutions x − y + 1 = 1 or y = x and x − y + 1 = −1 or y = x + 2. −y + yDe2x = Dxe2x − x − 2. Such an equation we know how to solve. Note that v = 1 and v = −1 are also solutions. v−1 or = De2x for some constant D. so that in the new variables the equation is simpler. §1.

There are some general things to look for. a change of coordinates v = y1−n transforms the Bernoulli equation into a linear equation.5. 4 There are several things called Bernoulli equations. −xy5 v + y(x + 1) + xy5 = 0.1: Solve xy + y(x + 1) + xy5 = 0.1. Example 1. If a substitution does not work (it does not make the equation any simpler). Substitution in differential equations is applied in much the same way that it is applied in calculus. This equation looks a lot like a linear equation except for the yn . These particular equations are named for Jacob Bernoulli (1654 – 1705). For example. this is just one of them. When you see yy y2 y (cos y)y (sin y)y y ey Try substituting y2 y3 sin y cos y ey Usually you try to substitute in the “most complicated” part of the equation with the hopes of simplifying it. First we note this is Bernoulli (p(x) = (x + 1)/x and q(x) = −1). v = y1−5 = y−4 . y + p(x)y = q(x)yn . try a different one. The above table is just a rule of thumb. Several different substitutions might work. the so-called Bernoulli equations† . We substitute v = −4y−5 y . = y . Otherwise.5. SUBSTITUTION 33 Note that D = 0 gives y = x + 2. So xy + y(x + 1) + xy5 = 0. The Bernoullis were a prominent Swiss family of mathematicians. In other words. 4 −x v + v(x + 1) + x = 0. 1. If n = 0 or n = 1 then the equation is linear and we can solve it. You guess. Note that n need not be an integer. −y5 v 4 y(1) = 1. 4 −x v + y−4 (x + 1) + x = 0. You might have to modify your guesses. We summarize a few of these in a table.2 Bernoulli equations There are some forms of equations where there is a general rule for substitution which always works. but no value of D gives the solution y = x.5. † .

y . F(v) − v x .34 and finally CHAPTER 1. s4 1/4 y= x 4 e−x x e−4s 1 s4 ds + 1 . it is perfectly fine solution to have a definite integral in our solution. Suppose that we can write the differential equation as y =F Here we try the substitutions v= y x and therefore y = v + xv . v=e x 4 s4 1 Note that the integral in this expression is not possible to find in closed form. x Now it is linear. So use the integrating factor. 1. as we said before. This assumption is OK because our initial condition is for x = 1. v − r(x) = exp Now d e−4x e−4x v =4 4 .3 Homogeneous equations Another type of equations we can solve are the so-called homogeneous equations. x4 s 1 x −4s e 4x 4 ds + 1 . Let us assume that x > 0 so |x| = x. FIRST ORDER ODES 4(x + 1) v = 4.5. But again. x We note that the equation is transformed into v + xv = F(v) or xv = F(v) − v or v 1 = . x x y−4 = e4x x4 4 1 e−4s ds + 1 . dx x4 x x −4x e e−4s v= 4 4 ds + 1. Now unsubstitute x −4(x + 1) e−4x dx = e−4x−4 ln(x) = e−4x x−4 4 .

y x First we transform this into the form y = y + y . Exercise 1.4 Exercises Exercise 1.6: Solve y = √ 2 . with y(0) = 1. ln |x| + C We unsubstitute y/x = −1 . with y(0) = 1. Exercise 1.5. SUBSTITUTION Hence an implicit solution is 1 dv = ln |x| + C.3: Solve y + xy = y4 . x+y Exercise 1.2: Solve 2yy + 1 = y2 + x.5. F(v) − v Example 1. Exercise 1.5.5. y y +1 . ln |x| − 1 1 = y(1) = 1.4: Solve yy + x = 2 x2 + y2 . Now we do the substitution v = x x separable equation xv = v2 + v − v = v2 .2: Solve x2 y = y2 + xy.5.5: Solve y = (x + y − 1)2 .1. Exercise 1.1: Solve xy + y(x + 1) + xy5 = 0.5. so −1 −1 = .5. with y(0) = 1. ln |x| + C to get the We want y(1) = 1.5. which has a solution 1 dv = ln |x| + C. 2 35 y(1) = 1. with y(1) = 1. v −1 v= .5. ln |1| + C C Thus C = −1 and the solution we are looking for is −x y= . v2 −1 = ln |x| + C. ln |x| + C −x y= .

§2. We call these types of solutions equilibrium solutions. We call such critical points stable. t is time. then as t → ∞ we get x → A. by looking at the graph. that the solution x = A is “stable” in that small perturbations in x do not lead to substantially different solutions as t grows.7: Slope field and some solutions of x = −0.6 for an example. Figure 1. If a critical point is not stable we would say it is unstable. 0 10 5 10 15 20 10 0 10 5 10 15 20 10 5 5 5 5 0 0 0 -5 -5 0 -10 0 5 10 15 20 -10 -5 0 5 10 15 20 -5 Figure 1. the naming comes from the fact that the equation is independent of time. dt where x is the temperature.3(x − 5). The points on the x axis where f (x) = 0 are called critical points.6 Autonomous equations Note: 1 lecture. Note also. These types of equations are called autonomous equations. In this simple example it turns out that all solutions in fact go to A as t → ∞. Newton’s law of cooling says that dx = −k(x − A).2 in EP Let us consider problems of the form dx = f (x). If we change the initial condition a little bit. In fact.1x(5 − x). Note the solution x = A (in the example A = 5). each critical point corresponds to an equilibrium solution.6: Slope field and some solutions of x = −0. If we think of t as time. FIRST ORDER ODES 1. k is some constant and A is the ambient temperature. dt where the derivative of solutions depends only on x (the dependent variable). Let us come back to the cooling coffee problem.36 CHAPTER 1. The point x = A is a critical point. See Figure 1. .

It is easier to just look at the phase diagram or phase portrait. Note two critical points. It is not really necessary to find the exact solutions to talk about the long term behavior of the solutions. but it may get there rather quickly. This equation is commonly used to model population if you know the limiting population M.1. . we have seen that it only exists for some finite period of time. x = 0 and x = 5. Think of the equation y = y2 . On the other hand the critical point at x = 0 is unstable.” From just looking at the slope field we cannot quite decide what happens if x(0) < 0. For example. In this case there is one dependent variable x. AUTONOMOUS EQUATIONS Let us consider the logistic equation dx = kx(M − x). The critical point at x = 5 is stable.7 on the facing page for an example. but we will still consider negative x for the purposes of the math.6. In our example equation above it will actually turn out that the solution does not exist for all time.     lim x(t) = 0 if x(0) = 0. mark all the critical points and then draw arrows in between. dt 37 for some positive k and M. In any case. So draw the x axis.  Where DNE means “does not exist. Note that in the real world there is no such thing as negative population. it is easy to approximately sketch how the solutions are going to look. y=5 y=0 Armed with the phase diagram. Same can happen here. but to see that we would have to solve the equation. Mark positive with up and negative with down. Many times are interested only in the long term behavior of the solution and hence we would just be doing way too much work if we tried to solve the equation exactly. See Figure 1. which is a simple way to visualize the behavior of autonomous equations. from the above we can easily see that  5  if x(0) > 0. This scenario leads to less catastrophic predictions on world population. It could be that the solution does not exist t all the way to ∞. the solution does go to −∞.   t→∞  DNE or − ∞ if x(0) < 0. that is the maximum sustainable population.

and the fast food restaurant serving them will go out of business. FIRST ORDER ODES Exercise 1. no matter how well stocked the planet starts.9 on the facing page Finally if we are harvesting at 2 million humans per year. See Figure 1.6. If ever the population drops below B. Suppose x is the number of humans in millions on the planet and t is time in years.6. then A and B are distinct and positive. or A = B. no real solutions). A= It turns out that when h = 1. See Figure 1. unstable points are generally bad news.6 million.e. Let us think about the logistic equation with harvesting. or A and B both complex (i. the population will always plummet towards zero. unstable stable Since any mathematical model we cook up will only be an approximation to the real world. then the population will not die out. Our equation becomes dx = kx(M − x) − h.1: Try sketching a few solutions. dt Critical points A and B are kM − (kM)2 − 4hk (kM)2 − 4hk B= .6 million it will tend towards this number. Note that these possibilities are A > B. Logistic equations are commonly used for modelling population. They keep a planet with humans on it and harvest the humans at a rate of h million humans per year. A small perturbation of the equilibrium state and we are out of business. If it ever drops below 1. humans will die out. Let M be the limiting population when no harvesting is done. This scenario is not one that we (as the human fast food proprietor) want to be in.6. then A = B. When the population is above 1. As long as the population stays above B which is approximately 1. There is only one critical point which is unstable. Check with the graph above if you are getting the same answers. The graph we will get is given in Figure 1. we can easily classify critical points as stable or unstable. dt Multiply out and solve for critical points dx = −kx2 + kMx − h.10 on the next page. humans will die out on the planet.38 CHAPTER 1. kM + . Once we draw the phase diagram. When h = 1. There is no room for error.8 on the next page.2: Draw the phase diagram for different possibilities. 2k 2k Exercise 1.55 million. Suppose an alien race really likes to eat humans. k > 0 is some constant depending on how fast humans multiply.

On this interval mark the critical points stable or unstable.6. 15 20 10 8 8 5 5 2 2 0 0 0 5 10 15 20 Figure 1. c) Find limt→∞ x(t) for the solution with the initial condition x(0) = −1. c) Find limt→∞ x(t) for the solution with the initial condition x(0) = 1. find the critical points and mark them stable or unstable.6. b) Sketch typical solutions of the equation.8: Slope field and some solutions of x = −0.6. a) Draw the phase diagram for −4π ≤ x ≤ 4π.6.1x(8 − x) − 1.3: Let x = x2 .10: Slope field and some solutions of x = −0.1.6. Exercise 1.5: Suppose f (x) is positive for 0 < x < 1 and negative otherwise. b) Sketch typical solutions of the equation. 0 10 5 10 Figure 1.1 Exercises Exercise 1.1x(8 − x) − 2. Exercise 1. 1. a) Draw the phase diagram.1x(8 − x) − 1.4: Let x = sin x. a) Draw the phase .6. AUTONOMOUS EQUATIONS 0 10 5 10 15 20 10 10 0 5 10 15 20 39 10 8 8 8 8 5 5 5 5 2 2 2 2 0 0 0 0 0 5 10 15 20 0 5 10 15 20 Figure 1.9: Slope field and some solutions of x = −0.

That is we will only harvest only an amount proportional to current population. Suppose that we modify our dt harvesting. find the critical points and mark them stable or unstable.6.6: Start with the logistic equation dx = kx(M − x). Exercise 1.40 CHAPTER 1. b) Sketch typical solutions of the equation.5. b) Show that if kM > h. c) Find limt→∞ x(t) for the solution with the initial condition x(0) = 0. then the equation is still logistic. a) Construct the differential equation. FIRST ORDER ODES diagram for x = f (x). that we harvest hx for some h > 0. c) What happens when kM < h? .

Rinse repeat! That is. y0 ). y) y(x0 ) = y0 .4 in EP At this point it may be good to first try the Lab II and/or Project II from the IODE website: http://www. it is generally very hard if not impossible to get a nice formula for the solution of the problem y = f (x.0 2.edu/iode/. Do note that this is not exactly the solution.7.0 -1 0 1 2 3 0.5 0.5 2.0 3.0 2. NUMERICAL METHODS: EULER’S METHOD 41 1. -1 3.5 2. Or perhaps we even want to produce a graph of the solution to inspect the behavior.5 1. The slope is the change in y per unit change in x.11.0 2. then we will say that y1 (the approximate value of y at x1 = x0 + h) will be y1 = y0 + hk. More abstractly we compute xi+1 = xi + h.5 0.0 -1 0 1 2 3 3.0 1.math. yi+1 = yi + h f (xi .12 on the next page for the plot of the real solution. Named after the Swiss mathematician Leonhard Paul Euler (1707 – 1783).0 1.5 1.5 0. We follow the line for an interval of length h.uiuc. See Figure 1.5 0. compute x2 and y2 using x1 and y1 .5 2. For an example of the first two steps of the method see Figure 1.1.11: First two steps of Euler’s method with h = 1 for the equation y = conditions y(0) = 1. as we said before.5 1. The first thing to note is that. Hence if y = y0 at x0 .0 0.0 1. Do note the correct pronunciation of the name sounds more like "oiler.0 0.0 0 1 2 3 3.0 1." ‡ .0 2. What if we want to find out the value of the solution at some particular x.5 2. §2. Euler’s method‡ : We take x0 and compute the slope k = f (x0 .7 Numerical methods: Euler’s method Note: 1 lecture. yi ).0 -1 0 1 2 3 0.5 1. y2 3 with initial By connecting the dots we get an approximate graph of the solution.0 Figure 1.

so we only have a vague understanding of the error. Exercise 1. The main point is. so error of about 0. The difference between the actual solution and the approximate solution we will call the error. Let us try to approximate y(2) using Euler’s method. If we knew the error exactly . Table 1. assuming that the error was comparable to start with. With step size 1 we have y(2) ≈ 1. Let us halve the step size.0 Figure 1. The real answer is 3.1 on the facing page gives the values computed for various parameters. We will usually talk about just the size of the error and we do not care much about its sign.12 we have essentially graphically approximated y(2) with step size 1.11 and 1. This halving of the error is a general feature of Euler’s method as it is a first order method.5 1. The improved Euler method should quarter the error every time you halve the interval. That is quite a bit to do by hand. This reduction can be a big deal.5 0..42 -1 3.209.0 -1 0 1 2 3 0. To get it to within 0. Note that to get the error to be within 0..12: Two steps of Euler’s method (step size 1) and the exact solution for the equation 2 y = y3 with initial conditions y(0) = 1. meaning doing 512 to 1024 steps. We notice that except for the first few times. every time we halved the interval the error approximately halved. that we usually do not know the real solution.926. whereas with 5 halvings you only have to do 32 steps.5 2. but suppose each step would take a second to compute (the function may be substantially 2 .5 0. FIRST ORDER ODES 3 3. Let us see what happens with the equation y = y3 . so you would have to approximately do half as many “halvings” to get the same error.5 1.0 1.5 2.0 2. y(0) = 1. So we are approximately 1.791. A computer may not care between this difference for a problem this simple.074 off. With 10 halvings (starting at h = 1) you have 1024 steps.1 of the answer we had to already do 64 steps. A second order method reduces the error to approximately one quarter every time you halve the interval.0 2.1: Solve this equation exactly and show that y(2) = 3. what is the point of doing the approximation. If you do the computation you will find that y(2) ≈ 2. In Figures 1.0 1. In the IODE Project II you are asked to implement a second order method.7.0 0 1 2 CHAPTER 1.01 we would have to halve another 3 or four times.0 0.

Then the difference is 32 seconds versus about 17 minutes.509130743538 y2 .0078125 2. Next. assume that the error goes down by a factor of 2.1: Euler’s method approximation of y(2) where of y = y(0) = 1. you should solve the equation exactly and you will notice that the solution does not exist at x = 3.791388470013 0. suppose you do not know the error.049645018422 0. Can you estimate the error in the last time from this? Does it agree with the table? Now do it for the first two rows.179599204497 0. the error generally goes down by a factor of 16.7. suppose that you have to repeat such a calculation for different parameters a thousand times.97472419486 Error 1. Results of this effort are listed in Table 1. it is 1 minute versus 17 minutes.5 2.2: In the table above.95035498158 0.82040079550 0.25 2.0625 2.7.533849442573 0.125 2. a second order method would probably double the time to do each step.92592592593 0. but even a better approximation method than Euler would need an insanely small step size to compute the solution with reasonable precision. You get the idea. And computers might not be able to handle such a small step size anyway. Another case when things can go bad is if the solution oscillates wildly near some point.095878935207 0. In real applications you would not use a simple method such as Euler’s. Take the approximate values of the function in the last two lines. Note that we do not know the error! How do you know what is the right step size? Essentially you keep halving the interval and if you are lucky you can estimate the error from a few of these calculations and the assumption that the error goes down by a factor of one half each time (if you are using standard Euler). That is a fourth order method.2 on the next page for successive halvings of h.68033658758 0.47249414666 0.561838476090 0. Even so. 3 Table 1. NUMERICAL METHODS: EULER’S METHOD h Approximate y(2) 1 1. Exercise 1.03125 2. more difficult to compute than y2 /3).90412106479 0.527505853335 0. Suppose that instead of y(2) we wish to find y(3). In fact the solution blows up. In this case. that means that if you halve the interval. The simplest method that would probably be used in a real application is the standard Runge-Kutta method (we will not describe it here).015625 2. Does this agree with the table? Let talk a little bit more about this example y = y3 . Such an example is given in IODE Project II. What is going on here? Well.025275805142 Error Previous error 43 0.20861152999 0. 2 .319663412423 0.605990266083 0. y(0) = 1.074074074070 0. Note: We are not being altogether fair.517788587396 0.736809954840 0. the solution may exist at all points.666557415634 0.1.

4600446195 0. Choosing the right method to use and the right step size can be very tricky. There are several competing factors to consider.86078752222 0. does not mean that you must have the right answer.44 CHAPTER 1.2: Attempts to use Euler’s to approximate y(3) where of y = y2 . Use Euler’s method with step size h = 0. There is ongoing active research by engineers and mathematicians on how to do numerical approximation in the best way. Errors introduced by rounding numbers off during your computations become noticeable when the step size becomes too small relative to the quantities you are working with. So reducing step size may in fact make errors worse.3: Consider = (2t − x)2 . FIRST ORDER ODES h Approximate y(3) 1 3.4012144477 0.7.5989264104 0. Small errors lead to large errors down the line. x(0) = 2. 3 y(0) = 1.54328915766 0. Even if the function f is simple to compute.03125 29. Just because the numbers have stabilized after successive halving.015625 50. the general purpose method used for the ODE solver in Matlab and Octave (as of this writing) is a method that appeared only in the literature only in the 1980s. You have seen just the beginnings of the challenges that appear in real applications.8032064113 0.5 4. 1.0625 17. but perhaps not the right precision. For example.1 Exercises dx Exercise 1.7. Or what may happen is that the numbers may never stabilize no matter how many times you halve the interval. • Computational time: Each step takes computer time.0078125 87.5 to dt approximate x(1).16232281664 0. Large step size means faster computation. • Roundoff errors: Computers only compute with a certain number of significant digits.25 6. Or in the worst case the numerical computations might be giving you bogus numbers that look like a correct answer. .7576927770 Table 1. • Stability: Certain equations may be numerically unstable.125 10. you do it many times over.

1 2 8 dt to approximate x(1). c) Describe what happens to the errors for each h you used. find the factor by which the error changed each time you halved the interval. That is. NUMERICAL METHODS: EULER’S METHOD 45 dx 1 Exercise 1. 4 . x(0) = 1. a) Use Euler’s method with step sizes h = 1.4: Consider = t − x.7. . 1 .1.7. b) Solve the equation exactly.

46 CHAPTER 1. FIRST ORDER ODES .

Suppose y1 and y2 are two solutions of the homogeneous equation (2.1 Second order linear ODEs Note: less than 1 lecture. Two solutions are: y1 = e .1) where p = B/A. we know a lot more of them. Theorem 2.1 (Superposition).1 in EP Let us consider the general second order linear differential equation A(x)y + B(x)y + C(x)y = F(x). The word linear means that the equation contains no powers nor functions of y.Chapter 2 Higher order linear ODEs 2. and y . Then y(x) = C1 y1 (x) + C2 y2 (x). In the special case when f (x) = 0 we have a homogeneous equation y + p(x)y + q(x)y = 0. We usually divide through by A to get y + p(x)y + q(x)y = f (x). and f = F/A. We have already seen some second order linear homogeneous equations.1. also solves (2. (2. q = C/A. y . first part of §3. 47 . y + ky = 0 y − ky = 0 Two solutions are: y1 = cos kx. kx (2. If we know two solutions two a linear homogeneous equation.2) for arbitrary constants C1 and C2 .2) y2 = sin kx.2). y2 = e−kx .

Proof: Let y = C1 y1 + C2 y2 . Suppose p. . f are continuous functions and a. Hence the proof simply becomes Ly = L(C1 y1 + C2 y2 ) = C1 Ly1 + C2 Ly2 = C1 · 0 + C2 · 0 = 0.1. As sinh and cosh are sometimes more convenient to use than the exponential.1: Derive these properties from the definitions of sinh and cosh in terms of exponentials. Theorem 2. The equation y + p(x)y + q(x)y = f (x). we can add together solutions and multiply by constants to obtain new different solutions. We will prove this theorem because the proof is very enlightening and illustrates how linear equations work. Then y + py + qy = (C1 y1 + C2 y2 ) + p(C1 y1 + C2 y2 ) + q(C1 y1 + C2 y2 ) = C1 y1 + C2 y2 + C1 py1 + C2 py2 + C1 qy1 + C2 qy2 = C1 (y1 + py1 + qy1 ) + C2 (y2 + py2 + qy2 ) = C1 · 0 + C2 · 0 = 0 The proof becomes even simpler to state if we use the operator notation.48 CHAPTER 2. b1 are constants. but a function eats numbers and spits out numbers). Linear equations have nice and simple answers to the existence and uniqueness question. Therefore.2 (Existence and uniqueness). An operator is an object that eats functions and spits out functions (kind of like what a function is. Define the operator L by Ly = y + py + qy. these are 2 2 solutions by superposition as they are linear combinations of the two exponential solutions. L being linear means that L(C1 y1 + C2 y2 ) = C1 Ly1 + C2 Ly2 . cosh 0 = 1 d cosh x = sinh x dx cosh2 x − sinh2 x = 1 sinh 0 = 0 d sinh x = cosh x dx Exercise 2. has exactly one solution y(x) satisfying the initial conditions y(a) = b0 y (a) = b1 . HIGHER ORDER LINEAR ODES That is. q. let us review some of their properties. Two other solutions to the second equation y − ky = 0 are y1 = cosh kx and y2 = sinh kx. b0 . x −x x −x Let us remind ourselves of the definition. cosh x = e +e and sinh x = e −e .1.

1. Exercise 2. So y1 and y2 are linearly independent. Hence y = C1 cos x + C2 sin x is the general solution to y + y = 0. It is obvious that sin and cos are not multiples of each other. 2.1.1. Hint: Try y = xr . For example.2: Show that y = e x and y = e2x are linearly independent. Or the equation y − y = 0 with y(0) = b0 and y (0) = b1 has the solution y(x) = b0 cosh x + b1 sinh x. 49 Here note that using cosh and sinh allows us to solve for the initial conditions much more easily than if we have used the exponentials. we found the solutions y1 = sin x and y2 = cos x for the equation y + y = 0. Show that y solves Ly = f (x) + g(x). In this case y = C1 y1 + C2 y2 is the general solution.1 Exercises Exercise 2. If sin x = A cos x for some constant A.5: For the equation x2 y − xy = 0. the equation y + y = 0 with y(0) = b0 and y (0) = b1 has the solution y(x) = b0 cos x + b1 sin x.1. then every other solution is written in the form y = C1 y1 + C2 y2 . we let x = 0 and this would imply A = 0 = sin x. Exercise 2.1. . Question: Suppose we find two different solutions y1 and y2 to the homogeneous equation (2. Note that the initial condition for a second order ODE consists of two equations. Suppose that y1 is a solution to Ly1 = f (x) and y2 is a solution to Ly2 = g(x) (same operator L). If you find two linearly independent solutions.2. which is preposterous. show that they are linearly independent and find the general solution.1.2).3: Take y + 5y = 10x + 5. So if we have two arbitrary constants we should be able to solve for the constants and find a solution satisfying the initial conditions. Can you find guess a solution? Exercise 2.4: Prove the superposition principle for nonhomogeneous equations. find two solutions. Can every solution be written (using superposition) in the form y = C1 y1 + C2 y2 ? Answer is affirmative! Provided that y1 and y2 are different enough in the following sense. SECOND ORDER LINEAR ODES For example. We will say y1 and y2 are linearly independent if one is not a constant multiple of the other.

6: Suppose that (b − a)2 − 4ac > 0. If you have one solution to a second order linear homogeneous equation you can find another one. c) Write down the general solution. c) Write down the general solution. Exercise 2. Exercise 2. b) What happens when (b − a)2 − 4ac = 0 or (b − a)2 − 4ac < 0? We will revisit the case when (b − a)2 − 4ac < 0 later. They are solved by trying y = xr and solving for r (we can assume that x ≥ 0 for simplicity). b) Use reduction of order to find a second linearly independent solution. e p(x) dx dx (y1 (x))2 . Hint: Try y = xr and find a formula for r. Exercise 2. Hint: Try y = xr ln x for the second solution. HIGHER ORDER LINEAR ODES Note that equations of the form ax2 y + bxy + cy = 0 are called Euler’s equations or CauchyEuler equations. a) Show that y = x is a solution. b) Use reduction of order to find a second linearly independent solution.9 (Chebychev’s equation of order 1): Take (1 − x2 )y − xy + y = 0. Show that y2 (x) = y1 (x) is also a solution.1. Exercise 2. Exercise 2. a) Find a formula for the general solution of ax2 y + bxy + cy = 0.1. a) Show that y = 1 − 2x2 is a solution. Let us solve some famous equations. This is the reduction of order method.8: Suppose y1 is a solution to y + p(x)y + q(x)y = 0.1.1. Find a formula for the general solution of ax2 y + bxy + cy = 0.7: Suppose that (b − a)2 − 4ac = 0.50 CHAPTER 2.1.10 (Hermite’s equation of order 2): Take y − 2xy + 4y = 0.

If they were not we could write e4x = Ce2x . To apply the initial conditions we first find y = 2C1 e2x + 4C2 e4x . Constant coefficients means that the functions in front of y . Then C1 = −7 as −2 = C1 + 5. y . r2 erx − 6rerx + 8erx = 0. Plug in to get y − 6y + 8y = 0.2. which would imply that e2x = C which is clearly not possible. −2 = y(0) = C1 + C2 . add these together.1: Check that y1 and y2 are solutions. y (0) = 6.3) . We need to solve for C1 and C2 . the solution we are looking for is y = −7e2x + 5e4x . Exercise 2. Either apply some matrix algebra. 6 = y (0) = 2C1 + 4C2 . and subtract the two equations to get 5 = C2 .2. y(0) = −2. second part of §3. and end up with zero. we can write the general solution as y = C1 e2x + C2 e4x . not depending on x. Think about a function that you know that stays essentially the same when you differentiate it. and y are constants. So let y1 = e2x and y2 = e4x . The functions e2x and e4x are linearly independent. So if r = 2 or r = 4. CONSTANT COEFFICIENT SECOND ORDER LINEAR ODES 51 2.2. Suppose that we have an equation ay + by + cy = 0. Then y = rerx and y = r2 erx . Hence. We plug in x = 0 and solve. For example. then erx is a solution. r2 − 6r + 8 = 0 (r − 2)(r − 4) = 0. so that we can take the function and its derivatives. This is a second order linear homogeneous equation with constant coefficients. (2.1 in EP Suppose we have the problem y − 6y + 8y = 0. or just solve these by high school algebra. (divide through by erx ). divide the second equation by 2 to obtain 3 = C1 + 2C2 .2 Constant coefficient second order linear ODEs Note: more than 1 lecture. Let us try a solution y = erx . Hence. Let us generalize this example into a method.

Let us compute y = e4x + 4xe4x and y = 8e4x + 16xe4x .3) has the general solution y = C1 er1 x + C2 er2 x . The characteristic equation is r2 − 8r + 16 = (r − 4)2 = 0. When r1 goes to r2 in the limit this is like taking r2 −r1 derivative of erx using r as a variable. This limit is xerx . Let us give a short “proof” for why the solution xerx works when the root is doubled. HIGHER ORDER LINEAR ODES where a. note the equation y − k2 y = 0. Note that er2 x −er1 x is a solution when the roots are distinct. Suppose that r1 and r2 are the roots of the characteristic equation.2. The equation ar2 + br + c = 0 is called the characteristic equation of the ODE. (ii) If r1 = r2 (b2 − 4ac = 0). Hence a double root r1 = r2 = 4. b.1. c are constants. √ −b ± b2 − 4ac r1 . Example 2. doubled root rarely happens. There is still a difficulty if r1 = r2 . and hence this is also a solution in the doubled root case. then (2. Exercise 2. We should note that in practice.2: Check that e4x and xe4x are linearly independent. we have er1 x and er2 x as solutions. (i) If r1 and r2 are distinct and real (b2 − 4ac > 0). The general solution is. Here the characteristic equation is r2 − k2 = 0 or (r − k)(r + k) = 0 and hence e−kx and ekx are the two linearly independent solutions. Since this case is really a limiting case of when cases the two roots are distinct and very close.2.3) has the general solution y = (C1 + C2 x) er1 x . r2 = . then (2. but it is not hard to overcome.52 CHAPTER 2. Plug in y − 8y + 16y = 8e4x + 16xe4x − 8(e4x + 4xe4x ) + 16xe4x = 0.2. That e4x solves the equation is clear. If xe4x solves the equation then we know we are done. Theorem 2.1: Find the general solution of y − 8y + 16y = 0. y = (C1 + C2 x) e4x = C1 e4x + C2 xe4x . . therefore. For another example of the first case. 2a Therefore. Solve for the r by using the quadratic formula. Try the solution y = erx to obtain ar2 erx + brerx + cerx = 0 ar2 + br + c = 0. If you pick your coefficients truly randomly you are very unlikely to get a doubled root.

13 We can also define the exponential ea+ib of a complex number. A complex number is really just a pair of real numbers. i3 = −i. b). e x+y = e x ey . b) as a + ib. We define a multiplication by (a. • 1 3−2i = 1 3+2i 3−2i 3+2i = 3+2i 13 = 3 13 + 2 i. . Further. You can just do arithmetic with complex numbers just as you would do with polynomials. b) × (c. This means that ea+ib = ea eib and hence if we can compute eib easily. 1) × (0. Theorem 2. 0). we can compute ea+ib .3: Make sure you understand (that you can justify) the following identities: • i2 = −1. We will use the mathematicians convention and use i. Complex numbers may seem a strange concept especially because of the terminology. we note that many properties still hold for the complex exponential.2. For example. for example i and −i are roots of r2 + 1 = 0. 1) = (−1. ad + bc). We can think of a complex number as a point in the plane. (a. and we treat i as if it were an unknown. eiθ = cos θ + i sin θ and e−iθ = cos θ − i sin θ.2. all the standard properties of arithmetic hold. For example. Generally we just write (a.2. So whenever you see i2 you can replace it by −1. but it does have two complex roots. There is nothing imaginary or really complicated about complex numbers. Note that engineers often use the letter j instead of i for the square root of −1. Here we review some properties of complex numbers.2. Exercise 2. Here we will use the so-called Euler’s formula.1 Complex numbers and Euler’s formula It may happen that a polynomial has some complex roots. Because most properties of the exponential can be proved by looking at the Taylor series. Also. CONSTANT COEFFICIENT SECOND ORDER LINEAR ODES 53 2. i4 = 1.2. i def • (3 − 7i)(−2 − 9i) = · · · = −69 − 13i.2 (Euler’s formula). The property we just mentioned becomes i2 = −1. • (3 − 2i)(3 + 2i) = 32 − (2i)2 = 32 + 22 = 13. d) = (ac − bd. We can do this by just writing down the Taylor series and plugging in the complex number. It turns out that with this multiplication rule. and most importantly (0. • 1 = −i. the equation r2 + 1 = 0 has no real roots. We add complex numbers in the straightforward way.

check the identities: cos θ = eiθ + e−iθ 2 and sin θ = eiθ − e−iθ . That is. r2 = ±i . and y2 = e(α−iβ)x . We note that linear combinations of solutions are also solutions. However. 2. you will always get a pair of roots of the form α ± iβ. In this case we can still write the solution as y = C1 e(α+iβ)x + C2 e(α−iβ)x .2. . First let y1 = e(α+iβ)x Then note that y1 = eαx cos βx + ieαx sin βx.4: Using Euler’s formula. We would need to choose C1 and C2 to be complex numbers to obtain a real valued solution (which is what we are after).54 CHAPTER 2.2. In this case we can see that the roots are √ −b b2 − 4ac r1 . Use Euler on each side and deduce: cos 2θ = cos2 θ − sin2 θ and sin 2θ = 2 sin θ cos θ. HIGHER ORDER LINEAR ODES Exercise 2. the exponential is now complex valued. These are complex if b2 − 4ac < 0. 2a 2a As you can see. For a complex number a + ib we call a the real part and b the imaginary part of the number. 2i 2 Exercise 2.5: Double angle identities: Start with ei(2θ) = eiθ . In notation this is Re(a + bi) = a and Im(a + bi) = b. We also will need some notation. While there is nothing particularly wrong with this. we have the following theorem. Therefore.2. by quadratic formula the roots are −b± 2a −4ac .2 Complex roots So now suppose that the equation ay + by + cy = 0 has a characteristic equation ar2 + br + c = 0 √ b2 which has complex roots. y3 = 2 y1 − y2 y4 = = eαx sin βx. y2 = eαx cos βx − ieαx sin βx. it can make calculations harder and it would be nice to find two real valued solutions. Hence y1 + y2 = eαx cos βx. And furthermore they are real valued. Here we can use Euler’s formula. 2i are also solutions. It is not hard to see that they are linearly independent (not multiples of each other).

8: Solve y − 8y + 16y = 0 for y(0) = 2.2: Find the general solution of y + k2 y = 0.2.2.11: Find the general solution of y + 6y + 13y = 0.2. the roots are r = ±ik and by the theorem we have the general solution y = C1 cos kx + C2 sin kx. By the theorem we have the general solution y = C1 e3x cos 2x + C2 e3x sin 2x.2. we first plug in zero to get 0 = y(0) = C1 e0 cos 0 + C2 e0 sin 0 = C1 . Take the equation ay + by + cy = 0. The characteristic equation is r2 −6r +13 = 0. To find the solution satisfying the initial conditions. 2. y (0) = 1. By completing the square we get (r −3)2 +22 = 0 and hence the roots are r = 3 ± 2i. Exercise 2. 55 Example 2.2.2.2.2.9: Solve y + 9y = 0 for y(0) = 1. for a constant k > 0.2. then the general solution is y = C1 eαx cos βx + C2 eαx sin βx.2.7: Find the general solution of y + 9y − 10y = 0.2. Exercise 2. Exercise 2. . y (0) = 0. or C2 = 5. Example 2. We again plug in the initial condition and obtain 10 = y (0) = 2C2 . CONSTANT COEFFICIENT SECOND ORDER LINEAR ODES Theorem 2. Hence the solution we are seeking is y = 5e3x sin 2x.10: Find the general solution of 2y + 50y = 0.6: Find the general solution of 2y + 2y − 4y = 0. We differentiate y = 3C2 e3x sin 2x + 2C2 e3x cos 2x. y (0) = 10.3.2.3 Exercises Exercise 2. Therefore.3: Find the solution of y − 6y + 13y = 0. If the characteristic equation has the roots α ± iβ. Exercise 2. The characteristic equation is r2 + k2 = 0. Exercise 2. y(0) = 0. Hence C1 = 0 and hence y = C2 e3x sin 2x.

12: Find the general solution of y = 0 using the methods of this section.56 CHAPTER 2. Suppose now that (b − a)2 − 4ac < 0. We will see higher orders later. HIGHER ORDER LINEAR ODES Exercise 2.13: The method of this section applies to equations of other orders than two. Exercise 2. Exercise 2. .2. Find a formula for the general solution of ax2 y + bxy + cy = 0.6 on page 50. Try to solve the first order equation 2y + 3y = 0 using the methods of this section.14: Let us revisit Euler’s equations of Exercise 2. Hint: Note that xr = er ln x .2.2.1.

. y2 . . has only the trivial solution c1 = c2 = · · · = cn = 0. . and f are continuous functions and a. The important new concept here is the concept of linear independence. If we can write the equation with a nonzero constant.3 in EP In general. .4) for arbitrary constants C1 . In this case it is easier to state as follows.3 Higher order linear ODEs Note: 2 lectures. the methods are slightly harder. yn are linearly independent if c1 y1 + c2 y2 + · · · + cn yn = 0. . Suppose y1 . .. Theorem 2. 2. . and it is useful to understand this in detail. The functions y1 .2. Higher order equations do appear from time to time. . (2. So let us start with a general homogeneous linear equation y(n) + pn−1 (x)y(n−1) + · · · + p1 (x)y + p0 (x)y = 0. yn are solutions of the homogeneous equation (2..” The basic results about linear ODEs of higher order are essentially exactly the same as for second order equations with 2 replaced by n. . .2 and §3. b0 . Then y(x) = C1 y1 (x) + C2 y2 (x) + · · · + Cn yn (x). This concept is used in many other areas of mathematics and even other places in this course. §3. HIGHER ORDER LINEAR ODES 57 2.1 Linear independence When we had two functions y1 and y2 we said they were linearly independent if one was not the multiple of the other. but we will not dwell on these.3. . has exactly one solution y(x) satisfying the initial conditions y(a) = b0 . bn−1 are constants. . For constant coefficient ODEs. y(n−1 )(a) = bn−1 . We also have the existence and uniqueness theorem for nonhomogeneous linear equations. but it is a general assumption of modern physics that the world is “second order.. You can always use the methods for systems of linear equations we will learn later in the course to solve higher order constant coefficient equations.4). b1 . way say they are linearly dependent.2 (Existence and uniqueness). .1 (Superposition). Same idea holds for n functions. The equation y(n) + pn−1 (x)y(n−1) + · · · + p1 (x)y + p0 (x)y = f (x).4) Theorem 2. .3. . Cn . most equations that appear in applications tend to be second order. also solves (2.3. say c1 0. y (a) = b1 . then we can solve for y1 as a linear combination of the others. . . Suppose p0 through pn−1 . If the functions are not linearly independent. . y2 .3. .

Now differentiate both sides c2 e x + 2c3 e2x = 0. and cosh x are linearly dependent. Hence our equation becomes c1 e x + c2 e2x = 0. Set x = 0 to get the equation c1 + c2 + c3 = 0. therefore. That might be a lot of computation.3. Let us first divide by e x for simplicity. Finally divide by e x again and differentiate to get 4c3 e2x = 0. it is identically zero and c1 = c2 = c3 = 0 and the functions are linearly independent.2: On the other hand. Use rules of exponentials and write z = e x . Most textbooks (including [EP] and [F]) introduce Wronskians. Rinse. Then we have c1 z + c2 z2 + c3 z3 = 0. Let us write down c1 e x + c2 e2x + c3 e3x = 0. c2 and c3 . Therefore. c1 + c2 e x + c3 e2x = 0. It is clear that c3 is zero. e2x .58 CHAPTER 2. After taking the limit we see that c3 = 0. Let us give several ways to do this. There is no one good way to do it. Let us try another way. HIGHER ORDER LINEAR ODES Example 2. What we could do is divide through by e3x to get c1 e−2x + c2 e−x + c3 = 0.1: Show e x . This is true for all x. let x → ∞. We can also take derivatives of both sides and then evaluate. This equation has to hold for all x. Example 2. All of these methods are perfectly valid. but that is really not necessary here. Then c2 must be zero as c2 = −2c3 and c1 must be zero because c1 +c2 +c3 = 0. e3x are linearly independent. Simply apply definition of the hyperbolic cosine: cosh x = e x + e−x . 2 . The left hand side is is a third degree polynomial in z. the functions e x . Write c1 e x + c2 e2x + c3 e3x = 0. e−x . and set x = 0 to get c2 + 2c3 = 0.3. repeat! How about yet another way. We could evaluate at several different x to get equations for c1 . Write c1 e x + c2 e2x + c3 e3x = 0. It can either be identically zero or have at most 3 zeros.

1. For example.3: Find the general solution to y − 3y − y + 3y = 0.3. these are easy to see. We just need to find more solutions. 1. Then r3 − 3r2 − r + 3 = 0. The last root is then reasonably easy to find. 2 = y (0) = −C1 + C2 + 3C3 . if you plug in −2 into our polynomial you get −15.5). A good strategy at first is to look for roots −1. There is no formula for higher degree polynomials. They are linearly independent as can easily be checked. When check our polynomial we note that r1 = −1 and r2 = 1 are roots. Computers are pretty good at finding roots approximately for reasonable size polynomials. The song and dance is exactly the same as it was for second order. or 0. The trick now is to find the roots. y (0) = 2. Hence the general solution is y = C1 e−x + C2 e x + C3 e3x . There are always n roots for an nth degree polynomial. That means there is a root between −2 and 0 because the sign changed. Or you can try plugging in. Best place to start is to plot the polynomial and check where it is zero. and see if you get a hit.3.2. You should check that r3 = 3 is a root.2 Constant coefficient higher order ODEs When we have a higher order constant coefficient homogeneous linear equation.3. That does not mean that the roots do not exist. Example 2. There are some signs that you might have missed a root. Sometimes it is a good idea to just start plugging in numbers r = −2. . If the equation is nth order we need to find n linearly independent solutions. . −1. . e x and e3x are solutions to (2. −1. 2. HIGHER ORDER LINEAR ODES 59 2. and y (0) = 3. It is best seen by example. There is a formula for degree 3 and 4 equations but it is very complicated. Try: y = erx . In our case we see that 3 = (−r1 )(−r2 )(−r3 ) = (1)(−1)(−r3 ) = r3 . Suppose we were given some initial conditions y(0) = 1. We note that the constant term in a polynomial is the multiple of the negations of all the roots because r3 − 3r2 − r + 3 = (r − r1 )(r − r2 )(r − r3 ). and there is 3 of them. We plug in and get r3 erx − 3r2 erx − rerx + 3erx = 0.5) . We divide out by erx . They might be repeated and they might be complex. This leads to 1 = y(0) = C1 + C2 + C3 . 0. Hence we know that e−x . 3 = y (0) = C1 + C2 + 9C3 . If you plug in 0 you get 3. which happens to be exactly the number we need. (2.

. suppose that we have real roots. You could also have asked a computer or an advanced calculator for the roots. Example 2. . in the spirit of the second order solution we note the solutions y= erx . It is not so easy in general. . The corresponding solution is (c0 + c1 x + · · · + ck−1 xk ) eαx cos βx + (d0 + d1 x + · · · + dk−1 xk ) eαx sin βx. Hence the roots given with multiplicity are r = 0. Similarly to the second order case we can handle complex roots and we really only need to talk about how to handle repeated complex roots. Complex roots always come in pairs r = α ± iβ. 1.3. . The only sensible way to solve a system of equations such as this is to use matrix algebra.2. Thus the general solution is y = (c0 + c1 x + c2 x2 ) e x + terms coming from r = 1 c4 from r = 0 . ck−1 . By inspection we note that r4 − 3r3 + 3r2 − r = r(r − 1)3 . HIGHER ORDER LINEAR ODES It is possible to find the solution by high school algebra. dk−1 are arbitrary constants. 1. . 4 4 Next. see § 3. xerx . .4: Solve y(4) − 3y + 3y − y = 0. d0 . The way we solved the characteristic equation above is really by guessing or by inspection. . 2 . We note that the characteristic equation is r4 − 3r3 + 3r2 − r = 0. x2 erx . C2 = 1 and C3 = 1 . For now we note that the solution is C1 = − 1 . Let us say we have a root r repeated k times. where c0 . . We take a linear combination of these solutions to find the general solution. but they are repeated. The characteristic equation is r4 − 4r3 + 8r2 − 8r + 4 = 0. Example 2. Hence the roots are 1 ± i with multiplicity 2. xk−1 erx . (r − 1)2 + 2 = 0. 1. . In this case. but it would be a pain.60 CHAPTER 2. With this the specific solution is 4 4 −1 −x 1 e + e x + e3x .5: Solve y(4) − 4y + 8y − 8y + 4y = 0. .3. . . Hence the general solution is y = (c0 + c1 x) e x cos x + (d0 + d1 x) e x sin x. (r2 − 2 + 2)2 = 0.

HIGHER ORDER LINEAR ODES 61 2. find the linear combination that works. if not. g(x) = cos x. show it.3.3. find the linear combination that works. a) Find such an equation. Exercise 2.3. if not. . y = 2e4x x cos x. Are f (x). find the linear combination that works. b) Find its general solution. find the linear combination that works.3. Exercise 2.9: Are x. Exercise 2.3.1: Find the general solution for y − y + y − y = 0. Exercise 2.3. g(x). and h(x) = cos x. and h(x) linearly independent? If so.5.3 Exercises Exercise 2. g(x).5: Suppose that a fourth order equation has the following solution. xe x .3. and h(x) linearly independent? If so.3: Find the general solution for y + 2y + 2y = 0. Exercise 2. and h(x) = sin x. Exercise 2.10: Are e x . Exercise 2. Exercise 2. show it.2. and x2 e x linearly independent? If so.3. g(x) = e x + cos x. and x4 linearly independent? If so.3. show it. show it.3. x2 . Are f (x). if not.2: Find the general solution for y(4) − 5y + 6y = 0. Exercise 2. a) Find such an equation.3. b) Find the initial conditions which the given solution satisfies.3.4: Suppose that the characteristic equation for an equation is (r − 1)2 (r − 2)2 = 0.6: Find the general solution for the equation of Exercise 2.8: Let f (x) = 0. if not.3.7: Let f (x) = e x − cos x.

It would be good if someone did the math before you jump off right? Let us just give 2 other examples. Let x be the displacement of the mass (x = 0 is the rest position). it is kx in the negative direction. if c > 0. Suppose we have a mass m > 0 (in kilograms for instance) connected by a spring with spring constant k > 0 (in Newtons per meter perhaps) to a fixed wall. there is some friction in the system and this is measured by a constant c ≥ 0. Similarly the amount of force exerted by friction is proportional to the velocity of the mass.62 CHAPTER 2. There is also an electric source (such as a battery) giving a voltage of E(t) volts at time t (measured in seconds). For example. There is a resistor with a resistance of R ohms. and (iv) undamped. This system is appears in lots of applications even if it does not at first seems like it. The relation between the two is .4. (iii) damped. Let Q(t) be the charge in columbs on the capacitor and I(t) be the current in the circuit. Finally. a bungee jump setup is essentially a spring and mass system (you are the mass). if F 0 (F not identically zero). and a capacitor with a capacitance R of C farads. Suppose that you have the pictured RLC circuit. if c = 0. By Newton’s second law we know that force equals mass times acceleration and hence mx + cx + kx = F(t). The force exerted by the spring is proportional to the compression of the spring by Hooke’s law. Here is an example for electrical engineers.4 Mechanical vibrations Note: 2 lectures. With x growing to the right (away from the wall). if F ≡ 0. §3. We say the motion is (i) forced. there is some external force F(t) acting on the mass. (ii) unforced or free. Many real world scenarios can be simplified to a mass on a spring. HIGHER ORDER LINEAR ODES 2. We set up some terminology about this equation.1 Some examples k m damping c F(t) Our first example is a mass on a spring. Therefore. 2. This is a linear second order constant coefficient ODE. Furthermore. an C E L inductor with an inductance of L henries.4 in EP We want to look at some applications of linear second order constant coefficient equations.

This can be seen by looking at the graph. Mass is replaced by the inductance. Furthermore. This is mg sin θ in the opposite direction. C This is an nonhomogeneous second order constant coefficient linear equation. R. -1. The change in voltage becomes the forcing function.5 0.5 (in radians) the graphs of sin θ and θ are almost the same. Hence for constant voltage this is an unforced motion.5 -0. The m curiously θ cancels from the equation. Further.1: The graphs of sin θ and θ (in radians). as L. The position of the mass is replaced by the current.2. This has to be equal to the tangential component of the force given L by the gravity.5 1. Note that acceleration is Lθ and mass is m.5 0. MECHANICAL VIBRATIONS 63 Q = I. damping is replaced by resistance and the spring constant is replaced by one over the capacitance.0 0.0 0.5 0. In Figure 2. If we differentiate we get 1 LI (t) + RI (t) + I(t) = E (t). For small θ we have that approximately sin θ ≈ θ. Elementary physics mandates that the equation is of the form θ + g sin θ = 0. We wish to find an equation for the angle θ(t). by elementary principles we have that LI +RI + Q/C = E.0 -1.5 -1. .0 -0.5 1.0 -1. Now we make our approximation. and C are all positive.4.5 < θ < 0. where force equals mass times acceleration.5 0. Suppose we have a mass m on a pendulum of length L.0 1.0 Figure 2.0 -0.0 0. Our next example is going to behave like a mass and spring system only approximately.0 -0. L This equation can be derived using Newton’s second law.0 0. this system behaves just like the mass and spring system. Let g be the force of gravity.0 1.1 we can see that for approximately −0.

First let us start with undamped motion and hence c = 0. Therefore. when the swings are small. so we have the equation mx + kx = 0. Therefore. But for reasonably short periods of time and small swings (for example if the length of the pendulum is very large). this is not true for a pendulum.2 Free undamped motion In this section we will only consider free or unforced motion. the behavior is reasonably close. HIGHER ORDER LINEAR ODES Therefore. √ B It is not hard to compute that C = A2 + B2 and tan γ = A .1: Justify this identity and verify the equations for C and γ. Also we will see that in a mass spring system. the amplitude is independent of the period.4. If we look at the form of the solution x(t) = C cos(ω0 t − γ) k m we can write the equation as . 0 The general solution to this equation is x(t) = A cos ω0 t + B sin ω0 t. θ is always small and we can model the behavior by the simpler linear equation g θ + θ = 0. First we notice that by a trigonometric identity we have that for two other constants C and γ we have A cos ω0 t + B sin ω0 t = C cos(ω0 t − γ). as we cannot yet solve nonhomogeneous equations. and let C and γ be our arbitrary constants. L Note that the errors that we get from the approximation build up so over a very long time. The constants C and γ have very nice interpretation. 2. the second form is much more natural. In real world problems it is very often necessary to make these types of simplifications.4. Exercise 2. While it is generally easier to use the first form with A and B to find these constants given the initial conditions.64 CHAPTER 2. the behavior might change more substantially. we can write x(t) = C cos(ω0 t − γ). If we divide out by m and let ω0 be a number such that ω2 = 0 x + ω2 x = 0. it is good to understand both the mathematics and the physics of the situation to see if the simplification is valid in the context of the questions we are trying to answer.

We know that tan γ = B/A = 2. = √ 4 = 2. 2π The period of the motion is one over the frequency (in cycles per unit time) and hence ω0 . Example 2. Then x (t) = −0. the mass was moving forward (in the positive direction) at 1m/s.4. Therefore. For the free undamped motion. This makes it much easier to figure out A and B. In the example. That is the amount of time it takes to complete one full oscillation. This gives us the initial conditions.5 sin 2t + B cos 2t. not in cycles per unit time as is the usual measure of frequency.318 The general solution is x(t) = A cos 2t + B sin 2t. But because we know one cycle is 2π. A note about the word angular before the frequency. Letting x (0) = 1 we get √ √ B = 1.5 cos 2t + sin 2t.107. ω0 is given in radians per unit time. We call ω0 is called the natural (angular) frequency. and that 2π is a matter of taste. A plot is shown in Figure 2.2 on the following page. MECHANICAL VIBRATIONS 65 We can see that the amplitude is C. Unfortunately if you remember. It is simply a matter of where we put the constant 2π.4. The units are the mks units (meters-kilograms-seconds). Let us compute the phase shift.5. we still . and gets loose in the crash and starts oscillating. x (0) = 1. So the equation with initial conditions is 2x + 8x = 0. Hence the angular frequency is 2.5. The solution is x(t) = 0. Suppose the whole setup is on a truck which was travelling at 1m/s and suddenly crashes and hence stops. the usual frequency is given by ω0 . We can directly compute ω0 = k m x(0) = 0. this corresponds to the initial conditions x(0) = A and x (0) = B. The motion is usually called simple harmonic motion. The usual 1 2 frequency in Hertz (cycles per second) is 2π = π ≈ 0. rather than the amplitude and phase shift. Well the setup means that the mass was at half a meter in the positive direction during the crash and relative to the wall the spring is mounted to.1: Suppose that m = 2kg and k = 8N/m. It just shifts the graph left or right.5 meters forward from the rest position. and γ is the so-called phase shift. the amplitude is C = A2 + B2 = 1.2.118. we have already found C. if the solution is of the form x(t) = A cos ω0 t + B sin ω0 t. The mass was rigged 0. Letting x(0) = 0 means A = 0. ω0 is the (angular) frequency. We take the arctangent of 2 and get approximately 1.25 ≈ 1. What is the frequency of the resulting oscillation and what is the amplitude.

0 2. Since both B and A are positive.0 0.0 2. as x + 2px + ω2 x = 0. 0 k .66 0.2: Simple undamped oscillation. need to check if this γ is in the right quadrant. 2m The form of the solution depends on whether we get complex or real roots and this depends on the sign of c 2 k c2 − 4km p2 − ω2 = − = .4.107 radians really is in the first quadrant.5 -0. Note: Many calculators and computer software do not only have the atan function for arctangent.0 7. This function takes two arguments. and 1. 0 where ω0 = The characteristic equation is r2 + 2pr + ω2 = 0.0 0. Let us rewrite the equation mx + cx + kx = 0. m p= c .3 Free damped motion Let us now focus on damped motion.0 0. then γ should be in the first quadrant. HIGHER ORDER LINEAR ODES 5. 0 Using the quadratic formula we get that the roots are r = −p ± p2 − ω2 .5 CHAPTER 2.5 -1.0 Figure 2.0 -1. but also what is sometimes called atan2.0 7.5 10.5 10.0 1.0 1.5 5. B and A and returns a γ in the correct quadrant for you.5 0. 0 2m m 4m2 .0 -0. 2.5 0.

r2 are negative. You are always a little bit underdamped or a little bit overdamped. Then x0 x(t) = r1 er2 t − r2 er1 t .0 1. we try to solve 0 = C1 er1 t + C2 er2 t . Example 2. there is one root of multiplicity 2 and this root is −p. To see this fact. After all a critically damped system is in some sense a limit of overdamped systems. Hence the solution is x(t) = C1 er1 t + C2 er2 t .2.3.3: Overdamped motion for several different initial conditions. r1 − r2 It is not hard to see that this satisfies the initial conditions. as always negative. Do note that no oscillation happens. and hence −C1 = e(r2 −r1 )t . Critical damping When c2 − 4km = 0. p2 − ω2 is always less than p so −p ± 0 p2 − ω2 is 0 0 1.4. C2 This has at most one solution t ≥ 0. . our solution is x(t) = C1 e−pt + C2 te−pt . Notice that both are negative. 0 Overdamping 67 When c2 − 4km > 0. For a few sample plots for different initial conditions see Figure 2. Since these equations are really only an approximation to the real world. it is only a place you can reach in theory.5 0.0 0 25 50 75 100 Figure 2.4. in reality we are never critically damped.5 0. That is x(0) = x0 and x (0) = 0. The behavior of a critically damped system is very similar to an overdamped system. there are two distinct real roots r1 and r2 .5 1. It is better not to dwell on critical damping. x(t) → 0 as t → ∞. In fact the graph will cross the x axis at most once. In this case. Note that since r1 .0 0. In this case. This means that the mass will just tend towards the rest position as time goes to infinity.5 25 50 75 100 1. we say the system is overdamped.2: Suppose the mass is released from from rest. So C1 er1 t = −C2 er2 t . MECHANICAL VIBRATIONS The sign of p2 − ω2 is the same as the sign of c2 − 4km.0 0. Therefore. we say the system is critically damped.

The envelope curves become flatter and flatter as p goes to 0. 2.4. In the figure we also show the envelope curves Ce−pt and −Ce−pt . we say the system is underdamped.4: Underdamped motion with the envelope curves shown. . overdamped or critically damped? c) If the system is not critically damped.5 -0.3: Do Exercise 2. and damping constant c = 1.5 where ω1 = ω2 − p2 . you are really interested in computing the envelope curve so that you do not hit the concrete with your head. b) Is the system underdamped. When we change the damping just a little bit. In this case.4.68 Underdamping CHAPTER 2. and c = 12. The solution is the oscillating plot between the two curves. An example plot is given in Figure 2.4. Or x(t) = Ce −pt cos(ω1 t − γ). On the other hand when c becomes smaller.2: Consider a mass and spring system with a mass m = 2.2 for m = 3. Exercise 2. Our solution is 0 -1. a) Set up and find the general solution of the system.0 = −p ± iω1 . This makes sense since if we keep changing c at some point the solution should start looking like the solution for critical damping or overdamping which do not oscillate at all.4. find a c which makes the system critically damped. -0.0 0 5 10 15 20 25 30 -1.0 5 10 15 20 25 30 1. For example if you are bungee jumping. Finally note that the angular pseudo-frequency (we do not call it a frequency since the solution is not really a periodic function) ω1 becomes lower when the damping c (and hence p) becomes larger. r = −p ± = −p ± p2 − ω2 0 √ −1 ω2 − p2 0 0 1.4 Exercises Exercise 2. the roots are complex.0 0. The envelope curves give the maximum amplitude of the oscillation at any given point in time. k = 12. ω1 approaches ω0 (it is always smaller) and the solution looks more and more like the steady periodic motion of the undamped case. The phase shift γ just shifts the graph left or right but within the envelope curves (the envelope curves do not change of course if γ changes). spring constant k = 3. we do not expect the behavior to change dramatically.5 0.0 0. Figure 2.4. Note that we still have that x(t) → 0 as t → ∞.5 0. HIGHER ORDER LINEAR ODES When c2 − 4km < 0.0 x(t) = e−pt (A cos ω1 t + B sin ω1 t) .

4. Assume no friction. MECHANICAL VIBRATIONS 69 Exercise 2. suppose you do not know the spring constant.8 Hz (cycles per second) what is the mass.4: Using the mks units (meters-kilograms-seconds) Suppose you have a spring of with spring constant 4N/m. a) Find k (spring constant) and c (damping constant). a) You count and find that the frequency is 0.4.39 Hz. Exercise 2. what is the weight? . You put each in motion on your spring and measure the frequency. c) For an unknown mass you measured 0.5: Suppose we add possible friction to Exercise 2.8 Hz.2 Hz. For the 1 kg weight you measured 0. but you have two reference weights 1 kg and 2 kg to calibrate your setup. b) Find a formula for the mass in terms of the frequency in Hz. Further.4. Suppose you you place the mass on the spring and put it in motion.2. You want to use it to weight items. for the 2 kg weight you measured 0. b) Find a formula for the mass m given the frequency ω in Hz.4.4.

The solution y = yc + y p includes all solutions to (2. Lw = L(y p − y p ) = Ly p − L˜ p = (2x + 1) − (2x + 1) = 0.6). like the forcing function for the mechanical vibrations of last section. Note that L is a linear operator and so we could just write.70 CHAPTER 2. We will generally write Ly = 2x + 1 instead when the operator is not important.5 in EP 2.6) Note that we still say this equation is constant coefficient equation.6) in some way and then we know that y = yc + y p is the general solution to (2.5 Nonhomogeneous equations Note: 2 lectures.1 Solving nonhomogeneous equations You have seen how to solve the linear constant coefficient homogeneous equations. we have an equation such as y + 5y + 6y = 2x + 1. We find the general solution yc to the associated homogeneous equation y + 5y + 6y = 0.6) differ by a solution to the ˜ homogeneous equation (2. Suppose you find a different particular solution y p . using the operator notation the calculation becomes simpler. We call yc the complementary solution. (2. and you might find a different one by a different method (or by guessing) and still get the right general solution to the whole problem even if it looks different and the constants you will have to choose given the initial conditions will be different. since yc is the general solution to the homogeneous equation. ˜ y So w = y p − y p is a solution to (2. So any two solutions of (2. (2. This usually corresponds to some outside input to the system we are trying to model. Note that y p can be any solution.7) . We only require constants in front of the y . Then plug w into the left hand side of the equation and get ˜ w + 5w + 6w = (y p + 5y p + 6y p ) − (˜ p + 5˜ p + 6˜ p ) = (2x + 1) − (2x + 1) = 0. y y y In other words.6). §3. Then write ˜ the difference w = y p − y p . HIGHER ORDER LINEAR ODES 2.7). The way we solve (2. and y.7). Moral of the story is that you can find the particular solution in any old way.5. Now suppose that we drop the requirement of homogeneity. We also find a single particular solution y p to (2. y .6) is as follows. That is.

1: Check that y really solves the equation. Note: A common mistake is to solve for constants using the initial conditions with yc and only adding the particular solution y p after that. we plug in y + 5y + 6y = (Ax + B) + 5(Ax + B) + 6(Ax + B) = 0 + 5A + 6Ax + 6B = 6Ax + (5A + 6B). For example: y + 2y + 2y = cos 2x Let us just find y p in this case.6).5.2. and the left hand side of the equation will be a polynomial if we let y be a polynomial of the same degree. So we will try y = Ax + B.6) is y = C1 e−2x + C2 e−3x + 3x − 1 . So we guess y = A cos 2x + B sin 2x. Similarly a right hand side consisting of exponentials or sines and cosines can be handled. We notice that we may have to also guess sin 2x since derivatives of cosine are sines. So A = 1 and B = 3 the complementary problem we get (Exercise!) −1 . You need to first compute y = yc +y p and only then solve for the constants using the initial conditions.2 Undetermined coefficients So the trick is to somehow in a smart way guess a solution to (2.5. 9 Solving yc = C1 e−2x + C2 e−3x . 9 That means that y p = 1 3 x− 1 9 = 3x−1 . So 6Ax + (5A + 6B) = 2x + 1. Note that 2x+1 is a polynomial.5. 3 9 9 9 Exercise 2. NONHOMOGENEOUS EQUATIONS 71 2. Hence the general solution to (2. . That will not work. 9 Hence our solution is 1 2 3x − 1 3e−2x − 2e−3x + 3x − 1 y(x) = e−2x − e−3x + = . 9 1 . 3 Now suppose we are further given some initial conditions y(0) = 0 and y (0) = 1 y = −2C1 e−2x − 3C2 e−3x + 3 Then 0 = y(0) = C1 + C2 − We solve to get C1 = 1 3 First find 1 9 1 1 = y (0) = −2C1 − 3C2 + 3 3 and C2 = −2 .

We note also that using the multiplication rule for differentiation gives us a way to combine these guesses. Now we can go forward and try it. That is. So −2A + 4B = 1 and 2A + B = 0 and hence A = −1 and B = 1 . We will plug in and then hopefully get equations that we can solve for A. It could be that our guess actually solves the associated homogeneous equation. As you can see this can make for a very long and tedious calculation very quickly. C’est la vie! There is one hiccup in all this. that is a good place to start. 10 And in a similar way if the right hand side contains exponentials we guess exponentials. Really if you can guess a form for y such that Ly has all the terms needed to for the right hand side. Note that y = Ae3x + 3Axe3x and y = 4Ae3x + 9Axe3x . if the equation is (where L is a linear constant coefficient operator) Ly = e3x we will guess y = Ae3x . We modify our guess to y = Axe3x and notice there is no duplication. For example for: Ly = (1 + 3x2 ) e−x cos πx we will guess y = (A + Bx + Cx2 ) e−x cos πx + (D + Ex + F x2 ) e−x sin πx. B. The trick in this case is to multiply our guess by x until we get rid of duplication with the complementary solution. F. Since the left hand side must equal to right hand side we group terms and we get that −4A + 4B + 2A = 1 and −4B − 4A + 2B = 0. We would love to guess y = Ae3x . suppose we have y − 9y = e3x . For example.72 Plug in to the equation and we get CHAPTER 2. So y − 9y = 4Ae3x + 9Axe3x − 9Axe3x = 4Ae3x . That is first we compute yc (solution to Ly = 0) yc = C1 e−3x + C2 e3x and we note that the e3x term is a duplicate with our desired guess. but if we plug this into the left hand side of the equation we get y − 9y = 9Ae3x − 9Ae3x = 0 e3x . There is no way we can choose A to make the left hand side be e3x . C. So 10 5 yp = − cos 2x + 2 sin 2x . D. E. HIGHER ORDER LINEAR ODES −4A cos 2x − 4B sin 2x − 4A sin 2x + 4B cos 2x + 2A cos 2x + 2B sin 2x = cos 2x.

5. . 4 Now what about the case when multiplying by x does not get rid of duplication. But no more! Multiplying too many times will also make the process not work. 2 sec2 x tan x. In this case find u that solves Lu = e2x and v that solves Lv = cos x (do each terms separately). We present the method of variation of parameters which will handle all the cases Ly = f (x) provided you can solve certain integrals. So guessing y = Axe3x would not get us anywhere. We have Ly = L(u + v) = Lu + Lv = e2x + cos x. Ly = y + y = tan x. such as Ly = e2x + cos x. Note that yc = C1 e3x + C2 xe3x . Let us try to solve the example. Note that each new derivative of tan x looks completely different and cannot be written as a linear combination of the previous derivatives. For simplicity we will restrict ourselves to second order equations. Consider y + y = tan x. etc. you want to multiply your guess by x until all duplication is gone. 2.2.3 Variation of parameters It turns out that undetermined coefficients will work for many basic problems that crop up. but the method will work for higher order equations just as well (but the computations will be more tedious). Really it only works when the right hand side of the equation Ly = f (x) has only finitely many linearly independent derivatives. so that you can write a guess that consists of them all. In this case you want to guess y = Ax2 e3x . . Thus we can now write the general solution as y = yc + y p = C1 e−3x + C2 e3x + 1 3x xe . Basically. This equation calls for a different method. . Then we note that if y = u + v. Finally what if the right hand side is several terms. See Edwards and Penney [EP] for more detailed and complete information on undetermined coefficients. We get sec2 x. then Ly = e2x + cos x.5. It does not work all the time. This is because L is linear and this is just superposition again. For example. y − 6y + 9 = e3x . NONHOMOGENEOUS EQUATIONS 73 1 Then we note that this is supposed to be e3x and hence we find that 4A = 1 and so A = 4 . But some equations are a bit tougher.

we impose that (u1 y1 + u2 y2 ) = 0. For y to satisfy Ly = f (x) we must have f (x) = u1 y1 + u2 y2 . −u1 sin x + u2 cos x = tan x.) So y = (u1 y1 + u2 y2 ) − (u1 y1 + u2 y2 ). y = (u1 y1 + u2 y2 ) + (u1 y1 + u2 y2 ). Now to try to find a solution to the nonhomogeneous equation we will try y p = y = u1 y1 + u2 y2 . but it is better to just repeat what we do below. You will always get these formulas for any Ly = f (x). In our case the two equations become u1 cos x + u2 sin x = 0. y = u1 y1 + u2 y2 . First compute (note the product rule!) y = (u1 y1 + u2 y2 ) + (u1 y1 + u2 y2 ). Since we can still impose at our will to simplify computations (we have two unknown functions. We can now solve for u1 and u2 in terms of f (x). That gives us one condition on the functions u1 and u2 . Hence u1 cos x sin x + u2 sin2 x = 0. where u1 and u2 are functions and not constants. u1 y1 + u2 y2 = f (x). There is a general formula for the solution you can just plug into. Now since y1 and y2 are solutions to y + y = 0. (Note: If the equation was instead y + ay + by = 0 we would have yi = −ayi − byi . . Now note that y = (u1 y1 + u2 y2 ) − y.74 CHAPTER 2. This is reasonably simple we get yc = C1 y1 +C2 y2 where y1 = cos x and y2 = sin x. This makes computing the second derivative easier. and hence y + y = Ly = u1 y1 + u2 y2 . −u1 sin x cos x + u2 cos2 x = tan x cos x = sin x. we know that y1 = −y1 and y2 = −y2 . HIGHER ORDER LINEAR ODES First we find the complementary solution Ly = 0. y1 and y2 . So what we need to solve are the two equations (conditions) we imposed on u1 and u2 u1 y1 + u2 y2 = 0. so we are allowed two conditions). We are trying to satisfy Ly = tan x.

4 Exercises Exercise 2.5.5. It is OK to leave the answer as a definite integral. u2 = sin x. c) Are the two solutions you found the same? What is going on? Exercise 2. NONHOMOGENEOUS EQUATIONS And thus u2 (sin2 x + cos2 x) = sin x.5. . Exercise 2. cos 1 (sin x) − 1 ln + sin x. Exercise 2. Exercise 2.5. So our particular solution is y p = u1 y1 + u2 y2 = (sin x) − 1 1 + cos x sin x − cos x sin x = cos x ln 2 (sin x) + 1 = The general solution to y + y = tan x is y = C1 cos x + C2 sin x + (sin x) − 1 1 cos x ln .2: Find a particular solution of y − y − 6y = e2x . 2 (sin x) + 1 1 (sin x) − 1 cos x ln . y (0) = 1.5.5: Setup the form of the particular solution but do not solve for the coefficients for y(4) − 2y + y = e x . b) Find a particular solution using undetermined coefficients. Exercise 2.5.6: Setup the form of the particular solution but do not solve for the coefficients for y(4) − 2y + y = e x + x + sin x. Exercise 2. u1 = − sin2 x x = − tan x sin x.2.4: Solve the initial value problem y + 9y = cos 3x + sin 3x for y(0) = 2. u1 = u2 = u1 dx = u2 dx = − tan x sin x dx = sin x dx = − cos x.7: a) Using variation of parameters find a particular solution of y − 2y + y = e x .5. 2 (sin x) + 1 75 Now we need to integrate u1 and u2 to get u1 and u2 .8: Find a particular solution of y − 2y + y = sin x2 .3: Find a particular solution of y − 4y + 4y = e2x .5. 2 (sin x) + 1 2.5.

We have the equation mx + kx = F0 cos ω t. Now try the solution x p = A cos ω t and solve for A.6 in EP Before reading the lecture. k where ω0 = m .edu/iode/. xp = 2 m(ω0 − ω2 ) We leave it as an exercise to do the algebra required here. Let us suppose that ω0 ω. This has the complementary solution (solution to the associated homogeneous equation) xc = C1 cos ω0 t + C2 sin ω0 t. you will find that its coefficient will be zero (I cannot find a rhyme). k m damping c F(t) Let us return back to the mass on a spring example. Once we will learn about Fourier series we will see that we will essentially cover every type of periodic function by considering F(t) = F0 cos ω t (or sin instead of cosine. The general solution is x = C1 cos ω0 t + C2 sin ω0 t + F0 cos ω t. it may be good to first try Project III from the IODE website: http://www. §3. So we solve as in the method of undetermined coefficients with the guess above and we find that F0 cos ω t.uiuc.76 CHAPTER 2. such as noncentered rotating parts. the setup is again. ω0 is said to be the natural frequency (angular). m is mass. In the mass on a spring example. the calculations will be essentially the same). Note that we need not have sine in our trial solution as on the left hand side we will only get cosines anyway. m(ω2 − ω2 ) 0 . 2.math. That is. Usually what we are interested in is some periodic forcing. c is friction. we will consider the equation mx + cx + kx = F(t) for some nonzero F(t).6 Forced oscillations and resonance Note: 2 lectures. HIGHER ORDER LINEAR ODES 2. It is essentially the frequency at which the system “wants to oscillate” without external interference. k is the spring constant and F(t) is an external force acting on the mass.6. We will now consider the case of forced oscillations. If you include a sine it is fine. or perhaps even loud sounds or other sources of periodic force.1 Undamped forced motion and resonance First let us consider undamped (c = 0) motion as this is simpler.

2 16 − π 2 2 Notice that x is now a high frequency wave modulated by a low frequency wave.5 = 4. First we read off the parameters: ω = π. We write the equation F0 x + ω2 x = cos ω t m Then plugging into the left hand side we get 0 5 10 15 20 -10 -10 0 0 -5 -5 2Bω cos ω t − 2Aω sin ω t = F0 cos ω t m . 16 − π2 Notice the “beating” behavior in Figure 2.2. Well let us compute. It is easy to see that C2 = 0 and C1 = Hence 20 x= (cos πt − cos 4t).5: Graph of 16−π2 (cos πt − cos 4t). ω0 = So the general solution is x = C1 cos 4t + C2 sin 4t + 20 cos πt. undetermined coefficients. we need to try x p = At cos ω t+ Bt sin ω t. Example 2.1: Suppose 0. 16 − π2 −20 . Therefore.6. Now suppose that ω0 = ω. First use the trigonometric identity 0 5 10 15 10 20 10 2 sin A+B A−B sin = cos B − cos A 2 2 5 5 to get that 20 4−π 4+π x= 2 sin t sin t . Obviously in this 20 case we cannot try the solution A cos ω t and use Figure 2. − ω2 ) 77 Hence it is a superposition of two cosine waves at different frequencies. This time we need the sin term since two derivatives of t cos ω t do contain sines.6. F0 = 10.5x + 8x = 10 cos πt and let us suppose that x(0) = 0 and x (0) = 0. In this case we see that cos ω t solves the homogeneous equation. m = 1.5. 16−π2 8 0. Now solve for C1 and C2 using the initial conditions. FORCED OSCILLATIONS AND RESONANCE or written another way x = C cos(ω0 t − γ) + m(ω2 0 F0 cos ω t.

6 we see the graph with C1 = C2 = 0.0 0. Tacoma Narrows Bridge Failure. 59(2). (2. For example. 2mω CHAPTER 2. F0 = 2.8) for some c > 0. In Figure 2. remember when as a kid you could start swinging by just moving back and forth on the swing seat in the correct “frequency”? You were trying to achieve resonance. Scanlan.5 -2.ketchum. HIGHER ORDER LINEAR ODES Our particular solution is F0 2mω t sin ω t and our general solution is F0 t sin ω t. 2mω 5 10 15 20 x = C1 cos ω t + C2 sin ω t + The important term is the last one (the particular solution we found). A common (but wrong) example of destructive force of resonance is the Tacoma Narrows bridge failure.5 0. 1991. and Undergraduate Physics Textbooks.0 0 5 10 15 20 2. 1 Figure 2. By forcing the system in just the right frequency we produce very wild oscillations. American Journal of Physics. there was an altogether different phenomenon at play there∗ . It turns out. ω = π. There is of course some damping. 118–124.org/billah/Billah-Scanlan. which becomes smaller and smaller in proportion to the oscillations of the last term as t gets larger. The force of each one of your moves was small but after a while it produced large swings. On the other hand resonance can be destructive.pdf .5 -5. Resonance. The first two 2mω 0 5. http://www. We let p= ∗ c 2m ω0 = k .0 -5.0 -2.6: Graph of π t sin πt. 2. m = 1. That is our equation becomes mx + cx + kx = F0 cos ω t. So figuring out the resonance frequency can be very important. In fact F0 t it oscillates between 2mω and −F0 t . We can see that this term grows without bound as t → ∞. After an earthquake some buildings are collapsed and others may be relatively undamaged. We have solved the homogeneous problem before.6.5 2. Billah and R. This is due to different buildings having different resonance frequencies.2 Damped forced motion and practical resonance Of course in real life things are not as simple as they were above.78 Hence A = 0 and B = F0 . m K. This kind of behavior is called resonance or sometimes pure resonance and is sometimes desired.0 2 2 terms only oscillate between ± C1 + C2 .0 5.

Thus our particular solution is xp = (ω2 − ω2 )F0 0 m(2ωp)2 + m(ω2 0 − ω2 )2 cos ω t + 2ωpF0 sin ω t + m(ω2 − ω2 )2 0 ω0 ) m(2ωp)2 Or in the other notation we have amplitude C and phase shift γ where (if ω tan γ = B 2ωp . In any case. as we have seen before.    −pt e (C cos ω t + C sin ω t) if c2 < 4km . The form of the general solution of the associated homogeneous equation depends 0  C1 er1 t + C2 er2 t  if c2 > 4km. r2 = −p± p2 − ω2 .  1 1 2 1 Here ω1 = ω2 − p2 . m(2ωp)2 + m(ω2 − ω2 )2 0 F0 cos ω t.2.6. there can 0 be no conflicts when trying to solve for the undetermined coefficients by trying x p = A cos ω t + B sin ω t.    −pt  xc = C1 e + C2 te−pt if c2 = 4km. m 79 We find the roots of the characteristic equation of the associated homogeneous problem are r1 . Let us plug in and solve for A and B. we can see that xc (t) → 0 as t → ∞. That is 0 We also compute C = √ A2 + B2 to be C= m F0 (2ωp)2 + (ω2 0 − ω2 )2 . Furthermore. m on the sign of p2 − ω2 . or equivalently on the sign of c2 − 4km.8) with x + 2px + ω2 x = 0 F0 cos ω t. We get (the tedious details are left to reader) (ω2 − ω2 )B − 2ωpA sin ω t + (ω2 − ω2 )A + 2ωpB cos ω t = 0 0 We get that A= (ω2 − ω2 )F0 0 m(2ωp)2 + m(ω2 − ω2 )2 0 2ωpF0 B= . FORCED OSCILLATIONS AND RESONANCE We replace equation (2. = 2 A ω0 − ω2 .

Even if you change the right hand side a little bit you will get a different formula with different behavior. F0 = 1. This maximum is said to be practical resonance (we call the ω that achieves this maximum the practical resonance frequency). we might as well focus on the steady periodic solution and ignore the transient solution. If we plot C as a function of ω (with all other parameters fixed) we can find its maximum. Hence the name transient. m = 1.1.depends on p (and hence c). So the smaller the damping. Because of this behavior. The exact formula is not as important as the idea. m (2ωp)2 + (ω2 − ω2 )2 0 F0 If ω = ω0 we see that A = 0. and the initial conditions will only affect xtr .0 5. What we will look at however is the maximum value of the amplitude of the steady periodic solution.5 -2.e. For reasons we will explain in a moment we will call xc the transient solution and denote it by xtr and we will call the x p we found above the steady periodic solution and denote it by x sp . Let C be the amplitude of x sp . 0 5 10 15 20 5. Hence for large t. So there is no point in memorizing this specific formula.0 2. the longer the “tranc = 0. there is no term that goes to infinity.0 -5. and ω = 1. We note that xc = xtr goes to zero as t → ∞ as all the terms involve an exponential with a negative exponent. This means that the effect of the initial conditions will be negligible after some period of time. A sample plot for three different values of c .0 0. See Figure 2. Since there were no conflicts when solving with undetermined coefficient. The general solution to our problem is x = xc + x p = xtr + x sp . Notice that x sp involves no arbitrary constants.7: Solutions with different initial con.7 for a graph of different initial conditions. Notice that the speed at which xtr goes to zero Figure 2. you should remember the ideas involved.” This agrees with the observation that when c = 0. HIGHER ORDER LINEAR ODES cos(ω t − γ). B = C = 2mωp and γ = π/2. the effect of xtr is negligible and we will essentially only see x sp . the “faster” xtr becomes negligible.0 -2.0 0 5 10 15 20 Let us describe what do we mean by resonance when damping is present.5 2. You can always recompute it later or look it up if you really need it.7. bigger c is). the initial conditions affect the behavior for all time (i.80 Hence we have xp = F0 CHAPTER 2. an infinite “transient region”). The bigger p is (the ditions for parameters k = 1.5 -5.5 0. sient region. You should not memorize the above formula.

This is easily computed to be −4ω(2p2 + ω2 − ω2 )F0 0 C (ω) = .5 3.0 0. As damping c (and hence p) becomes smaller. The behavior will be more complicated if the forcing function is not an exact cosine wave.4. The top line is with c = 0.8.0 0.5 2.0 0. In other words when 0 ω= ω2 − 2p2 or 0 0 It can be shown that if ω2 − 2p2 is positive then ω2 − 2p2 is the practical resonance frequency 0 0 (that is the point where C(ω) is maximal.0 1. If ω = 0 is the maximum. .2. As you can see the practical resonance amplitude grows as damping gets smaller.5 1.6.5 2. ω0 is the resonance frequency.0 0. 2 + (ω2 − ω2 )2 3/2 m (2ωp) 0 This is zero either when ω = 0 or when 2p2 + ω2 − ω2 = 0.5 1.8. In this case the amplitude gets larger as the forcing frequency gets smaller. ω0 is a good estimate of the resonance frequency. the frequency is smaller than ω0 .5 0.5 3.0 1. To find the maximum it turns out we need to find the derivative C (ω).8: Graph of C(ω) showing practical resonance with parameters k = 1. 0. then essentially there is no practical resonance since we assume that ω > 0 in our system. This behavior agrees with the observation that when c = 0.0 2.0 0. m = 1. but for example a square wave. the middle line with c = 0. and any practical resonance can disappear when damping is large.0 2.5 1.5 2. and the bottom line with c = 1. note that in this case C (ω) > 0 for small ω).6.0 Figure 2. FORCED OSCILLATIONS AND RESONANCE 81 is given in Figure 2.5 1.0 1.0 2.0 2. the closer the practical resonance frequency comes to ω0 . F0 = 1. If practical resonance occurs.5 0.0 1. So when damping is very small. It will be good to come back to this section once you have learned about the Fourier series.5 2.

6.5: Suppose a water tower in an earthquake acts as a mass-spring system.1: Derive a formula for x sp if the equation is mx + cx + kx = F0 sin ω t. k.6. and F0 will there be no practical resonance (for what values of c is there no maximum of C(ω) for ω > 0).6.5 meter. Will the tower collapse? . Assume that the container on top is full and the water does not move around. where the induced vibrations are horizontal.4: Take mx + cx + kx = F0 cos ω t.2: Derive a formula for x sp if the equation is mx +cx +kx = F0 cos ω t+ F1 cos 3ω t. Exercise 2.3 Exercises Exercise 2. It takes a force of 1000 newtons to displace the container 1 meter. Exercise 2. Assume c > 0. Now think of the function C(ω). HIGHER ORDER LINEAR ODES 2. The container then acts as a mass and the support acts as the spring.5 cycles per second comes. What is the amplitude of the oscillations. Now think of the function C(ω). b) If ω is not the natural frequency. For what values of m (solve in terms of c.000 kg. a) What is the natural frequency of the water tower. For what values of c (solve in terms of m. c) Suppose A = 1 and an earthquake with frequency 0. Assume c > 0.3: Take mx + cx + kx = F0 cos ω t. find a formula for the amplitude of the resulting oscillations of the water container.6.6. Suppose that if the water tower moves more than 1. For simplicity assume no friction. k.82 CHAPTER 2. Suppose that an earthquake induces an external force F(t) = mAω2 cos ω t. Fix c > 0 and k > 0. Exercise 2. the tower collapses.6. Fix m > 0 and k > 0. Exercise 2. and F0 will there be no practical resonance (for what values of m is there no maximum of C(ω) for ω > 0). Suppose that the container with water has a mass of m =10.

y2 .Chapter 3 Systems of ODEs 3. for some functions f1 and f2 . . For example. we may end up with systems of several equations and several dependent variables even if we start with a single equation. Sometimes a system is easy to solve by solving for one variable and then for the second variable. §4. Example 3. y2 . y2 . with initial conditions of the form y1 (0) = 1. y2 . y1 .1 in EP Often we do not have just one dependent variable and one equation. y2 . y1 . x). y2 = f2 (y1 . which is a linear first order equation that is easily solved for y2 . By the method of integrating factor we get e x y2 = C1 2x e + C2 . suppose y1 . it is a second order system. . y1 = f (y1 . y2 = y1 − y2 . x). 2 83 . yn we can have a differential equation involving all of them and their derivatives. Usually.1. We note that y1 = C1 e x is the general solution of the first equation. If we have several dependent variables.1 Introduction to systems of ODEs Note: 1 lecture.1: Take the first order system y1 = y1 . x). y2 (0) = 2. . . And as we will see. y2 . y1 . We can then plug this y1 into the second equation and get the equation y2 = C1 e x − y2 . y2 . More precisely. We call the above a system of differential equations. when we have two dependent variables we would have two equations such as y1 = f1 (y1 .

That is. and we will suppose that they ride along with no friction. . Suppose we have one spring with constant k but two m2 m2 masses m1 and m2 . We can think of the masses as carts. Let x1 be the displacement of the first cart and x2 be the displacement of the second cart. y1 = C1 e x . and we mark the position of the first and second cart and call those the zero position. . Define new variables u1 . . Take an nth order differential equation y(n) = F(y(n−1) . and we will have to solve for all variables at once. As an example application. x1 = 0 is a different position on the floor than the position corresponding to x2 = 0. . let us think of mass and spring systems again. thus the same thing with a negative sign. The general solution to the system is. we will not be so lucky to be able to solve like in the first example. . SYSTEMS OF ODES + C2 e−x . y. k m1 x1 = k(x2 − x1 ). Using Newton’s second law. y . un and write the system u1 = u2 u2 = u3 . m2 x2 = −k(x2 − x1 ). . The force exerted by the spring on the first cart is k(x2 − x1 ). let us note that in some sense we need only consider first order systems. since x2 − x1 is how far the string is stretched (or compressed) from the rest position. .84 or y2 = C1 x e 2 CHAPTER 3. . x). we note that force equals mass times acceleration. y2 = C1 x e + C2 e−x . un−1 = un un = F(un . Before we talk about how to handle systems. . we put the two carts somewhere with no tension on the spring. therefore. That is. . That we must solve for both x1 and x2 at once is intuitively obvious. . We substitute x = 0 and find that 3 C1 = 1 and C2 = 2 . un−1 . . . u2 . . x). . Generally. In this system we cannot solve for the x1 variable separately. u1 . The force exerted on the second cart is the opposite. 2 We can now solve for C1 and C2 given the initial conditions. since where the first cart goes depends exactly on where the second cart goes and vice versa. .

In fact.1. . For autonomous systems we can easily draw the so-called direction field or vector field. this is what IODE was doing when you had it solve a second order equation numerically in the IODE Project III if you have done that project. So we now have an equation y + y − 2y = 0. y) the direction in which we should travel to satisfy the equations should be the direction of the vector (2y − x.3. We essentially just treat the dependent variable not as a number but as a vector. We know how to solve this equation and we find that y = C1 e−2t + C2 et . For example.2: Sometimes we can use this idea in reverse as well. you can discard u2 through un and let y = u1 . Hence. Once we have y we can plug in to get x. Example 3. can be transformed into a first order system of n × k equations and n × k unknowns. That is.1: Plug in and check that this really is the solution. y = x says that at the point (x. where the independent variable is t. we give a direction (and a magnitude). 3 Exercise 3. 3 y= −e−2t + et . but instead of giving a slope at each point. So we draw the vector (2y − x. It is also autonomous as the equations do not depend on the independent variable t. C1 = −C2 and 1 = 3C2 . So C1 = −1 and C2 = 1 . a system of k differential equations in k unknowns. In many mathematical computer languages there is almost no distinction in syntax. For example. y = x. y = x = 2y − x = 2y − y . x) based at the . . x) with the speed equal to the magnitude of this vector.1. We note that this y solves the original equation. a plot similar to a slope field. u2 .1. It is useful to go back and forth between systems and higher order equations for other reasons. as none of the dependent variables appear in any functions or with any higher powers than one. y(0) = 0. un . Let us take the system x = 2y − x. the ODE approximation methods are generally only given as solutions for first order systems. A similar process can be done for a system of higher order differential equations. It is not very hard to adapt the code for the Euler method for a first order equation to first order systems. We wish to solve for the initial conditions x(0) = 1. . . Our solution is: 3 3 x= 2e−2t + et . all of order n. We solve for the initial conditions 1 = x(0) = −2C1 + C2 and 0 = y(0) = C1 + C2 . x = y = −2C1 e−2t + C2 et . The above example was what we will call a linear first order system. We first notice that if we differentiate the first equation once we get y = x and now we know what x is in terms of x and y. The previous example x = 2y − x. INTRODUCTION TO SYSTEMS OF ODES 85 Now try to solve this system for u1 . Once you have solved for the u’s.

457.1. Figure 3.1.1. The resulting picture is usually called the phase portrait (or phase plane portrait). Notice the similarity to the diagrams we drew for autonomous systems in one dimension. We can now draw a path of the solution in the plane. Since we solved this system precisely we can compute x(2) and y(2).1. We get that x(2) ≈ 2. y + (x + y )2 − x = 0 as a first order system of ODEs. 0) for 0 ≤ t ≤ 2. x2 = x2 . g(t)) for t in the selected range.86 CHAPTER 3.1 Exercises Exercise 3. See Figure 3. But now note how much more complicated things become if we allow just one more dimension. then we can pick an interval of t (say 0 ≤ t ≤ 2 for our example) and plot all the points ( f (t). In this case however we cannot draw the direction field. Exercise 3.1: The direction field for x = 2y − x. For each t we would get a different direction field.4: Write ay + by + cy = f (x) as a first order system of ODEs. 0) and travels along the vector field for a distance of 2 units of t. y) and we do this for many points on the xy-plane. The particular curve obtained we call the trajectory or solution curve.2: The direction field for x = 2y − x.1. In this figure the line starts at (1.2.1. That is. suppose the solution is given by x = f (t). . y = x with the trajectory of the solution starting at (1.3: Find the general solution of x1 = 3x1 − x2 + et .5: Write x + y2 y − x3 = sin(t). Exercise 3. This point corresponds to the top right end of the plotted solution curve in the figure. We may want to scale down the size of our vectors to fit many of them on the same direction field. y = x. SYSTEMS OF ODES point (x. y = g(t). Also note that we can draw phase portraits and trajectories in the xy-plane even if the system is not autonomous.2: Find the general solution of x1 = x2 − x1 + t. -1 3 0 1 2 3 3 3 -1 0 1 2 3 3 2 2 2 2 1 1 1 1 0 0 0 0 -1 -1 0 1 2 3 -1 -1 -1 0 1 2 3 -1 Figure 3. Exercise 3. x2 = x1 . An example plot is given in Figure 3.475 and y(2) ≈ 2. 3. since the field changes as t changes.

A matrix is an m × n array of numbers (m rows and n columns).2.2 Matrices and linear systems Note: 1 and a half lectures. so our operations will have to be compatible with this viewpoint.1 in EP 3. (A + B) + C = A + (B + C). we denote a 3 × 5 matrix as follows   a11 a12 a13 a14 a15          A = a21 a22 a23 a24 a25  . 4 5 6 0 2 4 4 7 10 If the sizes do not match. First. For example. d some scalars. We add matrices element by element.2. and by A. we will need to talk about matrices. By 0 we will mean the vector of all zeros. B.1 Matrices and vectors Before we can start talking about linear systems of ODEs. We will usually denote matrices by upper case letters and vectors by lower case letters with an arrow such as x or b. 1 2 3 1 1 −1 2 3 2 + = . For example.       a31 a32 a33 a34 a35 By a vector we will usually mean a column vector which is an n × 1 matrix. first part of §5. A + 0 = A = 0 + A. Note that we will want 1 × 1 matrices to really act like numbers. . so let us review these briefly. we have the following familiar rules. MATRICES AND LINEAR SYSTEMS 87 3. 4 5 6 8 10 12 Matrix addition is also easy. C some matrices. It is easy to define some operations on matrices. If we mean a row vector we will explicitly say so (a row vector is a 1 × n matrix). (c + d)A = cA + dA. then addition is not defined.3. A + B = B + A. by c. 1 2 3 2 4 6 2 = . For example. If we denote by 0 the matrix of with all zero entries. we can multiply by a scalar (a number). c(A + B) = cA + cB. This means just multiplying each entry by the same number.

Now for an m × n matrix A and an n × p matrix B we can define the product AB.2.   b1          a3 · b2  = a1 b1 + a2 b2 + a3 b3 . For example.       b3 a1 a2 And similarly for larger (or smaller) vectors. The result is a single number.88 CHAPTER 3. It is usually denoted by I. First we define the so-called dot product (or inner product) of two vectors. the I3 would be the 3 × 3 identity matrix   1 0 0         I = I3 = 0 1 0 . For each size we have a different matrix and so sometimes we may denote the size as a subscript. The identity matrix is a square matrix with 1s on the main diagonal and zeros everywhere else. Example: 1 2 3 4 5 6 T   1 4         = 2 5       3 6 3. SYSTEMS OF ODES Another operation which is useful for matrices is the so-called transpose. The transpose of A is denoted by AT . Example:   1 0 −1    1 2 3   1 1 1  =       4 5 6    1 0 0 = 6 2 1 1 · 1 + 2 · 1 + 3 · 1 1 · 0 + 2 · 1 + 3 · 0 1 · (−1) + 2 · 1 + 3 · 0 = 15 5 1 4 · 1 + 5 · 1 + 6 · 1 4 · 0 + 5 · 1 + 6 · 0 4 · (−1) + 5 · 1 + 6 · 0 For multiplication we will want an analogue of a 1. First let us denote by rowi (A) the ith row of A and by column j (A) the jth column of A. We let AB be an m × p matrix whose i jth entry is rowi (A) · column j (B).2 Matrix multiplication Next let us define matrix multiplication. Armed with the dot product we can define the product of matrices. Here we use the so-called identity matrix. For example.       0 0 1 . Note that the sizes must match. For the dot product we multiply each pair of entries from the first and the second vector and we sum these products. Usually this will be a row vector multiplied with a column vector of the same size. This operation just swaps rows and columns of a matrix.

a 2 × 2 matrix A is a mapping of the plane where x gets sent to Ax. −1 1 .3. For the last two items to hold we would need to essentially “divide” by a matrix. if we take the unit square (square of sides 1) in the plane. Then the determinant of A is then the factor by which the volume of objects gets changed. IA = A = AI.2. then AB = AC does imply that B = C (the inverse is unique). then A takes the square to a parallelogram of area |det(A)|. For example. A= 1 1 . That is. A(B + C) = AB + AC. If A is not invertible we say A is singular or just say it is not invertible. c is some scalar (number). Then we call B the inverse of A and we denote B by A−1 . (ii) AB = AC does not necessarily imply B = C even if A is not 0. If A is invertible. matrices do not commute. It is also not hard to see that (A−1 )−1 = A. C are matrices of the correct sizes so that the following make sense. (B + C)A = BA + CA. We just multiply both sides by A−1 to get A−1 AB = A−1 AC or IB = IC or B = C. (iii) AB = 0 does not necessarily mean that A = 0 or B = 0. Before trying to compute determinant for larger matrices. A few warnings are in order however. c(AB) = (cA)B = A(cB). MATRICES AND LINEAR SYSTEMS 89 We have the following rules for matrix multiplication. 3.2. We define the determinant of a 1 × 1 matrix as the value of its own entry. For example. If the inverse of A exists. Suppose that A. let us first note the meaning of the determinant. then we call A invertible. Suppose that A is an n × n matrix and that there exists another n × n matrix B such that AB = I = BA. This is where matrix inverse comes in. For example. A(BC) = (AB)C. B. The sign of det(A) denotes changing of orientation (if the axes got flipped). For a 2 × 2 matrix we define det a b c d = ad − bc. Consider an n × n matrix as a mapping of Rn to Rn . (i) AB BA in general (it may be true by fluke sometimes).3 The determinant We can now talk about determinants of square matrices.

for the first row we get  +a det(A ) if n is odd. Obviously (0.  1n 1n We alternately add and subtract the determinants of the submatrices Ai j for a fixed i and all j. (1. 1) gets sent. c). For example. n det(A) = j=1 (−1)i+ j ai j det(Ai j ). It is also possible to compute the determinant by expanding along columns (picking a column instead of a row above). carries the unit square to the given Now we can define the determinant for larger matrices. c + d). If you think back to high school geometry.   1 2 3     5 6 4 6 4 5     − 2 · det + 3 · det det 4 5 6 = 1 · det     8 9 7 9 7 8   7 8 9 = 1(5 · 9 − 6 · 8) − 2(4 · 9 − 6 · 7) + 3(4 · 8 − 5 · 7) = 0. 1) and (1. 0). (b.  1n  1n det(A) = a11 det(A11 ) − a12 det(A12 ) + a13 det(A13 ) − · · ·  −a det(A ) if n even. 0). −1 1 1 1 1 1 1 2 = . SYSTEMS OF ODES Then det(A) = 1 + 1 = 2. The numbers (−1)i+ j det(Ai j ) are called cofactors of the matrix and this way of computing the determinant is called the cofactor expansion. d) and (a + b. a b c d The vertical lines here mean absolute value. We define Ai j as the matrix A with the ith row and the jth column deleted. 0). we would get det(A) = a11 det(A11 )−a12 det(A12 )+ a13 det(A13 ). And it is precisely det a b c d . Note that a common notation for the determinant is a pair of vertical lines. say the ith row and compute. To compute the determinant of a matrix. −1 1 1 0 √ So it turns out that the image of the square is another square. c d c d .90 CHAPTER 3. Now let us see where the square with vertices (0. For example. 0). (a. Now 1 1 1 1 . pick one row. For example. This one has a side of length 2 and is therefore of area 2. picking the first row. 0) gets sent to (0. you may have seen a formula for computing the area of a parallelogram with vertices (0. (0. for a 3×3 matrix. = −1 −1 1 0 1 1 0 1 = . a b a b = det . The matrix parallelogram.

                        1 4 1 x3 10 To solve the system we put the coefficient matrix (the matrix on the left hand side of the equation) together with the vector on the right and side and get the so-called augmented matrix   2 2 2 2        1 1 3 5  . we have a formula for the inverse of a 2 × 2 matrix a b c d −1 0.4 Solving linear systems One application of matrices we will need is to solve systems of linear equations. An n × n matrix A is invertible if and only if det(A) In fact. and we could add a multiple of one equation to another equation. x1 + x2 + 3x3 = 5. MATRICES AND LINEAR SYSTEMS 91 I personally find this notation confusing since vertical lines for me usually mean a positive quantity. The formula only works if the determinant is nonzero.3. Theorem 3.2. One of the most important properties of determinants (in the context of this course) is the following theorem. (i) Swap two rows. So I will not ever use this notation in these notes. Suppose that we have the following system of linear equations 2x1 + 2x2 + 2x3 = 2. ad − bc −c a Notice the determinant of the matrix in the denominator of the fraction.         1 4 1 10 We then apply the following three elementary operations.2. we could multiply any of the equations by a nonzero number. Note that the system can be written as      2 2 2  x1   2             1 1 3  x2  =  5  . This may be best shown by example. Without changing the solution. otherwise we are dividing by zero.2. = 1 d −b . It is easier to write this as a matrix equation. while determinants can be negative.1. 3. we note that we could do swap equations in this system. x1 + 4x2 + x3 = 10. It turns out these operations always suffice to find a solution. .

  1 0 0 −4       0 1 0 3          0 0 1 2 If we think about what equations this augmented matrix represents. and x3 = 2.92 (ii) Multiply a row by a nonzero number. Exercise 3. CHAPTER 3. We try these and.1: Check that this solution works.2.   1 1 1 1      0 1 0 3          0 0 1 2 Subtract the last row from the first. SYSTEMS OF ODES We will keep doing these operations until we get into a state where it is easy to read off the answer or until we get into a contradiction indicating no solution. (iii) Add a multiple of one row to another row.   1 1 1 1       0 0 2 4         0 3 0 9 Multiply the last row by 1 3 and the second row by 1 . x2 = 3. we see that x1 = −4. voilà. First multiply the first row by 2 . then subtract the second row from the first. for example if we come up with an equation such as 0 = 1. It works. 2   1 1 1 1       0 0 1 2         0 1 0 3 Swap rows 2 and 3. . If we write this equation in matrix notation as Ax = b. 1 Let us work through the example.   1 1 1 1        1 1 3 5          1 4 1 10 Now subtract the first row from the second and third row.

Example 3. and x3 . One last note to make about linear systems of equations is that it is possible that the solution is not unique (or that no solution exists). (i) There is only one leading entry in each column. The variables corresponding to columns with no leading entries are said to be free variables. The last row corresponds to the equation 0x1 + 0x2 + 0x3 = 3 which is preposterous. (iii) All leading entries are 1.         0 0 0 3 there is no need to go further.3.2. The solution can be also computed with the x = A−1 Ax = A−1 b. (ii) All the entries above and below a leading entry are zero. For example if for a system of 3 equations and 3 unknowns you find a row such as [ 0 0 0 1 ] in the augmented matrix. Such a matrix is said to be in reduced row echelon form. 2 2 2 1 1 3 1 4 1 93 2 5 10 and b is the vector .   1 2 0 3       0 0 1 1         0 0 0 0 If the variables are named x1 . MATRICES AND LINEAR SYSTEMS where A is the matrix inverse. Free variables mean that we can pick those variables to be anything we want and then solve for the rest of the unknowns. no solution exists. .2.1: The following augmented matrix is in reduced row echelon form. The first nonzero entry in each row is called the leading entry. You generally try to use row operations until the following conditions are satisfied. you know the system is inconsistent. On the other hand if during the row reduction process you come up with the matrix   1 2 13 3       0 0 1 1 . Hence. If during the row reduction you come up with a row where all the entries except the last one are zero (the last entry in a row corresponds to the right hand side of the equation) the system is inconsistent and has no solution. x2 . It is easy to tell if a solution does not exist. then x2 is the free variable and x1 = 3 − 2x2 and x3 = 1.

. let us touch on how to do it. .2.2: Solve 1 2 3 4 x= 5 6 by using matrix inverse. 1 2 3 x= 2 0 0 . then the inverse is the matrix with the columns xk for k = 1. then the matrix will be of the form [ I A−1 ] if and only if A is invertible. . Finding the inverse of A is actually just solving a bunch of linear equations. to find the inverse we can write a larger n × 2n augmented matrix [ A I ].8: Solve Exercise 3.5: Compute inverse of Exercise 3. . 2 0 4 1 x= .2. if you want to solve the equation for many different right hand sides b.2. x= 0 3 2 3 . 9 −2 −6 −8 3 6 10 −2 −6 1 4 6 8 2 0 0 0 3 5 7 10 1 0 0 1 Exercise 3. SYSTEMS OF ODES 3.6: For which h is Infinitely many. Exercise 3. The 2 × 2 inverse is basically given by a formula. n (exercise: why?). then A is invertible.94 CHAPTER 3. So it is useful to compute the inverse.2. not invertible? Is there only one such h? Are there several? h 1 1 0 h 0 1 1 h not invertible? Find all such h. Hint: expand along the proper row or column .3: Compute determinant of Exercise 3.2. In fact by multiplying both sides by A−1 you can see that x = A−1 b.4: Compute determinant of to make the calculations simpler.2.7: For which h is Exercise 3.2.10: Solve 9 −2 −6 −8 3 6 10 −2 −6 5 3 7 8 4 4 6 3 3 3 3 0 2 2 3 2 3 3 3 4 4 1 2 3 4 5 6 7 8 h 1 2 3 1 1 1 0 1 0 . 3. If you can solve Axk = ek where ek is the vector with all zeros except a 1 at the kth position.2. If you do row reduction and put the matrix in reduced row echelon form.2. Exercise 3. . so you can just read off the inverse A−1 . While we will not have too much occasion to compute inverses for larger matrices than 2 × 2 by hand. where I is the identity.5 Computing the inverse If the coefficient matrix is square and there exists a unique solution x to Ax = b for any b.2. Therefore.6 Exercises Exercise 3.9: Solve Exercise 3. but it is not hard to also compute inverses of larger matrices. .2.

. . x2 = t .   .1 in EP First let us talk about matrix or vector valued functions. .        an1 (t) an2 (t) · · · ann (t) We can talk about the derivative A (t) or dA and this is just the matrix valued function whose i jth dt entry is ai j (t).  x(t) =  .3..3 Linear systems of ODEs Note: less than 1 lecture.  . This is essentially just a matrix whose entries depend on some variable. . Similar differentiation rules apply here. second part of §5. . the equations x1 = 2tx1 + et x2 + t2 .3. Let A and B be matrix valued functions. LINEAR SYSTEMS OF ODES 95 3. .     . Then (A + B) (AB) (cA) (CA) (AC) =A +B = A B + AB = cA = CA =AC Do note the order in the last two expressions. A solution is of course a vector valued function x satisfying the equation. Let us say the independent variable is t. Then a vector valued function x(t) is really something like    x1 (t)       x2 (t)       . We will often suppress the dependence on t and only write x = Px + f . . and x and f are vector valued functions. Let c a scalar and C be a constant matrix. .           xn (t) Similarly a matrix valued function is something such as   a11 (t) a12 (t) · · · a1n (t)      a21 (t) a22 (t) · · · a2n (t)        A(t) =  .   . x1 − x2 + et . Where P is a matrix valued function.   . A first order linear system of ODEs is a system which can be written as x (t) = P(t)x(t) + f (t). For example.

Now suppose that X(t) is . X(t) is called the fundamental matrix. If furthermore this is a system of n equations (P is n × n). That is. −1 e We will mostly concentrate on equations that are not just linear. . . xn are n solutions of the equation. The linear combination c1 x1 + c2 x2 + · · · + cn xn could always be written as X(t) c. Now you are given an initial condition of the form x(t0 ) = b for some constant vector b. cn . . then x = c1 x1 + c2 x2 + · · · + cn xn . then we say the system is homogeneous. So the procedure will be exactly the same.2. . xn . . Theorem 3. SYSTEMS OF ODES et t2 x+ t . xn are linearly independent if and only if c1 x1 + c2 x2 + · · · + cn xn = 0 has only the solution c1 = c2 = · · · = cn = 0. just like for single homogeneous equations.1 (Superposition). . . or fundamental matrix solution. . For homogeneous linear systems we still have the principle of superposition. Suppose that x1 . suppose you have found the general solution x = Px + f . . . We apply the same technique as we did before. then we find the general solution to the associated homogeneous equation and we add the two. xn are linearly independent. Suppose x p is one particular solution. . Then every solution can be written as (3. the matrix P will be a constant and not depend on t.1). Let x = Px be a linear homogeneous system of ODEs. . We find a particular solution to the nonhomogeneous equation. When f = 0 (the zero vector). . . . .3. and c is the column vector with entries c1 . and x1 . but are in fact constant coefficient equations. To solve nonhomogeneous first order linear systems. Then every solution can be written as x = xc + x p . Let x = Px+ f be a linear system of ODEs. Alright.3.96 can be written as x = 2t 1 t CHAPTER 3. where xc is a solution to the associated homogeneous equation (x = Px). (3. Linear independence for vector valued functions is essentially the same as for normal functions. . . x1 . . . where X(t) is the matrix with columns x1 .1) is also a solution. Theorem 3.

e e 2 c1 t e 2 It is not hard to see that the columns of this matrix are linearly independent. x2 = x1 − x2 . 1 −1 x(0) = 1 . We write the system as x = 1 0 x. Hence to solve the initial problem we solve the equation X(0)c = b. Hence our solution is et . 1 1 2 2 After a single elementary row operation we find that c = x(t) = X(t)c = This agrees with our previous solution. LINEAR SYSTEMS OF ODES 97 the fundamental matrix solution of the associated homogeneous equation (i. x2 (0) = 2. Hence in matrix notation. This is a homogeneous system.3. Example 3.3.1 we solved the following system x1 = x1 . Or in other words we are solving the nonhomogeneous system of linear equations X(t0 )c = b − x p (t0 ) for c. with initial conditions x1 (0) = 1. Then we are seeking a vector c such that b = x(t0 ) = X(t0 )c + x p (t0 ). The general solution is written as x(t) = X(t)c + x p (t). or in other words. so f = 0. We found the general solution was x1 = c1 et and x2 = the fundamental matrix solution is et 0 X(t) = 1 t −t . just plug in t = 0 and note that the two constant vectors are already linearly independent here. et 1 t e 2 1 3/2 . 2 + c2 e−t . + 3 e−t 2 0 1 = e−t 3 2 1 t e 2 .1: In §3.3. To see this. 1 1 0 c= .e. columns of X are solutions).

x2 =? (i. c) Write down the general solution in the form x1 =?.3. SYSTEMS OF ODES 3.1 Exercises Exercise 3. b) 3 1 1 Write down the general solution. Hint: You must and t3 t4 are linearly independent.4: Verify that 1 et and −1 et and 0 1 be a bit more tricky than in the previous exercise.1: Write the system x1 = 2x1 − 3tx2 + sin t. Exercise 3.e.3. 1 Exercise 3. x2 = et x1 + 3x2 + cos t as in the form x = P(t)x + f (t).3. Exercise 3. .5: Verify that t t2 e2t are linearly independent. write down a formula for each element of the solution). 1 −1 1 Exercise 3.3.3: Verify that 1 1 1 et and 1 −1 1 et are linearly independent.98 CHAPTER 3.3.3. Hint: Just plug in t = 0.2: a) Verify that the system x = 1 3 x has the two solutions 1 e4t and −1 e−2t .

We then call λ an eigenvalue of A and v is called the corresponding eigenvector. Therefore. 3. EIGENVALUE METHOD 99 3. Were it invertible. §5. Now suppose we try to adapt the method for single constant coefficient equations by trying the function eλt . Suppose you have a linear constant coefficient homogeneous system x = Px. x is a vector. where v is an arbitrary constant vector. Suppose there is a scalar λ and a nonzero vector v such that Av = λv. once λ is known.4. A has the eigenvalue λ if and only if λ solves the equation det(A − λI) = 0. 0 1 0 0 0 If we rewrite the equation for an eigenvalue as (A − λI)v = 0. Example 3. we could write (A − λI)−1 (A − λI)v = (A − λI)−1 0 which implies v = 0. . The eigenvector will have to be found later.4 Eigenvalue method Note: 2 lectures. Note that this means that we will be able to find an eigenvalue without finding the corresponding eigenvector. So we try veλt .1: The matrix 1 because 0 2 1 0 1 has an eigenvalue of λ = 2 with the corresponding eigenvector 2 1 1 2 1 = =2 .2 in EP In this section we will learn how to solve linear homogeneous constant coefficient systems of ODEs by the eigenvalue method. However. To solve this equation we need a little bit more linear algebra which we review now. We notice that this has a nonzero solution v only if A − λI is not invertible.4.4.3. We plug into the equation to get λveλt = Pveλt .1 Eigenvalues and eigenvectors of a matrix Let A be a square constant matrix. We divide by eλt and notice that we are looking for a λ and v that satisfy the equation λv = Pv.

We can pick v2 to be arbitrary (but nonzero) and let v1 = v2 and of course v3 = 0. For example.    0 2 1 1 The equations the entries of v have to satisfy are. therefore. the polynomial we get by computing det(A − λI) will be of degree n. this will always be possible. 0 0 2 We write          2 1 1 1 0 0 v1  −1 1 1  v1                                      (A − λI)v = 1 2 0 − 3 0 1 0 v2  =  1 −1 0  v2  = 0. we write (A − λI)v = 0. λ = 2. and v2 is a free variable. and hence we will in general have n eigenvalues. Example 3.4. and so the eigenvalues are λ = 1.100 2 1 1 CHAPTER 3. SYSTEMS OF ODES Example 3. Write down the augmented matrix   −1 1 1 0       1 −1 0 0          0 0 −1 0 and perform row operations (exercise: which ones?)  1 −1 0    0 0 1     0 0 0 until you get  0     0 . 0 0 2 We write       2 1 1 1 0 0 2 − λ 1 1                          2−λ 0  = det 1 2 0 − λ 0 1 0 = det  1                   0 0 2 0 0 1 0 0 2−λ = (2 − λ)2 ((2 − λ)2 − 1) = −(λ − 1)(λ − 2)(λ − 3). Note that for an n × n matrix. v = 1 1 0 . We try this:        1 2 1 1 1 3                      1 2 0 1 = 3 = 3 1 . and λ = 3. v1 − v2 = 0. If λ is an eigenvalue.                            0 0 2 0 0 1 v3 0 0 −1 v3 It is easy to solve this system of linear equations.2: Find all eigenvalues of 1 2 0 . To find an eigenvector corresponding to λ. v3 = 0. and solve for a nontrivial (nonzero) vector v.4.                             0 0 0 0 2 0 Yay! It worked. .3: Find the eigenvector of 1 2 0 corresponding to the eigenvalue λ = 3.

and the corresponding eigenvectors v1 .                         2t e 0 1 0 Exercise 3.1 (easy): Are the eigenvectors unique? Can you find a different eigenvector for λ = 3 in the example above? How does it relate to the other eigenvector? Exercise 3. . We have the equation x = Px. 3 earlier.       0 0 2 Find the general solution.1) then the eigenvalue equation det(P − λI) = 0.2: Note that when the matrix is 2 × 2 you do not need to write down the augmented matrix when computing eigenvectors (if you have computed the eigenvalues correctly). Hence our general solution is   t        e + e3t  1 0 1                                 x = −1 et + 0 e2t + 1 e3t = −et + e3t  . and the general solution to the ODE can be written as.4.1.4.4. . Theorem 3. . .3. x = c1 v1 eλ1 t + c2 v2 eλ2 t + · · · + cn vn eλn t . Example 3. is essentially the same as the characteristic equation we got in §2. . We have found the eigenvector 1 1 1 0 for the eigen0 1 −1 value 3.2 The eigenvalue method with distinct real eigenvalues OK. λn then there are n linearly independent corresponding eigenvectors v1 . λ2 . . λ1 . vn . . for the Note: If you write a homogeneous linear constant coefficient nth order equation as a first order system (as we did in §3. vn . vn eλn t are solutions of the equation and hence x = c1 v1 eλ1 t + c2 v2 eλ2 t + · · · + cn vn eλn t is a solution. . .4.2 and §2. . We have found the eigenvalues 1. . EIGENVALUE METHOD 101 Exercise 3. . . .3. Take x = Px. . λn of the matrix P. v2 . . Now we notice that the functions v1 eλ1 t . We find the eigenvalues λ1 . 2. Can you see why? Try it for the matrix 2 1 . 1 2 3. . . v2 eλ2 t . .3: Check that this really solves the system.4. .4.4. In similar fashion we find the eigenvector −1 for the eigenvalue 1 and 0 eigenvalue 2 (exercise: check).4: Suppose we take the system   2 1 1         x = 1 2 0 x. If P is n × n and has n distinct real eigenvalues. .

So if v is an eigenvector corresponding to eigenvalue a + ib. Now suppose that a + ib is a complex eigenvalue of P. We note that Px = P x = Px.4. So we only need to consider one of them.102 CHAPTER 3. −1 i It is obvious that the equations iv1 + v2 = 0 and −v1 + iv2 = 0 are multiples of each other. We claim that we did not have to look for the second eigenvector (nor for the second eigenvalue). but we will do something a bit smarter first. Similarly we can bar whole vectors or matrices. 1 We could write the solution as x = c1 i (1−i)t −i (1+i)t c ie(1−i)t − c2 ie(1+i)t e + c2 e = 1 (1−i)t .3 Complex eigenvalues A matrix might very well have complex eigenvalues even if all the entries are real. suppose that we have the system 1 1 x = x. z First a small side note. From this we note that λ = 1 ± i. After picking v2 = 1. The corresponding eigenvectors will also be complex (P − (1 − i)λ)v = 0 i 1 v = 0. All complex eigenvalues come in pairs (because the matrix P is real). This operation is called the complex conjugate. v the corresponding eigenvector and hence x1 = ve(a+ib)t . for example. where the 2 bar above z means a + ib = a − ib. We could use Euler’s formula here and do the whole song and dance we did before. −1 1 Let us compute the eigenvalues of the matrix P = det(P − λI) = det 1−λ 1 −1 1 − λ 1 1 −1 1 . = (1 − λ)2 + 1 = λ2 − 2λ + 2 = 0. For example. we have the eigenvector i v = 1 . 1 1 c1 e + c2 e(1+i)t 1 But then we would need to look for complex values c1 and c2 to solve any initial conditions. And even then it is perhaps not completely clear that we get a real solution. then v is an eigenvector corresponding to eigenvalue a − ib. If a matrix P is real then ¯ P = P. Or ¯ (P − λI)v = (P − λI)v. The real part of a complex number z can be computed as z+¯ . Note that for a real number a. a = a. SYSTEMS OF ODES 3. In similar fashion we find that −i is an eigenvector corresponding to the eigenvalue 1 + i.

And it is real valued! Similarly as Im z = x4 = Im x1 = x1 − x2 . are real valued and are linearly independent. you will end up with n linearly independent solutions if you had n distinct eigenvalues (real or complex). You go on to the next eigenvalue which is either a real eigenvalue or another complex eigenvalue pair. You can now find a real valued general solution to any homogeneous system where the matrix has distinct eigenvalues. Exercise 3. Then note that ea+ib = ea−ib and hence x2 = x1 = ve(a−ib)t is also a solution. You take one λ = a + ib from the pair. EIGENVALUE METHOD is a solution (complex valued) of x = Px.3. The process is this.4. . you notice that they always come in pairs. 2 2 z−¯ z 2i 103 Is also a solution. When you have repeated eigenvalues.4: Check that these really are solutions. you find the corresponding eigenvector v. 2i is the imaginary part we find that is also a real valued solution. matters get a bit more complicated and we will look at that situation in §3. When you have complex eigenvalues. et cos t et cos t . et sin t i (1−i)t i t iet cos t − et sin t e = e cos t + iet sin t = t 1 1 e cos t + iet sin t This solution is real valued for real c1 and c2 .4. t e cos t e sin t c1 et cos t + c2 et sin t −et sin t . Returning to our problem. Hence. The general solution is x = c1 −et sin t et cos t −c1 et sin t + c2 et cos t + c2 t = . Now we can solve for any initial conditions that we have. It turns out that x3 and x4 are linearly independent.7. You note that Re ve(a+ib)t and Im ve(a+ib)t are also solutions to the equation. Now take the function x3 = Re x1 = Re ve(a+ib)t = x1 + x1 x1 + x2 = . we take x1 = It is easy to see that Re x1 = Im x1 = are the solutions we seek.

Exercise 3.8: Find the general solution of x1 = x1 − 2x2 . Exercise 3.4.5: Let A be an 3 × 3 matrix with an eigenvalue of 3 and a corresponding eigenvector v= 1 −1 3 . x2 = 2x1 + 4x2 using the eigenvalue method. Find Av. . b) Solve the system by solving each equation separately and verify you get the same general solution.4.4.4. SYSTEMS OF ODES 3.4.7: Find the general solution of x1 = 3x1 + x2 . Exercise 3. x2 = 3x2 using the eigenvalue method (first write the system in the form x = Ax). x2 = 2x1 + x2 using the eigenvalue method. Exercise 3.4 Exercises Exercise 3.104 CHAPTER 3.4.6: a) Find the general solution of x1 = 2x1 .4.10: Compute eigenvalues and eigenvectors of −2 −1 −1 3 2 1 −3 −1 0 9 −2 −6 −8 3 6 10 −2 −6 . Exercise 3.9: a) Compute eigenvalues and eigenvectors of A = solution of x = Ax. Do not use complex exponentials in your solution. b) Find the general .

Now suppose that x and y are on the line determined by an eigenvector v for an eigenvalue λ. So we have a 2 × 2 matrix P and the system x x . Case 1. For example. x That is.3: Eigenvectors of P. See Figure 3. along the line determined by v. We 0 . As λ > 0. We get the picture in Figure 3. 0 1 The eigenvalues are 1 and −2 and the corresponding eigenvectors are the same.5 Two dimensional systems and their vector fields Note: 1 lecture. The eigenvalues are 1 and 2 and the corre0 2 sponding eigenvectors are 1 and 1 . We call this kind of picture a sink or sometimes a stable node. Suppose one eigenvalue is positive and one is negative. See Figure 3. Suppose that the eigenvalues are real and positive. Then x x = P(av) = a(Pv) = aλv. The only difference is that the 0 1 eigenvalues are negative and hence all arrows are reversed. TWO DIMENSIONAL SYSTEMS AND THEIR VECTOR FIELDS 105 3. take the matrix 1 1 .6 on the following page.2) =P y y We will be able to visually tell how the vector field looks once we find the eigenvalues and eigenvectors of the matrix. Case 2. 0 −2 1 and 1 . Let us draw arrows on the lines to indicate the directions. We want to think about how the vector fields look and how this depends on the eigenvalues. You will notice that the picture looks like a source with arrows coming out from the origin. 1 Case 3. Suppose both eigenvalues were negative. y = av for some scalar a.3. (3. For example the matrix 1 −2 . the derivative points in the direction of v when a is positive and in the opposite direction when a is negative. The eigenvalues are −1 and −2 and the corresponding eigenvectors are the same. Hence we call this type of picture a source or sometimes an unstable node. The calculation and the picture are almost the same. 1 and −3 . See Fig0 1 ure 3. take the negation of the matrix in case 1. −1 −1 . For example.3. but is in EP §6.4 on the following page. We fill in the rest of the arrows and we also draw a few solutions. should really be in EP §5.5.5 on the next page. Find the two eigenvectors and plot them in the plane.2 Let us take a moment to talk about homogeneous systems in the plane. =P y y -3 3 -2 -1 0 1 2 3 3 2 2 1 1 0 0 -1 -1 -2 -2 -3 -3 -2 -1 0 1 2 3 -3 The derivative is a multiple of v and hence points Figure 3.2.

The eigenvalues turn out to be ±2i and the eigenvectors are 2i 0 1 1 and −2i .4: Eigenvectors of P with directions. Figure 3.7: Example saddle vector field with eigenvectors and solutions. SYSTEMS OF ODES -1 0 1 2 3 3 2 2 2 2 1 1 1 1 0 0 0 0 -1 -1 -1 -1 -2 -2 -2 -2 -3 -3 -2 -1 0 1 2 3 -3 -3 -3 -2 -1 0 1 2 3 -3 Figure 3. For example. let P = −4 1 . We take the eigenvalue 2i and its eigenvector 2i and note that the real an imaginary . That is. Suppose the eigenvalues are purely imaginary. Figure 3. We call this picture a saddle point. In this case the eigenvectors are also complex and we cannot just plot them on the plane. Case 4.7. reverse the arrows on one line (corresponding to the negative eigenvalue) and we obtain the picture in Figure 3.5: Example source vector field with eigenvectors and solutions. The next three cases we will assume the eigenvalues are complex.6: Example sink vector field with eigenvectors and solutions.106 -3 3 -2 -1 0 1 2 3 3 3 -3 -2 CHAPTER 3. -3 -2 -1 0 1 2 3 3 -3 3 -2 -1 0 1 2 3 3 3 2 2 2 2 1 1 1 1 0 0 0 0 -1 -1 -1 -1 -2 -2 -2 -2 -3 -3 -2 -1 0 1 2 3 -3 -3 -3 -2 -1 0 1 2 3 -3 Figure 3. suppose the eigenvalues are 0 1 ±ib.

3.5. TWO DIMENSIONAL SYSTEMS AND THEIR VECTOR FIELDS parts of vei2t are Re Im cos 2t 1 i2t e = −2 sin 2t 2i 1 i2t sin 2t e = . 2i 2 cos 2t

107

Note that which combination of them we take just depends on the initial conditions. So we might as well just take the real part. If you notice this is a parametric equation for an ellipse. Same with the imaginary part and in fact any linear combination of them. It is not difficult to see that this is what happens in general when the eigenvalues are purely imaginary. So when the eigenvalues are purely imaginary, you get ellipses for your solutions. This type of picture is sometimes called a center. See Figure 3.8.
-3 3 -2 -1 0 1 2 3 3 3 -3 -2 -1 0 1 2 3 3

2

2

2

2

1

1

1

1

0

0

0

0

-1

-1

-1

-1

-2

-2

-2

-2

-3 -3 -2 -1 0 1 2 3

-3

-3 -3 -2 -1 0 1 2 3

-3

Figure 3.8: Example center vector field.

Figure 3.9: Example spiral source vector field.

Case 5. Now the complex eigenvalues have positive real part. That is, suppose the eigenvalues 1 are a ± ib for some a > 0. For example, let P = −4 1 . The eigenvalues turn out to be 1 ± 2i and 1 1 and 1 . We take 1 + 2i and its eigenvector 1 and find the real and the eigenvectors are 2i −2i 2i imaginary of ve(1+2i)t are Re Im 1 (1+2i)t cos 2t e = et 2i −2 sin 2t 1 (1+2i)t sin 2t . e = et 2i 2 cos 2t

Now note the et in front of the solutions. This means that the solutions grow in magnitude while spinning around the origin. Hence we get a spiral source. See Figure 3.9.

108

CHAPTER 3. SYSTEMS OF ODES

Case 6. Finally suppose the complex eigenvalues have negative real part. That is, suppose the eigenvalues are −a ± ib for some a > 0. For example, let P = −1 −1 . The eigenvalues turn out 4 −1 1 1 1 to be −1 ± 2i and the eigenvectors are −2i and 2i . We take −1 − 2i and its eigenvector 2i and (1+2i)t find the real and imaginary of ve are Re Im 1 (−1−2i)t cos 2t e = e−t , 2i 2 sin 2t 1 (−1−2i)t − sin 2t . e = e−t 2i 2 cos 2t

Now note the e−t in front of the solutions. This means that the solutions shrink in magnitude while spinning around the origin. Hence we get a spiral sink. See Figure 3.10.
-3 3 -2 -1 0 1 2 3 3

2

2

1

1

0

0

-1

-1

-2

-2

-3 -3 -2 -1 0 1 2 3

-3

Figure 3.10: Example spiral sink vector field.

We summarize the behavior of linear homogeneous two dimensional systems in Table 3.1. Eigenvalues real and both positive real and both negative real and opposite signs purely imaginary complex with positive real part complex with negative real part Behavior source / unstable node sink / stable node saddle center point / ellipses spiral source spiral sink

Table 3.1: Summary of behavior of linear homogeneous two dimensional systems.

3.5. TWO DIMENSIONAL SYSTEMS AND THEIR VECTOR FIELDS

109

3.5.1 Exercises
Exercise 3.5.1: Take the equation mx + cx + kx = 0, with m > 0, c ≥ 0, k > 0 for the mass-spring system. a) Convert this to a system of first order equations. b) Classify for what m, c, k do you get which behavior. c) Can you explain from physical intuition why you do not get all the different kinds of behavior here? Exercise 3.5.2: Can you find what happens in the case when P = 1 1 . In this case the eigenvalue 0 1 is repeated and there is only one eigenvector. What picture does this look like? Exercise 3.5.3: Can you find what happens in the case when P = the pictures we have drawn?
1 1 1 1

. Does this look like any of

110

CHAPTER 3. SYSTEMS OF ODES

3.6

Second order systems and applications

Note: more than 2 lectures, §5.3 in EP

3.6.1 Undamped mass spring systems
While we did say that we will usually only look at first order systems, it is sometimes more convenient to study the system in the way it arises naturally. For example, suppose we have 3 masses connected by springs between two walls. We could pick any higher number, and the math would be essentially the same, but for simplicity we pick 3 right now. And let us assume no friction, that is, the system is undamped. The masses are m1 , m2 , and m3 and the spring constants are k1 , k2 , k3 , and k4 . Let x1 be the displacement from rest position of the first mass and, x2 and x3 the displacement of the second and third mass. We will make, as usual, positive values go right (as x1 grows, mass 1 is moving right). See Figure 3.11. k1 m1 k2 m2 k3 m3 k4

Figure 3.11: System of masses and springs. This simple system turns up in unexpected places. Note for example that our world really consists of small particles of matter interacting together. When we try this system with many more masses, this is a good approximation to how an elastic material will behave. In fact by somehow taking a limit of the number of masses going to infinity we obtain the continuous one dimensional wave equation. But we digress. Let us set up the equations for the three mass system. By Hooke’s law we have that the force acting on the mass equals the spring compression times the spring constant. By Newton’s second law we again have that force is mass times acceleration. So if we sum the forces acting on each mass and put the right sign in front of each depending on the direction in which it is acting, we end up with the system. m1 x1 = −k1 x1 + k2 (x2 − x1 ) m2 x2 = −k2 (x2 − x1 ) + k3 (x3 − x2 ) m3 x3 = −k3 (x3 − x2 ) − k4 x3 We define the matrices   m1 0 0          M =  0 m2 0        0 0 m3 = −(k1 + k2 )x1 + k2 x2 , = k2 x1 − (k2 + k3 )x2 + k3 x3 , = k3 x2 − (k3 + k4 )x3 .

and

  −(k1 + k2 )  k2 0       .   k2 −(k2 + k3 ) k3 K=       0 k3 −(k3 + k4 )

3.6. SECOND ORDER SYSTEMS AND APPLICATIONS We write the equation simply as Mx = K x.

111

At this point we could introduce 3 new variables and write out a system of 6 equations. We claim this simple setup is easier to handle as a second order system. We will call x the displacement vector, M the mass matrix, and K the stiffness matrix. Exercise 3.6.1: Do this setup for 4 masses (find the matrix M and K). Do it for 5 masses. Can you find a prescription to do it for n masses? As before we will want to “divide by M.” In this case this means computing the inverse of M. All the masses are nonzero and it is easy to compute the inverse, as the matrix is diagonal. 1   m1 0 0        0 1 0 .  −1   M =    m2    1  0 0 m3 This fact follows readily by how we multiply diagonal matrices. You should verify that MM −1 = M −1 M = I as an exercise. We let A = M −1 K and we look at the system x = M −1 K x, or x = Ax. Many real world systems can be modeled by this equation. For simplicity we will keep the given masses-and-springs setup in mind. We try a solution of the form x = veαt . We note that for this guess, x = α2 veαt . We plug into the equation and get α2 veαt = Aveαt . We can divide by eαt to get that α2 v = Av. Hence if α2 is an eigenvalue of A and v is the corresponding eigenvector, we have found a solution. In our example, and in many others, it turns out that A has negative real eigenvalues (and possibly a zero eigenvalue). So we will study only this case here. When an eigenvalue λ is negative, it means that α2 = λ is negative. Hence there is some real number ω such that −ω2 = λ. Then α = ±iω. The solution we guessed was x = v(cos ω t + i sin ω t). By again taking real and imaginary parts (note that v is real), we again find that v cos ω t and v sin ω t are linearly independent solutions. If an eigenvalue was zero, it turns out that v and vt are solutions if v is the corresponding eigenvector.

112

CHAPTER 3. SYSTEMS OF ODES

Exercise 3.6.2: Show that if A has a zero eigenvalue and v is the corresponding eigenvector, then x = v(a + bt) is a solution of x = Ax for arbitrary constants a and b. Theorem 3.6.1. Let A be an n × n with n distinct real negative eigenvalues we denote by −ω2 , −ω2 , 1 2 . . . , −ω2 , and corresponding eigenvectors v1 , v2 , . . . , vn . Then n
n

x(t) =
i=1

vi (ai cos ωi t + bi sin ωi t),

is the general solution of x = Ax, for some arbitrary constants ai and bi . If A has a zero eigenvalue and all other eigenvalues are distinct and negative, that is ω1 = 0, then the general solution becomes
n

x(t) = v1 (a1 + b1 t) +
i=2

vi (ai cos ωi t + bi sin ωi t).

Now note that we can use this solution and the setup from the introduction of this section even when some of the masses and springs are missing. Simply when there are say 2 masses and only 2 springs, take only the equations for the two masses and set all the spring constants that are missing to zero.

3.6.2 Examples
Example 3.6.1: Suppose we have the system in Figure 3.12, with m1 = 2, m2 = 1, k1 = 4, and k2 = 2. k1 m1 k2 m2

Figure 3.12: System of masses and springs.

The equations we write down are 2 0 −(4 + 2) 2 x = x. 0 1 2 −2 or x = −3 1 x. 2 −2

0 2.0 2 2. In the left plot the masses are moving in unison and the right plot are masses moving in the opposite direction.0 -1.5 10. −4 (exercise). −1 2 −c2 cos(2t − α2 ) corresponds to the mode where the masses move synchronously but in opposite directions.5 10.0 7. 0.5 0 0 0.0 2.5 -1.5 -2 0. On the other hand the second term. Let us write the solution as x= The first term. The two modes are plotted in Figure 3.0 2.5 -0. x1 = 1 c cos(2t − α2 ) c cos(2t − α2 ) = 2 . SECOND ORDER SYSTEMS AND APPLICATIONS 113 1 2 We find the eigenvalues of A to be λ = −1.13: The two modes of the mass spring system.5 0.0 7.0 Figure 3.13.0 1. x1 = 1 1 c cos(t − α2 ) + c cos(2t − α1 ).5 5.0 2 1.5 5. That is.0 1 1 0. 2 1 −1 2 1 c cos(t − α1 ) c1 cos(t − α1 ) = 1 .5 5.6.3.5 5. Now we find the eigenvectors to be 1 and −1 respectively (exercise).0 7. 2 2c1 cos(t − α1 ) corresponds to the mode where the masses move synchronously in the same direction. Hence the general solution is x= 1 1 (a1 cos t + b1 sin t) + (a cos 2t + b2 sin 2t) . 2 −1 2 The two terms in the solution represent the two so-called natural or normal modes of oscillation. And the two (angular) frequencies are the natural frequencies. the initial conditions determine the amplitude and phase shift of each mode.0 7. We check the theorem and note that ω1 = 1 and ω2 = 2.0 0.0 0.0 10.0 0.5 -2 10. .0 -1 -1 -0. The general solution is a combination of the two modes.

For this t. k m1 m2 10 meters Figure 3. the eigenvectors are 1 and −2 respectively (exercise).14. First the cars start at position 0 so x1 (0) = 0 and x2 (0) = 0. The bumper acts like a spring of spring constant k = 2 N/m. At what time after the cars link does impact with the wall happen? What is the speed of car 2 when it hits the wall? OK. Car 1 of mass 2 kg is travelling at 3 m/s towards the second rail car of mass 1 kg. Furthermore. Hence the equation is 2 0 −2 2 x = x. In this example we have two toy rail cars. We note that 1 √ ω2 = 3 and we use the second part of the theorem to find our general solution to be x= √ √ 1 1 (a1 + b1 t) + a2 cos 3 t + b2 sin 3 t = 1 −2 √ √ 3 a1 + b1 t + a2 cos √3 t + b2 sin √ t = a1 + b1 t − 2a2 cos 3 t − 2b2 sin 3 t We now apply the initial conditions. It is not hard to see that the eigenvalues are 0 and −3 1 (exercise). a1 − 2a2 . so x2 (0) = 0. Let us assume that time t = 0 is the time when the two cars link up. This system acts just like the system of the previous example but without k1 . The first car is travelling at 3 m/s. so x1 (0) = 3 and the second car starts at rest.2: Let us do another example. The first conditions says a + a2 0 = x(0) = 1 . and let x2 be the displacement of the second car from its original location. Let x1 be the displacement of the first car from the position at t = 0. We want to ask several question. See Figure 3. 2 −2 We compute the eigenvalues of A. 0 1 2 −2 or x = −1 1 x. let us first set the system up. There is a bumper on the second rail car which engages one the cars hit (it connects to two cars) and does not let go. Then the time when x2 (t) = 10 is exactly the time when impact with wall occurs. The second Car is 10 meters from a wall.14: The crash of two rail cars. SYSTEMS OF ODES Example 3. x2 (t) is the speed at impact.114 CHAPTER 3.6.

15: Position of the second car in time (ignoring the wall).22 seconds.0 0 1 2 3 4 5 6 0.0 2. . We plug a1 and a2 and differentiate to get √ √ b1 + √ b2 cos √ t 3 3 x (t) = .5 2.6.3.22 seconds from t = 0) we get that x2 (timpact ) ≈ 3. What we are really interested in is the second expression.0 Figure 3. 0 1 2 3 4 5 6 12. This means that the carts will be travelling in the positive direction as time grows.5 12.5 0.5 5. 0 b1 − 2 3 b2 1 √ .0 5. SECOND ORDER SYSTEMS AND APPLICATIONS 115 It is not hard to see that this implies that a1 = a2 = 0.  2 3 Hence the position of our Note how the presence of the zero eigenvalue resulted in a term containing t. the one for x2 . which is what we expect.0 7. At time of impact (5. Just from the graph we can see that time of impact will be a little more than 5 seconds from √ 2 time zero. √ As for the speed we note that x2 = 2 − 2 cos 3 t. We have x2 (t) = √ 2 2t − √3 sin 3 t. We are travelling at almost the maximum speed when we hit the wall. 3 It is not hard to solve these two equations to find b1 = 2 and b2 = cars is (until the impact with the wall) √   1 2t + √ sin 3 t    3  x=   2t − √ sin √3 t .5 10.0 10. See Figure 3.85. b1 − 2 3 b2 cos 3 t So √ b1 + √ b2 3 3 = x (0) = .5 7.15 for the plot of x2 versus time. √ The maximum speed is the maximum of 2 − 2 cos 3 t which is 4. Using a computer (or even a graphing calculator) we find that timpact ≈ 5. For this you have to solve the equation 10 = x2 (t) = 2t − √3 sin 3 t.

116

CHAPTER 3. SYSTEMS OF ODES

Now suppose that Bob is a tiny person sitting on car 2. Bob has a Martini in his hand and would like to not spill it. Let us suppose Bob would not spill his martini when the first car links up with car 2, but if car 2 hits the wall at any speed greater than zero, Bob will spill his drink. Suppose Bob can move the car 2 a few meters back and forth from the wall (he cannot go all the way to the wall, nor can he get out of the way of the first car). Is there a “safe” distance for him to be in? A distance such that the impact with the wall is at zero speed? Actually, the answer is yes. From looking at Figure 3.15 on the preceding page, we note the “plateau” between t = 3 and t =√ There is a point where the speed is zero. We just need to 4. 2π 4π solve x2 (t) = 0. This is when cos 3 t = 1 or in other words when t = √3 , √3 , etc. . . If we plug in
4π = √3 ≈ 7.26. So a “safe” distance is about 7 and a quarter meters from the wall. Alternatively Bob could move away from the wall towards the incoming car 2 where another 8π safe distance is √3 ≈ 14.51 and so on, using all the different t such that x2 (t) = 0. Of course t = 0 is always a solution here, corresponding to x2 = 0, but that means standing right at the wall.

x2

2π √ 3

3.6.3 Forced oscillations
Finally we move to forced oscillations. Suppose that now our system is x = Ax + F cos ω t. (3.3)

That is, we are adding periodic forcing to the system in the direction of the vector F. Just like before this system just requires us to find one particular solution x p , add it to the general solution of the associated homogeneous system xc and we will have the general solution to (3.3). Let us suppose that ω is not one of the natural frequencies of x = Ax, then we can guess x p = c cos ω t, where c is an unknown constant vector. Note that we do not need to use sine since there are only second derivatives. We solve for c to find x p . This is really just the method of undetermined coefficients for systems. Let us differentiate x p twice to get x p = −ω2 c cos ω t. Now plug into the equation −ω2 c cos ω t = Ac cos ω t + F cos ω t We can cancel the cosine and rearrange to obtain (A + ω2 I)c = −F. So c = (A + ω2 I)−1 (−F).

3.6. SECOND ORDER SYSTEMS AND APPLICATIONS

117

Of course this means that (A + ω2 I) = (A − (−ω2 )I) is invertible. That matrix is invertible if and only if −ω2 is not an eigenvalue of A. That is true if and only if ω is not a natural frequency of the system. Example 3.6.3: Let us take the example in Figure 3.12 on page 112 with the same parameters as before: m1 = 2, m2 = 1, k1 = 4, and k2 = 2. Now suppose that there is a force 2 cos 3t acting on the second cart. The equation is 0 −3 1 cos 3t. x+ x = 2 2 −2 We have solved the associated homogeneous equation before and found the complementary solution to be 1 1 (a cos t + b1 sin t) + (a cos 2t + b2 sin 2t) . xc = 2 1 −1 2 We note that the natural frequencies were 1 and 2. Hence 3 is not a natural frequency, we can try c cos 3t. We can invert (A + 32 I) −3 1 + 32 I 2 −2 Hence, c = (A + ω2 I)−1 (−F) =
7 40 −1 20 −1 40 3 20 −1

6 1 = 2 7

−1

=

7 40 −1 20

−1 40 3 20

.

0 = −2

1 20 −3 10

.

Combining with what we know the general solution of the associated homogeneous problem to be we get that the general solution to x = Ax + F cos ω t is x = xc + x p = 1 1 (a1 cos t + b1 sin t) + (a2 cos 2t + b2 sin 2t) + 2 −1
1 20 −3 10

cos 3t.

The constants a1 , a2 , b1 , and b2 must then be solved for given any initial conditions. If ω is a natural frequency of the system resonance occurs because you will have to try a particular solution of the form x p = c t sin ω t + d cos ω t. That is assuming that all eigenvalues of the coefficient matrix are distinct. Note that the amplitude of this solution grows without bound as t grows.

118

CHAPTER 3. SYSTEMS OF ODES

3.6.4 Exercises
Exercise 3.6.3: Find a particular solution to x = 0 −3 1 cos 2t. x+ 2 2 −2

Exercise 3.6.4: Let us take the example in Figure 3.12 on page 112 with the same parameters as before: m1 = 2, k1 = 4, and k2 = 2, except for m2 which is unknown. Suppose that there is a force cos 5t acting on the first mass. Find an m1 such that there exists a particular solution where the first mass does not move. Note: This idea is called dynamic damping. In practice there will be a small amount of damping and so any transient solution will disappear and after long enough time, the first mass will always come to a stop. Exercise 3.6.5: Let us take the example 3.6.2 on page 114, but that at time of impact, cart 2 is moving to the left at the speed of 3m/s. a) Find the behavior of the system after linkup. b) Will the second car hit the wall, or will it be moving away from the wall as time goes on. c) at what speed would the first car have to be travelling for the system to essentially stay in place after linkup. Exercise 3.6.6: Let us take the example in Figure 3.12 on page 112 with parameters m1 = m2 = 1, k1 = k2 = 1. Does there exist a set of initial conditions for which the first cart moves but the second cart does not? If so find those conditions, if not argue why not.

3.7. MULTIPLE EIGENVALUES

119

3.7

Multiple eigenvalues

Note: 1–2 lectures, §5.4 in EP It may very well happen that a matrix has some “repeated” eigenvalues. That is, the characteristic equation det(A − λI) = 0 may have repeated roots. As we have said before, this is actually unlikely to happen for a random matrix. If you take a small perturbation of A (you change the entries of A slightly) you will get a matrix with distinct eigenvalues. As any system you will want to solve in practice is an approximation to reality anyway, it is not indispensable to know how to solve these corner cases. But it may happen on occasion that it is easier or desirable to solve such a system directly.

3.7.1 Geometric multiplicity
Take the diagonal matrix A= 3 0 . 0 3

A has an eigenvalue 3 of multiplicity 2. We usually call the multiplicity of the eigenvalue in the characteristic equation the algebraic multiplicity. In this case, there exist 2 linearly independent eigenvectors, 1 and 0 . This means that the so-called geometric multiplicity of this eigenvalue 0 1 is 2. In all the theorems where we required a matrix to have n distinct eigenvalues, we only really needed to have n linearly independent eigenvectors. For example, let x = Ax has the general solution 1 3t 0 3t x = c1 e + c2 e . 0 1 Let us restate the theorem about real eigenvalues. In the following theorem we will repeat eigenvalues according to (algebraic) multiplicity. So for A above we would say that it has eigenvalues 3 and 3. Theorem 3.7.1. Take x = Px. If P is n × n and has n real eigenvalues (not necessarily distinct), λ1 , . . . , λn , and if there are n linearly independent corresponding eigenvectors v1 , . . . , vn , and the general solution to the ODE can be written as. x = c1 v1 eλ1 t + c2 v2 eλ2 t + · · · + cn vn eλn t . The geometric multiplicity of an eigenvalue of algebraic multiplicity n is equal to the number of linearly independent eigenvectors we can find. It is not hard to see that the geometric multiplicity is always less than or equal to the algebraic multiplicity. Above we, therefore, handled the case when these two numbers are equal. If the geometric multiplicity is equal to the algebraic multiplicity we say the eigenvalue is complete.

120

CHAPTER 3. SYSTEMS OF ODES

The hypothesis of the theorem could, therefore, be stated as saying that if all the eigenvalues of P are complete then there are n linearly independent eigenvectors and thus we have the given general solution. Note that if the geometric multiplicity of an eigenvalue is 2 or greater, then the set of linearly independent eigenvectors is not unique up to multiples as it was before. For example, for the 1 diagonal matrix A above we could also pick eigenvectors 1 and −1 , or in fact any pair of two 1 linearly independent vectors.

3.7.2 Defective eigenvalues
If an n × n matrix has less than n linearly independent eigenvectors, it is said to be deficient. Then there is at least one eigenvalue with algebraic multiplicity that is higher than the geometric multiplicity. We call this eigenvalue defective and the difference between the two multiplicities we call the defect. Example 3.7.1: The matrix 3 1 0 3 has an eigenvalue 3 of algebraic multiplicity 2. Let us try to compute the eigenvectors. 0 1 v1 = 0. 0 0 v2 We must have that v2 = 0. Hence any eigenvector is of the form v01 . Any two such vectors are linearly dependent, and hence the geometric multiplicity of the eigenvalue is 1. Therefore, the defect is 1, and we can no longer apply the eigenvalue method directly to a system of ODEs with such a coefficient matrix. The key observation we will use here is that if λ is an eigenvalue of A of algebraic multiplicity m, then we will be able to find m linearly independent vectors solving the equation (A − λI)m v = 0. We will call these the generalized eigenvectors. Let us continue with the example A = 3 1 and the equation x = Ax. We have an eigenvalue 0 3 λ = 3 of (algebraic) multiplicity 2 and defect 1. We have found one eigenvector v1 = 1 . We have 0 the solution x1 = v1 e3t . In this case, let us try (in the spirit of repeated roots of the characteristic equation for a single equation) another solution of the form x2 = (v2 + v1 t) e3t .

So our general solution to x = Ax is 1 x = c1 1 3t 0 1 c e3t + c2 te3t e + c2 + t e3t = 1 .1: Solve x = 3 1 x by first solving for x2 and then for x1 independently. If we can. If we plug the second equation into the first we find that (A − 3I)(A − 3I)v2 = 0. Now check 0 3 that you got the same solution as we did above. we are done. or (A − 3I)2 v2 = 0. Hence we can take v2 = 0 .7. and (A − 3I)v2 = v1 . therefore. and such that (A − 3I)v2 = v1 . 0 1 0 c2 e3t Let us check that we really do have the solution. any vector v2 solves (A − 3I)2 v2 = 0. First for λ of multiplicity 2.7. Now find a vector v2 such that Find v2 such that (A − 3I)2 v2 = 0. We notice that in this simple case (A − 3I)2 is just the zero matrix (exercise). This means that (A − 3I)v1 = 0. the equation for x2 does not depend on x1 . Let us describe the general algorithm. good. . find a v2 which solves (A − 3I)2 v2 = 0. Now x2 = 3c2 e3t = 3x2 . defect 1. (A − 3I)v2 = v1 . Note that the system x = Ax has a simpler solution since A is a triangular matrix. If these two equations are satisfied. Write 0 1 a 1 = . Exercise 3. First find an eigenvector v1 of λ. In particular. then x2 is a solution. First x1 = c1 3e3t + c2 e3t + 3c2 te3t = 3x1 + x2 . 121 By looking at the coefficients of e3t and te3t we see 3v2 + v1 = Av2 and 3v1 = Av1 . We know the first of these equations is satisfied because v1 is an eigenvector. Hence.3. This is just a bunch of linear equations to solve and we are by now very good at that. and Ax2 = A(v2 + v1 t) e3t = Av2 e3t + Av1 te3t . MULTIPLE EIGENVALUES We differentiate to get x2 = v1 e3t + 3(v2 + v1 t) e3t = (3v2 + v1 ) e3t + 3v1 te3t . 0 0 b 0 By inspection we see that letting a = 0 (a could be anything in fact) and b = 1 does the job. good. So we just have to make sure that (A − 3I)v2 = v1 . x2 must equal Ax2 .

We form the linearly independent solutions x1 = v1 eλt . . but let us just state the ideas. . . (A − λI)k v = 0.3: Let A = 5 −3 3 −1 . . Solve x = Ax.7. SYSTEMS OF ODES This machinery can also be generalized to larger matrices and higher defects. . You may need to find several chains for every eigenvalue.4: Let A = 2 1 0 0 2 0 0 0 2 5 −4 4 0 3 0 −2 4 −1 . We will not go over. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve x = Ax in two different ways and verify you get the same answer. x2 = (v2 + v1 t) eλt . x2 = v2 + v1 t eλt . We find vectors such that but (A − λI)k−1 v 0.122 This gives us two linearly independent solutions x1 = v1 eλt . xk = vk + vk−1 t + · · · + v2 tk−1 tk−2 + v1 eλt . 3. (A − λI)v2 = v1 . Such vectors are called generalized eigenvectors.7. (A − λI)vk = vk−1 . Exercise 3.3 Exercises Exercise 3.2: Let A = Exercise 3. .7. . (k − 2)! (k − 1)! We proceed to find chains until we form m linearly independent solutions (m is the multiplicity). a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve x = Ax. . For every eigenvector v1 we find a chain of generalized eigenvectors v2 through vk such that: (A − λI)v1 = 0. CHAPTER 3. Suppose that A has a multiplicity m eigenvalue λ.7.

in particular A = λI.8: Suppose that A is a 2 × 2 matrix with a repeated eigenvalue λ.7.7.7. Exercise 3.6: Let A = 0 4 −2 −1 −4 1 0 0 −2 .3. . a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve x = Ax. Suppose that there are two linearly independent eigenvectors. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve x = Ax. a) What are the eigenvalues? b) What is/are the defect(s) of the eigenvalue(s)? c) Solve x = Ax.5: Let A = 0 1 2 −1 −2 −2 −4 4 7 123 .7.7: Let A = 2 1 −1 −1 0 2 −1 −2 4 .7. Show that the matrix is diagonal. MULTIPLE EIGENVALUES Exercise 3. Exercise 3. Exercise 3.

Now suppose that this was one equation (P is a number or a 1 × 1 matrix).8. Then the general solution to x = Px is x = etP c. Suppose that we have the constant coefficient equation x = Px.8 Matrix exponentials Note: 2 lectures. where c is an arbitrary constant vector.8.1 Definition In this section we present a different way of finding the fundamental matrix solution of a system. What we are looking for is a vector. We usually write Pt as tP by convention when P is a matrix. Let P be an n × n matrix. It turns out the same computation works for matrices when we define ePt properly. k! Recall k! = 1 · 2 · 3 · · · k. In fact x(0) = c.124 CHAPTER 3. First let us write down the Taylor series for eat for some number a. Now if we differentiate this series (at)2 (at)3 a3 t2 a4 t3 + + · · · = a 1 + at + + + · · · = aeat . §5. Then the solution to this would be x = ePt . a+a t+ 2 6 2 6 2 Maybe we can write try the same trick here. We note that in the 1 × 1 case we would at this point multiply by an arbitrary constant to get the general solution. With this small change and by the exact same calculation as above we have that d tP e = PetP . eat = 1 + at + (at)2 (at)3 (at)4 + + + ··· = 2 6 24 ∞ k=0 (at)k . dt Now P and hence etP is an n × n matrix. and 0! = 1. Suppose that for an n × n matrix A we define the matrix exponential as 1 1 1 def eA = I + A + A2 + A3 + · · · + Ak + · · · 2 6 k! Let us not worry about convergence. . SYSTEMS OF ODES 3. The series really does always converge. Theorem 3.1. In the matrix case we multiply by a column vector c. as usual.5 in EP 3.

in general AB BA. you will see why the lack of commutativity becomes a problem. dt dt Hence etP is the fundamental matrix solution of the homogeneous system. Let us restate this as a theorem to make a point. Suppose the matrix is diagonal. that is. Therefore. D = a 0 . Then 0 b ak 0 Dk = . This equation follows because e0A = I. Suppose we actually 0 0 want to compute etA . eB = I + B. We mention a drawback of matrix exponentials. The trouble is that matrices do not commute.8. Otherwise eA+B eA eB in general. (1 − 3t) e2t 0 e2t 3t 1 − 3t 3te2t . 2tI and tB still commute (exercise: check this) and etB = I + tB. we will have another method of solving constant coefficient homogeneous systems. 0 1 0 b 0 eb 2 6 2 0 b2 6 0 b3 So by this rationale we have that eI = e 0 0 e and eaI = ea 0 . In general eA+B eA eB . To solve x = Ax. Theorem 3. 125 d tP d x= e c = PetP c = Px. 0 bk and 1 1 1 a2 0 1 a3 0 1 0 a 0 ea 0 eD = I + D + D2 + D3 + · · · = + + + + ··· = . If AB = BA then eA+B = eA eB . we take the solution x = etA b.8. For example.2 Simple cases In some instances it may work to just plug into the series definition.2. if A and B commute. So Bk = 0 for all k ≥ 2. that is. If you try to prove eA+B eA eB using the Taylor series.3. However. MATRIX EXPONENTIALS Let us check.8. Notice for example that the matrix A = 5 −3 can be written as 2I + B where B = 3 −3 . If we find a way to compute the matrix exponential. We write etA = e2tI+tB = e2tI etB = e2t 0 (I + tB) = 0 e2t = e2t 0 1 + 3t −3t (1 + 3t) e2t −3te2t = . We will find this fact useful. 3. x(0) = b. since (tB)2 = t2 B2 = 0. Notice that 2I and B commute. it is still true that if AB = BA. then eA+B = eA eB . so x(0) = e0A b = b. 0 ea This makes exponentials of certain other matrices easy to compute. It also makes it easy to solve for initial conditions. 3 −3 3 −1 and that B2 = 0 0 .

8.126 CHAPTER 3. there is only one eigenvector for the eigenvalue 2. where B2 = 0. Then we can write A = λI + B. Matrices B such that Bk = 0 for some k are called nilpotent. then either it is diagonal. In fact. the exponential is not as easy to compute as above. if a matrix A is 2 × 2 and has an eigenvalue λ of multiplicity 2. If we can do that. This is a good exercise. it is still not too difficult provided we can find enough eigenvectors. And hence by the same reasoning (BAB−1 )k = BAk B−1 . or A = λI + B where B2 = 0. Then show that (A − λI)2 = 0. You will get an equation for the entries. −1 . Now compute the square of B. Note that this matrix has a repeated eigenvalue with a defect. Computation of the matrix exponential for nilpotent matrices is easy by just writing down the first k terms of the Taylor series.3 General matrices In general. But fear not. This can be seen by writing down the Taylor series. For any two square matrices A and B.1: Suppose that A is 2 × 2 and λ is the only eigenvalue. 3. So now write down the Taylor series for −1 eBAB 1 1 −1 eBAB = I + BAB−1 + (BAB−1 )2 + (BAB−1 )3 + · · · 2 6 1 2 −1 1 3 −1 = BB−1 + BAB−1 + BA B + BA B + · · · 2 6 1 2 1 3 = B I + A + A + A + · · · B−1 2 6 A −1 = Be B . where D is diagonal. Exercise 3. This procedure is called diagonalization. you can see that the computation of the exponential becomes easy. Now we will write a general matrix A as EDE −1 . we have eBAB = BeA B−1 . We cannot usually write any matrix as a sum of commuting matrices where the exponential is simple for each one. First note that (BAB−1 )2 = BAB−1 BAB−1 = BAIAB−1 = BA2 B−1 . Hint: First write down what does it mean for the eigenvalue to be of multiplicity 2. Adding t into the mix we see that etA = EetD E −1 .8. SYSTEMS OF ODES So we have found the fundamental matrix solution for the system x = Ax. So we have found a perhaps easier way to handle this case. First we need the following interesting result about matrix exponentials.

. . . . Let λ1 .   . . . Note that it is clear from the definition that if A is real. . λn be the eigenvalues and let v1 . Since AE = ED. Hence E is invertible.  . . D=.   .8. = y 2 1 y .4) The formula (3. e = Ee E = E  .  . then etA is real. . then E = [ v1 v2 · · · vn ]. That is   λ1 0 · · · 0        0 λ2 · · · 0         . If simplified properly the final matrix will not have any complex numbers in it.. .4). This means that eA = EeD E −1 . So you will only need complex numbers in the computation and you may need to apply Euler’s formula to simplify the result.       λn t  0 0 ··· e (3. E . . . therefore.       0 0 · · · λn Now we write AE = A[ v1 v2 · · · vn ] = [ Av1 Av2 · · · Avn ] = [ λ1 v1 λ2 v2 · · · λn vn ] = [ v1 v2 · · · vn ]D = ED. Otherwise this method does not work and we need to be trickier. . .1: Compute the fundamental matrix solution using the matrix exponentials for the system x 1 2 x . in the case where we have n linearly independent eigenvectors. Now the columns of E are linearly independent as these are the eigenvectors of A. . gives the formula for computing the fundamental matrix solution etA for the system x = Ax. but we will not get into such details in this course.   .   . . .  . We let E be the matrix with the eigenvectors as columns. With t is turns into  λ1 t  0 ··· 0  e       0 eλ2 t · · · 0     −1     tA tD −1  . . we right multiply by E −1 and we get A = EDE −1 .3. . Notice that this computation still works when the eigenvalues and eigenvectors are complex. vn be the eigenvectors. MATRIX EXPONENTIALS 127 Now to do this we will need n linearly independent eigenvectors of A. Example 3. Let D be the diagonal matrix with the eigenvalues on the main diagonal. though then you will have to compute with complex numbers. ..8.

So we must find the right fundamental matrix solution. Then the particular solution 2 we are looking for is x = y e3t +e−t 2 e3t −e−t 2 e3t −e−t 2 e3t +e−t 2 4 2e3t + 2e−t + e3t − e−t 3e3t + e−t = = . Then we claim etA = X(t) [X(0)]−1 .4 Fundamental matrix solutions We note that if you can compute the fundamental matrix solution in a different way. However. We first compute (exercise) that the eigenvalues are 3 and 2 1 1 −1 and the corresponding eigenvectors are 1 and −1 .8. So perhaps we did not gain much by this new tool. 2 2e3t − 2e−t + e3t + e−t 3e3t − e−t 3. The fundamental matrix solution of a system of ODEs is not unique. All we are doing is changing what the arbitrary constants are in the general solution x(t) = X(t)c. Hence we write 1 etA = 1 1 e3t 0 1 1 1 −1 0 e−t 1 −1 1 1 e3t 0 −1 −1 −1 = 1 −1 0 e−t 2 −1 1 −1 e3t e−t −1 −1 = 2 e3t −e−t −1 1 = −1 −e3t − e−t −e3t + e−t = 2 −e3t + e−t −e3t − e−t e3t +e−t 2 e3t −e−t 2 e3t −e−t 2 e3t +e−t 2 −1 . SYSTEMS OF ODES Then compute the particular solution for the initial conditions x(0) = 4 and y(0) = 2.128 CHAPTER 3. . you can use this to find the matrix exponential etA . Let X be any fundamental matrix solution to x = Ax. which the eigenvalue method did not. The exponential is the fundamental matrix solution with the property that for t = 0 we get the identity matrix.8. It is not hard to see that we can multiply a fundamental matrix solution on the right by any constant invertible matrix and we still get a fundamental matrix solution. Hence. Obviously if we plug t = 0 into X(t) [X(0)]−1 we get the identity. by the property that e0A = I we find that the particular solution we are looking for is etA b where b is 4 . 3. the Taylor series expansion actually gives us a very easy way to approximate solutions. the computation of any fundamental matrix solution X using the eigenvalue method is just as difficult as computation of etA . The initial conditions are x(0) = 4 and y(0) = 2. Let A be the coefficient matrix 1 2 .5 Approximations If you think about it.

1. Nineteen Dubious Ways to Compute the Exponential of a Matrix. 3–49 .8. Exercise 3. 7 3 5 2 2t + 2t + 3 t 1 + t + 2 t + 13 t3 6 Just like the Taylor series approximation for the scalar version.F.1 A = 1.3: Find eAt for the matrix A = ∗ 2 3 0 2 .22670818 9.33333333 6.13 3 1.8.12716667 0.22233333 1. 3. For larger t.1 A + A + A = .8. the approximation will be better for small t and worse for larger t. C.22251069 .12734811 0. 2 1 5 1 2 t2 2 t3 3 2 2 2 3 e ≈ I + tA + A + A = I + t +t 5 +t 2 6 2 1 2 2 tA 13 6 7 3 7 3 13 6 5 2 t 2 2 = = 1+t+ + 13 t3 2 t + 2 t2 + 7 t3 6 3 .33333333 .85882874 . y = x + 3y.12734811 This is not bad at all.6 Exercises Exercise 3. 2003.12 2 0. There are better ways to approximate the exponential∗ . Moler and C.2: Find a fundamental matrix solution for the system x = 3x + y. MATRIX EXPONENTIALS 129 The simplest thing we can do is to just compute the series up to a certain number of terms. 6.22251069 1. The approximate solution is approximately (rounded to 8 decimal places) e 0.1 into the real solution (rounded to 8 decimal places) we get e0.66666667 6. Let us see how we stack up against the real solution with t = 0. 9. Although if you take the same approximation for t = 1 you get (using the Taylor series) 6. Van Loan. SIAM Review 45 (1).12716667 2 6 And plugging t = 0. few terms of the Taylor series give a reasonable approximation for the exponential and may suffice for the application. In many cases however.22233333 ≈ I + 0.8.85882874 10. To get a good approximation at t = 1 (say up to 2 decimal places) you would need to go up to the 11th power (exercise).1 A 0. let us compute the first 4 terms of the series for the matrix A = 1 2 . 0. you will generally have to compute more terms. 0. For example.3.22670818 So the approximation is not very good once we get up to t = 1. Twenty-Five Years Later.66666667 while the real value is (again rounded to 8 decimal places) 10.

7: Use exercise 3. x2 = x1 + 2x2 + x3 .8.8. Hint: Use diagonalization and the fact that the identity matrix commutes with every other matrix. SYSTEMS OF ODES Exercise 3. Exercise 3. 3 Exercise 3. b) Find the fundamental matrix solution to x = Ax. and corresponding eigenvectors 1 . 1.8.8. .6: Suppose AB = BA (matrices commute).8. Then find the solution that satisfies x = Exercise 3.5: Compute the matrix exponential eA for A = 1 2 0 1 0 1 −2 .8: Suppose A is a matrix with eigenvalues −1. Exercise 3. Suppose that there are n linearly independent eigenvectors.130 CHAPTER 3. 1 0 . x3 = −3x1 − 2x2 − 5x3 .4: Find a fundamental matrix solution for the system x1 = 7x1 + 4x2 + 12x3 .9: Suppose that A is an n × n matrix with a repeated eigenvalue λ of multiplicity n. Show that eA+B = eA eB .8. Show that the matrix is diagonal. 1 c) Solve the system in with initial conditions x(0) = 2 .8. in particular A = λI. a) Find matrix A with these properties. . Exercise 3.6 to show that (eA )−1 = e−A . In particular this means that eA is invertible even if A is not.

Therefore.9 Nonhomogeneous systems Note: 3 lectures (may have to skip a little). Hence. The first method we will look at is the integrating factor method. NONHOMOGENEOUS SYSTEMS 131 3.9. We multiply both sides of the equation by etP (being mindful that we are dealing with matrices which may not commute) to obtain etP x (t) + etP Px(t) = etP f (t). Recall from exercise 3.8. For simplicity we rewrite the equation as x (t) + Px(t) = f (t). somewhat different from §5.1 First order constant coefficient Integrating factor Let us first focus on the nonhomogeneous first order equation x (t) = Ax(t) + f (t).9. we obtain x(t) = e−tP etP f (t) dt + e−tP c. where A is a constant matrix. dt We can now integrate. we integrate each component of the vector separately etP x(t) = etP f (t) dt + c. d tP e x(t) = etP f (t).3.6 in EP 3. This fact follows by writing down the series definition of etP . . 1 1 PetP = P I + I + tP + (tP)2 + · · · = P + tP2 + t2 P3 + · · · = 2 2 1 = I + I + tP + (tP)2 + · · · P = PetP . where P = −A.7 that (etP )−1 = e−tP . That is. 2 We have already seen that d dt etP = PetP . We notice that PetP = etP P.

Let us do it in stages. (3. Example 3.1: Suppose that we have the system x1 + 5x1 − 3x2 = et . (1 + 3t) e2t −3te2t (1 − 3t) e−2t 3te−2t . First t t e f (s) ds = sP 0 0 t (1 + 3s) e2s −3se2s es ds 2s 2s 3se (1 − 3s) e 0 (1 + 3s) e3s ds 3se3s te3t . with initial conditions x1 (0) = 1. 0 x(0) = e−0P 0 e sP f (s) ds + e−0P b = I b = b.5) Again. x2 + 3x1 − x2 = 0. Let us write the system as x + 5 −3 et x= .132 CHAPTER 3. It is not hard to see that (3. 0 We have previously computed etP for P = 5 −3 . = 0 = (3t−1) e3t +1 3 . the integration means that each component of the vector e sP f (s) is integrated separately. We immediately also have e−tP . 3 −1 0 x(0) = 1 . x2 (0) = 0. etP = 3te2t (1 − 3t) e2t −3te−2t (1 + 3t) e−2t Instead of computing the whole formula at once.5) really does satisfy the initial condition x(0) = b. e−tP = . Suppose we have the equation with initial conditions x (t) + Px(t) = f (t). SYSTEMS OF ODES Perhaps it is better understood as a definite integral. In this case it will be easy to also solve for the initial conditions as well. just by negating 3 −1 t.9. The solution can then be written as t x(0) = b. x(t) = e −tP 0 e sP f (s) ds + e−tP b.

1 + 3 − 2t e−2t Phew! Let us check that this really works. . (3. we wish to write our solution as a linear combination of the eigenvectors of A. vn . (3. Take the equation x (t) = Ax(t) + f (t). For systems.7) That is. P is constant. (3.8) . Let us decompose f in terms of the eigenvectors as well. x1 + 5x1 − 3x2 = (4te−2t − 4e−2t ) + 5(1 − 2t) e−2t + et − (1 − 6t) e−2t = et .3. If we can solve for the scalar functions ξ1 through ξn we have our solution x. we note that the eigenvectors of a matrix give the directions in which the matrix acts like a scalar. Write f (t) = v1 g1 (t) + v2 g2 (t) + · · · + vn gn (t). We can put those solutions together to get the general solution. Let us write x(t) = v1 ξ1 (t) + v2 ξ2 (t) + · · · + vn ξn (t). The initial conditions are also satisfied as well (exercise). .6) Assume that A has n linearly independent eigenvectors v1 . Similarly (exercise) x2 + 3x1 − x2 = 0. . because matrices generally do not commute. that is. the integrating factor method only works if P does not depend on t. The problem is that in general d e dt P(t) dt P(t) e P(t) dt . .9. Eigenvector decomposition For the next method. NONHOMOGENEOUS SYSTEMS Then t 133 x(t) = e = = = −tP 0 e sP f (s) ds + e−tP b te3t (3t−1) e3t +1 3 −2t (1 − 3t) e−2t 3te−2t −3te−2t (1 + 3t) e−2t te t −e 3 + (1 − 3t) e−2t 3te−2t 1 −2t −2t −3te (1 + 3t) e 0 −2t + 1 3 +t e −2t + (1 − 3t) e −3te−2t t −e 3 (1 − 2t) e−2t . If we solve our system along these directions these solutions would be simpler as we can treat the matrix as a scalar.

We take c = E −1 b and note b = v1 a1 + · · · + vn an . we will get the general solution. for example for the kth equation we write ξk (t) − λk ξk (t) = gk (t). We use the integrating factor e−λk t to find that d ξk (t) e−λk t = e−λk t gk (t). Hence it is always possible to find g when there are n linearly independent eigenvectors. and we are done. the matrix E = [ v1 v2 · · · vn ] is invertible.8) can be written as f = Eg. If we leave these constants in. Each one of these equations is independent of the others. you could set Ck to be zero. it is perhaps better to write these integrals as definite integrals. Then g = E −1 f . . SYSTEMS OF ODES That is. where the components of g are the functions g1 through gn . just like before.7) into (3. . ξn = λn ξn + gn .8). Then if we write ξk (t) = eλk t 0 t e−λk s gk (s) dt + ak eλk t . we wish to find g1 through gn that satisfy (3. dx Now we integrate and solve for ξk to get ξk (t) = eλk t e−λk t gk (t) dt + Ck eλk t . We see that (3. and note that Avk = λk vk . They are all linear first order equations and can easily be solved by the standard integrating factor method for single equations. We plug (3. Again. . x = v1 ξ1 + v2 ξ2 + · · · + vn ξn = A v1 ξ1 + v2 ξ2 + · · · + vn ξn + v1 g1 + v2 g2 + · · · + vn gn = Av1 ξ1 + Av2 ξ2 + · · · + Avn ξn + v1 g1 + v2 g2 + · · · + vn gn = v1 λ1 ξ1 + v2 λ2 ξ2 + · · · + vn λn ξn + v1 g1 + v2 g2 + · · · + vn gn = v1 (λ1 ξ1 + g1 ) + v2 (λ2 ξ2 + g2 ) + · · · + vn (λn ξn + gn ). as always. ξ2 = λ2 ξ2 + g2 .134 CHAPTER 3. Suppose that we have an initial condition x(0) = b. That is. . Write x(t) = v1 ξ1 (t) + v2 ξ2 (t) + · · · + vn ξn (t). We note that since all the eigenvectors of A are independent.6). Note that if you are looking for just any particular solution. If we identify the coefficients of the vectors v1 through vn we get the equations ξ1 = λ1 ξ1 + g1 .

−1 1 E −1 = 1 −1 2et 2t 1 1 −1 . 1 where ξ1 (0) = a1 = .2: Let A = 1 3 . Thus 1 1 1 −1 2et 2et g1 et − t = t . because ξk (0) = ak . (−2ξ1 ) + −1 1 2 −1 1 We solve with integrating factor. 3/16 Example 3. We further want to write x(0) in terms of the eigenvectors. We also wish to write f in terms 1 1 = −1 g1 + 1 g2 .3. Hence a1 = E −1 a2 So a1 = 1 4 3 16 −5 16 = 1 4 −1 16 . Computation of the integral is left as an exercise to the student. we wish to write x(0) = 3/16 1 1 −5/16 = −1 a1 + 1 a2 . and a2 = −1 . Note that you will need integration by parts.9. 2 1 1 We are looking for a solution of the form x = of the eigenvectors. 4 −1 where ξ2 (0) = a2 = . NONHOMOGENEOUS SYSTEMS 135 we will actually get the particular solution x(t) = v1 ξ1 (t) + v2 ξ2 (t) + · · · + vn ξn (t) satisfying x(0) = b. 16 We plug our x into the equation and get that 1 1 1 1 1 1 ξ2 = A ξ1 + A ξ2 + g1 + g ξ1 + −1 1 −1 1 −1 1 2 = We get the two equations ξ1 = −2ξ1 + et − t. 3 2 4 . = E −1 = g2 2t e +t 2 1 1 2t So g1 = et − t and g2 = et + t. That is. ξ2 = 4ξ2 + et + t. We write down the matrix E of the eigenvectors and compute its inverse (using the inverse formula for 2 × 2 matrices) t E= 1 1 .9. Solve x = Ax + f where f (t) = 2e for x(0) = −5/16 . 3 1 2t 1 The eigenvalues of A are −2 and 4 and the corresponding eigenvectors are −1 and 1 respec1 tively. This calculation is left as an exercise. ξ1 = e−2t e2t (et − t) dt + C1 e−2t = et t 1 − + + C1 e−2t . 16 1 1 1 1 t 4ξ + (et − t) + (e − t). That is we wish to write f = ξ1 + 1 ξ2 .

1 1 for some arbitrary constants α1 and α2 . let us just do an example. something of the form aet appears in the complementary solution. The only difference here is that we will have to take unknown vectors rather than just numbers. As ξ1 (0) = ξ2 = e4t As ξ2 (0) = 1 16 1 4 CHAPTER 3. We would want to guess a particular solution of x = aet + bt + c. Similarly 4 e−4t (et + t) dt + C2 e4t = − −1 16 et t 1 − − + C2 e4t . −2 1 Note that we can solve this system in an easier way (can you see how). Undetermined coefficients The method of undetermined coefficients also still works.3: Let A = −1 0 . x1 = e4t −e−2t 3 et − e−2t 1 − 2t 1 + + 1 3 4 3−12t 16 e4t − et 4t + 1 = − 3 16 4t−5 . Example 3. The method can turn into a lot of tedious work. furthermore if the right hand side is complicated.1: Check that x1 and x2 solve the problem. 3 4 16 we have that = −1 3 − 1 16 1 + C2 and hence C2 = 3 . you will have lots of variables to solve for.9. SYSTEMS OF ODES then 1 4 = 1 3 1 + 1 + C1 and hence C1 = − 3 . Check both that they satisfy the differential equation and that they satisfy the initial conditions. Because we do not yet know the vector if the a is a multiple of 0 we do not know if a conflict arises. The solution is 1 x(t) = −1 That is. 16 e4t −e−2t + 3−12t 3 16 −2t +e4t +2et e + 4t−5 3 16 . This method does not always work.136 C1 is the constant of integration. The eigenvalues of A are −1 and 1 and the corresponding eigenvectors are 1 and 0 respec1 1 tively. As this method is essentially the same as it is for single equations. + and x2 = e−2t +e4t +2et 3 + Exercise 3. So in system of 3 equations if you have say 4 unknown vectors (this would not be uncommon). then you already have 12 unknowns that you need to solve for. but for the purposes of the example. It may very 1 . Hence our complementary solution is t xc = α1 1 −t 0 t e + α2 e. In this case you can think of each element of an unknown vector as an unknown number. However.9. Find a particular solution of x = Ax + f where f (t) = et . Same caveats apply to undetermined coefficients for systems as they do for single equations. let us use the eigenvalue method plus undetermined coefficients.

The remaining equations that tell us something are a1 = −a1 + 1. Therefore.c= c1 c2 . Here we find the crux of the difference for systems. d1 = 0. So a1 = 1 and b2 = −1. not just btet . but it is easier to just do this in an ad hoc manner. b1 b2 .9. We are looking for just a 2 single solution so presumably the simplest one is when a2 = 0. a1 + b1 = −a1 + 1. tet + et + −2d1 + d2 t −2c1 + c2 −2a1 + a2 −2b1 + b2 Now we identify the coefficients of et . You would add this to the complementary solution to get the 2 general solution of the problem. x1 = 1 et . Plugging these back in we get that c2 = −1 and d2 = −1. t and any constants. Immediately we see that b1 = 0. Notice also that both aet and btet really was needed. tet . but to be safe we should also try btet . x2 = −tet − t − 1. x = aet + btet + ct + d = 1 2 0 et + 1 t 0 0 0 e 2 = tet + t+ . First let us compute x . 0 = −2c1 + c2 + 1. You want to try both aet and btet in your solution. We write a = a1 . Therefore. and d = d1 d2 . c2 = −2d1 + d2 .3. c1 = −d1 . b1 = −b1 . . 0 = −c1 . We could write this is an 8 × 9 augmented matrix and start row reduction. c1 = 0. t −1 −1 −1 −te − t − 1 That is. a2 + b2 = −2a1 + a2 . b = a2 this into the equation. a2 can be arbitrary and still satisfy the equation. Now x must equal Ax + f so Ax + f = Aaet + Abtet + Act + Ad + f = = et −d1 −c1 −a1 −b1 + t+ . b2 = −2b1 + b2 . Thus we have 8 unknowns. a2 + b2 = −2a1 + a2 . We have to plug x = a + b et + btet + c. NONHOMOGENEOUS SYSTEMS 137 well not. we try x = aet + btet + ct + d.

suppose that you have solved the associated homogeneous equation x = A(t) x and found the fundamental matrix solution X(t).9) to obtain x p (t) = X (t) u(t) + X(t) u (t) = A(t) X(t) u(t) + f (t). then u (t) = [X(t)]−1 f (t). The equation we had done was very simple. then [X(t)]−1 = e−tA and hence we get a solution x p = etA e−tA f (t) dt which is precisely what we got using the integrating factor method. Now integrate to obtain u and we have the particular solution x p = X(t) u(t). this is essentially the same thing as the integrating factor method we discussed earlier. Hence X(t) u (t) = f (t). But X is the fundamental matrix solution to the homogeneous problem so X (t) = A(t)X(t). the computations can get out of hand pretty quickly for systems. Also try setting a2 = 1 and again check these solutions. The general solution to the associated homogeneous equation is X(t)c for a constant vector c. Just like for variation of parameters for single equation we try the solution to the nonhomogeneous equation of the form x p = X(t) u(t). If we compute [X(t)]−1 . undetermined coefficients works exactly the same as it did for single equations. even if it is not constant coefficient. However this method will work for any linear system. In fact for constant coefficient systems. Note that if A is constant and you let X(t) = etA . there is the method of variation of parameters. Let us write this as a formula x p = X(t) [X(t)]−1 f (t) dt. However. 3.9) Further. other than the handling of conflicts.2: Check that x1 and x2 solve the problem. where u(t) is a vector valued function instead of a constant.9.2 First order variable coefficient Just as for a single equation. Example 3.9. Suppose we have the equation x = A(t) x + f (t). (3. and thus X (t) u(t) + X(t) u (t) = X (t) u(t) + f (t).4: Find a particular solution to x = t2 1 t −1 t 2 (t + 1).9. SYSTEMS OF ODES Exercise 3. x+ 1 t 1 +1 (3. Now substitute into (3.10) .138 CHAPTER 3. What is the difference between the two solutions we can obtain in this way? As you can see. provided you have somehow solved the associated homogeneous problem.

3 3 In the variation of parameters.3 Second order constant coefficients Undetermined coefficients We have already previously did a simple example of the method of undetermined coefficients for second order systems in § 3.6. x= 1 −t c1 + t 1 c2 1 4 t 3 2 3 t + 3 t = 1 c1 − c2 t + 3 t4 .10) is x p = X(t) = = = = 1 −t t 1 1 −t t 1 t2 1 1 t .9. There are some simplifications that you can make however as we did in § 3.9. Once we know the complementary solution we can easily find t 1 a solution to (3.6.10). That is.3. 3. we will add X(t)c for a vector of arbitrary constants. If F(t) is of the form F0 cos ω t. + 1 −t 1 [X(t)]−1 f (t) dt 1 1 t t 2 (t + 1) dt 2 + 1 −t 1 1 t 2t dt −t2 + 1 1 −t t2 1 3 t 1 −3 t + t 1 4 t 3 2 3 t + 3 t . you find that t +1 X = 1 −t solves X (t) = A(t)X(t).3: Check that x1 = 1 t4 and x2 = 2 t3 + t really solve (3.10). where A is a constant matrix. NONHOMOGENEOUS SYSTEMS 139 t Here A = t21 1 −1 is most definitely not constant. .10). then you can try a solution of the form x p = c cos ω t. First we find [X(t)]−1 = Next we know a particular solution to (3. This method is essentially the same as undetermined coefficients for first order systems. Adding the complementary solution we have that the general solution to (3. just like in the integrating factor method we can obtain the general solution by adding in constants of integration. Let the equation be x = Ax + F(t). Perhaps by a lucky guess. But that is precisely the complementary solution. 2 c2 + (c1 + 1) t + 3 t3 Exercise 3.9.

But it is useful to save some time and effort. we can do eigenvector decomposition. . . And again g = E −1 F. . However. If the F is a sum of cosines. so if F(t) = F0 cos ω0 t + F1 cos ω1 t. . Decompose F in terms of the eigenvectors F(t) = v1 g1 (t) + v2 g2 (t) + · · · + vn gn (t). just like for first order systems. Again form the matrix E = [ v1 · · · vn ]. you could try a cos ω0 t for the problem x = Ax + F0 cos ω0 t. Now plug in and doing the same thing as before x = v1 ξ1 + v2 ξ2 + · · · + vn ξn = A v1 ξ1 + v2 ξ2 + · · · + vn ξn + v1 g1 + v2 g2 + · · · + vn gn = Av1 ξ1 + Av2 ξ2 + · · · + Avn ξn + v1 g1 + v2 g2 + · · · + vn gn = v1 λ1 ξ1 + v2 λ2 ξ2 + · · · + vn λn ξn + v1 g1 + v2 g2 + · · · + vn gn = v1 (λ1 ξ1 + g1 ) + v2 (λ2 ξ2 + g2 ) + · · · + vn (λn ξn + gn ).140 CHAPTER 3. or the equation is of the form x = Ax + Bx + F(t). If you have found the general solution for ξ1 through ξn . Let λ1 . and we are done. and you would try b cos ω1 t for the problem x = Ax + F0 cos ω1 t. Write x(t) = v1 ξ1 (t) + v2 ξ2 (t) + · · · + vn ξn (t). . λn be the eigenvalues and v1 . Actually you will never go wrong with putting in more terms than needed into your guess. . vn be the eigenvectors. then again x(t) = v1 ξ1 (t) + · · · + vn ξn (t) is the general solution. Identify the coefficients of the eigenvectors to get the equations ξ1 = λ1 ξ1 + g1 . SYSTEMS OF ODES and you do not need to introduce sines. ξ2 = λ2 ξ2 + g2 . . then you need to do the same thing as you do for first order systems. . . . Now solve each one of these using the methods of chapter 2. . if there is duplication with the complementary solution. Each one of these equations is independent of the others. ξn = λn ξn + gn . . Now write x(t) = v1 ξ1 (t) + · · · + vn ξn (t). Then sum the solutions. Eigenvector decomposition If we have the system x = Ax + F(t). we have a particular solution. you note that we still have the superposition principle. You will just find that the extra coefficients will turn out to be zero.

3. b) using eigenvector decomposition.9.6: Find the general solution to x1 = −6x1 + 3x2 + cos t.4: Find a particular solution to x = x + 2y + 2t. 3 −9C1 cos 3t = −C1 cos 3t + Each of these we solve separately: we get −9C1 = −C1 + 2 C1 = −1 and C2 = 15 .9. b) using eigenvector decomposition.9. 3 2 −1 and 1 −1 .6.4 Exercises Exercise 3. Therefore. a) using eigenvector decomposition. Exercise 3. a) Using integrating factor method. x+ 2 −2 2 1 2 141 The eigenvalues were −1 and −4. Exercise 3. This matches what we got previously in § 3. y = 3x + 2y − 4.9. So E = 1 1 2 −1 and So E −1 = g1 0 1 1 1 = E −1 F(t) = = 3 2 −1 2 cos 3t g2 2 cos 3t 3 −2 cos 3t 3 . So after the whole song and dance of plugging in.6 using this method. x2 = 2x1 − 7x2 + 3 cos t.5: Find the general solution to x = 4x + y − 1. b) using undetermined coefficients. . y = x + 4y − et .5: Let us do the example from § 3.9. So our particular solution is 12 x= 1 2 1 −1 cos 3t + 12 −1 2 3 and −9C2 = −4C2 − 2 . We plug in 2 cos 3t. 3 2 −9C2 cos 3t = −4C2 cos 3t − cos 3t. the equations we get are ξ1 = −ξ1 + 2 cos 3t.3. with eigenvectors 1 1 1 . The equation is x = −3 1 0 cos 3t. c) using undetermined coefficients. 3 2 ξ2 = −4 ξ2 − cos 3t. c) using undetermined coefficients. 3 For each we can try the method of undetermined coefficients and try C1 cos 3t for the first equation and C2 cos 3t for the second equation.9. And hence 3 1 20 −3 10 2 cos 3t = 15 cos 3t. NONHOMOGENEOUS SYSTEMS Example 3. a) Using integrating factor method.

. SYSTEMS OF ODES Exercise 3. −t is the complementary solution. b) Use variation of parameters to find a particular solution.142 CHAPTER 3.8: Take the equation x = a) Check that xc = c1 t sin t t cos t + c2 −t cos t t sin t 1 t −1 1 t 1 x+ t2 .9.7: Find the general solution to x1 = −6x1 + 3x2 + cos 2t. b) using undetermined coefficients.9. Exercise 3. x2 = 2x1 − 7x2 + 3 cos 2t. a) using eigenvector decomposition.

That is. Hence there are infinitely many solutions x = B sin t for an arbitrary constant B. where x(t) is defined for t in the interval [a. b]. which is x = A cos t + B sin t. similar to §3.1. For example. The condition x(0) = 0 forces A = 0. write down the general solution of the differential equation. we need to study the so-called boundary value problems (or endpoint problems).1 Boundary value problems Note: 2 lectures. In fact.1 Boundary value problems Before we tackle the Fourier series. we now specify the value of the solution at two different points.Chapter 4 Fourier series and PDEs 4. b = π. x(π) = 0.1. Unlike before when we specified the value of the solution and its derivative at a single point. so existence of solutions is not an issue here.8 in EP 4. x(0) = 0. The general solution to x + λx = 0 will have two arbitrary constants present. But letting x(π) = 0 does not give us any more information as x = B sin t already satisfies both conditions. so it is natural to think that requiring two conditions will guarantee a unique solution.1: However take λ = 1. x(b) = 0. Example 4. x + x = 0. x(a) = 0. Note that x = 0 is a solution to this equation. a = 0. for some constant λ. Uniqueness is another issue. 143 . suppose we have x + λx = 0. Then x = sin t is another solution satisfying both boundary conditions.

let L = − dt2 .2) We will have to handle the cases λ > 0. This problem is an analogue of finding eigenvalues and eigenvectors of matrices. x(π) = 0.2: On the other hand. So x = 0 is the unique solution to this problem. x (a) = x (b).1.1.1) (resp. λ = 0.1) (4. 4.3) A number λ will be considered an eigenvalue of (4. (4. then the general solution to x + λx = 0 is √ √ x = A cos λ t + B sin λ t. but we will postpone this until chapter 5. sin 2 π 0 and hence B = 0. λ < 0 separately. x(0) = 0. x + λx = 0. So what is going on? We will be interested in classifying which constants λ imply a nonzero solution. The condition x(0) = 0 implies immediately A = 0. The similarity is not just coincidental. Example 4. For the basic Fourier series theory we will need only the following three cases.2 Eigenvalue problems In general we will consider more equations.3)) if and only if there exists a nonzero solution to (4. and we will be interested in finding those solutions.2) or (4. x (b) = 0. x(a) = x(b). (4. and x + λx = 0. x(a) = 0. change to λ = 2.3)) given that specific λ. x (a) = 0. then we are doing the same d2 exact thing.1) (resp. If we think of the equations as differential operators. x(0) = 0. The nonzero solution we found will be said to be the corresponding eigenfunction. though we will not pursue this line of reasoning too far. But √ now letting 0 = x(π) = B sin 2 π. A lot of the formalism from linear algebra can still apply here.144 CHAPTER 4. FOURIER SERIES AND PDES Example 4. Next √ 0 = x(π) = B sin λ π.1. . Note the similarity to eigenvalues and eigenvectors of matrices. x(b) = 0. (4. Letting x(0) = 0 still forces A = 0. x + 2x = 0. x + λx = 0. x(π) = 0. First suppose that λ > 0. (4.3: Let us find the eigenvalues and eigenfunctions of x + λx = 0.2) or (4. √ √ Then the general solution is x = A cos √ 2 t + B sin 2 t. For example. then we are looking for eigenfunctions f satisfying certain endpoint conditions that solve (L − λ) f = 0.

This means that B could be anything (let us take it to be 1). So to get a nonzero solution we must have that √ √ sin λ π = 0. Finally. √ √ First suppose that λ > 0. you should plot sinh to see this. so we only need to pick one. x (0) = 0. Now suppose that λ = 0. Now suppose that λ = 0. In this case the equation is x = 0 and the general solution is x = At + B.1. x (0) = 0 implies that A = 0. Hence the positive eigenvalues are k2 for all integers k ≥ 1. or λ = k for a positive integer k. Letting x(0)√= 0 implies that A = 0 (recall cosh 0 = 1 and sinh 0 = 0). Hence et = e−t which implies t = −t and that is only true if t = 0. So x = −A sin √ √ λ t + B cos λ t The condition x (0) = 0 implies immediately B = 0. then the general solution to x + λx = 0 is x = A cos λ t + B sin λ t. Again we will have to handle the cases λ > 0. let λ < 0. √ √ Again A should not be zero. Just like for eigenvectors.4: Let us also compute the eigenvalues and eigenfunctions of x + λx = 0. Hence. we get all the multiples of an eigenfunction. So our solution must be x = B sinh −λ t and satisfy x(π) = 0. λ π must be an integer multiple of π. Example 4. This means that λ = 0 is not an eigenvalue. This is only possible if B is zero. x (π) = 0. . The corresponding eigenfunctions can be taken as x = sin kt. In this case we have the general solution x = A cosh −λ t + B sinh −λ t and hence √ √ x = A sinh −λ t + B cosh −λ t. Hence the positive eigenvalues are again k2 for all integers k ≥ 1. λ = 0. Next √ 0 = x (π) = −A sin λ π. So there are 2 no negative eigenvalues. In this case we have the general solution √ √ x = A cosh −λ t + B sinh −λ t. BOUNDARY VALUE PROBLEMS 145 If B√is zero then x is not a nonzero solution. And the corresponding eigenfunctions can be taken as x = cos kt. In this case the equation is x = 0 and the general solution is x = At + B so x = A. λ < 0 separately. Why? Because sinh ξ is only zero for ξ = 0. Obviously setting x (π) = 0 does not get us anything new.1.4. x(0) = 0 implies that B = 0 and then x(π) = 0 implies that A = 0. and sin λ π is only zero if λ = k for a positive integer k. So λ = 0 is an eigenvalue and x = 1 is the corresponding eigenfunction. √ √ Finally. Also we can just look at the definition t −t 0 = sinh t = e −e . In summary. let λ < 0. the eigenvalues and corresponding eigenfunctions are λk = k2 with an eigenfunction xk = sin kt for all integers k ≥ 1.

but rather that they are the same at the beginning and at the end of the interval. Hence there are no negative eigenvalues. √ √ √ √ A cos λ π − B sin λ π = A cos λ π + B sin λ π. the eigenvalues and corresponding eigenfunctions are λk = k2 with an eigenfunction xk = sin kt for all integers k ≥ 1. Similarly (exercise) if we differentiate x and plug in the √ second condition we find that A = 0 or sin √ π = 0. the eigenvalues and corresponding eigenfunctions are λk = k2 λ0 = 0 with the eigenfunctions with an eigenfunction cos kt and x0 = 1.146 CHAPTER 4. In summary. Therefore. In this case however. the general solution is x = At + B. The computations are the same and again we find that there are no negative eigenvalues. So we have two linearly independent eigenfunctions sin kt and cos kt. Let us skip λ < 0. FOURIER SERIES AND PDES We have already seen (with roles of A and B switched) that for this to be zero at t = 0 and t = π it implies that A = B = 0. x(−π) = x(π). Remember that for a matrix we could also have had two eigenvectors corresponding to an eigenvalue if the eigenvalue was repeated.5: Let us compute the eigenvalues and eigenfunctions of x + λx = 0. Therefore. sin kt for all integers k ≥ 1.1. unless we want A and B to both λ √ be zero (which we do not) we must have sin λ π = 0. and there is another eigenvalue λ0 = 0 with an eigenfunction x0 = 1. The second condition x (−π) = x (π) says nothing about B and hence λ = 0 is an eigenvalue with a √ corresponding eigenfunction x = 1. x (−π) = x (π). √ For λ > 0 we get that x = A cos λ t + B sin λ t. √ and hence either B = 0 or sin λ π = 0. Therefore. The condition x(−π) = x(π) implies that A = 0 (Aπ + B = −Aπ + B implies A = 0). This problem is be the one that leads to the general Fourier series. You should notice that we have not specified the values or the derivatives at the endpoints. We could also do this for a little bit more complicated boundary value problem. x = A cos kt + B sin kt is an eigenfunction for any A and any B. Example 4. Now √ √ √ √ A cos − λ π + B sin − λ π = A cos λ π + B sin λ π. . We remember that cos −θ = cos θ and sin −θ = − sin θ. In summary. λ is an integer and hence the eigenvalues are yet again λ = k2 for an integer k ≥ 1. For λ = 0.

x(π) = 0. First note that we have the following two equations. Then they are orthogonal in the sense that b x1 (t)x2 (t) dt = 0. We will not prove this fact here. For example. Theorem 4.1. The theorem has a very short.3).1. Multiply the first by x2 and the second by x1 and subtract to get (λ1 − λ2 )x1 x2 = x2 x1 − x2 x1 .1 (easy): Finish the theorem (check the last equality in the proof) for the cases (4.1. if we consider (4.4. and illuminating proof so let us give it here. The last equality holds because of the boundary conditions. a Note that the terminology comes from the fact that the integral is a type of inner product. x1 + λ1 x1 = 0 and x2 + λ2 x2 = 0. . As λ1 λ2 . therefore.1. (4.1. A matrix is called symmetric if A = AT . 0 when m n. get the following theorem. This is an analogue of the following fact about eigenvectors of a matrix.1). b b (λ1 − λ2 ) a x1 x2 dt = a b x2 x1 − x2 x1 dt d x x1 − x2 x1 dt dt 2 b t=a = a = x2 x1 − x2 x1 = 0. Now integrate both sides of the equation. Hence we have the integral π (sin mt)(sin nt) dt = 0.3) for two different eigenvalues λ1 and λ2 .1) we have x1 (a) = x1 (b) = x2 (a) = x2 (b) = 0 and so x2 x1 − x2 x1 is zero at both a and b. That symmetry is required. x(0) = 0. the theorem follows. Eigenvectors for two distinct eigenvalues of a symmetric matrix are orthogonal.2) or (4. elegant.2) and (4. Exercise 4. We. We have seen previously that sin nt was an eigenfunction for the problem x +λx = 0. Suppose that x1 (t) and x2 (t) are two eigenfunctions of the problem (4. The differential operators we are dealing with act much like a symmetric matrix. BOUNDARY VALUE PROBLEMS 147 4.3 Orthogonality of eigenfunctions Something that will be very useful in the next section is the orthogonality property of the eigenfunctions. We will expand on this in the next section.

A lot of intuition from linear algebra can be applied for linear differential operators. −π 4. (cos mt)(cos nt) dt = 0. The Fredholm alternative then states that either (A − λI)x = 0 has a nontrivial solution. On the other hand if λ is an eigenvalue.2 (Fredholm alternative∗ ). even if it happens to have a solution.5) has a unique solution for every right hand side. And finally we also get π when m n. So it is no surprise that there is a finite dimensional version of Fredholm alternative for matrices as well. We will give a slightly more general version in chapter 5. Theorem 4.1. x(b) = 0 (4. −π π when m when m n. We also want to reinforce the idea here that linear differential operators have much in common with matrices.4) has a unique solution for every continuous function f .5) x(a) = 0. Then either x + λx = 0. then (4. n.5) need not have a solution for every f . . x(a) = 0. but one must be careful of course.148 Similarly 0 π CHAPTER 4. −π and π (cos mt)(sin nt) dt = 0.1. while a matrix has only finitely many. b]. x(b) = 0 (4. Let A be an n × n matrix. one obvious difference we have already seen is that in general a differential operator will have infinitely many eigenvalues. For example. or x + λx = f (t). The theorem holds in a more general setting than we are going to state it. Suppose p and q are continuous on [a. ∗ Named after the Swedish mathematicain Erik Ivar Fredholm (1866 – 1927). the nonhomogeneous equation (4. the solution is not unique. has a nonzero solution. but for our purposes the following statement is sufficient. The theorem means that if λ is not an eigenvalue. and furthermore. FOURIER SERIES AND PDES (cos mt)(cos nt) dt = 0. The theorem is also true for the other types of boundary conditions we considered.4 Fredholm alternative We now touch on a very useful theorem in the theory of differential equations. or (A − λI)x = b has a solution for every b. (sin mt)(sin nt) dt = 0.

we will find a graph which gives the shape of the string.1.1.3 on page 144. With λ > 0. Suppose we have a tightly stretched quickly spinning elastic string or rope of uniform linear density ρ.5 Application Let us consider a physical application of an endpoint problem. T not 2 2 2 satisfying the above equation. y = 0. As before there are no nonpositive eigenvalues. the string is in the equilibrium position. T except for the interval length being L instead of π. then the string will L 2 L x . ω. BOUNDARY VALUE PROBLEMS 149 4. on the x axis. Let us put this problem into the xy-plane. For most√values of ω the string T is in the equilibrium state. When the angular velocity ω hits a value ω = kπ √ρ . Hence the magnitude is constant everywhere and we will call its magnitude T . If we assume that the deflection is small then we can use Newton’s second law to get an equation T y + ρω2 y = 0.1: Whirling string. The condition y(0) = 0 implies λ √ that A = 0 as before. so ρω2 k2 π2 =λ= 2 . When ρω = kLπ . 2 T then the string will “pop out” some distance B at the midpoint. Hence.4. The x axis represents the position on the string. y = 0.1.1. y(L) = 0 where λ = ρω . T L What does this say about the shape of the string? It says that for all parameters ρ. we see that this force is tangential and we will assume that the magnitude is the same at both end points. Let L be the length of the string and the string is fixed at the beginning and end points. so we will assume that the whole xy-plane rotates at angular velocity ω along. We will assume that the string stays in this xy-plane and y will measure its deflection from the equilibrium position. We cannot compute B with the information we have. See Figure 4. y y 0 Figure 4. We will idealize the string to have no volume to just be a mathematical curve. Let us assume that ρ and T are fixed and we are changing ω. The condition y(L) = 0 implies that sin λ L = 0 and hence λ L = kπ for some integer k > 0. y(0) = 0 and y(L) = 0. If we take a small segment and we look at the tension at the endpoints. T √ √ the general solution to the equation is y = A cos λ x + B sin √ x. We are looking for eigenvalues of y + λy = 2 0. The setup is similar to Example 4. y(0) = 0. We rewrite the equation as y + ρω y = 0. Hence. The string rotates at angular velocity ω.

You can see that the higher the angular velocity the more times it crosses the x axis when it is popped out. .1.1. When ω changes again. x(−π) = x(π). x (−π) = x (π).150 CHAPTER 4. x(a) = x(b). x (b) = 0.2: Compute all eigenvalues and eigenfunctions of x + λx = 0. x(b) = 0.6 Exercises Hint for the following exercises: Note that cos the homogeneous equation.1. x (a) = 0. FOURIER SERIES AND PDES pop out and will have the shape of a sin wave crossing the x axis k times. x (a) = 0. √ √ λ (t − a) and sin λ (t − a) are also solutions of Exercise 4. x(a) = 0.1.6: We have skipped the case of λ < 0 for the boundary value problem x + λx = 0. Exercise 4.1. Exercise 4.4: Compute all eigenvalues and eigenfunctions of x + λx = 0. x (a) = x (b).3: Compute all eigenvalues and eigenfunctions of x + λx = 0. So finish the calculation and show that there are no negative eigenvalues. x(b) = 0.1. the string returns to the equilibrium position.5: Compute all eigenvalues and eigenfunctions of x + λx = 0. Exercise 4. 4. Exercise 4.

6) One way to solve (4. let us talk a little bit more in detail about periodic functions. 3L]. 1]. for t in [−3L.2. F(t) = f (t + 2L). w is the dot product.1: Defined f (t) = 1−t2 on [−1.2 Inner product and eigenvector decomposition Suppose we have a symmetric matrix. 0 for some periodic function f (t). but with perhaps a different period. It can be confusing when the formula for f (t) is periodic.2.6) is to decompose f (t) as a sum of of cosines (and sines) and then solve many problems of the form (4. For t in [L.1: Define f (t) = cos t on [−π/2. to sum up all the solutions we got to get a solution to (4. which can be computed as vT w.1 Periodic functions and motivation As motivation for studying Fourier series. For example.2. You should be careful to distinguish between f (t) and its extension. A common mistake is to assume that a formula for f (t) holds for its extension. and so on. How does it compare to the graph of cos t. Example 4. −L].2 on the following page. Now take the π-periodic extension and sketch its graph. L] and we will want to extend periodically to make it a 2L-periodic function. Exercise 4. We then use the principle of superposition. So are cos kt and sin kt for all integers k.7) (4. The constant functions are an extreme example. A function is said to be periodic with period P if f (t) = f (t + P) for all t. We do this extension by defining a new function F(t) such that for t in [−L. We have said before that the eigenvectors of A are then orthogonal.4. 4. For brevity we will say f (t) is Pperiodic. cos t and sin t are 2π-periodic.6). Normally we will start with a function f (t) defined on some interval [−L. we define F(t) = f (t − 2L). In this case the inner product v. §9. suppose we have the problem x + ω2 x = f (t). Before we proceed. Now extend periodically to a 2-periodic function. that is AT = A. 0 (4. Here the word orthogonal means that if v and w are two distinct eigenvectors of A. w = 0.7). THE TRIGONOMETRIC SERIES 151 4.1 in EP 4. . Note that a P-periodic function is also 2P-periodic. L]. 3P-periodic and so on.2. See Figure 4. then v. F(t) = f (t).2. π/2].2 The trigonometric series Note: 2 lectures. They are periodic for any period (exercise). We have already solved x + ω2 x = F0 cos ω t.

Therefore. w2 Hence 5 1 −1 1 2 + . w1 = a1 w1 .5 0. w2 You probably remember this formula from vector calculus. w1 2(1) + 3(−1) −1 = = . To decompose a vector v in terms of mutually orthogonal vectors w1 and w2 we write v = a1 w1 + a2 w2 . w2 = 1(1) + (−1)1 = 0.0 0.5 0. w1 = a1 w1 .0 1. FOURIER SERIES AND PDES 1 2 3 1. w1 a1 = v. 3 1 First note that w1 and w2 are orthogonal as w1 .5 -3 -2 -1 0 1 2 3 -0.5 -2 -1 0 CHAPTER 4. = 3 2 −1 2 1 . 1+1 2 w2 . w1 . w1 + a2 w2 .2: Write v = 2 as a linear combination of w1 = −1 and w2 = 1 .5 Figure 4. w1 . w2 .152 -3 1. 1(1) + (−1)(−1) 2 w1 . w1 a2 = v. w2 .0 0. a2 = 1 Example 4.2.5 1. Let us find the formula for a1 and a2 .2: Periodic extension of the function 1 − t2 .0 -0. w2 2+3 5 = = . First let us compute v. a1 = Similarly v. w1 . w1 = a1 w1 + a2 w2 . Then v.

sin nt = 1. cos nt . 1 = 2. or more simply 1 π f (t) dt. n=1 This series is called the Fourier series† or trigonometric series for f (t). So we will want to define an inner product of functions.4. to find an we want to compute f (t) . This is for convenience. For example. x (−π) = x (π). sin nt 1 bn = = sin nt . We define the inner product as f (t) . We have previously computed that the eigenfunctions are 1. Just like for matrices we will want to find a projection of f (t) onto the subspace generated by the eigenfunctions. cos nt = an = cos nt . so that we only need to look at cos kt and sin kt. −π With this definition of the inner product.3 The trigonometric series Now instead of decomposing a vector in terms of the eigenvectors of a matrix.2. . −π Compare these expressions with the finite dimensional example. sin nt = 0 sin mt . The coefficients are given by 1 f (t) . x(−π) = x(π). We could also think of 1 = cos 0t. a0 = π −π † Named after the French mathematician Jean Baptiste Joseph Fourier (1768 – 1830). −π π f (t) sin nt dt. sin nt π π f (t) cos nt dt. and sin kt are orthogonal in the sense that cos mt . cos kt.2. for all m and n. cos nt = 0 for m n. THE TRIGONOMETRIC SERIES 153 4. The formula above also works for n = 0. That is. we will decompose a function in terms of eigenfunctions of a certain eigenvalue problem. cos nt = 0 sin mt . we will want to find a representation of a 2π-periodic function f (t) as a0 f (t) = + 2 ∞ an cos nt + bn sin nt. Note that here we have 1 used the eigenfunction 2 instead of 1. For the constant we get that 1 . we have seen in the previous section that the eigenfunctions cos kt (this includes the constant eigenfunction). g(t) = def 1 π π f (t)g(t) dt. cos nt π f (t) . By elementary calculus we have that cos nt . sin kt. for m n. the eigenvalue problem we will use for the Fourier series is the following x + λx = 0. cos nt = 1 (except for n = 0) and sin nt . In particular.

n=1 an cos nt + bn sin nt .0 2.0 2.0 -3 Figure 4. The plot of the extended periodic function is given in Figure 4. Extend f (t) periodically and write it as a Fourier series.154 CHAPTER 4.2: Carry out the calculation for a0 and bm . cos mt = 2 = ∞ ∞ an cos nt + bn sin nt.5 5.5 0. cos mt . FOURIER SERIES AND PDES Let us check the formulas using the orthogonality properties. π −π .5 0. cos mt n=1 = am cos mt .0 3 -2.3: Take the function f (t) = t for t in (−π. Let us start with a0 1 π a0 = t dt = 0. cos mt . cos mt cos mt .2.5 5.0 -2. cos mt + bn sin nt .0 3 2 2 1 1 0 0 -1 -1 -2 -2 -3 -5. Example 4. Suppose for a moment that a0 f (t) = + 2 Then for m ≥ 1 we have a0 + f (t) . -5.3. π]. And hence am = f (t) . This function is called the sawtooth.2. cos mt n=1 ∞ a0 1 . cos mt + 2 an cos nt .3: The graph of the sawtooth function. Exercise 4. Now we compute the coefficients.

2. −π Let us move to bm . or more to the point the function t cos mt are all odd. is f (t) = n=1 2 (−1)n+1 sin nt.4.4 on the following page. Recall an even function is a function ϕ(t) such that ϕ(−t) = ϕ(t). bm = = = = = We have used the fact that 1 π t sin mt dt π −π 2 π t sin mt dt π 0 2 −t cos mt π 1 π + cos mt dt π m m 0 t=0 2 −π cos mπ +0 π m −2 cos mπ 2 (−1)m+1 = . THE TRIGONOMETRIC SERIES 155 We will often use the result from calculus that the integral of an odd function over a symmetric interval is zero. Another useful fact from calculus is that the integral of an even function over a symmetric interval is twice the integral of the same function over half the interval.4: Take the function  0 if −π < t ≤ 0.  ∞ The series. therefore.  cos mπ = (−1) =  −1 if m odd. For example the function t. f (t) = 2 sin t − sin 2 t + 2 sin 3 t + · · · 3 The plot of these first three terms of the series. n Let us write out the first 3 harmonics of the series for f (t). the function sin t. For example t sin mt is even. m m m  1  if m even.  Extend f (t) periodically and write it as a Fourier series. along with a plot of the first 20 terms is given in Figure 4. . Example 4.2. This function or its variants appear often in applications and the function is called the square wave. Recall that an odd function is a function ϕ(t) such that ϕ(−t) = −ϕ(t).   f (t) =  π if 0 < t ≤ π. am = 1 π π t cos mt dt = 0.

4: First 3 (left graph) and 20 (right graph) harmonics of the sawtooth function. 0 . Let us start with a0 a0 = Next.0 -2.5 0.0 3 3 2 2 1 1 0 0 -5.0 2.0 -2.5 0.5.5 5.0 3 2 2 2 2 1 1 1 1 0 0 0 0 -1 -1 -1 -1 -2 -2 -2 -2 -3 -5.5 5.0 2. -5.5: The graph of the square wave function.5 5.0 2.0 Figure 4.0 3 -2.0 3 CHAPTER 4.5 5.0 3 -2.5 0.0 2.0 2. 0 f (t) cos mt dt = −π 1 π π π cos mt dt = 0. FOURIER SERIES AND PDES -5. The plot of the extended periodic function is given in Figure 4.5 0.5 0.5 5.0 -3 Figure 4. Now we compute the coefficients.0 -2.0 -3 -3 -5. am = 1 π π 1 π π f (t) dt = −π 1 π π π dt = π.156 -5.5 0.0 2.0 -2.5 5.

It turns out that for example for the sawtooth function f (t).0 -2.0 2. the equation ∞ 2 (−1)n+1 f (t) = sin nt.0 -2.5 5.0 -2. THE TRIGONOMETRIC SERIES And finally bm = 1 π f (t) sin mt dt π −π 1 π = π sin mt dt π 0 − cos mt π = m t=0  2 1 − cos πm 1 − (−1)m  m  = = = 0  m m ∞ 157 if m is odd. n n=1 .5 5.5 0.4.0 Figure 4.0 2.5 5. π 2 + 2 sin t + sin 3t + · · · 2 3 The plot of these first three terms of the series.0 3 3 3 3 2 2 2 2 1 1 1 1 0 0 0 0 -5.0 -5.5 0.0 2. The series. along with a plot of the first 20 harmonics is given in Figure 4.5 0.0 -5.6: First 3 (left graph) and 20 (right graph) harmonics of the square wave function. is π f (t) = + 2 2 π sin nt = + n 2 ∞ n=1 n odd k=1 2 sin (2k − 1) t. 2k − 1 Let us write out the first 3 harmonics of the series for f (t). if m is even.2. therefore. We have so far skirted the issue of convergence.0 2.6.5 5.5 0. f (t) = -5.0 -2.

  f (t) =  t otherwise. 1.158 CHAPTER 4.50 2. including the discontinuities. see Figure 4. The simplest way to make . the more terms in the series you take. then the series equals the extended f (t) everywhere.50 3.75 2. however. That is. n If we redefine f (t) on [−π.75 3. π] as  0 if t = −π or t = π. then ∞ n=1 2 (−1)n+1 sin nt = 0. π and all the other discontinuities of f (t).25 3.00 3.7.7: Gibbs phenomenon in action.25 3. It is not hard to see that when t is an integer multiple of π (which includes all the discontinuities). let us plot the first 100 harmonics.00 3.25 2.75 3. Let us however mention briefly an effect of the discontinuity. It will be in fact a superposition of many different pure tones of frequency which are multiples of the base frequency. we do not get an equality for t = −π. This behavior is known as the Gibbs phenomenon. We will generally not worry about changing the function at several (finitely many) points.00 2. FOURIER SERIES AND PDES is only an equality for t where the sawtooth is continuous. We can think of a periodic function as a “signal” being a superposition of many signals of pure frequency. the error (the overshoot) near the discontinuity at t = π does not seem to be getting any smaller.00 2. Further.75 2. On the other hand a simple sine wave is only the pure tone.00 3. We will say more about convergence in the next section.50 3. we could think of say the square wave as a tone of certain frequency. Let us zoom in near the discontinuity in the square wave.25 Figure 4.75 1.50 2.  and extend periodically. That is.75 2.25 3.00 2. The region where the error is large gets smaller and smaller. You will notice that while the series is a very good approximation away from the discontinuities.25 2.

3: Suppose f (t) is defined on [−π. Hint: It may be better to start from the complex exponential form and write the series as ∞ c0 + m=1 cm eimt + c−m e−imt . show that there exist complex numbers cm such that ∞ f (t) = m=−∞ cm eimt . There is another form of the Fourier series using complex exponentials that is sometimes easier to work with.2.2. Extend periodically and compute the Fourier series of f (t). Extend periodically and compute the Fourier series of f (t). Exercise 4.4: Suppose f (t) is defined on [−π. 4. π] as t3 . π] as |t|3 . Extend periodically and compute the Fourier series of f (t). . Exercise 4. Exercise 4. Note that the sum now ranges over all the integers including negative ones.2. π] as sin 5t + cos 3t.2.2.2.7: Suppose f (t) is defined on [−π.6: Suppose f (t) is defined on [−π. Extend periodically and compute the Fourier series of f (t).2. π] as t2 .9: Let a0 f (t) = + 2 ∞ an cos nt + bn sin nt.4 Exercises Exercise 4. Exercise 4.4. Do not worry about convergence in this calculation. Exercise 4. π] as |t|.2. π] as  −1 if −π < t ≤ 0.5: Suppose f (t) is defined on [−π. n=1 Use Euler’s formula eiθ = cos θ + i sin θ.2.   f (t) =  1  if 0 < t ≤ π. Extend periodically and compute the Fourier series of f (t). Extend periodically and compute the Fourier series of f (t).8: Suppose f (t) is defined on [−π. Exercise 4. THE TRIGONOMETRIC SERIES 159 sound using a computer is the square wave. If you have played video games from the 1980s or so you have heard what square waves sound like. and the sound will be a very different from nice pure tones.

L L If we change variables to s we see that a0 g(s) = + 2 ∞ an cos ns + bn sin ns. π −π L −L 1 π 1 L nπ an = g(s) cos ns ds = f (t) cos t dt. the computation is a simple case of change of variables. Let s = L t. then the function g(s) = f L s π is 2π-periodic. We can just rescale the independent axis. you understand it for 2L-periodic functions. After we write down the integrals we change variables back to t. We will want to write f (t) = a0 + 2 ∞ an cos n=1 nπ nπ t + bn sin t. FOURIER SERIES AND PDES 4. g(s) sin ns ds = f (t) sin π −π L −L L The two most common half periods that show up in examples are π and 1 because of the simplicity.3. 1 L 1 π a0 = g(s) ds = f (t) dt. but what about functions of different periods. but all the mathematics is the same.uiuc. it may be good to first try Project IV (Fourier series) from the IODE website: http://www. Suppose that you have the 2L-periodic function f (t) (L is called the π half period). All that we are doing is moving some constants around.3 in EP Before reading the lecture. π −π L −L L 1 π 1 L nπ bn = t dt.2 – §9. 4. we have only changed variables. We should stress that we have done no new mathematics. If you understand the Fourier series for 2π-periodic functions.160 CHAPTER 4.edu/iode/.math. Well. §9. fear not. n=1 So we can compute an and bn as before. After reading the lecture it may be good to continue with Project V (Fourier series again).1 2L-periodic functions We have computed the Fourier series for a 2π-periodic function.3 More on the Fourier series Note: 2 lectures. . We want to also rescale all our sines and cosines.

4.3. MORE ON THE FOURIER SERIES Example 4.3.1: Let f (t) = |t| for −1 < t < 1,

161

extended periodically. The plot of the periodic extension is given in Figure 4.8. Compute the Fourier series of f (t).
-2 -1 0 1 2

1.00

1.00

0.75

0.75

0.50

0.50

0.25

0.25

0.00

0.00

-2

-1

0

1

2

Figure 4.8: Periodic extension of the function f (t). We will write f (t) = and hence an =
−1 1 a0 2

+

∞ n=1

an cos nπt + bn sin nπt. For n ≥ 1 we note that |t| cos nπt is even

1

f (t) cos nπt dt t cos nπt dt
0 1 1

=2

t sin nπt =2 nπ

−2
t=0 1 t=0 0

1 sin nπt dt nπ if n is even, if n is odd.

1 = 0 + 2 2 cos nπt nπ Next we find a0

 0  2 (−1)n − 1  = =  −4  2 π2  22 n n π
1

a0 =
−1

|t| dt = 1.

Note: You should be able to find this integral by thinking about the integral as the area under the graph without doing any computation at all. Finally we can find bn . Here, we notice that |t| sin nπt is odd and, therefore,
1

bn =
−1

f (t) sin nπt dt = 0.

162 Hence, the series is f (t) = 1 + 2
n=1 n odd

CHAPTER 4. FOURIER SERIES AND PDES

−4 cos nπt. n2 π2

Let us explicitly write down the first few terms of the series up to the 3rd harmonic. f (t) ≈ 4 1 4 − 2 cos πt − 2 cos 3πt − · · · 2 π 9π

The plot of these few terms and also a plot up to the 20th harmonic is given in Figure 4.9. You should notice how close the graph is to the real function. You should also notice that there is no “Gibbs phenomenon” present as there are no discontinuities.
-2 -1 0 1 2 -2 -1 0 1 2

1.00

1.00

1.00

1.00

0.75

0.75

0.75

0.75

0.50

0.50

0.50

0.50

0.25

0.25

0.25

0.25

0.00

0.00

0.00

0.00

-2

-1

0

1

2

-2

-1

0

1

2

Figure 4.9: Fourier series of f (t) up to the 3rd harmonic (left graph) and up to the 20th harmonic (right graph).

4.3.2 Convergence
We will need the one sided limits of functions. We will use the following notation f (c−) = lim f (t),
t↑c

and

f (c+) = lim f (t).
t↓c

If you are unfamiliar with this notation, limt↑c f (t) means we are taking a limit of as t approaches c from below (i.e. t < c) and limt↓c f (t) means we are taking a limit of as t approaches c from above (i.e. t > c). For example, for the square wave function  0 if −π < t ≤ 0,   f (t) =  (4.8) π if 0 < t ≤ π, 

4.3. MORE ON THE FOURIER SERIES

163

we have f (0−) = 0 and f (0+) = π. Let f (t) be a function defined on an interval [a, b]. Suppose that we find finitely many points a = t0 , t1 , t2 , . . . , tk = b in the interval, such that f (t) is continuous on the intervals (t0 , t1 ), (t1 , t2 ), . . . , (tk−1 , tk ). Also suppose that f (tk −) and f (tk +) exists for each of these points. Then we say f (t) is piecewise continuous. If moreover, f (t) is differentiable at all but finitely many points, and f (t) is piecewise continuous, then f (t) is said to be piecewise smooth. Example 4.3.2: The square wave function (4.8) is piecewise smooth on [−π, π] or any other interval. In such a case we just say that the function is just piecewise smooth. Example 4.3.3: The function f (t) = |t| is piecewise smooth. Example 4.3.4: The function f (t) = 1 is not piecewise smooth on [−1, 1] (or any other interval t containing zero). In fact, it is not even piecewise continuous. √ Example 4.3.5: The function f (t) = 3 t is not piecewise smooth on [−1, 1] (or any other interval containing zero). f (t) is continuous, but the derivative of f (t) is unbounded near zero and hence not piecewise continuous. Piecewise smooth functions have an easy answer on the convergence of the Fourier series. Theorem 4.3.1. Suppose f (t) is a 2L-periodic piecewise smooth function. Let a0 + 2

an cos
n=1

nπ nπ t + bn sin t L L

be the Fourier series for f (t). Then the series converges for all t. If f (t) is continuous near t, then f (t) = Otherwise a0 + 2

an cos
n=1

nπ nπ t + bn sin t. L L nπ nπ t + bn sin t. L L

f (t−) + f (t+) a0 = + 2 2

an cos
n=1

If we happen to have that f (t) = f (t−)+ f (t+) at all the discontinuities, the Fourier series converges 2 to f (t) everywhere. We can always just redefine f (t) by changing the value at each discontinuity appropriately. Then we can write an equals sign between f (t) and the series without any worry. We mentioned this fact briefly at the end last section. Note that the theorem does not say how fast the series converges. Think back the discussion of the Gibbs phenomenon in last section. The closer you get to the discontinuity, the more terms you need to take to get an accurate approximation to the function.

164

CHAPTER 4. FOURIER SERIES AND PDES

4.3.3 Differentiation and integration of Fourier series
Not only does Fourier series converge nicely, but it is easy to differentiate and integrate the series. We can do this just by differentiating or integrating term by term. Theorem 4.3.2. Suppose a0 f (t) = + 2

an cos
n=1

nπ nπ t + bn sin t, L L

is a piecewise smooth continuous function and the derivative f (t) is piecewise smooth. Then the derivative can be obtained by differentiating term by term.

f (t) =
n=1

−an nπ nπ bn nπ nπ sin t+ cos t. L L L L

It is important that the function is continuous. It can have corners, but no jumps. Otherwise the differentiated series will fail to converge. For an exercise, take the series obtained for the square wave and try to differentiate the series. Similarly, we can also integrate a Fourier series. Theorem 4.3.3. Suppose f (t) = a0 + 2

an cos
n=1

nπ nπ t + bn sin t, L L

is a piecewise smooth function. Then the antiderivative is obtained by antidifferentiating term by term and so ∞ a0 t nπ −bn L nπ an L F(t) = +C + sin t+ cos t. 2 nπ L nπ L n=1 where F (t) = f (t) and C is an arbitrary constant.
0 Note that the series for F(t) is no longer a Fourier series as it contains the a2 t term. The antiderivative of a periodic function need no longer be periodic and so we should not expect a Fourier series.

4.3.4 Rates of convergence and smoothness
Let us do an example of a periodic function with one derivative everywhere. Example 4.3.6: Take the function  (1 − t) t   f (t) =  (t + 1) t  if 0 < t < 1, if −1 < t < 0,

and extend to a 2-periodic function. The plot is given in Figure 4.10 on the facing page. Note that this function has a derivative everywhere, but it does not have two derivatives at all the integers.

4.3. MORE ON THE FOURIER SERIES
-2 0.50 -1 0 1 2 0.50

165

0.25

0.25

0.00

0.00

-0.25

-0.25

-0.50 -2 -1 0 1 2

-0.50

Figure 4.10: Smooth 2-periodic function.

Exercise 4.3.1: Compute f (0+) and f (0−). Let us compute the Fourier series coefficients. The actual computation involves several integration by parts and is left to student.
1 0 1

a0 =
−1 1

f (t) dt =
−1

(t + 1) t dt +
0 0

(1 − t) t dt = 0,
1

an =
−1 1

f (t) cos nπt dt =
−1 0

(t + 1) t cos nπt dt +
0 1

(1 − t) t cos nπt dt = 0 (1 − t) t sin nπt dt
0

bn =

f (t) sin nπt dt =   4(1 − (−1)n )  π38n3  = = 0 3 n3  π
−1

(t + 1) t sin nπt dt +
−1

if n is odd, if n is even.

This series converges very fast. If you plot up to the third harmonic, that is the function 8 8 sin πt + sin 3πt, 3 π 27π3
8 it is almost indistinguishable from the plot of f (t) in Figure 4.10. In fact, the coefficient 27π3 is already just 0.0096 (approximately). The reason for this behavior is the n3 term in the denominator. The coefficients bn in this case go to zero as fast as n13 goes to zero.

It is a general fact that if you have one derivative, the Fourier coefficients will go to zero approximately like n13 . If you have only a continuous function, then the Fourier coefficients will go to zero as n12 , and if you have discontinuities then the Fourier coefficients will go to zero approximately as 1 . Therefore, we can tell a lot about the smoothness of a function by looking at its n Fourier coefficients.

4. a) Compute the Fourier series for f (t). but at least at some points it is not defined.3. n=1 which does not converge! Exercise 4.   f (t) =  t if 0 ≤ t < 1. . f (t) and f (t).5 Exercises Exercise 4. n3 When we differentiate term by term we notice ∞ f (t) = n=1 1 cos nt. the derivative of f (t) may be defined at most points. the coefficients now go down like n12 .3. At what points does f (t) have the discontinuities. Exercise 4.3: Let  0 if −1 < t < 0. plot say the first 5 harmonics of the functions.3. if 0 ≤ t < 1.2: Use a computer to plot f (t).  extended periodically. n This function is similar to the sawtooth. FOURIER SERIES AND PDES To justify this behavior take for example the function defined by the Fourier series ∞ f (t) = n=1 1 sin nt. n2 Therefore.4: Let  −t   f (t) =  2 t  if −1 < t < 0. If we differentiate again we find that f (t) really is not defined at some points as we get a piecewise differentiable function ∞ f (t) = n=1 −1 sin nt. That is. If we tried to differentiate again we would obtain ∞ − cos nt. b) Write out the series explicitly up to the 3rd harmonic. which we said means that we have a continuous function. extended periodically. That is.3.166 CHAPTER 4. b) Write out the series explicitly up to the 3rd harmonic. a) Compute the Fourier series for f (t).

3. MORE ON THE FOURIER SERIES Exercise 4. Exercise 4.3.7: Let f (t) = ∞ (−1) sin nt. Is f (t) differentiable everywhere? Find the derivative n=1 n (if it exists) or justify if it does not exist. n . Is f (t) continuous and differentiable everywhere? Find n=1 the derivative (if it exists) or justify if it does not exist.3. Exercise 4. b) Write out the series explicitly up to the 3rd harmonic.6: Let f (t) = ∞ n13 cos nt.5: Let 167   −t   f (t) =  10 t  10 if −10 < t < 0. a) Compute the Fourier series for f (t).4.3. if 0 ≤ t < 10. extended periodically (period is 20).

cos nt is even and sin nt is odd. In this section we are of course interested in odd and even periodic functions. is h(t) odd or even. L] and it would be convenient to have an odd (resp. Similarly the function tk is even if k is even and odd when k is odd.4. A function f (t) is even if f (−t) = f (t). Recall a function f (t) is odd if f (−t) = − f (t).4. We have previously defined the 2L-periodic extension of a function defined on the interval [−L. Figure 4. even) extension of the function to [−L.1: Take two functions f (t) and g(t) and define their product h(t) = f (t)g(t). Take a function f (t) defined on [0. §9. If f (t) is odd and g(t) we cannot in general say anything about the sum f (t) + g(t). cosine) terms will disappear. This observation is not a coincidence. and Feven (t) is called the even periodic extension of f .2: Check that Fodd (t) is odd and that Feven (t) is even. In fact. is h(t) odd or even? c) Suppose both are even.  def  f (t) Fodd (t) =  − f (−t) if −L < t < 0. is h(t) odd or even? b) Suppose one is even and one is odd. For example. What we can do is take the odd (resp. L] define the functions   if 0 ≤ t ≤ L.168 CHAPTER 4. L].    if 0 ≤ t ≤ L.1: Take the function f (t) = t(1 − t) defined on [0. all the sine (resp.4. L]. a) Suppose both are odd.11 on the facing page shows the plots of the odd and even extensions of f (t). If the function is odd.4. Exercise 4. FOURIER SERIES AND PDES 4.1 Odd and even periodic functions You may have noticed by now that an odd function has no cosine terms in the Fourier series and an even function has no sine terms in the Fourier series. the Fourier series of a function is really a sum of an odd (the sine terms) and an even (the cosine terms) function. Then Fodd (t) is called the odd periodic extension of f (t). L] and then we can extend periodically to a 2L-periodic function. 1]. Let us look at even and odd periodic function in more detail. On (−L. Example 4. .  And extend Fodd (t) and Feven (t) to be 2L-periodic. even) function. Sometimes we are only interested in the function in the range [0.  def  f (t) Feven (t) =   f (−t) if −L < t < 0.3 in EP 4. Exercise 4.4 Sine and cosine series Note: 2 lectures.

if f (t) is an even 2L-periodic function.1 -0. The function f (t) sin nπ t is the product of two odd functions and hence even.3 0.4. L That is.0 -0.1 -0. the integral of an even function over a symmetric interval [−L. we find that bn = 0 and 2 L nπ an = f (t) cos t dt. L].2 0.2 -0. The integral is zero because f (t) cos nπL t is an odd function (product of an odd and an even function is odd) and the integral of an odd function over a symmetric interval is always zero.3 -0.1 -0.0 0. L 0 L The formula still works for n = 0 in which case it becomes a0 = 2 L L f (t) dt. We write the Fourier series for f (t).3 -2 -1 0 1 2 -0. we compute the coefficients an (including n = 0) and get an = 1 L L f (t) cos −L nπ t dt = 0. L We can now write the Fourier series of f (t) as bn sin n=1 nπ t.2 0.3 -2 -1 0 1 2 169 0. 4.2 0.0 0.11: Odd and even 2-periodic extension of f (t) = t(1 − t).4.0 0. Furthermore. L bn = 1 L L f (t) sin −L 2 nπ t dt = L L ∞ L f (t) sin 0 nπ t dt.2 -0.3 0.3 -1 0 1 2 0. L] is twice the integral of the function over the interval [0.4. SINE AND COSINE SERIES -2 0. there are no cosine terms in a Fourier series of an odd function. 0 ≤ t ≤ 1.3 -2 -1 0 1 2 -0.2 Sine and cosine series Let f (t) be an odd 2L-periodic function.2 0.1 -0.0 0.0 0. For the same exact reasons as above. 0 . L Similarly.0 0.2 -0.0 0.2 -0.3 Figure 4.

L f (t) cos 0 nπ t dt. It is not necessary to start with the full Fourier series to obtain the sine and cosine series. . L]. generalizing the results of this chapter. L The series ∞ bn sin nπ t is called the sine series of f (t) and the series a20 + ∞ an cos nπ t n=1 n=1 L L is called the cosine series of f (t).1. we can pick whichever series fits our problem better. x(L) = L. Then the odd extension of f (t) has the Fourier series ∞ Fodd (t) = n=1 bn sin nπ t.170 The Fourier series is then a0 2 ∞ CHAPTER 4. Let f (t) be a piecewise smooth function defined on [0. g(t) = 0 f (t)g(t) dt. The cosine series is the eigenfunction expansion of f (t) using the eigenfunctions of the eigenvalue problem x + λx = 0. Theorem 4. We could have. FOURIER SERIES AND PDES an cos n=1 nπ t. The sine series is really the eigenfunction expansion of f (t) using the eigenfunctions of the eigenvalue problem x + λx = 0. L]. x(0) = 0. x (L) = L.2. therefore. It is often the case that we do not actually care what happens outside of [0. x (0) = 0. This point of view is useful because many times we use a specific series because our underlying question will lead to a certain eigenvalue problem. In fact.4. if the eigenvalue value problem is not one of the three we covered so far. We will deal with such a generalization in chapter 5. L An interesting consequence is that the coefficients of the Fourier series of an odd (or even) function can be computed by just integrating over the half interval. In this case. you can still do an eigenfunction expansion. L The even extension of f (t) has the Fourier series a0 Feven (t) = + 2 where an = 2 L L ∞ an cos n=1 nπ t. Therefore. and following the procedure of § 4. have gotten the same formulas by defining the inner product L f (t). we can compute the odd (or even) extension of a function as a Fourier series by computing certain integrals over the interval where the original function is defined. L where 2 bn = L L f (t) sin 0 nπ t dt.

By using the Fredholm alternative (Theorem 4. Use the actual definition of f (t). we first find f (t) in L terms of the Fourier sine series. + 2 n=1 where a0 = and π 2 21 4 2 π 2 t sin nt − t cos nt dt = t sin nt dt an = π 0 π n nπ 0 0 π π 4 4 4(−1)n = 2 t cos nt + 2 cos nt dt = . That is. the first few terms of the series are π2 4 − 4 cos t + cos 2t − cos 3t + · · · 3 9 Exercise 4. 4. SINE AND COSINE SERIES 171 Example 4. Note that the eigenfunctions of this eigenvalue problem were the functions sin nπ t.4. Suppose we have the boundary value problem for 0 < t < L. We write x as a sine series as well with unknown coefficients. for the Dirichlet boundary conditions x(0) = 0.2 on page 148) we note that as long as λ is not an eigenvalue of the underlying homogeneous problem.4.4. If on the other hand we have the Neumann boundary conditions x (0) = 0. x (t) + λ x(t) = f (t).4. We substitute into the equation and solve for the Fourier coefficients of x.4. the even extension of t2 has no jump discontinuities. there will exist a unique solution. We do the same procedure using the cosine series.3 Application We have said that Fourier series ties in to the boundary value problems we studied earlier. 0 nπ nπ 0 n2 π 2 π π t2 dt = 0 2π2 .1. n Explicitly. . Although it will have corners. Let us see this connection in more detail. not its cosine series! b) Why is it that the derivative of the even extension of f (t) is the odd extension of f (t). since the derivative (which will be on odd function and a sine series) will have a series whose coefficients decay only as 1 so it will have jumps. x(L) = 0. We will write ∞ a0 f (t) = an cos nt. 3 Note that we have detected the “continuity” of the extension since the coefficients decay as n12 . x (L) = 0. to find the solution.3: a) Compute the derivative of the even extension of f (t) above and verify it has jump discontinuities. Therefore.2: Find the Fourier series of the even periodic extension of the function f (t) = t2 for 0 ≤ t ≤ π. These methods are best seen by examples.

x (t) + 2x(t) = f (t).172 CHAPTER 4.4: Similarly we handle the Neumann conditions. x(1) = 0. Take the same boundary value problem for 0 < t < 1. We want to look for a solution x satisfying the Dirichlet conditions x(0) = 0. nπ(2 − n2 π2 ) We have thus obtained a Fourier series for the solution ∞ x(t) = n=1 2 (−1)n+1 sin nπt. nπ Therefore. 2 (−1)n+1 bn (2 − n π ) = nπ 2 2 or bn = 2 (−1)n+1 . . FOURIER SERIES AND PDES Example 4. We write f (t) as a sine series ∞ f (t) = n=1 cn sin nπt.4. nπ We write x(t) as x(t) = n=1 bn sin nπt.4. where f (t) = t on 0 < t < 1. x (t) + 2x(t) = f (t). We plug in to obtain ∞ ∞ x (t) + 2x(t) = n=1 ∞ −bn n2 π2 sin nπt + 2 n=1 bn sin nπt = n=1 bn (2 − n2 π2 ) sin nπt ∞ = f (t) = n=1 2 (−1)n+1 sin nπt.3: Take the boundary value problem for 0 < t < 1. nπ (2 − n2 π2 ) Example 4. where cn = 2 0 1 t sin nπt dt = ∞ 2 (−1)n+1 .

n=1 1 t dt = 1. if n odd. if n even. a0 = 2 . SINE AND COSINE SERIES 173 where f (t) = t on 0 < t < 1. π2 n2 1 Therefore. We write f (t) as a cosine series c0 f (t) = + 2 where c0 = 2 0 ∞ cn cos nπt.4. − n2 π2 ) We have thus obtained a Fourier series for the solution ∞ x(t) = n=1 n odd n2 π2 (2 −4 cos nπt. − n2 π2 ) . let us now consider the Neumann conditions x (0) = 0. x (1) = 0. and 1 cn = 2 0   2 2((−1)n − 1)  π−42  t cos nπt dt = = n 0  π2 n2 We write x(t) as a cosine series a0 x(t) = + 2 We plug in to obtain ∞ ∞ ∞ an cos nπt. n=1 x (t) + 2x(t) = n=1 −an n π cos nπt + a0 + 2 2 2 n=1 ∞ an cos nπt = a0 + n=1 an (2 − n2 π2 ) cos nπt ∞ 1 = f (t) = + 2 n=1 n odd −2 cos nπt.4. However. an = 0 for n even and for n odd (n ≥ 1) an (2 − n2 π2 ) = or an = −4 π2 n2 n2 π2 (2 −4 .

a) Sketch the plot of the even periodic extension of f .4. where f (t) = ∞ bn sin nπt. x(1) = 0.4. Exercise 4. FOURIER SERIES AND PDES 4. . b) Solve for the Neumann conditions x (0) = 0.5: Find the Fourier series of both the odd and even periodic extension of the function f (t) = (t − 1)2 for 0 ≤ t ≤ 1.4.4.4.4. a) Solve for the Dirichlet conditions x(0) = 0.6: Find the Fourier series of both the odd and even periodic extension of the function f (t) = t for 0 ≤ t ≤ π. x(0) = 0.9: Let x (t) + 9x(t) = f (t). b) Solve for the Neumann conditions x (0) = 0. x (1) = 0. Exercise 4. where f (t) = 1 on 0 < t < 1. for f (t) = sin 2πt on 0 < t < 1. Exercise 4.174 CHAPTER 4.4.10: Let x (t) + 3x(t) = f (t). Write the solution x(t) as a Fourier series. x(1) = 0.4 Exercises Exercise 4. Can you tell which extension is continuous from the Fourier series coefficients? Exercise 4.8: Let x (t) + 4x(t) = f (t). Exercise 4.4: Take f (t) = (t − 1)2 defined on 0 ≤ t ≤ 1.4. x(1) = 0. x (1) = 0.7: Find the Fourier series of the even periodic extension of the function f (t) = sin t for 0 ≤ t ≤ π. a) Solve for the Dirichlet conditions x(0) = 0. where the coefficients are n=1 given in terms of bn . b) Sketch the plot of the odd periodic extension of f . Exercise 4.

The units are the mks units (meters-kilogramsseconds) again. where x sp is the particular steady periodic solution. with damping c. We have already seen this problem in chapter 2 with a simple F(t).5. Now suppose that the forcing function F(t) is 2L-periodic for some L > 0.1: Suppose that k = 2. This is perhaps best seen by example. k m damping c F(t) (4. L L an cos n=1 and we plug in x into the differential equation and solve for an and bn in terms of cn and dn .5. we are mostly interested in the part of x p which does not decay. So any solution to mx (t)+kx(t) = F(t) will be of the form A cos ω0 t+ B sin ω0 t+ x sp . The equation mx + kx = 0 has the general solution x(t) = A cos ω0 t + B sin ω0 t. The difference in what we will do now is that we consider an arbitrary forcing function F(t).5. We want to find the steady periodic solution. APPLICATIONS OF FOURIER SERIES 175 4. We call this x p the steady periodic solution as before. L L nπ nπ t + bn sin t. Since the complementary solution xc will decay as time goes on. The steady periodic solution will always have the same period as F(t). which fires with a force of 1 Newtons for 1 second and then is off for 1 second. The problem with c > 0 is very similar.9) we will call x p . Example 4. k where ω0 = m . There is a jetpack strapped to the mass. In the spirit of the last section and the idea of undetermined coefficients we will first write F(t) = Then we write x(t) = c0 + 2 a0 + 2 ∞ cn cos n=1 ∞ nπ nπ t + dn sin t. We have a mass spring system as before. and a force F(t) applied to the mass. where we have a mass m on a spring with spring constant k. For simplicity. and a particular solution of (4.4. .1 Periodically forced oscillation Let us return to the forced oscillations. §9. and m = 1.9) We know that the general solution will consist of xc which solves the associated homogeneous equation mx + cx + kx = 0.5 Applications of Fourier series Note: 2 lectures.4 in EP 4. The equation that governs this particular setup is mx (t) + cx (t) + kx(t) = F(t). let us suppose that c = 0.

if n even. On the other hand 1 1 c0 = −1 F(t) dt = 0 dt = 1. CHAPTER 4. πn an cos nπt + bn sin nπt. And 1 dn = −1 1 F(t) sin nπt dt sin nπt dt 0 = = − cos nπt 1 nπ t=0  2 n  1 − (−1)  = =  πn 0  πn ∞ if n odd. We write c0 F(t) = + 2 It is not hard to see that cn = 0 for n ≥ 1: 1 1 ∞ cn cos nπ t + dn sin nπ t. So F(t) = We want to try a0 + x(t) = 2 ∞ 1 + 2 n=1 n odd 2 sin nπt. n=1 . where F(t) is the step function  1 if 0 < t < 1.176 The equation is.  extended periodically. n=1 cn = −1 F(t) cos nπt dt = 0 cos nπt dt = 0.   F(t) =  0 if − 1 < t < 0. therefore. FOURIER SERIES AND PDES x + 2x = F(t).

they will disappear when we plug into the left hand side and we will get a contradictory equation (such as 0 = 1). we cannot use those terms in the guess. 2 n=1 n odd We plug into the differential equation and obtain ∞ ∞ x + 2x = n=1 n odd −bn n2 π2 sin nπt + a0 + 2 n=1 n odd ∞ bn sin nπt = a0 + n=1 n odd bn (2 − n2 π2 ) sin nπt ∞ 1 = F(t) = + 2 So a0 = 1 2 n=1 n odd 2 sin nπt.12 on the following page for the plot of this solution. 4. See Figure 4.5. Similarly bn = 0 for n even. When we expand F(t) and find that some of its terms coincide with the complementary solution to mx + kx = 0. Hence we try ∞ a0 x(t) = + bn sin nπt. That is. suppose xc = A cos ω0 t + B sin ω0 t. take the equation mx (t) + kx(t) = F(t). πn and bn = 2 . Again. πn(2 − n2 π2 ) We know this is the steady periodic solution as it contains no terms of the complementary solution and is periodic with the same period as F(t) itself. Just like before. . πn(2 − n2 π2 ) The steady periodic solution has the Fourier series 1 x sp (t) = + 4 ∞ n=1 n odd 2 sin nπt.4.2 Resonance Just like when the forcing function was a simple cosine.5. resonance could still happen. Let us assume c = 0 and we will discuss only pure resonance. APPLICATIONS OF FOURIER SERIES 177 We notice that once we plug into the differential equation x + 2x = F(t) it is clear that an = 0 for n ≥ 1 as there are no corresponding terms in the series for F(t).

Of course. However.0 Figure 4. this behavior is called pure resonance or just resonance.4 0. We note that F(t) = n=1 n odd 4 sin nπt. the terms t aN cos Nπ t + bN sin Nπ t will eventually dominate L L and lead to wild oscillations.0 0. we multiply the offending term by t. In this case we have to modify our guess and try ∞ a0 Nπ Nπ x(t) = + t aN cos t + bN sin t + 2 L L an cos n=1 n N nπ nπ t + bn sin t. FOURIER SERIES AND PDES 7.  ∞ extended periodically.5 5. as we change the frequency of F (we change L). Example 4.3 0.2 0.5. the solution will not be a Fourier series (it will not even be periodic) since it contains these terms multiplied by t.178 0.12: Plot of the steady periodic solution x sp of Example 4. πn .3 0.5 0. That is. we proceed as before.2: Find the steady periodic solution to the equation 2x + 18π2 x = F(t). Further. As before.1 0. Note that there now may be infinitely many resonance frequencies to hit.5 10.0 0.0 CHAPTER 4.5 2.5.0 0.1 0. where  1  if 0 < t < 1.5 0. From then on.0 7.0 2.4 0. we should note that since everything is an approximation and in particular c is never actually zero but something very close to zero. different terms from the Fourier series of F may interfere with the complementary solution and will cause resonance. where ω0 = Nπ L for some positive integer N. L L In other words.  F(t) =  −1 if − 1 < t < 0.2 0.5 5. only the first few resonance frequencies will matter.0 10.1.

We now plug into the differential equation. The solution must look like x(t) = c1 cos 3πt + c2 sin 3πt + x p (t) 179 for some particular solution x p . APPLICATIONS OF FOURIER SERIES Exercise 4. This series has to equal to the series for F(t). −1 4/(3π) = 2 −12π 9π b3 = 0 4 2 bn = = 3 nπ(18π2 − 2n2 π2 ) π n(9 − n2 ) a3 = for n odd and n 3.1: Compute the Fourier series of F to verify. If we simplify we obtain ∞ 2x p + 18π x = −12a3 π sin 3πt + 12b3 π cos 3πt + 2 n=1 n odd n 3 (−2n2 π2 bn + 18π2 bn ) sin nπt.5. We note that if we just tried a Fourier series with sin nπt as usual. That is. x p (t) = −6a3 π sin 3πt − 9π2 a3 t cos 3πt + 6b3 π cos 3πt − 9π2 b3 t sin 3πt+ ∞ + n=1 n odd n 3 (−n2 π2 bn ) sin nπt.4. Let us compute the second derivative. we would get duplication when n = 3. we pull out that term and multiply by t. And we have add a cosine term to get everything right.5. We equate the coefficients and solve for a3 and bn . . Therefore. we must try ∞ x p (t) = a3 t cos 3πt + b3 t sin 3πt + n=1 n odd n 3 bn sin nπt. 2x p + 18π2 x = − 12a3 π sin 3πt − 18π2 a3 t cos 3πt + 12b3 π cos 3πt − 18π2 b3 t sin 3πt+ + 18π2 a3 t cos 3πt ∞ + 18π2 b3 t sin 3πt+ + n=1 n odd n 3 (−2n2 π2 bn + 18π2 bn ) sin nπt.

2: Let F(t) = 1 + ∞ n12 cos nπt.5.3: Let F(t) = ∞ n13 sin nπt. you will not have to worry about pure resonance.5. −1 x p (t) = 2 t cos 3πt + 9π CHAPTER 4.180 That is. Express your solution as a Fourier series. That is.5. Exercise 4. .6: Let F(t) = t for −1 < t < 1 and extended periodically. 4. Find the steady periodic solution to x + x + x = F(t). Exercise 4. Exercise 4. There is a corresponding concept of practical resonance and it is very similar to the ideas we already explored in chapter 2. there will never be any conflicts and you do not need to multiply any terms by t.5.4: Let F(t) = ∞ n12 cos nπt. Exercise 4. FOURIER SERIES AND PDES ∞ n=1 n odd n 3 π3 n(9 2 sin nπt. Express your solution as a Fourier series. n=1 2 Express your solution as a Fourier series. n=1 Express your solution as a Fourier series. Find the steady periodic solution to x + π2 x = F(t). Find the steady periodic solution to x + 4x = F(t). − n2 ) When c > 0.5.5: Let F(t) = t for −1 < t < 1 and extended periodically. n=1 Express your solution as a Fourier series.5. We will not go into details here.3 Exercises Exercise 4. Find the steady periodic solution to x + x = F(t). Find the steady periodic solution to x + 2x = F(t).

which is an example of a hyperbolic PDE. and/or some initial conditions where the value of the solution or its derivatives is specified for some initial time. You would expect that if the heat distribution had a maximum (was concave down). t) denote the temperature at point x at time t. we usually have specified some boundary conditions. SEPARATION OF VARIABLES. we will study the wave equation. §9. Suppose that we have a wire (or a thin metal rod) that is insulated except at the endpoints. And vice versa. which is an example of a parabolic PDE. Let x denote the position along the wire and let t denote time.1 Heat on an insulated wire Let us first study the heat equation. which is an example of an elliptic PDE.6. then heat would flow away from the maximum. Sometimes such conditions are mixed together and we will refer to them simply as side conditions. AND THE HEAT EQUATION 181 4. we will study the heat equation.13: Insulated wire. L x Now let u(x. We will study three partial differential equations.6 PDEs. Together with a PDE. It turns out that the equation governing the this system is the so-called one-dimensional heat equation: ∂u ∂2 u = k 2. First. ∂t ∂x for some k > 0. We will only talk about linear PDEs here.5 in EP Let us recall that a partial differential equation or PDE is an equation containing the partial derivatives with respect to several independent variables. Each of our examples will illustrate behaviour that is typical for the whole class.13. where the value of the solution or its derivatives is specified along the boundary of a region. each one representing a more general class of equations. A PDE is said to be linear if the dependent variable and its derivatives appear only to the first power and in no functions. Solving PDEs will be our main application of Fourier series. That is. See Figure 4. PDES. . temperature u 0 insulation Figure 4.4. and the heat equation Note: 2 lectures. we will study the Laplace equation. Finally. separation of variables.6. Next. the change in heat at a specific point is proportional to the second derivative of the heat along the wire. This makes sense. 4.

0) = f (x). For the heat equation. Superposition also preserves some of the side conditions. However. We will write ut 2 instead of ∂u and we will write u xx instead of ∂ u . If the ends of the wire are for example kept at temperature 0. In general. t) = 0 and u x (L. 4. t) = 0. t) = X(x)T (t) using . u(x. t) = 0. c2 are constants. Note that we always have two conditions along the x axis as there are two derivatives in the x direction. Similarly for the side conditions u x (0. c2 are constants. This initial condition is not a homogeneous side condition. In particular. t) = 0 and u(L. Furthermore. t) = 0 and u x (L. If on the other hand the ends are also insulated we get the conditions u x (0. for the heat equation. we will suppose we know the initial temperature distribution. Exercise 4. since u and its derivatives do not appear to any powers or in any functions. In other words. t) = 0. if u1 and u2 are solutions that satisfy u(0. then u = c1 u1 + c2 u2 is still a solution. These side conditions are called homogeneous.2 Separation of variables First we must note the principle of superposition still applies. then we must have the conditions u(0. then u = c1 u1 + c2 u2 is still a solution that satisfies u(0. heat is not flowing in nor out of the wire at the ends. That the desired solution we are looking for is of this form is too much to hope for. t) = 0 and u(L. and c1 . t) = 0 and u(L. we must also have some boundary conditions.6. With this notation the equation becomes ∂t ∂x2 ut = ku xx . The heat equation is still called linear. The method of separation of variables is to try to find solutions that are sums or products of functions of one variable.1: Verify the principle of superposition for the heat equation. For example.182 CHAPTER 4. We assume that the wire is of length L and the ends are either exposed and touching some body of constant heat. superposition preserves all homogeneous side conditions. If u1 and u2 are solutions and c1 . we try to find solutions of the form u(x. FOURIER SERIES AND PDES We will generally use a more convenient notation for partial derivatives.6. or the ends are insulated. t) = 0. t) = 0. t) = X(x)T (t). what is perfectly reasonable to ask is to find enough “building-block” solutions u(x. for some known function f (x).

It will be useful to note that T n (0) = 1. The boundary condition u(0. let us pick the solutions L Xn (x) = sin The corresponding T n must satisfy the equation n2 π2 T n (t) + 2 kT n (t) = 0. 0) = f (x). t) = Xn (x)T n (t) = sin −n2 π2 nπ x e L2 kt . SEPARATION OF VARIABLES. the solution of this problem is easily seen to be T n (t) = e −n2 π2 kt L2 We rewrite as nπ x. t) = 0 implies X(0)T (t) = 0. Thus. L . T (t) X (x) = . We have previously found that the only eigenvalues are λn = nLπ . kT (t) X(x) As this equation is supposed to hold for all x and t. t) = 0 implies X(L) = 0. Let us guess u(x. We plug into the heat equation to obtain X(x)T (t) = kX (x)T (t). t) = X(x)T (t). u(L. where eigenfunctions are sin nπ x. AND THE HEAT EQUATION 183 this procedure so that the desired solution to the PDE is somehow constructed from these building blocks by the use of superposition. Hence. X(L) = 0. L By the method of integrating factor. Our building-block solutions are un (x. Let us call this constant −λ (the minus sign is for convenience later). But the left hand side does not depend on t and the right hand side does not depend on x. We are looking for nontrivial solutions X of the eigenvalue problem X +λX = 0. Hence X(0) = 0. 2 2 X(0) = 0.6. Therefore. L . Let us try to solve the heat equation ut = ku xx with u(0. T (t) + λkT (t) = 0. each side must be a constant. we have two equations X (x) T (t) = −λ = . kT (t) X(x) Or in other words X (x) + λX(x) = 0. t) = 0 and u(x. for integers 2 n ≥ 1. t) = 0 and u(L. Similarly. PDES. We are looking for a nontrivial solution and so we can assume that T (t) is not identically zero.4.

00 12. 0) = n=1 bn sin nπ x = f (x).0 1.5 10. t) = 0 and u(L.184 CHAPTER 4. we use superposition to write the solution as ∞ ∞ −n2 π2 nπ u(x. L Example 4. t) = u(1.5 2. such that the ends of the wire are embedded in ice (temperature 0).0 10. we find the Fourier series of the odd periodic extension of f (x).75 1. t) = 0 because x = 0 or x = L makes all the sines vanish. 0.1: Suppose that we have an insulated wire of length 1. Then suppose that initial heat distribution is u(x. L n=1 n=1 Why does this solution work? First note that it is a solution to the heat equation by superposition.6.25 0.0 2.00 0.50 0.0 7.003 u xx .003. we notice that T n (0) = 1 and so ∞ ∞ u(x.50 0. . Let k = 0.0 0.5 0.25 0.0 5. Let us suppose we also want to find when (at what t) does the maximum temperature in the wire drop to one half of the initial maximum 12. 0) = 50 x (1 − x). 0) = sin nπ x. Finally. See Figure 4.00 12. L That is. Finally. t) = bn un (x. 0) = n=1 bn un (x. u(x. plugging in t = 0. u(0. We want to find the temperature function u(x. Let us write f (x) using the sine series L ∞ f (x) = n=1 bn sin nπ x.5 5.14. t). t) = 0.5 0. t) = bn sin x e L2 kt . We used the sine series as it corresponds to the eigenvalue problem for X(x) above. 0) = 50 x (1 − x) for 0 < x < 1. FOURIER SERIES AND PDES We now note that un (x.5 7.75 0. It satisfies u(0.14: Initial distribution of temperature in the wire.00 Figure 4. We are solving the following PDE problem ut = 0.5.

000 2. let us answer the question about the maximum temperature.75 1.25 x 0.6.15: Plot of the temperature of the wire at position x at time t. .5 2. It is relatively obvious that the maximum temperature will always be at x = 0.0 10.5.5 0.700 10. 0. SEPARATION OF VARIABLES. is given by the following series. t) confirms this intuition. t). AND THE HEAT EQUATION We find the sine series for f (x) = 50 x (1 − x) for 0 < x < 1.00 12.5 7.50 0.200 3.00 0.500 5.5 0 20 40 t 60 80 100 u(x.25 0.0 5. where bn = 2 0   200 200 (−1)n 0  50 x (1 − x) sin nπx dx = 3 3 − =  400  3 n3  33 πn π π n if n even.003 t .800 6. in the middle of the wire.4.75 0. f (x) = 1 ∞ n=1 185 bn sin nπx.5 10. ∞ u(x. The plot of u(x.0 0 20 40 60 t 80 100 1.600 1.15 for 0 ≤ t ≤ 100. if n odd.00 0.300 0.900 2. That is.0 11.50 x 0.400 9.0 Figure 4. The solution u(x. PDES.100 7. plotted in Figure 4.0 7.5 5. t) = n=1 n odd 400 2 2 (sin nπx) e−n π 0. 3 n3 π Finally.t) 12.

5/2 = 6. in Figure 4.003 t .003 t .5 10.5 5. Getting back to the question of when is the maximum temperature one half of the initial maximum temperature. π That is.16: Temperature at the midpoint of the wire (the bottom curve).5 7.0 10. and the approximation of this temperature by using only the first term in the series (top curve).5 we get ∞ CHAPTER 4. the temperature at the midpoint of the wire at time t. FOURIER SERIES AND PDES u(0.5. we solve 400 2 6.5.5.5 2. 0 25 50 75 100 12. the terms of the series are insignificant compared to the first term. The figure also plots the approximation by the first term. when is the temperature at the midpoint 12.5. It would be hard to tell the difference after t = 5 or so between the first term of the series representation of u(x. Let us plot the function u(0.0 7.16. If you are interested in behavior for large enough t. t) = n=1 n odd 400 2 2 (sin nπ 0.003 .25 = 3 e−π 0.5 12. 3 n3 π For n = 3 and higher (remember we are taking only odd n). Therefore. −π2 0.5) e−n π 0. t) and the real solution. We notice from the graph that if we use the approximation by the first term we will be close enough. only the first one or two terms may be necessary. π The approximation gets better and better as t gets larger as the other terms decay much faster. The first term in the series is already a very good approximation of the function and hence 400 2 u(0.003 t . This behavior is a general feature of solving the heat equation.25 π 400 t= ≈ 24. That is.0 2. t).5 0 25 50 75 100 Figure 4. 3 ln 6.186 If we plug in x = 0.25.0 5. t) ≈ 3 e−π 0.

L . We now note that un (x. where eigenfunctions are cos nπ x (we include 2 L the constant eigenfunction). 187 4. T (t) + λkT (t) = 0.5. Our building-block solutions will be un (x. Hence X (0) = 0. t) = + 2 a0 an un (x. L For n = 0. u x (L.6. t) = 1. let us pick solutions nπ Xn (x) = cos x. t) = + 2 n=1 ∞ ∞ an cos n=1 −n2 π2 nπ x e L2 kt . L That is. X (L) = 0. Hence. Yet again we try a solution of the form u(x. So let us write f using the cosine series a0 + 2 ∞ f (x) = an cos n=1 nπ x. T n (t) = e −n2 π2 kt L2 n2 π2 kT n (t) = 0. We use superposition to write the solution as a0 u(x. t) = 0 implies X (L) = 0. By the same procedure as before we plug into the heat equation and arrive at the following two equations X (x) + λX(x) = 0. In this case.3 Insulated ends Now suppose the ends of the wire are insulated. 0) = f (x). as before. t) = Xn (x)T n (t) = cos and u0 (x. we are solving the equation ut = ku xx with u x (0.4. The boundary condition u x (0. t) = 0 and u x (L. L2 . PDES. −n2 π2 nπ x e L2 kt . t) = 0 implies X (0)T (t) = 0. At this point the story changes slightly. 0) = cos nπ L x.6. t) = X(x)T (t). for integers n ≥ 0. Similarly. L The corresponding T n must satisfy the equation T n (t) + For n ≥ 1. X (0) = 0. AND THE HEAT EQUATION So the maximum temperature drops to half at about t = 24. We are looking for nontrivial solutions X of the eigenvalue problem X + λX = 0. t) = 0 and u(x. we have T 0 (t) = 0 and hence T 0 (t) = 1. We have previously found that the 2 2 only eigenvalues are λn = nLπ . we find the Fourier series of the even periodic extension of f (x). SEPARATION OF VARIABLES.

FOURIER SERIES AND PDES Example 4. u x (0. t) = 0.2: Let us try the same example as before. t) = u x (1. 0) = 100 for 0 < x < 1. t) = u(1. t) = + 3 ∞ n=2 n even −200 2 2 (cos nπx) e−n π 0. 0) = 50 x (1 − x) for 0 < x < 1.6. the solution to the PDE problem.6.6. t) = 0.6. with k = 0. For 0 < x < 1 we have 50 x (1 − x) = 25 + 3 ∞ n=2 n even −200 cos nπx. t) = 0. Exercise 4. We are solving the following PDE problem ut = 0.001 and an initial temperature distribution of u(x.003 t . 4. 0) = 50x. Find the solution as a series.3: Find a series solution of ut = u xx . Exercise 4. Suppose that both the ends are embedded in ice (temperature 0).4 Exercises Exercise 4. but for insulated ends.188 CHAPTER 4. 2 n2 π Note in the graph that the temperature evens out across the wire.17 on the next page. u(0. 0) = 3 sin x + sin 3π for 0 < x < π. u(x.003 u xx . π2 n2 The calculation is left to the reader. 0). and you will be left with a uniform temperature of 25 ≈ 8. Hence. we must find the cosine series of u(x. For this problem. . plotted in Figure 4. is given by the series 25 u(x.6. t) = u x (π.33 along the entire 3 length of the wire. u(x. u x (0.4: Find a series solution of ut = u xx . Eventually all the terms except the constant die out. u(x.2: Suppose you have a wire of length 2.

300 0. u(x.0 0 5 10 15 20 t 25 30 1.25 0. u(0. PDES.600 1. t) = 100. Then use superposition. t) = 100x is a solution satisfying ut = u xx . Verify that satisfies the equation u xx = 0. .6.0 11.0 7. Exercise 4.75 0. Hint: Use the fact that u(x.17: Plot of the temperature of the insulated wire at position x at time t.t) 12.4.50 x 0. Exercise 4.25 5 10 15 20 25 30 t 189 u(x.400 9.0 10. t) = u x (π.5 5.5 10.5 0.00 Figure 4.800 6. u(1.5: Find a series solution of ut = u xx .6.6. t) = 0.00 0. u(0.6.500 5.6.100 7.000 5. u(x.5 2.00 12.5 7.0 2.5 0.6.0 0. 0) = sin πx for 0 < x < 1. t) = 0. u x (0.700 10. by letting t → ∞ in the solution from exercises 4.75 1.200 3. t) = 0.7: Find the steady state temperature solution as a function of x alone. Exercise 4. AND THE HEAT EQUATION 0.900 2.6: Find a series solution of ut = u xx .00 0 x 0. SEPARATION OF VARIABLES.6.5 and 4. u(1. 0) = cos x for 0 < x < π.50 0. t) = 100.

0) = 0 and u(0. Exercise 4. . t) = 0. u x (0. FOURIER SERIES AND PDES Exercise 4. where u(x.8: Use separation variables to find a nontrivial solution to u xx + uyy = 0.6. That is.9 (challenging): Suppose that one end of the wire is insulated (say at x = 0) and the other end is kept at zero temperature. Express any coefficients in the series by integrals of f (x). u(x. y) = 0. find a series solution of ut = ku xx . 0) = f (x) for 0 < x < L. y) = X(x)Y(y). Hint: Try u(x. t) = u(L.190 CHAPTER 4.6.

4.7 One dimensional wave equation

Note: 1 lecture, §9.6 in EP

Suppose we have a string such as on a guitar of length L. That is, let x denote the position along the string, let t denote time, and let y denote the displacement of the string from the rest position. See Figure 4.18.

[Figure 4.18: Vibrating string.]

Suppose we only consider vibrations in one direction. The equation that governs this setup is the so-called one-dimensional wave equation:

ytt = a² yxx,

for some a > 0. We will assume that the ends of the string are fixed and hence we get

y(0, t) = 0   and   y(L, t) = 0.

Note that we always have two conditions along the x axis as there are two derivatives in the x direction. There are also two derivatives along the t direction and hence we will need two further conditions here. We will need to know the initial position and the initial velocity of the string:

y(x, 0) = f(x)   and   yt(x, 0) = g(x)   for 0 < x < L,

for some known functions f(x) and g(x).

As the equation is again linear, superposition works just as it did for the heat equation. And again we will use separation of variables to find enough building-block solutions to get the overall solution. There is one change however. It will be easier to solve two separate problems and add their solutions. The two problems we will solve are

wtt = a² wxx,   w(0, t) = w(L, t) = 0,   w(x, 0) = 0,   wt(x, 0) = g(x)   for 0 < x < L,   (4.10)

and

ztt = a² zxx,   z(0, t) = z(L, t) = 0,   z(x, 0) = f(x),   zt(x, 0) = 0   for 0 < x < L.   (4.11)

The principle of superposition will then imply that y = w + z solves the wave equation and furthermore y(x, 0) = w(x, 0) + z(x, 0) = f(x) and yt(x, 0) = wt(x, 0) + zt(x, 0) = g(x). Hence, y is a solution to

ytt = a² yxx,   y(0, t) = y(L, t) = 0,   y(x, 0) = f(x),   yt(x, 0) = g(x)   for 0 < x < L.   (4.12)

The reason for all this complexity is that superposition only works for homogeneous conditions such as y(0, t) = y(L, t) = 0, y(x, 0) = 0, or yt(x, 0) = 0. Therefore, we will be able to use the idea of separation of variables to find many building-block solutions solving all the homogeneous conditions. We can then use them to construct a solution solving the remaining nonhomogeneous condition.

Let us start with (4.10). We try a solution of the form w(x, t) = X(x)T(t) again. We plug into the wave equation to obtain

X(x)T''(t) = a² X''(x)T(t).

Rewriting we get

T''(t)/(a² T(t)) = X''(x)/X(x).

Again, the left hand side depends only on t and the right hand side depends only on x. Therefore, both equal a constant which we will denote by −λ:

T''(t)/(a² T(t)) = −λ = X''(x)/X(x).

We solve to get two ordinary differential equations

X''(x) + λX(x) = 0,   T''(t) + λa² T(t) = 0.

The condition 0 = w(0, t) = X(0)T(t) implies X(0) = 0, and w(L, t) = 0 implies that X(L) = 0. Therefore, the only nontrivial solutions for the first equation are when λ = λn = n²π²/L², and they are

Xn(x) = sin(nπx/L).

The general solution for T for this particular λn is

Tn(t) = A cos(nπa t/L) + B sin(nπa t/L).

We also have the condition that w(x, 0) = 0 or X(x)T(0) = 0. This implies that T(0) = 0, which in turn forces A = 0. It will be convenient to pick B = L/(nπa) and hence

Tn(t) = (L/(nπa)) sin(nπa t/L).

Our building-block solution will be

wn(x, t) = (L/(nπa)) sin(nπx/L) sin(nπa t/L).

We differentiate in t:

(wn)t(x, t) = sin(nπx/L) cos(nπa t/L).

Hence, (wn)t(x, 0) = sin(nπx/L). We expand g(x) in terms of these sines as

g(x) = Σ_{n=1}^∞ bn sin(nπx/L).

Now we can just write down the solution to (4.10) as a series

w(x, t) = Σ_{n=1}^∞ bn wn(x, t) = Σ_{n=1}^∞ bn (L/(nπa)) sin(nπx/L) sin(nπa t/L).

Exercise 4.7.1: Check that w(x, 0) = 0 and wt(x, 0) = g(x).

Similarly we proceed to solve (4.11). We again try z(x, t) = X(x)T(t). The procedure works exactly the same at first. We obtain

X''(x) + λX(x) = 0,   T''(t) + λa² T(t) = 0,

and the conditions X(0) = 0, X(L) = 0. So again λ = λn = n²π²/L² and

Xn(x) = sin(nπx/L).

The condition for T however becomes T'(0) = 0, since zt(x, 0) = 0 means X(x)T'(0) = 0. Thus instead of A = 0 we get that B = 0 and we can take

Tn(t) = cos(nπa t/L).

Our building-block solution will be

zn(x, t) = sin(nπx/L) cos(nπa t/L).

Hence, zn(x, 0) = sin(nπx/L). We expand f(x) in terms of these sines as

f(x) = Σ_{n=1}^∞ cn sin(nπx/L).

And we write down the solution to (4.11) as a series

z(x, t) = Σ_{n=1}^∞ cn zn(x, t) = Σ_{n=1}^∞ cn sin(nπx/L) cos(nπa t/L).

Exercise 4.7.2: Fill in the details in the derivation of the solution of (4.11). Check that the solution satisfies all the side conditions.

Putting these two solutions together we will state the result as a theorem.

Theorem 4.7.1. Take the equation

ytt = a² yxx,   y(0, t) = y(L, t) = 0,   y(x, 0) = f(x),   yt(x, 0) = g(x)   for 0 < x < L,   (4.13)

where

f(x) = Σ_{n=1}^∞ cn sin(nπx/L)   and   g(x) = Σ_{n=1}^∞ bn sin(nπx/L).

Then the solution y(x, t) can be written as a sum of the solutions of (4.10) and (4.11). In other words,

y(x, t) = Σ_{n=1}^∞ [ bn (L/(nπa)) sin(nπx/L) sin(nπa t/L) + cn sin(nπx/L) cos(nπa t/L) ]
        = Σ_{n=1}^∞ sin(nπx/L) [ (bn L/(nπa)) sin(nπa t/L) + cn cos(nπa t/L) ].

Example 4.7.1: Let us try a simple example of a plucked string. Suppose that a = 1 in the wave equation for simplicity. Suppose that the string of length 2 is plucked in the middle such that it has the initial shape

f(x) = 0.1x          if 0 ≤ x ≤ 1,
       0.1(2 − x)    if 1 ≤ x ≤ 2.

See Figure 4.19.

[Figure 4.19: Plucked string.]

Further, the string starts at rest, so g(x) = 0. We leave it to the reader to compute the sine series of f(x). The series will be

f(x) = Σ_{n=1}^∞ (0.8/(n²π²)) sin(nπ/2) sin(nπx/2).

Note that sin(nπ/2) is the sequence 1, 0, −1, 0, 1, 0, −1, ... for n = 1, 2, 3, 4, .... Therefore,

f(x) = (0.8/π²) sin(πx/2) − (0.8/(9π²)) sin(3πx/2) + (0.8/(25π²)) sin(5πx/2) − ···

The solution y(x, t) is given by

y(x, t) = Σ_{n=1}^∞ (0.8/(n²π²)) sin(nπ/2) sin(nπx/2) cos(nπt/2)
        = (0.8/π²) sin(πx/2) cos(πt/2) − (0.8/(9π²)) sin(3πx/2) cos(3πt/2) + (0.8/(25π²)) sin(5πx/2) cos(5πt/2) − ···

A plot for 0 < t < 3 is given in Figure 4.20 on the following page. For each fixed t, you can think of the function x ↦ y(x, t) as just a function of x. This function gives you the shape of the string at time t. Make sure you understand what a plot such as the one in the figure is telling you.

Notice that unlike the heat equation, the solution does not become "smoother." In fact the edges remain. We will see the reason for this behavior in the next section where we derive the solution to the wave equation in a different way.
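For readers who want to reproduce the plot, here is a small numerical evaluation of the plucked-string series above (a sketch of our own; the truncation level N is an arbitrary choice):

    import numpy as np

    def y(x, t, N=200):
        # truncated series solution of Example 4.7.1 (L = 2, a = 1)
        total = 0.0
        for n in range(1, N + 1):
            b = 0.8 * np.sin(n * np.pi / 2) / (n**2 * np.pi**2)
            total += b * np.sin(n * np.pi * x / 2) * np.cos(n * np.pi * t / 2)
        return total

    print(y(1.0, 0.0))  # initial midpoint height, about 0.1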

0) = sin 3πx + 4 sin 6πx yt (x.10 1 y(x. and for any constant a.05 y 0. t) = y(1.4: Solve ytt = 4y xx .0 1.066 -0.088 0.044 -0. t) = 0.022 0.7.t) 0. y(0.20: Shape of the plucked string for 0 < t < 3.066 0.7.196 0.0 1.5: Derive the solution for a general plucked string of length L. for 0 < x < 1.5 x 1. t) = y(1.00 . t) = 0.0 -0.05 -0. Exercise 4.0 0. where we raise the string some distance b at the midpoint and let go. 0) = 0 Exercise 4. for 0 < x < 1.7.10 0.00 -0.0 0.05 0. 1 y(x. 0) = sin 3πx + 4 sin 6πx yt (x.10 0 1 1. for 0 < x < 1. FOURIER SERIES AND PDES t 2 3 0.000 -0. y 0.05 -0.5 2. 1 y(x.0 0.10 0 CHAPTER 4.7.3: Solve ytt = 9y xx . y(0.5 t 2 3 2. 4.110 0.110 0.1 Exercises Exercise 4.5 x Figure 4.044 0. 0) = sin 9πx for 0 < x < 1.022 -0.088 -0.

Exercise 4.7.6: Suppose that a stringed musical instrument falls on the floor. Suppose that the length of the string is 1 and a = 1. When the musical instrument hits the ground the string was in rest position and hence y(x, 0) = 0. However, the string was moving at some velocity at impact (t = 0), say yt(x, 0) = −1. Find the solution y(x, t) for the shape of the string at time t.

Exercise 4.7.7 (challenging): Suppose that you have a vibrating string and that there is air resistance proportional to the velocity. That is, you have

ytt = a² yxx − k yt,   y(0, t) = y(1, t) = 0,   y(x, 0) = f(x),   yt(x, 0) = 0,   for 0 < x < 1.

Suppose that 0 < k < 2πa. Derive a series solution to the problem. Any coefficients in the series should be expressed as integrals of f(x).

4.8 D'Alembert solution of the wave equation

Note: 1 lecture, different from §9.6 in EP

We have solved the wave equation by using Fourier series. But it is often more convenient to use the so-called d'Alembert solution to the wave equation. [Named after the French mathematician Jean le Rond d'Alembert (1717 – 1783).] This solution can be derived using Fourier series as well, but it is really an awkward use of those concepts. It is much easier to derive this solution by making a correct change of variables to get an equation which can be solved by simple integration.

Suppose we have the wave equation

ytt = a² yxx,   (4.14)

and we wish to solve the equation (4.14) given the conditions

y(0, t) = y(L, t) = 0 for all t,   y(x, 0) = f(x) and yt(x, 0) = g(x) for 0 < x < L.   (4.15)

4.8.1 Change of variables

We will transform the equation into a simpler form where it can be solved by simple integration. We change variables to ξ = x − at, η = x + at and we use the chain rule:

∂/∂x = (∂ξ/∂x) ∂/∂ξ + (∂η/∂x) ∂/∂η = ∂/∂ξ + ∂/∂η,
∂/∂t = (∂ξ/∂t) ∂/∂ξ + (∂η/∂t) ∂/∂η = −a ∂/∂ξ + a ∂/∂η.

We compute

yxx = ∂²y/∂ξ² + 2 ∂²y/∂ξ∂η + ∂²y/∂η²,
ytt = a² ∂²y/∂ξ² − 2a² ∂²y/∂ξ∂η + a² ∂²y/∂η².

In the above computations, we have used the fact from calculus that ∂²y/∂ξ∂η = ∂²y/∂η∂ξ. Then we plug into the wave equation:

0 = a² yxx − ytt = 4a² ∂²y/∂ξ∂η = 4a² yξη.

Therefore, the wave equation (4.14) transforms into yξη = 0. It is easy to find the general solution to this equation by integrating twice. Let us integrate with respect to η first [we can just as well integrate with ξ first, if we wish] and notice that the constant of integration depends on ξ, to get yξ = C(ξ). Next, we integrate with respect to ξ and notice that the constant of integration must depend on η. Thus, y = ∫ C(ξ) dξ + B(η). The solution must then be of the following form for some functions A(ξ) and B(η):

y = A(ξ) + B(η) = A(x − at) + B(x + at).

4.8.2 The formula

We know what any solution must look like, but we need to solve for the given side conditions. We will just give the formula and see that it works. First let F(x) denote the odd extension of f(x), and let G(x) denote the odd extension of g(x). Now define

A(x) = (1/2) F(x) − (1/2a) ∫_0^x G(s) ds,   B(x) = (1/2) F(x) + (1/2a) ∫_0^x G(s) ds.

We claim this A(x) and B(x) give the solution. Explicitly, the solution is y(x, t) = A(x − at) + B(x + at), or in other words:

y(x, t) = (1/2) F(x − at) − (1/2a) ∫_0^{x−at} G(s) ds + (1/2) F(x + at) + (1/2a) ∫_0^{x+at} G(s) ds
        = (F(x − at) + F(x + at))/2 + (1/2a) ∫_{x−at}^{x+at} G(s) ds.   (4.16)

Let us check that the d'Alembert formula really works. Plugging in t = 0,

y(x, 0) = (1/2) F(x) − (1/2a) ∫_0^x G(s) ds + (1/2) F(x) + (1/2a) ∫_0^x G(s) ds = F(x).

So far so good. Assume for simplicity F is differentiable. By the fundamental theorem of calculus we have

yt(x, t) = (−a/2) F'(x − at) + (1/2) G(x − at) + (a/2) F'(x + at) + (1/2) G(x + at).

So

yt(x, 0) = (−a/2) F'(x) + (1/2) G(x) + (a/2) F'(x) + (1/2) G(x) = G(x).

The initial conditions check out. OK, now the boundary conditions.

Note that F(x) and G(x) are odd. Also ∫_0^x G(s) ds is an even function of x because G(x) is odd (to see this fact, do the substitution s = −v). So

y(0, t) = (1/2) F(−at) − (1/2a) ∫_0^{−at} G(s) ds + (1/2) F(at) + (1/2a) ∫_0^{at} G(s) ds
        = −(1/2) F(at) − (1/2a) ∫_0^{at} G(s) ds + (1/2) F(at) + (1/2a) ∫_0^{at} G(s) ds = 0.

Now F(x) and G(x) are 2L periodic as well. Furthermore,

y(L, t) = (1/2) F(L − at) − (1/2a) ∫_0^{L−at} G(s) ds + (1/2) F(L + at) + (1/2a) ∫_0^{L+at} G(s) ds.

Since F is odd and 2L periodic, F(L − at) = F(−L − at) = −F(L + at), so the two F terms cancel. The two integral terms combine to (1/2a) ∫_{L−at}^{L+at} G(s) ds, which is zero because G, being odd and 2L periodic, is odd about the point L. And voilà, it works.

Example 4.8.1: What the d'Alembert solution says is that the solution is a superposition of two functions (waves) moving in opposite directions at "speed" a. To get an idea of how it works, let us do an example. Suppose that we have the simpler setup

ytt = yxx,   y(0, t) = y(1, t) = 0,   y(x, 0) = f(x),   yt(x, 0) = 0.

Here f(x) is an impulse of height 1 centered at x = 0.5:

f(x) = 0                if 0 ≤ x < 0.45,
       20(x − 0.45)     if 0.45 ≤ x < 0.5,
       20(0.55 − x)     if 0.5 ≤ x < 0.55,
       0                if 0.55 ≤ x ≤ 1.

The graph of this pulse is the top left plot in Figure 4.21 on the next page. Let F(x) be the odd periodic extension of f(x). Then from (4.16) we know that the solution is given as

y(x, t) = (F(x − t) + F(x + t))/2.
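The formula is simple enough to evaluate directly on a computer. A minimal sketch (the helper names f, F, y are ours; the modular reduction implements the odd 2-periodic extension):

    import numpy as np

    def f(x):
        # the impulse of height 1 centered at x = 0.5
        if 0.45 <= x < 0.5:
            return 20 * (x - 0.45)
        if 0.5 <= x <= 0.55:
            return 20 * (0.55 - x)
        return 0.0

    def F(x):
        # odd periodic extension of f with period 2
        x = ((x + 1) % 2) - 1   # reduce to [-1, 1)
        return f(x) if x >= 0 else -f(-x)

    def y(x, t):
        return (F(x - t) + F(x + t)) / 2

    print(y(0.1, 0.6))  # about -0.5; compare the hand computation below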

[Figure 4.21: Plot of the d'Alembert solution for t = 0, t = 0.2, t = 0.4, and t = 0.6.]

It is not hard to compute specific values of y(x, t). For example, to compute y(0.1, 0.6) we notice x − t = −0.5 and x + t = 0.7. Now F(−0.5) = −f(0.5) = −20(0.55 − 0.5) = −1 and F(0.7) = f(0.7) = 0. Hence y(0.1, 0.6) = (−1 + 0)/2 = −0.5. As you can see, the d'Alembert solution is much easier to actually compute and to plot than the Fourier series solution. See Figure 4.21 for plots of the solution y for several different t.

4.8.3 Notes

It is perhaps easier and more useful to memorize the procedure rather than the formula itself. The important thing to remember is that a solution to the wave equation is a superposition of two waves traveling in opposite directions. That is,

y(x, t) = A(x − at) + B(x + at).

If you think about it, the exact formulas for A and B are not hard to guess once you realize what kind of side conditions y(x, t) is supposed to satisfy. Let us give the formula again, but slightly differently. Best approach is to do this in stages. When g(x) = 0 (and hence G(x) = 0) we have the solution

(F(x − at) + F(x + at))/2.

On the other hand, when f(x) = 0 (and hence F(x) = 0), we let

H(x) = ∫_0^x G(s) ds.

The solution in this case is

(1/2a) ∫_{x−at}^{x+at} G(s) ds = (−H(x − at) + H(x + at))/(2a).

By superposition we get a solution for the general side conditions (4.15) (when neither f(x) nor g(x) are identically zero):

y(x, t) = (F(x − at) + F(x + at))/2 + (−H(x − at) + H(x + at))/(2a).   (4.17)

Do note the minus sign before the H.

The thing is, when you have formulas for f(x) and g(x), those formulas in general hold only for 0 < x < L, and are not usually equal to F(x) and G(x) for other x. Warning: Make sure you use the odd extensions F(x) and G(x) when you compute.

4.8.4 Exercises

Exercise 4.8.1: Check that the new formula (4.17) satisfies the side conditions (4.15).

Exercise 4.8.2: Using the d'Alembert solution solve ytt = 4yxx, 0 < x < π, t > 0, y(0, t) = y(π, t) = 0, y(x, 0) = sin x, and yt(x, 0) = sin x. Hint: note that sin x is the odd extension of y(x, 0) and yt(x, 0).

Exercise 4.8.3: Using the d'Alembert solution solve ytt = 2yxx, 0 < x < 1, t > 0, y(0, t) = y(1, t) = 0, y(x, 0) = sin⁵ πx, and yt(x, 0) = sin³ πx.

Exercise 4.8.4: Take ytt = 4yxx, 0 < x < π, t > 0, y(0, t) = y(π, t) = 0, y(x, 0) = x(π − x), and yt(x, 0) = 0. a) Solve using the d'Alembert formula. (Hint: You can use the sine series for y(x, 0).) b) Find the solution as a function of x for a fixed t = 0.5, t = 1, and t = 2. Do not use the sine series here.

     0  if x > 1. t = 1. t) = y(π.     x + 1  if −1 ≤ x < 0. That is. y(x. and t = 2.5: Derive the d’Alembert solution for ytt = a2 y xx .6: The d’Alembert solution still works if there are no boundary conditions and the initial condition is defined on the whole real line.8. where  0  if x < −1. using the Fourier series solution of the wave equation. y(0. t = 1 . 0 < x < π. write down a piecewise definition for the solution. y(x. t) = 0. 0) = 0. Solve using the d’Alembert solution. Suppose that ytt = y xx (for all x on the real line and t ≥ 0). t > 0. D’ALEMBERT SOLUTION OF THE WAVE EQUATION 203 Exercise 4. and yt (x.4. and yt (x. Exercise 4. 2 . 0) = 0.8. 0) = f (x). by applying an appropriate trigonometric identity. 0) = f (x).8.  f (x) =  −x + 1 if 0 ≤ x < 1. Then sketch the solution for t = 0.

4.9 Steady state temperature, the Laplacian, and Dirichlet problems

Note: 1 lecture, §9.7 in EP

Suppose we have an insulated wire, a plate, or a 3-dimensional object. We apply certain fixed temperatures on the ends of the wire, the edges of the plate, or on all sides of the 3-dimensional object. We wish to find out what is the steady state temperature distribution. That is, we wish to know what will be the temperature after a long enough period of time.

We are really looking for a solution to the heat equation that is not dependent on time. Let us first do this in one space variable. We are looking for a function u that satisfies ut = k uxx, but such that ut = 0 for all x and t. Hence, we are looking for a function of x alone that satisfies uxx = 0. It is easy to solve this equation by integration and we see that u = Ax + B for some constants A and B.

Suppose we have an insulated wire, and we apply constant temperature T1 at one end (say where x = 0) and T2 at the other end (at x = L, where L is the length of the wire). Then our steady state solution is

u(x) = ((T2 − T1)/L) x + T1.

So in one dimension, the steady state solutions are basically just straight lines. This solution agrees with our common sense intuition of how the heat should be distributed in the wire.

Things are more complicated in two or more space dimensions. Let us restrict to two space dimensions for simplicity. The heat equation in two variables is

ut = k(uxx + uyy),   (4.18)

or more commonly written as ut = k∆u or ut = k∇²u. Here the ∆ and ∇² symbols mean ∂²/∂x² + ∂²/∂y². We will use ∆ from now on. The ∆ is called the Laplacian. The reason for that notation is that you can define ∆ to be the right thing for any number of space dimensions, and then the heat equation is always ut = k∆u.

OK, now that we have notation out of the way, let us see what an equation for the steady state solution looks like. We are looking for a solution to (4.18) which does not depend on t. Hence we are looking for a function u(x, y) such that

∆u = uxx + uyy = 0.

This equation is called the Laplace equation. [Named after the French mathematician Pierre-Simon, marquis de Laplace (1749 – 1827).] Solutions to the Laplace equation are called harmonic functions and have many nice properties and applications far beyond the steady state heat problem.

Harmonic functions in two variables are no longer just linear (plane graphs). For example, you can check that the functions x² − y² and xy are harmonic. However, if you remember your multivariable calculus, note that if uxx is positive, so that u is concave up in the x direction, then uyy must be negative and u must be concave down in the y direction. Therefore, a harmonic function can never have any "hilltop" or "valley" on the graph. This observation is consistent with our intuitive idea of steady state heat distribution.

Commonly the Laplace equation is part of a so-called Dirichlet problem. [Named after the German mathematician Johann Peter Gustav Lejeune Dirichlet (1805 – 1859).] That is, we have some region in the xy-plane and we specify certain values along the boundaries of the region. We then try to find a solution u defined on this region such that u agrees with the values we specified on the boundary.

For simplicity, we will consider a rectangular region. Also for simplicity, we will specify boundary values to be zero at 3 of the four edges and only specify an arbitrary function at one edge. As we still have the principle of superposition, you can use this simpler solution to derive the general solution for arbitrary boundary values by solving 4 different problems, one for each edge, and adding those solutions together. This setup is left as an exercise.

We wish to solve the following problem. Let h and w be the height and width of our rectangle, with one corner at the origin and lying in the first quadrant. We take u = 0 on the top and the two side edges, and u = f(x) on the bottom edge:

∆u = 0,   (4.19)
u(x, h) = 0   for 0 < x < w,   (4.20)
u(0, y) = 0   for 0 < y < h,   (4.21)
u(w, y) = 0   for 0 < y < h,   (4.22)
u(x, 0) = f(x)   for 0 < x < w.   (4.23)

The method we will apply is separation of variables. We notice that superposition still works for the equation and all the homogeneous conditions. Therefore, we will come up with enough building-block solutions satisfying all the homogeneous boundary conditions (all conditions except (4.23)). Then we can use the Fourier series for f(x) to solve the problem as before.

We try u(x, y) = X(x)Y(y). We plug into the equation to get X''Y + XY'' = 0. We put the Xs on one side and the Ys on the other to get

−X''/X = Y''/Y.

The left hand side only depends on x and the right hand side only depends on y. Therefore, there is some constant λ such that λ = −X''/X = Y''/Y. And we get two equations

X'' + λX = 0,   Y'' − λY = 0.

Furthermore, the homogeneous boundary conditions imply that X(0) = X(w) = 0 and Y(h) = 0. Taking the equation for X, we have already seen that we have a nontrivial solution if and only if λ = λn = n²π²/w², and the solution is a multiple of

Xn(x) = sin(nπx/w).

For these given λn, the general solution for Y (one for each n) is

Yn(y) = An cosh(nπy/w) + Bn sinh(nπy/w).   (4.24)

We only have one condition on Yn and hence we can pick one of the constants An or Bn to be whatever is convenient. It will be useful to have Yn(0) = 1, so we let An = 1. Setting Yn(h) = 0 and solving for Bn we get that

Bn = −cosh(nπh/w) / sinh(nπh/w).

After we plug An and Bn into (4.24) and simplify, we find

Yn(y) = sinh(nπ(h − y)/w) / sinh(nπh/w).

We define un(x, y) = Xn(x)Yn(y). Observe that

un(x, 0) = Xn(x)Yn(0) = sin(nπx/w).

And note that un satisfies (4.19)–(4.22), and any linear combination (finite or infinite) of the un must also satisfy (4.19)–(4.22). Suppose that

f(x) = Σ_{n=1}^∞ bn sin(nπx/w).

Then we get a solution of (4.19)–(4.23) of the following form:

u(x, y) = Σ_{n=1}^∞ bn un(x, y) = Σ_{n=1}^∞ bn sin(nπx/w) (sinh(nπ(h − y)/w) / sinh(nπh/w)).

As each un satisfies (4.19)–(4.22), we see that u must satisfy (4.19)–(4.22) as well. By plugging in y = 0 it is easy to see that u satisfies (4.23) too.

Example 4.9.1: Suppose that we take w = h = π and we let f(x) = π. We compute the sine series for the function π (we will get the square wave). We find that for 0 < x < π we have

f(x) = Σ_{n=1, n odd}^∞ (4/n) sin nx.

Therefore the solution u(x, y), see Figure 4.22, to the corresponding Dirichlet problem is given as

u(x, y) = Σ_{n=1, n odd}^∞ (4/n) (sin nx) (sinh(n(π − y)) / sinh(nπ)).

[Figure 4.22: Steady state temperature of a square plate with three sides held at zero and one side held at π.]
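A quick numerical evaluation of this series (our own sketch; the truncation level N is arbitrary, and convergence is slow near the hot edge y = 0):

    import numpy as np

    def u(x, y, N=99):
        total = 0.0
        for n in range(1, N + 1, 2):   # odd n only
            total += (4.0 / n) * np.sin(n * x) \
                     * np.sinh(n * (np.pi - y)) / np.sinh(n * np.pi)
        return total

    print(u(np.pi / 2, np.pi / 2))  # temperature at the center of the plate
    print(u(np.pi / 2, 0.05))       # near the hot edge, close to pi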

This scenario corresponds to the steady state temperature on a square plate of width π with 3 sides held at 0 degrees and one side held at π degrees. If we have arbitrary data on all sides, then we solve four problems, each using one piece of nonhomogeneous data. Then we use the principle of superposition to add up all four solutions to have a solution to the original problem.

There is another way to visualize the solutions. Take a wire and bend it in just the right way so that it corresponds to the graph of the temperature above the boundary of your region. Then dip the wire in soapy water and let it form a soapy film stretched between the edges of the wire. It turns out that this soap film is precisely the graph of the solution to the Laplace equation. Harmonic functions come up frequently in problems where we are trying to minimize area of some surface or minimize energy in some system.

4.9.1 Exercises

Exercise 4.9.1: Let R be the region described by 0 < x < π and 0 < y < π. Solve the problem

uxx + uyy = 0,   u(x, 0) = sin x,   u(x, π) = 0,   u(0, y) = 0,   u(π, y) = 0.

Exercise 4.9.2: Let R be the region described by 0 < x < 1 and 0 < y < 1. Solve the problem

uxx + uyy = 0,   u(x, 0) = sin πx − sin 2πx,   u(x, 1) = 0,   u(0, y) = 0,   u(1, y) = 0.

Exercise 4.9.3: Let R be the region described by 0 < x < 1 and 0 < y < 1. Solve the problem

∆u = 0,   u(x, 0) = u(x, 1) = u(0, y) = u(1, y) = C,

for some constant C. Hint: Guess, then check your intuition.

Exercise 4.9.4: Let R be the region described by 0 < x < π and 0 < y < π. Solve

∆u = 0,   u(x, 0) = 0,   u(x, π) = π,   u(0, y) = y,   u(π, y) = y.

Hint: Try a solution of the form u(x, y) = X(x) + Y(y) (different separation of variables).

Exercise 4.9.5: Use the solution of Exercise 4.9.4 to solve

∆u = 0,   u(x, 0) = sin x,   u(x, π) = π,   u(0, y) = y,   u(π, y) = y.

Hint: Use superposition.

y) = 0. u(w.9. u(x.6: Let R be the region described by 0 < x < w and 0 < y < h. u(0. Hint: Use superposition. y) = sin πy. y) = 0.8: Let R be the region described by 0 < x < w and 0 < y < h. u(1. y) = f (y). 0) = 0. y) = 0. Solve the problem u xx + uyy = 0. u(0. u(x. u(0. u(x. u(x. u(1. y) = 0. Exercise 4. u(w. Solve the problem u xx + uyy = 0.9. 1) = sin 2πx. Hint: Use superposition. The solution should be in series form using the Fourier series coefficients of f (y). STEADY STATE TEMPERATURE Exercise 4. u(x. y) = f (y). h) = 0. Solve the problem u xx + uyy = 0. The solution should be in series form using the Fourier series coefficients of f (x). Exercise 4. u(w. h) = f (x). u(x.4. 0) = 0. u(x. Solve the problem u xx + uyy = 0.7: Let R be the region described by 0 < x < w and 0 < y < h.9: Let R be the region described by 0 < x < 1 and 0 < y < 1. Exercise 4. 0) = sin 9πx.9. The solution should be in series form using the Fourier series coefficients of f (y). u(x. u(x.10: Let R be the region described by 0 < x < 1 and 0 < y < 1. Solve the problem u xx + uyy = 0. u(0.9. y) = sin πy. y) = 0. 1) = sin πx. . 0) = sin πx.9. y) = 0. 0) = 0. u(x. u(0. h) = 0. 209 Exercise 4.9.


Chapter 5

Eigenvalue problems

5.1 Sturm-Liouville problems

Note: 2 lectures, §10.1 in EP

5.1.1 Boundary value problems

We have encountered several different eigenvalue problems such as

X''(x) + λX(x) = 0,

with different boundary conditions:

X(0) = 0, X(L) = 0   (Dirichlet), or
X'(0) = 0, X'(L) = 0   (Neumann), or
X'(0) = 0, X(L) = 0   (Mixed), or
X(0) = 0, X'(L) = 0   (Mixed), ...

These problems came up, for example, in the study of the heat equation ut = k uxx when we were trying to solve the equation by the method of separation of variables. Dirichlet conditions correspond to applying a zero temperature at the ends, Neumann means insulating the ends, etc. Other types of endpoint conditions also arise naturally, such as

hX(0) − X'(0) = 0,   hX(L) + X'(L) = 0,

for some constant h.

During the process we encountered a certain eigenvalue problem and found the eigenfunctions Xn(x). We then found the eigenfunction decomposition of the initial temperature f(x) = u(x, 0) in terms of the eigenfunctions

f(x) = Σ_{n=1}^∞ cn Xn(x).

Once we had this decomposition and once we found suitable Tn(t) such that Tn(0) = 1, we noted that a solution to the original problem could be written as

u(x, t) = Σ_{n=1}^∞ cn Tn(t) Xn(x).

We will try to solve more general problems using this method. We will study second order linear equations of the form

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0.   (5.1)

Essentially any second order linear equation of the form a(x)y'' + b(x)y' + c(x)y + λd(x)y = 0 can be written as (5.1) after multiplying by a proper factor.

Example 5.1.1 (Bessel): Take

x² y'' + x y' + (λx² − n²) y = 0.

Multiply both sides by 1/x to obtain

(1/x) ( x² y'' + x y' + (λx² − n²) y ) = x y'' + y' + (λx − n²/x) y = d/dx ( x dy/dx ) − (n²/x) y + λ x y = 0.

We can state the general Sturm-Liouville problem. [Named after the French mathematicians Jacques Charles François Sturm (1803 – 1855) and Joseph Liouville (1809 – 1882).] We seek nontrivial solutions to

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0,   a < x < b,
α1 y(a) − α2 y'(a) = 0,
β1 y(b) + β2 y'(b) = 0.   (5.2)

In particular, we seek λs that allow for nontrivial solutions. The λs for which there is a nontrivial solution are called the eigenvalues, and the corresponding nontrivial solutions are called eigenfunctions. Obviously α1 and α2 should not be both zero, same for β1 and β2.

Theorem 5.1.1. Suppose p(x), p'(x), q(x) and r(x) are continuous on [a, b] and suppose p(x) > 0 and r(x) > 0 for all x in [a, b]. Then the Sturm-Liouville problem (5.2) has an increasing sequence of eigenvalues

λ1 < λ2 < λ3 < ···

such that lim_{n→∞} λn = +∞ and such that to each λn there is (up to a constant multiple) a single eigenfunction yn(x). Moreover, if q(x) ≥ 0 and α1, α2, β1, β2 ≥ 0, then λn ≥ 0 for all n.

Note: Be careful about the signs. Also be careful about the inequalities for r and p: they must be strict for all x! Problems satisfying the hypothesis of the theorem are called regular Sturm-Liouville problems, and we will only consider such problems here. That is, a regular problem is one where p(x), p'(x), q(x) and r(x) are continuous, p(x) > 0, r(x) > 0, q(x) ≥ 0, and α1, α2, β1, β2 ≥ 0. When zero is an eigenvalue, we will usually start labeling the eigenvalues at 0 rather than 1 for convenience.

Example 5.1.2: The problem y'' + λy = 0, 0 < x < L, y(0) = 0, and y(L) = 0 is a regular Sturm-Liouville problem: p(x) = 1, q(x) = 0, r(x) = 1, and we have p(x) = 1 > 0 and r(x) = 1 > 0. The eigenvalues are λn = n²π²/L² and the eigenfunctions are yn(x) = sin(nπx/L). All eigenvalues are nonnegative as predicted by the theorem.

Exercise 5.1.1: Find eigenvalues and eigenfunctions for

y'' + λy = 0,   y'(0) = 0,   y'(1) = 0.

Identify the p, q, r, αj, βj. Can you use the theorem to make the search for eigenvalues easier?

Example 5.1.3: Find eigenvalues and eigenfunctions of the problem

y'' + λy = 0,   0 < x < 1,
hy(0) − y'(0) = 0,   y'(1) = 0,   h > 0.

These equations give a regular Sturm-Liouville problem.

Exercise 5.1.2: Identify p, q, r, αj, βj in the example above.

First note that λ ≥ 0 by Theorem 5.1.1. Therefore, the general solution (without boundary conditions) is

y(x) = A cos(√λ x) + B sin(√λ x)   if λ > 0,
y(x) = Ax + B   if λ = 0.

Let us see if λ = 0 is an eigenvalue: We must satisfy 0 = hB − A and A = 0, hence B = 0 (as h > 0). Therefore, 0 is not an eigenvalue (there is no eigenfunction).

Now let us try λ > 0. We plug in the boundary conditions:

0 = hA − √λ B,
0 = −A√λ sin √λ + B√λ cos √λ.

Note that if A = 0, then B = 0 and vice versa, hence both are nonzero. So B = hA/√λ, and substituting into the second equation gives 0 = −A√λ sin √λ + hA cos √λ. As A ≠ 0 we get

0 = −√λ sin √λ + h cos √λ,

or

h/√λ = tan √λ.

Now use a computer to find λn. There are tables available, though using a computer or a graphing calculator will probably be far more convenient nowadays. The easiest method is to plot the functions h/x and tan x and see for which x they intersect; the intersections occur at x = √λ. There will be an infinite number of intersections. So denote by √λ1 the first intersection, by √λ2 the second intersection, etc. For example, when h = 1, we get that √λ1 ≈ 0.86 and √λ2 ≈ 3.43. A plot for h = 1 is given in Figure 5.1.

[Figure 5.1: Plot of 1/x and tan x.]

The appropriate eigenfunction (let A = 1 for convenience; then B = h/√λn) is

yn(x) = cos(√λn x) + (h/√λn) sin(√λn x).

5.1.2 Orthogonality

We have seen the notion of orthogonality before. For example, we have shown that sin nx are orthogonal for distinct n on [0, π]. For general Sturm-Liouville problems we will need a more general setup. Let r(x) be a weight function (any function, though generally we will assume it is positive) on [a, b]. Then two functions f(x), g(x) are said to be orthogonal with respect to the weight function r(x) when

∫_a^b f(x) g(x) r(x) dx = 0.
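Rather than reading the intersections off a graph, one can locate them with a root finder. A sketch using scipy (the bracketing intervals are chosen by eye from the plot; all names are ours):

    import numpy as np
    from scipy.optimize import brentq

    h = 1.0
    g = lambda x: np.tan(x) - h / x   # roots occur at x = sqrt(lambda)

    x1 = brentq(g, 0.1, 1.5)                            # first intersection
    x2 = brentq(g, np.pi + 0.01, 3 * np.pi / 2 - 0.01)  # second intersection
    print(x1, x2)        # about 0.86 and 3.43
    print(x1**2, x2**2)  # the eigenvalues lambda_1 and lambda_2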

In this setting, we define the inner product as

⟨f, g⟩ = ∫_a^b f(x) g(x) r(x) dx,

and then say f and g are orthogonal whenever ⟨f, g⟩ = 0. The results and concepts are again analogous to finite dimensional linear algebra.

The idea of the given inner product is that those x where r(x) is greater have more weight. Nontrivial (nonconstant) r(x) arise naturally, for example from a change of variables. Hence, you could think of a change of variables such that dξ = r(x) dx.

We have the following orthogonality property of eigenfunctions of a regular Sturm-Liouville problem.

Theorem 5.1.2. Suppose we have a regular Sturm-Liouville problem

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0,
α1 y(a) − α2 y'(a) = 0,
β1 y(b) + β2 y'(b) = 0.

Let yj and yk be two distinct eigenfunctions for two distinct eigenvalues λj and λk. Then

∫_a^b yj(x) yk(x) r(x) dx = 0,

that is, yj and yk are orthogonal with respect to the weight function r.

The proof is very similar to the analogous theorem from §4.1. It can also be found in many books including, for example, Edwards and Penney [EP].

5.1.3 Fredholm alternative

We also have the Fredholm alternative theorem we talked about before for all regular Sturm-Liouville problems. We state it here for completeness.

Theorem 5.1.3 (Fredholm alternative). Suppose that we have a regular Sturm-Liouville problem. Then either

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = 0,
α1 y(a) − α2 y'(a) = 0,
β1 y(b) + β2 y'(b) = 0,

has a nonzero solution, or

d/dx ( p(x) dy/dx ) − q(x) y + λ r(x) y = f(x),
α1 y(a) − α2 y'(a) = 0,
β1 y(b) + β2 y'(b) = 0,

has a unique solution for any f(x) continuous on [a, b].

This theorem is used in much the same way as we did before in §4.4. It is used when solving more general nonhomogeneous boundary value problems. Actually the theorem does not help us solve the problem, but it tells us when a solution exists and is unique, so that we know when to spend time looking for a solution. To solve the problem we decompose f(x) and y(x) in terms of the eigenfunctions of the homogeneous problem, and then solve for the coefficients of the series for y(x).

5.1.4 Eigenfunction series

What we want to do with the eigenfunctions once we have them is to compute the eigenfunction decomposition of an arbitrary function f(x). That is, we wish to write

f(x) = Σ_{n=1}^∞ cn yn(x),   (5.3)

where yn(x) are the eigenfunctions. We wish to find out if we can represent any function f(x) in this way, and if so, we wish to calculate cn (and of course we would want to know if the sum converges). We will assume convergence and the ability to integrate term by term. OK, so imagine we could write f(x) as above. Because of orthogonality we have

⟨f, ym⟩ = ∫_a^b f(x) ym(x) r(x) dx
        = Σ_{n=1}^∞ cn ∫_a^b yn(x) ym(x) r(x) dx
        = cm ∫_a^b ym(x) ym(x) r(x) dx = cm ⟨ym, ym⟩.

Hence,

cm = ⟨f, ym⟩ / ⟨ym, ym⟩ = ( ∫_a^b f(x) ym(x) r(x) dx ) / ( ∫_a^b ym(x)² r(x) dx ).   (5.4)

Note that the ym are known only up to a constant multiple, so we could have picked a scalar multiple of an eigenfunction such that ⟨ym, ym⟩ = 1 (if we had an arbitrary eigenfunction ỹm, divide it by √⟨ỹm, ỹm⟩). In the case that ⟨ym, ym⟩ = 1 we would have the simpler form cm = ⟨f, ym⟩, as we essentially did for the Fourier series.

The following theorem holds more generally, but the statement given is enough for our purposes.

Theorem 5.1.4. Suppose f is a piecewise smooth continuous function on [a, b]. If y1, y2, ... are the eigenfunctions of a regular Sturm-Liouville problem, then there exist real constants c1, c2, ..., given by (5.4), such that (5.3) converges and holds for a < x < b.

Example 5.1.4: Take the simple Sturm-Liouville problem

y'' + λy = 0,   0 < x < π/2,
y(0) = 0,   y'(π/2) = 0.

The above is a regular problem, and furthermore we actually know by Theorem 5.1.1 that λ ≥ 0.

Suppose λ = 0; then the general solution is y(x) = Ax + B. We plug in the initial conditions to get 0 = y(0) = B and 0 = y'(π/2) = A, hence λ = 0 is not an eigenvalue. The general solution, therefore, is

y(x) = A cos(√λ x) + B sin(√λ x).

Plugging in the boundary conditions we get 0 = y(0) = A and 0 = y'(π/2) = √λ B cos(√λ π/2). B cannot be zero and hence cos(√λ π/2) = 0. This means that √λ π/2 must be an odd integral multiple of π/2, i.e., (2n − 1)π/2 = √λn π/2. Hence

λn = (2n − 1)².

We can take B = 1, and hence our eigenfunctions are

yn(x) = sin((2n − 1)x).

We finally compute

∫_0^{π/2} sin²((2n − 1)x) dx = π/4.

So any piecewise smooth function on [0, π/2] can be written as

f(x) = Σ_{n=1}^∞ cn sin((2n − 1)x),

where

cn = ⟨f, yn⟩ / ⟨yn, yn⟩ = ( ∫_0^{π/2} f(x) sin((2n − 1)x) dx ) / ( ∫_0^{π/2} sin²((2n − 1)x) dx ) = (4/π) ∫_0^{π/2} f(x) sin((2n − 1)x) dx.

Note that the series converges to an odd 2π-periodic (not π-periodic!) extension of f(x).
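The coefficient formula (5.4) is easy to apply numerically. A sketch for the eigenfunctions of Example 5.1.4, taking f(x) = x just as an illustration (the sample function and the names are our choices):

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: x

    def c(n):
        # c_n = (4 / pi) * integral of f(x) sin((2n - 1) x) over [0, pi/2]
        val, _ = quad(lambda x: f(x) * np.sin((2 * n - 1) * x),
                      0, np.pi / 2, limit=200)
        return (4 / np.pi) * val

    x0 = 0.7
    approx = sum(c(n) * np.sin((2 * n - 1) * x0) for n in range(1, 60))
    print(approx)  # close to f(0.7) = 0.7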

Exercise 5.1.3 (challenging): In the above example, the function is defined on 0 < x < π/2, yet the series converges to an odd 2π-periodic extension of f(x). Find out how the extension is defined for π/2 < x < π.

5.1.5 Exercises

Exercise 5.1.4: Find eigenvalues and eigenfunctions of

y'' + λy = 0,   y(0) − y'(0) = 0,   y(1) = 0.

Exercise 5.1.5: Expand the function f(x) = x on 0 ≤ x ≤ 1 using the eigenfunctions of the system

y'' + λy = 0,   y'(0) = 0,   y(1) = 0.

Exercise 5.1.6: Suppose that you had a Sturm-Liouville problem on the interval [0, 1] and came up with yn(x) = sin(γnx), where γ > 0 is some constant. Decompose f(x) = x, 0 < x < 1, in terms of these eigenfunctions.

Exercise 5.1.7: Find eigenvalues and eigenfunctions of

y'''' + λy = 0,   y(0) = 0,   y'(0) = 0,   y(1) = 0,   y'(1) = 0.

This problem is not a Sturm-Liouville problem, but the idea is the same.

Exercise 5.1.8 (more challenging): Find eigenvalues and eigenfunctions for

d/dx (e^x y') + λ e^x y = 0,   y(0) = 0,   y(1) = 0.

Hint: First write the system as a constant coefficient system to find general solutions. Do note that Theorem 5.1.1 guarantees λ ≥ 0.

5.2 Application of eigenfunction series

Note: 1 lecture, §10.2 in EP

The eigenfunction series can arise even from higher order equations. Suppose we have an elastic beam (say made of steel). We will study the transversal vibrations of the beam. That is, assume the beam lies along the x axis and let y(x, t) measure the displacement of the point x on the beam at time t. See Figure 5.2.

[Figure 5.2: Transversal vibrations of a beam.]

The equation that governs this setup is

a⁴ ∂⁴y/∂x⁴ + ∂²y/∂t² = 0,

for some constant a (a⁴ = EI/ρ in EP).

Suppose the beam is of length 1, simply supported (hinged) at the ends. Suppose the beam is displaced by some function f(x) at time t = 0 and then let go (initial velocity is 0). Then y satisfies:

a⁴ yxxxx + ytt = 0   (0 < x < 1, t > 0),
y(0, t) = yxx(0, t) = 0,
y(1, t) = yxx(1, t) = 0,
y(x, 0) = f(x),   yt(x, 0) = 0.   (5.5)

Again we try y(x, t) = X(x)T(t) and plug in to get a⁴X''''T + XT'' = 0 or, as usual,

X''''/X = −T''/(a⁴T) = λ.

We note that we want T'' + λa⁴T = 0. Let us assume that λ > 0. We can argue that we expect vibration and not exponential growth nor decay in the t direction (there is no friction in our model, for instance). Similarly λ = 0 will not occur.

. 1.5) is ∞ ∞ y(x. These frequencies are all integer multiples of the fundamental frequency π2 a2 . . So our solutions are T n (t) = cos n2 π2 a2 t. 5. The sound of a xylophone or vibraphone is. 0) = n=1 bn Xn (x)T n (0) = n=1 bn Xn (x) = n=1 bn sin nπx = f (x). so we will get a nice musical note. and 0 = X (1) = Aω2 (eω − e−ω ) − Cω2 sin ω. So the eigenvalues are λn = n4 π4 and the eigenfunctions are sin nπx. The general solution is X(x) = Aeωx + Be−ωx + C sin ωx + D cos ωx.2. For a steel beam we will get only the square multiples 1.220 CHAPTER 5.1: Try to justify λ > 0 just from the equations. . Also ω must be an integer multiple of π hence ω = nπ and n ≥ 1 (as ω > 0). 3. so that we do not need to write the fourth root all the time.5). The point is that Xn T n is a solution that satisfies all the homogeneous conditions (that is. Since the eigenfunctions are just sines again. Write ω4 = λ. all conditions except the initial position). . This means that C sin ω = 0 and A(eω − e−ω ) = A2 sinh ω = 0. We can take C = 1. 16. . That is why when you hit a steel beam you hear a very pure sound. t) solves (5. The general solution is T (t) = A sin n2 π2 a2 t + B cos n2 π2 a2 t. 9. EIGENVALUE PROBLEMS Exercise 5. very different from a guitar or piano. or B = −A. If ω > 0. 4. Now T + n4 π4 a4 T = 0. So y(x. But T (0) = 0 and hence we must have A = 0 and we can take B = 1 to make T (0) = 1 for convenience. we can decompose the function f (x) on 0 < x < 1 using the sine series as Now we note that on 0 < x < 1 we have (you know how to do this by now) ∞ f (x) = n=1 bn sin nπx. Hence. And since and T n (0) = 1. So we have X(x) = Aeωx − Ae−ωx + C sin ωx. 4. . then sinh ω 0 and so A = 0. Now 0 = X(1) = A(eω − e−ω ) + C sin ω. The timbre of a beam is different than for a vibrating string where we will get “more” of the smaller frequencies since we will get all integer multiples. therefore. we have ∞ ∞ ∞ y(x. D = 0 and A + B = 0. Now 0 = X(0) = A + B + D. t) = n=1 bn Xn (x)T n (t) = n=1 bn (sin nπx) cos n2 π2 a2 t . 0 = X (0) = ω2 (A + B − D). Then the solution to (5. This means that C 0 else we do not have an eigenvalue. 25. The exact frequencies and their amplitude are what we call the timbre of the note. Note that the natural (circular) frequency of the system is n2 π2 a2 . 2. For X we get the equation X (4) − ω4 X = 0. .

Exercise 5.3: Suppose you have a beam of length 5 with one end free and one end fixed (the fixed end is at x = 5). t) = y xx (0. t) = 0.2.4: Suppose the beam is L units long. do not solve.5. Let u be the longitudinal deviation of the beam at position x on the beam (0 < x < 5).2. t) = 0. t) = n=1 n odd 4 (sin nπx) cos n2 π2 a2 t . the solution to (5.1: Let us assume that f (x) = by now) f (x) = n=1 n odd x(x−1) . That is. t > 0). .2. Suppose you know that the initial shape of the beam is the graph of x(5 − x). 5π3 n3 5. Find a series solution. y(0. everything else kept the same as in (5. You know that the constants are such that this satisfies the equation utt = 4u xx . Hint: Use the same idea as we did for the wave equation. y(1. y(x. Let y be the transverse deviation of the beam at position x on the beam (0 < x < 5). t) = y xx (1. 0) = g(x).2.5). 10 ∞ 221 On 0 < x < 1 we have (you know how to do this 4 sin nπx. Set up the equation together with the boundary and initial conditions.2. and the initial velocity is uniformly equal to 2 (same for each x) in the positive y direction. do not solve. and the initial velocity is −(x−5) 50 100 in the positive u direction. You know that the constants are such that this satisfies the equation ytt + 4y xxxx = 0.1 Exercises Exercise 5.5: Suppose you have a4 y xxxx + ytt = 0 (0 < x < 1. 5π3 n3 Hence. Exercise 5. What is the equation and the series solution. Exercise 5. 0) = f (x). Suppose you know that the initial displacement of the beam is x−5 .5) with the given initial position f (x) is ∞ y(x. Just set up.2. you have also an initial velocity.2. Just set up. yt (x.2: Suppose you have a beam of length 5 with free ends. APPLICATION OF EIGENFUNCTION SERIES Example 5. Set up the equation together with the boundary and initial conditions.

5.3 Steady periodic solutions

Note: 1–2 lectures, §10.3 in EP

5.3.1 Forced vibrating string

Suppose that we have a guitar string of length L. We have studied the wave equation problem in this case, where x was the position on the string, t was time, and y was the displacement of the string. See Figure 5.3.

[Figure 5.3: Vibrating string.]

The problem is governed by the equations

ytt = a² yxx,   y(0, t) = 0,   y(L, t) = 0,   y(x, 0) = f(x),   yt(x, 0) = g(x).   (5.6)

We saw previously that the solution is of the form

y = Σ_{n=1}^∞ ( An cos(nπa t/L) + Bn sin(nπa t/L) ) sin(nπx/L),

where An and Bn were determined by the initial conditions. The natural frequencies of the system are the (circular) frequencies nπa/L for integers n ≥ 1.

But these are free vibrations. What if there is an external force acting on the string? Let us assume, say, air vibrations (noise), for example from a second string. Or perhaps a jet engine. For simplicity, assume a nice pure sound and assume the force is uniform at every position on the string. Let us say F(t) = F0 cos ωt as force per unit mass. Then our wave equation becomes (remember that force is mass times acceleration, so a force per unit mass contributes directly to the acceleration)

ytt = a² yxx + F0 cos ωt,   (5.7)

with the same boundary conditions of course.

We define the functions f and g as ∂y p (x.1: Check that y = yc + y p solves (5. 2 ω a a ω is not zero we can solve for B to get 0 = X(L) = B= −F0 cos ωL − 1 a ω2 sin ωL a .8).   ωL a a sin a The particular solution y p we are looking for is   cos ωL − 1  ω ω F0    a   y p (x.3. First we find a particular solution y p of (5. t) = 0. t) = 2 cos x − sin x − 1 cos ω t. 223 (5. (5. 0) = 0. the string is initially at rest. If we add the two solutions. We plug in to get −ω2 X cos ω t = a2 X cos ω t + F0 cos ω t or −ω2 X = a2 X + F0 after cancelling the cosine. we find that y = yc + y p solves (5.   cos ωL − 1   ω ω   a cos x −   sin x − 1 . y(x. STEADY PERIODIC SOLUTIONS We will want to find the solution here that satisfies the above equation and y(0.5.7) and the side conditions (5. We look at the equation and we make an educated guess y p (x. t) = 0. t) = 0.3. 0) = 0. yt (x. 0). so 0 = X(0) = A − or A = F0 ω2 F0 ω2 and Assuming that sin ωL a F0 ωL ωL F0 cos + B sin − 2. g(x) = − Exercise 5.6). 0).9) Therefore.7) with the initial conditions.7) that satisfies y(0. So the big issue here is to find the particular solution y p . We know how to find a general solution to this equation (it is an nonhomogeneous constant coefficient equation) and we get that the general solution is ω F0 ω X(x) = A cos x + B sin x − 2 . f (x) = −y p (x. t) = y(L. a a ω The endpoint conditions imply that X(0) = X(L) = 0. y(L. t) = X(x) cos ω t. ∂t We then find solution yc of (5.8) That is.   ωL ω a a sin a F0 X(x) = 2 ω .

Exercise 5.3.2: Check that yp works.

Now we get to the point that we skipped. Suppose that sin(ωL/a) = 0. What this means is that ω is equal to one of the natural frequencies of the system, i.e., a multiple of πa/L. In that case the coefficient B in (5.9) seems to become very large. But let us not jump to conclusions just yet. When ω = nπa/L for n even, then cos(ωL/a) = 1 and hence we really get that B = 0. So resonance occurs only when both cos(ωL/a) = −1 and sin(ωL/a) = 0. That is when ω = nπa/L for odd n.

We could again solve for the resonance solution if we wanted to, but it is, in the right sense, the limit of the solutions as ω gets close to a resonance frequency. In real life, pure resonance never occurs anyway.

The above calculation explains why a string will begin to vibrate if the identical string is plucked close by. In the absence of friction this vibration would get louder and louder as time goes on. On the other hand, you are unlikely to get large vibration if the forcing frequency is not close to a resonance frequency even if you have a jet engine running close to the string. That is, the amplitude will not keep increasing unless you tune to just the right frequency.

Similar resonance phenomena occur when you break a wine glass using human voice (yes, this is possible, but not easy [Mythbusters, episode 31, Discovery Channel, originally aired May 18th 2005]) if you happen to hit just the right frequency. Remember that a glass has a much purer sound, i.e., it is more like a vibraphone, so there are far fewer resonance frequencies to hit.

When the forcing function is more complicated, you decompose it in terms of the Fourier series and apply the above result. You may also need to solve the above problem if the forcing function is a sine rather than a cosine, but if you think about it, the solution is almost the same.

Example 5.3.1: Let us do the computation for specific values. Suppose F0 = 1 and ω = 1 and L = 1 and a = 1. Then

yp(x, t) = ( cos x − ((cos 1 − 1)/sin 1) sin x − 1 ) cos t.

Call B = (cos 1 − 1)/sin 1 for simplicity. Then plug in t = 0 to get

f(x) = −yp(x, 0) = −cos x + B sin x + 1,

and after differentiating in t we see that g(x) = −(∂yp/∂t)(x, 0) = 0. Hence to find yc we need to solve the problem

ytt = yxx,   y(0, t) = 0,   y(1, t) = 0,   y(x, 0) = −cos x + B sin x + 1,   yt(x, 0) = 0.

Note that the formula that we use to define y(x, 0) is not odd, hence it is not a simple matter of plugging in to apply the d'Alembert formula directly! You must define F to be the odd, 2-periodic extension of y(x, 0). Then our solution would look like

y(x, t) = (F(x + t) + F(x − t))/2 + ( cos x − ((cos 1 − 1)/sin 1) sin x − 1 ) cos t.   (5.10)

It is not hard to compute specific values for an odd extension of a function, and hence (5.10) is a wonderful solution to the problem. For example, it is very easy to have a computer do it, unlike series solutions. A plot is given in Figure 5.4.

[Figure 5.4: Plot of y(x, t) = (F(x + t) + F(x − t))/2 + (cos x − ((cos 1 − 1)/sin 1) sin x − 1) cos t.]

5.3.2 Underground temperature oscillations

Let u(x, t) be the temperature at a certain location at depth x underground at time t. See Figure 5.5 on the following page.
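Evaluating (5.10) on a computer is indeed straightforward. A minimal sketch (the extension helper F and the other names are ours):

    import numpy as np

    B = (np.cos(1) - 1) / np.sin(1)

    def y0(x):
        # y(x, 0) = -cos x + B sin x + 1 on [0, 1]
        return -np.cos(x) + B * np.sin(x) + 1

    def F(x):
        # odd 2-periodic extension of y(x, 0)
        x = ((x + 1) % 2) - 1
        return y0(x) if x >= 0 else -y0(-x)

    def y(x, t):
        return (F(x + t) + F(x - t)) / 2 \
               + (np.cos(x) - B * np.sin(x) - 1) * np.cos(t)

    print(y(0.5, 0.0))  # zero: the string starts at rest position
    print(y(0.5, 2.0))  # a sample value of the solution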

[Figure 5.5: Underground temperature.]

The temperature u satisfies the heat equation ut = k uxx, where k is the diffusivity of the soil. We know the temperature at the surface u(0, t) from weather records. Let us assume for simplicity that

u(0, t) = T0 + A0 cos ωt,

for some base temperature T0. A0 is picked properly to make this the typical variation for the year. That is, the hottest temperature is T0 + A0 and the coldest is T0 − A0. ω is picked depending on the units of t, such that when t = 1 year, then ωt = 2π. With this choice, t = 0 is midsummer (we could put a negative sign above to make it midwinter). For simplicity, we will assume that T0 = 0.

It seems reasonable that the temperature at depth x will also oscillate with the same frequency. And this in fact will be the steady periodic solution, independent of the initial conditions. So we are looking for a solution of the form

u(x, t) = V(x) cos ωt + W(x) sin ωt,

for the problem

ut = k uxx,   u(0, t) = A0 cos ωt.   (5.11)

We will employ the complex exponential here to make calculations simpler. Suppose we have a complex valued function h(x, t) such that

ht = k hxx,   h(0, t) = A0 e^{iωt}.   (5.12)

We will look for an h such that Re h = u.

Exercise 5.3.3: Suppose h satisfies (5.12). Use Euler's formula for the complex exponential to check that u = Re h satisfies (5.11).

To find an h whose real part satisfies (5.11), we substitute h(x, t) = X(x) e^{iωt} into (5.12):

iωX e^{iωt} = k X'' e^{iωt}.

Hence,

k X'' − iωX = 0,

or X'' − α²X = 0, where α = ±√(iω/k). Note that ±√i = ±(1 + i)/√2, so you can simplify to α = ±(1 + i)√(ω/2k). Hence the general solution is

X(x) = A e^{−(1+i)√(ω/2k) x} + B e^{(1+i)√(ω/2k) x}.

We assume that an X(x) that solves the problem must be bounded as x → ∞ since u(x, t) should be bounded (we are not worrying about the earth core!). If you use Euler's formula to expand the complex exponentials, you will note that the second term will be unbounded (if B ≠ 0), while the first term is always bounded. Hence B = 0.

Exercise 5.3.4: Use Euler's formula to show that e^{(1+i)√(ω/2k) x} is unbounded as x → ∞, while e^{−(1+i)√(ω/2k) x} is bounded as x → ∞.

Furthermore, X(0) = A0 since h(0, t) = A0 e^{iωt}. Thus A = A0. This means that

h(x, t) = A0 e^{−(1+i)√(ω/2k) x} e^{iωt} = A0 e^{−(1+i)√(ω/2k) x + iωt} = A0 e^{−√(ω/2k) x} e^{i(ωt − √(ω/2k) x)}.

We will need to get the real part of h, so we apply Euler's formula to get

h(x, t) = A0 e^{−√(ω/2k) x} ( cos(ωt − √(ω/2k) x) + i sin(ωt − √(ω/2k) x) ).

Then finally

u(x, t) = Re h(x, t) = A0 e^{−√(ω/2k) x} cos(ωt − √(ω/2k) x).

Yay!

Notice the phase is different at different depths. At depth x the phase is delayed by x√(ω/2k). The amplitude of the temperature swings is A0 e^{−√(ω/2k) x}, which decays very quickly as x grows. If we compute where the phase shift x√(ω/2k) = π, we find the depth at which the seasons are reversed. That is, we get the depth at which summer is the coldest and winter is the warmest.

Let us take typical parameters. In cgs units (centimeters, grams, seconds) we have k = 0.005 (a typical value for soil) and ω = 2π/(seconds in a year) = 2π/31,557,341 ≈ 1.99 × 10⁻⁷. We get approximately 700 centimeters, which is approximately 23 feet below ground.

We will also assume that our surface temperature swing is ±15° Celsius, that is, A0 = 15. Then the maximum temperature variation at 700 centimeters is only ±0.66° Celsius.

A home could be heated or cooled by taking advantage of the above fact. Even without the earth core you could heat a home in the winter and cool it in the summer. But be careful: there is also the earth core, so temperature presumably gets higher the deeper you dig; we did not take that into account above. The temperature differential could also be used for energy. The temperature swings decay rapidly as you dig deeper, so you need not dig very deep to get an effective "refrigerator," i.e., a place of consistent temperature. That is why wines are kept in a cellar.
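The numbers quoted above are easy to reproduce (cgs units; a sketch with our own variable names):

    import numpy as np

    k = 0.005                           # soil diffusivity, cm^2 / s
    omega = 2 * np.pi / 31_557_341      # one year in seconds
    alpha = np.sqrt(omega / (2 * k))

    depth = np.pi / alpha               # where the phase shift equals pi
    print(depth)                        # about 700 cm
    print(15 * np.exp(-alpha * depth))  # swing there: about 0.66 degrees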

5.3.3 Exercises

Exercise 5.3.5: Suppose that the forcing function for the vibrating string is F0 sin ωt. Derive the particular solution yp.

Exercise 5.3.6: Take the forced vibrating string. Suppose that L = 1, a = 1. Suppose that the forcing function is the square wave that is 1 on the interval 0 < t < 1 and −1 on the interval −1 < t < 0, extended periodically. Find the particular solution. Hint: you may want to use the result of Exercise 5.3.5.

Exercise 5.3.7: The units are cgs (centimeters, grams, seconds). For k = 0.005, ω = 1.991 × 10⁻⁷, and A0 = 20, find the depth at which the temperature variation is half (±10 degrees) of what it is on the surface.

Exercise 5.3.8: Derive the solution for underground temperature oscillation without assuming that T0 = 0.

Chapter 6

The Laplace transform

6.1 The Laplace transform

Note: 2 lectures, §10.1 in EP

6.1.1 The transform

In this chapter we will discuss the Laplace transform. [Just like the Laplace equation and the Laplacian, the Laplace transform is also named after Pierre-Simon, marquis de Laplace (1749 – 1827).] The Laplace transform turns out to be a very efficient method to solve certain ODE problems. In particular, the transform can take a differential equation and turn it into an algebraic equation. If the algebraic equation can be solved, applying the inverse transform gives us our desired solution. The Laplace transform is also useful in the analysis of certain systems such as electrical circuits, NMR spectroscopy, signal processing, and others. Finally, understanding the Laplace transform will also help with understanding the related Fourier transform, which, however, requires more understanding of complex numbers. We will not cover the Fourier transform.

The Laplace transform also gives a lot of insight into the nature of the equations we are dealing with. It can be seen as converting between the time and the frequency domain. For example, take the standard equation

m x''(t) + c x'(t) + k x(t) = f(t).

We can think of t as time and f(t) as an incoming signal. The Laplace transform will convert the equation from a differential equation in time to an algebraic (no derivatives) equation, where the new independent variable s is the frequency.

We can think of the Laplace transform as a black box. It eats functions and spits out functions in a new variable. It is common to write lower case letters for functions in the time domain and upper case letters for functions in the frequency domain. We write L{f(t)} = F(s). We will use the

same letter to denote that one function is the Laplace transform of the other; for example, F(s) is the Laplace transform of f(t). Let us define the transform:

L{f(t)} = F(s) := ∫_0^∞ e^{−st} f(t) dt.

We note that we are only considering t ≥ 0 in the transform. Of course, if we think of t as time there is no problem; we are generally interested in finding out what will happen in the future (the Laplace transform is one place where it is safe to ignore the past).

Let us compute the simplest transforms.

Example 6.1.1: Suppose f(t) = 1, then

L{1} = ∫_0^∞ e^{−st} dt = [ e^{−st}/(−s) ]_{t=0}^∞ = 1/s.

Of course, the limit only exists if s > 0. So L{1} is only defined for s > 0.

Example 6.1.2: Suppose f(t) = e^{−at}, then

L{e^{−at}} = ∫_0^∞ e^{−st} e^{−at} dt = ∫_0^∞ e^{−(s+a)t} dt = [ e^{−(s+a)t}/(−(s+a)) ]_{t=0}^∞ = 1/(s + a).

Of course, the limit only exists if s + a > 0. So L{e^{−at}} is only defined for s + a > 0.

Example 6.1.3: Suppose f(t) = t, then using integration by parts

L{t} = ∫_0^∞ e^{−st} t dt = [ −t e^{−st}/s ]_{t=0}^∞ + (1/s) ∫_0^∞ e^{−st} dt = 0 + (1/s) [ e^{−st}/(−s) ]_{t=0}^∞ = 1/s².

Again, the limit only exists if s > 0.

Example 6.1.4: A common function is the unit step function, which is sometimes called the Heaviside function. [The function is named after Oliver Heaviside (1850 – 1925). Only by coincidence is the function "heavy" on "one side."] This function is generally given as

u(t) = 0 if t < 0,   1 if t ≥ 0.

Example 6.1.4: A common function is the unit step function, which is sometimes called the Heaviside function†. This function is generally given as

u(t) = 0 if t < 0,  and  u(t) = 1 if t ≥ 0.

Let us find the Laplace transform of u(t − a), where a ≥ 0 is some constant. That is, the function which is 0 for t < a and 1 for t ≥ a.

L{u(t − a)} = ∫_0^∞ e^{−st} u(t − a) dt = ∫_a^∞ e^{−st} dt = [e^{−st}/(−s)]_{t=a}^∞ = e^{−as}/s,

where of course s > 0 (and a ≥ 0 as we said before).

By applying similar procedures we can compute the transforms of many elementary functions. Many basic transforms are listed in Table 6.1.

Table 6.1: Some Laplace transforms (C, ω, and a are constants).

    f(t)        L{f(t)} = F(s)
    C           C/s
    t           1/s^2
    t^2         2/s^3
    t^3         6/s^4
    t^n         n!/s^{n+1}
    e^{−at}     1/(s + a)
    sin ωt      ω/(s^2 + ω^2)
    cos ωt      s/(s^2 + ω^2)
    sinh ωt     ω/(s^2 − ω^2)
    cosh ωt     s/(s^2 − ω^2)
    u(t − a)    e^{−as}/s

Exercise 6.1.1: Verify Table 6.1.

† The function is named after Oliver Heaviside (1850–1925). Only by coincidence is the function "heavy" on "one side."
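A machine-assisted start on Exercise 6.1.1 (our aside, again assuming sympy): the same function can verify several rows of Table 6.1 at once.

    from sympy import symbols, laplace_transform, sin, cosh, Heaviside, exp

    t, s, w, a = symbols('t s omega a', positive=True)

    for f in (t**3, exp(-a*t), sin(w*t), cosh(w*t), Heaviside(t - a)):
        print(f, '->', laplace_transform(f, t, s, noconds=True))
    # expected: 6/s**4, 1/(a + s), omega/(omega**2 + s**2),
    #           s/(s**2 - omega**2), exp(-a*s)/s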

Since the transform is defined by an integral, we can use the linearity properties of the integral. For example, suppose C is a constant. Then

L{C f(t)} = ∫_0^∞ e^{−st} C f(t) dt = C ∫_0^∞ e^{−st} f(t) dt = C L{f(t)}.

So we can "pull" a constant out of the transform. Similarly we have linearity. Since linearity is very important we state it as a theorem.

Theorem 6.1.1 (Linearity of Laplace transform). Suppose that A, B, and C are constants. Then

L{A f(t) + B g(t)} = A L{f(t)} + B L{g(t)},

and in particular L{C f(t)} = C L{f(t)}.

Exercise 6.1.2: Verify the theorem. That is, show that L{A f(t) + B g(t)} = A L{f(t)} + B L{g(t)}.

These rules together with Table 6.1 make it easy to find the Laplace transform of a whole lot of functions already. It is a common mistake to think that the Laplace transform of a product is the product of the transforms. In general

L{f(t) g(t)} ≠ L{f(t)} L{g(t)}.

It must also be noted that not all functions have a Laplace transform. For example, the function 1/t does not have a Laplace transform as the integral diverges. Similarly tan t or e^{t^2} do not have Laplace transforms.

6.1.2 Existence and uniqueness

Let us consider in more detail when the Laplace transform exists. First let us consider functions of exponential order. f(t) is of exponential order as t goes to infinity if

|f(t)| ≤ M e^{ct}

for some constants M and c, for sufficiently large t (say for all t > t0 for some t0). The simplest way to check this condition is to try and compute

lim_{t→∞} f(t)/e^{ct}.

If the limit exists and is finite (usually zero), then f(t) is of exponential order.

Exercise 6.1.3: Use L'Hopital's rule from calculus to show that a polynomial is of exponential order. Hint: Note that a sum of two exponential order functions is also of exponential order. Then show that t^n is of exponential order for any n.
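To illustrate the limit test, here is a quick symbolic check (our aside, assuming sympy) of the first step of Exercise 6.1.3 for the power t^5:

    from sympy import symbols, limit, exp, oo

    t = symbols('t', positive=True)
    # t^5 / e^{ct} with c = 1; the limit is 0, so t^5 is of exponential order
    print(limit(t**5 / exp(t), t, oo))  # 0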

For an exponential order function we have existence and uniqueness of the Laplace transform.

Theorem 6.1.2 (Existence). Let f(t) be continuous and of exponential order for a certain constant c. Then F(s) = L{f(t)} is defined for all s > c.

You may have existence of the transform for other functions that are not of exponential order, but that will not be relevant to us. Before dealing with uniqueness, let us also note that for exponential order functions the Laplace transform decays at infinity:

lim_{s→∞} F(s) = 0.

Theorem 6.1.3 (Uniqueness). Let f(t) and g(t) be continuous and of exponential order. Suppose that there exists a constant C, such that F(s) = G(s) for all s > C. Then f(t) = g(t) for all t ≥ 0.

Both theorems hold for piecewise continuous functions as well. Recall that piecewise continuous means that the function is continuous except perhaps at a discrete set of points where it has jump discontinuities like the Heaviside function. Uniqueness, however, does not "see" values at the discontinuities. So we can only conclude that f(t) = g(t) outside of discontinuities. For example, the unit step function is sometimes defined using u(0) = 1/2. This new step function, however, has the exact same Laplace transform as the one we defined earlier, where u(0) = 1.

6.1.3 The inverse transform

As we said, the Laplace transform will allow us to convert a differential equation into an algebraic equation which we can solve. Once we do solve the algebraic equation in the frequency domain, we will want to get back to the time domain, as that is what we are really interested in. If we have a function F(s), to be able to find f(t) such that L{f(t)} = F(s), we need to first know if such a function is unique. It turns out we are in luck by Theorem 6.1.3. So we can without fear make the following definition. If F(s) = L{f(t)} for some function f(t), we define the inverse Laplace transform as

L^{−1}{F(s)} =def f(t).

There is an integral formula for the inverse, but it is not as simple as the transform itself (it requires complex numbers). The best way to compute the inverse is to use Table 6.1 on page 231.

Example 6.1.5: Take F(s) = 1/(s + 1). Find the inverse Laplace transform. We look at the table and we find

L^{−1}{1/(s + 1)} = e^{−t}.

We note that because the Laplace transform is linear, the inverse Laplace transform is also linear. That is,

L^{−1}{A F(s) + B G(s)} = A L^{−1}{F(s)} + B L^{−1}{G(s)}.

We can of course also just pull out constants. Let us demonstrate how linearity is used by the following example.

Example 6.1.6: Take F(s) = (s^2 + s + 1)/(s^3 + s). Find the inverse Laplace transform. First we use the method of partial fractions to write F in a form where we can use Table 6.1 on page 231. We factor the denominator as s(s^2 + 1) and write

(s^2 + s + 1)/(s^3 + s) = A/s + (Bs + C)/(s^2 + 1).

Hence A(s^2 + 1) + s(Bs + C) = s^2 + s + 1. Therefore, A + B = 1, C = 1, A = 1, and so B = 0. That is,

F(s) = (s^2 + s + 1)/(s^3 + s) = 1/s + 1/(s^2 + 1).

By linearity of the Laplace transform (and thus of its inverse) we get that

L^{−1}{(s^2 + s + 1)/(s^3 + s)} = L^{−1}{1/s} + L^{−1}{1/(s^2 + 1)} = 1 + sin t.

A useful property is the so-called shifting property or the first shifting property:

L{e^{−at} f(t)} = F(s + a),

where F(s) is the Laplace transform of f(t).

Exercise 6.1.4: Derive this property from the definition.

The shifting property can be used when the denominator is a more complicated quadratic that may come up in the method of partial fractions. You always want to write such quadratics as (s + a)^2 + b by completing the square and then using the shifting property.

Example 6.1.7: Find L^{−1}{1/(s^2 + 4s + 8)}. First we complete the square to make the denominator (s + 2)^2 + 4. Next we find

L^{−1}{1/(s^2 + 4)} = (1/2) sin 2t.

Putting it all together with the shifting property, we find

L^{−1}{1/(s^2 + 4s + 8)} = L^{−1}{1/((s + 2)^2 + 4)} = (1/2) e^{−2t} sin 2t.

In general, we will want to be able to apply the Laplace transform to rational functions, that is, functions of the form F(s)/G(s) where F(s) and G(s) are polynomials. Since normally (for functions that we are considering) the Laplace transform goes to zero as s → ∞, it is not hard to see that the degree of F(s) will always be smaller than that of G(s). Such rational functions are called proper rational functions and we will always be able to apply the method of partial fractions. Of course this means we will need to be able to factor the denominator into linear and quadratic terms, which involves finding the roots of the denominator.
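Examples 6.1.6 and 6.1.7 can both be confirmed by machine; a minimal sketch (our addition, assuming sympy):

    from sympy import symbols, apart, inverse_laplace_transform

    s, t = symbols('s t', positive=True)

    F = (s**2 + s + 1) / (s**3 + s)
    print(apart(F, s))                         # 1/s + 1/(s**2 + 1)
    print(inverse_laplace_transform(F, s, t))  # 1 + sin(t), possibly times Heaviside(t)

    G = 1 / (s**2 + 4*s + 8)
    print(inverse_laplace_transform(G, s, t))  # exp(-2*t)*sin(2*t)/2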

6.1.4 Exercises

Exercise 6.1.5: Find the Laplace transform of 3 + t^5 + sin πt.

Exercise 6.1.6: Find the Laplace transform of a + bt + ct^2 for some constants a, b, and c.

Exercise 6.1.7: Find the Laplace transform of A cos ωt + B sin ωt.

Exercise 6.1.8: Find the Laplace transform of cos^2 ωt.

Exercise 6.1.9: Find the inverse Laplace transform of 4/(s^2 − 9).

Exercise 6.1.10: Find the inverse Laplace transform of 2s/(s^2 − 1).

Exercise 6.1.11: Find the inverse Laplace transform of 1/((s − 1)^2 (s + 1)).

6.2 Transforms of derivatives and ODEs

Note: 2 lectures, §7.2–7.3 in EP

6.2.1 Transforms of derivatives

Let us see how the Laplace transform is used for differential equations. First let us try to find the Laplace transform of a function that is a derivative. That is, suppose g(t) is a continuous differentiable function of exponential order. Then

L{g′(t)} = ∫_0^∞ e^{−st} g′(t) dt = [e^{−st} g(t)]_{t=0}^∞ − ∫_0^∞ (−s) e^{−st} g(t) dt = −g(0) + s L{g(t)}.

We can keep doing this procedure for higher derivatives. The results are listed in Table 6.2. The procedure also works for piecewise smooth functions, that is, functions which are piecewise continuous with a piecewise continuous derivative. The fact that the function is of exponential order is used to show that the limits appearing above exist. We will not worry much about this fact.

Table 6.2: Laplace transforms of derivatives (G(s) = L{g(t)} as usual).

    f(t)        L{f(t)} = F(s)
    g′(t)       s G(s) − g(0)
    g″(t)       s^2 G(s) − s g(0) − g′(0)
    g‴(t)       s^3 G(s) − s^2 g(0) − s g′(0) − g″(0)

Exercise 6.2.1: Verify Table 6.2.

6.2.2 Solving ODEs with the Laplace transform

If you notice, the Laplace transform turns differentiation essentially into multiplication by s. Let us see how to apply this to differential equations.

Example 6.2.1: Take the equation

x″(t) + x(t) = cos 2t,  x(0) = 0,  x′(0) = 1.

We will take the Laplace transform of both sides. By X(s) we will, as usual, denote the Laplace transform of x(t).

L{x″(t) + x(t)} = L{cos 2t},

s^2 X(s) − s x(0) − x′(0) + X(s) = s/(s^2 + 4).

We can plug in the initial conditions now (this will make computations more streamlined) to obtain

s^2 X(s) − 1 + X(s) = s/(s^2 + 4).

We now solve for X(s),

X(s) = s/((s^2 + 1)(s^2 + 4)) + 1/(s^2 + 1).

We use partial fractions (exercise) to write

X(s) = (1/3) s/(s^2 + 1) − (1/3) s/(s^2 + 4) + 1/(s^2 + 1).

Now take the inverse Laplace transform to obtain

x(t) = (1/3) cos t − (1/3) cos 2t + sin t.

The procedure is as follows. You take an ordinary differential equation in the time variable t. You apply the Laplace transform to transform the equation into an algebraic (non differential) equation in the frequency domain. All the x(t), x′(t), x″(t), and so on, will be converted to X(s), s X(s) − x(0), s^2 X(s) − s x(0) − x′(0), and so on. If the differential equation we started with was a constant coefficient linear equation, it is generally pretty easy to solve for X(s), and we will obtain some expression for X(s). Then taking the inverse transform, if possible, we find x(t).

It should be noted that since not every function has a Laplace transform, not every equation can be solved in this manner.
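The frequency domain algebra of Example 6.2.1 can be reproduced in a few lines; the sketch below (ours, assuming sympy) solves for X(s) and inverts it, mirroring the procedure just described.

    from sympy import symbols, Eq, solve, inverse_laplace_transform

    s, t = symbols('s t', positive=True)
    X = symbols('X')  # stands for X(s)

    # transformed equation with x(0) = 0 and x'(0) = 1 already plugged in
    Xs = solve(Eq(s**2*X - 1 + X, s/(s**2 + 4)), X)[0]
    print(inverse_laplace_transform(Xs, s, t))
    # expected: sin(t) + cos(t)/3 - cos(2*t)/3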

6.2.3 Using the Heaviside function

Before we move on to more general functions than those we could solve before, we want to consider the Heaviside function,

u(t) = 0 if t < 0,  and  u(t) = 1 if t ≥ 0.

See Figure 6.1 for the graph.

[Figure 6.1: Plot of the Heaviside (unit step) function u(t).]

This function is useful for putting together functions, or cutting functions off. Most commonly it is used as u(t − a) for some constant a. This just shifts the graph to the right by a. That is, it is a function which is zero when t < a and 1 when t ≥ a. Suppose for example that f(t) is a "signal" and you started receiving the signal sin t at time t = π. The function f(t) should then be defined as

f(t) = 0 if t < π,  and  f(t) = sin t if t ≥ π.

Using the Heaviside function, f(t) can be written as

f(t) = u(t − π) sin t.

The Heaviside function is useful to define functions defined piecewise. If you want the function t on when t is in [0, 1] and the function −t + 2 when t is in [1, 2] and zero otherwise, you can use the expression

t (u(t) − u(t − 1)) + (−t + 2) (u(t − 1) − u(t − 2)).

Hence it is useful to know how the Heaviside function interacts with the Laplace transform. We have already seen that

L{u(t − a)} = e^{−as}/s.

This can be generalized into a shifting property or second shifting property,

L{f(t − a) u(t − a)} = e^{−as} L{f(t)}.   (6.1)

Example 6.2.2: Suppose that the forcing function is not periodic. For example, suppose that we had a mass spring system

x″(t) + x(t) = f(t),  x(0) = 0,  x′(0) = 0,

where f(t) = 1 if 1 ≤ t < 3 and zero otherwise. We could imagine a mass and spring system where a rocket was fired for 2 seconds starting at t = 1. Or perhaps an RLC circuit, where the voltage was being raised at a constant rate for 2 seconds starting at t = 1 and then held steady again starting at t = 3.

We can write f(t) = u(t − 1) − u(t − 3). We transform the equation and we plug in the initial conditions as before to obtain

s^2 X(s) + X(s) = e^{−s}/s − e^{−3s}/s.

We solve for X(s) to obtain

X(s) = e^{−s}/(s(s^2 + 1)) − e^{−3s}/(s(s^2 + 1)).

We leave it as an exercise to the reader to show that

L^{−1}{1/(s(s^2 + 1))} = 1 − cos t.

In other words, L{1 − cos t} = 1/(s(s^2 + 1)). So using (6.1) we find

L^{−1}{e^{−s}/(s(s^2 + 1))} = L^{−1}{e^{−s} L{1 − cos t}} = (1 − cos(t − 1)) u(t − 1).

Similarly,

L^{−1}{e^{−3s}/(s(s^2 + 1))} = L^{−1}{e^{−3s} L{1 − cos t}} = (1 − cos(t − 3)) u(t − 3).

Hence, the solution is

x(t) = (1 − cos(t − 1)) u(t − 1) − (1 − cos(t − 3)) u(t − 3).

The plot of this solution is given in Figure 6.2.

[Figure 6.2: Plot of x(t).]
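As a numerical sanity check of Example 6.2.2 (our aside, assuming numpy and scipy are available), we can integrate the equation directly and compare with the closed form:

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y):
        f = 1.0 if 1.0 <= t < 3.0 else 0.0   # f(t) = u(t-1) - u(t-3)
        return [y[1], f - y[0]]              # x'' = f - x

    ts = np.linspace(0.0, 10.0, 201)
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=ts, max_step=0.01)

    u = lambda tt, a: np.heaviside(tt - a, 1.0)
    x_exact = (1 - np.cos(ts - 1))*u(ts, 1) - (1 - np.cos(ts - 3))*u(ts, 3)
    print(np.max(np.abs(sol.y[0] - x_exact)))  # should be very small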
6.2.4 Transforms of integrals

A feature of Laplace transforms is that it is also able to easily deal with integral equations. That is, equations in which integrals rather than derivatives of functions appear. The basic property, which can be proven by applying the definition and again doing integration by parts, is the following:

L{∫_0^t f(τ) dτ} = (1/s) F(s).

It is sometimes useful for computing the inverse transform to write this as

∫_0^t f(τ) dτ = L^{−1}{(1/s) F(s)}.

Example 6.2.3: To compute the inverse transform of 1/(s(s^2 + 1)) we could proceed by applying this integration rule. That is,

L^{−1}{(1/s) · 1/(s^2 + 1)} = ∫_0^t sin τ dτ = 1 − cos t.

If an equation contains an integral of the unknown function, the equation is called an integral equation. For example, take the equation

t^2 = ∫_0^t e^τ x(τ) dτ.

If we apply the Laplace transform we obtain (where X(s) = L{x(t)})

2/s^3 = (1/s) L{e^t x(t)} = (1/s) X(s − 1),

or

X(s − 1) = 2/s^2,  or  X(s) = 2/(s + 1)^2.

We use the shifting property to find

x(t) = 2 e^{−t} t.

More complicated integral equations can also be solved using the convolution that we will learn next.

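It is easy to double-check Example 6.2.3 by plugging the answer back into the integral equation; a short symbolic check (ours, assuming sympy):

    from sympy import symbols, integrate, exp, simplify

    t, tau = symbols('t tau', positive=True)
    x = 2*tau*exp(-tau)  # the candidate solution x(t) = 2t e^{-t}, in the variable tau
    print(simplify(integrate(exp(tau)*x, (tau, 0, t))))  # t**2, as required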
6.2.5 Exercises

Exercise 6.2.2: Using the Heaviside function write down the piecewise function that is 0 for t < 0, t^2 for t in [0, 1], and t for t > 1.

Exercise 6.2.3: Using the Laplace transform solve mx″ + cx′ + kx = 0, x(0) = a, x′(0) = b, where m > 0, c > 0, k > 0, and c^2 − 4km > 0 (system is overdamped).

Exercise 6.2.4: Using the Laplace transform solve mx″ + cx′ + kx = 0, x(0) = a, x′(0) = b, where m > 0, c > 0, k > 0, and c^2 − 4km < 0 (system is underdamped).

Exercise 6.2.5: Using the Laplace transform solve mx″ + cx′ + kx = 0, x(0) = a, x′(0) = b, where m > 0, c > 0, k > 0, and c^2 = 4km (system is critically damped).

Exercise 6.2.6: Solve x″ + x = u(t − 1) for initial conditions x(0) = 0 and x′(0) = 0.

Exercise 6.2.7: Show the differentiation of the transform property. That is, suppose L{f(t)} = F(s); then show that L{−t f(t)} = F′(s). Hint: differentiate under the integral sign.

6.3 Convolution

Note: 1 or 1.5 lectures, §7.2 in EP

6.3.1 The convolution

We have said that the Laplace transformation of a product is not the product of the transforms. All hope is not lost however. There exists a very important type of a product which works. Take two functions f(t) and g(t) defined for t ≥ 0. Define the convolution of f(t) and g(t) as

(f ∗ g)(t) =def ∫_0^t f(τ) g(t − τ) dτ.   (6.2)

So the convolution of two functions of t is another function of t.

Example 6.3.1: Take f(t) = e^t and g(t) = t for t ≥ 0. Then

(f ∗ g)(t) = ∫_0^t e^τ (t − τ) dτ = e^t − t − 1,

where we of course did one integration by parts.

Example 6.3.2: Take f(t) = sin(ω0 t) and g(t) = cos(ω0 t) for t ≥ 0. Then

(f ∗ g)(t) = ∫_0^t sin(ω0 τ) cos(ω0 (t − τ)) dτ.

Now we use the identity

cos θ sin ψ = (1/2) (sin(θ + ψ) − sin(θ − ψ)).

Hence,

(f ∗ g)(t) = ∫_0^t (1/2) (sin(ω0 t) − sin(ω0 t − 2 ω0 τ)) dτ
          = [(1/2) τ sin(ω0 t) − (1/(4 ω0)) cos(2 ω0 τ − ω0 t)]_{τ=0}^t
          = (1/2) t sin(ω0 t).

Of course the formula only holds for t ≥ 0. We did assume that f and g are zero (or just not defined) for negative t.

For those that have seen convolution defined before, you may have seen it defined as (f ∗ g)(t) = ∫_{−∞}^∞ f(τ) g(t − τ) dτ. This definition agrees with (6.2) if you define f(t) and g(t) to be zero for t < 0. When discussing the Laplace transform, the definition we gave is sufficient. Convolution does occur in many other applications, however, where you may have to use the more general definition with infinities.
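Convolutions like these are straightforward to reproduce symbolically; a sketch (our addition, assuming sympy) recomputing Example 6.3.1:

    from sympy import symbols, integrate, exp, simplify

    t, tau = symbols('t tau', positive=True)
    conv = integrate(exp(tau)*(t - tau), (tau, 0, t))  # (f*g)(t) with f = e^t, g = t
    print(simplify(conv))  # exp(t) - t - 1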
The convolution has many properties that make it behave like a product. Let c be a constant and f, g, and h be functions. Then

f ∗ g = g ∗ f,

(c f) ∗ g = f ∗ (c g) = c (f ∗ g),

(f ∗ g) ∗ h = f ∗ (g ∗ h).

The most interesting property for us, and the main result of this section, is the following theorem.

Theorem 6.3.1. Let f(t) and g(t) be of exponential type. Then

L{(f ∗ g)(t)} = L{∫_0^t f(τ) g(t − τ) dτ} = L{f(t)} L{g(t)}.

In other words, the Laplace transform of a convolution is the product of the Laplace transforms. The simplest way to use this result is in reverse.

Example 6.3.3: Suppose we have the function of s defined by

1/((s + 1) s^2) = (1/(s + 1)) · (1/s^2).

We recognize the two entries of Table 6.1 on page 231. That is,

L^{−1}{1/(s + 1)} = e^{−t}  and  L^{−1}{1/s^2} = t.

Therefore,

L^{−1}{(1/(s + 1)) (1/s^2)} = ∫_0^t τ e^{−(t − τ)} dτ = e^{−t} + t − 1,

where the calculation of the integral of course involved an integration by parts.

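Both the convolution integral of Example 6.3.3 and the statement of Theorem 6.3.1 can be confirmed in a few symbolic lines (our aside, assuming sympy):

    from sympy import symbols, integrate, exp, laplace_transform, simplify, factor

    t, tau, s = symbols('t tau s', positive=True)
    conv = integrate(tau*exp(-(t - tau)), (tau, 0, t))  # t convolved with e^{-t}
    print(simplify(conv))  # t - 1 + exp(-t)
    # transforming the convolution recovers the product, as the theorem predicts
    print(factor(laplace_transform(conv, t, s, noconds=True)))  # 1/(s**2*(s + 1))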
6.3.2 Solving ODEs

The next example will demonstrate the full power of the convolution and the Laplace transform. We will be able to give a solution to the forced oscillation problem for any forcing function as a definite integral.

Example 6.3.4: Find the solution to

x″ + ω0^2 x = f(t),  x(0) = 0,  x′(0) = 0,

for an arbitrary function f(t).

We first apply the Laplace transform to the equation. Denote the transform of x(t) by X(s) and the transform of f(t) by F(s) as usual:

s^2 X(s) + ω0^2 X(s) = F(s),

or in other words

X(s) = F(s) · 1/(s^2 + ω0^2).

We know

L^{−1}{1/(s^2 + ω0^2)} = (1/ω0) sin(ω0 t).

Therefore,

x(t) = ∫_0^t (1/ω0) sin(ω0 τ) f(t − τ) dτ,

or if we reverse the order,

x(t) = ∫_0^t (1/ω0) sin(ω0 (t − τ)) f(τ) dτ.

Let us notice one more thing with this example. We can now also see how the Laplace transform handles resonance. Suppose that f(t) = cos(ω0 t). Then

x(t) = ∫_0^t (1/ω0) sin(ω0 τ) cos(ω0 (t − τ)) dτ = (1/ω0) ∫_0^t sin(ω0 τ) cos(ω0 (t − τ)) dτ.

We have already computed the convolution of sine and cosine in Example 6.3.2. Hence

x(t) = (1/ω0) ((1/2) t sin(ω0 t)) = (1/(2 ω0)) t sin(ω0 t).

Note the t in front of the sine. The solution will, therefore, grow without bound as t gets large, meaning we get resonance.

Using convolution you can also find a solution as a definite integral for an arbitrary forcing function f(t) for any constant coefficient equation. A definite integral is usually enough for most practical purposes, and it is usually not hard to numerically evaluate a definite integral.
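The definite integral form of the solution is easy to use in practice; the sketch below (ours, assuming numpy and scipy) evaluates it for f(t) = cos(ω0 t) and checks the resonance formula t sin(ω0 t)/(2 ω0):

    import numpy as np
    from scipy.integrate import quad

    w0 = 2.0
    f = lambda tau: np.cos(w0*tau)

    def x(t):
        # x(t) = (1/w0) * integral from 0 to t of sin(w0*(t - tau)) f(tau) dtau
        val, _ = quad(lambda tau: np.sin(w0*(t - tau))*f(tau)/w0, 0.0, t)
        return val

    for t in (1.0, 5.0, 10.0):
        print(x(t), t*np.sin(w0*t)/(2*w0))  # the two columns should agree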
6.3.3 Volterra integral equation

One of the most common integral equations is the Volterra integral equation§:

x(t) = f(t) + ∫_0^t g(t − τ) x(τ) dτ,

where f(t) and g(t) are known functions and x(t) is an unknown. To solve this equation we apply the Laplace transform to get

X(s) = F(s) + G(s) X(s),

where X(s), F(s), and G(s) are the Laplace transforms of x(t), f(t), and g(t) respectively. We find

X(s) = F(s)/(1 − G(s)).

If we can find the inverse Laplace transform now, we obtain the result.

Example 6.3.5: Solve

x(t) = e^{−t} + ∫_0^t sinh(t − τ) x(τ) dτ.

We apply the Laplace transform to obtain

X(s) = 1/(s + 1) + (1/(s^2 − 1)) X(s),

or

X(s) = (1/(s + 1))/(1 − 1/(s^2 − 1)) = (s − 1)/(s^2 − 2) = s/(s^2 − 2) − 1/(s^2 − 2).

It is not hard to apply Table 6.1 on page 231 to find

x(t) = cosh(√2 t) − (1/√2) sinh(√2 t).

§ Named for the Italian mathematician Vito Volterra (1860 – 1940).

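A numerical spot check of Example 6.3.5 (our aside, assuming numpy and scipy): plug the claimed solution back into the Volterra equation at a few points. The same pattern can be used to check answers to Exercises 6.3.6 and 6.3.7 below.

    import numpy as np
    from scipy.integrate import quad

    x = lambda t: np.cosh(np.sqrt(2)*t) - np.sinh(np.sqrt(2)*t)/np.sqrt(2)

    for t in (0.5, 1.0, 2.0):
        integral, _ = quad(lambda tau: np.sinh(t - tau)*x(tau), 0.0, t)
        print(x(t), np.exp(-t) + integral)  # both columns should agree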
6.3.4 Exercises

Exercise 6.3.1: Let f(t) = t^2 for t ≥ 0, and g(t) = u(t − 1). Compute f ∗ g.

Exercise 6.3.2: Let f(t) = t for t ≥ 0, and g(t) = sin t for t ≥ 0. Compute f ∗ g.

Exercise 6.3.3: Find the solution to mx″ + cx′ + kx = f(t), x(0) = 0, x′(0) = 0, for an arbitrary function f(t), where m > 0, c > 0, k > 0, and c^2 − 4km > 0 (system is overdamped). Write the solution as a definite integral.

Exercise 6.3.4: Find the solution to mx″ + cx′ + kx = f(t), x(0) = 0, x′(0) = 0, for an arbitrary function f(t), where m > 0, c > 0, k > 0, and c^2 − 4km < 0 (system is underdamped). Write the solution as a definite integral.

Exercise 6.3.5: Find the solution to mx″ + cx′ + kx = f(t), x(0) = 0, x′(0) = 0, for an arbitrary function f(t), where m > 0, c > 0, k > 0, and c^2 = 4km (system is critically damped). Write the solution as a definite integral.

Exercise 6.3.6: Solve x(t) = e^{−t} + ∫_0^t cos(t − τ) x(τ) dτ.

Exercise 6.3.7: Solve x(t) = cos t + ∫_0^t cos(t − τ) x(τ) dτ.

Further Reading

[BM] Paul W. Berg and James L. McGregor, Elementary Partial Differential Equations, Holden-Day, San Francisco, CA, 1966.

[EP] C.H. Edwards and D.E. Penney, Differential Equations and Boundary Value Problems: Computing and Modelling, 4th edition, Prentice Hall, 2008.

[F] Stanley J. Farlow, An Introduction to Differential Equations and Their Applications, McGraw-Hill, Inc., Princeton, NJ, 1994.

[I] E.L. Ince, Ordinary Differential Equations, Dover Publications, Inc., New York, NY, 1956.


Index

acceleration, 65
addition of matrices, 87
algebraic multiplicity, 119
amplitude, 65
angular frequency, 65
antiderivative, 14
antidifferentiate, 14
associated homogeneous equation, 70
atan2, 66
augmented matrix, 88
autonomous equation, 36
autonomous system, 85
Bernoulli equation, 33
boundary conditions for a PDE, 181
boundary value problem, 143
catenary, 11
Cauchy-Euler equation, 50
center, 107
cgs units, 227, 228
characteristic equation, 52
Chebychev's equation of order 1, 53
cofactor, 90
cofactor expansion, 90
column vector, 87
commute, 89
complementary solution, 70
complete eigenvalue, 119
complex conjugate, 102
complex number, 53
complex roots, 54
constant coefficient, 51
convolution, 242
cosine series, 170
critical point, 36
critically damped, 66
d'Alembert solution to the wave equation, 198
damped, 65
damped motion, 62
defect, 120
defective eigenvalue, 120
deficient matrix, 120
dependent variable, 7
determinant, 89
diagonal matrix, 96
diagonalization, 125
differential equation, 7
direction field, 85
Dirichlet boundary conditions, 211
Dirichlet problem, 204
displacement vector, 111
distance, 16
dot product, 88
dynamic damping, 140
eigenfunction, 144, 212
eigenfunction decomposition, 216
eigenvalue, 99
eigenvalue of a boundary value problem, 144
eigenvector, 99
eigenvector decomposition, 133, 140
ellipses (vector field), 107
elliptic PDE, 181
endpoint problem, 143
equilibrium solution, 36
Euler's equation, 50
Euler's equations, 56
Euler's formula, 53
Euler's method, 41
even function, 153
even periodic extension, 168
existence and uniqueness, 9, 47
exponential growth model, 9
exponential of a matrix, 124
exponential order, 232
extend periodically, 153
first order differential equation, 7
first order linear equation, 27
first order linear system of ODEs, 95
first order method, 42
first shifting property, 234
forced motion, 62
    systems, 136
Fourier series, 151
fourth order method, 43
Fredholm alternative
    simple case, 148
    Sturm-Liouville problems, 215
free motion, 62
free variable, 93
fundamental matrix, 96
fundamental matrix solution, 96
general solution, 10
generalized eigenvectors, 120, 122
Genius software, 5
geometric multiplicity, 119
Gibbs phenomenon, 158
half period, 160
harmonic function, 204
harvesting, 38
heat equation, 181
Heaviside function, 230
Hermite's equation of order 2, 54
homogeneous equation, 34
homogeneous linear equation, 47
homogeneous side conditions, 182
homogeneous system, 95
Hooke's law, 62
hyperbolic PDE, 181
identity matrix, 88
imaginary part, 54
implicit solution, 24
inconsistent system, 93
indefinite integral, 14
independent variable, 7
initial condition, 10
initial conditions for a PDE, 182
inner product, 88
inner product of functions, 151, 155
integral equation, 240, 244
integrate, 14
integrating factor, 27
integrating factor method, 27
    systems, 131
inverse Laplace transform, 233
invertible matrix, 89
IODE software, 18
    Lab I, 18
    Lab II, 41
    Project I, 18
    Project II, 41
    Project III, 76
    Project IV, 160
    Project V, 160
la vie, 72
Laplace equation, 204
Laplace transform, 229
Laplacian, 204
leading entry, 93
Leibniz notation, 15, 22
linear equation, 27, 47
linear first order system, 95
linear operator, 57, 181
linear PDE, 181
linearity of the Laplace transform, 231
linearly dependent, 57
linearly independent, 57, 85
logistic equation, 37
    with harvesting, 38
mass matrix, 111
mathematical model, 9
mathematical solution, 9
matrix, 87
matrix exponential, 124
matrix inverse, 89
matrix valued function, 95
method of partial fractions, 233
mixed boundary conditions, 211
mks units, 64, 78
multiplication of complex numbers, 53
multiplicity, 119
multiplicity of an eigenvalue, 119
natural (angular) frequency, 113
natural frequency, 65, 76
natural mode of oscillation, 113
Neumann boundary conditions, 211
Newton's law of cooling, 31, 37
Newton's second law, 62, 110
nilpotent, 126
normal mode of oscillation, 113
odd function, 153
odd periodic extension, 168
ODE, 8
one-dimensional heat equation, 181
one-dimensional wave equation, 191
ordinary differential equation, 8
orthogonal functions, 147, 151
orthogonality, 147
    with respect to a weight, 214
overdamped, 67
parabolic PDE, 181
parallelogram, 90
partial differential equation, 8, 181
particular solution, 10, 70
PDE, 8, 181
period, 65
periodic, 151
phase diagram, 37
phase portrait, 37, 86
phase shift, 65
Picard's theorem, 10
piecewise continuous, 163
piecewise smooth, 163
practical resonance, 80, 178, 180
product of matrices, 87
projection, 153
proper rational function, 234
pure resonance, 78, 175
quadratic formula, 52
real part, 54
real world problem, 8
reduced row echelon form, 93
reduction of order method, 49
regular Sturm-Liouville problem, 213
repeated roots, 54
resonance, 78, 117, 175, 244
RLC circuit, 62
row vector, 87
saddle point, 106
sawtooth, 154
scalar, 87
scalar multiplication, 87
second order differential equation, 11
second order linear differential equation, 47
second order method, 42
second order systems, 110, 116
second shifting property, 238
separable, 22
separation of variables, 182
side conditions for a PDE, 181
simple harmonic motion, 65
sine series, 168, 170
singular matrix, 89
singular solution, 24
sink, 105
slope field, 18
solution, 7
solution curve, 86
source, 105
spiral sink, 108
spiral source, 107
square wave, 79, 155
stable critical point, 36
stable node, 105
steady periodic solution, 80, 175
steady state temperature, 204
stiffness matrix, 111
Sturm-Liouville problem, 212
superposition, 47, 57, 182, 212
symmetric matrix, 88, 147
system of differential equations, 83
tedious, 72, 73, 136
timbre, 220
trajectory, 86
transient solution, 80
transpose, 88
trigonometric series, 151
undamped, 64
undamped motion, 62
    systems, 110
underdamped, 68
undetermined coefficients, 71
    for systems, 136, 139
unforced motion, 62
unit step function, 230
unstable critical point, 36
unstable node, 105
variation of parameters, 73
    systems, 138
vector, 85
vector field, 86, 105
vector valued function, 95
velocity, 16
Volterra integral equation, 244
wave equation, 181, 191
weight function, 214