UNIVERSITY OF
KWAZULU-NATAL
Department of Mathematics
at Howard College
Lecture Notes in Engineering Mathematics
LAPLACE TRANSFORM
and
DIFFERENTIAL
EQUATIONS
2009
Preliminaries 1
Chapter Zero
PRELIMINARIES
1. Improper Integrals
If $\int_{x_0}^{M} f(x)\,dx$ exists for every $M$, and $\lim_{M\to\infty} \int_{x_0}^{M} f(x)\,dx$ also exists and is finite, then we define
$$\int_{x_0}^{\infty} f(x)\,dx = \lim_{M\to\infty} \int_{x_0}^{M} f(x)\,dx.$$
If $\int_{x_0}^{\infty} |f(x)|\,dx$ converges, then $\int_{x_0}^{\infty} f(x)\,dx$ also converges, and we say it converges absolutely.
An absolutely convergent integral is always convergent, too; but the converse is not necessarily true.
To establish whether an improper integral converges or diverges, one may sometimes use the comparison test: suppose $|f(x)| \le |g(x)|$ from some point onwards (say, for $x > a$); then
$$\int_a^{\infty} |g(x)|\,dx < \infty \implies \int_a^{\infty} |f(x)|\,dx < \infty$$
(both integrals converge absolutely), and, contrapositively,
$$\int_a^{\infty} |f(x)|\,dx = \infty \implies \int_a^{\infty} |g(x)|\,dx = \infty.$$
The comparison test is useful, for example, when it is relatively easy to integrate $f(x)$ but not $g(x)$, or vice versa.
2. Integration by Parts
The "product rule" for differentiation is usually written as $(uv)' = u'v + uv'$. By simple manipulation, we get immediately
$$u'v = (uv)' - uv'.$$
Integrating with respect to the independent variable (let us call it $x$), we get
$$\int u'v\,dx = uv - \int uv'\,dx. \qquad (1)$$
We need to find
$$\left[-xe^{-x}\right]_0^{\infty} = \lim_{M\to\infty}\left[-Me^{-M}\right] - \left[-0\,e^{-0}\right] = \lim_{M\to\infty}\frac{-M}{e^{M}}.$$
The limit on the right is of the form $\infty/\infty$, and de l'Hospital's theorem is applicable: one gets immediately that this limit is zero. Substituting back, it follows that
$$\int_0^{\infty} xe^{-x}\,dx = 0 + \int_0^{\infty} e^{-x}\,dx = 1.$$
Similarly,
$$\lim_{M\to\infty} -\frac{M^2}{e^{M}} = 0;$$
hence
$$\int_0^{\infty} x^2 e^{-x}\,dx = 0 + 2\int_0^{\infty} xe^{-x}\,dx = 2 \cdot 1.$$
More generally: if $n$ is a positive integer, then applying de l'Hospital's theorem $n$ times, we may show that
$$\lim_{M\to\infty} \frac{M^n}{e^{M}} = 0;$$
in other words, the exponential function $e^{M}$ always diverges faster than any (arbitrarily large) power $M^n$, as $M$ tends to infinity. This statement remains true even if $e^{M}$ is replaced by $e^{\varepsilon M}$, where $\varepsilon$ is positive and arbitrarily small. Exponential growth is inherently stronger than polynomial growth of any degree.
Now, using these results, and repeating integration by parts $n$ times, it is easy to see that
$$\int_0^{\infty} x^n e^{-x}\,dx = 0 + \int_0^{\infty} nx^{n-1} e^{-x}\,dx = 0 + \int_0^{\infty} n(n-1)x^{n-2} e^{-x}\,dx = 0 + \int_0^{\infty} n(n-1)(n-2)x^{n-3} e^{-x}\,dx = \cdots = n!\,,$$
where $n! = 1 \cdot 2 \cdot 3 \cdots n$.
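The chain of integrations by parts above is easy to spot-check by machine. The sketch below (assuming Python with the sympy library, which these notes do not otherwise use) evaluates the integral symbolically for a few small $n$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Evaluate the improper integral of x^n e^{-x} over [0, oo)
# and compare it with n! for the first few positive integers.
for n in range(1, 6):
    value = sp.integrate(x**n * sp.exp(-x), (x, 0, sp.oo))
    assert value == sp.factorial(n)
```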
4. Euler’s Formula
Euler studied the properties of the quantity†
$$w = \cos\varphi + i\sin\varphi,$$
where $i^2 = -1$. It is easy to verify that the magnitude and argument of $w$ are, respectively,
$$|w| = 1, \qquad \arg(w) = \varphi.$$
If two such expressions are multiplied together, one gets this identity:
$$(\cos\varphi_1 + i\sin\varphi_1)(\cos\varphi_2 + i\sin\varphi_2) = \cos(\varphi_1+\varphi_2) + i\sin(\varphi_1+\varphi_2)$$
(verify this). Following Euler, we define the exponential of an imaginary quantity in the following way:
$$e^{i\varphi} \stackrel{\mathrm{def}}{=} \cos\varphi + i\sin\varphi.$$
It is easy to see that this definition extends to imaginary exponentials all the properties established for real exponentials: for example
$$e^{-i\varphi} = \frac{1}{e^{i\varphi}}, \qquad e^{i\varphi_1}\,e^{i\varphi_2} = e^{i(\varphi_1+\varphi_2)}, \qquad \frac{d\,e^{ic\varphi}}{d\varphi} = ic\,e^{ic\varphi},$$
and so on. The following formulas are also very useful and should be memorized:
$$\cos A\cos B = \tfrac12\left[\cos(A+B) + \cos(A-B)\right], \qquad \sin A\cos B = \tfrac12\left[\sin(A+B) + \sin(A-B)\right].$$
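A quick numerical sanity check of Euler's formula and the rules above, sketched in Python (an assumption of ours; any language with complex arithmetic would do):

```python
import cmath
import math

phi1, phi2 = 0.7, 1.9

# Euler's formula: e^{i phi} = cos(phi) + i sin(phi).
w1 = cmath.exp(1j * phi1)
assert math.isclose(w1.real, math.cos(phi1))
assert math.isclose(w1.imag, math.sin(phi1))
assert math.isclose(abs(w1), 1.0)          # |w| = 1

# Multiplying two such quantities adds the arguments.
w2 = cmath.exp(1j * phi2)
assert cmath.isclose(w1 * w2, cmath.exp(1j * (phi1 + phi2)))

# Product-to-sum formula: cos A cos B = [cos(A+B) + cos(A-B)] / 2.
A, B = 0.3, 1.1
assert math.isclose(math.cos(A) * math.cos(B),
                    (math.cos(A + B) + math.cos(A - B)) / 2)
```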
† Euler did not call it a complex quantity, as we would today, because complex numbers had not yet been properly defined. In a sense, he was ahead of his time.
4 The Laplace Transform
Chapter One
THE LAPLACE TRANSFORM
1. Introduction
◮Definition: Suppose that the integral
$$\int_0^{\infty} f(t)\,e^{-st}\,dt \qquad (3)$$
converges for some value of the parameter $s$. Then the integral (3) is called the Laplace Transform of $f(t)$ and is denoted either as $\mathcal{L}[f(t)]$ or $F(s)$. ◭
◮Example 1 If $f(t) = 3t + 4$,
$$\mathcal{L}[3t+4] = \int_0^{\infty} (3t+4)\,e^{-st}\,dt.$$
Substituting $st$ with $x$ (for $s > 0$), we get
$$\mathcal{L}[3t+4] = \frac{3}{s^2}\int_0^{\infty} xe^{-x}\,dx + \frac{4}{s}\int_0^{\infty} e^{-x}\,dx = \frac{3}{s^2} + \frac{4}{s}. \qquad [s > 0]$$
On the other hand, if $s \le 0$, then $t = +\infty$ is mapped into $x = -\infty$; we get a divergent integral, hence the Laplace transform of $3t+4$ is not defined for negative $s$. ◭
Note that in this example the Laplace transform is defined only for s > 2. ◭
◮Example 3 Find $\mathcal{L}[f(t)]$, if $f(t) = \left(e^{7t} + 3\right)^2$.
Solution: Expanding the square, we get $\left(e^{7t}+3\right)^2 = e^{14t} + 6e^{7t} + 9$, and hence
$$\mathcal{L}[f(t)] = \int_0^{\infty}\left(e^{14t} + 6e^{7t} + 9\right)e^{-st}\,dt = \frac{1}{s-14} + \frac{6}{s-7} + \frac{9}{s}.$$
Note that in this example the Laplace transform is defined only for $s > 14$. ◭
$$\mathcal{L}[f(t)] = \int_4^{7} 2e^{-st}\,dt = \left[\frac{2e^{-st}}{-s}\right]_4^{7} = \frac{2e^{-4s} - 2e^{-7s}}{s}.$$
Comment: A function like $f(t)$ in this example is called a transient because it "comes to life", so to speak, when $t = 4$, at which point it jumps to the value 2 (here we have a discontinuity); when $t$ is increased beyond the point $t = 7$ (another discontinuity), $f(t)$ vanishes for good. ◭
Before we begin to study the properties of the Laplace transform, which will take a fair amount
of time, let us make some preliminary remarks.
• The integration variable is commonly called t, rather than x. This is because in most
applications, t physically represents time. There may be exceptions to this rule.
• The Laplace transform defines a correspondence between functions of t and functions of s.
Such a correspondence is a linear map: in other words, given two functions of t, f1 and f2 ,
and two constants c1 and c2 , then
$$\mathcal{L}\left[c_1 f_1(t) + c_2 f_2(t)\right] = c_1\,\mathcal{L}\left[f_1(t)\right] + c_2\,\mathcal{L}\left[f_2(t)\right]. \qquad \text{(linearity)}$$
• We assume that $s$ is real. This keeps the foregoing discussion as simple as possible; but be warned: certain properties of the Laplace transform are easier to study if $s$ is treated as a complex variable.
• If $\int_0^{\infty} e^{-st} f(t)\,dt$ converges absolutely for a certain $s$, then it converges absolutely for all values of $s$ greater than that. This follows from the simple observation that if $s > s_0$, then $|e^{-st}| < |e^{-s_0 t}|$ for all positive $t$.
• If two functions, say $f(t)$ and $g(t)$, are identical for $t \ge 0$, then clearly they have the same Laplace transform; their behavior for negative $t$ is irrelevant. So, with no loss of generality, we may assume that $f(t) = 0$ identically for negative $t$.
• For the integral (3) to converge for some $s$, it is sufficient that $|f(t)|$ be integrable to the right of the origin, and that it not diverge faster than an exponential as $t \to \infty$. We have already noted that all powers of $t$, and hence all polynomials, meet this requirement.
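The remarks above—linearity in particular—can be illustrated with sympy's built-in transform (a sketch under the assumption that sympy is available; it is no part of the notes themselves):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example 1 by machine: L[3t + 4] = 3/s^2 + 4/s.
F = sp.laplace_transform(3*t + 4, t, s, noconds=True)
assert sp.simplify(F - (3/s**2 + 4/s)) == 0

# Linearity: the same result as 3 L[t] + 4 L[1].
F1 = sp.laplace_transform(t, t, s, noconds=True)
F2 = sp.laplace_transform(sp.S(1), t, s, noconds=True)
assert sp.simplify(F - (3*F1 + 4*F2)) == 0
```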
2. Notation. Uniqueness
The function f (t) appearing in (3) is sometimes called the direct function or pre-image; the
transform itself is usually called the image of f (t).
Following an established tradition, we shall use the same letter of the alphabet for the direct
function and for its transform, with the understanding that small letters such as f , g, x, y, ω
etc. will be used for direct functions, and the corresponding capital letters F , G, X, Y , Ω etc.
for their transforms.† Occasionally we shall also need Greek letters for dummy variables: in
this case, recall that τ (“tau”) and σ (“sigma”) are the Greek equivalent of small t and small s,
respectively.
Derivatives with respect to t will generally be denoted by dots (Newton's notation), as is common in physics and engineering. Derivatives with respect to s will be denoted by the more usual prime notation:
† Warning: some books use small letters for transforms and capital letters for direct functions,
which can be very confusing.
$$\dot{x} = \frac{dx}{dt}, \qquad \ddot{y} = \frac{d^2 y}{dt^2};$$
$$(FG)'' = F''G + 2F'G' + FG'' = \frac{d^2 F}{ds^2}\,G + 2\,\frac{dF}{ds}\,\frac{dG}{ds} + F\,\frac{d^2 G}{ds^2}.$$
We conclude these preliminaries with a rather subtle point. It is obvious that if f (t) ≡ g(t),
then F (s) = G(s). But it is not obvious at all whether from F (s) = G(s) one may deduce that
f (t) = g(t) anywhere. In a sense, this is blatantly false, since f and g may take arbitrary values
for t < 0; and that is precisely the reason why we stipulated that direct functions be re-defined
to be identically zero for negative t.
The question of equality of the original functions, when the transforms are equal, is the
object of Lerch’s theorem,† which says that under very reasonable assumptions, if F (s) = G(s)
then f (t) = g(t) for all positive t, except at most a finite number of isolated points.
From the point of view of applications, Lerch’s theorem says that if F (s) = G(s) then “for
all practical purposes” f (t) = g(t).
3. Basic Transforms
The following transforms must be committed to memory.
$$\mathcal{L}[e^{ct}] = \frac{1}{s-c} \qquad [s > c] \qquad (4)$$
$$\mathcal{L}[\cosh\omega t] = \frac{s}{s^2-\omega^2} \qquad [s > |\omega|] \qquad (5)$$
$$\mathcal{L}[\sinh\omega t] = \frac{\omega}{s^2-\omega^2} \qquad [s > |\omega|] \qquad (6)$$
$$\mathcal{L}[\cos\omega t] = \frac{s}{s^2+\omega^2} \qquad [s > 0] \qquad (7)$$
$$\mathcal{L}[\sin\omega t] = \frac{\omega}{s^2+\omega^2} \qquad [s > 0] \qquad (8)$$
$$\mathcal{L}[t^c] = \frac{c!}{s^{c+1}}. \qquad [s > 0,\ c > -1] \qquad (9)$$
The proof of (4) is straightforward:
$$\mathcal{L}[e^{ct}] = \int_0^{\infty} e^{ct}\,e^{-st}\,dt = \left[\frac{e^{(c-s)t}}{c-s}\right]_0^{\infty} = 0 - \frac{1}{c-s} = \frac{1}{s-c},$$
as long as $s > c$. The proofs for (5) and (6) are corollaries:
$$\mathcal{L}[\cosh\omega t] = \mathcal{L}\left[\tfrac12\left(e^{\omega t} + e^{-\omega t}\right)\right] = \frac{1}{2(s-\omega)} + \frac{1}{2(s+\omega)} = \frac{s}{s^2-\omega^2},$$
$$\mathcal{L}[\sinh\omega t] = \mathcal{L}\left[\tfrac12\left(e^{\omega t} - e^{-\omega t}\right)\right] = \frac{1}{2(s-\omega)} - \frac{1}{2(s+\omega)} = \frac{\omega}{s^2-\omega^2}.$$
$$\mathcal{L}\left[\sin^3 10t\right] = -\tfrac14 \cdot \frac{30}{s^2+900} + \tfrac34 \cdot \frac{10}{s^2+100}. \;\;◭$$
$$\mathcal{L}[\sin 2t\,\cos 9t] = \frac{11/2}{s^2+121} - \frac{7/2}{s^2+49}. \;\;◭$$
The proof of (9) is a bit more involved and requires a new concept, which is introduced in the next section.
You will find that, in applications, the few basic transforms (4–9) listed above are often all that one needs to know. Just as we seldom calculate derivatives as a limit of the form $\left(f(x+h) - f(x)\right)/h$, but rather through the rules of "differential calculus", so there exists a "Laplace calculus" which allows us to calculate many Laplace transforms without going back to the definition (3).
4. The Factorial Function
Consider $\mathcal{L}[t^n]$, $n$ being a positive integer. We substitute $st$ with $x$ in the integral (3) (if $s$ is positive, this maps $t = +\infty$ into $x = +\infty$), and then integrate by parts $n$ times:
$$\mathcal{L}[t^n] = \int_0^{\infty} t^n e^{-st}\,dt = \frac{1}{s^{n+1}}\int_0^{\infty} x^n e^{-x}\,dx = \frac{n!}{s^{n+1}}. \qquad [s > 0]$$
The integral on the right converges, but cannot be done analytically; another round of integration by parts leads nowhere, because $\int_0^{\infty} x^{-3/2} e^{-x}\,dx$ is divergent (convince yourself of this). Integration by substitution is also useless; it has been shown that there is no change of variables that resolves this integral as a combination of elementary functions (polynomials, sine/cosine, etc.).
However, such an integral certainly exists: the graph of the integrand encloses a finite area, and we may write
$$\int_0^{\infty} x^{1/2} e^{-x}\,dx = A.$$
◮Definition: If $c$ is a real number greater than $-1$, we define the generalized factorial of $c$ by means of the expression
$$c! = \int_0^{\infty} x^c e^{-x}\,dx. \qquad (11)$$
If $c$ is a positive integer, the integral on the right may be done by parts and is found to be equal to the product $1 \cdot 2 \cdot 3 \cdots c$. ◭
This is all we need to conclude our proof of (9): for $s > 0$ we have immediately
$$\mathcal{L}[t^c] = \int_0^{\infty} t^c e^{-st}\,dt = \frac{1}{s^{c+1}}\int_0^{\infty} x^c e^{-x}\,dx = \frac{c!}{s^{c+1}}, \qquad [c > -1]$$
where $c!$ is defined in (11). As an exercise, convince yourself that $\int_0^{\infty} x^c e^{-x}\,dx = \infty$ if $c \le -1$.
Factorials possess a simple and useful recursive property: if $c > -1$, then
$$(c+1)! = (c+1) \cdot c! \qquad (12)$$
If $c$ is a positive integer, this follows immediately from the "old" definition of factorial. If $c$ is an arbitrary real number greater than $-1$, then integrating by parts we get
$$(c+1)! = \int_0^{\infty} x^{c+1} e^{-x}\,dx = \left[-x^{c+1} e^{-x}\right]_0^{\infty} + \int_0^{\infty} (c+1)\,x^c e^{-x}\,dx.$$
The boundary term vanishes: at the upper end because the exponential dominates any power, and at the lower end because $c + 1 > 0$, so that
$$\lim_{x\to 0} x^{c+1} = 0.$$
It follows that
$$(c+1)! = 0 + (c+1)\int_0^{\infty} x^c e^{-x}\,dx = (c+1) \cdot c!\,,$$
as claimed.
We told you a half-lie. You probably thought that the only case when c! may be calculated in an
easy way, is if c is a positive integer or zero. Whether this statement is true or false, depends on
what you mean by “easy”. As it happens, using the Laplace transform it is possible to calculate
c! when c is half-odd; we shall come back to this in chapter 2, where it is shown that
$$\left(-\tfrac12\right)! = \sqrt{\pi}.$$
Finally, a word of warning. Most books still introduce generalized factorials by means of Legendre's so-called "gamma function"; that is, they write $\Gamma(c+1)$ for $c!$. That extra $+1$ is there only for historical reasons, and is as necessary as the human appendix.† Today, as the best computer programs (such as Maxima, Maple, Mathematica, etc.) accept both notations, there is no need for us to persist with the bulkier gamma notation. Most importantly, the exclamation mark is a good reminder of the link with "ordinary" factorials.
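In software, the generalized factorial is indeed reached through the gamma function just mentioned: $c! = \Gamma(c+1)$. A minimal check using only the Python standard library (our choice of tool, not the notes'):

```python
import math

# (-1/2)! = Gamma(1/2) = sqrt(pi), as quoted in the text.
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))

# Integer case: c! = Gamma(c + 1); e.g. 5! = 120.
assert math.isclose(math.gamma(6), 120.0)

# The recursion (12), (c+1)! = (c+1) * c!, for a non-integer c.
c = 0.3
assert math.isclose(math.gamma(c + 2), (c + 1) * math.gamma(c + 1))
```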
5. Basic Properties; Multiplication by s
Before we begin to study the analytic properties of the Laplace transform, we must make some
assumptions on the function f (t) that appears in the definition (3). Clearly, it is not necessary
that f (t) be continuous for the integral in (3) to converge: the function in example 4, for instance,
is discontinuous. On the other hand, it is easy to produce functions with no Laplace transform for any value of $s$: for example, $f(t) = e^{t^2}$ or $f(t) = 1/t$.
There is no special term in the mathematical literature describing “a function that admits
a Laplace transform”, but we can easily do without it. However, we introduce now a couple of
definitions that will help us through our discussion.
◮Example 8 Let $f(t) = t$ for $t \in [0, 1]$; $f(t) = t^2$ for $t \in (1, \infty)$. Then $f(t)$ is continuous for every $t$; $\dot f(t)$ is only sectionally continuous, because $\dot f(t) = 1$ in $(0,1)$, $\dot f(t) = 2t$ in $(1,\infty)$, $\lim_{t\to 1^-}\dot f(t) = 1$, $\lim_{t\to 1^+}\dot f(t) = 2$. At $t = 1$, which is a point of discontinuity of $\dot f$, the limit from the left and the limit from the right of $\dot f$ are different, but both are finite. ◭
◮Definition: A function f (t) is of exponential order if there exist two positive constants c and
B such that |f (t)| < B ect , for all t from a certain point onwards. ◭
These definitions are broad enough to include most functions of practical relevance. Clearly, if a function is sectionally continuous and of exponential order, then the integral in (3) converges, and hence its Laplace transform exists, at least for sufficiently large $s$.
It also follows that, if $f(t)$ is of exponential order, and $s$ is large enough, then
$$\lim_{t\to\infty} |f(t)|\,e^{-st} = 0$$
(i.e., this is true for all $s$ greater than some constant $\gamma$).
Suppose now that $f(t)$, $\dot f(t)$ are of exponential order; $f(t)$ is continuous and $\dot f(t)$ sectionally continuous. By definition,
$$\mathcal{L}[\dot f(t)] = \int_0^{\infty} \frac{df}{dt}\, e^{-st}\,dt.$$
† Quoted from G. Arfken, Mathematical Methods for Physicists, Wiley (USA) 1970.
Since we assume that $f(t)$ is of exponential order, the first of the limits above is zero (for sufficiently large $s$). The second limit is equal to $\lim_{t\to 0^+} f(t)$, i.e., the limit of $f(t)$ as $t$ approaches zero from the right—remember, by definition $f(t) \equiv 0$ if $t$ is negative. We write
$$\lim_{t\to 0^+} f(t) = f_0.$$
$$\mathcal{L}[5\dot x + 3x] = 0,$$
$$5(-2 + sX) + 3X = 0,$$
which yields
$$X = \frac{10}{5s+3} = \frac{2}{s + 3/5}.$$
Now, by inspection, we recognize that
$$\frac{1}{s + 3/5} = \mathcal{L}\left[e^{-3t/5}\right];$$
$$X = \mathcal{L}\left[2e^{-3t/5}\right],$$
where $\dot f_0$ denotes $\lim_{t\to 0^+} \dot f(t)$. Substituting for $\mathcal{L}[\dot f(t)]$ its expression (13), we get
$$\mathcal{L}[\ddot f(t)] = -\dot f_0 + s\,\mathcal{L}[\dot f(t)] = -\dot f_0 - s f_0 + s^2 F(s). \qquad (14)$$
Similarly,
$$\mathcal{L}[\dddot f(t)] = -\ddot f_0 + s\,\mathcal{L}[\ddot f(t)] = -\ddot f_0 - s\dot f_0 - s^2 f_0 + s^3 F(s), \qquad (15)$$
where $\ddot f_0$ denotes $\lim_{t\to 0^+} \ddot f(t)$. Naturally, here, we must assume that $f$, $\dot f$ and $\ddot f$ are continuous, and that the third derivative of $f(t)$ is sectionally continuous.
Formulas for higher-order derivatives may be calculated in the same way, if needed. How-
ever, in each case, the assumptions must be extended: the highest derivative of f (t) must be
sectionally continuous, and all lower-order derivatives (including f ) must be continuous. In
typical engineering applications, these assumptions are generally satisfied.
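Rule (13) can be checked on a concrete function that satisfies all the assumptions—continuous, of exponential order, with a sectionally continuous derivative. A sketch assuming sympy:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = sp.exp(-t) * sp.cos(2*t)      # continuous and of exponential order
F = sp.laplace_transform(f, t, s, noconds=True)
f0 = sp.limit(f, t, 0, '+')       # f0 = 1

# Rule (13): L[f'] = -f0 + s F(s).
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
assert sp.simplify(lhs - (-f0 + s*F)) == 0
```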
Solution: Without using (13), one may find $\mathcal{L}[\cos^2 t]$ using the identity $\cos^2 t = \tfrac12(\cos 2t + 1)$. This gives
$$\mathcal{L}[\cos^2 t] = \tfrac12\,\mathcal{L}[\cos 2t] + \tfrac12\,\mathcal{L}[1] = \tfrac12 \cdot \frac{s}{s^2+4} + \frac{1}{2s}.$$
In order to use (13), we first note that $f_0 = 1$ and $\dot f = -2\sin t\cos t = -\sin 2t$. It then follows that
$$\mathcal{L}[\dot f] = -\frac{2}{s^2+4},$$
and, by (13), that
$$\mathcal{L}[\dot f] = s\,\mathcal{L}[\cos^2 t] - 1.$$
Solving for $\mathcal{L}[\cos^2 t]$ gives the same result as before.
This example works because $\cos^2 t$ is continuous throughout, and so are its derivatives of any order. But the continuity requirements are crucial, as the following example shows.
◮Example 11 Find $F(s)$, if $f(t) = t$ for $0 \le t < 2$, and $f(t) = 0$ everywhere else.
Solution: Before we start: if you apply (13), you get the wrong answer. Let us see why. The graph of $f(t)$ has a discontinuity at $t = 2$. It follows that
$$\dot f = \begin{cases} 1 & \text{for } 0 \le t < 2, \\ 0 & \text{everywhere else.} \end{cases}$$
Hence, by (3),
$$\mathcal{L}[\dot f] = \int_0^{2} 1 \cdot e^{-st}\,dt = \frac{1 - e^{-2s}}{s}.$$
Since $f_0 = 0$, if you applied (13) you would get that $\mathcal{L}[f] = (1 - e^{-2s})/s^2$. But this is wrong! Working from (3), and integrating by parts, we get
$$\mathcal{L}[f] = \int_0^{2} te^{-st}\,dt = \left[-\frac{te^{-st}}{s}\right]_0^{2} + \frac{1}{s}\int_0^{2} e^{-st}\,dt = -\frac{2e^{-2s}}{s} + \frac{1 - e^{-2s}}{s^2}.$$
This is the correct answer. Formula (13) may not be used because $f(t)$ is not continuous at $t = 2$. ◭
The next example is very similar, except for a small detail: $f(t)$ is continuous and $\dot f(t)$ is sectionally continuous. Therefore (13) is applicable.
$$\mathcal{L}[\dot f] = \int_0^{2} 3e^{-st}\,dt = \frac{3(1 - e^{-2s})}{s}.$$
Since $f$ is continuous and $\dot f$ sectionally continuous, (13) is applicable. Therefore,
$$\mathcal{L}[\dot f] = -f_0 + sF(s) = \frac{3(1 - e^{-2s})}{s}.$$
But $f_0 = 0$, hence
$$F(s) = \frac{3(1 - e^{-2s})}{s^2}.$$
On the other hand, one may calculate the transform from (3), integrating by parts:
$$F(s) = \int_0^{2} 3te^{-st}\,dt + \int_2^{\infty} 6e^{-st}\,dt = \frac{3}{s^2} - \frac{3(2s+1)e^{-2s}}{s^2} + \frac{6e^{-2s}}{s},$$
which simplifies to the same result.
6. Division by s
It follows from the fundamental theorem of calculus that
$$\frac{d}{dt}\int_0^{t} f(\tau)\,d\tau = f(t).$$
So, define† $g(t) = \int_0^{t} f(\tau)\,d\tau$: then $g(t)$ is continuous, $g_0 = 0$, and
$$\mathcal{L}[\dot g(t)] = \mathcal{L}[f(t)];$$
by (13), the left-hand side is $sG(s)$, and hence $G(s) = F(s)/s$: integration of the direct function between $0$ and $t$ corresponds to division of the transform by $s$.
Naturally, this formula may be iterated any number of times. For example, if we integrate between zero and $t$ the function $g$ defined at the beginning of this section, we get another function of $t$, let us call it $h$, such that $\ddot h(t) = f(t)$, and $h(0) = \dot h(0) = 0$; therefore $\mathcal{L}[h(t)] = F(s)/s^2$.
Then, integrating $h$ between zero and $t$ and reasoning in the same way, we obtain yet another function of $t$, and its Laplace transform will be $F(s)/s^3$. And so on: the process may be repeated as many times as needed.
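The division-by-s rule and its iteration can be confirmed on a concrete example (a sympy sketch; the library is assumed, not part of the notes):

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

f = sp.sin(3*t)
F = sp.laplace_transform(f, t, s, noconds=True)

# One integration between 0 and t: the transform is divided by s.
g = sp.integrate(f.subs(t, tau), (tau, 0, t))
G = sp.laplace_transform(g, t, s, noconds=True)
assert sp.simplify(G - F/s) == 0

# Iterating once more divides by s again.
h = sp.integrate(g.subs(t, tau), (tau, 0, t))
H = sp.laplace_transform(h, t, s, noconds=True)
assert sp.simplify(H - F/s**2) == 0
```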
where $\mathcal{L}[x] = X(s)$, $\mathcal{L}[\dot x] = -x_0 + sX(s) = -1 + sX(s)$, and $\mathcal{L}[2] = 2/s$. It follows that
$$2(-1 + sX) + 4X = \frac{2}{s},$$
† Be careful: the dummy variable of integration must not be t, because it appears as a limit
of integration.
◮Example 14 Since
$$t^n = \int_0^{t} n\tau^{n-1}\,d\tau,$$
we get immediately
$$\mathcal{L}[t^n] = \frac{n\,\mathcal{L}[t^{n-1}]}{s}.$$
In this way, starting from
$$\mathcal{L}[1] = \int_0^{\infty} e^{-st}\,dt = \frac{1}{s},$$
we get
$$\mathcal{L}[t] = \frac{1}{s^2}, \qquad \mathcal{L}[t^2] = \frac{2!}{s^3}, \qquad \mathcal{L}[t^3] = \frac{3!}{s^4}, \qquad \mathcal{L}[t^4] = \frac{4!}{s^5},$$
and so on. We recover (9), though for integer powers only. ◭
These transforms may also be understood in terms of convolution, a more advanced technique that we'll study in section 2.7.
7. Multiplication by t
It may be shown that if the integral (3) converges absolutely, then $F(s)$ may be differentiated any number of times with respect to $s$. For example,
$$F'(s) = \frac{d}{ds}\int_0^{\infty} f(t)\,e^{-st}\,dt = \int_0^{\infty} f(t)\,\frac{\partial}{\partial s}\left[e^{-st}\right]dt = -\int_0^{\infty} f(t)\,te^{-st}\,dt.$$
Repeating this process $n$ times, we obtain
$$\mathcal{L}[t^n f(t)] = (-1)^n\,\frac{d^n F(s)}{ds^n}. \qquad (17)$$
◮Example 15 Find $\mathcal{L}[t\cosh 4t]$.
Solution: Since $\mathcal{L}[\cosh 4t] = s/(s^2 - 16)$, then by (17)
$$\mathcal{L}[t\cosh 4t] = -\frac{d}{ds}\,\frac{s}{s^2-16} = -\frac{1}{s^2-16} + \frac{2s^2}{(s^2-16)^2} = \frac{s^2+16}{(s^2-16)^2}.$$
As an exercise, calculate $\int_0^{\infty} t\cosh 4t\; e^{-st}\,dt$ (integrating by parts) and verify that you get the same answer. ◭
$$\mathcal{L}\left[te^{i\omega t}\right] = -\frac{d}{ds}\,\frac{1}{s - i\omega} = \frac{1}{(s-i\omega)^2} = \frac{(s+i\omega)^2}{(s^2+\omega^2)^2} = \frac{s^2 + 2is\omega - \omega^2}{(s^2+\omega^2)^2}.$$
Separating the real and imaginary parts, we get
$$\mathcal{L}[t\cos\omega t] = \frac{s^2-\omega^2}{(s^2+\omega^2)^2};$$
$$\mathcal{L}[t\sin\omega t] = \frac{2s\omega}{(s^2+\omega^2)^2}.$$
As an exercise, calculate from first principles $\int_0^{\infty} e^{-st}\,t\cos\omega t\,dt$ or $\int_0^{\infty} e^{-st}\,t\sin\omega t\,dt$, and verify that you get the same results. ◭
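The two formulas just obtained can also be verified directly from rule (17) (a sympy sketch, under our usual assumption that the library is available):

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)

# Rule (17) with n = 1: L[t f(t)] = -F'(s), tried on f = cos(w t).
Fcos = sp.laplace_transform(sp.cos(w*t), t, s, noconds=True)
lhs = sp.laplace_transform(t*sp.cos(w*t), t, s, noconds=True)
assert sp.simplify(lhs + sp.diff(Fcos, s)) == 0

# The closed forms for t cos(wt) and t sin(wt).
assert sp.simplify(lhs - (s**2 - w**2)/(s**2 + w**2)**2) == 0
Fsin = sp.laplace_transform(t*sp.sin(w*t), t, s, noconds=True)
assert sp.simplify(Fsin - 2*s*w/(s**2 + w**2)**2) == 0
```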
◮Example 18 Find the function $x(t)$ such that $2\dot x + x = t^3 e^{-t/2}$, and $x_0 = 0$.
Solution: Taking the Laplace transform of the equation, we get $\mathcal{L}[2\dot x + x] = \mathcal{L}\left[t^3 e^{-t/2}\right]$, where $\mathcal{L}[2\dot x + x] = 2sX + X$ (recall that $x_0 = 0$), and
$$\mathcal{L}\left[e^{-t/2}\right] = \frac{1}{s + 1/2} \implies \mathcal{L}\left[t^3 e^{-t/2}\right] = -\frac{d^3}{ds^3}\,\frac{1}{s + 1/2} = \frac{3!}{(s+1/2)^4};$$
this step (multiplication by $t^3$) follows from (17). Substituting these expressions into the preceding equation yields
$$2sX + X = \frac{6}{(s+1/2)^4}.$$
Hence
$$X = \frac{6}{(2s+1)(s+1/2)^4} = \frac{3}{(s+1/2)^5}.$$
Now, we observe that
$$\frac{d^4}{ds^4}\,\frac{1}{s+1/2} = \frac{4!}{(s+1/2)^5};$$
therefore,
$$\frac{3}{(s+1/2)^5} = \frac{3}{4!}\cdot\frac{4!}{(s+1/2)^5} = \tfrac18\,\mathcal{L}\left[t^4 e^{-t/2}\right];$$
this step (multiplication by $t^4$) also follows from (17). Hence,
$$X = \mathcal{L}\left[\tfrac18\,t^4 e^{-t/2}\right],$$
and therefore $x(t) = \tfrac18\,t^4 e^{-t/2}$. ◭
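Example 18 can be double-checked by solving the differential equation directly, bypassing the transform altogether (a sketch assuming sympy's ODE solver):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x = sp.Function('x')

# Solve 2 x' + x = t^3 e^{-t/2} with x(0) = 0 and compare with the
# answer found via the Laplace transform, x(t) = t^4 e^{-t/2} / 8.
ode = sp.Eq(2*sp.Derivative(x(t), t) + x(t), t**3 * sp.exp(-t/2))
sol = sp.dsolve(ode, x(t), ics={x(0): 0})
assert sp.simplify(sol.rhs - t**4 * sp.exp(-t/2) / 8) == 0
```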
8. Division by t
In other words, if a function of s does not approach zero as s → ∞, then it is not a Laplace
transform, in the ordinary sense of the word.†
Given a function $f(t)$ with a Laplace transform $F(s)$, we introduce
$$G(s) \stackrel{\mathrm{def}}{=} \int_s^{\infty} F(\sigma)\,d\sigma.$$
Note that $\sigma$ is a dummy variable; $s$ is the lower limit of integration and we treat it as a parameter. Convince yourself that this integral is defined in such a way that if $s \to \infty$, then $G(s)$ tends to zero. Therefore, $G(s)$ may be the Laplace transform of some function $g(t)$.
By the fundamental theorem of calculus, $G(s)$ is an anti-derivative of $-F(s)$: that is, $G'(s) = -F(s)$. On the other hand, by (17), $G'(s) = \mathcal{L}[-t\,g(t)]$. Comparing the two expressions, we get $F(s) = \mathcal{L}[t\,g(t)]$, i.e.
$$g(t) = \frac{f(t)}{t}.$$
Finally, transforming again, we get $G(s) = \mathcal{L}[f(t)/t]$, which may be expanded as
$$\mathcal{L}\left[\frac{f(t)}{t}\right] = \int_s^{\infty} F(\sigma)\,d\sigma. \qquad (18)$$
† This comment does not apply to “generalized functions”, which you’ll find in more advanced
courses. But generalized functions are not functions in the usual sense of the word.
Comment: By definition, $\mathcal{L}[\sin t/t] = \int_0^{\infty} (\sin t/t)\,e^{-st}\,dt$. So, we get the interesting result
$$\int_0^{\infty} \frac{\sin t}{t}\,e^{-st}\,dt = \tfrac12\pi - \arctan s. \qquad [s > 0]$$
For example, for $s = 1$ we get $\int_0^{\infty} (e^{-t}\sin t/t)\,dt = \tfrac12\pi - \arctan 1 = \tfrac14\pi$. Letting $s \to 0^+$ we get
$$\int_0^{\infty} \frac{\sin t}{t}\,dt = \tfrac12\pi,$$
which is a famous example of a convergent improper integral that is not absolutely convergent. Note also that these integrals may not be done using elementary calculus, as the anti-derivatives may not be written in terms of elementary functions. We find the surprising result that definite integration is elementary while indefinite integration is not. ◭
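The arctangent result follows from (18) by a one-line integration, which is easy to reproduce (sympy sketch):

```python
import sympy as sp

s, sigma = sp.symbols('s sigma', positive=True)

# (18) applied to f = sin t: F(sigma) = 1/(sigma^2 + 1), integrated
# from s to infinity, gives pi/2 - arctan(s) = arccot(s).
G = sp.integrate(1/(sigma**2 + 1), (sigma, s, sp.oo))
assert sp.simplify(G - (sp.pi/2 - sp.atan(s))) == 0

# At s = 1 this is pi/4, the value quoted in the text.
assert sp.simplify(G.subs(s, 1) - sp.pi/4) == 0
```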
◮Example 20 Using the result of example 19, find $G(s)$, given that $g(t) = \int_0^{t} (\sin\tau/\tau)\,d\tau$.
Solution: In example 19 we established that
$$\mathcal{L}\left[\frac{\sin t}{t}\right] = \operatorname{arccot} s.$$
In principle, rule (18) may be iterated: division of $f(t)$ by $t^2$ corresponds to integrating $F(s)$ twice, and so on. In practice, examples of this kind are likely to be fairly long. You may find one at the end of this chapter (see section 1.11). The next example shows how double integration may, in some cases, be avoided.
◮Example 22 Find the Laplace transform of $\sin^2 t/t$ and hence the Laplace transform of $\sin^2 t/t^2$.
Solution: The first half of this problem is very simple: from the identity $\sin^2 t = \tfrac12(1 - \cos 2t)$ we get $\mathcal{L}[\sin^2 t] = \tfrac12\left(1/s - s/(s^2+4)\right)$; writing
$$f(t) = \frac{\sin^2 t}{t},$$
the preceding result may be combined with (18) to give
$$F(s) = \tfrac12 \ln\frac{\sqrt{s^2+4}}{s}.$$
For the second half, differentiating $f(t)$ we find
$$\frac{\sin^2 t}{t^2} = \frac{\sin 2t}{t} - \dot f.$$
The first term on the right-hand side is almost identical to what we found in example 19:
$$\mathcal{L}\left[\frac{\sin 2t}{t}\right] = \operatorname{arccot}(s/2).$$
OVERVIEW
The rules discussed in sections 5–8 may, at first, seem hard to grasp. They are not: let us put them all together in a table.

    $f(t)$ ........................... $F(s)$
    $\dot f(t)$ ...................... $sF(s) - f_0$
    $\int_0^t f(\tau)\,d\tau$ ........ $F(s)/s$
    $t\,f(t)$ ........................ $-F'(s)$
    $f(t)/t$ ......................... $\int_s^\infty F(\sigma)\,d\sigma$

The first line is simply a reminder that we use lower-case letters to denote functions of $t$, and the corresponding upper-case letters to denote their transforms.
You should be able to spot the thread linking the other formulas: Differentiation with respect to one variable, $t$ or $s$, is associated with multiplication by the other; integration with respect to one variable is associated with division by the other.
We are slightly over-simplifying, now; some fine points must also be borne in mind. For
example, integration with respect to s ends at infinity, whereas integration with respect to t
starts at zero. The rule for L [f˙ ] must account for the possibility that f (t) be discontinuous at
t = 0, hence f0 appears in the second line; on the other hand, the rule for F ′ (s) has a minus
sign, which should not be overlooked.
However, the common thread (in italics above) is easy to remember. Using the six basic
transforms (4–9) as building blocks and combining the rules above, one may derive a wide variety
of transforms, with no direct call for integral calculus. We now complement this table with two
more basic rules: the shift rules.
9. Shifting s
Consider a function $f(t)$ with Laplace transform $F(s)$. Let $\alpha$ be a constant. By definition,
$$\mathcal{L}\left[e^{-\alpha t} f(t)\right] = \int_0^{\infty} f(t)\,e^{-\alpha t}\,e^{-st}\,dt.$$
It follows immediately that
$$\mathcal{L}\left[e^{-\alpha t} f(t)\right] = \int_0^{\infty} f(t)\,e^{-(s+\alpha)t}\,dt = F(s+\alpha). \qquad (19)$$
◮Example 23 Find $\mathcal{L}\left[e^{-4t}\sqrt{t}\right]$.
Solution: By (9) we get that $\mathcal{L}\left[\sqrt{t}\right] = (\tfrac12)!/s^{3/2}$. Therefore, $\mathcal{L}\left[e^{-4t}\sqrt{t}\right] = (\tfrac12)!/(s+4)^{3/2}$. ◭
$$\mathcal{L}\left[t^3 e^{2t}\right] = \frac{6}{(s-2)^4}.$$
Note, however, that one may also start from $\mathcal{L}[e^{2t}] = 1/(s-2)$ and apply (17) three times:
$$\mathcal{L}\left[t^3 e^{2t}\right] = -\frac{d^3}{ds^3}\,\frac{1}{s-2} = \frac{6}{(s-2)^4}. \;\;◭$$
$$\sin 2t\,\cosh 3t = \sin 2t \cdot \frac{e^{3t} + e^{-3t}}{2} = \tfrac12\, e^{3t}\sin 2t + \tfrac12\, e^{-3t}\sin 2t.$$
$$\mathcal{L}[\sin 2t\,\cosh 3t] = \frac{1}{(s-3)^2 + 4} + \frac{1}{(s+3)^2 + 4}. \;\;◭$$
◮Example 27 Find $F(s)$, if $f(t) = t^{-1}\int_0^{t} e^{-3\tau}\sin\tau\,d\tau$.
Solution: We observe that $f(t)$ is obtained by performing three operations on the function $\sin t$, which are:
(i) multiplication by $e^{-3t}$,
(ii) integration with respect to $t$,
(iii) division by $t$.
The corresponding operations on $F(s)$ are:
(i) shift by 3 units to the left,
(ii) division by $s$,
(iii) integration with respect to $s$.
Proceeding along these lines, we get:
$$\mathcal{L}[\sin t] = \frac{1}{s^2+1},$$
$$\text{(i)}\quad \mathcal{L}\left[e^{-3t}\sin t\right] = \frac{1}{(s+3)^2+1},$$
$$\text{(ii)}\quad \mathcal{L}\left[\int_0^{t} e^{-3\tau}\sin\tau\,d\tau\right] = \frac{1}{s\left[(s+3)^2+1\right]},$$
and finally:
$$\text{(iii)}\quad F(s) = \mathcal{L}\left[\frac1t\int_0^{t} e^{-3\tau}\sin\tau\,d\tau\right] = \int_s^{\infty} \frac{d\sigma}{\sigma\left[(\sigma+3)^2+1\right]}.$$
10. Shifting t
We have discussed the effects of a shift of the s axis. A shift of the t axis may be studied in a similar way.
This may be useful, for instance, if the function f(t) is defined by different formulas over different pieces of the t axis. In such a case, the following approach may be useful. Suppose
$$f(t) = \begin{cases} 0 & \text{if } t < T, \\ \varphi(t-T) & \text{if } t \ge T, \end{cases}$$
where $T$ is a positive constant; then
$$F(s) = \int_T^{\infty} \varphi(t-T)\,e^{-st}\,dt.$$
Substituting $t = T + \tau$, we get
$$F(s) = \int_0^{\infty} \varphi(\tau)\,e^{-sT-s\tau}\,d\tau = e^{-sT}\,\Phi(s). \qquad (20)$$
◮Example 28 Find $\mathcal{L}[f(t)]$, if $f(t) = 0$ for $t < 3$ and $f(t) = \sin 2t$ for $t \ge 3$.
Solution: We may not apply rule (20) as long as $f(t)$ is written in this way, because the dependence on $t - 3$ is not explicit. To make it so, we manipulate the function slightly: writing $\sin 2t = \sin\left(2(t-3) + 6\right)$, we define $\varphi(\tau) = \cos 6\,\sin 2\tau + \sin 6\,\cos 2\tau$, whose transform is
$$\Phi(s) = \frac{2\cos 6}{s^2+4} + \frac{s\sin 6}{s^2+4}.$$
Since obviously
$$\sin 2t = \varphi(t-3) \qquad [t \ge 3],$$
i.e.,
$$f(t) = \begin{cases} 0 & \text{if } t < 3, \\ \varphi(t-3) & \text{if } t \ge 3, \end{cases}$$
we are in a position to apply (20). It follows that $F(s) = e^{-3s}\,\Phi(s)$. ◭
In most applications, rule (20) is combined with a simple and useful mathematical tool: the step
function.
$$u(t) = \begin{cases} 0 & \text{for negative } t, \\ \tfrac12 & \text{for } t = 0, \\ 1 & \text{for positive } t. \end{cases}$$
Do not worry about the definition of $u(t)$ for $t = 0$: it is purely conventional and does not affect the Laplace transform in any way. It finds its place in more advanced topics.
Heaviside's function jumps from 0 to 1 when its argument is increased across zero. So, for example, what is $u(t-74)$? Since $t - 74 < 0$ when $t < 74$, and $t - 74 > 0$ when $t > 74$, we see that $u(t-74)$ jumps from 0 to 1 as $t$ is increased across the point $t = 74$. By rule (20), the Laplace transform of this function is simply
$$F(s) = e^{-74s} \cdot \mathcal{L}[1] = \frac{e^{-74s}}{s}.$$
Very often two step functions are combined to form a function that is zero for $t$ up to a certain point, equal to one for a while, and zero again from another point onwards.
$$F(s) = \mathcal{L}[u(t-1)] - \mathcal{L}[u(t-4)] = \frac{e^{-s} - e^{-4s}}{s}. \;\;◭$$
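The same window transform can be obtained mechanically; sympy implements the step function under the name Heaviside (a sketch, assuming the library):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# The "window" of the example above: u(t - 1) - u(t - 4).
f = sp.Heaviside(t - 1) - sp.Heaviside(t - 4)
F = sp.laplace_transform(f, t, s, noconds=True)
assert sp.simplify(F - (sp.exp(-s) - sp.exp(-4*s))/s) == 0
```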
Things get interesting when we multiply a given function of t by u(t − a) − u(t − b): we get a
new function that coincides with the old one between t = a and t = b, but is identically zero
everywhere else. Such a function is called a transient.
◮Example 30 Find $F(s)$, if $f(t) = \sin 5t$ for $t$ between 1 and 4; $f(t) \equiv 0$ everywhere else.
Solution: The graph of $\sin 5t$ extends, of course, from $-\infty$ to $+\infty$. However, if $\sin 5t$ is multiplied by $u(t-1) - u(t-4)$, the part of the graph lying outside the interval $(1, 4)$ is "wiped off", so to speak, while the part between 1 and 4 is not affected.
The graph of $f(t)$ jumps from 0 to $\sin 5$ at $t = 1$, and then from $\sin 20$ to 0 at $t = 4$. We may write
$$f(t) = \sin 5t \cdot \left[u(t-1) - u(t-4)\right] = \sin 5t \cdot u(t-1) - \sin 5t \cdot u(t-4).$$
Proceeding like in example 28, we write $\sin 5t = \sin\left(5(t-1)+5\right)$ in the first term and $\sin 5t = \sin\left(5(t-4)+20\right)$ in the second. Simplifying, we get:
$$f(t) = \left[\sin 5(t-1)\cos 5 + \cos 5(t-1)\sin 5\right]\cdot u(t-1) - \left[\sin 5(t-4)\cos 20 + \cos 5(t-4)\sin 20\right]\cdot u(t-4),$$
and (20) may now be applied to each term.
It should be noted that problems of this type may always be done from first principles. However, with some practice you'll find that the method of this section is usually better. For instance, the last example may also be done by calculating
$$F(s) = \int_1^{4} e^{-st}\sin 5t\,dt$$
directly.
$$F(s) = \frac{1/2}{s} - \frac{1}{s^2} + \frac{2e^{-s/2}}{s^2}.$$
As an exercise, calculate $\int_0^{1/2}\left(\tfrac12 - t\right)e^{-st}\,dt + \int_{1/2}^{\infty}\left(t - \tfrac12\right)e^{-st}\,dt$ and verify that you get the same answer. ◭
◮Example 32 Find $F(s)$ if $f(t) = 3t$ for $0 \le t \le 2$; $f(t) = 6$ for $t > 2$ (this is the same as example 12).
Solution: We note that
$$f(t) = 3t\left[u(t) - u(t-2)\right] + 6u(t-2) = 3t\,u(t) - 3(t-2+2)\,u(t-2) + 6u(t-2) = 3t\,u(t) - 3(t-2)\,u(t-2).$$
Hence, by (20), $F(s) = 3/s^2 - 3e^{-2s}/s^2$, as found before.
◮Example 33 Find $F(s)$ if $f(t) = 3t - t^2$ for $0 < t < 3$, $f(t) \equiv 0$ everywhere else.
Solution: Write
$$f(t) = (3t - t^2)\cdot u(t) - (3t - t^2)\cdot u(t-3) = (3t - t^2)\cdot u(t) - \left[-3(t-3) - (t-3)^2\right]\cdot u(t-3),$$
where we used the identity $3t - t^2 = -3(t-3) - (t-3)^2$. In this way, the second term depends on $t$ only through the expression $t - 3$. And since
$$\mathcal{L}\left[-3t - t^2\right] = -\frac{3}{s^2} - \frac{2}{s^3},$$
we get by (20) that
$$\mathcal{L}\left[\left(-3(t-3) - (t-3)^2\right)\cdot u(t-3)\right] = -\left(\frac{3}{s^2} + \frac{2}{s^3}\right)e^{-3s}.$$
So much for the second term. The first term requires no rearrangements, and we get immediately:
$$\mathcal{L}\left[(3t - t^2)\cdot u(t)\right] = \left(\frac{3}{s^2} - \frac{2}{s^3}\right)\cdot e^{0s};$$
obviously $e^{0s} = 1$. So, finally, we get:
$$F(s) = \frac{3}{s^2} - \frac{2}{s^3} + \left(\frac{3}{s^2} + \frac{2}{s^3}\right)e^{-3s}. \;\;◭$$
SHIFT RULES
Formulas (20) and (19) are often called shift properties. They are summarized in the following table.

    $f(t)$ ................ $F(s)$
    $f(t-T)\,u(t-T)$ ...... $e^{-sT} F(s)$
    $e^{-at} f(t)$ ........ $F(s+a)$
holds only if $\mathcal{L}[f_1]$ and $\mathcal{L}[f_2]$ exist each one on its own. It may happen, though, that the Laplace transform of $f_1 + f_2$ does exist, even if $f_1$ and $f_2$ do not have a Laplace transform.
◮Example 34 Find $F(s)$, if $f(t) = (e^t - 1)\cdot t^{-3/2}$.
Solution: We would like to use the shift theorem (19). However,
$$\int_0^{\infty} e^t\,t^{-3/2}\,e^{-st}\,dt - \int_0^{\infty} t^{-3/2}\,e^{-st}\,dt = \infty - \infty,$$
which is clearly meaningless. So, $F(s)$ may not be "split" as we did in several previous examples; see for instance example 26.
On the other hand, it's easy to see that $F(s)$ exists, and we are going to find it in two steps. First of all we write
$$g(t) = \frac{e^t - 1}{t^{1/2}}, \qquad f(t) = \frac{e^t - 1}{t^{3/2}} = \frac{g(t)}{t},$$
and consider $G(s)$. This is easy, because $\mathcal{L}\left[e^t t^{-1/2}\right]$ and $\mathcal{L}\left[t^{-1/2}\right]$ exist separately:
$$\mathcal{L}\left[t^{-1/2}\right] = \frac{(-\tfrac12)!}{s^{1/2}}, \qquad \mathcal{L}\left[e^t t^{-1/2}\right] = \frac{(-\tfrac12)!}{(s-1)^{1/2}};$$
the second one comes via the shift theorem (19). Therefore,
$$G(s) = \frac{(-\tfrac12)!}{(s-1)^{1/2}} - \frac{(-\tfrac12)!}{s^{1/2}}.$$
By (18), $F(s) = \int_s^{\infty} G(\sigma)\,d\sigma$, and an anti-derivative of $G$ is $(-\tfrac12)!\left[\dfrac{(\sigma-1)^{1/2}}{1/2} - \dfrac{\sigma^{1/2}}{1/2}\right]$. We should show that this expression in square brackets goes to zero as $\sigma$ tends to infinity; this is a good revision example in 1st-year calculus. Writing
$$\frac{(\sigma-1)^{1/2}}{1/2} - \frac{\sigma^{1/2}}{1/2} = \frac{-2}{(\sigma-1)^{1/2} + \sigma^{1/2}},$$
It's now clear that the right-hand side goes to zero as $\sigma \to \infty$, as required. So, finally:
$$F(s) = 0 - (-\tfrac12)!\left[\frac{(\sigma-1)^{1/2}}{1/2} - \frac{\sigma^{1/2}}{1/2}\right]_{\sigma=s} = 2\,(-\tfrac12)!\left[s^{1/2} - (s-1)^{1/2}\right].$$
In section 2.7 we'll see that $(-\tfrac12)! = \sqrt{\pi}$. See also the comments at the end of section 1.4. ◭
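Example 34's answer can be probed in two independent ways: the intermediate transform $G(s)$ is computable term by term, and the final $F(s)$ must satisfy $F'(s) = -G(s)$ with $F \to 0$ as $s \to \infty$. A sympy sketch (writing $(-\tfrac12)!$ as $\sqrt{\pi}$):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Step 1: G(s) = L[(e^t - 1)/sqrt(t)], split into two existing transforms.
G = sp.laplace_transform(sp.expand((sp.exp(t) - 1)/sp.sqrt(t)),
                         t, s, noconds=True)
target_G = sp.sqrt(sp.pi)*(1/sp.sqrt(s - 1) - 1/sp.sqrt(s))
assert sp.simplify(G - target_G) == 0

# Step 2: the claimed F(s) = 2 sqrt(pi) (sqrt(s) - sqrt(s - 1)).
F = 2*sp.sqrt(sp.pi)*(sp.sqrt(s) - sp.sqrt(s - 1))
assert sp.simplify(sp.diff(F, s) + target_G) == 0   # F' = -G, as (18) requires
assert sp.limit(F, s, sp.oo) == 0                   # transforms vanish at infinity
```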
Solution: This problem is very similar to the preceding one, so we'll look only at the main points. We define
$$h(t) = \frac{2e^{3t} - 3e^{2t} + 1}{t^{1/2}}, \qquad g(t) = \frac{2e^{3t} - 3e^{2t} + 1}{t^{3/2}} = \frac{h(t)}{t}, \qquad f(t) = \frac{2e^{3t} - 3e^{2t} + 1}{t^{5/2}} = \frac{g(t)}{t}.$$
We seek $F(s)$. Neither $f(t)$ nor $g(t)$ may be split as we did previously; however, for $h(t)$ it's correct to write
$$H(s) = \mathcal{L}\left[2e^{3t}\cdot t^{-1/2}\right] - \mathcal{L}\left[3e^{2t}\cdot t^{-1/2}\right] + \mathcal{L}\left[t^{-1/2}\right].$$
Proceeding like in the previous example, it’s possible to see that the expression in square brackets
goes to zero as σ tends to infinity. Therefore,
Once again, it's possible to show that the expression in square brackets goes to zero as $\sigma$ tends to infinity. So, the final answer is
Simplifying, we get:
$$f(t) = t\,e^{1-t}\,u(t) + t\left(e^{t-1} - e^{1-t}\right)u(t-1) =$$
PROBLEMS
Division by t
11. Find the following Laplace transforms; read example 22 before attempting (d).
(a) $\mathcal{L}[(1 - e^{-t})/t]$  (c) $\mathcal{L}[(\cosh t - \cos t)/t]$
(b) $\mathcal{L}[\sinh^2 t/t]$  (d) $\mathcal{L}[\sinh^2 t/t^2]$.
12. Calculate (a) $\mathcal{L}\left[\displaystyle\int_0^{t} \frac{1 - e^{-\tau}}{\tau}\,d\tau\right]$, (b) $\mathcal{L}\left[t\displaystyle\int_0^{t} \frac{\sin\tau}{\tau}\,d\tau\right]$.
Shifting
ANSWERS
6 (a) $F(s) = 2/s^3 - e^{-s}\left(2/s^3 + 2/s^2 + 1/s\right)$; (b) $\mathcal{L}[\ddot f] = 2(1 - e^{-s})/s$.
Formula (14) does not hold because $f$ and $\dot f$ are not continuous.
8 $\dfrac{8}{s(s^2+64)} - \dfrac{6}{s(s^2+36)}$.
9 (a) $6/(s+3)^4$ (e) $(s^2+9)/(s^2-9)^2$
(b) $(4s^2-4s+2)/(s-1)^3$ (f) $4s/(s^2-4)^2$
(c) $(8+12s-2s^2)/(s^2+4)^2$ (g) $(6s^4-18s^3+126s^2-162s+432)/(s^2+9)^3$
(d) $(6s^2-2)/(s^2+1)^3$ (h) $(6s^4-36s^2+6)/(s^2+1)^4$.
10 (a) 3/50, (b) −1/2.
11 (a) ln[(s + 1)/s]                       (c) ½ ln[(s² + 1)/(s² − 1)]
   (b) ½ ln[s/√(s² − 4)]                   (d) arcoth(s/2) − ½ s ln[s/√(s² − 4)]
12 (a) (1/s) ln(1 + 1/s),  (b) (arccot s)/s² + 1/[s(s² + 1)].
13 (a) (s + 1)/(s² + 2s + 5)               (e) 2/[(s + 1)(s² + 2s + 5)]
   (b) 8/(s² − 6s + 25)                    (f) 1/s + 3/(s + 1)² + 6/(s + 2)³ + 6/(s + 3)⁴
   (c) 4(5 − s)/(s² − 4s + 20)             (g) (2/s³ + 10/s² + 25/s) e^{−5s}
   (d) (1 − 5s)/(s² + 2s − 3)              (h) [2/(s+2)³ + 2/(s+2)² + 1/(s+2)] e^{−2−s}
14 (a) F(s) = (−1/3)!/[2(s − 1)^{2/3}] − (−1/3)!/[2(s + 1)^{2/3}], (b) F(s) = arccot[(s + 3)/2].
15 (a) F(s) = 2/s² − (2 + 10s) e^{−5s}/s², (b) F(s) = π(1 + e^{−s})/(s² + π²).
16 (a) F(s) = 4/s³ − 2/s² + (8/s³ + 4/s²) e^{−2s}, (b) F(s) = 2/s − 1/(s − 1) + [4/(s − 1) − 4/s²] e^{−s}.
17  2/s³ − 7/s² + 12/s − (2/s³ − 1/s²) e^{−3s} + (2/s³ + 1/s²) e^{−4s}.
The Inverse Transform 33
Chapter Two
1. Introduction
◮Definition: If F (s) is the Laplace transform of f (t), then f (t) is called the inverse Laplace
transform of F(s), and is denoted f(t) = L^{-1}[F(s)]. ◭
    L^{-1}[(s − 2)⁴/s⁶] = t − (8/2!)t² + (24/3!)t³ − (32/4!)t⁴ + (16/5!)t⁵ =
                        = t − 4t² + 4t³ − (4/3)t⁴ + (2/15)t⁵. ◭
    L^{-1}[1/(s − a)^{b+1}] = t^b e^{at}/b!.

Alternatively, one may start from

    b!/(s − a)^{b+1} = (−1)^b (d^b/ds^b) [1/(s − a)] = (−d/ds)^b L[e^{at}];
However, the first method is more general because it works even when b is not integer. ◭
    F(s) = (3/2) L[t² e^t] − (1/3) L[t³ e^t],
But

    s/(s² − 25) = L[cosh 5t],
    5/(s² − 25) = L[sinh 5t];

hence, by the s-shift property (19), it follows

    (s − 1)/[(s − 1)² − 25] = L[e^t cosh 5t],
    15/[(s − 1)² − 25] = L[3e^t sinh 5t],

and finally

    L^{-1}[(s − 16)/(s² − 2s − 24)] = e^t (cosh 5t − 3 sinh 5t). ◭
    (2s³ + s² + 2s + 2)/(s⁵ + 2s⁴ + 2s³) = (2s³ + s² + 2s + 2)/[s³(s² + 2s + 2)].

Splitting the numerator,

    (2s³ + s² + 2s + 2)/[s³(s² + 2s + 2)] = 2s³/[s³(s² + 2s + 2)] + (s² + 2s + 2)/[s³(s² + 2s + 2)] =
                                          = 2/(s² + 2s + 2) + 1/s³.

Completing the square in the first denominator,

    F(s) = 2/[(s + 1)² + 1] + 1/s³,
◮Example 43 Find f(t), if F(s) = s/√((s + 4)⁵).

Solution 1: Rewrite F(s):

    F(s) = (s + 4 − 4)/√((s + 4)⁵) = 1/√((s + 4)³) − 4/√((s + 4)⁵).

The inverse-transform of the right-hand side may be found by applying the shift theorem (19),
which reduces it to a pair of basic transforms of the form (9).

Solution 2: Alternatively, using the derivative rule,

    L^{-1}[s/√((s + 4)⁵)] = (d/dt) L^{-1}[1/√((s + 4)⁵)] = (d/dt) [e^{−4t} t^{3/2}/(3/2)!] =
                          = −4e^{−4t} t^{3/2}/(3/2)! + (3/2) e^{−4t} t^{1/2}/(3/2)!.
Note that this step relies also on the obvious fact that limt→0+ e−4t t3/2 = 0 (Why?). After
simplifications, this result becomes identical to the one obtained before. ◭
where both the numerator and the denominator are polynomials in s, and†
Also, suppose initially that D(s) may be written as the product of m linear factors, i.e.,
This means that D(s) has only simple roots and there are m of them, where m = degree (D); it
also means that when (21) is expanded out in terms of powers of s, the coefficient of the leading
term (which is sm ) is exactly 1.
Under these assumptions, one may write
    N(s)/D(s) = c₁/(s − r₁) + c₂/(s − r₂) + · · · + c_m/(s − r_m),        (22)
and the constants c1 . . . cm are found by Heaviside’s “cover-up” method.† Here is how Heaviside’s
method works: multiply both sides of (22) by (s − rk ); we get
    (s − r_k)N(s)/D(s) = c₁(s − r_k)/(s − r₁) + c₂(s − r_k)/(s − r₂) + · · · + c_k(s − r_k)/(s − r_k) + · · · + c_m(s − r_k)/(s − r_m).
Now simplify the k-th fraction on the right-hand side, which is clearly equal to 1. The resulting
formula is true for every s; in particular, if s = rk , the right-hand side is
R. H. S. = c1 · 0 + c2 · 0 + · · · + ck · 1 + · · · + cm · 0 = ck .
    L. H. S. = (s − r_k)N(s)/[(s − r₁)(s − r₂) · · · (s − r_k) · · · (s − r_m)].
Simplifying the common factor (s − rk ) is tantamount to “covering up” the same factor in the
denominator of the original fraction (hence the slang name): therefore
    N(r_k)/[(r_k − r₁)(r_k − r₂) · · · (r_k − r_{k−1})(r_k − r_{k+1}) · · · (r_k − r_m)] = c_k.        (23)
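Formula (23) is mechanical enough to automate. Here is a minimal Python sketch (not part of the original notes; the function name `cover_up` is ours), using exact rational arithmetic:

```python
from fractions import Fraction

def cover_up(num, roots):
    """Coefficients c_k of N(s)/prod_j(s - r_j), all roots simple,
    by formula (23): c_k = N(r_k) / prod_{j != k} (r_k - r_j)."""
    coeffs = []
    for k, rk in enumerate(roots):
        denom = Fraction(1)
        for j, rj in enumerate(roots):
            if j != k:
                denom *= rk - rj
        coeffs.append(num(rk) / denom)
    return coeffs

# (s - 16)/((s - 6)(s + 4)), as in the next worked example
print(cover_up(lambda s: s - 16, [Fraction(6), Fraction(-4)]))
# [Fraction(-1, 1), Fraction(2, 1)]
```

The same call with four roots reproduces the four-term expansion worked out further below.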
◮Example 45 Solve the differential equation ẍ − 2ẋ − 24x = 0, given x(0+) = 1, and
ẋ(0+) = −14.
Solution: The transformed equation is

    s²X − s + 14 − 2(sX − 1) − 24X = 0,

or

    (s² − 2s − 24)X = s − 16.

Hence

    X = (s − 16)/[(s − 6)(s + 4)] = c₁/(s − 6) + c₂/(s + 4).
Now, “covering up” the factor (s − 6) in the expression in the middle and setting s = 6, we get

    c₁ = (6 − 16)/(6 + 4) = −1.

Similarly, covering up (s + 4) and setting s = −4, we get

    c₂ = (−4 − 16)/(−4 − 6) = 2.
It follows
    X = −1/(s − 6) + 2/(s + 4),
and finally
x = −e6t + 2e−4t .
as expected. ◭
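A quick numerical sanity check (a sketch, not part of the original notes): the solution just found can be tested against the differential equation with central finite differences.

```python
import math

def x(t):
    # candidate solution from the example above
    return -math.exp(6*t) + 2*math.exp(-4*t)

h = 1e-4
for t in (0.1, 0.5, 1.0):
    xdot = (x(t + h) - x(t - h)) / (2*h)            # central difference for x'
    xddot = (x(t + h) - 2*x(t) + x(t - h)) / h**2   # central difference for x''
    # residual of xddot - 2*xdot - 24*x should vanish up to discretisation error
    assert abs(xddot - 2*xdot - 24*x(t)) < 1e-2 * (1 + abs(x(t)))

# initial data: x(0+) = 1, x'(0+) = -14
assert abs(x(0.0) - 1.0) < 1e-12
assert abs((x(h) - x(-h)) / (2*h) + 14.0) < 1e-4
```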
    (s³ − 2s² + 3s − 5)/[s(s − 1)(s − 2)(s + 3)] = c₁/s + c₂/(s − 1) + c₃/(s − 2) + c₄/(s + 3).
cover up s:        c₁ = (0³ − 2·0² + 3·0 − 5)/[(0 − 1)(0 − 2)(0 + 3)] = −5/6
cover up s − 1:    c₂ = (1³ − 2·1² + 3·1 − 5)/[1·(1 − 2)(1 + 3)] = −3/(−4) = 3/4
cover up s − 2:    c₃ = (2³ − 2·2² + 3·2 − 5)/[2·(2 − 1)(2 + 3)] = 1/10
cover up s + 3:    c₄ = [(−3)³ − 2·(−3)² + 3·(−3) − 5]/[(−3)(−3 − 1)(−3 − 2)] = −59/(−60) = 59/60.
It follows that
    (s³ − 2s² + 3s − 5)/[s(s − 1)(s − 2)(s + 3)] = −5/(6s) + 3/[4(s − 1)] + 1/[10(s − 2)] + 59/[60(s + 3)]. ◭
One more comment, very useful: go back to (22). If we let s → ∞, we certainly get 0 = 0,
because the degree of D(s) is assumed to be greater than the degree of N (s) by at least one
unit.† However, if we multiply both sides by s and then let s → ∞, we find that the left-hand
side may approach a finite nonzero limit, or zero; and on the right-hand side we find m limits
which may all be done by inspection. This procedure, called “testing the transform at infinity”,
may be used
(i) as a quick numerical check on the calculations, or
(ii) to find a coefficient, when all but one have been computed.
s3 − 2s2 + 3s − 5 c1 c2 c3 c4
= + + + .
s(s − 1)(s − 2)(s + 3) s s−1 s−2 s+3
s(s3 − 2s2 + 3s − 5) s s s s
= c1 + c2 + c3 + c4 .
s(s − 1)(s − 2)(s + 3) s s−1 s−2 s+3
Note that:
s4 + lower powers of s
L. H. S. = :
s4 + lower powers of s
hence
lim [L. H. S.] = 1.
s→∞
By inspection,
lim [R. H. S.] = c1 + c2 + c3 + c4 .
s→∞
And indeed

    1 = −5/6 + 3/4 + 1/10 + 59/60,
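The check is easy to reproduce with exact rational arithmetic (a sketch, not part of the original notes):

```python
from fractions import Fraction as F

# coefficients found by the cover-up method in the example above
coeffs = [F(-5, 6), F(3, 4), F(1, 10), F(59, 60)]

# multiplying (22) by s and letting s -> infinity: the left-hand side tends
# to 1 (degrees differ by exactly one), the right-hand side to the sum
assert sum(coeffs) == 1
```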
† As a rule, before embarking on a partial fractions expansion, you should always verify that
this is the case.
    (3s − 2)/[(s + 5)(s − 1)²] = c/(s + 5) + b₁/(s − 1) + b₂/(s − 1)².
Covering up (s + 5) and setting s = −5:

    c = [3·(−5) − 2]/(−5 − 1)² = −17/36.

Covering up (s − 1)² and setting s = 1:

    b₂ = (3·1 − 2)/(1 + 5) = 1/6.
Now only b1 remains to be found. Testing the transform at infinity, we see that
    0 = −17/36 + b₁,

hence b₁ = 17/36. ◭
    (s³ + 11)/[(s − 1)²(s + 2)²] = c₁/(s − 1) + c₂/(s − 1)² + b₁/(s + 2) + b₂/(s + 2)².

Covering up (s − 1)² and setting s = 1:

    c₂ = (1 + 11)/(1 + 2)² = 4/3.

Covering up (s + 2)² and setting s = −2:

    b₂ = (−8 + 11)/(−2 − 1)² = 1/3.
So far, we have

    (s³ + 11)/[(s − 1)²(s + 2)²] = c₁/(s − 1) + 4/[3(s − 1)²] + b₁/(s + 2) + 1/[3(s + 2)²].        (24)

Testing the transform at infinity, we get

    1 = c₁ + b₁.
We need one more bit of information; we get it by testing one numerical value of s in (24): any
value not used so far would work, but s = 0 seems to be easy enough. We get
    11/4 = c₁/(−1) + 4/3 + b₁/2 + 1/12,

or

    4/3 = −c₁ + b₁/2,
hence b1 = 14/9, c1 = −5/9. ◭
Broadly speaking, finding the coefficients in the presence of multiple roots requires more work,
be it with pencil and paper, or computer time. Several “generalized Heaviside’s methods” have
been devised to handle multiple roots, but none of them ultimately avoids a fair amount of
tedious calculations. A good discussion may be found in G. Doetsch, Guide to the applications
of the Laplace and Z-transforms, van Nostrand (1971).
One method that’s easy to remember but not particularly fast, consists of moving terms with
known coefficients from the right-hand side to the left-hand side, rearranging and simplifying.
It is best described by an example.
    16/[(s² − 3s + 2)s⁴] = 16/[(s − 1)(s − 2)s⁴] = A/(s − 2) + B/(s − 1) + C₁/s + C₂/s² + C₃/s³ + C₄/s⁴.
By the cover-up method, A = 1, B = −16, and C₄ = 16/[(0 − 1)(0 − 2)] = 8. Testing the
transform at infinity then gives

    0 = 1 − 16 + C₁,

hence C₁ = 15. So far, we have
    16/[(s² − 3s + 2)s⁴] = 1/(s − 2) − 16/(s − 1) + 15/s + C₂/s² + C₃/s³ + 8/s⁴.
Now, we move the term 8/s4 across to the left-hand side. It follows that
    16/[(s² − 3s + 2)s⁴] − 8/s⁴ = 1/(s − 2) − 16/(s − 1) + 15/s + C₂/s² + C₃/s³.

The left-hand side simplifies to (−8s + 24)/[(s² − 3s + 2)s³]; covering up s³ and setting s = 0,
we get

    C₃ = 24/2 = 12.
Repeat the procedure. Move the term 12/s3 to the left-hand side: it follows that
    (−8s + 24)/[(s² − 3s + 2)s³] − 12/s³ = 1/(s − 2) − 16/(s − 1) + 15/s + C₂/s².

The left-hand side now simplifies to (−12s + 28)/[(s² − 3s + 2)s²]; covering up s² and setting
s = 0, we get

    C₂ = 28/2 = 14.
All the coefficients have been found:
    16/[(s² − 3s + 2)s⁴] = 1/(s − 2) − 16/(s − 1) + 15/s + 14/s² + 12/s³ + 8/s⁴.

So, finally, f(t) = e^{2t} − 16e^t + 15 + 14t + 6t² + (4/3)t³. ◭
Another method, which at first seems simple, is to plug in as many values of s as there are missing
coefficients. These values may be freely chosen, as long as none of them coincides with a root
of D(s). In this way one gets a system of n linear equations with n unknowns. The drawback
of this method is that the amount of work needed to solve a system of n linear equations grows
(for large n) like n³; it can be rather laborious even for n = 3.
Yet another approach to the previous example is to apply rule (16) [division by s is associated
with integration with respect to t]. We’ll come back to this idea in section 5.
The so-called calculus of residues is probably much better than any of the methods described
so far, but since it requires some knowledge of the theory of complex variables, we leave it for a
more advanced course.
4. Partial Fractions: Complex Roots
In principle, if D(s) has complex roots, the methods described in the sections 2–3 are still
applicable. The only difference is that the coefficients of the expansion are, in general, complex.
Complex arithmetic is inherently more time-consuming than real arithmetic.† In applications,
real trigonometric functions are generally preferable to complex exponentials, though a good
case may sometimes be made for using the latter.
In engineering applications N (s) and D(s) are virtually certain to be real polynomials. It
may be shown, then, that the complex roots of D(s), if present, come in complex conjugate pairs,
and the corresponding coefficients in the partial fractions expansion are also complex conjugate.
To fix the ideas, consider a simple case where D(s) is real and has a complex root a + ib
with multiplicity 1. Then a − ib is also a root, and the expansion has the form
    N(s)/D(s) = (c + id)/(s − a − ib) + (c − id)/(s − a + ib) + (other terms).

Combining the two conjugate fractions over a common (real) denominator, this may be rewritten
in the form

    N(s)/D(s) = (Bs + C)/[(s − a)² + b²] + (other terms),
where B and C are real, and may be expressed in terms of the “old” constants a,b, c and d
(convince yourself of this). However there is no need to find c and d, since one may find B and
C directly. The following examples illustrate this method.
† Both when done by a computer and when done by pencil and paper.
(by Heaviside’s method: nothing new here). To find B and C, we may either continue with the
“cover-up” method, or use a couple of shortcuts.
Method 1: By the “cover-up” method. Multiply both sides by s² + 4 and simplify. If we now set
s = ±i2, we make s² + 4 = 0. Either root may be used; at the end the final
results will be the same. For instance, setting s = i2 we get

    (−8 − i4 + 1)/(i2 − 1) = 0 + i2B + C    =⇒    (−1 + i18)/5 = i2B + C.
Hence, separating real and imaginary part, we find B = 9/5 and C = −1/5. So, finally,
Method 2: Use the test at infinity. Multiply both sides of the expansion by s and let s → ∞.
We get
    (2s³ + · · ·)/(s³ + · · ·) = (s/5)/(s − 1) + (Bs² + Cs)/(s² + 4),
and hence (in the limit of s → ∞)

    2 = 1/5 + B    =⇒    B = 9/5.

Then, substituting s = 0,

    1/(−4) = (1/5)/(−1) + C/4    =⇒    C = −1/5,
and finally
    (2s² − 2s + 1)/[(s − 1)(s² + 4)] = (1/5)/(s − 1) + [(9s − 1)/5]/(s² + 4).
We get the same expansion, as expected. ◭
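Complex arithmetic of Method 1 is easy to reproduce with Python’s built-in complex numbers (a sketch, not part of the original notes):

```python
# Evaluate the (s^2 + 4)-free part of F(s) at the root s = 2j;
# by the cover-up argument it equals i2B + C.
s = 2j
lhs = (2*s**2 - 2*s + 1) / (s - 1)   # = (-1 + 18j)/5
B = lhs.imag / 2                     # imaginary part gives 2B
C = lhs.real                         # real part gives C
assert abs(B - 9/5) < 1e-12 and abs(C + 1/5) < 1e-12
```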
    (s² − 16s + 23)/[(s² − 2s + 5)(s − 1)(s − 3)] = A/(s − 1) + B/(s − 3) + (Cs + D)/(s² − 2s + 5)
By the cover-up method,

    A = (1 − 16 + 23)/[(1 − 2 + 5)(1 − 3)] = −1,
    B = (9 − 48 + 23)/[(9 − 6 + 5)(3 − 1)] = −1.

Method 1: Multiply both sides by s² − 2s + 5 and simplify:

    (s² − 16s + 23)/[(s − 1)(s − 3)] = A(s² − 2s + 5)/(s − 1) + B(s² − 2s + 5)/(s − 3) + Cs + D,
and set s = 1 + i2 (recall that s² − 2s + 5 = 0 if s = 1 ± i2). We get

    (4 − i28)/(−4 − i4) = C + D + i2C,

hence

    (1 − i7)/(−1 − i) = (1 − i7)(−1 + i)/2 = 3 + i4 =
                      = C + D + i2C.
Separating the imaginary part we get
i4 = i2C,
hence C = 2, and finally D = 3 − C = 1.
Method 2: By the test at infinity. Note that F (s) is of second degree in the numerator, fourth
degree in the denominator, hence sF (s) → 0 as s tends to infinity. On the other hand,
A B Cs + D
s· + + 2 → A+B+C as s → ∞.
s − 1 s − 3 s − 2s + 5
Since we already found that A = −1 and B = −1, we get immediately that C = 2. We still
miss D, but when only one coefficient is missing, we may find it by inspection, substituting for
s any number that hasn’t been used. Again, s = 0 is the obvious choice. We get immediately
    23/[5(−1)(−3)] = 1 + 1/3 + D/5,
and hence D = 1. ◭
We get the impression that the second method is slightly faster than Heaviside’s “cover-up”
method. The problem is, it may be used only once. For a partial-fractions expansion with two
complex roots or more, you must use the cover-up method at least once, and finish the job by
the method described above.
    (s² − 4s − 10)/[(s² − 2s + 10)(s² + 4)] = (As + B)/(s² − 2s + 10) + (Cs + D)/(s² + 4).

Multiplying both sides by s² + 4, simplifying and setting s = i2, we get

    [(i2)² − 4(i2) − 10]/[(i2)² − 2(i2) + 10] = C(i2) + D,

or

    (−14 − i8)/(6 − i4) = i2C + D.
Simplifying the left-hand side, we get

    −1 − i2 = i2C + D,

and finally (comparing real and imaginary part) C = −1, D = −1. Now we “test at infinity”:
    lim_{s→∞} s · (s² − 4s − 10)/[(s² − 2s + 10)(s² + 4)] = 0,

while

    lim_{s→∞} s · [(As + B)/(s² − 2s + 10) + (Cs + D)/(s² + 4)] = A + C.
But we already know that C = −1, hence A = 1. Finally, substituting s = 0 (any real number
would do), we get
    −10/[(10)(4)] = B/10 + D/4:
having found before that D = −1, we deduce that B = 0. ◭
    (s³ + 1)/[(s − 1)(s² + 1)²] = A/(s − 1) + (Bs + C)/(s² + 1) + (Ds + E)/(s² + 1)².
By Heaviside’s method,
    (1 + 1)/(1 + 1)² = A + 0 + 0    =⇒    A = 1/2.
Continuing with Heaviside’s method, we multiply through by (s2 + 1)2 and simplify: we get
    (s³ + 1)/(s − 1) = A(s² + 1)²/(s − 1) + (Bs + C)(s² + 1) + Ds + E.
Substituting s = i we get
    (−i + 1)/(i − 1) = 0 + 0 + iD + E.
The left-hand side of this equation is equal to −1, hence it follows immediately that D = 0 and
E = −1. Testing at infinity we get the equation
    (s⁴ + · · ·)/(s⁵ + · · ·) = As/(s − 1) + (Bs² + Cs)/(s² + 1) + (Ds² + Es)/(s⁴ + · · ·);
as s → ∞ this yields 0 = A+B. Having already found that A = 1/2, we deduce that B = −1/2.
At this point, only C remains to be found: so, we substitute s = 0 in the expansion. We obtain

    −1 = −A + C + E,

whence C = 1/2. ◭
Solution: Instead of doing a direct partial fractions expansion, which would require some complex
arithmetic, we note that
    1/(s² + 49) = L[(1/7) sin 7t].

Hence

    1/[s(s² + 49)] = L[∫_0^t (sin 7τ)/7 dτ].

Integrating, we get

    L^{-1}[1/(s(s² + 49))] = (1 − cos 7t)/49. ◭
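The inverse transform just obtained can be spot-checked with a crude numerical Laplace integral (a sketch, not part of the original notes; the helper `laplace_num` is ours):

```python
import math

def laplace_num(f, s, T=20.0, n=20000):
    """Crude numerical Laplace transform: trapezoid rule for
    ∫_0^T f(t) e^{-st} dt, assuming the tail beyond T is negligible."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s*T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s*t)
    return total * h

f = lambda t: (1 - math.cos(7*t)) / 49
s = 2.0
assert abs(laplace_num(f, s) - 1/(s*(s**2 + 49))) < 1e-6
```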
and

    L^{-1}[1/(s + 1)] = e^{−t}.
It follows by (16)

    L^{-1}[1/(s(s + 1))] = ∫_0^t e^{−τ} dτ = 1 − e^{−t},

    L^{-1}[1/(s²(s + 1))] = ∫_0^t (1 − e^{−τ}) dτ = t − 1 + e^{−t},

    L^{-1}[1/(s³(s + 1))] = ∫_0^t (τ − 1 + e^{−τ}) dτ = ½t² − t + 1 − e^{−t}.
So, finally,
    L^{-1}[(3s − 2)/(s³(s + 1))] = 3(t − 1 + e^{−t}) − 2(½t² − t + 1 − e^{−t}) =
                                 = −t² + 5t − 5 + 5e^{−t}. ◭
◮Example 57 Find L−1 [50/s2 (s2 + 6s + 10)], without using partial fractions.
Solution: Completing the square in the denominator, we get
    50/(s² + 6s + 10) = 50/[(s + 3)² + 1].
From

    L^{-1}[50/((s + 3)² + 1)] = 50e^{−3t} sin t
we deduce immediately
    L^{-1}[50/(s(s² + 6s + 10))] = 50 ∫_0^t e^{−3τ} sin τ dτ = 5 − e^{−3t}(5 cos t + 15 sin t),

and finally

    L^{-1}[50/(s²(s² + 6s + 10))] = ∫_0^t [5 − e^{−3τ}(5 cos τ + 15 sin τ)] dτ =
                                  = 5t − 3 + e^{−3t}(3 cos t + 4 sin t).
Note that the integrals in the equations above may be done by parts or by Euler’s formula; the
latter is more advisable. ◭
In other problems it may happen that L−1 [F ′ (s)] may be found more easily than L−1 [F (s)].
In these cases, we obtain L−1 [F (s)] by using property (18)—integration with respect to s
corresponds to division by t.
Now you see the light: integration with respect to s corresponds to division by t, and
    1/(s² − 1) = L[sinh t].

Differentiating F(s) and changing sign,

    −F′(s) = 2s/(s⁴ − 1).    [s > 1]
Noting the partial fractions expansion
    2s/(s⁴ − 1) = 2s/[(s² − 1)(s² + 1)] = s/(s² − 1) − s/(s² + 1),

we get

    −F′(s) = s/(s² − 1) − s/(s² + 1) = L[cosh t − cos t].
It follows immediately that
    F(s) = ∫_s^∞ [σ/(σ² − 1) − σ/(σ² + 1)] dσ,
◮Example 60 Find f (t), if F (s) = ln[(s − a)/(s − b)], where 0 < a < b.
Solution: Differentiating with respect to s and changing sign, we get
    −F′(s) = 1/(s − b) − 1/(s − a) = L[e^{bt} − e^{at}].    [s > b]
It follows immediately that
    F(s) = ∫_s^∞ [1/(σ − b) − 1/(σ − a)] dσ,
◮Example 61 Find L−1 [s/(s2 + ω 2 )2 ] and hence L−1 [1/(s2 + ω 2 )2 ] (this is a classic).
Solution: consider the identity
    L[(sin ωt)/ω] = 1/(s² + ω²).

Differentiating with respect to s, and making use of (17), we get

    L[(t sin ωt)/ω] = −(d/ds) [1/(s² + ω²)] = 2s/(s² + ω²)².

For the second transform, note that

    1/(s² + ω²)² = (1/s) · s/(s² + ω²)².
Part of this problem has already been solved by means of Euler’s formula: see example 17 in
chapter 1. The method shown here is less elegant, but still instructive. Both methods may be
generalized to transforms of the form 1/(s² + ω²)ⁿ or s/(s² + ω²)ⁿ, but the calculations
become laborious as n increases. ◭
◮Example 62 Continue example 54, and find f(t) if F(s) = (s³ + 1)/[(s − 1)(s² + 1)²].
Solution: In example 54 it was found that
    F(s) = (1/2)/(s − 1) − (1/2)(s − 1)/(s² + 1) − 1/(s² + 1)²;
6. Miscellaneous Examples
s2 − 10s + 29 = (s − 5)2 + 4.
Now write
    (s + 1)/(s² − 10s + 29)² = (s − 5 + 6)/[(s − 5)² + 4]².
Define
    G(s) = (s + 6)/(s² + 4)².
By definition, F (s) = G(s − 5); hence, by the s-shift property (19), f (t) = e5t g(t). But g(t) may
be found immediately, substituting ω = 2 in the results of example 61:
    L^{-1}[s/(s² + 4)²] = ¼ t sin 2t,
    L^{-1}[6/(s² + 4)²] = ⅜ sin 2t − ¾ t cos 2t.
Hence, we get

    g(t) = ¼ t sin 2t + ⅜ sin 2t − ¾ t cos 2t,

and finally

    f(t) = e^{5t} (¼ t sin 2t + ⅜ sin 2t − ¾ t cos 2t). ◭
Hence, by the t-shift property (20), the second piece gives us:

    L^{-1}[e^{−2πs}/(s² + ¼)] = u(t − 2π) · 2 sin ½(t − 2π) =
                              = u(t − 2π) · 2 sin(½t − π) =
                              = −2u(t − 2π) · sin ½t = { −2 sin ½t   if t ≥ 2π,
                                                         0           if t ≤ 2π.

Combining with the first piece, we get

    f(t) = { 2 sin ½t   if 0 ≤ t ≤ 2π,
             0          everywhere else. ◭
    (x + 19)/[(x + 4)(x + 9)] = A/(x + 4) + B/(x + 9).
◮Example 66 Find the inverse transform of F(s) = (2s² − 3)/[s³(s² + 1)].
Solution: Ignore for a moment the fact that F (s) contains an odd power of s. Consider first the
expansion
    (2x − 3)/[x(x + 1)] = A/x + B/(x + 1).
Heaviside’s method gives immediately A = −3 and B = 5. Hence, substituting x = s2 , we get
that
    (2s² − 3)/[s²(s² + 1)] = −3/s² + 5/(s² + 1)
and hence that
    L^{-1}[(2s² − 3)/(s²(s² + 1))] = −3t + 5 sin t.
The transform above is not quite F (s), but if we divide it by s (i.e., turn s2 into s3 ) we get
precisely F(s). Now, division by s corresponds to integration with respect to t; recall (16) from chapter 1,
section 6. Therefore,
    L^{-1}[(2s² − 3)/(s³(s² + 1))] = ∫_0^t (−3τ + 5 sin τ) dτ = −(3/2)t² − 5 cos t + 5. ◭
By the methods of section 1.8, in particular equation (18), it’s easy to see that
    L[(1 − e^{−2t})/t] = ln[(s + 2)/s].
    g(t) = (1 − e^{−2t})/t    ⇐⇒    G(s) = ln[(s + 2)/s],

and note that g(0+) = 2 (this is a good revision example on de l’Hospital’s theorem), we get
immediately that

    L[ġ] = s G(s) − g(0+) = s ln[(s + 2)/s] − 2.
On the right-hand side we find precisely what we need, that is, F (s). So, in the end:
7. Convolution
The Laplace transform of a sum is equal to the sum of the corresponding Laplace transforms.
Unfortunately, a similar rule for multiplication does not hold: the transform of a product is not
equal to the product of the transforms.
Convolution is an operation that allows us to deal with products of Laplace transforms in
a relatively simple way.
◮Definition: Let f (t) and g(t) be two functions that possess a Laplace transform. The convo-
lution of f (t) and g(t), denoted f ∗ g(t), is defined as
    f ∗ g(t) = ∫_0^t f(τ) g(t − τ) dτ.        (25)
◭
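Definition (25) translates directly into a numerical routine (a sketch, not part of the original notes; the helper `conv` is ours):

```python
def conv(f, g, t, n=2000):
    """(f * g)(t) = integral from 0 to t of f(tau) g(t - tau) dtau,
    approximated with the trapezoid rule."""
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for k in range(1, n):
        tau = k * h
        total += f(tau) * g(t - tau)
    return total * h

one = lambda tau: 1.0
ident = lambda tau: tau
# 1 * t at t = 2 equals t^2/2 = 2 (cf. example 76), and the product commutes
assert abs(conv(one, ident, 2.0) - 2.0) < 1e-9
assert abs(conv(ident, one, 2.0) - conv(one, ident, 2.0)) < 1e-12
```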
f ∗ (c1 g + c2 h) = c1 f ∗ g + c2 f ∗ h,
where c1 and c2 are constant quantities. It also has the commutative property of “usual” multi-
plication: this is easy to show, so let’s see it. Substituting
    τ = t − u,    dτ = −du,
in the definition 25, and noting that τ = 0 corresponds to u = t, and τ = t to u = 0, we get that
    f ∗ g = ∫_0^t f(τ) g(t − τ) dτ = −∫_t^0 f(t − u) g(u) du =
          = ∫_0^t g(u) f(t − u) du =
          = g ∗ f,
◮Example 69 Find the Laplace transform of f(t) = ∫_0^t τ √(t − τ) dτ.

Solution: Note that f(t) = t ∗ √t; therefore

    L[f(t)] = L[t] · L[√t].

It follows immediately that

    L[∫_0^t τ √(t − τ) dτ] = L[t ∗ √t] = L[t] · L[√t] = (1/s²) · (1/2)!/s^{3/2} = √π/(2s^{7/2}). ◭
◮Example 70 Find the Laplace transform of f(t) = ∫_0^t (t − τ)³ cosh τ dτ.

Solution: Note that

    ∫_0^t (t − τ)³ cosh τ dτ = t³ ∗ cosh t;

since

    L[t³] = 3!/s⁴,    L[cosh t] = s/(s² − 1),

it follows immediately that

    F(s) = (3!/s⁴) · s/(s² − 1) = 6/[s³(s² − 1)]. ◭
◮Example 71 Find the Laplace transform of f(t) = ∫_0^t τ² e^{−4τ} dτ using convolution.

Solution: The integral in this example is not a convolution product. Hence, we write

    f(t) = ∫_0^t τ² e^{−4τ} dτ = e^{−4t} ∫_0^t τ² e^{4(t−τ)} dτ = e^{−4t} g(t),
In applications one often needs to identify the original function f (t), knowing its transform F (s).
Therefore, it is also useful to write (26) backwards, in the form
    L^{-1}[F(s) G(s)] = f ∗ g(t),        (27)
as illustrated by the following examples.
    G(s) = 1/s⁵,    H(s) = 1/(s − 1)².
    sin A sin B = ½[cos(A − B) − cos(A + B)].
It follows that

    L^{-1}[1/(s² + ω²)²] = ∫_0^t [cos ω(2τ − t) − cos ωt]/(2ω²) dτ =
                         = [sin ω(2τ − t)/(4ω³)]_0^t − [τ cos ωt/(2ω²)]_0^t =
                         = (2 sin ωt)/(4ω³) − (t cos ωt)/(2ω²).
After simplification, this may be seen to be identical to the second result of example 61. ◭
    u(t − 2) ∗ u(t − 3) = L^{-1}[e^{−5s}/s²] = (t − 5) u(t − 5) = { 0       if t < 5,
                                                                   t − 5   if t ≥ 5.
◮Example 75 Find the convolution of t−1/2 with itself and hence calculate (− 1/2)!.
Solution: We start from the basic transform of tc ; see (9) in chapter 1. With c = −1/2, we get
the equation

    L[1/√t] = (−1/2)!/√s.
Hence, by convolution, we get that

    L[(1/√t) ∗ (1/√t)] = [(−1/2)!/√s] · [(−1/2)!/√s] = [(−1/2)!]²/s.
The integral on the left-hand side is elementary: substituting τ = tx², dτ = 2tx dx, we get
immediately

    ∫_0^1 2 dx/√(1 − x²) = π = [(−1/2)!]²,
and finally (−1/2)! = √π. ◭
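The convolution integral above can be spot-checked numerically (a sketch, not part of the original notes; the helper `conv_half` is ours). The midpoint rule is used because both endpoints of the integrand are integrable singularities:

```python
import math

def conv_half(t, n=200_000):
    """Midpoint rule for (t^{-1/2} * t^{-1/2})(t) = integral of
    1/sqrt(tau*(t - tau)) over [0, t]; midpoints avoid the endpoints."""
    h = t / n
    return sum(h / math.sqrt((k + 0.5) * h * (t - (k + 0.5) * h)) for k in range(n))

# the convolution is the constant pi, independently of t
assert abs(conv_half(1.0) - math.pi) < 0.01
assert abs(conv_half(2.0) - math.pi) < 0.01
```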
You might be surprised to learn that f (t) ≡ 1 is not the “unit” of convolution product; in other
words 1 ∗ f (t) is not in general equal to f (t). Indeed, the problem of finding a function δ(t) with
the property that δ ∗ f (t) = f (t) for every f (t) leads straight into operational calculus, which
you’ll encounter at a later stage of your studies.
◮Example 76 Find 1 ∗ t.
Solution: Since 1 ∗ t = t ∗ 1, we get immediately:

    t ∗ 1 = ∫_0^t τ dτ = ½t².

In other words, 1 ∗ t = ½t². ◭
8. Additional Examples
Using the Laplace transform it is possible to calculate some definite integrals that cannot be
done by elementary calculus. Broadly speaking, all the following examples use the same trick,
i.e., an interchange (done at the right time) of the order of integration. As mentioned before, a
rigorous justification of this step (though not difficult) is beyond the scope of these notes.
◮Example 77 Calculate ∫_0^∞ cos tx dx/(1 + x²), where t is a positive parameter.
Solution: Define an auxiliary function a(t) as follows:
    a(t) = ∫_0^∞ cos tx/(1 + x²) dx.
This may be seen as a double integral over the whole first quadrant of the xt plane (an improper
integral, of course). Swapping the order of integration, we get:
    A(s) = ∫_0^∞ [1/(1 + x²)] ∫_0^∞ e^{−st} cos tx dt dx.
This integral is elementary. Evaluating it (partial fractions are required), one gets
    A(s) = [s/(s² − 1)] [arctan x − (1/s) arctan(x/s)]_0^∞ =
         = [s/(s² − 1)] · ½π (1 − 1/s) =
         = ½π/(s + 1) = ½π L[e^{−t}].
It follows

    ∫_0^∞ cos tx/(1 + x²) dx = a(t) = ½πe^{−t}.

Corollary: Differentiating this equation with respect to t, we get immediately that

    ∫_0^∞ x sin tx/(1 + x²) dx = −ȧ(t) = ½πe^{−t}. ◭
◮Example 79 Calculate ∫_0^∞ s^{−c} ds/(1 + s), where 0 < c < 1.

Solution: We note that

    1/(s + 1) = L[e^{−t}]
and hence

    ∫_0^∞ s^{−c}/(1 + s) ds = ∫_0^∞ s^{−c} L[e^{−t}] ds =
                            = ∫_0^∞ s^{−c} ∫_0^∞ e^{−t} e^{−st} dt ds.

Swapping the order of integration, and substituting st = u, the right-hand side becomes

    ∫_0^∞ e^{−t} ∫_0^∞ s^{−c} e^{−st} ds dt = ∫_0^∞ e^{−t} ∫_0^∞ (u^{−c}/t^{−c}) e^{−u} du/t dt =
                                            = ∫_0^∞ e^{−t} t^{c−1} ∫_0^∞ u^{−c} e^{−u} du dt.
In particular, with c = 1/2,

    ∫_0^∞ s^{−1/2}/(1 + s) ds = [(−½)!]²,
PROBLEMS
19. Use partial fractions expansion to find the inverse Laplace transform of the following
functions.
(a) 1/(s² + 4s + 5)                          (d) 1/[s²(s² + 1)]
(b) s/(s + 1)²                               (e) s/(s³ + 1)
(c) (s³ + 1)/[(s² − s)(s² − 4)]              (f) (2s + 3)/(s³ + 4s² + 5s)
20. Use partial fractions expansion to find the inverse Laplace transform of the following
functions.
(a) (2s − 1)/(s³ − s)                          (f) 2(7s − 31)/(s³ + 3s² − 25s + 21)
(b) (27 − 12s)/[(s + 4)(s² + 9)]               (g) 27s/[(s + 1)(s − 2)³]
(c) (s³ + 16s − 24)/(s⁴ + 20s² + 64)           (h) (s + 1)/(s² + 2s + 2)²
(d) s/[(s² − 2s + 2)(s² + 2s + 2)]             (i) (s² + 1)/[s(s² + 2)(s² + 3)]
(e) (11s² − 2s + 5)/[(s − 2)(s + 1)(2s − 1)]   (j) (5s² − 18s + 15)/[(s − 1)(s − 2)³]
Shifting
Dirty Tricks
Convolution
24. Find (1 ∗ 1) ∗ sin t and 1 ∗ (1 ∗ sin t), and verify that you get the same answer.
25. Find: (a) et ∗ e−t , (b) t ∗ u(t − 8), (c) t2 ∗ u(t − 5).
26. Show that e^{−t²} ∗ e^{t²} = (sinh t²)/t.
27. Use the convolution theorem to calculate the following Laplace transforms.
    (a) L[∫_0^t (t − τ)⁴ sin 3τ dτ]        (b) L[∫_0^t (t − τ)¹³⁷ e^{−4τ} dτ]
Calculation of Integrals
30. Show that ∫_0^∞ e^{−2t} sin²t/t dt = ¼ ln 2.

31. Show that ∫_0^∞ (e^{−3t} − e^{−6t})/t dt = ln 2.

32. Show that ∫_0^∞ (cos 2t − cos 14t)/t dt = ln 7.

33. Show that ∫_0^∞ x⁴ e^{−x²} dx = 3√π/8.
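Integrals like those in problems 31 and 32 can be spot-checked numerically (a sketch, not part of the original notes; the helper `frullani` is ours, and the exponential version of problem 32 gives the same value ln 7):

```python
import math

def frullani(a, b, T=40.0, n=200_000):
    """Midpoint rule for the integral of (e^{-at} - e^{-bt})/t over [0, T];
    the integrand tends to b - a at t = 0, and the tail beyond T is negligible."""
    h = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += (math.exp(-a*t) - math.exp(-b*t)) / t
    return total * h

assert abs(frullani(3, 6) - math.log(2)) < 1e-3    # problem 31
assert abs(frullani(2, 14) - math.log(7)) < 1e-3   # exponential analogue of problem 32
```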
ANSWERS
18 (a) t⁴/24                               (f) 1 + t − 4√t/√π
   (b) 8 cos 4t                            (g) (4/3) t^{3/2} e^{−3t}
   (c) −4e^{4t/3}                          (h) e^{−t}[cos 6t − (1/6) sin 6t]
   (d) 2 cosh 3t − (5/3) sinh 3t           (i) 2e^{−t}(1 + t)/√(πt)
   (e) 8t^{5/2}/(15√π)                     (j) e^t(1 + 4t + 2t²)
19 (a) e^{−2t} sin t                       (d) t − sin t
   (b) (1 − t) e^{−t}                      (e) (1/3)e^{t/2}[cos(√3 t/2) + √3 sin(√3 t/2)] − (1/3)e^{−t}
   (c) 1/4 − (2/3)e^t + (8/9)e^{2t} + (7/24)e^{−2t}   (f) (1/5)[3 + e^{−2t}(4 sin t − 3 cos t)]
21 (b) f(t) = ½[u(t) − u(t − 2)] = { ½   if 0 < t < 2,
                                     0   everywhere else.
   (c) f(t) = { 0                  if t < 1/3,
                1 − cos(t − 1/3)   if t > 1/3.
   (d) f(t) = { t(t − 3)   if 0 < t < 3,
                0          everywhere else; see example 33.
22 (a) f(t) = { 0         if t is negative,
                sin t     if 0 < t < π,
                sin 2t    if π < t < 2π,
                sin 3t    if t is greater than 2π.
   (b) f(t) = { t         if 0 < t < π,
                2π − t    if π < t < 2π,
                0         everywhere else.
23 (a) cosh t + cos t − 2                   (e) (2 cosh t + 2 cos t − 4)/t
   (b) (e^{−t} − e^{−3t})/t                 (f) 2e^{−2t}(cos t − cos 3t)/t
   (c) (e^{−4t} − cos t)/t                  (g) (sin t)/t
   (d) ∫_0^t (cos 5τ − cos 7τ)/τ dτ         (h) (cos t)/t − (sin t)/t².
Note: The integral in 23(d) cannot be done by elementary calculus.
24 t − sin t.
25 (a) sinh t, (b) ½(t − 8)² · u(t − 8), (c) ⅓(t − 5)³ · u(t − 5).
26 Hint: ∫_0^t e^{−τ²} e^{(t−τ)²} dτ = ?
27 (a) 72/[s⁵(s² + 9)], (b) 137!/[s¹³⁸(s + 4)].
29 (a) e^{−t}(1 − cos 7t)/49, (b) ½t² − ½t + ¼e^t − ¼e^{−t}.
33 Hint: Substitute x2 = u. Recall the definition of generalized factorial.
Linear Differential Equations 63
Chapter Three
1. Linearity
The quantities c2 , c1 and c0 are also called the coefficients of the equation. The special case
where the coefficients are all constant is very important in applications, and we’ll study it in
greater detail.
◮Example 81 The equation 5ẍ − 8ẋ + 3x = sin t is a linear equation with constant coefficients;
we find that c2 = 5, c1 = −8, c0 = 3 and f (t) = sin t. ◭
O x = f,
O (s1 x1 + s2 x2 ) = s1 O x1 + s2 O x2
This equation shows that the system has an infinity of solutions, parametrized by t. One such
solution (corresponding to t = 0) is represented by [ 30 −9 0 ]T , the first vector on the
right-hand side. Other solutions are obtained by fixing t to different values. For example
    [x₁, x₂, x₃]ᵀ = [26, −8, 1]ᵀ, or [−6, 0, 9]ᵀ, or [−10, 1, 10]ᵀ, . . .
are also solutions. Note that the difference between any two of them is a scalar multiple of
[−4 1 1]ᵀ, which in turn is a solution of the corresponding homogeneous system:
    [ 5 17 3 ] [h₁ h₂ h₃]ᵀ = [0 0]ᵀ,        (32)
    [ 3 10 2 ]

whose general solution is

    [h₁, h₂, h₃]ᵀ = t [−4, 1, 1]ᵀ.
We see that the null-space of the matrix for the system (32) is one-dimensional, and its basis
consists of the single vector [−4 1 1]ᵀ.
Observe also that if we complement the system (31) with one auxiliary condition of the
form, say,
x1 = 0,
then the resulting problem has a unique solution, corresponding to t = 7.5:
    [x₁, x₂, x₃]ᵀ = [30, −9, 0]ᵀ + 7.5 [−4, 1, 1]ᵀ = [0, −1.5, 7.5]ᵀ. ◭
Although this example is elementary, it has the same structure as the more advanced problems in
differential equations and systems of equations that we are about to study. Note, in particular,
the difference between the general solution, which contains the free parameter t, and a particular
solution that meets an additional requirement.
It is not surprising that differential equations with constant coefficients and systems of
linear algebraic equations like (31), should be so similar. The link between them is represented
precisely by the Laplace transform, which transforms linear differential equations into “plain”
linear equations.
As a rule, a differential equation is regarded as “solved” if it’s possible to write down the solution
in terms of integrals, even if such integrals cannot be done by elementary calculus.
◮Example 83 The function x = t ∫ (sin t/t²) dt is a solution (not the only one) of the equation
tẋ − x = sin t, as you may check using the product rule. However, ∫ (sin t/t²) dt may not be
done by the rules of elementary calculus. ◭
Broadly speaking, the problem of solving ODEs tends to get more and more complicated as the
order of the equation† increases.
For example, we’ll see that linear equations of order 1 may always be solved, in the sense
given above; on the other hand, there is no general, “all-purpose” method for solving arbitrary
2nd-order linear ODEs like (30), or ODEs of higher order. So, in moving from first-order to
second-order, we already encounter a major complication.
However, for linear ODEs with constant coefficients (recall, they are the equations where the
coefficients c0 , c1 , c2 etc are all constant) it is always possible to solve the equation, regardless
of its order. One way to do that is by the Laplace transform.
◮Example 84 Solve the differential equation ẍ + 7ẋ + 10x = 0, given that x(0+) = 2 and
ẋ(0+) = −1.
Solution: The coefficients are 1, 7 and 10. Applying the Laplace transform, we get:
s2 X − 2s + 1 + 7(sX − 2) + 10X = 0,
which yields
    X = (2s + 13)/(s² + 7s + 10) = (2s + 13)/[(s + 2)(s + 5)] = 3/(s + 2) − 1/(s + 5).
Finally, applying the inverse transform, we get that
x = 3e−2t − e−5t .
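A quick exact check, not part of the original notes: a sum of exponential modes a·e^{rt} solves the homogeneous equation precisely when each r is a characteristic root, and the initial data pin down the amplitudes.

```python
# x(t) = 3e^{-2t} - e^{-5t} as a list of (amplitude, rate) modes
modes = [(3, -2), (-1, -5)]

# each rate must be a root of the characteristic polynomial r^2 + 7r + 10
assert all(r*r + 7*r + 10 == 0 for _, r in modes)

# initial conditions: x(0+) = sum of amplitudes, x'(0+) = sum of a*r
assert sum(a for a, _ in modes) == 2
assert sum(a*r for a, r in modes) == -1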
◮Example 85 Solve the differential equation ÿ + 9y = sin 2t, given that y(0+) = 7 and
ẏ(0+) = −3.
Solution: The coefficients are 1, 0 and 9. Applying the Laplace transform to both sides of the
equation, we get that L [y] = Y , L [ÿ] = s2 Y − 7s + 3 and L [sin 2t] = 2/(s2 + 4). Hence,
    s²Y − 7s + 3 + 9Y = 2/(s² + 4),

    Y = 2/[(s² + 4)(s² + 9)] + (7s − 3)/(s² + 9).
† This is, by definition, the order of the highest derivative of the solution.
    2/[(s² + 4)(s² + 9)] = (2/5)/(s² + 4) − (2/5)/(s² + 9).

Hence,

    Y = (2/5)/(s² + 4) − (17/5)/(s² + 9) + 7s/(s² + 9).
Finally, inverting the transform, we find that our solution is

    y = (1/5) sin 2t − (17/15) sin 3t + 7 cos 3t. ◭
◮Definition: The set of all solutions of a homogeneous linear differential equation is called the
solution space of the equation. ◭
It is also possible to prove that the dimension of the solution space is always equal to the order
of the equation: a third-order linear homogeneous ODE has always a three-dimensional solution
space, a fourth-order has a four-dimensional solution space, and so on.
But remember that this principle applies only to homogeneous linear ODEs; the solution
set of a non-homogeneous linear ODE is never a vector space.
To make this point clear, let us go back to examples 84 and 85, which are both second-order
problems.
◮Example 86 Check that any linear combination of e−2t and e−5t is a solution of the equation
ẍ + 7ẋ + 10x = 0.
Solution: Indeed, if we write
x = Ae−2t + Be−5t
for arbitrary A and B, then we see that
ẋ = −2Ae−2t − 5Be−5t ,
ẍ = 4Ae−2t + 25Be−5t .
Substituting into the equation gives (4 − 14 + 10)Ae^{−2t} + (25 − 35 + 10)Be^{−5t} = 0; so,
for any value of A and B, we get 0 = 0, which completes the check. ◭
◮Example 87 Go back to example 85, where the equation was ÿ + 9y = sin 2t. Take two
solutions: for instance y₁ = sin 3t + (1/5) sin 2t and y₂ = cos 3t + (1/5) sin 2t are solutions. First of all,
we verify that y₁ and y₂ are indeed solutions. We have:

    ÿ₁ = −9 sin 3t − (4/5) sin 2t,    ÿ₂ = −9 cos 3t − (4/5) sin 2t,

so

    ÿ₁ + 9y₁ = (−9 + 9) sin 3t + (−4/5 + 9/5) sin 2t = sin 2t,
    ÿ₂ + 9y₂ = (−9 + 9) cos 3t + (−4/5 + 9/5) sin 2t = sin 2t.
So, both y1 and y2 yield the identity sin 2t = sin 2t, which means they are solutions. However,
if we substitute a simple linear combination such as
y = 7y1 − 3y2
into the equation, we get (7 − 3) sin 2t = sin 2t, i.e., 4 = 1, which is absurd. This shows that our
linear combination of solutions is not a solution. ◭
The general solution of a homogeneous equation with constant coefficients may be always found
following a few simple rules. We’ll first look at some numeric examples, after which it will be
easy to see the general procedure.
s²X − sa − b − 5(sX − a) − 14X = 0.
A = (1/9)(2a + b),   B = (1/9)(7a − b) :
clearly, A and B are also free and independent. Therefore, we may write
X = A/(s − 7) + B/(s + 2),
where A and B are free, and finally
s²X − sa − b − 4(sX − a) + 4X = 0,
(s² − 4s + 4) X = sa + b − 4a.
◮Conclusion: In practice one may bypass the Laplace transform, since the general solution
depends only on the characteristic roots (or root): once they have been determined, the solution
may be written down immediately. ◭
◮Definition: The roots of the characteristic equation are called characteristic roots. ◭
If the linear factors of D(s) are all distinct we say that the roots are simple. For instance, in
example 88 we saw D(s) could be written as the product of (s + 2) and (s − 7) : so −2 and 7
are simple roots. But in example 89, we saw D(s) could only be written as (s − 2)(s − 2), so the
(only) root s = 2 was not simple.
◮Definition: If a polynomial D(s) may be written as a product like D(s) = (s − r)^m Q(s), where
Q(s) is another polynomial and Q(r) ≠ 0, we say that r is a root of D(s) with multiplicity m.
Roots with multiplicity 1, 2 and 3 are often called simple, double and triple, respectively. ◭
◮Example 91 The polynomial D(s) = s7 + 2s5 + s3 may be factored as D(s) = s3 (s2 + 1)2 =
s3 (s − i)2 (s + i)2 . Hence, s = 0 is a triple root and s = i, −i are double complex roots. ◭
◮Example 92 Using the free computer package maxima, one may quickly see that for the
polynomial D(s) = s8 − 11s7 + 33s6 − 5s5 − 50s4 one has D(s) = s4 (s − 2)(s − 5)2 (s + 1).
Therefore, 2 and −1 are simple roots; 5 is a double root; 0 is a root with multiplicity 4. ◭
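If you do not have maxima at hand, the factorization is easy to cross-check by other means, for instance with this Python sketch: two polynomials of degree 8 that agree at more than 9 points must be identical, so comparing the expanded and factored forms at 11 integer points settles the matter exactly.

```python
# the polynomial of example 92 and its claimed factorization
def D(s):
    return s**8 - 11*s**7 + 33*s**6 - 5*s**5 - 50*s**4

def factored(s):
    return s**4 * (s - 2) * (s - 5)**2 * (s + 1)

# integer arithmetic, so the comparison is exact
for s in range(-5, 6):
    assert D(s) == factored(s)
```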
Note that in the last example the multiplicities add up to n = 8, which is also the degree of the
characteristic equation. This is no coincidence: indeed, it is easy to prove† that a polynomial of
degree n has either n simple roots in the complex field or, if some roots are multiple, the sum
of all the multiplicities is n.
If all the roots of the characteristic polynomial are simple, then, continuing from (34) we
get an expansion of the form
X(s) = A/(s − r1) + B/(s − r2) + C/(s − r3) + D/(s − r4) + · · ·
and the number of terms on the right-hand side is equal to the order of the equation. Inverting
the Laplace transform we get
† It is really a corollary of the fundamental theorem of algebra which, however, is buried deep
in the theory of complex variables.
Its roots are r1 = −3, r2 = 3, r3 = −1, r4 = 1. Each root is simple; note that there are four
roots, and the order of the equation is four. Hence, the general solution may be written
x(t) = Ae^{−3t} + Be^{3t} + Ce^{−t} + De^{t} , or also x(t) = M cosh 3t + N sinh 3t + P cosh t + Q sinh t :
convince yourself of this. We found here two equally good bases for the solution space. Each
basis has, of course, four elements. ◭
Complex exponentials (if any) may be converted to sines/cosines through Euler’s formula.
D(s) = 2s2 − 6s + 5.
where A = c1 + c2 , B = i(c1 − c2 ). Note that the first form of the general solution requires
complex arithmetic, the second one does not. ◭
(3s + 1)(s² − 25) = 0  =⇒  3s + 1 = 0  or  s² − 25 = 0.
The characteristic roots are s = −1/3, s = 5 and s = −5. So, finally, the general solution is
x(t) = Ae^{−t/3} + Be^{5t} + Ce^{−5t} .
s⁶ − 9s³ + 8 = 0;
it factors as (s³ − 1)(s³ − 8) = 0, that is,
s³ − 1 = 0  and  s³ − 8 = 0.
Besides the real roots s = 1 and s = 2, we get the complex roots
s = (−1 + i√3)/2 and s = (−1 − i√3)/2 (from s³ = 1), and s = −1 + i√3 and s = −1 − i√3
(from s³ = 8).
If the characteristic polynomial has a double root r, and some other roots, we proceed like in
example 89: going back to (34), we get
X(s) = A/(s − r) + B/(s − r)² + (n − 2 terms that do not depend on r),
where A and B are “free”. The general solution will then have the form
x(t) = Ae^{rt} + Bte^{rt} + (n − 2 exponentials that do not depend on r).
Similarly, if the characteristic polynomial has a triple root r, and perhaps other roots, continuing
from (34), we get
X(s) = A/(s − r) + B/(s − r)² + C/(s − r)³ + (n − 3 terms that do not depend on r),
whence
x(t) = Ae^{rt} + Bte^{rt} + (1/2)Ct²e^{rt} + (n − 3 exponentials that do not depend on r).
Note, however, that the factor 1/2 attached to C may be removed without loss of generality,
because C is free anyway.
The pattern is now clear: a quadruple characteristic root r would contribute a combination
of solutions of the form Aert + Btert + Ct2 ert + Dt3 ert to the general solution, and so on. A
characteristic root of multiplicity m would contribute a linear combination of m terms. Again,
complex exponentials (if any) may be handled by means of Euler’s formula.
◮Example 97 Find the general solution of the equation x(4) − 6ẍ + 8ẋ − 3x = 0.
Solution: The characteristic polynomial is
D(s) = s⁴ − 6s² + 8s − 3.
By inspection, s = 1 is a root. By long division,
(s⁴ − 6s² + 8s − 3)/(s − 1) = s³ + s² − 5s + 3.
By inspection, the right-hand side is zero if s = 1: hence, s = 1 is at least a double root. Again
by long division,
(s³ + s² − 5s + 3)/(s − 1) = s² + 2s − 3.
Now, the right-hand side may be factored as (s − 1)(s + 3). So, putting everything together:
D(s) = s⁴ − 6s² + 8s − 3 =
     = (s − 1)(s³ + s² − 5s + 3) =
     = (s − 1)²(s² + 2s − 3) =
     = (s − 1)³(s + 3).
The last line shows that D(s) has only two roots, namely s = 1 (triple), and s = −3 (simple).
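As a quick cross-check of the long divisions (a Python sketch; exact, since only integers are involved), we can compare the fully factored form with the original quartic at more than five integer points, which is enough to prove two quartics equal:

```python
# D(s) = s^4 - 6s^2 + 8s - 3 should equal (s-1)^3 (s+3) identically
def D(s):
    return s**4 - 6*s**2 + 8*s - 3

def factored(s):
    return (s - 1)**3 * (s + 3)

for s in range(-6, 7):          # 13 points >> degree 4, so agreement proves identity
    assert D(s) == factored(s)
```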
Therefore, the general solution is
x(t) = (A + Bt + Ct²)e^{t} + De^{−3t} .
◮Example 98 Find the general solution of the equation y (4) + 18ÿ + 81y = 0.
Solution: The characteristic polynomial is
D(s) = s⁴ + 18s² + 81 = (s² + 9)².
We see that the roots are i3 and −i3, each with multiplicity 2. Hence the general solution may
be written as
y(t) = (A1 + A2 t) e^{i3t} + (B1 + B2 t) e^{−i3t} ,
or, converting the complex exponentials through Euler’s formula,
y(t) = (a1 + a2 t) cos 3t + (b1 + b2 t) sin 3t. ◭
A non-homogeneous linear ODE of order n with constant coefficients has the form
Suppose x(t) and y(t) are two solutions of (35), leaving the initial conditions un-specified. Except
for the requirement that they be distinct, x and y are completely arbitrary. This, in other words,
means that
cn x(n) + · · · + c2 ẍ + c1 ẋ + c0 x = f (t),
cn y (n) + · · · + c2 ÿ + c1 ẏ + c0 y = f (t).
Subtracting one equation from the other, we get that
cn (x(n) − y (n) ) + · · · + c2 (ẍ − ÿ) + c1 (ẋ − ẏ) + c0 (x − y) = 0.
In other words, the difference z = x − y satisfies the associate homogeneous equation
cn z (n) + · · · + c2 z̈ + c1 ż + c0 z = 0.
So, if we have determined just one solution y of (35), regardless of the initial conditions, we
may generate any other solution x by simply adding to y the general solution of the associate
homogeneous equation. In a sense, solving the associate homogeneous equation is the heart of
the problem. Having solved that, it does not matter which “special” solution y we add, nor how
we manage to find it.†
For equations with constant coefficients, the Laplace transform is often a good way to find
a special solution. As for the initial conditions, they are free, so we may set them all to zero,
which seems to be the simplest choice.
◮Example 99 Find the general solution of ẍ − 2ẋ − 3x = sinh t.
Solution: Begin with the associate homogeneous equation, which has characteristic polynomial
D(s) = s2 − 2s − 3. Equating this to zero yields the characteristic roots s = −1 and s = 3, both
simple. Hence the general solution of the associate homogeneous equation is
z = Ae−t + Be3t .
A particular solution y of the “full”, non-homogeneous equation, satisfying the initial conditions
y(0+) = ẏ(0+) = 0, is then found by the Laplace transform: this gives
(s² − 2s − 3)Y = 1/(s² − 1),
that is,
(s − 3)(s + 1)Y = 1/[(s − 1)(s + 1)].
† This statement remains true for all linear equations, but a general discussion of linear
equations with variable coefficients (beyond first-order) would be too advanced for these notes.
◮Example 101 Find the general solution of x(4) − 5ẍ − 36x = cos 2t.
Solution: The associate characteristic equation is s4 − 5s2 − 36 = 0, which factors immediately
as (s2 − 9)(s2 + 4) = 0. Hence, the characteristic roots are ±3 and ±i2, all simple. The general
solution of the associate homogeneous equation may be written
z = A cos 2t + B sin 2t + M cosh 3t + N sinh 3t, or with e^{3t} and e^{−3t} in place of the
hyperbolic functions (take your choice). To find a particular solution y of the non-homogeneous
equation, we set the
initial conditions to zero. By the Laplace transform, we get that
(s⁴ − 5s² − 36)Y = s/(s² + 4),
so that
Y = s/[(s² − 9)(s² + 4)²] = (1/169)·s/(s² − 9) − (1/169)·s/(s² + 4) − (1/13)·s/(s² + 4)².
The last term on the right-hand side leads us back to example 61:
(s/13)/(s² + 4)² = −(1/52) · d/ds [2/(s² + 4)],
and hence
y = (1/169) cosh 3t − (1/169) cos 2t − (1/52) t sin 2t.
Finally, the general solution is
x = y + z = A cos 2t + B sin 2t + M cosh 3t + N sinh 3t − (1/52) t sin 2t.
Note that, since A and M are free, it is not wrong to drop the terms (1/169) cosh 3t and
−(1/169) cos 2t from the general solution (same argument as in example 99). ◭
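The partial-fraction step is the delicate part of this example, so it is worth checking. The sketch below (Python, for illustration) compares Y(s) = s/[(s² − 9)(s² + 4)²] with the decomposition implied by y(t), using the standard transforms L[cosh 3t] = s/(s² − 9), L[cos 2t] = s/(s² + 4) and L[t sin 2t] = 4s/(s² + 4)²:

```python
import random

def lhs(s):
    # Y(s) = s / ((s^2 - 9)(s^2 + 4)^2), from (s^4 - 5s^2 - 36) Y = s/(s^2 + 4)
    return s / ((s*s - 9) * (s*s + 4)**2)

def rhs(s):
    # terms matching y = (1/169) cosh 3t - (1/169) cos 2t - (1/52) t sin 2t
    return (1/169)*s/(s*s - 9) - (1/169)*s/(s*s + 4) - (1/13)*s/(s*s + 4)**2

random.seed(1)
for _ in range(50):
    s = random.uniform(4.0, 10.0)   # stay to the right of the pole at s = 3
    assert abs(lhs(s) - rhs(s)) < 1e-12
```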
◮Example 102 Find the general solution of the equation ẍ − 2x = u(t) − u(t − 8), where u(t)
is Heaviside’s step function.
Solution: The characteristic polynomial of the associated homogeneous equation is D(s) = s² − 2,
with roots s = ±√2. Hence its solution may be written
z = Ae^{√2 t} + Be^{−√2 t}  or also  z = M cosh √2 t + N sinh √2 t
(the two forms are equivalent). To find a particular solution, we set all initial conditions to zero.
By the Laplace transform, we get:
(s² − 2)Y = (1 − e^{−8s})/s.
Therefore,
Y = [ (1/2)·s/(s² − 2) − (1/2)·(1/s) ] · (1 − e^{−8s}),
and hence,
y = sinh²(t/√2) · u(t) − sinh²((t − 8)/√2) · u(t − 8).
Finally, x = Ae^{√2 t} + Be^{−√2 t} + sinh²(t/√2) · u(t) − sinh²((t − 8)/√2) · u(t − 8) is the general
solution. ◭
◮Example 103 Solve the equation ÿ + 9y = sin 2t, given that y(0+) = 0 and ẏ(0+) = 1.
Solution: Taking the Laplace transform of both sides of the equation, we get:
s²Y − 1 + 9Y = 2/(s² + 4),
Y = 1/(s² + 9) + 2/[(s² + 9)(s² + 4)].
By partial fractions,
2/[(s² + 9)(s² + 4)] = (2/5)/(s² + 4) − (2/5)/(s² + 9).
Hence,
Y = (2/5)/(s² + 4) + (3/5)/(s² + 9).
Finally,
y = (1/5) sin 2t + (1/5) sin 3t
is the particular solution fitting the given initial values. ◭
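It is good practice to check both the equation and the initial values. A Python sketch (the second derivative is computed by hand from y, exactly as in example 87):

```python
import math

def y(t):
    return (math.sin(2*t) + math.sin(3*t)) / 5

def ydd(t):
    return (-4*math.sin(2*t) - 9*math.sin(3*t)) / 5   # hand-computed y''

# ODE residual vanishes at arbitrary sample points
for t in [0.1, 0.7, 1.3, 2.9]:
    assert abs(ydd(t) + 9*y(t) - math.sin(2*t)) < 1e-12

# initial conditions: y(0+) = 0 and (by forward difference) y'(0+) = 1
assert abs(y(0.0)) < 1e-12
h = 1e-6
assert abs((y(h) - y(0.0)) / h - 1.0) < 1e-4
```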
◮Example 104 Solve the equation f¨ − 4f = t, given that f (0+) = 0 and f˙(0+) = 1.
Solution: Taking the Laplace transform of both sides of the equation, we get:
s²F − s · 0 − 1 − 4F = 1/s²,
whence (s² − 4)F = 1 + 1/s² = (s² + 1)/s². Substituting x = s², partial fractions give
(x + 1)/(x(x − 4)) = (5/4)/(x − 4) − (1/4)/x.
Therefore,
F = (s² + 1)/(s²(s² − 4)) = (5/4)/(s² − 4) − (1/4)/s² = L[(5/8) sinh 2t − (1/4)t].
Therefore, the particular solution is f (t) = (5/8) sinh 2t − (1/4)t. ◭
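Again, a numerical sanity check is immediate (a Python sketch; f'' = (5/2) sinh 2t is computed by hand):

```python
import math

def f(t):
    return 5/8 * math.sinh(2*t) - t/4

def fdd(t):
    return 5/2 * math.sinh(2*t)          # hand-computed f''

# the ODE f'' - 4f = t holds at arbitrary sample points
for t in [0.0, 0.4, 1.1, 2.0]:
    assert abs(fdd(t) - 4*f(t) - t) < 1e-9

# initial values: f(0+) = 0, f'(0+) = (5/8)*2*cosh 0 - 1/4 = 1
assert f(0.0) == 0.0
fd0 = 5/8 * 2 * math.cosh(0.0) - 1/4
assert abs(fd0 - 1.0) < 1e-12
```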
◮Example 105 Solve ẍ − 6ẋ + 9x = 0, with the initial conditions x0 = 1, and ẋ0 = 4.
Solution: Taking the Laplace transform of the equation, we get
s²X − s − 4 − 6(sX − 1) + 9X = 0,
or
(s² − 6s + 9)X = s − 2.
Hence
X = (s − 2)/(s² − 6s + 9) = (s − 2)/(s − 3)² = (s − 3 + 1)/(s − 3)² = 1/(s − 3) + 1/(s − 3)².
Now, from (4) we get
1/(s − 3) = L[e^{3t}],
and
1/(s − 3)² = −d/ds [1/(s − 3)] = L[te^{3t}].
Hence,
x = e^{3t} + te^{3t} = (1 + t)e^{3t} . ◭
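Checking back in the t-domain is easy, and catches most algebra slips. A Python sketch, with the first two derivatives of x = (1 + t)e^{3t} written out by hand:

```python
import math

def x(t):
    return (1 + t) * math.exp(3*t)

def xd(t):
    return (4 + 3*t) * math.exp(3*t)     # hand-computed x'

def xdd(t):
    return (15 + 9*t) * math.exp(3*t)    # hand-computed x''

# x'' - 6x' + 9x = 0 at arbitrary sample points (tolerance scaled by e^{3t})
for t in [0.0, 0.5, 1.0, 1.7]:
    assert abs(xdd(t) - 6*xd(t) + 9*x(t)) < 1e-8 * math.exp(3*t)

# the prescribed initial conditions
assert x(0.0) == 1.0
assert xd(0.0) == 4.0
```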
This section deals with more advanced topics, and may be safely skipped without affecting your
understanding of the remaining sections. Leave it out if you’re reading these notes for the first
time.
Go back to the n-th order linear non-homogeneous ODE with constant coefficients (35); sup-
pose that n initial conditions have been specified. We saw that, applying the Laplace transform
to the equation, one may eventually derive the equation:
The polynomial of degree n that multiplies X is, of course, the characteristic polynomial D(s).
Note that all the other terms on the left-hand side of (37) may be grouped as a polynomial K(s)
of degree at most n − 1, which vanishes if all initial conditions are set equal to zero (convince
yourself of this). Equation (37) may be written in the more compact form
DX − K = F,
whence
X = D⁻¹K + D⁻¹F. (38)
Defining
G(s) = D⁻¹ and g(t) = L⁻¹[D⁻¹],
we may express (38) using a convolution product (see section 2.7) as follows:
x(t) = L⁻¹[GK] + (g ∗ f )(t). (39)
In some other course, you’ll probably learn that g is called the Green’s function for equation (35)
with initial conditions all set to zero. Such initial conditions are called homogeneous initial
conditions.
Let us now interpret equations (38)–(39), because they give us some insight into the struc-
ture of the solution.
They tell us that the solution x(t) consists of two parts: x = a + b,
where
a(t) = L⁻¹[GK],   b(t) = L⁻¹[GF] = (g ∗ f )(t).
The first part, a(t), is the solution that we would get if we kept all the initial conditions as
given, but replaced the right-hand function f (t) with zero, i.e., if we solved the corresponding
homogeneous equation. The second part, b(t), is the solution that we would obtain if we did set
all initial conditions to zero, i.e., x0 = ẋ0 = ẍ0 = · · · = x0^{(n−1)} = 0 (which would make K(s) = 0
identically), but solved the “full”, non-homogeneous, equation.
◮Conclusion: We see that the solution of (35) is obtained by combining the solutions of two
related, but rather easier, problems. Hence the tongue-twisting statement “to solve a non-
homogeneous problem one must combine the solution of the non-homogeneous equation with
homogeneous initial values, and the solution of the homogeneous equation with non-homogeneous
initial values”. ◭
Actually, this statement remains true for all linear equations, including those with variable
coefficients: unfortunately, as we mentioned before, it is impossible to give a method for solving
all linear homogeneous equations with variable coefficients in a closed form (that is, leaving out
computer-generated “solutions”).
◮Example 106 Find the solution of ẍ + x = tan t that satisfies the homogeneous initial
conditions x(0+) = ẋ(0+) = 0.
Solution: We note that D(s) = s2 + 1, hence the Green’s function is
g(t) = L⁻¹[1/(s² + 1)] = sin t.
Therefore,
x = sin t ∗ tan t = ∫_0^t sin(t − τ) tan τ dτ = ∫_0^t (sin t cos τ − cos t sin τ) tan τ dτ.
Recalling that
∫ cos τ tan τ dτ = − cos τ + C,   ∫ sin τ tan τ dτ = − sin τ + artanh(sin τ) + C,
we get
x = sin t [1 − cos t] − cos t [− sin t + artanh(sin t)] = sin t − cos t · artanh(sin t).
◭
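Since this answer is easy to mistype, here is a numerical verification (Python; the second derivative is approximated by a central difference, so only a modest tolerance is claimed):

```python
import math

def x(t):
    return math.sin(t) - math.cos(t) * math.atanh(math.sin(t))

h = 1e-4
for t in [0.2, 0.6, 1.0]:            # stay inside (-pi/2, pi/2), where tan t is finite
    xdd = (x(t + h) - 2*x(t) + x(t - h)) / h**2   # central 2nd difference
    assert abs(xdd + x(t) - math.tan(t)) < 1e-5

# homogeneous initial condition x(0+) = 0
assert abs(x(0.0)) < 1e-12
```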
◮Example 107 Solve 5ẍ − 8ẋ + 3x = sin t, subject to the homogeneous initial conditions
x(0+) = 0, ẋ(0+) = 0. (This is example 81.)
Solution: The characteristic polynomial is D(s) = 5s2 − 8s + 3, and the roots are s = 1 and
s = 3/5. Hence,
G = 1/[5(s − 1)(s − 3/5)] = (1/2)/(s − 1) − (1/2)/(s − 3/5).
The Green’s function is
g(t) = L⁻¹[(1/2)/(s − 1) − (1/2)/(s − 3/5)] = (1/2)(e^{t} − e^{3t/5}).
s−1 s− 5
We have shown that any solution of a linear homogeneous ODE with constant coefficients of
order n may always be expressed as a linear combination of n functions. When the characteristic
equation has n simple roots, these functions are just n exponentials; when there are multiple
roots, each exponential is multiplied by powers of t ranging from t^0 to t^{m−1}, where m is the
multiplicity. Since the multiplicities always add up to n, one always finds n functions.
We took for granted that such functions form an independent set, but we never proved it.
The proof is actually quite easy and it may be seen as follows.
Consider the easier case where all the characteristic roots are simple. Suppose there are n
constants C1 . . . Cn such that
C1 e^{r1 t} + C2 e^{r2 t} + · · · + Cn e^{rn t} = 0 identically.
Then (by taking the Laplace transform of this identity), it follows immediately that
C1/(s − r1) + C2/(s − r2) + · · · + Cn/(s − rn) = 0.
If you multiply both sides of this equation by s − r1 , simplify and let s = r1 , you get C1 = 0.
Do the same for every term: you get Ck = 0 for every k, which proves that the functions are
independent. The case where some roots are multiple is handled in exactly the same way (but
do it, as an exercise).
In summary, we have proven that every linear homogeneous ODE with constant coefficients
of order n has an n-dimensional solution space. This theorem remains true in a more general
context, where the coefficients are functions of t. However, the proof for equations with variable
coefficients (using Wronskian determinants) falls beyond the scope of these notes.
We now turn to linear first-order differential equations with non-constant coefficients, i.e., equa-
tions of the form
c1 (t) ẋ(t) + c0 (t) x(t) = f (t), (40)
obtained by letting c2 ≡ 0 in the equation (30) that we saw at the beginning of this chapter.
The Laplace transform is definitely not the method of choice for such equations.
There is a very nice method for solving equation (40), and it is known as Lagrange’s method
or the method of variation of parameters. It consists in looking for a solution that has the form
x(t) = u(t) v(t), (41)
where u and v are two unknown functions. Differentiating, we get
ẋ = u̇v + uv̇;
substituting into (40) and grouping the terms, it follows that
c1 u̇v + [c1 uv̇ + c0 uv] = f (t). (42)
This result is true in general, for arbitrary u and v. Now we choose v so that the expression in
square brackets vanishes:
c1 uv̇ + c0 uv = 0.
Dividing through by u, this is satisfied if
c1 v̇ + c0 v = 0. (43)
If v is chosen in this way, and substituted back into (42), then only the part outside the brackets
remains:
c1 u̇v = f (t), (44)
and
∫ dv/v = − ∫ (c0/c1) dt.
Therefore, v too may be determined by an ordinary integration. Finally, having found v from
(43) and hence u from (44), we reconstruct x according to (41).
Note that two integrations are performed, hence apparently two integration constants should
be accounted for. But the integration constant from the intermediate step (finding v) falls off:
keener students should verify this statement. For practical purposes, the integration constant
for the intermediate step may generally be set to zero.†
† As an exercise, set it to 137 and verify that a factor e137 eventually falls off.
t³uv̇ + 5t²uv = 0,
which simplifies to
dv/v = −5 dt/t.
Integrating this equation, we get
ln |v| = −5 ln |t|.
Hence we may let v = t⁻⁵ and substitute into what is left of the equation, which is not much:
t³ u̇ t⁻⁵ = e^{t} ,
u̇ = t²e^{t} ,
u = ∫ t²e^{t} dt = t²e^{t} − 2te^{t} + 2e^{t} + C.
◮Example 109 Find the general solution of the equation (1 + et )ẏ + 2et y = sinh t.
Solution: Write y = uv, ẏ = u̇v + uv̇, and proceed like in the preceding example. It follows that
(1 + e^{t})u̇ v + [(1 + e^{t})u v̇ + 2e^{t} u v] = sinh t;
imposing that
(1 + et )u v̇ + 2et u v = 0
we get the equation for v, which is
v̇/v = −2e^{t}/(1 + e^{t}).
This may be solved immediately:
ln |v| = −2 ln |1 + e^{t}|  =⇒  v = 1/(1 + e^{t})².
u̇/(1 + e^{t}) = sinh t  =⇒  u̇ = (1 + e^{t}) sinh t = sinh t + (1/2)(e^{2t} − 1).
Impose
uv̇ cos t − uv sin t = 0,
and simplify: it follows that
dv/v = (sin t/cos t) dt.
Integrating, we get
∫ dv/v = ∫ (sin t dt)/cos t = − ∫ d(cos t)/cos t ,
ln |v| = − ln | cos t|;
v = 1/cos t ;
substituting back, we get:
u̇ cos t/cos t = t²,   u̇ = t²,   u = (1/3)t³ + C,
and finally
x = uv = ((1/3)t³ + C)/cos t.
Comment: Note that x(t) becomes infinite for t = ±π/2, regardless of C. That is why one must
specify “in a neighborhood of t = 0” in the statement of the problem. ◭
◮Example 111 Find the solution of ẋ + x cot t = −5ecos t , satisfying the initial condition
x(π/2) = 4.
Solution: Proceeding like in the previous examples, one finds
u̇v + [uv̇ + uv cot t] = −5e^{cos t} ,
dv/v = − cot t dt = −d(sin t)/sin t .
ln |v| = − ln | sin t|,
v = 1/sin t .
Substituting back:
u̇/sin t = −5e^{cos t} ,
hence
u = −5 ∫ sin t e^{cos t} dt = 5 ∫ e^{cos t} d(cos t) = 5e^{cos t} + C.
The general solution is
x = uv = (5e^{cos t} + C)/sin t .
Imposing the initial condition x(π/2) = 4, we get
(5e^{cos(π/2)} + C)/sin(π/2) = 4,
(5 + C)/1 = 4,
hence C = −1, and finally the required particular solution:
x = (5e^{cos t} − 1)/sin t . ◭
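Both the equation and the initial condition can be confirmed numerically. A Python sketch, with the derivative of x worked out by hand via the quotient rule:

```python
import math

def x(t):
    return (5*math.exp(math.cos(t)) - 1) / math.sin(t)

def xd(t):
    # hand-computed x' by the quotient rule
    s, c = math.sin(t), math.cos(t)
    return (-5*s*math.exp(c)*s - (5*math.exp(c) - 1)*c) / s**2

# x' + x cot t = -5 e^{cos t} at arbitrary sample points in (0, pi)
for t in [0.5, 1.0, math.pi/2, 2.5]:
    lhs = xd(t) + x(t) / math.tan(t)
    assert abs(lhs + 5*math.exp(math.cos(t))) < 1e-9

# the prescribed initial condition x(pi/2) = 4
assert abs(x(math.pi/2) - 4.0) < 1e-12
```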
7. Higher Order Equations with Variable Coefficients: The Taylor Series Method
When the coefficients are not constant, most second-order equations cannot be solved in a simple
way; linear equations with constant coefficients, of course, are a pleasant exception.
Sometimes a solution may be obtained in the form of a power series which can be useful
for further calculations, even if it cannot be expressed in terms of elementary functions. Before
we go any further, let us see how the method works.
◮Example 112 Find the solution of the equation ÿ + t²y = 0.
Solution: Note that this 2nd-order equation has a variable coefficient (namely t2 , attached to
y); none of the methods seen so far is applicable. We assume y(t) may be expressed as a series
of powers of t, like the following:
y(t) = a0 + a1 t + a2 t2 + a3 t3 + a4 t4 + a5 t5 + · · · , (45)
where the coefficients a0 , a1 , a2 . . . are numbers that will be determined at a later stage.
Differentiating (45) twice, we get
ẏ(t) = a1 + 2 a2 t + 3 a3 t2 + 4 a4 t3 + 5 a5 t4 + · · · ;
ÿ(t) = 2 a2 + 3 · 2 a3 t + 4 · 3 a4 t2 + 5 · 4 a5 t3 + · · · .
2 a2 + 3 · 2 a3 t + 4 · 3 a4 t2 + 5 · 4 a5 t3 + 6 · 5 a6 t4 + 7 · 6 a7 t5 + · · · +
+t2 · (a0 + a1 t + a2 t2 + a3 t3 + a4 t4 + a5 t5 + · · ·) = 0.
We now impose that, on the left-hand side, each coefficient attached to a power of t be
individually equal to zero. This will obviously make the whole left-hand side vanish for every t.
We get a set of equations like:
2 a2 = 0 3 · 2 a3 = 0 4 · 3 a4 + a0 = 0 5 · 4 a5 + a1 = 0
6 · 5 a6 + a2 = 0 7 · 6 a7 + a3 = 0 8 · 7 a8 + a4 = 0 9 · 8 a9 + a5 = 0
10 · 9 a10 + a6 = 0 11 · 10 a11 + a7 = 0 12 · 11 a12 + a8 = 0 (and so on.)
Although we have an infinity of equations, we may solve them one-by-one. For instance, the
first equation yields a2 = 0, and the second equation gives a3 = 0. So, we have immediately
found two coefficients. The third equation is different: it has two unknowns,
4 · 3 a4 + a0 = 0,
so we let
a0 = A = free,   a4 = −A/(4 · 3),
and carry on. The fourth equation, like the previous one, has two unknowns, so we write
a1 = B = free,   a5 = −B/(5 · 4).
At this point we easily see that no more free parameters will be needed, because the next
equations yield:
a6 = −a2/(6 · 5) = 0,   a7 = −a3/(7 · 6) = 0,   a8 = −a4/(8 · 7) = A/(8 · 7 · 4 · 3),   a9 = −a5/(9 · 8) = B/(9 · 8 · 5 · 4),
and then
a10 = 0,   a11 = 0,   a12 = −A/(12 · 11 · 8 · 7 · 4 · 3),   a13 = −B/(13 · 12 · 9 · 8 · 5 · 4),
and so on. Clearly, an = 0 if n divided by 4 leaves a remainder of 2 or 3.
The pattern in which the coefficients may be formed is now clear, but the solution must be
left in the form
y = A [1 − t⁴/(4 · 3) + t⁸/(8 · 7 · 4 · 3) − t¹²/(12 · 11 · 8 · 7 · 4 · 3) + t¹⁶/(16 · 15 · 12 · 11 · 8 · 7 · 4 · 3) − · · ·] +
  + B [t − t⁵/(5 · 4) + t⁹/(9 · 8 · 5 · 4) − t¹³/(13 · 12 · 9 · 8 · 5 · 4) + t¹⁷/(17 · 16 · 13 · 12 · 9 · 8 · 5 · 4) − · · ·] (46)
We have obtained an algorithm for calculating the solution with arbitrary accuracy, but we
cannot express it in a simple way in terms of “elementary” functions such as sines, cosines,
polynomials etc.
In case you wonder, it may be shown that the solution belongs to the family of Bessel
functions, which are very important in advanced engineering mathematics. ◭
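The coefficient equations above can be iterated mechanically. A Python sketch, using exact rational arithmetic and the recurrence (m + 4)(m + 3) a_{m+4} + a_m = 0 read off from the equations, with the assumed choice A = 1, B = 0:

```python
from fractions import Fraction

# coefficients a_n of the series solution of y'' + t^2 y = 0,
# via a_{m+4} = -a_m / ((m+4)(m+3)), with a0 = A = 1 and a1 = B = 0
a = {0: Fraction(1), 1: Fraction(0), 2: Fraction(0), 3: Fraction(0)}
for m in range(0, 16):
    a[m + 4] = -a[m] / ((m + 4) * (m + 3))

# compare with the denominators displayed in (46)
assert a[4]  == Fraction(-1, 4*3)
assert a[8]  == Fraction(1, 8*7*4*3)
assert a[12] == Fraction(-1, 12*11*8*7*4*3)
assert a[6] == 0 and a[10] == 0      # coefficients with remainder 2 or 3 mod 4 vanish
```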
e^{t} = 1 + t/1! + t²/2! + t³/3! + t⁴/4! + · · ·
converges for every t, hence its radius of convergence is infinite. On the other hand, the geometric
series
1/(1 − t) = 1 + t + t² + t³ + t⁴ + · · ·
diverges if t = 1, hence its radius of convergence is 1. ◭
Clearly, the power series (45) corresponds to the special case where c = 0.
◮Example 114 Mercator’s Series, which you encountered in first year,
ln t = (t − 1) − (t − 1)²/2 + (t − 1)³/3 − (t − 1)⁴/4 + · · ·
diverges for t = 0 but converges between 0 and 2. Hence, ln t is analytic at t = 1. ◭
We shall not discuss the theory of analytic functions in these notes. One point is worth mention-
ing, though: in first year you studied Taylor’s Formula, which allows you to expand a function as
a power series using the values of its derivatives at a given point. If a function may be expanded
in a Taylor series about c, then it’s obviously analytic at c; it may be shown that the converse
is also true, but we’ll skip the proof. In other words, a function is analytic at a given point if
and only if it may be expanded in a Taylor series about that point.
Analytic functions have several important properties. The ones that are crucial for the
method of this section are:
• For a given c, the coefficients an are uniquely determined, and
• Power series may be integrated or differentiated term-by-term inside their interval of con-
vergence.
Using these properties, it may be shown that if a linear ODE has an analytic solution at a
certain point, and initial conditions are given in the usual way at the same point, then it’s
possible to determine the coefficients of the expansion, one after the other, as we have done in
example 112. The first “if” should not be taken for granted. Indeed, some of the most useful
equations in engineering do not meet this requirement. In such cases, the Taylor series method
must be modified, but you’ll learn about these ideas (Frobenius’ method, Fuchs’ theorem) in
other courses.
Example 112 was perhaps easy to follow because the equation was relatively simple. In
general, though, it is better to use a more efficient notation, like in the following examples.
◮Example 115 Find a solution of the equation tÿ + 3ẏ − ty = 0.
Solution: We write (45) in summation form:
y(t) = Σ_{n=0}^{∞} a_n t^n .
To save ourselves some time, let us agree that from now on, whenever the beginning and the end
of a sum are not indicated, it’ll be understood that the dummy index runs from zero to infinity.
For example, differentiating this equation we get:
ẏ(t) = Σ_n n a_n t^{n−1} ,
ÿ(t) = Σ_n n(n − 1) a_n t^{n−2} .
Substituting into the equation, we get
Σ_n n(n − 1) a_n t^{n−1} + 3 Σ_n n a_n t^{n−1} − Σ_n a_n t^{n+1} = 0.
Now, the first two sums in this expression may be combined into one, but the third sum may
not, because the power t^{n+1} is not the same as t^{n−1}. Make sure you understand this
point. It follows that
Σ_n [n(n − 1) + 3n] a_n t^{n−1} − Σ_n a_n t^{n+1} = 0.
Simplifying, we get:
Σ_n n(n + 2) a_n t^{n−1} − Σ_n a_n t^{n+1} = 0. (47)
We now note that the powers of t in the two sums in (47) differ by 2 units. If in the first sum
we write n = 2 + m, then n = 0 corresponds to m = −2, and n → ∞ corresponds to m → ∞.
So, the first half of (47) may be rewritten as
Σ_n n(n + 2) a_n t^{n−1} = Σ_{m=−2}^{∞} (m + 2)(m + 4) a_{m+2} t^{m+1} .
We then separate from the sum on the right, those terms that correspond to a negative m:
Σ_{m=−2}^{∞} (m + 2)(m + 4) a_{m+2} t^{m+1} = 0 · 2 · a0 t^{−1} + 1 · 3 · a1 t^0 + Σ_{m=0}^{∞} (m + 2)(m + 4) a_{m+2} t^{m+1} .
Going back to (47), we don’t touch the second half, except that we replace the dummy variable
n with m. So, (47) becomes
0 · a0 t^{−1} + 3 · a1 + Σ_m (m + 2)(m + 4) a_{m+2} t^{m+1} − Σ_m a_m t^{m+1} = 0.
Now both sums contain identical powers of t and run from zero to infinity, so we may at last
combine them into one. It follows that
0 · a0 t^{−1} + 3 · a1 + Σ_m [(m + 2)(m + 4) a_{m+2} − a_m] t^{m+1} = 0.
Proceeding like in example 112, we impose that all monomials vanish separately. Again, this
yields an infinite set of equations:
0 · a0 = 0
3 · a1 = 0
(m + 2)(m + 4)am+2 − am = 0 [m = 0, 1, 2, 3, . . .]
The first equation is always true, regardless of a0 . Hence a0 may take any value: it’s a free
parameter. The second equation is satisfied only if a1 = 0. The other ones yield
a_{m+2} = a_m/((m + 2)(m + 4)). [m = 0, 1, 2, 3, . . .]
In this way, all even coefficients may be computed one after the other. For example, we get
a2 = a0/(2 · 4),   a4 = a0/(2 · 4 · 4 · 6),   a6 = a0/(2 · 4 · 4 · 6 · 6 · 8),   a8 = a0/(2 · 4 · 4 · 6 · 6 · 8 · 8 · 10).
Similarly, from a1 = 0 we get that a3 = 0, hence a5 = 0, and so on: all odd coefficients vanish.
It is easy to spot a pattern:
a2 = a0/(2² 1! 2!),   a4 = a0/(2⁴ 2! 3!),   a6 = a0/(2⁶ 3! 4!),   a8 = a0/(2⁸ 4! 5!),   a10 = a0/(2¹⁰ 5! 6!),   etc.
So, we may write our solution in compact form:
y = a0 · Σ_m t^{2m}/(2^{2m} m! (m + 1)!),
where a0 is a free parameter. ◭
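The compact form is easy to verify against the recurrence itself, for instance with this Python sketch in exact rational arithmetic (taking a0 = 1):

```python
from fractions import Fraction
from math import factorial

# recurrence from example 115: a_{m+2} = a_m / ((m+2)(m+4)), with a0 = 1, a1 = 0
a = {0: Fraction(1), 1: Fraction(0)}
for m in range(0, 20):
    a[m + 2] = a[m] / ((m + 2) * (m + 4))

# even coefficients match the closed form a_{2m} = a0 / (2^{2m} m! (m+1)!)
for m in range(0, 10):
    assert a[2*m] == Fraction(1, 2**(2*m) * factorial(m) * factorial(m + 1))

# all odd coefficients vanish
assert all(a[k] == 0 for k in range(1, 21, 2))
```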
We mentioned in section 3.3 that the solution space of a 2nd-order linear homogeneous ODE
is two-dimensional. Indeed, in example 112 we found two solutions. In this example, instead,
we found only one solution (apart from the scalar factor a0 ). In other words, there are two
independent solutions but we found only one. This is an unavoidable limitation of the Taylor
series method: the “other” solution was not found because it is not analytic at t = 0. A full
discussion of this topic would lead us too far away.
In the next example, two independent solutions are found.
◮Example 116 Find the solutions of the equation tẍ − 3ẋ + t5 x = 0.
Solution: We repeat all the steps of the preceding example. We write (45) in compact form:
x(t) = Σ_{n=0}^{∞} a_n t^n .
Differentiating twice, substituting into the equation and combining the first two sums (as in the
preceding example), we get
Σ_n n(n − 4) a_n t^{n−1} + Σ_n a_n t^{n+5} = 0. (48)
We now note that the powers of t in the two sums in (48) differ by 6 units. Therefore, we must
write n = 6 + m in the first sum: in this way n = 0 corresponds to m = −6, and n → ∞
corresponds to m → ∞. In other words, we write:
Σ_n n(n − 4) a_n t^{n−1} = Σ_{m=−6}^{∞} (m + 6)(m + 2) a_{m+6} t^{m+5} .
Now, from the right-hand side we separate the terms that correspond to a negative m:
Σ_{m=−6}^{∞} (m + 6)(m + 2) a_{m+6} t^{m+5} = 0 · (−4) a0 t^{−1} + 1 · (−3) a1 t^0 + 2 · (−2) a2 t + 3 · (−1) a3 t² +
   + 4 · 0 · a4 t³ + 5 · 1 · a5 t⁴ + Σ_{m=0}^{∞} (m + 6)(m + 2) a_{m+6} t^{m+5} .
Going back to (48), in the second sum we replace the dummy index n with m. So, after minor
simplifications, (48) becomes
0 a0 t^{−1} − 3a1 − 4a2 t − 3a3 t² + 0 a4 t³ + 5a5 t⁴ + Σ_m (m + 6)(m + 2) a_{m+6} t^{m+5} + Σ_m a_m t^{m+5} = 0.
The sums in this equation contain identical powers of t, and m runs from zero to infinity in
both, hence we may combine them. It follows that
0 a0 t^{−1} − 3a1 − 4a2 t − 3a3 t² + 0 a4 t³ + 5a5 t⁴ + Σ_m [(m + 6)(m + 2) a_{m+6} + a_m] t^{m+5} = 0.
We impose that:
0a0 = 0
−3a1 = 0
−4a2 = 0
−3a3 = 0
0a4 = 0
5a5 = 0
(m + 6)(m + 2)am+6 + am = 0 [m = 0, 1, 2, 3, . . .]
We deduce that
a0 = free, a4 = free,
because they are multiplied by zero. We also see immediately that it must be
a1 = 0, a2 = 0, a3 = 0, a5 = 0.
We obtain:
a6 = −a0/(6 · 2) = −a0/12,   a_{12} = −a6/(12 · 8) = a0/1152,   a_{18} = −a_{12}/(18 · 14) = −a0/290304   (etc.)
and
a_{10} = −a4/(10 · 6) = −a4/60,   a_{16} = −a_{10}/(16 · 12) = a4/11520,   a_{22} = −a_{16}/(22 · 18) = −a4/4561920   (etc.)
All the other coefficients vanish: hence
x = a0 (1 − t⁶/12 + t¹²/1152 − t¹⁸/290304 + · · ·) + a4 (t⁴ − t¹⁰/60 + t¹⁶/11520 − t²²/4561920 + · · ·),
where a0 and a4 are free. Clearly the numbers quickly become unwieldy as m grows, but it
would be fairly easy to program a computer to calculate them up to any desired order. ◭
If a solution is a polynomial, the algorithm will stop automatically, once the whole polynomial
has been determined.
◮Example 117 Find the solutions of the equation (1 + t4 )z̈ − 8 t3 ż + 20 t2 z = 0.
Solution: Proceeding like in the preceding examples, we write
z(t) = Σ_n a_n t^n ,
ż(t) = Σ_n n a_n t^{n−1} ,
z̈(t) = Σ_n n(n − 1) a_n t^{n−2} .
We now replace n with m + 4 in the first sum, and n with m in the second sum. It follows:
Σ_{m=−4}^{∞} (m + 4)(m + 3) a_{m+4} t^{m+2} + Σ_m (m² − 9m + 20) a_m t^{m+2} = 0.
Separating the first four terms from the first sum, we obtain (after minor simplifications):
0 a0 t^{−2} + 0 a1 t^{−1} + 2 a2 t^0 + 6 a3 t^1 + Σ_{m=0}^{∞} (m + 4)(m + 3) a_{m+4} t^{m+2} + Σ_m (m² − 9m + 20) a_m t^{m+2} = 0.
We impose that
0 a0 = 0
0 a1 = 0
2 a2 = 0
6 a3 = 0
(m + 4)(m + 3)am+4 + (m2 − 9m + 20)am = 0 [m = 0, 1, 2, 3, . . .]
a0 = free, a1 = free.
a2 = 0, a3 = 0.
a_{m+4} = −(m² − 9m + 20)/((m + 4)(m + 3)) · a_m = −(m − 4)(m − 5)/((m + 4)(m + 3)) · a_m . [m = 0, 1, 2, 3 . . .]
Note that, for m = 0, 1, 2, 3, . . . and so on, the denominator of the fraction above is always
positive (we never divide by zero). This guarantees that this algorithm may be used indefinitely
to calculate each group of four a_n’s from the preceding four. For example, substituting m = 0,
1, 2 and 3, we get:
a4 = −(20/12) a0 ,   a5 = −(12/20) a1 ,   a6 = −(6/30) a2 = 0,   a7 = −(2/42) a3 = 0.
But when we substitute m = 4, 5, 6 and 7 we find that
a8 = (0/56) a4 = 0,   a9 = (0/72) a5 = 0,   a_{10} = −(2/90) a6 = 0,   a_{11} = −(6/110) a7 = 0.
Since we found that four consecutive a_n’s are zero, it is obvious that all the following ones must
also be zero. The algorithm stops; the solution, therefore, is simply
z = a0 (1 − 5t⁴/3) + a1 (t − 3t⁵/5),
where a0 and a1 are free. ◭
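Because the solution is a polynomial, the check can be made exact. The Python sketch below represents each basis polynomial as a {power: coefficient} dictionary, differentiates it by hand, and evaluates the residual of the equation in rational arithmetic:

```python
from fractions import Fraction as F

def residual(coeffs, t):
    """(1+t^4) z'' - 8 t^3 z' + 20 t^2 z for a polynomial z = {power: coeff}."""
    z   = sum(c * t**n for n, c in coeffs.items())
    zd  = sum(n * c * t**(n - 1) for n, c in coeffs.items() if n >= 1)
    zdd = sum(n * (n - 1) * c * t**(n - 2) for n, c in coeffs.items() if n >= 2)
    return (1 + t**4) * zdd - 8 * t**3 * zd + 20 * t**2 * z

z1 = {0: F(1), 4: F(-5, 3)}   # 1 - 5t^4/3  (the a0 part)
z2 = {1: F(1), 5: F(-3, 5)}   # t - 3t^5/5  (the a1 part)

for t in [F(0), F(1), F(-2), F(3, 7)]:
    assert residual(z1, t) == 0
    assert residual(z2, t) == 0
```

The residual is itself a polynomial, and it vanishes identically, so every sample point returns exactly zero.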
REDUCTION OF ORDER
Sometimes, for a given linear homogeneous equation, one may (somehow) find fewer independent
solutions than the order of the equation; in other words, one may get an incomplete solution.
For instance, we have seen in the preceding section (example 115) that the Taylor series method
may fail in this way. In cases like this, it is always possible to use the incomplete solution to get
a new differential equation of a lower order.
The method is a simple generalization of Lagrange’s method of variation of parameters.
One substitutes
x(t) = u(t) v(t),
x being the original unknown, and v the incomplete solution. Simplifying, one gets an equation
of lower order for u̇. If the original equation was second order, the resulting equation is first-order
linear, which may then be solved.
◮Example 118 Solve completely the equation (t + 1) ẍ − (3t + 4) ẋ + (2t + 3) x = 0 given that
it admits the solution v = et .
Solution: Let
x = u e^t,   ẋ = (u̇ + u) e^t,   ẍ = (ü + 2u̇ + u) e^t.
Substituting back and simplifying, we get:

(t + 1) ü − (t + 2) u̇ = 0.

Note that u has fallen off: this is the main feature of the method of reduction of order, and it may be used as a check on the calculations. In other words, if u doesn't fall off, there must be a mistake.
The last equation is separable; we find that

du̇/u̇ = (t + 2)/(t + 1) dt = [1 + 1/(t + 1)] dt,
ln |u̇| = t + ln |t + 1| + C,
u̇ = A e^t (t + 1),
u = A t e^t + B.

Hence, the complete solution is x = u e^t = A t e^{2t} + B e^t. ◭
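As a sanity check (a sketch using the sympy library, not part of the original notes), one may substitute the solution just found back into the equation of example 118:

```python
import sympy as sp

t, A, B = sp.symbols('t A B')

# x = u * e^t with u = A*t*e^t + B, as found by reduction of order
x = (A * t * sp.exp(t) + B) * sp.exp(t)

lhs = (t + 1) * sp.diff(x, t, 2) - (3 * t + 4) * sp.diff(x, t) + (2 * t + 3) * x
residual = sp.simplify(lhs)  # vanishes identically
```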
t^{3/2} ü − t^{3/2} u̇ = 0,
ü = u̇,
u̇ = A e^t  =⇒  u = A e^t + B.

Hence, the complete solution is x = uv = A √t e^t + B √t. ◭
◮Example 120 Solve completely the equation t2 ÿ − (t2 + 2t) ẏ + (t + 2) y = t4 given that the
associate homogeneous equation has a solution of the form z = t. (This is example 80.)
Solution: First of all, note that the associate homogeneous equation
t2 z̈ − (t2 + 2t) ż + (t + 2) z = 0.
may be also solved by the Taylor series method; it’s actually a good revision problem—it yields
the general solution. Do it as an exercise.
In this example, let’s pretend that the incomplete solution z = t has been found by inspec-
tion. Hence, we let y = u t, where u is the new unknown. Differentiating, it follows:
ẏ = u̇ t + u, ÿ = ü t + 2u̇.
Simplifying, we get:

t^3 ü − t^3 u̇ = t^4,   i.e.   ü − u̇ = t.

Note again that u has fallen off, as expected. We now have a first-order linear equation in u̇, which may be solved by the method of variation of parameters (see section 3.6). The letters u and v have already been used, though, so instead of (41) we write u̇ = p q. It follows that

ṗ q + p [q̇ − q] = t;

imposing that the expression in square brackets vanish gives q = e^t; then ṗ = t e^{−t}, and integration by parts yields

p = −t e^{−t} − e^{−t} + C.
Hence,
u̇ = p q = (−t e^{−t} − e^{−t} + C) · e^t = −t − 1 + C e^t.
One last integration yields
u = −(1/2) t^2 − t + C e^t + B,

which finally gives the general solution y = u t = −(1/2) t^3 − t^2 + C t e^t + B t. ◭
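A quick verification of this general solution (a sympy sketch, not part of the original notes):

```python
import sympy as sp

t, B, C = sp.symbols('t B C')

# General solution of example 120
y = -sp.Rational(1, 2) * t**3 - t**2 + C * t * sp.exp(t) + B * t

lhs = t**2 * sp.diff(y, t, 2) - (t**2 + 2 * t) * sp.diff(y, t) + (t + 2) * y
residual = sp.expand(lhs - t**4)  # should be identically zero
```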
EULER EQUIDIMENSIONAL EQUATIONS†

An Euler equidimensional equation is a linear equation of the form

c_n t^n x^{(n)} + . . . + c2 t^2 ẍ + c1 t ẋ + c0 x = 0,

where c0, c1, c2, etc. are constants. This equation is easy to recognize because the n-th derivative of the unknown is always multiplied by t^n.
An important property of the homogeneous Euler equation is that it always admits a solu-
tion of the form x = A tm , where m is a constant to be determined, and A is free. Substituting
one gets an equation for m, called indicial equation. The degree of the indicial equation always
matches the order of the differential equation. If the roots are all simple, then the general
solution is a linear combination of all the solutions found in this way.
◮Example 121 Solve the equation 2t2 ẍ − 9tẋ + 12x = 0.
Solution: Let x = tm ; differentiate with respect to t and substitute back into the equation. It
follows immediately:
2m(m − 1) t^{m−2+2} − 9m t^{m−1+1} + 12 t^m = 0.
† The term “Euler equations” in engineering has at least three different meanings, as there
are also Euler equations for rigid body motion, and Euler equations for ideal fluids. The Euler
equations discussed here are sometimes called equidimensional Euler or also Euler-Cauchy.
2m(m − 1) − 9m + 12 = 0,
2m2 − 11m + 12 = 0.
Solving for m, we obtain m = (11 ± √25)/4, i.e., m = 4 or m = 3/2. The general solution is therefore x(t) = A t^4 + B t^{3/2}. ◭
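The mechanics of example 121 can be checked symbolically (a sympy sketch, not part of the original notes): solve the indicial equation, then substitute each power of t back into the differential equation.

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)

# Indicial equation of 2t²ẍ − 9tẋ + 12x = 0
roots = sp.solve(2 * m * (m - 1) - 9 * m + 12, m)

def ode(f):
    """Left-hand side of the Euler equation, applied to a candidate f(t)."""
    return 2 * t**2 * sp.diff(f, t, 2) - 9 * t * sp.diff(f, t) + 12 * f

# Each root of the indicial equation gives a power solution t^m
residuals = [sp.simplify(ode(t**r)) for r in roots]
```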
This equation is cubic as expected, because the differential equation is third-order. However,
it’s easy to spot that a term (m − 1) may be immediately factored out:
There are two possible problems with this method: complex roots and multiple roots. Complex
roots are handled by means of Euler’s formula, as next example shows.
Its solutions are m = −1 ± 3i; hence the general solution of this differential equation has the form y = A t^{−1+3i} + B t^{−1−3i}. To interpret the imaginary powers, substitute t = e^{ln t} and use Euler's formula. This yields:

t^{3i} = (e^{ln t})^{3i} = e^{3i ln t} = cos(3 ln t) + i sin(3 ln t)

and

t^{−3i} = (e^{ln t})^{−3i} = e^{−3i ln t} = cos(3 ln t) − i sin(3 ln t).
So, the general solution may be written y = At−1 cos(3 ln t) + Bt−1 sin(3 ln t). ◭
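The statement of this example does not survive intact in these notes; an Euler equation whose indicial equation m(m−1) + 3m + 10 = m² + 2m + 10 = 0 has exactly the roots m = −1 ± 3i is t²ÿ + 3tẏ + 10y = 0 (an assumed reconstruction, used here only for illustration). A sympy sketch checking the real form of the solution against it:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Assumed equation: t²ÿ + 3tẏ + 10y = 0 (indicial roots m = −1 ± 3i)
y = sp.cos(3 * sp.log(t)) / t
residual = sp.simplify(t**2 * sp.diff(y, t, 2) + 3 * t * sp.diff(y, t) + 10 * y)
```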
When the equation for m has multiple roots, the methods explained above yield an incomplete
solution. However, reduction of order is always an option.
◮Example 124 Solve completely the equation t2 ẍ − 5tẋ + 9x = 0.
Solution: Setting x = tm and proceeding like in examples 121–123, we get:
m2 − 6m + 9 = 0.
The indicial equation has the double root m = 3. Therefore, x = t3 is a solution. We need a
second solution; to find it, we let x = t3 u(t) and proceed by reduction of order. Substituting
ẋ = 3t^2 u + t^3 u̇ and ẍ = 6t u + 6t^2 u̇ + t^3 ü and simplifying, we get the equation
u̇ + t ü = 0,
which is separable:
∫ du̇/u̇ = − ∫ dt/t   =⇒   u̇ = A/t.
Integrating again, we get
u = A ln |t| + B,
and finally x = A t3 ln |t| + B t3 , which is a linear combination of two independent solutions,
i.e., the general solution. ◭
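Both solutions of example 124 are easy to verify symbolically (a sympy sketch, not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# t³ and t³ ln t should both solve t²ẍ − 5tẋ + 9x = 0
residuals = [
    sp.simplify(t**2 * sp.diff(sol, t, 2) - 5 * t * sp.diff(sol, t) + 9 * sol)
    for sol in (t**3, t**3 * sp.log(t))
]
```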
It may be shown that if the indicial equation has a double root m = k, then the corresponding
Euler equation has always two solutions of the form tk and tk ln t. We’ll look into the proof,
because it is a good revision example in the calculus of several variables.
Consider a generic 2nd-order Euler equation:
c2 t2 ẍ + c1 t ẋ + c0 x = 0
where c2 , c1 and c0 are constant. We want to see if it has a solution of the form x = tm ln t.
This means we want to see under what conditions the equation
c2 t^2 d^2(t^m ln t)/dt^2 + c1 t d(t^m ln t)/dt + c0 (t^m ln t) = 0
holds identically for every t. The crucial point is the observation that
t^m ln t = ∂(t^m)/∂m ;
hence the equation above may be written

c2 t^2 d^2/dt^2 (∂t^m/∂m) + c1 t d/dt (∂t^m/∂m) + c0 ∂t^m/∂m = 0.
Since differentiation with respect to m and differentiation with respect to t are interchangeable,
this may also be written
∂/∂m [ c2 t^2 d^2(t^m)/dt^2 + c1 t d(t^m)/dt + c0 t^m ] = 0,
and hence
∂/∂m [ c2 m(m − 1) t^m + c1 m t^m + c0 t^m ] = 0.    (49)
Now, if the indicial equation has two simple roots m = k and m = l, then it may be written c2 (m − k)(m − l) = 0. If, however, it has only one double root m = k, then it must have the form c2 (m − k)^2 = 0. In the latter case, equation (49) becomes

∂/∂m [ c2 (m − k)^2 t^m ] = 0,

i.e.

2c2 (m − k) t^m + c2 (m − k)^2 t^m ln t = 0.
It’s evident that the left-hand side is identically zero if m = k. Therefore the function x = tk ln t
fits the equation, i.e., it is a solution. As an exercise, show that this is not true in the case of
two simple roots.
◮Conclusion: If m = k is a double root of the indicial equation, then the Euler equation has
two independent solutions of the form tk and tk ln t. ◭
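The two key facts used in this proof can be checked mechanically (a sympy sketch, not part of the original notes): the observation that ∂(t^m)/∂m = t^m ln t, and the vanishing of the m-derivative of c2 (m − k)² t^m at the double root m = k.

```python
import sympy as sp

t, m, k, c2 = sp.symbols('t m k c2', positive=True)

# The crucial observation: differentiating t^m with respect to m gives t^m ln t
difference = sp.simplify(sp.diff(t**m, m) - t**m * sp.log(t))

# Equation (49) in the double-root case, evaluated at m = k
lhs = sp.diff(c2 * (m - k)**2 * t**m, m).subs(m, k)
```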
This proof may be easily adapted to Euler equations of 3rd-order (where the indicial equation
may have three simple roots, or a double root plus a simple root, or one triple root), and higher
order. The details are essentially the same.
For instance, it may be shown that if m = k is a triple root, then the Euler equation has
three independent solutions of the form tk , tk ln t and tk (ln t)2 . Similarly, if m = k is a quadruple
root, then one gets the expressions listed above, plus a fourth solution of the form tk (ln t)3 . The
rule is extended in the same way to roots of higher algebraic multiplicity.
m(m − 1)(m − 2) + m − 1 = 0,
m3 − 3m2 + 3m − 1 = 0,
◮Example 127 Find all the solutions of t^4 x^{(4)} + 4t^3 ẍ˙ + t^2 ẍ + t ẋ − x = 0 [for t > 0].
Solution: We get the indicial equation

m(m − 1)(m − 2)(m − 3) + 4m(m − 1)(m − 2) + m(m − 1) + m − 1 = 0.
Occasionally, if the coefficients are polynomials in t, the Laplace transform may be used because
multiplication of x(t) by t corresponds to differentiation of X(s). In this way, a differential
equation of order n for x(t) is turned into a differential equation of order m for X(s), where m
is the highest power of t appearing in the original equation. If m < n this may be a step toward
the solution.
Using the correspondence

t f(t) ↔ − dF/ds,

we get

− d/ds [s^2 X − sa − b] + d/ds [sX − a] − (sX − a) + X = 0.

It follows that

−2sX − s^2 X′ + a + X + sX′ − sX + a + X = 0,
(s^2 − s) X′ + (3s − 2) X = 2a.
We have obtained a first-order linear equation for X, which may be solved by the method of
variation of parameters: writing
X(s) = U (s)V (s)
and substituting, it follows
s(s − 1) U′V + [ s(s − 1) V′ + (3s − 2) V ] U = 2a.

Imposing that the expression in square brackets vanish gives V = 1/[s^2 (s − 1)]; then

U′ = 2as,
U = as^2 + c,

and hence

X = UV = (as^2 + c) / [(s − 1) s^2] = (a + c)/(s − 1) − c/s − c/s^2.
Since a has not been specified anyway, we may rename a + c and c as new arbitrary constants c1 and c2; finally, we get

x = L^{−1}X = c1 e^t − c2 (t + 1).
◭
A REVISION EXAMPLE
◮Example 129 Find the general solution of t2 ÿ − 2y = ln t.
Solution: This is a non-homogeneous equation. The associate homogeneous equation is
t2 z̈ − 2z = 0,
which is of the Euler type. It has solutions of the form tm , where m is given by the indicial
equation m(m − 1) − 2 = 0. Solving it, we find m = 2 or m = −1.
We proceed by reduction of order, setting y = uv, where v = t2 or v = t−1 . Either
substitution would work; let us say, y = t2 u. It follows that ÿ = t2 ü + 4tu̇ + 2u; substituting
into the original equation, we obtain:

t^2 (t^2 ü + 4t u̇ + 2u) − 2t^2 u = ln t.

Simplifying, we get:

t^4 ü + 4t^3 u̇ = ln t.
This is first-order linear in u̇, so (by the method of variation of parameters) we set u̇ = w x, which gives

t^4 ẇ x + w [ t^4 ẋ + 4t^3 x ] = ln t.

We impose that the expression in square brackets vanish, and we get an equation for x:

t ẋ + 4x = 0,

whence x = t^{−4}. Substituting back:

t^4 ẇ t^{−4} = ln t,

so ẇ = ln t and w = t ln t − t + C.
u̇ = w x = (t ln t − t + C) · t^{−4} = t^{−3} ln t − t^{−3} + C t^{−4}.

Integrating, we get:

u = −(1/2) t^{−2} ln t − (1/4) t^{−2} + (1/2) t^{−2} − (1/3) C t^{−3} + B
  = −(1/2) t^{−2} ln t + (1/4) t^{−2} + A t^{−3} + B,

where A and B are free. Finally, from y = uv, we get

y = u t^2 = −(1/2) ln t + 1/4 + A t^{−1} + B t^2.
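As usual, the result is easy to verify by substitution (a sympy sketch, not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
A, B = sp.symbols('A B')

# General solution of example 129
y = -sp.log(t) / 2 + sp.Rational(1, 4) + A / t + B * t**2

residual = sp.simplify(t**2 * sp.diff(y, t, 2) - 2 * y - sp.log(t))
```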
Systems of linear equations with constant coefficients may be handled by the Laplace transform
exactly as you would expect: by transforming the system of differential equations in x, y, z, . . .
into a system of algebraic equations in X, Y , Z, . . .. Initial conditions (if given) are accounted for
in a natural way; there is really nothing new to learn. The homogeneous and non-homogeneous
systems correspond, respectively, to homogeneous and non-homogeneous equations, and all the
comments made in sections 3.2–5 apply to systems as well.
(
ẋ + 2x + y = e−t
◮Example 130 Solve the system with initial values x(0+) = y(0+) = 0.
ẏ − x = 0,
Solution: Taking the Laplace transform of the system, we get
sX + 2X + Y = 1/(s + 1)
sY − X = 0.
This is a system of 2 algebraic equations with 2 unknowns, described by the augmented matrix

[ s+2   1  |  1/(s+1) ]
[  −1   s  |  0       ].
Solving for X, we find

X(s) = (s + 1 − 1)/(s + 1)^3 = 1/(s + 1)^2 − 1/(s + 1)^3,

and hence x(t) = t e^{−t} − (1/2) t^2 e^{−t}. From the second equation, Y = X/s, which yields y(t) = (1/2) t^2 e^{−t}. ◭
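Inverting X(s) = 1/(s+1)² − 1/(s+1)³ gives x(t) = t e^{−t} − t² e^{−t}/2. A sympy sketch (not part of the original notes) confirming that this candidate, together with ẏ = x and y(0+) = 0, satisfies the first equation of the system:

```python
import sympy as sp

t, tau = sp.symbols('t tau')

# Candidate solution read off from X(s) = 1/(s+1)² − 1/(s+1)³
x = (t - t**2 / 2) * sp.exp(-t)

# ẏ = x with y(0+) = 0, so y is the integral of x from 0 to t
y = sp.integrate(x.subs(t, tau), (tau, 0, t))

residual = sp.simplify(sp.diff(x, t) + 2 * x + y - sp.exp(-t))
```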
sΩ1 − 1 = −kΩ2
sΩ2 = kΩ1 .
By Cramer's rule,

Ω2(s) = det[ s  1 ; −k  0 ] / det[ s  k ; −k  s ] = k / (s^2 + k^2),
D(s) = (5s2 + 40) (0.8s2 + 4) − 16 = 4s4 + 52s2 + 144 = 4(s2 + 9)(s2 + 4).
and

Z(s) = 2(1 − s) / [s (s − 1)^2] = −2 / [s (s − 1)] = 2/s − 2/(s − 1).
It follows immediately that y(t) = 11et − 10 and z(t) = 2 − 2et . ◭
Differential equations of order n, and systems of n equations of order 1, are two sides of the
same coin, so to speak, as the following example illustrates.
◮Example 134 Write the third-order differential equation ẍ˙ − 3ẍ + 3ẋ − x = t2 et , as a system
of first-order equations.
Solution: Define ẋ = y, ẍ = ẏ = z; we get immediately
ẋ = y,
ẏ = z
ż = 3z − 3y + x + t2 et ;
It is possible to show that the eigenvalue equation for the square matrix appearing above is
formally the same as the characteristic equation D(s) = 0 of the original equation: specifically,
det | −λ    1    0  |
    |  0   −λ    1  |  = −(λ − 1)^3.
    |  1   −3  3−λ  |
Note that the only root is λ = 1, with algebraic multiplicity 3. Go back to example 100 for a solution of this problem. ◭
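The eigenvalue computation for this companion matrix can be checked directly (a sympy sketch, not part of the original notes); sympy's `charpoly` computes det(λI − A), which is +(λ − 1)³ here, the same roots as det(A − λI) = −(λ − 1)³ above.

```python
import sympy as sp

lam = sp.symbols('lam')

# Companion matrix of the system ẋ = y, ẏ = z, ż = 3z − 3y + x
A = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [1, -3, 3]])

char_poly = sp.factor(A.charpoly(lam).as_expr())   # det(λI − A)
eigs = A.eigenvals()                               # {eigenvalue: multiplicity}
```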
In exactly the same way, an equation of order n may be converted into a system of n first-
order equations (convince yourself of this). If, in addition, the equation is linear with constant
coefficients, then its characteristic equation is converted into the eigenvalue equation for the
corresponding matrix.
Therefore, the study of differential equations of order n may be seen as a special case of
the theory of linear systems of order 1: this is an important result from a theoretical point of
view, but in practice, solving a single equation of order n probably requires no more work than solving the corresponding n × n system. So, converting high-order equations into first-order systems is not necessarily a step forward towards the solution.
CONVOLUTION INTEGRAL EQUATIONS

Equations of the form

∫_0^t K(t − τ) x(τ) dτ = f(t),

where f(t) and K(t) are given functions and x(t) is unknown, represent an important class of integral equations. The function K(t) is called the kernel of the equation. Recalling that the Laplace transform of a convolution product is the (ordinary) product of the transforms, we see immediately that the Laplace transform is well suited for this kind of problem. Leaving aside questions such as existence and uniqueness of solutions and other theoretical features, let us see how the method works in practice.
◮Example 135 Solve the integral equation ∫_0^t e^{t−τ} x(τ) dτ = sin t.
Solution: The equation may be written
e^t ∗ x(t) = sin t;
X(s)/(s − 1) = 1/(s^2 + 1).
It follows immediately
X(s) = (s − 1)/(s^2 + 1),
and hence
x(t) = cos t − sin t. ◭
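The answer can be checked by computing the convolution directly (a sympy sketch, not part of the original notes):

```python
import sympy as sp

t, tau = sp.symbols('t tau')

# Candidate solution x(t) = cos t − sin t, convolved with the kernel e^t
x = sp.cos(tau) - sp.sin(tau)
conv = sp.integrate(sp.exp(t - tau) * x, (tau, 0, t))

residual = sp.simplify(conv - sp.sin(t))
```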
◮Example 136 Solve the integral equation x(t) = cos t + ∫_0^t (t − τ) x(τ) dτ.
Solution: Transforming the equation
we get
X(s) = s/(1 + s^2) + X(s)/s^2.
It follows immediately
X(s) = s^3/[(s^2 + 1)(s^2 − 1)] = (1/2) s/(s^2 + 1) + (1/2) s/(s^2 − 1),
and finally
x(t) = (1/2) cos t + (1/2) cosh t. ◭
◮Example 137 Solve the integral equation x(t) = t + ∫_0^t sin(t − τ) x(τ) dτ.
Solution: Transforming the equation
we get
X(s) = 1/s^2 + X(s)/(1 + s^2),
and hence
X(s) = (1 + s^2)/s^4 = 1/s^4 + 1/s^2.
Finally, the solution is
x(t) = (1/6) t^3 + t. ◭
All the “special techniques” discussed so far may be extended in a natural way to equations of
higher order and to systems. The next couple of examples deal with third-order equations.
◮Example 138 The equation t3 ẍ˙ − 3t2 ẍ + t(6 − t2 )ẋ − (6 − t2 )x = 0 admits an incomplete
solution of the form x = t. Find the general solution.
Solution: The incomplete solution is not hard to spot: the term (6 − t2 ) appearing twice, gives
it away. So, letting
x(t) = t u(t)
and differentiating, we get:

ẋ = u̇ t + u,   ẍ = ü t + 2u̇,   ẍ˙ = ü˙ t + 3ü.

It follows immediately:

t^3 (ü˙ t + 3ü) − 3t^2 (ü t + 2u̇) + t(6 − t^2)(u̇ t + u) − (6 − t^2) u t = 0.

Simplifying, we get:

t^4 ü˙ − t^4 u̇ = 0,
ü˙ − u̇ = 0.
Having started with a third order equation, we have obtained a second order equation in u̇,
which may be solved immediately:
u̇ = A cosh t + B sinh t,
u = A sinh t + B cosh t + C.

Hence, the complete solution is x = u t = A t sinh t + B t cosh t + C t. ◭
◮Example 139 Apply the Taylor series method to the equation t2 ẍ˙ + tẋ + x = 0.
Solution: Proceeding like in section 3.7, we write:
x = Σ_n a_n t^n,   ẋ = Σ_n a_n n t^{n−1},   ẍ = Σ_n a_n n(n−1) t^{n−2},   ẍ˙ = Σ_n a_n n(n−1)(n−2) t^{n−3}.
Substituting n = m + 1 in the first sum, and grouping the other two sums, we get:
Σ_{m=−1}^{∞} a_{m+1} (m+1)m(m−1) t^m + Σ_m a_m (m+1) t^m = 0.
0 · a1 + a0 = 0 [m = 0 ]
0 · a2 + 2a1 = 0 [m = 1 ]
am+1 (m + 1)m(m − 1) + am (m + 1) = 0 [m = 2, 3, . . . ]
The first equation from this set fixes a0 , regardless of the value of a1 :
a0 = 0.
So, a0 is not free. Similarly, the second equation fixes a1 , regardless of the value of a2 :
a1 = 0.
If m > 1, then certainly m(m − 1) is not zero; hence, simplifying, we obtain the algorithm
a_{m+1} = − a_m / [m(m − 1)]    [m = 2, 3, . . .]
which yields all other coefficients. However, there is no equation for a2 , hence a2 is free. For
instance, we get
a3 = − a2/(2·1),   a4 = + a2/(3·2·2·1),   a5 = − a2/(4·3·3·2·2·1), . . .

and hence

x = a2 ( t^2 − t^3/(2! 1!) + t^4/(3! 2!) − t^5/(4! 3!) + t^6/(5! 4!) − t^7/(6! 5!) + · · · ).
Note that the given equation must have three independent solutions, and we only found one. This means that the other solutions do not have a Taylor series expansion about t = 0. ◭
◮Example 140 A simple harmonic oscillator having mass m = 4 kg and spring constant κ = 16
N/m, initially at rest, is pulled by a constant force f0 = 8 N for 6 seconds and then released.
Describe the subsequent motion.
Solution: Let y be the displacement from equilibrium. The equation of motion is
mÿ + κy = f0 [u(t) − u(t − 6)] = { 0 if t < 0;  f0 if 0 < t < 6;  0 if t > 6 },
where u(t) is Heaviside’s step function. The initial conditions are: y(0+) = 0, ẏ(0+) = 0.
The transformed equation is
(s^2 + 4) Y(s) = 2 · (1 − e^{−6s})/s.
It follows:
Y(s) = 2(1 − e^{−6s}) / [s(s^2 + 4)].
Now, we observe that

L^{−1} [ 2/(s(s^2 + 4)) ] = ∫_0^t sin 2τ dτ = (1/2)(1 − cos 2t).
Applying the identity cos A − cos B = −2 sin (1/2)(A + B) sin (1/2)(A − B), we finally get:

y(t) = { 0 if t < 0;  sin^2 t if 0 < t < 6;  sin 6 · sin(2t − 6) if t > 6 }. ◭
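Each piece of this answer can be substituted back into the equation of motion 4ÿ + 16y = f (a sympy sketch, not part of the original notes): the middle piece must balance the constant force f0 = 8 N, and the final piece must solve the free equation.

```python
import sympy as sp

t = sp.symbols('t')

# 0 < t < 6: forced phase, 4ÿ + 16y = 8
y_mid = sp.sin(t)**2
res_mid = sp.simplify(4 * sp.diff(y_mid, t, 2) + 16 * y_mid - 8)

# t > 6: free phase, 4ÿ + 16y = 0
y_free = sp.sin(6) * sp.sin(2 * t - 6)
res_free = sp.simplify(4 * sp.diff(y_free, t, 2) + 16 * y_free)
```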
◮Example 141 Suppose the oscillator considered in example 140 is initially at 2 m from the
equilibrium point, moving toward it with speed 3 m/s. Starting at time t = 0, the oscillator is
pulled by a constant force f0 = 8 N for 6 seconds and then released. Describe the subsequent
motion.
Solution: The equation of motion is the same as in example 140; only the initial conditions differ.
So, let us call x the solution of this problem, and y the solution found in example 140.
The associate homogeneous equation is 4z̈ + 16z = 0, and its general solution is quickly
found to be
z = A cos 2t + B sin 2t
(convince yourself of this). So, the general solution of this problem is

x(t) = A cos 2t + B sin 2t + y(t),

where y(t) is the solution found in example 140. We now require that x(0+) = 2 and ẋ(0+) = −3. Recalling that y(0+) = 0 and ẏ(0+) = 0, these conditions yield A = 2 and B = −3/2. ◭
which yields

x(t) = −a · (sin t)/t.
Since in the process of finding X(s) the constant b dropped off, this solution is incomplete. So,
we write
v(t) = (sin t)/t
and
x = uv, =⇒ ẋ = u̇v + uv̇, =⇒ ẍ = üv + 2u̇v̇ + uv̈.
Replacing these expressions in the original equation, we get

t(ü v + 2u̇ v̇ + u v̈) + 2(u̇ v + u v̇) + t u v = 0,

or

ü t v + 2u̇ (t v̇ + v) + u (t v̈ + 2v̇ + t v) = 0.
Now, use the fact that v = sin t/t is a solution of the equation, i.e. that
tv̈ + 2v̇ + tv = 0.
Simplifying, we get

ü/u̇ = −2(v̇ t + v)/(v t) = −2 cos t / sin t.

This is immediately integrable:

ln |u̇| = −2 ln |sin t| + const   =⇒   u̇ = A/sin^2 t   =⇒   u = −A cot t + B,

and hence x = uv = −A (cos t)/t + B (sin t)/t.
We now have two independent solutions, namely sin t/t and cos t/t. Note that the second
solution, cos t/t, does not have a Laplace transform [the integral (3) would diverge]: this is why
we failed to find it by such a method. ◭
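Both solutions satisfy the equation tv̈ + 2v̇ + tv = 0 quoted above; a sympy sketch (not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# sin(t)/t and cos(t)/t should both solve t v̈ + 2 v̇ + t v = 0
residuals = [
    sp.simplify(t * sp.diff(v, t, 2) + 2 * sp.diff(v, t) + t * v)
    for v in (sp.sin(t) / t, sp.cos(t) / t)
]
```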
◮Example 143 Solve the system of integral equations

x(t) = e^t + ∫_0^t x(τ) dτ − ∫_0^t e^{t−τ} y(τ) dτ,
y(t) = −t − ∫_0^t (t − τ) x(τ) dτ + ∫_0^t y(τ) dτ.
Solution: Transforming the system, we get
X(s) = 1/(s − 1) + X(s)/s − Y(s)/(s − 1),
Y(s) = −1/s^2 − X(s)/s^2 + Y(s)/s.
Solving for X and Y, we find

X(s) = [s(s − 1) + 1] / [(s − 1)^3 − 1] = (s^2 − s + 1) / [(s − 2)(s^2 − s + 1)] = 1/(s − 2),

Y(s) = [−(s − 1)^2/s − 1] / [(s − 1)^3 − 1] = −(s^2 − s + 1) / [s(s − 2)(s^2 − s + 1)] = (1/2)/s − (1/2)/(s − 2).
Finally, we get

x(t) = e^{2t},
y(t) = 1/2 − (1/2) e^{2t}. ◭
114 Tutorial Problems Linear Differential Equations
PROBLEMS
34. Find the general solution of the following linear ODEs with constant coefficients.
(a) ẍ − 7ẋ + 10x = 0 (c) ẍ˙ − 6ẍ + 12ẋ − 8x = 0
(b) x(4) + 8x(2) + 16x = 0 (d) 4ẍ − 8ẋ + 1 = 0
35. Find the general solution of the following linear ODEs with constant coefficients.
(a) x(6) + 300x(4) + 30 000 x(2) + 1 000 000 x = 0
(b) x(8) − 8x(4) + 16x = 0
(c) ẍ˙ + 8x = 0
(d) ẍ˙ − ẍ + 81ẋ − 81x = 0
36. Find the general solution of the following linear ODEs with constant coefficients.
(e) ẍ − x = et (g) x(4) + 4x(3) + 6x(2) + 4x(1) + x = e−t sin t
(f) ẍ + ẋ − 6x = t e2t (h) ẍ˙ + 11ẍ − ẋ − 11x = e−11t
39. For the following problems, use the Green’s function method (39) to find the particular
solution that satisfies the initial conditions y(0+) = ẏ(0+) = 0.
(a) ÿ − y = tanh t, (c) ÿ − y = 1/ cosh3 t,
(b) ÿ + y = 1/ cos t, (d) ÿ + y = 1/(1 + cos t).
44. Use the Taylor series method to solve the following equations.
(a) ẍ − x = 0 (d) ẍ − t2 ẋ − 2t x = 0
(b) t ÿ + ẏ − 4t y = 0 (e) t2 ÿ − 2t ẏ + (2 + t2 )y = 0
2
(c) (t − t ) z̈ − 3ż + 2z = 0 (f) z̈ − t ż + 6z = 0
Reduction of Order
45. Find the general solution of (t − 4)ẍ − (t − 3)ẋ + x = 0, given that it has an incomplete
solution of the form x = et .
46. Find the general solution of ẍ + (1 − 2 tanh2 t) x = 0, given that it has an incomplete
solution of the form x = 1/ cosh t.
47. Find the general solution of ẍ t2 cos t + ẋ t(t sin t − 2 cos t) + x (2 cos t − t sin t) = 0, given
that it has an incomplete solution of the form x = t.
48. Find the general solution of ẍ cosh2 t + 2x = 0, given that it has an incomplete solution
of the form x = tanh t.
49. Verify that the equation tẍ − (t + 2)ẋ + 3x = 0 has a solution of the Rform x = t3 . Hence,
using reduction of order, show that the general solution is x = At3 + Bt3 (et /t4 ) dt.
50. Find the general solution of t3 ẍ˙ − 3t2 ẍ + t(6 + t2 )ẋ − (6 + t2 )x = 0, given that it has an
incomplete solution of the form x = t.
Euler Equidimensional Equations
54. Take the Laplace transform of the following equations, solve for X(s) and hence find x(t).
(a) tẍ + (t − 1)ẋ − x = 0; x0 = 5, ẋ0 = −5.
(b) (2t + 1)ẍ − 2ẋ − (2t + 3)x = 0; x0 = 0, ẋ0 = 1.
55. Take the Laplace transform of the following equations, solve for X(s) and hence find the
general solution.
(a) tẍ − 2ẋ − tx = 0
(b) tẍ˙ − ẍ − tẋ + x = 0.
56. Solve problem 54(a) by the Taylor series method and verify that you get the same answer.
Systems of ODEs
57. Solve
the following initial-value problems.
(a) ẋ + y = 0, x0 = 1;  ẏ + x = 0, y0 = −1
(b) ẋ = −2x + y, x0 = 1;  ẏ = −5x + 4y, y0 = 3
(c) ẋ = 5x − y, x0 = 2;  ẏ = 3x + y, y0 = −1
(d) ẋ − ẏ − 2x + 2y = 1 − 2t;  ẍ + 2ẏ + x = 0;  x0 = ẋ0 = 0, y0 = 0
58. Solve the following initial-value problems.
(a) ẋ = −2x − 2y − 4z, x0 = 1;  ẏ = −2x + 2y − 2z, y0 = 1;  ż = 5x + 2y + 7z, z0 = 1
(b) ẋ − x + ẏ + 2y = 1 + e^t, x0 = 0;  ẏ + 2y + ż + z = 2 + e^t, y0 = 0;  ẋ − x + ż + z = 3 + e^t, z0 = 2
Convolution Integral Equations
ANSWERS
38 (a) x = { 0 if t ≤ 0;  (1/9)(1 − cos 3t) if 0 < t < 2;  (1/9)(cos 3t · cos 6 + sin 3t · sin 6 − cos 3t) if t ≥ 2 }.
(b) x = { 0 if t ≤ 0;  (e^{−2t} − 1 + 2t)/4 if 0 < t < 1;  e^{−2t}(1 + e^2)/4 if t ≥ 1 }.
39 (a) y = −sinh t + cosh t · arctan(sinh t)
   (b) y = t sin t + cos t · ln |cos t|
   (c) y = (1/2) sinh t · tanh t
   (d) y = sin t · (t − tan (1/2)t) + cos t · ln[1/(2 + 2 cos t)]
40 (a) ct + t^2
   (b) ct^3 − e^{1/t}(t − 2t^2 + 2t^3)
   (c) 2/√(t − 1) + c/√(t^2 − 1)
   (d) c e^{tanh t} − tanh^2 t − 2 tanh t − 2
   (e) (t − t^2 + c)/√(t^2 + 1)
   (f) arctan t − 1 + c e^{−arctan t}
41 (a) ln |1 + t| + (1/2) t^2/(1 + t)^3
   (b) (1/2) t^3 + 3t^2 − 2t ln t + (1/2) t
   (c) t tan t + 1 + 7/cos t
   (d) t/ln t − (t − 3)/(ln t)^2
43 (a) x = t^2 √(C − 3/t^2)
   (b) x = √[sinh t/(cosh t − t sinh t + C)]
   (c) x = [t/(C e^{cos t} + 3)]^{1/3}
44 (a) x = a0 (1 + t^2/2! + t^4/4! + · · ·) + a1 (t + t^3/3! + t^5/5! + · · ·) = a0 cosh t + a1 sinh t
   (b) y = a0 (1 + t^2/(1!)^2 + t^4/(2!)^2 + t^6/(3!)^2 + · · ·); the second solution cannot be found by the Taylor series method alone.
   (c) z = a0 (1 + (2/3)t + (1/3)t^2) + a4 (t^4 + 2t^5 + 3t^6 + 4t^7 + 5t^8 + · · ·).
   (d) x = a0 (1 + t^3/3 + t^6/(3·6) + t^9/(3·6·9) + · · ·) + a1 (t + t^4/4 + t^7/(4·7) + t^10/(4·7·10) + · · ·)
   (e) y = a1 (t − t^3/2! + t^5/4! − t^7/6! + · · ·) + a2 (t^2 − t^4/3! + t^6/5! − t^8/7! + · · ·) = a1 t cos t + a2 t sin t
   (f) z = a0 (1 − 3t^2 + t^4) + a1 (t − 5t^3/3! + 15t^5/5! − 15t^7/7! − 15t^9/9! − 15·3 t^11/11! − 15·3·5 t^13/13! − · · ·)
45 x = A(3 − t) + Bet
46 x = A (sinh t + t/ cosh t) + B/ cosh t
47 x = At sin t + Bt
48 x = A(t tanh t − 1) + B tanh t
50 x = At cos t + Bt sin t + Ct.
51 (a) A√t + B/t
   (b) At^5 + Bt + C/t
   (c) (A + B ln |t|) · t^{−2}
   (d) A cos(7 ln |t|) + B sin(7 ln |t|)
   (e) t · [A + B cos(13 ln |t|) + C sin(13 ln |t|)]
   (f) A + B ln |t| + C (ln |t|)^2
52 (a) A (t + 4)^{−3} + B (t + 4)^{−2}, (b) A cos(π ln |t + 2|) + B sin(π ln |t + 2|).
53 cos(√2 ln |t|) · [A + B ln |t|] + sin(√2 ln |t|) · [C + D ln |t|].
54 (a) 5e−t + c(t − 1 + e−t ), where c is arbitrary,
(b) tet .
Chapter Four

NONLINEAR DIFFERENTIAL EQUATIONS

1. Nonlinearity
Chapter 3 of these notes has been a long introduction to linear ODEs. We conclude now with
a discussion of some nonlinear equations.
In the words of S. Ulam†, “to speak of nonlinear science is like calling zoology the study of
nonelephant animals”. It’s a famous comment, its point being that linearity is the exception,
not the rule: we live in a nonlinear world but we tend to forget it, perhaps because our mind is
primed to think linearly.
Examples are everywhere. Simple harmonic motion, which is a cornerstone of an engineer's
education, is based on the false premise that linear springs exist. Every spring will deviate from
Hooke’s law, if we push it too far or pull it too much. The reasons why most textbooks insist so
much on “classic” harmonic motion are, first of all, the elegance of its theory, but also (perhaps
mostly) because nobody can solve anharmonic motion equations, in general, in a simple form.
In thermodynamics you learnt that the internal energy of an ideal gas is given by the
equation U = CV T , where CV is constant: another classic linear law—but ideal gases don’t
exist. Too bad.
The laws of friction and Newton’s law of collision, which you saw in first year, are also good
examples of nonlinear phenomena that are treated as linear because an exact theory would be
too complicated.
From the days of Newton to the XX century, scientists and engineers used linearized models
and linear equations whenever this approximation could be justified. The explosive growth of
computers after 1975 marked a change of attitude, saw the birth of new disciplines, such as
chaos theory or catastrophe theory, and gave new life to old ones, like numerical analysis. In a
very short time, nonlinear mathematics grew into one of the most fertile research fields.
However, in this chapter we’ll see only some simple nonlinear ODEs that may be solved
exactly by the methods of classical analysis, i.e., the calculus you studied in first year. We’ll
also stay away from Lie’s theory, which an engineer could find useful, but would lead us too far
away.
Actually, you have already encountered a nonlinear equation, the Bernoulli equation of
problems 42–43, tucked away among linear equations where it didn’t belong. But that was
because the Bernoulli equation is so similar to the first-order linear, that it would be wasteful
to study it separately.
We’ll consider now three important types of nonlinear ODEs. Last comment before we
start: we switch to the notation where the variables are x and y, rather than x and t. This is
because with nonlinear equations it is better to give both variables the same “status” (without
specifying which one is dependent and which is independent), whereas the x-t notation is used
almost always when x depends on t.
In first year you were introduced to implicit functions defined by an expression of the form
F (x, y) = constant.
For example, the equation of an ellipse

x^2/a^2 + y^2/b^2 = 1

is implicit. It defines not one but two explicit continuous functions y(x):

y = b √(1 − x^2/a^2)   and   y = −b √(1 − x^2/a^2).
The implicit function theorem guarantees that an equation of the form

F(x, y) = constant

implicitly defines a function y(x) in the vicinity of a point (x0, y0) under the assumption that Fx and Fy are continuous functions of x and y, and Fy ≠ 0 at that point; furthermore
dy/dx = − Fx/Fy .
The easy way to remember this is by means of differentials: since F = constant, then the
variation of F is zero along the graph of y(x); hence, we write
dF = (∂F/∂x) dx + (∂F/∂y) dy = 0,
and then, formally dividing by dx, we get the formula for dy/dx.
Nonlinear Differential Equations 121
Obviously x and y may be interchanged, and the corresponding result for x(y) is
dx/dy = − Fy/Fx ;
we agree that either equation may also be written in the symmetric form

P dx + Q dy = 0,    (50)

which is equivalent to both, wherever the assumptions of the implicit function theorem are satisfied. Now suppose that there exists a function F(x, y) such that
∂F/∂x = P   and   ∂F/∂y = Q :    (51)

then

P dx + Q dy = 0   =⇒   (∂F/∂x) dx + (∂F/∂y) dy = 0   =⇒   dF = 0   =⇒   F = C.
4x^3 y^2 − 2x = ∂(x^4 y^2 − x^2)/∂x   and   2x^4 y = ∂(x^4 y^2 − x^2)/∂y ;
conditions (51) are satisfied by the function F (x, y) = x4 y 2 − x2 , hence the (implicit) solution is
x4 y 2 − x2 = C,
where C = constant. In this example one may go one step further, and write the explicit
solutions: √
y = ± √(C + x^2) / x^2 ,
but in general this last step may not be possible. ◭
Now recall that mixed second derivatives, if continuous, do not depend on the order of differentiation:

∂^2F/∂y∂x = ∂^2F/∂x∂y   =⇒   ∂P/∂y = ∂Q/∂x .
Hence, if the last condition is not satisfied, then there is no point in looking for a function
F satisfying conditions (51), because such a function cannot exist. So, the mixed-derivatives
condition
∂P/∂y = ∂Q/∂x ,    (52)
is necessary for an equation of the form (50) to be exact.
Moreover, if P and Q, besides satisfying (52), are also continuous, then it is not hard to show that this condition is also sufficient: in other words, a function F exists such that (51) holds, and the equation is exact. Let's see how this works in practice.
◮Example 145 Check that the equation (2xy e^{x^2} − 2x) dx + (e^{x^2} + 3y^2) dy = 0 is exact, and solve it.
Solution: First of all, we check for exactness. Clearly P and Q are continuous, and

∂P/∂y = ∂(2xy e^{x^2} − 2x)/∂y = 2x e^{x^2},   ∂Q/∂x = ∂(e^{x^2} + 3y^2)/∂x = 2x e^{x^2} :
hence (52) holds, and the equation is exact. Now, to find F (x, y) we may proceed as follows.
We note that
∂F/∂x = P = 2xy e^{x^2} − 2x;

integrating this equation with respect to x (i.e., treating y as a parameter) we get:

F(x, y) = ∫ (2xy e^{x^2} − 2x) dx = y e^{x^2} − x^2 + “a constant”.
Careful now: “a constant” here means “any expression that does not depend on x”, because
any function of y alone remains constant if only x is varied. So, we rewrite the last result as
F(x, y) = y e^{x^2} − x^2 + g(y),    (53)
where g(y) is a function of y alone. In this way, we’ve made sure that the first of conditions (51)
is satisfied, but g is still undetermined. To find it, we require that the second of conditions (51),
i.e., ∂F/∂y = Q, is also satisfied. It follows that
∂/∂y [ y e^{x^2} − x^2 + g(y) ] = e^{x^2} + 3y^2,
e^{x^2} + g′(y) = e^{x^2} + 3y^2,
g′(y) = 3y^2.
† Nicholas Bernoulli (1687-1759), a Swiss who lectured at Padua (Italy); not to be confused
with his cousin Nicholas Bernoulli (1695-1726), who discovered the St. Petersburg paradox. The
Bernoulli differential equation, which you saw in problems 42–43, is named after their uncle
Jakob Bernoulli (1654-1705).
We see that the last equation does not contain x anymore. Hence, by a trivial integration with
respect to y, we get
g(y) + C = y 3 ,
where C is a true constant. Combining our results, we finally get the (implicit) general solution

y e^{x^2} − x^2 + y^3 = C. ◭
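Conditions (51) and (52) for this example can be checked symbolically (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
P = 2 * x * y * sp.exp(x**2) - 2 * x
Q = sp.exp(x**2) + 3 * y**2
F = y * sp.exp(x**2) - x**2 + y**3

exactness = sp.simplify(sp.diff(P, y) - sp.diff(Q, x))   # condition (52)
res_P = sp.simplify(sp.diff(F, x) - P)                   # first of (51)
res_Q = sp.simplify(sp.diff(F, y) - Q)                   # second of (51)
```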
As a rule, after one has established that a given equation is exact, there are several avenues for
finding F (x, y), all pretty much equivalent.
The procedure used in example 145 went through three steps: first, integrate P with respect
to x, up to an unknown function g(y). Second, take the partial derivative of this integral with
respect to y, and compare the result with Q; the resulting equation must contain only y (if it
doesn’t, it’s a flag for a mistake in the calculations). Third, integrate g′ (y) to find g(y).
Alternatively, one may integrate Q with respect to y, up to an unknown function g(x), and
then take the partial derivative of this integral with respect to x, comparing the result with
P . The resulting equation must contain only x, and one may therefore find g(x) by integrating
g′ (x). This procedure is the “mirror image” of the first one.
One may also integrate P with respect to x, up to a function g(x), and then Q with respect
to y, up to a function h(y). Comparing the two results, which must be equivalent, one determines
g(x) and h(y).
where g(x) depends on x alone. Differentiating this result with respect to x, and comparing the
result with P , we get:
∂/∂x [ (1/3)(x^2 + y^2)^{3/2} − xy + g(x) ] = x √(x^2 + y^2) − x − y,
x √(x^2 + y^2) − y + g′(x) = x √(x^2 + y^2) − x − y,
g′(x) = −x.
g(x) + C = −(1/2) x^2,
Exact differential equations are closely connected to conservative vector fields. The link becomes
clear if one considers a two-dimensional vector field f defined as
f = P ı̂ + Q ̂,
∇ × f = 0, i.e., (Qx − Py ) k̂ = 0;
the last condition is equivalent to (52). Furthermore, if f physically corresponds to a force, then
the expression P dx + Qdy represents the “infinitesimal” work done by f on a particle moving
from (x, y) to (x + dx, y + dy); the solutions of equation (50)
P dx + Q dy = 0,
INTEGRATING FACTORS
An unpleasant feature of exact equations is that if they are altered in an insignificant way, they
may lose their exactness. For example, consider the equation
y′ = y,

which may be written in the exact form

dx − dy/y = 0.

If this equation is multiplied through by y, one gets

y dx − dy = 0,

which is not exact anymore (here Py = 1 but Qx = 0). Dividing through by y,

y^{−1} · (y dx − dy) = 0,
one obtains the exact equation again. This example is purely academic because the equation
y ′ = y is so easy that we can solve it by inspection; but the point is important.
It may be shown that any equation of the form (50) may, under reasonable assumptions,
be made exact if it’s multiplied through by an appropriate function µ(x, y). Unfortunately, the
same theorem that guarantees the existence of µ does not say a word about how to find it.
◮Example 148 Show that µ(x, y) = x is an integrating factor for the equation (2y − 3x) dx +
x dy = 0, and solve it.
Solution: The equation
(2y − 3x) dx + x dy = 0
is not exact because Qx = 1 and Py = 2. However, if it’s multiplied through by x, one gets
(2xy − 3x2 ) dx + x2 dy = 0.
Now, Qx = 2x, Py = 2x, and the equation has become exact; hence µ = x is an integrating
factor. We proceed by integrating Q with respect to y:
\[ F(x, y) = \int x^2\,dy = x^2 y + g(x). \]
Differentiating with respect to x and comparing with P gives $2xy + g'(x) = 2xy - 3x^2$. It
follows that $g'(x) = -3x^2$, $g(x) + C = -x^3$, and finally that $F(x, y) = x^2 y - x^3 = C$. Note
that in this example one can write an explicit solution in the form y = C/x2 + x. ◭
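The explicit solution can be verified by direct substitution; a quick numerical sketch (the residual helper and the sample values of x and C are my own):

```python
# Residual check for example 148: substitute y = C/x^2 + x into
# (2y - 3x) + x * dy/dx, which should vanish identically for every C.
def residual(x, C, h=1e-6):
    y = lambda t: C / t ** 2 + t                 # explicit solution
    dydx = (y(x + h) - y(x - h)) / (2 * h)       # numerical derivative
    return (2 * y(x) - 3 * x) + x * dydx

worst = max(abs(residual(x, C))
            for x in (0.5, 1.0, 2.0) for C in (-1.0, 0.0, 3.0))
```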
◮Example 149 Show that the equation (xy−2y 2 ) dx+(3xy−x2 ) dy = 0 admits the integrating
factor µ(x, y) = 1/xy 2 ; find the general solution.
Solution: The equation
(xy − 2y 2 ) dx + (3xy − x2 ) dy = 0
is not exact because Py = x − 4y and Qx = 3y − 2x. However, if it’s divided through by xy 2 ,
one gets
\[ \frac{xy - 2y^2}{xy^2}\,dx + \frac{3xy - x^2}{xy^2}\,dy = 0, \]
\[ \left( \frac{1}{y} - \frac{2}{x} \right) dx + \left( \frac{3}{y} - \frac{x}{y^2} \right) dy = 0. \]
Written in this form, the equation is exact because Py = −1/y 2 and Qx = −1/y 2 . Integrating
P with respect to x, we get:
\[ F(x, y) = \int \left( \frac{1}{y} - \frac{2}{x} \right) dx = \frac{x}{y} - 2\ln|x| + g(y). \]
Differentiating with respect to y and comparing with Q,
\[ -\frac{x}{y^2} + g'(y) = \frac{3}{y} - \frac{x}{y^2}, \]
and hence that g′ (y) = 3/y, g(y) + C = 3 ln |y|, and finally that
\[ \frac{x}{y} - 2\ln|x| + 3\ln|y| = C. \]
◭
and the equation is exact. Integrating (by parts) Q with respect to y, we get that
◮Example 151 The first principle of thermodynamics says that if heat enters a system at a
rate Q̇, then
Q̇ = U̇ + Ẇ ,
where U̇ is the rate at which the internal energy increases, and Ẇ is the rate at which the system
is doing work. (These quantities have the same dimensions as power, so they could be measured
in watts.) Integrating with respect to time, we get
∆Q = ∆U + ∆W,
which is an equivalent (and perhaps more familiar) statement of the first principle. The disad-
vantage of this notation is that, since the time interval of integration may be made arbitrarily
small, one is tempted to write
\[ dQ \stackrel{?}{=} dU + dW. \]
But this is wrong, because dQ is not an exact differential. There is no guarantee that there is a
state function Q such that its differential is equal to dU + dW .
It may be shown, however, that an integrating factor for dQ always exists, and that it is
equal to 1/T . This, in turn, means that the path integral
\[ S = \int \frac{dU}{T} + \frac{dW}{T} \tag{54} \]
depends only on the initial and final states of the system, and not on the type of transformation
(such as adiabatic, isobaric, isothermal, etc) that connects them—as long as it is a reversible
transformation, obviously.
For example, consider an ideal gas: you have probably seen in your physics courses that
the internal energy of an ideal gas is $U = C_V T$, where T is the absolute temperature and $C_V$
is a constant. You also know that the work done by an expanding gas is $W = \int p\,dV$, where p
and V are pressure and volume, respectively. Substituting back, we get
\[ dQ \stackrel{?}{=} C_V\,dT + p\,dV; \]
but remember that p depends on T and V through the equation of state which, for an ideal gas,
is p = nRT /V . It follows that
\[ dQ \stackrel{?}{=} C_V\,dT + \frac{nRT}{V}\,dV, \]
and we see immediately that the right-hand side is not exact: from the exactness condition (52),
we get
\[ \frac{\partial C_V}{\partial V} = 0, \qquad \frac{\partial}{\partial T}\left(\frac{nRT}{V}\right) = \frac{nR}{V}, \]
and the two are not equal. Hence, there is no function Q(T, V ) such that (∂Q/∂T )V = CV and
(∂Q/∂V )T = nRT /V .
If however we multiply dQ by the integrating factor 1/T and then integrate, we get (54),
which for an ideal gas becomes
\[ S = \int \frac{C_V}{T}\,dT + \frac{nRT}{VT}\,dV = C_V \ln T + nR \ln V + \text{constant}, \]
where
\[ \frac{\partial S}{\partial T} = \frac{C_V}{T}, \qquad \frac{\partial S}{\partial V} = \frac{nR}{V}. \]
The exactness condition (52) holds because both $\partial^2 S/\partial V\,\partial T$ and $\partial^2 S/\partial T\,\partial V$ are equal to zero.
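The path dependence of the heat integral, versus the path independence of (54), can be seen numerically on the ideal gas by integrating along two different reversible paths between the same pair of states. A sketch (the numerical values of $C_V$, $nR$, and the two states are arbitrary test choices):

```python
import math

# Ideal-gas model from example 151; the numbers are arbitrary test values.
CV, nR = 1.5, 1.0
T1, V1, T2, V2 = 300.0, 1.0, 450.0, 2.0

# "Heat" integral of dQ = CV dT + (nRT/V) dV along two reversible paths.
# Path A: heat at constant volume from T1 to T2, then expand at constant T2.
Q_A = CV * (T2 - T1) + nR * T2 * math.log(V2 / V1)
# Path B: expand at constant T1 first, then heat at constant volume.
Q_B = nR * T1 * math.log(V2 / V1) + CV * (T2 - T1)

# Entropy integral (54): dS = CV dT/T + nR dV/V along the same two paths.
S_A = CV * math.log(T2 / T1) + nR * math.log(V2 / V1)
S_B = nR * math.log(V2 / V1) + CV * math.log(T2 / T1)
```

Here Q_A and Q_B differ (dQ is not an exact differential), while S_A and S_B agree: dividing by T has made the form exact.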
QUASI-EXACT EQUATIONS
If you thought that there was practically no way of guessing the integrating factors in the last
examples, you have a point. As we saw, the theorem that asserts the existence of integrating
factors does not provide any method for finding them; for a generic differential equation of the
form (50) we have to live with that.
There are exceptions, though. Among them, we’ll consider only the class of “quasi-exact”
equations: these are equations that admit an integrating factor that depends on one variable
only, but not on the other. For quasi-exact equations there is a simple procedure for finding µ.
Suppose—for the sake of the argument—that the integrating factor for the equation (50)
depends on x alone, i.e., assume that µ = µ(x). In other words, assume that the equation
\[ \frac{\partial(\mu P)}{\partial y} = \frac{\partial(\mu Q)}{\partial x} \]
holds with µ = µ(x). Carrying out the differentiations (µ passes through ∂/∂y because it does
not depend on y),
\[ \mu P_y = \mu' Q + \mu Q_x, \]
and hence
\[ \frac{\mu'}{\mu} = \frac{P_y - Q_x}{Q}. \]
Recall that we assumed µ depends only on x; hence, the left-hand side above depends on x
alone. Therefore, if the right-hand side depends on both x and y, the method fails. If, however,
(Py − Qx )/Q depends on x alone, then we may find µ by integration:
\[ \ln|\mu(x)| = \int \frac{P_y - Q_x}{Q}\,dx. \]
Make sure you understand this point: the integration of the right-hand side is meaningful
only if (Py − Qx )/Q depends on x alone.
In exactly the same way, one may see that if (Qx −Py )/P depends on y alone, then an integrating
factor is given by the equation
\[ \ln|\mu(y)| = \int \frac{Q_x - P_y}{P}\,dy. \]
In summary, an equation is quasi-exact if either (Py −Qx )/Q depends on x alone, or (Qx −Py )/P
depends on y alone. In the first case, µ = µ(x), and in the second case µ = µ(y).
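The quasi-exactness test lends itself to a rough numerical screening: sample $(P_y - Q_x)/Q$ on a small grid and see whether it varies with y. A sketch (the helper name, grids, and tolerances are my own; this probes, it does not prove):

```python
# Heuristic screening for quasi-exactness in x: estimate (P_y - Q_x)/Q with
# central differences and check that it does not change as y changes.
def quasi_exact_in_x(P, Q, xs, ys, h=1e-6, tol=1e-6):
    def ratio(x, y):
        Py = (P(x, y + h) - P(x, y - h)) / (2 * h)
        Qx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
        return (Py - Qx) / Q(x, y)
    return all(max(r) - min(r) <= tol
               for r in ([ratio(x, y) for y in ys] for x in xs))

# Example 148: (2y - 3x) dx + x dy = 0, where (P_y - Q_x)/Q = 1/x.
ok = quasi_exact_in_x(lambda x, y: 2 * y - 3 * x, lambda x, y: x,
                      xs=(0.5, 1.0, 2.0), ys=(-1.0, 0.5, 2.0))
# Example 155's equation fails the x-test, since there the ratio involves y.
bad = quasi_exact_in_x(lambda x, y: y, lambda x, y: -(x + 1 - y ** 2),
                       xs=(0.5, 1.0), ys=(0.5, 2.0, 3.0))
```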
◮Example 152 Find the integrating factor of example 148.
Solution: The equation (2y − 3x) dx + x dy = 0 is quasi-exact because
\[ \frac{P_y - Q_x}{Q} = \frac{2 - 1}{x} \]
depends only on x. Hence, the integrating factor is given by
\[ \ln|\mu| = \int \frac{dx}{x} = \ln|x| + C; \]
taking C = 0 gives $\mu = x$, as in example 148. ◭
You should convince yourself that the integration constant arising in the calculation of µ may
always be set equal to zero without loss of generality. Therefore, for the rest of this section we’ll
ignore it.
◮Example 153 Find an integrating factor of the equation (x−y 2 ) dx+2xy dy = 0, and solve it.
Solution: The equation is quasi-exact because
\[ \frac{P_y - Q_x}{Q} = \frac{-2y - 2y}{2xy} = -\frac{2}{x}; \]
therefore
\[ \ln|\mu| = -2\int \frac{dx}{x} = \ln x^{-2}. \]
An integrating factor is µ = x−2 ; dividing the original equation by x2 , we get:
\[ \left( \frac{1}{x} - \frac{y^2}{x^2} \right) dx + \frac{2y}{x}\,dy = 0, \]
\[ F(x, y) = \int \frac{2y}{x}\,dy = \frac{y^2}{x} + g(x), \]
\[ F_x = -\frac{y^2}{x^2} + g'(x) = \frac{1}{x} - \frac{y^2}{x^2}. \]
It follows that g′ (x) = 1/x, g(x) + C = ln |x|, and finally y 2 /x + ln |x| = C, which is the general
solution. ◭
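One can confirm that $y^2/x + \ln|x|$ is constant along solutions by integrating the equation numerically (a classic fourth-order Runge–Kutta sketch; the initial point, interval, and step count are arbitrary test choices):

```python
import math

# RK4 check that F(x, y) = y^2/x + ln|x| is a first integral of example 153,
# rewritten in explicit form as y' = (y^2 - x)/(2xy).
def f(x, y):
    return (y * y - x) / (2 * x * y)

def rk4(x, y, x_end, n=2000):
    h = (x_end - x) / n
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

F = lambda x, y: y * y / x + math.log(abs(x))
x0, y0 = 1.0, 2.0
drift = abs(F(3.0, rk4(x0, y0, 3.0)) - F(x0, y0))  # ~0 along the solution
```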
It follows that g′ (y) = 0, g(y) = C, and finally that e2x + 2ye3x = C, which is the general
solution. This one may be made explicit: y = 12 Ce−3x − 21 e−x . ◭
Py − Qx = 1 − (−1) = 2;
unfortunately
\[ \frac{P_y - Q_x}{Q} = \frac{2}{-(x + 1 - y^2)}, \]
which certainly is not a function of x alone. However, observe that P = y: therefore,
\[ \frac{Q_x - P_y}{P} = \frac{-2}{y} \]
is a function of y alone. Therefore, there is an integrating factor of the form µ = µ(y), given by
the equation
\[ \frac{\mu'(y)}{\mu(y)} = -\frac{2}{y}. \]
A simple integration yields ln |µ| = ln |y|−2 , i.e., µ = y −2 . Dividing the original equation by y 2 ,
we get:
\[ \frac{dx}{y} - \left( \frac{x}{y^2} + \frac{1}{y^2} - 1 \right) dy = 0, \]
which is exact. Integrating P with respect to x we find that
\[ F(x, y) = \frac{x}{y} + g(y). \]
Integrating Q with respect to y, we find that
\[ F(x, y) = \frac{x}{y} + \frac{1}{y} + y + h(x). \]
Comparing the two results, we conclude that
\[ g(y) = \frac{1}{y} + y \qquad \text{and} \qquad h(x) = 0; \]
hence
\[ \frac{x}{y} + \frac{1}{y} + y = C \]
is the general solution. In explicit form, this may be written x = Cy − y 2 − 1. ◭
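A quick substitution check of the explicit solution (the residual helper and sample values are my own):

```python
# The explicit solution of example 155 is x = Cy - y^2 - 1.  Substituted into
# y dx - (x + 1 - y^2) dy = 0, i.e. y * dx/dy - (x + 1 - y^2) = 0, the
# residual should vanish for every C.
def residual(y, C, h=1e-6):
    x = lambda t: C * t - t ** 2 - 1
    dxdy = (x(y + h) - x(y - h)) / (2 * h)
    return y * dxdy - (x(y) + 1 - y ** 2)

worst = max(abs(residual(y, C))
            for y in (0.5, 1.0, 2.0) for C in (-2.0, 0.0, 3.0))
```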
Nonlinear Differential Equations 131
3. Scale-Invariant Equations
If an equation retains the same form when x and y are scaled by the same amount, it is called
scale-invariant. A first-order scale-invariant equation is easy to recognize when it’s written in
the form
y ′ = f (y/x),
where f (y/x) is some function of the ratio y/x.
It is rather unfortunate that most books call these equations “homogeneous”, a choice that
can be confusing for a beginner since they have nothing to do with the homogeneous equations
we met in chapter 3. The name “scale-invariant” has been suggested, among others, by Bender
and Orszag.†
If a first-order equation is scale-invariant, then either of the substitutions
\[ u = \frac{y}{x} \qquad \text{or} \qquad u = \frac{x}{y} \]
reduces it to a separable equation.
◮Example 156 Show that the differential equation (2xy −y 2 ) dx−x2 dy = 0 is scale-invariant,
and solve it.
Solution: If one introduces the scaled variables
\[ x = aX, \qquad y = aY, \]
the equation becomes
\[ a^2(2XY - Y^2)\,a\,dX - a^2X^2\,a\,dY = 0, \]
which (after canceling the common factor a³) has the same form as the original. Note also that
the equation may be written as
\[ y' = \frac{2xy - y^2}{x^2} = 2\,\frac{y}{x} - \left(\frac{y}{x}\right)^2. \]
Substituting y = xu and y′ = u + xu′, one gets
\[ u + xu' = 2u - u^2, \]
\[ \frac{du}{u(1 - u)} = \frac{dx}{x}, \]
\[ \int \left( \frac{1}{u} - \frac{1}{u - 1} \right) du = \int \frac{dx}{x}; \]
Alternatively, one may write the equation as
\[ \frac{dx}{dy} = \frac{(x/y)^2}{2(x/y) - 1} \]
and substitute x = yu, dx/dy = u + y\,du/dy, which gives
\[ u + yu' = \frac{u^2}{2u - 1}. \]
From here onwards, follow the steps of method 1 (separate the variables, use partial fractions,
integrate with respect to y). Do it, as an exercise.
You may also observe that the original equation becomes a Bernoulli equation if formally
divided by dx: x dy/dx − 2y = −y 2 /x. See problem 42. ◭
◮Example 157 Solve the equation $xy' = y + \sqrt{x^2 + y^2}$.
Solution: That the equation is scale-invariant is evident when it’s rewritten as follows:
\[ y' = \frac{y + \sqrt{x^2 + y^2}}{x} = \frac{y}{x} + \sqrt{1 + \frac{y^2}{x^2}}. \]
Substituting y = xu, y ′ = u + xu′ , one gets immediately:
\[ u + xu' = u + \sqrt{1 + u^2}, \]
\[ \frac{du}{\sqrt{1 + u^2}} = \frac{dx}{x}. \]
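Integrating the separated equation gives $\operatorname{arsinh} u = \ln x + C$ for $x > 0$, i.e. $y = x \sinh(\ln x + C)$. This completion is mine, since the example breaks off here, but it can be verified directly:

```python
import math

# Candidate solution y = x*sinh(ln x + C) of x y' = y + sqrt(x^2 + y^2),
# valid for x > 0; obtained by integrating du/sqrt(1 + u^2) = dx/x.
def residual(x, C, h=1e-6):
    y = lambda t: t * math.sinh(math.log(t) + C)
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return x * dydx - (y(x) + math.hypot(x, y(x)))  # hypot = sqrt(x^2 + y^2)

worst = max(abs(residual(x, C))
            for x in (0.5, 1.0, 2.0) for C in (-1.0, 0.0, 1.0))
```

The residual vanishes because, for $x > 0$, $\sqrt{x^2 + y^2} = x\cosh(\ln x + C)$.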
4. Autonomous Equations
An equation is called autonomous if the independent variable is missing. If the equation is of
second order, the substitution p = dy/dx, combined with the chain rule, transforms it into a
first-order equation where p(y) is the unknown:
\[ \frac{dy}{dx} = p \implies \frac{d^2y}{dx^2} = \frac{dp}{dx} = \frac{dp}{dy}\,\frac{dy}{dx} = \frac{dp}{dy}\cdot p. \tag{55} \]
The best-known autonomous equation is almost certainly the simple harmonic motion equation:
ẍ + ω 2 x = 0, or (with the notation of this chapter) y ′′ + ω 2 y = 0.
◮Example 159 Solve the harmonic motion equation without using the methods of chapter 3.
Solution: Following (55), we let y ′ = p and hence y ′′ = p dp/dy. It follows:
\[ p\,\frac{dp}{dy} = -\omega^2 y, \]
\[ \int p\,dp = -\omega^2 \int y\,dy. \]
A simple integration yields $\tfrac12 p^2 = C - \tfrac12 \omega^2 y^2$. Defining a new constant A through the
equation $C = \tfrac12 \omega^2 A^2$, we get
\[ p = \frac{dy}{dx} = \pm|\omega|\sqrt{A^2 - y^2}, \]
\[ \int \frac{dy}{\sqrt{A^2 - y^2}} = \int \omega\,dx; \]
in the last step, ω is allowed negative values. Another simple integration gives
\[ -\arccos(y/A) = \omega x + \Phi \]
(where Φ = constant), which finally yields the classic solution y = A cos(ωx + Φ). ◭
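The classic solution can be checked with a central second difference (the values of A, ω, and Φ below are arbitrary test choices):

```python
import math

# Verify y = A cos(omega*x + Phi) against y'' + omega^2 y = 0, forming the
# second derivative numerically with a central second difference.
A, omega, Phi = 2.0, 3.0, 0.4
y = lambda x: A * math.cos(omega * x + Phi)

def residual(x, h=1e-4):
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return ypp + omega ** 2 * y(x)

worst = max(abs(residual(x)) for x in (0.0, 0.7, 1.9))
```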
◮Example 160 Solve the equation $y^3 y'' = 1$.
Solution: By (55), the equation becomes
\[ y^3 p\,\frac{dp}{dy} = 1, \]
\[ \int p\,dp = \int y^{-3}\,dy. \]
Integrating, we obtain:
\[ \tfrac12 p^2 = -\tfrac12 y^{-2} + C = -\tfrac12 y^{-2} + \tfrac12 a^2, \]
where a is a constant replacing C. It follows that
\[ p = \frac{dy}{dx} = \pm\sqrt{a^2 - y^{-2}}, \]
\[ \pm\int \frac{y\,dy}{\sqrt{a^2 y^2 - 1}} = \int dx. \]
Integrating again, we get that
\[ \pm\sqrt{a^2 y^2 - 1} = a^2 x + b, \]
where b is another constant, and finally that $a^2 y^2 - 1 = (a^2 x + b)^2$. ◭
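Solving the implicit solution above for $y > 0$ gives $y = \sqrt{(a^2x + b)^2 + 1}/a$, which can be checked against the equation $y^3 y'' = 1$ solved in this example (recall that, by (55), $y'' = p\,dp/dy$). The values of a and b below are arbitrary test choices:

```python
import math

# Implicit solution a^2 y^2 - 1 = (a^2 x + b)^2, solved for y > 0 and checked
# against y^3 y'' = 1 using a central second difference.
a, b = 1.3, 0.5
y = lambda x: math.sqrt((a * a * x + b) ** 2 + 1) / a

def residual(x, h=1e-4):
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return y(x) ** 3 * ypp - 1.0

worst = max(abs(residual(x)) for x in (-1.0, 0.0, 2.0))
```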
◮Example 161 Solve the autonomous equation $y\,y'' = (y')^2 + y^2$.
Solution: By (55), we get:
\[ y p\,\frac{dp}{dy} = p^2 + y^2. \]
This is a scale-invariant equation. Proceeding along the lines of examples 157–158 (but bearing
in mind that the variables are now p and y), we first rewrite the last equation as follows:
\[ \frac{dp}{dy} = \frac{p}{y} + \frac{y}{p} \tag{56} \]
(which shows the scale invariance). We then introduce a new variable u:
\[ p = uy \implies \frac{dp}{dy} = y\,\frac{du}{dy} + u, \]
and substitute back into (56):
\[ y\,\frac{du}{dy} + u = u + \frac{1}{u}. \]
Simplifying, we get:
\[ u\,du = \frac{dy}{y}, \]
\[ \tfrac12 u^2 = \ln|y| + C, \]
\[ |u| = \sqrt{2\ln|y| + 2C}, \]
\[ \pm\frac{p}{y} = \sqrt{2\ln|y| + 2C}. \]
But p = dy/dx; substituting back and rearranging, we get:
\[ \pm\int \frac{dy}{y\sqrt{2\ln|y| + 2C}} = \int dx. \]
Carrying out the integration on the left yields $\pm\sqrt{2\ln|y| + 2C} = x + B$; squaring both
sides and solving for $\ln|y|$,
\[ \ln|y| = \tfrac12 x^2 + Bx + \tfrac12 B^2 - C, \]
\[ y = \pm\exp(\tfrac12 x^2 + Bx + \tfrac12 B^2 - C), \]
which may be written compactly as
\[ y = A \exp(\tfrac12 x^2 + Bx), \]
where $A = \pm\exp(\tfrac12 B^2 - C)$ is an arbitrary nonzero constant. ◭
\[ \ln|p| = \tfrac12 \ln|1 + y^2| + C, \]
hence
\[ p = \frac{dy}{dx} = \pm\sqrt{1 + y^2}\,e^C. \]
Defining A = ±eC , and separating again the variables, one finds that
\[ \frac{dy}{\sqrt{1 + y^2}} = A\,dx. \]
arsinh y = Ax + B,
which may also be written y = sinh(Ax + B). Note that the explicit solution of this equation
can be written both in the form x = x(y) and y = y(x). ◭
PROBLEMS
Exact ODEs
61. Check that the following ODEs are exact and hence find the general solution.
(a) (3e3x y − 2x) dx + e3x dy = 0
(b) (6x5 y 3 + 4y 5 x3 + 21x2 ) dx + (3x6 y 2 + 5x4 y 4 − 10y) dy = 0
(c) (cos x + cos y) dx − x sin y dy = 0
(d) (1 + yexy ) dx + (2y + xexy ) dy = 0
Integrating Factors
62. Solve the following equations, using the suggested integrating factors.
(a) (3y + 4xy 2 ) dx + (2x + 3x2 y) dy = 0, µ = x2 y
(b) $y\,dx + x(1 + 3x^2y^2)\,dy = 0$, µ = 1/(xy)³
(c) y dx − (y 5 + y 3 x2 + x) dy = 0, µ = 1/(x2 + y 2 )
63. Show that the first order linear equation c1 (t) dy + c0 (t) y dt = f (t) dt
[go back to (40) in section 3.6] admits the integrating factor
\[ \mu(t) = \frac{\exp\left( \int (c_0/c_1)\,dt \right)}{c_1}. \]
Many books present this as the “best” method for solving first order linear ODEs.
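The claim in problem 63 can be spot-checked numerically. With $c_1 = 1 + t^2$, $c_0 = t$, $f = \sin t$ (my test choices, not from the notes), $\exp(\int c_0/c_1\,dt) = \sqrt{1 + t^2}$; writing the equation as $(c_0 y - f)\,dt + c_1\,dy = 0$, multiplying by µ must make the exactness condition hold:

```python
import math

# Problem 63's integrating factor, checked on c1 = 1 + t^2, c0 = t, f = sin t:
# mu = sqrt(1 + t^2)/(1 + t^2).  After multiplying (c0*y - f) dt + c1 dy = 0
# by mu, the condition d(mu*(c0*y - f))/dy = d(mu*c1)/dt must hold.
mu = lambda t: math.sqrt(1 + t * t) / (1 + t * t)
P = lambda t, y: mu(t) * (t * y - math.sin(t))   # coefficient of dt
Q = lambda t: mu(t) * (1 + t * t)                # coefficient of dy

def exactness_gap(t, y, h=1e-6):
    Py = (P(t, y + h) - P(t, y - h)) / (2 * h)
    Qt = (Q(t + h) - Q(t - h)) / (2 * h)
    return Py - Qt

worst = max(abs(exactness_gap(t, y))
            for t in (0.5, 1.0, 2.0) for y in (-1.0, 2.0))
```

Both partial derivatives come out as $t/\sqrt{1 + t^2}$, which is exactly why this µ works.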
Quasi-Exact Equations
64. The following equations admit an integrating factor that depends on x alone or y alone.
Find it, and hence solve the equation.
(a) (x2 − xy + y 3 + 2x − y) dx + (3y 2 − x) dy = 0
(b) (x4 + y 4 ) dx − xy 3 dy = 0
(c) 7xy dx + (x2 + y 2 + y) dy = 0
(d) (y + ey−x ) dx + (1 + xey−x ) dy = 0
Autonomous Equations
ANSWERS
61 (a) e3x y − x2 = C
(b) x6 y 3 + x4 y 5 + 7x3 − 5y 2 = C
(c) x cos y + sin x = C
(d) x + y 2 + exy = C
62 (a) $x^3y^2 + x^4y^3 = C$
(b) $3\ln|y| - 1/(2x^2y^2) = C$
(c) $\arctan(x/y) - \tfrac14 y^4 = C$
64 (a) $\mu = e^x$, $F(x, y) = e^x(x^2 - xy + y^3) = C$;
(b) $\mu = 1/x^5$, $F(x, y) = \ln|x| - y^4/(4x^4) = C$;
(c) $\mu = y^{-5/7}$, $F(x, y) = \tfrac72 x^2 y^{2/7} + \tfrac{7}{16} y^{16/7} + \tfrac79 y^{9/7} = C$;
(d) $\mu = e^x$, $F(x, y) = y e^x + x e^y = C$.
65 (a) $\cos(y/x) = 1 - 2/(Bx^2 + 1)$, where $B > 0$.
(b) $B^2x^4 = 2By^2 + 1$, where $B > 0$.
(c) x = y ln ln |y| + C .
(d) $xy = Ce^{y/x}$.
(e) $(x - y)y^2 = C(x + y)$.
(f) $y = \sqrt{|Ax^5 - x^4|}$.