
# MTH316 Lecture Note

AY 2011/12 S1
Ordinary Differential Equations
Instructor: Wang Desheng
Division of Mathematical Sciences
School of Physical and Mathematical Sciences
Nanyang Technological University
Singapore, 637371
Tel: 65-6513-7466
Email: desheng@ntu.edu.sg
http://www.ntu.edu.sg/home/desheng
Textbook:
Elementary Differential Equations and Boundary Value Problems, 9th Edition, by W. E. Boyce and R. C. DiPrima, John Wiley & Sons, Inc., 2008.
Note: This note complements the textbook and lecture slides.
Contents

Chapter 1. First-Order Differential Equations
1. Notation and Definitions
2. An Example of Modeling by First-Order DEs
3. The Geometry of First-Order DEs
4. Several Types of Solvable First-Order Differential Equations
4.1. Separable DEs
4.2. First-order linear equations
4.3. Bernoulli's equations
4.4. Exact DEs
4.5. Non-exact DEs with integrating factors
4.6. Homogeneous DEs
5. The Existence and Uniqueness Theorem
6. Reducible Second-Order DEs
6.1. Dependent variable y missing
6.2. Independent variable x missing
7. Solutions to Selected Exercises

Chapter 2. Second-Order Linear Equations
1. Basic Theoretical Results
1.1. Existence and uniqueness of solutions to IVP
1.2. Principle of superposition
1.3. Linear dependence/independence and Wronskian
2. Reduction of Order
3. Second-Order Homogeneous Linear DE with Constant Coefficients
4. Nonhomogeneous DE: Method of Undetermined Coefficients
5. Nonhomogeneous DE: Variation of Parameters
6. Solutions to Selected Exercises

Chapter 3. Higher Order Linear Differential Equations
1. Basic Theoretical Results
2. Reduction of Order
3. Linear Homogeneous DE with Constant Coefficients
4. Nonhomogeneous DE: Method of Undetermined Coefficients
5. Nonhomogeneous DE: Method of Variation-of-Parameters
6. Solutions to Selected Exercises

Chapter 4. The Laplace Transform
1. Laplace Transform and Inverse Laplace Transform
1.1. Definition of Laplace transform
1.2. Linearity of Laplace transform
1.3. Sufficient conditions for existence of Laplace transform
1.4. Inverse Laplace transform
2. Transformation of Initial Value Problems
3. Unit Step Functions and the Second Shifting Theorem
3.1. Unit step function
3.2. The second shifting theorem
4. Impulse Functions

Chapter 5. Systems of First-Order Linear Equations
1. Basic Theoretical Results
1.1. Principle of superposition
1.2. Existence and uniqueness theorem
1.3. Solution structure of homogeneous system
2. Homogeneous Linear Systems with Constant Coefficients
2.1. Non-defective coefficient matrix
2.2. Defective coefficient matrix
3. Nonhomogeneous Linear Systems
3.1. Diagonalization
3.2. Method of variation-of-parameters
Introduction

Nowadays, differential equations (DEs) have become the centerpiece of much of physics and engineering, and of many areas of mathematical modeling. Indeed, many of the principles, or laws, underlying the behavior of the natural world are statements or relations involving rates at which things happen. When expressed in mathematical terms, the relations are equations and the rates are derivatives. Equations involving derivatives are differential equations. A DE can be thought of as a mathematical model, or a mathematical language, that describes a physical process such as the motion of fluids, the flow of current in electric circuits, the dissipation of heat in solid objects, the propagation and detection of seismic waves, or the growth and decline of populations, among many others. Therefore, to understand such processes, it is necessary to know something about differential equations. The main purpose of this one-semester course is to introduce analytic techniques for solving some types of ordinary differential equations (ODEs). The topics to be covered are outlined as follows:
I. First-order differential equations
- Six types of solvable equations
- Two special types of higher-order equations

II. Second-order linear equations
- Basic theoretical results
- Homogeneous second-order linear equations
- Method of undetermined coefficients
- Method of variation of parameters

III. General higher-order linear equations
- Generalization and extension of the theory and methods for second-order linear DEs to general nth-order linear DEs

IV. Systems of first-order linear DEs
- Basic theoretical results
- Solution methods

V. The Laplace transform and series solutions of second-order linear DEs
- Basic properties of the Laplace transform and inverse Laplace transform
- Solution of initial value problems with discontinuous forcing functions
- Power series methods for some ODEs associated with special functions
CHAPTER 1
First-Order Differential Equations

In this chapter, we introduce solution techniques for several types of solvable first-order differential equations, and apply these methods to solve two special types of second-order differential equations. As preparation, we first introduce the basic concepts and provide one example to show how differential equations arise.
1. Notation and Denitions
Definition 1.1. A dierential equation, DE in short, is an equation involving an
unknown function and its derivatives. For example,
dy
dx
= 5x + 3, (or y

= 5x + 3), (1.1)
d
2
y
dx
2
+ 2x
dy
dx
+ y
2
= sin x, (or y

+ 2xy

+ y
2
= sin x), (1.2)
and

2
u(x, y)
x
2
+

2
u(x, y)
y
2
= cos(xy), (or u = cos(xy)). (1.3)
If the unknown function is a function of a single variable (e.g., y(t), y(x), ),
then the DE is an ordinary dierential equation (ODE), e.g., (1.1) and
(1.2).
A partial dierential equation (PDE) is one involving a function of two or
more variables, in which the derivatives are partial derivatives, for example, the
equation (1.3).
Another classication of dierential equations is based on the number of unknown
functions that are involved. If there is a single unknown function to be found, then one
equation is sucient. If there are two or more unknown functions, then a system of
3
4 1. FIRST-ORDER DIFFERENTIAL EQUATIONS
equations is required. For instance, the following two by two system,
_
x

y

## (t) = x(t) + 5y(t) + cos(t),

(1.4)
involves two unknown functions x(t) and y(t).
Definition 1.2. The order of a differential equation is the order of the highest derivative that appears in the equation. We see that equation (1.1) is a first-order ODE, equation (1.2) is a second-order ODE, and (1.3) is a second-order PDE.

In this course, we will restrict the discussion to ODEs. ODEs can be classified into linear and nonlinear ODEs.
Definition 1.3. Consider the nth-order ordinary differential equation in the general form
$$F\big(x, y, y', y'', \dots, y^{(n)}\big) = 0.$$
If $F$ is a linear function of $y, y', y'', \dots, y^{(n)}$, i.e., the equation takes the form
$$a_0(x)y^{(n)} + a_1(x)y^{(n-1)} + \cdots + a_n(x)y = f(x), \tag{1.5}$$
then it is linear; otherwise the equation is nonlinear.

As a rule, in a linear equation, the unknown function and its derivatives occur only to the first degree (as a linear function), and not as products or as arguments of other functions.
Example 1.1. The equations
$$y'' + xy' + \sin(x)\,y = e^x, \quad\text{and}\quad xy''' + 4x^2y' - \frac{2}{1 + x^2}\,y = 0,$$
are a linear second-order DE and a linear third-order DE, respectively, whereas the DEs
$$y'' + x\sin(y') - xy = x^2 \quad\text{and}\quad y'' - x^2y + y^2 = 0$$
are nonlinear.
Example 1.2. The general forms of the first- and second-order linear DEs are
$$a_0(x)y' + a_1(x)y = f(x),$$
and
$$a_0(x)y'' + a_1(x)y' + a_2(x)y = f(x),$$
respectively.
Definition 1.4. A solution $y = \phi(x)$ to the ODE
$$F\big(x, y, y', y'', \dots, y^{(n)}\big) = 0 \tag{1.6}$$
is a function that satisfies the equation
$$F\big(x, \phi, \phi', \phi'', \dots, \phi^{(n)}\big) = 0. \tag{1.7}$$
Example 1.3. One verifies that any polynomial
$$y(x) = c_1x^{n-1} + c_2x^{n-2} + \cdots + c_n,$$
where $\{c_i\}_{i=1}^n$ are constants, is a solution to the differential equation
$$y^{(n)} = \frac{d^ny}{dx^n} = 0, \quad x \in I,$$
where $I$ is an interval.
Example 1.4. Consider the differential equation
$$y'' - 5y' + 6y = 0. \tag{1.8}$$
Verify that $y_1(x) = e^{2x}$ and $y_2(x) = e^{3x}$ are both solutions. What about the linear combination $\phi(x) = c_1y_1 + c_2y_2$, where $c_1, c_2$ are arbitrary real constants?
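The verification in Example 1.4 can also be checked numerically. The sketch below (an illustrative plain-Python snippet, not part of the notes) approximates $y'$ and $y''$ by central finite differences and confirms that the residual $y'' - 5y' + 6y$ vanishes, up to discretization error, for both candidate solutions and for a linear combination of them.

```python
import math

def residual(y, x, h=1e-4):
    """Central-difference approximation of y'' - 5y' + 6y at x."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)            # y'(x)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2    # y''(x)
    return d2 - 5 * d1 + 6 * y(x)

y1 = lambda x: math.exp(2 * x)   # candidate solution y1
y2 = lambda x: math.exp(3 * x)   # candidate solution y2

# Any linear combination c1*y1 + c2*y2 should also solve the equation.
phi = lambda x: 4 * y1(x) - 3 * y2(x)

for f in (y1, y2, phi):
    for x in (-1.0, 0.0, 0.5, 1.0):
        assert abs(residual(f, x)) < 1e-4 * (1 + abs(f(x)))
```

That the linear combination also passes the check is exactly the principle of superposition to be developed in Chapter 2.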
Example 1.5. Show that the relation
$$\sin(xy) + y^2 - x = 0$$
defines a solution to
$$\frac{dy}{dx} = \frac{1 - y\cos(xy)}{x\cos(xy) + 2y}.$$

We now distinguish two different ways in which solutions to a DE can be expressed. Often, as in Example 1.4, we obtain an explicit solution $y = \phi(x)$ for some function $\phi$. Sometimes, however, we have to be content with a solution written in the implicit form $f(x, y) = 0$ (e.g., Example 1.5).
Definition 1.5. A solution to an nth-order DE on an interval $I$ is called the general solution on $I$ if it satisfies the following conditions:
1. The solution contains $n$ arbitrary constants $c_1, c_2, \dots, c_n$;
2. All solutions to the DE can be obtained by assigning appropriate values to these constants.
A solution to a DE is called a particular solution if it does not contain any arbitrary constants. That is, it is obtained by assigning specific values to the arbitrary constants in the general solution.
Example 1.6. One verifies readily that for any real constants $c_1$ and $c_2$,
$$y(x) = c_1y_1(x) + c_2y_2(x) = c_1e^{2x} + c_2e^{3x} \tag{1.9}$$
is a solution to (1.8).
Remark 1.1. In Chapter 2, we will show that all solutions to (1.8) can be expressed in the form (1.9). For some interval $I$, we define
$$V = \big\{ y \in C^2(I) : y'' - 5y' + 6y = 0 \big\}.$$
We can show that $V$ is a linear vector space by using basic knowledge of linear algebra. Moreover, we have
$$V = \mathrm{span}\big\{ e^{2x}, e^{3x} \big\}.$$
Here, the space $V$ is called the solution space of the equation $y'' - 5y' + 6y = 0$.
Example 1.7. Find the general solution to the DE $y'' = e^x$.

Solution: Direct integration of both sides yields
$$\int y''\,dx = \int e^x\,dx + c_1 \;\Longrightarrow\; y' = e^x + c_1.$$
Therefore, we have
$$\int y'\,dx = \int \big(e^x + c_1\big)\,dx + c_2 \;\Longrightarrow\; y = e^x + c_1x + c_2,$$
where $c_1$ and $c_2$ are arbitrary (real) constants.
To obtain a particular solution, we have to impose additional conditions, called initial conditions. For instance, we impose the conditions
$$y(0) = 1, \quad y'(0) = -1 \tag{1.10}$$
on the equation (1.8). Then we have
$$y(0) = c_1 + c_2 = 1, \quad y'(0) = 2c_1 + 3c_2 = -1.$$
Solving for $c_1$ and $c_2$, we obtain the particular solution
$$y(x) = 4y_1(x) - 3y_2(x) = 4e^{2x} - 3e^{3x}.$$
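The two initial conditions give a 2-by-2 linear system for $c_1$ and $c_2$. As a quick illustrative check (plain Python, not part of the notes), we solve the system by Cramer's rule and confirm that the resulting particular solution satisfies both conditions.

```python
import math

# System from y(0) = 1, y'(0) = -1 with y = c1*e^{2x} + c2*e^{3x}:
#   c1 +   c2 =  1
# 2*c1 + 3*c2 = -1
a11, a12, b1 = 1.0, 1.0, 1.0
a21, a22, b2 = 2.0, 3.0, -1.0

det = a11 * a22 - a12 * a21          # determinant of the coefficient matrix
c1 = (b1 * a22 - b2 * a12) / det     # Cramer's rule
c2 = (a11 * b2 - a21 * b1) / det

assert (c1, c2) == (4.0, -3.0)

y = lambda x: c1 * math.exp(2 * x) + c2 * math.exp(3 * x)
dy = lambda x: 2 * c1 * math.exp(2 * x) + 3 * c2 * math.exp(3 * x)
assert abs(y(0.0) - 1.0) < 1e-12 and abs(dy(0.0) + 1.0) < 1e-12
```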
Definition 1.6. An nth-order DE together with $n$ initial conditions of the form
$$\begin{cases} F\big(x, y, y', y'', \dots, y^{(n)}\big) = 0, \\ y(x_0) = y_0, \;\; y'(x_0) = y_1, \;\; \dots, \;\; y^{(n-1)}(x_0) = y_{n-1}, \end{cases} \tag{1.11}$$
where $x_0, y_0, \dots, y_{n-1}$ are given constants, is called an initial-value problem (IVP in short), whose solution is a particular solution.
Exercise 1.1. Show that
$$\frac{dy}{dx} = \frac{y^2}{1 - xy}$$
has the solution
$$xy = \ln y + c.$$
Find the particular solution satisfying $y(1) = 1$.
Exercise 1.2. Determine $c_1$ and $c_2$ so that $y(x) = c_1\sin(2x) + c_2\cos(2x) + 1$ will satisfy the conditions $y(\pi/8) = 0$ and $y'(\pi/8) = \sqrt{2}$.
Exercise 1.3. Show that
$$y = e^{x^2}\int_0^x e^{-t^2}\,dt$$
is a solution of the differential equation $y' = 2xy + 1$.
2. An Example of Modeling by First-Order DEs

To illustrate how differential equations arise, we now build a mathematical model describing the cooling (or heating) of an object. Suppose that we bring an object into a room. If the temperature of the object is higher than that of the room, then the object will begin to cool down. Indeed, according to Newton's law of cooling:

The rate of change of the temperature of an object is proportional to the temperature difference between the object and its surrounding medium.

To formulate this law mathematically, we denote by $T(t)$ the temperature of the object at time $t$, and let $T_m$ be the temperature of the surrounding medium. The mathematical translation of Newton's law of cooling is
$$\frac{dT}{dt} = -k(T - T_m), \tag{2.1}$$
where the proportionality constant $k > 0$.¹

¹ If $T > T_m$, then the object cools, so $dT/dt < 0$; hence $k$ must be positive. Similarly, if $T < T_m$, then $dT/dt > 0$, so again $k > 0$.
This equation can be solved by direct integration (see the later part of this chapter):
$$T(t) = T_m + ce^{-kt}.$$
This solution tells us how the temperature changes with time, and indicates that as $t \to \infty$, the temperature of the object approaches that of the surrounding medium.
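To see the formula $T(t) = T_m + ce^{-kt}$ in action, here is a small numerical sketch (plain Python; the values $T_m = 20$, $k = 0.5$, and $T(0) = 90$ are made up for illustration): it integrates (2.1) with forward-Euler steps and compares the result against the closed form.

```python
import math

Tm, k = 20.0, 0.5      # illustrative medium temperature and rate constant
T0 = 90.0              # illustrative initial temperature
c = T0 - Tm            # from T(0) = Tm + c

def T_exact(t):
    """Closed-form solution T(t) = Tm + c e^{-kt}."""
    return Tm + c * math.exp(-k * t)

# Forward-Euler integration of dT/dt = -k (T - Tm) up to t = 5
dt, T, t = 1e-4, T0, 0.0
while t < 5.0 - 1e-12:
    T += dt * (-k * (T - Tm))
    t += dt

assert abs(T - T_exact(5.0)) < 1e-2        # Euler tracks the closed form
assert abs(T_exact(50.0) - Tm) < 1e-8      # T -> Tm as t grows large
```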
Read Examples 1-3 on Pages 1-7 of the textbook
3. The Geometry of First-Order DEs

We consider the first-order DE
$$\frac{dy}{dx} = f(x, y), \tag{3.1}$$
where $f(x, y)$ is a given function of $x$ and $y$, sometimes referred to as the rate function, forcing term/function, or source term. In this section, we focus on the geometric aspects of the DE and its solution.

Suppose that $y = \phi(x)$ with $x \in I$, where $I$ is an interval (note: it could be a finite union of intervals), is a solution of (3.1). The graph $(x, \phi(x))$, $x \in I$, is called a solution curve or an integral curve of the DE.

The geometric interpretation of (3.1) is that the slope of the tangent line at any point $(x, y)$ on a solution curve $y = y(x)$ equals the value $f(x, y)$ at that point. Hence, solving the equation amounts to finding all curves whose slope at the point $(x, y)$ is given by the function $f(x, y)$.

The fact that $f(x, y)$ gives the slope of the tangent line to the solution curves of this DE leads to a simple and important idea for determining the overall shape of the solution curves: the direction field (or slope field).
Basically, a useful direction field for equation (3.1) can be constructed by evaluating $f$ at each point of a rectangular grid. At each grid point, a short line segment is drawn whose slope is the value of $f$ at that point. Thus each segment is tangent to the graph of the solution passing through that point. A direction field drawn on a fairly fine grid gives a good picture of the overall behavior of solutions of a DE.

Two observations are worth particular mention. First, in constructing a direction field, we do not need to solve the equation (3.1), but merely evaluate the given function $f(x, y)$ many times. Thus direction fields can readily be constructed even for equations that may be quite difficult to solve. Second, repeated evaluation of a given function is easily done by a computer.
Here, we provide a short Matlab code for plotting the direction field of (3.1) with $f(x, y) = y^2 - x$, where $(x, y) \in [-2, 10] \times [-4, 4]$:

```matlab
x = linspace(-2,10,10);   % x range, sampled at 10 points
y = linspace(-4,4,10);    % y range, sampled at 10 points
[X,Y] = meshgrid(x,y);    % generate the grid matrices
U = ones(size(X));        % horizontal component of the direction field
V = Y.^2 - X;             % vertical component: the slope f(x,y) = y^2 - x
L = sqrt(U.^2 + V.^2);    % normalize the segments to the same length
U = U./L;  V = V./L;
quiver(x,y,U,V,0.5);      % plot the field; 0.5 rescales vectors to half size
```
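For readers without Matlab, the same construction can be sketched in plain Python (my own translation, not part of the notes): it computes the normalized segment components for $f(x, y) = y^2 - x$ on the same grid; the arrays `U`, `V` could then be passed to, e.g., matplotlib's `quiver` for plotting.

```python
import math

def direction_field(f, xs, ys):
    """Return normalized (u, v) segment components at each grid point."""
    U, V = [], []
    for y in ys:
        row_u, row_v = [], []
        for x in xs:
            u, v = 1.0, f(x, y)     # segment direction (1, f(x, y))
            L = math.hypot(u, v)    # normalize to unit length
            row_u.append(u / L)
            row_v.append(v / L)
        U.append(row_u)
        V.append(row_v)
    return U, V

linspace = lambda a, b, n: [a + (b - a) * i / (n - 1) for i in range(n)]
xs = linspace(-2.0, 10.0, 10)
ys = linspace(-4.0, 4.0, 10)
U, V = direction_field(lambda x, y: y**2 - x, xs, ys)

# Every segment has unit length, and its slope equals f(x, y).
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        assert abs(math.hypot(U[i][j], V[i][j]) - 1.0) < 1e-12
        assert abs(V[i][j] / U[i][j] - (y**2 - x)) < 1e-9
```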
Example 3.1. Plot the direction fields, and discuss the solution behavior, of the differential equations
$$\text{(i)}\;\; y' = 2 - y, \qquad \text{(ii)}\;\; y' = y(y + 2).$$

Solution: We plot the direction fields of (i) (left) and (ii) (right) in Figure 3.1. Here, $y = 2$ is an equilibrium solution of (i). Observe that all the other solutions seem to converge to the equilibrium solution as $x$ increases. For (ii), $y = 0, -2$ are equilibrium solutions.

Figure 3.1. Direction fields of (i) and (ii)
4. Several Types of Solvable First-Order Differential Equations

In this section, we introduce solution methods for the following types of first-order DEs:
- Separable DEs,
- First-order linear DEs,
- Bernoulli's DEs,
- Exact DEs (and non-exact DEs with integrating factors),
- Homogeneous DEs.

4.1. Separable DEs. In this section, we shall encounter our first general class of equations such that
(i) we can easily recognize them;
(ii) we have a direct and simple method for solving such equations.
Definition 4.1. A first-order DE is separable if it can be written in the form
$$p(y)\,dy = q(x)\,dx. \tag{4.1}$$

The solution technique for a separable DE is given in the following theorem.

Theorem 4.1. If $p(y)$ and $q(x)$ are continuous, then (4.1) has the general solution
$$\int p(y)\,dy = \int q(x)\,dx + C, \tag{4.2}$$
where $C$ is an arbitrary constant.

Proof. Recall that
$$d\Big(\int \phi(t)\,dt\Big) = \phi(t)\,dt.$$
Therefore, we can rewrite (4.1) as
$$p(y)\,dy = q(x)\,dx \;\Longrightarrow\; d\Big(\int p(y)\,dy\Big) = d\Big(\int q(x)\,dx\Big) \;\Longrightarrow\; d\Big(\int p(y)\,dy - \int q(x)\,dx\Big) = 0. \tag{4.3}$$
Therefore, we obtain (4.2).
Example 4.1. Solve the differential equation
$$xy' = (1 - 2x^2)\tan y. \tag{4.4}$$
Solution. This equation is separable: we rewrite it as
$$x\frac{dy}{dx} = (1 - 2x^2)\tan y \;\Longrightarrow\; \cot y\,dy = \frac{1 - 2x^2}{x}\,dx, \tag{4.5}$$
where we assume that $x \neq 0$ and $\sin y \neq 0$. Direct integration leads to
$$\int \cot y\,dy = \int \frac{1 - 2x^2}{x}\,dx + C.$$
Working out the integrals gives
$$\ln|\sin y| = \ln|x| - x^2 + C,$$
which implies that
$$e^{\ln|\sin y|} = e^{\ln|x| - x^2 + C} \;\Longrightarrow\; |\sin y| = e^C|x|\,e^{-x^2}.$$
Let $\tilde{C} = \pm e^C$ and allow $\tilde{C}$ to be negative, so that we no longer need to worry about the absolute values. The solution is
$$\sin y = \tilde{C}xe^{-x^2}. \tag{4.6}$$
Notice that $x = 0$ is not a solution, while the solutions of $\sin y = 0$ are included in (4.6) by taking $\tilde{C} = 0$.
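As a sanity check on (4.6) (an illustrative Python snippet, not from the notes), we can pick a value of $\tilde{C}$, recover $y$ from $\sin y = \tilde{C}xe^{-x^2}$ on a range where the right-hand side stays inside $(-1, 1)$, and verify the original equation $xy' = (1 - 2x^2)\tan y$ by finite differences.

```python
import math

C = 0.5  # an arbitrary choice of the constant C-tilde

def y(x):
    """Branch of the implicit solution sin y = C x e^{-x^2} with |arg| < 1."""
    return math.asin(C * x * math.exp(-x**2))

def ode_residual(x, h=1e-5):
    dy = (y(x + h) - y(x - h)) / (2 * h)             # y'(x) by central difference
    return x * dy - (1 - 2 * x**2) * math.tan(y(x))  # x y' - (1 - 2x^2) tan y

for x in (0.2, 0.5, 1.0, 1.5):
    assert abs(ode_residual(x)) < 1e-8
```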
Exercise 4.1. Solve the equation
$$\frac{dy}{dx} = \frac{x^2 + 1}{y^2 - 1}.$$

Exercise 4.2. Solve the initial-value problem:
$$y' = \frac{y\cos x}{1 + 3y^3}, \quad y(0) = 1.$$
4.2. First-order linear equations. Another class of differential equations that is easily recognized and readily solved is the class of first-order linear equations.

Definition 4.2. An equation is said to be first-order linear if it can be written in the standard form
$$y' + p(x)y = q(x). \tag{4.7}$$
The equation is linear since the left-hand side is a linear function of $y$ and $y'$.

Now we introduce the solution technique for (4.7). The idea is to multiply both sides of the equation by a function $I(x)$ that makes the left-hand side easily integrable:
$$I(x)y'(x) + p(x)I(x)y(x) = I(x)q(x).$$
The most desirable situation is
$$[I(x)y]' = I(x)q(x), \quad\text{i.e.,}\quad I(x)y'(x) + I'(x)y(x) = I(x)q(x).$$
Comparing the above equations, we see that $I(x)$ has to satisfy
$$I'(x) = p(x)I(x) \;\Longrightarrow\; I(x) = \exp\Big(\int p(x)\,dx\Big).$$
[At this point, we could carry a constant of integration, but it is not necessary, since we only need to find one $I(x)$.] Thus, we solve
$$[I(x)y]' = I(x)q(x) \;\Longrightarrow\; y(x) = \frac{1}{I(x)}\Big(\int I(x)q(x)\,dx + C\Big).$$
A summary of this method is stated in the following theorem.
Theorem 4.2. The general solution of the first-order linear equation
$$y' + p(x)y = q(x) \tag{4.8}$$
is
$$y(x) = \frac{1}{I(x)}\Big(\int I(x)q(x)\,dx + C\Big) \quad\text{with}\quad I(x) = e^{\int p(x)\,dx}, \tag{4.9}$$
or equivalently
$$y(x) = e^{-\int p(x)\,dx}\Big(\int e^{\int p(x)\,dx}q(x)\,dx + C\Big). \tag{4.10}$$
Remark 4.1.
(i) The function $I(x)$ is an integrating factor of the equation (4.8) (more detailed discussion will be given later on).
(ii) The formula is valid for the standard form (4.8). Therefore, it is necessary to write a first-order DE in the standard form first, and then apply the formula.
Example 4.2. Solve
$$y' - \frac{1}{5}y = 5 - x.$$

Solution. The equation is a linear equation in the standard form, with
$$p(x) = -\frac{1}{5}, \quad q(x) = 5 - x.$$
Therefore,
$$I(x) = e^{\int -\frac{1}{5}\,dx} = e^{-x/5},$$
and using integration by parts gives
$$\begin{aligned}
y(x) &= e^{x/5}\Big(\int e^{-x/5}(5 - x)\,dx + C\Big) \\
&= e^{x/5}\Big(5\int e^{-x/5}\,dx - \int xe^{-x/5}\,dx + C\Big) \\
&= e^{x/5}\Big({-25e^{-x/5}} - \Big({-5xe^{-x/5}} + \int 5e^{-x/5}\,dx\Big) + C\Big) \\
&= e^{x/5}\big(5xe^{-x/5} + C\big) = 5x + Ce^{x/5}.
\end{aligned} \tag{4.11}$$
The graph on the left below shows the direction field along with several integral curves (solution curves). The graph on the right shows several integral curves, and a particular solution (in red) whose initial point on the y-axis separates the solutions that grow large positively from those that grow large negatively as $x \to \infty$.

Figure 4.2. Direction field and solution curves
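A quick check of (4.11) (illustrative Python, not from the notes): for several values of $C$, the function $y = 5x + Ce^{x/5}$ should satisfy $y' - y/5 = 5 - x$ identically.

```python
import math

def residual(C, x, h=1e-6):
    """Residual y' - y/5 - (5 - x) for y = 5x + C e^{x/5}; should be ~0."""
    y = lambda t: 5 * t + C * math.exp(t / 5)
    dy = (y(x + h) - y(x - h)) / (2 * h)   # y'(x) by central difference
    return dy - y(x) / 5 - (5 - x)

for C in (-2.0, 0.0, 1.0, 3.0):
    for x in (-1.0, 0.0, 2.0, 5.0):
        assert abs(residual(C, x)) < 1e-6
```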
Example 4.3. Solve the initial value problem (IVP):
$$xy' - 2y = 5x^2, \quad y(1) = 2.$$

Solution. We first put it into the standard form
$$y' - \frac{2}{x}y = 5x.$$
Then
$$I(x) = e^{\int p(x)\,dx} = e^{-\int \frac{2}{x}\,dx} = e^{-2\ln|x|} = e^{\ln\frac{1}{x^2}} = \frac{1}{x^2},$$
and
$$\int I(x)q(x)\,dx = \int \frac{1}{x^2}\cdot 5x\,dx = 5\ln|x| + C.$$
Therefore the general solution is
$$y(x) = \frac{1}{I(x)}\Big(\int I(x)q(x)\,dx + C\Big) = x^2\big(5\ln|x| + C\big).$$
Using the initial condition $y(1) = 2$ (i.e., $x = 1$ and $y = 2$), we get $C = 2$. The particular solution is
$$y(x) = x^2\big(5\ln|x| + 2\big).$$
The graphs below show several integral curves of the differential equation, and a particular solution (in red) whose graph passes through the initial point (1, 2).

Figure 4.3. Solution curves
Example 4.4. Solve the equation
$$\frac{dy}{dx} = \frac{y}{2x - y^2}.$$

Solution: This equation is not separable, and it is not linear in $y$. However, it is linear in $x$ if we rewrite it as
$$\frac{dx}{dy} - \frac{2}{y}x = -y.$$
We skip the details, and finally derive
$$x = y^2\big(C - \ln|y|\big).$$
Exercise 4.3. Consider the initial value problem
$$y' + p(x)y = q(x), \quad y(x_0) = y_0, \tag{4.12}$$
where $p$ and $q$ are continuous functions on some interval $I$ containing $x_0$. Show that the particular solution is
$$y(x) = e^{-\int_{x_0}^x p(t)\,dt}\Big(\int_{x_0}^x e^{\int_{x_0}^t p(\tau)\,d\tau}q(t)\,dt + y_0\Big). \tag{4.13}$$
Exercise 4.4. Solve the initial value problem:
$$xy' + 3y = 2x^5, \quad y(2) = 1. \tag{4.14}$$
4.3. Bernoulli's equations. In what follows, we consider a type of DE that can be converted to a first-order linear DE.

Definition 4.3. The Bernoulli equation of order $n$ takes the form
$$\frac{dy}{dx} + p(x)y = q(x)y^n, \tag{4.15}$$
where $n \,(\neq 0, 1)$ is a real number. This equation is nonlinear, due to the power $y^n$.

Remark 4.2. Clearly, if $n = 1$, then the equation is separable, while if $n = 0$, the equation becomes linear.
The basic idea for solving (4.15) is to convert it into a linear equation. Dividing (4.15) by $y^n$ yields the equation
$$y^{-n}\frac{dy}{dx} + p(x)y^{1-n} = q(x). \tag{4.16}$$
We make the change of variable
$$u = y^{1-n} \;\Longrightarrow\; \frac{du}{dx} = (1-n)y^{-n}\frac{dy}{dx} \;\Longrightarrow\; y^{-n}\frac{dy}{dx} = \frac{1}{1-n}\frac{du}{dx}, \tag{4.17}$$
and eliminate $y$ and $y'$ from (4.16):
$$y^{-n}\frac{dy}{dx} + p(x)y^{1-n} = q(x) \;\Longrightarrow\; \frac{1}{1-n}\frac{du}{dx} + p(x)u(x) = q(x). \tag{4.18}$$
Rewrite the new equation in the standard form:
$$\frac{du}{dx} + \underbrace{(1-n)p(x)}_{P(x)}\,u(x) = \underbrace{(1-n)q(x)}_{Q(x)}. \tag{4.19}$$
This is a linear equation for the unknown function $u(x)$. Thanks to the formula (4.10), the general solution is
$$\begin{aligned}
u(x) &= e^{-\int P(x)\,dx}\Big(\int Q(x)e^{\int P(x)\,dx}\,dx + C\Big) \\
&= e^{-(1-n)\int p(x)\,dx}\Big((1-n)\int q(x)e^{(1-n)\int p(x)\,dx}\,dx + C\Big).
\end{aligned} \tag{4.20}$$
Finally, we change the variable back:
$$y^{1-n} = e^{-(1-n)\int p(x)\,dx}\Big((1-n)\int q(x)e^{(1-n)\int p(x)\,dx}\,dx + C\Big).$$
A summary of the above derivation leads to the solution formula for the Bernoulli equation (4.15).
Theorem 4.3. For a Bernoulli equation of order $n$ in the standard form (4.15), the substitution $u = y^{1-n}$ transforms the equation into the linear equation
$$\frac{du}{dx} + \underbrace{(1-n)p(x)}_{P(x)}\,u(x) = \underbrace{(1-n)q(x)}_{Q(x)},$$
and the general solution is
$$y^{1-n} = u(x) = e^{-(1-n)\int p(x)\,dx}\Big((1-n)\int q(x)e^{(1-n)\int p(x)\,dx}\,dx + C\Big). \tag{4.21}$$

Remark 4.3. The additional factor $1-n$ appears in the formula since $n \neq 0, 1$.
Example 4.5. Solve
$$\frac{dy}{dx} + \frac{3}{x}y = \frac{12\,y^{2/3}}{(1 + x^2)^{1/2}}, \quad x > 0. \tag{4.22}$$

Solution: This equation is a Bernoulli equation of order $n = 2/3$. Dividing both sides of the DE by $y^{2/3}$:
$$y^{-2/3}\frac{dy}{dx} + \frac{3}{x}y^{1/3} = \frac{12}{(1 + x^2)^{1/2}}. \tag{4.23}$$
We now let
$$u = y^{1/3},$$
which implies that
$$\frac{du}{dx} = \frac{1}{3}y^{-2/3}\frac{dy}{dx}.$$
Substituting into (4.23) yields
$$3\frac{du}{dx} + \frac{3}{x}u = \frac{12}{(1 + x^2)^{1/2}},$$
or, in the standard form,
$$\frac{du}{dx} + \frac{1}{x}u = \frac{4}{(1 + x^2)^{1/2}}. \tag{4.24}$$
An integrating factor for this equation is
$$I(x) = e^{\int \frac{1}{x}\,dx} = e^{\ln x} = x,$$
so that equation (4.24) can be written as
$$\frac{d}{dx}(Iu) = I\cdot\frac{4}{(1 + x^2)^{1/2}} \;\Longrightarrow\; u(x) = \frac{1}{x}\Big(4(1 + x^2)^{1/2} + C\Big),$$
and so the general solution to the original equation is
$$y^{1/3} = \frac{1}{x}\Big(4(1 + x^2)^{1/2} + C\Big).$$
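To gain confidence in the computed general solution (an illustrative Python check, not from the notes), pick a constant $C$, define $y(x) = \big((4\sqrt{1+x^2} + C)/x\big)^3$ for $x > 0$, and verify the original Bernoulli equation (4.22) by finite differences.

```python
import math

C = 2.0  # arbitrary constant in the general solution

def y(x):
    """y recovered from y^{1/3} = (4 sqrt(1 + x^2) + C) / x, for x > 0."""
    return ((4 * math.sqrt(1 + x**2) + C) / x) ** 3

def residual(x, h=1e-6):
    dy = (y(x + h) - y(x - h)) / (2 * h)  # y'(x) by central difference
    # dy/dx + (3/x) y - 12 y^{2/3} / sqrt(1 + x^2); should vanish
    return dy + (3 / x) * y(x) - 12 * y(x) ** (2 / 3) / math.sqrt(1 + x**2)

for x in (0.5, 1.0, 2.0, 4.0):
    assert abs(residual(x)) < 1e-3 * (1 + abs(y(x)))
```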
Exercise 4.5. Solve the equation:
$$y' - \frac{3}{x}y = x^4y^{1/3}. \tag{4.25}$$

Exercise 4.6. Solve the equation:
$$3y^2y' + y^3 = e^x. \tag{4.26}$$

Exercise 4.7. Solve the equations
$$\text{(i)}\quad 2\frac{dy}{dx} + (\tan x)\,y = \frac{(4x + 5)^2}{\cos x}\,y^3, \tag{4.27}$$
and
$$\text{(ii)}\quad \frac{dy}{dx} + \frac{2}{x}y = (x^2\cos x)\,y^2. \tag{4.28}$$
4.4. Exact DEs. For the next technique, it is best to consider a first-order DE written in the differential form
$$P(x, y)\,dx + Q(x, y)\,dy = 0, \tag{4.29}$$
where $P$ and $Q$ are given functions, assumed to be smooth. The method that we will consider is based on the idea of a differential.

Remark 4.4. We point out that any first-order equation can be written in this form. Indeed, for the general form of the first-order equation,
$$\frac{dy}{dx} = f(x, y) \;\Longrightarrow\; f(x, y)\,dx - dy = 0.$$

Recall that if $\phi = \phi(x, y)$ is a function of two variables $x$ and $y$, then the differential of $\phi$ is defined by
$$d\phi = \frac{\partial\phi}{\partial x}\,dx + \frac{\partial\phi}{\partial y}\,dy. \tag{4.30}$$
The right-hand side of (4.30) is similar to the expression in equation (4.29). This observation can sometimes be used to solve a DE. Namely, suppose there exists a function $\phi(x, y)$ such that
$$\frac{\partial\phi}{\partial x} = P, \quad \frac{\partial\phi}{\partial y} = Q. \tag{4.31}$$
Then we have
$$d\phi = \frac{\partial\phi}{\partial x}\,dx + \frac{\partial\phi}{\partial y}\,dy = P\,dx + Q\,dy = 0,$$
which implies
$$\phi(x, y) = C.$$
This gives the general solution of the original equation (4.29).
Example 4.6. Solve
$$2x\sin y\,dx + x^2\cos y\,dy = 0. \tag{4.32}$$

Solution: This equation is separable; however, we will use a different technique to solve it. By inspection, we notice that
$$2x\sin y\,dx + x^2\cos y\,dy = \sin y\,d(x^2) + x^2\,d(\sin y) = d\big(x^2\sin y\big).$$
Hence, the general solution to (4.32) is
$$x^2\sin y = C.$$
This motivates the following definition.

Definition 4.4. The differential equation (4.29) is said to be exact if there exists a function $\phi(x, y)$ such that
$$\frac{\partial\phi}{\partial x} = P, \quad \frac{\partial\phi}{\partial y} = Q. \tag{4.33}$$
The function $\phi(x, y)$ is called a potential function.

We now show that if a DE is exact then, provided we can find a potential function $\phi$, the solution can be written down immediately.

Theorem 4.4. If there exists a potential function $\phi$ satisfying (4.33), then the general solution to the exact equation
$$P(x, y)\,dx + Q(x, y)\,dy = 0 \tag{4.34}$$
is given implicitly by
$$\phi(x, y) = C. \tag{4.35}$$

Proof. By (4.33) and (4.29), we have
$$d\phi = \frac{\partial\phi}{\partial x}\,dx + \frac{\partial\phi}{\partial y}\,dy = P\,dx + Q\,dy = 0.$$
Thus, the solution is $\phi = C$.
As a direct consequence, if the DE is exact, we only need to solve the system (4.33) to find the potential function, and therefore the general solution. Two issues need to be addressed:
1. How can we tell whether a given DE is exact?
2. If we have an exact equation, how do we find a potential function?

The answer to the first question is given in the following theorem.

Theorem 4.5 (Test for exactness). The DE
$$P(x, y)\,dx + Q(x, y)\,dy = 0 \tag{4.36}$$
is exact if and only if
$$\frac{\partial P(x, y)}{\partial y} = \frac{\partial Q(x, y)}{\partial x}, \quad\text{i.e.,}\quad P_y = Q_x. \tag{4.37}$$
Here, for simplicity, we write $P_y = \frac{\partial P}{\partial y}$, and likewise for $Q_x$.
Proof (necessity). Taking the partial derivative with respect to $y$ (resp. $x$) of the first (resp. second) equation of (4.33), and using the equality of mixed partials, gives $P_y = \phi_{xy} = \phi_{yx} = Q_x$.
Example 4.7. Solve the differential equation
$$\big(y\cos x + 2xe^y\big) + \big(\sin x + x^2e^y - 1\big)y' = 0. \tag{4.38}$$

Solution: We follow these steps:

I. Check exactness. Clearly,
$$P(x, y) = y\cos x + 2xe^y, \quad Q(x, y) = \sin x + x^2e^y - 1,$$
and hence
$$P_y(x, y) = \cos x + 2xe^y = Q_x(x, y) \;\Longrightarrow\; \text{the ODE is exact.}$$

II. Solve the system for $\phi$:
$$\frac{\partial\phi}{\partial x} = P = y\cos x + 2xe^y, \quad \frac{\partial\phi}{\partial y} = Q = \sin x + x^2e^y - 1.$$
Thus
$$\phi(x, y) = \int \frac{\partial\phi}{\partial x}(x, y)\,dx + h(y) = \int \big(y\cos x + 2xe^y\big)\,dx + h(y) = y\sin x + x^2e^y + h(y).$$
Taking the partial derivative with respect to $y$ and using the second equation yield
$$\frac{\partial\phi}{\partial y}(x, y) = \sin x + x^2e^y + h'(y) = Q = \sin x + x^2e^y - 1,$$
and
$$h'(y) = -1 \;\Longrightarrow\; h(y) = -y + c.$$
[Note: we can set $c = 0$ and take $h(y) = -y$. The constant $c$ will be absorbed by the constant in the general solution.] The potential function is
$$\phi(x, y) = y\sin x + x^2e^y - y.$$

III. Write down the general solution. The general solution is
$$\phi(x, y) = y\sin x + x^2e^y - y = C.$$
The direction field for (4.38), along with several solution curves, is given below.

Figure 4.4. Direction field and solution curves for (4.38)
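The potential function found in step II can be checked directly (illustrative Python, not from the notes): the partial derivatives of $\phi(x, y) = y\sin x + x^2e^y - y$, computed by finite differences, should reproduce $P$ and $Q$ from (4.38).

```python
import math

def phi(x, y):
    """Candidate potential function for equation (4.38)."""
    return y * math.sin(x) + x**2 * math.exp(y) - y

P = lambda x, y: y * math.cos(x) + 2 * x * math.exp(y)
Q = lambda x, y: math.sin(x) + x**2 * math.exp(y) - 1

h = 1e-6
for x, y in [(0.3, -0.5), (1.0, 0.2), (-1.2, 1.0)]:
    phi_x = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
    phi_y = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    assert abs(phi_x - P(x, y)) < 1e-6   # phi_x = P
    assert abs(phi_y - Q(x, y)) < 1e-6   # phi_y = Q
```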
Exercise 4.8. Determine whether the following equations are exact.
$$\begin{aligned}
&\text{(i)}\;\; 2xy\,dx + (1 + x^2)\,dy = 0, \\
&\text{(ii)}\;\; (x + \sin y)\,dx + (x\cos y - 2y)\,dy = 0, \\
&\text{(iii)}\;\; y^2\,dt + (2yt + 1)\,dy = 0, \\
&\text{(iv)}\;\; y' = \frac{2 + ye^{xy}}{2y - xe^{xy} + xy}.
\end{aligned} \tag{4.39}$$

Exercise 4.9. Solve the equation:
$$(x + \sin y)\,dx + (x\cos y - 2y)\,dy = 0. \tag{4.40}$$
4. SEVERAL TYPES OF SOLVABLE FIRST-ORDER DIFFERENTIAL EQUATIONS 21
4.5. Non-exact DEs with integrating factors. The above method only works for exact equations. It is sometimes possible to convert a non-exact differential equation into an exact one by multiplying the equation by a suitable integrating factor $I(x, y)$:
$$I(x, y)P(x, y)\,dx + I(x, y)Q(x, y)\,dy = 0. \tag{4.41}$$
To make this equation exact, we require
$$(IP)_y = (IQ)_x \;\Longleftrightarrow\; PI_y - QI_x + (P_y - Q_x)I = 0. \tag{4.42}$$
In general, this is a partial differential equation for $I(x, y)$, which is difficult to solve. However, it becomes solvable in the following two special cases.

- Suppose that $(P_y - Q_x)/Q$ is a function of $x$ only. Then we can find $I$, a function of $x$ alone, by solving
$$\frac{dI}{dx} = \frac{P_y - Q_x}{Q}\,I. \tag{4.43}$$
That is,
$$I(x) = \exp\Big(\int \frac{P_y - Q_x}{Q}\,dx\Big).$$
- Similarly, if $(Q_x - P_y)/P$ is a function of $y$ only, then we can find $I = I(y)$ by solving
$$\frac{dI}{dy} = \frac{Q_x - P_y}{P}\,I.$$
Thus, we have
$$I(y) = \exp\Big(\int \frac{Q_x - P_y}{P}\,dy\Big).$$
Example 4.8. Solve the equation
$$(3xy + y^2)\,dx + (x^2 + xy)\,dy = 0.$$

Solution: Let
$$P(x, y) = 3xy + y^2, \quad Q(x, y) = x^2 + xy.$$
It is easy to check that
$$P_y = 3x + 2y \neq 2x + y = Q_x,$$
so this equation is non-exact. We seek an integrating factor by solving
$$\frac{dI}{dx} = \frac{P_y - Q_x}{Q}\,I \;\Longrightarrow\; \frac{dI}{dx} = \frac{I}{x} \;\Longrightarrow\; I(x) = x.$$
Multiplying our differential equation by $I$, we obtain the exact equation
$$\big(3x^2y + xy^2\big)\,dx + \big(x^3 + x^2y\big)\,dy = 0.$$
By inspection, we have
$$\begin{aligned}
\text{LHS} &= y\,d(x^3) + \tfrac{1}{2}y^2\,d(x^2) + x^3\,dy + \tfrac{1}{2}x^2\,d(y^2) \\
&= \big(y\,d(x^3) + x^3\,dy\big) + \tfrac{1}{2}\big(y^2\,d(x^2) + x^2\,d(y^2)\big) = d(x^3y) + \tfrac{1}{2}\,d(x^2y^2) \\
&= d\Big(x^3y + \tfrac{1}{2}x^2y^2\Big).
\end{aligned} \tag{4.44}$$
Hence the general solution is
$$x^3y + \frac{1}{2}x^2y^2 = C.$$
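A short check of Example 4.8 (illustrative Python, not from the notes): after multiplying by the integrating factor $I(x) = x$, the exactness test (4.37) holds for the new coefficients, and $\phi(x, y) = x^3y + \tfrac{1}{2}x^2y^2$ has partial derivatives matching them.

```python
# Coefficients after multiplying by the integrating factor I(x) = x
P = lambda x, y: 3 * x**2 * y + x * y**2     # I * (3xy + y^2)
Q = lambda x, y: x**3 + x**2 * y             # I * (x^2 + xy)
phi = lambda x, y: x**3 * y + 0.5 * x**2 * y**2

h = 1e-6
for x, y in [(0.7, 1.3), (-1.1, 0.4), (2.0, -0.9)]:
    Py = (P(x, y + h) - P(x, y - h)) / (2 * h)   # dP/dy
    Qx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)   # dQ/dx
    assert abs(Py - Qx) < 1e-6                   # exactness: P_y = Q_x
    phi_x = (phi(x + h, y) - phi(x - h, y)) / (2 * h)
    phi_y = (phi(x, y + h) - phi(x, y - h)) / (2 * h)
    assert abs(phi_x - P(x, y)) < 1e-5 and abs(phi_y - Q(x, y)) < 1e-5
```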
Example 4.9. Use the above method to derive the integrating factor of the first-order linear equation
$$y' + p(x)y = q(x).$$

Solution: We write the equation in the differential form:
$$\big(p(x)y - q(x)\big)\,dx + dy = 0.$$
Hence, $P(x, y) = p(x)y - q(x)$ and $Q(x, y) = 1$. It is not exact, since
$$P_y = p(x), \quad Q_x = 0.$$
Note that
$$\frac{P_y - Q_x}{Q} = p(x).$$
Therefore, the integrating factor is
$$I(x) = e^{\int p(x)\,dx}.$$
4.6. Homogeneous DEs. We first present a preliminary definition.

Definition 4.5. A function $f(x, y)$ is said to be homogeneous of degree zero² if
$$f(tx, ty) = f(x, y) \tag{4.45}$$
for all positive values of $t$ for which $(tx, ty)$ is in the domain of $f$.

Example 4.10. We check that
$$f(x, y) = \frac{x^2 - y^2}{2xy + y^2}$$
is homogeneous of degree zero.

² More generally, $f(x, y)$ is said to be homogeneous of degree $m$ if $f(tx, ty) = t^mf(x, y)$.
In the above example, if we factor $x^2$ out of the numerator and denominator, the function can be written in the form
$$f(x, y) = \frac{x^2\big(1 - (y/x)^2\big)}{x^2\big(2y/x + (y/x)^2\big)} = \frac{1 - (y/x)^2}{2y/x + (y/x)^2}.$$
Thus $f$ can be considered to depend on the single variable $V = y/x$.

An important property of homogeneous functions of degree zero is stated in the following theorem.

Theorem 4.6. A function $f(x, y)$ is homogeneous of degree zero if and only if it depends only on the ratio $V = y/x$. In other words, $f(x, y) = F(y/x) = F(V)$.
Definition 4.6. If $f(x, y)$ is homogeneous of degree zero, then the DE
$$\frac{dy}{dx} = f(x, y) = F(y/x) \tag{4.46}$$
is called a homogeneous first-order DE.

We now introduce the solution technique. Let
$$V = \frac{y}{x} \;\Longrightarrow\; y = xV \;\Longrightarrow\; \frac{dy}{dx} = V + x\frac{dV}{dx}. \tag{4.47}$$
Substituting into equation (4.46), we therefore obtain
$$V + x\frac{dV}{dx} = F(V),$$
or equivalently,
$$x\frac{dV}{dx} = F(V) - V.$$
The variables can now be separated to yield
$$\frac{1}{F(V) - V}\,dV = \frac{1}{x}\,dx,$$
which can be solved directly by integration.

Theorem 4.7. The change of variable $y = xV(x)$ reduces a homogeneous first-order DE $dy/dx = f(x, y) = F(y/x)$ to the separable equation
$$\frac{1}{F(V) - V}\,dV = \frac{1}{x}\,dx \;\Longrightarrow\; \int \frac{1}{F(V) - V}\,dV = \ln|x| + C.$$
Example 4.11. Solve
$$\frac{dy}{dx} = \frac{4x + y}{x - 4y}.$$

Solution. The function on the right-hand side of this equation is homogeneous of degree zero, so this is a first-order homogeneous equation. Substituting $y = xV$ into the equation yields
$$\frac{d}{dx}(xV) = \frac{4 + V}{1 - 4V}.$$
That is,
$$x\frac{dV}{dx} + V = \frac{4 + V}{1 - 4V},$$
or equivalently,
$$x\frac{dV}{dx} = \frac{4(1 + V^2)}{1 - 4V}.$$
Separating the variables gives
$$\frac{1 - 4V}{4(1 + V^2)}\,dV = \frac{1}{x}\,dx.$$
We write this as
$$\Big(\frac{1}{4(1 + V^2)} - \frac{V}{1 + V^2}\Big)\,dV = \frac{1}{x}\,dx,$$
which can be integrated directly to obtain
$$\frac{1}{4}\tan^{-1}V - \frac{1}{2}\ln(1 + V^2) = \ln|x| + C.$$
Substituting $V = y/x$ gives
$$\frac{1}{2}\tan^{-1}(y/x) - \ln(1 + y^2/x^2) = \ln x^2 + C,$$
which simplifies to
$$\frac{1}{2}\tan^{-1}(y/x) - \ln(x^2 + y^2) = C_1.$$
Exercise 4.10. Determine which of the five types of DEs we have studied each given equation falls into, and use an appropriate technique to find the general solution.
$$\text{(1)}\quad \frac{dy}{dx} = \frac{2xy}{x^2 + 2y}.$$
$$\text{(2)}\quad y' - x^{-1}y = x^{-1}\sqrt{x^2 - y^2}, \quad x > 0.$$
$$\text{(3)}\quad \frac{dy}{dx} + \frac{1}{x}y = \frac{25x^2\ln x}{2y}.$$

Exercise 4.11. Find all solutions to the equation
$$yy' + x = \sqrt{x^2 + y^2}, \quad x > 0.$$
The variable substitution method is useful in solving DEs. The following exercises illustrate this.

Exercise 4.12. Show that the substitution v = ax + by + c transforms the differential equation y' = F(ax + by + c) with b ≠ 0 into a separable equation.

Exercise 4.13. Solve the equation (x + y)y' = 1.
5. The Existence and Uniqueness Theorem

At this point, it is useful to give a brief discussion of the existence and uniqueness of solutions to the first-order IVP:
\[ \frac{dy}{dx} = f(x, y), \quad y(x_0) = y_0. \]  (5.1)
We begin with the first-order linear problem:
\[ y' + p(x)y = q(x), \quad y(x_0) = y_0. \]  (5.2)

Theorem 5.1. If the functions p and q are continuous on an interval I = (α, β) containing the point x = x₀, then the IVP (5.2) has a unique solution y = φ(x) for all x ∈ I.

Proof. We just sketch the proof below. We first prove existence. As shown before, if p(x) and q(x) are continuous for all x ∈ I, then the formula
\[ \phi(x) = e^{-\int_{x_0}^{x} p(t)\,dt}\left(\int_{x_0}^{x} e^{\int_{x_0}^{t} p(\tau)\,d\tau}\, q(t)\,dt + y_0\right) \]  (5.3)
gives a solution to (5.2).

We next show uniqueness. Suppose that φ₁ and φ₂ are two different solutions to (5.2). Then φ = φ₁ − φ₂ is a solution of
\[ \phi' + p(x)\phi = 0, \quad \phi(x_0) = 0. \]
Then by (5.3), φ(x) ≡ 0. This ends the proof.
For nonlinear equations, we must replace the above theorem by the following more general one.

Theorem 5.2. Let f(x, y) be a function that is continuous on the rectangle
\[ R = \{(x, y) : a \le x \le b,\; c \le y \le d\}. \]
Suppose further that ∂f/∂y is continuous in R. Then for any interior point (x₀, y₀) of the rectangle R, there exists some interval (x₀ − h, x₀ + h) ⊂ (a, b) such that the IVP (5.1) has a unique solution for all x ∈ (x₀ − h, x₀ + h).
Some remarks are in order:
• The proof of Theorem 5.2 is quite involved, so we will not touch on it here. Please refer to Section 2.8 of the textbook if you are interested.
• If the differential equation (5.1) is linear, i.e., reduces to (5.2), the proof of Theorem 5.2 is simpler, as given in Theorem 5.1. Moreover, we can then claim global existence and uniqueness for all x ∈ (a, b).
• The conditions stated in Theorem 5.2 are sufficient, but not necessary.
• An important geometric consequence of the uniqueness in the above two theorems is that the graphs of two distinct solutions cannot intersect each other; otherwise, there would be two solutions satisfying the same initial condition at the intersection point.

The following example illustrates how the preceding theorem can be used to establish the existence of a unique solution to a DE, even though at present we do not know how to determine the solution.
Example 5.1. Prove that the IVP
\[ y' = 3xy^{1/3}, \quad y(0) = a \]
has a unique solution whenever a ≠ 0.

Solution: In this case, the initial point is (x₀, y₀) = (0, a). If we let f(x, y) = 3xy^{1/3}, then ∂f/∂y = xy^{-2/3}. We see that f(x, y) is continuous at all points in the xy-plane, whereas ∂f/∂y is continuous at all points not lying on the x-axis (y = 0). Provided a ≠ 0, we can certainly draw a rectangle containing (0, a) that does not intersect the x-axis. In any such rectangle, the hypotheses of the existence and uniqueness theorem are satisfied, and therefore the IVP does indeed have a unique solution. In fact, we can verify this by solving this separable DE.

However, if a = 0, then no conclusion can be drawn from the theorem; indeed, we can verify that this problem has two solutions: (a) y(x) = 0 and (b) y(x) = x³.
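The failure of uniqueness at a = 0 can be checked symbolically. A minimal sketch (using sympy, not part of the original note):

```python
# Both y = 0 and y = x**3 satisfy y' = 3*x*y**(1/3) with y(0) = 0,
# so uniqueness fails exactly where df/dy = x*y**(-2/3) blows up.
import sympy as sp

x = sp.symbols('x', nonnegative=True)  # restrict to x >= 0 so cbrt simplifies
for y in (sp.Integer(0), x**3):
    residual = sp.simplify(sp.diff(y, x) - 3*x*sp.cbrt(y))
    print(residual)  # 0 for both candidate solutions
```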
Exercise 5.1. Use the existence and uniqueness theorem to prove that y(x) ≡ 3 is the only solution to the IVP:
\[ y' = \frac{x}{x^2 + 1}\,(y^2 - 9), \quad y(0) = 3. \]

Read Examples 1-3 on Pages 70-72 of the textbook.
6. Reducible Second-Order DEs

So far, we have developed analytical solution techniques for several special types of first-order DEs, but these methods usually cannot be applied directly to higher-order DEs. In this section, we consider some special higher-order DEs which can be rewritten as an equivalent system of first-order DEs and solved by the aforementioned techniques.

Consider
\[ \frac{d^2y}{dx^2} = F\left(x, y, \frac{dy}{dx}\right), \]  (6.1)
where F is a known function. It can be reformulated as an equivalent pair of first-order DEs. Let v = dy/dx. Then d²y/dx² = dv/dx, and so solving equation (6.1) is equivalent to solving the following two first-order DEs:
\[ \frac{dy}{dx} = v, \qquad \frac{dv}{dx} = F(x, y, v). \]  (6.2)
In general, the second DE cannot be solved directly, since it involves three variables, namely x, y and v. However, if it involves only two variables, then it may be possible to solve it.
6.1. Dependent variable y missing. If y does not occur explicitly in the function F, then the equation takes the form
\[ \frac{d^2y}{dx^2} = F\left(x, \frac{dy}{dx}\right). \]  (6.3)
Substituting v = dy/dx and d²y/dx² = dv/dx into this equation allows us to replace it with two first-order equations:
\[ \frac{dy}{dx} = v, \qquad \frac{dv}{dx} = F(x, v). \]  (6.4)
Thus, we first solve the second equation for v in terms of x, and then solve the first equation for y.
Example 6.1. Find the general solution to
\[ \frac{d^2y}{dx^2} = \frac{1}{x}\left(\frac{dy}{dx} + x^2\cos x\right), \quad x > 0. \]  (6.5)

Solution: In this equation, the dependent variable y is missing, so we let v = dy/dx, which implies that d²y/dx² = dv/dx. Substituting into (6.5) yields the following equivalent first-order system:
\[ \frac{dy}{dx} = v, \qquad \frac{dv}{dx} = \frac{1}{x}\left(v + x^2\cos x\right). \]  (6.6)
The second equation is a first-order linear equation with standard form
\[ \frac{dv}{dx} - \frac{1}{x}v = x\cos x, \]  (6.7)
with
\[ p(x) = -\frac{1}{x}, \qquad q(x) = x\cos x. \]
Hence, the integrating factor is
\[ I(x) = e^{-\int \frac{1}{x}\,dx} = e^{-\ln x} = x^{-1}, \]
and the solution is
\[ v(x) = x\left(\int \cos x\,dx + c_1\right) = x\left(\sin x + c_1\right). \]
Substituting v into the first equation gives
\[ \frac{dy}{dx} = x\sin x + c_1 x, \]
which we can integrate to obtain
\[ y(x) = -x\cos x + \sin x + c_1 x^2 + c_2, \]
where we have absorbed a factor of 1/2 into c₁.
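The general solution found above can be verified by direct substitution; a quick symbolic check (a sketch, not from the note):

```python
# Verify that y = -x*cos(x) + sin(x) + c1*x**2 + c2 solves
# y'' = (1/x) * (y' + x**2 * cos(x)).
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = -x*sp.cos(x) + sp.sin(x) + c1*x**2 + c2
residual = sp.simplify(sp.diff(y, x, 2) - (sp.diff(y, x) + x**2*sp.cos(x))/x)
print(residual)  # 0
```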
6.2. Independent variable x missing. If x does not occur explicitly in the function F, then we must solve a DE of the form
\[ \frac{d^2y}{dx^2} = F\left(y, \frac{dy}{dx}\right). \]  (6.8)
In this case, we still let
\[ v = \frac{dy}{dx}, \]
as previously, but now we use the chain rule to express d²y/dx² in terms of dv/dy. Specifically, we have
\[ \frac{d^2y}{dx^2} = \frac{dv}{dx} \stackrel{\text{chain rule}}{=} \frac{dv}{dy}\,\frac{dy}{dx} = v\,\frac{dv}{dy}. \]
Substituting into equation (6.8) reduces the second-order equation to the equivalent first-order system
\[ \frac{dy}{dx} = v, \qquad v\,\frac{dv}{dy} = F(y, v). \]  (6.9)
In this case, we first solve the second equation for v as a function of y, and then solve the first equation for y as a function of x.
Example 6.2. Find the general solution to
\[ \frac{d^2y}{dx^2} = -\frac{2}{1 - y}\left(\frac{dy}{dx}\right)^2. \]  (6.10)

Solution: In this DE, the independent variable does not occur explicitly. Therefore, we let v = dy/dx and use the chain rule to obtain
\[ \frac{d^2y}{dx^2} = \frac{dv}{dx} \stackrel{\text{chain rule}}{=} \frac{dv}{dy}\,\frac{dy}{dx} = v\,\frac{dv}{dy}. \]
Substituting into (6.10) results in the equivalent system
\[ \frac{dy}{dx} = v, \qquad v\,\frac{dv}{dy} = -\frac{2}{1 - y}\,v^2. \]  (6.11)
Separating the variables in the second equation gives
\[ \frac{1}{v}\,dv = -\frac{2}{1 - y}\,dy, \]
which can be integrated to obtain
\[ \ln|v| = 2\ln|1 - y| + c. \]
Combining the logarithm terms and exponentiating yields
\[ v(y) = c_1(1 - y)^2, \]
where we have redefined c₁ = e^c. Substituting for v into the first equation of (6.11) yields
\[ \frac{dy}{dx} = c_1(1 - y)^2. \]
Separating the variables and integrating, we obtain
\[ (1 - y)^{-1} = c_1 x + c_2. \]
That is,
\[ y = 1 - \frac{1}{c_1 x + c_2}. \]
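As before, the answer can be confirmed by substituting it back into the DE; a brief symbolic check (a sketch):

```python
# Verify that y = 1 - 1/(c1*x + c2) solves y'' = -(2/(1 - y)) * (y')**2.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = 1 - 1/(c1*x + c2)
residual = sp.simplify(sp.diff(y, x, 2) + 2/(1 - y)*sp.diff(y, x)**2)
print(residual)  # 0
```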
Exercise 6.1. Solve the following second-order DEs.
\[ 1.\;\; y'' + y'\tan x = (y')^2. \]
\[ 2.\;\; y'' - (y')^2 - y' = 0. \]
\[ 3.\;\; yy'' = 3(y')^2. \]
7. Solutions to Selected Exercises
Exercise 1.1
We take the derivative on both sides of xy = ln y + c with respect to x (keeping in mind that y = y(x)), and find that
\[ y + xy' = \frac{y'}{y} \;\Longrightarrow\; \frac{dy}{dx} = \frac{y^2}{1 - xy}. \]
Exercise 1.2
By the conditions,
\[ y(\pi/8) = c_1\,\frac{\sqrt{2}}{2} + c_2\,\frac{\sqrt{2}}{2} + 1 = 0, \qquad y'(\pi/8) = c_1\sqrt{2} - c_2\sqrt{2} = \sqrt{2}. \]
Solving for c₁ and c₂:
\[ c_1 = \frac{1 - \sqrt{2}}{2}, \qquad c_2 = -\frac{1 + \sqrt{2}}{2}. \]
Exercise 1.3
Recall the rule for differentiation under the integral sign:
\[ F(x) = \int_{a(x)}^{b(x)} f(x, t)\,dt \;\Longrightarrow\; F'(x) = f(x, b(x))\,b'(x) - f(x, a(x))\,a'(x) + \int_{a(x)}^{b(x)} \frac{\partial}{\partial x} f(x, t)\,dt, \]
where we assume that the integration and differentiation are well defined. Then the verification in Exercise 1.3 is straightforward.
Exercise 4.1
We rewrite the equation as
\[ (y^2 - 1)\,dy = (1 - x^2)\,dx \;\Longrightarrow\; \int (y^2 - 1)\,dy = \int (1 - x^2)\,dx + C. \]
This gives the general solution: y³ − 3y = −x³ + 3x + C. The equation defines the solution y implicitly. A graph showing the direction field and implicit plots of several integral curves for the differential equation is given in the figure (left) below.
Figure 7.5. Direction field and solution curves (left: Exercise 4.1; right: Exercise 4.2)
Exercise 4.2
Suppose that y ≠ 0, and rewrite the equation as
\[ \left(\frac{1}{y} + 3y^2\right) dy = \cos x\,dx. \]
Hence, the general solution is ln|y| + y³ = sin x + C, together with y = 0. Using the initial value y(0) = 1, we find the constant C = 1, so the particular solution is ln y + y³ = sin x + 1. The graph of this solution (black), along with the direction field and several integral curves (blue) for this differential equation, is plotted in the above figure (right).
Exercise 4.3
It is clear that y(x₀) = y₀. Moreover, we can readily verify that (4.13) satisfies (4.12).
Exercise 4.4
Write the equation (4.14) in the standard form:
\[ y' + \frac{3}{x}y = 2x^4. \]
Applying the formula (4.13) with p(x) = 3/x, q(x) = 2x⁴, x₀ = 2 and y₀ = 1 gives
\[ y(x) = \frac{x^5}{4} - \frac{56}{x^3}, \quad x \in (0, \infty). \]
Exercise 4.5
Following the process for the derivation of (4.21), or directly applying this formula to (4.25), leads to the general solution:
\[ y^{2/3} = Cx^2 + \frac{2}{9}x^5. \]
Exercise 4.6
We rewrite the equation (4.26) in the standard form
\[ y' + \frac{1}{3}y = \frac{e^{-x}}{3}\,y^{-2}. \]
It is a Bernoulli equation with n = −2. The general solution is
\[ y = (C + x)^{1/3}e^{-x/3}. \]
Exercise 4.7
The general solution to (4.27) is
\[ (i)\;\; y^2 = \frac{1}{12\cos x}(4x + 5)^3 + \frac{C}{\cos x}, \]
and the general solution to (4.28) is
\[ (ii)\;\; y^{-1} = x^2(\sin x + C). \]

Exercise 4.9
The equation (4.40) is exact and its general solution is
\[ \frac{1}{2}x^2 + x\sin y - y^2 = C. \]
Exercise 4.10
(1) The equation (1) is exact, and we have
\[ 2xy\,dx + (x^2 + 2y)\,dy = 0 \;\Longrightarrow\; d(y^2 + x^2 y) = 0. \]
Hence, the general solution is
\[ y^2 + x^2 y = C. \]
(2) The equation (2) is a homogeneous DE. Hence, let y = xV, which converts the problem to
\[ \frac{V'}{\sqrt{1 - V^2}} = \frac{1}{x} \;\Longrightarrow\; \int \frac{dV}{\sqrt{1 - V^2}} = \int \frac{dx}{x} + C. \]
Thus, the general solution is
\[ y = x\sin(C + \ln x). \]
(3) The equation (3) is a Bernoulli equation with n = −1. We change the variable u = y² and find that
\[ \frac{du}{dx} + \frac{2}{x}u = 25x^2 \ln x. \]
We solve this linear equation and find that
\[ y^2 = x^{-2}\left[x^5(5\ln x - 1) + C\right]. \]
Exercise 4.11
The solution method is similar to (2) of Exercise 4.10, and the general solution is
\[ y = \sqrt{(2x + C)C}. \]

Exercise 4.12
Using the transformation v = ax + by + c, we convert the equation to
\[ \frac{dv}{dx} = bF(v) + a, \]
which is separable.

Exercise 4.13
Let v = x + y; then, provided v ≠ −1, we obtain
\[ dx = \frac{v}{1 + v}\,dv. \]
Therefore, the general solution is y = −(x + 1) or y = ln|x + y + 1| + C.
Exercise 5.1
It is clear that y ≡ 3 is a solution to the given IVP. Moreover, we find from the Existence and Uniqueness Theorem that the IVP has a unique solution, which must be y ≡ 3.
Exercise 6.1
1. We see that the dependent variable y is missing, so we let v = dy/dx and obtain
\[ v' + v\tan x = v^2, \]
which is a Bernoulli equation with n = 2. Solving this equation, we find the general solution:
\[ y = C_2 - \ln|C_1 - \sin x|. \]
2. The equation falls into the type with dependent variable y missing. Introduce v = dy/dx; we then need to solve the Bernoulli equation v' − v = v². Finally, the general solution is
\[ y = -\ln|C_1 + C_2 e^{x}|. \]
3. In this equation, the independent variable x is missing. Introduce
\[ v = \frac{dy}{dx} \;\Longrightarrow\; y'' = v\frac{dv}{dy}, \]
and we obtain the equation for v in terms of y:
\[ y\frac{dv}{dy} = 3v. \]
The general solution is
\[ y = \frac{c_1}{\sqrt{1 + c_2 x}}. \]
CHAPTER 2

Second-Order Linear Equations

Linear equations of second order are of crucial importance in the study of differential equations for two reasons. First, linear equations have a rich theoretical structure that underlies a number of systematic methods of solution; moreover, this theory is understandable at a fairly elementary mathematical level. Second, second-order equations appear frequently in the mathematical modeling of physical processes and are vital to any serious investigation of the classical areas of mathematical physics.

In this chapter, we introduce the theory of the solution structure for linear second-order equations and discuss methods of solution. All of this will be extended to general nth-order equations in the next chapter.
1. Basic Theoretical Results

In this section, we introduce the basic theoretical results for second-order linear DEs.

Definition 1.1. A linear second-order equation can be written in the standard form
\[ y'' + p(x)y' + q(x)y = f(x), \]  (1.1)
where p, q and f are given continuous functions on some interval I.
• If f(x) in (1.1) is not identically zero on I, then the DE is called nonhomogeneous.
• If f(x) ≡ 0, the DE
\[ y'' + p(x)y' + q(x)y = 0 \]  (1.2)
is homogeneous.
• We call (1.2) the homogeneous equation corresponding to (1.1), or the homogeneous equation associated with (1.1).
• Note that y(x) ≡ 0 is always a solution to (1.2).

The initial value problem (IVP) associated with (1.1) is given by
\[ y'' + p(x)y' + q(x)y = f(x), \quad y(x_0) = y_0, \quad y'(x_0) = y_1, \]  (1.3)
where x₀, y₀, y₁ are given constants.

If two conditions are imposed at different points x₀ ≠ x₁, we obtain a boundary value problem:
\[ y'' + p(x)y' + q(x)y = f(x), \quad y(x_0) = y_0, \quad y(x_1) = y_1, \]  (1.4)
where x₀, x₁, y₀, y₁ are given values.
In what follows, we shall first focus on the homogeneous equation (1.2) and develop the theory of its solution structure.

1.1. Existence and uniqueness of solutions to IVP. As a starting point, we introduce the following theorem.

Theorem 1.1. Suppose that the functions p(x), q(x) and f(x) are continuous on an interval I containing the point x₀. Then, given any two numbers y₀ and y₁, the IVP (1.3) has a unique solution on I.

Remark 1.1. Two remarks are in order.
• This theorem tells us that any such IVP has a unique solution on the whole interval I if the coefficients are continuous.
• The proof of this theorem is fairly difficult, so it is omitted here. One may refer to, for example, E. A. Coddington, An Introduction to ODEs, Dover, 1961, for the proof.
1.2. Principle of superposition. The following theorem states a fundamental property of solutions to linear homogeneous DEs.

Theorem 1.2. If y₁(x) and y₂(x) are two solutions to the homogeneous DE (1.2), then any linear combination of y₁ and y₂,
\[ y(x) = c_1 y_1(x) + c_2 y_2(x), \]
is also a solution to the same equation, where c₁ and c₂ are arbitrary constants. This property is called the principle of superposition.

The conclusion follows directly from the linearity of the operation of differentiation, so we leave it as an exercise.

Exercise 1.1. Prove the above theorem. Is it true for the nonhomogeneous DE?
Example 1.1. Verify that y(x) = c₁ cos x + c₂ sin x, with c₁ and c₂ arbitrary constants, is a solution to y'' + y = 0.

Corollary 1.1. As a direct consequence of this principle, the solution space
\[ V = \left\{ y \in C^2(I) : y'' + p(x)y' + q(x)y = 0 \right\} \]  (1.5)
is a linear vector space.
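Example 1.1 can be checked by direct symbolic differentiation; a minimal sketch (not part of the note):

```python
# y = c1*cos(x) + c2*sin(x) satisfies y'' + y = 0 for arbitrary c1, c2,
# illustrating the principle of superposition for this equation.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = c1*sp.cos(x) + c2*sp.sin(x)
print(sp.simplify(sp.diff(y, x, 2) + y))  # 0
```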
1.3. Linear dependence/independence and Wronskian. To characterize the solution space (1.5), it is necessary to study the linear dependence of two functions.

Definition 1.2. Two functions y₁(x), y₂(x) defined on an interval I are said to be linearly independent (LI) on I if the equation c₁y₁(x) + c₂y₂(x) = 0 on I holds only when c₁ = c₂ = 0. If two functions are not linearly independent on I, they are called linearly dependent (LD).

Based on the above definition, we have the following simple rule for two functions.

Exercise 1.2. Show that two functions y₁(x) and y₂(x) are linearly dependent on I if and only if they are proportional (y₁(x) = cy₂(x) or y₂(x) = dy₁(x), where c, d are constants).

Example 1.2. The functions y₁(x) = x² and y₂(x) = 5x² are linearly dependent on any interval, since y₂ = 5y₁. The functions y₁(x) = x and y₂(x) = x² are linearly independent on any interval, since neither is a constant multiple of the other.
We next introduce another tool to determine dependence and independence: the Wronskian. Importantly, it can be used for more than two functions, as will be shown in the forthcoming chapter.

Definition 1.3. Let y₁ and y₂ be differentiable functions on the interval I. The Wronskian of y₁, y₂, denoted by W(y₁, y₂)(x), is defined by
\[ W(y_1, y_2)(x) = \det\begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix} = y_1 y_2' - y_1' y_2. \]  (1.6)

Exercise 1.3. Compute the Wronskian of sin x and cos x.
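For a quick answer check, sympy provides a built-in Wronskian helper (a sketch, not from the note):

```python
# The Wronskian of sin(x) and cos(x) is sin*(-sin) - cos*cos = -1.
import sympy as sp

x = sp.symbols('x')
W = sp.wronskian([sp.sin(x), sp.cos(x)], x)
print(sp.simplify(W))  # -1
```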
The following result indicates how to use the Wronskian to determine linear dependence/independence of two functions.

Exercise 1.4. Given any two differentiable functions f, g on I, show that if there exists a point x₀ ∈ I such that the Wronskian W(f, g)(x₀) ≠ 0, then f, g are linearly independent. In other words, if f, g are linearly dependent, then W(f, g)(x) ≡ 0 for all x ∈ I.

Exercise 1.5. Consider the following two functions:
\[ f(x) = \begin{cases} x^2, & -1 \le x \le 0, \\ 0, & 0 \le x \le 1, \end{cases} \qquad g(x) = \begin{cases} 0, & -1 \le x \le 0, \\ x^2, & 0 \le x \le 1. \end{cases} \]
• Show that the Wronskian W(f, g) ≡ 0.
• Are f and g linearly dependent?
• What conclusion can we draw from this problem?

Remark 1.2. As an important remark,
\[ \text{Wronskian nonzero at one point} \;\Longrightarrow\; \text{LI}, \]
or equivalently,
\[ \text{LD} \;\Longrightarrow\; \text{Wronskian is zero at all points.} \]
However, the converse does not hold; that is,
\[ \text{LI} \;\not\Longrightarrow\; \text{Wronskian nonzero at some point.} \]
Nevertheless, the converse becomes true if the two given functions y₁ and y₂ are solutions to the homogeneous DE (1.2). To show this, it is essential to derive Abel's formula (1.7) below.
Theorem 1.3. If y₁ and y₂ are solutions to the homogeneous DE
\[ y'' + p(x)y' + q(x)y = 0, \]
where p, q are continuous on I, then the Wronskian W(y₁, y₂)(x) is given by
\[ W(y_1, y_2)(x) = c\,\exp\left(-\int p(x)\,dx\right), \]  (1.7)
where c is a certain constant that depends on the pair y₁, y₂, but not on x. This implies that W(y₁, y₂)(x) is either zero for all x ∈ I (if c = 0) or else never zero on I (if c ≠ 0).

Proof. Note that y₁ and y₂ satisfy
\[ y_1'' + p(x)y_1' + q(x)y_1 = 0, \qquad y_2'' + p(x)y_2' + q(x)y_2 = 0. \]
If we multiply the first equation by −y₂, multiply the second by y₁, and add the resulting equations, we obtain
\[ (y_1 y_2'' - y_1'' y_2) + p(x)(y_1 y_2' - y_1' y_2) = 0. \]
Next, we let W(x) = W(y₁, y₂)(x) and observe that
\[ W' = (y_1 y_2' - y_1' y_2)' = y_1 y_2'' - y_1'' y_2. \]
Then we can write the previous equation as
\[ W'(x) + p(x)W(x) = 0, \]
which is a separable equation and admits the solution
\[ W(x) = c\,\exp\left(-\int p(x)\,dx\right), \]
where c is a constant independent of x, though it depends on which pair of solutions y₁, y₂ is involved. Since the exponential function is never zero, W(x) is nonzero for all x unless c = 0, in which case W(x) is zero for all x.
Exercise 1.6. Suppose that y₁ and y₂ are solutions to the homogeneous DE (1.2) on I, and assume that for some x₀ ∈ I we have W(y₁, y₂)(x₀) > 0. Show that W(y₁, y₂)(x) > 0 for all x ∈ I.

Example 1.3. We can verify that e^{-2x} and e^{-3x} are solutions to the DE
\[ y'' + 5y' + 6y = 0. \]
Find the Wronskian and the constant in Abel's formula.

Solution: By the definition,
\[ W = \det\begin{pmatrix} e^{-2x} & e^{-3x} \\ -2e^{-2x} & -3e^{-3x} \end{pmatrix} = -e^{-5x}. \]
On the other hand, by Abel's formula,
\[ W = c\,\exp\left(-\int 5\,dx\right) = c\,e^{-5x}. \]
Hence, the constant c for the pair of solutions e^{-2x} and e^{-3x} is −1.

We may check that for a different pair of solutions, say e^{-2x} and 5e^{-3x}, the constant in Abel's formula is different.
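Both the Wronskian and the Abel constant in Example 1.3 can be recomputed symbolically (a sketch):

```python
# W(e^{-2x}, e^{-3x}) = -e^{-5x}, so the Abel constant for this pair is c = -1.
import sympy as sp

x = sp.symbols('x')
W = sp.simplify(sp.wronskian([sp.exp(-2*x), sp.exp(-3*x)], x))
print(W)                               # -exp(-5*x)
print(sp.simplify(W / sp.exp(-5*x)))   # -1, the constant in Abel's formula
```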
The following theorem indicates how the Wronskian relates to the linear dependence and independence of two functions that are solutions to a homogeneous DE.

Theorem 1.4. Let y₁ and y₂ be solutions to the homogeneous equation
\[ y'' + p(x)y' + q(x)y = 0, \]  (1.8)
where p, q are continuous on I. Then y₁ and y₂ are linearly dependent on I if and only if the Wronskian W(y₁, y₂)(x) ≡ 0 for all x ∈ I. Equivalently, y₁ and y₂ are linearly independent on I if and only if W(y₁, y₂) is never zero on I. That is,
\[ W(y_1, y_2)(x) \equiv 0 \text{ on } I \iff \text{LD}, \qquad W(y_1, y_2)(x) \ne 0 \text{ on } I \iff \text{LI}. \]
Proof. In view of Remark 1.2 and Theorem 1.3, it suffices to show that if W(y₁, y₂)(x₀) = 0 for some x₀ ∈ I, then y₁, y₂ are linearly dependent. Notice that W(y₁, y₂)(x₀) is simply the determinant of the coefficient matrix of the linear system
\[ c_1 y_1(x_0) + c_2 y_2(x_0) = 0, \qquad c_1 y_1'(x_0) + c_2 y_2'(x_0) = 0, \]  (1.9)
where c₁ and c₂ are the unknowns. Because W(y₁, y₂)(x₀) = 0, the system (1.9) has nontrivial solutions. We pick one nontrivial solution c₁, c₂ (i.e., not both zero) and denote
\[ \phi(x) = c_1 y_1(x) + c_2 y_2(x). \]
It follows from Theorem 1.2 and (1.9) that φ is the solution of the IVP consisting of (1.8) with the initial conditions y(x₀) = y'(x₀) = 0. By the existence and uniqueness Theorem 1.1, this problem has only the trivial solution φ ≡ 0. Thus, we have shown that there exist c₁ and c₂, not both zero, such that
\[ \phi(x) = c_1 y_1(x) + c_2 y_2(x) \equiv 0, \quad x \in I. \]
Therefore, y₁ and y₂ are linearly dependent.
With the aid of Theorems 1.1 and 1.4, we are able to derive an important property of the linear homogeneous DE.

Theorem 1.5. The linear homogeneous equation
\[ y'' + p(x)y' + q(x)y = 0, \]  (1.10)
where p, q are continuous on I, always has two linearly independent solutions.

Proof. Let x₀ be a point in I and consider the IVP:
\[ y'' + p(x)y' + q(x)y = 0, \quad y(x_0) = 1, \quad y'(x_0) = 0. \]
By Theorem 1.1, this problem has a unique solution, denoted by y₁(x). Similarly, the IVP
\[ y'' + p(x)y' + q(x)y = 0, \quad y(x_0) = 0, \quad y'(x_0) = 1, \]
also has a unique solution y₂(x). Since the Wronskian W(y₁, y₂)(x₀) = 1 ≠ 0, y₁ and y₂ are linearly independent.
Now we are ready to state the most important theorem on the structure of the solutions to the homogeneous DE (1.2).

Theorem 1.6. Let y₁ and y₂ be two linearly independent solutions to the homogeneous DE
\[ y'' + p(x)y' + q(x)y = 0, \]  (1.11)
where p, q are continuous on I. Then every solution to this DE is of the form
\[ y(x) = c_1 y_1(x) + c_2 y_2(x), \]  (1.12)
where c₁ and c₂ are two arbitrary constants. That is to say, the linear combination (1.12) gives the general solution to (1.11). We also say that y₁ and y₂ are fundamental solutions of (1.11).

Proof. Let Y(x) be any solution of (1.11), and let x₀ be a given point in I. Denote Y(x₀) = Y₀ and Y'(x₀) = Y₁. We first look for c₁ and c₂ such that the combination c₁y₁ + c₂y₂ matches these data at x₀:
\[ c_1 y_1(x_0) + c_2 y_2(x_0) = Y_0, \qquad c_1 y_1'(x_0) + c_2 y_2'(x_0) = Y_1. \]
One finds that
\[ c_1 = \frac{Y_0\, y_2'(x_0) - Y_1\, y_2(x_0)}{W(y_1, y_2)(x_0)}, \qquad c_2 = \frac{-Y_0\, y_1'(x_0) + Y_1\, y_1(x_0)}{W(y_1, y_2)(x_0)}, \]  (1.13)
where the denominator is nonzero, since y₁ and y₂ are linearly independent.

It is essential to notice that both Y(x) and c₁y₁(x) + c₂y₂(x) solve the initial value problem consisting of (1.11) with the initial values y(x₀) = Y₀ and y'(x₀) = Y₁. By the existence and uniqueness Theorem 1.1, the solution of this IVP is unique, and hence Y(x) = c₁y₁(x) + c₂y₂(x).
The significance of Theorem 1.6 is that, to solve the second-order linear homogeneous DE (1.11) with continuous coefficients on some interval I, it suffices to find two linearly independent solutions y₁ and y₂. In other words,
\[ V = \left\{ y \in C^2(I) : y'' + p(x)y' + q(x)y = 0 \right\} = \operatorname{span}\{y_1, y_2\}. \]
The dimension of the solution space is two.
Example 1.4. Determine all values of r such that x^r is a solution to
\[ x^2 y'' + 3xy' - 8y = 0, \quad x > 0. \]
Hence, find the general solution to this equation on (0, ∞).

Solution: Insert y = x^r into the given equation:
\[ r(r - 1)x^r + 3r x^r - 8x^r = 0 \;\Longrightarrow\; r^2 + 2r - 8 = 0. \]
Therefore, the roots are r = −4, 2, which means y₁ = x^{-4} and y₂ = x² are two solutions of the equation. Moreover, they are linearly independent, and thus the general solution is y(x) = c₁x^{-4} + c₂x².
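The computation of the exponent equation in Example 1.4 can be reproduced symbolically (a sketch):

```python
# Substituting y = x**r into x^2*y'' + 3x*y' - 8y leaves x^r * (r^2 + 2r - 8),
# so the admissible exponents are the roots of r^2 + 2r - 8.
import sympy as sp

x = sp.symbols('x', positive=True)
r = sp.symbols('r')
expr = x**2*sp.diff(x**r, x, 2) + 3*x*sp.diff(x**r, x) - 8*x**r
char_poly = sp.simplify(expr / x**r)
print(sp.solve(char_poly, r))  # the roots -4 and 2
```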
Exercise 1.7. Determine all values of r such that e^{rx} is a solution to
\[ y'' - 4y' + 3y = 0. \]
Hence, find the general solution to this equation.
2. Reduction of Order

We now introduce a technique to determine the general solution to a second-order linear DE when we know one solution to the associated homogeneous equation. More precisely, assume that we know one solution y₁(x) (≢ 0) of
\[ y''(x) + p(x)y'(x) + q(x)y(x) = 0. \]  (2.1)
We need to find a second, linearly independent solution y₂(x), and hence find the general solution to this DE: y(x) = c₁y₁(x) + c₂y₂(x).

Method of reduction of order

In order that y₁ and y₂ be linearly independent, the ratio y₂/y₁ must be nonconstant. We therefore set
\[ y_2(x) = u(x)y_1(x), \]  (2.2)
and determine the unknown (nonconstant) function u(x) from the equation (2.1). Plugging y₂ into (2.1) gives
\[ (uy_1)'' + p(uy_1)' + q(uy_1) = 0. \]
Grouping the coefficients of u'', u', u:
\[ y_1 u'' + \left(2y_1' + py_1\right)u' + \underbrace{\left(y_1'' + py_1' + qy_1\right)}_{=0}\,u = 0. \]
Since y₁ is a solution, the coefficient of u vanishes, and the equation becomes
\[ y_1 u'' + \left(2y_1' + py_1\right)u' = 0. \]
We deduce that u(x) satisfies the DE
\[ \frac{u''}{u'} = -\left(\frac{2y_1'}{y_1} + p(x)\right), \]  (2.3)
which is a first-order separable DE for u'. Integrating directly leads to
\[ \ln|u'| = -2\ln|y_1| - \int p(x)\,dx + c, \]  (2.4)
which implies that
\[ u(x) = c\int \frac{1}{y_1^2(x)}\, e^{-\int p(x)\,dx}\,dx, \]  (2.5)
where c is any nonzero number. Then the general solution to the original equation is
\[ y(x) = c_1 y_1(x) + c_2 \underbrace{u(x)y_1(x)}_{y_2(x)}. \]
Example 2.1. Find the general solution to
\[ xy'' - 2y' + (2 - x)y = 0, \quad x > 0, \]
given that one solution is y₁(x) = e^x.

Solution: We know that there is a second linearly independent solution of the form
\[ y_2(x) = u(x)y_1(x) = u(x)e^x. \]
Inserting it into the given DE and collecting the terms yields
\[ x(u'' + 2u' + u) - 2(u' + u) + (2 - x)u = 0, \]
which simplifies to
\[ xu'' + 2(x - 1)u' = 0. \]
Separating the variables yields
\[ \frac{u''}{u'} = 2\left(x^{-1} - 1\right). \]
By integrating, we obtain
\[ \ln|u'| = 2(\ln x - x) + c, \]
which can be written as
\[ u' = c_1 x^2 e^{-2x}. \]
Integrating once more and setting the resulting integration constant to zero gives
\[ u(x) = -\frac{1}{4}c_1 e^{-2x}\left(1 + 2x + 2x^2\right). \]
Taking c₁ = −4, we get
\[ y_2(x) = u(x)e^x = e^{-x}\left(1 + 2x + 2x^2\right). \]
Consequently, the general solution to the given DE is
\[ y(x) = c_1 e^x + c_2 e^{-x}\left(1 + 2x + 2x^2\right). \]
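The second solution obtained by reduction of order can be verified symbolically (a sketch):

```python
# Check that y2 = e^{-x} * (1 + 2x + 2x^2) satisfies x*y'' - 2*y' + (2 - x)*y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
y2 = sp.exp(-x)*(1 + 2*x + 2*x**2)
residual = sp.simplify(x*sp.diff(y2, x, 2) - 2*sp.diff(y2, x) + (2 - x)*y2)
print(residual)  # 0
```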
Exercise 2.1. Given y₁(x) = x², find a second linearly independent solution to the DE
\[ x^2 y'' - 3xy' + 4y = 0, \quad x > 0. \]

Key: y₂(x) = x² ln x.
The reduction of order technique can also be used to determine the general solution to a second-order nonhomogeneous linear DE, assuming that we know one solution to the associated homogeneous equation. Following a similar process, one may prove:

Theorem 2.1. Consider the DE
\[ y'' + p(x)y' + q(x)y = f(x), \]  (2.6)
where p, q and f are continuous on I. If y = y₁(x) is a solution to the associated homogeneous DE, then y₂ = u(x)y₁(x) is a solution of the nonhomogeneous DE (2.6), provided that v = u' is a solution to the linear DE
\[ v' + \left(\frac{2y_1'}{y_1} + p\right)v = \frac{f}{y_1}. \]  (2.7)
Example 2.2. Find the general solution to
\[ x^2 y'' + 3xy' + y = 4\ln x, \quad x > 0, \]  (2.8)
given that one solution to the associated homogeneous equation is y₁(x) = x^{-1}.

Solution: We try a solution of the form
\[ y(x) = x^{-1}u(x), \]
where u(x) is to be determined. Differentiating y twice yields
\[ y' = x^{-1}u' - x^{-2}u, \qquad y'' = x^{-1}u'' - 2x^{-2}u' + 2x^{-3}u. \]
Substituting into equation (2.8) and collecting the terms, we obtain the following DE for u:
\[ u'' + x^{-1}u' = 4x^{-1}\ln x. \]
To facilitate the integration, we let v = u', and the equation for u can be written as the first-order linear DE
\[ v' + x^{-1}v = 4x^{-1}\ln x. \]
An integrating factor for this linear equation is I = e^{∫x^{-1}dx} = x, so that the equation can be written in the equivalent form
\[ \frac{d}{dx}(xv) = 4\ln x. \]
Integrating both sides with respect to x yields
\[ xv = 4x(\ln x - 1) + c_1. \]
Thus
\[ v(x) = 4(\ln x - 1) + c_1 x^{-1}. \]
Therefore,
\[ u'(x) = 4(\ln x - 1) + c_1 x^{-1}, \]
which can be integrated directly:
\[ u(x) = 4x(\ln x - 2) + c_1 \ln x + c_2. \]
Inserting this expression for u into y gives
\[ y(x) = 4(\ln x - 2) + c_1 x^{-1}\ln x + c_2 x^{-1}, \]
which is the general solution to the given DE.
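The general solution of this nonhomogeneous problem can likewise be verified by substitution (a sketch):

```python
# Check that y = 4*(ln x - 2) + c1*ln(x)/x + c2/x satisfies
# x^2*y'' + 3x*y' + y = 4*ln(x).
import sympy as sp

x = sp.symbols('x', positive=True)
c1, c2 = sp.symbols('c1 c2')
y = 4*(sp.log(x) - 2) + c1*sp.log(x)/x + c2/x
residual = sp.simplify(x**2*sp.diff(y, x, 2) + 3*x*sp.diff(y, x) + y - 4*sp.log(x))
print(residual)  # 0
```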
Exercise 2.2. Consider the Cauchy-Euler equation
\[ x^2 y'' - (2m - 1)xy' + m^2 y = 0, \quad x > 0, \]
where m is a constant.
(a) Determine a particular solution to this equation of the form y₁(x) = x^r.
(b) Use your solution from (a) and the method of reduction of order to obtain a second linearly independent solution, and hence find the general solution.

Key: (a) y₁(x) = x^m; (b) y₂(x) = x^m ln x.
Exercise 2.3. Consider the DE
\[ xy'' - (\alpha x + \beta)y' + \alpha\beta y = 0, \quad x > 0, \]
where α and β are constants.
(a) Show that y₁(x) = e^{αx} is a solution to this DE.
(b) Use your solution from (a) and the method of reduction of order to obtain a second linearly independent solution
\[ y_2(x) = e^{\alpha x}\int x^{\beta} e^{-\alpha x}\,dx. \]
(c) In the particular case when α = 1 and β is a nonnegative integer, show that a second linearly independent solution is
\[ y_2(x) = 1 + x + \frac{1}{2!}x^2 + \cdots + \frac{1}{\beta!}x^{\beta}. \]
3. Second-Order Homogeneous Linear DE with Constant Coefficients

In this section, we discuss how to solve
\[ ay'' + by' + cy = 0, \]  (3.1)
where a, b, c are given real numbers. Based on the theoretical results established in the previous section, to find the general solution to (3.1), it suffices to find two linearly independent solutions.

An important observation is that if y = e^{rx} is a solution to (3.1), then r must be a root of the characteristic equation:
\[ ar^2 + br + c = 0. \]  (3.2)

Theorem 3.1. y(x) = e^{rx} is a solution to the DE (3.1) if and only if r is a root of the characteristic equation (3.2).

There are three possibilities for the roots, which lead to three different rules for deriving the general solution of (3.1).

Theorem 3.2. Suppose that r₁ and r₂ are the roots of the characteristic equation (3.2).
1. If the roots are real and distinct, i.e., r₁ ≠ r₂, then two linearly independent solutions of (3.1) are e^{r₁x} and e^{r₂x}, and hence the general solution is
\[ y(x) = c_1 e^{r_1 x} + c_2 e^{r_2 x}. \]  (3.3)
2. If the roots are real and repeated, i.e., r₁ = r₂ = r, then two linearly independent solutions of (3.1) are e^{rx} and xe^{rx}, and hence the general solution is
\[ y(x) = (c_1 + c_2 x)e^{rx}. \]  (3.4)
3. If the roots are complex, i.e., r = α ± iβ with β ≠ 0, then two linearly independent (real-valued) solutions of (3.1) are e^{αx} cos βx and e^{αx} sin βx, and hence the general solution is
\[ y(x) = \left(c_1 \cos\beta x + c_2 \sin\beta x\right)e^{\alpha x}. \]  (3.5)

Proof.
Rule 1.
It is clear that e^{r₁x} and e^{r₂x} are solutions of (3.1). Moreover, they are linearly independent, since the Wronskian
\[ W\left(e^{r_1 x}, e^{r_2 x}\right) = (r_2 - r_1)e^{(r_1 + r_2)x} \ne 0. \]
Therefore, (3.3) gives the general solution.
Rule 2.
Since we know that y₁(x) = e^{rx} is a solution, we use the reduction-of-order technique to derive a second solution y₂(x), which is required to be linearly independent of y₁(x). Hence, we assume that y₂(x) = u(x)y₁(x) = u(x)e^{rx}. Plugging this into the equation and collecting the terms, we have
\[ a\left(ue^{rx}\right)'' + b\left(ue^{rx}\right)' + cue^{rx} = \left[au'' + (2ra + b)u' + (ar^2 + br + c)u\right]e^{rx} = 0. \]
Notice that r = −b/(2a) and that r is a root of ar² + br + c = 0. Hence, the above equation simplifies to
\[ au'' = 0 \;\Longrightarrow\; u'' = 0 \;\Longrightarrow\; u = C_1 + C_2 x. \]
We simply take C₁ = 0 and C₂ = 1, and get y₂(x) = xe^{rx}. This implies the general solution (3.4).

Rule 3.
We know that
\[ y_1(x) = e^{(\alpha + i\beta)x}, \qquad y_2(x) = e^{(\alpha - i\beta)x} \]
are solutions and that they are linearly independent. Therefore, we have the complex-valued general solution
\[ y(x) = C_1 e^{(\alpha + i\beta)x} + C_2 e^{(\alpha - i\beta)x}. \]
But in general we would prefer to have real-valued solutions, because the differential equation itself has real coefficients. For this purpose, we use Euler's formula and rewrite the solution as
\[ \begin{aligned} y(x) &= C_1 e^{(\alpha + i\beta)x} + C_2 e^{(\alpha - i\beta)x} \\ &= C_1 e^{\alpha x}(\cos\beta x + i\sin\beta x) + C_2 e^{\alpha x}(\cos\beta x - i\sin\beta x) \\ &= (C_1 + C_2)e^{\alpha x}\cos\beta x + (C_1 - C_2)\,i\,e^{\alpha x}\sin\beta x \\ &= c_1 e^{\alpha x}\cos\beta x + c_2 e^{\alpha x}\sin\beta x. \end{aligned} \]
Therefore, any solution can be expressed as a linear combination of e^{αx} cos βx and e^{αx} sin βx.
We now use the above rules to solve some second-order linear equations with constant coefficients.
Example 3.1. Solve the following equations:

    (1) y'' + 2y' − 15y = 0,   (2) 9y'' − 12y' + 4y = 0,   (3) y'' + 4y' + 13y = 0.

Solution: (1) The characteristic equation r^2 + 2r − 15 = 0 has the roots r_1 = −5 and r_2 = 3. Hence, the general solution is

    y(x) = c_1 e^{3x} + c_2 e^{−5x},   x ∈ (−∞, ∞).

(2) The characteristic equation 9r^2 − 12r + 4 = 0 has the repeated roots r_1 = r_2 = 2/3. Hence, the general solution is

    y(x) = (c_1 + c_2 x) e^{2x/3},   x ∈ (−∞, ∞).

(3) The characteristic equation r^2 + 4r + 13 = 0 has the roots r_{1,2} = −2 ± 3i. Hence, the general solution is

    y(x) = (c_1 cos 3x + c_2 sin 3x) e^{−2x}.
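As a quick sanity check (not part of the original notes), SymPy can solve the three equations of Example 3.1 and confirm that each returned general solution makes its equation vanish identically; this sketch assumes SymPy is available.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# the three equations of Example 3.1, covering the three cases of Theorem 3.2
ode1 = y(x).diff(x, 2) + 2*y(x).diff(x) - 15*y(x)   # distinct real roots -5, 3
ode2 = 9*y(x).diff(x, 2) - 12*y(x).diff(x) + 4*y(x) # repeated real root 2/3
ode3 = y(x).diff(x, 2) + 4*y(x).diff(x) + 13*y(x)   # complex roots -2 +/- 3i

sols = [sp.dsolve(ode, y(x)).rhs for ode in (ode1, ode2, ode3)]
for s in sols:
    print(s)

# substitute each general solution back: the residual must be identically 0
residuals = [sp.simplify(ode.subs(y(x), s).doit())
             for ode, s in zip((ode1, ode2, ode3), sols)]
```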
Exercise 3.1. Consider the Euler equation of the form

    t^2 y'' + α t y' + β y = 0,   t > 0.   (3.6)

Show that the substitution x = ln t transforms this equation into an equation with constant coefficients, and use this technique to solve the following equations (t > 0):

    (1) t^2 y'' − t y' + y = 0,   (2) t^2 y'' + 3t y' + 5y = 0.

Key: (1) y = (c_1 + c_2 ln t) t,   (2) y = (c_1 cos(2 ln|t|) + c_2 sin(2 ln|t|))/t.
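The keys above can be verified by direct substitution. A small SymPy sketch (assuming SymPy is available; not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
c1, c2 = sp.symbols('c1 c2')

# (1) t^2 y'' - t y' + y = 0 with claimed solution y = (c1 + c2 ln t) t
y1 = (c1 + c2*sp.log(t))*t
res1 = sp.simplify(t**2*y1.diff(t, 2) - t*y1.diff(t) + y1)

# (2) t^2 y'' + 3t y' + 5y = 0 with claimed solution
y2 = (c1*sp.cos(2*sp.log(t)) + c2*sp.sin(2*sp.log(t)))/t
res2 = sp.simplify(t**2*y2.diff(t, 2) + 3*t*y2.diff(t) + 5*y2)

print(res1, res2)  # both 0
```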
Exercise 3.2. Consider Laplace's equation

    ∂²u/∂x² + ∂²u/∂y² = 0.   (3.7)

(a) Show that the substitution

    u(x, y) = e^{x/α} f(η),   where η = βx − αy

(and α, β are positive constants) reduces the equation (3.7) to

    d²f/dη² + 2p df/dη + (q/α²) f = 0,   (3.8)

where

    p = β/(α(α² + β²)),   q = 1/(α² + β²).

(b) Solve equation (3.8) and hence find the corresponding solution to (3.7).

Key: u(x, y) = e^{x/α} [e^{−pη} (c_1 sin qη + c_2 cos qη)].
4. Nonhomogeneous DE: Method of Undetermined Coefficients

We now turn to the nonhomogeneous equation

    y'' + p(x) y' + q(x) y = f(x),   x ∈ I,   (4.1)

where p, q and f are given functions on an interval I. The equation with the same coefficients

    y'' + p(x) y' + q(x) y = 0,   x ∈ I,   (4.2)

is called the homogeneous equation corresponding to (4.1), or the associated homogeneous equation of (4.1).
The following result gives the solution structure of the nonhomogeneous DE.

Theorem 4.1. Let Y_1 and Y_2 be two solutions to the nonhomogeneous equation (4.1), and let y_1 and y_2 be a fundamental set of solutions of (4.2).

- The difference Y_1 − Y_2 is a solution of the corresponding homogeneous equation (4.2).
- We have

      Y_1(x) − Y_2(x) = c_1 y_1(x) + c_2 y_2(x),   (4.3)

  where c_1, c_2 are real constants.
- Every solution of the nonhomogeneous DE takes the form

      y(x) = y_p(x) + c_1 y_1(x) + c_2 y_2(x) = y_p(x) + y_c(x),   (4.4)

  where y_p(x) is a particular solution of (4.1), and y_c(x) = c_1 y_1(x) + c_2 y_2(x) is the general solution to the associated homogeneous DE (4.2), also called the complementary function for equation (4.1).

According to this theorem, to find the general solution to the nonhomogeneous equation (4.1), it suffices to find one solution of this equation, i.e., a particular solution y_p(x), together with the general solution to the corresponding homogeneous equation (4.2).
In this section, we introduce a method to find a particular solution to the nonhomogeneous equation with constant coefficients

    a y'' + b y' + c y = f(x),   (4.5)

where a, b, c are constants, and f(x) is restricted to one of the following special forms:

    (1) f(x) = A e^{ax},  A x^k,  or  A cos bx + B sin bx;
    (2) sums or products of functions given in (1).   (4.6)

The idea behind the technique is to guess a particular solution of an appropriate form. We shall introduce this technique via some examples.
Example 4.1. Determine the general solution to

    y'' − y = 16 e^{3x}.

Solution: We first obtain the complementary function. The characteristic equation corresponding to the homogeneous DE is

    r^2 − 1 = 0  ⟹  r = ±1,

so that

    y_c(x) = c_1 e^{x} + c_2 e^{−x}.

We now need to determine a particular solution. Since taking derivatives of e^{3x} gives back multiples of e^{3x}, we might guess a particular solution of the form

    y_p(x) = A_0 e^{3x},   (4.7)

for an appropriate choice of the constant A_0. We call (4.7) a trial solution. We now substitute the proposed trial solution into the given DE to determine the constant A_0. Differentiating y_p yields

    y_p' = 3A_0 e^{3x},   y_p'' = 9A_0 e^{3x},

so that y_p(x) = A_0 e^{3x} is a solution to the given equation if and only if A_0 = 2. Therefore, y_p = 2e^{3x} and the general solution is

    y(x) = y_p(x) + y_c(x) = 2e^{3x} + c_1 e^{x} + c_2 e^{−x},

where c_1 and c_2 are arbitrary constants.
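The determination of A_0 can be automated. A minimal SymPy sketch (assuming SymPy is available; not part of the original notes):

```python
import sympy as sp

x, A0 = sp.symbols('x A0')
yp = A0*sp.exp(3*x)                               # trial solution (4.7)
residual = yp.diff(x, 2) - yp - 16*sp.exp(3*x)    # plug into y'' - y = 16 e^{3x}
A0_val = sp.solve(sp.Eq(residual, 0), A0)[0]
print(A0_val)  # 2
```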
We now look at an example of a similar type.

Example 4.2. Find the general solution to

    y'' − 3y' − 4y = 15 e^{4x}.

Solution: It is easy to see that the general solution to the associated homogeneous DE is

    y_c(x) = c_1 e^{−x} + c_2 e^{4x}.

The usual trial solution corresponding to the nonhomogeneous term f(x) = 15 e^{4x} is

    y_p(x) = A_0 e^{4x}.

However, this form is a solution to the homogeneous equation, so we cannot determine the constant A_0. We therefore modify the trial solution to

    y_p(x) = A_0 x e^{4x}.

Substituting it into the differential equation (we omit the details) gives A_0 = 3. Consequently, a particular solution is y_p(x) = 3x e^{4x}, and the general solution is

    y(x) = 3x e^{4x} + c_1 e^{−x} + c_2 e^{4x}.
We present the above technique, i.e., the method of undetermined coefficients, in the following theorem.

Theorem 4.2. Consider the equation

    a y'' + b y' + c y = f(x).   (4.8)

Let

    P_k(x) = a_0 + a_1 x + ⋯ + a_k x^k

be a given polynomial of degree k (i.e., with a_k ≠ 0).

- If f(x) = P_k(x), then the trial solution is of the form

      y_p(x) = x^m (A_0 + A_1 x + ⋯ + A_k x^k).   (4.9)

- If f(x) = P_k(x) e^{αx}, then the trial solution is of the form

      y_p(x) = x^m (A_0 + A_1 x + ⋯ + A_k x^k) e^{αx}.   (4.10)

- If f(x) = P_k(x) e^{αx} sin βx or f(x) = P_k(x) e^{αx} cos βx, then the trial solution is of the form

      y_p(x) = x^m e^{αx} [(A_0 + A_1 x + ⋯ + A_k x^k) cos βx + (B_0 + B_1 x + ⋯ + B_k x^k) sin βx].   (4.11)

Here, m is the smallest nonnegative integer (it can be 0, 1 or 2) that ensures that no term in y_p(x) is a solution of the associated homogeneous equation.
We now apply the above rules to determine particular solutions to nonhomogeneous DE.

Example 4.3. In each case, write an appropriate trial solution. Do not solve for the constants.

    (1) y'' + 5y' = 4x^2.   (2) y'' − y = 3x e^{x}.   (3) y'' − 4y' + 5y = 3e^{2x} sin x.

Solution: (1) The characteristic equation r^2 + 5r = 0 has the roots r = 0, −5, and hence the complementary function is

    y_c(x) = c_1 + c_2 e^{−5x}.

For the nonhomogeneous term f(x) = 4x^2, we propose the trial solution

    y_p(x) = x^m (A_0 + A_1 x + A_2 x^2) = A_0 x^m + A_1 x^{m+1} + A_2 x^{m+2},

with m = 1, i.e.,

    y_p(x) = x(A_0 + A_1 x + A_2 x^2) = A_0 x + A_1 x^2 + A_2 x^3,

since r = 0 is a root of the characteristic equation, and with this choice no term solves the associated homogeneous equation.

(2) The characteristic equation r^2 − 1 = 0 has the roots r = ±1, and hence the general solution to the homogeneous equation is

    y_c(x) = c_1 e^{x} + c_2 e^{−x}.

The usual trial solution is

    y_p(x) = (A + Bx) e^{x} = A e^{x} + B x e^{x}.

However, the first term solves the associated homogeneous equation, so we use the modified form

    y_p(x) = x(A + Bx) e^{x} = A x e^{x} + B x^2 e^{x}.

We then see that no term is a solution to the associated homogeneous DE.

(3) The characteristic equation is r^2 − 4r + 5 = 0, with roots r = 2 ± i. Hence the complementary function is

    y_c(x) = e^{2x} (c_1 cos x + c_2 sin x).

The usual trial solution corresponding to f(x) = 3e^{2x} sin x is

    y_p(x) = e^{2x} (A cos x + B sin x).

However, this form solves the associated homogeneous equation, so we must modify it to

    y_p(x) = x e^{2x} (A cos x + B sin x).

Now no term is a solution to the associated homogeneous equation anymore.
Now, suppose that f(x) is a sum of two terms, f(x) = f_1(x) + f_2(x), and suppose that Y_1 and Y_2 are solutions to the equations

    a y'' + b y' + c y = f_1

and

    a y'' + b y' + c y = f_2,

respectively. Then Y_1 + Y_2 is a solution of the equation

    a y'' + b y' + c y = f_1 + f_2 = f.

This statement is easy to verify. Its practical significance is that if the nonhomogeneous term is a sum of the different forms considered before, then the trial solution is obtained by taking the sum of the corresponding trial solutions.
Example 4.4. Determine an appropriate trial solution of the equation

    y'' + y = 4e^{3x} + 5x^2.

Solution: The characteristic equation is r^2 + 1 = 0 with roots r = ±i. Consequently, the complementary function is

    y_c(x) = c_1 cos x + c_2 sin x.

Therefore, a trial solution corresponding to the nonhomogeneous term f_1 = 4e^{3x} is

    y_{p_1}(x) = A_0 e^{3x},

whereas a trial solution corresponding to the nonhomogeneous term f_2 = 5x^2 is

    y_{p_2}(x) = B_0 + B_1 x + B_2 x^2.

Hence, a trial solution to the given DE is

    y_p(x) = y_{p_1}(x) + y_{p_2}(x) = A_0 e^{3x} + B_0 + B_1 x + B_2 x^2.
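Although the example stops at the trial solution, the undetermined coefficients can be found by substituting and matching coefficients. A SymPy sketch (assuming SymPy is available; the coefficient-matching steps are one possible way to organize the computation, not part of the original notes):

```python
import sympy as sp

x = sp.symbols('x')
A0, B0, B1, B2 = sp.symbols('A0 B0 B1 B2')

yp = A0*sp.exp(3*x) + B0 + B1*x + B2*x**2                   # trial from Example 4.4
residual = sp.expand(yp.diff(x, 2) + yp - (4*sp.exp(3*x) + 5*x**2))

# match the coefficients of e^{3x}, 1, x, x^2 separately
coeffs = sp.solve([residual.coeff(sp.exp(3*x)),
                   residual.subs(sp.exp(3*x), 0).coeff(x, 0),
                   residual.coeff(x, 1),
                   residual.coeff(x, 2)],
                  [A0, B0, B1, B2])
print(coeffs)  # {A0: 2/5, B0: -10, B1: 0, B2: 5}
```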
5. Nonhomogeneous DE: Variation of Parameters

The method of undetermined coefficients has two severe limitations. First, it is only applicable to DE with constant coefficients; second, it can only be applied to DE whose nonhomogeneous terms are of the special forms described in the previous section. For example, we could not use it to find a particular solution to the DE

    y'' + 4y' − 6y = x^2 ln x.

In this section, we develop a more powerful technique, called the variation-of-parameters method, to find a particular solution of a given second-order linear nonhomogeneous DE.
We next introduce the method in a general way. Consider the linear second-order nonhomogeneous DE

    y'' + p y' + q y = f,   (5.1)

where p, q and f are continuous functions on an interval I. Suppose that y_1 and y_2 are two linearly independent solutions to the associated homogeneous DE

    y'' + p y' + q y = 0   (5.2)

on I, so that the general solution of (5.2) is

    y_c(x) = c_1 y_1(x) + c_2 y_2(x).

Variation of parameters consists of replacing the constants c_1 and c_2 by functions u_1(x) and u_2(x) (that is, we allow the parameters c_1, c_2 to vary), and determining a trial solution of the form

    y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x).   (5.3)

We now substitute y_p into the equation to determine the unknown functions u_1 and u_2. Differentiating (5.3) with respect to x yields

    y_p' = u_1' y_1 + u_1 y_1' + u_2' y_2 + u_2 y_2'.

If we differentiate once more, the resulting expression for y_p'' will involve the second-order derivatives u_1'' and u_2'', which would complicate our problem. To eliminate these derivatives, we set

    u_1' y_1 + u_2' y_2 = 0,   (5.4)

and therefore the expression for y_p' reduces to

    y_p' = u_1 y_1' + u_2 y_2',
so that

    y_p'' = u_1' y_1' + u_1 y_1'' + u_2' y_2' + u_2 y_2''.

Substituting into the equation (5.1) and collecting the terms yields

    u_1 (y_1'' + p y_1' + q y_1) + u_2 (y_2'' + p y_2' + q y_2) + (u_1' y_1' + u_2' y_2') = f(x),

where the first two groups vanish. Indeed, since y_1 and y_2 are solutions to the homogeneous DE, the above equation reduces to

    y_1' u_1' + y_2' u_2' = f.   (5.5)
We may therefore conclude that y_p = u_1 y_1 + u_2 y_2 is a particular solution to (5.1), provided that u_1 and u_2 satisfy the system

    u_1' y_1 + u_2' y_2 = 0,
    y_1' u_1' + y_2' u_2' = f,

i.e., in matrix form,

    [ y_1   y_2  ] [ u_1' ]   [ 0 ]
    [ y_1'  y_2' ] [ u_2' ] = [ f ].   (5.6)

The determinant of the coefficient matrix turns out to be the Wronskian W(y_1, y_2)(x) ≠ 0 for all x ∈ I. Using Cramer's rule, we can find

    u_1' = −y_2 f / W(y_1, y_2),   u_2' = y_1 f / W(y_1, y_2),

so that

    u_1(x) = −∫ [y_2 f / W(y_1, y_2)] dx,   u_2(x) = ∫ [y_1 f / W(y_1, y_2)] dx.   (5.7)
We have therefore derived the following theorem.

Theorem 5.1. (Variation-of-Parameters Method) Consider

    y'' + p y' + q y = f,   (5.8)

where p, q and f are continuous on an interval I. Suppose that y_1 and y_2 are two linearly independent solutions to the associated homogeneous DE

    y'' + p y' + q y = 0   (5.9)

on I. Then a particular solution to the nonhomogeneous DE (5.8) is y_p = u_1 y_1 + u_2 y_2, where u_1, u_2 satisfy

    u_1' y_1 + u_2' y_2 = 0,
    y_1' u_1' + y_2' u_2' = f,

i.e.,

    [ y_1   y_2  ] [ u_1' ]   [ 0 ]
    [ y_1'  y_2' ] [ u_2' ] = [ f ].   (5.10)

A particular solution is

    y_p(x) = −y_1 ∫ [y_2 f / W(y_1, y_2)] dx + y_2 ∫ [y_1 f / W(y_1, y_2)] dx.   (5.11)
Example 5.1. Solve y'' + y = sec x.

Solution: Two linearly independent solutions to the associated homogeneous DE are y_1 = cos x and y_2 = sin x. Thus a particular solution to the given DE is

    y_p(x) = u_1 y_1 + u_2 y_2 = u_1 cos x + u_2 sin x,

where u_1 and u_2 satisfy

    cos x · u_1' + sin x · u_2' = 0,
    −sin x · u_1' + cos x · u_2' = sec x.   (5.12)

The solution to this system is

    u_1' = −sec x sin x,   u_2' = sec x cos x.

Consequently,

    u_1(x) = −∫ sec x sin x dx = −∫ (sin x / cos x) dx = ln |cos x|,

and

    u_2(x) = ∫ sec x cos x dx = x,

where we have set the integration constants to zero, since we only require one particular solution. Therefore,

    y_p(x) = cos x ln |cos x| + x sin x,

and the general solution is

    y(x) = cos x ln |cos x| + x sin x + c_1 cos x + c_2 sin x.
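We can verify by direct substitution that this y_p satisfies the equation. A SymPy sketch (assuming SymPy is available; not part of the original notes):

```python
import sympy as sp

x = sp.symbols('x')
# particular solution from Example 5.1 (on an interval where cos x > 0)
yp = sp.cos(x)*sp.log(sp.cos(x)) + x*sp.sin(x)
residual = sp.simplify(yp.diff(x, 2) + yp - 1/sp.cos(x))   # y'' + y - sec x
print(residual)  # 0
```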
Exercise 5.1. Solve y'' + 4y' + 4y = e^{−2x} ln x, x > 0.

Key: y(x) = e^{−2x} [c_1 + c_2 x + (1/4) x^2 (2 ln x − 3)].

Exercise 5.2. Solve the nonhomogeneous DE:

    y'' + y = sec x + 4e^{x},   |x| < π/2.

Key:

    y(x) = c_1 cos x + c_2 sin x + 2e^{x} + cos x ln(cos x) + x sin x.
6. Solutions to Selected Exercises

Exercise 1.7

    r = 1 or 3, and the general solution is

        y(x) = c_1 e^{x} + c_2 e^{3x}.

Exercise 2.1

    Let y_2 = x^2 u be the solution to the given equation. We find that

        x^2 (x^2 u)'' − 3x (x^2 u)' + 4(x^2 u) = 0  ⟹  x u'' + u' = 0.

    Therefore,

        u = c_1 ln x + c_2.

    For simplicity, we take c_1 = 1 and c_2 = 0. Hence, the second linearly independent solution is y_2 = x^2 ln x.
Exercise 2.2

    (a) It is easy to find that r = m, so y_1 = x^m.

    (b) We seek y_2 = x^m u satisfying

        x^2 (x^m u)'' − (2m − 1) x (x^m u)' + m^2 (x^m u) = 0  ⟹  x u'' + u' = 0.

    We find u = ln x, so the second linearly independent solution is y_2 = x^m ln x.

Exercise 2.3

    (a) Easy to verify.

    (b) We seek y_2 = e^{x} u satisfying

        x (e^{x} u)'' − (x + α)(e^{x} u)' + α (e^{x} u) = 0  ⟹  x u'' + (x − α) u' = 0.

    We find u = ∫ x^{α} e^{−x} dx, so the second linearly independent solution is

        y_2 = e^{x} ∫ x^{α} e^{−x} dx.

    (c) It suffices to verify that

        (e^{−x} y_2)' = x^{α} e^{−x}  ⟺  y_2' − y_2 = x^{α}.

    Clearly,

        y_2(x) = 1 + x + (1/2!) x^2 + ⋯ + (1/α!) x^{α}

    satisfies the above equation (up to a constant multiple).
Exercise 3.1

    (0) Letting x = ln t, we find that t = e^{x} and

        t dy/dt = dy/dx,   t^2 d²y/dt² = d²y/dx² − dy/dx.

    Then we convert the original equation to

        d²y/dx² + (α − 1) dy/dx + β y = 0,

    which is a second-order linear DE with constant coefficients.

    (1) By repeating the same process as above, we convert the given equation to

        d²y/dx² − 2 dy/dx + y = 0,

    whose general solution is y(x) = e^{x}(c_1 + c_2 x). Changing the variable back gives

        y(t) = (c_1 + c_2 ln t) t.

    (2) The transformed equation is

        d²y/dx² + 2 dy/dx + 5y = 0.

    The general solution is

        y(x) = e^{−x} (c_1 cos(2x) + c_2 sin(2x)).

    Thus, the general solution to the original equation is

        y(t) = (c_1 cos(2 ln|t|) + c_2 sin(2 ln|t|))/t.
Exercise 3.2

    (a) A direct calculation gives

        ∂²u/∂x² = [β² d²f/dη² + (2β/α) df/dη + (1/α²) f(η)] e^{x/α},
        ∂²u/∂y² = α² e^{x/α} d²f/dη².

    Therefore, the Laplace equation (3.7) reduces to (3.8).

    (b) Solving (3.8) gives

        f(η) = e^{−pη} (c_1 sin qη + c_2 cos qη).

    Hence,

        u(x, y) = e^{x/α} [e^{−pη} (c_1 sin qη + c_2 cos qη)],

    where η = βx − αy.
Exercise 5.1

    The general solution to the associated homogeneous DE is

        y_c(x) = e^{−2x}(c_1 + c_2 x).

    We look for a particular solution of the form

        y_p = e^{−2x} u_1 + x e^{−2x} u_2,

    where u_1 and u_2 satisfy

        u_1' + x u_2' = 0,
        −2u_1' + (1 − 2x) u_2' = ln x.

    We find

        u_1' = −x ln x,   u_2' = ln x,

    and

        u_1 = (x²/4)(1 − 2 ln x),   u_2 = x(ln x − 1).

    Therefore,

        y_p(x) = (x²/4)(2 ln x − 3) e^{−2x},

    and the general solution is

        y(x) = e^{−2x} [c_1 + c_2 x + (x²/4)(2 ln x − 3)].
Exercise 5.2

    The source term is a sum of two functions, which can be treated separately using two different techniques. We first consider

        y'' + y = 4e^{x}.

    We can find a particular solution by the method of undetermined coefficients:

        y_{p_1}(x) = 2e^{x}.

    Then we turn to

        y'' + y = sec x.

    Using the method of variation of parameters, we find (see Example 5.1):

        y_{p_2}(x) = cos x ln |cos x| + x sin x.

    Therefore,

        y_p(x) = y_{p_1}(x) + y_{p_2}(x) = 2e^{x} + cos x ln(cos x) + x sin x,

    and the general solution is

        y(x) = c_1 cos x + c_2 sin x + 2e^{x} + cos x ln(cos x) + x sin x.
CHAPTER 3

Higher Order Linear Differential Equations

The theory and solution techniques developed in the preceding chapter for second-order linear equations can be extended directly to linear equations of higher order. In this chapter, we briefly discuss these extensions and leave most of the proofs as exercises. Please read Pages 219-224 of the textbook, and refer to Chapter Two for the proofs!
1. Basic Theoretical Results

Consider the nth-order linear DE of the form

    L[y] := y^{(n)} + p_1(x) y^{(n−1)} + ⋯ + p_n(x) y = f(x),   (1.1)

where p_1, p_2, …, p_n, f are continuous on some interval I. The associated homogeneous DE of (1.1) is

    L[y] := y^{(n)} + p_1(x) y^{(n−1)} + ⋯ + p_n(x) y = 0.   (1.2)
The following theorem is of crucial importance.

Theorem 1.1. (Existence and Uniqueness) Consider the IVP:

    y^{(n)} + p_1(x) y^{(n−1)} + ⋯ + p_n(x) y = f(x),
    y(x_0) = y_0,  y'(x_0) = y_1,  …,  y^{(n−1)}(x_0) = y_{n−1}.   (1.3)

If p_1, …, p_n, f are continuous in an interval I containing x_0, then (1.3) has a unique solution y(x) in I.
As before, an important property of the homogeneous equation (1.2) is the principle of superposition.

Theorem 1.2. If {y_k}_{k=1}^{n} are solutions to (1.2), then any linear combination

    z(x) = Σ_{k=1}^{n} c_k y_k(x),   x ∈ I,

is also a solution of (1.2) for arbitrary constants {c_k}_{k=1}^{n}.

As with the second-order linear homogeneous DE, the main result to be established is that every solution of (1.2) can be expressed as a linear combination of its n linearly independent solutions {y_k}_{k=1}^{n}. That is,

    V = {y ∈ C^n(I) : L[y] = 0} = span{y_1, …, y_n}.
In the first place, we need to extend the definition of linear dependence/independence from two functions to n functions.

Definition 1.1. The functions f_1, f_2, …, f_n are said to be linearly dependent on I if there exists a set of constants c_1, c_2, …, c_n, not all zero, such that

    c_1 f_1(x) + c_2 f_2(x) + ⋯ + c_n f_n(x) = 0   (1.4)

for all x ∈ I. Otherwise, they are said to be linearly independent; in other words, equation (1.4) holds only when c_1 = c_2 = ⋯ = c_n = 0.
Example 1.1. Determine whether the following functions are linearly dependent:

    f_1(x) = 2x − 3,   f_2(x) = x^2 + 1,   f_3(x) = 2x^2 − 1.

Solution: Set the equation

    c_1 f_1 + c_2 f_2 + c_3 f_3 = 0  ⟹  c_1(2x − 3) + c_2(x^2 + 1) + c_3(2x^2 − 1) = 0.   (1.5)

That is,

    (c_2 + 2c_3) x^2 + 2c_1 x + (−3c_1 + c_2 − c_3) = 0,

which should be valid for all x. Therefore, we have

    c_2 + 2c_3 = 0,   2c_1 = 0,   −3c_1 + c_2 − c_3 = 0.

This system has only the zero solution c_1 = c_2 = c_3 = 0. Hence, these three functions are linearly independent.
As before, we may use the Wronskian to determine linear dependence/independence.

Definition 1.2. Suppose that y_1, y_2, …, y_n are differentiable up to derivatives of order n − 1. Then the Wronskian is defined by

    W(y_1, y_2, …, y_n)(x) = det [ y_1          y_2          ⋯  y_n
                                   y_1'         y_2'         ⋯  y_n'
                                   ⋮            ⋮                ⋮
                                   y_1^{(n−1)}  y_2^{(n−1)}  ⋯  y_n^{(n−1)} ].

Exercise 1.1. Compute the Wronskians W(1, x, x^2, x^3) and W(1, x, …, x^n).
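A small SymPy sketch for the first part of Exercise 1.1 (assuming SymPy is available; the `wronskian` helper below is simply our own transcription of Definition 1.2, not part of the original notes):

```python
import sympy as sp

x = sp.symbols('x')

def wronskian(funcs, x):
    """Wronskian determinant of Definition 1.2: row i holds the i-th derivatives."""
    n = len(funcs)
    M = sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], x, i))
    return sp.simplify(M.det())

W = wronskian([sp.Integer(1), x, x**2, x**3], x)
print(W)  # 12, i.e. 0! * 1! * 2! * 3!
```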
Exercise 1.2. Show that if there exists x_0 ∈ I such that W(y_1, y_2, …, y_n)(x_0) ≠ 0, then y_1, y_2, …, y_n are linearly independent on I. Equivalently, if y_1, y_2, …, y_n are linearly dependent, then the Wronskian W(y_1, y_2, …, y_n)(x) ≡ 0 for all x ∈ I.

Question 1.1. Suppose that y_1, y_2, …, y_n are linearly independent. Can we claim that there always exists a point x_0 ∈ I such that W(y_1, y_2, …, y_n)(x_0) ≠ 0? Explain why. (Recall the conclusion for two functions in Chapter Two.)
We next generalize Abel's formula to higher order DE (see Problem 20 in the textbook).

Example 1.2. Let y_1, y_2 and y_3 be solutions to the third-order equation

    y''' + p_1(x) y'' + p_2(x) y' + p_3(x) y = 0.   (1.6)

Let W(x) = W(y_1, y_2, y_3)(x) be the Wronskian. Show that W satisfies the equation

    W'(x) = −p_1(x) W(x).   (1.7)

Therefore, we have Abel's formula

    W(y_1, y_2, y_3)(x) = c exp(−∫ p_1(x) dx).   (1.8)

It follows that W is either always zero or nowhere zero in I.
Solution: To prove (1.7), we differentiate W(x) and recall that the derivative of a 3-by-3 determinant is the sum of three 3-by-3 determinants obtained by differentiating the first, second and third rows, respectively. The first two of these determinants vanish because each has a repeated row, so

    W'(x) = det [ y_1     y_2     y_3
                  y_1'    y_2'    y_3'
                  y_1'''  y_2'''  y_3''' ].   (1.9)

Since y_1, y_2 and y_3 are solutions of (1.6), we substitute y_i''' = −p_1(x) y_i'' − p_2(x) y_i' − p_3(x) y_i for i = 1, 2, 3. Using the addition property of determinants (the terms involving p_2 and p_3 produce rows proportional to the first two rows and hence vanish), we find that

    W'(x) = det [ y_1         y_2         y_3
                  y_1'        y_2'        y_3'
                  −p_1 y_1''  −p_1 y_2''  −p_1 y_3'' ] = −p_1(x) W(x).   (1.10)

Thus, (1.8) follows.

Using the same argument, we can derive Abel's formula for the nth-order linear homogeneous DE.
Exercise 1.3. Generalize Abel's formula to the nth-order equation

    y^{(n)} + p_1(x) y^{(n−1)} + ⋯ + p_n(x) y = 0

with solutions y_1, y_2, …, y_n. That is, establish Abel's formula

    W(y_1, y_2, …, y_n)(x) = c exp(−∫ p_1(x) dx).
As with the second-order equation, we can prove the following.

Theorem 1.3.
(i) If W(y_1, y_2, …, y_n)(x_0) ≠ 0 for some x_0 ∈ I, then y_1, y_2, …, y_n are linearly independent on I.
(ii) If they are linearly independent and are solutions of

    y^{(n)} + p_1(x) y^{(n−1)} + ⋯ + p_n(x) y = 0

on I, then the Wronskian is nowhere zero in I.

Proof. The proof is similar to that for two functions in Chapter Two. You may also refer to Pages 220-221 of the textbook for the proof.
We now state the main result on the solution structure of the nth-order linear homogeneous equation.

Theorem 1.4. If p_1, p_2, …, p_n ∈ C(I), and the functions y_1, y_2, …, y_n are n linearly independent solutions (i.e., W(y_1, y_2, …, y_n)(x) ≠ 0) of

    y^{(n)} + p_1(x) y^{(n−1)} + ⋯ + p_n(x) y = 0,   (1.11)

then every solution can be expressed as a linear combination of the solutions y_1, y_2, …, y_n, which form a fundamental set of solutions.

For the nonhomogeneous equation

    y^{(n)} + p_1(x) y^{(n−1)} + ⋯ + p_n(x) y = f(x),   (1.12)

the general solution is

    y(x) = y_p + y_c = y_p + c_1 y_1 + c_2 y_2 + ⋯ + c_n y_n,

where y_p is a particular solution of (1.12), and y_c is the complementary function, i.e., the general solution of (1.11).
2. Reduction of Order

The method of reduction of order is a useful approach for finding another linearly independent solution based on a given solution. Although the method can be extended to higher-order DE, it reduces the order of the DE by only one order each time.
Example 2.1. Show that if y_1 is a solution of

    y''' + p_1(x) y'' + p_2(x) y' + p_3(x) y = 0,

then the substitution y = y_1(x) v(x) leads to the following second-order equation for v':

    y_1 v''' + (3y_1' + p_1 y_1) v'' + (3y_1'' + 2p_1 y_1' + p_2 y_1) v' = 0.   (2.1)

Note: v' satisfies the above second-order equation, so we need one solution of this equation to proceed with the method.

Use the method to solve

    (2 − x) y''' + (2x − 3) y'' − x y' + y = 0,   x < 2;   y_1(x) = e^{x}.   (2.2)

Proof: As the targeted solution is y_1 v, we have

    (y_1 v)''' + p_1(x)(y_1 v)'' + p_2(x)(y_1 v)' + p_3(x)(y_1 v) = 0.

Using the fact that y_1 is a solution, we obtain (2.1).

Applying this method to (2.2), we find that if y = e^{x} v is a solution, then v satisfies

    v''' + [(3 − x)/(2 − x)] v'' = 0.

Let u = v''. Solving for u, we find that

    u = c_1 (x − 2) e^{−x}.

Then we integrate twice to find v:

    v = c_1 x e^{−x} + c_2 x + c_3.

Thus, the general solution is

    y(x) = e^{x} v = c_1 x + c_2 x e^{x} + c_3 e^{x}.
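The general solution just found can be checked by substituting it back into (2.2). A SymPy sketch (assuming SymPy is available; not part of the original notes):

```python
import sympy as sp

x, c1, c2, c3 = sp.symbols('x c1 c2 c3')
y = c1*x + c2*x*sp.exp(x) + c3*sp.exp(x)   # general solution of (2.2) found above
residual = sp.simplify((2 - x)*y.diff(x, 3) + (2*x - 3)*y.diff(x, 2)
                       - x*y.diff(x) + y)
print(residual)  # 0
```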
3. Linear Homogeneous DE with Constant Coefficients

In the previous chapter, we saw that second-order homogeneous equations with constant coefficients have solutions of the form y(x) = e^{rx}. This observation enabled us to derive the two linearly independent solutions needed to determine the general solution of the underlying DE. We next apply this idea to the nth-order DE.

Consider the nth-order homogeneous equation

    L[y] = a_0 y^{(n)} + a_1 y^{(n−1)} + ⋯ + a_n y = 0,   (3.1)

where a_0, a_1, …, a_n are real constants and a_0 ≠ 0. We expect that y = e^{rx} is a solution, and therefore

    L[e^{rx}] = e^{rx} (a_0 r^n + a_1 r^{n−1} + ⋯ + a_n) = e^{rx} Z(r),   (3.2)

where

    Z(r) = a_0 r^n + a_1 r^{n−1} + ⋯ + a_n.   (3.3)

Consequently, we conclude that y(x) = e^{rx} is a solution to (3.1) if and only if r is a root of the characteristic equation

    Z(r) = a_0 r^n + a_1 r^{n−1} + ⋯ + a_n = 0,   (3.4)

where Z(r) is referred to as the characteristic polynomial, as before.
where Z(r) is referred to as the characteristic polynomial as before.
The solution structure is characterized by the following theorem.
Theorem 3.1. Consider the DE
L[y] = a
0
y
(n)
+ a
1
y
(n1)
+ + a
n
y = 0. (3.5)
Let r
1
, r
2
, , r
k
be the distinct roots of the characteristic equation, and
Z(r) = a
0
(r r
1
)
m
1
(r r
2
)
m
2
(r r
k
)
m
k
where m
i
1 is the multiplicity of the root r
i
, and

k
i=1
m
i
= n.
(i). If all r
i
are real and distinct, i.e., all m
i
= 1, then we have n distinct lin-
early independent solutions e
r
1
x
, e
r
2
x
, , e
rnx
, which form a fundamental set of
solutions;
(ii). If r
i
with m
i
> 1 is real, then the functions e
r
i
x
, xe
r
i
x
, , x
m
i
1
e
r
i
x
are m
i
LI
solutions corresponding to this repeated root;
3. LINEAR HOMOGENEOUS DE WITH CONSTANT COEFFICIENTS 73
(iii). If r
j
is complex, say, r
j
= a +bi, with multiplicity m
j
1, then its conjugate is
a root as well, and the 2m
j
LI solutions are
e
ax
cos bx, xe
ax
cos bx, , x
m
j
1
e
ax
cos bx,
e
ax
sin bx, xe
ax
sin bx, , x
m
j
1
e
ax
sin bx.
Example 3.1. Find the general solution to

    y''' − y'' + y' − y = 0.

Solution: The characteristic equation is

    r^3 − r^2 + r − 1 = 0  ⟹  (r − 1)(r^2 + 1) = 0  ⟹  r_1 = 1,  r_{2,3} = ±i.

Therefore the general solution is

    y(x) = c_1 e^{x} + c_2 sin x + c_3 cos x.
Example 3.2. Solve the following equation:

    y''' + 2y'' + 3y' + 2y = 0.

Solution: The characteristic equation is

    r^3 + 2r^2 + 3r + 2 = 0  ⟹  (r^3 + 2r^2 + r) + (2r + 2) = r(r + 1)^2 + 2(r + 1)
                             = (r + 1)(r(r + 1) + 2) = (r + 1)(r^2 + r + 2) = 0.

Therefore the roots are

    r_1 = −1,   r_{2,3} = (−1 ± √7 i)/2.

Hence, the general solution is

    y(x) = c_1 e^{−x} + e^{−x/2} [c_2 cos(√7 x/2) + c_3 sin(√7 x/2)].
Example 3.3. Find the general solution to

    y^{(4)} + 2y'' + y = 0.

Solution: The characteristic equation is

    r^4 + 2r^2 + 1 = 0,

whose roots are

    r = i, i, −i, −i.

Therefore, the general solution is

    y = c_1 cos x + c_2 sin x + c_3 x cos x + c_4 x sin x.
Exercise 3.1. Find the general solution to the given DE:

    (i) y''' − 6y'' + 11y' − 6y = 0   (y = c_1 e^{x} + c_2 e^{2x} + c_3 e^{3x})
    (ii) y^{(4)} − 9y'' + 20y = 0   (y = c_1 e^{2x} + c_2 e^{−2x} + c_3 e^{√5 x} + c_4 e^{−√5 x})
    (iii) y''' − 6y'' + 2y' + 36y = 0   (y = c_1 e^{−2x} + e^{4x}(c_2 cos √2 x + c_3 sin √2 x))
    (iv) y^{(4)} + 8y''' + 24y'' + 32y' + 16y = 0   (y = c_1 e^{−2x} + c_2 x e^{−2x} + c_3 x^2 e^{−2x} + c_4 x^3 e^{−2x})
Example 3.4. Find the general solution to a fourth-order linear homogeneous DE for y(x) with real coefficients if one solution is known to be x^3 e^{4x}.

Solution: If x^3 e^{4x} is a solution, then x^2 e^{4x}, x e^{4x}, and e^{4x} are solutions as well. We now have four linearly independent solutions to a fourth-order linear homogeneous DE. Hence, the general solution is

    y(x) = (c_1 + c_2 x + c_3 x^2 + c_4 x^3) e^{4x}.
Exercise 3.2. Find the general solution to a third-order linear homogeneous equation for y(x) with real coefficients if two solutions are known to be e^{2x} and sin 3x.

[Key: y(x) = c_1 e^{2x} + c_2 cos 3x + c_3 sin 3x.]
Example 3.5. Solve y^{(4)} − 4y^{(3)} − 5y^{(2)} + 36y' − 36y = 0, given that one solution is x e^{2x}.

Solution: If x e^{2x} is a solution, then so is e^{2x}, which implies that (r − 2)^2 is a factor of the characteristic polynomial

    r^4 − 4r^3 − 5r^2 + 36r − 36 = 0.

Thus,

    (r^4 − 4r^3 − 5r^2 + 36r − 36)/(r − 2)^2 = r^2 − 9.

The other two roots of the characteristic equation are r = ±3. Finally, the general solution is

    y(x) = c_1 e^{2x} + c_2 x e^{2x} + c_3 e^{3x} + c_4 e^{−3x}.
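The factorization above can be confirmed numerically. A NumPy sketch (assuming NumPy is available; not part of the original notes):

```python
import numpy as np

# characteristic polynomial r^4 - 4r^3 - 5r^2 + 36r - 36 from Example 3.5
coeffs = [1, -4, -5, 36, -36]
roots = np.roots(coeffs)
print(sorted(roots.real))   # approximately [-3, 2, 2, 3]
```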
4. Nonhomogeneous DE: Method of Undetermined Coefficients

We now turn our attention to nonhomogeneous DE and illustrate how the methods for solving second-order DE introduced in Chapter 2 can be extended to the general nth-order DE.

In this section, we consider the method of undetermined coefficients. We therefore restrict our attention to determining a particular solution to

    a_0 y^{(n)} + a_1 y^{(n−1)} + ⋯ + a_{n−1} y' + a_n y = f(x),

where f(x) is limited to one of the following types:

    (1) f(x) = A e^{ax} (exponential function),
              A_0 + A_1 x + ⋯ + A_k x^k (polynomial),
              A cos bx + B sin bx (trigonometric polynomial);
    (2) sums or products of functions given in (1).

In Chapter 2, we used the following usual trial solutions corresponding to each type:

    y_p(x) = A_0 e^{ax},
             B_0 + B_1 x + B_2 x^2 + ⋯ + B_k x^k,
             A_0 cos bx + B_0 sin bx.

However, if the usual trial solution contains a term that solves the associated homogeneous DE, then we have to modify the trial solution by multiplying by x^m, until no term solves the homogeneous DE. The basic rule is summarized below:

    Multiply the usual trial solution by x^m, where m is the smallest positive integer such that no term of the resulting trial solution solves the associated homogeneous DE.
Example 4.1. Determine a trial solution for y^{(5)} − y^{(4)} + 2y^{(3)} − 2y'' + y' − y = 4 cos x.

Solution: The characteristic equation of the homogeneous DE is

    r^5 − r^4 + 2r^3 − 2r^2 + r − 1 = 0
    ⟹  r^4(r − 1) + 2r^2(r − 1) + (r − 1) = (r − 1)(r^4 + 2r^2 + 1) = 0
    ⟹  (r − 1)(r^2 + 1)^2 = 0  ⟹  roots: 1, i, i, −i, −i.

Then the general solution to the homogeneous DE is

    y_c(x) = c_1 e^{x} + c_2 cos x + c_3 sin x + c_4 x cos x + c_5 x sin x.

The usual trial solution corresponding to the nonhomogeneous term f(x) = 4 cos x is

    y_p(x) = A_0 cos x + B_0 sin x.

However, we see that this y_p solves the homogeneous DE. Hence, we need to modify it by multiplying by x^2, obtaining

    y_p(x) = x^2 (A_0 cos x + B_0 sin x).

The constants A_0 and B_0 can be determined by substituting the proposed trial solution into the given DE.
Example 4.2. Determine the form of a trial solution to the DE with the characteristic equation (r^2 + 2r + 5)(r − 1)^3 = 0 and nonhomogeneous term f(x) = 5e^{x} + 7e^{−x} sin 2x.

Solution: The general solution to the associated homogeneous DE is

    y_c(x) = e^{−x}(c_1 cos 2x + c_2 sin 2x) + c_3 e^{x} + c_4 x e^{x} + c_5 x^2 e^{x}.

The usual trial solution corresponding to the nonhomogeneous term f_1(x) = 5e^{x} is y_{p_1}(x) = A_0 e^{x}. However, this coincides with part of y_c(x). According to the modification rule, we need to modify the trial solution by multiplying it by x^3, which gives

    y_{p_1}(x) = A_0 x^3 e^{x}.

The usual trial solution corresponding to the nonhomogeneous term f_2(x) = 7e^{−x} sin 2x is y_{p_2}(x) = e^{−x}(A_1 cos 2x + B_1 sin 2x). Once more, we need to modify it to

    y_{p_2}(x) = x e^{−x}(A_1 cos 2x + B_1 sin 2x).

Consequently, an appropriate trial solution for the given DE is

    y_p(x) = y_{p_1}(x) + y_{p_2}(x) = A_0 x^3 e^{x} + x e^{−x}(A_1 cos 2x + B_1 sin 2x).
Exercise 4.1. Find a particular solution to each of the following DE.

    (i) y''' − 3y'' + 3y' − y = 4e^{x}.
    (ii) y^{(4)} + 2y'' + y = 3 sin x − 5 cos x.
    (iii) y''' − 4y' = x + 3 cos x + e^{2x}.

Key:

    (i) y_p(x) = (2/3) x^3 e^{x}.
    (ii) y_p(x) = −(3/8) x^2 sin x + (5/8) x^2 cos x.
    (iii) y_p(x) = −(1/8) x^2 − (3/5) sin x + (1/8) x e^{2x}.
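Key (i) can be verified by direct substitution. A SymPy sketch (assuming SymPy is available; not part of the original notes):

```python
import sympy as sp

x = sp.symbols('x')
# claimed particular solution for y''' - 3y'' + 3y' - y = 4 e^x
yp = sp.Rational(2, 3)*x**3*sp.exp(x)
residual = sp.simplify(yp.diff(x, 3) - 3*yp.diff(x, 2) + 3*yp.diff(x) - yp
                       - 4*sp.exp(x))
print(residual)  # 0
```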
5. Nonhomogeneous DE: Method of Variation-of-Parameters

We now consider the generalization of the variation-of-parameters method to linear nonhomogeneous DE of arbitrary order n:

    L[y] = y^{(n)} + p_1(x) y^{(n−1)} + p_2(x) y^{(n−2)} + ⋯ + p_n(x) y = f(x),   (5.1)

where we assume that the functions p_1, p_2, …, p_n and f are continuous on I. As before, to use this method it is necessary to solve the corresponding homogeneous differential equation. In general, this may be difficult unless the coefficients are constants. However, the method of variation-of-parameters can be applied to nonhomogeneous DE with f(x) in a general form, as long as we are able to solve the associated homogeneous DE.

Suppose that we know a fundamental set of solutions y_1, y_2, …, y_n of the homogeneous equation

    L[y] = 0.   (5.2)

Then the general solution of (5.2) is

    y_c(x) = c_1 y_1(x) + c_2 y_2(x) + ⋯ + c_n y_n(x).   (5.3)

We now vary the constants and look for a particular solution to (5.1) of the form

    y_p(x) = u_1(x) y_1(x) + u_2(x) y_2(x) + ⋯ + u_n(x) y_n(x).   (5.4)

To determine the unknown functions u_1, u_2, …, u_n, we substitute y_p(x) into equation (5.1), but this gives only one equation. Therefore, we enforce n − 1 additional equations on u_1, u_2, …, u_n so that derivatives of the u_i of order ≥ 2 will not appear.
More precisely, the unknown functions u
1
, u
2
, , u
n
satisfy
_

_
y
1
u

1
+ y
2
u

2
+ + y
n
u

n
= 0,
y

1
u

1
+ y
2
u

2
+ + y
n
u

n
= 0,
.
.
.
.
.
.
.
.
.
.
.
.
y
(n2)
1
u

1
+ y
(n2)
2
u

2
+ + y
(n2)
n
u

n
= 0,
y
(n1)
1
u

1
+ y
(n1)
2
u

2
+ + y
(n1)
n
u

n
= f(x).
(5.5)
5. NONHOMOGENEOUS DE: METHOD OF VARIATION-OF-PARAMETERS 79
The matrix form is of the form
_
_
_
_
_
y
1
y
2
. . . y
n
y

1
y

2
. . . y

n
.
.
.
.
.
.
.
.
.
.
.
.
y
(n1)
1
y
(n1)
2
. . . y
(n1)
n
_
_
_
_
_
_
_
_
_
u

1
u

2
.
.
.
u

n
_
_
_
_
=
_
_
_
_
0
0
.
.
.
f(x)
_
_
_
_
. (5.6)
We see that the determinant of the coecient matrix is precisely the Wronskiam
W(y
1
, y
2
, , y
n
)(x) = 0 for all x, since y
1
, y
2
, , y
n
are LI solutions of (5.2). Using
the Cramers rule, we can write the solution of (5.6) in the form
u

m
(x) =
f(x)W
m
(x)
W(x)
, m = 1, , n, (5.7)
where W(x) = W(y
1
, y
2
, , y
n
)(x), and W
m
is the determinant obtained from W by
replacing the mth column by the column (0, 0, , 0, 1).
Therefore, by (5.7), we have
u
m
(x) =
_
f(x)W
m
(x)
W(x)
dx,
and the particular solution is
y
p
(x) =
n

m=1
y
m
(x)
_
f(x)W
m
(x)
W(x)
dx. (5.8)
Finally, the general solution of (5.1) becomes
y(x) =
n

m=1
_
_
f(x)W
m
(x)
W(x)
dx + c
m
_
y
m
(x) (5.9)
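Formula (5.8) translates directly into a small routine: form the Wronskian matrix, replace the $m$th column to get $W_m$, integrate, and sum. A hedged SymPy sketch (the helper name `particular_solution` and the test equation $y'' + y = \sec x$ are illustrative choices, not taken from this note):

```python
import sympy as sp

x = sp.symbols('x')

def particular_solution(ys, f):
    """Variation of parameters, formula (5.8): y_p = sum_m y_m * int(f*W_m/W)."""
    n = len(ys)
    # Row k holds the k-th derivatives of the fundamental solutions.
    W = sp.Matrix([[sp.diff(y, x, k) for y in ys] for k in range(n)])
    Wdet = W.det()
    yp = 0
    for m in range(n):
        Wm = W.copy()
        # Replace the m-th column by (0, ..., 0, 1)^T, as in (5.7).
        Wm[:, m] = sp.Matrix([0]*(n - 1) + [1])
        yp += ys[m] * sp.integrate(f * Wm.det() / Wdet, x)
    return sp.simplify(yp)

# Second-order test case y'' + y = sec x, fundamental set {cos x, sin x}.
yp = particular_solution([sp.cos(x), sp.sin(x)], sp.sec(x))
# Check that y_p really solves the equation.
assert sp.simplify(sp.diff(yp, x, 2) + yp - sp.sec(x)) == 0
```

The same routine applies to any order $n$ once a fundamental solution set is known, which is exactly the premise of this section.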
Example 5.1. Find the general solution to
\[ y''' - 3y'' + 3y' - y = 36 e^{x}\ln x, \quad x > 0. \]
Solution: The characteristic equation of the associated homogeneous equation is
\[ r^3 - 3r^2 + 3r - 1 = 0 \iff (r-1)^3 = 0. \]
The LI solutions are
\[ y_1(x) = e^{x}, \quad y_2(x) = x e^{x}, \quad y_3(x) = x^2 e^{x}. \]
According to the variation-of-parameters method, the particular solution is of the form
\[ y_p(x) = e^{x} u_1(x) + x e^{x} u_2(x) + x^2 e^{x} u_3(x), \]
where $u_1, u_2, u_3$ satisfy (after dividing by $e^{x}$)
\[
\begin{cases}
u_1' + x u_2' + x^2 u_3' = 0,\\
u_1' + (x+1)u_2' + (x^2+2x)u_3' = 0,\\
u_1' + (x+2)u_2' + (x^2+4x+2)u_3' = 36\ln x.
\end{cases}
\]
To solve this system, we reduce its augmented matrix to row-echelon form:
\[
\begin{pmatrix} 1 & x & x^2 & 0\\ 1 & x+1 & x^2+2x & 0\\ 1 & x+2 & x^2+4x+2 & 36\ln x \end{pmatrix}
\to
\begin{pmatrix} 1 & x & x^2 & 0\\ 0 & 1 & 2x & 0\\ 0 & 2 & 4x+2 & 36\ln x \end{pmatrix}
\to
\begin{pmatrix} 1 & x & x^2 & 0\\ 0 & 1 & 2x & 0\\ 0 & 0 & 2 & 36\ln x \end{pmatrix}
\to
\begin{pmatrix} 1 & x & x^2 & 0\\ 0 & 1 & 2x & 0\\ 0 & 0 & 1 & 18\ln x \end{pmatrix}. \tag{5.10}
\]
Consequently, we find that
\[ u_1' = 18x^2\ln x, \quad u_2' = -36x\ln x, \quad u_3' = 18\ln x. \]
Using integration by parts yields
\[ u_1(x) = 18\int x^2\ln x\,dx = 2x^3(3\ln x - 1), \]
\[ u_2(x) = -36\int x\ln x\,dx = 9x^2(1 - 2\ln x), \]
\[ u_3(x) = 18\int \ln x\,dx = 18x(\ln x - 1), \]
where we have set the integration constants to zero. So
\[ y_p(x) = e^{x} u_1 + x e^{x} u_2 + x^2 e^{x} u_3 = x^3 e^{x}(6\ln x - 11). \]
The general solution is
\[ y(x) = x^3 e^{x}(6\ln x - 11) + (c_1 + c_2 x + c_3 x^2)e^{x}. \]
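The computed particular solution can be verified by substituting it back into the DE. A short SymPy check (SymPy being an assumed tool, not part of the note):

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # x > 0, as required by ln x
yp = x**3 * sp.exp(x) * (6*sp.log(x) - 11)

# Residual of y''' - 3y'' + 3y' - y against the forcing term 36 e^x ln x.
residual = sp.diff(yp, x, 3) - 3*sp.diff(yp, x, 2) + 3*sp.diff(yp, x) - yp
assert sp.simplify(residual - 36*sp.exp(x)*sp.log(x)) == 0
```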
Exercise 5.1. Find the general solution to each of the following DE.
(i) $y''' + y' = \sec x$.
\[ \text{Key: } y(x) = c_1 + c_2\cos x + c_3\sin x + \ln|\sec x + \tan x| - x\cos x + (\sin x)\ln|\cos x|. \]
(ii) $y''' - 3y'' + 2y' = \dfrac{e^{x}}{1+e^{-x}}$.
\[ \text{Key: } y(x) = \frac{x e^{2x}}{2} - \frac{1}{2}(1 + e^{x})^2\ln(1 + e^{x}) + c_1 + c_2 e^{x} + c_3 e^{2x}. \]
(iii) Given that $x$, $x^2$, and $\dfrac{1}{x}$ are solutions of the homogeneous equation corresponding to
\[ x^3 y''' + x^2 y'' - 2x y' + 2y = 2x^4, \quad x > 0, \]
determine a particular solution.
\[ \text{Key: } y_p(x) = \frac{x^4}{15}. \]
6. Solutions to Selected Exercises
Exercise 4.1
(i) We find that the general solution to the associated homogeneous equation is
\[ y_c(x) = (c_1 + c_2 x + c_3 x^2)e^{x}. \]
For $f(x) = 4e^{x}$, the trial solution should be
\[ y_p(x) = A x^3 e^{x}. \]
We plug it into the original equation and find that $A = 2/3$.
(ii) We find that the general solution to the associated homogeneous equation is
\[ y_c(x) = (c_1 + c_2 x)\cos x + (c_3 + c_4 x)\sin x. \]
For $f(x) = 3\sin x - 5\cos x$, the trial solution should be
\[ y_p(x) = x^2(A\cos x + B\sin x). \]
We plug it into the original equation to find $A = 5/8$ and $B = -3/8$.
(iii) We find that the general solution to the associated homogeneous equation is
\[ y_c(x) = c_1 + c_2 e^{2x} + c_3 e^{-2x}. \]
For $f(x) = x + 3\cos x + e^{-2x}$, the trial solution should be
\[ y_p(x) = x(A + Bx) + C\cos x + D\sin x + E x e^{-2x}. \]
We plug it into the original equation to find
\[ A = 0, \quad B = -\frac{1}{8}, \quad C = 0, \quad D = -\frac{3}{5}, \quad E = \frac{1}{8}. \]
Exercise 5.1
(i) We find that the general solution to the associated homogeneous equation is
\[ y_c(x) = c_1 + c_2\cos x + c_3\sin x. \]
We look for a particular solution of the form
\[ y_p(x) = u_1(x) + u_2(x)\cos x + u_3(x)\sin x. \]
Then we have the system
\[
\begin{cases}
u_1' + \cos x\, u_2' + \sin x\, u_3' = 0,\\
-\sin x\, u_2' + \cos x\, u_3' = 0,\\
-\cos x\, u_2' - \sin x\, u_3' = \sec x.
\end{cases}
\]
This gives
\[ u_1' = \sec x, \quad u_2' = -1, \quad u_3' = -\tan x. \]
Therefore,
\[ u_1 = \ln|\sec x + \tan x|, \quad u_2 = -x, \quad u_3 = \ln|\cos x|. \]
The GS is
\[ y(x) = c_1 + c_2\cos x + c_3\sin x + \ln|\sec x + \tan x| - x\cos x + (\sin x)\ln|\cos x|. \]
(ii) We find that the general solution to the associated homogeneous equation is
\[ y_c(x) = c_1 + c_2 e^{x} + c_3 e^{2x}. \]
We look for a particular solution of the form
\[ y_p(x) = u_1(x) + u_2(x)e^{x} + u_3(x)e^{2x}. \]
Then we have the system
\[
\begin{cases}
u_1' + e^{x} u_2' + e^{2x} u_3' = 0,\\
e^{x} u_2' + 2e^{2x} u_3' = 0,\\
e^{x} u_2' + 4e^{2x} u_3' = \dfrac{e^{x}}{1+e^{-x}}.
\end{cases}
\]
This gives
\[ u_1' = \frac{1}{2}\,\frac{e^{x}}{1+e^{-x}}, \quad u_2' = -\frac{1}{1+e^{-x}}, \quad u_3' = \frac{1}{2}\,\frac{e^{-x}}{1+e^{-x}}. \]
Therefore,
\[ u_1 = \frac{e^{x}}{2} - \frac{1}{2}\ln(1+e^{x}), \quad u_2 = -\ln(1+e^{x}), \quad u_3 = \frac{x}{2} - \frac{1}{2}\ln(1+e^{x}). \]
Correspondingly,
\[ y_p(x) = \frac{x e^{2x}}{2} + \frac{e^{x}}{2} - \frac{1}{2}(1+e^{x})^2\ln(1+e^{x}). \]
The GS is
\[ y(x) = \frac{x e^{2x}}{2} - \frac{1}{2}(1+e^{x})^2\ln(1+e^{x}) + c_1 + c_2 e^{x} + c_3 e^{2x}. \]
(iii) We write the equation in the standard form
\[ y''' + \frac{1}{x}y'' - \frac{2}{x^2}y' + \frac{2}{x^3}y = 2x. \]
We look for a particular solution of the form
\[ y_p(x) = x u_1 + x^2 u_2 + \frac{1}{x}u_3. \]
Then we have the system
\[
\begin{cases}
x u_1' + x^2 u_2' + \dfrac{1}{x} u_3' = 0,\\[2pt]
u_1' + 2x u_2' - \dfrac{1}{x^2} u_3' = 0,\\[2pt]
2u_2' + \dfrac{2}{x^3} u_3' = 2x.
\end{cases}
\]
This gives
\[ u_1' = -x^2, \quad u_2' = \frac{2}{3}x, \quad u_3' = \frac{x^4}{3}. \]
Therefore,
\[ u_1 = -\frac{x^3}{3}, \quad u_2 = \frac{x^2}{3}, \quad u_3 = \frac{x^5}{15}. \]
Correspondingly,
\[ y_p(x) = \frac{x^4}{15}. \]
Note: This equation is a third-order Euler equation, so we may use the transform $t = \ln x$ to convert it to a third-order equation with constant coefficients. Then the method of undetermined coefficients may be applied to find the particular solution.
CHAPTER 4
The Laplace Transform
In this chapter, we shall introduce the Laplace transform for the solution of ODEs. As with the Fourier transform, this technique has much broader applications than this, for example, in the solution of linear systems of DE, partial DE, and also integral equations.
The use of the Laplace transform allows us to consider linear DE with discontinuous source terms, i.e., $L[y] = f$, where $f$ is not continuous. Indeed, such DE arise frequently in practical engineering modeling, where mechanical and electrical systems are often acted on by discontinuous or impulsive forcing terms.
1. Laplace Transform and Inverse Laplace Transform

1.1. Definition of Laplace transform. We first define the Laplace transform.

Definition 1.1. Let $f$ be a function on the interval $[0, \infty)$. The Laplace transform of $f$ is defined by
\[ F(s) = \mathcal{L}[f](s) = \int_0^{\infty} e^{-st} f(t)\,dt, \quad s \in (a, \infty), \tag{1.1} \]
provided that the improper integral converges, where $a$ is a constant related to the region of convergence of $f$.

Recall that the improper integral is defined by
\[ \int_0^{\infty} e^{-st} f(t)\,dt = \lim_{M\to\infty}\int_0^{M} e^{-st} f(t)\,dt. \tag{1.2} \]
This improper integral converges if and only if the limit on the right-hand side exists and is finite. Hence, we claim that NOT all functions have a Laplace transform, or in other words, are Laplace transformable. We shall introduce some classes of Laplace transformable functions shortly.

It is also worthwhile to note the difference from the Fourier transform,
\[ \mathcal{F}[f](\omega) = \int_{-\infty}^{\infty} e^{-2\pi i\omega t} f(t)\,dt, \]
which enjoys many applications in science and engineering.
To gain familiarity with the Laplace transform, we first look at some examples.

Example 1.1. Find the Laplace transform of the following functions.
(a) $f(t) = 1$. (b) $f(t) = t$. (c) $f(t) = e^{bt}$. (d) $f(t) = \cos(bt)$.

Solution: (a) By the definition, we have
\[ \mathcal{L}[1] = \int_0^{\infty} e^{-st}\,dt = \lim_{M\to\infty}\Big[-\frac{1}{s}e^{-st}\Big]_0^{M} = \lim_{M\to\infty}\Big(\frac{1}{s} - \frac{1}{s}e^{-sM}\Big) = \frac{1}{s}, \quad s > 0. \]
(b) In this case, we use integration by parts to obtain
\[ \mathcal{L}[t] = \int_0^{\infty} e^{-st}\,t\,dt = \lim_{M\to\infty}\Big[-\frac{te^{-st}}{s}\Big]_0^{M} + \int_0^{\infty}\frac{1}{s}e^{-st}\,dt = \frac{1}{s^2}, \quad s > 0. \]
(c) In this case, we have
\[ \mathcal{L}[e^{bt}] = \int_0^{\infty} e^{(b-s)t}\,dt = \lim_{M\to\infty}\Big[\frac{e^{(b-s)t}}{b-s}\Big]_0^{M} = \frac{1}{s-b}, \quad s > b. \]
(d) Using the definition of the Laplace transform leads to
\[ \mathcal{L}[\cos(bt)] = \int_0^{\infty} e^{-st}\cos bt\,dt = \lim_{M\to\infty}\Big[\frac{e^{-st}}{s^2+b^2}\big(b\sin bt - s\cos bt\big)\Big]_0^{M} = \frac{s}{s^2+b^2}, \quad s > 0, \]
where we used the integral formula obtained by integration by parts,
\[ \int e^{at}\cos bt\,dt = \frac{e^{at}}{a^2+b^2}\big(b\sin bt + a\cos bt\big). \]
The following formulas are important.

Theorem 1.2. We have
\[ \mathcal{L}[t^n] = \frac{n!}{s^{n+1}}, \quad s > 0, \]
\[ \mathcal{L}[e^{at}] = \frac{1}{s-a}, \quad s > a, \]
\[ \mathcal{L}[\cos bt] = \frac{s}{s^2+b^2}, \quad s > 0, \]
\[ \mathcal{L}[\sin bt] = \frac{b}{s^2+b^2}, \quad s > 0, \]
\[ \mathcal{L}[t^{-1/2}] = \sqrt{\frac{\pi}{s}}, \quad s > 0. \]

Remark 1.1. The first formula can be derived by induction. The last formula follows from the change of variables $t = \frac{x^2}{s}$, $s > 0$, which yields
\[ \mathcal{L}[t^{-1/2}] = 2s^{-1/2}\int_0^{\infty} e^{-x^2}\,dx = \sqrt{\frac{\pi}{s}}. \]
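These table entries can be reproduced symbolically. A short sketch using SymPy's `laplace_transform` (SymPy is an assumption; concrete values $n = 3$, $a = 2$, $b = 3$ stand in for the symbolic parameters):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L[t^3] = 3!/s^4
assert sp.simplify(sp.laplace_transform(t**3, t, s, noconds=True) - 6/s**4) == 0
# L[e^{2t}] = 1/(s-2), valid for s > 2
assert sp.simplify(sp.laplace_transform(sp.exp(2*t), t, s, noconds=True) - 1/(s - 2)) == 0
# L[cos 3t] and L[sin 3t]
assert sp.simplify(sp.laplace_transform(sp.cos(3*t), t, s, noconds=True) - s/(s**2 + 9)) == 0
assert sp.simplify(sp.laplace_transform(sp.sin(3*t), t, s, noconds=True) - 3/(s**2 + 9)) == 0
# L[t^{-1/2}] = sqrt(pi/s)
assert sp.simplify(sp.laplace_transform(1/sp.sqrt(t), t, s, noconds=True) - sp.sqrt(sp.pi/s)) == 0
```

The `noconds=True` flag drops the convergence conditions (the $s > a$ statements above) from the returned value.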
1.2. Linearity of Laplace transform. Suppose that the Laplace transforms of $f$ and $g$ exist for some $s > \alpha$. Then we verify from the definition that
\[ \mathcal{L}[c_1 f(t) + c_2 g(t)] = c_1\mathcal{L}[f] + c_2\mathcal{L}[g], \quad s > \alpha, \tag{1.3} \]
where $c_1$ and $c_2$ are arbitrary constants.

Exercise 1.1. Determine the Laplace transform of
\[ f(t) = 4^{3t} + 2\sin 5t - 7t^3. \]
Key:
\[ \mathcal{L}[f] = \frac{1}{s - 3\ln 4} + \frac{10}{s^2 + 25} - \frac{42}{s^4}, \quad s > 3\ln 4. \]
1.3. Sufficient conditions for existence of Laplace transform. As we remarked at the beginning of this chapter, we need to be able to handle certain types of discontinuous functions.

Definition 1.2. In short, a piecewise continuous function is continuous everywhere except for a finite number of jump discontinuities. More precisely, a function $f$ defined on $[a, b]$ is piecewise continuous on $[a, b]$ if $[a, b]$ can be partitioned into a finite number of subintervals in such a manner that
(i) $f$ is continuous on each open subinterval, and
(ii) $f$ approaches a finite limit as the endpoints of each subinterval are approached from within.
If $f$ is piecewise continuous on every interval of the form $[0, b]$, where $b$ is a constant, then we say that $f$ is piecewise continuous on $[0, \infty)$.

As one example, we see that
\[ f(t) = \begin{cases} t^2 + 1, & 0 \le t \le 1,\\ 2 - t, & 1 < t \le 2,\\ 1, & 2 < t \le 3 \end{cases} \]
is piecewise continuous on $[0, 3]$, whereas
\[ f(t) = \begin{cases} \dfrac{1}{1-t}, & 0 \le t < 1,\\ t, & 1 \le t \le 3, \end{cases} \]
is not piecewise continuous on $[0, 3]$.

Exercise 1.2. Find the Laplace transform of the piecewise continuous function
\[ f(t) = \begin{cases} t, & 0 \le t < 1,\\ -1, & t \ge 1. \end{cases} \]
Key:
\[ \mathcal{L}[f] = \frac{1}{s^2}\big(1 - e^{-s}(2s + 1)\big), \quad s > 0. \]
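The key can be checked by splitting the defining integral at $t = 1$ and evaluating each piece. A SymPy sketch (again an assumed tool):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# f(t) = t on [0,1), -1 on [1,oo): integrate e^{-st} f(t) piece by piece.
F = sp.integrate(t*sp.exp(-s*t), (t, 0, 1)) + sp.integrate(-sp.exp(-s*t), (t, 1, sp.oo))

# Compare with the closed form (1 - e^{-s}(2s+1))/s^2.
assert sp.simplify(F - (1 - sp.exp(-s)*(2*s + 1))/s**2) == 0
```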
We now come to the fundamental question:
What types of functions are Laplace transformable?
Intuitively, we have to require that $f(t)$ grows no faster than exponentially.

Definition 1.3. A function $f$ is said to be of exponential order if there exist constants $M > 0$ and $\alpha$ so that
\[ |f(t)| \le M e^{\alpha t}, \quad t > T, \tag{1.4} \]
for some $T > 0$.

Example 1.2. The function $f(t) = 10e^{7t}\cos 5t$ is of exponential order, since $|f(t)| \le 10e^{7t}$.

It is also easy to verify that rational functions defined (and finite) on $[0, \infty)$ are of exponential order, whereas $e^{t^2}$ is not of exponential order. One can also verify that it does not have a Laplace transform.
The main theorem on the existence of the Laplace transform comes as follows.

Theorem 1.1. Suppose that
1. $f$ is piecewise continuous on $[0, \infty)$;
2. $f$ is of exponential order, i.e., $|f(t)| \le Me^{\alpha t}$ for $t > T$.
Then the Laplace transform $\mathcal{L}[f](s)$ is defined for all $s > \alpha$.

Proof. This theorem can be proved by using the comparison test for improper integrals: suppose $0 \le G(t) \le H(t)$ for all $0 < t < \infty$; if $\int_0^{\infty} H(t)\,dt$ converges, then so does $\int_0^{\infty} G(t)\,dt$.
We first note that we can take $T = 0$ in (1.4), since by piecewise continuity, $|f(t)|$ is bounded on $[0, T]$. Increasing $M$ if necessary, we can therefore assume that $|f(t)| \le M$ for $t \in [0, T]$. Because $e^{\alpha t} \ge 1$ for $t \ge 0$ (when $\alpha \ge 0$), it then follows that $|f(t)| \le Me^{\alpha t}$ for all $t \ge 0$.
We see that it suffices to show that $\lim_{b\to\infty}\int_0^{b} e^{-st}|f(t)|\,dt$ exists and is bounded. The fact that $|f(t)| \le Me^{\alpha t}$ for $t \ge 0$ implies that
\[ \int_0^{b} e^{-st}|f(t)|\,dt \le M\int_0^{b} e^{-(s-\alpha)t}\,dt \le M\int_0^{\infty} e^{-(s-\alpha)t}\,dt = \frac{M}{s-\alpha}, \]
if $s > \alpha$. $\square$

In fact, we have seen that
\[ \big|\mathcal{L}[f](s)\big| \le \int_0^{\infty} e^{-st}|f(t)|\,dt \le \frac{M}{s-\alpha}. \]
Therefore, we have the following result.

Corollary 1.2. Under the conditions of Theorem 1.1, we have that
\[ \lim_{s\to\infty}\mathcal{L}[f](s) = 0. \]

Remark 1.2. The above conditions are sufficient but not necessary. For example, $f(t) = t^{-1/2}$ is not piecewise continuous on $[0, \infty)$ (it blows up at $t = 0$), but it has a Laplace transform $\sqrt{\pi/s}$.
1.4. Inverse Laplace transform.

Definition 1.4. Suppose that $F(s)$ is the Laplace transform of $f(t)$, i.e., $F(s) = \mathcal{L}[f](s)$. Then we say $f(t)$ is the inverse Laplace transform of $F(s)$, i.e.,
\[ F(s) = \mathcal{L}[f](s) \iff f(t) = \mathcal{L}^{-1}[F]. \]

We point out that $f(t)$ and $F(s)$ are not one-to-one. We can find infinitely many piecewise continuous functions $f$ such that $\mathcal{L}[f] = F(s)$. Thus we lose the uniqueness of the inverse Laplace transform. However, it can be shown that if two functions have the same Laplace transform, then they can only differ in their values at points of discontinuity. This does not affect the solution to our problem. Therefore, we claim uniqueness up to the set of discontinuity points. We cite the following result, and refer to Chapter 6 of Churchill's Operational Mathematics, 3rd ed. (New York: McGraw-Hill, 1972) for the proof.

Theorem 1.1 (Uniqueness of Inverse Laplace Transforms). Suppose that the functions $f(t)$ and $g(t)$ satisfy the conditions in Theorem 1.1, so that their Laplace transforms $F(s)$ and $G(s)$ both exist. If $F(s) = G(s)$ for $s > \alpha$, then $f(t) = g(t)$ wherever on $[0, +\infty)$ both $f$ and $g$ are continuous.

Example 1.3. The inverse Laplace transforms of $\dfrac{1}{s}$ and $\dfrac{1}{1+s^2}$ are, respectively, $1$ and $\sin t$.

Exercise 1.3. Find the inverse Laplace transform of $F(s) = \dfrac{3s+2}{(s-1)(s-2)}$.
Key:
\[ \mathcal{L}^{-1}[F] = 8e^{2t} - 5e^{t}. \]

Please refer to Table 6.2.1 on Page 317 of the textbook for the formulas!
2. Transformation of Initial Value Problems

We now discuss the application of Laplace transforms to solve a linear differential equation with constant coefficients,
\[ a y''(t) + b y'(t) + c y(t) = f(t), \]
with initial data $y(0) = y_0$ and $y'(0) = y_1$. By the linearity of the Laplace transform,
\[ a\mathcal{L}[y''] + b\mathcal{L}[y'] + c\mathcal{L}[y] = \mathcal{L}[f]. \]
This involves the transforms of the derivatives $y'$ and $y''$ of $y$. The key to the method is expressing the transform of the derivative of a function in terms of the transform of the function itself.

Theorem 2.1. Suppose that $f$ and $f'$ are piecewise continuous and of exponential order, i.e., Laplace transformable. Then we have
\[ \mathcal{L}[f'](s) = s\mathcal{L}[f](s) - f(0) = sF(s) - f(0), \tag{2.1} \]
where $F(s) = \mathcal{L}[f](s)$.
More generally, if $f, f', \cdots, f^{(n-1)}$ are continuous, $f^{(n)}$ is piecewise continuous, and they are all of exponential order, then
\[ \mathcal{L}[f^{(n)}](s) = s^n F(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - s f^{(n-2)}(0) - f^{(n-1)}(0). \tag{2.2} \]

Note that the formula (2.2) can be derived by recursively applying (2.1), so it suffices to prove (2.1). It is important to notice that the Laplace transform turns derivatives into an algebraic expression in the transform of the function and the initial values.
Example 2.1. Use the Laplace transform to solve the IVP:
\[ y' + 3y = 13\sin 2t, \quad y(0) = 6. \]
Solution: Denote $Y(s) = \mathcal{L}[y]$. We now take the Laplace transform on both sides:
\[ \mathcal{L}[y'] + 3\mathcal{L}[y] = 13\mathcal{L}[\sin 2t] \implies sY(s) - 6 + 3Y(s) = \frac{26}{s^2+4}. \]
Solving for $Y(s)$ gives
\[ Y(s) = \frac{6s^2 + 50}{(s+3)(s^2+4)} = \frac{8}{s+3} + \frac{-2s+6}{s^2+4}. \]
We now find the inverse Laplace transform of $Y(s)$:
\[ y(t) = \mathcal{L}^{-1}[Y] = 8\mathcal{L}^{-1}\Big[\frac{1}{s+3}\Big] - 2\mathcal{L}^{-1}\Big[\frac{s}{s^2+4}\Big] + 3\mathcal{L}^{-1}\Big[\frac{2}{s^2+4}\Big]. \]
It follows that
\[ y(t) = 8e^{-3t} - 2\cos 2t + 3\sin 2t. \]
This ends the solution.

We summarize the above solution process as follows.
1. Denote the Laplace transform by $Y(s) = \mathcal{L}[y]$. Apply the Laplace transform to both sides of the equation, turning the DE into an algebraic equation in $Y(s)$;
2. Solve the transformed equation for $Y(s)$;
3. Apply the inverse Laplace transform to obtain the solution $y(t)$ of the IVP.
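The result of Example 2.1 can be confirmed by substituting it back into the IVP. A SymPy sketch (an assumed tool, not part of the note):

```python
import sympy as sp

t = sp.symbols('t')
y = 8*sp.exp(-3*t) - 2*sp.cos(2*t) + 3*sp.sin(2*t)

# y solves y' + 3y = 13 sin 2t ...
assert sp.simplify(sp.diff(y, t) + 3*y - 13*sp.sin(2*t)) == 0
# ... and matches the initial condition y(0) = 6.
assert y.subs(t, 0) == 6
```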
Exercise 2.1. Solve the IVP: $y'' - 3y' + 2y = e^{-4t}$, $y(0) = 1$, $y'(0) = 5$.
Key:
\[ y(t) = -\frac{16}{5}e^{t} + \frac{25}{6}e^{2t} + \frac{1}{30}e^{-4t}. \]
We notice that the difficulty of the Laplace technique lies in finding the Laplace and inverse Laplace transforms. We next introduce some properties of the Laplace transform.

Theorem 2.2 (First translation theorem). If $F(s) = \mathcal{L}[f]$ and $a$ is any real number, then
\[ \mathcal{L}[e^{at}f] = F(s-a). \tag{2.3} \]
Conversely, if $\mathcal{L}^{-1}[F(s)] = f(t)$, then
\[ \mathcal{L}^{-1}[F(s-a)] = e^{at}f(t). \tag{2.4} \]

This theorem can be proved by using the definitions of the Laplace and inverse Laplace transforms. Here, we skip the details.

Exercise 2.2. Find the Laplace transform of each $f(t)$.
$f(t) = e^{5t}\cos 4t$.
$f(t) = e^{at}\sin bt$.
$f(t) = e^{at}t^n$, where $n$ is a positive integer.
Key:
\[ \mathcal{L}[f] = \frac{s-5}{(s-5)^2 + 16}. \]
\[ \mathcal{L}[f] = \frac{b}{(s-a)^2 + b^2}. \]
\[ \mathcal{L}[f] = \frac{n!}{(s-a)^{n+1}}. \]
Exercise 2.3. Find the inverse Laplace transform of each $F(s)$.
$F(s) = \dfrac{3}{(s-2)^2 + 9}$.
$F(s) = \dfrac{6}{(s-4)^3}$.
$F(s) = \dfrac{s+4}{s^2 + 6s + 13}$.
$F(s) = \dfrac{s-2}{s^2 + 2s + 3}$.
Key:
\[ \mathcal{L}^{-1}[F] = e^{2t}\sin 3t. \]
\[ \mathcal{L}^{-1}[F] = 3t^2 e^{4t}. \]
\[ \mathcal{L}^{-1}[F] = e^{-3t}\cos 2t + \frac{1}{2}e^{-3t}\sin 2t. \]
\[ \mathcal{L}^{-1}[F] = e^{-t}\cos\sqrt{2}\,t - \frac{3}{\sqrt{2}}e^{-t}\sin\sqrt{2}\,t. \]
Exercise 2.4. Use the Laplace transform to solve the problem
\[ y'' - y = 8e^{t}\sin 2t, \quad y(0) = 2, \quad y'(0) = -2. \]
Key:
\[ y(t) = 2e^{t} + e^{-t} - e^{t}(\sin 2t + \cos 2t). \]
Exercise 2.5. Solve the IVP:
\[ x_1' = 2x_1 - x_2, \quad x_2' = x_1 + 2x_2; \quad x_1(0) = 1, \quad x_2(0) = 0. \]
Key:
\[ x_1(t) = e^{2t}\cos t, \quad x_2(t) = e^{2t}\sin t. \]

Exercise 2.6. Show that if $f(t)$ is a piecewise continuous function for $t \ge 0$ and of exponential order, then
\[ \mathcal{L}\Big[\int_0^{t} f(\tau)\,d\tau\Big] = \frac{1}{s}\mathcal{L}[f] = \frac{F(s)}{s}, \quad s > \alpha, \]
where $F(s) = \mathcal{L}[f]$. Equivalently,
\[ \mathcal{L}^{-1}\Big[\frac{F(s)}{s}\Big] = \int_0^{t} f(\tau)\,d\tau. \]
We notice that the formulas in Theorem 2.1 involve the initial values at $t = 0$, which have to be specified. However, in some applications, the initial values are not set at $t = 0$. In such cases, we first need to convert the initial values. To show the main idea, we consider the equation
\[ y'' - y = 8e^{x}\sin 2x, \quad y(1) = 2, \quad y'(1) = 2. \]
We first make the change of variable $t = x - 1$, and transform the equation to
\[ y''(t) - y(t) = 8e^{t+1}\sin 2(t+1), \quad y(0) = 2, \quad y'(0) = 2. \]
We then solve the transformed equation by the Laplace transform, and recover the solution of the original problem by substituting $t = x - 1$.
3. Unit Step Functions and the Second Shifting Theorem

In this section, we consider the solution of DE with discontinuous source terms by using Laplace transforms. As the first step, we will introduce a basis to express piecewise continuous functions.

3.1. Unit step function.

Definition 3.1. The unit step function or Heaviside function $u_a(t)$ is defined by
\[ u_a(t) = \begin{cases} 0, & 0 \le t < a,\\ 1, & t \ge a, \end{cases} \tag{3.1} \]
where $a$ is any positive number.

We see that $u_a(t)$ is defined on $[0, \infty)$, and has a jump discontinuity at $t = a$. An important property is that it can be used as a basis to express piecewise continuous functions, due to the following fact.

Example 3.1. Find the expression of the function $f(t) = u_a(t) - u_b(t)$, $b > a$.
Solution: By the definition of the unit step function, we have
\[ f(t) = \begin{cases} 0, & 0 \le t < a,\\ 1, & a \le t < b,\\ 0, & t \ge b. \end{cases} \]
We can view the operation $f(t) = u_a(t) - u_b(t)$ as switching the constant $1$ on at $t = a$ and off at $t = b$.
Example 3.2. Express the following function in terms of the unit step function:
\[ f(t) = \begin{cases} 0, & 0 \le t < 1,\\ t - 1, & 1 \le t < 2,\\ 1, & t \ge 2. \end{cases} \]
Solution: We view the given function in the following way. The contribution $f_1(t) = t - 1$ is switched on at $t = 1$ and is switched off again at $t = 2$. Mathematically, this can be described by
\[ f_1(t) = \underbrace{u_1(t)(t-1)}_{\text{switch on at } t = 1} - \underbrace{u_2(t)(t-1)}_{\text{switch off at } t = 2}. \]
At $t = 2$, the contribution $f_2(t) = 1$ switches on and remains on for all $t \ge 2$. Mathematically, this is described by $f_2(t) = u_2(t)$. The function $f$ is then given by
\[ f(t) = f_1(t) + f_2(t) = (t-1)u_1(t) - (t-1)u_2(t) + u_2(t) = (t-1)u_1(t) - (t-2)u_2(t). \]
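The step-function expression above can be spot-checked on one point of each piece. A SymPy sketch (using SymPy's `Heaviside` for $u_a$; an assumed tool):

```python
import sympy as sp

t = sp.symbols('t')
# f(t) = (t-1)u_1(t) - (t-2)u_2(t), from Example 3.2.
f = (t - 1)*sp.Heaviside(t - 1) - (t - 2)*sp.Heaviside(t - 2)

# Sample each piece: 0 on [0,1), t-1 on [1,2), 1 on [2,oo).
assert f.subs(t, sp.Rational(1, 2)) == 0
assert f.subs(t, sp.Rational(3, 2)) == sp.Rational(1, 2)
assert f.subs(t, 3) == 1
```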
Exercise 3.1. Express the following function in terms of the unit step function:
\[ f(t) = \begin{cases} t, & 0 \le t < 2,\\ 1, & 2 \le t < 4,\\ t - 4, & 4 \le t < 5,\\ e^{5t}, & t \ge 5. \end{cases} \]
Key:
\[ f(t) = t\big(1 - u_2(t)\big) + \big(u_2(t) - u_4(t)\big) + (t-4)\big(u_4(t) - u_5(t)\big) + e^{5t}u_5(t). \]
3.2. The second shifting theorem. In the previous part, we saw how the unit step function can be used to represent functions that are piecewise continuous. In what follows, we shall show that the Laplace transform provides a straightforward method for solving constant-coefficient linear DE which have such functions as source terms. We first need to determine the Laplace transform of the unit step functions.

Theorem 3.1. Let $F(s) = \mathcal{L}[f(t)]$. Then
\[ \mathcal{L}\big[u_a(t)f(t-a)\big] = e^{-as}F(s). \tag{3.2} \]
Conversely,
\[ \mathcal{L}^{-1}\big[e^{-as}F(s)\big] = u_a(t)f(t-a). \tag{3.3} \]

Exercise 3.2. Prove this theorem.

As a direct consequence of the above theorem, we have
\[ \mathcal{L}\big[u_a(t)f(t)\big] = e^{-as}\mathcal{L}\big[f(t+a)\big]. \tag{3.4} \]
To get familiar with this property, we now look at several examples.

Example 3.3. Determine the Laplace transform of
\[ f(t) = \begin{cases} 0, & 0 \le t < 1,\\ t - 1, & 1 \le t < 2,\\ 1, & t \ge 2. \end{cases} \]
Solution: We know that
\[ f(t) = (t-1)u_1(t) - (t-2)u_2(t) = g(t-1)u_1(t) - g(t-2)u_2(t), \]
where $g(t) = t$. Using (3.2) leads to
\[ \mathcal{L}[f(t)] = e^{-s}\mathcal{L}[g(t)] - e^{-2s}\mathcal{L}[g(t)] = \frac{1}{s^2}\big(e^{-s} - e^{-2s}\big). \]

Exercise 3.3. Determine the Laplace transform of
\[ f(t) = \begin{cases} 1, & 0 \le t < 2,\\ e^{-(t-2)}, & t \ge 2. \end{cases} \]
Key:
\[ \mathcal{L}[f(t)] = \frac{1 - e^{-2s}}{s} + \frac{e^{-2s}}{s+1}. \]
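The result of Example 3.3 can be cross-checked by integrating the definition directly, piece by piece. A SymPy sketch (an assumed tool):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# f = 0 on [0,1), t-1 on [1,2), 1 on [2,oo): integrate e^{-st} f(t) piecewise.
F = sp.integrate((t - 1)*sp.exp(-s*t), (t, 1, 2)) + sp.integrate(sp.exp(-s*t), (t, 2, sp.oo))

# Compare with (e^{-s} - e^{-2s})/s^2 from the second shifting theorem.
assert sp.simplify(F - (sp.exp(-s) - sp.exp(-2*s))/s**2) == 0
```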
Example 3.4. Determine
\[ \mathcal{L}^{-1}\Big[\frac{2e^{-s}}{s^2+4}\Big]. \]
Solution: We have that
\[ \mathcal{L}[\sin 2t] = \frac{2}{s^2+4}. \]
Consequently,
\[ \mathcal{L}^{-1}\Big[\frac{2e^{-s}}{s^2+4}\Big] = \mathcal{L}^{-1}\big[e^{-s}\mathcal{L}[\sin 2t]\big] = u_1(t)\sin\big(2(t-1)\big). \]

Exercise 3.4. Determine
\[ \mathcal{L}^{-1}\Big[\frac{(s-4)e^{-3s}}{s^2 - 4s + 5}\Big]. \]
Key:
\[ \mathcal{L}^{-1}\Big[\frac{(s-4)e^{-3s}}{s^2 - 4s + 5}\Big] = e^{2(t-3)}u_3(t)\big[\cos(t-3) - 2\sin(t-3)\big]. \]
We now illustrate how the unit step function can be useful in the solution of IVP. To fix ideas, consider the following example.

Example 3.5. Solve the IVP:
\[ y' - y = 1 - (t-1)u_1(t), \quad y(0) = 0. \]
Solution: Let $Y(s) = \mathcal{L}[y(t)]$. Taking the Laplace transform of both sides of the DE yields
\[ sY(s) - y(0) - Y(s) = \frac{1}{s} - \frac{e^{-s}}{s^2}. \]
Solving for $Y(s)$:
\[ Y(s) = \frac{1}{s(s-1)} - \frac{e^{-s}}{s^2(s-1)}. \]
Decomposing the right-hand side into partial fractions yields
\[ Y(s) = \frac{1}{s-1} - \frac{1}{s} - e^{-s}\Big[\frac{1}{s-1} - \frac{1}{s} - \frac{1}{s^2}\Big]. \]
Taking the inverse Laplace transform of both sides of this equation, we obtain
\[ y(t) = e^{t} - 1 - u_1(t)\big[e^{t-1} - 1 - (t-1)\big]. \]
That is,
\[ y(t) = e^{t} - 1 - u_1(t)\big[e^{t-1} - t\big]. \]
Exercise 3.5. Solve the IVP:
\[ y'' + 2y' + 5y = f(t), \quad y(0) = y'(0) = 0, \]
if
\[ f(t) = \begin{cases} 10, & 0 \le t < 4,\\ -10, & 4 \le t < 8,\\ 0, & t \ge 8. \end{cases} \]
Key:
\[ y(t) = g(t) - 2u_4(t)g(t-4) + u_8(t)g(t-8), \]
where
\[ g(t) = 2\Big(1 - e^{-t}\cos(2t) - \frac{1}{2}e^{-t}\sin(2t)\Big). \]
4. Impulse Functions

In some applications, it is necessary to deal with equations whose forcing terms involve impulse functions, which describe, for example, voltages or forces of large magnitude that act over very short time intervals.
For this purpose, we consider the Dirac delta function, denoted by $\delta(t)$. It is a "function" with the properties
\[ \delta(t) = 0, \quad t \ne 0; \qquad \int_{-\infty}^{\infty}\delta(t)\,dt = 1. \tag{4.1} \]
There is no ordinary function of the kind studied in elementary calculus with such properties. Notice that for any $t_0$,
\[ \delta(t - t_0) = 0, \quad t \ne t_0; \qquad \int_{-\infty}^{\infty}\delta(t - t_0)\,dt = 1. \tag{4.2} \]
The delta function can be viewed as a limit of usual functions. For example, consider
\[ d_{\epsilon}(t) = \begin{cases} \dfrac{1}{2\epsilon}, & -\epsilon < t < \epsilon,\\ 0, & \text{otherwise}, \end{cases} \qquad \epsilon > 0. \]
Then we have
\[ \lim_{\epsilon\to 0^{+}} d_{\epsilon}(t) = \delta(t) = \begin{cases} +\infty, & t = 0,\\ 0, & t \ne 0. \end{cases} \]
Based on this, we can show that for any continuous function $f(t)$,
\[ \int_{-\infty}^{\infty}\delta(t)f(t)\,dt = f(0); \qquad \int_{-\infty}^{\infty}\delta(t - t_0)f(t)\,dt = f(t_0). \tag{4.3} \]
Theorem 4.1. We have
\[ \mathcal{L}[\delta(t - t_0)] = e^{-st_0}, \quad t_0 > 0; \qquad \mathcal{L}[\delta(t)] = 1. \tag{4.4} \]
Proof. See Page 341 of the textbook.

Example 4.1. Find the solution of the initial value problem
\[ 2y'' + y' + 2y = \delta(t - 5), \quad y(0) = y'(0) = 0. \]
Solution: Let $Y(s) = \mathcal{L}[y]$. Applying the Laplace transform leads to
\[ (2s^2 + s + 2)Y(s) = e^{-5s}. \]
Thus
\[ Y(s) = \frac{e^{-5s}}{2s^2 + s + 2} = \frac{e^{-5s}}{2}\cdot\frac{1}{\big(s + \frac{1}{4}\big)^2 + \frac{15}{16}}. \]
Recall that
\[ \mathcal{L}^{-1}\bigg[\frac{1}{\big(s + \frac{1}{4}\big)^2 + \frac{15}{16}}\bigg] = \frac{4}{\sqrt{15}}e^{-t/4}\sin\frac{\sqrt{15}}{4}t. \]
Therefore
\[ y(t) = \mathcal{L}^{-1}[Y(s)] = \frac{2}{\sqrt{15}}u_5(t)e^{-(t-5)/4}\sin\Big(\frac{\sqrt{15}}{4}(t-5)\Big). \]
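Both the transform in Theorem 4.1 and the structure of this solution can be checked symbolically. A SymPy sketch (an assumed tool; the second assertion only verifies the homogeneous equation on $t > 5$, where the delta forcing vanishes):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L[delta(t - 5)] = e^{-5s}, as in Theorem 4.1.
F = sp.laplace_transform(sp.DiracDelta(t - 5), t, s, noconds=True)
assert sp.simplify(F - sp.exp(-5*s)) == 0

# The inverted solution of Example 4.1 satisfies 2y'' + y' + 2y = 0 away from t = 5.
y = 2/sp.sqrt(15)*sp.exp(-(t - 5)/4)*sp.sin(sp.sqrt(15)*(t - 5)/4)
assert sp.simplify(2*sp.diff(y, t, 2) + sp.diff(y, t) + 2*y) == 0
```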
CHAPTER 5
Systems of First-Order Linear Equations
The discussions in the previous chapters were centered around solving a single DE with a single unknown function $y(x)$. However, mathematical modeling in real applications may lead to DE with more than one unknown function, which require solving a system of DE. Moreover, it is important to notice that any $n$th-order DE can be written as a system of first-order DE. In this chapter, we will discuss solution methods for linear systems of first-order DE. Due to the nature of linear problems, the discussion will follow the same general lines as those in Chapters 2 and 3.
1. Basic Theoretical Results
Consider the system of first-order linear DE:
\[
\begin{aligned}
\frac{dx_1}{dt} &= a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n + f_1(t),\\
\frac{dx_2}{dt} &= a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n + f_2(t),\\
&\ \ \vdots\\
\frac{dx_n}{dt} &= a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n + f_n(t),
\end{aligned} \tag{1.1}
\]
where $x_1(t), \cdots, x_n(t)$ are unknown functions, and $a_{ij}(t)$, $1 \le i, j \le n$, and $f_i(t)$, $1 \le i \le n$, are given functions, which are continuous on some interval $I$. When $f_i(t) = 0$ for all $1 \le i \le n$, the system is said to be homogeneous; otherwise, it is nonhomogeneous.
It is more convenient to write the system (1.1) in matrix form:
\[
\frac{d}{dt}\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
=
\begin{pmatrix}
a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t)\\
a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t)\\
\vdots & \vdots & & \vdots\\
a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}
+
\begin{pmatrix} f_1(t)\\ f_2(t)\\ \vdots\\ f_n(t) \end{pmatrix}, \tag{1.2}
\]
or
\[ \frac{d\mathbf{x}}{dt} = \mathbf{A}\mathbf{x} + \mathbf{F}, \tag{1.3} \]
where $\mathbf{x}$, $\mathbf{A}$ and $\mathbf{F}$ are the unknown vector function, coefficient matrix and source vector, respectively. In this note, we always use boldface letters to denote vectors and matrices. Differentiation and integration of vector and matrix functions are performed componentwise.

Example 1.1. Let
\[ \mathbf{x}(t) = \begin{pmatrix} x_1(t)\\ x_2(t) \end{pmatrix} = \begin{pmatrix} 2e^{-5t} + 4e^{-t}\\ e^{-5t} + e^{-t} \end{pmatrix}. \]
Then we have
\[ \mathbf{x}'(t) = \begin{pmatrix} x_1'(t)\\ x_2'(t) \end{pmatrix} = \begin{pmatrix} -10e^{-5t} - 4e^{-t}\\ -5e^{-5t} - e^{-t} \end{pmatrix}, \]
and
\[ \int \mathbf{x}(t)\,dt = \begin{pmatrix} -\frac{2}{5}e^{-5t} - 4e^{-t} + c_1\\[2pt] -\frac{1}{5}e^{-5t} - e^{-t} + c_2 \end{pmatrix} = -\begin{pmatrix} \frac{2}{5}\\[2pt] \frac{1}{5} \end{pmatrix}e^{-5t} - \begin{pmatrix} 4\\ 1 \end{pmatrix}e^{-t} + \begin{pmatrix} c_1\\ c_2 \end{pmatrix}. \]
Exercise 1.1. Verify that
\[ \mathbf{x}(t) = c_1\begin{pmatrix} 1\\ -2 \end{pmatrix}e^{-3t} + c_2\begin{pmatrix} 2\\ 1 \end{pmatrix}e^{2t} \tag{1.4} \]
is a solution of
\[ \mathbf{x}'(t) = \begin{pmatrix} 1 & 2\\ 2 & -2 \end{pmatrix}\mathbf{x}(t). \]
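This verification amounts to checking that $\mathbf{x}' - \mathbf{A}\mathbf{x}$ vanishes identically in $t$, $c_1$, $c_2$. A SymPy sketch (an assumed tool):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
A = sp.Matrix([[1, 2], [2, -2]])

# General solution (1.4): c1 (1,-2)^T e^{-3t} + c2 (2,1)^T e^{2t}.
x = c1*sp.Matrix([1, -2])*sp.exp(-3*t) + c2*sp.Matrix([2, 1])*sp.exp(2*t)

# x'(t) - A x(t) should be the zero vector for all c1, c2.
residual = sp.simplify(sp.diff(x, t) - A*x)
assert residual == sp.zeros(2, 1)
```

The check succeeds because $(1,-2)^T$ and $(2,1)^T$ are eigenvectors of $\mathbf{A}$ with eigenvalues $-3$ and $2$.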
1.1. Principle of superposition. As a typical property of linear DE, we have

Theorem 1.1. If the vector functions $\{\mathbf{x}_j(t)\}_{j=1}^{n}$ are solutions of the homogeneous linear system $\mathbf{x}'(t) = \mathbf{A}\mathbf{x}$, then the linear combination $c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_n\mathbf{x}_n(t)$ is also a solution, for any constants $c_1, c_2, \cdots, c_n$.
1.2. Existence and uniqueness theorem. We now introduce the initial value problem.

Definition 1.1. Let $\mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t) + \mathbf{F}(t)$ be an $n \times n$ linear system on $I$. The initial value problem is of the form
\[
\begin{cases}
\mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t) + \mathbf{F}(t), & t \in I,\\
\mathbf{x}(t_0) = \boldsymbol{\eta}, & t_0 \in I,
\end{cases} \tag{1.5}
\]
where the initial values are
\[ x_1(t_0) = \eta_1, \quad x_2(t_0) = \eta_2, \quad \cdots, \quad x_n(t_0) = \eta_n. \]
Example 1.2. Solve the IVP
\[ \mathbf{x}'(t) = \begin{pmatrix} 1 & 2\\ 2 & -2 \end{pmatrix}\mathbf{x}(t), \qquad \mathbf{x}(0) = \begin{pmatrix} 1\\ 0 \end{pmatrix}. \]
Solution: We know from Exercise 1.1 that (1.4) is the general solution. Taking $t = 0$, we get
\[
\begin{cases}
c_1 + 2c_2 = 1,\\
-2c_1 + c_2 = 0
\end{cases}
\implies c_1 = \frac{1}{5}, \quad c_2 = \frac{2}{5}.
\]
Substituting $c_1$ and $c_2$ into (1.4) yields the unique solution
\[ \mathbf{x}(t) = \frac{1}{5}\begin{pmatrix} 1\\ -2 \end{pmatrix}e^{-3t} + \frac{2}{5}\begin{pmatrix} 2\\ 1 \end{pmatrix}e^{2t}. \]
The fundamental theoretical result on the existence and uniqueness of the solution to the IVP plays an important role in our following discussions.

Theorem 1.2. The IVP
\[ \mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t) + \mathbf{F}(t), \quad \mathbf{x}(t_0) = \boldsymbol{\eta}, \]
where $\mathbf{A}(t)$ and $\mathbf{F}(t)$ are continuous on an interval $I$ and $t_0 \in I$, has a unique solution on $I$.

Note: See, for example, F.J. Murray and K.S. Miller, Existence Theorems, New York Univ. Press, 1954, for the proof.
1.3. Solution structure of homogeneous system. Just as for a single $n$th-order linear equation, the solution of a nonhomogeneous linear system can, in theory, be obtained once we have solved the associated homogeneous system. Consequently, we begin by developing the theory for the homogeneous DE
\[ \mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t), \tag{1.6} \]
where $\mathbf{A}(t)$ is an $n \times n$ continuous matrix function on $I$.
We first state the main theorem on the solution structure, and then follow the theoretical framework for linear equations presented in Chapters 2 and 3 to establish it step by step.

Theorem 1.3. The general solution of (1.6) is
\[ \mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_n\mathbf{x}_n(t), \tag{1.7} \]
where $c_1, c_2, \cdots, c_n$ are $n$ arbitrary constants, and $\{\mathbf{x}_1, \cdots, \mathbf{x}_n\}$ are $n$ LI solutions of (1.6). The set $\{\mathbf{x}_1, \cdots, \mathbf{x}_n\}$ is called a fundamental solution set of (1.6) on $I$, and the corresponding matrix $\mathbf{X}(t)$ defined by
\[ \mathbf{X}(t) = \big[\mathbf{x}_1(t), \mathbf{x}_2(t), \cdots, \mathbf{x}_n(t)\big] \]
is called a fundamental matrix.
Example 1.3. Consider the system
\[ \mathbf{x}'(t) = \begin{pmatrix} 1 & 2\\ 2 & -2 \end{pmatrix}\mathbf{x}(t). \]
We know from Exercise 1.1 that
\[ \mathbf{x}_1(t) = \begin{pmatrix} 1\\ -2 \end{pmatrix}e^{-3t}, \qquad \mathbf{x}_2(t) = \begin{pmatrix} 2\\ 1 \end{pmatrix}e^{2t} \]
are two LI solutions of this system, so the fundamental matrix is
\[ \mathbf{X}(t) = \begin{pmatrix} e^{-3t} & 2e^{2t}\\ -2e^{-3t} & e^{2t} \end{pmatrix}. \]
We see that the general solution is
\[ \mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) = \mathbf{X}(t)\mathbf{c}, \]
where $\mathbf{c} = (c_1, c_2)^T$.
To establish Theorem 1.3, we need to address several issues as before:
(i) Linear dependence/independence and the Wronskian for vector functions;
(ii) Existence of $n$ LI solutions;
(iii) Can every solution of $\mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t)$ be expressed as a linear combination of $n$ LI solutions?
First, the definition of linear dependence/independence is the same as that in vector algebra, but in this context, the entries of the vectors are functions.
We now define the Wronskian of $n$ vector functions. Hereafter, we denote by $V_n(I)$ the set of all column $n$-vector real functions on an interval $I$, that is,
\[ V_n(I) = \big\{\mathbf{x}(t) : \mathbf{x}(t) = (x_1(t), \cdots, x_n(t))^T \in \mathbb{R}^n, \ t \in I\big\}, \]
which is a vector space.
Definition 1.2. Let $\mathbf{x}_1(t), \mathbf{x}_2(t), \cdots, \mathbf{x}_n(t) \in V_n(I)$. Then the Wronskian of these vector functions is defined by
\[ W[\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n](t) = \det\big([\mathbf{x}_1(t), \mathbf{x}_2(t), \cdots, \mathbf{x}_n(t)]\big), \tag{1.8} \]
i.e., the determinant of the matrix formed by the given column vector functions.

Exercise 1.2. Determine the Wronskian of
\[ \mathbf{x}_1(t) = \begin{pmatrix} e^{t}\\ 2e^{2t} \end{pmatrix}, \qquad \mathbf{x}_2(t) = \begin{pmatrix} 3\sin t\\ \cos t \end{pmatrix}. \]
Theorem 1.4. Let $\mathbf{x}_1(t), \mathbf{x}_2(t), \cdots, \mathbf{x}_n(t)$ be $n$ vector functions in $V_n(I)$. If
\[ W[\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n](t_0) \ne 0, \quad \text{for some } t_0 \in I, \]
then $\{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n\}$ is LI on $I$.

As an example, we verify that for $\mathbf{x}_1, \mathbf{x}_2$ in Exercise 1.2, $W[\mathbf{x}_1, \mathbf{x}_2](0) = 1 \ne 0$, so they are LI.
It is interesting to notice that Abel's formula for the linear system $\mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t)$ takes the form
\[ W[\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n](t) = c\exp\Big(\int \operatorname{tr}(\mathbf{A}(t))\,dt\Big), \tag{1.9} \]
where $\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n$ is a set of solutions, and
\[ \operatorname{tr}(\mathbf{A}(t)) = \sum_{j=1}^{n} a_{jj}(t) \]
is the trace of $\mathbf{A}(t)$ in (1.2).
To derive (1.9), we need to show $W'(t) = \operatorname{tr}(\mathbf{A}(t))W(t)$, where we denote $W(t) := W[\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n](t)$. We leave the details as an exercise.

Exercise 1.3. Derive the formula (1.9).
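Abel's formula (1.9) can be illustrated on the fundamental matrix of Example 1.3, where $\operatorname{tr}(\mathbf{A}) = -1$, so $W(t) = W(0)e^{-t}$. A SymPy sketch (an assumed tool):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [2, -2]])

# Fundamental matrix from Example 1.3.
X = sp.Matrix([[sp.exp(-3*t), 2*sp.exp(2*t)],
               [-2*sp.exp(-3*t), sp.exp(2*t)]])

W = sp.simplify(X.det())
# Abel's formula: W(t) = W(0) * exp(int tr(A) dt) = 5 e^{-t}.
assert A.trace() == -1
assert sp.simplify(W - 5*sp.exp(-t)) == 0
```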
Just as with the $n$th-order linear equations, we can prove the converse of Theorem 1.4, as stated below.

Theorem 1.5. Let $\mathbf{A}(t)$ be an $n \times n$ continuous matrix function on $I$. If $\{\mathbf{x}_j(t)\}_{j=1}^{n}$ is a set of LI solutions to $\mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t)$ on $I$, then
\[ W[\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n](t) \ne 0 \]
at every point of $I$.
Proof. It is easier to prove the equivalent statement:
If $W[\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n](t_0) = 0$ at some $t_0 \in I$, then $\{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n\}$ is LD.
In view of this vanishing Wronskian, the vectors $\mathbf{x}_1(t_0), \cdots, \mathbf{x}_n(t_0)$ are linearly dependent, so there exist scalars $c_1, c_2, \cdots, c_n$, not all zero, such that
\[ c_1\mathbf{x}_1(t_0) + c_2\mathbf{x}_2(t_0) + \cdots + c_n\mathbf{x}_n(t_0) = \mathbf{0}. \tag{1.10} \]
Set
\[ \mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_n\mathbf{x}_n(t). \tag{1.11} \]
It follows from (1.10), (1.11) and Theorem 1.2 that $\mathbf{x}(t)$ is the unique solution to the IVP
\[ \mathbf{x}'(t) = \mathbf{A}(t)\mathbf{x}(t); \quad \mathbf{x}(t_0) = \mathbf{0}. \]
However, this IVP has the solution $\mathbf{x}(t) \equiv \mathbf{0}$. Thus, by uniqueness, we must have
\[ c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_n\mathbf{x}_n(t) = \mathbf{0}, \quad t \in I. \]
Since not all $\{c_i\}_{i=1}^{n}$ are zero, it follows that $\{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n\}$ is LD on $I$. $\square$
Remark 1.1. To determine if $\{x_1, x_2, \ldots, x_n\}$ is a fundamental solution set, we can compute the Wronskian at any convenient point in $I$. If $W[x_1, x_2, \ldots, x_n](t_0) \neq 0$, then the solutions are LI; otherwise, they are LD.
Exercise 1.4. Consider the system $x'(t) = A\,x(t)$, where
\[
A = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix},
\]
and let
\[
x_1(t) = \begin{pmatrix} e^t \cos 2t \\ e^t \sin 2t \end{pmatrix}, \qquad
x_2(t) = \begin{pmatrix} e^t \sin 2t \\ -e^t \cos 2t \end{pmatrix}.
\]
(a) Verify that $\{x_1, x_2\}$ is a fundamental set of solutions, and write down the general solution.

(b) Solve the IVP: $x'(t) = Ax(t)$ with
\[
x(0) = \begin{pmatrix} 3 \\ 2 \end{pmatrix}.
\]
The existence of a fundamental set of solutions can be proved in a similar fashion as for the $n$th-order linear DE. Moreover, we can show that every solution of $x'(t) = A(t)\,x(t)$ can be expressed as a linear combination of $n$ LI solutions. Here, we omit the details.

At the end of this section, we state the main result on the solution structure of the nonhomogeneous system.
Theorem 1.6. Let $A(t)$ be a matrix function that is continuous on $I$, and let $\{x_1, x_2, \ldots, x_n\}$ be a fundamental solution set for $x'(t) = A(t)\,x(t)$ on $I$. If $x_p(t)$ is a particular solution to the nonhomogeneous system $x'(t) = A(t)\,x(t) + F(t)$ on $I$, then its general solution is
\[
x(t) = c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t) + x_p(t). \tag{1.12}
\]
2. Homogeneous Linear Systems with Constant Coefficients
In this section, we consider the system $x'(t) = A\,x(t)$, where $A$ is an $n \times n$ matrix with real constant entries. If $A$ has $n$ LI eigenvectors, then we say $A$ is non-defective; otherwise, $A$ is defective. We first consider the case that $A$ is non-defective.

From the theoretical results in the previous section, we only need to find $n$ LI solutions. For this purpose, we look for solutions of the form
\[
x(t) = e^{\lambda t} v, \tag{2.1}
\]
where $v$ is a constant column $n$-vector. Plugging it into the system $x'(t) = Ax(t)$ gives
\[
(e^{\lambda t} v)' = A(e^{\lambda t} v) \;\Longrightarrow\; (\lambda v)e^{\lambda t} = (Av)e^{\lambda t} \;\Longrightarrow\; \lambda v = Av. \tag{2.2}
\]
This means that $(\lambda, v)$ is an eigen-pair of the matrix $A$.
Theorem 2.1. Let $A$ be an $n \times n$ matrix with real constant entries. The vector function $x(t) = e^{\lambda t} v$ is a solution to $x'(t) = Ax(t)$ if and only if $(\lambda, v)$ is an eigen-pair of the matrix $A$, that is, $\lambda v = Av$.

Remark 2.1. Here, we have not assumed that the eigenvalues and eigenvectors of $A$ are real. In fact, this theorem also holds in the complex case.
2.1. Non-defective coefficient matrix. Based on this theorem, if $A$ is non-defective, we are able to find the general solution of $x'(t) = A\,x(t)$ by finding the eigen-pairs of $A$.
Example 2.1. Find the general solution to
\[
\begin{cases}
x_1'(t) = 2x_1(t) - x_2(t), \\
x_2'(t) = 3x_1(t) - 2x_2(t).
\end{cases}
\]
Solution: The given system can be written in the matrix form $x'(t) = A\,x(t)$, where
\[
A = \begin{pmatrix} 2 & -1 \\ 3 & -2 \end{pmatrix}.
\]
Let $I_2$ be the $2 \times 2$ identity matrix. We first find the eigenvalues of $A$:
\[
\det(A - \lambda I_2) = \begin{vmatrix} 2-\lambda & -1 \\ 3 & -2-\lambda \end{vmatrix} = \lambda^2 - 1,
\]
so the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = -1$.
We now identify the eigenvectors:

(i) For $\lambda = 1$, we solve the system $(A - I_2)v = 0$, i.e.,
\[
\begin{cases} v_1 - v_2 = 0 \\ 3v_1 - 3v_2 = 0 \end{cases}
\;\Longrightarrow\; v = r(1, 1)^T
\;\Longrightarrow\; x_1(t) = e^{t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \text{ is a solution.}
\]
(ii) For $\lambda = -1$, we solve the system $(A + I_2)v = 0$, i.e.,
\[
\begin{cases} 3v_1 - v_2 = 0 \\ 3v_1 - v_2 = 0 \end{cases}
\;\Longrightarrow\; v = r \begin{pmatrix} 1 \\ 3 \end{pmatrix}.
\]
Consequently,
\[
x_2(t) = e^{-t} \begin{pmatrix} 1 \\ 3 \end{pmatrix}
\]
is also a solution.

We verify that the Wronskian
\[
W[x_1, x_2](0) = \begin{vmatrix} 1 & 1 \\ 1 & 3 \end{vmatrix} = 2 \neq 0,
\]
so $x_1, x_2$ are LI. Hence, the general solution is
\[
x(t) = c_1 x_1(t) + c_2 x_2(t) = c_1 e^{t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 1 \\ 3 \end{pmatrix}.
\]
In summary, we have the following basic rule.

Theorem 2.2. Let $A$ be an $n \times n$ matrix with real constant entries. If $A$ has $n$ real LI eigenvectors $v_1, v_2, \ldots, v_n$, with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ (not necessarily distinct), then the vector functions $x_1, x_2, \ldots, x_n$ defined by
\[
x_k(t) = e^{\lambda_k t} v_k, \quad k = 1, 2, \ldots, n,
\]
form a fundamental set of solutions to $x'(t) = Ax(t)$.
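In code, the recipe of Theorem 2.2 is a direct application of a numerical eigensolver. A sketch (Python with NumPy assumed; the $2 \times 2$ matrix is an illustrative choice with eigenvalues $\pm 1$):

```python
import numpy as np

# Theorem 2.2 as a recipe: for a non-defective A, the eigen-pairs give
# the fundamental solutions x_k(t) = exp(lam_k * t) * v_k.
A = np.array([[2.0, -1.0],
              [3.0, -2.0]])     # illustrative matrix, eigenvalues +1, -1

lam, V = np.linalg.eig(A)      # columns of V are the eigenvectors v_k

def general_solution(t, c):
    """x(t) = sum_k c_k exp(lam_k t) v_k  =  V @ (c * exp(lam * t))."""
    return V @ (c * np.exp(lam * t))

# Each eigen-pair satisfies A v = lam v, so e^{lam t} v solves x' = A x.
for k in range(len(lam)):
    assert np.allclose(A @ V[:, k], lam[k] * V[:, k])
```

Note that `np.linalg.eig` normalizes each eigenvector to unit length; any nonzero scaling of $v_k$ gives the same solution space.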
Example 2.2. Find the general solution to $x'(t) = Ax(t)$, if
\[
A = \begin{pmatrix} 0 & 2 & -3 \\ -2 & 4 & -3 \\ -2 & 2 & -1 \end{pmatrix}.
\]
Solution: We first determine the eigenvalues and eigenvectors of $A$:
\[
\det(A - \lambda I_3) = \begin{vmatrix} -\lambda & 2 & -3 \\ -2 & 4-\lambda & -3 \\ -2 & 2 & -1-\lambda \end{vmatrix} = -(\lambda + 1)(\lambda - 2)^2.
\]
Hence, the eigenvalues are $\lambda_1 = -1$ and $\lambda_2 = 2$ (multiplicity 2).
Eigenvectors:

(i) For $\lambda_1 = -1$, we find that
\[
v = r \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},
\]
so we can take
\[
v_1 = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
\]
(ii) For $\lambda_2 = 2$, the system $(A - 2I_3)v = 0$ reduces to the single equation
\[
2v_1 - 2v_2 + 3v_3 = 0,
\]
which has solution
\[
v = r \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + s \begin{pmatrix} -3 \\ 0 \\ 2 \end{pmatrix}.
\]
Therefore, two LI eigenvectors corresponding to $\lambda_2 = 2$ are
\[
v_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \qquad v_3 = \begin{pmatrix} -3 \\ 0 \\ 2 \end{pmatrix}.
\]
It follows that three LI solutions are
\[
x_1(t) = e^{-t} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad
x_2(t) = e^{2t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad
x_3(t) = e^{2t} \begin{pmatrix} -3 \\ 0 \\ 2 \end{pmatrix}.
\]
Consequently, the GS is
\[
x(t) = c_1 e^{-t} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
+ c_2 e^{2t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}
+ c_3 e^{2t} \begin{pmatrix} -3 \\ 0 \\ 2 \end{pmatrix}
= \begin{pmatrix} e^{-t} & e^{2t} & -3e^{2t} \\ e^{-t} & e^{2t} & 0 \\ e^{-t} & 0 & 2e^{2t} \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix}.
\]
We now consider the case when some of the eigenvalues are complex. In this case, we know from matrix theory in linear algebra that the eigenvalues and eigenvectors occur in conjugate pairs.

Suppose that $\lambda = a + ib$ ($b \neq 0$) is an eigenvalue of $A$ with the corresponding eigenvector $v = r + is$. Then we know that
\[
w(t) = e^{\lambda t} v = e^{(a+ib)t}(r + is) = e^{at}(\cos bt + i \sin bt)(r + is)
\]
is a solution to $x'(t) = A\,x(t)$. We rewrite it as
\[
w(t) = e^{at}(\cos bt\, r - \sin bt\, s) + i\, e^{at}(\sin bt\, r + \cos bt\, s).
\]
Then we take the real part and imaginary part of $w(t)$ as two LI real solutions:
\[
x_1(t) = e^{at}(\cos bt\, r - \sin bt\, s), \qquad x_2(t) = e^{at}(\sin bt\, r + \cos bt\, s).
\]
Remark 2.2. Notice that we do not have to derive the solution corresponding to the conjugate eigenvalue $\bar{\lambda} = a - ib$, since it does not yield any new LI solutions to $x'(t) = A\,x(t)$.
Example 2.3. Find the general solution to $x'(t) = A\,x(t)$, if
\[
A = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix}.
\]
Solution: We first find the eigenvalues of $A$:
\[
\det(A - \lambda I_2) = \begin{vmatrix} -\lambda & 2 \\ -2 & -\lambda \end{vmatrix} = \lambda^2 + 4.
\]
Hence, $\lambda = \pm 2i$ are the complex eigenvalues. The eigenvectors corresponding to $\lambda = 2i$ are obtained by solving
\[
v_1 + iv_2 = 0 \;\Longrightarrow\; v = \begin{pmatrix} -i \\ 1 \end{pmatrix}.
\]
Hence, a complex-valued solution to the given DE is
\[
w(t) = e^{2it} \begin{pmatrix} -i \\ 1 \end{pmatrix}
= (\cos 2t + i \sin 2t) \begin{pmatrix} -i \\ 1 \end{pmatrix}
= \begin{pmatrix} \sin 2t - i \cos 2t \\ \cos 2t + i \sin 2t \end{pmatrix}
= \begin{pmatrix} \sin 2t \\ \cos 2t \end{pmatrix} + i \begin{pmatrix} -\cos 2t \\ \sin 2t \end{pmatrix}.
\]
We directly obtain the two real-valued solutions
\[
x_1(t) = \begin{pmatrix} \sin 2t \\ \cos 2t \end{pmatrix}, \qquad
x_2(t) = \begin{pmatrix} -\cos 2t \\ \sin 2t \end{pmatrix}.
\]
Consequently, the general solution to the given DE is
\[
x(t) = \begin{pmatrix} \sin 2t & -\cos 2t \\ \cos 2t & \sin 2t \end{pmatrix}
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.
\]
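The passage from a complex eigen-pair to real solutions can be automated. A sketch (Python with NumPy assumed; here $A$ has the entries used in Example 2.3): pick the eigenvalue with positive imaginary part, split its eigenvector into $v = r + is$, form $x_1(t) = e^{at}(\cos bt\, r - \sin bt\, s)$, and confirm by finite differences that it solves the system.

```python
import numpy as np

# From a complex eigen-pair (lam = a + ib, v = r + i s) to the real
# solution x1(t) = e^{at}(cos(bt) r - sin(bt) s).
A = np.array([[0.0, 2.0],
              [-2.0, 0.0]])

lam, V = np.linalg.eig(A)
k = int(np.argmax(lam.imag))     # the eigenvalue with b > 0
a, b = lam[k].real, lam[k].imag
r, s = V[:, k].real, V[:, k].imag

def x1(t):
    return np.exp(a * t) * (np.cos(b * t) * r - np.sin(b * t) * s)

# Finite-difference check that x1 solves x' = A x.
t0, h = 0.3, 1e-6
deriv = (x1(t0 + h) - x1(t0 - h)) / (2 * h)
assert np.allclose(deriv, A @ x1(t0), atol=1e-6)
```

The check passes for any complex scaling of the eigenvector returned by the solver, since rescaling $v$ only mixes the two real solutions.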
In summary, if $A$ is non-defective (i.e., $A$ has $n$ LI eigenvectors), then we are able to find $n$ LI solutions to $x'(t) = A\,x(t)$. However, if $A$ is defective, additional techniques are needed to find a fundamental set of solutions. In the following part, we consider some special defective $A$.
Consider, for example, the system
\[
x'(t) = Ax(t), \quad \text{where} \quad A = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix}.
\]
It is easy to find that the eigenvalue of $A$ is $\lambda_1 = \lambda_2 = 2$, and the corresponding eigenvector is $v = (1, 1)^T$. Hence, there is only one LI eigenvector associated with the double eigenvalue, so the matrix $A$ is defective.
To solve the equation, we have to find another solution that is LI with the solution $x_1(t) = v e^{2t}$. Motivated by the procedure used for second-order equations in Chapter Two, it may be natural to attempt a second solution of the form
\[
x_2(t) = t e^{2t} u,
\]
where $u$ is a constant vector to be determined. Substituting it into the given equation gives
\[
2u t e^{2t} + u e^{2t} - A u t e^{2t} = 0. \tag{2.3}
\]
For this equation to be satisfied for all $t$, it is necessary for the coefficients of $t e^{2t}$ and $e^{2t}$ both to be zero, so $u = 0$. Hence, this attempt fails.
We see from (2.3) that both $e^{2t}$ and $t e^{2t}$ appear, so we attempt
\[
x_2(t) = w e^{2t} + u t e^{2t}, \tag{2.4}
\]
where $w$ and $u$ are constant vectors to be determined. Inserting (2.4) into the equation, we find that
\[
(A - \lambda I)u = 0, \qquad (A - \lambda I)w = u, \tag{2.5}
\]
where $\lambda = 2$ is the eigenvalue of $A$. Therefore, $u = (1, 1)^T$, and $w$ is called a generalized eigenvector of $A$. We solve for $w$, and find that
\[
w = \begin{pmatrix} 0 \\ 1 \end{pmatrix} + r \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
\]
We may take $r = 0$, and obtain the general solution
\[
x(t) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t}
+ c_2 \left[ \begin{pmatrix} 1 \\ 1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{2t} \right].
\]
In summary, we have the following property.

Theorem 2.3. If the equation $x'(t) = Ax(t)$ has a solution of the form
\[
x(t) = (v_1 t + v_2) e^{\lambda t},
\]
then $v_1$ and $v_2$ must satisfy
\[
(A - \lambda I)v_1 = 0, \qquad (A - \lambda I)v_2 = v_1, \tag{2.6}
\]
where $v_2$ is called a generalized eigenvector corresponding to the eigenvalue $\lambda$.
More precisely, if the multiplicity of the eigenvalue $\lambda$ is two, we follow two steps to identify two linearly independent solutions:

1. Find a nonzero solution $v_2$ of the equation
\[
(A - \lambda I)^2 v_2 = 0 \tag{2.7}
\]
such that
\[
v_1 = (A - \lambda I)v_2 \tag{2.8}
\]
is nonzero, and is therefore an eigenvector associated with $\lambda$.

2. Then form the two linearly independent solutions
\[
x_1(t) = v_1 e^{\lambda t} \tag{2.9}
\]
and
\[
x_2(t) = (v_1 t + v_2) e^{\lambda t} \tag{2.10}
\]
of $x' = Ax$.
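The two steps translate directly into code. A sketch (Python with NumPy assumed) for the defective matrix with double eigenvalue $\lambda = 2$ discussed above:

```python
import numpy as np

# Two-step recipe for a double eigenvalue with a single LI eigenvector.
A = np.array([[1.0, 1.0],
              [-1.0, 3.0]])
lam = 2.0
N = A - lam * np.eye(2)

# Step 1: here (A - lam I)^2 = 0, so any v2 with N @ v2 != 0 will do.
assert np.allclose(N @ N, 0)
v2 = np.array([0.0, 1.0])
v1 = N @ v2                       # the ordinary eigenvector (1, 1)
assert not np.allclose(v1, 0)

# Step 2: x2(t) = (v1 t + v2) e^{lam t}; check x2' = A x2 numerically.
def x2(t):
    return (v1 * t + v2) * np.exp(lam * t)

t0, h = 0.2, 1e-6
deriv = (x2(t0 + h) - x2(t0 - h)) / (2 * h)
assert np.allclose(deriv, A @ x2(t0), atol=1e-5)
```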
For a defective coefficient matrix, more complicated situations may occur when the multiplicity of an eigenvalue is greater than 2. Here, we restrict ourselves to the case of an eigenvalue with multiplicity 3. Then there may be either one, two, or three linearly independent eigenvectors, and the general solution of the system $x' = Ax$ differs depending on the number of linearly independent eigenvectors.

As noted in the text, there is no difficulty if there are three LI eigenvectors. If there are two LI eigenvectors, the steps for finding the two LI solutions associated with the defective eigenvalue are given in (2.7)-(2.10); refer to Problem 18 on Page 430 of the textbook.

We now consider the case where there is only one LI eigenvector associated with the eigenvalue of multiplicity 3. In this case, the three LI solutions are of the form:
\[
\begin{aligned}
x_1(t) &= v_1 e^{\lambda t}, \\
x_2(t) &= (v_1 t + v_2) e^{\lambda t}, \\
x_3(t) &= \Big( v_1 \frac{t^2}{2} + v_2 t + v_3 \Big) e^{\lambda t},
\end{aligned} \tag{2.11}
\]
where the vectors $\{v_1, v_2, v_3\}$ satisfy a chain of generalized eigenproblems:
\[
\begin{aligned}
(A - \lambda I)v_3 &= v_2, \\
(A - \lambda I)v_2 &= v_1, \\
(A - \lambda I)v_1 &= 0.
\end{aligned} \tag{2.12}
\]
Because $v_1$ is an ordinary eigenvector, $(A - \lambda I)v_1 = 0$, it follows that
\[
(A - \lambda I)^3 v_3 = 0. \tag{2.13}
\]
Moreover, one verifies that if $\{v_1, v_2, v_3\}$ satisfy (2.12), then $x_1, x_2, x_3$ given by (2.11) are LI solutions to $x' = Ax$.

Consequently, we take the following steps to handle a multiplicity-3 eigenvalue with one LI eigenvector:

1. Find a nonzero solution $v_3$ of the equation
\[
(A - \lambda I)^3 v_3 = 0 \tag{2.14}
\]
such that the vectors
\[
v_2 = (A - \lambda I)v_3, \qquad v_1 = (A - \lambda I)v_2 \tag{2.15}
\]
are both nonzero.

2. Then form the three LI solutions in (2.11).
Example 2.4. Find the general solution to $x' = Ax$, if
\[
A = \begin{pmatrix} 0 & 1 & 2 \\ -5 & -3 & -7 \\ 1 & 0 & 0 \end{pmatrix}.
\]
Solution: We first find the eigenvalues of $A$:
\[
\det(A - \lambda I) = -(\lambda + 1)^3 = 0.
\]
Thus, $A$ has the eigenvalue $\lambda = -1$ of multiplicity 3. We solve $(A + I)v = 0$ for the eigenvector(s); the system reduces to
\[
v_1 = v_2 = -v_3.
\]
Hence, it has only the single associated eigenvector $v = r(1, 1, -1)^T$, with $r \neq 0$.
To apply the method described above, we first calculate
\[
(A + I)^2 =
\begin{pmatrix} 1 & 1 & 2 \\ -5 & -2 & -7 \\ 1 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 1 & 2 \\ -5 & -2 & -7 \\ 1 & 0 & 1 \end{pmatrix}
= \begin{pmatrix} -2 & -1 & -3 \\ -2 & -1 & -3 \\ 2 & 1 & 3 \end{pmatrix}
\]
and
\[
(A + I)^3 =
\begin{pmatrix} 1 & 1 & 2 \\ -5 & -2 & -7 \\ 1 & 0 & 1 \end{pmatrix}
\begin{pmatrix} -2 & -1 & -3 \\ -2 & -1 & -3 \\ 2 & 1 & 3 \end{pmatrix}
= 0.
\]
Thus, any nonzero vector $v_3$ will be a solution of the equation $(A + I)^3 v_3 = 0$. We try $v_3 = (1, 0, 0)^T$, and find that
\[
v_2 = (A + I)v_3 = (1, -5, 1)^T,
\]
and
\[
v_1 = (A + I)v_2 = (-2, -2, 2)^T.
\]
Note that $v_1$ is the previously found eigenvector $v$ with $r = -2$. Substitution in (2.11) now yields the LI solutions:
\[
x_1(t) = v_1 e^{-t} = \begin{pmatrix} -2 \\ -2 \\ 2 \end{pmatrix} e^{-t},
\]
\[
x_2(t) = (v_1 t + v_2) e^{-t} = \begin{pmatrix} -2t + 1 \\ -2t - 5 \\ 2t + 1 \end{pmatrix} e^{-t},
\]
\[
x_3(t) = \big( v_1 t^2/2 + v_2 t + v_3 \big) e^{-t} = \begin{pmatrix} -t^2 + t + 1 \\ -t^2 - 5t \\ t^2 + t \end{pmatrix} e^{-t}.
\]
Then the general solution is $x(t) = c_1 x_1(t) + c_2 x_2(t) + c_3 x_3(t)$.
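The chain computation in Example 2.4 can be reproduced mechanically. A sketch (Python with NumPy assumed; the entries of $A$ are those used in the example above):

```python
import numpy as np

# Generalized-eigenvector chain of Example 2.4: lam = -1 with
# multiplicity 3 and a single LI eigenvector.
A = np.array([[0.0, 1.0, 2.0],
              [-5.0, -3.0, -7.0],
              [1.0, 0.0, 0.0]])
lam = -1.0
N = A - lam * np.eye(3)                      # N = A + I

assert np.allclose(np.linalg.matrix_power(N, 3), 0)   # (2.14) for any v3

v3 = np.array([1.0, 0.0, 0.0])
v2 = N @ v3                                  # the chain (2.15)
v1 = N @ v2                                  # an ordinary eigenvector
assert np.allclose(N @ v1, 0)

# x3(t) = (v1 t^2/2 + v2 t + v3) e^{lam t} solves x' = A x.
def x3(t):
    return (v1 * t**2 / 2 + v2 * t + v3) * np.exp(lam * t)

t0, h = 0.4, 1e-6
deriv = (x3(t0 + h) - x3(t0 - h)) / (2 * h)
assert np.allclose(deriv, A @ x3(t0), atol=1e-5)
```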
3. Nonhomogeneous Linear Systems
In this section, we turn to the nonhomogeneous system
\[
x'(t) = A(t)x(t) + F(t), \quad t \in I, \tag{3.1}
\]
where $A(t)$ and $F(t)$ are continuous on $I$. It is known that the general solution of (3.1) is
\[
x(t) = c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t) + x_p(t), \tag{3.2}
\]
where $c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t)$ is the general solution of the homogeneous equation $x'(t) = A(t)x(t)$, and $x_p(t)$ is a particular solution.

We now briefly describe two approaches for determining $x_p(t)$.
3.1. Diagonalization. Suppose that in the system (3.1), $A$ is an $n \times n$ diagonalizable constant matrix. More precisely, the $n$ LI eigenvectors of $A$ form the columns of a matrix
\[
T = [v_1, v_2, \ldots, v_n]
\]
such that
\[
T^{-1} A T = D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n). \tag{3.3}
\]
We now make the change of variable $x = Ty$, which converts the system into
\[
y' = (T^{-1} A T) y + T^{-1} F = D y + H(t), \tag{3.4}
\]
where $H = T^{-1} F$.

Since $D$ is diagonal, the system is decoupled into
\[
y_j'(t) = \lambda_j y_j(t) + H_j(t), \quad 1 \leq j \leq n.
\]
Solving this sequence of first-order linear equations leads to
\[
y_j(t) = e^{\lambda_j t} \Big( \int e^{-\lambda_j s} H_j(s)\, ds + c_j \Big), \quad 1 \leq j \leq n.
\]
Finally, we transform the variable back to get the general solution $x(t) = Ty(t)$.
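For a constant diagonalizable $A$ and, for simplicity, a constant forcing $F$, the whole recipe fits in a few lines, since the integral above then has the closed form $y_j(t) = e^{\lambda_j t} y_j(0) + (e^{\lambda_j t} - 1) H_j / \lambda_j$ (valid when $\lambda_j \neq 0$). A sketch (Python with NumPy assumed; matrix and data are illustrative):

```python
import numpy as np

# Diagonalization recipe for x' = A x + F with constant A and F.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])       # eigenvalues 3 and -1 (non-defective)
F = np.array([1.0, -1.0])        # constant forcing, for simplicity
x0 = np.array([0.5, 0.0])

lam, T = np.linalg.eig(A)
H = np.linalg.solve(T, F)        # H = T^{-1} F
y0 = np.linalg.solve(T, x0)      # y(0) = T^{-1} x(0)

def x(t):
    # Closed-form decoupled solution, transformed back by T.
    y = np.exp(lam * t) * y0 + (np.exp(lam * t) - 1.0) / lam * H
    return T @ y

# Finite-difference check that x' = A x + F, and that the IC is met.
assert np.allclose(x(0.0), x0)
t0, h = 0.3, 1e-6
deriv = (x(t0 + h) - x(t0 - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t0) + F, atol=1e-5)
```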
Example 3.1. Read Example 1 on Page 433 of the textbook.
3.2. Method of variation-of-parameters. We consider the solution of the nonhomogeneous system
\[
x'(t) = A(t)x(t) + F(t), \quad t \in I. \tag{3.5}
\]
Suppose that $\{x_1, x_2, \ldots, x_n\}$ are $n$ LI solutions to the homogeneous system $x'(t) = A(t)x(t)$, which form the $n \times n$ fundamental matrix
\[
X(t) = [x_1, x_2, \ldots, x_n].
\]
Then the general solution to the homogeneous system $x'(t) = A(t)x(t)$ is
\[
x_c(t) = X(t)c.
\]
Following the method of variation of parameters, we vary the constant vector $c$ and look for a particular solution of the form
\[
x_p(t) = X(t)u(t),
\]
where $u(t)$ is an undetermined column $n$-vector function. Since $X(t)$ satisfies $X'(t) = A(t)X(t)$, we have
\[
x_p' = X'u + Xu' = A X u + X u' = A x_p + X u',
\]
so requiring $x_p$ to be a solution to (3.5) gives
\[
X u' = F \;\Longrightarrow\; u(t) = \int X^{-1}(t) F(t)\, dt.
\]
Therefore,
\[
x_p(t) = X(t) \int X^{-1}(t) F(t)\, dt,
\]
and the general solution to the nonhomogeneous system (3.5) is
\[
x(t) = X(t) \Big( \int X^{-1}(t) F(t)\, dt + c \Big). \tag{3.6}
\]
It can be viewed as a generalization of the variation-of-parameters formula for a single linear DE introduced in Chapter Two.
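Formula (3.6) also works when the integral must be done numerically. A sketch (Python with NumPy assumed), using the constant matrix and forcing of Example 3.2 below: build $X(t)$ from eigen-pairs, evaluate $u(t) = \int_0^t X^{-1}(s)F(s)\,ds$ by the trapezoidal rule, and check $x_p' = Ax_p + F$ by finite differences.

```python
import numpy as np

# Formula (3.6) in code, on the data of Example 3.2 (constant A, so the
# fundamental matrix comes from eigen-pairs).
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
F = lambda t: np.array([12.0 * np.exp(3.0 * t), 18.0 * np.exp(2.0 * t)])

lam, V = np.linalg.eig(A)
X = lambda t: V * np.exp(lam * t)        # columns e^{lam_k t} v_k

def u(t, steps=2000):
    # u(t) = integral of X^{-1}(s) F(s) over [0, t], trapezoidal rule.
    s = np.linspace(0.0, t, steps + 1)
    vals = np.array([np.linalg.solve(X(si), F(si)) for si in s])
    widths = np.diff(s)[:, None]
    return (0.5 * (vals[1:] + vals[:-1]) * widths).sum(axis=0)

x_p = lambda t: X(t) @ u(t)

# Check x_p' = A x_p + F by finite differences.
t0, h = 0.5, 1e-4
deriv = (x_p(t0 + h) - x_p(t0 - h)) / (2 * h)
assert np.allclose(deriv, A @ x_p(t0) + F(t0), atol=1e-2)
```

Note that scaling the eigenvector columns of $X$ leaves $x_p = X u$ unchanged, so the normalization chosen by the eigensolver does not matter.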
Example 3.2. Solve the IVP: $x' = Ax + F$, $x(0) = \begin{pmatrix} 3 \\ 0 \end{pmatrix}$, if
\[
A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix} \quad \text{and} \quad
F(t) = \begin{pmatrix} 12e^{3t} \\ 18e^{2t} \end{pmatrix}.
\]
Solution: We first solve the homogeneous system, and find that two LI solutions are
\[
x_1(t) = e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
x_2(t) = e^{5t} \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
\]
Hence the fundamental matrix is
\[
X(t) = \begin{pmatrix} e^{-t} & e^{5t} \\ -e^{-t} & 2e^{5t} \end{pmatrix}.
\]
It follows from the previous process that a particular solution is of the form $x_p = Xu$, where $u$ satisfies $Xu' = F$.

Using Cramer's rule yields
\[
u_1'(t) = 8e^{4t} - 6e^{3t}, \qquad u_2'(t) = 6e^{-3t} + 4e^{-2t}.
\]
Therefore,
\[
u_1(t) = 2e^{4t} - 2e^{3t}, \qquad u_2(t) = -2e^{-3t} - 2e^{-2t},
\]
where we set the integration constants to zero. Hence,
\[
u(t) = \begin{pmatrix} 2e^{4t} - 2e^{3t} \\ -2e^{-3t} - 2e^{-2t} \end{pmatrix},
\]
and
\[
x_p(t) = X(t)u(t) = \begin{pmatrix} -4e^{2t} \\ -2e^{2t} - 6e^{3t} \end{pmatrix}.
\]
Consequently, the general solution to the given nonhomogeneous system is
\[
x(t) = c_1 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 e^{5t} \begin{pmatrix} 1 \\ 2 \end{pmatrix}
- \begin{pmatrix} 4e^{2t} \\ 2e^{2t} + 6e^{3t} \end{pmatrix}.
\]
Imposing the initial condition $x(0) = (3, 0)^T$ gives $c_1 + c_2 - 4 = 3$ and $-c_1 + 2c_2 - 8 = 0$, so $c_1 = 2$ and $c_2 = 5$.
Exercise 3.1. Use the method of variation of parameters to find the general solution of the system
\[
x'(t) = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x(t) + \begin{pmatrix} 2e^{-t} \\ 3t \end{pmatrix}.
\]
See Example 3 on Page 437 of the textbook.