Differential Equations I MATB44H3F: Version September 15, 2011-1949
Contents

1 Introduction
    1.1 Preliminaries
    2.1 Separable Equations
    2.3 Integrating Factors
    2.5 Substitutions
        2.5.1 Bernoulli Equation
        2.5.2 Homogeneous Equations
    3.1 Orthogonal Trajectories
    3.3 Population Growth
    3.4 Predator-Prey Models
    3.6 Water Tanks
    3.8 Escape Velocity
    3.9 Planetary Motion
    9.5 Phase Portraits
        9.5.1 Real Distinct Eigenvalues
        9.5.2 Complex Eigenvalues
        9.5.3 Repeated Real Roots
    First Order ODEs
    Linear First Order ODEs
    Linear Systems
Chapter 1
Introduction
1.1 Preliminaries
    (dy/dx)^3 + d^4y/dx^4 + y = 2 sin(x) + cos^3(x),

    ∂^2z/∂x^2 + ∂^2z/∂y^2 = 0,

    y y' = 1.
    (dy/dx)^3 + d^4y/dx^4 + y = 2 sin(x) + cos^3(x)

Note that different solutions can have different domains. The set of all solutions to a de is called its general solution.
1.2

[Flow chart: Mathematical Model → Solution → Interpretation of Solution]
Solution. Let x(t) be the amount of radium present at time t in years. The rate at which the sample decays is proportional to the size of the sample at that time. Therefore we know that dx/dt = kx. This differential equation is our mathematical model. Using techniques we will study in this course (see 3.2, Chapter 3), we will discover that the general solution of this equation is given by the equation x = Ae^{kt}, for some constant A. We are told that x = 50 when t = 0 and so substituting gives A = 50. Thus x = 50e^{kt}. Solving for t gives t = ln(x/50)/k. With x(1600) = 25, we have 25 = 50e^{1600k}. Therefore,

    1600k = ln(1/2) = −ln(2),
giving us k = −ln(2)/1600. When x = 45, we have

    t = ln(x/50)/k = ln(45/50)/(−ln(2)/1600) = 1600 ln(10/9)/ln(2)
      ≈ 1600 × (0.105/0.693) ≈ 1600 × 0.152 ≈ 243.2.
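The arithmetic above can be checked mechanically; a minimal Python sketch of the same computation:

```python
import math

# Decay model x(t) = 50 e^{kt} with half-life 1600 years,
# recomputing the numbers from the example above.
k = -math.log(2) / 1600          # from 25 = 50 e^{1600 k}
t = math.log(45 / 50) / k        # time at which x = 45

print(round(t, 1))               # 243.2 (years)
```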
Additional conditions required of the solution (x(0) = 50 in the above example) are called boundary conditions, and a differential equation together with boundary conditions is called a boundary-value problem (BVP). Boundary conditions come in many forms. For example, y(6) = y(22); y'(7) = 3y(0); y(9) = 5 are all examples of boundary conditions. Boundary-value problems, like the one in the example, where the boundary condition consists of specifying the value of the solution at some point, are also called initial-value problems (IVP).
Example 1.5. An analogy from algebra is the equation

    y = √y + 2.    (1.1)

To solve it, we proceed as follows:

    y − 2 = √y,
    (y − 2)^2 = y,    (irreversible step)
    y^2 − 4y + 4 = y,
    y^2 − 5y + 4 = 0,
    (y − 1)(y − 4) = 0.

Thus, the set {1, 4} contains all the solutions. We quickly see that y = 4 satisfies Equation (1.1) because 4 = √4 + 2 = 2 + 2 = 4, but y = 1 does not, since √1 + 2 = 3 ≠ 1.
Chapter 2
2.1 Separable Equations
A first order ode has the form F(x, y, y') = 0. In theory, at least, the methods of algebra can be used to write it in the form y' = G(x, y). If G(x, y) can be factored to give G(x, y) = M(x)N(y), then the equation is called separable. To solve the separable equation y' = M(x)N(y), we rewrite it in the form f(y)y' = g(x). Integrating both sides gives

    ∫ f(y) y' dx = ∫ g(x) dx,

where, by the substitution rule,

    ∫ f(y) y' dx = ∫ f(y) (dy/dx) dx = ∫ f(y) dy.
Example 2.1. Solve 2xy + 6x + (x^2 − 4)y' = 0.

Solution. We have

    y'/(y + 3) = −2x/(x^2 − 4),    x ≠ ±2,
    ln(|y + 3|) = −ln(|x^2 − 4|) + C,
    ln(|y + 3|) + ln(|x^2 − 4|) = C,

where C is an arbitrary constant. Then

    |y + 3| |x^2 − 4| = e^C,
    (y + 3)(x^2 − 4) = A,
    y + 3 = A/(x^2 − 4),

where A = ±e^C. (Taking A = 0 gives the additional solution y = −3.)
Example. Solve y y' = −sin(x), where y(0) = 1.

Solution. We have

    y dy = −sin(x) dx,
    ∫ y dy = −∫ sin(x) dx,
    y^2/2 = cos(x) + C1,
    y = √(2cos(x) + C2).

With y(0) = 1, we get √(2 + C2) = 1, so 2 + C2 = 1 and C2 = −1. Therefore, y = √(2cos(x) − 1) on the domain (−π/3, π/3), since we need cos(x) ≥ 1/2 and cos(±π/3) = 1/2.
Alternatively, we can solve the initial-value problem with definite integrals:

    y dy = −sin(x) dx,
    ∫_1^y y dy = −∫_0^x sin(x) dx,
    y^2/2 − 1/2 = cos(x) − cos(0),
    y^2/2 − 1/2 = cos(x) − 1,
    y^2/2 = cos(x) − 1/2,
    y^2 = 2cos(x) − 1,
    y = √(2cos(x) − 1),

as before.
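A quick numerical sanity check of the solution just found, using a central difference for y':

```python
import math

def y(x):
    # Solution of y y' = -sin(x), y(0) = 1, on (-pi/3, pi/3).
    return math.sqrt(2 * math.cos(x) - 1)

# y(0) should be 1, and y * y' should equal -sin(x).
x, h = 0.5, 1e-6
yprime = (y(x + h) - y(x - h)) / (2 * h)   # central difference
assert abs(y(0) - 1) < 1e-12
assert abs(y(x) * yprime + math.sin(x)) < 1e-6
```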
Solution. We have

    (y^4 + 1) y' = x^2 − 1,
    y^5/5 + y = x^3/3 − x + C,

where C is an arbitrary constant. This is an implicit solution which we cannot easily solve explicitly for y in terms of x.
2.2 Exact Equations

Using algebra, any first order equation can be written in the form F(x, y) dx + G(x, y) dy = 0 for some functions F(x, y), G(x, y).

Definition
An expression of the form F(x, y) dx + G(x, y) dy is called a (first-order) differential form. A differential form F(x, y) dx + G(x, y) dy is called exact if there exists a function g(x, y) such that dg = F dx + G dy.

If ω = F dx + G dy is an exact differential form, then ω = 0 is called an exact differential equation. Its solution is g = C, where ω = dg.
Recall the following useful theorem from MATB42:

Theorem 2.4
If F and G are functions that are continuously differentiable throughout a simply connected region, then F dx + G dy is exact if and only if ∂G/∂x = ∂F/∂y.
Example 2.5. Consider (3x^2y^2 + x^2) dx + (2x^3y + y^2) dy = 0. Let

    ω = (3x^2y^2 + x^2) dx + (2x^3y + y^2) dy,

with F = 3x^2y^2 + x^2 and G = 2x^3y + y^2. Then

    ∂G/∂x = 6x^2y = ∂F/∂y,

so ω is exact. We therefore seek a g(x, y) with

    ∂g/∂x = 3x^2y^2 + x^2,    (2.1a)
    ∂g/∂y = 2x^3y + y^2.      (2.1b)

Integrating Equation (2.1a) with respect to x gives

    g = x^3y^2 + x^3/3 + h(y).    (2.2)

Differentiating Equation (2.2) with respect to y and comparing with Equation (2.1b) gives

    ∂g/∂y = 2x^3y + dh/dy,
    2x^3y + y^2 = 2x^3y + dh/dy,
    dh/dy = y^2,
    h(y) = y^3/3 + C.

Therefore,

    g = x^3y^2 + x^3/3 + y^3/3 + C.

The equation ω = 0, i.e., dg = 0, implies x^3y^2 + x^3/3 + y^3/3 + C = C', i.e.,

    x^3y^2 + x^3/3 + y^3/3 = D.
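If sympy is available, both the exactness test of Theorem 2.4 and the potential g found above can be verified symbolically:

```python
import sympy as sp

x, y = sp.symbols("x y")
F = 3 * x**2 * y**2 + x**2       # coefficient of dx
G = 2 * x**3 * y + y**2          # coefficient of dy

# Exactness test from Theorem 2.4: dG/dx == dF/dy.
assert sp.diff(G, x) == sp.diff(F, y)

# g is a potential: dg/dx = F and dg/dy = G.
g = x**3 * y**2 + x**3 / 3 + y**3 / 3
assert sp.simplify(sp.diff(g, x) - F) == 0
assert sp.simplify(sp.diff(g, y) - G) == 0
```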
Example 2.6. Solve (3x^2 + 2xy^2) dx + 2x^2y dy = 0, where y(2) = 3.

Solution. We have

    ∫ (3x^2 + 2xy^2) dx = x^3 + x^2y^2 + C,

and ∂(x^3 + x^2y^2)/∂y = 2x^2y, so g = x^3 + x^2y^2 and the general solution is x^3 + x^2y^2 = D. With y(2) = 3, we get D = 8 + 36 = 44, so

    x^2y^2 = 44 − x^3,
    y = √(44 − x^3)/x.

Alternatively, we can find g by integrating ω along the path from (x0, y0) to (x0, y) to (x, y) shown in Figure 2.1, parameterizing the piece y = s(x) by γ(x) = (x, s(x)), along which F dx + G s'(x) dx = 0 dx = 0 on a solution curve. This gives

    0 = ∫_{y0}^{y} G(x0, y) dy + ∫_{x0}^{x} F(x, y) dx.

Figure 2.1: The graph of y = s(x) with the path connecting (x0, y0) to (x, y).

In our case,

    0 = ∫_3^y 2 · 2^2 · y dy + ∫_2^x (3x^2 + 2xy^2) dx
      = 4y^2 − 4(3^2) + x^3 + x^2y^2 − 2^3 − 2^2y^2
      = 4y^2 − 36 + x^3 + x^2y^2 − 8 − 4y^2,

finally giving us x^3 + x^2y^2 = 44, which agrees with our previous answer.
Remark. Separable equations are actually a special case of exact equations, that is,

    f(y)y' = g(x)  ⟹  −g(x) dx + f(y) dy = 0  ⟹  ∂f(y)/∂x = 0 = ∂(−g(x))/∂y,

so the form is exact.
2.3 Integrating Factors
Consider the equation

    (y/x^2 + 1) dx + (1/x) dy = 0.

We see that it is not exact, since

    ∂(1/x)/∂x = −1/x^2 ≠ 1/x^2 = ∂(y/x^2 + 1)/∂y.

However, multiplying the equation by x^2 gives the exact equation

    (y + x^2) dx + x dy = 0,

i.e., d(xy) + x^2 dx = 0, whose solution is xy + x^3/3 = C, i.e.,

    y = C/x − x^2/3.

A function which, like x^2 here, converts an equation into an exact one is called an integrating factor.
There is, in general, no algorithm for finding integrating factors. But the
following may suggest where to look. It is important to be able to recognize
common exact forms:
    x dy + y dx = d(xy),
    (x dy − y dx)/x^2 = d(y/x),
    (x dx + y dy)/(x^2 + y^2) = d( ln(x^2 + y^2)/2 ),
    (x dy − y dx)/(x^2 + y^2) = d( tan^{-1}(y/x) ),
    x^{a−1} y^{b−1} (ay dx + bx dy) = d(x^a y^b).
For example, dividing y dx − x dy = 0 by x^2 gives

    (y dx − x dy)/x^2 = 0,
    −d(y/x) = 0.

Therefore,

    y/x = C,    x ≠ 0,

where C is an arbitrary constant. Additionally, y = 0 on the domain ℝ is a solution to the original equation.
Solution. We have

    (y dx − x dy)/(x^2 + y^2) − dx = 0,
    −tan^{-1}(y/x) − x = C,
    tan^{-1}(y/x) = −C − x,
    tan^{-1}(y/x) = D − x,    (D = −C)
    y/x = tan(D − x),
    y = x tan(D − x),

where D − x ≠ (2n + 1)π/2 for any integer n. Also, since the derivation of the solution is based on the assumption that x ≠ 0, it is unclear whether or not 0 should be in the domain, i.e., does y = x tan(D − x) satisfy the equation when x = 0? The equation is y − xy' − (x^2 + y^2) = 0; if x = 0 and y = x tan(D − x), then y = 0 and the equation is satisfied. Thus, 0 is in the domain.
Proposition 2.10
Let ω = dg. Then for any function P: ℝ → ℝ, P(g)ω is exact.

Proof. Let Q be an antiderivative of P, i.e., Q' = P. Then P(g)ω = P(g) dg = d(Q(g)).
Therefore, we can multiply the equation by any function of xy without disturbing the exactness of its first two terms. Making the last term into a function of y alone will make it exact. So we multiply by (xy)^{−2}, giving us

    (y dx + x dy)/(x^2y^2) − (1/y) dy = 0  ⟹  −1/(xy) − ln(|y|) = C,

where C is an arbitrary constant. Note that y = 0 on the domain ℝ is also a solution.
Given

    M dx + N dy = 0,    (*)

we want to find I such that IM dx + IN dy is exact. If so, then

    ∂(IN)/∂x = ∂(IM)/∂y,

i.e.,

    I_x N + I N_x = I_y M + I M_y.    (**)

If we can find any particular solution I(x, y) of (**), then we can use it as an integrating factor to find the general solution of (*). Unfortunately, (**) is usually even harder to solve than (*), but as we shall see, there are times when it is easier.
Example 2.12. We could look for an I having only x's and no y's, i.e., with I_y = 0. Then (**) becomes

    I_x/I = (M_y − N_x)/N.

If (M_y − N_x)/N happens to be a function of x alone, then

    I = e^{∫ (M_y − N_x)/N dx}

works. Similarly, if (N_x − M_y)/M happens to be a function of y alone, then

    I = e^{∫ (N_x − M_y)/M dy}

works.
2.4 Linear First Order Equations

Consider the linear first order equation y' + P(x)y = Q(x), written as (P(x)y − Q(x)) dx + dy = 0, so that M = P(x)y − Q(x) and N = 1. We use the de (**) for the integrating factor I(x, y). The equation IM dx + IN dy = 0 is exact if

    I_x N + I N_x = I_y M + I M_y.

In our case,

    I_x + 0 = I_y (P(x)y − Q(x)) + I P(x).    (***)

We need only one solution, so we look for one of the form I(x), i.e., with I_y = 0. Then (***) becomes

    dI/dx = I P(x).
This is separable. So

    dI/I = P(x) dx,
    ln(|I|) = ∫ P(x) dx + C,
    |I| = e^C e^{∫ P(x) dx},
    I = ±e^C e^{∫ P(x) dx}.

We conclude that I = e^{∫ P(x) dx} works, and note that e^{∫ P(x) dx} > 0. (For example, if P(x) = −1/x, then I = e^{−∫ (1/x) dx} = e^{−ln(|x|)} = 1/|x|, and either of ±1/x may be used.) Multiplying y' + P(x)y = Q(x) by I makes the left side the derivative of Iy, so

    (y e^{∫ P(x) dx})' = Q(x) e^{∫ P(x) dx}.

Therefore,

    y e^{∫ P(x) dx} = ∫ Q(x) e^{∫ P(x) dx} dx + C,
    y = e^{−∫ P(x) dx} ∫ Q(x) e^{∫ P(x) dx} dx + C e^{−∫ P(x) dx}.
Solution. What should P(x) be? To find it, we put the equation in standard form, giving us

    y' + (2/x) y = 4x.

Therefore, P(x) = 2/x. Immediately, we have

    I = e^{∫ (2/x) dx} = e^{ln(x^2)} = x^2.

Multiplying by I gives

    (x^2 y)' = 4x^3,
    x^2 y = x^4 + C,
    y = x^2 + C/x^2,

where C is an arbitrary constant.
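The general solution just obtained can be confirmed symbolically (a sympy sketch; the residual of the equation should vanish for every C):

```python
import sympy as sp

x, C = sp.symbols("x C")
y = x**2 + C / x**2              # general solution found above

# It should satisfy y' + (2/x) y = 4x identically in C.
residual = sp.expand(sp.diff(y, x) + (2 / x) * y - 4 * x)
assert residual == 0
```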
A linear equation for x as a function of y is handled the same way. For example, consider

    dx/dy + 2x = e^{−y},

where I = e^{∫ 2 dy} = e^{2y}. Therefore,

    e^{2y} dx/dy + 2x e^{2y} = e^{y},
    x e^{2y} = e^{y} + C,
    x = e^{−y} + C e^{−2y},

where C is an arbitrary constant.
2.5 Substitutions

In many cases, equations can be put into one of the standard forms discussed above (separable, linear, etc.) by a substitution.

Example 2.16. Solve y'' − 2y' = 5.

Solution. This is a first order linear equation for y'. Let u = y'. Then the equation becomes

    u' − 2u = 5.

The integrating factor is then I = e^{−∫ 2 dx} = e^{−2x}. Thus,

    (u e^{−2x})' = 5 e^{−2x},
    u e^{−2x} = −(5/2) e^{−2x} + C,
    u = −5/2 + C e^{2x},

and integrating u = y' gives

    y = −(5/2)x + C1 e^{2x} + C2,

where C1 and C2 are arbitrary constants.
2.5.1 Bernoulli Equation

The Bernoulli equation has the form y' + P(x)y = Q(x)y^n. Dividing by y^n and substituting z = y^{1−n}, so that z' = (1 − n)y^{−n}y', turns it into

    z'/(1 − n) + P(x)z = Q(x),

which is linear in z.
Example 2.17. Solve y' + xy = xy^3.

Solution. Dividing by y^3 and letting z = y^{−2}, so that z' = −2y^{−3}y', the equation becomes

    z' − 2xz = −2x,

with I = e^{−∫ 2x dx} = e^{−x^2}. Thus,

    e^{−x^2} z' − 2x e^{−x^2} z = −2x e^{−x^2},
    e^{−x^2} z = e^{−x^2} + C,
    z = 1 + C e^{x^2},

where C is an arbitrary constant. But z = y^{−2}. So

    y = ±1/√(1 + C e^{x^2}).

The domain is ℝ for C ≥ 0 and |x| < √(ln(−1/C)) for −1 < C < 0 (for C ≤ −1 the expression 1 + Ce^{x^2} is never positive). An additional solution is y = 0 on ℝ.
2.5.2 Homogeneous Equations

A function F(x, y) is called homogeneous of degree n if F(λx, λy) = λ^n F(x, y) for all λ. For example:

    3x^6 + 5x^3y^2      not homogeneous,
    x√(x^2 + y^2)       homogeneous of degree 2,
    sin(y/x)            homogeneous of degree 0,
    1/(x + y)           homogeneous of degree −1.
Proposition 2.19
If F is homogeneous of degree 0, then F is a function of y/x.

Proof. We have F(λx, λy) = F(x, y) for all λ. Let λ = 1/x. Then F(x, y) = F(1, y/x).
Example 2.20. Here are some examples of writing a homogeneous function of degree 0 as a function of y/x.

    √(5x^2 + y^2)/x = √(5 + (y/x)^2),
    (y^3 + x^2y)/(x^2y + x^3) = ((y/x)^3 + (y/x))/((y/x) + 1).
Consider M(x, y) dx + N(x, y) dy = 0. Suppose M and N are both homogeneous and of the same degree. Then

    dy/dx = −M/N,

and −M(x, y)/N(x, y) is homogeneous of degree 0, so

    −M(x, y)/N(x, y) = R(y/x)

for some function R. Let v = y/x, so that y = vx and dy/dx = v + x dv/dx. Then

    v + x dv/dx = R(v).

Therefore,

    x dv/dx = R(v) − v,
    dv/(R(v) − v) = dx/x,

which is separable.
x
where the sign of A is the opposite of the sign of x. Therefore, the general
1/3
solution is y = x (3 ln(Ax)) , where A is a nonzero constant. Every A > 0
yields a solution on the domain (0, ); every A < 0 yields a solution on (, 0).
In addition, there is the solution y = 0 on the domain R.
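A hypothetical illustration of the v = y/x method (this particular equation is not one of the notes' examples): for dy/dx = 1 + y/x we have R(v) = 1 + v, so dv/(R(v) − v) = dv = dx/x, giving v = ln x + C and y = x ln x + Cx. A numeric check:

```python
import math

def y(x, C=2.0):
    # Claimed general solution of dy/dx = 1 + y/x on (0, inf).
    return x * math.log(x) + C * x

x, h = 3.0, 1e-6
yprime = (y(x + h) - y(x - h)) / (2 * h)     # central difference
assert abs(yprime - (1 + y(x) / x)) < 1e-6
```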
2.5.3

If x does not appear explicitly in a second order equation, i.e., the equation has the form

    F(d^2y/dx^2, dy/dx, y) = 0,

we can reduce its order: let v = dy/dx and regard v as a function of y, so that

    d^2y/dx^2 = dv/dx = (dv/dy)(dy/dx) = v dv/dy,

and the equation becomes

    F(v dv/dy, v, y) = 0.

For example, for y'' = 4(y')^{3/2} y, setting v = y' gives

    v dv/dy = 4v^{3/2} y,
    dv/√v = 4y dy,    v ≥ 0,
    2√v = 2y^2 + C1,
    √v = y^2 + C2,
    dy/dx = v = (y^2 + C2)^2,
    dy/(y^2 + C2)^2 = dx,

and integrating the left side gives

    x = (1/(2 C2^{3/2})) tan^{-1}(y/√C2) + y/(2 C2 (y^2 + C2)) + C3,   C2 > 0,
    x = −1/(3y^3) + C3,                                                C2 = 0,
    x = y/(2 C2 (y^2 + C2))
        − (1/(4 (−C2)^{3/2})) ln|(y − √(−C2))/(y + √(−C2))| + C3,      C2 < 0.
1
u
+ u0 .
2
x
x
2.5. SUBSTITUTIONS
23
will give us some cancellation. Thus, u0 = s0 ex + sex and our equation becomes
x
x
xs0 ex +
xse
+ s2 e2x
xse
= e2x ,
xs0 + s2 ex = ex ,
xs0 = ex 1 s2 ,
ex
s0
=
,
1 s2
x
Z x
1 + s
1
e
=
ln
dx.
2
1 s
x
Working our way back through the subsitutions we find that s = zxex so our
solution becomes
Z x
1 + zxex
e
1
=
ln
dx.
2
1 zxex
x
Using algebra, we could solve this equation for z in terms of x and then integrate
the result to get v which then determines y = ev as a function of x. The
algebraic solution for z is messy and we will not be able to find a closed form
expression for the antidervative v, so we will abandon the calculation at this
point. In practical applications one would generally use power series techniques
on equations of this form and calculate as many terms of the Taylor series of
the solution as are needed to give the desired accuracy.
Consider an equation of the form

    (a1 x + b1 y + c1) dx + (a2 x + b2 y + c2) dy = 0

in the degenerate case where

    | a1  b1 |
    | a2  b2 | = 0,

so that (a2, b2) = m(a1, b1) for some m. Substitute z = a1 x + b1 y, so that dz = a1 dx + b1 dy, i.e.,

    dy = (dz − a1 dx)/b1.

Then

    (z + c1) dx + (mz + c2)(dz − a1 dx)/b1 = 0,
    (z + c1 − (a1/b1)(mz + c2)) dx + ((mz + c2)/b1) dz = 0,
    b1 dx = −(mz + c2)/(z + c1 − (a1/b1)(mz + c2)) dz,

which is separable.
Chapter 3

3.1 Orthogonal Trajectories
Given a family of curves f(x, y, C) = 0, one curve for each value of C, differentiating with respect to x gives

    0 = ∂f/∂x (x, y, C) + ∂f/∂y (x, y, C) dy/dx.

Eliminating C between this equation and f(x, y, C) = 0 yields a first order de, where the solution includes the initial family of curves.

Example. Find the orthogonal trajectories of the family y = C sin(x).
Solution. Figure 3.2 shows the plot of y = C sin(x) with several values of C. We have C = y/sin(x), so it follows that

    dy/dx = C cos(x) = y (cos(x)/sin(x)) = y cot(x).

Two curves meet at right angles if the product of their slopes is −1, i.e., m_new = −1/m_old. So orthogonal trajectories satisfy the equation

    dy/dx = −1/(y cot(x)) = −tan(x)/y.
Figure 3.2: A plot of the family y = C sin(x) (in black) and its orthogonal trajectories (in gray).
This is a separable equation, and we quickly have

    y dy = −tan(x) dx,
    y^2/2 = ln(|cos(x)|) + C,
    y = ±√(2 ln(|cos(x)|) + C1),

where C1 = 2C is also an arbitrary constant.
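At any point (x, y), the family has slope y cot(x) and the trajectories have slope −tan(x)/y, so the product of the slopes should be exactly −1; a one-line numeric check:

```python
import math

x, y = 0.7, 1.3                          # an arbitrary sample point
m_old = y * math.cos(x) / math.sin(x)    # slope of y = C sin(x) through (x, y)
m_new = -math.tan(x) / y                 # slope of the orthogonal trajectory
assert abs(m_old * m_new + 1) < 1e-12    # product is -1
```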
3.2 Exponential Growth and Decay

There are many situations where the rate of change of some quantity x is proportional to the amount of that quantity, i.e., dx/dt = kx for some constant k. The general solution is x = Ae^{kt}.

1. Radium gradually changes into uranium. If x(t) represents the amount of radium present at time t, then dx/dt = kx. In this case, k < 0.

2. Bank interest. The amount of interest you receive on your bank account is proportional to your balance. In this case, k > 0. Actually, you do not do quite this well at the bank because they compound only daily rather than continuously, e.g., if we measure time in years, our model predicts that x = x0 e^{kt}, where x0 is your initial investment and k is the interest rate. Actually, you get

    x = x0 (1 + k/n)^{nt},

where n is the number of compounding periods per year.
Solution. Let x(t) be the amount of radium present at time t in years. Then we know that dx/dt = kx, so x = x0 e^{kt}. With x(0) = 50, we quickly have x = 50e^{kt}. Solving for t gives t = ln(x/50)/k. With x(1600) = 25, we have 25 = 50e^{1600k}. Therefore,

    1600k = ln(1/2) = −ln(2),

giving us k = −ln(2)/1600. When x = 45, we have

    t = ln(x/50)/k = ln(45/50)/(−ln(2)/1600) = 1600 ln(10/9)/ln(2)
      ≈ 1600 × (0.105/0.693) ≈ 1600 × 0.152 ≈ 243.2.
3.3 Population Growth

With a limiting population L (the logistic model dN/dt = kN(L − N)), the population at time t works out to

    N(t) = L/(1 + (L/N0 − 1) e^{−kLt}),

where N0 is the initial population.
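A numeric sketch (with illustrative parameter values) checking that this formula satisfies the logistic equation:

```python
import math

L, N0, k = 100.0, 10.0, 0.002    # illustrative values, not from the notes

def N(t):
    return L / (1 + (L / N0 - 1) * math.exp(-k * L * t))

# N should satisfy dN/dt = k N (L - N).
t, h = 5.0, 1e-6
dNdt = (N(t + h) - N(t - h)) / (2 * h)   # central difference
assert abs(dNdt - k * N(t) * (L - N(t))) < 1e-5
assert abs(N(0) - N0) < 1e-12
```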
3.4 Predator-Prey Models
Example 3.4 (Predator-Prey). Consider a land populated by foxes and rabbits, where the foxes prey upon the rabbits. Let x(t) and y(t) be the number
of rabbits and foxes, respectively, at time t. In the absence of predators, at any
time, the number of rabbits would grow at a rate proportional to the number
of rabbits at that time. However, the presence of predators also causes the
number of rabbits to decline in proportion to the number of encounters between
a fox and a rabbit, which is proportional to the product x(t)y(t). Therefore,
dx/dt = ax − bxy for some positive constants a and b. For the foxes, the presence of other foxes represents competition for food, so the number declines proportionally to the number of foxes but grows proportionally to the number of encounters. Therefore dy/dt = −cy + dxy for some positive constants c and d. The system
    dx/dt = ax − bxy,
    dy/dt = −cy + dxy

is our mathematical model.
If we want to find the function y(x), which gives the way that the number
of foxes and rabbits are related, we begin by dividing to get the differential
equation
    dy/dx = (−cy + dxy)/(ax − bxy),

with a, b, c, d, x(t), y(t) positive.
This equation is separable and can be rewritten as

    ((a − by)/y) dy = ((−c + dx)/x) dx.

Integrating gives

    a ln(y) − by = −c ln(x) + dx + C.    (3.1)
Figure 3.3: A typical solution of the Predator-Prey model with a = 9.4, b = 1.58, c = 6.84, d = 1.3, and k = 7.54.
Beginning at a point such as a, where there are few rabbits and few foxes, the
fox population does not initially increase much due to the lack of food, but with
so few predators, the number of rabbits multiplies rapidly. After a while, the
point b is reached, at which time the large food supply causes the rapid increase
in the number of foxes, which in turn curtails the growth of the rabbits. By
the time point c is reached, the large number of predators causes the number of
rabbits to decrease. Eventually, point d is reached, where the number of rabbits
has declined to the point where the lack of food causes the fox population to
decrease, eventually returning the situation to point a.
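Rearranging Equation (3.1), the quantity E = a ln(y) − by + c ln(x) − dx is constant along solutions. A short Runge-Kutta simulation (using the parameters quoted for Figure 3.3, with an arbitrary starting point) can confirm that E barely drifts:

```python
import math

a, b, c, d = 9.4, 1.58, 6.84, 1.3        # parameters from Figure 3.3

def f(x, y):
    return a * x - b * x * y, -c * y + d * x * y

def E(x, y):
    return a * math.log(y) - b * y + c * math.log(x) - d * x

x, y, h = 4.0, 5.0, 1e-4                 # arbitrary start, RK4 step size
E0 = E(x, y)
for _ in range(10000):                   # integrate one unit of time
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
assert abs(E(x, y) - E0) < 1e-6          # E is (numerically) conserved
```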
3.5 Newton's Law of Cooling

Newton's law of cooling states that the rate of change of the temperature T of an object is proportional to the difference between T and the temperature Ts of its surroundings:

    dT/dt = k(Ts − T),    k > 0.

As T changes, the object gives or takes heat from its surroundings. We now have two cases.

Case 1: Ts is constant. This occurs either because the heat given off is transported elsewhere or because the surroundings are so large that the contribution is negligible. In such a case, we have

    dT/dt + kT = kTs,

which is linear, and the solution is

    T = T0 e^{−kt} + Ts (1 − e^{−kt}).
Case 2: The system is closed. All heat lost by the object is gained by its surroundings. We need to find Ts as a function of T. We have

    change in temperature = heat gained / (weight × specific heat capacity).

Therefore,

    (T − T0) wc = −(Ts − Ts0) ws cs,

i.e.,

    Ts = Ts0 + (wc/(ws cs))(T0 − T),

where w, c refer to the object and ws, cs to the surroundings. Then

    dT/dt + k(1 + wc/(ws cs)) T = k(Ts0 + (wc/(ws cs)) T0).

This is linear with different constants than the previous case. The solution is

    T = T0 + ( (Ts0 + (wc/(ws cs)) T0)/(1 + wc/(ws cs)) − T0 ) (1 − e^{−k(1 + wc/(ws cs))t}).
3.6 Water Tanks

Example. A tank contains 10 ℓ of water with 20 kg of dissolved salt. Fresh water is poured in at 3 ℓ/min while the well-mixed solution flows out at 2 ℓ/min, as in Figure 3.4. How much salt is in the tank after 5 minutes?

Figure 3.4: Fresh water is being poured into the tank as the well-mixed solution is flowing out.

Solution. Let Q(t) be the amount (in kilograms) of salt in the tank at time t (in minutes). The volume of water at time t is

    10 + 3t − 2t = 10 + t.

The concentration at time t is given by

    amount of salt/volume = Q/(10 + t)

kg per litre. Then
    dQ/dt = −(rate at which salt is leaving) = −(Q/(10 + t)) · 2 = −2Q/(10 + t).

Thus, the solution to the problem is the solution of

    dQ/dt = −2Q/(10 + t),

evaluated at t = 5. We see that it is simply a separable equation.
33
|Q| = A |10 + t|
Q = A (10 + t)
A
(10 + 0)
= 20 =
A
= 20 = A = 2000.
100
2000
2.
(10 + t)
2000
80
80
=
=
8.89.
152
225
9
Therefore, after 5 minutes, the tank will contain approximately 8.89 kg of salt.
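The closed-form answer in one line of Python:

```python
# Salt remaining after t minutes: Q(t) = 2000 / (10 + t)^2.
Q = lambda t: 2000 / (10 + t) ** 2
assert Q(0) == 20                 # initial amount of salt, in kg
print(round(Q(5), 2))             # 8.89 (kg)
```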
Additional conditions desired of the solution (Q(0) = 20 in the above example) are called boundary conditions, and a differential equation together with boundary conditions is called a boundary-value problem (BVP).
Figure 3.5: The plot of the amount of salt Q(t) in the tank at time t shows that salt leaves more slowly as time goes on.
3.7 Falling Objects

Let x(t) be the height at time t, measured positively in the downward direction. If we consider only gravity, then

    a = d^2x/dt^2 = g.

If air resistance proportional to the nth power of the speed is included, then

    d^2x/dt^2 = g − (k/m)(dx/dt)^n.

Consider the case n = 1. Letting v = dx/dt, we have the linear equation

    dv/dt + (k/m)v = g,

with integrating factor

    I = e^{∫ (k/m) dt} = e^{kt/m}.
Therefore,

    (dv/dt) e^{kt/m} + (k/m) v e^{kt/m} = g e^{kt/m},
    e^{kt/m} v = (gm/k) e^{kt/m} + C,
    v = mg/k + C e^{−kt/m}.

With v(0) = v0, we get

    v0 = mg/k + C  ⟹  C = v0 − mg/k,

so

    dx/dt = v = mg/k + (v0 − mg/k) e^{−kt/m}.

Note that

    x − x0 = ∫_0^t v dt = (mg/k) t + (m/k)(v0 − mg/k)(1 − e^{−kt/m}),

i.e.,

    x = x0 + (mg/k) t + (m/k)(v0 − mg/k)(1 − e^{−kt/m}).
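A numeric check of the velocity formula against the de it should satisfy (parameter values are illustrative):

```python
import math

m, k, g, v0 = 70.0, 10.0, 9.8, 0.0       # illustrative values

def v(t):
    return m * g / k + (v0 - m * g / k) * math.exp(-k * t / m)

# v should satisfy dv/dt = g - (k/m) v with v(0) = v0.
t, h = 2.0, 1e-6
dvdt = (v(t + h) - v(t - h)) / (2 * h)   # central difference
assert abs(v(0) - v0) < 1e-12
assert abs(dvdt - (g - (k / m) * v(t))) < 1e-6
```

As t → ∞, v(t) → mg/k, the terminal velocity.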
3.8 Escape Velocity

Let x(t) be the height of an object at time t measured positively upward (from the centre of the earth). Newton's Law of Gravitation states that

    F = ma = −kmM/x^2,

where m is the mass of the object, M is the mass of the Earth, and k > 0. Note that

    x'' = −kM/x^2.

We define the constant g, known as the acceleration due to gravity, by x'' = −g when x = R, where R is the radius of the earth. So k = gR^2/M. Therefore x'' = −gR^2/x^2. Since t does not appear explicitly, letting v = dx/dt, so that
    d^2x/dt^2 = (dv/dx)(dx/dt) = v (dv/dx),

will reduce it to first order. Thus,

    v dv/dx = −gR^2/x^2,
    v dv = −(gR^2/x^2) dx,
    ∫ v dv = −∫ (gR^2/x^2) dx,
    v^2/2 = gR^2/x + C.

If v = 0 when x = x_max, then C = −gR^2/x_max and

    v^2 = 2gR^2 (1/x − 1/x_max).
Suppose we leave the surface of the Earth starting with velocity V. How far will we get? Setting v = V when x = R gives

    V^2 = 2gR^2 (1/R − 1/x_max),
    V^2/(2R^2 g) = 1/R − 1/x_max,
    1/x_max = 1/R − V^2/(2R^2 g) = (2Rg − V^2)/(2R^2 g),
    x_max = 2R^2 g/(2Rg − V^2).

That is, if V^2 < 2Rg, we will get as far as

    2R^2 g/(2Rg − V^2)

and then fall back. To escape, we need V ≥ √(2gR).
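Plugging in rough values for g and R gives the familiar figure of about 11.2 km/s:

```python
import math

g = 9.8          # m/s^2
R = 6.371e6      # radius of the Earth in metres (approximate)

v_escape = math.sqrt(2 * g * R)
print(v_escape)  # about 1.12e4 m/s, i.e., roughly 11.2 km/s
```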
3.9 Planetary Motion

Work in polar coordinates, where the radial component of the acceleration is

    a_r = a_x cos(θ) + a_y sin(θ).

Let x = r cos(θ) and y = r sin(θ). Then

    x' = r' cos(θ) − r sin(θ) θ',

and differentiating once more (and treating y similarly) gives

    a_r = r'' − r(θ')^2.

For an inverse-square attraction, the equations of motion become

    r'' − r(θ')^2 = −k/r^2,    (*)
    2r'θ' + rθ'' = 0.          (**)
We want the path of the planet, so we want an equation involving r and θ. Equation (**) gives (r^2 θ')' = r(2r'θ' + rθ'') = 0, so r^2 θ' = h is constant. Therefore, we eliminate t between Equation (*) and r^2 θ' = h. Thus,

    r' = (dr/dθ) θ' = (dr/dθ)(h/r^2),
    r'' = (d^2r/dθ^2)(h/r^2) θ' − 2 (dr/dθ)^2 (h/r^3) θ'
        = (h^2/r^2) ( (1/r^2)(d^2r/dθ^2) − (2/r^3)(dr/dθ)^2 ).
Note that the area swept out from angle 0 to angle θ is

    A = ∫_0^θ ∫_0^{r(θ)} r dr dθ = ∫_0^θ (r(θ)^2/2) dθ,

so dA/dθ = r(θ)^2/2. So in our case, A' = (r^2/2) θ' = h/2. Therefore, A = ht/2, which is Kepler's Second Law (area swept out is proportional to time).
39
d2
r3 d
r2 d2
h2
h2 u 2
Therefore, Equation () implies that
2
h
u
k
h u
r = 2,
d2
r2
r
d2 u
h2 u2 2 u3 h2 = ku2 ,
d
k
d2 u
+ u = 2.
2
d
h
2 2d
u 6= 0,
Therefore,
k
k
u = B cos( ) + 2 = C1 sin() + C2 cos() + 2 ,
|{z}
h
h
1/r
where
r=
1
1
.
=
k/h2 + B cos( )
k/h2 + C1 sin() + C2 cos()
Therefore, multiplying through by r and using r sin(θ) = y, r cos(θ) = x,

    (k/h^2) r + C1 r sin(θ) + C2 r cos(θ) = 1,
    (k/h^2) r = 1 − C1 y − C2 x,
    (k^2/h^4)(x^2 + y^2) = (1 − C1 y − C2 x)^2,

which is the equation of an ellipse.
3.10 Particle Moving Along a Curve

Let x(t) and y(t) denote the x and y coordinates of a point at time t, respectively. Pick a reference point on the curve and let s(t) be the arc length of the piece of the curve between the reference point and the particle. Figure 3.6 illustrates this idea. Then

    v = ds/dt,    a = dv/dt = d^2s/dt^2.

The vector v is in the tangent direction. The goal is to find the components of a in the tangential and normal directions. Let φ be the angle between the tangent line and the x axis. Then
    cos(φ) = dx/ds,    sin(φ) = dy/ds,

and

    a_x = d^2x/dt^2,    a_y = d^2y/dt^2.

Rotating into the tangential-normal frame,

    | cos(φ)   sin(φ) | | a_x |   |  a_x cos(φ) + a_y sin(φ) |
    | −sin(φ)  cos(φ) | | a_y | = | −a_x sin(φ) + a_y cos(φ) |,
i.e.,

    a_T = a_x cos(φ) + a_y sin(φ) = a_x (dx/ds) + a_y (dy/ds)
        = (a_x (dx/dt) + a_y (dy/dt)) (dt/ds) = (a_x v_x + a_y v_y)/v

and

    a_n = −a_x sin(φ) + a_y cos(φ) = (a_y v_x − a_x v_y)/v.

Also, we have v = √(v_x^2 + v_y^2), so

    d^2s/dt^2 = dv/dt = (2v_x (dv_x/dt) + 2v_y (dv_y/dt))/(2√(v_x^2 + v_y^2)) = (v_x a_x + v_y a_y)/v = a_T.
#
,
41
But this was clear to start with because it is the component of a in the tangential
direction that affects ds/dt.
We now have

    y' = dy/dx = (dy/dt)/(dx/dt) = v_y/v_x,

and

    y'' = (v_x (dv_y/dx) − v_y (dv_x/dx))/v_x^2
        = ((v_x (dv_y/dt) − v_y (dv_x/dt))/v_x^2)(dt/dx)
        = (v_x a_y − v_y a_x)/v_x^3
        = a_n v/v_x^3,

so that a_n = y'' v_x^3/v. Since

    v_x = dx/dt = (dx/ds)(ds/dt) = (dx/ds) v,

and ds/dx = √(1 + (y')^2), we get

    a_n = y'' v^2/(ds/dx)^3 = y'' v^2/(1 + (y')^2)^{3/2}.    (3.2)

Let

    R = (1 + (y')^2)^{3/2}/y''.
Figure 3.7: The radius of curvature is the radius of the circle whose edge best matches a given curve at a particular point.
Conversely, suppose that R is a constant. Then

    R y'' = (1 + (y')^2)^{3/2}.

Let z = y'. Then

    R dz/(1 + z^2)^{3/2} = dx,

so

    x = ∫ R/(1 + z^2)^{3/2} dz = Rz/√(1 + z^2) + A,

giving

    x − A = R/√((1/z)^2 + 1),
    (1/z)^2 + 1 = R^2/(x − A)^2,
    1/z^2 = R^2/(x − A)^2 − 1 = (R^2 − (x − A)^2)/(x − A)^2,
    z^2 = (x − A)^2/(R^2 − (x − A)^2),
    z = dy/dx = (x − A)/√(R^2 − (x − A)^2).

Therefore,

    y = ∫ (x − A)/√(R^2 − (x − A)^2) dx = −√(R^2 − (x − A)^2) + B,
    (y − B)^2 = R^2 − (x − A)^2,
    (x − A)^2 + (y − B)^2 = R^2.
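A numeric sanity check of the curvature formula: applied to a semicircle of radius R0, it should return ±R0 (the sign tracks the concavity of the chosen branch):

```python
import math

A, B, R0 = 1.0, 2.0, 5.0                 # illustrative circle

def y(x):                                # upper semicircle
    return B + math.sqrt(R0**2 - (x - A)**2)

x, h = 2.5, 1e-4
y1 = (y(x + h) - y(x - h)) / (2 * h)             # y'
y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2     # y''
R = (1 + y1**2) ** 1.5 / y2
assert abs(abs(R) - R0) < 1e-3
```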
Suppose now that the only external force F acting on the particle is gravity, i.e., F = (0, −mg). Then in terms of the T, n basis, the components of F are

    | cos(φ)   sin(φ) | |  0  |   | −mg sin(φ) |   | −mg dy/ds |
    | −sin(φ)  cos(φ) | | −mg | = | −mg cos(φ) | = | −mg dx/ds |.
43
ds
,
2g(y0 y(s))
Z
ds
.
t= p
2g(y0 y(s))
If m a_n ever becomes greater than the normal component of F, the particle will fly off the curve. This will happen when

    m (y'' v_x^3/v) = −mg (dx/ds) = −mg (v_x/v),
    y'' v_x^2 = −g.

Since

    v_x = dx/dt = (dx/ds)(ds/dt) = (dx/ds) v,

we get

    −g = y'' v^2 (dx/ds)^2 = y'' v^2/(ds/dx)^2 = y'' v^2/(1 + (y')^2).

With v^2 = 2g(y0 − y), this becomes

    y'' · 2g(y0 − y)/(1 + (y')^2) = −g,
    2y''(y − y0) = 1 + (y')^2.
Example 3.6. A particle is placed at the point (1, 1) on the curve y = x^3 and released. It slides down the curve under the influence of gravity. Determine whether or not it will ever fly off the curve, and if so, where.

Solution. We have

    y = x^3,    y' = 3x^2,    y'' = 6x,    y0 = 1.

Therefore,

    2y''(y − y0) = 1 + (y')^2,
    12x(x^3 − 1) = 1 + 9x^4,
    12x^4 − 12x = 1 + 9x^4,
    3x^4 − 12x − 1 = 0.
Figure 3.8: A particle released from (1, 1), under the influence of gravity, will fly off the curve at approximately (−0.083, −0.00057).
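The relevant root of 3x^4 − 12x − 1 = 0 can be found numerically, e.g., by bisection on [−1, 0], where the polynomial changes sign:

```python
def f(x):
    return 3 * x**4 - 12 * x - 1

lo, hi = -1.0, 0.0                 # f(-1) = 14 > 0, f(0) = -1 < 0
for _ in range(60):                # bisection
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2
print(round(x, 3), round(x**3, 5))   # -0.083 -0.00058
```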
Chapter 4
Linear Differential Equations
Although we have dealt with a lot of manageable cases, first order equations
are difficult in general. But second order equations are much worse. The most
manageable case is linear equations. We begin with the general theory of linear
differential equations. Specific techniques for solving these equations will be
given later.
Consider the general nth order linear equation in normal form,

    d^n y/dx^n + P1(x) d^{n−1}y/dx^{n−1} + ... + Pn(x) y = Q(x).

We can convert it to a system of first order equations. Let

    y1(x) = y,    y2(x) = dy/dx,    y3(x) = d^2y/dx^2,    ...,    yn(x) = d^{n−1}y/dx^{n−1},

so that

    dy1/dx = y2,    dy2/dx = y3,    ...,    dy_{n−1}/dx = yn,
If the coefficient of d^n y/dx^n is not 1, we can divide the equation through by this coefficient to write it in normal form. A linear equation is in normal form when the coefficient of y^{(n)} is 1.
and we have

    dyn/dx = d^n y/dx^n = −P1(x) d^{n−1}y/dx^{n−1} − ... − Pn(x) y + Q(x)
           = −P1(x) yn − ... − Pn(x) y1 + Q(x).

Therefore,

    dY/dx = A(x) Y + B(x),
where Y = (y1, y2, ..., yn)^T,

    A(x) = |    0          1          0       ...      0      |
           |    0          0          1       ...      0      |
           |    :          :          :        .       :      |
           |    0          0          0       ...      1      |
           | −Pn(x)   −P_{n−1}(x)  −P_{n−2}(x) ...  −P1(x)    |,

and

    B(x) = (0, 0, ..., 0, Q(x))^T.
Theorem 4.1
If P1(x), ..., Pn(x), Q(x) are continuous throughout some interval I and x0 is an interior point of I, then for any numbers w0, ..., w_{n−1}, there exists a unique solution throughout I of

    d^n y/dx^n + P1(x) d^{n−1}y/dx^{n−1} + ... + Pn(x) y = Q(x)

satisfying

    y(x0) = w0,    y'(x0) = w1,    ...,    y^{(n−1)}(x0) = w_{n−1}.
4.1 Homogeneous Linear Equations

To begin, consider the case where Q(x) ≡ 0, called the homogeneous case:

    y^{(n)} + P1(x) y^{(n−1)} + P2(x) y^{(n−2)} + ... + Pn(x) y = 0.    (4.1)
Proposition 4.2
1. If r(x) is a solution of Equation (4.1), then so is ar(x) for all a ∈ ℝ.
2. If r(x) and s(x) are solutions of Equation (4.1), then so is r(x) + s(x).

Proof.
1. Substituting y = ar into the lhs gives

    ar^{(n)} + P1(x) ar^{(n−1)} + ... + Pn(x) ar = a (r^{(n)} + ... + Pn(x) r).

But since r is a solution, we have

    a (r^{(n)} + ... + Pn(x) r) = a · 0 = 0.

Therefore, ar is a solution.

2. Substituting y = r + s in the lhs gives

    (r^{(n)} + ... + Pn(x) r) + (s^{(n)} + ... + Pn(x) s) = 0 + 0 = 0.

Therefore, r + s is a solution.
Corollary 4.3
The set of solutions to Equation (4.1) forms a vector space.
The Wronskian of y1, y2, ..., yn is the determinant

    W(y1, y2, ..., yn)(x) = | y1(x)           y2(x)           ...  yn(x)           |
                            | y1'(x)          y2'(x)          ...  yn'(x)          |
                            | :               :                    :               |
                            | y1^{(n−1)}(x)   y2^{(n−1)}(x)   ...  yn^{(n−1)}(x)   |.

Suppose now that y1, ..., yn are solutions of Equation (4.1). The derivative of a determinant is the sum of the determinants obtained by differentiating one row at a time; differentiating any row but the last produces a repeated row, so every term vanishes except the one in which the last row is differentiated:

    dW/dx = | y1(x)           ...  yn(x)           |
            | :                    :               |
            | y1^{(n−2)}(x)   ...  yn^{(n−2)}(x)   |
            | y1^{(n)}(x)     ...  yn^{(n)}(x)     |.

Substituting yj^{(n)} = −Σ_{k=1}^n Pk(x) yj^{(n−k)} into the last row, the terms with k ≥ 2 are multiples of the other rows and do not change the determinant, leaving only the k = 1 term:

    dW/dx = −P1(x) W.

Therefore, W = Ae^{−∫ P1(x) dx}, and so either

    A = 0  ⟹  W(x) ≡ 0,

or

    A ≠ 0  ⟹  W(x) ≠ 0 for all x.
Theorem 4.4
Let V be the set of solutions to Equation (4.1) and suppose that y1, y2, ..., yn ∈ V. Then y1, y2, ..., yn are linearly independent if and only if W ≠ 0.

Proof. Suppose that y1, y2, ..., yn are linearly dependent. Then there exist constants c1, c2, ..., cn, not all zero, such that

    c1 y1(x) + c2 y2(x) + ... + cn yn(x) = 0,
    c1 y1'(x) + c2 y2'(x) + ... + cn yn'(x) = 0,
    ...
    c1 y1^{(n−1)}(x) + c2 y2^{(n−1)}(x) + ... + cn yn^{(n−1)}(x) = 0.

Writing this system as M (c1, c2, ..., cn)^T = 0, the existence of a nonzero solution for (c1, c2, ..., cn) implies |M| = 0, and |M| = W.
Now suppose that W = 0. Select an x0 ∈ ℝ. Since W(x) ≡ 0, we have W(x0) = 0. Thus, there exist constants c1, c2, ..., cn, not all zero, such that

    M (c1, c2, ..., cn)^T = 0,

where

    M = | y1(x0)           ...  yn(x0)           |
        | :                     :                |
        | y1^{(n−1)}(x0)   ...  yn^{(n−1)}(x0)   |,

that is,

    c1 y1(x0) + c2 y2(x0) + ... + cn yn(x0) = 0,
    c1 y1'(x0) + c2 y2'(x0) + ... + cn yn'(x0) = 0,
    ...
    c1 y1^{(n−1)}(x0) + c2 y2^{(n−1)}(x0) + ... + cn yn^{(n−1)}(x0) = 0.

Let f = c1 y1 + c2 y2 + ... + cn yn. Then f is a solution of Equation (4.1) with

    f(x0) = 0,    f'(x0) = 0,    ...,    f^{(n−1)}(x0) = 0.

But y = 0 also satisfies these conditions, and by the uniqueness part of Theorem 4.1, it is the only function satisfying them. Therefore, f ≡ 0, that is,

    c1 y1(x) + c2 y2(x) + ... + cn yn(x) ≡ 0.

Therefore, y1, y2, ..., yn are linearly dependent.
Corollary 4.5
Let V be the set of solutions to Equation (4.1). Then we have dim(V) = n.

Proof. Choose n linearly independent solutions y1, ..., yn (which exist by Theorem 4.1), so that W(x0) ≠ 0. Given any solution y, solve

    M (c1, c2, ..., cn)^T = (y(x0), y'(x0), ..., y^{(n−1)}(x0))^T,

where

    M = | y1(x0)           y2(x0)           ...  yn(x0)           |
        | y1'(x0)          y2'(x0)          ...  yn'(x0)          |
        | :                :                     :                |
        | y1^{(n−1)}(x0)   y2^{(n−1)}(x0)   ...  yn^{(n−1)}(x0)   |;

this is possible since |M| = W(x0) ≠ 0. Then for each j,

    y^{(j)}(x0) = c1 y1^{(j)}(x0) + c2 y2^{(j)}(x0) + ... + cn yn^{(j)}(x0),

so y and c1 y1 + ... + cn yn satisfy the same initial conditions at x0, and by uniqueness they are equal.

We conclude that, to solve Equation (4.1), we need to find n linearly independent solutions y1, ..., yn of Equation (4.1). Then the general solution to Equation (4.1) is c1 y1 + c2 y2 + ... + cn yn.
Example 4.6. Solve y'' + 5y' + 6y = 0.
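The notes state this example without a worked solution here; assuming the characteristic-root method of the next subsection, the roots of λ^2 + 5λ + 6 are −2 and −3, and sympy can confirm that e^{−2x} and e^{−3x} solve the equation:

```python
import sympy as sp

x = sp.symbols("x")
for r in (-2, -3):                       # roots of r^2 + 5r + 6 = 0
    y = sp.exp(r * x)
    assert sp.simplify(y.diff(x, 2) + 5 * y.diff(x) + 6 * y) == 0
```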
4.1.1 Constant Coefficients
The next problem concerns how to find the n linearly independent solutions. For n > 1, there is no good method in general. There is, however, a technique which works in the special case where all the coefficients are constants. Consider the constant coefficient linear equation

    a0 y^{(n)} + a1 y^{(n−1)} + ... + a_{n−1} y' + an y = 0.    (4.2)

Substituting y = e^{rx} shows that e^{rx} is a solution whenever r is a root of the characteristic polynomial

    p(r) = a0 r^n + a1 r^{n−1} + ... + an = 0.    (4.3)

Suppose that Equation (4.3) has a repeated real root r. Then we do not have enough solutions of the form e^{rx}. But consider y = xe^{rx}. Then

    y' = e^{rx} + rxe^{rx} = (rx + 1) e^{rx},
    y'' = re^{rx} + r(rx + 1) e^{rx} = (r^2 x + 2r) e^{rx},
    y''' = r^2 e^{rx} + r(r^2 x + 2r) e^{rx} = (r^3 x + 3r^2) e^{rx},
    ...
    y^{(n)} = (r^n x + n r^{n−1}) e^{rx}.
Substituting into the lhs of Equation (4.2) gives

    Σ_{k=0}^n ak (r^{n−k} x + (n − k) r^{n−k−1}) e^{rx}
      = (Σ_{k=0}^n ak r^{n−k}) x e^{rx} + (Σ_{k=0}^{n−1} ak (n − k) r^{n−k−1}) e^{rx}
      = p(r) x e^{rx} + p'(r) e^{rx}.

Since r is a repeated root of p, both p(r) = 0 and p'(r) = 0, so y = xe^{rx} is a solution.
For a pair of complex roots a ± ib, the functions z1 = e^{(a+ib)x} and z2 = e^{(a−ib)x} combine to give the real solutions

    (z1 + z2)/2 = e^{ax} cos(bx),    (z1 − z2)/(2i) = e^{ax} sin(bx).

So the two roots a + ib and a − ib give two solutions e^{ax} cos(bx) and e^{ax} sin(bx). For repeated complex roots, use xe^{ax} cos(bx), xe^{ax} sin(bx), etc. So in all cases, we get n solutions.
The expression c1 e^{ax} cos(bx) + c2 e^{ax} sin(bx) can be rewritten as follows. Let A = √(c1^2 + c2^2) and

    φ = cos^{-1}( c1/√(c1^2 + c2^2) ),

that is,

    cos(φ) = c1/√(c1^2 + c2^2),    sin(φ) = √(1 − cos^2(φ)) = c2/√(c1^2 + c2^2).

Therefore,

    c1 e^{ax} cos(bx) + c2 e^{ax} sin(bx) = A e^{ax} ((c1/A) cos(bx) + (c2/A) sin(bx))
        = A e^{ax} (cos(φ) cos(bx) + sin(φ) sin(bx))
        = A e^{ax} cos(bx − φ).
Example 4.7. Solve y^{(5)} − 4y^{(4)} + 5y''' + 6y'' − 36y' + 40y = 0.

Solution. We have

    λ^5 − 4λ^4 + 5λ^3 + 6λ^2 − 36λ + 40 = 0,
    (λ − 2)^2 (λ + 2)(λ^2 − 2λ + 5) = 0.

Therefore, λ = 2, 2, −2, 1 ± 2i and

    y = c1 e^{2x} + c2 x e^{2x} + c3 e^{−2x} + c4 e^x cos(2x) + c5 e^x sin(2x),

where c1, c2, c3, c4, c5 are arbitrary constants.
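The factorization of the characteristic polynomial can be verified by multiplying the factors back out:

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists, highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (l - 2)^2 (l + 2) (l^2 - 2l + 5)
poly = polymul(polymul(polymul([1, -2], [1, -2]), [1, 2]), [1, -2, 5])
assert poly == [1, -4, 5, 6, -36, 40]    # l^5 - 4l^4 + 5l^3 + 6l^2 - 36l + 40
```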
4.2 Nonhomogeneous Equations

Consider now the nonhomogeneous equation

    y^{(n)} + P1(x) y^{(n−1)} + ... + Pn(x) y = Q(x).    (4.4)

If yp is one particular solution of Equation (4.4) and y1, ..., yn are linearly independent solutions of the corresponding homogeneous equation, then the difference of any two solutions of Equation (4.4) solves the homogeneous equation. Therefore,

    y = c1 y1 + c2 y2 + ... + cn yn + yp

is the general solution to Equation (4.4).

Example 4.8. Solve y'' − y = x.

Solution. The homogeneous equation y'' − y = 0 has solutions y1 = e^{−x} and y2 = e^x, and yp = −x is a particular solution, so the general solution is y = c1 e^{−x} + c2 e^x − x.
Chapter 5

Consider the second order linear equation

    y'' + P(x) y' + Q(x) y = 0.    (5.1)

5.1 Reduction of Order

As mentioned earlier, we can find a second solution from the first. We first develop the tools we need.
Lemma 5.1
Let f(x) be a twice differentiable function on a closed bounded interval J. If {x ∈ J : f(x) = 0} is infinite, then there exists an x0 ∈ J such that f(x0) = 0 and f'(x0) = 0.

Proof. Let S = {x ∈ J : f(x) = 0}. If |S| is infinite, then by the Bolzano-Weierstrass Theorem, S has an accumulation point x0, i.e., there exists an x0 ∈ S such that every interval about x0 contains another point of S. Therefore, by Rolle's Theorem, every interval about x0 contains a point where f' is 0. Therefore, by continuity, f'(x0) = 0.
Corollary 5.2
Let y1(x) be a solution of y'' + P(x)y' + Q(x)y = 0 on a closed bounded
interval J. If y1 is not the zero solution, then y1(x) = 0 for at most finitely
many x in J.
    y2(x0) = 0,   y2'(x0) = 0.
on the interior of each interval. This will define y2 throughout J except for a
few isolated points.
Consider a subinterval I on which y1(x) is never zero, so y1(x) > 0 or
y1(x) < 0 throughout I. We have

    W(y1, y2) = |y1 y2; y1' y2'| = y1y2' - y1'y2.

Since the Wronskian satisfies W' = -P(x)W, we may take

    y1y2' - y1'y2 = e^{-∫P(x) dx},

where y1 is known. This is a linear first order equation for y2. In standard form,
we have

    y2' - (y1'/y1)y2 = e^{-∫P(x) dx}/y1,

where y1 ≠ 0 throughout I. The integrating factor is

    e^{-∫(y1'/y1) dx} = e^{-ln(|y1|)+C} = C'/|y1|,

so, choosing the constant so that the factor is 1/y1,

    (y2/y1)' = e^{-∫P(x) dx}/y1²,   (5.2a)

and integrating,

    y2/y1 = ∫ e^{-∫P(x) dx}/y1² dx,

finally giving us

    y2 = y1 ∫ (e^{-∫P(x) dx}/y1²) dx.   (5.2b)

Different constants of integration correspond to different y2's.
Example. Solve

    y'' - ((x+2)/x)y' + ((x+2)/x²)y = xe^x,   x ≠ 0,

given that y1 = x and y2 = xe^x solve the homogeneous equation

    y'' - ((x+2)/x)y' + ((x+2)/x²)y = 0.

Solution. The Wronskian is

    W = |x xe^x; 1 e^x + xe^x| = xe^x + x²e^x - xe^x = x²e^x.

Therefore,

    v1' = -y2R/W = -xe^x·xe^x/(x²e^x) = -e^x  ⟹  v1 = -e^x,
    v2' = y1R/W = x·xe^x/(x²e^x) = 1  ⟹  v2 = x.

Therefore,

    yp = v1y1 + v2y2 = -xe^x + x²e^x,

and the general solution is

    y = C1x + C2xe^x - xe^x + x²e^x = C1x + C2'xe^x + x²e^x,

where C1, C2, and C2' = C2 - 1 are arbitrary constants.
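The pieces of this example can be verified symbolically; the sketch below (our own setup) checks both homogeneous solutions and the particular solution against the equation as we read it:

```python
# Verify that x and x*exp(x) solve the homogeneous equation and that
# yp = -x*exp(x) + x**2*exp(x) produces the forcing term x*exp(x).
import sympy as sp

x = sp.symbols('x', positive=True)

def L(y):
    # the differential operator of the example
    return y.diff(x, 2) - (x + 2)/x * y.diff(x) + (x + 2)/x**2 * y

yp = -x*sp.exp(x) + x**2*sp.exp(x)
print(sp.simplify(L(x)))                  # 0
print(sp.simplify(L(x*sp.exp(x))))        # 0
print(sp.simplify(L(yp) - x*sp.exp(x)))   # 0
```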
5.2
Undetermined Coefficients
For example, if R(x) = e^{2x}, try yp = Ae^{2x}; then yp' = 2Ae^{2x} and yp'' = 4Ae^{2x}.
Undetermined coefficients works when P(x) and Q(x) are constants and
R(x) is a sum of terms of the forms shown in Table 5.1.

Table 5.1: The form of yp to be used for a given R(x).

    R(x)                Form of yp
    Cx^n                A0x^n + A1x^{n-1} + ⋯ + A_{n-1}x + A_n
    Ce^{rx}             Ae^{rx}
    Ce^{rx} cos(kx)     e^{rx}(A cos(kx) + B sin(kx))
    Ce^{rx} sin(kx)     e^{rx}(A cos(kx) + B sin(kx))
3
ex
,
6
2
x cos(x)
,
2
A
1/4
1
=
= .
2
2
8
x2
1
,
4
8
x2
x,
2
5.2.1
Suppose we have

    y'' + ay' + by = R(x).

Let p(λ) = λ² + aλ + b. We now consider two cases for R(x).
Case I: R(x) = ce^{λx}. Try yp = Ae^{λx}, so y' = Aλe^{λx} and
y'' = Aλ²e^{λx}. Then

    y'' + ay' + by = (λ² + aλ + b)Ae^{λx} = p(λ)Ae^{λx}.

Therefore, A = c/p(λ), unless p(λ) = 0.
What if p(λ) = 0? Then e^{λx} is a solution to the homogeneous equation. Let
y = Axe^{λx}. Then

    y' = Ae^{λx} + Aλxe^{λx},
    y'' = Aλe^{λx} + Aλe^{λx} + Aλ²xe^{λx} = 2Aλe^{λx} + Aλ²xe^{λx}.

Thus,

    y'' + ay' + by = A((λ²x + aλx + bx) + (2λ + a))e^{λx}
      = A(p(λ)xe^{λx} + p'(λ)e^{λx})
      = Ap'(λ)e^{λx}.

Therefore, A = c/p'(λ), unless p'(λ) = 0 too.
If p(λ) = 0 and p'(λ) = 0, then e^{λx} and xe^{λx} both solve the homogeneous
equation. Letting y = Ax²e^{λx}, we have A = c/p''(λ).
Case II: R(x) = ce^{rx} cos(bx) or ce^{rx} sin(bx). We use complex numbers. Set
λ = r + ib. Then

    ce^{λx} = ce^{rx} cos(bx) + ice^{rx} sin(bx).

Let z = y + iw. Consider the equation

    z'' + az' + bz = ce^{λx}.   (5.3)
Taking real and imaginary parts of (5.3),

    y'' + ay' + by = ce^{rx} cos(bx),   (5.4a)
    w'' + aw' + bw = ce^{rx} sin(bx),   (5.4b)

so, when p(λ) ≠ 0, the particular solutions are

    y = Re(ce^{λx}/p(λ)),   w = Im(ce^{λx}/p(λ)),

respectively.
Example 5.9. Solve y'' + 8y' + 12y = 3 cos(2x).
Solution. Consider z'' + 8z' + 12z = 3e^{2ix}. Then

    z = 3e^{2ix}/p(2i) = 3e^{2ix}/((2i)² + 16i + 12) = 3e^{2ix}/(-4 + 16i + 12)
      = 3e^{2ix}/(8 + 16i) = (3/8)·e^{2ix}/(1 + 2i) = (3/8)·((1 - 2i)/((1 + 2i)(1 - 2i)))e^{2ix}
      = (3/40)(1 - 2i)e^{2ix}
      = (3/40)(1 - 2i)(cos(2x) + i sin(2x))
      = (3/40)[cos(2x) + 2 sin(2x) + i(-2 cos(2x) + sin(2x))].

Therefore,

    Re(z) = (3/40)(cos(2x) + 2 sin(2x)),
    Im(z) = (3/40)(-2 cos(2x) + sin(2x)),

so

    yp = (3/40)(cos(2x) + 2 sin(2x)).
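The real part extracted in Example 5.9 can be checked directly (a sketch we add; the symbols are ours):

```python
# Check Example 5.9: yp = (3/40)*(cos(2x) + 2*sin(2x)) should satisfy
# y'' + 8y' + 12y = 3*cos(2x) exactly.
import sympy as sp

x = sp.symbols('x')
yp = sp.Rational(3, 40) * (sp.cos(2*x) + 2*sp.sin(2*x))
residual = yp.diff(x, 2) + 8*yp.diff(x) + 12*yp - 3*sp.cos(2*x)
print(sp.simplify(residual))  # 0
```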
5.3
Variation of Parameters
Consider

    y'' + P(x)y' + Q(x)y = R(x),   (I)

where P(x) and Q(x) are not necessarily constant. Let y1 and y2 be linearly
independent solutions of the associated homogeneous equation

    y'' + P(x)y' + Q(x)y = 0.   (H)

Seeking yp = v1y1 + v2y2 leads to

    v1' = -y2R/W,   v2' = y1R/W,

where

    W = |y1 y2; y1' y2'| ≠ 0.
Example. Solve y'' - 5y' + 6y = x²e^{3x}.
Solution. We have

    λ² - 5λ + 6 = 0  ⟹  λ = 3, 2,

so y1 = e^{3x}, y2 = e^{2x}, and W = -e^{5x}. For v1, we have

    v1' = -y2R/W = -e^{2x}·x²e^{3x}/(-e^{5x}) = x²  ⟹  v1 = x³/3.

We can use any function v1 with the right v1', so choose the constant of integration to be 0.
For v2, we have

    v2' = y1R/W = e^{3x}·x²e^{3x}/(-e^{5x}) = -x²e^x,

so

    v2 = -∫x²e^x dx = -x²e^x + 2∫xe^x dx   (by parts)
       = -x²e^x + 2xe^x - 2∫e^x dx   (by parts)
       = -x²e^x + 2xe^x - 2e^x
       = -(x² - 2x + 2)e^x.

Finally,

    yp = (x³/3)e^{3x} - (x² - 2x + 2)e^x·e^{2x} = e^{3x}(x³/3 - x² + 2x - 2),
Example. Solve y'' - 3y' + 2y = sin(e^{-x}).
Solution. Here y1 = e^x, y2 = e^{2x}, and W = e^{3x}. For v1, we have

    v1' = -y2R/W = -e^{2x} sin(e^{-x})/e^{3x} = -e^{-x} sin(e^{-x})  ⟹  v1 = -cos(e^{-x}).

For v2, we have

    v2' = y1R/W = e^x sin(e^{-x})/e^{3x} = e^{-2x} sin(e^{-x}).

Therefore,

    v2 = ∫e^{-2x} sin(e^{-x}) dx = -∫t sin(t) dt   (substitute t = e^{-x})
       = -(-t cos(t) + ∫cos(t) dt)   (by parts)
       = -(-t cos(t) + sin(t))
       = e^{-x} cos(e^{-x}) - sin(e^{-x}).

Finally,

    yp = v1y1 + v2y2
       = -e^x cos(e^{-x}) + e^{2x}(e^{-x} cos(e^{-x}) - sin(e^{-x}))
       = -e^x cos(e^{-x}) + e^x cos(e^{-x}) - e^{2x} sin(e^{-x})
       = -e^{2x} sin(e^{-x}),

and the general solution is

    y = C1e^x + C2e^{2x} - e^{2x} sin(e^{-x}),

where C1 and C2 are arbitrary constants.
Chapter 6
Applications of Second
Order Differential
Equations
We now look specifically at two applications of second order DEs. We will see that
they turn out to be analogues of each other.
6.1
(6.1)
where F is the spring force, d is the displacement of the spring from its natural
length, and k > 0 is the spring constant. As we lower the object, the spring
force increases. There is an equilibrium position at which the spring force and gravity
cancel. Let x be the distance of the object from this equilibrium position,
measured positively downwards. Let s be the displacement of the spring from
the natural length at equilibrium. Then it follows that ks = mg.
If the object is located at x, then the forces are
1. gravity,
2. spring,
3. air resistance,
4. external.
Considering just gravity and the spring force, we have

    Fgravity + Fspring = mg - k(x + s) = -kx.
Suppose air resistance is proportional to velocity. Then

    Fair = -rv,   r > 0.

In other words, the resistance opposes motion, so the force has the opposite sign
of velocity. We will suppose that any external forces present depend only on
time but not on position, i.e., Fexternal = F(t). Then we have

    ma = Ftotal = -kx - rv + F(t),

so it follows that

    m d²x/dt² + r dx/dt + kx = F(t).   (6.2)
The characteristic equation of the homogeneous problem has roots

    λ = -r/(2m) ± √(r² - 4mk)/(2m) = -r/(2m) ± √((r/(2m))² - k/m).

Intuitively, when x = 0 the forces balance. Changing x creates a force in the opposite
direction attempting to move the object towards the equilibrium position.
Case 1: 4mk > r². Let α = r/(2m) and

    β = √(k/m - (r/(2m))²).

Then λ = -α ± iβ and

    x = e^{-αt}(C1 cos(βt) + C2 sin(βt)) = Ae^{-αt} cos(βt - φ),   (6.3)

where C1 and C2 are arbitrary constants. Figure 6.2a shows qualitatively how
this solution looks.
This is damped vibration, the common case. The oscillations die out in time;
the quasi-period is 2π/β. If r = 0, then α = 0. In such a case, there is no
resistance, and the object oscillates forever.
Case 2: 4mk = r². In such a case, we simply have

    x = e^{-(r/(2m))t}(C1 + C2t),   (6.4)

where C1 and C2 are arbitrary constants. Figure 6.2b shows qualitatively how
this solution looks.
Case 3: 4mk < r². Let a = r/(2m) and

    b = √((r/(2m))² - k/m).

Then

    x = e^{-at}(C1e^{bt} + C2e^{-bt}),   (6.5)

where C1 and C2 are arbitrary constants. This is the overdamped case. The
resistance is so great that the object does not oscillate (imagine everything
immersed in molasses). It will just gradually return to the equilibrium position.
Figure 6.2c shows qualitatively how this solution looks.
Consider Equation (6.2) again. We now consider special cases of F(t), where

    F(t) = F0 cos(ωt)   or   F(t) = F0 sin(ωt).

For F(t) = F0 cos(ωt), a particular solution is xp = A cos(ωt - φ), where

    A = F0/√((k - mω²)² + (rω)²),
    cos(φ) = (k - mω²)/√((k - mω²)² + (rω)²),
    sin(φ) = rω/√((k - mω²)² + (rω)²).

For F(t) = F0 sin(ωt), we get xp = A sin(ωt - φ), where A and φ are as above.
In any solution

    e^{-αt}(⋯) + A sin(ωt - φ),

the first term eventually dies out, leaving A sin(ωt - φ) as the steady state solution.
Here we consider a phenomenon known as resonance. The above assumes
that cos(ωt) and sin(ωt) are not part of the solution to the homogeneous equation.
Consider now

    m d²x/dt² + kx = F0 cos(ωt),

where ω = √(k/m) (the external force frequency coincides with the natural
frequency). Then the solution is

    xp = (F0/(2√(km))) t sin(ωt).

Another phenomenon we now consider is beats. Consider r = 0 with ω near
but not equal to √(k/m). Let ω0 = √(k/m). Then the solution, with x(0) = 0 =
x'(0), is

    x = (F0/(m(ω0² - ω²)))(cos(ωt) - cos(ω0t))
      = (2F0/(m(ω0 + ω)(ω0 - ω))) sin(((ω0 + ω)/2)t) sin(((ω0 - ω)/2)t)
      ≈ (F0/(2mωε)) sin(εt) sin(ωt),

where ε = (ω0 - ω)/2.
6.2
Electrical Circuits
Figure 6.3 shows a series circuit with a battery or generator supplying voltage E(t),
a switch, a resistor, an inductor, and a capacitor, through which a current of I
amperes flows; resistance is measured in ohms, capacitance in farads, and
inductance in henrys. The capacitor produces a voltage drop of Q/C. Unless R
is too large, the capacitor will create sine and cosine solutions and, thus, an
alternating flow of current. Kirchhoff's Law states that the sum of the voltage
changes around a circuit is zero, so

    -E(t) + RI + L dI/dt + Q/C = 0,

so we have

    L d²Q/dt² + R dQ/dt + Q/C = E(t).   (6.6)
The solution is the same as in the case of a spring. The analogues are shown in
Table 6.1.

Table 6.1: Analogous terms between a spring-mass system and an electrical
circuit.

    Circuit               Spring
    Charge Q              Displacement x
    Inductance L          Mass m
    Resistance R          Friction r
    1/Capacitance 1/C     Spring constant k
    Current I             Velocity v
turn changes the capacitance C. This changes the frequency of the solutions
of the homogeneous equation. When it agrees with the frequency of E(t) for some
incoming signal, resonance results, so that the amplitude of the current produced
from this signal is much greater than that of any other.
Chapter 7
(7.1)
7.1
Undetermined Coefficients
If P1 (x), P2 (x), . . . , Pn (x) are constants and R(x) is nice, we can use undetermined coefficients.
Example 7.1. Solve y⁽⁴⁾ - y = 2e^{2x} + 3e^{-x}.
Solution. Here p(λ) = λ⁴ - 1, with roots ±1, ±i, so the homogeneous solutions are

    y1 = e^x,   y2 = e^{-x},   y3 = cos(x),   y4 = sin(x).

Since e^{-x} solves the homogeneous equation, let yp = Ae^{2x} + Bxe^{-x}. Then by
the short cut method (5.2.1, p. 64), we have

    A = 2/p(2) = 2/(16 - 1) = 2/15,
    B = 3/p'(-1) = 3/(4·(-1)³) = -3/4,

so

    yp = (2/15)e^{2x} - (3/4)xe^{-x}.

7.2
Variation of Parameters
Set yp = v1y1 + v2y2 + ⋯ + vnyn subject to the conditions

    v1'y1 + v2'y2 + ⋯ + vn'yn = 0,   (1)
    v1'y1' + v2'y2' + ⋯ + vn'yn' = 0,   (2)
    ⋮
    v1'y1^{(n-2)} + v2'y2^{(n-2)} + ⋯ + vn'yn^{(n-2)} = 0.   (n - 1)

Then

    y = Σ_{i=1}^{n} viyi,
    y' = Σ (viyi)' = Σ viyi' + Σ vi'yi = Σ viyi',
    y'' = Σ (viyi')' = Σ viyi'' + Σ vi'yi' = Σ viyi'',
    ⋮
    y^{(n-1)} = Σ viyi^{(n-1)},
    y^{(n)} = Σ viyi^{(n)} + Σ vi'yi^{(n-1)}.

Therefore,

    y^{(n)} + P1(x)y^{(n-1)} + ⋯ + Pn(x)y
      = Σ vi(yi^{(n)} + P1(x)yi^{(n-1)} + ⋯ + Pn(x)yi) + Σ vi'yi^{(n-1)}
      = Σ vi'yi^{(n-1)},

since each yi solves the homogeneous equation. The final condition is therefore

    Σ_{i=1}^{n} vi'yi^{(n-1)} = R(x).   (n)

In matrix form,

    M [v1'; v2'; ⋮; vn'] = [0; 0; ⋮; R(x)],

where

    M = [y1 ⋯ yn; y1' ⋯ yn'; ⋮; y1^{(n-1)} ⋯ yn^{(n-1)}].

Therefore, |M| = W ≠ 0, and the system can be solved for v1', ..., vn'.
7.3
(7.2)
The substitution x = e^u (so u = ln(x)) converts Equation (7.2) into a constant
coefficient equation:

    dy/dx = (dy/du)(1/x),
    d²y/dx² = (d²y/du² - dy/du)(1/x²),
    d³y/dx³ = (d³y/du³ - 3 d²y/du² + 2 dy/du)(1/x³),
    ⋮
    dⁿy/dxⁿ = (dⁿy/duⁿ + C1 d^{n-1}y/du^{n-1} + ⋯ + Cn dy/du)(1/xⁿ)

for suitable constants C1, ..., Cn. For instance, using

    d²y/dx² = (d²y/du² - dy/du)(1/x²),

the equation x²y'' + xy' - y = 0 of Example 7.2 becomes

    d²y/du² - dy/du + dy/du - y = 0,
    d²y/du² - y = 0.

Solving d²y/du² - y = 0 involves looking for solutions of the form y = e^{λu} = (e^u)^λ
and finding λ. Since x = e^u, we could directly look for solutions of the form x^λ and
find the right λ. With this in mind and considering Example 7.2 again, we have

    y = x^λ,   y' = λx^{λ-1},   y'' = λ(λ - 1)x^{λ-2}.

Therefore,

    x²y'' + xy' - y = 0  ⟹  λ(λ - 1)x^λ + λx^λ - x^λ = 0
      ⟹  λ(λ - 1) + λ - 1 = 0
      ⟹  λ² - 1 = 0
      ⟹  λ = ±1,

as before.
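The x^λ ansatz can be carried out mechanically; a short sympy sketch (our own setup) reproduces the indicial polynomial of Example 7.2:

```python
# Substitute y = x**lam into the Euler equation x**2*y'' + x*y' - y = 0;
# dividing by x**lam leaves the indicial polynomial lambda**2 - 1.
import sympy as sp

x, lam = sp.symbols('x lambda')
y = x**lam
expr = sp.simplify((x**2*y.diff(x, 2) + x*y.diff(x) - y) / x**lam)
print(sp.expand(expr))  # lambda**2 - 1
```

Its roots ±1 give the solutions x and 1/x found above.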
Chapter 8
Introduction
Example 8.1. Solve

    y'' - xy' - 2y = 0.   (8.1)

Solution. Let

    y = a0 + a1x + a2x² + ⋯ + anxⁿ + ⋯ = Σ_{n=0}^{∞} anxⁿ.
Then

    y' = Σ_{n=1}^{∞} nanx^{n-1} = Σ_{k=0}^{∞} (k+1)a_{k+1}x^k = Σ_{n=0}^{∞} (n+1)a_{n+1}xⁿ,
    y'' = Σ_{n=0}^{∞} n(n+1)a_{n+1}x^{n-1} = Σ_{n=0}^{∞} (n+1)(n+2)a_{n+2}xⁿ.

Substituting into the equation gives

    Σ_{n=0}^{∞} (n+1)(n+2)a_{n+2}xⁿ - x Σ_{n=0}^{∞} nanx^{n-1} - 2 Σ_{n=0}^{∞} anxⁿ = 0,

so

    (n+1)(n+2)a_{n+2} - nan - 2an = 0  ⟹  a_{n+2} = an/(n+1).

In particular, a2 = a0.
Therefore,

    y = a0 + a1x + a2x² + a3x³ + ⋯ + anxⁿ + ⋯
      = a0 + a1x + a0x² + (a1/2)x³ + ⋯ + (a0/Π_{k=1}^{n}(2k-1))x^{2n} + (a1/Π_{k=1}^{n}(2k))x^{2n+1} + ⋯
      = a0(1 + x² + ⋯ + x^{2n}/Π_{k=1}^{n}(2k-1) + ⋯) + a1(x + x³/2 + ⋯ + x^{2n+1}/Π_{k=1}^{n}(2k) + ⋯)
      = a0 Σ_{n=0}^{∞} x^{2n}/Π_{k=1}^{n}(2k-1) + a1 Σ_{n=0}^{∞} x^{2n+1}/(2ⁿn!).

That is, we have y = C1y1 + C2y2, where C1 = a0 and C2 = a1, such that

    y1 = Σ_{n=0}^{∞} x^{2n}/Π_{k=1}^{n}(2k-1),
    y2 = Σ_{n=0}^{∞} x^{2n+1}/(2ⁿn!).

The second series can be recognized:

    y2 = Σ_{n=0}^{∞} x^{2n+1}/(2ⁿn!) = x Σ_{n=0}^{∞} (x²/2)ⁿ/n! = xe^{x²/2}.
Having one solution in closed form, we can try to find another (see 5.1, p. 57)
by

    y1 = y2 ∫ e^{-∫P(x) dx}/y2² dx = xe^{x²/2} ∫ e^{x²/2}/(x²e^{x²}) dx
       = xe^{x²/2} ∫ 1/(x²e^{x²/2}) dx,
8.2
Theorem 8.2
Consider the power series Σ_{n=0}^{∞} an(z - p)ⁿ.
1. There exists an R with 0 ≤ R ≤ ∞, called the radius of convergence,
such that Σ_{n=0}^{∞} an(z - p)ⁿ converges for |z - p| < R and
diverges for |z - p| > R. When the limit exists, the radius of convergence is given by

    R = lim_{n→∞} |an/a_{n+1}|.   (8.2)
Using (2) of Theorem 8.2, f'(z) exists for |z - p| < R with
f'(z) = Σ_{n=1}^{∞} nan(z - p)^{n-1}. Similarly,

    f(z) = Σ an(z - p)ⁿ,                 z = p  ⟹  a0 = f(p),
    f'(z) = Σ nan(z - p)^{n-1},          z = p  ⟹  a1 = f'(p),
    f''(z) = Σ n(n-1)an(z - p)^{n-2},    z = p  ⟹  a2 = f''(p)/2,
    ⋮
    f^{(n)}(z) = n!an + ⋯,               z = p  ⟹  an = f^{(n)}(p)/n!.
Example 8.4. Consider f(x) = 1/(x² + 1). The singularities are ±i. Therefore,
the radius of convergence of the Taylor series about the origin is 1, with
the series given by

    f(x) = Σ_{n=0}^{∞} (-1)ⁿx^{2n}.
8.3
Analytic Equations
Theorem 8.5
Let x = p be an ordinary point of y'' + P(x)y' + Q(x)y = 0. Let R be the
distance from p to the closest singular point of the DE in the complex plane.
Then the DE has two series y1(x) = Σ_{n=0}^{∞} an(x - p)ⁿ and y2(x) = Σ_{n=0}^{∞} bn(x - p)ⁿ
which converge to linearly independent solutions to the DE on the interval
|x - p| < R.
Remark. The content of the theorem is that the solutions are analytic within
the specified interval. The fact that solutions exist on domains containing at
least this interval is a consequence of theorems in Chapter 10.
Example 8.6. Consider (x² + 1)y'' + 2xy' + 2y = 0. Dividing by x² + 1, we
have

    y'' + (2x/(x² + 1))y' + (2/(x² + 1))y = 0.

The singularities are ±i. If p = 0, then R = |i - 0| = 1. Therefore, there are two
linearly independent analytic solutions on |x| < 1.
8.4
When using power series techniques to solve linear differential equations, ideally
one would like to have a closed form solution for the final answer, but most often
one has to settle for much less. There are various levels of success, listed below,
and one would like to get as far down the list as one can.
For simplicity, we will assume that we have chosen to expand about the
point x = 0, although the same considerations apply to x = c for any c.
Suppose 0 is an ordinary point of the equation L(x)y'' + M(x)y' + N(x)y = 0
and let y = Σ_{n=0}^{∞} anxⁿ be a solution. One might hope to:
1. find the coefficients an for a few small values of n (for example, find the
coefficients as far as a5, which thereby determines the 5th degree Taylor
approximation to y(x));
2. find a recursion formula which for any n gives a_{n+2} in terms of a_{n-1}
and a_n;
3. find a general formula for an in terms of n;
4. recognize the resulting series to get a closed form formula for y(x).
In general, level (1) can always be achieved. If L(x), M (x), and N (x)
are polynomials, then level (2) can be achieved. Having achieved level (2), it
is sometimes possible to get to level (3) but not always. Cases like the one
in Example 8.1 where one can achieve level (4) are rare and usually beyond
expectation.
8.5
Write

    P(x) = Σ_{n=0}^{∞} pn(x - c)ⁿ,   Q(x) = Σ_{n=0}^{∞} qn(x - c)ⁿ,

and set y = Σ_{n=0}^{∞} an(x - c)ⁿ. Then

    y' = Σ_{n=0}^{∞} nan(x - c)^{n-1},   y'' = Σ_{n=0}^{∞} n(n-1)an(x - c)^{n-2},

and substituting into y'' + P(x)y' + Q(x)y = 0 gives

    Σ n(n-1)an(x - c)^{n-2}
      + (Σ pn(x - c)ⁿ)(Σ nan(x - c)^{n-1})
      + (Σ qn(x - c)ⁿ)(Σ an(x - c)ⁿ) = 0.

Since the pn's and qn's are known (as many as desired can be computed from derivatives of P(x) and Q(x)), in theory we can inductively find the an's one after
another.
In the special case where P (x) and Q(x) are polynomials, we can get a
recursion formula by writing an+2 in terms of an and an1 (level 2) and, under
favourable circumstances, we can find the general formula for an (level 3) as in
Example 8.1. But in general, computation gets impossibly messy after a while
and, although a few an s can be found, the general formula cannot.
But if you only want a few an's, there is an easier way. We have an =
y^{(n)}(c)/n!, so

    a2 = y''(c)/2! = -(P(c)y'(c) + Q(c)y(c))/2 = (-a1P(c) - a0Q(c))/2,
    a3 = y'''(c)/3! = -(P'(c)y'(c) + P(c)y''(c) + Q'(c)y(c) + Q(c)y'(c))/6
       = (1/6)(-P'(c)a1 + (a1P(c)² + a0P(c)Q(c))/1·(1/2)·2 - a0Q'(c) - a1Q(c))
       = (1/6)(-P'(c)a1 + a1P(c)² + a0P(c)Q(c) - a0Q'(c) - a1Q(c))·(subject to the factor from y''(c) = -P(c)a1 - Q(c)a0),
    a4 = ⋯,

and so on.
Example 8.8. Compute the Taylor expansion about 0, as far as degree 4, of
the solution of y'' - (1 + 5x)y' + xy = 0 which satisfies y(0) = 2 and y'(0) = 1.
Solution. From the equation, y'' = (1 + 5x)y' - xy. Differentiating and substituting x = 0 gives

    y''(0) = y'(0) - 0 = 1,
    y'''(0) = y''(0) + 5y'(0) - y(0) - 0 = 1 + 5 - 2 = 4,
    y⁽⁴⁾(0) = y'''(0) + 10y''(0) - 2y'(0) - 0 = 4 + 10 - 2 = 12.

Therefore,

    y = 2 + x + x²/2! + 4x³/3! + 12x⁴/4! + ⋯
      = 2 + x + (1/2)x² + (2/3)x³ + (1/2)x⁴ + ⋯.
A similar computation for a second set of initial data gives

    y = 1 + 2x - 2x² + (7/3)x³ - (17/12)x⁴ + ⋯.
In this case, since the coefficients were polynomials, it would have been possible
to achieve level 2 (although the question asked only for a level 1 solution).
However, even had we worked out the recursion formula (level 2), it would have
been too messy to use it to find a general formula for an.
8.6
Example 8.10. Consider x2 5x + 6 y 00 5y 0 2y = 0, where y(0) = 1 and
y 0 (0) = 1. Find the series solution as far as x3 and determine the lower bound
on its radius of convergence from the recursion formula.
Solution. We have
2y =
5y 0 =
6y 00 =
5xy 00 =
x2 y 00 =
2an xn ,
n=0
5nan xn1 =
n=0
5 (n + 1) an+1 xn =
n=0
5 (n + 1) an+1 xn ,
n=0
6n (n 1) an xn2 =
n=0
5n (n 1) an xn1 =
X
n=0
6 (n + 2) (n + 1) an+2 xn ,
5 (n + 1) nan+1 xn ,
n=0
n=0
n (n 1) an xn .
n=0
Therefore,
n=0
n (n 1) an
5 (n + 1) nan+1
+6 (n + 2) (n + 1) an+2
5 (n + 1) an+1 2an
6 (n + 2) (n + 1) an+2
n
n
x =
5 (n + 1) (n + 1) an x ,
n=0
+ n2 n 2 an
so it follows that
2
6 (n + 2) (n + 1) an+2 5 (n + 1) an+1 + (n + 1) (n 2) an = 0,
1 n2
5 n+1
an+1
an .
an+2 =
6 n+2
6 n+2
With a0 = y(0) = 1 and a1 = y 0 (0) = 1, we have
5
6
5
a3 =
6
a2 =
1
1 2
5
2
a1
a0 =
+
2
6 2
12 12
2 1 1
5 7
1
a1 =
+
3 6 3
9 12 6
7
,
12
1
35
6
41
=
+
=
.
3
108 108
108
Therefore,
y =1+x+
7 2
41 3
x +
x + .
12
108
95
Notice that although we need to write our equation in standard form (in
this case dividing by x² - 5x + 6 to determine that 0 is an ordinary point of
the equation), it is not necessary to use standard form when computing the
recursion relation. Instead we want to use a form in which our coefficients are
polynomials. Although our recursion relation is too messy for us to find a
general formula for an (level 3), there are nevertheless some advantages
in computing it. In particular, we can now proceed to use it to find the radius
of convergence.
To determine the radius of convergence, we use the ratio test R = 1/|L|,
where L = lim_{n→∞}(a_{n+1}/an), provided that the limit exists. It is hard to prove
that the limit exists, but assuming that it does, dividing the recursion by a_{n+1},

    a_{n+2}/a_{n+1} = (5/6)·((n+1)/(n+2)) - (1/6)·((n-2)/(n+2))·(an/a_{n+1}),

and taking the limit gives

    L = 5/6 - (1/6)(1/L)  ⟹  6L² - 5L + 1 = 0  ⟹  L = 1/2 or 1/3,

so R = 1/|L| ≥ 2, consistent with the singular points x = 2 and x = 3 of the equation.
Example 8.11. Consider (x² - 5x + 6)y'' - 5y' - 2y = 0. Find two linearly
independent series solutions y1 and y2 as far as x³.
Solution. Let y1(x) be the solution satisfying y1(0) = 1, y1'(0) = 0 and let y2(x)
be the solution satisfying y2(0) = 0, y2'(0) = 1. Then y1(x) and y2(x) will be
linearly independent solutions. We already computed the recursion relation in
Example 8.10. For y1(x), we have a0 = y(0) = 1 and a1 = y'(0) = 0. Thus,

    a2 = (5/6)(1/2)a1 - (1/6)(-1)a0 = 1/6,
    a3 = (5/6)(2/3)a2 - (1/6)(-1/3)a1 = 5/54.

Therefore,

    y1 = 1 + (1/6)x² + (5/54)x³ + ⋯.

For y2(x), we have a0 = 0 and a1 = 1. Thus,

    a2 = (5/6)(1/2)a1 - (1/6)(-1)a0 = 5/12,
    a3 = (5/6)(2/3)a2 - (1/6)(-1/3)a1 = 25/108 + 6/108 = 31/108.

Therefore,

    y2 = x + (5/12)x² + (31/108)x³ + ⋯.

Looking at the initial values, the solution

    y = 1 + x + (7/12)x² + (41/108)x³ + ⋯

of Example 8.10 ought to be given by y = y1 + y2, and comparing our answers we see
that they are consistent with this equation.
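The recursion of Examples 8.10 and 8.11 is easy to iterate exactly with rational arithmetic; this sketch (our own code) recomputes the coefficients and the y = y1 + y2 consistency check:

```python
# Iterate a[n+2] = (5/6)*(n+1)/(n+2)*a[n+1] - (1/6)*(n-2)/(n+2)*a[n]
# from given a0 = y(0), a1 = y'(0), using exact fractions.
from fractions import Fraction

def coeffs(a0, a1, N):
    a = [Fraction(a0), Fraction(a1)]
    for n in range(N - 1):
        a.append(Fraction(5, 6) * Fraction(n + 1, n + 2) * a[n + 1]
                 - Fraction(1, 6) * Fraction(n - 2, n + 2) * a[n])
    return a

y = coeffs(1, 1, 3)    # Example 8.10
y1 = coeffs(1, 0, 3)   # Example 8.11, first solution
y2 = coeffs(0, 1, 3)   # Example 8.11, second solution
print([str(c) for c in y])  # ['1', '1', '7/12', '41/108']
print([u + v for u, v in zip(y1, y2)] == y)  # True
```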
(8.3)
8.7
    a0F(λ) = 0,   (0)
    a1(λ+1)λ + a1(λ+1)p0 + a1q0 + a0λp1 + a0q1 = 0,   (1)
    ⋮

where F(λ) = λ(λ-1) + p0λ + q0 is the indicial polynomial.
(8.4)
Writing the equation as

    y'' + xP(x)·(y'/x) + x²Q(x)·(y/x²) = 0

and substituting y = Σ_{n=0}^{∞} anx^{λ+n} gives

    Σ_{n=0}^{∞} an(λ+n)(λ+n-1)x^{λ+n-2}
      + (p0 + p1x + p2x² + ⋯) Σ_{n=0}^{∞} an(λ+n)x^{λ+n-2}
      + (q0 + q1x + q2x² + ⋯) Σ_{n=0}^{∞} anx^{λ+n-2} = 0.
n=0
cn x2+n ,
n=0
X
L(x w(, x)) =
cn x2+n = c0 x2 .
n=0
g
(Lg) = L
x
x
since differentiation with respect to x commutes with differentiation with respect
to (cross derivatives equal). Therefore,
w
L x ln(x)w(, x) + x
= F 0 ()x2 + F ()x2 ln(x).
If k = 0, then
y2 = ln(x)y1 +
w
xs
=s
L x
!
n
an x
=L x
!
r+n2
cn x
n=0
n=0
L x
!
n
an x
n=0
cn xr+n2 = ck xs2 +
n=k
cn (r)xr+n2 ,
n=k+1
X
w
s
an x A ln(x)y1 (x) + x
L x
=
cn xr+n2 ,
=s
n=0
n=k+1
k1
a0 +a1 x + +ak1x
xr
w
X
x
+ ak A
=
=s
cn xr+n2 .
L
X
n=k+1
an xn A ln(x)y1 (x)
+
r
n=k+1
The choice of ak does not matterwe can choose any. Then inductively solve
for ak+1 , ak+2 , . . . in terms of predecessor a? s to make cn = 0 for n > k. This
gives a solution to the de of the form
r
y2 = x
X
n=0
an xn A ln(x)y1 (x).
In general, then,

    y1 = x^r Σ_{n=0}^{∞} anxⁿ,   y2 = x^s Σ_{n=0}^{∞} bnxⁿ + B ln(x)·y1,

where r and s are the two solutions to the indicial equation. We have B = 0,
unless s = r + k for some k ∈ {0, 1, 2, ...}.
Example 8.12. Solve 4xy'' + 2y' - y = 0.
Solution. Let y = Σ_{n=0}^{∞} anx^{λ+n}. Then

    y' = Σ (λ+n)anx^{λ+n-1},   y'' = Σ (λ+n)(λ+n-1)anx^{λ+n-2}.

Substituting,

    Σ_{n=0}^{∞} 4(λ+n)(λ+n-1)anx^{λ+n-1} + Σ_{n=0}^{∞} 2(λ+n)anx^{λ+n-1} - Σ_{n=0}^{∞} anx^{λ+n} = 0,
    Σ_{n=0}^{∞} [4(λ+n)(λ+n-1) + 2(λ+n)]anx^{λ+n-1} - Σ_{n=1}^{∞} a_{n-1}x^{λ+n-1} = 0.

Note that

    4(λ+n)(λ+n-1) + 2(λ+n) = 2(λ+n)(2λ+2n-2+1) = 2(λ+n)(2λ+2n-1).

Therefore,

    2λ(2λ-1)a0x^{λ-1} + Σ_{n=1}^{∞} [2(λ+n)(2λ+2n-1)an - a_{n-1}]x^{λ+n-1} = 0.

The indicial equation 2λ(2λ-1) = 0 gives λ = 0 or λ = 1/2. For λ = 0,

    an = a_{n-1}/(2n(2n-1)) = a_{n-2}/((2n)(2n-1)(2n-2)(2n-3)) = ⋯ = a0/(2n)!.

Therefore,

    y = x⁰a0(1 + x/2! + x²/4! + ⋯ + xⁿ/(2n)! + ⋯)
      = a0(1 + x/2! + x²/4! + ⋯ + xⁿ/(2n)! + ⋯)
      = a0 cosh(√x).

For λ = 1/2,

    2(1/2 + n)(2n)bn = b_{n-1}  ⟹  bn = b_{n-1}/((2n+1)(2n)).

Thus,

    b1 = b0/(3·2),
    b2 = b1/(5·4) = b0/5!,
    ⋮
    bn = b0/(2n+1)!.

Therefore,

    y = x^{1/2}b0(1 + x/3! + x²/5! + ⋯ + xⁿ/(2n+1)! + ⋯)
      = b0(x^{1/2} + x^{3/2}/3! + x^{5/2}/5! + ⋯ + x^{(2n+1)/2}/(2n+1)! + ⋯)
      = b0 sinh(√x).
In general, however, it is only possible to calculate a few low-order terms. We cannot
even get a general formula for an, let alone recognize the series.
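For Example 8.12 the closed forms can be confirmed directly; a small sympy sketch (our own check):

```python
# cosh(sqrt(x)) and sinh(sqrt(x)) should both satisfy 4*x*y'' + 2*y' - y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
for y in (sp.cosh(sp.sqrt(x)), sp.sinh(sp.sqrt(x))):
    residual = 4*x*y.diff(x, 2) + 2*y.diff(x) - y
    print(sp.simplify(residual))  # 0 for both
```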
As mentioned, the above methods work near a point a by using the same technique
after the substitution v = x - a. We can also get solutions near infinity, i.e.,
for large x: we substitute t = 1/x and look at t = 0. Then

    dy/dx = (dy/dt)(dt/dx) = -(1/x²)(dy/dt) = -t²(dy/dt),
    d²y/dx² = (2/x³)(dy/dt) - (1/x²)(d²y/dt²)(dt/dx) = 2t³(dy/dt) + t⁴(d²y/dt²).

Then

    d²y/dx² + P(x)(dy/dx) + Q(x)y = 0
      ⟹  t⁴(d²y/dt²) + (2t³ - t²P(1/t))(dy/dt) + Q(1/t)y = 0.
Example 8.13 (Legendre's equation). Find the behaviour for large x of the solutions
of Legendre's equation (1 - x²)y'' - 2xy' + n(n+1)y = 0. Substituting t = 1/x gives

    (1 - 1/t²)(2t³ dy/dt + t⁴ d²y/dt²) - (2/t)(-t² dy/dt) + n(n+1)y = 0,
    (t⁴ - t²) d²y/dt² + 2t³ dy/dt + n(n+1)y = 0,
    d²y/dt² + (2t/(t² - 1)) dy/dt + (n(n+1)/(t⁴ - t²))y = 0.

The point t = 0 is a regular singular point, so the solutions for large x are given by
series in 1/x, the first of which begins

    y = (1/x) Σ_{k=0}^{∞} ak/x^k.
Solution. Consider y'' + y/x + y = 0. Let

    y = x^λ Σ_{n=0}^{∞} anxⁿ = Σ_{n=0}^{∞} anx^{n+λ}.

Then

    y' = Σ an(n+λ)x^{n+λ-1},
    y'' = Σ an(n+λ)(n+λ-1)x^{n+λ-2},
    y/x = Σ anx^{n+λ-1} = Σ_{n=1}^{∞} a_{n-1}x^{n+λ-2},
    y = Σ anx^{n+λ} = Σ_{n=2}^{∞} a_{n-2}x^{n+λ-2}.

Collecting coefficients of x^{n+λ-2} gives

    a0λ(λ-1) = 0,   (0)   [the indicial equation]
    a1(λ+1)λ + a0 = 0,   (1)
    ⋮
    an(n+λ)(n+λ-1) + a_{n-1} + a_{n-2} = 0.   (n)

Note that Equation (0) implies that λ = 0, 1. The roots differ by an integer;
the largest always gives a solution. For λ = 1, Equation (1) implies that
2a1 + a0 = 0, so a1 = -a0/2. Equation (n) then becomes

    an(n+1)n + a_{n-1} + a_{n-2} = 0  ⟹  an = -(a_{n-1} + a_{n-2})/(n(n+1)).
Thus

    a2 = -(a1 + a0)/6 = -(a0/2)/6 = -a0/12,
    a3 = -(a2 + a1)/12 = -(-a0/12 - a0/2)/12 = (7a0/12)/12 = 7a0/144.

Therefore,

    y1 = a0x(1 - x/2 - x²/12 + 7x³/144 + ⋯).
As for the second solution, we have

    y = x⁰(b0 + b1x + b2x² + ⋯) + C ln(x)·y1,
    y' = b1 + 2b2x + ⋯ + (C/x)y1 + C ln(x)·y1',
    y'' = 2b2 + 6b3x + ⋯ - (C/x²)y1 + (2C/x)y1' + C ln(x)·y1''.
y 0 = b1 + 2b2 x + +
b0
y1
+
+ b1 + b2 x + + C ln(x)
x
x
C
2C 0
2
2b2 + 6b3 x + 24b4 x + x2 y1 + x y1
b0
+ b1 + b2 x +
+ b1 x + 2b2 x2 + + Cy1 +
x}
|
{z
y
= 0,
x
= 0,
= 0,
C
7 3
1
1 2
2
2b2 + 6b3 x + 24b4 x +
1 x x +
x +
x
2
12
144
2C
1
7
+
1 x + x2 + x3 +
x
4
36
1
1
7 4
2
+ b1 x + 2b2 x + + C x x2 x3 +
x +
2
12
144
b0
+
+ b1 + b2 x +
x
= 0.
Example. Solve

    y'' - y'/x + (10e^x/x²)y = 0.

Solution. Let y = Σ_{n=0}^{∞} anx^{λ+n}. Then

    y' = Σ (λ+n)anx^{λ+n-1},
    y'' = Σ (λ+n)(λ+n-1)anx^{λ+n-2},
    y/x² = Σ anx^{λ+n-2},
    y'/x = Σ (λ+n)anx^{λ+n-2}.

Substituting, with e^x = Σ_{n=0}^{∞} xⁿ/n!,

    Σ (λ+n)(λ+n-1)anx^{λ+n-2} - Σ (λ+n)anx^{λ+n-2} + 10(Σ xⁿ/n!)(Σ anx^{λ+n-2}) = 0.

The lowest power of x gives

    λ(λ-1)a0 - λa0 + 10a0 = 0.

Therefore, F(λ) = λ(λ-1) - λ + 10 = 0, so λ = 1 ± 3i. For λ = 1 + 3i, the next
coefficient gives

    (λ+1)λa1 - (λ+1)a1 + 10(a0 + a1) = 0,
    F(λ+1)a1 + 10a0 = 0.

Therefore,

    a1 = -10a0/F(λ+1) = -10a0/F(2+3i) = -10a0/(1+6i)
       = -10(1-6i)a0/((1+6i)(1-6i)) = -((10-60i)/37)a0.
The next coefficient gives

    F(λ+2)a2 + 10(a0/2! + a1) = 0,
    F(λ+2)a2 + (185/37)a0 - ((100-600i)/37)a0 = 0,
    F(λ+2)a2 + ((85+600i)/37)a0 = 0.

Therefore,

    a2 = -((85+600i)/(37F(λ+2)))a0,

and we have

    z = x^{1+3i}a0(1 - ((10-60i)/37)x + ⋯).
But

    x^{1+3i} = x·x^{3i} = xe^{3i ln(x)} = x[cos(3 ln(x)) + i sin(3 ln(x))],

so

    z = a0x cos(3 ln(x)) + ia0x sin(3 ln(x))
        + a0x²(-(10/37) cos(3 ln(x)) - (60/37) sin(3 ln(x)))
        + ia0x²((60/37) cos(3 ln(x)) - (10/37) sin(3 ln(x))) + ⋯.

Therefore,

    Re(z) = a0 [x cos(3 ln(x))
                + x²(-(10/37) cos(3 ln(x)) - (60/37) sin(3 ln(x)))
                + x³(? cos(3 ln(x)) + ? sin(3 ln(x))) + ⋯],

where the x³ coefficients come from a2.
Example. Consider y'' + ((1-x)/x)y' - (1/x)y = 0. We have

    p0 = lim_{x→0} xP(x) = lim_{x→0} x·(1-x)/x = 1,
    q0 = lim_{x→0} x²Q(x) = lim_{x→0} x²·(-1/x) = 0.

So x = 0 is a regular singular point with indicial equation λ(λ-1) + λ = λ² = 0,
a double root, so the solutions look like

    y1 = Σ_{n=0}^{∞} anxⁿ,   y2 = Σ_{n=0}^{∞} bnxⁿ + C ln(x)·y1,   x > 0.

To find y1, we have

    y = Σ_{n=0}^{∞} anxⁿ,
    y' = Σ_{n=0}^{∞} nanx^{n-1} = Σ_{n=0}^{∞} (n+1)a_{n+1}xⁿ,
    y'' = Σ_{n=0}^{∞} (n+1)na_{n+1}x^{n-1}.

Substituting gives

    (n+1)²a_{n+1} = (n+1)an  ⟹  a_{n+1} = an/(n+1).

Therefore,

    a1 = a0/1,   a2 = a1/2 = a0/2,   a3 = a2/3 = a0/3!,   ...,   an = a0/n!.

So we have

    y = a0 Σ_{n=0}^{∞} xⁿ/n! = a0e^x.

For the second solution, reduction of order gives

    y2 = y1 ∫ e^{-∫P(x) dx}/y1² dx = e^x ∫ e^{-∫((1/x)-1) dx}/e^{2x} dx
       = e^x ∫ e^{-(ln(x)-x)}/e^{2x} dx = e^x ∫ (e^x/x)/e^{2x} dx
       = e^x ∫ (e^{-x}/x) dx.
Example 8.17. Solve (x² + 2x)y'' - 2(x² + 2x - 1)y' + (x² + 2x - 2)y = 0.
Solution. Let y = Σ_{n=0}^{∞} anx^{λ+n}. Then

    y' = Σ_{n=0}^{∞} (λ+n)anx^{λ+n-1},   y'' = Σ_{n=0}^{∞} (λ+n)(λ+n-1)anx^{λ+n-2}.
X
X
+n
(
+
n)
(
+
n
1)
a
x
+
2
( + n) ( + n 1) an x+n1
n
n=0
n=0
X
X
2
( + n) an x+n+1 4
( + n) an x+n
n=0
n=0
X
X
X
+n1
+n+2
+2
(
+
n)
a
x
+
a
x
+
2
an x+n+1
n
n
n=0
n=0
n=0
X
2
a x+n
n
= 0,
n=0
+n1
( + n 1) ( + n 2) an1 x
n=1
+2 X ( + n) ( + n 1) a x+n1 2 X ( + n 2) a
+n1
n2 x
n
n=2
n=0
X
X
( + n) an x+n1
( + n 1) an1 x+n1 + 2
4
n=0
n=1
X
X
X
+
an1 x+n1
an2 x+n1 2
an3 x+n1 + 2
n=2
n=3
= 0,
n=1
[2 ( + n) ( + n 1) + 2 ( + n)] an
X
+n1
= 0,
+ [( + n 1) ( + n 2) 4 ( + n 1) 2] an1 x
n=0
+ [2 ( + n 2) + 2] an2 + an3
h
i
2
2
X
2 ( + n) an + ( + n) 7 ( + n) + 4 an1
= 0.
+ [2 ( + n) + 6] an2 + an3
n=0
1
,
2
1
1
1 = 3 = a3 = ,
2
6
1
1
4
1
32a4 8a3 (2) a2 + (0) a1 + a2 = 0 = 32a3 = 8 2 1 = = a3 =
.
6
2
3
24
18a3 + (8) a2 + (0) a1 + a2 = 0 = 18a3 = 8
If our intuition leads us to guess that perhaps an = 1/n!, we can use induction
to check that this guess is in fact correct. Explicitly, suppose by induction that
ak = 1/k! for all k < n. Then the recursion relation yields

    2n²an = -(n² - 7n + 4)/(n-1)! + (2n - 6)/(n-2)! - 1/(n-3)!
          = (-n² + 7n - 4 + (2n-6)(n-1) - (n-1)(n-2))/(n-1)!
          = (-n² + 7n - 4 + 2n² - 8n + 6 - n² + 3n - 2)/(n-1)!
          = 2n/(n-1)!,

so an = 1/(n·(n-1)!) = 1/n!, completing the induction.
Therefore y2 = xe^x + 2 ln(x)·y1.

8.8
8.8.1 Chebyshev Equation
The Chebyshev equation is

    (1 - x²)y'' - xy' + λ²y = 0.   (8.5)

Let y = Σ_{n=0}^{∞} anxⁿ, so that

    y' = Σ_{n=0}^{∞} nanx^{n-1},
    y'' = Σ_{n=0}^{∞} n(n-1)anx^{n-2} = Σ_{n=0}^{∞} (n+2)(n+1)a_{n+2}xⁿ.

Substituting,

    Σ_{n=0}^{∞} [(n+2)(n+1)a_{n+2} - n(n-1)an - nan + λ²an]xⁿ = 0,

so

    a_{n+2} = (n(n-1) + n - λ²)/((n+2)(n+1))·an = ((n² - λ²)/((n+2)(n+1)))an.

Taking a0 = 1 and a1 = 0,

    a2 = -λ²/2,   a3 = 0,
    a4 = ((4 - λ²)/(4·3))a2 = -(4 - λ²)λ²/4!,   a5 = 0,
    a6 = -(16 - λ²)(4 - λ²)λ²/6!,
    ⋮
    a_{2n} = -(4(n-1)² - λ²)(4(n-2)² - λ²)⋯(4 - λ²)λ²/(2n)!.

Also, taking a0 = 0 and a1 = 1,

    a2 = 0,   a3 = ((1 - λ²)/(3·2))a1,
    a4 = 0,   a5 = ((9 - λ²)(1 - λ²)/5!)a1,
    ⋮
    a_{2n+1} = (((2n-1)² - λ²)((2n-3)² - λ²)⋯(1 - λ²)/(2n+1)!)a1.

Note that if λ is an integer, then one of these solutions terminates, i.e.,
ak = 0 beyond some point. In this case, one solution is a polynomial, known as
a Chebyshev polynomial.
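The termination for integer λ is easy to see numerically; this sketch (our own code, using λ = 2 as an illustration) iterates the recursion exactly:

```python
# Iterate a[n+2] = (n**2 - lam**2)/((n+2)*(n+1)) * a[n] with lam = 2,
# a0 = 1, a1 = 0: the even series stops after 1 - 2*x**2, which is
# proportional to the Chebyshev polynomial T2(x) = 2*x**2 - 1.
from fractions import Fraction

lam = 2
a = [Fraction(1), Fraction(0)]
for n in range(8):
    a.append(Fraction(n**2 - lam**2, (n + 2) * (n + 1)) * a[n])

print([str(c) for c in a[:6]])  # ['1', '0', '-2', '0', '0', '0']
```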
8.8.2
Legendre Equation

The Legendre equation is

    (1 - x²)y'' - 2xy' + λ(λ+1)y = 0.   (8.6)

About the ordinary point x = 0, substituting y = Σ_{n=0}^{∞} anxⁿ gives

    a_{n+2} = ((n(n-1) + 2n - λ(λ+1))/((n+2)(n+1)))an = ((n(n+1) - λ(λ+1))/((n+2)(n+1)))an.

At x = 1, we have

    p0 = lim_{x→1}(x-1)P(x) = lim_{x→1}(x-1)·(-2x/(1-x²)) = lim_{x→1} 2x/(x+1) = 1,
    q0 = lim_{x→1}(x-1)²Q(x) = lim_{x→1}(x-1)²·(λ(λ+1)/(1-x²)) = 0.

Therefore, x = 1 is a regular singular point. The indicial equation is

    α(α-1) + p0α + q0 = 0,
    α(α-1) + α = 0,
    α² = 0.

Therefore, α = 0 is a double root and the solutions look like

    y1 = Σ_{n=0}^{∞} an(x-1)ⁿ,   y2 = Σ_{n=0}^{∞} bn(x-1)ⁿ + C ln(x-1)·y1

for x > 1.
8.8.3
Airy Equation

The Airy equation is

    y'' = xy.   (8.7)

Substituting y = Σ_{n=0}^{∞} anxⁿ gives

    a_{n+2} = a_{n-1}/((n+2)(n+1)),

therefore,

    a3 = a0/(3·2),   a6 = a3/(6·5),   ...,   a_{3n} = a0/(2·3·5·6·8·9 ⋯ (3n-1)(3n)),
    a4 = a1/(4·3),   a7 = a4/(7·6),   ...,   a_{3n+1} = a1/(3·4·6·7 ⋯ (3n)(3n+1)),

and, since a2 = 0,

    0 = a2 = a5 = a8 = ⋯.

8.8.4
Laguerre's Equation

The Laguerre equation is

    xy'' + (1 - x)y' + λy = 0.   (8.8)

We have

    p0 = lim_{x→0} x·((1-x)/x) = 1,
    q0 = lim_{x→0} x²·(λ/x) = 0,

so x = 0 is a regular singular point with double indicial root 0, and

    y1 = Σ_{n=0}^{∞} anxⁿ,   y2 = Σ_{n=0}^{∞} bnxⁿ + C ln(x)·y1

for x > 0.
8.8.5
Bessel Equation

The Bessel equation is

    x²y'' + xy' + (x² - ν²)y = 0.   (8.9)

We have

    p0 = lim_{x→0} x·(x/x²) = 1,
    q0 = lim_{x→0} x²·((x² - ν²)/x²) = -ν²,

so the indicial equation is α(α-1) + α - ν² = α² - ν² = 0, the roots are α = ±ν,
and the solutions look like

    y1 = x^ν Σ_{n=0}^{∞} anxⁿ,   y2 = x^{-ν} Σ_{n=0}^{∞} bnxⁿ + C ln(x)·y1.

For ν = 1/2, we consider

    y1 = x^{1/2} Σ_{n=0}^{∞} anxⁿ,   y2 = x^{-1/2} Σ_{n=0}^{∞} bnxⁿ + C ln(x)·y1.

For y1, we have

    y = Σ_{n=0}^{∞} anx^{n+1/2},
    y' = Σ_{n=0}^{∞} (n + 1/2)anx^{n-1/2},
    y'' = Σ_{n=0}^{∞} (n + 1/2)(n - 1/2)anx^{n-3/2},

where (n + 1/2)(n - 1/2) = n² - 1/4. Therefore, the equation
x²y'' + xy' + (x² - 1/4)y = 0 becomes

    Σ_{n=0}^{∞} [(n² - 1/4)an + (n + 1/2)an + a_{n-2} - (1/4)an]x^{n+1/2} = 0.

From this, we see that

    (n² - 1/4 + n + 1/2 - 1/4)an + a_{n-2} = 0,

so

    an = -a_{n-2}/(n² + n) = -a_{n-2}/(n(n+1)).

Therefore,

    a2 = -a0/(3·2),   a4 = -a2/(5·4) = a0/5!,   a6 = -a0/7!,   ...,   a_{2n} = (-1)ⁿa0/(2n+1)!.

Therefore

    y1 = Σ_{n=0}^{∞} (-1)ⁿx^{2n+1/2}/(2n+1)! = (1/√x) Σ_{n=0}^{∞} (-1)ⁿx^{2n+1}/(2n+1)! = sin(x)/√x.
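The closed form just found can be confirmed against the equation directly (a sketch we add):

```python
# y1 = sin(x)/sqrt(x) should satisfy the Bessel equation of order 1/2,
# x**2*y'' + x*y' + (x**2 - 1/4)*y = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
y1 = sp.sin(x)/sp.sqrt(x)
residual = x**2*y1.diff(x, 2) + x*y1.diff(x) + (x**2 - sp.Rational(1, 4))*y1
print(sp.simplify(residual))  # 0
```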
Consider

    Σ_{n=0}^{∞} x^{3n}/(3n)! = 1 + x³/3! + x⁶/6! + x⁹/9! + ⋯,

which satisfies y''' = y with y(0) = 1, y'(0) = 0, y''(0) = 0. The characteristic
equation λ³ = 1 has roots

    λ = 1,   -1/2 ± (√3/2)i,

so

    y = C1e^x + C2e^{-x/2} sin((√3/2)x) + C3e^{-x/2} cos((√3/2)x).

The initial conditions give

    C1 + 0·C2 + C3 = 1,
    C1 + (√3/2)C2 - (1/2)C3 = 0,
    C1 - (√3/2)C2 - (1/2)C3 = 0.

Row reducing this system gives C2 = 0, C1 = 1/3, and C3 = 2/3. Therefore,

    Σ_{n=0}^{∞} x^{3n}/(3n)! = (1/3)e^x + (2/3)e^{-x/2} cos((√3/2)x).
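The identity can be spot-checked numerically; a small sketch (our own code) compares a truncation of the series with the closed form:

```python
# Compare sum_{n>=0} x**(3n)/(3n)! against exp(x)/3 + (2/3)*exp(-x/2)*cos(sqrt(3)*x/2).
import math

def series(x, terms=30):
    return sum(x**(3*n) / math.factorial(3*n) for n in range(terms))

def closed_form(x):
    return math.exp(x)/3 + (2/3)*math.exp(-x/2)*math.cos(math.sqrt(3)*x/2)

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(series(x) - closed_form(x)) < 1e-12
print("series and closed form agree")
```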
Chapter 9
Linear Systems
9.1
Preliminaries
Consider

    dY/dx = AY + B,

where

    A = [a b; c d],   Y = [y1; y2],

such that

    dy1/dx = ay1 + by2,   dy2/dx = cy1 + dy2.
We will first consider the homogeneous case where B = 0.
Only in the case when the entries of A are constants is there an algorithmic
method of finding the solution. If A = a is a constant (the 1 × 1 case), then we know that
y = Ce^{ax} is a solution to dy/dx = ay. We will define a matrix e^A such that the
general solution of dY/dx = AY is Y = e^{xA}C, where C = [C1, C2, ..., Cn]ᵀ is
a vector of arbitrary constants. We define e^T for an arbitrary n × n matrix T
and apply it with T = xA. Indeed, we know that

    e^t = 1 + t + t²/2! + t³/3! + t⁴/4! + ⋯ + tⁿ/n! + ⋯,

so we define

    e^T = I + T + T²/2! + T³/3! + ⋯ + Tⁿ/n! + ⋯.   (9.1)
Example. Solve

    Y' = [1 1; 0 1] Y.

Solution. Here

    A = [1 1; 0 1] = I + N,   where N = [0 1; 0 0]

and N² = 0. Then

    Aⁿ = (I + N)ⁿ = I + nN   (all higher binomial terms vanish since N² = 0)
       = [1 n; 0 1].

Therefore,

    e^{Ax} = Σ_{n=0}^{∞} Aⁿxⁿ/n!
           = [1 + x + x²/2! + ⋯ + xⁿ/n! + ⋯   x(1 + x + x²/2! + ⋯);
              0                                1 + x + x²/2! + ⋯]
           = [e^x xe^x; 0 e^x],

since Σ_{n≥1} nxⁿ/n! = x Σ_{n≥1} x^{n-1}/(n-1)! = xe^x. Hence

    [y1; y2] = [e^x xe^x; 0 e^x][C1; C2] = [C1e^x + C2xe^x; C2e^x],

that is,

    y1 = C1e^x + C2xe^x,
    y2 = C2e^x.
Differentiating the series term by term,

    d/dx(I + Ax + A²x²/2! + ⋯ + Aⁿxⁿ/n! + ⋯)
      = A + A²x + ⋯ + Aⁿx^{n-1}/(n-1)! + ⋯
      = Ae^{Ax}.

So if Y = e^{Ax}C, then

    dY/dx = Ae^{Ax}C = AY,

as required. Therefore, Y = e^{Ax}C is indeed the general solution.
9.2
Computing e^T

If D is a diagonal matrix,

    D = diag(λ1, λ2, ..., λn),

then

    e^D = diag(e^{λ1}, e^{λ2}, ..., e^{λn}).
Example. Let

    T = [4 1; 3 2].

Solution. We diagonalize T. We have

    |4-λ 1; 3 2-λ| = 0,
    (4-λ)(2-λ) - 3 = 0,
    8 - 6λ + λ² - 3 = 0,
    λ² - 6λ + 5 = 0,
    (λ-5)(λ-1) = 0.

Therefore, λ = 1 or λ = 5. For λ = 1, we have

    [4-1 1; 3 2-1][a; b] = [0; 0],
    [3 1; 3 1][a; b] = [0; 0],
    [3a + b; 3a + b] = [0; 0].

Therefore b = -3a, so

    [a; b] = [a; -3a] = a[1; -3].

For λ = 5, we have

    [4-5 1; 3 2-5][a; b] = [0; 0],
    [-1 1; 3 -3][a; b] = [0; 0],
    [-a + b; 3a - 3b] = [0; 0].

We see that a = b, so

    [a; b] = [a; a] = a[1; 1].

So let

    P = [1 1; -3 1].

Then

    P^{-1} = (1/|P|)[1 -1; 3 1] = (1/4)[1 -1; 3 1],

and

    P^{-1}TP = (1/4)[1 -1; 3 1][4 1; 3 2][1 1; -3 1]
             = (1/4)[1 -1; 3 1][1 5; -3 5]
             = (1/4)[4 0; 0 20]
             = [1 0; 0 5].

In general, if P^{-1}AP = D = diag(λ1, λ2, ..., λn), then A = PDP^{-1} and

    Y = e^{Ax}C = e^{PDP^{-1}x}C = Pe^{xD}P^{-1}C
      = P diag(e^{λ1x}, e^{λ2x}, ..., e^{λnx}) C̃
      = [v1, v2, ..., vn] diag(e^{λ1x}, ..., e^{λnx}) C̃
      = C̃1e^{λ1x}v1 + C̃2e^{λ2x}v2 + ⋯ + C̃ne^{λnx}vn,

where C̃ = P^{-1}C and v1, ..., vn are the columns of P (the eigenvectors).
Example 9.3. Solve

    Y' = [4 1; 3 2] Y.

Solution. As computed above, the eigenvalues of the coefficient matrix are λ = 1
and λ = 5, with eigenvectors [1; -3] and [1; 1], respectively. Therefore,

    P = [1 1; -3 1],   D = [1 0; 0 5],

and we have

    Y = P[e^x 0; 0 e^{5x}]C̃
      = [1 1; -3 1][e^x 0; 0 e^{5x}][C̃1; C̃2]
      = [e^x e^{5x}; -3e^x e^{5x}][C̃1; C̃2]
      = C̃1e^x[1; -3] + C̃2e^{5x}[1; 1],

that is,

    y1 = C̃1e^x + C̃2e^{5x},
    y2 = -3C̃1e^x + C̃2e^{5x}.
Property 9.4
1
1. ePAP = PeA P1 .
2. If AB = BA, then eA+B = eA eB = eB eA .
MATB24.
128
3. If
1
2
D=
..
.
n
then
e1
e2
..
.
en
4. If An+1 = 0, then
eA = I + A +
A2
An
+ +
.
2!
n!
Proof. We have already proved (1), and (3) and (4) are trivial. What remains
is (2). Note that
\[
e^Ae^B = \left(I + A + \frac{A^2}{2!} + \cdots + \frac{A^n}{n!} + \cdots\right)\left(I + B + \frac{B^2}{2!} + \cdots + \frac{B^n}{n!} + \cdots\right)
= I + A + B + \frac{1}{2}\left(A^2 + 2AB + B^2\right) + \cdots.
\]
Also
\[
e^{A+B} = I + (A + B) + \frac{(A+B)^2}{2!} + \frac{(A+B)^3}{3!} + \cdots
= I + A + B + \frac{1}{2}\left(A^2 + AB + BA + B^2\right) + \cdots,
\]
and since $AB = BA$, the two expansions agree term by term.
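Property (4) is easy to see in action. A minimal sketch (my own, not from the text), using the nilpotent matrix $N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$:

```python
# For N = [[0, 1], [0, 0]] we have N^2 = 0, so the exponential series
# terminates after the linear term: e^N = I + N.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = [[0.0, 1.0], [0.0, 0.0]]
N2 = matmul(N, N)
assert N2 == [[0.0, 0.0], [0.0, 0.0]]   # N is nilpotent

expN = [[1.0 + N[0][0], N[0][1]],
        [N[1][0], 1.0 + N[1][1]]]       # e^N = I + N
assert expN == [[1.0, 1.0], [0.0, 1.0]]
```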
Theorem 9.5
Given T over C, there exist matrices P, D, and N with D diagonal,
N^{k+1} = 0 for some k, and DN = ND, such that T = P(D + N)P^{-1}.
Corollary 9.6
We have e^T = Pe^De^NP^{-1} (e^D and e^N can be computed as above).
The steps involved in finding P, N, and D are essentially the same as those
involved in putting T into Jordan Canonical form (over C).
9.3

Suppose T is a 2 × 2 matrix with distinct eigenvalues λ1 and λ2. Then
\[
P^{-1}TP = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix},
\]
so
\[
e^{Tx} = P\begin{pmatrix} e^{\lambda_1 x} & 0 \\ 0 & e^{\lambda_2 x} \end{pmatrix}P^{-1}.
\]
Next suppose that T has a repeated eigenvalue r, so that
\[
(T - rI)^2 = 0
\]
in this case. Let $S = T - rI$, so $S^2 = 0$. Therefore,
\[
e^S = I + S = I + T - rI = T + (1 - r)I
\]
and
\[
e^T = e^{rI}e^S = (T + (1 - r)I)\begin{pmatrix} e^r & 0 \\ 0 & e^r \end{pmatrix}.
\]
Finally, suppose the eigenvalues of T are complex, say $r \pm qi$. Then
\begin{align*}
(T - rI)^2 + (qI)^2 &= 0, \\
S^2 + (qI)^2 &= 0, \\
S^2 + q^2I &= 0,
\end{align*}
where $S = T - rI$, so $S^2 = -q^2I$. Therefore,
\begin{align*}
e^S &= I + S + \frac{S^2}{2!} + \frac{S^3}{3!} + \frac{S^4}{4!} + \frac{S^5}{5!} + \frac{S^6}{6!} + \cdots \\
&= I\left(1 - \frac{q^2}{2!} + \frac{q^4}{4!} - \frac{q^6}{6!} + \cdots\right) + S\left(1 - \frac{q^2}{3!} + \frac{q^4}{5!} - \cdots\right) \\
&= I\cos(q) + \frac{S}{q}\sin(q).
\end{align*}
Therefore,
\[
e^T = \left(\cos(q)I + \frac{T - rI}{q}\sin(q)\right)\begin{pmatrix} e^r & 0 \\ 0 & e^r \end{pmatrix}.
\]
Example. Solve
\[
Y' = \begin{pmatrix} -3 & -9 \\ 4 & 9 \end{pmatrix}Y.
\]
Solution. We have
\begin{align*}
p(\lambda) &= 0, \\
\begin{vmatrix} -3-\lambda & -9 \\ 4 & 9-\lambda \end{vmatrix} &= 0, \\
(-3-\lambda)(9-\lambda) + 36 &= 0, \\
-27 - 6\lambda + \lambda^2 + 36 &= 0, \\
\lambda^2 - 6\lambda + 9 &= 0, \\
(\lambda - 3)^2 &= 0.
\end{align*}
Therefore, $\lambda = 3$ is a double root and we have $Y = e^{Ax}C$. For A, the eigenvalue
is 3, so for $T = Ax$ the repeated eigenvalue is $r = 3x$.
Applying the repeated-root formula with $r = 3x$,
\begin{align*}
Y = e^{Ax}C &= (Ax + (1 - 3x)I)\begin{pmatrix} e^{3x} & 0 \\ 0 & e^{3x} \end{pmatrix}C \\
&= \left(\begin{pmatrix} -3x & -9x \\ 4x & 9x \end{pmatrix} + \begin{pmatrix} 1-3x & 0 \\ 0 & 1-3x \end{pmatrix}\right)\begin{pmatrix} e^{3x} & 0 \\ 0 & e^{3x} \end{pmatrix}C \\
&= \begin{pmatrix} 1-6x & -9x \\ 4x & 1+6x \end{pmatrix}\begin{pmatrix} e^{3x} & 0 \\ 0 & e^{3x} \end{pmatrix}C \\
&= \begin{pmatrix} e^{3x} - 6xe^{3x} & -9xe^{3x} \\ 4xe^{3x} & e^{3x} + 6xe^{3x} \end{pmatrix}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} \\
&= \begin{pmatrix} C_1 e^{3x} - 6C_1 xe^{3x} - 9C_2 xe^{3x} \\ 4C_1 xe^{3x} + C_2 e^{3x} + 6C_2 xe^{3x} \end{pmatrix} \\
&= C_1 e^{3x}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + C_1 xe^{3x}\begin{pmatrix} -6 \\ 4 \end{pmatrix} + C_2 e^{3x}\begin{pmatrix} 0 \\ 1 \end{pmatrix} + C_2 xe^{3x}\begin{pmatrix} -9 \\ 6 \end{pmatrix}.
\end{align*}
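As a numerical cross-check (my own, not from the text; it assumes the matrix is $\begin{pmatrix} -3 & -9 \\ 4 & 9 \end{pmatrix}$, which is consistent with the computations shown), the closed form $e^{Ax} = e^{3x}(I + (A - 3I)x)$ can be compared against the truncated exponential series:

```python
import math

# Compare e^{Ax} = e^{3x}(I + (A - 3I)x) with the series sum_k (Ax)^k / k!
# for a matrix with double eigenvalue 3 and (A - 3I)^2 = 0.
A = [[-3.0, -9.0], [4.0, 9.0]]
x = 0.4

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ax = [[A[i][j] * x for j in range(2)] for i in range(2)]

# Truncated exponential series: I + Ax + (Ax)^2/2! + ...
series = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 40):
    term = matmul(term, Ax)
    term = [[term[i][j] / k for j in range(2)] for i in range(2)]
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# Closed form e^{3x}(I + (A - 3I)x)
e3x = math.exp(3 * x)
closed = [[e3x * (1 - 6 * x), e3x * (-9 * x)],
          [e3x * (4 * x),     e3x * (1 + 6 * x)]]

max_err = max(abs(series[i][j] - closed[i][j]) for i in range(2) for j in range(2))
assert max_err < 1e-9
```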
Example. Solve
\[
Y' = \begin{pmatrix} 1 & -4 \\ 1 & 1 \end{pmatrix}Y.
\]
Solution. We have
\begin{align*}
p(\lambda) &= 0, \\
\begin{vmatrix} 1-\lambda & -4 \\ 1 & 1-\lambda \end{vmatrix} &= 0, \\
(1-\lambda)(1-\lambda) + 4 &= 0, \\
(\lambda - 1)^2 + 4 &= 0.
\end{align*}
Therefore, $\lambda = 1 \pm 2i$. For A, the roots are $1 \pm 2i$, so for $T = Ax$, the roots are
$(1 \pm 2i)x$, i.e., $r = x$ and $q = 2x$. Note that
\[
\frac{T - rI}{q} = \frac{1}{2x}\begin{pmatrix} 0 & -4x \\ x & 0 \end{pmatrix} = \begin{pmatrix} 0 & -2 \\ \frac{1}{2} & 0 \end{pmatrix}.
\]
Thus,
\begin{align*}
Y = e^{Ax}C &= \left(\cos(2x)I + \sin(2x)\begin{pmatrix} 0 & -2 \\ \frac{1}{2} & 0 \end{pmatrix}\right)\begin{pmatrix} e^x & 0 \\ 0 & e^x \end{pmatrix}C \\
&= \begin{pmatrix} \cos(2x) & -2\sin(2x) \\ \frac{1}{2}\sin(2x) & \cos(2x) \end{pmatrix}\begin{pmatrix} e^x & 0 \\ 0 & e^x \end{pmatrix}C \\
&= \begin{pmatrix} e^x\cos(2x) & -2e^x\sin(2x) \\ \frac{1}{2}e^x\sin(2x) & e^x\cos(2x) \end{pmatrix}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} \\
&= \begin{pmatrix} C_1 e^x\cos(2x) - 2C_2 e^x\sin(2x) \\ \frac{1}{2}C_1 e^x\sin(2x) + C_2 e^x\cos(2x) \end{pmatrix}.
\end{align*}
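A finite-difference spot check (my own addition) that the components $y_1 = C_1e^x\cos(2x) - 2C_2e^x\sin(2x)$ and $y_2 = \frac{1}{2}C_1e^x\sin(2x) + C_2e^x\cos(2x)$ satisfy $y_1' = y_1 - 4y_2$ and $y_2' = y_1 + y_2$:

```python
import math

# Central differences approximate the derivatives; the residuals against
# the right-hand sides y1 - 4 y2 and y1 + y2 should be tiny.
C1, C2 = 1.0, 0.5

def y1(x):
    return C1 * math.exp(x) * math.cos(2 * x) - 2 * C2 * math.exp(x) * math.sin(2 * x)

def y2(x):
    return 0.5 * C1 * math.exp(x) * math.sin(2 * x) + C2 * math.exp(x) * math.cos(2 * x)

h = 1e-6
for x in (0.0, 0.7):
    d1 = (y1(x + h) - y1(x - h)) / (2 * h)   # central difference
    d2 = (y2(x + h) - y2(x - h)) / (2 * h)
    assert abs(d1 - (y1(x) - 4 * y2(x))) < 1e-6
    assert abs(d2 - (y1(x) + y2(x))) < 1e-6
```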
9.4

Consider the nonhomogeneous system
\[
Y' = AY + B, \tag{9.2a}
\]
and let $Y_h$ be a fundamental matrix of solutions of the associated homogeneous system, so that
\[
Y_h' = AY_h. \tag{9.2b}
\]
We look for a particular solution of the form $Y_p = Y_hV$, where
\[
V = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}. \tag{9.2c}
\]
To find V, we have
\[
Y_p = Y_hV, \qquad Y_p' = Y_h'V + Y_hV' = AY_hV + Y_hV' = AY_p + Y_hV',
\]
using (9.2b). Comparing with (9.2a) shows that $Y_hV' = B$, so $V' = Y_h^{-1}B$.
Example. Solve
\[
Y' = \begin{pmatrix} 4 & 1 \\ 3 & 2 \end{pmatrix}Y + \begin{pmatrix} e^{5x} \\ e^{2x} \end{pmatrix}.
\]
Solution. From Example 9.3, the homogeneous system has
\[
P = \begin{pmatrix} 1 & 1 \\ -3 & 1 \end{pmatrix}, \qquad
D = \begin{pmatrix} 1 & 0 \\ 0 & 5 \end{pmatrix}, \qquad
Y_h = \begin{pmatrix} e^x & e^{5x} \\ -3e^x & e^{5x} \end{pmatrix}.
\]
It follows that
\[
V' = Y_h^{-1}B = \frac{1}{4}\begin{pmatrix} e^{-x} & -e^{-x} \\ 3e^{-5x} & e^{-5x} \end{pmatrix}\begin{pmatrix} e^{5x} \\ e^{2x} \end{pmatrix}
= \frac{1}{4}\begin{pmatrix} e^{4x} - e^{x} \\ 3 + e^{-3x} \end{pmatrix},
\]
so
\[
V = \frac{1}{4}\begin{pmatrix} \frac{1}{4}e^{4x} - e^{x} \\ 3x - \frac{1}{3}e^{-3x} \end{pmatrix}
\]
and
\begin{align*}
Y_p = Y_hV &= \begin{pmatrix} e^x & e^{5x} \\ -3e^x & e^{5x} \end{pmatrix}\cdot\frac{1}{4}\begin{pmatrix} \frac{1}{4}e^{4x} - e^{x} \\ 3x - \frac{1}{3}e^{-3x} \end{pmatrix} \\
&= \frac{1}{4}\begin{pmatrix} \frac{1}{4}e^{5x} - e^{2x} + 3xe^{5x} - \frac{1}{3}e^{2x} \\ -\frac{3}{4}e^{5x} + 3e^{2x} + 3xe^{5x} - \frac{1}{3}e^{2x} \end{pmatrix} \\
&= e^{5x}\begin{pmatrix} \frac{1}{16} \\ -\frac{3}{16} \end{pmatrix} + e^{2x}\begin{pmatrix} -\frac{1}{4} \\ \frac{3}{4} \end{pmatrix} + xe^{5x}\begin{pmatrix} \frac{3}{4} \\ \frac{3}{4} \end{pmatrix} + e^{2x}\begin{pmatrix} -\frac{1}{12} \\ -\frac{1}{12} \end{pmatrix}.
\end{align*}
Therefore,
\begin{align*}
Y = Y_hC + Y_p
&= C_1 e^x\begin{pmatrix} 1 \\ -3 \end{pmatrix} + C_2 e^{5x}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + e^{5x}\begin{pmatrix} \frac{1}{16} \\ -\frac{3}{16} \end{pmatrix} \\
&\quad + e^{2x}\begin{pmatrix} -\frac{1}{4} \\ \frac{3}{4} \end{pmatrix} + xe^{5x}\begin{pmatrix} \frac{3}{4} \\ \frac{3}{4} \end{pmatrix} + e^{2x}\begin{pmatrix} -\frac{1}{12} \\ -\frac{1}{12} \end{pmatrix},
\end{align*}
where $C_1$ and $C_2$ are arbitrary constants.
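A spot check (my own addition) that a particular solution of this form satisfies $Y_p' = AY_p + B$; it assumes the components combine to $y_{p1} = \frac{1}{16}e^{5x} + \frac{3}{4}xe^{5x} - \frac{1}{3}e^{2x}$ and $y_{p2} = -\frac{3}{16}e^{5x} + \frac{3}{4}xe^{5x} + \frac{2}{3}e^{2x}$, consistent with the computation shown:

```python
import math

# Finite-difference check that Yp' = A Yp + B for A = [[4, 1], [3, 2]]
# and B = (e^{5x}, e^{2x}).
def yp1(x):
    return (1/16) * math.exp(5*x) + (3/4) * x * math.exp(5*x) - (1/3) * math.exp(2*x)

def yp2(x):
    return -(3/16) * math.exp(5*x) + (3/4) * x * math.exp(5*x) + (2/3) * math.exp(2*x)

h = 1e-6
for x in (0.0, 0.5):
    d1 = (yp1(x + h) - yp1(x - h)) / (2 * h)
    d2 = (yp2(x + h) - yp2(x - h)) / (2 * h)
    assert abs(d1 - (4 * yp1(x) + yp2(x) + math.exp(5*x))) < 1e-5
    assert abs(d2 - (3 * yp1(x) + 2 * yp2(x) + math.exp(2*x))) < 1e-5
```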
9.5    Phase Portraits

Consider the system
\[
\frac{dx}{dt} = F(x, y), \qquad \frac{dy}{dt} = G(x, y).
\]
(We are assuming here that the vector field V depends only on the position (x, y) and not also upon the time t.)
[Figure 9.1: The vector field plot of (y, x²) and its trajectories.]

In the linear case,
\[
\frac{dx}{dt} = ax + by, \qquad \frac{dy}{dt} = cx + dy,
\]
or $X' = AX$, where
\[
X = \begin{pmatrix} x \\ y \end{pmatrix}, \qquad A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.
\]
Recall from Theorem 10.12 (p. 157) that if all the entries of A are continuous,
then for any point (x_0, y_0), there is a unique solution of X' = AX satisfying
x(t_0) = x_0 and y(t_0) = y_0, i.e., there exists a unique solution through each
point; in particular, the solution curves do not cross.
The above case can be solved explicitly: assuming $|A| \neq 0$,
\[
X = e^{At}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix}.
\]
9.5.1    Real Distinct Eigenvalues

If A has real distinct eigenvalues, write
\[
P^{-1}AP = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} = D.
\]
Then
\begin{align*}
X = e^{At}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix} = Pe^{Dt}P^{-1}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix}
&= (v, w)\begin{pmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t} \end{pmatrix}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} \\
&= (v, w)\begin{pmatrix} C_1 e^{\lambda_1 t} \\ C_2 e^{\lambda_2 t} \end{pmatrix} \\
&= C_1 e^{\lambda_1 t}v + C_2 e^{\lambda_2 t}w,
\end{align*}
where v and w are the columns of P and $\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = P^{-1}\begin{pmatrix} x_0 \\ y_0 \end{pmatrix}$.
9.5.2    Complex Eigenvalues

When the eigenvalues are complex, say $\alpha \pm i\beta$, the solution can be written in the form
\[
X = e^{\alpha t}\underbrace{\begin{pmatrix} p_1 & q_1 \\ p_2 & q_2 \end{pmatrix}}_{P}\,\underbrace{\begin{pmatrix} C_1 & C_2 \\ -C_2 & C_1 \end{pmatrix}}_{C}\begin{pmatrix} \cos(\beta t) \\ \sin(\beta t) \end{pmatrix}.
\]
(Recall that the trace of a matrix is the sum of the elements on the main diagonal.)
Except in the degenerate case, where p and q are linearly dependent, we have
(in the case of purely imaginary eigenvalues, so that the $e^{\alpha t}$ factor is absent)
\[
\begin{pmatrix} \cos(\beta t) \\ \sin(\beta t) \end{pmatrix} = C^{-1}P^{-1}X.
\]
Since $\cos^2(\beta t) + \sin^2(\beta t) = 1$, it follows that
\[
1 = X^t\left(P^{-1}\right)^t\left(C^{-1}\right)^t C^{-1}P^{-1}X.
\]
Note that
\[
CC^t = \begin{pmatrix} C_1 & C_2 \\ -C_2 & C_1 \end{pmatrix}\begin{pmatrix} C_1 & -C_2 \\ C_2 & C_1 \end{pmatrix}
= \begin{pmatrix} C_1^2 + C_2^2 & 0 \\ 0 & C_1^2 + C_2^2 \end{pmatrix}
= \left(C_1^2 + C_2^2\right)I.
\]
Therefore,
\[
C^{-1} = \frac{C^t}{C_1^2 + C_2^2}, \qquad
\left(C^{-1}\right)^t = \frac{C}{C_1^2 + C_2^2},
\]
so
\[
\left(C^{-1}\right)^t C^{-1} = \frac{CC^t}{\left(C_1^2 + C_2^2\right)^2} = \frac{\left(C_1^2 + C_2^2\right)I}{\left(C_1^2 + C_2^2\right)^2} = \frac{I}{C_1^2 + C_2^2}.
\]
Therefore,
\[
1 = X^t\left(P^{-1}\right)^t\left(C^{-1}\right)^t C^{-1}P^{-1}X = \frac{1}{C_1^2 + C_2^2}\,X^t\left(P^{-1}\right)^t P^{-1}X.
\]
Let $T = \left(P^{-1}\right)^t P^{-1}$. Then $X^tTX = C_1^2 + C_2^2$ and $T = T^t$ (T is symmetric).
Therefore, the eigenvectors of T are mutually orthogonal and form the axes of
the ellipses. Figure 9.3 shows a stable spiral and an unstable spiral.
(a) α < 0        (b) α > 0
Figure 9.3: The cases for α, where we have a stable spiral when α < 0 and an
unstable spiral when α > 0.
9.5.3    Repeated Real Roots

Suppose $A = \lambda I + N$, where $N^2 = 0$. Then
\[
X = e^{At}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = e^{Nt + \lambda tI}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = e^{Nt}e^{\lambda tI}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}
= (I + Nt)\begin{pmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{pmatrix}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}
= e^{\lambda t}(I + Nt)\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}.
\]
Write $(n_1, n_2)$ for the first row of N. Since $N^2 = 0$, the range of N is contained
in the kernel of N: there is a vector v with
\[
Nv = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\text{and}\quad
N\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = (n_1C_1 + n_2C_2)\,v.
\]
Therefore,
\begin{align*}
X = e^{\lambda t}(I + Nt)\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}
&= e^{\lambda t}\left(\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} + t\,N\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}\right) \\
&= e^{\lambda t}\left(\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} + (n_1C_1 + n_2C_2)\,t\,v\right).
\end{align*}
If λ < 0, we have
\[
\begin{pmatrix} x \\ y \end{pmatrix} \to \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\]
as t → ∞. If λ > 0, then the trajectories become parallel to v for large t, i.e.,
\[
\frac{y}{x} \to \frac{v_2}{v_1}.
\]
In the degenerate case N = 0, we have
\[
X = e^{At}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = \begin{pmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{pmatrix}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = e^{\lambda t}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix},
\]
so the trajectories are rays through the origin.
(a) λ < 0        (b) λ > 0
Figure 9.4: The cases for λ, where we have a stable node when λ < 0 and an
unstable node when λ > 0.
(a) λ < 0        (b) λ > 0
Figure 9.5: The degenerate cases for λ when N = 0, where we have a stable
node when λ < 0 and an unstable node when λ > 0.
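The phase-portrait types in this chapter can be read off from the trace and determinant of A, since the eigenvalues are the roots of $t^2 - \operatorname{tr}(A)t + \det(A) = 0$. The classifier below is my own summary sketch, not code from the text (it ignores the degenerate boundary case det = 0):

```python
# Classify the phase portrait of X' = AX for A = [[a, b], [c, d]]
# using the trace and determinant.
def classify(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det        # discriminant of the characteristic poly
    if det < 0:
        return "saddle"             # real eigenvalues of opposite sign
    if disc < 0:                    # complex eigenvalues
        if tr == 0:
            return "center"
        return "stable spiral" if tr < 0 else "unstable spiral"
    return "stable node" if tr < 0 else "unstable node"

assert classify(1, -4, 1, 1) == "unstable spiral"   # eigenvalues 1 +/- 2i
assert classify(4, 1, 3, 2) == "unstable node"      # eigenvalues 1 and 5
assert classify(0, 1, -1, 0) == "center"            # eigenvalues +/- i
```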
144
Chapter 10
Picards Method
dy = f (x, y),
dx
()
y(x0 ) = y0 .
Picards Method is a technique for approximating the solution to Equation ().
If y(x) is a solution of Equation (), then
Z x
dy
dt =
f (t, y(t)) dt,
x0 dt
x0
Z y
dy = y y0 ,
y0
giving us
Z
y = y0 +
()
Define the Picard operator P by
\[
(P(p))(x) = y_0 + \int_{x_0}^{x} f(t, p(t))\,dt.
\]
For example, for the initial value problem $y' = -y$, $y(0) = 1$ (so $f(t, y) = -y$
and $y_0(x) = 1$), we get
\[
y_1(x) = 1 + \int_0^x (-1)\,dt = 1 - x,
\]
\[
y_2(x) = y_0 + \int_{x_0}^{x} f(t, y_1(t))\,dt = 1 + \int_0^x -(1 - t)\,dt = 1 - x + \frac{x^2}{2},
\]
and
\[
y_3(x) = 1 + \int_0^x -\left(1 - t + \frac{t^2}{2}\right)dt = 1 - x + \frac{x^2}{2} - \frac{x^3}{3!}.
\]
So in general, we have
\[
y_n(x) = 1 - x + \frac{x^2}{2!} - \cdots + (-1)^n\frac{x^n}{n!},
\]
which converges to
\[
\sum_{n=0}^{\infty} (-1)^n\frac{x^n}{n!} = e^{-x}.
\]
As another example, consider $y' = x + y^2$, $y(1) = 1$ (so $f(t, y) = t + y^2$ and
$y_0(x) = 1$). Then
\begin{align*}
y_1(x) &= 1 + \int_1^x (t + 1)\,dt = 1 + \int_1^x \left[(t - 1) + 2\right]dt \\
&= 1 + \left[\frac{(t-1)^2}{2} + 2(t-1)\right]_1^x = 1 + 2(x - 1) + \frac{(x-1)^2}{2}
\end{align*}
and
\begin{align*}
y_2(x) &= 1 + \int_1^x \left(t + \left(1 + 2(t-1) + \frac{(t-1)^2}{2}\right)^2\right)dt \\
&= 1 + 2(x-1) + \frac{5}{2}(x-1)^2 + \frac{5}{3}(x-1)^3 + \frac{1}{2}(x-1)^4 + \frac{1}{20}(x-1)^5.
\end{align*}
Theorem 10.3
If f : X → R is continuous, where X ⊆ Rⁿ is closed and bounded (compact),
then f(X) is closed and bounded (compact). In particular, there exists an
M such that |f(x)| ≤ M for all x ∈ X.

Corollary 10.4
Let R be a (closed) rectangle. Let f(x, y) be such that ∂f/∂y exists and is
continuous throughout R. Then there exists an M such that
\[
\underbrace{|f(x, y_2) - f(x, y_1)| \le M|y_2 - y_1|}_{\text{Lipschitz condition with respect to } y}
\]
for all (x, y_1), (x, y_2) ∈ R.
Proof. Let R be a (closed) rectangle. Let f(x, y) be such that ∂f/∂y exists and
is continuous throughout R. By Theorem 10.3, there exists an M such that
\[
\left|\frac{\partial f}{\partial y}(r)\right| \le M
\]
for all r ∈ R. By the Mean Value Theorem, there exists a c ∈ (y_1, y_2) such that
\[
f(x, y_2) - f(x, y_1) = \frac{\partial f}{\partial y}(x, c)(y_2 - y_1).
\]
Theorem 10.5
Let f be a function. Then
1. f is differentiable at x implies that f is continuous at x.
2. f is continuous on B implies that f is integrable on B, i.e., ∫_B f dV exists.

The point is that the same N works for all x's. If each x had an N that worked for it
but there was no N working for all x's at once, then it would converge, but not uniformly.
Theorem 10.6
Suppose that {f_n} is a sequence of functions that converges uniformly to f
on B. If f_n is integrable for all n, then f is integrable and
\[
\int_B f(x)\,dV = \lim_{n\to\infty}\int_B f_n(x)\,dV.
\]

Theorem 10.7
Let {f_n} be a sequence of functions defined on B. Suppose that there exists
a positive convergent series ∑_{n=1}^∞ a_n such that |f_n(x) - f_{n-1}(x)| ≤ a_n for
all n. Then f(x) = lim_{n→∞} f_n(x) exists for all x ∈ B and {f_n} converges
uniformly to f.
Proof. We can write
\[
f_n(x) = f_0(x) + \sum_{k=1}^{n} (f_k(x) - f_{k-1}(x)).
\]
Therefore
\[
\lim_{n\to\infty} f_n(x) = f_0(x) + \sum_{k=1}^{\infty} (f_k(x) - f_{k-1}(x)) = f(x)
\]
and
\[
|f(x) - f_n(x)| = \left|\sum_{k=n+1}^{\infty} (f_k(x) - f_{k-1}(x))\right| \le \sum_{k=n+1}^{\infty} a_k < \varepsilon
\]
for n sufficiently large, independently of x.
10.2    Existence and Uniqueness Theorem for First Order ODEs
Theorem 10.8 (Existence and uniqueness theorem for first order ODEs)
Let R be a rectangle and let f(x, y) be continuous throughout R and satisfy
the Lipschitz Condition with respect to y throughout R. Let (x_0, y_0) be an
interior point of R. Then there exists an interval containing x_0 on which
there exists a unique function y(x) satisfying y' = f(x, y) and y(x_0) = y_0.
To prove the existence statement in Theorem 10.8, we will need the following
lemma.
Lemma 10.9
Let I = [x_0 - δ, x_0 + δ] and let p(x) satisfy the following:
1. p(x) is continuous on I.
2. |p(x) - y_0| ≤ a for all x ∈ I.
Then ∫_{x_0}^{x} f(t, p(t)) dt exists for all x ∈ I and q(x) = y_0 + ∫_{x_0}^{x} f(t, p(t)) dt
also satisfies (1) and (2).

Proof. The map t ↦ (t, p(t)) is continuous, so t ↦ f(t, p(t)) is continuous, hence
integrable, i.e., ∫_{x_0}^{x} f(t, p(t)) dt exists for all x ∈ I. Since q(x) is
differentiable on I, it is also continuous on I. Also, choosing δ so that Mδ ≤ a,
\[
|q(x) - y_0| = \left|\int_{x_0}^{x} f(t, p(t))\,dt\right| \le M|x - x_0| \le M\delta \le a.
\]
By Lemma 10.9, the existence of y_{n-1} satisfying (1) and (2) guarantees the
existence of y_n. By hypothesis, there exists an A such that |f(x, v) - f(x, w)| ≤
A|v - w| whenever (x, v), (x, w) ∈ R. As in the proof of Lemma 10.9, we have
\[
|y_1(x) - y_0(x)| = |y_1(x) - y_0| \le M|x - x_0|
\]
for all x ∈ I. Likewise, we have
\begin{align*}
|y_2(x) - y_1(x)| &= \left|\int_{x_0}^{x} [f(t, y_1(t)) - f(t, y_0(t))]\,dt\right| \\
&\le \left|\int_{x_0}^{x} |f(t, y_1(t)) - f(t, y_0(t))|\,dt\right| \\
&\le \left|\int_{x_0}^{x} A|y_1(t) - y_0(t)|\,dt\right| \\
&\le \left|\int_{x_0}^{x} AM|t - x_0|\,dt\right| \\
&= MA\,\frac{|x - x_0|^2}{2},
\end{align*}
and
\begin{align*}
|y_3(x) - y_2(x)| &= \left|\int_{x_0}^{x} [f(t, y_2(t)) - f(t, y_1(t))]\,dt\right| \\
&\le \left|\int_{x_0}^{x} |f(t, y_2(t)) - f(t, y_1(t))|\,dt\right| \\
&\le \left|\int_{x_0}^{x} A\cdot MA\,\frac{|t - x_0|^2}{2}\,dt\right| \\
&= MA^2\,\frac{|x - x_0|^3}{3!}.
\end{align*}
Continuing, we get
\[
|y_n(x) - y_{n-1}(x)| \le \frac{MA^{n-1}\delta^n}{n!}.
\]
Writing
\[
y_n(x) = y_0(x) + (y_1(x) - y_0(x)) + (y_2(x) - y_1(x)) + \cdots + (y_n(x) - y_{n-1}(x))
= y_0(x) + \sum_{k=1}^{n} (y_k(x) - y_{k-1}(x)),
\]
since
\[
|y_k(x) - y_{k-1}(x)| \le \frac{MA^{k-1}\delta^k}{k!}
\qquad\text{and}\qquad
\sum_{k=1}^{\infty} \frac{MA^{k-1}\delta^k}{k!} = \frac{M}{A}\left(e^{A\delta} - 1\right),
\]
Theorem 10.7 applies. Therefore, y(x) = lim_{n→∞} y_n(x) exists for all x ∈ I
and y_n converges uniformly to y.
Note that
\[
|f(x, y_n(x)) - f(x, y_{n-1}(x))| \le A|y_n(x) - y_{n-1}(x)|,
\]
so f(x, y_n(x)) converges uniformly to f(x, y(x)). Taking the limit as n → ∞ in
\[
y_n(x) = y_0 + \int_{x_0}^{x} f(t, y_{n-1}(t))\,dt,
\]
we then have
\[
y(x) = y_0 + \int_{x_0}^{x} f(t, y(t))\,dt.
\]
To prove the uniqueness statement in Theorem 10.8, we will need the following lemma.

Lemma 10.10
If y(x) is a solution to the DE with y(x_0) = y_0, then |y(x) - y_0| ≤ a for all
x ∈ I. In particular, (x, y(x)) ∈ R for all x ∈ I.

Proof. Suppose there exists an x ∈ I such that |y(x) - y_0| > a. We now consider
two cases.
Suppose x > x_0. Let s = inf({t : t > x_0 and |y(t) - y_0| > a}). So s < x ≤ x_0 + δ.
By continuity and the Mean Value Theorem, for some c between x_0 and s,
\[
a = |y(s) - y_0| = |y(s) - y(x_0)| = |y'(c)(s - x_0)| = \underbrace{|f(c, y(c))(s - x_0)|}_{\le\, M(s - x_0)},
\]
a contradiction, since M(s - x_0) < Mδ ≤ a.
Suppose now that y(x) and z(x) both satisfy the given DE and initial condition.
Let φ(x) = (y(x) - z(x))². Then
\begin{align*}
φ'(x) &= 2(y(x) - z(x))(y'(x) - z'(x)) \\
&= 2(y(x) - z(x))(f(x, y(x)) - f(x, z(x))).
\end{align*}
Therefore for all x ∈ I,
\[
|φ'(x)| \le 2|y(x) - z(x)|\cdot A|y(x) - z(x)| = 2A(y(x) - z(x))^2 = 2Aφ(x).
\]
Hence
\[
\frac{d}{dx}\left(e^{-2Ax}φ(x)\right) = e^{-2Ax}\left(φ'(x) - 2Aφ(x)\right) \le 0
\]
and
\[
\frac{d}{dx}\left(e^{2Ax}φ(x)\right) = e^{2Ax}\left(φ'(x) + 2Aφ(x)\right) \ge 0.
\]
Since φ(x_0) = 0 and φ ≥ 0, it follows that φ ≡ 0, i.e., y = z.
10.3    Existence and Uniqueness Theorem for Linear First Order ODEs
Theorem 10.11 (Existence and uniqueness theorem for linear first order ODEs)
Let a(x) and b(x) be continuous throughout an interval I. Let x0 be an
interior point of I and let y0 be arbitrary. Then there exists a unique
function y(x) with y(x0 ) = y0 satisfying dy/dx = a(x)y + b(x) throughout
I.
Proof. Define the iterates
\[
y_{n+1}(x) = y_0 + \int_{x_0}^{x} (a(t)y_n(t) + b(t))\,dt.
\]
Then
\begin{align*}
|y_n(x) - y_{n-1}(x)| &= \left|\int_{x_0}^{x} (a(t)y_{n-1}(t) + b(t))\,dt - \int_{x_0}^{x} (a(t)y_{n-2}(t) + b(t))\,dt\right| \\
&= \left|\int_{x_0}^{x} a(t)\left(y_{n-1}(t) - y_{n-2}(t)\right)dt\right|.
\end{align*}
Assuming by induction that y_n(t) is continuous, a(t)y_n(t) + b(t) is continuous throughout
I and thus integrable, so the definition of y_{n+1}(x) makes sense. Then y_{n+1}(x) is also
differentiable, i.e., y'_{n+1}(x) = a(x)y_n(x) + b(x), and thus continuous, so the induction can
proceed.
Assuming by induction that
\[
|y_{n-1}(x) - y_{n-2}(x)| \le MA^{n-2}\,\frac{|x - x_0|^{n-1}}{(n-1)!},
\]
where A is a bound for |a(t)| on I, we then have
\begin{align*}
|y_n(x) - y_{n-1}(x)| &= \left|\int_{x_0}^{x} a(t)\left(y_{n-1}(t) - y_{n-2}(t)\right)dt\right| \\
&\le \left|\int_{x_0}^{x} A\cdot MA^{n-2}\,\frac{|t - x_0|^{n-1}}{(n-1)!}\,dt\right| \\
&= MA^{n-1}\,\frac{|x - x_0|^n}{n!} \le \frac{MA^{n-1}\delta^n}{n!},
\end{align*}
where δ bounds |x - x_0| on I. The rest of the proof is essentially the same as before.
10.4

Consider now a linear system, with
\[
A = \begin{pmatrix} a_{11}(x) & \cdots & a_{1n}(x) \\ a_{21}(x) & \cdots & a_{2n}(x) \\ \vdots & \ddots & \vdots \\ a_{n1}(x) & \cdots & a_{nn}(x) \end{pmatrix},
\qquad
Y = \begin{pmatrix} y_1(x) \\ y_2(x) \\ \vdots \\ y_n(x) \end{pmatrix},
\qquad
B = \begin{pmatrix} b_1(x) \\ b_2(x) \\ \vdots \\ b_n(x) \end{pmatrix}.
\]
Theorem 10.12 (Existence and uniqueness theorem for linear systems of n ODEs)
Let A(x) be a matrix of functions, each continuous throughout an interval
I and let B(x) be an n-dimensional vector of functions, each continuous
throughout I. Let x0 be an interior point of I and let Y0 be an arbitrary
n-dimensional vector. Then there exists a unique vector of functions Y(x)
with Y(x0 ) = Y0 satisfying
dY
= A(x)Y + B(x)
dx
throughout I.
Proof. Let A(x) be a matrix of functions, each continuous throughout an interval I and let B(x) be an n-dimensional vector of functions, each continuous
throughout I. Let x0 be an interior point of I and let Y0 be an arbitrary
n-dimensional vector.
Define the iterates
\[
Y_{n+1}(x) = Y_0 + \int_{x_0}^{x} (A(t)Y_n(t) + B(t))\,dt,
\]
the vector obtained by integrating componentwise. Then
\begin{align*}
|Y_n(x) - Y_{n-1}(x)| &= \left|\int_{x_0}^{x} A(t)\left(Y_{n-1}(t) - Y_{n-2}(t)\right)dt\right| \\
&\le \left|\int_{x_0}^{x} \alpha\cdot\alpha^{n-2}\,\frac{|t - x_0|^{n-1}}{(n-1)!}\,dt\right| \\
&= \alpha^{n-1}\,\frac{|x - x_0|^n}{n!} \le \frac{\alpha^{n-1}\delta^n}{n!},
\end{align*}
where α is a bound for the norm of A(t) on I and δ is the width of I. The rest of
the proof is essentially the same as before.
Recall Theorem 8.5 (p. 90), restated here as follows. We compare the solution
with a majorant series: if
\[
f(r) = \sum_{n=0}^{\infty} \frac{1}{n!}\,f^{(n)}(p)\,r^n
\]
and
\[
M = \sum_{n=0}^{\infty} \frac{1}{n!}\,\left|f^{(n)}(p)\right|r^n,
\]
then
\[
\frac{1}{n!}\,\left|f^{(n)}(p)\right|r^n \le M, \tag{$*$}
\]
so Equation ($*$) holds. Therefore, our claim holds.
Set
\[
A(z) = \frac{M}{1 - \frac{z-p}{a}}, \qquad B(z) = \frac{N}{1 - \frac{z-p}{a}}.
\]
Note that
\[
A(z) = M\left(1 + \frac{z-p}{a} + \left(\frac{z-p}{a}\right)^2 + \cdots + \left(\frac{z-p}{a}\right)^n + \cdots\right),
\]
so
\[
A(p) = M, \quad A'(p) = \frac{M}{a}, \quad A''(p) = 2\,\frac{M}{a^2}, \quad \ldots, \quad
A^{(n)}(p) = n!\,\frac{M}{a^n} \ge \left|P^{(n)}(p)\right|.
\]
Similarly,
\[
B^{(n)}(p) = n!\,\frac{N}{a^n} \ge \left|Q^{(n)}(p)\right|.
\]
Consider
\[
y'' = A(z)y' + B(z)y. \tag{$**$}
\]
Using |P^{(n)}(p)| ≤ A^{(n)}(p) and |Q^{(n)}(p)| ≤ B^{(n)}(p) gives |w^{(n)}(p)| ≤ v^{(n)}(p).
The Taylor series of w about p is ∑_{n=0}^∞ a_n(z - p)ⁿ, where a_n = w^{(n)}(p)/n!;
the Taylor series of v about p is ∑_{n=0}^∞ b_n(z - p)ⁿ, where b_n = v^{(n)}(p)/n!.
Since we showed that |a_n| ≤ b_n, the first converges anywhere the second does.
So we need to show that the Taylor series of v has a radius of convergence
equal to a (this implies that the Taylor series for w converges for |z - p| < a, but
a < R was arbitrary). We have
\[
v'' = \frac{M}{1 - \frac{z-p}{a}}\,v' + \frac{N}{1 - \frac{z-p}{a}}\,v.
\]
Substituting u = (z - p)/a, we then have
\[
\frac{1}{a^2}\frac{d^2v}{du^2} = \frac{M}{1-u}\cdot\frac{1}{a}\frac{dv}{du} + \frac{N}{1-u}\,v,
\]
so
\[
(1 - u)\frac{d^2v}{du^2} = aM\frac{dv}{du} + a^2Nv.
\]
Write $v = \sum_{n=0}^{\infty} \alpha_n u^n$. Then
\[
v = \sum_{n=0}^{\infty} \alpha_n u^n = \sum_{n=0}^{\infty} \alpha_n\left(\frac{z-p}{a}\right)^n
= \sum_{n=0}^{\infty} \frac{\alpha_n}{a^n}(z-p)^n = \sum_{n=0}^{\infty} b_n(z-p)^n,
\]
\[
\frac{dv}{du} = \sum_{n=1}^{\infty} n\alpha_n u^{n-1}
= \underbrace{\sum_{n=0}^{\infty} (n+1)\alpha_{n+1}u^n}_{\text{adjust indices}},
\qquad
\frac{d^2v}{du^2} = \sum_{n=1}^{\infty} (n+1)n\alpha_{n+1}u^{n-1}
= \underbrace{\sum_{n=0}^{\infty} (n+2)(n+1)\alpha_{n+2}u^n}_{\text{adjust indices}}.
\]
Now we have
\[
(1 - u)\sum_{n=0}^{\infty} (n+2)(n+1)\alpha_{n+2}u^n
= aM\sum_{n=0}^{\infty} (n+1)\alpha_{n+1}u^n + a^2N\sum_{n=0}^{\infty} \alpha_n u^n,
\]
where
\begin{align*}
(1-u)\sum_{n=0}^{\infty} (n+2)(n+1)\alpha_{n+2}u^n
&= \sum_{n=0}^{\infty} (n+2)(n+1)\alpha_{n+2}u^n - \sum_{n=0}^{\infty} (n+2)(n+1)\alpha_{n+2}u^{n+1} \\
&= \sum_{n=0}^{\infty} (n+2)(n+1)\alpha_{n+2}u^n - \sum_{n=0}^{\infty} (n+1)n\alpha_{n+1}u^n.
\end{align*}
Therefore,
\[
\sum_{n=0}^{\infty} (n+2)(n+1)\alpha_{n+2}u^n
= \sum_{n=0}^{\infty}\left[(n+1)(n + aM)\alpha_{n+1} + a^2N\alpha_n\right]u^n,
\]
so it follows that
\begin{align*}
(n+2)(n+1)\alpha_{n+2} &= (n+1)(aM + n)\alpha_{n+1} + a^2N\alpha_n, \\
\alpha_{n+2} &= \frac{aM + n}{n+2}\,\alpha_{n+1} + \frac{a^2N\alpha_n}{(n+2)(n+1)},
\end{align*}
giving
\[
\frac{\alpha_{n+2}}{\alpha_{n+1}} = \frac{n + aM}{n+2} + \frac{a^2N}{(n+2)(n+1)}\cdot\frac{\alpha_n}{\alpha_{n+1}}.
\]
Therefore,
\[
\lim_{n\to\infty} \frac{\alpha_{n+2}}{\alpha_{n+1}} = 1 + 0 = 1.
\]
So by the ratio test (Equation (8.2) of Theorem 8.2, p. 88), the radius of convergence of
\[
\sum_{n=0}^{\infty} \alpha_n u^n = \sum_{n=0}^{\infty} \frac{\alpha_n}{a^n}(z-p)^n = \sum_{n=0}^{\infty} b_n(z-p)^n
\]
is 1 in u, i.e., a in z.

The point of Theorem 8.5 is to state that solutions are analytic (we already
know solutions exist from earlier theorems). The theorem gives only a lower
bound on the interval. The actual radius of convergence may be larger.
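The limiting behaviour of the recurrence can be checked numerically. A small sketch (my own), with the sample values aM = 3 and a²N = 2, which are hypothetical choices; any fixed values behave the same way:

```python
# Iterate alpha_{n+2} = ((aM + n)/(n+2)) alpha_{n+1} + a^2 N alpha_n / ((n+2)(n+1))
# and watch the ratio of consecutive coefficients approach 1.
aM, a2N = 3.0, 2.0
alpha = [1.0, 1.0]                  # alpha_0, alpha_1 (arbitrary start)
for n in range(100000):
    nxt = ((aM + n) / (n + 2)) * alpha[-1] + a2N * alpha[-2] / ((n + 2) * (n + 1))
    alpha.append(nxt)

ratio = alpha[-1] / alpha[-2]
assert abs(ratio - 1.0) < 1e-3      # consistent with radius of convergence 1 in u
```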
Chapter 11

Numerical Approximations

11.1    Euler's Method

Consider y' = f(x, y), where y(x_0) = y_0. The goal is to find y(x_end). Figure
11.1 shows this idea. We pick a large N and divide [x_0, x_end] into segments of
width h = (x_end - x_0)/N, giving points x_1 = x_0 + h, x_2 = x_0 + 2h, ...,
x_N = x_end, with approximate values y_1, y_2, ..., y_N for the solution at those
points.

[Figure 11.1: Euler's method: the approximate values y_1, y_2, ..., y_N at
x_1 = x_0 + h, x_2 = x_0 + 2h, ..., x_N = x_end, compared with the actual value.]
For example, take y' = x - 2y with y(0) = 1 and h = 0.1, so that x_1 = 0.1,
x_2 = 0.2, x_3 = 0.3. Therefore,
\begin{align*}
y_1 &= y_0 + hf(x_0, y_0) = 1 + 0.1\,(0 - 2) = 1 - 0.2 = 0.8, \\
y_2 &= 0.8 + hf(0.1, 0.8) = 0.8 + 0.1\,(0.1 - 1.6) = 0.8 - 0.15 = 0.65, \\
y_3 &= 0.65 + hf(0.2, 0.65) = 0.65 + 0.1\,(0.2 - 1.3) = 0.65 - 0.11 = 0.54.
\end{align*}
Therefore, y(0.3) ≈ 0.54.
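The hand computation above can be reproduced in a few lines. A sketch (my own addition):

```python
# Euler's method for y' = f(x, y): repeatedly step y by h * f(x, y).
def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: x - 2 * y      # the example above: y' = x - 2y, y(0) = 1
y3 = euler(f, 0.0, 1.0, 0.1, 3)
assert abs(y3 - 0.54) < 1e-9    # matches the hand computation
```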
What about the actual value? To solve the DE, we have
\begin{align*}
y' + 2y &= x, \\
e^{2x}y' + 2e^{2x}y &= xe^{2x}, \\
ye^{2x} = \int xe^{2x}\,dx &= \frac{1}{2}xe^{2x} - \int \frac{1}{2}e^{2x}\,dx \\
&= \frac{1}{2}xe^{2x} - \frac{1}{4}e^{2x} + C,
\end{align*}
so
\[
y = \frac{1}{2}x - \frac{1}{4} + Ce^{-2x}.
\]
Note that
\[
y(0) = 1 = -\frac{1}{4} + C \implies C = \frac{5}{4},
\]
so the solution is
\[
y = \frac{1}{2}x - \frac{1}{4} + \frac{5}{4}e^{-2x}.
\]
Therefore,
\[
y(0.3) = 0.15 - 0.25 + \frac{5}{4}e^{-0.6} \approx -0.1 + 0.68601 \approx 0.58601,
\]
so y(0.3) ≈ 0.58601 is the (approximated) actual value.
11.1.1    Error Bounds

The second degree Taylor expansion of a function y with step size h is given by
\[
y(a + h) = y(a) + hy'(a) + h^2\,\frac{y''(z)}{2}
\]
for some z ∈ [a, a + h], where h²y″(z)/2 is the local truncation error, i.e., the error
in each step. We have
\[
\left|h^2\,\frac{y''(z)}{2}\right| \le \frac{M}{2}\,h^2
\]
if |y″(z)| ≤ M on [a, a + h]. Therefore, dividing h by, say, 3 reduces the local
truncation error by a factor of 9. However, errors also accumulate, because y_{n+1} is
calculated using the approximation y_n instead of the exact value y(x_n). Reducing h
increases N, since N = (x_end - x_0)/h, so this effect wipes out part of the local
gain from the smaller h. The net effect is that
\[
\text{overall error (global truncation error)} \le \widetilde{M}h,
\]
e.g., dividing h by 3 divides the overall error only by 3, not 9.
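A quick experiment (my own addition) illustrating the linear dependence on h: halving h roughly halves the error at x = 0.3, rather than quartering it:

```python
import math

# Euler's method again, applied with two step sizes to y' = x - 2y, y(0) = 1.
def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: x - 2 * y
exact = 0.5 * 0.3 - 0.25 + 1.25 * math.exp(-0.6)   # exact y(0.3)

e1 = abs(euler(f, 0.0, 1.0, 0.1, 3) - exact)       # h = 0.1
e2 = abs(euler(f, 0.0, 1.0, 0.05, 6) - exact)      # h = 0.05
ratio = e1 / e2
assert 1.5 < ratio < 2.6     # close to 2 (first order), not 4
```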
11.2

A better approximation to the slope over [x_n, x_{n+1}] is the average
\[
\frac{y'(x_n) + y'(x_{n+1})}{2} = \frac{f(x_n, y_n) + f(x_{n+1}, y_{n+1})}{2}.
\]
Figure 11.2 illustrates this idea. However, we have one problem in that we do
not know y_{n+1} (that is what we are working out at this stage). Instead, we
substitute for the inner y_{n+1} the value predicted by the original Euler's method,
giving the slope
\[
\frac{1}{2}\left(f(x_n, y_n) + f(x_{n+1},\, y_n + hf(x_n, y_n))\right),
\]
i.e.,
\[
y_{n+1} = y_n + \frac{h}{2}\left[f(x_n, y_n) + f(x_{n+1},\, y_n + hf(x_n, y_n))\right].
\]
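A sketch (my own) of the resulting method on the same test problem y' = x - 2y, y(0) = 1 used for Euler's method above:

```python
import math

# Improved Euler: average the slope at x_n with the slope at x_{n+1}
# evaluated at the Euler-predicted value.
def improved_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        k = f(x, y)                              # Euler predictor slope
        y += 0.5 * h * (k + f(x + h, y + h * k))
        x += h
    return y

f = lambda x, y: x - 2 * y
approx = improved_euler(f, 0.0, 1.0, 0.1, 3)
exact = 0.5 * 0.3 - 0.25 + 1.25 * math.exp(-0.6)
assert abs(approx - exact) < 5e-3    # much closer than plain Euler's 0.54
```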
11.3    Runge-Kutta Methods

Runge-Kutta methods sample the slope at several points of [x_n, x_n + h], here
at x_n, x_n + h/2, and x_{n+1} = x_n + h:
\begin{align*}
k_{n1} &= f(x_n, y_n), \\
k_{n2} &= f\left(x_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}hk_{n1}\right), \\
k_{n3} &= f\left(x_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}hk_{n2}\right), \\
k_{n4} &= f\left(x_n + h,\; y_n + hk_{n3}\right).
\end{align*}
Use
\[
y_{n+1} = y_n + \frac{h}{6}\left(k_{n1} + 2k_{n2} + 2k_{n3} + k_{n4}\right).
\]