MATB44H3F
Contents

1 Introduction
 1.1 Preliminaries
 1.2 Sample Application of Differential Equations
Introduction
1.1 Preliminaries
∂²z/∂x² + ∂²z/∂y² = 0

on domain R²; the function y = 2√x is a solution of

yy′ = 2

on domain (0, ∞).
Note that different solutions can have different domains. The set of all solutions to a de is called its general solution.
(Figure: the modelling cycle, from mathematical model to interpretation of the solution.)
1.2 Sample Application of Differential Equations
Example 1.4. The half-life of radium is 1600 years, i.e., it takes 1600 years for
half of any quantity to decay. If a sample initially contains 50 g, how long will
it be until it contains 45 g?
Solution. Let x(t) be the amount of radium present at time t in years. The rate
at which the sample decays is proportional to the size of the sample at that
time. Therefore we know that dx/dt = kx. This differential equation is our
mathematical model. Using techniques we will study in this course (see Section 3.2), we will discover that the general solution of this equation is given
by the equation x = Aekt , for some constant A. We are told that x = 50 when
t = 0 and so substituting gives A = 50. Thus x = 50ekt . Solving for t gives
t = ln(x/50)/k. With x(1600) = 25, we have 25 = 50e^{1600k}. Therefore,

1600k = ln(1/2) = −ln(2),

so k = −ln(2)/1600, and for x = 45 we get t = 1600 ln(50/45)/ln(2) ≈ 243 years.
Thus, the set y ∈ {1, 4} contains all the solutions. We quickly see that y = 4 satisfies Equation (1.1) because

√4 + 2 = 2 + 2 = 4.
The complexity of solving des increases with the order. We begin with first
order des.
A first order ode has the form F(x, y, y′) = 0. In theory, at least, the methods of algebra can be used to write it in the form y′ = G(x, y). If G(x, y) can be factored to give G(x, y) = M(x)N(y), then the equation is called separable. To solve the separable equation y′ = M(x)N(y), we rewrite it in the form f(y)y′ = g(x). Integrating both sides gives
∫ f(y)y′ dx = ∫ g(x) dx,

where, by the change of variables formula,

∫ f(y)(dy/dx) dx = ∫ f(y) dy.
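As a sanity check on this recipe, the separation method can be verified numerically. The sketch below (an illustration, not part of the text) integrates the separable equation y′ = xy with a standard RK4 stepper and compares against the closed form y = y₀e^{x²/2} obtained by separating variables.

```python
import math

def rk4_solve(f, x0, y0, x1, steps=1000):
    """Integrate y' = f(x, y) from x0 to x1 with the classical RK4 method."""
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# y' = x*y is separable: dy/y = x dx, so ln|y| = x^2/2 + C and y = y0*exp(x^2/2).
approx = rk4_solve(lambda x, y: x * y, 0.0, 1.0, 1.0)
exact = math.exp(0.5)
```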
Chapter 2
First Order Ordinary Differential Equations
(x² − 4)y′ = 2xy − 6x,
y′ = (2xy − 6x)/(x² − 4),
y′/(y + 3) = 2x/(x² − 4),  x ≠ ±2,
ln(|y + 3|) = ln(|x² − 4|) + C,
ln(|y + 3|) − ln(|x² − 4|) = C,
(y + 3)/(x² − 4) = A,
y + 3 = A(x² − 4),
y dy = −sin(x) dx,
∫ y dy = −∫ sin(x) dx,
y²/2 = cos(x) + C₁,
y = ±√(2 cos(x) + C₂).

Alternatively, using definite integrals with the initial condition y(0) = 1:

y dy = −sin(x) dx,
∫₁^y y dy = −∫₀^x sin(x) dx,
y²/2 − 1/2 = cos(x) − cos(0),
y²/2 = cos(x) − 1/2,
y = √(2 cos(x) − 1),
Solution. We have
(y⁴ + 1)y′ = x² − 1,
y⁵/5 + y = x³/3 − x + C,
where C is an arbitrary constant. This is an implicit solution which we cannot
easily solve explicitly for y in terms of x.
2.2 Exact Differential Equations

Definition
An expression of the form F(x, y) dx + G(x, y) dy is called a (first-order) differential form. A differential form F(x, y) dx + G(x, y) dy is called exact if there exists a function g(x, y) such that dg = F dx + G dy.
Theorem 2.4
If F and G are functions that are continuously differentiable throughout a simply connected region, then F dx + G dy is exact if and only if ∂G/∂x = ∂F/∂y.
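The criterion of Theorem 2.4 is easy to spot-check numerically. The sketch below (illustrative only) approximates ∂G/∂x and ∂F/∂y by central differences for the form used in the next example and confirms they agree.

```python
def partial(f, i, pt, h=1e-6):
    """Central-difference partial derivative of f in its i-th argument at pt."""
    a, b = list(pt), list(pt)
    a[i] += h
    b[i] -= h
    return (f(*a) - f(*b)) / (2 * h)

F = lambda x, y: 3 * x**2 * y**2 + x**2   # coefficient of dx
G = lambda x, y: 2 * x**3 * y + y**2      # coefficient of dy

# Exactness requires dG/dx == dF/dy everywhere; spot-check a few points.
pts = [(0.3, 0.7), (1.2, -0.5), (-0.8, 1.1)]
max_gap = max(abs(partial(G, 0, p) - partial(F, 1, p)) for p in pts)
```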
Example 2.5. Consider (3x²y² + x²) dx + (2x³y + y²) dy = 0. Let
ω = (3x²y² + x²) dx + (2x³y + y²) dy,
with F = 3x²y² + x² and G = 2x³y + y². We seek g(x, y) with
∂g/∂x = 3x²y² + x²,  (2.1a)
∂g/∂y = 2x³y + y².  (2.1b)
Integrating (2.1a) with respect to x gives
g = x³y² + x³/3 + h(y).  (2.2)
So differentiating that with respect to y and comparing with Eq. (2.1b) gives us
∂g/∂y = 2x³y + dh/dy,
2x³y + y² = 2x³y + dh/dy,
dh/dy = y²,
h(y) = y³/3 + C.
g = x³y² + x³/3 + y³/3 + C.
Note that according to our differential equation, we have
d(x³y² + x³/3 + y³/3 + C) = 0, which implies x³y² + x³/3 + y³/3 + C = C₀,
that is,
x³y² + x³/3 + y³/3 = D.
Example 2.6. Solve (3x² + 2xy²) dx + 2x²y dy = 0, where y(2) = 3.
Solution. We have
∫ (3x² + 2xy²) dx = x³ + x²y² + C,
so the general solution is x³ + x²y² = C. Substituting x = 2, y = 3 gives
8 + 4 · 9 = C ⟹ C = 44.
Figure 2.1: The graph of y = s(x), a path connecting (x₀, y₀) to (x, y).
Remark. Separable equations are actually a special case of exact equations, that is,
f(y)y′ = g(x) ⟹ −g(x) dx + f(y) dy = 0, where ∂(f(y))/∂x = 0 = ∂(−g(x))/∂y.
2.3 Integrating Factors
Solution. We have
(y/x² + 1) dx + (1/x) dy = 0.
We see that
∂/∂x (1/x) = −1/x² ≠ 1/x² = ∂/∂y (y/x² + 1).
So the equation is not exact. Multiplying by x² gives us
(y + x²) dx + x dy = 0,
d(xy + x³/3) = 0,
xy + x³/3 = C
for some arbitrary constant C. Solving for y finally gives us
y = C/x − x²/3.
There is, in general, no algorithm for finding integrating factors. But the
following may suggest where to look. It is important to be able to recognize
common exact forms:
x dy + y dx = d(xy),
(x dy − y dx)/x² = d(y/x),
(x dx + y dy)/(x² + y²) = d(ln(x² + y²)/2),
(x dy − y dx)/(x² + y²) = d(tan⁻¹(y/x)),
x^{a−1} y^{b−1} (ay dx + bx dy) = d(x^a y^b).
Example 2.8. Solve (x²y² + y) dx + (2x³y − x) dy = 0.
Solution. Grouping terms gives
x²y² dx + 2x³y dy + y dx − x dy = 0.
Since d(xy²) = y² dx + 2xy dy, dividing by x² yields
y² dx + 2xy dy + (y dx − x dy)/x² = 0.
Therefore,
xy² − y/x = C,  x ≠ 0,
where C is an arbitrary constant. Additionally, y = 0 on the domain R is a solution to the original equation.
Example 2.9. Solve y dx − x dy − (x² + y²) dx = 0.
Solution. We have
(y dx − x dy)/(x² + y²) − dx = 0,
unless x = 0 and y = 0. Now, it follows that
−tan⁻¹(y/x) − x = C,
tan⁻¹(y/x) = −C − x,
tan⁻¹(y/x) = D − x,  (D = −C)
y/x = tan(D − x),
y = x tan(D − x),
Proposition 2.10
Let ω = dg. Then for any function P : R → R, P(g)ω is exact.
Proof. Let Q(t) = ∫ P(t) dt. Then d(Q(g)) = Q′(g) dg = P(g) dg = P(g)ω. ∎
y dx + x dy − yx² dy = 0.
Here y dx + x dy = d(xy), so dividing by x²y² gives
(y dx + x dy)/(x²y²) − (1/y) dy = 0 ⟹ −1/(xy) − ln(|y|) = C,
Given
M dx + N dy = 0,  (∗)
we seek an integrating factor I(x, y) making IM dx + IN dy exact, i.e.,
∂(IN)/∂x = ∂(IM)/∂y,
that is,
I_x N + I N_x = I_y M + I M_y.  (∗∗)
If we can find any particular solution I(x, y) of (∗∗), then we can use it as an integrating factor to find the general solution of (∗). Unfortunately, (∗∗) is usually even harder to solve than (∗), but as we shall see, there are times when it is easier.
Example 2.12. We could look for an I having only x's and no y's. For example, consider I_y = 0. Then
I_x N + I N_x = I M_y implies I_x/I = (M_y − N_x)/N.
This works if (M_y − N_x)/N happens to be a function of x alone. Then
I = e^{∫ ((M_y − N_x)/N) dx}
works.
2.4 Linear First Order Equations

A first order linear equation has the form
y′ + P(x)y = Q(x).
We use the pde (∗∗) for the integrating factor I(x, y). The form IM dx + IN dy is exact if
I_x N + I N_x = I_y M + I M_y.
In our case, M = P(x)y − Q(x) and N = 1, so
I_x + 0 = I_y (P(x)y − Q(x)) + I P(x).  (∗∗∗)
We need only one solution, so we look for one of the form I(x), i.e., with I_y = 0. Then (∗∗∗) becomes
dI/dx = I P(x).
This is separable. So
dI/I = P(x) dx,
ln(|I|) = ∫ P(x) dx + C,
|I| = e^C e^{∫ P(x) dx},
and since the exponential is positive we may take
I = e^{∫ P(x) dx}.
We conclude that e^{∫ P(x) dx} is an integrating factor for y′ + P(x)y = Q(x).
(x dy − y dx)/x = x³ dx.
Multiplying by the integrating factor 1/x gives us
(x dy − y dx)/x² = x² dx.
Then
y/x = x³/3 + C,
y = x³/3 + Cx
on the domain (0, ∞), where C is an arbitrary constant (x > 0 is given).
In general, given y′ + P(x)y = Q(x), multiply by e^{∫ P(x) dx} to obtain
e^{∫ P(x) dx} y′ + e^{∫ P(x) dx} P(x)y = Q(x) e^{∫ P(x) dx},
where the left-hand side is d(y e^{∫ P(x) dx})/dx. Therefore,
y e^{∫ P(x) dx} = ∫ Q(x) e^{∫ P(x) dx} dx + C,
y = e^{−∫ P(x) dx} ∫ Q(x) e^{∫ P(x) dx} dx + C e^{−∫ P(x) dx},
Solution. What should P(x) be? To find it, we put the equation in standard form, giving us
y′ + (2/x) y = 4x.
Therefore, P(x) = 2/x. Immediately, we have
I = e^{∫ (2/x) dx} = e^{ln(x²)} = x².
Multiplying through by x² gives
x²y′ + 2xy = 4x³,
x²y = x⁴ + C,
y = x² + C/x²,
where C is an arbitrary constant and x ≠ 0.
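Any claimed general solution can be checked by substituting back into the equation. The snippet below (illustrative, with an arbitrary choice of C) confirms numerically that y = x² + C/x² satisfies y′ + (2/x)y = 4x.

```python
def y(x, C=5.0):
    """Claimed general solution y = x^2 + C/x^2."""
    return x**2 + C / x**2

def dy(x, C=5.0, h=1e-6):
    """Central-difference approximation to y'."""
    return (y(x + h, C) - y(x - h, C)) / (2 * h)

# Residual of y' + (2/x)*y - 4x should vanish for every C and x != 0.
residual = max(abs(dy(x) + 2 * y(x) / x - 4 * x) for x in [0.5, 1.0, 2.0, 3.0])
```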
dx/dy + 2x = e^{−y},
where I = e^{∫ 2 dy} = e^{2y}. Therefore,
e^{2y} dx/dy + 2x e^{2y} = e^{y},
x e^{2y} = e^{y} + C,
2.5 Substitutions
In many cases, equations can be put into one of the standard forms discussed
above (separable, linear, etc.) by a substitution.
Solution. This is a first order linear equation for y′. Let u = y′. Then the equation becomes
u′ − 2u = −5.
The integrating factor is then I = e^{∫ −2 dx} = e^{−2x}. Thus,
y = (5/2)x + (C/2)e^{2x} + C₁ = (5/2)x + C₁e^{2x} + C₂
on the domain R, where C₁ and C₂ are arbitrary constants.
dy/dx + P(x)y = Q(x)yⁿ.
Let z = y^{1−n}. Then
dz/dx = (1 − n) y^{−n} dy/dx,
giving us
y^{−n} dy/dx + P(x) y^{1−n} = Q(x),
(1/(1 − n)) dz/dx + P(x) z = Q(x),
dz/dx + (1 − n) P(x) z = (1 − n) Q(x),
which is linear in z.
Here n = 3, so z = y^{−2} and
dz/dx = −2y^{−3} dy/dx.
Therefore, dividing the equation y′ + xy = xy³ by y³, it becomes
y^{−3} y′ + x y^{−2} = x,
−z′/2 + xz = x,
z′ − 2xz = −2x.
We can readily see that I = e^{∫ −2x dx} = e^{−x²}. Thus,
e^{−x²} z′ − 2x e^{−x²} z = −2x e^{−x²},
e^{−x²} z = e^{−x²} + C,
z = 1 + C e^{x²},
y = ±1/√(1 + C e^{x²}).
The domain is R when C ≥ 0, and |x| < √(ln(−1/C)) when −1 < C < 0 (for C ≤ −1 there is no solution). An additional solution is y = 0 on R.
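The Bernoulli solution above can be verified by substitution. The check below (illustrative; C = 2 keeps the domain all of R) confirms that y = 1/√(1 + Ce^{x²}) satisfies y′ + xy = xy³.

```python
import math

C = 2.0  # any C >= 0 gives domain R

def y(x):
    return 1.0 / math.sqrt(1.0 + C * math.exp(x**2))

def dy(x, h=1e-7):
    return (y(x + h) - y(x - h)) / (2 * h)

# Residual of y' + x*y - x*y^3 should vanish identically.
residual = max(abs(dy(x) + x * y(x) - x * y(x)**3) for x in [0.0, 0.5, 1.0, 1.5])
```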
Proposition 2.19
If F is homogeneous of degree 0, then F is a function of y/x.
Proof. We have F(λx, λy) = F(x, y) for all λ. Let λ = 1/x. Then F(x, y) = F(1, y/x). ∎
dy/dx = −M/N,
This suggests that v = y/x (or equivalently, y = vx) might help. In fact, write
−M(x, y)/N(x, y) = R(y/x).
Then
dy/dx = R(y/x) = R(v),
where dy/dx = v + x dv/dx. Therefore,
x dv/dx = R(v) − v,
dv/(R(v) − v) = dx/x,
x³v²(v dx + x dv) = x³(1 + v³) dx,
x³v³ dx + x⁴v² dv = x³ dx + v³x³ dx,
v² dv = dx/x,
v³/3 = ln(|x|) + C = ln(|x|) + ln(|A|) = ln(|Ax|) = ln(Ax),
where the sign of A is chosen to match the sign of x. Therefore, the general solution is y = x(3 ln(Ax))^{1/3}, where A is a nonzero constant. Every A > 0 yields a solution on the domain (0, ∞); every A < 0 yields a solution on (−∞, 0). In addition, there is the solution y = 0 on the domain R.
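The general solution can be substituted back into the original equation, which (as reconstructed here; an assumption on my part) reads xy²y′ = x³ + y³. The sketch below checks the residual numerically for one A > 0.

```python
import math

A = 2.0  # any nonzero A; A > 0 corresponds to the domain (0, inf)

def y(x):
    """General solution y = x * (3 ln(Ax))^(1/3)."""
    return x * (3.0 * math.log(A * x)) ** (1.0 / 3.0)

def dy(x, h=1e-7):
    return (y(x + h) - y(x - h)) / (2 * h)

# Residual of x*y^2*y' - (x^3 + y^3); checked where ln(Ax) > 0.
residual = max(abs(x * y(x)**2 * dy(x) - (x**3 + y(x)**3)) for x in [0.7, 1.0, 2.0])
```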
F(y″, y′, y, x) = 0.
If x does not appear explicitly, the equation has the form
F(d²y/dx², dy/dx, y) = 0.
Let v = y′ and regard v as a function of y. Then
d²y/dx² = dv/dx = (dv/dy)(dy/dx) = (dv/dy) v.
Therefore,
F(d²y/dx², dy/dx, y) = 0 becomes F((dv/dy) v, v, y) = 0.
This is a first order equation in v and y.
Example 2.22. Solve y″ = 4(y′)^{3/2} y.
Solution. With v = y′,
d²y/dx² = (dv/dy) v,
so
v dv/dy = 4v^{3/2} y,
dv/√v = 4y dy,  v ≠ 0,
2√v = 2y² + C₁,
√v = y² + C₂,
dy/dx = v = (y² + C₂)²,
dx = dy/(y² + C₂)²,
x = ∫ dy/(y² + C₂)²
  = (1/(2C₂^{3/2})) tan⁻¹(y/√C₂) + y/(2C₂(y² + C₂)) + C₃,  C₂ > 0,
  = −1/(3y³) + C₃,  C₂ = 0,
  = (1/(2(−C₂)^{3/2})) artanh(y/√(−C₂)) + y/(2C₂(y² + C₂)) + C₃,  C₂ < 0.
We can eliminate y by letting y = e^v. Then y′ = e^v v′ and y″ = e^v(v′)² + e^v v″. The equation then becomes
P(x)(e^v(v′)² + e^v v″) + Q(x) e^v v′ + R(x) e^v = 0,
x² e^v(v′)² + x² e^v v″ + (x − x²) e^v v′ − e^{2x} e^v = 0.
Write z = v′. Then
x² z′ + x² z² + (1 − x) x z = e^{2x}.
Now we are on our own; there is no standard technique for this case. Suppose we try u = xz. Then z = u/x and
z′ = −u/x² + u′/x.
Then it follows that
x u′ + u² − x u = e^{2x}.
This is a bit simpler, but it is still nonstandard. We can see that letting u = sex
2.5. SUBSTITUTIONS 23
will give us some cancellation. Thus, u0 = s0 ex + sex and our equation becomes
xs0 ex +
xse
x
+ s2 e2x
xse
x
= e2x ,
xs0 + s2 ex = ex ,
xs0 = ex 1 s2 ,
s0 ex
= ,
1 s2 x
Z x
1 1 + s e
ln = dx.
2 1 s x
Working our way back through the substitutions, we find that s = zxe^{−x}, so our solution becomes
(1/2) ln|(1 + zxe^{−x})/(1 − zxe^{−x})| = ∫ (e^x/x) dx.
Using algebra, we could solve this equation for z in terms of x and then integrate the result to get v, which then determines y = e^v as a function of x. The algebraic solution for z is messy, and we will not be able to find a closed form expression for the antiderivative v, so we will abandon the calculation at this point. In practical applications, one would generally use power series techniques on equations of this form and calculate as many terms of the Taylor series of the solution as are needed to give the desired accuracy.
(a₁x + b₁y + c₁) dx + (a₂x + b₂y + c₂) dy = 0.
Substituting x ↦ x + h, y ↦ y + k gives
(a₁x + a₁h + b₁y + b₁k + c₁) dx + (a₂x + a₂h + b₂y + b₂k + c₂) dy = 0.
So suppose
det [a₁ b₁; a₂ b₂] = 0.
Then (a₂, b₂) = m(a₁, b₁) for some m. Let z = a₁x + b₁y, so that
dy = (dz − a₁ dx)/b₁,
(z + c₁) dx + (mz + c₂)(dz − a₁ dx)/b₁ = 0,
(z + c₁ − (a₁/b₁)(mz + c₂)) dx + ((mz + c₂)/b₁) dz = 0,
which is separable:
−b₁ dx = (mz + c₂)/(z + c₁ − (a₁/b₁)(mz + c₂)) dz.
(Figure 3.1: members of a family of curves for C = 1, 4, 9.)

Chapter 3
Applications and Examples of First Order ODEs
g = ∂f/∂x (x, y, C) + ∂f/∂y (x, y, C) · dy/dx.
For the family
f = x² + y³ + Cx⁴ = 0, this gives g = 2x + 3y² dy/dx + 4Cx³ = 0.
We can eliminate C between the equations f = 0 and g = 0, that is,
f = 0 ⟹ C = −(x² + y³)/x⁴.
Therefore,
g = 0 ⟹ 2x + 3y² dy/dx − 4(x² + y³)/x = 0.
This yields a first order de whose solution includes the initial family of curves.
Solution. Figure 3.2 shows the plot of y = C sin(x) for several values of C. We have C = y/sin(x), so it follows that
dy/dx = C cos(x) = (y/sin(x)) cos(x) = y cot(x).
Two curves meet at right angles if the product of their slopes is −1, i.e., m_new = −1/m_old. So orthogonal trajectories satisfy the equation
dy/dx = −1/(y cot(x)) = −tan(x)/y.
Figure 3.2: A plot of the family y = C sin(x) (in black) and its orthogonal trajectories (in gray).
y dy = −tan(x) dx,
y²/2 = ln(|cos(x)|) + C,
y = ±√(2 ln(|cos(x)|) + C₁),
2. Bank interest. The amount of interest you receive on your bank account is proportional to your balance. In this case, k > 0. Actually, you do not do quite this well at the bank, because they compound only daily rather than continuously. E.g., if we measure time in years, our model predicts that x = x₀e^{kt}, where x₀ is your initial investment and k is the interest rate. Actually, you get
x = x₀(1 + k/n)^{nt},
where n is the number of compounding periods per year.
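The gap between continuous and n-times-per-year compounding is tiny for daily compounding. A quick comparison (the principal and rate are made-up figures):

```python
import math

x0, k, t, n = 1000.0, 0.05, 1.0, 365   # principal, annual rate, years, periods/yr

continuous = x0 * math.exp(k * t)        # model's prediction x = x0*e^(kt)
daily = x0 * (1.0 + k / n) ** (n * t)    # what the bank actually pays
gap = continuous - daily
```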
Example 3.3. The half-life of radium is 1600 years, i.e., it takes 1600 years for
half of any quantity to decay. If a sample initially contains 50 g, how long will
it be until it contains 45 g?
Solution. Let x(t) be the amount of radium present at time t in years. Then we know that dx/dt = kx, so x = x₀e^{kt}. With x(0) = 50, we quickly have x = 50e^{kt}. Solving for t gives t = ln(x/50)/k. With x(1600) = 25, we have 25 = 50e^{1600k}. Therefore,
1600k = ln(1/2) = −ln(2),
so k = −ln(2)/1600. For x = 45, this gives t = 1600 ln(50/45)/ln(2) ≈ 243 years.
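Finishing the computation numerically (a small illustration of the formulas just derived):

```python
import math

half_life = 1600.0                  # years
k = -math.log(2.0) / half_life      # decay constant, from 1600k = -ln(2)

# Time for a 50 g sample to decay to 45 g: t = ln(x/50)/k.
t = math.log(45.0 / 50.0) / k       # about 243 years
```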
dN/dt = kN(L − N),
3.4 Predator-Prey Models
dx/dt = ax − bxy,
dy/dt = −cy + dxy
is our mathematical model.
If we want to find the function y(x), which describes how the numbers of foxes and rabbits are related, we begin by dividing to get the differential equation
dy/dx = (−cy + dxy)/(ax − bxy)
with a, b, c, d, x(t), y(t) positive.
This equation is separable and can be rewritten as
((a − by)/y) dy = ((−c + dx)/x) dx.
Integrating gives
a ln(y) − by = −c ln(x) + dx + C,
or equivalently
y^a e^{−by} = k x^{−c} e^{dx}.  (3.1)
Figure 3.3: A typical solution of the Predator-Prey model with a = 9.4, b = 1.58,
c = 6.84, d = 1.3, and k = 7.54.
Beginning at a point such as a, where there are few rabbits and few foxes, the
fox population does not initially increase much due to the lack of food, but with
so few predators, the number of rabbits multiplies rapidly. After a while, the
point b is reached, at which time the large food supply causes the rapid increase
in the number of foxes, which in turn curtails the growth of the rabbits. By
the time point c is reached, the large number of predators causes the number of
rabbits to decrease. Eventually, point d is reached, where the number of rabbits
has declined to the point where the lack of food causes the fox population to
decrease, eventually returning the situation to point a.
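The closed orbits described above reflect a conserved quantity: taking logs in Equation (3.1), a ln(y) − by + c ln(x) − dx is constant along trajectories. The sketch below (illustrative parameter values, not from the text) integrates the system with RK4 and checks that this invariant barely drifts.

```python
import math

a, b, c, d = 1.0, 0.5, 1.0, 0.5   # illustrative rates

def step(x, y, h):
    """One RK4 step of x' = ax - bxy (rabbits), y' = -cy + dxy (foxes)."""
    f = lambda x, y: (a * x - b * x * y, -c * y + d * x * y)
    k1 = f(x, y)
    k2 = f(x + h * k1[0] / 2, y + h * k1[1] / 2)
    k3 = f(x + h * k2[0] / 2, y + h * k2[1] / 2)
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

def invariant(x, y):
    # Constant along trajectories, by Equation (3.1).
    return a * math.log(y) - b * y + c * math.log(x) - d * x

x, y = 3.0, 1.0
E0 = invariant(x, y)
for _ in range(10000):
    x, y = step(x, y, 0.001)
drift = abs(invariant(x, y) - E0)
```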
Newton's law of cooling states that
dT/dt = k(Ts − T),  k > 0.
As T changes, the object gives or takes heat from its surroundings. We now
have two cases.
Case 1: Ts is constant. This occurs either because the heat given off is
transported elsewhere or because the surroundings are so large that the contri-
bution is negligible. In such a case, we have
dT/dt + kT = kTs,
which is linear, and the solution is
T = T₀e^{−kt} + Ts(1 − e^{−kt}).
Case 2: The system is closed. All heat lost by the object is gained by its
surroundings. We need to find Ts as a function of T . We have
change in temperature = heat gained/(weight × specific heat capacity).
Therefore,
wc(T − T₀) = −ws cs (Ts − Ts₀),
Ts = Ts₀ + (wc/(ws cs))(T₀ − T),
dT/dt + k(1 + wc/(ws cs)) T = k(Ts₀ + (wc/(ws cs)) T₀).
This is linear with different constants than the previous case. Writing r = wc/(ws cs), the solution is
T = T₀e^{−k(1+r)t} + ((Ts₀ + rT₀)/(1 + r))(1 − e^{−k(1+r)t}).
Figure 3.4: Fresh water is being poured into the tank as the well-mixed solution is flowing out.
Solution. Let Q(t) be the amount (in kilograms) of salt in the tank at time t (in minutes). The volume of water at time t is
10 + 3t − 2t = 10 + t.
The concentration of salt is
(amount of salt)/volume = Q/(10 + t),
so
dQ/dt = −(rate at which salt is leaving) = −(Q/(10 + t)) · 2 = −2Q/(10 + t).
dQ/dt = −2Q/(10 + t),
dQ/Q = −2 dt/(10 + t),
∫ dQ/Q = −2 ∫ dt/(10 + t),
ln(|Q|) = −2 ln(|10 + t|) + C,
ln(|Q|) = ln((10 + t)^{−2}) + C,
Q = A(10 + t)^{−2}.
The boundary condition gives
Q(0) = 20 ⟹ A/(10 + 0)² = 20 ⟹ A/100 = 20 ⟹ A = 2000.
Therefore,
Q(t) = 2000/(10 + t)².
Finally,
Q(5) = 2000/15² = 80/9 ≈ 8.89.
Therefore, after 5 minutes, the tank will contain approximately 8.89 kg of salt.
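The boundary-value solution is easy to evaluate directly (a small check of the numbers above):

```python
def Q(t):
    """Salt (kg) in the tank: the solution Q = 2000/(10 + t)^2 of the BVP."""
    return 2000.0 / (10.0 + t) ** 2

q0 = Q(0.0)   # should reproduce the boundary condition Q(0) = 20
q5 = Q(5.0)   # about 8.89 kg after 5 minutes
```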
Additional conditions required of the solution (Q(0) = 20 in the above example) are called boundary conditions, and a differential equation together with boundary conditions is called a boundary-value problem (BVP).
Figure 3.5: The plot of the amount of salt Q(t) in the tank at time t shows that salt leaves more slowly as time goes on.
Q = A/(10 + t)² is the general solution to the DE dQ/dt = −2Q/(10 + t), and Q = 2000/(10 + t)² is the solution to the boundary value problem dQ/dt = −2Q/(10 + t), Q(0) = 20. Boundary-value problems like this one, where the boundary conditions are initial values of various quantities, are also called initial-value problems (IVP).
3.7 Motion of Objects Falling Under Gravity with Air Resistance

In the absence of air resistance, the acceleration
a = d²x/dt²
is a constant, denoted g, the acceleration due to gravity, and F = ma = mg. The air resistance encountered depends on the shape of the object and other things, but under most circumstances the most significant effect is a force opposing the motion which is proportional to a power of the velocity. So
F = mg − kvⁿ  (= ma)
and
d²x/dt² = g − (k/m)(dx/dt)ⁿ,
Taking n = 1 and writing v = dx/dt, the equation is linear, and multiplying by the integrating factor e^{kt/m} gives
(dv/dt + (k/m)v) e^{kt/m} = g e^{kt/m},
e^{kt/m} v = (gm/k) e^{kt/m} + C,
v = mg/k + C e^{−kt/m},
where C is an arbitrary constant. Note that
v(0) = v₀ ⟹ v₀ = mg/k + C ⟹ C = v₀ − mg/k.
So we have
v = mg/k + (v₀ − mg/k) e^{−kt/m},
where v = dx/dt. Note that
x − x₀ = ∫₀^t v dt = (mg/k)t − (m/k)(v₀ − mg/k)(e^{−kt/m} − 1).
Let x(t) be the height of an object at time t, measured positively upward (from the centre of the earth). Newton's Law of Gravitation states that
F = −kmM/x²  (= ma),
where m is the mass of the object, M is the mass of the Earth, and k > 0. Note that
x″ = −kM/x².
We define the constant g, known as the acceleration due to gravity, by x″ = −g when x = R, where R is the radius of the earth. So k = gR²/M. Therefore x″ = −gR²/x². Since t does not appear, letting v = dx/dt so that
d²x/dt² = (dv/dx)(dx/dt) = v dv/dx
will reduce it to first order. Thus,
v dv/dx = −gR²/x²,
v dv = −(gR²/x²) dx,
∫ v dv = −∫ (gR²/x²) dx,
v²/2 = gR²/x + C,
for some arbitrary constant C. Note that v decreases as x increases, and if v is not sufficiently large, eventually v = 0 at some height x_max. So C = −gR²/x_max and
v² = 2R²g(1/x − 1/x_max).
Suppose we leave the surface of the Earth starting with velocity V. How far will we get? Setting v = V at x = R gives C = V²/2 − gR, so if V² < 2Rg the object rises to
x_max = 2R²g/(2Rg − V²)
and then falls back. To escape, we need V² ≥ 2Rg. Thus, the escape velocity is √(2Rg) ≈ 6.9 mi/s, or roughly 25000 mph.
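The quoted figure is quick to reproduce (using standard values for g and the Earth's radius; the units are my own choice):

```python
import math

g = 9.8         # m/s^2
R = 6.371e6     # Earth's radius in metres

v_escape = math.sqrt(2 * g * R)    # escape velocity in m/s (about 11.2 km/s)
v_mph = v_escape / 0.44704         # convert m/s to miles per hour
```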
3.9 Planetary Motion

F = −(km/r²) r̂  (= ma).
x′ = r′cos(θ) − r sin(θ)θ′,
a_x = x″ = r″cos(θ) − r′sin(θ)θ′ − r′sin(θ)θ′ − r cos(θ)(θ′)² − r sin(θ)θ″,
y′ = r′sin(θ) + r cos(θ)θ′,
a_y = y″ = r″sin(θ) + r′cos(θ)θ′ + r′cos(θ)θ′ − r sin(θ)(θ′)² + r cos(θ)θ″
  = r″sin(θ) + 2r′θ′cos(θ) − (θ′)² r sin(θ) + θ″r cos(θ).
Now a_r = a_x cos(θ) + a_y sin(θ) becomes
a_r = r″cos²(θ) − 2r′θ′sin(θ)cos(θ) − (θ′)² r cos²(θ) − θ″r sin(θ)cos(θ)
  + r″sin²(θ) + 2r′θ′sin(θ)cos(θ) − (θ′)² r sin²(θ) + θ″r sin(θ)cos(θ)
  = r″ − (θ′)² r,
and a_θ = −a_x sin(θ) + a_y cos(θ) becomes
a_θ = −r″sin(θ)cos(θ) + 2r′θ′sin²(θ) + (θ′)² r sin(θ)cos(θ) + θ″r sin²(θ)
  + r″sin(θ)cos(θ) + 2r′θ′cos²(θ) − (θ′)² r sin(θ)cos(θ) + θ″r cos²(θ)
  = 2r′θ′ + θ″r.
Note that
r″ − (θ′)² r = −k/r²,  (∗)
2r′θ′ + θ″r = 0.  (∗∗)
Multiplying (∗∗) by r gives
2rr′θ′ + θ″r² = 0 ⟹ (r²θ′)′ = 0 ⟹ r²θ′ = h.
We want the path of the planet, so we want an equation involving r and θ. Therefore, we eliminate t between Equation (∗) and r²θ′ = h. Thus,
r′ = (dr/dθ)θ′ = (dr/dθ)(h/r²),
r″ = (d²r/dθ²)(h/r²)θ′ − (2h/r³)(dr/dθ)r′
  = (h²/r²)((1/r²) d²r/dθ² − (2/r³)(dr/dθ)²).
Note that the area swept out is
A = ∫₀^θ ∫₀^{r(φ)} r dr dφ = ∫₀^θ (r(φ)²/2) dφ,
so dA/dθ = r(θ)²/2. So in our case, A′ = (r²/2)θ′ = h/2. Therefore, A = ht/2, which is Kepler's Second Law (area swept out is proportional to time).
Let u = 1/r. Then
du/dθ = −(1/r²) dr/dθ,
d²u/dθ² = (2/r³)(dr/dθ)² − (1/r²) d²r/dθ² = −(r²/h²) r″ = −r″/(h²u²).
Substituting into (∗) gives
−h²u² d²u/dθ² − h²u³ = −ku²,
and dividing by −h²u² (u ≠ 0) yields
d²u/dθ² + u = k/h².
Therefore,
u = B cos(θ − δ) + k/h² = C₁ sin(θ) + C₂ cos(θ) + k/h²,
where u = 1/r, so
r = 1/(k/h² + B cos(θ − δ)) = 1/(k/h² + C₁ sin(θ) + C₂ cos(θ)).
Therefore,
(k/h²) r + C₁ r sin(θ) + C₂ r cos(θ) = 1,
and since r sin(θ) = y and r cos(θ) = x,
(k/h²) r = 1 − C₁y − C₂x,
(k²/h⁴)(x² + y²) = (1 − C₁y − C₂x)²,
which is the equation of a conic section.
3.10 Particle Moving on a Curve

(Figure 3.6: a particle moving along a curve, with position measured by arc length s from a reference point.)
Let
v = ‖v‖ = ds/dt.
The vector v is in the tangent direction. The goal is to find the components of a in the tangential and normal directions. Let φ be the angle between the tangent line and the x axis. Then
cos(φ) = dx/ds,  sin(φ) = dy/ds.
In the standard basis, we have a = (a_x, a_y), where
a_x = d²x/dt²,  a_y = d²y/dt².
To get components in the T, n basis, apply the matrix representing the change of basis from T, n to the standard basis, i.e., rotation by −φ:
[cos(φ) sin(φ); −sin(φ) cos(φ)] [a_x; a_y] = [a_x cos(φ) + a_y sin(φ); −a_x sin(φ) + a_y cos(φ)],
i.e.,
a_T = a_x cos(φ) + a_y sin(φ) = a_x dx/ds + a_y dy/ds
  = (a_x dx/dt + a_y dy/dt)(dt/ds) = (a_x v_x + a_y v_y)/v
and
a_n = −a_x sin(φ) + a_y cos(φ) = (a_y v_x − a_x v_y)/v.
Also, we have v = √(v_x² + v_y²), so
d²s/dt² = dv/dt = (2v_x dv_x/dt + 2v_y dv_y/dt)/(2√(v_x² + v_y²)) = (v_x a_x + v_y a_y)/v = a_T.
But this was clear to start with because it is the component of a in the tangential
direction that affects ds/dt.
We now have
y′ = dy/dx = (dy/dt)/(dx/dt) = v_y/v_x,
y″ = (d/dx)(v_y/v_x) = ((v_x dv_y/dt − v_y dv_x/dt)/v_x²)(dt/dx) = (v_x a_y − v_y a_x)/v_x³ = a_n v/v_x³.
Since
v_x = dx/dt = (dx/ds)(ds/dt) = (dx/ds) v,
we get
a_n = y″v_x³/v = y″(dx/ds)³v² = y″v²/(ds/dx)³ = (y″/(1 + (y′)²)^{3/2}) v².
Let
R = (1 + (y′)²)^{3/2}/y″  (3.2)
so that a_n = v²/R. The quantity R is called the radius of curvature. It is the radius of the circle which most closely fits the curve at that point. This is shown in Figure 3.7. If the curve is a circle, then R is a constant equal to the radius.

Figure 3.7: The radius of curvature at a point is the radius of the circle whose edge best matches the curve at that point.
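Formula (3.2) can be sanity-checked on a circle, where R must equal the radius. A quick check on the unit circle (illustrative):

```python
import math

def radius_of_curvature(yp, ypp):
    """R = (1 + y'^2)^(3/2) / y'' from Equation (3.2)."""
    return (1 + yp**2) ** 1.5 / ypp

# Upper unit semicircle y = sqrt(1 - x^2): y' = -x/y and y'' = -1/y^3.
x = 0.3
y = math.sqrt(1 - x**2)
R = abs(radius_of_curvature(-x / y, -1.0 / y**3))
```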
Conversely, suppose that R is a constant. Then
Ry″ = (1 + (y′)²)^{3/2}.
Let z = y′. Then
Rz′ = (1 + z²)^{3/2}
and
dx = R dz/(1 + z²)^{3/2},
so
x = ∫ R dz/(1 + z²)^{3/2} = Rz/√(1 + z²) + A,
x − A = R/√((1/z)² + 1),
(1/z)² + 1 = R²/(x − A)²,
(1/z)² = R²/(x − A)² − 1 = (R² − (x − A)²)/(x − A)²,
z² = (x − A)²/(R² − (x − A)²),
z = dy/dx = ±(x − A)/√(R² − (x − A)²).
Therefore,
y = ∫ ±(x − A)/√(R² − (x − A)²) dx = ∓√(R² − (x − A)²) + B,
(y − B)² = R² − (x − A)²,
(x − A)² + (y − B)² = R².
Suppose now that the only external force F acting on the particle is gravity, i.e., F = (0, −mg). Then in terms of the T, n basis, the components of F are
[cos(φ) sin(φ); −sin(φ) cos(φ)] [0; −mg] = [−mg sin(φ); −mg cos(φ)] = [−mg dy/ds; −mg dx/ds].
The tangential component gives
m d²s/dt² = −mg dy/ds.
With
ds/dt = v,  d²s/dt² = dv/dt = (dv/ds)(ds/dt) = v dv/ds,
this becomes
v dv/ds = −g dy/ds,
v dv = −g dy,
v²/2 = −g(y − y₀) = g(y₀ − y),
ds/dt = v = √(2g(y₀ − y)).
To proceed, we must know y(s), which depends on the curve. Therefore,
dt = ds/√(2g(y₀ − y(s))),
t = ∫ ds/√(2g(y₀ − y(s))).
If m aₙ ever becomes greater than the normal component of F, the particle will fly off the curve. This will happen when
m y″vₓ³/v = −mg dx/ds = −mg (dx/dt)(dt/ds) = −mg vₓ/v,
y″vₓ² = −g.
Using
vₓ = dx/dt = (dx/ds)(ds/dt) = (dx/ds) v,
we get
−g = y″v²(dx/ds)² = y″v²/(ds/dx)² = y″ · 2g(y₀ − y)/(1 + (y′)²),
so the particle leaves the curve when
2y″(y − y₀) = 1 + (y′)².
Example 3.6. A particle is placed at the point (1, 1) on the curve y = x3 and
released. It slides down the curve under the influence of gravity. Determine
whether or not it will ever fly off the curve, and if so, where.
y = x³,  y′ = 3x²,  y″ = 6x,  y₀ = 1.
Therefore,
2y″(y − y₀) = 1 + (y′)²,
12x(x³ − 1) = 1 + 9x⁴,
Figure 3.8: A particle released from (1, 1), under the influence of gravity, will fly off the curve at approximately (−0.083, −0.00057).
Chapter 4
Linear Differential Equations
Although we have dealt with a lot of manageable cases, first order equations
are difficult in general. But second order equations are much worse. The most
manageable case is linear equations. We begin with the general theory of linear
differential equations. Specific techniques for solving these equations will be
given later.
dⁿy/dxⁿ + P₁(x) d^{n−1}y/dx^{n−1} + P₂(x) d^{n−2}y/dx^{n−2} + ⋯ + Pₙ(x)y = Q(x).
An nth order linear equation can be written as a linear system (see Chapter
9) as follows. Let
y₁(x) = y(x),  y₂(x) = dy/dx,  y₃(x) = d²y/dx²,  ...,  yₙ(x) = d^{n−1}y/dx^{n−1}.
Then
dy₁/dx = y₂,  dy₂/dx = y₃,  ...,  dy_{n−1}/dx = yₙ,
If the coefficient of dⁿy/dxⁿ is not 1, we can divide the equation through by this coefficient to write it in normal form. A linear equation is in normal form when the coefficient of y⁽ⁿ⁾ is 1.
and we have
dyₙ/dx = dⁿy/dxⁿ = −P₁(x) d^{n−1}y/dx^{n−1} − ⋯ − Pₙ(x)y + Q(x)
  = −P₁(x)yₙ − ⋯ − Pₙ(x)y₁ + Q(x).
Therefore,
dY/dx = A(x)Y + B(x),
where
A(x) =
[ 0        1            0            ⋯  0       ]
[ 0        0            1            ⋯  0       ]
[ ⋮        ⋮            ⋮            ⋱  ⋮       ]
[ 0        0            0            ⋯  1       ]
[ −Pₙ(x)  −P_{n−1}(x)  −P_{n−2}(x)  ⋯  −P₁(x) ]
and
B(x) = [0, 0, ..., 0, Q(x)]ᵀ.
Theorem 4.1
If P₁(x), ..., Pₙ(x), Q(x) are continuous throughout some interval I and x₀ is an interior point of I, then for any numbers W₀, ..., W_{n−1}, there exists a unique solution throughout I of
dⁿy/dxⁿ + P₁(x) d^{n−1}y/dx^{n−1} + ⋯ + Pₙ(x)y = Q(x)
satisfying y(x₀) = W₀, y′(x₀) = W₁, ..., y^{(n−1)}(x₀) = W_{n−1}.
Proposition 4.2
1. If r(x) is a solution of Equation (4.1), then so is ar(x) for all a ∈ R.
2. If r(x) and s(x) are solutions of Equation (4.1), then so is r(x) + s(x).
Proof. Substituting ar into the left-hand side of (4.1) gives a times the left-hand side evaluated at r, which is zero; therefore, ar is a solution. Similarly, substituting r + s gives the sum of the left-hand sides at r and at s, which is zero; therefore, r + s is a solution. ∎
Corollary 4.3
The set of solutions to Equation (4.1) forms a vector space.
A = 0 ⟹ W(x) ≡ 0,
A ≠ 0 ⟹ W(x) ≠ 0
for all x.
Theorem 4.4
Let V be the set of solutions to Equation (4.1) and suppose that y₁, y₂, ..., yₙ ∈ V. Then y₁, y₂, ..., yₙ are linearly independent if and only if W ≠ 0.
Proof. Suppose that y₁, y₂, ..., yₙ are linearly dependent. Then there exist constants c₁, c₂, ..., cₙ, not all zero, such that
c₁y₁(x) + c₂y₂(x) + ⋯ + cₙyₙ(x) = 0,
⋮
c₁y₁^{(n−1)}(x) + c₂y₂^{(n−1)}(x) + ⋯ + cₙyₙ^{(n−1)}(x) = 0.
Since there exists a nonzero solution (c₁, c₂, ..., cₙ) of this linear system, we have
M(c₁, c₂, ..., cₙ)ᵀ = 0,
so it follows that |M| = 0, that is, W = 0.
Now suppose that W = 0. Select an x₀ ∈ R. Since W(x) ≡ 0, we have W(x₀) = 0. Thus, there exist constants c₁, c₂, ..., cₙ, not all zero, such that
M(c₁, c₂, ..., cₙ)ᵀ = 0,
where
M =
[ y₁(x₀)          ⋯  yₙ(x₀)          ]
[ ⋮               ⋱  ⋮               ]
[ y₁^{(n−1)}(x₀)  ⋯  yₙ^{(n−1)}(x₀) ],
that is, f = c₁y₁ + c₂y₂ + ⋯ + cₙyₙ satisfies
f(x₀) = 0,  ...,  f^{(n−1)}(x₀) = 0.
But y = 0 also satisfies these conditions, and by the uniqueness part of Theorem 4.1, it is the only function satisfying them. Therefore, f ≡ 0, that is, y₁, ..., yₙ are linearly dependent. ∎
Corollary 4.5
Let V be the set of solutions to Equation (4.1). Then we have dim(V ) = n.
c₁y₁^{(j)}(x) + c₂y₂^{(j)}(x) + ⋯ + cₙyₙ^{(j)}(x) ≡ 0,
c₁y₁^{(j)}(x₀) + c₂y₂^{(j)}(x₀) + ⋯ + cₙyₙ^{(j)}(x₀) ≡ 0.
The next problem concerns how to find the n linearly independent solutions. For
n > 1, there is no good method in general. There is, however, a technique which
works in the special cases where all the coefficients are constants. Consider the
constant coefficient linear equation
a₀y^{(n)} + a₁y^{(n−1)} + ⋯ + aₙy = 0.
Substituting y = e^{λx} gives
a₀λⁿe^{λx} + a₁λ^{n−1}e^{λx} + ⋯ + aₙe^{λx} = 0,
that is,
a₀λⁿ + a₁λ^{n−1} + ⋯ + aₙ = 0.
Let
p(λ) = a₀λⁿ + a₁λ^{n−1} + ⋯ + aₙ.  (4.3)
Suppose that Equation (4.3) has a repeated real root r. Then we do not have enough solutions of the form e^{λx}. But consider y = xe^{rx}. Substituting into the equation and collecting terms gives
Σ_{k=0}^n a_k (r^{n−k} x + (n − k) r^{n−k−1}) e^{rx}
  = (Σ_{k=0}^n a_k r^{n−k}) x e^{rx} + (Σ_{k=0}^{n−1} a_k (n − k) r^{n−k−1}) e^{rx}
  = p(r) x e^{rx} + p′(r) e^{rx}.
Since r is a repeated root,
p(λ) = (λ − r)² q(λ) ⟹ p′(λ) = 2(λ − r) q(λ) + (λ − r)² q′(λ) ⟹ p′(r) = 0,
so y = xe^{rx} is also a solution.
Let z₁ = e^{(a+ib)x} and z₂ = e^{(a−ib)x}. Then
(z₁ + z₂)/2 = e^{ax} cos(bx),  (z₁ − z₂)/(2i) = e^{ax} sin(bx)
are solutions. So the two roots a + ib and a − ib give two solutions e^{ax} cos(bx) and e^{ax} sin(bx). For repeated complex roots, use xe^{ax} cos(bx), xe^{ax} sin(bx), etc. So in all cases, we get n solutions.
The expression c₁e^{ax} cos(bx) + c₂e^{ax} sin(bx) can be rewritten as follows. Let A = √(c₁² + c₂²) and
θ = cos⁻¹(c₁/√(c₁² + c₂²)),
that is,
cos(θ) = c₁/√(c₁² + c₂²),  sin(θ) = √(1 − cos²(θ)) = c₂/√(c₁² + c₂²).
Therefore,
c₁e^{ax} cos(bx) + c₂e^{ax} sin(bx) = Ae^{ax}((c₁/A) cos(bx) + (c₂/A) sin(bx))
  = Ae^{ax}(cos(θ) cos(bx) + sin(θ) sin(bx))
  = Ae^{ax} cos(bx − θ).
Solution. We have
λ⁵ − 4λ⁴ + 5λ³ + 6λ² − 36λ + 40 = 0,
(λ − 2)²(λ + 2)(λ² − 2λ + 5) = 0.
Therefore, λ = 2, 2, −2, 1 ± 2i.
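The root-finding step is the only hard part in practice; the factorization above can be confirmed numerically (a small sketch using NumPy):

```python
import numpy as np

# Characteristic polynomial p(l) = l^5 - 4l^4 + 5l^3 + 6l^2 - 36l + 40.
roots = np.roots([1, -4, 5, 6, -36, 40])

# Expect the double root 2, the simple root -2, and the pair 1 +/- 2i.
n_at_2 = sum(1 for r in roots if abs(r - 2) < 1e-4)
```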
4.2 Nonhomogeneous Linear Equations

If y_p is any particular solution, then y − y_p solves the associated homogeneous equation, so
y − y_p = c₁y₁ + c₂y₂ + ⋯ + cₙyₙ.
Therefore,
y = c1 y1 + c2 y2 + + cn yn + yp
y₁ = e^x,  y₂ = e^{−x}.
Chapter 5
Second Order Linear Equations
Lemma 5.1
Let f(x) be a twice differentiable function on a closed bounded interval J. If {x ∈ J : f(x) = 0} is infinite, then there exists an x₀ ∈ J such that f(x₀) = 0 and f′(x₀) = 0.
Corollary 5.2
Let y1 (x) be a solution of y 00 + P (x)y 0 + Q(x)y = 0 on a closed bounded
interval J. If y1 is not the zero solution, then y1 (x) = 0 for at most finitely
many x in J.
on the interior of each interval. This will define y₂ throughout J except for a few isolated points.
Consider a subinterval I on which y₁(x) is never zero, so that y₁(x) > 0 or y₁(x) < 0 throughout I. We have
W(y₁, y₂) = | y₁ y₂; y₁′ y₂′ | = y₁y₂′ − y₁′y₂.
If y₂ is also a solution to y″ + P(x)y′ + Q(x)y = 0, then W = Ae^{−∫P(x) dx} for some constant A (Abel's formula). Suppose we choose A = 1. Then
y₁y₂′ − y₁′y₂ = e^{−∫P(x) dx},
where y₁ is known. This is a linear first order equation for y₂. In standard form, we have
y₂′ − (y₁′/y₁)y₂ = e^{−∫P(x) dx}/y₁,
where y₁ ≠ 0 throughout I. The integrating factor is
e^{−∫(y₁′/y₁) dx} = e^{−ln(|y₁|)+C} = C₀/|y₁|,
finally giving us
y₂ = y₁ ∫ (e^{−∫P(x) dx}/y₁²) dx.  (5.2a)
This defines y₂ on J except at a few isolated points where y₁ = 0. At those points, we have
y₁y₂′ − y₁′y₂ = e^{−∫P(x) dx},
so
y₂ = −e^{−∫P(x) dx}/y₁′.  (5.2b)
This defines y₂ at the boundary points of the subintervals. Different A's correspond to different y₂'s.
y″ − ((x + 2)/x)y′ + ((x + 2)/x²)y = 0
and solve
y″ − ((x + 2)/x)y′ + ((x + 2)/x²)y = xe^x.
Therefore,
W = | x  xe^x; 1  e^x + xe^x | = xe^x + x²e^x − xe^x = x²e^x.
Then
v₁′ = −y₂R/W = −(xe^x · xe^x)/(x²e^x) = −e^x ⟹ v₁ = −e^x,
v₂′ = y₁R/W = (x · xe^x)/(x²e^x) = 1 ⟹ v₂ = x.
Therefore,
y_p = v₁y₁ + v₂y₂ = −xe^x + x²e^x,
5.2 Undetermined Coefficients

The method of undetermined coefficients applies when:
1. we have constant coefficients, i.e., P(x) = p and Q(x) = q for some constants p and q;
2. R(x) is nice.
y′ = 2Ae^{2x},  y″ = 4Ae^{2x},
4Ae^{2x} − Ae^{2x} = e^{2x} ⟹ 3A = 1 ⟹ A = 1/3.
Therefore,
y = c₁e^x + c₂e^{−x} + (1/3)e^{2x},
where c1 and c2 are arbitrary constants.
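A particular solution found by undetermined coefficients can always be verified by substitution; the check below (illustrative, using finite differences) confirms that y_p = e^{2x}/3 satisfies y″ − y = e^{2x}.

```python
import math

def yp(x):
    return math.exp(2 * x) / 3.0

def d2(f, x, h=1e-4):
    """Central second difference approximating f''."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Residual of y'' - y - e^(2x) should vanish.
residual = max(abs(d2(yp, x) - yp(x) - math.exp(2 * x)) for x in [-1.0, 0.0, 1.0])
```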
Undetermined coefficients works when P (x) and Q(x) are constants and
R(x) is a sum of the terms of the forms shown in Table 5.1.
6A = 1 ⟹ A = 1/6,
2B = −3 ⟹ B = −3/2.
Therefore, y_p = e^x/6 − 3/2, and we have
y = c₁e^{−x} + c₂e^{−2x} + e^x/6 − 3/2,
where c₁ and c₂ are arbitrary constants.
Therefore,
−2B = 1 ⟹ B = −1/2,
2A = 0 ⟹ A = 0,
and
y = c₁ sin(x) + c₂ cos(x) − x cos(x)/2,
where c₁ and c₂ are arbitrary constants.
−4A = 1 ⟹ A = −1/4,
−4B = 0 ⟹ B = 0,
2A − 4C = 0 ⟹ C = A/2 = −1/8.
Therefore,
y = c₁e^{2x} + c₂e^{−2x} − x²/4 − 1/8,
where c₁ and c₂ are arbitrary constants.
y″ − y′ = 2A − 2Ax − B.
Therefore,
−2A = 1 ⟹ A = −1/2,
2A − B = 0 ⟹ B = −1,
and
y = c₁e^x + c₂ − x²/2 − x,
Suppose we have

$$y'' + ay' + by = R(x).$$

For $R(x) = ce^{\lambda x}$, trying $y = Ae^{\lambda x}$ in

$$y'' + ay' + by = ce^{\lambda x}$$

gives

$$\left(\lambda^2 + a\lambda + b\right)Ae^{\lambda x} = p(\lambda)Ae^{\lambda x}.$$

If $p(\lambda) = 0$, try $y = Axe^{\lambda x}$ instead. Then

$$y' = Ae^{\lambda x} + A\lambda xe^{\lambda x}, \qquad
y'' = A\lambda e^{\lambda x} + A\lambda e^{\lambda x} + A\lambda^2 xe^{\lambda x} = 2A\lambda e^{\lambda x} + A\lambda^2 xe^{\lambda x}.$$

Thus,

$$y'' + ay' + by = A\left(\lambda^2 x + a\lambda x + bx + 2\lambda + a\right)e^{\lambda x}
= A\left(p(\lambda)xe^{\lambda x} + p'(\lambda)e^{\lambda x}\right)
= Ap'(\lambda)e^{\lambda x}.$$

$$z'' + az' + bz = ce^{\lambda x}. \tag{5.3}$$
respectively.

$$\operatorname{Re}(z) = \frac{3}{40}\left(\cos(2x) - 2\sin(2x)\right), \qquad
\operatorname{Im}(z) = \frac{3}{40}\left(2\cos(2x) + \sin(2x)\right).$$

Hence, the solution is

$$y = c_1e^{2x} + c_2e^{6x} + \frac{3}{40}\left(\cos(2x) - 2\sin(2x)\right),$$

where $c_1$ and $c_2$ are arbitrary constants.
where $P(x)$ and $Q(x)$ are not necessarily constant. Let $y_1$ and $y_2$ be linearly
independent solutions of the associated homogeneous equation. Setting

$$y = v_1y_1 + v_2y_2,$$
$$y' = v_1y_1' + v_2y_2' + v_1'y_1 + v_2'y_2 = v_1y_1' + v_2y_2'$$

(imposing the constraint $v_1'y_1 + v_2'y_2 = 0$), we get

$$y'' = v_1y_1'' + v_2y_2'' + v_1'y_1' + v_2'y_2'.$$

Therefore,

$$y'' - 5y' + 6y = 0 \implies \lambda^2 - 5\lambda + 6 = 0 \implies \lambda = 3, 2.$$
Therefore,

$$v_1' = -\frac{y_2R}{W} = -\frac{e^{2x} \cdot x^2e^{3x}}{-e^{5x}} = x^2 \implies v_1 = \frac{x^3}{3}.$$

We can use any function $v_1$ with the right $v_1'$, so choose the constant of integration to be $0$.

For $v_2$, we have

$$v_2' = \frac{y_1R}{W} = \frac{e^{3x} \cdot x^2e^{3x}}{-e^{5x}} = -x^2e^x,$$

so

$$v_2 = -\int x^2e^x\,dx
\underset{\text{by parts}}{=} -x^2e^x + 2\int xe^x\,dx
\underset{\text{by parts}}{=} -x^2e^x + 2xe^x - 2\int e^x\,dx
= -x^2e^x + 2xe^x - 2e^x
= -\left(x^2 - 2x + 2\right)e^x.$$

Finally,

$$y_p = \frac{x^3}{3}e^{3x} - \left(x^2 - 2x + 2\right)e^xe^{2x}
= e^{3x}\left(\frac{x^3}{3} - x^2 + 2x - 2\right),$$
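The particular solution produced by variation of parameters can be checked directly; a small sympy verification of my own (not from the text):

```python
import sympy as sp

x = sp.symbols('x')
# y'' - 5y' + 6y = x^2 e^{3x}; candidate particular solution from the worked example
yp = sp.exp(3*x) * (x**3/3 - x**2 + 2*x - 2)
residual = yp.diff(x, 2) - 5*yp.diff(x) + 6*yp - x**2*sp.exp(3*x)
assert sp.simplify(residual) == 0
```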
Therefore,

$$v_1' = -\frac{e^{2x}\sin(e^{-x})}{e^{3x}} = -e^{-x}\sin(e^{-x}) \implies v_1 = -\cos(e^{-x}).$$

For $v_2$, we have

$$v_2' = \frac{e^x\sin(e^{-x})}{e^{3x}} = e^{-2x}\sin(e^{-x}).$$

Therefore,

$$v_2 = \int e^{-2x}\sin(e^{-x})\,dx
\underset{t\,=\,e^{-x}}{=} -\int t\sin(t)\,dt
\underset{\text{by parts}}{=} -\left(-t\cos(t) + \int\cos(t)\,dt\right)
= t\cos(t) - \sin(t)
= e^{-x}\cos(e^{-x}) - \sin(e^{-x}).$$

Finally,

$$y_p = v_1y_1 + v_2y_2
= -e^x\cos(e^{-x}) + e^{2x}\left(e^{-x}\cos(e^{-x}) - \sin(e^{-x})\right)
= -e^x\cos(e^{-x}) + e^x\cos(e^{-x}) - e^{2x}\sin(e^{-x})
= -e^{2x}\sin(e^{-x}),$$
Applications of Second
Order Differential
Equations
We now look specifically at two applications of second order des. We will see that
they turn out to be analogs of each other.
The forces acting are gravity, spring force, air resistance, and any other external
where $F$ is the spring force, $d$ is the displacement of the spring from its natural
length, and k > 0 is the spring constant. As we lower the object, the spring
force increases. There is an equilibrium position at which the spring and grav-
ity cancel. Let x be the distance of the object from this equilibrium position
measured positively downwards. Let s be the displacement of the spring from
the natural length at equilibrium. Then it follows that ks = mg.
If the object is located at x, then the forces are
1. gravity.
2. spring.
3. air resistance.
4. external.
Considering just gravity and the spring force, we have
In other words, the resistance opposes motion, so the force has the opposite sign
of velocity. We will suppose that any external forces present depend only on
time but not on position, i.e., Fexternal = F (t). Then we have
$$\underbrace{F_{\text{total}}}_{ma} = -kx - rv + F(t),$$

so it follows that

$$m\frac{d^2x}{dt^2} + r\frac{dx}{dt} + kx = F(t). \tag{6.2}$$

Therefore, $m\lambda^2 + r\lambda + k = 0$ and we have

$$\lambda = \frac{-r \pm \sqrt{r^2 - 4mk}}{2m} = -\frac{r}{2m} \pm \sqrt{\left(\frac{r}{2m}\right)^2 - \frac{k}{m}}.$$
Intuitively, when $x = 0$, the spring force and gravity balance; changing $x$ creates a force in the opposite direction.

Then $\lambda = \mu \pm i\nu$ and

$$x = e^{\mu t}\left(C_1\cos(\nu t) + C_2\sin(\nu t)\right),$$

where $C_1$ and $C_2$ are arbitrary constants. Figure 6.2a shows qualitatively what this
solution looks like.

Damped vibrations is the common case. The oscillations die out in time,
and the period is $2\pi/\nu$. If $r = 0$, then $\mu = 0$. In such a case, there is no
resistance, and the object oscillates forever.

where $C_1$ and $C_2$ are arbitrary constants. Figure 6.2b shows qualitatively what this
solution looks like.
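The classification by the sign of $r^2 - 4mk$ can be illustrated numerically. The values below are a hypothetical underdamped case of my own choosing, not from the text:

```python
import numpy as np

# Hypothetical underdamped values: m = 1, r = 0.2, k = 1 (so r^2 < 4mk)
m, r, k = 1.0, 0.2, 1.0
roots = np.roots([m, r, k])       # roots of m*l^2 + r*l + k = 0
mu, nu = roots[0].real, abs(roots[0].imag)
assert mu < 0 and nu > 0          # complex pair: decaying oscillation
assert np.isclose(mu, -r/(2*m))
assert np.isclose(nu, np.sqrt(k/m - (r/(2*m))**2))
```

With $\mu < 0$ and $\nu > 0$, the solution $e^{\mu t}(C_1\cos(\nu t) + C_2\sin(\nu t))$ oscillates with period $2\pi/\nu$ while dying out.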
where $C_1$ and $C_2$ are arbitrary constants. This is the overdamped case. The
resistance is so great that the object does not oscillate (imagine everything
immersed in molasses). It will just gradually return to the equilibrium position.
Figure 6.2c shows qualitatively what this solution looks like.
Consider Equation (6.2) again. We now consider special cases of $F(t)$, where

$$F(t) = \begin{cases} F_0\cos(\omega t), \\ F_0\sin(\omega t). \end{cases}$$

$$x_p = B\cos(\omega t) + C\sin(\omega t).$$
$$x_p = \frac{F_0}{\left(k - m\omega^2\right)^2 + (r\omega)^2}\left[\left(k - m\omega^2\right)\cos(\omega t) + r\omega\sin(\omega t)\right] = A\cos(\omega t - \varphi),$$

where

$$A = \frac{F_0}{\sqrt{\left(k - m\omega^2\right)^2 + (r\omega)^2}}, \qquad
\cos(\varphi) = \frac{k - m\omega^2}{\sqrt{\left(k - m\omega^2\right)^2 + (r\omega)^2}}, \qquad
\sin(\varphi) = \frac{r\omega}{\sqrt{\left(k - m\omega^2\right)^2 + (r\omega)^2}}.$$

In any solution

$$e^{\mu t}(\star) + A\cos(\omega t - \varphi),$$

the first term eventually dies out (as $\mu < 0$), leaving $A\cos(\omega t - \varphi)$ as the steady state
solution.
Here we consider a phenomenon known as resonance. The above assumes
that $\cos(\omega t)$ and $\sin(\omega t)$ are not part of the solution to the homogeneous equation.
Consider now

$$m\frac{d^2x}{dt^2} + kx = F_0\cos(\omega t),$$

where $\omega = \sqrt{k/m}$ (the external force frequency coincides with the natural
frequency). Then the solution is

$$x_p = \frac{F_0}{2\sqrt{km}}\,t\sin(\omega t).$$
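The resonant particular solution can be verified symbolically; the following sympy check is my own illustration, not part of the text:

```python
import sympy as sp

t, m, k, F0 = sp.symbols('t m k F0', positive=True)
w = sp.sqrt(k/m)                           # natural frequency
xp = F0/(2*sp.sqrt(k*m)) * t * sp.sin(w*t)
residual = m*xp.diff(t, 2) + k*xp - F0*sp.cos(w*t)
assert sp.simplify(residual) == 0          # xp solves the resonant equation
```

Note that the amplitude of $x_p$ grows linearly in $t$, which is the hallmark of resonance.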
$$\begin{aligned}
x &= \frac{F_0}{m\left(\nu^2 - \omega^2\right)}\left(\cos(\omega t) - \cos(\nu t)\right) \\
  &= \frac{2F_0}{m(\nu + \omega)(\nu - \omega)}\sin\!\left(\frac{(\nu - \omega)t}{2}\right)\sin\!\left(\frac{(\nu + \omega)t}{2}\right) \\
  &\approx \frac{F_0}{2m\omega\varepsilon}\sin(\varepsilon t)\sin(\omega t),
\end{aligned}$$

where $\varepsilon = (\nu - \omega)/2$.
[Figure: a series circuit with a battery, switch, resistor, inductor, and capacitor.]

The capacitor produces a voltage drop of $Q/C$. Unless $R$ is too large, the capacitor
will create sine and cosine solutions and, thus, an alternating flow of current.
Kirchhoff's Law states that the sum of the voltage changes around a circuit is
zero, so

$$-E(t) + RI + L\frac{dI}{dt} + \frac{Q}{C} = 0,$$

so we have

$$L\frac{d^2Q}{dt^2} + R\frac{dQ}{dt} + \frac{Q}{C} = E(t). \tag{6.6}$$
The solution is the same as in the case of a spring. The analogs are shown in
Table 6.1.
    Charge $Q$                            Displacement $x$
    Inductance $L$                        Mass $m$
    Resistance $R$                        Friction $r$
    Inverse capacitance $1/C$             Spring constant $k$
    Voltage generated by battery $E(t)$   External force $F(t)$
    Current $I$                           Velocity $v$
turn changes the capacitance $C$. This changes the frequency of the solution
of the homogeneous equation. When it agrees with the frequency of an
incoming signal $E(t)$, resonance results, so that the amplitude of the current produced
from this signal is much greater than that of any other.
Chapter 7
In this section we generalize some of the techniques for solving second order
linear equations discussed in Chapter 5 so that they apply to linear equations
of higher order. Recall from Chapter 4 that an nth order linear differential
equation has the form
y = C1 y1 + C2 y2 + + Cn yn + yp ,
Solution. We have

$$p(\lambda) = \lambda^4 - 1 = (\lambda - 1)(\lambda + 1)\left(\lambda^2 + 1\right).$$

Therefore,

$$y_1 = e^x, \quad y_2 = e^{-x}, \quad y_3 = \cos(x), \quad y_4 = \sin(x).$$

Let $y_p = Ae^{2x} + Bxe^x$. Then by the short cut method (5.2.1, p. 64), we have

$$A = \frac{2}{p(2)} = \frac{2}{16 - 1} = \frac{2}{15}, \qquad
B = \frac{3}{p'(1)} = \frac{3}{4\cdot 1^3} = \frac{3}{4}.$$

$$y = C_1e^x + C_2e^{-x} + C_3\cos(x) + C_4\sin(x) + \frac{2}{15}e^{2x} + \frac{3}{4}xe^x.$$
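The particular solution above is easy to check mechanically; this sympy verification is my own addition:

```python
import sympy as sp

x = sp.symbols('x')
# Check that yp solves y'''' - y = 2e^{2x} + 3e^x
yp = sp.Rational(2, 15)*sp.exp(2*x) + sp.Rational(3, 4)*x*sp.exp(x)
residual = yp.diff(x, 4) - yp - (2*sp.exp(2*x) + 3*sp.exp(x))
assert sp.simplify(residual) == 0
```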
$$y = v_1y_1 + v_2y_2 + \cdots + v_ny_n,$$
$$\vdots$$
$$y^{(n-1)} = \sum_{i=1}^n v_iy_i^{(n-1)} + \sum_{i=1}^n v_i'y_i^{(n-2)} = \sum_{i=1}^n v_iy_i^{(n-1)},$$
$$y^{(n)} = \sum_{i=1}^n v_iy_i^{(n)} + \sum_{i=1}^n v_i'y_i^{(n-1)}.$$
Therefore,

$$\begin{aligned}
y^{(n)} + P_1(x)y^{(n-1)} + \cdots + P_n(x)y
&= \sum_{i=1}^n v_iy_i^{(n)} + \sum_{i=1}^n v_i'y_i^{(n-1)}
 + P_1(x)\sum_{i=1}^n v_iy_i^{(n-1)} + \cdots + P_n(x)\sum_{i=1}^n v_iy_i \\
&= \sum_{i=1}^n v_i'y_i^{(n-1)}
 + \sum_{i=1}^n v_i\left(y_i^{(n)} + P_1(x)y_i^{(n-1)} + \cdots + P_n(x)y_i\right) \\
&= \sum_{i=1}^n v_i'y_i^{(n-1)} + \sum_{i=1}^n v_i\cdot 0 \\
&= \sum_{i=1}^n v_i'y_i^{(n-1)}.
\end{aligned}$$
$$\mathbf{M}\begin{bmatrix} v_1' \\ v_2' \\ \vdots \\ v_n' \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ \vdots \\ R(x) \end{bmatrix},$$

where

$$\mathbf{M} = \begin{bmatrix}
y_1 & \cdots & y_n \\
y_1' & \cdots & y_n' \\
\vdots & \ddots & \vdots \\
y_1^{(n-1)} & \cdots & y_n^{(n-1)}
\end{bmatrix}.$$

Therefore, $|\mathbf{M}| = W \ne 0$.
$$\frac{dy}{dx} = \frac{dy}{du}\frac{du}{dx} = \frac{dy}{du}\frac{1}{x},$$

$$\frac{d^2y}{dx^2} = \left(\frac{d^2y}{du^2} - \frac{dy}{du}\right)\frac{1}{x^2},$$

$$\frac{d^3y}{dx^3} = \left(\frac{d^3y}{du^3} - 3\frac{d^2y}{du^2} + 2\frac{dy}{du}\right)\frac{1}{x^3},$$
$$\vdots$$
$$\frac{d^ny}{dx^n} = \left(\frac{d^ny}{du^n} + C_1\frac{d^{n-1}y}{du^{n-1}} + \cdots + C_n\frac{dy}{du}\right)\frac{1}{x^n}$$

for some constants $C_1, \ldots, C_n$, so the equation becomes

$$\frac{d^ny}{du^n} + b_1\frac{d^{n-1}y}{du^{n-1}} + \cdots + b_{n-1}\frac{dy}{du} + b_ny = R(e^u)$$
for some constants b1 , b2 , . . . , bn .
$$\frac{dy}{dx} = \frac{dy}{du}\frac{du}{dx} = \frac{dy}{du}\frac{1}{x},$$

$$\frac{d^2y}{dx^2} = \left(\frac{d^2y}{du^2} - \frac{dy}{du}\right)\frac{1}{x^2}.$$

Therefore,

$$\frac{d^2y}{du^2} - \frac{dy}{du} + \frac{dy}{du} - y = e^{3u},$$

$$\frac{d^2y}{du^2} - y = e^{3u}.$$

$$y = C_1e^u + C_2e^{-u} + \frac{1}{8}e^{3u}
= C_1e^{\ln(x)} + C_2e^{-\ln(x)} + \frac{1}{8}e^{3\ln(x)}
= C_1x + \frac{C_2}{x} + \frac{x^3}{8},$$

where $C_1$ and $C_2$ are arbitrary constants.
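The Euler-equation solution above can be verified directly in the original variable; a quick sympy check of my own:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C1, C2 = sp.symbols('C1 C2')
# General solution of x^2 y'' + x y' - y = x^3
y = C1*x + C2/x + x**3/8
residual = x**2*y.diff(x, 2) + x*y.diff(x) - y - x**3
assert sp.simplify(residual) == 0
```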
$$\frac{d^2y}{du^2} + p\frac{dy}{du} + qy = 0$$
involves looking for solutions of the form $y = e^{\lambda u} = (e^u)^\lambda$ and finding $\lambda$. Since
$x = e^u$, we could directly look for solutions of the form $x^\lambda$ and find the right $\lambda$.
With this in mind and considering Example 7.2 again, we have

$$y = x^\lambda, \qquad y' = \lambda x^{\lambda-1}, \qquad y'' = \lambda(\lambda-1)x^{\lambda-2}.$$

Therefore,

$$x^2y'' + xy' - y = 0 \implies \lambda(\lambda-1)x^\lambda + \lambda x^\lambda - x^\lambda = 0
\implies \lambda(\lambda-1) + \lambda - 1 = 0
\implies \lambda^2 - 1 = 0
\implies \lambda = \pm 1,$$
as before.
Chapter 8
8.1 Introduction
Consider the equation
$$y^{(n)} + P_1(x)y^{(n-1)} + \cdots + P_n(x)y = 0,$$
we know how to solve Equation (8.1), but so far there is no method of finding
y1 , . . . , yn unless P1 , . . . , Pn are constants. In general, there is no method of
finding y1 , . . . , yn exactly, but we can find the power series approximation.
Solution. Let

$$y = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n + \cdots = \sum_{n=0}^\infty a_nx^n.$$
$$\underbrace{\sum_{n=0}^\infty (n+1)(n+2)a_{n+2}x^n}_{y''}
- x\underbrace{\sum_{n=0}^\infty na_nx^{n-1}}_{y'}
- 2\underbrace{\sum_{n=0}^\infty a_nx^n}_{y} = 0,$$

$$\sum_{n=0}^\infty \left[(n+1)(n+2)a_{n+2} - na_n - 2a_n\right]x^n = 0.$$
Therefore,

$$\begin{aligned}
y &= a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n + \cdots \\
  &= a_0 + a_1x + a_0x^2 + \frac{a_1}{2}x^3 + \cdots
     + \frac{a_0}{\prod_{k=1}^n (2k-1)}x^{2n} + \frac{a_1}{\prod_{k=1}^n 2k}x^{2n+1} + \cdots \\
  &= a_0\left(1 + x^2 + \cdots + \frac{x^{2n}}{\prod_{k=1}^n (2k-1)} + \cdots\right)
   + a_1\left(x + \frac{x^3}{2} + \cdots + \frac{x^{2n+1}}{\prod_{k=1}^n 2k} + \cdots\right) \\
  &= a_0\sum_{n=0}^\infty \frac{x^{2n}}{\prod_{k=1}^n (2k-1)} + a_1\sum_{n=0}^\infty \frac{x^{2n+1}}{2^nn!}.
\end{aligned}$$
Having one solution in closed form, namely $y_2 = xe^{x^2/2}$, we can try to find another
(see 5.1, p. 57) by

$$y_1 = y_2\int\frac{e^{-\int P(x)\,dx}}{y_2^2}\,dx
= xe^{x^2/2}\int\frac{e^{x^2/2}}{x^2e^{x^2}}\,dx
= xe^{x^2/2}\int\frac{e^{-x^2/2}}{x^2}\,dx,$$
Theorem 8.2
Consider the power series $\sum_{n=0}^\infty a_n(z-p)^n$.

Using (2) of Theorem 8.2, $f'(z)$ exists for $|z-p| < R$ with $f'(z) = \sum_{n=1}^\infty na_n(z-p)^{n-1}$.
Similarly,

$$\begin{aligned}
f(z) &= \sum_{n=0}^\infty a_n(z-p)^n, & z = p &\implies a_0 = f(p), \\
f'(z) &= \sum_{n=1}^\infty na_n(z-p)^{n-1}, & z = p &\implies a_1 = f'(p), \\
f''(z) &= \sum_{n=2}^\infty n(n-1)a_n(z-p)^{n-2}, & z = p &\implies a_2 = \frac{f''(p)}{2}, \\
&\ \ \vdots \\
f^{(n)}(z) &= n!\,a_n + \cdots, & z = p &\implies a_n = \frac{f^{(n)}(p)}{n!}.
\end{aligned}$$
point or a singularity of the de. When looking at a de, do not forget to write
it in the standard form. For example,
$$x^2y'' + 5y' + xy = 0$$
Theorem 8.5
Let x = p be an ordinary point of y 00 + P (x)y 0 + Q(x)y = 0. Let R be the
distance from p to the closest singular point of the de in the complex plane.
Then the de has two series $y_1(x) = \sum_{n=0}^\infty a_n(x-p)^n$ and $y_2(x) = \sum_{n=0}^\infty b_n(x-p)^n$
which converge to linearly independent solutions to the de on the interval
$|x - p| < R$.
Remark. The content of the theorem is that the solutions are analytic within
the specified interval. The fact that solutions exist on domains containing at
least this interval is a consequence of theorems in Chapter 10.
have

$$y'' + \frac{3x}{x^2+1}\,y' + \frac{2}{x^2+1}\,y = 0.$$

The singularities are $\pm i$. If $p = 0$, then $r = |i - 0| = 1$. Therefore, analytic
solutions about $x = 0$ exist for $|x| < 1$. If $p = 1$, then $r = |i - 1| = \sqrt{2}$. Therefore,
there exist two linearly independent series of the form $\sum_{n=0}^\infty a_n(x-1)^n$
converging to the solutions for at least $|x - 1| < \sqrt{2}$.
Note that the existence and uniqueness theorem (see Chapter 10) guarantees
solutions from $-\infty$ to $\infty$ (since $x^2 + 1$ has no real zeros), but only in a finite
subinterval can we guarantee that the solutions are expressible as convergent
power series.
1. find the coefficients an for a few small values of n; (For example, find the
coefficients as far as a5 which thereby determines the 5th degree Taylor
approximation to y(x));
2. find a recursion formula which for any n gives an+2 in terms of an1
and an ;
4. recognize the resulting series to get a closed form formula for y(x).
In general, level (1) can always be achieved. If L(x), M (x), and N (x)
are polynomials, then level (2) can be achieved. Having achieved level (2), it
is sometimes possible to get to level (3) but not always. Cases like the one
in Example 8.1 where one can achieve level (4) are rare and usually beyond
expectation.
and suppose that $y = \sum_{n=0}^\infty a_n(x-c)^n$ solves the equation. Then

$$y' = \sum_{n=1}^\infty na_n(x-c)^{n-1}, \qquad
y'' = \sum_{n=2}^\infty n(n-1)a_n(x-c)^{n-2},$$
$$\underbrace{\sum_{n=2}^\infty n(n-1)a_n(x-c)^{n-2}}_{y''}
+ \underbrace{\left(\sum_{n=0}^\infty p_n(x-c)^n\right)}_{P(x)}\underbrace{\left(\sum_{n=1}^\infty na_n(x-c)^{n-1}\right)}_{y'}
+ \underbrace{\left(\sum_{n=0}^\infty q_n(x-c)^n\right)}_{Q(x)}\underbrace{\left(\sum_{n=0}^\infty a_n(x-c)^n\right)}_{y} = 0.$$
Since pn s and qn s are known (as many as desired can be computed from deriva-
tives of P (x) and Q(x)), in theory we can inductively find the an s one after
another.
In the special case where P (x) and Q(x) are polynomials, we can get a
recursion formula by writing an+2 in terms of an and an1 (level 2) and, under
favourable circumstances, we can find the general formula for an (level 3) as in
Example 8.1. But in general, computation gets impossibly messy after a while
and, although a few an s can be found, the general formula cannot.
But if you only want a few $a_n$'s, there is an easier way. We have
$a_n = y^{(n)}(c)/n!$, so
and so on.
Example 8.8. Compute the Taylor expansion about $0$ as far as degree 4 for
the solution of $y'' - e^{5x}y' + xy = 0$ which satisfies $y(0) = 2$ and $y'(0) = 1$.

Substituting gives

$$\begin{aligned}
y''(0) &= e^0y'(0) - 0 = 1, \\
y'''(0) &= e^0y''(0) + 5e^0y'(0) - 0 - y(0) = 1 + 5 - 2 = 4, \\
y^{(4)}(0) &= e^0y'''(0) + 10e^0y''(0) - 0 - 2y'(0) = 4 + 10 - 2 = 12.
\end{aligned}$$

Therefore,

$$y = 2 + x + \frac{x^2}{2!} + \frac{4x^3}{3!} + \frac{12x^4}{4!} + \cdots
= 2 + x + \frac{1}{2}x^2 + \frac{2}{3}x^3 + \frac{1}{2}x^4 + \cdots.$$
Example 8.10. Let $y$ be the solution of $\left(x^2 - 5x + 6\right)y'' - 5y' - 2y = 0$
satisfying $y(0) = 1$ and $y'(0) = 1$. Find the series solution as far as $x^3$
and determine the lower bound on its radius of convergence from the recursion formula.
Solution. We have

$$\begin{aligned}
-2y &= \sum_{n=0}^\infty -2a_nx^n, \\
-5y' &= \sum_{n=1}^\infty -5na_nx^{n-1} = \sum_{n=0}^\infty -5(n+1)a_{n+1}x^n, \\
6y'' &= \sum_{n=2}^\infty 6n(n-1)a_nx^{n-2} = \sum_{n=0}^\infty 6(n+2)(n+1)a_{n+2}x^n, \\
-5xy'' &= \sum_{n=2}^\infty -5n(n-1)a_nx^{n-1} = \sum_{n=0}^\infty -5(n+1)na_{n+1}x^n, \\
x^2y'' &= \sum_{n=0}^\infty n(n-1)a_nx^n.
\end{aligned}$$
Therefore,

$$\sum_{n=0}^\infty \left[6(n+2)(n+1)a_{n+2} - 5(n+1)na_{n+1} - 5(n+1)a_{n+1} + \left(n^2 - n - 2\right)a_n\right]x^n = 0,$$

so it follows that

$$6(n+2)(n+1)a_{n+2} - 5(n+1)^2a_{n+1} + (n+1)(n-2)a_n = 0,$$

$$a_{n+2} = \frac{5}{6}\,\frac{n+1}{n+2}\,a_{n+1} - \frac{1}{6}\,\frac{n-2}{n+2}\,a_n.$$
$$a_2 = \frac{5}{6}\cdot\frac{1}{2}a_1 - \frac{1}{6}\cdot\frac{-2}{2}a_0 = \frac{5}{12} + \frac{2}{12} = \frac{7}{12},$$

$$a_3 = \frac{5}{6}\cdot\frac{2}{3}a_2 - \frac{1}{6}\cdot\frac{-1}{3}a_1
= \frac{5}{9}\cdot\frac{7}{12} + \frac{1}{18} = \frac{35}{108} + \frac{6}{108} = \frac{41}{108}.$$

Therefore,

$$y = 1 + x + \frac{7}{12}x^2 + \frac{41}{108}x^3 + \cdots.$$
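The recursion lends itself to exact machine computation. Here is a small check of the hand-computed coefficients using Python's exact rationals (my own verification, not from the text):

```python
from fractions import Fraction

# Recursion from above: a_{n+2} = (5/6)((n+1)/(n+2)) a_{n+1} - (1/6)((n-2)/(n+2)) a_n,
# with a0 = y(0) = 1 and a1 = y'(0) = 1.
a = [Fraction(1), Fraction(1)]
for n in range(4):
    a.append(Fraction(5, 6) * Fraction(n + 1, n + 2) * a[n + 1]
             - Fraction(1, 6) * Fraction(n - 2, n + 2) * a[n])
print(a[2], a[3])  # 7/12 41/108, matching the hand computation
```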
Notice that although we need to write our equation in standard form (in
this case dividing by x2 5x + 6 to determine that 0 is an ordinary point of
the equation), it is not necessary to use standard form when computing the
recursion relation. Instead we want to use a form in which our coefficients are
polynomials. Although our recursion relation is too messy for us to
find a general formula for $a_n$ (level 3), there are nevertheless some advantages
in computing it. In particular, we can now proceed to use it to find the radius
of convergence.
To determine the radius of convergence, we use the ratio test $R = 1/|L|$,
where $L = \lim_{n\to\infty}(a_{n+1}/a_n)$, provided that the limit exists. It is hard to prove
that the limit exists, but assuming that it does, taking the limit in

$$\frac{a_{n+2}}{a_{n+1}} = \frac{5}{6}\,\frac{n+1}{n+2} - \frac{1}{6}\,\frac{n-2}{n+2}\,\frac{a_n}{a_{n+1}}$$

gives

$$L = \frac{5}{6} - \frac{1}{6}\cdot\frac{1}{L}.$$
Solution. Let $y_1(x)$ be the solution satisfying $y_1(0) = 1$, $y_1'(0) = 0$ and let $y_2(x)$
be the solution satisfying $y_2(0) = 0$, $y_2'(0) = 1$. Then $y_1(x)$ and $y_2(x)$ will be
linearly independent solutions. We already computed the recursion relation in
Example 8.10. For $y_1(x)$, we have $a_0 = y(0) = 1$ and $a_1 = y'(0) = 0$. Thus,

$$a_2 = \frac{5}{6}\cdot\frac{1}{2}a_1 - \frac{1}{6}\cdot\frac{-2}{2}a_0 = \frac{1}{6},$$

$$a_3 = \frac{5}{6}\cdot\frac{2}{3}a_2 - \frac{1}{6}\cdot\frac{-1}{3}a_1 = \frac{5}{54}.$$

Therefore,

$$y_1 = 1 + \frac{x^2}{6} + \frac{5}{54}x^3 + \cdots.$$
For $y_2(x)$, we have $a_0 = 0$ and $a_1 = 1$. Thus,

$$a_2 = \frac{5}{6}\cdot\frac{1}{2}a_1 - \frac{1}{6}\cdot\frac{-2}{2}a_0 = \frac{5}{12},$$

$$a_3 = \frac{5}{6}\cdot\frac{2}{3}a_2 - \frac{1}{6}\cdot\frac{-1}{3}a_1
= \frac{5}{9}\cdot\frac{5}{12} + \frac{1}{18} = \frac{25}{108} + \frac{6}{108} = \frac{31}{108}.$$

Therefore,

$$y_2 = x + \frac{5}{12}x^2 + \frac{31}{108}x^3 + \cdots.$$
With

$$y = 1 + x + \frac{7}{12}x^2 + \frac{41}{108}x^3 + \cdots,$$

looking at the initial values, the solution $y(x)$ of Example 8.10 ought to be
given by $y = y_1 + y_2$, and comparing our answers we see that they are consistent
with this equation.
In the special case where P (x) and Q(x) are polynomials, we can instead
find a recurrence relation. From the recurrence relation there is a chance to get
the formula for an (if so, then there is a chance to actually recognize the series).
Even without the formula, we can determine the radius of convergence from the
recurrence relation (even if P (x) and Q(x) are not polynomials, we could at
least try to get a recurrence relation by expanding them into Taylor series, but
in practice this is not likely to be successful).
$$y' = a_0\lambda x^{\lambda-1} + a_1(\lambda+1)x^\lambda + \cdots + a_n(\lambda+n)x^{\lambda+n-1} + \cdots,$$
$$y'' = a_0\lambda(\lambda-1)x^{\lambda-2} + a_1(\lambda+1)\lambda x^{\lambda-1} + \cdots + a_n(\lambda+n)(\lambda+n-1)x^{\lambda+n-2} + \cdots,$$
$$xP(x) = p_0 + p_1x + \cdots + p_nx^n + \cdots,$$
$$x^2Q(x) = q_0 + q_1x + \cdots + q_nx^n + \cdots.$$

So if $x \ne 0$, then

$$a_0\lambda(\lambda-1)x^{\lambda-2} + a_1(\lambda+1)\lambda x^{\lambda-1} + \cdots + a_n(\lambda+n)(\lambda+n-1)x^{\lambda+n-2} + \cdots$$
$$+\ x^{\lambda-2}\left(a_0\lambda + a_1(\lambda+1)x + \cdots\right)\left(p_0 + p_1x + \cdots + p_nx^n + \cdots\right)$$
$$+\ x^{\lambda-2}\left(a_0 + a_1x + \cdots + a_nx^n + \cdots\right)\left(q_0 + q_1x + \cdots + q_nx^n + \cdots\right) = 0.$$
We have

$$a_0\lambda(\lambda-1) + a_0p_0\lambda + a_0q_0 = 0, \tag{0}$$
$$a_1(\lambda+1)\lambda + a_1(\lambda+1)p_0 + a_0p_1\lambda + a_0q_1 + a_1q_0 = 0, \tag{1}$$
$$\vdots \qquad\qquad \vdots \tag{n}$$

From Equation (0), since $a_0 \ne 0$,

$$\lambda(\lambda-1) + p_0\lambda + q_0 = 0,$$
$$\lambda^2 + (p_0 - 1)\lambda + q_0 = 0. \tag{8.4}$$
$$y'' + P(x)y' + Q(x)y = y'' + xP(x)\frac{y'}{x} + x^2Q(x)\frac{y}{x^2},$$

so we have

$$\begin{aligned}
y'' + xP(x)\frac{y'}{x} + x^2Q(x)\frac{y}{x^2}
&= \sum_{n=0}^\infty a_n(\lambda+n)(\lambda+n-1)x^{\lambda+n-2} \\
&\quad + \left(p_0 + p_1x + p_2x^2 + \cdots\right)\sum_{n=0}^\infty a_n(\lambda+n)x^{\lambda+n-2} \\
&\quad + \left(q_0 + q_1x + q_2x^2 + \cdots\right)\sum_{n=0}^\infty a_nx^{\lambda+n-2}.
\end{aligned}$$
$$(\lambda+n)(\lambda+n-1) + p_0(\lambda+n) + q_0.$$

Suppose that

$$(\lambda+n)(\lambda+n-1) + p_0(\lambda+n) + q_0 = 0.$$

Let $\mu = \lambda + n$. Then

$$\mu(\mu-1) + p_0\mu + q_0 = 0$$

is the indicial equation again, i.e., $\mu$ is the other solution to the indicial equation.
We conclude that if two solutions of the indicial equation do not differ by an
integer, the method of Frobenius provides two solutions. If the two solutions
do differ by an integer, then the larger one will give a solution by this method,
but the smaller one may fail.
Let $F(\lambda) = \lambda(\lambda-1) + p_0\lambda + q_0$. Suppose the solutions to $F(\lambda) = 0$ are $r$
and $s$. This includes the case where the solutions are complex, i.e., in this case, we get a complex
solution $z = x^{a+ib}(c_0 + c_1x + \cdots)$, so $y_1 = \operatorname{Re}(z)$ and $y_2 = \operatorname{Im}(z)$ give two real solutions.
$$L\!\left(x^\lambda w\right) = \sum_{n=0}^\infty c_nx^{\lambda+n-2}.$$

Since $F(s) = 0$, $y_1(x) = x^sw(s, x)$ is a solution to the de. We now need
a second solution. Unfortunately, we cannot simply set $\lambda = r$, since the definition of
$a_n(\lambda)$ breaks down when $\lambda + n$ hits the other root of the indicial equation. Instead, set
and differentiate with respect to $\lambda$. Note that for $g(\lambda, x)$, we have

$$\frac{\partial}{\partial\lambda}(Lg) = L\frac{\partial g}{\partial\lambda},$$

so

$$L\!\left(\left.\frac{\partial g}{\partial\lambda}\right|_{\lambda=s}\right)
= L\!\left(\ln(x)x^sw(s,x) + x^s\left.\frac{\partial w}{\partial\lambda}\right|_{\lambda=s}\right)
= \underbrace{F'(s)}_{k}x^{s-2} + F(s)x^{s-2}\ln(x) = kx^{s-2},$$

since $F(s) = 0$; that is,

$$L\!\left(\ln(x)y_1 + x^s\left.\frac{\partial w}{\partial\lambda}\right|_{\lambda=s}\right) = kx^{s-2}.$$

If $k = 0$, then

$$y_2 = \ln(x)y_1 + x^s\left.\frac{\partial w}{\partial\lambda}\right|_{\lambda=s}$$

is the second solution to our de.
If $k > 0$, consider

$$L\!\left(x^r\sum_{n=0}^\infty a_nx^n\right) = \sum_{n=0}^\infty c_nx^{r+n-2}.$$

Subtracting a suitable multiple $A$ of the logarithmic solution constructed above
cancels the terms $c_nx^{r+n-2}$ with $n \le k$, giving

$$L\!\left(x^r\sum_{n=0}^\infty a_nx^n - A\ln(x)y_1(x)\right) = \sum_{n=k+1}^\infty c_nx^{r+n-2}.$$

The choice of $a_k$ does not matter; we can choose any. Then inductively solve
for $a_{k+1}, a_{k+2}, \ldots$ in terms of the predecessor $a_j$'s to make $c_n = 0$ for $n > k$. This
gives a solution to the de of the form

$$y_2 = x^r\sum_{n=0}^\infty a_nx^n - A\ln(x)y_1(x).$$
where r and s are the two solutions to the indicial equation. We have B = 0,
unless s = r + k for some k {0, 1, 2, . . .}.
$$y' = \sum_{n=0}^\infty (\lambda+n)a_nx^{\lambda+n-1}, \qquad
y'' = \sum_{n=0}^\infty (\lambda+n)(\lambda+n-1)a_nx^{\lambda+n-2}.$$

$$\sum_{n=0}^\infty 4(\lambda+n)(\lambda+n-1)a_nx^{\lambda+n-1}
+ \sum_{n=0}^\infty 2(\lambda+n)a_nx^{\lambda+n-1}
- \sum_{n=0}^\infty a_nx^{\lambda+n} = 0,$$

$$\sum_{n=0}^\infty \left[4(\lambda+n)(\lambda+n-1) + 2(\lambda+n)\right]a_nx^{\lambda+n-1}
- \sum_{n=1}^\infty a_{n-1}x^{\lambda+n-1} = 0.$$

Note that

$$4(\lambda+n)(\lambda+n-1) + 2(\lambda+n) = 2(\lambda+n)(2\lambda+2n-2+1) = 2(\lambda+n)(2\lambda+2n-1).$$

Therefore,

$$2\lambda(2\lambda-1)a_0x^{\lambda-1}
+ \sum_{n=1}^\infty \left[2(\lambda+n)(2\lambda+2n-1)a_n - a_{n-1}\right]x^{\lambda+n-1} = 0,$$

so that $2(\lambda+n)(2\lambda+2n-1)a_n - a_{n-1} = 0$ for $n \ge 1$.
For $\lambda = 0$, we have

$$2n(2n-1)a_n - a_{n-1} = 0 \implies a_n = \frac{1}{2n(2n-1)}a_{n-1}.$$
Thus,

$$a_1 = \frac{1}{2\cdot 1}a_0, \qquad
a_2 = \frac{1}{4\cdot 3}a_1 = \frac{1}{4!}a_0, \qquad \ldots, \qquad
a_n = \frac{1}{2n(2n-1)}a_{n-1} = \cdots = \frac{1}{(2n)!}a_0.$$

Therefore,

$$y = x^0a_0\left(1 + \frac{x}{2!} + \frac{x^2}{4!} + \cdots + \frac{x^n}{(2n)!} + \cdots\right)
= a_0\cosh\!\left(\sqrt{x}\right).$$
Thus,

$$b_1 = \frac{1}{3\cdot 2}b_0, \qquad
b_2 = \frac{1}{5\cdot 4}b_1 = \frac{1}{5!}b_0, \qquad \ldots, \qquad
b_n = \frac{1}{(2n+1)!}b_0.$$
Therefore,

$$y = x^{1/2}b_0\left(1 + \frac{x}{3!} + \frac{x^2}{5!} + \cdots + \frac{x^n}{(2n+1)!} + \cdots\right)
= b_0\left(x^{1/2} + \frac{x^{3/2}}{3!} + \frac{x^{5/2}}{5!} + \cdots + \frac{x^{(2n+1)/2}}{(2n+1)!} + \cdots\right)
= b_0\sinh\!\left(\sqrt{x}\right).$$
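Both closed forms can be confirmed symbolically. The following sympy check (my own addition) verifies that $\cosh(\sqrt{x})$ and $\sinh(\sqrt{x})$ solve $4xy'' + 2y' - y = 0$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
for y in (sp.cosh(sp.sqrt(x)), sp.sinh(sp.sqrt(x))):
    residual = 4*x*y.diff(x, 2) + 2*y.diff(x) - y
    assert sp.simplify(residual) == 0
```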
As mentioned, the above methods work near $a$ by using the above technique
after the substitution $v = x - a$. We can also get solutions near infinity, i.e.,
for large $x$: we substitute $t = 1/x$ and look at $t = 0$. Then

$$\frac{dy}{dx} = \frac{dy}{dt}\frac{dt}{dx} = -\frac{1}{x^2}\frac{dy}{dt} = -t^2\frac{dy}{dt},$$

$$\frac{d^2y}{dx^2} = \frac{2}{x^3}\frac{dy}{dt} - \frac{1}{x^2}\frac{d^2y}{dt^2}\frac{dt}{dx}
= \frac{2}{x^3}\frac{dy}{dt} + \frac{1}{x^4}\frac{d^2y}{dt^2}
= 2t^3\frac{dy}{dt} + t^4\frac{d^2y}{dt^2}.$$

Then

$$\frac{d^2y}{dx^2} + P(x)\frac{dy}{dx} + Q(x)y = 0 \implies
t^4\frac{d^2y}{dt^2} + \left[2t^3 - t^2P\!\left(\frac{1}{t}\right)\right]\frac{dy}{dt} + Q\!\left(\frac{1}{t}\right)y = 0.$$
Solution. Let $t = 1/x$ so that $x = 1/t$. Then our differential equation becomes

$$\left(1 - \frac{1}{t^2}\right)\left(2t^3\frac{dy}{dt} + t^4\frac{d^2y}{dt^2}\right)
- \frac{2}{t}\left(-t^2\frac{dy}{dt}\right) + n(n+1)y = 0,$$

$$\left(t^4 - t^2\right)\frac{d^2y}{dt^2} + \left(2t^3 - 2t\right)\frac{dy}{dt} + 2t\frac{dy}{dt} + n(n+1)y = 0,$$

$$\frac{d^2y}{dt^2} + \frac{2t}{t^2-1}\frac{dy}{dt} + \frac{n(n+1)}{t^4-t^2}\,y = 0.$$
$$y = \frac{1}{x}\sum_{n=0}^\infty \frac{a_n}{x^n}$$
for large x.
Solution. Let

$$y = x^\lambda\sum_{n=0}^\infty a_nx^n = \sum_{n=0}^\infty a_nx^{n+\lambda}.$$

Then

$$y' = \sum_{n=0}^\infty (n+\lambda)a_nx^{n+\lambda-1}, \qquad
y'' = \sum_{n=0}^\infty (n+\lambda)(n+\lambda-1)a_nx^{n+\lambda-2},$$
X
X
xy 0 = an xn+ = an2 xn+2 ,
n=0 n=2
y X X
= an xn+1 = an1 xn+2 .
x n=0 n=1
Note that

$$\underbrace{a_0\lambda(\lambda-1) = 0}_{\text{Indicial equation}}, \tag{0}$$
$$a_1(\lambda+1)\lambda + a_0 = 0, \tag{1}$$
$$\vdots \qquad \vdots$$

Note that Equation (0) implies that $\lambda = 0, 1$. The roots differ by an integer.
The largest always gives a solution. For $\lambda = 1$, Equation (1) implies that
$2a_1 + a_0 = 0$, so $a_1 = -a_0/2$. Equation $(n)$ then becomes

$$a_n(n+1)n + a_{n-1} + a_{n-2} = 0 \implies a_n = -\frac{a_{n-1} + a_{n-2}}{n(n+1)}.$$
Therefore,

$$y_1 = a_0x\left(1 - \frac{1}{2}x - \frac{1}{12}x^2 + \frac{7}{144}x^3 + \cdots\right).$$
$$y = x^0\left(b_0 + b_1x + b_2x^2 + \cdots\right) + C\ln(x)y_1,$$
$$y' = b_1 + 2b_2x + \cdots + \frac{C}{x}y_1 + C\ln(x)y_1',$$
$$y'' = 2b_2 + 6b_3x + \cdots - \frac{C}{x^2}y_1 + \frac{C}{x}y_1' + \frac{C}{x}y_1' + C\ln(x)y_1''.$$
Therefore, our differential equation becomes
y
y 00 + xy 0 + = 0,
x
C 2C 0
2 00
2b2 + 6b3 x + 24b4 x + x2 y1 + x y1 + C ln(x)y1
+ b1 x + 2b2 x2 + + Cy1 + C ln(x)xy10
= 0,
b0 y1
+ + b1 + b2 x + + C ln(x)
x x
2
C 2C 0
2b2 + 6b3 x + 24b4 x + x2 y1 + x y1
b0
+ b1 x + 2b2 x2 + + Cy1 +
+ b1 + b2 x +
x = 0,
+ C ln(x)y 00 + C ln(x)xy 0 + C ln(x) y1
1 1
| {z x}
0
C 1 1 2 7 3
2
2b2 + 6b3 x + 24b4 x + 1 x x + x +
x 2 12 144
2C 1 7
1 x + x2 + x3 +
+
x 4 36
= 0.
1 1 7 4
+ b1 x + 2b2 x + + C x x2 x3 +
2
x +
2 12 144
b0
+ + b1 + b2 x +
x
$$y'' - \frac{y'}{x} + \frac{10e^x}{x^2}\,y = 0.$$
$$y = \sum_{n=0}^\infty a_nx^{\lambda+n}, \qquad
\frac{y}{x^2} = \sum_{n=0}^\infty a_nx^{\lambda+n-2},$$
$$y' = \sum_{n=0}^\infty (\lambda+n)a_nx^{\lambda+n-1}, \qquad
\frac{y'}{x} = \sum_{n=0}^\infty (\lambda+n)a_nx^{\lambda+n-2},$$
$$y'' = \sum_{n=0}^\infty (\lambda+n)(\lambda+n-1)a_nx^{\lambda+n-2}.$$
We now have

$$\lambda(\lambda-1)a_0 - \lambda a_0 + 10a_0 = 0,$$
$$(\lambda+1)\lambda a_1 - (\lambda+1)a_1 + 10(a_0 + a_1) = 0, \quad\text{i.e.,}\quad
F(\lambda+1)a_1 + 10a_0 = 0.$$

Therefore,

$$a_1 = -\frac{10}{F(\lambda+1)}a_0 = -\frac{10}{F(2+3i)}a_0,$$
and

$$F(2+3i) = (2+3i)^2 - 2(2+3i) + 10 = 4 + 12i - 9 - 4 - 6i + 10 = 1 + 6i.$$
Therefore,

$$a_1 = -\frac{10}{1+6i}a_0 = -\frac{10(1-6i)}{37}a_0 = \left(-\frac{10}{37} + \frac{60}{37}i\right)a_0,$$

so

$$\operatorname{Re}(z) = a_0\left[x\cos(3\ln(x))
+ x^2\left(-\frac{10}{37}\cos(3\ln(x)) - \frac{60}{37}\sin(3\ln(x))\right) + \cdots\right].$$
$$\lambda(\lambda-1) + p_0\lambda + q_0 = 0,$$
$$\lambda(\lambda-1) + \lambda = 0,$$
$$\lambda^2 - \lambda + \lambda = 0,$$
$$\lambda^2 = 0.$$
To find $y_1$, we have

$$y = \sum_{n=0}^\infty a_nx^n, \qquad
y' = \sum_{n=1}^\infty na_nx^{n-1} = \sum_{n=0}^\infty (n+1)a_{n+1}x^n, \qquad
y'' = \sum_{n=0}^\infty (n+1)na_{n+1}x^{n-1}.$$
$$xy'' + (1-x)y' - y = 0,$$

$$\sum_{n=0}^\infty \left[(n+1)na_{n+1} + (n+1)a_{n+1} - na_n - a_n\right]x^n = 0.$$

Therefore,

$$a_1 = \frac{a_0}{1}, \quad a_2 = \frac{a_1}{2} = \frac{a_0}{2}, \quad
a_3 = \frac{a_2}{3} = \frac{a_0}{3!}, \quad \ldots, \quad a_n = \frac{a_0}{n!}.$$
So we have

$$y = a_0\sum_{n=0}^\infty \frac{x^n}{n!} = a_0e^x.$$
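That $e^x$ solves the equation is worth a one-line check; this sympy verification is my own illustration:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(x)
# x y'' + (1 - x) y' - y should vanish identically
residual = x*y.diff(x, 2) + (1 - x)*y.diff(x) - y
assert sp.simplify(residual) == 0
```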
Solution. Let $y = \sum_{n=0}^\infty a_nx^{\lambda+n}$. Then

$$y' = \sum_{n=0}^\infty (\lambda+n)a_nx^{\lambda+n-1}, \qquad
y'' = \sum_{n=0}^\infty (\lambda+n)(\lambda+n-1)a_nx^{\lambda+n-2}.$$
Substituting gives

$$2a_1 - 2a_0 = 0 \implies a_1 = a_0,$$
$$8a_2 - 6a_1 + 2a_0 = 0 \implies 8a_2 = 6a_1 - 2a_0 = (6-2)a_0 = 4a_0 \implies a_2 = \frac{1}{2},$$
$$18a_3 - 8a_2 + (0)a_1 + 2a_2 = 0 \implies 18a_3 = 8\cdot\frac{1}{2} - 1 = 3 \implies a_3 = \frac{1}{6},$$
$$32a_4 - 8a_3 - 2a_2 + (0)a_1 + 2a_2 = 0 \implies 32a_4 = 8\cdot\frac{1}{6} - 1 + 1 = \frac{4}{3} \implies a_4 = \frac{1}{24}.$$
If our intuition leads us to guess that perhaps $a_n = 1/n!$, we can use induction
to check that this guess is in fact correct. Explicitly, suppose by induction that
$a_k = 1/k!$ for all $k < n$. Then the recursion relation yields
$$\begin{aligned}
2n^2a_n &= \frac{-n^2 + 7n - 4}{(n-1)!} + \frac{2n-6}{(n-2)!} - \frac{1}{(n-3)!} \\
&= \frac{-n^2 + 7n - 4 + (2n-6)(n-1) - (n-1)(n-2)}{(n-1)!} \\
&= \frac{-n^2 + 7n - 4 + 2n^2 - 8n + 6 - n^2 + 3n - 2}{(n-1)!} \\
&= \frac{2n}{(n-1)!},
\end{aligned}$$

so $a_n = 1/n!$, completing the induction.
This gives the solution $y_1 = \sum_{n=0}^\infty x^n/n! = e^x$. (If we had guessed initially
that this might be the solution, we could easily have checked it by direct substitution
into the differential equation.) Using reduction of order, one then finds
that the solutions are

$$y_1 = e^x, \qquad y_2 = xe^x + 2\ln(x)\,y_1.$$
$$\left(1 - x^2\right)y'' - xy' + p^2y = 0 \tag{8.5}$$
$$a_{n+2} = \frac{n(n-1) + n - p^2}{(n+2)(n+1)}\,a_n = \frac{n^2 - p^2}{(n+2)(n+1)}\,a_n.$$
Therefore,

$$a_0 = 1, \quad a_1 = 0, \quad a_2 = -\frac{p^2}{2}, \quad a_3 = 0,$$
$$a_4 = \frac{4 - p^2}{4\cdot 3}\,a_2 = -\frac{\left(4 - p^2\right)p^2}{4!}, \quad a_5 = 0,$$
$$a_6 = -\frac{\left(16 - p^2\right)\left(4 - p^2\right)p^2}{6!},$$
$$\vdots$$
$$a_{2n} = -\frac{\left(4(n-1)^2 - p^2\right)\left(4(n-2)^2 - p^2\right)\cdots\left(4 - p^2\right)p^2}{(2n)!}\,a_0.$$
Also,

$$a_0 = 0, \quad a_1 = 1, \quad a_2 = 0, \quad a_3 = \frac{1 - p^2}{3\cdot 2}\,a_1,$$
$$a_4 = 0, \quad a_5 = \frac{\left(9 - p^2\right)\left(1 - p^2\right)}{5!}\,a_1,$$
$$\vdots$$
$$a_{2n+1} = \frac{\left((2n-1)^2 - p^2\right)\cdots\left(1 - p^2\right)}{(2n+1)!}\,a_1.$$
$$\left(1 - x^2\right)y'' - 2xy' + p(p+1)y = 0 \tag{8.6}$$

$$a_{n+2} = \frac{n(n-1) + 2n - p(p+1)}{(n+2)(n+1)}\,a_n = \frac{n(n+1) - p(p+1)}{(n+2)(n+1)}\,a_n.$$
Then

$$p_0 = \lim_{x\to 1}\,(x-1)P(x) = \lim_{x\to 1}\frac{x}{x+1} = \frac{1}{2},$$

$$q_0 = \lim_{x\to 1}\,(x-1)^2Q(x) = \lim_{x\to 1}\,(x-1)^2\,\frac{-2}{x^2-1} = 0.$$
$$\lambda(\lambda-1) + p_0\lambda + q_0 = 0,$$
$$\lambda(\lambda-1) + \frac{\lambda}{2} + 0 = 0,$$
$$\lambda^2 - \frac{\lambda}{2} = 0.$$
Therefore, $\lambda = 0$ or $\lambda = 1/2$, so the solutions look like

$$y_1 = \sum_{n=0}^\infty a_n(x-1)^n, \qquad
y_2 = (x-1)^{1/2}\sum_{n=0}^\infty b_n(x-1)^n$$
for x > 1.
$$\lambda(\lambda-1) + p_0\lambda + q_0 = 0,$$
$$\lambda(\lambda-1) + \lambda = 0,$$
$$\lambda^2 = 0.$$
for x > 1.
$$y'' - xy = 0 \tag{8.7}$$

$$xy'' - (1-x)y' + \lambda y = 0 \tag{8.8}$$

$$\lambda(\lambda-1) - \lambda = 0,$$
$$\lambda^2 - \lambda - \lambda = 0,$$
$$\lambda^2 - 2\lambda = 0,$$
$$\lambda(\lambda-2) = 0.$$
for x > 0.
$$x^2y'' + xy' + \left(x^2 - \nu^2\right)y = 0. \tag{8.9}$$

We have

$$p_0 = \lim_{x\to 0}\,x\cdot\frac{x}{x^2} = 1, \qquad
q_0 = \lim_{x\to 0}\,x^2\cdot\frac{x^2 - \nu^2}{x^2} = -\nu^2.$$

The indicial equation is then

$$\lambda(\lambda-1) + \lambda - \nu^2 = 0,$$
$$\lambda^2 - \nu^2 = 0,$$
$$(\lambda+\nu)(\lambda-\nu) = 0.$$
For $y_1$, we have

$$y = \sum_{n=0}^\infty a_nx^{n+1/2}, \qquad
y' = \sum_{n=0}^\infty \left(n + \frac{1}{2}\right)a_nx^{n-1/2}, \qquad
y'' = \sum_{n=0}^\infty \underbrace{\left(n + \frac{1}{2}\right)\left(n - \frac{1}{2}\right)}_{n^2 - 1/4}a_nx^{n-3/2}.$$

$$\sum_{n=0}^\infty \left[\left(n^2 - \frac{1}{4}\right)a_n + \left(n + \frac{1}{2}\right)a_n + a_{n-2} - \frac{1}{4}a_n\right]x^{n+1/2} = 0.$$
Therefore,

$$a_2 = -\frac{a_0}{3\cdot 2}, \quad a_4 = -\frac{a_2}{5\cdot 4} = \frac{a_0}{5!}, \quad
a_6 = -\frac{a_0}{7!}, \quad \ldots, \quad a_{2n} = (-1)^n\frac{a_0}{(2n+1)!}.$$

Therefore,

$$y_1 = \sum_{n=0}^\infty (-1)^n\frac{x^{2n+1/2}}{(2n+1)!}
= \frac{1}{\sqrt{x}}\sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{(2n+1)!}
= \frac{\sin(x)}{\sqrt{x}}.$$
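The closed form $\sin(x)/\sqrt{x}$ can be verified against Bessel's equation with $\nu = 1/2$ directly; a sympy check of my own:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y1 = sp.sin(x)/sp.sqrt(x)
# Bessel's equation (8.9) with nu = 1/2
residual = x**2*y1.diff(x, 2) + x*y1.diff(x) + (x**2 - sp.Rational(1, 4))*y1
assert sp.simplify(residual) == 0
```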
$$\lambda^3 - 1 = 0,$$
$$(\lambda - 1)\left(\lambda^2 + \lambda + 1\right) = 0.$$

Therefore, $\lambda = 1$ or

$$\lambda = \frac{-1 \pm \sqrt{1 - 4}}{2} = -\frac{1}{2} \pm \frac{\sqrt{3}}{2}i.$$
Therefore, the general solution is

$$y = C_1e^x + C_2e^{-x/2}\sin\!\left(\frac{\sqrt{3}}{2}x\right) + C_3e^{-x/2}\cos\!\left(\frac{\sqrt{3}}{2}x\right),$$
Therefore,

$$C_1 + 0\cdot C_2 + C_3 = 1,$$
$$C_1 + \frac{\sqrt{3}}{2}C_2 - \frac{1}{2}C_3 = 0,$$
$$C_1 - \frac{\sqrt{3}}{2}C_2 - \frac{1}{2}C_3 = 0.$$

Solving this system gives $C_1 = 1/3$, $C_2 = 0$, and $C_3 = 2/3$.
Linear Systems
9.1 Preliminaries
Consider

$$\frac{d\mathbf{Y}}{dx} = \mathbf{A}\mathbf{Y} + \mathbf{B},$$

where

$$\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \qquad
\mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}$$

such that

$$\frac{dy_1}{dx} = ay_1 + by_2, \qquad \frac{dy_2}{dx} = cy_1 + dy_2.$$
We will first consider the homogeneous case where B = 0.
Only in the case when the entries of $\mathbf{A}$ are constants is there an algorithmic
method of finding the solution. In the $1 \times 1$ case $\mathbf{A} = a$, we know that
$y = Ce^{ax}$ is a solution to $dy/dx = ay$. We will define a matrix $e^{\mathbf{A}}$ such that the
general solution of $d\mathbf{Y}/dx = \mathbf{A}\mathbf{Y}$ is $\mathbf{Y} = e^{x\mathbf{A}}\mathbf{C}$, where $\mathbf{C} = [C_1, C_2, \ldots, C_n]^t$ is
a vector of arbitrary constants. We define $e^{\mathbf{T}}$ for an arbitrary $n \times n$ matrix $\mathbf{T}$
and apply it with $\mathbf{T} = x\mathbf{A}$. Indeed, we know that
$$e^t = 1 + t + \frac{t^2}{2!} + \frac{t^3}{3!} + \frac{t^4}{4!} + \cdots + \frac{t^n}{n!} + \cdots.$$

Therefore, let us analogously define

$$e^{\mathbf{T}} := \mathbf{I} + \mathbf{T} + \frac{\mathbf{T}^2}{2!} + \frac{\mathbf{T}^3}{3!} + \cdots + \frac{\mathbf{T}^n}{n!} + \cdots. \tag{9.1}$$
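Definition (9.1) can be explored numerically by truncating the series. The helper below is my own sketch, not part of the text:

```python
import numpy as np

def expm_series(T, terms=30):
    """Approximate e^T by the truncated series (9.1): I + T + T^2/2! + ..."""
    result = np.eye(T.shape[0])
    term = np.eye(T.shape[0])
    for n in range(1, terms):
        term = term @ T / n          # term is now T^n / n!
        result = result + term
    return result

# For a diagonal matrix the series reduces to elementwise exp on the diagonal
D = np.diag([1.0, 2.0])
assert np.allclose(expm_series(D), np.diag(np.exp([1.0, 2.0])))
```

This agrees with the observation below that $e^{\mathbf{D}}$ is diagonal with entries $e^{\lambda_i}$.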
where

$$\mathbf{N} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$

that is,

$$y_1 = C_1e^x + C_2xe^x, \qquad y_2 = C_2e^x.$$
So if $\mathbf{Y} = e^{\mathbf{A}x}\mathbf{C}$, then

$$\frac{d\mathbf{Y}}{dx} = \mathbf{A}e^{\mathbf{A}x}\mathbf{C} = \mathbf{A}\mathbf{Y},$$

as required. Therefore, $\mathbf{Y} = e^{\mathbf{A}x}\mathbf{C}$ is indeed the solution.
9.2 Computing $e^{\mathbf{T}}$
If $\mathbf{D}$ is the diagonal matrix with entries $\lambda_1, \ldots, \lambda_n$, then

$$e^{\mathbf{D}} = \begin{bmatrix} e^{\lambda_1} & & \\ & \ddots & \\ & & e^{\lambda_n} \end{bmatrix}.$$
First, we consider the effect of changing bases. Suppose $\mathbf{T}$ is diagonalizable, that
is, there exists a $\mathbf{P}$ such that $\mathbf{D} = \mathbf{P}^{-1}\mathbf{T}\mathbf{P}$ is diagonal. Therefore, $\mathbf{T} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1}$
and

$$\mathbf{T}^2 = \mathbf{P}\mathbf{D}\mathbf{P}^{-1}\mathbf{P}\mathbf{D}\mathbf{P}^{-1} = \mathbf{P}\mathbf{D}^2\mathbf{P}^{-1}.$$
$$e^{\mathbf{T}} = \mathbf{I} + \mathbf{T} + \frac{\mathbf{T}^2}{2!} + \cdots + \frac{\mathbf{T}^n}{n!} + \cdots
= \mathbf{P}\left(\mathbf{I} + \mathbf{D} + \frac{\mathbf{D}^2}{2!} + \cdots + \frac{\mathbf{D}^n}{n!} + \cdots\right)\mathbf{P}^{-1}
= \mathbf{P}e^{\mathbf{D}}\mathbf{P}^{-1}.$$
$$(4-\lambda)(2-\lambda) - 3 = 0,$$
$$8 - 6\lambda + \lambda^2 - 3 = 0,$$
$$\lambda^2 - 6\lambda + 5 = 0,$$
$$(\lambda-5)(\lambda-1) = 0.$$
For $\lambda = 5$, we have

$$\begin{bmatrix} 4-5 & 1 \\ 3 & 2-5 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
\qquad
\begin{bmatrix} -1 & 1 \\ 3 & -3 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
\qquad
\begin{bmatrix} -a + b \\ 3a - 3b \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

So let

$$\mathbf{P} = \begin{bmatrix} 1 & 1 \\ -3 & 1 \end{bmatrix}.$$
$$\mathbf{Y} = e^{\mathbf{A}x}\underbrace{\mathbf{C}}_{\text{constant}}
= e^{\mathbf{P}\mathbf{D}\mathbf{P}^{-1}x}\mathbf{C}
= \mathbf{P}e^{x\mathbf{D}}\mathbf{P}^{-1}\mathbf{C}
= \mathbf{P}\begin{bmatrix} e^{\lambda_1x} & & \\ & \ddots & \\ & & e^{\lambda_nx} \end{bmatrix}\mathbf{P}^{-1}\mathbf{C}
= [\mathbf{v}_1, \ldots, \mathbf{v}_n]\begin{bmatrix} e^{\lambda_1x} & & \\ & \ddots & \\ & & e^{\lambda_nx} \end{bmatrix}\widetilde{\mathbf{C}}
= \widetilde{C}_1e^{\lambda_1x}\mathbf{v}_1 + \widetilde{C}_2e^{\lambda_2x}\mathbf{v}_2 + \cdots + \widetilde{C}_ne^{\lambda_nx}\mathbf{v}_n,$$

where $\widetilde{\mathbf{C}} = \mathbf{P}^{-1}\mathbf{C}$.
Example 9.3. Solve

$$\mathbf{Y}' = \begin{bmatrix} 4 & 1 \\ 3 & 2 \end{bmatrix}\mathbf{Y}.$$
As in the previous example, the characteristic polynomial is
$\lambda^2 - 6\lambda + 5 = (\lambda-5)(\lambda-1)$, so the eigenvalues are $\lambda = 5, 1$, with
eigenvectors $(1, 1)$ and $(1, -3)$ respectively,
and we have

$$\mathbf{Y} = \begin{bmatrix} 1 & 1 \\ -3 & 1 \end{bmatrix}\begin{bmatrix} e^x & 0 \\ 0 & e^{5x} \end{bmatrix}\begin{bmatrix} \widetilde{C}_1 \\ \widetilde{C}_2 \end{bmatrix}
= \begin{bmatrix} \widetilde{C}_1e^x + \widetilde{C}_2e^{5x} \\ -3\widetilde{C}_1e^x + \widetilde{C}_2e^{5x} \end{bmatrix}
= \widetilde{C}_1e^x\begin{bmatrix} 1 \\ -3 \end{bmatrix} + \widetilde{C}_2e^{5x}\begin{bmatrix} 1 \\ 1 \end{bmatrix},$$

that is,

$$y_1 = \widetilde{C}_1e^x + \widetilde{C}_2e^{5x}, \qquad y_2 = -3\widetilde{C}_1e^x + \widetilde{C}_2e^{5x}.$$
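The eigenvalue computation is easy to cross-check numerically; the following numpy snippet is my own verification of the example's data:

```python
import numpy as np

A = np.array([[4.0, 1.0], [3.0, 2.0]])
vals, vecs = np.linalg.eig(A)
order = np.argsort(vals)
assert np.allclose(vals[order], [1.0, 5.0])
# Normalize each eigenvector so its first component is 1:
# lambda = 1 gives (1, -3), lambda = 5 gives (1, 1)
v1 = vecs[:, order[0]] / vecs[0, order[0]]
v5 = vecs[:, order[1]] / vecs[0, order[1]]
assert np.allclose(v1, [1.0, -3.0])
assert np.allclose(v5, [1.0, 1.0])
```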
Property 9.4
1. $e^{\mathbf{P}\mathbf{A}\mathbf{P}^{-1}} = \mathbf{P}e^{\mathbf{A}}\mathbf{P}^{-1}$.

2. If $\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A}$, then $e^{\mathbf{A}+\mathbf{B}} = e^{\mathbf{A}}e^{\mathbf{B}}$.
MATB24.
3. If

$$\mathbf{D} = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix},$$

then

$$e^{\mathbf{D}} = \begin{bmatrix} e^{\lambda_1} & & \\ & \ddots & \\ & & e^{\lambda_n} \end{bmatrix}.$$

4. If $\mathbf{A}^{n+1} = \mathbf{0}$, then

$$e^{\mathbf{A}} = \mathbf{I} + \mathbf{A} + \frac{\mathbf{A}^2}{2!} + \cdots + \frac{\mathbf{A}^n}{n!}.$$
Proof. We have already proved (1), and (3) and (4) are trivial. What remains
is (2). So note that

$$e^{\mathbf{A}}e^{\mathbf{B}}
= \left(\mathbf{I} + \mathbf{A} + \frac{\mathbf{A}^2}{2!} + \cdots\right)\left(\mathbf{I} + \mathbf{B} + \frac{\mathbf{B}^2}{2!} + \cdots\right)
= \mathbf{I} + \mathbf{A} + \mathbf{B} + \frac{1}{2}\left(\mathbf{A}^2 + 2\mathbf{A}\mathbf{B} + \mathbf{B}^2\right) + \cdots.$$

Also,

$$e^{\mathbf{A}+\mathbf{B}}
= \mathbf{I} + \mathbf{A} + \mathbf{B} + \frac{(\mathbf{A}+\mathbf{B})^2}{2!} + \frac{(\mathbf{A}+\mathbf{B})^3}{3!} + \cdots
= \mathbf{I} + \mathbf{A} + \mathbf{B} + \frac{1}{2}\left(\mathbf{A}^2 + \mathbf{A}\mathbf{B} + \mathbf{B}\mathbf{A} + \mathbf{B}^2\right) + \cdots,$$

which is the same as the above if $\mathbf{A}\mathbf{B} = \mathbf{B}\mathbf{A}$.
Theorem 9.5

Given $\mathbf{T}$ over $\mathbb{C}$, there exist matrices $\mathbf{P}$, $\mathbf{D}$, and $\mathbf{N}$ with $\mathbf{D}$ diagonal,
$\mathbf{N}^{k+1} = \mathbf{0}$ for some $k$, and $\mathbf{D}\mathbf{N} = \mathbf{N}\mathbf{D}$ such that $\mathbf{T} = \mathbf{P}(\mathbf{D} + \mathbf{N})\mathbf{P}^{-1}$.
Corollary 9.6

We have $e^{\mathbf{T}} = \mathbf{P}e^{\mathbf{D}}e^{\mathbf{N}}\mathbf{P}^{-1}$ ($e^{\mathbf{D}}$ and $e^{\mathbf{N}}$ can be computed as above).

The steps involved in finding $\mathbf{P}$, $\mathbf{N}$, and $\mathbf{D}$ are essentially the same as those
involved in putting $\mathbf{T}$ into Jordan Canonical form (over $\mathbb{C}$).
Then

$$\mathbf{P}^{-1}\mathbf{T}\mathbf{P} = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix},$$

so

$$e^{\mathbf{T}} = \mathbf{P}\begin{bmatrix} e^{\lambda_1} & 0 \\ 0 & e^{\lambda_2} \end{bmatrix}\mathbf{P}^{-1}.$$
in this case.
Then

$$p(\mathbf{T}) = \mathbf{0},$$
$$(\mathbf{T} - r\mathbf{I})^2 = \mathbf{0},$$
$$\mathbf{S}^2 = \mathbf{0}.$$

Therefore,

$$e^{\mathbf{S}} = \mathbf{I} + \mathbf{S} = \mathbf{I} + \mathbf{T} - r\mathbf{I} = \mathbf{T} + (1-r)\mathbf{I}$$

and

$$e^{\mathbf{T}} = (\mathbf{T} + (1-r)\mathbf{I})\begin{bmatrix} e^r & 0 \\ 0 & e^r \end{bmatrix}.$$
and

$$p(\lambda) = \lambda^2 - 2r\lambda + r^2 + q^2 = (\lambda - r)^2 + q^2.$$

We then have

$$p(\mathbf{T}) = \mathbf{0},$$
$$(\mathbf{T} - r\mathbf{I})^2 + (q\mathbf{I})^2 = \mathbf{0},$$
$$\mathbf{S}^2 + (q\mathbf{I})^2 = \mathbf{0},$$
$$\mathbf{S}^2 + q^2\mathbf{I} = \mathbf{0}.$$
$$\begin{aligned}
e^{\mathbf{S}} &= \mathbf{I} + \mathbf{S} + \frac{\mathbf{S}^2}{2!} + \frac{\mathbf{S}^3}{3!} + \cdots \\
&= \mathbf{I} + \mathbf{S} - \frac{q^2\mathbf{I}}{2!} - \frac{q^2\mathbf{S}}{3!} + \frac{q^4\mathbf{I}}{4!} + \frac{q^4\mathbf{S}}{5!} - \frac{q^6\mathbf{I}}{6!} - \cdots \\
&= \mathbf{I}\left(1 - \frac{q^2}{2!} + \frac{q^4}{4!} - \frac{q^6}{6!} + \cdots\right)
 + \mathbf{S}\left(1 - \frac{q^2}{3!} + \frac{q^4}{5!} - \cdots\right) \\
&= \mathbf{I}\cos(q) + \frac{\mathbf{S}}{q}\sin(q).
\end{aligned}$$

Therefore,

$$e^{\mathbf{T}} = \left(\cos(q)\mathbf{I} + \frac{\mathbf{T} - r\mathbf{I}}{q}\sin(q)\right)\begin{bmatrix} e^r & 0 \\ 0 & e^r \end{bmatrix}.$$
Solution. We have

$$p(\lambda) = 0,$$
$$(3-\lambda)(-9-\lambda) + 36 = 0,$$
$$\lambda^2 + 6\lambda - 27 + 36 = 0,$$
$$\lambda^2 + 6\lambda + 9 = 0,$$
$$(\lambda+3)^2 = 0.$$
Solution. We have

$$p(\lambda) = 0,$$
$$\begin{vmatrix} -1-\lambda & -4 \\ 1 & -1-\lambda \end{vmatrix} = 0,$$
$$(-1-\lambda)(-1-\lambda) + 4 = 0,$$
$$(\lambda+1)^2 + 4 = 0.$$
Therefore, $\lambda = -1 \pm 2i$. For $\mathbf{A}$, the roots are $-1 \pm 2i$, so for $\mathbf{T} = \mathbf{A}x$, the roots are
$(-1 \pm 2i)x$, i.e., $r = -x$ and $q = 2x$. Note that

$$\frac{\mathbf{T} - r\mathbf{I}}{q} = \frac{1}{2x}\begin{bmatrix} 0 & -4x \\ x & 0 \end{bmatrix}
= \begin{bmatrix} 0 & -2 \\ \frac{1}{2} & 0 \end{bmatrix}.$$
Thus,

$$\begin{aligned}
\mathbf{Y} &= e^{\mathbf{A}x}\mathbf{C} \\
&= \left(\cos(2x)\mathbf{I} + \sin(2x)\begin{bmatrix} 0 & -2 \\ \frac{1}{2} & 0 \end{bmatrix}\right)\begin{bmatrix} e^{-x} & 0 \\ 0 & e^{-x} \end{bmatrix}\mathbf{C} \\
&= \begin{bmatrix} \cos(2x) & -2\sin(2x) \\ \frac{1}{2}\sin(2x) & \cos(2x) \end{bmatrix}\begin{bmatrix} e^{-x} & 0 \\ 0 & e^{-x} \end{bmatrix}\mathbf{C} \\
&= \begin{bmatrix} e^{-x}\cos(2x) & -2e^{-x}\sin(2x) \\ \frac{1}{2}e^{-x}\sin(2x) & e^{-x}\cos(2x) \end{bmatrix}\begin{bmatrix} C_1 \\ C_2 \end{bmatrix} \\
&= \begin{bmatrix} C_1e^{-x}\cos(2x) - 2C_2e^{-x}\sin(2x) \\ \frac{1}{2}C_1e^{-x}\sin(2x) + C_2e^{-x}\cos(2x) \end{bmatrix}.
\end{aligned}$$
$$\mathbf{Y}_p = \mathbf{Y}_h\mathbf{V}, \tag{9.2c}$$

where

$$\mathbf{V} = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}.$$
To find $\mathbf{V}$, we have

$$\mathbf{Y}_p' = \underbrace{\mathbf{Y}_h'\mathbf{V}}_{(9.2b)} + \underbrace{\mathbf{Y}_h\mathbf{V}'}_{(9.2a)}.$$

Therefore, $\mathbf{Y}_h\mathbf{V}' = \mathbf{B}$, so $\mathbf{V}' = \mathbf{Y}_h^{-1}\mathbf{B}$.
It follows that

$$\mathbf{V}' = \mathbf{Y}_h^{-1}\mathbf{B}
= \frac{1}{4}\begin{bmatrix} e^{-x} & -e^{-x} \\ 3e^{-5x} & e^{-5x} \end{bmatrix}\begin{bmatrix} e^{5x} \\ e^{2x} \end{bmatrix}
= \frac{1}{4}\begin{bmatrix} e^{4x} - e^x \\ 3 + e^{-3x} \end{bmatrix},$$

$$\mathbf{V} = \frac{1}{4}\begin{bmatrix} \frac{1}{4}e^{4x} - e^x \\ 3x - \frac{1}{3}e^{-3x} \end{bmatrix}$$
and

$$\mathbf{Y}_p = \mathbf{Y}_h\mathbf{V}
= \frac{1}{4}\begin{bmatrix} e^x & e^{5x} \\ -3e^x & e^{5x} \end{bmatrix}\begin{bmatrix} \frac{1}{4}e^{4x} - e^x \\ 3x - \frac{1}{3}e^{-3x} \end{bmatrix}
= \frac{1}{4}\begin{bmatrix} \frac{1}{4}e^{5x} - e^{2x} + 3xe^{5x} - \frac{1}{3}e^{2x} \\ -\frac{3}{4}e^{5x} + 3e^{2x} + 9xe^{5x} - \frac{1}{3}e^{2x} \end{bmatrix}
= e^{5x}\begin{bmatrix} \frac{1}{16} \\ -\frac{3}{16} \end{bmatrix}
+ e^{2x}\begin{bmatrix} -\frac{1}{4} \\ \frac{3}{4} \end{bmatrix}
+ xe^{5x}\begin{bmatrix} \frac{3}{4} \\ \frac{9}{4} \end{bmatrix}
+ e^{2x}\begin{bmatrix} -\frac{1}{12} \\ -\frac{1}{12} \end{bmatrix}.$$
Therefore,

$$\mathbf{Y} = \mathbf{Y}_h\mathbf{C} + \mathbf{Y}_p
= C_1e^x\begin{bmatrix} 1 \\ -3 \end{bmatrix} + C_2e^{5x}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
+ e^{5x}\begin{bmatrix} \frac{1}{16} \\ -\frac{3}{16} \end{bmatrix}
+ e^{2x}\begin{bmatrix} -\frac{1}{4} \\ \frac{3}{4} \end{bmatrix}
+ xe^{5x}\begin{bmatrix} \frac{3}{4} \\ \frac{9}{4} \end{bmatrix}
+ e^{2x}\begin{bmatrix} -\frac{1}{12} \\ -\frac{1}{12} \end{bmatrix},$$
$$\frac{dx}{dt} = \underbrace{F(x, y)}_{y}, \qquad \frac{dy}{dt} = \underbrace{G(x, y)}_{x^2}.$$
We are assuming here that V depends only on the position (x, y) and not also upon time
t.
Figure 9.1: The vector field plot of $(y, x^2)$ and its trajectories.
$$\frac{dx}{dt} = ax + by, \qquad \frac{dy}{dt} = cx + dy,$$

or $\mathbf{X}' = \mathbf{A}\mathbf{X}$, where

$$\mathbf{X} = \begin{bmatrix} x \\ y \end{bmatrix}, \qquad
\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
Recall from Theorem 10.12 (p. 157) that if all the entries of A are continuous,
then for any point (x0 , y0 ), there is a unique solution of X0 = AX satisfying
x(t0 ) = x0 and y(t0 ) = y0 , i.e., there exists a unique solution through each
point; in particular, the solution curves do not cross.
The above case can be solved explicitly, where

$$\mathbf{X} = e^{\mathbf{A}t}\begin{bmatrix} x_0 \\ y_0 \end{bmatrix},$$

where $|\mathbf{A}| \ne 0$.
$$e^{\lambda t}\mathbf{v} = e^{\alpha t}e^{i\beta t}(\mathbf{p} + i\mathbf{q})
= e^{\alpha t}(\cos(\beta t) + i\sin(\beta t))(\mathbf{p} + i\mathbf{q})
= e^{\alpha t}\left(\cos(\beta t)\mathbf{p} - \sin(\beta t)\mathbf{q} + i\cos(\beta t)\mathbf{q} + i\sin(\beta t)\mathbf{p}\right).$$
Therefore,

$$\alpha = 0 \iff \operatorname{tr}(\mathbf{A}) = 0, \qquad
\alpha > 0 \iff \operatorname{tr}(\mathbf{A}) > 0, \qquad
\alpha < 0 \iff \operatorname{tr}(\mathbf{A}) < 0.$$
Consider first $\alpha = 0$. To consider the axes of the ellipses, first note that

$$\mathbf{X} = \underbrace{\begin{bmatrix} p_1 & q_1 \\ p_2 & q_2 \end{bmatrix}}_{\mathbf{P}}
\underbrace{\begin{bmatrix} C_1 & C_2 \\ -C_2 & C_1 \end{bmatrix}}_{\mathbf{C}}
\begin{bmatrix} \cos(\beta t) \\ \sin(\beta t) \end{bmatrix}.$$
Recall that the trace of a matrix is the sum of the elements on the main diagonal.
Except in the degenerate case, where $\mathbf{p}$ and $\mathbf{q}$ are linearly dependent, we have

$$\begin{bmatrix} \cos(\beta t) \\ \sin(\beta t) \end{bmatrix} = \mathbf{C}^{-1}\mathbf{P}^{-1}\mathbf{X}.$$

Therefore,

$$\begin{bmatrix} \cos(\beta t) & \sin(\beta t) \end{bmatrix}\begin{bmatrix} \cos(\beta t) \\ \sin(\beta t) \end{bmatrix}
= \mathbf{X}^t\left(\mathbf{P}^{-1}\right)^t\left(\mathbf{C}^{-1}\right)^t\mathbf{C}^{-1}\mathbf{P}^{-1}\mathbf{X},$$

$$\cos^2(\beta t) + \sin^2(\beta t) = \mathbf{X}^t\left(\mathbf{P}^{-1}\right)^t\left(\mathbf{C}^{-1}\right)^t\mathbf{C}^{-1}\mathbf{P}^{-1}\mathbf{X},$$

$$1 = \mathbf{X}^t\left(\mathbf{P}^{-1}\right)^t\left(\mathbf{C}^{-1}\right)^t\mathbf{C}^{-1}\mathbf{P}^{-1}\mathbf{X}.$$
Note that
$$CC^t = \begin{bmatrix} C_1 & C_2 \\ -C_2 & C_1 \end{bmatrix}\begin{bmatrix} C_1 & -C_2 \\ C_2 & C_1 \end{bmatrix} = \begin{bmatrix} C_1^2 + C_2^2 & 0 \\ 0 & C_1^2 + C_2^2 \end{bmatrix} = \left(C_1^2 + C_2^2\right)I.$$
Therefore,
$$C^{-1} = \frac{C^t}{C_1^2 + C_2^2},$$
$$(C^{-1})^t = (C^t)^{-1} = \frac{C}{C_1^2 + C_2^2},$$
$$(C^{-1})^t C^{-1} = \frac{CC^t}{(C_1^2 + C_2^2)^2} = \frac{(C_1^2 + C_2^2)I}{(C_1^2 + C_2^2)^2} = \frac{I}{C_1^2 + C_2^2}.$$
Therefore,
$$1 = X^t (P^{-1})^t (C^{-1})^t C^{-1} P^{-1} X = \frac{1}{C_1^2 + C_2^2}\,X^t (P^{-1})^t P^{-1} X.$$
Let $T = (P^{-1})^t P^{-1}$. Then $X^t T X = C_1^2 + C_2^2$ and $T = T^t$ (T is symmetric). Therefore, the eigenvectors of T are mutually orthogonal and form the axes of the ellipses. Figure 9.3 shows a stable spiral and an unstable spiral.
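As an illustrative check (this system is ours, not one from the text): for $x' = y$, $y' = -4x$ the eigenvalues are $\pm 2i$ with eigenvector $(1, 2i) = p + iq$, so $P = \begin{bmatrix}1&0\\0&2\end{bmatrix}$ and $T = (P^{-1})^t P^{-1} = \mathrm{diag}(1, \tfrac14)$; the quadratic form $X^t T X$ should then be constant along trajectories:

```python
import numpy as np

# Illustrative center: X' = A X with eigenvalues +/- 2i (alpha = 0)
A = np.array([[0.0, 1.0],
              [-4.0, 0.0]])

# p + iq = (1, 2i)  =>  P = [p q] and T = (P^{-1})^t P^{-1}
P = np.array([[1.0, 0.0],
              [0.0, 2.0]])
Pinv = np.linalg.inv(P)
T = Pinv.T @ Pinv                       # = diag(1, 1/4)

def rk4_step(X, h):
    """One classical Runge-Kutta step for X' = A X."""
    k1 = A @ X
    k2 = A @ (X + h / 2 * k1)
    k3 = A @ (X + h / 2 * k2)
    k4 = A @ (X + h * k3)
    return X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

X = np.array([1.0, 1.0])
q0 = X @ T @ X                          # initial value of the quadratic form
for _ in range(1000):                   # integrate for t in [0, 1]
    X = rk4_step(X, 0.001)
    assert abs(X @ T @ X - q0) < 1e-6   # X^t T X stays constant: an ellipse
```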
Figure 9.3: The cases for $\alpha$: a stable spiral when $\alpha < 0$ and an unstable spiral when $\alpha > 0$.
Therefore,
$$X = e^{At}\begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} e^{\lambda t} & 0 \\ 0 & e^{\lambda t} \end{bmatrix}(I + Nt)\begin{bmatrix} C_1 \\ C_2 \end{bmatrix} = e^{\lambda t}(I + Nt)\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}.$$
Note that $N^2 = 0 \implies |N|^2 = 0 \implies |N| = 0$, so the rows of N are linearly dependent and we may write
$$N = \begin{bmatrix} n_1 v_1 & n_2 v_1 \\ n_1 v_2 & n_2 v_2 \end{bmatrix}.$$
Then
$$\lim_{t\to\infty}\frac{y}{x} = \lim_{t\to\infty}\frac{C_2 + (n_1C_1 + n_2C_2)tv_2}{C_1 + (n_1C_1 + n_2C_2)tv_1} = \frac{v_2}{v_1},$$
so all trajectories approach the direction of $(v_1, v_2)$.
Figure 9.4: The cases for $\lambda$: a stable node when $\lambda < 0$ and an unstable node when $\lambda > 0$.
Figure 9.5: The degenerate cases for $\lambda$ when N = 0: a stable node when $\lambda < 0$ and an unstable node when $\lambda > 0$.
Chapter 10

Existence and Uniqueness Theorems

10.1 Picard's Method
giving us
$$y = y_0 + \int_{x_0}^{x} f(t, y(t))\,dt. \tag{$*$}$$
Conversely, differentiating gives
$$\frac{dy}{dx} = 0 + f(x, y(x)) = f(x, y).$$
For example, consider $y' = -y$ with $y(0) = 1$. Take $y_0(x) \equiv 1$. Then
$$y_1(x) = 1 + \int_0^x (-1)\,dt = 1 - x$$
and
$$y_2(x) = y_0 + \int_{x_0}^{x} f(t, y_1(t))\,dt = 1 + \int_0^x -(1 - t)\,dt = 1 - x + \frac{x^2}{2},$$
and
$$y_3(x) = 1 + \int_0^x -\left(1 - t + \frac{t^2}{2}\right)dt = 1 - x + \frac{x^2}{2} - \frac{x^3}{3!}.$$
So in general, we have
$$y_n(x) = 1 - x + \frac{x^2}{2!} - \cdots + (-1)^n\frac{x^n}{n!}.$$
Therefore, we finally have
$$y(x) = \lim_{n\to\infty} y_n(x) = \sum_{n=0}^{\infty} (-1)^n\frac{x^n}{n!} = e^{-x}.$$
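For this example the iterates can be generated mechanically by integrating polynomial coefficients; a small sketch (the representation as coefficient lists is our choice, not the text's):

```python
from fractions import Fraction
from math import factorial

def picard_step(coeffs):
    """Given y_{n-1} as polynomial coefficients [c0, c1, ...], return
    y_n(x) = 1 + integral_0^x (-y_{n-1}(t)) dt   for y' = -y, y(0) = 1."""
    integrated = [Fraction(0)] + [-c / (k + 1) for k, c in enumerate(coeffs)]
    integrated[0] = Fraction(1)          # the constant term y(0) = 1
    return integrated

y = [Fraction(1)]                        # y_0(x) = 1
for _ in range(6):
    y = picard_step(y)

# y_6 is the 6th Taylor polynomial of e^{-x}: coefficients (-1)^k / k!
assert y == [Fraction((-1) ** k, factorial(k)) for k in range(7)]
```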
Theorem 10.3
If $f : X \to \mathbb{R}$ is continuous, where $X \subseteq \mathbb{R}^n$ is closed and bounded (compact), then $f(X)$ is closed and bounded (compact). In particular, there exists an M such that $|f(x)| \le M$ for all $x \in X$.
Corollary 10.4
Let R be a (closed) rectangle. Let $f(x, y)$ be such that $\partial f/\partial y$ exists and is continuous throughout R. Then there exists an M such that
$$|f(x, y_2) - f(x, y_1)| \le M|y_2 - y_1|$$
whenever $(x, y_1), (x, y_2) \in R$.

Proof. By Theorem 10.3, there exists an M such that
$$\left|\frac{\partial f}{\partial y}(r)\right| \le M$$
for all $r \in R$. By the Mean Value Theorem, there exists a $c \in (y_1, y_2)$ such that
$$f(x, y_2) - f(x, y_1) = \frac{\partial f}{\partial y}(x, c)(y_2 - y_1),$$
and taking absolute values gives the claim.
Theorem 10.5
Let $\{f_n\}$ be a sequence of functions defined on B. Then $\{f_n\}$ converges uniformly to f on B if for every $\varepsilon > 0$ there exists an N such that $|f_n(x) - f(x)| < \varepsilon$ for all $n \ge N$ and all $x \in B$.

The point is that the same N works for all x's. If each x had an N that worked for it but there were no N working for all x's at once, then $f_n$ would converge pointwise, but not uniformly.
Theorem 10.6
Suppose that $\{f_n\}$ is a sequence of functions that converges uniformly to f on B. If $f_n$ is integrable for all n, then f is integrable and
$$\int_B f(x)\,dV = \lim_{n\to\infty}\int_B f_n(x)\,dV.$$
Theorem 10.7
Let $\{f_n\}$ be a sequence of functions defined on B. Suppose that there exists a positive convergent series $\sum_{n=1}^{\infty} a_n$ such that $|f_n(x) - f_{n-1}(x)| \le a_n$ for all n. Then $f(x) = \lim_{n\to\infty} f_n(x)$ exists for all $x \in B$ and $\{f_n\}$ converges uniformly to f.

Proof. Writing $f_n(x) = f_0(x) + \sum_{k=1}^{n}(f_k(x) - f_{k-1}(x))$, the series converges absolutely by comparison with $\sum a_k$. Therefore
$$\lim_{n\to\infty} f_n(x) = \underbrace{f_0(x) + \sum_{k=1}^{\infty}\left(f_k(x) - f_{k-1}(x)\right)}_{f(x)}$$
and
$$|f(x) - f_n(x)| = \left|\sum_{k=n+1}^{\infty}\left(f_k(x) - f_{k-1}(x)\right)\right| \le \sum_{k=n+1}^{\infty} a_k < \varepsilon$$
for n sufficiently large, since $\sum_{n=1}^{\infty} a_n$ converges. Therefore, $f_n$ converges uniformly to f.
10.2 Existence and Uniqueness Theorem for First Order ODEs
Theorem 10.8 (Existence and uniqueness theorem for first order odes )
Let R be a rectangle and let f (x, y) be continuous throughout R and satisfy
the Lipschitz Condition with respect to y throughout R. Let (x0 , y0 ) be an
interior point of R. Then there exists an interval containing x0 on which
there exists a unique function y(x) satisfying y 0 = f (x, y) and y(x0 ) = y0 .
To prove the existence statement in Theorem 10.8, we will need the following
lemma.
Lemma 10.9
Let $I = [x_0 - \delta, x_0 + \delta]$, where $M\delta \le a$, and let $p(x)$ satisfy the following:
1. $p(x)$ is continuous on I.
2. $|p(x) - y_0| \le a$ for all $x \in I$.
Then $q(x) = y_0 + \int_{x_0}^{x} f(t, p(t))\,dt$ also satisfies (1) and (2).

Proof. The map $t \mapsto f(t, p(t))$ is the composition of $t \mapsto (t, p(t))$, which maps I into R, with f, so $f(t, p(t))$ is continuous, hence integrable, i.e., $\int_{x_0}^{x} f(t, p(t))\,dt$ exists for all $x \in I$. Since $q(x)$ is differentiable on I, it is also continuous on I. Also, since $|f| \le M$ on R,
$$|q(x) - y_0| = \left|\int_{x_0}^{x} f(t, p(t))\,dt\right| \le M|x - x_0| \le M\delta \le a,$$
so q satisfies (2).
Inductively define functions satisfying (1) and (2) of Lemma 10.9 by $y_0(x) \equiv y_0$ for all $x \in I$ and
$$y_n(x) = y_0 + \int_{x_0}^{x} f(t, y_{n-1}(t))\,dt.$$
By Lemma 10.9, the existence of $y_{n-1}$ satisfying (1) and (2) guarantees the existence of $y_n$. By hypothesis, there exists an A such that $|f(x, v) - f(x, w)| \le A|v - w|$ whenever $(x, v), (x, w) \in R$. As in the proof of Lemma 10.9, we have
$$|y_1(x) - y_0(x)| \le M|x - x_0|, \qquad |y_2(x) - y_1(x)| \le MA\frac{|x - x_0|^2}{2},$$
and
$$\begin{aligned} |y_3(x) - y_2(x)| &= \left|\int_{x_0}^{x} \left[f(t, y_2(t)) - f(t, y_1(t))\right]dt\right| \\ &\le \left|\int_{x_0}^{x} |f(t, y_2(t)) - f(t, y_1(t))|\,dt\right| \\ &\le \left|\int_{x_0}^{x} A\left(MA\frac{|t - x_0|^2}{2}\right)dt\right| \\ &\le MA^2\frac{|x - x_0|^3}{3!}. \end{aligned}$$
Continuing, we get
$$|y_n(x) - y_{n-1}(x)| \le \frac{MA^{n-1}|x - x_0|^n}{n!}.$$
We have
$$y_n(x) = y_0(x) + (y_1(x) - y_0(x)) + (y_2(x) - y_1(x)) + \cdots + (y_n(x) - y_{n-1}(x)) = y_0(x) + \sum_{k=1}^{n}\left(y_k(x) - y_{k-1}(x)\right).$$
Since $|x - x_0| \le \delta$,
$$|y_k(x) - y_{k-1}(x)| \le \frac{MA^{k-1}\delta^k}{k!}$$
and
$$\sum_{k=1}^{\infty} \frac{MA^{k-1}\delta^k}{k!} = \frac{M}{A}\left(e^{A\delta} - 1\right),$$
converging for all $x \in I$. Therefore, $y(x) = \lim_{n\to\infty} y_n(x)$ exists for all $x \in I$ and $y_n$ converges uniformly to y.
Note that, by the Lipschitz condition, $|f(t, y_n(t)) - f(t, y(t))| \le A|y_n(t) - y(t)|$, so $f(t, y_n(t))$ also converges uniformly to $f(t, y(t))$. By Theorem 10.6, we then have
$$y(x) = \lim_{n\to\infty} y_n(x) = \lim_{n\to\infty}\left(y_0 + \int_{x_0}^{x} f(t, y_{n-1}(t))\,dt\right) = y_0 + \int_{x_0}^{x} \lim_{n\to\infty} f(t, y_{n-1}(t))\,dt = y_0 + \int_{x_0}^{x} f(t, y(t))\,dt,$$
so y solves the initial value problem.
To prove the uniqueness statement in Theorem 10.8, we will need the fol-
lowing lemma.
Lemma 10.10
If $y(x)$ is a solution to the de with $y(x_0) = y_0$, then $|y(x) - y_0| \le a$ for all $x \in I$. In particular, $(x, y(x)) \in R$ for all $x \in I$.

Proof. Suppose there exists an $x \in I$ such that $|y(x) - y_0| > a$. We now consider two cases.

Suppose $x > x_0$. Let $s = \inf(\{t : t > x_0 \text{ and } |y(t) - y_0| > a\})$. So $s < x \le x_0 + \delta \le x_0 + a/M$. By continuity, $|y(s) - y_0| = a$, and as in the proof of Lemma 10.9, $a = |y(s) - y_0| \le M(s - x_0)$. On the other hand,
$$s \in I \implies s - x_0 \le \delta \implies M(s - x_0) \le M\delta \le a.$$
Therefore,
$$M(s - x_0) = M\delta = a \implies s - x_0 = \delta \implies s = x_0 + \delta \ge x,$$
contradicting $s < x$. The case $x < x_0$ is similar.
Proof (Proof of uniqueness statement (Theorem 10.8)). Suppose $y(x)$ and $z(x)$ both satisfy the given de and initial condition. Let $\varphi(x) = (y(x) - z(x))^2$. Then $\varphi(x_0) = 0$, $\varphi \ge 0$, and
$$\varphi'(x) = 2(y - z)(y' - z') = 2(y - z)\big(f(x, y) - f(x, z)\big) \le 2A(y - z)^2 = 2A\varphi(x),$$
so $\left(e^{-2Ax}\varphi(x)\right)' \le 0$ and hence $\varphi(x) \le e^{2A(x - x_0)}\varphi(x_0) = 0$ if $x \ge x_0$. Similarly, $\varphi(x) = 0$ if $x \le x_0$. Therefore $y \equiv z$ on I.

We now consider the linear first order de
$$\frac{dy}{dx} = a(x)y + b(x).$$
Theorem 10.11 (Existence and uniqueness theorem for linear first order des)
Let a(x) and b(x) be continuous throughout an interval I. Let x0 be an
interior point of I and let y0 be arbitrary. Then there exists a unique
function y(x) with y(x0 ) = y0 satisfying dy/dx = a(x)y + b(x) throughout
I.
Then
$$|y_n(x) - y_{n-1}(x)| = \left|\left(y_0 + \int_{x_0}^{x} (a(t)y_{n-1}(t) + b(t))\,dt\right) - \left(y_0 + \int_{x_0}^{x} (a(t)y_{n-2}(t) + b(t))\,dt\right)\right| = \left|\int_{x_0}^{x} a(t)\left(y_{n-1}(t) - y_{n-2}(t)\right)dt\right|.$$
Assuming by induction that $y_n(t)$ is continuous, $a(t)y_n(t) + b(t)$ is continuous throughout I and thus integrable, so the definition of $y_{n+1}(x)$ makes sense. Then $y_{n+1}(x)$ is also differentiable, i.e., $y_{n+1}'(x) = a(x)y_n(x) + b(x)$, and thus continuous, so the induction can proceed.
Assuming inductively that
$$|y_{n-1}(t) - y_{n-2}(t)| \le A^{n-2}\frac{|t - x_0|^{n-1}}{(n-1)!},$$
where A is a bound for $|a(t)|$ on I, we then have
$$\left|\int_{x_0}^{x} a(t)\left(y_{n-1}(t) - y_{n-2}(t)\right)dt\right| \le \left|\int_{x_0}^{x} A\cdot A^{n-2}\frac{|t - x_0|^{n-1}}{(n-1)!}\,dt\right| = A^{n-1}\frac{|x - x_0|^n}{n!},$$
thereby completing the induction. Therefore,
$$|y_n(x) - y_{n-1}(x)| \le \frac{A^{n-1}\ell^n}{n!},$$
where $\ell$ is the width of I. The rest of the proof is as before.
$$\vdots$$
$$\frac{dy_n}{dx} = a_{n1}(x)y_1 + a_{n2}(x)y_2 + \cdots + a_{nn}(x)y_n + b_n(x).$$
We can write the system as
$$\frac{dY}{dx} = AY + B,$$
where
$$A = \begin{bmatrix} a_{11}(x) & \cdots & a_{1n}(x) \\ a_{21}(x) & \cdots & a_{2n}(x) \\ \vdots & \ddots & \vdots \\ a_{n1}(x) & \cdots & a_{nn}(x) \end{bmatrix}, \qquad Y = \begin{bmatrix} y_1(x) \\ y_2(x) \\ \vdots \\ y_n(x) \end{bmatrix}, \qquad B = \begin{bmatrix} b_1(x) \\ b_2(x) \\ \vdots \\ b_n(x) \end{bmatrix}.$$
Theorem 10.12 (Existence and uniqueness theorem for linear systems of n des)
Let A(x) be a matrix of functions, each continuous throughout an interval
I and let B(x) be an n-dimensional vector of functions, each continuous
throughout I. Let x0 be an interior point of I and let Y0 be an arbitrary
n-dimensional vector. Then there exists a unique vector of functions Y(x)
with Y(x0 ) = Y0 satisfying
dY
= A(x)Y + B(x)
dx
throughout I.
The proof parallels the scalar case: define $Y_0(x) \equiv Y_0$ and
$$Y_n(x) = Y_0 + \int_{x_0}^{x} \left(A(t)Y_{n-1}(t) + B(t)\right)dt,$$
integrating componentwise. Then
$$\begin{aligned} |Y_n(x) - Y_{n-1}(x)| &= \left|\int_{x_0}^{x} A(t)\left(Y_{n-1}(t) - Y_{n-2}(t)\right)dt\right| \\ &\le \left|\int_{x_0}^{x} \alpha\cdot\alpha^{n-2}\frac{|t - x_0|^{n-1}}{(n-1)!}\,dt\right| \\ &= \alpha^{n-1}\frac{|x - x_0|^n}{n!} \\ &\le \frac{\alpha^{n-1}\ell^n}{n!}, \end{aligned}$$
where $\alpha$ is a bound for $\|A(t)\|$ on I and $\ell$ is the width of I. The rest of the proof is essentially the same as before.
$$\frac{1}{n!}\left|f^{(n)}(p)\right| \le \frac{M}{r^n} \tag{$**$}$$
for all $n \in \mathbb{N}$. To prove this claim: since
$$\sum_{n=0}^{\infty} \frac{1}{n!} f^{(n)}(p)\,r^n$$
converges, its terms are bounded, i.e., there is an M such that
$$\frac{1}{n!}\left|f^{(n)}(p)\right| r^n \le M,$$
so Equation (**) holds. Therefore, our claim holds.
Note that
$$A(z) = M\left(1 + \frac{z-p}{a} + \left(\frac{z-p}{a}\right)^2 + \cdots + \left(\frac{z-p}{a}\right)^n + \cdots\right),$$
so
$$A(p) = M, \quad A'(p) = \frac{M}{a}, \quad A''(p) = 2!\,\frac{M}{a^2}, \quad \ldots, \quad A^{(n)}(p) = n!\,\frac{M}{a^n} \ge \left|P^{(n)}(p)\right|.$$
Similarly,
$$B^{(n)}(p) = n!\,\frac{N}{a^n} \ge \left|Q^{(n)}(p)\right|.$$
Consider
$$y'' = A(z)y' + B(z)y.$$
Using $|P^{(n)}(p)| \le A^{(n)}(p)$ and $|Q^{(n)}(p)| \le B^{(n)}(p)$ gives $|w^{(n)}(p)| \le v^{(n)}(p)$. The Taylor series of w about p is $\sum_{n=0}^{\infty} a_n(z-p)^n$, where $a_n = w^{(n)}(p)/n!$; the Taylor series of v about p is $\sum_{n=0}^{\infty} b_n(z-p)^n$, where $b_n = v^{(n)}(p)/n!$. Since we showed that $|a_n| \le b_n$, the first converges anywhere the second does. So we need to show that the Taylor series of v has a radius of convergence equal to a (this implies that the Taylor series for w converges for $|z - p| < a$; but $a < R$ was arbitrary). We have
$$v'' = \frac{M}{1 - \frac{z-p}{a}}\,v' + \frac{N}{1 - \frac{z-p}{a}}\,v.$$
Let $u = (z-p)/a$. Then
$$v' = \frac{dv}{dz} = \frac{1}{a}\frac{dv}{du}, \qquad v'' = \frac{1}{a^2}\frac{d^2v}{du^2}.$$
Then we have
$$\frac{1}{a^2}\frac{d^2v}{du^2} = \frac{M}{1-u}\,\frac{1}{a}\frac{dv}{du} + \frac{N}{1-u}\,v,$$
$$(1-u)\frac{d^2v}{du^2} = aM\frac{dv}{du} + a^2Nv.$$
Write $v = \sum_{n=0}^{\infty} \alpha_n u^n$. In terms of z,
$$v = \sum_{n=0}^{\infty} \alpha_n u^n = \sum_{n=0}^{\infty} \frac{\alpha_n}{a^n}(z-p)^n = \sum_{n=0}^{\infty} b_n(z-p)^n,$$
so the radius of convergence of the series in z is a times that of the series in u.
Now we have
$$(1-u)\sum_{n=0}^{\infty}(n+2)(n+1)\alpha_{n+2}u^n = aM\sum_{n=0}^{\infty}(n+1)\alpha_{n+1}u^n + a^2N\sum_{n=0}^{\infty}\alpha_n u^n,$$
where
$$(1-u)\sum_{n=0}^{\infty}(n+2)(n+1)\alpha_{n+2}u^n = \sum_{n=0}^{\infty}(n+2)(n+1)\alpha_{n+2}u^n - \sum_{n=0}^{\infty}(n+2)(n+1)\alpha_{n+2}u^{n+1}$$
and
$$\sum_{n=0}^{\infty}(n+2)(n+1)\alpha_{n+2}u^{n+1} = \sum_{n=0}^{\infty}(n+1)n\,\alpha_{n+1}u^{n}.$$
Therefore,
$$\sum_{n=0}^{\infty}(n+2)(n+1)\alpha_{n+2}u^n = \sum_{n=0}^{\infty}\left[(n+1)(n + aM)\alpha_{n+1} + a^2N\alpha_n\right]u^n,$$
so
$$\frac{\alpha_{n+2}}{\alpha_{n+1}} = \frac{n + aM}{n+2} + \frac{a^2N\alpha_n}{(n+2)(n+1)\alpha_{n+1}},$$
and it follows that
$$\lim_{n\to\infty}\frac{a^2N\alpha_n}{(n+2)(n+1)\alpha_{n+1}} = 0.$$
Therefore,
$$\lim_{n\to\infty}\frac{\alpha_{n+2}}{\alpha_{n+1}} = 1 + 0 = 1.$$
So by the ratio test (Equation (8.2) of Theorem 8.2, p. 88), the radius of convergence of
$$\sum_{n=0}^{\infty}\alpha_n u^n$$
is 1, and hence the radius of convergence of
$$\sum_{n=0}^{\infty} b_n(z-p)^n = \sum_{n=0}^{\infty}\alpha_n\left(\frac{z-p}{a}\right)^n$$
is a.
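The recurrence and the limiting ratio can be observed numerically for illustrative values of a, M, N (the constants and starting coefficients below are ours):

```python
# Recurrence from the majorant equation (1-u)v'' = aM v' + a^2 N v:
#   (n+2)(n+1) alpha_{n+2} = (n+1)(n + aM) alpha_{n+1} + a^2 N alpha_n
a, M, N = 2.0, 3.0, 1.5                  # illustrative constants
alpha = [1.0, 1.0]                       # alpha_0, alpha_1 (any positive start)
for n in range(500):
    nxt = ((n + 1) * (n + a * M) * alpha[-1] + a * a * N * alpha[-2]) \
          / ((n + 2) * (n + 1))
    alpha.append(nxt)

ratio = alpha[-1] / alpha[-2]
assert abs(ratio - 1.0) < 0.05           # ratio of successive coefficients -> 1
```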
The point of Theorem 8.5 is to state that solutions are analytic (we already know solutions exist from earlier theorems). The theorem gives only a lower bound on the interval of convergence; the actual radius of convergence may be larger.
Chapter 11
Numerical Approximations
11.1 Euler's Method

Consider $y' = f(x, y)$, where $y(x_0) = y_0$. The goal is to find $y(x_{\text{end}})$. Figure 11.1 shows this idea. We pick a large N and divide $[x_0, x_{\text{end}}]$ into segments of length $h = (x_{\text{end}} - x_0)/N$, so that $x_1 = x_0 + h$, $x_2 = x_0 + 2h$, ..., $x_N = x_{\text{end}}$.

Figure 11.1: The approximate values $y_1, y_2, \ldots, y_N$ and the actual value of the solution at $x_1 = x_0 + h$, $x_2 = x_0 + 2h$, ..., $x_N = x_{\text{end}}$.
$$y_1 = y_0 + hf(x_0, y_0), \qquad y_1 \approx y(x_1),$$
$$y_2 = y_1 + hf(x_1, y_1), \qquad y_2 \approx y(x_2),$$
$$y_3 = y_2 + hf(x_2, y_2), \qquad y_3 \approx y(x_3),$$
$$\vdots$$
For example, consider $y' = f(x, y) = x - 2y$ with $y(0) = 1$ and step size $h = 0.1$. Then
$$y_1 = y_0 + hf(x_0, y_0) = 1 + 0.1(0 - 2) = 1 - 0.2 = 0.8,$$
$$y_2 = 0.8 + hf(0.1, 0.8) = 0.8 + 0.1(0.1 - 1.6) = 0.8 - 0.15 = 0.65,$$
$$y_3 = 0.65 + hf(0.2, 0.65) = 0.65 + 0.1(0.2 - 1.3) = 0.65 - 0.11 = 0.54.$$
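A minimal sketch of this computation in Python (the helper name `euler` is ours, not the text's):

```python
def euler(f, x0, y0, h, n):
    """n steps of Euler's method: y_{k+1} = y_k + h f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

f = lambda x, y: x - 2 * y              # the example above
y3 = euler(f, 0.0, 1.0, 0.1, 3)         # three steps of size 0.1
print(round(y3, 4))                     # 0.54
```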
To judge the accuracy, solve the equation exactly:
$$y' + 2y = x,$$
$$e^{2x}y' + 2e^{2x}y = xe^{2x},$$
$$ye^{2x} = \int xe^{2x}\,dx = \frac{1}{2}xe^{2x} - \frac{1}{2}\int e^{2x}\,dx = \frac{1}{2}xe^{2x} - \frac{1}{4}e^{2x} + C,$$
$$y = \frac{1}{2}x - \frac{1}{4} + Ce^{-2x}.$$
Note that
$$y(0) = 1 \implies -\frac{1}{4} + C = 1 \implies C = \frac{5}{4},$$
so the solution is
$$y = \frac{1}{2}x - \frac{1}{4} + \frac{5}{4}e^{-2x}.$$
Therefore,
$$y(0.3) = 0.15 - 0.25 + \frac{5}{4}e^{-0.6} \approx -0.1 + 0.68601 = 0.58601,$$
so $y(0.3) \approx 0.58601$ is the (approximated) actual value.
The Taylor second degree polynomial of a function y with step size h is given by
$$y(a + h) = y(a) + hy'(a) + h^2\frac{y''(z)}{2}$$
for some $z \in [a, a + h]$, where $h^2y''(z)/2$ is the local truncation error, i.e., the error in each step. We have
$$\left|h^2\frac{y''(z)}{2}\right| \le \frac{M}{2}h^2$$
if $|y''(z)| \le M$ on $[a, a + h]$. Therefore, dividing h by, say, 3 reduces the local truncation error by a factor of 9. However, errors also accumulate due to $y_{n+1}$ being calculated using the approximation $y_n$ instead of the exact value $y(x_n)$. Reducing h increases N, since $N = (x_{\text{end}} - x_0)/h$, so this effect wipes out part of the local gain from the smaller h. The net effect is that the global error is proportional to h.

To improve on this, write
$$y_{n+1} = y_n + h\,(\star).$$
Instead of taking $(\star) = f(x_n, y_n)$, it is better to use the average of the slopes at the left and right edges of the step,
$$\frac{1}{2}\big(f(x_n, y_n) + f(x_{n+1}, y_{n+1})\big).$$
But we do not know $y_{n+1}$ (that is what we are working out at this stage). Instead, we fill in for $y_{n+1}$ the value given by the original Euler's method, $y_n + hf(x_n, y_n)$, i.e.,
$$(\star) = \frac{1}{2}\Big(f(x_n, y_n) + f\big(x_{n+1},\, y_n + hf(x_n, y_n)\big)\Big),$$
i.e.,
$$y_{n+1} = y_n + \frac{h}{2}\Big[f(x_n, y_n) + f\big(x_{n+1},\, y_n + hf(x_n, y_n)\big)\Big].$$
This improves the local truncation error from $h^2$ to $h^3$.
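A sketch of this improved (Heun) step on the same example $y' = x - 2y$, $y(0) = 1$, compared against the exact value $y(0.3) \approx 0.58601$ from above (the function names are ours):

```python
from math import exp

def improved_euler(f, x0, y0, h, n):
    """Heun's method: average the slopes at both ends, using Euler to predict y_{k+1}."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)       # slope at the predicted right endpoint
        y = y + h / 2 * (k1 + k2)
        x = x + h
    return y

f = lambda x, y: x - 2 * y
exact = lambda x: x / 2 - 0.25 + 1.25 * exp(-2 * x)   # solution with y(0) = 1

approx = improved_euler(f, 0.0, 1.0, 0.1, 3)
print(abs(approx - exact(0.3)))          # noticeably smaller than Euler's error (~0.046)
```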
11.3 Runge-Kutta Methods

Runge-Kutta methods again take
$$y_{n+1} = y_n + h\,(\star).$$
At each step, for $(\star)$, we use some average of the slope values over the interval $[x_n, x_{n+1}]$. The most common choice of intermediate point is $x_n + h/2$. Figure 11.3 illustrates this idea. Set
$$k_{n1} = f(x_n, y_n),$$
$$k_{n2} = f\Big(x_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}h\,k_{n1}\Big),$$
$$k_{n3} = f\Big(x_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}h\,k_{n2}\Big),$$
$$k_{n4} = f\big(x_n + h,\; y_n + h\,k_{n3}\big).$$
Use
$$y_{n+1} = y_n + \frac{h}{6}\left(k_{n1} + 2k_{n2} + 2k_{n3} + k_{n4}\right).$$
This gives a local truncation error proportional to $h^5$.
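The classical fourth-order scheme above, sketched in Python on the running example $y' = x - 2y$, $y(0) = 1$ (the function names are ours):

```python
from math import exp

def rk4(f, x0, y0, h, n):
    """Classical fourth-order Runge-Kutta."""
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x = x + h
    return y

f = lambda x, y: x - 2 * y
exact = lambda x: x / 2 - 0.25 + 1.25 * exp(-2 * x)   # y(0) = 1, from Section 11.1

err = abs(rk4(f, 0.0, 1.0, 0.1, 3) - exact(0.3))
assert err < 1e-4                        # far more accurate than Euler at the same h
```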