Digitized by the Internet Archive
in 2022 with funding from
Kahle/Austin Foundation
https://archive.org/details/ordinarydifferen0000brau
ORDINARY
DIFFERENTIAL EQUATIONS:
A FIRST COURSE
Second Edition
FRED BRAUER
JOHN A. NOHEL
University of Wisconsin
W. A. BENJAMIN, INC.
Menlo Park, California · Reading, Massachusetts
London · Amsterdam · Don Mills, Ontario · Sydney
UNIVERSITY MATHEMATICS SERIES
Consulting Editors
FRED BRAUER
JOHN A. NOHEL
Preface
first edition that this procedure paves the way for a better comprehension
of the basic existence and uniqueness theorem presented without proof but
with numerous examples only at the end of each of these chapters. The
proofs of these theorems are given in Chapter 8.
Chapters 1 and 2 form the core of any course taught from this book.
Students with no previous training in linear algebra, who want a basic
course, should follow Chapter 2 with Chapter 3 (linear differential equa-
tions). Such a basic course would ordinarily omit the two chapters on linear
systems (Chapters 4 and 5), but should include some of the topics in the
later chapters. We suggest that at least parts of Chapters 6 (series solutions)
and 7 (boundary value problems) are particularly appropriate. Students who
are already familiar with the elements of linear algebra should replace Chap-
ter 3 with Chapters 4 and 5. All the main results of Chapter 3 are included
as special cases in Chapters 4 and 5.
We have taught much of the material in this book in a one semester
course at the sophomore-junior level at the University of Wisconsin in
Madison, meeting three times per week. Approximately three weeks were
spent on each of Chapters 1 and 2, and four weeks were spent on Chapter 3.
The remainder of the course was devoted to a selection of material
from Chapters 6 and 7. For more theoretically oriented students, Chapter 8
might well be covered instead. In modern applications of differential equa-
tions, a significant role is played by high-speed computers. For this reason,
we have included an introduction to numerical methods of solutions in
Chapter 9. This may be of use to students interested primarily in engineering
applications. Such students might also profit from the introduction to
Laplace transforms in Chapter 10.
Chapters 1 and 2, together with either Chapter 3 or Chapters 4 and 5,
form the basic introduction on which the remainder of the book depends.
Although physical problems are used throughout for motivation and illus-
tration, the treatment is self-contained and does not depend on any knowl-
edge of physics. The last five chapters are almost completely independent,
except that Bessel functions, covered in Chapter 6, enter in Section 7.6. How-
ever, this section could be omitted if necessary in a study of Chapter 7. The
only places in the last five chapters which depend on Chapters 4 or 5 are the
last sections of Chapters 8 and 9, which could be omitted by students who
have not covered Chapters 4 and 5.
Much of the modern theory of differential equations can be explained
properly and efficiently only with the aid of linear algebra. Only a minimal
knowledge of linear algebra is essential for a proper understanding of Chap-
ters 4 and 5. We believe that emphasis on linear algebra is both important
and consistent with current and future trends in the mathematical training
of engineers and physical scientists. We heartily endorse the idea
of a curriculum which introduces linear algebra and thus makes it possible to
Fig. 0.1. (Diagram: physical reality, via physical approximation and "laws," leads to a mathematical model; techniques yield a prediction, which is compared with physical reality.)
ality which it does not reproduce, and it will always predict events that do
not, in fact, occur. The skill of a scientist lies in knowing how far, and in
what context, to use a particular mathematical model. One simple illustra-
tion may help here. Physicists sometimes speak of light as a wave, and some-
times as a particle. Which is it? The answer is neither, for both are names
of specific mathematical models for light. Both successfully predict ("explain") some of the observed behavior of light, but both predict behavior
that light does not exhibit. The moral is clear: Do not expect to have exactly
one correct mathematical model for any aspect of reality. Newton's "laws
of motion" are not the only "correct" ones (for example, at very high
velocities, Einstein's laws are closer to reality), nor is Hooke's law the only
correct law of elasticity.
Because of the way mathematics is often taught, a student is apt to think
of it as merely a collection of techniques, tricks and skills which some pro-
fessor wishes him to learn. As his experience grows, the person who does
research in other physical sciences will learn that for him mathematics is
something entirely different. He will learn to think of mathematics as a tool
which can explain and help him understand various phenomena in the
physical world. At the same time, it is very important for the prospective
mathematician to become acquainted with those mathematical problems
which are closely connected with applications, as many of these problems
are of considerable mathematical interest in their own right. It is our hope
that this book will serve these various needs.
It is a pleasure to acknowledge the help—direct and indirect—from col-
leagues, teaching assistants, and students, particularly at the University of
Wisconsin; their comments aided us in the preparation of this second edition.
We are particularly grateful to Dr. Steve Davis for the checking of exercises,
to Mrs. Phyllis J. Rickli for the typing of the manuscript, and to W. A. Ben-
jamin for preparation of the book. Naturally, any errors that remain are our
responsibility.
CHAPTER 1
Introduction to
First-Order
Differential Equations
y(t + h) − y(t) = ahy(t)   (1.1)

for any t > 0 and any number h > 0. Dividing (1.1) by h we obtain

[y(t + h) − y(t)]/h = ay(t).

Now letting h → 0 and using the definition of the derivative, we obtain the equation

y'(t) = ay(t).   (1.2)
Observe that this limiting process makes use of the assumption that y(t)
is a continuously varying and differentiable function rather than an integer-
valued function.
The mathematical problem is to find the differentiable function y(t)
satisfying (1.2) for t>0 and such that y(0)=A (this number is the initial size
of the population and is given in advance). The equation (1.2) for the un-
known function y is called a differential equation (of first order) and the con-
dition y(0) = A is called the initial condition. We shall now solve this problem.
Since y(t) represents the size of the population, we expect that y(t) > 0.
If y(t) ≠ 0, we can divide (1.2) by y(t) and obtain

(d/dt) log|y(t)| = a.

Integrating both sides from 0 to t and using y(0) = A, we obtain

log|y(t)| − log|A| = log[y(t)/A] = at,

and therefore

y(t) = Ae^{at}.   (1.3)
Exercise
1. Show by direct substitution that y(t) given by (1.3) satisfies Eq. (1.2) on −∞ < t < ∞
(even when A = 0).
In solving Eq. (1.2) we have assumed that y(t) is never zero. Indeed the
solution (1.3) is never zero. The method of solution cannot be applied if
A=0, because the logarithm of zero is undefined. However, the function
y(t)=0 (which has y’(t)=0) satisfies Eq. (1.2) by inspection. Thus if A =0
the solution is y(t) =0. An interpretation of this result is that if the initial size
of the population is zero, then the population remains zero (i.e. even
mathematically you don’t get something for nothing).
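The solution (1.3) can also be checked numerically. The following sketch (Python, with arbitrary illustrative values of A and a that are not from the text) compares a centered difference quotient for y' with ay at several points, and checks the A = 0 case discussed above:

```python
import math

def y(t, A=3.0, a=0.5):
    # Candidate solution y(t) = A e^{a t} of y'(t) = a y(t), y(0) = A.
    return A * math.exp(a * t)

def check_solution(A=3.0, a=0.5, h=1e-6):
    # Compare a centered difference quotient for y'(t) with a*y(t).
    for t in [0.0, 0.5, 1.0, 2.0]:
        deriv = (y(t + h, A, a) - y(t - h, A, a)) / (2 * h)
        assert abs(deriv - a * y(t, A, a)) < 1e-5
    assert y(0.0, A, a) == A        # initial condition y(0) = A
    assert y(5.0, 0.0, a) == 0.0    # A = 0 gives the zero solution
    return True

assert check_solution()
```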
In Eq. (1.2) we can interpret y'(t)/y(t) as a growth rate. It is, in fact, the
time rate of increase of population per member. Thus in the model repre-
sented by Eq. (1.2) we are saying that the growth rate is constant. This
leads to the biological conclusion that the population continues to increase.
In most real-life situations such an assumption is unrealistic. One might
assume that since the food supply decreases as the population increases,
the growth rate should have the form

y'(t)/y(t) = a − by(t),   (1.4)

where a and b are positive constants. Thus a more realistic model leads
to the mathematical problem of solving the differential equation
(The assumption that the death rate is constant and independent of age is
unrealistic in most situations; it might, however, apply to an insect pop-
ulation being treated by an insecticide.) Under these assumptions the pop-
ulation y(t), where y(0) = A, satisfies the equation

y'(t) = (a − c) y(t),   (1.6)
where a and c are constants. The derivation of this equation is almost
exactly the same as that of Eq. (1.2). Equation (1.6) can be solved exactly
as was Eq. (1.2), provided that a ≠ c. The solution is

y(t) = Ae^{(a−c)t}.   (1.7)

If a = c, (1.6) reduces to y'(t) = 0 and the solution is y(t) = A; note that this
result also follows from (1.7) by putting a=c. From (1.7) we see that if
a>c the population grows exponentially — just as for the solution of
Eq. (1.2). If a<c, the solution y(t) decays exponentially, which means that
the population is dying out. Models such as (1.2) and (1.6) have applications
to other areas, and we shall now discuss some of these.
y'(t)/y(t) = −k  or  y'(t) = −ky(t),

whose solution with initial value y(0) = A is

y(t) = Ae^{−kt}.
The half-life of the substance is defined as the time required for the mass to decrease to
one-half its original value. Thus we wish to find the value of t for which

Ae^{−kt} = A/2.

This is the same as finding the value of t for which e^{kt} = 2. Taking logarithms of both
sides yields

kt = log 2.

Thus the half-life is log 2/k. Note that both the half-life and k depend on the
particular substance.
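In code, the half-life relation log 2/k can be checked directly; the decay constant below is an arbitrary illustrative value, not one from the text:

```python
import math

def half_life(k):
    # Half-life of a substance decaying as y(t) = A e^{-k t}: t = log(2)/k.
    return math.log(2) / k

def mass(t, A, k):
    # Mass remaining at time t from initial mass A.
    return A * math.exp(-k * t)

k = 0.1                      # illustrative decay constant (per day)
T = half_life(k)
assert abs(mass(T, 1.0, k) - 0.5) < 1e-12       # one half-life: half remains
assert abs(mass(2 * T, 1.0, k) - 0.25) < 1e-12  # two half-lives: a quarter
```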
Exercises
2. If the half-life of a radioactive substance is 30 days, how long will it take until 99%
of the substance has decayed?
1.2 First-Order Equations
Having seen how physical problems can give rise to differential equations,
we now begin a systematic study. From the mathematical point of view,
a differential equation is an equation involving a function and some of its
derivatives from which this function is to be determined. Differential
equations involving functions of a single variable are called ordinary dif-
ferential equations, while differential equations involving functions of
several variables are called partial differential equations. We shall be con-
cerned with the former.
Suppose f is a function that is defined and continuous on some region*
Figure 1.1
D in the plane (see Fig. 1.1). Because the independent variable in physical
problems is so often time, we label the axes t and y, rather than x and y. Then
the first-order differential equation associated with the function f is

y' = f(t, y),   (1.8)

where the prime denotes differentiation with respect to t. The equations
derived in Section 1.1 are all of this form. The reason for calling (1.8) a first-
order differential equation is that the highest-order derivative appearing
in (1.8) is the first derivative. To solve (1.8) means to find an interval I on
the t axis (see Fig. 1.2) and a function φ such that:

i) φ(t) and φ'(t) exist for each t in I.
ii) The graph of φ lies in the region D; that is, all points (t, φ(t)), for t in I, lie in D.
iii) For each t in I we have

φ'(t) = f(t, φ(t)).   (1.9)
Figure 1.2
Exercise
Example 2. y' = −y². Here f(t, y) = −y², and we can again take D to be the whole
(t, y) plane. We can check that φ(t) = 1/t is a solution of the equation on either the
interval −∞ < t < 0, or on the interval 0 < t < ∞, but not on an interval such as
−2 < t < 2. To see this, draw a graph of φ on −∞ < t < 0 and 0 < t < ∞, and then
verify that the conditions (i), (ii), and (iii) are satisfied on each of these intervals. We
shall learn in Section 1.3 how to obtain this solution.
Exercises
2. Show that the functions φ(t) = 1/(t − c) are solutions of the same equation y' = −y²
for each choice of the constant c on a suitably chosen interval. Draw graphs of
these solutions for c = 0, ±1, ±2.
3. Consider the differential equation y' = 2/(t² − 1) with f(t, y) = 2/(t² − 1) defined on
each of the domains D₁ = {(t, y) | −∞ < t < −1, |y| < ∞}, D₂ = {(t, y) | −1 < t < 1,
|y| < ∞}, and D₃ = {(t, y) | 1 < t < ∞, |y| < ∞}. Verify that

φ(t) = log |(t − 1)/(t + 1)|

is a solution of the equation on each of these domains.

Example 3. Each function φ(t) = ce^{2t}, with c constant, is a solution of the equation
y' = 2y. Suppose we wish to find a solution for which φ(1) = 5. Imposing this condition
on the family of solutions ce^{2t}, we are forced to require that 5 = ce², so that c = 5e^{−2}.
Therefore φ(t) = 5e^{2(t−1)} is a solution, passing through the point (1, 5).
1.3 Equations with Variables Separable

Example 4. Among the solutions

φ(t) = (1 + ce^t)/(1 − ce^t)

of the differential equation y' = ½(y² − 1), let us find a solution φ which obeys the initial
condition φ(1) = 5. If there is such a solution, it must satisfy

5 = (1 + ce)/(1 − ce),

so that ce = 2/3, that is, c = (2/3)e^{−1}. Therefore

φ(t) = [1 + (2/3)e^{t−1}] / [1 − (2/3)e^{t−1}].
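The solution just found can be checked numerically; the finite-difference test below is our own verification device, not the text's method:

```python
import math

def phi(t):
    # Candidate solution of y' = (1/2)(y^2 - 1) with phi(1) = 5.
    u = (2.0 / 3.0) * math.exp(t - 1.0)
    return (1.0 + u) / (1.0 - u)

def residual(t, h=1e-6):
    # phi'(t) - (1/2)(phi(t)^2 - 1), with phi' estimated by a
    # centered difference; it should be near zero where phi is defined.
    dphi = (phi(t + h) - phi(t - h)) / (2 * h)
    return dphi - 0.5 * (phi(t) ** 2 - 1.0)

assert abs(phi(1.0) - 5.0) < 1e-12
for t in [0.0, 0.5, 0.9, 1.0, 1.2]:
    assert abs(residual(t)) < 1e-4
```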
Exercises
5. Find the solutions of the equation y' = 2y (Example 3 above) that pass through the
point (a) (1, 0), (b) (0, 0), (c) (−1, 2).
6. Find solutions φ of the equation y' = ½(y² − 1) (Example 4 above) that obey the
initial condition (a) φ(0) = 1, (b) φ(0) = 0, (c) φ(2) = 0.
*7. Find an equation satisfied by the coordinates of all points (t₀, y₀) with the
property that a solution of y' = f(t, y) through (t₀, y₀) has a maximum or minimum
at (t₀, y₀). How would you determine whether a maximum or a minimum occurs
at (t₀, y₀)?
The equation

y' = f(t, y)

has variables separable if the function f can be written in the form f(t, y) = g(t)h(y). The equations
* An asterisk preceding an exercise indicates that this exercise is rather more difficult
than the others.
are both of this type, while y' = sin t − 2ty is not. The examples discussed in
Section 1.1 are also of this type. Before explaining the technique in the
general case of variables separable, let us work out one more example.
φ'(t) = [φ(t)]²

on some interval I containing t₀. Assuming that φ(t) ≠ 0 for all t in the interval I and in
particular that y₀ ≠ 0, we would have to have

φ'(t)/[φ(t)]² = 1.

If we integrate this equation from t₀ to t, we obtain

∫_{t₀}^{t} φ'(s)/[φ(s)]² ds = ∫_{t₀}^{t} 1 ds,   (1.12)
where t₀ is given in the initial condition and t is arbitrary. Note that we have used s as
the variable of integration in place of t. We did this in order to avoid confusion
with the upper limit t of integration. To evaluate the integral on the left-hand side of
(1.12), we make the change of variable u = φ(s). Then, by the change-of-variable theorem
for definite integrals, we obtain

∫_{φ(t₀)}^{φ(t)} du/u² = 1/φ(t₀) − 1/φ(t).

The integral on the right-hand side of (1.12) is of course ∫_{t₀}^{t} 1 ds = t − t₀. Therefore the
solution φ must satisfy the equation
1/φ(t₀) − 1/φ(t) = t − t₀.

Since the solution φ is to pass through the point (t₀, y₀), we must have φ(t₀) = y₀.
Hence

φ(t) = y₀ / [1 − y₀(t − t₀)].   (1.13)
Recall that we have assumed y₀ ≠ 0. We have now shown that if φ is a solution of Eq.
(1.11) which is never zero, then φ has the form (1.13). It remains to be shown that the function φ defined by (1.13) actually satisfies the differential equation (1.11) on some
interval.
The function φ defined by (1.13) is well-behaved for values of t for which the
denominator is different from zero. More precisely, φ(t) is differentiable provided
1 − y₀(t − t₀) ≠ 0, and its derivative is given by

φ'(t) = y₀² / [1 − y₀(t − t₀)]².
But

[φ(t)]² = y₀² / [1 − y₀(t − t₀)]²,

and thus φ'(t) = [φ(t)]² for all t for which 1 − y₀(t − t₀) ≠ 0, that is, provided that
1 ≠ y₀(t − t₀). Since y₀ ≠ 0, this means that 1/y₀ ≠ t − t₀, or that t ≠ t₀ + 1/y₀. If y₀ > 0,
then 1/y₀ > 0 and thus the function φ is differentiable on the two intervals
−∞ < t < t₀ + 1/y₀ and t₀ + 1/y₀ < t < ∞. But remember that the problem is to find a
function φ which is a solution of the given differential equation on an interval which
contains t₀. If y₀ > 0, that interval is −∞ < t < t₀ + 1/y₀. Therefore, for y₀ > 0, the
function φ given by (1.13) is a solution on the interval −∞ < t < t₀ + 1/y₀ as shown in
Fig. 1.3. Direct substitution in (1.13) shows that φ also satisfies the initial condition
φ(t₀) = y₀. Thus we have solved the initial value problem (1.11) if y₀ > 0.
Figure 1.3
Exercise
1. Show that if y₀ < 0, the function φ given by (1.13) is a solution of the initial-value
problem (1.11) on the interval t₀ + 1/y₀ < t < ∞, and sketch the graph comparable
to Fig. 1.3.
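The verification carried out above by hand can be repeated numerically. A sketch (the initial point (0, 1) is an illustrative choice of our own):

```python
def phi(t, t0, y0):
    # The solution (1.13): phi(t) = y0 / (1 - y0 (t - t0)) of y' = y^2.
    return y0 / (1.0 - y0 * (t - t0))

def residual(t, t0, y0, h=1e-6):
    # phi'(t) - phi(t)^2, with phi' estimated by a centered difference.
    d = (phi(t + h, t0, y0) - phi(t - h, t0, y0)) / (2 * h)
    return d - phi(t, t0, y0) ** 2

# With (t0, y0) = (0, 1) the solution blows up at t0 + 1/y0 = 1,
# so we test only at points t < 1.
t0, y0 = 0.0, 1.0
assert phi(t0, t0, y0) == y0
for t in [-2.0, -1.0, 0.0, 0.5, 0.9]:
    assert abs(residual(t, t0, y0)) < 1e-4
```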
You might well ask if there are any other solutions of the initial-value
problem (1.11). The answer is no, because, having assumed the existence
of a solution on some interval, we found that there was only one possible
candidate for a solution, namely the one given by (1.13).
In the discussion above, we assumed from the start that y₀ ≠ 0 and
in fact that φ(t) ≠ 0 for all choices of t in the interval I. It is therefore not
surprising that the solution (1.13) never assumes the value zero for any
choice of t. Does this mean that there is no solution of (1.11) if y₀ = 0?
The answer is simple; we have overlooked the fact that φ(t) = 0 is also a
solution of the differential equation y' = y², and that this indeed does
satisfy the initial condition φ(t₀) = 0. Including the zero solution with the
solutions given by (1.13), we have shown by actual construction that for
every initial point (t₀, y₀) there is a solution of the differential equation
through this point. What has happened in this example is quite common;
a "general" method may not succeed in producing all of the solutions of a
differential equation and, for certain exceptional initial conditions, other
methods may have to be used. Note that the function φ as given by (1.13)
is well defined for y₀ = 0. In fact, it happens that the zero solution is ob-
tainable from (1.13) by taking y₀ = 0. For a further discussion of Example 1
from a different point of view, we refer you to Example 3, Section 1.6.
Exercise
2. The graphs of the functions φ defined by (1.13) above are all hyperbolas. Draw the
solution curves y = φ(t) passing through the points (0, ½), (0, 1), (0, 2), (0, −1),
(0, −2). Observe how these curves compare with the "special" solution φ(t) = 0.
Let us apply the same technique to the general case of variables separable.
Suppose we have
y' = g(t) h(y) = f(t, y)   (1.14)

where g is continuous on some interval a < t < b and h is continuous on some
interval c < y < d. The function f(t, y) = g(t)h(y) is continuous on the

Figure 1.4
and therefore

∫_{t₀}^{t} φ'(s)/h(φ(s)) ds = ∫_{t₀}^{t} g(s) ds

for every t on I. Putting u = φ(s) and using the initial condition φ(t₀) = y₀,
we obtain, proceeding as in Example 1,

∫_{y₀}^{φ(t)} du/h(u) = ∫_{t₀}^{t} g(s) ds.   (1.15)
This equation defines the solution φ implicitly. As Example 1 above and Examples 2, 3, and 4 below show, it may be possible to solve the resulting
equation for φ(t), thus obtaining an explicit solution of (1.14).
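Relation (1.15) also suggests a numerical procedure: approximate both integrals by quadrature and solve the resulting equation for φ(t) by bisection. The sketch below is our own illustration, not a method from the text; it assumes h does not vanish between y₀ and the root, and that the caller supplies a bracket [lo, hi] containing φ(t). It is checked against Example 3, y' = 2y through (1, 5), whose explicit solution is 5e^{2(t−1)}.

```python
import math

def integrate(f, a, b, n=2000):
    # Composite Simpson rule for the integral of f from a to b (n even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def solve_separable(g, h, t0, y0, t, lo, hi):
    # Solve the implicit equation (1.15),
    #   integral_{y0}^{y} du/h(u) = integral_{t0}^{t} g(s) ds,
    # for y by bisection on the bracket [lo, hi].
    rhs = integrate(g, t0, t)
    F = lambda y: integrate(lambda u: 1.0 / h(u), y0, y) - rhs
    assert F(lo) * F(hi) <= 0, "bracket does not contain the solution"
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if F(lo) * F(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Check against Example 3: y' = 2y through (1, 5); solution 5 e^{2(t-1)}.
y = solve_separable(lambda t: 2.0, lambda u: u, 1.0, 5.0, 1.5, 1.0, 50.0)
assert abs(y - 5.0 * math.exp(1.0)) < 1e-4
```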
Exercise
*3. a) Look up the statement of the implicit function theorem for solving an equation
of the form F(t, y) = 0 for y in terms of t near a given point (t₀, y₀) for which
F(t₀, y₀) = 0.
b) Apply the implicit function theorem to Eq. (1.15) and determine conditions which
permit you to solve (1.15) for φ(t) near the point (t₀, y₀).
What happens if there are values of t where h(φ(t)) = 0? Let us consider
first the case when h(y₀) = 0. Consider the function ψ(t) = y₀. Then
ψ'(t) = 0 and h(ψ(t)) = h(y₀) = 0, and therefore ψ(t) = y₀ is a solution of
(1.14). In addition, observe that the left-hand side of Eq. (1.15) may not make
sense, because it is an improper integral which may not converge. If it does
not converge, ψ(t) = y₀ is the solution through the point (t₀, y₀). If the in-
tegral does converge there are at least two solutions through the point
(t₀, y₀), namely ψ(t) = y₀ and the solution φ(t) given implicitly by (1.15).
Exercise 32 below shows that one may have infinitely many solutions in
this case. If h vanishes for some other value of y, say y = y₁ ≠ y₀, Eq. (1.15)
still defines a solution φ implicitly on some interval about t = t₀. If this solu-
tion φ never reaches the value y₁, there is no difficulty. If the solution
"reaches" the value y₁, the convergence of the integral on the left-hand side
Example 2.

y' = ty³,   y(0) = 1.

In the notation of (1.14), g(t) = t, h(y) = y³, and h(y₀) = h(1) = 1 ≠ 0. Suppose that
there is a solution φ. Then on some interval containing the point t₀ = 0, φ'(t) = t[φ(t)]³.
If φ(t) ≠ 0 we have φ'(t)/[φ(t)]³ = t, and therefore, using t₀ = 0,

∫_{0}^{t} φ'(s)/[φ(s)]³ ds = ∫_{1}^{φ(t)} du/u³ = ½ [1 − 1/{φ(t)}²].

Also ∫_{0}^{t} s ds = t²/2. Therefore φ(t) is defined implicitly by the equation (which corre-
sponds to (1.15) above)

½ [1 − 1/{φ(t)}²] = t²/2.

In this example this gives {φ(t)}² = (1 − t²)^{−1} and, finally, since φ(0) > 0, we take the
positive square root and obtain φ(t) = (1 − t²)^{−1/2}, (−1 < t < 1). We readily verify that
φ(0) = 1 and that φ(t) = (1 − t²)^{−1/2} satisfies the differential equation y' = ty³ on
−1 < t < 1; note that φ(t) is never zero for −1 < t < 1.
Exercises
4. In the solution in the example above, we found that [φ(t)]² = (1 − t²)^{−1}. Why
could φ not be given by φ(t) = −(1 − t²)^{−1/2}?
5. Can you suggest an initial point (t₀, y₀) for which φ(t) = −(1 − t²)^{−1/2} would have
been the correct choice?
6. What is the rectangle D in this example?
7. Why is the solution obtained only valid for −1 < t < 1?
8. Try this method for the initial point (a, 0) and the same equation.
9. Try this method for the initial point (1, 2) and the same equation.
Example 3. Consider again the differential equation y' = ty³, but this time with an
arbitrary initial point (t₀, y₀), y₀ > 0 (why do we require y₀ > 0?); then, proceeding as
in Example 2, we obtain

∫_{t₀}^{t} φ'(s)/[φ(s)]³ ds = ∫_{t₀}^{t} s ds,  or  ∫_{y₀}^{φ(t)} du/u³ = ∫_{t₀}^{t} s ds,
which yields
1/y₀² − 1/[φ(t)]² = t² − t₀².

If we let 1/y₀² + t₀² = c² (c > 0) and solve for φ(t), this becomes φ(t) = (c² − t²)^{−1/2},
(|t| < c). It is easily verified that this is a solution of the differential equation y' = ty³
on the interval −c < t < c for every choice of the constant c > 0, that is, for every initial
point (t₀, y₀) with y₀ > 0. If y₀ = 0, the corresponding solution is φ(t) = 0, which is not
obtainable from the expression φ(t) = (c² − t²)^{−1/2} for any choice of c.
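Each member of the family φ(t) = (c² − t²)^{−1/2} can be checked numerically, along with the relation c² = 1/y₀² + t₀²; the finite-difference check below is our own device:

```python
def phi(t, c):
    # phi(t) = (c^2 - t^2)^(-1/2), a solution of y' = t y^3 on |t| < c.
    return (c * c - t * t) ** -0.5

def residual(t, c, h=1e-6):
    # phi'(t) - t phi(t)^3, with phi' estimated by a centered difference.
    d = (phi(t + h, c) - phi(t - h, c)) / (2 * h)
    return d - t * phi(t, c) ** 3

for c in [1.0, 2.0, 5.0]:
    for t in [-0.5 * c, 0.0, 0.5 * c]:
        assert abs(residual(t, c)) < 1e-4
    # The relation c^2 = 1/y0^2 + t0^2 recovers c from any initial point.
    t0 = 0.5 * c
    y0 = phi(t0, c)
    assert abs(1.0 / y0 ** 2 + t0 ** 2 - c * c) < 1e-9
```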
Exercises
10. Sketch the solutions found in the above example corresponding to c=1, c=2,
c=5. Be sure to indicate the appropriate intervals.
11. Solve the above example for an arbitrary initial point (t₀, y₀) with y₀ < 0.
φ'(t) / {φ(t)[a − bφ(t)]} = 1.

Integrating both sides from 0 to t, we obtain

∫_{0}^{t} φ'(s) / {φ(s)[a − bφ(s)]} ds = t.

Letting u = φ(s) as in Example 1 and using the initial condition φ(0) = A,

∫_{A}^{φ(t)} du / [u(a − bu)] = t.   (1.18)

To evaluate the integral on the left-hand side, we use the partial fraction decomposition

1/[u(a − bu)] = C/u + D/(a − bu),
where C and D are constants to be determined from the above identity. Since
C/u + D/(a − bu) = [aC − bCu + Du] / [u(a − bu)],

we must require

aC + u(D − bC) = 1

for all u. Therefore aC = 1 and D − bC = 0, or C = 1/a, D = bC = b/a.
Carrying out the integration in (1.18), we obtain

log [ (a − bA)φ(t) / {A[a − bφ(t)]} ] = at.

Then the following sequence of easy steps

(a − bA)φ(t) / {A[a − bφ(t)]} = e^{at},
(a − bA)φ(t) = [a − bφ(t)] Ae^{at},
φ(t)[(a − bA) + bAe^{at}] = aAe^{at},

yields

φ(t) = aAe^{at} / [(a − bA) + bAe^{at}] = aA / [(a − bA)e^{−at} + bA].   (1.19)
The reader can verify by substitution that (1.19) is a solution of (1.16). Note that if b = 0
this solution reduces to φ(t) = Ae^{at}, which agrees with the result obtained for the simpler
model (1.2) in Section 1.1. If b ≠ 0 the solution behaves quite differently. In particular,
lim_{t→∞} φ(t) = a/b. To see this from (1.19), observe that (a − bA)e^{−at} tends to zero as t → ∞.
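Both properties of (1.19), that it satisfies the differential equation y' = y(a − by) and that φ(t) → a/b, can be checked numerically. The values a = 8, b = 2, A = 2 below are the illustrative ones suggested in Exercise 12:

```python
import math

def phi(t, a, b, A):
    # The solution (1.19): phi(t) = aA / ((a - bA) e^{-a t} + bA), phi(0) = A.
    return a * A / ((a - b * A) * math.exp(-a * t) + b * A)

def residual(t, a, b, A, h=1e-6):
    # phi'(t) - phi(t) (a - b phi(t)), phi' by a centered difference.
    d = (phi(t + h, a, b, A) - phi(t - h, a, b, A)) / (2 * h)
    return d - phi(t, a, b, A) * (a - b * phi(t, a, b, A))

a, b, A = 8.0, 2.0, 2.0          # the illustrative values of Exercise 12
assert abs(phi(0.0, a, b, A) - A) < 1e-12
for t in [0.0, 0.25, 0.5, 1.0, 2.0]:
    assert abs(residual(t, a, b, A)) < 1e-3
# As t grows the population approaches the limit a/b = 4.
assert abs(phi(10.0, a, b, A) - a / b) < 1e-9
```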
Exercise
12. Sketch the graph of the solution (1.19) assuming a>0, b>0, A>0, a> Ab. (If you
wish, take a=8, b=2, A=2.) Also sketch the solution if a>0, A>0, b=0.
Exercises
In Exercises 13-27, find all solutions through the given initial point. Decide, if you
can, whether the solution obtained is unique. Be sure to specify the interval in which
the solution is valid.
27. y' = 2t(1 − y²)^{1/2}  (0, 1)
28. By solving the differential equation y' = 1 + y², show that no solution through the
origin exists for −∞ < t < ∞.
29. Given a family of curves f(x, y)=c in the (x, y) plane, a family of curves g(x, y)=c
is said to form a set of orthogonal trajectories of the first family if every intersection
of a curve in the first family with a curve in the second family is at right angles.
Find the orthogonal trajectories of each of the following families of curves:
a) x² + y² = c   b) y = ce^x
c) y² = 4cx   d) 2x² + y² = c²
[Hint (for part a)): Show that, for every curve in the given family, dy/dx = −x/y.
Therefore for every orthogonal trajectory dy/dx = y/x; solve this last differential
equation.]
Separable first-order differential equations also arise in certain prob-
lems involving chemical processes. Typically in such problems one has a
tank of a fixed capacity filled with a solution (say brine) containing a
certain amount of a substance (say salt) thoroughly mixed. A solution
(brine) containing a fixed amount of substance (salt) per unit volume enters
the tank at a fixed rate while at the same time the stirred mixture leaves
at a fixed rate (which may differ from the entering rate). If one denotes
by y(t) the amount of substance (salt) at time t, then y(t) must satisfy
the "equation":

y'(t) = (rate at which substance is being added)
− (rate at which substance is being removed).
Writing this out mathematically leads in certain special cases to a first-
order equation which has variables separable. We now illustrate this type
of problem. For more complicated problems, refer to Exercises 31 and 32,
Section 1.4.
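The balance "equation" above can be sketched in code. The tank volume, flow rate, and initial salt content below are illustrative numbers of our own, not the data of any exercise; with pure water entering, the balance law becomes y' = −(r/V)y, which separation of variables solves as y(t) = y₀e^{−rt/V}:

```python
import math

# Illustrative mixing problem: a tank holds V liters of brine containing
# y0 kg of salt; pure water enters at r liters/min and the well-stirred
# mixture leaves at the same rate, so
#   y'(t) = (rate in) - (rate out) = 0 - (r / V) y(t).
V, r, y0 = 200.0, 4.0, 15.0

def exact(t):
    # Solution of y' = -(r/V) y, y(0) = y0, by separation of variables.
    return y0 * math.exp(-r * t / V)

def euler(t, n=20000):
    # Crude Euler simulation of the same balance law, as a cross-check.
    h = t / n
    y = y0
    for _ in range(n):
        y -= h * (r / V) * y
    return y

assert abs(euler(30.0) - exact(30.0)) < 1e-3
```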
Exercises
30. A tank contains 100 liters of water and 10 kilograms of salt, thoroughly mixed.
Pure water is added at the rate of 5 liters per minute, and the mixture is poured off
at the same rate. How much salt is left in the tank after one hour? Assume
complete and instantaneous mixing.
The following problems illustrate several other important applications
of first-order differential equations.
Exercises
*31. When lying on its side on a smooth horizontal plane, a helical spring is L inches
long (see Fig. 1.5). The spring is a "linear spring" in the sense that the amount it is
compressed by an axial force is proportional to the force applied. The constant of
proportionality, which is called the "spring constant," is K (pounds per inch), and
the total weight of the spring is m pounds.
Figure 1.5
When the spring is stood on end, it shortens owing to its own weight. If x
represents the distance (in inches) measured positively downward from the upper
end of the up-ended spring, and if w(x) is the weight of that portion of the spring
above a section x units from the top, then the function w(x) is a solution of the
initial-value problem
dw/dx = (m/L) / (1 − w/KL),   w(0) = 0.

Solve this initial-value problem for the function w(x). Let L' represent the actual
length of the up-ended spring. Noting that w(L') = m, find L'. How much did the spring
shorten when up-ended? If ρ(x) is the weight per unit length of the up-ended
spring, then ρ(x) = dw/dx. For what value of x does ρ(x) take its maximum? What
is the maximum value of ρ(x)? What is the ratio of this maximum value to the
average value of ρ(x)?
32. Find the solution of the equation y' = 3y^{2/3} passing through (t₀, y₀). Give a complete
discussion of the various cases. Note that y₀ ≠ 0 and y₀ = 0 must be treated
separately and that φ(t) = 0 is again a solution. However, we do not have uniqueness
of solutions if y₀ = 0. For example, φ(t) = 0 is a solution through (0, 0) and so is

φ₁(t) = 0 for −∞ < t < c,   φ₁(t) = (t − c)³ for c ≤ t < ∞,

for any c > 0.
*33. Consider the equation

y' = f(y),

where f(y) is a continuous positive function for y > 0 and f(0) = 0. By separating
variables, show that if the improper integral ∫₀^a [1/f(u)] du diverges, then there
is a unique solution through the point (0, y₀) which is strictly positive and tends to
zero as t → −∞. Show that if ∫₀^a [1/f(u)] du converges for some a > 0, then there is
an infinite number of solutions through the point (0, 0). Give some examples of
functions f(u) and sketch the corresponding solution curves. [Hint: Use the
method of separation of variables and the inverse function theorem.]
*34. Consider the solution φ of the differential equation y' = t² + y² passing through the
point (0, 1). Show that the curve y = φ(t) has a vertical asymptote for a value t = t₀
with π/4 < t₀ < 1. [Hint: To see that t₀ < 1, observe that φ satisfies the differential
inequality φ'(t) > [φ(t)]². Integrate this inequality, using φ(0) = 1, to obtain
φ(t) > 1/(1 − t). To see that t₀ > π/4, observe that φ also satisfies the differential
inequality φ'(t) < 1 + [φ(t)]² for 0 < t < 1 and proceed in a similar fashion.]
35. Find all continuous (not necessarily differentiable) functions
f(t) such that
1.4 Linear Equations of the First Order

We shall now consider another type of first-order equation for which there
is a general procedure for finding all solutions. The equation y' = f(t, y)
will be called linear if f(t, y) is a linear function of y. This means that the
equation can be written in the form

y' + p(t) y = q(t),   (1.20)

where p(t) and q(t) are continuous on some interval I.
Figure 1.6
We first work with some examples, which will suggest the approach to the
general problem.
Example 1. y' + 2ty = 0 with initial point (0, y₀).
Here the strip S of Fig. 1.6 is the whole (t, y) plane. This equation has variables
separable, and we can use the method of the previous section to obtain the solution
φ₀(t) = y₀e^{−t²}, for −∞ < t < ∞, if y₀ ≠ 0. If y₀ = 0, we obtain the solution φ(t) = 0
which, incidentally, is also included in the formula φ₀(t) = y₀e^{−t²}.
Another way to express the fact that the solution φ₀(t) is y₀e^{−t²} is to write

e^{t²}φ₀(t) = y₀.
Since e^{t²} ≠ 0, this means that φ₀ is a solution of y' + 2ty = 0. What we learned from this
is that if φ₀ is a solution and if we multiply both sides of the equation φ₀'(t) + 2tφ₀(t) = 0
by e^{t²}, we obtain on the left side the derivative of e^{t²}φ₀(t). Since this derivative is zero, it
follows that e^{t²}φ₀(t) is a constant; in fact, the initial condition φ₀(0) = y₀ tells us that
this constant has the value y₀. This procedure of multiplying the differential equation
by a suitable function (in this case e^{t²}) gives us another way of finding the solution φ₀.
While these remarks may appear to be a waste of time in this example, they will be
useful for the solution of the next example.
Example 2. y' + 2ty = sin t with initial point (0, y₀).
Proceeding as in Example 1, we multiply both sides by e^{t²} and obtain
(d/dt)[e^{t²}φ(t)] = e^{t²} sin t, or

e^{t²}φ(t) = y₀ + ∫₀ᵗ e^{s²} sin s ds,

and this gives φ(t) = y₀e^{−t²} + e^{−t²} ∫₀ᵗ e^{s²} sin s ds for −∞ < t < ∞ as a candidate for
a solution. The integral ∫₀ᵗ e^{s²} sin s ds cannot be evaluated in terms of elementary
functions, but nevertheless it defines a differentiable function, whose value for t = 0
is zero and whose derivative is e^{t²} sin t. We now check our result by substituting this
expression for φ in the differential equation y' + 2ty = sin t and we obtain, using the
product rule and the fundamental theorem of calculus,

φ'(t) = −2ty₀e^{−t²} + e^{−t²} e^{t²} sin t − 2te^{−t²} ∫₀ᵗ e^{s²} sin s ds
= −2tφ(t) + sin t.
This shows that φ is actually a solution of the differential equation. Observe that φ is
the sum of two terms, one of which is the solution φ₀ in Example 1.
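Since ∫₀ᵗ e^{s²} sin s ds must be computed numerically anyway, the solution formula can be checked in code. The sketch below evaluates φ with Simpson quadrature and compares it against a direct Runge-Kutta integration of y' + 2ty = sin t; the Runge-Kutta cross-check is our own addition, not the text's method.

```python
import math

def integrate(f, a, b, n=800):
    # Composite Simpson rule (n even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def phi(t, y0=1.0):
    # phi(t) = y0 e^{-t^2} + e^{-t^2} * integral_0^t e^{s^2} sin s ds.
    I = integrate(lambda s: math.exp(s * s) * math.sin(s), 0.0, t)
    return math.exp(-t * t) * (y0 + I)

def rk4(t_end, y0=1.0, n=2000):
    # Integrate y' = sin t - 2 t y from (0, y0) by classical Runge-Kutta.
    f = lambda t, y: math.sin(t) - 2.0 * t * y
    h = t_end / n
    t, y = 0.0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

assert abs(phi(0.0) - 1.0) < 1e-12
for t in [0.5, 1.0, 2.0]:
    assert abs(phi(t) - rk4(t)) < 1e-6
```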
Now let us try to use the method developed in these examples to find
a solution φ of Eq. (1.20), satisfying the initial condition φ(t₀) = y₀ for
some t₀ in I.
We begin by considering the simpler equation
y' + p(t) y = 0,   (1.22)

which has variables separable and is easily solved. If we define

P(t) = ∫_{t₀}^{t} p(s) ds,

the solution φ₀ passing through (t₀, y₀) is given by φ₀(t) = y₀e^{−P(t)}. We can
write this as e^{P(t)}φ₀(t) = y₀ and differentiate, using P'(t) = p(t), to obtain

(d/dt)[e^{P(t)}φ₀(t)] = e^{P(t)}φ₀'(t) + P'(t)e^{P(t)}φ₀(t) = e^{P(t)}[φ₀'(t) + p(t)φ₀(t)] = 0.
This is just what we need to find the solution φ of the full Eq. (1.20)
passing through (t₀, y₀). For, if we multiply both sides of the equation
φ'(t) + p(t)φ(t) = q(t) by e^{P(t)}, we obtain (d/dt)[e^{P(t)}φ(t)] = e^{P(t)}q(t). Now
we can integrate from t₀ to t, obtaining

e^{P(t)}φ(t) − e^{P(t₀)}φ(t₀) = ∫_{t₀}^{t} e^{P(s)} q(s) ds,

or, using P(t₀) = 0 and φ(t₀) = y₀, we obtain as a candidate for a solution

φ(t) = y₀e^{−P(t)} + e^{−P(t)} ∫_{t₀}^{t} e^{P(s)} q(s) ds   (1.23)

for t₀ and t on the interval I. We must now verify that the function defined
by (1.23) is a solution of (1.20).
Exercise
1. Verify that (1.23) does satisfy both the differential equation for all t in I and the
initial condition. [Hint: Proceed exactly as in the verification that φ is a solution, as
in Example 2.]
Observe that the expression for φ is the sum of two terms, the solution
of (1.22) through (t₀, y₀) and the solution of (1.20) through (t₀, 0). The
expression is defined and remains finite so long as p(t) and q(t) are con-
tinuous. (This follows from theorems in calculus; which ones?)
Combining the result of Exercise 1 with the fact that we found only one
1.4 Linear Equations of the First Order 23
possible candidate for a solution, namely the one given by Eq. (1.23),
we may state the following result.
Theorem 1. If p(t) and q(t) are continuous functions on some interval I and
if to is any point in I, then for each yo the initial-value problem
y′ + p(t) y = q(t),    y(t₀) = y₀
has one and only one solution φ given by Eq. (1.23). This solution exists for all
t on the interval I.
Figure 1.7
Exercises
In Exercises 2-18, find the solution through the given initial point and determine the
interval on which the solution is valid.
2. y′ + 2y = e^t    (0, 1)
3. y′ + py = q    (t₀, y₀), where p and q are constants
4. [exercise illegible in the scan]
5. y′ + y/t = 1    (1, −2)
6. ty′ + 3y = t²    (−2, 1)
[Hint: First divide by t.]
7. [equation illegible in the scan]    (t₀, y₀) with t₀ ≠ 0
8. y′ + y cot t = cos t    (π/2, 0)
9. t²y′ + ty = 2t² + 1    (t₀, y₀) with t₀ ≠ 0
10. ty′ + y = e^t    (1, 1)
11. ty′ + 2y = t² − t + 1    (1, 1/2)
12. [equation illegible in the scan]    (0, 0)
13. y′ = −2t(y − t²)    (0, 0)
14. y′ + 3y = e^t    (0, 1)
15. (t + 1)(y′ + y) = e^t    (0, a)
16. [equation illegible in the scan]    (0, a)
17. y′ − y tan t = 1    (0, a)
18. y(t) = ∫₀ᵗ y(s) ds + t + a. [Hint: Differentiate the equation.]
19. Discuss the behavior of all solutions of the differential equation

y′ = λy

as t → +∞ for each of the cases λ > 0, λ = 0, λ < 0.
20. Discuss the behavior of all solutions as t → +∞ of each of the following differ-
ential equations.
a) y′ = −2y + e^{−t}
b) y′ = −2y + e^t
c) y′ = −2y + 1
d) y′ = −2y + 1/(1 + t²). [Hint: Use l'Hôpital's rule to evaluate the limit.]
*e) y′ = −2y + f(t), where f(t) is a continuous function with lim_{t→+∞} f(t) = 0.
21. Find the solution of the differential equation y′ + a(t) y = b(t) y^k (Bernoulli equa-
tion) through (t₀, y₀), where k is a constant, k ≠ 1, 0. [Hint: Make the change of
dependent variable z = y^{1−k}.] How could you solve the equation if k = 1 or k = 0?
22. Find the solution of y′ − y = ty² passing through the point (1, −1).
23. Find the solution of y′ + y/t = y²/t² passing through the point (−1, 4).
24. Find the solution of the differential equation ty′ = 2t²y + y log y through the
point (1, 1). [Hint: Make the change of variable v = log y.]
*25. a) Show (by constructing it) that the equation y′ + y = f(t) has a unique solution
bounded for −∞ < t < ∞, where |f(t)| ≤ M and f is continuous for −∞ < t < ∞.
[Hint: Consider the solution φ_A(t) with φ_A(−A) = 0, where A > 0, and let
A → +∞, φ(t) = lim_{A→∞} φ_A(t).]
b) Given that the function f(t) is periodic with period 2π (that is, f(t + 2π) = f(t)
for −∞ < t < ∞), show that the solution obtained in part (a) is periodic. [Hint:
Show by a suitable change of the variable of integration that the solution
obtained in part (a) satisfies φ(t + 2π) = φ(t) for every t.]
*26. Given the differential equation

y′ + a(t) y = f(t),

where a(t) and f(t) are continuous, where a(t) > c > 0, and where lim_{t→∞} f(t) = 0,
show that every solution tends to zero as t → ∞.
*27. Let a be a positive constant and let lim_{t→0+} f(t) = b. Show that the equation

ty′ + ay = f(t)

has a unique solution which is bounded as t → 0+, and find the limit of this solution
as t → 0+.
Figure 1.8
and causes a current i to flow through the circuit. Ohm's law states that the
current i is proportional to the electromotive force e. This is expressed by the
equation
e=ir (1.24)
where the constant of proportionality r is called the resistance. The con-
sistent units in which these quantities are measured are: e in volts, i in amperes,
and r in ohms. If the quantities e and r are known, the current i is immediately
determined from (1.24) (this is not a differential equation). It is assumed that
the resistance of the wire is negligible.
A more complicated circuit places an inductor L in series with a resistor
and an electromotive source e. An inductor opposes a change in current. It has
an inertia effect comparable to a mass in mechanics. Our circuit may be
pictured schematically as in Fig. 1.9.
Figure 1.9
When the switch s is closed at t = 0, a current i flows in the circuit. The
problem is to determine the current i(t) as a function of time. To do this we
will set up a differential equation for the current which is derived from the
following empirical laws.
1. Ohm's law: The voltage drop e_r across a resistor of resistance r is
e_r = ri.
2. The voltage drop e_L across an inductor of inductance L is e_L = L(di/dt).
Using these laws for the circuit in Fig. 1.9, we obtain the following
equation:
Li’ (t)+ri(t)=e. (1.25)
Assuming that no current flows in the circuit at time t = 0 (when the switch
is closed), the initial condition is i(0) = 0. If the voltage source e is a battery,
e is a constant; if the voltage source is an alternating current generator,
then e is usually assumed to be of the form

e = e₀ sin ωt,

where 2π/ω is the period and e₀ is the maximum voltage. Eq. (1.25) is a linear first-
order equation of the form (1.21).
Exercises
28. Solve (1.25) with the initial condition i(0)=0, assuming that L, r, e are positive
constants, and sketch the graph.
29. Show that the solution of Eq. (1.25) with the initial condition i(0) = 0, under the
assumption that L, r are positive constants and e = e₀ sin ωt, is

i(t) = (e₀ωL/(r² + ω²L²)) e^{−rt/L} + (e₀/(r² + ω²L²)^{1/2}) sin(ωt − α),    tan α = ωL/r.

Sketch the graph of this solution and compare with the result of Exercise 28. Note
that the current is the sum of two terms: one, called the transient solution, approach-
es zero, and the other, called the steady-state solution, oscillates.
30. Assume that the resistance r is a linear function of time, r = kt, and that L and e
are positive constants. Show that the solution of Eq. (1.25) with the initial con-
dition i(0) = 0 is

i(t) = (e/L) e^{−kt²/2L} ∫₀ᵗ e^{ks²/2L} ds.

Sketch the graph of this solution and show that lim_{t→∞} i(t) = 0. Is this result phys-
ically reasonable? Given that e = 100 volts, k = 5 ohms per second, and the current
reaches its maximum value when t = 2 seconds, determine the maximum current.
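As a numerical cross-check on Exercise 28, the sketch below integrates Eq. (1.25) with a forward Euler step and compares the result with the closed form i(t) = (e/r)(1 − e^{−rt/L}) for a constant source. The component values are assumed for illustration only.

```python
# Euler-method check of Eq. (1.25), L i' + r i = e, i(0) = 0, constant e.
# Closed form (content of Exercise 28): i(t) = (e/r)(1 - exp(-r t / L)).
import math

L_ind, r, e = 1.0, 2.0, 10.0     # henrys, ohms, volts (assumed sample values)

def euler_current(t_end, steps=100000):
    """Integrate i' = (e - r i)/L from i(0) = 0 by the forward Euler method."""
    h = t_end / steps
    i = 0.0
    for _ in range(steps):
        i += h * (e - r * i) / L_ind
    return i

closed = lambda t: (e / r) * (1 - math.exp(-r * t / L_ind))
print(euler_current(2.0), closed(2.0))   # both near 5 (1 - e^{-4})
```

Both values approach the steady state e/r = 5 amperes as t grows, as the exponential term dies out.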
Exercises
31. A tank contains 10 liters of water to which is added a salt solution containing 0.3
kilograms of salt per liter. This salt solution is poured in at the rate of 2 liters per
minute and is thoroughly mixed, and then the mixture is drained off at the same rate
(2 liters/min). How much salt is in the tank after five minutes? [Hint: Let φ(t) be the
amount of salt in the tank at the end of t minutes. Then between t and (t + h) minutes,
approximately 2 × 0.3 × h = 0.6h kilograms of salt enter the tank, and approximately
2 × φ(t) × (1/10) × h = 0.2hφ(t) kilograms of salt leave the tank. Thus φ(t + h) − φ(t)
≈ 0.6h − 0.2hφ(t); dividing by h and letting h → 0, we have φ′(t) + 0.2φ(t) = 0.6. The
initial condition is φ(0) = 0.]
32. A tank has 10 gal of brine containing 2 lb of dissolved salt. Brine with 1.5 lb of salt
per gallon enters the tank at the rate of 3 gal/min. The mixture is kept well stirred
and leaves the tank at the rate of 4 gal/min.
a) Find the amount of salt in the tank at any time t.
b) Find the concentration of salt (pounds of salt per gallon) at the end of 10 min.
c) Sketch the graphs of the amount of salt and of the concentration of salt against
time and determine the absolute maximum of each quantity.
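The difference-quotient reasoning in the hint to Exercise 31 can be run directly as a simulation. The sketch below steps the increment φ(t + h) − φ(t) ≈ 0.6h − 0.2hφ(t) with a small h and compares the result at t = 5 minutes with the closed form φ(t) = 3(1 − e^{−0.2t}), which solves φ′ + 0.2φ = 0.6, φ(0) = 0 by the method of this section.

```python
# Simulation of the mixing process in Exercise 31, following the hint's
# difference quotient: phi(t+h) - phi(t) ≈ 0.6 h - 0.2 h phi(t).
import math

def simulate(t_end, h=1e-4):
    phi = 0.0                              # no salt initially
    for _ in range(int(round(t_end / h))):
        phi += 0.6 * h - 0.2 * h * phi     # inflow 0.6 kg/min, outflow 0.2*phi kg/min
    return phi

print(simulate(5.0))                       # close to the closed-form value
print(3 * (1 - math.exp(-0.2 * 5.0)))      # phi(5) = 3 (1 - 1/e) ≈ 1.896 kg
```

Shrinking h drives the simulated value toward the exact one, which is precisely the limit h → 0 taken in the hint.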
The geometric interpretation given in Section 1.2 is the basis for a simple
and effective procedure for finding out something about the nature of the
solutions of the differential equation
y'=f(t, y),
where the function f is defined in a region D in the (t, y) plane. We can use
a special device to display the values of f in such a fashion that it will be
possible to sketch the solutions directly. In Chapter 9 we shall see that
this crude process is the key to a useful numerical procedure, adaptable to
digital computers, for finding approximations to solutions of a differential
equation. The interested reader should have no difficulty in understanding
Section 9.1 after reading this section.
The function f defines a direction field or tangent field in the region
D as follows. At each point P in D, with coordinates (t, y), we evaluate
f(t, y) and then draw through P a short line segment whose slope is f(t, y).
To do this systematically, it is helpful to construct the curves in the (t, y)
plane on which the function f(t, y) is constant. (These are called the level
curves of the function f.) On each such curve the previously mentioned
line segment has constant slope. Thus choose a value k and construct
the curve f(t, y) = k (such a curve may have more than one branch) and
along this curve draw line segments of slope k. Do this for several values of k.
Figure 1.10
1.5 Direction Fields 29
Figure 1.11
How will such a diagram help us find a solution of y’=f(t, y)? The
answer is that a solution curve y=¢(t) must be a curve which “‘flows with”
the direction field; that is, at each point on its graph, the graph of y=¢(t)
must be tangent to the corresponding line segment of the direction field.
This becomes evident if we superimpose on a direction field the graphs of
a number of solutions of the differential equation, as is indicated by the solid
curves in Figs. 1.10 and 1.11. Thus, after the direction field has been
sketched, we connect the line segments by smooth curves. In this way we can
obtain some useful information about the nature of the solutions of a
differential equation by a freehand sketch of curves which obey this simple
geometric condition.
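The "follow the direction field" idea is exactly the numerical procedure alluded to above (Euler's method, treated in Section 9.1): from a starting point, repeatedly step a short distance along the line segment of slope f(t, y). The sketch below applies it to the assumed example y′ = y, y(0) = 1, whose solution at t = 1 is e.

```python
# Euler's method: follow the direction field in short straight steps.
# Illustrated on y' = y, y(0) = 1, with exact value y(1) = e.
import math

def euler(f, t0, y0, t_end, steps):
    t, y = t0, y0
    h = (t_end - t0) / steps
    for _ in range(steps):
        y += h * f(t, y)    # move along the field segment of slope f(t, y)
        t += h
    return y

for n in (10, 100, 1000):
    print(n, euler(lambda t, y: y, 0.0, 1.0, 1.0, n))   # approaches e ≈ 2.71828
```

The crude freehand sketch and this computation are the same process; the computer merely takes many more, much shorter, segments.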
Exercises
Construct direction fields for each of the following differential equations and then
sketch general solution curves:
1. y′ = 2y    2. y′ = −y²
3. y′ = ty    4. y′ = 1 − t² − y²
5. y′ = 1 + y²    6. y′ = t² + y²
7. y′ = y − t²    8. y′ = −t/y
9. y′ = y/(t + y)    10. y′ = (t + y)/(t − y)
11. y′ = ((1 − t²)y − t)/y
Figure 1.12
For our purposes, rectangles are more convenient, but it would be possible
to use circles instead. A rectangle centered at (t₀, y₀) is the set
{(t, y) | −a < t − t₀ < a, −b < y − y₀ < b} or, equivalently,
{(t, y) | |t − t₀| < a, |y − y₀| < b}, for some a, b > 0. The interior of a rectangle
or circle, the whole plane, and an infinite strip

{(t, y) | a < t < b, −∞ < y < ∞}
Exercises
1. a) [illegible in the scan]
b) g(t, y) = 1/(1 − t² − y³)
y′ = f(t, y) which passes through (t₀, y₀) (that is, the solution φ satisfies the
initial condition φ(t₀) = y₀).
Example 1. Consider the differential equation y′ = αy (α constant). Here f(t, y) = αy
and (∂f/∂y)(t, y) = α. Both f and ∂f/∂y are continuous in the whole (t, y) plane. Theorem 1
shows that there is a unique solution φ of y′ = αy through every point (t₀, y₀) in the plane.
We have seen in Section 1.1 that φ(t) = y₀ e^{α(t−t₀)} is a solution of this initial-value prob-
lem. Note that from the explicit solution we see that this solution exists for −∞ < t < ∞,
but the theorem does not give us this information.
Exercises
5. Find a solution of y′ = αy through (1, 0). Is this the only solution through (1, 0)?
6. a) Show that φ(t) = −1/t is a solution of y′ = y² passing through (−1, 1).
b) Show that φ(t) = −1/t is the only solution of y′ = y² passing through (−1, 1).
Be sure to determine an appropriate region D before applying Theorem 1.
c) What is the largest interval on which φ(t) = −1/t is a solution of y′ = y² through
the point (−1, 1)? The reader should observe that Exercise 6 shows that a solu-
tion of y′ = f(t, y) does not necessarily exist for all t even though f and ∂f/∂y
are continuous in the whole plane.
Example 2. Consider the differential equation y′ + p(t) y = q(t), where p and q are
given functions, continuous on some interval a < t < b. (We may, of course, have p, q
continuous on the whole t axis.) Choose t₀ with a < t₀ < b and any y₀. Then we can use
Theorem 1 to show that there is a unique solution φ satisfying the initial condition
φ(t₀) = y₀. Since f(t, y) = −p(t) y + q(t) and (∂f/∂y)(t, y) = −p(t) are continuous in the
region D = {(t, y) | a < t < b, −∞ < y < ∞}, Theorem 1 can be applied for every point
(t₀, y₀) in D to show the existence of a unique solution. We have seen in Section 1.4
that the solution φ exists on the whole interval a < t < b.
φ(t) = 0 if −∞ < t ≤ c,
φ(t) = (t − c)³ if c < t < ∞
has a continuous derivative for −∞ < t < ∞ and is a solution passing through (t₀, 0)
for every value of c > t₀. This can be verified by direct substitution. In addition to all
these solutions, the identically zero function is also a solution (see Fig. 1.13). These
solutions can be constructed by the method of Section 1.3 (see Exercise 32, Section
1.3).
This example illustrates that in some cases there may be a solution of the differential
equation y′ = f(t, y) through (t₀, y₀) even though the hypothesis of Theorem 1 is not
1.6 Existence and Uniqueness for First-Order Equations 33
(t₀, 0)
Figure 1.13
satisfied. In this example we have existence, but not uniqueness, of solutions through
(t₀, 0). Other situations may arise; for example, it can be shown that φ(t) = 0 is the only
solution of the differential equation y′ = |y| passing through the point (0, 0), in spite of
the fact that ∂f/∂y, where f(t, y) = |y|, does not exist at (0, 0).
Exercises
Use Theorem 1 to answer each of the following.
7. Does the differential equation y′ = 1 + y² have a unique solution through the
point (0, 0)?
8. a) Does the differential equation y′ = (y² − 1)^{1/2} have a unique solution through
each of the following points? (i) (0, 2), (ii) (1, −5).
b) What can be said about solutions through the point (0, 1)? Note that φ(t) = 1 is a
solution.
9. Determine the region or regions in the (t, y) plane in which Theorem 1 can be applied
to obtain existence and uniqueness of real solutions for each of the following
differential equations.
a) y′ = −t/y
b)–g) [illegible in the scan]
h) (y′)² = t² − y²
i), j) [illegible in the scan]
k) y′ = 1 − t² − y²    l) y′ = 1 + y²
m) y′ = t² + y²    n) y′ = y − t²
o) y′ = (t + y)/(t − y)    p) y′ = ((1 − t²)y − t)/y
10. Let
f(t, y) = 0 (t ≤ 0, −∞ < y < ∞);
f(t, y) = [illegible] (t > 0, 0 < y < ∞);
f(t, y) = [illegible] (t > 0, −∞ < y ≤ 0).
a) Show that [the function φ displayed here is illegible in the scan] is a solution on −∞ < t < ∞.
b) Is φ(t) continuous everywhere?
c) Is φ′(t) continuous everywhere?
d) Can you apply Theorem 1 to obtain the existence and uniqueness of solutions
through the point (0, 1)? Explain fully.
(t₀, y₀)
Figure 1.14
For values of t < t₀ the solution φ decreases as we move to the left and, when φ(t)
becomes less than 1, the slope [φ(t)]² becomes smaller. Therefore the solution φ(t)
decreases less rapidly as t becomes more negative. However, the solution φ can never
cross the t axis; by uniqueness (since the identically zero function is a solution), if it
did, φ would be identically zero, contradicting the assumption y₀ > 0. Therefore we would
expect the solution φ to exist for all t < t₀ with lim_{t→−∞} φ(t) = 0. Examination of the ex-
plicit solution, Eq. (1.13), Section 1.3, shows that this is indeed the case.
Now consider the case y₀ < 0. Again, the initial slope is y₀² > 0, and the solution φ
increases for t > t₀ and decreases for t < t₀ (that is, as we move to the left from t₀); see
Fig. 1.15. For values of t > t₀, φ(t) will continue to increase, but more slowly as
|φ(t)| becomes smaller. As the solution φ cannot cross the t axis (why?), we would
expect φ to exist for all t > t₀ and that lim_{t→+∞} φ(t) = 0 in the case y₀ < 0. For t < t₀
the solution will become more negative and approach −∞; this in fact happens for a
finite value t₁ < t₀, as can be seen from the explicit solution (1.13), Section 1.3.
(t₀, y₀)
Figure 1.15
Exercises
11. For a fixed y₀ > 0, vary t₀ and use the argument given above to sketch the solution
curves in the (t, y) plane. Repeat for a fixed y₀ < 0. What happens if y₀ = 0?
12. By a similar analysis sketch the solution curves of the differential equation y′ = −y².
13. Sketch the solution curves of the differential equations y′ = y and y′ = −y.
14. Sketch the solution curves of the differential equation y′ = t² + y².
15. Sketch the solution curves of the differential equation y′ = 1/(t² + y²).
We noted in the above examples and in Section 1.6 that, although
Theorem 1 guarantees the existence of a solution on some interval, it gives no
information about the size of the interval of existence. What information
we have obtained about the size of the interval of existence has come from
explicit formulas for solutions. Since explicit formulas for solutions are
seldom available, it would be useful to have a criterion for the size of the
interval of existence. Such a result is beyond the scope of this book. However,
roughly speaking, the following is true: Under the hypotheses of Theorem 1,
the interval I of existence of the solution φ(t) through the point (t₀, y₀) is
such that the graph of the solution extends to the boundary of D (see Fig.
1.16). A proof of this result may be found in [2], Chapter 3.
Figure 1.16
For example, the solution φ(t) = y₀/(1 − y₀(t − t₀)) of y′ = y² through (t₀, y₀)
exists only on the interval

−∞ < t < t₀ + 1/y₀    if y₀ > 0.
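The finite interval of existence t < t₀ + 1/y₀ can be seen concretely. Assuming the sample data t₀ = 0, y₀ = 1 (chosen for illustration), the solution of y′ = y² is φ(t) = 1/(1 − t), which satisfies the equation on t < 1 and escapes to +∞ as t approaches 1:

```python
# Blow-up of a solution of y' = y^2: with assumed data (t0, y0) = (0, 1),
# the solution is phi(t) = 1/(1 - t), which exists only for t < 1.
def phi(t):
    return 1.0 / (1.0 - t)

# Finite-difference check that phi' ≈ phi^2 at a sample point:
h = 1e-6
deriv = (phi(0.5 + h) - phi(0.5 - h)) / (2 * h)
print(abs(deriv - phi(0.5) ** 2))   # small

# The solution grows without bound as t approaches 1 from the left:
for t in (0.9, 0.99, 0.999):
    print(t, phi(t))
```

Nothing in the right-hand side f(t, y) = y² hints at the value t₀ + 1/y₀; the breakdown point depends on the initial condition, which is why Theorem 1 cannot name the interval in advance.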
CHAPTER 2
Introduction to
Second-Order
Differential Equations
Differential equations of the type we wish to discuss arise naturally in a variety
of physical problems. Perhaps the simplest class of these problems deals
with the motion of particles. The mass—spring system (considered in Section
2.1) and the pendulum (developed in Section 2.2) lead to differential
equations which are prototypes of mathematical models for other important
physical systems. We shall use these typical examples to motivate our study
of second-order differential equations, just as we used some biological
problems to motivate our study of first-order equations.
Before turning to the construction of these mathematical models, we
recall some aspects of the Newtonian model for the motion of a particle.
In this model, it is assumed that a body, called a particle, can be represented
as a point having mass. (We shall assume knowledge of the rather difficult
concept of mass; for practical purposes, mass can be measured by the weight
of the body.) It is assumed that, in the absence of "forces," the motion of the
particle is unaccelerated and is therefore straight-line motion with constant,
perhaps zero, velocity (Newton’s first law). The presence of acceleration is
therefore to be interpreted as a sign of the presence of a force. This is a
vector quantity given by Newton's second law: If F is the force acting on a
particle of mass m moving with a velocity v, then

F = d/dt (mv).
The vector quantity mv is called the momentum of the particle. If the mass
is constant, Newton’s second law may be written as
38 Introduction to Second-Order Differential Equations
F = m dv/dt = ma,

where a is the acceleration vector of the particle and t is time.
In the Newtonian model, the gravitational force can be shown (experi-
mentally) to be proportional to mass, so that problems involving gravita-
tional forces on particles near the earth’s surface can be handled con-
veniently by assuming that the acceleration g due to gravity is constant.
F(t) = A cos ωt
Figure 2.1
2.1 The Mass-Spring System 39
φ″(t) + (k/m) φ(t) = (A/m) cos ωt.

Since the system starts from rest at t = 0 with an initial displacement y₀,
we shall also require that the function φ satisfy appropriate initial conditions.
More precisely, the problem is to find a function ψ such that

ψ″(t) + (k/m) ψ(t) = (A/m) cos ωt    (2.4)

for 0 ≤ t < ∞. Such a function ψ is called a solution of Eq. (2.4) on 0 ≤ t < ∞.
We also require that ψ should satisfy initial conditions
Exercises
(In each of the following, assume that the particle is near the earth’s surface and that
gravity is the only force acting on the particle. You will have no difficulty in solving the
differential equations obtained.)
1. A particle is released from a height y₀ above the earth's surface (y = 0) with initial
velocity (velocity at t = 0) in the vertical direction of magnitude v₀. Find the
height y(t) of the particle above the earth's surface and the velocity v(t) = dy/dt as
functions of time.
2. A particle is released at an initial position s₀ (distance s is to be considered positive
in the downward direction) on a frictionless inclined plane (as shown in Fig. 2.2)
Figure 2.2
with an initial velocity v₀. Find its distance s(t), measured along the inclined plane
from the top, and its velocity v(t) as functions of time.
3. A projectile is fired from an initial position (x₀, y₀) with an initial velocity of
magnitude v₀ at an angle of inclination θ, 0 < θ < π/2. Find its horizontal and
vertical coordinates x(t) and y(t) as functions of time.
4. In each of the above exercises, discuss factors of actual physical situations which
you feel have been neglected and consider their probable effect on the actual
behavior of the system. What information do you feel can be obtained from the
mathematical models you have constructed? In particular, compare the time it
takes for a body released from height h to reach the ground to that time for a
projectile fired horizontally with initial velocity v₀.
5. A weight of mass m is suspended from a rigid horizontal support by means of a
very light spring (see Fig. 2.3). The weight is allowed to move only along a vertical
Figure 2.3
line (no lateral motion in any direction is permitted). The spring has a natural
(unstretched) length L when no weight is suspended from it. When the weight
is attached, the system has an equilibrium position at which the spring is stretched
to a length L+a, where a is a positive number. We set the system in motion by
displacing the weight from this equilibrium position by a prescribed amount
and releasing it either from rest or with some prescribed initial velocity. Describe in
mathematical terms the motion of the system. [Remarks and Hints. Since the motion
is restricted to a vertical line, the position of the weight can be described completely
by the displacement y from the equilibrium position (see Fig. 2.3). The mathematical
equivalent of the motion of the mass-spring system is then a function φ such that
y = φ(t) describes the position of the weight for each value of t ≥ 0, where t = 0 repre-
sents the starting time of the motion. In order to determine the motion, that is, to
determine the function φ, we must impose additional restrictions on φ. For example,
2.2 The Pendulum Problem 43
if we displace the weight a distance y₀ and then release it, we would require that
φ(0) = y₀. If we release it from rest at this position, we will also require that φ′(0) = 0.
Suppose as before that assumptions (i)–(v) hold. With reference to Fig. 2.3,
we shall measure the displacement y from equilibrium (y = 0), choosing the down-
ward direction as positive. The force of gravity F₁ in Fig. 2.3 is mg, and the restoring
force of the spring F₂ is −k(y + a) by Hooke's law. Observe that Fig. 2.3 has been
drawn with y > 0 so that F₂ is directed upward [5]. Sketch the analog of Fig. 2.3
with y < 0 and compute the forces F₁ and F₂ in this case. The total force acting on
the weight is

F₁ + F₂ = mg − k(y + a).
The equilibrium position occurs when this total force is zero. Therefore, at equilib-
rium, mg − k(0 + a) = 0, or a = mg/k. Thus we can rewrite the total force at any
position y of the mass as

F₁ + F₂ = mg − k(y + mg/k) = −ky.

By Newton's second law,

−ky = F = d/dt (mv) = m d²y/dt².
Figure 2.4
position θ₀). How long will this take? How does this period depend on m,
L, and θ₀? (d) What happens if we immerse the pendulum in some substance
such as water, which exerts a considerable amount of resistance on the mov-
ing weight? How will this affect the nature of the motion? Will the
pendulum eventually come to rest, and, if so, how long will this take? (e) Will
any of the previous answers change if we move the apparatus to the top of a
high mountain or place it in an orbiting satellite?
We have assumed that the rod supporting the weight is perfectly rigid.
This means that the position (state) of the pendulum at time t can be
described completely by the size of the angle θ (Fig. 2.5). The mathematical
Figure 2.5
where T > 0. We recall from calculus that the vectors u_r, u_θ satisfy the
relations

du_r/dθ = u_θ,    du_θ/dθ = −u_r.

Let r denote the position vector of the weight, taking the pivot as the origin,
so that

r = L u_r.
Differentiation, using the chain rule and the above relations, gives

dr/dt = L (dθ/dt) u_θ

and

d²r/dt² = L (d²θ/dt²) u_θ − L (dθ/dt)² u_r.    (2.6)
The total force acting on the weight is
F₁ + F₂ = (mg cos θ − T) u_r − mg sin θ u_θ.    (2.7)

By Newton's second law (since the mass is constant), m d²r/dt² = F₁ + F₂, and
equating the u_r and u_θ components gives

−mL (dθ/dt)² = mg cos θ − T,    mL (d²θ/dt²) = −mg sin θ.
Note that the first of these equations contains also the unknown quantity
T. However, if the angle θ can be determined from the second equation,
then the magnitude of the tension T can be found from the first equation;
in fact, T = mg cos θ + mL(dθ/dt)². Therefore the motion of the pendulum
is completely determined by the second equation, which may be written
in the form

d²θ/dt² = −(g/L) sin θ.    (2.8)
and such that (2.9) also holds. If we can find such a function, we say that
we have found a solution of the differential equation (2.8) obeying the initial
conditions (2.9). We may then hope that this function φ will give a good
approximation to the actual motion of a specific pendulum, and that we
can use φ to predict properties of the pendulum which can be measured
experimentally.
We can modify the model in several ways, some motivated by physical
intuition and some by mathematical requirements. For example, we could
replace assumption (c) by:
c’) The pendulum encounters resistance, due to the pivot and the
surrounding air, which is proportional to the velocity vector,
and leave the remaining assumptions unchanged. In this case,
d²θ/dt² = −(g/L) sin θ − a (dθ/dt)    (2.10)
replaces Eq. (2.8) as the appropriate mathematical model. The last term is
the appropriate mathematical translation of the additional resistance force.
Note that Eq. (2.10) reduces to (2.8) if a=0.
Exercises
If θ is small, then sin θ and θ have almost the same value. (Consult any
table of trigonometric functions for radian arguments and also recall that
lim_{θ→0} (sin θ)/θ = 1.) We might therefore be inclined to replace (2.8) by the
linear equation
d²θ/dt² = −(g/L) θ    (2.11)
and hope that this new mathematical model (which, as we shall see, is
much easier to solve) is almost as good for some purposes as the model
(2.8). When we analyze these equations, it turns out that (2.8) and (2.11)
predict quite different qualitative behavior for pendulums, and that some of
the predictions made by (2.11) do not happen at all, but that both agree quite
well with experiment if we restrict the swing of the pendulum to a rather
small arc. (Incidentally, Eqs. (2.8) and (2.11) also predict things that a
‘real’ pendulum does not do at all; for example, they predict that once a
pendulum is set swinging, it will never stop! Equation (2.10) comes closer to
reality by at least predicting that the pendulum will slow down.)
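The agreement for small arcs and the divergence for large ones can be seen numerically. The sketch below integrates (2.8) and (2.11) side by side (with the assumed normalization g/L = 1 and release from rest; the semi-implicit Euler scheme is one simple choice, not the book's method) and compares the two angles at a fixed time.

```python
# Comparison of the pendulum equation (2.8), theta'' = -(g/L) sin(theta), with
# its linearization (2.11), theta'' = -(g/L) theta. Assumed data: g/L = 1,
# release from rest at theta(0) = theta0. Semi-implicit Euler integration.
import math

def integrate(rhs, theta0, t_end, h=1e-4):
    theta, omega = theta0, 0.0           # released from rest
    for _ in range(int(round(t_end / h))):
        omega += h * rhs(theta)          # update angular velocity first...
        theta += h * omega               # ...then the angle
    return theta

for theta0 in (0.1, 1.5):                # small and large initial angles (radians)
    full = integrate(lambda th: -math.sin(th), theta0, 2.0)
    lin = integrate(lambda th: -th, theta0, 2.0)
    print(theta0, full, lin)             # nearly equal at 0.1, visibly apart at 1.5
```

For θ₀ = 0.1 the two angles agree to several decimal places, while for θ₀ = 1.5 they differ substantially, matching the statement that (2.11) is trustworthy only for a rather small arc.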
(Figure: the ball during ascent and during descent, showing in each case the
gravitational force −mg and the air resistance.)
in which y(t) is the height of the ball above the surface at any time t. We note that in
both the ascent and descent the total force acting on the ball at time t is

−mg − py′(t).
2.4 Second-Order Equations Solvable by First-Order Methods 49
We are given that at t = 0, y is zero and y′ is v₀. Hence the mathematical problem is to
solve the initial-value problem
We assume that the burnout height is essentially the radius of the earth R. Let y
be the distance from the rocket to the center of the earth. From Newton's law of
gravitation, we see that the differential equation for the motion is y″ = −k/y², where k
is chosen so that at the surface of the earth (y = R) the acceleration is −g. Therefore,
substituting y = R and y″ = −g into the equation y″ = −k/y², we obtain −g = −k/R²
and thus k = gR². The initial conditions are that at t = 0, y = R and dy/dt = V (velocity at
burnout). Thus the mathematical problem is to solve the initial-value problem
y(t) = y₀ + ∫_{t₀}^{t} v(s) ds

as the solution of (2.15) for which y(t₀) = y₀, y′(t₀) = z₀.
Example 1. Solution of Example 1, Section 2.3.
We wish to solve the initial-value problem
my" +py' = —mg, y(0)=0, y(O=25: (2.17)
This is a first-order linear equation in v. By the method ofSection 1.4, its solution is found
to be
m m
o()= (to) exp(—2 ‘|i.
p m p
Thus, integrating y’(¢)=v(t), v(0)=0, we obtain
Exercises
1. Verify that y(t) defined in (2.18) is a solution of the initial-value problem (2.17).
2. Explain why (2.18) is the unique solution of (2.17).
The initial-value problem (2.17) is now solved. To answer the question whether the
sphere takes longer to reach maximum height or to fall back to earth from the maximum
height, we let t₀ be the time of ascent to maximum height. Thus

v(t₀) = y′(t₀) = (v₀ + mg/p) e^{−pt₀/m} − mg/p = 0.
To determine whether the time of ascent exceeds the time of descent, we compute y(2t₀).
Clearly, if y(2t₀) > 0, the time of descent exceeds the time of ascent; and similarly one
interprets the case y(2t₀) < 0. One has from (2.18),
y(2t₀) = (m/p)(v₀ + mg/p)(1 − e^{−2pt₀/m}) − 2mgt₀/p.

From the expression for t₀,

pt₀/m = log((mg + pv₀)/mg),

and hence, setting

x = (mg + pv₀)/mg,

we find y(2t₀) = (m²g/p²)(x − 1/x − 2 log x).
We observe that in our case x > 1, because v₀ > 0. The function f defined for x ≥ 1 by

f(x) = x − 1/x − 2 log x

has

f′(x) = 1 + 1/x² − 2/x = ((x − 1)/x)² > 0 for x > 1.

Therefore f(x) is increasing. Since f(1) = 0, one has f(x) > 0 for every x > 1. Therefore
y(2t₀) > 0, and the ascent is faster than the descent. We remark that this is the result
one would expect on physical grounds as well.
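The key inequality f(x) = x − 1/x − 2 log x > 0 for x > 1 is easy to confirm numerically at a few sample points, along with the sign of f′:

```python
# Check that f(x) = x - 1/x - 2 log x, which controls the sign of y(2 t0),
# vanishes at x = 1 and is strictly positive for x > 1.
import math

def f(x):
    return x - 1.0 / x - 2.0 * math.log(x)

print(f(1.0))                                   # 0
for x in (1.1, 2.0, 10.0, 100.0):
    print(x, f(x), f(x) > 0)                    # all positive

# f'(x) = 1 + 1/x^2 - 2/x = ((x - 1)/x)^2 >= 0, so f is nondecreasing:
print(all(1 + 1/x**2 - 2/x >= 0 for x in (1.0, 1.5, 3.0, 50.0)))
```

Note how slowly f grows near x = 1 (f(1.1) is only about 3 × 10⁻⁴): when v₀ is small compared with mg/p, the ascent and descent times are nearly equal, as one would expect for weak air resistance.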
The differential equation in Example 2, Section 2.3, is of the form

y″ = f(y, y′),    (2.21)

in which the independent variable t does not appear explicitly. For such an equation
we set y′ = p and regard p as a function of y; then by the chain rule

y″ = dp/dt = (dp/dy)(dy/dt) = p (dp/dy).

Thus Eq. (2.21) becomes the first-order equation

p (dp/dy) = f(y, p),    (2.22)

and once p has been found we recover y by solving

y′(t) = p(y).    (2.23)
Up to now we have not mentioned initial conditions. If (2.21) is ac-
companied by the initial conditions y(t₀) = y₀, y′(t₀) = z₀, we must solve
(2.22) subject to the initial condition p(y₀) = z₀, because z₀ is the value of
y′ corresponding to the value y₀ of y (namely at t = t₀). Once p has been
thus determined, we need only solve (2.23) subject to the initial condition
y(t₀) = y₀.
Because the left-hand side of (2.22) is p(dp/dy), there may be diffi-
culties whenever p=0. You are therefore warned that you should verify
by direct substitution the result of solving any problem by this method.
To clarify this method, we consider three examples.
Example 2. Find the solution of the differential equation

y″ = (1 + (y′)²)/(2y)    (2.24)

satisfying the initial conditions

y(0) = 1,    y′(0) = −1.    (2.25)
Equation (2.24) does not contain the independent variable explicitly, and is a special
case of (2.21). Therefore let y′ = p(y); then y″ = p (dp/dy), and (2.24) becomes

p (dp/dy) = (1 + p²)/(2y).    (2.26)

Equation (2.26) can be solved by separation of variables. The appropriate initial con-
dition is p(y(0)) = p(1) = y′(0) = −1. Separating variables in (2.26), we have

(p/(1 + p²)) (dp/dy) = 1/(2y),

and integrating with respect to y, using the initial condition p = −1 when y = 1,

(1/2) log(1 + p²) − (1/2) log 2 = (1/2) log y − (1/2) log 1,

or

log(1 + p²) − log 2 = log y − log 1,

so that 1 + p² = 2y and

p = ±(2y − 1)^{1/2}.
Since p(1) = y′(0) = −1, we must choose the negative square root, and

p(y) = −(2y − 1)^{1/2}.

To find y, we recall that p(y) = y′(t). We must therefore solve the differential equation

y′ = −(2y − 1)^{1/2}
subject to the initial condition y(0) = 1. Separating variables again, we have

y′(t)/(2y(t) − 1)^{1/2} = −1,    that is,    (d/dt)[(2y(t) − 1)^{1/2}] = −1.

Integrating with respect to t from t = 0, we obtain, using y(0) = 1,

(2y(t) − 1)^{1/2} − 1 = −t.

Thus

2y(t) − 1 = (1 − t)²,

y(t) = (1/2)[1 + (1 − t)²].    (2.27)
The reader may verify by direct substitution that (2.27) is a solution of (2.24) satisfying
the initial conditions (2.25).
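Exercise 3 below asks for this verification by hand; as a purely numerical spot check, central differences applied to (2.27) reproduce both sides of (2.24) at a few sample points:

```python
# Numerical spot check that (2.27), y(t) = (1/2)[1 + (1 - t)^2], satisfies
# (2.24), y'' = (1 + (y')^2)/(2y), and the initial condition y(0) = 1.
def y(t):
    return 0.5 * (1 + (1 - t) ** 2)

h = 1e-5
for t in (0.0, 0.5, 2.0):
    yp = (y(t + h) - y(t - h)) / (2 * h)            # central difference for y'
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2   # central difference for y''
    print(t, abs(ypp - (1 + yp**2) / (2 * y(t))))   # residual ≈ 0 at each point

print(y(0.0))   # 1.0, matching y(0) = 1
```

Since y is a quadratic, y″ = 1 exactly, and the residual reflects only finite-difference rounding.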
Exercise
3. Verify that (2.27) is the solution of (2.24) satisfying the initial conditions (2.25).
y″ = −gR²/y²,    y(0) = R,    y′(0) = V.    (2.28)
We let y′ = p(y). Then y″ = p(y)(dp/dy) and (2.28) becomes

p (dp/dy) = −gR²/y².    (2.29)
This equation is of first order in p with variables separable. It can be solved by the
method of Section 1.3, and its solution is
[p(y)]² = V² + 2gR²/y − 2gR.    (2.30)
Exercise
4. Separate variables in (2.29), impose the initial condition, and obtain (2.30).
To solve (2.30) for y as a function of t involves solving the separable equation

dy/dt = ±(V² + 2gR²/y − 2gR)^{1/2}    (2.31)

subject to the initial condition y(0) = R. The choice of sign in (2.31) is determined by the
physics of the problem (the plus sign when the rocket is rising). Unfortunately, the solu-
tion of (2.31) leads to an integral which cannot be evaluated in terms of elementary
functions unless V² = 2gR.
Exercise
5. Solve the initial-value problem

y′ = (2gR²/y)^{1/2},    y(0) = R

(the case V² = 2gR of (2.31), with the plus sign).
In spite of the fact that (2.31) cannot be solved in general, we can obtain some
interesting information about the motion of the rocket. For example, we can deter-
mine whether the rocket will reach a maximum height and then return to earth or
whether it will continue to rise and will escape from the earth’s gravitational field.
2.4 Second-Order Equations Solvable by First-Order Methods 55
The rocket will reach its maximum height when y'(t) = 0, that is, when

[y'(t)]² = p²(y) = V² + 2gR²/y − 2gR = 0.

If V² < 2gR this height is

y = 2gR²/(2gR − V²).
If V² = 2gR the expression for the maximum height is undefined. However, we have in this case

[y'(t)]² = 2gR²/y,
which was solved in Exercise 5. It is therefore clear that if V² = 2gR the rocket "slows down to zero" only as y → ∞. What this means is that the initial velocity V is sufficient for the rocket to overcome the force due to gravity. Thus V = (2gR)^{1/2} is called the escape velocity of the rocket.
If V² > 2gR, then

|y'(t)| = [ε + 2gR²/y]^{1/2},

where ε = V² − 2gR > 0.
With this initial velocity the rocket will certainly overcome the gravitational forces, and since

lim_{y→+∞} (ε + 2gR²/y)^{1/2} = √ε,

the rocket will have a velocity of √ε when it leaves the earth's gravitational field.
If g = 32 ft/sec² and R = 4000 mi, the escape velocity is given by

V = (2gR)^{1/2} = 40/(33)^{1/2} mi/sec ≈ 7 mi/sec.
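The unit conversion in this computation is easy to verify (a quick sketch, working in miles with 1 mi = 5280 ft):

```python
# Escape velocity V = (2 g R)**(1/2) with g = 32 ft/sec^2 and R = 4000 mi,
# computed in miles: g = 32/5280 mi/sec^2.
FT_PER_MI = 5280
g = 32 / FT_PER_MI        # mi/sec^2
R = 4000                  # mi
V = (2 * g * R) ** 0.5    # mi/sec
assert abs(V - 40 / 33 ** 0.5) < 1e-9   # V = 40/sqrt(33), about 7 mi/sec
```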
Exercise
6. In the case V² = 2gR, show that the solution y(t) of Exercise 5 tends to +∞ as t → ∞. (This means that the rocket continues to rise.)
d²θ/dt² = −(g/L) θ  (2.32)

subject to the initial conditions θ(0) = θ₀, θ'(0) = 0. This initial-value problem was derived in Section 2.2, Eq. (2.11), as an approximation of Eq. (2.8).
The equation to be solved is of the form (2.21), which has the independent variable t missing. Using the method described for this case, we let θ' = p(θ) and we think of p as a function of θ. By the chain rule we then have

d²θ/dt² = dp/dt = (dp/dθ)(dθ/dt) = p (dp/dθ).
Thus (2.32) becomes

p (dp/dθ) = −(g/L) θ,

which is a first-order equation in p which has variables separable. The initial condition now becomes p(θ(0)) = p(θ₀) = 0. By the method of separation of variables we obtain

p² = (g/L)(θ₀² − θ²),

and therefore

p = (g/L)^{1/2}(θ₀² − θ²)^{1/2}  (2.34)

or

p = −(g/L)^{1/2}(θ₀² − θ²)^{1/2}.  (2.35)
You may find it helpful to draw a sketch of the two situations. To solve the given problem for θ = θ(t) we must treat these two situations separately. Taking (2.34) first, letting k = (g/L)^{1/2}, p = θ', we have

θ'(t) = k[θ₀² − θ²(t)]^{1/2},  θ(0) = θ₀.
Separating variables and integrating from 0 to t, we obtain

∫₀ᵗ θ'(s) ds/(θ₀² − θ²(s))^{1/2} = kt.

Making the change of variable u = θ(s) in the integral, and using θ(0) = θ₀, we obtain

∫_{θ₀}^{θ(t)} du/(θ₀² − u²)^{1/2} = kt.
The careful reader will now notice that this integral is improper because the integrand becomes infinite as u → θ₀. Thus by the definition of such integrals (if they exist), and by use of integral tables or otherwise, we obtain

∫_{θ₀}^{θ(t)} du/(θ₀² − u²)^{1/2} = lim_{ε→0+} ∫_{θ₀−ε}^{θ(t)} du/(θ₀² − u²)^{1/2}

= lim_{ε→0+} [arc sin (u/θ₀)] evaluated from u = θ₀ − ε to u = θ(t)

= arc sin (θ(t)/θ₀) − lim_{ε→0+} arc sin ((θ₀ − ε)/θ₀)

= arc sin (θ(t)/θ₀) − arc sin 1

= arc sin (θ(t)/θ₀) − π/2.
Thus

arc sin (θ(t)/θ₀) − π/2 = kt,  or  arc sin (θ(t)/θ₀) = π/2 + kt,

and consequently

θ(t) = θ₀ sin(π/2 + kt) = θ₀ cos kt.
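The result θ(t) = θ₀ cos kt can be checked numerically against the linearized pendulum equation. The values of g, L, θ₀ below are illustrative choices, not from the text:

```python
import math
# Check theta(t) = theta0 * cos(k t), k = (g/L)**(1/2), against (2.32):
# theta'' = -(g/L) theta, with theta(0) = theta0, theta'(0) = 0.
g, L, theta0 = 9.8, 2.0, 0.3       # illustrative values
k = math.sqrt(g / L)

def th(t):
    return theta0 * math.cos(k * t)

h = 1e-4
for t in [0.0, 0.5, 1.3]:
    second = (th(t + h) - 2 * th(t) + th(t - h)) / h ** 2   # theta''
    assert abs(second + (g / L) * th(t)) < 1e-4
assert abs(th(0) - theta0) < 1e-12
assert abs((th(h) - th(-h)) / (2 * h)) < 1e-6   # theta'(0) = 0
```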
Exercise
7. Apply the method just used to solve (2.35) in the case p = θ' < 0 and obtain

θ(t) = θ₀ cos kt,

which is the same result as was obtained for the case p > 0.
In the next chapter we shall learn a much simpler way to solve linear
differential equations such as (2.32).
Exercise
10. Derive the result (2.38) by applying the method of Example 4 to the initial-value
problem (2.37).
To solve (2.38) for θ = θ(t), we must again consider the two cases θ'(t) > 0, θ'(t) < 0. In the case θ'(t) > 0, separation of variables now yields

∫_{θ₀}^{θ(t)} du/(cos u − cos θ₀)^{1/2} = (2g/L)^{1/2} t.  (2.39)
In particular, the angular speed of the pendulum at its lowest point θ = 0 is

(2g/L)^{1/2}(1 − cos θ₀)^{1/2} = 2(g/L)^{1/2} sin (θ₀/2).
Exercises

11. Find the angular speed at the lowest point of the linearized pendulum (2.32) and compare the results if θ₀ = π/2, if θ₀ = 2/3, and if θ₀ = 1/10.
*12. Consider the initial-value problem

p (dp/dθ) = −(g/L) θ,  p(θ₀) = 0  (2.40)

which was solved in Example 4. Why is the existence and uniqueness theorem for first-order equations (Theorem 1, Section 1.6) not applicable to this problem? (Note that in Example 4 there are actually two solutions: the one we obtained and the solution p ≡ 0.)
*13. Repeat Exercise 12 for the initial-value problem

p (dp/dθ) = −(g/L) sin θ,  p(θ₀) = 0  (2.41)
y'' + (y² − 1) y' + y = 0
(van der Pol equation), which occurs in the theory of vacuum tube circuits.
a) Show that the change of variable y’=v transforms this to the form
dv/dy = −(y² − 1) − y/v.
b) Sketch the direction field and a few solution curves of the equation obtained in
part a).
19. Given that y'' = −4/y³ and y(2) = 4, y'(2) = 0, find y(4).
20. Solve each of the following initial-value problems:
a) yy'' + (y')² + 1 = 0,  y(1) = 1, y'(1) = 1
b) 2yy'' + (y')² + 1 = 0,  y(1) = 1, y'(1) = 1
21. Find the solution of y'' = [1 + (y')²]^{3/2} such that y(0) = 1, y'(0) = 0.
*22. A small solid sphere of density γ (pounds per cubic foot) falls from rest under the influence of gravity into a large reservoir of liquid whose surface is h feet below the point at which the sphere is released (see Fig. 2.8). The density of the liquid is kγ, where k < 1. The frictional resistance of the liquid to the motion of the sphere is proportional to the velocity of the sphere, with constant of proportionality p. Determine the maximum depth to which the sphere sinks.
Figure 2.8
Assume that the reservoir is sufficiently deep that the sphere does not strike
the bottom, and neglect air resistance. Make the approximation that the sphere
is either entirely in air or entirely in the liquid and that the instant of entry
coincides with the time when the center of mass of the sphere passes through the
free surface of the liquid. Recall that a submerged body is buoyed up by a force equal to the weight of the liquid displaced. If the mass of liquid displaced is w lb, then this force is wg.
*23. A small sphere weighing 0.1 pound is projected vertically upward with an initial
velocity of 1500 ft/sec from a height of 1000 ft above the earth’s surface. It is sub-
sequently acted upon only by gravity and by air friction. The resistance force of
air friction (in pounds) is 10⁻⁷ times the square of the velocity in feet per second.
Assuming that the acceleration of gravity has the constant value 32 ft/sec/sec, draw
a graph showing the altitude of the sphere as a function of time from the instant of
release until the instant of contact with the earth.
24. A spherical mass grows at a rate proportional to its instantaneous surface area.
Assuming that the sphere has an initial radius a and that it falls from rest under the
influence of gravity, show that its acceleration at time t is

(g/4)(1 + 3a⁴/r⁴),

where r is the radius at time t. Thus show that the acceleration is constant if and only if the sphere has zero initial radius.
25. According to Einstein's special theory of relativity, the mass m of a particle varies with the velocity v according to the rule

m = m₀/(1 − v²/c²)^{1/2},

where c is the velocity of light and m₀ is the mass when v = 0 (rest mass). If a particle falls from rest under a constant gravitational field, show that its velocity at time t is v = c tanh (gt/c), and determine the distance fallen in time t.
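The claimed velocity law can be spot-checked numerically. Here the equation of motion is taken in the form d(mv)/dt = mg with m = m₀(1 − v²/c²)^{−1/2}, which is the reading of "constant gravitational field" under which v = c tanh(gt/c) holds (an interpretive assumption on my part); m₀ = 1 in the sketch:

```python
import math
# Spot-check v(t) = c * tanh(g t / c) against d/dt[m v] = m g,
# with m = m0 / sqrt(1 - v^2/c^2) and m0 = 1 (interpretive assumption).
c, g = 3.0e8, 9.8

def v(t):
    return c * math.tanh(g * t / c)

def momentum(t):                  # m(v) * v with m0 = 1
    vt = v(t)
    return vt / math.sqrt(1 - (vt / c) ** 2)

h = 1.0
for t in [0.0, 1.0e7, 5.0e7]:
    lhs = (momentum(t + h) - momentum(t - h)) / (2 * h)
    rhs = g / math.sqrt(1 - (v(t) / c) ** 2)   # m(v) * g
    assert abs(lhs - rhs) < 1e-4 * abs(rhs)
assert v(0) == 0.0
```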
We have seen that equations of the second order arise naturally in many physical examples. These are of the form y'' = g(t, y, y'), where g(t, y, z) is defined in some region D in the three-dimensional (t, y, z) space (for convenience we let y' = z). The concept of a region in three-dimensional space is a natural extension of the two-dimensional situation, with rectangles centered at (t₀, y₀) replaced by rectangular parallelepipeds centered at (t₀, y₀, z₀). Such a parallelepiped is a set of points of the form {(t, y, z) : |t − t₀| < a, |y − y₀| < b, |z − z₀| < c} for some a, b, c > 0.
We suspect from the physical examples discussed earlier that in order
to obtain a unique solution we should prescribe not only an initial position
2.5 Existence and Uniqueness Theorems 61
but also an initial velocity at a time t₀. In terms of a solution φ, this means that we prescribe φ(t₀) and φ'(t₀). Keeping in mind the above differences from the first-order case, we can state the following analog of Theorem 1, Section 1.6, for second-order equations.
Theorem 1. Let g, ∂g/∂y, and ∂g/∂z be continuous in a given region D. Let (t₀, y₀, z₀) be a given point of D. Then there exists an interval I containing t₀ and exactly one solution φ, defined on this interval, of the differential equation y'' = g(t, y, y') which passes through (t₀, y₀, z₀) (that is, the solution φ satisfies the initial conditions φ(t₀) = y₀, φ'(t₀) = z₀).
Example 1. What does Theorem 1 say about the initial-value problem

θ'' = −k²θ,  θ(0) = θ₀,  θ'(0) = 0?  (2.36)

Here g(t, θ, θ') = −k²θ.
This function g is continuous in the entire three-dimensional (t, θ, z = θ') space. The partial derivatives

∂g/∂θ (t, θ, θ') = −k²,  ∂g/∂θ' (t, θ, θ') = 0
are also continuous in the entire (t, θ, θ') space. Thus, by Theorem 1, given any θ₀, the differential equation has one and only one solution θ = φ(t) defined on some interval I (the theorem doesn't say how large I is), satisfying the initial conditions φ(0) = θ₀, φ'(0) = 0. (We note also that if we impose the more general initial condition θ(t₀) = a, θ'(t₀) = b, where t₀, a, b are any real numbers, the result is the same.) We have found φ(t) = θ₀ cos kt as a solution of (2.36) in Example 4, Section 2.4. By Theorem 1, this is the only solution. Note that from the explicit formula for the solution, we can say that it exists for −∞ < t < ∞.
Exercises
1. What does Theorem 1 say about the existence and uniqueness of solutions of the
initial-value problem
seu «fie
(en a
b) Write the solution φ satisfying the initial conditions
and prove that this is the only solution satisfying these conditions.
oh ; oh ; oh 5 oh 0
—=4ty,, —=-3, =5). =
Oy, ae Oy, Oy3 OVs
are also continuous in the whole of five-dimensional space. Thus, by Theorem 2, the initial-value problem (2.43) has a unique solution y = φ(t) existing on some interval I containing t = 0 of unspecified length.
Exercises
8. Determine the region or regions in (t, y, y') space in which Theorem 1 can be applied
to obtain existence and uniqueness of real solutions for each of the following dif-
ferential equations.
gd 2 /\2 ee
a) y= +y+(y) b) y Tee
) ee d) »"=log(y"/)
e) y'' + (sin t) y' + (log t) y = 0   f) y'' + (y')² + y² = 0
g) y'' = y   h) y'' = ty' + 1
i) y'' = 4y'y³   j) y'' + (y² − 1) y' + y = 0
k) y'' = −4/y³   l) yy'' + (y')² + 1 = 0
m) y'' = [1 + (y')²]^{3/2}   n) y'' = [1 + (y')²]^{1/2}
9. Show that the only solution of

y'' + ty' + (1 + 2t²) y = 0

which is tangent to the t-axis at some point (t₀, 0) is the identically zero solution.
*10. Consider the differential equation
a) Is the function
10-5
afd
Sy
(t<0)
a solution on −∞ < t < ∞?
b) Is φ(t) continuous everywhere?
c) Is φ'(t) continuous everywhere?
d) Can one apply Theorem 1 to obtain the existence of a unique solution ψ such that ψ(0) = 1, ψ'(0) = 1? Explain fully.
CHAPTER 3
LINEAR DIFFERENTIAL EQUATIONS

3.1 INTRODUCTION
It certainly is not obvious at this stage that the linearized equation will
be any simpler to handle than the original one. However, experience will
show that linear equations are relatively easy to handle, while nonlinear ones
usually present serious difficulties.
If one linearizes a problem (for the simple pendulum this means replacing
sin θ by θ in the equation), the following question arises naturally: How good
an approximation does the linearized equation actually produce? For the
pendulum, we would like to prove that in some sense the motions of the
linear and nonlinear models are close to each other when |θ| is "small."
We can hope to answer such questions only much later (see Section 8.4 for
a treatment of the nonlinear simple pendulum with damping). However, the
material presented here is an essential first step—before we can ask how
good an approximation the linearized equation produces, we must be able
to solve this linearized equation.
Before beginning the study of the general theory of Eq. (3.1), we recall
that we already know something about this equation. Namely, as an applica-
tion of the fundamental existence and uniqueness theorem for second-
order equations (Theorem 1, Section 2.5; see also Exercise 3, Section 2.5),
we can state the following result.
Theorem 1. Let a₀, a₁, a₂, f be functions continuous on some interval I, and let a₀(t) ≠ 0 for all t in I. Then for each t₀ in I, there exists one and only one solution φ(t) of the equation (3.1) satisfying arbitrary prescribed initial conditions φ(t₀) = y₀, φ'(t₀) = z₀. This solution φ(t) exists on the whole interval I.
The fact that the solution φ(t) of the linear equation (3.1) exists on the entire interval I does not follow from Theorem 1, Section 2.5, but can be proved separately (see, for example, Exercise 1, Section 8.5). In this chapter, we shall assume the validity of Theorem 1 as stated. We may formulate this in another way. For a linear second-order differential equation a solution with a given initial displacement and slope exists and is unique for as long as the coefficients are continuous and the coefficient of the leading term (a₀(t) in (3.1)) is not zero.
Example 1. Consider the differential equation ty'' + (cos t) y' + [1 − 1/(t + 1)] y = 2t. Discuss existence and uniqueness of solutions.

Here a₀(t) = t, a₁(t) = cos t, a₂(t) = 1 − 1/(t + 1), f(t) = 2t are continuous for all t except a₂(t), which is discontinuous at t = −1; also a₀(0) = 0. Thus we must distinguish three cases for the initial time t₀: Case (i): t₀ < −1; Case (ii): −1 < t₀ < 0; Case (iii): t₀ > 0. We do not take t₀ = 0 or t₀ = −1 (why?). In case (i), by Theorem 1, given any t₀ < −1, there exists one and only one solution φ of the given equation satisfying the initial conditions φ(t₀) = y₀, φ'(t₀) = z₀, where y₀, z₀ are arbitrary given real numbers; this solution φ exists on the interval −∞ < t < −1 by the last statement in Theorem 1.
3.1 Introduction 67
Exercises
1. Discuss in a similar way the existence and uniqueness problem for cases (ii) and (iii)
of the equation in Example 1.
2. Discuss the existence and uniqueness problem for real solutions of the equation
(1 + t²) y'' + 2ty' + (log |t|) y = cos t.
3. Do the same for the equation
a₀y'' + a₁y' + a₂y = f(t)

where a₀, a₁, a₂ are constants and f(t) is continuous on −∞ < t < ∞.
Example 2. Consider
Exercises
4. Show that, if solutions φ of Eq. (3.2) are represented as curves in the (t, y) plane, no solution of (3.2) except φ(t) ≡ 0 can be tangent to the t axis at any point of I. [Hint: Study Example 2.]
5. For each of the following differential equations, determine the largest intervals on which a unique solution is certain to exist by application of Theorem 1. In each case, it is assumed that you are given initial conditions of the form φ(t₀) = y₀, φ'(t₀) = z₀ with t₀ arbitrary. Note that the interval to be determined may depend on the choice of t₀.
a) ty'' + y = t²   b) t²(t − 3) y'' + y' = 0
c) y'' + √t y = 0   d) (1 + t²) y'' − y' + ty = cos t
φ(t₀) = z₁,  φ'(t₀) = z₂,  ...,  φ^{(n−1)}(t₀) = z_n.

The solution φ(t) exists on the entire interval I.

As for Theorem 1, the fact that solutions exist on the whole interval I does not follow from Theorem 2, Section 2.5 (see Exercise 2, Section 8.5).
Exercise
3.2 LINEARITY
L(y) = f  (3.5)

where it is understood that all functions are functions of t.

An operator is, roughly speaking, a function applied to functions. In the present case, the operator L is a rule which assigns to each twice differentiable function y on some interval I the function L(y), where L(y)(t) = a₀(t) y''(t) + a₁(t) y'(t) + a₂(t) y(t).
The operator L is a particular example of a class of operators called linear operators: An operator T defined on a collection S of functions is said to be linear if and only if for any two functions y₁ and y₂ in the collection S and for any constants c₁ and c₂ one has

T(c₁y₁ + c₂y₂) = c₁T(y₁) + c₂T(y₂).

It is easy to verify that our operator L defined by (3.4) is linear. To see this, let S be the collection of twice differentiable functions defined on the interval I. Then if y₁ and y₂ are any two functions in S and c₁ and c₂ are any two constants, L(c₁y₁ + c₂y₂) = a₀(c₁y₁ + c₂y₂)'' + a₁(c₁y₁ + c₂y₂)' + a₂(c₁y₁ + c₂y₂) = c₁L(y₁) + c₂L(y₂).
3.2 Linearity 69
Exercises
1. Show that the operator T defined by T(y)(t) = ∫ₐᵗ y(s) ds, for any function y continuous on a ≤ t ≤ b, is a linear operator.
2. Give other examples of linear operators.
3. Show that the operator T defined by T(y) = (y')², for any function y differentiable on some interval I, is not linear.
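For Exercise 3, a small numerical illustration (the sample functions y₁(t) = t, y₂(t) = t² are my own choice) shows how squaring the derivative violates additivity:

```python
# Exercise 3 illustrated: T(y) = (y')**2 is not linear.
# Derivatives are approximated by central differences.
def d(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

y1 = lambda t: t
y2 = lambda t: t * t
t = 1.0
T_sum = d(lambda s: y1(s) + y2(s), t) ** 2       # (y1 + y2)' = 3; squared: 9
T_each = d(y1, t) ** 2 + d(y2, t) ** 2           # 1 + 4 = 5
assert abs(T_sum - 9.0) < 1e-4
assert abs(T_each - 5.0) < 1e-4
```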
We shall need some more terminology before proceeding to the theory
of linear differential equations. If the function f ≢ 0 on I, we say that Eq. (3.5) is nonhomogeneous (with nonhomogeneous term f). With every nonhomogeneous linear differential equation of the form (3.5) we associate the homogeneous (or reduced) linear differential equation L(y) = 0 obtained from (3.5) by replacing f by the zero function.
We now give two basic properties of solutions of linear differential equations; these are immediate consequences of the linearity of the operator L.

i) If φ₁ and φ₂ are any two solutions of the homogeneous linear differential equation L(y) = 0 on some interval I, then for any constants c₁ and c₂ the function c₁φ₁ + c₂φ₂ (called a linear combination of φ₁ and φ₂) is also a solution of L(y) = 0 on I.

To see this we merely compute: L(c₁φ₁ + c₂φ₂) = c₁L(φ₁) + c₂L(φ₂), by the linearity of L. Since φ₁ and φ₂ are solutions of L(y) = 0 on I, L(φ₁) = L(φ₂) = 0 for every t on I, and therefore L(c₁φ₁ + c₂φ₂) = 0. Thus c₁φ₁ + c₂φ₂ is a solution of L(y) = 0.
Exercise
4. Use mathematical induction and the above result to establish the analog of property (i) for m solutions φ₁(t), ..., φ_m(t) of L(y) = 0; that is, show that if φ₁, φ₂, ..., φ_m are m solutions of L(y) = 0 on I and if c₁, c₂, ..., c_m are any constants, then c₁φ₁ + c₂φ₂ + ··· + c_mφ_m is a solution of L(y) = 0 on I.
This result is usually expressed by saying that any linear combination
of solutions of L(y)=0 is again a solution of L(y)=0. It is sometimes called
the principle of superposition of solutions. Our object in the next section
will be to show that the problem of solving the equation L(y)=0 can be
reduced to the problem of finding certain special solutions of L(y)=0 and
obtaining all other solutions as linear combinations of these special so-
lutions.
Another important consequence of the linearity of the operator L is the
following.
ii) If φ and ψ are any two solutions of the nonhomogeneous linear differential equation L(y) = f on some interval I, then φ − ψ is a solution of the corresponding homogeneous equation L(y) = 0.

To see this, we merely compute L(φ − ψ). By the linearity of L we have L(φ − ψ) = L(φ) − L(ψ), for t in I. But φ and ψ are solutions of L(y) = f on I. Therefore L(φ − ψ) = f − f = 0 for t in I, which proves the result.

This result shows that it is only necessary to find one solution of the equation L(y) = f, provided that one knows all solutions of L(y) = 0. This is because every other solution of the nonhomogeneous equation (3.5) differs from the known one by some solution of the homogeneous equation L(y) = 0.
Exercises
5. Given that u is a solution of L(y) = 0 and v is a solution of L(y) = f on some interval I, show that u + v is a solution of L(y) = f on I.
6. Suppose f can be written as the sum of m functions f₁, ..., f_m; that is, f(t) = f₁(t) + f₂(t) + ··· + f_m(t) for t on some interval I. Suppose that u₁ is a solution of the linear equation L(y) = f₁, u₂ is a solution of the linear equation L(y) = f₂, and in general u_i is a solution of the linear equation L(y) = f_i on I for i = 1, ..., m. Show that the function u = u₁ + u₂ + ··· + u_m is a solution of L(y) = f on I. (This result, also called the principle of superposition, enables us to decompose the problem of solving L(y) = f into simpler problems in certain cases.)
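The decomposition described in Exercise 6 can be illustrated numerically. The equation y'' + y = t + e^t and the particular solutions below are illustrative choices of mine, not taken from the text:

```python
import math
# Superposition sketch for L(y) = y'' + y:
# u1 = t solves L(y) = t, u2 = e^t/2 solves L(y) = e^t,
# so u = u1 + u2 solves L(y) = t + e^t.
def L(f, t, h=1e-5):
    # second central difference for y'', plus y
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2 + f(t)

u = lambda t: t + math.exp(t) / 2
for t in [0.0, 0.7, 1.5]:
    assert abs(L(u, t) - (t + math.exp(t))) < 1e-4
```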
Before closing this section we repeat that the only property of the operator L used above is linearity. Therefore our results are much more general than appears to be the case. In particular, if we define the linear differential operator L_n of order n by the relation

L_n(y)(t) = a₀(t) y^{(n)}(t) + a₁(t) y^{(n−1)}(t) + ··· + a_{n−1}(t) y'(t) + a_n(t) y(t),

where y is any function which is n times differentiable on some interval I, and the functions a_j (j = 0, 1, ..., n) are continuous on I, a₀(t) ≠ 0 on I, then all results stated in Section 3.2 hold.
Exercise
7. Formulate and verify the analogs of the linearity properties (i) and (ii) for the equation L_n(y) = f for n = 1, 3, 4, and n an arbitrary positive integer.
In this section we go far beyond the result established above, that any linear combination of solutions of the linear homogeneous differential equation L(y) = 0 is again a solution of L(y) = 0. We will show that every solution of L(y) = 0 is a linear combination of certain special solutions. Then in Section 3.7 we will show how to use the special solutions
3.3 Linear Homogeneous Equations 71
for every t on I. We say further that the m functions are linearly independent on I if they are not linearly dependent on I.

Example 1. Show that the functions sin²t, cos²t, 1 are linearly dependent on any interval.

Since sin²t + cos²t − 1 = 0 for every t, we merely put g₁(t) = sin²t, g₂(t) = cos²t, g₃(t) = 1, b₁ = b₂ = 1, b₃ = −1 in the above definition. This proves the linear dependence of the given functions.
Example 2. Show that the functions e^{r₁t}, e^{r₂t}, where r₁, r₂ are real constants, are linearly independent on any interval I provided that r₁ ≠ r₂.

To see this, we suppose that there exist constants b₁, b₂ such that b₁e^{r₁t} + b₂e^{r₂t} = 0 for all t in I. Multiplying by e^{−r₁t} we obtain b₁ + b₂e^{(r₂−r₁)t} = 0 for all t in I, and differentiating both sides of this equation with respect to t, we obtain b₂(r₂ − r₁) e^{(r₂−r₁)t} = 0 for all t in I. Since r₁ ≠ r₂ and e^{(r₂−r₁)t} is never zero, this implies that b₂ must be zero. But then b₁e^{r₁t} + b₂e^{r₂t} = 0 for all t in I implies b₁e^{r₁t} = 0 for all t in I, and hence b₁ must also be zero. Since b₁ and b₂ are both zero, e^{r₁t} and e^{r₂t} must be linearly independent.
Exercises
1. Establish the linear independence of the following sets of functions on the intervals indicated.
a) sin t, cos t on any interval I.
b) e^{r₁t}, e^{r₂t}, e^{r₃t} on any interval I if r₁, r₂, r₃ are all different.
c) e^{rt}, te^{rt} on any interval I.
d) 1, t, t², t³ on any interval I.
e) t², t|t| on −1 < t < 1 but not on 0 < t < 1.
f) The functions f₁(t), f₂(t) on −1 < t < 1, where

f₁(t) = Σ_{n=0}^{∞} (−1)ⁿ tⁿ,  f₂(t) = Σ_{n=0}^{∞} tⁿ.
2. Prove that the functions f, g are linearly dependent on I if and only if there exists a constant c such that either f(t) = cg(t) or g(t) = cf(t) for every t in I.
3. Decide which of the following sets of functions are linearly dependent and which
are linearly independent on the given interval. Justify your answer in each case.
P₁(t) e^{r₁t} + P₂(t) e^{r₂t} + ··· + P_n(t) e^{r_nt} = 0  (3.6)

for all t in I. Since, by assumption, the constants a_{ij} are not all zero, at least one of the polynomials P_i(t) is not identically zero. It is convenient to assume that P₁(t) ≢ 0; we can always arrange this by a suitable labeling of the numbers r₁, r₂, ..., r_n. Now we divide Eq. (3.6) by e^{r₁t} and differentiate at most h₁ + 1 times, where h₁ is the degree of P₁, until the first term drops out. Note that all terms in (3.6) can be differentiated as often as we wish. Proceeding in the same way with each remaining exponential in turn, we have an equation of the form

R_n(t) e^{(r_n − r_{n−1})t} = 0,

in which the polynomial R_n has the same degree as P_n and does not vanish identically. However, the exponential term in this equation does not vanish, and we have a contradiction. This shows that all the constants a_{ij} must be zero, and therefore that the n given functions are linearly independent on I.
Exercise
4. To which of the sets of functions in Exercises 1 and 3 could you apply Lemma 1 to deduce either linear dependence or linear independence?
for every t on I. Since φ₁, φ₂ are solutions of L(y) = 0 on I, they are differentiable on I and hence from (3.8) we have also

b₁φ₁'(t) + b₂φ₂'(t) = 0

for every t on I. Evaluating both relations at t = t₀ and using the initial conditions φ₁(t₀) = 1, φ₁'(t₀) = 0, φ₂(t₀) = 0, φ₂'(t₀) = 1, we therefore conclude that b₁ = b₂ = 0, which shows that the solutions φ₁, φ₂ cannot be linearly dependent on I and therefore this proves their linear independence on I.
To complete the proof of the theorem, let φ be any solution of L(y) = 0 on I and calculate φ(t₀) = α, φ'(t₀) = β. (That is, we evaluate φ(t) and φ'(t) at t = t₀ and call the values at t₀, α and β, respectively.) If there are to exist constants c₁ and c₂ such that φ(t) = c₁φ₁(t) + c₂φ₂(t) for all t in I, this relation must hold in particular at t₀, and we must have

α = φ(t₀) = c₁φ₁(t₀) + c₂φ₂(t₀) = c₁·1 + c₂·0 = c₁,
β = φ'(t₀) = c₁φ₁'(t₀) + c₂φ₂'(t₀) = c₁·0 + c₂·1 = c₂.

Define the function ψ by the relation ψ(t) = αφ₁(t) + βφ₂(t) for t in I. Clearly (by the linearity property (i), Section 3.2), ψ is a solution of L(y) = 0 on I; moreover,
Exercises
5. Why are the constants c₁, c₂ in the statement of the theorem unique?
6. Carry out the proof of Theorem 1 by using the solutions ψ₁ and ψ₂ of (3.5) on I satisfying the initial conditions ψ₁(t₀) = 2, ψ₁'(t₀) = −1 and ψ₂(t₀) = −1, ψ₂'(t₀) = 1 in place of the solutions φ₁ and φ₂. [Hint: Begin by showing that the solutions ψ₁, ψ₂ of (3.5) are linearly independent on I.]
7. Let ψ₁ and ψ₂ be solutions of L(y) = 0 on I satisfying the initial conditions
Example 3. Find that solution φ of y'' + y = 0 such that φ(0) = 1, φ'(0) = −1, using the fact that cos t and sin t are both solutions.

It is easily shown that cos t and sin t are linearly independent solutions of y'' + y = 0 on any interval I (see Exercise 1a). To find the desired solution we apply Theorem 1, letting φ₁(t) = cos t, φ₂(t) = sin t, and observing that φ₁(0) = 1, φ₁'(0) = 0, φ₂(0) = 0, φ₂'(0) = 1 as in the above proof. By Theorem 1 we know that there exist unique constants c₁, c₂ such that φ(t) = c₁ cos t + c₂ sin t; as we saw in the proof we may determine c₁ and c₂ by imposing the initial conditions. Thus we obtain

φ(0) = 1 = c₁·1 + c₂·0,
φ'(0) = −1 = −c₁·0 + c₂·1,

so that c₁ = 1, c₂ = −1 and φ(t) = cos t − sin t.
Exercise
9. State and prove a theorem analogous to Theorem 1 for the linear third-order differential equation
φ(t) = c₁φ₁(t) + c₂φ₂(t) + ··· + c_nφ_n(t)

for every t in I.
Exercise
Definition. Let f₁, f₂ be any two differentiable functions on some interval I. Then the determinant

W(f₁, f₂) = | f₁   f₂  |
            | f₁'  f₂' |

is called the Wronskian of f₁ and f₂. Its value at any t in I will be denoted by W(f₁, f₂)(t). More generally, if f₁, ..., f_n are n functions which are n − 1 times differentiable on I, then the nth-order determinant

W(f₁, ..., f_n) = | f₁          f₂          ···  f_n          |
                  | f₁'         f₂'         ···  f_n'         |
                  | ···                                       |
                  | f₁^{(n−1)}  f₂^{(n−1)}  ···  f_n^{(n−1)}  |
Exercise
W(cos t, sin t) = | cos t   sin t |
                  | −sin t  cos t | = 1,  −∞ < t < ∞.
Therefore, by Theorem 3, φ₁(t) = cos t, φ₂(t) = sin t are linearly independent solutions of y'' + y = 0 on −∞ < t < ∞. Of course, we already know this result from having applied
the definition of linear independence directly. However, when dealing with solutions of a
linear homogeneous equation L(y)=0, the theorem is often easier to use than the
definition.
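The Wronskian computation above can be reproduced numerically (an editorial sketch; derivatives are approximated by central differences):

```python
import math
# Reproduce W(cos t, sin t) = 1 numerically.
def wronskian(f, g, t, h=1e-6):
    df = (f(t + h) - f(t - h)) / (2 * h)
    dg = (g(t + h) - g(t - h)) / (2 * h)
    return f(t) * dg - df * g(t)

for t in [-2.0, 0.0, 1.0, 10.0]:
    assert abs(wronskian(math.cos, math.sin, t) - 1.0) < 1e-8
```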
Warning. Do not apply Theorem 3 when the functions being tested for linear independence are not known to be solutions of a linear homogeneous equation L(y) = 0. To see why, consider the functions f₁(t) = t², f₂(t) = t|t| and take for I the interval −1 < t < 1. Then as we saw in Exercise 11(c), the functions f₁, f₂ are linearly independent on I and yet W(f₁, f₂)(t) = 0 for every t on −1 < t < 1.
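The Warning's example can be verified directly: since f₂'(t) = 2|t|, the Wronskian vanishes identically even though f₁, f₂ are linearly independent on −1 < t < 1. A quick sketch:

```python
# The Warning's pair: f1(t) = t**2, f2(t) = t|t|.  Using f2'(t) = 2|t|,
# the Wronskian is t^2 * 2|t| - 2t * t|t| = 0 for every t.
def w(t):
    f1, d1 = t * t, 2 * t
    f2, d2 = t * abs(t), 2 * abs(t)
    return f1 * d2 - d1 * f2

for t in [-0.9, -0.3, 0.0, 0.5, 0.9]:
    assert abs(w(t)) < 1e-15
```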
Proof of Theorem 3. The proof consists of two parts. Suppose first that the solutions φ₁(t), φ₂(t) of L(y) = 0 are such that W(φ₁, φ₂)(t) ≠ 0 for all t on I and yet φ₁, φ₂ are linearly dependent on I. Then by the definition of linear dependence there exist constants b₁, b₂ not both zero such that

b₁φ₁(t) + b₂φ₂(t) = 0  (3.10)

and also

b₁φ₁'(t) + b₂φ₂'(t) = 0  (3.11)

for every t on I. For each fixed t on I, Eqs. (3.10) and (3.11) are linear homogeneous algebraic equations satisfied by b₁ and b₂, and the determinant of their coefficients is precisely W(φ₁, φ₂)(t). Since, by assumption, W(φ₁, φ₂)(t) ≠ 0 at any t on I, it follows from the theory of linear homogeneous systems of algebraic equations (see Appendix 1) that b₁ = b₂ = 0, which contradicts the assumed linear dependence of the solutions φ₁, φ₂ on I. This shows that if the Wronskian of two solutions of L(y) = 0 is different from zero on I, then these solutions are linearly independent on I.
To prove the second part of the theorem, assume that the solutions φ₁, φ₂ of L(y) = 0 are linearly independent on I and assume that there is at least one t̄ on I such that W(φ₁, φ₂)(t̄) = 0. (If there is no such t̄ there is nothing to prove!) Now look again at the algebraic system (3.10), (3.11) for t = t̄. It follows, again from the theory of linear homogeneous systems of algebraic equations (see Appendix 1) that, because W(φ₁, φ₂)(t̄) = 0, the system of algebraic equations

b₁φ₁(t̄) + b₂φ₂(t̄) = 0,  b₁φ₁'(t̄) + b₂φ₂'(t̄) = 0  (3.12)

has at least one solution b₁, b₂, where b₁ and b₂ are not both zero. To complete the proof define the function ψ(t) = b₁φ₁(t) + b₂φ₂(t), where b₁, b₂ are taken as any solution of (3.12). First observe that ψ is a solution of L(y) = 0 (why?). Because of (3.12) the solution ψ satisfies the initial conditions ψ(t̄) = 0, ψ'(t̄) = 0. Therefore, by Theorem 1 and Example 2, Section 3.1, ψ(t) = 0 for every t on I. This means that we have found constants b₁, b₂ not both zero such that b₁φ₁(t) + b₂φ₂(t) = 0 for every t on I. This contradicts the assumed linear independence of the solutions φ₁, φ₂ on I. Therefore the assumption W(φ₁, φ₂)(t̄) = 0 is false; that is, no such t̄ exists and W(φ₁, φ₂)(t) ≠ 0 for every t in I. This completes the proof of Theorem 3.
Exercises
12. Show that e^{2t}, e^{−2t} are linearly independent solutions of y'' − 4y = 0 on −∞ < t < ∞.
13. Show that e^{−t/2} cos (√3/2)t, e^{−t/2} sin (√3/2)t are linearly independent solutions of y'' + y' + y = 0 on −∞ < t < ∞.
14. Show that e^{−t}, te^{−t} are linearly independent solutions of y'' + 2y' + y = 0 on −∞ < t < ∞.
15. Show that sin t², cos t² are linearly independent solutions of ty'' − y' + 4t³y = 0 on 0 < t < ∞ or −∞ < t < 0. Show that W(sin t², cos t²)(0) = 0. Why does this fact not contradict Theorem 3?
16. State the analog of Theorem 3 for the nth-order equation L_n(y) = a₀(t) y^{(n)} + a₁(t) y^{(n−1)} + ··· + a_{n−1}(t) y' + a_n(t) y = 0.
We can now establish a result which says that for any two solutions φ₁ and φ₂ of a linear homogeneous second-order equation with continuous coefficients, the Wronskian is either identically zero or never equal to zero.

Theorem 4. Let the hypothesis of Theorem 3 be satisfied on some interval I. Let φ₁, φ₂ be two solutions of L(y) = 0 on I. Then either their Wronskian W(φ₁, φ₂)(t) is zero for every t in I or it is different from zero for every t in I.
The proof of Theorem 4 is outlined in the following three exercises.
Exercises
17. Let φ₁, φ₂ be two solutions on some interval I of L(y) = a₀(t) y'' + a₁(t) y' + a₂(t) y = 0, where a₀, a₁, a₂ are continuous on I and a₀(t) ≠ 0 on I. Show that the Wronskian W(φ₁, φ₂)(t) satisfies the first-order linear differential equation

W' = −(a₁(t)/a₀(t)) W.  (*)

[Hint: W(φ₁, φ₂)(t) = φ₁(t)φ₂'(t) − φ₁'(t)φ₂(t), so that W'(φ₁, φ₂)(t) = φ₁φ₂'' − φ₁''φ₂. Now use the fact that φ₁, φ₂ are solutions of L(y) = 0 on I to replace φ₁'', φ₂'' by terms involving φ₁, φ₁', φ₂, φ₂'. Collect terms to obtain (*).]
18. By solving (*) in Exercise 17, derive Abel's formula

W(φ₁, φ₂)(t) = W(φ₁, φ₂)(t₀) exp[−∫_{t₀}^{t} (a₁(s)/a₀(s)) ds].
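Abel's formula can be tested numerically on the equation y'' + y' + y = 0 of Exercise 13, where a₀ = a₁ = 1 and the formula predicts W(t) = W(0)e^{−t}. A sketch:

```python
import math
# Abel's formula check for y'' + y' + y = 0: W(t) = W(0) * exp(-t).
# Solutions from Exercise 13, with s = sqrt(3)/2.
s = math.sqrt(3) / 2
p1 = lambda t: math.exp(-t / 2) * math.cos(s * t)
p2 = lambda t: math.exp(-t / 2) * math.sin(s * t)

def W(t, h=1e-6):
    d1 = (p1(t + h) - p1(t - h)) / (2 * h)
    d2 = (p2(t + h) - p2(t - h)) / (2 * h)
    return p1(t) * d2 - d1 * p2(t)

for t in [0.0, 1.0, 3.0]:
    assert abs(W(t) - W(0) * math.exp(-t)) < 1e-8
```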
22. Theorem 4, combined with Theorem 3, provides a convenient method for testing
solutions of linear differential equations for linear independence on some interval.
For, according to these results, it is enough to evaluate the Wronskian at some
conveniently chosen point. Thus, for example, show that
ψ(t) = c₁φ₁(t) + c₂φ₂(t),

where c₁, c₂ are the numbers given by (3.14). We observe that ψ(t) (as well as φ(t), φ₁(t), φ₂(t)) is a solution of L(y) = 0 on I. To complete the proof we need only show that ψ(t) = φ(t) for every t in I. But using (3.14) we see that

ψ(t₀) = c₁φ₁(t₀) + c₂φ₂(t₀) = φ(t₀).
for t on I of the given solutions φ₁, φ₂, ..., φ_n (i.e., there exist unique constants c₁, c₂, ..., c_n such that

φ(t) = c₁φ₁(t) + c₂φ₂(t) + ··· + c_nφ_n(t)).
Exercise
Remark (for students acquainted with linear algebra). The theory developed in Sections 3.2 and 3.3 shows that the solutions of a linear homogeneous differential equation L(y) = 0 with continuous coefficients on some interval I and with nonvanishing leading coefficient on I form a vector space V over the real or complex numbers (see property (i), Section 3.2). Theorem 1 shows that the dimension of V is 2 if L is a linear differential operator of order 2, by exhibiting a basis for V consisting of the special linearly independent solutions φ₁ and φ₂ constructed in the theorem. Theorem 5 shows that any two linearly independent solutions of L(y) = 0 also form a basis for V, provided the order of L is 2. We can derive this more simply using knowledge of linear algebra. Once we know, by Theorem 1, that V has dimension 2, it follows immediately that any two linearly independent vectors in V (that is, solutions) span V.
Theorem 1 for a homogeneous linear differential equation of order n shows
82 Linear Differential Equations
that for such an equation the vector space of solutions has dimension n.
For a more general discussion of this topic we refer the reader to Chapter 4.
Example 5. Use the functions e^{2t} and e^{−2t} to find the general solution of the equation y'' − 4y = 0; then find that solution φ for which φ(0) = 1, φ'(0) = 0.
The functions e^{2t}, e^{−2t} are solutions of y'' − 4y = 0 for all t (why?). They are linearly independent solutions of L(y) = y'' − 4y = 0 on −∞ < t < ∞ (why?). Therefore, by Theorem 5, every solution φ of y'' − 4y = 0 on −∞ < t < ∞ can be written in the form

φ(t) = c1 e^{2t} + c2 e^{−2t}

for some unique choice of the constants c1, c2. This is the general solution of y'' − 4y = 0 on −∞ < t < ∞. To find that solution φ of y'' − 4y = 0 on −∞ < t < ∞ for which φ(0) = 1, φ'(0) = 0, we see from φ(t) = c1 e^{2t} + c2 e^{−2t} that c1 and c2 must satisfy the equations

c1 + c2 = 1,   2c1 − 2c2 = 0,

or c1 = c2 = 1/2. Thus φ(t) = (e^{2t} + e^{−2t})/2 = cosh 2t. The reader should note that Theorem 1 alone does not supply enough information to solve this problem, because the solutions e^{2t} and e^{−2t} do not satisfy the right initial conditions. However, we could solve the problem by using Exercises 7 and 8.
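A finite-difference check of Example 5 (a sketch; the tolerance is loose to absorb discretization error):

```python
import math

def phi(t):
    # candidate solution of y'' - 4y = 0 with phi(0) = 1, phi'(0) = 0
    return 0.5 * (math.exp(2*t) + math.exp(-2*t))   # = cosh(2t)

def second_derivative(f, t, h=1e-4):
    # centered finite-difference approximation of f''(t)
    return (f(t + h) - 2*f(t) + f(t - h)) / h**2

# residual of the differential equation should be ~0 at several points
for t in [-1.0, 0.0, 0.5, 2.0]:
    assert abs(second_derivative(phi, t) - 4*phi(t)) < 1e-3

# initial condition phi(0) = 1
assert abs(phi(0.0) - 1.0) < 1e-12
```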
Exercises
24. In each of the following, find the solution φ of the given differential equations that satisfies the specified initial conditions. Also find the general solution in each case.
a) y'' − 4y = 0, φ(0) = 2, φ'(0) = −1 (see Exercise 12)
b) y'' + y' + y = 0, φ(0) = 1, φ'(0) = 3 (see Exercise 13)
c) y'' + y' + y = 0, φ(0) = 0, φ'(0) = 0
d) y'' + 2y' + y = 0, φ(1) = −1, φ'(1) = 1 (see Exercise 14)
e) y'' − ty = 0, φ(0) = 0, φ'(0) = 2 (see Exercise 22)
f) y''' − y'' + y' − y = 0, φ(0) = 2, φ'(0) = −1, φ''(0) = 1 (see Exercise 21)
25. Show that for any constant ω > 0, sin ωt and cos ωt are linearly independent solutions on −∞ < t < ∞ of y'' + ω²y = 0. What is the general solution? Find all solutions which pass through the point (π/4ω, 5). Which one of these has slope ω at the point (π/4ω, 5)? Which has slope −ω at the point (π/4ω, 5)? Letting φ1 and φ2 represent the solutions of slope ω and −ω respectively at the point (π/4ω, 5), decide whether φ1 and φ2 are linearly independent on −∞ < t < ∞.
*26. In each of the following, let φ1(t) and φ2(t) be solutions of the differential equation

L(y) = y'' + p(t) y' + q(t) y = 0

on some interval I.
a) If φ1(t0) = φ2(t0) = 0 for some t0 in I, show that φ1 and φ2 cannot form a fundamental set of solutions on I.
3.4 Linear Homogeneous Differential Equations 83
In this section we shall learn, with the aid of the theory just developed,
how to solve the linear second-order equation L(y)=0 in certain special
cases. We shall be guided by the fact that in order to find all solutions of
L(y)=0, we merely need to find two linearly independent solutions and
then apply Theorem 5, Section 3.3. Subsequently, we shall study linear
homogeneous equations of higher order. We shall begin with the simplest
case of constant coefficients and then consider in the following sections
several more complicated cases of variable coefficients. We note that in the
general case of continuous coefficients a solution of L(y)=0 exists but
cannot necessarily be found in terms of elementary functions. Fortunately,
large numbers of interesting physical problems lead to mathematical models
which, when simplified sufficiently, fall into categories which we can handle
easily.
Important examples of models leading to an equation of the form

L(y) = y'' + py' + qy = 0,   (3.15)

where p and q are real nonnegative constants, are the mass-spring system and the pendulum (see Eq. (2.4), Section 2.1 and also Eqs. (2.10), (2.11), Section 2.2, with sin θ replaced by θ). Another important model is a linear electrical circuit consisting of a capacitance C, a resistance R, and an inductance L connected in series. It can be shown that the potential difference (voltage) v(t) across the capacitance can be reasonably described by an equation of the form

a0 v'' + a1 v' + a2 v = 0,

where a0, a1, a2 are real constants, a0 ≠ 0. Thus we may as well divide through by a0 and assume that the equation has the form (3.15), where p and q are real constants, not necessarily positive. Our task is to
find two linearly independent solutions of (3.15). Recall that for the first-order equation y' + ry = 0, where r is a constant, e^{−rt} is a solution. In Section 1.1, we found this solution by separation of variables. However, we could also find it as follows: if for some constant z, e^{zt} is to be a solution of y' + ry = 0, then we must have (e^{zt})' + re^{zt} = 0, or (z + r)e^{zt} = 0. Since e^{zt} ≠ 0, we see that e^{zt} can be a solution of y' + ry = 0 only if z = −r, which gives e^{−rt} as a candidate for a solution. Direct verification shows that it is.
Let us try to find a solution of (3.15) of the form e^{zt} on −∞ < t < ∞. Then we must have L(e^{zt}) = 0. But L(e^{zt}) = (e^{zt})'' + p(e^{zt})' + qe^{zt} = (z² + pz + q) e^{zt}. Therefore e^{zt} can be a solution of L(y) = 0 on −∞ < t < ∞ only if

(z² + pz + q) e^{zt} = 0

or, since e^{zt} ≠ 0, only if z is a root of the quadratic equation

z² + pz + q = 0.   (3.16)

Equation (3.16) is called the characteristic equation or auxiliary equation associated with (3.15), and z² + pz + q is called the characteristic polynomial associated with (3.15). The quadratic equation (3.16) has the roots

z1 = (−p + (p² − 4q)^{1/2})/2,   z2 = (−p − (p² − 4q)^{1/2})/2.
Exercise
1. Verify that e^{z1 t} and e^{z2 t} are solutions of (3.15).
Since z1 ≠ z2, W(e^{z1 t}, e^{z2 t}) ≠ 0, and Theorem 3, Section 3.3, gives the desired
linear independence. Therefore, in case (i), by Theorem 5, Section 3.3,
every solution φ of Eq. (3.15) has the form
Exercises
2. Find the general solution of each of the following equations, and then the
solution φ satisfying the given initial conditions:
a) y’—y=0, $(0)=0, ¢’(0)=1.
b) y’—5y’+6yv=0, $(0)=0, ¢’(0)=1.
c) y”—6y"+ Lly’—6y=0, ¢(0)=¢'(0)=0, 6” (0)=1.
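The root-finding step behind these exercises can be sketched numerically (using cmath so that complex roots of (3.16) are handled too; y'' − 5y' + 6y = 0 is taken as a concrete instance):

```python
import cmath

def char_roots(p, q):
    # roots of the characteristic equation z^2 + p z + q = 0 (Eq. 3.16)
    disc = cmath.sqrt(p*p - 4*q)
    return (-p + disc) / 2, (-p - disc) / 2

def residual(z, p, q, t):
    # L(e^{zt}) = (z^2 + p z + q) e^{zt}; zero iff z is a characteristic root
    return (z*z + p*z + q) * cmath.exp(z*t)

p, q = -5, 6          # y'' - 5y' + 6y = 0, roots 2 and 3
z1, z2 = char_roots(p, q)
assert {round(z1.real), round(z2.real)} == {2, 3}
for z in (z1, z2):
    assert abs(residual(z, p, q, 1.0)) < 1e-12
```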
*3. In Eq. (3.15) for damped oscillations (that is, assume p, q > 0), find values of p and q such that the roots of the characteristic equation are real and distinct. For such values, discuss the asymptotic behavior (behavior as t → ∞) of the solutions by computing lim_{t→∞} φ(t), where φ(t) is any solution of (3.15).
The careful reader may have noticed that in case (i), the roots z1, z2 will be real and distinct if p² > 4q, and will be complex conjugate (hence distinct) if p² < 4q. For example, the differential equation
y'' + y' + y = 0

has characteristic equation z² + z + 1 = 0, whose roots are

z1 = (−1 + √3 i)/2,   z2 = (−1 − √3 i)/2.

Therefore

exp((−1 + √3 i)t/2)   and   exp((−1 − √3 i)t/2)
should be, and in fact are, solutions. However, these functions are complex-
valued functions of the real variable t, and up to this point all functions con-
sidered have been real. If you are unfamiliar with complex-valued functions
of a real variable, read Appendix 3 before proceeding.
In view of the theory of complex-valued functions of a real variable
discussed in Appendix 3, every definition and theorem given for real so-
lutions of the real equation
a0(t) u''(t) + a1(t) u'(t) + a2(t) u(t) + i[a0(t) v''(t) + a1(t) v'(t) + a2(t) v(t)] = 0.

[Note: This also shows that L(φ) = L(u) + iL(v); this is true in general if L is a linear differential operator with real coefficients.] Since the last relation holds for every t on I and since a complex number is zero if and only if both its real and imaginary parts are zero, we have, for all t in I:

L(u) = a0(t) u''(t) + a1(t) u'(t) + a2(t) u(t) = 0

and

L(v) = a0(t) v''(t) + a1(t) v'(t) + a2(t) v(t) = 0,

which shows that u = Re φ and v = Im φ are both solutions of L(y) = 0 on I and completes the proof.
Exercise
6. Let φ be a solution on some interval I of the differential equation
φ1(t) = exp((−1 + √3 i)t/2),   φ2(t) = exp((−1 − √3 i)t/2)

of the differential equation y'' + y' + y = 0 and Theorem 1 to find the general solution in real form on −∞ < t < ∞. They are linearly independent on −∞ < t < ∞, since, by
Theorem 3, Section 3.3, interpreted for complex-valued solutions,
W(φ1, φ2)(t) = φ1(t)φ2'(t) − φ1'(t)φ2(t) = (z2 − z1) exp((z1 + z2)t) = −√3 i e^{−t} ≠ 0.
are also solutions of y'' + y' + y = 0 for −∞ < t < ∞. The same statement applies to
φ(t) = a1 exp(−t/2) cos(√3 t/2) + a2 exp(−t/2) sin(√3 t/2)
for some unique choice of the (possibly complex) constants a1, a2. Starting with the complex form of the solution φ, we may also arrive at the "real form" as follows. Using Euler's formula (see Appendix 3) and collecting terms, we have

φ(t) = c1 φ1(t) + c2 φ2(t)
     = c1 exp(−t/2)(cos(√3 t/2) + i sin(√3 t/2)) + c2 exp(−t/2)(cos(√3 t/2) − i sin(√3 t/2))
     = (c1 + c2) exp(−t/2) cos(√3 t/2) + i(c1 − c2) exp(−t/2) sin(√3 t/2).

If we now define a1 = c1 + c2, a2 = i(c1 − c2), we obtain the desired form. It is clear from this that the solution φ(t) of the equation y'' + y' + y = 0 will be real if and only if c2 = c̄1 (the complex conjugate of c1). In this case, of course, a1 and a2 are both real.
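The equivalence of the complex and real forms is easy to check numerically; a sketch, assuming the equation y'' + y' + y = 0 with roots (−1 ± √3 i)/2 and the relations a1 = c1 + c2, a2 = i(c1 − c2):

```python
import cmath, math

def phi_complex(t, c1, c2):
    # general solution built from the complex roots (-1 ± √3 i)/2
    z1 = (-1 + cmath.sqrt(-3)) / 2
    z2 = (-1 - cmath.sqrt(-3)) / 2
    return c1 * cmath.exp(z1*t) + c2 * cmath.exp(z2*t)

def phi_real(t, a1, a2):
    # equivalent real form
    return math.exp(-t/2) * (a1*math.cos(math.sqrt(3)*t/2) + a2*math.sin(math.sqrt(3)*t/2))

c1 = 0.3 + 0.4j
c2 = c1.conjugate()            # c2 = conj(c1) makes the solution real
a1 = (c1 + c2).real
a2 = (1j*(c1 - c2)).real
for t in [-1.0, 0.0, 0.7, 2.5]:
    z = phi_complex(t, c1, c2)
    assert abs(z.imag) < 1e-12                      # solution is real
    assert abs(z.real - phi_real(t, a1, a2)) < 1e-12
```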
z² + pz + q = 0   (3.16)

and c1, c2 are constants. If p² > 4q, z1 and z2 are real and distinct. If p² < 4q, the roots z1, z2 are complex conjugates. In this case, if z1 = α + iβ (α, β real), the solution φ may be expressed in the form

φ(t) = e^{αt}(a1 cos βt + a2 sin βt)   (3.18)

where a1, a2 are constants. If φ is real, a1 and a2 are real.
Proof. We have already proved all of Theorem 2 except for Eq. (3.18). To prove (3.18), we proceed exactly as in Exercise 5 above; namely, we know from Theorem 1 that e^{αt} cos βt, e^{αt} sin βt are solutions of y'' + py' + qy = 0, where α + iβ is a root of z² + pz + q = 0. Since these solutions are linearly independent on −∞ < t < ∞, Eq. (3.18) is a direct consequence of Theorem 5, Section 3.3.
Exercises
7. Show that e^{αt} cos βt, e^{αt} sin βt are linearly independent solutions on −∞ < t < ∞ of (3.15) when p² < 4q.
8. Proceeding as in Example 2, show that a1, a2 in (3.18) are given in terms of c1 and c2 by the formulas a1 = c1 + c2, a2 = i(c1 − c2), where c1, c2 are the constants in (3.17).
9. Find the solution φ satisfying the initial conditions φ(0) = φ'(0) = 1 of each of the following differential equations:
a) y'' + y = 0   b) y'' − 4y' + 13y = 0
c) y'' + 4y = 0   d) y'' + 2y' + 2y = 0
*10. In Eq. (3.15) with p, q nonnegative, find conditions on the constants which lead to complex roots of the characteristic equation, and investigate the behavior of the solutions for various choices of these constants as t → +∞.
ψ''(t) + p ψ'(t) + q ψ(t) = 0,

or equivalently, using (3.19), if and only if
(∂/∂z) L(e^{zt}) = (∂/∂z)((z² + pz + q) e^{zt}) = t e^{zt}(z² + pz + q) + e^{zt}(2z + p) = 0.

Note that 2z + p is the derivative of z² + pz + q, and both of these vanish at the double root z = −p/2. (This is a general result about multiple roots; see Appendix 2.) As you may verify,

L(t e^{−pt/2}) = 0,

so that t e^{−pt/2} is a second solution corresponding to the double root z = −p/2.
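A quick numerical check of the double-root construction (a sketch using the concrete instance y'' + 2y' + y = 0, whose characteristic equation has the double root z = −1, and a centered finite-difference residual):

```python
import math

def u(t):
    # t e^{-t}: the second solution produced by the double root z = -1
    return t * math.exp(-t)

def residual(f, t, h=1e-4):
    # finite-difference residual of y'' + 2y' + y
    d2 = (f(t + h) - 2*f(t) + f(t - h)) / h**2
    d1 = (f(t + h) - f(t - h)) / (2*h)
    return d2 + 2*d1 + f(t)

for t in [0.0, 0.5, 1.0, 3.0]:
    assert abs(residual(u, t)) < 1e-6
```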
Exercises
11. Find the general solution of each of the following equations. If the equation is real, express the solution in real form. Note that Theorem 3 is true if p and q are complex, and thus equations with complex coefficients can be solved.
a) y'' + 9y = 0   b) y'' − 5y' + 6y = 0
c) y'' + 10y' + 25y = 0   d) y'' + 2iy' + y = 0
e) 4y'' − y = 0   f) y'' + 5y' + 10y = 0
g) εy'' + 2y' + y = 0, 0 < ε < 1   h) 4y'' + 4y' + y = 0
*12. In Eq. (3.15) with p² = 4q, investigate the behavior of the solutions as t → ∞ for various values of the constants.
*13. Recall the definition: A function f is said to be bounded on some interval I if and only if there exists a constant M > 0 such that |f(t)| ≤ M for all t on I. For example, sin t, cos t are bounded on any interval, 1/t is bounded on [1, 2] but not on (0, ∞), e^{−t} is bounded on [−5, ∞) but not on (−∞, −5].
a) Determine which differential equations in Exercise 11 have all their solutions bounded on [0, ∞).
b) Repeat part (a) for the interval (−∞, ∞).
*14. Show that the solutions of the differential equation y'' + py' + qy = 0, where p and q are positive constants, are oscillations with amplitudes which decrease exponentially when p² < 4q (light damping), and that they decrease exponentially without oscillating if p² > 4q (overdamping). How do they behave if p² = 4q (critical damping)?
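The three damping regimes of Exercise 14 depend only on the sign of the discriminant p² − 4q; a minimal classification sketch:

```python
def damping(p, q):
    # classify y'' + p y' + q y = 0 (p, q > 0) by the discriminant p^2 - 4q
    disc = p*p - 4*q
    if disc < 0:
        return "light"      # decaying oscillations
    if disc == 0:
        return "critical"
    return "over"           # exponential decay, no oscillation

assert damping(1, 1) == "light"
assert damping(2, 1) == "critical"
assert damping(3, 1) == "over"
```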
Figure 3.1
passes one and only one semiorbit. (This is not hard to prove using the
uniqueness property, Theorem 1, Section 3.1.) We note also that the above
example, though not too difficult to handle, would be even more transparent
if we observed that the solution
φ(t) = c1 cos t + c2 sin t,

with c1, c2 not both zero, could be expressed in still another form, namely,

φ(t) = A sin(t + α),

where A = (c1² + c2²)^{1/2} is called the amplitude, and α = arcsin(c1/(c1² + c2²)^{1/2}) is called the phase shift. It is now obvious that φ'(t) = A cos(t + α), and therefore the curve y1 = φ(t), y2 = φ'(t), 0 ≤ t < ∞, in the (y1, y2) plane is a circle of radius A centered at the origin.
15. Establish the above formulas for A, α. [Hint: Assuming c1, c2 are not both zero,

c1 cos t + c2 sin t = (c1² + c2²)^{1/2} ( (c1/(c1² + c2²)^{1/2}) cos t + (c2/(c1² + c2²)^{1/2}) sin t ).]
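The amplitude-phase conversion can be checked numerically; a sketch (using atan2 rather than arcsin so that all sign combinations of c1, c2 are handled; that substitution is a robustness tweak, not the text's formula):

```python
import math

def amplitude_phase(c1, c2):
    # rewrite c1 cos t + c2 sin t as A sin(t + a)
    A = math.hypot(c1, c2)
    a = math.atan2(c1, c2)   # sin a = c1/A, cos a = c2/A
    return A, a

c1, c2 = 3.0, 4.0
A, a = amplitude_phase(c1, c2)
for t in [0.0, 1.0, 2.0, 5.0]:
    assert abs(A*math.sin(t + a) - (c1*math.cos(t) + c2*math.sin(t))) < 1e-12
assert abs(A - 5.0) < 1e-12
```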
3.5 Linear Homogeneous Equations of Arbitrary Order
16. a) Write the general solution of the equation y'' + 2y = 0 in the "amplitude-phase shift form."
b) Determine the amplitude, period, and phase shift of that solution φ of y'' + 9y = 0 which satisfies φ(0) = 1, φ'(0) = 2.
c) Sketch and identify several typical positive semiorbits (that is, let 0 ≤ t < ∞) of the equation y'' + 9y = 0 in the phase plane. What happens if we let t range on the interval −∞ < t < 0 (negative semiorbit)? Indicate the direction of the motion along each curve as t increases.
17. Sketch a few typical positive semiorbits in the phase plane for each of the following differential equations. Consider also the negative semiorbits. Indicate the direction of the motion along each curve as t increases.
a) y'' + 2y' + 2y = 0
b) y'' − y = 0
18. Suppose we had a pendulum for which a crude mathematical model would give
rise either to the equation in Exercise 16a) or 17a) above. Can you give a physical
interpretation of the semiorbits in the phase plane in each case?
19. Consider two solutions φ(t) = c1 cos t + c2 sin t and ψ(t) = d1 cos t + d2 sin t of the equation y'' + y = 0, where c1² + c2² = d1² + d2². Show that these solutions both give rise to the same positive semiorbit in the phase plane, even though the solution φ need not be the same as the solution ψ.
Exercise 19 shows that, although only one orbit passes through each
point of the phase plane, each orbit corresponds to many solutions with
different phase shifts.
We can easily generalize the results of Section 3.4 for second-order linear
differential equations with constant coefficients to equations of arbitrary
order. Consider the linear homogeneous equation of order n with constant coefficients a1, a2, ..., an, and look for a solution of the form e^{zt} as before. Note that Eq. (3.20) reduces to (3.15) when n = 2, with a1 = p, a2 = q. Since L_n(e^{zt}) = p_n(z) e^{zt}, where

p_n(z) = z^n + a1 z^{n−1} + ··· + a_{n−1} z + a_n = 0,
and suppose the root z_i has multiplicity m_i, i = 1, ..., s (m1 + m2 + ··· + m_s = n). Then the n functions

e^{z1 t}, t e^{z1 t}, ..., t^{m1−1} e^{z1 t}; ...; e^{z_s t}, t e^{z_s t}, ..., t^{m_s−1} e^{z_s t}
Hence r⁴ = 16 and θ = π/4 + (π/2)n (n = 0, ±1, ±2, ...), and the distinct roots are z1 = 2 exp[i(π/4)] = √2 (1 + i), z2 = 2 exp[3i(π/4)] = √2 (−1 + i), z3 = 2 exp[−i(π/4)] = √2 (1 − i), z4 = 2 exp[−3i(π/4)] = √2 (−1 − i), corresponding to n = 0, n = 1, n = −1, n = −2, respectively. It is clear that the choices n = ±2, ±3, ... lead us back to one of the roots z1, z2, z3, z4 already listed. Since n = 4 and since the characteristic equation has four distinct roots, every solution φ of the equation y^{(4)} + 16y = 0 has by Theorem 1 (here n = 4, m1 = m2 = m3 = m4 = 1) the form

φ(t) = c1 e^{z1 t} + c2 e^{z2 t} + c3 e^{z3 t} + c4 e^{z4 t}

for some unique choice of the constants c1, c2, c3, c4. This may be written in the real form

φ(t) = exp[√2 t](a1 cos √2 t + a2 sin √2 t) + exp[−√2 t](a3 cos √2 t + a4 sin √2 t).
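The four roots used in this example can be verified directly; a short sketch:

```python
import cmath, math

# the four distinct roots of z^4 + 16 = 0, taken from 2 exp[i(pi/4 + n pi/2)]
roots = [2*cmath.exp(1j*(math.pi/4 + n*math.pi/2)) for n in (0, 1, -1, -2)]

for z in roots:
    assert abs(z**4 + 16) < 1e-10

# they match sqrt(2)(±1 ± i)
s = math.sqrt(2)
expected = {complex(s, s), complex(-s, s), complex(s, -s), complex(-s, -s)}
assert all(min(abs(z - w) for w in expected) < 1e-12 for z in roots)
```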
Exercises
The methods of Sections 3.4 and 3.5 do not apply to linear equations
with variable coefficients. Thus even though our theory tells us that there
are two linearly independent solutions of a second-order linear homogeneous
equation, it may not be possible to find them. Sometimes it is possible to
guess or by some other means find one solution ¢, of the linear equation
the same trick which led to Theorem 3, Section 3.4 (see Eq. 3.19) in the case
of constant coefficients with equal roots of the auxiliary equation will enable
us to find a second, linearly independent solution of L(y) = 0 on I by reducing
the problem to one of solving a first-order equation.
Assuming that we know a solution φ1, we let φ2(t) = u(t) φ1(t) and try to find a nonconstant function u so that L(φ2) = 0 for every t on I. (Why should u be nonconstant?) Since
computing L(φ2) and using L(φ1) = 0, one finds that v = u' satisfies a linear first-order equation whose solution is

v(t) = (c/[φ1(t)]²) exp(−∫_{t0}^{t} (a1(s)/a0(s)) ds),

where c is the constant [φ1(t0)]². Then, for t0 and t in I, using u' = v, we have

u(t) = ∫_{t0}^{t} (c/[φ1(s)]²) exp(−∫_{t0}^{s} (a1(τ)/a0(τ)) dτ) ds,
3.6 Reduction of Order 97
and therefore
φ2(t) = φ1(t) ∫_{t0}^{t} v(s) ds.   (3.21)
The reader should verify by direct substitution that L(φ2) = 0. This leads us
to the following result.
Theorem 1. If φ1 is a solution of L(y) = 0 on I, where a0, a1, a2 are continuous on I and a0(t) ≠ 0 on I, and if φ1(t) ≠ 0 on I, then the function φ2 given by (3.21) is also a solution of L(y) = 0 on I. Moreover, the solutions φ1, φ2 are linearly independent on I; hence every solution φ of L(y) = 0 on I has the form φ = c1φ1 + c2φ2 for some unique choice of c1, c2.
We have only to prove the linear independence of the solutions φ1, φ2 on I. This is done by computing W(φ1, φ2) and using Theorem 3, Section 3.3.
Exercise
Since this Wronskian is different from zero, φ1 and φ2 are linearly independent on any
interval not containing the origin.
Exercise
2. Given one solution φ1, in each case find a second linearly independent solution φ2 on the interval indicated.
a) t²y'' − 2y = 0, φ1(t) = t², 0 < t < ∞
b) y'' − 4ty' + (4t² − 2)y = 0, φ1(t) = e^{t²}, −∞ < t < ∞
c) (1 − t²)y'' − 2ty' + 2y = 0, φ1(t) = t, 0 < t < 1
d) ty'' − (t + 1)y' + y = 0, φ1(t) = e^t, t > 0.
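A numerical spot check of part (b), assuming φ1(t) = e^{t²} as given; reduction of order yields u(t) = t here, so the candidate second solution is φ2(t) = t e^{t²}:

```python
import math

def residual(f, t, h=1e-5):
    # finite-difference residual of y'' - 4t y' + (4t^2 - 2) y
    d2 = (f(t + h) - 2*f(t) + f(t - h)) / h**2
    d1 = (f(t + h) - f(t - h)) / (2*h)
    return d2 - 4*t*d1 + (4*t*t - 2)*f(t)

phi1 = lambda t: math.exp(t*t)          # given solution
phi2 = lambda t: t * math.exp(t*t)      # candidate from reduction of order

for t in [0.0, 0.5, 1.0]:
    assert abs(residual(phi1, t)) < 1e-3
    assert abs(residual(phi2, t)) < 1e-3
```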
Exercises
ty +ty+t?y?=1.
(see Eq. 2.4, Section 2.1). Initial conditions are imposed as before. We
remark that the equation for the current in an electrical circuit having
resistance, inductance and capacitance in series and a periodic impressed
voltage also has the form (3.25) under the appropriate physical assumptions.
100 Linear Differential Equations
L(y − ψp) = L(y) − L(ψp) = f − f = 0.

This shows that y − ψp is a solution of the homogeneous equation L(y) = 0 on I. (Recall that this much of the proof was already established in Section 3.2, property (ii).) Therefore, by Theorem 5, Section 3.3, there exist unique constants c1, c2 such that
Exercises
1. Prove Theorem 2.
2. Compare Theorem 2 in the case n = 1 with the results of Section 1.4, in particular with Theorem 1, Section 1.4.
We shall now study some methods for finding a particular solution of the equation L(y) = f or L_n(y) = f.
3.7 Linear Nonhomogeneous Equations
(u1φ1 + u2φ2)' = u1φ1' + u2φ2' + u1'φ1 + u2'φ2,
(u1φ1 + u2φ2)'' = u1φ1'' + u2φ2'' + 2u1'φ1' + 2u2'φ2' + u1''φ1 + u2''φ2,

and using L(φ1) = L(φ2) = 0 we obtain conditions on u1, u2. But now reversing the argument, we see that if we can find two functions u1, u2 to satisfy equations (3.28), (3.29), then indeed ψp = u1φ1 + u2φ2 will satisfy L(y) = f on I.
To find a particular solution of the equation L(y) = f, we may therefore concentrate on equations (3.28), (3.29). These are linear algebraic equations for the quantities u1', u2', and the determinant of their coefficients is W(φ1, φ2). Since the solutions φ1, φ2 of L(y) = 0 are by hypothesis linearly independent on I, it follows that W(φ1, φ2)(t) ≠ 0 for all t on I (Theorem 2, Section 3.3), and the system (3.28), (3.29) of equations can therefore always be solved (in fact uniquely) for the quantities u1', u2'. By Cramer's rule (Appendix 1), the solution of the algebraic equations (3.28), (3.29) is
u1'(t) = −φ2(t) f(t) / W(φ1, φ2)(t),   u2'(t) = φ1(t) f(t) / W(φ1, φ2)(t).

Integrating these and substituting into ψp = u1φ1 + u2φ2, we find that

ψp(t) = ∫_{t0}^{t} ([φ1(s) φ2(t) − φ1(t) φ2(s)] / W(φ1, φ2)(s)) f(s) ds

is a solution of L(y) = f on I, as may be verified by direct substitution. We have thus sketched the derivation of the following important result.
Theorem 3. Let φ1, φ2 be any two linearly independent solutions of the equation
Exercise
y'' + y = tan t,   −π/2 < t < π/2.

Since, by Section 3.4, φ1(t) = cos t, φ2(t) = sin t are linearly independent solutions of y'' + y = 0 on any interval, they are linearly independent on −π/2 < t < π/2; in fact

W(φ1, φ2)(t) = (cos t)(cos t) − (−sin t)(sin t) = 1.
Thus

u1'(t) = −tan t sin t = −sin²t/cos t = (cos²t − 1)/cos t = cos t − sec t,
u2'(t) = cos t tan t = sin t,   −π/2 < t < π/2,

or

u1(t) = sin t − log|sec t + tan t|,   u2(t) = −cos t,   −π/2 < t < π/2.
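Combining u1, u2 above gives the particular solution ψp(t) = u1(t) cos t + u2(t) sin t = −cos t · log|sec t + tan t|; a quick finite-difference check that it satisfies y'' + y = tan t:

```python
import math

def psi_p(t):
    # particular solution of y'' + y = tan t on (-pi/2, pi/2)
    return -math.cos(t) * math.log(abs(1/math.cos(t) + math.tan(t)))

def residual(t, h=1e-5):
    # finite-difference residual of y'' + y - tan t
    d2 = (psi_p(t + h) - 2*psi_p(t) + psi_p(t - h)) / h**2
    return d2 + psi_p(t) - math.tan(t)

for t in [-1.0, -0.3, 0.2, 1.0]:
    assert abs(residual(t)) < 1e-4
```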
Exercises
φ(t) = c1 cos kt + (c2/k) sin kt + (1/k) ∫_0^t sin k(t − s) f(s) ds

for 0 ≤ t < ∞. (Use cos kt and (sin kt)/k as a fundamental set of solutions of the homogeneous equation.) Find an analogous formula in the case k = 0.
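The formula of Exercise 5 can be tested numerically; a sketch (trapezoidal quadrature for the integral, with the hypothetical forcing f(s) = s, for which a particular solution of y'' + k²y = f is t/k²):

```python
import math

def phi(t, k=2.0, c1=1.0, c2=0.0, n=4000):
    # variation-of-constants formula for y'' + k^2 y = f(t), with f(s) = s
    f = lambda s: s
    h = t / n
    integral = 0.0   # trapezoidal approximation of ∫_0^t sin k(t-s) f(s) ds
    for i in range(n):
        s0, s1 = i*h, (i + 1)*h
        integral += 0.5*h*(math.sin(k*(t - s0))*f(s0) + math.sin(k*(t - s1))*f(s1))
    return c1*math.cos(k*t) + (c2/k)*math.sin(k*t) + integral/k

# the exact solution with phi(0) = 1, phi'(0) = 0 is cos kt - sin(kt)/k^3 + t/k^2
k = 2.0
for t in [0.5, 1.0, 2.0]:
    exact = math.cos(k*t) - math.sin(k*t)/k**3 + t/k**2
    assert abs(phi(t, k) - exact) < 1e-5
```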
6. Given the equation
y'' + 5y' + 4y = f(t),

use the variation-of-constants formula and Theorem 1 to prove that:
a) If f is bounded on 0 ≤ t < ∞ (that is, there exists a constant M > 0 such that |f(s)| ≤ M on 0 ≤ t < ∞), then every solution of y'' + 5y' + 4y = f(t) is bounded on 0 ≤ t < ∞.
b) If also f(t) → 0 as t → ∞, then every solution φ of y'' + 5y' + 4y = f(t) satisfies φ(t) → 0 as t → ∞.
* 7, Can you formulate Exercise 6 for the general equation
ψp = u1φ1 + u2φ2 + ··· + unφn. In the same manner we find that if u1', u2', ..., un' are chosen to satisfy the system of linear algebraic equations on I

u1'φ1 + u2'φ2 + ··· + un'φn = 0
u1'φ1' + u2'φ2' + ··· + un'φn' = 0
. . . . . . . . . .   (3.31)
u1'φ1^{(n−2)} + u2'φ2^{(n−2)} + ··· + un'φn^{(n−2)} = 0
u1'φ1^{(n−1)} + u2'φ2^{(n−1)} + ··· + un'φn^{(n−1)} = f,

then the function
Exercise
f(t) = c e^{mt},

where c is a constant. (Why?) Before we proceed, we stress again that the method we are about to explore further only works under the special conditions stated above. But if these conditions are not satisfied, the method of variation of constants is applicable.
For simplicity of exposition, let us first assume that f(t) = c e^{mt}, where m is a real or complex number, possibly zero, and let us consider L_n(y) = c e^{mt}, where L_n(y) = y^{(n)} + a1 y^{(n−1)} + ··· + a_{n−1} y' + a_n y. Our task is to find a particular solution.
practice we would simply assume a solution of the form Ate^{mt}, substitute, and determine A.

Example 4. Find a particular solution of y'' − 9y = e^{3t}.
We observe that e^{3t} is a solution of y'' − 9y = 0, but te^{3t} is not, and therefore we guess a particular solution of the form ψp(t) = Ate^{3t}. Then

ψp'(t) = 3Ate^{3t} + Ae^{3t},   ψp''(t) = 9Ate^{3t} + 6Ae^{3t}.

Therefore ψp''(t) − 9ψp(t) = 6Ae^{3t} = e^{3t} if and only if A = 1/6. Thus ψp(t) = (t/6) e^{3t} is a particular solution of y'' − 9y = e^{3t}.
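The computation in Example 4 is easy to confirm numerically; a sketch using a finite-difference residual:

```python
import math

def psi_p(t):
    # guessed particular solution (t/6) e^{3t} of y'' - 9y = e^{3t}
    return t/6 * math.exp(3*t)

def residual(t, h=1e-4):
    # finite-difference residual of y'' - 9y - e^{3t}
    d2 = (psi_p(t + h) - 2*psi_p(t) + psi_p(t - h)) / h**2
    return d2 - 9*psi_p(t) - math.exp(3*t)

for t in [0.0, 0.3, 1.0]:
    assert abs(residual(t)) < 1e-3
```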
Similarly, if m is a double root of p_n(z) = 0, then p_n(m) = p_n'(m) = 0, but p_n''(m) ≠ 0. This implies that e^{mt}, te^{mt} are solutions of L_n(y) = 0, but t²e^{mt} is not, and this in turn suggests assuming a solution of L_n(y) = e^{mt} of the form ψp = At²e^{mt} for some constant A. As before, one determines A by direct substitution.
In general, if m is a k-fold root of p_n(z) = 0 (k ≤ n), then p_n(m) = p_n'(m) = ··· = p_n^{(k−1)}(m) = 0, but p_n^{(k)}(m) ≠ 0. This implies that e^{mt}, te^{mt}, ..., t^{k−1}e^{mt} are solutions of L_n(y) = 0 but t^k e^{mt} is not. Thus in this case one would "judiciously guess" a solution ψp of L_n(y) = e^{mt} of the form ψp = At^k e^{mt} and substitute to determine the constant A.
We turn now to the more general equation L_n(y) = c t^k e^{mt}, k a positive integer, c a constant. A straightforward but tedious calculation shows that

L_n(t^k e^{mt}) = p_n(m) t^k e^{mt} + k p_n'(m) t^{k−1} e^{mt} + (k(k−1)/2!) p_n''(m) t^{k−2} e^{mt} + ···,

where there are at most a total of k + 1 nonzero terms. Therefore, when guessing the form of a particular solution, it is clear that we cannot now merely try ψp(t) = At^k e^{mt}, but must include terms t^j e^{mt} (j = 0, 1, ..., k) which will serve to cancel out the terms we get in the above calculation. Thus we try ψp(t) = A1 e^{mt} + A2 t e^{mt} + ··· + A_{k+1} t^k e^{mt} and find the A_j's by substitution.
Example 5. Consider the equation y'' − 9y = t³e^t.
Here k = 3, m = 1 (and m = 1 is not a root of m² − 9 = 0). Determine the form of a particular solution. In this case we try ψp(t) = A1 t e^t + A2 t² e^t + A3 t³ e^t + A4 e^t, where the constants A1, A2, A3, A4 are determined by direct substitution.
Exercise
10. Find the general solution of y'' − 9y = t³e^t.
As in the case k = 0 considered previously, additional complications arise if now one or more of the functions e^{mt}, te^{mt}, ..., t^k e^{mt} is a solution of the homogeneous equation L_n(y) = 0. The "judicious guess" based on the above considerations is to multiply each one of these functions by the lowest power of t, say t^ρ, such that none of the resulting functions is a solution of the homogeneous equation L_n(y) = 0; then assume a solution of the form ψp(t) = A1 t^ρ e^{mt} + A2 t^{ρ+1} e^{mt} + ··· + A_{k+1} t^{ρ+k} e^{mt}, and determine the constants A1, A2, ..., A_{k+1} by direct substitution in the equation
L_n(y) = c t^k e^{mt}.
Example 6. Find the form of a particular solution of the equation y'' − 9y = t³e^{−3t}.
Here e^{−3t} is a solution of y'' − 9y = 0, but te^{−3t} is not. Therefore, to find ψp, we try ψp(t) = A1 t e^{−3t} + A2 t² e^{−3t} + A3 t³ e^{−3t} + A4 t⁴ e^{−3t}. (Note that there is no point in including also a term A0 e^{−3t}, since e^{−3t} is a solution of the homogeneous equation.)
Exercises
a) y'' + 4y = sin 2t   b) y'' − 4y = 2e^{−t}
c) y'' − 4y' + 4y = e^{2t}   d) y'' − 4y = te^t
e) y'' − 4y = te^{2t}   f) y'' − 4y = t²e^{2t}
*14. Consider the equation y'' + k²y = 2k sin kt, and show that all solutions are unbounded as t → ∞. This phenomenon always occurs if the nonhomogeneous term is a sine or cosine function which is a solution of the homogeneous equation (physically speaking, if the "applied" frequency is a "natural" frequency of the system).
15. Find the general solution of each of the following differential equations.
a) y'' + y = cosec t cot t, 0 < t < π
b) t²y'' + ty' + 4y = sin log |t|
c) t³y''' − 3t²y'' + 6ty' − 6y = 0
d) y'' − 6y' + 9y = e^t
e) y'' − 6y' + 9y = te^{3t}
f) t²y'' − 3ty' + 4y = log |t|
g) y'' + 8y' + 16y = 0
h) t²y'' + ty' − 2y = 0
i) y^{(4)} − 2y'' + y = e^t + sin t
j) y'' + y = tan t, 0 < t < π/2
k) y'' + y = g(t), g continuous
l) y'' + y = h(t), where
spring constant is determined from the information in the first sentence. Note also that g = 980.]
17. Suppose that the mass-spring system in Exercise 16 is oscillating with an amplitude of 5 cm. Find the maximum velocity. [Hint: Consider the initial conditions φ(0) = 5, φ'(0) = 0.]
18. Suppose that the mass-spring system in Exercise 16 is at rest in an equilibrium position at time t = 0, and a force 500 cos 2t is applied. For what value of t will the displacement first equal one centimeter?
19. A spring is stretched 15 cm by an 8-kg weight. Suppose a 4-kg weight is
attached to the spring and released 30 cm below the point of equilibrium with
an initial velocity of 180 cm/sec directed downward. Determine the motion of the
system (Example 1, Section 2.1).
20. Determine the motion of the system in Exercise 19, with air resistance 1000 dynes
when the velocity is one centimeter per second. (Note that g=980.)
21. Consider the mass-spring system of Exercise 19, but with an additional external force 5 cos 2t. Determine the motion.
22. A body of mass m falls from rest from a height h above the surface of the earth. Assume that the only forces acting on the body are the force of gravity and a force of air resistance which is c times the velocity of the body. Find the motion of the body, and show that its velocity approaches the limit mg/c as t → ∞.
23. The bob of a simple pendulum of length 2 feet is displaced so that the pendulum makes an angle of 5° with the vertical and then is released. Assume that the motion is determined by Eq. (2.11), Section 2.2.
a) Find the angle θ which the pendulum makes with the vertical as a function of
time.
b) Determine the frequency of the vibration.
c) Calculate the distance traveled by the pendulum bob during one period.
d) Find the velocity and acceleration of the bob at the centre of its path.
24. A simple pendulum of unit mass vibrates in a medium in which the damping is proportional to velocity. If the pendulum bob passes through the equilibrium position θ = 0 at t = 0 with velocity v0, show that the angle θ is given by
where E(t) is the applied voltage. Suppose the applied voltage is a constant E0.
a) Show that the current decreases exponentially if CR² > 4L.
b) Find the current if CR² = 4L and I(0) = 0, I'(0) = E0/L.
c) Find the current if CR² < 4L and I(0) = 0, I'(0) = E0/L.
26. Consider the circuit of Exercise 25. Find the current if

E(t) = 0 for t < 0,   E(t) = E0 sin ωt for t ≥ 0.
Show that for small vibrations (neglecting the nonlinear term) the sphere vibrates with frequency (1/2π)(3g/2R)^{1/2}.
3.8 RESONANCE
Exercise
2. Find the above particular solution.
Thus we see that in the case of resonance, the amplitude of the particular
solution (often called a forced oscillation) is not constant, but is an un-
bounded function of t. This phenomenon predicted by the model (3.33) does
not occur in a real physical system because of the presence of friction, which
causes damped oscillations rather than periodic solutions in the homo-
geneous case. In the presence of friction Eq. (3.33) is replaced by
y'' + ay' + k²y = A cos kt,   a > 0.   (3.35)
Exercises
of the form φ(t) = B cos(kt − α). Show that the amplitude B of the oscillation is a maximum if k = (q − p²/2)^{1/2} (called the resonant frequency), provided p² < 2q. What happens in the case p² > 2q? Show that at resonance the amplitude of the oscillation is inversely proportional to the damping p.
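The resonant-frequency claim of this exercise can be probed numerically; a sketch, assuming the standard steady-state amplitude B(ω) = A/((q − ω²)² + p²ω²)^{1/2} for y'' + py' + qy = A cos ωt (an assumption consistent with the exercise, not a formula from the text):

```python
import math

def amplitude(w, p, q, A=1.0):
    # steady-state amplitude of y'' + p y' + q y = A cos(wt)
    return A / math.sqrt((q - w*w)**2 + (p*w)**2)

p, q = 0.4, 4.0
w_res = math.sqrt(q - p*p/2)          # resonant frequency from the exercise
# the resonant frequency should beat nearby frequencies
assert amplitude(w_res, p, q) > amplitude(w_res + 0.1, p, q)
assert amplitude(w_res, p, q) > amplitude(w_res - 0.1, p, q)
```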
6. Consider the differential equation
my'' + ky = f(t)
where
f(t) = 1/ε for 0 ≤ t ≤ ε,   f(t) = 0 for t > ε.

Find the solution φ(t) such that φ(0) = 0, φ'(0) = 0. Is φ a solution in the sense of Theorem 1, Section 2.5? Give a physical interpretation of this problem and its solution, thinking of ε as a small positive constant. Discuss the behavior of the solution as ε → 0; does lim_{ε→0+} φ'(t) exist for all t?
CHAPTER 4
Linear Systems of
Differential Equations
4.1 INTRODUCTION
where the given functions a_ij(t), i, j = 1, ..., n, and g_i(t), i = 1, ..., n, are continuous on some fixed interval I. Unless mentioned specifically otherwise, the interval I can be open, closed, half open, finite, or infinite. If n = 1, we have the important special case of a scalar first-order equation, which we write in the form
y1' = y1 + y2 + e^t,
y2' = ty1 − y3,   (4.3)
y3' = y1 + y2 − y3 + 2e^{−t},

where I is the real line, {t | −∞ < t < ∞}. Here n = 3, and in the notation of (4.1)

A(t) = [1 1 0; t 0 −1; 1 1 −1],   (4.4)

g(t) = [e^t; 0; 2e^{−t}].   (4.5)
Then, observing that matrix-vector multiplication of A(t) and y gives

A(t)y = [y1 + y2; ty1 − y3; y1 + y2 − y3],

we see that system (4.3) may be represented conveniently in the matrix-vector form

y' = A(t)y + g(t),
where A(t) and g(t) are given respectively by (4.4) and (4.5).
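The matrix-vector form can be mirrored directly in code; a sketch, assuming the coefficients A(t) = [1 1 0; t 0 −1; 1 1 −1] and g(t) = (e^t, 0, 2e^{−t}) read off from (4.3), with hypothetical test values:

```python
import math

def A(t):
    # coefficient matrix A(t)
    return [[1, 1, 0],
            [t, 0, -1],
            [1, 1, -1]]

def g(t):
    # forcing vector g(t)
    return [math.exp(t), 0.0, 2*math.exp(-t)]

def rhs(t, y):
    # right side of y' = A(t) y + g(t), via an explicit matrix-vector product
    return [sum(A(t)[i][j] * y[j] for j in range(3)) + g(t)[i] for i in range(3)]

v = rhs(0.0, [1.0, 2.0, 3.0])
assert abs(v[0] - 4.0) < 1e-12   # y1 + y2 + e^0 = 1 + 2 + 1
assert abs(v[1] + 3.0) < 1e-12   # t*y1 - y3 = 0 - 3
assert abs(v[2] - 2.0) < 1e-12   # y1 + y2 - y3 + 2e^0 = 1 + 2 - 3 + 2
```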
y1' = y2 + cos t,
y2' = y1.

Define the matrix A(t) and the vectors y, y', g(t), and write this system in the form (4.8).
Before proceeding with the definition of a solution and a discussion of the system (4.8), we need the following definitions.

Definition 1. A matrix (such as A(t)) or a vector (such as g(t)) is continuous on an interval I if and only if each of its entries is a continuous function at each point of I.
Definition 2. An n×n matrix B(t) or a vector u(t) with n components, defined on an interval I and given respectively by

B(t) = (b_ij(t)),   u(t) = (u_i(t))   (i, j = 1, ..., n),
Exercises
t Ca em
a) B(t)=|sint 0 cost for —o<t<o.
i? t 1
d) u(t) = [log t; t log t; t² log t] for 0 < t < ∞ (where log t is the natural logarithm of t).
3. Evaluate ∫ B(t) dt or ∫ u(t) dt for each of the matrices B(t) or vectors u(t) in Exercise 1. [Hint: In parts (c) and (d), integrate by parts.]
4. Is the vector
w(=||
continuous on the interval 1 ≤ t ≤ 2? Is it continuous on the interval −1 ≤ t ≤ 1? Explain.
o'()= |
Thus
Our object will be to learn as much as possible about such initial value
problems. As a matter of fact, when n=1 the initial value problem (4.9) can
always be solved, and we have already obtained a formula for the solution.
(See Section 1.4, Theorem 1.) Unfortunately, for n ≥ 2 the situation is much
more complicated.
Example 4. Show that the vector
Obviously,
Exercises
5. Show that
ial ea
is a solution of the system of Example 4 on —«<t<oo satisfying the initial
condition
6. Show that
w(t)=c,u(t)+c,v(t),
where u(t), v(t) are given in Example 5 and Exercise 5, respectively, and where c1, c2 are any constants, is a solution of the initial-value problem
7. Show that
vid=|_<|
is a solution of the initial-value problem
If the linear system (4.8) has a very special form, it can be solved com-
pletely. We illustrate this with the following examples and exercises.
A(t) = [d1 0; 0 d2],   g(t) = 0,

where d1, d2 are constants.
y1' = d1 y1,   y1(t0) = y01,
y2' = d2 y2,   y2(t0) = y02,   (4.10)

in which the differential equations are not linked to one another and each can be solved separately. By separating variables (see, for example, Example 1 and Exercise 1, Section 1.1) we have that φ1(t) = e^{d1(t−t0)} y01 is the solution of the first equation for −∞ < t < ∞ and φ2(t) = e^{d2(t−t0)} y02 is the solution of the second equation for −∞ < t < ∞. Thus
ii) A(t) = | d₁  0  ···  0  |    and    g(t) = 0,
           | 0   d₂ ···  0  |
           | ⋮           ⋮  |
           | 0   0  ···  dₙ |

where d₁, d₂, ..., dₙ are constants. It is clear that the jth equation of this system is simply

yⱼ' = dⱼyⱼ,   yⱼ(t₀) = y₀ⱼ.
Its solution (by the same method as in part (i)) is given by φⱼ(t) = e^{dⱼ(t−t₀)} y₀ⱼ. Thus

φ(t) = | e^{d₁(t−t₀)} y₀₁ |.
       |        ⋮         |
       | e^{dₙ(t−t₀)} y₀ₙ |

iii) Solve the initial-value problem (4.9) if A(t) is the matrix in part (ii) and g(t) is any continuous vector function on −∞ < t < ∞. Here the jth equation is

yⱼ' = dⱼyⱼ + gⱼ(t),   yⱼ(t₀) = y₀ⱼ,

and its solution is

φⱼ(t) = e^{dⱼ(t−t₀)} y₀ⱼ + ∫_{t₀}^{t} e^{dⱼ(t−s)} gⱼ(s) ds.
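Since each equation in (4.10) is uncoupled, the formula φⱼ(t) = e^{dⱼ(t−t₀)} y₀ⱼ can be checked numerically. The following Python sketch (the values of dⱼ, y₀ⱼ, t₀, and t are illustrative assumptions, not data from the text) integrates the diagonal system with a fourth-order Runge–Kutta step and compares against the exponential formula.

```python
import math

# Homogeneous decoupled system y_j' = d_j * y_j, y_j(t0) = y0_j.
# Each component evolves independently: phi_j(t) = exp(d_j*(t - t0)) * y0_j.
d = [2.0, -1.0, 0.5]          # illustrative coefficients
y0 = [1.0, 3.0, -2.0]         # illustrative initial values
t0, t = 0.0, 1.5

def exact(t):
    return [math.exp(dj * (t - t0)) * y0j for dj, y0j in zip(d, y0)]

def rk4(f, y, a, b, n=2000):
    # Fixed-step classical Runge-Kutta integration of y' = f(t, y).
    h = (b - a) / n
    tt = a
    for _ in range(n):
        k1 = f(tt, y)
        k2 = f(tt + h/2, [yi + h/2*k for yi, k in zip(y, k1)])
        k3 = f(tt + h/2, [yi + h/2*k for yi, k in zip(y, k2)])
        k4 = f(tt + h, [yi + h*k for yi, k in zip(y, k3)])
        y = [yi + h/6*(s1 + 2*s2 + 2*s3 + s4)
             for yi, s1, s2, s3, s4 in zip(y, k1, k2, k3, k4)]
        tt += h
    return y

f = lambda tt, y: [dj * yj for dj, yj in zip(d, y)]
num = rk4(f, y0, t0, t)
print(max(abs(a - b) for a, b in zip(num, exact(t))))  # should be tiny
```

The close agreement reflects that the system really does decouple into independent scalar equations.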
Exercises
yi=-)1, 2
; j= J;
Wy Vir vas ¥(0) fil
[Hint: Solve the first equation and substitute into the second equation. Solve this equation by using Theorem 1, Section 1.4. What is the interval of validity?]
9. Find a solution @ of the initial-value problem
W=—-N>
; )) = 1
Vos Vurty2: ¥(0) H
y₁' = a₁₁y₁ + a₁₂y₂ + ··· + a₁ₙyₙ
y₂' =         a₂₂y₂ + ··· + a₂ₙyₙ
  ⋮
yₙ' =                       aₙₙyₙ,

where aᵢⱼ, with j ≥ i, are constants; note that aᵢⱼ, with j < i, are zero.
11. Find a solution φ of the initial-value problem

    y₁' = y₁ + y₂ + f(t),
    y₂' = y₁ + y₂,
                             y(0) = | 0 |.
                                    | 0 |
y'' + p(t) y' + q(t) y = r(t),
y(t₀) = η₁,   y'(t₀) = η₂,        (4.11)

where p, q, r are given functions continuous on an interval 𝓘, t₀ is in 𝓘, and η₁, η₂ are given constants, can be reduced to a system of the form (4.9).
In agreement with Definition 4, by a solution of (4.11) on an interval 𝓙 contained in 𝓘 we mean a function ψ(t) such that ψ'(t), ψ''(t) exist and are continuous at each point of 𝓙, such that ψ''(t) + p(t)ψ'(t) + q(t)ψ(t) = r(t) for every t in 𝓙, and such that ψ(t₀) = η₁, ψ'(t₀) = η₂. The idea is to introduce new unknowns y₁ and y₂ by means of the definitions y₁ = y, y₂ = y'. Then
y₁' = y₂,
y₂' = −q(t)y₁ − p(t)y₂ + r(t),

that is,

y' = |   0       1    | y + |  0   |,    y(t₀) = η = | η₁ |.      (4.12)
     | −q(t)   −p(t)  |     | r(t) |                 | η₂ |

Note that (4.12) is a special case of (4.9) with n = 2 and A(t), g(t) displayed in (4.12).
We will now show that the initial-value problems (4.11) and (4.12) are equivalent in the sense that, given a solution of either one, we can construct a solution of the other one. More precisely, let ψ(t) be a solution of (4.11) on some interval 𝓙 containing t₀. Define the functions φ₁ and φ₂ on 𝓙 by the relations

φ₁(t) = ψ(t),   φ₂(t) = ψ'(t).

Define the vector φ by the relation

φ(t) = | φ₁(t) |.
       | φ₂(t) |

We claim that φ(t) is a solution of (4.12) on 𝓙. Clearly,

φ(t₀) = | φ₁(t₀) | = | ψ(t₀)  | = | η₁ | = η;
        | φ₂(t₀) |   | ψ'(t₀) |   | η₂ |
moreover,

φ'(t) = | ψ'(t)  | = |            φ₂(t)              |
        | ψ''(t) |   | −q(t)φ₁(t) − p(t)φ₂(t) + r(t) |

for every t on 𝓙, which shows that φ(t) is a solution of (4.12) on 𝓙. Conversely, let u(t) be a solution of (4.12) on some interval 𝓙 containing t₀. Let
     | 0      1      0    ···   0    |       | 0    |
     | 0      0      1    ···   0    |       | ⋮    |
y' = | ⋮                        ⋮    | y  +  | 0    |,      y(t₀) = η,      (4.14)
     | 0      0      0    ···   1    |       | r(t) |
     | −pₙ(t)  −pₙ₋₁(t)  ···  −p₁(t) |
where

η = | η₁ |,       y = | y₁ |.
    | ⋮  |            | ⋮  |
    | ηₙ |            | yₙ |

Put y₁ = y, y₂ = y', ..., yₙ = y⁽ⁿ⁻¹⁾. Then
y₁' = y₂,
y₂' = y₃,
  ⋮
yₙ₋₁' = yₙ,
yₙ' = −pₙ(t)y₁ − pₙ₋₁(t)y₂ − ··· − p₁(t)yₙ + r(t),

with the initial condition

y(t₀) = η = | η₁ |.
            | ⋮  |
            | ηₙ |
Indeed, similar to Example 6, let ψ(t) be any solution of (4.13) on some interval 𝓙 containing t₀. By this we mean that ψ'(t), ψ''(t), ..., ψ⁽ⁿ⁾(t) exist on 𝓙, are continuous, and satisfy the differential equation (4.13), while ψ(t₀) = η₁, ψ'(t₀) = η₂, ..., ψ⁽ⁿ⁻¹⁾(t₀) = ηₙ. Define the functions φ₁, φ₂, ..., φₙ on 𝓙 by the relations φⱼ(t) = ψ⁽ʲ⁻¹⁾(t), j = 1, ..., n. Then

φₙ'(t) = ψ⁽ⁿ⁾(t) = −pₙ(t)φ₁(t) − ··· − p₁(t)φₙ(t) + r(t),

so that
φ'(t) = | 0      1     0    ···   0    | | φ₁(t)   |   |  0   |
        | 0      0     1    ···   0    | | φ₂(t)   |   |  ⋮   |
        | ⋮                       ⋮    | |  ⋮      | + |  0   |,
        | 0      0     ···   0    1    | | φₙ₋₁(t) |   | r(t) |
        | −pₙ(t)  ···  −p₂(t)  −p₁(t)  | | φₙ(t)   |
which shows that this particular φ is a solution of (4.14). Conversely, let u(t) be any solution of (4.14) on 𝓙. Define ψ(t) = u₁(t) (i.e., the first component of u). We claim that this function ψ is a solution of (4.13) on 𝓙. The proof is very similar to the special case n = 2 carried out in Example 6, and we shall leave it to the reader as an exercise.
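The reduction of Example 6 is easy to exercise numerically. In the Python sketch below, the scalar equation y'' + y = 0 with y(0) = 1, y'(0) = 0 — an illustrative special case with p = 0, q = 1, r = 0, not the text's general setting — is rewritten as the system y₁' = y₂, y₂' = −y₁ and integrated; the first component should reproduce the known solution cos t.

```python
import math

# Equivalent first-order system for y'' + y = 0, y(0) = 1, y'(0) = 0:
#   y1' = y2,  y2' = -y1,  with exact solution (cos t, -sin t).
def f(t, y):
    return [y[1], -y[0]]

def rk4(f, y, a, b, n=4000):
    # Fixed-step classical Runge-Kutta integration of y' = f(t, y).
    h = (b - a) / n
    t = a
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2*k for yi, k in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2*k for yi, k in zip(y, k2)])
        k4 = f(t + h, [yi + h*k for yi, k in zip(y, k3)])
        y = [yi + h/6*(s1 + 2*s2 + 2*s3 + s4)
             for yi, s1, s2, s3, s4 in zip(y, k1, k2, k3, k4)]
        t += h
    return y

y = rk4(f, [1.0, 0.0], 0.0, 2.0)
print(y[0])            # first component: should be close to cos(2)
```

Any other continuous choice of p, q, r works the same way; only the reference solution cos t is special to this case.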
We remark that whereas every nth-order scalar equation is equivalent to a system of first-order equations (as shown in Example 7), the converse is not true. For example, the system

y' = | 1  0 | y
     | 0  1 |

is not equivalent to any single second-order scalar equation.
Exercises
13. For each of the following initial-value problems, write an equivalent initial-value problem for a first-order system:
a) y'' + (k/m) y = (A/m) cos t          b) θ'' + (g/l) θ = 0
14. a) y'' + 5z' − 7y + 6z = eᵗ,
       z'' − 2z + 13z' − 15y = cos t.

    b) y'' + 5z' + 2y = 0,
       z'' + 6y' + 11z' − 3y − z = t.
In Exercise 13 above we have seen how the equations for the simple mass-spring system (Section 2.1) and for the linearized pendulum can be reduced to initial-value problems for linear systems of first-order equations of the form (4.9). To close this introductory section we shall consider two examples of more complicated physical systems and show how they also lead to initial-value problems of the form (4.9).
Example 8. A weight of mass m₁ is connected to a rigid wall by a spring having spring constant k₁ > 0. A second weight of mass m₂ is connected to the weight of mass m₁ by means of a spring having spring constant k₂ > 0. An external force F(t) is applied to the second weight. The whole system slides in a straight line on a frictionless table, as shown in Fig. 4.1. Let y₁(t) denote the displacement of the first weight from its rest position (equilibrium) and y₂(t) the displacement of the second weight from equilibrium. At equilibrium y₁ = y₂ = 0 and both springs are unstretched.
Figure 4.1
a) If at time t = 0 the system starts from rest with initial displacements y₁(0) = y₁₀, y₂(0) = y₂₀, determine the motion of the system.
b) If m₁ = m₂ = m, k₁ = k₂ = k, and F(t) = 0, show that the motion of the system is a superposition of two simple harmonic motions with natural frequencies
ω₁ = (k(3 − √5)/2m)^{1/2},    ω₂ = (k(3 + √5)/2m)^{1/2}.
In order to formulate this problem mathematically, we assume (i)–(v) of Section 2.1 and apply Newton's second law of motion to each moving weight, which we regard as a point mass. Suppose that at time t the system is in the position shown in Fig. 4.1; then the only forces acting on the particle of mass m₁ are the restoring force of the first spring, −k₁y₁(t) (by Hooke's law), and the restoring force of the second spring, whose net extension is y₂ − y₁ (because it is stretched y₂ units by the second weight and compressed y₁ units by the first weight). Thus, the restoring force of the second spring is k₂[y₂(t) − y₁(t)]. Applying Newton's second law to the particle of mass m₁, we therefore obtain
m₁y₁''(t) = −k₁y₁(t) + k₂(y₂(t) − y₁(t)).        (4.15)
The only forces acting on the particle of mass m₂ are the restoring force −k₂[y₂(t) − y₁(t)] of the second spring, whose net extension is y₂ − y₁ units, and the external force F(t). Thus, Newton's second law applied to the second particle yields

m₂y₂''(t) = −k₂[y₂(t) − y₁(t)] + F(t).

By a solution of this problem we mean a pair of functions φ₁(t), φ₂(t) satisfying

m₁φ₁''(t) = −k₁φ₁(t) + k₂[φ₂(t) − φ₁(t)],
m₂φ₂''(t) = −k₂[φ₂(t) − φ₁(t)] + F(t)

for every t ≥ 0 and such that
If such functions have been found (it will be seen later (Section 4.2) that there is only one such pair of functions), we say that we have found a solution of the initial-value problem consisting of the system of linear differential equations of second order

m₁y₁'' = −k₁y₁ + k₂(y₂ − y₁),
m₂y₂'' = −k₂(y₂ − y₁) + F(t),        (4.17)

together with the initial conditions

y₁(0) = y₁₀,  y₁'(0) = 0,  y₂(0) = y₂₀,  y₂'(0) = 0.        (4.18)
Exercise
15. Find a system of first-order equations of the form (4.9) equivalent to (4.17), (4.18). [Hint: Put w₁ = y₁, w₂ = y₁', w₃ = y₂, w₄ = y₂'; then w₁' = w₂, etc.]
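Exercise 15 produces a four-dimensional first-order system w' = Aw from (4.17) when F(t) = 0. As an illustrative check of part (b) of Example 8 — with the assumed values m₁ = m₂ = m = 1 and k₁ = k₂ = k = 1, which are not from the text — the eigenvalues of A come in purely imaginary pairs ±iω, and the two natural frequencies should satisfy ω² = k(3 ∓ √5)/(2m):

```python
import numpy as np

# First-order form w' = A w of (4.17) with m1 = m2 = m, k1 = k2 = k,
# F = 0; the state is w = (y1, y1', y2, y2').  m = k = 1 is illustrative.
m, k = 1.0, 1.0
A = np.array([[0.0,    1.0, 0.0,  0.0],
              [-2*k/m, 0.0, k/m,  0.0],
              [0.0,    0.0, 0.0,  1.0],
              [k/m,    0.0, -k/m, 0.0]])

# Eigenvalues come in pairs +-i*omega; each natural frequency appears twice.
ims = sorted(abs(lam.imag) for lam in np.linalg.eigvals(A))
omegas = [ims[0], ims[2]]
expected = [np.sqrt(k*(3 - np.sqrt(5))/(2*m)),
            np.sqrt(k*(3 + np.sqrt(5))/(2*m))]
print(omegas, expected)   # the two lists should agree closely
```

The frequencies are incommensurable, so the general motion is a superposition of two simple harmonic motions, as part (b) asserts.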
Figure 4.2
It is desired to determine the voltages v₁(t), v₂(t) and the current i₂(t) as functions of time (in terms of the given source current iₛ(t)).

To set this problem up mathematically, we shall use Ohm's law (v = iR, relating voltage, current, and resistance) and the formulas i = Cv'(t) and v = Li'(t) (relating the "current through" to the "voltage across" condensers and inductors; here C is the capacitance, L is the inductance, and ' = d/dt). We shall also need Kirchhoff's law of currents: the sum of the currents entering and leaving a given node is zero.
Suppose that at a time t the source current iₛ(t) leaves node C and enters node A as shown in Fig. 4.2. At the same time the current i₂ leaves node A (through the inductance of 3/5 henry) and a current (5/3)v₁'(t) leaves node A through the capacitance of 5/3 farads. Thus, by Kirchhoff's law applied to node A we have

iₛ(t) − i₂(t) − (5/3)v₁'(t) = 0.        (4.20)
Then in the middle loop of the circuit shown in Fig. 4.2, since the sum of the voltage drops must also be zero (another one of Kirchhoff's laws), we have

Solving each of Eqs. (4.19), (4.20), and (4.21) for the quantities v₁', i₂', v₂', respectively, we
zak 7 1 e I
A(t)=|- t?—-1 _ (t)=
g(t)=| cost =0,
to=0, N=n=| 0
1 —é —1
t?+1
Determine whether this initial-value problem has a unique solution, and find the largest interval 𝓘 of existence of this solution in accordance with the theorem.
4.2 The Existence and Uniqueness Theorem 131
The entries of A(t) and g(t), with the exception of 1/(t² − 1), are continuous on −∞ < t < ∞. However, 1/(t² − 1) fails to be continuous at t = ±1. Since t₀ = 0, Theorem 1 therefore tells us that the given initial-value problem has a unique solution φ (with φ(0) = η), and the solution φ exists on the interval −1 < t < 1. It is worth pointing out that if we choose a different t₀, for example t₀ = 10, the new initial-value problem will also have a unique solution ψ (with ψ(10) = η), and the solution ψ will exist for 1 < t < ∞.
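The rule used in this example — the solution exists on the largest interval containing t₀ on which every entry of A(t) and g(t) is continuous — is mechanical enough to sketch in code. In the Python fragment below, `existence_interval` is a hypothetical helper (not from the text) that takes t₀ and the list of discontinuity points, here t = ±1 coming from the entry 1/(t² − 1):

```python
import math

# Largest open interval containing t0 that avoids every discontinuity
# point of the entries of A(t) and g(t).
def existence_interval(t0, bad_points):
    left = max([p for p in bad_points if p < t0], default=-math.inf)
    right = min([p for p in bad_points if p > t0], default=math.inf)
    return (left, right)

print(existence_interval(0, [-1, 1]))    # interval for t0 = 0
print(existence_interval(10, [-1, 1]))   # interval for t0 = 10
```

With t₀ = 0 this reproduces −1 < t < 1, and with t₀ = 10 it reproduces 1 < t < ∞, matching the discussion above.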
Readers who studied Chapter 3 should note that this is precisely Theo-
rem 1, Section 3.1.
Since these functions are continuous for −∞ < t < ∞, the given initial-value problem has a unique solution existing on −∞ < t < ∞.
is a solution, but so is
vo=["7"]
In fact, there are infinitely many solutions, for we can add an arbitrary differential equation for y₂,

y₂' = a(t)y₁ + b(t)y₂ + g(t),

where a, b, and g are arbitrary continuous functions. With this additional differential equation, the initial-value problem will have a unique solution by Theorem 1.
From this example, you can see that, similar to the situation for linear algebraic systems, an initial-value problem for a system of linear differential equations with fewer equations than unknowns may have infinitely many solutions. Unlike the algebraic case, however, such a problem may have no solution, as is shown by the example y₁' + y₂' + y₃' = 1, y₁' + y₂' + y₃' = 0, y₁(0) = y₂(0) = y₃(0) = 0. If there are more equations than unknowns, the
system may be inconsistent. For example,
yi —
val yi ve (0) 1
is obviously inconsistent.
Exercises
1. What does Theorem 1 tell you about each of the following initial-value problems for systems of the form (4.9)?
3 t logt e' |
4=| | Ta 2(=[5 | n=|_4|
b) Same as part (a), except t₀ = −1.
c) Same as part (a), except t₀ = 0.
d) Same as part (a), except
g(t)=| 1
t?—9
e) 2=3, fh=5
1 cost —sint 0 1
AG)=| err ete g(t)=|0 q=|-1
0 e t 0
f) Same as part (e), except
0
n=| 0
0
2. What does Corollary 1 to Theorem 1 tell you about each of the following initial-value problems for linear, scalar second-order equations of the form (4.11)?
a) y'' + ty' − y = 0, y(1) = 1, y'(1) = 1.
b) Same as part (a), except y(1)=1, y’(1) unspecified.
c) Same as part (a), except y(1)=0, y’(1)=0. (Can you guess the solution?)
d) t²y'' + ty' + (t² − ¼) y = 0, y(−1) = 0, y'(−1) = 1.
e) Same as part (d), except y(0)=0, y’(0)=1.
3. What does Theorem 1 tell you about each of the initial-value problems of Exercise 14, Section 4.1?
4. Discuss, using Corollary 2 to Theorem 1, each of the following initial-value problems:
a) y''' + ty' + (tan t) y = 2t, y(0) = 0, y'(0) = 0, y''(0) = 1.
b) y'' − eᵗy = 1, y(0) = 0, y'(0) = 0.
c) y''' + (t² − 1)^{1/2} y = 0, y(−1) = 1.
d) y''' + (t² − 1)^{1/2} y = 0, y(2) = 0, y'(2) = 0, y''(2) = 1.
5. Prove Corollary 2 of Theorem 1.
6. For each of the following differential equations, determine the largest intervals on which a unique solution is certain to exist as an application of Corollary 1 of Theorem 1. In each case it is assumed that you are given initial conditions of the form y(t₀) = y₀, y'(t₀) = z₀.
a) ty'' + y = 0
b) t²(t − 3) y'' + y' = 0
c) √t y'' + y = 0
d) (1 + t²) y'' − y' + ty = cos t
e) eᵗ y'' − (sin t) y' + y = 0
f) y'' − (log |t|) y = 0
This section is devoted to the study of the algebraic structure of the set of all solutions of (4.24). Here we are concerned not with a specific initial-value problem for (4.24), but rather with the algebraic structure of the set of all solutions. We assume, throughout, that the matrix A(t) is continuous on some fixed interval 𝓘, which may be finite or infinite. The entries aᵢⱼ(t) of the matrix A(t) can be real- or complex-valued.

By Definition 3, Section 4.1, a solution of the system (4.24) is a vector u(t) whose derivative u'(t) is continuous on 𝓘. In the language of linear algebra, this means that a solution of the system (4.24) is an element of the vector space of functions with n components, real- or complex-valued, having continuous first derivatives on the interval 𝓘. We shall call this vector space Cₙ'(𝓘). (Here C stands for continuity, ' stands for the derivative, the subscript n is there because each vector has n components, and 𝓘 represents the interval under consideration.)
We remind the reader of the following important facts about vector spaces (see, for example, [3], Chapter 5). Let 𝔽 represent the real or complex numbers.

Definition 1. A vector space V over 𝔽 is a collection of elements called vectors for which two operations, called addition and scalar multiplication, are defined: For every pair of vectors u, v ∈ V there is a unique vector u + v ∈ V called the sum of u and v. For every vector u ∈ V and every scalar α ∈ 𝔽, there is a unique vector αu ∈ V called the product of α and u.
Addition and scalar multiplication satisfy the following properties:
(A,) (u+v)+w=u+(v+w)
for all vectors u, v, we V.
4.3 Linear Homogeneous Systems 135
1 t te f*
0 0 0 0
OVE 4:0: I, E
0 0 0 0
y' = A(t)y        (4.24)

on 𝓘 form a vector space V of dimension n over the complex numbers.
In view of the remarks preceding the statement of the theorem, it is significant that, according to Theorem 1, to find any solution of (4.24) it suffices to find a finite number of solutions, namely, a set that forms a basis for the vector space V.

(There are obviously n such vectors.) By Theorem 1, Section 4.2, the system (4.24) possesses n solutions φ₁, φ₂, ..., φₙ, each of which exists on the entire interval 𝓘, and each solution φⱼ satisfies the initial condition

φⱼ(t₀) = σⱼ,   j = 1, ..., n.

But this implies that a₁, a₂, ..., aₙ are all zero, because of the assumed linear independence of the given vectors σ₁, σ₂, ..., σₙ. Thus φ₁, φ₂, ..., φₙ are linearly independent on 𝓘.
To complete the proof we must show that these n linearly independent
solutions of (4.24) span V; that is, they have the property that any solution ψ(t) of (4.24) can be expressed as a linear combination of the solutions φ₁, φ₂, ..., φₙ. We proceed as follows. Compute the value of the solution ψ at t₀ and let ψ(t₀) = σ. Since the constant vectors σ₁, σ₂, ..., σₙ form a basis for complex Euclidean n-space, there exist unique constants c₁, c₂, ..., cₙ such that the constant vector σ can be represented as

σ = c₁σ₁ + c₂σ₂ + ··· + cₙσₙ.

Define φ(t) = c₁φ₁(t) + c₂φ₂(t) + ··· + cₙφₙ(t). Then φ(t) and ψ(t) are both solutions of (4.24) on 𝓘 with φ(t₀) = ψ(t₀) = σ. Thus, by the uniqueness part of Theorem 1, Section 4.2, φ(t) = ψ(t) for every t on 𝓘, and the solution ψ(t) is expressed as the unique linear combination
1. Show that this expression of ψ(t) as a linear combination of φ₁(t), ..., φₙ(t) is unique. [Hint: Assume ψ(t) = d₁φ₁(t) + ··· + dₙφₙ(t) in addition to (4.26), and show that dⱼ = cⱼ for j = 1, ..., n.]
Thus, we have shown that the solutions φ₁, φ₂, ..., φₙ of (4.24) span the vector space V. Since they are also linearly independent, they form a basis for the solution space V, and the dimension of V is n. This completes the proof of Theorem 1. ∎

We often say that the linearly independent solutions φ₁, ..., φₙ form a fundamental set of solutions. There are clearly infinitely many different fundamental sets of solutions of (4.24), namely, one corresponding to every basis σ₁, ..., σₙ of Euclidean n-space.
Exercise
2. Prove the following analog of Theorem 1 for systems with real coefficients. If the real n × n matrix A(t) is continuous on an interval 𝓘, then the real solutions of (4.24) on 𝓘 form a vector space of dimension n over the real numbers. [Hint: This is not a trick question; just check that the proof of Theorem 1 applies here.]
rl a0 vo] Ln ae
which is a special case of (4.24). By Theorem 1, there exist two linearly independent (vector) solutions φ₁(t), φ₂(t) of (4.28) such that every solution φ(t) of (4.28) has the form φ(t) = c₁φ₁(t) + c₂φ₂(t). By the equivalence of (4.27) and (4.28),

φ₁(t) = | ψ₁(t)  |,    φ₂(t) = | ψ₂(t)  |,
        | ψ₁'(t) |             | ψ₂'(t) |

and ψ₁(t), ψ₂(t) are solutions of (4.27) on 𝓘. We know that the vector solutions φ₁(t) and φ₂(t) are linearly independent on 𝓘. We wish to show that ψ₁(t) and ψ₂(t) are linearly independent on 𝓘. Suppose that c₁ψ₁(t) + c₂ψ₂(t) = 0 for every t in 𝓘; then c₁ψ₁'(t) + c₂ψ₂'(t) = 0 for every t in 𝓘. Thus, c₁φ₁(t) + c₂φ₂(t) = 0 on 𝓘. Since φ₁(t), φ₂(t) are linearly independent on 𝓘, c₁ = 0, c₂ = 0. Therefore ψ₁(t) and ψ₂(t) are linearly independent on 𝓘. Also, every solution ψ(t) of (4.27) is the first component of the corresponding vector solution φ(t) of the system (4.28). Since φ(t) has the form φ(t) = c₁φ₁(t) + c₂φ₂(t), ψ(t) has the form ψ(t) = c₁ψ₁(t) + c₂ψ₂(t). ∎
Exercise
We can interpret Theorem 1 in a different and useful way. A matrix of n rows whose columns are solutions of (4.24) is called a solution matrix. Now, if we form an n × n matrix using n linearly independent solutions as columns, we will have a solution matrix on 𝓘 whose columns are, moreover, linearly independent on 𝓘. A solution matrix whose columns are linearly independent on 𝓘 is called a fundamental matrix for (4.24) on 𝓘. Let us denote the fundamental matrix formed from the solutions φ₁, φ₂, ..., φₙ as columns by Φ. Then the statement that every solution ψ is the linear combination (4.26) for some unique choice of the constants c₁, ..., cₙ is simply

ψ(t) = Φ(t)c,        (4.30)

where Φ is the fundamental matrix constructed above and c is the column vector with components c₁, ..., cₙ. (The vector Φ(t)c is obtained by forming the linear combination of the columns of Φ(t) with c₁, ..., cₙ as coefficients.) It is clear that if Φ̂ is any other fundamental matrix of (4.24) on 𝓘, then the above solution ψ can be expressed as

ψ(t) = Φ̂(t)ĉ        for every t on 𝓘

for a suitably chosen constant vector ĉ. Clearly, every solution of (4.24) on 𝓘 can be expressed in this form by using any fundamental matrix.
We see from the discussion above that to find every solution of (4.24) we need only find a fundamental matrix. A natural question, then, is the following. Suppose we have found a solution matrix of (4.24) on some interval 𝓘; can we test in some simple way whether this solution matrix is a fundamental matrix? The answer is contained in the following result.
Theorem 2. A solution matrix Φ(t) of

y' = A(t)y        (4.24)

on 𝓘 is a fundamental matrix if and only if det Φ(t) ≠ 0 for every t in 𝓘. Further, if det Φ(t₀) ≠ 0 for some t₀ in 𝓘, then det Φ(t) ≠ 0 for all t in 𝓘. (By det Φ(t) we mean the determinant of the matrix Φ(t).)
Proof. If det Φ(t) ≠ 0 for every t in 𝓘, then the columns of the solution matrix Φ(t) are linearly independent on 𝓘. For suppose there exist constants c₁, ..., cₙ such that

c₁φ₁(t) + c₂φ₂(t) + ··· + cₙφₙ(t) = 0   for every t in 𝓘,

where φ₁(t), ..., φₙ(t) are the columns of Φ(t). This can be written in the form

Φ(t)c = 0   for every t in 𝓘,

where c = col(c₁, ..., cₙ). Since det Φ(t) ≠ 0 for each t in 𝓘, the matrix Φ(t) is nonsingular, and therefore c = 0. Thus the columns φ₁(t), ..., φₙ(t) are linearly independent; hence Φ(t) is a fundamental matrix on 𝓘.
Conversely, suppose Φ(t) is a fundamental matrix of (4.24) on 𝓘. Let φ(t) be a solution of (4.24) on 𝓘. By Eq. (4.30), there exists a unique vector c such that φ(t) = Φ(t)c for every t in 𝓘. Fix t₀ in 𝓘; then, in fact, the constant vector c is uniquely determined by solving the algebraic system Φ(t₀)x = φ(t₀). Since this algebraic system has a unique solution for each right-hand side φ(t₀), the coefficient matrix Φ(t₀) has rank n. Hence Φ(t₀) is nonsingular, and therefore det Φ(t₀) ≠ 0. This is true for each fixed t₀ in 𝓘, and therefore det Φ(t) ≠ 0 for each t in 𝓘. It may appear that the vector c depends on the choice of t₀. However, it does not, for the following reason. Since φ(t) = Φ(t)c for every t in 𝓘, if t₁ ≠ t₀ in 𝓘, then φ(t₁) = Φ(t₁)c. Thus, the unique solution of the algebraic system Φ(t₁)x = φ(t₁) is the same vector c obtained as the unique solution of the algebraic system Φ(t₀)x = φ(t₀).

Finally, if det Φ(t₀) ≠ 0 for some t₀ in 𝓘, let σ₁ = φ₁(t₀), ..., σₙ = φₙ(t₀). The vectors σ₁, ..., σₙ are linearly independent, and therefore form a basis for Euclidean n-space. We claim that the solutions φ₁(t), ..., φₙ(t) are linearly independent on 𝓘; for if not, there exist scalars c₁, c₂, ..., cₙ, not all zero, such that
The reader is warned that a matrix may have its determinant identically zero on some interval although its columns are linearly independent. Indeed, let

Φ(t) = | 1   t   t² |.
       | 0   2   2t |
       | 0   0   0  |

Then clearly det Φ(t) = 0 for −∞ < t < ∞, and yet the columns are linearly independent. This, according to Theorem 2, cannot happen for solutions of (4.24).
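A concrete instance of Theorem 2 may help. In the Python sketch below, the constant system y' = Ay with A = [[0, 1], [−1, 0]] (an illustrative choice, not an example from the text) has the solution matrix Φ(t) = [[cos t, sin t], [−sin t, cos t]]; the code verifies Φ'(t) = AΦ(t) at a sample point by a finite difference and checks that det Φ(t) = 1 ≠ 0, so Φ is a fundamental matrix:

```python
import math

# Candidate fundamental matrix for y' = A y with A = [[0,1],[-1,0]].
def phi(t):
    return [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

A = [[0.0, 1.0], [-1.0, 0.0]]
t, h = 0.7, 1e-6

# Central-difference approximation of Phi'(t), compared with A @ Phi(t).
lhs = [[(phi(t + h)[i][j] - phi(t - h)[i][j]) / (2*h) for j in range(2)]
       for i in range(2)]
rhs = [[sum(A[i][k] * phi(t)[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err, det2(phi(t)))   # err should be tiny, determinant should be 1
```

As the theorem says, checking the determinant at a single point (here det Φ(t) = cos²t + sin²t = 1) already settles the question for every t.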
Example 1. Show that
We first show that ®(t) is a solution matrix. Let @, (¢) denote the first column of (7);
sn-(SHLo AISHG ts
then
eof 2H TEE se
for −∞ < t < ∞. Similarly, if φ₂(t) denotes the second column of Φ(t), we have

φ₂'(t) = A(t)φ₂(t)

for −∞ < t < ∞. Therefore, Φ(t) = [φ₁(t), φ₂(t)] is a solution matrix for −∞ < t < ∞. By Theorem 2, since det Φ(t) = e²ᵗ ≠ 0, Φ(t) is a fundamental matrix for −∞ < t < ∞. By Theorem 2 also, it is enough to compute det Φ(t) at one point, for instance t = 0. Since Φ(0) = I, this gives det Φ(0) = 1 ≠ 0.
Exercises
A=
kere
a ea,
,
and r₁, r₂ are the distinct roots of the quadratic equation z² + a₁z + a₂ = 0. (We shall learn in Section 5.3, Exercise 1, how to construct this fundamental matrix.)
Exercise
6. Show that CΦ(t), where C is a constant matrix and Φ(t) is a fundamental matrix, need not be a solution matrix of y' = A(t)y.
and since det Φ and det Ψ are both different from zero on 𝓘 (why?), we also have det C ≠ 0, so that C is a nonsingular constant matrix. ∎
Exercises
7. a) Show that
A(t)=]} —
Se
then
|  cos t   sin t |
| −sin t   cos t |

is also a fundamental matrix. Can you find another real fundamental matrix?
[Hint: Let Φ(t) = [φ₁(t), φ₂(t)]. Show that ℜφ₁(t) and ℜφ₂(t) are solutions of y' = Ay, A real, where ℜ denotes the real part. By the real part of a vector we mean, of course, the real part of each component. A similar result holds for the imaginary parts of φ₁(t) and φ₂(t).]
If Φ(t) is a solution matrix of (4.28) on 𝓘, then Φ(t) = [φ₁(t), φ₂(t)], where

φ₁(t) = | ψ₁(t)  |,    φ₂(t) = | ψ₂(t)  |,
        | ψ₁'(t) |             | ψ₂'(t) |

with ψ₁(t), ψ₂(t) solutions of the scalar equation (4.27). By Theorem 2, Φ(t) is a fundamental matrix of (4.28) on 𝓘 if and only if

det Φ(t) = det | ψ₁(t)   ψ₂(t)  | ≠ 0    for t in 𝓘.
               | ψ₁'(t)  ψ₂'(t) |
This determinant is called the Wronskian of ψ₁(t) and ψ₂(t). Thus, by the proof of Corollary 1 of Theorem 1, if det Φ(t) ≠ 0, then the solutions ψ₁(t), ψ₂(t) of the scalar equation (4.27) are linearly independent on 𝓘, and every solution of (4.27) can be written as a linear combination of ψ₁(t) and ψ₂(t). This is one-half of the following result.
Corollary 3 to Theorem 2. Two solutions ψ₁, ψ₂ of (4.27) on 𝓘 are linearly independent on 𝓘 if and only if their Wronskian

W[ψ₁(t), ψ₂(t)] = det | ψ₁(t)   ψ₂(t)  |
                      | ψ₁'(t)  ψ₂'(t) |

is different from zero on 𝓘.
Exercises
13. a) Let φ₁, φ₂ be any two solutions on some interval 𝓘 of L(y) = a₀(t)y'' + a₁(t)y' + a₂(t)y = 0, where a₀, a₁, a₂ are continuous on 𝓘 and a₀(t) ≠ 0 on 𝓘. Show that the Wronskian W(φ₁, φ₂)(t) satisfies the first-order linear differential equation

W' = −(a₁(t)/a₀(t)) W    on 𝓘.        (*)

[Hint:

W(φ₁, φ₂)(t) = det | φ₁(t)   φ₂(t)  | = φ₁φ₂' − φ₁'φ₂,   so   W'(φ₁, φ₂)(t) = φ₁φ₂'' − φ₁''φ₂.
                   | φ₁'(t)  φ₂'(t) |

Now use the fact that φ₁, φ₂ are solutions of L(y) = 0 on 𝓘 to replace φ₁'' and φ₂'' by terms involving φ₁, φ₁', φ₂, φ₂'. If you then collect terms you should get Eq. (*).]

b) By solving (*), derive Abel's formula:

W(φ₁, φ₂)(t) = W(φ₁, φ₂)(t₀) exp(−∫_{t₀}^{t} (a₁(s)/a₀(s)) ds)

for t₀, t on 𝓘. This gives another way of seeing that if the Wronskian is different from zero at one point, then it is never zero.
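Abel's formula from part (b) is easy to test on a concrete equation. The Python sketch below uses y'' + 3y' + 2y = 0 (an illustrative choice with a₀ = 1, a₁ = 3, not from the text), whose linearly independent solutions are e^{−t} and e^{−2t}; the Wronskian computed directly should agree with W(t₀) exp(−∫_{t₀}^t a₁/a₀ ds) = W(0) e^{−3t}:

```python
import math

# Solutions of y'' + 3y' + 2y = 0: phi1 = e^{-t}, phi2 = e^{-2t}.
# Directly, W(t) = phi1*phi2' - phi1'*phi2 = -e^{-3t}.
def wronskian(t):
    p1, p2 = math.exp(-t), math.exp(-2*t)        # phi1, phi2
    d1, d2 = -math.exp(-t), -2*math.exp(-2*t)    # phi1', phi2'
    return p1*d2 - p2*d1

t0, t = 0.0, 1.3
abel = wronskian(t0) * math.exp(-3*(t - t0))     # Abel's prediction
print(wronskian(t), abel)                        # should agree closely
```

Since W(0) = −1 ≠ 0, the formula shows the Wronskian never vanishes, confirming the remark above.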
14. State the analog of Corollary 3 to Theorem 2 for the linear third-order differential equation.
Exercises
has a solution of the form e^{ct} for some c, find the general solution. [Hint: First find what c must be; then find a second linearly independent solution as in Exercise 19(b).]
4.4 Linear Nonhomogeneous Systems 147
a) One solution of

L(y) = t²y'' + ty' + (t² − ¼) y = 0,   t > 0,

is t^{−1/2} sin t. Find the general solution of the equation L(y) = 3t^{−1/2} sin t, where t > 0. [Hint: Use the method suggested in Exercise 19(b) previously.]
b) Repeat part (a) for the equation
To provide the reader with some easy, concrete examples, we advise that
he study the solution of linear scalar differential equations with constant
coefficients as carried out in Sections 3.4 and 3.5. While this material is an
easy special case of linear systems with constant coefficients, to be studied in
Chapter 5, it is, nevertheless, helpful to see the special case first.
We now use the theory developed in Sections 4.2 and 4.3 to discuss the form of solutions of the nonhomogeneous system

y' = A(t)y + g(t),        (4.31)

where A(t) is a given continuous matrix and g(t) is a given continuous vector on an interval 𝓘. The entire development rests on the assumption that we can find a fundamental matrix of the corresponding homogeneous system y' = A(t)y. The vector g(t) is usually referred to as a forcing term, because if (4.31) describes a physical system, g(t) represents an external force. By Theorem 1, Section 4.2, we know that given any point (t₀, η), t₀ in 𝓘, there is a unique solution φ of (4.31) existing on all of 𝓘 such that φ(t₀) = η.
To construct solutions of (4.31), we let Φ(t) be a fundamental matrix of the homogeneous system y' = A(t)y on 𝓘; Φ exists as a consequence of Theorem 1, Section 4.3 (see also the remarks immediately following its proof). Suppose φ₁ and φ₂ are any solutions of (4.31) on 𝓘. Then φ₁ − φ₂ is a solution of the homogeneous system on 𝓘.
Exercise
By Theorem 1, Section 4.3, and the remarks immediately following its proof (in particular, see Eq. (4.30)), there exists a constant vector c such that

φ₁ − φ₂ = Φc.        (4.32)

Formula (4.32) tells us that to find every solution of (4.31), we need only know one solution of (4.31). (Every other solution differs from the known
Φ(t) v'(t) = g(t).

Since Φ(t) is nonsingular on 𝓘, we can premultiply by Φ⁻¹(t) and we have, on integrating,

v(t) = ∫_{t₀}^{t} Φ⁻¹(s) g(s) ds.

Thus, if (4.31) has a solution of the form (4.33), then ψ is given by (4.34). Conversely, define ψ by (4.34), where Φ is a fundamental matrix of the homogeneous system on 𝓘. Then, differentiating (4.34) and using the fundamental theorem of calculus, we have

ψ'(t) = Φ'(t) ∫_{t₀}^{t} Φ⁻¹(s) g(s) ds + Φ(t) Φ⁻¹(t) g(t)
      = A(t) Φ(t) ∫_{t₀}^{t} Φ⁻¹(s) g(s) ds + g(t),

and using (4.34) again,

ψ'(t) = A(t) ψ(t) + g(t),

so that ψ is a solution of (4.31) satisfying

ψ(t₀) = 0

and valid on 𝓘.
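The variation of constants formula ψ(t) = Φ(t) ∫_{t₀}^t Φ⁻¹(s) g(s) ds can be checked numerically. In the Python sketch below, the system is y' = Ay + g with A = [[0, 1], [−1, 0]] and constant g = (0, 1) (illustrative data, not from the text); for this A the fundamental matrix with Φ(0) = I is the rotation matrix, so Φ⁻¹(s) = Φ(−s), and the exact particular solution vanishing at t₀ = 0 works out to ψ(t) = (1 − cos t, sin t):

```python
import math

# Integrand Phi^{-1}(s) g = Phi(-s) @ (0, 1) = (-sin s, cos s).
def phi_inv_g(s):
    return (-math.sin(s), math.cos(s))

def psi(t, n=20000):
    # Trapezoid rule for the integral, then multiply by Phi(t).
    h = t / n
    I = [0.0, 0.0]
    for i in range(n):
        a, b = phi_inv_g(i*h), phi_inv_g((i+1)*h)
        I = [I[j] + h*(a[j] + b[j])/2 for j in range(2)]
    c, s = math.cos(t), math.sin(t)
    return (c*I[0] + s*I[1], -s*I[0] + c*I[1])   # Phi(t) @ I

t = 1.1
print(psi(t), (1 - math.cos(t), math.sin(t)))    # should agree closely
```

Note also that psi(0) = (0, 0), matching the initial condition ψ(t₀) = 0 in the text.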
leh Le} of
Example 1. Find the solution of the initial-value problem
is t
Since Φ(0) = I, the solution of the corresponding homogeneous system satisfying the initial condition
Exercises
Verify that
‘lo 2} rn} e0-[
sm
is a fundamental matrix of y' = Ay. Find the solution φ of the nonhomogeneous system for which
1
=| |}
3. Find the solution @ of the system y’ = Ay +g/(t) with A the same as in Exercise 2 and
with
=|||
4. Consider the system y’= A(t) y+g(¢), where
01
7]
A(t)=|—2 2
eo-[i]
alg
and determine the interval of validity of this solution. [Hint: Use the fundamental
matrix given in Exercise 7, Section 4.3.]
We now consider the form of the variation of constants formula for the
scalar second-order linear nonhomogeneous differential equation
y' = |    0        1    | y + |  0   |.        (4.37)
     | −a₂(t)   −a₁(t)  |     | b(t) |

We apply Theorem 1 to the system (4.37). Let

u(t) = | u₁(t) |
       | u₂(t) |

be the solution of (4.37) satisfying the initial condition u(t₀) = 0. By Theorem 1,
From

Φ⁻¹(s) = (1 / W[ψ₁(s), ψ₂(s)]) |  ψ₂'(s)   −ψ₂(s) |
                                | −ψ₁'(s)    ψ₁(s) |

we obtain

u(t) = Φ(t) ∫_{t₀}^{t} Φ⁻¹(s) |  0   | ds,
                              | b(s) |

whose first component is

∫_{t₀}^{t} [ψ₂(t)ψ₁(s) − ψ₁(t)ψ₂(s)] (b(s) / W[ψ₁(s), ψ₂(s)]) ds.

The solution of (4.36) satisfying the initial conditions y(t₀) = 0, y'(t₀) = 0 is, by the equivalence of (4.36) and (4.37), the first component u₁(t) of u(t). Therefore this solution is

φ(t) = ∫_{t₀}^{t} [ψ₂(t)ψ₁(s) − ψ₁(t)ψ₂(s)] (b(s) / W[ψ₁(s), ψ₂(s)]) ds.        (4.38)
We apply the Corollary to Theorem 1 directly, using the linearly independent solutions ψ₁(t) = cos t, ψ₂(t) = sin t of the homogeneous equation y'' + y = 0. We have

W[ψ₁(t), ψ₂(t)] = |  cos t   sin t | = 1.
                  | −sin t   cos t |

Hence the particular solution vanishing together with its derivative at t₀ = 0 is

φ(t) = ∫₀ᵗ [sin t cos s − cos t sin s] tan s ds
     = sin t (1 − cos t) + cos t (sin t − log |sec t + tan t|)
     = sin t − cos t log |sec t + tan t|.
We note that, since sin t is a solution of the homogeneous equation, the function

−cos t log |sec t + tan t|

is also a particular solution. We also remark that we could apply Theorem 1 directly by first converting the given differential equation to an equivalent system of first-order equations, as was done for Eq. (4.36); however, for second-order scalar equations it is more efficient to employ the Corollary.
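The particular solution just obtained can be verified independently. The Python sketch below approximates φ'' by a central difference and checks that φ(t) = sin t − cos t log|sec t + tan t| satisfies y'' + y = tan t at a sample point of (−π/2, π/2):

```python
import math

# Candidate particular solution of y'' + y = tan t.
def phi(t):
    sec = 1.0 / math.cos(t)
    return math.sin(t) - math.cos(t) * math.log(abs(sec + math.tan(t)))

t, h = 0.5, 1e-4
# Central second difference: phi''(t) ~ (phi(t+h) - 2 phi(t) + phi(t-h)) / h^2.
phi_dd = (phi(t + h) - 2*phi(t) + phi(t - h)) / h**2
print(phi_dd + phi(t), math.tan(t))   # the two numbers should agree closely
```

The same check applied to −cos t log|sec t + tan t| would succeed as well, since the two candidates differ by the homogeneous solution sin t.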
Exercises
a) y'' + y = sec t,  −π/2 < t < π/2.
b) y'' + 4y' + 4y = cos 2t.
c) y'' + 4y = f(t), where f is any continuous function on some interval 𝓘.
d) y'' − 4y' + 4y = 3e⁻ᵗ + 2t² + sin t.
7. If φ is a solution of the equation y'' + k²y = f(t), where k is a real constant different from zero and f is continuous for 0 ≤ t < ∞, show that c₁ and c₂ can be chosen so that

φ(t) = c₁ cos kt + c₂ (sin kt)/k + (1/k) ∫₀ᵗ sin k(t − s) f(s) ds

for 0 ≤ t < ∞. (Use cos kt and (sin kt)/k as a fundamental set of solutions of the homogeneous equation.) Find an analogous formula in the case k = 0, and show that it can also be obtained by computing lim_{k→0} φ(t).
8. Given the equation

y'' + 5y' + 4y = f(t),

use the variation of constants formula to prove that:
a) If f is bounded on 0 ≤ t < ∞ (that is, there exists a constant M > 0 such that |f(t)| ≤ M on 0 ≤ t < ∞), then every solution of y'' + 5y' + 4y = f(t) is bounded on 0 ≤ t < ∞.
b) If also f(t) → 0 as t → ∞, then every solution φ of y'' + 5y' + 4y = f(t) satisfies φ(t) → 0 as t → ∞.
9. Can you formulate Exercise 8 for the general equation

[Hint: Let ψ₁(t), ψ₂(t), ψ₃(t) be linearly independent solutions of the corresponding homogeneous equation. Then proceed as in the Corollary.]
11. Find the general solution of each of the following differential equations:
a) y'' − 8y = eᵗ.
b) y'' + 16y = f(t), f continuous on −∞ < t < ∞.

12. Find the general solution of each of the following differential equations:
a) y'' + y = cosec t cot t, 0 < t < π
b) y'' − 6y' + 9y = eᵗ
c) y'' − 6y' + 9y = te³ᵗ
d) y⁗ − 16y = 0
e) y⁗ − 2y'' + y = eᵗ + sin t
f) y”+y’=tan ¢, 0<t<n/2
g) w+ y=g(t); g(t) continuous
h) y’+y=h(t), where
h(t)=t(0<t<nz), h(t)=ncos(x—-t), nm<t<2n
and A is periodic with period 27.
13. The current i in amperes in an electrical circuit with resistance R ohms, inductance L henrys, and capacitance C farads in series (see Fig. 4.3) is governed by the equation

Li'' + Ri' + (1/C) i = E'(t),

where E(t) is the applied voltage. Suppose the applied voltage is a constant E₀.

Figure 4.3

a) Show that the current decreases exponentially if CR² > 4L.
b) Find the current if CR² = 4L and i(0) = 0, i'(0) = E₀/L.
c) Find the current if CR² < 4L and i(0) = 0, i'(0) = E₀/L.
14. Consider the circuit of Exercise 13. Find the current if

E(t) = 0 (t < 0),   E(t) = E₀ sin ωt (t ≥ 0),

assuming i(0) = 0, i'(0) = 0.
15. Find the current in an electrical circuit with inductance and capacitance in series, but no resistance, and an applied voltage E(t) given by

E(t) = E₀t/2 (0 ≤ t ≤ 1),   E(t) = E₀/2 (t ≥ 1),

with i(0) = 0, i'(0) = 0.
16. Determine a particular solution of the differential equation

y'' + py' + qy = A cos kt,

where p and q are positive constants, of the form φ(t) = B cos(kt − α). Show that
y'' = g(t, y, y'),        (4.41)

where g is a given function. Put y = y₁, y' = y₂; then one has y₁' = y₂ and, from (4.41), y'' = y₂' = g(t, y₁, y₂). Thus (4.41) is apparently equivalent to the system of two first-order equations

y₁' = y₂
y₂' = g(t, y₁, y₂),        (4.42)

which is a special case of (4.40) with n = 2, f₁(t, y₁, y₂) = y₂, f₂(t, y₁, y₂) = g(t, y₁, y₂). To see this equivalence, let φ be a solution of (4.41) on some interval J; then y₁ = φ(t), y₂ = φ'(t) is a solution of (4.42) on J. Conversely, let φ₁, φ₂ be a solution of (4.42) on J; then y = φ₁(t) (that is, the first component) is a solution of (4.41) on J.
Exercise
θ'' + (g/l) sin θ = 0,

with initial conditions θ(0) = θ₀, θ'(0) = 0, which describes the motion of a simple pendulum (Section 2.2).
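The reduction in this exercise can be tested numerically. The Python sketch below integrates the system y₁' = y₂, y₂' = −(g/l) sin y₁ with a Runge–Kutta step and monitors the energy E = y₂²/2 − (g/l) cos y₁, which is constant along exact solutions of the pendulum equation (g/l = 1 and the initial angle are illustrative assumptions, not data from the text):

```python
import math

gl = 1.0    # illustrative value of g/l

def f(t, y):
    # Pendulum as a first-order system: y = (theta, theta').
    return [y[1], -gl * math.sin(y[0])]

def energy(y):
    return 0.5*y[1]**2 - gl*math.cos(y[0])

y = [0.5, 0.0]            # theta(0) = 0.5 rad, theta'(0) = 0
E0 = energy(y)
h, t = 0.001, 0.0
for _ in range(5000):     # classical RK4 up to t = 5
    k1 = f(t, y)
    k2 = f(t + h/2, [y[i] + h/2*k1[i] for i in range(2)])
    k3 = f(t + h/2, [y[i] + h/2*k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h*k3[i] for i in range(2)])
    y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
    t += h
print(abs(energy(y) - E0))   # energy drift: should be tiny
```

The near-conservation of E is a cheap consistency check that the first-order system faithfully represents the second-order equation.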
Exercises
fⱼ(t, y) = fⱼ(t, y₁, ..., yₙ),   j = 1, ..., n,

and thus the system (4.40) can be written in the form

y₁' = f₁(t, y)
y₂' = f₂(t, y)        (4.45)
  ⋮
yₙ' = fₙ(t, y).
Proceeding heuristically (we will be more precise below), we next observe
that /,,...,/,can be regarded as n components of the vector-valued function
f defined by
f(t, y)=col(f, (t, 4 ees A y)),
y’=col(yj,..., Yn):
Thus the system of n first-order equations (4.40) (and all the systems which
arose earlier in this section (see also (4.45))) can be written in the very compact
form
y' = f(t, y)   (4.46)
Equation (4.46) resembles the familiar single first-order equation y’=f(t, y),
with y, f replaced by the vectors y, f, respectively.
Example 3. We may write the system (4.42) above in the form y' = f(t, y),
so that
f(t, y) = col(y₂, g(t, y₁, y₂)).
The Euclidean length of the vector y is defined by the relation
||y|| = (Σ_{j=1}^n |y_j|²)^{1/2}.
Notice that |y_j| is well defined for y_j complex and thus ||y|| is also defined for
a complex vector y. We need the notion of length in order to measure dis-
tances between solutions of systems. However, for the purpose of dealing with
systems such as (4.46) it turns out to be more convenient to define a different quantity for the length (or norm) of a vector y than the familiar Euclidean length, namely
|y| = |y₁| + |y₂| + ⋯ + |y_n|.
Exercise
4. If y is an n-dimensional vector in E_n, show that
||y|| ≤ |y| ≤ √n ||y||.
[Hint: Use the inequality 2|uv| ≤ |u|² + |v|² and show ||y||² ≤ |y|² ≤ n||y||².]
The important point about this inequality is that |y| is small if and only if
||y|| is small.
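The inequality of Exercise 4 is easy to check numerically; the following sketch compares the two length functions for a sample vector (the vector is arbitrary).

```python
import numpy as np

y = np.array([3.0, -4.0, 12.0])               # an arbitrary vector in E_3
norm_eucl = np.sqrt(np.sum(np.abs(y) ** 2))   # Euclidean length ||y||
norm_sum = np.sum(np.abs(y))                  # the length |y| = sum of |y_j|
n = len(y)

# Exercise 4: ||y|| <= |y| <= sqrt(n) ||y||
print(norm_eucl <= norm_sum <= np.sqrt(n) * norm_eucl)  # True
```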
The length function |y| has the following important properties:
i) |y| ≥ 0, and |y| = 0 if and only if y = 0.
ii) If c is any complex number, |cy| = |c| |y|.
iii) For all y and z, |y + z| ≤ |y| + |z|.
The proofs are immediate from well-known properties of complex numbers.
For example, to prove (ii) we have
|cy| = Σ_{j=1}^n |c y_j| = |c| Σ_{j=1}^n |y_j| = |c| |y|.
Similarly for (iii) we use the inequality |w+v| < |u|+ |v| valid for any complex
numbers u and v.
Exercise
5. Show that the Euclidean length ||y|| of a vector y also satisfies the properties
(i), (ii), (iii) above. [Hint: To prove (iii) you will need to apply the Schwarz inequality
for sums, that is,
|Σ_{j=1}^n u_j v_j| ≤ (Σ_{j=1}^n |u_j|²)^{1/2} (Σ_{j=1}^n |v_j|²)^{1/2}.]
Using the length function we define the distance between two vectors y
and 2, d(y, z), by the relation
d(y, z) = |y − z|.
The distance function d(y, z) has the following important properties:
i) d(y, z) ≥ 0, and d(y, z) = 0 if and only if y = z.
ii) d(y, z) = d(z, y).
iii) d(y, z) ≤ d(y, v) + d(v, z) (triangle inequality).
The proofs of these properties follow immediately from the corresponding properties (i), (ii), (iii) of the length function. For example, to prove (iii) we have
d(y, z) = |y − z| = |(y − v) + (v − z)| ≤ |y − v| + |v − z| = d(y, v) + d(v, z).
Any function satisfying the properties (i), (ii), (iii) is called a distance function. For example, ρ(y, z) = ||y − z|| for any vectors y, z is such a function, and represents the Euclidean distance between the points y and z in E_n.
Exercise
6. Show that ρ(y, z) = ||y − z|| also satisfies the properties of a distance function.
[Note: The proof of (iii) is harder than for the distance function d. You will need
to use the Schwarz inequality as in Exercise 5 above.]
The sequence {y^(k)} converges to the vector y if and only if ρ(y^(k), y) = ||y^(k) − y|| → 0 as k → ∞. It seems clear that the concept of convergence should not depend on the particular distance function used. We establish this for the distance functions d(y, z) and ρ(y, z) in Exercise 7 below.
Exercise
7. Let {y^(k)} be a sequence of vectors. Show that |y^(k) − y| → 0 as k → ∞ if and only if ||y^(k) − y|| → 0 as k → ∞. [Hint: Use Exercise 4.]
The integral of a vector function g = col(g₁, ..., g_n), continuous on an interval [a, b], is defined componentwise,
∫_a^b g(s) ds = col(∫_a^b g₁(s) ds, ..., ∫_a^b g_n(s) ds),
and satisfies the inequality
|∫_a^b g(s) ds| = |∫_a^b g₁(s) ds| + ⋯ + |∫_a^b g_n(s) ds|
               ≤ ∫_a^b |g₁(s)| ds + ⋯ + ∫_a^b |g_n(s)| ds = ∫_a^b |g(s)| ds.   (4.47)
Exercise
8. Justify each step in the proof of inequality (4.47). Note that in the middle steps you
have ordinary absolute values.
A similar inequality,
||∫_a^b g(s) ds|| ≤ ∫_a^b ||g(s)|| ds,
holds for any continuous vector g, but the proof is more difficult than the one for (4.47).
We can now return to the system
y' = f(t, y)   (4.46)
where the vector-valued function f is defined in some (n + 1)-dimensional region D in (t, y₁, y₂, ..., y_n) space. To find a solution of (4.46) (compare Section 1.2) means to find a real interval I and a vector function φ defined on I such that
i) φ'(t) exists for each t on I.
ii) The point (t, φ(t)) lies in D for each t on I.
iii) φ'(t) = f(t, φ(t)) for every t on I.
Thus the analogy between (4.46) and a single scalar equation of first order
is complete. Just as for the scalar equation, to solve an initial-value problem
for the system (4.46) with the initial condition φ(t₀) = η, (t₀, η) a point of D,
means to find a solution @ of (4.46) in the above sense passing through the
point (t₀, η) of D, that is, satisfying φ(t₀) = η. While it is not in general possible
to solve (4.46) explicitly, we can illustrate the concepts with some simple
problems.
Example 4. The system
y₁' = y₂,   y₂' = y₁
is of the form (4.46) with y = (y₁, y₂), f(t, y) = (y₂, y₁). Clearly, D is all of (t, y₁, y₂) space and φ(t) = (e^t, e^t) is a solution valid for −∞ < t < ∞, since (i), (ii), (iii) of the definition are satisfied. Note that cφ, c a constant, is also a solution.
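Solutions such as φ(t) = (e^t, e^t) can also be checked numerically; the sketch below integrates the system of Example 4 from the initial point (1, 1) and compares with the exact solution at t = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # the system of Example 4: y1' = y2, y2' = y1
    return [y[1], y[0]]

sol = solve_ivp(f, (0.0, 1.0), [1.0, 1.0], rtol=1e-10, atol=1e-12)
# phi(t) = (e^t, e^t), so at t = 1 both components should equal e
print(np.allclose(sol.y[:, -1], [np.e, np.e], atol=1e-6))  # True
```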
Exercise
9. Can you find (guess) another solution ψ(t) of the system in Example 4 on −∞ < t < ∞ which is not of the form cφ(t)?
There are few systems of the form (4.46) of any real interest, other than
those which are linear in the components of y with constant coefficients,
which can actually be solved explicitly. We therefore refrain from making up
special "textbook" problems for this purpose. On the other hand, one wishes
to analyze the behavior of solutions of systems of the form (4.46) that cannot
4.5 Nonlinear Systems of First-Order Equations 163
be solved explicitly. The first question that comes to mind in this analysis
is: Does the given system have a unique solution satisfying the given initial
condition? The following result provides an answer in the affirmative in
most problems which arise in practice. We note that because of the
equivalence between nth-order equations and systems of first-order equations
already established, this result includes all results of this type discussed in
Chapters 1 and 2 as very special cases. For the proof of this result we refer to
Section 8.5.
In what follows we let D represent a region in (n + 1)-dimensional space with the property that (compare Section 1.6) given any point (t₀, η) in D, the interior of the (n + 1)-dimensional "box"
B = {(t, y) : |t − t₀| ≤ a, |y − η| ≤ b}
will, for a, b > 0 and sufficiently small, lie entirely in D. (We note that if we use the Euclidean norm ||y − η|| ≤ b, then the set
C = {(t, y) : |t − t₀| ≤ a, ||y − η|| ≤ b}
would specify a "cylinder" whose cross section by a hyperplane t = constant
would be an n-dimensional sphere.) The most important special cases, namely the whole space, a half space {(t, y) : 0 < t < ∞}, and "infinite strips" (for example, {(t, y) : |t − t₀| < ∞, |y| ≤ 2}), have the above property.
For systems of first-order differential equations one has the following
existence and uniqueness theorem (compare Theorem 1, Section 1.6).
Theorem 1. Let f be a vector function (with n components) defined in a domain D of (n + 1)-dimensional Euclidean space. Let the vectors f, ∂f/∂y_k (k = 1, ..., n) be continuous in D. Then given any point (t₀, η) in D there exists an interval containing t₀ and exactly one solution φ, defined on this interval, of the system
y' = f(t, y)   (4.46)
satisfying the initial condition φ(t₀) = η.
The reader is advised to refer to the discussion and examples following
Theorem 1, Section 1.6. All remarks and examples about the scalar equa-
tion, of course, apply to systems of differential equations.
Example 5. Discuss the problem of existence and uniqueness of solutions of the
initial-value problem for the system
y₁' = y₂ + y₃
y₂' = (cos t) y₁ + t²y₃
y₃' = y₁ − y₂.
This system is of the form y' = f(t, y) with y = (y₁, y₂, y₃), f(t, y) = (y₂ + y₃, (cos t)y₁ + t²y₃, y₁ − y₂); hence f(t, y) is continuous for |t| < ∞, |y| < ∞. Moreover, ∂f/∂y₁ =
(0, cos t, 1), ∂f/∂y₂ = (1, 0, −1), ∂f/∂y₃ = (1, t², 0), which are also continuous for |t| < ∞, |y| < ∞. Thus D is all of four-dimensional (t, y₁, y₂, y₃) space and by Theorem 1, through any point (t₀, η) there passes a unique solution φ existing on some interval containing t₀. It can be shown that the solution φ actually exists on the interval −∞ < t < ∞.
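Although Example 5 cannot be solved in closed form, Theorem 1 guarantees a unique solution through any initial point, and a numerical integrator can trace it; the initial vector below is arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    # the system of Example 5
    return [y[1] + y[2],
            np.cos(t) * y[0] + t ** 2 * y[2],
            y[0] - y[1]]

eta = [1.0, 0.0, -1.0]   # an arbitrary initial vector eta, taken at t0 = 0
sol = solve_ivp(f, (0.0, 2.0), eta, rtol=1e-9)
print(sol.success)  # True
```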
Exercises
10. Discuss the existence and uniqueness of solutions of the system
y₁' = y₁²
y₂' = y₁ + y₂.
11. Find a solution φ = (φ₁, φ₂) of the system in Exercise 10 which satisfies the initial condition φ₁(−1) = 1, φ₂(−1) = 0. Discuss the interval I on which the solution exists.
CHAPTER 5
Eigenvalues, Eigenvectors,
and Linear Systems
with Constant Coefficients
We have seen in Chapter 1 how to solve the scalar equation y' = ay, and we know that every solution is of the form ce^{at}, where c is a constant. In this
chapter we will learn how to find a fundamental matrix of the system y’ = Ay,
where A is a constant n x n matrix. The explicit calculation of a fundamental
matrix will lead us naturally to the study of eigenvalues and eigenvectors of
matrices. As in Chapter 4, some knowledge of linear algebra is essential,
but for students with this knowledge, this chapter, which contains the results
of Sections 3.4 and 3.5 as very special cases, can be studied instead of those
sections. We emphasize that the techniques discussed in this chapter are not
applicable to systems for which the coefficient matrix is not constant.
exp M = I + M + M²/2! + ⋯ + M^k/k! + ⋯ = Σ_{k=0}^∞ M^k/k!   (5.2)
(If x, y are real or complex numbers and k ≥ 0 is an integer, the binomial theorem states that (x + y)^k = Σ_{j=0}^k k!/[j!(k − j)!] x^j y^{k−j}. If x and y are matrices which commute, the same result holds.) Therefore, canceling k!, we obtain
exp(M + P) = Σ_{k=0}^∞ Σ_{j=0}^k (M^j/j!) (P^{k−j}/(k − j)!).
On the other hand,
exp M · exp P = (Σ_{i=0}^∞ M^i/i!) (Σ_{j=0}^∞ P^j/j!).
Exercises
b) (e^A)^{−1} = e^{−A}.
c) (e^A)^k = e^{kA}, where k is any integer.
d) e^0 = I, where 0 is the n×n zero matrix.
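The identities in parts b), c), d) can be spot-checked with scipy's matrix exponential; the matrix A below is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, -1.0]])    # an arbitrary 2x2 matrix

# d) e^0 = I, where 0 is the zero matrix
assert np.allclose(expm(np.zeros((2, 2))), np.eye(2))
# b) (e^A)^{-1} = e^{-A}
assert np.allclose(np.linalg.inv(expm(A)), expm(-A))
# c) (e^A)^k = e^{kA}, here with k = 3
assert np.allclose(np.linalg.matrix_power(expm(A), 3), expm(3 * A))
print("ok")
```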
We are now ready to establish the basic result for linear systems with
constant coefficients
y' = Ay.   (5.1)
Theorem 1. The matrix
P(t)=exp At (5.8)
is the fundamental matrix of (5.1) with Φ(0) = I on −∞ < t < ∞.
Proof. That Φ(0) = I is obvious from (5.2). Using (5.2) with M = At (well defined for −∞ < t < ∞ and every n×n matrix A), we have by differentiation†
(exp At)' = A + A²t + A³t²/2! + ⋯ = A(I + At + A²t²/2! + ⋯) = A exp At,
Exercises
3. Show that if φ is that solution of (5.1) satisfying φ(t₀) = η, then φ(t) = [exp A(t − t₀)]η, −∞ < t < ∞.
4. Show that if Φ(t) = e^{At}, then Φ^{−1}(t) = e^{−At}.
We now proceed to find some fundamental matrices in certain special
cases; that is, we evaluate exp At for certain matrices A.
† It is easy to prove that the familiar theorems on differentiation of power series (Section 6.2) with real or complex coefficients hold essentially without change for power series having n×n matrices as coefficients.
From (5.2),
exp At = I + diag(d₁, ..., d_n) t + diag(d₁², ..., d_n²) t²/2! + ⋯
       = diag(e^{d₁t}, e^{d₂t}, ..., e^{d_nt}),
and by Theorem 1 this is a fundamental matrix. This result is, of course, obvious, since in the present case each equation of the system is y_k' = d_k y_k (k = 1, ..., n) and can be integrated separately.
Example 2. Compute exp At if
A = [λ  1]
    [0  λ]
Since A = λI + B, where
B = [0  1]
    [0  0]
and λI and B commute, exp At = e^{λt} exp Bt. But B² = 0, so exp Bt = I + Bt and
exp At = e^{λt} [1  t]
                [0  1]
Exercises
A = [−2   1   0]
    [ 0  −2   1]
    [ 0   0  −2]
Myth 2 0ek0
A= ee
0 0
where A is an n×n matrix.
7. Find a fundamental matrix of the system y’= Ay, where A is the n xn matrix
8. What is wrong with the following calculation for an arbitrary continuous matrix A(t)?
so that exp(∫_{t₀}^t A(s) ds) is a fundamental matrix of y' = A(t)y for any continuous matrix A(t).
9. Consider the system
y' = (1/t) Ay,
where A is a constant matrix. Show that |t|^A = e^{A log|t|} is a fundamental matrix for t ≠ 0 in two ways: (i) by direct substitution, (ii) by making the change of variable |t| = e^s.
You will have noticed that the examples and exercises presented so far,
all of which involve the calculation of e4‘, are of a rather special form.
In order to be able to handle more complicated problems and in order to
obtain a general representation of solutions of (5.1) (that is, if we want to evaluate explicitly the entries of the matrix exp(At)), we will need to introduce the notions of eigenvalue and eigenvector of a matrix.
To motivate these concepts, consider the system y’= Ay, and look for a
solution of the form
φ(t) = e^{λt} c,   c ≠ 0,
where the constant 4 and the vector c are to be determined. Such a form
* Even though the entries of A are real, the scalar λ may be complex (see Example 1 following).
** The function p defined by the expression p(λ) = det(λI − A) is a polynomial of degree n. We shall tacitly assume that such determinantal polynomials obey the rules of determinants.
5.2 Eigenvalues and Eigenvectors of Matrices 171
(that is, p(λ) has (λ − λ₀)^k, but not (λ − λ₀)^{k+1}, as a factor), then λ₀ is an eigenvalue of multiplicity k. Since the constant term in p(λ) is p(0) = det(−A), if λ = 0 is not an eigenvalue of A, then p(0) ≠ 0, and in this case A is nonsingular.
Example 1. Find the eigenvalues and corresponding eigenvectors of the matrix
A = [ 3  5]
    [−5  3]
The eigenvalues of A are roots of the equation
det(A − λI) = det [3 − λ    5  ] = λ² − 6λ + 34 = 0.
                  [ −5   3 − λ]
Thus, λ_{1,2} = 3 ± 5i. The eigenvector u = col(u₁, u₂) corresponding to the eigenvalue λ₁ = 3 + 5i must satisfy the linear homogeneous algebraic system
(A − λ₁I)u = [−5i    5] [u₁] = [0]
             [−5   −5i] [u₂]   [0]
and, therefore, u = α col(1, i), where α is any scalar.
Similarly, for the matrix of Example 2 the components of an eigenvector c corresponding to the eigenvalue λ = 3 satisfy
[1  −1] [c₁] = [0]    or    c₁ − c₂ = 0.
[1  −1] [c₂]   [0]
Thus c = α col(1, 1), where α is any scalar, is an eigenvector corresponding to the eigenvalue λ = 3.
Exercises
1 21
j) el el (eigenvalues are —1, —1, 3)
2 1
k) [ 0   1   0]
   [ 0   0   1]   (eigenvalues are −1, −2, −3)
   [−6 −11  −6]
l) [ 0   1   0]
   [ 0   0   1]   (eigenvalues are −1, −1, −2)
   [−2  −5  −4]
m) [4  0  0  0  0]
   [0  4  1  0  0]
   [0  0  4  0  0]
   [0  0  0  4  0]
   [0  0  0  0  4]
ad ee
n)
phvnasirig
eee
nea5 mi a !
[Hint: The characteristic polynomial is (λ − 1)²(λ + 1)².]
;
1 2 —-l +3
You will note that in Example 1 preceding, the two eigenvectors u and v are linearly independent if α ≠ 0 and β ≠ 0, since
det[u, v] = det [ α    β] = −2iαβ ≠ 0.
                [iα  −iβ]
Therefore, the vectors u and v form a basis of (complex) two-dimensional
Euclidean space. However, in Example 2, the eigenvectors form only a one-
dimensional subspace. In applications to differential equations as well as
in matrix theory it is important to know whether the set of all eigenvectors
(corresponding to the various eigenvalues) of a given matrix A form a basis.
As Example | shows, even if the matrix A is real, the eigenvectors may
have complex components. Thus, we consider the eigenvectors as vectors
with complex components. If the n×n matrix A has n distinct eigenvalues,
the corresponding eigenvectors form a basis for complex n-dimensional
Euclidean space.
Theorem 1. A set of k eigenvectors corresponding to any k distinct eigenvalues
is linearly independent.
Proof. We shall prove the theorem by induction on the number k of eigen-
vectors. For k=1, the result is trivial. Now, assume that every set of (p— 1)
eigenvectors corresponding to (p—1) distinct eigenvalues of a given matrix
A is linearly independent. Let v₁, ..., v_p be eigenvectors of A corresponding to the eigenvalues λ₁, ..., λ_p, respectively, with λ_i ≠ λ_j for i ≠ j. Suppose that there exist constants c₁, c₂, ..., c_p, not all zero, such that
c₁v₁ + c₂v₂ + ⋯ + c_pv_p = 0.   (5.11)
Applying A − λ₁I to (5.11) and using the induction hypothesis on the (p − 1) eigenvectors v₂, ..., v_p, together with λ_j ≠ λ₁ for j = 2, 3, ..., p, we have c_j = 0, where j = 2, 3, ..., p, and (5.11) becomes c₁v₁ = 0. Since v₁ ≠ 0, c₁ = 0 as well, which shows that v₁, ..., v_p are linearly independent. This proves the theorem by induction. ∎
We remind you that since the characteristic polynomial of an n x n matrix
is a polynomial of degree n there are at most n distinct eigenvalues. Since
the eigenvectors span a subspace of n-dimensional space, there are at most
n linearly independent eigenvectors. Of course, in any case, there exists at
least one eigenvector, since there is at least one (distinct) eigenvalue.
Example 3. Determine the subspace spanned by the eigenvectors of the matrix A
in Example 2.
As we saw in Example 2, every eigenvector of the matrix A is a scalar multiple of
col(1, 1).
Clearly, this subspace is the line passing through the point (1, 1) and the origin.
Exercises
3. Determine the subspace, and its dimension, spanned by the eigenvectors of each
matrix in Exercise 1.
4. In the matrix A of Exercise 2 assume the diagonal elements a_{ii}, where i = 1, ..., n, are all distinct. Find the dimension of the subspace spanned by the eigenvectors of A.
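Eigenvalues and the subspace spanned by the eigenvectors are also easy to examine numerically. The sketch below takes A = [[3, 5], [−5, 3]], an assumed reconstruction of the matrix of Example 1 chosen to be consistent with the eigenvalues 3 ± 5i computed there, and verifies Av = λv for each computed pair.

```python
import numpy as np

# Assumed matrix consistent with Example 1's eigenvalues 3 +/- 5i.
A = np.array([[3.0, 5.0], [-5.0, 3.0]])
vals, vecs = np.linalg.eig(A)

assert np.allclose(sorted(vals.imag), [-5.0, 5.0])
assert np.allclose(vals.real, [3.0, 3.0])
# each column of vecs is an eigenvector: A v = lambda v
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
print("ok")
```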
By Example 1, Section 5.2, λ₁ = 3 + 5i and λ₂ = 3 − 5i are eigenvalues of A, and
In general, Theorem 1 does not yield exp tA, even though it does yield a fundamental matrix Φ(t) of y' = Ay. By Corollary 2 to Theorem 2, Section 4.3, since exp tA and Φ(t) are both fundamental matrices of y' = Ay on −∞ < t < ∞, there exists a nonsingular matrix C such that
exp tA = Φ(t) C.   (5.13)
Exercises
1. Find a fundamental matrix of the system y’= Ay; also find exp tA for each of the
following coefficient matrices.
d) A = [2  −3  3]
       [4  −5  3]   (see Exercise 1(i), Section 5.2)
       [4  −4  2]
e) A = [ 0   1   0]
       [ 0   0   1]   (see Exercise 1(k), Section 5.2)
       [−6 −11  −6]
5.3 Calculation of a Fundamental Matrix 177
2. Show that the scalar second-order differential equation u'' + pu' + qu = 0 is equivalent to the system y' = Ay with
A = [ 0   1]
    [−q  −p]
Exercises
4. Given the matrix
A = [ 0  1]
    [−1  0]
show that A² = −I, A³ = −A, A⁴ = I, and compute A^m, where m is an arbitrary positive integer.
5. Use the result of Exercise 4 and the definition (5.2) to show that
e^{tA} = [ cos t  sin t]
         [−sin t  cos t]
[Hint: cos t = 1 − t²/2! + t⁴/4! − ⋯, sin t = t − t³/3! + ⋯.]
6. Compute e^{tA} if
Zea
[Hint: Use Exercises 4 and 5.]
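The formula of Exercise 5 can be confirmed directly with scipy's expm; the value of t is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the matrix of Exercise 4
t = 0.7                                   # an arbitrary time
R = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])
assert np.allclose(expm(t * A), R)        # e^{tA} as given in Exercise 5
print("ok")
```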
we obtain
Cane 4
e 4 5
in
+sin 5t \(
4 (—4 sinsin St 5t— 500s 5+}
a
o()=e EB St ae
COS St| ert 4
—sinsin St }a (—4— cos St+5 sini s+a}
=
e+ 5
Further simplification seems pointless. You will note that even such a simple example
as above leads to a rather complicated answer.
Exercises
y' = Ay + g(t)
in each of the following cases:
y=Ayt+ j=0
) oem
has a solution of the form
i) [ , Aen
Proof.
Case (i) arises when A has two distinct eigenvalues λ and μ. In this
5.4 Two-Dimensional Linear Systems 181
T = [v₁, v₂].
Then
T⁻¹AT = T⁻¹[Av₁, Av₂] = T⁻¹[λv₁, μv₂] = [λ  0]
                                        [0  μ]
where, by matrix multiplication, using T⁻¹T = I: since v₁ is the first column of T, T⁻¹v₁ is the first column of I, and similarly, since v₂ is the second column of T, T⁻¹v₂ is the second column of I.
Case (ii) arises when A has an eigenvalue λ of multiplicity two for which there are two linearly independent eigenvectors. In this case, the result is found by the same calculation used in (i), with μ replaced by λ.
Case (iii) arises when A has an eigenvalue λ of multiplicity two but the subspace consisting of all eigenvectors of A has dimension one; that is, any two eigenvectors of A are linearly dependent. Then there exist nonzero vectors which are not eigenvectors of A; at the same time we recall that there exists at least one eigenvector of A. Let v be a nonzero vector which is not an eigenvector of A and let
u = (A − λI)v.   (5.20)
Since v is not an eigenvector, and A − λI is not the zero matrix, u ≠ 0; we will show that u is an eigenvector.
We first assert that the vectors u and y are linearly independent. Suppose
then there exist constants c₁, c₂ such that
c₁u + c₂v = 0.   (5.21)
Since u and v are both different from zero, either c₁ and c₂ are both zero (in which case there is nothing to prove) or they are both different from zero. Using the definition of u and the fact that c₁ ≠ 0, we may rewrite (5.21) as
c₁(A − λI)v + c₂v = 0,
or Av = (λ − c₂/c₁)v; that is, v would be an eigenvector of A, contrary to the choice of v. Hence c₁ = c₂ = 0, and u and v are linearly independent; in particular, they form a basis, so that every vector x can be written x = au + bv. Now let x be an eigenvector of A, so that (A − λI)x = 0. Then
0 = (A − λI)x = a(A − λI)u + b(A − λI)v = a(A − λI)u + bu.
If a = 0, then bu = 0 and x = 0, a contradiction; hence a ≠ 0 and Au = (λ − b/a)u.
This says that λ − b/a is an eigenvalue, and since λ is the only eigenvalue, b = 0. Therefore, x is a nonzero multiple of u, and u must be an eigenvector.
Now, we define
T = [u, v] = [u₁  v₁]   (5.22)
             [u₂  v₂]
As in case (i),
T⁻¹ = 1/(u₁v₂ − u₂v₁) [ v₂  −v₁]
                      [−u₂   u₁]
We remark that if A is a real matrix and if its eigenvalues are real, then the matrix T constructed in each of the three cases above is real. However, if A
is real but has complex eigenvalues (necessarily complex conjugates), the
matrix T will not be real and it is of interest to learn the simplest form of
T~' AT which can be achieved with a real matrix T. The answer lies in the
following result.
Theorem 2. Let A be a real 2×2 matrix with complex conjugate eigenvalues α ± iβ (β ≠ 0). Then there exists a real constant nonsingular matrix T such that
T⁻¹AT = [ α  β]
        [−β  α]
Proof. Let u + iv, with u and v real vectors, be an eigenvector corresponding to the eigenvalue α + iβ. If v = 0, then
Au = (α + iβ)u,
the left side of which is real and the right side of which is not real. Thus, v ≠ 0, and a similar argument shows that u ≠ 0.
We define the matrix T with columns u and v,
T = [u, v] = [u₁  v₁]
             [u₂  v₂]
In order to show that T is nonsingular, we must show that u and v are linearly independent. Suppose not; then there exist real constants c₁, c₂, both different from zero, such that
c₁u + c₂v = 0.
Since c₁ ≠ 0, u = −(c₂/c₁)v, and since A(u + iv) = (α + iβ)(u + iv), we have, on substituting and comparing imaginary parts,
Av = (α − (c₂/c₁)β) v.
Then v would be a real eigenvector of A corresponding to a real eigenvalue; but the eigenvalues of A are α ± iβ with β ≠ 0, and therefore this is impossible. Thus, u and v are linearly independent and T is nonsingular.
Taking real and imaginary parts in the equation
A(u + iv) = (α + iβ)(u + iv),
we obtain
Au = αu − βv,   Av = βu + αv.
Therefore,
AT = [Au, Av] = [αu − βv, βu + αv] = T [ α  β]
                                       [−β  α]
so that T⁻¹AT has the required form.
Consider now the system (5.19), where 7” 'AT=B has one of the forms
in Theorems | and 2. We can write down the fundamental matrix exp Bt in
each case. We have the following possibilities,
CASE i) If
B = [λ  0]
    [0  μ]
then
exp Bt = [e^{λt}    0   ]
         [  0    e^{μt} ]
CASE ii) If
B = [λ  0]
    [0  λ]
then exp Bt = e^{λt} I.
CASE iii) If
B = [λ  1]
    [0  λ]
then
exp Bt = e^{λt} [1  t]
                [0  1]
Further, if
B = [ α  β]
    [−β  α]
with α and β real, then
exp Bt = e^{αt} [ cos βt  sin βt]
                [−sin βt  cos βt]
Exercise
1. Show that exp Bt has the form indicated in each of these cases. [Hint: For special cases of the last one, see Examples 1 and 2, Section 5.3, or Exercises 4 and 5, Section 5.3.]
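Exercise 1 can be spot-checked numerically; the sketch below verifies Case (iii), exp Bt = e^{λt}[1 t; 0 1], for sample values of λ and t.

```python
import numpy as np
from scipy.linalg import expm

lam, t = -2.0, 1.3                         # arbitrary sample values
B = np.array([[lam, 1.0], [0.0, lam]])     # the matrix of Case (iii)
expected = np.exp(lam * t) * np.array([[1.0, t], [0.0, 1.0]])
assert np.allclose(expm(B * t), expected)
print("ok")
```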
Consider the system y' = Ay, where
A = [−2   0]
    [ 0  −3]
Since det A ≠ 0, the origin is the only critical point. A fundamental matrix is
exp At = [e^{−2t}     0    ]
         [   0    e^{−3t} ]
Let φ(t, η) be that solution of y' = Ay for which φ(0, η) = η. Then
φ(t, η) = (exp At) η.
Here we have arbitrarily chosen t₀ = 0. Notice that φ(t − t₀, η) is that solution passing through the point η at t = t₀. Let η = (η₁, η₂) be any point in the (y₁, y₂) plane. Then the solution φ(t, η) for t > 0 is represented by the parametric equations y₁ = φ₁(t) = e^{−2t}η₁, y₂ = φ₂(t) = e^{−3t}η₂ for t > 0, and this represents the portion of the curve shown in Fig. 5.1 between η and the origin, as is verified by elementary calculus; the arrow indicates the direction of increasing t. Notice that the slope of the tangent to this curve, dy₂/dy₁, also tends to zero as t → +∞, because
dy₂/dy₁ = (dy₂/dt)/(dy₁/dt) = (−3η₂e^{−3t})/(−2η₁e^{−2t}) = (3η₂/2η₁) e^{−t}.
Similarly, t < 0 represents the portion of the curve in Fig. 5.1 above the point η. It should be noted that lim_{t→+∞} φ(t, η) = 0; that is, both the solution and the orbit approach the origin as t → +∞. Proceeding in this way by choosing various points of the phase plane as initial points, we obtain the phase portrait of the system, shown in Fig. 5.2. Notice that every orbit approaches the origin (as t → +∞).
Figure 5.1
Figure 5.2
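The orbits of this example (with A = diag(−2, −3)) can be traced directly from the explicit solution; the sketch below checks that the orbit through η = (1, 1) approaches the origin and that the slope factor (3η₂/2η₁)e^{−t} tends to zero.

```python
import numpy as np

eta = np.array([1.0, 1.0])
ts = np.linspace(0.0, 6.0, 200)
# explicit solution: y1 = e^{-2t} eta1, y2 = e^{-3t} eta2
orbit = np.column_stack([np.exp(-2 * ts) * eta[0], np.exp(-3 * ts) * eta[1]])

assert np.linalg.norm(orbit[-1]) < 1e-4    # the orbit approaches the origin
assert 1.5 * np.exp(-ts[-1]) < 1e-2        # slope (3/2) e^{-t} tends to zero
print("ok")
```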
Exercises
2. Obtain the phase portrait of the system y’= Ay, where
sale ow
lee 2 0
5. Obtain the phase portrait for the scalar equation y'' + 4y = 0. [Hint: Use the system y₁' = y₂, y₂' = −4y₁. By a phase portrait of a scalar second-order equation we mean the phase portrait of the equivalent first-order system.]
The cases to be considered are:
i) B = [λ 0; 0 μ], where λ and μ have the same sign, λ ≠ μ.
ii) B = [λ 0; 0 λ], where λ > 0 or λ < 0.
iii) B = [λ 0; 0 μ], where μ < 0 < λ.
iv) B = [λ 1; 0 λ], where λ > 0 or λ < 0.
v) B = [σ ν; −ν σ], where σ ≠ 0, ν ≠ 0.
vi) B = [0 ν; −ν 0], where ν ≠ 0.
Figure 5.3
CASE (ii) Here the solution of (5.19) through (η₁, η₂) ≠ (0, 0) at t = 0 is
φ(t) = e^{λt} col(η₁, η₂),
and if λ > 0, we obtain the phase portrait in Fig. 5.5, whereas the case λ < 0 corresponds to Fig. 5.6. Note that all orbits are straight lines tending away from the origin if λ > 0 and toward the origin if λ < 0. The ratio φ₂(t)/φ₁(t) if η₁ ≠ 0 is constant, as is φ₁(t)/φ₂(t) if η₁ = 0. The origin in Case (ii) is called a proper node.
CASE (iii) Here
φ(t) = col(e^{λt}η₁, e^{μt}η₂),
with μ < 0 and λ > 0, is the solution through (η₁, η₂) at t = 0. Now, as t → +∞, φ₁(t) → ±∞ according as η₁ > 0 or η₁ < 0, and φ₂(t) → 0 as t → +∞.
It is easy to see that if |λ| = |μ|, the orbits would be rectangular hyperbolas; for arbitrary λ > 0, μ < 0 they resemble these curves as shown in Fig. 5.7.
Quite naturally, the origin in Case (iii) is called a saddle point.
Figure 5.7
Exercise
CASE (iv) Here
φ(t) = col(e^{λt}(η₁ + tη₂), e^{λt}η₂)
is that solution passing through (η₁, η₂) at t = 0, and if λ < 0 the phase portrait is easily characterized by the fact that every orbit tends to the origin as t → +∞ and has the same limiting direction at (0, 0). For
dy₂/dy₁ = φ₂'(t)/φ₁'(t) = λφ₂/(λφ₁ + φ₂) → 0 as t → +∞ (see Fig. 5.8).
is called (as in Case (i)) an improper node.
Figure 5.8
Exercise
8. Construct the phase portrait in Case (iv) with λ > 0.
CASE (v) Here the solution, for the case σ > 0, passing through the point (η₁, η₂) at t = 0 is
φ(t) = e^{σt} col(η₁ cos νt + η₂ sin νt, −η₁ sin νt + η₂ cos νt),
where σ > 0, ν > 0, and the origin is called a spiral point. In this case, the orbits tend away from zero as t → +∞ (or, equivalently, approach zero as t → −∞).
Figure 5.9
Exercise
9. Sketch the phase portrait for Case (v) when σ < 0, ν < 0.
Figure 5.10
CASE (vi) This is a special case of Case (v) with σ = 0. From the above formulas we see that the orbits are concentric circles of radius ||η|| oriented as shown for ν > 0 in Fig. 5.10. The origin is called a center.
10. Sketch the phase portrait in Case (vi) when ν < 0.
We observe from the possible cases considered above that all solutions of (5.19) and also their orbits tend to the origin as t → +∞ if and only if both eigenvalues of A have negative real parts; in this case we say that the origin is an attractor of the linear system (5.19).
Exercises
Sketch the phase portrait of each of the following scalar equations by converting to
an equivalent system. Identify the origin and decide whether it is an attractor.
11. x'' + x = 0    12. x'' − 3x' + x = 0
13. x'' + 3x' + x = 0    14. x'' + 3x' − x = 0
15. x'' − 3x' + 2x = 0    16. x'' + 3x' + 2x = 0
17. x'' − 2x' + x = 0    18. x'' − x' − 6x = 0
19. To illustrate the complexity of the case when the origin is not the only critical
point of a linear system consider the system
y₁' = y₁ − y₂
y₂' = 2y₁ − 2y₂.
a) Show that there is a line of critical points.
b) Sketch the phase portrait.
[Hint: The critical points lie on the line y₂ = y₁, and the eigenvalues of the coefficient matrix are 0 and −1.]
20. Repeat as much as you can of Exercise 19 for the system
y₁' = a₁₁y₁ + a₁₂y₂,
y₂' = a₂₁y₁ + a₂₂y₂,
where a₁₁a₂₂ − a₁₂a₂₁ = 0, but not all of a₁₁, a₁₂, a₂₁, a₂₂ are zero.
21. For each of the following systems y’= Ay with A given below, sketch the phase
portrait, identify the origin as a node, saddle point, spiral point or center and decide
whether or not the origin is an attractor.
A a-[; i b) a=] if
be the number of members of the first species, known as the prey, at time t, and let y(t) be the number of members of the second species, known as the predator, at time t. If there were no predators, then according to one of the
models formulated in Section 1.1, the growth rate x’(t)/x(t) of the prey
population satisfies an equation of the form
x'(t)/x(t) = λ − ax(t),
where λ and a are positive constants. Now, however, we assume in addition
a negative growth rate (that is, a death rate) proportional to the size of the
predator population at time t. Thus the growth rate of the prey population
satisfies an equation of the form
x'(t)/x(t) = λ − ax(t) − by(t),
where b is a positive constant. Similarly, we assume that the growth rate of the predator population is increased in proportion to the size of the prey population, so that it satisfies an equation of the form
y'(t)/y(t) = μ + cx(t) − dy(t),
where μ, c, d are positive constants. We have now obtained a model for the
predator-prey problem, namely a system of two first-order differential
equations
x' = x(λ − ax − by),
y' = y(μ + cx − dy).   (5.23)
We cannot solve this system explicitly, but we can examine the possibility
of an equilibrium solution. To do this, we find the critical points of the
system (5.23), the points (x₀, y₀) in the phase plane where the right-hand sides of both equations in (5.23) are zero. Thus we must solve
x₀(λ − ax₀ − by₀) = 0,
y₀(μ + cx₀ − dy₀) = 0.
It is easy to see that there are four critical points, shown in Fig. 5.11.
Figure 5.11
One is at the origin, a second is at the intersection of the line λ − ax − by = 0 with the x-axis, a third at the intersection of the line μ + cx − dy = 0 with the y-axis, and a fourth at the intersection of the lines λ − ax − by = 0 and μ + cx − dy = 0. The reader will note that the line λ − ax − by = 0 has negative slope and positive x- and y-intercepts and that the line μ + cx − dy = 0 has positive slope, positive y-intercept and negative x-intercept because all the constants λ, a, b, μ, c, d are positive. This means that the fourth critical point is in the first quadrant of the (x, y) plane, as shown in Fig. 5.11. We denote this critical point by (x₀, y₀). Since this is the only equilibrium position which
does not involve extinction of either predators or prey or both, we examine
it more closely.
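A numerical experiment illustrates the behavior near (x₀, y₀). The constants below are hypothetical, chosen only so that the interior equilibrium works out to (1, 2); the computed solution is then seen to settle at that equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical positive constants for the predator-prey system (5.23).
lam, a, b, mu, c, d = 3.0, 1.0, 1.0, 1.0, 1.0, 1.0

def predator_prey(t, z):
    x, y = z
    return [x * (lam - a * x - b * y), y * (mu + c * x - d * y)]

# interior equilibrium: lam - a x - b y = 0 and mu + c x - d y = 0
x0 = (lam * d - b * mu) / (a * d + b * c)   # = 1
y0 = (mu * a + lam * c) / (a * d + b * c)   # = 2

sol = solve_ivp(predator_prey, (0.0, 40.0), [x0 + 0.5, y0 - 0.3], rtol=1e-9)
print(np.allclose(sol.y[:, -1], [x0, y0], atol=1e-3))  # True
```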
We would like to show that solutions whose initial values are sufficiently
near the equilibrium position tend to the equilibrium position as t → ∞. The
reader should note that the other three equilibrium positions are biologically
plausible and have been observed experimentally. If a solution tends to
the origin, this may be interpreted as the situation where the predators first
kill all the prey and then die themselves because of a food shortage. We
concentrate on the equilibrium position (xo, yo) because the other three
equilibrium positions are mathematically trivial, not because of any lack of
biological significance.
To study the behavior of solutions near the critical point (x₀, y₀), we make the change of variable which transforms (x₀, y₀) to the origin,
u = x − x₀,   v = y − y₀.
Substitution in x' = x(λ − ax − by) gives
u' = x' = (x₀ + u)[λ − a(x₀ + u) − b(y₀ + v)]
   = x₀(λ − ax₀ − by₀) + x₀(−au − bv) + u(λ − ax₀ − by₀) + u(−au − bv).
Since (x₀, y₀) satisfies the algebraic system
λ − ax₀ − by₀ = 0,   μ + cx₀ − dy₀ = 0,
this reduces to
u' = −ax₀u − bx₀v − au² − buv.
In a similar way, substitution in y' = y(μ + cx − dy) gives
v' = cy₀u − dy₀v + cuv − dv².
Thus we obtain the system
[u'] = [−ax₀  −bx₀] [u] + [−au² − buv]
[v']   [ cy₀  −dy₀] [v]   [ cuv − dv²]
to be
Exercise
x' = x(λ − ax − by),
y' = y(μ − cx − dy).   (5.29)
The terms −by in x'/x and −cx in y'/y correspond to the assumption of a decrease in the growth rate proportional to the population of the other species caused by the decrease in the food supply. Mathematically the only difference between (5.29) and (5.23) is that (5.29) has a term −cx where (5.23) has a term +cx. The lines λ − ax − by = 0 and μ − cx − dy = 0 now both have negative slope and may not have an intersection (x₀, y₀)
in the first quadrant of the phase plane. (Of course, negative populations
have no biological meaning, and only the first quadrant of the phase plane
is of interest.) The two possibilities are illustrated in Fig. 5.12, Fig. 5.12(a)
being the case in which there is an equilibrium in the first quadrant and
Fig. 5.12(b) being the case where there is not.
(a) (b)
Figure 5.12
Exercise
2. Show that the solution (x₀, y₀) of the algebraic system λ − ax − by = 0, μ − cx − dy = 0 is x₀ = (λd − bμ)/(ad − bc), y₀ = (μa − λc)/(ad − bc), and deduce that there is an equilibrium position (x₀, y₀) with x₀ > 0, y₀ > 0 if and only if the ratio λ/μ lies between the ratios a/c and b/d.
If there is an equilibrium position (x₀, y₀) with x₀ > 0, y₀ > 0, then the analysis may be carried on exactly as in the predator-prey problem. The coefficient matrix of the linearized system is now
A = [−ax₀  −bx₀]
    [−cy₀  −dy₀]
5.6 The General Case 197
with eigenvalues
λ = [−(ax₀ + dy₀) ± ((ax₀ − dy₀)² + 4bcx₀y₀)^{1/2}] / 2.
Exercises
3. Show that if bc < ad, both eigenvalues are real and negative, and deduce that the critical point (x₀, y₀) is a node to which solutions tend.
4. Show that if bc > ad, one eigenvalue is real and negative while the other is real and positive. Deduce that in this case the critical point is a saddle point, which means that one of the competing species eventually dies out.
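Exercise 3 can be illustrated numerically: with hypothetical values satisfying bc < ad, the linearized coefficient matrix has two real negative eigenvalues.

```python
import numpy as np

a, b, c, d = 2.0, 1.0, 1.0, 2.0     # hypothetical values with bc = 1 < 4 = ad
x0, y0 = 1.0, 1.0                   # a hypothetical equilibrium point
A = np.array([[-a * x0, -b * x0],
              [-c * y0, -d * y0]])
vals = np.linalg.eigvals(A)
assert np.all(np.isreal(vals)) and np.all(vals.real < 0)
print(sorted(np.round(vals.real, 6).tolist()))  # [-3.0, -1.0]
```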
φ(t) = e^{tA}η = e^{tA} Σ_{j=1}^k v_j = Σ_{j=1}^k e^{tA} v_j
     = Σ_{j=1}^k exp(λ_jt) [ Σ_{i=0}^{n_j−1} (t^i/i!) (A − λ_jI)^i ] v_j.   (5.33)
We point out again that if (A − λ_jI)^{q_j} v_j = 0, where q_j < n_j, then the sum on i in Eq. (5.33) will contain only q_j, rather than n_j, terms. This formula also tells us precisely how the components of the solution behave as functions of t for any given coefficient matrix A.
If A has only one distinct eigenvalue λ, there is no need to decompose as in (5.32). In this case we know that (A − λI)^n x = 0 for every x; that is, (A − λI)^n is the zero matrix. Therefore, from the series definition of exp tA, we have
exp tA = e^{λt} e^{t(A − λI)} = e^{λt} Σ_{i=0}^{n−1} (t^i/i!) (A − λI)^i;   (5.34)
that is, the series terminates after at most terms and exp/A 1s easily com-
puted.
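Formula (5.34) is easy to verify numerically. The sketch below uses a hypothetical 2 × 2 matrix (not the one from Section 5.2) with the single eigenvalue λ = 2 of multiplicity 2, so that (A − 2I)^2 = 0, and compares the terminating sum of (5.34) with a long truncation of the full exponential series:

```python
import math

# Hypothetical example: A has characteristic polynomial (lam - 2)^2,
# so n = 2 and the sum in (5.34) has only two terms.
A = [[3.0, 1.0], [-1.0, 1.0]]
lam, n = 2.0, 2

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def exp_tA(t):
    """exp(tA) by the terminating series (5.34)."""
    N = [[A[i][j] - lam * (i == j) for j in range(2)] for i in range(2)]  # A - lam I
    term = [[float(i == j) for j in range(2)] for i in range(2)]          # N^0 = I
    total = [row[:] for row in term]
    for i in range(1, n):              # only n terms are needed
        term = [[t / i * v for v in row] for row in mat_mul(term, N)]
        total = [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(total, term)]
    return [[math.exp(lam * t) * v for v in row] for row in total]

def exp_series(t, terms=40):
    """exp(tA) by direct truncation of the defining series."""
    result = [[float(i == j) for j in range(2)] for i in range(2)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[t / k * v for v in row] for row in mat_mul(term, A)]
        result = [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(result, term)]
    return result

E1, E2 = exp_tA(1.0), exp_series(1.0)
print(max(abs(E1[i][j] - E2[i][j]) for i in range(2) for j in range(2)))  # essentially zero
```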
Example 2. Solve the initial value problem y′ = Ay, y(0) = η, if A is the matrix in Example 2,
Section 5.2. Also, find exp tA.
From Example 2, Section 5.2, we know that λ_1 = 3 is an eigenvalue of multiplicity 2.
In the above notation, n_1 = 2. Therefore, only the subspace X_1 is relevant. We readily
calculate A − 3I, and we also see that (A − 3I)^2 = 0. Thus, with η = y(0),
we find

φ(t) = e^{3t}[I + t(A − 3I)] η,

and, from (5.34),

exp tA = e^{3t}[I + t(A − 3I)],
Example 3. Solve the initial value problem y′ = Ay for the system

x_1′ = 3x_1 − x_2 + x_3
x_2′ = 2x_1 + x_3
x_3′ = x_1 − x_2 + 2x_3,

that is, with coefficient matrix

A = [ 3  −1  1 ]
    [ 2   0  1 ]
    [ 1  −1  2 ]

and initial condition

y(0) = (η_1, η_2, η_3) = η,

and also find exp tA.
The characteristic polynomial of A is det (λI − A) = (λ − 1)(λ − 2)^2, and therefore
the eigenvalues are λ_1 = 1, λ_2 = 2, with multiplicities n_1 = 1, n_2 = 2, respectively. In the
notation of (5.30), we consider the systems of algebraic equations

(A − I)x = 0 and (A − 2I)^2 x = 0

in order to determine the subspaces X_1 and X_2 of 3-dimensional Euclidean space.
Taking these in succession, we have first (A − I)x = 0, that is,

2x_1 − x_2 + x_3 = 0
2x_1 − x_2 + x_3 = 0
x_1 − x_2 + x_3 = 0,

with x_1 = 0 and x_2 = x_3, and then (A − 2I)^2 x = 0,
with x_1 = x_2 and x_3 arbitrary; clearly, dim X_2 = 2. You are advised to picture these
subspaces. Observe that the rank of the matrix A − I is 2. Thus, by a well-known result
in algebra [see, for example, [3], Theorem 2, Section 5.6], dim X_1 = 3 − 2 = 1. Similarly,
the rank of the matrix (A − 2I)^2 is clearly 1, and dim X_2 = 3 − 1 = 2.
We now wish to find vectors v_1 ∈ X_1, v_2 ∈ X_2 such that we can write the initial vector
η as in (5.32):

η = v_1 + v_2.

Since v_1 ∈ X_1 and v_2 ∈ X_2, we must have

v_1 = α(0, 1, 1),   v_2 = β(1, 1, 0) + γ(0, 0, 1),
so that β = η_1, α + β = η_2, α + γ = η_3. Solving these equations for α, β, γ, we find that
α = η_2 − η_1, β = η_1, γ = η_3 − η_2 + η_1, and

v_1 = (0, η_2 − η_1, η_2 − η_1),   v_2 = (η_1, η_1, η_3 − η_2 + η_1).

Thus, by the formula (5.33), we find that the solution such that φ(0) = η is given by

φ(t) = e^t v_1 + e^{2t}[I + t(A − 2I)] v_2.

To compute exp tA we take in succession

η = (1, 0, 0), (0, 1, 0), (0, 0, 1)

in (5.36). We obtain the three linearly independent solutions that we use as columns of the
matrix

exp tA = [ (1 + t)e^{2t}              −te^{2t}          te^{2t} ]
         [ −e^t + (1 + t)e^{2t}       e^t − te^{2t}     te^{2t} ]
         [ −e^t + e^{2t}              e^t − e^{2t}      e^{2t}  ]
Next consider y′ = Ay, where A is the 5 × 5 matrix of Example 1, Section 5.6, which has the single eigenvalue λ = 4.
Here n = 5. Using the results of Example 1, we have (A − 4I)^3 = 0, so that (A − 4I)^3 x
= 0 for any vector x, and the initial vector η remains arbitrary. Since there is only
one eigenvalue (λ = 4), only the subspace X_1 is relevant and we have, from (5.34) (since
(A − 4I)^3 = 0),

exp tA = e^{4t}[I + t(A − 4I) + (t^2/2!)(A − 4I)^2].
Therefore exp tA can be written out entry by entry by carrying out the indicated matrix multiplications.
Next consider the nonhomogeneous system

y′ = Ay + g(t),

where A is the constant matrix in Example 2 and where g is a given vector function.
Using the matrix exp tA found in Example 2, we obtain exp[(t − s)A] by replacing t
by t − s. Note that it is not necessary to compute Φ^{−1}(s) separately. Therefore, using (5.16)
(Section 5.3), the solution can be written down at once.
Exercises
Find the fundamental matrix exp tA for each of the following systems y′ = Ay having
the coefficient matrix given. Also find a particular solution satisfying the given initial
condition.
aft a} wf] 1 © 3 0
3. A=]8 1 -1]); n=|—2]. (See Exercise le, Section 5.2.)
Ss itt —7
3 -1 —4 2) 1
2 3 —2 -4 : ;
AA ea eee 5 n 1 (See Exercise In, Section 5.2.)
1 2 -1 -—3 0
[Note: The characteristic polynomial is (λ − 1)^2 (λ + 1)^2.]
5. Find that solution of the system

y_1′ = y_1 + y_2 + 2t sin t,   y_2′ = 2y_2 + cos t

such that y_1(0) = 1, y_2(0) = 1. [Hint: Find a fundamental matrix of the homogeneous
system; then use the variation of constants formula (5.16) in Section 5.3.]
Exercises
6. Write Eq. (5.37), y″ + py′ + qy = 0, as an equivalent first-order system y′ = Ay with

A = [ 0   1 ]
    [ −q  −p ],

and compute the eigenvalues λ_1, λ_2 of A.
7. Compute a fundamental matrix for the system in Exercise 6 if λ_1 ≠ λ_2; that is, if
p^2 ≠ 4q, and construct the general solution of Eq. (5.37) in this case.
8. Compute a fundamental matrix for the system in Exercise 6 in the case
λ_1 = λ_2 = λ; that is, p^2 = 4q, and construct the general solution of (5.37) in the
case p^2 = 4q. Note that A − λI is never zero in this case, so that the fundamental
matrix, as well as the general solution of (5.37), must necessarily contain a term
in te^{λt}.
a) 4-[3 | b) a=] |
1
e) 4=| 3raeSi Tak ty hl
11. Find the general solution of the system y′ = Ay + b(t) in each of the following
cases.
fe) ao o-||
b) 4-|—) | =| 4|
‘ |
ah
Uy ui 0 | 0
u=]Uu,| = V4 . A= —4 4 y
U3 V2 2
Consider the matrix differential equation

Y′ = AY + YB,

where A, B, and Y are n × n matrices.
a) Show that the solution satisfying the initial condition Y(0) = C, where C is a
given n × n matrix, is given by

Y(t) = e^{tA} C e^{tB}.

b) Show that the matrix

Z = −∫_0^∞ e^{tA} C e^{tB} dt

satisfies

AZ + ZB = C

whenever the integral exists.
c) Show that the integral for Z in part (b) exists if all eigenvalues of both A
and B have negative real parts.
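Part (a) above can be checked numerically. In the sketch below the matrices A, B, C are chosen arbitrarily (they are not from the text), the exponentials are approximated by truncated series, and Y′ by a centered difference:

```python
# Check that Y(t) = e^{tA} C e^{tB} satisfies Y' = AY + YB,
# for arbitrarily chosen 2 x 2 matrices A, B, C.
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_exp(X, terms=30):
    """Truncated exponential series I + X + X^2/2! + ..."""
    n = len(X)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, X)]
        result = mat_add(result, term)
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]
B = [[-1.0, 0.5], [0.0, -2.0]]
C = [[1.0, 2.0], [3.0, 4.0]]

def Y(t):
    tA = [[t * v for v in row] for row in A]
    tB = [[t * v for v in row] for row in B]
    return mat_mul(mat_mul(mat_exp(tA), C), mat_exp(tB))

t, h = 0.7, 1e-5
Yt = Y(t)
dY = [[(a - b) / (2 * h) for a, b in zip(ra, rb)] for ra, rb in zip(Y(t + h), Y(t - h))]
rhs = mat_add(mat_mul(A, Yt), mat_mul(Yt, B))
err = max(abs(dY[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)   # small (dominated by the centered-difference error)
```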
det (λI − A) = λ^3 + 6λ^2 + 11λ + 6 = (λ + 1)(λ + 2)(λ + 3).
Hence the eigenvalues are λ_1 = −1, λ_2 = −2, λ_3 = −3.
Since the eigenvalues of A are distinct, we may proceed by the method of
Section 5.3, and we next compute an eigenvector corresponding to each
eigenvalue. Corresponding to the eigenvalue λ_1 = −1, we consider the system
(−I − A)x = 0. By elementary row operations we find that

v_1 = (0.6, 1, 1.2)

is a solution of (−I − A)x = 0. Notice that (miraculously) v_1 = y(0) (it was
intentionally given this way). Similarly, we find eigenvectors v_2 and v_3
corresponding to the eigenvalues λ_2 = −2 and λ_3 = −3, and we form the
fundamental matrix Φ(t) whose columns are e^{−t}v_1, e^{−2t}v_2, e^{−3t}v_3, so that

exp tA = Φ(t) Φ^{−1}(0).

After an elementary but tedious calculation we find the matrix exp tA; its
third row, for example, is

( 5e^{−t} − 10e^{−2t} + 5e^{−3t},  −3e^{−t} + 12e^{−2t} − 9e^{−3t},  e^{−t} − 5e^{−2t} + 5e^{−3t} ).   (5.40)

(Notice that at t = 0 this matrix reduces to the identity matrix, as it should.)
Thus, the solution φ_1 of the homogeneous system y′ = Ay satisfying the given
initial condition at t = 0 is (by matrix-vector multiplication)

φ_1(t) = exp(tA) v_1 = e^{−t} v_1 = (0.6e^{−t}, e^{−t}, 1.2e^{−t}).
By the variation of constants formula applied to the system (5.39), we obtain
as the solution φ of the given initial-value problem

φ(t) = φ_1(t) + ∫_0^t exp[(t − s)A] g(s) ds.   (5.42)

In view of the special form of g(s), the integrand is obtained from (5.40) by
replacing t by t − s.
Thus, if the source current i_1(t) is given, the solution (5.42) can be simplified
further, and if i_1(t) is sufficiently simple, the integrals may be evaluated explicitly
(for example, if the source current is 3 sin ωt); the desired voltages v_1, v_2 and the
current i_3 can then be determined explicitly.

5.7 Solution of Example 9, Section 4.1 209
Exercises
1. Find the voltages v_1(t), v_2(t), and the current i_3(t), if i_1(t) = 3 sin ωt.
2. Discuss the behavior as t → ∞ of the voltages v_1(t), v_2(t), and of the current i_3(t),
in the case that i_1(t) = 3 sin ωt.
3. Repeat Exercise 2 if i_1(t) = 3e^{−αt} sin ωt, α > 0.
y = Tw,   w = T^{−1}y.
Exercises
4. Find the solution w(t) of the system (5.44) satisfying the initial condition
w(0) = T^{−1}y(0), with y(0) given in (5.39).
5. Use the result of Exercise 4 to find the solution of the original initial value
problem. (Compare your result with (5.42); they should be identical.)
system such as (5.39), driven by any input g(t) and starting from any set of
initial conditions, can be thought of as the superposition of the motions of
the uncoupled first-order systems (5.44); for if w is a solution of the uncoupled
system, then φ = Tw is a solution of the original coupled system (5.39).
Another remark of some physical importance is the following. You may
prefer to think in terms of a mechanical rather than an electrical system.
An electrical system such as the one in Fig. 4.2, Section 4.1, which we have
studied, has an equivalent mechanical analog. This is obtained as follows.

Figure 5.13 (a mass-spring-dashpot system)
so that the equations of motion of the mechanical system form a first-order
system (5.45) of exactly the same form as (5.38). If we now identify the masses
m_1, m_2, the spring constant k, and the applied force f with the corresponding
circuit parameters and the source current i_1, the mechanical system (5.45) and
the electrical system (5.38) already solved become identical.
There is a procedure for establishing mechanical analogs from electrical
ones and vice versa. For our purposes it suffices to see one example, such as
the one above. For more details, the interested reader is referred to one of the
standard references such as [8].
Exercise
6. Solve the problem (4.17), (4.18) for the mechanical system posed in Example 8,
Section 4.2. [Hint: Reduce the linear system (4.17) of two second-order equations
to an equivalent linear system of four first-order linear equations and apply the
methods of this chapter.]
CHAPTER 6
Series Solutions
of Linear
Differential Equations
6.1 INTRODUCTION
y″ = −sin y − y′   (6.1)
which represents a possible model for a damped pendulum (see Section 2.2), and
which cannot be solved in terms of elementary functions. Let us try to find the
solution φ of (6.1) which satisfies the initial conditions

φ(0) = π/4,   φ′(0) = 0.   (6.2)
* This follows from an extension of Theorem 1, Section 6.3 below, to nonlinear equa-
tions, but will not be studied here.
212
6.1 Introduction 213
expanded in a power series about t = 0, then φ must have derivatives of all orders at
t = 0, and φ has the Taylor series expansion

φ(t) = φ(0) + φ′(0) t + (φ″(0)/2!) t^2 + (φ‴(0)/3!) t^3 + ···,
where the series on the right side converges to φ(t) in some interval |t| < r. Thus we
need only evaluate the solution and its derivatives at t = 0 in order to find the expansion
(assuming that there is one).
From (6.2) we already know that the function φ must satisfy φ(0) = π/4, φ′(0) = 0.
In terms of the graph of φ, this means that we know that y = φ(t) passes through the
point (0, π/4) with slope 0. Since φ is a solution of (6.1), we know that

φ″(t) = −sin φ(t) − φ′(t),   t ≥ 0.   (6.3)

In particular, we can set t = 0 in (6.3), and find φ″(0) = −sin φ(0) − φ′(0) = −sin (π/4)
= −√2/2. Since φ″(0) is negative, this tells us that the graph of φ is concave
downward near t = 0, and the value −√2/2 would enable us to compute its curvature
there (see Fig. 6.1). How can we find the values of the higher derivatives of φ at the origin?
Figure 6.1 (the solution curve y = φ(t) through the point (0, π/4))
φ(t) = π/4 − (√2/2)(t^2/2!) + (√2/2)(t^3/3!) + (1/2 − √2/2)(t^4/4!) + ···.
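The derivative computations of Example 1 can be organized as a short program. The sketch below evaluates φ″(0), φ‴(0), φ⁗(0) by differentiating the equation, and compares the resulting Taylor polynomial with a direct numerical integration (the RK4 scheme is our choice, not the text's):

```python
import math

# Taylor coefficients of the solution of y'' = -sin(y) - y' with
# y(0) = pi/4, y'(0) = 0, obtained by differentiating the equation
# repeatedly and evaluating at t = 0.
p0 = math.pi / 4                               # phi(0)
p1 = 0.0                                       # phi'(0)
p2 = -math.sin(p0) - p1                        # phi''(0)  = -sqrt(2)/2
p3 = -math.cos(p0) * p1 - p2                   # phi'''(0) = +sqrt(2)/2
p4 = math.sin(p0) * p1**2 - math.cos(p0) * p2 - p3   # phi''''(0) = 1/2 - sqrt(2)/2

def taylor(t):
    """Four-term Taylor polynomial of the solution about t = 0."""
    return p0 + p1*t + p2*t**2/2 + p3*t**3/6 + p4*t**4/24

def rk4(t_end, n=10000):
    """Classical RK4 on the equivalent system u' = v, v' = -sin(u) - v."""
    h = t_end / n
    u, v = math.pi / 4, 0.0
    f = lambda u, v: (v, -math.sin(u) - v)
    for _ in range(n):
        k1 = f(u, v)
        k2 = f(u + h/2*k1[0], v + h/2*k1[1])
        k3 = f(u + h/2*k2[0], v + h/2*k2[1])
        k4 = f(u + h*k3[0], v + h*k3[1])
        u += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return u

print(abs(taylor(0.1) - rk4(0.1)))   # small for small t, as the text asserts
```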
214 Series Solutions of Linear Differential Equations
Exercises
1. Continuing with Example 1, find the numerical values of φ^{(5)}(0) and φ^{(6)}(0).
2. Let ψ be the (unique) solution of (6.1) that obeys the initial conditions
ψ(0) = 0, ψ′(0) = 1. Find the values of the first four derivatives of ψ at t = 0 and
write down the first few terms of the power series expansion of ψ about t = 0, assuming
that it has one.
3. Consider the first-order differential equation y′ = t^2 + y^2. If φ is the solution
satisfying the initial condition φ(1) = 0, find the values of φ′(1), φ″(1), φ‴(1),
and φ⁗(1). Also, write down the first few terms of the power series expansion of
φ about t = 1, assuming that it has one.
4. Consider the solution φ of the first-order equation y′ + y = 0 such that φ(0) = 1.
Use the above method to find the power series expansion, if there is one, of φ.
Can you sum this series and verify that the sum represents a solution?
5. Repeat Exercise 4 for the solution φ of the differential equation y″ + y = 0 such that
φ(0) = 1, φ′(0) = 1.
Example 2. Consider the differential equation y′ = f(t), where f is defined by

f(t) = e^{−1/t^2}  (t ≠ 0),   f(0) = 0.

The function f has derivatives of all orders at the origin, and f^{(n)}(0) = 0 (n = 0, 1, 2, ...).
If φ is the solution of y′ = f(t) such that φ(0) = 0, and if φ can be expanded in a
convergent power series about t = 0, then every coefficient in this power series must be
zero, and φ must be the zero function. However, the zero function does not satisfy the
differential equation.
Every power series Σ_{n=0}^∞ c_n t^n has a radius of convergence R ≥ 0. (If R = 0,
the facts stated below are meaningless.) The numbers c_n, as well as t, can be
real or complex. The following properties hold:
6.2 Review of Properties of Power Series 215
In particular, if f(t) = Σ_{n=0}^∞ c_n t^n for |t| < R, the series may be integrated term by term:

∫_0^t f(u) du = Σ_{n=0}^∞ c_n t^{n+1}/(n + 1)   (|t| < R).
Exercise
1. Determine the real values of t, if any, at which the following functions fail to be
analytic: sin t, 1/t, 1/(t − π), 1/(t^2 + 1), (t − 2)/((5t + 4)(t − 3)).
The ratio test is a useful and simple method of determining the radius of
convergence of a power series.

Ratio Test
Consider the power series Σ_{n=0}^∞ c_n(t − a)^n. If lim_{n→∞} |c_{n+1}/c_n| exists and has
the value ρ, then the power series has radius of convergence R = 1/ρ if ρ ≠ 0,
and R = ∞ if ρ = 0, and the power series converges absolutely for |t − a| < R.
We remind the reader that the end points of the interval of convergence
must be investigated separately.
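A sketch of the ratio test applied to two familiar series (the function and variable names here are our own):

```python
import math
from fractions import Fraction

def ratio_estimates(coeffs):
    """Successive |c_{n+1}/c_n| ratios; if they converge to rho, the
    radius of convergence is R = 1/rho (R = infinity when rho = 0)."""
    return [abs(Fraction(coeffs[n + 1]) / Fraction(coeffs[n]))
            for n in range(len(coeffs) - 1)]

# Geometric series sum t^n: every ratio is 1, so R = 1.
geom = [1] * 10
# Exponential series sum t^n/n!: ratios 1/(n+1) -> 0, so R = infinity.
expo = [Fraction(1, math.factorial(n)) for n in range(10)]

print(ratio_estimates(geom)[-1])   # 1
print(ratio_estimates(expo)[-1])   # 1/9
```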
Another useful method of estimating the radius of convergence of the
power series Σ_{n=0}^∞ c_n(t − a)^n is the following.

Comparison Test
If there is a constant C > 0 such that |c_n| ≤ C|d_n| for n = 0, 1, 2, ..., and if
Σ_{n=0}^∞ d_n(t − a)^n converges for |t − a| < R, then Σ_{n=0}^∞ c_n(t − a)^n also converges
for |t − a| < R.
We will make frequent use of the following fact about analytic functions
whose power series begins with different powers of (t—a). We state the result
for two functions, but the extension to any number of functions is obvious.
Lemma 1. Let f and g be functions analytic at the point a, given by the power
series

f(t) = Σ_{k=p}^∞ c_k(t − a)^k,   g(t) = Σ_{k=q}^∞ d_k(t − a)^k,

respectively. Suppose p < q and c_p ≠ 0, d_q ≠ 0. (This ensures that the series
for f begins with a term in (t − a)^p and the series for g begins with a term in
(t − a)^q.) Then the functions f and g are linearly independent on every interval
I on which both their series expansions converge.
Proof. Suppose f and g are linearly dependent on I. Then there exist
constants A and B, not both zero, such that

Af(t) + Bg(t) = 0

for every t in I. Using property (v), we may add these power series term-by-term.
The resulting power series has the lowest-order term Ac_p(t − a)^p
6.3 Second-Order Linear Equations 217
(since p < q). Using property (iv), we conclude that A = 0. But then

Bg(t) = 0

for every t in I, which implies B = 0, again by property (iv). Thus A = B = 0,
which contradicts the hypothesis of linear dependence, and completes the
proof.
Example 1. Find the solution φ of the differential equation

y″ − ty = 0   (6.5)

satisfying the initial conditions φ(0) = a, φ′(0) = b. The existence and uniqueness
theorem for linear equations (Theorem 1, Section 3.1) tells us that this problem has a
unique solution φ which exists for all t. However, no method of solution that we have
studied will yield this solution in closed form. (We must ask the reader to accept this
fact.)
The power series method begins with the assumption that the solution φ is analytic
at t = 0. (If the initial conditions were given at t = t_0, we would begin by assuming that
φ is analytic at t = t_0.) This assumption means that φ(t) can be expanded in a power
series

φ(t) = c_0 + c_1 t + c_2 t^2 + ··· = Σ_{k=0}^∞ c_k t^k   (6.6)

which converges in some interval |t| < A, where the number A is positive and to be
determined. Our object is to determine the coefficients c_k in this power series so that
φ satisfies the equation (6.5) and the initial conditions φ(0) = a, φ′(0) = b.
If the power series (6.6) represents a solution of (6.5), we must be able to
differentiate it twice. Proceeding formally (that is, without worrying about convergence)
for the present, we obtain

φ′(t) = c_1 + 2c_2 t + ··· + k c_k t^{k−1} + ··· = Σ_{k=1}^∞ k c_k t^{k−1},
φ″(t) = 2c_2 + 3·2 c_3 t + ··· + k(k − 1) c_k t^{k−2} + ··· = Σ_{k=2}^∞ k(k − 1) c_k t^{k−2}.   (6.7)

Also

tφ(t) = c_0 t + c_1 t^2 + ··· + c_k t^{k+1} + ··· = Σ_{k=0}^∞ c_k t^{k+1}.   (6.8)
Using (6.7) and (6.8), we obtain

φ″(t) − tφ(t) = [2c_2 + 3·2 c_3 t + ··· + k(k − 1) c_k t^{k−2} + ···]
              − [c_0 t + c_1 t^2 + ··· + c_{k−3} t^{k−2} + ···]
            = 2c_2 + (3·2 c_3 − c_0) t + ···
            = 2c_2 + Σ_{k=3}^∞ [k(k − 1) c_k − c_{k−3}] t^{k−2},
where we obtain the general term by adding coefficients of the same power of t.
Before proceeding, we note that it is not necessary to write out the calculations as
explicitly as we have done above. From (6.7) and (6.8) again, we have

φ″(t) − tφ(t) = Σ_{k=2}^∞ k(k − 1) c_k t^{k−2} − Σ_{k=0}^∞ c_k t^{k+1}.   (6.9)

To combine these series, we observe that the first one begins with a constant term while
the second one begins with the term c_0 t. Therefore, we separate the constant term from
the first series. We must also rewrite one of the two series in such a way that the general
terms of both series contain the same power of t. To do this with the second series, we
let k + 1 = n − 2, or k = n − 3. Then the second series is Σ_{k=0}^∞ c_k t^{k+1} = Σ_{n=3}^∞ c_{n−3} t^{n−2}.
We also observe that the particular letter used for the index of summation is of no
importance, and we may use k in place of n again if we wish. We rewrite (6.9) as

φ″(t) − tφ(t) = 2c_2 + Σ_{k=3}^∞ k(k − 1) c_k t^{k−2} − Σ_{k=3}^∞ c_{k−3} t^{k−2}
            = 2c_2 + Σ_{k=3}^∞ [k(k − 1) c_k − c_{k−3}] t^{k−2},   (6.10)

which is the same result as that obtained before.
Assuming that the operations which led to (6.10) are justified, we see that φ, represented
by the series (6.6), is a solution of the differential equation (6.5) if and only if
the c_k satisfy the relation

2c_2 + Σ_{k=3}^∞ [k(k − 1) c_k − c_{k−3}] t^{k−2} = 0.   (6.11)
By the identity theorem for power series (property (iv), Section 6.2), the coefficient of
each power of t in (6.11) must vanish. Therefore

c_2 = 0,   k(k − 1) c_k − c_{k−3} = 0   (k = 3, 4, ...).   (6.12)

Relations such as (6.12) are said to determine the coefficients c_k recursively. Solving
these in succession, we find

c_2 = 0,   c_3 = c_0/(2·3),   c_4 = c_1/(3·4),
c_5 = 0,   c_6 = c_3/(5·6) = c_0/(2·3·5·6),   c_7 = c_4/(6·7) = c_1/(3·4·6·7),
c_8 = 0,   ...

and, in general,

c_{3m+2} = 0,
c_{3m} = c_0/(2·3·5·6 ··· (3m − 1)(3m)),
c_{3m+1} = c_1/(3·4·6·7 ··· (3m)(3m + 1))   (m = 1, 2, ...).
Thus all coefficients can be expressed in terms of c_0 and c_1; these in turn can be
determined from the initial conditions. If the solution φ of (6.5) is to satisfy the initial
conditions φ(0) = a, φ′(0) = b, we find that c_0 = a, c_1 = b, and, therefore, a candidate for
the solution is

φ(t) = a [1 + Σ_{m=1}^∞ t^{3m}/(2·3·5·6 ··· (3m − 1)(3m))]
     + b [t + Σ_{m=1}^∞ t^{3m+1}/(3·4·6·7 ··· (3m)(3m + 1))].   (6.13)
Exercises
1. Use mathematical induction to establish the formulas for c_{3m}, c_{3m+1}, and c_{3m+2}
above.
2. Show how (6.13) is obtained from the formulas of Exercise 1.
3. Use the ratio test to prove that each infinite series in (6.13) converges for |t| < ∞.
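The recursion (6.12) is easy to run with exact rational arithmetic. The sketch below generates the coefficients of the solution with a = 1, b = 0 and checks them against the closed form for c_{3m}:

```python
from fractions import Fraction

# Coefficients of the power-series solution of y'' - t y = 0 from the
# recursion c_2 = 0, c_k = c_{k-3} / (k(k-1)), with c_0 = a, c_1 = b.
def series_coeffs(c0, c1, nmax):
    c = [Fraction(c0), Fraction(c1), Fraction(0)]
    for k in range(3, nmax + 1):
        c.append(c[k - 3] / (k * (k - 1)))
    return c

c = series_coeffs(1, 0, 9)           # the solution with a = 1, b = 0
print(c[3], c[6], c[9])              # 1/6 1/180 1/12960

# Closed form c_{3m} = 1 / (2*3 * 5*6 * ... * (3m-1)(3m)):
denom = 1
for m in range(1, 4):
    denom *= (3*m - 1) * (3*m)
    assert c[3*m] == Fraction(1, denom)
```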
Using the convergence of the series established in Exercise 3, we now observe that
all the formal calculations beginning with (6.6) and leading to the solution (6.13) are
fully justified for −∞ < t < ∞ by applying the properties of power series given in Section
6.2. Moreover, the solutions φ_1, φ_2 represented by the power series

φ_1(t) = 1 + Σ_{m=1}^∞ t^{3m}/(2·3·5·6 ··· (3m − 1)(3m)),
φ_2(t) = t + Σ_{m=1}^∞ t^{3m+1}/(3·4·6·7 ··· (3m)(3m + 1))

are linearly independent by Lemma 1, Section 6.2. Therefore, if a and b are considered
as arbitrary constants, the general solution of Eq. (6.5) is given by aφ_1(t) + bφ_2(t) for
−∞ < t < ∞.
While the series in (6.13) converge for all t, only a small number of terms is needed
to give a good approximation to the solution for small values of t. This is of practical
importance in obtaining numerical approximations of solutions. For large values of
t (say, t = 10) the series converge too slowly to be of practical value. In this case, however,
the method of asymptotic expansions and numerical techniques are available which
make use of the values computed from the power series for small t. Some of these
techniques will be discussed in Section 6.11 and Chapter 9.
Exercises
Employing the method used in solving the equation (6.5), find the solution φ of each
of the following initial-value problems.
4. y″ + y = 0, φ(0) = a, φ′(0) = b. (Write the solution in closed form if you recognize
the resulting series.)
5. y″ − ty′ + y = 0, φ(0) = 1, φ′(0) = 0.
6. y″ − 2ty′ + 2ny = 0 (n an even integer, n = 2m), φ(0) = (−1)^m (2m)!/m!, φ′(0) = 0.
7. y″ − 2ty′ + 2ny = 0 (n an odd integer, n = 2m + 1), φ(0) = 0, φ′(0) = 2(−1)^m (2m + 1)!/m!.
Exercise
8. For which values of α does the equation y″ − 2ty′ + αy = 0 have solutions which
are polynomials in t? [Hint: Assume a power series solution and determine a
condition on α which causes the series to terminate.]
φ(t) = Σ_{k=0}^∞ c_k(t − t_0)^k

converges for at least those values of t for which the power series expansions
of p, q, and f in powers of (t − t_0) converge. The coefficients c_k may be determined
recursively by direct substitution.
recursively by direct substitution.
t is complex; the proofs of the results are more meaningful in the complex
domain.
You are warned that differential equations must be put in the form given
above (with leading coefficient 1) before the theorem can be applied.
Example 2. (Legendre equation) Consider the equation (1 − t^2) y″ − 2ty′ + α(α + 1) y
= 0, where α is a given constant. Determine whether this differential equation has a
series solution about t = 0.
To apply the theorem, we must divide the equation by 1 − t^2, and rewrite it as

y″ − [2t/(1 − t^2)] y′ + [α(α + 1)/(1 − t^2)] y = 0,

which is certainly justified if t ≠ ±1. Then

p(t) = −2t/(1 − t^2),   q(t) = α(α + 1)/(1 − t^2),   f(t) = 0,
and p and q are analytic at t = 0; in fact, they can be expanded in power series valid for
|t| < 1 as follows:

p(t) = −2t/(1 − t^2) = −2t(1 + t^2 + t^4 + ···) = −2 Σ_{k=0}^∞ t^{2k+1},
q(t) = α(α + 1)/(1 − t^2) = α(α + 1)(1 + t^2 + t^4 + ···) = α(α + 1) Σ_{k=0}^∞ t^{2k}.

Theorem 1 tells us that the Legendre equation has a unique analytic solution φ
satisfying any pair of initial conditions φ(0) = a, φ′(0) = b, and the power series expansion
of φ converges at least for |t| < 1.
Exercises
9. Compute the first 5 terms of the series expansion of φ if the initial conditions are
φ(0) = 1, φ′(0) = 0. Can you guess the general term? Show that if α is an even
nonnegative integer, α = 2m, then φ is a polynomial of degree 2m. Compute this
polynomial for m = 0, 1, 2, 3.
10. Repeat Exercise 9 for initial conditions φ(0) = 0, φ′(0) = 1. Show that if α is an
odd positive integer, α = 2m + 1, then φ is a polynomial of degree 2m + 1. Compute
this polynomial for m = 0, 1, 2.
Exercises
13. Compute ∫_{−1}^1 [P_0(t)]^2 dt, ∫_{−1}^1 [P_1(t)]^2 dt, ∫_{−1}^1 [P_2(t)]^2 dt.

It can be shown that in general ∫_{−1}^1 P_m(t) P_n(t) dt = 0 if m ≠ n, and
∫_{−1}^1 [P_n(t)]^2 dt = 2/(2n + 1) (n = 0, 1, 2, ...). The Legendre polynomials and
their properties play an important role in many physical problems having
spherical symmetry, including problems of potential theory and heat transfer.
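The integrals in Exercise 13 can be checked exactly. The sketch below builds P_0 through P_3 from the Bonnet recurrence (n + 1) P_{n+1}(t) = (2n + 1) t P_n(t) − n P_{n−1}(t), which is standard but not derived in this section, and integrates products of polynomials over [−1, 1]:

```python
from fractions import Fraction

def legendre(nmax):
    """Legendre polynomials P_0 .. P_nmax as coefficient lists (index = power of t),
    generated by the Bonnet recurrence."""
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]       # P_0 = 1, P_1 = t
    for n in range(1, nmax):
        a = [Fraction(0)] + [Fraction(2*n + 1, n + 1) * c for c in P[n]]  # (2n+1)/(n+1) t P_n
        b = [Fraction(n, n + 1) * c for c in P[n - 1]]                    # n/(n+1) P_{n-1}
        b += [Fraction(0)] * (len(a) - len(b))
        P.append([x - y for x, y in zip(a, b)])
    return P

def integral(p, q):
    """Exact value of the integral of p(t) q(t) over [-1, 1]."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            prod[i + j] += pi * qj
    # integral of t^k over [-1, 1] is 0 for odd k and 2/(k+1) for even k
    return sum(c * 2 / Fraction(k + 1) for k, c in enumerate(prod) if k % 2 == 0)

P = legendre(3)
print(integral(P[0], P[0]), integral(P[1], P[1]), integral(P[2], P[2]))  # 2 2/3 2/5
print(integral(P[1], P[2]))                                             # 0
```

The printed norms agree with the stated formula 2/(2n + 1), and distinct polynomials integrate to zero.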
Exercises
14. Consider the Legendre equation with α an even integer, α = 2m, namely,
(1 − t^2) y″ − 2ty′ + 2m(2m + 1) y = 0. In Exercise 9 above, the solution φ with
φ(0) = 1, φ′(0) = 0 was found to be a polynomial of degree 2m. Find a second
linearly independent solution valid in a neighborhood of t = 0.
15. Show that the second solution found in Exercise 14 in the form of a power series
converges for |t| < 1 but diverges for t = ±1.
16. Consider the Legendre equation with α an odd integer, α = 2m + 1, namely,
(1 − t^2) y″ − 2ty′ + (2m + 1)(2m + 2) y = 0. In Exercise 10 above, the solution φ with
φ(0) = 0, φ′(0) = 1 was found to be a polynomial of degree 2m + 1. Find a second
linearly independent solution valid in a neighborhood of t = 0.
17. Show that the second solution found in Exercise 16 in the form of a power series
converges for |t| < 1 but diverges for t = ±1.
These exercises show that the Legendre polynomials are the only bounded
solutions of the Legendre equation on −1 ≤ t ≤ 1 when α is an integer.
In fact, if α is not an integer, it is easily shown that no solution of the Legendre
equation is bounded on −1 ≤ t ≤ 1.
In applying Theorem 1 to a specific equation, we divide through by
the coefficient of y″ before applying the theorem. However, to solve the
equation it is usually easier to return to the original form to calculate the
series solution. For example, to find a series solution of (1 + t^2) y″ + y = 0, we
substitute φ(t) = Σ_{k=0}^∞ c_k t^k into this equation to give
Exercises
18. Apply Theorem 1 to each of the following differential equations and initial
6.4 Proof of Theorem on Solutions in Power Series 223
conditions when applicable, but do not solve the equation. In each case give the
interval of convergence of the series solution @ guaranteed by Theorem 1.
a) (1 + t^2) y″ + y = 0,   φ(0) = 1,   φ′(0) = 2
b) (1 − t^2) y″ + y = 0,   φ(1) = 1,   φ′(1) = 0
Exercises
In this section we shall prove Theorem 1, Section 6.3, in the special case
f(t) = 0. This is the case which commonly arises; we shall make some remarks
on the nonhomogeneous problem after the completion of the proof. The
proof is constructive, in the sense that it shows us how to proceed to find
a power series solution for any given second-order linear equation with
analytic coefficients, and is based on the useful method of majorants.
Thus we consider the differential equation

y″ + p(t) y′ + q(t) y = 0   (6.14)

and we seek the solution φ satisfying the initial conditions

φ(t_0) = y_0,   φ′(t_0) = z_0.   (6.15)

We assume that p and q are analytic at t_0. This means that we can write

p(t) = Σ_{k=0}^∞ p_k(t − t_0)^k,   q(t) = Σ_{k=0}^∞ q_k(t − t_0)^k,

with both series converging in some interval |t − t_0| < A, where A > 0. The
object is to see whether the solution φ of (6.14), (6.15), whose existence is
guaranteed by Theorem 1, Section 3.1, is analytic at t_0; that is, whether
Substituting the series for p, q, φ, φ′, and φ″ into Eq. (6.14), we obtain

φ″(t) + p(t) φ′(t) + q(t) φ(t)
  = Σ_{k=2}^∞ k(k − 1) c_k(t − t_0)^{k−2}
  + [Σ_{k=0}^∞ p_k(t − t_0)^k][Σ_{k=1}^∞ k c_k(t − t_0)^{k−1}]
  + [Σ_{k=0}^∞ q_k(t − t_0)^k][Σ_{k=0}^∞ c_k(t − t_0)^k] = 0,

and, equating the coefficient of each power of (t − t_0) to zero,

(k + 2)(k + 1) c_{k+2} + Σ_{m=0}^k [(m + 1) p_{k−m} c_{m+1} + q_{k−m} c_m] = 0   (k = 0, 1, 2, ...).   (6.17)
φ(t) = c_0 + c_1(t − t_0) − (1/2)(q_0 c_0 + p_0 c_1)(t − t_0)^2 + ···
lim_{k→∞} |a_k| r^k = 0, and therefore |a_k| r^k is bounded; that is, there exists a
constant M > 0, independent of k, such that |a_k| r^k ≤ M (k = 0, 1, 2, ...). (If
t is complex, this argument must be modified somewhat.)
The coefficients c_k are determined by the equations (6.17), which may
be written as

c_{k+2} = −[Σ_{m=0}^k ((m + 1) p_{k−m} c_{m+1} + q_{k−m} c_m)] / ((k + 2)(k + 1)).   (6.19)

Since the series Σ_{j=0}^∞ p_j(t − t_0)^j, Σ_{j=0}^∞ q_j(t − t_0)^j converge in |t − t_0| < A,
corresponding to every positive number r < A there is a constant M > 0
such that

|p_j| r^j ≤ M,   |q_j| r^j ≤ M   (j = 0, 1, 2, ...).   (6.20)

Using (6.20) in (6.19), we obtain

(k + 2)(k + 1) |c_{k+2}| ≤ (M/r^k) Σ_{m=0}^k [(m + 1)|c_{m+1}| + |c_m|] r^m + M|c_{k+1}| r,   (6.21)

where the positive term M|c_{k+1}| r added on the right side of (6.21) only
increases this side of the inequality and is needed in what follows. Now
we define C_0 = |c_0|, C_1 = |c_1|, and for k = 0, 1, 2, ..., we define C_{k+2} recursively
by

(k + 2)(k + 1) C_{k+2} = (M/r^k) Σ_{m=0}^k [(m + 1) C_{m+1} + C_m] r^m + MC_{k+1} r.   (6.22)

Comparison of (6.22) with (6.21) shows that 0 ≤ |c_k| ≤ C_k (k = 0, 1, 2, ...).
Exercise
1. Use induction to show that 0 ≤ |c_k| ≤ C_k.
Replacing k by k − 1 in (6.22), we have

(k + 1)k C_{k+1} = (M/r^{k−1}) Σ_{m=0}^{k−1} [(m + 1) C_{m+1} + C_m] r^m + MC_k r.   (6.23)

Using (6.22),

(M/r^k) Σ_{m=0}^k [(m + 1) C_{m+1} + C_m] r^m = (k + 2)(k + 1) C_{k+2} − Mr C_{k+1}.

On the other hand,

(M/r^k) Σ_{m=0}^k [(m + 1) C_{m+1} + C_m] r^m
  = (1/r)·(M/r^{k−1}) Σ_{m=0}^{k−1} [(m + 1) C_{m+1} + C_m] r^m + M(k + 1) C_{k+1} + MC_k
  = (1/r)[(k + 1)k C_{k+1} − MC_k r] + M(k + 1) C_{k+1} + MC_k,

and therefore
y"+p(t)
y +a(t) y=f()
let
N= Y Alero} ‘
It is easy to see that the coefficients c, in the series solution )'?.9 c;(to)*
are now determined by the equations
2-1¢.+ Pol: +4olCo=fo
3-2¢3+2poC2+ Pier +9001 + U1C0=/t
k
(k +2) (k+ 1) Cy42t 2 [(m+ 1) Py—mm+1 + Gx—mCm] = Si
Exercises
2. Derive the above set of equations for the coefficients c_k in the nonhomogeneous
case. [Hint: Give a development which parallels (6.14) to (6.17) in this case.]
3. Show that the c_k in Exercise 2 can be determined recursively from the above system.
Exercise
1. Deduce the existence of a unique analytic solution φ of (6.24) which satisfies the
initial conditions φ(t_1) = y_0, φ′(t_1) = z_0, where t_1 ≠ t_0. [Hint: Show that Theorem 1,
Section 6.3, can be applied; note that

1/(t − t_0) = 1/((t_1 − t_0) + (t − t_1)) = (1/(t_1 − t_0)) · 1/(1 + (t − t_1)/(t_1 − t_0)).]
w(s) = y(t_0 + e^s) = y(t),

and by the chain rule (using s = log(t − t_0))

dy/dt = (dw/ds)(ds/dt) = (1/(t − t_0)) dw/ds,

d^2y/dt^2 = (d^2w/ds^2)(ds/dt)^2 + (dw/ds)(d^2s/dt^2)
          = (1/(t − t_0))^2 d^2w/ds^2 − (1/(t − t_0))^2 dw/ds
          = (1/(t − t_0))^2 (d^2w/ds^2 − dw/ds).

Thus Eq. (6.24) becomes

d^2w/ds^2 + (a_1 − 1) dw/ds + a_2 w = 0.   (6.25)
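Since (6.25) has constant coefficients, trying y = t^z in the Euler equation leads to the indicial equation z(z − 1) + a_1 z + a_2 = 0, that is, z^2 + (a_1 − 1)z + a_2 = 0, whose roots are the exponents of the solutions t^z. A numerical sketch for the equation of Exercise 2(a) below, t^2 y″ + 2ty′ − 6y = 0:

```python
import math

# Euler equation t^2 y'' + a1 t y' + a2 y = 0 with a1 = 2, a2 = -6.
# Trying y = t^z gives z^2 + (a1 - 1) z + a2 = 0.
a1, a2 = 2.0, -6.0
disc = math.sqrt((a1 - 1)**2 - 4*a2)
z1, z2 = (-(a1 - 1) + disc) / 2, (-(a1 - 1) - disc) / 2
print(z1, z2)   # 2.0 -3.0, so t^2 and t^(-3) are solutions for t > 0

def residual(z, t):
    """Value of t^2 y'' + a1 t y' + a2 y for y = t^z."""
    return z*(z - 1)*t**z + a1*z*t**z + a2*t**z

for t in (0.5, 1.0, 2.0):
    assert abs(residual(z1, t)) < 1e-12 and abs(residual(z2, t)) < 1e-12
```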
Exercise
2. Find the general solution of each of the following differential equations valid near
t = 0.
a) t^2 y″ + 2ty′ − 6y = 0
==bry Fy +4y—0
c) 2t^2 y″ + ty′ − y = 0
d) t^2 y″ − (2 + i) ty′ + 3iy = 0
e) t^2 y″ − ty′ + y = 0
Exercise
3. Make up an example of a differential equation of the form (6.24) (with t_0 = 0 if you
wish) which has at least one solution analytic at t_0, and another example for which
every solution is analytic at t_0.
For the Euler equation, we see that the behavior of solutions when t is
near t_0 and when t is near t_1 ≠ t_0 is quite different. Every solution is analytic
at t_1 ≠ t_0 but not necessarily at t_0. This suggests that the point t = t_0 plays a
special role here. The point t = t_0 will be called a singular point of the
equation

a_0(t) y″ + a_1(t) y′ + a_2(t) y = 0   (6.26)

if a_0(t_0) = 0; the singular point t_0 is called a regular singular point of (6.26) if, in
addition, the functions p(t) = a_1(t)/a_0(t) and q(t) = a_2(t)/a_0(t)
have the property that (t − t_0) p(t) and (t − t_0)^2 q(t) are both analytic at t_0.
More generally, the point t_0 is called a regular singular point of the equation
(6.27) if it is a singular point and if

p_1(t) = a_1(t)/a_0(t),   p_2(t) = a_2(t)/a_0(t),   ...,   p_n(t) = a_n(t)/a_0(t)

have the property that (t − t_0) p_1(t), (t − t_0)^2 p_2(t), ..., (t − t_0)^n p_n(t) are all
analytic at t_0.
Examples
1. The Euler equation (6.24) is perhaps the simplest example of an equation which
has a regular singular point at t = t_0, because t_0 is a singular point and
p(t) = a_1/(t − t_0), q(t) = a_2/(t − t_0)^2 have the property that (t − t_0) p(t) = a_1 and
(t − t_0)^2 q(t) = a_2 are analytic everywhere, in particular at t_0.
2. The equation t^2 y″ + (3/2) ty′ + ty = 0 has t = 0 as a regular singular point because
p(t) = 3/(2t), q(t) = 1/t have the property that tp(t) = 3/2, t^2 q(t) = t are both analytic
everywhere, in particular at t = 0.
3. The equation (t − 1)^3 y″ + 2(t − 1)^2 y′ − 7ty = 0 does not have a regular singular
point at t = 1 because p(t) = 2/(t − 1), q(t) = −7t/(t − 1)^3 do not have the property
that (t − 1) p(t) = 2, (t − 1)^2 q(t) = −7t/(t − 1) are both analytic at t = 1.
If a singular point t = t_1 of Eq. (6.26) or (6.27) (such as t = 1 in Example 3
above) is not regular, then we say that t_1 is an irregular singular point. On
the other hand, points t = t_1 which are not singular points of an equation
(6.26) or (6.27) with coefficients analytic at t_1 are called ordinary points (or
regular points). Thus in Examples 1, 2, 3 above the points t = t_0, t = 0, t = 1,
respectively, are singular points, and all other finite values of t are ordinary
points. The points t = t_0 in Example 1 and t = 0 in Example 2 are regular sin-
gular points, while the point t = 1 in Example 3 is an irregular singular point.
Theorem 1, Section 6.3, describes completely the behavior of solutions in a
neighborhood of an ordinary point; our next task is to discuss the behavior
of solutions in a neighborhood of a singular point. As we shall see, it is
considerably easier to do this for regular singular points than for irregular
singular points, and we shall begin with the study of the former. Naturally it
is essential that, given a differential equation of the form (6.26) or (6.27), we
first locate and classify its singular points.
Exercises
Locate and classify all the singular points for finite values of t of each of the following
differential equations.
4. t^2 y″ + y′ = 0
5. (1 − t^2) y″ − 2ty′ + α(α + 1) y = 0 (Legendre equation)
9. y″ + (1/(1 + t)) y′ + 2y = 0
10. t(1 − t) y″ + [c − (a + b + 1)t] y′ − aby = 0 (hypergeometric equation, with a, b, c
constants)
It follows immediately from the definition given in Section 6.5 that the
equation

a_0(t) y″ + a_1(t) y′ + a_2(t) y = 0   (6.26)

has a regular singular point at t = t_0 if and only if (6.26) can be written in
the form

(t − t_0)^2 y″ + (t − t_0) α(t) y′ + β(t) y = 0,   (6.28)

where α(t) = (t − t_0) a_1(t)/a_0(t) and β(t) = (t − t_0)^2 a_2(t)/a_0(t) are analytic at
t_0, with at least one of the three numbers α(t_0), β(t_0), β′(t_0) different from zero.
(If all three of these numbers are zero, then (6.28) has (t − t_0)^2 as a factor and
t = t_0 is only apparently a singular point. If we divide (6.28) by (t − t_0)^2, the
resulting equation will have an ordinary point at t = t_0.) Notice that the Euler
equation (6.24) is of the form (6.28) with α(t) and β(t) constant functions.
Exercise
1. Show that the equation (6.27), Section 6.5, has a regular singular point at t = t_0 if
and only if the equation can be written in the form

(t − t_0)^n y^{(n)} + (t − t_0)^{n−1} α_1(t) y^{(n−1)} + ··· + (t − t_0) α_{n−1}(t) y′ + α_n(t) y = 0   (6.29)

with α_1, α_2, ..., α_n analytic at t_0.
If t_0 ≠ 0, the change of variable x = t − t_0 gives

dy/dx = y′(t_0 + x),   d^2y/dx^2 = y″(t_0 + x),

and therefore (6.28) becomes

x^2 d^2y/dx^2 + x α(t_0 + x) dy/dx + β(t_0 + x) y = 0.   (6.30)

This is of the same form as (6.28), but with x = 0 a regular singular point.
Conversely, if ψ(x) is a solution of (6.30), the function y(t) = ψ(t − t_0) is a
solution of (6.28). Thus (6.28) with a regular singular point at t = t_0 ≠ 0 and
(6.30) with a regular singular point at x = 0 are equivalent. The same
transformation t = x + t_0 may, of course, also be applied to (6.29). We will
therefore assume that such a preliminary simplification has already been
made, and we will consider the equation
Exercises
which is of the form (6.31) with α(t) = 1/2, β(t) = t^2/2. Since α and β are both analytic at
t = 0, t = 0 is a regular singular point. If α and β were both constants, then (6.32) would
be an Euler equation and would have at least one solution of the form |t|^z. To take
into account the fact that in this example β is not a constant, we try to find a solution of
the form |t|^z Σ_{k=0}^∞ c_k t^k (c_0 ≠ 0), where the constants z, c_k are determined by
substitution into the differential equation (6.32), and where the series Σ_{k=0}^∞ c_k t^k converges
on some interval about t = 0. Note that we cannot deduce the existence or uniqueness
of any solution with initial conditions prescribed at t = 0 from Theorem 1, Section 3.1
(why not?), let alone a solution of the above form. Since t = 0 is a singular point for
(6.32), we separate the cases t > 0 and t < 0. We consider first the case t > 0 and try as a
solution the function φ(t) = t^z Σ_{k=0}^∞ c_k t^k = Σ_{k=0}^∞ c_k t^{z+k}, which, upon substitution
into (6.32), leads to the expansion (6.33).
Therefore the equation (6.32) can be satisfied by the function φ(t) = t^z Σ_{k=0}^∞ c_k t^k for t > 0
only if the coefficient of every power of t on the right side of (6.33) vanishes. Since we
assumed c_0 ≠ 0, we must therefore have
* You are reminded about shifting indices in series (Section 6.3, Example 1).
6.6 Solutions about a Regular Singular Point 235
Exercise
5. Verify the formula for c,,, by induction.
Substituting these quantities into the assumed form of the solution φ(t)
= t^{1/2} Σ_{k=0}^∞ c_k t^k, we obtain as one candidate for a solution of the equation (6.32) for
t > 0 the function

    φ₁(t) = t^{1/2} [ 1 + Σ_{m=1}^∞ (−1)^m t^{2m} / (2·4···(2m)·5·9···(4m+1)) ].
We have chosen the arbitrary constant c₀ as 1, since it is merely a factor multiplying the
whole series. Similarly, taking the root z = 0 of the indicial equation f(z) = 0 we find that
c₀ is arbitrary, c₁ = 0, and f(k) c_k + ½c_{k−2} = 0 for k = 2, 3, …. Since f(k) = k(k − ½) ≠ 0
for k = 2, 3, …, we can write these relations as

    c_k = −c_{k−2} / (k(2k − 1))    k = 2, 3, …,

and we find

    c₁ = c₃ = ··· = c_{2m+1} = ··· = 0,
    c_{2m} = (−1)^m c₀ / (2·4···(2m)·3·7···(4m−1))    m = 1, 2, ….

Again taking c₀ = 1 we obtain as a second candidate for a solution of (6.32) for t > 0 the
function

    φ₂(t) = 1 + Σ_{m=1}^∞ (−1)^m t^{2m} / (2·4···(2m)·3·7···(4m−1)).
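These coefficient formulas are easy to check mechanically. The following sketch (Python; it assumes the reading 2t²y″ + ty′ + t²y = 0 of equation (6.32), so that f(z) = z(z − ½) and f(z+k)c_k + ½c_{k−2} = 0) runs the recursion in exact rational arithmetic and compares it with the closed-form products:

```python
from fractions import Fraction

# Recursion of Example 1 (assumed reading of (6.32)): f(z) = z(z - 1/2),
# f(z + k) c_k + (1/2) c_{k-2} = 0 for k >= 2, with c_0 = 1 and c_1 = 0.
def frobenius_coeffs(z, n_terms):
    f = lambda w: w * (w - Fraction(1, 2))
    c = [Fraction(1), Fraction(0)]
    for k in range(2, n_terms):
        c.append(-c[k - 2] / (2 * f(z + k)))
    return c

# Closed forms for the even coefficients c_{2m} derived in the text.
def c2m_root_half(m):          # root z = 1/2: 2*4*...*(2m) * 5*9*...*(4m+1)
    d = 1
    for j in range(1, m + 1):
        d *= 2 * j * (4 * j + 1)
    return Fraction((-1) ** m, d)

def c2m_root_zero(m):          # root z = 0: 2*4*...*(2m) * 3*7*...*(4m-1)
    d = 1
    for j in range(1, m + 1):
        d *= 2 * j * (4 * j - 1)
    return Fraction((-1) ** m, d)

c_half = frobenius_coeffs(Fraction(1, 2), 11)
c_zero = frobenius_coeffs(Fraction(0), 11)
```

With c₀ = 1 the recursion reproduces, for instance, c₂ = −1/10 for the root z = ½ and c₂ = −1/6 for z = 0.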
The approach now is to prove that the candidates φ₁ and φ₂ are in fact solutions of
(6.32) in some interval 0 < t < a and that φ₁ and φ₂ are linearly independent on this
interval. Before doing this we observe that, assuming the convergence of the relevant
236 Series Solutions of Linear Differential Equations —
series in some interval, the above calculations are all valid for t < 0 if t^z is replaced by
|t|^z = e^{z log|t|}.
Exercise
We thus have, for those t ≠ 0 for which the series converge absolutely,

    φ₁(t) = |t|^{1/2} [ 1 + Σ_{m=1}^∞ (−1)^m t^{2m} / (2·4···(2m)·5·9···(4m+1)) ]
                                                                        (6.34)
    φ₂(t) = 1 + Σ_{m=1}^∞ (−1)^m t^{2m} / (2·4···(2m)·3·7···(4m−1))
as candidates for solutions of (6.32). Next, we apply the ratio test to the series for φ₁(t).
We let

    u_m(t) = (−1)^m t^{2m} / (2·4···(2m)·5·9···(4m+1)).

Then

    |u_{m+1}(t) / u_m(t)| = t² / ((2m+2)(4m+5)) → 0    as m → ∞

for every t, so that the series for φ₁(t) converges for −∞ < t < ∞.
Exercise
7. Show that the series for φ₂(t) also converges for −∞ < t < ∞.
Now, because of the relevant properties of power series (Section 6.2), it is clear that
all of the calculations which lead from the assumption of a solution of the form
|t|^z Σ_{k=0}^∞ c_k t^k to the two candidates for solutions φ₁, φ₂ given by (6.34) are fully
justified for −∞ < t < 0 and for 0 < t < ∞. The value t = 0 must be omitted because the
differential equation (6.32) has no meaning at the singular point t = 0.
Exercise
8. Show that the solutions φ₁ and φ₂ given by (6.34) are linearly independent on
−∞ < t < 0 and 0 < t < ∞, and hence on any interval not containing t = 0.
[Hint: Modify the argument of Lemma 1, Section 6.2, to fit the present situation.]
Using the result of Exercise 8, we see that the general solution of (6.32) on any inter-
val which does not contain the origin is a₁φ₁(t) + a₂φ₂(t), where a₁ and a₂ are arbitrary
constants.
Exercise
9. Use the technique of Example 1 to find the general solution of the equation
ty″ + 3y′ + y = 0, and determine the interval of validity of this solution.
    = z² c₀ t^z + Σ_{k=1}^∞ [ (k+z)² c_k + c_{k−1} ] t^{k+z}

    = t^z [ f(z) c₀ + Σ_{k=1}^∞ { f(k+z) c_k + c_{k−1} } t^k ]

where f(z) = z². Thus, since (6.35) can have a solution of the assumed form only if the
coefficient of every power of t in this expression vanishes, we must have f(z) = 0 and

    f(k+z) c_k + c_{k−1} = 0    k = 1, 2, ….

The indicial polynomial in this example is f(z) = z², quadratic as in Example 1, but the
indicial equation has a double root z = 0. Since f(k) = k² ≠ 0 for k = 1, 2, …, we can solve
for the coefficients c_k from the recursive equations c_k = −c_{k−1}/k² (k = 1, 2, …).
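The recursion c_k = −c_{k−1}/k² with c₀ = 1 telescopes to c_k = (−1)^k/(k!)² (this anticipates Exercise 10 below); a minimal sketch:

```python
from fractions import Fraction
from math import factorial

# c_k = -c_{k-1}/k^2 (k = 1, 2, ...), c_0 = 1, telescopes to (-1)^k / (k!)^2.
def recursion_coeffs(n):
    c = [Fraction(1)]
    for k in range(1, n):
        c.append(-c[k - 1] / k ** 2)
    return c

coeffs = recursion_coeffs(10)
closed = [Fraction((-1) ** k, factorial(k) ** 2) for k in range(10)]
```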
Exercises
10. Determine the coefficients c_k and complete the derivation of the solution of the
assumed form as in Example 1.
11. Determine the interval of validity of this solution. (Notice that we can see
directly from the recursion formulas, without finding the coefficients, that
|c_{k+1} t^{k+1} / c_k t^k| = t/(k+1)² → 0 as k → ∞.)
Note that although the differential equation (6.35) makes no sense at the
singular point t = 0, the function defined by the series Σ_{k=0}^∞ c_k t^k in this case
is well defined at the singular point t = 0. This remark is important in
applications (see especially Section 6.9).
We see in this example that, because the indicial equation has a double
root, there is only one solution of the form |t|^z Σ_{k=0}^∞ c_k t^k. Since the differential
equation (6.35) is of the second order, it must have two linearly independent
solutions, though not necessarily both of the same form. We postpone to
Section 6.8 the finding of a second, linearly independent solution.
From Examples 1 and 2, we might suspect that whenever the indicial
equation corresponding to a regular singular point at the origin has distinct
roots, the differential equation has two linearly independent solutions of
the form |t|^z Σ_{k=0}^∞ c_k t^k, one corresponding to each root of the indicial
equation. However, the following example shows that this is not always
the case.
Example 3. The equation ty″ + ty′ − y = 0, which may be written as

    t² y″ + t² y′ − t y = 0    (6.36)
has t=0 as a regular singular point. Find the general solution, valid in an excluded
neighborhood of t=0.
In seeking a solution of the form φ(t) = |t|^z Σ_{k=0}^∞ c_k t^k (c₀ ≠ 0), we first restrict ourselves
to the case t > 0, and we find exactly as before

    t²φ″(t) + t²φ′(t) − tφ(t)
    = Σ_{k=0}^∞ (k+z)(k+z−1) c_k t^{k+z} + Σ_{k=0}^∞ (k+z) c_k t^{k+z+1} − Σ_{k=0}^∞ c_k t^{k+z+1}
    = t^z [ z(z−1) c₀ + Σ_{k=1}^∞ { (k+z)(k+z−1) c_k + (k+z−2) c_{k−1} } t^k ].
To see if there is a second solution of the assumed form, we consider the root
z=0 of the indicial equation. Now the recursion formulas (6.37) become
k(k—-l)q,+(k—2)cq,-,=0 k=1,2....
Taking k = 1, we see that c₁ must be determined from the relation 0·c₁ − c₀ = 0.
Since c₀ ≠ 0, this is impossible, and there can be no solution of the assumed form
corresponding to the root z=0. As in Example 2, a second, linearly independent,
solution of a different form may be found by a method to be studied in Section 6.8.
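The breakdown at the root z = 0 is visible if one runs the recursion (6.37) mechanically; a sketch (the `None` return marks the impossible step 0·c_k = −(k+z−2)c_{k−1} with a nonzero right side):

```python
# Recursion (6.37) of Example 3: (k+z)(k+z-1) c_k + (k+z-2) c_{k-1} = 0.
# For z = 1 every c_k (k >= 1) is forced to 0, so phi_1(t) = c_0 t; for
# z = 0 the step k = 1 reads 0 * c_1 = c_0 != 0, and the recursion fails.
def try_coeffs(z, n):
    c = [1.0]
    for k in range(1, n):
        lead = (k + z) * (k + z - 1)
        rhs = -(k + z - 2) * c[k - 1]
        if lead == 0:
            return None if rhs != 0 else c   # breakdown (or free coefficient)
        c.append(rhs / lead)
    return c

sol_z1 = try_coeffs(1, 8)
sol_z0 = try_coeffs(0, 8)
```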
Exercises
12. Find a formula for a second, linearly independent, solution of the equation (6.36)
by the method of Section 3.6.
Using the methods of the examples studied in this section, find as many linearly
independent solutions of the form |t|^z Σ_{k=0}^∞ c_k t^k (c₀ ≠ 0) as possible for each of the
following differential equations. Also find the interval of validity of each such
solution.
13. 2t²y″ + t(1+2t) y′ + 2y = 0
14. t²y″ + 4ty′ + 2(1+2t) y = 0
15. t²y″ + 2(1+2t) y′ − y = 0
16. t²y″ + (t² − ¾) y = 0
The examples of Section 6.6 suggest that for any linear second-order differ-
ential equation with a regular singular point at the origin there is at least
one solution of the form |t|^z u(t), where u(t) is analytic at t = 0. Indeed, let
the equation be

    L(y) = t² y″ + t α(t) y′ + β(t) y = 0    (6.31)

where α(t) = Σ_{k=0}^∞ α_k t^k, β(t) = Σ_{k=0}^∞ β_k t^k for |t| < r and not all the numbers
α₀, β₀, β₁ are zero (see Section 6.6). We consider first the case t > 0, and
we will show by the same formal procedure used in the examples of
Section 6.6 that (6.31) has at least one solution of the form φ(t) = t^z Σ_{k=0}^∞ c_k t^k
(c₀ ≠ 0), which might be called a generalized power series, whose coefficients c_k
may be computed recursively. It is also true (see Theorem 1 below) that this
series expansion is a valid representation of a solution on the "punctured"
interval 0 < |t| < r (that is, the interval −r < t < r with the center t = 0
removed). The proof of this last statement parallels the proof of the
corresponding statement in Theorem 1, Section 6.3, for an ordinary point,
and is carried out below.
Assuming the existence of a solution of the desired form on some punctured
interval, we substitute it into (6.31); collecting the coefficient of each power of t
yields the indicial equation f(z) = 0 and a recursive system of equations (6.38)
for the coefficients c_k. The system
(6.38) corresponding to the root z₁ can obviously be solved uniquely for
c_k in terms of c₀, c₁, …, c_{k−1} (k = 1, 2, …) if f(z₁ + k) ≠ 0 for every positive
integer k. Similarly, the system (6.38) corresponding to the root z₂ can be
solved uniquely for c_k in terms of c₀, c₁, …, c_{k−1} (k = 1, 2, …) if f(z₂ + k) ≠ 0
for every positive integer k. The resulting functions φ₁(t) = t^{z₁} Σ_{k=0}^∞ c_k t^k and
φ₂(t) = t^{z₂} Σ_{k=0}^∞ ĉ_k t^k, with coefficients determined by this procedure, are
candidates for solutions of (6.31) on the interval 0 < t < r. We shall call such
candidates for solutions formal solutions of (6.31). However, as we have seen
in Examples 2 and 3 of Section 6.6, we cannot always find formal solutions
for both the indices z₁, z₂.
We now label the roots z₁ and z₂ so that ℜz₁ ≥ ℜz₂, and we will show
that there is always a formal solution of (6.31) of the desired form correspond-
ing to the root z₁ with larger real part. Indeed, since z₁ and z₂ are the roots
of the indicial equation, the indicial polynomial f(z) can be written
f(z) = (z − z₁)(z − z₂). Thus

    f(z₁ + k) = k(k + z₁ − z₂) ≠ 0    k = 1, 2, …

since both factors k and (k + z₁ − z₂) have positive real parts. Therefore
the recursive formulas (6.38) with z = z₁ give c_k uniquely in terms of c₀, c₁, …, c_{k−1}
(c₀ ≠ 0), and can be solved recursively for c_k in
terms of c₀ (k = 1, 2, …). This procedure gives a formal solution φ₁(t) cor-
responding to the root z₁ in some interval with t > 0. Exactly as in the ex-
amples of Section 6.6, we verify that the above calculations are valid for
t < 0 if t^{z₁} is replaced by |t|^{z₁}.
We now examine the problem of finding a second, linearly independent,
solution corresponding to the index z₂. (Obviously, if z₁ = z₂, there is only
one solution of the desired form.) By the argument used to find the formal
solution corresponding to the index z₁, we need only check whether
f(z₂ + k) ≠ 0 for every positive integer k. But f(z₂ + k) = k(k − (z₁ − z₂)), and
it is clear that f(z₂ + k) = 0 for some positive integer k if and only if z₁ − z₂ = k.
Thus f(z₂ + k) ≠ 0 for every positive integer k if and only if the difference
between the two roots of the indicial equation is not a positive integer. If z₁ − z₂
is not a positive integer, we may solve the system (6.38) corresponding to z = z₂,
and we obtain a second formal solution φ₂(t) = |t|^{z₂} Σ_{k=0}^∞ ĉ_k t^k of (6.31).
If z₁ − z₂ is a positive integer m, then the following situation can occur.
The recursive formulas (6.38) corresponding to z = z₂ can certainly be
solved for c₁, c₂, …, c_{m−1} because f(z₂ + k) = k(k − m) ≠ 0 for k = 1, 2, …,
m − 1. Clearly, f(z₂ + m) = 0, but if it should happen that Σ_{j=0}^{m−1} [(j + z₂) α_{m−j} +
β_{m−j}] c_j = 0, then the equation (6.38) with z = z₂, k = m becomes 0·c_m = 0,
and is satisfied by an arbitrary constant c_m. We can continue the successive
calculation of c_{m+1}, c_{m+2}, … because f(z₂ + k) = k(k − m) ≠ 0 for k = m+1,
m+2, …. We will refer to this situation again in Section 6.8.
We may now summarize what our findings up to this point suggest.
    f(z) = z(z−1) + α₀ z + β₀ = 0

with ℜz₁ ≥ ℜz₂. Then there is a solution of the form

    φ₂(t) = |t|^{z₂} Σ_{k=0}^∞ ĉ_k t^k    (ĉ₀ = 1)

also in the punctured interval 0 < |t| < r. The coefficients ĉ_k are also determined
recursively from the equations (6.39), with z₁ replaced by z₂ and c_k replaced by ĉ_k.
It should be stressed that it is simpler in practice to substitute the
assumed form of the solution into the differential equation than to use the
recursive formulas (6.39) to solve for the coefficients.
We have not yet completed the proof of Theorem 1. It remains to be
shown that the series for φ₁(t) and φ₂(t) converge for 0 < |t| < r. Once this
has been done, it follows from the properties of power series that all the
calculations which lead from the assumption of the form of the solution to
the expressions for φ₁(t) and φ₂(t) are justified.
To complete the proof of Theorem 1, let us prove the convergence of
the series for φ₁(t) for |t| < r. The proof parallels the convergence proof
in Theorem 1, Section 6.3. Since f(z₁ + k) = k(k + (z₁ − z₂)), it is easy to see
that

    |f(z₁ + k)| ≥ k(k − |z₁ − z₂|).    (6.40)

Since the series Σ_{j=0}^∞ α_j t^j, Σ_{j=0}^∞ β_j t^j converge for |t| < r, by Cauchy's
inequality corresponding to every positive number ρ < r there is a constant
M > 0 such that

    k(k − |z₁ − z₂|) |c_k| ≤ M Σ_{j=0}^{k−1} (j + |z₁| + 1) |c_j| ρ^{j−k+1}.    (6.42)

Let N be the integer such that N − 1 ≤ |z₁ − z₂| < N, define C₀ = |c₀|,
C₁ = |c₁|, …, C_{N−1} = |c_{N−1}|, and then define C_k recursively for k ≥ N by

    k(k − |z₁ − z₂|) C_k = M Σ_{j=0}^{k−1} (j + |z₁| + 1) ρ^{j−k+1} C_j    k = N, N+1, ….    (6.43)
Comparison of (6.43) with (6.42) shows by induction (see Exercise 1,
Section 6.4) that 0 ≤ |c_k| ≤ C_k (k = 0, 1, 2, …). Replacing k by (k − 1) in (6.43),
we obtain

    (k−1)(k−1−|z₁−z₂|) C_{k−1} = M Σ_{j=0}^{k−2} (j + |z₁| + 1) ρ^{j−k+2} C_j.    (6.44)

Combining (6.43) with (6.44), we see that

    k(k − |z₁ − z₂|) C_k = [ ρ⁻¹ (k−1)(k−1−|z₁−z₂|) + M(k + |z₁|) ] C_{k−1}

and lim_{k→∞} C_k/C_{k−1} = 1/ρ. Thus, the ratio test shows that the series
Σ_{k=0}^∞ C_k t^k converges for |t| < ρ. This implies, by the comparison test, that
Σ_{k=0}^∞ c_k t^k also converges for |t| < ρ. Since this is true for every ρ < r, the
series Σ_{k=0}^∞ c_k t^k converges for |t| < r. Exactly the same argument can be used
to prove the convergence of the series for φ₂(t), and the proof of Theorem 1
is now complete. The reader should observe that the convergence proof is
similar to the convergence proof in Theorem 1, Section 6.3 (and in fact
contains the earlier convergence proof as a special case).
Example 1. Consider the equation t²y″ + (3/2)ty′ + ty = 0. Use Theorem 1 to discuss
the nature of solutions valid in a punctured neighborhood of t = 0.
Here t = 0 is a regular singular point, with α(t) = 3/2, β(t) = t; these functions are
obviously analytic at t = 0, and their power series expansions, being the functions
themselves, converge for |t| < ∞. The indicial equation is z(z−1) + (3/2)z = z² + z/2 = 0,
and thus z₁ = 0, z₂ = −½. Since z₁ − z₂ = ½ is not a positive integer, Theorem 1 tells us that
this differential equation has two linearly independent solutions φ₁(t) = 1 + Σ_{k=1}^∞ c_k t^k
and φ₂(t) = |t|^{−1/2} (1 + Σ_{k=1}^∞ ĉ_k t^k) valid for 0 < |t| < ∞. Observe also that in spite of the
fact that the differential equation is undefined at the singular point t = 0, one of the
solutions, namely φ₁, is analytic at t = 0.
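For this example the recursion of Theorem 1 reduces to f(z+k)c_k = −c_{k−1} with f(z) = z² + z/2 (assuming, as above, that the equation reads t²y″ + (3/2)ty′ + ty = 0). A numerical sketch that builds both truncated solutions and checks them against the differential equation by finite differences (the truncation length and test point are arbitrary choices):

```python
# f(z) = z(z-1) + (3/2) z = z^2 + z/2; recursion f(z+k) c_k = -c_{k-1}.
def frobenius(z, n):
    f = lambda w: w * w + 0.5 * w
    c = [1.0]
    for k in range(1, n):
        c.append(-c[k - 1] / f(z + k))
    return c

def phi(z, c, t):
    return sum(ck * t ** (z + k) for k, ck in enumerate(c))

def residual(z, c, t, h=1e-5):
    # t^2 y'' + (3/2) t y' + t y for the truncated series, central differences
    y = lambda s: phi(z, c, s)
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return t ** 2 * d2 + 1.5 * t * d1 + t * y(t)

c1 = frobenius(0.0, 25)    # root z1 = 0: an analytic solution
c2 = frobenius(-0.5, 25)   # root z2 = -1/2: a solution singular at t = 0
```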
Exercises
1. Write out the statement of Theorem 1 for the case of a regular singular point at
t = t₀. [Hint: Recall the discussion at the beginning of Section 6.6.]
2. Prove that if z₁ − z₂ is not a positive integer, the solutions φ₁ and φ₂ are linearly
independent.
3. Use Theorem 1 to determine the number of solutions of the form |t|^z Σ_{k=0}^∞ c_k t^k
(c₀ = 1) for each of the following, without solving the differential equation. Also, use
Theorem 1 to determine the region of validity of each solution, and whether each
solution is analytic at the singular point. Note that in parts (c), (e), (f) the nature of
the solution may depend on the values of constants, and discuss the various
possibilities.
a) t²y″ + ty′ + (t² − ¼) y = 0
b) 3t²y″ + 5ty′ + 3ty = 0
c) ty″ + (1 − t) y′ + γy = 0    (γ constant)
d) t²y″ + ty′ + (t² − 4) y = 0
e) t²y″ + ty′ + (t² − γ²) y = 0    (Bessel equation)
f) t(1 − t) y″ + [c − (a + b + 1) t] y′ − ab y = 0    (hypergeometric equation)
g) t²y″ + ty′ + (1 − t) y = 0
h) t²y″ + t eᵗ y′ + y = 0
4. For each equation in Exercise 3, find the solutions of the form |t|^z Σ_{k=0}^∞ c_k t^k
(c₀ = 1). For the equation in Exercise 3(c) show that this solution is a polynomial of
degree n, a constant multiple of which is called the Laguerre polynomial of degree
n, if the constant γ is the non-negative integer n.
5. Use the result of Exercise 1 to determine the number of solutions of the form
|t+1|^z Σ_{k=0}^∞ c_k (t+1)^k (c₀ = 1), and of the form |t−1|^z Σ_{k=0}^∞ c_k (t−1)^k (c₀ = 1), of the
Legendre equation (1 − t²) y″ − 2ty′ + α(α+1) y = 0. Do not calculate the coefficients;
see Exercise 6 below.
6. a) Find a solution of the equation (1 − t²) y″ − 2ty′ + α(α+1) y = 0 of the form
|t−1|^z Σ_{k=0}^∞ c_k (t−1)^k (c₀ = 1). [Hint: The algebra is easier if you do not multiply
by (1−t)/(1+t) to put the equation in the form (6.28); since t = 1 + (t−1) it is
easy to expand all the coefficients in powers of (t−1) and then substitute the
assumed form of the solution into the equation.]
b) Show that for certain values of α there exists a polynomial solution.
where

    f(z) = z(z−1) + α₀ z + β₀,    g_k(z) = −Σ_{j=0}^{k−1} [ (j+z) α_{k−j} + β_{k−j} ] c_j(z).
Exercise
1. Show by induction that c_k(z), defined by (6.46), is a rational function of z (that is,
a quotient of two polynomials in z).
    = φ₁(t) log t + t^{z₁} Σ_{k=0}^∞ c_k′(z₁) t^k

where φ₁(t) is the solution already found in Section 6.7. Since c₀(z) is constant,
c₀′(z₁) = 0, and the series Σ_{k=0}^∞ c_k′(z₁) t^k actually begins with a term in t.
Thus we may write

    φ₂(t) = φ₁(t) log t + t^{z₁} Σ_{k=1}^∞ c_k′(z₁) t^k.    (6.48)

For t negative, we must replace t^{z₁} by |t|^{z₁} and log t by log |t|.
Exercise
2. Show that c_k′(z₁) exists for k = 1, 2, …. [Hint: Use the result of Exercise 1 and the
fact that f(z₁ + k) ≠ 0 for k = 1, 2, ….]
The result of Exercise 2 shows that the series in (6.48) is well defined.
Our work suggests that in the case z₁ = z₂, (6.31) has a second, linearly
independent, solution of the form

    φ₂(t) = |t|^{z₁} Σ_{k=1}^∞ b_k t^k + φ₁(t) log |t|    (6.49)
with the coefficients b_k determined by substitution of (6.49) into (6.31).
In an actual problem (see Example 1 below) we first find the solution φ₁ as in
Theorem 1, Section 6.7, and then if z₂ = z₁ we assume a second solution of
the form (6.49), substitute it into the differential equation, and solve for the
coefficients b_k.
To justify (6.49) as a second solution, valid for 0<|t|<r (where the
6.8 Solutions about a Regular Singular Point 247
Exercise
3. Prove that the solutions φ₁(t) and φ₂(t) are linearly independent in 0 < |t| < r.
Example 1. Find two linearly independent solutions of the equation t²y″ + ty′ + ty = 0
valid near t = 0.
As we saw in Example 2 and Exercise 10, Section 6.6, the indicial equation is z² = 0,
which has z = 0 as a double root, and one solution is

    φ₁(t) = Σ_{k=0}^∞ ((−1)^k / (k!)²) t^k

valid for 0 < |t| < ∞. We now try to find a second solution of the form

    φ₂(t) = Σ_{k=1}^∞ b_k t^k + φ₁(t) log |t|.    (6.50)

For t > 0 we have

    φ₂′(t) = Σ_{k=0}^∞ (k+1) b_{k+1} t^k + φ₁′(t) log t + (1/t) φ₁(t)

    φ₂″(t) = Σ_{k=0}^∞ (k+1) k b_{k+1} t^{k−1} + φ₁″(t) log t + (2/t) φ₁′(t) − (1/t²) φ₁(t).
Substituting into the differential equation and using the fact that φ₁ is a solution, which
causes the coefficient of log t to vanish, we find
Exercise
Exercise
From Exercise 5 and the calculations preceding it, we see finally that for
t > 0, our second formal solution has the form φ₂(t) = t^{z₂} Σ_{k=0}^∞ c_k(z₂) t^k +
a φ₁(t) log t, where a is a constant and φ₁ is the solution corresponding to
the index z₁.
These findings suggest (see also Theorem 1 below) that (6.31) has, in the
case z₁ − z₂ = m > 0 an integer, a second, linearly independent, solution of
the form

    φ₂(t) = |t|^{z₂} Σ_{k=0}^∞ b_k t^k + a φ₁(t) log |t|    (6.53)

valid for 0 < |t| < r, where a is a constant (possibly zero), and where φ₁ is the
solution corresponding to the index z₁ given by Theorem 1, Section 6.7.
It is the form (6.53) which we use for actual calculations in practice, and we
determine the constants a, b_k by direct substitution into the given differential
equation. The procedure justifying (6.53) as a second solution valid for
0 < |t| < r would be the same as in Case 1 of this section. If the constant a
turns out to be zero, then the solution (6.53) reduces to the special case
mentioned earlier in which c_m may be chosen arbitrarily. To illustrate the
idea we consider a rather special problem.
Example 2. The equation t²y″ + t²y′ − ty = 0, discussed in Example 3, Section 6.6,
has z₁ = 1, z₂ = 0 as roots of the indicial equation corresponding to the regular singular
point t = 0, and the solution corresponding to the index z₁ = 1 is φ₁(t) = |t| (t ≠ 0). Since
z₁ − z₂ = 1, the expression (6.53) suggests that we should assume a second solution
φ₂(t) of the form

    φ₂(t) = Σ_{k=0}^∞ b_k t^k + a |t| log |t|.    (6.54)
For t > 0, substitution of (6.54) into the differential equation implies

    a t + a t² − b₀ t + Σ_{k=1}^∞ [ (k+1)k b_{k+1} + (k−1) b_k ] t^{k+1} = 0

and therefore

    a − b₀ = 0,    2b₂ + a = 0,
    k(k+1) b_{k+1} = −(k−1) b_k    k = 2, 3, ….
We may choose b₀ = 1, and then a = 1, b₂ = −½, b₃ = 1/12, …. We note that b₁ is left
undetermined; this is because φ₁ is a solution of the differential equation and therefore
any multiple of φ₁ is a solution. In particular, we may take b₁ = 0. Substitution of these
results into (6.54) gives the second solution for t > 0.
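The computation in Example 2 can be confirmed numerically. A sketch, using the relations a = b₀ = 1, b₁ = 0, 2b₂ + a = 0, and k(k+1)b_{k+1} = −(k−1)b_k obtained above (termwise derivatives are exact, so the only error is truncation and roundoff):

```python
import math

N = 30
b = [0.0] * (N + 1)
a = 1.0                       # a = b_0 (coefficient of t log t)
b[0] = 1.0
b[1] = 0.0                    # b_1 may be chosen freely; we take 0
b[2] = -a / 2                 # from 2 b_2 + a = 0
for k in range(2, N):
    b[k + 1] = -(k - 1) * b[k] / (k * (k + 1))

def phi2(t):
    return sum(b[k] * t ** k for k in range(N + 1)) + a * t * math.log(t)

def lhs(t):   # t^2 y'' + t^2 y' - t y, with exact termwise derivatives
    d1 = sum(k * b[k] * t ** (k - 1) for k in range(1, N + 1)) + a * (math.log(t) + 1)
    d2 = sum(k * (k - 1) * b[k] * t ** (k - 2) for k in range(2, N + 1)) + a / t
    return t ** 2 * d2 + t ** 2 * d1 - t * phi2(t)
```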
Exercises
6. Show that formally we obtain the same solution for t < 0 from (6.54).
7. For the differential equation in Example 2, obtain a general formula for b_k and
show that the solution φ₂ is valid for 0 < |t| < ∞.
8. In Example 2 find the solution φ₂ by the method of Section 3.6.
    φ₂(t) = |t|^{z₁} Σ_{k=1}^∞ b_k t^k + φ₁(t) log |t|

valid for 0 < |t| < r, whose coefficients c_k, b_k may be determined by direct sub-
stitution in the equation (6.31).
If z₁ − z₂ is a positive integer m, there are two linearly independent solu-
tions φ₁, φ₂ of the form

    φ₁(t) = |t|^{z₁} Σ_{k=0}^∞ c_k t^k    (c₀ = 1)

    φ₂(t) = |t|^{z₂} Σ_{k=0}^∞ b_k t^k + a φ₁(t) log |t|

valid for 0 < |t| < r, where a is a constant (possibly zero) and the coefficients
c_k, b_k may be determined recursively by direct substitution into the equation
(6.31).
Exercises
    φ₁(t) = 1 + (ab/c) t + ( a(a+1) b(b+1) / (2! c(c+1)) ) t² + ···

          = 1 + Σ_{k=1}^∞ ( a(a+1)···(a+k−1) b(b+1)···(b+k−1) / (k! c(c+1)···(c+k−1)) ) t^k

          = F(a, b, c, t)    (|t| < 1)

    φ₂(t) = |t|^{1−c} F(a−c+1, b−c+1, 2−c, t)    0 < |t| < 1.

(For further details concerning the hypergeometric equation see [12] or [17].)
14. Repeat Exercise 13 relative to the singular point t = 1.
15. For each of the following differential equations, determine the roots of the indicial
equation at t = 0, the form of the general solution, and the region of validity of the
general solution as given by the appropriate theorem.
a) 2ty″ + y′ − y = 0
b) ty″ + 3y′ − t³y = 0
c) (3t² + t³) y″ − ty′ + y = 0
d) t²(1 − t) y″ − 2y′ + 2y = 0
e) ty″ + (1 − t) y′ + qy = 0    (q constant)
f) ty″ + (1 − t) y′ + my = 0    (m a positive integer)
g) t²y″ + 2ty′ + ty = 0
16. Find the general solution valid in a neighborhood of t = 0 of each of the dif-
ferential equations in Exercise 2.
17. Find the general solution valid in some neighborhood of the indicated singular
point of each of the following differential equations, and give the interval on which
it is valid.
a) (1 − t²) y″ − 2ty′ + α(α+1) y = 0 (Legendre equation); t = −1.
b) t(1 − t) y″ + [c − (a + b + 1) t] y′ − ab y = 0 (hypergeometric equation); t = 1.
18. a) Find the general solution of the confluent hypergeometric equation

    t y″ + (c − t) y′ − a y = 0

valid near t = 0, assuming that c is not an integer.
b) Define M(a, c; t) to be that solution of the equation in part (a) which is analytic
at t = 0 and has the value 1 at t = 0. Show that the general solution found in part
(a) is

    φ(t) = c₁ M(a, c; t) + c₂ |t|^{1−c} M(1 + a − c, 2 − c; t)

if c is not an integer.
c) Obtain the general solution of the equation in part (a) when a = 1, c = 1.
d) Obtain the general solution of the equation in part (a) when a = 1, c = 0.
e) Obtain the form of the general solution of the equation in part (a) valid for
large t.
We conclude this section with a few remarks about the case that t = 0 is a
regular singular point of a general nth-order equation. According to the definition in
Section 6.5, we may write such an equation in the form

    L(y) = tⁿ y⁽ⁿ⁾ + t^{n−1} α₁(t) y⁽ⁿ⁻¹⁾ + ··· + t α_{n−1}(t) y′ + αₙ(t) y = 0    (6.55)

where α₁, α₂, …, αₙ are analytic at t = 0. It is apparent that the methods of
Section 6.7 and this section are applicable to (6.55). Naturally, the situation
can now be considerably more complicated. We again assume a solution of
the form φ(t) = |t|^z Σ_{k=0}^∞ c_k t^k (c₀ = 1). Formal substitution into (6.55) leads to
the indicial polynomial

    fₙ(z) = z(z−1)···(z−n+1) + α₁(0) z(z−1)···(z−n+2) + ··· + αₙ(0)

of degree n, and g_k(z) is a linear homogeneous expression in c₀, c₁, …, c_{k−1}
as in the second-order case. Obviously φ is a formal solution of (6.55) if the
coefficients c_k are determined recursively from the relation fₙ(z+k) c_k = g_k(z)
(which can certainly be done if fₙ(z+k) ≠ 0 for k = 1, 2, …), and if z is a root
of the indicial equation fₙ(z) = 0. There are n roots of the indicial equation;
if they are all distinct and no two of them differ by an integer, there will be n
linearly independent formal solutions of the assumed form. This leads to an
extension of Theorem 1, Section 6.7, to the case of the equation (6.55). The
devices used to obtain Theorem 1 for the exceptional case of equal roots and
roots differing by a positive integer in the second-order case can also be
adapted to this more general case. The interested reader is referred to more
advanced books such as [7], pp. 132-135, or [18]. To examine one very
trivial case of (6.55) when n = 1, it is suggested that the reader find a series
solution valid near t = 0 of the equation ty′ + α(t) y = 0, where α is analytic
at t = 0 and has a series expansion valid for |t| < r, with some r > 0.
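For the suggested first-order case, take the hypothetical truncated coefficient α(t) = α₀ + α₁t. Substituting y = t^{−α₀} Σ_{k≥0} c_k t^k into ty′ + α(t)y = 0 gives k c_k + α₁c_{k−1} = 0, so c_k = (−α₁)^k/k!, the series of t^{−α₀} e^{−α₁t}. A sketch with α₁ = 3:

```python
from fractions import Fraction
from math import factorial

a1 = Fraction(3)                   # hypothetical alpha(t) = a0 + 3t
c = [Fraction(1)]                  # c_0 = 1
for k in range(1, 10):
    c.append(-a1 * c[k - 1] / k)   # from k c_k + a1 c_{k-1} = 0
```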
so that

    c₂ = −c₀ / (2²(p+1)),    c₄ = −c₂ / (2²·2(p+2)) = c₀ / (2⁴ 2! (p+1)(p+2)), …,

    c_{2m} = (−1)^m c₀ / (2^{2m} m! (p+1)(p+2)···(p+m))    m = 1, 2, ….
Exercise

1. Verify the formula for c_{2m} by induction, and show that the resulting solution may
be written

    φ₁(t) = c₀ |t|^p Σ_{m=0}^∞ ( (−1)^m / (2^{2m} m! (p+1)(p+2)···(p+m)) ) t^{2m}.    (6.59)
To define the Bessel functions in the usual way, we must make a particular
choice of c₀, and for this purpose we need to define the gamma function,
which generalizes the notion of the factorial. This function, denoted by Γ,
is given by the relation

    Γ(z) = ∫₀^∞ e^{−x} x^{z−1} dx    (6.60)

and elementary tests for improper integrals show that this function is well
defined and continuous for z > 0. We observe that Γ(1) = ∫₀^∞ e^{−x} dx = 1,
6.9 Bessel Equation; Some Properties of Bessel Functions 255
and Γ(½) = ∫₀^∞ e^{−x} x^{−1/2} dx = 2 ∫₀^∞ exp(−y²) dy = √π.* Integration by parts
gives the recursion formula Γ(z) = (z−1) Γ(z−1) for z > 1. Applying this formula
repeatedly, we have

    Γ(z) = Γ(z+k) / ( z(z+1)···(z+k−1) ).

Doing this for k = 1, 2, … we may define Γ(z) for all complex z except
z = 0, −1, −2, …. With the aid of (6.60) for z > 0 and using this definition
for z < 0, the reader will easily see that for real z the graph of Γ(z) is as given
in Fig. 6.2.
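The extension by the recursion formula can be sketched in code; Python's math.gamma implements the same extension, so the two must agree at non-integer arguments:

```python
import math

# Gamma(z) = Gamma(z+k) / (z (z+1) ... (z+k-1)), which defines Gamma for
# z < 0 other than z = 0, -1, -2, ...; compared against math.gamma.
def gamma_ext(z, k=5):
    denom = 1.0
    for j in range(k):
        denom *= z + j
    return math.gamma(z + k) / denom
```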
We now define c₀ in the solution φ₁ given by (6.59) as

    c₀ = 1 / (2^p Γ(p+1)).    (6.61)

The resulting function, denoted by J_p and called the Bessel function of the
first kind of index p, is given by

    J_p(t) = Σ_{m=0}^∞ ( (−1)^m / (m! Γ(m+p+1)) ) (t/2)^{2m+p}.    (6.62)
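A truncated evaluation of (6.62) is immediate; the sketch below also checks it against the classical half-integer identities J_{1/2}(t) = (2/(πt))^{1/2} sin t and J_{−1/2}(t) = (2/(πt))^{1/2} cos t (representations of this kind are the subject of the exercises later in this section):

```python
import math

# Truncated series (6.62): J_p(t) = sum_m (-1)^m/(m! Gamma(m+p+1)) (t/2)^(2m+p).
def J(p, t, terms=30):
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + p + 1))
               * (t / 2) ** (2 * m + p) for m in range(terms))
```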
* To evaluate the integral I = ∫₀^∞ exp(−y²) dy one uses the following trick:

    I² = ∫₀^∞ ∫₀^∞ exp(−(x² + y²)) dx dy = ∫₀^{π/2} ∫₀^∞ e^{−r²} r dr dθ = π/4.

Thus I = √π/2. These steps can be justified by methods of advanced calculus [6, p. 149].
Figure 6.2
This function is well defined for all t and satisfies the differential equation
(6.56) for 0 < |t| < ∞.
Exercise
2. Show that all calculations which led to the solution φ₁ and then to J_p(t) carry over
to z₂ = −p with no change, provided 2p is not an integer, so that

    J_{−p}(t) = Σ_{m=0}^∞ ( (−1)^m / (m! Γ(m−p+1)) ) (t/2)^{2m−p}.    (6.63)
Exercises
3. Let φ be a solution of the Bessel equation (6.56). Show that the function ψ defined
by ψ(t) = |t|^{1/2} φ(t) satisfies the equation

    ψ″ + [ 1 + (¼ − p²)/t² ] ψ = 0.    (6.64)
4. Show that
6. Show that

    J_{1/2}(t) = (2/(πt))^{1/2} sin t,    J_{−1/2}(t) = (2/(πt))^{1/2} cos t

for 0 < t < ∞. [Hint: Use Exercises 4, 5, rather than solving the Bessel equation
of index ½ directly.]
With the aid of Exercises 4 and 5 above, one can, as is done in Exercise
6, obtain representations of J_{n+1/2}(t) and J_{−n−1/2}(t), where n is a positive integer,
in terms of sin t and cos t.
The cases p = 0 and p a positive integer in equation (6.56) still remain
to be treated. In the case p = 0, the indicial equation z² = 0 has zero as a double
root, and Theorem 1, Section 6.7, and Theorem 1, Section 6.8, give the existence of
two linearly independent solutions, the first of which is

    J₀(t) = Σ_{m=0}^∞ ( (−1)^m / (m!)² ) (t/2)^{2m}.    (6.65)

Notice that the function J₀(t) is analytic at t = 0 even though the differential
equation (6.56) makes no sense there. To find a second solution, we
take 0 < t < ∞ and let

    φ₂(t) = Σ_{k=1}^∞ b_k t^k + J₀(t) log t.

We then compute

    φ₂′(t) = Σ_{k=0}^∞ (k+1) b_{k+1} t^k + J₀′(t) log t + (1/t) J₀(t)

    φ₂″(t) = Σ_{k=0}^∞ (k+1) k b_{k+1} t^{k−1} + J₀″(t) log t + (2/t) J₀′(t) − (1/t²) J₀(t).
Substituting into (6.56) with p = 0 and using the fact that J₀ is a solution, we find

    b₁ t + Σ_{k=2}^∞ ( k² b_k + b_{k−2} ) t^k = −2t J₀′(t) = Σ_{m=1}^∞ ( (−1)^{m+1} 4m / (2^{2m} (m!)²) ) t^{2m}

(with b₀ = 0). Observing that the right side of this equation contains only even powers of
t, we obtain

    b₁ = 0,    b₂ = ¼,    3² b₃ + b₁ = 0, …
and finally

    b_{2m+1} = 0,    b_{2m} = ( (−1)^{m+1} / (2^{2m} (m!)²) ) ( 1 + ½ + ··· + 1/m )    m = 1, 2, ….
Thus we may define as the second solution of (6.56), with p = 0, the function
φ₂, usually denoted by K₀, given by

    K₀(t) = Σ_{m=1}^∞ ( (−1)^{m+1} / (2^{2m} (m!)²) ) ( 1 + ½ + ··· + 1/m ) t^{2m} + J₀(t) log t.    (6.66)
Clearly, (6.66) is also a solution of (6.56) with p = 0 for t < 0 if we replace
log t by log |t|. The solution K₀ is called the Bessel function of the second
kind of index zero. Thus we have two linearly independent solutions
J₀(t) and K₀(t) of (6.56) with p = 0 on any interval not including t = 0.
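As a consistency check on (6.66) (with the harmonic-sum coefficients b_{2m} = (−1)^{m+1}(1 + ½ + ··· + 1/m)/(2^{2m}(m!)²) as above), the following sketch verifies numerically that the truncated K₀ satisfies t²y″ + ty′ + t²y = 0 up to truncation and roundoff, using exact termwise derivatives:

```python
import math

M = 25
fact2 = [math.factorial(m) ** 2 for m in range(M)]

def J0(t):
    return sum((-1) ** m / fact2[m] * (t / 2) ** (2 * m) for m in range(M))

def dJ0(t):
    return sum((-1) ** m * 2 * m / fact2[m] * (t / 2) ** (2 * m) / t
               for m in range(1, M))

def d2J0(t):
    return sum((-1) ** m * 2 * m * (2 * m - 1) / fact2[m] * (t / 2) ** (2 * m) / t ** 2
               for m in range(1, M))

H = [0.0]                     # harmonic numbers H_m = 1 + 1/2 + ... + 1/m
for m in range(1, M):
    H.append(H[-1] + 1.0 / m)
b = [(-1) ** (m + 1) * H[m] / (2 ** (2 * m) * fact2[m]) for m in range(M)]

def K0(t):
    return sum(b[m] * t ** (2 * m) for m in range(1, M)) + J0(t) * math.log(t)

def resid(t):                 # t^2 y'' + t y' + t^2 y at y = K0
    d1 = (sum(2 * m * b[m] * t ** (2 * m - 1) for m in range(1, M))
          + dJ0(t) * math.log(t) + J0(t) / t)
    d2 = (sum(2 * m * (2 * m - 1) * b[m] * t ** (2 * m - 2) for m in range(1, M))
          + d2J0(t) * math.log(t) + 2 * dJ0(t) / t - J0(t) / t ** 2)
    return t ** 2 * d2 + t * d1 + t ** 2 * K0(t)
```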
If p is a positive integer n, then it is easily verified from (6.62) and
(6.63) that the functions J_n and J_{−n} are linearly dependent; in fact, the
following relation holds between them.
Exercise
7. Show that J_{−n}(t) = (−1)ⁿ J_n(t) when n is a positive integer.
We also know that in this case the indicial equation has roots n and −n,
and that the solution φ₁(t) = J_n(t) of (6.56) with p = n corresponding to the
root n exists and is given by (6.62). However, the recursion formulas
corresponding to the root −n are c₁ = 0, f(−n+k) c_k + c_{k−2} = 0 (k = 2, 3, …)
(see Exercise 2 and the paragraph which follows it). Since f(−n+k)
= (−n+k)² − n² = k(k − 2n), the calculation of the coefficients c_k breaks
down for k = 2n. We must therefore use Theorem 1, Section 6.8, to find a
second, linearly independent, solution. Accordingly, we substitute the func-
tion given by
Exercises
8. Let L be the operator defined by (6.56). By considering L[φ₂(t)] = 0, show that the
coefficients b₁, b₂, …, b_{2n−1} are given by b₁ = 0, k(k − 2n) b_k + b_{k−2} = 0 (k = 2,
3, …, 2n−1); and therefore

    b₁ = b₃ = ··· = b_{2n−1} = 0,    b_{2m} = b₀ / (2^{2m} m! (n−1)(n−2)···(n−m))    m = 1, 2, …, n−1.
9. By examining the coefficient of t^{2n} in L[φ₂(t)] = 0, show that

    a = −b₀ / (2^{n−1} (n−1)!).

10. By examining the coefficients of t^{2n+1}, t^{2n+3}, … in L[φ₂(t)] = 0, show that
b_{2n+1} = b_{2n+3} = ··· = 0, that b_{2n} is undetermined, and that 2m(2m + 2n) b_{2n+2m} +
b_{2n+2m−2} = −2a(n + 2m) d_{2m} (m = 1, 2, …), where d_{2m} is the coefficient of t^{n+2m} in
the expansion J_n(t) = Σ_{m=0}^∞ d_{2m} t^{n+2m}.
11. Show that, with the choice b₀ = −2^{n−1}(n−1)!, the constant a = 1, and that the
resulting second solution

    φ₂(t) = K_n(t)    (6.67)

is analogous in form to (6.66), containing the term J_n(t) log t together with a series in
powers of t beginning with the term b₀ t^{−n}.
Theorem 2. If p = 0, J₀(t) and K₀(t), given by (6.65) and (6.66) respect-
ively, are two linearly independent solutions of the Bessel equation (6.56) on
any interval not containing the origin. If p is a positive integer n, then J_n(t)
and K_n(t), given by (6.62) and (6.67) respectively, are two linearly independent
solutions of the Bessel equation (6.56) on any interval not containing the
origin.
Exercises
17. Let φ be a solution of y″ + p(t) y = 0 which is not identically zero on a ≤ t ≤ b, and
let ψ be a solution of y″ + q(t) y = 0 which is not identically zero on a ≤ t ≤ b. Suppose
that p and q are both continuous on a ≤ t ≤ b, and that q(t) > p(t) for a ≤ t ≤ b. Prove
that if t₁, t₂ are successive points in a ≤ t ≤ b at which φ = 0, then there exists a point
ξ, t₁ < ξ < t₂, such that ψ(ξ) = 0. [Hint: Suppose (without loss of generality) that
φ(t) > 0 for t₁ < t < t₂ and that ψ(t) > 0 for t₁ < t < t₂. From the differential equations,
(ψφ′ − φψ′)′ = ψφ″ − φψ″ = [q(t) − p(t)] φ(t) ψ(t). Integrate from t₁ to t₂. Since,
by hypothesis, ∫_{t₁}^{t₂} [q(t) − p(t)] φ(t) ψ(t) dt > 0, we obtain ψ(t₂) φ′(t₂) − ψ(t₁) φ′(t₁) > 0,
from which we draw a contradiction.]
*18. Show that for every p > 0, J_p(t) has an infinite number of zeros on 0 < t < ∞.
[Hint: Combine the results of Exercises 3 and 17, where for t > t₀ the equation (6.64)
satisfied by |t|^{1/2} J_p(t) may be compared with the equation w″ + ¼w = 0 if t₀ is sufficient-
ly large, and every solution of w″ + ¼w = 0 has infinitely many zeros on 0 < t < ∞.
Then apply Exercise 17 with p(t) = ¼, q(t) = 1 + (¼ − p²)/t² for t > t₀.]
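Numerically, the conclusion of Exercise 18 is easy to observe for p = 0 (a sketch; J₀'s first three zeros lie near 2.40, 5.52, and 8.65, so J₀ changes sign three times on (0, 10]):

```python
import math

# Truncated series (6.65) for J_0; adequate precision on (0, 10].
def J0(t, terms=40):
    return sum((-1) ** m / math.factorial(m) ** 2 * (t / 2) ** (2 * m)
               for m in range(terms))

ts = [0.1 * i for i in range(1, 101)]             # grid on (0, 10]
vals = [J0(t) for t in ts]
sign_changes = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
```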
19. a) Show that if p ≥ 0, λ > 0, then φ_λ(t) = √t J_p(λt) satisfies the equation

    y″ + [ (¼ − p²)/t² ] y = −λ² y.

[Hint: Use the equation (6.64).]
b) Prove that (λ² − μ²) ∫₀¹ φ_λ(t) φ_μ(t) dt = [ φ_λ(t) φ_μ′(t) − φ_μ(t) φ_λ′(t) ]₀¹. [Hint: Form
(φ_μ φ_λ′ − φ_λ φ_μ′)′ = φ_μ φ_λ″ − φ_λ φ_μ″ and use the differential equation.]
20. Prove that ∫₀¹ t J_p(λt) J_p(μt) dt = 0 if λ ≠ μ whenever λ and μ are positive zeros of
J_p. [Hint: Use Exercise 19.]
21. Prove that ∫₀¹ t J_p²(λt) dt = ½ [J_p′(λ)]², where p ≥ 0, λ > 0, and J_p(λ) = 0. [Hint: (i) Show
that y(t) = J_p(λt) satisfies the equation of Exercise 23 below. (ii) Show that this is
equivalent to the equation (ty′)′ + (λ²t − p²/t) y = 0, or, on multiplication by 2ty′,
to the equation (d/dt)(ty′)² + (λ²t² − p²)(d/dt)(y²) = 0. (iii) Integrate from 0 to 1,
using y(0) = y(1) = 0, y′(t) = λ J_p′(λt), and integration by parts.]
Exercises 20 and 21, with p = 0, and Exercise 12 give the useful formula

    ∫₀¹ t J₀(λt) J₀(μt) dt = 0 if λ ≠ μ,    ∫₀¹ t J₀²(λt) dt = ½ J₁²(λ),

where λ and μ are positive zeros of J₀.
A proof of the fact that this limit exists may be found in [6, p. 189].
Exercise
22. Show that J_p and Y_p are linearly independent solutions of (6.56) on any interval
excluding the origin.
Exercises
25. Show that one solution of y″ + k t^m y = 0 (m ≠ −2) is

    t^{1/2} J_{1/(m+2)} ( (2√k / (m+2)) t^{(m+2)/2} );

what is the general solution?
26. Show that one solution of (tⁿ y′)′ + k t^m y = 0 is t^{(1−n)/2} J_p(√k t^s / s), where s = (m − n +
2)/2, p = (1 − n)/(2s), n ≠ m + 2. What is the general solution?
27. Show that one solution of t y″ + y′ + a y = 0 is J₀(2(at)^{1/2}). What is the general
solution?
Near t=0:

J_p(t) = (t/2)^p/Γ(p+1) + O(t^{p+2}),  p ≥ 0

Y₀(t) = (2/π) log|t| + O(1)

Y_n(t) = −((n−1)!/π)(2/t)^n + O(t^{−n+2}),  n a positive integer

J_{−p}(t) = (t/2)^{−p}/Γ(1−p) + O(t^{−p+2}),  p>0, p not an integer
Exercise
28. For each of the following differential equations, obtain the general solution in
terms of Bessel functions.
a) ty″−y′−ty=0  b) ty″−3y′+ty=0
c) t²y″+ty′−(t²+¼)y=0  d) ty″−y′+4ty=0
Example 1. Investigate the point at infinity for the equation y″+ay′+by=0, where
a and b are constants.
The change of variable t=1/x transforms this equation to x⁴z″+(2x³−ax²)z′+
bz=0, which obviously has an irregular singular point at x=0. Thus the given equation
has an irregular singular point at t=∞.
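The computation in Example 1 can be verified symbolically; a sketch assuming SymPy, where the assertion checks exactly the chain-rule bookkeeping for t=1/x:

```python
# Verify: substituting t = 1/x into y'' + a y' + b y = 0 yields
# x^4 z'' + (2x^3 - a x^2) z' + b z = 0, which has an irregular
# singular point at x = 0.
import sympy as sp

x, t, a, b = sp.symbols('x t a b')
z = sp.Function('z')

y = z(1/t)                                   # y(t) = z(x) with x = 1/t
expr = sp.diff(y, t, 2) + a*sp.diff(y, t) + b*y
expr_x = expr.subs(t, 1/x).doit()            # rewrite in terms of x
target = (x**4*sp.diff(z(x), x, 2)
          + (2*x**3 - a*x**2)*sp.diff(z(x), x) + b*z(x))
assert sp.simplify(expr_x - target) == 0
```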
Exercises
1. Show that the Euler equation t²y″+aty′+by=0, where a and b are constants, has
a regular singular point at ∞.
2. Show that the hypergeometric equation
near t=∞, of the form Σ_{k=0}^∞ c_k t^{−k} and Σ_{k=0}^∞ d_k t^{−k}. We substitute into the
differential equation to obtain recursion formulas for the coefficients c_k and
d_k. For a regular singular point at t=∞, the obvious modifications of
Theorem 1 of Section 6.7 and Theorem 1 of Section 6.8 hold.
Exercises
7. State the analogs of Theorem 1 (Section 6.3), Theorem 1 (Section 6.7), and Theorem 1
(Section 6.8) relative to the point at infinity.
8. a) Compute the roots of the indicial equation relative to the point at infinity of the
hypergeometric equation t(1−t)y″+[c−(a+b+1)t]y′−aby=0.
b) Find two linearly independent solutions valid for large t when a#b. What is
the range of validity of these solutions?
9. Show that the change of variable x=(1−t)/2 transforms the Legendre equation
(1−t²)y″−2ty′+α(α+1)y=0 into a hypergeometric equation, and calculate
a, b, c. [Hint: Put z(x)=y(t(x)), and use the chain rule.]
10. Locate and classify the singular points of each of the following differential
equations, including t=∞.
11. Find the general solution valid in some neighborhood of the point = 00 of each of
the following differential equations, and give the interval on which it is valid.
e) y″+2(t²−2t)y′+y=0
f) t²y″+(t²−t)y′+(2−t)y=0
g) 2t²y″+ty′−(t+1)y=0
12. Show that t=∞ is an irregular singular point and determine (formally) the form
of the general solution valid for large |t| for each of the following differential
equations. Find whether the series converges on any interval.
a) aoe
1 ees eA
b) y″−6y′+5y=0
c)
d) y″−ty=0
e) ty″+(c−t)y′−ay=0 (confluent hypergeometric equation)
Exercise
a) Show that t=0 is an irregular singular point for the equation t³y″+ty′−y=0.
b) Find two linearly independent solutions near t=0. [Hint: φ₁(t)=t is one solu-
tion, and another solution can be found by the method of Section 3.6.]
The above example shows that the behavior of solutions near an irregular
singular point may differ sharply from the behavior of solutions near a
regular singular point. To present further difficulties which can arise, we
consider the equation t²y″+(3t−1)y′+y=0, having an irregular singular
point at t=0. If we try to find a solution of the form |t|^z Σ_{k=0}^∞ c_k t^k, we obtain
z=0, c_k=k! (k=0, 1, 2,...), giving a formal solution Σ_{k=0}^∞ k! t^k. Since this
series fails to converge on any t interval, it cannot represent a solution.
6.11 Irregular Singular Points 267
converging in some region |t|>r₀>0, and we shall assume that
∞ is an irregular singular point. If we make the transformation x=1/t and
let z(x)=y(1/x), as in Section 6.10, the equation (6.71) becomes

x⁴z″ + [2x³ − x² Σ_{k=0}^∞ a_k x^k] z′ + [Σ_{k=0}^∞ b_k x^k] z = 0.  (6.73)

By inspection of Eq. (6.73) at x=0 we see that Eq. (6.71) has an irregular
singular point at t=∞ if and only if at least one of the numbers a₀, b₀, b₁ is
different from zero. Since we wish to assume that ∞ is an irregular singular
point for (6.71), we shall assume throughout that this is the case.
Motivated by the simple examples at the beginning of this section,
where the irregular singular point is at the origin, and by our study of
regular singular points, we try to see whether (6.71) can be satisfied formally
by a series of the form

φ(t) = e^{λt} |t|^ρ Σ_{k=0}^∞ c_k t^{−k}  (c₀ ≠ 0).
Using (6.72) and property (vi) (Section 6.2) for multiplication of power series
(here in powers of 1/t), we also have
where
g(λ) = λ² + a₀λ + b₀.  (6.75)
Exercise
2. Verify with the aid of (6.76) that the coefficient of c_{k−1} in (6.77) is different from zero
for k=2, 3, ....
with a(t), b(t) analytic at ∞, and having an irregular singular point at ∞.
Then if the equation g(λ)=λ²+a₀λ+b₀=0 has distinct roots λ₁, λ₂, and
if ρ_m is defined by (2λ_m+a₀)ρ_m = −λ_m a₁ − b₁ (m=1, 2), then equation (6.71)
is formally satisfied by the two series φ_m(t)=e^{λ_m t}|t|^{ρ_m} Σ_{k=0}^∞ c_k^{(m)} t^{−k} with c₀^{(m)} ≠ 0
(m=1, 2), whose coefficients c_k^{(m)} are determined recursively from the equations
(6.77).
Exercise
1. Show that if λ₁≠λ₂, then the two series φ_m determined by Theorem 1 are
(formally) linearly independent.
y″ + (1 + α/t²) y = 0,  (6.78)

where α is a constant, which is of the form (6.71) with a(t)=0, b(t)=1+α/t². Thus
t=∞ is an irregular singular point; g(λ)=λ²+1, and the roots of g(λ)=0 are λ=±i.
By Theorem 1, the series φ(t)=e^{it}|t|^ρ Σ_{k=0}^∞ c_k t^{−k} (c₀≠0) satisfies (6.78) formally. In the
notation of (6.72), we have a_k=0 (k=0, 1, 2,...), and b₀=1, b₁=0, b₂=α, b_k=0
(k=3, 4,...). Then (6.76) gives ρ=0, and (6.77) gives the recursion formula
2i(−k+1) c_{k−1} = −[(−k+2)(−k+1)+α] c_{k−2}  (k=2, 3, ...)

or

c_{k−1} = −(i/2) [(k−2)(k−1)+α]/(k−1) · c_{k−2}  (k=2, 3, ...).  (6.79)
We observe that if α=−(n−2)(n−1) for some integer n≥2, then the series terminates
after a finite number of terms. However, if α does not have this form, then no coefficient
c_k vanishes and the series does not terminate.
Now the ratio test applied to the formal solution e^{it} Σ_{k=0}^∞ c_k t^{−k} of (6.78), whose
coefficients are given by (6.79), shows that the series diverges for all t. Therefore it cannot
be called a solution of (6.78) in the usual sense. We will still find it convenient to call a
series which satisfies a differential equation formally a formal solution even if the series
fails to converge on any interval.
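The divergence claimed by the ratio test is easy to see by iterating (6.79) directly; a sketch (the value α=1 is an illustrative assumption, chosen because it is not of the exceptional form −(n−2)(n−1)):

```python
# Iterate c_{k-1} = -(i/2)*((k-2)(k-1) + alpha)/(k-1) * c_{k-2}:
# the term ratio |c_k/c_{k-1}| grows like k/2, so the series
# sum c_k t^{-k} diverges for every fixed t.
alpha = 1.0
c = [1.0 + 0.0j]                     # c_0 = 1
for k in range(2, 42):
    c.append(-0.5j*((k - 2)*(k - 1) + alpha)/(k - 1)*c[-1])

ratios = [abs(c[m]/c[m-1]) for m in range(1, len(c))]
assert ratios[-1] > 10               # the ratio grows without bound
assert all(r2 >= r1 for r1, r2 in zip(ratios[5:], ratios[6:]))
```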
Exercise
4. Find the formal solution of (6.78) corresponding to the root λ=−i, and show
that it diverges everywhere.
φ(t) = e^{−t} c + e^{−t} ∫_{−∞}^t (e^s/s) ds

for some constant c, where the integral converges for t<0. In particular, the function

w(t) = e^{−t} ∫_{−∞}^t (e^s/s) ds

is well defined for every t<0 and satisfies the differential equation for t<0. Repeated
integration by parts gives
w(t) = e^{−t} ∫_{−∞}^t (e^s/s) ds = Σ_{k=0}^n k! t^{−k−1} + R_n,

where

R_n = (n+1)! e^{−t} ∫_{−∞}^t (e^s/s^{n+2}) ds.
We observe that the integrated terms in this expression for the solution w (well defined
for t<0) coincide with the first (n+1) terms of the (divergent) formal series solution.
Now let us examine the remainder term R_n. We see that for t<0

|R_n| ≤ (n+1)! e^{−t} (1/|t|^{n+2}) ∫_{−∞}^t e^s ds = (n+1)!/|t|^{n+2}  (n=0, 1, 2, ...).

Therefore

|w(t) − Σ_{k=0}^n k! t^{−k−1}| ≤ (n+1)!/|t|^{n+2},  t<0,

or

lim_{t→−∞} |t|^{n+1} [w(t) − Σ_{k=0}^n k! t^{−k−1}] = 0  (n=0, 1, 2, ...).
To put this in another way, we may say that even though the series Σ_{k=0}^∞ k! t^{−k−1}
diverges for all t, the error made in approximating the solution w(t) by the first n terms
of the series is less in magnitude than the (n+1)st term, for every integer n and for
t<0. We note, however, that for a particular value of t, the approximation may not be
improved by taking more terms of the series (for example, t=−1).
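The error estimate above can be tested numerically. For t<0 the solution can be written w(t)=e^{−t}Ei(t), where Ei is the exponential integral (SciPy's `expi`); a sketch:

```python
# Check |w(t) - sum_{k=0}^n k! t^{-k-1}| <= (n+1)!/|t|^{n+2} for t < 0,
# with w(t) = e^{-t}*Ei(t) = e^{-t} * integral of e^s/s from -inf to t.
import math
from scipy.special import expi

t = -8.0
w = math.exp(-t)*expi(t)

partial = 0.0
for n in range(8):
    partial += math.factorial(n)*t**(-n - 1)
    assert abs(w - partial) <= math.factorial(n + 1)/abs(t)**(n + 2)
```

Pushing n well past |t| eventually makes the bound (and the error) grow again: this is the optimal-truncation phenomenon behind the remark about t=−1.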
The above considerations suggest the following definition, due to the
French mathematician H. Poincaré.
Definition. The formal series Σ_{k=0}^∞ a_k t^{−k} is said to be an asymptotic expansion
of a function f(t) defined for t>r as t→∞ if and only if, for each n=0, 1, ...,

lim_{t→∞} t^n [f(t) − Σ_{k=0}^n a_k t^{−k}] = 0.
y″ + y = −(α/t²) y.  (6.80)
We have already seen that (6.80) has a formal solution e^{it} Σ_{k=0}^∞ c_k t^{−k}, where
c₀=1 and c_k (k≥1) is determined recursively from (6.79), and that this
series diverges for every t>0. We shall now show that the series represents
a solution of the equation in the sense that (6.80) has a solution for which
the formal series constructed above is an asymptotic expansion. For large t,
the equation (6.80) "resembles" the equation y″+y=0, and this suggests that
(6.80) may have a solution φ(t) which behaves like e^{it} as t→∞. If (6.80) has
such a solution, we can use the method of variation of constants (Section 3.7;
see in particular Exercise 5) to see that φ must satisfy the equation
φ(t) = e^{it} c − α ∫_a^t sin(t−s) (φ(s)/s²) ds  (6.81)
for some constant c. The reader may verify that if φ satisfies (6.81), then
it also satisfies (6.80). The form (6.81), however, is not suitable for our
purpose, for even if the integral on the right side exists as t→∞, this
solution φ would not behave like e^{it} unless the integral approaches zero.
To obtain a more suitable form, we write

∫_a^t sin(t−s) (φ(s)/s²) ds = ∫_a^∞ sin(t−s) (φ(s)/s²) ds − ∫_t^∞ sin(t−s) (φ(s)/s²) ds.

If the first integral on the right side exists, it is a solution of the homogene-
ous equation y″+y=0 (verify this fact), and may be thrown into the term
e^{it}c in (6.81). Then we take c=1 to obtain the new equation
φ(t) = e^{it} + α ∫_t^∞ sin(t−s) (φ(s)/s²) ds.  (6.82)
If φ is a solution of (6.82) which is bounded for t≥a>0, then the integral in (6.82)
approaches zero as t→∞, and thus
φ(t)−e^{it}→0 as t→∞. It will be shown in Section 8.1, Exercise 13, that (6.82)
has a solution φ on 1≤t<∞ and that
(6.84)
and thus lim_{t→∞}(φ(t)−e^{it})=0, or φ(t)~e^{it} (t→∞). We also observe that we
have obtained the first term of the formal series solution of (6.78). But we
can do more; we write (6.82) as
φ(t) = e^{it} + α ∫_t^∞ sin(t−s) (e^{is}/s²) ds + α ∫_t^∞ sin(t−s) ((φ(s)−e^{is})/s²) ds.
Here

∫_t^∞ sin(t−s) (e^{is}/s²) ds = e^{it}/(2it) + g(t),
where |g(t)| ≤ k/t² for t ≥ 1 and some constant k. We may now write
φ(t) = e^{it} (1 + α/(2it)) + αg(t) + α ∫_t^∞ sin(t−s) ((φ(s)−e^{is})/s²) ds
or

|φ(t) − e^{it}(1 + α/(2it))| ≤ K/t²,  t ≥ 1,

for some constant K.
Exercises
where
From this and similar analysis for Y_p(t), defined by (6.68), we can obtain
the following result, which complements Theorem 3, Section 6.9.
Theorem 2

J_p(t) ~ (2/(πt))^{1/2} cos(t − pπ/2 − π/4)  (t→∞),

Y_p(t) ~ (2/(πt))^{1/2} sin(t − pπ/2 − π/4)  (t→∞).
One important consequence of Theorem 2 is that J_p(t) has infinitely
many zeros 0<t₁<t₂<...<t_k<..., and for large k, t_{k+1}−t_k is close to π.
This fact is very useful in the study of boundary-value problems involving
the Bessel equation.
The above considerations should suggest to the reader that the general
theory for the second-order equation (6.71) with an irregular singular
point at infinity is quite complicated. By a suitable generalization of the
above techniques it may be shown that if g(λ)=λ²+a₀λ+b₀ has distinct
zeros, then corresponding to the formal series obtained in Theorem 1, the
equation (6.71) has solutions φ and ψ having these formal series respectively
as asymptotic expansions as t→∞.
Finally, we make a few remarks about the case of a double zero of g(λ),
y″ − 2y′ + (1 + 1/t) y = 0.  (6.88)

This has the form of the equation (6.71) with a(t)=−2, b(t)=1+1/t, and g(λ)=
λ²−2λ+1. Thus λ=1 is a double root of the equation g(λ)=0. Put y(t)=e^t u(t). Then
y′(t) = e^t u(t) + e^t u′(t),
y″(t) = e^t u(t) + 2e^t u′(t) + e^t u″(t).
Thus
tu″ + u = 0.
(This equation actually has λ=0 as a double root of the corresponding indicial equation.)
We now make the stretching transformation t=x², and we let v(x)=u(x²). Then
tu″+u = (¼)(d²v/dx²) − (1/(4x))(dv/dx) + v, and v satisfies the equation
v″ − (1/x) v′ + 4v = 0  (6.89)
with x as independent variable. For (6.89), the relevant polynomial is g(λ)=λ²+4,
and its zeros λ=±2i are distinct. Thus the methods of Theorem 1 may be applied to
(6.89). According to Theorem 1, (6.89) has two formal solutions of the form

e^{±2ix} |x|^ρ Σ_{k=0}^∞ c_k x^{−k}  (c₀≠0),

where ρ and c_k (k=1, 2,...) are determined recursively by substitution. Thus the original
equation (6.88) has two formal solutions of the form

|t|^{ρ/2} exp(t ± 2i√t) Σ_{k=0}^∞ c_k t^{−k/2},  c₀≠0,
and it can be shown that there exist solutions φ and ψ of (6.88) which have these formal
solutions as their respective asymptotic expansions as t→∞. Thus we see that in the
general case the formal solutions are more complicated than in the case covered by
Theorem 1, in that the formal series involve fractional powers.
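The two substitutions used in this example can be checked symbolically; a SymPy sketch:

```python
# Verify the two reductions used above:
# (1) y = e^t u(t) turns y'' - 2y' + (1 + 1/t)y = 0 into u'' + u/t = 0,
#     i.e. (multiplying by t) t u'' + u = 0;
# (2) t = x^2, u(t) = v(x) turns t u'' + u into v''/4 - v'/(4x) + v.
import sympy as sp

t, x = sp.symbols('t x', positive=True)
u = sp.Function('u')
v = sp.Function('v')

y = sp.exp(t)*u(t)
lhs = sp.diff(y, t, 2) - 2*sp.diff(y, t) + (1 + 1/t)*y
assert sp.simplify(lhs/sp.exp(t) - (sp.diff(u(t), t, 2) + u(t)/t)) == 0

w = v(sp.sqrt(t))
expr = t*sp.diff(w, t, 2) + w
expr_x = expr.subs(t, x**2).doit()
target = sp.diff(v(x), x, 2)/4 - sp.diff(v(x), x)/(4*x) + v(x)
assert sp.simplify(expr_x - target) == 0
```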
Exercises
8. Carry out a similar analysis for the equation ty″+y′+y=0. Construct two linearly
independent formal solutions as t→∞.
9. Find two linearly independent formal solutions as t→∞ for the equation y″+ty=0.
10. Find two linearly independent formal solutions as t→∞ for the equation
Boundary-Value
Problems
7.1 INTRODUCTION
Figure 7.1
position y=0. We wish to determine whether there are values of the angular velocity ω
(called critical speeds) for which the string can assume some other shape (called a
standing wave).
To arrive at a mathematical model for this problem, we shall use Newton’s second
law of motion together with the following physical assumptions:
ii) The tensile force at any point acts in a direction tangent to the string and has a
constant magnitude T.
iii) The angle between the direction of the string at any point and the equilibrium
position y=0 is small.
We examine the portion of the string between x and x+Δx, where 0<x<L. The
mass of this portion of the string is ρ Δs, where s is the arc length measured along the
string; this mass is approximately ρ[(Δx)²+(Δy)²]^{1/2} if |Δx| is small. The rotation of
the string about the x axis with angular velocity ω produces an acceleration of
magnitude ω²y directed vertically toward the x axis.
Exercise
1. Use the argument by which equation (2.6) of Section 2.2 was derived to show that
the rotating string produces an acceleration −ω²y u_r, where u_r is the unit radial
vector. (Here y replaces r and θ=ωt.)
Since the acceleration vector is vertical, only the vertical component of the tensile
forces on this portion of the string is relevant, and this vertical component is

T [sin α(x+Δx) − sin α(x)],

where α(x) is the angle between the string at x and the horizontal (see Fig. 7.2). Thus
Newton’s second law of motion gives
Figure 7.2
If we divide the equation (7.1) by Δx and take the limit as Δx→0, we obtain

T (d/dx)[sin α(x)] + ρω²y [1 + (dy/dx)²]^{1/2} = 0.  (7.2)
Substituting (7.3) into (7.2), we obtain the equation of motion of the string
Exercise
2. Carry out the differentiation in (7.4) and derive the equation (7.5).
We observe that we cannot solve (7.5) by the methods we have studied, and we ask
the reader to accept the statement that the equation does not have solutions which can
be written in closed form. It is not even obvious that solutions of (7.5) satisfying given
initial conditions at x =0 exist on the whole interval 0<x<L, although this can in fact
be proved.
To simplify the complicated equation (7.5), we use the assumption (iii), which
implies that if y(x) is the displacement of the string, then |y′(x)|=|tan α(x)| is small for
0≤x≤L. Then, replacing dy/dx in (7.5) by 0, we obtain the simplified model

T (d²y/dx²) + ρω²y = 0,  (7.6)

which we shall use for the differential equation governing the motion of the rotating
string.
The differential equation (7.6) alone does not specify the motion of the string com-
pletely. We have assumed that both ends of the string are attached to a fixed support.
This means that the displacement y(x) of the string must also satisfy the boundary
conditions
y(0)=0,  y(L)=0.  (7.7)
Clearly y(x)=0 is a solution of equation (7.6) satisfying the boundary conditions (7.7);
this solution is called the trivial solution.
The mathematical problem is to determine the values of ω for which (7.6) has a
solution not identically zero on 0≤x≤L which also satisfies the boundary conditions
(7.7), and to determine the corresponding solution φ. We shall see in the next section how
to solve this problem.
7.2 Homogeneous Boundary-Value Problems 281
We remark that in assuming that dy/dx is small to derive the equation (7.6) we do not
rule out the possibility that (7.6) may have solutions for which dy/dx is not small. How-
ever, we may expect that only solutions with dy/dx small actually approximate solutions
of (7.5), and that the solutions of (7.6) for which dy/dx is not small need not have any
physical significance.
Exercises
3. Look up the derivation for the differential equation for the critical speeds of a
rotating shaft (see, for example, [11], p. 193).
4. Look up the derivation for the differential equation for the buckling of a column
under an axial load (see, for example, [11], p. 198).
We may impose other types of boundary conditions instead of requiring both ends
of the string to be attached to a fixed support. For example, we might assume that the
end at x =L is attached to a yielding support with a restoring force proportional to the
stretching. This gives a boundary condition of the form
Ty′(L) = −ky(L)

instead of y(L)=0. The "limiting case" of such a condition is called a free end condition,
y′(L)=0. This corresponds to a string unattached at the end x=L.
y″+λy=0  (7.8)
together with the boundary conditions

y(0)=0,  y(π)=0.  (7.9)
Exercise
1. Transform the problem (7.6), (7.7) for the rotating string into the problem (7.8),
(7.9) by making the change of variable t=πx/L and letting λ=ρω²L²/(Tπ²).
The general solution of (7.8) can be written in the form

φ(t) = c₁ exp(i√λ t) + c₂ exp(−i√λ t)  (7.10)

(see Section 3.4) for every λ, real or complex, except λ=0. We may write
√λ = α+iβ with α, β real (see Appendix 3). We shall first show that the boundary
conditions (7.9) cannot be satisfied by a nontrivial solution of (7.8) unless √λ is real,
that is, unless β=0. With √λ = α+iβ, (7.10) becomes

φ(t) = c₁ e^{(iα−β)t} + c₂ e^{−(iα−β)t},

and the boundary conditions (7.9) give

φ(0) = c₁ + c₂ = 0,
φ(π) = c₁ e^{(iα−β)π} + c₂ e^{−(iα−β)π} = 0.
This is a pair of simultaneous algebraic equations for the constants c₁, c₂. By Theorems
1, 2 in Appendix 1, there is a nontrivial solution (that is, a solution with c₁, c₂ not both
zero) of this algebraic system if and only if the determinant Δ of coefficients is zero. But

Δ = e^{−(iα−β)π} − e^{(iα−β)π},

and Δ=0 if and only if e^{2(iα−β)π}=1, or equivalently,

e^{2βπ} = e^{2απi} = cos 2απ + i sin 2απ.

Because β is real, e^{2βπ} is real and positive. Therefore sin 2απ=0 and cos 2απ=e^{2βπ}>0.
These equations are satisfied if and only if α=n (n=0, 1, 2, 3,...), and in this case the
second of these equations gives 1=e^{2βπ}; from this it follows that β=0.
Therefore the boundary-value problem (7.8), (7.9) has a nontrivial solution if and
only if √λ=n (n=1, 2, 3,...). (The case n=0, that is, √λ=0, implies λ=0, and is considered
separately below.) The corresponding nontrivial solutions are (from (7.10))

φ(t) = c₁ e^{int} + c₂ e^{−int}  (n=1, 2,...).
However, from the first boundary condition, we know that c₁+c₂=0, so that

φ(t) = c₁(e^{int} − e^{−int}) = 2ic₁ sin nt  (n=1, 2,...)

or finally

φ_n(t) = A_n sin nt  (n=1, 2,...),  (7.11)
where A_n is an arbitrary (real) constant. Note that A_n remains completely undetermined
by the problem.
It remains to consider the possibility of a nontrivial solution of (7.8), (7.9) for λ=0.
In this case the general solution of (7.8) is

φ(t) = c₁ + c₂t.

However, this solution satisfies (7.9) if and only if c₁=c₂=0, and thus there is no
nontrivial solution for λ=0.
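The eigenvalue condition just derived (a nontrivial solution exists exactly when sin π√λ = 0, that is, λ=n²) can also be located numerically; a sketch using SciPy's `brentq` on the determinant:

```python
# Find the eigenvalues of y'' + lam*y = 0, y(0) = y(pi) = 0 as roots of
# Delta(lam) = sin(pi*sqrt(lam)); they come out at lam = n^2.
import math
from scipy.optimize import brentq

def det(lam):
    return math.sin(math.pi*math.sqrt(lam))

eigs = [brentq(det, (n - 0.5)**2, (n + 0.5)**2) for n in range(1, 6)]
assert all(abs(e - n*n) < 1e-8 for n, e in zip(range(1, 6), eigs))
```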
Exercise
Exercises
3. Assuming that λ is positive, we know that the general solution of (7.8) can be written
in the form

φ(t) = c₁ cos √λ t + c₂ (sin √λ t)/√λ,

where the factor 1/√λ is inserted for convenience. Apply the boundary conditions
(7.9) to this form of the solution and determine the eigenvalues and eigenfunctions.
4. Compute the limit as λ→0 of the general solution φ(t) in Exercise 3.
5. Show that ∫₀^π φ_n(t) φ_m(t) dt = 0 if m≠n and ∫₀^π [φ_n(t)]² dt = πA_n²/2. [Hint: 2 sin nt
sin mt = cos(m−n)t − cos(m+n)t.]
6. Show that for the boundary-value problem (7.6), (7.7) for the rotating string there
is an infinite sequence of angular velocities ω_n = (nπ/L)(T/ρ)^{1/2} (n=1, 2,...), for which
there is a nontrivial solution, and a corresponding sequence of solutions φ_n(x)=A_n
sin(nπx/L) (n=1, 2,...), where the A_n are constants which we cannot determine
from the problem.
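For concreteness, the first few critical speeds from Exercise 6 can be computed for a sample string; the numbers T=50 N, ρ=0.02 kg/m, L=1 m below are purely illustrative assumptions:

```python
# Critical speeds omega_n = (n*pi/L)*sqrt(T/rho) for hypothetical data.
import math

T, rho, L = 50.0, 0.02, 1.0          # assumed tension, density, length
omega = [n*math.pi/L*math.sqrt(T/rho) for n in (1, 2, 3)]
assert abs(omega[1]/omega[0] - 2.0) < 1e-12   # speeds scale linearly in n
assert abs(omega[0] - math.pi*50.0) < 1e-9    # sqrt(50/0.02) = 50
```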
We now observe that the rotating string problem posed in Section 7.1
and solved in Exercise 6 above has solutions which do not satisfy the as-
sumption made in obtaining the linearized equation (7.6), namely, that the
angle between the direction of the string and the horizontal is small. For
the solution φ_n(x)=A_n sin(nπx/L), we have φ_n′(x)=(nπA_n/L) cos(nπx/L)
(n=1, 2,...), and this becomes large as n increases. This suggests that in
fact only the first few of the eigenfunctions are physically meaningful, and
the skillful reader may verify experimentally that this seems to be the case.
The mathematical question of whether the solutions of the linear boundary-
value problem (7.6), (7.7) approximate the solutions of the nonlinear bound-
ary-value problem (7.5), (7.7), or even whether there are values of w for which
the nonlinear problem has solutions, is an extremely difficult one to which
no satisfactory answer can be given here.
Example 2. Consider the boundary-value problem defined on the interval 0≤t≤π by
the differential equation

y″+λy=0  (7.8)
Exercise
7. Show, using (7.14) and (7.12), that λ=0 is an eigenvalue of the boundary-value
problem (7.8), (7.12).
Exercises
8. Show that ∫₀^π φ_n(t) φ_m(t) dt = 0 if m≠n, and that ∫₀^π [φ_n(t)]² dt = πA_n²/2 if n>0,
while ∫₀^π [φ₀(t)]² dt = πA₀².
9. Find the eigenvalues and corresponding eigenfunctions of the differential equation
(7.8) subject to the boundary conditions y(0)=0, y′(π)=0.
10. Interpret Exercise 9 for the rotating string (see Section 7.1).
Exercises
11. Show that the eigenvalues of the boundary-value problem (7.8), (7.15) are the solu-
tions (if any) of the transcendental equation

tan π√λ = −k√λ.

12. Show that if λ_n is an eigenvalue of the boundary-value problem (7.8), (7.15), then the
corresponding eigenfunction is φ_n(t)=A_n sin √λ_n t.
The above exercises show that the problem of finding the eigenvalues
of the boundary-value problem (7.8), (7.15) reduces to the problem of
solving the transcendental equation

tan π√λ = −k√λ.  (7.16)

Thus there is exactly one value √λ_n between (n−½) and (n+½) such that
tan π√λ_n = −k√λ_n. This shows that there is an infinite sequence of
solutions λ_n of (7.16) and hence an infinite sequence of eigenvalues of the
boundary-value problem (7.8), (7.15). In fact, if k>0, −k√λ decreases
as λ increases, and we may conclude that the solution of (7.16) between
(n−½) and (n+½) tends to (n−½), so that √λ_n is given approximately by
(n−½) for large n. In a similar way, we may see that if k<0, √λ_n is given
approximately by (n+½) for large n (see Fig. 7.3).
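The root structure just described can be confirmed numerically; a sketch with k=1 that brackets one root of (7.16) in each interval (n−½, n+½) and checks that √λ_n approaches n−½:

```python
# Solve tan(pi*r) = -k*r for r = sqrt(lambda), one root per interval
# (n - 1/2, n + 1/2); for k > 0 the roots sink toward n - 1/2.
import math
from scipy.optimize import brentq

k = 1.0
def F(r):
    return math.tan(math.pi*r) + k*r

roots = [brentq(F, n - 0.5 + 1e-9, n + 0.5 - 1e-9) for n in range(1, 30)]
assert all(n - 0.5 < r < n + 0.5 for n, r in zip(range(1, 30), roots))
assert roots[-1] - 28.5 < 0.02       # sqrt(lambda_29) is close to 28.5
```

On each such interval tan πr increases from −∞ to +∞ while kr also increases, so F has exactly one root there, which is why the bracketing succeeds.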
In the above discussion, we have ignored the possibility that λ=0 may
be an eigenvalue. Whether λ=0 is an eigenvalue depends on the value of
the constant k in the boundary condition at t=π.
Exercises
13. Show that λ=0 is not an eigenvalue of the boundary-value problem (7.8), (7.15)
unless k=−π.
Figure 7.3
14. Show that if k=−π, λ=0 is an eigenvalue of the boundary-value problem (7.8),
(7.15) and φ₀(t)=A₀t is a corresponding eigenfunction.
Exercises
15. Show that the eigenvalues of the boundary-value problem (7.8), (7.17) are the
solutions (if any) of the transcendental equation

(αγ/√λ + βδ√λ) sin π√λ = −(αδ − βγ) cos π√λ.
*16. Show that the equation in Exercise 15 has an infinite sequence of real positive roots.
[Hint: If βδ≠0, the left side of this equation oscillates with amplitude βδ√λ,
which is large for large λ, while the right side is bounded; thus there is a root
near each zero of the left side. If βδ=0, the left side oscillates with amplitude
αγ/√λ, which is small for large λ (or is identically zero if αγ=0); thus there is a root
near each zero of the right side.]
*17. Show that if αγ=βδ=0 but αδ−βγ≠0, the eigenvalues of (7.8), (7.17) are
λ=(n+½)² (n=0, 1, 2,...).
*18. Show that if αδ−βγ=0 but either αγ≠0 or βδ≠0, the eigenvalues of (7.8), (7.17)
are λ=n² (n=0, 1, 2,...).
of the differential equation (7.8) for λ>0. Examination of the boundary conditions
(7.18) shows that they are satisfied if √λ is a positive integer. Thus there is an infinite
sequence of eigenvalues λ=n² (n=1, 2,...). Corresponding to the eigenvalue n² there
are two linearly independent eigenfunctions, namely, φ_n(t)=A_n sin nt and ψ_n(t)
= B_n cos nt, for every choice of the constants A_n, B_n (n=1, 2,...). In the examples con-
sidered previously there was only one eigenfunction corresponding to each eigenvalue.
You should note that as there cannot be more than two linearly independent solutions
of the differential equation (7.8) for any value of λ (Theorem 1, Section 3.3), there can-
not be more than two linearly independent eigenfunctions corresponding to any eigen-
value of any boundary-value problem for the differential equation (7.8).
The boundary-value problem (7.8), (7.18) has another eigenvalue, namely, λ=0.
For λ=0 the general solution of the differential equation (7.8) is φ(t)=c₁+c₂t. The
boundary conditions (7.18) give c₂=0, and we see that ψ₀(t)=B₀ is an eigenfunction
corresponding to the eigenvalue λ₀=0 for every choice of the constant B₀.
Exercises
19. Show that there are no negative or complex eigenvalues for the boundary value
problem (7.8), (7.18).
20. Show that ∫₀^{2π} φ_n(t) φ_m(t) dt = 0 and ∫₀^{2π} ψ_n(t) ψ_m(t) dt = 0 for n≠m, and
∫₀^{2π} φ_n(t) ψ_m(t) dt = 0, where φ_n, ψ_n are the eigenfunctions of (7.8), (7.18).
21. Show that ∫₀^{2π} [φ_n(t)]² dt = πA_n², ∫₀^{2π} [ψ_n(t)]² dt = πB_n², while
∫₀^{2π} [ψ₀(t)]² dt = 2πB₀².
22. Show that A_n sin nt + B_n cos nt is an eigenfunction of the boundary-value problem
(7.8), (7.18) corresponding to the eigenvalue λ=n² for every choice of the constants
A_n, B_n.
If you are acquainted with linear algebra, you should note that the
eigenfunctions of the boundary-value problem (7.8), (7.18) corresponding
Exercises
23. Find the eigenvalues and eigenfunctions of the boundary-value problem defined
on the interval a≤t≤b by the differential equation y″+λy=0 and the boundary
conditions y(a)=0, y(b)=0.
24. Find the eigenvalues and eigenfunctions of the boundary-value problem defined
on the interval 0≤t≤1 by the differential equation y″+λy=0 and the boundary
conditions y′(0)=0, y(1)=0.
25. Determine all real eigenvalues and the corresponding eigenfunctions of each of the
following boundary-value problems. If the eigenvalues are roots of a transcendental
equation which cannot be solved explicitly, give the equation for the eigenvalues
and the form of the eigenfunctions.
a) y″+λy=0, y(0)=0, y′(π)=0
b) y″+λy=0, y′(0)=0, y′(π)=0
c) y″+λy=0, y(0)=0, y(π)+y′(π)=0
d) y″+2y′+(λ+1)y=0, y(0)=0, y(π)=0
e) y″+(1+λ)y′+λy=0, y(0)=0, y(1)=0
f) t²y″−λty′+λy=0, y(1)=0, y(2)−y′(2)=0
g) y⁗−λy=0, y(0)=y(π)=y″(0)=y″(π)=0
26. Determine the eigenvalues and eigenfunctions of the boundary-value problem
defined by the differential equation

y⁗ − λy = 0
Example 1. The physical problem of a rotating string to which a given external force
f(t) is applied leads to a nonhomogeneous boundary-value problem of the form

y″+λy = f(t)  (7.19)

with boundary conditions such as

y(0)=0,  y(π)=0  (7.20)
7.3 Nonhomogeneous Boundary-Value Problems 289
in the case of fixed end points. Solve this boundary-value problem explicitly. In (7.19),
λ is a given constant which we assume to be positive.
We shall see that the problem (7.19), (7.20) is intimately related to the corresponding
homogeneous boundary-value problem

y″+λy=0  (7.21)

with boundary conditions (7.20) (see Example 1, Section 7.2), and it will be essential to
distinguish the cases when λ is an eigenvalue and when λ is not an eigenvalue of (7.21),
(7.20).
By the variation of constants formula (Section 3.7; see particularly Exercise 5) the
general solution of (7.19) is

φ(t) = c₁ cos √λ t + c₂ (sin √λ t)/√λ + (1/√λ) ∫₀^t sin √λ(t−s) f(s) ds,  (7.22)
where we have used the general solution of the differential equation (7.21) given by
(7.13). We wish to have the solution φ defined by (7.22) satisfy the boundary conditions
(7.20). To satisfy the condition y(0)=0 we must clearly have c₁=0. The boundary
condition y(π)=0 gives
c₂ (sin √λ π)/√λ + (1/√λ) ∫₀^π sin √λ(π−s) f(s) ds = 0.  (7.23)
If, for the given λ, sin √λ π ≠ 0, Eq. (7.23) can be solved uniquely, and we obtain

c₂ = −(1/sin √λ π) ∫₀^π sin √λ(π−s) f(s) ds.
Exercise
1. Derive (7.25). [Hint: sin √λ*(π−s) = sin √λ* π cos √λ* s − cos √λ* π sin √λ* s,
and sin √λ* π = 0.]
φ(t) = (1/√λ) ∫₀^t sin √λ(t−s) f(s) ds − (sin √λ t)/(√λ sin √λ π) ∫₀^π sin √λ(π−s) f(s) ds.
Exercise
2. Show that
Therefore
φ(t) = ∫₀^π G(t, s, λ) f(s) ds.  (7.28)
The function G(t, s, λ), defined for 0≤t, s≤π and λ≠n² (n=1, 2,...), is
called the Green's function for the nonhomogeneous boundary-value
problem (7.19), (7.20). When this Green's function, which does not depend on
the forcing function f, is known, the solution of (7.19), (7.20) is given by
(7.28) for every forcing function f.
Exercises
3. Show that G(s, t, λ) = G(t, s, λ) provided λ≠n² (n=1, 2,...).
4. Show that G(t, s, λ) is continuous for 0≤t, s≤π and (∂G/∂t)(t, s, λ) is continuous
for t≠s, but

lim_{h→0+} [(∂G/∂t)(s+h, s, λ) − (∂G/∂t)(s−h, s, λ)] = 1.

5. Show that G(t, s, λ), considered as a function of t for each fixed s, satisfies the
differential equation (7.21) except for t=s.
6. Show that G(t, s, λ), considered as a function of t for each fixed s, satisfies the
boundary conditions y(0)=0, y(π)=0.
It is possible to show that the Green’s function is unique, and that it is
completely determined by the properties given in Exercises 3-6, that is, there
is no other function having these properties (see [1]; [7]).
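These facts can be sketched numerically. The explicit formula for G used below follows from (7.22) and (7.23); its sign convention is this sketch's assumption, and the test checks that φ(t)=∫₀^π G(t,s,λ)f(s) ds reproduces the closed-form solution of y″+λy=1, y(0)=y(π)=0:

```python
import math
from scipy.integrate import quad

lam = 2.0                      # not an eigenvalue (lam != n^2)
a = math.sqrt(lam)

def G(t, s):
    # Green's function for y'' + lam*y = f, y(0) = y(pi) = 0
    return (-math.sin(a*min(t, s))*math.sin(a*(math.pi - max(t, s)))
            / (a*math.sin(a*math.pi)))

def phi(t):                    # phi(t) = integral of G(t, s)*f(s) ds, f = 1
    return quad(lambda s: G(t, s), 0.0, math.pi, points=[t])[0]

def exact(t):                  # closed-form solution of y'' + 2y = 1
    B = -(1 - math.cos(a*math.pi))/(lam*math.sin(a*math.pi))
    return 1/lam - math.cos(a*t)/lam + B*math.sin(a*t)

for t in (0.5, 1.5, 2.5):
    assert abs(phi(t) - exact(t)) < 1e-6
```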
Exercises
7. Discuss the case λ=0 of the nonhomogeneous boundary-value problem (7.19),
(7.20); that is, find the solution of the problem y″=f(t), y(0)=0, y(π)=0, and
determine the Green's function H(t, s) for this problem. Show that H(t, s) =
lim_{λ→0} G(t, s, λ), where G(t, s, λ) is given by (7.27). [Hint: Recall that lim_{x→0}
(sin ax)/x = a.]
G(t, s, λ) =
 cos √λ t cos √λ(π−s) / (√λ sin √λ π)  if 0≤t≤s,
 cos √λ s cos √λ(π−t) / (√λ sin √λ π)  if s≤t≤π.
10. Show that the solution φ of the differential equation y″=f(t) satisfying the
boundary conditions y(0)+y(1)=0, y′(0)+y′(1)=0 may be written φ(t)=∫₀¹ G(t, s)
f(s) ds, where
y″+λy=0  (7.21)

with the boundary conditions

y(0)=A,  y(π)=B,  (7.29)
where A and B are given constants. Show that this problem may be reduced to one in
which the differential equation becomes nonhomogeneous and the boundary conditions
become homogeneous, thereby reducing the problem to one which we have already
handled in Example 1 above.
Let g(t) be any function with two continuous derivatives on 0≤t≤π such that

g(0)=A,  g(π)=B;
y″(t)+λy(t) = z″(t)+g″(t)+λ[z(t)+g(t)].
Thus the original boundary-value problem is replaced by the problem

z″+λz = −[g″(t)+λg(t)],  z(0)=0,  z(π)=0,

which is of the form (7.19), (7.20) with f(t) = −[g″(t)+λg(t)] and has a solution given
in Example 1. With the particular choice of g suggested above, the problem becomes

z″+λz = −λ[A + (B−A)t/π],  z(0)=0,  z(π)=0.
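The reduction can be verified symbolically; a SymPy sketch with the linear choice g(t)=A+(B−A)t/π:

```python
# With y = z + g and g(t) = A + (B - A)t/pi (so g(0) = A, g(pi) = B,
# g'' = 0), the problem y'' + lam*y = 0, y(0) = A, y(pi) = B holds
# exactly when z'' + lam*z = -lam*g, z(0) = 0, z(pi) = 0.
import sympy as sp

t, lam, A, B = sp.symbols('t lam A B')
z = sp.Function('z')

g = A + (B - A)*t/sp.pi
y = z(t) + g

residual = sp.diff(y, t, 2) + lam*y - (sp.diff(z(t), t, 2) + lam*z(t) + lam*g)
assert sp.simplify(residual) == 0
assert sp.simplify(y.subs(t, 0) - A - z(0)) == 0
assert sp.simplify(y.subs(t, sp.pi) - B - z(sp.pi)) == 0
```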
y″+λy = h(t)  (7.30)

with the boundary conditions

y(0)=A,  y(π)=B,

where h is a given function and A and B are given constants.
We consider two simpler problems. First, we consider the problem with the homo-
geneous differential equation solved in Example 2. We let φ₁(t) be the solution of the
boundary-value problem (7.21), (7.29). Next, let φ₂(t) be the solution of the boundary-
value problem (7.30) with homogeneous boundary conditions
Exercises
L(y) = −y″ = λy.  (7.31)
We observed that these problems had some features in common, namely,
(i) all the eigenvalues of each problem were real, and (ii) each problem had
an infinite sequence of eigenvalues tending to +∞. In this section we
shall examine the question of how general these properties are, that is, for
what boundary conditions they hold.
Let u(t) and v(t) be given functions (possibly complex-valued) which
are continuous and have continuous first and second derivatives on an
interval a≤t≤b. We shall use ū(t) to denote the complex conjugate of u(t).
We use integration by parts twice to attempt to evaluate ∫_a^b Lu(t) v̄(t) dt,
obtaining

∫_a^b Lu(t) v̄(t) dt = W(u, v̄)(t)]_a^b + ∫_a^b u(t) Lv̄(t) dt,  (7.32)

where W(u, v̄)(t) denotes the Wronskian of the functions u and v̄ (see
Section 3.3).
Now, suppose that u and v both satisfy a pair of given boundary condi-
tions at a and b which make the expression W(u, v̄)(t)]_a^b = 0. For example,
we may require both u and v to satisfy separated boundary conditions of
the form

αy(a)+βy′(a)=0,  γy(b)+δy′(b)=0,  (7.33)

where α, β, γ, δ are given real constants with at least one of α, β and at least
one of γ, δ different from zero. If β≠0, then the first condition in (7.33)
becomes y′(a) = −(α/β) y(a) and, if u and v both satisfy this condition,

W(u, v̄)(a) = u(a) v̄′(a) − u′(a) v̄(a) = −(α/β)[u(a) v̄(a) − u(a) v̄(a)] = 0.  (7.34)
Exercise
1. If β=0 but α≠0, show that (7.34) remains valid for functions u, v satisfying
αy(a)=0.
7.4 Self-Adjoint Boundary-Value Problems 295
2. Show that if u and v both satisfy the periodic boundary conditions (7.35), then
W(u, v̄)(t)]_a^b = 0.
The vanishing of W(u, v̄)(t)]_a^b in (7.32) has important implications for the boundary-value problem on the interval a ≤ t ≤ b defined by the differential equation (7.31) and the boundary conditions which imply W(u, v̄)(t)]_a^b = 0.

Definition. A boundary-value problem consisting of the differential equation (7.31) together with a pair of boundary conditions at a and b having the property that W(u, v̄)(t)]_a^b = 0 for any functions u and v both satisfying these boundary conditions is said to be self-adjoint.
For a self-adjoint boundary-value problem the relation (7.32) becomes

∫_a^b Lu(t) v̄(t) dt = ∫_a^b u(t) Lv̄(t) dt.   (7.36)
Exercises
3. Show that the boundary-value problem y'' + λy = 0, y(0) + y(π) = 0, y'(0) + y'(π) = 0 is self-adjoint.
4. Determine a condition on the real constants α, β, γ, δ for which the boundary-value problem y'' + λy = 0, y(π) − αy(0) − βy'(0) = 0, y'(π) − γy(0) − δy'(0) = 0 is self-adjoint.
This was done by a direct evaluation of the integral. This property of eigen-
functions is true for general self-adjoint boundary-value problems, and in
order to establish this, we must define the concept of orthogonality of
functions.
Definition. Two continuous real functions f(t), g(t) defined on a ≤ t ≤ b are said to be orthogonal on a ≤ t ≤ b if and only if

∫_a^b f(t) g(t) dt = 0.

Thus Exercise 5, Section 7.2, shows that the eigenfunctions φₙ(t) = Aₙ sin nt (n = 1, 2, ...) are orthogonal on 0 ≤ t ≤ π.
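These orthogonality relations are easy to check numerically. The sketch below is ours, not the book's: it approximates ∫₀^π sin(mt) sin(nt) dt by Simpson's rule and confirms the value 0 for m ≠ n and π/2 for m = n.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def inner(m, n):
    """Inner product of sin(mt) and sin(nt) on [0, pi]."""
    return simpson(lambda t: math.sin(m * t) * math.sin(n * t), 0.0, math.pi)

# Eigenfunctions for distinct eigenvalues are orthogonal ...
print(abs(inner(1, 2)))   # essentially 0
print(abs(inner(3, 5)))   # essentially 0
# ... while each eigenfunction has squared norm pi/2.
print(inner(2, 2))        # ~1.5707963 = pi/2
```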
Let u be a (real) eigenfunction corresponding to an eigenvalue λ and let v be a (real) eigenfunction corresponding to an eigenvalue μ ≠ λ of a self-adjoint boundary-value problem for the differential equation (7.31). Then, using Lu = λu, Lv = μv in (7.36), we obtain

(λ − μ) ∫_a^b u(t) v(t) dt = 0.

Theorem 1. For a self-adjoint boundary-value problem for the differential equation (7.31), with boundary conditions such as

αy(a) + βy'(a) = 0,   γy(b) + δy'(b) = 0   (7.33)

or

y(a) − y(b) = 0,   y'(a) − y'(b) = 0   (7.35)

(but not restricted to these), all eigenvalues are real and eigenfunctions corresponding to different eigenvalues are orthogonal on a ≤ t ≤ b.
We have already shown in equation (7.34) that if u and v are eigenfunctions of the boundary-value problem (7.31), (7.33), with separated boundary conditions, corresponding to the same eigenvalue, then W(u, v)(a) = 0 (and also W(u, v)(b) = 0). Since u and v also satisfy the differential equation (7.31), it follows that u and v are linearly dependent on a ≤ t ≤ b (by Theorems 2 and 3, Section 3.3). Therefore, a self-adjoint boundary-value problem for the differential equation (7.31) with separated boundary conditions cannot have two linearly independent eigenfunctions corresponding to any eigenvalue. This is not true in the case of periodic boundary conditions (7.35). For, as we have seen in Example 5, Section 7.2, there may be two linearly independent eigenfunctions corresponding to the same eigenvalue. You should note that while the problem (7.31), (7.35) with periodic boundary conditions is self-adjoint (Exercise 2), we may have W(u, v)(a) ≠ 0, W(u, v)(b) ≠ 0 for functions u and v which satisfy the boundary conditions (7.35).
Theorem 1 says nothing about whether a self-adjoint boundary-value
problem for the differential equation (7.31) actually has any eigenvalues.
The examples considered in Section 7.2 suggest that there is an infinite
sequence of eigenvalues, but this has not been proved in the completely
general case.
298 Boundary-Value Problems
Exercise

5. Consider the differential equation

y'' + λy = 0

with the boundary conditions

m₁₁y(0) + m₁₂y'(0) + n₁₁y(π) + n₁₂y'(π) = 0,
m₂₁y(0) + m₂₂y'(0) + n₂₁y(π) + n₂₂y'(π) = 0,

where the mᵢⱼ and nᵢⱼ are real constants. Show that this boundary-value problem is self-adjoint if and only if

m₁₁m₂₂ − m₁₂m₂₁ = n₁₁n₂₂ − n₁₂n₂₁.
7.5 Sturm–Liouville Problems 299
We consider the differential equation

L(y) = −(p(t) y')' + q(t) y = λr(t) y,   (7.40)

where p, p', q, and r are continuous on a ≤ t ≤ b, p(t) > 0 on a ≤ t ≤ b, and r(t) ≠ 0 on a ≤ t ≤ b. Either r(t) > 0 on a ≤ t ≤ b or r(t) < 0 on a ≤ t ≤ b. If r(t) < 0 on a ≤ t ≤ b, we may replace λ by −λ to obtain an equation of the form (7.40) with r(t) > 0 on a ≤ t ≤ b.
We shall now study boundary-value problems consisting of the differential equation (7.40) with p(t) > 0, r(t) > 0 on a ≤ t ≤ b, and either separated boundary conditions

αy(a) + βy'(a) = 0,   γy(b) + δy'(b) = 0   (7.41)

or periodic boundary conditions

y(a) − y(b) = 0,   y'(a) − y'(b) = 0.   (7.42)

In the case of periodic boundary conditions we shall also require p(a) = p(b). Such a boundary-value problem is called a Sturm–Liouville problem.*
Exercises
* After the Swiss-born mathematician J. C. F. Sturm (1803–1855) and the French mathematician J. Liouville (1809–1882), who, independently, were the first to formulate these problems.
for every pair of functions u and v which satisfy the boundary conditions (7.41) or (7.42). The boundary conditions are used only to establish the self-adjointness condition (7.43). Obviously, as in the special case of the operator L(y) = −y'' considered in the preceding section, any boundary conditions which lead to (7.43), even if they are not of the form (7.41) or (7.42), define boundary-value problems for which our results are valid.

From (7.43) we deduce, exactly as in Theorem 1, Section 7.4, that all eigenvalues of a Sturm–Liouville problem are real. The orthogonality of eigenfunctions of (7.40), (7.41) or (7.40), (7.42) corresponding to different eigenvalues now takes a slightly different form from that of Theorem 1, Section 7.4. Namely, let u be a (real) eigenfunction corresponding to an eigenvalue λ and let v be a (real) eigenfunction corresponding to an eigenvalue μ ≠ λ. Then, since Lu = λr(t)u and Lv = μr(t)v, (7.43) gives

(λ − μ) ∫_a^b r(t) u(t) v(t) dt = 0.
Exercises

5. Test whether each of the following boundary-value problems is self-adjoint.

Every equation of the form (7.40) can be brought to a simpler form by a change of variables (known as the Liouville transformation).
Are the eigenfunctions and their derivatives continuous at t = π/2? [Hint: Find the solutions of the differential equation satisfying respectively y(0) = 0 and y(π) = 0, and choose λ to make the solutions match at t = π/2.]
*9. Find the eigenvalues and eigenfunctions of the boundary-value problem
11. Show that every complex number λ is an eigenvalue of the nonself-adjoint boundary-value problem
which played an essential role in the theory, where the integral may be improper. For the particular case of

L(y) = −(t y')' + (m²/t) y,

we obtain, after integrating by parts,
and therefore
Since this relationship was all that was used in proving Theorem 1, Section
7.5, this result remains valid for the equation (7.45) and boundary condi-
tions of the form
y(1) = 0;   y(t), y'(t) bounded at t = 0   (7.47)

or

γy(1) + δy'(1) = 0;   y(t) and y'(t) bounded at t = 0.   (7.48)
Observe that the condition that y(t) and y'(t) be bounded at t = 0 is of a
different type from the boundary conditions imposed previously. More-
over, this condition is inherent in the differential equation and arbitrary
values cannot be assigned at t=0. It is possible to prove an analog of Theorem
2, Section 7.5 for such problems, but we consider only the special case
(7.45), (7.47).
which is a form of the Bessel equation that has been studied in Section 6.9. We recall from Exercise 23 that the general solution of (7.49), with m an integer or zero, is

φ(t) = c₁ Jₘ(√λ t) + c₂ Kₘ(√λ t).

We recall also, see Theorem 3, Section 6.9, that the solution Kₘ(√λ t) is unbounded on any interval containing the origin. Thus if φ(t) is to be a bounded solution of (7.45) on 0 < t ≤ 1, c₂ is chosen zero and

φ(t) = c₁ Jₘ(√λ t).   (7.50)

Note that φ'(t) = c₁ √λ J'ₘ(√λ t), and, from the power series representation (6.62), Section 6.9, φ'(t) is also bounded on 0 < t ≤ 1. Therefore (7.50) satisfies the differential equation (7.45) and the second condition in (7.47). It remains to determine λ so that the first condition in (7.47) is satisfied, that is, we wish to determine λ so that

Jₘ(√λ) = 0.   (7.51)
By the analog of Theorem 1, Section 7.5, for this problem, we know that λ must be real.

Instead of invoking an analog of Theorem 2, Section 7.5, we can give an independent proof of the fact that Eq. (7.51) has an infinite sequence of real positive solutions. This proof has already been outlined in Exercises 17, 18, Section 6.9. Let this sequence of solutions of (7.51) be λₙ (n = 1, 2, ...). (These solutions are tabulated in [11] or [12].) The corresponding eigenfunctions of the boundary-value problem (7.45), (7.47) are

φₙ(t) = cₙ Jₘ(√λₙ t)   (n = 1, 2, ...).
Exercises
and imitate the analysis given above for the Bessel equation.]
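The roots of an equation like (7.51) can be located numerically. The following sketch is ours, not the book's: it evaluates J₀ by its power series and bisects to find the first three zeros, whose squares are the eigenvalues λₙ in the case m = 0.

```python
def J0(x):
    """Bessel function J0 via its power series (adequate for |x| <= 12)."""
    term, total = 1.0, 1.0
    for k in range(1, 60):
        term *= -(x * x) / (4.0 * k * k)   # ratio of consecutive series terms
        total += term
    return total

def bisect(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# The first three zeros of J0 lie in (2, 3), (5, 6), (8, 9).
zeros = [bisect(J0, *br) for br in [(2, 3), (5, 6), (8, 9)]]
eigenvalues = [z * z for z in zeros]   # lambda_n = (nth zero of J0)^2
print(zeros)        # approx [2.4048, 5.5201, 8.6537]
print(eigenvalues)
```

In practice one would use a library routine such as `scipy.special.jn_zeros`; the hand-rolled series keeps the sketch self-contained.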
with the same boundary conditions (7.53) has no nontrivial solution (that is, provided that λ is not an eigenvalue of (7.54), (7.53)). This solution φ may be written in the form

φ(t) = ∫_a^b G(t, s, λ) f(s) ds,   (7.55)
where the function G(t, s, 4), called the Green’s function for the problem (7.52),
(7.53), has the following properties.
i) G(t, s, λ) is a continuous function of (t, s, λ) for a ≤ t, s ≤ b and for λ not an eigenvalue of (7.54), (7.53).
ii) (∂G/∂t)(t, s, λ) is a continuous function of (t, s, λ) for t ≠ s and for λ not an eigenvalue of (7.54), (7.53); moreover

lim_{h→0+} [(∂G/∂t)(s + h, s, λ) − (∂G/∂t)(s − h, s, λ)] = −1/p(s).

iii) G(t, s, λ), considered as a function of t, satisfies the homogeneous differential equation (7.54) for each t except t = s.
iv) G(t, s, λ), considered as a function of t, satisfies the boundary conditions (7.53) for each s, a ≤ s ≤ b.
v) G(t, s, λ) = G(s, t, λ) if a ≤ t, s ≤ b and if λ is real but not an eigenvalue of (7.54), (7.53).
Proof. Let φ₁(t, λ), φ₂(t, λ) be the solutions of the differential equation (7.54) such that

φ₁(a, λ) = 1,   φ₁'(a, λ) = 0,   φ₂(a, λ) = 0,   φ₂'(a, λ) = 1.   (7.56)

Then φ₁ and φ₂ exist on the whole interval a ≤ t ≤ b by Theorem 1, Section 3.1, and are linearly independent on a ≤ t ≤ b by Theorem 1, Section 3.3. Using Theorem 3, Section 3.3, we calculate their Wronskian
In view of (7.56), the boundary condition φ(a) = 0 gives c₁ = 0. The boundary condition φ(b) = 0 gives
Combining the first and third terms, we obtain the form (7.55) as desired, with

G(t, s, λ) = [φ₁(t, λ) φ₂(s, λ) φ₂(b, λ) − φ₂(t, λ) φ₂(s, λ) φ₁(b, λ)] / [p(a) φ₂(b, λ)]   if s ≤ t,

G(t, s, λ) = [φ₂(t, λ) φ₁(s, λ) φ₂(b, λ) − φ₂(t, λ) φ₂(s, λ) φ₁(b, λ)] / [p(a) φ₂(b, λ)]   if t ≤ s.   (7.59)
From this explicit representation (7.59) it is easy to verify that G(t, s, λ) has the properties (i)–(v) given in the statement of the theorem. The only part of the theorem not yet proved is the uniqueness of the solution (7.55).
7.7 Green’s Function 309
Exercises
1. Verify that the function G(t, s, λ) given by (7.59) has the properties (i)–(v) listed in the statement of Theorem 1.
2. Show that if λ is an eigenvalue, the equation (7.58) becomes

(1/p(a)) ∫_a^b φ₂(s, λ) f(s) ds = 0.
3. Show that when λ is an eigenvalue, the boundary-value problem (7.52), (7.53) has a solution if and only if f is orthogonal to the eigenfunction φ₂(t, λ) on a ≤ t ≤ b.
4. Solve the boundary-value problem (7.52), (7.53) when λ is an eigenvalue and f is orthogonal to the eigenfunction φ₂(t, λ) on a ≤ t ≤ b. Show also that the solution is not unique in this case.
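In the simplest special case L(y) = −y'' with λ = 0 and boundary conditions y(0) = y(π) = 0, this construction yields the explicit kernel G(t, s) = t(π − s)/π for t ≤ s and s(π − t)/π for s ≤ t. A short numerical check (ours; the function names are illustrative) confirms that φ(t) = ∫₀^π G(t, s) f(s) ds recovers the known solution y = sin t of −y'' = sin t:

```python
import math

def G(t, s):
    """Green's function for -y'' = f with y(0) = y(pi) = 0."""
    lo, hi = min(t, s), max(t, s)
    return lo * (math.pi - hi) / math.pi

def solve(f, t, n=4000):
    """phi(t) = integral_0^pi G(t, s) f(s) ds, midpoint rule."""
    h = math.pi / n
    return sum(G(t, (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n)) * h

# -y'' = sin t, y(0) = y(pi) = 0 has the exact solution y = sin t.
for t in [0.5, 1.0, 2.0]:
    print(solve(math.sin, t), math.sin(t))   # the two columns agree
```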
αy(a) + βy'(a) = 0,   γy(b) + δy'(b) = 0.   (7.60)

We assume, as before, that p, q, r, and f are continuous and p(t) > 0, r(t) > 0 for a ≤ t ≤ b. Let ψ₁(t, λ) be a solution, not identically zero, of (7.54) which satisfies the boundary condition αy(a) + βy'(a) = 0 and let ψ₂(t, λ) be a solution, not identically zero, of (7.54) which satisfies the other boundary condition γy(b) + δy'(b) = 0. If λ is not an eigenvalue of the corresponding homogeneous boundary-value problem (7.54), (7.60), then the solutions ψ₁(t, λ), ψ₂(t, λ) are linearly independent on a ≤ t ≤ b.
Exercise
5. Show that if ψ₁(t, λ) and ψ₂(t, λ) are linearly dependent on a ≤ t ≤ b, then λ is an eigenvalue and ψ₁(t, λ) (or ψ₂(t, λ)) is a corresponding eigenfunction of the homogeneous boundary-value problem.
The most general function of (t, s, λ) which satisfies the differential equation (7.54) for a ≤ t < s and the boundary condition at a has the form

G(t, s, λ) = c₁(s, λ) ψ₁(t, λ),   a ≤ t ≤ s.

Similarly, the most general function of (t, s, λ) which satisfies the differential equation (7.54) for s < t ≤ b and the boundary condition at b has the form

G(t, s, λ) = c₂(s, λ) ψ₂(t, λ),   s ≤ t ≤ b.

Thus the function G(t, s, λ), defined in this piecewise manner, is the most general function having the properties (iii), (iv) of Theorem 1. The symmetry property (v) implies

c₁(s, λ)/ψ₂(s, λ) = c₂(t, λ)/ψ₁(t, λ),

and since the left side of this equation is independent of t while the right side is independent of s, both sides must be independent of both s and t. Now, we may write c₁(s, λ) = kψ₂(s, λ), c₂(t, λ) = kψ₁(t, λ), and

G(t, s, λ) = { kψ₂(s, λ) ψ₁(t, λ),   a ≤ t ≤ s;
              kψ₁(s, λ) ψ₂(t, λ),   s ≤ t ≤ b. }
It is easy to verify that this function G has the property (i) of continuity in (t, s, λ) for a ≤ t, s ≤ b if λ is not an eigenvalue. The partial derivative (∂G/∂t)(t, s, λ) is continuous except for t = s, and

(∂G/∂t)(s + h, s, λ) − (∂G/∂t)(s − h, s, λ) → kW(ψ₁, ψ₂)(s)   as h → 0+,

where W(ψ₁, ψ₂)(s) is the Wronskian of ψ₁, ψ₂ as defined in Section 3.3. By Abel's formula (see Theorem 3, Section 3.6),

W(ψ₁, ψ₂)(s) = W(ψ₁, ψ₂)(a) p(a)/p(s).

Thus property (ii) holds if we choose k = −1/[p(a) W(ψ₁, ψ₂)(a)], so that

G(t, s, λ) = −(1/[p(a) W(ψ₁, ψ₂)(a)]) { ψ₂(s, λ) ψ₁(t, λ), a ≤ t ≤ s;  ψ₁(s, λ) ψ₂(t, λ), s ≤ t ≤ b },   (7.61)

and the solution of the boundary-value problem (7.52), (7.60), for λ not an eigenvalue, is

φ(t) = ∫_a^b G(t, s, λ) f(s) ds
     = −(1/[p(a) W(ψ₁, ψ₂)(a)]) [ψ₂(t, λ) ∫_a^t ψ₁(s, λ) f(s) ds + ψ₁(t, λ) ∫_t^b ψ₂(s, λ) f(s) ds].   (7.62)
Exercises
6. Show that φ(t) given by (7.62) satisfies the differential equation (7.52) and the boundary conditions (7.60) if λ is not an eigenvalue.
7. Show that the problem (7.52), (7.60) has a unique solution if λ is not an eigenvalue.
The Green’s function which we have constructed here reduces to the one
constructed by a different approach in Theorem 1 when we take β = δ = 0 in the boundary conditions (7.60). In fact, the following is true.
or equivalently

∫_a^b [G₁(t, s, λ) − G₂(t, s, λ)] f(s) ds = 0   (7.63)

for a ≤ t ≤ b, and for every continuous function f. We choose the particular function

f(s) = G₁(t, s, λ) − G₂(t, s, λ)

for any fixed t (a ≤ t ≤ b) and for λ not an eigenvalue. Then (7.63) becomes
Exercises
8. Verify, by using (7.61), the previously constructed Green’s functions in Section 7.3,
namely, equation (7.27) for Example 1, and Exercises 9 and 10.
*9. Show that the Green's function for the problem

−(t y')' + (m²/t) y = λty + f(t)
Existence Theory
8.1 Existence of Solutions 315
centered at (t₀, y₀). This will mean that we can apply the local result of this section at every point in a region D in which f satisfies these hypotheses. Suppose f is continuous in D and that (t₀, y₀) is an arbitrary point of D.
The first step in our development is the observation that the initial-value
problem (8.1), (8.2) is equivalent to the problem of finding a continuous
function y(t), defined in some interval I containing t₀, such that y(t) satisfies the integral equation*

y(t) = y₀ + ∫_{t₀}^t f(s, y(s)) ds.   (8.3)
Exercises
1. Determine the integral equation equivalent to the initial-value problem

y' = t² + y²,   y(0) = 1.
* Equation (8.3) is called an integral equation (of Volterra type) because the unknown function appears both under and outside the integral sign.
(d/dt) ∫₀^t H(t, s) ds = H(t, t) + ∫₀^t (∂H/∂t)(t, s) ds,

which is easily proved by the chain rule, assuming only that H, ∂H/∂t are continuous on some rectangle containing s = t = 0.]
3. Construct an equivalent integral equation to the initial-value problem

y'' + ω²y = g(t, y),   y(0) = y₀,   y'(0) = z₀,

assuming that g is continuous in a region D containing (0, y₀) and where ω > 0 is a constant. [Hint: Assuming a solution φ of the differential equation on an interval I which satisfies the initial conditions, apply the variation of constants formula (Section 3.7). To prove the converse, proceed as in Exercise 2. Answer:

y(t) = y₀ cos ωt + (z₀/ω) sin ωt + ∫₀^t [sin ω(t − s)/ω] g(s, y(s)) ds.]
4. Show that if φ satisfies the integral equation

φ(t) = e^{it} + α ∫_t^∞ sin(t − s) (φ(s)/s²) ds

(assuming the existence of the integral), then φ satisfies the differential equation y'' + (1 + α/t²) y = 0 (see Eq. (6.82), Section 6.11).
and we continue the process. Our goal is to find a function φ with the property that when it is substituted in the right side of (8.3) the result is the same function φ. If we continue our approximation procedure, we may hope that the sequence of functions {φⱼ(t)}, called successive approximations, converges
to a limit function which has this property. Under suitable hypotheses this
is the case, and precisely this approach is used to prove the existence of a
solution of the integral equation (8.3).
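For the simplest example y' = y, y(0) = 1, the successive approximations can be carried out exactly: each iterate is a Taylor partial sum of e^t. The sketch below is ours, representing iterates as polynomial coefficient lists:

```python
import math

def picard_step(coeffs):
    """One successive approximation for y' = y, y(0) = 1.

    phi_{j+1}(t) = 1 + integral_0^t phi_j(s) ds, with phi_j stored as
    coefficients [c0, c1, ...] meaning c0 + c1*t + c2*t^2 + ...
    """
    return [1.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

phi = [1.0]                       # phi_0(t) = y_0 = 1
for _ in range(10):
    phi = picard_step(phi)

# phi_10 is the 10th Taylor partial sum of e^t: 1 + t + t^2/2! + ...
value = sum(c * 1.0 ** k for k, c in enumerate(phi))   # evaluate at t = 1
print(value, math.e)
```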
Exercise
We will consider the problem (8.1), (8.2) first with f and ∂f/∂y continuous on a rectangle R = {(t, y) | |t − t₀| < a, |y − y₀| < b} centered at (t₀, y₀). We assume that f and ∂f/∂y are bounded on R (if, as often happens in practice, the functions f and ∂f/∂y are continuous on R̄, the closure of R, defined by R̄ = {(t, y) | |t − t₀| ≤ a, |y − y₀| ≤ b}, then they are necessarily bounded on R), that is, that there exist constants M > 0, K > 0 such that

|f(t, y)| ≤ M,   |(∂f/∂y)(t, y)| ≤ K   for (t, y) in R.   (8.6)

By the mean value theorem, the second bound in (8.6) implies the Lipschitz condition

|f(t, y₁) − f(t, y₂)| ≤ K|y₁ − y₂|   (8.7)

for (t, y₁), (t, y₂) in R. Not every continuous function satisfies such a condition. For example, for f(t, y) = 3y^{2/3} on a region containing y = 0,

[f(t, y₁) − f(t, 0)]/(y₁ − 0) = 3y₁^{2/3}/y₁ = 3y₁^{−1/3}.

Now, choosing y₁ > 0 sufficiently small, it is clear that 3y₁^{−1/3} can be made larger than any preassigned constant. Therefore, (8.7) fails to hold for any K.
Exercises
6. Compute a Lipschitz constant K as in (8.7) and then show that each of the following functions f satisfies the Lipschitz condition in the region indicated.
a) f(t, y) = t² + y², {(t, y) | |t| ≤ 1, |y| ≤ 3}.
b) f(t, y) = p(t) cos y + q(t) sin y, {(t, y) | |t| ≤ 100, |y| < ∞}, where p, q are continuous functions on −100 ≤ t ≤ 100.
c) f(t, y) = t exp(−y²), {(t, y) | |t| ≤ 1, |y| < ∞}.
7. Show that f(t, y) = t|y| satisfies a Lipschitz condition in the region {(t, y) | |t| ≤ 1, |y| < ∞}.
We have already indicated that we will use an approximation procedure
to establish the existence of solutions. Now, let us define the successive
φ₀(t) = y₀,

|φⱼ(t) − y₀| = |∫_{t₀}^t f(s, φⱼ₋₁(s)) ds| ≤ |∫_{t₀}^t |f(s, φⱼ₋₁(s))| ds| ≤ M|t − t₀| ≤ Mα ≤ b.

This establishes the lemma.
Figure 8.1
of the rectangle (Fig. 8.2), then we define α = b/M. In either case, all the
successive approximations remain in the triangles indicated in the figures.
We can now state and prove the fundamental local existence theorem.
Figure 8.2
|φ_{p+1}(t) − φ_p(t)| ≤ M K^p (t − t₀)^{p+1}/(p + 1)!,

which is (8.12) for j = p. This proves (8.12).
Exercise
8. Prove the analog of the inequality (8.12) for the interval t₀ − α ≤ t ≤ t₀.
Combining (8.12) with the result of Exercise 8, we have

|φ_{j+1}(t) − φ_j(t)| ≤ (M/K) (Kα)^{j+1}/(j + 1)!   for |t − t₀| ≤ α;

therefore,

(M/K) (Kα)^{j+1}/(j + 1)! → 0   as j → ∞.
To prove the continuity of φ(t) on I, let ε > 0 be given. We have φ(t + h) − φ(t) = φ(t + h) − φⱼ(t + h) + φⱼ(t + h) − φⱼ(t) + φⱼ(t) − φ(t), and thus

|φ(t + h) − φ(t)| ≤ 2εⱼ + |φⱼ(t + h) − φⱼ(t)|

by the above estimate. Choosing j sufficiently large and |h| sufficiently small, and using lim_{j→∞} εⱼ = 0 and the continuity of the φⱼ(t), we can make |φ(t + h) − φ(t)| < ε.
We now wish to show that the limit function φ(t) satisfies the integral equation (8.3). We will do this by letting j → ∞ in the definition (8.8) of the successive approximations and by showing that

|φ(t) − φⱼ(t)| ≤ (M/K) Σ_{i=j+1}^∞ (Kα)^i/i! = εⱼ

for every t on I.
Exercises

12. Show that
a) the φₙ are well defined for |t| ≤ α, where

α = min(a, b/(|z₀| + M));
b) Show that {φₙ} converges to a solution of the integral equation (8.4) on |t| ≤ α. This together with Exercise 2 establishes the existence of solutions of the initial-value problem y'' + g(t, y) = 0, y(0) = y₀, y'(0) = z₀.
13. Consider the integral equation

φ(t) = e^{it} + α ∫_t^∞ sin(t − s) (φ(s)/s²) ds

and the successive approximations

φ₀(t) = 0,
φₙ₊₁(t) = e^{it} + α ∫_t^∞ sin(t − s) (φₙ(s)/s²) ds   (n = 0, 1, 2, ...).

a) Since φₙ(t) = φ₀(t) + (φ₁(t) − φ₀(t)) + ⋯ + (φₙ(t) − φₙ₋₁(t)), show that the φₙ are well defined for 1 ≤ t < ∞, and that {φₙ} converges uniformly for 1 ≤ t < ∞ to a continuous limit function φ.
b) Show that the limit function satisfies the integral equation.
c) Using

|φₙ(t)| ≤ |φ₁(t) − φ₀(t)| + ⋯ + |φₙ(t) − φₙ₋₁(t)|

and the above estimate for |φₙ(t) − φₙ₋₁(t)|, show that the limit function satisfies the estimate

|φ(t)| ≤ e^{|α|},   1 ≤ t < ∞.
This together with Exercise 4 supplies the missing steps which were assumed in the justification of the asymptotic series solution of the equation

y'' + (1 + α/t²) y = 0

in Section 6.11.
We have suggested in Section 1.6 that Theorem 1 is not the best possible result of its type. Under the hypotheses of Theorem 1, we also have uniqueness
of solutions of (8.1), (8.2), as we shall prove in the following section. However,
we may have existence of solutions without uniqueness. In fact, the following
result is true.
Theorem 2. Suppose f is continuous on the rectangle R, and suppose |f(t, y)| ≤ M for all points (t, y) in R. Let α be the smaller of the positive numbers a and b/M. Then there is a solution φ of the differential equation (8.1) which satisfies the initial condition (8.2), existing on the interval |t − t₀| ≤ α.
φ_c(t) = { 0,   −∞ < t ≤ c;   (t − c)³,   c ≤ t < ∞ }

is a solution of y' = 3y^{2/3} through (0, 0) for every c ≥ 0. In addition, the identically zero function is a solution of this initial-value problem.
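The failure of uniqueness is easy to verify directly. The sketch below is ours: it checks numerically that each φ_c passes through (0, 0) and satisfies the differential equation, using a central difference for the derivative.

```python
def phi(t, c):
    """phi_c(t) = 0 for t <= c, (t - c)^3 for t >= c."""
    return 0.0 if t <= c else (t - c) ** 3

def residual(t, c, h=1e-6):
    """|phi_c'(t) - 3 * phi_c(t)^(2/3)| via a central difference."""
    deriv = (phi(t + h, c) - phi(t - h, c)) / (2 * h)
    return abs(deriv - 3.0 * phi(t, c) ** (2.0 / 3.0))

# Every c >= 0 gives a distinct solution through (0, 0): uniqueness fails.
worst = []
for c in [0.0, 0.5, 2.0]:
    assert phi(0.0, c) == 0.0                     # all pass through (0, 0)
    worst.append(max(residual(t, c) for t in [0.1, 1.0, 3.0, 5.0]))
print(worst)   # all essentially zero
```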
Exercises
14. Do the successive approximations for solutions φ of y' = 3y^{2/3} with φ(0) = 0 converge to a solution?
15. Do the successive approximations for solutions of the problem considered in Exercise 14, but using

φ₀(t) = { 0,   0 ≤ t ≤ 1;   (t − 1)³,   1 ≤ t < ∞ }

as the initial approximation, converge to a solution?
y' + p(t) y = q(t),

where the coefficients p, q are continuous on an interval I, we demonstrated that the solution passing through the point (t₀, y₀) exists not just near t₀ but on the whole interval I. We did this by giving an explicit representation
for the solution and showing that this representation is valid for all t in I. The reader may now observe that this initial-value problem is equivalent to the integral equation

y(t) = y₀ + ∫_{t₀}^t [q(s) − p(s) y(s)] ds,

with the successive approximations

φ₀(t) = y₀,
φⱼ(t) = y₀ + ∫_{t₀}^t [q(s) − p(s) φⱼ₋₁(s)] ds   (j = 1, 2, ...).
Exercises
16. Consider the differential equation y' = y, with the solution φ(t) = e^t satisfying the initial condition φ(0) = 1. Let p(t) be any polynomial in t and define the successive approximations

φ₀(t) = p(t),
φⱼ(t) = 1 + ∫₀^t φⱼ₋₁(s) ds   (j = 1, 2, ...).

Show that {φⱼ(t)} still converges to e^t on every bounded interval.
17. Consider the equation

y' = f(t, y),

where f and ∂f/∂y are continuous in a region D in the (t, y) plane, and let (t₀, y₀) be a point in D. Let G be a bounded subregion of D containing (t₀, y₀) and let Ḡ be the closure of G. Define

M = max_Ḡ |f(t, y)|.
and generally

φₘ(t) = y₀ + ∫_{t₀}^t f(s, φₘ₋₁(s)) ds   for m ≥ 1.

Figure 8.3

K₁ = max_Ḡ |(∂f/∂y)(t, y)|.
[Hint: Follow the proof of Theorem 1.]
(Remark: The same result holds if f satisfies a Lipschitz condition.)
18. In the notation of Exercise 17, suppose that a function φ₀(t) has been found
satisfying the hypotheses of Exercise 17, and also constants k > 0, δ ≥ 0 are known such that

|φ₁(t) − φ₀(t)| ≤ k|t − t₀|^δ,   α ≤ t ≤ β.

Show that

|φ(t) − φ₁(t)| ≤ k|t − t₀|^δ [exp(K₁|t − t₀|) − 1],   α ≤ t ≤ β,

where K₁ is as in Exercise 17. Can you generalize this result? [Hint: Assume an estimate for |φₘ(t) − φₘ₋₁(t)| on α ≤ t ≤ β and compute an estimate for |φ(t) − φₘ(t)| on α ≤ t ≤ β.]
19. Let φ(t) be the solution of y' = t² + y² on 0 ≤ t ≤ 1, with φ(0) = 0. Show that

|φ(t) − (t³/3 + t⁷/63)| ≤ 0.0158 t⁷,   0 ≤ t ≤ 1.

[Hint: In the notation of Exercise 18, let φ₀(t) = t³/3, and compute φ₁(t) and |φ₁(t) − φ₀(t)|. Then apply the result of Exercise 18 to the differential equation y' = t² + y² on the closed rectangle {(t, y) | 0 ≤ t ≤ 1, |y| ≤ A} for some suitably chosen A > 0. Such a choice is A = 0.345, and this gives K₁ = 0.690.]
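The computation of φ₁ in the hint can be checked numerically. The sketch below is ours: it performs one Picard step on a grid, starting from φ₀(t) = t³/3, and compares the result with the closed form t³/3 + t⁷/63.

```python
def picard(prev, ts):
    """phi_new(t) = integral_0^t (s^2 + prev(s)^2) ds, trapezoidal on grid ts."""
    vals = [s * s + prev(s) ** 2 for s in ts]
    out, acc = [], 0.0
    for i, t in enumerate(ts):
        if i > 0:
            acc += 0.5 * (vals[i - 1] + vals[i]) * (ts[i] - ts[i - 1])
        out.append(acc)
    return out

n = 2000
ts = [i / n for i in range(n + 1)]             # grid on [0, 1]
phi1 = picard(lambda s: s ** 3 / 3.0, ts)      # one step from phi_0 = t^3/3

# phi_1 should agree with the closed form t^3/3 + t^7/63.
err = max(abs(v - (t ** 3 / 3 + t ** 7 / 63)) for v, t in zip(phi1, ts))
print(err)   # only trapezoidal quadrature error remains
```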
20. Let g(t) be a differentiable function on 0 ≤ t ≤ π. Show that the differential equation

−y'' + g(t) y = λy

has a solution φ₁(t, λ) on 0 ≤ t ≤ π such that

φ₁(t, λ) = sin(√λ t)/√λ + M₁,   φ₁'(t, λ) = cos(√λ t) + M₂,

where |M₁| ≤ K/λ, |M₂| ≤ K/√λ on 0 ≤ t ≤ π for some constant K. [Hint: Consider the integral equation

φ(t) = sin(√λ t)/√λ + ∫₀^t [sin √λ(t − s)/√λ] g(s) φ(s) ds.]

21. Show that the differential equation of Exercise 20 has a solution φ₂(t, λ) on 0 ≤ t ≤ π such that

φ₂(t, λ) = cos(√λ t) + M₃,   φ₂'(t, λ) = −√λ sin(√λ t) + M₄,

where |M₃| ≤ K/√λ, |M₄| ≤ K on 0 ≤ t ≤ π for some constant K. [Hint: Proceed as in Exercise 20, using an appropriate integral equation.]
r(t) ≤ K + L ∫_α^t r(s) ds

for α ≤ t ≤ β. Then r(t) ≤ K exp(L(t − α)) for α ≤ t ≤ β.
Exercises

4. Let r(t) be a nonnegative continuous function which satisfies

r(t) ≤ K₁ + ε(t − α) + K₂ ∫_α^t r(s) ds

on an interval α ≤ t ≤ β, where ε, K₁, K₂ are given positive constants. Show that

r(t) ≤ K₁ exp(K₂(t − α)) + (ε/K₂)[exp(K₂(t − α)) − 1].
5. Let r(t) be a nonnegative continuous function which satisfies the inequality

r(t) ≤ a + k ∫₀^t f(s) r(s) ds

for t ≥ 0, where a ≥ 0 and k > 0 are constants, and f is a nonnegative continuous function for t ≥ 0. Show that

r(t) ≤ a exp(k ∫₀^t f(s) ds),   t ≥ 0.
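The Gronwall-type inequalities above can be sanity-checked numerically. The sketch below is ours: for a concrete function r it verifies both the hypothesis r(t) ≤ K₁ + K₂ ∫₀^t r(s) ds and the conclusion r(t) ≤ K₁ e^{K₂ t}.

```python
import math

def check_gronwall(r, K1, K2, T=1.0, n=1000):
    """Check hypothesis r(t) <= K1 + K2*int_0^t r and conclusion r(t) <= K1*e^(K2 t)."""
    h = T / n
    acc, ok_hyp, ok_con = 0.0, True, True
    for i in range(n + 1):
        t = i * h
        if i > 0:
            acc += 0.5 * (r(t - h) + r(t)) * h      # running trapezoidal integral
        ok_hyp = ok_hyp and (r(t) <= K1 + K2 * acc + 1e-9)
        ok_con = ok_con and (r(t) <= K1 * math.exp(K2 * t) + 1e-9)
    return ok_hyp, ok_con

# r(t) = 1 + t satisfies the integral inequality with K1 = K2 = 1,
# and indeed 1 + t <= e^t, as the Gronwall inequality predicts.
result = check_gronwall(lambda t: 1.0 + t, 1.0, 1.0)
print(result)
```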
φₖ(t) = y₀ + ∫_{t₀}^t f(s, φₖ(s)) ds   (k = 1, 2),

and subtraction gives

φ₂(t) − φ₁(t) = ∫_{t₀}^t [f(s, φ₂(s)) − f(s, φ₁(s))] ds.

Taking absolute values and using (8.7), we have

|φ₂(t) − φ₁(t)| ≤ K |∫_{t₀}^t |φ₂(s) − φ₁(s)| ds|,

where |(∂f/∂y)(t, y)| ≤ K for all (t, y) ∈ R. Taking first the case t ≥ t₀ and then t ≤ t₀, the Gronwall inequality now implies for both cases that |φ₂(t) − φ₁(t)| ≤ 0. Since |φ₂(t) − φ₁(t)| is nonnegative, we have |φ₂(t) − φ₁(t)| = 0 for all t in I, or φ₂(t) = φ₁(t) for t in I. Thus there cannot be two distinct solutions of (8.1), (8.2) on I, and this proves uniqueness.
It is not necessary to assume as much as continuity of ∂f/∂y to ensure uniqueness. It is clear from the proof of Theorem 1 that the Lipschitz condition (8.7), which follows automatically from the continuity of ∂f/∂y, could be used in the hypothesis of Theorem 1 instead of the continuity of ∂f/∂y without changing the proof. It is possible to prove uniqueness of solutions under considerably weaker hypotheses, but in most problems Theorem 1 is applicable and such more refined results are not needed.
Exercises
6. State and prove a uniqueness theorem for solutions of the initial-value problem
for |t| ≤ α; subtract the two equations corresponding to k = 1 and k = 2, then use the Lipschitz condition (8.7), obtaining
7. Suppose that

|f(t, y₁) − f(t, y₂)| ≤ h(|y₁ − y₂|)

for every pair of points (t, y₁), (t, y₂) in a region D. Suppose that the function h(u) is continuous for 0 ≤ u ≤ a for some a > 0, that h(u) > 0 for u > 0, and that

lim_{ε→0+} ∫_ε^a du/h(u) = ∞.

Then through each point (t₀, y₀) in D there is at most one solution of the equation y' = f(t, y). [Hint: Suppose φ₁ and φ₂ are two solutions with φ₁(t₀) = φ₂(t₀) = y₀.]
8.3 Continuation of Solutions 333
The existence theorems of Section 8.1 state that under suitable hypotheses
there is a solution of a differential equation that exists on some, possibly
small, interval. The question to be studied in this section is whether this
solution in fact exists on a larger interval.
Example 1. Consider the first-order initial-value problem

y' = y²,   y(0) = 1,

whose solution φ can be written explicitly as φ(t) = 1/(1 − t) (see Example 1, Section 1.3). Clearly the solution exists on −∞ < t < 1. Suppose we try to determine the interval of validity of the solution as given by Theorem 1, Section 8.1. Here R is the rectangle R = {(t, y) | |t| ≤ a, |y − 1| ≤ b}, and α and M of Theorem 1, Section 8.1, are given by M = max_R (y²) = (1 + b)², α = min(a, b/M). The largest value of the positive number b/M = b/(1 + b)² is 1/4.
Exercise
1. Using calculus or otherwise, show that

max_{b>0} b/(1 + b)² = 1/4,

attained at b = 1.
Thus, no matter how a and b are chosen, we have α ≤ 1/4. If we take a ≥ 1/4 and b = 1, so that b/(1 + b)² = 1/4, we obtain α = 1/4, and Theorem 1, Section 8.1, gives the existence of a solution for |t| ≤ 1/4.
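The guaranteed interval |t| ≤ 1/4 is conservative: the solution actually exists up to t = 1. A numerical integration (ours; classical Runge–Kutta) tracks 1/(1 − t) well past t = 1/4:

```python
def rk4(f, t0, y0, t1, n):
    """Classical fourth-order Runge-Kutta for y' = f(t, y)."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

f = lambda t, y: y * y          # y' = y^2, y(0) = 1; exact phi(t) = 1/(1 - t)
for t in [0.25, 0.5, 0.9]:      # well beyond the guaranteed |t| <= 1/4
    print(rk4(f, 0.0, 1.0, t, 20000), 1.0 / (1.0 - t))
```

As t approaches 1 the solution blows up, which is why no local theorem can promise more than a finite interval in advance.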
Figure 8.4
Now consider the point (t₀ + α, φ(t₀ + α)) as a new initial point. Since this point is in D (it cannot be on the boundary of D because the closed rectangle R is contained in D), there exist numbers a₁ > 0, b₁ > 0 such that the rectangle R₁ = {(t, y) | |t − (t₀ + α)| ≤ a₁, |y − φ(t₀ + α)| ≤ b₁} centered at (t₀ + α, φ(t₀ + α)) is contained in D (see Fig. 8.5). Consider the differential equation (8.16) subject to the initial condition y(t₀ + α) = φ(t₀ + α); if ψ denotes the resulting solution, define

φ̃(t) = { φ(t),   t₀ − α ≤ t ≤ t₀ + α;   ψ(t),   t₀ + α ≤ t ≤ t₀ + α + α₁. }
Figure 8.5
Exercises
2. Show that the function φ̃ satisfies the integral equation (8.3) on the interval t₀ − α ≤ t ≤ t₀ + α + α₁.
3. Consider the solution φ of Example 1 above, which has been shown to exist on the interval −1/4 ≤ t ≤ 1/4. Consider now the continuation of φ to the right obtained by finding the solution ψ through the point (1/4, 4/3). Show that on any rectangle R₁ = {(t, y) | |t − 1/4| ≤ a₁, |y − 4/3| ≤ b₁}, M = max_{R₁} y² = (4/3 + b₁)². Deduce, similarly to Example 1, that α₁ = 3/16. This now gives existence on −1/4 ≤ t ≤ 7/16.
4. Continue, similarly to what was done in Exercise 3, the solution φ of Example 1 to the left of the point (−1/4, 4/5).
In general, the solution φ may be continued to the left from the point (t₀ − α, φ(t₀ − α)) in a similar manner, as suggested by Exercise 4.
Exercise
φ(t₀) = y₀   (8.17)

existing on some interval γ < t < δ. Then lim_{t→δ−} φ(t) and lim_{t→γ+} φ(t) exist.
Proof. Let t₁ and t₂ be any two points on the interval γ < t < δ with t₁ < t₂. Then, since φ satisfies the integral equation (8.3),

φ(tᵢ) = y₀ + ∫_{t₀}^{tᵢ} f(s, φ(s)) ds   (i = 1, 2).

Subtraction gives

φ(t₂) − φ(t₁) = ∫_{t₁}^{t₂} f(s, φ(s)) ds,

and the assumption |f(t, y)| ≤ M for (t, y) ∈ D now gives

|φ(t₂) − φ(t₁)| ≤ M(t₂ − t₁).   (8.19)

Since the right side of (8.19) tends to zero as t₁ and t₂ both tend to δ from below, the Cauchy convergence criterion shows that φ(t) tends to a limit as t tends to δ from below. We obtain the proof that φ(t) tends to a limit as t tends to γ from above in an analogous way by letting t₁ and t₂ tend to γ from above in (8.19).
In view of Lemma 1, we can define φ(δ) = lim_{t→δ−} φ(t), φ(γ) = lim_{t→γ+} φ(t), and we have the solution φ defined on the closed interval γ ≤ t ≤ δ. The continuation process described above can be repeated provided that the graph
Exercises
6. Show that the solution φ of y' = −y² with φ(1) = 1 exists for 0 < t < ∞ but cannot be continued to the left beyond t = 0.
7. Show that no solution other than φ(t) ≡ 0 of the equation y' = −y² can be extended to the interval −∞ < t < ∞.
8. Formulate, as another corollary to Theorem 1, the result on continuation if f is
defined on the infinite strip D={(t, y) |a<t<b, |y|< oo}.
There are special cases where we can prove a global result directly from
successive approximations. We have already pointed this out for the first-
order linear differential equation in Section 8.1. We shall see in Section 8.6
that, for linear systems in general, global existence can be proved directly
from successive approximations. Other cases are illustrated by the following
exercises.
Exercises
where K = K(a) and M = max_{|t−t₀|≤a} |f(t, y₀)|. [Hint: The trick now is to get around the fact that f(t, y) itself is not necessarily bounded on D, even though f(t, y₀) is bounded.
i) Induction easily shows that each φⱼ (j = 0, 1, 2, ...) is well defined for |t − t₀| ≤ a. Then

|φⱼ(t) − φⱼ₋₁(t)| ≤ (M/K) (Ka)^j/j!,

so that

|φⱼ(t)| ≤ |y₀| + (M/K) Σ_{i=1}^j (Ka)^i/i! ≤ |y₀| + (M/K)(e^{Ka} − 1),

from which we obtain the desired bound for |φ(t)| on taking the limit.
iv) Show that the limit function φ(t) is a solution of the initial-value problem.]
10. Let f(t, y) be a continuous function on the whole (t, y) plane. Suppose that ∂f/∂y is also continuous and suppose that for every a > 0,

|(∂f/∂y)(t, y)| ≤ K = K(a),   |t| ≤ a,   |y| < ∞.

Show that for every given (t₀, y₀) the equation y' = f(t, y) has a unique solution φ(t) on −∞ < t < ∞ such that φ(t₀) = y₀. [Hint: By Exercise 9, |φ(t)| is bounded on every interval |t − t₀| ≤ a; now apply the corollary to Theorem 1.]
11. Show that the equation
cannot be assumed that an explicit expression for f(t, y, y’) is known. Thus
it does not make sense to ask for an explicit representation for the solution
of (8.23). The proper question to ask is whether for a given class of
functions f which are suitably small, the problem (8.23) has a solution which
is described approximately by the solution (8.22) of the simplified problem
(8.21). In other words, the mathematical problem is one of existence of
solutions for a class of differential equations and of the qualitative behavior
of these solutions, rather than one of explicit solution.
Let y(t) be the solution of the initial-value problem
y'' + 2ay' + k²y = f(t, y, y'),   y(0) = y₀,   y'(0) = 0.   (8.24)
We will assume that there exists a constant M>0 such that
Theorem 1. If (8.28) is satisfied, then there exists a constant B>0 such that
every solution v(t) of the initial-value problem (8.27) with |yo| sufficiently small
satisfies
lv(t)|<B, lv’ (t)|<B (8.29)
for all t ≥ 0.
Proof. We consider (8.27) as if it were a linear nonhomogeneous problem
to which we apply the variation of constants formula (see Section 3.7,
especially Exercise 5). Since the unknown function v appears in the non-
When we solve (8.26), we see from the explicit solution that |u(t)| ≤ c, |u'(t)| ≤ c for t ≥ 0, where c is a constant which can be made arbitrarily small by making |y₀| sufficiently small. In fact, |u(t)| ≤ A = ky₀/ω, |u'(t)| ≤ ωA = ky₀, and we can take c = ky₀ if ω ≥ 1 and c = ky₀/ω if ω ≤ 1. Now we use (8.28) to estimate in (8.30) and (8.31), and we obtain (8.32). We let K = L if ω ≥ 1, K = L/ω if ω ≤ 1; and we add the two inequalities in (8.32) to obtain

r(t) ≤ 2c + 2K ∫₀^t exp(−a(t − s)) [r(s)]² ds,

where r(t) = |v(t)| + |v'(t)|.
8.4 The Nonlinear Simple Pendulum 343
v(t) = Ā cos(ωt − δ̄) + h(t),

where

|h(t)| ≤ C exp(−at),   |h'(t)| ≤ C exp(−at)

for t ≥ 0.
Proof. We rewrite (8.30) and (8.31) as
Exercise
1. Show that the constant C in Theorem 3 may be taken as the larger of 2LB²/aω and 2LB²/a.
The reader will note that the formula (8.37) says that every solution of the "correct" problem (8.24) with |y₀| sufficiently small behaves like some solution of the idealized linear equation (8.21). It does not, however, say that the solution of (8.24) behaves like the solution of the initial-value problem (8.21) which is composed of the idealized differential equation together with the same initial conditions as those in (8.24). The solution of (8.21) is x(t) = A exp(−at) cos(ωt − δ), while the solution of (8.24) is approximated by Ā exp(−at) cos(ωt − δ̄). To justify the use of the simpler problem (8.21) in place of the true problem (8.24), we would have to show that Ā is close to A and δ̄ is close to δ.
From the relation

A cos(ωt − δ) = y₀ cos ωt + (ay₀/ω) sin ωt

we see that

A² = y₀² + (ay₀/ω)²   and   δ = arctan(a/ω).
If we define
d₁ = (1/ω) ∫₀^∞ q[s, v(s), v′(s)] cos ωs ds,  d₂ = (1/ω) ∫₀^∞ q[s, v(s), v′(s)] sin ωs ds,
we may estimate
∫₀^∞ |q[s, v(s), v′(s)]| ds ≤ 2LB² ∫₀^∞ exp(−as) ds = 2LB²/a.
Thus
|d₁| ≤ 2LB²/aω,  |d₂| ≤ 2LB²/aω.
Since B can be made arbitrarily small by making |y₀| sufficiently small, |d₁|
and |d₂| can be made arbitrarily small. Now we see from (8.38) that
Ā² − A² and tan δ̄ − tan δ (and hence δ̄ − δ) can be made arbitrarily small.
Thus we have obtained the following result to complete our study of the
damped pendulum.
Corollary to Theorem 3. The constants Ā and δ̄ in the expression (8.37) for
the solution of the nonlinear problem (8.24) are approximated by the amplitude
A and phase angle δ obtained by solving the linear problem (8.21). These
approximations may be made as close as desired by taking |y₀| sufficiently small.
separately and then using (8.40), or by the following, more elegant argument.
Define the function G by
≤ K{|y₁ − z₁| + ⋯ + |yₙ − zₙ|} = K|y − z|,
which is (8.41). A function f satisfying an inequality of the form (8.41) for
any points (t, y), (t, z) in D is said to satisfy a Lipschitz condition in D with
Lipschitz constant K. A function f satisfying (8.41) need not, of course, be
of class C¹, and all the remarks made in the simple case of scalar functions
apply here.
We begin with the problem of existence.
|f(t, y)| ≤ M,
|∂f(t, y)/∂yⱼ| ≤ K  (j = 1, …, n) (8.42)
for (t, y) in B. Let α be the smaller of the numbers a and b/M and define the
successive approximations
φ₀(t) = η,
φⱼ₊₁(t) = η + ∫_{t₀}^t f(s, φⱼ(s)) ds  (j = 0, 1, 2, …). (8.43)
Then the sequence {φⱼ} of successive approximations converges (uniformly) on
the interval |t − t₀| ≤ α to a solution φ(t) of (8.39) which satisfies the initial
condition φ(t₀) = η.
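The successive approximations (8.43) can be carried out symbolically in simple cases. The following sketch (ours, not from the text) computes the iterates for the scalar problem y′ = y, y(0) = 1, representing each φⱼ as a list of polynomial coefficients; the iterates turn out to be the partial sums of the series for eᵗ:

```python
from fractions import Fraction

def picard_iterates(n):
    """Successive approximations (8.43) for the scalar problem
    y' = y, y(0) = 1  (so f(t, y) = y and eta = 1).
    Each iterate phi_j is a polynomial, stored as a coefficient
    list [c0, c1, ...] meaning c0 + c1*t + c2*t^2 + ..."""
    phi = [Fraction(1)]                     # phi_0(t) = eta = 1
    iterates = [phi]
    for _ in range(n):
        # integral of phi_j from 0 to t: c_i t^i -> c_i/(i+1) t^(i+1)
        integral = [Fraction(0)] + [c / (i + 1) for i, c in enumerate(phi)]
        integral[0] += 1                    # phi_{j+1}(t) = eta + integral
        phi = integral
        iterates.append(phi)
    return iterates

# phi_3(t) = 1 + t + t^2/2 + t^3/6, the cubic partial sum of e^t
```

Each further iterate adds the next Taylor term, in line with the uniform convergence asserted by the theorem.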
and then work with (8.44). This is the analog of Lemma 1, Section 8.1.
Exercises
1. Give a detailed proof of Theorem 1. (The reader is urged to carry out this proof
with care, in order to appreciate the usefulness of introducing vectors.)
2. By writing the scalar equation y⁽ⁿ⁾ = g(t, y, y′, …, y⁽ⁿ⁻¹⁾) as a system of n first-
order equations (see Section 6.2), apply Theorem 1 to deduce an existence
theorem for this scalar equation.
3. Given the system
y₁′ = y₂ + y₁² + 1,
y₂′ = y₁ − y₂ − 1.
Let y = (y₁, y₂) and let B be the “box” {(t, y) | |t| ≤ 1, |y| ≤ 2}. Determine the bounds
M, K in (8.42) for f and ∂f/∂y for this case. Determine α of Theorem 1. Compute the
first three successive approximations of the solution φ(t) satisfying the initial condition φ(0) = η.
We remark that Theorem 2, Section 8.1, also has an analog which is
easy to state. So far as uniqueness of solutions of the system (8.39) is
concerned, we have the following analog of Theorem 1, Section 8.2.
Theorem 2. Let the hypotheses of Theorem 1 be satisfied in the box B with
center at (t₀, η). Then there exists at most one solution φ of (8.39) satisfying
the initial condition φ(t₀) = η.
Exercise
The conclusion now follows as in the scalar case (Theorem 1, Section 8.2), by using
the Gronwall inequality (Lemma 1, Section 8.2).]
Exercises
Proof. By the analog for systems of Lemma 1, Section 8.1, the initial-value
problem (8.45) is equivalent to the integral equation
φ(t) = η + ∫_{t₀}^t [A(s) φ(s) + g(s)] ds. (8.46)
Let J be any closed finite subinterval of I containing t₀. (If I is closed and
finite, take J = I.) We proceed as in Theorem 1, Section 8.1, or Theorem 1,
Section 8.5, by defining the successive approximations
φ₀(t) = η,
φⱼ₊₁(t) = η + ∫_{t₀}^t [A(s) φⱼ(s) + g(s)] ds  (j = 0, 1, 2, …). (8.47)
First we prove by induction that each vector function φⱼ(t) is well defined
and continuous on the interval J. Clearly φ₀(t) is well defined and continuous
on J. Suppose φⱼ(t) is well defined and continuous on J. Then A(s) φⱼ(s)
+ g(s) is a continuous vector on J, and its integral from t₀ to t is a continuous
vector on J; hence from (8.47), φⱼ₊₁(t) is a continuous vector on J. Thus, by
induction, each φⱼ(t) is well defined and continuous on J.
Since J is a closed finite interval on which A and g are continuous,
|A(t)| and |g(t)| are bounded on J. We let K and L be constants such that
|A(s)| ≤ K,  |g(s)| ≤ L,  s ∈ J,
and we define M = K|η| + L. Now, proceeding exactly as in Theorem 1,
Section 8.1, we can establish the analog of (8.12), namely
|φⱼ₊₁(t) − φⱼ(t)| ≤ M Kʲ |t − t₀|^{j+1}/(j + 1)! (8.48)
for j = 0, 1, 2, …, t ∈ J. The only difference is that here (8.48) is valid for all
t in J. The remainder of the proof of existence of a solution φ(t) = lim_{j→∞} φⱼ(t)
on J proceeds exactly as in Theorem 1, Section 8.1. Since J is an arbitrary
closed finite subinterval of I, this solution exists on the whole interval I.
To show that the solution φ(t) is unique, we suppose that ψ(t) is another
solution of the initial-value problem (8.45) on I. Then ψ also satisfies the
integral equation (8.46). Subtraction gives
|φ(t) − ψ(t)| ≤ K ∫_{t₀}^t |φ(s) − ψ(s)| ds.
The Gronwall inequality (Lemma 1, Section 8.2) now gives |φ(t) − ψ(t)| ≤ 0.
Since |φ(t) − ψ(t)| is nonnegative, we have |φ(t) − ψ(t)| = 0, or φ(t) = ψ(t).
Thus there cannot be two distinct solutions of the initial-value problem
(8.45), and the proof of the theorem is now complete. ∎
Exercises
By hypothesis, there exist constants M > 0 and c, and a time T, such that
|g(t)| ≤ M exp(ct),  t ≥ T. (8.51)
We may assume c > 0, since increasing c increases the right-hand side of
(8.51) and does not affect the truth of the inequality. We may rewrite (8.50)
as the integral equation (8.52). Now, taking norms in (8.52) and using (8.51)
and (8.53), we obtain
|y(t)| ≤ K + ∫_T^t |A| |y(s)| ds + (M/c)[exp(ct) − exp(cT)]
≤ K + (M/c) exp(ct) + ∫_T^t |A| |y(s)| ds.
Multiplying by exp(−ct), we have
|y(t)| exp(−ct) ≤ K exp(−ct) + M/c + ∫_T^t |A| |y(s)| exp(−ct) ds
≤ K exp(−ct) + M/c + ∫_T^t |A| |y(s)| exp(−cs) ds, (8.54)
since exp(−ct) ≤ exp(−cs) for t ≥ s. Since c > 0, there exists a constant L such
that K exp(−ct) + M/c ≤ L for t ≥ T. Thus (8.54) becomes
|y(t)| exp(−ct) ≤ L + ∫_T^t |A| |y(s)| exp(−cs) ds.
Exercises
y′ = f(t, y) (8.55)
passing through the point (t₀, η) depends not only on t, but also on the
initial point (t₀, η). When we wish to emphasize this dependence, we write
the solution as φ(t, t₀, η). We will show that under suitable hypotheses φ
depends continuously on the initial values, and in fact that φ is a continuous
function of the “triple” (t, t₀, η) (actually of (n + 2) variables). As in the
previous sections of this chapter, we make no attempt to prove the most
refined result of this type.
Theorem 1. Suppose f and ∂f/∂yⱼ (j = 1, …, n) are continuous and bounded
in a given region D, with
|f(t, y)| ≤ M,
|∂f(t, y)/∂yⱼ| ≤ K  (j = 1, …, n). (8.56)
Let φ(t) be the solution of the system (8.55) passing through the point (t₀, η)
and let ψ(t) be the solution of the system (8.55) passing through the point
(τ₀, η̄). Suppose φ(t) and ψ(t) both exist on some interval α < t < β. Then to
Since ψ(t) is the solution of (8.55) through the point (τ₀, η̄), we have, for
every t, α < t < β,
ψ(t) = η̄ + ∫_{τ₀}^t f(s, ψ(s)) ds.
Subtracting this from the corresponding integral equation for φ(t) and
estimating each term, we find
|φ(t) − ψ(t)| ≤ |η − η̄| + M|t₀ − τ₀| + K ∫_{t₀}^t |φ(s) − ψ(s)| ds. (8.61)
The Gronwall inequality (Lemma 1, Section 8.2) gives |(t)—w(d)|
8.7 Dependence on Initial Conditions 355
if |t − τ₀| < δ, we have
Theorem 2. Let f(t, y) and g(t, y) be defined in a region D and satisfy the
hypotheses of Theorem 1. Let φ(t) be the solution of y′ = f(t, y), y(t₀) = η, and
ψ(t) the solution of y′ = g(t, y), y(t₀) = η̄, existing on a common interval
α < t < β. Suppose |f(t, y) − g(t, y)| ≤ ε for (t, y) in D. Then the solutions
φ(t), ψ(t) satisfy the estimate
|φ(t) − ψ(t)| ≤ |η − η̄| exp(K|t − t₀|) + ε(β − α) exp(K|t − t₀|),  α < t < β.
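The estimate of Theorem 2 can be checked directly in a case where both solutions are known in closed form. A sketch (our illustration, with f(t, y) = y, g(t, y) = y + ε, Lipschitz constant K = 1, and both solutions starting from η = η̄ = 1 at t₀ = 0 on 0 ≤ t ≤ 1):

```python
import math

# f(t, y) = y and g(t, y) = y + eps satisfy |f - g| <= eps, and K = 1 is
# a Lipschitz constant for both.  With eta = eta_bar = 1 and t0 = 0:
#   phi(t) = e^t                 solves y' = y,        y(0) = 1
#   psi(t) = (1 + eps)e^t - eps  solves y' = y + eps,  y(0) = 1
eps, alpha, beta = 1e-3, 0.0, 1.0

for i in range(11):
    t = alpha + (beta - alpha) * i / 10
    gap = abs(math.exp(t) - ((1 + eps) * math.exp(t) - eps))
    # Theorem 2 bound: |eta - eta_bar| e^{Kt} + eps (beta - alpha) e^{Kt}
    bound = 0.0 + eps * (beta - alpha) * math.exp(t)
    assert gap <= bound
```

Here the actual gap is ε(eᵗ − 1), comfortably inside the theorem's bound ε eᵗ.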
Exercise
1. Prove Theorem 2. [Hint: Write the integral equations satisfied by φ(t) and
ψ(t), and subtract to obtain
Then take norms, use the hypotheses, and apply the Gronwall inequality to
obtain the result.]
Exercises
2. Let f_k(t, y) be a sequence of vector functions converging to f(t, y) in the sense that
|f(t, y) − f_k(t, y)| ≤ ε_k for (t, y) in D, with ε_k → 0 as k → ∞, and let f and f_k (k = 1, 2, …)
satisfy the hypotheses of Theorem 2. Let η_k be a sequence of constant vectors
converging to η. Let ψ_k(t) be the solution of y′ = f_k(t, y), y(t₀) = η_k (k = 1, 2, …),
existing on α < t < β, and let φ(t) be the solution of y′ = f(t, y), y(t₀) = η existing
on α < t < β. Show that lim_{k→∞} ψ_k(t) = φ(t) for α < t < β.
3. Let f, ∂f/∂yⱼ (j = 1, …, n) be continuous in a region D. Let φ(t, t₀, η) be the
solution of y′ = f(t, y) for which φ(t₀, t₀, η) = η, and suppose that φ is differentiable
with respect to each of its (n + 2) variables.
a) Show that ∂φ/∂ηⱼ (t, t₀, η) is the solution of the linear system w′ = f_y(t, φ(t)) w
for which
∂φ/∂ηⱼ (t₀, t₀, η) = eⱼ,
the unit vector with jth component 1 and other components 0 (j = 1, …, n).
(Here f_y denotes the matrix (∂f_i/∂y_j).)
b) Show that ∂φ/∂t₀ (t, t₀, η) is the solution of w′ = f_y(t, φ(t)) w for which
∂φ/∂t₀ (t₀, t₀, η) = −f(t₀, η).
c) Show that
Numerical Methods of Solution

y′ + 2ty = sin t
It is not possible to evaluate the integral on the left side of this equation
in terms of elementary functions. A numerical approximation for the
integral is impractical, since the upper limit of integration is variable.
Even if a numerical approximation could be obtained, we would still have
the problem of solving for the implicitly defined function φ. Also, the
integral is improper, which adds to the difficulties.
In fact, in practical problems, the most common situation is that no
usable expression for the solution can be found at all, even though it can
be shown that there is a unique solution. When we meet such difficulties
we must often resort to the use of numerical approximations from the
start. In this section we shall develop one approximation method; later in
this chapter we shall indicate some of its refinements, which are frequently
used on electronic computers.
Let us consider the solution of the first-order differential equation
y′ = f(t, y) (9.1)
through the point (t₀, y₀). We assume that f satisfies the assumptions of
the existence and uniqueness theorem (Theorem 1, Section 1.6) in some rectangle
R = {(t, y) | |t − t₀| ≤ a, |y − y₀| ≤ b}. Then the theorem assures us that
there is a unique solution φ of this problem existing on some interval
|t − t₀| ≤ α, where α ≤ a. We wish to find a numerical approximation for
the number φ(t₀ + T), where T is specified and |T| ≤ α (see Fig. 9.1). We
shall do this by a construction which makes use of the geometric interpretation
of the solution suggested in Section 1.5. To be specific, we suppose
that t₀ + T is to the right of t₀, 0 < T ≤ α, and divide the interval [t₀, t₀ + T]
into n subintervals by specifying intermediate points t₀ < t₁ < t₂ < ⋯ < tₙ =
t₀ + T. In practice, these points are usually equally spaced, but this is not
necessary. Now start at (t₀, y₀). We know that the curve y = φ(t) passes
through the point (t₀, y₀) and that its derivative at (t₀, y₀) is f(t₀, y₀). Since
we do not know φ(t), we cannot follow it to t₁. Instead, we pretend that the
solution is a straight line L₀ with slope f(t₀, y₀) and we follow this line to t₁.
The premise, of course, is that, if t₁ is close enough to t₀, the error made by
following the straight line segment instead of the solution (called the truncation
error) is not too large. We know from analytic geometry that the
equation of the straight line L₀ through (t₀, y₀) with the slope m₀ = f(t₀, y₀)
is y = y₀ + (t − t₀) f(t₀, y₀). We compute y₁, an approximation for φ(t₁),
by substituting t = t₁ in the equation of L₀. This gives
y₁ = y₀ + (t₁ − t₀) f(t₀, y₀).
Figure 9.1
We now pretend that the solution y = φ(t) passes through (t₁, y₁). If it
did, its tangent at (t₁, y₁) would have slope m₁ = f(t₁, y₁). Let L₁ be the
straight line through (t₁, y₁) with slope m₁. We know that the equation
of L₁ is y = y₁ + (t − t₁) f(t₁, y₁). We now proceed along the straight line L₁
and compute our approximation y₂ to φ(t₂) by substituting t = t₂ in the
equation of L₁. This gives
y₂ = y₁ + (t₂ − t₁) f(t₁, y₁).
In general, of course, y₁ is not likely to be φ(t₁), and φ′(t₁) is not necessarily
m₁. Thus we are retaining the error made in the first stage and possibly
compounding it by using the wrong slope. For the moment, let us not
worry about this error. We can continue the process until, at the nth stage,
we obtain an approximation yₙ for φ(t₀ + T).
This construction, which is called an iterative procedure, can also be
expressed analytically as follows. Having computed the approximations
yᵢ (i = 1, 2, …, k), we compute y_{k+1} from the formula
y_{k+1} = y_k + (t_{k+1} − t_k) f(t_k, y_k).
Example 1. Let φ be the solution of y′ = y passing through (0, 1). Compute φ(1)
using equally spaced mesh points with h = 0.1.
Here f(t, y) = y and the iterative formula for y_{k+1} becomes
y_{k+1} = y_k + h y_k = (1 + h) y_k = (1.1) y_k  (k = 0, 1, …, 9).
k  t_k  y_k  y_{k+1} = 1.1 y_k
The exact solution is given by φ(t) = eᵗ and thus φ(1) = e. We have obtained the
approximation 2.593 for e. Note that we have actually calculated (1 + 1/n)ⁿ with n = 10.
Exercise
1. Use the same procedure with h=0.05 to see how much better the approximation
becomes.
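The iteration of Example 1 is easy to mechanize. A sketch (ours) of the Euler method, applied to y′ = y, φ(0) = 1 with h = 0.1 as in the example:

```python
def euler(f, t0, y0, h, n):
    """Euler's method: y_{k+1} = y_k + h f(t_k, y_k), carried out n steps."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# Example 1: ten steps of size 0.1 give (1.1)^10 = 2.5937..., the
# approximation 2.593 for e quoted in the text.
approx_e = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

Replacing h = 0.1 and n = 10 by h = 0.05 and n = 20 carries out Exercise 1.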
accumulate from step to step. It can be shown that we can decrease the
round-off error at each stage by keeping more decimal places.
There is another error in the Euler method, called the truncation error,
caused by the use of straight lines to approximate the solution curves.
We must be careful to distinguish between the local truncation error, which
is the error that would be introduced in going from the value y_k at t_k to
the value y_{k+1} at t_{k+1} if y_k were exact, and the cumulative truncation error,
which is the actual error in the value of φ(t₀ + T) caused by the approximation.
You should note that the cumulative truncation error is not simply
the sum of all the local truncation errors. Since there may be an error already
present in y_k, there may be an additional error in y_{k+1} caused by using the
wrong slope at t_k, as well as the wrong value of the solution.
As an example, let us calculate the local truncation error of the Euler
method. The differential equation
y′ = f(t, y)
has φ(t) as its exact solution. This means that
φ(t_{k+1}) = φ(t_k) + ∫_{t_k}^{t_{k+1}} f(s, φ(s)) ds, (9.4)
while the Euler approximation is
y_{k+1} = y_k + (t_{k+1} − t_k) f(t_k, y_k). (9.5)
This formula is usually applied with all the subdivisions of equal length h,
so that t_{k+1} − t_k = h. It then becomes
y_{k+1} = y_k + h f(t_k, y_k). (9.6)
In this formula, y_k is the approximate value at t_k. The local truncation
error T_k is defined as |φ(t_{k+1}) − y_{k+1}| under the assumption that φ(t_k) = y_k,
that is, that the approximate value at t_k is exact. By subtracting (9.5) from
(9.4) we see that
T_k = |∫_{t_k}^{t_{k+1}} [F(t) − F(t_k)] dt|,
where F(t) = f(t, φ(t)). We can use the mean-value theorem to get an upper
bound for the error. We have
F(t) − F(t_k) = (t − t_k) F′(s_k),
where s_k is some point between t_k and t. From this, we obtain
T_k = |∫_{t_k}^{t_{k+1}} [F(t) − F(t_k)] dt| = |∫_{t_k}^{t_{k+1}} (t − t_k) F′(s_k) dt|.
Here, by the chain rule,
F′(t) = (∂f/∂t)(t, φ(t)) + (∂f/∂y)(t, φ(t)) φ′(t) = (∂f/∂t)(t, φ(t)) + (∂f/∂y)(t, φ(t)) f(t, φ(t));
if f, ∂f/∂t, and ∂f/∂y are bounded on the rectangle R, a bound M for |F′| can
be calculated explicitly from the expression
M ≤ max |(∂f/∂t)(t, y)| + max |(∂f/∂y)(t, y)| max |f(t, y)|,
where the maxima are all taken over the rectangle R. Note that this
expression does not depend on the particular (unknown) solution φ. Then
we have
T_k ≤ M ∫_{t_k}^{t_{k+1}} (t − t_k) dt = (M/2)(t_{k+1} − t_k)². (9.7)
In the standard case of equal subdivisions, t_{k+1} − t_k = h, and we have
shown that the local truncation error of Euler's method is at most ½Mh².
If the cumulative truncation error were the sum of all the local truncation
errors, then, since there are N = T/h steps, this error would be at most
½Mh² · N = ½TMh.
9.1 The Euler Method 363
We will show (later in this section) that, although this reasoning is invalid,
the cumulative truncation error is in fact no greater than a constant
multiplied by h. This shows that we can reduce the cumulative truncation
error by making the value of h smaller (that is, by increasing the number
of subdivisions).
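That the cumulative truncation error is proportional to h can be observed numerically: halving h should roughly halve the error. A sketch (ours), again for y′ = y, φ(1) = e:

```python
import math

def euler(f, t0, y0, h, n):
    # Euler's method: y_{k+1} = y_k + h f(t_k, y_k)
    t, y = t0, y0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

err_h = abs(math.e - euler(lambda t, y: y, 0.0, 1.0, 0.1, 10))
err_half = abs(math.e - euler(lambda t, y: y, 0.0, 1.0, 0.05, 20))
ratio = err_h / err_half   # close to 2 when the cumulative error is O(h)
```

The ratio comes out near 1.9 rather than exactly 2 because of higher-order terms in the error.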
The bound obtained here for the truncation error is usually far larger
than the actual error which occurs when the method is applied to a
specific problem. This is a common phenomenon in numerical analysis.
The method we have used to estimate the truncation error suggests an
obvious refinement—use a more sophisticated approximation for the
integral, such as the trapezoidal rule or Simpson’s rule. In fact, the use
of Simpson's rule leads to an approximation method known as Milne's
method (see, for example, Section 9.3) which is widely used for computation.
It has a cumulative truncation error no greater than a constant multiplied
by h⁴.
The fact that the truncation error involves a positive power of h implies
that for small values of h, we obtain a high degree of accuracy. Roughly
speaking, the higher the power of h, the more accurate the method. It
would be possible to devise very complicated approximate procedures
with truncation errors no greater than a constant times a very high power
of h, but it is more effective and faster on a computer to use a relatively
simple formula and a very small value of h.
The method we have discussed can be adapted to second-order differ-
ential equations, but the adaptation involves some complications that we
shall not discuss here.
Exercises
2. Use the Euler method to estimate φ(1), where φ is the solution of y′ = 1 + y passing
through (0, 0). Use equal spacing with h = t_{k+1} − t_k = 0.1. Also calculate φ(1)
exactly by solving the equation, and compare the results.
3. Repeat Exercise 2 for the differential equation y′ + 2ty = sin t with initial condition
φ(0) = 0. Note that you will need to use tables to find sin 0.1, sin 0.2, etc., and that
you will not be able to compare your answer with the exact solution of the
equation.
4. Use the Euler method to estimate the value of φ(1) where φ is the solution of
dy/dt = 2ty + 1 − 2t²
passing through (0, 0). Use equal spacing with h = 0.1.
decimal places or to a certain number of significant figures, and are thus really
approximations to the values given by the approximation method. The
errors so introduced may accumulate as we proceed from one stage to the
next. The analysis of round-off errors, being largely statistical in nature, is
quite difficult. We shall not discuss it further at the moment, except to suggest
that an approximation method, in order to be useful, ought to have the
property that a smaller round-off error at each stage produces a smaller
cumulative round-off error. Since we can reduce the round-off error at each
stage by keeping more decimal places in the calculations, this property
would give some control over the cumulative round-off error. It will turn
out that we can check rather easily whether this property, called stability of
the numerical method, holds for any given numerical method or not.
The other type of error is the truncation error, caused by the approximation
in the method itself. For the Euler method, this arises in the use of
(t_{k+1} − t_k) f(t_k, φ(t_k)) as an approximation to the integral ∫_{t_k}^{t_{k+1}} f(s, φ(s)) ds.
We have defined the local truncation error to be the error introduced in
going from the value y_k at t_k to the value y_{k+1} at t_{k+1}, assuming the value
y_k to be exact rather than an approximation. We have shown that the local
truncation error of the Euler method (9.6) is no greater than ½Mh². The
constant M can be obtained from bounds for the function f and its
first-order partial derivatives in the region under consideration.
We have suggested that the cumulative truncation error of the Euler
method is not greater than a constant multiplied by h. The cumulative
truncation error is defined to be the actual deviation of the approximation y_N
from the true value φ(t₀ + T) = φ(t_N). (This tacitly assumes that all numbers
can be computed without round-off error.) Let us now estimate this
quantity. We define
E_k = |φ(t_k) − y_k|,  k = 0, 1, …, N,
to be the cumulative truncation error at the kth stage.
Since
φ(t_{k+1}) = φ(t_k) + ∫_{t_k}^{t_{k+1}} f(s, φ(s)) ds,
and by (9.6)
y_{k+1} = y_k + h f(t_k, y_k),
we obtain
E_{k+1} ≤ E_k + |∫_{t_k}^{t_{k+1}} [f(s, φ(s)) − f(t_k, φ(t_k))] ds| + h |f(t_k, φ(t_k)) − f(t_k, y_k)|, (9.8)
and we have shown that the integral term is no greater than ½Mh². Since we have
assumed that f has a continuous partial derivative with respect to y, the
mean-value theorem for derivatives shows that there exists a constant L
such that
|f(t_k, φ(t_k)) − f(t_k, y_k)| ≤ L|φ(t_k) − y_k| = L E_k
(see Section 8.1 for a similar calculation). When we use these estimates
in (9.8), we see that
E_{k+1} ≤ E_k + ½Mh² + hL E_k = E_k(1 + hL) + ½Mh²,  k = 0, …, N. (9.9)
It is now easy to use induction to estimate E_k.
Exercise
5. Show that the inequality (9.9) implies
E_k ≤ [((1 + hL)ᵏ − 1)/(hL)] · ½Mh²,  k = 0, 1, …, N. (9.10)
(Note that E₀ = 0.)
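The induction requested in Exercise 5 can be spot-checked with exact rational arithmetic: iterating the recurrence E_{k+1} = E_k(1 + hL) + ½Mh² from E₀ = 0 reproduces the closed form in (9.10). A sketch (ours; the sample values of h, L, M are arbitrary):

```python
from fractions import Fraction

h, L, M = Fraction(1, 10), Fraction(2), Fraction(3)
c = Fraction(1, 2) * M * h**2          # the term (1/2) M h^2
E = Fraction(0)                        # E_0 = 0
for k in range(1, 21):
    E = E * (1 + h * L) + c            # recurrence (9.9), taken with equality
    closed = ((1 + h * L)**k - 1) / (h * L) * c   # formula (9.10)
    assert E == closed
```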
E_N ≤ M′h, where M′ = M(e^{LT} − 1)/2L,
which completes the proof of Theorem 1. ∎
t_k + h → t_k means that the next step in the iterative procedure is started
by replacement of t_k by t_k + h (that is, t_{k+1} = t_k + h). The question
(t_k > t₀ + T?) is included in the flow diagram to show the process by which
the computer decides when to stop iterating the procedure.
Read t₀, y₀;  compute y_{k+1} = y_k + h f(t_k, y_k);  record t_k, y_k;  t_k + h → t_k;  is t_k > t₀ + T?  If no, repeat.
Figure 9.2
Exercises
6. Use the Euler method, with h = 0.1, to approximate the value for t = 1 of the
solution of the differential equation y′ = t + y passing through the origin.
7. Use the Euler method, first with h = 0.2, and then again with h = 0.1, to find an
approximation to φ(1), where φ is the solution of y′ = (1/10)(t² + y²) such that φ(0) = 1.
Can you make any estimate of the accuracy of the answers?
8. The differential equation y′ = 3y^{2/3} has an infinite number of solutions through the
origin, as we have seen in Section 1.6. Suppose we try to use the Euler method to
approximate one of these solutions, and repeat the process with a sequence of step
sizes {hₙ} which decreases to zero. Does the sequence of approximations converge
as n → ∞? If so, to which solution does the sequence converge?
9. Use the Euler method, first with h = 0.2, and then again with h = 0.1, to approximate
φ(1), where φ is the solution of y′ = ty² − y such that φ(0) = 1. How small
must h be chosen for the Euler method to give an approximation which is correct
to two significant figures?
The Euler method, although easy to apply, has too large a truncation
error to be of much use in the actual calculation of numerical approximations.
This relatively large truncation error is due to the use of the crude
approximation h f(t_k, y_k) to the integral ∫_{t_k}^{t_{k+1}} f(s, φ(s)) ds. By using a better
approximation to this integral, we may expect to obtain a numerical method
with a smaller truncation error. One obvious improvement is to approximate
the integral by the length of the interval multiplied by the value of
the integrand at the midpoint of the interval (see Fig. 9.3), rather than by
Figure 9.3
the length of the interval multiplied by the value of the integrand at one end
of the interval. To do this, we integrate over two subintervals rather than
divide each subinterval. Using (9.4), we write
φ(t_{k+2}) = φ(t_k) + ∫_{t_k}^{t_{k+2}} f(s, φ(s)) ds, (9.12)
and we approximate the integral in (9.12) by 2h f(t_{k+1}, φ(t_{k+1})). This leads
to an approximation method given by the iterative formula
y_{k+2} = y_k + 2h f(t_{k+1}, y_{k+1}). (9.13)
This method is called the modified Euler method, and the method of
approximating the integral used in the modified Euler method is called
midpoint quadrature. It is sometimes used in the numerical approximation
of definite integrals. We shall see (Theorem 1 below) that the modified
Euler method has a significantly smaller local truncation error than the
Euler method.
The modified Euler method expresses y_{k+2} in terms of y_k and y_{k+1}.
Since it involves two subintervals, it is called a two-step method. This
introduces a difficulty which does not arise in the Euler method. In order
to begin the approximation procedure by taking k = 0 in (9.13) to calculate
y₂, we need not only the given initial value y₀ but also y₁. However,
9.2 The Modified Euler Method 369
y₁ is not given by the initial conditions and we must use some other approximation
procedure, called a starting method, to calculate y₁.
One method of obtaining a value for y₁, which is frequently used if the
function f is analytic, is to use a power series expansion about t₀ (see
Section 6.1). Power series expansions are not very useful for numerical approximations
because their convergence near the ends of the interval of
convergence may be very slow. However, if t₁ is close to t₀, a power series
expansion may provide an accurate yet easily obtained approximation
for y₁.
A second method of obtaining a value for y₁ is to use a one-step method,
such as the Euler method. However, to improve the accuracy we subdivide
the interval [t₀, t₁] into smaller subintervals. Another approach, probably
the most commonly used one, is to use a Runge-Kutta method. The basic
idea of the Runge-Kutta methods is to obtain as small a truncation error
as possible in an explicit one-step method. This requires subdivision of
the interval [t₀, t₁] and, thus, the Runge-Kutta methods may be regarded
as refinements of the method, suggested above, of subdividing the interval
[t₀, t₁] and using the Euler method. We shall discuss Runge-Kutta methods
in Section 9.5; for the moment we wish only to point out the need for starting
methods and to suggest some possibilities.
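For concreteness, here is a sketch (ours) of the classical fourth-order Runge-Kutta step often used as a starting method; the particular formulas treated in Section 9.5 may differ, so this is only illustrative:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Starting value y1 for y' = y, phi(0) = 1, h = 0.1; for this equation the
# step reproduces the Taylor polynomial 1 + h + h^2/2 + h^3/6 + h^4/24.
y1 = rk4_step(lambda t, y: y, 0.0, 1.0, 0.1)
```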
Example 1. Let us estimate e by using the modified Euler method with h = 0.1 to
approximate φ(1) = e, where φ is the solution of y′ = y such that φ(0) = 1. We begin by
using a power series expansion to estimate φ(0.1) = 1 + 0.1 + ½(0.1)² + ⋯. This gives the
value y₁ = 1.105, correct to three decimal places. Now we can use the iterative formula
(9.13). Using h = 0.1, f(t, y) = y in (9.13), we obtain
y_{k+2} = y_k + 0.2 y_{k+1}.
It is convenient to tabulate the calculations as follows.
t_k  y_k  y_{k+1}  y_k + 0.2 y_{k+1} = y_{k+2}
We obtain the approximation 2.713 for e, which is considerably better than the approximation
2.593 obtained in Section 9.1 by the Euler method with the same number
of mesh points.
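The computation in Example 1 can be reproduced as follows (a sketch, ours; the starting value y₁ = 1.105 comes from the series expansion as in the text):

```python
def modified_euler(f, t0, y0, y1, h, n):
    """Two-step midpoint method (9.13):
    y_{k+2} = y_k + 2h f(t_{k+1}, y_{k+1}); y1 comes from a starting method."""
    ys = [y0, y1]
    for k in range(n - 1):
        ys.append(ys[k] + 2 * h * f(t0 + (k + 1) * h, ys[k + 1]))
    return ys[-1]

# y' = y, phi(0) = 1, h = 0.1, starting value y1 = 1.105:
approx_e = modified_euler(lambda t, y: y, 0.0, 1.0, 1.105, 0.1, 10)
```

Carrying all digits gives about 2.7138; the 2.713 of the text reflects rounding to three decimal places at each stage.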
F(s) = F(t_{k+1}) + (s − t_{k+1}) F′(t_{k+1}) + ((s − t_{k+1})²/2) F″(ξ).
However,
∫_{t_k}^{t_{k+2}} (s − t_{k+1}) ds = 0,
so that the error reduces to
|∫_{t_k}^{t_{k+2}} ((s − t_{k+1})²/2) F″(ξ) ds|.
Suppose that M = max_{t₀ ≤ t ≤ t₀+T} |F″(t)|. There is such a constant M since the
assumptions of the theorem imply that F has a continuous second derivative.
Then
|∫_{t_k}^{t_{k+2}} ((s − t_{k+1})²/2) F″(ξ) ds| ≤ M ∫_{t_k}^{t_{k+2}} ((s − t_{k+1})²/2) ds = Mh³/3.
Exercises
1. Use the modified Euler method, with h = 0.2, to approximate the value for t = 1 of
the solution of the differential equation y′ = t + y passing through the origin.
Repeat the problem with h = 0.1, and compare the two results with the result
obtained in Exercise 2, Section 9.1.
2. Use the modified Euler method, with h = 0.1, to find an approximation to φ(1),
where φ is the solution of y′ = (1/10)(t² + y²) such that φ(0) = 1. Compare the result
with the result obtained in Exercise 7, Section 9.1.
3. Use the modified Euler method, with h = 0.1, to approximate the value φ(0.5) of the
solution φ of y′ = y − t such that φ(0) = 0. Compare your answer with the one obtained
by the Euler method with h = 0.1 and the one obtained by explicit solution
of the differential equation.
4. Draw a flow diagram (analogous to Fig. 9.2, for the Euler method) for the modified
Euler method.
which is the area of the trapezoid bounded by the line segment joining
[t_k, f(t_k, φ(t_k))] to [t_{k+1}, f(t_{k+1}, φ(t_{k+1}))] and the three lines y = 0, t = t_k,
Figure 9.4
t = t_{k+1}; see Fig. 9.4. This approximation leads to the iterative formula
y_{k+1} = y_k + (h/2)[f(t_k, y_k) + f(t_{k+1}, y_{k+1})]. (9.16)
This method, called the improved Euler method, gives y_{k+1} implicitly
rather than explicitly. There are methods, as we shall soon indicate, for
dealing with implicit formulas. For some problems the improved Euler
method is useful.
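One simple way of dealing with the implicit formula (9.16) is fixed-point iteration at each step, starting from an Euler predictor. A sketch (ours; the number of sweeps is an arbitrary choice):

```python
def improved_euler(f, t0, y0, h, n, sweeps=5):
    """Improved Euler method (9.16):
    y_{k+1} = y_k + (h/2)[f(t_k, y_k) + f(t_{k+1}, y_{k+1})],
    solved at each step by a few fixed-point sweeps."""
    t, y = t0, y0
    for _ in range(n):
        y_next = y + h * f(t, y)                       # Euler predictor
        for _ in range(sweeps):                        # corrector sweeps
            y_next = y + (h / 2) * (f(t, y) + f(t + h, y_next))
        t, y = t + h, y_next
    return y

# y' = y, phi(0) = 1, h = 0.1: ten steps give a value close to e.
approx_e = improved_euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

The fixed-point sweeps converge because each sweep contracts the error by roughly a factor h/2 when f is Lipschitz in y.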
(h/3)[f(t_k, φ(t_k)) + 4f(t_{k+1}, φ(t_{k+1})) + f(t_{k+2}, φ(t_{k+2}))] (9.17)
9.3 The Milne Method 373
Figure 9.5
points (t_k, F(t_k)), (t_{k+1}, F(t_{k+1})), (t_{k+2}, F(t_{k+2})); see Fig. 9.5. Since t_{k+2}
− t_{k+1} = t_{k+1} − t_k = h, the conditions that these points lie on the parabola are
F(t_k) = a − bh + ch²,
F(t_{k+1}) = a, (9.19)
F(t_{k+2}) = a + bh + ch².
We can solve the equations (9.19) for the three constants a, b, c and
then use
∫_{t_k}^{t_{k+2}} [a + b(s − t_{k+1}) + c(s − t_{k+1})²] ds = 2ah + (2/3)ch³. (9.20)
29 y_{k+2} = 31 y_k + 4 y_{k+1}.
t_k  y_k  y_{k+1}  (31 y_k + 4 y_{k+1})/29 = y_{k+2}
Although the Milne method is implicit (that is, the iterative formula (9.18)
which defines the method expresses the approximation y_{k+2} which is being
calculated implicitly in terms of y_k and y_{k+1}), this causes no difficulty here because
the differential equation y′ = y is linear, and this makes it possible to solve for
y_{k+2} explicitly. However, the use of the Milne method for nonlinear differential
equations requires some means of dealing with an implicit iterative
formula.
The usual technique is to solve (9.18) by another iterative procedure at
each step. One uses some other method to obtain a first approximation
ŷ_{k+2} to φ(t_{k+2}) and then calculates a second approximation y_{k+2} from the
formula
y_{k+2} = y_k + (h/3)[f(t_k, y_k) + 4f(t_{k+1}, y_{k+1}) + f(t_{k+2}, ŷ_{k+2})]. (9.21)
For more accuracy we may iterate this procedure, substituting y_{k+2} in the
right-hand side of (9.21) and using (9.21) to calculate a third approximation,
and continuing in the same way. It is possible to prove that if f is continuous
and has continuous first-order partial derivatives with respect to t and y,
then the approximations obtained in this way converge to a solution of the
difference equation (9.18) for h sufficiently small. However, it is usually
impractical to carry out this iteration procedure at each stage of the
approximation method. Normally, the most efficient procedure is to calculate
ŷ_{k+2}, called the predictor, by some explicit method, such as the Euler method
or the modified Euler method, and then use (9.21), called a predictor-corrector
formula, to calculate the corrector y_{k+2}. This value y_{k+2} is the one used in
continuing to the next stage in the Milne method.
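The predictor-corrector scheme just described can be sketched as follows (ours; the modified Euler method serves as predictor, the corrector (9.21) is applied once, and an accurate starting value y₁ is assumed given):

```python
import math

def milne(f, t0, y0, y1, h, n):
    """Milne's method in predictor-corrector form:
    predictor  y^_{k+2} = y_k + 2h f(t_{k+1}, y_{k+1})  (modified Euler),
    corrector (9.21):
    y_{k+2} = y_k + (h/3)[f(t_k,y_k) + 4 f(t_{k+1},y_{k+1}) + f(t_{k+2},y^_{k+2})]."""
    ys = [y0, y1]
    for k in range(n - 1):
        t_k, t_k1, t_k2 = t0 + k * h, t0 + (k + 1) * h, t0 + (k + 2) * h
        pred = ys[k] + 2 * h * f(t_k1, ys[k + 1])            # predictor
        corr = ys[k] + (h / 3) * (f(t_k, ys[k])
                                  + 4 * f(t_k1, ys[k + 1])
                                  + f(t_k2, pred))           # corrector (9.21)
        ys.append(corr)
    return ys[-1]

# y' = y, phi(0) = 1, h = 0.1, with y1 = e^{0.1} as starting value:
approx_e = milne(lambda t, y: y, 0.0, 1.0, math.exp(0.1), 0.1, 10)
```

For this problem the result agrees with e to several decimal places, markedly better than either the Euler or the modified Euler method with the same h.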
Exercises
1. Use the Milne method with h = 0.2 to approximate the value for t = 1 of the solution
of the differential equation y′ = t + y passing through the origin. Repeat the problem
with h = 0.1, and compare the two results with those obtained in Exercise 1, Section
9.2.
2. Use the Milne method, with h = 0.1, to find an approximation to φ(1), where φ is
the solution of y′ = (1/10)(t² + y²) such that φ(0) = 1. Compare the result with that
obtained in Exercise 7, Section 9.1.
3. Draw a flow diagram (analogous to Fig. 9.2, Section 9.1, for the Euler method)
for the Milne method.
4. Use the Milne method with h = 0.2 to approximate φ(1), where φ is the solution
of y′ = y − t such that φ(0) = 1. Compare your answer with that obtained by
a) using the Euler method with h = 0.05,
b) using the modified Euler method with h = 0.1,
c) finding the exact solution by the methods of Chapter 1.
5. Find an approximation to φ(0.5), where φ is the solution of y′ = y + y² such that
φ(0) = 1, by using the Milne method with h = 0.1, and with the modified Euler
method as predictor.
6. What value of h is needed to obtain an approximation correct to two significant
figures, using each of the Euler, modified Euler, and Milne methods, for φ(1),
where φ(t) = e^{−t²} is the solution of y′ + 2ty = 0 such that φ(0) = 1?
The estimate of the local truncation error for the Milne method is considerably
more difficult than the error estimates we have obtained previously.
Obviously, the local truncation error depends on the accuracy of the predictor
used. The simplest way to estimate this error is to split it into two parts. First,
we estimate the local truncation error in the formula (9.21) under the assumption
that the predictor ŷ_{k+2} is exact. In addition, we assume as before
that y_k and y_{k+1} are exact in our estimate. Then we calculate the additional
error introduced in (9.21) by the error in the predictor. This, of course, will
depend on the method used to obtain the predictor.
Let φ be the solution of the equation y′ = f(t, y) which satisfies the initial
condition φ(t₀) = y₀. Then, by (9.4),
T_k = |∫_{t_k}^{t_{k+2}} f(s, φ(s)) ds − (h/3)[f(t_k, y_k) + 4f(t_{k+1}, y_{k+1}) + f(t_{k+2}, ŷ_{k+2})]|. (9.23)
For convenience, we set f(t, φ(t)) = F(t), as before, and we split T_k into two
parts U_k and V_k. The term U_k is calculated under the assumption that
ŷ_{k+2} is exact, that is, ŷ_{k+2} = φ(t_{k+2}). Thus,
U_k = |∫_{t_k}^{t_{k+2}} F(s) ds − (h/3)[F(t_k) + 4F(t_{k+1}) + F(t_{k+2})]|. (9.24)
Exercise
Let us estimate U_k; this is simply the truncation error in Simpson's rule.
In doing this, we must assume that f(t, y) has continuous partial derivatives
of all orders up to the fourth. We define G(t) = ∫_{t_k}^t F(s) ds on [t_k, t_{k+2}], so
that G′(t) = F(t), and then we define the new function P on [0, h] by
P(τ) = G(t_{k+1} + τ) − G(t_{k+1} − τ) − (τ/3)[F(t_{k+1} − τ) + 4F(t_{k+1}) + F(t_{k+1} + τ)].
A computation gives
P‴(τ) = −(τ/3)[F‴(t_{k+1} + τ) − F‴(t_{k+1} − τ)], (9.28)
while
P″(τ) = ∫₀^τ P‴(σ) dσ;  P′(τ) = ∫₀^τ P″(σ) dσ;  P(τ) = ∫₀^τ P′(σ) dσ. (9.29)
With M a bound for the fourth derivative of F, we have
−M ≤ F⁗(ξ) ≤ M. (9.30)
From (9.28) and (9.30), we obtain
−(2/3)τ²M ≤ P‴(τ) ≤ (2/3)τ²M.
Integrating and using (9.29), we obtain
−(2/9)τ³M ≤ P″(τ) ≤ (2/9)τ³M.
Repeating the integration twice more, we obtain
−(1/18)τ⁴M ≤ P′(τ) ≤ (1/18)τ⁴M,
−(1/90)τ⁵M ≤ P(τ) ≤ (1/90)τ⁵M.
The final step in the calculation of U_k is the use of U_k = |P(h)| to obtain
U_k ≤ (1/90)Mh⁵. (9.31)
This quantity is called the local truncation error of the Milne method, rather than T_k, even though T_k is the quantity which actually interests us.
To estimate V_k, we will only need to use the fact that f(t, y) is continuous and has continuous first-order partial derivatives with respect to t and y for t₀ ≤ t ≤ t₀ + T and all y. This implies that there exists a constant L such that

$$V_k \le \frac{Lh}{3}\,\left|\phi(t_{k+2}) - \hat{y}_{k+2}\right|.$$

Then, if the local truncation error of the predictor is at most Ch^p for some constant C,

$$T_k \le U_k + V_k \le \frac{M}{90}\,h^5 + \frac{LC}{3}\,h^{p+1}. \qquad (9.34)$$
If p is less than 4, the second term on the right side of (9.34) dominates for small h, while if p is greater than or equal to 4, the first term on the right side dominates. If we define q to be the smaller of the integers 5 and p + 1, then we can find a constant B such that

$$T_k \le \frac{M}{90}\,h^5 + \frac{LC}{3}\,h^{p+1} \le B h^q.$$
Thus, φ₀ is the solution φ which satisfies φ(t₀) = y₀.

Figure 9.6

$$|\phi_k(t_N) - \phi_{k-1}(t_N)| \le \varepsilon + \int_{t_k}^{t_N} \left|f(s, \phi_k(s)) - f(s, \phi_{k-1}(s))\right| ds \le \varepsilon + L \int_{t_k}^{t_N} \left|\phi_k(s) - \phi_{k-1}(s)\right| ds,$$

and Gronwall's inequality then gives

$$|\phi_k(t_N) - \phi_{k-1}(t_N)| \le \varepsilon \exp L(t_N - t_k) = \varepsilon \exp L(N - k)h \qquad (k = 1, \ldots, N).$$
We substitute this into (9.36) and find

$$E_N \le \varepsilon \sum_{k=1}^{N} \exp L(N-k)h = \varepsilon \exp(LNh) \sum_{k=1}^{N} \exp(-kLh).$$

The geometric series \(\sum_{k=1}^{N} \exp(-kLh)\), with first term e^{-Lh} and ratio e^{-Lh}, has sum

$$e^{-Lh}\,\frac{1 - e^{-NLh}}{1 - e^{-Lh}}.$$

Thus

$$E_N \le \varepsilon\,\frac{e^{LNh} - 1}{e^{Lh} - 1},$$
and since e^{Lh} − 1 > Lh and Nh = T, we obtain

$$E_N \le \varepsilon\,\frac{e^{LT} - 1}{Lh},$$

or

$$E_N \le \frac{M\varepsilon}{h},$$

where M = (e^{LT} − 1)/L. ∎
Theorem 1 shows that if the local truncation error of a one-step method is no greater than a constant multiple of h^p, then the cumulative truncation error is no greater than a constant multiple of h^{p−1}. As an analogous result can be proved for two-step methods (and for methods involving any
9.4 Stability, Consistency, and Convergence 381
finite number of steps), we now see that if f has sufficiently many continuous partial derivatives, the cumulative truncation error of the modified Euler method is no greater than a constant multiple of h², and the cumulative truncation error of the Milne method (with a predictor whose local truncation error is no greater than a constant multiple of h⁴) is no greater than a constant multiple of h⁴.
Obviously, the accuracy of approximation is improved if a method whose truncation error involves a higher power of h is used. On the other hand, to obtain this improvement in accuracy, a more complicated approximation procedure is required. The benefits of improved accuracy may be offset by the computational difficulties. For many problems, the Milne method represents a good compromise between accuracy of the formula and convenience of computation. The reader should remember that the truncation error depends on h, and can always be reduced by making h smaller.
The fact that the Milne method is implicit causes (as we have seen)
some technical problems in its use and in the estimate of its error. There
is another disadvantage, which we shall discuss in the next section. For
some differential equations it is unsuitable because of difficulties arising
from the round-off error, and other methods are more useful. One such
method is the Adams method, which is obtained by the use of interpolating
polynomials in approximating \(\int_{t_{k+2}}^{t_{k+3}} f(s, \phi(s))\,ds\) (see [10]). In implicit form this is given by

$$y_{k+3} = y_{k+2} + \frac{h}{24}\left[9f(t_{k+3}, y_{k+3}) + 19f(t_{k+2}, y_{k+2}) - 5f(t_{k+1}, y_{k+1}) + f(t_k, y_k)\right]. \qquad (9.37)$$
different set of numbers z_k (k = 1, …, N). The difference r_k = |z_k − y_k| is called the round-off error. If we retain more decimal places or significant figures, we obtain another set of numbers ẑ_k (k = 1, …, N) with round-off error r̂_k = |ẑ_k − y_k|. If |z_N − ẑ_N| is small whenever all |r_k − r̂_k| (k = 1, …, N) are small, that is, if a small change in the round-off error at each stage produces only a small change in the final result, then the approximation method is said to be computationally stable. The property of stability depends on the approximation method and has nothing to do with the suitability of the method for a particular problem. For example, the formula y₀ = 1, y_{k+1} = y_k/3 is a ridiculous attempt to approximate the solution φ of y' = 0 such that φ(0) = 1, but it is stable in the sense just defined.
A numerical method is said to be consistent with a differential equation
if the solution of the differential equation satisfies the approximation
scheme except for terms which tend to zero as h-0. A numerical method
is said to be convergent if the approximations tend to the actual solution
as h>0. You will note that all methods presented above are convergent.
Example 1. Consider the Euler method as an approximation for the solution φ of the initial-value problem y' = y, φ(0) = 1.
We wish to show that it is both consistent and convergent. Consider the exact solution φ(t) = e^t. At a mesh point t_n = nh, we have, from the Euler method,

$$y_{n+1} = y_n + h y_n = (1 + h)\,y_n.$$
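For this example, convergence can be seen concretely: with step h = 1/N the Euler approximation to φ(1) is (1 + 1/N)^N, which tends to e as h → 0. A short computational sketch:

```python
import math

def euler_at_1(N):
    # Euler's method for y' = y, y(0) = 1, with step h = 1/N:
    # y_{n+1} = (1 + h) y_n, so after N steps y_N = (1 + 1/N)^N.
    h = 1.0 / N
    y = 1.0
    for _ in range(N):
        y = (1.0 + h) * y
    return y

# The approximations tend to the exact value phi(1) = e as h -> 0.
for N in [10, 100, 1000]:
    print(N, euler_at_1(N), abs(math.e - euler_at_1(N)))
```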
Exercise
1. Show that the modified Euler method for the problem considered in Example 1
above is consistent and convergent.
and consistency of numerical methods which are easy to verify. Then the
following result, for whose proof we refer the reader to more specialized
works such as [8], is of great importance.
Theorem 1. A stable, consistent finite difference approximation method is
convergent.
Of course, as we have seen in the preceding sections, the rate of con-
vergence of the approximations to the actual solution as h->0 depends
on the truncation error, and this must be estimated for each method.
To unify our previous considerations, we consider the difference equation

$$\sum_{j=0}^{p} \alpha_j\, y_{k+p-j} = h \sum_{j=0}^{p} \beta_j\, f(t_{k+p-j},\, y_{k+p-j}) \qquad (9.39)$$

with α₀ ≠ 0, and with α_p and β_p not both zero. For example, in the Euler method (9.6), p = 1; α₀ = 1, α₁ = −1; β₀ = 0, β₁ = 1. The modified Euler method (9.13) is given by (9.39) with p = 2; α₀ = 1, α₁ = 0, α₂ = −1; β₀ = 0, β₁ = 2, β₂ = 0. The Milne method (9.18) is given by (9.39) with p = 2; α₀ = 1, α₁ = 0, α₂ = −1; β₀ = 1/3, β₁ = 4/3, β₂ = 1/3. The implicit Adams method (9.37) is given by (9.39) with p = 3; α₀ = 1, α₁ = −1, α₂ = 0, α₃ = 0; β₀ = 9/24, β₁ = 19/24, β₂ = −5/24, β₃ = 1/24, while the explicit Adams method (9.38) is given by (9.39) with p = 3; α₀ = 1, α₁ = −1, α₂ = 0, α₃ = 0; β₀ = 0, β₁ = 23/12, β₂ = −16/12, β₃ = 5/12. The difference equation (9.39) is said to define a multi-step method. The integer p is called the rank of the method. An explicit method is characterized by the condition β₀ = 0.
It is convenient to define the two characteristic polynomials

$$\rho(\zeta) = \sum_{j=0}^{p} \alpha_j\, \zeta^{p-j}, \qquad \sigma(\zeta) = \sum_{j=0}^{p} \beta_j\, \zeta^{p-j}.$$

In terms of these polynomials we have the following criteria for determining whether a method is stable and consistent.
Theorem 2. The multi-step method (9.39) is stable if and only if all roots of the polynomial equation ρ(ζ) = 0 satisfy |ζ| ≤ 1 and those roots with absolute value 1 are simple roots.

Theorem 3. The multi-step method (9.39) is consistent if and only if ρ(1) = 0 (that is, 1 is a root of ρ(ζ) = 0) and ρ'(1) = σ(1).
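The root condition of Theorem 2 is easy to test numerically. The sketch below uses numpy's polynomial root finder (an outside tool, not part of the text) on the coefficients α₀, …, α_p of ρ; the third example, ρ(ζ) = (ζ − 1)², is a hypothetical method included only to show a failure of the simplicity requirement.

```python
import numpy as np

def is_stable(alpha):
    # Root condition of Theorem 2: every root of rho(zeta) = 0 must
    # satisfy |zeta| <= 1, and roots of modulus 1 must be simple.
    roots = np.roots(alpha)              # alpha lists alpha_0, ..., alpha_p
    if any(abs(r) > 1 + 1e-9 for r in roots):
        return False
    on_circle = [r for r in roots if abs(abs(r) - 1) < 1e-9]
    # Simplicity check: no two unit-modulus roots may coincide.
    for i, r in enumerate(on_circle):
        for q in on_circle[i + 1:]:
            if abs(r - q) < 1e-6:
                return False
    return True

print(is_stable([1, -1]))      # Euler: rho(z) = z - 1
print(is_stable([1, 0, -1]))   # Milne: rho(z) = z^2 - 1, simple roots +1, -1
print(is_stable([1, -2, 1]))   # rho(z) = (z - 1)^2: double root at 1, not stable
```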
Exercises
2. Verify that the methods (9.6), (9.13), (9.18), (9.37), and (9.38) are all both stable
and consistent.
3. Is the method (9.16) obtained by means of the trapezoidal rule stable and con-
sistent?
Consider again the solution φ of the differential equation

$$y' = f(t, y) \qquad (9.1)$$

which satisfies the initial condition

$$\phi(t_0) = y_0.$$

If we assume that φ has a continuous second derivative on the interval [t₀, t₀ + T], then we can use Taylor's theorem to write

$$\phi(t) = \phi(t_0) + (t - t_0)\,\phi'(t_0) + \frac{(t - t_0)^2}{2}\,\phi''(\xi), \qquad (9.40)$$

where t₀ < ξ < t. If t₁ = t₀ + h, this becomes

$$\phi(t_1) = \phi(t_0) + h\,\phi'(t_0) + \frac{h^2}{2}\,\phi''(\xi).$$

Since φ is a solution of (9.1),

$$\phi'(t) = f(t, \phi(t)), \qquad \phi''(t) = f_t(t, \phi(t)) + f_y(t, \phi(t))\,f(t, \phi(t)).$$

Proceeding in the same way as above, we obtain the approximation formula

$$y_{k+1} = y_k + h\,f(t_k, y_k) + \frac{h^2}{2}\left[f_t(t_k, y_k) + f_y(t_k, y_k)\,f(t_k, y_k)\right] \qquad (9.44)$$
with local truncation error (h³/3!) φ'''(ξ). The method (9.44) is a plausible means of obtaining numerical approximations. However, it suffers from the disadvantage that its use requires the calculation of derivatives of f. We could develop analogous procedures using higher-order Taylor approximations, but these would require the calculation of higher-order derivatives of f.
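The formula (9.44) is easily programmed when the partial derivatives of f are known. A sketch, using the test problem y' = y (for which f_t = 0 and f_y = 1; this choice of test problem is for illustration only):

```python
import math

def taylor2(f, ft, fy, t0, y0, h, n):
    # The order-2 Taylor method (9.44):
    # y_{k+1} = y_k + h f + (h^2/2)(f_t + f_y f), all evaluated at (t_k, y_k).
    t, y = t0, y0
    for _ in range(n):
        F = f(t, y)
        y = y + h * F + (h * h / 2.0) * (ft(t, y) + fy(t, y) * F)
        t += h
    return y

# For y' = y the partial derivatives are f_t = 0 and f_y = 1.
approx = taylor2(lambda t, y: y, lambda t, y: 0.0, lambda t, y: 1.0,
                 0.0, 1.0, 0.1, 10)
print(approx)  # an approximation to phi(1) = e
```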
The Runge-Kutta procedure is an attempt to obtain formulas equivalent
to Taylor approximations which do not involve derivatives of f. The most
frequently used Runge-Kutta formula is
$$y_{k+1} = y_k + \frac{h}{6}\,(p_1 + 2p_2 + 2p_3 + p_4) \qquad (9.45)$$

where

$$p_1 = f(t_k, y_k), \qquad p_2 = f\!\left(t_k + \frac{h}{2},\; y_k + \frac{h}{2}\,p_1\right),$$

$$p_3 = f\!\left(t_k + \frac{h}{2},\; y_k + \frac{h}{2}\,p_2\right), \qquad p_4 = f(t_k + h,\; y_k + h\,p_3).$$
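Formula (9.45) translates directly into a short program; the following sketch approximates φ(1) = e for y' = y with h = 0.2 (the comparison problem of Exercise 2 of this section):

```python
import math

def rk4_step(f, t, y, h):
    # One step of the Runge-Kutta formula (9.45).
    p1 = f(t, y)
    p2 = f(t + h / 2, y + (h / 2) * p1)
    p3 = f(t + h / 2, y + (h / 2) * p2)
    p4 = f(t + h, y + h * p3)
    return y + (h / 6) * (p1 + 2 * p2 + 2 * p3 + p4)

# Approximate phi(1) = e for y' = y, phi(0) = 1, with h = 0.2.
t, y, h = 0.0, 1.0, 0.2
for _ in range(5):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y)  # close to e = 2.71828...
```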
We recall that before we can use the Milne method (9.18), we
need not only the given initial value yg but also the value y,. The deter-
mination of y, is usually carried out by a Runge-Kutta method.
The fact that the Runge-Kutta formula (9.45) is rather cumbersome may
be illustrated by the complexity of its flow diagram (Fig. 9.7).
This diagram has even been simplified slightly by the suppression of the
notation a₁ = 1/6, a₂ = 1/3, a₃ = 1/3, a₄ = 1/6, α₁ = 0, α₂ = 1/2, α₃ = 1/2, α₄ = 1. In an actual
calculation, this information would also have to be fed into the computer.
[Flow diagram for the Runge-Kutta formula (9.45), showing the repeated evaluation of f and the accumulation of y_{k+1}.]
Figure 9.7
Exercises
1. Use the Runge-Kutta method (9.50) with h = 0.2 to approximate φ(1) = e, where φ is the solution of y' = y such that φ(0) = 1.
2. Use the Runge-Kutta method (9.45) with h = 0.2 to obtain an approximation to the same value φ(1) = e treated in Exercise 1. Compare the accuracy and labor involved with those of previous attempts to solve the same problem: Example 1, Section 9.1 (Euler method), Example 1, Section 9.2 (modified Euler method), and Example 1, Section 9.3 (Milne method).
3. Use the Runge-Kutta method (9.45) to obtain a starting value φ(0.1) for the solution φ of the equation y' = t + y^{1/2} with each of the following initial conditions:
a) φ(0) = 0
b) φ(0) = 1
c) φ(0) = 10
d) φ(0) = 100
4. Obtain an approximation to φ(0.1), where φ is the solution e^t of y' = y such that φ(0) = 1, by using
a) the Runge-Kutta method (9.45)
b) the Runge-Kutta method (9.50)
How small must h be taken to obtain approximations of comparable accuracy by using the Euler method with a subdivision of the interval 0 ≤ t ≤ 0.1?
5. For each of the following initial-value problems, obtain an approximate value of the solution, using step size h = 0.1, by (i) the Euler method, (ii) the modified Euler method, (iii) the Milne method, using the Runge-Kutta method for starting values and predictor-corrector formulas, (iv) the Runge-Kutta method.
a) y' − 4y = 1 − t, φ(0) = 1
b) y' = t² + y², φ(0) = 0 (compare Exercise 19, Section 8.1)
c) y' = e^{−t²}, φ(0) = 0 [Hint: Use a table of exponentials.]
6. Repeat the calculations of Exercise 5, but with step size h = 0.05, and compare the results with those of Exercise 5. [Suggestion: Do not attempt this exercise unless you have a high-speed computer available or are unusually eager to do hard computing.]
Everything that we have done in this chapter has been for first-order
equations, but the methods are equally suitable for systems of differential
equations, although not entirely without difficulty. We can write a system
in the form
y'=f(t, y) (9.52)
with y the column vector with components (y₁, …, y_n) and f(t, y) the column vector with components (f₁(t, y₁, …, y_n), f₂(t, y₁, …, y_n), …, f_n(t, y₁, …, y_n)) (see Section 4.5). We can apply the approximation methods developed in this chapter to the system (9.51) by applying them to each component of the vector equation (9.52).
Example 1. Consider the system
u'=0v, v’=g(t, u, v). (9.53)
This can be written in the form (9.52) with
$$\mathbf{y} = \begin{pmatrix} u \\ v \end{pmatrix}, \qquad \mathbf{f}(t, \mathbf{y}) = \begin{pmatrix} v \\ g(t, u, v) \end{pmatrix}.$$
The Euler method applied to (9.53) leads to a pair of iterative formulas
$$u_{k+1} = u_k + h\,v_k, \qquad v_{k+1} = v_k + h\,g(t_k, u_k, v_k). \qquad (9.54)$$

For a system (9.53) we are given initial values u₀ and v₀, and once we have found both u_k and v_k, we can use (9.54) to compute u_{k+1} and v_{k+1}.
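The pair of formulas (9.54) can be sketched as follows; as an illustration (an assumed example, chosen to connect with the exercises below) we take g(t, u, v) = −u, so that the system is equivalent to y'' + y = 0, and with u(0) = 0, v(0) = 1 the exact solution is u(t) = sin t.

```python
import math

def euler_system(g, u0, v0, h, n):
    # The pair of iterative formulas (9.54):
    # u_{k+1} = u_k + h v_k,  v_{k+1} = v_k + h g(t_k, u_k, v_k).
    t, u, v = 0.0, u0, v0
    for _ in range(n):
        # Both right-hand sides use the old values u_k, v_k.
        u, v = u + h * v, v + h * g(t, u, v)
        t += h
    return u, v

# y'' + y = 0 with phi(0) = 0, phi'(0) = 1 becomes u' = v, v' = -u.
u, v = euler_system(lambda t, u, v: -u, 0.0, 1.0, 0.01, 157)  # t near pi/2
print(u)  # near sin(pi/2) = 1
```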
Exercises
1. Find iterative formulas analogous to the modified Euler and Milne methods for
the system (9.53).
2. Use the Euler method with h = 0.1 to approximate the value φ(π/2) of the solution φ of the second-order equation y'' + y = 0 such that φ(0) = 0, φ'(0) = 1. (Note that φ(t) = sin t, so that the exact value of φ(π/2) is 1.)
3. Use the Euler method with h = 0.1 to estimate the smallest positive value t at which the solution ψ of y'' + y = 0 such that ψ(0) = 1, ψ'(0) = 0 vanishes. Use your answer to estimate a value of π, and suggest some ways of obtaining a more precise estimate. [Hint: Since ψ(t) = cos t, the exact value of t is π/2.]
4. Consider the solution φ of the differential equation y'' + (g/L) y = 0 such that φ(0) = θ₀, φ'(0) = 0, where g, L, θ₀ are given positive constants. (This is one model for the simple pendulum obtained in Section 2.2.) Use the Euler method with h = 0.1 to estimate the value of φ'(τ), where τ is the first value of t for which φ(τ) = 0. Also estimate T, the first value of t > 0 for which φ(T) = θ₀. Estimate the same quantities using the Milne method with h = 0.1.
5. Repeat the calculations of Exercise 4 using the more precise model
y"+(g/L) sin y=0.
6. Find iterative formulas analogous to the Euler, modified Euler, Milne, and Runge-Kutta methods for the system

$$y_1' = f_1(t, y_1, y_2), \qquad y_2' = f_2(t, y_1, y_2).$$

7. Specialize the results of Exercise 6 to the linear system

$$y_1' = a_{11}(t)\,y_1 + a_{12}(t)\,y_2 + b_1(t), \qquad y_2' = a_{21}(t)\,y_1 + a_{22}(t)\,y_2 + b_2(t).$$
8. Generalize the results of Exercises 6 and 7 to n-dimensional systems of the form

$$\mathbf{y}' = \mathbf{f}(t, \mathbf{y})$$

and

$$\mathbf{y}' = A(t)\,\mathbf{y} + \mathbf{b}(t).$$
9. For each of the following initial-value problems obtain an approximate value of φ(1) = (φ₁(1), φ₂(1)), where φ(t) is the (exact) solution, using step size h = 0.1 by (i) the Euler method, (ii) the modified Euler method, (iii) the Milne method, using the Runge-Kutta method for starting values and predictor-corrector formulas, (iv) the Runge-Kutta method.
9.6 Systems and Equations of Higher Order 391
10. For each of the following initial-value problems, obtain an approximate value of φ(0.5), where φ(t) is the (exact) solution, using step size h = 0.1 by (i) the Euler method, (ii) the Runge-Kutta method.
a) y'' − 2y³ = 0, φ(0) = 1, φ'(0) = −1
b) y'' + 2y³ = 0, φ(0) = 1, φ'(0) = −1
[Hint: Reduce to a system of two first-order equations and use the results of Exercise 6.]
CHAPTER 10
THE LAPLACE TRANSFORM
10.1 INTRODUCTION
For every function f(t) of a suitable class, we define the Laplace transform F(s), also denoted by ℒ(f), by

$$F(s) = \int_0^\infty e^{-st} f(t)\,dt. \qquad (10.1)$$
10.2 Basic Properties of the Laplace Transform 393
Thus ℒ(1) = 1/s. Clearly, the integral does not converge for ℛs ≤ 0.
Example 2. We can calculate the Laplace transform of e^{zt} by almost exactly the same process as that used in Example 1:

$$\mathscr{L}(e^{zt}) = \int_0^\infty e^{-st} e^{zt}\,dt = \int_0^\infty e^{-(s-z)t}\,dt = \frac{1}{s - z} \qquad (\mathscr{R}s > \mathscr{R}z). \qquad (10.3)$$
Notice that the only difference between the integrals evaluated here and in Example 1
is that where there was an s in Example 1, there is an (s—z) in Example 2. This is an ex-
ample of a general principle to which we shall return later.
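The result (10.3) can be confirmed with a computer algebra system. The sketch below uses the sympy library (an outside tool, not part of the text) for the case z = 2:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Confirm L(e^{2t}) = 1/(s - 2), the z = 2 case of (10.3);
# noconds=True suppresses the convergence condition Rs > 2.
F = sp.laplace_transform(sp.exp(2 * t), t, s, noconds=True)
print(F)
```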
The operator defined by (10.1) is linear in the sense that if f₁(t) and f₂(t) have Laplace transforms F₁(s) and F₂(s), respectively, and if a and b are constants, then af₁(t) + bf₂(t) has Laplace transform aF₁(s) + bF₂(s).
The idea behind the Laplace transform method is very simple. It will be
shown that every solution of any linear homogeneous differential equation
with constant coefficients has a Laplace transform. Also, the Laplace
transform of the derivatives of f can be expressed in terms of the Laplace transform of f and the values of f and its derivatives at t = 0. This then means that if φ is the solution of a linear differential equation with constant coefficients which satisfies some given initial conditions at t = 0, the Laplace
transform of ¢ satisfies a linear algebraic equation rather than a differential
equation. When we have solved this algebraic equation, we need only find
the function whose Laplace transform is the solution of this algebraic
equation. This may often be facilitated by tables of Laplace transforms.
Of course, in order to be sure that the function found by this procedure is the
same as the function ¢, we need a uniqueness theorem for Laplace trans-
forms, to the effect that two different functions cannot have the same Laplace
transform.
In defining the Laplace transform by Eq. (10.1), we must impose some con-
ditions on the function f(t) to assure the convergence of the integral. We
consider functions f(t) defined for 0 ≤ t < ∞ which grow sufficiently slowly near t = 0 to assure the convergence of the integral at zero (the integral may be improper); we also require that the function f(t) grow sufficiently slowly for large t to assure the convergence of the infinite integral for some values of the complex parameter s; finally, we require f(t) to be integrable over every
closed subinterval of 0<t< oo. This leads us to the following definition.
Definition 1. A function f on 0 ≤ t < ∞ is said to be of exponential growth at infinity if it satisfies an inequality of the form

$$|f(t)| \le M e^{ct} \qquad (10.4)$$

for some real constants M > 0 and c, for all sufficiently large t.
Now, we define the class A of functions on 0 ≤ t < ∞ which are
i) absolutely integrable at zero (that is, lim_{a→0⁺} ∫_a^δ |f(t)| dt exists for sufficiently small δ > 0);
ii) piecewise continuous on 0 ≤ t < ∞;
iii) of exponential growth at infinity.
This is the class of functions for which we wish to define the Laplace transform. Clearly, the functions 1, t, t^n (n a positive integer), sin t, cos t, e^{zt} for any complex z are in the class A, but exp(t²) is not.
Theorem 1. If f is a function in the class A, the integral ∫₀^∞ e^{−st} f(t) dt converges absolutely for all complex numbers s with sufficiently large real part.

Proof. For small t, |e^{−st}| is bounded, and therefore the assumption that f is absolutely integrable at zero implies the convergence of the integral ∫₀^δ |e^{−st} f(t)| dt for every δ > 0. If ℛs = σ, then |e^{−st}| = e^{−σt}, and (10.4) yields

$$|e^{-st} f(t)| \le M e^{-(\sigma - c)t}$$

for t > T, where T is some number greater than zero. Therefore, if σ > c, the infinite integral ∫_T^∞ |e^{−st} f(t)| dt converges because its integrand decreases exponentially to zero, and we have

$$\int_T^\infty |e^{-st} f(t)|\,dt \le M \int_T^\infty e^{-(\sigma - c)t}\,dt = \frac{M}{\sigma - c}\,e^{-(\sigma - c)T}.$$

Finally, because f is piecewise continuous on δ ≤ t ≤ T, ∫_δ^T |e^{−st} f(t)| dt exists, and this completes the proof of the theorem. ∎
The Laplace transform F(s) of any function f(t) in the class A is defined by Eq. (10.1) for all complex s with sufficiently large real part; we will usually be concerned only with real values of s. Sometimes we denote the Laplace transform by the symbol ℒ, to emphasize that the Laplace transform is an operator, which associates the function F(s) with the function f(t). Thus, we write

$$F(s) = \mathscr{L}\{f(t)\}.$$
As already remarked, this operator is linear.
Example 1. If f(t) is a complex-valued function of the class A, f(t) = u(t) + iv(t), where u, v are real, we have

$$\mathscr{L}\{f(t)\} = \mathscr{L}\{u(t) + iv(t)\} = \mathscr{L}\{u(t)\} + i\,\mathscr{L}\{v(t)\}.$$

From the definition, it is clear that the Laplace transform of a real-valued function is real for real s. Thus, ℒ{u(t)} is the real part of ℒ{f(t)} and ℒ{v(t)} is the imaginary part of ℒ{f(t)}. If z = α + iβ, and if f(t) = e^{zt}, we have
$$\mathscr{L}(e^{zt}) = \mathscr{L}\{e^{\alpha t}\cos \beta t + i e^{\alpha t}\sin \beta t\} = \frac{1}{s - z} = \frac{1}{s - \alpha - i\beta} = \frac{(s - \alpha) + i\beta}{(s - \alpha)^2 + \beta^2}. \qquad (10.5)$$

When we take real and imaginary parts of (10.5) with s real, and then let s be complex again, we obtain

$$\mathscr{L}\{e^{\alpha t}\cos \beta t\} = \frac{s - \alpha}{(s - \alpha)^2 + \beta^2}, \qquad \mathscr{L}\{e^{\alpha t}\sin \beta t\} = \frac{\beta}{(s - \alpha)^2 + \beta^2}. \qquad (10.6)$$

In particular, taking α = 0, we obtain

$$\mathscr{L}\{\cos \beta t\} = \frac{s}{s^2 + \beta^2}, \qquad \mathscr{L}\{\sin \beta t\} = \frac{\beta}{s^2 + \beta^2} \qquad (\mathscr{R}s > 0). \qquad (10.7)$$
We could use the same direct approach to compute the Laplace transforms of other functions such as t^k and t^k e^{zt}, but we can calculate these transforms less laboriously by using some additional general properties of Laplace transforms.
Suppose f is in the class A. If we differentiate (10.1) with respect to s under the integral sign, we obtain formally

$$F'(s) = -\int_0^\infty e^{-st}\,t\,f(t)\,dt.$$
By letting z = α + iβ and taking real and imaginary parts, we can obtain the formulas

$$\mathscr{L}\{t^k e^{\alpha t}\cos \beta t\} = \frac{k!\,\mathscr{R}\left[(s - \alpha) + i\beta\right]^{k+1}}{\left[(s - \alpha)^2 + \beta^2\right]^{k+1}}, \qquad (10.13)$$

$$\mathscr{L}\{t^k e^{\alpha t}\sin \beta t\} = \frac{k!\,\mathscr{I}\left[(s - \alpha) + i\beta\right]^{k+1}}{\left[(s - \alpha)^2 + \beta^2\right]^{k+1}}. \qquad (10.14)$$

By evaluating the indicated real and imaginary parts in (10.13) and (10.14), we can obtain explicit expressions for these Laplace transforms.
Exercises
1. Find the Laplace transforms of t cos βt and t sin βt, using (10.13) and (10.14).
2. Find the Laplace transforms of t cos βt and t sin βt directly from the definition.
3. Calculate the Laplace transforms of
a) cos² βt. [Hint: Use the half-angle formula.]
b) sin βt cos βt.
4. Calculate the Laplace transform of the function f given by

$$f(t) = \begin{cases} 0 & (0 \le t < 1) \\ 1 & (1 \le t < 2) \\ 0 & (t \ge 2). \end{cases}$$

5. Calculate the Laplace transform of the function f given by

$$f(t) = \begin{cases} \sin 2t & (0 \le t \le \pi) \\ 0 & (t > \pi). \end{cases}$$
The relation between (10.2) and (10.3) suggests that the multiplication of a
function by an exponential does not affect its Laplace transform except for
causing a translation of the independent variable. This is in fact a general
property.
Theorem 3. If the function f in the class A has Laplace transform F, then the Laplace transform of e^{at} f(t), for any constant a (real or complex), is F(s − a).
Proof. It is easy to verify that e^{at} f(t) also belongs to the class A. The only part of the verification which is not completely obvious is that e^{at} f(t) is of exponential growth at infinity. If f satisfies (10.4), and if ℛa = α, then

$$|e^{at} f(t)| \le e^{\alpha t}\,M e^{ct} = M e^{(c + \alpha)t},$$

and thus e^{at} f(t) is of exponential growth at infinity and has a Laplace transform, which we may calculate directly. We obtain

$$\mathscr{L}\{e^{at} f(t)\} = \int_0^\infty e^{-st} e^{at} f(t)\,dt = \int_0^\infty e^{-(s-a)t} f(t)\,dt = F(s - a). \ \blacksquare$$
Theorem 5. If f belongs to the class A^k and has Laplace transform F(s), then

$$\mathscr{L}\{f^{(j)}(t)\} = s^j F(s) - s^{j-1} f(0) - s^{j-2} f'(0) - \cdots - f^{(j-1)}(0) \qquad (j = 1, 2, \ldots, k). \qquad (10.16)$$
Proof. We can prove (10.16) most easily by induction on j for any fixed k, remembering that the induction procedure cannot be carried out for j > k, since the hypotheses do not guarantee the existence of the Laplace transforms of derivatives of order higher than k. The case j = 1 of (10.16) was proved in Theorem 4. If (10.16) has been established for f^{(j)}(t), we can write f^{(j+1)}(t) as the first derivative of f^{(j)}(t) and apply Theorem 4. We obtain

$$\mathscr{L}\{f^{(j+1)}(t)\} = s\,\mathscr{L}\{f^{(j)}(t)\} - f^{(j)}(0) = s\left[s^j F(s) - s^{j-1} f(0) - \cdots - f^{(j-1)}(0)\right] - f^{(j)}(0),$$

and this is Eq. (10.16) with j replaced by (j + 1). Thus, Theorem 5 is proved by induction. ∎
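The case j = 1 of (10.16), ℒ{f'} = sF(s) − f(0), can be confirmed symbolically. A sketch using the sympy library (an outside tool, not part of the text), with f(t) = sin t:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

def L(f):
    # Laplace transform, conditions suppressed.
    return sp.laplace_transform(f, t, s, noconds=True)

# Check L{f'} = s F(s) - f(0) for f(t) = sin t.
f = sp.sin(t)
lhs = L(sp.diff(f, t))            # transform of cos t
rhs = s * L(f) - f.subs(t, 0)     # s/(s^2 + 1) - 0
print(sp.simplify(lhs - rhs))
```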
Example 2. Let us find the solution φ₀ of the familiar first-order differential equation

$$y' + ay = 0, \qquad (10.17)$$

which satisfies the initial condition

$$\phi_0(0) = y_0, \qquad (10.18)$$

where a and y₀ are given constants. From our previous solution in Section 1.4 we know that the solution φ₀ is in the class A¹. Thus, we should be able to find φ₀ by Laplace transforms. Let Y₀(s) = ℒ(φ₀). Using (10.15), we may take the Laplace transform of every term in the equation

$$\phi_0'(t) + a\phi_0(t) = 0,$$

satisfied by φ₀, and we obtain

$$sY_0(s) - y_0 + aY_0(s) = 0, \qquad \text{so that} \qquad Y_0(s) = \frac{y_0}{s + a},$$
and the only remaining problem is to find a function which has this expression as its Laplace transform. As we have seen in Example 2, Section 10.1, y₀e^{−at} is such a function. By direct verification and application of Theorem 1, Section 1.6, we know that it is the only solution of Eq. (10.17) satisfying (10.18). However, as motivation for more complicated problems, it is useful to look at this a little differently. At this stage, we do not know that it is the only such function, but if we accept the truth of the statement that two different continuous functions cannot have the same Laplace transform, then we conclude that φ₀(t) = y₀e^{−at}. Since y₀e^{−at} does belong to the class A¹, our reasoning is valid, except for the uniqueness statement, which will be stated precisely in Section 10.3.
Example 3. Now, let us apply the same method to find the solution φ of the nonhomogeneous equation

$$y' + ay = f(t), \qquad (10.19)$$

which satisfies the initial condition

$$\phi(0) = y_0. \qquad (10.20)$$

Here f is a given function belonging to the class A. The problem in (10.19) and (10.20) can be solved by the method of Section 1.4, and from the solution it is obvious that if f is in the class A, the solution φ is in the class A¹. We proceed as in Example 2; we may assume that φ belongs to the class A¹ and we let Y(s) be the Laplace transform of φ and F(s) the Laplace transform of f. Using (10.15) we take the Laplace transform of
400 The Laplace Transform
every term in the equation; we obtain sY(s) − y₀ + aY(s) = F(s), so that

$$Y(s) = \frac{y_0}{s + a} + \frac{F(s)}{s + a}.$$

On the other hand, the solution obtained by the method of Section 1.4 is

$$\phi(t) = y_0 e^{-at} + \int_0^t e^{-a(t-u)} f(u)\,du,$$

and this suggests that ∫₀^t e^{−a(t−u)} f(u) du should have the Laplace transform F(s)/(s + a).
Example 4. In Examples 2 and 3 we have used the Laplace transform to solve linear differential equations with constant coefficients. To show that the method is less useful for linear equations with variable coefficients, let us attempt to find the solution ψ of the equation

$$y' + 2ty = \sin t, \qquad (10.21)$$

which satisfies the initial condition

$$\psi(0) = y_0. \qquad (10.22)$$

In Example 2, Section 1.4, we found that

$$\psi(t) = e^{-t^2}\left(y_0 + \int_0^t e^{u^2}\sin u\,du\right).$$

When we try to obtain this solution by the use of Laplace transforms, we meet with serious difficulties. If we let Z(s) be the Laplace transform of ψ and take the Laplace transform of every term in the equation

$$\psi'(t) + 2t\psi(t) = \sin t,$$

satisfied by ψ, we apply (10.10) and (10.15) and we obtain

$$sZ(s) - y_0 - 2Z'(s) = \frac{1}{s^2 + 1},$$

which is itself a differential equation for Z(s); the transform has not simplified the problem.
The examples of the previous section have indicated the need for finding a
function with a given Laplace transform. The function f(t) whose Laplace
transform is F(s) is called the inverse Laplace transform of F(s). The inverse
Laplace transform is, as we shall prove later in this section, linear. We must
consider the following questions:
i) If we know that F(s) is the Laplace transform of a function f(t), how can we compute the inverse transform f from a knowledge of F?
ii) Is the inverse transform of a given function F unique?
An answer to the first question can be given in a theoretical way by means
of the so-called complex inversion formula (see, for example, [20], p. 66).
However, the derivation and application of this formula require a knowledge
of real and complex analysis. Therefore, we confine ourselves to a more
elementary approach that will enable us to find the inverse transforms of
some functions commonly arising in applications.
We have seen in Example 3, Section 10.2, that it would be useful to have a
general method of calculating the inverse Laplace transform of a product
of two functions each of whose inverse Laplace transforms are known. Let
us now determine whether there is a means for doing this.
We assume that we are given F(s) and G(s), and that we can find functions f(t) and g(t) in the class A whose Laplace transforms are F(s) and G(s), respectively; thus F(s) = ℒ(f(t)) and G(s) = ℒ(g(t)). Our problem is to determine the function h(t) whose Laplace transform is the product F(s) G(s), if such a function exists. If there is such a function, then

$$\mathscr{L}\{h(t)\} = \int_0^\infty e^{-st} h(t)\,dt = F(s)\,G(s) = \int_0^\infty e^{-su} f(u)\,du \int_0^\infty e^{-sv} g(v)\,dv, \qquad (10.24)$$

where ℛs > σ for some real number σ. If F(s) = ℒ{f(t)} for ℛs > α, and G(s) = ℒ{g(t)} for ℛs > β (α, β real), then σ = max(α, β). Since each integral converges absolutely for ℛs > σ, we may write the product of the two integrals on the right side of (10.24) as a double integral, obtaining

$$F(s)\,G(s) = \int_0^\infty \int_0^\infty e^{-s(u+v)} f(u)\,g(v)\,du\,dv \qquad (\mathscr{R}s > \sigma). \qquad (10.25)$$
this can be justified under our hypotheses because of the absolute convergence noted above. Making the change of variable u + v = t in the inner integral, we obtain

$$F(s)\,G(s) = \int_0^\infty e^{-st}\left[\int_0^t f(t - v)\,g(v)\,dv\right] dt \qquad (\mathscr{R}s > \sigma). \qquad (10.26)$$
Exercise
1. Obtain the limits of integration in (10.26). [Hint: Draw a sketch of the region of integration.]
Equation (10.26) states that ℒ{h(t)} = ℒ{∫₀^t f(t − v) g(v) dv}, which suggests that the solution to our problem is

$$h(t) = \int_0^t f(t - v)\,g(v)\,dv. \qquad (10.27)$$
By reversing the argument we have just completed, we may prove that the
Laplace transform of the function A(t) defined by (10.27) is F(s) G(s), as
desired.
Exercise
2. Show that if f and g belong to the class A and h is defined by (10.27), then h also belongs to the class A.
Exercises
3. Show that f∗g = g∗f for any two functions f, g in the class A.
4. Show that (f∗g)∗h = f∗(g∗h) for any three functions f, g, h in the class A.
5. Show that f∗0 = 0 for any function f in the class A.
6. Show that if f and g are defined for −∞ < t < ∞ but are both identically zero for t < 0, then their convolution f∗g can be written as ∫_{−∞}^{∞} f(t − v) g(v) dv.
7. Show in two ways that ℒ{∫₀^t f(u) du} = F(s)/s.
For example, to find the inverse Laplace transform of 1/(s²(s² + 1)), we may write it as the product of 1/s² and 1/(s² + 1), whose inverse transforms are t and sin t, respectively, and thus, by the linearity of ℒ⁻¹ (this will be proved following the corollary to Theorem 2) and the convolution theorem,

$$\mathscr{L}^{-1}\left\{\frac{1}{s^2(s^2+1)}\right\} = \int_0^t (t - v)\sin v\,dv = t - t\cos t + t\cos t - \sin t = t - \sin t.$$

We may also obtain this result by partial fractions. For

$$\frac{1}{s^2(s^2+1)} = \frac{1}{s^2} - \frac{1}{s^2+1},$$

and hence, by either method,

$$\mathscr{L}^{-1}\left\{\frac{1}{s^2(s^2+1)}\right\} = t - \sin t.$$
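The value t − sin t obtained here can also be checked numerically; the following sketch compares a midpoint-rule approximation of ∫₀^t (t − v) sin v dv with t − sin t:

```python
import math

def conv(t, n=2000):
    # Midpoint-rule approximation of the convolution
    # (f*g)(t) = integral_0^t f(t - v) g(v) dv, with f(u) = u, g(v) = sin v.
    dv = t / n
    return sum((t - (j + 0.5) * dv) * math.sin((j + 0.5) * dv)
               for j in range(n)) * dv

for t in [0.5, 1.0, 2.0]:
    print(t, conv(t), t - math.sin(t))
```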
Exercises
8. Find the inverse Laplace transform of each of the following functions. (We indicate the partial fraction decomposition for the convenience of the reader.)
a) (s² − 6)/(s³ + 4s² + 3s); write it as A/s + B/(s + 1) + C/(s + 3).
b) 1/(s²(s² + 1)); write it as A/s + B/s² + (Cs + D)/(s² + 1).
c) 16/(s(s² + 4)²); write it as A/s + (Bs + C)/(s² + 4) + (Ds + E)/(s² + 4)².
d) F(s)/(s + a), where f(t) is in the class A and ℒ{f(t)} = F(s).
e) F(s)/(s² + 1), where f(t) is in the class A and ℒ{f(t)} = F(s).
9. Find the solution φ of the integral equation
[Hint: Take the Laplace transform of every term and use Theorem 1; then solve for ℒ(φ) and finally find φ.]
10.3 The Inverse Transform 405
We have actually proved a little more than we claimed. Not only have we
shown that F(s) tends to zero as s → ∞, but in fact that |sF(s)| remains bounded as s → ∞.
The question of which functions are Laplace transforms is a difficult one,
and we cannot in this brief treatment give a more precise answer. In
practice, we only apply Laplace transforms when we can prove the existence
of an inverse transform by finding it explicitly, but the question remains an
important one because it is often impossible to find the inverse transform
explicitly.
Exercise
10. a) Show that if f(t) belongs to the class A¹ and has Laplace transform F(s), then lim_{s→∞} sF(s) = f(0). [Hint: Use Theorem 4, Section 10.2, and Theorem 3.]
b) Generalize part (a) to a result for functions in the class A^k by using Theorem 5, Section 10.2, and Theorem 3.
Our main purpose in developing the Laplace transform has been to apply
it to the solution of linear differential equations. We have suggested how
this may be done by examples in Section 10.2, where we considered first-order
differential equations. For equations of higher order, the general idea is the
same, but there are some technical problems which arise when we try to find
the inverse transform. In this section we shall discuss these technical
problems and the means of dealing with them by a collection of examples.
We shall concentrate on equations of the second order, but we shall also
indicate the minor additional problems which arise for equations of higher
order.
In each of the examples involving a second-order differential equation we shall be seeking the solution φ of an equation of the form

$$y'' + ay' + by = f(t) \qquad (10.28)$$

which satisfies initial conditions

$$\phi(0) = y_0, \qquad \phi'(0) = y_1. \qquad (10.29)$$

The procedure is justified if the solution φ, together with its first and second derivatives, is of exponential growth, that is, if it belongs to the class A². Thus, to conclude the existence of a solution which can be found by means of the Laplace transform, we need a theorem on the growth of solutions of linear differential equations with constant coefficients. Such a result is proved in Theorem 2, Section 8.6. This theorem, applied in the present context, says that every solution of a linear nonhomogeneous differential equation of order n with constant coefficients, whose nonhomogeneous term is in the class A, belongs to the class A^n.
Example 1. Let us use the Laplace transform to find the solution φ of the equation y'' + 3y' + 2y = 0 which satisfies the initial conditions φ(0) = 1, φ'(0) = 1. We let Y be the Laplace transform of φ and take the Laplace transform of the equation

$$\phi''(t) + 3\phi'(t) + 2\phi(t) = 0,$$

satisfied by φ. Using (10.16), we obtain

$$s^2 Y(s) - s\phi(0) - \phi'(0) + 3\left[sY(s) - \phi(0)\right] + 2Y(s) = 0;$$

now, using φ(0) = φ'(0) = 1, we obtain

$$(s^2 + 3s + 2)\,Y(s) = (s + 3)\,\phi(0) + \phi'(0) = s + 4.$$

Thus,

$$Y(s) = \frac{s + 4}{s^2 + 3s + 2} = \frac{s + 4}{(s + 1)(s + 2)}. \qquad (10.30)$$
In order to take the inverse transform in (10.30), we must simplify the expression (s + 4)/((s + 1)(s + 2)). We use the method of partial fractions to accomplish this. Letting

$$\frac{s + 4}{(s + 1)(s + 2)} = \frac{A}{s + 1} + \frac{B}{s + 2}$$

for all s, we observe that

$$A = \left.\frac{s + 4}{s + 2}\right|_{s = -1} = 3, \qquad B = \left.\frac{s + 4}{s + 1}\right|_{s = -2} = -2.$$

Thus,

$$Y(s) = \frac{s + 4}{(s + 1)(s + 2)} = \frac{3}{s + 1} - \frac{2}{s + 2}.$$
Now, we may use (10.3) and the linearity of the inverse transform to find a function whose Laplace transform is Y(s), and because of the uniqueness of the inverse transform (Theorem 2, Section 10.3), this function must be the desired solution φ. We see that φ(t) = 3e^{−t} − 2e^{−2t}. We could, of course, have obtained this solution by using the methods of Section 3.4, but this would have involved first finding the general solution of (10.28) and then substituting the initial conditions (10.29) to determine the constants. The Laplace transform does not solve problems which would otherwise be unsolvable, but it does provide an easy, practical method of solution for many problems.
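Both the partial-fraction step and the final answer of Example 1 can be checked with the sympy library (an outside tool, not part of the text):

```python
import sympy as sp

s, t = sp.symbols('s t')

# Partial-fraction step of Example 1: (s + 4)/((s + 1)(s + 2)).
Y = (s + 4) / ((s + 1) * (s + 2))
print(sp.apart(Y, s))  # 3/(s + 1) - 2/(s + 2)

# Check that phi(t) = 3 e^{-t} - 2 e^{-2t} satisfies y'' + 3y' + 2y = 0
# with phi(0) = 1 and phi'(0) = 1.
phi = 3 * sp.exp(-t) - 2 * sp.exp(-2 * t)
print(sp.simplify(sp.diff(phi, t, 2) + 3 * sp.diff(phi, t) + 2 * phi))
print(phi.subs(t, 0), sp.diff(phi, t).subs(t, 0))
```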
Exercise
1. Verify the solution φ(t) = 3e^{−t} − 2e^{−2t} of Example 1 by direct substitution.
Example 2. Let us now use the Laplace transform to find the solution φ of the equation y'' + 4y' + 4y = f(t) which satisfies the initial conditions φ(0) = 1, φ'(0) = 2, where f(t) belongs to the class A. We let Y be the Laplace transform of φ, F the Laplace transform of f, and we take the Laplace transform of the equation

$$\phi''(t) + 4\phi'(t) + 4\phi(t) = f(t),$$

satisfied by φ. Much as in Example 1, we obtain

$$s^2 Y(s) - s\phi(0) - \phi'(0) + 4\left[sY(s) - \phi(0)\right] + 4Y(s) = F(s),$$

or

$$(s^2 + 4s + 4)\,Y(s) = (s + 4)\,\phi(0) + \phi'(0) + F(s) = (s + 4) + 2 + F(s) = s + 6 + F(s).$$
Thus,
Y(s) = (s + 6 + F(s))/(s + 2)².
For the general homogeneous equation L(y) = 0 of order n with constant coefficients
(Eq. (10.31)), the same procedure gives
Y(s) = q(s)/p(s),
where p is the characteristic polynomial of L and q is a polynomial determined by the
initial conditions,
and the next step is to separate the rational function q(s)/p(s) into partial
fractions. If the roots of the polynomial p are z₁, ..., z_k, of multiplicities
m₁, ..., m_k, respectively, then we can write
p(s) = a₀(s - z₁)^{m₁} (s - z₂)^{m₂} ··· (s - z_k)^{m_k}.
The process of separating into partial fractions gives
q(s)/p(s) = a₁₁/(s - z₁) + ··· + a_{1,m₁}/(s - z₁)^{m₁} + a₂₁/(s - z₂) + ··· + a_{2,m₂}/(s - z₂)^{m₂}
          + ··· + a_{k1}/(s - z_k) + ··· + a_{k,m_k}/(s - z_k)^{m_k},
where the constants a₁₁, ..., a_{k,m_k} may be calculated. We may now take the
inverse transform of Y(s) using (10.3) and (10.12). We obtain the solution
φ(t) = [a₁₁ + a₁₂t + ··· + a_{1,m₁} t^{m₁-1}/(m₁-1)!] e^{z₁t}
     + ··· + [a_{k1} + a_{k2}t + ··· + a_{k,m_k} t^{m_k-1}/(m_k-1)!] e^{z_kt},
which is, of course, the same as that obtained in Section 6.4. If some of the
roots z₁, ..., z_k are complex but the coefficients a₀, ..., a_n of L are real, then
we may express the solutions in terms of real functions just as in Section 3.4.
We remark that if in place of Eq. (10.31) we consider the equation
L(y) = f(t),
where f belongs to the class A, we handle the additional term by using the
convolution exactly as in Example 2 above.
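For simple roots, the partial-fraction coefficients above can be computed by the limit used in Example 1, which amounts to the residue formula a_j = q(z_j)/p'(z_j). A minimal numerical sketch, reusing Example 1's polynomials (the function names are ours):

```python
# Residue formula for simple roots: in q(s)/p(s), the coefficient of
# 1/(s - z_j) is q(z_j)/p'(z_j). Checked on Example 1, where
# q(s) = s + 4 and p(s) = (s + 1)(s + 2).
def p(s):  return s * s + 3 * s + 2
def dp(s): return 2 * s + 3          # p'(s)
def q(s):  return s + 4

roots = [-1.0, -2.0]
coeffs = [q(z) / dp(z) for z in roots]   # expect [3, -2] as in Example 1
```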
The Laplace transform is used a great deal in the study of electrical
circuits. We have already mentioned in Chapter 1 the linear electrical circuit
consisting of a capacitance C, a resistance R, and an inductance L connected
in series. The voltage v(t) across the capacitance may be described by the
equation
Lv'' + Rv' + (1/C)v = 0.
If an external voltage A cos(kt + α) is applied, the equation becomes
Lv'' + Rv' + (1/C)v = A cos(kt + α).   (10.34)
If we let
b = Ae^{iα},   (10.35)
then
A cos(kt + α) = ℜ[Ae^{i(kt+α)}] = ℜ(Ae^{iα}e^{ikt}) = ℜ(be^{ikt}).
Thus, instead of (10.34), we consider the complex equation
Ly'' + Ry' + (1/C)y = be^{ikt}.   (10.36)
10.4 Linear Equations with Constant Coefficients 411
Let Y be the Laplace transform of the solution φ of (10.36) which satisfies the
initial conditions φ(0) = y₀, φ'(0) = y₁. Then, using (10.3) and (10.16), we
see that
L[s²Y(s) - sy₀ - y₁] + R[sY(s) - y₀] + (1/C)Y(s) = b/(s - ik),
or
(Ls² + Rs + 1/C) Y(s) = b/(s - ik) + L(sy₀ + y₁) + Ry₀.   (10.37)
Let z₁ and z₂ be the roots of the polynomial p(s) = Ls² + Rs + 1/C, so that
Ls² + Rs + 1/C = L(s - z₁)(s - z₂). Then we can write (10.37) as
Y(s) = b/[L(s - ik)(s - z₁)(s - z₂)] + [L(sy₀ + y₁) + Ry₀]/[L(s - z₁)(s - z₂)].
When we take the inverse transform, we obtain the solution of (10.36) in the
form φ(t) = Me^{ikt} + Ne^{z₁t} + Pe^{z₂t}. If L, C, R are all positive constants, a
reasonable hypothesis in applications, then the roots z₁ and z₂ of the poly-
nomial p(s) have negative real part. (Why?) We let φ₁(t) = Me^{ikt}, φ₂(t) =
Ne^{z₁t} + Pe^{z₂t}. Then φ(t) = φ₁(t) + φ₂(t), and φ₂(t) tends to zero exponen-
tially as t → +∞. The term φ₂(t) is called a transient, because its effect dies
out, and the term φ₁(t) is called the steady state. As we wish to concentrate
on this steady state, we do not examine the transient term further. (For this
reason we did not actually compute the constants N and P.) The steady state
is [b/p(ik)] e^{ikt}. To calculate the corresponding voltage v₁(t), we must take
the real part of this expression. We define the transfer function
C(k) = 1/p(ik).
Then
v₁(t) = ℜ[bC(k)e^{ikt}] = ℜ[Ae^{iα}|C(k)|e^{i arg C(k)}e^{ikt}]
      = ℜ[A|C(k)|e^{i(kt+α+arg C(k))}]
      = A|C(k)| cos[kt + α + arg C(k)].
When we compare this steady-state output voltage v₁(t) with the input
voltage A cos(kt + α), we see that the effect of the circuit has been to multiply
the amplitude by the gain function |C(k)| and to introduce a phase lag arg C(k).
Note that v₁ is independent of the initial conditions. The transfer function
C(k) is determined by the electrical circuit, but depends on the frequency of
the input voltage. The essential principle in tuning a radio is to adjust the
circuit, usually by varying the capacitance C, to maximize the gain function
for a given k. In other electrical applications, it is necessary to vary k to
maximize the gain function for given values of L, R, and C.
The reader is warned that we have assumed that L, R, and C are constant
and that the circuit is linear in the above discussion. For time-dependent or
nonlinear (vacuum tube) circuits, the resulting differential equations cannot,
as a rule, be solved by means of the Laplace transform, and other methods
must be developed. It may still be reasonable to define a gain function and a
phase lag, but these will no longer be given by a transfer function.
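As an illustration of the gain and phase-lag computation, with C(k) = 1/p(ik) and p(s) = Ls² + Rs + 1/C (the component values below are assumptions for the sake of example, not taken from the text):

```python
import cmath

# Gain |C(k)| and phase lag arg C(k) for the series circuit, with
# C(k) = 1/p(ik), p(s) = L s^2 + R s + 1/Cap. The component values
# L = 1, R = 0.5, Cap = 1 are illustrative assumptions.
L, R, Cap = 1.0, 0.5, 1.0

def transfer(k):
    s = 1j * k
    return 1.0 / (L * s * s + R * s + 1.0 / Cap)

gain_at_1 = abs(transfer(1.0))           # here p(i) = 0.5i, so the gain is 2
phase_lag_at_1 = cmath.phase(transfer(1.0))
```

Scanning `gain_at_1` over a range of k is the numerical analog of tuning the circuit.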
The Laplace transform is sometimes used to solve linear nonhomogeneous
differential equations whose nonhomogeneous terms do not belong to the
class A. While the solutions obtained in those cases are then purely formal,
it is possible to show that they are actual solutions in a more general sense.
The justification requires a more sophisticated approach, such as the theory
of distributions. Here, we shall only give an example to indicate the nature
of the problem.
Let us again consider an electrical circuit consisting of a capacitance C,
a resistance R, and an inductance L connected in series. However, now let
us attempt to determine the behavior of the circuit if a large external voltage
is applied over a very short time interval. Let this external voltage be
defined by
δ_σ(t) = 1/σ   (0 < t ≤ σ),
δ_σ(t) = 0     (t > σ).
Then the circuit is governed by the differential equation
Lv'' + Rv' + (1/C)v = δ_σ(t).   (10.38)
Since δ_σ(t) is certainly in the class A, we can treat this equation by taking
Laplace transforms, using
L{δ_σ}(s) = ∫₀^σ (1/σ) e^{-st} dt = (1 - e^{-σs})/(σs).   (10.39)
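Taking δ_σ to be the pulse of height 1/σ on (0, σ] — an assumption about the intended definition — one can check numerically that its transform is (1 - e^{-σs})/(σs), and that this tends to 1 as σ → 0 (a sketch, not part of the text):

```python
import math

# Transform of the pulse of height 1/sigma on (0, sigma]: compare a
# midpoint-rule integral with the closed form (1 - e^{-sigma s})/(sigma s).
def transform_numeric(s, sigma, n=50000):
    h = sigma / n
    return sum(math.exp(-s * (i + 0.5) * h) / sigma * h for i in range(n))

def transform_closed(s, sigma):
    return (1 - math.exp(-sigma * s)) / (sigma * s)

s = 2.0
agreement = abs(transform_numeric(s, 0.5) - transform_closed(s, 0.5))
small_sigma_value = transform_closed(s, 0.001)   # close to 1
```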
Exercise
3. Find the steady-state solution φ₁ of Eq. (10.38) satisfying the initial conditions
φ(0) = y₀, φ'(0) = y₁. (By steady-state solution we mean, as above, the difference
between the solution and those terms φ₂ in the solution which (because L, R, C
are positive constants) tend to zero as t → +∞.)
Figure 10.1
The reader should compare (10.41) with Theorem 3, Section 10.3, which states
that for a function f in the class A, lim_{s→∞} L{f}(s) = 0, and should note that
there is no contradiction here, since δ does not belong to the class A, although
δ_σ does.
If we let Y(s) be the Laplace transform of the solution φ of (10.40)
satisfying the initial conditions φ(0) = y₀, φ'(0) = y₁, we obtain, using (10.41)
and writing p(s) = Ls² + Rs + 1/C = L(s - z₁)(s - z₂) as before,
Exercises
Find, using Laplace transforms, the solution φ of each of the following differential
equations which satisfies the given initial conditions.
4. y’"—y=0, AeA fo
5. y’—Sy'+6y=0, Pa? Pa!
6. y''' - 6y'' + 11y' - 6y = 0, φ(0) = φ'(0) = 0, φ''(0) = 1
7. y" by =e, $(0)=¢'(0)=0, ¢’(0)=1
8. y'' - 9y = e^t, φ(0) = 1, φ'(0) = 0
9. y'' - 9y = sin t, φ(0) = 1, φ'(0) = 0
10. y'' + 4y = sin 2t, φ(0) = 1, φ'(0) = 1
11. y'' - y = 0, φ(1) = 0, φ'(1) = 1
[Hint: Begin by making the change of independent variable τ = t - 1 to move the initial
time to zero.]
12. y'' - 9y = f(t), φ(0) = 1, φ'(0) = 0, where f is in the class A. [Hint: Use the
convolution.]
13. y'' + 4y = f(t), φ(0) = 1, φ'(0) = 1, where f is in the class A.
14. Plot the gain and transfer functions of the linear differential operator defined by
each of the following:
a) L(y) = y'' + 4y' + 3y
b) L(y) = y'' + 4y' + 4y
c) L(y) = y'' + y
15. Find the steady-state solution of the differential equation y'' + 9y = e^{it}.
“16. Find the steady-state voltage in an electrical circuit with L=1, R=10, C=%
with an applied voltage 96(t).
17. Find the solution of each of the following initial-value problems by means of
Laplace transforms:
d) y'' + 2y' + y = 2 + (t - 3) U(t - 3), y(0) = 2, y'(0) = 1, where U is the unit
step function.
*18. a) Show that if φ(t) is a solution of the Bessel equation of index zero,
ty'' + y' + ty = 0,
then the Laplace transform Y(s) of φ(t) satisfies the first-order differential equation
(s² + 1) dY/ds + sY = 0,
regardless of the initial conditions prescribed.
b) Solve the equation obtained in part (a) and use Exercise 10, Section 10.3 to show
that Y(s) = φ(0)(s² + 1)^{-1/2}. Explain why φ'(0) cannot be prescribed.
c) By expanding Y(s) in powers of 1/s, show that
Y(s) = φ(0) Σ_{k=0}^∞ (-1)^k [1·3·5···(2k-1)]/(2^k k!) s^{-2k-1}.
d) Show that
φ(t) = φ(0) Σ_{k=0}^∞ (-1)^k t^{2k}/(2^{2k}(k!)²).
[Note that the step from (c) to (d) is purely formal. However, (d) is a "rigorous" solution
of the Bessel equation, as can be justified by direct substitution and theorems on power
series (see Section 6.9).]
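A numerical sketch of part (d): differentiating the series φ(t) = Σ (-1)^k t^{2k}/(2^{2k}(k!)²) term by term (taking φ(0) = 1), the truncated sum very nearly satisfies ty'' + y' + ty = 0 (illustrative code, not part of the text):

```python
import math

# Truncated Bessel series of index zero and its termwise derivatives.
def coeff(k):
    return (-1) ** k / (4.0 ** k * math.factorial(k) ** 2)

def phi(t, terms=30):
    return sum(coeff(k) * t ** (2 * k) for k in range(terms))

def dphi(t, terms=30):
    return sum(coeff(k) * 2 * k * t ** (2 * k - 1) for k in range(1, terms))

def d2phi(t, terms=30):
    return sum(coeff(k) * 2 * k * (2 * k - 1) * t ** (2 * k - 2)
               for k in range(1, terms))

t = 1.3
residual = t * d2phi(t) + dphi(t) + t * phi(t)   # only truncation error remains
```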
*19. Each of the following equations defines a function φ(t). Find φ(t) by using
Laplace transforms. [Note that before we do this we should actually first prove that
φ belongs to A. This can in fact be done by estimating |φ(t)| and using the Gronwall
inequality (Lemma 1, Section 8.2). Alternatively, we could just obtain the answer
and then verify that it belongs to A.]
a) φ(t) = t²/2 - ∫₀ᵗ φ(t - τ) e^{-τ} dτ
b) (t)=C+JoO(c) sin(t—t) de
c) φ(t) = 1 + 2∫₀ᵗ φ(t - τ) cos τ dτ
d) φ'(t) = sin t + ∫₀ᵗ φ(t - τ) cos τ dτ, φ(0) = 0
e) φ'(t) = t + ∫₀ᵗ φ(t - τ) cos τ dτ, φ(0) = 4
f) ∫₀ᵗ φ(t - τ) e^{-τ} dτ = t
Example 1. Find the solution φ = (φ₁, φ₂) of the system
y₁' = 2y₁ + y₂,   y₂' = -y₁ + 4y₂,
which satisfies the initial conditions φ₁(0) = 0, φ₂(0) = 1. Also find a fundamental
matrix. We let Y₁(s) = L{φ₁(t)}, Y₂(s) = L{φ₂(t)}. When we take Laplace transforms
in the equations satisfied by φ₁, φ₂, we obtain
sY₁(s) - φ₁(0) = 2Y₁(s) + Y₂(s)
sY₂(s) - φ₂(0) = -Y₁(s) + 4Y₂(s)
or
(s - 2) Y₁(s) - Y₂(s) = φ₁(0) = 0
Y₁(s) + (s - 4) Y₂(s) = φ₂(0) = 1.
Solving for Y₁(s) and Y₂(s), and computing the inverse transform, we obtain successively
Y₁(s) = 1/(s - 3)²,   Y₂(s) = (s - 2)/(s - 3)² = 1/(s - 3) + 1/(s - 3)²;
hence
φ₁(t) = te^{3t},   φ₂(t) = e^{3t} + te^{3t}.
To find a fundamental matrix we find the solution ψ = (ψ₁, ψ₂) for which ψ₁(0) = 1,
ψ₂(0) = 0. Proceeding as above we obtain
Ψ₁(s) = (s - 4)/(s - 3)² = 1/(s - 3) - 1/(s - 3)²,   Ψ₂(s) = -1/(s - 3)².
Therefore, ψ₁(t) = (1 - t)e^{3t}, ψ₂(t) = -te^{3t}, and by Theorem 2, Section 4.3, a fun-
damental matrix is
Φ(t) = [ (1 - t)e^{3t}    te^{3t}       ]
       [ -te^{3t}         (1 + t)e^{3t} ].
The reader should compare this solution with that of Example 2, Section 5.5, for the
initial condition imposed here.
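A direct-substitution check of this example, in the spirit of the exercise that follows (illustrative Python, not part of the text):

```python
import math

# Check that phi1 = t e^{3t}, phi2 = (1 + t) e^{3t} solve
#   y1' = 2 y1 + y2,  y2' = -y1 + 4 y2,  phi1(0) = 0, phi2(0) = 1.
def phi1(t):  return t * math.exp(3 * t)
def phi2(t):  return (1 + t) * math.exp(3 * t)
def dphi1(t): return (1 + 3 * t) * math.exp(3 * t)
def dphi2(t): return (4 + 3 * t) * math.exp(3 * t)

ts = [0.0, 0.3, 0.7, 1.0]
res1 = max(abs(dphi1(t) - (2 * phi1(t) + phi2(t))) for t in ts)
res2 = max(abs(dphi2(t) - (-phi1(t) + 4 * phi2(t))) for t in ts)
```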
Exercise
1. Verify the solution obtained in Example 1 above by direct substitution.
10.5 Applications to Linear Systems 417
Example 2. Find the solution φ = (φ₁, φ₂) of the system
y₁'' - 2y₁' - y₂' + 2y₂ = 0,   y₁' - 2y₁ + y₂' = -2e^{-t},
which satisfies the initial conditions φ₁(0) = 3, φ₁'(0) = 2, φ₂(0) = 0. We let Y₁ and Y₂
be the Laplace transforms of φ₁ and φ₂, respectively. When we take Laplace transforms
in the equations satisfied by φ₁ and φ₂, we obtain
[s²Y₁(s) - 3s - 2] - 2[sY₁(s) - 3] - sY₂(s) + 2Y₂(s) = 0
[sY₁(s) - 3] - 2Y₁(s) + sY₂(s) = -2/(s + 1),
or, solving for Y₁(s) and Y₂(s) and computing the inverse transform,
φ₁(t) = e^{-t} + e^t + e^{2t},   φ₂(t) = e^t - e^{-t}.
Exercise
2. Verify the solution obtained in Example 2 by direct substitution.
To solve the problem of Example 2, we could proceed as follows. Trans-
form the system to an equivalent system of three first-order equations, find a
fundamental matrix for this system, and finally impose the initial conditions.
It is clear that in most simple problems such as the one in Example 2, the use
of Laplace transforms does give an answer more quickly.
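Reading the printed solution of Example 2 as φ₁(t) = e^{-t} + e^t + e^{2t}, φ₂(t) = e^t - e^{-t} (the displayed exponents are partly illegible), the direct substitution of Exercise 2 can be sketched numerically:

```python
import math

# Substitute phi1, phi2 into
#   y1'' - 2 y1' - y2' + 2 y2 = 0,   y1' - 2 y1 + y2' = -2 e^{-t}.
def phi1(t):   return math.exp(-t) + math.exp(t) + math.exp(2 * t)
def dphi1(t):  return -math.exp(-t) + math.exp(t) + 2 * math.exp(2 * t)
def d2phi1(t): return math.exp(-t) + math.exp(t) + 4 * math.exp(2 * t)
def phi2(t):   return math.exp(t) - math.exp(-t)
def dphi2(t):  return math.exp(t) + math.exp(-t)

ts = [0.0, 0.25, 0.5, 1.0]
res1 = max(abs(d2phi1(t) - 2 * dphi1(t) - dphi2(t) + 2 * phi2(t)) for t in ts)
res2 = max(abs(dphi1(t) - 2 * phi1(t) + dphi2(t) + 2 * math.exp(-t)) for t in ts)
```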
Example 3. Let us now attempt to find the solution φ = (φ₁, φ₂) of the system
y₁'' + 2y₁' + y₁ + y₂' + y₂ = 0,   y₁' + y₁ + y₂ = 0,
which satisfies the initial conditions φ₁(0) = 1, φ₁'(0) = 0, φ₂(0) = 1, φ₂'(0) = 0. We let
Y₁ and Y₂ be the Laplace transforms of φ₁ and φ₂, respectively. When we take
Laplace transforms in the equations satisfied by φ₁ and φ₂, we obtain the algebraic
system (10.45). When we attempt to solve the algebraic system (10.45), we find that it is
inconsistent! Hence, there is no solution to the given system of equations which satisfies
the initial conditions.
The Laplace transform provides an alternative means, independent of the develop-
ment of Chapter 5, for constructing a fundamental matrix for the system
y' = Ay,   (10.46)
where A is an arbitrary n × n constant matrix. This is done as follows.
We will say that if f(t) is a vector function with n components defined on
0 ≤ t < ∞, then f ∈ A if and only if each component of f is in the class A; we
write L{f} = ∫₀^∞ exp(-st) f(t) dt. The analog of formula (10.16) holds for
vector functions (the proof is exactly the same as in the scalar case). It follows
from Theorem 2, Section 8.6, that if φ is a solution of (10.46) with φ(0) = η,
then φ ∈ A for any initial vector η. Let Y(s) = L{φ}. Taking Laplace trans-
forms of both sides of (10.46) and using the initial condition, we obtain
sY(s) - η = AY(s).
Thus,
(sI - A) Y(s) = η.   (10.47)
The system (10.47) is a linear nonhomogeneous system of n algebraic equa-
tions in n unknowns, namely the components (Y₁(s), Y₂(s), ..., Yₙ(s)) of the
vector Y(s). Clearly, if s is not equal to an eigenvalue of A, det(sI - A) ≠ 0,
and (10.47) can be solved uniquely for Y(s) in terms of η and s by Cramer's
rule. From it, since det(sI - A) is a polynomial of degree n, it is clear that
Y(s) is a vector whose components are rational functions of s and linear in
(η₁, η₂, ..., ηₙ), the components of η. Hence, each component of Y(s) can be
expanded in partial fractions (the denominators will be integral powers of
(s - λⱼ), where λⱼ is an eigenvalue of A). Doing this we can then invert Y(s) to
find the solution φ(t) corresponding to any initial vector η. Letting η succes-
sively take on the values
η₁ = (1, 0, ..., 0)ᵀ,  η₂ = (0, 1, 0, ..., 0)ᵀ,  ...,  ηₙ = (0, ..., 0, 1)ᵀ
(or any other n linearly independent constant vectors which form a
basis), the solutions φ₁, φ₂, ..., φₙ used as columns of the matrix Φ generate a
fundamental matrix of y' = Ay, such that Φ(0) = I.
Example 4. Construct a fundamental matrix Φ(t) (for example, the one with Φ(0) = I)
for the system y' = Ay, where
A = [ 3  -1   1 ]
    [ 2   0   1 ]
    [ 1  -1   2 ].
Compare your result with Example 3, Section 5.5.
We use the method outlined in the preceding paragraph. In that notation with
n = 3, we have
(sI - A) Y(s) = η,
or
[ s-3   1   -1  ] [ Y₁(s) ]   [ η₁ ]
[ -2    s   -1  ] [ Y₂(s) ] = [ η₂ ]
[ -1    1   s-2 ] [ Y₃(s) ]   [ η₃ ].
Expanding det(sI - A) by the first row, we have
det(sI - A) = (s - 3)[s(s - 2) + 1] + [2(s - 2) + 1] - [-2 + s]
            = s³ - 5s² + 8s - 4 = (s - 1)(s - 2)².
By Cramer's rule, we have
Y₁(s) = det[η₁, 1, -1; η₂, s, -1; η₃, 1, s-2] / [(s - 1)(s - 2)²]
      = [η₁(s(s - 2) + 1) - η₂(s - 2 + 1) + η₃(-1 + s)] / [(s - 1)(s - 2)²]
      = [η₁(s - 1) - η₂ + η₃] / (s - 2)²,
Y₂(s) = det[s-3, η₁, -1; -2, η₂, -1; -1, η₃, s-2] / [(s - 1)(s - 2)²]
      = [η₁(2s - 3) + η₂(s² - 5s + 5) + η₃(s - 1)] / [(s - 1)(s - 2)²],
Y₃(s) = det[s-3, 1, η₁; -2, s, η₂; -1, 1, η₃] / [(s - 1)(s - 2)²]
      = [η₁(s - 2) - η₂(s - 2) + η₃(s² - 3s + 2)] / [(s - 1)(s - 2)²]
      = [η₁ - η₂ + η₃(s - 1)] / [(s - 1)(s - 2)].
It is convenient to substitute specific values for η₁, η₂, η₃ at this point, rather than
waiting until after taking the inverse transform as was suggested in the general
procedure. When we take η₁ = 1, η₂ = 0, η₃ = 0, we obtain
Y₁(s) = (s - 1)/(s - 2)² = A/(s - 2) + B/(s - 2)².
Then (s - 1) = A(s - 2) + B, which gives A = 1, B = 1. Thus
Y₁(s) = 1/(s - 2) + 1/(s - 2)²  and  y₁(t) = e^{2t} + te^{2t} = (1 + t)e^{2t}.
Next
Y₂(s) = (2s - 3)/[(s - 1)(s - 2)²] = A/(s - 1) + B/(s - 2) + C/(s - 2)²,
and 2s - 3 = A(s - 2)² + B(s - 1)(s - 2) + C(s - 1), which gives A = -1, B = 1, C = 1.
Thus,
Y₂(s) = -1/(s - 1) + 1/(s - 2) + 1/(s - 2)²  and  y₂(t) = -e^t + (1 + t)e^{2t}.
Also
Y₃(s) = 1/[(s - 1)(s - 2)] = -1/(s - 1) + 1/(s - 2)  and  y₃(t) = -e^t + e^{2t}.
Thus
φ₁(t) = ( (1 + t)e^{2t},  -e^t + (1 + t)e^{2t},  -e^t + e^{2t} )ᵀ.
Next, taking η₁ = 0, η₂ = 1, η₃ = 0, we obtain
Y₁(s) = -1/(s - 2)²  and  y₁(t) = -te^{2t};
Y₂(s) = (s² - 5s + 5)/[(s - 1)(s - 2)²] = A/(s - 1) + B/(s - 2) + C/(s - 2)²,
with
s² - 5s + 5 = A(s - 2)² + B(s - 1)(s - 2) + C(s - 1),
which implies 1 = A, A + B = 1, so that B = 0, C = -1. Thus
Y₂(s) = 1/(s - 1) - 1/(s - 2)²  and  y₂(t) = e^t - te^{2t};
Y₃(s) = -1/[(s - 1)(s - 2)] = 1/(s - 1) - 1/(s - 2)  and  y₃(t) = e^t - e^{2t}.
Thus
φ₂(t) = ( -te^{2t},  e^t - te^{2t},  e^t - e^{2t} )ᵀ.
Finally, taking η₁ = 0, η₂ = 0, η₃ = 1, we obtain
Y₁(s) = Y₂(s) = 1/(s - 2)²,  Y₃(s) = 1/(s - 2),  and  y₁(t) = y₂(t) = te^{2t},  y₃(t) = e^{2t}.
Thus,
φ₃(t) = ( te^{2t},  te^{2t},  e^{2t} )ᵀ,
and a fundamental matrix with Φ(0) = I is
Φ(t) = [ (1 + t)e^{2t}         -te^{2t}        te^{2t} ]
       [ -e^t + (1 + t)e^{2t}   e^t - te^{2t}  te^{2t} ]
       [ -e^t + e^{2t}          e^t - e^{2t}   e^{2t}  ].
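Since the fundamental matrix with Φ(0) = I equals exp(At), the outcome of Example 4 can be cross-checked against the power series for the matrix exponential (Appendix 4). A sketch, with A as we read it from the (partly garbled) display:

```python
import math

# Compare the fundamental matrix of Example 4 with exp(At) from its series.
A = [[3.0, -1.0, 1.0],
     [2.0,  0.0, 1.0],
     [1.0, -1.0, 2.0]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exp_At(t, terms=60):
    # Partial sums of I + tA + (tA)^2/2! + ...
    result = [[float(i == j) for j in range(3)] for i in range(3)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, [[a * t / k for a in row] for row in A])
        result = [[result[i][j] + power[i][j] for j in range(3)]
                  for i in range(3)]
    return result

def phi(t):
    # Fundamental matrix found in Example 4 (Phi(0) = I).
    e1, e2 = math.exp(t), math.exp(2 * t)
    return [[(1 + t) * e2,       -t * e2,       t * e2],
            [-e1 + (1 + t) * e2,  e1 - t * e2,  t * e2],
            [-e1 + e2,            e1 - e2,      e2]]

t = 0.5
diff = max(abs(phi(t)[i][j] - exp_At(t)[i][j])
           for i in range(3) for j in range(3))
```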
Exercises
Find the solution (φ₁, φ₂) of each of the following systems of equations which satisfies
the given initial conditions:
3. yity2=0, $1 (0)=1, 62(0)=0
Vive =
m₁y₁'' + k₁y₁ - k₂(y₂ - y₁) = 0
f(t)                      F(s)                                              Formula

1                         1/s                                               (10.2)
e^{zt}                    1/(s - z)                                         (10.3)
cos βt                    s/(s² + β²)                                       (10.7)
sin βt                    β/(s² + β²)                                       (10.7)
t^k                       k!/s^{k+1}                                        (10.11)*
t^k e^{zt}/k!             1/(s - z)^{k+1}                                   (10.12)
t^k f(t)                  (-1)^k F^{(k)}(s)                                 (10.10)
t^k cos βt                ℜ{k!(s + iβ)^{k+1}/(s² + β²)^{k+1}}†              (10.14)
t^k sin βt                ℑ{k!(s + iβ)^{k+1}/(s² + β²)^{k+1}}†              (10.14)
t^k e^{αt} cos βt         ℜ{k![(s - α) + iβ]^{k+1}/[(s - α)² + β²]^{k+1}}†  (10.13)
t^k e^{αt} sin βt         ℑ{k![(s - α) + iβ]^{k+1}/[(s - α)² + β²]^{k+1}}†  (10.13)
e^{αt} f(t)               F(s - α)                                          Theorem 3, Section 10.2
f'(t)                     sF(s) - f(0)                                      (10.15)
f^{(j)}(t)                s^j F(s) - s^{j-1} f(0) - ··· - f^{(j-1)}(0)      (10.16)

† For the purpose of carrying out this calculation, assume that s is real, and then
let s be complex again after obtaining the answer.
* Formulas involving k! are valid when k is not an integer if k! is replaced by Γ(k + 1).
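Any entry of the table can be spot-checked by numerically integrating the defining integral; for instance, the entry for t^k e^{zt}/k! (formula (10.12)), with illustrative values of k, z, s of our own choosing:

```python
import math

# Midpoint-rule approximation of the Laplace transform, compared with
# the table entry 1/(s - z)^{k+1} for f(t) = t^k e^{zt}/k!.
def laplace(f, s, upper=40.0, n=100000):
    h = upper / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h) * h
               for i in range(n))

k, z, s = 2, -1.0, 1.5
num = laplace(lambda t: t ** k * math.exp(z * t) / math.factorial(k), s)
exact = 1.0 / (s - z) ** (k + 1)
```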
APPENDIX 1
We consider the linear system of n equations in n unknowns w₁, w₂, ..., wₙ:
a₁₁w₁ + a₁₂w₂ + ··· + a₁ₙwₙ = c₁
a₂₁w₁ + a₂₂w₂ + ··· + a₂ₙwₙ = c₂
. . . . . . . . . . . . . . .
aₙ₁w₁ + aₙ₂w₂ + ··· + aₙₙwₙ = cₙ.   (1)
The determinant of coefficients is
Δ = Σ ± a_{1i₁} a_{2i₂} ··· a_{ni_n},
where the sum is taken over all indices i₁, ..., iₙ such that i₁, i₂, ..., iₙ is a
permutation of the numbers 1, 2, ..., n, and where the + sign is used if the
permutation is even and the - sign is used when the permutation is odd.
Thus, if n = 2, Δ = a₁₁a₂₂ - a₁₂a₂₁.
For n = 2 the homogeneous system is
a₁₁w₁ + a₁₂w₂ = 0
a₂₁w₁ + a₂₂w₂ = 0,
with Δ = a₁₁a₂₂ - a₁₂a₂₁ = 0. If a₁₁ ≠ 0, take w₂ = 1; then w₁ = -a₁₂/a₁₁
from the first equation. Moreover, substituting these values in the second
equation, we obtain
a₂₁(-a₁₂/a₁₁) + a₂₂ = (1/a₁₁)(a₁₁a₂₂ - a₂₁a₁₂) = 0.
Thus, if a₁₁ ≠ 0, w₁ = -a₁₂/a₁₁ and w₂ = 1 is a solution with w₁, w₂ not both
zero.
Similarly, if a₁₁ = 0 and a₂₁ ≠ 0, then w₁ = -(a₂₂/a₂₁), w₂ = 1 is a solution, and
if a₁₁ and a₂₁ are both zero, w₁ = 1, w₂ = 0 is a solution, which takes care of
all the possibilities and completes the proof. ∎
We now turn to the full system (1). The basic result is the following.
Theorem 2. (Cramer's rule.) If the determinant Δ of coefficients of (1) is
not zero, then the system (1) has a unique solution w₁, ..., wₙ. This solution is
given by
wₖ = Δₖ/Δ   (k = 1, ..., n),
where Δₖ is the determinant obtained from Δ by replacing its kth column by c₁, ..., cₙ.
Proof. We again give the proof for n = 2 only, and we suppose first that there
exist w₁, w₂ such that
a₁₁w₁ + a₁₂w₂ = c₁   (2)
a₂₁w₁ + a₂₂w₂ = c₂.
Then we multiply the first equation by a₂₂ and the second by -a₁₂ and add,
obtaining
w₁Δ = c₁a₂₂ - a₁₂c₂ = det[c₁, a₁₂; c₂, a₂₂] = Δ₁.
Similarly, we multiply the first equation by -a₂₁, the second by a₁₁, and
add, obtaining
w₂Δ = -a₂₁c₁ + a₁₁c₂ = det[a₁₁, c₁; a₂₁, c₂] = Δ₂.
Since Δ ≠ 0, it must be the case that w₁ = Δ₁/Δ and w₂ = Δ₂/Δ. If we now define
the numbers w₁ and w₂ by these relations, we see by direct substitution that
they satisfy (2). This completes the proof. ∎
Combining Theorems 1 and 2, we easily obtain the following result.
Theorem 3. The linear system (1) of n equations in n unknowns has a unique
solution if and only if the determinant of coefficients Δ is not zero.
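A minimal sketch of Cramer's rule for a small system, with the determinant computed by cofactor expansion along the first row:

```python
# Cramer's rule (Theorem 2): w_k = det(A_k) / det(A), where A_k is A
# with its k-th column replaced by the right-hand side c.
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, c):
    d = det(A)                       # assumed nonzero (Theorem 2)
    return [det([row[:k] + [c[i]] + row[k + 1:] for i, row in enumerate(A)]) / d
            for k in range(len(A))]

w = cramer([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])   # expect w = [1, 3]
```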
APPENDIX 2
Polynomial Equations
Let
pₙ(z) = a₀zⁿ + a₁zⁿ⁻¹ + ··· + aₙ   (1)
be a polynomial of degree n ≥ 1 with real or complex coefficients. Let
z = b be a root of the equation pₙ(z) = 0.
Theorem 1. If z = b is a root of multiplicity k of the equation pₙ(z) = 0, then
pₙ(b) = pₙ'(b) = ··· = pₙ^{(k-1)}(b) = 0,  but  pₙ^{(k)}(b) ≠ 0.
Proof. Since z = b is a root of multiplicity k, we may write pₙ(z) = (z - b)ᵏ q(z),
where q is a polynomial with q(b) ≠ 0; the assertion follows by differentiating
this product repeatedly and evaluating at z = b.
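A numerical illustration of Theorem 1, with an example polynomial of our own choosing (not from the text):

```python
# p(z) = (z - 2)^3 (z + 1) has a root of multiplicity k = 3 at b = 2,
# so p, p', p'' vanish at 2 while p''' does not.
def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def derivative(p):                   # coefficients in increasing powers
    return [i * a for i, a in enumerate(p)][1:]

def evaluate(p, x):
    return sum(a * x ** i for i, a in enumerate(p))

factor = [-2.0, 1.0]                 # z - 2
p = poly_mul(poly_mul(poly_mul(factor, factor), factor), [1.0, 1.0])
values = []
for _ in range(4):                   # p(2), p'(2), p''(2), p'''(2)
    values.append(evaluate(p, 2.0))
    p = derivative(p)
```

Here p'''(2) = 3! · q(2) = 18, consistent with the proof's factorization.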
APPENDIX 3
Complex Numbers and Complex-Valued Functions
We assume that the reader is familiar with the definition of a complex number
z as an ordered pair (x, y) of real numbers, with the definitions of addition
1. Using Euler's identity as a definition, prove that e^{z₁+z₂} = e^{z₁} e^{z₂}, where z₁, z₂ are
given complex numbers.
2. For each of the following functions find the functions u = ℜf, v = ℑf:
a) f(t) = e^{(2+3i)t}, t real
°) f()=(2+
Seen8? ae
| AeSit odie
yen
If f, g are complex-valued functions defined on some interval I, their
sum (f + g), product (fg), and quotient (f/g) (where g(t) ≠ 0) are defined respectively by
the relations (f + g)(t) = f(t) + g(t), (fg)(t) = f(t)·g(t), (f/g)(t) = f(t)/g(t), exactly
as in the real case. It should not come as a surprise that the definitions of limit,
continuity, derivative, and integral for a complex-valued function f, and
hence the calculus of such functions, are completely analogous to the real
case. We shall summarize the appropriate definitions and properties.
Definition. A function f is said to approach a (possibly complex) number L
as t approaches some number t₀ (written lim_{t→t₀} f(t) = L) if and only if
lim_{t→t₀} |f(t) - L| = 0 (where |f(t) - L| denotes absolute value).
Exercise
3. Give an ε–δ definition of the statement lim_{t→t₀} f(t) = L.
It is clear that all properties of limits which hold in the real case hold in
the present case as well. An important consequence of this definition is the
following property, which does not arise in the strictly real case, but which is
very easy to prove:
exists. We define the value of this limit to be the derivative of f at t = t₀, and
we denote the derivative of f at t = t₀ by f'(t₀). More useful from the
computational point of view is the following equivalent definition (the
equivalence is established by the above lemma): f is differentiable at t = t₀
if and only if the real functions u = ℜf, v = ℑf are differentiable at t = t₀;
the derivative of f at t = t₀ is given by
f'(t₀) = u'(t₀) + iv'(t₀).
Example 2. Prove that for every complex number z = α + iβ, and for all t, (e^{zt})' = ze^{zt}
(same formula as in the real case).
By Euler's formula, e^{zt} = e^{αt} cos βt + ie^{αt} sin βt. Since the real functions e^{αt} cos βt,
e^{αt} sin βt are differentiable for all t, we have by the above equivalent definition of
derivative
(e^{zt})' = (e^{αt} cos βt)' + i(e^{αt} sin βt)'
= αe^{αt} cos βt - βe^{αt} sin βt + i(αe^{αt} sin βt + βe^{αt} cos βt)
= (α + iβ) e^{αt} cos βt + (-β + iα) e^{αt} sin βt
= (α + iβ)[e^{αt} cos βt + ie^{αt} sin βt] = ze^{zt}.
It is clear that all properties and rules of differentiation familiar from the
study of real functions hold in the case of complex-valued functions. This
includes, of course, derivatives of higher order.
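A quick finite-difference check of Example 2's formula (e^{zt})' = ze^{zt}, with an arbitrary complex z (illustrative, not part of the text):

```python
import cmath

# Central finite difference vs. the exact derivative z e^{zt}.
z, t, h = 1.0 + 2.0j, 0.7, 1e-6
numeric = (cmath.exp(z * (t + h)) - cmath.exp(z * (t - h))) / (2 * h)
exact = z * cmath.exp(z * t)
error = abs(numeric - exact)
```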
Finally, we say that a complex function f, whose domain is the real interval
a ≤ t ≤ b, is integrable on [a, b] if and only if both real functions u = ℜf,
v = ℑf are integrable on [a, b], and in this case we define
∫ₐᵇ f(t) dt = ∫ₐᵇ u(t) dt + i ∫ₐᵇ v(t) dt.
Example 3
∫₀^{π/2} e^{it} dt = ∫₀^{π/2} cos t dt + i ∫₀^{π/2} sin t dt = [sin t - i cos t]_{t=0}^{t=π/2} = 1 + i.
APPENDIX 4
The Exponential Matrix
(i) ‖A + B‖ ≤ ‖A‖ + ‖B‖
(ii) ‖AB‖ ≤ ‖A‖ ‖B‖
for matrices A, B of complex numbers. The above norm is convenient for
our purposes; other matrix norms satisfying the properties (i) and (ii) are
possible.
Exercise
We now use the matrix norm (1) to define the concept of convergence of a
sequence of matrices.
Definition. The sequence {A^{(k)}} converges to the matrix A, where A^{(k)} and
A are n × n matrices, if and only if the sequence of real numbers {‖A - A^{(k)}‖} has
limit zero, and in this case we write
lim_{k→∞} A^{(k)} = A.
Clearly, because of the definition of the norm, this means that {A^{(k)}} → A
if and only if the sequence {a_{ij}^{(k)}} of complex numbers, representing the
element in the ith row and jth column of the matrix A^{(k)}, converges to the
element a_{ij} of the matrix A as k → ∞, for each of the n² elements (i, j = 1, ..., n).
To prove this, note that |a_{ij}^{(k)} - a_{ij}| ≤ ‖A^{(k)} - A‖ for i, j = 1, ..., n.
A matrix function A(t) is a correspondence that assigns to each point t of an
interval I one and only one n × n matrix A(t). Using the remark following the
definition of convergence of a sequence of matrices, we see that it is consistent
to say that a matrix function A(t) is continuous, differentiable, or integrable
on an interval I if and only if each of its n² elements a_{ij}(t) is continuous,
differentiable, or integrable, respectively, on I. We say that a series
Σ_{k=0}^∞ U_k of matrices converges if and only if the sequence {Σ_{k=0}^m U_k} of
partial sums converges. The limit of this sequence of partial sums is called the
sum of the series.
Combining the definition of convergence of a sequence of matrices with
the Cauchy criterion for sequences of real or complex numbers, we can es-
tablish the following result:
Lemma 1. A sequence {Aₖ} of matrices converges if and only if, given a
number ε > 0, there exists an integer N = N(ε) > 0 such that ‖Aₘ - Aₚ‖ < ε
whenever m, p > N.
We are now ready to prove that the definition
exp M = I + M/1! + M²/2! + ··· + Mᵏ/k! + ···
makes sense for every n × n matrix M, as follows. We define the partial sums
Sₘ = I + M/1! + M²/2! + ··· + Mᵐ/m!.   (2)
For m > p, properties (i) and (ii) of the norm give
‖Sₘ - Sₚ‖ ≤ Σ_{k=p+1}^{m} ‖M‖ᵏ/k!.   (3)
The right-hand side of (3) is the difference of two partial sums of a series of
positive real numbers that is known to converge (to e^{‖M‖}). By the Cauchy
criterion for series of real numbers, we see that given ε > 0 we can choose an
integer N > 0 such that the right-hand side of (3) is less than ε whenever
m, p > N. By (3),
‖Sₘ - Sₚ‖ < ε  for m, p > N.
By Lemma 1, the series converges, and thus exp M is well defined for every
n × n matrix M.
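A sketch of this construction: the partial sums Sₘ, an entrywise absolute-sum norm (which satisfies (i) and (ii)), and the tail bound (3). The matrix M is an illustrative choice of ours; for it, exp M is a rotation by 1 radian:

```python
import math

M = [[0.0, 1.0], [-1.0, 0.0]]

def norm(A):
    # Sum of absolute values of the entries; satisfies (i) and (ii).
    return sum(abs(a) for row in A for a in row)

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def partial_sum(m):
    # S_m = I + M/1! + ... + M^m/m!
    n = len(M)
    S = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in S]
    for k in range(1, m + 1):
        term = mat_mul(term, [[a / k for a in row] for row in M])
        S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return S

p, m = 5, 12
Sp, Sm = partial_sum(p), partial_sum(m)
diff = [[Sm[i][j] - Sp[i][j] for j in range(2)] for i in range(2)]
tail_bound = sum(norm(M) ** k / math.factorial(k) for k in range(p + 1, m + 1))
within_bound = norm(diff) <= tail_bound + 1e-12   # inequality (3)
```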
APPENDIX 5
Generalized Eigenvectors, Invariant Subspaces, and Canonical Forms of Matrices
Exercise
1. Let X and Y be subspaces of a vector space V. Show that X + Y and X ∩ Y are also
subspaces of V.
Exercise
[Hint: Begin with a basis for X ∩ Y and extend it to a basis for X and a basis for Y.]
b) Deduce that the sum X + Y is direct if and only if
dim(X + Y) = dim X + dim Y.
We now extend the definitions of vector sums, intersections, and direct
sums to any finite number of subspaces. Let X₁, X₂, ..., X_k be subspaces of a
vector space V. The sum
Σ_{i=1}^{k} X_i
of the subspaces X₁, X₂, ..., X_k is the set of all vectors v ∈ V of the form
v = Σ_{i=1}^{k} x_i,  where x_i ∈ X_i.
The intersection
∩_{i=1}^{k} X_i
of the subspaces X₁, X₂, ..., X_k is the set of all vectors v ∈ V which lie in each
of the subspaces X₁, X₂, ..., X_k.
Definition 6. The sum
if and only if
Exercises
v=> AG:
i=2
is a direct sum.
4. Show that a vector space V is a direct sum of subspaces X₁, X₂, ..., X_k,
V = ⊕_{i=1}^{k} X_i,
Exercise
6. Show that (i) A ~ A, (ii) A~ B implies B~ A, (iii) A~ B, B~ C implies A~C.
= P⁻¹A[v₁, ..., vₙ]
= P⁻¹[Av₁, ..., Avₙ]
= P⁻¹[λ₁v₁, ..., λₙvₙ].
The converse of Theorem 1 is also true:
suppose that P⁻¹AP is the diagonal matrix with diagonal elements λ₁, ..., λₙ.
Since P is nonsingular, the columns of P are linearly independent. Suppose
that
P = [v₁, v₂, ..., vₙ].
The reader will observe that both A and P⁻¹AP in Theorem 1 have the
same eigenvalues λ₁, ..., λₙ. This suggests the following result.
Theorem 3. If A, B are complex n × n matrices and if A and B are similar,
then A and B have the same characteristic polynomials and hence also the same
eigenvalues.
Proof. Since A and B are similar, there exists a nonsingular matrix P such
that B = P⁻¹AP. We compute
det(λI - B) = det(λI - P⁻¹AP) = det(λP⁻¹P - P⁻¹AP)
= det P⁻¹(λI - A)P   (2)
= det P⁻¹ det(λI - A) det P = det(λI - A),
where we have used several properties of determinants. Since the eigen-
values of A are the roots of the equation det(λI - A) = 0 and the eigenvalues
of B are the roots of det(λI - B) = 0, the result follows from (2). ∎
Corollary. Let v be an eigenvector of the complex n × n matrix A correspond-
ing to the eigenvalue λ. Let P be a nonsingular n × n matrix. Then P⁻¹v is an
eigenvector of P⁻¹AP corresponding to the eigenvalue λ.
Proof. Since Av = λv, we have
(P⁻¹AP)(P⁻¹v) = P⁻¹Av = P⁻¹(λv) = λ(P⁻¹v). ∎
Exercise
ie) it il
4-(q | and B=|, |
“(}
We compute that the eigenvalues of A are 3 ± 5i, and that
P⁻¹AP = [ 3 + 5i    0    ]
        [   0     3 - 5i ].
The reader will note that the diagonal elements of P⁻¹AP are precisely the eigenvalues
of A, in accordance with Theorem 1.
Slat
4 3
“() =
Thus λ = 3 and λ = 1 are eigenvalues. Corresponding eigenvectors are u and v,
respectively. Observe that both the eigenvalues and the corresponding eigenvectors are
real. Moreover, the vectors u and v form a basis for two-dimensional Euclidean space.
Clearly,
P = [u, v].
Exercise
8. Prove that the matrices
i @ i i
4-(q if B-| |
are not similar.
To discuss the situation when there are fewer than n linearly independent
eigenvectors, we introduce the concept of a generalized eigenvector: if for some
value λ and some p ≥ 1, there is a vector v ≠ 0 such that
(A - λI)ᵖ v = 0,
then v is called a generalized eigenvector corresponding to λ.
Suppose v₁, ..., vₙ is a basis whose first k elements span a subspace invariant
under T and whose remaining elements span an invariant complement. Then
Tvⱼ = Σ_{i=1}^{k} a_{ij} v_i   (j = 1, ..., k).
Note that the sum is from i = 1 to i = k, not to i = n. Similarly,
Tvⱼ = Σ_{i=k+1}^{n} a_{ij} v_i   (j = k + 1, ..., n).
Comparing these formulas to (1), we see that the matrix A of T with respect
to the basis v₁, ..., vₙ is a "block diagonal" matrix,
A = diag(A₁, A₂, ..., A_k),
where Aⱼ is an nⱼ × nⱼ matrix that represents T on Xⱼ for j = 1, ..., k.
Corollary 2 to Theorem 4. If A is a complex n × n matrix, then there exists
a nonsingular matrix P such that P⁻¹AP has the block diagonal form given in
Corollary 1.
The Jordan canonical form of a matrix is obtained from the above repre-
sentation by choosing bases of the subspaces X₁, ..., X_k in a suitable manner.
This requires a careful study of nilpotent transformations. A linear transfor-
mation L such that Lʳ = 0 but Lʳ⁻¹ ≠ 0 is said to be nilpotent of index r.
We recall that the subspace Xⱼ is the null space of the transformation repre-
sented by the matrix (A - λⱼI)^{rⱼ}, where rⱼ is the largest index of the general-
ized eigenvectors corresponding to λⱼ. Since Xⱼ is invariant under (A - λⱼI),
we may regard (A - λⱼI) as a linear transformation on Xⱼ that is nilpotent of
index rⱼ.
Let L be a nilpotent linear transformation of index r on a vector space X
of dimension n. Then there is a vector u such that Lʳu = 0 but Lʳ⁻¹u ≠ 0.
We apply L^{r₂-1} to both sides of this equation and use Lʳu = L^{r₂}v = 0 to see that
0 = L^{r₂-1}v = Σ_{j=0}^{r₂-1} cⱼ L^{r-r₂+j} u.
Since L^{r-r₂}u, L^{r-r₂+1}u, ..., L^{r-1}u are linearly independent, cⱼ = 0 for j = 0, 1,
..., (r₂ - 1). Thus, (4) becomes
Ret
We define
r=
Hy) cea:
Aiea
Since each Lᵏ⁺ʲu lies in U₁ but Lᵏv is outside U₁, Lᵏu₂ is outside U₁ for k = 0, 1,
..., (r₂ - 1). Thus, every nonzero linear combination of u₂, Lu₂, ..., L^{r₂-1}u₂
is outside U₁. Let U₂ be the subspace of X spanned by u₂, Lu₂, ..., L^{r₂-1}u₂;
then U₁ and U₂ are disjoint. The direct sum U₁ ⊕ U₂ is invariant under L.
Exercise
with
Petgeto sare) “and “EL a-0) (k=dy2,
1),
To construct the matrix B that represents L with respect to the basis given
by Theorem 5, we denote the basis elements respectively by v₁, v₂, ..., vₙ.
Then we have
Lv₁ = 0, Lv₂ = v₁, ..., Lv_{r₁} = v_{r₁-1};
Lv_{r₁+1} = 0, Lv_{r₁+2} = v_{r₁+1}, ..., Lv_{r₁+r₂} = v_{r₁+r₂-1};  ... .
From the definition (1) of the matrix of a linear transformation with respect to
a given basis, we see that B has the form
B = diag(B₁, B₂, ..., B_s),   (6)
where Bⱼ is the rⱼ × rⱼ matrix with 1's immediately above the main diagonal and
zeros elsewhere:
Bⱼ = [ 0 1 0 ··· 0 ]
     [ 0 0 1 ··· 0 ]
     [ ············ ]
     [ 0 0 0 ··· 1 ]
     [ 0 0 0 ··· 0 ].
For the original transformation T, on each subspace Xⱼ we have
T = λⱼI + (T - λⱼI), with T - λⱼI nilpotent, so the corresponding blocks have
the form
Cⱼ = λⱼI + Bⱼ,
that is, λⱼ on the main diagonal, 1's immediately above it, and zeros elsewhere.
This gives the following important result.
Theorem 6. (Jordan Canonical Form.) Let T be a linear transformation
of complex n-dimensional space with eigenvalues λ₁, ..., λ_k of multiplicities
n₁, ..., n_k, respectively. Then there exists a basis of complex n-dimensional
space relative to which T is represented by a Jordan canonical matrix
A = diag(A₁, A₂, ..., A_k).
Here Aⱼ is an nⱼ × nⱼ matrix that has all diagonal elements equal to λⱼ, and that
has chains of 1's separated by single 0's immediately above the main diagonal,
and all other elements zero.
Corollary. Every matrix is similar to a Jordan canonical matrix.
We remark that the length of the chains of 1's in Aⱼ depends on the
integers r₁, ..., r_s in Theorem 5. It can also be shown that, except for the
order of the blocks Aⱼ, the Jordan canonical form is unique.
Bibliography
1. G. Birkhoff and G. C. Rota, Ordinary Differential Equations, 2nd ed. (Ginn, Boston, 1961).
2. F. Brauer and J. A. Nohel, Qualitative Theory of Ordinary Differential Equations (Benjamin, New York, 1969).
3. F. Brauer, J. A. Nohel, and H. Schneider, Linear Mathematics (Benjamin, New York, 1970).
4. H. Bremermann, Distributions, Complex Variables, and Fourier Transforms (Addison-Wesley, Reading, Mass., 1965).
5. N. de Bruijn, Asymptotic Methods in Analysis (North-Holland, Amsterdam, 1958).
6. R. C. Buck and E. F. Buck, Advanced Calculus, 2nd ed. (McGraw-Hill, New York, 1965).
7. E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations (McGraw-Hill, New York, 1955).
8. C. A. Desoer and L. A. Zadeh, Linear Systems Theory (McGraw-Hill, New York, 1963).
9. A. Erdélyi, Asymptotic Expansions (Dover, New York, 1956).
10. P. Henrici, Discrete Variable Methods in Ordinary Differential Equations (Wiley, New York, 1962).
11. F. B. Hildebrand, Advanced Calculus for Applications (Prentice-Hall, Englewood Cliffs, N. J., 1962).
12. F. B. Hildebrand, Introduction to Numerical Analysis (McGraw-Hill, New York, 1956).
13. E. Jahnke and F. Emde, Tables of Functions with Formulae and Curves (translation) (Dover, New York, 1945).
14. W. Magnus and F. Oberhettinger, Formulas and Theorems for the Special Functions of Mathematical Physics, 3rd ed. (translation) (Springer, Berlin, 1966).
15. B. Noble, Numerical Methods, Vol. 2: Differences, Integration, and Differential Equations (Oliver & Boyd, Edinburgh, 1964).
16. W. Rudin, Principles of Mathematical Analysis, 2nd ed. (McGraw-Hill, New York, 1964).
17. J. Todd, A Survey of Numerical Analysis (McGraw-Hill, New York, 1962).
18. W. Wasow, Asymptotic Expansions for Ordinary Differential Equations (Wiley, New York, 1966).
19. E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, 4th ed. (Cambridge Univ. Press, Cambridge, 1927).
20. D. V. Widder, The Laplace Transform (Princeton Univ. Press, Princeton, N. J., 1946).
21. K. Yosida, Lectures on Differential and Integral Equations (Wiley [Interscience], New York, 1960).
Answers to Selected Exercises
1.1 Sloss
30 log(100
log 2
10(log 80—log 5). log 2
min k=
log 2 10
DISS
0.2 min
95.3 ounces
1.2
FSF
CO
FK
re
NOOO
1l+e"?
f(t₀, y₀) = 0 is necessary. Consider φ'' and note that φ''(t₀) = f_t(t₀, y₀) if
f(t₀, y₀) = 0.
Where φ(t) = -t: max if t < 0, min if t > 0.
1.3 φ(0)=1
(0, —1)
The (t, y) plane
Must be real-valued
Integral doesn't converge; consider φ(t)=0.
0
JP=8+y2
Clay wee.
At —tot+Yo
Yot(to+ 1)
VYo(t—to) +to(t+ 1)
1
log [log (1 +e’) —log 2+]
=>a
9 VomA a= 6381+2>
xe
Wis 2 ett Sa
to3)/3
1
Yotl
tan (t+ 7/4)
1+?
1
log
(1 —t)
sin” '(t? + 7/2)
a) y=kx
b) y7=x-+k
c) 2x*+y?=k?
d) y?=4cx
10e~ 3 kg
w(x)=KL—L,./K? —2Kx
[yi +(t—t,)]°
[yd/?+(t—to)]?
f,(t), «0 where fp (t)=0,
ae
L=|4 t—a),t>o (a>0)
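Several of the Section 1.3 answers can be verified by differentiation. A sketch checking the answer tan(t + π/4) above, assuming (inferred from the nearby answers φ(0) = 1 and the separable equations of this section) that it solves y′ = 1 + y², y(0) = 1:

```python
import math

# Assumed problem: y' = 1 + y^2, y(0) = 1, with answer phi(t) = tan(t + pi/4).
def phi(t):
    return math.tan(t + math.pi / 4)

def residual(t, h=1e-6):
    # central-difference estimate of phi'(t), minus the right side 1 + phi^2
    dphi = (phi(t + h) - phi(t - h)) / (2 * h)
    return dphi - (1 + phi(t) ** 2)

print(round(phi(0.0), 12))  # 1.0, matching the initial condition
print(all(abs(residual(t)) < 1e-4 for t in [-0.3, 0.0, 0.2]))  # True
```

The identity d/dt tan u = 1 + tan²u makes the same point analytically.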
1.4 tel 4277!
b/p + (y₀ − b/p)e^{−p(t−t₀)}
Ap TE tpg
t/2—5/2t
(Aj 5— 8) St
(1/t²)(t − t₀ + t₀y₀)
1
Wd
wW
2YADA1
3 sin t—
.
2 sint
t + (1/t)[log(t/t₀) − t₀ + t₀y₀]
1
10. Aeris. 1)
t ly 1
iL.
Rie ore
i il
(Ol 3 9 |ila
e)
ieee
14. (t+l)e*
16. e(2t—2)+e'(2+a)
a
17. tan t + a/cos t
18. e(1+a)-1
19. All solutions tend to zero if λ < 0; all solutions are constants if λ = 0; all solutions grow exponentially if λ > 0.
20. The limits are as follows.
a) 0
b) co
ei l/2
d) 0
e) 0
22. (1—t—e™'*')"}
24, 27 9
27. limit is b/a
elec 3
31. 3—-21.9 kg
(2
1.6 1. c) No
2. b) Is not a region; all others are regions
4. a) (b) continuous
c) f continuous, g not continuous
0, yes
D is whole plane
ies
GO
A
a) i) Yes ii) Yes
b) Theorem does not apply
ty Al}
2.1 2. s(t)=
cos0
3. x(t) = (v₀ cos θ)t + x₀
2.2 T = mg cos θ + ml(dθ/dt)²
y(t) = [(3/2)√(2g) Rt + R^{3/2}]^{2/3}
y(t) = ½(e^{2t} + e^{−2t})
y(t) = ½(e^t − e^{−t})
Applicable
y(t)=e"
y(t) = (1 − t)^{−1}
[Note that the hypotheses of the uniqueness theorem are not satisfied.]
2.5 a) c₁ = −1, c₂ = 1
a) All (t, y, y′)
d) {(t, y, y'): v4, y’ #0}
n) All (t, y, y′)
3.1 There is a unique solution of the initial-value problem on any interval not containing −1 or 0.
3.2 a) t < 0, 0 < t
c) t > 0
e) All t
3.3 a) Dependent
c) Independent
e) Independent
f) Dependent
h) Independent
0
) Pi (to)=62 (to) =9
A re linearly independent
3.4
sin t + cos t
cos 2t + ½ sin 2t
11. c) c₁ sin 3t + c₂ cos 3t
d) c₁ exp[(−1 + √2 i)t] + c₂ exp[(−1 − √2 i)t]
13)
1e(as0 Bron)
h) (cp cst)ie "*
(C,il (0k) ——=
b) a,d
16. a) A sin(√2 t + α)
13 a:
b) sin | 3t + arcsin ——
3 13
By) S) Dine
3.5 1.2) c.e'+(c cos vee sin ew
c) (c₁ + c₂t) cos t + (c₃ + c₄t) sin t
e) c₁ sin t + c₂ cos t + exp(√3 t/2)(c₃ sin(t/2) + c₄ cos(t/2)) + exp(−√3 t/2)(c₅ sin(t/2) + c₆ cos(t/2))
3. a) cy +cot+ a +c¢,¢3
oF :We\+C,4 COS.‘|a )
33 BHO \
c) e, exp 7 —At+c, xp-w At
3.6 oy fave
t+ i
c) t/2 log at —1
4. c₁t + c₂t² + c₃t^{−1} (t > 0)
7. u″ + [1 − (n² − 1/4)/t²]u = 0
e) cye*+c,e° 7°2t
+4176 —zkte!
a) c₁ cos t + c₂ sin t − t sin t − cos t log|sin t|
b) c₁ cos(2 log|t|) + c₂ sin(2 log|t|) + (1/3) sin log|t|
d) cye**+c,te*
+4e!
f) cyt? +c¢5t? log|t|+4
h) c₁t[cos log|t| + i sin log|t|] + c₂t^{−1}[cos log|t| − i sin log|t|]
j) −log|cos t| − sin t log|sec t + tan t| + c₁ cos t + c₂ sin t + c₃
5
(#) . /10
2D) b) =
—(R/2L)t
it R2 = Ps ; 1 R2 1/2
A(k2—k2) akA
3.8
(BRP +a ia wern ag
Amplitude is A[(k₀² − k²)² + a²k²]^{−1/2}
4.1
sol} ok -[] =—sint cost
—cost —sint
2,
0 1 OE 0
i ||/-7 0 a6" F=85 é
14. a) w= ome fo etl oO
1S 0 2 —13 cost
1
»(0)=| 6
1
4.2 1. a) Unique solution on 0 < t < ∞
b) Unique solution on 0 < t < 3
4. a) Unique solution on −π/2 < t < π/2
c) c₁ cos 2t + c₂ sin 2t − ½ cos 2t ∫_{t₀}^{t} f(s) sin 2s ds + ½ sin 2t ∫_{t₀}^{t} f(s) cos 2s ds
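A numerical spot-check of the variation-of-constants answer above for y″ + 4y = f(t), in the special case f(s) = 1 with t₀ = 0 (f and t₀ are chosen here only for the check). The two integrals then evaluate in closed form and the particular part reduces to y_p(t) = (1 − cos 2t)/4:

```python
import math

# Particular solution from the printed formula with f = 1, t0 = 0:
#   -1/2 cos2t * (1 - cos2t)/2 + 1/2 sin2t * sin2t/2 = (1 - cos 2t)/4
def y_p(t):
    return (1 - math.cos(2 * t)) / 4

def residual(t, h=1e-5):
    # second central difference for y_p'' plus 4*y_p, minus f(t) = 1
    ypp = (y_p(t + h) - 2 * y_p(t) + y_p(t - h)) / (h * h)
    return ypp + 4 * y_p(t) - 1.0

print(all(abs(residual(t)) < 1e-4 for t in [0.1, 0.7, 1.9]))  # True
```

The ½ coefficients come from the Green's function sin 2(t − s)/2 for the operator y″ + 4y.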
=e
Sey
Sil Sea Shit
2
E 2 2
15 wy Eeert
Je.
tana L ee
pee R\~1?2 =(R/2L)t oi 1 EatR12
) (2 *) ‘ sin( x) ?
14. [E₀/(L(ω² − a²))](cos at − cos ωt), where ω² = 1/LC, provided ω ≠ a
4.5 3. yr =3yt—2yiyat 33
Y= —2y{ + 2yiy2—2y2
9. c(−e^{−t}, e^{−t}), c any nonzero constant
Ite 2
5.1 2) a(c | i )
Cy @ ol
Sy? 1.
1
= =t3a| |
te 3
— See, (2+./7)t sa sp Tat
t4c+t4
∫_{t₀}^{t} s^{−4} b(s) ds, where t − t₀ > 0
6.1 3h o(t)=(t—1)+(t—1) +5 |
aes
co 1-3-5.--(2k—3) ;
6.3 5h
a ee ee eee
2(—1)"(2m +1)=a (1—4m) (6—4m)--(4(k=m+4) |ar
ee,
o()=1-a(2+1) =ta(a+1) (0+2)(a+3)-Ae
sspeerfieS EM 9]
7 wai gle(k—-2)k oY
valid
on —o <t<0 or 0<t<a@
eo)
1 ee 4 3
ee ae A ga a ae ie Yas.
2(k—1+z,)
¢,.(z;)= rie 25: k 1(z;)
6.7 3. b) Two solutions valid for 0 < |t| < ∞; the one corresponding to the root
z = 0 of the indicial equation is analytic at t = 0.
c) One solution, analytic for all t.
e) If 2γ is not an integer or zero, two solutions valid for 0 < |t| < ∞; if
2γ is an integer or zero, one solution valid for 0 < |t| < ∞, and if γ = 0
or γ is an integer, this solution is analytic at t = 0.
g) Two solutions valid for 0 < |t| < ∞, neither of which is analytic at t = 0.
00 k=
c) OE 3. ne where co=1, cer = FG)cm
f=:
f) If c is not a negative integer or zero,
1 + (ab)/(1·c) t + [a(a+1)b(b+1)]/[1·2·c(c+1)] t² + ···
= F(a, b, c; t) for |t| < 1.
If c is not a positive integer, there is a second solution
|t|^{1−c} F(a−c+1, b−c+1, 2−c; t), also valid for 0 < |t| < 1. Thus, if c
is not zero or an integer, the general solution is
AF(a, b, c; t) + B|t|^{1−c} F(a−c+1, b−c+1, 2−c; t).
of 1 in each case.
Se
ates
ee
eae au?
f(k+z))
f(z) = z² − z − 2
d ) b=K*
j=l + Yec,t*
=
1-0.
with c,=0,c,=—*“~., =F
Cee
(k ≥ 2), f(z) = z² + 4z + 3;
1 bys 1 —4(2K
+4) Cy44
res
bo =a 0, 3 —
7W
b,=bs=---=0.
a) Indicial equation: z(z—1)+3z=z?—-3z=0;
1 1
a fi
ANemma ee >
Lo@)
+7 i>oYore ire|
1
where Hy Ua 2 see
a al
Y
k) $1 ()=Idl e, 62 (t)=¢1(8)log|t|+1- r=2 ——— t*
(k—1)!
1
where Healt
ool
c₁J_p(kt) + c₂Y_p(kt) if p is an integer or zero.
24. ar
Dis a Je Aes (1)
for)
c₁t²J₂(t) + c₂t²Y₂(t)
c) c₁I_{−1/2}(t) + c₂I_{1/2}(t) = c₁t^{−1/2} cosh t + c₂t^{−1/2} sinh t
d) c₁t^{1/2}J_{−1/2}(t) + c₂t^{1/2}J_{1/2}(t) = c₁ cos t + c₂ sin t,
where
EQ 0ty hy (t)= —n/2[J, (it) +i, (it)]
6.10 6. p(2)=0, p'(0)=2, q(00)=4' (20) = 4" (10) = 4" (wo) =0
a) Z;=a,Z,=5
ee 42k) Eos
d) Cy 2X Pk+ DI! +cot, |t|>1
ef (ee eee
a) $,(t)~ + + |
Miele “Ast 218i)
2()~—— Oo
|1-—+——_-..
12 1232
118¢ 2!(8t)?
7.2 4. c₁ + c₂t
λ_n = (n + ½)²
φ_n(t) = A_n sin(n + ½)t, n = 0, 1, 2, …
t—
4
t
a) i=(¥) » bn (t)= Aysin, ally Pee
The Agk?:
L#k?: $()=
o(t)= — ——
cos ,/At
i k
jo ewe
it (cos ./An—cos kn) +
(Ak?) sin /2n Ws
Panes $e sinkt tsinkt
k LO c arbitrary
COS) 20
Caag te
DICOSty/ 216
7.4 “d—By=1
if sin 2√λ ≠ 0
4 eof esa ELE in fro
,/A cos./AL—2 sin,/AL
f) No solution
g) c cos 2t + (1/3) cos t, c arbitrary
» cos
-
os!1)—1) sint—cost+1
sin |
9. do(t)=1. (= a j!
9.2 2. 1.1459
9.3 2. 1.1463
9.4 Beeyies
9.5 3. a)
) 1.1076
) 10.3238
) 101.0075
) i) 34.411
ii) 59.938
iii) 64.858
b) i) 0.293
ii) 0.3451
iii) 0.350245
iv) 0.350234
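The Chapter 9 answers record successive numerical approximations (Euler, improved Euler, Runge-Kutta). The exercises themselves are not printed here, so as a sketch of how such values arise, here is the forward Euler method on a stand-in problem, y′ = y, y(0) = 1, whose exact value at t = 1 is e:

```python
import math

# Forward Euler: advance y by h * f(t, y) over n equal steps.
def euler(f, t0, y0, t_end, n):
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
print(abs(approx - math.e) < 0.01)  # True; halving h roughly halves the error
```

The successive refinements tabulated above (e.g. 1.1459 then 1.1463) reflect exactly this kind of step-size and method improvement.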
10.2 3:
s?+4p
0:ideee
1
SS
1
R)
aie)
“eran
‘ yk
:
er)
s>0
[1 − 2 exp(−as/2) + exp(−as)] / [s(1 − exp(−as))], Re s > 0
12
2) (s? +o?) (L—e 9”)
1/[(s² + 1)(s + 1)]
ey ga eis
c) $52 (*) ; Ks>0
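The Section 10.2 entries are Laplace transforms computed from the defining integral. A numerical spot-check of one transform of the kind tabulated, assuming the standard entry L{t}(s) = 1/s² for Re s > 0 (the specific exercise transforms above are partly unreadable, so this entry is our assumption):

```python
import math

# Approximate integral_0^inf exp(-s*t) * t dt, truncated to [0, 40],
# with the composite trapezoidal rule.
def laplace_of_t(s, upper=40.0, n=100_000):
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * math.exp(-s * t) * t
    return h * total

print(abs(laplace_of_t(2.0) - 1 / 2.0**2) < 1e-6)  # True
```

The truncation at t = 40 is harmless here because the integrand decays like e^{−st}.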
10.3 He a) —2+3e'+}e°*!
c) 1—cos 2t—t sin2t
t
e) ∫₀ᵗ sin(t − u) f(u) du
cos 2t
10.4 3: @. Ne ge ON en tt — ent)
Se! —e+4he 3t
v3 t
fre — see (os 3t+ V3sin 3 nce)
_ a) —14+2t+2t?7+e°*
b) t? +35t°
c) 1+2te’
d) at
e) 44+317+7,t*
10.5 φ₁(t) = 2t + 1, φ₂(t) = −2t
bade "+90 'eahe™, dall)=Hert—e%)
φ(t) is any function for which φ(0) = 1 and φ′(0) = 0;
Stretching transformation, 276 local, 361, 362, 363, 364, 370, 379-381
Sturm, 299 of Milne method, 363, 375-379, 380, 381
Sturm-Liouville boundary-value problem, of modified Euler method, 370-371
298-303, 329 of predictor, 375
Sturm-Liouville operator, 306 of Runge-Kutta method, 369
Subspace, 135, 174, 197, 199, 201, 436 in Simpson’s rule, 376-378
invariant, 443, 444, 445, 446 for successive approximations, 323
Successive approximations, 317-323, 325, Tuning a radio, 412
326, 327, 328, 339, 347, 350 Two-dimensional systems, 180-192
Superposition, 69, 70, 106, 210 Two-step methods, 368, 381
Systems
of first-order equations, 156 Undetermined coefficients, 106, 111,179, 180
of linear algebraic equations, 170, 393 Uniqueness
418, 424-426 of inverse Laplace transform, 405
of linear differential equations, 114-156 of solutions, 329-333, 339, 348, 349
numerical methods for, 389-391 of solutions of nonhomogeneous
triangular, 122 boundary-value problems, 290, 311
Variables separable, 9, 10, 12, 18, 20, 22, 53, 54-56, 121, 331
Tangent, to a curve, 7
Tangent field, 28 Variation of constants formula, 101, 102
Taylor, formula, 385, 386, 387, 388 148, 149, 151, 152, 154, 177, 228,
Taylor series, 213 289, 307, 343
Taylor’s theorem, 385, 386, 388 Vector algebra, 414
Tension, 45, 279 Vector functions, 116, 118, 134, 161
Transcendental equations, 285, 286, 288 Vector space, 134, 288
Transfer function, 411, 414 axioms for, 134
Transient, 27, 411 basis for, 135, 136
Trapezoidal rule, 363, 372 dimension of, 135, 137
Triangular matrix, 122, 173 finite-dimensional, 135
Triangular system, 122 Vector sum (of subspaces), 436, 437
Trivial solution, of boundary-value Vectors, 115, 136
problems, 280 derivative of, 116
Truncation error, 384 integral of, 117
cumulative, 361, 362, 363, 364, 365, Voltage, 25, 26, 129, 155, 411, 412
379-381
of Euler method, 364, 365 Weber form of second solution of Bessel
of explicit Adams method, 381 equation, 261
of implicit Adams method, 381 Wronskian, 76-79, 84, 86, 97, 143, 294, 311