MATHEMATICAL METHODS OF PHYSICS
Second Edition

JON MATHEWS
R. L. WALKER
California Institute of Technology

Addison-Wesley Publishing Company, Inc.
The Advanced Book Program
Redwood City, California • Menlo Park, California • Reading, Massachusetts
New York • Amsterdam • Don Mills, Ontario • Sydney • Bonn • Madrid
Singapore • Tokyo • San Juan • Wokingham, United Kingdom
Preface
For the last nineteen years a course in mathematical methods in physics has been taught, by the authors and by others, at the California Institute of Technology. It is intended primarily for first-year physics graduate students; in recent years most senior physics undergraduates, as well as graduate students from other departments, have been taking it. This book has evolved from the notes that have been used in this course.

We make little effort to teach physics in this text; the emphasis is on presenting mathematical techniques that have proved to be useful in analyzing problems in physics. We assume that the student has been exposed to the standard undergraduate physics curriculum: mechanics, electricity and magnetism, introductory quantum mechanics, etc., and that we are free to select examples from such areas.
In short, this is a book about mathematics, for physicists. Both motivation and standards are drawn from physics; that is, the choice of subjects is dictated by their usefulness in physics, and the level of rigor is intended to reflect current practice in theoretical physics.
It is assumed that the student has become acquainted with the following mathematical subjects:

1. Simultaneous linear equations and determinants;
2. Vector analysis, including differential operations in curvilinear coordinates;
3. Elementary differential equations;
4. Complex variables, through Cauchy's theorem.
However, these should not be considered as strict prerequisites. On the one hand, it will often profit the student to have had more background, and on the other hand, it should not be too difficult for a student lacking familiarity with one of the above subjects to remedy the defect by extra work and/or outside reading. In fact, the subject of differential equations is discussed in the first chapter, partly to begin on familiar ground and partly in order to treat some topics not normally covered in an elementary course in the subject.
A considerable variation generally exists in the amount of preparation different students have in the theory of functions of a complex variable. For this reason, we usually give a rapid review of the subject before studying contour integration (Chapter 3). Material for such a review is presented in the Appendix. Also, there are some excellent and reasonably brief mathematical books on the subject for the student unfamiliar with this material.
A considerable number of problems are included at the end of each chapter in this book. These form an important part of the course for which this book is designed. The emphasis throughout is on understanding by means of examples.
A few remarks may be made about some more or less unconventional aspects of the book. In the first place, the material which is presented does not necessarily flow in a smooth, logical pattern. Occasionally a new subject is introduced without the student's having been carefully prepared for the blow. This occurs, not necessarily through the irrationality or irascibility of the authors, but because that is the way physics is. Students in theoretical physics often need considerable persuasion before they will plunge into the middle of an unfamiliar subject; this course is intended to give practice and confidence in dealing with problems for which the student's preparation is incomplete.
A related point is that there is considerable deliberate nonuniformity in the depth of presentation. Some subjects are skimmed, while very detailed applications are worked out in other areas. If the course is to give practice in doing physics, the student must be given a chance to gain confidence in his ability to do detailed calculations. On the other hand, this is a text, not a reference work, and the material is intended to be covered fully in a year. It is therefore not possible to go into everything as deeply as one might like.
The plan of the book remains unaltered in this Second Edition; we have rewritten a number of sections, though only one (Section 14-6) is new. We have tried to make clearer certain material which has proved particularly difficult for students, such as the discussion of the Riemann P-symbol in Section 7-3 and the Wiener-Hopf integral equation in Section 11-5.

A number of new problems have been added at the ends of chapters, and several new detailed examples are included, such as the triangular drumhead of Section 8-3 and the recursive calculation of Bessel functions in Section 13-3.
Several acknowledgements are in order. The course from which this text evolved was originally based on lectures by Professor R. P. Feynman at Cornell University. Much of Chapter 16 grew out of fruitful conversations with Dr. Sidney Coleman. The authors are grateful to Mrs. Julie Curcio for rapid, accurate, and remarkably neat typing through several revisions.
JON MATHEWS
R. L. WALKER
Pasadena, California
May 1969
Contents
PREFACE
CHAPTER 1 ORDINARY DIFFERENTIAL EQUATIONS 1
1-1 Solution in Closed Form 1
1-2 Power-Series Solutions 13
1-3 Miscellaneous Approximate Methods 22
1-4 The WKB Method 27
References 37
Problems 38

CHAPTER 2 INFINITE SERIES 44
2-1 Convergence 44
2-2 Familiar Series 48
2-3 Transformation of Series 50
References 55
Problems 56

CHAPTER 3 EVALUATION OF INTEGRALS 58
3-1 Elementary Methods 58
3-2 Use of Symmetry Arguments 62
3-3 Contour Integration 65
3-4 Tabulated Integrals 75
3-5 Approximate Expansions 80
3-6 Saddle-Point Methods 82
References 90
Problems 91
CHAPTER 4 INTEGRAL TRANSFORMS 96
4-1 Fourier Series 96
4-2 Fourier Transforms 101
4-3 Laplace Transforms 107
4-4 Other Transform Pairs 109
4-5 Applications of Integral Transforms 110
References 119
Problems 120
CHAPTER 5 FURTHER APPLICATIONS OF COMPLEX
VARIABLES 123
5-1 Conformal Transformations 123
5-2 Dispersion Relations 129
References 135
Problems 135
CHAPTER 6 VECTORS AND MATRICES 139
6-1 Linear Vector Spaces 139
6-2 Linear Operators 141
6-3 Matrices 143
6-4 Coordinate Transformations 146
6-5 Eigenvalue Problems 150
6-6 Diagonalization of Matrices 158
6-7 Spaces of Infinite Dimensionality 160
References 163
Problems 163
CHAPTER 7 SPECIAL FUNCTIONS 167
7-1 Legendre Functions 167
7-2 Bessel Functions 178
7-3 Hypergeometric Functions 187
7-4 Confluent Hypergeometric Functions 194
7-5 Mathieu Functions 198
7-6 Elliptic Functions 204
References 210
Problems 210
CHAPTER 8 PARTIAL DIFFERENTIAL EQUATIONS 217
8-1 Examples 217
8-2 General Discussion 219
8-3 Separation of Variables 226
8-4 Integral Transform Methods 239
8-5 Wiener-Hopf Method 245
References 252
Problems 253

CHAPTER 9 EIGENFUNCTIONS, EIGENVALUES, AND GREEN'S FUNCTIONS 260
9-1 Simple Examples of Eigenvalue Problems 260
9-2 General Discussion 263
9-3 Solutions of Boundary-Value Problems as Eigenfunction Expansions 266
9-4 Inhomogeneous Problems. Green's Functions 267
9-5 Green's Functions in Electrodynamics 277
References 282
Problems 282

CHAPTER 10 PERTURBATION THEORY 286
10-1 Conventional Nondegenerate Theory 286
10-2 A Rearranged Series 292
10-3 Degenerate Perturbation Theory 293
References 296
Problems 296

CHAPTER 11 INTEGRAL EQUATIONS 299
11-1 Classification 299
11-2 Degenerate Kernels 300
11-3 Neumann and Fredholm Series 302
11-4 Schmidt-Hilbert Theory 305
11-5 Miscellaneous Devices 311
11-6 Integral Equations in Dispersion Theory 316
References 318
Problems 318
CHAPTER 12 CALCULUS OF VARIATIONS 322
12-1 Euler-Lagrange Equation 322
12-2 Generalizations of the Basic Problem 327
12-3 Connections between Eigenvalue Problems and the Calculus of Variations 333
References 341
Problems 341

CHAPTER 13 NUMERICAL METHODS 345
13-1 Interpolation 345
13-2 Numerical Integration 349
13-3 Numerical Solution of Differential Equations 353
13-4 Roots of Equations 358
13-5 Summing Series 363
References 368
Problems 368

CHAPTER 14 PROBABILITY AND STATISTICS 372
14-1 Introduction 372
14-2 Fundamental Probability Laws 373
14-3 Combinations and Permutations 375
14-4 The Binomial, Poisson, and Gaussian Distributions 377
14-5 General Properties of Distributions 380
14-6 Multivariate Gaussian Distributions 384
14-7 Fitting of Experimental Data 387
References 395
Problems 396

CHAPTER 15 TENSOR ANALYSIS AND DIFFERENTIAL GEOMETRY 403
15-1 Cartesian Tensors in Three-Space 403
15-2 Curves in Three-Space; Frenet Formulas 408
15-3 General Tensor Analysis 416
References 421
Problems 421
CHAPTER 16 INTRODUCTION TO GROUPS AND GROUP REPRESENTATIONS 424
16-1 Introduction; Definitions 424
16-2 Subgroups and Classes 426
16-3 Group Representations 430
16-4 Characters 433
16-5 Physical Applications 440
16-6 Infinite Groups 449
16-7 Irreducible Representations of SU(2), SU(3), and O(3) 457
References 467
Problems 468

APPENDIX SOME PROPERTIES OF FUNCTIONS OF A COMPLEX VARIABLE 471
A-1 Functions of a Complex Variable. Mapping 471
A-2 Analytic Functions 477
References 483
Problems 483

BIBLIOGRAPHY 485

INDEX 493
ONE
ORDINARY DIFFERENTIAL EQUATIONS
We begin this chapter with a brief review of some of the methods for obtaining solutions of an ordinary differential equation in closed form. Solutions in the form of power series are discussed in Section 1-2, and some methods for obtaining approximate solutions are treated in Sections 1-3 and 1-4.
The use of integral transforms in solving differential equations is discussed later, in Chapter 4. Applications of Green's function and eigenfunction methods are treated in Chapter 9, and numerical methods are described in Chapter 13.
1-1 SOLUTION IN CLOSED FORM
The order and degree of a differential equation refer to the derivative of highest order after the equation has been rationalized. Thus, the equation

d³y/dx³ + x√(dy/dx) + x²y = 0

is of third order and second degree, since when it is rationalized it contains the term (d³y/dx³)².
We first recall some methods which apply particularly to first-order equations. If the equation can be written in the form
A(x) dx + B(y) dy = 0     (1-1)
we say the equation is separable; the solution is found immediately by integrating.
EXAMPLE
dy/dx + √(1 − y²)/√(1 − x²) = 0

dy/√(1 − y²) + dx/√(1 − x²) = 0

sin⁻¹y + sin⁻¹x = C

or, taking the sine of both sides,

x√(1 − y²) + y√(1 − x²) = sin C = C′     (1-2)
More generally, it may be possible to integrate immediately an equation of the form
A(x, y) dx + B(x, y) dy = 0     (1-3)

If the left side of (1-3) is the differential du of some function u(x, y), then we can integrate and obtain the solution

u(x, y) = C

Such an equation is said to be exact. A necessary and sufficient condition that Eq. (1-3) be exact is

∂A/∂y = ∂B/∂x     (1-4)
EXAMPLE
(x + y) dx + x dy = 0     (1-5)

A = x + y     B = x

∂A/∂y = ∂B/∂x = 1

The solution is

xy + x²/2 = C
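The exactness criterion and the solution of this example can be verified mechanically; a minimal sketch in Python with sympy (not part of the text's apparatus):

```python
import sympy as sp

x, y = sp.symbols('x y')
A = x + y   # coefficient of dx in Eq. (1-5)
B = x       # coefficient of dy

# Exactness criterion (1-4): dA/dy must equal dB/dx
is_exact = sp.diff(A, y) - sp.diff(B, x) == 0

# The solution u(x, y) = xy + x^2/2 must reproduce A and B
# as its partial derivatives
u = x*y + x**2 / 2
recovers_A = sp.expand(sp.diff(u, x) - A) == 0
recovers_B = sp.expand(sp.diff(u, y) - B) == 0
```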
Sometimes we can find a function λ(x, y) such that

λ(A dx + B dy)

is an exact differential, although A dx + B dy may not have been. We call such a function λ an integrating factor. One can show that such factors always exist (for a first-order equation), but there is no general way of finding them.
Consider the general linear first-order equation

dy/dx + f(x)y = g(x)     (1-6)

Let us try to find an integrating factor λ(x). That is,

λ(x)[dy + f(x)y dx] = λ(x)g(x) dx

is to be exact. The right side is immediately integrable, and our criterion (1-4) that the left side be exact is

dλ(x)/dx = λ(x)f(x)

This equation is separable, and its solution

λ(x) = exp[∫f(x) dx]     (1-7)

is the integrating factor we were looking for.
EXAMPLE
xy′ + (1 + x)y = eˣ

y′ + [(1 + x)/x]y = eˣ/x

The integrating factor is exp{∫[(1 + x)/x] dx} = xeˣ. Thus

xeˣ{y′ + [(1 + x)/x]y} = e²ˣ     (1-8)

Now our equation is exact; integrating both sides gives

xeˣy = ∫e²ˣ dx = ½e²ˣ + C

y = eˣ/(2x) + Ce⁻ˣ/x
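As a quick machine check of this result, the following Python/sympy sketch substitutes the solution back into the original equation:

```python
import sympy as sp

x, C = sp.symbols('x C')

# General solution of x y' + (1 + x) y = e^x obtained above
y = sp.exp(x) / (2*x) + C * sp.exp(-x) / x

# Substitute back into the left side and compare with e^x
residual = sp.simplify(x * sp.diff(y, x) + (1 + x) * y - sp.exp(x))
```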
One can often simplify a differential equation by making a judicious change of variable. For example, the equation
y′ = f(ax + by + c)     (1-9)

becomes separable if one introduces the new dependent variable

v = ax + by + c

As another example, the so-called Bernoulli equation

y′ + f(x)y = g(x)yⁿ     (1-10)

becomes linear if one sets v = y¹⁻ⁿ. (This substitution becomes "obvious" if the equation is first divided by yⁿ.)
A function f(x, y, …) in any number of variables is said to be homogeneous of degree r in these variables if

f(ax, ay, …) = aʳf(x, y, …)

A first-order differential equation

A(x, y) dx + B(x, y) dy = 0     (1-11)

is said to be homogeneous if A and B are homogeneous functions of the same degree. The substitution y = vx makes the homogeneous equation (1-11) separable.
EXAMPLE
y dx + (2√(xy) − x) dy = 0     (1-12)

y = vx     dy = v dx + x dv

vx dx + (2x√v − x)(v dx + x dv) = 0

2v^(3/2) dx + (2√v − 1)x dv = 0

This equation is clearly separable and its solution is trivial.
Note that this approach is related to dimensional arguments familiar from physics. A homogeneous function is simply a dimensionally consistent function, if x, y, … are all assigned the same dimension (for example, length). The variable v = y/x is then a "dimensionless" variable.
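The degree-one homogeneity of the two coefficients in the example (1-12) can be confirmed directly; a sympy sketch (the positivity assumptions are only to let the square root factor cleanly):

```python
import sympy as sp

# positive=True lets sympy split sqrt(a*x * a*y) into a*sqrt(x*y)
x, y, a = sp.symbols('x y a', positive=True)

# Coefficients of Eq. (1-12): y dx + (2*sqrt(x*y) - x) dy = 0
A = y
B = 2*sp.sqrt(x*y) - x

# Both must be homogeneous of degree 1: f(a*x, a*y) = a * f(x, y)
hA = sp.simplify(A.subs({x: a*x, y: a*y}) - a*A)
hB = sp.simplify(B.subs({x: a*x, y: a*y}) - a*B)
```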
This suggests a generalization of the idea of homogeneity. Suppose that the equation

A dx + B dy = 0

is dimensionally consistent when the dimensionality of y is some power m of the dimensionality of x. That is, suppose

A(ax, aᵐy) = aˡA(x, y)     B(ax, aᵐy) = aˡ⁻ᵐ⁺¹B(x, y)     (1-13)

Such equations are said to be isobaric. The substitution y = vxᵐ reduces the equation to a separable one.
EXAMPLE
xy²(3y dx + x dy) − (2y dx − x dy) = 0     (1-14)

Let us test to see if this is isobaric. Give x a "weight" 1 and y a weight m. The first term has weight 3m + 2, and the second has weight m + 1. Therefore, the equation is isobaric with weight m = −½.

This suggests introducing the "dimensionless" variable v = y√x. To avoid fractional powers, we instead let

x = v/y²

Equation (1-14) reduces to

(3v − 2)y dv + 5v(1 − v) dy = 0

which is separable.
An equation of the form
(ax + by + c) dx + (ex + fy + g) dy = 0     (1-15)

where a, …, g are constants, may be made homogeneous by a substitution

x = X + α     y = Y + β

where α and β are suitably chosen constants [provided af ≠ be; if af = be, Eq. (1-15) is even more trivial].
An equation of the form
y − xy′ = f(y′)     (1-16)

is known as a Clairaut equation. To solve it, differentiate both sides with respect to x. The result is

y″[f′(y′) + x] = 0

We thus have two possibilities. If we set y″ = 0, y = ax + b, and substitution back into the original equation (1-16) gives b = f(a). Thus y = ax + f(a) is the general solution. However, we also have the possibility

f′(y′) + x = 0

Eliminating y′ between this equation and the original differential equation (1-16), we obtain a solution with no arbitrary constants. Such a solution is known as a singular solution.
EXAMPLE
y − xy′ = (y′)²     (1-17)

This is a Clairaut equation with general solution

y = cx + c²

However, we must also consider the possibility 2y′ + x = 0, which gives

x² + 4y = 0

This singular solution is an envelope of the family of curves given by the general solution, as shown in Figure 1-1. The dotted parabola is the singular solution, and the straight lines tangent to the parabola are the general solution.
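Both branches of the solution can be substituted back into the Clairaut equation; here is a Python/sympy sketch, taking f(y′) = (y′)², which reproduces the stated general solution y = cx + c² and the singular solution x² + 4y = 0:

```python
import sympy as sp

x, c = sp.symbols('x c')

def clairaut_residual(y):
    # Residual of y = x*y' + (y')^2, i.e. the Clairaut equation (1-17)
    yp = sp.diff(y, x)
    return sp.simplify(x*yp + yp**2 - y)

general = c*x + c**2     # family of straight lines
singular = -x**2 / 4     # the envelope x^2 + 4y = 0

r_general = clairaut_residual(general)
r_singular = clairaut_residual(singular)
```

Both residuals vanish, even though the singular solution contains no arbitrary constant.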
There are various other types of singular solutions, but we shall not discuss them here. See Cohen (C4) or Ince (I2) for more complete discussions and further references.
Next we review some methods which are useful for higher-order differential equations. An important type is the linear equation with constant coefficients:

aₙy⁽ⁿ⁾ + aₙ₋₁y⁽ⁿ⁻¹⁾ + ··· + a₁y′ + a₀y = f(x)     (1-18)
If f(x) = 0, the equation is said to be homogeneous; otherwise it is inhomogeneous. Note that, if a linear equation is homogeneous, the sum of two solutions is also a solution, whereas this is not true if the equation is inhomogeneous.
The general solution of an inhomogeneous equation is the sum of the general solution of the corresponding homogeneous equation (the so-called complementary function) and any solution of the inhomogeneous equation
Figure 1-1 Solutions of the differential equation (1-17) and their envelope
(the so-called particular integral). This is in fact true for any linear differential equation, whether or not the coefficients are constants.
Solutions of the homogeneous equation [(1-18) with f(x) = 0] generally have the form

y = e^(mx)

Substitution into the homogeneous equation gives

aₙmⁿ + aₙ₋₁mⁿ⁻¹ + ··· + a₀ = 0

If the n roots are m₁, m₂, …, mₙ, the complementary function is

y = c₁e^(m₁x) + c₂e^(m₂x) + ··· + cₙe^(mₙx)     (cᵢ are arbitrary constants)

Suppose two roots are the same, m₁ = m₂. Then we have only n − 1 solutions, and we need another. Imagine a limiting procedure in which m₂ approaches m₁. Then

[e^(m₂x) − e^(m₁x)]/(m₂ − m₁)

is a solution, and, as m₂ approaches m₁, this solution becomes xe^(m₁x). This is our additional solution. If three roots are equal, m₁ = m₂ = m₃, then the three solutions are

e^(m₁x)     xe^(m₁x)     x²e^(m₁x)

and so on.
Arguments involving similar limiting procedures are frequently useful; see, for example, the discussion on pp. 17 and 18.
A particular integral is generally harder to find. If f(x) has only a finite number of linearly independent derivatives, that is, is a linear combination of terms of the form xᵏ, e^(ax), sin kx, cos kx, or, more generally,

xᵏe^(ax) cos bx

then the method of undetermined coefficients is quite straightforward. Take for y(x) a linear combination of f(x) and its independent derivatives and determine the coefficients by requiring that y(x) obey the differential equation.
EXAMPLE
y″ + 3y′ + 2y = eˣ     (1-19)

Complementary function:

m² + 3m + 2 = 0     m = −1, −2

Particular integral: Try y = Aeˣ. Substitution into the differential equation (1-19) gives

6A = 1     A = 1/6

Thus, the general solution is

y = (1/6)eˣ + c₁e⁻ˣ + c₂e⁻²ˣ

If f(x), or a term in f(x), is also part of the complementary function, the particular integral may contain this term and its derivatives multiplied by some power of x. To see how this works, solve the above example (1-19) with the right-hand side, eˣ, replaced by e⁻ˣ.
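The general solution just found is easily verified, and a computer algebra system reproduces the same particular integral automatically; a sketch (again using Python/sympy as an external check, not a tool assumed by the text):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# General solution of y'' + 3y' + 2y = e^x found above
y = sp.exp(x)/6 + c1*sp.exp(-x) + c2*sp.exp(-2*x)

residual = sp.simplify(sp.diff(y, x, 2) + 3*sp.diff(y, x) + 2*y - sp.exp(x))

# The same equation solved automatically; sol.rhs contains the
# complementary function plus the particular integral
f = sp.Function('f')
sol = sp.dsolve(sp.Derivative(f(x), x, 2) + 3*sp.Derivative(f(x), x)
                + 2*f(x) - sp.exp(x), f(x))
```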
There are several formal devices for obtaining particular integrals. If D means d/dx, then we can write our equation (1-18) as

(D − m₁)(D − m₂)···(D − mₙ)y = f(x)     (1-20)

A formal solution of (1-20) is

y = f(x)/[(D − m₁)(D − m₂)···(D − mₙ)]

or, expanding by partial fraction techniques,

y = A₁ f(x)/(D − m₁) + ··· + Aₙ f(x)/(D − mₙ)     (1-21)

What does f(x)/(D − m) mean? It is the solution of (D − m)y = f(x), which is a first-order linear equation whose solution is trivial [see (1-6)].

Alternatively, we can just peel off the factors in (1-20) one at a time. That is,

(D − m₂)(D − m₃)···(D − mₙ)y = f(x)/(D − m₁)     (1-22)

We evaluate the right side, divide by D − m₂, evaluate again, and so on.
Next, we consider the very important method known as variation of parameters for obtaining a particular integral. This method has the useful feature of applying equally well to linear equations with nonconstant coefficients. Before giving a general discussion of the method and applying it to an example, we shall digress briefly on the subject of osculating parameters.
Suppose we are given two linearly independent functions y₁(x) and y₂(x). By means of these we can define the two-parameter family of functions

y = c₁y₁(x) + c₂y₂(x)     (1-23)

Now consider some arbitrary function y(x). Can we represent it by an appropriate choice of c₁ and c₂ in (1-23)? Clearly, the answer in general is no. Let us try the more modest approach of approximating y(x) in the neighborhood of some fixed point x = x₀ by a curve of the family (1-23). Since there are two parameters at our disposal, a natural choice is to fit the value y(x₀) and slope y′(x₀) exactly. That is, c₁ and c₂ are determined from the two simultaneous equations

y(x₀) = c₁y₁(x₀) + c₂y₂(x₀)
y′(x₀) = c₁y₁′(x₀) + c₂y₂′(x₀)     (1-24)

The c₁ and c₂ obtained in this way vary from point to point (that is, as x₀ varies) along the curve y(x). They are called osculating parameters because the curve they determine fits the curve y(x) as closely as possible at the point in question.¹
One can, of course, generalize to an arbitrary number N of functions yᵢ and parameters cᵢ. One chooses the cᵢ to reproduce the function y(x) and its first N − 1 derivatives at the point x₀.
We now return to the problem of solving linear differential equations.
For simplicity, we shall restrict ourselves to second-order equations. Consider the inhomogeneous equation

p(x)y″ + q(x)y′ + r(x)y = s(x)     (1-25)

and suppose we know the complementary function to be

c₁y₁(x) + c₂y₂(x)

Let us seek a solution of (1-25) of the form

y = u₁(x)y₁(x) + u₂(x)y₂(x)     (1-26)

where the uᵢ(x) are functions to be determined. In order to substitute (1-26) into (1-25), we must evaluate y′ and y″. From (1-26),

y′ = u₁y₁′ + u₂y₂′ + u₁′y₁ + u₂′y₂     (1-27)

Before going on to calculate y″, we observe that it would be convenient to impose the condition that the sum of the last two terms in (1-27) vanish, that is,

u₁′y₁ + u₂′y₂ = 0     (1-28)

¹ Latin osculari = to kiss.
This prevents second derivatives of the uᵢ from appearing, for now

y′ = u₁y₁′ + u₂y₂′     (1-29)

and differentiating gives

y″ = u₁y₁″ + u₂y₂″ + u₁′y₁′ + u₂′y₂′     (1-30)

Note that the condition (1-28) not only simplifies the subsequent algebra but also ensures that u₁ and u₂ are just the osculating parameters discussed above; compare (1-26) and (1-29) with (1-24).

The rest of the procedure is straightforward. If (1-26), (1-29), and (1-30) are substituted into the original differential equation (1-25), and we use the fact that y₁ and y₂ are solutions of the homogeneous equation, we obtain

p(x)(u₁′y₁′ + u₂′y₂′) = s     (1-31)

(1-28) and (1-31) are now simultaneous linear equations for the uᵢ′, whose solution is straightforward. The student is urged to complete the solution; see Problem 1-26.
EXAMPLE
Consider the differential equation

x²y″ − 2y = x     (1-32)

The complementary function, that is, the general solution of

x²y″ − 2y = 0     (1-33)

is most easily found by noting that y = xᵐ is a natural trial solution.² Substitution into (1-33) shows that m = 2 or −1, so that the complementary function is

c₁x² + c₂/x

Therefore, we are led to try to find a solution of (1-32) of the form

y = u₁(x)x² + u₂(x)/x     (1-34)

² In fact, y = xᵐ is an obvious trial solution for any linear differential equation of the form

cₙxⁿy⁽ⁿ⁾ + cₙ₋₁xⁿ⁻¹y⁽ⁿ⁻¹⁾ + ··· + c₁xy′ + c₀y = 0
We differentiate, obtaining

y′ = 2u₁x − u₂/x² + u₁′x² + u₂′/x

and impose the condition

u₁′x² + u₂′/x = 0     (1-35)

Then

y′ = 2u₁x − u₂/x²     (1-36)

and

y″ = 2u₁ + 2u₁′x + 2u₂/x³ − u₂′/x²     (1-37)

Substituting (1-34) and (1-37) back into the differential equation (1-32), we obtain

x²(2u₁′x − u₂′/x²) = x

Solving this together with (1-35) gives

u₁′ = 1/(3x²)     u₂′ = −x/3

Then the general solution of (1-32) is

y = u₁x² + u₂/x = −x/2 + c₁x² + c₂/x
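Carrying out the two integrations (u₁ = −1/(3x) + c₁, u₂ = −x²/6 + c₂) gives y = −x/2 + c₁x² + c₂/x. The following sympy sketch checks this solution, along with the two conditions that determined u₁′ and u₂′:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# y = -x/2 + c1*x**2 + c2/x, the general solution of x^2 y'' - 2y = x
y = -x/2 + c1*x**2 + c2/x

residual = sp.simplify(x**2 * sp.diff(y, x, 2) - 2*y - x)

# The osculating-parameter derivatives found above
u1p = 1/(3*x**2)
u2p = -x/3

# They must satisfy condition (1-35) and the substituted equation
cond = sp.simplify(u1p*x**2 + u2p/x)
subst = sp.simplify(x**2*(2*x*u1p - u2p/x**2) - x)
```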
Changes of variable sometimes help. We shall discuss one general transformation which is particularly useful. Consider the second-order linear equation

y″ + f(x)y′ + g(x)y = 0     (1-38)

The substitution

y = v(x)p(x)     (1-39)

will give another linear equation in v(x) which may be easier to solve for certain choices of p(x). The resulting equation is

v″ + [2p′/p + f]v′ + [(p″ + fp′ + gp)/p]v = 0     (1-40)

If we know one solution of the original equation (1-38), we can take p to be that solution and thereby eliminate the term in v from (1-40). This is very useful because we can then find the general solution by two straightforward integrations.

Alternatively, we may choose

p = exp[−½∫f(x) dx]

and eliminate the first-derivative term of (1-40). This procedure is helpful as an aid to recognizing equations, and it is especially useful in connection with approximate methods of solution.
EXAMPLE
Bessel's equation is

x²y″ + xy′ + (x² − n²)y = 0     (1-42)

The substitution y = u(x)p(x) with p(x) = x^(−1/2) gives

u″ + [1 − (n² − ¼)/x²]u = 0     (1-43)

This equation, with no first-derivative term, is very convenient for finding the approximate behavior of Bessel functions at large x, for example, by the WKB method (Section 1-4). The details are left as an exercise (Problem 1-41).
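Part of this exercise can be done by machine: the following sympy sketch performs the substitution y = u(x)x^(−1/2) and confirms that Bessel's equation collapses to the stated first-derivative-free form, up to an overall factor of x^(3/2):

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)
u = sp.Function('u')

# Substitute y = u(x) * x**(-1/2) into Bessel's equation (1-42)
y = u(x) * x**sp.Rational(-1, 2)
bessel = x**2*sp.diff(y, x, 2) + x*sp.diff(y, x) + (x**2 - n**2)*y

# Expected result (1-43), up to an overall factor x**(3/2)
target = x**sp.Rational(3, 2) * (sp.diff(u(x), x, 2)
         + (1 - (n**2 - sp.Rational(1, 4))/x**2) * u(x))

residual = sp.simplify(sp.expand(bessel - target))
```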
In conclusion, we shall briefly mention some other devices for solving differential equations. For details, refer to a text on differential equations.
If the dependent variable y is absent, let y' = p be the new dependent variable. This lowers the order of the equation by one.
If the independent variable x is absent, let y be the new independent variable and y' = p be the new dependent variable. This also lowers the order of the equation by one.
If the equation is homogeneous in y, let v = log y be a new dependent variable. The resulting equation will not contain v, and the substitution v′ = p will then reduce the order of the equation by one.

If the equation is isobaric, when x is given the weight 1 and y the weight m, the change of dependent variable y = vxᵐ, followed by the change of independent variable u = log x, gives an equation in which the new independent variable u is absent.

Always watch for the possibility that an equation is exact. Also consider the possibility of finding an integrating factor. For example, the commonly occurring equation y″ = f(y) can be integrated immediately if both sides are multiplied by y′.
1-2 POWER-SERIES SOLUTIONS
Before discussing series solutions in general, consider a simple (although nonlinear) example:

y″ + y² = x     (1-44)

Try

y = c₀ + c₁x + c₂x² + c₃x³ + ···

Then (1-44) becomes

2c₂ + 6c₃x + 12c₄x² + ··· + (c₀ + c₁x + c₂x² + ···)² = x

Equating coefficients of equal powers of x gives

c₂ = −½c₀²

c₃ = ⅙ − ⅓c₀c₁

c₄ = −(1/12)c₁² + (1/12)c₀³

etc.

Suppose, for example, we want the solution with y = 0, y′ = 1 at x = 0. Then

y = x + x³/6 − x⁴/12 + ···     (1-45)
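Substituting the truncated series back into the equation provides a check on the coefficients; a sympy sketch, assuming the example equation y″ + y² = x with y(0) = 0, y′(0) = 1:

```python
import sympy as sp

x = sp.symbols('x')

# Truncated series solution of y'' + y^2 = x with y(0) = 0, y'(0) = 1
y = x + x**3/6 - x**4/12

# With terms kept through x^4, the residual must vanish through x^3
# (the x^4 part of the residual would be cancelled by the next
# coefficient in the series, which we have dropped)
residual = sp.series(sp.diff(y, x, 2) + y**2 - x, x, 0, 4).removeO()
```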
This method of solution is a very useful one, but we have been careless about justifying the method, establishing convergence of the series, etc. We now briefly outline the general theory of series solutions of linear differential equations.³

Consider the equation

y⁽ⁿ⁾ + fₙ₋₁(x)y⁽ⁿ⁻¹⁾ + ··· + f₁(x)y′ + f₀(x)y = 0     (1-46)

³ This theory is discussed more fully, for example, in Copson (C8) Chapter X, or Jeffreys and Jeffreys (J4) Chapter 16.
If f₀(x), …, fₙ₋₁(x) are regular (in the sense of complex variable theory; see the Appendix, Section A-2) at a point x = x₀, x₀ is said to be an ordinary point of the differential equation. Near an ordinary point, the general solution of a differential equation can be written as a Taylor series whose radius of convergence is the distance to the nearest singularity of the differential equation; by a singularity, of course, we mean a point which is not ordinary.

The Taylor series is an ordinary power series

y = Σₘ₌₀^∞ cₘ(x − x₀)ᵐ     (1-47)

whose coefficients cₘ may be conveniently found by substitution in the differential equation, as in the example above.
If x₀ is not an ordinary point, but (x − x₀)fₙ₋₁(x), (x − x₀)²fₙ₋₂(x), …, (x − x₀)ⁿf₀(x) are regular at x₀, x₀ is said to be a regular singular point of the differential equation. Near a regular singular point, we can always find at least one solution of the form

y = (x − x₀)ˢ Σₘ₌₀^∞ cₘ(x − x₀)ᵐ     with c₀ ≠ 0     (1-48)

where the exponent s of the leading term will not necessarily be an integer. The series again converges in any circle which includes no singularities except for x₀.

The algebraic work involved in substituting the series (1-47) or (1-48) in a differential equation is simplified if x₀ = 0. Thus it is usually convenient to first make a translation to x₀ as origin, that is, to rewrite the equation in terms of a new independent variable, z = x − x₀.

If a point is neither an ordinary point nor a regular singularity, it is an irregular singular point.
EXAMPLE
We first consider an example of expansion about an ordinary point:
Legendre' s differential equation is
(1  x2)y"  2xy' + n(n + l)y = 0
(149)
The points x = + 1 are regular singular points. We shall expand about the ordinary point x = O. Try
y = ∑_{m=0}^∞ c_m x^m = c_0 + c_1 x + c_2 x² + ···
1-2 Power-Series Solutions
Substitution into the differential equation (1-49) gives

c_2 = -n(n + 1) c_0/2
c_3 = [2 - n(n + 1)] c_1/6
c_4 = -n(n + 1)[6 - n(n + 1)] c_0/24
c_5 = [2 - n(n + 1)][12 - n(n + 1)] c_1/120

The general recursion relation is

c_{i+2}/c_i = [i(i + 1) - n(n + 1)]/[(i + 1)(i + 2)] = (i + n + 1)(i - n)/[(i + 1)(i + 2)]   (1-50)
Note that this is a two-term recursion relation; that is, it relates only two coefficients, whose indices differ by two. These facts could have been noticed at the beginning by inspection of the differential equation.
The general solution of our differential equation (1-49) is therefore

y = c_0 [1 - n(n + 1) x²/2! + n(n + 1)(n - 2)(n + 3) x⁴/4! - + ···]
  + c_1 [x - (n - 1)(n + 2) x³/3! + (n - 1)(n + 2)(n - 3)(n + 4) x⁵/5! - + ···]   (1-51)

For physical applications, it is necessary to consider the behavior of the infinite series in (1-51) near the singular points x = ±1. If we let i → ∞ in the recursion relation (1-50), we see that

c_{i+2}/c_i → 1
Thus, if we go farther and farther out in either series, we find it more and more resembling a geometric series with x² as the ratio of successive terms. The sum of such a series clearly approaches infinity as x² approaches 1.⁴
⁴ This discussion is incomplete. Infinite series are discussed in more detail in Chapter 2, and the series of (1-51) are studied as a special case on p. 47. The series in fact "almost" converge as x² approaches 1; their sums go to infinity only logarithmically [y ≈ ln(1 - x²)].
In cases of physical interest, however, we often require solutions y(x) which are finite for -1 ≤ x ≤ +1. There are two ways we can arrange this:
(1) Let c_1 = 0, and choose n to be one of the integers ..., -5, -3, -1, 0, 2, 4, .... Then the first series in (1-51) terminates, and the second is absent.
(2) Let c_0 = 0, and choose n to be one of the integers ..., -6, -4, -2, 1, 3, 5, .... Then the second series in (1-51) terminates, and the first is absent.
We see that if we wish a solution of the Legendre differential equation (1-49) which is finite on the interval -1 ≤ x ≤ +1, n must be an integer. The resulting solution y(x) is a polynomial which, when normalized by the condition y(1) = 1, is known as a Legendre polynomial P_n(x).⁵
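The recursion relation (1-50) and the termination argument are easy to check by machine. The following Python sketch (an illustration added here, not part of the original text) builds the series for n = 2 in exact rational arithmetic and normalizes it by y(1) = 1 to obtain P_2(x) = (3x² - 1)/2.

```python
from fractions import Fraction

def legendre_series(n, terms=12):
    """Series coefficients c_0, c_1, ... from the recursion (1-50), with c_0 = 1, c_1 = 0."""
    c = [Fraction(0)] * terms
    c[0] = Fraction(1)
    for i in range(terms - 2):
        # c_{i+2}/c_i = (i + n + 1)(i - n)/((i + 1)(i + 2))
        c[i + 2] = c[i] * Fraction((i + n + 1) * (i - n), (i + 1) * (i + 2))
    return c

c = legendre_series(2)        # n = 2: the even series terminates as 1 - 3 x^2
y1 = sum(c)                   # y(1) = -2, so divide by it to normalize y(1) = 1
P2 = lambda x: sum(float(ck) * x ** k for k, ck in enumerate(c)) / float(y1)
print(P2(0.5), (3 * 0.5 ** 2 - 1) / 2)   # both -0.125
```

The exact arithmetic makes the termination visible: every coefficient beyond c_2 is identically zero, not merely small.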
As an example of the problems that can arise at a regular singular point, consider Bessel's equation

x²y'' + xy' + (x² - m²)y = 0   (1-52)
This has a regular singular point at x = 0. Thus a solution of the form

y = x^s ∑_{n=0}^∞ c_n x^n   (c_0 ≠ 0)
is guaranteed to exist, and we can see from the differential equation that a two-term recursion relation will be obtained. If we substitute into the differential equation, the coefficient of x^s is

c_0(s² - m²) = 0   or   s² - m² = 0   (1-53)
This is called the indicial equation. Its roots are s = ±m.
Next consider the coefficient of x^{s+1}. It is

c_1[(s + 1)² - m²] = 0

Thus c_1 = 0 except in the single case m = 1/2, s = -1/2, and in that case we may still set c_1 = 0, since the terms thereby omitted are equivalent to those which make up the other solution, with s = +1/2.
We therefore confine ourselves to even values of n in the sum, writing

y = x^{±m}(c_0 + c_2 x² + c_4 x⁴ + ···)
The recursion relation is easily found to be

c_{n+2}/c_n = -1/[(s + n + 2)² - m²] = -1/[(n + 2)(2s + n + 2)]   (1-54)

so that

y ≅ c_0 x^s [1 - x²/(4(s + 1)) + x⁴/(4 · 8 (s + 1)(s + 2)) - + ···]   (1-55)

Suitably normalized, this series is called a Bessel function; we shall discuss Bessel functions in more detail in Chapter 7.
If m is not an integer, we have two independent solutions to our equation, namely (1-55) with s = ±m. If m is an integer (which we may assume positive), we can only choose s = +m; for s = -m, all denominators in (1-55) beyond a certain term vanish. If we multiply through by (s + m) before setting s = -m, to cancel the offending factors, we get a multiple of the solution with s = +m, as may be readily verified.
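The recursion (1-54) can be checked numerically in a case where the series sums to an elementary function: for s = m = 1/2, the series (1-55) with c_0 = 1 is exactly x^{-1/2} sin x. A Python sketch (illustrative only):

```python
import math

def frobenius_bessel(x, s, terms=25):
    """Sum x^s (c_0 + c_2 x^2 + ...) with c_0 = 1 and the recursion (1-54):
    c_{n+2}/c_n = -1/((n + 2)(2s + n + 2))."""
    total, c, n = 0.0, 1.0, 0
    for _ in range(terms):
        total += c * x ** (s + n)
        c *= -1.0 / ((n + 2) * (2 * s + n + 2))
        n += 2
    return total

x = 2.0
print(frobenius_bessel(x, 0.5), math.sin(x) / math.sqrt(x))   # the two values agree
```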
This is a situation reminiscent of our difficulty on p. 7 in obtaining independent solutions of a differential equation with constant coefficients. The resolution of the difficulty is quite analogous. We begin by letting y(x, s) denote the series

y(x, s) = x^s ∑_{n=0}^∞ c_n x^n

with c_0 = 1, c_1 = 0, and all other coefficients given by (1-54), but without making use of the indicial equation (1-53). That is, we set
y(x, s) = x^s [1 - x²/((s + 2 + m)(s + 2 - m)) + x⁴/((s + 2 + m)(s + 2 - m)(s + 4 + m)(s + 4 - m)) - + ···]   (1-56)
If we abbreviate by L the Bessel differential operator,

L = x² d²/dx² + x d/dx + (x² - m²)
then the series Ly(x, s) will contain only a term in x^s, because the recursion relation makes the coefficients of all higher powers vanish. We obtain [compare (1-53)]

Ly(x, s) = (s - m)(s + m)x^s
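This identity is easy to confirm numerically. Since L[x^p] = (p² - m²)x^p + x^{p+2}, the operator can be applied to the truncated series (1-56) term by term; up to the (tiny) truncation tail, the result equals (s - m)(s + m)x^s even when s is not a root of the indicial equation. A Python sketch, illustrative only:

```python
def Ly_series(x, s, m, terms=20):
    """Apply L = x^2 d^2/dx^2 + x d/dx + (x^2 - m^2) to the truncated series (1-56),
    using L[x^p] = (p^2 - m^2) x^p + x^{p+2} term by term."""
    t, total = 1.0, 0.0
    for k in range(terms):
        p = s + 2 * k
        total += t * ((p * p - m * m) * x ** p + x ** (p + 2))
        t *= -1.0 / ((s + 2 * k + 2) ** 2 - m * m)   # coefficient recursion of (1-56)
    return total

s, m, x = 0.7, 2.0, 0.5     # s deliberately NOT a root of the indicial equation
print(Ly_series(x, s, m), (s - m) * (s + m) * x ** s)   # the two values agree
```

Internally the sum telescopes: the recursion is built precisely so that the coefficient of every power above x^s cancels.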
We see (again) that s must equal ±m in order that y(x, s) be a solution of Bessel's equation Ly = 0. However, if m is a positive integer, we have remarked that

[(s + m)y(x, s)]_{s=-m}
is a constant multiple of y(x, m), and we must find a second solution. To do so, consider the result obtained by substituting (s + m)y(x, s) into Bessel's equation. We get

L[(s + m)y(x, s)] = (s + m)²(s - m)x^s
The derivative of the right side with respect to s vanishes at s = -m. Therefore,

{∂/∂s [(s + m)y(x, s)]}_{s=-m}   (1-57)

is a solution of Bessel's equation, and is in fact the second solution we were looking for.
EXAMPLE
Let us find a second solution of Bessel's equation for m = 2.
y(x, s) = x^s [1 - x²/(s(s + 4)) + x⁴/(s(s + 2)(s + 4)(s + 6)) - + ···]

(s + 2)y(x, s) = x^s [(s + 2) - ((s + 2)/(s(s + 4))) x² + x⁴/(s(s + 4)(s + 6)) - x⁶/(s(s + 4)²(s + 6)(s + 8)) + - ···]   (1-58)
Remembering that (d/ds)x^s = x^s ln x and

(d/ds)(uv···/w···) = (uv···/w···)(u'/u + v'/v + ··· - w'/w - ···)
we obtain

∂/∂s [(s + 2)y(x, s)] = (s + 2)y(x, s) ln x
  + x^s {1 - [(s + 2)/(s(s + 4))][1/(s + 2) - 1/s - 1/(s + 4)] x²
  + [1/(s(s + 4)(s + 6))][-1/s - 1/(s + 4) - 1/(s + 6)] x⁴ - + ···}
Setting s = -2, and observing that [(s + 2)y(x, s)]_{s=-2} = -(1/16)y(x, 2), we obtain

{∂/∂s [(s + 2)y(x, s)]}_{s=-2} = -(1/16) y(x, 2) ln x + x^{-2} (1 + x²/4 + x⁴/64 + ···)   (1-59)
This solution and y(x, 2) are two independent solutions of Bessel's equation with m = 2.
Let us now consider the differential equation

d²ψ/dx² + (E - x²)ψ = 0   (1-60)

This is the Schrödinger equation for a one-dimensional quantum-mechanical harmonic oscillator, in appropriate units. If one tries a direct power-series solution of (1-60) about x = 0, one obtains a three-term recursion relation. Such relations are a little inconvenient, so it is advantageous to look for a transformation of variables which will lead to a simpler equation.
A trick that often helps in such a situation is to "factor out" the behavior near some singularity or singularities. Where are the singularities of this equation? There are none in the finite x plane, but we must digress for a moment on singularities at infinity.
Consider the differential equation

y'' + P(x)y' + Q(x)y = 0   (1-61)

Recall that x = 0 is an ordinary point if P and Q are regular there, and a regular singular point if xP and x²Q are regular there. Let z = 1/x. The equation becomes

d²y/dz² + [2/z - p(z)/z²] dy/dz + [q(z)/z⁴] y = 0   (1-62)

where p(z) = P(x) and q(z) = Q(x).
Then x = ∞ is an ordinary point or a singularity of Eq. (1-61) depending on whether z = 0 is an ordinary point or a singularity of Eq. (1-62). That is, x = ∞ is an ordinary point if 2x - x²P(x) and x⁴Q(x) are regular there, and x = ∞ is a regular singular point if xP(x) and x²Q(x) are regular there. This concludes our digression.
Using these criteria, we see that our differential equation (1-60) is highly singular at x = ∞. For large x, the equation is approximately

d²ψ/dx² - x²ψ = 0   (1-63)

and the solutions are roughly

ψ ≈ e^{±x²/2}   (1-64)
in that, if we substitute either function into the differential equation (1-63), the terms which are dominant at infinity cancel.
In quantum mechanics, physically acceptable solutions must not become infinite as |x| → ∞; therefore, try

ψ = y(x)e^{-x²/2}   (1-65)

This change of variable does not ensure the desired behavior at infinity, of course, and we shall still have to select solutions y(x) which give this behavior. In fact, we may expect in general y(x) → ce^{x²}, so as to give the divergent solution ψ ~ e^{+x²/2}.
The differential equation (1-60) becomes

y'' - 2xy' + (E - 1)y = 0   (1-66)

[If we write E - 1 = 2n, (1-66) is known as the Hermite differential equation.] We can obtain the general solution of this equation in the form of a power series which will converge everywhere, and the recursion relation for the coefficients will contain only two terms. The recursion relation is

c_{m+2}/c_m = [(2m + 1) - E]/[(m + 1)(m + 2)]
and the solution is

y = c_0 [1 + (1 - E) x²/2! + (1 - E)(5 - E) x⁴/4! + ···]
  + c_1 [x + (3 - E) x³/3! + (3 - E)(7 - E) x⁵/5! + ···]   (1-67)
If E = 2n + 1, where n is an integer ≥ 0, one series terminates after the term in xⁿ (the even or odd series, depending on n). The resulting polynomial, suitably normalized, is called a Hermite polynomial⁶ of order n, H_n(x). The other series can be eliminated by setting its coefficient, c_0 or c_1, equal to zero, and the resulting solution (1-65) will go to zero at infinity.
If either series of (1-67) does not terminate, its behavior at large x is determined by the terms far out, where the recursion relation is approximately

c_{m+2}/c_m ≈ 2/m

Thus either series behaves for large x like e^{x²}, and ψ → e^{+x²/2}, as expected. A solution ψ which remains bounded as x → ±∞ is therefore possible only if E = 2n + 1 with integral n; that is,
"" = tltn(x) = H,.(x)exz/1
tor
E = E,. = 2n + 1
(168)
This is another example of how boundary conditions may impose restrictions on the acceptable values of a constant appearing in a differential equation. The acceptable solutions ψ_n are called eigenfunctions of the differential operator -(d²/dx²) + x² belonging to the eigenvalues E_n (see Chapter 9).
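A short Python sketch (illustrative, not part of the text) shows the mechanism: for E = 5, that is n = 2, the even series of (1-67) terminates after the x² term, giving the polynomial 1 - 2x², a multiple of H_2(x) = 4x² - 2.

```python
def hermite_series_y(x, E, terms=40):
    """Even series solution of y'' - 2x y' + (E - 1) y = 0, built from the recursion
    c_{m+2}/c_m = ((2m + 1) - E)/((m + 1)(m + 2)) with c_0 = 1."""
    c, total = 1.0, 0.0
    for m in range(0, 2 * terms, 2):
        total += c * x ** m
        c *= ((2 * m + 1) - E) / ((m + 1) * (m + 2))
    return total

x = 1.7
print(hermite_series_y(x, 5.0), 1 - 2 * x * x)   # identical: the series is 1 - 2x^2
```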
As another example of series solutions, we consider briefly the associated Legendre equation

(1 - x²)y'' - 2xy' + [n(n + 1) - m²/(1 - x²)]y = 0   (1-69)

Note that when m = 0, this reduces to Legendre's equation. The origin is an ordinary point, and x = ±1 are regular singular points.
A straightforward attempt at a power-series solution about x = 0 again yields a three-term recursion relation. Let us try to factor out the behavior at x = ±1. Let x = ±1 + z and write the approximate form of the differential equation valid for |z| ≪ 1. Near either singularity it is

zy'' + y' - (m²/4z)y = 0

This has solutions y = z^{+m/2} and z^{-m/2}, the former being well behaved if, as we may assume without loss of generality, m ≥ 0.
We are therefore led to make the change of variable y = v(1 - x²)^{m/2} in (1-69), thus factoring out the behavior at both singularities simultaneously. The equation becomes

(1 - x²)v'' - 2(m + 1)xv' + [n(n + 1) - m(m + 1)]v = 0   (1-70)
⁶ For a precise definition of H_n(x) see Eq. (7-103).
with a straightforward series solution about x = 0. The recursion relation for the coefficients is

c_{r+2}/c_r = [(r + m)(r + m + 1) - n(n + 1)]/[(r + 1)(r + 2)] = (r + m - n)(r + m + n + 1)/[(r + 1)(r + 2)]   (1-71)
Again we obtain even and odd series solutions for v(x), both of which behave like (1 - x²)^{-m} near x = ±1 if they do not terminate.⁷ A bounded solution exists only if n and m are such that one series terminates after some term x^r. From (1-71), the condition is

n - m = r = an integer ≥ 0   (1-72)
Usually in physical applications n and m are both integers. It may be verified that y is then simply a constant times

(1 - x²)^{m/2} (d^m/dx^m) P_n(x)   (1-73)

which is called an associated Legendre function. These functions will be discussed in more detail in Chapter 7.
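The recursion (1-71) can likewise be checked mechanically. For n = 3, m = 1 the even series terminates after x^{n-m} = x², and the resulting polynomial v satisfies (1-70) identically. A Python sketch (illustrative only) verifies both statements:

```python
def assoc_legendre_v(n, m, terms=12):
    """Even-series coefficients of v from the recursion (1-71), with c_0 = 1, c_1 = 0."""
    c = [0.0] * terms
    c[0] = 1.0
    for r in range(terms - 2):
        c[r + 2] = c[r] * (r + m - n) * (r + m + n + 1) / ((r + 1) * (r + 2))
    return c

def residual_170(c, n, m, x):
    """Left side of (1-70) evaluated at x for the polynomial with coefficients c."""
    v = sum(ck * x ** k for k, ck in enumerate(c))
    vp = sum(k * ck * x ** (k - 1) for k, ck in enumerate(c) if k >= 1)
    vpp = sum(k * (k - 1) * ck * x ** (k - 2) for k, ck in enumerate(c) if k >= 2)
    return (1 - x * x) * vpp - 2 * (m + 1) * x * vp + (n * (n + 1) - m * (m + 1)) * v

c = assoc_legendre_v(3, 1)     # terminates after x^{n-m} = x^2: v = 1 - 5 x^2
print(c[:4], residual_170(c, 3, 1, 0.3))   # residual is zero (up to roundoff)
```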
1-3 MISCELLANEOUS APPROXIMATE METHODS
One can often get a qualitative picture of the solutions of a differential equation by means of a graphical approach.
For example, consider the first-order equation

dy/dx = e^{-2xy}   (1-74)

We may draw little lines in the xy-plane, each indicating the slope of the solution passing through that point, as illustrated in Figure 1-2. The approximate shape of the solutions is obvious.
As a second example, consider the second-order, nonlinear equation

y'' = x - y²   (1-75)
⁷ See Problem 1-32.
Figure 1-2 Slopes of the solutions of (1-74)

If we draw the parabola x = y², as in Figure 1-3, we see that y'' is positive inside the parabola, and negative outside of it. If we limit ourselves to solutions starting near the origin and going toward the right, we can identify several types, which are sketched in Figure 1-3:
1. Solutions oscillating about the upper branch of the parabola.
2. A critical, nonoscillating solution approaching the upper branch of the parabola from above.
3. Solutions going to y = -∞.
4. A critical, nonoscillating solution approaching the lower branch of the parabola from above.
The power-series solution (1-45) we found earlier [with y(0) = 0, y'(0) = 1] is of type 1. An approximation to solution 2 is given by expression (1-80).
Figure 1-3 Several types of solution for the equation y'' = x - y²
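The qualitative picture can be confirmed by direct numerical integration. The Python sketch below (illustrative; classical RK4 with the initial conditions y(0) = 0, y'(0) = 1 mentioned above) exhibits a type-1 solution repeatedly crossing the upper branch y = √x.

```python
import math

def rk4_path(x, y, yp, h, x_end):
    """Integrate y'' = x - y^2 by classical RK4 on the system (y, y')."""
    acc = lambda x, y: x - y * y          # the right side of (1-75)
    pts = []
    while x < x_end:
        k1y, k1v = yp, acc(x, y)
        k2y, k2v = yp + h/2 * k1v, acc(x + h/2, y + h/2 * k1y)
        k3y, k3v = yp + h/2 * k2v, acc(x + h/2, y + h/2 * k2y)
        k4y, k4v = yp + h * k3v, acc(x + h, y + h * k3y)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        yp += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
        pts.append((x, y))
    return pts

pts = rk4_path(0.0, 0.0, 1.0, 1e-3, 30.0)
crossings = sum(1 for (x1, y1), (x2, y2) in zip(pts, pts[1:])
                if (y1 - math.sqrt(x1)) * (y2 - math.sqrt(x2)) < 0)
print(crossings)   # many sign changes: the solution oscillates about y = sqrt(x)
```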
Figure 1-4 A potential V(x) for the Schrödinger equation
As a third example, consider the one-dimensional Schrödinger equation for a particle of mass m in a potential V(x),

d²ψ/dx² = -(2m/ħ²)[E - V(x)]ψ   (1-76)
If E > V(x), then ψ''/ψ < 0, and ψ curves toward the x axis; that is, ψ has an oscillatory or "sinusoidal" character. If E < V(x), then ψ''/ψ > 0, and ψ curves away from the x axis; that is, ψ has an "exponential" character. If we impose the boundary condition that ψ remain finite everywhere, then a solution with unbounded exponential behavior is unacceptable.
Suppose V(x) looks as shown in Figure 1-4.
If E > V_3, all solutions are oscillatory everywhere and all are acceptable; that is, none blow up at infinity.
If V_2 < E < V_3, "most" solutions blow up at x → +∞. However, if we start just right, we can find a solution that falls off exponentially as x → +∞. This defines the acceptable solution uniquely (except in amplitude) and, in particular, fixes the phase in the left-hand region where the solution is oscillatory. Such a solution is illustrated qualitatively in Figure 1-5.
Figure 1-5 A physically acceptable solution of the Schrödinger equation for V_2 < E < V_3
If V_1 < E < V_2, things are really difficult. ψ behaves "exponentially" at both ends. If we adjust it so that it does not blow up at the left, it is almost sure to do so at the right. Only for certain values of E can satisfactory solutions be found. These are the eigenvalues; compare (1-68).
If E < V_1, no satisfactory solutions exist.
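This eigenvalue condition is exactly what a numerical "shooting" calculation exploits. The Python sketch below (illustrative only) applies it to the oscillator equation (1-60): integrating outward with even initial data and bisecting on the sign of ψ at large x recovers the lowest even eigenvalue E_0 = 1 of (1-68).

```python
def psi_end(E, x_max=6.0, h=2e-3):
    """Integrate psi'' = (x^2 - E) psi from x = 0 with psi(0) = 1, psi'(0) = 0 (RK4)."""
    acc = lambda x, p: (x * x - E) * p
    x, p, v = 0.0, 1.0, 0.0
    while x < x_max:
        k1p, k1v = v, acc(x, p)
        k2p, k2v = v + h/2 * k1v, acc(x + h/2, p + h/2 * k1p)
        k3p, k3v = v + h/2 * k2v, acc(x + h/2, p + h/2 * k2p)
        k4p, k4v = v + h * k3v, acc(x + h, p + h * k3p)
        p += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return p

# For E below the eigenvalue, psi blows up to +infinity; just above it, to -infinity.
# Bisecting on the sign of psi(x_max) traps the eigenvalue.
lo, hi = 0.5, 2.5
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if psi_end(mid) > 0:
        lo = mid
    else:
        hi = mid
E0 = 0.5 * (lo + hi)
print(E0)   # close to the exact lowest eigenvalue E_0 = 1
```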
Another method, which can often provide useful approximate solutions, consists of experimenting with the equation: dropping small terms, iterating, and so forth.
EXAMPLE

dy/dx = e^{-2xy}   (1-77)

Let us look for a solution near x → ∞. Assume y > 0. Thus, we try

y ≈ a   (= constant)

Now put this guess back into (1-77):

dy/dx ≈ e^{-2ax}

Therefore,

y ≈ a - e^{-2ax}/(2a)

Iterate once more.

dy/dx ≈ exp[-2x(a - e^{-2ax}/(2a))]

y ≈ a - e^{-2ax}/(2a) - (1/(4a²))(x + 1/(4a))e^{-4ax} + ···   (1-78)

etc.
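The successive approximants above can be tested by substitution back into (1-77). In the Python sketch below (illustrative only), the residual |y' - e^{-2xy}| shrinks at each stage of the iteration, suggesting convergence.

```python
import math

a = 1.0
y1 = lambda x: a                                    # zeroth guess
y2 = lambda x: a - math.exp(-2 * a * x) / (2 * a)   # first iterate
y3 = lambda x: (a - math.exp(-2 * a * x) / (2 * a)  # second iterate, Eq. (1-78)
                - (x + 1 / (4 * a)) * math.exp(-4 * a * x) / (4 * a ** 2))

def residual(yf, x, h=1e-5):
    """|y' - exp(-2 x y)| at x, with y' from a central difference."""
    dy = (yf(x + h) - yf(x - h)) / (2 * h)
    return abs(dy - math.exp(-2 * x * yf(x)))

x = 4.0
print(residual(y1, x), residual(y2, x), residual(y3, x))   # decreasing
```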
If the procedure seems to be converging, it is probably giving us a solution.
Sometimes, though, it does not converge.
EXAMPLE

y'' = x - y²   (1-79)
(See the graphical discussion on pp. 22-23.) Try y ≈ √x. Substituting into the right side of (1-79) gives y'' ≈ 0, which in turn implies y = ax + b, with a and b constant. This does not look anything like our first try. The mistake, of course, was substituting our first guess into the "large" term y², rather than into the "small" term y''. Doing it the other way gives

y² = x - y'' ≈ x + (1/4)x^{-3/2} = x(1 + (1/4)x^{-5/2})

y ≈ √x + (1/8)x^{-2}

Iterating once more gives

y ≈ √x + (1/8)x^{-2} - (49/128)x^{-9/2} + ···   (1-80)

This is the solution 2 enumerated on p. 23.
Alternatively, we may try a different method. Write

y = √x + η(x)   (1-81)

Substituting into the differential equation (1-79) gives

η'' - (1/4)x^{-3/2} = -2√x η - η²   (1-82)
We neglect the last term, but the equation is still complicated. We may try further neglecting either the η'' term or the x^{-3/2} term. If we assume |η''| ≪ |2√x η|, we find

η ≈ (1/8)x^{-2}

and we obtain our previous solution (1-80). We should, of course, check the consistency of the solution and the approximation made in finding it; namely, that

|((1/8)x^{-2})''| ≪ 2((1/8)x^{-2})√x

This is satisfied for large x.
We can obtain another solution by neglecting the x^{-3/2} term of Eq. (1-82) instead of η''. That is, if we assume |η''| ≫ |(1/4)x^{-3/2}|, we obtain the differential equation

η'' + 2√x η = 0   (1-83)

An approximate solution of this equation may be found by the WKB method, which we discuss in the next section. The result is

η ≈ (c/x^{1/8}) sin[(4√2/5)x^{5/4} + δ]   (1-84)
which gives an oscillatory solution of type 1 listed on p. 23. Again, we should go back and verify the consistency of the assumption

|η''| ≫ |(1/4)x^{-3/2}|

Why can we not neglect the first term on the right of (1-82)?
1-4 THE WKB METHOD
The WKB method provides approximate solutions of differential equations of the form

y'' + f(x)y = 0   (1-85)

provided f(x) satisfies certain restrictions discussed below, which may be summarized in the phrase "f(x) is slowly varying." Recall that any linear homogeneous second-order equation may be put in this form by the transformation (1-41). The one-dimensional Schrödinger equation is of this form, and the method was developed for quantum-mechanical applications by Wentzel, by Kramers,⁹ and by Brillouin, whence the name. The method had been given previously by Jeffreys.¹⁰
The solutions of Eq. (1-85) with f(x) constant suggest the substitution

y = e^{iφ(x)}   (1-86)
The differential equation becomes

-(φ')² + iφ'' + f = 0   (1-87)
If we assume φ'' small, a first approximation is

φ(x) = ±∫ √f(x) dx   (1-88)
The condition of validity (that φ'' be "small") is

|φ''| ≈ (1/2)|f'|/√|f| ≪ |f|   (1-89)

From (1-86) and (1-88) we see that 1/√f is roughly 1/(2π) times one "wavelength" or one "exponential length" of the solution y. Thus the condition of validity of our approximation is simply the intuitively reasonable one that the change in f(x) in one wavelength should be small compared to |f|.
⁹ See, for example, Kramers (K5).   ¹⁰ See Jeffreys and Jeffreys (J4).
A second approximation is easily found by iteration. From (1-88),

φ'' ≈ ±(1/2)f^{-1/2}f'

Substituting this estimate for the small term φ'' into (1-87), we obtain

(φ')² ≈ f ± (i/2)(f'/√f)   so that   φ' ≈ ±√f + (i/4)(f'/f)

The two choices of sign give two (approximate) solutions which may be combined to give the general solution

y(x) ≈ [f(x)]^{-1/4} (c_+ exp[i ∫ √f(x) dx] + c_- exp[-i ∫ √f(x) dx])   (1-90)

where c_+ and c_- are arbitrary constants.
We have thus found an approximation to the general solution of the original equation (1-85) in any region where the condition of validity (1-89) holds. The method fails if f(x) changes too rapidly or if f(x) passes through zero. The latter is a serious difficulty, since we often wish to join an
Figure 1-6 Graph of f(x) and two exact solutions of the equation (1-85); one of them, y_1(x), being the special solution which decreases to zero on the left
oscillatory solution in a region where f(x) > 0 to an "exponential" one in a region where f(x) < 0 [see, for example, the discussion of Eq. (1-76)].
We shall investigate this problem in some detail¹¹ in order to derive the so-called connection formulas relating the constants c_+ and c_- of the WKB solutions on either side of a point where f(x) = 0.
Suppose f(x) passes through zero at x_0 and is positive on the right, as shown in Figure 1-6. Suppose further that f(x) satisfies the condition of validity (1-89) in regions both to the left and to the right of x_0, so that any specific solution y(x) may be approximated in these regions by

x ≪ x_0 [f(x) < 0]:

y(x) ≈ [a/(-f(x))^{1/4}] exp[+∫_x^{x_0} √(-f(x)) dx] + [b/(-f(x))^{1/4}] exp[-∫_x^{x_0} √(-f(x)) dx]   (1-91)

x ≫ x_0 [f(x) > 0]:

y(x) ≈ [c/(f(x))^{1/4}] exp[+i∫_{x_0}^x √f(x) dx] + [d/(f(x))^{1/4}] exp[-i∫_{x_0}^x √f(x) dx]   (1-92)

where the symbols √(-f) and √f mean positive real roots throughout. If f(x) is real, a solution which is real in the left region will also be real on the right. This "reality condition" states that if a and b are real, d = c*.
Our problem is to "connect" the approximations on either side of x_0 so that they refer to the same exact solution; that is, to find c and d if we know a and b, and vice versa. To make this connection, we need to use an approximate solution which is valid along the entire length of some path connecting the regions of x on either side of x_0 where the WKB approximations are valid. One procedure,¹² followed by Kramers and by Jeffreys, is to use a solution valid on the real axis through x_0 (see Problem 7-36 and also the discussion at the end of Section 4-5).
A second procedure, used by Zwaan and by Kemble,¹³ is to avoid the real axis near x_0 and use a path encircling x_0 in the complex plane, along
¹¹ If we were using fine print in this book, the material from here to Eq. (1-122) would be so displayed. The connection formulas (1-113) and (1-122) are, however, important and should be understood even by those who wish to skip the intervening derivation.
¹² See Schiff (S2) Section 34 for a discussion of this approach, and numerous references.
¹³ See Kemble (K1) Section 21.
which the WKB approximations themselves remain valid. We shall follow this second procedure because it provides not only the connection formulas, but also the means to estimate errors in the WKB approximations. Also, the techniques employed are more generally instructive.
The question of errors is important because we wish to use the approximate solutions over a large range of x covering many "wavelengths" or "exponential lengths." Thus, one might worry that errors would gradually accumulate and that the approximate solution would get badly "out of phase," for example, in the region where it is oscillatory.
Define WKB functions associated with Eq. (1-85) as

W_±(x) = [f(x)]^{-1/4} exp[±i∫_{x_0}^x √f(x) dx]   (1-93)

We must consider these to be functions of the complex variable x, and we should draw suitable branch cuts in the x plane to avoid ambiguities from the roots of f(x) (see the Appendix, Section A-1). These functions (1-93) satisfy (exactly) a differential equation which we can find by differentiation:

W''_± + [f(x) + (1/4)(f''/f) - (5/16)(f'/f)²] W_± = 0   (1-94)

If we define

g(x) = (1/4)(f''/f) - (5/16)(f'/f)²   (1-95)

then W_±(x) are exact solutions of

W''_± + [f(x) + g(x)] W_± = 0   (1-96)
and approximate solutions of

y'' + f(x)y = 0   (1-97)

provided |g(x)| ≪ |f(x)|. Equation (1-97) is regular at x_0, whereas Eq. (1-96) has a singularity there. Correspondingly, y(x) is regular at x_0 but the W_± are singular there.
We now define functions α_±(x) by

y(x) = α_+(x)W_+(x) + α_-(x)W_-(x)   (1-98)
y'(x) = α_+(x)W'_+(x) + α_-(x)W'_-(x)   (1-99)

where y(x) is a solution of (1-97). The α_± are just osculating parameters, as discussed on p. 9 in connection with the variation-of-parameters method.
The WKB approximation corresponds simply to setting α_+ and α_- equal to constants.
Solving (1-98) and (1-99) for α_±(x),

α_+ = (yW'_- - y'W_-)/(W_+W'_- - W'_+W_-)
α_- = -(yW'_+ - y'W_+)/(W_+W'_- - W'_+W_-)

where the denominators are the Wronskian of W_+ and W_-. The Wronskian is a constant, as may easily be shown from the differential equation (1-96). From the explicit forms (1-93), it is

W_+W'_- - W'_+W_- = -2i

so that

α_+ = (i/2)(yW'_- - y'W_-),   α_- = -(i/2)(yW'_+ - y'W_+)   (1-100)
Differentiating, and using the differential equations (1-96) and (1-97) to eliminate y'' and W''_±, we obtain

dα_±/dx = ∓(i/2)(y''W_∓ - yW''_∓) = ∓(i/2)g(x)yW_∓   (1-101)

or, remembering (1-93) and (1-98),

dα_±/dx = ∓(i/2) [g(x)/√f(x)] {α_± + α_∓ exp[∓2i∫_{x_0}^x √f(x) dx]}   (1-102)

One or the other of these expressions (1-101) or (1-102) may be used to estimate the error which may accumulate in the WKB approximation over a long range of x.
EXAMPLE
The WKB functions associated with the equation

y'' + xy = 0   (1-103)

for x ≥ 0 are

W_±(x) = x^{-1/4} exp(±i∫ x^{1/2} dx) = x^{-1/4} exp(±(2/3)ix^{3/2})   (1-104)

These satisfy exactly the equation (1-96):

W''_± + [x - (5/16)x^{-2}]W_± = 0

That is,

f(x) = x,   g(x) = -(5/16)x^{-2}
We may write the solution of (1-103) in the form (1-98):

y(x) = α_+(x)W_+(x) + α_-(x)W_-(x)

At large x, a general solution of (1-103) is accurately described by the WKB approximation:

y(x) ≈ Ax^{-1/4} cos[(2/3)x^{3/2} + δ]   as x → ∞   (1-105)

so that α_+ → (A/2)e^{iδ}, α_- → (A/2)e^{-iδ} as x → ∞. We wish to investigate the error in this WKB solution when it is carried in to small values of x, this error being measured by the deviation of α_+(x) and α_-(x) from the constant values above. If the α_±(x) do not change much, we have approximately, from (1-102),

Δα_±/α_± ≈ ±(i/2)∫_{x_1}^∞ (5/16)x^{-5/2} {1 + (α_∓/α_±) exp[∓(4i/3)x^{3/2}]} dx   (1-106)

where Δα_± are the changes in α_± from x = x_1 to infinity. The complicated second term in the bracket is less important than the first (why?), and we see that the relative error |Δα_±|/|α_±| is only of the order of 10 to 20 percent even for a value of x_1 as low as x_1 = 1; at larger x_1 the error becomes very small. In particular, no significant error in phase accumulates even after an arbitrarily large number of "wavelengths."
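This error estimate invites a direct test. The Python sketch below (illustrative only) starts a solution of y'' + xy = 0 at large x with WKB initial data, integrates inward with RK4, and compares with the WKB form at x = 1; the two agree to a few percent, as the estimate (1-106) predicts.

```python
import math

def wkb(x):
    """WKB form (1-105) with A = 1, delta = 0."""
    return x ** -0.25 * math.cos(2.0 / 3.0 * x ** 1.5)

def wkb_prime(x):
    s = 2.0 / 3.0 * x ** 1.5
    return -x ** 0.25 * math.sin(s) - 0.25 * x ** -1.25 * math.cos(s)

acc = lambda x, y: -x * y            # y'' = -x y, i.e. Eq. (1-103)
x, y, v, h = 20.0, wkb(20.0), wkb_prime(20.0), -2e-4
while x > 1.0:                       # integrate inward from x = 20 to x = 1
    k1y, k1v = v, acc(x, y)
    k2y, k2v = v + h/2 * k1v, acc(x + h/2, y + h/2 * k1y)
    k3y, k3v = v + h/2 * k2v, acc(x + h/2, y + h/2 * k2y)
    k4y, k4v = v + h * k3v, acc(x + h, y + h * k3y)
    y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
    v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    x += h
print(y, wkb(x))   # the two values agree to a few percent at x = 1
```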
We now return to the problem of estimating the variation in the functions α_±(x) in the general case. Integrating Eq. (1-102) along a path Γ in the complex plane from x_1 to x_2,

Δα_± = ∫_Γ (dα_±/dx) dx = ∓(i/2)∫_Γ [g(x)/√f(x)] α_± dx ∓ (i/2)∫_Γ [g(x)/√f(x)] α_∓ (W_∓/W_±) dx   (1-107)

We shall consider only paths such that

∫_Γ |g(x)/√f(x)| ds ≪ 1   (1-108)

where ds = |dx| is the element of length along Γ. Kemble calls such a path a good path.
Along a good path, the first term on the right side of (1-107) does not contribute a significant change Δα_±. The second term can be significant only if |W_∓/W_±| ≫ 1. That is, along a good path α_+ can change only in a region where "its" function W_+ is small in magnitude compared to W_-.
Even there, α_+ cannot change if α_- is zero or nearly zero. Similar statements hold for α_- if we interchange + and - everywhere.
In the regions where |W_-/W_+| ≫ 1 or |W_+/W_-| ≫ 1, the coefficient of the small function may vary markedly, a behavior known as Stokes' phenomenon.¹⁴ (This variation has no appreciable effect on the function y(x) in this region, so that there is no inconsistency with the fact that the WKB approximation, with constant α_+ and α_-, is supposed to be valid!)
The above results will now be used to derive the WKB connection formulas.
We recall the definitions (1-93) and draw a branch line from x_0 to +∞ along the real axis. We take the roots [f(x)]^{1/4} and [f(x)]^{1/2} to be the positive real roots on the top side of this branch cut. Then, for x < x_0,

W_±(x) = e^{-iπ/4}[-f(x)]^{-1/4} exp[±∫_x^{x_0} √(-f(x)) dx]   (1-109)
Referring to (1-91) and (1-92), we have

α_+ ≈ ae^{iπ/4} (x ≪ x_0),   α_+ ≈ c (x ≫ x_0)
α_- ≈ be^{iπ/4} (x ≪ x_0),   α_- ≈ d (x ≫ x_0)   (1-110)
In order to locate the "Stokes regions," where |W_+/W_-| ≫ 1 or |W_-/W_+| ≫ 1, we note¹⁵ that near x_0, f(x) ≈ K(x - x_0), and the boundaries of these regions will be qualitatively similar to those for f(x) = K(x - x_0). These boundaries are where ∫_{x_0}^x √f(x) dx is real, since then |W_+| = |W_-|. For f(x) = K(x - x_0), if we put x - x_0 = re^{iθ},

∫_{x_0}^x √f(x) dx = (2/3)K^{1/2}(x - x_0)^{3/2} = (2/3)K^{1/2}r^{3/2}e^{i(3θ/2)}   (1-111)

This is real in the directions θ = 0, (2/3)π, (4/3)π, and it is easy to see from (1-93) which of W_+ or W_- dominates in each region between these boundaries.
A map of the Stokes regions is given in Figure 1-7, and a good path Γ is shown connecting the regions x ≪ x_0 and x ≫ x_0 on the real axis, where the WKB solutions are to be connected.
It is now easy to find the connection formula for the special solution y_1(x) which has a decreasing exponential behavior to the left of x_0, as shown in Figure 1-6. This solution has a = 0, in the notation of (1-91), or, from (1-110), α_+(x) = 0 at the start of path Γ. Thus, α_-(x) remains constant along Γ even though |W_+/W_-| ≫ 1 in region 2, since α_+ remains essentially zero. In region 1, |W_-/W_+| ≫ 1, so that α_- remains constant although α_+ may
¹⁴ For a rather amusing description of Stokes' discovery of this phenomenon, as well as one of the earliest discussions of the connection problem, see R. E. Langer (L4).
¹⁵ The reader will note a tacit assumption: we assume that a radius for the path Γ can be chosen which is small enough for the assumption f(x) ≈ K(x - x_0) to be valid, and at the same time large enough that Γ is a "good" path.
Figure 1-7 Complex x plane, showing the Stokes regions (1, 2, 3) around the point x_0 where f(x) vanishes
(and does) change. Since α_- is constant all along Γ, we find from (1-110)

d = be^{iπ/4}   (1-112)
The reality condition gives c = d* if b is real.
The connection formula for the solution y_1(x) with decreasing exponential behavior for x < x_0 is thus (normalizing to b = 1)

[-f(x)]^{-1/4} exp[-∫_x^{x_0} √(-f(x)) dx] → 2[f(x)]^{-1/4} cos[∫_{x_0}^x √f(x) dx - π/4]   (1-113)

Reversal of the arrow is delicate. We know only that the growing exponential term on the left will be absent for some phase near -π/4 on the right.
Next, consider a more general solution y_2(x), such that a ≠ 0. Since the basic equation is linear, the coefficients in expressions (1-91) and (1-92) must be linearly related:

c = Aa + Bb
d = Ca + Db   (1-114)
We have found in (1-112) that

B = e^{-iπ/4}   and   D = e^{iπ/4}   (1-115)

The reality condition d = c*, if a and b are real, says

C = A*   and   D = B*   (1-116)

the second of which we already know.
Another general relation may be obtained from the fact that the Wronskian y_1y'_2 - y_2y'_1 of two independent solutions of our equation y'' + f(x)y = 0 is a constant. (To see this, just differentiate the Wronskian.) Write

y_1y'_2 - y_2y'_1 = constant   (1-117)

Remembering that y' = α_+W'_+ + α_-W'_- and the Wronskian

W_+W'_- - W'_+W_- = -2i

one finds

y_1y'_2 - y_2y'_1 = -2i(α_+⁽¹⁾α_-⁽²⁾ - α_-⁽¹⁾α_+⁽²⁾)   (1-118)
Evaluating this constant at x ≪ x_0 [from (1-110)],

y_1y'_2 - y_2y'_1 = -2ie^{iπ/2}(a_1b_2 - b_1a_2)

where a_1 and b_1 are the coefficients a and b in the WKB approximation (1-91) for y_1(x), and a_2 and b_2 are the coefficients for y_2(x). Similarly, on the other side, where x ≫ x_0,

y_1y'_2 - y_2y'_1 = -2i(c_1d_2 - d_1c_2) = -2i(AD - BC)(a_1b_2 - b_1a_2)

Thus

e^{iπ/2} = i = AD - BC = AB* - A*B = 2i Im AB*   (1-119)
But B = e^{-iπ/4}, so Ae^{iπ/4} = R + i/2, where R is an undetermined real constant. Collecting our results,

A = Re^{-iπ/4} + (1/2)e^{iπ/4}   B = e^{-iπ/4}
C = Re^{iπ/4} + (1/2)e^{-iπ/4}   D = e^{iπ/4}   (1-120)
and, because R is undetermined, we still cannot write c and d in terms of a and b. However, this is as far as we can proceed; R is indeterminate. Another way of expressing this indeterminacy is to observe that the asymptotic form of a solution like y_2(x) in Figure 1-6 for x < x_0 does not fix y_2(x) for x > x_0. What about the reverse problem of writing a and b in terms of
36 Ordinary Differential Equations
c and d? Equations (1-114), with the constants A, B, C, D given by (1-120), are easily inverted, with the result

a = e^{-iπ/4}c + e^{iπ/4}d
b = [(1/2)e^{iπ/4} - Re^{-iπ/4}]c + [(1/2)e^{-iπ/4} - Re^{iπ/4}]d   (1-121)

Thus we can find a but not b, which might be expected, since α_-(x) fluctuates wildly in region 2 of Figure 1-7 except in the special case α_+ = 0. We might also expect this result from the fact that, when a ≠ 0, the value of b is immaterial, since the decreasing exponential is negligible compared to the increasing one. Omitting b for this reason, we can write a second formula connecting the two expressions (1-91) and (1-92) for y(x). It is conventional to choose a real solution by setting c = (1/2)e^{iφ} and d = (1/2)e^{-iφ}. Then

sin(φ + π/4) [-f(x)]^{-1/4} exp[+∫_x^{x_0} √(-f(x)) dx] ← [f(x)]^{-1/4} cos[∫_{x_0}^x √f(x) dx + φ]   (1-122)
provided φ is not too near -π/4. The arrow cannot be reversed, as we have seen, since the phase φ cannot be determined if we know only how much large exponential is present.
It is very easy to remember the connection formulas (1-113) and (1-122) by drawing qualitative pictures like Figure 1-6. Consider first y₁(x), with the decreasing exponential to the left of x₀. To the right of x₀, y₁(x) looks like a cosine wave whose phase is between −π/2 and 0 at x = x₀. We remember that the phase is −π/4. Furthermore, the amplitude of the cosine wave on the right "looks" larger than that of the exponential on the left. We remember that it is twice as large. These comments reproduce the result (1-113). A similar mnemonic for remembering (1-122) results from considering a solution y₂(x) which is 90° out of phase with y₁(x) in the right-hand region. By the same procedure, it is a straightforward matter to write down connection formulas similar to (1-113) and (1-122) for the case where the slope of f(x) is negative at x₀ instead of positive.
We shall conclude by giving an example of the use of the WKB method in quantum mechanics. Consider the Schrödinger equation for a particle in a potential well
d²ψ/dx² + (2m/ℏ²)[E − V(x)]ψ = 0    (1-123)
with V(x) as shown in Figure 1-8. Here

f(x) = (2m/ℏ²)[E − V(x)]

is positive for a < x < b and negative for x < a, x > b.
Figure 1-8  A typical one-dimensional potential well
If ψ(x) is to be bounded for x < a, then in a < x < b our connection formula (1-113) tells us
ψ(x) ≈ [A/(E − V)^{1/4}] cos[∫ₐˣ √((2m/ℏ²)(E − V)) dx − π/4]    (1-124)
where A is an arbitrary constant. If ψ(x) is to be bounded for x > b, then in a < x < b, similarly,

ψ(x) ≈ [B/(E − V)^{1/4}] cos[∫ₓᵇ √((2m/ℏ²)(E − V)) dx − π/4]    (1-125)
where B is an arbitrary constant. These two expressions must be the same, which gives the condition
∫ₐᵇ √(2m(E − V)) dx = (n + ½)πℏ        n = 0, 1, 2, …    (1-126)
This result is very similar to the Bohr-Sommerfeld quantization condition of pre-1925 quantum mechanics.
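As a quick check on (1-126), the sketch below (Python; an illustration not in the text, using units m = ℏ = ω = 1) applies the quantization condition to the harmonic oscillator V(x) = ½x². For this potential the phase integral can also be done exactly (it equals πE), so the WKB levels happen to reproduce the exact spectrum E = n + ½:

```python
import math

def phase_integral(E, steps=4000):
    # ∫_a^b sqrt(2m(E - V)) dx for V(x) = x**2 / 2 (units m = ħ = ω = 1).
    # The substitution x = x0 sin t removes the square-root singularities
    # at the turning points ±x0 = ±sqrt(2E).
    x0 = math.sqrt(2.0 * E)
    dt = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = -0.5 * math.pi + (i + 0.5) * dt
        x = x0 * math.sin(t)
        p = math.sqrt(max(2.0 * (E - 0.5 * x * x), 0.0))
        total += p * x0 * math.cos(t) * dt
    return total

def wkb_energy(n):
    # solve phase_integral(E) = (n + 1/2) * pi for E by bisection
    target = (n + 0.5) * math.pi
    lo, hi = 1e-9, 1000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if phase_integral(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a general well the same bisection works, but (1-126) is then only the leading semiclassical approximation, not exact.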
REFERENCES
There are many good books on the subject of ordinary differential equations. Two small but remarkably complete ones are Burkill (B9) and Ince (I1). The latter is a condensed version of the treatise (I2) by the same author. Another treatise, excellent and well worth looking through, even if it is slightly old-fashioned, is Forsyth (F5).
Modern texts on this subject tend to place more emphasis on numerical solutions, integral-transform methods, and special techniques for nonlinear equations than do the older texts mentioned above. Several good examples are Birkhoff and Rota (B3); Golomb and Shanks (G5); and Rabenstein (R1).
The WKB method and its applications to quantum mechanics are discussed in any good book on quantum theory, such as Schiff (S2) Section 34;
Merzbacher (M4) Chapter 7; or Landau and Lifshitz (L2) Chapter VII. Note that the last-named book refers to the WKB method as the "quasiclassical case."
PROBLEMS
Find the general solutions of Problems 1-1 to 1-20:

1-1    x²y′ + y² = xyy′

1-2    y′ = x√(1 + y²) / [y√(1 + x²)]

1-3    y′ = a²/(x + y)²
1-4    y′ + y cos x = ½ sin 2x
1-5    (1 − x²)y′ − xy = xy²
1-6    2x³y′ = 1 + √(1 + 4x²y)
1-7    y″ + y′² + 1 = 0
1-8    y″ = e^y
1-9    x(1 − x)y″ + 4y′ + 2y = 0

1-10   (1 − x)y² dx − x³ dy = 0
1-11   xy′ + y + x⁴y⁴e^x = 0
1-12   (1 + x²)y′ + y = tan⁻¹ x

1-13   x²y′² − 2(xy − 4)y′ + y² = 0    (general solution and singular solution)
1-14   yy″ − y′² − 6xy² = 0
1-15   x⁴yy″ + x⁴y′² + 3x³yy′ − 1 = 0

1-16   x²y″ − 2y = x
1-17   y‴ − 2y″ − y′ + 2y = sin x

1-18   y⁗ + 2y″ + y = cos x
1-19   y″ + 3y′ + 2y = exp(e^x)

1-20   a²y″² = (1 + y′²)³
1-21   The differential equation obeyed by the charge q on a capacitor C connected in series with a resistance R to a voltage

V = V₀ (t/τ)² e^{−t/τ}

is

R dq/dt + q/C = V₀ (t/τ)² e^{−t/τ}

Find q(t) if q(0) = 0.
1-22   In the activation of an indium foil by a constant slow neutron flux, the number N of radioactive atoms obeys the equation

dN/dt = λNₛ − λN

where Nₛ is the constant number after "saturation." Find N(t) if N(0) = 0.
1-23   Find the general solution of

A(x)y″(x) + A′(x)y′(x) + y(x)/A(x) = 0

where A(x) is a known function and y(x) is unknown.

1-24   Find the general solution of

xy″ + 2y′ + n²xy = sin ωx

Hint: Eliminate the first-derivative term.
1-25   Note that y = x would be a solution of

(1 − x)y″ + xy′ − y = (1 − x)²

if the right side were zero. Use this fact to obtain the general solution of the equation as given.
1-26   Consider the differential equation

y″ + p(x)y′ + q(x)y = 0

on the interval a ≤ x ≤ b. Suppose we know two solutions, y₁(x) and y₂(x), such that

y₁(a) = 0        y₂(b) = 0

Give the solution of the equation

y″ + p(x)y′ + q(x)y = f(x)

which obeys the conditions y(a) = y(b) = 0, in the form

y(x) = ∫ₐᵇ G(x, x′) f(x′) dx′

where G(x, x′), the so-called Green's function, involves only the solutions y₁ and y₂ and assumes different functional forms for x′ < x and x′ > x.
Illustrate by solving
y″ + k²y = f(x)        y(a) = y(b) = 0
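For the illustration y″ + k²y = f(x), one standard choice (assuming k(b − a) is not a multiple of π) is y₁(x) = sin k(x − a) and y₂(x) = sin k(x − b), with constant Wronskian W = k sin k(b − a). The following Python sketch (illustrative, not from the text) builds the Green's function from these pieces and checks it against the closed-form solution of y″ + y = 1 on (0, 1):

```python
import math

def green_solution(f, a, b, k, x, n=2000):
    # y(x) = ∫_a^b G(x, x') f(x') dx' with
    #   G(x, x') = y1(min(x, x')) y2(max(x, x')) / W,
    #   y1(s) = sin k(s - a)  (vanishes at a),
    #   y2(s) = sin k(s - b)  (vanishes at b),
    #   W = y1 y2' - y2 y1' = k sin k(b - a)   (constant, since p = 0).
    # Assumes k(b - a) is not a multiple of pi, so W != 0.
    W = k * math.sin(k * (b - a))
    total = 0.0
    h = (b - a) / n
    for i in range(n):                      # midpoint rule over x'
        xp = a + (i + 0.5) * h
        lo, hi = (xp, x) if xp < x else (x, xp)
        total += math.sin(k * (lo - a)) * math.sin(k * (hi - b)) / W * f(xp) * h
    return total

def exact(x):
    # closed-form solution of y'' + y = 1, y(0) = y(1) = 0, for comparison
    return 1 - math.cos(x) + (math.cos(1) - 1) / math.sin(1) * math.sin(x)
```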
1-27   Find the general solution of the differential equation

x²y″ + 3xy′ + 3y = 1 + x

in real form (no i's in the answer).

1-28   Consider the equation
d²y/dx² + (2/x) dy/dx + [K + 2/x − l(l + 1)/x²] y = 0

where l is a nonnegative integer. Find all values of the constant K which can give a solution that is finite on the entire range of x (including ∞). An equation like this arises in solving the Schrödinger equation for the hydrogen atom.

Hint: Let y = v/x; then "factor out" the behavior at infinity.
1-29   For what values of the constant K does the differential equation

y″ + (−¼ + K/x) y = 0        (0 < x < ∞)

have a nontrivial solution vanishing at x = 0 and x = ∞?

1-30   Find the values of the constant k for which the equation
xy″ − 2xy′ + (k − 3x)y = 0

has a solution bounded on the range 0 ≤ x < ∞.
1-31   A solution of the differential equation

xy″ + 2y′ + (E − x)y = 0

is desired such that y(0) = 1, y(∞) = 0. For what values of E is this possible?
1-32   By considering Eq. (1-71) for n → ∞, verify that v(x) indeed behaves like (1 − x²)^m as x → +1.
1-33   Bessel's equation for m = 0 is

x²y″ + xy′ + x²y = 0

We have found one solution, J₀(x). Show that a second solution exists of the form

J₀(x) ln x + Ax² + Bx⁴ + Cx⁶ + ⋯

and find the first three coefficients A, B, C.
134 Consider the differential equation
xy″ + (2 − x)y′ − 2y = 0
Give two solutions, one regular at the origin and having the value 1 there, the other of the form
1/x + A(x) ln x + B(x)
where A(x) and B(x) are regular at the origin. Three terms of each series will suffice.
1-35   Find an approximate expression (say three terms) for large x for the solution of

y′ = 1/(x² + y²)²
which approaches the x axis as x → +∞.

1-36   Consider the differential equation
y″ − xy + y³ = 0
for large positive x.
(a) Find an oscillating solution with two arbitrary constants.
(b) Find a (nontrivial) particular nonoscillating solution.
1-37   Find an (approximate) oscillating solution of

y″ = (y − x)² − e^{2(y−x)}

You may let α denote the real number such that

α = e^{−α}    (α ≈ 0.57)
1-38   Consider the differential equation

dy/dx = e^y/x

(a) Suppose y(1) = 0. Give a series expansion for y(x) which is valid for x near 1. Neglect terms of order (x − 1)⁴.
(b) Suppose y(x₀) = +∞ (x₀ > 0). Give an approximate expression for y(x) which is useful for x slightly less than x₀.
1-39   Use the WKB method to find approximate negative values of the constant E for which the equation

d²y/dx² + [E − V(x)]y = 0

has a solution which is finite for all x between x = −∞ and x = +∞, inclusive. V(x) is the function shown below.
[Figure: graph of the potential V(x) for Problem 1-39]
1-40   Find a good approximation, for x large and positive, to the solution of the equation

y″ − (3/x)y′ + [15/(4x²) + x^{1/2}] y = 0

Hint: Remove the first-derivative term.
1-41   Obtain an approximate formula for the Bessel function J_m(x) by the WKB method and give the limiting form of this expression for large x (x ≫ m). Do not worry about getting the constant in front correct. You may assume m ≥ ½.
1-42   Consider the differential equation y″ + xy = 0.

(a) If y ~ x^{−1/4} cos(⅔ x^{3/2}) as x → +∞, then y ~ ? as x → −∞.
(b) If y ~ (−x)^{−1/4} exp[−⅔(−x)^{3/2}] as x → −∞, then y ~ ? as x → +∞.
(c) If y ~ (−x)^{−1/4} exp[+⅔(−x)^{3/2}] as x → −∞, then y ~ ? as x → +∞.

Note: The answer to one of (a), (b), (c) is not defined. Be sure to indicate which one is not defined, as well as giving correct answers for the other two.
1-43   Consider the solution of the differential equation

d²y/dx² + x²y = 0

which has zero value and unit slope at x = 10.
(a) Give (approximately) the location of the next zero of y(x) greater than 10.
(b) Give (approximately) the value of y(x) at its first maximum for x> 10.
1-44   Consider a solution y₁(x) of the differential equation

y″ + x²y = 0

such that y₁(x) has a zero at x = 5. Give approximately the location of the 25th zero beyond the one at x = 5, and estimate the error in your result.
TWO
INFINITE SERIES
In this chapter we recall some tests for the convergence of series and present a number of methods for obtaining the sum of a series in closed form. Numerical and approximate methods are not discussed here, but in Chapter 13.
2-1 CONVERGENCE

An infinite series

Σ_{n=1}^∞ aₙ = a₁ + a₂ + a₃ + ⋯

is said to converge to the sum S provided the sequence of partial sums has the limit S; that is, provided

lim_{N→∞} Σ_{n=1}^N aₙ = S
If the series Σ_{n=1}^∞ aₙ converges to the sum S, we will simply write this as an equality:

Σ_{n=1}^∞ aₙ = S
The series Σ_{n=1}^∞ aₙ is said to converge absolutely if the related series Σ_{n=1}^∞ |aₙ| converges. Absolute convergence implies convergence, but not vice versa; for example, the series

1 − ½ + ⅓ − ¼ + − ⋯
converges (to the sum ln 2), but it does not converge absolutely, because

1 + ½ + ⅓ + ⋯

does not converge (that is, it diverges).
It should be emphasized that the numbers aₙ may be complex numbers. We must, of course, give the symbol |aₙ| of the preceding paragraph its usual meaning when aₙ is complex: |aₙ| denotes the absolute value (or modulus) of aₙ.
The simplest means for determining the convergence of an infinite series is to compare it with a series which is known to converge or diverge. For example, the geometric series

1/(1 − x) = 1 + x + x² + x³ + ⋯    (2-1)

converges for |x| < 1 and diverges for |x| ≥ 1. This leads to the ratio test:
If the ratio |aₙ₊₁/aₙ| of successive terms in the infinite series Σ aₙ has as a limit a number less than one as n → ∞, the series converges, and, in fact, converges absolutely. If the limit is greater than one, the series diverges. If the limit is one (or if there is no limit), we must investigate further.
A second criterion is comparison with an infinite integral. The series

f(1) + f(2) + f(3) + ⋯

converges or diverges with the infinite integral

∫₁^∞ f(x) dx

provided f(x) is monotonically decreasing. For example, consider the series for the Riemann zeta function
ζ(s) = 1 + 1/2ˢ + 1/3ˢ + 1/4ˢ + ⋯    (2-2)
The ratio of successive terms is

aₙ₊₁/aₙ = [n/(n + 1)]ˢ = (1 + 1/n)^{−s} → 1 − s/n + ⋯    (2-3)
The ratio approaches one as n → ∞. Thus, comparison with the geometric series fails. However,

∫ₓ^∞ dt/tˢ = 1/[(s − 1)x^{s−1}]

so that the criterion for convergence of the zeta-function series is s > 1. This enables us to sharpen our ratio test: if

aₙ₊₁/aₙ = 1 − s/n + ⋯    (2-4)

with s greater than 1, the series converges (absolutely).
EXAMPLE
Consider the hypergeometric series

F(a, b; c; x) = 1 + (ab/c)(x/1!) + [a(a + 1)b(b + 1)/(c(c + 1))](x²/2!) + ⋯    (2-6)
The ratio of successive terms is
0,,+ 1 (a + n)(b + n)
~= x
an (c + n)(n + 1)
(1 + ;;)( 1 + ;)
= X
(1 + :)(1 + !)
( a+bcl )
=1+ n +···X
Thus the series converges if [x I < I, or t when [x I = I, if 1 a+bc<O
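A numerical illustration (Python; the values a = b = ½, c = 2 are our own choice): at x = 1 we have a + b − c = −1 < 0, so the series converges, and the sum can be compared with Gauss's summation theorem, F(a, b; c; 1) = Γ(c)Γ(c − a − b)/[Γ(c − a)Γ(c − b)]:

```python
import math

def hypergeometric_partial(a, b, c, x, terms):
    # partial sum of F(a, b; c; x); successive terms are generated from
    # the ratio a_{n+1}/a_n = (a + n)(b + n) x / ((c + n)(n + 1))
    term, total = 1.0, 1.0
    for n in range(terms - 1):
        term *= (a + n) * (b + n) * x / ((c + n) * (n + 1))
        total += term
    return total

a, b, c = 0.5, 0.5, 2.0          # a + b - c = -1 < 0: converges at x = 1
gauss = math.gamma(c) * math.gamma(c - a - b) / (math.gamma(c - a) * math.gamma(c - b))
```

With these parameters the Gauss value is 4/π; the terms fall off only like 1/n², so many terms are needed, consistent with the marginal convergence the ratio test predicts.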
We can sharpen our ratio test even more by considering a very slowly converging series such as

Σ_{n=2}^∞ 1/[n(ln n)ˢ] = 1/[2(ln 2)ˢ] + 1/[3(ln 3)ˢ] + ⋯    (2-8)
¹ If we allow a, b, c to be complex numbers, condition (2-7) becomes Re(a + b − c) < 0.
By comparing with the integral

∫ dx/[x(ln x)ˢ] = −1/[(s − 1)(ln x)^{s−1}]    (2-9)

we see that the series (2-8) converges provided s > 1. The ratio of successive terms in (2-8) is
aₙ₊₁/aₙ = [n/(n + 1)] [ln n / ln(n + 1)]ˢ
        = (1 − 1/n + ⋯)[1 + 1/(n ln n) + ⋯]^{−s}
        = 1 − 1/n − s/(n ln n) + ⋯    (2-10)
Thus a series for which |aₙ₊₁/aₙ| is of the form (2-10) converges if s > 1 and diverges if s ≤ 1.
EXAMPLE
Consider the series solution (1-51) of Legendre's equation

1 − n(n + 1) x²/2! + n(n + 1)(n − 2)(n + 3) x⁴/4! − + ⋯
The ratio of successive terms is

aᵢ/aᵢ₋₁ = −[(n − 2i + 4)(n + 2i − 3) / ((2i − 3)(2i − 2))] x²

For large i, this ratio is

aᵢ/aᵢ₋₁ ≈ [1 − 1/i + O(1/i²)] x²

Thus, if x² = 1, the series diverges.
There are, of course, other convergence criteria; for example, if the signs of the aₙ alternate and aₙ approaches zero monotonically, then the series Σₙ aₙ converges (but not necessarily absolutely). The reader interested in a more rigorous and more complete treatment of convergence, absolute convergence, and so forth, should consult any of the standard references, such as those listed at the end of this chapter.
2-2 FAMILIAR SERIES
The reader should be familiar with at least the following simple series:
(1 + x)^α = 1 + αx + α(α − 1) x²/2! + ⋯ = Σ_{n=0}^∞ [α!/((α − n)! n!)] xⁿ    (2-11)

e^x = 1 + x + x²/2! + x³/3! + ⋯ = Σ_{n=0}^∞ xⁿ/n!    (2-12)
Using the Euler relation e^{ix} = cos x + i sin x, we deduce from (2-12)

sin x = x − x³/3! + x⁵/5! − + ⋯    (2-13)

cos x = 1 − x²/2! + x⁴/4! − + ⋯    (2-14)
Term-by-term integration of the series for (1 + x)^{−1} and (1 + x²)^{−1} yields

ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + − ⋯    (2-15)

tan⁻¹ x = x − x³/3 + x⁵/5 − + ⋯    (2-16)
The above series are simple enough to be easily remembered. Other elementary functions, such as tan x, ctn x, sec x, and so on, have series expansions which are probably less familiar. For some, the coefficients are expressed in terms of the so-called Bernoulli numbers or the Euler numbers (which may be defined, in fact, by these coefficients). See Problem 2-12 for sec x.
We shall introduce the Bernoulli numbers by an expansion of the function (e^x − 1)^{−1}. Multiplying by x to eliminate the singularity at x = 0, we can write

x/(e^x − 1) = c₀ + c₁x + c₂x² + ⋯    (2-17)
Therefore,

x = (c₀ + c₁x + c₂x² + ⋯)(x + x²/2! + x³/3! + ⋯)

and, dividing by x,

1 = (c₀ + c₁x + c₂x² + ⋯)(1 + x/2! + x²/3! + ⋯)

Let cₙ = Bₙ/n!. Then

1 = (Σ_{n=0}^∞ Bₙxⁿ/n!)(1 + x/2! + x²/3! + ⋯)    (2-18)
Equating powers of x in (2-18),

1 = B₀
0 = B₀/2! + B₁    (2-19)
0 = B₀/3! + B₁/2! + B₂/2!

etc.
Except for the first one, Eqs. (2-19) may be written symbolically in the concise form (B + 1)ⁿ − Bⁿ = 0, with the understanding that Bᵏ really means Bₖ. The first few of these Bernoulli numbers are

B₀ = 1    B₁ = −½    B₂ = 1/6    B₄ = −1/30    B₆ = 1/42    B₈ = −1/30, …
B₃ = B₅ = B₇ = ⋯ = 0, that is, B_{2n+1} = 0 (n = 1, 2, 3, …)    (2-20)
Notations vary, so care must be exercised when looking up formulas. For example, the notations of Dwight (D8) and Pierce and Foster (P1) differ from ours.
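The recursion (2-19), in the symbolic form (B + 1)ⁿ − Bⁿ = 0, determines the Bₙ one at a time. A short sketch in Python (illustrative, not from the text) carries this out in exact rational arithmetic:

```python
from fractions import Fraction

def bernoulli(n_max):
    # Generate B_0 .. B_{n_max} from sum_{k=0}^{n-1} C(n, k) B_k = 0 (n >= 2),
    # i.e. (B + 1)^n - B^n = 0 with B^k read as B_k, rewritten as
    #   B_m = -1/(m + 1) * sum_{k=0}^{m-1} C(m + 1, k) B_k.
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        s = Fraction(0)
        binom = 1                    # C(m + 1, k), built up as k increases
        for k in range(m):
            s += binom * B[k]
            binom = binom * (m + 1 - k) // (k + 1)
        B.append(-s / (m + 1))
    return B
```

The first few values reproduce the list (2-20) exactly.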
As an example of the usefulness of Bernoulli numbers as coefficients in power series of familiar functions, consider

ctn x = i (e^{ix} + e^{−ix}) / (e^{ix} − e^{−ix})    (2-21)
Let ix = y/2. Then

ctn x = i (e^{y/2} + e^{−y/2}) / (e^{y/2} − e^{−y/2})
      = i [1 + 2/(e^y − 1)]
      = (2i/y) [y/2 + y/(e^y − 1)]
      = (2i/y) Σ_{n even} Bₙ yⁿ/n!
Replacing y by 2ix gives

ctn x = (1/x) Σ_{n even} (−1)^{n/2} Bₙ (2x)ⁿ/n!    (2-22)
The relation tan x = ctn x − 2 ctn 2x enables us to deduce the power series for tan x.
2-3 TRANSFORMATION OF SERIES
Various devices may be used to reduce a given unfamiliar series to a known one.
Differentiation and integration are often useful.
EXAMPLE
f(x) = 1 + 2x + 3x² + 4x³ + ⋯    (2-23)

Integrating term by term,

∫₀ˣ f(x) dx = x + x² + x³ + ⋯ = x/(1 − x)

Then, differentiating,

f(x) = d/dx [x/(1 − x)] = 1/(1 − x)²

[We might have recognized this immediately from the binomial series (2-11).]
EXAMPLE
f(x) = 1/(1·2) + x/(2·3) + x²/(3·4) + x³/(4·5) + ⋯    (2-24)
Differentiating twice,

(x²f)″ = 1 + x + x² + x³ + ⋯ = 1/(1 − x)

From this, two integrations give

f(x) = 1/x + [(1 − x)/x²] ln(1 − x)
The constants of integration which arise in this procedure must be evaluated by knowing the series at certain values of x; for example, it is obvious from (2-24) that f(0) = ½.
Complex variables sometimes provide useful transformations.
EXAMPLE
f(θ) = 1 + a cos θ + a² cos 2θ + ⋯
     = Re (1 + ae^{iθ} + a²e^{2iθ} + ⋯)    (2-25)

This is just a simple geometric series. Therefore,

f(θ) = Re [1/(1 − ae^{iθ})] = (1 − a cos θ)/(1 − 2a cos θ + a²)
The trick of differentiation or integration may be employed even if the series does not contain a variable.
EXAMPLE
S = 1/2! + 2/3! + 3/4! + ⋯    (2-26)

Define

f(x) = x²/2! + 2x³/3! + 3x⁴/4! + ⋯

Then S = f(1).

f′(x) = x + x² + x³/2! + x⁴/3! + ⋯ = xe^x

f(x) = ∫₀ˣ xe^x dx = xe^x − e^x + 1

S = f(1) = 1
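A two-line numerical confirmation of this manipulation (Python, purely illustrative):

```python
import math

# partial sum of S = 1/2! + 2/3! + 3/4! + ...  -- cf. (2-26)
S = sum(n / math.factorial(n + 1) for n in range(1, 40))

# f(1) from the closed form f(x) = x e^x - e^x + 1 derived above
f1 = 1 * math.e - math.e + 1
```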
EXAMPLE
S = 1 + m + m(m − 1)/2! + m(m − 1)(m − 2)/3! + ⋯ = Σₙ m!/[n!(m − n)!]    (2-27)

Let

f(x) = Σₙ [m!/(n!(m − n)!)] xⁿ

so that S = f(1). But

f(x) = (1 + x)ᵐ

Therefore S = 2ᵐ.
We shall now consider the deceptively simple series [compare (2-2)]

S = Σ_{n=1}^∞ 1/n² = 1 + 1/4 + 1/9 + 1/16 + ⋯ = ζ(2)    (2-28)
Previous examples suggest defining

f(x) = x + x²/4 + x³/9 + x⁴/16 + ⋯    (2-29)

so that S = f(1).
f′(x) = 1 + x/2 + x²/3 + ⋯ = −(1/x) ln(1 − x)
Thus

f(x) = −∫₀ˣ [ln(1 − x)/x] dx

and

S = −∫₀¹ [ln(1 − x)/x] dx    (2-30)
Unfortunately, this integral is just as difficult as the series was.
The series (2-28) can be expressed in terms of a Bernoulli number. To find this relation, we begin by writing the Fourier series for cos kx. (We shall discuss Fourier series in more detail in Chapter 4.)

cos kx = A₀/2 + Σ_{n=1}^∞ (Aₙ cos nx + Bₙ sin nx)    (2-31)
cos kx is an even function, so there are no sine terms (Bₙ = 0). The coefficients Aₙ are given by

Aₙ = (2/π) ∫₀^π cos nx cos kx dx = (−1)ⁿ (2k sin kπ)/[π(k² − n²)]    (2-32)
Thus

cos kx = (2k sin kπ/π) [1/(2k²) − cos x/(k² − 1) + cos 2x/(k² − 4) − cos 3x/(k² − 9) + − ⋯]    (2-33)

Setting x = π in (2-33) gives

kπ ctn kπ = 1 + 2k² [1/(k² − 1) + 1/(k² − 4) + 1/(k² − 9) + ⋯]    (2-34)
This is the partial fraction representation of the cotangent. For those who know the relevant mathematics, this partial fraction representation could
have been written down immediately by using the Mittag-Leffler theorem² from complex variable theory.
In any case, by expanding (2-34) as a power series in k, we obtain

kπ ctn kπ = 1 − 2k² (1 + 1/2² + 1/3² + ⋯) − 2k⁴ (1 + 1/2⁴ + 1/3⁴ + ⋯) − ⋯
          = 1 − 2 Σ_{n=1}^∞ ζ(2n) k²ⁿ        (k ≠ 0, ±1, ±2, …)    (2-35)
where ζ(2n) is defined in (2-2). Comparing (2-35) with our power series (2-22) for the cotangent in terms of Bernoulli numbers, we find

ζ(2n) = (−1)^{n+1} B_{2n} (2π)²ⁿ / [2(2n)!]    (2-36)

Thus

1 + 1/4 + 1/9 + ⋯ = ζ(2) = B₂(2π)²/4 = π²/6    (2-37)
1 + 1/16 + 1/81 + ⋯ = ζ(4) = −B₄(2π)⁴/48 = π⁴/90    (2-38)

etc.
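The values (2-37) and (2-38) are easy to confirm numerically (Python sketch, illustrative; the 1/N-type tail correction is just the integral test of Section 2-1 applied to the remainder):

```python
import math

def zeta_estimate(s, N=100000):
    # partial sum of zeta(s), plus the integral-test estimate of the tail:
    # sum_{n>N} 1/n^s ≈ ∫_N^∞ dx/x^s = N^(1-s)/(s-1)
    partial = sum(1.0 / n**s for n in range(1, N + 1))
    return partial + N**(1 - s) / (s - 1)
```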
We shall conclude this section by discussing a transformation which is useful both for analytical and numerical summing of series. Let

g(z) = Σ_{n=0}^∞ bₙ zⁿ    (2-39)

be a series whose sum we know, and suppose we want to sum the series

f(z) = Σ_{n=0}^∞ cₙ bₙ zⁿ    (2-40)
² See Copson (C8) Section 6.8, or Whittaker and Watson (W5) Section 7.4. The theorem, roughly speaking, says that a function f(z) which is analytic everywhere in the finite z-plane except for simple poles at z = a₁, a₂, … with residues b₁, b₂, …, and which (except near the poles) remains finite as |z| approaches infinity, may be written in the form

f(z) = f(0) + Σₙ bₙ [1/(z − aₙ) + 1/aₙ]

Equation (2-34) follows immediately if we take for f(z) the function πz ctn πz.
We eliminate the bₙ from (2-40):

f(z) = c₀b₀ + c₁b₁z + c₂b₂z² + c₃b₃z³ + ⋯
     = c₀g(z) + (c₁ − c₀)b₁z + (c₂ − c₀)b₂z² + (c₃ − c₀)b₃z³ + ⋯
     = c₀g(z) + (c₁ − c₀)zg′(z) + (c₂ − 2c₁ + c₀)b₂z² + (c₃ − 3c₁ + 2c₀)b₃z³ + ⋯
     = c₀g(z) + (c₁ − c₀)zg′(z) + (c₂ − 2c₁ + c₀) z²g″(z)/2!
         + (c₃ − 3c₂ + 3c₁ − c₀) z³g‴(z)/3! + ⋯    (2-41)
The successive coefficients are just the leading differences in a difference table of the coefficients cₙ:

c₀
        c₁ − c₀
c₁                  c₂ − 2c₁ + c₀
        c₂ − c₁                       c₃ − 3c₂ + 3c₁ − c₀
c₂                  c₃ − 2c₂ + c₁
        c₃ − c₂
⋯
The most common example of this transformation is Euler's transformation. Set

g(z) = 1/(1 + z) = 1 − z + z² − z³ + − ⋯    (2-42)

Thus

bₙ = (−1)ⁿ

zg′(z) = −z/(1 + z)²

z²g″(z)/2! = z²/(1 + z)³

etc.
Thus, if f(z) = Σ_{n=0}^∞ (−1)ⁿ cₙ zⁿ, the Euler transformation tells us that

f(z) = [1/(1 + z)] [c₀ − (c₁ − c₀)(z/(1 + z)) + (c₂ − 2c₁ + c₀)(z/(1 + z))² − + ⋯]    (2-43)

For example, consider the series
f(z) = 1 − ½z + ⅓z² − ¼z³ + − ⋯    (2-44)

[We already know f(z) = (1/z) ln(1 + z), of course.] This is a rather poorly converging series; in fact, it diverges for |z| > 1. From the difference table
1
        −1/2
1/2                1/3
        −1/6                −1/4
1/3                1/12               1/5
        −1/12               −1/20
1/4                1/30
        −1/20
1/5

we see that
f(z) = [1/(1 + z)] [1 + ½(z/(1 + z)) + ⅓(z/(1 + z))² + ⋯]    (2-45)
This new series converges much better near z = 1, for example. In fact, it converges for all positive z.
This is just one example of the tricks that are available to turn one series into another, more useful, one. The important practical problem of how to find the numerical value of a series will be treated in Chapter 13.
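The whole procedure, difference table included, is a few lines of code. The Python sketch below (illustrative, not from the text) applies the Euler transformation to the series (2-44) at z = 1, where the direct series 1 − ½ + ⅓ − ⋯ converges painfully slowly; forty transformed terms already give ln 2 to ten places:

```python
import math

def euler_transform(c, z, terms):
    # Sum f(z) = Σ (-1)^n c_n z^n via the transformed series
    # f = 1/(1+z) Σ_k (Δ^k c_0) (-z/(1+z))^k, where Δ^k c_0 are the
    # leading forward differences of the coefficients c_0, c_1, c_2, ...
    w = -z / (1.0 + z)
    diffs = [c(n) for n in range(terms)]
    total = 0.0
    for k in range(terms):
        total += (w ** k) * diffs[0]
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total / (1.0 + z)
```

For comparison, forty terms of the untransformed alternating series are still in error by roughly 1/80.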
REFERENCES
Infinite series are discussed quite thoroughly in numerous books on complex variable theory. See, for example, Whittaker and Watson (W5) Chapters II and III, or Apostol (A5) Chapters 12 and 13.
Several fairly elementary references are Hyslop (H13); Green (G7); and Stanaitis (S10). The monograph by Hirschman (H10) is also very readable, but it is at a somewhat more advanced level than the three preceding references.
Two standard treatises, with somewhat of a nineteenth-century flavor, are those by Bromwich (B8) and Knopp (K3).
Davis (D2) presents in a systematic manner many devices for summing series. Many elementary series, including those leading to Bernoulli numbers and Euler numbers, are given near the beginning of Dwight (D8).
PROBLEMS
Find the sums of the series 2-1 through 2-7:

2-1    1 + 1/4 − 1/16 − 1/64 + 1/256 + 1/1024 − − + + ⋯

2-2    1/(1·3) + 1/(2·4) + 1/(3·5) + 1/(4·6) + ⋯

2-3    1 − 1/(5·3²) − 1/(7·3³) + 1/(11·3⁵) + 1/(13·3⁶) − − + + ⋯

2-4    1/0! + 2/1! + 3/2! + ⋯

2-5    1 + 1/9 + 1/25 + 1/49 + ⋯

2-6    1 + 1/9² + 1/25² + 1/49² + ⋯

2-7    1 − 1/4² + 1/9² − 1/16² + − ⋯

2-8    Evaluate in closed form the sum

f(θ) = sin θ + ½ sin 2θ + ⅓ sin 3θ + ¼ sin 4θ + ⋯

(you may assume 0 < θ < π for definiteness).
Do the following series converge or not?

2-9    1/(9·7·25·1!) + 2²/(11·9·49·2!) + 2⁴/(13·11·81·3!) + ⋯

2-10   (1·3)²/(1·1·1²) + (1·3·5)²/(4·2·(1·2)²) + (1·3·5·7)²/(16·3·(1·2·3)²) + (1·3·5·7·9)²/(64·4·(1·2·3·4)²) + ⋯
2-11   Evaluate the series

in closed form, by comparing with

sin x = x − x³/3! + x⁵/5! − + ⋯
2-12   Consider the so-called Euler numbers, defined by

sec z = Σ_{n=0}^∞ (−1)ⁿ E₂ₙ z²ⁿ/(2n)!

(a) Show that

(E + 1)ᵏ + (E − 1)ᵏ = 0        (k even)

Give E₂, E₄, E₆, E₈.

(b) Using the partial fraction expansion of the secant

kπ sec kπ = 4k [1/(1 − 4k²) − 3/(9 − 4k²) + 5/(25 − 4k²) − + ⋯]
evaluate:

(1)

(2)    1 − 1/3^{2n+1} + 1/5^{2n+1} − + ⋯
THREE
EVALUATION OF INTEGRALS
In the first three sections of this chapter we discuss various techniques for the analytic evaluation of definite integrals. These techniques include differentiation or integration with respect to a parameter, the exploitation of symmetries, and evaluation by contour integration. Tabulated integrals such as the gamma function, beta function, exponential integral, elliptic integrals, and so forth, are described in Section 3-4. Approximate expansions, especially asymptotic expansions, are treated in Section 3-5, and the saddle-point method is discussed in Section 3-6.
3-1 ELEMENTARY METHODS
We first review some useful elementary techniques for doing integrals.
The simplest device is changing variables. For example, the standard trick for evaluating

∫₀^∞ e^{−t²} dt = ½√π    (3-1)

by using polar coordinates is presumably familiar. From this result, by setting t = u√a, we find

∫₀^∞ e^{−au²} du = ½√(π/a)    (3-2)
or, by setting t² = u, we may deduce

∫₀^∞ e^{−u} u^{−1/2} du = √π    (3-3)
How about ∫₀^∞ e^{−t^α} dt? This is not so easy. Let

I_α = ∫₀^∞ e^{−t^α} dt    (3-4)

Make the change of variable

t^α = u        dt = (1/α) u^{(1/α)−1} du

Then

I_α = (1/α) ∫₀^∞ e^{−u} u^{(1/α)−1} du
This does not look any easier, but one defines the gamma function Γ(z) by

Γ(z) = ∫₀^∞ e^{−u} u^{z−1} du    (3-5)

so that

I_α = (1/α) Γ(1/α)    (3-6)

We shall say more about the gamma function in Section 3-4; note that (3-1) gives Γ(½) = √π.
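A quick numerical sanity check of (3-6) (Python; α = 4 is our own illustrative choice, and the upper limit and step count are arbitrary quadrature parameters):

```python
import math

def integral_exp_power(alpha, upper=10.0, n=10000):
    # Simpson's rule for ∫_0^∞ exp(-t**alpha) dt; exp(-t**4) is already
    # utterly negligible at t = 10, so a finite interval suffices.
    h = upper / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp(-t**alpha)
    return s * h / 3.0

lhs = integral_exp_power(4.0)
rhs = 0.25 * math.gamma(0.25)      # (1/α) Γ(1/α) with α = 4
```

Setting α = 2 instead recovers (3-1), since ½Γ(½) = ½√π.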
Another useful technique is to introduce complex variables.
EXAMPLE
I = ∫₀^∞ e^{−ax} cos λx dx
  = Re ∫₀^∞ e^{−ax} e^{iλx} dx
  = Re [1/(a − iλ)]    (3-7)

therefore,

I = a/(a² + λ²)    (3-8)

Note that our method seems to require that λ be real, but the result is in fact correct for λ anywhere in the strip |Im λ| < a. How did this happen?
This method gives us another integral at the same time, from the imaginary part:

∫₀^∞ e^{−ax} sin λx dx = λ/(a² + λ²)    (3-9)
The method of integration by parts is very useful, and is presumably familiar by now.
Another useful trick is differentiation or integration with respect to a parameter.
EXAMPLE
I = ∫₀^∞ e^{−ax} cos λx · x dx

Let

f(a) = ∫₀^∞ e^{−ax} cos λx dx = a/(a² + λ²)

Then

I = −(d/da) f(a) = (a² − λ²)/(a² + λ²)²
We have assumed that the order of differentiation and integration can be reversed. For necessary and sufficient conditions, see mathematics books such as Whittaker and Watson (W5) Chapter IV, or Apostol (A5) Chapter 9. In physical applications it will nearly always work.
EXAMPLE
I = ∫₀^∞ (sin x/x) dx    (3-10)

Let

f(a) = ∫₀^∞ e^{−ax} (sin x/x) dx

so that I = f(0).

df(a)/da = −∫₀^∞ e^{−ax} sin x dx = −1/(a² + 1)

f(a) = −∫ da/(a² + 1) = C − tan⁻¹ a

But f(∞) = 0. Therefore C = π/2,

f(a) = π/2 − tan⁻¹ a

and

I = f(0) = π/2    (3-11)
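The intermediate result f(a) = π/2 − tan⁻¹ a is itself easy to verify by direct quadrature, since for a > 0 the factor e^{−ax} tames the integral. A Python sketch (illustrative; the cutoff and step count are arbitrary choices), here for a = 1:

```python
import math

def f_of_a(a, upper=60.0, n=60000):
    # Simpson's rule for f(a) = ∫_0^∞ e^{-ax} (sin x)/x dx; truncating
    # at x = 60 is harmless for a ≥ 1 since e^{-60} is negligible.
    h = upper / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        g = 1.0 if x == 0.0 else math.sin(x) / x      # (sin x)/x → 1 at x = 0
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.exp(-a * x) * g
    return s * h / 3.0
```

The limiting case a → 0 is exactly the delicate step the parameter trick handles analytically; the quadrature above would need ever larger cutoffs there.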
Sometimes one can combine several derivatives of an integral to form a differential equation.
EXAMPLE
f(α) = ∫₀^∞ [e^{−αx}/(1 + x²)] dx    (3-12)

f″(α) + f(α) = ∫₀^∞ e^{−αx} dx = 1/α

This equation is easily solved by the variation-of-parameters method discussed in Section 1-1. The result is

f(α) = −cos α ∫^α (sin t/t) dt + sin α ∫^α (cos t/t) dt

But f(α) and all its derivatives vanish at α = ∞. Thus

f(α) = sin α ∫_∞^α (cos t/t) dt − cos α ∫_∞^α (sin t/t) dt
The cosine-integral and sine-integral functions are defined by

Ci x = ∫_∞^x (cos t/t) dt        Si x = ∫₀^x (sin t/t) dt    (3-13)

Thus f(α) = sin α Ci α + cos α (π/2 − Si α). The functions Si α and Ci α are tabulated in Jahnke et al. (J3) and Abramowitz and Stegun (A1).
Finally, the useful integrals ∫₀^∞ e^{−ax²} xⁿ dx (n = 0, 1, 2, …) can be obtained from the first two,

∫₀^∞ e^{−ax²} dx = ½√(π/a)        ∫₀^∞ e^{−ax²} x dx = 1/(2a)    (3-14)

by repeated differentiation.
We observe in passing that if a parameter in an integral also appears in the limit(s), differentiation with respect to that parameter proceeds according to the following rule:

(d/dα) ∫_{a(α)}^{b(α)} f(x, α) dx = ∫_{a(α)}^{b(α)} [∂f(x, α)/∂α] dx + f(b(α), α) db/dα − f(a(α), α) da/dα    (3-15)
3-2 USE OF SYMMETRY ARGUMENTS
The evaluation of some integrals may be greatly simplified by exploiting the symmetries present in the problem. We shall illustrate the principles involved by means of some integrals over solid angle in three dimensions.
EXAMPLE
Consider the integral
I₁(k) = ∫ dΩ/(1 + k·r̂) = ∫₀^{2π} dφ ∫_{−1}^{+1} d(cos θ) [1/(1 + k·r̂)]    (3-16)

where r̂ is the unit radius vector and (θ, φ) are conventional spherical polar coordinates:

r̂ₓ = sin θ cos φ        r̂_y = sin θ sin φ        r̂_z = cos θ
Since the orientation of our coordinate system is arbitrary, we may choose the z axis along k and obtain (we assume k < 1)

I₁(k) = ∫₀^{2π} dφ ∫_{−1}^{+1} d(cos θ)/(1 + k cos θ) = (2π/k) ln[(1 + k)/(1 − k)]    (3-17)
The integrals

Iₙ(k) = ∫ dΩ/(1 + k·r̂)ⁿ    (3-18)

may be obtained from I₁ by differentiation (replace the 1 in the denominator by α, differentiate with respect to α, and set α equal to 1 again). For example,

I₂(k) = ∫ dΩ/(1 + k·r̂)² = 4π/(1 − k²)    (3-19)
Another example is

I₁(k, a) = ∫ (a·r̂) dΩ/(1 + k·r̂)    (3-20)

This integral is complicated by the fact that two directions are given by the two vectors a and k, and we cannot choose our polar axis along both of them. However, the direction a is trivial, in that it may be "factored out." For, consider

J(k) = ∫ r̂ dΩ/(1 + k·r̂)    (3-21)
Clearly

I₁(k, a) = a·J

Now J(k) is a vector and must point in the direction k, since no other direction is specified in the definition (3-21) of J. Therefore,

J(k) = Ak    (3-22)
To evaluate the scalar A, we dot k into both sides of (3-22) and obtain

A = (1/k²) k·J(k) = (1/k²) ∫ [k·r̂/(1 + k·r̂)] dΩ
  = (1/k²) ∫ dΩ [1 − 1/(1 + k·r̂)]
  = (4π/k²) [1 − (1/2k) ln((1 + k)/(1 − k))]

Thus our original integral (3-20) is

I₁(k, a) = a·J = (a·k)(4π/k²) [1 − (1/2k) ln((1 + k)/(1 − k))]    (3-23)
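Both (3-17) and (3-23) can be checked by brute-force quadrature over the sphere. The Python sketch below is purely illustrative; the vectors k = 0.5ẑ and a, and the grid sizes, are arbitrary choices:

```python
import math

def sphere_quad(f, n_th=300, n_ph=300):
    # midpoint quadrature of ∫ f(r̂) dΩ over the unit sphere
    dth = math.pi / n_th
    dph = 2.0 * math.pi / n_ph
    total = 0.0
    for i in range(n_th):
        th = (i + 0.5) * dth
        st, ct = math.sin(th), math.cos(th)
        for j in range(n_ph):
            ph = (j + 0.5) * dph
            r = (st * math.cos(ph), st * math.sin(ph), ct)
            total += f(r) * st          # dΩ = sinθ dθ dφ
    return total * dth * dph

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

k = (0.0, 0.0, 0.5)           # k along the z axis, |k| = 0.5 < 1
a = (0.3, -0.2, 0.7)          # an arbitrary fixed vector
kk = 0.5
log_term = math.log((1 + kk) / (1 - kk))

I1_num = sphere_quad(lambda r: 1.0 / (1.0 + dot(k, r)))
I1_exact = (2 * math.pi / kk) * log_term                       # (3-17)

A = (4 * math.pi / kk**2) * (1 - log_term / (2 * kk))
Ia_num = sphere_quad(lambda r: dot(a, r) / (1.0 + dot(k, r)))
Ia_exact = dot(a, k) * A                                       # (3-23)
```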
What about the integral

I₂(k, a) = ∫ (a·r̂) dΩ/(1 + k·r̂)² ?
We could obtain I₂(k, a) from I₁(k, a) by replacing the 1 by α and differentiating as before, but a simpler method is to differentiate I₁(k) with respect to k. On the one hand,

∂I₁(k)/∂k = (∂/∂k) ∫ dΩ/(1 + k·r̂) = −∫ r̂ dΩ/(1 + k·r̂)²    (3-24)

On the other hand, using (3-17),

∂I₁(k)/∂k = (k/k) dI₁(k)/dk = (2πk/k²) [2/(1 − k²) − (1/k) ln((1 + k)/(1 − k))]    (3-25)

Comparing (3-24) and (3-25) gives

∫ r̂ dΩ/(1 + k·r̂)² = (2πk/k²) [−2/(1 − k²) + (1/k) ln((1 + k)/(1 − k))]

and therefore

∫ (a·r̂) dΩ/(1 + k·r̂)² = a·∫ r̂ dΩ/(1 + k·r̂)² = (2πk·a/k²) [−2/(1 − k²) + (1/k) ln((1 + k)/(1 − k))]    (3-26)
Other examples of the use of symmetry arguments are the evaluation of integrals such as

Φ₁(a, b) = ∫ dΩ (r̂·a)(r̂·b)        Φ₂(a, b, c, d) = ∫ dΩ (r̂·a)(r̂·b)(r̂·c)(r̂·d)    (3-27)

etc.
To evaluate Φ₁, we observe that it is a scalar which is linear in both a and b. The only possibility is that Φ₁ = A a·b, where A is a number. To find A, let a and b both equal ẑ. Then

Φ₁(ẑ, ẑ) = A = ∫ dΩ (r̂·ẑ)² = ∫ dΩ cos²θ = 4π/3

Therefore Φ₁(a, b) = (4π/3) a·b.
The evaluation of Φ₂ proceeds similarly. Since Φ₂ is a scalar, linear in each of the four vectors a, b, c, d, as well as being invariant under any interchange of these vectors, it must have the form

Φ₂ = B(a·b c·d + a·c b·d + a·d b·c)

where B is a number. B is found by setting a = b = c = d = ẑ, so that

Φ₂(ẑ, ẑ, ẑ, ẑ) = 3B = ∫ dΩ (r̂·ẑ)⁴ = ∫ dΩ cos⁴θ = 4π/5

Therefore B = 4π/15, and

Φ₂(a, b, c, d) = (4π/15)(a·b c·d + a·c b·d + a·d b·c)
As a final example of the sort of device one can use to simplify integrals, we mention the identity

1/(ab) = ∫₀¹ du/[au + b(1 − u)]²    (3-28)

which Feynman has used to simplify the evaluation of integrals arising in quantum field theory.² As an application of (3-28), we evaluate the integral

ψ(k, l) = ∫ dΩ/[(1 + k·r̂)(1 + l·r̂)]    (3-29)

² See the Appendix of R. P. Feynman (F3).
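The identity (3-28) is easy to check numerically for particular values (Python sketch; a = 2 and b = 5 are arbitrary illustrative choices):

```python
def feynman_check(a, b, n=20000):
    # midpoint rule for ∫_0^1 du / [a u + b (1 - u)]^2
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += h / (a * u + b * (1.0 - u)) ** 2
    return total

lhs = feynman_check(2.0, 5.0)    # should equal 1/(ab) = 1/10
```

The point of the identity, of course, is not numerics but that it merges two denominators into one, at the cost of a parameter integral that is often easier.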
Use of (3-28) converts (3-29) to

ψ(k, l) = ∫₀¹ du ∫ dΩ / {1 + r̂·[ku + l(1 − u)]}²    (3-30)

The solid-angle integral in (3-30) is just I₂[ku + l(1 − u)], as given by (3-19). Thus

ψ(k, l) = 4π ∫₀¹ du / {1 − [ku + l(1 − u)]²}

This is an elementary integral, although rather tedious; the answer has the interesting form
ψ(k, l) = [4π/√(A² − B²)] cosh⁻¹(A/B)    (3-31)

where

A = 1 − k·l        B = √((1 − k²)(1 − l²))

The proof of (3-31) is left as an exercise (Problem 3-37).
3-3 CONTOUR INTEGRATION
One of the most powerful means for evaluating definite integrals is provided by the theorem of residues from the theory of functions of a complex variable. We shall illustrate this method of contour integration by a number of examples in this section. Before reading this material, the student who does not know the theory of functions of a complex variable reasonably well should review (or learn) certain parts of this theory. These parts are presented in the Appendix of this book to serve as an aid in the review (or as a guide to the study).
The theorem of residues [Appendix, Eq. (A-15)] tells us that if a function f(z) is regular in the region bounded by a closed path C, except for a finite number of poles and isolated essential singularities in the interior of C, then the integral of f(z) along the contour C is

∮_C f(z) dz = 2πi Σ residues

where Σ residues means the sum of the residues at all the poles and essential singularities inside C.
The residues at poles and isolated essential singularities may be found as follows.
If f(z) has a simple pole (pole of order one) at z = z₀, the residue is

a₋₁ = [(z − z₀) f(z)]_{z=z₀}   (3-32)

If f(z) is written in the form f(z) = q(z)/p(z), where q(z) is regular and p(z) has a simple zero at z₀, the residue of f(z) at z₀ may be computed from

a₋₁ = q(z₀)/p′(z₀)   (3-33)

If z₀ is a pole of order n, the residue is

a₋₁ = [1/(n − 1)!] {(d/dz)^{n−1} [(z − z₀)ⁿ f(z)]}_{z=z₀}   (3-34)
If z₀ is an isolated essential singularity, the residue is found from the Laurent expansion (Appendix, Section A-2, item 7).
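Since the residue a₋₁ is, by definition, the coefficient picked out by (1/2πi)∮ f(z) dz on a small circle about z₀, the formulas (3-32) and (3-34) are easy to check numerically. The following sketch is ours, not the text's; the function name and the choices of r and n are arbitrary.

```python
import cmath
import math

def numerical_residue(f, z0, r=1e-3, n=512):
    # (1/2πi) ∮ f(z) dz on the circle |z - z0| = r, midpoint rule;
    # for a circle well inside the nearest other singularity, the
    # error of this periodic-trapezoid sum falls off extremely fast in n
    total = 0j
    for k in range(n):
        th = 2 * math.pi * (k + 0.5) / n
        z = z0 + r * cmath.exp(1j * th)
        dz = 1j * r * cmath.exp(1j * th) * (2 * math.pi / n)
        total += f(z) * dz
    return total / (2j * math.pi)

# simple pole: 1/(1 + z²) at z = i; (3-32) gives [(z - i) f(z)]_{z=i} = 1/(2i)
r1 = numerical_residue(lambda z: 1 / (1 + z * z), 1j)

# double pole: 1/(1 + z²)² at z = i; (3-34) with n = 2 gives
# (d/dz)[1/(z + i)²] at z = i, i.e. -2/(2i)³ = 1/(4i)
r2 = numerical_residue(lambda z: 1 / (1 + z * z) ** 2, 1j)
```

Both values agree with the closed-form residues to many digits, which is a useful way to debug a hand computation before feeding it into the residue theorem.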
We illustrate the method of contour integration by some examples.
EXAMPLE

I = ∫₀^∞ dx / (1 + x²)   (3-35)
Consider ∮ dz/(1 + z²) along the contour of Figure 3-1. Along the real axis the integral is 2I. Along the large semicircle in the upper half-plane we get zero, since

z = Re^{iθ}   dz = iRe^{iθ} dθ

and the integrand falls off like 1/R² while |dz| grows only like R.

Figure 3-1 Contour for the integral (3-35)
The residue of 1/(1 + z²) = 1/[(z + i)(z − i)] at z = i is 1/(2i). Thus

2I = 2πi · 1/(2i) = π

I = π/2
Note that an important part of the problem may be choosing the "return path" so that the contribution from it is simple (preferably zero).
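This first contour result is easy to confirm by direct quadrature. The sketch below is ours; the x → 1/x fold, which maps ∫₁^∞ dx/(1+x²) onto ∫₀¹ dx/(1+x²), keeps everything on a finite interval.

```python
import math

def simpson(f, lo, hi, n):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

# x -> 1/x gives  ∫_0^∞ dx/(1+x²) = 2 ∫_0^1 dx/(1+x²)
I = 2 * simpson(lambda x: 1 / (1 + x * x), 0.0, 1.0, 1000)
```

The result matches π/2 to full quadrature accuracy.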
EXAMPLE
Consider a resistance R and inductance L connected in series with a voltage V(t) (Figure 3-2). Suppose V(t) is a voltage impulse, that is, a very high pulse lasting for a very short time. As we shall see in Chapter 4, we can write to a good approximation

V(t) = (A/2π) ∫_{−∞}^{∞} e^{iωt} dω

Figure 3-2 Series RL circuit
where A is the area under the curve V(t).
The current due to a voltage e^{iωt} is e^{iωt}/(R + iωL). Thus the current due to our voltage pulse is

I(t) = (A/2π) ∫_{−∞}^{∞} e^{iωt} dω / (R + iωL)   (3-36)
Let us evaluate this integral.
If t < 0, the integrand is exponentially small for Im ω → −∞, so that we may complete the contour by a large semicircle in the lower half ω-plane, along which the integral vanishes.³ The contour encloses no singularities, so that I(t) = 0.
If t > 0, we must complete the contour by a large semicircle in the upper half-plane. Then

I(t) = 2πi (A/2π) · e^{−Rt/L}/(iL) = (A/L) e^{−Rt/L}
³ A rigorous justification of this procedure is provided by Jordan's lemma; see Copson (C8) p. 137, for example.
EXAMPLE
I = ∫₀^∞ dx / (1 + x³)   (3-37)
The integrand is not even, so we cannot extend it to −∞. Consider the integral

∮ ln z dz / (1 + z³)

The integrand is many-valued; we may cut the plane as shown in Figure 3-3, and define ln z real (= ln x) just above the cut. Then ln z = ln x + 2πi below the cut, and integrating along the indicated contour,

∮ ln z dz / (1 + z³) = −2πi I

On the other hand, using the method of residues,

∮ ln z dz / (1 + z³) = −(4π²i√3)/9

Thus I = 2π√3/9.
When integrating around a branch point, as in this example, it is necessary to show that the integral on a vanishingly small circle around the branch point is zero. In this example, this part goes like r ln r, which approaches zero as r → 0.
Figure 3-3 Contour for the integral (3-37)
A more straightforward method for evaluating the integral (3-37) consists of evaluating the integral

J = ∮ dz / (1 + z³)

along a closed contour consisting of (i) the real axis from 0 to +∞, (ii) one-third of a large circle at |z| = ∞, and (iii) a return to the origin along the line arg z = 2π/3. Evaluating J along these pieces, we find

J = (1 − e^{2πi/3}) I

On the other hand, the integrand of J has a simple pole at z = e^{πi/3}; the residue theorem gives

J = 2πi / (3e^{2πi/3})

Therefore

I = (2πi/3) e^{−2πi/3} / (1 − e^{2πi/3}) = π / (3 sin π/3) = 2π/(3√3)
as before.
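Either contour gives I = 2π/(3√3) ≈ 1.2092, and a quick quadrature confirms it. The sketch below is ours; the x → 1/x fold maps ∫₁^∞ dx/(1+x³) onto ∫₀¹ x dx/(1+x³), so only [0, 1] is needed.

```python
import math

def simpson(f, lo, hi, n):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

# I = ∫_0^1 (1 + x)/(1 + x³) dx, and (1+x)/(1+x³) = 1/(1 - x + x²) is smooth
I = simpson(lambda x: (1 + x) / (1 + x ** 3), 0.0, 1.0, 1000)
```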
EXAMPLE
I = ∫₀^π dθ / (a + b cos θ)   (a > b > 0)   (3-38)
The integrand is even, so

2I = ∫₀^{2π} dθ / (a + b cos θ)
If we integrate along the unit circle, as in Figure 3-4,

z = e^{iθ}   dz = ie^{iθ} dθ

cos θ = (e^{iθ} + e^{−iθ})/2 = ½(z + 1/z)   (3-39)

Then

2I = ∮_C [dz/(iz)] / {a + (b/2)[z + (1/z)]} = (2/i) ∮_C dz / (bz² + 2az + b)
Figure 3-4 Contour for the integral (3-38)
The integrand has two poles, at the roots of the denominator. The product of the roots is b/b = 1; one is outside and one inside the unit circle. They are at

z₁,₂ = −(a/b) ± √(a²/b² − 1)
Then

2I = (2/i) · 2πi · (residue at z₁ = −a/b + √(a²/b² − 1))

I = 2π · 1/(2bz₁ + 2a) = π / √(a² − b²)
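A numeric check of (3-38) for sample values a = 2, b = 1 (our own sketch; any a > b > 0 works the same way):

```python
import math

def simpson(f, lo, hi, n):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

a, b = 2.0, 1.0
I = simpson(lambda th: 1 / (a + b * math.cos(th)), 0.0, math.pi, 2000)
# compare with pi / sqrt(a² - b²) = pi / sqrt(3)
```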
EXAMPLE
I = ∫₀^∞ √x dx / (1 + x²)   (3-40)
Consider the integral

∮ √z dz / (1 + z²)

along the contour of Figure 3-5. We choose √z positive on top of the cut. Then

∮ √z dz / (1 + z²) = 2I

But, using residues (with 0 < arg z < 2π, the residues at z = ±i are e^{iπ/4}/(2i) and −e^{3iπ/4}/(2i)),

∮ √z dz / (1 + z²) = 2πi [e^{iπ/4}/(2i) − e^{3iπ/4}/(2i)] = π√2

Therefore I = π/√2.
Figure 3-5 Contour for the integral (3-40)
EXAMPLE
I = ∫_{−∞}^{∞} e^{ax} dx / (e^x + 1)   (0 < a < 1)   (3-41)
Figure 3-6 Possible contour for the integral (3-41)

We clearly wish to consider

∮ e^{az} dz / (e^z + 1)

along some contour. One choice is shown in Figure 3-6, involving the familiar large semicircle. Note that we must give a a small imaginary part in order to be certain that the integral along the semicircle vanishes. Then
I = 2πi Σ residues

There is an infinite number of poles.
At z = iπ, residue is −e^{iπa}
At z = 3iπ, residue is −e^{3iπa}, etc.

Thus

I = −2πi e^{iπa} / (1 − e^{2πia}) = π / sin πa
Now we can let Im a → 0.
Alternatively, we could use the contour shown in Figure 3-7. Along the real axis, we get I. Along the line Im z = 2π, we get −e^{2πia} I. Thus

(1 − e^{2πia}) I = 2πi (residue at z = πi) = −2πi e^{iπa}

I = π / sin πa

as before.
Figure 3-7 Another contour for the integral (3-41)
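Equation (3-41) can also be verified directly: the integrand decays like e^{−(1−a)x} on the right and e^{ax} on the left, so a modest finite interval suffices. The sketch below is ours, with the arbitrary choice a = 1/3.

```python
import math

def simpson(f, lo, hi, n):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

a = 1.0 / 3.0
f = lambda x: math.exp(a * x) / (math.exp(x) + 1)
# truncating at ±120 drops tails smaller than about e^{-40}
I = simpson(f, -120.0, 120.0, 40000)
# compare with pi / sin(pi a)
```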
Figure 3-8 Contour for the integral (3-42)
EXAMPLE
I = ∫_{−1}^{+1} dx / [√(1 − x²)(1 + x²)]   (3-42)
We consider

∮ dz / [√(1 − z²)(1 + z²)]

along the contour of Figure 3-8. On the top side of the cut we get I, and we get another I from the bottom side. Therefore,

2I = 2πi [1/(2i√2) + 1/(2i√2)] = π√2

I = π/√2
EXAMPLE
Consider the integral

I = ∮ f(z) dz / sin πz   (3-43)
around the contour of Figure 3-9, where f(z) has several isolated singularities (indicated by crosses in the figure) and goes to zero at least as fast as |z|⁻¹ as |z| → ∞.
On the one hand, the integral I may be evaluated by summing the residues at the zeros of sin πz, indicated by dots in Figure 3-9. The result is

I = 2πi Σ_{n=−∞}^{∞} (1/π)(−1)ⁿ f(n)   (3-44)
Figure 3-9 Original contour for the integral (3-43)
On the other hand, the integral along the contour of Figure 3-9 is clearly the same as that along the contour of Figure 3-10, since the integral around the circle at |z| = ∞ vanishes. The singularities enclosed by the contour of Figure 3-10 are now the singularities of f(z); if the locations and residues are denoted by z_k and R_k, respectively, then

I = −2πi Σ_k R_k / sin πz_k

Comparison with (3-44) gives the summation formula

Σ_{n=−∞}^{∞} (−1)ⁿ f(n) = −π Σ_k R_k / sin πz_k   (3-45)
Figure 3-10 Deformed contour for the integral (3-43)
This device, which converts an infinite sum into a contour integral which is subsequently deformed, is known as a Sommerfeld-Watson transformation.⁴
As an example, consider the series

S(x) = sin x/(a² + 1) − 2 sin 2x/(a² + 4) + 3 sin 3x/(a² + 9) − + ···

This is just the left side of equation (3-45), if we set

f(z) = −(1/2) z sin xz / (a² + z²)
(Note that we must require |x| < π in order that the large circle in Figure 3-10 make no contribution to the integral.) Now the locations and residues of the poles of f(z) are

z₁ = ia   R₁ = −(i/4) sinh ax
z₂ = −ia   R₂ = (i/4) sinh ax
Therefore, we obtain from formula (3-45), using sin πz₁ = sin iπa = i sinh aπ and sin πz₂ = −i sinh aπ,

S(x) = −π [ −(i/4) sinh ax / (i sinh aπ) + (i/4) sinh ax / (−i sinh aπ) ] = (π sinh ax) / (2 sinh aπ)
As we remarked above, this result is valid only if |x| < π. For x outside this range, we simply observe that S(x), from its definition, is periodic in x with period 2π.
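The summation formula is easy to test numerically: sum the series directly (it converges, slowly, for |x| < π) and compare with the closed form π sinh ax / (2 sinh aπ). This sketch is ours; the values of a, x and the cutoff N are arbitrary.

```python
import math

a, x = 0.7, 1.0      # need |x| < pi
N = 200000           # partial-sum cutoff; tail is O(1/N) here
S = 0.0
for n in range(1, N + 1):
    S += (-1) ** (n + 1) * n * math.sin(n * x) / (a * a + n * n)

closed = math.pi * math.sinh(a * x) / (2 * math.sinh(a * math.pi))
```

The brute-force sum needs two hundred thousand terms to reach a few digits, while the Sommerfeld-Watson closed form is exact, which is precisely the point of the transformation.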
3-4 TABULATED INTEGRALS
We first mention (again) the gamma function, defined for Re z > 0 by the integral

Γ(z) = ∫₀^∞ x^{z−1} e^{−x} dx   (3-46)
• See Sommerfeld (S9) ApJ?elli!b: to chaRter VI, or G~ N._W.,taon (W1t
Integration by parts gives

Γ(z) = (z − 1) Γ(z − 1)   (Re z > 1)   (3-47)

Thus, from Γ(1) = 1, we obtain

Γ(2) = 1, Γ(3) = 2, Γ(4) = 6, ..., Γ(n) = (n − 1)!   (3-48)

The gamma function may be analytically continued into the entire complex plane by means of the recursion relation (3-47), except for simple poles at z = 0, −1, −2, ....
A related integral of some interest is encountered if we consider

Γ(r)Γ(s) = ∫₀^∞ e^{−x} x^{r−1} dx ∫₀^∞ e^{−y} y^{s−1} dy

Let x + y = u, and then x = ut:

Γ(r)Γ(s) = ∫₀^∞ e^{−u} u^{r+s−1} du ∫₀¹ t^{r−1} (1 − t)^{s−1} dt = Γ(r + s) B(r, s)

where B(r, s) is the beta function

B(r, s) = Γ(r)Γ(s)/Γ(r + s) = ∫₀¹ x^{r−1} (1 − x)^{s−1} dx   (3-49)
(3 .. 49)
This integral representation for B(r, s) is obviously valid only for Re r > 0, Re s > 0; the first equality in (3-49) defines B(r, s) for all r, s in the complex plane.
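Relation (3-49) is easy to verify numerically for particular r and s. This is our own sketch; the choices r = 2.5, s = 3.5 are arbitrary (taking them larger than 1 keeps the integrand finite at both endpoints).

```python
import math

def simpson(f, lo, hi, n):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    acc = f(lo) + f(hi)
    for i in range(1, n):
        acc += f(lo + i * h) * (4 if i % 2 else 2)
    return acc * h / 3

r, s = 2.5, 3.5
B_quad = simpson(lambda x: x ** (r - 1) * (1 - x) ** (s - 1), 0.0, 1.0, 20000)
B_gamma = math.gamma(r) * math.gamma(s) / math.gamma(r + s)
```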
An interesting special case of this relation is

Γ(z)Γ(1 − z) = Γ(1) B(z, 1 − z)

Let x = t/(1 + t). Then

Γ(z)Γ(1 − z) = ∫₀^∞ t^{z−1} dt / (1 + t)

This integral can be done by using the contour shown in Figure 3-11.⁵ The result is

Γ(z)Γ(1 − z) = π / sin πz   (3-50)
⁵ Alternatively, we may make a simple change of variable in Example (3-41).
Figure 3-11 Contour for the integral ∫₀^∞ t^{z−1} dt/(1 + t)
Our derivation has only been valid for 0 < Re z < 1, but both sides of (3-50) are analytic functions of z (except at z = 0, ±1, ±2, ...), so we can extend the result into the entire plane.
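Formula (3-50), including its continuation outside the strip, can be checked with the standard-library gamma function (real arguments only; the sample points are our choice):

```python
import math

# inside the strip 0 < z < 1
for z in (0.1, 0.3, 0.5, 0.9):
    lhs = math.gamma(z) * math.gamma(1 - z)
    rhs = math.pi / math.sin(math.pi * z)
    assert abs(lhs - rhs) < 1e-9 * abs(rhs)

# outside the strip, via the continuation:
# Γ(-1/2) Γ(3/2) = π / sin(-π/2) = -π
assert abs(math.gamma(-0.5) * math.gamma(1.5) + math.pi) < 1e-12
```

The z = 1/2 case gives the familiar Γ(½)² = π, i.e. Γ(½) = √π.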
Another integral which has been given a name is the so-called exponential integral,

Ei(x) = ∫_{−∞}^{x} (e^t / t) dt   (3-51)

This definition is conventionally supplemented by cutting the x-plane along the positive real axis. Thus Ei(x) is well defined for negative real x, but for positive real x we must distinguish between Ei(x + iε) and Ei(x − iε).
Two functions related to Ei(x) are the sine and cosine integrals:

Si x = ∫₀^x (sin t / t) dt   (3-52)

Ci x = −∫_x^∞ (cos t / t) dt   (3-53)

The error function is defined as

erf x = (2/√π) ∫₀^x e^{−t²} dt   (3-54)
The associated trigonometric integrals are known as Fresnel integrals⁶:
C(x) = ∫₀^x cos(πt²/2) dt

S(x) = ∫₀^x sin(πt²/2) dt   (3-55)
⁶ Conventions differ; our definitions (3-55) agree with Magnus et al. (M1) but not with Erdelyi et al. (E5).
Another class of integral which is tabulated in many places is the class of elliptic integrals. These are integrals of the form

∫ dx [A(x) + B(x)√S(x)] / [C(x) + D(x)√S(x)]   (3-56)

where A, B, C, D are polynomials and S is a polynomial of the third or fourth degree (not a perfect square, of course!).
We shall not go into the general theory of such integrals, but just make a few remarks, which the student can verify.⁷ In the first place,

(A + B√S) / (C + D√S) = E + F/√S   (3-57)

where E and F are rational functions (ratios of polynomials). By decomposing F in partial fractions, we see that the only nonelementary integrals we need are

J_n = ∫ xⁿ dx/√S   H_n = ∫ dx/[(x − c)ⁿ √S]   (3-58)

But we can evaluate all the J_n from J₀, J₁, and J₂, and all the H_n from H₁, J₀, J₁, and J₂. Thus we only need

J₀ = ∫ dx/√S   J₁ = ∫ x dx/√S   J₂ = ∫ x² dx/√S   H₁ = ∫ dx/[(x − c)√S]   (3-59)
There are various standard forms for S; we shall consider only Legendre's, namely, S = (1 − x²)(1 − k²x²). Then J₁ is an "elementary" integral, if we set x² = u. The integral

F = ∫₀^x dx / √[(1 − x²)(1 − k²x²)]   (3-60)

is called the Legendre elliptic integral of the first kind. Generally, one sets x = sin φ, and defines

F(φ, k) = ∫₀^φ dφ / √(1 − k² sin² φ)   (3-61)
⁷ See, for example, Abramowitz and Stegun (A1) Chapter 17 for a more complete and useful discussion.
Instead of J₂, one takes for a standard integral the Legendre elliptic integral of the second kind,

E = ∫₀^x √(1 − k²x²) dx / √(1 − x²)   (3-62)

Again one generally sets x = sin φ, and tabulates

E(φ, k) = ∫₀^φ √(1 − k² sin² φ) dφ   (3-63)
The integral H₁ is hard. One defines the Legendre elliptic integral of the third kind,

Π(φ, n, k) = ∫₀^{sin φ} dx / {(1 + nx²) √[(1 − x²)(1 − k²x²)]}
= ∫₀^φ dφ / [(1 + n sin² φ) √(1 − k² sin² φ)]   (3-64)

When φ = π/2, we have the complete elliptic integrals,

K(k) = F(π/2, k) = ∫₀^{π/2} dφ / √(1 − k² sin² φ)   (3-65)

E(k) = E(π/2, k) = ∫₀^{π/2} √(1 − k² sin² φ) dφ   (3-66)

Π(n, k) = Π(π/2, n, k) = ∫₀^{π/2} dφ / [(1 + n sin² φ) √(1 − k² sin² φ)]   (3-67)

The formulas for reducing S to a standard form are given in various places, for example, Magnus et al. (M1) Section 10-1 or Abramowitz and Stegun (A1) Chapter 17. Elliptic integrals occur, among other places, in
1. The motion of the simple pendulum
2. Finding inductances of coils
3. Finding lengths of conic sections
4. Calculating the solid angles of circles seen obliquely
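The complete integral K(k) of (3-65) makes a convenient numeric example: it can also be computed by the arithmetic-geometric mean, K(k) = π/[2·agm(1, √(1 − k²))], a well-known identity that is independent of the quadrature. The code below is our own sketch.

```python
import math

def K_quadrature(k, n=20000):
    # eq. (3-65) by the midpoint rule on [0, pi/2]
    h = (math.pi / 2) / n
    acc = 0.0
    for i in range(n):
        phi = (i + 0.5) * h
        acc += h / math.sqrt(1.0 - (k * math.sin(phi)) ** 2)
    return acc

def K_agm(k):
    # K(k) = pi / (2 * agm(1, sqrt(1 - k²))); the AGM converges quadratically
    a, b = 1.0, math.sqrt(1.0 - k * k)
    for _ in range(40):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return math.pi / (2.0 * a)
```

For instance, K(0) reduces to π/2, and the two routes agree for any 0 ≤ k < 1.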
A large number of integrals are related to tabulated functions; these integral representations are sometimes very useful. For example,

∫₀¹ cos tx dt / √(1 − t²) = (π/2) J₀(x)   (3-69)

∫₀^θ cos(n + ½)φ dφ / √(cos φ − cos θ) = (π/√2) P_n(cos θ)   (3-70)
J₀(x) is a suitably normalized solution of Bessel's equation, with m = 0, and is known as a Bessel function; P_n is a Legendre polynomial. We shall study such integral representations of the "special functions" in greater detail in Chapter 7.
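Relation (3-69) can be tested by computing J₀ from its power series and the left side by quadrature; substituting t = sin u turns the integral into ∫₀^{π/2} cos(x sin u) du and removes the endpoint singularity. This is our own sketch; the sample point x = 2.5 is arbitrary.

```python
import math

def J0(x, terms=40):
    # power series J0(x) = Σ_k (-1)^k (x/2)^{2k} / (k!)²
    acc, term = 0.0, 1.0
    for k in range(terms):
        acc += term
        term *= -(x * x / 4.0) / ((k + 1) * (k + 1))
    return acc

def lhs(x, n=20000):
    # ∫_0^1 cos(tx)/sqrt(1-t²) dt = ∫_0^{pi/2} cos(x sin u) du  (t = sin u)
    h = (math.pi / 2) / n
    acc = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        acc += h * math.cos(x * math.sin(u))
    return acc

x = 2.5
```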
3-5 APPROXIMATE EXPANSIONS
One can often obtain a useful expression for an integral by expanding the integrand in some sort of series.
EXAMPLE
erf x = (2/√π) ∫₀^x e^{−t²} dt

= (2/√π) ∫₀^x (1 − t² + t⁴/2! − t⁶/3! + − ···) dt   (3-71)
This series converges for all x, but is only useful for small x (x ≲ 1).
Integrations by parts are often useful.
EXAMPLE
Suppose we want erf x for large x. As x → ∞, erf x → 1. Let us compute the difference from 1:

1 − erf x = (2/√π) ∫_x^∞ e^{−t²} dt

We perform a sequence of integrations by parts:

∫_x^∞ e^{−t²} dt = e^{−x²}/(2x) − ∫_x^∞ e^{−t²} dt/(2t²)

etc.
The result after n integrations by parts is (n > 1):

erf x = 1 − (2/√π) e^{−x²} [1/(2x) − 1/(2²x³) + 1·3/(2³x⁵) − 1·3·5/(2⁴x⁷) + − ···
+ (−1)^{n−1} 1·3·5···(2n − 3)/(2ⁿ x^{2n−1})]
+ (−1)ⁿ [1·3·5···(2n − 1)/2ⁿ] (2/√π) ∫_x^∞ (e^{−t²}/t^{2n}) dt   (3-72)
The terms in the brackets do not form the beginning of a convergent infinite series; the series does not converge for any x, since the individual terms eventually increase as n increases. Nevertheless, this expression with a finite number of terms is very useful for large x.
The expression (3-72) is exact if we include the "remainder," that is, the last term, containing the integral. This remainder alternates in sign as n increases, which means that the error made by stopping after n terms of the series is smaller in magnitude than the next term. Thus the accuracy of the approximate expression in the brackets is highest if we stop one term before the smallest.
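The behavior just described, accuracy improving and then deteriorating as more terms are kept, can be seen directly in a short computation. The series below is the bracket of (3-72), and math.erfc supplies the reference value 1 − erf x; the sketch and the sample values are ours.

```python
import math

def erfc_asymptotic(x, nterms):
    # (2/sqrt(pi)) e^{-x²} [1/(2x) - 1/(2²x³) + 1·3/(2³x⁵) - ...], nterms terms
    acc, term = 0.0, 1.0 / (2.0 * x)
    for m in range(nterms):
        acc += term
        term *= -(2 * m + 1) / (2.0 * x * x)   # ratio of successive terms
    return (2.0 / math.sqrt(math.pi)) * math.exp(-x * x) * acc

x = 3.0
exact = math.erfc(x)                       # = 1 - erf x
err5 = abs(erfc_asymptotic(x, 5) - exact)
err40 = abs(erfc_asymptotic(x, 40) - exact)
# five terms are excellent at x = 3; forty terms are catastrophically worse,
# because the terms eventually grow: the series is asymptotic, not convergent
```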
The series in brackets in (3-72) is an example of an asymptotic series, if continued indefinitely.⁸ The precise definition of an asymptotic series is the following:

S(z) = c₀ + c₁/z + c₂/z² + ···   (3-73)

is an asymptotic series expansion of f(z) [written f(z) ~ S(z), where ~ reads "is asymptotically equal to"] provided, for any n, the error involved in terminating the series with the term c_n z^{−n} goes to zero faster than z^{−n} as |z| goes to ∞ for some range of arg z, that is,

lim_{|z|→∞} zⁿ [f(z) − S_n(z)] = 0   (3-74)

for arg z in the given interval; S_n(z) means c₀ + c₁/z + ··· + c_n/zⁿ.
[A convergent series approaches f(z) as n → ∞ for given z, whereas an asymptotic series approaches f(z) as z → ∞ for given n.]
From the definition, it is easy to show that asymptotic series may be added, multiplied, and integrated to obtain asymptotic series for the sum, product, and integral of the corresponding functions. Also, the asymptotic expansion of a given function f(z) is unique, but the reverse is not true; an asymptotic series does not specify a function f(z) uniquely [see Whittaker and Watson (W5) Chapter VIII or Jeffreys and Jeffreys (J4) Chapter 17].
⁸ The idea is probably due to Poincaré; see reference (P2) Chapter VIII.
We have not proved that our series in (3-72) is an asymptotic series, but that is not hard; we leave it to the interested reader.
Another example:

Ei(−x) = ∫_{−∞}^{−x} (e^t/t) dt = −∫_x^∞ (e^{−t}/t) dt   (x > 0)

= −e^{−x}/x + ∫_x^∞ (e^{−t}/t²) dt

= −e^{−x}/x + e^{−x}/x² − 2 ∫_x^∞ (e^{−t}/t³) dt

Continuing in this way, we obtain

Ei(−x) = −(e^{−x}/x) [1 − 1/x + 2!/x² − 3!/x³ + − ··· + (−1)ⁿ n!/xⁿ]
+ (−1)ⁿ (n + 1)! ∫_x^∞ (e^{−t}/t^{n+2}) dt   (3-75)
We can use this result in two ways:
1. As an asymptotic series for Ei(−x):

Ei(−x) ~ −(e^{−x}/x) (1 − 1/x + 2!/x² − 3!/x³ + − ···)   (3-76)

2. As an exact expression for computing certain integrals, assuming we have available a table of Ei(−x). From (3-75),

∫_x^∞ (e^{−t}/tⁿ) dt = [(−1)ⁿ/(n − 1)!] Ei(−x)
+ [(−1)ⁿ/(n − 1)!] (e^{−x}/x) [1 − 1/x + 2!/x² − + ··· + (−1)ⁿ (n − 2)!/x^{n−2}]   (3-77)
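The asymptotic series (3-76) can be exercised numerically in the same way as the erf series: evaluate −Ei(−x) = ∫_x^∞ e^{−t}/t dt by brute-force quadrature and compare with a few series terms. This is our own sketch; the cutoff and term count are arbitrary.

```python
import math

def E1(x, n=40000, cutoff=60.0):
    # -Ei(-x) = ∫_x^∞ e^{-t}/t dt, midpoint rule; tail beyond x + cutoff
    # is of order e^{-(x+cutoff)} and is simply dropped
    h = cutoff / n
    acc = 0.0
    for i in range(n):
        t = x + (i + 0.5) * h
        acc += h * math.exp(-t) / t
    return acc

def ei_series(x, nterms):
    # Ei(-x) ≈ -(e^{-x}/x)(1 - 1/x + 2!/x² - 3!/x³ + ...), nterms terms
    acc, term = 0.0, 1.0
    for m in range(nterms):
        acc += term
        term *= -(m + 1) / x
    return -math.exp(-x) / x * acc

x = 10.0
lhs_val = -E1(x)            # Ei(-10) by quadrature
rhs_val = ei_series(x, 6)   # six terms of (3-76)
```

At x = 10 six terms already agree with the quadrature to about three parts in ten thousand of an answer that is itself of order 4×10⁻⁶.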
3-6 SADDLE-POINT METHODS

Other important methods for approximating integrals are known as saddle-point methods, for reasons which will soon be clear. The most important of these is the method of steepest descent. We shall illustrate the method by looking for an approximation to Γ(x + 1) for x large, positive, and real.

Γ(x + 1) = ∫₀^∞ e^{−t} t^x dt   (3-78)
For large x, the integrand looks as shown in Figure 3-12. To find the maximum, we have

0 = (d/dt)(t^x e^{−t}) = e^{−t} [−t^x + x t^{x−1}]

t = x
We now approximate the integrand in a particular way, by writing it in an exponential form e^{f(t)} and using a Taylor series approximation for f(t) near its maximum. (A Taylor expansion of the integrand itself, for example, would not be useful if only a few terms were kept.) The integrand of (3-78) is

e^{f(t)} = t^x e^{−t} = e^{x ln t − t}

so that

f(t) = x ln t − t   f′(t) = x/t − 1

f′ = 0 for t = x

f″(t) = −x/t²

Then, expanding about t = x,

Γ(x + 1) ≈ ∫₀^∞ exp[x ln x − x − (1/2x)(t − x)²] dt

≈ e^{x ln x − x} ∫_{−∞}^∞ exp[−(t − x)²/(2x)] dt

(If x ≫ 1, we make a very small error by extending the integral to −∞.) Doing the t integral gives

Γ(x + 1) ≈ √(2πx) x^x e^{−x}
Figure 3-12 Graph of t^x e^{−t} for large x
This is the first term of Stirling's formula, which is the asymptotic expansion of n! = Γ(n + 1):

n! ~ √(2πn) nⁿ e^{−n} [1 + 1/(12n) + 1/(288n²) − ···]   (3-79)

We shall return to this asymptotic series at the end of this chapter.
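The quality of the one-term approximation, and of the 1/(12n) correction quoted in (3-79), is easy to inspect against the exact factorial (our own sketch; n = 20 is an arbitrary sample):

```python
import math

def stirling(n):
    # first term of (3-79): sqrt(2 pi n) n^n e^{-n}
    return math.sqrt(2.0 * math.pi * n) * n ** n * math.exp(-n)

n = 20
exact = math.gamma(n + 1)          # n!
ratio = exact / stirling(n)        # should be ≈ 1 + 1/(12n) + 1/(288n²) + ...
```

Already at n = 20 the leading term is good to about 0.4 percent, and the discrepancy is accounted for almost entirely by the 1/(12n) correction.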
The method of steepest descents is applicable, in general, to integrals of the form

I(α) = ∫_C e^{αf(z)} dz   (α large and positive)   (3-80)

where C is a path in the complex plane such that the ends of the path do not contribute significantly to the integral. [This method usually gives the first term in an asymptotic expansion of I(α), valid for large α.]
If f(z) = u + iv, we would expect most of the contribution to I(α) to come from parts of the contour where u is largest. The idea of the method of steepest descents is to deform the contour C so that the region of large u is compressed into as short a space as possible.
To see how to deform the contour it is necessary to examine the behavior of the functions u and v in more or less detail for the example at hand. However, some general features apply to any regular function f(z). Neither u nor v can have a maximum or minimum except at a singularity, because ∇²u = 0 and ∇²v = 0.⁹ For example, if

∂²u/∂x² < 0

then

∂²u/∂y² > 0

so that a "flat spot" of the surface u(x, y), where

∂u/∂x = ∂u/∂y = 0   (3-81)

must be a "saddle point," where the surface looks like a saddle or a mountain pass, as shown in Figure 3-13.
By the Cauchy-Riemann equations (A-8), we see that (3-81) implies ∂v/∂y = 0 and ∂v/∂x = 0, so that f′(z) = 0. Thus a saddle point of the function u(x, y) is also a saddle point of v(x, y), as well as a point where f′(z) = 0.
Near the saddle point z₀,

f(z) ≈ f(z₀) + ½ f″(z₀)(z − z₀)²   (3-82)
⁹ These follow immediately from the Cauchy-Riemann equations, equations (A-8) of the Appendix.
Figure 3-13 Topography of the surface u = Re f(z) near the saddle point z₀, for a typical function f(z). The heavy solid curves follow the centers of the ridges and valleys from the saddle point, and the dashed curves follow level contours, u = u(x₀, y₀) = constant. The curve AA′ is the path of steepest descent
" 86
Evaluation of Integrals
Figure 3-14 Alternative possibilities for steepest descent contours
Let f″(z₀) = ρe^{iθ}, z − z₀ = se^{iφ}. Then

u ≈ u(x₀, y₀) + ½ρs² cos(θ + 2φ)   v ≈ v(x₀, y₀) + ½ρs² sin(θ + 2φ)

We see that on the surface u(x, y) paths of steepest descent from the saddle point into the valleys start out in the directions where cos(θ + 2φ) = −1. In these directions, v = v(x₀, y₀) is constant. As we go further from z₀, the steepest path follows the direction of −grad u, which is normal to the contours of equal u. Thus the path follows a curve v(x, y) = constant = v(x₀, y₀), so that the factor e^{iαv} in the original integrand of (3-80) will not produce destructive oscillations. We therefore deform our original contour C so as to go over the pass on this path of steepest descent. Using the approximation (3-82) in the integral (3-80) gives

I(α) ≈ e^{αf(z₀)} e^{iφ} ∫_{−∞}^∞ e^{−αρs²/2} ds = √(2π/αρ) e^{αf(z₀)} e^{iφ}   (3-83)

where φ has one of the values −θ/2 ± π/2, depending on which direction we travel over the pass. (φ is just the inclination of the path at the saddle point.) For example, if θ = π/2 we have the two alternatives of Figure 3-14. The first is much more likely to be correct, but to be sure we should examine the "mountain range" we are passing through. If it looks anything like that shown in part (a) of Figure 3-15, then φ = π/4 is indeed correct. On the other hand, if the range is like that shown in part (b) of Figure 3-15, the second alternative, φ = −3π/4, is the right one.
As an example of the method of steepest descents with a more general contour, we shall work out the asymptotic approximation to Γ(z + 1) for
complex z, without making any simplifications based on our previous work with Γ(x + 1).

Γ(z + 1) = ∫₀^∞ e^{−t} t^z dt   (3-84)

Let

z = αe^{iβ}   (α = |z|)   so that   e^{−t} t^z = e^{αf(t)}   (3-85)

Then

f(t) = (ln t − t/z) e^{iβ}   f′(t) = (1/t − 1/z) e^{iβ}

f″(t) = −e^{iβ}/t²

t₀ = z   f(t₀) = (ln z − 1) e^{iβ}
Figure 3-15 Alternative "mountain ranges" for the two cases of Figure 3-14
Thus ρ = 1/α², θ = π − β. What is φ? Use of the condition cos(θ + 2φ) = −1 gives

φ = β/2   or   φ = β/2 − π

In our previous example, with z real, β = 0 and φ = 0, so the choice φ = β/2 would clearly seem most reasonable. Since it is sometimes necessary to examine the topography of the surface u = Re f(t) in more detail, however, we show the ridges and valleys for the present example, with β = π/4, in Figure 3-16. This figure confirms the choice φ = β/2.
The steepest descent approximation for Γ(z + 1) may now be written down from (3-83):

Γ(z + 1) ≈ √(2πα) e^{z ln z − z} e^{iβ/2}

Γ(z + 1) ≈ √(2π) z^{z+1/2} e^{−z}   (3-86)
The original integral representation for Γ(z) is only valid for Re z > 0.
Figure 3-16 Topography of the surface u = Re f(t) = Re (ln t − t/z)e^{iβ} for the case z = 3e^{iπ/4}. It is clear that the original path of integration along the real t axis should be deformed so as to go over the saddle point t₀ = z on the path of steepest descents (valleys) in a direction specified by the angle φ = β/2
However, the result (3-86) is actually valid for all |z| → ∞, provided we stay away from the negative real axis [see Whittaker and Watson (W5) Section 12.33].
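Although math.gamma handles only real arguments, (3-86) can still be checked for complex z by evaluating the defining integral (3-84) numerically with complex arithmetic. This is our own sketch; the sample z and the truncation are arbitrary, and the residual discrepancy should be about the first omitted correction, 1/(12z).

```python
import cmath
import math

def gamma_z_plus_1(z, T=80.0, n=40000):
    # ∫_0^T t^z e^{-t} dt by the midpoint rule (never evaluates t = 0)
    h = T / n
    acc = 0j
    for i in range(n):
        t = (i + 0.5) * h
        acc += h * cmath.exp(z * math.log(t) - t)   # t^z e^{-t}
    return acc

z = 8.0 * cmath.exp(1j * math.pi / 6)       # |z| = 8, well off the real axis
approx = math.sqrt(2 * math.pi) * z ** (z + 0.5) * cmath.exp(-z)
ratio = gamma_z_plus_1(z) / approx
# |ratio - 1| should be roughly |1/(12z)| ≈ 1/96, small but nonzero
```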
The result (3-86) is just the first term of an asymptotic series for the gamma function, which we shall now find. First write

Γ(z) ≈ √(2π) (z − 1)^{z−1/2} e^{−(z−1)} = √(2π) z^{z−1/2} (1 − 1/z)^{z−1/2} e^{−(z−1)}
Let us now set

Γ(z) = √(2π) z^{z−1/2} e^{−z} [1 + A/z + B/z² + C/z³ + ···]   (3-87)

The constants A, B, ... may be found from the recursion relation for the gamma function:

Γ(z + 1) = z Γ(z)
For, from (3-87),

Γ(z + 1) = √(2π) (z + 1)^{z+1/2} e^{−(z+1)} [1 + A/(z + 1) + B/(z + 1)² + ···]

= √(2π) exp[(z + ½) ln(z + 1) − (z + 1)] × [1 + A/(z + 1) + B/(z + 1)² + ···]

= √(2π) exp[(z + ½) ln z − z + 1/(12z²) − 1/(12z³) + 3/(40z⁴) − ···] × [1 + A/(z + 1) + B/(z + 1)² + ···]

= √(2π) z^{z+1/2} e^{−z} [1 + 1/(12z²) − 1/(12z³) + 113/(1440z⁴) + ···]

× [1 + A/z + (B − A)/z² + (C − 2B + A)/z³ + ···]