# MATHEMATICAL METHODS OF PHYSICS

Second Edition

JON MATHEWS

R. L. WALKER

California Institute of Technology

Redwood City, California • Menlo Park, California • Reading, Massachusetts • New York • Amsterdam • Don Mills, Ontario • Sydney • Bonn • Madrid • Singapore • Tokyo • San Juan • Wokingham, United Kingdom

Preface

For the last nineteen years a course in mathematical methods in physics has been taught, by the authors and by others, at the California Institute of Technology. It is intended primarily for first-year physics graduate students; in recent years most senior physics undergraduates, as well as graduate students from other departments, have been taking it. This book has evolved from the notes that have been used in this course.

We make little effort to teach physics in this text; the emphasis is on presenting mathematical techniques that have proved to be useful in analyzing problems in physics. We assume that the student has been exposed to the standard undergraduate physics curriculum: mechanics, electricity and magnetism, introductory quantum mechanics, etc., and that we are free to select examples from such areas.

In short, this is a book about mathematics, for physicists.

Both motivation and standards are drawn from physics; that is, the choice of subjects is dictated by their usefulness in physics, and the level of rigor is intended to reflect current practice in theoretical physics.

It is assumed that the student has become acquainted with the following mathematical subjects:

1. Simultaneous linear equations and determinants;

2. Vector analysis, including differential operations in curvilinear coordinates;

3. Elementary differential equations;

4. Complex variables, through Cauchy's theorem.


However, these should not be considered as strict prerequisites. On the one hand, it will often profit the student to have had more background, and on the other hand, it should not be too difficult for a student lacking familiarity with one of the above subjects to remedy the defect by extra work and/or outside reading. In fact, the subject of differential equations is discussed in the first chapter, partly to begin on familiar ground and partly in order to treat some topics not normally covered in an elementary course in the subject.

A considerable variation generally exists in the amount of preparation different students have in the theory of functions of a complex variable. For this reason, we usually give a rapid review of the subject before studying contour integration (Chapter 3). Material for such a review is presented in the Appendix. Also, there are some excellent and reasonably brief mathematical books on the subject for the student unfamiliar with this material.

A considerable number of problems are included at the end of each chapter in this book. These form an important part of the course for which this book is designed. The emphasis throughout is on understanding by means of examples.

A few remarks may be made about some more or less unconventional aspects of the book. In the first place, the material which is presented does not necessarily flow in a smooth, logical pattern. Occasionally a new subject is introduced without the student's having been carefully prepared for the blow. This occurs, not necessarily through the irrationality or irascibility of the authors, but because that is the way physics is. Students in theoretical physics often need considerable persuasion before they will plunge into the middle of an unfamiliar subject; this course is intended to give practice and confidence in dealing with problems for which the student's preparation is incomplete.

A related point is that there is considerable deliberate nonuniformity in the depth of presentation. Some subjects are skimmed, while very detailed applications are worked out in other areas. If the course is to give practice in doing physics, the student must be given a chance to gain confidence in his ability to do detailed calculations. On the other hand, this is a text, not a reference work, and the material is intended to be covered fully in a year. It is therefore not possible to go into everything as deeply as one might like.

The plan of the book remains unaltered in this Second Edition; we have rewritten a number of sections, though only one (Section 14-6) is new. We have tried to make clearer certain material which has proved particularly difficult for students, such as the discussion of the Riemann P-symbol in Section 7-3 and the Wiener-Hopf integral equation in Section 11-5.

A number of new problems have been added at the ends of chapters, and several new detailed examples are included, such as the triangular drumhead of Section 8-3 and the recursive calculation of Bessel functions in Section 13-3.


Several acknowledgements are in order. The course from which this text evolved was originally based on lectures by Professor R. P. Feynman at Cornell University. Much of Chapter 16 grew out of fruitful conversations with Dr. Sidney Coleman. The authors are grateful to Mrs. Julie Curcio for rapid, accurate, and remarkably neat typing through several revisions.

JON MATHEWS

R. L. WALKER


Contents

PREFACE

CHAPTER 1 ORDINARY DIFFERENTIAL EQUATIONS 1

1-1 Solution in Closed Form 1
1-2 Power-Series Solutions 13
1-3 Miscellaneous Approximate Methods 22
1-4 The WKB Method 27
References 37
Problems 38

CHAPTER 2 INFINITE SERIES 44

2-1 Convergence 44
2-2 Familiar Series 48
2-3 Transformation of Series 50
References 55
Problems 56

CHAPTER 3 EVALUATION OF INTEGRALS 58

3-1 Elementary Methods 58
3-2 Use of Symmetry Arguments 62
3-3 Contour Integration 65
3-4 Tabulated Integrals 75
3-5 Approximate Expansions 80
3-6 Saddle-Point Methods 82
References 90
Problems 91


CHAPTER 4 INTEGRAL TRANSFORMS 96

4-1 Fourier Series 96

4-2 Fourier Transforms 101

4-3 Laplace Transforms 107

4-4 Other Transform Pairs 109

4-5 Applications of Integral Transforms 110

References 119

Problems 120

CHAPTER 5 FURTHER APPLICATIONS OF COMPLEX

VARIABLES 123

5-1 Conformal Transformations 123

5-2 Dispersion Relations 129

References 135

Problems 135

CHAPTER 6 VECTORS AND MATRICES 139

6-1 Linear Vector Spaces 139

6-2 Linear Operators 141

6-3 Matrices 143

6-4 Coordinate Transformations 146

6-5 Eigenvalue Problems 150

6-6 Diagonalization of Matrices 158

6-7 Spaces of Infinite Dimensionality 160

References 163

Problems 163

CHAPTER 7 SPECIAL FUNCTIONS 167

7-1 Legendre Functions 167

7-2 Bessel Functions 178

7-3 Hypergeometric Functions 187

7-4 Confluent Hypergeometric Functions 194

7-5 Mathieu Functions 198

7-6 Elliptic Functions 204

References 210

Problems 210


CHAPTER 8 PARTIAL DIFFERENTIAL EQUATIONS 217

8-1 Examples 217
8-2 General Discussion 219
8-3 Separation of Variables 226
8-4 Integral Transform Methods 239
8-5 Wiener-Hopf Method 245
References 252
Problems 253

CHAPTER 9 EIGENFUNCTIONS, EIGENVALUES, AND GREEN'S FUNCTIONS 260

9-1 Simple Examples of Eigenvalue Problems 260
9-2 General Discussion 263
9-3 Solutions of Boundary-Value Problems as Eigenfunction Expansions 266
9-4 Inhomogeneous Problems. Green's Functions 267
9-5 Green's Functions in Electrodynamics 277
References 282
Problems 282

CHAPTER 10 PERTURBATION THEORY 286

10-1 Conventional Nondegenerate Theory 286
10-2 A Rearranged Series 292
10-3 Degenerate Perturbation Theory 293
References 296
Problems 296

CHAPTER 11 INTEGRAL EQUATIONS 299

11-1 Classification 299
11-2 Degenerate Kernels 300
11-3 Neumann and Fredholm Series 302
11-4 Schmidt-Hilbert Theory 305
11-5 Miscellaneous Devices 311
11-6 Integral Equations in Dispersion Theory 316
References 318
Problems 318


CHAPTER 12 CALCULUS OF VARIATIONS 322

12-1 Euler-Lagrange Equation 322

12-2 Generalizations of the Basic Problem 327

12-3 Connections between Eigenvalue Problems and the Calculus of Variations 333

References 341

Problems 341

CHAPTER 13 NUMERICAL METHODS 345

13-1 Interpolation 345

13-2 Numerical Integration 349

13-3 Numerical Solution of Differential Equations 353

13-4 Roots of Equations 358

13-5 Summing Series 363

References 368

Problems 368

CHAPTER 14 PROBABILITY AND STATISTICS 372

14-1 Introduction 372

14-2 Fundamental Probability Laws 373

14-3 Combinations and Permutations 375

14-4 The Binomial, Poisson, and Gaussian Distributions 377

14-5 General Properties of Distributions 380

14-6 Multivariate Gaussian Distributions 384

14-7 Fitting of Experimental Data 387

References 395

Problems 396

CHAPTER 15 TENSOR ANALYSIS AND DIFFERENTIAL GEOMETRY 403

15-1 Cartesian Tensors in Three-Space 403

15-2 Curves in Three-Space; Frenet Formulas 408

15-3 General Tensor Analysis 416

References 421

Problems 421


CHAPTER 16 INTRODUCTION TO GROUPS AND GROUP

REPRESENTATIONS 424

16-1 Introd uction; Definitions 424

16-2 Subgroups and Classes 426

16-3 Group Representations 430

16-4 Characters 433

16-5 Physical Applications 440

16-6 Infinite Groups 449

16-7 Irreducible Representations of SU(2), SU(3), and O(3) 457

References 467

Problems 468

APPENDIX SOME PROPERTIES OF FUNCTIONS OF A COMPLEX VARIABLE 471

A-1 Functions of a Complex Variable. Mapping 471
A-2 Analytic Functions 477
References 483
Problems 483

BIBLIOGRAPHY 485

INDEX 493

ONE

ORDINARY DIFFERENTIAL EQUATIONS

We begin this chapter with a brief review of some of the methods for obtaining solutions of an ordinary differential equation in closed form. Solutions in the form of power series are discussed in Section 1-2, and some methods for obtaining approximate solutions are treated in Sections 1-3 and 1-4.

The use of integral transforms in solving differential equations is discussed later, in Chapter 4. Applications of Green's function and eigenfunction methods are treated in Chapter 9, and numerical methods are described in Chapter 13.

1-1 SOLUTION IN CLOSED FORM

The order and degree of a differential equation refer to the derivative of highest order after the equation has been rationalized. Thus, the equation

d³y/dx³ + √(x dy/dx) + x²y = 0

is of third order and second degree, since when it is rationalized it contains the term (d³y/dx³)².


We first recall some methods which apply particularly to first-order equations. If the equation can be written in the form

A(x) dx + B(y) dy = 0

(1-1)

we say the equation is separable; the solution is found immediately by integrating.

EXAMPLE

dy/dx + √(1 - y²)/√(1 - x²) = 0

dy/√(1 - y²) + dx/√(1 - x²) = 0

sin⁻¹ y + sin⁻¹ x = C

or, taking the sine of both sides,

x√(1 - y²) + y√(1 - x²) = sin C = C'   (1-2)
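As a numerical cross-check (ours, not the text's; the constant C = 0.9 and the sample points are arbitrary choices), one can verify along the implicit solution sin⁻¹y + sin⁻¹x = C both the original equation and the constancy of the rationalized form (1-2):

```python
import math

# Along the solution curve asin(y) + asin(x) = C we have y = sin(C - asin(x)).
# Check (i) dy/dx = -sqrt(1 - y^2)/sqrt(1 - x^2) and
# (ii) x*sqrt(1 - y^2) + y*sqrt(1 - x^2) = sin C, which is the form (1-2).
C = 0.9                      # arbitrary constant of integration

def y_of(x):
    return math.sin(C - math.asin(x))

h = 1e-6
for x in [0.1, 0.3, 0.5]:
    y = y_of(x)
    dydx = (y_of(x + h) - y_of(x - h)) / (2 * h)   # central difference
    assert abs(dydx + math.sqrt(1 - y*y) / math.sqrt(1 - x*x)) < 1e-6
    assert abs(x*math.sqrt(1 - y*y) + y*math.sqrt(1 - x*x) - math.sin(C)) < 1e-9
print("separable example checked")
```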

More generally, it may be possible to integrate immediately an equation of the form

A(x, y) dx + B(x, y) dy = 0

(1-3)

If the left side of (1-3) is the differential du of some function u(x, y), then we can integrate and obtain the solution

u(x, y) = C

Such an equation is said to be exact. A necessary and sufficient condition that Eq. (1-3) be exact is

∂A/∂y = ∂B/∂x

(1-4)

EXAMPLE

(x + y) dx + x dy = 0

(1-5)

A = x + y     B = x     ∂A/∂y = ∂B/∂x = 1

The solution is

xy + ½x² = C


Sometimes we can find a function λ(x, y) such that

λ(A dx + B dy)

is an exact differential, although A dx + B dy may not have been. We call such a function λ an integrating factor. One can show that such factors always exist (for a first-order equation), but there is no general way of finding them.

Consider the general linear first-order equation

dy/dx + f(x)y = g(x)

(1-6)

Let us try to find an integrating factor λ(x). That is,

λ(x)[dy + f(x)y dx] = λ(x)g(x) dx

is to be exact. The right side is immediately integrable, and our criterion (1-4) that the left side be exact is

dλ(x)/dx = λ(x)f(x)

This equation is separable, and its solution

λ(x) = exp[∫f(x) dx]

(1-7)

is the integrating factor we were looking for.

EXAMPLE

xy' + (1 + x)y = eˣ

y' + ((1 + x)/x)y = eˣ/x

The integrating factor is

exp{∫[(1 + x)/x] dx} = xeˣ

so that

xeˣ[y' + ((1 + x)/x)y] = e²ˣ

Now our equation is exact; integrating both sides gives

xeˣy = ∫e²ˣ dx = ½e²ˣ + C

y = eˣ/(2x) + Ce⁻ˣ/x   (1-8)
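A quick numerical check of this result (a sketch of ours, not from the text; the constant C and sample points are arbitrary): the closed form y = eˣ/(2x) + Ce⁻ˣ/x should satisfy xy' + (1 + x)y = eˣ for any C.

```python
import math

# Verify the integrating-factor solution y = e^x/(2x) + C e^{-x}/x
# against the original equation x y' + (1 + x) y = e^x.
C = 1.7                      # arbitrary constant of integration

def y(x):
    return math.exp(x) / (2*x) + C * math.exp(-x) / x

h = 1e-6
for x in [0.5, 1.0, 2.0]:
    yp = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    assert abs(x*yp + (1 + x)*y(x) - math.exp(x)) < 1e-5
print("integrating-factor solution checked")
```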


One can often simplify a differential equation by making a judicious change of variable. For example, the equation

y' = f(ax + by + c) (1-9)

becomes separable if one introduces the new dependent variable,

v = ax + by + c

As another example, the so-called Bernoulli equation

y' + f(x)y = g(x)yⁿ

(1-10)

becomes linear if one sets v = y¹⁻ⁿ. (This substitution becomes "obvious" if the equation is first divided by yⁿ.)

A function f(x, y, ...) in any number of variables is said to be homogeneous of degree r in these variables if

f(ax, ay, ...) = aʳf(x, y, ...)

A first-order differential equation

A(x, y) dx + B(x, y) dy = 0

(1-11)

is said to be homogeneous if A and B are homogeneous functions of the same degree. The substitution y = vx makes the homogeneous equation (1-11) separable.

EXAMPLE

y dx + (2JXY - x) dy = 0

y = vx dy = v dx + x dv

vx dx + (2xJ v - x)( v dx + x dv) = 0 2v3/2 dx + (lj-;; - l)x dv = 0 This equation is clearly separable and its solution is trivial.

(1-12)

Note that this approach is related to dimensional arguments familiar from physics. A homogeneous function is simply a dimensionally consistent function, if x, y, ... are all assigned the same dimension (for example, length). The variable v = y/x is then a "dimensionless" variable.

This suggests a generalization of the idea of homogeneity. Suppose that the equation

A(x, y) dx + B(x, y) dy = 0

is dimensionally consistent when the dimensionality of y is some power m of the dimensionality of x. That is, suppose

A(ax, aᵐy) = aʳA(x, y)     B(ax, aᵐy) = aʳ⁻ᵐ⁺¹B(x, y)   (1-13)


Such equations are said to be isobaric. The substitution y = vxᵐ reduces the equation to a separable one.

EXAMPLE

xy²(3y dx + x dy) - (2y dx - x dy) = 0   (1-14)

Let us test to see if this is isobaric. Give x a "weight" 1 and y a weight m. The first term has weight 3m + 2, and the second has weight m + 1. Therefore, the equation is isobaric with weight m = -1/2.

This suggests introducing the "dimensionless" variable v = y√x. To avoid fractional powers, we instead let

x = v/y²

Equation (1-14) reduces to

(3v - 2)y dv + 5v(1 - v) dy = 0

which is separable.
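One can confirm the reduction numerically (a sketch of ours, with an arbitrarily chosen sample point): the slope dy/dx determined by (1-14) must also annihilate the reduced form when v = xy².

```python
# At a sample point, the slope dy/dx implied by the isobaric equation (1-14),
# which expands to (3xy^3 - 2y) dx + (x^2 y^2 + x) dy = 0, must also satisfy
# the reduced separable form (3v - 2) y dv + 5 v (1 - v) dy = 0 with v = x y^2.
x, y = 0.7, 1.3                               # arbitrary sample point
dydx = (2*y - 3*x*y**3) / (x + x**2 * y**2)   # slope from (1-14)
v = x * y**2
dvdx = y**2 + 2*x*y*dydx                      # chain rule for v = x y^2
assert abs((3*v - 2)*y*dvdx + 5*v*(1 - v)*dydx) < 1e-9
print("isobaric reduction consistent at the sample point")
```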

An equation of the form

(ax + by + c) dx + (ex + fy + g) dy = 0   (1-15)

where a, ... , g are constants, may be made homogeneous by a substitution

x = X + α     y = Y + β

where α and β are suitably chosen constants (provided af ≠ be; if af = be, Eq. (1-15) is even more trivial).

An equation of the form

y - xy' = f(y')   (1-16)

is known as a Clairaut equation. To solve it, differentiate both sides with respect to x. The result is

y''[f'(y') + x] = 0

We thus have two possibilities. If we set y'' = 0, y = ax + b, and substitution back into the original equation (1-16) gives b = f(a). Thus y = ax + f(a) is the general solution. However, we also have the possibility

f'(y') + x = 0

Eliminating y' between this equation and the original differential equation (1-16), we obtain a solution with no arbitrary constants. Such a solution is known as a singular solution.


EXAMPLE

y = xy' + (y')²   (1-17)

This is a Clairaut equation with general solution

y = cx + c²

However, we must also consider the possibility 2y' + x = 0, which gives

x² + 4y = 0

This singular solution is an envelope of the family of curves given by the general solution, as shown in Figure 1-1. The dotted parabola is the singular solution, and the straight lines tangent to the parabola are the general solution.
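Both families are easy to check directly (a sketch of ours; we take Eq. (1-17) to be y = xy' + (y')², consistent with the general solution y = cx + c² and the singular solution x² + 4y = 0):

```python
# Residual of the Clairaut equation y = x y' + (y')^2 for a given
# value y, slope yp, and point x; zero means the equation is satisfied.
def residual(y, yp, x):
    return y - (x*yp + yp*yp)

# General solution: y = c x + c^2, with constant slope y' = c.
for c in [-1.0, 0.5, 2.0]:
    for x in [-1.0, 0.0, 3.0]:
        assert residual(c*x + c*c, c, x) == 0.0

# Singular solution: x^2 + 4y = 0, i.e. y = -x^2/4, slope y' = -x/2.
for x in [-2.0, 1.0, 4.0]:
    assert abs(residual(-x*x/4, -x/2, x)) < 1e-12
print("both Clairaut solution families verified")
```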

There are various other types of singular solutions, but we shall not discuss them here. See Cohen (C4) or Ince (I2) for more complete discussions and further references.

Next we review some methods which are useful for higher-order differential equations. An important type is the linear equation with constant coefficients:

aₙy⁽ⁿ⁾ + aₙ₋₁y⁽ⁿ⁻¹⁾ + ... + a₁y' + a₀y = f(x)   (1-18)

If f(x) = 0, the equation is said to be homogeneous; otherwise it is inhomogeneous. Note that, if a linear equation is homogeneous, the sum of two solutions is also a solution, whereas this is not true if the equation is inhomogeneous.

The general solution of an inhomogeneous equation is the sum of the general solution of the corresponding homogeneous equation (the so-called complementary function) and any solution of the inhomogeneous equation

Figure 1-1  Solutions of the differential equation (1-17) and their envelope


(the so-called particular integral). This is in fact true for any linear differential equation, whether or not the coefficients are constants.

Solutions of the homogeneous equation [(1-18) with f(x) = 0] generally have the form

y = e^{mx}

Substitution into the homogeneous equation gives

aₙmⁿ + aₙ₋₁mⁿ⁻¹ + ... + a₀ = 0

If the n roots are m₁, m₂, ..., mₙ, the complementary function is

y = c₁e^{m₁x} + c₂e^{m₂x} + ... + cₙe^{mₙx}     (cᵢ are arbitrary constants)

Suppose two roots are the same, m₁ = m₂. Then we have only n - 1 solutions, and we need another. Imagine a limiting procedure in which m₂ approaches m₁. Then

(e^{m₂x} - e^{m₁x})/(m₂ - m₁)

is a solution, and, as m₂ approaches m₁, this solution becomes xe^{m₁x}. This is our additional solution. If three roots are equal, m₁ = m₂ = m₃, then the three solutions are

e^{m₁x}     xe^{m₁x}     x²e^{m₁x}

and so on.

Arguments involving similar limiting procedures are frequently useful; see, for example, the discussion on pp. 17 and 18.

A particular integral is generally harder to find. If f(x) has only a finite number of linearly independent derivatives, that is, is a linear combination of terms of the form xᵏ, e^{ax}, sin kx, cos kx, or, more generally,

xᵏe^{ax} cos αx

then the method of undetermined coefficients is quite straightforward. Take for y(x) a linear combination of [(x) and its independent derivatives and determine the coefficients by requiring that y(x) obey the differential equation.

EXAMPLE

y'' + 3y' + 2y = eˣ

(1-19)


Complementary function:

m² + 3m + 2 = 0     m = -1, -2

so the complementary function is c₁e⁻ˣ + c₂e⁻²ˣ.

Particular integral: Try y = Aeˣ. Substitution into the differential equation (1-19) gives

6A = 1     A = 1/6

Thus, the general solution is

y = eˣ/6 + c₁e⁻ˣ + c₂e⁻²ˣ
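A direct check of this general solution (our own sketch; the constants c₁, c₂ and the sample points are arbitrary), using the closed forms of y, y', and y'':

```python
import math

# Check the general solution of (1-19), y'' + 3y' + 2y = e^x.
c1, c2 = 0.8, -1.5            # arbitrary constants

def terms(x):
    y   = math.exp(x)/6 +   c1*math.exp(-x) +   c2*math.exp(-2*x)
    yp  = math.exp(x)/6 -   c1*math.exp(-x) - 2*c2*math.exp(-2*x)
    ypp = math.exp(x)/6 +   c1*math.exp(-x) + 4*c2*math.exp(-2*x)
    return y, yp, ypp

for x in [-1.0, 0.0, 2.0]:
    y, yp, ypp = terms(x)
    assert abs(ypp + 3*yp + 2*y - math.exp(x)) < 1e-9
print("general solution of (1-19) verified")
```

The e⁻ˣ and e⁻²ˣ pieces cancel identically (they are the complementary function), so only the eˣ/6 term survives on the left side.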

If f(x), or a term in f(x), is also part of the complementary function, the particular integral may contain this term and its derivatives multiplied by some power of x. To see how this works, solve the above example (1-19) with the right-hand side, eˣ, replaced by e⁻ˣ.

There are several formal devices for obtaining particular integrals. If D means d/dx, then we can write our equation (1-18) as

(D - m₁)(D - m₂)···(D - mₙ)y = f(x)   (1-20)

A formal solution of (1-20) is

y = f(x)/[(D - m₁)···(D - mₙ)]

or, expanding by partial fraction techniques,

y = A₁f(x)/(D - m₁) + ... + Aₙf(x)/(D - mₙ)   (1-21)

What does f(x)/(D - m) mean? It is the solution of (D - m)y = f(x), which is a first-order linear equation whose solution is trivial [see (1-6)].

Alternatively, we can just peel off the factors in (1-20) one at a time. That is,

(D - m₂)(D - m₃)···(D - mₙ)y = f(x)/(D - m₁)   (1-22)

We evaluate the right side, divide by D - m₂, evaluate again, and so on.

Next, we consider the very important method known as variation of parameters for obtaining a particular integral. This method has the useful feature of applying equally well to linear equations with nonconstant coefficients. Before giving a general discussion of the method and applying it to an example, we shall digress briefly on the subject of osculating parameters.


Suppose we are given two linearly independent functions y₁(x) and y₂(x). By means of these we can define the two-parameter family of functions

y = C₁y₁(x) + C₂y₂(x)   (1-23)

Now consider some arbitrary function y(x). Can we represent it by an appropriate choice of C₁ and C₂ in (1-23)? Clearly, the answer in general is no. Let us try the more modest approach of approximating y(x) in the neighborhood of some fixed point x = x₀ by a curve of the family (1-23). Since there are two parameters at our disposal, a natural choice is to fit the value y(x₀) and slope y'(x₀) exactly. That is, C₁ and C₂ are determined from the two simultaneous equations

y(x₀) = C₁y₁(x₀) + C₂y₂(x₀)
y'(x₀) = C₁y₁'(x₀) + C₂y₂'(x₀)   (1-24)

The C₁ and C₂ obtained in this way vary from point to point (that is, as x₀ varies) along the curve y(x). They are called osculating parameters because the curve they determine fits the curve y(x) as closely as possible at the point in question.¹

One can, of course, generalize to an arbitrary number N of functions yᵢ and parameters Cᵢ. One chooses the Cᵢ to reproduce the function y(x) and its first N - 1 derivatives at the point x₀.

For simplicity, we shall restrict ourselves to second-order equations. Consider the inhomogeneous equation

p(x)y'' + q(x)y' + r(x)y = s(x)   (1-25)

and suppose we know the complementary function to be

c₁y₁(x) + c₂y₂(x)

Let us seek a solution of (1-25) of the form

y = u₁(x)y₁(x) + u₂(x)y₂(x)   (1-26)

where the uᵢ(x) are functions to be determined. In order to substitute (1-26) into (1-25), we must evaluate y' and y''. From (1-26),

y' = u₁y₁' + u₂y₂' + u₁'y₁ + u₂'y₂   (1-27)

Before going on to calculate y'', we observe that it would be convenient to impose the condition that the sum of the last two terms in (1-27) vanish, that is,

u₁'y₁ + u₂'y₂ = 0   (1-28)

¹ Latin osculari = to kiss.


This prevents second derivatives of the uᵢ from appearing, for now

y' = u₁y₁' + u₂y₂'   (1-29)

and differentiating gives

y'' = u₁y₁'' + u₂y₂'' + u₁'y₁' + u₂'y₂'   (1-30)
Note that the condition (1-28) not only simplifies the subsequent algebra but also ensures that u₁ and u₂ are just the osculating parameters discussed above; compare (1-26) and (1-29) with (1-24).

The rest of the procedure is straightforward. If (1-26), (1-29), and (1-30) are substituted into the original differential equation (1-25), and we use the fact that y₁ and y₂ are solutions of the homogeneous equation, we obtain

p(x)(u₁'y₁' + u₂'y₂') = s   (1-31)

(1-28) and (1-31) are now simultaneous linear equations for the uᵢ', whose solution is straightforward. The student is urged to complete the solution; see Problem 1-26.

EXAMPLE

Consider the differential equation

x²y'' - 2y = x   (1-32)

The complementary function, that is, the general solution of

x²y'' - 2y = 0   (1-33)

is most easily found by noting that y = xᵐ is a natural trial solution.² Substitution into (1-33) shows that m = 2 or -1, so that the complementary function is

c₁x² + c₂/x

Therefore, we are led to try to find a solution of (1-32) of the form

y = u₁(x)x² + u₂(x)/x   (1-34)

² In fact, y = xᵐ is an obvious trial solution for any linear differential equation of the form

cₙxⁿy⁽ⁿ⁾ + cₙ₋₁xⁿ⁻¹y⁽ⁿ⁻¹⁾ + ... + c₁xy' + c₀y = 0


We differentiate, obtaining

y' = 2u₁x - u₂/x² + u₁'x² + u₂'/x

and impose the condition

u₁'x² + u₂'/x = 0   (1-35)

Then

y' = 2u₁x - u₂/x²   (1-36)

and

y'' = 2u₁ + 2u₂/x³ + 2u₁'x - u₂'/x²   (1-37)

Substituting (1-34) and (1-37) back into the differential equation (1-32), we obtain

2u₁'x³ - u₂' = x

Solving this together with (1-35) gives

u₁' = 1/(3x²)     u₂' = -x/3

so that u₁ = -1/(3x), u₂ = -x²/6. Then the general solution of (1-32) is

y = -x/2 + c₁x² + c₂/x
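Integrating u₁' = 1/(3x²) and u₂' = -x/3 gives u₁ = -1/(3x) and u₂ = -x²/6, so the particular integral is u₁x² + u₂/x = -x/2. The resulting general solution y = -x/2 + c₁x² + c₂/x is easy to check directly (a sketch of ours; constants chosen arbitrarily):

```python
# Verify that y = -x/2 + c1*x^2 + c2/x solves x^2 y'' - 2y = x.
c1, c2 = 2.0, -3.0            # arbitrary constants

def y(x):
    return -x/2 + c1*x*x + c2/x

def ypp(x):
    # y'' computed term by term: (-x/2)'' = 0, (c1 x^2)'' = 2 c1,
    # (c2/x)'' = 2 c2 / x^3
    return 2*c1 + 2*c2/x**3

for x in [0.5, 1.0, 4.0]:
    assert abs(x*x*ypp(x) - 2*y(x) - x) < 1e-9
print("variation-of-parameters result verified")
```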

Changes of variable sometimes help. We shall discuss one general transformation which is particularly useful. Consider the second-order linear equation

y'' + f(x)y' + g(x)y = 0

(1-38)

The substitution

y = v(x)p(x)

(1-39)

will give another linear equation in v(x) which may be easier to solve for


certain choices of p(x). The resulting equation is

v'' + (2p'/p + f)v' + [(p'' + fp' + gp)/p]v = 0   (1-40)

If we know one solution of the original equation (1-38), we can take p to be that solution and thereby eliminate the term in v from (1-40). This is very useful because we can then find the general solution by two straightforward integrations.

Alternatively, we may choose

p = exp[-½ ∫f(x) dx]

and eliminate the first-derivative term of (1-40). This procedure is helpful as an aid to recognizing equations, and it is especially useful in connection with approximate methods of solution.

EXAMPLE

Bessel's equation is

x²y'' + xy' + (x² - n²)y = 0   (1-42)

The substitution y = u(x)p(x) with p(x) = x⁻¹ᐟ² gives

u'' + [1 - (n² - 1/4)/x²]u = 0   (1-43)

This equation, with no first-derivative term, is very convenient for finding the approximate behavior of Bessel functions at large x, for example, by the WKB method (Section 1-4). The details are left as an exercise (Problem 1-41).

In conclusion, we shall briefly mention some other devices for solving differential equations. For details, refer to a text on differential equations.

If the dependent variable y is absent, let y' = p be the new dependent variable. This lowers the order of the equation by one.

If the independent variable x is absent, let y be the new independent variable and y' = p be the new dependent variable. This also lowers the order of the equation by one.

If the equation is homogeneous in y, let v = log y be a new dependent variable. The resulting equation will not contain v, and the substitution v' = p will then reduce the order of the equation by one.

If the equation is isobaric, when x is given the weight 1 and y the weight m, the change of dependent variable y = vxᵐ, followed by the change of independent variable u = log x, gives an equation in which the new independent variable u is absent.



Always watch for the possibility that an equation is exact. Also consider the possibility of finding an integrating factor. For example, the commonly occurring equation y'' = f(y) can be integrated immediately if both sides are multiplied by y'.
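To illustrate this last remark (the example, method, and step sizes here are our own, not the text's): multiplying y'' = f(y) by y' gives d/dx[½(y')² - F(y)] = 0 with F' = f, so ½(y')² - F(y) is constant along any solution. A short Runge-Kutta integration makes the conservation visible for f(y) = -y³:

```python
# Integrate y'' = -y^3 and watch the first integral
# E = (1/2) y'^2 + y^4/4 stay constant along the solution.
def deriv(state):
    y, yp = state
    return (yp, -y**3)              # (y', y'') with f(y) = -y^3

def rk4(state, h):
    # one classical fourth-order Runge-Kutta step
    def add(s, k, c):
        return (s[0] + c*k[0], s[1] + c*k[1])
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h/2))
    k3 = deriv(add(state, k2, h/2))
    k4 = deriv(add(state, k3, h))
    return (state[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def energy(state):
    y, yp = state
    return 0.5*yp*yp + y**4/4       # (1/2) y'^2 - F(y), F(y) = -y^4/4

s = (1.0, 0.0)                      # y(0) = 1, y'(0) = 0
E0 = energy(s)
for _ in range(2000):               # integrate out to x = 10
    s = rk4(s, 0.005)
assert abs(energy(s) - E0) < 1e-6
print("first integral conserved to", abs(energy(s) - E0))
```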

1-2 POWER-SERIES SOLUTIONS

Before discussing series solutions in general, consider a simple (although nonlinear) example:

y'' = x - y²   (1-44)

Try

y = c₀ + c₁x + c₂x² + c₃x³ + ···

Then (1-44) becomes

2c₂ + 6c₃x + 12c₄x² + ··· = x - (c₀ + c₁x + c₂x² + ···)²

Equating coefficients of equal powers of x gives

c₂ = -½c₀²

c₃ = 1/6 - (1/3)c₀c₁

c₄ = -(1/12)c₁² + (1/12)c₀³

etc.

Suppose, for example, we want the solution with y = 0, y' = 1 at x = 0. Then c₀ = 0, c₁ = 1, and

y = x + x³/6 - x⁴/12 + ···   (1-45)
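The coefficient matching above is easy to automate; the following sketch (our own, using exact rational arithmetic) regenerates c₂, c₃, c₄ and hence the series (1-45):

```python
from fractions import Fraction as F

# Series solution of y'' = x - y^2 with y(0) = 0, y'(0) = 1,
# found by matching powers of x.
N = 6
c = [F(0), F(1)] + [F(0)] * (N - 1)   # c[0] = 0, c[1] = 1 from initial data
for k in range(N - 1):
    # coefficient of x^k in y^2 is sum_j c[j] * c[k-j]
    y2_k = sum(c[j] * c[k - j] for j in range(k + 1))
    rhs = (F(1) if k == 1 else F(0)) - y2_k     # x - y^2 at order x^k
    # y'' contributes (k+2)(k+1) c[k+2] at order x^k
    c[k + 2] = rhs / ((k + 2) * (k + 1))

assert c[2] == 0 and c[3] == F(1, 6) and c[4] == F(-1, 12)
print("series coefficients:", c[:5])
```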

This method of solution is a very useful one, but we have been careless about justifying the method, establishing convergence of the series, etc. We now briefly outline the general theory of series solutions of linear differential equations.³

Consider the equation

y⁽ⁿ⁾ + fₙ₋₁(x)y⁽ⁿ⁻¹⁾ + ... + f₁(x)y' + f₀(x)y = 0   (1-46)

³ This theory is discussed more fully, for example, in Copson (C8) Chapter X, or Jeffreys and Jeffreys (J4) Chapter 16.


If f₀(x), ..., fₙ₋₁(x) are regular (in the sense of complex variable theory; see the Appendix, Section A-2) at a point x = x₀, x₀ is said to be an ordinary point of the differential equation. Near an ordinary point, the general solution of a differential equation can be written as a Taylor series whose radius of convergence is the distance to the nearest singularity of the differential equation; by a singularity, of course, we mean a point which is not ordinary.

The Taylor series is an ordinary power series

y = Σₘ₌₀^∞ cₘ(x - x₀)ᵐ   (1-47)

whose coefficients cₘ may be conveniently found by substitution in the differential equation, as in the example above.

If x₀ is not an ordinary point, but (x - x₀)fₙ₋₁(x), (x - x₀)²fₙ₋₂(x), ..., (x - x₀)ⁿf₀(x) are regular at x₀, x₀ is said to be a regular singular point of the differential equation. Near a regular singular point, we can always find at least one solution of the form

y = (x - x₀)ˢ Σₘ₌₀^∞ cₘ(x - x₀)ᵐ     with c₀ ≠ 0   (1-48)

where the exponent s of the leading term will not necessarily be an integer. The series again converges in any circle which includes no singularities except for x₀.

The algebraic work involved in substituting the series (1-47) or (1-48) in a differential equation is simplified if x_0 = 0. Thus it is usually convenient first to make a translation to x_0 as origin, that is, to rewrite the equation in terms of a new independent variable, z = x - x_0.

If a point is neither an ordinary point nor a regular singularity, it is an irregular singular point.

EXAMPLE

We first consider an example of expansion about an ordinary point. Legendre's differential equation is

(1 - x^2) y'' - 2x y' + n(n+1) y = 0    (1-49)

The points x = ±1 are regular singular points. We shall expand about the ordinary point x = 0. Try

y = \sum_{m=0}^{\infty} c_m x^m = c_0 + c_1 x + c_2 x^2 + ...


Substitution into the differential equation (1-49) gives

c_2 = -[n(n+1)/2] c_0

c_3 = {[2 - n(n+1)]/6} c_1

c_4 = -{n(n+1)[6 - n(n+1)]/24} c_0

c_5 = {[2 - n(n+1)][12 - n(n+1)]/120} c_1

The general recursion relation is

c_{i+2}/c_i = [i(i+1) - n(n+1)] / [(i+1)(i+2)] = (i + n + 1)(i - n) / [(i+1)(i+2)]    (1-50)

Note that this is a two-term recursion relation, that is, it relates two coefficients only. Their indices differ by two. These facts could have been noticed at the beginning by inspection of the differential equation.

The general solution of our differential equation (1-49) is therefore

y = c_0 [1 - n(n+1) x^2/2! + n(n+1)(n-2)(n+3) x^4/4! - + ...]
  + c_1 [x - (n-1)(n+2) x^3/3! + (n-1)(n+2)(n-3)(n+4) x^5/5! - + ...]    (1-51)

For physical applications, it is necessary to consider the behavior of the infinite series in (1-51) near the singular points x = ±1. If we let i → ∞ in the recursion relation (1-50), we see that

c_{i+2}/c_i → 1

Thus, if we go farther and farther out in either series, we find it more and more resembling a geometric series with x^2 as the ratio of successive terms. The sum of such a series clearly approaches infinity as x^2 approaches 1.⁴

⁴ This discussion is incomplete. Infinite series are discussed in more detail in Chapter 2, and the series of (1-51) are studied as a special case on p. 47. The series in fact "almost" converge as x^2 approaches 1; their sums go to infinity logarithmically [y ≈ ln(1 - x^2)].


In cases of physical interest, however, we often require solutions y(x) which are finite for -1 ≤ x ≤ +1. There are two ways we can arrange this:

(1) Let c_1 = 0, and choose n to be one of the integers ..., -5, -3, -1, 0, 2, 4, .... Then the first series in (1-51) terminates, and the second is absent.

(2) Let c_0 = 0, and choose n to be one of the integers ..., -6, -4, -2, 1, 3, 5, .... Then the second series in (1-51) terminates, and the first is absent.

We see that if we wish a solution of the Legendre differential equation (1-49) which is finite on the interval -1 ≤ x ≤ +1, n must be an integer. The resulting solution y(x) is a polynomial which, when normalized by the condition y(1) = 1, is known as a Legendre polynomial P_n(x).⁵
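The termination of the series is easy to watch in code. The following Python sketch (function names are ours) builds the terminating series from the recursion relation (1-50), normalizes it so that y(1) = 1, and recovers the first few Legendre polynomials.

```python
from fractions import Fraction

def legendre_coeffs(n):
    """Coefficients [c_0, ..., c_n] of P_n(x), built from the recursion (1-50)
    c_{i+2}/c_i = (i + n + 1)(i - n) / ((i + 1)(i + 2)),
    normalized so that P_n(1) = 1."""
    c = [Fraction(0)] * (n + 1)
    c[n % 2] = Fraction(1)                      # seed the terminating series
    for i in range(n % 2, n - 1, 2):
        c[i + 2] = c[i] * Fraction((i + n + 1) * (i - n), (i + 1) * (i + 2))
    norm = sum(c)                               # value of the series at x = 1
    return [ck / norm for ck in c]

print(legendre_coeffs(2))  # coefficients of P_2 = (3x^2 - 1)/2
print(legendre_coeffs(3))  # coefficients of P_3 = (5x^3 - 3x)/2
```

Because the factor (i - n) vanishes when i reaches n, the loop stops producing nonzero coefficients exactly where the text says it must.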

As an example of the problems that can arise at a regular singular point, consider Bessel's equation

x^2 y'' + x y' + (x^2 - m^2) y = 0    (1-52)

This has a regular singular point at x = 0. Thus, a solution of the form

y = x^s \sum_{n=0}^{\infty} c_n x^n    (c_0 ≠ 0)

is guaranteed to exist, and we can see from the differential equation that a two-term recursion relation will be obtained. If we substitute into the differential equation, the coefficient of x^s is

c_0 (s^2 - m^2) = 0,  i.e.,  s^2 - m^2 = 0    (1-53)

This is called the indicial equation. Its roots are s = ±m.

Next consider the coefficient of x^{s+1}. It is

c_1 [(s + 1)^2 - m^2] = 0

Thus c_1 = 0 except in the single case m = ±1/2, s = -1/2, and in that case we can set c_1 = 0, since the terms thereby omitted are equivalent to those which make up the other solution, with s = +1/2.

We therefore confine ourselves to even values of n in the sum, writing

y = x^{±m} (c_0 + c_2 x^2 + c_4 x^4 + ...)

The recursion relation is easily found to be

c_{n+2}/c_n = -1/[(s + n + 2)^2 - m^2] = -1/[(n + 2)(2s + n + 2)]    (1-54)

so that

y = c_0 x^s [1 - x^2/(4(s + 1)) + x^4/(4·8(s + 1)(s + 2)) - + ...]    (1-55)

Suitably normalized, this series is called a Bessel function; we shall discuss Bessel functions in more detail in Chapter 7.

If m is not an integer, we have two independent solutions of our equation, namely (1-55) with s = ±m. If m is an integer (which we may assume positive), we can only choose s = +m; for s = -m, all denominators in (1-55) beyond a certain term vanish. If we multiply through by (s + m) before setting s = -m, to cancel the offending factors, we get a multiple of the solution with s = +m, as may be readily verified.
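That the truncated series (1-55) satisfies Bessel's equation order by order can be checked by machine. A Python sketch (ours), using the coefficient recursion (1-54): substituting y = Σ c_n x^{s+n} into (1-52) gives the residual coefficient ((s+n)^2 - m^2) c_n + c_{n-2} at each power, and every retained power must cancel.

```python
from fractions import Fraction

def bessel_coeffs(s, N):
    """Even coefficients c_0, c_2, ..., c_N from (1-54):
    c_{n+2}/c_n = -1/((n + 2)(2s + n + 2)),  c_0 = 1."""
    c = {0: Fraction(1)}
    for n in range(0, N, 2):
        c[n + 2] = -c[n] / ((n + 2) * (2 * s + n + 2))
    return c

def residual(m, s, c):
    """Coefficient of x^{s+n} in x^2 y'' + x y' + (x^2 - m^2) y
    for y = sum_n c_n x^{s+n}: it equals ((s+n)^2 - m^2) c_n + c_{n-2}."""
    return {n: ((s + n) ** 2 - m ** 2) * c.get(n, 0) + c.get(n - 2, 0)
            for n in range(0, max(c) + 1, 2)}

m = 3
c = bessel_coeffs(s=m, N=12)
r = residual(m, m, c)
print(all(val == 0 for val in r.values()))  # every retained power cancels: True
```

Running the same check with s = -m and integer m fails with a division by zero in the recursion, which is precisely the vanishing-denominator difficulty described above.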

This is a situation reminiscent of our difficulty on p. 7 in obtaining independent solutions of a differential equation with constant coefficients. The resolution of the difficulty is quite analogous. We begin by letting y(x, s) denote the series

y(x, s) = x^s \sum_{n=0}^{\infty} c_n x^n

with c_0 = 1, c_1 = 0, and all other coefficients given by (1-54), but without making use of the indicial equation (1-53). That is, we set

y(x, s) = x^s [1 - x^2/((s + 2 + m)(s + 2 - m))
    + x^4/((s + 2 + m)(s + 2 - m)(s + 4 + m)(s + 4 - m)) - + ...]    (1-56)

If we abbreviate by L the Bessel differential operator,

L = x^2 d^2/dx^2 + x d/dx + (x^2 - m^2)

then the series L y(x, s) will contain only a term in x^s, because the recursion relation makes the coefficients of all higher powers vanish. We obtain [compare (1-53)]

L y(x, s) = (s - m)(s + m) x^s

We see (again) that s must equal ±m in order that y(x, s) be a solution of Bessel's equation Ly = 0. However, if m is a positive integer, we have remarked that

[(s + m) y(x, s)]_{s = -m}


is a constant multiple of y(x, m), and we must find a second solution. To do so, consider the result obtained by substituting (s + m) y(x, s) into Bessel's equation. We get

L[(s + m) y(x, s)] = (s + m)^2 (s - m) x^s

The derivative of the right side with respect to s vanishes at s = -m. Therefore,

{∂/∂s [(s + m) y(x, s)]}_{s = -m}    (1-57)

is a solution of Bessel's equation, and is in fact the second solution we were looking for.

EXAMPLE

Let us find a second solution of Bessel's equation for m = 2.

y(x, s) = x^s [1 - x^2/(s(s + 4)) + x^4/(s(s + 2)(s + 4)(s + 6)) - + ...]

(s + 2) y(x, s) = x^s [(s + 2) - ((s + 2)/(s(s + 4))) x^2 + x^4/(s(s + 4)(s + 6))
    - x^6/(s(s + 4)^2(s + 6)(s + 8)) + - ...]    (1-58)

Remembering that (d/ds) x^s = x^s ln x and

(d/ds)(uv.../(w...)) = (uv.../(w...)) (u'/u + v'/v + ... - w'/w - ...)

we obtain

∂/∂s [(s + 2) y(x, s)] = (s + 2) y(x, s) ln x
    + x^s [1 - ((s + 2)/(s(s + 4))) (1/(s + 2) - 1/s - 1/(s + 4)) x^2
    + (1/(s(s + 4)(s + 6))) (-1/s - 1/(s + 4) - 1/(s + 6)) x^4 - + ...]


Setting s = -2, and observing that [(s + 2) y(x, s)]_{s = -2} = -(1/16) y(x, 2), we obtain

{∂/∂s [(s + 2) y(x, s)]}_{s = -2}
    = -(1/16) y(x, 2) ln x + x^{-2} (1 + x^2/4 + x^4/64 + ...)    (1-59)

This solution and y(x, 2) are two independent solutions of Bessel's equation with m = 2.

Let us now consider the differential equation

d^2 ψ/dx^2 + (E - x^2) ψ = 0    (1-60)

This is the Schrödinger equation for a one-dimensional quantum-mechanical harmonic oscillator, in appropriate units. If one tries a direct power-series solution of (1-60) about x = 0, one obtains a three-term recursion relation. Such relations are a little inconvenient, so that it is advantageous to look for a transformation of variables which will lead to a simpler equation.

A trick that often helps in such a situation is to "factor out" the behavior near some singularity or singularities. Where are the singularities of this equation? There are none in the finite x plane, but we must digress for a moment on singularities at infinity.

Consider the differential equation

y'' + P(x) y' + Q(x) y = 0    (1-61)

Recall that x = 0 is an ordinary point if P and Q are regular there, and a regular singular point if xP and x^2 Q are regular there. Let z = 1/x. The equation becomes

d^2 y/dz^2 + [2/z - (1/z^2) p(z)] dy/dz + (1/z^4) q(z) y = 0    (1-62)

where

p(z) = P(x)    q(z) = Q(x)

Then x = ∞ is an ordinary point or a singularity of Eq. (1-61) depending on whether z = 0 is an ordinary point or a singularity of Eq. (1-62). That is, x = ∞ is an ordinary point if 2x - x^2 P(x) and x^4 Q(x) are regular there, and x = ∞ is a regular singular point if x P(x) and x^2 Q(x) are regular there. This concludes our digression.

Using these criteria, we see that our differential equation (1-60) is highly singular at x = ∞. For large x, the equation is approximately

d^2 ψ/dx^2 - x^2 ψ = 0    (1-63)

and the solutions are roughly

ψ ≈ e^{±x^2/2}    (1-64)

in that, if we substitute either function into the differential equation (1-63), the terms which are dominant at infinity cancel.

In quantum mechanics, physically acceptable solutions must not become infinite as |x| → ∞; therefore, try

ψ = y e^{-x^2/2}    (1-65)

This change of variable does not ensure the desired behavior at infinity, of course, and we shall still have to select solutions y(x) which give this behavior. In fact, we may expect in general y(x) ~ e^{x^2}, so as to give the divergent solution ψ ~ e^{+x^2/2}.

The differential equation (1-60) becomes

y'' - 2x y' + (E - 1) y = 0    (1-66)

[If we write E - 1 = 2n, (1-66) is known as the Hermite differential equation.] We can obtain the general solution of this equation in the form of a power series which will converge everywhere, and the recursion relation for the coefficients will contain only two terms. The recursion relation is

c_{m+2}/c_m = [(2m + 1) - E] / [(m + 1)(m + 2)]

and the solution is

y = c_0 [1 + (1 - E) x^2/2! + (1 - E)(5 - E) x^4/4! + ...]
  + c_1 [x + (3 - E) x^3/3! + (3 - E)(7 - E) x^5/5! + ...]    (1-67)

If E = 2n + 1 where n is a nonnegative integer, one series terminates after the term in x^n (the even or odd series, depending on n). The resulting polynomial, suitably normalized, is called a Hermite polynomial⁶ of order n, H_n(x). The other series can be eliminated by setting its coefficient, c_0 or c_1, equal to zero, and the resulting solution (1-65) will go to zero at infinity.

If either series of (1-67) does not terminate, its behavior at large x is determined by the terms far out, where the recursion relation is approximately

c_{m+2}/c_m ≈ 2/m

Thus either series behaves for large x like e^{x^2}, and ψ → e^{+x^2/2} as expected. A solution ψ which remains bounded as x → ±∞ is therefore only possible if E = 2n + 1 with integral n; that is,

ψ = ψ_n(x) = H_n(x) e^{-x^2/2}    for    E = E_n = 2n + 1    (1-68)

This is another example of how boundary conditions may impose restrictions on the acceptable values for a constant appearing in a differential equation. The acceptable solutions ψ_n are called eigenfunctions of the differential operator -(d^2/dx^2) + x^2 belonging to the eigenvalues E_n (see Chapter 9).
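The termination argument is easy to illustrate numerically. The Python sketch below (ours) generates the series coefficients from the recursion c_{m+2}/c_m = [(2m+1) - E]/[(m+1)(m+2)] and shows that the odd series terminates exactly when E = 2n + 1; for n = 3 the resulting polynomial is proportional to the Hermite polynomial H_3(x) = 8x^3 - 12x.

```python
from fractions import Fraction

def hermite_series(E, terms=12):
    """Series coefficients for y'' - 2xy' + (E - 1)y = 0 via
    c_{m+2} = c_m * ((2m + 1) - E)/((m + 1)(m + 2)); odd series, c_1 = 1."""
    c = [Fraction(0), Fraction(1)]
    for m in range(terms - 2):
        c.append(c[m] * Fraction(2 * m + 1 - E, (m + 1) * (m + 2)))
    return c

poly = hermite_series(E=7)          # E = 2n + 1 with n = 3
print([str(x) for x in poly[:6]])   # ['0', '1', '0', '-2/3', '0', '0']
```

The surviving polynomial x - (2/3)x^3 equals -(1/12) H_3(x); for a non-eigenvalue such as E = 6 the same loop never terminates, in accord with the growth argument above.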

As another example of series solutions, we consider briefly the associated Legendre equation

(1 - x^2) y'' - 2x y' + [n(n+1) - m^2/(1 - x^2)] y = 0    (1-69)

Note that when m = 0, this reduces to Legendre's equation. The origin is an ordinary point, and x = ±1 are regular singular points.

A straightforward attempt at a power-series solution about x = 0 again yields a three-term recursion relation. Let us try to factor out the behavior at x = ±1. Let x = ±1 + z and write the approximate form of the differential equation valid for |z| ≪ 1. Near either singularity it is

z y'' + y' - (m^2/4z) y = 0

This has solutions y = z^{+m/2} and z^{-m/2}, the former being well behaved if, as we may assume without loss of generality, m ≥ 0.

We are therefore led to make the change of variable y = v(1 - x^2)^{m/2} in (1-69), thus factoring out the behavior at both singularities simultaneously. The equation becomes

(1 - x^2) v'' - 2(m + 1) x v' + [n(n+1) - m(m+1)] v = 0    (1-70)

⁶ For a precise definition of H_n(x) see Eq. (7-103).


with a straightforward series solution about x = 0. The recursion relation for the coefficients is

c_{r+2}/c_r = [(r + m)(r + m + 1) - n(n + 1)] / [(r + 1)(r + 2)]
           = (r + m - n)(r + m + n + 1) / [(r + 1)(r + 2)]    (1-71)

Again we obtain even and odd series solutions for v(x), both of which behave like (1 - x^2)^{-m} near x = ±1 if they do not terminate.⁷ A bounded solution exists only if n and m are such that one series terminates after some term x^r. From (1-71), the condition is

n - m = r = an integer ≥ 0    (1-72)

Usually in physical applications n and m are both integers. It may be verified that y is then simply a constant times

P_n^m(x) = (1 - x^2)^{m/2} (d^m/dx^m) P_n(x)    (1-73)

which is called an associated Legendre function. These functions will be discussed in more detail in Chapter 7.

1-3 MISCELLANEOUS APPROXIMATE METHODS

One can often get a qualitative picture of the solutions of a differential equation by means of a graphical approach.

For example, consider the first-order equation

dy/dx = e^{-2x^2}    (1-74)

We may draw little lines in the xy-plane, each indicating the slope of the solution passing through that point, as illustrated in Figure 1-2. The approximate shape of the solutions is obvious.

As a second example, consider the second-order, nonlinear equation

y'' = x - y^2    (1-75)

⁷ See Problem 1-32.

1-3 Miscellaneous Approximate Methods 23
Figure 1-2 Slopes of the solutions of (1-74)

If we draw the parabola x = y^2, as in Figure 1-3, we see that y'' is positive inside the parabola, and negative outside of it. If we limit ourselves to solutions starting near the origin and going toward the right, we can identify several types, which are sketched in Figure 1-3:

1. Solutions oscillating about the upper branch of the parabola.

2. A critical nonoscillating solution approaching the upper branch of the parabola from above.

3. Solutions going to y = -∞.

4. A critical nonoscillating solution approaching the lower branch of the parabola from above.

The power-series solution (1-45) we found earlier [with y(0) = 0, y'(0) = 1] is of type 1. An approximation to solution 2 is given by expression (1-80).

Figure 1-3 Several types of solution for the equation y'' = x - y^2
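These solution types can be reproduced by direct numerical integration; the sketch below (plain fourth-order Runge-Kutta in Python, written for this note) follows the solution with y(0) = 0, y'(0) = 1 and confirms that the deviation y - sqrt(x) keeps changing sign, i.e., the trajectory oscillates about the upper branch of the parabola as a type-1 solution should.

```python
import math

def f(x, u):                        # u = [y, y'];  y'' = x - y^2
    return [u[1], x - u[0] ** 2]

def rk4(x, u, h):
    k1 = f(x, u)
    k2 = f(x + h/2, [u[0] + h/2*k1[0], u[1] + h/2*k1[1]])
    k3 = f(x + h/2, [u[0] + h/2*k2[0], u[1] + h/2*k2[1]])
    k4 = f(x + h, [u[0] + h*k3[0], u[1] + h*k3[1]])
    return [u[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            u[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])]

x, u, h = 0.0, [0.0, 1.0], 0.001    # y(0) = 0, y'(0) = 1, as in (1-45)
signs = []
for _ in range(30000):              # integrate out to x = 30
    u = rk4(x, u, h)
    x += h
    if x > 5.0:
        signs.append(u[0] > math.sqrt(x))

crossings = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(crossings)  # y - sqrt(x) changes sign repeatedly: type-1 oscillation
```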


Figure 1-4 A potential V(x) for the Schrödinger equation

As a third example, consider the one-dimensional Schrödinger equation for a particle of mass m in a potential V(x):

d^2 ψ/dx^2 = -(2m/ħ^2) [E - V(x)] ψ    (1-76)

If E > V(x), then ψ''/ψ < 0, and ψ curves toward the x-axis; that is, ψ has an oscillatory or "sinusoidal" character. If E < V(x), then ψ''/ψ > 0, and ψ curves away from the x-axis; that is, ψ has an "exponential" character. If we impose the boundary condition that ψ remain finite everywhere, then a solution with unbounded exponential behavior is unacceptable.

Suppose V(x) looks as shown in Figure 1-4.

If E > V_3, all solutions are oscillatory everywhere and all are acceptable; that is, none blow up at infinity.

If V_2 < E < V_3, "most" solutions blow up as x → +∞. However, if we start just right, we can find a solution that falls off exponentially as x → +∞. This defines the acceptable solution uniquely (except in amplitude) and, in particular, fixes the phase in the left-hand region where the solution is oscillatory. Such a solution is illustrated qualitatively in Figure 1-5.

Figure 1-5 A physically acceptable solution of the Schrödinger equation for V_2 < E < V_3


If V_1 < E < V_2, things are really difficult. ψ behaves "exponentially" at both ends. If we adjust it so that it does not blow up at the left, it is almost sure to do so at the right. Only for certain values of E can satisfactory solutions be found. These are the eigenvalues; compare (1-68).

If E < V_1, no satisfactory solutions exist.

Another method, which can often provide useful approximate solutions, consists of experimenting with the equation; that is, dropping small terms, iterating, and so forth.

EXAMPLE

Consider

dy/dx = e^{-2xy}    (1-77)

Let us look for a solution near x → ∞. Assume y > 0. Then dy/dx → 0, so we try

y ≈ a    (a = constant)

Now put this guess back into (1-77):

dy/dx ≈ e^{-2ax}

Therefore,

y ≈ a - e^{-2ax}/(2a)

Iterate once more:

dy/dx ≈ exp[-2x(a - e^{-2ax}/(2a))] ≈ e^{-2ax} [1 + (x/a) e^{-2ax}]

y ≈ a - e^{-2ax}/(2a) - (1/(4a^2))(x + 1/(4a)) e^{-4ax} + ...    (1-78)

etc.

If the procedure seems to be converging, it is probably giving us a solution.
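The iteration can be mimicked numerically: together with the boundary value y(∞) = a, (1-77) is equivalent to the integral equation y(x) = a - ∫_x^∞ e^{-2t y(t)} dt, and one can iterate starting from y_0 = a. One step should reproduce the closed form a - e^{-2ax}/(2a) found above. A Python sketch (grid and quadrature choices are ours):

```python
import math

A = 1.0                                        # the constant a in the text
H = 0.001
xs = [2.0 + H * k for k in range(28001)]       # grid on [2, 30]

def iterate(y):
    """One step of y <- a - integral_x^inf e^{-2 t y(t)} dt, by the trapezoid
    rule; the integrand is negligible beyond x = 30."""
    g = [math.exp(-2 * t * yt) for t, yt in zip(xs, y)]
    cum = [0.0] * len(xs)                      # cumulative integral from the right
    for i in range(len(xs) - 2, -1, -1):
        cum[i] = cum[i + 1] + 0.5 * H * (g[i] + g[i + 1])
    return [A - c for c in cum]

y1 = iterate([A] * len(xs))                    # first iterate, starting from y = a
exact = [A - math.exp(-2 * A * x) / (2 * A) for x in xs]
err = max(abs(u - v) for u, v in zip(y1, exact))
print(err < 1e-6)  # the first iterate matches a - e^{-2ax}/(2a): True
```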

Sometimes, though, it does not converge.

EXAMPLE

y'' = x - y^2    (1-79)


(See the graphical discussion on pp. 22-23.) Try y ≈ √x. Substituting into the right side of (1-79) gives y'' ≈ 0, which in turn implies y = ax + b, with a and b constant. This does not look anything like our first try. The mistake, of course, was substituting our first guess into the "large" term y^2, rather than into the "small" term y''. Doing it the other way gives

y^2 = x - y'' ≈ x - (√x)''    y = √x (1 + (1/4) x^{-5/2})^{1/2}

y ≈ √x + (1/8) x^{-2}

Iterating once more gives

y ≈ √x + (1/8) x^{-2} - (49/128) x^{-9/2} + ...    (1-80)

This is the solution (2) enumerated on p. 23.

Alternatively, we may try a different method. Write

y = √x + η(x)    (1-81)

Substituting into the differential equation (1-79) gives

-(1/4) x^{-3/2} + η'' = -2η√x - η^2    (1-82)

We neglect the last term η^2, but the equation is still complicated. We may try further neglecting either the first or the second term on the left. If we assume

|η''| ≪ |2η√x|

we find

η ≈ (1/8) x^{-2}

and we obtain our previous solution (1-80). We should, of course, check the consistency of the solution and the approximation made in finding it; namely, that

((1/8) x^{-2})'' ≪ 2((1/8) x^{-2})√x

This is satisfied for large x.

We can obtain another solution by neglecting the x^{-3/2} term of Eq. (1-82) instead of η''. That is, if we assume

|η''| ≫ |(1/4) x^{-3/2}|

we obtain the differential equation

η'' + 2√x η = 0    (1-83)

An approximate solution of this equation may be found by the WKB method, which we discuss in the next section. The result is

η ≈ (A/x^{1/8}) sin((4√2/5) x^{5/4} + δ)    (1-84)

1-4 The WKB Method 27

which gives an oscillatory solution of type 1 listed on p. 23. Again, we should go back and verify the consistency of the assumption

|η''| ≫ |(1/4) x^{-3/2}|

Why can we not neglect the first term on the right of (1-82)?

1-4 THE WKB METHOD

The WKB method provides approximate solutions of differential equations of the form

y'' + f(x) y = 0    (1-85)

provided f(x) satisfies certain restrictions discussed below, which may be summarized in the phrase "f(x) is slowly varying." Recall that any linear homogeneous second-order equation may be put in this form by the transformation (1-41). The one-dimensional Schrödinger equation is of this form, and the method was developed for quantum-mechanical applications by Wentzel, by Kramers,⁹ and by Brillouin, whence the name. The method had been given previously by Jeffreys.¹⁰

The solutions of Eq. (1-85) with f(x) constant suggest the substitution

y = e^{iφ(x)}    (1-86)

The differential equation becomes

-(φ')^2 + iφ'' + f = 0    (1-87)

If we assume φ'' small, a first approximation is

φ(x) = ±∫ √(f(x)) dx    (1-88)

The condition of validity (that φ'' be "small") is

|φ''| ≈ (1/2)|f'/√f| ≪ |f|    (1-89)

From (1-86) and (1-88) we see that 1/√f is roughly 1/(2π) times one "wavelength" or one "exponential length" of the solution y. Thus the condition of validity of our approximation is simply the intuitively reasonable one that the change in f(x) in one wavelength should be small compared to |f|.

⁹ See, for example, Kramers (K5).
¹⁰ See Jeffreys and Jeffreys (J4).


A second approximation is easily found by iteration. From (1-88),

φ'' ≈ ±(1/2) f^{-1/2} f'

Substituting this estimate for the small term φ'' into (1-87), we obtain

(φ')^2 ≈ f ± (i/2)(f'/√f)

φ' ≈ ±√f + (i/4)(f'/f)

The two choices of sign give two (approximate) solutions which may be combined to give the general solution

y(x) ≈ (1/[f(x)]^{1/4}) (c_+ exp[i ∫ √(f(x)) dx] + c_- exp[-i ∫ √(f(x)) dx])    (1-90)

where c_+ and c_- are arbitrary constants.

We have thus found an approximation to the general solution of the original equation (1-85) in any region where the condition of validity (1-89) holds. The method fails if f(x) changes too rapidly or if f(x) passes through zero. The latter is a serious difficulty since we often wish to join an

Figure 1-6 Graph of f(x) and two exact solutions of the equation (1-85); one of them, y_1(x), being the special solution which decreases to zero on the left


oscillatory solution in a region where f(x) > 0 to an "exponential" one in a region where f(x) < 0 [see, for example, the discussion of Eq. (1-76)].

We shall investigate this problem in some detail¹¹ in order to derive the so-called connection formulas relating the constants c_+ and c_- of the WKB solutions on either side of a point where f(x) = 0.

Suppose f(x) passes through zero at x_0 and is positive on the right, as shown in Figure 1-6. Suppose further that f(x) satisfies the condition of validity (1-89) in regions both to the left and to the right of x_0, so that any specific solution y(x) may be approximated in these regions by

x ≪ x_0, f(x) < 0:

y(x) ≈ (a/[-f(x)]^{1/4}) exp[+∫_x^{x_0} √(-f(x)) dx]
     + (b/[-f(x)]^{1/4}) exp[-∫_x^{x_0} √(-f(x)) dx]    (1-91)

x ≫ x_0, f(x) > 0:

y(x) ≈ (c/[f(x)]^{1/4}) exp[+i ∫_{x_0}^x √(f(x)) dx]
     + (d/[f(x)]^{1/4}) exp[-i ∫_{x_0}^x √(f(x)) dx]    (1-92)

where the square roots and fourth roots mean positive real roots throughout. If f(x) is real, a solution which is real in the left region will also be real on the right. This "reality condition" states that if a and b are real, d = c*.

Our problem is to "connect" the approximations on either side of x_0 so that they refer to the same exact solution, that is, to find c and d if we know a and b, and vice versa. To make this connection, we need to use an approximate solution which is valid along the entire length of some path connecting the regions of x on either side of x_0 where the WKB approximations are valid. One procedure,¹² followed by Kramers and by Jeffreys, is to use a solution valid on the real axis through x_0 (see Problem 7-36 and also the discussion at the end of Section 4-5).

A second procedure, used by Zwann and by Kemble,¹³ is to avoid the real axis near x_0 and use a path encircling x_0 in the complex plane, along

¹¹ If we were using fine print in this book, the material from here to Eq. (1-122) would be so displayed. The connection formulas (1-113) and (1-122) are, however, important and should be understood even by those who wish to skip the intervening derivation.

¹² See Schiff (S2) Section 34, for a discussion of this approach, and numerous references.

¹³ See Kemble (K1) Section 21.


which the WKB approximations themselves remain valid. We shall follow this second procedure because it provides not only the connection formulas, but also the means to estimate errors in the WKB approximations. Also, the techniques employed are more generally instructive.

The question of errors is important because we wish to use the approximate solutions over a large range of x covering many "wavelengths" or "exponential lengths." Thus, one might worry that errors would gradually accumulate and that the approximate solution would get badly "out of phase," for example, in the region where it is oscillatory.

Define WKB functions associated with Eq. (1-85) as

W_±(x) = [f(x)]^{-1/4} exp[±i ∫_{x_0}^x √(f(x)) dx]    (1-93)

We must consider these to be functions of the complex variable x, and we should draw suitable branch cuts in the x plane to avoid ambiguities from the roots of f(x) (see the Appendix, Section A-1). These functions (1-93) satisfy (exactly) a differential equation which we can find by differentiation:

W_±'' + [f + (1/4)(f''/f) - (5/16)(f'/f)^2] W_± = 0    (1-94)

If we define

g(x) = (1/4)(f''/f) - (5/16)(f'/f)^2    (1-95)

then W_±(x) are exact solutions of

W_±'' + [f(x) + g(x)] W_± = 0    (1-96)

and approximate solutions of

y'' + f(x) y = 0    (1-97)

provided |g(x)| ≪ |f(x)|. Equation (1-97) is regular at x_0, whereas Eq. (1-96) has a singularity there. Correspondingly, y(x) is regular at x_0 but the W_± are singular there.

We now define functions α_±(x) by

y(x) = α_+(x) W_+(x) + α_-(x) W_-(x)    (1-98)
y'(x) = α_+(x) W_+'(x) + α_-(x) W_-'(x)    (1-99)

where y(x) is a solution of (1-97). The α_± are just osculating parameters as discussed on p. 9 in connection with the variation-of-parameters method.


The WKB approximation corresponds simply to setting α_+ and α_- equal to constants.

Solving (1-98) and (1-99) for α_±(x),

α_+ = (y W_-' - y' W_-)/(W_+ W_-' - W_+' W_-)

α_- = -(y W_+' - y' W_+)/(W_+ W_-' - W_+' W_-)

where the denominators are the Wronskian of W_+ and W_-. The Wronskian is a constant, as may easily be shown from the differential equation (1-96). From the explicit forms (1-93), it is

W_+ W_-' - W_+' W_- = -2i

so that

α_± = ±(i/2)(y W_∓' - y' W_∓)    (1-100)

Differentiating, and using the differential equations (1-96) and (1-97) to eliminate y'' and W_±'', we obtain

dα_±/dx = ±(i/2)(y W_∓'' - y'' W_∓) = ∓(i/2) g(x) y W_∓    (1-101)

or, remembering (1-93) and (1-98),

dα_±/dx = ∓(i/2) (g(x)/[f(x)]^{1/2}) {α_± + α_∓ exp[∓2i ∫_{x_0}^x √(f(x)) dx]}    (1-102)

One or the other of these expressions (1-101) or (1-102) may be used to estimate the error which may accumulate in the WKB approximation over a long range of x.

EXAMPLE

The WKB functions associated with the equation

y'' + x y = 0    (1-103)

for x ≥ 0 are

W_±(x) = x^{-1/4} exp(±i ∫_0^x x^{1/2} dx) = x^{-1/4} exp(±(2i/3) x^{3/2})    (1-104)

These satisfy exactly the equation (1-96):

W'' + (x - (5/16) x^{-2}) W = 0

That is,

f(x) = x    g(x) = -(5/16) x^{-2}


We may write the solution of (1-103) in the form (1-98):

y(x) = α_+(x) W_+(x) + α_-(x) W_-(x)

At large x, a general solution of (1-103) is accurately described by the WKB approximation:

y(x) ≈ A x^{-1/4} cos((2/3) x^{3/2} + δ)    as x → ∞    (1-105)

so that α_+ → (A/2)e^{iδ}, α_- → (A/2)e^{-iδ} for x → ∞. We wish to investigate the error in this WKB solution when it is carried in to small values of x, this error being measured by the deviation of α_+(x) and α_-(x) from the constant values above. If the α_±(x) do not change much, we have approximately, from (1-102),

dα_±/dx ≈ ±(5i/32) x^{-5/2} {α_± + α_∓ exp[∓(4i/3) x^{3/2}]}

and Δα_±, the changes in α_± from x_1 to infinity, are given by

Δα_±/(A/2) ≈ (2/A) ∫_{x_1}^∞ (dα_±/dx) dx    (1-106)

The complicated second term in the bracket is less important than the first (why?), and we see that the relative error |Δα_±|/|α_±| is only of the order of 10 to 20 percent even for a value of x as low as x_1 = 1, and at larger x_1 the error becomes very small. In particular, no significant error in phase accumulates even after an arbitrarily large number of "wavelengths."
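This statement can be tested directly. For y'' + xy = 0 the WKB form (1-105) predicts that the "envelope" y^2 + y'^2/x falls off like x^{-1/2} as the solution is followed outward, with no secular drift. A Python sketch (RK4; normalization and step counts are ours):

```python
def f(x, v):                        # v = [y, y'];  y'' = -x y, i.e. f(x) = x in (1-85)
    return [v[1], -x * v[0]]

def rk4(x, u, h):
    k1 = f(x, u)
    k2 = f(x + h/2, [u[0] + h/2*k1[0], u[1] + h/2*k1[1]])
    k3 = f(x + h/2, [u[0] + h/2*k2[0], u[1] + h/2*k2[1]])
    k4 = f(x + h, [u[0] + h*k3[0], u[1] + h*k3[1]])
    return [u[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            u[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])]

def integrate(x0, u, x1, n):
    h = (x1 - x0) / n
    x = x0
    for _ in range(n):
        u = rk4(x, u, h)
        x += h
    return u

def envelope(x, u):                 # ~ A^2 x^{-1/2} if the WKB form (1-105) holds
    return u[0] ** 2 + u[1] ** 2 / x

u25 = integrate(1.0, [1.0, 0.0], 25.0, 24000)
u100 = integrate(25.0, u25, 100.0, 75000)
ratio = envelope(25.0, u25) / envelope(100.0, u100)
print(ratio)  # WKB predicts (100/25)^{1/2} = 2
```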

We now return to the problem of estimating the variation in the functions α_±(x) in the general case. Integrating Eq. (1-102) along a path Γ in the complex plane from x_1 to x_2,

Δα_± = ∫_Γ (dα_±/dx) dx
     = ∓(i/2) ∫_Γ (g(x)/√(f(x))) α_± dx ∓ (i/2) ∫_Γ (g(x)/√(f(x))) α_∓ (W_∓/W_±) dx    (1-107)

We shall consider only paths such that

∫_Γ |g(x)/√(f(x))| ds ≪ 1    (1-108)

where ds = |dx| is the element of length along Γ. Kemble calls such a path a good path.

Along a good path, the first term on the right side of (1-107) does not contribute a significant change Δα_±. The second term can be significant only if |W_∓/W_±| ≫ 1. That is, along a good path α_+ can change only in a region where "its" function W_+ is small in magnitude compared to W_-. Even there, α_+ cannot change if α_- is zero or nearly zero. Similar statements hold for α_- if we interchange + and - everywhere.

In the regions where |W_-/W_+| ≫ 1 or |W_+/W_-| ≫ 1, the coefficient of the small function may vary markedly, a behavior known as Stokes' phenomenon.¹⁴ (This variation has no appreciable effect on the function y(x) in this region, so that there is no inconsistency with the fact that the WKB approximation, with constant α_+ and α_-, is supposed to be valid!)

The above results will now be used to derive the WKB connection formulas.

We recall the definitions (1-93) and draw a branch line from x_0 to +∞ along the real axis. We take the roots [f(x)]^{1/4} and [f(x)]^{1/2} to be the positive real roots on the top side of this branch cut. Then, for x < x_0,

W_±(x) = e^{-iπ/4} [-f(x)]^{-1/4} exp[±∫_x^{x_0} √(-f(x)) dx]    (x < x_0)    (1-109)

Referring to (1-91) and (1-92), we have

α_+ → a e^{iπ/4}  (x ≪ x_0)    α_+ → c  (x ≫ x_0)
α_- → b e^{iπ/4}  (x ≪ x_0)    α_- → d  (x ≫ x_0)    (1-110)

In order to locate the "Stokes regions," where |W_+/W_-| ≫ 1 or |W_-/W_+| ≫ 1, we note¹⁵ that near x_0, f(x) ≈ K(x - x_0), and the boundaries of these regions will be qualitatively similar to those for f(x) = K(x - x_0). These boundaries are where ∫_{x_0}^x √(f(x)) dx is real, since then |W_+| = |W_-|. For f(x) = K(x - x_0), if we put x - x_0 = re^{iθ},

∫_{x_0}^x √(f(x)) dx = (2/3) K^{1/2} (x - x_0)^{3/2} = (2/3) K^{1/2} r^{3/2} e^{i(3θ/2)}    (1-111)

This is real in the directions θ = 0, (2/3)π, (4/3)π, and it is easy to see from (1-93) which of W_+ or W_- dominates in each region between these boundaries.

A map of the Stokes regions is given in Figure 1-7, and a good path Γ is shown connecting the regions x ≪ x_0 and x ≫ x_0 on the real axis, where the WKB solutions are to be connected.

It is now easy to find the connection formula for the special solution y_1(x) which has a decreasing exponential behavior to the left of x_0, as shown in Figure 1-6. This solution has a = 0, in the notation of (1-91), or, from (1-110), α_+(x) = 0 at the start of the path Γ. Thus, α_-(x) remains constant along Γ even though |W_+/W_-| ≫ 1 in region 2, and α_+ remains essentially zero. In region 1, |W_-/W_+| ≫ 1 so that α_- remains constant although α_+ may

¹⁴ For a rather amusing description of Stokes' discovery of this phenomenon, as well as one of the earliest discussions of the connection problem, see R. E. Langer (L4).

¹⁵ The reader will note a tacit assumption: we assume that a radius for the path Γ can be chosen which is small enough for the assumption f(x) ≈ K(x - x_0) to be valid, and at the same time large enough that Γ is a "good" path.


Figure 1-7 Complex x-plane showing the Stokes regions around the point x_0 where f(x) vanishes

(and does) change. Since α_- is constant all along Γ, we find from (1-110)

d = b e^{iπ/4}    (1-112)

The reality condition gives c = d* if b is real.

The connection formula for the solution y_1(x) with decreasing exponential behavior for x < x_0 is thus (normalizing to b = 1)

(1/[-f(x)]^{1/4}) exp[-∫_x^{x_0} √(-f(x)) dx] → (2/[f(x)]^{1/4}) cos[∫_{x_0}^x √(f(x)) dx - π/4]    (1-113)

Reversal of the arrow is delicate. We know only that the growing exponential term on the left will be absent for some phase near -π/4 on the right.
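The connection formula (1-113) can be verified numerically for f(x) = x (so x_0 = 0): start far to the left on the decreasing exponential and integrate across the turning point; on the right the solution should match 2 x^{-1/4} cos[(2/3)x^{3/2} - π/4]. A Python sketch (RK4; starting point and tolerances are ours):

```python
import math

def f(x, v):                         # v = [y, y'];  y'' + x y = 0, f(x) = x, x_0 = 0
    return [v[1], -x * v[0]]

def rk4(x, u, h):
    k1 = f(x, u)
    k2 = f(x + h/2, [u[0] + h/2*k1[0], u[1] + h/2*k1[1]])
    k3 = f(x + h/2, [u[0] + h/2*k2[0], u[1] + h/2*k2[1]])
    k4 = f(x + h, [u[0] + h*k3[0], u[1] + h*k3[1]])
    return [u[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            u[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])]

# left-hand WKB form (1-91) with a = 0, b = 1, evaluated at x = -8:
xL = -8.0
y0 = (-xL) ** -0.25 * math.exp(-(2.0/3.0) * (-xL) ** 1.5)
u = [y0, y0 * (math.sqrt(-xL) - 0.25 / xL)]      # y and y' of the decaying branch
x, h = xL, 0.001
for _ in range(int(round((15.0 - xL) / h))):
    u = rk4(x, u, h)
    x += h

# right-hand side of (1-113) at x = 15:
pred = 2.0 * 15.0 ** -0.25 * math.cos((2.0/3.0) * 15.0 ** 1.5 - math.pi / 4)
print(u[0], pred)  # the two agree to within a few percent of the amplitude
```

Both the factor of 2 and the phase -π/4 are needed for the agreement; changing either spoils the match.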

Next, consider a more general solution y_2(x) such that a ≠ 0. Since the basic equation is linear, the coefficients in expressions (1-91) and (1-92) must be linearly related:

c = Aa + Bb
d = Ca + Db    (1-114)


We have found in (1-112) that

B = e^{-iπ/4}  and  D = e^{iπ/4}    (1-115)

The reality condition d = c*, if a and b are real, says

C = A*  and  D = B*    (1-116)

the second of which we already know.

Another general relation may be obtained from the fact that the Wronskian y_1 y_2' - y_2 y_1' of two independent solutions of our equation y'' + f(x)y = 0 is a constant. (To see this, just differentiate the Wronskian.) Write

y_1 = α_+^{(1)} W_+ + α_-^{(1)} W_-    y_2 = α_+^{(2)} W_+ + α_-^{(2)} W_-    (1-117)

Remembering that y' = α_+ W_+' + α_- W_-' and the Wronskian

W_+ W_-' - W_+' W_- = -2i

one finds

y_1 y_2' - y_2 y_1' = -2i (α_+^{(1)} α_-^{(2)} - α_-^{(1)} α_+^{(2)})    (1-118)

Evaluating this constant at x ≪ x₀ [from (1-110)],

where a₁ and b₁ are the coefficients a and b in the WKB approximation (1-91) for y₁(x), and a₂ and b₂ are the coefficients for y₂(x). Similarly, on the other side, where x ≫ x₀,

Thus

e^{i\pi/2} = i = AD - BC = AB^* - A^*B = 2i\,\mathrm{Im}\,AB^*

(1-119)

But B = e^{−iπ/4}, so Ae^{iπ/4} = R + i/2, where R is an undetermined real constant. Collecting our results,

A = Re^{-i\pi/4} + \tfrac{1}{2}e^{i\pi/4}        B = e^{-i\pi/4}

C = Re^{i\pi/4} + \tfrac{1}{2}e^{-i\pi/4}        D = e^{i\pi/4}

(1-120)

and because R is undetermined, we still cannot write c and d in terms of a and b. However, this is as far as we can proceed; R is indeterminate. Another way of expressing this indeterminacy is to observe that the asymptotic form of a solution like y₂(x) in Figure 1-6 for x < x₀ does not fix y₂(x) for x > x₀. What about the reverse problem of writing a and b in terms of


c and d? Equations (1-114), with the constants A, B, C, D given by (1-120), are easily inverted with the result

a = e^{-i\pi/4}c + e^{i\pi/4}d

b = \left[\tfrac{1}{2}e^{i\pi/4} - Re^{-i\pi/4}\right]c + \left[\tfrac{1}{2}e^{-i\pi/4} - Re^{i\pi/4}\right]d

Thus we can find a but not b, which might be expected since α₋(x) fluctuates wildly in region 2 of Figure 1-7 except in the special case α₊ = 0. We might also expect this result from the fact that, when a ≠ 0, the value of b is immaterial since the decreasing exponential is negligible compared to the increasing one. Omitting b for this reason, we can write a second formula connecting the two expressions (1-91) and (1-92) for y(x). It is conventional to choose a real solution by setting c = ½e^{iφ} and d = ½e^{−iφ}. Then

a = \sin\left(\phi + \frac{\pi}{4}\right)    (1-121)

\frac{\sin(\phi + \pi/4)}{[-f(x)]^{1/4}} \exp\left[\int_x^{x_0} \sqrt{-f(x)}\,dx\right] \longleftarrow \frac{1}{[f(x)]^{1/4}} \cos\left[\int_{x_0}^{x} \sqrt{f(x)}\,dx + \phi\right]    (1-122)

provided φ is not too near −π/4. The arrow cannot be reversed, as we have seen, since the phase φ cannot be determined if we only know how much large exponential is present.

It is very easy to remember the connection formulas (1-113) and (1-122) by drawing qualitative pictures like Figure 1-6. Consider first y₁(x) with the decreasing exponential to the left of x₀. To the right of x₀, y₁(x) looks like a cosine wave for which the phase is between −π/2 and 0 when x = x₀. We remember the phase is −π/4. Furthermore, the amplitude of the cosine wave on the right "looks" larger than that of the exponential on the left. We remember it is twice as large. These comments reproduce the result (1-113). A similar mnemonic for remembering (1-122) results from considering a solution y₂(x) which is 90° out of phase with y₁(x) in the right-hand region. By this same procedure, it is a straightforward matter to write down connection formulas similar to (1-113) and (1-122) for the case where the slope of f(x) is negative at x₀ instead of positive.

We shall conclude by giving an example of the use of the WKB method in quantum mechanics. Consider the Schrödinger equation for a particle in a potential well

\frac{d^2\psi}{dx^2} + \frac{2m}{\hbar^2}[E - V(x)]\psi = 0    (1-123)

with V(x) as shown in Figure 1-8.

f(x) = \frac{2m}{\hbar^2}[E - V(x)] \quad \text{is} \quad \begin{cases} \text{positive for } a < x < b \\ \text{negative for } x < a,\ x > b \end{cases}



V(x)

x

Figure 1-8 A typical one--dimensional potential well

If ψ(x) is to be bounded for x < a, then in a < x < b our connection formula (1-113) tells us

\psi(x) \approx \frac{A}{(E - V)^{1/4}} \cos\left[\int_a^x \sqrt{\frac{2m}{\hbar^2}(E - V)}\,dx - \frac{\pi}{4}\right]    (1-124)

where A is an arbitrary constant. If ψ(x) is to be bounded for x > b, then in a < x < b similarly

\psi(x) \approx \frac{B}{(E - V)^{1/4}} \cos\left[\int_x^b \sqrt{\frac{2m}{\hbar^2}(E - V)}\,dx - \frac{\pi}{4}\right]    (1-125)

where B is an arbitrary constant. These two expressions must be the same, which gives the condition

\int_a^b \sqrt{2m(E - V)}\,dx = (n + \tfrac{1}{2})\pi\hbar \qquad n = 0, 1, 2, \ldots    (1-126)

This result is very similar to the Bohr-Sommerfeld quantization condition of pre-1925 quantum mechanics.
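As a concrete check of (1-126), the sketch below (illustrative code, not from the text) applies the quantization condition to the harmonic oscillator V(x) = x²/2 in units ħ = m = 1. It evaluates the phase integral by the midpoint rule and solves for E by bisection; for this particular potential the WKB levels happen to coincide with the exact ones, Eₙ = n + ½.

```python
import math

def action(E, n_steps=4000):
    # Phase integral  ∫_a^b sqrt(2m(E - V)) dx  for V = x^2/2, m = hbar = 1.
    # Turning points: E = x^2/2  =>  x = ±sqrt(2E).
    b = math.sqrt(2.0 * E)
    a = -b
    h = (b - a) / n_steps
    total = 0.0
    for i in range(n_steps):            # midpoint rule avoids the endpoint singularities
        x = a + (i + 0.5) * h
        total += math.sqrt(max(2.0 * (E - 0.5 * x * x), 0.0)) * h
    return total

def wkb_level(n, lo=1e-6, hi=100.0):
    # Solve  action(E) = (n + 1/2) * pi  by bisection; action(E) increases with E.
    target = (n + 0.5) * math.pi
    for _ in range(48):
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in range(4):
    print(n, round(wkb_level(n), 3))    # 0.5, 1.5, 2.5, 3.5
```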

REFERENCES

There are many good books on the subject of ordinary differential equations. Two small but remarkably complete ones are Burkill (B9) and Ince (I1). The latter is a condensed version of the treatise (I2) by the same author. Another treatise, excellent and well worth looking through, even if it is slightly old-fashioned, is Forsyth (F5).

Modern texts on this subject tend to place more emphasis on numerical solutions, integral transform methods, and special techniques for nonlinear equations than do the older texts mentioned above. Several good examples are Birkhoff and Rota (B3); Golomb and Shanks (G5); and Rabenstein (R1).

The WKB method and its applications to quantum mechanics are discussed in any good book on quantum theory, such as Schiff (S2) Section 34;


Merzbacher (M4) Chapter 7; or Landau and Lifshitz (L2) Chapter VII. Note that the last-named book refers to the WKB method as the "quasiclassical case."

PROBLEMS

Find the general solutions of Problems 1-1 to 1-20:

1-1  x²y′ + y² = xyy′

1-2  y′ = x√(1 + y²) / (y√(1 + x²))

1-3  y′ = a²/(x + y)²

1-4  y′ + y cos x = ½ sin 2x

1-5  (1 − x²)y′ − xy = xy²

1-6  2x³y′ = 1 + √(1 + 4x²y)

1-7  y″ + y′² + 1 = 0

1-8  y″ = e^y

1-9  x(1 − x)y″ + 4y′ + 2y = 0

1-10  (1 − x)y² dx − x³ dy = 0

1-11  xy′ + y + x⁴y⁴e^x = 0

1-12  (1 + x²)y′ + y = tan⁻¹ x

1-13  x²y′² − 2(xy − 4)y′ + y² = 0 (general solution and singular solution)

1-14  yy″ − y′² − 6xy² = 0

1-15  x⁴yy″ + x⁴y′² + 3x³yy′ − 1 = 0

1-16  x²y″ − 2y = x

1-17  y‴ − 2y″ − y′ + 2y = sin x

1-18  y⁗ + 2y″ + y = cos x

1-19  y″ + 3y′ + 2y = exp(eˣ)

1-20  a²y″² = (1 + y′²)³

1-21  The differential equation obeyed by the charge q on a capacitor C connected in series with a resistance R to a voltage

V = V_0 \left(\frac{t}{\tau}\right)^2 e^{-t/\tau}

is

R\,\frac{dq}{dt} + \frac{1}{C}\,q = V_0 \left(\frac{t}{\tau}\right)^2 e^{-t/\tau}

Find q(t) if q(0) = 0.



1-22  In the activation of an indium foil by a constant slow neutron flux, the number N of radioactive atoms obeys the equation

\frac{dN}{dt} = \lambda N_s - \lambda N

where N_s is the constant number after "saturation." Find N(t) if N(0) = 0.

1-23  Find the general solution of

A(x)y''(x) + A'(x)y'(x) + \frac{y(x)}{A(x)} = 0

where A(x) is a known function and y(x) is unknown.

1-24  Find the general solution of

xy'' + 2y' + n^2 xy = \sin \omega x

Hint: Eliminate the first derivative term.

1-25  Note that y = x would be a solution of

(1 - x)y'' + xy' - y = (1 - x)^2

if the right side were zero. Use this fact to obtain the general solution of the equation as given.

1-26  Consider the differential equation

y'' + p(x)y' + q(x)y = 0

on the interval a ≤ x ≤ b. Suppose we know two solutions y₁(x) and y₂(x) such that y₁(a) = 0 and y₂(b) = 0. Give the solution of the equation

y'' + p(x)y' + q(x)y = f(x)

which obeys the conditions y(a) = y(b) = 0, in the form

y(x) = \int_a^b G(x, x')f(x')\,dx'

where G(x, x′), the so-called Green's function, involves only the solutions y₁ and y₂ and assumes different functional forms for x′ < x and x′ > x.

Illustrate by solving

y'' + k^2 y = f(x) \qquad y(a) = y(b) = 0


1-27  Find the general solution of the differential equation

xy'' + \frac{3}{x}\,y = 1 + x^3

in real form (no i's in the answer).

1-28  Consider the equation

\frac{d^2y}{dx^2} + \frac{2}{x}\frac{dy}{dx} + \left(K + \frac{2}{x} - \frac{l(l+1)}{x^2}\right)y = 0

where l is a nonnegative integer. Find all values of the constant K which can give a solution which is finite on the entire range of x (including ∞). An equation like this arises in solving the Schrödinger equation for the hydrogen atom.

Hint: Let y = v/x; then "factor out" the behavior at infinity.

1-29  For what values of the constant K does the differential equation

y'' + \left(-\frac{1}{4} + \frac{K}{x}\right)y = 0 \qquad (0 \le x < \infty)

have a nontrivial solution vanishing at x = 0 and x = ∞?

1-30  Find the values of the constant k for which the equation

xy'' - 2xy' + (k - 3x)y = 0

has a solution bounded on the range 0 ≤ x < ∞.

1-31  A solution of the differential equation

xy'' + 2y' + (E - x)y = 0

is desired such that y(0) = 1, y(∞) = 0. For what values of E is this possible?

1-32  By considering Eq. (1-71) for r → ∞, verify that v(x) indeed behaves like (1 − x²)^{−m} as x → +1.

1-33  Bessel's equation for m = 0 is

x^2 y'' + xy' + x^2 y = 0

We have found one solution, J₀(x). Show that a second solution exists of the form

J_0(x)\ln x + Ax^2 + Bx^4 + Cx^6 + \cdots

and find the first three coefficients A, B, C.


1-34  Consider the differential equation

xy'' + (2 - x)y' - 2y = 0

Give two solutions, one regular at the origin and having the value 1 there, the other of the form

\frac{1}{x} + A(x)\ln x + B(x)

where A(x) and B(x) are regular at the origin. Three terms of each series will suffice.

1-35  Find an approximate expression (say three terms) for large x for the solution of

y' = \frac{1}{(x^2 + y^2)^2}

which approaches the x axis as x → +∞.

1-36  Consider the differential equation

y'' - xy + y^3 = 0

for large positive x.

(a) Find an oscillating solution with two arbitrary constants.

(b) Find a (nontrivial) particular nonoscillating solution.

1-37  Find an (approximate) oscillating solution of

y'' = (y - x)^2 - e^{2(y-x)}

You may let α denote the real number such that

(α ≈ −0.57)

1-38  Consider the differential equation

\frac{dy}{dx} = \frac{e^y}{x}

(a) Suppose y(1) = 0. Give a series expansion for y(x) which is valid for x near 1. Neglect terms of order (x − 1)⁴.

(b) Suppose y(x₀) = +∞ (x₀ > 0). Give an approximate expression for y(x) which is useful for x slightly less than x₀.

1-39  Use the WKB method to find approximate negative values of the constant E for which the equation

\frac{d^2y}{dx^2} + [E - V(x)]y = 0

has a solution which is finite for all x between x = −∞ and x = +∞, inclusive. V(x) is the function shown below.


V(x)

1-40  Find a good approximation, for x large and positive, to the solution of the equation

y'' - \frac{3}{x}\,y' + \left(\frac{15}{4x^2} + x^{1/2}\right)y = 0

Hint: Remove the first derivative term.

1-41  Obtain an approximate formula for the Bessel function J_m(x) by the WKB method and give the limiting form of this expression for large x (x ≫ m). Do not worry about getting the constant in front correct. You may assume m ≥ ½.

1-42  Consider the differential equation y″ + xy = 0.

(a) If y \sim x^{-1/4}\cos\left(\tfrac{2}{3}x^{3/2}\right) as x → +∞, y ~ ? as x → −∞.

(b) If y \sim (-x)^{-1/4}\exp\left[-\tfrac{2}{3}(-x)^{3/2}\right] as x → −∞, y ~ ? as x → +∞.

(c) If y \sim (-x)^{-1/4}\exp\left[+\tfrac{2}{3}(-x)^{3/2}\right] as x → −∞, y ~ ? as x → +∞.

Note: The answer to one of (a), (b), (c) is not defined. Be sure to indicate which one is not defined, as well as giving correct answers for the other two.

1-43  Consider the solution of the differential equation

\frac{d^2y}{dx^2} + x^2 y = 0

which has zero value and unit slope at x = 10.

(a) Give (approximately) the location of the next zero of y(x) greater than 10.

(b) Give (approximately) the value of y(x) at its first maximum for x > 10.


1-44  Consider a solution y₁(x) of the differential equation

y'' + x^2 y = 0

such that y₁(x) has a zero at x = 5. Give approximately the location of the 25th zero beyond the one at x = 5, and estimate the error in your result.

TWO

INFINITE SERIES

In this chapter we recall some tests for the convergence of series and present a number of methods for obtaining the sum of a series in closed form. Numerical and approximate methods are not discussed here, but in Chapter 13.

2-1 CONVERGENCE

An infinite series

\sum_{n=1}^{\infty} a_n = a_1 + a_2 + a_3 + \cdots

is said to converge to the sum S provided the sequence of partial sums has the limit S; that is, provided

\lim_{N \to \infty} \left(\sum_{n=1}^{N} a_n\right) = S

If the series \sum_{n=1}^{\infty} a_n converges to the sum S, we will simply write this as an equality:

\sum_{n=1}^{\infty} a_n = S

The series \sum_{n=1}^{\infty} a_n is said to converge absolutely if the related series \sum_{n=1}^{\infty} |a_n| converges. Absolute convergence implies convergence, but not vice versa; for example, the series

1 - \tfrac{1}{2} + \tfrac{1}{3} - \tfrac{1}{4} + \cdots


converges (to the sum ln 2), but it does not converge absolutely, because

1 + \tfrac{1}{2} + \tfrac{1}{3} + \cdots

does not converge (that is, it diverges).

It should be emphasized that the numbers a_n may be complex numbers. We must, of course, give the symbol |a_n| of the preceding paragraph its usual meaning when a_n is complex:

|a_n| = \sqrt{(\mathrm{Re}\,a_n)^2 + (\mathrm{Im}\,a_n)^2}

That is, |a_n| denotes the absolute value (or modulus) of a_n.

The simplest means for determining the convergence of an infinite series is to compare it with a series which is known to converge or diverge. For example, the geometric series

\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots    (2-1)

converges for |x| < 1 and diverges for |x| > 1. This leads to the ratio test:

If the ratio |a_{n+1}/a_n| of successive terms in the infinite series \sum_n a_n has as a limit a number less than one as n → ∞, the series converges and, in fact, converges absolutely. If the limit is greater than one, the series diverges. If the limit is one (or if there is no limit), we must investigate further.
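The ratio test is easy to illustrate empirically. The sketch below (illustrative code, not from the text; `ratio_limit` is a hypothetical helper) estimates the limiting ratio from a late term of each series: the geometric series with x = ½ gives a limit below one, while the divergent harmonic series gives a limit of one, where the test is inconclusive.

```python
def ratio_limit(term, n_max=500):
    """Estimate lim |a_{n+1}/a_n| from a late pair of terms (a rough sketch)."""
    return abs(term(n_max + 1) / term(n_max))

# Geometric series with x = 1/2: ratio -> 1/2 < 1, so it converges absolutely.
print(ratio_limit(lambda n: 0.5 ** n))    # 0.5

# Harmonic series: ratio -> 1, so the ratio test alone is inconclusive.
print(ratio_limit(lambda n: 1.0 / n))     # ~0.998, tending to 1
```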

A second criterion is comparison with an infinite integral. The series

f(1) + f(2) + f(3) + \cdots

converges or diverges with the infinite integral

\int_1^{\infty} f(x)\,dx

provided f(x) is monotonically decreasing. For example, consider the series for the Riemann zeta function

\zeta(s) = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \cdots    (2-2)

The ratio of successive terms is

\frac{a_{n+1}}{a_n} = \left(\frac{n}{n+1}\right)^s = \left(1 + \frac{1}{n}\right)^{-s} \xrightarrow[n \to \infty]{} 1 - \frac{s}{n} + \cdots    (2-3)


The ratio approaches one as n → ∞. Thus, comparison with the geometric series fails. However,

\int \frac{dx}{x^s} = \frac{-1}{s-1}\,\frac{1}{x^{s-1}}

so that the criterion for convergence of the zeta-function series is s > 1. This enables us to sharpen our ratio test: if

\frac{a_{n+1}}{a_n} \to 1 - \frac{s}{n}    (2-4)

with s greater than 1, the series converges (absolutely).

EXAMPLE

Consider the hypergeometric series

F(a, b; c; x) = 1 + \frac{ab}{c}\,\frac{x}{1!} + \frac{a(a+1)b(b+1)}{c(c+1)}\,\frac{x^2}{2!} + \cdots    (2-6)

The ratio of successive terms is

\frac{a_{n+1}}{a_n} = \frac{(a+n)(b+n)}{(c+n)(n+1)}\,x = \frac{\left(1 + \frac{a}{n}\right)\left(1 + \frac{b}{n}\right)}{\left(1 + \frac{c}{n}\right)\left(1 + \frac{1}{n}\right)}\,x = \left[1 + \frac{a+b-c-1}{n} + \cdots\right]x

Thus the series converges if |x| < 1 or, when |x| = 1, if¹

a + b - c < 0    (2-7)

We can sharpen our ratio test even more by considering a very slowly converging series such as

\sum_{n=2}^{\infty} \frac{1}{n(\ln n)^s} = \frac{1}{2(\ln 2)^s} + \frac{1}{3(\ln 3)^s} + \cdots    (2-8)

¹ If we allow a, b, c to be complex numbers, condition (2-7) becomes Re(a + b − c) < 0.


By comparing with the integral

\int \frac{dx}{x(\ln x)^s} = \frac{-1}{s-1}\,\frac{1}{(\ln x)^{s-1}}    (2-9)

we see that the series (2-8) converges provided s > 1. The ratio of successive terms in (2-8) is

\frac{a_{n+1}}{a_n} = \frac{n}{n+1}\left[\frac{\ln n}{\ln(n+1)}\right]^s = \left(1 - \frac{1}{n} + \cdots\right)\left[1 + \frac{1}{n\ln n} + \cdots\right]^{-s} = 1 - \frac{1}{n} - \frac{s}{n\ln n} + \cdots    (2-10)

Thus a series for which a_{n+1}/a_n is of the form (2-10) converges if s > 1 and diverges if s ≤ 1.

EXAMPLE

Consider the series solution (1-51) of Legendre's equation

1 - n(n+1)\frac{x^2}{2!} + n(n+1)(n-2)(n+3)\frac{x^4}{4!} - + \cdots

The ratio of successive terms is

\frac{a_i}{a_{i-1}} = -\frac{(n - 2i + 4)(n + 2i - 3)}{(2i - 3)(2i - 2)}\,x^2

For large i, this ratio is

\frac{a_i}{a_{i-1}} \approx \left[1 - \frac{1}{i} + O\!\left(\frac{1}{i^2}\right)\right]x^2

Thus, if x² = 1, the series diverges.

There are, of course, other convergence criteria; for example, if the signs of the a_n alternate and a_n approaches zero monotonically, then the series \sum_n a_n converges (but not necessarily absolutely). The reader interested in a more rigorous and more complete treatment of convergence, absolute convergence, and so forth, should consult any of the standard references, such as those listed at the end of this chapter.


2-2 FAMILIAR SERIES

The reader should be familiar with at least the following simple series:

(1 + x)^{\alpha} = 1 + \alpha x + \alpha(\alpha - 1)\frac{x^2}{2!} + \cdots = \sum_{n=0}^{\infty} \frac{\alpha!}{(\alpha - n)!\,n!}\,x^n    (2-11)

e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots    (2-12)

Using the Euler relation e^{ix} = \cos x + i\sin x, we deduce from (2-12)

\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - + \cdots    (2-13)

\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - + \cdots    (2-14)

Term-by-term integration of the series for (1 + x)^{-1} and (1 + x^2)^{-1} yields

\ln(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + - \cdots    (2-15)

\tan^{-1} x = x - \frac{x^3}{3} + \frac{x^5}{5} - + \cdots    (2-16)

The above series are simple enough to be easily remembered. Other elementary functions such as tan x, ctn x, sec x, and so on, have series expansions which are probably less familiar. For some, the coefficients are expressed in terms of the so-called Bernoulli numbers or the Euler numbers (which may be defined, in fact, by these coefficients). See Problem 2-12 for sec x.

We shall introduce the Bernoulli numbers by an expansion of the function (e^x − 1)^{−1}. Multiplying by x to eliminate the singularity at x = 0, we can write

\frac{x}{e^x - 1} = c_0 + c_1 x + c_2 x^2 + \cdots    (2-17)

Therefore,

x = (c_0 + c_1 x + c_2 x^2 + \cdots)\left(x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right)

and, dividing by x,

1 = (c_0 + c_1 x + c_2 x^2 + \cdots)\left(1 + \frac{x}{2!} + \frac{x^2}{3!} + \cdots\right)

Let c_n = B_n/n!. Then

1 = \left(\sum_{n=0}^{\infty} \frac{B_n}{n!}\,x^n\right)\left(1 + \frac{x}{2!} + \frac{x^2}{3!} + \cdots\right)    (2-18)

Equating powers of x in (2-18),

1 = B_0 \qquad 0 = \frac{B_0}{2!} + B_1 \qquad 0 = \frac{B_0}{3!} + \frac{B_1}{2!} + \frac{B_2}{2!}    (2-19)

etc.

Except for the first one, Eqs. (2-19) may be written symbolically in the concise form (B + 1)^n − B^n = 0, with the understanding that B^k really means B_k. The first few of these Bernoulli numbers are

B_0 = 1 \qquad B_2 = \tfrac{1}{6} \qquad B_4 = -\tfrac{1}{30} \qquad B_6 = \tfrac{1}{42} \qquad B_8 = -\tfrac{1}{30} \ \cdots

B_1 = -\tfrac{1}{2} \qquad B_3 = B_5 = B_7 = \cdots = 0 \quad (B_{2n+1} = 0,\ n = 1, 2, 3, \ldots)    (2-20)

Notations vary, so care must be exercised when looking up formulas. For example, the notations of Dwight (D8) and Pierce and Foster (P1) differ from ours.
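The symbolic recursion (B + 1)ⁿ − Bⁿ = 0 expands to the linear relation Σ_{k=0}^{n−1} C(n, k) B_k = 0, which determines each B_{n−1} from its predecessors. The short script below (an illustration, not part of the text) generates the numbers in (2-20) exactly with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(n_max):
    # (B + 1)^n - B^n = 0  means  sum_{k=0}^{n-1} C(n, k) B_k = 0  for n >= 2,
    # which solves for B_{n-1} in terms of B_0, ..., B_{n-2}.
    B = [Fraction(1)]                       # B_0 = 1
    for n in range(2, n_max + 2):
        s = sum(comb(n, k) * B[k] for k in range(n - 1))
        B.append(-s / comb(n, n - 1))
    return B

B = bernoulli(8)
print(B[1], B[2], B[4], B[6], B[8])   # -1/2 1/6 -1/30 1/42 -1/30
```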

As an example of the usefulness of Bernoulli numbers as coefficients in power series of familiar functions, consider

\mathrm{ctn}\,x = i\,\frac{e^{ix} + e^{-ix}}{e^{ix} - e^{-ix}}    (2-21)

Let ix = y/2. Then

\mathrm{ctn}\,x = i\,\frac{e^{y/2} + e^{-y/2}}{e^{y/2} - e^{-y/2}} = i\left(1 + \frac{2}{e^y - 1}\right) = \frac{2i}{y}\left(\frac{y}{2} + \frac{y}{e^y - 1}\right) = \frac{2i}{y}\sum_{n\ \mathrm{even}} \frac{B_n y^n}{n!}

Replacing y by 2ix gives

\mathrm{ctn}\,x = \frac{1}{x}\sum_{n\ \mathrm{even}} (-1)^{n/2}\,\frac{B_n (2x)^n}{n!}    (2-22)

The relation tan x = ctn x − 2 ctn 2x enables us to deduce the power series for tan x.

2-3 TRANSFORMATION OF SERIES

Various devices may be used to reduce a given unfamiliar series to a known one.

Differentiation and integration are often useful.

EXAMPLE

f(x) = 1 + 2x + 3x^2 + 4x^3 + \cdots    (2-23)

Integrating term by term,

\int_0^x f(x)\,dx = x + x^2 + x^3 + \cdots = \frac{x}{1-x}

Then, differentiating,

f(x) = \frac{d}{dx}\left(\frac{x}{1-x}\right) = \frac{1}{(1-x)^2}

[We might have recognized this immediately from the binomial series (2-11).]

EXAMPLE

f(x) = \frac{1}{1\cdot 2} + \frac{x}{2\cdot 3} + \frac{x^2}{3\cdot 4} + \frac{x^3}{4\cdot 5} + \cdots    (2-24)

Differentiating twice,

(x^2 f)'' = 1 + x + x^2 + x^3 + \cdots = \frac{1}{1-x}

From this, two integrations give

f(x) = \frac{1}{x} + \frac{1-x}{x^2}\,\ln(1-x)


The constants of integration which arise in this procedure must be evaluated by knowing the series at certain values of x; for example, it is obvious from (2-24) that f(0) = ½.

Complex variables sometimes provide useful transformations.

EXAMPLE

f(\theta) = 1 + a\cos\theta + a^2\cos 2\theta + \cdots = \mathrm{Re}\,(1 + ae^{i\theta} + a^2 e^{2i\theta} + \cdots)

This is just a simple geometric series. Therefore,

f(\theta) = \mathrm{Re}\,\frac{1}{1 - ae^{i\theta}} = \frac{1 - a\cos\theta}{1 - 2a\cos\theta + a^2}    (2-25)

The trick of differentiation or integration may be employed even if the series does not contain a variable.

EXAMPLE

S = \frac{1}{2!} + \frac{2}{3!} + \frac{3}{4!} + \cdots    (2-26)

Define

f(x) = \frac{x^2}{2!} + \frac{2x^3}{3!} + \frac{3x^4}{4!} + \cdots

Then S = f(1).

f'(x) = x + x^2 + \frac{x^3}{2!} + \frac{x^4}{3!} + \cdots = xe^x

f(x) = \int_0^x xe^x\,dx = xe^x - e^x + 1

S = f(1) = 1
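A quick numerical check of (2-26) (an illustration only, not from the text): the partial sums of Σ n/(n+1)! converge to 1 extremely rapidly, in agreement with the closed form just derived.

```python
import math

# Partial sum of S = 1/2! + 2/3! + 3/4! + ...  (Eq. 2-26); the closed form is S = 1.
partial = sum(n / math.factorial(n + 1) for n in range(1, 20))
print(abs(partial - 1.0) < 1e-12)   # True
```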

EXAMPLE

S = 1 + m + \frac{m(m-1)}{2!} + \frac{m(m-1)(m-2)}{3!} + \cdots = \sum_n \frac{m!}{n!(m-n)!}    (2-27)

Let

f(x) = \sum_n \frac{m!}{n!(m-n)!}\,x^n \qquad S = f(1)


But

f(x) = (1 + x)^m

Therefore S = 2^m.

We shall now consider the deceptively simple series [compare (2-2)]

S = \sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \zeta(2)    (2-28)

Previous examples suggest defining

f(x) = x + \frac{x^2}{4} + \frac{x^3}{9} + \frac{x^4}{16} + \cdots \qquad S = f(1)    (2-29)

f'(x) = 1 + \frac{x}{2} + \frac{x^2}{3} + \cdots = -\frac{1}{x}\ln(1-x)

Thus

f(x) = -\int_0^x \frac{\ln(1-x)}{x}\,dx

and

S = -\int_0^1 \frac{\ln(1-x)}{x}\,dx    (2-30)

Unfortunately, this integral is just as difficult as the series was.

The series (2-28) can be expressed in terms of a Bernoulli number. To find this relation, we begin by writing the Fourier series for cos kx. (We shall discuss Fourier series in more detail in Chapter 4.)

\cos kx = \frac{A_0}{2} + \sum_{n=1}^{\infty} (A_n \cos nx + B_n \sin nx)    (2-31)

cos kx is an even function, so there are no sine terms (B_n = 0). The coefficients A_n are given by

A_n = \frac{2}{\pi}\int_0^{\pi} \cos nx\,\cos kx\,dx = (-1)^n\,\frac{2k\sin k\pi}{\pi(k^2 - n^2)}    (2-32)

Thus

\cos kx = \frac{2k\sin k\pi}{\pi}\left(\frac{1}{2k^2} - \frac{\cos x}{k^2 - 1} + \frac{\cos 2x}{k^2 - 4} - \frac{\cos 3x}{k^2 - 9} + - \cdots\right)    (2-33)

Setting x = π in (2-33) gives

k\pi\,\mathrm{ctn}\,k\pi = 1 + 2k^2\left(\frac{1}{k^2 - 1} + \frac{1}{k^2 - 4} + \frac{1}{k^2 - 9} + \cdots\right)    (2-34)

This is the partial fraction representation of the cotangent. For those who know the relevant mathematics, this partial fraction representation could have been written down immediately by using the Mittag-Leffler theorem² from complex variable theory.

In any case, by expanding (2-34) as a power series in k, we obtain

k\pi\,\mathrm{ctn}\,k\pi = 1 - 2k^2\left(1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots\right) - 2k^4\left(1 + \frac{1}{2^4} + \frac{1}{3^4} + \cdots\right) - \cdots = 1 - 2\sum_{n=1}^{\infty} \zeta(2n)\,k^{2n}    (2-35)

where ζ(2n) is defined in (2-2). Comparing (2-35) with our power series (2-22) for the cotangent in terms of Bernoulli numbers, we find

\zeta(2n) = (-1)^{n+1}\,\frac{B_{2n}(2\pi)^{2n}}{2(2n)!}    (2-36)

Thus

1 + \frac{1}{4} + \frac{1}{9} + \cdots = \zeta(2) = \frac{B_2(2\pi)^2}{4} = \frac{\pi^2}{6}    (2-37)

1 + \frac{1}{16} + \frac{1}{81} + \cdots = \zeta(4) = -\frac{B_4(2\pi)^4}{48} = \frac{\pi^4}{90}    (2-38)

etc.
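The values (2-37) and (2-38) are easy to confirm numerically (an illustration, not from the text); a large partial sum, with a crude integral estimate of the tail for ζ(2), agrees with π²/6 and π⁴/90.

```python
import math

N = 100_000
# Tail of sum 1/n^2 beyond N is approximately ∫_N^∞ dx/x^2 = 1/N.
zeta2 = sum(1.0 / n**2 for n in range(1, N + 1)) + 1.0 / N
zeta4 = sum(1.0 / n**4 for n in range(1, N + 1))   # tail here is negligible (~1/(3N^3))

print(abs(zeta2 - math.pi**2 / 6) < 1e-8)    # True
print(abs(zeta4 - math.pi**4 / 90) < 1e-10)  # True
```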

We shall conclude this section by discussing a transformation which is useful both for analytical and numerical summing of series. Let

g(z) = \sum_{n=0}^{\infty} b_n z^n    (2-39)

be a series whose sum we know, and suppose we want to sum the series

f(z) = \sum_{n=0}^{\infty} c_n b_n z^n    (2-40)

² See Copson (C8) Section 6.8, or Whittaker and Watson (W5) Section 7.4. The theorem, roughly speaking, says that a function f(z) which is analytic everywhere in the finite z-plane except for simple poles at z = a₁, a₂, … with residues b₁, b₂, …, and which (except near the poles) remains finite as |z| approaches infinity, may be written in the form

f(z) = f(0) + \sum_n b_n\left(\frac{1}{z - a_n} + \frac{1}{a_n}\right)

(2-34) follows immediately if we take for f(z) the function πz ctn πz.


We eliminate the b_n from (2-40):

f(z) = c_0 b_0 + c_1 b_1 z + c_2 b_2 z^2 + c_3 b_3 z^3 + \cdots

= c_0 g(z) + (c_1 - c_0)b_1 z + (c_2 - c_0)b_2 z^2 + (c_3 - c_0)b_3 z^3 + \cdots

= c_0 g(z) + (c_1 - c_0)zg'(z) + (c_2 - 2c_1 + c_0)b_2 z^2 + (c_3 - 3c_1 + 2c_0)b_3 z^3 + \cdots

= c_0 g(z) + (c_1 - c_0)zg'(z) + (c_2 - 2c_1 + c_0)\frac{z^2 g''(z)}{2!} + (c_3 - 3c_2 + 3c_1 - c_0)\frac{z^3 g'''(z)}{3!} + \cdots    (2-41)

The successive coefficients are just the leading differences in a difference table of the coefficients c_n:

c_0
        c_1 - c_0
c_1                c_2 - 2c_1 + c_0
        c_2 - c_1                c_3 - 3c_2 + 3c_1 - c_0
c_2                c_3 - 2c_2 + c_1
        c_3 - c_2
c_3
. . .

The most common example of this transformation is Euler's transformation. Set

g(z) = \frac{1}{1+z} = 1 - z + z^2 - z^3 + - \cdots    (2-42)

Thus

b_n = (-1)^n \qquad zg'(z) = \frac{-z}{(1+z)^2} \qquad \frac{z^2 g''(z)}{2!} = \frac{z^2}{(1+z)^3}

etc.


Thus, if f(z) = \sum_{n=0}^{\infty} (-1)^n c_n z^n, the Euler transformation tells us that

f(z) = \frac{1}{1+z}\left[c_0 - (c_1 - c_0)\left(\frac{z}{1+z}\right) + (c_2 - 2c_1 + c_0)\left(\frac{z}{1+z}\right)^2 - + \cdots\right]    (2-43)

For example, consider the series

f(z) = 1 - \tfrac{1}{2}z + \tfrac{1}{3}z^2 - \tfrac{1}{4}z^3 + - \cdots    (2-44)

[We already know f(z) = (1/z) ln(1 + z), of course.] This is a rather poorly converging series; in fact, it diverges for |z| > 1. From the difference table

1
        -1/2
1/2                1/3
        -1/6                -1/4
1/3                1/12                1/5
        -1/12                -1/20
1/4                1/30
        -1/20
1/5

we see that

f(z) = \frac{1}{1+z}\left[1 + \frac{1}{2}\left(\frac{z}{1+z}\right) + \frac{1}{3}\left(\frac{z}{1+z}\right)^2 + \cdots\right]    (2-45)

This new series converges much better for z near 1, for example. In fact, it converges for all positive z.
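The transformation (2-43) is easy to implement with a forward-difference table. The sketch below (illustrative code, not from the text; `euler_transform_sum` is a hypothetical helper) compares ten terms of the raw series (2-44) with ten terms of the transformed series (2-45) at z = 1, where the true value is ln 2:

```python
import math

def euler_transform_sum(c, z, terms):
    # f(z) = sum (-1)^n c_n z^n  rewritten via (2-43): the coefficients are the
    # leading forward differences of c_n, and the variable becomes w = z/(1+z).
    diffs = list(c)
    leading = []
    for _ in range(terms):
        leading.append(diffs[0])
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    w = z / (1.0 + z)
    return sum((-w) ** n * d for n, d in enumerate(leading)) / (1.0 + z)

c = [1.0 / (n + 1) for n in range(40)]        # c_n = 1/(n+1):  f(z) = ln(1+z)/z
z = 1.0
raw = sum((-z) ** n * c[n] for n in range(10))
acc = euler_transform_sum(c, z, 10)

print(abs(raw - math.log(2.0)))   # ~0.05: the raw partial sum is poor
print(abs(acc - math.log(2.0)))   # ~1e-4: the transformed sum is far better
```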

This is just one example of the tricks that are available to turn one series into another, more useful, one. The important practical problem of how to find the numerical value of a series will be treated in Chapter 13.

REFERENCES

Infinite series are discussed quite thoroughly in numerous books on complex variable theory. See, for example, Whittaker and Watson (W5) Chapters II and III, or Apostol (A5) Chapters 12 and 13.

Several fairly elementary references are Hyslop (H13); Green (G7); and Stanaitis (S10). The monograph by Hirschman (H10) is also very readable, but it is at a somewhat more advanced level than the three preceding references.


Two standard treatises, with somewhat of a nineteenth-century flavor, are those by Bromwich (B8) and Knopp (K3).

Davis (D2) presents in a systematic manner many devices for summing series. Many elementary series, including those leading to Bernoulli numbers and Euler numbers, are given near the beginning of Dwight (D8).

PROBLEMS

Find the sums of the series 2-1 through 2-7:

2-1  1 + \frac{1}{4} - \frac{1}{16} - \frac{1}{64} + \frac{1}{256} + \frac{1}{1024} - - + + \cdots

2-2  \frac{1}{1\cdot 3} + \frac{1}{2\cdot 4} + \frac{1}{3\cdot 5} + \frac{1}{4\cdot 6} + \cdots

2-3  1 - \frac{1}{5\cdot 3^2} - \frac{1}{7\cdot 3^3} + \frac{1}{11\cdot 3^5} + \frac{1}{13\cdot 3^6} - - + + \cdots

2-4  \frac{1}{0!} + \frac{2}{1!} + \frac{3}{2!} + \cdots

2-5  1 + \frac{1}{9} + \frac{1}{25} + \frac{1}{49} + \cdots

2-6  1 + \frac{1}{9^2} + \frac{1}{25^2} + \frac{1}{49^2} + \cdots

2-7  1 - \frac{1}{4^2} + \frac{1}{9^2} - \frac{1}{16^2} + - \cdots

2-8  Evaluate in closed form the sum

f(\theta) = \sin\theta + \tfrac{1}{2}\sin 2\theta + \tfrac{1}{3}\sin 3\theta + \tfrac{1}{4}\sin 4\theta + \cdots

(you may assume 0 < θ < π for definiteness).

Do the following series converge or not?

2-9

!'1 !.~.~ 1·;·1·1

____ + 22 + + ...

9·7·25·1! 11·9·49·2! 13·11·81·3!

2-10  \frac{(1\cdot 3)^2}{1\cdot 1\cdot 1^2} + \frac{(1\cdot 3\cdot 5)^2}{4\cdot 2\cdot(1\cdot 2)^2} + \frac{(1\cdot 3\cdot 5\cdot 7)^2}{16\cdot 3\cdot(1\cdot 2\cdot 3)^2} + \frac{(1\cdot 3\cdot 5\cdot 7\cdot 9)^2}{64\cdot 4\cdot(1\cdot 2\cdot 3\cdot 4)^2} + \cdots


2-11  Evaluate the series

in closed form, by comparing with

\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - + \cdots

2-12  Consider the so-called Euler numbers, defined by

\sec z = \sum_{n=0}^{\infty} (-1)^n\,\frac{E_{2n}}{(2n)!}\,z^{2n}

(a) Show that

(E + 1)^k + (E - 1)^k = 0 \qquad (k\ \text{even})

Give E₂, E₄, E₆, E₈.

(b) Using the partial fraction expansion of the secant

k\pi\sec k\pi = 4k\left(\frac{1}{1 - 4k^2} - \frac{3}{9 - 4k^2} + \frac{5}{25 - 4k^2} - + \cdots\right)

evaluate:

(1)

(2)  1 - \frac{1}{3^{2n+1}} + \frac{1}{5^{2n+1}} - + \cdots

THREE

EVALUATION OF INTEGRALS

In the first three sections of this chapter we discuss various techniques for the analytic evaluation of definite integrals. These techniques include differentiation or integration with respect to a parameter, the exploitation of symmetries, and evaluation by contour integration. Tabulated integrals such as the gamma function, beta function, exponential integral, elliptic integrals, and so forth, are described in Section 3-4. Approximate expansions, especially asymptotic expansions, are treated in Section 3-5, and the saddle-point method is discussed in Section 3-6.

3-1 ELEMENTARY METHODS

We first review some useful elementary techniques for doing integrals.

The simplest device is changing variables. For example, the standard trick for evaluating

\int_0^{\infty} e^{-t^2}\,dt = \tfrac{1}{2}\sqrt{\pi}    (3-1)

by using polar coordinates is presumably familiar. From this result, by setting t = u\sqrt{a} we find

\int_0^{\infty} e^{-au^2}\,du = \frac{1}{2}\sqrt{\frac{\pi}{a}}    (3-2)


or by setting t² = u, we may deduce

\int_0^{\infty} \frac{e^{-u}}{\sqrt{u}}\,du = \sqrt{\pi}    (3-3)

How about \int_0^{\infty} e^{-t^4}\,dt? This is not so easy. Let

I_a = \int_0^{\infty} e^{-t^a}\,dt    (3-4)

Make the change of variable

t^a = u \qquad dt = \frac{1}{a}\,u^{(1/a)-1}\,du

Then

I_a = \frac{1}{a}\int_0^{\infty} e^{-u}\,u^{(1/a)-1}\,du

This does not look any easier, but one defines the gamma function Γ(z) by

\Gamma(z) = \int_0^{\infty} e^{-u}\,u^{z-1}\,du    (3-5)

so that

I_a = \frac{1}{a}\,\Gamma\!\left(\frac{1}{a}\right)    (3-6)

We shall say more about the gamma function in Section 3-4; note that (3-1) gives Γ(½) = √π.

Another useful technique is to introduce complex variables.

EXAMPLE

I = \int_0^{\infty} e^{-ax}\cos\lambda x\,dx = \mathrm{Re}\int_0^{\infty} e^{-ax}e^{i\lambda x}\,dx = \mathrm{Re}\,\frac{1}{a - i\lambda}

therefore,¹

I = \frac{a}{a^2 + \lambda^2}    (3-7)

¹ Note that our method seems to require that λ be real, but the result is in fact correct for λ anywhere in the strip |Im λ| < a. How did this happen?



This method gives us another integral at the same time, from the imaginary part:

\int_0^{\infty} e^{-ax}\sin\lambda x\,dx = \frac{\lambda}{a^2 + \lambda^2}    (3-8)

The method of integration by parts is very useful, and is presumably familiar by now.

Another useful trick is differentiation or integration with respect to a parameter.

EXAMPLE

I = \int_0^{\infty} e^{-ax}\cos\lambda x\;x\,dx

Let

f(a) = \int_0^{\infty} e^{-ax}\cos\lambda x\,dx = \frac{a}{a^2 + \lambda^2}

Then

I = -\frac{d}{da}\,f(a) = \frac{a^2 - \lambda^2}{(a^2 + \lambda^2)^2}

" We have assumed that the order of differentiation and integration can be reversed. For necessary and sufficient conditions see mathematics books, such as Whittaker and Watson (W5) Chapter IV, or Apostol (A5) Chapter 9. In physical applications it will nearly always work.

EXAMPLE

"fOO sin x 1= -dx

o x

(3-10)

Let

f(\alpha) = \int_0^{\infty} e^{-\alpha x}\,\frac{\sin x}{x}\,dx

so that I = f(0).

\frac{df(\alpha)}{d\alpha} = -\int_0^{\infty} e^{-\alpha x}\sin x\,dx = \frac{-1}{\alpha^2 + 1}

f(\alpha) = -\int \frac{d\alpha}{\alpha^2 + 1} = C - \tan^{-1}\alpha

But f(∞) = 0. Therefore C = π/2,

f(\alpha) = \frac{\pi}{2} - \tan^{-1}\alpha

and

I = f(0) = \frac{\pi}{2}    (3-11)
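Both the final result (3-11) and the intermediate formula f(α) = π/2 − tan⁻¹α can be checked numerically. The sketch below (illustrative code, not from the text) evaluates f(1) = ∫₀^∞ e^{−x} sin x / x dx by Simpson's rule and compares it with π/2 − tan⁻¹ 1 = π/4.

```python
import math

def f_alpha(alpha, upper=60.0, n=200_000):
    # Simpson's rule for ∫_0^upper e^{-αx} sin(x)/x dx; the integrand -> 1 as x -> 0,
    # and the truncated tail beyond x = 60 is of order e^{-60α}, i.e. negligible.
    def g(x):
        return math.exp(-alpha * x) * (math.sin(x) / x if x > 0 else 1.0)
    h = upper / n
    s = g(0.0) + g(upper)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

print(abs(f_alpha(1.0) - (math.pi / 2 - math.atan(1.0))) < 1e-9)   # True
```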


Sometimes one can combine several derivatives of an integral to form a differential equation.

EXAMPLE

f(\alpha) = \int_0^{\infty} \frac{e^{-\alpha x}}{1 + x^2}\,dx    (3-12)

f''(\alpha) + f(\alpha) = \int_0^{\infty} e^{-\alpha x}\,dx = \frac{1}{\alpha}

This equation is easily solved by the variation of parameters method discussed in Section 1-1. The result is

f(\alpha) = -\cos\alpha\int^{\alpha} \frac{\sin t}{t}\,dt + \sin\alpha\int^{\alpha} \frac{\cos t}{t}\,dt

But f(α) and all its derivatives vanish at α = ∞. Thus

f(\alpha) = \sin\alpha\int_{\infty}^{\alpha} \frac{\cos t}{t}\,dt - \cos\alpha\int_{\infty}^{\alpha} \frac{\sin t}{t}\,dt

The cosine-integral and sine-integral functions are defined by

\mathrm{Ci}\,x = \int_{\infty}^{x} \frac{\cos t}{t}\,dt \qquad \mathrm{Si}\,x = \int_0^{x} \frac{\sin t}{t}\,dt    (3-13)

Thus f(α) = sin α Ci α + cos α(π/2 − Si α). The functions Si x and Ci x are tabulated in Jahnke et al. (J3) and Abramowitz and Stegun (A1).

Finally, the useful integrals \int_0^{\infty} e^{-ax^2} x^n\,dx (n = 0, 1, 2, …) can be obtained from the first two,

\int_0^{\infty} e^{-ax^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{a}} \qquad \int_0^{\infty} e^{-ax^2} x\,dx = \frac{1}{2a}    (3-14)

by repeated differentiation.

We observe in passing that if a parameter in an integral also appears in the limit(s), the differentiation with respect to that parameter proceeds according to the following rule:

\frac{d}{d\alpha}\left[\int_{a(\alpha)}^{b(\alpha)} f(x, \alpha)\,dx\right] = \int_{a(\alpha)}^{b(\alpha)} \frac{\partial f(x, \alpha)}{\partial\alpha}\,dx + \frac{db}{d\alpha}\,f(b(\alpha), \alpha) - \frac{da}{d\alpha}\,f(a(\alpha), \alpha)    (3-15)


3-2 USE OF SYMMETRY ARGUMENTS

The evaluation of some integrals may be greatly simplified by exploiting the symmetries present in the problem. We shall illustrate the principles involved by means of some integrals over solid angle in three dimensions.

EXAMPLE

Consider the integral

I_1(\mathbf{k}) = \int \frac{d\Omega}{1 + \mathbf{k}\cdot\hat{r}} = \int_0^{2\pi} d\phi \int_{-1}^{+1} d(\cos\theta)\,\frac{1}{1 + \mathbf{k}\cdot\hat{r}}    (3-16)

where \hat{r} is the unit radius vector and (θ, φ) are conventional spherical polar coordinates:

\hat{r}_x = \sin\theta\cos\phi \qquad \hat{r}_y = \sin\theta\sin\phi \qquad \hat{r}_z = \cos\theta

Since the orientation of our coordinate system is arbitrary, we may choose the z axis along k and obtain (we assume k < 1)

I_1(k) = \int_0^{2\pi} d\phi \int_{-1}^{+1} \frac{d(\cos\theta)}{1 + k\cos\theta} = \frac{2\pi}{k}\ln\frac{1+k}{1-k}    (3-17)

The integrals

I_n(\mathbf{k}) = \int \frac{d\Omega}{(1 + \mathbf{k}\cdot\hat{r})^n}    (3-18)

may be obtained from I₁ by differentiation (replace the 1 in the denominator by α, differentiate with respect to α, and set α equal to 1 again). For example,

I_2(k) = \int \frac{d\Omega}{(1 + \mathbf{k}\cdot\hat{r})^2} = \frac{4\pi}{1 - k^2}    (3-19)
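Both (3-17) and (3-19) can be verified by direct one-dimensional quadrature over c = cos θ (illustrative code, not from the text):

```python
import math

def solid_angle_integral(k, power, n=20_000):
    # ∫ dΩ / (1 + k·r̂)^power = 2π ∫_{-1}^{1} dc / (1 + k c)^power  (z axis along k).
    h = 2.0 / n
    s = 0.0
    for i in range(n):                        # midpoint rule in c = cos θ
        c = -1.0 + (i + 0.5) * h
        s += h / (1.0 + k * c) ** power
    return 2.0 * math.pi * s

k = 0.5
I1 = 2 * math.pi / k * math.log((1 + k) / (1 - k))   # Eq. (3-17)
I2 = 4 * math.pi / (1 - k * k)                       # Eq. (3-19)
print(abs(solid_angle_integral(k, 1) - I1) < 1e-6)   # True
print(abs(solid_angle_integral(k, 2) - I2) < 1e-6)   # True
```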

Another example is

I_1(\mathbf{k}, \mathbf{a}) = \int \frac{\mathbf{a}\cdot\hat{r}\,d\Omega}{1 + \mathbf{k}\cdot\hat{r}}    (3-20)

This integral is complicated by the fact that two directions are given by the two vectors a and k, and we cannot choose our polar axis along both of them. However, the direction a is trivial, in that it may be "factored out." For, consider

\mathbf{J}(\mathbf{k}) = \int \frac{\hat{r}\,d\Omega}{1 + \mathbf{k}\cdot\hat{r}}    (3-21)


Clearly

I_1(\mathbf{k}, \mathbf{a}) = \mathbf{a}\cdot\mathbf{J}

Now J(k) is a vector and must point in the direction k, since no other direction is specified in the definition (3-21) of J. Therefore,

\mathbf{J}(\mathbf{k}) = A\mathbf{k}    (3-22)

To evaluate the scalar A, we dot k into both sides of (3-22) and obtain

A = \frac{1}{k^2}\,\mathbf{k}\cdot\mathbf{J}(\mathbf{k}) = \frac{1}{k^2}\int \frac{\mathbf{k}\cdot\hat{r}}{1 + \mathbf{k}\cdot\hat{r}}\,d\Omega = \frac{1}{k^2}\int d\Omega\left(1 - \frac{1}{1 + \mathbf{k}\cdot\hat{r}}\right) = \frac{4\pi}{k^2}\left(1 - \frac{1}{2k}\ln\frac{1+k}{1-k}\right)    (3-23)

Thus our original integral (3-20) is

I_1(\mathbf{k}, \mathbf{a}) = \frac{4\pi\,\mathbf{k}\cdot\mathbf{a}}{k^2}\left(1 - \frac{1}{2k}\ln\frac{1+k}{1-k}\right)

What about

I_2(\mathbf{k}, \mathbf{a}) = \int \frac{\mathbf{a}\cdot\hat{r}\,d\Omega}{(1 + \mathbf{k}\cdot\hat{r})^2}\,?

We could obtain Il(k~ a) from Il(k~ a) by replacing the I by Cl and differentiating as before, but a simpler method is to differentiate II (k) with respect to k. On the one hand,

\[ \frac{\partial I_1(k)}{\partial \mathbf{k}} = \frac{\partial}{\partial \mathbf{k}}\int \frac{d\Omega}{1+\mathbf{k}\cdot\hat{\mathbf{r}}} = -\int \frac{\hat{\mathbf{r}}\,d\Omega}{(1+\mathbf{k}\cdot\hat{\mathbf{r}})^2} \tag{3-24} \]

On the other hand, using (3-17),

\[ \frac{\partial I_1(k)}{\partial \mathbf{k}} = \frac{\mathbf{k}}{k}\,\frac{\partial I_1(k)}{\partial k} = \frac{2\pi\mathbf{k}}{k^2}\left[\frac{2}{1-k^2} - \frac{1}{k}\ln\frac{1+k}{1-k}\right] \tag{3-25} \]

Comparing (3-24) and (3-25) gives

\[ \int \frac{\hat{\mathbf{r}}\,d\Omega}{(1+\mathbf{k}\cdot\hat{\mathbf{r}})^2} = \frac{2\pi\mathbf{k}}{k^2}\left[-\frac{2}{1-k^2} + \frac{1}{k}\ln\frac{1+k}{1-k}\right] \]

and therefore

\[ I_2(\mathbf{k},\mathbf{a}) = \int \frac{\mathbf{a}\cdot\hat{\mathbf{r}}\,d\Omega}{(1+\mathbf{k}\cdot\hat{\mathbf{r}})^2} = \frac{2\pi\,\mathbf{k}\cdot\mathbf{a}}{k^2}\left[-\frac{2}{1-k^2} + \frac{1}{k}\ln\frac{1+k}{1-k}\right] \tag{3-26} \]
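A brute-force check of (3-26) (again our own sketch, not from the text): integrate over the sphere on a (φ, μ) grid, with a deliberately not parallel to k, and compare with the closed form.

```python
import math

def I2_quad(k, a, n=400):
    # Midpoint-rule evaluation of the solid-angle integral of
    # (a . rhat) / (1 + k . rhat)^2 over the unit sphere.
    total = 0.0
    for i in range(n):
        phi = 2 * math.pi * (i + 0.5) / n
        for j in range(n):
            mu = -1 + 2 * (j + 0.5) / n
            s = math.sqrt(1 - mu * mu)
            r = (s * math.cos(phi), s * math.sin(phi), mu)
            adotr = sum(x * y for x, y in zip(a, r))
            kdotr = sum(x * y for x, y in zip(k, r))
            total += adotr / (1 + kdotr) ** 2
    return total * (2 * math.pi / n) * (2.0 / n)

def I2_closed(k, a):
    # Formula (3-26).
    k2 = sum(x * x for x in k)
    kmag = math.sqrt(k2)
    kdota = sum(x * y for x, y in zip(k, a))
    return (2 * math.pi * kdota / k2) * (-2 / (1 - k2)
            + math.log((1 + kmag) / (1 - kmag)) / kmag)

k, a = (0.0, 0.0, 0.5), (1.0, 0.0, 2.0)
print(I2_quad(k, a), I2_closed(k, a))   # both about -11.8
```

The negative sign is expected: the weight (1 + k·r̂)⁻² favors the hemisphere opposite to k.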


Other examples of the use of symmetry arguments are the evaluation of integrals such as

\[ \Phi_1(\mathbf{a},\mathbf{b}) = \int d\Omega\,(\hat{\mathbf{r}}\cdot\mathbf{a})(\hat{\mathbf{r}}\cdot\mathbf{b}) \qquad \Phi_2(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}) = \int d\Omega\,(\hat{\mathbf{r}}\cdot\mathbf{a})(\hat{\mathbf{r}}\cdot\mathbf{b})(\hat{\mathbf{r}}\cdot\mathbf{c})(\hat{\mathbf{r}}\cdot\mathbf{d}) \tag{3-27} \]

etc.

To evaluate Φ₁, we observe that it is a scalar which is linear in both a and b. The only possibility is that Φ₁ = A a·b, where A is a number. To find A, let a and b both equal ẑ. Then

\[ \Phi_1(\hat{\mathbf{z}},\hat{\mathbf{z}}) = A = \int d\Omega\,(\hat{\mathbf{r}}\cdot\hat{\mathbf{z}})^2 = \int d\Omega\,\cos^2\theta = \frac{4\pi}{3} \]

Therefore Φ₁(a, b) = (4π/3) a·b.

The evaluation of Φ₂ proceeds similarly. Since Φ₂ is a scalar, linear in each of the four vectors a, b, c, d, as well as being invariant under any interchange of these vectors, it must have the form

\[ \Phi_2 = B(\mathbf{a}\cdot\mathbf{b}\;\mathbf{c}\cdot\mathbf{d} + \mathbf{a}\cdot\mathbf{c}\;\mathbf{b}\cdot\mathbf{d} + \mathbf{a}\cdot\mathbf{d}\;\mathbf{b}\cdot\mathbf{c}) \]

where B is a number. B is found by setting a = b = c = d = ẑ, so that

\[ \Phi_2(\hat{\mathbf{z}},\hat{\mathbf{z}},\hat{\mathbf{z}},\hat{\mathbf{z}}) = 3B = \int d\Omega\,(\hat{\mathbf{r}}\cdot\hat{\mathbf{z}})^4 = \int d\Omega\,\cos^4\theta = \frac{4\pi}{5} \]

Therefore B = 4π/15, and

\[ \Phi_2(\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d}) = \frac{4\pi}{15}\left(\mathbf{a}\cdot\mathbf{b}\;\mathbf{c}\cdot\mathbf{d} + \mathbf{a}\cdot\mathbf{c}\;\mathbf{b}\cdot\mathbf{d} + \mathbf{a}\cdot\mathbf{d}\;\mathbf{b}\cdot\mathbf{c}\right) \]
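The Φ₂ result can be checked the same way (our own sketch; the four vectors are chosen arbitrarily):

```python
import math

def phi2_quad(a, b, c, d, n=300):
    # Midpoint-rule solid-angle integral of (r.a)(r.b)(r.c)(r.d).
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    total = 0.0
    for i in range(n):
        phi = 2 * math.pi * (i + 0.5) / n
        for j in range(n):
            mu = -1 + 2 * (j + 0.5) / n
            s = math.sqrt(1 - mu * mu)
            r = (s * math.cos(phi), s * math.sin(phi), mu)
            total += dot(r, a) * dot(r, b) * dot(r, c) * dot(r, d)
    return total * (2 * math.pi / n) * (2.0 / n)

def phi2_closed(a, b, c, d):
    # The symmetry-argument formula with B = 4*pi/15.
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return (4 * math.pi / 15) * (dot(a, b) * dot(c, d)
                                 + dot(a, c) * dot(b, d)
                                 + dot(a, d) * dot(b, c))

a, b, c, d = (1, 0, 1), (0, 1, 1), (1, 1, 0), (1, 2, 3)
print(phi2_quad(a, b, c, d), phi2_closed(a, b, c, d))   # both about 10.05
```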

As a final example of the sort of device one can use to simplify integrals, we mention the identity

\[ \frac{1}{ab} = \int_0^1 \frac{du}{[au + b(1-u)]^2} \tag{3-28} \]

which Feynman has used to simplify the evaluation of integrals arising in quantum field theory.² As an application of (3-28), we evaluate the integral

\[ \psi(\mathbf{k},\mathbf{l}) = \int \frac{d\Omega}{(1+\mathbf{k}\cdot\hat{\mathbf{r}})(1+\mathbf{l}\cdot\hat{\mathbf{r}})} \tag{3-29} \]

² See the Appendix of R. P. Feynman (F3).


Use of (3-28) converts (3-29) to

\[ \psi(\mathbf{k},\mathbf{l}) = \int_0^1 du \int \frac{d\Omega}{\{1 + \hat{\mathbf{r}}\cdot[\mathbf{k}u + \mathbf{l}(1-u)]\}^2} \tag{3-30} \]

The solid-angle integral in (3-30) is just I₂[ku + l(1 − u)], as given by (3-19). Thus

\[ \psi(\mathbf{k},\mathbf{l}) = 4\pi \int_0^1 \frac{du}{1 - [\mathbf{k}u + \mathbf{l}(1-u)]^2} \]

This is an elementary integral, although rather tedious; the answer has the interesting form

\[ \psi(\mathbf{k},\mathbf{l}) = \frac{4\pi}{\sqrt{A^2 - B^2}}\,\cosh^{-1}\frac{A}{B} \tag{3-31} \]

where

\[ A = 1 - \mathbf{k}\cdot\mathbf{l} \qquad B = \sqrt{(1-k^2)(1-l^2)} \]

The proof of (3-31) is left as an exercise (Problem 3-37).
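While the proof is left as an exercise, the claimed closed form (3-31) can at least be verified numerically. In this sketch (ours, not from the text), both sides are evaluated for particular vectors k and l:

```python
import math

def psi_u_integral(k, l, n=2000):
    # psi(k, l) = 4*pi * integral_0^1 du / (1 - |k u + l (1-u)|^2),
    # evaluated with the composite Simpson rule.
    def f(u):
        w = [ki * u + li * (1 - u) for ki, li in zip(k, l)]
        return 1.0 / (1.0 - sum(x * x for x in w))
    h = 1.0 / n
    s = f(0) + f(1) + 4 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1)) \
        + 2 * sum(f(2*i*h) for i in range(1, n//2))
    return 4 * math.pi * s * h / 3

def psi_closed(k, l):
    # Closed form (3-31): psi = 4*pi * arccosh(A/B) / sqrt(A^2 - B^2).
    dot = sum(x * y for x, y in zip(k, l))
    k2 = sum(x * x for x in k)
    l2 = sum(x * x for x in l)
    A = 1 - dot
    B = math.sqrt((1 - k2) * (1 - l2))
    return 4 * math.pi * math.acosh(A / B) / math.sqrt(A * A - B * B)

k, l = (0.5, 0.0, 0.0), (0.0, 0.5, 0.0)
print(psi_u_integral(k, l), psi_closed(k, l))   # both about 15.11
```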

3-3 CONTOUR INTEGRATION

One of the most powerful means for evaluating definite integrals is provided by the theorem of residues from the theory of functions of a complex variable. We shall illustrate this method of contour integration by a number of examples in this section. Before reading this material, the student who does not know the theory of functions of a complex variable reasonably well should review (or learn) certain parts of this theory. These parts are· presented in the Appendix of this book to serve as an aid in the review (or as a guide to the study).

The theorem of residues [Appendix, Eq. (A-15)] tells us that if a function f(z) is regular in the region bounded by a closed path C, except for a finite number of poles and isolated essential singularities in the interior of C, then the integral of f(z) along the contour C is

\[ \oint_C f(z)\, dz = 2\pi i \sum \text{residues} \]

where Σ residues means the sum of the residues at all the poles and essential singularities inside C.

The residues at poles and isolated essential singularities may be found as follows.


If f(z) has a simple pole (pole of order one) at z = z₀, the residue is

\[ a_{-1} = \left[(z - z_0)f(z)\right]_{z=z_0} \tag{3-32} \]

If f(z) is written in the form f(z) = q(z)/p(z), where q(z) is regular and p(z) has a simple zero at z₀, the residue of f(z) at z₀ may be computed from

\[ a_{-1} = \frac{q(z_0)}{p'(z_0)} \tag{3-33} \]

If z₀ is a pole of order n, the residue is

\[ a_{-1} = \frac{1}{(n-1)!}\left\{\left(\frac{d}{dz}\right)^{n-1}\left[(z-z_0)^n f(z)\right]\right\}_{z=z_0} \tag{3-34} \]

If Zo is an isolated essential singularity, the residue is found from the Laurent expansion (Appendix, Section A-2, item 7).

We illustrate the method of contour integration by some examples.

EXAMPLE

\[ I = \int_0^\infty \frac{dx}{1+x^2} \tag{3-35} \]

Consider ∮ dz/(1 + z²) along the contour of Figure 3-1. Along the real axis the integral is 2I. Along the large semicircle in the upper half-plane we get zero, since

\[ z = Re^{i\theta} \qquad dz = iRe^{i\theta}\, d\theta \]

Figure 3-1 Contour for the integral (3-35)


The residue of 1/(1 + z²) = 1/[(z + i)(z − i)] at z = i is 1/(2i). Thus

\[ 2I = 2\pi i\left(\frac{1}{2i}\right) = \pi \qquad I = \frac{\pi}{2} \]

Note that an important part of the problem may be choosing the "return path" so that the contribution from it is simple (preferably zero).

EXAMPLE

Consider a resistance R and inductance L connected in series with a voltage V(t) (Figure 3-2). Suppose V(t) is a voltage impulse; that is, a very high pulse lasting for a very short time. As we shall see in Chapter 4, we can write to a good approximation

\[ V(t) = \frac{A}{2\pi}\int_{-\infty}^{\infty} e^{i\omega t}\, d\omega \]

Figure 3-2 Series R-L circuit

where A is the area under the curve V(t).

The current due to a voltage e^{iωt} is e^{iωt}/(R + iωL). Thus the current due to our voltage pulse is

\[ I(t) = \frac{A}{2\pi}\int_{-\infty}^{\infty} \frac{e^{i\omega t}\, d\omega}{R + i\omega L} \tag{3-36} \]

Let us evaluate this integral.

If t < 0, the integrand is exponentially small for Im ω → −∞, so that we may complete the contour by a large semicircle in the lower half ω-plane, along which the integral vanishes.³ The contour encloses no singularities, so that I(t) = 0.

If t > 0, we must complete the contour by a large semicircle in the upper half-plane. Then

\[ I(t) = 2\pi i\left(\frac{A}{2\pi}\right)\frac{e^{-Rt/L}}{iL} = \frac{A}{L}\, e^{-Rt/L} \]

³ A rigorous justification of this procedure is provided by Jordan's lemma; see Copson (C8) p. 137, for example.


EXAMPLE

\[ I = \int_0^\infty \frac{dx}{1+x^3} \tag{3-37} \]

The integrand is not even, so we cannot extend it to −∞. Consider the integral

\[ \oint \frac{\ln z\, dz}{1+z^3} \]

The integrand is many-valued; we may cut the plane as shown in Figure 3-3, and define ln z real (= ln x) just above the cut. Then ln z = ln x + 2πi below the cut, and integrating along the indicated contour,

\[ \oint \frac{\ln z\, dz}{1+z^3} = -2\pi i\, I \]

On the other hand, using the method of residues,

\[ \oint \frac{\ln z\, dz}{1+z^3} = -\frac{4\pi^2 i\sqrt{3}}{9} \]

Thus I = 2π√3/9.

When integrating around a branch point, as in this example, it is necessary to show that the integral on a vanishingly small circle around the branch point is zero. In this example, this part goes like r ln r, which approaches zero as r → 0.

Figure 3-3 Contour for the integral (3-37)


A more straightforward method for evaluating the integral (3-37) consists of evaluating the integral

\[ J = \oint \frac{dz}{1+z^3} \]

along a closed contour consisting of (i) the real axis from 0 to +∞, (ii) one-third of a large circle at |z| = ∞, and (iii) a return to the origin along the line arg z = 2π/3. Evaluating J along these pieces, we find

\[ J = (1 - e^{2\pi i/3})\,I \]

On the other hand, the integrand of J has a simple pole at z = e^{πi/3}; the residue theorem gives

\[ J = \frac{2\pi i}{3e^{2\pi i/3}} \]

Therefore

\[ I = \frac{2\pi i}{3}\,\frac{e^{-2\pi i/3}}{1 - e^{2\pi i/3}} = \frac{\pi}{3\sin\dfrac{\pi}{3}} = \frac{2\pi}{3\sqrt{3}} \]

as before.
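A numerical cross-check of this result (our own sketch): folding [1, ∞) onto [0, 1] by x → 1/x turns the integral into ∫₀¹ (1 + u)/(1 + u³) du = ∫₀¹ du/(1 − u + u²), which Simpson's rule handles easily.

```python
import math

def f(u):
    # (1 + u)/(1 + u^3) simplifies to 1/(1 - u + u^2).
    return 1.0 / (1.0 - u + u * u)

n = 2000
h = 1.0 / n
s = f(0) + f(1) + 4 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1)) \
    + 2 * sum(f(2*i*h) for i in range(1, n//2))
numeric = s * h / 3
exact = 2 * math.pi / (3 * math.sqrt(3))
print(numeric, exact)   # both about 1.2092
```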

EXAMPLE

\[ I = \int_0^\pi \frac{d\theta}{a + b\cos\theta} \qquad a > b > 0 \tag{3-38} \]

The integrand is even, so

\[ 2I = \int_0^{2\pi} \frac{d\theta}{a + b\cos\theta} \]

If we integrate along the unit circle, as in Figure 3-4,

\[ z = e^{i\theta} \qquad dz = ie^{i\theta}\, d\theta \qquad \cos\theta = \frac{e^{i\theta}+e^{-i\theta}}{2} = \frac{1}{2}\left(z+\frac{1}{z}\right) \tag{3-39} \]

Then

\[ 2I = \oint_C \frac{dz/(iz)}{a + (b/2)[z + (1/z)]} = \frac{2}{i}\oint_C \frac{dz}{bz^2 + 2az + b} \]


Figure 3-4 Contour for the integral (3-38)

The integrand has two poles, at the roots of the denominator. The product of the roots is b/b = 1; one is outside and one inside the unit circle. They are at

\[ z_{1,2} = -\frac{a}{b} \pm \sqrt{\frac{a^2}{b^2} - 1} \]

Then

\[ 2I = \frac{2}{i}\; 2\pi i\left(\text{residue at } z_1 = -\frac{a}{b} + \sqrt{\frac{a^2}{b^2}-1}\right) \]

\[ I = 2\pi\left(\frac{1}{2bz_1 + 2a}\right) = \frac{\pi}{\sqrt{a^2-b^2}} \]
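Again, a quick quadrature check of the closed form (our sketch, not from the text):

```python
import math

def I_quad(a, b, n=2000):
    # Simpson's rule for integral_0^pi dtheta / (a + b cos(theta)).
    f = lambda t: 1.0 / (a + b * math.cos(t))
    h = math.pi / n
    s = f(0) + f(math.pi) \
        + 4 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1)) \
        + 2 * sum(f(2*i*h) for i in range(1, n//2))
    return s * h / 3

print(I_quad(2.0, 1.0), math.pi / math.sqrt(3.0))   # both about 1.8138
```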

EXAMPLE

\[ I = \int_0^\infty \frac{\sqrt{x}\, dx}{1+x^2} \tag{3-40} \]

Consider the integral

\[ \oint \frac{\sqrt{z}\, dz}{1+z^2} \]

along the contour of Figure 3-5. We choose √z positive on top of the cut. Then

\[ \oint \frac{\sqrt{z}\, dz}{1+z^2} = 2I \]

But, using residues,

\[ \oint \frac{\sqrt{z}\, dz}{1+z^2} = 2\pi i\left(\frac{e^{i\pi/4}}{2i} + \frac{e^{3i\pi/4}}{-2i}\right) = \pi\sqrt{2} \]

Therefore I = π/√2.


Figure 3-5 Contour for the integral (3-40)
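As a check (our sketch): substituting x = u² and folding [1, ∞) back onto [0, 1] with u → 1/u gives I = 2 ∫₀¹ (1 + u²)/(1 + u⁴) du, a smooth integral on a finite interval.

```python
import math

f = lambda u: (1 + u * u) / (1 + u ** 4)
n = 2000
h = 1.0 / n
s = f(0) + f(1) + 4 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1)) \
    + 2 * sum(f(2*i*h) for i in range(1, n//2))
numeric = 2 * s * h / 3
print(numeric, math.pi / math.sqrt(2.0))   # both about 2.2214
```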

EXAMPLE

\[ I = \int_{-\infty}^{\infty} \frac{e^{ax}\, dx}{e^x + 1} \qquad (0 < a < 1) \tag{3-41} \]

We clearly wish to consider

\[ \oint \frac{e^{az}\, dz}{e^z + 1} \]

along some contour. One choice is shown in Figure 3-6, involving the familiar large semicircle. Note that we must give a a small imaginary part in order to be certain that the integral along the semicircle vanishes.

Figure 3-6 Possible contour for the integral (3-41)

Then

\[ I = 2\pi i \sum \text{residues} \]

There is an infinite number of poles. At z = iπ, the residue is −e^{iπa}; at z = 3iπ, the residue is −e^{3iπa}; etc. Thus

\[ I = -2\pi i\,\frac{e^{i\pi a}}{1-e^{2\pi i a}} = \frac{\pi}{\sin \pi a} \]

Now we can let Im a → 0.

Alternatively, we could use the contour shown in Figure 3-7. Along the real axis, we get I. Along Im z = 2π, the integrand is e^{2πia} times its value on the real axis and the direction of integration is reversed, so we get −e^{2πia}I. Thus

\[ (1 - e^{2\pi i a})\,I = 2\pi i\,(\text{residue at } z = \pi i) = -2\pi i\, e^{i\pi a} \]

\[ I = \frac{\pi}{\sin \pi a} \]

as before.

Figure 3-7 Another contour for the integral (3-41)
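Since the integrand decays exponentially in both directions, a truncated quadrature confirms π/sin πa directly (our sketch, not from the text):

```python
import math

def I_quad(a, L=60.0, n=4000):
    # Simpson's rule on [-L, L]; the integrand decays like e^(a x) on the
    # left and e^(-(1-a) x) on the right, so the truncation error is tiny.
    f = lambda x: math.exp(a * x) / (math.exp(x) + 1)
    h = 2 * L / n
    s = f(-L) + f(L) \
        + 4 * sum(f(-L + (2*i - 1) * h) for i in range(1, n//2 + 1)) \
        + 2 * sum(f(-L + 2*i*h) for i in range(1, n//2))
    return s * h / 3

for a in (0.3, 0.5, 0.7):
    print(I_quad(a), math.pi / math.sin(math.pi * a))
```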


Figure 3-8 Contour for the integral (3-42)

EXAMPLE

\[ I = \int_{-1}^{+1} \frac{dx}{\sqrt{1-x^2}\,(1+x^2)} \tag{3-42} \]

We consider

\[ \oint \frac{dz}{\sqrt{1-z^2}\,(1+z^2)} \]

along the contour of Figure 3-8. On the top side of the cut we get I, and we get another I from the bottom side. Therefore,

\[ 2I = 2\pi i\left[\frac{1}{2i\sqrt{2}} + \frac{1}{2i\sqrt{2}}\right] = \pi\sqrt{2} \qquad I = \frac{\pi}{\sqrt{2}} \]

EXAMPLE

Consider the integral

\[ I = \oint \frac{f(z)}{\sin \pi z}\, dz \tag{3-43} \]

around the contour of Figure 3-9, where f(z) has several isolated singularities (indicated by crosses in the figure) and goes to zero at least as fast as |z|⁻¹ as |z| → ∞.

On the one hand, the integral I may be evaluated by summing the residues at the zeros of sin πz, indicated by dots in Figure 3-9. The result is

\[ I = 2\pi i \sum_{n=-\infty}^{\infty} \frac{(-1)^n}{\pi}\, f(n) \tag{3-44} \]


Figure 3-9 Original contour for the integral (3-43)

On the other hand, the integral along the contour of Figure 3-9 is clearly the same as that along the contour of Figure 3-10, since the integral around the circle at |z| = ∞ vanishes. The singularities enclosed by the contour of Figure 3-10 are now the singularities of f(z); if their locations and residues are denoted by z_k and R_k, respectively, then

\[ I = -2\pi i \sum_k \frac{R_k}{\sin \pi z_k} \]

Comparison with (3-44) gives the summation formula

\[ \sum_{n=-\infty}^{\infty} (-1)^n f(n) = -\pi \sum_k \frac{R_k}{\sin \pi z_k} \tag{3-45} \]

Figure 3-10 Deformed contour for the integral (3-43)


This device, which converts an infinite sum into a contour integral which is subsequently deformed, is known as a Sommerfeld–Watson transformation.⁴

As an example, consider the series

\[ S(x) = \frac{\sin x}{a^2+1} - \frac{2\sin 2x}{a^2+4} + \frac{3\sin 3x}{a^2+9} - + \cdots \]

This is just the left side of equation (3-45), if we set

\[ f(z) = -\frac{1}{2}\,\frac{z\sin xz}{a^2+z^2} \]

(Note that we must require |x| < π in order that the large circle in Figure 3-10 make no contribution to the integral.) Now the locations and residues of the poles of f(z) are

\[ z_1 = ia \qquad R_1 = -\frac{i}{4}\sinh ax \]

\[ z_2 = -ia \qquad R_2 = \frac{i}{4}\sinh ax \]

Therefore, we obtain from formula (3-45)

\[ S(x) = -\pi\left[\frac{-\dfrac{i}{4}\sinh ax}{i\sinh \pi a} + \frac{\dfrac{i}{4}\sinh ax}{-i\sinh \pi a}\right] = \frac{\pi}{2}\,\frac{\sinh ax}{\sinh \pi a} \]

As we remarked above, this result is valid only if |x| < π. For x outside this range, we simply observe that S(x), from its definition, is periodic in x with period 2π.
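A direct numerical check of this Sommerfeld–Watson result (our own sketch): the series converges only conditionally, so many terms are needed, but the partial sums do settle on the closed form.

```python
import math

def S_partial(a, x, N):
    # Partial sum of S(x) = sum_{n>=1} (-1)^(n+1) n sin(n x) / (a^2 + n^2).
    return sum((-1) ** (n + 1) * n * math.sin(n * x) / (a * a + n * n)
               for n in range(1, N + 1))

def S_closed(a, x):
    # The summation-formula result (valid for |x| < pi).
    return (math.pi / 2) * math.sinh(a * x) / math.sinh(math.pi * a)

print(S_partial(1.0, 1.0, 200000), S_closed(1.0, 1.0))   # both about 0.1598
```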

3-4 TABULATED INTEGRALS

We first mention (again) the gamma function, defined for Re z > 0 by the integral

\[ \Gamma(z) = \int_0^\infty x^{z-1} e^{-x}\, dx \tag{3-46} \]

⁴ See Sommerfeld (S9) Appendix to Chapter VI, or G. N. Watson (W1).


Integration by parts gives

\[ \Gamma(z) = (z-1)\Gamma(z-1) \qquad (\operatorname{Re} z > 1) \tag{3-47} \]

Thus, from Γ(1) = 1, we obtain

\[ \Gamma(2) = 1 \qquad \Gamma(3) = 2 \qquad \Gamma(4) = 6 \qquad \ldots \qquad \Gamma(n) = (n-1)! \tag{3-48} \]

The gamma function may be analytically continued into the entire complex plane by means of the recursion relation (3-47), except for simple poles at z = 0, −1, −2, ....

A related integral of some interest is encountered if we consider

\[ \Gamma(r)\Gamma(s) = \int_0^\infty e^{-x} x^{r-1}\, dx \int_0^\infty e^{-y} y^{s-1}\, dy \]

Let x + y = u. Let x = ut. Then

\[ \Gamma(r)\Gamma(s) = \int_0^\infty e^{-u} u^{r+s-1}\, du \int_0^1 dt\; t^{r-1}(1-t)^{s-1} = \Gamma(r+s)\,B(r,s) \]

where B(r, s) is the beta function

\[ B(r,s) = \frac{\Gamma(r)\Gamma(s)}{\Gamma(r+s)} = \int_0^1 x^{r-1}(1-x)^{s-1}\, dx \tag{3-49} \]

This integral representation for B(r, s) is obviously valid only for Re r > 0, Re s > 0; the first equality in (3-49) defines B(r, s) for all r, s in the complex plane.
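The two sides of (3-49) are easy to compare numerically (our sketch; `math.gamma` supplies the gamma function):

```python
import math

def beta_quad(r, s, n=2000):
    # Simpson's rule for integral_0^1 x^(r-1) (1-x)^(s-1) dx;
    # take r, s > 1 so the integrand vanishes at the endpoints.
    f = lambda x: x ** (r - 1) * (1 - x) ** (s - 1)
    h = 1.0 / n
    t = f(0.0) + f(1.0) \
        + 4 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1)) \
        + 2 * sum(f(2*i*h) for i in range(1, n//2))
    return t * h / 3

def beta_gamma(r, s):
    return math.gamma(r) * math.gamma(s) / math.gamma(r + s)

print(beta_quad(2.5, 3.5), beta_gamma(2.5, 3.5))   # both about 0.03682
```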

An interesting special case of this relation is

\[ \Gamma(z)\Gamma(1-z) = \Gamma(1)\,B(z, 1-z) \]

Let x = t/(1 + t). Then

\[ \Gamma(z)\Gamma(1-z) = \int_0^\infty \frac{t^{z-1}\, dt}{1+t} \]

This integral can be done by using the contour shown in Figure 3-11.⁵ The result is

\[ \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z} \tag{3-50} \]

⁵ Alternatively, we may make a simple change of variable in Example (3-41).



Figure 3-11 Contour for the integral ∫₀^∞ [t^{z−1}/(1 + t)] dt

Our derivation has only been valid for 0 < Re z < 1, but both sides of (3-50) are analytic functions of z (except at z = 0, ±1, ±2, ...), so we can extend the result into the entire plane.

Another integral which has been given a name is the so-called exponential integral,

\[ \operatorname{Ei}(x) = \int_{-\infty}^{x} \frac{e^t}{t}\, dt \tag{3-51} \]

This definition is conventionally supplemented by cutting the x-plane along the positive real axis. Thus Ei(x) is well defined for negative real x, but for positive real x we must distinguish between Ei(x + iε) and Ei(x − iε).

Two functions related to Ei(x) are the sine and cosine integrals:

\[ \operatorname{Si} x = \int_0^x \frac{\sin t}{t}\, dt \tag{3-52} \]

\[ \operatorname{Ci} x = \int_{\infty}^{x} \frac{\cos t}{t}\, dt \tag{3-53} \]

The error function is defined as

\[ \operatorname{erf} x = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\, dt \tag{3-54} \]

The associated trigonometric integrals are known as Fresnel integrals⁶:

\[ C(x) = \int_0^x \cos\left(\frac{\pi t^2}{2}\right) dt \qquad S(x) = \int_0^x \sin\left(\frac{\pi t^2}{2}\right) dt \tag{3-55} \]

⁶ Conventions differ; our definitions (3-55) agree with Magnus et al. (M1) but not with Erdelyi et al. (E5).


Another class of integral which is tabulated in many places is the class of elliptic integrals. These are integrals of the form

\[ \int dx\; \frac{A(x) + B(x)\sqrt{S(x)}}{C(x) + D(x)\sqrt{S(x)}} \tag{3-56} \]

where A, B, C, D are polynomials and S is a polynomial of the third or fourth degree (not a perfect square, of course!).

We shall not go into the general theory of such integrals, but just make a few remarks, which the student can verify.⁷ In the first place,

\[ \frac{A + B\sqrt{S}}{C + D\sqrt{S}} = E + \frac{F}{\sqrt{S}} \tag{3-57} \]

where E and F are rational functions (ratios of polynomials). By decomposing F in partial fractions, we see that the only nonelementary integrals we need are

\[ J_n = \int \frac{x^n\, dx}{\sqrt{S}} \qquad H_n = \int \frac{dx}{(x-c)^n\sqrt{S}} \tag{3-58} \]

But we can evaluate all the J_n from J₀, J₁, and J₂, and all the H_n from H₁, J₀, J₁, and J₂. Thus we only need

\[ J_0 = \int \frac{dx}{\sqrt{S}} \qquad J_1 = \int \frac{x\, dx}{\sqrt{S}} \qquad J_2 = \int \frac{x^2\, dx}{\sqrt{S}} \qquad H_1 = \int \frac{dx}{(x-c)\sqrt{S}} \tag{3-59} \]

There are various standard forms for S; we shall consider only Legendre's, namely, S = (1 − x²)(1 − k²x²). Then the integral

\[ F = \int_0^x \frac{dx}{\sqrt{(1-x^2)(1-k^2x^2)}} \tag{3-60} \]

is called the Legendre elliptic integral of the first kind. Generally, one sets x = sin φ, and defines

\[ F(\phi, k) = \int_0^\phi \frac{d\phi}{\sqrt{1-k^2\sin^2\phi}} \tag{3-61} \]

J₁ is an "elementary" integral, if we set x² = u.

⁷ See, for example, Abramowitz and Stegun (A1) Chapter 17 for a more complete and useful discussion.


Instead of J₂, one takes for a standard integral the Legendre elliptic integral of the second kind,

\[ E = \int_0^x \sqrt{\frac{1-k^2x^2}{1-x^2}}\; dx \tag{3-62} \]

Again one generally sets x = sin φ, and tabulates

\[ E(\phi, k) = \int_0^\phi \sqrt{1-k^2\sin^2\phi}\; d\phi \tag{3-63} \]

The integral H₁ is hard. One defines the Legendre elliptic integral of the third kind,

\[ \Pi(\phi, n, k) = \int_0^x \frac{dx}{(1+nx^2)\sqrt{(1-x^2)(1-k^2x^2)}} = \int_0^\phi \frac{d\phi}{(1+n\sin^2\phi)\sqrt{1-k^2\sin^2\phi}} \tag{3-64} \]

When φ = π/2, we have the complete elliptic integrals,

\[ K(k) = F\left(\frac{\pi}{2}, k\right) = \int_0^{\pi/2} \frac{d\phi}{\sqrt{1-k^2\sin^2\phi}} \tag{3-65} \]

\[ E(k) = E\left(\frac{\pi}{2}, k\right) = \int_0^{\pi/2} \sqrt{1-k^2\sin^2\phi}\; d\phi \tag{3-66} \]

\[ \Pi(n, k) = \Pi\left(\frac{\pi}{2}, n, k\right) = \int_0^{\pi/2} \frac{d\phi}{(1+n\sin^2\phi)\sqrt{1-k^2\sin^2\phi}} \tag{3-67} \]

The formulas for reducing S to a standard form are given in various places, for example, Magnus et al. (M1) Section 10-1 or Abramowitz and Stegun (A1) Chapter 17. Elliptic integrals occur, among other places, in

1. The motion of the simple pendulum

2. Finding inductances of coils

3. Finding lengths of conic sections

4. Calculating the solid angles of circles seen obliquely
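Item 1 can be made concrete. A standard result (not derived in this chapter) gives the period of a pendulum of length L released from rest at amplitude θ₀ as T = 4√(L/g) K(sin(θ₀/2)), with K the complete elliptic integral (3-65). The sketch below (ours) computes K by the arithmetic-geometric mean, a classical identity: K(k) = π / (2 agm(1, √(1 − k²))).

```python
import math

def K(k):
    # Complete elliptic integral (3-65) via the arithmetic-geometric mean.
    a, b = 1.0, math.sqrt(1 - k * k)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return math.pi / (2 * a)

def pendulum_period(length, theta0, g=9.81):
    # Exact period for amplitude theta0 (radians); the small-angle limit
    # K(0) = pi/2 recovers the familiar T = 2*pi*sqrt(L/g).
    return 4 * math.sqrt(length / g) * K(math.sin(theta0 / 2))

print(pendulum_period(1.0, 0.01))          # close to 2*pi*sqrt(1/9.81) = 2.006
print(pendulum_period(1.0, math.pi / 2))   # about 18% longer
```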

A large number of integrals are related to tabulated functions; these integral representations are sometimes very useful. For example,

\[ \int_0^1 \frac{\cos tx\; dt}{\sqrt{1-t^2}} = \frac{\pi}{2}\, J_0(x) \tag{3-69} \]

\[ \int_0^\theta \frac{\cos\left(n+\frac{1}{2}\right)\phi}{\sqrt{\cos\phi - \cos\theta}}\; d\phi = \frac{\pi}{\sqrt{2}}\, P_n(\cos\theta) \tag{3-70} \]


J₀(x) is a suitably normalized solution of Bessel's equation with m = 0, and is known as a Bessel function; P_n is a Legendre polynomial. We shall study such integral representations of the "special functions" in greater detail in Chapter 7.
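Representation (3-69) can be tested without any special-function library (our sketch): substituting t = sin u removes the endpoint singularity, and J₀ is computed from its power series Σ (−1)^m (x/2)^{2m}/(m!)².

```python
import math

def J0_series(x, terms=40):
    # Power series for the Bessel function J0.
    total, term = 0.0, 1.0
    for m in range(terms):
        total += term
        term *= -(x / 2) ** 2 / ((m + 1) ** 2)
    return total

def lhs(x, n=2000):
    # integral_0^1 cos(t x)/sqrt(1-t^2) dt with t = sin(u):
    # integral_0^{pi/2} cos(x sin u) du, by Simpson's rule.
    f = lambda u: math.cos(x * math.sin(u))
    h = (math.pi / 2) / n
    s = f(0) + f(math.pi / 2) \
        + 4 * sum(f((2*i - 1) * h) for i in range(1, n//2 + 1)) \
        + 2 * sum(f(2*i*h) for i in range(1, n//2))
    return s * h / 3

print(lhs(1.5), (math.pi / 2) * J0_series(1.5))   # both about 0.8040
```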

3-5 APPROXIMATE EXPANSIONS

One can often obtain a useful expression for an integral by expanding the integrand in some sort of series.

EXAMPLE

\[ \operatorname{erf} x = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\, dt = \frac{2}{\sqrt{\pi}}\int_0^x \left(1 - t^2 + \frac{t^4}{2!} - \frac{t^6}{3!} + - \cdots\right) dt \tag{3-71} \]

This series converges for all x, but is only useful for small x (x ≲ 1).

Integrations by parts are often useful.

EXAMPLE

Suppose we want erf x for large x. As x → ∞, erf x → 1. Let us compute the difference from 1:

\[ 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}}\int_x^\infty e^{-t^2}\, dt \]

We perform a sequence of integrations by parts:

\[ \int_x^\infty e^{-t^2}\, dt = \frac{e^{-x^2}}{2x} - \int_x^\infty \frac{e^{-t^2}\, dt}{2t^2} \]

etc.


The result after n integrations by parts is (n > 1):

\[ \operatorname{erf} x = 1 - \frac{2}{\sqrt{\pi}}\, e^{-x^2}\left[\frac{1}{2x} - \frac{1}{2^2 x^3} + \frac{1\cdot3}{2^3 x^5} - \frac{1\cdot3\cdot5}{2^4 x^7} + - \cdots + (-1)^{n-1}\,\frac{1\cdot3\cdot5\cdots(2n-3)}{2^n x^{2n-1}}\right] + (-1)^{n-1}\,\frac{1\cdot3\cdot5\cdots(2n-1)}{2^n}\,\frac{2}{\sqrt{\pi}}\int_x^\infty \frac{e^{-t^2}}{t^{2n}}\, dt \tag{3-72} \]

The terms in the brackets do not form the beginning of a convergent infinite series. The series does not converge for any x, since the individual terms eventually increase as n increases. Nevertheless, this expression with a finite number of terms is very useful for large x.

The expression (3-72) is exact if we include the "remainder," that is, the last term, containing the integral. This remainder alternates in sign as n increases, which means that the error after n terms in the series is smaller in magnitude than the next term. Thus the accuracy of the approximate expression in the brackets is highest if we stop one term before the smallest.
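This behavior is easy to see numerically (our sketch; `math.erfc` supplies the exact 1 − erf x). The relative error of the bracketed approximation in (3-72) first decreases with the number of terms, then grows without bound:

```python
import math

def erfc_asymptotic(x, n):
    # (2/sqrt(pi)) e^(-x^2) times the first n bracket terms of (3-72):
    # 1/(2x) - 1/(2^2 x^3) + 1*3/(2^3 x^5) - ...
    total, term = 0.0, 1.0 / (2 * x)
    for j in range(1, n + 1):
        total += term
        term *= -(2 * j - 1) / (2 * x * x)
    return (2 / math.sqrt(math.pi)) * math.exp(-x * x) * total

x = 2.0
exact = math.erfc(x)
errors = [abs(erfc_asymptotic(x, n) - exact) / exact for n in range(1, 15)]
print(errors)   # decreases at first, then grows: an asymptotic series
```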

The series in brackets in (3-72) is an example of an asymptotic series if continued indefinitely.⁸ The precise definition of an asymptotic series is the following:

\[ S(z) = c_0 + \frac{c_1}{z} + \frac{c_2}{z^2} + \cdots \tag{3-73} \]

is an asymptotic series expansion of f(z) [written f(z) ∼ S(z), where ∼ reads "is asymptotically equal to"] provided, for any n, the error involved in terminating the series with the term c_n z⁻ⁿ goes to zero faster than z⁻ⁿ as |z| goes to ∞ for some range of arg z; that is,

\[ \lim_{|z|\to\infty} z^n\left[f(z) - S_n(z)\right] = 0 \tag{3-74} \]

for arg z in the given interval; S_n(z) means c₀ + c₁/z + ⋯ + c_n/zⁿ.

[A convergent series approaches f(z) as n → ∞ for given z, whereas an asymptotic series approaches f(z) as z → ∞ for given n.]

From the definition, it is easy to show that asymptotic series may be added, multiplied, and integrated to obtain asymptotic series for the sum, product, and integral of the corresponding functions. Also, the asymptotic expansion of a given function f(z) is unique, but the reverse is not true: an asymptotic series does not specify a function f(z) uniquely [see Whittaker and Watson (W5) Chapter VIII or Jeffreys and Jeffreys (J4) Chapter 17].

⁸ The idea is probably due to Poincaré; see reference (P2) Chapter VIII.


We have not proved that our series in (3-72) is an asymptotic series, but that is not hard; we leave it to the interested reader.

Another example:

\[ \operatorname{Ei}(-x) = \int_{-\infty}^{-x} \frac{e^t}{t}\, dt = -\int_x^\infty \frac{e^{-t}}{t}\, dt \qquad (x > 0) \]

\[ = -\frac{e^{-x}}{x} + \int_x^\infty \frac{e^{-t}}{t^2}\, dt = -\frac{e^{-x}}{x} + \frac{e^{-x}}{x^2} - 2\int_x^\infty \frac{e^{-t}}{t^3}\, dt \]

Continuing in this way, we obtain

\[ -\operatorname{Ei}(-x) = \frac{e^{-x}}{x}\left[1 - \frac{1}{x} + \frac{2!}{x^2} - \frac{3!}{x^3} + - \cdots + \frac{(-1)^n n!}{x^n}\right] + (-1)^{n+1}(n+1)!\int_x^\infty \frac{e^{-t}}{t^{n+2}}\, dt \tag{3-75} \]

We can use this result in two ways:

1. As an asymptotic series for Ei(−x):

\[ -\operatorname{Ei}(-x) \sim \frac{e^{-x}}{x}\left(1 - \frac{1}{x} + \frac{2!}{x^2} - \frac{3!}{x^3} + - \cdots\right) \tag{3-76} \]

2. As an exact expression for computing certain integrals, assuming we have available a table of Ei(−x). From (3-75),

\[ \int_x^\infty \frac{e^{-t}}{t^n}\, dt = \frac{(-1)^n}{(n-1)!}\operatorname{Ei}(-x) + \frac{(-1)^n\, e^{-x}}{(n-1)!\, x}\left[1 - \frac{1}{x} + \frac{2!}{x^2} - + \cdots + (-1)^n\,\frac{(n-2)!}{x^{n-2}}\right] \tag{3-77} \]
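The asymptotic series (3-76) behaves just like the erf series above. In this sketch (ours), the reference value of −Ei(−x) = ∫ₓ^∞ e⁻ᵗ/t dt is obtained by truncated quadrature:

```python
import math

def E1_quad(x, L=45.0, n=4000):
    # -Ei(-x) = integral_x^infinity e^(-t)/t dt, truncated at t = x + L;
    # the neglected tail is of order e^(-(x+L)), utterly negligible here.
    f = lambda t: math.exp(-t) / t
    h = L / n
    s = f(x) + f(x + L) \
        + 4 * sum(f(x + (2*i - 1) * h) for i in range(1, n//2 + 1)) \
        + 2 * sum(f(x + 2*i*h) for i in range(1, n//2))
    return s * h / 3

def E1_asymptotic(x, n):
    # (e^-x / x) * (1 - 1/x + 2!/x^2 - ... + (-1)^n n!/x^n), from (3-76).
    total, term = 0.0, 1.0
    for m in range(n + 1):
        total += term
        term *= -(m + 1) / x
    return math.exp(-x) / x * total

print(E1_quad(10.0), E1_asymptotic(10.0, 5))   # both about 4.16e-6
```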

Other important methods for approximating integrals are known as saddle-point methods, for reasons which will soon be clear. The most important of these is the method of steepest descent. We shall illustrate the method by looking for an approximation to Γ(x + 1) for x large, positive, and real.

\[ \Gamma(x+1) = \int_0^\infty e^{-t} t^x\, dt \tag{3-78} \]

For large x, the integrand looks as shown in Figure 3-12. To find the maximum, we have

\[ 0 = \frac{d}{dt}\left(e^{-t}t^x\right) = e^{-t}\left[-t^x + xt^{x-1}\right] \qquad t = x \]

We now approximate the integrand in a particular way, by writing it in an exponential form e^{f(t)} and using a Taylor series approximation for f(t) near its maximum. (A Taylor expansion of the integrand itself, for example, would not be useful if only a few terms were kept.) The integrand of (3-78) is

\[ e^{f(t)} = t^x e^{-t} = e^{x\ln t - t} \]

so that

\[ f(t) = x\ln t - t \qquad f'(t) = \frac{x}{t} - 1 \qquad f'(t) = 0 \text{ for } t = x \qquad f''(t) = -\frac{x}{t^2} \]

Then, expanding about t = x,

\[ \Gamma(x+1) \approx \int_0^\infty \exp\left[x\ln x - x - \frac{1}{2x}(t-x)^2\right] dt \approx e^{x\ln x - x}\int_{-\infty}^{\infty}\exp\left[-\frac{(t-x)^2}{2x}\right] dt \]

(If x ≫ 1, we make a very small error by extending the integral to −∞.) Doing the t integral gives

\[ \Gamma(x+1) \approx \sqrt{2\pi x}\; x^x e^{-x} \]

Figure 3-12 Graph of t^x e^{-t} for large x


This is the first term of Stirling's formula, which is the asymptotic expansion of n! = Γ(n + 1):

\[ n! \sim \sqrt{2\pi n}\; n^n e^{-n}\left(1 + \frac{1}{12n} + \cdots\right) \tag{3-79} \]

We shall return to this asymptotic series at the end of this chapter.
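Even the first term of (3-79) is remarkably good, and the 1/(12n) correction better still. A quick comparison (our sketch) against exact factorials:

```python
import math

def stirling(n, correction=False):
    # First term of (3-79); optionally include the 1/(12 n) correction.
    s = math.sqrt(2 * math.pi * n) * n ** n * math.exp(-n)
    return s * (1 + 1 / (12 * n)) if correction else s

for n in (5, 10, 20):
    exact = math.factorial(n)
    print(n, stirling(n) / exact, stirling(n, True) / exact)
```

The printed ratios approach 1 as n grows, with the corrected form roughly two orders of magnitude closer.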

The method of steepest descents is applicable, in general, to integrals of the form

\[ I(\alpha) = \int_C e^{\alpha f(z)}\, dz \qquad (\alpha \text{ large and positive}) \tag{3-80} \]

where C is a path in the complex plane such that the ends of the path do not contribute significantly to the integral. [This method usually gives the first term in an asymptotic expansion of I(α), valid for large α.]

If f(z) = u + iv, we would expect most of the contribution to I(α) to come from parts of the contour where u is largest. The idea of the method of steepest descents is to deform the contour C so that the region of large u is compressed into as short a space as possible.

To see how to deform the contour it is necessary to examine the behavior of the functions u and v in more or less detail for the example at hand. However, some general features apply to any regular function f(z). Neither u nor v can have a maximum or minimum except at a singularity, because ∇²u = 0 and ∇²v = 0.⁹ For example, if ∂²u/∂x² > 0, then ∂²u/∂y² < 0, so that a "flat spot" of the surface u(x, y), where

\[ \frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = 0 \tag{3-81} \]

must be a "saddle point," where the surface looks like a saddle or a mountain pass, as shown in Figure 3-13.

By the Cauchy–Riemann equations (A-8), we see that (3-81) implies ∂v/∂x = 0 and ∂v/∂y = 0, so that f′(z) = 0. Thus a saddle point of the function u(x, y) is also a saddle point of v(x, y), as well as a point where f′(z) = 0.

Near the saddle point z₀,

\[ f(z) \approx f(z_0) + \tfrac{1}{2} f''(z_0)(z - z_0)^2 \tag{3-82} \]

⁹ These follow immediately from the Cauchy–Riemann equations, equations (A-8) of the Appendix.

Figure 3-13 Topography of the surface u = Re f(z) near the saddle point z₀, for a typical function f(z). The heavy solid curves follow the centers of the ridges and valleys from the saddle point, and the dashed curves follow level contours, u = u(x₀, y₀) = constant. The curve AA′ is the path of steepest descent


Figure 3-14 Alternative possibilities for steepest descent contours (φ = π/4 and φ = −3π/4)

Let f″(z₀) = ρe^{iθ} and z − z₀ = se^{iφ}. Then

\[ u \approx u(x_0,y_0) + \tfrac{1}{2}\rho s^2 \cos(\theta+2\phi) \qquad v \approx v(x_0,y_0) + \tfrac{1}{2}\rho s^2 \sin(\theta+2\phi) \]

We see that on the surface u(x, y) paths of steepest descent from the saddle point into the valleys start out in the directions where cos(θ + 2φ) = −1. In these directions, v = v(x₀, y₀) is constant. As we go farther from z₀, the steepest path follows the direction −grad u, which is normal to the contours of equal u. Thus the path follows a curve v(x, y) = constant = v(x₀, y₀), so that the factor e^{iαv} in the original integrand of (3-80) will not produce destructive oscillations. We therefore deform our original contour C so as to go over the pass on this path of steepest descent. Using the approximation (3-82) in the integral (3-80) gives

\[ I(\alpha) \approx e^{\alpha f(z_0)}\, e^{i\phi}\int_{-\infty}^{\infty} e^{-\alpha\rho s^2/2}\, ds = \sqrt{\frac{2\pi}{\alpha\rho}}\; e^{\alpha f(z_0)}\, e^{i\phi} \tag{3-83} \]

where φ has one of the values −θ/2 ± π/2, depending on which direction we travel over the pass. (φ is just the inclination of the path at the saddle point.) For example, if θ = π/2 we have the two alternatives of Figure 3-14. The first is much more likely to be correct, but to be sure we should examine the "mountain range" we are passing through. If it looks anything like that shown in part (a) of Figure 3-15, then φ = π/4 is indeed correct. On the other hand, if the range is like that shown in part (b) of Figure 3-15, the second alternative, φ = −3π/4, is the right one.

As an example of the method of steepest descents with a more general contour, we shall work out the asymptotic approximation to Γ(z + 1) for complex z, without making any simplifications based on our previous work with Γ(x + 1).

\[ \Gamma(z) = \int_0^\infty e^{-t} t^{z-1}\, dt \tag{3-84} \]

Let

\[ z = |z|e^{i\beta} \qquad \Gamma(z+1) = \int_0^\infty e^{z\ln t - t}\, dt = \int_0^\infty e^{|z|\, f(t)}\, dt \tag{3-85} \]

Then

\[ f(t) = \left(\ln t - \frac{t}{z}\right)e^{i\beta} \qquad f'(t) = \left(\frac{1}{t} - \frac{1}{z}\right)e^{i\beta} \qquad f''(t) = -\frac{e^{i\beta}}{t^2} \]

and

\[ t_0 = z \qquad f(t_0) = (\ln z - 1)e^{i\beta} \qquad f''(t_0) = -\frac{e^{-i\beta}}{|z|^2} \]

Figure 3-15 Alternative "mountain ranges" for the two cases of Figure 3-14


Thus ρ = 1/|z|² and θ = π − β. What is φ? Use of the condition cos(θ + 2φ) = −1 gives

\[ \phi = \frac{\beta}{2} \qquad \text{or} \qquad \phi = \frac{\beta}{2} - \pi \]

In our previous example, with z real, β = 0 and φ = 0, so the choice φ = β/2 would clearly seem most reasonable. Since it is sometimes necessary to examine the topography of the surface u = Re f(t) in more detail, however, we show the ridges and valleys for the present example, with β = π/4, in Figure 3-16. This figure confirms the choice φ = β/2.

The steepest descent approximation for Γ(z + 1) may now be written down from (3-83):

\[ \Gamma(z+1) \approx \sqrt{2\pi|z|}\; e^{z\ln z - z}\, e^{i\beta/2} \qquad \Gamma(z+1) \approx \sqrt{2\pi}\; z^{z+1/2} e^{-z} \tag{3-86} \]

The original integral representation for Γ(z) is only valid for Re z > 0.

Figure 3-16 Topography of the surface u = Re f(t) = Re (ln t − t/z)e^{iβ} for the case z = 3e^{iπ/4}. It is clear that the original path of integration along the real t-axis should be deformed so as to go over the saddle point, t₀ = z, on the path of steepest descents (valleys) in a direction specified by the angle φ = β/2


However, the result (3-86) is actually valid for all |z| → ∞ provided we stay away from the negative real axis [see Whittaker and Watson (W5) Section 12.33].

The result (3-86) is just the first term of an asymptotic series for the gamma function, which we shall now find. First write

\[ \Gamma(z) = \Gamma[(z-1)+1] \approx \sqrt{2\pi}\,(z-1)^{z-1/2}\, e^{-(z-1)} = \sqrt{2\pi}\; z^{z-1/2}\left(1-\frac{1}{z}\right)^{z-1/2} e^{-(z-1)} \]

Let us now set

\[ \Gamma(z) = \sqrt{2\pi}\; z^{z-1/2}\, e^{-z}\left[1 + \frac{A}{z} + \frac{B}{z^2} + \cdots\right] \tag{3-87} \]

The constants A, B, ... may be found from the recursion relation for the gamma function:

\[ \Gamma(z+1) = z\Gamma(z) \]

For, from (3-87),

\[ \Gamma(z+1) = \sqrt{2\pi}\,(z+1)^{z+1/2}\, e^{-(z+1)}\left[1 + \frac{A}{z+1} + \frac{B}{(z+1)^2} + \cdots\right] \]

\[ = \sqrt{2\pi}\,\exp\left[\left(z+\frac{1}{2}\right)\ln(z+1) - (z+1)\right]\left[1 + \frac{A}{z+1} + \frac{B}{(z+1)^2} + \cdots\right] \]

\[ = \sqrt{2\pi}\,\exp\left[\left(z+\frac{1}{2}\right)\ln z - z + \left(\frac{1}{12z^2} - \frac{1}{12z^3} + \frac{3}{40z^4} - + \cdots\right)\right]\left[1 + \frac{A}{z+1} + \frac{B}{(z+1)^2} + \cdots\right] \]

\[ = \sqrt{2\pi}\; z^{z+1/2}\, e^{-z}\left(1 + \frac{1}{12z^2} - \frac{1}{12z^3} + \frac{113}{1440z^4} + \cdots\right)\left[1 + \frac{A}{z} + \frac{B-A}{z^2} + \frac{C-2B+A}{z^3} + \cdots\right] \]