
Introduction to Methods of Applied Mathematics

or
Advanced Mathematical Methods for Scientists and Engineers
Sean Mauch
April 26, 2001
Contents
1 Anti-Copyright 22
2 Preface 23
2.1 Advice to Teachers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3 Warnings and Disclaimers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4 Suggested Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5 About the Title . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
I Algebra 1
3 Sets and Functions 2
3.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3.2 Single Valued Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3.3 Inverses and Multi-Valued Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.4 Transforming Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
4 Vectors 11
4.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.1.1 Scalars and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.1.2 The Kronecker Delta and Einstein Summation Convention . . . . . . . . . . . . . . . . . . 14
4.1.3 The Dot and Cross Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Sets of Vectors in n Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
II Calculus 36
5 Differential Calculus 37
5.1 Limits of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.2 Continuous Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.3 The Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.4 Implicit Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.5 Maxima and Minima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
5.6 Mean Value Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.6.1 Application: Using Taylor's Theorem to Approximate Functions . . . . . . . . . . . . . . 57
5.6.2 Application: Finite Difference Schemes . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.7 L'Hospital's Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6 Integral Calculus 100
6.1 The Indefinite Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.2 The Definite Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
6.3 The Fundamental Theorem of Integral Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.4 Techniques of Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.4.1 Partial Fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.5 Improper Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7 Vector Calculus 134
7.1 Vector Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
7.2 Gradient, Divergence and Curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
III Functions of a Complex Variable 150
8 Complex Numbers 151
8.1 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.2 The Complex Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.3 Polar Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
8.4 Arithmetic and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8.5 Integer Exponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
8.6 Rational Exponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
9 Functions of a Complex Variable 202
9.1 Curves and Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
9.2 Cartesian and Modulus-Argument Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
9.3 Graphing Functions of a Complex Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
9.4 Trigonometric Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
9.5 Inverse Trigonometric Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
9.6 Branch Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
9.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
9.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
9.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
10 Analytic Functions 303
10.1 Complex Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
10.2 Cauchy-Riemann Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
10.3 Harmonic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
10.4 Singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
10.4.1 Categorization of Singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
10.4.2 Isolated and Non-Isolated Singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
10.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
10.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
10.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
11 Analytic Continuation 356
11.1 Analytic Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
11.2 Analytic Continuation of Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
11.3 Analytic Functions Defined in Terms of Real Variables . . . . . . . . . . . . . . . . . . 360
11.3.1 Polar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
11.3.2 Analytic Functions Defined in Terms of Their Real or Imaginary Parts . . . . . . . . . 369
11.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
11.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
11.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
12 Contour Integration and Cauchy's Theorem 381
12.1 Line Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
12.2 Under Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
12.3 Cauchy's Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
12.4 Indefinite Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
12.5 Contour Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
12.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
12.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
12.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
13 Cauchy's Integral Formula 403
13.1 Cauchy's Integral Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
13.2 The Argument Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
13.3 Rouché's Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
13.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
13.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
13.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
14 Series and Convergence 422
14.1 Series of Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
14.1.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
14.1.2 Special Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
14.1.3 Convergence Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
14.2 Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
14.2.1 Tests for Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
14.2.2 Uniform Convergence and Continuous Functions. . . . . . . . . . . . . . . . . . . . . . . . 435
14.3 Uniformly Convergent Power Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
14.4 Integration and Differentiation of Power Series . . . . . . . . . . . . . . . . . . . . 443
14.5 Taylor Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
14.5.1 Newton's Binomial Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
14.6 Laurent Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
14.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
14.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
14.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
15 The Residue Theorem 490
15.1 The Residue Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
15.2 Cauchy Principal Value for Real Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
15.2.1 The Cauchy Principal Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
15.3 Cauchy Principal Value for Contour Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
15.4 Integrals on the Real Axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
15.5 Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
15.6 Fourier Cosine and Sine Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
15.7 Contour Integration and Branch Cuts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
15.8 Exploiting Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
15.8.1 Wedge Contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
15.8.2 Box Contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
15.9 Definite Integrals Involving Sine and Cosine . . . . . . . . . . . . . . . . . . . . . 525
15.10 Infinite Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
15.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
15.12 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
15.13 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
IV Ordinary Differential Equations 634
16 First Order Differential Equations 635
16.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
16.2 One Parameter Families of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
16.3 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
16.3.1 Separable Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
16.3.2 Homogeneous Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . . . 645
16.4 The First Order, Linear Differential Equation . . . . . . . . . . . . . . . . . . . . . 649
16.4.1 Homogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
16.4.2 Inhomogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
16.4.3 Variation of Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
16.5 Initial Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
16.5.1 Piecewise Continuous Coefficients and Inhomogeneities . . . . . . . . . . . . . . . . 654
16.6 Well-Posed Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
16.7 Equations in the Complex Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
16.7.1 Ordinary Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
16.7.2 Regular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
16.7.3 Irregular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
16.7.4 The Point at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
16.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
16.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
16.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 683
17 First Order Systems of Differential Equations 705
17.1 Matrices and Jordan Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
17.2 Systems of Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 713
17.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
17.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
17.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
18 Theory of Linear Ordinary Differential Equations 757
18.1 Nature of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
18.2 Transformation to a First Order System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
18.3 The Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
18.3.1 Derivative of a Determinant. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 762
18.3.2 The Wronskian of a Set of Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
18.3.3 The Wronskian of the Solutions to a Differential Equation . . . . . . . . . . . . . . 765
18.4 Well-Posed Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
18.5 The Fundamental Set of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 770
18.6 Adjoint Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
18.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
18.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
18.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
19 Techniques for Linear Differential Equations 786
19.1 Constant Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
19.1.1 Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
19.1.2 Higher Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
19.1.3 Real-Valued Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
19.2 Euler Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
19.2.1 Real-Valued Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
19.3 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
19.4 Equations Without Explicit Dependence on y . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
19.5 Reduction of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
19.6 *Reduction of Order and the Adjoint Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
19.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
19.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
19.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
20 Techniques for Nonlinear Differential Equations 842
20.1 Bernoulli Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
20.2 Riccati Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
20.3 Exchanging the Dependent and Independent Variables . . . . . . . . . . . . . . . . . . . . . . . . 848
20.4 Autonomous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
20.5 *Equidimensional-in-x Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
20.6 *Equidimensional-in-y Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
20.7 *Scale-Invariant Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
20.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
20.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
20.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
21 Transformations and Canonical Forms 878
21.1 The Constant Coefficient Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 878
21.2 Normal Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
21.2.1 Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
21.2.2 Higher Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . 883
21.3 Transformations of the Independent Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
21.3.1 Transformation to the form u'' + a(x) u = 0 . . . . . . . . . . . . . . . . . . . . 885
21.3.2 Transformation to a Constant Coefficient Equation . . . . . . . . . . . . . . . . . . 886
21.4 Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
21.4.1 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
21.4.2 Boundary Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
21.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
21.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896
21.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
22 The Dirac Delta Function 904
22.1 Derivative of the Heaviside Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
22.2 The Delta Function as a Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
22.3 Higher Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
22.4 Non-Rectangular Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
22.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
22.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 911
22.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
23 Inhomogeneous Differential Equations 915
23.1 Particular Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
23.2 Method of Undetermined Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . 917
23.3 Variation of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
23.3.1 Second Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . 921
23.3.2 Higher Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . 925
23.4 Piecewise Continuous Coefficients and Inhomogeneities . . . . . . . . . . . . . . . . . . 927
23.5 Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
23.5.1 Eliminating Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . 931
23.5.2 Separating Inhomogeneous Equations and Inhomogeneous Boundary Conditions . . . . . . 933
23.5.3 Existence of Solutions of Problems with Inhomogeneous Boundary Conditions . . . . . . . 934
23.6 Green Functions for First Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
23.7 Green Functions for Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
23.7.1 Green Functions for Sturm-Liouville Problems . . . . . . . . . . . . . . . . . . . . . . . . . 949
23.7.2 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
23.7.3 Problems with Unmixed Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . 956
23.7.4 Problems with Mixed Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 958
23.8 Green Functions for Higher Order Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
23.9 Fredholm Alternative Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
23.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
23.11 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 982
23.12 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
24 Difference Equations 1027
24.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
24.2 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1029
24.3 Homogeneous First Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
24.4 Inhomogeneous First Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
24.5 Homogeneous Constant Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . 1035
24.6 Reduction of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038
24.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1040
24.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
24.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
25 Series Solutions of Differential Equations 1046
25.1 Ordinary Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
25.1.1 Taylor Series Expansion for a Second Order Differential Equation . . . . . . . . . . . 1051
25.2 Regular Singular Points of Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 1060
25.2.1 Indicial Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
25.2.2 The Case: Double Root . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
25.2.3 The Case: Roots Differ by an Integer . . . . . . . . . . . . . . . . . . . . . . . 1069
25.3 Irregular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079
25.4 The Point at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079
25.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
25.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1087
25.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1089
26 Asymptotic Expansions 1113
26.1 Asymptotic Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1113
26.2 Leading Order Behavior of Differential Equations . . . . . . . . . . . . . . . . . . . . 1117
26.3 Integration by Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
26.4 Asymptotic Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1133
26.5 Asymptotic Expansions of Differential Equations . . . . . . . . . . . . . . . . . . . . 1135
26.5.1 The Parabolic Cylinder Equation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
27 Hilbert Spaces 1141
27.1 Linear Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
27.2 Inner Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
27.3 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1144
27.4 Linear Independence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1147
27.5 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1147
27.6 Gram-Schmidt Orthogonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . 1147
27.7 Orthonormal Function Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1151
27.8 Sets Of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152
27.9 Least Squares Fit to a Function and Completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . 1158
27.10 Closure Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
27.11 Linear Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
27.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167
27.13 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
27.14 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1169
28 Self Adjoint Linear Operators 1171
28.1 Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1171
28.2 Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172
28.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175
28.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
28.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
29 Self-Adjoint Boundary Value Problems 1178
29.1 Summary of Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1178
29.2 Formally Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179
29.3 Self-Adjoint Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
29.4 Self-Adjoint Eigenvalue Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
29.5 Inhomogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1188
29.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
29.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
29.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
30 Fourier Series 1195
30.1 An Eigenvalue Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
30.2 Fourier Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1198
30.3 Least Squares Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
30.4 Fourier Series for Functions Defined on Arbitrary Ranges . . . . . . . . . . . . . . . . . . . . 1207
30.5 Fourier Cosine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
30.6 Fourier Sine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
30.7 Complex Fourier Series and Parseval's Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 1212
30.8 Behavior of Fourier Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
30.9 Gibbs Phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
30.10 Integrating and Differentiating Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
30.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
30.12 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
30.13 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
31 Regular Sturm-Liouville Problems 1291
31.1 Derivation of the Sturm-Liouville Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
31.2 Properties of Regular Sturm-Liouville Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
31.3 Solving Differential Equations With Eigenfunction Expansions . . . . . . . . . . . . . . . . . 1305
31.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1311
31.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1315
31.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1317
32 Integrals and Convergence 1342
32.1 Uniform Convergence of Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1342
32.2 The Riemann-Lebesgue Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1343
32.3 Cauchy Principal Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1345
32.3.1 Integrals on an Infinite Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1345
32.3.2 Singular Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1345
33 The Laplace Transform 1347
33.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
33.2 The Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
33.2.1 F(s) with Poles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
33.2.2 f̂(s) with Branch Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
33.2.3 Asymptotic Behavior of F(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
33.3 Properties of the Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1362
33.4 Constant Coefficient Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
33.5 Systems of Constant Coefficient Differential Equations . . . . . . . . . . . . . . . . . . . . . 1368
33.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
33.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1378
33.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1382
34 The Fourier Transform 1415
34.1 Derivation from a Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415
34.2 The Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1417
34.2.1 A Word of Caution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1420
34.3 Evaluating Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
34.3.1 Integrals that Converge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1421
34.3.2 Cauchy Principal Value and Integrals that are Not Absolutely Convergent. . . . . . . . . . 1424
34.3.3 Analytic Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
34.4 Properties of the Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428
34.4.1 Closure Relation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
34.4.2 Fourier Transform of a Derivative. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1430
34.4.3 Fourier Convolution Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
34.4.4 Parseval's Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
34.4.5 Shift Property. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1436
34.4.6 Fourier Transform of x f(x). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1437
34.5 Solving Differential Equations with the Fourier Transform . . . . . . . . . . . . . . . . . . . 1437
34.6 The Fourier Cosine and Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1440
34.6.1 The Fourier Cosine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1440
34.6.2 The Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1441
34.7 Properties of the Fourier Cosine and Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
34.7.1 Transforms of Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
34.7.2 Convolution Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
34.7.3 Cosine and Sine Transform in Terms of the Fourier Transform . . . . . . . . . . . . . . . . 1446
34.8 Solving Differential Equations with the Fourier Cosine and Sine Transforms . . . . . . . . . . 1447
34.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
34.10 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
34.11 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
35 The Gamma Function 1484
35.1 Euler's Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
35.2 Hankel's Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
35.3 Gauss' Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
35.4 Weierstrass' Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1490
35.5 Stirling's Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
35.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1497
35.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1498
35.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1499
36 Bessel Functions 1501
36.1 Bessel's Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1501
36.2 Frobenius Series Solution about z = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1502
36.2.1 Behavior at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1505
36.3 Bessel Functions of the First Kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1507
36.3.1 The Bessel Function Satisfies Bessel's Equation . . . . . . . . . . . . . . . . . . . . 1508
36.3.2 Series Expansion of the Bessel Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
36.3.3 Bessel Functions of Non-Integral Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1512
36.3.4 Recursion Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1515
36.3.5 Bessel Functions of Half-Integral Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1518
36.4 Neumann Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1519
36.5 Bessel Functions of the Second Kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1523
36.6 Hankel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1525
36.7 The Modified Bessel Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1525
36.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1529
36.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
36.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1536
V Partial Differential Equations 1559
37 Transforming Equations 1560
37.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
37.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1562
37.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563
38 Classification of Partial Differential Equations 1564
38.1 Classification of Second Order Quasi-Linear Equations . . . . . . . . . . . . . . . . . . . . . 1564
38.1.1 Hyperbolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1565
38.1.2 Parabolic equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1570
38.1.3 Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1570
38.2 Equilibrium Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1572
38.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1574
38.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1575
38.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1576
39 Separation of Variables 1580
39.1 Eigensolutions of Homogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1580
39.2 Homogeneous Equations with Homogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . 1580
39.3 Time-Independent Sources and Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . 1582
39.4 Inhomogeneous Equations with Homogeneous Boundary Conditions . . . . . . . . . . . . . . . . . 1585
39.5 Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1587
39.6 The Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1589
39.7 General Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1593
39.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1594
39.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1608
39.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1613
40 Finite Transforms 1690
40.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1694
40.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1695
40.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1696
41 Waves 1701
41.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1702
41.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1708
41.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1710
42 The Diffusion Equation 1727
42.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1728
42.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1730
42.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
43 Similarity Methods 1734
43.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1739
43.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1740
43.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1741
44 Method of Characteristics 1743
44.1 The Method of Characteristics and the Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . 1743
44.2 The Method of Characteristics for an Infinite Domain . . . . . . . . . . . . . . . . . . . . . . 1745
44.3 The Method of Characteristics for a Semi-Infinite Domain . . . . . . . . . . . . . . . . . . . 1746
44.4 Envelopes of Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1747
44.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1750
44.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1752
44.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1753
45 Transform Methods 1759
45.1 Fourier Transform for Partial Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 1759
45.2 The Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1761
45.3 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1762
45.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1763
45.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1768
45.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1771
46 Green Functions 1794
46.1 Inhomogeneous Equations and Homogeneous Boundary Conditions . . . . . . . . . . . . . . . . . 1794
46.2 Homogeneous Equations and Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . 1795
46.3 Eigenfunction Expansions for Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1797
46.4 The Method of Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1802
46.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1803
46.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1808
46.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1810
47 Conformal Mapping 1851
47.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1852
47.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1855
47.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
48 Non-Cartesian Coordinates 1864
48.1 Spherical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1864
48.2 Laplace's Equation in a Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1865
48.3 Laplace's Equation in an Annulus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1868
VI Calculus of Variations 1872
49 Calculus of Variations 1873
49.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1874
49.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1891
49.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1897
VII Nonlinear Differential Equations 1990
50 Nonlinear Ordinary Differential Equations 1991
50.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1992
50.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1997
50.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1999
51 Nonlinear Partial Differential Equations 2021
51.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2022
51.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2025
51.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2026
VIII Appendices 2044
A Greek Letters 2045
B Notation 2047
C Formulas from Complex Variables 2049
D Table of Derivatives 2052
E Table of Integrals 2056
F Definite Integrals 2060
G Table of Sums 2063
H Table of Taylor Series 2066
I Table of Laplace Transforms 2069
J Table of Fourier Transforms 2074
K Table of Fourier Transforms in n Dimensions 2077
L Table of Fourier Cosine Transforms 2078
M Table of Fourier Sine Transforms 2080
N Table of Wronskians 2082
O Sturm-Liouville Eigenvalue Problems 2084
P Green Functions for Ordinary Differential Equations 2086
Q Trigonometric Identities 2089
Q.1 Circular Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2089
Q.2 Hyperbolic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2091
R Bessel Functions 2094
R.1 Definite Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2094
S Formulas from Linear Algebra 2095
T Vector Analysis 2097
U Partial Fractions 2099
V Finite Math 2103
W Probability 2104
W.1 Independent Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2104
W.2 Playing the Odds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2105
X Economics 2106
Y Glossary 2107
Chapter 1
Anti-Copyright
Anti-Copyright @ 1995-2000 by Mauch Publishing Company, un-Incorporated.
No rights reserved. Any part of this publication may be reproduced, stored in a retrieval system, transmitted or desecrated without permission.
Chapter 2
Preface
During the summer before my final undergraduate year at Caltech I set out to write a math text unlike any other, namely, one written by me. In that respect I have succeeded beautifully. Unfortunately, the text is neither complete nor polished. I have a "Warnings and Disclaimers" section below that is a little amusing, and an appendix on probability that I feel concisely captures the essence of the subject. However, all the material in between is in some stage of development. I am currently working to improve and expand this text.
This text is freely available from my web site. Currently I'm at http://www.its.caltech.edu/sean. I post new versions a couple of times a year.
2.1 Advice to Teachers
If you have something worth saying, write it down.
2.2 Acknowledgments
I would like to thank Professor Saffman for advising me on this project and the Caltech SURF program for providing the funding for me to write the first edition of this book.
2.3 Warnings and Disclaimers
• This book is a work in progress. It contains quite a few mistakes and typos. I would greatly appreciate your constructive criticism. You can reach me at sean@its.caltech.edu.
• Reading this book impairs your ability to drive a car or operate machinery.
• This book has been found to cause drowsiness in laboratory animals.
• This book contains twenty-three times the US RDA of fiber.
• Caution: FLAMMABLE - Do not read while smoking or near a fire.
• If infection, rash, or irritation develops, discontinue use and consult a physician.
• Warning: For external use only. Use only as directed. Intentional misuse by deliberately concentrating contents can be harmful or fatal. KEEP OUT OF REACH OF CHILDREN.
• In the unlikely event of a water landing do not use this book as a flotation device.
• The material in this text is fiction; any resemblance to real theorems, living or dead, is purely coincidental.
• This is by far the most amusing section of this book.
• Finding the typos and mistakes in this book is left as an exercise for the reader. (Eye ewes a spelling chequer from thyme too thyme, sew their should knot bee two many misspellings. Though I ain't so sure the grammar's too good.)
• The theorems and methods in this text are subject to change without notice.
• This is a chain book. If you do not make seven copies and distribute them to your friends within ten days of obtaining this text you will suffer great misfortune and other nastiness.
• The surgeon general has determined that excessive studying is detrimental to your social life.
• This text has been buffered for your protection and ribbed for your pleasure.
• Stop reading this rubbish and get back to work!
2.4 Suggested Use
This text is well suited to the student, professional or lay-person. It makes a superb gift. This text has a bouquet that is light and fruity, with some earthy undertones. It is ideal with dinner or as an aperitif. Bon appetit!
2.5 About the Title
The title is only making light of naming conventions in the sciences and is not an insult to engineers. If you want to find a good math text to learn a subject, look for books with "Introduction" and "Elementary" in the title. If it is an "Intermediate" text it will be incomprehensible. If it is "Advanced" then not only will it be incomprehensible, it will have low production qualities, i.e. a crappy typewriter font, no graphics and no examples. There is an exception to this rule when the title also contains the word "Scientists" or "Engineers". Then an advanced book may be quite suitable for actually learning the material.
Part I
Algebra
Chapter 3
Sets and Functions
3.1 Sets
Definition. A set is a collection of objects. We call the objects elements. A set is denoted by listing the elements between braces. For example: {e, i, π, 1}. We use ellipses to indicate patterns. The set of positive integers is {1, 2, 3, . . . }. We also denote sets with the notation {x | conditions on x} for sets that are more easily described than enumerated. This is read as "the set of elements x such that x satisfies . . . ". x ∈ S is the notation for "x is an element of the set S." To express the opposite we have x ∉ S for "x is not an element of the set S."
Examples. We have notations for denoting some of the commonly encountered sets.
• ∅ = {} is the empty set, the set containing no elements.
• Z = {. . . , −1, 0, 1, . . . } is the set of integers. (Z is for "Zahlen", the German word for "number".)
• Q = {p/q | p, q ∈ Z, q ≠ 0} is the set of rational numbers. (Q is for quotient.)
• R = {x | x = a₁a₂ · · · aₙ.b₁b₂ · · · } is the set of real numbers, i.e. the set of numbers with decimal expansions.
• C = {a + ib | a, b ∈ R, i² = −1} is the set of complex numbers. i is the square root of −1. (If you haven't seen complex numbers before, don't dismay. We'll cover them later.)
• Z⁺, Q⁺ and R⁺ are the sets of positive integers, rationals and reals, respectively. For example, Z⁺ = {1, 2, 3, . . . }.
• Z⁰⁺, Q⁰⁺ and R⁰⁺ are the sets of non-negative integers, rationals and reals, respectively. For example, Z⁰⁺ = {0, 1, 2, . . . }.
• (a . . . b) denotes an open interval on the real axis. (a . . . b) ≡ {x | x ∈ R, a < x < b}
• We use brackets to denote the closed interval. [a . . . b] ≡ {x | x ∈ R, a ≤ x ≤ b}
The cardinality or order of a set S is denoted |S|. For finite sets, the cardinality is the number of elements in the set. The Cartesian product of two sets is the set of ordered pairs:

X × Y ≡ {(x, y) | x ∈ X, y ∈ Y }.

The Cartesian product of n sets is the set of ordered n-tuples:

X₁ × X₂ × · · · × Xₙ ≡ {(x₁, x₂, . . . , xₙ) | x₁ ∈ X₁, x₂ ∈ X₂, . . . , xₙ ∈ Xₙ}.
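For finite sets, these definitions can be experimented with directly in Python; the following is only an illustrative sketch, with arbitrarily chosen example sets:

```python
from itertools import product

X = {1, 2}
Y = {"a", "b", "c"}

# The Cartesian product X × Y as a set of ordered pairs (x, y).
XY = set(product(X, Y))

# The cardinality |X × Y| equals |X| · |Y| for finite sets.
print(len(XY) == len(X) * len(Y))  # True
```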
Equality. Two sets S and T are equal if each element of S is an element of T and vice versa. This is denoted S = T. Inequality is S ≠ T, of course. S is a subset of T, S ⊆ T, if every element of S is an element of T. S is a proper subset of T, S ⊂ T, if S ⊆ T and S ≠ T. For example: The empty set is a subset of every set, ∅ ⊆ S. The rational numbers are a proper subset of the real numbers, Q ⊂ R.
Operations. The union of two sets, S ∪ T, is the set whose elements are in either of the two sets. The union of n sets,

⋃ⁿⱼ₌₁ Sⱼ ≡ S₁ ∪ S₂ ∪ · · · ∪ Sₙ

is the set whose elements are in any of the sets Sⱼ. The intersection of two sets, S ∩ T, is the set whose elements are in both of the two sets. In other words, the intersection of two sets is the set of elements that the two sets have in common. The intersection of n sets,

⋂ⁿⱼ₌₁ Sⱼ ≡ S₁ ∩ S₂ ∩ · · · ∩ Sₙ

is the set whose elements are in all of the sets Sⱼ. If two sets have no elements in common, S ∩ T = ∅, then the sets are disjoint. If T ⊆ S, then the difference between S and T, S \ T, is the set of elements in S which are not in T.

S \ T ≡ {x | x ∈ S, x ∉ T}

The difference of sets is also denoted S − T.
Properties. The following properties are easily verified from the above definitions.
• S ∪ ∅ = S, S ∩ ∅ = ∅, S \ ∅ = S, S \ S = ∅.
• Commutative. S ∪ T = T ∪ S, S ∩ T = T ∩ S.
• Associative. (S ∪ T) ∪ U = S ∪ (T ∪ U) = S ∪ T ∪ U, (S ∩ T) ∩ U = S ∩ (T ∩ U) = S ∩ T ∩ U.
• Distributive. S ∪ (T ∩ U) = (S ∪ T) ∩ (S ∪ U), S ∩ (T ∪ U) = (S ∩ T) ∪ (S ∩ U).
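Python's built-in set type implements these operations (| for union, & for intersection, - for difference), so the properties can be spot-checked on small examples. A minimal sketch, with arbitrarily chosen sets:

```python
S, T, U = {1, 2, 3}, {3, 4}, {2, 3, 5}
empty = set()

# Identities involving the empty set
assert S | empty == S and S & empty == empty
assert S - empty == S and S - S == empty

# Commutative laws
assert S | T == T | S and S & T == T & S

# Associative laws
assert (S | T) | U == S | (T | U)
assert (S & T) & U == S & (T & U)

# Distributive laws
assert S | (T & U) == (S | T) & (S | U)
assert S & (T | U) == (S & T) | (S & U)

print("all properties hold")
```

Checking a law on one example does not prove it, of course; it only illustrates the definitions.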
3.2 Single Valued Functions

Single-Valued Functions. A single-valued function or single-valued mapping is a mapping of the elements x ∈ X into elements y ∈ Y. This is expressed notationally as f : X → Y. If such a function is well-defined, then for each x ∈ X there exists a unique y ∈ Y such that f(x) = y. The set X is the domain of the function, Y is the codomain, (not to be confused with the range, which we introduce shortly). To denote the value of a function on a particular element we can use any of the notations: f(x) = y, f : x ↦ y or simply x ↦ y. f is the identity map on X if f(x) = x for all x ∈ X.
Let f : X → Y. The range or image of f is

f(X) = {y | y = f(x) for some x ∈ X}.

The range is a subset of the codomain. For each Z ⊆ Y, the inverse image of Z is defined:

f⁻¹(Z) ≡ {x ∈ X | f(x) = z for some z ∈ Z}.
Examples.
• Finite polynomials and the exponential function are examples of single valued functions which map real numbers to real numbers.
• The greatest integer function, ⌊·⌋, is a mapping from R to Z. ⌊x⌋ is the greatest integer less than or equal to x. Likewise, the least integer function, ⌈x⌉, is the least integer greater than or equal to x.
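These two functions are available in Python as math.floor and math.ceil; a quick illustration (note the behavior on negative arguments, which is where the "greatest/least integer" wording matters):

```python
import math

# Greatest integer function: the largest integer ≤ x
print(math.floor(2.7))   # 2
print(math.floor(-1.5))  # -2

# Least integer function: the smallest integer ≥ x
print(math.ceil(2.7))    # 3
print(math.ceil(-1.5))   # -1
```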
The -jectives. A function is injective if for each x₁ ≠ x₂, f(x₁) ≠ f(x₂). In other words, for each x in the domain there is a unique y = f(x) in the range. f is surjective if for each y in the codomain, there is an x such that y = f(x). If a function is both injective and surjective, then it is bijective. A bijective function is also called a one-to-one mapping.
Examples.
• The exponential function y = eˣ is a bijective function, (one-to-one mapping), that maps R to R⁺. (R is the set of real numbers; R⁺ is the set of positive real numbers.)
• f(x) = x² is a bijection from R⁺ to R⁺. f is not injective from R to R⁺. For each positive y in the range, there are two values of x such that y = x².
• f(x) = sin x is not injective from R to [−1, 1]. For each y ∈ [−1, 1] there exists an infinite number of values of x such that y = sin x.
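For functions on finite sets, the "-jective" properties can be tested mechanically. The helper names below (is_injective, is_surjective) are our own, chosen for this sketch; they are not part of any standard library:

```python
def is_injective(f, domain):
    # Distinct inputs must give distinct outputs.
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

def is_surjective(f, domain, codomain):
    # Every element of the codomain must be hit by some f(x).
    return set(codomain) <= {f(x) for x in domain}

square = lambda x: x * x

# x² is not injective on a domain containing both x and -x ...
print(is_injective(square, [-2, -1, 1, 2]))   # False
# ... but restricted to positive values it is a bijection onto {1, 4}.
print(is_injective(square, [1, 2]))           # True
print(is_surjective(square, [1, 2], {1, 4}))  # True
```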
Figure 3.1: Depictions of Injective, Surjective and Bijective Functions
3.3 Inverses and Multi-Valued Functions

If y = f(x), then we can write x = f⁻¹(y) where f⁻¹ is the inverse of f. If y = f(x) is a one-to-one function, then f⁻¹(y) is also a one-to-one function. In this case, x = f⁻¹(f(x)) = f(f⁻¹(x)) for values of x where both f(x) and f⁻¹(x) are defined. For example log x, which maps R⁺ to R, is the inverse of eˣ. x = e^(log x) = log(eˣ) for all x ∈ R⁺. (Note that x ∈ R⁺ ensures that log x is defined.)
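The exp/log pair behaves numerically exactly as described, up to floating-point rounding; a brief check with Python's math module (the value 2.5 is an arbitrary positive number):

```python
import math

x = 2.5  # any positive number, so that log x is defined

# log(e^x) = x holds for all real x; e^(log x) = x requires x > 0.
assert math.isclose(math.log(math.exp(x)), x)
assert math.isclose(math.exp(math.log(x)), x)
print("exp and log invert each other on the right domains")
```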
If y = f(x) is a many-to-one function, then x = f⁻¹(y) is a one-to-many function. f⁻¹(y) is a multi-valued function. We have x = f(f⁻¹(x)) for values of x where f⁻¹(x) is defined, however x ≠ f⁻¹(f(x)). There are diagrams showing one-to-one, many-to-one and one-to-many functions in Figure 3.2.
Example 3.3.1 y = x², a many-to-one function, has the inverse x = y^(1/2). For each positive y, there are two values of x such that x = y^(1/2). y = x² and y = x^(1/2) are graphed in Figure 3.3.
We say that there are two branches of y = x^(1/2): the positive and the negative branch. We denote the positive branch as y = √x; the negative branch is y = −√x. We call √x the principal branch of x^(1/2). Note that √x is a one-to-one function. Finally, x = (x^(1/2))² since (±√x)² = x, but x ≠ (x²)^(1/2) since (x²)^(1/2) = ±x. y = √x is graphed in Figure 3.4.

Figure 3.2: Diagrams of One-To-One, Many-To-One and One-To-Many Functions

Figure 3.3: y = x² and y = x^(1/2)

Figure 3.4: y = √x
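The asymmetry between x = (x^(1/2))² and x ≠ (x²)^(1/2) is easy to see numerically; math.sqrt computes the principal (positive) branch:

```python
import math

# Squaring the principal square root recovers x (for x ≥ 0):
print(math.sqrt(9.0) ** 2)     # 9.0

# But taking the principal square root of x² gives |x|, not x:
print(math.sqrt((-3.0) ** 2))  # 3.0, i.e. |−3|, not −3
```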
Now consider the many-to-one function y = sin x. The inverse is x = arcsin y. For each y ∈ [−1, 1] there are an infinite number of values x such that x = arcsin y. In Figure 3.5 is a graph of y = sin x and a graph of a few branches of y = arcsin x.

Figure 3.5: y = sin x and y = arcsin x
Example 3.3.2 arcsin x has an infinite number of branches. We will denote the principal branch by Arcsin x, which maps [−1, 1] to [−π/2 . . . π/2]. Note that x = sin(arcsin x), but x ≠ arcsin(sin x). y = Arcsin x is graphed in Figure 3.6.
Figure 3.6: y = Arcsin x
Example 3.3.3 Consider 1
1/3
. Since x
3
is a one-to-one function, x
1/3
is a single-valued function. (See Figure 3.7.)
1
1/3
= 1.
8
Figure 3.7: y = x
3
and y = x
1/3
Example 3.3.4 Consider arccos(1/2). cos x and a few branches of arccos x are graphed in Figure 3.8. cos x = 1/2 has the two solutions x = ±π/3 in the range x ∈ (−π, π]. Since cos(x + 2π) = cos x,

arccos(1/2) = ±π/3 + 2nπ, n ∈ Z.

Figure 3.8: y = cos x and y = arccos x
3.4 Transforming Equations
We must take care in applying functions to equations. It is always safe to apply a one-to-one function to an equation (provided it is defined for that domain). For example, we can apply y = x^3 or y = e^x to the equation x = 1. The equations x^3 = 1 and e^x = e have the unique solution x = 1.
If we apply a many-to-one function to an equation, we may introduce spurious solutions. Applying y = x^2 and y = sin x to the equation x = π/2 results in x^2 = π^2/4 and sin x = 1. The former equation has the two solutions x = ±π/2; the latter has an infinite number of solutions, x = π/2 + 2nπ, n ∈ Z.
We do not generally apply a one-to-many function to both sides of an equation, as this is rarely useful. Consider the equation

sin^2 x = 1.

Applying the function f(x) = x^{1/2} to the equation would not get us anywhere:

(sin^2 x)^{1/2} = 1^{1/2}.

Since (sin^2 x)^{1/2} ≠ sin x, we cannot simplify the left side of the equation. Instead we could use the definition of f(x) = x^{1/2} as the inverse of the x^2 function to obtain

sin x = 1^{1/2} = ±1.

Then we could use the definition of arcsin as the inverse of sin to get

x = arcsin(±1).

x = arcsin(1) has the solutions x = π/2 + 2nπ and x = arcsin(−1) has the solutions x = −π/2 + 2nπ. Thus

x = π/2 + nπ, n ∈ Z.

Note that we cannot just apply arcsin to both sides of the equation as arcsin(sin x) ≠ x.
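A quick numerical check (our own sketch, not part of the text) that the family x = π/2 + nπ does satisfy sin² x = 1:

```python
import math

def solutions(n_values):
    """Candidate solutions of sin^2 x = 1: x = pi/2 + n*pi."""
    return [math.pi / 2 + n * math.pi for n in n_values]

xs = solutions(range(-3, 4))
# Each x gives sin x = +1 or -1, so sin^2 x = 1 (up to roundoff).
print(all(math.isclose(math.sin(x)**2, 1.0) for x in xs))  # True
```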
Chapter 4
Vectors
4.1 Vectors
4.1.1 Scalars and Vectors
A vector is a quantity having both a magnitude and a direction. Examples of vector quantities are velocity, force and position. One can represent a vector in n-dimensional space with an arrow whose initial point is at the origin (Figure 4.1). The magnitude is the length of the vector. Typographically, variables representing vectors are often written in capital letters, in bold face, or with an arrow over-line: A, a, a⃗. The magnitude of a vector is denoted |a|. A scalar has only a magnitude. Examples of scalar quantities are mass, time and speed.
Vector Algebra. Two vectors are equal if they have the same magnitude and direction. The negative of a vector, denoted −a, is a vector of the same magnitude as a but in the opposite direction. We add two vectors a and b by placing the tail of b at the head of a and defining a + b to be the vector with tail at the origin and head at the head of b. (See Figure 4.2.)
The difference, a − b, is defined as the sum of a and the negative of b: a + (−b). The result of multiplying a by a scalar α is a vector of magnitude |α| |a| with the same/opposite direction if α is positive/negative. (See Figure 4.2.)

Figure 4.1: Graphical Representation of a Vector in Three Dimensions

Figure 4.2: Vector Arithmetic

Here are the properties of adding vectors and multiplying them by a scalar. They are evident from geometric considerations.

a + b = b + a              αa = aα              commutative laws
(a + b) + c = a + (b + c)  α(βa) = (αβ)a        associative laws
α(a + b) = αa + αb         (α + β)a = αa + βa   distributive laws
Zero and Unit Vectors. The additive identity element for vectors is the zero vector or null vector. This is a vector of magnitude zero, which is denoted 0. A unit vector is a vector of magnitude one. If a is nonzero then a/|a| is a unit vector in the direction of a. Unit vectors are often denoted with a caret: n̂.
Rectangular Unit Vectors. In n-dimensional Cartesian space, R^n, the unit vectors in the directions of the coordinate axes are e_1, ..., e_n. These are called the rectangular unit vectors. To cut down on subscripts, the unit vectors in three-dimensional space are often denoted i, j and k. (Figure 4.3.)

Figure 4.3: Rectangular Unit Vectors
Components of a Vector. Consider a vector a with tail at the origin and head having the Cartesian coordinates (a_1, ..., a_n). We can represent this vector as the sum of n rectangular component vectors: a = a_1 e_1 + ··· + a_n e_n. (See Figure 4.4.) Another notation for the vector a is ⟨a_1, ..., a_n⟩. By the Pythagorean theorem, the magnitude of the vector a is

|a| = √(a_1^2 + ··· + a_n^2).

Figure 4.4: Components of a Vector
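To make the component formula concrete, here is a small sketch in Python (ours, not part of the text); it builds the magnitude from the components exactly as in the Pythagorean formula above.

```python
import math

def magnitude(a):
    """|a| = sqrt(a_1^2 + ... + a_n^2) for a vector given as a list."""
    return math.sqrt(sum(ai**2 for ai in a))

a = [3.0, 4.0]
print(magnitude(a))  # 5.0
```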
4.1.2 The Kronecker Delta and Einstein Summation Convention
The Kronecker delta tensor is defined

δ_{ij} = { 1 if i = j,
           0 if i ≠ j.

This notation will be useful in our work with vectors.
Consider writing a vector in terms of its rectangular components. Instead of using ellipses, a = a_1 e_1 + ··· + a_n e_n, we could write the expression as a sum: a = Σ_{i=1}^{n} a_i e_i. We can shorten this notation by leaving out the sum: a = a_i e_i, where it is understood that whenever an index is repeated in a term we sum over that index from 1 to n. This is the Einstein summation convention. A repeated index is called a summation index or a dummy index. Other indices can take any value from 1 to n and are called free indices.
Example 4.1.1 Consider the matrix equation A x = b. We can write out the matrix and vectors explicitly:

[ a_{11} ··· a_{1n} ] [ x_1 ]   [ b_1 ]
[   ⋮          ⋮   ] [  ⋮  ] = [  ⋮  ]
[ a_{n1} ··· a_{nn} ] [ x_n ]   [ b_n ]

This takes much less space when we use the summation convention:

a_{ij} x_j = b_i.

Here j is a summation index and i is a free index.
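The summation convention a_{ij} x_j = b_i is just an implicit loop over the repeated index. A small sketch (our own, not from the text) makes the loops explicit:

```python
def matvec(A, x):
    """Compute b_i = a_{ij} x_j: sum over the repeated index j."""
    n = len(x)
    b = [0.0] * n
    for i in range(n):        # i is a free index
        for j in range(n):    # j is the summation (dummy) index
            b[i] += A[i][j] * x[j]
    return b

A = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
print(matvec(A, x))  # [3.0, 7.0]
```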
4.1.3 The Dot and Cross Product
Dot Product. The dot product or scalar product of two vectors is defined

a · b ≡ |a| |b| cos θ,

where θ is the angle from a to b. From this definition one can derive the following properties:

a · b = b · a, commutative.
α(a · b) = (αa) · b = a · (αb), associativity of scalar multiplication.
a · (b + c) = a · b + a · c, distributive.
e_i · e_j = δ_{ij}. In three dimensions, this is
i · i = j · j = k · k = 1,  i · j = j · k = k · i = 0.
a · b = a_i b_i ≡ a_1 b_1 + ··· + a_n b_n, the dot product in terms of rectangular components.
If a · b = 0 then either a and b are orthogonal (perpendicular) or one of a and b is zero.
The Angle Between Two Vectors. We can use the dot product to find the angle between two vectors, a and b. From the definition of the dot product,

a · b = |a| |b| cos θ.

If the vectors are nonzero, then

θ = arccos( (a · b) / (|a| |b|) ).

Example 4.1.2 What is the angle between i and i + j?

θ = arccos( (i · (i + j)) / (|i| |i + j|) ) = arccos(1/√2) = π/4.
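As an illustrative sketch (ours, not the author's), the angle formula translates directly into code:

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def angle(a, b):
    """Angle between nonzero vectors: arccos(a.b / (|a||b|))."""
    return math.acos(dot(a, b) / math.sqrt(dot(a, a) * dot(b, b)))

# The example above: the angle between i and i + j is pi/4.
print(math.isclose(angle([1.0, 0.0], [1.0, 1.0]), math.pi / 4))  # True
```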
Parametric Equation of a Line. Consider a line that passes through the point a and is parallel to the vector t (the tangent). A parametric equation of the line is

x = a + u t,  u ∈ R.

Implicit Equation of a Line. Consider a line that passes through the point a and is normal (orthogonal, perpendicular) to the vector n. All the lines that are normal to n have the property that x · n is a constant, where x is any point on the line. (See Figure 4.5.) x · n = 0 is the line that is normal to n and passes through the origin. The line that is normal to n and passes through the point a is

x · n = a · n.

Figure 4.5: Equation for a Line

The normal to a line determines an orientation of the line. The normal points in the direction that is above the line. A point b is (above/on/below) the line if (b − a) · n is (positive/zero/negative). The signed distance of a point b from the line x · n = a · n is

(b − a) · n / |n|.
Implicit Equation of a Hyperplane. A hyperplane in R^n is an (n − 1)-dimensional sheet which passes through a given point and is normal to a given direction. In R^3 we call this a plane. Consider a hyperplane that passes through the point a and is normal to the vector n. All the hyperplanes that are normal to n have the property that x · n is a constant, where x is any point in the hyperplane. x · n = 0 is the hyperplane that is normal to n and passes through the origin. The hyperplane that is normal to n and passes through the point a is

x · n = a · n.

The normal determines an orientation of the hyperplane. The normal points in the direction that is above the hyperplane. A point b is (above/on/below) the hyperplane if (b − a) · n is (positive/zero/negative). The signed distance of a point b from the hyperplane x · n = a · n is

(b − a) · n / |n|.
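A quick sketch of the signed-distance formula (our illustration; the sample points are made up):

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def signed_distance(b, a, n):
    """Signed distance of point b from the hyperplane x.n = a.n.

    Positive means b is on the side that n points toward ("above").
    """
    diff = [bi - ai for bi, ai in zip(b, a)]
    return dot(diff, n) / math.sqrt(dot(n, n))

# Plane z = 0 in R^3: passes through the origin, normal k.
print(signed_distance([0.0, 0.0, 2.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # 2.0
```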
Right and Left-Handed Coordinate Systems. Consider a rectangular coordinate system in two dimensions. Angles are measured from the positive x axis in the direction of the positive y axis. There are two ways of labeling the axes. (See Figure 4.6.) In one the angle increases in the counterclockwise direction and in the other the angle increases in the clockwise direction. The former is the familiar Cartesian coordinate system.

Figure 4.6: There Are Two Ways of Labeling the Axes in Two Dimensions.

There are also two ways of labeling the axes in a three-dimensional rectangular coordinate system. These are called right-handed and left-handed coordinate systems. See Figure 4.7. Any other labeling of the axes could be rotated into one of these configurations. The right-handed system is the one that is used by default. If you put your right thumb in the direction of the z axis in a right-handed coordinate system, then your fingers curl in the direction from the x axis to the y axis.

Figure 4.7: Right and Left-Handed Coordinate Systems

Cross Product. The cross product or vector product is defined

a × b = |a| |b| sin θ n̂,

where θ is the angle from a to b and n̂ is a unit vector that is orthogonal to a and b and in the direction such that a, b and n̂ form a right-handed system.
You can visualize the direction of a × b by applying the right hand rule. Curl the fingers of your right hand in the direction from a to b. Your thumb points in the direction of a × b. Warning: unless you are a lefty, get in the habit of putting down your pencil before applying the right hand rule.
The dot and cross products behave a little differently. First note that, unlike the dot product, the cross product is not commutative. The magnitudes of a × b and b × a are the same, but their directions are opposite. (See Figure 4.8.)
Let

a × b = |a| |b| sin θ n̂ and b × a = |b| |a| sin θ m̂.

The angle from a to b is the same as the angle from b to a. Since a, b, n̂ and b, a, m̂ are right-handed systems, m̂ points in the opposite direction as n̂. Since a × b = −b × a, we say that the cross product is anti-commutative.
Next we note that since

|a × b| = |a| |b| | sin θ |,

the magnitude of a × b is the area of the parallelogram defined by the two vectors. (See Figure 4.9.) The area of the triangle defined by two vectors is then (1/2) |a × b|.

Figure 4.8: The Cross Product is Anti-Commutative.

Figure 4.9: The Parallelogram and the Triangle Defined by Two Vectors
From the definition of the cross product, one can derive the following properties:

a × b = −b × a, anti-commutative.
α(a × b) = (αa) × b = a × (αb), associativity of scalar multiplication.
a × (b + c) = a × b + a × c, distributive.
(a × b) × c ≠ a × (b × c). The cross product is not associative.
i × i = j × j = k × k = 0.
i × j = k, j × k = i, k × i = j.

a × b = (a_2 b_3 − a_3 b_2) i + (a_3 b_1 − a_1 b_3) j + (a_1 b_2 − a_2 b_1) k =
    | i   j   k   |
    | a_1 a_2 a_3 |
    | b_1 b_2 b_3 |,

the cross product in terms of rectangular components.
If a × b = 0 then either a and b are parallel or one of a or b is zero.
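The component formula is easy to check in code; this sketch (ours) mirrors it term by term:

```python
def cross(a, b):
    """Cross product of 3-vectors, using the component formula above."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return [a2 * b3 - a3 * b2,
            a3 * b1 - a1 * b3,
            a1 * b2 - a2 * b1]

i, j, k = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(cross(i, j))  # [0, 0, 1], i.e. i x j = k
print(cross(j, i))  # [0, 0, -1]: anti-commutative
```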
Scalar Triple Product. Consider the volume of the parallelepiped defined by three vectors. (See Figure 4.10.) The area of the base is |b| |c| | sin θ |, where θ is the angle between b and c. The height is |a| | cos φ |, where φ is the angle between b × c and a. Thus the volume of the parallelepiped is |a| |b| |c| | sin θ cos φ |.

Figure 4.10: The Parallelepiped Defined by Three Vectors

Note that

|a · (b × c)| = |a · (|b| |c| sin θ n̂)| = | |a| |b| |c| sin θ cos φ |.

Thus |a · (b × c)| is the volume of the parallelepiped. a · (b × c) is the volume or the negative of the volume depending on whether a, b, c is a right- or left-handed system.
Note that parentheses are unnecessary in a · b × c. There is only one way to interpret the expression: if you did the dot product first, you would be left with the cross product of a scalar and a vector, which is meaningless. a · b × c is called the scalar triple product.
Plane Defined by Three Points. Three points which are not collinear define a plane. Consider a plane that passes through the three points a, b and c. One way of expressing that the point x lies in the plane is that the vectors x − a, b − a and c − a are coplanar. (See Figure 4.11.) If the vectors are coplanar, then the parallelepiped defined by these three vectors will have zero volume. We can express this in an equation using the scalar triple product:

(x − a) · (b − a) × (c − a) = 0.

Figure 4.11: Three Points Define a Plane.
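A sketch (ours) of the scalar triple product as a signed volume and coplanarity test; `dot` and `cross` are the obvious helpers:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    a1, a2, a3 = a
    b1, b2, b3 = b
    return [a2 * b3 - a3 * b2, a3 * b1 - a1 * b3, a1 * b2 - a2 * b1]

def triple(a, b, c):
    """a . b x c: the signed volume of the parallelepiped."""
    return dot(a, cross(b, c))

# Unit cube has volume 1; coplanar vectors give 0.
print(triple([1, 0, 0], [0, 1, 0], [0, 0, 1]))  # 1
print(triple([1, 0, 0], [0, 1, 0], [1, 1, 0]))  # 0 (coplanar)
```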
4.2 Sets of Vectors in n Dimensions
Orthogonality. Consider two n-dimensional vectors

x = (x_1, x_2, ..., x_n),  y = (y_1, y_2, ..., y_n).

The inner product of these vectors can be defined

⟨x|y⟩ ≡ x · y = Σ_{i=1}^{n} x_i y_i.

The vectors are orthogonal if x · y = 0. The norm of a vector is the length of the vector generalized to n dimensions:

‖x‖ = √(x · x).

Consider a set of vectors

{x_1, x_2, ..., x_m}.

If each pair of vectors in the set is orthogonal, then the set is orthogonal:

x_i · x_j = 0 if i ≠ j.

If in addition each vector in the set has norm 1, then the set is orthonormal:

x_i · x_j = δ_{ij} = { 1 if i = j,
                       0 if i ≠ j.

Here δ_{ij} is known as the Kronecker delta function.
Completeness. A set of n, n-dimensional vectors

{x_1, x_2, ..., x_n}

is complete if any n-dimensional vector can be written as a linear combination of the vectors in the set. That is, any vector y can be written

y = Σ_{i=1}^{n} c_i x_i.

Taking the inner product of each side of this equation with x_m and using the orthogonality of the set,

y · x_m = ( Σ_{i=1}^{n} c_i x_i ) · x_m = Σ_{i=1}^{n} c_i x_i · x_m = c_m x_m · x_m

c_m = (y · x_m) / ‖x_m‖^2.

Thus y has the expansion

y = Σ_{i=1}^{n} ( (y · x_i) / ‖x_i‖^2 ) x_i.

If in addition the set is orthonormal, then

y = Σ_{i=1}^{n} (y · x_i) x_i.
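A small numerical sketch (ours, not from the text) of the orthonormal expansion y = Σ (y · x_i) x_i, using the standard basis of R^2 rotated by 45°:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def expand(y, basis):
    """Expansion coefficients c_i = y . x_i for an orthonormal basis."""
    return [dot(y, x) for x in basis]

def reconstruct(coeffs, basis):
    """Sum c_i x_i."""
    n = len(basis[0])
    out = [0.0] * n
    for c, x in zip(coeffs, basis):
        for i in range(n):
            out[i] += c * x[i]
    return out

s = 1 / math.sqrt(2)
basis = [[s, s], [-s, s]]          # orthonormal in R^2
y = [3.0, 1.0]
c = expand(y, basis)
print(reconstruct(c, basis))       # recovers [3.0, 1.0] (up to roundoff)
```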
4.3 Exercises
The Dot and Cross Product
Exercise 4.1
Prove the distributive law for the dot product,

a · (b + c) = a · b + a · c.

Exercise 4.2
Prove that

a · b = a_i b_i ≡ a_1 b_1 + ··· + a_n b_n.

Exercise 4.3
What is the angle between the vectors i + j and i + 3j?
Exercise 4.4
Prove the distributive law for the cross product,

a × (b + c) = a × b + a × c.

Exercise 4.5
Show that

a × b = | i   j   k   |
        | a_1 a_2 a_3 |
        | b_1 b_2 b_3 |

Exercise 4.6
What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)?
Exercise 4.7
What is the volume of the tetrahedron with vertices at (1, 1, 0), (3, 2, 1), (2, 4, 1) and (1, 2, 5)?
Exercise 4.8
What is the equation of the plane that passes through the points (1, 2, 3), (2, 3, 1) and (3, 1, 2)? What is the distance from the point (2, 3, 5) to the plane?
4.4 Hints
The Dot and Cross Product
Hint 4.1
First prove the distributive law when the first vector is of unit length,

n · (b + c) = n · b + n · c.

Then all the quantities in the equation are projections onto the unit vector n and you can use geometry.
Hint 4.2
First prove that the dot product of a rectangular unit vector with itself is one and the dot product of two distinct rectangular unit vectors is zero. Then write a and b in rectangular components and use the distributive law.
Hint 4.3
Use a · b = |a| |b| cos θ.
Hint 4.4
First consider the case that both b and c are orthogonal to a. Prove the distributive law in this case from geometric considerations.
Next consider two arbitrary vectors a and b. We can write b = b_⊥ + b_∥, where b_⊥ is orthogonal to a and b_∥ is parallel to a. Show that

a × b = a × b_⊥.

Finally prove the distributive law for arbitrary b and c.
Hint 4.5
Write the vectors in their rectangular components and use

i × j = k, j × k = i, k × i = j,

and

i × i = j × j = k × k = 0.

Hint 4.6
The quadrilateral is composed of two triangles. The area of a triangle defined by the two vectors a and b is (1/2)|a × b|.
Hint 4.7
Justify that the volume of a tetrahedron determined by three vectors is one sixth the volume of the parallelepiped determined by those three vectors. The volume of a parallelepiped determined by three vectors is the magnitude of the scalar triple product of the vectors: |a · b × c|.
Hint 4.8
The equation of a plane that is orthogonal to a and passes through the point b is a · x = a · b. The distance of a point c from the plane is

| (c − b) · a / |a| |.
4.5 Solutions
The Dot and Cross Product
Solution 4.1
First we prove the distributive law when the first vector is of unit length, i.e.,

n · (b + c) = n · b + n · c.  (4.1)

From Figure 4.12 we see that the projection of the vector b + c onto n is equal to the sum of the projections b · n and c · n.

Figure 4.12: The Distributive Law for the Dot Product

Now we extend the result to the case when the first vector has arbitrary length. We define a = |a| n and multiply Equation 4.1 by the scalar |a|:

|a| n · (b + c) = |a| n · b + |a| n · c
a · (b + c) = a · b + a · c.
Solution 4.2
First note that

e_i · e_i = |e_i| |e_i| cos(0) = 1.

Then note that the dot product of any two distinct rectangular unit vectors is zero because they are orthogonal. Now we write a and b in terms of their rectangular components and use the distributive law:

a · b = a_i e_i · b_j e_j = a_i b_j e_i · e_j = a_i b_j δ_{ij} = a_i b_i.
Solution 4.3
Since a · b = |a| |b| cos θ, we have

θ = arccos( (a · b) / (|a| |b|) )

when a and b are nonzero.

θ = arccos( ((i + j) · (i + 3j)) / (|i + j| |i + 3j|) ) = arccos( 4 / (√2 √10) ) = arccos( 2√5/5 ) ≈ 0.463648
Solution 4.4
First consider the case that both b and c are orthogonal to a. b + c is the diagonal of the parallelogram defined by b and c (see Figure 4.13). Since a is orthogonal to each of these vectors, taking the cross product of a with these vectors has the effect of rotating the vectors through π/2 radians about a and multiplying their length by |a|. Note that a × (b + c) is the diagonal of the parallelogram defined by a × b and a × c. Thus we see that the distributive law holds when a is orthogonal to both b and c:

a × (b + c) = a × b + a × c.

Figure 4.13: The Distributive Law for the Cross Product

Now consider two arbitrary vectors a and b. We can write b = b_⊥ + b_∥, where b_⊥ is orthogonal to a and b_∥ is parallel to a (see Figure 4.14).

Figure 4.14: The Vector b Written as a Sum of Components Orthogonal and Parallel to a

By the definition of the cross product,

a × b = |a| |b| sin θ n̂.

Note that

|b_⊥| = |b| sin θ,

and that a × b_⊥ is a vector in the same direction as a × b. Thus we see that

a × b = |a| |b| sin θ n̂
      = |a| ( sin θ |b| ) n̂
      = |a| |b_⊥| n̂
      = |a| |b_⊥| sin(π/2) n̂
a × b = a × b_⊥.

Now we are prepared to prove the distributive law for arbitrary b and c.

a × (b + c) = a × (b_⊥ + b_∥ + c_⊥ + c_∥)
            = a × ((b + c)_⊥ + (b + c)_∥)
            = a × (b + c)_⊥
            = a × b_⊥ + a × c_⊥
            = a × b + a × c
Solution 4.5
We know that

i × j = k, j × k = i, k × i = j,

and that

i × i = j × j = k × k = 0.

Now we write a and b in terms of their rectangular components and use the distributive law to expand the cross product.

a × b = (a_1 i + a_2 j + a_3 k) × (b_1 i + b_2 j + b_3 k)
      = a_1 i × (b_1 i + b_2 j + b_3 k) + a_2 j × (b_1 i + b_2 j + b_3 k) + a_3 k × (b_1 i + b_2 j + b_3 k)
      = a_1 b_2 k + a_1 b_3 (−j) + a_2 b_1 (−k) + a_2 b_3 i + a_3 b_1 j + a_3 b_2 (−i)
      = (a_2 b_3 − a_3 b_2) i − (a_1 b_3 − a_3 b_1) j + (a_1 b_2 − a_2 b_1) k

Next we evaluate the determinant.

| i   j   k   |        | a_2 a_3 |        | a_1 a_3 |        | a_1 a_2 |
| a_1 a_2 a_3 |  =  i  | b_2 b_3 |  −  j  | b_1 b_3 |  +  k  | b_1 b_2 |
| b_1 b_2 b_3 |

                 = (a_2 b_3 − a_3 b_2) i − (a_1 b_3 − a_3 b_1) j + (a_1 b_2 − a_2 b_1) k

Thus we see that

a × b = | i   j   k   |
        | a_1 a_2 a_3 |
        | b_1 b_2 b_3 |.
Solution 4.6
The area of the quadrilateral is the area of two triangles. The first triangle is defined by the vector from (1, 1) to (4, 2) and the vector from (1, 1) to (2, 3). The second triangle is defined by the vector from (3, 7) to (4, 2) and the vector from (3, 7) to (2, 3). (See Figure 4.15.) The area of a triangle defined by the two vectors a and b is (1/2)|a × b|. The area of the quadrilateral is then

(1/2)|(3i + j) × (i + 2j)| + (1/2)|(i − 5j) × (−i − 4j)| = (1/2)(5) + (1/2)(9) = 7.

Figure 4.15: Quadrilateral
Solution 4.7
The tetrahedron is determined by the three vectors with tail at (1, 1, 0) and heads at (3, 2, 1), (2, 4, 1) and (1, 2, 5). These are ⟨2, 1, 1⟩, ⟨1, 3, 1⟩ and ⟨0, 1, 5⟩. The volume of the tetrahedron is one sixth the volume of the parallelepiped determined by these vectors. (This is because the volume of a pyramid is (1/3)(base)(height). The base of the tetrahedron is half the area of the base parallelogram and the heights are the same: (1/2)(1/3) = 1/6.) Thus the volume of a tetrahedron determined by three vectors is (1/6)|a · b × c|. The volume of the tetrahedron is

(1/6)|⟨2, 1, 1⟩ · ⟨1, 3, 1⟩ × ⟨0, 1, 5⟩| = (1/6)|⟨2, 1, 1⟩ · ⟨14, −5, 1⟩| = 24/6 = 4.
Solution 4.8
The two vectors with tails at (1, 2, 3) and heads at (2, 3, 1) and (3, 1, 2) are parallel to the plane. Taking the cross product of these two vectors gives us a vector that is orthogonal to the plane:

⟨1, 1, −2⟩ × ⟨2, −1, −1⟩ = ⟨−3, −3, −3⟩.

We see that the plane is orthogonal to the vector ⟨1, 1, 1⟩ and passes through the point (1, 2, 3). The equation of the plane is

⟨1, 1, 1⟩ · ⟨x, y, z⟩ = ⟨1, 1, 1⟩ · ⟨1, 2, 3⟩,
x + y + z = 6.

Consider the vector with tail at (1, 2, 3) and head at (2, 3, 5). The magnitude of the dot product of this vector with the unit normal vector gives the distance from the plane:

| ⟨1, 1, 2⟩ · ⟨1, 1, 1⟩ / |⟨1, 1, 1⟩| | = 4/√3 = 4√3/3.
Part II
Calculus
Chapter 5
Differential Calculus
5.1 Limits of Functions
Definition of a Limit. If the value of the function y(x) gets arbitrarily close to ψ as x approaches the point ξ, then we say that the limit of the function as x approaches ξ is equal to ψ. This is written:

lim_{x→ξ} y(x) = ψ

To make the notion of "arbitrarily close" precise: for any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all 0 < |x − ξ| < δ. That is, there is an interval surrounding the point x = ξ for which the function is within ε of ψ. See Figure 5.1. Note that the interval surrounding x = ξ is a deleted neighborhood; that is, it does not contain the point x = ξ. Thus the value of the function at x = ξ need not be equal to ψ for the limit to exist. Indeed the function need not even be defined at x = ξ.
To prove that a function has a limit at a point ξ we first bound |y(x) − ψ| in terms of δ for values of x satisfying 0 < |x − ξ| < δ. Denote this upper bound by u(δ). Then for an arbitrary ε > 0, we determine a δ > 0 such that the upper bound u(δ), and hence |y(x) − ψ|, is less than ε.

Figure 5.1: The δ neighborhood of x = ξ such that |y(x) − ψ| < ε.
Example 5.1.1 Show that

lim_{x→1} x^2 = 1.

Consider any ε > 0. We need to show that there exists a δ > 0 such that |x^2 − 1| < ε for all |x − 1| < δ. First we obtain a bound on |x^2 − 1|:

|x^2 − 1| = |(x − 1)(x + 1)|
          = |x − 1| |x + 1|
          < δ |x + 1|
          = δ |(x − 1) + 2|
          < δ(δ + 2)

Now we choose a positive δ such that

δ(δ + 2) = ε.

We see that

δ = √(1 + ε) − 1

is positive and satisfies the criterion that |x^2 − 1| < ε for all 0 < |x − 1| < δ. Thus the limit exists.
Note that the value of the function y(ξ) need not be equal to lim_{x→ξ} y(x). This is illustrated in Example 5.1.2.
Example 5.1.2 Consider the function

y(x) = { 1 for x ∈ Z,
         0 for x ∉ Z.

For what values of ξ does lim_{x→ξ} y(x) exist?
First consider ξ ∉ Z. Then there exists an open neighborhood a < ξ < b around ξ such that y(x) is identically zero for x ∈ (a, b). Then trivially, lim_{x→ξ} y(x) = 0.
Now consider ξ ∈ Z. Consider any ε > 0. If 0 < |x − ξ| < 1 then |y(x) − 0| = 0 < ε. Thus we see that lim_{x→ξ} y(x) = 0.
Thus, regardless of the value of ξ, lim_{x→ξ} y(x) = 0.
Left and Right Limits. With the notation lim_{x→ξ+} y(x) we denote the right limit of y(x). This is the limit as x approaches ξ from above. Mathematically: lim_{x→ξ+} y(x) = ψ exists if for any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all 0 < x − ξ < δ. The left limit lim_{x→ξ−} y(x) is defined analogously.
Example 5.1.3 Consider the function sin x / |x|, defined for x ≠ 0. (See Figure 5.2.) The left and right limits exist as x approaches zero:

lim_{x→0+} sin x / |x| = 1,  lim_{x→0−} sin x / |x| = −1

However the limit

lim_{x→0} sin x / |x|

does not exist.

Figure 5.2: Plot of sin(x)/|x|.
Properties of Limits. Let lim_{x→ξ} u(x) and lim_{x→ξ} v(x) exist.

lim_{x→ξ} (a u(x) + b v(x)) = a lim_{x→ξ} u(x) + b lim_{x→ξ} v(x).
lim_{x→ξ} (u(x) v(x)) = ( lim_{x→ξ} u(x) ) ( lim_{x→ξ} v(x) ).
lim_{x→ξ} (u(x)/v(x)) = ( lim_{x→ξ} u(x) ) / ( lim_{x→ξ} v(x) ) if lim_{x→ξ} v(x) ≠ 0.
Example 5.1.4 Prove that if lim_{x→ξ} u(x) = μ and lim_{x→ξ} v(x) = ν exist, then

lim_{x→ξ} (u(x)v(x)) = ( lim_{x→ξ} u(x) ) ( lim_{x→ξ} v(x) ).

Assume that μ and ν are nonzero. (The cases where one or both are zero are similar and simpler.)

|u(x)v(x) − μν| = |uv − (uν − uν + μν)|
                = |u(v − ν) + ν(u − μ)|
                ≤ |u| |v − ν| + |u − μ| |ν|

A sufficient condition for |u(x)v(x) − μν| < ε is

|u − μ| < ε/(2|ν|) and |v − ν| < ε / ( 2( |μ| + ε/(2|ν|) ) ).

Since the two right sides of the inequalities are positive, there exist δ_1 > 0 and δ_2 > 0 such that the first inequality is satisfied for all 0 < |x − ξ| < δ_1 and the second inequality is satisfied for all 0 < |x − ξ| < δ_2. By choosing δ to be the smaller of δ_1 and δ_2 we see that

|u(x)v(x) − μν| < ε for all 0 < |x − ξ| < δ.

Thus

lim_{x→ξ} (u(x)v(x)) = ( lim_{x→ξ} u(x) ) ( lim_{x→ξ} v(x) ) = μν.
Result 5.1.1 Definition of a Limit. The statement

lim_{x→ξ} y(x) = ψ

means that y(x) gets arbitrarily close to ψ as x approaches ξ. For any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all x in the neighborhood 0 < |x − ξ| < δ. The left and right limits,

lim_{x→ξ−} y(x) = ψ and lim_{x→ξ+} y(x) = ψ,

denote the limiting value as x approaches ξ respectively from below and above. The neighborhoods are respectively −δ < x − ξ < 0 and 0 < x − ξ < δ.
Properties of Limits. Let lim_{x→ξ} u(x) and lim_{x→ξ} v(x) exist.

lim_{x→ξ} (a u(x) + b v(x)) = a lim_{x→ξ} u(x) + b lim_{x→ξ} v(x).
lim_{x→ξ} (u(x)v(x)) = ( lim_{x→ξ} u(x) ) ( lim_{x→ξ} v(x) ).
lim_{x→ξ} (u(x)/v(x)) = ( lim_{x→ξ} u(x) ) / ( lim_{x→ξ} v(x) ) if lim_{x→ξ} v(x) ≠ 0.
5.2 Continuous Functions
Definition of Continuity. A function y(x) is said to be continuous at x = ξ if the value of the function is equal to its limit, that is, lim_{x→ξ} y(x) = y(ξ). Note that this one condition is actually three conditions: y(ξ) is defined, lim_{x→ξ} y(x) exists and lim_{x→ξ} y(x) = y(ξ). A function is continuous if it is continuous at each point in its domain. A function is continuous on the closed interval [a, b] if the function is continuous for each point x ∈ (a, b) and lim_{x→a+} y(x) = y(a) and lim_{x→b−} y(x) = y(b).
Discontinuous Functions. If a function is not continuous at a point it is called discontinuous at that point. If lim_{x→ξ} y(x) exists but is not equal to y(ξ), then the function has a removable discontinuity. It is thus named because we could define a continuous function

z(x) = { y(x) for x ≠ ξ,
         lim_{x→ξ} y(x) for x = ξ,

to remove the discontinuity. If both the left and right limits of a function at a point exist, but are not equal, then the function has a jump discontinuity at that point. If either the left or right limit of a function does not exist, then the function is said to have an infinite discontinuity at that point.
Example 5.2.1 sin x / x has a removable discontinuity at x = 0. The Heaviside function,

H(x) = { 0 for x < 0,
         1/2 for x = 0,
         1 for x > 0,

has a jump discontinuity at x = 0. 1/x has an infinite discontinuity at x = 0. See Figure 5.3.

Figure 5.3: A Removable Discontinuity, a Jump Discontinuity and an Infinite Discontinuity
Properties of Continuous Functions.
Arithmetic. If u(x) and v(x) are continuous at x = ξ then u(x) ± v(x) and u(x)v(x) are continuous at x = ξ. u(x)/v(x) is continuous at x = ξ if v(ξ) ≠ 0.
Function Composition. If v(x) is continuous at x = ξ and u(x) is continuous at x = μ = v(ξ), then u(v(x)) is continuous at x = ξ. The composition of continuous functions is a continuous function.
Boundedness. A function which is continuous on a closed interval is bounded on that closed interval.
Nonzero in a Neighborhood. If y(x) is continuous at ξ and y(ξ) ≠ 0 then there exists a neighborhood (ξ − ε, ξ + ε), ε > 0, of the point ξ such that y(x) ≠ 0 for x ∈ (ξ − ε, ξ + ε).
Intermediate Value Theorem. Let u(x) be continuous on [a, b]. If u(a) ≤ μ ≤ u(b) then there exists ξ ∈ [a, b] such that u(ξ) = μ. This is known as the intermediate value theorem. A corollary of this is that if u(a) and u(b) are of opposite sign then u(x) has at least one zero on the interval (a, b).
Maxima and Minima. If u(x) is continuous on [a, b] then u(x) has a maximum and a minimum on [a, b]. That is, there is at least one point ξ ∈ [a, b] such that u(ξ) ≥ u(x) for all x ∈ [a, b] and there is at least one point η ∈ [a, b] such that u(η) ≤ u(x) for all x ∈ [a, b].
Piecewise Continuous Functions. A function is piecewise continuous on an interval if the function is bounded on the interval and the interval can be divided into a finite number of intervals on each of which the function is continuous. For example, the greatest integer function, ⌊x⌋, is piecewise continuous. (⌊x⌋ is defined as the greatest integer less than or equal to x.) See Figure 5.4 for graphs of two piecewise continuous functions.
Uniform Continuity. Consider a function f(x) that is continuous on an interval. This means that for any point ξ in the interval and any positive ε there exists a δ > 0 such that |f(x) − f(ξ)| < ε for all 0 < |x − ξ| < δ. In general, this value of δ depends on both ξ and ε. If δ can be chosen so it is a function of ε alone and independent of ξ, then the function is said to be uniformly continuous on the interval. A sufficient condition for uniform continuity is that the function is continuous on a closed interval.

Figure 5.4: Piecewise Continuous Functions
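The zero-finding corollary of the intermediate value theorem is the basis of the bisection method. A minimal sketch (ours, not from the text):

```python
def bisect(f, a, b, tol=1e-12):
    """Find a zero of continuous f on [a, b], given f(a), f(b) of opposite sign."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm      # the sign change is in [a, m]
        else:
            a, fa = m, fm      # the sign change is in [m, b]
    return (a + b) / 2

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)
print(round(root, 6))  # 1.414214
```

Each halving of the interval keeps a sign change inside it, so by the corollary a zero is always trapped between a and b.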
5.3 The Derivative
Consider a function y(x) on the interval (x . . . x + Δx) for some Δx > 0. We define the increment Δy = y(x + Δx) − y(x). The average rate of change (average velocity) of the function on the interval is Δy/Δx. The average rate of change is the slope of the secant line that passes through the points (x, y(x)) and (x + Δx, y(x + Δx)). See Figure 5.5.

Figure 5.5: The increments Δx and Δy.

If the slope of the secant line has a limit as Δx approaches zero then we call this slope the derivative or instantaneous rate of change of the function at the point x. We denote the derivative by dy/dx, which is a nice notation as the derivative is the limit of Δy/Δx as Δx → 0:

dy/dx ≡ lim_{Δx→0} ( y(x + Δx) − y(x) ) / Δx.

Δx may approach zero from below or above. It is common to denote the derivative dy/dx by (d/dx) y, y′(x), y′ or Dy.
A function is said to be differentiable at a point if the derivative exists there. Note that differentiability implies continuity, but not vice versa.
Example 5.3.1 Consider the derivative of y(x) = x^2 at the point x = 1.

y′(1) ≡ lim_{Δx→0} ( y(1 + Δx) − y(1) ) / Δx
      = lim_{Δx→0} ( (1 + Δx)^2 − 1 ) / Δx
      = lim_{Δx→0} (2 + Δx)
      = 2

Figure 5.6 shows the secant lines approaching the tangent line as Δx approaches zero from above and below.
Example 5.3.2 We can compute the derivative of y(x) = x^2 at an arbitrary point x.

d/dx (x^2) = lim_{Δx→0} ( (x + Δx)^2 − x^2 ) / Δx
           = lim_{Δx→0} (2x + Δx)
           = 2x

Figure 5.6: Secant lines and the tangent to x^2 at x = 1.
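The limit definition suggests a numerical approximation: for small Δx the difference quotient is close to the derivative. A sketch (ours):

```python
def difference_quotient(y, x, dx):
    """Slope of the secant line: (y(x + dx) - y(x)) / dx."""
    return (y(x + dx) - y(x)) / dx

y = lambda x: x**2
# As dx shrinks, the quotient approaches y'(1) = 2,
# just as the secant lines in Figure 5.6 approach the tangent.
for dx in (0.1, 0.01, 0.001):
    print(difference_quotient(y, 1.0, dx))
```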
Properties. Let u(x) and v(x) be differentiable. Let a and b be constants. Some fundamental properties of derivatives are:

d/dx (au + bv) = a du/dx + b dv/dx                  Linearity
d/dx (uv) = (du/dx) v + u (dv/dx)                   Product Rule
d/dx (u/v) = ( v du/dx − u dv/dx ) / v^2            Quotient Rule
d/dx (u^a) = a u^{a−1} du/dx                        Power Rule
d/dx (u(v(x))) = (du/dv)(dv/dx) = u′(v(x)) v′(x)    Chain Rule

These can be proved by using the definition of differentiation.
Example 5.3.3 Prove the quotient rule for derivatives.

d/dx (u/v) = lim_{Δx→0} ( u(x + Δx)/v(x + Δx) − u(x)/v(x) ) / Δx
           = lim_{Δx→0} ( u(x + Δx)v(x) − u(x)v(x + Δx) ) / ( Δx v(x) v(x + Δx) )
           = lim_{Δx→0} ( u(x + Δx)v(x) − u(x)v(x) − u(x)v(x + Δx) + u(x)v(x) ) / ( Δx v(x) v(x) )
           = lim_{Δx→0} ( (u(x + Δx) − u(x)) v(x) − u(x) (v(x + Δx) − v(x)) ) / ( Δx v^2(x) )
           = [ ( lim_{Δx→0} (u(x + Δx) − u(x))/Δx ) v(x) − u(x) ( lim_{Δx→0} (v(x + Δx) − v(x))/Δx ) ] / v^2(x)
           = ( v du/dx − u dv/dx ) / v^2
Trigonometric Functions. Some derivatives of trigonometric and related functions are:

d/dx sin x = cos x             d/dx arcsin x = 1/(1 − x^2)^{1/2}
d/dx cos x = −sin x            d/dx arccos x = −1/(1 − x^2)^{1/2}
d/dx tan x = 1/cos^2 x         d/dx arctan x = 1/(1 + x^2)
d/dx e^x = e^x                 d/dx log x = 1/x
d/dx sinh x = cosh x           d/dx arcsinh x = 1/(x^2 + 1)^{1/2}
d/dx cosh x = sinh x           d/dx arccosh x = 1/(x^2 − 1)^{1/2}
d/dx tanh x = 1/cosh^2 x       d/dx arctanh x = 1/(1 − x^2)
Example 5.3.4 We can evaluate the derivative of x^x by using the identity a^b = e^{b log a}.

d/dx x^x = d/dx e^{x log x}
         = e^{x log x} d/dx (x log x)
         = x^x (1 · log x + x · (1/x))
         = x^x (1 + log x)
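We can check this formula numerically with a centered difference quotient (our own sketch, not part of the text):

```python
import math

def deriv_x_to_x(x):
    """d/dx x^x = x^x (1 + log x), from the worked example."""
    return x**x * (1 + math.log(x))

def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0
print(math.isclose(deriv_x_to_x(x),
                   central_diff(lambda t: t**t, x),
                   rel_tol=1e-6))  # True
```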
Inverse Functions. If we have a function y(x), we can consider x as a function of y, x(y). For example, if y(x) = 8x^3 then x(y) = ∛y / 2; if y(x) = (x + 2)/(x + 1) then x(y) = (2 − y)/(y − 1). The derivative of an inverse function is

d/dy x(y) = 1 / (dy/dx).

Example 5.3.5 The inverse function of y(x) = e^x is x(y) = log y. We can obtain the derivative of the logarithm from the derivative of the exponential. The derivative of the exponential is

dy/dx = e^x.

Thus the derivative of the logarithm is

d/dy log y = d/dy x(y) = 1/(dy/dx) = 1/e^x = 1/y.
5.4 Implicit Differentiation
An explicitly defined function has the form $y = f(x)$. An implicitly defined function has the form $f(x, y) = 0$. A few examples of implicit functions are $x^2 + y^2 - 1 = 0$ and $x + y + \sin(xy) = 0$. Often it is not possible to write an implicit equation in explicit form. This is true of the latter example above. One can calculate the derivative of $y(x)$ in terms of $x$ and $y$ even when $y(x)$ is defined by an implicit equation.
Example 5.4.1 Consider the implicit equation
\[
x^2 - xy - y^2 = 1.
\]
This implicit equation can be solved for the dependent variable.
\[
y(x) = \frac{1}{2}\left(-x \pm \sqrt{5x^2 - 4}\right)
\]
We can differentiate this expression to obtain
\[
y' = \frac{1}{2}\left(-1 \pm \frac{5x}{\sqrt{5x^2 - 4}}\right).
\]
One can obtain the same result without first solving for $y$. If we differentiate the implicit equation, we obtain
\[
2x - y - x\frac{dy}{dx} - 2y\frac{dy}{dx} = 0.
\]
We can solve this equation for $\frac{dy}{dx}$.
\[
\frac{dy}{dx} = \frac{2x - y}{x + 2y}
\]
We can differentiate this expression to obtain the second derivative of $y$.
\[
\frac{d^2 y}{dx^2} = \frac{(x+2y)(2 - y') - (2x-y)(1 + 2y')}{(x+2y)^2} = \frac{5(y - x y')}{(x+2y)^2}
\]
Substitute in the expression for $y'$.
\[
= -\frac{10(x^2 - xy - y^2)}{(x+2y)^3}
\]
Use the original implicit equation.
\[
= -\frac{10}{(x+2y)^3}
\]
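These formulas can be verified numerically on the branch $y(x) = \frac{1}{2}(-x + \sqrt{5x^2-4})$; a small sketch at the point $(2, 1)$, which lies on the curve:

```python
import math

# Check the implicit-differentiation results on the branch
# y(x) = (1/2)(-x + sqrt(5 x^2 - 4)) of x^2 - x y - y^2 = 1.
def y(x):
    return 0.5 * (-x + math.sqrt(5 * x**2 - 4))

x = 2.0
assert abs(x**2 - x * y(x) - y(x)**2 - 1) < 1e-12  # point lies on the curve

h = 1e-5
dy_numeric = (y(x + h) - y(x - h)) / (2 * h)
dy_implicit = (2 * x - y(x)) / (x + 2 * y(x))       # (2x - y)/(x + 2y)
d2y_numeric = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
d2y_implicit = -10 / (x + 2 * y(x))**3              # -10/(x + 2y)^3
print(dy_numeric, dy_implicit, d2y_numeric, d2y_implicit)
```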
5.5 Maxima and Minima
A differentiable function is increasing where $f'(x) > 0$, decreasing where $f'(x) < 0$ and stationary where $f'(x) = 0$.
A function $f(x)$ has a relative maximum at a point $x = \xi$ if there exists a neighborhood around $\xi$ such that $f(x) \leq f(\xi)$ for $x \in (\xi - \delta, \xi + \delta)$, $\delta > 0$. The relative minimum is defined analogously. Note that this definition does not require that the function be differentiable, or even continuous. We refer to relative maxima and minima collectively as relative extrema.
Relative Extrema and Stationary Points. If $f(x)$ is differentiable and $f(\xi)$ is a relative extremum then $x = \xi$ is a stationary point, $f'(\xi) = 0$. We can prove this using left and right limits. Assume that $f(\xi)$ is a relative maximum. Then there is a neighborhood $(\xi - \delta, \xi + \delta)$, $\delta > 0$ for which $f(x) \leq f(\xi)$. Since $f(x)$ is differentiable the derivative at $x = \xi$,
\[
f'(\xi) = \lim_{\Delta x \to 0} \frac{f(\xi + \Delta x) - f(\xi)}{\Delta x},
\]
exists. This in turn means that the left and right limits exist and are equal. Since $f(x) \leq f(\xi)$ for $\xi - \delta < x < \xi$ the left limit is non-negative,
\[
f'(\xi) = \lim_{\Delta x \to 0^-} \frac{f(\xi + \Delta x) - f(\xi)}{\Delta x} \geq 0.
\]
Since $f(x) \leq f(\xi)$ for $\xi < x < \xi + \delta$ the right limit is non-positive,
\[
f'(\xi) = \lim_{\Delta x \to 0^+} \frac{f(\xi + \Delta x) - f(\xi)}{\Delta x} \leq 0.
\]
Thus we have $0 \leq f'(\xi) \leq 0$ which implies that $f'(\xi) = 0$.
It is not true that all stationary points are relative extrema. That is, $f'(\xi) = 0$ does not imply that $x = \xi$ is an extremum. Consider the function $f(x) = x^3$. $x = 0$ is a stationary point since $f'(x) = 3x^2$, $f'(0) = 0$. However, $x = 0$ is neither a relative maximum nor a relative minimum.
It is also not true that all relative extrema are stationary points. Consider the function $f(x) = |x|$. The point $x = 0$ is a relative minimum, but the derivative at that point is undefined.
First Derivative Test. Let $f(x)$ be differentiable and $f'(\xi) = 0$.
If $f'(x)$ changes sign from positive to negative as we pass through $x = \xi$ then the point is a relative maximum.
If $f'(x)$ changes sign from negative to positive as we pass through $x = \xi$ then the point is a relative minimum.
If $f'(x)$ is not identically zero in a neighborhood of $x = \xi$ and it does not change sign as we pass through the point then $x = \xi$ is not a relative extremum.
Example 5.5.1 Consider $y = x^2$ and the point $x = 0$. The function is differentiable. The derivative, $y' = 2x$, vanishes at $x = 0$. Since $y'(x)$ is negative for $x < 0$ and positive for $x > 0$, the point $x = 0$ is a relative minimum. See Figure 5.7.
Example 5.5.2 Consider $y = \cos x$ and the point $x = 0$. The function is differentiable. The derivative, $y' = -\sin x$, is positive for $-\pi < x < 0$ and negative for $0 < x < \pi$. Since the sign of $y'$ goes from positive to negative, $x = 0$ is a relative maximum. See Figure 5.7.
Example 5.5.3 Consider $y = x^3$ and the point $x = 0$. The function is differentiable. The derivative, $y' = 3x^2$, is positive for $x < 0$ and positive for $0 < x$. Since $y'$ is not identically zero and the sign of $y'$ does not change, $x = 0$ is not a relative extremum. See Figure 5.7.
Figure 5.7: Graphs of $x^2$, $\cos x$ and $x^3$.

Concavity. If the portion of a curve in some neighborhood of a point lies above the tangent line through that point, the curve is said to be concave upward. If it lies below the tangent it is concave downward. If a function is twice differentiable then $f''(x) > 0$ where it is concave upward and $f''(x) < 0$ where it is concave downward. Note that $f''(x) > 0$ is a sufficient, but not a necessary condition for a curve to be concave upward at a point. A curve may be concave upward at a point where the second derivative vanishes. A point where the curve changes concavity is called a point of inflection. At such a point the second derivative vanishes, $f''(x) = 0$. For twice continuously differentiable functions, $f''(x) = 0$ is a necessary but not a sufficient condition for an inflection point. The second derivative may vanish at places which are not inflection points. See Figure 5.8.

Figure 5.8: Concave Upward, Concave Downward and an Inflection Point.
Second Derivative Test. Let $f(x)$ be twice differentiable and let $x = \xi$ be a stationary point, $f'(\xi) = 0$.
If $f''(\xi) < 0$ then the point is a relative maximum.
If $f''(\xi) > 0$ then the point is a relative minimum.
If $f''(\xi) = 0$ then the test fails.
Example 5.5.4 Consider the function $f(x) = \cos x$ and the point $x = 0$. The derivatives of the function are $f'(x) = -\sin x$, $f''(x) = -\cos x$. The point $x = 0$ is a stationary point, $f'(0) = -\sin(0) = 0$. Since the second derivative is negative there, $f''(0) = -\cos(0) = -1$, the point is a relative maximum.
Example 5.5.5 Consider the function $f(x) = x^4$ and the point $x = 0$. The derivatives of the function are $f'(x) = 4x^3$, $f''(x) = 12x^2$. The point $x = 0$ is a stationary point. Since the second derivative also vanishes at that point the second derivative test fails. One must use the first derivative test to determine that $x = 0$ is a relative minimum.
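The two examples can be reproduced numerically; a minimal sketch of the second derivative test using a central second difference:

```python
import math

# Central second difference (f(x+h) - 2 f(x) + f(x-h))/h^2.
# For cos x at 0 it gives ~ -1 (relative maximum); for x^4 at 0 it gives
# ~ 0, and the second derivative test fails, as in Example 5.5.5.
def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

print(second_derivative(math.cos, 0.0))
print(second_derivative(lambda t: t**4, 0.0))
```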
5.6 Mean Value Theorems
Rolle's Theorem. If $f(x)$ is continuous in $[a, b]$, differentiable in $(a, b)$ and $f(a) = f(b) = 0$ then there exists a point $\xi \in (a, b)$ such that $f'(\xi) = 0$. See Figure 5.9.

Figure 5.9: Rolle's Theorem.

To prove this we consider two cases. First we have the trivial case that $f(x) \equiv 0$. If $f(x)$ is not identically zero then continuity implies that it must have a nonzero relative maximum or minimum in $(a, b)$. Let $x = \xi$ be one of these relative extrema. Since $f(x)$ is differentiable, $x = \xi$ must be a stationary point, $f'(\xi) = 0$.

Theorem of the Mean. If $f(x)$ is continuous in $[a, b]$ and differentiable in $(a, b)$ then there exists a point $x = \xi$ such that
\[
f'(\xi) = \frac{f(b) - f(a)}{b - a}.
\]
That is, there is a point where the instantaneous velocity is equal to the average velocity on the interval.

Figure 5.10: Theorem of the Mean.

We prove this theorem by applying Rolle's theorem. Consider the new function
\[
g(x) = f(x) - f(a) - \frac{f(b) - f(a)}{b - a}(x - a)
\]
Note that $g(a) = g(b) = 0$, so it satisfies the conditions of Rolle's theorem. There is a point $x = \xi$ such that $g'(\xi) = 0$. We differentiate the expression for $g(x)$ and substitute in $x = \xi$ to obtain the result.
\[
g'(x) = f'(x) - \frac{f(b) - f(a)}{b - a}
\]
\[
g'(\xi) = f'(\xi) - \frac{f(b) - f(a)}{b - a} = 0
\]
\[
f'(\xi) = \frac{f(b) - f(a)}{b - a}
\]
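The point $\xi$ guaranteed by the theorem can be located numerically; a sketch for $f(x) = x^3$ on $[0, 1]$, where $f'(x) = 3x^2$ is increasing so simple bisection applies:

```python
# Solve f'(xi) = (f(b) - f(a))/(b - a) for f(x) = x^3 on [0, 1]
# by bisection; the exact answer is 1/sqrt(3).
def f(x):  return x**3
def df(x): return 3 * x**2

a, b = 0.0, 1.0
slope = (f(b) - f(a)) / (b - a)
lo, hi = a, b
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if df(mid) < slope:
        lo = mid
    else:
        hi = mid
xi = 0.5 * (lo + hi)
print(xi)  # ~ 0.57735
```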
Generalized Theorem of the Mean. If $f(x)$ and $g(x)$ are continuous in $[a, b]$ and differentiable in $(a, b)$, then there exists a point $x = \xi$ such that
\[
\frac{f'(\xi)}{g'(\xi)} = \frac{f(b) - f(a)}{g(b) - g(a)}.
\]
We have assumed that $g(a) \neq g(b)$ so that the denominator does not vanish and that $f'(x)$ and $g'(x)$ are not simultaneously zero which would produce an indeterminate form. Note that this theorem reduces to the regular theorem of the mean when $g(x) = x$. The proof of the theorem is similar to that for the theorem of the mean.
Taylor's Theorem of the Mean. If $f(x)$ is $n + 1$ times continuously differentiable in $(a, b)$ then there exists a point $x = \xi \in (a, b)$ such that
\[
f(b) = f(a) + (b-a)f'(a) + \frac{(b-a)^2}{2!}f''(a) + \cdots + \frac{(b-a)^n}{n!}f^{(n)}(a) + \frac{(b-a)^{n+1}}{(n+1)!}f^{(n+1)}(\xi). \tag{5.1}
\]
For the case $n = 0$, the formula is
\[
f(b) = f(a) + (b-a)f'(\xi),
\]
which is just a rearrangement of the terms in the theorem of the mean,
\[
f'(\xi) = \frac{f(b) - f(a)}{b - a}.
\]
5.6.1 Application: Using Taylor's Theorem to Approximate Functions.
One can use Taylor's theorem to approximate functions with polynomials. Consider an infinitely differentiable function $f(x)$ and a point $x = a$. Substituting $x$ for $b$ into Equation 5.1 we obtain,
\[
f(x) = f(a) + (x-a)f'(a) + \frac{(x-a)^2}{2!}f''(a) + \cdots + \frac{(x-a)^n}{n!}f^{(n)}(a) + \frac{(x-a)^{n+1}}{(n+1)!}f^{(n+1)}(\xi).
\]
If the last term in the sum is small then we can approximate our function with an $n$-th order polynomial.
\[
f(x) \approx f(a) + (x-a)f'(a) + \frac{(x-a)^2}{2!}f''(a) + \cdots + \frac{(x-a)^n}{n!}f^{(n)}(a)
\]
The last term in Equation 5.1 is called the remainder or the error term,
\[
R_n = \frac{(x-a)^{n+1}}{(n+1)!}f^{(n+1)}(\xi).
\]
Since the function is infinitely differentiable, $f^{(n+1)}(\xi)$ exists and is bounded. Therefore we note that the error must vanish as $x \to a$ because of the $(x-a)^{n+1}$ factor. We therefore suspect that our approximation would be a good one if $x$ is close to $a$. Also note that $n!$ eventually grows faster than $(x-a)^n$,
\[
\lim_{n \to \infty} \frac{(x-a)^n}{n!} = 0.
\]
So if the derivative term, $f^{(n+1)}(\xi)$, does not grow too quickly, the error for a certain value of $x$ will get smaller with increasing $n$ and the polynomial will become a better approximation of the function. (It is also possible that the derivative factor grows very quickly and the approximation gets worse with increasing $n$.)
Example 5.6.1 Consider the function $f(x) = e^x$. We want a polynomial approximation of this function near the point $x = 0$. Since the derivative of $e^x$ is $e^x$, the value of all the derivatives at $x = 0$ is $f^{(n)}(0) = e^0 = 1$. Taylor's theorem thus states that
\[
e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \frac{x^{n+1}}{(n+1)!} e^{\xi},
\]
for some $\xi \in (0, x)$. The first few polynomial approximations of the exponential about the point $x = 0$ are
\[
f_1(x) = 1
\]
\[
f_2(x) = 1 + x
\]
\[
f_3(x) = 1 + x + \frac{x^2}{2}
\]
\[
f_4(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6}
\]
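These partial sums are easy to generate programmatically; a minimal sketch:

```python
import math

# n-th order Taylor polynomial of e^x about 0, evaluated by summing
# x^k/k!; the error shrinks rapidly with n for |x| not too large.
def exp_taylor(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 0.5
for n in [1, 2, 3, 8]:
    print(n, exp_taylor(x, n), abs(exp_taylor(x, n) - math.exp(x)))
```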
The four approximations are graphed in Figure 5.11.
Figure 5.11: Four Finite Taylor Series Approximations of $e^x$
Note that for the range of x we are looking at, the approximations become more accurate as the number of
terms increases.
Example 5.6.2 Consider the function $f(x) = \cos x$. We want a polynomial approximation of this function near the point $x = 0$. The first few derivatives of $f$ are
\[
f(x) = \cos x, \quad f'(x) = -\sin x, \quad f''(x) = -\cos x, \quad f'''(x) = \sin x, \quad f^{(4)}(x) = \cos x
\]
It's easy to pick out the pattern here,
\[
f^{(n)}(x) = \begin{cases} (-1)^{n/2}\cos x & \text{for even } n, \\ (-1)^{(n+1)/2}\sin x & \text{for odd } n. \end{cases}
\]
Since $\cos(0) = 1$ and $\sin(0) = 0$ the $n$-term approximation of the cosine is,
\[
\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots + (-1)^{n-1}\frac{x^{2(n-1)}}{(2(n-1))!} + \frac{x^{2n}}{(2n)!}\cos\xi.
\]
Here are graphs of the one, two, three and four term approximations.
Figure 5.12: Taylor Series Approximations of $\cos x$
Note that for the range of $x$ we are looking at, the approximations become more accurate as the number of terms increases. Consider the ten term approximation of the cosine about $x = 0$,
\[
\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots - \frac{x^{18}}{18!} + \frac{x^{20}}{20!}\cos\xi.
\]
Note that for any value of $\xi$, $|\cos\xi| \leq 1$. Therefore the absolute value of the error term satisfies,
\[
|R| = \left|\frac{x^{20}}{20!}\cos\xi\right| \leq \frac{|x|^{20}}{20!}.
\]
$x^{20}/20!$ is plotted in Figure 5.13.
Note that the error is very small for $x < 6$, fairly small but non-negligible for $x \approx 7$ and large for $x > 8$. The ten term approximation of the cosine, plotted below, behaves just as we would predict. The error is very small until it becomes non-negligible at $x \approx 7$ and large at $x \approx 8$.
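The bound $|x|^{20}/20!$ is cheap to tabulate; a quick sketch:

```python
import math

# Error bound |x|**20/20! for the ten-term cosine approximation:
# tiny near x = 6, a few percent near x = 7, order one near x = 8.
def bound(x):
    return abs(x)**20 / math.factorial(20)

for x in [6, 7, 8]:
    print(x, bound(x))
```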
Figure 5.13: Plot of $x^{20}/20!$.

Figure 5.14: Ten Term Taylor Series Approximation of $\cos x$

Example 5.6.3 Consider the function $f(x) = \log x$. We want a polynomial approximation of this function near the point $x = 1$. The first few derivatives of $f$ are
\[
f(x) = \log x, \quad f'(x) = \frac{1}{x}, \quad f''(x) = -\frac{1}{x^2}, \quad f'''(x) = \frac{2}{x^3}, \quad f^{(4)}(x) = -\frac{6}{x^4}
\]
The derivatives evaluated at $x = 1$ are
\[
f(1) = 0, \qquad f^{(n)}(1) = (-1)^{n-1}(n-1)!, \quad \text{for } n \geq 1.
\]
By Taylor's theorem of the mean we have,
\[
\log x = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + \cdots + (-1)^{n-1}\frac{(x-1)^n}{n} + (-1)^n \frac{(x-1)^{n+1}}{n+1}\,\frac{1}{\xi^{n+1}}.
\]
Below are plots of the 2, 4, 10 and 50 term approximations.
Figure 5.15: The 2, 4, 10 and 50 Term Approximations of $\log x$
Note that the approximation gets better on the interval (0, 2) and worse outside this interval as the number
of terms increases. The Taylor series converges to log x only on this interval.
5.6.2 Application: Finite Difference Schemes
Example 5.6.4 Suppose you sample a function at the discrete points $n\Delta x$, $n \in \mathbb{Z}$. In Figure 5.16 we sample the function $f(x) = \sin x$ on the interval $[-4, 4]$ with $\Delta x = 1/4$ and plot the data points.
We wish to approximate the derivative of the function on the grid points using only the value of the function on those discrete points. From the definition of the derivative, one is led to the formula
\[
f'(x) \approx \frac{f(x + \Delta x) - f(x)}{\Delta x}. \tag{5.2}
\]
Figure 5.16: Sampling of $\sin x$
Taylor's theorem states that
\[
f(x + \Delta x) = f(x) + \Delta x\, f'(x) + \frac{\Delta x^2}{2} f''(\xi).
\]
Substituting this expression into our formula for approximating the derivative we obtain
\[
\frac{f(x+\Delta x) - f(x)}{\Delta x} = \frac{f(x) + \Delta x\, f'(x) + \frac{\Delta x^2}{2}f''(\xi) - f(x)}{\Delta x} = f'(x) + \frac{\Delta x}{2} f''(\xi).
\]
Thus we see that the error in our approximation of the first derivative is $\frac{\Delta x}{2} f''(\xi)$. Since the error has a linear factor of $\Delta x$, we call this a first order accurate method. Equation 5.2 is called the forward difference scheme for calculating the first derivative. Figure 5.17 shows a plot of the value of this scheme for the function $f(x) = \sin x$ and $\Delta x = 1/4$. The first derivative of the function $f'(x) = \cos x$ is shown for comparison.
Another scheme for approximating the first derivative is the centered difference scheme,
\[
f'(x) \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x}.
\]
Figure 5.17: The Forward Difference Scheme Approximation of the Derivative
Expanding the numerator using Taylor's theorem,
\[
\frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x}
= \frac{\left(f(x) + \Delta x f'(x) + \frac{\Delta x^2}{2}f''(x) + \frac{\Delta x^3}{6}f'''(\xi)\right) - \left(f(x) - \Delta x f'(x) + \frac{\Delta x^2}{2}f''(x) - \frac{\Delta x^3}{6}f'''(\psi)\right)}{2\Delta x}
= f'(x) + \frac{\Delta x^2}{12}\left(f'''(\xi) + f'''(\psi)\right).
\]
The error in the approximation is quadratic in $\Delta x$. Therefore this is a second order accurate scheme. Below is a plot of the derivative of the function and the value of this scheme for the function $f(x) = \sin x$ and $\Delta x = 1/4$.
Notice how the centered difference scheme gives a better approximation of the derivative than the forward difference scheme.
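The accuracy claims can be observed directly; a minimal sketch comparing the two schemes for $f(x) = \sin x$ (exact derivative $\cos x$):

```python
import math

# Forward (first order) vs. centered (second order) differences for sin x.
# Shrinking dx by 10 should shrink the forward error ~10x and the
# centered error ~100x.
def forward(f, x, dx):  return (f(x + dx) - f(x)) / dx
def centered(f, x, dx): return (f(x + dx) - f(x - dx)) / (2 * dx)

x, exact = 1.0, math.cos(1.0)
for dx in [0.25, 0.025]:
    print(dx, abs(forward(math.sin, x, dx) - exact),
              abs(centered(math.sin, x, dx) - exact))
```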
Figure 5.18: Centered Difference Scheme Approximation of the Derivative

5.7 L'Hospital's Rule
Some singularities are easy to diagnose. Consider the function $\frac{\cos x}{x}$ at the point $x = 0$. The function evaluates to $\frac{1}{0}$ and is thus discontinuous at that point. Since the numerator and denominator are continuous functions and the denominator vanishes while the numerator does not, the left and right limits as $x \to 0$ do not exist. Thus the function has an infinite discontinuity at the point $x = 0$. More generally, a function which is composed of continuous functions and evaluates to $\frac{a}{0}$ at a point where $a \neq 0$ must have an infinite discontinuity there.
Other singularities require more analysis to diagnose. Consider the functions $\frac{\sin x}{x}$, $\frac{\sin x}{|x|}$ and $\frac{\sin x}{1-\cos x}$ at the point $x = 0$. All three functions evaluate to $\frac{0}{0}$ at that point, but have different kinds of singularities. The first has a removable discontinuity, the second has a finite discontinuity and the third has an infinite discontinuity. See Figure 5.19.
An expression that evaluates to $\frac{0}{0}$, $\frac{\infty}{\infty}$, $0 \cdot \infty$, $\infty - \infty$, $1^\infty$, $0^0$ or $\infty^0$ is called an indeterminate. A function $f(x)$ which is indeterminate at the point $x = \xi$ is singular at that point. The singularity may be a removable discontinuity, a finite discontinuity or an infinite discontinuity depending on the behavior of the function around that point. If $\lim_{x\to\xi} f(x)$ exists, then the function has a removable discontinuity. If the limit does not exist, but the left and right limits do exist, then the function has a finite discontinuity. If either the left or right limit does not exist then the function has an infinite discontinuity.

Figure 5.19: The functions $\frac{\sin x}{x}$, $\frac{\sin x}{|x|}$ and $\frac{\sin x}{1-\cos x}$.
L'Hospital's Rule. Let $f(x)$ and $g(x)$ be differentiable and $f(\xi) = g(\xi) = 0$. Further, let $g(x)$ be nonzero in a deleted neighborhood of $x = \xi$, ($g(x) \neq 0$ for $0 < |x - \xi| < \delta$). Then
\[
\lim_{x\to\xi} \frac{f(x)}{g(x)} = \lim_{x\to\xi} \frac{f'(x)}{g'(x)}.
\]
To prove this, we note that $f(\xi) = g(\xi) = 0$ and apply the generalized theorem of the mean. Note that
\[
\frac{f(x)}{g(x)} = \frac{f(x) - f(\xi)}{g(x) - g(\xi)} = \frac{f'(\psi)}{g'(\psi)}
\]
for some $\psi$ between $\xi$ and $x$. Thus
\[
\lim_{x\to\xi} \frac{f(x)}{g(x)} = \lim_{\psi\to\xi} \frac{f'(\psi)}{g'(\psi)} = \lim_{x\to\xi} \frac{f'(x)}{g'(x)}
\]
provided that the limits exist.
L'Hospital's Rule is also applicable when both functions tend to infinity instead of zero or when the limit point, $\xi$, is at infinity. It is also valid for one-sided limits.
L'Hospital's rule is directly applicable to the indeterminate forms $\frac{0}{0}$ and $\frac{\infty}{\infty}$.
Example 5.7.1 Consider the three functions $\frac{\sin x}{x}$, $\frac{\sin x}{|x|}$ and $\frac{\sin x}{1-\cos x}$ at the point $x = 0$.
\[
\lim_{x\to 0} \frac{\sin x}{x} = \lim_{x\to 0} \frac{\cos x}{1} = 1
\]
Thus $\frac{\sin x}{x}$ has a removable discontinuity at $x = 0$.
\[
\lim_{x\to 0^+} \frac{\sin x}{|x|} = \lim_{x\to 0^+} \frac{\sin x}{x} = 1
\]
\[
\lim_{x\to 0^-} \frac{\sin x}{|x|} = \lim_{x\to 0^-} \frac{\sin x}{-x} = -1
\]
Thus $\frac{\sin x}{|x|}$ has a finite discontinuity at $x = 0$.
\[
\lim_{x\to 0} \frac{\sin x}{1 - \cos x} = \lim_{x\to 0} \frac{\cos x}{\sin x} = \frac{1}{0} = \infty
\]
Thus $\frac{\sin x}{1-\cos x}$ has an infinite discontinuity at $x = 0$.
Example 5.7.2 Let $a$ and $d$ be nonzero.
\[
\lim_{x\to\infty} \frac{ax^2 + bx + c}{dx^2 + ex + f} = \lim_{x\to\infty} \frac{2ax + b}{2dx + e} = \lim_{x\to\infty} \frac{2a}{2d} = \frac{a}{d}
\]
Example 5.7.3 Consider
\[
\lim_{x\to 0} \frac{\cos x - 1}{x \sin x}.
\]
This limit is an indeterminate of the form $\frac{0}{0}$. Applying L'Hospital's rule we see that the limit is equal to
\[
\lim_{x\to 0} \frac{-\sin x}{x\cos x + \sin x}.
\]
This limit is again an indeterminate of the form $\frac{0}{0}$. We apply L'Hospital's rule again.
\[
\lim_{x\to 0} \frac{-\cos x}{-x\sin x + 2\cos x} = -\frac{1}{2}
\]
Thus the value of the original limit is $-\frac{1}{2}$. We could also obtain this result by expanding the functions in Taylor series.
\[
\lim_{x\to 0} \frac{\cos x - 1}{x\sin x} = \lim_{x\to 0} \frac{\left(1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots\right) - 1}{x\left(x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\right)}
= \lim_{x\to 0} \frac{-\frac{x^2}{2} + \frac{x^4}{24} - \cdots}{x^2 - \frac{x^4}{6} + \frac{x^6}{120} - \cdots}
= \lim_{x\to 0} \frac{-\frac{1}{2} + \frac{x^2}{24} - \cdots}{1 - \frac{x^2}{6} + \frac{x^4}{120} - \cdots}
= -\frac{1}{2}
\]
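A numerical check of the limit (the sample points are arbitrary):

```python
import math

# (cos x - 1)/(x sin x) approaches -1/2 as x -> 0.
def f(x):
    return (math.cos(x) - 1) / (x * math.sin(x))

for x in [0.1, 0.01, 0.001]:
    print(x, f(x))
```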
We can apply L'Hospital's Rule to the indeterminate forms $0\cdot\infty$ and $\infty - \infty$ by rewriting the expression in a different form, (perhaps putting the expression over a common denominator). If at first you don't succeed, try, try again. You may have to apply L'Hospital's rule several times to evaluate a limit.
Example 5.7.4
\[
\lim_{x\to 0}\left(\cot x - \frac{1}{x}\right) = \lim_{x\to 0} \frac{x\cos x - \sin x}{x\sin x}
= \lim_{x\to 0} \frac{\cos x - x\sin x - \cos x}{\sin x + x\cos x}
= \lim_{x\to 0} \frac{-x\sin x}{\sin x + x\cos x}
= \lim_{x\to 0} \frac{-x\cos x - \sin x}{\cos x + \cos x - x\sin x}
= 0
\]
You can apply L'Hospital's rule to the indeterminate forms $1^\infty$, $0^0$ or $\infty^0$ by taking the logarithm of the expression.
Example 5.7.5 Consider the limit,
\[
\lim_{x\to 0} x^x,
\]
which gives us the indeterminate form $0^0$. The logarithm of the expression is
\[
\log(x^x) = x\log x.
\]
As $x \to 0$ we now have the indeterminate form $0\cdot\infty$. By rewriting the expression, we can apply L'Hospital's rule.
\[
\lim_{x\to 0} \frac{\log x}{1/x} = \lim_{x\to 0} \frac{1/x}{-1/x^2} = \lim_{x\to 0} (-x) = 0
\]
Thus the original limit is
\[
\lim_{x\to 0} x^x = e^0 = 1.
\]
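Numerically, $x^x$ indeed creeps back toward 1 as $x \to 0^+$:

```python
# x**x = exp(x log x) approaches 1 as x -> 0+ since x log x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(x, x**x)
```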
5.8 Exercises
Limits and Continuity
Exercise 5.1
Does
\[
\lim_{x\to 0} \sin\left(\frac{1}{x}\right)
\]
exist?
Exercise 5.2
Is the function $\sin(1/x)$ continuous in the open interval $(0, 1)$? Is there a value of $a$ such that the function defined by
\[
f(x) = \begin{cases} \sin(1/x) & \text{for } x \neq 0, \\ a & \text{for } x = 0 \end{cases}
\]
is continuous on the closed interval $[0, 1]$?
Exercise 5.3
Is the function $\sin(1/x)$ uniformly continuous in the open interval $(0, 1)$?
Exercise 5.4
Are the functions $\sqrt{x}$ and $\frac{1}{x}$ uniformly continuous on the interval $(0, 1)$?
Exercise 5.5
Prove that a function which is continuous on a closed interval is uniformly continuous on that interval.
Definition of Differentiation
Exercise 5.6 (mathematica/calculus/differential/definition.nb)
Use the definition of differentiation to prove the following identities where $f(x)$ and $g(x)$ are differentiable functions and $n$ is a positive integer.
a. $\frac{d}{dx}(x^n) = nx^{n-1}$, (I suggest that you use Newton's binomial formula.)
b. $\frac{d}{dx}(f(x)g(x)) = f\frac{dg}{dx} + g\frac{df}{dx}$
c. $\frac{d}{dx}(\sin x) = \cos x$. (You'll need to use some trig identities.)
d. $\frac{d}{dx}(f(g(x))) = f'(g(x))g'(x)$
Rules of Differentiation
Exercise 5.7 (mathematica/calculus/differential/rules.nb)
Find the first derivatives of the following:
a. $x\sin(\cos x)$
b. $f(\cos(g(x)))$
c. $\frac{1}{f(\log x)}$
d. $x^{x^x}$
e. $|x|\sin|x|$
Exercise 5.8 (mathematica/calculus/differential/rules.nb)
Using $\frac{d}{dx}\sin x = \cos x$ and $\frac{d}{dx}\tan x = \frac{1}{\cos^2 x}$ find the derivatives of $\arcsin x$ and $\arctan x$.
Implicit Differentiation
Exercise 5.9 (mathematica/calculus/differential/implicit.nb)
Find $y'(x)$, given that $x^2 + y^2 = 1$. What is $y'(1/2)$?
Exercise 5.10 (mathematica/calculus/differential/implicit.nb)
Find $y'(x)$ and $y''(x)$, given that $x^2 - xy + y^2 = 3$.
Maxima and Minima
Exercise 5.11 (mathematica/calculus/differential/maxima.nb)
Identify any maxima and minima of the following functions.
a. $f(x) = x(12 - 2x)^2$.
b. $f(x) = (x - 2)^{2/3}$.
Exercise 5.12 (mathematica/calculus/differential/maxima.nb)
A cylindrical container with a circular base and an open top is to hold 64 cm³. Find its dimensions so that the surface area of the cup is a minimum.
Mean Value Theorems
Exercise 5.13
Prove the generalized theorem of the mean. If $f(x)$ and $g(x)$ are continuous in $[a, b]$ and differentiable in $(a, b)$, then there exists a point $x = \xi$ such that
\[
\frac{f'(\xi)}{g'(\xi)} = \frac{f(b) - f(a)}{g(b) - g(a)}.
\]
Assume that $g(a) \neq g(b)$ so that the denominator does not vanish and that $f'(x)$ and $g'(x)$ are not simultaneously zero which would produce an indeterminate form.
Exercise 5.14 (mathematica/calculus/differential/taylor.nb)
Find a polynomial approximation of $\sin x$ on the interval $[-1, 1]$ that has a maximum error of $\frac{1}{1000}$. Don't use any more terms than you need to. Prove the error bound. Use your polynomial to approximate $\sin 1$.
Exercise 5.15 (mathematica/calculus/differential/taylor.nb)
You use the formula $\frac{f(x+\Delta x) - 2f(x) + f(x-\Delta x)}{\Delta x^2}$ to approximate $f''(x)$. What is the error in this approximation?
Exercise 5.16
The formulas $\frac{f(x+\Delta x) - f(x)}{\Delta x}$ and $\frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x}$ are first and second order accurate schemes for approximating the first derivative $f'(x)$. Find a couple other schemes that have successively higher orders of accuracy. Would these higher order schemes actually give a better approximation of $f'(x)$? Remember that $\Delta x$ is small, but not infinitesimal.
L'Hospital's Rule
Exercise 5.17 (mathematica/calculus/differential/lhospitals.nb)
Evaluate the following limits.
a. $\lim_{x\to 0} \frac{x - \sin x}{x^3}$
b. $\lim_{x\to 0}\left(\csc x - \frac{1}{x}\right)$
c. $\lim_{x\to+\infty}\left(1 + \frac{1}{x}\right)^x$
d. $\lim_{x\to 0}\left(\csc^2 x - \frac{1}{x^2}\right)$. (First evaluate using L'Hospital's rule then using a Taylor series expansion. You will find that the latter method is more convenient.)
Exercise 5.18 (mathematica/calculus/differential/lhospitals.nb)
Evaluate the following limits,
\[
\lim_{x\to\infty} x^{a/x}, \qquad \lim_{x\to\infty}\left(1 + \frac{a}{x}\right)^{bx},
\]
where $a$ and $b$ are constants.
5.9 Hints
Limits and Continuity
Hint 5.1
Apply the $\epsilon$, $\delta$ definition of a limit.
Hint 5.2
The composition of continuous functions is continuous. Apply the definition of continuity and look at the point $x = 0$.
Hint 5.3
Note that for $x_1 = \frac{1}{(n-1/2)\pi}$ and $x_2 = \frac{1}{(n+1/2)\pi}$ where $n \in \mathbb{Z}$ we have $|\sin(1/x_1) - \sin(1/x_2)| = 2$.
Hint 5.4
Note that the function $\sqrt{x+\delta} - \sqrt{x}$ is a decreasing function of $x$ and an increasing function of $\delta$ for positive $x$ and $\delta$. Bound this function for fixed $\delta$.
Consider any positive $\delta$ and $\epsilon$. For what values of $x$ is
\[
\frac{1}{x} - \frac{1}{x+\delta} > \epsilon.
\]
Hint 5.5
Let the function $f(x)$ be continuous on a closed interval. Consider the function
\[
e(x, \delta) = \sup_{|\xi - x| < \delta} |f(\xi) - f(x)|.
\]
Bound $e(x, \delta)$ with a function of $\delta$ alone.
Definition of Differentiation
Hint 5.6
a. Newton's binomial formula is
\[
(a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^k = a^n + na^{n-1}b + \frac{n(n-1)}{2}a^{n-2}b^2 + \cdots + nab^{n-1} + b^n.
\]
Recall that the binomial coefficient is
\[
\binom{n}{k} = \frac{n!}{(n-k)!\,k!}.
\]
b. Note that
\[
\frac{d}{dx}(f(x)g(x)) = \lim_{\Delta x\to 0}\left[\frac{f(x+\Delta x)g(x+\Delta x) - f(x)g(x)}{\Delta x}\right]
\]
and
\[
g(x)f'(x) + f(x)g'(x) = g(x)\lim_{\Delta x\to 0}\left[\frac{f(x+\Delta x) - f(x)}{\Delta x}\right] + f(x)\lim_{\Delta x\to 0}\left[\frac{g(x+\Delta x) - g(x)}{\Delta x}\right].
\]
Fill in the blank.
c. First prove that
\[
\lim_{\theta\to 0} \frac{\sin\theta}{\theta} = 1
\]
and
\[
\lim_{\theta\to 0}\left[\frac{\cos\theta - 1}{\theta}\right] = 0.
\]
d. Let $u = g(x)$. Consider a nonzero increment $\Delta x$, which induces the increments $\Delta u$ and $\Delta f$. By definition,
\[
\Delta f = f(u + \Delta u) - f(u), \qquad \Delta u = g(x + \Delta x) - g(x),
\]
and $\Delta f, \Delta u \to 0$ as $\Delta x \to 0$. If $\Delta u \neq 0$ then we have
\[
\epsilon = \frac{\Delta f}{\Delta u} - \frac{df}{du} \to 0 \quad \text{as } \Delta u \to 0.
\]
If $\Delta u = 0$ for some values of $\Delta x$ then $\Delta f$ also vanishes and we define $\epsilon = 0$ for these values. In either case,
\[
\Delta y = \frac{df}{du}\Delta u + \epsilon\,\Delta u.
\]
Continue from here.
Rules of Differentiation
Hint 5.7
a. Use the product rule and the chain rule.
b. Use the chain rule.
c. Use the quotient rule and the chain rule.
d. Use the identity $a^b = e^{b \log a}$.
e. For $x > 0$, the expression is $x\sin x$; for $x < 0$, the expression is $(-x)\sin(-x) = x\sin x$. Do both cases.
Hint 5.8
Use that $x'(y) = 1/y'(x)$ and the identities $\cos x = (1 - \sin^2 x)^{1/2}$ and $\cos(\arctan x) = \frac{1}{(1+x^2)^{1/2}}$.
Implicit Differentiation
Hint 5.9
Differentiating the equation
\[
x^2 + [y(x)]^2 = 1
\]
yields
\[
2x + 2y(x)y'(x) = 0.
\]
Solve this equation for $y'(x)$ and write $y(x)$ in terms of $x$.
Hint 5.10
Differentiate the equation and solve for $y'(x)$ in terms of $x$ and $y(x)$. Differentiate the expression for $y'(x)$ to obtain $y''(x)$. You'll use that
\[
x^2 - xy(x) + [y(x)]^2 = 3
\]
Maxima and Minima
Hint 5.11
a. Use the second derivative test.
b. The function is not differentiable at the point $x = 2$ so you can't use a derivative test at that point.
Hint 5.12
Let $r$ be the radius and $h$ the height of the cylinder. The volume of the cup is $\pi r^2 h = 64$. The radius and height are related by $h = \frac{64}{\pi r^2}$. The surface area of the cup is $f(r) = \pi r^2 + 2\pi r h = \pi r^2 + \frac{128}{r}$. Use the second derivative test to find the minimum of $f(r)$.
Mean Value Theorems
Hint 5.13
The proof is analogous to the proof of the theorem of the mean.
Hint 5.14
The first few terms in the Taylor series of $\sin(x)$ about $x = 0$ are
\[
\sin(x) = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \frac{x^9}{362880} - \cdots.
\]
When determining the error, use the fact that $|\cos x_0| \leq 1$ and $|x^n| \leq 1$ for $x \in [-1, 1]$.
Hint 5.15
The terms in the approximation have the Taylor series,
\[
f(x + \Delta x) = f(x) + \Delta x f'(x) + \frac{\Delta x^2}{2}f''(x) + \frac{\Delta x^3}{6}f'''(x) + \frac{\Delta x^4}{24}f^{(4)}(x_1),
\]
\[
f(x - \Delta x) = f(x) - \Delta x f'(x) + \frac{\Delta x^2}{2}f''(x) - \frac{\Delta x^3}{6}f'''(x) + \frac{\Delta x^4}{24}f^{(4)}(x_2),
\]
where $x \leq x_1 \leq x + \Delta x$ and $x - \Delta x \leq x_2 \leq x$.
Hint 5.16
L'Hospital's Rule
Hint 5.17
a. Apply L'Hospital's rule three times.
b. You can write the expression as
\[
\frac{x - \sin x}{x\sin x}.
\]
c. Find the limit of the logarithm of the expression.
d. It takes four successive applications of L'Hospital's rule to evaluate the limit.
For the Taylor series expansion method,
\[
\csc^2 x - \frac{1}{x^2} = \frac{x^2 - \sin^2 x}{x^2 \sin^2 x} = \frac{x^2 - (x - x^3/6 + O(x^5))^2}{x^2 (x + O(x^3))^2}
\]
Hint 5.18
To evaluate the limits use the identity $a^b = e^{b \log a}$ and then apply L'Hospital's rule.
5.10 Solutions
Limits and Continuity
Solution 5.1
Note that in any open neighborhood of zero, $(-\delta, \delta)$, the function $\sin(1/x)$ takes on all values in the interval $[-1, 1]$. Thus if we choose a positive $\epsilon$ such that $\epsilon < 1$ then there is no candidate value $L$ for the limit for which $|\sin(1/x) - L| < \epsilon$ for all $x \in (-\delta, \delta)$. Thus the limit does not exist.
Solution 5.2
Since $\frac{1}{x}$ is continuous in the interval $(0, 1)$ and the function $\sin(x)$ is continuous everywhere, the composition $\sin(1/x)$ is continuous in the interval $(0, 1)$.
Since $\lim_{x\to 0} \sin(1/x)$ does not exist, there is no way of defining $\sin(1/x)$ at $x = 0$ to produce a function that is continuous in $[0, 1]$.
Solution 5.3
Note that for $x_1 = \frac{1}{(n-1/2)\pi}$ and $x_2 = \frac{1}{(n+1/2)\pi}$ where $n \in \mathbb{Z}$ we have $|\sin(1/x_1) - \sin(1/x_2)| = 2$. Thus for any $0 < \epsilon < 2$ there is no value of $\delta > 0$ such that $|\sin(1/x_1) - \sin(1/x_2)| < \epsilon$ for all $x_1, x_2 \in (0, 1)$ and $|x_1 - x_2| < \delta$. Thus $\sin(1/x)$ is not uniformly continuous in the open interval $(0, 1)$.
Solution 5.4
First consider the function $\sqrt{x}$. Note that the function $\sqrt{x+\delta} - \sqrt{x}$ is a decreasing function of $x$ and an increasing function of $\delta$ for positive $x$ and $\delta$. Thus for any fixed $\delta$, the maximum value of $\sqrt{x+\delta} - \sqrt{x}$ is bounded by $\sqrt{\delta}$. Therefore on the interval $(0, 1)$, a sufficient condition for $|\sqrt{x} - \sqrt{\xi}| < \epsilon$ is $|x - \xi| < \epsilon^2$. The function $\sqrt{x}$ is uniformly continuous on the interval $(0, 1)$.
Consider any positive $\delta$ and $\epsilon$. Note that
\[
\frac{1}{x} - \frac{1}{x+\delta} > \epsilon
\]
for
\[
x < \frac{1}{2}\left(\sqrt{\delta^2 + \frac{4\delta}{\epsilon}} - \delta\right).
\]
Thus there is no value of $\delta$ such that
\[
\left|\frac{1}{x} - \frac{1}{\xi}\right| < \epsilon
\]
for all $|x - \xi| < \delta$. The function $\frac{1}{x}$ is not uniformly continuous on the interval $(0, 1)$.
Solution 5.5
Let the function $f(x)$ be continuous on a closed interval. Consider the function
\[
e(x, \delta) = \sup_{|\xi - x| < \delta} |f(\xi) - f(x)|.
\]
Since $f(x)$ is continuous, $e(x, \delta)$ is a continuous function of $x$ on the same closed interval. Since continuous functions on closed intervals are bounded, there is a continuous, increasing function $\epsilon(\delta)$ satisfying,
\[
e(x, \delta) \leq \epsilon(\delta),
\]
for all $x$ in the closed interval. Since $\epsilon(\delta)$ is continuous and increasing, it has an inverse $\delta(\epsilon)$. Now note that $|f(x) - f(\xi)| < \epsilon$ for all $x$ and $\xi$ in the closed interval satisfying $|x - \xi| < \delta(\epsilon)$. Thus the function is uniformly continuous in the closed interval.
Definition of Differentiation
Solution 5.6
a.
\[
\frac{d}{dx}(x^n) = \lim_{\Delta x\to 0}\left[\frac{(x+\Delta x)^n - x^n}{\Delta x}\right]
= \lim_{\Delta x\to 0}\left[\frac{\left(x^n + nx^{n-1}\Delta x + \frac{n(n-1)}{2}x^{n-2}\Delta x^2 + \cdots + \Delta x^n\right) - x^n}{\Delta x}\right]
\]
\[
= \lim_{\Delta x\to 0}\left[nx^{n-1} + \frac{n(n-1)}{2}x^{n-2}\Delta x + \cdots + \Delta x^{n-1}\right]
= nx^{n-1}
\]
\[
\frac{d}{dx}(x^n) = nx^{n-1}
\]
b.
\[
\frac{d}{dx}(f(x)g(x)) = \lim_{\Delta x\to 0}\left[\frac{f(x+\Delta x)g(x+\Delta x) - f(x)g(x)}{\Delta x}\right]
\]
\[
= \lim_{\Delta x\to 0}\left[\frac{[f(x+\Delta x)g(x+\Delta x) - f(x)g(x+\Delta x)] + [f(x)g(x+\Delta x) - f(x)g(x)]}{\Delta x}\right]
\]
\[
= \lim_{\Delta x\to 0}[g(x+\Delta x)]\,\lim_{\Delta x\to 0}\left[\frac{f(x+\Delta x) - f(x)}{\Delta x}\right] + f(x)\lim_{\Delta x\to 0}\left[\frac{g(x+\Delta x) - g(x)}{\Delta x}\right]
= g(x)f'(x) + f(x)g'(x)
\]
\[
\frac{d}{dx}(f(x)g(x)) = f(x)g'(x) + f'(x)g(x)
\]
c. Consider a right triangle with hypotenuse of length 1 in the first quadrant of the plane. Label the vertices $A$, $B$, $C$, in clockwise order, starting with the vertex at the origin. The angle of $A$ is $\theta$. The length of a circular arc of radius $\cos\theta$ that connects $C$ to the hypotenuse is $\theta\cos\theta$. The length of the side $BC$ is $\sin\theta$. The length of a circular arc of radius 1 that connects $B$ to the $x$ axis is $\theta$. (See Figure 5.20.)

Figure 5.20: A right triangle with hypotenuse of length 1; the arcs of radius $\cos\theta$ and 1 subtend the angle $\theta$.

Considering the length of these three curves gives us the inequality:
\[
\theta\cos\theta \leq \sin\theta \leq \theta.
\]
Dividing by $\theta$,
\[
\cos\theta \leq \frac{\sin\theta}{\theta} \leq 1.
\]
Taking the limit as $\theta \to 0$ gives us
\[
\lim_{\theta\to 0} \frac{\sin\theta}{\theta} = 1.
\]
One more little tidbit we'll need to know is
\[
\lim_{\theta\to 0}\left[\frac{\cos\theta - 1}{\theta}\right] = \lim_{\theta\to 0}\left[\frac{\cos\theta - 1}{\theta}\cdot\frac{\cos\theta + 1}{\cos\theta + 1}\right]
= \lim_{\theta\to 0}\left[\frac{\cos^2\theta - 1}{\theta(\cos\theta + 1)}\right]
= \lim_{\theta\to 0}\left[\frac{-\sin^2\theta}{\theta(\cos\theta + 1)}\right]
\]
\[
= \lim_{\theta\to 0}\left[\frac{-\sin\theta}{\theta}\right]\lim_{\theta\to 0}\left[\frac{\sin\theta}{\cos\theta + 1}\right]
= (-1)\left(\frac{0}{2}\right) = 0.
\]
Now we're ready to find the derivative of $\sin x$.
\[
\frac{d}{dx}(\sin x) = \lim_{\Delta x\to 0}\left[\frac{\sin(x+\Delta x) - \sin x}{\Delta x}\right]
= \lim_{\Delta x\to 0}\left[\frac{\cos x\sin\Delta x + \sin x\cos\Delta x - \sin x}{\Delta x}\right]
\]
\[
= \cos x\lim_{\Delta x\to 0}\left[\frac{\sin\Delta x}{\Delta x}\right] + \sin x\lim_{\Delta x\to 0}\left[\frac{\cos\Delta x - 1}{\Delta x}\right]
= \cos x
\]
\[
\frac{d}{dx}(\sin x) = \cos x
\]
d. Let $u = g(x)$. Consider a nonzero increment $\Delta x$, which induces the increments $\Delta u$ and $\Delta f$. By definition,
\[
\Delta f = f(u + \Delta u) - f(u), \qquad \Delta u = g(x + \Delta x) - g(x),
\]
and $\Delta f, \Delta u \to 0$ as $\Delta x \to 0$. If $\Delta u \neq 0$ then we have
\[
\epsilon = \frac{\Delta f}{\Delta u} - \frac{df}{du} \to 0 \quad \text{as } \Delta u \to 0.
\]
If $\Delta u = 0$ for some values of $\Delta x$ then $\Delta f$ also vanishes and we define $\epsilon = 0$ for these values. In either case,
\[
\Delta y = \frac{df}{du}\Delta u + \epsilon\,\Delta u.
\]
We divide this equation by $\Delta x$ and take the limit as $\Delta x \to 0$.
\[
\frac{df}{dx} = \lim_{\Delta x\to 0} \frac{\Delta f}{\Delta x}
= \lim_{\Delta x\to 0}\left[\frac{df}{du}\frac{\Delta u}{\Delta x} + \epsilon\frac{\Delta u}{\Delta x}\right]
= \left(\frac{df}{du}\right)\left(\lim_{\Delta x\to 0}\frac{\Delta u}{\Delta x}\right) + \left(\lim_{\Delta x\to 0}\epsilon\right)\left(\lim_{\Delta x\to 0}\frac{\Delta u}{\Delta x}\right)
= \frac{df}{du}\frac{du}{dx} + (0)\left(\frac{du}{dx}\right)
= \frac{df}{du}\frac{du}{dx}
\]
Thus we see that
\[
\frac{d}{dx}(f(g(x))) = f'(g(x))g'(x).
\]
Rules of Differentiation
Solution 5.7
a.
d
dx
[x sin(cos x)] =
d
dx
[x] sin(cos x) +x
d
dx
[sin(cos x)]
= sin(cos x) +xcos(cos x)
d
dx
[cos x]
= sin(cos x) x cos(cos x) sin x
d
dx
[x sin(cos x)] = sin(cos x) xcos(cos x) sin x
b.
d
dx
[f(cos(g(x)))] = f
t
(cos(g(x)))
d
dx
[cos(g(x))]
= f
t
(cos(g(x))) sin(g(x))
d
dx
[g(x)]
= f
t
(cos(g(x))) sin(g(x))g
t
(x)
d
dx
[f(cos(g(x)))] = f
t
(cos(g(x))) sin(g(x))g
t
(x)
c.
d
dx
_
1
f(log x)
_
=
d
dx
[f(log x)]
[f(log x)]
2
=
f
t
(log x)
d
dx
[log x]
[f(log x)]
2
=
f
t
(log x)
x[f(log x)]
2
87
d
dx
_
1
f(log x)
_
=
f
t
(log x)
x[f(log x)]
2
d. First we write the expression in terms of exponentials and logarithms,
$$ x^{x^x} = x^{\exp(x\log x)} = \exp(\exp(x\log x)\log x). $$
Then we differentiate using the chain rule and the product rule.
$$ \frac{d}{dx}\exp(\exp(x\log x)\log x) = \exp(\exp(x\log x)\log x)\frac{d}{dx}\left( \exp(x\log x)\log x \right) $$
$$ = x^{x^x}\left( \exp(x\log x)\frac{d}{dx}(x\log x)\log x + \exp(x\log x)\frac{1}{x} \right)
 = x^{x^x}\left( x^x\left(\log x + x\frac{1}{x}\right)\log x + x^{-1}\exp(x\log x) \right) $$
$$ = x^{x^x}\left( x^x(\log x + 1)\log x + x^{-1}x^x \right)
 = x^{x^x + x}\left( x^{-1} + \log x + \log^2 x \right) $$
$$ \frac{d}{dx}x^{x^x} = x^{x^x + x}\left( x^{-1} + \log x + \log^2 x \right) $$
e. For $x > 0$, the expression is $x\sin x$; for $x < 0$, the expression is $(-x)\sin(-x) = x\sin x$. Thus we see that
$$ |x|\sin|x| = x\sin x. $$
The first derivative of this is
$$ \sin x + x\cos x. $$
$$ \frac{d}{dx}(|x|\sin|x|) = \sin x + x\cos x $$
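The closed form in part (d) can be spot-checked numerically (an addition of ours, not in the original). We compare the formula against a centered difference quotient of $x^{x^x}$ at a sample point; the function names are our own.

```python
import math

def f(x):
    # x^(x^x) computed via exponentials, as in part (d)
    return math.exp(math.exp(x * math.log(x)) * math.log(x))

def f_prime(x):
    # Closed form from part (d): x^(x^x + x) (1/x + log x + log^2 x)
    return x ** (x ** x + x) * (1 / x + math.log(x) + math.log(x) ** 2)

# Compare with a centered difference quotient at x = 1.5.
x, h = 1.5, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(numeric, f_prime(x))
```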
Solution 5.8
Let $y(x) = \sin x$. Then $y'(x) = \cos x$.
$$ \frac{d}{dy}\arcsin y = \frac{1}{y'(x)} = \frac{1}{\cos x} = \frac{1}{(1 - \sin^2 x)^{1/2}} = \frac{1}{(1 - y^2)^{1/2}} $$
$$ \frac{d}{dx}\arcsin x = \frac{1}{(1 - x^2)^{1/2}} $$
Let $y(x) = \tan x$. Then $y'(x) = 1/\cos^2 x$.
$$ \frac{d}{dy}\arctan y = \frac{1}{y'(x)} = \cos^2 x = \cos^2(\arctan y) = \left( \frac{1}{(1 + y^2)^{1/2}} \right)^2 = \frac{1}{1 + y^2} $$
$$ \frac{d}{dx}\arctan x = \frac{1}{1 + x^2} $$
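The inverse-function result for $\arctan$ is easy to verify numerically (our addition; the names below are ours). The derivative formula should match a centered difference quotient of `atan`.

```python
import math

def arctan_deriv(x):
    # d/dx arctan x = 1 / (1 + x^2), from Solution 5.8
    return 1 / (1 + x ** 2)

# Check against a centered difference of atan itself.
x, h = 0.8, 1e-6
numeric = (math.atan(x + h) - math.atan(x - h)) / (2 * h)
print(numeric, arctan_deriv(x))
```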
Implicit Differentiation
Solution 5.9
Differentiating the equation
$$ x^2 + [y(x)]^2 = 1 $$
yields
$$ 2x + 2y(x)y'(x) = 0. $$
We can solve this equation for $y'(x)$.
$$ y'(x) = -\frac{x}{y(x)} $$
To find $y'(1/2)$ we need to find $y(x)$ in terms of $x$.
$$ y(x) = \pm\sqrt{1 - x^2} $$
Thus $y'(x)$ is
$$ y'(x) = \mp\frac{x}{\sqrt{1 - x^2}}. $$
$y'(1/2)$ can have the two values:
$$ y'\left(\frac{1}{2}\right) = \mp\frac{1}{\sqrt{3}}. $$
Solution 5.10
Differentiating the equation
$$ x^2 - xy(x) + [y(x)]^2 = 3 $$
yields
$$ 2x - y(x) - xy'(x) + 2y(x)y'(x) = 0. $$
Solving this equation for $y'(x)$,
$$ y'(x) = \frac{y(x) - 2x}{2y(x) - x}. $$
Now we differentiate $y'(x)$ to get $y''(x)$.
$$ y''(x) = \frac{(y'(x) - 2)(2y(x) - x) - (y(x) - 2x)(2y'(x) - 1)}{(2y(x) - x)^2}, $$
$$ y''(x) = 3\,\frac{xy'(x) - y(x)}{(2y(x) - x)^2}, $$
$$ y''(x) = 3\,\frac{x\frac{y(x) - 2x}{2y(x) - x} - y(x)}{(2y(x) - x)^2}, $$
$$ y''(x) = 3\,\frac{x(y(x) - 2x) - y(x)(2y(x) - x)}{(2y(x) - x)^3}, $$
$$ y''(x) = -6\,\frac{x^2 - xy(x) + [y(x)]^2}{(2y(x) - x)^3}, $$
$$ y''(x) = \frac{-18}{(2y(x) - x)^3}. $$
Maxima and Minima
Solution 5.11
a.
$$ f'(x) = (12 - 2x)^2 + 2x(12 - 2x)(-2) = 4(x - 6)^2 + 8x(x - 6) = 12(x - 2)(x - 6) $$
There are critical points at $x = 2$ and $x = 6$.
$$ f''(x) = 12(x - 2) + 12(x - 6) = 24(x - 4) $$
Since $f''(2) = -48 < 0$, $x = 2$ is a local maximum. Since $f''(6) = 48 > 0$, $x = 6$ is a local minimum.
b.
$$ f'(x) = \frac{2}{3}(x - 2)^{-1/3} $$
The first derivative exists and is nonzero for $x \neq 2$. At $x = 2$, the derivative does not exist and thus $x = 2$ is a critical point. For $x < 2$, $f'(x) < 0$ and for $x > 2$, $f'(x) > 0$. $x = 2$ is a local minimum.
Solution 5.12
Let $r$ be the radius and $h$ the height of the cylinder. The volume of the cup is $\pi r^2 h = 64$. The radius and height are related by $h = \frac{64}{\pi r^2}$. The surface area of the cup is $f(r) = \pi r^2 + 2\pi r h = \pi r^2 + \frac{128}{r}$. The first derivative of the surface area is $f'(r) = 2\pi r - \frac{128}{r^2}$. Finding the zeros of $f'(r)$,
$$ 2\pi r - \frac{128}{r^2} = 0, $$
$$ 2\pi r^3 - 128 = 0, $$
$$ r = \frac{4}{\sqrt[3]{\pi}}. $$
The second derivative of the surface area is $f''(r) = 2\pi + \frac{256}{r^3}$. Since $f''\left(\frac{4}{\sqrt[3]{\pi}}\right) = 6\pi$, $r = \frac{4}{\sqrt[3]{\pi}}$ is a local minimum of $f(r)$. Since this is the only critical point for $r > 0$, it must be a global minimum.
The cup has a radius of $\frac{4}{\sqrt[3]{\pi}}$ cm and a height of $\frac{4}{\sqrt[3]{\pi}}$ cm.
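A quick numerical check of this optimization (not in the original text): scanning the surface area over a grid of radii should find no radius that beats the critical point $r = 4/\sqrt[3]{\pi}$. The helper names are ours.

```python
import math

def surface_area(r):
    # f(r) = pi r^2 + 128/r, for a cup of volume pi r^2 h = 64
    return math.pi * r ** 2 + 128 / r

r_star = 4 / math.pi ** (1 / 3)

# Coarse scan: the grid minimizer should land near r_star.
candidates = [0.5 + 0.01 * k for k in range(400)]
best = min(candidates, key=surface_area)
print(r_star, best, surface_area(r_star))
```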
Mean Value Theorems
Solution 5.13
We define the function
$$ h(x) = f(x) - f(a) - \frac{f(b) - f(a)}{g(b) - g(a)}(g(x) - g(a)). $$
Note that $h(x)$ is differentiable and that $h(a) = h(b) = 0$. Thus $h(x)$ satisfies the conditions of Rolle's theorem and there exists a point $\xi \in (a, b)$ such that
$$ h'(\xi) = f'(\xi) - \frac{f(b) - f(a)}{g(b) - g(a)}g'(\xi) = 0, $$
$$ \frac{f'(\xi)}{g'(\xi)} = \frac{f(b) - f(a)}{g(b) - g(a)}. $$
Solution 5.14
The first few terms in the Taylor series of $\sin x$ about $x = 0$ are
$$ \sin x = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040} + \frac{x^9}{362880} - \cdots. $$
The seventh derivative of $\sin x$ is $-\cos x$. Thus we have that
$$ \sin x = x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{\cos x_0}{5040}x^7, $$
where $0 \le x_0 \le x$. Since we are considering $x \in [-1, 1]$ and $-1 \le \cos(x_0) \le 1$, the approximation
$$ \sin x \approx x - \frac{x^3}{6} + \frac{x^5}{120} $$
has a maximum error of $\frac{1}{5040} \approx 0.000198$. Using this polynomial to approximate $\sin(1)$,
$$ 1 - \frac{1^3}{6} + \frac{1^5}{120} \approx 0.841667. $$
To see that this has the required accuracy,
$$ \sin(1) \approx 0.841471. $$
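The error bound above can be confirmed numerically (our addition): the actual error of the degree-5 polynomial on $[-1, 1]$ should never exceed $1/5040$. The names below are ours.

```python
import math

def sin_taylor5(x):
    # Degree-5 Taylor polynomial of sin about 0
    return x - x ** 3 / 6 + x ** 5 / 120

max_error_bound = 1 / 5040  # from the x^7 remainder term

# The actual error on [-1, 1] should respect the bound.
worst = max(abs(sin_taylor5(x) - math.sin(x))
            for x in [k / 100 for k in range(-100, 101)])
print(sin_taylor5(1), math.sin(1), worst, max_error_bound)
```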
Solution 5.15
Expanding the terms in the approximation in Taylor series,
$$ f(x + \Delta x) = f(x) + \Delta x f'(x) + \frac{\Delta x^2}{2}f''(x) + \frac{\Delta x^3}{6}f'''(x) + \frac{\Delta x^4}{24}f''''(x_1), $$
$$ f(x - \Delta x) = f(x) - \Delta x f'(x) + \frac{\Delta x^2}{2}f''(x) - \frac{\Delta x^3}{6}f'''(x) + \frac{\Delta x^4}{24}f''''(x_2), $$
where $x \le x_1 \le x + \Delta x$ and $x - \Delta x \le x_2 \le x$. Substituting the expansions into the formula,
$$ \frac{f(x + \Delta x) - 2f(x) + f(x - \Delta x)}{\Delta x^2} = f''(x) + \frac{\Delta x^2}{24}\left[ f''''(x_1) + f''''(x_2) \right]. $$
Thus the error in the approximation is
$$ \frac{\Delta x^2}{24}\left[ f''''(x_1) + f''''(x_2) \right]. $$
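The $O(\Delta x^2)$ error of this centered second difference is easy to see numerically (an addition of ours): shrinking $\Delta x$ by a factor of ten should shrink the error by roughly a hundred. The names below are ours.

```python
import math

def second_difference(f, x, dx):
    # (f(x+dx) - 2 f(x) + f(x-dx)) / dx^2, the formula from Solution 5.15
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / dx ** 2

# For f = sin, f''(x) = -sin x; the error should shrink like dx^2.
x = 1.0
err1 = abs(second_difference(math.sin, x, 1e-2) + math.sin(x))
err2 = abs(second_difference(math.sin, x, 1e-3) + math.sin(x))
print(err1, err2)
```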
Solution 5.16
L'Hospital's Rule
Solution 5.17
a.
$$ \lim_{x\to 0} \frac{x - \sin x}{x^3}
 = \lim_{x\to 0} \frac{1 - \cos x}{3x^2}
 = \lim_{x\to 0} \frac{\sin x}{6x}
 = \lim_{x\to 0} \frac{\cos x}{6}
 = \frac{1}{6} $$
$$ \lim_{x\to 0} \frac{x - \sin x}{x^3} = \frac{1}{6} $$
b.
$$ \lim_{x\to 0}\left( \csc x - \frac{1}{x} \right)
 = \lim_{x\to 0}\left( \frac{1}{\sin x} - \frac{1}{x} \right)
 = \lim_{x\to 0} \frac{x - \sin x}{x\sin x}
 = \lim_{x\to 0} \frac{1 - \cos x}{x\cos x + \sin x}
 = \lim_{x\to 0} \frac{\sin x}{-x\sin x + \cos x + \cos x}
 = \frac{0}{2} = 0 $$
$$ \lim_{x\to 0}\left( \csc x - \frac{1}{x} \right) = 0 $$
c.
$$ \log\left( \lim_{x\to+\infty} \left(1 + \frac{1}{x}\right)^x \right)
 = \lim_{x\to+\infty} \log\left( \left(1 + \frac{1}{x}\right)^x \right)
 = \lim_{x\to+\infty} x\log\left(1 + \frac{1}{x}\right)
 = \lim_{x\to+\infty} \frac{\log\left(1 + \frac{1}{x}\right)}{1/x} $$
$$ = \lim_{x\to+\infty} \frac{\left(1 + \frac{1}{x}\right)^{-1}\left(-\frac{1}{x^2}\right)}{-1/x^2}
 = \lim_{x\to+\infty} \left(1 + \frac{1}{x}\right)^{-1}
 = 1 $$
Thus we have
$$ \lim_{x\to+\infty} \left(1 + \frac{1}{x}\right)^x = e. $$
d. It takes four successive applications of L'Hospital's rule to evaluate the limit.
$$ \lim_{x\to 0}\left( \csc^2 x - \frac{1}{x^2} \right)
 = \lim_{x\to 0} \frac{x^2 - \sin^2 x}{x^2\sin^2 x} $$
$$ = \lim_{x\to 0} \frac{2x - 2\cos x\sin x}{2x^2\cos x\sin x + 2x\sin^2 x} $$
$$ = \lim_{x\to 0} \frac{2 - 2\cos^2 x + 2\sin^2 x}{2x^2\cos^2 x + 8x\cos x\sin x + 2\sin^2 x - 2x^2\sin^2 x} $$
$$ = \lim_{x\to 0} \frac{8\cos x\sin x}{12x\cos^2 x + 12\cos x\sin x - 8x^2\cos x\sin x - 12x\sin^2 x} $$
$$ = \lim_{x\to 0} \frac{8\cos^2 x - 8\sin^2 x}{24\cos^2 x - 8x^2\cos^2 x - 64x\cos x\sin x - 24\sin^2 x + 8x^2\sin^2 x} $$
$$ = \frac{1}{3} $$
It is easier to use a Taylor series expansion.
$$ \lim_{x\to 0}\left( \csc^2 x - \frac{1}{x^2} \right)
 = \lim_{x\to 0} \frac{x^2 - \sin^2 x}{x^2\sin^2 x}
 = \lim_{x\to 0} \frac{x^2 - (x - x^3/6 + O(x^5))^2}{x^2(x + O(x^3))^2} $$
$$ = \lim_{x\to 0} \frac{x^2 - (x^2 - x^4/3 + O(x^6))}{x^4 + O(x^6)}
 = \lim_{x\to 0} \left( \frac{1}{3} + O(x^2) \right)
 = \frac{1}{3} $$
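A numerical sanity check of this limit (our addition, not in the original): evaluating $\csc^2 x - 1/x^2$ at small arguments should approach $1/3$.

```python
import math

def g(x):
    # csc^2 x - 1/x^2
    return 1 / math.sin(x) ** 2 - 1 / x ** 2

# Approaching 0, the values should tend to 1/3.
for x in (0.1, 0.01, 0.001):
    print(x, g(x))
```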
Solution 5.18
To evaluate the first limit, we use the identity $a^b = e^{b\log a}$ and then apply L'Hospital's rule.
$$ \lim_{x\to\infty} x^{a/x}
 = \lim_{x\to\infty} e^{\frac{a\log x}{x}}
 = \exp\left( \lim_{x\to\infty} \frac{a\log x}{x} \right)
 = \exp\left( \lim_{x\to\infty} \frac{a/x}{1} \right)
 = e^0 $$
$$ \lim_{x\to\infty} x^{a/x} = 1 $$
We use the same method to evaluate the second limit.
$$ \lim_{x\to\infty} \left(1 + \frac{a}{x}\right)^{bx}
 = \lim_{x\to\infty} \exp\left( bx\log\left(1 + \frac{a}{x}\right) \right)
 = \exp\left( \lim_{x\to\infty} bx\log\left(1 + \frac{a}{x}\right) \right)
 = \exp\left( \lim_{x\to\infty} b\,\frac{\log(1 + a/x)}{1/x} \right) $$
$$ = \exp\left( \lim_{x\to\infty} b\,\frac{\frac{-a/x^2}{1 + a/x}}{-1/x^2} \right)
 = \exp\left( \lim_{x\to\infty} b\,\frac{a}{1 + a/x} \right) $$
$$ \lim_{x\to\infty} \left(1 + \frac{a}{x}\right)^{bx} = e^{ab} $$
Chapter 6
Integral Calculus
6.1 The Indefinite Integral
The opposite of a derivative is the anti-derivative or the indefinite integral. The indefinite integral of a function $f(x)$ is denoted,
$$ \int f(x)\,dx. $$
It is defined by the property that
$$ \frac{d}{dx}\int f(x)\,dx = f(x). $$
While a function $f(x)$ has a unique derivative if it is differentiable, it has an infinite number of indefinite integrals, each of which differs by an additive constant.
Zero Slope Implies a Constant Function. If the value of a function's derivative is identically zero, $\frac{df}{dx} = 0$, then the function is a constant, $f(x) = c$. To prove this, we assume that there exists a non-constant differentiable function whose derivative is zero and obtain a contradiction. Let $f(x)$ be such a function. Since $f(x)$ is non-constant, there exist points $a$ and $b$ such that $f(a) \neq f(b)$. By the Mean Value Theorem of differential calculus, there exists a point $\xi \in (a, b)$ such that
$$ f'(\xi) = \frac{f(b) - f(a)}{b - a} \neq 0, $$
which contradicts that the derivative is everywhere zero.
Indefinite Integrals Differ by an Additive Constant. Suppose that $F(x)$ and $G(x)$ are indefinite integrals of $f(x)$. Then we have
$$ \frac{d}{dx}(F(x) - G(x)) = F'(x) - G'(x) = f(x) - f(x) = 0. $$
Thus we see that $F(x) - G(x) = c$ and the two indefinite integrals must differ by a constant. For example, we have $\int \sin x\,dx = -\cos x + c$. While every function that can be expressed in terms of elementary functions, (the exponential, logarithm, trigonometric functions, etc.), has a derivative that can be written explicitly in terms of elementary functions, the same is not true of integrals. For example, $\int \sin(\sin x)\,dx$ cannot be written explicitly in terms of elementary functions.
Properties. Since the derivative is linear, so is the indefinite integral. That is,
$$ \int (af(x) + bg(x))\,dx = a\int f(x)\,dx + b\int g(x)\,dx. $$
For each derivative identity there is a corresponding integral identity. Consider the power law identity, $\frac{d}{dx}(f(x))^a = a(f(x))^{a-1}f'(x)$. The corresponding integral identity is
$$ \int (f(x))^a f'(x)\,dx = \frac{(f(x))^{a+1}}{a + 1} + c, \quad a \neq -1, $$
where we require that $a \neq -1$ to avoid division by zero. From the derivative of a logarithm,
$$ \frac{d}{dx}\ln(f(x)) = \frac{f'(x)}{f(x)}, $$
we obtain,
$$ \int \frac{f'(x)}{f(x)}\,dx = \ln|f(x)| + c. $$
Note the absolute value signs. This is because $\frac{d}{dx}\ln|x| = \frac{1}{x}$ for $x \neq 0$. In Figure 6.1 is a plot of $\ln|x|$ and $\frac{1}{x}$ to reinforce this.
Figure 6.1: Plot of ln |x| and 1/x.
Example 6.1.1 Consider
$$ I = \int \frac{x}{(x^2 + 1)^2}\,dx. $$
We evaluate the integral by choosing $u = x^2 + 1$, $du = 2x\,dx$.
$$ I = \frac{1}{2}\int \frac{2x}{(x^2 + 1)^2}\,dx
 = \frac{1}{2}\int \frac{du}{u^2}
 = \frac{1}{2}\left( -\frac{1}{u} \right)
 = -\frac{1}{2(x^2 + 1)}. $$
Example 6.1.2 Consider
$$ I = \int \tan x\,dx = \int \frac{\sin x}{\cos x}\,dx. $$
By choosing $f(x) = \cos x$, $f'(x) = -\sin x$, we see that the integral is
$$ I = -\int \frac{-\sin x}{\cos x}\,dx = -\ln|\cos x| + c. $$
Change of Variable. The differential of a function $g(x)$ is $dg = g'(x)\,dx$. Thus one might suspect that for $\xi = g(x)$,
$$ \int f(\xi)\,d\xi = \int f(g(x))g'(x)\,dx, \qquad (6.1) $$
since $d\xi = dg = g'(x)\,dx$. This turns out to be true. To prove it we will appeal to the chain rule for differentiation. Let $\xi$ be a function of $x$. The chain rule is
$$ \frac{d}{dx}f(\xi) = f'(\xi)\xi'(x), \qquad \frac{d}{dx}f(\xi) = \frac{df}{d\xi}\frac{d\xi}{dx}. $$
We can also write this as
$$ \frac{df}{d\xi} = \frac{dx}{d\xi}\frac{df}{dx}, $$
or in operator notation,
$$ \frac{d}{d\xi} = \frac{dx}{d\xi}\frac{d}{dx}. $$
Now we're ready to start. The derivative of the left side of Equation 6.1 is
$$ \frac{d}{d\xi}\int f(\xi)\,d\xi = f(\xi). $$
Next we differentiate the right side,
$$ \frac{d}{d\xi}\int f(g(x))g'(x)\,dx
 = \frac{dx}{d\xi}\frac{d}{dx}\int f(g(x))g'(x)\,dx
 = \frac{dx}{d\xi}f(g(x))g'(x)
 = \frac{dx}{dg}f(g(x))\frac{dg}{dx}
 = f(g(x))
 = f(\xi) $$
to see that it is in fact an identity for $\xi = g(x)$.
Example 6.1.3 Consider
$$ \int x\sin(x^2)\,dx. $$
We choose $\xi = x^2$, $d\xi = 2x\,dx$ to evaluate the integral.
$$ \int x\sin(x^2)\,dx
 = \frac{1}{2}\int \sin(x^2)\,2x\,dx
 = \frac{1}{2}\int \sin\xi\,d\xi
 = \frac{1}{2}(-\cos\xi) + c
 = -\frac{1}{2}\cos(x^2) + c $$
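The substitution result above can be checked numerically (our addition): a trapezoid-rule estimate of the integral on a sample interval should match the anti-derivative evaluated at the endpoints. The helper names and the interval $[0, 1.5]$ are our own choices.

```python
import math

def antiderivative(x):
    # -cos(x^2)/2, from Example 6.1.3
    return -math.cos(x ** 2) / 2

def integrand(x):
    return x * math.sin(x ** 2)

def trapezoid(f, a, b, n=10000):
    # Composite trapezoid rule on [a, b] with n panels
    h = (b - a) / n
    total = (f(a) + f(b)) / 2 + sum(f(a + k * h) for k in range(1, n))
    return total * h

a, b = 0.0, 1.5
numeric = trapezoid(integrand, a, b)
exact = antiderivative(b) - antiderivative(a)
print(numeric, exact)
```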
Integration by Parts. The product rule for differentiation gives us an identity called integration by parts. We start with the product rule and then integrate both sides of the equation.
$$ \frac{d}{dx}(u(x)v(x)) = u'(x)v(x) + u(x)v'(x) $$
$$ \int (u'(x)v(x) + u(x)v'(x))\,dx = u(x)v(x) + c $$
$$ \int u'(x)v(x)\,dx + \int u(x)v'(x)\,dx = u(x)v(x) $$
$$ \int u(x)v'(x)\,dx = u(x)v(x) - \int v(x)u'(x)\,dx $$
The theorem is most often written in the form
$$ \int u\,dv = uv - \int v\,du. $$
So what is the usefulness of this? Well, it may happen for some integrals and a good choice of $u$ and $v$ that the integral on the right is easier to evaluate than the integral on the left.
Example 6.1.4 Consider $\int x e^x\,dx$. If we choose $u = x$, $dv = e^x\,dx$ then integration by parts yields
$$ \int x e^x\,dx = x e^x - \int e^x\,dx = (x - 1)e^x. $$
Now notice what happens when we choose $u = e^x$, $dv = x\,dx$.
$$ \int x e^x\,dx = \frac{1}{2}x^2 e^x - \int \frac{1}{2}x^2 e^x\,dx $$
The integral gets harder instead of easier.
When applying integration by parts, one must choose $u$ and $dv$ wisely. As general rules of thumb:
Pick $u$ so that $u'$ is simpler than $u$.
Pick $dv$ so that $v$ is not more complicated, (hopefully simpler), than $dv$.
Also note that you may have to apply integration by parts several times to evaluate some integrals.
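The good choice in Example 6.1.4 can be confirmed numerically (an addition of ours): a trapezoid-rule estimate of $\int_0^2 x e^x\,dx$ should match $(x - 1)e^x$ evaluated at the endpoints. Names and the interval are ours.

```python
import math

def parts_antiderivative(x):
    # (x - 1) e^x, from Example 6.1.4
    return (x - 1) * math.exp(x)

def trapezoid(f, a, b, n=20000):
    # Composite trapezoid rule on [a, b] with n panels
    h = (b - a) / n
    total = (f(a) + f(b)) / 2 + sum(f(a + k * h) for k in range(1, n))
    return total * h

a, b = 0.0, 2.0
numeric = trapezoid(lambda x: x * math.exp(x), a, b)
exact = parts_antiderivative(b) - parts_antiderivative(a)
print(numeric, exact)
```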
6.2 The Definite Integral
6.2.1 Definition
The area bounded by the $x$ axis, the vertical lines $x = a$ and $x = b$ and the function $f(x)$ is denoted with a definite integral,
$$ \int_a^b f(x)\,dx. $$
The area is signed, that is, if $f(x)$ is negative, then the area is negative. We measure the area with a divide-and-conquer strategy. First partition the interval $(a, b)$ with $a = x_0 < x_1 < \cdots < x_{n-1} < x_n = b$. Note that the area under the curve on each subinterval is approximately the area of a rectangle of base $\Delta x_i = x_{i+1} - x_i$ and height $f(\xi_i)$, where $\xi_i \in [x_i, x_{i+1}]$. If we add up the areas of the rectangles, we get an approximation of the area under the curve. See Figure 6.2.
$$ \int_a^b f(x)\,dx \approx \sum_{i=0}^{n-1} f(\xi_i)\Delta x_i $$
As the $\Delta x_i$'s get smaller, we expect the approximation of the area to get better. Let $\Delta x = \max_{0 \le i \le n-1} \Delta x_i$. We define the definite integral as the sum of the areas of the rectangles in the limit that $\Delta x \to 0$.
$$ \int_a^b f(x)\,dx = \lim_{\Delta x\to 0} \sum_{i=0}^{n-1} f(\xi_i)\Delta x_i $$
The integral is defined when the limit exists. This is known as the Riemann integral of $f(x)$. $f(x)$ is called the integrand.
Figure 6.2: Divide-and-Conquer Strategy for Approximating a Definite Integral.
6.2.2 Properties
Linearity and the Basics. Because summation is a linear operator, that is
$$ \sum_{i=0}^{n-1}(cf_i + dg_i) = c\sum_{i=0}^{n-1} f_i + d\sum_{i=0}^{n-1} g_i, $$
definite integrals are linear,
$$ \int_a^b (cf(x) + dg(x))\,dx = c\int_a^b f(x)\,dx + d\int_a^b g(x)\,dx. $$
One can also divide the range of integration.
$$ \int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx $$
We assume that each of the above integrals exist. If $a \le b$, and we integrate from $b$ to $a$, then each of the $\Delta x_i$ will be negative. From this observation, it is clear that
$$ \int_a^b f(x)\,dx = -\int_b^a f(x)\,dx. $$
If we integrate any function from a point $a$ to that same point $a$, then all the $\Delta x_i$ are zero and
$$ \int_a^a f(x)\,dx = 0. $$
Bounding the Integral. Recall that if $f_i \le g_i$, then
$$ \sum_{i=0}^{n-1} f_i \le \sum_{i=0}^{n-1} g_i. $$
Let $m = \min_{x\in[a,b]} f(x)$ and $M = \max_{x\in[a,b]} f(x)$. Then
$$ (b - a)m = \sum_{i=0}^{n-1} m\Delta x_i \le \sum_{i=0}^{n-1} f(\xi_i)\Delta x_i \le \sum_{i=0}^{n-1} M\Delta x_i = (b - a)M $$
implies that
$$ (b - a)m \le \int_a^b f(x)\,dx \le (b - a)M. $$
Since
$$ \left| \sum_{i=0}^{n-1} f_i \right| \le \sum_{i=0}^{n-1} |f_i|, $$
we have
$$ \left| \int_a^b f(x)\,dx \right| \le \int_a^b |f(x)|\,dx. $$
Mean Value Theorem of Integral Calculus. Let $f(x)$ be continuous. We know from above that
$$ (b - a)m \le \int_a^b f(x)\,dx \le (b - a)M. $$
Therefore there exists a constant $c \in [m, M]$ satisfying
$$ \int_a^b f(x)\,dx = (b - a)c. $$
Since $f(x)$ is continuous, there is a point $\xi \in [a, b]$ such that $f(\xi) = c$. Thus we see that
$$ \int_a^b f(x)\,dx = (b - a)f(\xi), $$
for some $\xi \in [a, b]$.
6.3 The Fundamental Theorem of Integral Calculus
Definite Integrals with Variable Limits of Integration. Consider $a$ to be a constant and $x$ variable, then the function $F(x)$ defined by
$$ F(x) = \int_a^x f(t)\,dt \qquad (6.2) $$
is an anti-derivative of $f(x)$, that is $F'(x) = f(x)$. To show this we apply the definition of differentiation and the integral mean value theorem.
$$ F'(x) = \lim_{\Delta x\to 0} \frac{F(x + \Delta x) - F(x)}{\Delta x}
 = \lim_{\Delta x\to 0} \frac{\int_a^{x+\Delta x} f(t)\,dt - \int_a^x f(t)\,dt}{\Delta x}
 = \lim_{\Delta x\to 0} \frac{\int_x^{x+\Delta x} f(t)\,dt}{\Delta x} $$
$$ = \lim_{\Delta x\to 0} \frac{f(\xi)\Delta x}{\Delta x}, \quad \xi \in [x, x + \Delta x]
 = f(x) $$
The Fundamental Theorem of Integral Calculus. Let $F(x)$ be any anti-derivative of $f(x)$. Noting that all anti-derivatives of $f(x)$ differ by a constant and replacing $x$ by $b$ in Equation 6.2, we see that there exists a constant $c$ such that
$$ \int_a^b f(x)\,dx = F(b) + c. $$
Now to find the constant. By plugging in $b = a$,
$$ \int_a^a f(x)\,dx = F(a) + c = 0, $$
we see that $c = -F(a)$. This gives us a result known as the Fundamental Theorem of Integral Calculus.
$$ \int_a^b f(x)\,dx = F(b) - F(a). $$
We introduce the notation
$$ [F(x)]_a^b \equiv F(b) - F(a). $$
Example 6.3.1
$$ \int_0^\pi \sin x\,dx = [-\cos x]_0^\pi = -\cos(\pi) + \cos(0) = 2 $$
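The Fundamental Theorem can be illustrated numerically (our addition): a midpoint Riemann sum of $\sin$ over $[0, \pi]$ should reproduce $F(\pi) - F(0)$ with $F = -\cos$. The helper name is ours.

```python
import math

def riemann_sum(f, a, b, n=100000):
    # Midpoint-rule Riemann sum with a uniform partition
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Fundamental theorem: the sum should match F(pi) - F(0) with F = -cos.
numeric = riemann_sum(math.sin, 0.0, math.pi)
exact = -math.cos(math.pi) - (-math.cos(0.0))
print(numeric, exact)
```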
6.4 Techniques of Integration
6.4.1 Partial Fractions
A proper rational function
$$ \frac{p(x)}{q(x)} = \frac{p(x)}{(x - \alpha)^n r(x)} $$
can be written in the form
$$ \frac{p(x)}{(x - \alpha)^n r(x)} = \left( \frac{a_0}{(x - \alpha)^n} + \frac{a_1}{(x - \alpha)^{n-1}} + \cdots + \frac{a_{n-1}}{x - \alpha} \right) + (\cdots) $$
where the $a_k$'s are constants and the last ellipses represents the partial fractions expansion of the roots of $r(x)$. The coefficients are
$$ a_k = \frac{1}{k!} \left. \frac{d^k}{dx^k} \left( \frac{p(x)}{r(x)} \right) \right|_{x=\alpha}. $$
Example 6.4.1 Consider the partial fraction expansion of
$$ \frac{1 + x + x^2}{(x - 1)^3}. $$
The expansion has the form
$$ \frac{a_0}{(x - 1)^3} + \frac{a_1}{(x - 1)^2} + \frac{a_2}{x - 1}. $$
The coefficients are
$$ a_0 = \frac{1}{0!}\left. (1 + x + x^2) \right|_{x=1} = 3, $$
$$ a_1 = \frac{1}{1!}\left. \frac{d}{dx}(1 + x + x^2) \right|_{x=1} = \left. (1 + 2x) \right|_{x=1} = 3, $$
$$ a_2 = \frac{1}{2!}\left. \frac{d^2}{dx^2}(1 + x + x^2) \right|_{x=1} = \left. \frac{1}{2}(2) \right|_{x=1} = 1. $$
Thus we have
$$ \frac{1 + x + x^2}{(x - 1)^3} = \frac{3}{(x - 1)^3} + \frac{3}{(x - 1)^2} + \frac{1}{x - 1}. $$
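An expansion like the one in Example 6.4.1 is easy to verify by evaluating both sides at a few sample points away from the pole (our addition; names are ours).

```python
def lhs(x):
    return (1 + x + x ** 2) / (x - 1) ** 3

def rhs(x):
    # Expansion from Example 6.4.1: a0 = 3, a1 = 3, a2 = 1
    return 3 / (x - 1) ** 3 + 3 / (x - 1) ** 2 + 1 / (x - 1)

# The two sides should agree away from the pole at x = 1.
for x in (2.0, 3.5, -4.0):
    print(x, lhs(x), rhs(x))
```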
Example 6.4.2 Suppose we want to evaluate
$$ \int \frac{1 + x + x^2}{(x - 1)^3}\,dx. $$
If we expand the integrand in a partial fraction expansion, then the integral becomes easy.
$$ \int \frac{1 + x + x^2}{(x - 1)^3}\,dx
 = \int \left( \frac{3}{(x - 1)^3} + \frac{3}{(x - 1)^2} + \frac{1}{x - 1} \right) dx
 = -\frac{3}{2(x - 1)^2} - \frac{3}{x - 1} + \ln(x - 1) $$
Example 6.4.3 Consider the partial fraction expansion of
$$ \frac{1 + x + x^2}{x^2(x - 1)^2}. $$
The expansion has the form
$$ \frac{a_0}{x^2} + \frac{a_1}{x} + \frac{b_0}{(x - 1)^2} + \frac{b_1}{x - 1}. $$
The coefficients are
$$ a_0 = \frac{1}{0!}\left. \left( \frac{1 + x + x^2}{(x - 1)^2} \right) \right|_{x=0} = 1, $$
$$ a_1 = \frac{1}{1!}\left. \frac{d}{dx}\left( \frac{1 + x + x^2}{(x - 1)^2} \right) \right|_{x=0}
 = \left. \left( \frac{1 + 2x}{(x - 1)^2} - \frac{2(1 + x + x^2)}{(x - 1)^3} \right) \right|_{x=0} = 3, $$
$$ b_0 = \frac{1}{0!}\left. \left( \frac{1 + x + x^2}{x^2} \right) \right|_{x=1} = 3, $$
$$ b_1 = \frac{1}{1!}\left. \frac{d}{dx}\left( \frac{1 + x + x^2}{x^2} \right) \right|_{x=1}
 = \left. \left( \frac{1 + 2x}{x^2} - \frac{2(1 + x + x^2)}{x^3} \right) \right|_{x=1} = -3. $$
Thus we have
$$ \frac{1 + x + x^2}{x^2(x - 1)^2} = \frac{1}{x^2} + \frac{3}{x} + \frac{3}{(x - 1)^2} - \frac{3}{x - 1}. $$
If the rational function has real coefficients and the denominator has complex roots, then you can reduce the work in finding the partial fraction expansion with the following trick: Let $\alpha$ and $\bar\alpha$ be complex conjugate pairs of roots of the denominator.
$$ \frac{p(x)}{(x - \alpha)^n(x - \bar\alpha)^n r(x)}
 = \left( \frac{a_0}{(x - \alpha)^n} + \frac{a_1}{(x - \alpha)^{n-1}} + \cdots + \frac{a_{n-1}}{x - \alpha} \right)
 + \left( \frac{\bar a_0}{(x - \bar\alpha)^n} + \frac{\bar a_1}{(x - \bar\alpha)^{n-1}} + \cdots + \frac{\bar a_{n-1}}{x - \bar\alpha} \right) + (\cdots) $$
Thus we don't have to calculate the coefficients for the root at $\bar\alpha$. We just take the complex conjugate of the coefficients for $\alpha$.
Example 6.4.4 Consider the partial fraction expansion of
$$ \frac{1 + x}{x^2 + 1}. $$
The expansion has the form
$$ \frac{a_0}{x - i} + \frac{\bar a_0}{x + i}. $$
The coefficients are
$$ a_0 = \frac{1}{0!}\left. \left( \frac{1 + x}{x + i} \right) \right|_{x=i} = \frac{1}{2}(1 - i), $$
$$ \bar a_0 = \overline{\frac{1}{2}(1 - i)} = \frac{1}{2}(1 + i). $$
Thus we have
$$ \frac{1 + x}{x^2 + 1} = \frac{1 - i}{2(x - i)} + \frac{1 + i}{2(x + i)}. $$
6.5 Improper Integrals
If the range of integration is infinite or $f(x)$ is discontinuous at some points then $\int_a^b f(x)\,dx$ is called an improper integral.
Discontinuous Functions. If $f(x)$ is continuous on the interval $a \le x \le b$ except at the point $x = c$ where $a < c < b$ then
$$ \int_a^b f(x)\,dx = \lim_{\delta\to 0^+} \int_a^{c-\delta} f(x)\,dx + \lim_{\epsilon\to 0^+} \int_{c+\epsilon}^b f(x)\,dx $$
provided that both limits exist.
Example 6.5.1 Consider the integral of $\ln x$ on the interval $[0, 1]$. Since the logarithm has a singularity at $x = 0$, this is an improper integral. We write the integral in terms of a limit and evaluate the limit with L'Hospital's rule.
$$ \int_0^1 \ln x\,dx = \lim_{\delta\to 0} \int_\delta^1 \ln x\,dx
 = \lim_{\delta\to 0} [x\ln x - x]_\delta^1
 = 1\ln(1) - 1 - \lim_{\delta\to 0}(\delta\ln\delta - \delta)
 = -1 - \lim_{\delta\to 0}(\delta\ln\delta) $$
$$ = -1 - \lim_{\delta\to 0}\left( \frac{\ln\delta}{1/\delta} \right)
 = -1 - \lim_{\delta\to 0}\left( \frac{1/\delta}{-1/\delta^2} \right)
 = -1 $$
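The limiting process in Example 6.5.1 can be mimicked numerically (our addition): integrate on $[\delta, 1]$ with a midpoint rule and shrink $\delta$. The helper name and parameters are ours.

```python
import math

def improper_log_integral(delta, n=200000):
    # Midpoint rule on [delta, 1]; the improper integral is the delta -> 0 limit.
    dx = (1.0 - delta) / n
    return sum(math.log(delta + (i + 0.5) * dx) for i in range(n)) * dx

# The values should approach -1 as delta shrinks.
for delta in (1e-2, 1e-4, 1e-6):
    print(delta, improper_log_integral(delta))
```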
Example 6.5.2 Consider the integral of $x^a$ on the range $[0, 1]$. If $a < 0$ then there is a singularity at $x = 0$. First assume that $a \neq -1$.
$$ \int_0^1 x^a\,dx = \lim_{\delta\to 0^+}\left[ \frac{x^{a+1}}{a + 1} \right]_\delta^1
 = \frac{1}{a + 1} - \lim_{\delta\to 0^+} \frac{\delta^{a+1}}{a + 1} $$
This limit exists only for $a > -1$. Now consider the case that $a = -1$.
$$ \int_0^1 x^{-1}\,dx = \lim_{\delta\to 0^+}[\ln x]_\delta^1 = -\lim_{\delta\to 0^+}\ln\delta $$
This limit does not exist. We obtain the result,
$$ \int_0^1 x^a\,dx = \frac{1}{a + 1}, \quad \text{for } a > -1. $$
Infinite Limits of Integration. If the range of integration is infinite, say $[a, \infty)$, then we define the integral as
$$ \int_a^\infty f(x)\,dx = \lim_{\alpha\to\infty} \int_a^\alpha f(x)\,dx, $$
provided that the limit exists. If the range of integration is $(-\infty, \infty)$ then
$$ \int_{-\infty}^\infty f(x)\,dx = \lim_{\alpha\to-\infty} \int_\alpha^a f(x)\,dx + \lim_{\beta\to+\infty} \int_a^\beta f(x)\,dx. $$
Example 6.5.3
$$ \int_1^\infty \frac{\ln x}{x^2}\,dx
 = \int_1^\infty \ln x\left( \frac{d}{dx}\left( -\frac{1}{x} \right) \right) dx
 = \left[ -\frac{\ln x}{x} \right]_1^\infty - \int_1^\infty \left( -\frac{1}{x} \right)\frac{1}{x}\,dx $$
$$ = \lim_{x\to+\infty}\left( -\frac{\ln x}{x} \right) - \left[ \frac{1}{x} \right]_1^\infty
 = -\lim_{x\to+\infty}\left( \frac{1/x}{1} \right) - \lim_{x\to\infty}\frac{1}{x} + 1
 = 1 $$
Example 6.5.4 Consider the integral of $x^a$ on $[1, \infty)$. First assume that $a \neq -1$.
$$ \int_1^\infty x^a\,dx = \lim_{\beta\to+\infty}\left[ \frac{x^{a+1}}{a + 1} \right]_1^\beta
 = \lim_{\beta\to+\infty} \frac{\beta^{a+1}}{a + 1} - \frac{1}{a + 1} $$
The limit exists for $a < -1$. Now consider the case $a = -1$.
$$ \int_1^\infty x^{-1}\,dx = \lim_{\beta\to+\infty}[\ln x]_1^\beta = \lim_{\beta\to+\infty}\ln\beta $$
This limit does not exist. Thus we have
$$ \int_1^\infty x^a\,dx = -\frac{1}{a + 1}, \quad \text{for } a < -1. $$
6.6 Exercises
Fundamental Integration Formulas
Exercise 6.1 (mathematica/calculus/integral/fundamental.nb)
Evaluate $\int (2x + 3)^{10}\,dx$.
Exercise 6.2 (mathematica/calculus/integral/fundamental.nb)
Evaluate $\int \frac{(\ln x)^2}{x}\,dx$.
Exercise 6.3 (mathematica/calculus/integral/fundamental.nb)
Evaluate $\int x\sqrt{x^2 + 3}\,dx$.
Exercise 6.4 (mathematica/calculus/integral/fundamental.nb)
Evaluate $\int \frac{\cos x}{\sin x}\,dx$.
Exercise 6.5 (mathematica/calculus/integral/fundamental.nb)
Evaluate $\int \frac{x^2}{x^3 - 5}\,dx$.
Integration by Parts
Exercise 6.6 (mathematica/calculus/integral/parts.nb)
Evaluate $\int x\sin x\,dx$.
Exercise 6.7 (mathematica/calculus/integral/parts.nb)
Evaluate $\int x^3 e^{2x}\,dx$.
Partial Fractions
Exercise 6.8 (mathematica/calculus/integral/partial.nb)
Evaluate $\int \frac{1}{x^2 - 4}\,dx$.
Exercise 6.9 (mathematica/calculus/integral/partial.nb)
Evaluate $\int \frac{x + 1}{x^3 + x^2 - 6x}\,dx$.
Definite Integrals
Exercise 6.10 (mathematica/calculus/integral/definite.nb)
Use the result
$$ \int_a^b f(x)\,dx = \lim_{N\to\infty} \sum_{n=0}^{N-1} f(x_n)\Delta x $$
where $\Delta x = \frac{b - a}{N}$ and $x_n = a + n\Delta x$, to show that
$$ \int_0^1 x\,dx = \frac{1}{2}. $$
Exercise 6.11 (mathematica/calculus/integral/definite.nb)
Evaluate the following integral using integration by parts and the Pythagorean identity.
$$ \int_0^\pi \sin^2 x\,dx $$
Exercise 6.12 (mathematica/calculus/integral/definite.nb)
Prove that
$$ \frac{d}{dx} \int_{g(x)}^{f(x)} h(\xi)\,d\xi = h(f(x))f'(x) - h(g(x))g'(x). $$
(Don't use the limit definition of differentiation, use the Fundamental Theorem of Integral Calculus.)
Improper Integrals
Exercise 6.13 (mathematica/calculus/integral/improper.nb)
Evaluate $\int_0^4 \frac{1}{(x - 1)^2}\,dx$.
Exercise 6.14 (mathematica/calculus/integral/improper.nb)
Evaluate $\int_0^1 \frac{1}{\sqrt{x}}\,dx$.
Exercise 6.15 (mathematica/calculus/integral/improper.nb)
Evaluate $\int_0^\infty \frac{1}{x^2 + 4}\,dx$.
Taylor Series
Exercise 6.16 (mathematica/calculus/integral/taylor.nb)
a. Show that
$$ f(x) = f(0) + \int_0^x f'(x - \xi)\,d\xi. $$
b. From the above identity show that
$$ f(x) = f(0) + xf'(0) + \int_0^x \xi f''(x - \xi)\,d\xi. $$
c. Using induction, show that
$$ f(x) = f(0) + xf'(0) + \frac{1}{2}x^2 f''(0) + \cdots + \frac{1}{n!}x^n f^{(n)}(0) + \int_0^x \frac{1}{n!}\xi^n f^{(n+1)}(x - \xi)\,d\xi. $$
6.7 Hints
Fundamental Integration Formulas
Hint 6.1
Make the change of variables $u = 2x + 3$.
Hint 6.2
Make the change of variables $u = \ln x$.
Hint 6.3
Make the change of variables $u = x^2 + 3$.
Hint 6.4
Make the change of variables $u = \sin x$.
Hint 6.5
Make the change of variables $u = x^3 - 5$.
Integration by Parts
Hint 6.6
Let $u = x$, and $dv = \sin x\,dx$.
Hint 6.7
Perform integration by parts three successive times. For the first one let $u = x^3$ and $dv = e^{2x}\,dx$.
Partial Fractions
Hint 6.8
Expanding the integrand in partial fractions,
$$ \frac{1}{x^2 - 4} = \frac{1}{(x - 2)(x + 2)} = \frac{a}{x - 2} + \frac{b}{x + 2} $$
$$ 1 = a(x + 2) + b(x - 2) $$
Set $x = 2$ and $x = -2$ to solve for $a$ and $b$.
Hint 6.9
Expanding the integrand in partial fractions,
$$ \frac{x + 1}{x^3 + x^2 - 6x} = \frac{x + 1}{x(x - 2)(x + 3)} = \frac{a}{x} + \frac{b}{x - 2} + \frac{c}{x + 3} $$
$$ x + 1 = a(x - 2)(x + 3) + bx(x + 3) + cx(x - 2) $$
Set $x = 0$, $x = 2$ and $x = -3$ to solve for $a$, $b$ and $c$.
Definite Integrals
Hint 6.10
$$ \int_0^1 x\,dx = \lim_{N\to\infty} \sum_{n=0}^{N-1} x_n\Delta x = \lim_{N\to\infty} \sum_{n=0}^{N-1} (n\Delta x)\Delta x $$
Hint 6.11
Let $u = \sin x$ and $dv = \sin x\,dx$. Integration by parts will give you an equation for $\int_0^\pi \sin^2 x\,dx$.
Hint 6.12
Let $H'(x) = h(x)$ and evaluate the integral in terms of $H(x)$.
Improper Integrals
Hint 6.13
$$ \int_0^4 \frac{1}{(x - 1)^2}\,dx = \lim_{\delta\to 0^+} \int_0^{1-\delta} \frac{1}{(x - 1)^2}\,dx + \lim_{\epsilon\to 0^+} \int_{1+\epsilon}^4 \frac{1}{(x - 1)^2}\,dx $$
Hint 6.14
$$ \int_0^1 \frac{1}{\sqrt{x}}\,dx = \lim_{\epsilon\to 0^+} \int_\epsilon^1 \frac{1}{\sqrt{x}}\,dx $$
Hint 6.15
$$ \int \frac{1}{x^2 + a^2}\,dx = \frac{1}{a}\arctan\left( \frac{x}{a} \right) $$
Taylor Series
Hint 6.16
a. Evaluate the integral.
b. Use integration by parts to evaluate the integral.
c. Use integration by parts with $u = f^{(n+1)}(x - \xi)$ and $dv = \frac{1}{n!}\xi^n\,d\xi$.
6.8 Solutions
Fundamental Integration Formulas
Solution 6.1
$$ \int (2x + 3)^{10}\,dx $$
Let $u = 2x + 3$, $g(u) = x = \frac{u - 3}{2}$, $g'(u) = \frac{1}{2}$.
$$ \int (2x + 3)^{10}\,dx = \int u^{10}\frac{1}{2}\,du = \frac{u^{11}}{11}\frac{1}{2} = \frac{(2x + 3)^{11}}{22} $$
Solution 6.2
$$ \int \frac{(\ln x)^2}{x}\,dx = \int (\ln x)^2\frac{d(\ln x)}{dx}\,dx = \frac{(\ln x)^3}{3} $$
Solution 6.3
$$ \int x\sqrt{x^2 + 3}\,dx = \int \sqrt{x^2 + 3}\,\frac{1}{2}\frac{d(x^2)}{dx}\,dx = \frac{1}{2}\frac{(x^2 + 3)^{3/2}}{3/2} = \frac{(x^2 + 3)^{3/2}}{3} $$
Solution 6.4
$$ \int \frac{\cos x}{\sin x}\,dx = \int \frac{1}{\sin x}\frac{d(\sin x)}{dx}\,dx = \ln|\sin x| $$
Solution 6.5
$$ \int \frac{x^2}{x^3 - 5}\,dx = \int \frac{1}{x^3 - 5}\,\frac{1}{3}\frac{d(x^3)}{dx}\,dx = \frac{1}{3}\ln|x^3 - 5| $$
Integration by Parts
Solution 6.6
Let $u = x$, and $dv = \sin x\,dx$. Then $du = dx$ and $v = -\cos x$.
$$ \int x\sin x\,dx = -x\cos x + \int \cos x\,dx = -x\cos x + \sin x + C $$
Solution 6.7
Let $u = x^3$ and $dv = e^{2x}\,dx$. Then $du = 3x^2\,dx$ and $v = \frac{1}{2}e^{2x}$.
$$ \int x^3 e^{2x}\,dx = \frac{1}{2}x^3 e^{2x} - \frac{3}{2}\int x^2 e^{2x}\,dx $$
Let $u = x^2$ and $dv = e^{2x}\,dx$. Then $du = 2x\,dx$ and $v = \frac{1}{2}e^{2x}$.
$$ \int x^3 e^{2x}\,dx = \frac{1}{2}x^3 e^{2x} - \frac{3}{2}\left( \frac{1}{2}x^2 e^{2x} - \int x e^{2x}\,dx \right) $$
$$ \int x^3 e^{2x}\,dx = \frac{1}{2}x^3 e^{2x} - \frac{3}{4}x^2 e^{2x} + \frac{3}{2}\int x e^{2x}\,dx $$
Let $u = x$ and $dv = e^{2x}\,dx$. Then $du = dx$ and $v = \frac{1}{2}e^{2x}$.
$$ \int x^3 e^{2x}\,dx = \frac{1}{2}x^3 e^{2x} - \frac{3}{4}x^2 e^{2x} + \frac{3}{2}\left( \frac{1}{2}x e^{2x} - \frac{1}{2}\int e^{2x}\,dx \right) $$
$$ \int x^3 e^{2x}\,dx = \frac{1}{2}x^3 e^{2x} - \frac{3}{4}x^2 e^{2x} + \frac{3}{4}x e^{2x} - \frac{3}{8}e^{2x} + C $$
Partial Fractions
Solution 6.8
Expanding the integrand in partial fractions,
$$ \frac{1}{x^2 - 4} = \frac{1}{(x - 2)(x + 2)} = \frac{A}{x - 2} + \frac{B}{x + 2} $$
$$ 1 = A(x + 2) + B(x - 2) $$
Setting $x = 2$ yields $A = \frac{1}{4}$. Setting $x = -2$ yields $B = -\frac{1}{4}$. Now we can do the integral.
$$ \int \frac{1}{x^2 - 4}\,dx = \int \left( \frac{1}{4(x - 2)} - \frac{1}{4(x + 2)} \right) dx
 = \frac{1}{4}\ln|x - 2| - \frac{1}{4}\ln|x + 2| + C
 = \frac{1}{4}\ln\left| \frac{x - 2}{x + 2} \right| + C $$
Solution 6.9
Expanding the integrand in partial fractions,
$$ \frac{x + 1}{x^3 + x^2 - 6x} = \frac{x + 1}{x(x - 2)(x + 3)} = \frac{A}{x} + \frac{B}{x - 2} + \frac{C}{x + 3} $$
$$ x + 1 = A(x - 2)(x + 3) + Bx(x + 3) + Cx(x - 2) $$
Setting $x = 0$ yields $A = -\frac{1}{6}$. Setting $x = 2$ yields $B = \frac{3}{10}$. Setting $x = -3$ yields $C = -\frac{2}{15}$.
$$ \int \frac{x + 1}{x^3 + x^2 - 6x}\,dx = \int \left( -\frac{1}{6x} + \frac{3}{10(x - 2)} - \frac{2}{15(x + 3)} \right) dx $$
$$ = -\frac{1}{6}\ln|x| + \frac{3}{10}\ln|x - 2| - \frac{2}{15}\ln|x + 3| + C
 = \ln\frac{|x - 2|^{3/10}}{|x|^{1/6}|x + 3|^{2/15}} + C $$
Definite Integrals
Solution 6.10
$$ \int_0^1 x\,dx = \lim_{N\to\infty} \sum_{n=0}^{N-1} x_n\Delta x
 = \lim_{N\to\infty} \sum_{n=0}^{N-1} (n\Delta x)\Delta x
 = \lim_{N\to\infty} \Delta x^2 \sum_{n=0}^{N-1} n
 = \lim_{N\to\infty} \Delta x^2\,\frac{N(N - 1)}{2}
 = \lim_{N\to\infty} \frac{N(N - 1)}{2N^2}
 = \frac{1}{2} $$
Solution 6.11
Let $u = \sin x$ and $dv = \sin x\,dx$. Then $du = \cos x\,dx$ and $v = -\cos x$.
$$ \int_0^\pi \sin^2 x\,dx = [-\sin x\cos x]_0^\pi + \int_0^\pi \cos^2 x\,dx
 = \int_0^\pi \cos^2 x\,dx
 = \int_0^\pi (1 - \sin^2 x)\,dx
 = \pi - \int_0^\pi \sin^2 x\,dx $$
$$ 2\int_0^\pi \sin^2 x\,dx = \pi $$
$$ \int_0^\pi \sin^2 x\,dx = \frac{\pi}{2} $$
Solution 6.12
Let $H'(x) = h(x)$.
$$ \frac{d}{dx} \int_{g(x)}^{f(x)} h(\xi)\,d\xi = \frac{d}{dx}\left( H(f(x)) - H(g(x)) \right)
 = H'(f(x))f'(x) - H'(g(x))g'(x)
 = h(f(x))f'(x) - h(g(x))g'(x) $$
Improper Integrals
Solution 6.13
$$ \int_0^4 \frac{1}{(x - 1)^2}\,dx = \lim_{\delta\to 0^+} \int_0^{1-\delta} \frac{1}{(x - 1)^2}\,dx + \lim_{\epsilon\to 0^+} \int_{1+\epsilon}^4 \frac{1}{(x - 1)^2}\,dx $$
$$ = \lim_{\delta\to 0^+}\left[ -\frac{1}{x - 1} \right]_0^{1-\delta} + \lim_{\epsilon\to 0^+}\left[ -\frac{1}{x - 1} \right]_{1+\epsilon}^4
 = \lim_{\delta\to 0^+}\left( \frac{1}{\delta} - 1 \right) + \lim_{\epsilon\to 0^+}\left( -\frac{1}{3} + \frac{1}{\epsilon} \right)
 = \infty + \infty $$
The integral diverges.
Solution 6.14
$$ \int_0^1 \frac{1}{\sqrt{x}}\,dx = \lim_{\epsilon\to 0^+} \int_\epsilon^1 \frac{1}{\sqrt{x}}\,dx
 = \lim_{\epsilon\to 0^+} \left[ 2\sqrt{x} \right]_\epsilon^1
 = \lim_{\epsilon\to 0^+} 2(1 - \sqrt{\epsilon})
 = 2 $$
Solution 6.15
$$ \int_0^\infty \frac{1}{x^2 + 4}\,dx = \lim_{\alpha\to\infty} \int_0^\alpha \frac{1}{x^2 + 4}\,dx
 = \lim_{\alpha\to\infty} \left[ \frac{1}{2}\arctan\left( \frac{x}{2} \right) \right]_0^\alpha
 = \frac{1}{2}\left( \frac{\pi}{2} - 0 \right)
 = \frac{\pi}{4} $$
Solution 6.16
a.
f(0) +
_
x
0
f
t
(x ) d = f(0) + [f(x )]
x
0
= f(0) f(0) +f(x)
= f(x)
b.
f(0) +xf
t
(0) +
_
x
0
f
tt
(x ) d = f(0) +xf
t
(0) + [f
t
(x )]
x
0

_
x
0
f
t
(x ) d
= f(0) +xf
t
(0) xf
t
(0) [f(x )]
x
0
= f(0) f(0) +f(x)
= f(x)
132
c. Above we showed that the hypothesis holds for n = 0 and n = 1. Assume that it holds for some n = m 0.
f(x) = f(0) +xf
t
(0) +
1
2
x
2
f
tt
(0) + +
1
n!
x
n
f
(n)
(0) +
_
x
0
1
n!

n
f
(n+1)
(x ) d
= f(0) +xf
t
(0) +
1
2
x
2
f
tt
(0) + +
1
n!
x
n
f
(n)
(0) +
_
1
(n + 1)!

n+1
f
(n+1)
(x )
_
x
0

_
x
0

1
(n + 1)!

n+1
f
(n+2)
(x ) d
= f(0) +xf
t
(0) +
1
2
x
2
f
tt
(0) + +
1
n!
x
n
f
(n)
(0) +
1
(n + 1)!
x
n+1
f
(n+1)
(0)
+
_
x
0
1
(n + 1)!

n+1
f
(n+2)
(x ) d
This shows that the hypothesis holds for n = m + 1. By induction, the hypothesis hold for all n 0.
133
Chapter 7
Vector Calculus
7.1 Vector Functions
Vector-valued Functions. A vector-valued function, $\mathbf{r}(t)$, is a mapping $\mathbf{r}: \mathbb{R} \to \mathbb{R}^n$ that assigns a vector to each value of $t$.
$$ \mathbf{r}(t) = r_1(t)\mathbf{e}_1 + \cdots + r_n(t)\mathbf{e}_n. $$
An example of a vector-valued function is the position of an object in space as a function of time. The function is continuous at a point $t = \tau$ if
$$ \lim_{t\to\tau} \mathbf{r}(t) = \mathbf{r}(\tau). $$
This occurs if and only if the component functions are continuous. The function is differentiable if
$$ \frac{d\mathbf{r}}{dt} \equiv \lim_{\Delta t\to 0} \frac{\mathbf{r}(t + \Delta t) - \mathbf{r}(t)}{\Delta t} $$
exists. This occurs if and only if the component functions are differentiable.
If $\mathbf{r}(t)$ represents the position of a particle at time $t$, then the velocity and acceleration of the particle are
$$ \frac{d\mathbf{r}}{dt} \quad \text{and} \quad \frac{d^2\mathbf{r}}{dt^2}, $$
respectively. The speed of the particle is $|\mathbf{r}'(t)|$.
Differentiation Formulas. Let $\mathbf{f}(t)$ and $\mathbf{g}(t)$ be vector functions and $a(t)$ be a scalar function. By writing out components you can verify the differentiation formulas:
$$ \frac{d}{dt}(\mathbf{f}\cdot\mathbf{g}) = \mathbf{f}'\cdot\mathbf{g} + \mathbf{f}\cdot\mathbf{g}' $$
$$ \frac{d}{dt}(\mathbf{f}\times\mathbf{g}) = \mathbf{f}'\times\mathbf{g} + \mathbf{f}\times\mathbf{g}' $$
$$ \frac{d}{dt}(a\mathbf{f}) = a'\mathbf{f} + a\mathbf{f}' $$
7.2 Gradient, Divergence and Curl
Scalar and Vector Fields. A scalar field is a function of position $u(\mathbf{x})$ that assigns a scalar to each point in space. A function that gives the temperature of a material is an example of a scalar field. In two dimensions, you can graph a scalar field as a surface plot, (Figure 7.1), with the vertical axis for the value of the function.
A vector field is a function of position $\mathbf{u}(\mathbf{x})$ that assigns a vector to each point in space. Examples of vector fields are functions that give the acceleration due to gravity or the velocity of a fluid. You can graph a vector field in two or three dimensions by drawing vectors at regularly spaced points. (See Figure 7.1 for a vector field in two dimensions.)
Partial Derivatives of Scalar Fields. Consider a scalar field $u(\mathbf{x})$. The partial derivative of $u$ with respect to $x_k$ is the derivative of $u$ in which $x_k$ is considered to be a variable and the remaining arguments are considered to be parameters. The partial derivative is denoted $\frac{\partial}{\partial x_k}u(\mathbf{x})$, $\frac{\partial u}{\partial x_k}$ or $u_{x_k}$ and is defined
$$ \frac{\partial u}{\partial x_k} \equiv \lim_{\Delta x\to 0} \frac{u(x_1, \ldots, x_k + \Delta x, \ldots, x_n) - u(x_1, \ldots, x_k, \ldots, x_n)}{\Delta x}. $$
Figure 7.1: A Scalar Field and a Vector Field
Partial derivatives have the same differentiation formulas as ordinary derivatives.
Consider a scalar field in $\mathbb{R}^3$, $u(x, y, z)$. Higher derivatives of $u$ are denoted:
$$ u_{xx} \equiv \frac{\partial^2 u}{\partial x^2} \equiv \frac{\partial}{\partial x}\frac{\partial u}{\partial x}, \qquad
 u_{xy} \equiv \frac{\partial^2 u}{\partial x\partial y} \equiv \frac{\partial}{\partial x}\frac{\partial u}{\partial y}, \qquad
 u_{xxyz} \equiv \frac{\partial^4 u}{\partial x^2\partial y\partial z} \equiv \frac{\partial^2}{\partial x^2}\frac{\partial}{\partial y}\frac{\partial u}{\partial z}. $$
If $u_{xy}$ and $u_{yx}$ are continuous, then
$$ \frac{\partial^2 u}{\partial x\partial y} = \frac{\partial^2 u}{\partial y\partial x}. $$
This is referred to as the equality of mixed partial derivatives.
This is referred to as the equality of mixed partial derivatives.
Partial Derivatives of Vector Fields. Consider a vector eld u(x). The partial derivative of u with respect
to x
k
is denoted

x
k
u(x),
u
x
k
or u
x
k
and is dened
u
x
k
lim
x0
u(x
1
, . . . , x
k
+ x, . . . , x
n
) u(x
1
, . . . , x
k
, . . . , x
n
)
x
.
Partial derivatives of vector elds have the same dierentiation formulas as ordinary derivatives.
Gradient. We introduce the vector dierential operator,


x
1
e
1
+ +

x
n
e
n
,
which is known as del or nabla. In R
3
it is


x
i +

y
j +

z
k.
Let u(x) be a dierential scalar eld. The gradient of u is,
u
u
x
1
e
1
+ +
u
x
n
e
n
,
137
Directional Derivative. Suppose you are standing on some terrain. The slope of the ground in a particular direction is the directional derivative of the elevation in that direction. Consider a differentiable scalar field, $u(\mathbf{x})$. The derivative of the function in the direction of the unit vector $\mathbf{a}$ is the rate of change of the function in that direction. Thus the directional derivative, $D_{\mathbf{a}}u$, is defined:
$$ D_{\mathbf{a}}u(\mathbf{x}) = \lim_{\epsilon\to 0} \frac{u(\mathbf{x} + \epsilon\mathbf{a}) - u(\mathbf{x})}{\epsilon}
 = \lim_{\epsilon\to 0} \frac{u(x_1 + \epsilon a_1, \ldots, x_n + \epsilon a_n) - u(x_1, \ldots, x_n)}{\epsilon} $$
$$ = \lim_{\epsilon\to 0} \frac{\left( u(\mathbf{x}) + \epsilon a_1 u_{x_1}(\mathbf{x}) + \cdots + \epsilon a_n u_{x_n}(\mathbf{x}) + O(\epsilon^2) \right) - u(\mathbf{x})}{\epsilon}
 = a_1 u_{x_1}(\mathbf{x}) + \cdots + a_n u_{x_n}(\mathbf{x}) $$
$$ D_{\mathbf{a}}u(\mathbf{x}) = \nabla u(\mathbf{x}) \cdot \mathbf{a}. $$
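The identity $D_{\mathbf{a}}u = \nabla u \cdot \mathbf{a}$ can be illustrated numerically (our addition, not in the original text). The sample field, sample point, and helper names below are all our own choices.

```python
import math

def u(x, y):
    # A sample scalar field (our choice, not from the text)
    return x ** 2 * y + math.sin(y)

def grad_u(x, y):
    # Gradient of the sample field, computed by hand
    return (2 * x * y, x ** 2 + math.cos(y))

def directional_derivative(x, y, ax, ay, eps=1e-6):
    # Difference quotient (u(x + eps a) - u(x)) / eps for a unit vector a
    return (u(x + eps * ax, y + eps * ay) - u(x, y)) / eps

x, y = 1.2, 0.5
ax, ay = 3 / 5, 4 / 5  # a unit vector
gx, gy = grad_u(x, y)
print(directional_derivative(x, y, ax, ay), gx * ax + gy * ay)
```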
Tangent to a Surface. The gradient, $\nabla f$, is orthogonal to the surface $f(\mathbf{x}) = 0$. Consider a point $\boldsymbol{\xi}$ on the surface. Let the differential $d\mathbf{r} = dx_1\mathbf{e}_1 + \cdots + dx_n\mathbf{e}_n$ lie in the tangent plane at $\boldsymbol{\xi}$. Then
$$ df = \frac{\partial f}{\partial x_1}dx_1 + \cdots + \frac{\partial f}{\partial x_n}dx_n = 0 $$
since $f(\mathbf{x}) = 0$ on the surface. Then
$$ \nabla f \cdot d\mathbf{r} = \left( \frac{\partial f}{\partial x_1}\mathbf{e}_1 + \cdots + \frac{\partial f}{\partial x_n}\mathbf{e}_n \right) \cdot (dx_1\mathbf{e}_1 + \cdots + dx_n\mathbf{e}_n)
 = \frac{\partial f}{\partial x_1}dx_1 + \cdots + \frac{\partial f}{\partial x_n}dx_n = 0 $$
Thus $\nabla f$ is orthogonal to the tangent plane and hence to the surface.
Example 7.2.1 Consider the paraboloid, $x^2 + y^2 - z = 0$. We want to find the tangent plane to the surface at the point $(1, 1, 2)$. The gradient is
$$ \nabla f = 2x\mathbf{i} + 2y\mathbf{j} - \mathbf{k}. $$
At the point $(1, 1, 2)$ this is
$$ \nabla f(1, 1, 2) = 2\mathbf{i} + 2\mathbf{j} - \mathbf{k}. $$
We know a point on the tangent plane, $(1, 1, 2)$, and the normal, $\nabla f(1, 1, 2)$. The equation of the plane is
$$ \nabla f(1, 1, 2) \cdot (x, y, z) = \nabla f(1, 1, 2) \cdot (1, 1, 2) $$
$$ 2x + 2y - z = 2 $$
The gradient of the function $f(\mathbf{x}) = 0$, $\nabla f(\mathbf{x})$, is in the direction of the maximum directional derivative. The magnitude of the gradient, $|\nabla f(\mathbf{x})|$, is the value of the directional derivative in that direction. To derive this, note that
$$ D_{\mathbf{a}}f = \nabla f \cdot \mathbf{a} = |\nabla f|\cos\theta, $$
where $\theta$ is the angle between $\nabla f$ and $\mathbf{a}$. $D_{\mathbf{a}}f$ is maximum when $\theta = 0$, i.e. when $\mathbf{a}$ is in the same direction as $\nabla f$. In this direction, $D_{\mathbf{a}}f = |\nabla f|$. To use the elevation example, $\nabla f$ points in the uphill direction and $|\nabla f|$ is the uphill slope.
Example 7.2.2 Suppose that the two surfaces $f(\mathbf{x}) = 0$ and $g(\mathbf{x}) = 0$ intersect at the point $\mathbf{x} = \boldsymbol{\xi}$. What is the angle between their tangent planes at that point? First we note that the angle between the tangent planes is by definition the angle between their normals. These normals are in the direction of $\nabla f(\boldsymbol{\xi})$ and $\nabla g(\boldsymbol{\xi})$. (We assume these are nonzero.) The angle, $\theta$, between the tangent planes to the surfaces is
$$ \theta = \arccos\left( \frac{\nabla f(\boldsymbol{\xi}) \cdot \nabla g(\boldsymbol{\xi})}{|\nabla f(\boldsymbol{\xi})|\,|\nabla g(\boldsymbol{\xi})|} \right). $$
Example 7.2.3 Let $u$ be the distance from the origin:
$$ u(\mathbf{x}) = \sqrt{\mathbf{x}\cdot\mathbf{x}} = \sqrt{x_i x_i}. $$
In three dimensions, this is
$$ u(x, y, z) = \sqrt{x^2 + y^2 + z^2}. $$
The gradient of $u$, $\nabla u(\mathbf{x})$, is a unit vector in the direction of $\mathbf{x}$. The gradient is:
$$ \nabla u(\mathbf{x}) = \left( \frac{x_1}{\sqrt{\mathbf{x}\cdot\mathbf{x}}}, \ldots, \frac{x_n}{\sqrt{\mathbf{x}\cdot\mathbf{x}}} \right) = \frac{x_i\mathbf{e}_i}{\sqrt{x_j x_j}}. $$
In three dimensions, we have
$$ \nabla u(x, y, z) = \left( \frac{x}{\sqrt{x^2 + y^2 + z^2}}, \frac{y}{\sqrt{x^2 + y^2 + z^2}}, \frac{z}{\sqrt{x^2 + y^2 + z^2}} \right). $$
This is a unit vector because the sum of the squared components sums to unity.
$$ \nabla u \cdot \nabla u = \frac{x_i\mathbf{e}_i}{\sqrt{x_j x_j}} \cdot \frac{x_k\mathbf{e}_k}{\sqrt{x_l x_l}} = \frac{x_i x_i}{x_j x_j} = 1 $$
Figure 7.2 shows a plot of the vector field of $\nabla u$ in two dimensions.
Example 7.2.4 Consider an ellipse. An implicit equation of an ellipse is
x
2
a
2
+
y
2
b
2
= 1.
We can also express an ellipse as u(x, y) +v(x, y) = c where u and v are the distance from the two foci. That is,
an ellipse is the set of points such that the sum of the distances from the two foci is a constant. Let n = (u+v).
140
Figure 7.2: The gradient of the distance from the origin.

This is a vector which is orthogonal to the ellipse when evaluated on the surface. Let $\mathbf{t}$ be a unit tangent to the surface. Since $\mathbf{n}$ and $\mathbf{t}$ are orthogonal,
$\mathbf{n} \cdot \mathbf{t} = 0$
$\nabla(u + v) \cdot \mathbf{t} = 0$
$\nabla u \cdot \mathbf{t} = \nabla v \cdot (-\mathbf{t}).$
Since these are unit vectors, the angle between $\nabla u$ and $\mathbf{t}$ is equal to the angle between $\nabla v$ and $-\mathbf{t}$. In other words: If we draw rays from the foci to a point on the ellipse, the rays make equal angles with the ellipse. If the ellipse were a reflective surface, a wave starting at one focus would be reflected from the ellipse and travel to the other focus. See Figure 7.3. This result also holds for ellipsoids, $u(x, y, z) + v(x, y, z) = c$.
We see that an ellipsoidal dish could be used to collect spherical waves, (waves emanating from a point). If the dish is shaped so that the source of the waves is located at one focus and a collector is placed at the second, then any wave starting at the source and reflecting off the dish will travel to the collector. See Figure 7.4.
Figure 7.3: An ellipse and rays from the foci.

Figure 7.4: An elliptical dish.
7.3 Exercises
Vector Functions
Exercise 7.1
Consider the parametric curve
$\mathbf{r} = \cos\left(\frac{t}{2}\right)\mathbf{i} + \sin\left(\frac{t}{2}\right)\mathbf{j}.$
Calculate $\frac{d\mathbf{r}}{dt}$ and $\frac{d^2\mathbf{r}}{dt^2}$. Plot the position and some velocity and acceleration vectors.
Exercise 7.2
Let r(t) be the position of an object moving with constant speed. Show that the acceleration of the object is
orthogonal to the velocity of the object.
Vector Fields
Exercise 7.3
Consider the paraboloid $x^2 + y^2 - z = 0$. What is the angle between the two tangent planes that touch the surface at (1, 1, 2) and (1, -1, 2)? What are the equations of the tangent planes at these points?
Exercise 7.4
Consider the paraboloid $x^2 + y^2 - z = 0$. What is the point on the paraboloid that is closest to (1, 0, 0)?
7.4 Hints
Vector Functions
Hint 7.1
Plot the velocity and acceleration vectors at regular intervals along the path of motion.
Hint 7.2
If $\mathbf{r}(t)$ has constant speed, then $|\mathbf{r}'(t)| = c$. The condition that the acceleration is orthogonal to the velocity can be stated mathematically in terms of the dot product, $\mathbf{r}''(t) \cdot \mathbf{r}'(t) = 0$. Write the condition of constant speed in terms of a dot product and go from there.
Vector Fields
Hint 7.3
The angle between two planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is
$\theta = \arccos\left( \frac{\langle 2, 2, -1 \rangle \cdot \langle 2, -2, -1 \rangle}{|\langle 2, 2, -1 \rangle|\,|\langle 2, -2, -1 \rangle|} \right).$
The equation of a plane orthogonal to $\mathbf{a}$ and passing through the point $\mathbf{b}$ is $\mathbf{a} \cdot \mathbf{x} = \mathbf{a} \cdot \mathbf{b}$.
Hint 7.4
Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to (1, 0, 0). We can express this using the gradient and the cross product. If $(x, y, z)$ is the closest point on the paraboloid, then a vector orthogonal to the surface there is $\nabla f = \langle 2x, 2y, -1 \rangle$. The vector from the surface to the point (1, 0, 0) is $\langle 1 - x, -y, -z \rangle$. These two vectors are parallel if their cross product is zero.
7.5 Solutions
Vector Functions
Solution 7.1
The velocity is
$\mathbf{r}' = -\frac{1}{2}\sin\left(\frac{t}{2}\right)\mathbf{i} + \frac{1}{2}\cos\left(\frac{t}{2}\right)\mathbf{j}.$
The acceleration is
$\mathbf{r}'' = -\frac{1}{4}\cos\left(\frac{t}{2}\right)\mathbf{i} - \frac{1}{4}\sin\left(\frac{t}{2}\right)\mathbf{j}.$
See Figure 7.5 for plots of position, velocity and acceleration.

Figure 7.5: A Graph of Position and Velocity and of Position and Acceleration
Solution 7.2
If $\mathbf{r}(t)$ has constant speed, then $|\mathbf{r}'(t)| = c$. The condition that the acceleration is orthogonal to the velocity can be stated mathematically in terms of the dot product, $\mathbf{r}''(t) \cdot \mathbf{r}'(t) = 0$. Note that we can write the condition of constant speed in terms of a dot product,
$\sqrt{\mathbf{r}'(t) \cdot \mathbf{r}'(t)} = c,$
$\mathbf{r}'(t) \cdot \mathbf{r}'(t) = c^2.$
Differentiating this equation yields,
$\mathbf{r}''(t) \cdot \mathbf{r}'(t) + \mathbf{r}'(t) \cdot \mathbf{r}''(t) = 0$
$\mathbf{r}''(t) \cdot \mathbf{r}'(t) = 0.$
This shows that the acceleration is orthogonal to the velocity.
Vector Fields
Solution 7.3
The gradient, which is orthogonal to the surface when evaluated there, is $\nabla f = 2x\mathbf{i} + 2y\mathbf{j} - \mathbf{k}$. Thus $2\mathbf{i} + 2\mathbf{j} - \mathbf{k}$ and $2\mathbf{i} - 2\mathbf{j} - \mathbf{k}$ are orthogonal to the paraboloid, (and hence the tangent planes), at the points (1, 1, 2) and (1, -1, 2), respectively. The angle between the tangent planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is
$\theta = \arccos\left( \frac{\langle 2, 2, -1 \rangle \cdot \langle 2, -2, -1 \rangle}{|\langle 2, 2, -1 \rangle|\,|\langle 2, -2, -1 \rangle|} \right) = \arccos\left(\frac{1}{9}\right) \approx 1.45946.$
Recall that the equation of a plane orthogonal to $\mathbf{a}$ and passing through the point $\mathbf{b}$ is $\mathbf{a} \cdot \mathbf{x} = \mathbf{a} \cdot \mathbf{b}$. The equations of the tangent planes are
$\langle 2, \pm 2, -1 \rangle \cdot \langle x, y, z \rangle = \langle 2, \pm 2, -1 \rangle \cdot \langle 1, \pm 1, 2 \rangle,$
$2x \pm 2y - z = 2.$
The paraboloid and the tangent planes are shown in Figure 7.6.
Figure 7.6: Paraboloid and Two Tangent Planes
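The value $\arccos(1/9)$ above is easy to confirm with a few lines of Python (a sketch, not part of the original text):

```python
import math

# Angle between the normals <2, 2, -1> and <2, -2, -1> of the two
# tangent planes of the paraboloid x^2 + y^2 - z = 0.
n1 = (2.0, 2.0, -1.0)
n2 = (2.0, -2.0, -1.0)

dot = sum(a * b for a, b in zip(n1, n2))  # = 1
norms = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))  # = 9
theta = math.acos(dot / norms)
print(theta)  # approximately 1.45946 radians, i.e. arccos(1/9)
```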
Solution 7.4
Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to (1, 0, 0). We can express this using the gradient and the cross product. If $(x, y, z)$ is the closest point on the paraboloid, then a vector orthogonal to the surface there is $\nabla f = \langle 2x, 2y, -1 \rangle$. The vector from the surface to the point (1, 0, 0) is $\langle 1 - x, -y, -z \rangle$. These two vectors are parallel if their cross product is zero,
$\langle 2x, 2y, -1 \rangle \times \langle 1 - x, -y, -z \rangle = \langle -y - 2yz, -1 + x + 2xz, -2y \rangle = \mathbf{0}.$
This gives us the three equations,
$-y - 2yz = 0,$
$-1 + x + 2xz = 0,$
$-2y = 0.$
The third equation requires that $y = 0$. The first equation then becomes trivial and we are left with the second equation,
$-1 + x + 2xz = 0.$
Substituting $z = x^2 + y^2$ into this equation yields,
$2x^3 + x - 1 = 0.$
The only real valued solution of this polynomial is
$x = \frac{\left(9 + \sqrt{87}\right)^{2/3} - 6^{1/3}}{6^{2/3}\left(9 + \sqrt{87}\right)^{1/3}} \approx 0.589755.$
Thus the closest point to (1, 0, 0) on the paraboloid is
$\left( \frac{\left(9 + \sqrt{87}\right)^{2/3} - 6^{1/3}}{6^{2/3}\left(9 + \sqrt{87}\right)^{1/3}}, \; 0, \; \left( \frac{\left(9 + \sqrt{87}\right)^{2/3} - 6^{1/3}}{6^{2/3}\left(9 + \sqrt{87}\right)^{1/3}} \right)^{2} \right) \approx (0.589755, 0, 0.34781).$
The closest point is shown graphically in Figure 7.7.
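The closed-form root can be cross-checked numerically; here is a short bisection sketch (not from the text) for the real root of $2x^3 + x - 1 = 0$:

```python
# Find the real root of 2x^3 + x - 1 = 0 by bisection and form the
# closest point (x, 0, x^2) on the paraboloid.
def f(x):
    return 2.0 * x**3 + x - 1.0

lo, hi = 0.0, 1.0   # f(0) = -1 < 0 < f(1) = 2, so a root lies between
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid

root = 0.5 * (lo + hi)
closest = (root, 0.0, root**2)
print(closest)  # approximately (0.589755, 0.0, 0.34781)
```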
Figure 7.7: Paraboloid, Tangent Plane and Line Connecting (1, 0, 0) to Closest Point
Part III
Functions of a Complex Variable
Chapter 8
Complex Numbers
For every complex problem, there is a solution that is simple, neat, and wrong.
- H. L. Mencken
8.1 Complex Numbers
When you started algebra, you learned that the quadratic equation $x^2 + 2ax + b = 0$ has either two, one or no solutions. For example:
$x^2 - 3x + 2 = 0$ has the two solutions $x = 1$ and $x = 2$.
$x^2 - 2x + 1 = 0$ has the one solution $x = 1$.
$x^2 + 1 = 0$ has no solutions.
This is a little unsatisfactory. We can formally solve the general quadratic equation.
$x^2 + 2ax + b = 0$
$(x + a)^2 = a^2 - b$
$x = -a \pm \sqrt{a^2 - b}$
However, the solutions are defined only when $a^2 \geq b$. The square root function, $\sqrt{x}$, is a bijection from $\mathbb{R}^{0+}$ to $\mathbb{R}^{0+}$. We cannot solve $x^2 = -1$ because $\sqrt{-1}$ is not defined. To overcome this apparent shortcoming of the real number system, we create a new symbolic constant, $i \equiv \sqrt{-1}$. Now we can express the solutions of $x^2 = -1$ as $x = i$ and $x = -i$. These satisfy the equation since $i^2 = \left(\sqrt{-1}\right)^2 = -1$ and $(-i)^2 = \left(-\sqrt{-1}\right)^2 = -1$. Note that we can express the square root of any negative real number in terms of $i$: $\sqrt{-r} = \sqrt{-1}\sqrt{r} = i\sqrt{r}$. We call any number of the form $ib$, $b \in \mathbb{R}$, a pure imaginary number.¹ We call numbers of the form $a + ib$, where $a, b \in \mathbb{R}$, complex numbers.²
The quadratic with real coefficients, $x^2 + 2ax + b = 0$, has solutions $x = -a \pm \sqrt{a^2 - b}$. The solutions are real-valued only if $a^2 - b \geq 0$. If not, then we can define solutions as complex numbers. If the discriminant is negative, then we write $x = -a \pm i\sqrt{b - a^2}$. Thus every quadratic polynomial has exactly two solutions, counting multiplicities. The fundamental theorem of algebra states that an $n^{\text{th}}$ degree polynomial with complex coefficients has $n$, not necessarily distinct, complex roots. We will prove this result later using the theory of functions of a complex variable.
Consider the complex number $z = x + iy$, $(x, y \in \mathbb{R})$. The real part of $z$ is $\Re(z) = x$; the imaginary part of $z$ is $\Im(z) = y$. Two complex numbers, $z_1 = x_1 + iy_1$ and $z_2 = x_2 + iy_2$, are equal if and only if $x_1 = x_2$ and $y_1 = y_2$.
The complex conjugate³ of $z = x + iy$ is $\bar{z} = x - iy$. The notation $z^* = x - iy$ is also used.
¹ "Imaginary" is an unfortunate term. Real numbers are artificial; constructs of the mind. Real numbers are no more real than imaginary numbers.
² Here "complex" means composed of two or more parts, not hard to separate, analyze, or solve. Those who disagree have a complex number complex.
³ Conjugate: having features in common but opposite or inverse in some particular.
The set of complex numbers, $\mathbb{C}$, forms a field. That essentially means that we can do arithmetic with complex numbers. We treat $i$ as a symbolic constant with the property that $i^2 = -1$. The field of complex numbers satisfies the following properties: (Let $z, z_1, z_2, z_3 \in \mathbb{C}$.)
1. Closure under addition and multiplication.
$z_1 + z_2 = (x_1 + iy_1) + (x_2 + iy_2) = (x_1 + x_2) + i(y_1 + y_2) \in \mathbb{C}$
$z_1 z_2 = (x_1 + iy_1)(x_2 + iy_2) = (x_1 x_2 - y_1 y_2) + i(x_1 y_2 + x_2 y_1) \in \mathbb{C}$
2. Commutativity of addition and multiplication. $z_1 + z_2 = z_2 + z_1$. $z_1 z_2 = z_2 z_1$.
3. Associativity of addition and multiplication. $(z_1 + z_2) + z_3 = z_1 + (z_2 + z_3)$. $(z_1 z_2) z_3 = z_1 (z_2 z_3)$.
4. Distributive law. $z_1 (z_2 + z_3) = z_1 z_2 + z_1 z_3$.
5. Identity with respect to addition and multiplication. $z + 0 = z$. $z(1) = z$.
6. Inverse with respect to addition. $z + (-z) = (x + iy) + (-x - iy) = 0$.
7. Inverse with respect to multiplication for nonzero numbers. $z z^{-1} = 1$, where
$z^{-1} = \frac{1}{z} = \frac{1}{x + iy} = \frac{x - iy}{x^2 + y^2} = \frac{x}{x^2 + y^2} - i\frac{y}{x^2 + y^2}.$
Complex Conjugate. Using the field properties of complex numbers, we can derive the following properties of the complex conjugate, $\bar{z} = x - iy$.
1. $\overline{(\bar{z})} = z$,
2. $\overline{z + \zeta} = \bar{z} + \bar{\zeta}$,
3. $\overline{z\zeta} = \bar{z}\,\bar{\zeta}$,
4. $\overline{\left( \dfrac{z}{\zeta} \right)} = \dfrac{\bar{z}}{\bar{\zeta}}$.
8.2 The Complex Plane
We can denote a complex number $z = x + iy$ as an ordered pair of real numbers $(x, y)$. Thus we can represent a complex number as a point in $\mathbb{R}^2$ where the $x$ component is the real part and the $y$ component is the imaginary part of $z$. This is called the complex plane or the Argand diagram. (See Figure 8.1.)

Figure 8.1: The Complex Plane
There are two ways of describing a point in the complex plane: an ordered pair of coordinates $(x, y)$ that give the horizontal and vertical offset from the origin, or the distance $r$ from the origin and the angle $\theta$ from the positive horizontal axis. The angle $\theta$ is not unique. It is only determined up to an additive integer multiple of $2\pi$.
Modulus. The magnitude or modulus of a complex number is the distance of the point from the origin. It is defined as $|z| = |x + iy| = \sqrt{x^2 + y^2}$. Note that $z\bar{z} = (x + iy)(x - iy) = x^2 + y^2 = |z|^2$. The modulus has the following properties.
1. $|z_1 z_2| = |z_1|\,|z_2|$
2. $\left| \dfrac{z_1}{z_2} \right| = \dfrac{|z_1|}{|z_2|}$ for $z_2 \neq 0$.
3. $|z_1 + z_2| \leq |z_1| + |z_2|$
4. $|z_1 + z_2| \geq \big|\,|z_1| - |z_2|\,\big|$
We could prove the first two properties by expanding in $x + iy$ form, but it would be fairly messy. The proofs will become simple after polar form has been introduced. The second two properties follow from the triangle inequalities in geometry. This will become apparent after the relationship between complex numbers and vectors is introduced. One can show that
$|z_1 z_2 \cdots z_n| = |z_1|\,|z_2| \cdots |z_n|$
and
$|z_1 + z_2 + \cdots + z_n| \leq |z_1| + |z_2| + \cdots + |z_n|$
with proof by induction.
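These properties are convenient to spot-check numerically before proving them; a sketch (not from the text) using random samples:

```python
import random

# Spot-check the four modulus properties with random complex numbers.
random.seed(0)
for _ in range(1000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(abs(z1 * z2) - abs(z1) * abs(z2)) < 1e-9        # property 1
    if z2 != 0:
        assert abs(abs(z1 / z2) - abs(z1) / abs(z2)) < 1e-9    # property 2
    assert abs(z1 + z2) <= abs(z1) + abs(z2) + 1e-12           # property 3
    assert abs(z1 + z2) >= abs(abs(z1) - abs(z2)) - 1e-12      # property 4
print("all four modulus properties hold on the samples")
```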
Argument. The argument of a complex number is the angle that the vector with tail at the origin and head at $z = x + iy$ makes with the positive $x$-axis. The argument is denoted $\arg(z)$. Note that the argument is defined for all nonzero numbers and is only determined up to an additive integer multiple of $2\pi$. That is, the argument of a complex number is the set of values: $\{\theta + 2\pi n \mid n \in \mathbb{Z}\}$. The principal argument of a complex number is that angle in the set $\arg(z)$ which lies in the range $(-\pi, \pi]$. The principal argument is denoted $\operatorname{Arg}(z)$. We prove the following identities in Exercise 8.7.
$\arg(z\zeta) = \arg(z) + \arg(\zeta)$
$\operatorname{Arg}(z\zeta) \neq \operatorname{Arg}(z) + \operatorname{Arg}(\zeta)$
$\arg(z^2) = \arg(z) + \arg(z) \neq 2\arg(z)$
Example 8.2.1 Consider the equation $|z - 1 - i| = 2$. The set of points satisfying this equation is a circle of radius 2 and center at $1 + i$ in the complex plane. You can see this by noting that $|z - 1 - i|$ is the distance from the point $(1, 1)$. (See Figure 8.2.)

Figure 8.2: Solution of $|z - 1 - i| = 2$

Another way to derive this is to substitute $z = x + iy$ into the equation.
$|x + iy - 1 - i| = 2$
$\sqrt{(x - 1)^2 + (y - 1)^2} = 2$
$(x - 1)^2 + (y - 1)^2 = 4$
This is the analytic geometry equation for a circle of radius 2 centered about $(1, 1)$.
Example 8.2.2 Consider the curve described by
$|z| + |z - 2| = 4.$
Note that $|z|$ is the distance from the origin in the complex plane and $|z - 2|$ is the distance from $z = 2$. The equation is
(distance from (0, 0)) + (distance from (2, 0)) = 4.
From geometry, we know that this is an ellipse with foci at $(0, 0)$ and $(2, 0)$, semi-major axis 2, and semi-minor axis $\sqrt{3}$. (See Figure 8.3.)

Figure 8.3: Solution of $|z| + |z - 2| = 4$
We can use the substitution $z = x + iy$ to get the equation in an algebraic form.
$|z| + |z - 2| = 4$
$|x + iy| + |x + iy - 2| = 4$
$\sqrt{x^2 + y^2} + \sqrt{(x - 2)^2 + y^2} = 4$
$x^2 + y^2 = 16 - 8\sqrt{(x - 2)^2 + y^2} + x^2 - 4x + 4 + y^2$
$x - 5 = -2\sqrt{(x - 2)^2 + y^2}$
$x^2 - 10x + 25 = 4x^2 - 16x + 16 + 4y^2$
$\frac{1}{4}(x - 1)^2 + \frac{1}{3}y^2 = 1$
Thus we have the standard form for an equation describing an ellipse.
8.3 Polar Form
Polar Form. A complex number written as $z = x + iy$ is said to be in Cartesian form, or $a + ib$ form. We can convert this representation to polar form, $z = r(\cos\theta + i\sin\theta)$, using trigonometry. Here $r = |z|$ is the modulus and $\theta = \arctan(x, y)$ is the argument of $z$. The argument is the angle between the $x$ axis and the vector with its head at $(x, y)$. (See Figure 8.4.) Note that $\theta$ is not unique. If $z = r(\cos\theta + i\sin\theta)$ then $z = r(\cos(\theta + 2n\pi) + i\sin(\theta + 2n\pi))$ for any $n \in \mathbb{Z}$.
The Arctangent. Note that $\arctan(x, y)$ is not the same thing as the old arctangent that you learned about in trigonometry, $\arctan\left(\frac{y}{x}\right)$. For example,
$\arctan(1, 1) = \frac{\pi}{4} + 2n\pi \quad \text{and} \quad \arctan(-1, -1) = -\frac{3\pi}{4} + 2n\pi,$
whereas
$\arctan\left(\frac{-1}{-1}\right) = \arctan\left(\frac{1}{1}\right) = \arctan(1).$
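This two-argument arctangent is what most standard libraries call `atan2`; note that the argument order differs. A sketch (not from the text):

```python
import math

# Python's math.atan2 takes (y, x), the reverse of the text's
# arctan(x, y) convention, and returns a single principal value.
print(math.atan2(1, 1))    # pi/4, the principal value of arctan(1, 1)
print(math.atan2(-1, -1))  # -3*pi/4: the quadrant is preserved
print(math.atan(-1 / -1))  # pi/4: the one-argument arctan loses the quadrant
```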
Figure 8.4: Polar Form

Euler's Formula. Euler's formula, $e^{i\theta} = \cos\theta + i\sin\theta$, allows us to write the polar form more compactly. Expressing the polar form in terms of the exponential function of imaginary argument makes arithmetic with complex numbers much more convenient. (See Exercise 8.14 for a proof of Euler's formula.)
$z = r(\cos\theta + i\sin\theta) = r e^{i\theta}$
Arithmetic With Complex Numbers. Note that it is convenient to add complex numbers in Cartesian form.
$(x_1 + iy_1) + (x_2 + iy_2) = (x_1 + x_2) + i(y_1 + y_2)$
However, it is difficult to multiply or divide them in Cartesian form.
$(x_1 + iy_1)(x_2 + iy_2) = (x_1 x_2 - y_1 y_2) + i(x_1 y_2 + x_2 y_1)$
$\frac{x_1 + iy_1}{x_2 + iy_2} = \frac{(x_1 + iy_1)(x_2 - iy_2)}{(x_2 + iy_2)(x_2 - iy_2)} = \frac{x_1 x_2 + y_1 y_2}{x_2^2 + y_2^2} + i\frac{x_2 y_1 - x_1 y_2}{x_2^2 + y_2^2}$
On the other hand, it is difficult to add complex numbers in polar form.
$r_1 e^{i\theta_1} + r_2 e^{i\theta_2} = r_1(\cos\theta_1 + i\sin\theta_1) + r_2(\cos\theta_2 + i\sin\theta_2)$
$= r_1\cos\theta_1 + r_2\cos\theta_2 + i(r_1\sin\theta_1 + r_2\sin\theta_2)$
$= \sqrt{(r_1\cos\theta_1 + r_2\cos\theta_2)^2 + (r_1\sin\theta_1 + r_2\sin\theta_2)^2}\; e^{i\arctan(r_1\cos\theta_1 + r_2\cos\theta_2,\; r_1\sin\theta_1 + r_2\sin\theta_2)}$
$= \sqrt{r_1^2 + r_2^2 + 2 r_1 r_2 \cos(\theta_1 - \theta_2)}\; e^{i\arctan(r_1\cos\theta_1 + r_2\cos\theta_2,\; r_1\sin\theta_1 + r_2\sin\theta_2)}$
However, it is convenient to multiply and divide them in polar form.
$r_1 e^{i\theta_1}\, r_2 e^{i\theta_2} = r_1 r_2\, e^{i(\theta_1 + \theta_2)}$
$\frac{r_1 e^{i\theta_1}}{r_2 e^{i\theta_2}} = \frac{r_1}{r_2}\, e^{i(\theta_1 - \theta_2)}$
Keeping this in mind will make working with complex numbers a shade or two less grungy.
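The multiply-in-polar rule is easy to check with the standard library's `cmath` module, which converts between the two forms (a sketch, not from the text):

```python
import cmath

# Build two numbers in polar form, multiply, and inspect the product's
# modulus and argument: moduli multiply and arguments add.
z1 = cmath.rect(2.0, cmath.pi / 6)   # 2 e^{i pi/6}
z2 = cmath.rect(3.0, cmath.pi / 3)   # 3 e^{i pi/3}

r, theta = cmath.polar(z1 * z2)
print(r)      # approximately 6.0 = 2 * 3
print(theta)  # approximately pi/2 = pi/6 + pi/3
```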
Result 8.3.1 To change between Cartesian and polar form, use the identities
$r e^{i\theta} = r\cos\theta + ir\sin\theta,$
$x + iy = \sqrt{x^2 + y^2}\, e^{i\arctan(x, y)}.$
Cartesian form is convenient for addition. Polar form is convenient for multiplication and division.
Example 8.3.1 The polar form of $5 + 7i$ is
$5 + 7i = \sqrt{74}\, e^{i\arctan(5, 7)}.$
$2 e^{i\pi/6}$ in Cartesian form is
$2 e^{i\pi/6} = 2\cos\left(\frac{\pi}{6}\right) + 2i\sin\left(\frac{\pi}{6}\right) = \sqrt{3} + i.$
Example 8.3.2 We will show that
$\cos^4\theta = \frac{1}{8}\cos 4\theta + \frac{1}{2}\cos 2\theta + \frac{3}{8}.$
Recall that
$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} \quad \text{and} \quad \sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i}.$
$\cos^4\theta = \left( \frac{e^{i\theta} + e^{-i\theta}}{2} \right)^4$
$= \frac{1}{16}\left( e^{i4\theta} + 4 e^{i2\theta} + 6 + 4 e^{-i2\theta} + e^{-i4\theta} \right)$
$= \frac{1}{8}\left( \frac{e^{i4\theta} + e^{-i4\theta}}{2} \right) + \frac{1}{2}\left( \frac{e^{i2\theta} + e^{-i2\theta}}{2} \right) + \frac{3}{8}$
$= \frac{1}{8}\cos 4\theta + \frac{1}{2}\cos 2\theta + \frac{3}{8}$
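A numeric check of this identity (a sketch, not part of the text):

```python
import math

# cos^4(t) should equal cos(4t)/8 + cos(2t)/2 + 3/8 for every t.
for t in (0.1, 0.7, 2.3, -1.9):
    lhs = math.cos(t) ** 4
    rhs = math.cos(4 * t) / 8 + math.cos(2 * t) / 2 + 3 / 8
    assert abs(lhs - rhs) < 1e-12
print("identity holds at the sample angles")
```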
By the definition of exponentiation, we have $e^{in\theta} = \left( e^{i\theta} \right)^n$. We apply Euler's formula to obtain a result which is useful in deriving trigonometric identities.
$\cos(n\theta) + i\sin(n\theta) = (\cos\theta + i\sin\theta)^n$
Result 8.3.2 DeMoivre's Theorem.ᵃ
$\cos(n\theta) + i\sin(n\theta) = (\cos\theta + i\sin\theta)^n$
ᵃ It's amazing what passes for a theorem these days. I would think that this would be a corollary at most.
Example 8.3.3 We will express $\cos 5\theta$ in terms of $\cos\theta$ and $\sin 5\theta$ in terms of $\sin\theta$. We start with DeMoivre's theorem.
$e^{i5\theta} = \left( e^{i\theta} \right)^5$
$\cos 5\theta + i\sin 5\theta = (\cos\theta + i\sin\theta)^5$
$= \binom{5}{0}\cos^5\theta + i\binom{5}{1}\cos^4\theta\sin\theta - \binom{5}{2}\cos^3\theta\sin^2\theta - i\binom{5}{3}\cos^2\theta\sin^3\theta + \binom{5}{4}\cos\theta\sin^4\theta + i\binom{5}{5}\sin^5\theta$
$= \left( \cos^5\theta - 10\cos^3\theta\sin^2\theta + 5\cos\theta\sin^4\theta \right) + i\left( 5\cos^4\theta\sin\theta - 10\cos^2\theta\sin^3\theta + \sin^5\theta \right)$
Equating the real and imaginary parts we obtain
$\cos 5\theta = \cos^5\theta - 10\cos^3\theta\sin^2\theta + 5\cos\theta\sin^4\theta$
$\sin 5\theta = 5\cos^4\theta\sin\theta - 10\cos^2\theta\sin^3\theta + \sin^5\theta$
Now we use the Pythagorean identity, $\cos^2\theta + \sin^2\theta = 1$.
$\cos 5\theta = \cos^5\theta - 10\cos^3\theta(1 - \cos^2\theta) + 5\cos\theta(1 - \cos^2\theta)^2$
$\cos 5\theta = 16\cos^5\theta - 20\cos^3\theta + 5\cos\theta$
$\sin 5\theta = 5(1 - \sin^2\theta)^2\sin\theta - 10(1 - \sin^2\theta)\sin^3\theta + \sin^5\theta$
$\sin 5\theta = 16\sin^5\theta - 20\sin^3\theta + 5\sin\theta$
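The two quintuple-angle formulas can be verified numerically (a sketch, not from the text):

```python
import math

# Check cos(5t) = 16 cos^5 t - 20 cos^3 t + 5 cos t and the analogous
# sine formula at a few sample angles.
for t in (0.3, 1.1, -2.5):
    c, s = math.cos(t), math.sin(t)
    assert abs(math.cos(5 * t) - (16 * c**5 - 20 * c**3 + 5 * c)) < 1e-10
    assert abs(math.sin(5 * t) - (16 * s**5 - 20 * s**3 + 5 * s)) < 1e-10
print("both identities hold at the sample angles")
```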
8.4 Arithmetic and Vectors
Addition. We can represent the complex number $z = x + iy = r e^{i\theta}$ as a vector in Cartesian space with tail at the origin and head at $(x, y)$, or equivalently, the vector of length $r$ and angle $\theta$. With the vector representation, we can add complex numbers by connecting the tail of one vector to the head of the other. The vector $z + \zeta$ is the diagonal of the parallelogram defined by $z$ and $\zeta$. (See Figure 8.5.)
Negation. The negative of $z = x + iy$ is $-z = -x - iy$. In polar form we have $z = r e^{i\theta}$ and $-z = r e^{i(\theta + \pi)}$, (more generally, $-z = r e^{i(\theta + (2n + 1)\pi)}$, $n \in \mathbb{Z}$). In terms of vectors, $-z$ has the same magnitude but opposite direction as $z$. (See Figure 8.5.)
Multiplication. The product of $z = r e^{i\theta}$ and $\zeta = \rho e^{i\phi}$ is $z\zeta = r\rho\, e^{i(\theta + \phi)}$. The length of the vector $z\zeta$ is the product of the lengths of $z$ and $\zeta$. The angle of $z\zeta$ is the sum of the angles of $z$ and $\zeta$. (See Figure 8.5.)
Note that $\arg(z\zeta) = \arg(z) + \arg(\zeta)$. Each of these arguments has an infinite number of values. If we write out the multi-valuedness explicitly, we have
$\{\theta + \phi + 2\pi n : n \in \mathbb{Z}\} = \{\theta + 2\pi n : n \in \mathbb{Z}\} + \{\phi + 2\pi n : n \in \mathbb{Z}\}$
The same is not true of the principal argument. In general, $\operatorname{Arg}(z\zeta) \neq \operatorname{Arg}(z) + \operatorname{Arg}(\zeta)$. Consider the case $z = \zeta = e^{i3\pi/4}$. Then $\operatorname{Arg}(z) = \operatorname{Arg}(\zeta) = 3\pi/4$, however, $\operatorname{Arg}(z\zeta) = -\pi/2$.
Multiplicative Inverse. Assume that $z$ is nonzero. The multiplicative inverse of $z = r e^{i\theta}$ is $\frac{1}{z} = \frac{1}{r} e^{-i\theta}$. The length of $\frac{1}{z}$ is the multiplicative inverse of the length of $z$. The angle of $\frac{1}{z}$ is the negative of the angle of $z$. (See Figure 8.6.)
Division. Assume that $\zeta$ is nonzero. The quotient of $z = r e^{i\theta}$ and $\zeta = \rho e^{i\phi}$ is $\frac{z}{\zeta} = \frac{r}{\rho} e^{i(\theta - \phi)}$. The length of the vector $\frac{z}{\zeta}$ is the quotient of the lengths of $z$ and $\zeta$. The angle of $\frac{z}{\zeta}$ is the difference of the angles of $z$ and $\zeta$. (See Figure 8.6.)
Figure 8.5: Addition, Negation and Multiplication
Complex Conjugate. The complex conjugate of $z = x + iy = r e^{i\theta}$ is $\bar{z} = x - iy = r e^{-i\theta}$. $\bar{z}$ is the mirror image of $z$, reflected across the $x$ axis. In other words, $\bar{z}$ has the same magnitude as $z$ and the angle of $\bar{z}$ is the negative of the angle of $z$. (See Figure 8.6.)
8.5 Integer Exponents
Consider the product $(a + ib)^n$, $n \in \mathbb{Z}$. If we know $\arctan(a, b)$ then it will be most convenient to expand the product working in polar form. If not, we can write $n$ in base 2 to efficiently do the multiplications.
Example 8.5.1 Suppose that we want to write $(\sqrt{3} + i)^{20}$ in Cartesian form.⁴ We can do the multiplication directly. Note that 20 is 10100 in base 2. That is, $20 = 2^4 + 2^2$. We first calculate the powers of the form $(\sqrt{3} + i)^{2^n}$
⁴ No, I have no idea why we would want to do that. Just humor me. If you pretend that you're interested, I'll do the same. Believe me, expressing your real feelings here isn't going to do anyone any good.
Figure 8.6: Multiplicative Inverse, Division and Complex Conjugate
by successive squaring.
$(\sqrt{3} + i)^2 = 2 + i2\sqrt{3}$
$(\sqrt{3} + i)^4 = -8 + i8\sqrt{3}$
$(\sqrt{3} + i)^8 = -128 - i128\sqrt{3}$
$(\sqrt{3} + i)^{16} = -32768 + i32768\sqrt{3}$
Next we multiply $(\sqrt{3} + i)^4$ and $(\sqrt{3} + i)^{16}$ to obtain the answer.
$(\sqrt{3} + i)^{20} = \left( -32768 + i32768\sqrt{3} \right)\left( -8 + i8\sqrt{3} \right) = -524288 - i524288\sqrt{3}$
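Writing the exponent in base 2 and squaring repeatedly is the square-and-multiply algorithm; a generic sketch (not from the text):

```python
import math

def power(z, n):
    # Compute z**n for n >= 0 with O(log n) multiplications: square the
    # base for each binary digit of n, multiplying into the result when
    # the digit is 1.
    result = 1
    while n:
        if n & 1:
            result *= z
        z *= z
        n >>= 1
    return result

w = power(complex(math.sqrt(3), 1.0), 20)
print(w)  # approximately -524288 - 524288*sqrt(3) j
```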
Since we know that $\arctan(\sqrt{3}, 1) = \pi/6$, it is easiest to do this problem by first changing to modulus-argument form.
$\left( \sqrt{3} + i \right)^{20} = \left( \sqrt{\left(\sqrt{3}\right)^2 + 1^2}\; e^{i\arctan(\sqrt{3}, 1)} \right)^{20}$
$= \left( 2 e^{i\pi/6} \right)^{20}$
$= 2^{20} e^{i4\pi/3}$
$= 1048576 \left( -\frac{1}{2} - i\frac{\sqrt{3}}{2} \right)$
$= -524288 - i524288\sqrt{3}$
Example 8.5.2 Consider $(5 + 7i)^{11}$. We will do the exponentiation in polar form and write the result in Cartesian form.
$(5 + 7i)^{11} = \left( \sqrt{74}\, e^{i\arctan(5, 7)} \right)^{11}$
$= 74^5 \sqrt{74} \left( \cos(11\arctan(5, 7)) + i\sin(11\arctan(5, 7)) \right)$
$= 2219006624\sqrt{74}\cos(11\arctan(5, 7)) + i\,2219006624\sqrt{74}\sin(11\arctan(5, 7))$
The result is correct, but not very satisfying. This expression could be simplified. You could evaluate the trigonometric functions with some fairly messy trigonometric identities. This would take much more work than directly multiplying $(5 + 7i)^{11}$.
8.6 Rational Exponents
In this section we consider complex numbers with rational exponents, $z^{p/q}$, where $p/q$ is a rational number. First we consider unity raised to the $1/n$ power. We define $1^{1/n}$ as the set of numbers $z$ such that $z^n = 1$.
$1^{1/n} = \{ z \mid z^n = 1 \}$
We can find these values by writing $z$ in modulus-argument form.
$z^n = 1$
$r^n e^{in\theta} = 1$
$r^n = 1 \qquad n\theta = 0 \mod 2\pi$
$r = 1 \qquad \theta = \frac{2\pi k}{n} \quad \text{for } k \in \mathbb{Z}$
There are only $n$ distinct solutions as a result of the $2\pi$ periodicity of $e^{i\theta}$. Thus
$1^{1/n} = \left\{ e^{i2\pi k/n} \mid k = 0, \ldots, n - 1 \right\}.$
These values are equally spaced points on the unit circle in the complex plane.
Example 8.6.1 $1^{1/6}$ has the 6 values,
$\left\{ e^{i0}, e^{i\pi/3}, e^{i2\pi/3}, e^{i\pi}, e^{i4\pi/3}, e^{i5\pi/3} \right\}.$
In Cartesian form this is
$\left\{ 1, \frac{1 + i\sqrt{3}}{2}, \frac{-1 + i\sqrt{3}}{2}, -1, \frac{-1 - i\sqrt{3}}{2}, \frac{1 - i\sqrt{3}}{2} \right\}.$
The sixth roots of unity are plotted in Figure 8.7.
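The $n^{\text{th}}$ roots of unity are straightforward to generate programmatically (a sketch, not from the text):

```python
import cmath

def roots_of_unity(n):
    # e^{i 2 pi k / n} for k = 0, ..., n - 1.
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

sixth = roots_of_unity(6)
print(all(abs(z**6 - 1) < 1e-9 for z in sixth))  # True: each satisfies z^6 = 1
```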
The $n^{\text{th}}$ roots of the complex number $c = \rho e^{i\phi}$ are the set of numbers $z = r e^{i\theta}$ such that
$z^n = c = \rho e^{i\phi}$
$r^n e^{in\theta} = \rho e^{i\phi}$
$r = \sqrt[n]{\rho} \qquad n\theta = \phi \mod 2\pi$
$r = \sqrt[n]{\rho} \qquad \theta = (\phi + 2\pi k)/n \quad \text{for } k = 0, \ldots, n - 1.$
Thus
$c^{1/n} = \left\{ \sqrt[n]{\rho}\, e^{i(\phi + 2\pi k)/n} \mid k = 0, \ldots, n - 1 \right\} = \left\{ \sqrt[n]{|c|}\, e^{i(\operatorname{Arg}(c) + 2\pi k)/n} \mid k = 0, \ldots, n - 1 \right\}$

Figure 8.7: The Sixth Roots of Unity.
Principal Roots. The principal $n^{\text{th}}$ root is denoted
$\sqrt[n]{z} \equiv \sqrt[n]{|z|}\, e^{i\operatorname{Arg}(z)/n}.$
Thus the principal root has the property
$-\pi/n < \operatorname{Arg}\left( \sqrt[n]{z} \right) \leq \pi/n.$
This is consistent with the notation you learned back in algebra where $\sqrt[n]{x}$ denoted the positive $n^{\text{th}}$ root of a positive real number. We adopt the convention that $z^{1/n}$ denotes the $n^{\text{th}}$ roots of $z$, which is a set of $n$ numbers, and $\sqrt[n]{z}$ is the principal $n^{\text{th}}$ root of $z$, which is a single number. With the principal root we can write,
$z^{1/n} = \left\{ \sqrt[n]{|z|}\, e^{i(\operatorname{Arg}(z) + 2\pi k)/n} \mid k = 0, \ldots, n - 1 \right\}$
$= \left\{ \sqrt[n]{z}\, e^{i2\pi k/n} \mid k = 0, \ldots, n - 1 \right\}$
$z^{1/n} = \sqrt[n]{z}\; 1^{1/n}.$
That is, the $n^{\text{th}}$ roots of $z$ are the principal $n^{\text{th}}$ root of $z$ times the $n^{\text{th}}$ roots of unity.
Rational Exponents. We interpret $z^{p/q}$ to mean $z^{(p/q)}$. That is, we first simplify the exponent, i.e. reduce the fraction, before carrying out the exponentiation. Therefore $z^{2/4} = z^{1/2}$ and $z^{10/5} = z^2$. If $p/q$ is a reduced fraction, ($p$ and $q$ are relatively prime, in other words, they have no common factors), then
$z^{p/q} \equiv \left( z^p \right)^{1/q}.$
Thus $z^{p/q}$ is a set of $q$ values. Note that for an un-reduced fraction $r/s$,
$\left( z^r \right)^{1/s} \neq \left( z^{1/s} \right)^r.$
The former expression is a set of $s$ values while the latter is a set of no more than $s$ values. For instance, $\left( 1^2 \right)^{1/2} = 1^{1/2} = \pm 1$ and $\left( 1^{1/2} \right)^2 = (\pm 1)^2 = 1$.
Example 8.6.2 Consider $2^{1/5}$, $(1 + i)^{1/3}$ and $(2 + i)^{5/6}$.
$2^{1/5} = \sqrt[5]{2}\, e^{i2\pi k/5}, \quad \text{for } k = 0, 1, 2, 3, 4$
$(1 + i)^{1/3} = \left( \sqrt{2}\, e^{i\pi/4} \right)^{1/3} = \sqrt[6]{2}\, e^{i\pi/12}\, e^{i2\pi k/3}, \quad \text{for } k = 0, 1, 2$
$(2 + i)^{5/6} = \left( \sqrt{5}\, e^{i\operatorname{Arctan}(2, 1)} \right)^{5/6} = \left( \sqrt{5^5}\, e^{i5\operatorname{Arctan}(2, 1)} \right)^{1/6} = \sqrt[12]{5^5}\, e^{i\frac{5}{6}\operatorname{Arctan}(2, 1)}\, e^{i\pi k/3}, \quad \text{for } k = 0, 1, 2, 3, 4, 5$
Example 8.6.3 The roots of the polynomial $z^5 + 4$ are
$(-4)^{1/5} = \left( 4 e^{i\pi} \right)^{1/5} = \sqrt[5]{4}\, e^{i\pi(1 + 2k)/5}, \quad \text{for } k = 0, 1, 2, 3, 4.$
8.7 Exercises
Complex Numbers
Exercise 8.1
Verify that:
1. $\dfrac{1 + 2i}{3 - 4i} + \dfrac{2 - i}{5i} = -\dfrac{2}{5}$
2. $(1 - i)^4 = -4$
Exercise 8.2
Write the following complex numbers in the form $a + ib$.
1. $\left( 1 + i\sqrt{3} \right)^{-10}$
2. $(11 + 4i)^2$
Exercise 8.3
Write the following complex numbers in the form $a + ib$.
1. $\left( \dfrac{2 + i}{i6 - (1 - i2)} \right)^2$
2. $(1 - i)^7$
Exercise 8.4
If z = x +iy, write the following in the form u(x, y) +iv(x, y).
170
1.
_
z
z
_
2.
z + 2i
2 iz
Exercise 8.5
Quaternions are sometimes used as a generalization of complex numbers. A quaternion $u$ may be defined as
$u = u_0 + iu_1 + ju_2 + ku_3$
where $u_0$, $u_1$, $u_2$ and $u_3$ are real numbers and $i$, $j$ and $k$ are objects which satisfy
$i^2 = j^2 = k^2 = -1, \quad ij = k, \quad ji = -k$
and the usual associative and distributive laws. Show that for any quaternions $u$, $w$ there exists a quaternion $v$ such that
$uv = w$
except for the case $u_0 = u_1 = u_2 = u_3 = 0$.
Exercise 8.6
Let $\alpha \neq 0$, $\beta \neq 0$ be two complex numbers. Show that $\alpha = t\beta$ for some real number $t$ (i.e. the vectors defined by $\alpha$ and $\beta$ are parallel) if and only if $\Im(\alpha\bar{\beta}) = 0$.
The Complex Plane
Exercise 8.7
Prove the following identities.
1. $\arg(z\zeta) = \arg(z) + \arg(\zeta)$
2. $\operatorname{Arg}(z\zeta) \neq \operatorname{Arg}(z) + \operatorname{Arg}(\zeta)$
3. $\arg(z^2) = \arg(z) + \arg(z) \neq 2\arg(z)$
Exercise 8.8
Show, both by geometric and algebraic arguments, that for complex numbers $z_1$ and $z_2$ the inequalities
$\big|\,|z_1| - |z_2|\,\big| \leq |z_1 + z_2| \leq |z_1| + |z_2|$
hold.
Exercise 8.9
Find all the values of
1. $(-1)^{3/4}$
2. $8^{1/6}$
and show them graphically.
Exercise 8.10
Find all values of
1. $(-1)^{-1/4}$
2. $16^{1/8}$
and show them graphically.
Exercise 8.11
Sketch the regions or curves described by
1. $1 < |z - 2i| < 2$
2. $|\Re(z)| + 5|\Im(z)| = 1$
Exercise 8.12
Sketch the regions or curves described by
1. $|z - 1 + i| \leq 1$
2. $|z - i| = |z + i|$
3. $\Re(z) - \Im(z) = 5$
4. $|z - i| + |z + i| = 1$
Exercise 8.13
Solve the equation
$| e^{i\theta} - 1 | = 2$
for $\theta$ $(0 \leq \theta \leq \pi)$ and verify the solution geometrically.
Polar Form
Exercise 8.14
Prove Euler's formula, $e^{i\theta} = \cos\theta + i\sin\theta$. Consider the Taylor series of $e^z$ to be the definition of the exponential function.
Exercise 8.15
Use de Moivre's formula to derive the trigonometric identity
$\cos(3\theta) = \cos^3\theta - 3\cos\theta\sin^2\theta.$
Exercise 8.16
Establish the formula
$1 + z + z^2 + \cdots + z^n = \frac{1 - z^{n+1}}{1 - z}, \quad (z \neq 1),$
for the sum of a finite geometric series; then derive the formulas
1. $1 + \cos\theta + \cos 2\theta + \cdots + \cos n\theta = \dfrac{1}{2} + \dfrac{\sin((n + 1/2)\theta)}{2\sin(\theta/2)}$
2. $\sin\theta + \sin 2\theta + \cdots + \sin n\theta = \dfrac{1}{2}\cot\dfrac{\theta}{2} - \dfrac{\cos((n + 1/2)\theta)}{2\sin(\theta/2)}$
where $0 < \theta < 2\pi$.
Arithmetic and Vectors
Exercise 8.17
Prove $|z_1 z_2| = |z_1|\,|z_2|$ and $\left| \dfrac{z_1}{z_2} \right| = \dfrac{|z_1|}{|z_2|}$ using polar form.
Exercise 8.18
Prove that
$|z + \zeta|^2 + |z - \zeta|^2 = 2\left( |z|^2 + |\zeta|^2 \right).$
Interpret this geometrically.
Integer Exponents
Exercise 8.19
Write $(1 + i)^{10}$ in Cartesian form with the following two methods:
1. Just do the multiplication. If it takes you more than four multiplications, you suck.
2. Do the multiplication in polar form.
Rational Exponents
Exercise 8.20
Show that each of the numbers $z = -a + \left( a^2 - b \right)^{1/2}$ satisfies the equation $z^2 + 2az + b = 0$.
8.8 Hints
Complex Numbers
Hint 8.1
Hint 8.2
Hint 8.3
Hint 8.4
Hint 8.5
Hint 8.6
The Complex Plane
Hint 8.7
Write the multivaluedness explicitly.
Hint 8.8
Consider a triangle with vertices at 0, $z_1$ and $z_1 + z_2$.
Hint 8.9
Hint 8.10
Hint 8.11
Hint 8.12
Hint 8.13
Polar Form
Hint 8.14
Find the Taylor series of $e^{i\theta}$, $\cos\theta$ and $\sin\theta$. Note that $i^{2n} = (-1)^n$.
Hint 8.16
Arithmetic and Vectors
Hint 8.17
$| e^{i\theta} | = 1$.
Hint 8.18
Consider the parallelogram defined by $z$ and $\zeta$.
Integer Exponents
Hint 8.19
For the rst part,
(1 +i)
10
=
_
_
(1 +i)
2
_
2
_
2
(1 +i)
2
.
Rational Exponents
Hint 8.20
Substitite the numbers into the equation.
178
8.9 Solutions
Complex Numbers
Solution 8.1
1.
$\frac{1 + 2i}{3 - 4i} + \frac{2 - i}{5i} = \frac{1 + 2i}{3 - 4i} \cdot \frac{3 + 4i}{3 + 4i} + \frac{2 - i}{5i} \cdot \frac{-i}{-i} = \frac{-5 + 10i}{25} + \frac{-1 - 2i}{5} = -\frac{2}{5}$
2.
$(1 - i)^4 = (-2i)^2 = -4$
Solution 8.2
1. First we do the multiplication in Cartesian form.
$\left( 1 + i\sqrt{3} \right)^{-10} = \left( \left( 1 + i\sqrt{3} \right)^2 \left( 1 + i\sqrt{3} \right)^8 \right)^{-1}$
$= \left( \left( -2 + i2\sqrt{3} \right) \left( -2 + i2\sqrt{3} \right)^4 \right)^{-1}$
$= \left( \left( -2 + i2\sqrt{3} \right) \left( -8 - i8\sqrt{3} \right)^2 \right)^{-1}$
$= \left( \left( -2 + i2\sqrt{3} \right) \left( -128 + i128\sqrt{3} \right) \right)^{-1}$
$= \left( -512 - i512\sqrt{3} \right)^{-1}$
$= -\frac{1}{512} \cdot \frac{1}{1 + i\sqrt{3}}$
$= -\frac{1}{512} \cdot \frac{1}{1 + i\sqrt{3}} \cdot \frac{1 - i\sqrt{3}}{1 - i\sqrt{3}}$
$= -\frac{1}{2048} + i\frac{\sqrt{3}}{2048}$
Now we do the multiplication in modulus-argument, (polar), form.
$\left( 1 + i\sqrt{3} \right)^{-10} = \left( 2 e^{i\pi/3} \right)^{-10}$
$= 2^{-10} e^{-i10\pi/3}$
$= \frac{1}{1024} \left( \cos\left( -\frac{10\pi}{3} \right) + i\sin\left( -\frac{10\pi}{3} \right) \right)$
$= \frac{1}{1024} \left( \cos\left( \frac{4\pi}{3} \right) - i\sin\left( \frac{4\pi}{3} \right) \right)$
$= \frac{1}{1024} \left( -\frac{1}{2} + i\frac{\sqrt{3}}{2} \right)$
$= -\frac{1}{2048} + i\frac{\sqrt{3}}{2048}$
2.
$(11 + 4i)^2 = 105 + i88$
Solution 8.3
1.
$\left( \frac{2 + i}{i6 - (1 - i2)} \right)^2 = \left( \frac{2 + i}{-1 + i8} \right)^2$
$= \frac{3 + i4}{-63 - i16}$
$= \frac{3 + i4}{-63 - i16} \cdot \frac{-63 + i16}{-63 + i16}$
$= -\frac{253}{4225} - i\frac{204}{4225}$
2.
$(1 - i)^7 = \left( (1 - i)^2 \right)^2 (1 - i)^2 (1 - i)$
$= (-2i)^2 (-2i)(1 - i)$
$= (-4)(-2 - i2)$
$= 8 + i8$
Solution 8.4
1.
$\overline{\left( \frac{\bar{z}}{z} \right)} = \overline{\left( \frac{\overline{x + iy}}{x + iy} \right)} = \overline{\left( \frac{x - iy}{x + iy} \right)} = \frac{x + iy}{x - iy} = \frac{x + iy}{x - iy} \cdot \frac{x + iy}{x + iy} = \frac{x^2 - y^2}{x^2 + y^2} + i\frac{2xy}{x^2 + y^2}$
2.
$\frac{z + i2}{2 - i\bar{z}} = \frac{x + iy + i2}{2 - i(x - iy)} = \frac{x + i(y + 2)}{2 - y - ix} = \frac{x + i(y + 2)}{2 - y - ix} \cdot \frac{2 - y + ix}{2 - y + ix} = \frac{x(2 - y) - (y + 2)x}{(2 - y)^2 + x^2} + i\frac{x^2 + (y + 2)(2 - y)}{(2 - y)^2 + x^2} = \frac{-2xy}{(2 - y)^2 + x^2} + i\frac{4 + x^2 - y^2}{(2 - y)^2 + x^2}$
Solution 8.5
Method 1. We expand the equation $uv = w$ in its components.
$uv = w$
$(u_0 + iu_1 + ju_2 + ku_3)(v_0 + iv_1 + jv_2 + kv_3) = w_0 + iw_1 + jw_2 + kw_3$
$(u_0 v_0 - u_1 v_1 - u_2 v_2 - u_3 v_3) + i(u_1 v_0 + u_0 v_1 - u_3 v_2 + u_2 v_3) + j(u_2 v_0 + u_3 v_1 + u_0 v_2 - u_1 v_3) + k(u_3 v_0 - u_2 v_1 + u_1 v_2 + u_0 v_3) = w_0 + iw_1 + jw_2 + kw_3$
We can write this as a matrix equation.
$\begin{pmatrix} u_0 & -u_1 & -u_2 & -u_3 \\ u_1 & u_0 & -u_3 & u_2 \\ u_2 & u_3 & u_0 & -u_1 \\ u_3 & -u_2 & u_1 & u_0 \end{pmatrix} \begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} w_0 \\ w_1 \\ w_2 \\ w_3 \end{pmatrix}$
This linear system of equations has a unique solution for $v$ if and only if the determinant of the matrix is nonzero. The determinant of the matrix is $\left( u_0^2 + u_1^2 + u_2^2 + u_3^2 \right)^2$. This is zero if and only if $u_0 = u_1 = u_2 = u_3 = 0$. Thus there exists a unique $v$ such that $uv = w$ if $u$ is nonzero. This $v$ is
$v = \big( (u_0 w_0 + u_1 w_1 + u_2 w_2 + u_3 w_3) + i(-u_1 w_0 + u_0 w_1 + u_3 w_2 - u_2 w_3) + j(-u_2 w_0 - u_3 w_1 + u_0 w_2 + u_1 w_3) + k(-u_3 w_0 + u_2 w_1 - u_1 w_2 + u_0 w_3) \big) \big/ \left( u_0^2 + u_1^2 + u_2^2 + u_3^2 \right)$
Method 2. Note that $\bar{u}u$ is a real number.
$\bar{u}u = (u_0 - iu_1 - ju_2 - ku_3)(u_0 + iu_1 + ju_2 + ku_3)$
$= \left( u_0^2 + u_1^2 + u_2^2 + u_3^2 \right) + i(u_0 u_1 - u_1 u_0 - u_2 u_3 + u_3 u_2) + j(u_0 u_2 + u_1 u_3 - u_2 u_0 - u_3 u_1) + k(u_0 u_3 - u_1 u_2 + u_2 u_1 - u_3 u_0)$
$= u_0^2 + u_1^2 + u_2^2 + u_3^2$
$\bar{u}u = 0$ only if $u = 0$. We solve for $v$ by multiplying by the conjugate of $u$ and dividing by $\bar{u}u$.
$uv = w$
$\bar{u}uv = \bar{u}w$
$v = \frac{\bar{u}w}{\bar{u}u}$
$v = \frac{(u_0 - iu_1 - ju_2 - ku_3)(w_0 + iw_1 + jw_2 + kw_3)}{u_0^2 + u_1^2 + u_2^2 + u_3^2}$
$v = \big( (u_0 w_0 + u_1 w_1 + u_2 w_2 + u_3 w_3) + i(-u_1 w_0 + u_0 w_1 + u_3 w_2 - u_2 w_3) + j(-u_2 w_0 - u_3 w_1 + u_0 w_2 + u_1 w_3) + k(-u_3 w_0 + u_2 w_1 - u_1 w_2 + u_0 w_3) \big) \big/ \left( u_0^2 + u_1^2 + u_2^2 + u_3^2 \right)$
Solution 8.6
If $\alpha = t\beta$, then $\alpha\bar{\beta} = t|\beta|^2$, which is a real number. Hence $\Im(\alpha\bar{\beta}) = 0$.
Now assume that $\Im(\alpha\bar{\beta}) = 0$. This implies that $\alpha\bar{\beta} = r$ for some $r \in \mathbb{R}$. We multiply by $\beta$ and simplify.
$\alpha|\beta|^2 = r\beta$
$\alpha = \frac{r}{|\beta|^2}\beta$
By taking $t = \frac{r}{|\beta|^2}$ we see that $\alpha = t\beta$ for some real number $t$.
The Complex Plane
Solution 8.7
Let $z = r e^{i\theta}$ and $\zeta = \rho e^{i\phi}$.
1.
$\arg(z\zeta) = \arg(z) + \arg(\zeta)$
$\arg\left( r\rho\, e^{i(\theta + \phi)} \right) = \{\theta + 2\pi m\} + \{\phi + 2\pi n\}$
$\{\theta + \phi + 2\pi k\} = \{\theta + \phi + 2\pi m\}$
2.
$\operatorname{Arg}(z\zeta) \neq \operatorname{Arg}(z) + \operatorname{Arg}(\zeta)$
Consider $z = \zeta = -1$. $\operatorname{Arg}(z) = \operatorname{Arg}(\zeta) = \pi$, however $\operatorname{Arg}(z\zeta) = \operatorname{Arg}(1) = 0$. The identity becomes $0 \neq 2\pi$.
3.
$\arg(z^2) = \arg(z) + \arg(z) \neq 2\arg(z)$
$\arg\left( r^2 e^{i2\theta} \right) = \{\theta + 2\pi k\} + \{\theta + 2\pi m\} \neq \{2\theta + 4\pi n\}$
$\{2\theta + 2\pi k\} = \{2\theta + 2\pi m\} \neq \{2\theta + 4\pi n\}$
Figure 8.8: Triangle Inequality
Solution 8.8
Consider a triangle in the complex plane with vertices at 0, z_1 and z_1 + z_2. (See Figure 8.8.)
The lengths of the sides of the triangle are |z_1|, |z_2| and |z_1 + z_2|. The second inequality shows that one side of the triangle must be less than or equal to the sum of the other two sides.

|z_1 + z_2| ≤ |z_1| + |z_2|

The first inequality shows that the length of one side of the triangle must be greater than or equal to the difference in the length of the other two sides.

|z_1 + z_2| ≥ ||z_1| - |z_2||
Now we prove the inequalities algebraically. We will reduce the inequality to an identity. Let z_1 = r_1 e^{iθ_1}, z_2 = r_2 e^{iθ_2}.

||z_1| - |z_2|| ≤ |z_1 + z_2| ≤ |z_1| + |z_2|
|r_1 - r_2| ≤ |r_1 e^{iθ_1} + r_2 e^{iθ_2}| ≤ r_1 + r_2
(r_1 - r_2)^2 ≤ (r_1 e^{iθ_1} + r_2 e^{iθ_2})(r_1 e^{-iθ_1} + r_2 e^{-iθ_2}) ≤ (r_1 + r_2)^2
r_1^2 + r_2^2 - 2 r_1 r_2 ≤ r_1^2 + r_2^2 + r_1 r_2 e^{i(θ_1 - θ_2)} + r_1 r_2 e^{-i(θ_1 - θ_2)} ≤ r_1^2 + r_2^2 + 2 r_1 r_2
-2 r_1 r_2 ≤ 2 r_1 r_2 cos(θ_1 - θ_2) ≤ 2 r_1 r_2
-1 ≤ cos(θ_1 - θ_2) ≤ 1
Solution 8.9
1.

(-1)^{3/4} = ((-1)^3)^{1/4}
           = (-1)^{1/4}
           = (e^{iπ})^{1/4}
           = e^{iπ/4} 1^{1/4}
           = e^{iπ/4} e^{ikπ/2},   k = 0, 1, 2, 3
           = { e^{iπ/4}, e^{i3π/4}, e^{i5π/4}, e^{i7π/4} }
           = { (1+i)/√2, (-1+i)/√2, (-1-i)/√2, (1-i)/√2 }

See Figure 8.9.
Figure 8.9: (-1)^{3/4}
2.

8^{1/6} = ⁶√8 · 1^{1/6}
        = √2 e^{ikπ/3},   k = 0, 1, 2, 3, 4, 5
        = { √2, √2 e^{iπ/3}, √2 e^{i2π/3}, √2 e^{iπ}, √2 e^{i4π/3}, √2 e^{i5π/3} }
        = { √2, (1+i√3)/√2, (-1+i√3)/√2, -√2, (-1-i√3)/√2, (1-i√3)/√2 }

See Figure 8.10.
Figure 8.10: 8^{1/6}
Solution 8.10
1.

(-1)^{-1/4} = ((-1)^{-1})^{1/4}
            = (-1)^{1/4}
            = (e^{iπ})^{1/4}
            = e^{iπ/4} 1^{1/4}
            = e^{iπ/4} e^{ikπ/2},   k = 0, 1, 2, 3
            = { e^{iπ/4}, e^{i3π/4}, e^{i5π/4}, e^{i7π/4} }
            = { (1+i)/√2, (-1+i)/√2, (-1-i)/√2, (1-i)/√2 }

See Figure 8.11.
Figure 8.11: (-1)^{-1/4}
2.

16^{1/8} = ⁸√16 · 1^{1/8}
         = √2 e^{ikπ/4},   k = 0, 1, 2, 3, 4, 5, 6, 7
         = { √2, √2 e^{iπ/4}, √2 e^{iπ/2}, √2 e^{i3π/4}, √2 e^{iπ}, √2 e^{i5π/4}, √2 e^{i3π/2}, √2 e^{i7π/4} }
         = { √2, 1+i, i√2, -1+i, -√2, -1-i, -i√2, 1-i }

See Figure 8.12.
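These root computations are easy to check numerically. A minimal sketch (the helper `nth_roots` is our own naming, not from the text) generates all n values of a^{1/n} from the modulus-argument form and compares them with the eight values of 16^{1/8} found above:

```python
import cmath
import math

def nth_roots(a, n):
    """All n values of a**(1/n): |a|^(1/n) e^{i(Arg a + 2 pi k)/n}, k = 0..n-1."""
    r, theta = abs(a), cmath.phase(a)
    return [r**(1.0/n) * cmath.exp(1j*(theta + 2*math.pi*k)/n) for k in range(n)]

roots = nth_roots(16, 8)
expected = [math.sqrt(2), 1+1j, math.sqrt(2)*1j, -1+1j,
            -math.sqrt(2), -1-1j, -math.sqrt(2)*1j, 1-1j]
for z in roots:
    # each computed root matches one of the listed values ...
    assert min(abs(z - w) for w in expected) < 1e-12
    # ... and raising it to the 8th power recovers 16
    assert abs(z**8 - 16) < 1e-9
```

The same helper reproduces the four values of (-1)^{3/4} in Solution 8.9 with `nth_roots(-1, 4)`.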
Solution 8.11
1. |z - 2i| is the distance from the point 2i in the complex plane. Thus 1 < |z - 2i| < 2 is an annulus. See Figure 8.13.
Figure 8.12: 16^{1/8}
Figure 8.13: 1 < |z - 2i| < 2
2.

|Re(z)| + 5|Im(z)| = 1
|x| + 5|y| = 1

In the first quadrant this is the line y = (1 - x)/5. We reflect this line segment across the coordinate axes to obtain line segments in the other quadrants. Explicitly, we have the set of points: { z = x + iy : -1 ≤ x ≤ 1, y = ±(1 - |x|)/5 }. See Figure 8.14.

Figure 8.14: |Re(z)| + 5|Im(z)| = 1
Solution 8.12
1. |z - 1 + i| is the distance from the point (1 - i). Thus |z - 1 + i| ≤ 1 is the disk of unit radius centered at (1 - i). See Figure 8.15.
2. The set of points equidistant from i and -i is the real axis. See Figure 8.16.

Figure 8.15: |z - 1 + i| < 1

Figure 8.16: |z - i| = |z + i|
3.

Re(z) - Im(z) = 5
x - y = 5
y = x - 5

See Figure 8.17.

Figure 8.17: Re(z) - Im(z) = 5

4. Since |z - i| + |z + i| ≥ 2, there are no solutions of |z - i| + |z + i| = 1.
Solution 8.13

|e^{iθ} - 1| = 2
(e^{iθ} - 1)(e^{-iθ} - 1) = 4
1 - e^{iθ} - e^{-iθ} + 1 = 4
-2 cos(θ) = 2
θ = π

{ e^{iθ} | 0 ≤ θ ≤ π } is a unit semi-circle in the upper half of the complex plane from 1 to -1. The only point on this semi-circle that is a distance 2 from the point 1 is the point -1, which corresponds to θ = π.
Polar Form
Solution 8.14
The Taylor series expansion of e^x is

e^x = Σ_{n=0}^∞ x^n / n!.

Taking this as the definition of the exponential function for complex argument,

e^{iθ} = Σ_{n=0}^∞ (iθ)^n / n!
       = Σ_{n=0}^∞ i^n θ^n / n!
       = Σ_{n=0}^∞ (-1)^n θ^{2n} / (2n)! + i Σ_{n=0}^∞ (-1)^n θ^{2n+1} / (2n+1)!.
The sine and cosine have the Taylor series

cos θ = Σ_{n=0}^∞ (-1)^n θ^{2n} / (2n)!,        sin θ = Σ_{n=0}^∞ (-1)^n θ^{2n+1} / (2n+1)!.

Thus e^{iθ} and cos θ + i sin θ have the same Taylor series expansions about θ = 0. Since the radius of convergence of the series is infinite, we conclude that

e^{iθ} = cos θ + i sin θ.
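The series argument can be spot-checked numerically: a partial sum of the exponential series evaluated at iθ should agree with cos θ + i sin θ. A minimal sketch (`exp_series` is our own helper name):

```python
import math

def exp_series(z, terms=30):
    """Partial sum of the Taylor series sum_{n} z^n / n!."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # z^{n+1}/(n+1)! from z^n/n!
    return total

theta = 1.234
lhs = exp_series(1j * theta)
rhs = math.cos(theta) + 1j * math.sin(theta)
assert abs(lhs - rhs) < 1e-12
```

With 30 terms the truncation error at |z| ≈ 1 is far below the tolerance, so the agreement is essentially exact.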
Solution 8.15

cos(3θ) + i sin(3θ) = (cos(θ) + i sin(θ))^3
cos(3θ) + i sin(3θ) = cos^3(θ) + i3 cos^2(θ) sin(θ) - 3 cos(θ) sin^2(θ) - i sin^3(θ)

We equate the real parts of the equation.

cos(3θ) = cos^3(θ) - 3 cos(θ) sin^2(θ)
Solution 8.16
Define the partial sum,

S_n(z) = Σ_{k=0}^n z^k.

Now consider (1 - z) S_n(z).

(1 - z) S_n(z) = (1 - z) Σ_{k=0}^n z^k
(1 - z) S_n(z) = Σ_{k=0}^n z^k - Σ_{k=1}^{n+1} z^k
(1 - z) S_n(z) = 1 - z^{n+1}

We divide by 1 - z. Note that 1 - z is nonzero.

S_n(z) = (1 - z^{n+1}) / (1 - z)
1 + z + z^2 + ⋯ + z^n = (1 - z^{n+1}) / (1 - z),   (z ≠ 1)
Now consider z = e^{iθ} where 0 < θ < 2π so that z is not unity.

Σ_{k=0}^n (e^{iθ})^k = (1 - (e^{iθ})^{n+1}) / (1 - e^{iθ})
Σ_{k=0}^n e^{ikθ} = (1 - e^{i(n+1)θ}) / (1 - e^{iθ})

In order to get sin(θ/2) in the denominator, we multiply top and bottom by e^{-iθ/2}.

Σ_{k=0}^n (cos kθ + i sin kθ) = (e^{-iθ/2} - e^{i(n+1/2)θ}) / (e^{-iθ/2} - e^{iθ/2})
Σ_{k=0}^n cos kθ + i Σ_{k=0}^n sin kθ = (cos(θ/2) - i sin(θ/2) - cos((n + 1/2)θ) - i sin((n + 1/2)θ)) / (-2i sin(θ/2))
Σ_{k=0}^n cos kθ + i Σ_{k=1}^n sin kθ = 1/2 + sin((n + 1/2)θ)/(2 sin(θ/2)) + i [ (1/2) cot(θ/2) - cos((n + 1/2)θ)/(2 sin(θ/2)) ]

1. We take the real and imaginary part of this to obtain the identities.

Σ_{k=0}^n cos kθ = 1/2 + sin((n + 1/2)θ) / (2 sin(θ/2))

2.

Σ_{k=1}^n sin kθ = (1/2) cot(θ/2) - cos((n + 1/2)θ) / (2 sin(θ/2))
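Both closed forms are easy to spot-check numerically against the raw sums. A minimal sketch, with our own helper names:

```python
import math

def cos_sum(n, theta):
    """Left side of identity 1: sum_{k=0}^{n} cos(k theta)."""
    return sum(math.cos(k * theta) for k in range(n + 1))

def sin_sum(n, theta):
    """Left side of identity 2: sum_{k=1}^{n} sin(k theta)."""
    return sum(math.sin(k * theta) for k in range(1, n + 1))

n, theta = 7, 0.9
# identity 1: 1/2 + sin((n+1/2) theta) / (2 sin(theta/2))
assert abs(cos_sum(n, theta)
           - (0.5 + math.sin((n + 0.5)*theta) / (2*math.sin(theta/2)))) < 1e-12
# identity 2: (1/2) cot(theta/2) - cos((n+1/2) theta) / (2 sin(theta/2))
assert abs(sin_sum(n, theta)
           - (0.5/math.tan(theta/2)
              - math.cos((n + 0.5)*theta) / (2*math.sin(theta/2)))) < 1e-12
```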
Arithmetic and Vectors
Solution 8.17

|z_1 z_2| = |r_1 e^{iθ_1} r_2 e^{iθ_2}|
          = |r_1 r_2 e^{i(θ_1 + θ_2)}|
          = |r_1 r_2|
          = |r_1| |r_2|
          = |z_1| |z_2|
|z_1 / z_2| = |r_1 e^{iθ_1} / (r_2 e^{iθ_2})|
            = |(r_1 / r_2) e^{i(θ_1 - θ_2)}|
            = |r_1 / r_2|
            = |r_1| / |r_2|
            = |z_1| / |z_2|
Solution 8.18

|z + ζ|^2 + |z - ζ|^2 = (z + ζ)(z̄ + ζ̄) + (z - ζ)(z̄ - ζ̄)
                      = z z̄ + z ζ̄ + ζ z̄ + ζ ζ̄ + z z̄ - z ζ̄ - ζ z̄ + ζ ζ̄
                      = 2 (|z|^2 + |ζ|^2)

Consider the parallelogram defined by the vectors z and ζ. The lengths of the sides are |z| and |ζ| and the lengths of the diagonals are |z + ζ| and |z - ζ|. We know from geometry that the sum of the squared lengths of the diagonals of a parallelogram is equal to the sum of the squared lengths of the four sides. (See Figure 8.18.)
Figure 8.18: The parallelogram defined by z and ζ.

Integer Exponents
Solution 8.19
1.

(1 + i)^{10} = (((1 + i)^2)^2)^2 (1 + i)^2
             = ((i2)^2)^2 (i2)
             = (-4)^2 (i2)
             = 16 (i2)
             = i32

2.

(1 + i)^{10} = (√2 e^{iπ/4})^{10}
             = (√2)^{10} e^{i10π/4}
             = 32 e^{iπ/2}
             = i32
Rational Exponents
Solution 8.20
We substitute the numbers into the equation to obtain an identity.

z^2 + 2az + b = 0
(-a + (a^2 - b)^{1/2})^2 + 2a(-a + (a^2 - b)^{1/2}) + b = 0
a^2 - 2a(a^2 - b)^{1/2} + a^2 - b - 2a^2 + 2a(a^2 - b)^{1/2} + b = 0
0 = 0
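A quick numeric check of Solution 8.20, taking `cmath.sqrt` as one value of (a^2 - b)^{1/2} (the helper `quad_root` is our own naming):

```python
import cmath

def quad_root(a, b):
    """One root of z^2 + 2 a z + b = 0: z = -a + (a^2 - b)^{1/2}."""
    return -a + cmath.sqrt(a*a - b)

# a and b may be any complex numbers
for a, b in ((1 + 2j, -3j), (0.5, 4.0), (-2j, 1 + 1j)):
    z = quad_root(a, b)
    # substituting back should give (numerically) zero
    assert abs(z*z + 2*a*z + b) < 1e-12
```

Taking the other value of the square root, -a - (a^2 - b)^{1/2}, gives the second root in the same way.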
Chapter 9
Functions of a Complex Variable
If brute force isn't working, you're not using enough of it.
-Tim Mauch
In this chapter we introduce the algebra of functions of a complex variable. We will cover the trigonometric and
inverse trigonometric functions. The properties of trigonometric functions carry over directly from real-variable
theory. However, because of multi-valuedness, the inverse trigonometric functions are significantly trickier than
their real-variable counterparts.
9.1 Curves and Regions
In this section we introduce curves and regions in the complex plane. This material is necessary for the study
of branch points in this chapter and later for contour integration.
Curves. Consider two continuous functions, x(t) and y(t), defined on the interval t ∈ [t_0 … t_1]. The set of points in the complex plane

{ z(t) = x(t) + iy(t) | t ∈ [t_0 … t_1] }

defines a continuous curve or simply a curve. If the endpoints coincide, z(t_0) = z(t_1), it is a closed curve. (We assume that t_0 ≠ t_1.) If the curve does not intersect itself, then it is said to be a simple curve.
If x(t) and y(t) have continuous derivatives and the derivatives do not both vanish at any point^1, then it is a smooth curve. This essentially means that the curve does not have any corners or other nastiness.
A continuous curve which is composed of a finite number of smooth curves is called a piecewise smooth curve. We will use the word contour as a synonym for a piecewise smooth curve.
See Figure 9.1 for a smooth curve, a piecewise smooth curve, a simple closed curve and a non-simple closed curve.

(a) (b) (c) (d)
Figure 9.1: (a) Smooth Curve, (b) Piecewise Smooth Curve, (c) Simple Closed Curve, (d) Non-Simple Closed Curve

Regions. A region R is connected if any two points in R can be connected by a curve which lies entirely in R. A region is simply-connected if every closed curve in R can be continuously shrunk to a point without leaving R. A region which is not simply-connected is said to be a multiply-connected region. Another way of defining

^1 Why is it necessary that the derivatives do not both vanish?
simply-connected is that a path connecting two points in R can be continuously deformed into any other path
that connects those points. Figure 9.2 shows a simply-connected region with two paths which can be continuously
deformed into one another and a multiply-connected region with paths which cannot be deformed into one another.
Figure 9.2: Simply-connected and multiply-connected regions.
Jordan Curve Theorem. A continuous, simple, closed curve is known as a Jordan curve. The Jordan Curve
Theorem, which seems intuitively obvious but is difficult to prove, states that a Jordan curve divides the plane
into a simply-connected, bounded region and an unbounded region. These two regions are called the interior and
exterior regions, respectively. The two regions share the curve as a boundary. Points in the interior are said to
be inside the curve; points in the exterior are said to be outside the curve.
Traversal of a Contour. Consider a Jordan curve. If you traverse the curve in the positive direction, then the
inside is to your left. If you traverse the curve in the opposite direction, then the outside will be to your left and
you will go around the curve in the negative direction. For circles, the positive direction is the counter-clockwise
direction. The positive direction is consistent with the way angles are measured in a right-handed coordinate
system, i.e. for a circle centered on the origin, the positive direction is the direction of increasing angle. For an
oriented contour C, we denote the contour with opposite orientation as -C.
Boundary of a Region. Consider a simply-connected region. The boundary of the region is traversed in the
positive direction if the region is to the left as you walk along the contour. For multiply-connected regions, the
boundary may be a set of contours. In this case the boundary is traversed in the positive direction if each of the
contours is traversed in the positive direction. When we refer to the boundary of a region we will assume it is
given the positive orientation. In Figure 9.3 the boundaries of three regions are traversed in the positive direction.
Figure 9.3: Traversing the boundary in the positive direction.
Two Interpretations of a Curve. Consider a simple closed curve as depicted in Figure 9.4a. By giving it
an orientation, we can make a contour that either encloses the bounded domain Figure 9.4b or the unbounded
domain Figure 9.4c. Thus a curve has two interpretations. It can be thought of as enclosing either the points
which are inside or the points which are outside.^2

^2 A farmer wanted to know the most efficient way to build a pen to enclose his sheep, so he consulted an engineer, a physicist and a mathematician. The engineer suggested that he build a circular pen to get the maximum area for any given perimeter. The physicist suggested that he build a fence at infinity and then shrink it to fit the sheep. The mathematician constructed a little fence around himself and then defined himself to be outside.
(a) (b) (c)
Figure 9.4: Two interpretations of a curve.
9.2 Cartesian and Modulus-Argument Form
We can write a function of a complex variable z as a function of x and y or as a function of r and θ with the substitutions z = x + iy and z = r e^{iθ}, respectively. Then we can separate the real and imaginary components or write the function in modulus-argument form,

f(z) = u(x, y) + iv(x, y),   or   f(z) = u(r, θ) + iv(r, θ),
f(z) = ρ(x, y) e^{iφ(x,y)},   or   f(z) = ρ(r, θ) e^{iφ(r,θ)}.
Example 9.2.1 Consider the functions f(z) = z, f(z) = z^3 and f(z) = 1/(1 - z). We write the functions in terms of x and y and separate them into their real and imaginary components.

f(z) = z = x + iy

f(z) = z^3
     = (x + iy)^3
     = x^3 + i3x^2 y - 3xy^2 - iy^3
     = (x^3 - 3xy^2) + i(3x^2 y - y^3)

f(z) = 1/(1 - z)
     = 1/(1 - x - iy)
     = (1/(1 - x - iy)) · ((1 - x + iy)/(1 - x + iy))
     = (1 - x)/((1 - x)^2 + y^2) + i y/((1 - x)^2 + y^2)
Example 9.2.2 Consider the functions f(z) = z, f(z) = z^3 and f(z) = 1/(1 - z). We write the functions in terms of r and θ and write them in modulus-argument form.

f(z) = z = r e^{iθ}

f(z) = z^3 = (r e^{iθ})^3 = r^3 e^{i3θ}

f(z) = 1/(1 - z)
     = 1/(1 - r e^{iθ})
     = (1/(1 - r e^{iθ})) · ((1 - r e^{-iθ})/(1 - r e^{-iθ}))
     = (1 - r e^{-iθ})/(1 - r e^{iθ} - r e^{-iθ} + r^2)
     = (1 - r cos θ + ir sin θ)/(1 - 2r cos θ + r^2)

Note that the denominator is real and non-negative.

     = (1/(1 - 2r cos θ + r^2)) |1 - r cos θ + ir sin θ| e^{i arctan(1 - r cos θ, r sin θ)}
     = (1/(1 - 2r cos θ + r^2)) √((1 - r cos θ)^2 + r^2 sin^2 θ) e^{i arctan(1 - r cos θ, r sin θ)}
     = (1/(1 - 2r cos θ + r^2)) √(1 - 2r cos θ + r^2 cos^2 θ + r^2 sin^2 θ) e^{i arctan(1 - r cos θ, r sin θ)}
     = (1/√(1 - 2r cos θ + r^2)) e^{i arctan(1 - r cos θ, r sin θ)}
9.3 Graphing Functions of a Complex Variable
We cannot directly graph a function of a complex variable as they are mappings from R^2 to R^2. To do so would require four dimensions. However, we can use a surface plot to graph the real part, the imaginary part, the modulus or the argument of a function of a complex variable. Each of these is a scalar field, a mapping from R^2 to R.
Example 9.3.1 Consider the identity function, f(z) = z. In Cartesian coordinates and Cartesian form, the
function is f(z) = x + iy. The real and imaginary components are u(x, y) = x and v(x, y) = y. (See Figure 9.5.)
In modulus argument form the function is
Figure 9.5: The real and imaginary parts of f(z) = z = x + iy
f(z) = z = r e^{iθ} = √(x^2 + y^2) e^{i arctan(x, y)}.

The modulus of f(z) is a single-valued function which is the distance from the origin. The argument of f(z) is a multi-valued function. Recall that arctan(x, y) has an infinite number of values, each of which differs by an integer multiple of 2π. A few branches of arg(f(z)) are plotted in Figure 9.6. The modulus and principal argument of f(z) = z are plotted in Figure 9.7.
Example 9.3.2 Consider the function f(z) = z^2. In Cartesian coordinates and separated into its real and imaginary components the function is

f(z) = z^2 = (x + iy)^2 = (x^2 - y^2) + i2xy.

Figure 9.8 shows surface plots of the real and imaginary parts of z^2. The magnitude of z^2 is

|z^2| = √(z^2 z̄^2) = z z̄ = (x + iy)(x - iy) = x^2 + y^2.
Figure 9.6: A Few Branches of arg(z)
Figure 9.7: Plots of |z| and Arg(z)
Note that

z^2 = (r e^{iθ})^2 = r^2 e^{i2θ}.
Figure 9.8: Plots of Re(z^2) and Im(z^2)
In Figure 9.9 are plots of |z^2| and a branch of arg(z^2).
Figure 9.9: Plots of |z^2| and a branch of arg(z^2)
9.4 Trigonometric Functions
The Exponential Function. Consider the exponential function e^z. We can use Euler's formula to write e^z = e^{x+iy} in terms of its real and imaginary parts.

e^z = e^{x+iy} = e^x e^{iy} = e^x cos y + i e^x sin y

From this we see that the exponential function is i2π periodic: e^{z + i2π} = e^z, and iπ odd periodic: e^{z + iπ} = -e^z. Figure 9.10 has surface plots of the real and imaginary parts of e^z which show this periodicity.
Figure 9.10: Plots of Re(e^z) and Im(e^z)
The modulus of e^z is a function of x alone.

|e^z| = |e^{x+iy}| = e^x

The argument of e^z is a function of y alone.

arg(e^z) = arg(e^{x+iy}) = { y + 2πn | n ∈ Z }

In Figure 9.11 are plots of |e^z| and a branch of arg(e^z).
Figure 9.11: Plots of |e^z| and a branch of arg(e^z)
Example 9.4.1 Show that the transformation w = e^z maps the infinite strip, -∞ < x < ∞, 0 < y < π, onto the upper half-plane.
Method 1. Consider the line z = x + ic, -∞ < x < ∞. Under the transformation, this is mapped to

w = e^{x+ic} = e^{ic} e^x,   -∞ < x < ∞.

This is a ray from the origin to infinity in the direction of e^{ic}. Thus we see that z = x is mapped to the positive, real w axis, z = x + iπ is mapped to the negative, real axis, and z = x + ic, 0 < c < π is mapped to a ray with angle c in the upper half-plane. Thus the strip is mapped to the upper half-plane. See Figure 9.12.
Method 2. Consider the line z = c + iy, 0 < y < π. Under the transformation, this is mapped to

w = e^{c+iy} = e^c e^{iy},   0 < y < π.

This is a semi-circle in the upper half-plane of radius e^c. As c → -∞, the radius goes to zero. As c → ∞, the radius goes to infinity. Thus the strip is mapped to the upper half-plane. See Figure 9.13.

Figure 9.12: e^z maps horizontal lines to rays.

Figure 9.13: e^z maps vertical lines to circular arcs.
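The mapping can be spot-checked numerically: every point of the strip 0 < y < π should map into the upper half-plane, and each horizontal line should map onto a ray of constant angle. A minimal sketch:

```python
import cmath
import math

# sample points in the strip -inf < x < inf, 0 < y < pi
points = [complex(x, y) for x in (-3, -1, 0, 2, 5)
                        for y in (0.1, 1.0, math.pi/2, 3.0)]
images = [cmath.exp(z) for z in points]

# every image lies strictly in the upper half-plane: Im(e^{x+iy}) = e^x sin y > 0
assert all(w.imag > 0 for w in images)

# a horizontal line y = c maps into the ray of angle c: the phase is constant
phases = [cmath.phase(cmath.exp(complex(x, 1.0))) for x in (-3, 0, 5)]
assert max(phases) - min(phases) < 1e-12
```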
The Sine and Cosine. We can write the sine and cosine in terms of the exponential function.

(e^{iz} + e^{-iz})/2 = (cos(z) + i sin(z) + cos(-z) + i sin(-z))/2
                     = (cos(z) + i sin(z) + cos(z) - i sin(z))/2
                     = cos z

(e^{iz} - e^{-iz})/(i2) = (cos(z) + i sin(z) - cos(-z) - i sin(-z))/(i2)
                        = (cos(z) + i sin(z) - cos(z) + i sin(z))/(i2)
                        = sin z
We separate the sine and cosine into their real and imaginary parts.

cos z = cos x cosh y - i sin x sinh y        sin z = sin x cosh y + i cos x sinh y

For fixed y, the sine and cosine are oscillatory in x. The amplitude of the oscillations grows with increasing |y|. See Figure 9.14 and Figure 9.15 for plots of the real and imaginary parts of the cosine and sine, respectively. Figure 9.16 shows the modulus of the cosine and the sine.
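These Cartesian forms can be checked against Python's cmath implementations at an arbitrary point. A minimal sketch:

```python
import cmath
import math

z = complex(1.1, -0.7)
x, y = z.real, z.imag

# cos z = cos x cosh y - i sin x sinh y
cos_form = math.cos(x)*math.cosh(y) - 1j*math.sin(x)*math.sinh(y)
# sin z = sin x cosh y + i cos x sinh y
sin_form = math.sin(x)*math.cosh(y) + 1j*math.cos(x)*math.sinh(y)

assert abs(cmath.cos(z) - cos_form) < 1e-12
assert abs(cmath.sin(z) - sin_form) < 1e-12
```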
Figure 9.14: Plots of Re(cos(z)) and Im(cos(z))
The Hyperbolic Sine and Cosine. The hyperbolic sine and cosine have the familiar definitions in terms of the exponential function. Thus, not surprisingly, we can write the sine in terms of the hyperbolic sine and the cosine in terms of the hyperbolic cosine. Below is a collection of trigonometric identities.
Figure 9.15: Plots of Re(sin(z)) and Im(sin(z))
Figure 9.16: Plots of |cos(z)| and |sin(z)|
Result 9.4.1

e^z = e^x (cos y + i sin y)
cos z = (e^{iz} + e^{-iz})/2        sin z = (e^{iz} - e^{-iz})/(2i)
cos z = cos x cosh y - i sin x sinh y        sin z = sin x cosh y + i cos x sinh y
cosh z = (e^z + e^{-z})/2        sinh z = (e^z - e^{-z})/2
cosh z = cosh x cos y + i sinh x sin y        sinh z = sinh x cos y + i cosh x sin y
sin(iz) = i sinh z        sinh(iz) = i sin z
cos(iz) = cosh z        cosh(iz) = cos z
log z = Log|z| + i arg(z) = Log|z| + i Arg(z) + i2πn
9.5 Inverse Trigonometric Functions
The Logarithm. The logarithm, log(z), is defined as the inverse of the exponential function e^z. The exponential function is many-to-one and thus has a multi-valued inverse. From what we know of many-to-one functions, we conclude that

e^{log z} = z,   but   log(e^z) ≠ z.

This is because e^{log z} is single-valued but log(e^z) is not. Because e^z is i2π periodic, the logarithm of a number is a set of numbers which differ by integer multiples of i2π. For instance, e^{i2πn} = 1 so that log(1) = {i2πn : n ∈ Z}. The logarithmic function has an infinite number of branches. The value of the function on the branches differs by integer multiples of i2π. It has singularities at zero and infinity. |log(z)| → ∞ as either z → 0 or z → ∞.
We will derive the formula for the complex variable logarithm. For now, let Log(x) denote the real variable logarithm that is defined for positive real numbers. Consider w = log z. This means that e^w = z. We write w = u + iv in Cartesian form and z = r e^{iθ} in polar form.

e^{u+iv} = r e^{iθ}

We equate the modulus and argument of this expression.

e^u = r        v = θ + 2πn
u = Log r      v = θ + 2πn

With log z = u + iv, we have a formula for the logarithm.

log z = Log|z| + i arg(z)

If we write out the multi-valuedness of the argument function we note that this has the form that we expected.

log z = Log|z| + i(Arg(z) + 2πn),   n ∈ Z

We check that our formula is correct by showing that e^{log z} = z.

e^{log z} = e^{Log|z| + i arg(z)} = e^{Log r + iθ + i2πn} = r e^{iθ} = z
Note again that log(e^z) ≠ z.

log(e^z) = Log|e^z| + i arg(e^z) = Log(e^x) + i arg(e^{x+iy}) = x + i(y + 2πn) = z + i2πn ≠ z

The real part of the logarithm is the single-valued Log r; the imaginary part is the multi-valued arg(z). We define the principal branch of the logarithm, Log z, to be the branch that satisfies -π < Im(Log z) ≤ π. For positive, real numbers the principal branch, Log x, is real-valued. We can write Log z in terms of the principal argument, Arg z.

Log z = Log|z| + i Arg(z)

See Figure 9.17 for plots of the real and imaginary part of Log z.
Figure 9.17: Plots of Re(Log z) and Im(Log z).
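Python's `cmath.log` computes exactly this principal branch, which gives a quick check of Log z = Log|z| + i Arg(z); the other branches differ by integer multiples of i2π and all satisfy e^{log z} = z. A minimal sketch:

```python
import cmath
import math

z = -1.5 + 2.0j
# principal branch: Log z = Log|z| + i Arg z, with Arg z in (-pi, pi]
principal = complex(math.log(abs(z)), cmath.phase(z))
assert abs(cmath.log(z) - principal) < 1e-14

# every other branch differs by an integer multiple of i 2 pi,
# and each branch still inverts the exponential: e^{log z} = z
branch = principal + 2j * math.pi * 3
assert abs(cmath.exp(branch) - z) < 1e-12
```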
The Form: a^b. Consider a^b where a and b are complex and a is nonzero. We define this expression in terms of the exponential and the logarithm as

a^b = e^{b log a}.
Note that the multi-valuedness of the logarithm may make a^b multi-valued. First consider the case that the exponent is an integer.

a^m = e^{m log a} = e^{m(Log a + i2πn)} = e^{m Log a} e^{i2πmn} = e^{m Log a}

Thus we see that a^m has a single value where m is an integer.
Now consider the case that the exponent is a rational number. Let p/q be a rational number in reduced form.

a^{p/q} = e^{(p/q) log a} = e^{(p/q)(Log a + i2πn)} = e^{(p/q) Log a} e^{i2πnp/q}.

This expression has q distinct values as

e^{i2πnp/q} = e^{i2πmp/q}   if and only if   n = m mod q.

Finally consider the case that the exponent b is an irrational number.

a^b = e^{b log a} = e^{b(Log a + i2πn)} = e^{b Log a} e^{i2πbn}

Note that e^{i2πbn} and e^{i2πbm} are equal if and only if i2πbn and i2πbm differ by an integer multiple of i2π, which means that bn and bm differ by an integer. This occurs only when n = m. Thus e^{i2πbn} has a distinct value for each different integer n. We conclude that a^b has an infinite number of values.
You may have noticed something a little fishy. If b is not an integer and a is any non-zero complex number, then a^b is multi-valued. Then why have we been treating e^b as single-valued, when it is merely the case a = e? The answer is that in the realm of functions of a complex variable, e^z is an abuse of notation. We write e^z when we mean exp(z), the single-valued exponential function. Thus when we write e^z we do not mean "the number e raised to the z power"; we mean "the exponential function of z". We denote the former scenario as (e)^z, which is multi-valued.
Logarithmic Identities. Back in high school trigonometry, when you thought that the logarithm was only defined for positive real numbers, you learned the identity log x^a = a log x. This identity doesn't hold when the logarithm is defined for nonzero complex numbers. Consider the logarithm of z^a.

log z^a = Log z^a + i2πn
a log z = a(Log z + i2πn) = a Log z + i2πan

Note that

log z^a ≠ a log z

Furthermore, since

Log z^a = Log|z^a| + i Arg(z^a),        a Log z = a Log|z| + ia Arg(z)

and Arg(z^a) is not necessarily the same as a Arg(z) we see that

Log z^a ≠ a Log z.

Consider the logarithm of a product.

log(ab) = Log|ab| + i arg(ab)
        = Log|a| + Log|b| + i arg(a) + i arg(b)
        = log a + log b

There is not an analogous identity for the principal branch of the logarithm since Arg(ab) is not in general the same as Arg(a) + Arg(b).
Using log(ab) = log(a) + log(b) we can deduce that log(a^n) = Σ_{k=1}^n log a = n log a, where n is a positive integer. This result is simple, straightforward and wrong. I have led you down the merry path to damnation.^3 In fact, log(a^2) ≠ 2 log a. Just write the multi-valuedness explicitly,

log(a^2) = Log(a^2) + i2πn,        2 log a = 2(Log a + i2πn) = 2 Log a + i4πn.

^3 Don't feel bad if you fell for it. The logarithm is a tricky bastard.
You can verify that

log(1/a) = -log a.

We can use this and the product identity to expand the logarithm of a quotient.

log(a/b) = log a - log b

For general values of a, log z^a ≠ a log z. However, for some values of a, equality holds. We already know that a = 1 and a = -1 work. To determine if equality holds for other values of a, we explicitly write the multi-valuedness.

log z^a = log(e^{a log z}) = a log z + i2πk,   k ∈ Z
a log z = a Log|z| + ia Arg z + ia2πm,   m ∈ Z

We see that log z^a = a log z if and only if

{am | m ∈ Z} = {am + k | k, m ∈ Z}.

The sets are equal if and only if a = 1/n, n ∈ Z^±. Thus we have the identity:

log(z^{1/n}) = (1/n) log z,   n ∈ Z^±
Result 9.5.1 Logarithmic Identities.

a^b = e^{b log a}
e^{log z} = e^{Log z} = z
log(ab) = log a + log b
log(1/a) = -log a
log(a/b) = log a - log b
log(z^{1/n}) = (1/n) log z,   n ∈ Z^±

Logarithmic Inequalities.

Log(uv) ≠ Log(u) + Log(v)
log z^a ≠ a log z
Log z^a ≠ a Log z
log e^z ≠ z
Example 9.5.1 Consider 1^π. We apply the definition a^b = e^{b log a}.

1^π = e^{π log(1)}
    = e^{π(Log(1) + i2πn)}
    = e^{i2π²n}

Thus we see that 1^π has an infinite number of values, all of which lie on the unit circle |z| = 1 in the complex plane. However, the set 1^π is not equal to the set |z| = 1. There are points in the latter which are not in the former. This is analogous to the fact that the rational numbers are dense in the real numbers, but are a subset of the real numbers.
Example 9.5.2 We find the zeros of sin z.

sin z = (e^{iz} - e^{-iz})/(i2) = 0
e^{iz} = e^{-iz}
e^{i2z} = 1
2z mod 2π = 0
z = nπ,   n ∈ Z

Equivalently, we could use the identity

sin z = sin x cosh y + i cos x sinh y = 0.

This becomes the two equations (for the real and imaginary parts)

sin x cosh y = 0   and   cos x sinh y = 0.

Since cosh is real-valued and positive for real argument, the first equation dictates that x = nπ, n ∈ Z. Since cos(nπ) = (-1)^n for n ∈ Z, the second equation implies that sinh y = 0. For real argument, sinh y is only zero at y = 0. Thus the zeros are

z = nπ,   n ∈ Z
Example 9.5.3 Since we can express sin z in terms of the exponential function, one would expect that we could express sin^{-1} z in terms of the logarithm.

w = sin^{-1} z
z = sin w
z = (e^{iw} - e^{-iw})/(i2)
e^{i2w} - i2z e^{iw} - 1 = 0
e^{iw} = iz + √(1 - z^2)
w = -i log(iz + √(1 - z^2))

Thus we see how the multi-valued sin^{-1} is related to the logarithm.

sin^{-1} z = -i log(iz + √(1 - z^2))
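The logarithmic form of the inverse sine can be verified numerically: with any fixed branch of the square root and the logarithm, sin(-i log(iz + √(1 - z²))) should return z. A minimal sketch using the principal branches from cmath (`asin_branch` is our own helper name):

```python
import cmath

def asin_branch(z):
    """One branch of sin^{-1} z = -i log(iz + (1 - z^2)^{1/2}),
    using the principal square root and logarithm."""
    return -1j * cmath.log(1j*z + cmath.sqrt(1 - z*z))

# works for real and complex arguments, including |z| > 1
for z in (0.3, -0.7 + 0.2j, 1.5j, 2.0):
    w = asin_branch(z)
    assert abs(cmath.sin(w) - z) < 1e-12
```

The identity holds for either value of the square root, since the two choices are reciprocals: (iz + √(1 - z²))(-iz + √(1 - z²)) = 1.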
Example 9.5.4 Consider the equation sin^3 z = 1.

sin^3 z = 1
sin z = 1^{1/3}
(e^{iz} - e^{-iz})/(i2) = 1^{1/3}
e^{iz} - i2 · 1^{1/3} - e^{-iz} = 0
e^{i2z} - i2 · 1^{1/3} e^{iz} - 1 = 0
e^{iz} = (i2 · 1^{1/3} ± √(-4 · 1^{2/3} + 4)) / 2
e^{iz} = i 1^{1/3} ± √(1 - 1^{2/3})
z = -i log(i 1^{1/3} ± √(1 - 1^{2/3}))

Note that there are three sources of multi-valuedness in the expression for z. The two values of the square root are shown explicitly. There are three cube roots of unity. Finally, the logarithm has an infinite number of branches. To show this multi-valuedness explicitly, we could write

z = -i Log(i e^{i2πm/3} ± √(1 - e^{i4πm/3})) + 2πn,   m = 0, 1, 2,   n = …, -1, 0, 1, …
Example 9.5.5 Consider the harmless looking equation, i^z = 1.
Before we start with the algebra, note that the right side of the equation is a single number. i^z is single-valued only when z is an integer. Thus we know that if there are solutions for z, they are integers. We now proceed to solve the equation.

i^z = 1
(e^{iπ/2})^z = 1

Use the fact that z is an integer.

e^{iπz/2} = 1
iπz/2 = i2πn,   for some n ∈ Z
z = 4n,   n ∈ Z

Here is a different approach. We write down the multi-valued form of i^z. We solve the equation by requiring that all the values of i^z are 1.

i^z = 1
e^{z log i} = 1
z log i = i2πn,   for some n ∈ Z
z(iπ/2 + i2πm) = i2πn,   for all m ∈ Z, for some n ∈ Z
iπz/2 + i2πmz = i2πn,   for all m ∈ Z, for some n ∈ Z
The only solutions that satisfy the above equation are

z = 4k,   k ∈ Z.

Now let's consider a slightly different problem: 1 ∈ i^z. For what values of z does i^z have 1 as one of its values?

1 ∈ i^z
1 ∈ { e^{z log i} }
1 ∈ { e^{z(iπ/2 + i2πn)} | n ∈ Z }
z(iπ/2 + i2πn) = i2πm,   m, n ∈ Z
z = 4m/(1 + 4n),   m, n ∈ Z

There are an infinite set of rational numbers for which i^z has 1 as one of its values. For example,

i^{4/5} = 1^{1/5} = { 1, e^{i2π/5}, e^{i4π/5}, e^{i6π/5}, e^{i8π/5} }
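Both conclusions can be spot-checked numerically: i^{4n} = 1 as a single-valued power, and for z = 4/5 the branch of log i with n = 1 makes i^z equal to 1. A minimal sketch:

```python
import cmath
import math

# z = 4n gives i^z = 1 (integer exponents are single-valued)
for n in range(-3, 4):
    assert abs((1j)**(4*n) - 1) < 1e-12

# for z = 4/5, one value of i^z is 1: use log i = i(pi/2 + 2 pi n) with n = 1,
# so z * log i = (4/5) * i(5 pi / 2) = i 2 pi
z = 4/5
value = cmath.exp(z * 1j * (math.pi/2 + 2*math.pi*1))
assert abs(value - 1) < 1e-12
```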
9.6 Branch Points
Example 9.6.1 Consider the function z^{1/2}. For each value of z, there are two values of z^{1/2}. We write z^{1/2} in modulus-argument and Cartesian form.

z^{1/2} = √|z| e^{i arg(z)/2}
z^{1/2} = √|z| cos(arg(z)/2) + i √|z| sin(arg(z)/2)

Figures 9.18 and 9.19 show the real and imaginary parts of z^{1/2} from three different viewpoints. The second and third views are looking down the x axis and y axis, respectively. Consider Re(z^{1/2}). This is a double layered sheet which intersects itself on the negative real axis. (Im(z^{1/2}) has a similar structure, but intersects itself on the positive real axis.) Let's start at a point on the positive real axis on the lower sheet. If we walk around the origin once and return to the positive real axis, we will be on the upper sheet. If we do this again, we will return to the lower sheet.
Suppose we are at a point in the complex plane. We pick one of the two values of z^{1/2}. If the function varies continuously as we walk around the origin and back to our starting point, the value of z^{1/2} will have changed. We will be on the other branch. Because walking around the point z = 0 takes us to a different branch of the function, we refer to z = 0 as a branch point.
Figure 9.18: Plots of Re(z^{1/2}) from three viewpoints.
Now consider the modulus-argument form of z^{1/2}:

z^{1/2} = √|z| e^{i arg(z)/2}.

Figure 9.20 shows the modulus and the principal argument of z^{1/2}. We see that each time we walk around the origin, the argument of z^{1/2} changes by π. This means that the value of the function changes by the factor e^{iπ} = -1, i.e. the function changes sign. If we walk around the origin twice, the argument changes by 2π, so that the value of the function does not change, e^{i2π} = 1.
z^{1/2} is a continuous function except at z = 0. Suppose we start at z = 1 = e^{i0} and the function value (e^{i0})^{1/2} = 1. If we follow the first path in Figure 9.21, the argument of z varies from up to about π/4, down to about -π/4 and back to 0. The value of the function is still (e^{i0})^{1/2}.
Figure 9.19: Plots of Im(z^{1/2}) from three viewpoints.
Figure 9.20: Plots of |z^{1/2}| and Arg(z^{1/2}).
Figure 9.21: A path that does not encircle the origin and a path around the origin

Now suppose we follow a circular path around the origin in the positive, counter-clockwise, direction. (See the second path in Figure 9.21.) The argument of z increases by 2π. The value of the function at half turns on the path is
(e^{i0})^{1/2} = 1,
(e^{iπ})^{1/2} = e^{iπ/2} = i,
(e^{i2π})^{1/2} = e^{iπ} = -1

As we return to the point z = 1, the argument of the function has changed by π and the value of the function has changed from 1 to -1. If we were to walk along the circular path again, the argument of z would increase by another 2π. The argument of the function would increase by another π and the value of the function would return to 1.

(e^{i4π})^{1/2} = e^{i2π} = 1

In general, any time we walk around the origin, the value of z^{1/2} changes by the factor -1. We call z = 0 a branch point. If we want a single-valued square root, we need something to prevent us from walking around the origin. We achieve this by introducing a branch cut. Suppose we have the complex plane drawn on an infinite sheet of paper. With a scissors we cut the paper from the origin to -∞ along the real axis. Then if we start
at z = e^{i0}, and draw a continuous line without leaving the paper, the argument of z will always be in the range -π < arg z < π. This means that -π/2 < arg(z^{1/2}) < π/2. No matter what path we follow in this cut plane, z = 1 has argument zero and (1)^{1/2} = 1. By never crossing the negative real axis, we have constructed a single valued branch of the square root function. We call the cut along the negative real axis a branch cut.
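Python's `cmath.sqrt` implements a branch of exactly this kind: the cut lies along the negative real axis, the argument of the result stays in (-π/2, π/2], and values jump in sign across the cut. A minimal sketch:

```python
import cmath
import math

# on the cut plane, arg(sqrt z) stays within (-pi/2, pi/2]
for theta in (-3.0, -1.0, 0.0, 1.0, 3.0):
    w = cmath.sqrt(cmath.exp(1j * theta))
    assert -math.pi/2 < cmath.phase(w) <= math.pi/2 + 1e-15

# crossing the branch cut flips the sign:
# points just above and just below z = -1 map near +i and -i
above = cmath.sqrt(complex(-1.0,  1e-9))
below = cmath.sqrt(complex(-1.0, -1e-9))
assert abs(above - 1j) < 1e-6
assert abs(below + 1j) < 1e-6
```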
Example 9.6.2 Consider the logarithmic function log z. For each value of z, there are an infinite number of values of log z. We write log z in Cartesian form.

log z = Log|z| + i arg z

Figure 9.22 shows the real and imaginary parts of the logarithm. The real part is single-valued. The imaginary part is multi-valued and has an infinite number of branches. The values of the logarithm form an infinite-layered sheet. If we start on one of the sheets and walk around the origin once in the positive direction, then the value of the logarithm increases by i2π and we move to the next branch. z = 0 is a branch point of the logarithm.
Figure 9.22: Plots of ℜ(log z) and a portion of ℑ(log z).
The logarithm is a continuous function except at z = 0. Suppose we start at z = 1 = e^{i0} and the function value log(e^{i0}) = Log(1) + i0 = 0. If we follow the first path in Figure 9.21, the argument of z, and thus the imaginary part of the logarithm, varies from 0, up to about π/4, down to about −π/4 and back to 0. The value of the logarithm is still 0.

Now suppose we follow a circular path around the origin in the positive direction. (See the second path in Figure 9.21.) The argument of z increases by 2π. The value of the logarithm at half turns on the path is

log(e^{i0}) = 0,  log(e^{iπ}) = iπ,  log(e^{i2π}) = i2π

As we return to the point z = 1, the value of the logarithm has changed by i2π. If we were to walk along the circular path again, the argument of z would increase by another 2π and the value of the logarithm would increase by another i2π.
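A hedged numerical illustration (mine, using only the standard-library cmath module): the principal value cmath.log picks the branch with imaginary part in (−π, π], and the other branches differ from it by integer multiples of i2π, every one of which exponentiates back to the same z.

```python
import cmath
import math

z = -1 + 1j
principal = cmath.log(z)  # imaginary part lies in (-pi, pi]

# A few of the infinitely many branches of log z:
branches = [principal + 2j * math.pi * n for n in range(-2, 3)]

# Each branch exponentiates back to z.
checks = [cmath.exp(w) for w in branches]
```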
Result 9.6.1 A point z_0 is a branch point of a function f(z) if the function changes value when you walk around the point on any path that encloses no singularities other than the one at z = z_0.
Branch Points at Infinity: Mapping Infinity to the Origin. Up to this point we have considered only branch points in the finite plane. Now we consider the possibility of a branch point at infinity. As a first method of approaching this problem we map the point at infinity to the origin with the transformation ζ = 1/z and examine the point ζ = 0.
Example 9.6.3 Again consider the function z^{1/2}. Mapping the point at infinity to the origin, we have f(ζ) = (1/ζ)^{1/2} = ζ^{−1/2}. For each value of ζ, there are two values of ζ^{−1/2}. We write ζ^{−1/2} in modulus-argument form.

ζ^{−1/2} = (1/√|ζ|) e^{−i arg(ζ)/2}

Like z^{1/2}, ζ^{−1/2} has a double-layered sheet of values. Figure 9.23 shows the modulus and the principal argument of ζ^{−1/2}. We see that each time we walk around the origin, the argument of ζ^{−1/2} changes by −π. This means that the value of the function changes by the factor e^{−iπ} = −1, i.e. the function changes sign. If we walk around the origin twice, the argument changes by −2π, so that the value of the function does not change, e^{−i2π} = 1.
Figure 9.23: Plots of |ζ^{−1/2}| and Arg(ζ^{−1/2}).
Since ζ^{−1/2} has a branch point at zero, we conclude that z^{1/2} has a branch point at infinity.

Example 9.6.4 Again consider the logarithmic function log z. Mapping the point at infinity to the origin, we have f(ζ) = log(1/ζ) = −log(ζ). From Example 9.6.2 we know that log(ζ) has a branch point at ζ = 0. Thus log z has a branch point at infinity.
Branch Points at Infinity: Paths Around Infinity. We can also check for a branch point at infinity by following a path that encloses the point at infinity and no other singularities. Just draw a simple closed curve that separates the complex plane into a bounded component that contains all the singularities of the function in the finite plane. Then, depending on orientation, the curve is a contour enclosing all the finite singularities, or the point at infinity and no other singularities.
Example 9.6.5 Once again consider the function z^{1/2}. We know that the function changes value on a curve that goes once around the origin. Such a curve can be considered to be either a path around the origin or a path around infinity. In either case the path encloses one singularity. There are branch points at the origin and at infinity. Now consider a curve that does not go around the origin. Such a curve can be considered to be either a path around neither of the branch points or both of them. Thus we see that z^{1/2} does not change value when we follow a path that encloses neither or both of its branch points.
Example 9.6.6 Consider f(z) = (z^2 − 1)^{1/2}. We factor the function.

f(z) = (z − 1)^{1/2} (z + 1)^{1/2}

There are branch points at z = ±1. Now consider the point at infinity.

f(1/ζ) = (ζ^{−2} − 1)^{1/2} = ±ζ^{−1} (1 − ζ^2)^{1/2}

Since f(1/ζ) does not have a branch point at ζ = 0, f(z) does not have a branch point at infinity. We could reach the same conclusion by considering a path around infinity. Consider a path that circles the branch points at z = ±1 once in the positive direction. Such a path circles the point at infinity once in the negative direction. In traversing this path, the value of f(z) is multiplied by the factor (e^{−i2π})^{1/2} (e^{−i2π})^{1/2} = e^{−i2π} = 1. Thus the value of the function does not change. There is no branch point at infinity.
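This path argument can be tested numerically. The sketch below is my addition (plain Python with the standard cmath module): it accumulates the continuous change in arg(z² − 1) along a circle of radius 3, which encloses both branch points; half of that change is the change in the argument of f(z) = (z² − 1)^{1/2}.

```python
import cmath
import math

def arg_change(points, g):
    # Total continuous change in arg(g(z)) along the path, unwrapping
    # the jumps that cmath.phase makes at the -pi/pi seam.
    total = 0.0
    prev = cmath.phase(g(points[0]))
    for z in points[1:]:
        cur = cmath.phase(g(z))
        d = cur - prev
        if d > math.pi:
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        total += d
        prev = cur
    return total

loop = [3.0 * cmath.exp(2j * math.pi * k / 400) for k in range(401)]
dtheta = arg_change(loop, lambda z: z * z - 1)
# arg(z^2 - 1) changes by 4*pi around this loop, so arg(f) changes by
# 2*pi: the square root returns to its starting value, no sign flip.
```

On a small circle about z = 1 alone, the same routine reports a change of 2π in arg(z² − 1), hence π in arg(f): the sign flip that marks a branch point.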
Diagnosing Branch Points. We have the definition of a branch point, but we do not have a convenient criterion for determining if a particular function has a branch point. We have seen that log z and z^α for non-integer α have branch points at zero and infinity. The inverse trigonometric functions like the arcsine also have branch points, but they can be written in terms of the logarithm and the square root. In fact all the elementary functions with branch points can be written in terms of the functions log z and z^α. Furthermore, note that the multi-valuedness of z^α comes from the logarithm, z^α = e^{α log z}. This gives us a way of quickly determining if and where a function may have branch points.
Result 9.6.2 Let f(z) be a single-valued function. Then log(f(z)) and (f(z))^α may have branch points only where f(z) is zero or singular.
Example 9.6.7 Consider the functions,
1. (z^2)^{1/2}
2. (z^{1/2})^2
3. (z^{1/2})^3
Are they multi-valued? Do they have branch points?
1.

(z^2)^{1/2} = ±√(z^2) = ±z

Because of the (·)^{1/2}, the function is multi-valued. The only possible branch points are at zero and infinity. If ((e^{i0})^2)^{1/2} = 1, then ((e^{i2π})^2)^{1/2} = (e^{i4π})^{1/2} = e^{i2π} = 1. Thus we see that the function does not change value when we walk around the origin. We can also consider this to be a path around infinity. This function is multi-valued, but has no branch points.
2.

(z^{1/2})^2 = (±√z)^2 = z

This function is single-valued.
3.

(z^{1/2})^3 = (±√z)^3 = ±(√z)^3

This function is multi-valued. We consider the possible branch point at z = 0. If ((e^{i0})^{1/2})^3 = 1, then ((e^{i2π})^{1/2})^3 = (e^{iπ})^3 = e^{i3π} = −1. Since the function changes value when we walk around the origin, it has a branch point at z = 0. Since this is also a path around infinity, there is a branch point there.
Example 9.6.8 Consider the function f(z) = log(1/(z − 1)). Since 1/(z − 1) is only zero at infinity and its only singularity is at z = 1, the only possibilities for branch points are at z = 1 and z = ∞. Since

log(1/(z − 1)) = −log(z − 1)

and log w has branch points at zero and infinity, we see that f(z) has branch points at z = 1 and z = ∞.
Example 9.6.9 Consider the functions,
1. e^{log z}
2. log e^{z}.
Are they multi-valued? Do they have branch points?
1.

e^{log z} = exp(Log z + i2πn) = e^{Log z} e^{i2πn} = z

This function is single-valued.
2.

log e^{z} = Log e^{z} + i2πn = z + i2πm

This function is multi-valued. It may have branch points only where e^{z} is zero or infinite. This only occurs at z = ∞. Thus there are no branch points in the finite plane. The function does not change when traversing a simple closed path. Since this path can be considered to enclose infinity, there is no branch point at infinity.
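A quick check of the two claims with Python's standard cmath module (my sketch, not the text's): e^{log z} recovers z on every branch, while the principal value of log e^{z} recovers z only up to a multiple of i2π.

```python
import cmath
import math

z = -2 + 3j
back = cmath.exp(cmath.log(z))        # e^{log z} = z, single-valued

w = 1 + 8j                            # Im(w) lies outside (-pi, pi]
recovered = cmath.log(cmath.exp(w))   # = w - i*2*pi, a different branch
```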
Consider (f(z))^α where f(z) is single-valued and f(z) has either a zero or a singularity at z = z_0. (f(z))^α may have a branch point at z = z_0. If f(z) is not a power of z, then it may be difficult to tell if (f(z))^α changes value when we walk around z_0. Factor f(z) into f(z) = g(z)h(z) where h(z) is nonzero and finite at z_0. Then g(z) captures the important behavior of f(z) at z_0; g(z) tells us how fast f(z) vanishes or blows up. Since (f(z))^α = (g(z))^α (h(z))^α and (h(z))^α does not have a branch point at z_0, (f(z))^α has a branch point at z_0 if and only if (g(z))^α has a branch point there.
Similarly, we can decompose

log(f(z)) = log(g(z)h(z)) = log(g(z)) + log(h(z))

to see that log(f(z)) has a branch point at z_0 if and only if log(g(z)) has a branch point there.

Result 9.6.3 Consider a single-valued function f(z) that has either a zero or a singularity at z = z_0. Let f(z) = g(z)h(z) where h(z) is nonzero and finite. (f(z))^α has a branch point at z = z_0 if and only if (g(z))^α has a branch point there. log(f(z)) has a branch point at z = z_0 if and only if log(g(z)) has a branch point there.
Example 9.6.10 Consider the functions,
1. sin z^{1/2}
2. (sin z)^{1/2}
3. z^{1/2} sin z^{1/2}
4. (sin z^2)^{1/2}
Find the branch points and the number of branches.
1.

sin z^{1/2} = sin(±√z) = ±sin(√z)

sin z^{1/2} is multi-valued. It has two branches. There may be branch points at zero and infinity. Consider the unit circle, which is a path around the origin or infinity. If sin((e^{i0})^{1/2}) = sin(1), then sin((e^{i2π})^{1/2}) = sin(e^{iπ}) = sin(−1) = −sin(1). There are branch points at the origin and infinity.
2.

(sin z)^{1/2} = ±√(sin z)

The function is multi-valued with two branches. The sine vanishes at z = nπ and is singular at infinity. There could be branch points at these locations. Consider the point z = nπ. We can write

sin z = (z − nπ) (sin z)/(z − nπ)

Note that sin z/(z − nπ) is nonzero and has a removable singularity at z = nπ.

lim_{z→nπ} (sin z)/(z − nπ) = lim_{z→nπ} (cos z)/1 = (−1)^n

Since (z − nπ)^{1/2} has a branch point at z = nπ, (sin z)^{1/2} has branch points at z = nπ.
Since the branch points at z = nπ go all the way out to infinity, it is not possible to make a path that encloses infinity and no other singularities. The point at infinity is a non-isolated singularity. A point can be a branch point only if it is an isolated singularity.
3.

z^{1/2} sin z^{1/2} = ±√z sin(±√z) = ±√z (±sin √z) = √z sin √z

The function is single-valued. Thus there could be no branch points.
4.

(sin z^2)^{1/2} = ±√(sin z^2)

This function is multi-valued. Since sin z^2 = 0 at z = (nπ)^{1/2}, there may be branch points there. First consider the point z = 0. We can write

sin z^2 = z^2 (sin z^2)/z^2

where sin(z^2)/z^2 is nonzero and has a removable singularity at z = 0.

lim_{z→0} (sin z^2)/z^2 = lim_{z→0} (2z cos z^2)/(2z) = 1

Since (z^2)^{1/2} does not have a branch point at z = 0, (sin z^2)^{1/2} does not have a branch point there either. Now consider the point z = √(nπ).

sin z^2 = (z − √(nπ)) (sin z^2)/(z − √(nπ))

sin(z^2)/(z − √(nπ)) is nonzero and has a removable singularity at z = √(nπ).

lim_{z→√(nπ)} (sin z^2)/(z − √(nπ)) = lim_{z→√(nπ)} (2z cos z^2)/1 = 2√(nπ) (−1)^n

Since (z − √(nπ))^{1/2} has a branch point at z = √(nπ), (sin z^2)^{1/2} also has a branch point there.
Thus we see that (sin z^2)^{1/2} has branch points at z = (nπ)^{1/2} for n ∈ Z, n ≠ 0. This is the set of numbers: ±√π, ±√(2π), . . . , ±i√π, ±i√(2π), . . . . The point at infinity is a non-isolated singularity.
Example 9.6.11 Find the branch points of

f(z) = (z^3 − z)^{1/3}.

Introduce branch cuts. If f(2) = ³√6 then what is f(−2)?
We expand f(z).

f(z) = z^{1/3} (z − 1)^{1/3} (z + 1)^{1/3}.

There are branch points at z = −1, 0, 1. We consider the point at infinity.

f(1/ζ) = (1/ζ)^{1/3} (1/ζ − 1)^{1/3} (1/ζ + 1)^{1/3} = ζ^{−1} (1 − ζ)^{1/3} (1 + ζ)^{1/3}

Since f(1/ζ) does not have a branch point at ζ = 0, f(z) does not have a branch point at infinity. Consider the three possible branch cuts in Figure 9.24.

Figure 9.24: Three Possible Branch Cuts for f(z) = (z^3 − z)^{1/3}

The first and the third branch cuts will make the function single-valued, the second will not. It is clear that the first set makes the function single-valued since it is not possible to walk around any of the branch points.
The second set of branch cuts would allow you to walk around the branch points at z = ±1. If you walked around these two once in the positive direction, the value of the function would change by the factor e^{i4π/3}.
The third set of branch cuts would allow you to walk around all three branch points together. You can verify that if you walk around the three branch points, the value of the function will not change (e^{i6π/3} = e^{i2π} = 1).
Suppose we introduce the third set of branch cuts and are on the branch with f(2) = ³√6.

f(2) = (2 e^{i0})^{1/3} (1 e^{i0})^{1/3} (3 e^{i0})^{1/3} = ³√6.

The value of f(−2) is

f(−2) = (2 e^{iπ})^{1/3} (3 e^{iπ})^{1/3} (1 e^{iπ})^{1/3} = ³√2 e^{iπ/3} ³√3 e^{iπ/3} ³√1 e^{iπ/3} = ³√6 e^{iπ} = −³√6.
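As a numerical cross-check (my sketch, standard library only): the function f below builds a branch from the principal logarithm of each factor, an assumption of mine that happens to agree with the branch chosen above at the two points z = ±2.

```python
import cmath

def f(z):
    # One branch of (z^3 - z)^{1/3} = z^{1/3} (z-1)^{1/3} (z+1)^{1/3},
    # built from the principal logarithm of each factor.
    return (cmath.exp(cmath.log(z) / 3)
            * cmath.exp(cmath.log(z - 1) / 3)
            * cmath.exp(cmath.log(z + 1) / 3))

# f(2) ~ 6^{1/3} ~ 1.817 and f(-2) ~ -6^{1/3} ~ -1.817
```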
Example 9.6.12 Find the branch points and number of branches for

f(z) = z^{z^2}.

z^{z^2} = exp(z^2 log z)

There may be branch points at the origin and infinity due to the logarithm. Consider walking around a circle of radius r centered at the origin in the positive direction. Since the logarithm changes by i2π, the value of f(z) changes by the factor e^{i2πr^2}. There are branch points at the origin and infinity. The function has an infinite number of branches.
Example 9.6.13 Construct a branch of

f(z) = (z^2 + 1)^{1/3}

such that

f(0) = (1/2)(−1 + √3 i).

First we factor f(z).

f(z) = (z − i)^{1/3} (z + i)^{1/3}

There are branch points at z = ±i. Figure 9.25 shows one way to introduce branch cuts.

Figure 9.25: Branch Cuts for f(z) = (z^2 + 1)^{1/3}

Since it is not possible to walk around any branch point, these cuts make the function single-valued. We introduce the coordinates:

z − i = ρ e^{iφ},  z + i = r e^{iθ}.

f(z) = (ρ e^{iφ})^{1/3} (r e^{iθ})^{1/3} = ³√(ρr) e^{i(φ+θ)/3}

The condition

f(0) = (1/2)(−1 + √3 i) = e^{i(2π/3+2πn)}

can be stated

³√1 e^{i(φ+θ)/3} = e^{i(2π/3+2πn)}

φ + θ = 2π + 6πn

The angles must be defined to satisfy this relation. One choice is

π/2 < φ < 5π/2,  −π/2 < θ < 3π/2.
Principal Branches. We construct the principal branch of the logarithm by putting a branch cut on the negative real axis and choosing z = r e^{iθ}, θ ∈ (−π, π). Thus the principal branch of the logarithm is

Log z = Log r + iθ,  −π < θ < π.

Note that if x is a negative real number, (and thus lies on the branch cut), then Log x is undefined.
The principal branch of z^α is

z^α = e^{α Log z}.

Note that there is a branch cut on the negative real axis.

−απ < arg(e^{α Log z}) < απ

The principal branch of z^{1/2} is denoted √z. The principal branch of z^{1/n} is denoted ⁿ√z.

Example 9.6.14 Construct √(1 − z^2), the principal branch of (1 − z^2)^{1/2}.
First note that since (1 − z^2)^{1/2} = (1 − z)^{1/2} (1 + z)^{1/2} there are branch points at z = 1 and z = −1. The principal branch of the square root has a branch cut on the negative real axis. 1 − z^2 is a negative real number for z ∈ (−∞, −1) ∪ (1, ∞). Thus we put branch cuts on (−∞, −1] and [1, ∞).
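Python's cmath.sqrt implements exactly this principal branch, with its cut on the negative real axis, so √(1 − z²) computed with it inherits the cuts (−∞, −1] and [1, ∞) just described. A small check (my sketch, not part of the text):

```python
import cmath
import math

def f(z):
    # Principal branch of (1 - z^2)^{1/2}; cuts fall where 1 - z^2 < 0.
    return cmath.sqrt(1 - z * z)

eps = 1e-9
a = f(0.5)             # real on the uncut segment (-1, 1)
b = f(2 + eps * 1j)    # just above the cut [1, oo): imaginary part < 0
c = f(2 - eps * 1j)    # just below the cut: imaginary part > 0
# b and c differ by a sign: the function jumps across the branch cut.
```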
9.7 Exercises
Cartesian and Modulus-Argument Form
Exercise 9.1
For a given real number φ, 0 ≤ φ < 2π, find the image of the sector 0 ≤ arg(z) < φ under the transformation w = z^4. How large should φ be so that the w plane is covered exactly once?
Hint, Solution
Trigonometric Functions
Exercise 9.2
In Cartesian coordinates, z = x +iy, write sin(z) in Cartesian and modulus-argument form.
Hint, Solution
Exercise 9.3
Show that e^{z} is nonzero for all finite z.
Hint, Solution
Exercise 9.4
Show that

|e^{z^2}| ≤ e^{|z|^2}.

When does equality hold?
Hint, Solution
Exercise 9.5
Solve coth(z) = 1.
Hint, Solution
Exercise 9.6
Solve 2 ∈ 2^{z}. That is, for what values of z is 2 one of the values of 2^{z}? Derive this result then verify your answer by evaluating 2^{z} for the solutions that you find.
Hint, Solution
Exercise 9.7
Solve 1 ∈ 1^{z}. That is, for what values of z is 1 one of the values of 1^{z}? Derive this result then verify your answer by evaluating 1^{z} for the solutions that you find.
Hint, Solution
Logarithmic Identities
Exercise 9.8
Find the fallacy in the following arguments:
1. log(−1) = log(1/(−1)) = log(1) − log(−1) = −log(−1), therefore, log(−1) = 0.
2. 1 = 1^{1/2} = ((−1)(−1))^{1/2} = (−1)^{1/2} (−1)^{1/2} = i·i = −1, therefore, 1 = −1.
Hint, Solution
Exercise 9.9
Write the following expressions in modulus-argument or Cartesian form. Denote any multi-valuedness explicitly.

2^{2/5},  3^{1+i},  (√3 − i)^{1/4},  1^{i/4}.
Hint, Solution
Exercise 9.10
Solve cos z = 69.
Hint, Solution
Exercise 9.11
Solve cot z = i47.
Hint, Solution
Exercise 9.12
Determine all values of
1. log(−i)
2. (−i)^{i}
3. 3^{π}
4. log(log(i))
and plot them in the complex plane.
Hint, Solution
Exercise 9.13
Determine all values of i^{i} and log((1 + i)^{i}) and plot them in the complex plane.
Hint, Solution
Exercise 9.14
Find all z for which
1. e^{z} = i
2. cos z = sin z
3. tan^2 z = −1
Hint, Solution
Exercise 9.15
Show that

tan^{−1}(z) = (i/2) log((i + z)/(i − z))

and

tanh^{−1}(z) = (1/2) log((1 + z)/(1 − z)).
Hint, Solution
Branch Points and Branch Cuts
Exercise 9.16
Determine the branch points of the function

f(z) = (z^3 − 1)^{1/2}.

Construct cuts and define a branch so that z = 0 and z = −1 do not lie on a cut, and such that f(0) = −i. What is f(−1) for this branch?
Hint, Solution
Exercise 9.17
Determine the branch points of the function

w(z) = ((z − 1)(z − 6)(z + 2))^{1/2}

Construct cuts and define a branch so that z = 4 does not lie on a cut, and such that w = 6i when z = 4.
Hint, Solution
Exercise 9.18
Give the number of branches and locations of the branch points for the functions
1. cos z^{1/2}
2. (z + i)^{z}
Hint, Solution
Exercise 9.19
Find the branch points of the following functions in the extended complex plane, (the complex plane including the point at infinity).
1. (z^2 + 1)^{1/2}
2. (z^3 − z)^{1/2}
3. log(z^2 − 1)
4. log((z + 1)/(z − 1))
Introduce branch cuts to make the functions single-valued.
Hint, Solution
Exercise 9.20
Find all branch points and introduce cuts to make the following functions single-valued: For the first function, choose cuts so that there is no cut within the disk |z| < 2.
1. f(z) = (z^3 + 8)^{1/2}
2. f(z) = log(5 + ((z + 1)/(z − 1))^{1/2})
3. f(z) = (z + i3)^{1/2}
Hint, Solution
Exercise 9.21
Let f(z) have branch points at z = 0 and z = ±i, but nowhere else in the extended complex plane. How does the value and argument of f(z) change while traversing the contour in Figure 9.26? Does the branch cut in Figure 9.26 make the function single-valued?
Figure 9.26: Contour Around the Branch Points and Branch Cut.
Hint, Solution
Exercise 9.22
Let f(z) be analytic except for no more than a countably infinite number of singularities. Suppose that f(z) has only one branch point in the finite complex plane. Does f(z) have a branch point at infinity? Now suppose that f(z) has two or more branch points in the finite complex plane. Does f(z) have a branch point at infinity?
Hint, Solution
Exercise 9.23
Find all branch points of (z^4 + 1)^{1/4} in the extended complex plane. Which of the branch cuts in Figure 9.27 make the function single-valued.
Figure 9.27: Four Candidate Sets of Branch Cuts for (z^4 + 1)^{1/4}
Hint, Solution
Exercise 9.24
Find the branch points of

f(z) = (z/(z^2 + 1))^{1/3}

in the extended complex plane. Introduce branch cuts that make the function single-valued and such that the function is defined on the positive real axis. Define a branch such that f(1) = 1/³√2. Write down an explicit formula for the value of the branch. What is f(1 + i)? What is the value of f(z) on either side of the branch cuts?
Hint, Solution
Exercise 9.25
Find all branch points of

f(z) = ((z − 1)(z − 2)(z − 3))^{1/2}

in the extended complex plane. Which of the branch cuts in Figure 9.28 will make the function single-valued. Using the first set of branch cuts in this figure define a branch on which f(0) = i√6. Write out an explicit formula for the value of the function on this branch.
Figure 9.28: Four Candidate Sets of Branch Cuts for ((z − 1)(z − 2)(z − 3))^{1/2}
Hint, Solution
Exercise 9.26
Determine the branch points of the function

w = ((z^2 − 2)(z + 2))^{1/3}.

Construct cuts and define a branch so that the resulting cut is one line of finite extent and w(2) = 2. What is w(−3) for this branch? What are the limiting values of w on either side of the branch cut?
Hint, Solution
Exercise 9.27
Construct the principal branch of arccos(z). (Arccos(z) has the property that if x ∈ [−1, 1] then Arccos(x) ∈ [0, π]. In particular, Arccos(0) = π/2.)
Hint, Solution
Exercise 9.28
Find the branch points of (z^{1/2} − 1)^{1/2} in the finite complex plane. Introduce branch cuts to make the function single-valued.
Hint, Solution
Exercise 9.29
For the linkage illustrated in Figure 9.29, use complex variables to outline a scheme for expressing the angular position, velocity and acceleration of arm c in terms of those of arm a. (You needn't work out the equations.)
Figure 9.29: A linkage (arms a, b, c; base l)
Hint, Solution
Exercise 9.30
Find the image of the strip |ℜ(z)| < 1 and of the strip 1 < ℑ(z) < 2 under the transformations:
1. w = 2z^2
2. w = (z + 1)/(z − 1)
Hint, Solution
Exercise 9.31
Locate and classify all the singularities of the following functions:
1. (z + 1)^{1/2}/(z + 2)
2. cos(1/(1 + z))
3. 1/(1 − e^{z})^2
In each case discuss the possibility of a singularity at the point ∞.
Hint, Solution
Exercise 9.32
Describe how the mapping w = sinh(z) transforms the infinite strip −∞ < x < ∞, 0 < y < π into the w-plane. Find cuts in the w-plane which make the mapping continuous both ways. What are the images of the lines (a) y = π/4; (b) x = 1?
Hint, Solution
9.8 Hints
Cartesian and Modulus-Argument Form
Hint 9.1
Trigonometric Functions
Hint 9.2
Recall that sin(z) = (1/(2i))(e^{iz} − e^{−iz}). Use Result 8.3.1 to convert between Cartesian and modulus-argument form.
Hint 9.3
Write e^{z} in polar form.
Hint 9.4
The exponential is an increasing function for real variables.
Hint 9.5
Write the hyperbolic cotangent in terms of exponentials.
Hint 9.6
Write out the multi-valuedness of 2^{z}. There is a doubly-infinite set of solutions to this problem.
Hint 9.7
Write out the multi-valuedness of 1^{z}.
Logarithmic Identities
Hint 9.8
Write out the multi-valuedness of the expressions.
Hint 9.9
Do the exponentiations in polar form.
Hint 9.10
Write the cosine in terms of exponentials. Multiply by e^{iz} to get a quadratic equation for e^{iz}.
Hint 9.11
Write the cotangent in terms of exponentials. Get a quadratic equation for e^{iz}.
Hint 9.12
Hint 9.13
i^{i} has an infinite number of real, positive values. i^{i} = e^{i log i}. log((1 + i)^{i}) has a doubly infinite set of values. log((1 + i)^{i}) = log(exp(i log(1 + i))).
Hint 9.14
Hint 9.15
Branch Points and Branch Cuts
Hint 9.16
Hint 9.17
Hint 9.18
Hint 9.19
1. (z^2 + 1)^{1/2} = (z − i)^{1/2} (z + i)^{1/2}
2. (z^3 − z)^{1/2} = z^{1/2} (z − 1)^{1/2} (z + 1)^{1/2}
3. log(z^2 − 1) = log(z − 1) + log(z + 1)
4. log((z + 1)/(z − 1)) = log(z + 1) − log(z − 1)
Hint 9.20
Hint 9.21
Reverse the orientation of the contour so that it encircles infinity and does not contain any branch points.
Hint 9.22
Consider a contour that encircles all the branch points in the finite complex plane. Reverse the orientation of the contour so that it contains the point at infinity and does not contain any branch points in the finite complex plane.
Hint 9.23
Factor the polynomial. The argument of z^{1/4} changes by π/2 on a contour that goes around the origin once in the positive direction.
Hint 9.24
Hint 9.25
To define the branch, define angles from each of the branch points in the finite complex plane.
Hint 9.26
Hint 9.27
Hint 9.28
Hint 9.29
Hint 9.30
Hint 9.31
Hint 9.32
9.9 Solutions
Cartesian and Modulus-Argument Form
Solution 9.1
We write the mapping w = z^4 in polar coordinates.

w = z^4 = (r e^{iθ})^4 = r^4 e^{i4θ}

Thus we see that

w : {r e^{iθ} | r ≥ 0, 0 ≤ θ < φ} → {r^4 e^{i4θ} | r ≥ 0, 0 ≤ θ < φ} = {r e^{iθ} | r ≥ 0, 0 ≤ θ < 4φ}.

We can state this in terms of the argument.

w : {z | 0 ≤ arg(z) < φ} → {z | 0 ≤ arg(z) < 4φ}

If φ = π/2, the sector will be mapped exactly to the whole complex plane.
Trigonometric Functions
Solution 9.2

sin z = (1/(2i)) (e^{iz} − e^{−iz})
= (1/(2i)) (e^{−y+ix} − e^{y−ix})
= (1/(2i)) (e^{−y} (cos x + i sin x) − e^{y} (cos x − i sin x))
= (1/2) (e^{−y} (sin x − i cos x) + e^{y} (sin x + i cos x))
= sin x cosh y + i cos x sinh y

sin z = √(sin^2 x cosh^2 y + cos^2 x sinh^2 y) exp(i arctan(sin x cosh y, cos x sinh y))
= √(cosh^2 y − cos^2 x) exp(i arctan(sin x cosh y, cos x sinh y))
= √((1/2)(cosh(2y) − cos(2x))) exp(i arctan(sin x cosh y, cos x sinh y))
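The Cartesian and modulus forms can be spot-checked numerically with the standard math and cmath modules (my sketch, not part of the text):

```python
import cmath
import math

z = 1.3 + 0.7j
x, y = z.real, z.imag
lhs = cmath.sin(z)
# Cartesian form derived above:
cartesian = math.sin(x) * math.cosh(y) + 1j * math.cos(x) * math.sinh(y)
# Modulus from the last line above:
modulus = math.sqrt(0.5 * (math.cosh(2 * y) - math.cos(2 * x)))
# lhs agrees with cartesian, and |lhs| with modulus, up to rounding.
```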
Solution 9.3
In order that e^{z} be zero, the modulus, e^{x}, must be zero. Since e^{x} = 0 has no finite solutions, e^{z} = 0 has no finite solutions.
Solution 9.4

|e^{z^2}| = |e^{(x+iy)^2}| = |e^{x^2−y^2+i2xy}| = e^{x^2−y^2}

e^{|z|^2} = e^{|x+iy|^2} = e^{x^2+y^2}

The exponential function is an increasing function for real variables. Since x^2 − y^2 ≤ x^2 + y^2,

|e^{z^2}| ≤ e^{|z|^2}.

Equality holds when y = 0.
Solution 9.5

coth(z) = 1
((e^{z} + e^{−z})/2) / ((e^{z} − e^{−z})/2) = 1
e^{z} + e^{−z} = e^{z} − e^{−z}
e^{−z} = 0

There are no solutions.
Solution 9.6
We write out the multi-valuedness of 2^{z}.

2 ∈ 2^{z}
e^{Log 2} ∈ {e^{z(Log(2)+i2πn)} | n ∈ Z}
Log 2 ∈ {z(Log(2) + i2πn) + i2πm | m, n ∈ Z}

z = (Log(2) + i2πm)/(Log(2) + i2πn),  m, n ∈ Z

We verify this solution. Consider m and n to be fixed integers. We express the multi-valuedness in terms of k.

2^{(Log(2)+i2πm)/(Log(2)+i2πn)} = e^{((Log(2)+i2πm)/(Log(2)+i2πn)) log(2)} = e^{((Log(2)+i2πm)/(Log(2)+i2πn))(Log(2)+i2πk)}

For k = n, this has the value e^{Log(2)+i2πm} = e^{Log(2)} = 2.
Solution 9.7
We write out the multi-valuedness of 1^{z}.

1 ∈ 1^{z}
1 ∈ {e^{z log(1)}}
1 ∈ {e^{z i2πn} | n ∈ Z}

The element corresponding to n = 0 is e^{0} = 1. Thus 1 ∈ 1^{z} has the solutions,

z ∈ C.

That is, z may be any complex number. We verify this solution.

1^{z} = e^{z log(1)} = e^{z i2πn}

For n = 0, this has the value 1.
Logarithmic Identities
Solution 9.8
1. The algebraic manipulations are fine. We write out the multi-valuedness of the logarithms.

log(−1) = log(1/(−1)) = log(1) − log(−1) = −log(−1)
{iπ + i2πn : n ∈ Z} = {iπ + i2πn : n ∈ Z} = {i2πn : n ∈ Z} − {iπ + i2πn : n ∈ Z} = {−iπ − i2πn : n ∈ Z}

Thus log(−1) = −log(−1). However this does not imply that log(−1) = 0. This is because the logarithm is a set-valued function. log(−1) = −log(−1) is really saying:

{iπ + i2πn : n ∈ Z} = {−iπ − i2πn : n ∈ Z}

2. We consider

1 = 1^{1/2} = ((−1)(−1))^{1/2} = (−1)^{1/2} (−1)^{1/2} = i·i = −1.

There are three multi-valued expressions above.

1^{1/2} = ±1
((−1)(−1))^{1/2} = ±1
(−1)^{1/2} (−1)^{1/2} = (±i)(±i) = ±1

Thus we see that the first and fourth equalities are incorrect.

1 ≠ 1^{1/2},  (−1)^{1/2} (−1)^{1/2} ≠ i·i
Solution 9.9

2^{2/5} = 4^{1/5} = ⁵√4 · 1^{1/5} = ⁵√4 e^{i2πn/5},  n = 0, 1, 2, 3, 4

3^{1+i} = e^{(1+i) log 3} = e^{(1+i)(Log 3+i2πn)} = e^{Log 3−2πn} e^{i(Log 3+2πn)},  n ∈ Z

(√3 − i)^{1/4} = (2 e^{−iπ/6})^{1/4} = ⁴√2 e^{−iπ/24} 1^{1/4} = ⁴√2 e^{i(πn/2−π/24)},  n = 0, 1, 2, 3

1^{i/4} = e^{(i/4) log 1} = e^{(i/4)(i2πn)} = e^{−πn/2},  n ∈ Z
Solution 9.10

cos z = 69
(e^{iz} + e^{−iz})/2 = 69
e^{i2z} − 138 e^{iz} + 1 = 0
e^{iz} = (1/2)(138 ± √(138^2 − 4))
z = −i log(69 ± 2√1190)
z = −i(Log(69 ± 2√1190) + i2πn)

z = 2πn − i Log(69 ± 2√1190),  n ∈ Z
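The solution families can be confirmed numerically (my sketch, standard library only). Note that 69 − 2√1190 = 1/(69 + 2√1190), since (69 − 2√1190)(69 + 2√1190) = 4761 − 4760 = 1, so the two sign choices give conjugate points.

```python
import cmath
import math

r = 69 + 2 * math.sqrt(1190)   # note: 69 - 2*sqrt(1190) = 1/r
solutions = [2 * math.pi * n - s * 1j * math.log(r)
             for n in (-1, 0, 1) for s in (1, -1)]
values = [cmath.cos(z) for z in solutions]   # each value ~ 69
```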
Solution 9.11

cot z = i47
((e^{iz} + e^{−iz})/2) / ((e^{iz} − e^{−iz})/(2i)) = i47
e^{iz} + e^{−iz} = 47(e^{iz} − e^{−iz})
46 e^{i2z} − 48 = 0
i2z = log(24/23)
z = −(i/2) log(24/23)
z = −(i/2)(Log(24/23) + i2πn),  n ∈ Z

z = πn − (i/2) Log(24/23),  n ∈ Z
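Again a numerical spot-check (my sketch): for the n = 0 solution,

```python
import cmath
import math

z = -0.5j * math.log(24 / 23)
cot = cmath.cos(z) / cmath.sin(z)
# cot ~ 47j, as required
```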
Solution 9.12
1.

log(−i) = Log(|−i|) + i arg(−i) = Log(1) + i(−π/2 + 2πn),  n ∈ Z

log(−i) = −iπ/2 + i2πn,  n ∈ Z

These are equally spaced points on the imaginary axis. See Figure 9.30.

Figure 9.30: log(−i)

2.

(−i)^{i} = e^{i log(−i)} = e^{i(−iπ/2+i2πn)},  n ∈ Z

(−i)^{i} = e^{π/2−2πn},  n ∈ Z

These are points on the positive real axis with an accumulation point at the origin. See Figure 9.31.

Figure 9.31: (−i)^{i}

3.

3^{π} = e^{π log(3)} = e^{π(Log(3)+i arg(3))}

3^{π} = e^{π(Log(3)+i2πn)},  n ∈ Z

These points all lie on the circle of radius 3^{π} centered about the origin in the complex plane. See Figure 9.32.

Figure 9.32: 3^{π}

4.

log(log(i)) = log(i(π/2 + 2πm)),  m ∈ Z
= Log|π/2 + 2πm| + i Arg(i(π/2 + 2πm)) + i2πn,  m, n ∈ Z
= Log|π/2 + 2πm| + i sign(1 + 4m)(π/2) + i2πn,  m, n ∈ Z

These points all lie in the right half-plane. See Figure 9.33.

Figure 9.33: log(log(i))
Solution 9.13

i^{i} = e^{i log(i)} = e^{i(Log|i| + i Arg(i) + i2πn)},  n ∈ Z
= e^{i(iπ/2+i2πn)},  n ∈ Z
= e^{−(π/2+2πn)},  n ∈ Z

These are points on the positive real axis. There is an accumulation point at z = 0. See Figure 9.34.

Figure 9.34: i^{i}

log((1 + i)^{i}) = log(e^{i log(1+i)})
= i log(1 + i) + i2πn,  n ∈ Z
= i(Log|1 + i| + i Arg(1 + i) + i2πm) + i2πn,  m, n ∈ Z
= i((1/2) Log 2 + iπ/4 + i2πm) + i2πn,  m, n ∈ Z
= −π(1/4 + 2m) + i((1/2) Log 2 + 2πn),  m, n ∈ Z

See Figure 9.35 for a plot.

Figure 9.35: log((1 + i)^{i})
Solution 9.14
1.

e^{z} = i
z = log i
z = Log(|i|) + i arg(i)
z = Log(1) + i(π/2 + 2πn),  n ∈ Z

z = iπ/2 + i2πn,  n ∈ Z

2. We can solve the equation by writing the cosine and sine in terms of the exponential.

cos z = sin z
(e^{iz} + e^{−iz})/2 = (e^{iz} − e^{−iz})/(2i)
(1 − i) e^{iz} = (1 + i) e^{−iz}
e^{i2z} = (1 + i)/(1 − i)
e^{i2z} = i
i2z = log(i)
i2z = iπ/2 + i2πn,  n ∈ Z

z = π/4 + πn,  n ∈ Z

3.

tan^2 z = −1
sin^2 z = −cos^2 z
cos z = ±i sin z
(e^{iz} + e^{−iz})/2 = ±i (e^{iz} − e^{−iz})/(2i)
e^{iz} = −e^{iz} or e^{−iz} = −e^{−iz}
e^{iz} = 0 or e^{−iz} = 0
e^{−y+ix} = 0 or e^{y−ix} = 0
e^{−y} = 0 or e^{y} = 0
z = ∅

There are no solutions for finite z.
Solution 9.15
First we consider tan^{−1}(z).

w = tan^{−1}(z)
z = tan(w)
z = sin(w)/cos(w)
z = ((e^{iw} − e^{−iw})/(2i)) / ((e^{iw} + e^{−iw})/2)
z e^{iw} + z e^{−iw} = −i e^{iw} + i e^{−iw}
(i + z) e^{i2w} = (i − z)
e^{iw} = ((i − z)/(i + z))^{1/2}
w = −i log((i − z)/(i + z))^{1/2}

tan^{−1}(z) = (i/2) log((i + z)/(i − z))

Now we consider tanh^{−1}(z).

w = tanh^{−1}(z)
z = tanh(w)
z = sinh(w)/cosh(w)
z = ((e^{w} − e^{−w})/2) / ((e^{w} + e^{−w})/2)
z e^{w} + z e^{−w} = e^{w} − e^{−w}
(z − 1) e^{2w} = −z − 1
e^{w} = ((z + 1)/(1 − z))^{1/2}
w = log((z + 1)/(1 − z))^{1/2}

tanh^{−1}(z) = (1/2) log((1 + z)/(1 − z))
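Both formulas can be verified numerically by round-tripping through tan and tanh (my sketch, using the standard cmath module and its principal branches):

```python
import cmath

def atan_formula(z):
    # tan^{-1}(z) = (i/2) log((i + z)/(i - z)), on the principal branch
    return 0.5j * cmath.log((1j + z) / (1j - z))

def atanh_formula(z):
    # tanh^{-1}(z) = (1/2) log((1 + z)/(1 - z))
    return 0.5 * cmath.log((1 + z) / (1 - z))

z = 0.3 + 0.4j
# tan(atan_formula(z)) ~ z and tanh(atanh_formula(z)) ~ z
```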
Branch Points and Branch Cuts
Solution 9.16
The cube roots of 1 are

{1, e^{i2π/3}, e^{i4π/3}} = {1, (−1 + i√3)/2, (−1 − i√3)/2}.

Thus we can write

(z^3 − 1)^{1/2} = (z − 1)^{1/2} (z + (1 − i√3)/2)^{1/2} (z + (1 + i√3)/2)^{1/2}.

There are three branch points on the circle of radius 1.

z ∈ {1, (−1 + i√3)/2, (−1 − i√3)/2}

We examine the point at infinity.

f(1/ζ) = (1/ζ^3 − 1)^{1/2} = ζ^{−3/2} (1 − ζ^3)^{1/2}

Since f(1/ζ) has a branch point at ζ = 0, f(z) has a branch point at infinity.
There are several ways of introducing branch cuts to separate the branches of the function. The easiest approach is to put a branch cut from each of the three branch points in the finite complex plane out to the branch point at infinity. See Figure 9.36a. Clearly this makes the function single-valued as it is impossible to walk around any of the branch points. Another approach is to have a branch cut from one of the branch points in the finite plane to the branch point at infinity and a branch cut connecting the remaining two branch points. See Figure 9.36bcd. Note that in walking around any one of the finite branch points, (in the positive direction), the argument of the function changes by π. This means that the value of the function changes by e^{iπ}, which is to say the value of the function changes sign. In walking around any two of the finite branch points, (again in the positive direction), the argument of the function changes by 2π. This means that the value of the function changes by e^{i2π}, which is to say that the value of the function does not change. This demonstrates that the latter branch cut approach makes the function single-valued.
Now we construct a branch. We will use the branch cuts in Figure 9.36a. We introduce variables to measure radii and angles from the three finite branch points.

z − 1 = r_1 e^{iθ_1},  0 < θ_1 < 2π
z + (1 − i√3)/2 = r_2 e^{iθ_2},  −4π/3 < θ_2 < 2π/3
z + (1 + i√3)/2 = r_3 e^{iθ_3},  −2π/3 < θ_3 < 4π/3

Figure 9.36: Branch cuts (a), (b), (c) and (d) for (z^3 − 1)^{1/2}

We compute f(0) to see if it has the desired value.

f(z) = √(r_1 r_2 r_3) e^{i(θ_1+θ_2+θ_3)/2}
f(0) = e^{i(π−π/3+π/3)/2} = i

Since it does not have the desired value, we change the range of θ_1.

z − 1 = r_1 e^{iθ_1},  2π < θ_1 < 4π

f(0) now has the desired value.

f(0) = e^{i(3π−π/3+π/3)/2} = −i

We compute f(−1).

f(−1) = √2 e^{i(3π−2π/3+2π/3)/2} = −i√2
Solution 9.17
First we factor the function.
w(z) = ((z + 2)(z 1)(z 6))
1/2
= (z + 2)
1/2
(z 1)
1/2
(z 6)
1/2
274
There are branch points at z = 2, 1, 6. Now we examine the point at innity.
w
_
1

_
=
__
1

+ 2
__
1

1
__
1

6
__
1/2
=
3/2
__
1 +
2

__
1
1

__
1
6

__
1/2
Since
3/2
has a branch point at = 0 and the rest of the terms are analytic there, w(z) has a branch point at
innity.
Consider the set of branch cuts in Figure 9.37. These cuts let us walk around the branch points at z = 2
and z = 1 together or if we change our perspective, we would be walking around the branch points at z = 6 and
z = together. Consider a contour in this cut plane that encircles the branch points at z = 2 and z = 1. Since
the argument of (z z
0
)
1/2
changes by when we walk around z
0
, the argument of w(z) changes by 2 when we
traverse the contour. Thus the value of the function does not change and it is a valid set of branch cuts.
Figure 9.37: Branch Cuts for ((z + 2)(z 1)(z 6))
1/2
Now to dene the branch. We make a choice of angles.
z + 2 = r
1
e
i
1
,
1
=
2
for z (1..6),
z 1 = r
2
e
i
2
,
2
=
1
for z (1..6),
z 6 = r
3
e
i
3
, 0 <
3
< 2
275
The function is

w(z) = \left(r_1 e^{i\theta_1} r_2 e^{i\theta_2} r_3 e^{i\theta_3}\right)^{1/2} = \sqrt{r_1 r_2 r_3} \, e^{i(\theta_1 + \theta_2 + \theta_3)/2}.

We evaluate the function at z = 4.

w(4) = \sqrt{(6)(3)(2)} \, e^{i(2\pi n + 2\pi n + \pi)/2} = i6

We see that our choice of angles gives us the desired branch.
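As a numerical sanity check (a sketch, not from the text), principal angles from z = -2 and z = 1 together with a shifted angle from z = 6 realize exactly the cuts of Figure 9.37, since the two jumps left of z = -2 cancel in the square root:

```python
import cmath
import math

# Hypothetical helper: the angle of w shifted into the interval (lo, lo + 2*pi].
def angle_in(w, lo):
    a = cmath.phase(w)
    while a <= lo:
        a += 2 * math.pi
    while a > lo + 2 * math.pi:
        a -= 2 * math.pi
    return a

def w(z):
    t1 = cmath.phase(z + 2)    # jumps across (-inf, -2] ...
    t2 = cmath.phase(z - 1)    # ... where both jumps cancel after /2
    t3 = angle_in(z - 6, 0.0)  # 0 < theta3 < 2*pi: cut on [6, inf)
    r = abs(z + 2) * abs(z - 1) * abs(z - 6)
    return math.sqrt(r) * cmath.exp(1j * (t1 + t2 + t3) / 2)

print(w(4))  # approximately 6j
```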
Solution 9.18
1.

\cos(z^{1/2}) = \cos(\pm\sqrt{z}) = \cos(\sqrt{z})

This is a single-valued function: since cosine is even, the two values \pm\sqrt{z} give the same cosine. There are no branch points.

2.

(z + i)^z = e^{z \log(z + i)} = e^{z(\text{Log}|z + i| + i \, \text{Arg}(z + i) + i2\pi n)}, \qquad n \in \mathbb{Z}

There is a branch point at z = -i. There are an infinite number of branches.
Solution 9.19
1.

f(z) = (z^2 + 1)^{1/2} = (z + i)^{1/2} (z - i)^{1/2}

We see that there are branch points at z = \pm i. To examine the point at infinity, we substitute z = 1/\zeta and
examine the point \zeta = 0.

\left(\left(\frac{1}{\zeta}\right)^2 + 1\right)^{1/2} = \frac{1}{(\zeta^2)^{1/2}} \left(1 + \zeta^2\right)^{1/2}

Since there is no branch point at \zeta = 0, f(z) has no branch point at infinity.
A branch cut connecting z = \pm i would make the function single-valued. We could also accomplish this with
two branch cuts starting at z = \pm i and going to infinity.
2.

f(z) = (z^3 - z)^{1/2} = z^{1/2} (z - 1)^{1/2} (z + 1)^{1/2}

There are branch points at z = -1, 0, 1. Now we consider the point at infinity.

f\left(\frac{1}{\zeta}\right) = \left(\left(\frac{1}{\zeta}\right)^3 - \frac{1}{\zeta}\right)^{1/2} = \zeta^{-3/2} \left(1 - \zeta^2\right)^{1/2}

There is a branch point at infinity.
One can make the function single-valued with three branch cuts that start at z = -1, 0, 1 and each go to
infinity. We can also make the function single-valued with a branch cut that connects two of the points
z = -1, 0, 1 and another branch cut that starts at the remaining point and goes to infinity.
3.

f(z) = \log(z^2 - 1) = \log(z - 1) + \log(z + 1)

There are branch points at z = \pm 1.

f\left(\frac{1}{\zeta}\right) = \log\left(\frac{1}{\zeta^2} - 1\right) = \log(\zeta^{-2}) + \log(1 - \zeta^2)

\log(\zeta^{-2}) has a branch point at \zeta = 0.

\log(\zeta^{-2}) = \text{Log}|\zeta^{-2}| + i \arg(\zeta^{-2}) = \text{Log}|\zeta^{-2}| - i2\arg(\zeta)

Every time we walk around the point \zeta = 0 in the positive direction, the value of the function changes by
-i4\pi. f(z) has a branch point at infinity.
We can make the function single-valued by introducing two branch cuts that start at z = \pm 1 and each go
to infinity.
4.

f(z) = \log\left(\frac{z + 1}{z - 1}\right) = \log(z + 1) - \log(z - 1)

There are branch points at z = \pm 1.

f\left(\frac{1}{\zeta}\right) = \log\left(\frac{1/\zeta + 1}{1/\zeta - 1}\right) = \log\left(\frac{1 + \zeta}{1 - \zeta}\right)

There is no branch point at \zeta = 0. f(z) has no branch point at infinity.
We can make the function single-valued by introducing two branch cuts that start at z = \pm 1 and each go
to infinity. We can also make the function single-valued with a branch cut that connects the points z = \pm 1.
This is because \log(z + 1) and \log(z - 1) each change by i2\pi when you walk around
their branch points once in the positive direction, so their difference is unchanged.
Solution 9.20
1. The cube roots of -8 are

\left\{-2, \, 2 e^{i\pi/3}, \, 2 e^{-i\pi/3}\right\} = \left\{-2, \, 1 + i\sqrt{3}, \, 1 - i\sqrt{3}\right\}.

Thus we can write

\left(z^3 + 8\right)^{1/2} = (z + 2)^{1/2} (z - 1 - i\sqrt{3})^{1/2} (z - 1 + i\sqrt{3})^{1/2}.

There are three branch points on the circle of radius 2,

z = \left\{-2, \, 1 + i\sqrt{3}, \, 1 - i\sqrt{3}\right\}.

We examine the point at infinity.

f(1/\zeta) = (1/\zeta^3 + 8)^{1/2} = \zeta^{-3/2} (1 + 8\zeta^3)^{1/2}
Since f(1/\zeta) has a branch point at \zeta = 0, f(z) has a branch point at infinity.
There are several ways of introducing branch cuts outside of the disk |z| < 2 to separate the branches of
the function. The easiest approach is to put a branch cut from each of the three branch points in the finite
complex plane out to the branch point at infinity. See Figure 9.38a. Clearly this makes the function single-valued
as it is impossible to walk around any of the branch points. Another approach is to have a branch cut
from one of the branch points in the finite plane to the branch point at infinity and a branch cut connecting
the remaining two branch points. See Figure 9.38b, c, d. Note that in walking around any one of the finite
branch points (in the positive direction), the argument of the function changes by \pi. This means that the
value of the function changes by e^{i\pi}, which is to say the value of the function changes sign. In walking
around any two of the finite branch points (again in the positive direction), the argument of the function
changes by 2\pi. This means that the value of the function changes by e^{i2\pi}, which is to say that the value of
the function does not change. This demonstrates that the latter branch cut approach makes the function
single-valued.

Figure 9.38: Branch cuts for (z^3 + 8)^{1/2} (panels a, b, c, d).
2.

f(z) = \log\left(5 + \left(\frac{z + 1}{z - 1}\right)^{1/2}\right)

First we deal with the function

g(z) = \left(\frac{z + 1}{z - 1}\right)^{1/2}

Note that it has branch points at z = \pm 1. Consider the point at infinity.

g(1/\zeta) = \left(\frac{1/\zeta + 1}{1/\zeta - 1}\right)^{1/2} = \left(\frac{1 + \zeta}{1 - \zeta}\right)^{1/2}
Since g(1/\zeta) has no branch point at \zeta = 0, g(z) has no branch point at infinity. This means that if we
walk around both of the branch points at z = \pm 1, the function does not change value. We can verify this
with another method: When we walk around the point z = -1 once in the positive direction, the argument
of z + 1 changes by 2\pi, the argument of (z + 1)^{1/2} changes by \pi and thus the value of (z + 1)^{1/2} changes
by e^{i\pi} = -1. When we walk around the point z = 1 once in the positive direction, the argument of z - 1
changes by 2\pi, the argument of (z - 1)^{1/2} changes by \pi and thus the value of (z - 1)^{1/2} changes by
e^{i\pi} = -1. f(z) has branch points at z = \pm 1. When we walk around both points z = \pm 1 once in the
positive direction, the value of \left(\frac{z+1}{z-1}\right)^{1/2} does not change. Thus we can make the function single-valued with
a branch cut which enables us to walk around either none or both of these branch points. We put a branch
cut from -1 to 1 on the real axis.
f(z) has branch points where

5 + \left(\frac{z + 1}{z - 1}\right)^{1/2}

is either zero or infinite. The only place in the extended complex plane where the expression becomes infinite
is at z = 1. Now we look for the zeros.

5 + \left(\frac{z + 1}{z - 1}\right)^{1/2} = 0
\left(\frac{z + 1}{z - 1}\right)^{1/2} = -5
\frac{z + 1}{z - 1} = 25
z + 1 = 25z - 25
z = \frac{13}{12}

Note that

\left(\frac{13/12 + 1}{13/12 - 1}\right)^{1/2} = 25^{1/2} = \pm 5.
On one branch (which we call the positive branch) of the function g(z) the quantity

5 + \left(\frac{z + 1}{z - 1}\right)^{1/2}

is always nonzero. On the other (negative) branch of the function, this quantity has a zero at z = 13/12.
The logarithm introduces branch points at z = \pm 1 on both the positive and negative branch of g(z). It
introduces a branch point at z = 13/12 on the negative branch of g(z). To determine if additional branch
cuts are needed to separate the branches, we consider

w = 5 + \left(\frac{z + 1}{z - 1}\right)^{1/2}
and see where the branch cut between \pm 1 gets mapped to in the w plane. We rewrite the mapping.

w = 5 + \left(1 + \frac{2}{z - 1}\right)^{1/2}

The mapping is the following sequence of simple transformations:
(a) z \to z - 1
(b) z \to 1/z
(c) z \to 2z
(d) z \to z + 1
(e) z \to z^{1/2}
(f) z \to z + 5
We show these transformations graphically in the figures.
For the positive branch of g(z), the branch cut is mapped to the line x = 5 and the z plane is mapped to
the half-plane x > 5. \log(w) has branch points at w = 0 and w = \infty. It is possible to walk around only one
of these points in the half-plane x > 5. Thus no additional branch cuts are needed in the positive sheet of
g(z).
For the negative branch of g(z), the branch cut is mapped to the line x = 5 and the z plane is mapped to
the half-plane x < 5. It is possible to walk around either w = 0 or w = \infty alone in this half-plane. Thus
we need an additional branch cut. On the negative sheet of g(z), we put a branch cut between z = 1 and
z = 13/12. This puts a branch cut between w = \infty and w = 0 and thus separates the branches of the
logarithm.
Figure 9.39 shows the branch cuts in the positive and negative sheets of g(z). (In the negative sheet, g(13/12) = -5; in the positive sheet, g(13/12) = 5.)

Figure 9.39: The branch cuts for f(z) = \log\left(5 + \left(\frac{z+1}{z-1}\right)^{1/2}\right).
3. The function f(z) = (z + i3)^{1/2} has a branch point at z = -i3. The function is made single-valued by
connecting this point and the point at infinity with a branch cut.
Solution 9.21
Note that the curve with opposite orientation goes around infinity in the positive direction and does not enclose
any branch points. Thus the value of the function does not change when traversing the curve (with either
orientation, of course). This means that the argument of the function must change by an integer multiple of 2\pi.
Since the branch cut only allows us to encircle all three or none of the branch points, it makes the function single-valued.
Solution 9.22
We suppose that f(z) has only one branch point in the finite complex plane. Consider any contour that encircles
this branch point in the positive direction. f(z) changes value if we traverse the contour. If we reverse the
orientation of the contour, then it encircles infinity in the positive direction, but contains no branch points in the
finite complex plane. Since the function changes value when we traverse the contour, we conclude that the point
at infinity must be a branch point. If f(z) has only a single branch point in the finite complex plane then it must
have a branch point at infinity.
If f(z) has two or more branch points in the finite complex plane then it may or may not have a branch point
at infinity. This is because the value of the function may or may not change on a contour that encircles all the
branch points in the finite complex plane.
Solution 9.23
First we factor the function,

f(z) = \left(z^4 + 1\right)^{1/4} = \left(z - \frac{1 + i}{\sqrt{2}}\right)^{1/4} \left(z - \frac{-1 + i}{\sqrt{2}}\right)^{1/4} \left(z - \frac{1 - i}{\sqrt{2}}\right)^{1/4} \left(z - \frac{-1 - i}{\sqrt{2}}\right)^{1/4}.

There are branch points at z = \frac{\pm 1 \pm i}{\sqrt{2}}. We make the substitution z = 1/\zeta to examine the point at infinity.

f\left(\frac{1}{\zeta}\right) = \left(\frac{1}{\zeta^4} + 1\right)^{1/4} = \frac{1}{(\zeta^4)^{1/4}} \left(1 + \zeta^4\right)^{1/4}

(\zeta^4)^{1/4} has a removable singularity at the point \zeta = 0, but no branch point there. Thus (z^4 + 1)^{1/4} has no branch
point at infinity.
Note that the argument of (z - z_0)^{1/4} changes by \pi/2 on a contour that goes around the point z_0 once in the
positive direction. The argument of (z^4 + 1)^{1/4} changes by n\pi/2 on a contour that goes around n of its branch
points. Thus any set of branch cuts that permit you to walk around only one, two or three of the branch points
will not make the function single-valued. A set of branch cuts that permit us to walk around only zero or all four
of the branch points will make the function single-valued. Thus we see that the first two sets of branch cuts in
Figure 9.27 will make the function single-valued, while the remaining two will not.
Consider the contour in Figure ??. There are two ways to see that the function does not change value while
traversing the contour. The first is to note that each of the branch points makes the argument of the function
increase by \pi/2. Thus the argument of (z^4 + 1)^{1/4} changes by 4(\pi/2) = 2\pi on the contour. This means that the
value of the function changes by the factor e^{i2\pi} = 1. If we change the orientation of the contour, then it is a
contour that encircles infinity once in the positive direction. There are no branch points inside this contour
with opposite orientation. (Recall that the inside of a contour lies to your left as you walk around it.) Since there
are no branch points inside this contour, the function cannot change value as we traverse it.
Solution 9.24
f(z) = \left(\frac{z}{z^2 + 1}\right)^{1/3} = z^{1/3} (z - i)^{-1/3} (z + i)^{-1/3}

There are branch points at z = 0, \pm i.

f\left(\frac{1}{\zeta}\right) = \left(\frac{1/\zeta}{(1/\zeta)^2 + 1}\right)^{1/3} = \frac{\zeta^{1/3}}{(1 + \zeta^2)^{1/3}}

There is a branch point at \zeta = 0. f(z) has a branch point at infinity.
We introduce branch cuts from z = 0 to infinity on the negative real axis, from z = i to infinity on the positive
imaginary axis and from z = -i to infinity on the negative imaginary axis. As we cannot walk around any of the
branch points, this makes the function single-valued.
We define a branch by defining angles from the branch points. Let

z = r e^{i\theta}, \qquad -\pi < \theta < \pi,
z - i = s e^{i\phi}, \qquad -3\pi/2 < \phi < \pi/2,
z + i = t e^{i\psi}, \qquad -\pi/2 < \psi < 3\pi/2.
With

f(z) = z^{1/3} (z - i)^{-1/3} (z + i)^{-1/3} = \sqrt[3]{r} \, e^{i\theta/3} \, \frac{1}{\sqrt[3]{s}} \, e^{-i\phi/3} \, \frac{1}{\sqrt[3]{t}} \, e^{-i\psi/3} = \sqrt[3]{\frac{r}{st}} \, e^{i(\theta - \phi - \psi)/3}

we have an explicit formula for computing the value of the function for this branch. Now we compute f(1) to see
if we chose the correct ranges for the angles. (If not, we'll just change one of them.)

f(1) = \frac{1}{\sqrt[3]{2}} \, e^{i(0 - (-\pi/4) - \pi/4)/3} = \frac{1}{\sqrt[3]{2}}

We made the right choice for the angles. Now to compute f(1 + i).

f(1 + i) = \sqrt[3]{\frac{\sqrt{2}}{1 \cdot \sqrt{5}}} \, e^{i(\pi/4 - 0 - \text{Arctan}(2))/3} = \sqrt[6]{\frac{2}{5}} \, e^{i(\pi/4 - \text{Arctan}(2))/3}
Consider the value of the function above and below the branch cut on the negative real axis. Above the branch
cut the function is

f(x + i0) = \sqrt[3]{\frac{-x}{\sqrt{x^2 + 1}\sqrt{x^2 + 1}}} \, e^{i(\pi - \phi - \psi)/3}

Note that \phi = -\psi so that

f(x + i0) = \sqrt[3]{\frac{-x}{x^2 + 1}} \, e^{i\pi/3} = \sqrt[3]{\frac{-x}{x^2 + 1}} \, \frac{1 + i\sqrt{3}}{2}.

Below the branch cut \theta = -\pi and

f(x - i0) = \sqrt[3]{\frac{-x}{x^2 + 1}} \, e^{-i\pi/3} = \sqrt[3]{\frac{-x}{x^2 + 1}} \, \frac{1 - i\sqrt{3}}{2}.
For the branch cut along the positive imaginary axis,

f(iy + 0) = \sqrt[3]{\frac{y}{(y - 1)(y + 1)}} \, e^{i(\pi/2 - \pi/2 - \pi/2)/3} = \sqrt[3]{\frac{y}{(y - 1)(y + 1)}} \, e^{-i\pi/6} = \sqrt[3]{\frac{y}{(y - 1)(y + 1)}} \, \frac{\sqrt{3} - i}{2},

f(iy - 0) = \sqrt[3]{\frac{y}{(y - 1)(y + 1)}} \, e^{i(\pi/2 - (-3\pi/2) - \pi/2)/3} = \sqrt[3]{\frac{y}{(y - 1)(y + 1)}} \, e^{i\pi/2} = i \sqrt[3]{\frac{y}{(y - 1)(y + 1)}}.
For the branch cut along the negative imaginary axis,

f(-iy + 0) = \sqrt[3]{\frac{y}{(y + 1)(y - 1)}} \, e^{i(-\pi/2 - (-\pi/2) - (-\pi/2))/3} = \sqrt[3]{\frac{y}{(y + 1)(y - 1)}} \, e^{i\pi/6} = \sqrt[3]{\frac{y}{(y + 1)(y - 1)}} \, \frac{\sqrt{3} + i}{2},

f(-iy - 0) = \sqrt[3]{\frac{y}{(y + 1)(y - 1)}} \, e^{i(-\pi/2 - (-\pi/2) - 3\pi/2)/3} = \sqrt[3]{\frac{y}{(y + 1)(y - 1)}} \, e^{-i\pi/2} = -i \sqrt[3]{\frac{y}{(y + 1)(y - 1)}}.
Solution 9.25
First we factor the function.

f(z) = ((z - 1)(z - 2)(z - 3))^{1/2} = (z - 1)^{1/2} (z - 2)^{1/2} (z - 3)^{1/2}
There are branch points at z = 1, 2, 3. Now we examine the point at infinity.

f\left(\frac{1}{\zeta}\right) = \left(\left(\frac{1}{\zeta} - 1\right)\left(\frac{1}{\zeta} - 2\right)\left(\frac{1}{\zeta} - 3\right)\right)^{1/2} = \zeta^{-3/2} \left((1 - \zeta)(1 - 2\zeta)(1 - 3\zeta)\right)^{1/2}

Since \zeta^{-3/2} has a branch point at \zeta = 0 and the rest of the terms are analytic there, f(z) has a branch point at
infinity.
The first two sets of branch cuts in Figure 9.28 do not permit us to walk around any of the branch points,
including the point at infinity, and thus make the function single-valued. The third set of branch cuts lets us
walk around the branch points at z = 1 and z = 2 together or, if we change our perspective, we would be walking
around the branch points at z = 3 and z = \infty together. Consider a contour in this cut plane that encircles the
branch points at z = 1 and z = 2. Since the argument of (z - z_0)^{1/2} changes by \pi when we walk around z_0, the
argument of f(z) changes by 2\pi when we traverse the contour. Thus the value of the function does not change
and it is a valid set of branch cuts. Clearly the fourth set of branch cuts does not make the function single-valued
as there are contours that encircle the branch point at infinity and no other branch points. The other way to see
this is to note that the argument of f(z) changes by 3\pi as we traverse a contour that goes around the branch
points at z = 1, 2, 3 once in the positive direction.
Now to define the branch. We make the preliminary choice of angles,

z - 1 = r_1 e^{i\theta_1}, \qquad 0 < \theta_1 < 2\pi,
z - 2 = r_2 e^{i\theta_2}, \qquad 0 < \theta_2 < 2\pi,
z - 3 = r_3 e^{i\theta_3}, \qquad 0 < \theta_3 < 2\pi.

The function is

f(z) = \left(r_1 e^{i\theta_1} r_2 e^{i\theta_2} r_3 e^{i\theta_3}\right)^{1/2} = \sqrt{r_1 r_2 r_3} \, e^{i(\theta_1 + \theta_2 + \theta_3)/2}.

The value of the function at the origin is

f(0) = \sqrt{6} \, e^{i(3\pi)/2} = -i\sqrt{6},

which is not what we wanted. We will change the range of one of the angles to get the desired result.

z - 1 = r_1 e^{i\theta_1}, \qquad 0 < \theta_1 < 2\pi,
z - 2 = r_2 e^{i\theta_2}, \qquad 0 < \theta_2 < 2\pi,
z - 3 = r_3 e^{i\theta_3}, \qquad 2\pi < \theta_3 < 4\pi.

f(0) = \sqrt{6} \, e^{i(5\pi)/2} = i\sqrt{6}
Solution 9.26
w = \left((z^2 - 2)(z + 2)\right)^{1/3} = (z + \sqrt{2})^{1/3} (z - \sqrt{2})^{1/3} (z + 2)^{1/3}

There are branch points at z = \pm\sqrt{2} and z = -2. If we walk around any one of the branch points once in the
positive direction, the argument of w changes by 2\pi/3 and thus the value of the function changes by e^{i2\pi/3}. If we
walk around all three branch points then the argument of w changes by 3 \cdot 2\pi/3 = 2\pi. The value of the function
is unchanged as e^{i2\pi} = 1. Thus the branch cut on the real axis from -2 to \sqrt{2} makes the function single-valued.
Now we define a branch. Let

z - \sqrt{2} = a e^{i\alpha}, \qquad z + \sqrt{2} = b e^{i\beta}, \qquad z + 2 = c e^{i\gamma}.

We constrain the angles as follows: On the positive real axis, \alpha = \beta = \gamma. See Figure 9.40.

Figure 9.40: A branch of ((z^2 - 2)(z + 2))^{1/3}.
Now we determine w(2).

w(2) = (2 - \sqrt{2})^{1/3} (2 + \sqrt{2})^{1/3} (2 + 2)^{1/3} = \sqrt[3]{2 - \sqrt{2}} \, e^{i0} \, \sqrt[3]{2 + \sqrt{2}} \, e^{i0} \, \sqrt[3]{4} \, e^{i0} = \sqrt[3]{2} \sqrt[3]{4} = 2.

Note that we didn't have to choose the angle from each of the branch points as zero. Choosing any integer multiple
of 2\pi would give us the same result.
290
w(3) = (3

2)
1/3
(3 +

2)
1/3
(3 + 2)
1/3
=
3
_
3 +

2 e
i/3
3
_
3

2 e
i/3
3

1 e
i/3
=
3

7 e
i
=
3

7
The value of the function is
w =
3

abc e
i(++)/3
.
Consider the interval (-\sqrt{2} \ldots \sqrt{2}). As we approach the branch cut from above, the function has the value,

w = \sqrt[3]{abc} \, e^{i\pi/3} = \sqrt[3]{(\sqrt{2} - x)(x + \sqrt{2})(x + 2)} \, e^{i\pi/3}.

As we approach the branch cut from below, the function has the value,

w = \sqrt[3]{abc} \, e^{-i\pi/3} = \sqrt[3]{(\sqrt{2} - x)(x + \sqrt{2})(x + 2)} \, e^{-i\pi/3}.

Consider the interval (-2 \ldots -\sqrt{2}). As we approach the branch cut from above, the function has the value,

w = \sqrt[3]{abc} \, e^{i2\pi/3} = \sqrt[3]{(\sqrt{2} - x)(-x - \sqrt{2})(x + 2)} \, e^{i2\pi/3}.

As we approach the branch cut from below, the function has the value,

w = \sqrt[3]{abc} \, e^{-i2\pi/3} = \sqrt[3]{(\sqrt{2} - x)(-x - \sqrt{2})(x + 2)} \, e^{-i2\pi/3}.
291
-1 -0.5 0.5 1
0.5
1
1.5
2
2.5
3
Figure 9.41: The Principal Branch of the arc cosine, Arccos(x).
Solution 9.27
Arccos(x) is shown in Figure 9.41 for real variables in the range [-1, 1].
First we write arccos(z) in terms of log(z). If \cos(w) = z, then w = \arccos(z).

\cos(w) = z
\frac{e^{iw} + e^{-iw}}{2} = z
(e^{iw})^2 - 2z e^{iw} + 1 = 0
e^{iw} = z + (z^2 - 1)^{1/2}
w = -i \log(z + (z^2 - 1)^{1/2})

Thus we have

\arccos(z) = -i \log(z + (z^2 - 1)^{1/2}).
Since Arccos(0) = \frac{\pi}{2}, we must find the branch such that

-i \log(0 + (0^2 - 1)^{1/2}) = \frac{\pi}{2}
-i \log((-1)^{1/2}) = \frac{\pi}{2}.

Since

-i \log(i) = -i\left(i\frac{\pi}{2} + i2\pi n\right) = \frac{\pi}{2} + 2\pi n

and

-i \log(-i) = -i\left(-i\frac{\pi}{2} + i2\pi n\right) = -\frac{\pi}{2} + 2\pi n

we must choose the branch of the square root such that (-1)^{1/2} = i and the branch of the logarithm such that
\log(i) = i\frac{\pi}{2}.
First we construct the branch of the square root.

(z^2 - 1)^{1/2} = (z + 1)^{1/2} (z - 1)^{1/2}

We see that there are branch points at z = -1 and z = 1. In particular we want the Arccos to be defined for
z = x, x \in [-1, 1]. Hence we introduce branch cuts on the lines -\infty < x \le -1 and 1 \le x < \infty. Define the local
coordinates

z + 1 = r e^{i\theta}, \qquad z - 1 = \rho e^{i\phi}.

With the given branch cuts, the angles have the possible ranges

\theta = \ldots, (-\pi \ldots \pi), (\pi \ldots 3\pi), \ldots, \qquad \phi = \ldots, (0 \ldots 2\pi), (2\pi \ldots 4\pi), \ldots.

Now we choose ranges for \theta and \phi and see if we get the desired branch. If not, we choose a different range for one
of the angles. First we choose the ranges

\theta \in (-\pi \ldots \pi), \qquad \phi \in (0 \ldots 2\pi).

If we substitute in z = 0 we get

(0^2 - 1)^{1/2} = (1 e^{i0})^{1/2} (1 e^{i\pi})^{1/2} = e^{i0} e^{i\pi/2} = i
Figure 9.42: Branch cuts and angles for (z^2 - 1)^{1/2} (\theta = \pi, \theta = -\pi along the left cut; \phi = 0, \phi = 2\pi along the right cut).
Thus we see that this choice of angles gives us the desired branch.
Now we go back to the expression

\arccos(z) = -i \log(z + (z^2 - 1)^{1/2}).

We have already seen that there are branch points at z = -1 and z = 1 because of (z^2 - 1)^{1/2}. Now we must
determine if the logarithm introduces additional branch points. The only possibilities for branch points are where
the argument of the logarithm is zero.

z + (z^2 - 1)^{1/2} = 0
z^2 = z^2 - 1
0 = -1

We see that the argument of the logarithm is nonzero and thus there are no additional branch points. Introduce
the variable, w = z + (z^2 - 1)^{1/2}. What is the image of the branch cuts in the w plane? We parameterize the
branch cut connecting z = 1 and z = +\infty with z = r + 1, r \in [0, \infty).

w = r + 1 + ((r + 1)^2 - 1)^{1/2} = r + 1 \pm \sqrt{r(r + 2)} = r\left(1 \pm \sqrt{1 + 2/r}\right) + 1
r(1 + \sqrt{1 + 2/r}) + 1 is the interval [1, \infty); r(1 - \sqrt{1 + 2/r}) + 1 is the interval (0, 1]. Thus we see that this branch
cut is mapped to the interval (0, \infty) in the w plane. Similarly, we could show that the branch cut (-\infty, -1] in
the z plane is mapped to (-\infty, 0) in the w plane. In the w plane there is a branch cut along the real w axis from
-\infty to \infty. This cut makes the logarithm single-valued. For the branch of the square root that we chose, all the
points in the z plane get mapped to the upper half of the w plane.
With the branch cuts we have introduced so far and the chosen branch of the square root we have

\arccos(0) = -i \log(0 + (0^2 - 1)^{1/2}) = -i \log i = -i\left(i\frac{\pi}{2} + i2\pi n\right) = \frac{\pi}{2} + 2\pi n

Choosing the n = 0 branch of the logarithm will give us Arccos(z). We see that we can write

\text{Arccos}(z) = -i \, \text{Log}(z + (z^2 - 1)^{1/2}).
Solution 9.28
We consider the function f(z) = (z^{1/2} - 1)^{1/2}. First note that z^{1/2} has a branch point at z = 0. We place a branch
cut on the negative real axis to make it single-valued. f(z) will have a branch point where z^{1/2} - 1 = 0. This
occurs at z = 1 on the branch of z^{1/2} on which 1^{1/2} = 1. (1^{1/2} has the value 1 on one branch of z^{1/2} and -1 on
the other branch.) For this branch we introduce a branch cut connecting z = 1 with the point at infinity. (See
Figure 9.43.)
Figure 9.43: Branch cuts for (z^{1/2} - 1)^{1/2} (the sheet with 1^{1/2} = 1 and the sheet with 1^{1/2} = -1).

Solution 9.29
The distance between the end of rod a and the end of rod c is b. In the complex plane, these points are a e^{i\theta} and
l + c e^{i\phi}, respectively. We write this out mathematically.
\left| l + c e^{i\phi} - a e^{i\theta} \right| = b

\left(l + c e^{i\phi} - a e^{i\theta}\right) \left(l + c e^{-i\phi} - a e^{-i\theta}\right) = b^2

l^2 + cl e^{-i\phi} - al e^{-i\theta} + cl e^{i\phi} + c^2 - ac e^{i(\phi - \theta)} - al e^{i\theta} - ac e^{-i(\phi - \theta)} + a^2 = b^2

cl \cos\phi - ac \cos(\phi - \theta) - al \cos\theta = \frac{1}{2}\left(b^2 - a^2 - c^2 - l^2\right)

This equation relates the two angular positions. One could differentiate the equation to relate the velocities and
accelerations.
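The relation is easy to verify numerically; the sample lengths and angles below are hypothetical, with b computed directly from the geometry:

```python
import cmath
import math

# Hypothetical sample values for the rod lengths a, c, l and the angles.
a, c, l = 1.0, 2.0, 3.5
theta, phi = 0.7, 2.1
# b is the distance between the two rod ends.
b = abs(l + c * cmath.exp(1j * phi) - a * cmath.exp(1j * theta))

lhs = c * l * math.cos(phi) - a * c * math.cos(phi - theta) - a * l * math.cos(theta)
rhs = (b ** 2 - a ** 2 - c ** 2 - l ** 2) / 2
print(abs(lhs - rhs))  # approximately 0
```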
Solution 9.30
1. Let w = u + iv. First we do the strip: |\Re(z)| < 1. Consider the vertical line: z = c + iy, y \in \mathbb{R}. This line is
mapped to

w = 2(c + iy)^2
w = 2c^2 - 2y^2 + i4cy
u = 2c^2 - 2y^2, \qquad v = 4cy

This is a parabola that opens to the left. For the case c = 0 it is the negative u axis. We can parametrize
the curve in terms of v.

u = 2c^2 - \frac{1}{8c^2} v^2, \qquad v \in \mathbb{R}

The boundaries of the region are both mapped to the parabolas:

u = 2 - \frac{1}{8} v^2, \qquad v \in \mathbb{R}.

The image of the mapping is

\left\{ w = u + iv : v \in \mathbb{R} \text{ and } u < 2 - \frac{1}{8} v^2 \right\}.

Note that the mapping is two-to-one.
Now we do the strip 1 < \Im(z) < 2. Consider the horizontal line: z = x + ic, x \in \mathbb{R}. This line is mapped to

w = 2(x + ic)^2
w = 2x^2 - 2c^2 + i4cx
u = 2x^2 - 2c^2, \qquad v = 4cx

This is a parabola that opens to the right. We can parametrize the curve in terms of v.

u = \frac{1}{8c^2} v^2 - 2c^2, \qquad v \in \mathbb{R}

The boundary \Im(z) = 1 is mapped to

u = \frac{1}{8} v^2 - 2, \qquad v \in \mathbb{R}.

The boundary \Im(z) = 2 is mapped to

u = \frac{1}{32} v^2 - 8, \qquad v \in \mathbb{R}

The image of the mapping is

\left\{ w = u + iv : v \in \mathbb{R} \text{ and } \frac{1}{32} v^2 - 8 < u < \frac{1}{8} v^2 - 2 \right\}.
2. We write the transformation as

\frac{z + 1}{z - 1} = 1 + \frac{2}{z - 1}.

Thus we see that the transformation is the sequence:
(a) translation by -1
(b) inversion
(c) magnification by 2
(d) translation by 1
Consider the strip |\Re(z)| < 1. The translation by -1 maps this to -2 < \Re(z) < 0. Now we do the
inversion. The left edge, \Re(z) = 0, is mapped to itself. The right edge, \Re(z) = -2, is mapped to the circle
|z + 1/4| = 1/4. Thus the current image is the left half plane minus a circle:

\Re(z) < 0 \text{ and } \left| z + \frac{1}{4} \right| > \frac{1}{4}.

The magnification by 2 yields

\Re(z) < 0 \text{ and } \left| z + \frac{1}{2} \right| > \frac{1}{2}.

The final step is a translation by 1.

\Re(z) < 1 \text{ and } \left| z - \frac{1}{2} \right| > \frac{1}{2}.
Now consider the strip 1 < \Im(z) < 2. The translation by -1 does not change the domain. Now we do the
inversion. The bottom edge, \Im(z) = 1, is mapped to the circle |z + i/2| = 1/2. The top edge, \Im(z) = 2, is
mapped to the circle |z + i/4| = 1/4. Thus the current image is the region between two circles:

\left| z + \frac{i}{2} \right| < \frac{1}{2} \text{ and } \left| z + \frac{i}{4} \right| > \frac{1}{4}.

The magnification by 2 yields

|z + i| < 1 \text{ and } \left| z + \frac{i}{2} \right| > \frac{1}{2}.

The final step is a translation by 1.

|z - 1 + i| < 1 \text{ and } \left| z - 1 + \frac{i}{2} \right| > \frac{1}{2}.
Solution 9.31
1. There is a simple pole at z = -2. The function has a branch point at z = -1. Since this is the only
branch point in the finite complex plane there is also a branch point at infinity. We can verify this with the
substitution z = 1/\zeta.

f\left(\frac{1}{\zeta}\right) = \frac{(1/\zeta + 1)^{1/2}}{1/\zeta + 2} = \frac{\zeta^{1/2} (1 + \zeta)^{1/2}}{1 + 2\zeta}

Since f(1/\zeta) has a branch point at \zeta = 0, f(z) has a branch point at infinity.
2. cos z is an entire function with an essential singularity at infinity. Thus f(z) has singularities only where
1/(1 + z) has singularities. 1/(1 + z) has a first order pole at z = -1. It is analytic everywhere else,
including the point at infinity. Thus we conclude that f(z) has an essential singularity at z = -1 and is
analytic elsewhere. To explicitly show that z = -1 is an essential singularity, we can find the Laurent series
expansion of f(z) about z = -1.

\cos\left(\frac{1}{1 + z}\right) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} (z + 1)^{-2n}
3. 1 - e^z has simple zeros at z = i2\pi n, n \in \mathbb{Z}. Thus f(z) has second order poles at those points.
The point at infinity is a non-isolated singularity. To justify this: Note that

f(z) = \frac{1}{(1 - e^z)^2}

has second order poles at z = i2\pi n, n \in \mathbb{Z}. This means that f(1/\zeta) has second order poles at \zeta = \frac{1}{i2\pi n},
n \in \mathbb{Z}. These second order poles get arbitrarily close to \zeta = 0. There is no deleted neighborhood around
\zeta = 0 in which f(1/\zeta) is analytic. Thus the point \zeta = 0, (z = \infty), is a non-isolated singularity. There is no
Laurent series expansion about the point \zeta = 0, (z = \infty).
The point at infinity is neither a branch point nor a removable singularity. It is not a pole either. If it were,
there would be an n such that \lim_{z \to \infty} z^{-n} f(z) = \text{const} \ne 0. Since z^{-n} f(z) has second order poles in every
deleted neighborhood of infinity, the above limit does not exist. Thus we conclude that the point at infinity
is an essential singularity.
Solution 9.32
We write sinh z in Cartesian form.

w = \sinh z = \sinh x \cos y + i \cosh x \sin y = u + iv

Consider the line segment x = c, y \in (0 \ldots \pi). Its image is

\{\sinh c \cos y + i \cosh c \sin y \mid y \in (0 \ldots \pi)\}.

This is the parametric equation for the upper half of an ellipse. Also note that u and v satisfy the equation for
an ellipse.

\frac{u^2}{\sinh^2 c} + \frac{v^2}{\cosh^2 c} = 1

The ellipse starts at the point (\sinh(c), 0), passes through the point (0, \cosh(c)) and ends at (-\sinh(c), 0). As c
varies from zero to \infty or from zero to -\infty, the semi-ellipses cover the upper half w plane. Thus the mapping is
2-to-1.
Consider the infinite line y = c, x \in (-\infty \ldots \infty). Its image is

\{\sinh x \cos c + i \cosh x \sin c \mid x \in (-\infty \ldots \infty)\}.

This is the parametric equation for the upper half of a hyperbola. Also note that u and v satisfy the equation for
a hyperbola.

-\frac{u^2}{\cos^2 c} + \frac{v^2}{\sin^2 c} = 1

As c varies from 0 to \pi/2 or from \pi/2 to \pi, the semi-hyperbolas cover the upper half w plane. Thus the mapping
is 2-to-1.
We look for branch points of \sinh^{-1} w.

w = \sinh z
w = \frac{e^z - e^{-z}}{2}
e^{2z} - 2w e^z - 1 = 0
e^z = w + (w^2 + 1)^{1/2}
z = \log\left(w + (w - i)^{1/2} (w + i)^{1/2}\right)

There are branch points at w = \pm i. Since w + (w^2 + 1)^{1/2} is nonzero and finite in the finite complex plane, the
logarithm does not introduce any branch points in the finite plane. Thus the only branch point in the upper
half w plane is at w = i. Any branch cut that connects w = i with the boundary of \Im(w) > 0 will separate the
branches under the inverse mapping.
Consider the line y = \pi/4. The image under the mapping is the upper half of the hyperbola

-2u^2 + 2v^2 = 1.

Consider the segment x = 1. The image under the mapping is the upper half of the ellipse

\frac{u^2}{\sinh^2 1} + \frac{v^2}{\cosh^2 1} = 1.
Chapter 10
Analytic Functions
Students need encouragement. So if a student gets an answer right, tell them it was a lucky guess. That way,
they develop a good, lucky feeling.¹
- Jack Handey
¹ Quote slightly modified.

10.1 Complex Derivatives

Functions of a Real Variable. The derivative of a function of a real variable is

\frac{d}{dx} f(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}.

If the limit exists then the function is differentiable at the point x. Note that \Delta x can approach zero from above
or below. The limit cannot depend on the direction in which \Delta x vanishes.
Consider f(x) = |x|. The function is not differentiable at x = 0 since

\lim_{\Delta x \to 0^+} \frac{|0 + \Delta x| - |0|}{\Delta x} = 1

and

\lim_{\Delta x \to 0^-} \frac{|0 + \Delta x| - |0|}{\Delta x} = -1.
Analyticity. The complex derivative (or simply derivative if the context is clear) is defined,

\frac{d}{dz} f(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}.

The complex derivative exists if this limit exists. This means that the value of the limit is independent of the
manner in which \Delta z \to 0. If the complex derivative exists at a point, then we say that the function is complex
differentiable there.
A function of a complex variable is analytic at a point z_0 if the complex derivative exists in a neighborhood
about that point. The function is analytic in an open set if it has a complex derivative at each point in that set.
Note that complex differentiable has a different meaning than analytic. Analyticity refers to the behavior of a
function on an open set. A function can be complex differentiable at isolated points, but the function would not
be analytic at those points. Analytic functions are also called regular or holomorphic. If a function is analytic
everywhere in the finite complex plane, it is called entire.
Example 10.1.1 Consider z^n, n \in \mathbb{Z}^+. Is the function differentiable? Is it analytic? What is the value of the
derivative?
We determine differentiability by trying to differentiate the function. We use the limit definition of differentiation.
We will use Newton's binomial formula to expand (z + \Delta z)^n.

\frac{d}{dz} z^n = \lim_{\Delta z \to 0} \frac{(z + \Delta z)^n - z^n}{\Delta z}
= \lim_{\Delta z \to 0} \frac{\left(z^n + n z^{n-1} \Delta z + \frac{n(n-1)}{2} z^{n-2} \Delta z^2 + \cdots + \Delta z^n\right) - z^n}{\Delta z}
= \lim_{\Delta z \to 0} \left(n z^{n-1} + \frac{n(n-1)}{2} z^{n-2} \Delta z + \cdots + \Delta z^{n-1}\right)
= n z^{n-1}

The derivative exists everywhere. The function is analytic in the whole complex plane so it is entire. The value
of the derivative is

\frac{d}{dz} z^n = n z^{n-1}.
Example 10.1.2 We will show that f(z) = \bar{z} is not differentiable. The derivative is,

\frac{d}{dz} f(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}.

\frac{d}{dz} \bar{z} = \lim_{\Delta z \to 0} \frac{\overline{z + \Delta z} - \bar{z}}{\Delta z} = \lim_{\Delta z \to 0} \frac{\overline{\Delta z}}{\Delta z}

If we take \Delta z = \Delta x, the limit is

\lim_{\Delta x \to 0} \frac{\Delta x}{\Delta x} = 1.

If we take \Delta z = i\Delta y, the limit is

\lim_{\Delta y \to 0} \frac{-i\Delta y}{i\Delta y} = -1.

Since the limit depends on the way that \Delta z \to 0, the function is nowhere differentiable. Thus the function is not
analytic.
Complex Derivatives in Terms of Plane Coordinates. Let z = z(\xi, \psi) be a system of coordinates in the
complex plane. (For example, we could have Cartesian coordinates z = z(x, y) = x + iy or polar coordinates
z = z(r, \theta) = r e^{i\theta}.) Let f(z) = \phi(\xi, \psi) be a complex-valued function. (For example we might have a function
in the form \phi(x, y) = u(x, y) + iv(x, y) or \phi(r, \theta) = R(r, \theta) e^{i\Theta(r, \theta)}.) If f(z) = \phi(\xi, \psi) is analytic, its complex
derivative is equal to the derivative in any direction. In particular, it is equal to the derivatives in the coordinate
directions.

\frac{df}{dz} = \lim_{\Delta\xi \to 0, \Delta\psi = 0} \frac{f(z + \Delta z) - f(z)}{\Delta z} = \lim_{\Delta\xi \to 0} \frac{\phi(\xi + \Delta\xi, \psi) - \phi(\xi, \psi)}{\frac{\partial z}{\partial \xi} \Delta\xi} = \left(\frac{\partial z}{\partial \xi}\right)^{-1} \frac{\partial \phi}{\partial \xi},

\frac{df}{dz} = \lim_{\Delta\xi = 0, \Delta\psi \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z} = \lim_{\Delta\psi \to 0} \frac{\phi(\xi, \psi + \Delta\psi) - \phi(\xi, \psi)}{\frac{\partial z}{\partial \psi} \Delta\psi} = \left(\frac{\partial z}{\partial \psi}\right)^{-1} \frac{\partial \phi}{\partial \psi}.
Example 10.1.3 Consider the Cartesian coordinates z = x + iy. We write the complex derivative as derivatives
in the coordinate directions for f(z) = \phi(x, y).

\frac{df}{dz} = \left(\frac{\partial(x + iy)}{\partial x}\right)^{-1} \frac{\partial \phi}{\partial x} = \frac{\partial \phi}{\partial x},

\frac{df}{dz} = \left(\frac{\partial(x + iy)}{\partial y}\right)^{-1} \frac{\partial \phi}{\partial y} = -i \frac{\partial \phi}{\partial y}.

We write this in operator notation.

\frac{d}{dz} = \frac{\partial}{\partial x} = -i \frac{\partial}{\partial y}.
Example 10.1.4 In Example 10.1.1 we showed that z^n, n \in \mathbb{Z}^+, is an entire function and that \frac{d}{dz} z^n = n z^{n-1}.
Now we corroborate this by calculating the complex derivative in the Cartesian coordinate directions.

\frac{d}{dz} z^n = \frac{\partial}{\partial x} (x + iy)^n = n(x + iy)^{n-1} = n z^{n-1}

\frac{d}{dz} z^n = -i \frac{\partial}{\partial y} (x + iy)^n = -i(i) n(x + iy)^{n-1} = n z^{n-1}
Complex Derivatives are Not the Same as Partial Derivatives. Recall from calculus that

f(x, y) = g(s, t) \implies \frac{\partial f}{\partial x} = \frac{\partial g}{\partial s} \frac{\partial s}{\partial x} + \frac{\partial g}{\partial t} \frac{\partial t}{\partial x}

Do not make the mistake of using a similar formula for functions of a complex variable. If f(z) = \phi(x, y) then

\frac{df}{dz} \ne \frac{\partial \phi}{\partial x} \frac{\partial x}{\partial z} + \frac{\partial \phi}{\partial y} \frac{\partial y}{\partial z}.

This is because the \frac{d}{dz} operator means "the derivative in any direction in the complex plane." Since f(z) is
analytic, f'(z) is the same no matter in which direction we take the derivative.
Rules of Differentiation. For an analytic function defined in terms of z we can calculate the complex derivative
using all the usual rules of differentiation that we know from calculus like the product rule,

\frac{d}{dz} f(z) g(z) = f'(z) g(z) + f(z) g'(z),

or the chain rule,

\frac{d}{dz} f(g(z)) = f'(g(z)) g'(z).

This is because the complex derivative derives its properties from properties of limits, just like its real variable
counterpart.
Result 10.1.1 The complex derivative is,

\frac{d}{dz} f(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}.

The complex derivative is defined if the limit exists and is independent of the manner
in which \Delta z \to 0. A function is analytic at a point if the complex derivative exists in a
neighborhood of that point.
Let z = z(\xi, \psi) be coordinates in the complex plane. The complex derivative in the
coordinate directions is

\frac{d}{dz} = \left(\frac{\partial z}{\partial \xi}\right)^{-1} \frac{\partial}{\partial \xi} = \left(\frac{\partial z}{\partial \psi}\right)^{-1} \frac{\partial}{\partial \psi}.

In Cartesian coordinates, this is

\frac{d}{dz} = \frac{\partial}{\partial x} = -i \frac{\partial}{\partial y}.

In polar coordinates, this is

\frac{d}{dz} = e^{-i\theta} \frac{\partial}{\partial r} = -\frac{i}{r} e^{-i\theta} \frac{\partial}{\partial \theta}

Since the complex derivative is defined with the same limit formula as real derivatives,
all the rules from the calculus of functions of a real variable may be used to differentiate
functions of a complex variable.
Example 10.1.5 We have shown that z^n, n \in \mathbb{Z}^+, is an entire function. Now we corroborate that \frac{d}{dz} z^n = n z^{n-1}
by calculating the complex derivative in the polar coordinate directions.

\frac{d}{dz} z^n = e^{-i\theta} \frac{\partial}{\partial r} r^n e^{in\theta} = e^{-i\theta} n r^{n-1} e^{in\theta} = n r^{n-1} e^{i(n-1)\theta} = n z^{n-1}

\frac{d}{dz} z^n = -\frac{i}{r} e^{-i\theta} \frac{\partial}{\partial \theta} r^n e^{in\theta} = -\frac{i}{r} e^{-i\theta} r^n i n e^{in\theta} = n r^{n-1} e^{i(n-1)\theta} = n z^{n-1}
Analytic Functions can be Written in Terms of z. Consider an analytic function expressed in terms of x
and y, \phi(x, y). We can write \phi as a function of z = x + iy and \bar{z} = x - iy.

f(z, \bar{z}) = \phi\left(\frac{z + \bar{z}}{2}, \frac{z - \bar{z}}{i2}\right)

We treat z and \bar{z} as independent variables. We find the partial derivatives with respect to these variables.

\frac{\partial}{\partial z} = \frac{\partial x}{\partial z} \frac{\partial}{\partial x} + \frac{\partial y}{\partial z} \frac{\partial}{\partial y} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i \frac{\partial}{\partial y}\right)

\frac{\partial}{\partial \bar{z}} = \frac{\partial x}{\partial \bar{z}} \frac{\partial}{\partial x} + \frac{\partial y}{\partial \bar{z}} \frac{\partial}{\partial y} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i \frac{\partial}{\partial y}\right)

Since \phi is analytic, the complex derivatives in the x and y directions are equal.

\frac{\partial \phi}{\partial x} = -i \frac{\partial \phi}{\partial y}

The partial derivative of f(z, \bar{z}) with respect to \bar{z} is zero.

\frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial \phi}{\partial x} + i \frac{\partial \phi}{\partial y}\right) = 0

Thus f(z, \bar{z}) has no functional dependence on \bar{z}; it can be written as a function of z alone.
If we were considering an analytic function expressed in polar coordinates \phi(r, \theta), then we could write it in
Cartesian coordinates with the substitutions:

r = \sqrt{x^2 + y^2}, \qquad \theta = \arctan(x, y).

Thus we could write \phi(r, \theta) as a function of z alone.

Result 10.1.2 Any analytic function \phi(x, y) or \phi(r, \theta) can be written as a function of z
alone.
10.2 Cauchy-Riemann Equations
If we know that a function is analytic, then we have a convenient way of determining its complex derivative.
We just express the complex derivative in terms of the derivative in a coordinate direction. However, we dont
have a nice way of determining if a function is analytic. The denition of complex derivative in terms of a limit
is cumbersome to work with. In this section we remedy this problem.
Consider a function f(z) = (x, y). If f(z) is analytic, the complex derivative is equal to the derivatives in
the coordinate directions. We equate the derivatives in the x and y directions to obtain the Cauchy-Riemann
equations in Cartesian coordinates.

x
= i
y
(10.1)
310
This equation is a necessary condition for the analyticity of f(z).
Let (x, y) = u(x, y) + iv(x, y) where u and v are real-valued functions. We equate the real and imaginary
parts of Equation 10.1 to obtain another form for the Cauchy-Riemann equations in Cartesian coordinates.
u
x
= v
y
, u
y
= v
x
.
Note that this is a necessary and not a sucient condition for analyticity of f(z). That is, u and v may satisfy
the Cauchy-Riemann equations but f(z) may not be analytic. The Cauchy-Riemann equations give us an easy
test for determining if a function is not analytic.
Example 10.2.1 In Example 10.1.2 we showed that z is not analytic using the denition of complex dierenti-
ation. Now we obtain the same result using the Cauchy-Riemann equations.
z = x iy
u
x
= 1, v
y
= 1
We see that the rst Cauchy-Riemann equation is not satised; the function is not analytic at any point.
A sucient condition for f(z) = (x, y) to be analytic at a point z
0
= (x
0
, y
0
) is that the partial derivatives
of (x, y) exist and are continuous in some neighborhood of z
0
and satisfy the Cauchy-Riemann equations there.
If the partial derivatives of exist and are continuous then
(x + x, y + y) = (x, y) + x
x
(x, y) + y
y
(x, y) +o(x) +o(y).
Here the notation o(x) means terms smaller than x. We calculate the derivative of f(z).
f
t
(z) = lim
z0
f(z + z) f(z)
z
= lim
x,y0
(x + x, y + y) (x, y)
x +iy
= lim
x,y0
(x, y) + x
x
(x, y) + y
y
(x, y) +o(x) +o(y) (x, y)
x +iy
= lim
x,y0
x
x
(x, y) + y
y
(x, y) +o(x) +o(y)
x +iy
.
311
Here we use the Cauchy-Riemann equations.
= lim
x,y0
(x +iy)
x
(x, y)
x +iy
+ lim
x,y0
o(x) +o(y)
x +iy
.
=
x
(x, y)
Thus we see that the derivative is well dened.
Cauchy-Riemann Equations in General Coordinates Let z = (, ) be a system of coordinates in the
complex plane. Let (, ) be a function which we write in terms of these coordinates, A necessary condition for
analyticity of (, ) is that the complex derivatives in the coordinate directions exist and are equal. Equating
the derivatives in the and directions gives us the Cauchy-Riemann equations.
_

_
1

=
_

_
1

We could separate this into two equations by equating the real and imaginary parts or the modulus and argument.
312
Result 10.2.1 A necessary condition for analyticity of (, ), where z = (, ), at
z = z
0
is that the Cauchy-Riemann equations are satised in a neighborhood of z = z
0
.
_

_
1

=
_

_
1

.
(We could equate the real and imaginary parts or the modulus and argument of this to
obtain two equations.) A sucient condition for analyticity of f(z) is that the Cauchy-
Riemann equations hold and the rst partial derivatives of exist and are continuous in
a neighborhood of z = z
0
.
Below are the Cauchy-Riemann equations for various forms of f(z).
f(z) = (x, y),
x
= i
y
f(z) = u(x, y) +iv(x, y), u
x
= v
y
, u
y
= v
x
f(z) = (r, ),
r
=
i
r

f(z) = u(r, ) +iv(r, ), u


r
=
1
r
v

, u

= rv
r
f(z) = R(r, ) e
i(r,)
, R
r
=
R
r

,
1
r
R

= R
r
Example 10.2.2 Consider the Cauchy-Riemann equations for f(z) = u(r, ) + iv(r, ). From Exercise 10.2 we
know that the complex derivative in the polar coordinate directions is
d
dz
= e
i

r
=
i
r
e
i

.
313
From Result 10.2.1 we have the equation,
e
i

r
[u +iv] =
i
r
e
i

[u +iv].
We multiply by e
i
and equate the real and imaginary components to obtain the Cauchy-Riemann equations.
u
r
=
1
r
v

, u

= rv
r
Example 10.2.3 Consider the exponential function.
e
z
= (x, y) = e
x
(cos y +i sin(y))
We use the Cauchy-Riemann equations to show that the function is entire.

x
= i
y
e
x
(cos y +i sin(y)) = i e
x
(sin y +i cos(y))
e
x
(cos y +i sin(y)) = e
x
(cos y +i sin(y))
Since the function satises the Cauchy-Riemann equations and the rst partial derivatives are continuous every-
where in the nite complex plane, the exponential function is entire.
Now we nd the value of the complex derivative.
d
dz
e
z
=

x
= e
x
(cos y +i sin(y)) = e
z
The dierentiability of the exponential function implies the dierentiability of the trigonometric functions, as they
can be written in terms of the exponential.
In Exercise 10.11 you can show that the logarithm log z is dierentiable for z ,= 0. This implies the dieren-
tiability of z

and the inverse trigonometric functions as they can be written in terms of the logarithm.
314
Example 10.2.4 We compute the derivative of z
z
.
d
dz
(z
z
) =
d
dz
e
z log z
= (1 + log z) e
z log z
= (1 + log z)z
z
= z
z
+z
z
log z
10.3 Harmonic Functions
A function u is harmonic if its second partial derivatives exist, are continuous and satisfy Laplaces equation
u = 0.
2
(In Cartesian coordinates the Laplacian is u u
xx
+ u
yy
.) If f(z) = u + iv is an analytic function
then u and v are harmonic functions. To see why this is so, we start with the Cauchy-Riemann equations.
u
x
= v
y
, u
y
= v
x
We dierentiate the rst equation with respect to x and the second with respect to y. (We assume that u and v
are twice continuously dierentiable. We will see later that they are innitely dierentiable.)
u
xx
= v
xy
, u
yy
= v
yx
Thus we see that u is harmonic.
u u
xx
+u
yy
= v
xy
v
yx
= 0
One can use the same method to show that v = 0.
2
The capital Greek letter is used to denote the Laplacian, like u(x, y), and dierentials, like x.
315
If u is harmonic on some simply-connected domain, then there exists a harmonic function v such that f(z) =
u + iv is analytic in the domain. v is called the harmonic conjugate of u. The harmonic conjugate is unique up
to an additive constant. To demonstrate this, let w be another harmonic conjugate of u. Both the pair u and v
and the pair u and w satisfy the Cauchy-Riemann equations.
u
x
= v
y
, u
y
= v
x
, u
x
= w
y
, u
y
= w
x
We take the dierence of these equations.
v
x
w
x
= 0, v
y
w
y
= 0
On a simply connected domain, the dierence between v and w is thus a constant.
To prove the existence of the harmonic conjugate, we rst write v as an integral.
v(x, y) = v(x
0
, y
0
) +
_
(x,y)
(x
0
,y
0
)
v
x
dx +v
y
dy
On a simply connected domain, the integral is path independent and denes a unique v in terms of v
x
and v
y
.
We use the Cauchy-Riemann equations to write v in terms of u
x
and u
y
.
v(x, y) = v(x
0
, y
0
) +
_
(x,y)
(x
0
,y
0
)
u
y
dx +u
x
dy
Changing the starting point (x
0
, y
0
) changes v by an additive constant. The harmonic conjugate of u to within
an additive constant is
v(x, y) =
_
u
y
dx +u
x
dy.
This proves the existence
3
of the harmonic conjugate. This is not the formula one would use to construct the
harmonic conjugate of a u. One accomplishes this by solving the Cauchy-Riemann equations.
3
A mathematician returns to his oce to nd that a cigarette tossed in the trash has started a small re. Being calm and a
quick thinker he notes that there is a re extinguisher by the window. He then closes the door and walks away because the solution
exists.
316
Result 10.3.1 If f(z) = u + iv is an analytic function then u and v are harmonic
functions. That is, the Laplacians of u and v vanish u = v = 0. The Laplacian in
Cartesian and polar coordinates is
=

2
x
2
+

2
y
2
, =
1
r

r
_
r

r
_
+
1
r
2

2
.
Given a harmonic function u in a simply connected domain, there exists a harmonic
function v, (unique up to an additive constant), such that f(z) = u + iv is analytic in
the domain. One can construct v by solving the Cauchy-Riemann equations.
Example 10.3.1 Is x
2
the real part of an analytic function?
The Laplacian of x
2
is
[x
2
] = 2 + 0
x
2
is not harmonic and thus is not the real part of an analytic function.
Example 10.3.2 Show that u = e
x
(x sin y y cos y) is harmonic.
u
x
= e
x
sin y e
x
(xsin y y cos y)
= e
x
sin y xe
x
sin y +y e
x
cos y

2
u
x
2
= e
x
sin y e
x
sin y +x e
x
sin y y e
x
cos y
= 2 e
x
sin y +x e
x
sin y y e
x
cos y
317
u
y
= e
x
(x cos y cos y +y sin y)

2
u
y
2
= e
x
(xsin y + sin y +y cos y + sin y)
= xe
x
sin y + 2 e
x
sin y +y e
x
cos y
Thus we see that

2
u
x
2
+

2
u
y
2
= 0 and u is harmonic.
Example 10.3.3 Consider u = cos xcosh y. This function is harmonic.
u
xx
+u
yy
= cos xcosh y + cos xcosh y = 0
Thus it is the real part of an analytic function, f(z). We nd the harmonic conjugate, v, with the Cauchy-Riemann
equations. We integrate the rst Cauchy-Riemann equation.
v
y
= u
x
= sin xcosh y
v = sin xsinh y +a(x)
Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine
a(x).
v
x
= u
y
cos xsinh y +a
t
(x) = cos xsinh y
a
t
(x) = 0
a(x) = c
Here c is a real constant. Thus the harmonic conjugate is
v = sin xsinh y +c.
318
The analytic function is
f(z) = cos xcosh y i sin x sinh y +ic
We recognize this as
f(z) = cos z +ic.
Example 10.3.4 Here we consider an example that demonstrates the need for a simply connected domain.
Consider u = Log r in the multiply connected domain, r > 0. u is harmonic.
Log r =
1
r

r
_
r

r
Log r
_
+
1
r
2

2
Log r = 0
We solve the Cauchy-Riemann equations to try to nd the harmonic conjugate.
u
r
=
1
r
v

, u

= rv
r
v
r
= 0, v

= 1
v = +c
We are able to solve for v, but it is multi-valued. Any single-valued branch of that we choose will not be
continuous on the domain. Thus there is no harmonic conjugate of u = Log r for the domain r > 0.
If we had instead considered the simply-connected domain r > 0, [ arg(z)[ < then the harmonic conjugate
would be v = Arg (z) +c. The corresponding analytic function is f(z) = Log z +ic.
Example 10.3.5 Consider u = x
3
3xy
2
+x. This function is harmonic.
u
xx
+u
yy
= 6x 6x = 0
319
Thus it is the real part of an analytic function, f(z). We nd the harmonic conjugate, v, with the Cauchy-Riemann
equations. We integrate the rst Cauchy-Riemann equation.
v
y
= u
x
= 3x
2
3y
2
+ 1
v = 3x
2
y y
3
+y +a(x)
Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine
a(x).
v
x
= u
y
6xy +a
t
(x) = 6xy
a
t
(x) = 0
a(x) = c
Here c is a real constant. The harmonic conjugate is
v = 3x
2
y y
3
+y +c.
The analytic function is
f(z) = x
3
3xy
2
+x +i(3x
2
y y
3
+y) +ic
f(z) = x
3
+i3x
2
y 3xy
2
iy
2
+x +iy +ic
f(z) = z
3
+z +ic
10.4 Singularities
Any point at which a function is not analytic is called a singularity. In this section we will classify the dierent
avors of singularities.
Result 10.4.1 Singularities. If a function is not analytic at a point, then that point
is a singular point or a singularity of the function.
320
10.4.1 Categorization of Singularities
Branch Points. If f(z) has a branch point at z
0
, then we cannot dene a branch of f(z) that is continuous
in a neighborhood of z
0
. Continuity is necessary for analyticity. Thus all branch points are singularities. Since
function are discontinuous across branch cuts, all points on a branch cut are singularities.
Example 10.4.1 Consider f(z) = z
3/2
. The origin and innity are branch points and are thus singularities of
f(z). We choose the branch g(z) =

z
3
. All the points on the negative real axis, including the origin, are
singularities of g(z).
Removable Singularities.
Example 10.4.2 Consider
f(z) =
sin z
z
.
This function is undened at z = 0 because f(0) is the indeterminate form 0/0. f(z) is analytic everywhere in
the nite complex plane except z = 0. Note that the limit as z 0 of f(z) exists.
lim
z0
sin z
z
= lim
z0
cos z
1
= 1
If we were to ll in the hole in the denition of f(z), we could make it dierentiable at z = 0. Consider the
function
g(z) =
_
sin z
z
z ,= 0,
1 z = 0.
321
We calculate the derivative at z = 0 to verify that g(z) is analytic there.
f
t
(0) = lim
z0
f(0) f(z)
z
= lim
z0
1 sin(z)/z
z
= lim
z0
z sin(z)
z
2
= lim
z0
1 cos(z)
2z
= lim
z0
sin(z)
2
= 0
We call the point at z = 0 a removable singularity of sin(z)/z because we can remove the singularity by dening
the value of the function to be its limiting value there.
Consider a function f(z) that is analytic in a deleted neighborhood of z = z
0
. If f(z) is not analytic at z
0
,
but lim
zz
0
f(z) exists, then the function has a removable singularity at z
0
. The function
g(z) =
_
f(z) z ,= z
0
lim
zz
0
f(z) z = z
0
is analytic in a neighborhood of z = z
0
. We show this by calculating g
t
(z
0
).
g
t
(z
0
) = lim
zz
0
g(z
0
) g(z)
z
0
z
= lim
zz
0
g
t
(z)
1
= lim
zz
0
f
t
(z)
This limit exists because f(z) is analytic in a deleted neighborhood of z = z
0
.
322
Poles. If a function f(z) behaves like c/(z z
0
)
n
near z = z
0
then the function has an n
th
order pole at that
point. More mathematically we say
lim
zz
0
(z z
0
)
n
f(z) = c ,= 0.
We require the constant c to be nonzero so we know that it is not a pole of lower order. We can denote a removable
singularity as a pole of order zero.
Another way to say that a function has an n
th
order pole is that f(z) is not analytic at z = z
0
, but (zz
0
)
n
f(z)
is either analytic or has a removable singularity at that point.
Example 10.4.3 1/ sin(z
2
) has a second order pole at z = 0 and rst order poles at z = (n)
1/2
, n Z

.
lim
z0
z
2
sin(z
2
)
= lim
z0
2z
2z cos(z
2
)
= lim
z0
2
2 cos(z
2
) 4z
2
sin(z
2
)
= 1
lim
z(n)
1/2
z (n)
1/2
sin(z
2
)
= lim
z(n)
1/2
1
2z cos(z
2
)
=
1
2(n)
1/2
(1)
n
Example 10.4.4 e
1/z
is singular at z = 0. The function is not analytic as lim
z0
e
1/z
does not exist. We check
if the function has a pole of order n at z = 0.
lim
z0
z
n
e
1/z
= lim

n
= lim

n!
323
Since the limit does not exist for any value of n, the singularity is not a pole. We could say that e
1/z
is more
singular than any power of 1/z.
Essential Singularities. If a function f(z) is singular at z = z
0
, but the singularity is not a branch point, or
a pole, the the point is an essential singularity of the function.
The point at innity. We can consider the point at innity z by making the change of variables z = 1/
and considering 0. If f(1/) is analytic at = 0 then f(z) is analytic at innity. We have encountered
branch points at innity before (Section 9.6). Assume that f(z) is not analytic at innity. If lim
z
f(z) exists
then f(z) has a removable singularity at innity. If lim
z
f(z)/z
n
= c ,= 0 then f(z) has an n
th
order pole at
innity.
Result 10.4.2 Categorization of Singularities. Consider a function f(z) that has a
singularity at the point z = z
0
. Singularities come in four avors:
Branch Points. Branch points of multi-valued functions are singularities.
Removable Singularities. If lim
zz
0
f(z) exists, then z
0
is a removable singularity. It
is thus named because the singularity could be removed and thus the function made
analytic at z
0
by redening the value of f(z
0
).
Poles. If lim
zz
0
(z z
0
)
n
f(z) = const ,= 0 then f(z) has an n
th
order pole at z
0
.
Essential Singularities. Instead of dening what an essential singularity is, we say
what it is not. If z
0
neither a branch point, a removable singularity nor a pole, it is
an essential singularity.
324
A pole may be called a non-essential singularity. This is because multiplying the function by an integral power
of z z
0
will make the function analytic. Then an essential singularity is a point z
0
such that there does not exist
an n such that (z z
0
)
n
f(z) is analytic there.
10.4.2 Isolated and Non-Isolated Singularities
Result 10.4.3 Isolated and Non-Isolated Singularities. Suppose f(z) has a singu-
larity at z
0
. If there exists a deleted neighborhood of z
0
containing no singularities then
the point is an isolated singularity. Otherwise it is a non-isolated singularity.
If you dont like the abstract notion of a deleted neighborhood, you can work with a deleted circular neighbor-
hood. However, this will require the introduction of more math symbols and a Greek letter. z = z
0
is an isolated
singularity if there exists a > 0 such that there are no singularities in 0 < [z z
0
[ < .
Example 10.4.5 We classify the singularities of f(z) = z/ sin z.
z has a simple zero at z = 0. sin z has simple zeros at z = n. Thus f(z) has a removable singularity at z = 0
and has rst order poles at z = n for n Z

. We can corroborate this by taking limits.


lim
z0
f(z) = lim
z0
z
sin z
= lim
z0
1
cos z
= 1
lim
zn
(z n)f(z) = lim
zn
(z n)z
sin z
= lim
zn
2z n
cos z
=
n
(1)
n
,= 0
325
Now to examine the behavior at innity. There is no neighborhood of innity that does not contain rst order
poles of f(z). (Another way of saying this is that there does not exist an R such that there are no singularities
in R < [z[ < .) Thus z = is a non-isolated singularity.
We could also determine this by setting = 1/z and examining the point = 0. f(1/) has rst order poles
at = 1/(n) for n Z0. These rst order poles come arbitrarily close to the point = 0 There is no deleted
neighborhood of = 0 which does not contain singularities. Thus = 0, and hence z = is a non-isolated
singularity.
The point at innity is an essential singularity. It is certainly not a branch point or a removable singularity.
It is not a pole, because there is no n such that lim
z
z
n
f(z) = const ,= 0. z
n
f(z) has rst order poles in any
neighborhood of innity, so this limit does not exist.
326
10.5 Exercises
Complex Derivatives
Exercise 10.1
Show that if f(z) is analytic and (x, y) = f(z) is twice continuously dierentiable then f
t
(z) is analytic.
Exercise 10.2
Find the complex derivative in the coordinate directions for f(z) = (r, ).
Hint, Solution
Exercise 10.3
Show that the following functions are nowhere analytic by checking where the derivative with respect to z exists.
1. sin xcosh y i cos x sinh y
2. x
2
y
2
+x +i(2xy y)
Hint, Solution
Exercise 10.4
f(z) is analytic for all z, ([z[ < ). f(z
1
+ z
2
) = f(z
1
)f(z
2
) for all z
1
and z
2
. (This is known as a functional
equation). Prove that f(z) = exp(f
t
(0)z).
Hint, Solution
Cauchy-Riemann Equations
Exercise 10.5
Find the Cauchy-Riemann equations for
f(z) = R(r, ) e
i(r,)
.
327
Hint, Solution
Exercise 10.6
Let
f(z) =
_
x
4/3
y
5/3
+ix
5/3
y
4/3
x
2
+y
2
for z ,= 0,
0 for z = 0.
Show that the Cauchy-Riemann equations hold at z = 0, but that f is not dierentiable at this point.
Hint, Solution
Exercise 10.7
Consider the complex function
f(z) = u +iv =
_
x
3
(1+i)y
3
(1i)
x
2
+y
2
for z ,= 0,
0 for z = 0.
Show that the partial derivatives of u and v with respect to x and y exist at z = 0 and that u
x
= v
y
and u
y
= v
x
there: the Cauchy-Riemann equations are satised at z = 0. On the other hand, show that
lim
z0
f(z)
z
does not exist, that is, f is not complex-dierentiable at z = 0.
Hint, Solution
Exercise 10.8
Show that the function
f(z) =
_
e
z
4
for z ,= 0,
0 for z = 0.
328
satises the Cauchy-Riemann equations everywhere, including at z = 0, but f(z) is not analytic at the origin.
Hint, Solution
Exercise 10.9
1. Show that e
z
is not analytic.
2. f(z) is an analytic function of z. Show that f(z) = f(z) is also an analytic function of z.
Hint, Solution
Exercise 10.10
1. Determine all points z = x +iy where the following functions are dierentiable with respect to z:
(i) x
3
+y
3
(ii)
x 1
(x 1)
2
+y
2
i
y
(x 1)
2
+y
2
2. Determine all points z where the functions in part (a) are analytic.
3. Determine which of the following functions v(x, y) are the imaginary part of an analytic function u(x, y) +
iv(x, y). For those that are, compute the real part u(x, y) and re-express the answer as an explicit function
of z = x +iy:
(i) x
2
y
2
(ii) 3x
2
y
Hint, Solution
Exercise 10.11
Show that the logarithm log z is dierentiable for z ,= 0. Find the derivative of the logarithm.
Hint, Solution
329
Exercise 10.12
Show that the Cauchy-Riemann equations for the analytic function f(z) = u(r, ) +iv(r, ) are
u
r
= v

/r, u

= rv
r
.
Hint, Solution
Exercise 10.13
w = u + iv is an analytic function of z. (x, y) is an arbitrary smooth function of x and y. When expressed in
terms of u and v, (x, y) = (u, v). Show that (w
t
,= 0)

u
i

v
=
_
dw
dz
_
1
_

x
i

y
_
.
Deduce

u
2
+

2

v
2
=

dw
dz

2
_

x
2
+

2

y
2
_
.
Hint, Solution
Exercise 10.14
Show that the functions dened by f(z) = log [z[ + i arg(z) and f(z) =
_
[z[ e
i arg(z)/2
are analytic in the sector
[z[ > 0, [ arg(z)[ < . What are the corresponding derivatives df/dz?
Hint, Solution
Exercise 10.15
Show that the following functions are harmonic. For each one of them nd its harmonic conjugate and form the
corresponding holomorphic function.
1. u(x, y) = x Log (r) y arctan(x, y) (r ,= 0)
330
2. u(x, y) = arg(z) ([ arg(z)[ < , r ,= 0)
3. u(x, y) = r
n
cos(n)
4. u(x, y) = y/r
2
(r ,= 0)
Hint, Solution
331
10.6 Hints
Complex Derivatives
Hint 10.1
Start with the Cauchy-Riemann equation and then dierentiate with respect to x.
Hint 10.2
Read Example 10.1.3 and use Result 10.1.1.
Hint 10.3
Use Result 10.1.1.
Hint 10.4
Take the logarithm of the equation to get a linear equation.
Cauchy-Riemann Equations
Hint 10.5
Use the result of Exercise 10.2.
Hint 10.6
To evaluate u
x
(0, 0), etc. use the denition of dierentiation. Try to nd f
t
(z) with the denition of complex
dierentiation. Consider z = r e
i
.
Hint 10.7
To evaluate u
x
(0, 0), etc. use the denition of dierentiation. Try to nd f
t
(z) with the denition of complex
dierentiation. Consider z = r e
i
.
332
Hint 10.8
Hint 10.9
Use the Cauchy-Riemann equations.
Hint 10.10
Hint 10.11
Hint 10.12
Hint 10.13
Hint 10.14
Hint 10.15
333
10.7 Solutions
Complex Derivatives
Solution 10.1
We start with the Cauchy-Riemann equation and then dierentiate with respect to x.

x
= i
y

xx
= i
yx
We interchange the order of dierentiation.
(
x
)
x
= i(
x
)
y
(f
t
)
x
= i(f
t
)
y
Since f
t
(z) satises the Cauchy-Riemann equation and its partial derivatives exist and are continuous, it is
analytic.
Solution 10.2
The complex derivative in the coordinate directions is
df
dz
=
_
r e
i
r
_
1

r
= e
i

r
,
df
dz
=
_
r e
i

_
1

=
i
r
e
i

.
We write this in operator notation.
d
dz
= e
i

r
=
i
r
e
i

334
Solution 10.3
1. Consider f(x, y) = sin xcosh y i cos x sinh y. The derivatives in the x and y directions are
f
x
= cos xcosh y +i sin xsinh y
i
f
y
= cos x cosh y i sin x sinh y
These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two
equations.
cos x cosh y = cos x cosh y, sin xsinh y = sin xsinh y
cos x cosh y = 0, sin x sinh y = 0
_
x =

2
+n
_
and (x = m or y = 0)
The function may be dierentiable only at the points
x =

2
+n, y = 0.
Thus the function is nowhere analytic.
2. Consider f(x, y) = x
2
y
2
+x +i(2xy y). The derivatives in the x and y directions are
f
x
= 2x + 1 +i2y
i
f
y
= i2y + 2x 1
These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two
equations.
2x + 1 = 2x 1, 2y = 2y.
Since this set of equations has no solutions, there are no points at which the function is dierentiable. The
function is nowhere analytic.
335
Solution 10.4
f(z
1
+z
2
) = f(z
1
)f(z
2
)
log(f(z
1
+z
2
)) = log(f(z
1
)) + log(f(z
2
))
We dene g(z) = log(f(z)).
g(z
1
+z
2
) = g(z
1
) +g(z
2
)
This is a linear equation which has exactly the solutions:
g(z) = cz.
Thus f(z) has the solutions:
f(z) = e
cz
,
where c is any complex constant. We can write this constant in terms of f
t
(0). We dierentiate the original
equation with respect to z
1
and then substitute z
1
= 0.
f
t
(z
1
+z
2
) = f
t
(z
1
)f(z
2
)
f
t
(z
2
) = f
t
(0)f(z
2
)
f
t
(z) = f
t
(0)f(z)
We substitute in the form of the solution.
c e
cz
= f
t
(0) e
cz
c = f
t
(0)
Thus we see that
f(z) = e
f

(0)z
.
336
Cauchy-Riemann Equations
Solution 10.5
We nd the Cauchy-Riemann equations for
f(z) = R(r, ) e
i(r,)
.
From Exercise 10.2 we know that the complex derivative in the polar coordinate directions is
d
dz
= e
i

r
=
i
r
e
i

.
We equate the derivatives in the two directions.
e
i

r
_
Re
i

=
i
r
e
i

_
Re
i

(R
r
+iR
r
) e
i
=
i
r
(R

+iR

) e
i
We divide by e
i
and equate the real and imaginary components to obtain the Cauchy-Riemann equations.
R
r
=
R
r

,
1
r
R

= R
r
Solution 10.6
u =
_
x
4/3
y
5/3
x
2
+y
2
if z ,= 0,
0 if z = 0.
, v =
_
x
5/3
y
4/3
x
2
+y
2
if z ,= 0,
0 if z = 0.
The Cauchy-Riemann equations are
u
x
= v
y
, u
y
= v
x
.
337
The partial derivatives of u and v at the point x = y = 0 are,
u
x
(0, 0) = lim
x0
u(x, 0) u(0, 0)
x
= lim
x0
0 0
x
= 0,
v
x
(0, 0) = lim
x0
v(x, 0) v(0, 0)
x
= lim
x0
0 0
x
= 0,
u
y
(0, 0) = lim
y0
u(0, y) u(0, 0)
y
= lim
y0
0 0
y
= 0,
v
y
(0, 0) = lim
y0
v(0, y) v(0, 0)
y
= lim
y0
0 0
y
= 0.
Since u
x
(0, 0) = u
y
(0, 0) = v
x
(0, 0) = v
y
(0, 0) = 0 the Cauchy-Riemann equations are satised.
338
f(z) is not analytic at the point z = 0. We show this by calculating the derivative.
f
t
(0) = lim
z0
f(z) f(0)
z
= lim
z0
f(z)
z
Let z = r e
i
, that is, we approach the origin at an angle of . Then x = r cos and y = r sin .
f
t
(0) = lim
r0
f(r e
i
)
r e
i
= lim
r0
r
4/3
cos
4/3
r
5/3
sin
5/3
+ir
5/3
cos
5/3
r
4/3
sin
4/3

r
2
r e
i
= lim
r0
cos
4/3
sin
5/3
+i cos
5/3
sin
4/3

e
i
The value of the limit depends on and is not a constant. Thus this limit does not exist. The function is not
dierentiable at z = 0.
Solution 10.7
u =
_
x
3
y
3
x
2
+y
2
for z ,= 0,
0 for z = 0.
, v =
_
x
3
+y
3
x
2
+y
2
for z ,= 0,
0 for z = 0.
The Cauchy-Riemann equations are
u
x
= v
y
, u
y
= v
x
.
The partial derivatives of u and v at the point x = y = 0 are,
u
x
(0, 0) = lim
x0
u(x, 0) u(0, 0)
x
= lim
x0
x 0
x
= 1,
339
v
x
(0, 0) = lim
x0
v(x, 0) v(0, 0)
x
= lim
x0
x 0
x
= 1,
u
y
(0, 0) = lim
y0
u(0, y) u(0, 0)
y
= lim
y0
y 0
y
= 1,
v
y
(0, 0) = lim
y0
v(0, y) v(0, 0)
y
= lim
y0
y 0
y
= 1.
We see that the Cauchy-Riemann equations are satised at x = y = 0
f(z) is not analytic at the point z = 0. We show this by calculating the derivative.
f
t
(0) = lim
z0
f(z) f(0)
z
= lim
z0
f(z)
z
Let z = r e
i
, that is, we approach the origin at an angle of . Then x = r cos and y = r sin .
f
t
(0) = lim
r0
f(r e
i
)
r e
i
= lim
r0
r
3
cos
3
(1+i)r
3
sin
3
(1i)
r
2
r e
i
= lim
r0
cos
3
(1 +i) sin
3
(1 i)
e
i
340
The value of the limit depends on and is not a constant. Thus this limit does not exist. The function is not
dierentiable at z = 0. Recall that satisfying the Cauchy-Riemann equations is a necessary, but not a sucient
condition for dierentiability.
Solution 10.8
First we verify that the Cauchy-Riemann equations are satised for z ,= 0. Note that the form
f
x
= if
y
will be far more convenient than the form
u
x
= v
y
, u
y
= v
x
for this problem.
f
x
= 4(x +iy)
5
e
(x+iy)
4
if
y
= i4(x +iy)
5
i e
(x+iy)
4
= 4(x +iy)
5
e
(x+iy)
4
The Cauchy-Riemann equations are satised for z ,= 0.
Now we consider the point z = 0.
f
x
(0, 0) = lim
x0
f(x, 0) f(0, 0)
x
= lim
x0
e
x
4
x
= 0
if
y
(0, 0) = i lim
y0
f(0, y) f(0, 0)
y
= i lim
y0
e
y
4
y
= 0
341
The Cauchy-Riemann equations are satised for z = 0.
f(z) is not analytic at the point z = 0. We show this by calculating the derivative.
f
t
(0) = lim
z0
f(z) f(0)
z
= lim
z0
f(z)
z
Let z = r e
i
, that is, we approach the origin at an angle of .
f
t
(0) = lim
r0
f(r e
i
)
r e
i
= lim
r0
e
r
4
e
i4
r e
i
For most values of the limit does not exist. Consider = /4.
f
t
(0) = lim
r0
e
r
4
r e
i/4
=
Because the limit does not exist, the function is not dierentiable at z = 0. Recall that satisfying the Cauchy-
Riemann equations is a necessary, but not a sucient condition for dierentiability.
Solution 10.9
1. A necessary condition for analyticity in an open set is that the Cauchy-Riemann equations are satised in
that set. We write e
z
in Cartesian form.
e
z
= e
xiy
= e
x
cos y i e
x
sin y.
Now we determine where u = e
x
cos y and v = e
x
sin y satisfy the Cauchy-Riemann equations.
u
x
= v
y
, u
y
= v
x
e
x
cos y = e
x
cos y, e
x
sin y = e
x
sin y
cos y = 0, sin y = 0
y =

2
+m, y = n
Thus we see that the Cauchy-Riemann equations are not satised anywhere. e
z
is nowhere analytic.
342
2. Since f(z) = u + iv is analytic, u and v satisfy the Cauchy-Riemann equations and their rst partial
derivatives are continuous.
f(z) = f(z) = u(x, y) +iv(x, y) = u(x, y) iv(x, y)
We dene f(z) (x, y)+i(x, y) = u(x, y)iv(x, y). Now we see if and satisfy the Cauchy-Riemann
equations.

x
=
y
,
y
=
x
(u(x, y))
x
= (v(x, y))
y
, (u(x, y))
y
= (v(x, y))
x
u
x
(x, y) = v
y
(x, y), u
y
(x, y) = v
x
(x, y)
u
x
= v
y
, u
y
= v
x
Thus we see that the Cauchy-Riemann equations for and are satised if and only if the Cauchy-Riemann
equations for u and v are satised. The continuity of the rst partial derivatives of u and v implies the same
of and . Thus f(z) is analytic.
Solution 10.10
1. The necessary condition for a function f(z) = u + iv to be dierentiable at a point is that the Cauchy-
Riemann equations hold and the rst partial derivatives of u and v are continuous at that point.
(a)
f(z) = x
3
+y
3
+i0
The Cauchy-Riemann equations are
u
x
= v
y
and u
y
= v
x
3x
2
= 0 and 3y
2
= 0
x = 0 and y = 0
The rst partial derivatives are continuous. Thus we see that the function is dierentiable only at the
point z = 0.
343
(b)
f(z) =
x 1
(x 1)
2
+y
2
i
y
(x 1)
2
+y
2
The Cauchy-Riemann equations are
u
x
= v
y
and u
y
= v
x
(x 1)
2
+y
2
((x 1)
2
+y
2
)
2
=
(x 1)
2
+y
2
((x 1)
2
+y
2
)
2
and
2(x 1)y
((x 1)
2
+y
2
)
2
=
2(x 1)y
((x 1)
2
+y
2
)
2
The Cauchy-Riemann equations are each identities. The rst partial derivatives are continuous every-
where except the point x = 1, y = 0. Thus the function is dierentiable everywhere except z = 1.
2. (a) The function is not dierentiable in any open set. Thus the function is nowhere analytic.
(b) The function is dierentiable everywhere except z = 1. Thus the function is analytic everywhere except
z = 1.
3. (a) First we determine if the function is harmonic.
v = x
2
y
2
v
xx
+v
yy
= 0
2 2 = 0
The function is harmonic in the complex plane and this is the imaginary part of some analytic function.
By inspection, we see that this function is
iz
2
+c = 2xy +c +i(x
2
y
2
),
where c is a real constant. We can also nd the function by solving the Cauchy-Riemann equations.
u
x
= v
y
and u
y
= v
x
u
x
= 2y and u
y
= 2x
344
We integrate the rst equation.
u = 2xy +g(y)
Here g(y) is a function of integration. We substitute this into the second Cauchy-Riemann equation
to determine g(y).
u
y
= 2x
2x +g
t
(y) = 2x
g
t
(y) = 0
g(y) = c
u = 2xy +c
f(z) = 2xy +c +i(x
2
y
2
)
f(z) = iz
2
+c
(b) First we determine if the function is harmonic.
v = 3x
2
y
v
xx
+v
yy
= 6y
The function is not harmonic. It is not the imaginary part of some analytic function.
Solution 10.11
We show that the logarithm log z = (r, ) = Log r +i satises the Cauchy-Riemann equations.

r
=
i
r

1
r
=
i
r
i
1
r
=
1
r
345
Since the logarithm satises the Cauchy-Riemann equations and the rst partial derivatives are continuous for
z ,= 0, the logarithm is analytic for z ,= 0.
Now we compute the derivative.
d/dz log z = e^{-iθ} ∂/∂r (Log r + iθ) = e^{-iθ} (1/r) = 1/z
Solution 10.12
The complex derivative in the coordinate directions is
d/dz = e^{-iθ} ∂/∂r = -(i/r) e^{-iθ} ∂/∂θ.
We substitute f = u + iv into this identity to obtain the Cauchy-Riemann equation in polar coordinates.
e^{-iθ} ∂f/∂r = -(i/r) e^{-iθ} ∂f/∂θ
∂f/∂r = -(i/r) ∂f/∂θ
u_r + i v_r = -(i/r)(u_θ + i v_θ)
We equate the real and imaginary parts.
u_r = (1/r) v_θ,    v_r = -(1/r) u_θ
u_r = (1/r) v_θ,    u_θ = -r v_r
Solution 10.13
Since w is analytic, u and v satisfy the Cauchy-Riemann equations,
u_x = v_y and u_y = -v_x.
Using the chain rule we can write the derivatives with respect to x and y in terms of u and v.
∂/∂x = u_x ∂/∂u + v_x ∂/∂v
∂/∂y = u_y ∂/∂u + v_y ∂/∂v
Now we examine ∂/∂x - i ∂/∂y.
∂/∂x - i ∂/∂y = u_x ∂/∂u + v_x ∂/∂v - i(u_y ∂/∂u + v_y ∂/∂v)
∂/∂x - i ∂/∂y = (u_x - i u_y) ∂/∂u + (v_x - i v_y) ∂/∂v
∂/∂x - i ∂/∂y = (u_x - i u_y) ∂/∂u - i(v_y + i v_x) ∂/∂v
We use the Cauchy-Riemann equations to write u_y and v_y in terms of u_x and v_x.
∂/∂x - i ∂/∂y = (u_x + i v_x) ∂/∂u - i(u_x + i v_x) ∂/∂v
Recall that w' = u_x + i v_x = v_y - i u_y.
∂/∂x - i ∂/∂y = (dw/dz)(∂/∂u - i ∂/∂v)
Thus we see that, in operator notation,
∂/∂u - i ∂/∂v = (dw/dz)^{-1} (∂/∂x - i ∂/∂y).
The complex conjugate of this relation is
∂/∂u + i ∂/∂v = \overline{(dw/dz)^{-1}} (∂/∂x + i ∂/∂y)
Now we apply both of these operators to Φ.
(∂/∂u + i ∂/∂v)(∂/∂u - i ∂/∂v) Φ = \overline{(dw/dz)^{-1}} (∂/∂x + i ∂/∂y) (dw/dz)^{-1} (∂/∂x - i ∂/∂y) Φ
(∂²/∂u² + i ∂²/∂u∂v - i ∂²/∂v∂u + ∂²/∂v²) Φ
= \overline{(dw/dz)^{-1}} [ ((∂/∂x + i ∂/∂y)(dw/dz)^{-1}) (∂/∂x - i ∂/∂y) + (dw/dz)^{-1} (∂/∂x + i ∂/∂y)(∂/∂x - i ∂/∂y) ] Φ
(w')^{-1} is an analytic function. Recall that for analytic functions f, f' = f_x = -i f_y, so that f_x + i f_y = 0. Thus the first term in the brackets vanishes and we are left with
∂²Φ/∂u² + ∂²Φ/∂v² = \overline{(dw/dz)^{-1}} (dw/dz)^{-1} (∂²/∂x² + ∂²/∂y²) Φ
∂²Φ/∂u² + ∂²Φ/∂v² = |dw/dz|^{-2} (∂²Φ/∂x² + ∂²Φ/∂y²)
Solution 10.14
1. We consider
f(z) = log|z| + i arg(z) = log r + iθ.
The Cauchy-Riemann equations in polar coordinates are
u_r = (1/r) v_θ,    u_θ = -r v_r.
We calculate the derivatives.
u_r = 1/r,    (1/r) v_θ = 1/r
u_θ = 0,    -r v_r = 0
Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, |arg(z)| < π. The complex derivative in terms of polar coordinates is
d/dz = e^{-iθ} ∂/∂r = -(i/r) e^{-iθ} ∂/∂θ.
We use this to differentiate f(z).
df/dz = e^{-iθ} ∂/∂r [log r + iθ] = e^{-iθ} (1/r) = 1/z
2. Next we consider
f(z) = √|z| e^{i arg(z)/2} = √r e^{iθ/2}.
The Cauchy-Riemann equations for polar coordinates and the polar form f(z) = R(r, θ) e^{iΘ(r, θ)} are
R_r = (R/r) Θ_θ,    (1/r) R_θ = -R Θ_r.
We calculate the derivatives for R = √r, Θ = θ/2.
R_r = 1/(2√r),    (R/r) Θ_θ = 1/(2√r)
(1/r) R_θ = 0,    -R Θ_r = 0
Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, |arg(z)| < π. The complex derivative in terms of polar coordinates is
d/dz = e^{-iθ} ∂/∂r = -(i/r) e^{-iθ} ∂/∂θ.
We use this to differentiate f(z).
df/dz = e^{-iθ} ∂/∂r [√r e^{iθ/2}] = e^{-iθ} e^{iθ/2}/(2√r) = 1/(2 e^{iθ/2} √r) = 1/(2√z)
Solution 10.15
1. We consider the function
u = x Log r - y arctan(x, y) = r cos θ Log r - r θ sin θ
We compute the Laplacian.
Δu = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
= (1/r) ∂/∂r (cos θ (r + r Log r) - r θ sin θ) + (1/r²) (r(θ sin θ - 2 cos θ) - r cos θ Log r)
= (1/r)(2 cos θ + cos θ Log r - θ sin θ) + (1/r)(θ sin θ - 2 cos θ - cos θ Log r)
= 0
The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.
v_r = -(1/r) u_θ,    v_θ = r u_r
v_r = sin θ (1 + Log r) + θ cos θ,    v_θ = r(cos θ (1 + Log r) - θ sin θ)
We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).
v = r(sin θ Log r + θ cos θ) + g(θ)
We differentiate this expression with respect to θ.
v_θ = r(cos θ (1 + Log r) - θ sin θ) + g'(θ)
We compare this to the second Cauchy-Riemann equation to see that g'(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.
v = r(sin θ Log r + θ cos θ) + c
The corresponding analytic function is
f(z) = r cos θ Log r - r θ sin θ + i(r sin θ Log r + r θ cos θ + c).
On the positive real axis, (θ = 0), the function has the value
f(z = r) = r Log r + ic.
We use analytic continuation to determine the function in the complex plane.
f(z) = z log z + ic
2. We consider the function
u = Arg(z) = θ.
We compute the Laplacian.
Δu = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² = 0
The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.
v_r = -(1/r) u_θ,    v_θ = r u_r
v_r = -1/r,    v_θ = 0
We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).
v = -Log r + g(θ)
We differentiate this expression with respect to θ.
v_θ = g'(θ)
We compare this to the second Cauchy-Riemann equation to see that g'(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.
v = -Log r + c
The corresponding analytic function is
f(z) = θ - i Log r + ic
On the positive real axis, (θ = 0), the function has the value
f(z = r) = -i Log r + ic
We use analytic continuation to determine the function in the complex plane.
f(z) = -i log z + ic
3. We consider the function
u = r^n cos(nθ)
We compute the Laplacian.
Δu = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
= (1/r) ∂/∂r (n r^n cos(nθ)) - n² r^{n-2} cos(nθ)
= n² r^{n-2} cos(nθ) - n² r^{n-2} cos(nθ)
= 0
The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.
v_r = -(1/r) u_θ,    v_θ = r u_r
v_r = n r^{n-1} sin(nθ),    v_θ = n r^n cos(nθ)
We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).
v = r^n sin(nθ) + g(θ)
We differentiate this expression with respect to θ.
v_θ = n r^n cos(nθ) + g'(θ)
We compare this to the second Cauchy-Riemann equation to see that g'(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.
v = r^n sin(nθ) + c
The corresponding analytic function is
f(z) = r^n cos(nθ) + i r^n sin(nθ) + ic
On the positive real axis, (θ = 0), the function has the value
f(z = r) = r^n + ic
We use analytic continuation to determine the function in the complex plane.
f(z) = z^n + ic
4. We consider the function
u = y/r² = sin θ / r
We compute the Laplacian.
Δu = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
= (1/r) ∂/∂r (-sin θ / r) - sin θ / r³
= sin θ / r³ - sin θ / r³
= 0
The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.
v_r = -(1/r) u_θ,    v_θ = r u_r
v_r = -cos θ / r²,    v_θ = -sin θ / r
We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).
v = cos θ / r + g(θ)
We differentiate this expression with respect to θ.
v_θ = -sin θ / r + g'(θ)
We compare this to the second Cauchy-Riemann equation to see that g'(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.
v = cos θ / r + c
The corresponding analytic function is
f(z) = sin θ / r + i cos θ / r + ic
On the positive real axis, (θ = 0), the function has the value
f(z = r) = i/r + ic.
We use analytic continuation to determine the function in the complex plane.
f(z) = i/z + ic
Chapter 11
Analytic Continuation
I'm about two beers away from fine.
11.1 Analytic Continuation
Suppose there is a function, f_1(z), that is analytic in the domain D_1 and another analytic function, f_2(z), that is analytic in the domain D_2. (See Figure 11.1.)
If the two domains overlap and f_1(z) = f_2(z) in the overlap region D_1 ∩ D_2, then f_2(z) is called an analytic continuation of f_1(z). This is an appropriate name since f_2(z) continues the definition of f_1(z) outside of its original domain of definition, D_1. We can define a function f(z) that is analytic in the union of the domains D_1 ∪ D_2. On the domain D_1 we have f(z) = f_1(z) and f(z) = f_2(z) on D_2. f_1(z) and f_2(z) are called function elements. There is an analytic continuation even if the two domains only share an arc and not a two dimensional region.
With more overlapping domains D_3, D_4, ... we could perhaps extend f_1(z) to more of the complex plane. Sometimes it is impossible to extend a function beyond the boundary of a domain. This is known as a natural boundary. If a function f_1(z) is analytically continued to a domain D_n along two different paths, (see Figure 11.2),
Figure 11.1: Overlapping Domains
then the two analytic continuations are identical as long as the paths do not enclose a branch point of the function.
This is the uniqueness theorem of analytic continuation.
Figure 11.2: Two Paths of Analytic Continuation
Consider an analytic function f(z) dened in the domain D. Suppose that f(z) = 0 on the arc AB, (see
Figure 11.3.) Then f(z) = 0 in all of D.
Consider a point ζ on AB. The Taylor series expansion of f(z) about the point z = ζ converges in a circle C at least up to the boundary of D. The derivative of f(z) at the point z = ζ is
f'(ζ) = lim_{Δz→0} (f(ζ + Δz) - f(ζ)) / Δz
Figure 11.3: Domain Containing Arc Along Which f(z) Vanishes
If Δz is in the direction of the arc, then f'(ζ) vanishes as well as all higher derivatives, f'(ζ) = f''(ζ) = f'''(ζ) = ··· = 0. Thus we see that f(z) = 0 inside C. By taking Taylor series expansions about points on AB or inside of C we see that f(z) = 0 in D.
Result 11.1.1 Let f_1(z) and f_2(z) be analytic functions defined in D. If f_1(z) = f_2(z) for the points in a region or on an arc in D, then f_1(z) = f_2(z) for all points in D.
To prove Result 11.1.1, we define the analytic function g(z) = f_1(z) - f_2(z). Since g(z) vanishes in the region or on the arc, then g(z) = 0 and hence f_1(z) = f_2(z) for all points in D.
Result 11.1.2 Consider analytic functions f_1(z) and f_2(z) defined on the domains D_1 and D_2, respectively. Suppose that D_1 ∩ D_2 is a region or an arc and that f_1(z) = f_2(z) for all z ∈ D_1 ∩ D_2. (See Figure 11.4.) Then the function
f(z) = { f_1(z) for z ∈ D_1,   f_2(z) for z ∈ D_2 }
is analytic in D_1 ∪ D_2.
Result 11.1.2 follows directly from Result 11.1.1.
Figure 11.4: Domains that Intersect in a Region or an Arc
11.2 Analytic Continuation of Sums
Example 11.2.1 Consider the function
f_1(z) = Σ_{n=0}^∞ z^n.
The sum converges uniformly for D_1 = |z| ≤ r < 1. Since the derivative also converges in this domain, the function is analytic there.
Figure 11.5: Domain of Convergence for Σ_{n=0}^∞ z^n.
Now consider the function
f_2(z) = 1/(1 - z).
This function is analytic everywhere except the point z = 1. On the domain D_1,
f_2(z) = 1/(1 - z) = Σ_{n=0}^∞ z^n = f_1(z)
Analytic continuation tells us that there is a function that is analytic on the union of the two domains. Here, the domain is the entire z plane except the point z = 1 and the function is
f(z) = 1/(1 - z).
1/(1 - z) is said to be an analytic continuation of Σ_{n=0}^∞ z^n.
11.3 Analytic Functions Defined in Terms of Real Variables
Result 11.3.1 An analytic function, u(x, y) + iv(x, y), can be written in terms of a function of a complex variable, f(z) = u(x, y) + iv(x, y).
Result 11.3.1 is proved in Exercise 11.1.
Example 11.3.1
f(z) = cosh y sin x (x e^x cos y - y e^x sin y) - cos x sinh y (y e^x cos y + x e^x sin y)
+ i [cosh y sin x (y e^x cos y + x e^x sin y) + cos x sinh y (x e^x cos y - y e^x sin y)]
is an analytic function. Express f(z) in terms of z.
On the real line, y = 0, f(z) is
f(z = x) = x e^x sin x
(Recall that cos(0) = cosh(0) = 1 and sin(0) = sinh(0) = 0.)
The analytic continuation of f(z) into the complex plane is
f(z) = z e^z sin z.
Alternatively, for x = 0 we have
f(z = iy) = -y sinh y (cos y + i sin y).
The analytic continuation from the imaginary axis to the complex plane is
f(z) = iz sinh(-iz)(cos(-iz) + i sin(-iz))
= -iz sinh(iz)(cos(iz) - i sin(iz))
= z sin z e^z.
Example 11.3.2 Consider u = e^{-x} (x sin y - y cos y). Find v such that f(z) = u + iv is analytic.
From the Cauchy-Riemann equations,
∂v/∂y = ∂u/∂x = e^{-x} sin y - x e^{-x} sin y + y e^{-x} cos y
∂v/∂x = -∂u/∂y = e^{-x} cos y - x e^{-x} cos y - y e^{-x} sin y
We integrate the first equation with respect to y.
v = -e^{-x} cos y + x e^{-x} cos y + e^{-x} (y sin y + cos y) + F(x)
= y e^{-x} sin y + x e^{-x} cos y + F(x)
F(x) is an arbitrary function of x. Substitute this expression for v into the equation for ∂v/∂x.
-y e^{-x} sin y - x e^{-x} cos y + e^{-x} cos y + F'(x) = -y e^{-x} sin y - x e^{-x} cos y + e^{-x} cos y
Thus F'(x) = 0 and F(x) = c.
v = e^{-x} (y sin y + x cos y) + c
Example 11.3.3 Find f(z) in the previous example. (Up to the additive constant.)
Method 1
f(z) = u + iv
= e^{-x} (x sin y - y cos y) + i e^{-x} (y sin y + x cos y)
= e^{-x} [ x ( (e^{iy} - e^{-iy}) / (2i) ) - y ( (e^{iy} + e^{-iy}) / 2 ) ] + i e^{-x} [ y ( (e^{iy} - e^{-iy}) / (2i) ) + x ( (e^{iy} + e^{-iy}) / 2 ) ]
= i(x + iy) e^{-(x+iy)}
= iz e^{-z}
Method 2  f(z) = f(x + iy) = u(x, y) + iv(x, y) is an analytic function.
On the real axis, y = 0, f(z) is
f(z = x) = u(x, 0) + iv(x, 0)
= e^{-x} (x sin 0 - 0 cos 0) + i e^{-x} (0 sin 0 + x cos 0)
= ix e^{-x}
Suppose there is an analytic continuation of f(z) into the complex plane. If such a continuation, f(z), exists, then it must be equal to f(z = x) on the real axis. An obvious choice for the analytic continuation is
f(z) = u(z, 0) + iv(z, 0)
since this is clearly equal to u(x, 0) + iv(x, 0) when z is real. Thus we obtain
f(z) = iz e^{-z}
Example 11.3.4 Consider f(z) = u(x, y) + iv(x, y). Show that f'(z) = u_x(z, 0) - i u_y(z, 0).
f'(z) = u_x + i v_x = u_x - i u_y
f'(z) is an analytic function. On the real axis, z = x, f'(z) is
f'(z = x) = u_x(x, 0) - i u_y(x, 0)
Now f'(z = x) is defined on the real line. An analytic continuation of f'(z = x) into the complex plane is
f'(z) = u_x(z, 0) - i u_y(z, 0).
Example 11.3.5 Again consider the problem of finding f(z) given that u(x, y) = e^{-x} (x sin y - y cos y). Now we can use the result of the previous example to do this problem.
u_x(x, y) = ∂u/∂x = e^{-x} sin y - x e^{-x} sin y + y e^{-x} cos y
u_y(x, y) = ∂u/∂y = x e^{-x} cos y + y e^{-x} sin y - e^{-x} cos y
f'(z) = u_x(z, 0) - i u_y(z, 0)
= 0 - i(z e^{-z} - e^{-z})
= i(-z e^{-z} + e^{-z})
Integration yields the result
f(z) = iz e^{-z} + c
Example 11.3.6 Find f(z) given that
u(x, y) = cos x cosh²y sin x + cos x sin x sinh²y
v(x, y) = cos²x cosh y sinh y - cosh y sin²x sinh y
f(z) = u(x, y) + iv(x, y) is an analytic function. On the real line, f(z) is
f(z = x) = u(x, 0) + iv(x, 0)
= cos x cosh²0 sin x + cos x sin x sinh²0 + i(cos²x cosh 0 sinh 0 - cosh 0 sin²x sinh 0)
= cos x sin x
Now we know the definition of f(z) on the real line. We would like to find an analytic continuation of f(z) into the complex plane. An obvious choice for f(z) is
f(z) = cos z sin z
Using trig identities we can write this as
f(z) = sin(2z)/2.
Example 11.3.7 Find f(z) given only that
u(x, y) = cos x cosh²y sin x + cos x sin x sinh²y.
Recall that
f'(z) = u_x + i v_x = u_x - i u_y
Differentiating u(x, y),
u_x = cos²x cosh²y - cosh²y sin²x + cos²x sinh²y - sin²x sinh²y
u_y = 4 cos x cosh y sin x sinh y
f'(z) is an analytic function. On the real axis, f'(z) is
f'(z = x) = cos²x - sin²x
Using trig identities we can write this as
f'(z = x) = cos(2x)
Now we find an analytic continuation of f'(z = x) into the complex plane.
f'(z) = cos(2z)
Integration yields the result
f(z) = sin(2z)/2 + c
11.3.1 Polar Coordinates
Example 11.3.8 Is
u(r, θ) = r (log r cos θ - θ sin θ)
the real part of an analytic function?
The Laplacian in polar coordinates is
Δφ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ².
Calculating the partial derivatives of u,
∂u/∂r = cos θ + log r cos θ - θ sin θ
r ∂u/∂r = r cos θ + r log r cos θ - r θ sin θ
∂/∂r (r ∂u/∂r) = 2 cos θ + log r cos θ - θ sin θ
(1/r) ∂/∂r (r ∂u/∂r) = (1/r)(2 cos θ + log r cos θ - θ sin θ)
∂u/∂θ = -r (θ cos θ + sin θ + log r sin θ)
∂²u/∂θ² = r (θ sin θ - 2 cos θ - log r cos θ)
(1/r²) ∂²u/∂θ² = (1/r)(θ sin θ - 2 cos θ - log r cos θ)
From the above we see that
Δu = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² = 0.
Therefore u is harmonic and is the real part of some analytic function.
Example 11.3.9 Find an analytic function f(z) whose real part is
u(r, θ) = r (log r cos θ - θ sin θ).
Let f(z) = u(r, θ) + iv(r, θ). The Cauchy-Riemann equations are
u_r = v_θ / r,    u_θ = -r v_r.
Using the partial derivatives in the above example, we obtain two partial differential equations for v(r, θ).
v_r = -u_θ / r = θ cos θ + sin θ + log r sin θ
v_θ = r u_r = r (cos θ + log r cos θ - θ sin θ)
Integrating the equation for v_θ yields
v = r (θ cos θ + log r sin θ) + F(r)
where F(r) is a constant of integration.
Substituting our expression for v into the equation for v_r yields
θ cos θ + log r sin θ + sin θ + F'(r) = θ cos θ + sin θ + log r sin θ
F'(r) = 0
F(r) = const
Thus we see that
f(z) = u + iv = r (log r cos θ - θ sin θ) + i r (θ cos θ + log r sin θ) + const
f(z) is an analytic function. On the line θ = 0, f(z) is
f(z = r) = r (log r) + i r (0) + const
= r log r + const
The analytic continuation into the complex plane is
f(z) = z log z + const
Example 11.3.10 Find the formula in polar coordinates that is analogous to
f'(z) = u_x(z, 0) - i u_y(z, 0).
We know that
df/dz = e^{-iθ} ∂f/∂r.
If f(z) = u(r, θ) + iv(r, θ) then
df/dz = e^{-iθ} (u_r + i v_r)
From the Cauchy-Riemann equations, we have v_r = -u_θ / r.
df/dz = e^{-iθ} ( u_r - i u_θ / r )
f'(z) is an analytic function. On the line θ = 0, f'(z) is
f'(z = r) = u_r(r, 0) - i u_θ(r, 0) / r
The analytic continuation of f'(z) into the complex plane is
f'(z) = u_r(z, 0) - (i/z) u_θ(z, 0).
Example 11.3.11 Find an analytic function f(z) whose real part is
u(r, θ) = r (log r cos θ - θ sin θ).
u_r(r, θ) = (log r cos θ - θ sin θ) + cos θ
u_θ(r, θ) = -r (log r sin θ + sin θ + θ cos θ)
f'(z) = u_r(z, 0) - (i/z) u_θ(z, 0) = log z + 1
Integrating f'(z) yields
f(z) = z log z + ic.
11.3.2 Analytic Functions Defined in Terms of Their Real or Imaginary Parts
Consider an analytic function: f(z) = u(x, y) + iv(x, y). We differentiate this expression.
f'(z) = u_x(x, y) + i v_x(x, y)
We apply the Cauchy-Riemann equation v_x = -u_y.
f'(z) = u_x(x, y) - i u_y(x, y).    (11.1)
Now consider the function of a complex variable, g(ζ), with ζ = ξ + iψ:
g(ζ) = u_x(x, ζ) - i u_y(x, ζ) = u_x(x, ξ + iψ) - i u_y(x, ξ + iψ).
This function is analytic where f(ζ) is analytic. To show this we first verify that the derivatives in the ξ and ψ directions are equal.
∂g/∂ξ = u_xy(x, ξ + iψ) - i u_yy(x, ξ + iψ)
-i ∂g/∂ψ = -i (i u_xy(x, ξ + iψ) + u_yy(x, ξ + iψ)) = u_xy(x, ξ + iψ) - i u_yy(x, ξ + iψ)
Since these partial derivatives are equal and continuous, g(ζ) is analytic. We evaluate the function g(ζ) at ζ = -ix. (Substitute y = -ix into Equation 11.1.)
f'(2x) = u_x(x, -ix) - i u_y(x, -ix)
We make a change of variables to solve for f'(x).
f'(x) = u_x(x/2, -ix/2) - i u_y(x/2, -ix/2).
If the expression is nonsingular, then this defines the analytic function, f'(z), on the real axis. The analytic continuation to the complex plane is
f'(z) = u_x(z/2, -iz/2) - i u_y(z/2, -iz/2).
Note that (d/dz) 2u(z/2, -iz/2) = u_x(z/2, -iz/2) - i u_y(z/2, -iz/2). We integrate the equation to obtain:
f(z) = 2u(z/2, -iz/2) + c.
We know that the real part of an analytic function determines that function to within an additive constant. Assuming that the above expression is non-singular, we have found a formula for writing an analytic function in terms of its real part. With the same method, we can find how to write an analytic function in terms of its imaginary part, v.
We can also derive formulas if u and v are expressed in polar coordinates:
f(z) = u(r, θ) + iv(r, θ).
Result 11.3.2 If f(z) = u(x, y) + iv(x, y) is analytic and the expressions are non-singular, then
f(z) = 2u(z/2, -iz/2) + const    (11.2)
f(z) = i2v(z/2, -iz/2) + const.    (11.3)
If f(z) = u(r, θ) + iv(r, θ) is analytic and the expressions are non-singular, then
f(z) = 2u(z^{1/2}, -(i/2) log z) + const    (11.4)
f(z) = i2v(z^{1/2}, -(i/2) log z) + const.    (11.5)
Example 11.3.12 Consider the problem of finding f(z) given that u(x, y) = e^{-x} (x sin y - y cos y).
f(z) = 2u(z/2, -iz/2)
= 2 e^{-z/2} ( (z/2) sin(-iz/2) + i (z/2) cos(-iz/2) ) + c
= iz e^{-z/2} ( -i sin(-iz/2) + cos(-iz/2) ) + c
= iz e^{-z/2} ( e^{-z/2} ) + c
= iz e^{-z} + c
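The algebra above can be confirmed numerically by evaluating u with complex arguments (via cmath) and comparing Equation 11.2 with the closed form; a minimal sketch:

```python
import cmath

def u(x, y):
    # u = e^{-x}(x sin y - y cos y), extended to complex arguments
    return cmath.exp(-x) * (x * cmath.sin(y) - y * cmath.cos(y))

for z in (0.8 - 0.6j, -1.2 + 0.4j):
    f_formula = 2 * u(z / 2, -1j * z / 2)      # Equation 11.2 with const = 0
    assert abs(f_formula - 1j * z * cmath.exp(-z)) < 1e-12
```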
Example 11.3.13 Consider
Log z = (1/2) Log(x² + y²) + i Arctan(x, y).
We try to construct the analytic function from its real part using Equation 11.2.
f(z) = 2u(z/2, -iz/2) + c
= 2 (1/2) Log( (z/2)² + (-iz/2)² ) + c
= Log(0) + c
We obtain a singular expression, so the method fails.
Example 11.3.14 Again consider the logarithm, this time written in terms of polar coordinates,
Log z = Log r + iθ.
We try to construct the analytic function from its real part using Equation 11.4.
f(z) = 2u( z^{1/2}, -(i/2) log z ) + c
= 2 Log( z^{1/2} ) + c
= Log z + c
With this method we recover the analytic function.
11.4 Exercises
Exercise 11.1
Consider two functions, f(x, y) and g(x, y). They are said to be functionally dependent if there is an h(g) such that
f(x, y) = h(g(x, y)).
f and g will be functionally dependent if and only if their Jacobian vanishes.
If f and g are functionally dependent, then the derivatives of f are
f_x = h'(g) g_x
f_y = h'(g) g_y.
Thus we have
∂(f, g)/∂(x, y) = det[ f_x  f_y ; g_x  g_y ] = f_x g_y - f_y g_x = h'(g) g_x g_y - h'(g) g_y g_x = 0.
If the Jacobian of f and g vanishes, then
f_x g_y - f_y g_x = 0.
This is a first order partial differential equation for f that has the general solution
f(x, y) = h(g(x, y)).
Prove that an analytic function u(x, y) + iv(x, y) can be written in terms of a function of a complex variable, f(z) = u(x, y) + iv(x, y).
Exercise 11.2
Which of the following functions are the real part of an analytic function? For those that are, find the harmonic conjugate, v(x, y), and find the analytic function f(z) = u(x, y) + iv(x, y) as a function of z.
1. x³ - 3xy² - 2xy + y
2. e^x sinh y
3. e^x (sin x cos y cosh y - cos x sin y sinh y)
Exercise 11.3
For an analytic function, f(z) = u(r, θ) + iv(r, θ), prove that under suitable restrictions:
f(z) = 2u( z^{1/2}, -(i/2) log z ) + const.
11.5 Hints
Hint 11.1
Show that u(x, y) + iv(x, y) is functionally dependent on x + iy so that you can write f(z) = f(x + iy) =
u(x, y) +iv(x, y).
Hint 11.2
Hint 11.3
Check out the derivation of Equation 11.2.
11.6 Solutions
Solution 11.1
u(x, y) + iv(x, y) is functionally dependent on z = x + iy if and only if
∂(u + iv, x + iy)/∂(x, y) = 0.
∂(u + iv, x + iy)/∂(x, y) = det[ u_x + i v_x   u_y + i v_y ; 1   i ] = -v_x - u_y + i(u_x - v_y)
Since u and v satisfy the Cauchy-Riemann equations, this vanishes.
= 0
Thus we see that u(x, y) + iv(x, y) is functionally dependent on x + iy so we can write
f(z) = f(x + iy) = u(x, y) + iv(x, y).
Solution 11.2
1. Consider u(x, y) = x³ - 3xy² - 2xy + y. The Laplacian of this function is
Δu = u_xx + u_yy = 6x - 6x = 0
Since the function is harmonic, it is the real part of an analytic function. Clearly the analytic function is of the form,
az³ + bz² + cz + id,
with a, b and c complex-valued constants and d a real constant. Substituting z = x + iy and expanding products yields,
a(x³ + i3x²y - 3xy² - iy³) + b(x² + i2xy - y²) + c(x + iy) + id.
By inspection, we see that the analytic function is
f(z) = z³ + iz² - iz + id.
The harmonic conjugate of u is the imaginary part of f(z),
v(x, y) = 3x²y - y³ + x² - y² - x + d.
We can also do this problem with analytic continuation. The derivatives of u are
u_x = 3x² - 3y² - 2y,
u_y = -6xy - 2x + 1.
The derivative of f(z) is
f'(z) = u_x - i u_y = 3x² - 3y² - 2y + i(6xy + 2x - 1).
On the real axis we have
f'(z = x) = 3x² + i2x - i.
Using analytic continuation, we see that
f'(z) = 3z² + i2z - i.
Integration yields
f(z) = z³ + iz² - iz + const
2. Consider u(x, y) = e^x sinh y. The Laplacian of this function is
Δu = e^x sinh y + e^x sinh y
= 2 e^x sinh y.
Since the function is not harmonic, it is not the real part of an analytic function.
3. Consider u(x, y) = e^x (sin x cos y cosh y - cos x sin y sinh y). The Laplacian of the function is
Δu = ∂/∂x ( e^x (sin x cos y cosh y - cos x sin y sinh y + cos x cos y cosh y + sin x sin y sinh y) )
+ ∂/∂y ( e^x (-sin x sin y cosh y - cos x cos y sinh y + sin x cos y sinh y - cos x sin y cosh y) )
= 2 e^x (cos x cos y cosh y + sin x sin y sinh y) - 2 e^x (cos x cos y cosh y + sin x sin y sinh y)
= 0.
Thus u is the real part of an analytic function. The derivative of the analytic function is
f'(z) = u_x + i v_x = u_x - i u_y
From the derivatives of u we computed before, we have
f'(z) = e^x (sin x cos y cosh y - cos x sin y sinh y + cos x cos y cosh y + sin x sin y sinh y)
- i e^x (-sin x sin y cosh y - cos x cos y sinh y + sin x cos y sinh y - cos x sin y cosh y)
Along the real axis, f'(z) has the value,
f'(z = x) = e^x (sin x + cos x).
By analytic continuation, f'(z) is
f'(z) = e^z (sin z + cos z)
We obtain f(z) by integrating.
f(z) = e^z sin z + const.
u is the real part of the analytic function
f(z) = e^z sin z + ic,
where c is a real constant. We find the harmonic conjugate of u by taking the imaginary part of f.
f(z) = e^x (cos y + i sin y)(sin x cosh y + i cos x sinh y) + ic
v(x, y) = e^x (sin x sin y cosh y + cos x cos y sinh y) + c
Solution 11.3
We consider the analytic function: f(z) = u(r, θ) + iv(r, θ). Recall that the complex derivative in terms of polar coordinates is
d/dz = e^{-iθ} ∂/∂r = -(i/r) e^{-iθ} ∂/∂θ.
The Cauchy-Riemann equations are
u_r = (1/r) v_θ,    v_r = -(1/r) u_θ.
We differentiate f(z) and use the partial derivative in r for the right side.
f'(z) = e^{-iθ} (u_r + i v_r)
We use the Cauchy-Riemann equations to write f'(z) in terms of the derivatives of u.
f'(z) = e^{-iθ} ( u_r - (i/r) u_θ )    (11.6)
Now consider the function of a complex variable, g(ζ), with ζ = ξ + iψ:
g(ζ) = e^{-iζ} ( u_r(r, ζ) - (i/r) u_θ(r, ζ) ) = e^{-i(ξ+iψ)} ( u_r(r, ξ + iψ) - (i/r) u_θ(r, ξ + iψ) )
This function is analytic where f(ζ) is analytic. It is a simple calculus exercise to show that the complex derivative in the ξ direction and the complex derivative in the ψ direction are equal. Since these partial derivatives are equal and continuous, g(ζ) is analytic. We evaluate the function g(ζ) at ζ = -i log r. (Substitute θ = -i log r into Equation 11.6.)
f'( r e^{i(-i log r)} ) = e^{-i(-i log r)} ( u_r(r, -i log r) - (i/r) u_θ(r, -i log r) )
r f'( r² ) = u_r(r, -i log r) - (i/r) u_θ(r, -i log r)
If the expression is nonsingular, then it defines the analytic function, f'(z), on a curve. The analytic continuation to the complex plane is
z f'( z² ) = u_r(z, -i log z) - (i/z) u_θ(z, -i log z).
We integrate to obtain an expression for f(z²).
(1/2) f( z² ) = u(z, -i log z) + const
We make a change of variables and solve for f(z).
f(z) = 2u( z^{1/2}, -(i/2) log z ) + const.
Assuming that the above expression is non-singular, we have found a formula for writing the analytic function in terms of its real part, u(r, θ). With the same method, we can find how to write an analytic function in terms of its imaginary part, v(r, θ).
Chapter 12
Contour Integration and Cauchy's Theorem
Between two evils, I always pick the one I never tried before.
- Mae West
12.1 Line Integrals
In this section we will recall the definition of a line integral of real-valued functions in the Cartesian plane. We will use this to define the contour integral of complex-valued functions in the complex plane.
Definition. Consider a curve C in the Cartesian plane joining the points (a_0, b_0) and (a_1, b_1). Partition the curve into n + 1 segments with the points (x_0, y_0), ..., (x_n, y_n) where the first and last points are at the endpoints of the curve. Define Δx_k = x_{k+1} - x_k and Δy_k = y_{k+1} - y_k. Let (ξ_k, ψ_k) be points on the curve between (x_k, y_k) and (x_{k+1}, y_{k+1}). (See Figure 12.1.)
Consider the sum
Σ_{k=0}^{n-1} ( P(ξ_k, ψ_k) Δx_k + Q(ξ_k, ψ_k) Δy_k ),
Figure 12.1: A curve in the Cartesian plane.
where P and Q are continuous functions on the curve. In the limit as each of the Δx_k and Δy_k approach zero the value of the sum, (if the limit exists), is denoted by
∫_C P(x, y) dx + Q(x, y) dy.
This is a line integral along the curve C. The value of the line integral depends on the functions P(x, y) and Q(x, y), the endpoints of the curve and the curve C. One can also write a line integral in vector notation,
∫_C f(x) · dx,
where x = (x, y) and f(x) = (P(x, y), Q(x, y)).
Evaluation. Let the curve C be parametrized by x = x(t), y = y(t) for t_0 ≤ t ≤ t_1. The differentials on the curve are dx = x'(t) dt and dy = y'(t) dt. Thus the line integral is
∫_{t_0}^{t_1} ( P(x(t), y(t)) x'(t) + Q(x(t), y(t)) y'(t) ) dt,
which is a definite integral.
Example 12.1.1 Consider the line integral
∫_C x² dx + (x + y) dy,
where C is the semi-circle from (1, 0) to (-1, 0) in the upper half plane. We parameterize the curve with x = cos t, y = sin t for 0 ≤ t ≤ π.
∫_C x² dx + (x + y) dy = ∫_0^π ( cos²t (-sin t) + (cos t + sin t) cos t ) dt
= π/2 - 2/3
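The parameterized integral can be checked with a simple midpoint-rule approximation (the resolution n is an arbitrary choice):

```python
import math

def line_integral(n=20000):
    # Midpoint rule for the parameterized integral over 0 <= t <= pi
    total = 0.0
    dt = math.pi / n
    for k in range(n):
        t = (k + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t), math.cos(t)   # x'(t), y'(t)
        total += (x**2 * dx + (x + y) * dy) * dt
    return total

assert abs(line_integral() - (math.pi / 2 - 2 / 3)) < 1e-6
```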
Complex Line Integrals. Consider a curve C in the complex plane joining the points c_0 and c_1. Partition the curve into n + 1 segments with the points z_0, ..., z_n where the first and last points are at the endpoints of the curve. Define Δz_k = z_{k+1} - z_k. Let ζ_k be points on the curve between z_k and z_{k+1}. Consider the sum
Σ_{k=0}^{n-1} f(ζ_k) Δz_k,
where f is a continuous, complex-valued function on the curve. In the limit as each of the Δz_k approach zero the value of the sum, (if the limit exists), is denoted by
∫_C f(z) dz.
This is a complex line integral along the curve C.
We can write a complex line integral in terms of real line integrals. Let f(z) = u(x, y) + iv(x, y).
∫_C f(z) dz = ∫_C (u(x, y) + iv(x, y))(dx + i dy)
∫_C f(z) dz = ∫_C (u(x, y) dx - v(x, y) dy) + i ∫_C (v(x, y) dx + u(x, y) dy).    (12.1)
Evaluation. Let the curve C be parametrized by z = z(t) for t_0 ≤ t ≤ t_1. Then the complex line integral is
∫_{t_0}^{t_1} f(z(t)) z'(t) dt,
which is a definite integral of a complex-valued function.
Example 12.1.2 Let C be the positively oriented unit circle about the origin in the complex plane. Evaluate:
1. ∫_C z dz
2. ∫_C (1/z) dz
3. ∫_C (1/z) |dz|
1. We parameterize the curve and then do the integral.
z = e^{iθ},    dz = i e^{iθ} dθ
∫_C z dz = ∫_0^{2π} e^{iθ} i e^{iθ} dθ
= [ (1/2) e^{i2θ} ]_0^{2π}
= (1/2) e^{i4π} - (1/2) e^{i0}
= 0
2.
∫_C (1/z) dz = ∫_0^{2π} (1/e^{iθ}) i e^{iθ} dθ = i ∫_0^{2π} dθ = i2π
3.
|dz| = | i e^{iθ} dθ | = | i e^{iθ} | |dθ| = |dθ|
Since dθ is positive in this case, |dθ| = dθ.
∫_C (1/z) |dz| = ∫_0^{2π} e^{-iθ} dθ = [ i e^{-iθ} ]_0^{2π} = 0
Maximum Modulus Integral Bound. The absolute value of a real integral obeys the inequality
| ∫_a^b f(x) dx | ≤ ∫_a^b |f(x)| |dx| ≤ (b - a) max_{a≤x≤b} |f(x)|.
Now we prove the analogous result for the modulus of a complex line integral.
| ∫_C f(z) dz | = | lim_{Δz→0} Σ_{k=0}^{n-1} f(ζ_k) Δz_k |
≤ lim_{Δz→0} Σ_{k=0}^{n-1} |f(ζ_k)| |Δz_k|
= ∫_C |f(z)| |dz|
≤ ∫_C ( max_{z∈C} |f(z)| ) |dz|
= ( max_{z∈C} |f(z)| ) ∫_C |dz|
= ( max_{z∈C} |f(z)| ) (length of C)
Result 12.1.1 Maximum Modulus Integral Bound.
| ∫_C f(z) dz | ≤ ∫_C |f(z)| |dz| ≤ ( max_{z∈C} |f(z)| ) (length of C)
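The bound can be illustrated numerically for a sample integrand on the unit circle (the choice of f here is arbitrary):

```python
import cmath
import math

n = 4000
d = 2 * math.pi / n
integral = 0j
max_mod = 0.0
for k in range(n):
    t = (k + 0.5) * d
    z = cmath.exp(1j * t)
    w = cmath.exp(z) / (z + 2)   # a sample f(z), analytic on the contour
    integral += w * 1j * z * d
    max_mod = max(max_mod, abs(w))

length = 2 * math.pi             # length of the unit circle
assert abs(integral) <= max_mod * length
```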
12.2 Under Construction
Cauchy's Theorem. Let f(z) be analytic in a compact, closed, connected domain D. Consider the integral of f(z) on the boundary of the domain, writing f(z) as a function ψ(x, y) of the Cartesian coordinates.
∫_{∂D} f(z) dz = ∫_{∂D} ψ(x, y) (dx + i dy) = ∫_{∂D} ψ dx + iψ dy
Recall Green's Theorem.
∫_{∂D} P dx + Q dy = ∫_D (Q_x - P_y) dx dy
We apply Green's Theorem to the integral of f(z) on ∂D.
∫_{∂D} f(z) dz = ∫_{∂D} ψ dx + iψ dy = ∫_D (iψ_x - ψ_y) dx dy
Since f(z) is analytic, ψ_x = -iψ_y. The integrand iψ_x - ψ_y is zero. Thus we have
∫_{∂D} f(z) dz = 0.
This is known as Cauchy's Theorem.
Fundamental Theorem of Calculus. First note that Re(·) and Im(·) commute with derivatives and integrals. Let P(x, y) and Q(x, y) be defined on a simply connected domain. A necessary and sufficient condition for the existence of a primitive Φ is that P_y = Q_x. The primitive satisfies
dΦ = P dx + Q dy.
Definite integrals can be evaluated in terms of the primitive.
∫_{(a,b)}^{(c,d)} P dx + Q dy = Φ(c, d) - Φ(a, b)
Now consider the integral along the contour C of the complex-valued function ψ(x, y).
∫_C ψ dz = ∫_C ψ dx + iψ dy
If ψ(x, y) is analytic then there exists a function Φ such that
dΦ = ψ dx + iψ dy.
Then Φ satisfies the Cauchy-Riemann equations. How do we find the primitive Φ that satisfies Φ_x = ψ and Φ_y = iψ? Note that choosing Φ(x, y) = F(z) where F(z) is an anti-derivative of f(z), F'(z) = f(z), does the trick.
F'(z) = Φ_x = -iΦ_y = f = ψ
The differential of Φ is
dΦ = Φ_x dx + Φ_y dy = ψ dx + iψ dy.
We can evaluate a definite integral of f in terms of F.
∫_a^b f(z) dz = F(b) - F(a).
This is the Fundamental Theorem of Calculus for functions of a complex variable.
12.3 Cauchy's Theorem
Result 12.3.1 Cauchy's Theorem. If f(z) is analytic in a compact, closed, connected domain D then the integral of f(z) on the boundary of the domain vanishes.
∫_{∂D} f(z) dz = Σ_k ∫_{C_k} f(z) dz = 0
Here the set of contours {C_k} makes up the positively oriented boundary ∂D of the domain D.
This result follows from Green's Theorem. Since Green's theorem holds for both simply and multiply connected domains, so does Cauchy's theorem.
Proof of Cauchy's Theorem. We will assume that $f'(z)$ is continuous. This assumption is not necessary, but it allows us to use Green's Theorem, which makes for a simpler proof. We consider the integral of $f(z) = u(x, y) + iv(x, y)$ along the boundary of the domain. From Equation 12.1 we have
\[ \int_{\partial D} f(z)\, dz = \int_{\partial D} (u\, dx - v\, dy) + i \int_{\partial D} (v\, dx + u\, dy) \]
We use Green's theorem to write this as an area integral.
\[ \int_{\partial D} f(z)\, dz = \int\!\!\int_D (-v_x - u_y)\, dx\, dy + i \int\!\!\int_D (u_x - v_y)\, dx\, dy \]
Since u and v satisfy the Cauchy-Riemann Equations, $u_x = v_y$ and $u_y = -v_x$, the two integrands on the right side are identically zero. Thus the two area integrals vanish and Cauchy's theorem is proved.
As a special case of Cauchy's theorem we can consider a simply-connected region. For this the boundary is a Jordan curve. We can state the theorem in terms of this curve instead of referring to the boundary.
Result 12.3.2 Cauchy's Theorem for Jordan Curves. If f(z) is analytic inside and on a simple, closed contour C, then
\[ \int_C f(z)\, dz = 0 \]
Example 12.3.1 In Example 12.1.2 we calculated that
\[ \int_C z\, dz = 0 \]
where C is the unit circle about the origin. Now we can evaluate the integral without parameterizing the curve. We simply note that the integrand is analytic inside and on the circle, which is simple and closed. By Cauchy's Theorem, the integral vanishes.
We cannot apply Cauchy's theorem to evaluate
\[ \int_C \frac{1}{z}\, dz = i2\pi \]
as the integrand is not analytic at z = 0.
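Both integrals are easy to check numerically. The sketch below (not part of the original text; the step count and tolerances are arbitrary choices) approximates the two contour integrals on the unit circle with the parameterization $z = e^{it}$:

```python
import cmath

def unit_circle_integral(f, n=4000):
    # Approximate the contour integral of f over the unit circle using
    # z = e^{i t}, dz = i e^{i t} dt, with the midpoint rule on [0, 2 pi].
    dt = 2 * cmath.pi / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * (k + 0.5) * dt)
        total += f(z) * 1j * z * dt
    return total

analytic_case = unit_circle_integral(lambda z: z)   # analytic: expect 0
pole_case = unit_circle_integral(lambda z: 1 / z)   # not analytic at 0: expect i 2 pi
```

The analytic integrand yields (numerically) zero, while the integrand with a singularity inside the contour does not.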
Morera's Theorem. The converse of Cauchy's theorem is Morera's Theorem. If the integrals of a continuous function f(z) vanish along all possible simple, closed contours in a domain, then f(z) is analytic on that domain.
To prove Morera's Theorem we will assume that the first partial derivatives of $f(z) = u(x, y) + iv(x, y)$ are continuous, although the result can be derived without this restriction. Let the simple, closed contour C be the boundary of
D which is contained in the domain $\Omega$.
\begin{align*}
\int_C f(z)\, dz &= \int_C (u + iv)(dx + i\, dy) \\
&= \int_C u\, dx - v\, dy + i \int_C v\, dx + u\, dy \\
&= \int\!\!\int_D (-v_x - u_y)\, dx\, dy + i \int\!\!\int_D (u_x - v_y)\, dx\, dy \\
&= 0
\end{align*}
Since the two integrands are continuous and vanish for all C in $\Omega$, we conclude that the integrands are identically zero. This implies that the Cauchy-Riemann equations,
\[ u_x = v_y, \qquad u_y = -v_x, \]
are satisfied. f(z) is analytic in $\Omega$.
Result 12.3.3 Morera's Theorem. If f(z) is continuous in a simply connected domain $\Omega$ and
\[ \int_C f(z)\, dz = 0 \]
for all possible simple, closed contours C in the domain, then f(z) is analytic in $\Omega$.
12.4 Indefinite Integrals
Consider a function f(z) which is analytic in a domain D. An anti-derivative or indefinite integral (or simply integral) is a function F(z) which satisfies $F'(z) = f(z)$. This integral exists and is unique up to an additive constant. Note that if the domain is not connected, then the additive constants in each connected component are independent. The indefinite integrals are denoted:
\[ \int f(z)\, dz = F(z) + c. \]
We will prove existence in the next section by writing an indefinite integral as a contour integral. We consider uniqueness here. Let F(z) and G(z) be integrals of f(z). Then $F'(z) - G'(z) = f(z) - f(z) = 0$. One can use this to show that F(z) - G(z) is a constant on each connected component of the domain. This demonstrates uniqueness.
Integrals of analytic functions have all the nice properties of integrals of functions of a real variable. All the formulas from integral tables, including things like integration by parts, carry over directly.
12.5 Contour Integrals

Result 12.5.1 Path Independence. Let f(z) be analytic on a simply connected domain. For points a and b in the domain, the contour integral,
\[ \int_a^b f(z)\, dz \]
is independent of the path connecting the points.
(Here we assume that the paths lie entirely in the domain.) This result is a direct consequence of Cauchy's Theorem. Let $C_1$ and $C_2$ be two different paths connecting the points. Let $-C_2$ denote the second curve with the opposite orientation. Let C be the contour which is the union of $C_1$ and $-C_2$. By Cauchy's theorem, the integral along this contour vanishes.
\[ \int_{C_1} f(z)\, dz + \int_{-C_2} f(z)\, dz = 0 \]
This implies that
\[ \int_{C_1} f(z)\, dz = \int_{C_2} f(z)\, dz. \]
Thus contour integrals on simply connected domains are independent of path. This result does not hold for
multiply connected domains.
Result 12.5.2 Constructing an Indefinite Integral. If f(z) is analytic in a simply connected domain D and a is a point in the domain, then
\[ F(z) = \int_a^z f(\zeta)\, d\zeta \]
is analytic in D and is an indefinite integral of f(z), ($F'(z) = f(z)$).
To prove this, we use the limit definition of differentiation.
\begin{align*}
F'(z) &= \lim_{\Delta z \to 0} \frac{F(z + \Delta z) - F(z)}{\Delta z} \\
&= \lim_{\Delta z \to 0} \frac{1}{\Delta z} \left( \int_a^{z + \Delta z} f(\zeta)\, d\zeta - \int_a^z f(\zeta)\, d\zeta \right) \\
&= \lim_{\Delta z \to 0} \frac{1}{\Delta z} \int_z^{z + \Delta z} f(\zeta)\, d\zeta
\end{align*}
The integral is independent of path. We choose a straight line connecting z and $z + \Delta z$. We add and subtract $\Delta z\, f(z) = \int_z^{z + \Delta z} f(z)\, d\zeta$ from the expression for $F'(z)$.
\begin{align*}
F'(z) &= \lim_{\Delta z \to 0} \frac{1}{\Delta z} \left( \Delta z\, f(z) + \int_z^{z + \Delta z} (f(\zeta) - f(z))\, d\zeta \right) \\
&= f(z) + \lim_{\Delta z \to 0} \frac{1}{\Delta z} \int_z^{z + \Delta z} (f(\zeta) - f(z))\, d\zeta
\end{align*}
Since f(z) is analytic, it is certainly continuous. This means that
\[ \lim_{\zeta \to z} (f(\zeta) - f(z)) = 0. \]
The limit term vanishes as a result of this continuity.
\begin{align*}
\lim_{\Delta z \to 0} \left| \frac{1}{\Delta z} \int_z^{z + \Delta z} (f(\zeta) - f(z))\, d\zeta \right| &\le \lim_{\Delta z \to 0} \frac{1}{|\Delta z|} |\Delta z| \max_{\zeta \in [z \ldots z + \Delta z]} |f(\zeta) - f(z)| \\
&= \lim_{\Delta z \to 0} \max_{\zeta \in [z \ldots z + \Delta z]} |f(\zeta) - f(z)| \\
&= 0
\end{align*}
Thus $F'(z) = f(z)$.
This result demonstrates the existence of the indefinite integral. We will use this to prove the Fundamental Theorem of Calculus for functions of a complex variable.
Result 12.5.3 Fundamental Theorem of Calculus. If f(z) is analytic in a simply connected domain D then
\[ \int_a^b f(z)\, dz = F(b) - F(a) \]
where F(z) is any indefinite integral of f(z).
From Result 12.5.2 we know that
\[ \int_a^b f(z)\, dz = F(b) + c. \]
(Here we are considering b to be a variable.) The case b = a determines the constant.
\[ \int_a^a f(z)\, dz = F(a) + c = 0 \qquad \Rightarrow \qquad c = -F(a) \]
This proves the Fundamental Theorem of Calculus for functions of a complex variable.
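A numerical illustration of path independence and the Fundamental Theorem (a sketch, not from the text; the integrand $z^2$, the endpoints, and the resolution are arbitrary choices):

```python
def path_integral(f, path, n=4000):
    # Approximate the contour integral of f along z = path(t), t in [0, 1],
    # summing f(segment midpoint) * (segment endpoint difference).
    total = 0j
    for k in range(n):
        t0, t1 = k / n, (k + 1) / n
        total += f(path((t0 + t1) / 2)) * (path(t1) - path(t0))
    return total

f = lambda z: z * z                     # analytic everywhere
a, b = 0j, 1 + 1j
exact = b ** 3 / 3 - a ** 3 / 3         # F(b) - F(a) with F(z) = z^3 / 3
straight = path_integral(f, lambda t: a + t * (b - a))
# A second path: along the real axis to 1, then straight up to 1 + i.
bent = path_integral(f, lambda t: 2 * t + 0j if t < 0.5 else 1 + (2 * t - 1) * 1j)
```

Both paths reproduce $F(b) - F(a)$ to within the discretization error.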
Example 12.5.1 Consider the integral
\[ \int_C \frac{1}{z - a}\, dz \]
where C is any closed contour that goes around the point z = a once in the positive direction. We use the Fundamental Theorem of Calculus to evaluate the integral. We start at a point on the contour, $z - a = r e^{i\theta}$. When we traverse the contour once in the positive direction we end at the point $z - a = r e^{i(\theta + 2\pi)}$.
\begin{align*}
\int_C \frac{1}{z - a}\, dz &= \left[ \log(z - a) \right]_{z - a = r e^{i\theta}}^{z - a = r e^{i(\theta + 2\pi)}} \\
&= \operatorname{Log} r + i(\theta + 2\pi) - (\operatorname{Log} r + i\theta) \\
&= i2\pi
\end{align*}
12.6 Exercises

Exercise 12.1
C is the arc corresponding to the unit semi-circle, $|z| = 1$, $\Im(z) \ge 0$, directed from z = -1 to z = 1. Evaluate
1. \[ \int_C z^2\, dz \]
2. \[ \int_C |z^2|\, dz \]
3. \[ \int_C z^2\, |dz| \]
4. \[ \int_C |z^2|\, |dz| \]
Exercise 12.2
Evaluate
\[ \int_{-\infty}^{\infty} e^{-(a x^2 + b x)}\, dx, \]
where $a, b \in \mathbb{C}$ and $\Re(a) > 0$. Use the fact that
\[ \int_{-\infty}^{\infty} e^{-x^2}\, dx = \sqrt{\pi}. \]
Exercise 12.3
Evaluate
\[ 2\int_0^{\infty} e^{-a x^2} \cos(\omega x)\, dx, \qquad \text{and} \qquad 2\int_0^{\infty} x\, e^{-a x^2} \sin(\omega x)\, dx, \]
where $\Re(a) > 0$ and $\omega \in \mathbb{R}$.
12.7 Hints

Hint 12.1

Hint 12.2
Let C be the parallelogram in the complex plane with corners at $\pm R$ and $\pm R + b/(2a)$. Consider the integral of $e^{-a z^2}$ on this contour. Take the limit as $R \to \infty$.

Hint 12.3
Extend the range of integration to $(-\infty \ldots \infty)$. Use $e^{i\omega x} = \cos(\omega x) + i \sin(\omega x)$ and the result of Exercise 12.2.
12.8 Solutions

Solution 12.1
We parameterize the path with $z = e^{i\theta}$, with $\theta$ ranging from $\pi$ to 0.
\[ dz = i e^{i\theta}\, d\theta, \qquad |dz| = |i e^{i\theta}\, d\theta| = |d\theta| = -d\theta \]
1.
\begin{align*}
\int_C z^2\, dz &= \int_\pi^0 e^{i2\theta}\, i e^{i\theta}\, d\theta = \int_\pi^0 i e^{i3\theta}\, d\theta \\
&= \left[ \frac{1}{3} e^{i3\theta} \right]_\pi^0 = \frac{1}{3} \left( e^{i0} - e^{i3\pi} \right) = \frac{1}{3}(1 - (-1)) = \frac{2}{3}
\end{align*}
2.
\begin{align*}
\int_C |z^2|\, dz &= \int_\pi^0 |e^{i2\theta}|\, i e^{i\theta}\, d\theta = \int_\pi^0 i e^{i\theta}\, d\theta \\
&= \left[ e^{i\theta} \right]_\pi^0 = 1 - (-1) = 2
\end{align*}
3.
\begin{align*}
\int_C z^2\, |dz| &= \int_\pi^0 e^{i2\theta}\, |i e^{i\theta}\, d\theta| = -\int_\pi^0 e^{i2\theta}\, d\theta \\
&= \left[ \frac{i}{2} e^{i2\theta} \right]_\pi^0 = \frac{i}{2}(1 - 1) = 0
\end{align*}
4.
\begin{align*}
\int_C |z^2|\, |dz| &= \int_\pi^0 |e^{i2\theta}|\, |i e^{i\theta}\, d\theta| = -\int_\pi^0 d\theta \\
&= \left[ -\theta \right]_\pi^0 = \pi
\end{align*}
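These four values can be confirmed numerically (a sketch, not part of the text; the resolution n is an arbitrary choice, and the parameterization matches the one above):

```python
import cmath

def semicircle(g, use_abs_dz=False, n=20000):
    # Integrate g along the upper unit semicircle from -1 to 1:
    # z = e^{i t} with t running from pi down to 0, dz = i e^{i t} dt.
    dt = -cmath.pi / n          # t decreases, so dt < 0 and |dz| = -dt
    total = 0j
    for k in range(n):
        t = cmath.pi + (k + 0.5) * dt
        z = cmath.exp(1j * t)
        dz = 1j * z * dt
        total += g(z) * (abs(dz) if use_abs_dz else dz)
    return total

i1 = semicircle(lambda z: z * z)                        # expect 2/3
i2 = semicircle(lambda z: abs(z * z))                   # expect 2
i3 = semicircle(lambda z: z * z, use_abs_dz=True)       # expect 0
i4 = semicircle(lambda z: abs(z * z), use_abs_dz=True)  # expect pi
```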
Solution 12.2
\[ I = \int_{-\infty}^{\infty} e^{-(a x^2 + b x)}\, dx \]
First we complete the square in the argument of the exponential.
\[ I = e^{b^2/(4a)} \int_{-\infty}^{\infty} e^{-a(x + b/(2a))^2}\, dx \]
Consider the parallelogram in the complex plane with corners at $\pm R$ and $\pm R + b/(2a)$. The integral of $e^{-a z^2}$ on this contour vanishes as it is an entire function. We can write this as
\[ \int_{-R + b/(2a)}^{R + b/(2a)} e^{-a z^2}\, dz = \left( \int_{-R + b/(2a)}^{-R} + \int_{-R}^{R} + \int_{R}^{R + b/(2a)} \right) e^{-a z^2}\, dz. \]
The first and third integrals on the right side vanish as $R \to \infty$ because the integrand vanishes and the lengths of the paths of integration are finite. Taking the limit as $R \to \infty$ we have,
\[ \int_{-\infty + b/(2a)}^{\infty + b/(2a)} e^{-a z^2}\, dz \equiv \int_{-\infty}^{\infty} e^{-a(x + b/(2a))^2}\, dx = \int_{-\infty}^{\infty} e^{-a x^2}\, dx. \]
Now we have
\[ I = e^{b^2/(4a)} \int_{-\infty}^{\infty} e^{-a x^2}\, dx. \]
We make the change of variables $\xi = \sqrt{a}\, x$.
\[ I = e^{b^2/(4a)} \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} e^{-\xi^2}\, d\xi \]
\[ \int_{-\infty}^{\infty} e^{-(a x^2 + b x)}\, dx = \sqrt{\frac{\pi}{a}}\, e^{b^2/(4a)} \]
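A numerical check of this closed form for one arbitrary choice of complex a and b with $\Re(a) > 0$ (a sketch; the truncation window and resolution are choices of this check, not of the text):

```python
import cmath

def gaussian_integral(a, b, half_width=20.0, n=50000):
    # Midpoint-rule approximation of the integral of e^{-(a x^2 + b x)}
    # over the real line; tails beyond +-half_width are negligible
    # when Re(a) > 0 and the window is wide enough.
    h = 2 * half_width / n
    total = 0j
    for k in range(n):
        x = -half_width + (k + 0.5) * h
        total += cmath.exp(-(a * x * x + b * x)) * h
    return total

a, b = 1.0 + 0.5j, 0.3 - 0.2j           # arbitrary test values, Re(a) > 0
exact = cmath.sqrt(cmath.pi / a) * cmath.exp(b * b / (4 * a))
approx = gaussian_integral(a, b)
```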
Solution 12.3
Consider
\[ I = 2\int_0^{\infty} e^{-a x^2} \cos(\omega x)\, dx. \]
Since the integrand is an even function,
\[ I = \int_{-\infty}^{\infty} e^{-a x^2} \cos(\omega x)\, dx. \]
Since $e^{-a x^2} \sin(\omega x)$ is an odd function,
\[ I = \int_{-\infty}^{\infty} e^{-a x^2} e^{i\omega x}\, dx. \]
We evaluate this integral with the result of Exercise 12.2.
\[ 2\int_0^{\infty} e^{-a x^2} \cos(\omega x)\, dx = \sqrt{\frac{\pi}{a}}\, e^{-\omega^2/(4a)} \]
Consider
\[ I = 2\int_0^{\infty} x\, e^{-a x^2} \sin(\omega x)\, dx. \]
Since the integrand is an even function,
\[ I = \int_{-\infty}^{\infty} x\, e^{-a x^2} \sin(\omega x)\, dx. \]
Since $x\, e^{-a x^2} \cos(\omega x)$ is an odd function,
\[ I = -i \int_{-\infty}^{\infty} x\, e^{-a x^2} e^{i\omega x}\, dx. \]
We add a dash of integration by parts to get rid of the x factor.
\[ I = -i \left[ -\frac{1}{2a} e^{-a x^2} e^{i\omega x} \right]_{-\infty}^{\infty} + i \int_{-\infty}^{\infty} \left( -\frac{1}{2a} \right) e^{-a x^2}\, i\omega\, e^{i\omega x}\, dx \]
\[ I = \frac{\omega}{2a} \int_{-\infty}^{\infty} e^{-a x^2} e^{i\omega x}\, dx \]
\[ 2\int_0^{\infty} x\, e^{-a x^2} \sin(\omega x)\, dx = \frac{\omega}{2a} \sqrt{\frac{\pi}{a}}\, e^{-\omega^2/(4a)} \]
Chapter 13
Cauchy's Integral Formula

If I were founding a university I would begin with a smoking room; next a dormitory; and then a decent reading room and a library. After that, if I still had more money that I couldn't use, I would hire a professor and get some text books.
- Stephen Leacock
13.1 Cauchy's Integral Formula

Result 13.1.1 Cauchy's Integral Formula. If $f(\zeta)$ is analytic in a compact, closed, connected domain D and z is a point in the interior of D then
\[ f(z) = \frac{1}{i2\pi} \int_{\partial D} \frac{f(\zeta)}{\zeta - z}\, d\zeta = \frac{1}{i2\pi} \sum_k \int_{C_k} \frac{f(\zeta)}{\zeta - z}\, d\zeta. \tag{13.1} \]
Here the set of contours $\{C_k\}$ make up the positively oriented boundary $\partial D$ of the domain D. More generally, we have
\[ f^{(n)}(z) = \frac{n!}{i2\pi} \int_{\partial D} \frac{f(\zeta)}{(\zeta - z)^{n+1}}\, d\zeta = \frac{n!}{i2\pi} \sum_k \int_{C_k} \frac{f(\zeta)}{(\zeta - z)^{n+1}}\, d\zeta. \tag{13.2} \]
Cauchy's Formula shows that the value of f(z) and all its derivatives in a domain are determined by the value of f(z) on the boundary of the domain. Consider the first formula of the result, Equation 13.1. We deform the contour to a circle of radius $\delta$ about the point $\zeta = z$.
\begin{align*}
\int_C \frac{f(\zeta)}{\zeta - z}\, d\zeta &= \int_{C_\delta} \frac{f(\zeta)}{\zeta - z}\, d\zeta \\
&= \int_{C_\delta} \frac{f(z)}{\zeta - z}\, d\zeta + \int_{C_\delta} \frac{f(\zeta) - f(z)}{\zeta - z}\, d\zeta
\end{align*}
We use the result of Example 12.5.1 to evaluate the first integral.
\[ \int_C \frac{f(\zeta)}{\zeta - z}\, d\zeta = i2\pi f(z) + \int_{C_\delta} \frac{f(\zeta) - f(z)}{\zeta - z}\, d\zeta \]
The remaining integral along $C_\delta$ vanishes as $\delta \to 0$ because $f(\zeta)$ is continuous. We demonstrate this with the maximum modulus integral bound. The length of the path of integration is $2\pi\delta$.
\begin{align*}
\lim_{\delta \to 0} \left| \int_{C_\delta} \frac{f(\zeta) - f(z)}{\zeta - z}\, d\zeta \right| &\le \lim_{\delta \to 0} \left( (2\pi\delta) \frac{1}{\delta} \max_{|\zeta - z| = \delta} |f(\zeta) - f(z)| \right) \\
&\le \lim_{\delta \to 0} \left( 2\pi \max_{|\zeta - z| = \delta} |f(\zeta) - f(z)| \right) \\
&= 0
\end{align*}
This gives us the desired result.
\[ f(z) = \frac{1}{i2\pi} \int_C \frac{f(\zeta)}{\zeta - z}\, d\zeta \]
We derive the second formula, Equation 13.2, from the first by differentiating with respect to z. Note that the integral converges uniformly for z in any closed subset of the interior of C. Thus we can differentiate with respect to z and interchange the order of differentiation and integration.
\begin{align*}
f^{(n)}(z) &= \frac{1}{i2\pi} \frac{d^n}{dz^n} \int_C \frac{f(\zeta)}{\zeta - z}\, d\zeta \\
&= \frac{1}{i2\pi} \int_C \frac{d^n}{dz^n} \frac{f(\zeta)}{\zeta - z}\, d\zeta \\
&= \frac{n!}{i2\pi} \int_C \frac{f(\zeta)}{(\zeta - z)^{n+1}}\, d\zeta
\end{align*}
Example 13.1.1 Consider the following integrals where C is the positive contour on the unit circle. For the third integral, the point z = -1 is removed from the contour.
1. \[ \int_C \sin(\cos(z^5))\, dz \]
2. \[ \int_C \frac{1}{(z - 3)(3z - 1)}\, dz \]
3. \[ \int_C \sqrt{z}\, dz \]
1. Since $\sin(\cos(z^5))$ is an analytic function inside the unit circle,
\[ \int_C \sin(\cos(z^5))\, dz = 0 \]
2. $\frac{1}{(z - 3)(3z - 1)}$ has singularities at z = 3 and z = 1/3. Since z = 3 is outside the contour, only the singularity at z = 1/3 will contribute to the value of the integral. We will evaluate this integral using the Cauchy integral formula.
\[ \int_C \frac{1}{(z - 3)(3z - 1)}\, dz = i2\pi \left( \frac{1}{(1/3 - 3)3} \right) = -\frac{i\pi}{4} \]
3. Since the curve is not closed, we cannot apply the Cauchy integral formula. Note that $\sqrt{z}$ is single-valued and analytic in the complex plane with a branch cut on the negative real axis. Thus we use the Fundamental Theorem of Calculus.
\begin{align*}
\int_C \sqrt{z}\, dz &= \left[ \frac{2}{3} \sqrt{z^3} \right]_{e^{-i\pi}}^{e^{i\pi}} \\
&= \frac{2}{3} \left( e^{i3\pi/2} - e^{-i3\pi/2} \right) \\
&= \frac{2}{3} (-i - i) \\
&= -i\frac{4}{3}
\end{align*}
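Cauchy's integral formula, including the derivative formula (13.2), lends itself to a numerical sanity check. The sketch below (not from the text; the sample function $e^z$, evaluation point, and resolution are arbitrary choices) recovers a function value and a second derivative from boundary data alone:

```python
import cmath

def cauchy_derivative(f, z, order=0, radius=1.0, n=2000):
    # f^{(order)}(z) = order!/(i 2 pi) * integral of f(w)/(w - z)^(order+1) dw
    # over the circle |w| = radius (trapezoid rule with w = radius e^{i t}).
    fact = 1
    for k in range(2, order + 1):
        fact *= k
    dt = 2 * cmath.pi / n
    total = 0j
    for k in range(n):
        w = radius * cmath.exp(1j * k * dt)
        total += f(w) / (w - z) ** (order + 1) * 1j * w * dt
    return fact * total / (2j * cmath.pi)

z0 = 0.3 + 0.1j                 # an arbitrary point inside |w| = 1
exact = cmath.exp(z0)           # every derivative of e^z is e^z
val = cauchy_derivative(cmath.exp, z0)
second = cauchy_derivative(cmath.exp, z0, order=2)
```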
Cauchy's Inequality. Suppose that $f(\zeta)$ is analytic in the closed disk $|\zeta - z| \le r$. By Cauchy's integral formula,
\[ f^{(n)}(z) = \frac{n!}{i2\pi} \int_C \frac{f(\zeta)}{(\zeta - z)^{n+1}}\, d\zeta, \]
where C is the circle of radius r centered about the point z. We use this to obtain an upper bound on the modulus of $f^{(n)}(z)$.
\begin{align*}
\left| f^{(n)}(z) \right| &= \frac{n!}{2\pi} \left| \int_C \frac{f(\zeta)}{(\zeta - z)^{n+1}}\, d\zeta \right| \\
&\le \frac{n!}{2\pi}\, 2\pi r \max_{|\zeta - z| = r} \left| \frac{f(\zeta)}{(\zeta - z)^{n+1}} \right| \\
&= \frac{n!}{r^n} \max_{|\zeta - z| = r} |f(\zeta)|
\end{align*}
Result 13.1.2 Cauchy's Inequality. If $f(\zeta)$ is analytic in $|\zeta - z| \le r$ then
\[ \left| f^{(n)}(z) \right| \le \frac{n! M}{r^n} \]
where $|f(\zeta)| \le M$ for all $|\zeta - z| = r$.
Liouville's Theorem. Consider a function f(z) that is analytic and bounded, ($|f(z)| \le M$), in the complex plane. From Cauchy's inequality,
\[ |f'(z)| \le \frac{M}{r} \]
for any positive r. By taking $r \to \infty$, we see that $f'(z)$ is identically zero for all z. Thus f(z) is a constant.
Result 13.1.3 Liouville's Theorem. If f(z) is analytic and bounded in the complex plane then f(z) is a constant.
The Fundamental Theorem of Algebra. We will prove that every polynomial of degree $n \ge 1$ has exactly n roots, counting multiplicities. First we demonstrate that each such polynomial has at least one root. Suppose that an $n^{\text{th}}$ degree polynomial p(z) has no roots. Let the lower bound on the modulus of p(z) be $0 < m \le |p(z)|$. The function f(z) = 1/p(z) is analytic, ($f'(z) = -p'(z)/p^2(z)$), and bounded, ($|f(z)| \le 1/m$), in the extended complex plane. Using Liouville's theorem we conclude that f(z) and hence p(z) are constants, which yields a contradiction. Therefore every such polynomial p(z) must have at least one root.
Now we show that we can factor the root out of the polynomial. Let
\[ p(z) = \sum_{k=0}^{n} p_k z^k. \]
We note that
\[ (z^n - c^n) = (z - c) \sum_{k=0}^{n-1} c^{n-1-k} z^k. \]
Suppose that the $n^{\text{th}}$ degree polynomial p(z) has a root at z = c.
\begin{align*}
p(z) &= p(z) - p(c) \\
&= \sum_{k=0}^{n} p_k z^k - \sum_{k=0}^{n} p_k c^k \\
&= \sum_{k=0}^{n} p_k (z^k - c^k) \\
&= \sum_{k=0}^{n} p_k (z - c) \sum_{j=0}^{k-1} c^{k-1-j} z^j \\
&= (z - c) q(z)
\end{align*}
Here q(z) is a polynomial of degree n - 1. By induction, we see that p(z) has exactly n roots.
Result 13.1.4 Fundamental Theorem of Algebra. Every polynomial of degree $n \ge 1$ has exactly n roots, counting multiplicities.
Gauss' Mean Value Theorem. Let $f(\zeta)$ be analytic in $|\zeta - z| \le r$. By Cauchy's integral formula,
\[ f(z) = \frac{1}{i2\pi} \int_C \frac{f(\zeta)}{\zeta - z}\, d\zeta, \]
where C is the circle $|\zeta - z| = r$. We parameterize the contour with $\zeta = z + r e^{i\theta}$.
\[ f(z) = \frac{1}{i2\pi} \int_0^{2\pi} \frac{f(z + r e^{i\theta})}{r e^{i\theta}}\, i r e^{i\theta}\, d\theta \]
Writing this in the form,
\[ f(z) = \frac{1}{2\pi r} \int_0^{2\pi} f(z + r e^{i\theta})\, r\, d\theta, \]
we see that f(z) is the average value of $f(\zeta)$ on the circle of radius r about the point z.
Result 13.1.5 Gauss' Average Value Theorem. If $f(\zeta)$ is analytic in $|\zeta - z| \le r$ then
\[ f(z) = \frac{1}{2\pi} \int_0^{2\pi} f(z + r e^{i\theta})\, d\theta. \]
That is, f(z) is equal to its average value on a circle of radius r about the point z.
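The mean value property is easy to observe numerically. This sketch (not from the text; the sample polynomial, center, and radius are arbitrary choices) averages an entire function over a circle and compares with the center value:

```python
import cmath

def circle_average(f, z, r, n=1000):
    # Average of f over n equally spaced points on |zeta - z| = r;
    # a trapezoid rule, which is very accurate for periodic integrands.
    total = 0j
    for k in range(n):
        total += f(z + r * cmath.exp(2j * cmath.pi * k / n))
    return total / n

f = lambda w: w ** 3 - 2 * w + 1      # an arbitrary entire function
z0, r = 0.5 - 0.25j, 2.0
avg = circle_average(f, z0, r)
exact = f(z0)
```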
Extremum Modulus Theorem. Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extremum, then the function is a constant. We will show this with proof by contradiction. Assume that |f(z)| has an interior maximum at the point z = c. This means that there exists a neighborhood of the point z = c for which $|f(z)| \le |f(c)|$. Choose an $\epsilon$ so that the set $|z - c| \le \epsilon$ lies inside this neighborhood. First we use Gauss' mean value theorem.
\[ f(c) = \frac{1}{2\pi} \int_0^{2\pi} f\left( c + \epsilon e^{i\theta} \right) d\theta \]
We get an upper bound on |f(c)| with the maximum modulus integral bound.
\[ |f(c)| \le \frac{1}{2\pi} \int_0^{2\pi} \left| f\left( c + \epsilon e^{i\theta} \right) \right| d\theta \]
Since z = c is a maximum of |f(z)| we can get a lower bound on |f(c)|.
\[ |f(c)| \ge \frac{1}{2\pi} \int_0^{2\pi} \left| f\left( c + \epsilon e^{i\theta} \right) \right| d\theta \]
If $|f(z)| < |f(c)|$ for any point on $|z - c| = \epsilon$, then the continuity of f(z) implies that $|f(z)| < |f(c)|$ in a neighborhood of that point, which would make the value of the integral of $|f(z)|$ strictly less than |f(c)|. Thus we conclude that $|f(z)| = |f(c)|$ for all $|z - c| = \epsilon$. Since we can repeat the above procedure for any circle of radius smaller than $\epsilon$, $|f(z)| = |f(c)|$ for all $|z - c| \le \epsilon$, i.e. all the points in the disk of radius $\epsilon$ about z = c are also maxima. By recursively repeating this procedure with points in this disk, we see that |f(z)| = |f(c)| for all $z \in D$. This implies that f(z) is a constant in the domain. By reversing the inequalities in the above method we see that the minimum modulus of f(z) must also occur on the boundary.
Result 13.1.6 Extremum Modulus Theorem. Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extremum, then the function is a constant.
13.2 The Argument Theorem

Result 13.2.1 The Argument Theorem. Let f(z) be analytic inside and on C except for isolated poles inside the contour. Let f(z) be nonzero on C.
\[ \frac{1}{i2\pi} \int_C \frac{f'(z)}{f(z)}\, dz = N - P \]
Here N is the number of zeros and P the number of poles, counting multiplicities, of f(z) inside C.
First we will simplify the problem and consider a function f(z) that has one zero or one pole. Let f(z) be analytic and nonzero inside and on A except for a zero of order n at z = a. Then we can write $f(z) = (z - a)^n g(z)$ where g(z) is analytic and nonzero inside and on A. The integral of $f'(z)/f(z)$ along A is
\begin{align*}
\frac{1}{i2\pi} \int_A \frac{f'(z)}{f(z)}\, dz &= \frac{1}{i2\pi} \int_A \frac{d}{dz} (\log(f(z)))\, dz \\
&= \frac{1}{i2\pi} \int_A \frac{d}{dz} (\log((z - a)^n) + \log(g(z)))\, dz \\
&= \frac{1}{i2\pi} \int_A \frac{d}{dz} (\log((z - a)^n))\, dz \\
&= \frac{1}{i2\pi} \int_A \frac{n}{z - a}\, dz \\
&= n
\end{align*}
Now let f(z) be analytic and nonzero inside and on B except for a pole of order p at z = b. Then we can write $f(z) = \frac{g(z)}{(z - b)^p}$ where g(z) is analytic and nonzero inside and on B. The integral of $f'(z)/f(z)$ along B is
\begin{align*}
\frac{1}{i2\pi} \int_B \frac{f'(z)}{f(z)}\, dz &= \frac{1}{i2\pi} \int_B \frac{d}{dz} (\log(f(z)))\, dz \\
&= \frac{1}{i2\pi} \int_B \frac{d}{dz} \left( \log((z - b)^{-p}) + \log(g(z)) \right) dz \\
&= \frac{1}{i2\pi} \int_B \frac{d}{dz} \left( \log((z - b)^{-p}) \right) dz \\
&= \frac{1}{i2\pi} \int_B \frac{-p}{z - b}\, dz \\
&= -p
\end{align*}
Now consider a function f(z) that is analytic inside and on the contour C except for isolated poles at the points $b_1, \ldots, b_p$. Let f(z) be nonzero except at the isolated points $a_1, \ldots, a_n$. Let the contours $A_k$, k = 1, ..., n, be simple, positive contours which contain the zero at $a_k$ but no other poles or zeros of f(z). Likewise, let the contours $B_k$, k = 1, ..., p, be simple, positive contours which contain the pole at $b_k$ but no other poles or zeros of f(z). (See Figure 13.1.) By deforming the contour we obtain
\[ \int_C \frac{f'(z)}{f(z)}\, dz = \sum_{j=1}^{n} \int_{A_j} \frac{f'(z)}{f(z)}\, dz + \sum_{k=1}^{p} \int_{B_k} \frac{f'(z)}{f(z)}\, dz. \]
From this we obtain Result 13.2.1.
Figure 13.1: Deforming the contour C.
13.3 Rouché's Theorem

Result 13.3.1 Rouché's Theorem. Let f(z) and g(z) be analytic inside and on a simple, closed contour C. If |f(z)| > |g(z)| on C then f(z) and f(z) + g(z) have the same number of zeros inside C and no zeros on C.
First note that since |f(z)| > |g(z)| on C, f(z) is nonzero on C. The inequality implies that |f(z) + g(z)| > 0 on C, so f(z) + g(z) has no zeros on C. We will count the number of zeros of f(z) and f(z) + g(z) using the Argument Theorem, (Result 13.2.1). The number of zeros N of f(z) inside the contour is
\[ N = \frac{1}{i2\pi} \int_C \frac{f'(z)}{f(z)}\, dz. \]
Now consider the number of zeros M of f(z) + g(z). We introduce the function h(z) = g(z)/f(z).
\begin{align*}
M &= \frac{1}{i2\pi} \int_C \frac{f'(z) + g'(z)}{f(z) + g(z)}\, dz \\
&= \frac{1}{i2\pi} \int_C \frac{f'(z) + f'(z) h(z) + f(z) h'(z)}{f(z) + f(z) h(z)}\, dz \\
&= \frac{1}{i2\pi} \int_C \frac{f'(z)}{f(z)}\, dz + \frac{1}{i2\pi} \int_C \frac{h'(z)}{1 + h(z)}\, dz \\
&= N + \frac{1}{i2\pi} \left[ \log(1 + h(z)) \right]_C \\
&= N
\end{align*}
(Note that since |h(z)| < 1 on C, $\Re(1 + h(z)) > 0$ on C and the value of log(1 + h(z)) does not change in traversing the contour.) This demonstrates that f(z) and f(z) + g(z) have the same number of zeros inside C and proves the result.
13.4 Exercises

Exercise 13.1
What is
\[ \left[ \arg(\sin z) \right]_C \]
where C is the unit circle?
Exercise 13.2
Let C be the circle of radius 2 centered about the origin and oriented in the positive direction. Evaluate the following integrals:
1. \[ \int_C \frac{\sin z}{z^2 + 5}\, dz \]
2. \[ \int_C \frac{z}{z^2 + 1}\, dz \]
3. \[ \int_C \frac{z^2 + 1}{z}\, dz \]
Exercise 13.3
Let f(z) be analytic and bounded (i.e. |f(z)| < M) for |z| > R, but not necessarily analytic for $|z| \le R$. Let the points $\alpha$ and $\beta$ lie inside the circle |z| = R. Evaluate
\[ \int_C \frac{f(z)}{(z - \alpha)(z - \beta)}\, dz \]
where C is any closed contour outside |z| = R, containing the circle |z| = R. [Hint: consider the circle at infinity.] Now suppose that in addition f(z) is analytic everywhere. Deduce that $f(\alpha) = f(\beta)$.
Exercise 13.4
Using Rouché's theorem show that all the roots of the equation $p(z) = z^6 - 5z^2 + 10 = 0$ lie in the annulus 1 < |z| < 2.
Exercise 13.5
Evaluate as a function of t
\[ \omega = \frac{1}{i2\pi} \int_C \frac{e^{zt}}{z^2 (z^2 + a^2)}\, dz, \]
where C is any positively oriented contour surrounding the circle |z| = a.
13.5 Hints

Hint 13.1
Use the argument theorem.

Hint 13.2

Hint 13.3
To evaluate the integral, consider the circle at infinity.

Hint 13.4

Hint 13.5
13.6 Solutions

Solution 13.1
Let f(z) be analytic inside and on the contour C. Let f(z) be nonzero on the contour. The argument theorem states that
\[ \frac{1}{i2\pi} \int_C \frac{f'(z)}{f(z)}\, dz = N - P, \]
where N is the number of zeros and P is the number of poles, (counting multiplicities), of f(z) inside C. The theorem is aptly named, as
\begin{align*}
\frac{1}{i2\pi} \int_C \frac{f'(z)}{f(z)}\, dz &= \frac{1}{i2\pi} \left[ \log(f(z)) \right]_C \\
&= \frac{1}{i2\pi} \left[ \log|f(z)| + i \arg(f(z)) \right]_C \\
&= \frac{1}{2\pi} \left[ \arg(f(z)) \right]_C.
\end{align*}
Thus we could write the argument theorem as
\[ \frac{1}{i2\pi} \int_C \frac{f'(z)}{f(z)}\, dz = \frac{1}{2\pi} \left[ \arg(f(z)) \right]_C = N - P. \]
Since sin z has a single zero and no poles inside the unit circle, we have
\[ \frac{1}{2\pi} \left[ \arg(\sin(z)) \right]_C = 1 - 0 \]
\[ \left[ \arg(\sin(z)) \right]_C = 2\pi \]
Solution 13.2
1. Since the integrand $\frac{\sin z}{z^2 + 5}$ is analytic inside and on the contour, (the only singularities are at $z = \pm i\sqrt{5}$ and at infinity), the integral is zero by Cauchy's Theorem.
2. First we expand the integrand in partial fractions.
\[ \frac{z}{z^2 + 1} = \frac{a}{z - i} + \frac{b}{z + i} \]
\[ a = \left. \frac{z}{z + i} \right|_{z = i} = \frac{1}{2}, \qquad b = \left. \frac{z}{z - i} \right|_{z = -i} = \frac{1}{2} \]
Now we can do the integral with Cauchy's formula.
\begin{align*}
\int_C \frac{z}{z^2 + 1}\, dz &= \int_C \frac{1/2}{z - i}\, dz + \int_C \frac{1/2}{z + i}\, dz \\
&= \frac{1}{2} i2\pi + \frac{1}{2} i2\pi \\
&= i2\pi
\end{align*}
3.
\begin{align*}
\int_C \frac{z^2 + 1}{z}\, dz &= \int_C \left( z + \frac{1}{z} \right) dz \\
&= \int_C z\, dz + \int_C \frac{1}{z}\, dz \\
&= 0 + i2\pi \\
&= i2\pi
\end{align*}
Solution 13.3
Let C be the circle of radius r, (r > R), centered at the origin. We get an upper bound on the integral with the Maximum Modulus Integral Bound, (Result 12.1.1).
\[ \left| \int_C \frac{f(z)}{(z - \alpha)(z - \beta)}\, dz \right| \le 2\pi r \max_{|z| = r} \left| \frac{f(z)}{(z - \alpha)(z - \beta)} \right| \le 2\pi r \frac{M}{(r - |\alpha|)(r - |\beta|)} \]
By taking the limit as $r \to \infty$ we see that the modulus of the integral is bounded above by zero. Thus the integral vanishes.
Now we assume that f(z) is analytic and evaluate the integral with Cauchy's Integral Formula. (We assume that $\alpha \neq \beta$.)
\begin{align*}
\int_C \frac{f(z)}{(z - \alpha)(z - \beta)}\, dz &= 0 \\
\int_C \frac{f(z)}{(z - \alpha)(\alpha - \beta)}\, dz + \int_C \frac{f(z)}{(\beta - \alpha)(z - \beta)}\, dz &= 0 \\
i2\pi \frac{f(\alpha)}{\alpha - \beta} + i2\pi \frac{f(\beta)}{\beta - \alpha} &= 0 \\
f(\alpha) &= f(\beta)
\end{align*}
Solution 13.4
Consider the circle |z| = 2. On this circle:
\[ |z^6| = 64 \]
\[ |{-5z^2} + 10| \le |{-5z^2}| + |10| = 30 \]
Since $|{-5z^2} + 10| < |z^6|$ on |z| = 2, p(z) has the same number of roots as $z^6$ in |z| < 2. p(z) has 6 roots in |z| < 2.
Consider the circle |z| = 1. On this circle:
\[ |10| = 10 \]
\[ |z^6 - 5z^2| \le |z^6| + |{-5z^2}| = 6 \]
Since $|z^6 - 5z^2| < |10|$ on |z| = 1, p(z) has the same number of roots as 10 in |z| < 1. p(z) has no roots in |z| < 1.
On the unit circle,
\[ |p(z)| \ge |10| - |z^6| - |5z^2| = 4. \]
Thus p(z) has no roots on the unit circle.
We conclude that p(z) has exactly 6 roots in 1 < |z| < 2.
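The root counts can be confirmed numerically with the argument theorem: the total change of $\arg p(z)$ around a circle, divided by $2\pi$, counts the zeros inside. The sketch below (not part of the text; the sample resolution is an arbitrary choice) does this by tracking phase increments:

```python
import cmath

def zeros_inside(p, radius, n=40000):
    # Count zeros of p inside |z| = radius by accumulating the continuous
    # change of arg p(z) around the circle (argument theorem, no poles).
    total = 0.0
    prev = cmath.phase(p(radius + 0j))
    for k in range(1, n + 1):
        cur = cmath.phase(p(radius * cmath.exp(2j * cmath.pi * k / n)))
        d = cur - prev
        if d > cmath.pi:        # unwrap jumps across the branch cut of arg
            d -= 2 * cmath.pi
        elif d < -cmath.pi:
            d += 2 * cmath.pi
        total += d
        prev = cur
    return round(total / (2 * cmath.pi))

p = lambda z: z ** 6 - 5 * z ** 2 + 10
roots_in_two = zeros_inside(p, 2.0)    # expect 6
roots_in_one = zeros_inside(p, 1.0)    # expect 0
```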
Solution 13.5
We evaluate the integral with Cauchy's Integral Formula.
\begin{align*}
\omega &= \frac{1}{i2\pi} \int_C \frac{e^{zt}}{z^2 (z^2 + a^2)}\, dz \\
&= \frac{1}{i2\pi} \int_C \left( \frac{e^{zt}}{a^2 z^2} + \frac{i e^{zt}}{2a^3 (z - ia)} - \frac{i e^{zt}}{2a^3 (z + ia)} \right) dz \\
&= \left[ \frac{d}{dz} \frac{e^{zt}}{a^2} \right]_{z=0} + \frac{i e^{iat}}{2a^3} - \frac{i e^{-iat}}{2a^3} \\
&= \frac{t}{a^2} - \frac{\sin(at)}{a^3} \\
&= \frac{at - \sin(at)}{a^3}
\end{align*}
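The closed form can be checked directly against a numerical contour integral (a sketch, not from the text; the values of a, t, the contour radius, and the resolution are arbitrary choices):

```python
import cmath

def omega(t, a, radius=3.0, n=4000):
    # (1/(i 2 pi)) * contour integral of e^{z t} / (z^2 (z^2 + a^2)) over
    # a circle of the given radius enclosing |z| = a (trapezoid rule).
    dt = 2 * cmath.pi / n
    total = 0j
    for k in range(n):
        z = radius * cmath.exp(1j * k * dt)
        total += cmath.exp(z * t) / (z * z * (z * z + a * a)) * 1j * z * dt
    return total / (2j * cmath.pi)

a, t = 2.0, 1.5                        # arbitrary test values, radius > a
approx = omega(t, a)
exact = (a * t - cmath.sin(a * t)) / a ** 3
```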
Chapter 14
Series and Convergence

You are not thinking. You are merely being logical.
- Niels Bohr
14.1 Series of Constants

14.1.1 Definitions

Convergence of Sequences. The infinite sequence $\{a_n\}_{n=0}^{\infty}$, $a_0, a_1, a_2, \ldots$ is said to converge if
\[ \lim_{n \to \infty} a_n = a \]
for some constant a. If the limit does not exist, then the sequence diverges. Recall the definition of the limit in the above formula: For any $\epsilon > 0$ there exists an $N \in \mathbb{Z}$ such that $|a - a_n| < \epsilon$ for all n > N.

Example 14.1.1 The sequence $\{\sin(n)\}$ is divergent. The sequence is bounded above and below, but boundedness does not imply convergence.
Cauchy Convergence Criterion. Note that there is something a little fishy about the above definition. We should be able to say if a sequence converges without first finding the constant to which it converges. We fix this problem with the Cauchy convergence criterion. A sequence $\{a_n\}$ converges if and only if for any $\epsilon > 0$ there exists an N such that $|a_n - a_m| < \epsilon$ for all n, m > N. The Cauchy convergence criterion is equivalent to the definition we had before. For some problems it is handier to use. Now we don't need to know the limit of a sequence to show that it converges.

Convergence of Series. The series $\sum_{n=1}^{\infty} a_n$ converges if the sequence of partial sums, $S_N = \sum_{n=0}^{N-1} a_n$, converges. That is,
\[ \lim_{N \to \infty} S_N = \lim_{N \to \infty} \sum_{n=0}^{N-1} a_n = \text{constant}. \]
If the limit does not exist, then the series diverges. A necessary condition for the convergence of a series is that
\[ \lim_{n \to \infty} a_n = 0. \]
Otherwise the sequence of partial sums would not converge.
Example 14.1.2 The series $\sum_{n=0}^{\infty} (-1)^n = 1 - 1 + 1 - 1 + \cdots$ is divergent because the sequence of partial sums, $\{S_N\} = 1, 0, 1, 0, 1, 0, \ldots$ is divergent.
Tail of a Series. An infinite series, $\sum_{n=0}^{\infty} a_n$, converges or diverges with its tail. That is, for fixed N, $\sum_{n=0}^{\infty} a_n$ converges if and only if $\sum_{n=N}^{\infty} a_n$ converges. This is because the sum of the first N terms of a series is just a number. Adding or subtracting a number to a series does not change its convergence.

Absolute Convergence. The series $\sum_{n=0}^{\infty} a_n$ converges absolutely if $\sum_{n=0}^{\infty} |a_n|$ converges. Absolute convergence implies convergence. If a series is convergent, but not absolutely convergent, then it is said to be conditionally convergent.
The terms of an absolutely convergent series can be rearranged in any order and the series will still converge
to the same sum. This is not true of conditionally convergent series. Rearranging the terms of a conditionally
convergent series may change the sum. In fact, the terms of a conditionally convergent series may be rearranged
to obtain any desired sum.
Example 14.1.3 The alternating harmonic series,
\[ 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots, \]
converges, (Exercise 14.2). Since
\[ 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots \]
diverges, (Exercise 14.3), the alternating harmonic series is not absolutely convergent. Thus the terms can be rearranged to obtain any sum, (Exercise 14.4).
Finite Series and Residuals. Consider the series $f(z) = \sum_{n=0}^{\infty} a_n(z)$. We will denote the sum of the first N terms in the series as
\[ S_N(z) = \sum_{n=0}^{N-1} a_n(z). \]
We will denote the residual after N terms as
\[ R_N(z) \equiv f(z) - S_N(z) = \sum_{n=N}^{\infty} a_n(z). \]
14.1.2 Special Series

Geometric Series. One of the most important series in mathematics is the geometric series,¹
\[ \sum_{n=0}^{\infty} z^n = 1 + z + z^2 + z^3 + \cdots. \]
The series clearly diverges for $|z| \ge 1$ since the terms do not vanish as $n \to \infty$. Consider the partial sum, $S_N(z) \equiv \sum_{n=0}^{N-1} z^n$, for |z| < 1.
\begin{align*}
(1 - z) S_N(z) &= (1 - z) \sum_{n=0}^{N-1} z^n \\
&= \sum_{n=0}^{N-1} z^n - \sum_{n=1}^{N} z^n \\
&= (1 + z + \cdots + z^{N-1}) - (z + z^2 + \cdots + z^N) \\
&= 1 - z^N
\end{align*}
\[ \sum_{n=0}^{N-1} z^n = \frac{1 - z^N}{1 - z} \to \frac{1}{1 - z} \quad \text{as } N \to \infty. \]
The limit of the partial sums is $\frac{1}{1 - z}$.
\[ \sum_{n=0}^{\infty} z^n = \frac{1}{1 - z} \quad \text{for } |z| < 1 \]

¹ The series is so named because the terms grow or decay geometrically. Each term in the series is a constant times the previous term.
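A quick numerical illustration of the limit of the partial sums (a sketch, not from the text; the sample point z with |z| < 1 and the truncation length are arbitrary choices):

```python
# Partial sums of the geometric series approach 1/(1 - z) for |z| < 1.
z = 0.5 + 0.3j                          # arbitrary point with |z| < 1
partial = sum(z ** n for n in range(200))
limit = 1 / (1 - z)
```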
Harmonic Series. Another important series is the harmonic series,
\[ \sum_{n=1}^{\infty} \frac{1}{n^\alpha} = 1 + \frac{1}{2^\alpha} + \frac{1}{3^\alpha} + \cdots. \]
The series is absolutely convergent for $\Re(\alpha) > 1$ and absolutely divergent for $\Re(\alpha) \le 1$, (see Exercise 14.6). The Riemann zeta function $\zeta(\alpha)$ is defined as the sum of the harmonic series.
\[ \zeta(\alpha) = \sum_{n=1}^{\infty} \frac{1}{n^\alpha} \]
The alternating harmonic series is
\[ \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^\alpha} = 1 - \frac{1}{2^\alpha} + \frac{1}{3^\alpha} - \frac{1}{4^\alpha} + \cdots. \]
Again, the series is absolutely convergent for $\Re(\alpha) > 1$ and absolutely divergent for $\Re(\alpha) \le 1$.
14.1.3 Convergence Tests

The Comparison Test. The series of positive terms $\sum_{n=0}^{\infty} a_n$ converges if there exists a convergent series $\sum_{n=0}^{\infty} b_n$ such that $a_n \le b_n$ for all n. Similarly, $\sum_{n=0}^{\infty} a_n$ diverges if there exists a divergent series $\sum_{n=0}^{\infty} b_n$ such that $a_n \ge b_n$ for all n.
Example 14.1.4 Consider the series
\[ \sum_{n=1}^{\infty} \frac{1}{2^{n^2}}. \]
We can rewrite this as
\[ \sum_{\substack{n=1 \\ n \text{ a perfect square}}}^{\infty} \frac{1}{2^n}. \]
Then by comparing this series to the geometric series,
\[ \sum_{n=1}^{\infty} \frac{1}{2^n} = 1, \]
we see that it is convergent.
Integral Test. If the coefficients $a_n$ of a series $\sum_{n=0}^{\infty} a_n$ are monotonically decreasing and can be extended to a monotonically decreasing function of the continuous variable x,
\[ a(x) = a_n \quad \text{for } x \in \mathbb{Z}_{0+}, \]
then the series converges or diverges with the integral
\[ \int_0^{\infty} a(x)\, dx. \]
Example 14.1.5 Consider the series $\sum_{n=1}^{\infty} \frac{1}{n^2}$. Define the functions $s_l(x)$ and $s_r(x)$, (left and right),
\[ s_l(x) = \frac{1}{(\lceil x \rceil)^2}, \qquad s_r(x) = \frac{1}{(\lfloor x \rfloor)^2}. \]
Recall that $\lfloor x \rfloor$ is the greatest integer function, the greatest integer which is less than or equal to x. $\lceil x \rceil$ is the least integer function, the least integer greater than or equal to x. We can express the series as integrals of these functions.
\[ \sum_{n=1}^{\infty} \frac{1}{n^2} = \int_0^{\infty} s_l(x)\, dx = \int_1^{\infty} s_r(x)\, dx \]
In Figure 14.1 these functions are plotted against $y = 1/x^2$. From the graph, it is clear that we can obtain a lower and upper bound for the series.
\[ \int_1^{\infty} \frac{1}{x^2}\, dx \le \sum_{n=1}^{\infty} \frac{1}{n^2} \le 1 + \int_1^{\infty} \frac{1}{x^2}\, dx \]
\[ 1 \le \sum_{n=1}^{\infty} \frac{1}{n^2} \le 2 \]

Figure 14.1: Upper and Lower bounds to $\sum_{n=1}^{\infty} 1/n^2$.

In general, we have
\[ \int_m^{\infty} a(x)\, dx \le \sum_{n=m}^{\infty} a_n \le a_m + \int_m^{\infty} a(x)\, dx. \]
Thus we see that the sum converges or diverges with the integral.
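The integral-test bounds above are easy to observe numerically. This sketch (not from the text; the truncation point N is an arbitrary choice) brackets $\sum 1/n^2$ between a partial sum plus integral tail bounds and confirms $1 \le \sum 1/n^2 \le 2$:

```python
# Bracket sum 1/n^2 using a partial sum plus the integral bounds on the
# tail: integral from N+1 of dx/x^2 <= tail <= integral from N of dx/x^2.
N = 10000
partial = sum(1.0 / (n * n) for n in range(1, N + 1))
lower = partial + 1.0 / (N + 1)
upper = partial + 1.0 / N
```

The true value, $\pi^2/6 \approx 1.6449$, lies between the two bounds.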
The Ratio Test. The series $\sum_{n=0}^{\infty} a_n$ converges absolutely if
\[ \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1. \]
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
If the limit is greater than unity, then the terms are eventually increasing with n. Since the terms do not vanish, the sum is divergent. If the limit is less than unity, then there exists some N such that
\[ \left| \frac{a_{n+1}}{a_n} \right| \le r < 1 \quad \text{for all } n \ge N. \]
From this we can show that $\sum_{n=0}^{\infty} a_n$ is absolutely convergent by comparing it to the geometric series.
\[ \sum_{n=N}^{\infty} |a_n| \le |a_N| \sum_{n=0}^{\infty} r^n = |a_N| \frac{1}{1 - r} \]
Example 14.1.6 Consider the series,
\[ \sum_{n=1}^{\infty} \frac{e^n}{n!}. \]
We apply the ratio test to test for absolute convergence.
\begin{align*}
\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| &= \lim_{n \to \infty} \frac{e^{n+1}\, n!}{e^n\, (n+1)!} \\
&= \lim_{n \to \infty} \frac{e}{n + 1} \\
&= 0
\end{align*}
The series is absolutely convergent.
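A numerical look at this example (a sketch, not from the text; the truncation length is arbitrary, and the closed form $e^e - 1$ is a known consequence of the exponential series, not a claim of the text):

```python
import math

# Terms of the example series e^n / n!; the ratios a_{n+1}/a_n = e/(n+1)
# tend to zero, so the partial sums settle down quickly.
terms = [math.e ** n / math.factorial(n) for n in range(1, 60)]
ratios = [terms[k + 1] / terms[k] for k in range(len(terms) - 1)]
total = sum(terms)
limit = math.e ** math.e - 1   # from the exponential series evaluated at e
```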
Example 14.1.7 Consider the series,
\[ \sum_{n=1}^{\infty} \frac{1}{n^2}, \]
which we know to be absolutely convergent. We apply the ratio test.
\begin{align*}
\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| &= \lim_{n \to \infty} \frac{1/(n+1)^2}{1/n^2} \\
&= \lim_{n \to \infty} \frac{n^2}{n^2 + 2n + 1} \\
&= \lim_{n \to \infty} \frac{1}{1 + 2/n + 1/n^2} \\
&= 1
\end{align*}
The test fails to predict the absolute convergence of the series.
The Root Test. The series $\sum_{n=0}^{\infty} a_n$ converges absolutely if
\[ \lim_{n \to \infty} |a_n|^{1/n} < 1. \]
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
If the limit is greater than unity, then the terms in the series do not vanish as $n \to \infty$. This implies that the sum does not converge. If the limit is less than unity, then there exists some N such that
\[ |a_n|^{1/n} \le r < 1 \quad \text{for all } n \ge N. \]
We bound the tail of the series of $|a_n|$.
\[ \sum_{n=N}^{\infty} |a_n| = \sum_{n=N}^{\infty} (|a_n|^{1/n})^n \le \sum_{n=N}^{\infty} r^n = \frac{r^N}{1 - r} \]
Thus $\sum_{n=0}^{\infty} a_n$ is absolutely convergent.
Example 14.1.8 Consider the series
\[ \sum_{n=0}^{\infty} n^a b^n, \]
where a and b are real constants. We use the root test to check for absolute convergence.
\begin{align*}
\lim_{n \to \infty} |n^a b^n|^{1/n} &< 1 \\
|b| \lim_{n \to \infty} n^{a/n} &< 1 \\
|b| \exp\left( \lim_{n \to \infty} \frac{a \log n}{n} \right) &< 1 \\
|b|\, e^0 &< 1 \\
|b| &< 1
\end{align*}
Thus we see that the series converges absolutely for |b| < 1. Note that the value of a does not affect the absolute convergence.
Example 14.1.9 Consider the absolutely convergent series,
\[ \sum_{n=1}^{\infty} \frac{1}{n^2}. \]
We apply the root test.
\begin{align*}
\lim_{n \to \infty} |a_n|^{1/n} &= \lim_{n \to \infty} \left| \frac{1}{n^2} \right|^{1/n} \\
&= \lim_{n \to \infty} n^{-2/n} \\
&= \lim_{n \to \infty} e^{-\frac{2}{n} \log n} \\
&= e^0 \\
&= 1
\end{align*}
It fails to predict the convergence of the series.
14.2 Uniform Convergence

Continuous Functions. A function f(z) is continuous in a closed domain if, given any $\epsilon > 0$, there exists a $\delta > 0$ such that $|f(z) - f(\zeta)| < \epsilon$ for all $|z - \zeta| < \delta$ in the domain.
An equivalent definition is that f(z) is continuous in a closed domain if
\[ \lim_{\zeta \to z} f(\zeta) = f(z) \]
for all z in the domain.
Convergence. Consider a series in which the terms are functions of z, $\sum_{n=0}^{\infty} a_n(z)$. The series is convergent in a domain if the series converges for each point z in the domain. We can then define the function $f(z) = \sum_{n=0}^{\infty} a_n(z)$. We can state the convergence criterion as: For any given $\epsilon > 0$ there exists a function N(z) such that
\[ |f(z) - S_{N(z)}(z)| = \left| f(z) - \sum_{n=0}^{N(z)-1} a_n(z) \right| < \epsilon \]
for all z in the domain. Note that the rate of convergence, i.e. the number of terms, N(z), required for the absolute error to be less than $\epsilon$, is a function of z.

Uniform Convergence. Consider a series $\sum_{n=0}^{\infty} a_n(z)$ that is convergent in some domain. If the rate of convergence is independent of z then the series is said to be uniformly convergent. Stating this a little more mathematically, the series is uniformly convergent in the domain if for any given $\epsilon > 0$ there exists an N, independent of z, such that
\[ |f(z) - S_N(z)| = \left| f(z) - \sum_{n=0}^{N-1} a_n(z) \right| < \epsilon \]
for all z in the domain.
14.2.1 Tests for Uniform Convergence

Weierstrass M-test. The Weierstrass M-test is useful in determining if a series is uniformly convergent. The series $\sum_{n=0}^{\infty} a_n(z)$ is uniformly and absolutely convergent in a domain if there exists a convergent series of positive terms $\sum_{n=0}^{\infty} M_n$ such that $|a_n(z)| \le M_n$ for all z in the domain. This condition first implies that the series is absolutely convergent for all z in the domain. The condition $|a_n(z)| \le M_n$ also ensures that the rate of convergence is independent of z, which is the criterion for uniform convergence.
Note that absolute convergence and uniform convergence are independent. A series of functions may be absolutely convergent without being uniformly convergent or vice versa. The Weierstrass M-test is a sufficient but not a necessary condition for uniform convergence. The Weierstrass M-test can succeed only if the series is uniformly and absolutely convergent.
Example 14.2.1 The series
\[ f(x) = \sum_{n=1}^\infty \frac{\sin x}{n(n+1)} \]
is uniformly and absolutely convergent for all real $x$ because $\left| \frac{\sin x}{n(n+1)} \right| < \frac{1}{n^2}$ and $\sum_{n=1}^\infty \frac{1}{n^2}$ converges.
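A quick numerical sanity check of Example 14.2.1 (a sketch, not from the text; the grid of $x$ values and the truncation lengths are arbitrary choices): the series $\sum 1/(n(n+1))$ telescopes to 1, so the function is $\sin x$, and the worst-case truncation error over any set of $x$ values is bounded by the tail $1/(N+1)$, independent of $x$ — which is exactly uniform convergence.

```python
import math

def partial_sum(x, N):
    """Partial sum of sum_{n=1}^{N} sin(x) / (n (n + 1))."""
    return sum(math.sin(x) / (n * (n + 1)) for n in range(1, N + 1))

# The series telescopes: sum_{n>=1} 1/(n(n+1)) = 1, so f(x) = sin(x).
xs = [k * 0.1 for k in range(-60, 61)]
for N in (10, 100, 1000):
    worst = max(abs(math.sin(x) - partial_sum(x, N)) for x in xs)
    # The tail sum_{n>N} 1/(n(n+1)) = 1/(N+1) bounds the error uniformly in x.
    assert worst <= 1.0 / (N + 1)
    print(N, worst)
```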
Dirichlet Test. Consider a sequence of monotone decreasing, positive constants $c_n$ with limit zero. If all the partial sums of $a_n(z)$ are bounded in some closed domain, that is
\[ \left| \sum_{n=1}^{N} a_n(z) \right| < \text{constant} \]
for all $N$, then $\sum_{n=1}^\infty c_n a_n(z)$ is uniformly convergent in that closed domain. Note that the Dirichlet test does not imply that the series is absolutely convergent.
Example 14.2.2 Consider the series,
\[ \sum_{n=1}^\infty \frac{\sin(nx)}{n}. \]
We cannot use the Weierstrass M-test to determine if the series is uniformly convergent on an interval. While it is easy to bound the terms with $|\sin(nx)/n| \leq 1/n$, the sum $\sum_{n=1}^\infty \frac{1}{n}$ does not converge. Thus we will try the Dirichlet test. Consider the sum $\sum_{n=1}^{N-1} \sin(nx)$. This sum can be evaluated in closed form. (See Exercise 14.7.)
\[ \sum_{n=1}^{N-1} \sin(nx) = \begin{cases} 0 & \text{for } x = 2\pi k \\ \frac{\cos(x/2) - \cos((N-1/2)x)}{2\sin(x/2)} & \text{for } x \neq 2\pi k \end{cases} \]
The partial sums have infinite discontinuities at $x = 2\pi k$, $k \in \mathbb{Z}$. The partial sums are bounded on any closed interval that does not contain an integer multiple of $2\pi$. By the Dirichlet test, the sum $\sum_{n=1}^\infty \frac{\sin(nx)}{n}$ is uniformly convergent on any such closed interval. The series may not be uniformly convergent in neighborhoods of $x = 2\pi k$.
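The closed form for $\sum_{n=1}^{N-1}\sin(nx)$ used in Example 14.2.2 is easy to check numerically (a sketch; the sample points and truncation lengths are arbitrary choices):

```python
import math

def sin_sum_direct(x, N):
    """Directly evaluate sum_{n=1}^{N-1} sin(n x)."""
    return sum(math.sin(n * x) for n in range(1, N))

def sin_sum_closed(x, N):
    """Closed form from Example 14.2.2 (valid for x not a multiple of 2 pi)."""
    return (math.cos(x / 2) - math.cos((N - 0.5) * x)) / (2 * math.sin(x / 2))

for x in (0.3, 1.7, 2.9):
    for N in (5, 50, 500):
        assert abs(sin_sum_direct(x, N) - sin_sum_closed(x, N)) < 1e-9
```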
14.2.2 Uniform Convergence and Continuous Functions.
Consider a series $f(z) = \sum_{n=1}^\infty a_n(z)$ that is uniformly convergent in some domain and whose terms $a_n(z)$ are continuous functions. Since the series is uniformly convergent, for any given $\epsilon > 0$ there exists an $N$ such that $|R_N| < \epsilon$ for all $z$ in the domain.
Since the finite sum $S_N$ is continuous, for that $\epsilon$ there exists a $\delta > 0$ such that $|S_N(z) - S_N(\zeta)| < \epsilon$ for all $\zeta$ in the domain satisfying $|z - \zeta| < \delta$.
Combining these two results,
\begin{align*}
|f(z) - f(\zeta)| &= |S_N(z) + R_N(z) - S_N(\zeta) - R_N(\zeta)| \\
&\leq |S_N(z) - S_N(\zeta)| + |R_N(z)| + |R_N(\zeta)| \\
&< 3\epsilon \quad \text{for } |z - \zeta| < \delta.
\end{align*}
Thus $f(z)$ is continuous.
Result 14.2.1 A uniformly convergent series of continuous terms represents a continuous
function.
Example 14.2.3 Again consider $\sum_{n=1}^\infty \frac{\sin(nx)}{n}$. In Example 14.2.2 we showed that the convergence is uniform in any closed interval that does not contain an integer multiple of $2\pi$. In Figure 14.2 is a plot of the first 10 and then 50 terms in the series and finally the function to which the series converges. We see that the function has jump discontinuities at $x = 2\pi k$ and is continuous on any closed interval not containing one of those points.
Figure 14.2: Ten, Fifty and all the Terms of $\sum_{n=1}^\infty \frac{\sin(nx)}{n}$.
14.3 Uniformly Convergent Power Series
Power Series. Power series are series of the form
\[ \sum_{n=0}^\infty a_n (z - z_0)^n. \]
Domain of Convergence of a Power Series. Consider the series $\sum_{n=0}^\infty a_n z^n$. Let the series converge at some point $z_0$. Then $|a_n z_0^n|$ is bounded by some constant $A$ for all $n$, so
\[ |a_n z^n| = |a_n z_0^n| \left| \frac{z}{z_0} \right|^n < A \left| \frac{z}{z_0} \right|^n. \]
This comparison test shows that the series converges absolutely for all $z$ satisfying $|z| < |z_0|$.
Suppose that the series diverges at some point $z_1$. Then the series could not converge for any $|z| > |z_1|$ since this would imply convergence at $z_1$. Thus there exists some circle in the $z$ plane such that the power series converges absolutely inside the circle and diverges outside the circle.
Result 14.3.1 The domain of convergence of a power series is a circle in the complex
plane.
Radius of Convergence of Power Series. Consider a power series
\[ f(z) = \sum_{n=0}^\infty a_n z^n. \]
Applying the ratio test, we see that the series converges if
\begin{align*}
\lim_{n\to\infty} \frac{|a_{n+1} z^{n+1}|}{|a_n z^n|} &< 1 \\
\lim_{n\to\infty} \frac{|a_{n+1}|}{|a_n|} \, |z| &< 1 \\
|z| &< \lim_{n\to\infty} \frac{|a_n|}{|a_{n+1}|}
\end{align*}
Result 14.3.2 The radius of convergence of the power series
\[ f(z) = \sum_{n=0}^\infty a_n z^n \]
is
\[ R = \lim_{n\to\infty} \frac{|a_n|}{|a_{n+1}|} \]
when the limit exists.
Result 14.3.3 Cauchy-Hadamard formula. The radius of convergence of the power series
\[ \sum_{n=0}^\infty a_n z^n \]
is
\[ R = \frac{1}{\limsup_{n\to\infty} \sqrt[n]{|a_n|}}. \]
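Both formulas lend themselves to a numerical illustration (a sketch, not from the text; the cutoff $n = 1000$ is an arbitrary choice) for the series $\sum n z^n$, whose radius of convergence is 1:

```python
# Estimate the radius of convergence of sum n z^n two ways:
# the ratio formula R = lim |a_n| / |a_{n+1}| (Result 14.3.2) and
# the Cauchy-Hadamard root formula R = 1 / limsup |a_n|^(1/n) (Result 14.3.3).
n = 1000
a = lambda k: k                         # coefficients a_n = n
ratio_estimate = a(n) / a(n + 1)        # n / (n + 1) -> 1
root_estimate = 1.0 / (a(n) ** (1.0 / n))   # 1 / n^(1/n) -> 1
print(ratio_estimate, root_estimate)    # both approach 1
```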
Absolute Convergence of Power Series. Consider a power series
\[ f(z) = \sum_{n=0}^\infty a_n z^n \]
that converges for $z = z_0$. Let $M$ be the value of the greatest term, $|a_n z_0^n|$. Consider any point $z$ such that $|z| < |z_0|$. We can bound the residual of $\sum_{n=0}^\infty |a_n z^n|$,
\begin{align*}
R_N(z) &= \sum_{n=N}^\infty |a_n z^n| \\
&= \sum_{n=N}^\infty \left| \frac{a_n z^n}{a_n z_0^n} \right| |a_n z_0^n| \\
&\leq M \sum_{n=N}^\infty \left| \frac{z}{z_0} \right|^n
\end{align*}
Since $|z/z_0| < 1$, this is a convergent geometric series.
\[ = M \left| \frac{z}{z_0} \right|^N \frac{1}{1 - |z/z_0|} \to 0 \text{ as } N \to \infty \]
Thus the power series is absolutely convergent for $|z| < |z_0|$.
Result 14.3.4 If the power series $\sum_{n=0}^\infty a_n z^n$ converges for $z = z_0$, then the series converges absolutely for $|z| < |z_0|$.
Example 14.3.1 Find the radii of convergence of
\[ 1) \ \sum_{n=1}^\infty n z^n, \quad 2) \ \sum_{n=1}^\infty n! \, z^n, \quad 3) \ \sum_{n=1}^\infty n! \, z^{n!} \]
1. Applying the formula for the radius of convergence,
\[ R = \lim_{n\to\infty} \left| \frac{a_n}{a_{n+1}} \right| = \lim_{n\to\infty} \frac{n}{n+1} = 1 \]
2. Applying the ratio test to the second series,
\[ R = \lim_{n\to\infty} \left| \frac{n!}{(n+1)!} \right| = \lim_{n\to\infty} \frac{1}{n+1} = 0 \]
Thus we see that the second series has a vanishing radius of convergence.
3. The third series converges when
\begin{align*}
\lim_{n\to\infty} \left| \frac{(n+1)! \, z^{(n+1)!}}{n! \, z^{n!}} \right| &< 1 \\
\lim_{n\to\infty} (n+1) |z|^{(n+1)! - n!} &< 1 \\
\lim_{n\to\infty} (n+1) |z|^{n \cdot n!} &< 1 \\
\lim_{n\to\infty} \left( \log(n+1) + n \cdot n! \, \log|z| \right) &< 0 \\
\log|z| &< -\lim_{n\to\infty} \frac{\log(n+1)}{n \cdot n!} \\
\log|z| &< 0 \\
|z| &< 1
\end{align*}
Thus the radius of convergence for the third series is 1.
Alternatively we could determine the radius of convergence of the third series with the comparison test. We know that
\[ \sum_{n=1}^\infty \left| n! \, z^{n!} \right| \leq \sum_{n=1}^\infty |n z^n| \]
$\sum_{n=1}^\infty n z^n$ has a radius of convergence of 1. Thus the third sum must have a radius of convergence of at least 1. Note that if $|z| > 1$ then the terms in the third series do not vanish as $n \to \infty$. Thus the series must diverge for all $|z| > 1$. We see that the radius of convergence is 1.
Uniform Convergence of Power Series. Consider a power series $\sum_{n=0}^\infty a_n z^n$ that converges in the disk $|z| < r_0$. The sum converges absolutely for $z$ in the closed disk $|z| \leq r < r_0$. Since $|a_n z^n| \leq |a_n r^n|$ and $\sum_{n=0}^\infty |a_n r^n|$ converges, the power series is uniformly convergent in $|z| \leq r < r_0$.
Result 14.3.5 If the power series $\sum_{n=0}^\infty a_n z^n$ converges for $|z| < r_0$ then the series converges uniformly for $|z| \leq r < r_0$.
Example 14.3.2 Convergence and Uniform Convergence. Consider the series
\[ \log(1-z) = -\sum_{n=1}^\infty \frac{z^n}{n}. \]
This series converges for $|z| \leq 1$, $z \neq 1$. Is the series uniformly convergent in this domain? The residual after $N$ terms, $R_N$, is
\[ R_N(z) = \sum_{n=N+1}^\infty \frac{z^n}{n}. \]
We can get a lower bound on the absolute value of the residual for real, positive $x$.
\begin{align*}
|R_N(x)| &= \sum_{n=N+1}^\infty \frac{x^n}{n} \\
&\geq \int_{N+1}^\infty \frac{x^\alpha}{\alpha} \, d\alpha \\
&= -\operatorname{Ei}\left( (N+1) \log x \right)
\end{align*}
The exponential integral function, $\operatorname{Ei}(z)$, is defined
\[ \operatorname{Ei}(z) = -\int_{-z}^\infty \frac{e^{-t}}{t} \, dt. \]
The exponential integral function is plotted in Figure 14.3. Since $\operatorname{Ei}(z)$ diverges as $z \to 0$, by choosing $x$ sufficiently close to 1 the residual can be made arbitrarily large. Thus this series is not uniformly convergent in the domain $|z| \leq 1$, $z \neq 1$. The series is uniformly convergent for $|z| \leq r < 1$.
Figure 14.3: The Exponential Integral Function.
Analyticity. Recall that a sufficient condition for the analyticity of a function $f(z)$ in a domain is that $\oint_C f(z)\,dz = 0$ for all simple, closed contours in the domain.
Consider a power series $f(z) = \sum_{n=0}^\infty a_n z^n$ that is uniformly convergent in $|z| \leq r$. If $C$ is any simple, closed contour in the domain then $\oint_C f(z)\,dz$ exists. Expanding $f(z)$ into a finite series and a residual,
\[ \oint_C f(z)\,dz = \oint_C \left[ S_N(z) + R_N(z) \right] dz. \]
Since the series is uniformly convergent, for any given $\epsilon > 0$ there exists an $N_\epsilon$ such that $|R_{N_\epsilon}| < \epsilon$ for all $z$ in $|z| \leq r$. If $L$ is the length of the contour $C$ then
\[ \left| \oint_C R_{N_\epsilon}(z)\,dz \right| \leq \epsilon L \to 0 \text{ as } N_\epsilon \to \infty. \]
\begin{align*}
\oint_C f(z)\,dz &= \lim_{N\to\infty} \oint_C \left[ \sum_{n=0}^{N-1} a_n z^n + R_N(z) \right] dz \\
&= \oint_C \sum_{n=0}^\infty a_n z^n \, dz \\
&= \sum_{n=0}^\infty a_n \oint_C z^n \, dz \\
&= 0.
\end{align*}
Thus $f(z)$ is analytic for $|z| < r$.
Result 14.3.6 A power series is analytic in its domain of uniform convergence.
14.4 Integration and Differentiation of Power Series
Consider a power series $f(z) = \sum_{n=0}^\infty a_n z^n$ that is convergent in the disk $|z| < r_0$. Let $C$ be any contour of finite length $L$ lying entirely within the closed domain $|z| \leq r < r_0$. The integral of $f(z)$ along $C$ is
\[ \int_C f(z)\,dz = \int_C \left[ S_N(z) + R_N(z) \right] dz. \]
Since the series is uniformly convergent in the closed disk, for any given $\epsilon > 0$, there exists an $N_\epsilon$ such that $|R_{N_\epsilon}(z)| < \epsilon$ for all $|z| \leq r$.
Bounding the absolute value of the integral of $R_{N_\epsilon}(z)$,
\[ \left| \int_C R_{N_\epsilon}(z)\,dz \right| \leq \int_C |R_{N_\epsilon}(z)|\,dz < \epsilon L \to 0 \text{ as } N_\epsilon \to \infty \]
Thus
\begin{align*}
\int_C f(z)\,dz &= \lim_{N\to\infty} \int_C \sum_{n=0}^{N} a_n z^n \, dz \\
&= \lim_{N\to\infty} \sum_{n=0}^{N} a_n \int_C z^n \, dz \\
&= \sum_{n=0}^\infty a_n \int_C z^n \, dz
\end{align*}
Result 14.4.1 If $C$ is a contour lying in the domain of uniform convergence of the power series $\sum_{n=0}^\infty a_n z^n$ then
\[ \int_C \sum_{n=0}^\infty a_n z^n \, dz = \sum_{n=0}^\infty a_n \int_C z^n \, dz. \]
In the domain of uniform convergence of a series we can interchange the order of summation and a limit process. That is,
\[ \lim_{z \to z_0} \sum_{n=0}^\infty a_n(z) = \sum_{n=0}^\infty \lim_{z \to z_0} a_n(z). \]
We can do this because the rate of convergence does not depend on $z$. Since differentiation is a limit process,
\[ \frac{d}{dz} f(z) = \lim_{h \to 0} \frac{f(z+h) - f(z)}{h}, \]
we would expect that we could differentiate a uniformly convergent series.
Since we showed that a uniformly convergent power series is equal to an analytic function, we can differentiate a power series in its domain of uniform convergence.
Result 14.4.2 In the domain of uniform convergence of a power series
\[ \frac{d}{dz} \sum_{n=0}^\infty a_n z^n = \sum_{n=0}^\infty (n+1) a_{n+1} z^n. \]
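The coefficient shift in Result 14.4.2 is easy to check in code (a sketch, not from the text; the truncation length is an arbitrary choice):

```python
def differentiate(coeffs):
    """Term-by-term derivative of sum a_n z^n: new a_n = (n + 1) a_{n+1}."""
    return [(n + 1) * coeffs[n + 1] for n in range(len(coeffs) - 1)]

# Geometric series 1/(1 - z) has a_n = 1. Its derivative 1/(1 - z)^2
# has coefficients (n + 1), matching Exercise 14.8.
geometric = [1] * 10
print(differentiate(geometric))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```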
Example 14.4.1 Differentiating a Series. Consider the series from Example 14.3.2,
\[ \log(1-z) = -\sum_{n=1}^\infty \frac{z^n}{n}. \]
Differentiating this series yields
\begin{align*}
-\frac{1}{1-z} &= -\sum_{n=1}^\infty z^{n-1} \\
\frac{1}{1-z} &= \sum_{n=0}^\infty z^n.
\end{align*}
We recognize this as the geometric series, which is convergent for $|z| < 1$ and uniformly convergent for $|z| \leq r < 1$. Note that the domain of convergence is different than the series for $\log(1-z)$. The geometric series does not converge for $|z| = 1$, $z \neq 1$. However, the domain of uniform convergence has remained the same.
14.5 Taylor Series
Result 14.5.1 Taylor's Theorem. Let $f(z)$ be a function that is single-valued and analytic in $|z - z_0| < R$. For all $z$ in this open disk, $f(z)$ has the convergent Taylor series
\[ f(z) = \sum_{n=0}^\infty \frac{f^{(n)}(z_0)}{n!} (z - z_0)^n. \tag{14.1} \]
We can also write this as
\[ f(z) = \sum_{n=0}^\infty a_n (z - z_0)^n, \quad a_n = \frac{f^{(n)}(z_0)}{n!} = \frac{1}{i2\pi} \oint_C \frac{f(z)}{(z - z_0)^{n+1}} \, dz, \tag{14.2} \]
where $C$ is a simple, positive, closed contour in $0 < |z - z_0| < R$ that goes once around the point $z_0$.
Proof of Taylor's Theorem. Let's see why Result 14.5.1 is true. Consider a function $f(z)$ that is analytic in $|z| < R$. (Considering $z_0 \neq 0$ is only trivially more general as we can introduce the change of variables $\zeta = z - z_0$.) According to Cauchy's Integral Formula, (Result ??),
\[ f(z) = \frac{1}{i2\pi} \oint_C \frac{f(\zeta)}{\zeta - z} \, d\zeta, \tag{14.3} \]
where $C$ is a positive, simple, closed contour in $0 < |\zeta - z| < R$ that goes once around $z$. We take this contour to be the circle about the origin of radius $r$ where $|z| < r < R$. (See Figure 14.4.)
Figure 14.4: Graph of Domain of Convergence and Contour of Integration.
We expand $\frac{1}{\zeta - z}$ in a geometric series,
\begin{align*}
\frac{1}{\zeta - z} &= \frac{1/\zeta}{1 - z/\zeta} \\
&= \frac{1}{\zeta} \sum_{n=0}^\infty \left( \frac{z}{\zeta} \right)^n, \quad \text{for } |z| < |\zeta| \\
&= \sum_{n=0}^\infty \frac{z^n}{\zeta^{n+1}}, \quad \text{for } |z| < |\zeta|
\end{align*}
We substitute this series into Equation 14.3.
\[ f(z) = \frac{1}{i2\pi} \oint_C \left( \sum_{n=0}^\infty \frac{f(\zeta) z^n}{\zeta^{n+1}} \right) d\zeta \]
The series converges uniformly so we can interchange integration and summation.
\[ = \sum_{n=0}^\infty \frac{1}{i2\pi} \oint_C \frac{f(\zeta)}{\zeta^{n+1}} \, d\zeta \; z^n \]
Now we have derived Equation 14.2. To obtain Equation 14.1, we apply Cauchy's Integral Formula.
\[ = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} z^n \]
There is a table of some commonly encountered Taylor series in Appendix H.
Example 14.5.1 Consider the Taylor series expansion of $1/(1-z)$ about $z = 0$. Previously, we showed that this function is the sum of the geometric series $\sum_{n=0}^\infty z^n$ and we used the ratio test to show that the series converged absolutely for $|z| < 1$. Now we find the series using Taylor's theorem. Since the nearest singularity of the function is at $z = 1$, the radius of convergence of the series is 1. The coefficients in the series are
\begin{align*}
a_n &= \frac{1}{n!} \left[ \frac{d^n}{dz^n} \frac{1}{1-z} \right]_{z=0} \\
&= \frac{1}{n!} \left[ \frac{n!}{(1-z)^{n+1}} \right]_{z=0} \\
&= 1
\end{align*}
Thus we have
\[ \frac{1}{1-z} = \sum_{n=0}^\infty z^n, \quad \text{for } |z| < 1. \]
14.5.1 Newton's Binomial Formula.
Result 14.5.2 For all $|z| < 1$, $a$ complex:
\[ (1+z)^a = 1 + \binom{a}{1} z + \binom{a}{2} z^2 + \binom{a}{3} z^3 + \cdots \]
where
\[ \binom{a}{r} = \frac{a(a-1)(a-2)\cdots(a-r+1)}{r!}. \]
If $a$ is complex, then the expansion is of the principal branch of $(1+z)^a$. We define
\[ \binom{r}{0} = 1, \quad \binom{0}{r} = 0 \text{ for } r \neq 0, \quad \binom{0}{0} = 1. \]
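The binomial series in Result 14.5.2 can be spot-checked numerically for a real exponent (a sketch, not from the text; the exponent, evaluation point, and truncation are arbitrary choices):

```python
def binom(a, r):
    """Generalized binomial coefficient a (a-1) ... (a-r+1) / r!."""
    num, fact = 1.0, 1.0
    for k in range(r):
        num *= a - k
        fact *= k + 1
    return num / fact

def binomial_series(a, z, terms=50):
    """Truncated binomial series for (1 + z)^a, valid for |z| < 1."""
    return sum(binom(a, r) * z ** r for r in range(terms))

# Compare with (1 + z)^a for a real exponent and |z| < 1.
a, z = 0.5, 0.3
assert abs(binomial_series(a, z) - (1 + z) ** a) < 1e-12
```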
Example 14.5.2 Evaluate $\lim_{n\to\infty} (1 + 1/n)^n$.
First we expand $(1 + 1/n)^n$ using Newton's binomial formula.
\begin{align*}
\lim_{n\to\infty} \left( 1 + \frac{1}{n} \right)^n &= \lim_{n\to\infty} \left[ 1 + \binom{n}{1} \frac{1}{n} + \binom{n}{2} \frac{1}{n^2} + \binom{n}{3} \frac{1}{n^3} + \cdots \right] \\
&= \lim_{n\to\infty} \left[ 1 + 1 + \frac{n(n-1)}{2! \, n^2} + \frac{n(n-1)(n-2)}{3! \, n^3} + \cdots \right] \\
&= 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots
\end{align*}
We recognize this as the Taylor series expansion of $e^1$.
\[ = e \]
We can also evaluate the limit using L'Hospital's rule.
\begin{align*}
\operatorname{Log}\left( \lim_{x\to\infty} \left( 1 + \frac{1}{x} \right)^x \right) &= \lim_{x\to\infty} \operatorname{Log}\left( \left( 1 + \frac{1}{x} \right)^x \right) \\
&= \lim_{x\to\infty} x \operatorname{Log}(1 + 1/x) \\
&= \lim_{x\to\infty} \frac{\operatorname{Log}(1 + 1/x)}{1/x} \\
&= \lim_{x\to\infty} \frac{\frac{-1/x^2}{1 + 1/x}}{-1/x^2} \\
&= 1
\end{align*}
\[ \lim_{x\to\infty} \left( 1 + \frac{1}{x} \right)^x = e^1 \]
Example 14.5.3 Find the Taylor series expansion of $1/(1+z)$ about $z = 0$.
For $|z| < 1$,
\begin{align*}
\frac{1}{1+z} &= 1 + \binom{-1}{1} z + \binom{-1}{2} z^2 + \binom{-1}{3} z^3 + \cdots \\
&= 1 + (-1)^1 z + (-1)^2 z^2 + (-1)^3 z^3 + \cdots \\
&= 1 - z + z^2 - z^3 + \cdots
\end{align*}
Example 14.5.4 Find the first few terms in the Taylor series expansion of
\[ \frac{1}{\sqrt{z^2 + 5z + 6}} \]
about the origin.
We factor the denominator and then apply Newton's binomial formula.
\begin{align*}
\frac{1}{\sqrt{z^2 + 5z + 6}} &= \frac{1}{\sqrt{z+3}} \frac{1}{\sqrt{z+2}} \\
&= \frac{1}{\sqrt{3}\sqrt{1 + z/3}} \frac{1}{\sqrt{2}\sqrt{1 + z/2}} \\
&= \frac{1}{\sqrt{6}} \left[ 1 + \binom{-1/2}{1} \frac{z}{3} + \binom{-1/2}{2} \left( \frac{z}{3} \right)^2 + \cdots \right] \left[ 1 + \binom{-1/2}{1} \frac{z}{2} + \binom{-1/2}{2} \left( \frac{z}{2} \right)^2 + \cdots \right] \\
&= \frac{1}{\sqrt{6}} \left( 1 - \frac{z}{6} + \frac{z^2}{24} + \cdots \right) \left( 1 - \frac{z}{4} + \frac{3z^2}{32} + \cdots \right) \\
&= \frac{1}{\sqrt{6}} \left( 1 - \frac{5}{12} z + \frac{17}{96} z^2 + \cdots \right)
\end{align*}
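The first few coefficients found in Example 14.5.4 can be spot-checked numerically: a truncation after the $z^2$ term should differ from the exact value by $O(z^3)$ (a sketch, not from the text; the evaluation point is an arbitrary small value):

```python
import math

def truncated(z):
    """First three terms from Example 14.5.4."""
    return (1 - 5 * z / 12 + 17 * z ** 2 / 96) / math.sqrt(6)

z = 1e-3
exact = 1 / math.sqrt(z ** 2 + 5 * z + 6)
# Truncating after the z^2 term leaves an O(z^3) error.
assert abs(truncated(z) - exact) < 1e-8
```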
14.6 Laurent Series
Result 14.6.1 Let $f(z)$ be single-valued and analytic in the annulus $R_1 < |z - z_0| < R_2$. For points in the annulus, the function has the convergent Laurent series
\[ f(z) = \sum_{n=-\infty}^\infty a_n (z - z_0)^n, \]
where
\[ a_n = \frac{1}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^{n+1}} \, dz \]
and $C$ is a positively oriented, closed contour around $z_0$ lying in the annulus.
To derive this result, consider a function $f(\zeta)$ that is analytic in the annulus $R_1 < |\zeta| < R_2$. Consider any point $z$ in the annulus. Let $C_1$ be a circle of radius $r_1$ with $R_1 < r_1 < |z|$. Let $C_2$ be a circle of radius $r_2$ with $|z| < r_2 < R_2$. Let $C_z$ be a circle around $z$, lying entirely between $C_1$ and $C_2$. (See Figure 14.5 for an illustration.)
Consider the integral of $\frac{f(\zeta)}{\zeta - z}$ around the $C_2$ contour. Since the only singularities of $\frac{f(\zeta)}{\zeta - z}$ occur at $\zeta = z$ and at points outside the annulus,
\[ \oint_{C_2} \frac{f(\zeta)}{\zeta - z} \, d\zeta = \oint_{C_z} \frac{f(\zeta)}{\zeta - z} \, d\zeta + \oint_{C_1} \frac{f(\zeta)}{\zeta - z} \, d\zeta. \]
By Cauchy's Integral Formula, the integral around $C_z$ is
\[ \oint_{C_z} \frac{f(\zeta)}{\zeta - z} \, d\zeta = i2\pi f(z). \]
This gives us an expression for $f(z)$.
\[ f(z) = \frac{1}{i2\pi} \oint_{C_2} \frac{f(\zeta)}{\zeta - z} \, d\zeta - \frac{1}{i2\pi} \oint_{C_1} \frac{f(\zeta)}{\zeta - z} \, d\zeta \tag{14.4} \]
On the $C_2$ contour, $|z| < |\zeta|$. Thus
\begin{align*}
\frac{1}{\zeta - z} &= \frac{1/\zeta}{1 - z/\zeta} \\
&= \frac{1}{\zeta} \sum_{n=0}^\infty \left( \frac{z}{\zeta} \right)^n, \quad \text{for } |z| < |\zeta| \\
&= \sum_{n=0}^\infty \frac{z^n}{\zeta^{n+1}}, \quad \text{for } |z| < |\zeta|
\end{align*}
On the $C_1$ contour, $|\zeta| < |z|$. Thus
\begin{align*}
-\frac{1}{\zeta - z} &= \frac{1/z}{1 - \zeta/z} \\
&= \frac{1}{z} \sum_{n=0}^\infty \left( \frac{\zeta}{z} \right)^n, \quad \text{for } |\zeta| < |z| \\
&= \sum_{n=0}^\infty \frac{\zeta^n}{z^{n+1}}, \quad \text{for } |\zeta| < |z| \\
&= \sum_{n=-\infty}^{-1} \frac{z^n}{\zeta^{n+1}}, \quad \text{for } |\zeta| < |z|
\end{align*}
We substitute these geometric series into Equation 14.4.
\[ f(z) = \frac{1}{i2\pi} \oint_{C_2} \left( \sum_{n=0}^\infty \frac{f(\zeta) z^n}{\zeta^{n+1}} \right) d\zeta + \frac{1}{i2\pi} \oint_{C_1} \left( \sum_{n=-\infty}^{-1} \frac{f(\zeta) z^n}{\zeta^{n+1}} \right) d\zeta \]
Since the sums converge uniformly, we can interchange the order of integration and summation.
\[ f(z) = \frac{1}{i2\pi} \sum_{n=0}^\infty \oint_{C_2} \frac{f(\zeta) z^n}{\zeta^{n+1}} \, d\zeta + \frac{1}{i2\pi} \sum_{n=-\infty}^{-1} \oint_{C_1} \frac{f(\zeta) z^n}{\zeta^{n+1}} \, d\zeta \]
Since the only singularities of the integrands lie outside of the annulus, the $C_1$ and $C_2$ contours can be deformed to any positive, closed contour $C$ that lies in the annulus and encloses the origin. (See Figure 14.5.) Finally, we combine the two integrals to obtain the desired result.
\[ f(z) = \sum_{n=-\infty}^\infty \left( \frac{1}{i2\pi} \oint_C \frac{f(\zeta)}{\zeta^{n+1}} \, d\zeta \right) z^n \]
For the case of arbitrary $z_0$, simply make the transformation $z \to z - z_0$.
Example 14.6.1 Find the Laurent series expansions of $1/(1+z)$.
For $|z| < 1$,
\begin{align*}
\frac{1}{1+z} &= 1 + \binom{-1}{1} z + \binom{-1}{2} z^2 + \binom{-1}{3} z^3 + \cdots \\
&= 1 + (-1)^1 z + (-1)^2 z^2 + (-1)^3 z^3 + \cdots \\
&= 1 - z + z^2 - z^3 + \cdots
\end{align*}
For $|z| > 1$,
\begin{align*}
\frac{1}{1+z} &= \frac{1/z}{1 + 1/z} \\
&= \frac{1}{z} \left( 1 + \binom{-1}{1} z^{-1} + \binom{-1}{2} z^{-2} + \cdots \right) \\
&= z^{-1} - z^{-2} + z^{-3} - \cdots
\end{align*}
Figure 14.5: Contours for a Laurent Expansion in an Annulus.
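The coefficient formula in Result 14.6.1 can be checked numerically by discretizing the contour integral on a circle inside the annulus (a sketch, not from the text; the test function, contour radius, and number of sample points are arbitrary choices). With $z = R e^{i\theta}$ the integral $\frac{1}{2\pi i}\oint f(z) z^{-(n+1)}\,dz$ reduces to $\frac{1}{2\pi}\int_0^{2\pi} f(z) z^{-n}\,d\theta$, which the trapezoidal rule evaluates with spectral accuracy.

```python
import cmath

def laurent_coefficient(f, n, radius=1.5, samples=4096):
    """Approximate a_n = 1/(2 pi i) * contour integral of f(z)/z^(n+1)
    over |z| = radius, via the trapezoidal rule on the circle."""
    total = 0.0 + 0.0j
    for k in range(samples):
        z = radius * cmath.exp(2j * cmath.pi * k / samples)
        total += f(z) * z ** (-n)   # f(z) z^(-n-1) * dz/(i dtheta) = f(z) z^(-n)
    return total / samples

# f(z) = 1/(1 + z) on 1 < |z|: Laurent series z^-1 - z^-2 + z^-3 - ...
f = lambda z: 1 / (1 + z)
assert abs(laurent_coefficient(f, -1) - 1) < 1e-10
assert abs(laurent_coefficient(f, -2) + 1) < 1e-10
assert abs(laurent_coefficient(f, 0)) < 1e-10
```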
14.7 Exercises
Exercise 14.1 (mathematica/fcv/series/constants.nb)
Does the series
\[ \sum_{n=2}^\infty \frac{1}{n \log n} \]
converge?
Hint, Solution
Exercise 14.2 (mathematica/fcv/series/constants.nb)
Show that the alternating harmonic series,
\[ \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots, \]
is convergent.
Hint, Solution
Exercise 14.3 (mathematica/fcv/series/constants.nb)
Show that the series
\[ \sum_{n=1}^\infty \frac{1}{n} \]
is divergent with the Cauchy convergence criterion.
Hint, Solution
Exercise 14.4
The alternating harmonic series has the sum:
\[ \sum_{n=1}^\infty \frac{(-1)^n}{n} = -\log(2). \]
Show that the terms in this series can be rearranged to sum to $\pi$.
Hint, Solution
Exercise 14.5 (mathematica/fcv/series/constants.nb)
Is the series,
\[ \sum_{n=1}^\infty \frac{n!}{n^n}, \]
convergent?
Hint, Solution
Exercise 14.6
Show that the harmonic series,
\[ \sum_{n=1}^\infty \frac{1}{n^\alpha} = 1 + \frac{1}{2^\alpha} + \frac{1}{3^\alpha} + \cdots, \]
converges for $\alpha > 1$ and diverges for $\alpha \leq 1$.
Hint, Solution
Exercise 14.7
Evaluate $\sum_{n=1}^{N-1} \sin(nx)$.
Hint, Solution
Exercise 14.8
Using the geometric series, show that
\[ \frac{1}{(1-z)^2} = \sum_{n=0}^\infty (n+1) z^n, \quad \text{for } |z| < 1, \]
and
\[ \log(1-z) = -\sum_{n=1}^\infty \frac{z^n}{n}, \quad \text{for } |z| < 1. \]
Hint, Solution
Exercise 14.9
Find the Taylor series of $\frac{1}{1+z^2}$ about $z = 0$. Determine the radius of convergence of the Taylor series from the singularities of the function. Determine the radius of convergence with the ratio test.
Hint, Solution
Exercise 14.10
Use two methods to find the Taylor series expansion of $\log(1+z)$ about $z = 0$ and determine the circle of convergence. First directly apply Taylor's theorem, then differentiate a geometric series.
Hint, Solution
Exercise 14.11
Find the Laurent series about $z = 0$ of $1/(z - i)$ for $|z| < 1$ and $|z| > 1$.
Hint, Solution
Exercise 14.12
Evaluate
\[ \sum_{k=1}^n k z^k \quad \text{and} \quad \sum_{k=1}^n k^2 z^k \]
for $z \neq 1$.
Hint, Solution
Exercise 14.13
Find the circle of convergence of the following series.
1. $z + (\alpha - \beta)\dfrac{z^2}{2!} + (\alpha - \beta)(\alpha - 2\beta)\dfrac{z^3}{3!} + (\alpha - \beta)(\alpha - 2\beta)(\alpha - 3\beta)\dfrac{z^4}{4!} + \cdots$
2. $\sum_{n=1}^\infty \dfrac{n}{2^n} (z - i)^n$
3. $\sum_{n=1}^\infty n^n z^n$
4. $\sum_{n=1}^\infty \dfrac{n!}{n^n} z^n$
5. $\sum_{n=1}^\infty \left[ 3 + (-1)^n \right]^n z^n$
6. $\sum_{n=1}^\infty \left( n + \alpha^n \right) z^n \quad (|\alpha| > 1)$
Hint, Solution
Exercise 14.14
Let $f(z) = (1+z)^\alpha$ be the branch for which $f(0) = 1$. Find its Taylor series expansion about $z = 0$. What is the radius of convergence of the series? ($\alpha$ is an arbitrary complex number.)
Hint, Solution
Exercise 14.15
Obtain the Laurent expansion of
\[ f(z) = \frac{1}{(z+1)(z+2)} \]
centered on $z = 0$ for the three regions:
1. $|z| < 1$
2. $1 < |z| < 2$
3. $2 < |z|$
Hint, Solution
Exercise 14.16
By comparing the Laurent expansion of $(z + 1/z)^m$, $m \in \mathbb{Z}^+$, with the binomial expansion of this quantity, show that
\[ \int_0^{2\pi} (\cos\theta)^m \cos(n\theta) \, d\theta = \begin{cases} \frac{\pi}{2^{m-1}} \binom{m}{(m-n)/2} & -m \leq n \leq m \text{ and } m-n \text{ even} \\ 0 & \text{otherwise} \end{cases} \]
Hint, Solution
Exercise 14.17
The function $f(z)$ is analytic in the entire $z$-plane, including $\infty$, except at the point $z = i/2$, where it has a simple pole, and at $z = 2$, where it has a pole of order 2. In addition
\[ \oint_{|z|=1} f(z)\,dz = i2\pi, \quad \oint_{|z|=3} f(z)\,dz = 0, \quad \oint_{|z|=3} (z-1) f(z)\,dz = 0. \]
Find $f(z)$ and its complete Laurent expansion about $z = 0$.
Hint, Solution
Exercise 14.18
Let $f(z) = \sum_{k=1}^\infty k^3 \left( \frac{z}{3} \right)^k$. Compute each of the following, giving justification in each case. The contours are circles of radius one about the origin.
1. $\oint_{|z|=1} e^{iz} f(z)\,dz$
2. $\oint_{|z|=1} \dfrac{f(z)}{z^4}\,dz$
3. $\oint_{|z|=1} \dfrac{f(z) e^z}{z^2}\,dz$
Hint, Solution
14.8 Hints
Hint 14.1
Use the integral test.
Hint 14.2
Group the terms.
\begin{align*}
1 - \frac{1}{2} &= \frac{1}{2} \\
\frac{1}{3} - \frac{1}{4} &= \frac{1}{12} \\
\frac{1}{5} - \frac{1}{6} &= \frac{1}{30} \\
&\ \vdots
\end{align*}
Hint 14.3
Show that
\[ |S_{2n} - S_n| > \frac{1}{2}. \]
Hint 14.4
The alternating harmonic series is conditionally convergent. Let $\{a_n\}$ and $\{b_n\}$ be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ are divergent. Devise a method for alternately taking terms from $\{a_n\}$ and $\{b_n\}$.
Hint 14.5
Use the ratio test.
Hint 14.6
Use the integral test.
Hint 14.7
Note that $\sin(nx) = \Im(e^{inx})$. This substitution will yield a finite geometric series.
Hint 14.8
Differentiate the geometric series. Integrate the geometric series.
Hint 14.9
The Taylor series is a geometric series.
Hint 14.10
Hint 14.11
Hint 14.12
Let $S_n$ be the sum. Consider $S_n - z S_n$. Use the finite geometric sum.
Hint 14.13
Hint 14.14
Hint 14.15
Hint 14.16
Hint 14.17
Hint 14.18
14.9 Solutions
Solution 14.1
Since $\sum_{n=2}^\infty \frac{1}{n \log n}$ is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,
\[ \int_2^\infty \frac{1}{x \log x} \, dx = \int_{\log 2}^\infty \frac{1}{\xi} \, d\xi. \]
Since the integral diverges, the series also diverges.
Solution 14.2
\begin{align*}
\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} &= \sum_{n=1}^\infty \left( \frac{1}{2n-1} - \frac{1}{2n} \right) \\
&= \sum_{n=1}^\infty \frac{1}{(2n-1)(2n)} \\
&\leq \frac{1}{2} \sum_{n=1}^\infty \frac{1}{n^2} \\
&= \frac{\pi^2}{12}
\end{align*}
Thus the series is convergent.
Solution 14.3
Since
\[ |S_{2n} - S_n| = \sum_{j=n}^{2n-1} \frac{1}{j} \geq \sum_{j=n}^{2n-1} \frac{1}{2n-1} = \frac{n}{2n-1} > \frac{1}{2}, \]
the series does not satisfy the Cauchy convergence criterion.
Solution 14.4
The alternating harmonic series is conditionally convergent. That is, the sum is convergent but not absolutely convergent. Let $\{a_n\}$ and $\{b_n\}$ be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ are divergent. Otherwise the alternating harmonic series would be absolutely convergent.
To sum the terms in the series to $\pi$ we repeat the following two steps indefinitely:
1. Take terms from $\{a_n\}$ until the sum is greater than $\pi$.
2. Take terms from $\{b_n\}$ until the sum is less than $\pi$.
Each of these steps can always be accomplished because the sums, $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ are both divergent. Hence the tails of the series are divergent. No matter how many terms we take, the remaining terms in each series are divergent. In each step a finite, nonzero number of terms from the respective series is taken. Thus all the terms will be used. Since the terms in each series vanish as $n \to \infty$, the running sum converges to $\pi$.
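The rearrangement procedure in Solution 14.4 can be simulated directly (a sketch, not from the text; the target value and step count are arbitrary choices):

```python
import math

def rearranged_sum(target, steps=100000):
    """Rearrange the alternating harmonic series +1, -1/2, +1/3, -1/4, ...
    so the partial sums approach `target`: take positive terms while the
    running sum is at or below the target, negative terms while above it."""
    total = 0.0
    pos, neg = 1, 2  # next odd (positive) and even (negative) denominators
    for _ in range(steps):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

print(rearranged_sum(math.pi))  # approaches pi
```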
Solution 14.5
Applying the ratio test,
\begin{align*}
\lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right| &= \lim_{n\to\infty} \frac{(n+1)! \, n^n}{n! \, (n+1)^{n+1}} \\
&= \lim_{n\to\infty} \frac{n^n}{(n+1)^n} \\
&= \lim_{n\to\infty} \left( \frac{n}{n+1} \right)^n \\
&= \frac{1}{e} < 1,
\end{align*}
we see that the series is absolutely convergent.
Solution 14.6
The harmonic series,
\[ \sum_{n=1}^\infty \frac{1}{n^\alpha} = 1 + \frac{1}{2^\alpha} + \frac{1}{3^\alpha} + \cdots, \]
converges or diverges absolutely with the integral,
\[ \int_1^\infty \frac{1}{|x^\alpha|} \, dx = \int_1^\infty \frac{1}{x^{\Re(\alpha)}} \, dx = \begin{cases} \left[ \log x \right]_1^\infty & \text{for } \Re(\alpha) = 1, \\ \left[ \dfrac{x^{1-\Re(\alpha)}}{1 - \Re(\alpha)} \right]_1^\infty & \text{for } \Re(\alpha) \neq 1. \end{cases} \]
The integral converges only for $\Re(\alpha) > 1$. Thus the harmonic series converges absolutely for $\Re(\alpha) > 1$ and diverges absolutely for $\Re(\alpha) \leq 1$.
Solution 14.7
\begin{align*}
\sum_{n=1}^{N-1} \sin(nx) &= \sum_{n=0}^{N-1} \sin(nx) \\
&= \sum_{n=0}^{N-1} \Im\left( e^{inx} \right) \\
&= \Im\left( \sum_{n=0}^{N-1} \left( e^{ix} \right)^n \right) \\
&= \begin{cases} \Im(N) & \text{for } x = 2\pi k \\ \Im\left( \frac{1 - e^{iNx}}{1 - e^{ix}} \right) & \text{for } x \neq 2\pi k \end{cases} \\
&= \begin{cases} 0 & \text{for } x = 2\pi k \\ \Im\left( \frac{e^{-ix/2} - e^{i(N-1/2)x}}{e^{-ix/2} - e^{ix/2}} \right) & \text{for } x \neq 2\pi k \end{cases} \\
&= \begin{cases} 0 & \text{for } x = 2\pi k \\ \Im\left( \frac{e^{-ix/2} - e^{i(N-1/2)x}}{-2i \sin(x/2)} \right) & \text{for } x \neq 2\pi k \end{cases} \\
&= \begin{cases} 0 & \text{for } x = 2\pi k \\ \Re\left( \frac{e^{-ix/2} - e^{i(N-1/2)x}}{2 \sin(x/2)} \right) & \text{for } x \neq 2\pi k \end{cases}
\end{align*}
\[ \sum_{n=1}^{N-1} \sin(nx) = \begin{cases} 0 & \text{for } x = 2\pi k \\ \frac{\cos(x/2) - \cos((N-1/2)x)}{2 \sin(x/2)} & \text{for } x \neq 2\pi k \end{cases} \]
Solution 14.8
The geometric series is
\[ \frac{1}{1-z} = \sum_{n=0}^\infty z^n. \]
This series is uniformly convergent in the domain $|z| \leq r < 1$. Differentiating this equation yields
\[ \frac{1}{(1-z)^2} = \sum_{n=1}^\infty n z^{n-1} = \sum_{n=0}^\infty (n+1) z^n \quad \text{for } |z| < 1. \]
Integrating the geometric series yields
\[ -\log(1-z) = \sum_{n=0}^\infty \frac{z^{n+1}}{n+1}, \]
\[ \log(1-z) = -\sum_{n=1}^\infty \frac{z^n}{n}, \quad \text{for } |z| < 1. \]
Solution 14.9
\[ \frac{1}{1+z^2} = \sum_{n=0}^\infty \left( -z^2 \right)^n = \sum_{n=0}^\infty (-1)^n z^{2n} \]
The function $\frac{1}{1+z^2} = \frac{1}{(1-iz)(1+iz)}$ has singularities at $z = \pm i$. Thus the radius of convergence is 1. Now we use the ratio test to corroborate that the radius of convergence is 1.
\begin{align*}
\lim_{n\to\infty} \left| \frac{a_{n+1}(z)}{a_n(z)} \right| &< 1 \\
\lim_{n\to\infty} \left| \frac{(-1)^{n+1} z^{2(n+1)}}{(-1)^n z^{2n}} \right| &< 1 \\
\lim_{n\to\infty} \left| z^2 \right| &< 1 \\
|z| &< 1
\end{align*}
Solution 14.10
Method 1.
\begin{align*}
\log(1+z) &= \left[ \log(1+z) \right]_{z=0} + \left[ \frac{d}{dz} \log(1+z) \right]_{z=0} \frac{z}{1!} + \left[ \frac{d^2}{dz^2} \log(1+z) \right]_{z=0} \frac{z^2}{2!} + \cdots \\
&= 0 + \left[ \frac{1}{1+z} \right]_{z=0} \frac{z}{1!} + \left[ \frac{-1}{(1+z)^2} \right]_{z=0} \frac{z^2}{2!} + \left[ \frac{2}{(1+z)^3} \right]_{z=0} \frac{z^3}{3!} + \cdots \\
&= z - \frac{z^2}{2} + \frac{z^3}{3} - \frac{z^4}{4} + \cdots \\
&= \sum_{n=1}^\infty \frac{(-1)^{n+1} z^n}{n}
\end{align*}
Since the nearest singularity of $\log(1+z)$ is at $z = -1$, the radius of convergence is 1.
Method 2. We know the geometric series
\[ \frac{1}{1+z} = \sum_{n=0}^\infty (-1)^n z^n \]
converges for $|z| < 1$. Integrating this equation yields
\[ \log(1+z) = \sum_{n=0}^\infty \frac{(-1)^n z^{n+1}}{n+1} = \sum_{n=1}^\infty \frac{(-1)^{n+1} z^n}{n} \]
for $|z| < 1$. We calculate the radius of convergence with the ratio test.
\[ R = \lim_{n\to\infty} \left| \frac{a_n}{a_{n+1}} \right| = \lim_{n\to\infty} \left| \frac{n+1}{n} \right| = 1 \]
Thus the series converges absolutely for $|z| < 1$.
Solution 14.11
For $|z| < 1$:
\[ \frac{1}{z - i} = \frac{i}{1 + iz} = i \sum_{n=0}^\infty (-iz)^n \]
(Note that $|z| < 1 \Leftrightarrow |-iz| < 1$.)
For $|z| > 1$:
\begin{align*}
\frac{1}{z - i} &= \frac{1}{z} \frac{1}{1 - i/z} \\
&= \frac{1}{z} \sum_{n=0}^\infty \left( \frac{i}{z} \right)^n \\
&= \sum_{n=0}^\infty i^n z^{-n-1} \\
&= \sum_{n=-\infty}^{-1} (-i)^{n+1} z^n
\end{align*}
(Note that $|z| > 1 \Leftrightarrow |i/z| < 1$.)
Solution 14.12
Let
\[ S_n = \sum_{k=1}^n k z^k. \]
\begin{align*}
S_n - z S_n &= \sum_{k=1}^n k z^k - \sum_{k=1}^n k z^{k+1} \\
&= \sum_{k=1}^n k z^k - \sum_{k=2}^{n+1} (k-1) z^k \\
&= \sum_{k=1}^n z^k - n z^{n+1} \\
&= \frac{z - z^{n+1}}{1 - z} - n z^{n+1}
\end{align*}
\[ \sum_{k=1}^n k z^k = \frac{z \left( 1 - (n+1) z^n + n z^{n+1} \right)}{(1-z)^2} \]
Let
\[ S_n = \sum_{k=1}^n k^2 z^k. \]
\begin{align*}
S_n - z S_n &= \sum_{k=1}^n \left( k^2 - (k-1)^2 \right) z^k - n^2 z^{n+1} \\
&= 2 \sum_{k=1}^n k z^k - \sum_{k=1}^n z^k - n^2 z^{n+1} \\
&= 2 \frac{z \left( 1 - (n+1) z^n + n z^{n+1} \right)}{(1-z)^2} - \frac{z - z^{n+1}}{1 - z} - n^2 z^{n+1}
\end{align*}
\[ \sum_{k=1}^n k^2 z^k = \frac{z \left( 1 + z - z^n \left( 1 + z + n (n(z-1) - 2)(z-1) \right) \right)}{(1-z)^3} \]
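Both closed forms derived in Solution 14.12 can be verified against direct summation (a sketch, not from the text; the test values of $z$ and $n$ are arbitrary choices):

```python
def sum_k_zk(n, z):
    """Closed form of sum_{k=1}^{n} k z^k from Solution 14.12."""
    return z * (1 - (n + 1) * z ** n + n * z ** (n + 1)) / (1 - z) ** 2

def sum_k2_zk(n, z):
    """Closed form of sum_{k=1}^{n} k^2 z^k from Solution 14.12."""
    num = 1 + z - z ** n * (1 + z + n * (n * (z - 1) - 2) * (z - 1))
    return z * num / (1 - z) ** 3

for z in (0.5, -0.3, 2.0):
    for n in (1, 5, 12):
        direct1 = sum(k * z ** k for k in range(1, n + 1))
        direct2 = sum(k * k * z ** k for k in range(1, n + 1))
        assert abs(sum_k_zk(n, z) - direct1) < 1e-6 * max(1.0, abs(direct1))
        assert abs(sum_k2_zk(n, z) - direct2) < 1e-6 * max(1.0, abs(direct2))
```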
Solution 14.13
1. We assume that $\beta \neq 0$. We determine the radius of convergence with the ratio test.
\begin{align*}
R &= \lim_{n\to\infty} \left| \frac{a_n}{a_{n+1}} \right| \\
&= \lim_{n\to\infty} \left| \frac{(\alpha-\beta) \cdots (\alpha - (n-1)\beta)/n!}{(\alpha-\beta) \cdots (\alpha - n\beta)/(n+1)!} \right| \\
&= \lim_{n\to\infty} \left| \frac{n+1}{\alpha - n\beta} \right| \\
&= \frac{1}{|\beta|}
\end{align*}
The series converges absolutely for $|z| < 1/|\beta|$.
2. By the ratio test formula, the radius of absolute convergence is
\[ R = \lim_{n\to\infty} \left| \frac{n/2^n}{(n+1)/2^{n+1}} \right| = 2 \lim_{n\to\infty} \left| \frac{n}{n+1} \right| = 2 \]
By the root test formula, the radius of absolute convergence is
\[ R = \frac{1}{\lim_{n\to\infty} \sqrt[n]{|n/2^n|}} = \frac{2}{\lim_{n\to\infty} \sqrt[n]{n}} = 2 \]
The series converges absolutely for $|z - i| < 2$.
3. We determine the radius of convergence with the Cauchy-Hadamard formula.
\[ R = \frac{1}{\limsup \sqrt[n]{|a_n|}} = \frac{1}{\limsup \sqrt[n]{|n^n|}} = \frac{1}{\limsup n} = 0 \]
The series converges only for $z = 0$.
4. By the ratio test formula, the radius of absolute convergence is
\begin{align*}
R &= \lim_{n\to\infty} \left| \frac{n!/n^n}{(n+1)!/(n+1)^{n+1}} \right| \\
&= \lim_{n\to\infty} \left| \frac{(n+1)^n}{n^n} \right| \\
&= \lim_{n\to\infty} \left( \frac{n+1}{n} \right)^n \\
&= \exp\left( \lim_{n\to\infty} \log \left( \frac{n+1}{n} \right)^n \right) \\
&= \exp\left( \lim_{n\to\infty} n \log \left( \frac{n+1}{n} \right) \right) \\
&= \exp\left( \lim_{n\to\infty} \frac{\log(n+1) - \log(n)}{1/n} \right) \\
&= \exp\left( \lim_{n\to\infty} \frac{1/(n+1) - 1/n}{-1/n^2} \right) \\
&= \exp\left( \lim_{n\to\infty} \frac{n}{n+1} \right) \\
&= e^1
\end{align*}
The series converges absolutely in the circle $|z| < e$.
5. By the Cauchy-Hadamard formula, the radius of absolute convergence is
\[ R = \frac{1}{\limsup \sqrt[n]{\left| \left( 3 + (-1)^n \right)^n \right|}} = \frac{1}{\limsup \left( 3 + (-1)^n \right)} = \frac{1}{4} \]
Thus the series converges absolutely for $|z| < 1/4$.
6. By the Cauchy-Hadamard formula, the radius of absolute convergence is
\[ R = \frac{1}{\limsup \sqrt[n]{|n + \alpha^n|}} = \frac{1}{\limsup |\alpha| \sqrt[n]{|1 + n/\alpha^n|}} = \frac{1}{|\alpha|} \]
Thus the sum converges absolutely for $|z| < 1/|\alpha|$.
Solution 14.14
The Taylor series expansion of $f(z)$ about $z = 0$ is
\[ f(z) = \sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} z^n. \]
The derivatives of $f(z)$ are
\[ f^{(n)}(z) = \left( \prod_{k=0}^{n-1} (\alpha - k) \right) (1+z)^{\alpha - n}. \]
Thus $f^{(n)}(0)$ is
\[ f^{(n)}(0) = \prod_{k=0}^{n-1} (\alpha - k). \]
If $\alpha = m$ is a non-negative integer, then only the first $m+1$ terms are nonzero. The Taylor series is a polynomial and the series has an infinite radius of convergence.
\[ (1+z)^m = \sum_{n=0}^m \frac{\prod_{k=0}^{n-1} (m - k)}{n!} z^n \]
If $\alpha$ is not a non-negative integer, then all of the terms in the series are non-zero.
\[ (1+z)^\alpha = \sum_{n=0}^\infty \frac{\prod_{k=0}^{n-1} (\alpha - k)}{n!} z^n \]
The radius of convergence of the series is the distance to the nearest singularity of $(1+z)^\alpha$. This occurs at $z = -1$. Thus the series converges for $|z| < 1$. We can corroborate this with the ratio test. The radius of convergence is
\[ R = \lim_{n\to\infty} \left| \frac{\left( \prod_{k=0}^{n-1} (\alpha - k) \right)/n!}{\left( \prod_{k=0}^{n} (\alpha - k) \right)/(n+1)!} \right| = \lim_{n\to\infty} \left| \frac{n+1}{\alpha - n} \right| = |-1| = 1. \]
If we define the binomial coefficient,
\[ \binom{\alpha}{n} \equiv \frac{\prod_{k=0}^{n-1} (\alpha - k)}{n!}, \]
then we can write the series as
\[ (1+z)^\alpha = \sum_{n=0}^\infty \binom{\alpha}{n} z^n. \]
Solution 14.15
We expand the function in partial fractions.
\[ f(z) = \frac{1}{(z+1)(z+2)} = \frac{1}{z+1} - \frac{1}{z+2} \]
The Taylor series about $z = 0$ for $1/(z+1)$ is
\begin{align*}
\frac{1}{1+z} &= \frac{1}{1 - (-z)} \\
&= \sum_{n=0}^\infty (-z)^n, \quad \text{for } |z| < 1 \\
&= \sum_{n=0}^\infty (-1)^n z^n, \quad \text{for } |z| < 1
\end{align*}
The series about $z = \infty$ for $1/(z+1)$ is
\begin{align*}
\frac{1}{1+z} &= \frac{1/z}{1 + 1/z} \\
&= \frac{1}{z} \sum_{n=0}^\infty (-1/z)^n, \quad \text{for } |1/z| < 1 \\
&= \sum_{n=0}^\infty (-1)^n z^{-n-1}, \quad \text{for } |z| > 1 \\
&= \sum_{n=-\infty}^{-1} (-1)^{n+1} z^n, \quad \text{for } |z| > 1
\end{align*}
The Taylor series about $z = 0$ for $1/(z+2)$ is
\begin{align*}
\frac{1}{2+z} &= \frac{1/2}{1 + z/2} \\
&= \frac{1}{2} \sum_{n=0}^\infty (-z/2)^n, \quad \text{for } |z/2| < 1 \\
&= \sum_{n=0}^\infty \frac{(-1)^n}{2^{n+1}} z^n, \quad \text{for } |z| < 2
\end{align*}
The series about $z = \infty$ for $1/(z+2)$ is
\begin{align*}
\frac{1}{2+z} &= \frac{1/z}{1 + 2/z} \\
&= \frac{1}{z} \sum_{n=0}^\infty (-2/z)^n, \quad \text{for } |2/z| < 1 \\
&= \sum_{n=0}^\infty (-1)^n 2^n z^{-n-1}, \quad \text{for } |z| > 2 \\
&= \sum_{n=-\infty}^{-1} \frac{(-1)^{n+1}}{2^{n+1}} z^n, \quad \text{for } |z| > 2
\end{align*}
To find the expansions in the three regions, we just choose the appropriate series.
1.
\begin{align*}
f(z) &= \frac{1}{1+z} - \frac{1}{2+z} \\
&= \sum_{n=0}^\infty (-1)^n z^n - \sum_{n=0}^\infty \frac{(-1)^n}{2^{n+1}} z^n, \quad \text{for } |z| < 1 \\
&= \sum_{n=0}^\infty (-1)^n \left( 1 - \frac{1}{2^{n+1}} \right) z^n, \quad \text{for } |z| < 1
\end{align*}
\[ f(z) = \sum_{n=0}^\infty (-1)^n \frac{2^{n+1} - 1}{2^{n+1}} z^n, \quad \text{for } |z| < 1 \]
2.
\[ f(z) = \frac{1}{1+z} - \frac{1}{2+z} \]
\[ f(z) = \sum_{n=-\infty}^{-1} (-1)^{n+1} z^n - \sum_{n=0}^\infty \frac{(-1)^n}{2^{n+1}} z^n, \quad \text{for } 1 < |z| < 2 \]
3.
\begin{align*}
f(z) &= \frac{1}{1+z} - \frac{1}{2+z} \\
&= \sum_{n=-\infty}^{-1} (-1)^{n+1} z^n - \sum_{n=-\infty}^{-1} \frac{(-1)^{n+1}}{2^{n+1}} z^n, \quad \text{for } 2 < |z|
\end{align*}
\[ f(z) = \sum_{n=-\infty}^{-1} (-1)^{n+1} \frac{2^{n+1} - 1}{2^{n+1}} z^n, \quad \text{for } 2 < |z| \]
Solution 14.16
Laurent Series. We assume that $m$ is a non-negative integer and that $n$ is an integer. The Laurent series about the point $z = 0$ of
\[ f(z) = \left( z + \frac{1}{z} \right)^m \]
is
\[ f(z) = \sum_{n=-\infty}^\infty a_n z^n, \]
where
\[ a_n = \frac{1}{i2\pi} \oint_C \frac{f(z)}{z^{n+1}} \, dz \]
and $C$ is a contour going around the origin once in the positive direction. We manipulate the coefficient integral into the desired form.
\begin{align*}
a_n &= \frac{1}{i2\pi} \oint_C \frac{(z + 1/z)^m}{z^{n+1}} \, dz \\
&= \frac{1}{i2\pi} \int_0^{2\pi} \frac{\left( e^{i\theta} + e^{-i\theta} \right)^m}{e^{i(n+1)\theta}} \, i e^{i\theta} \, d\theta \\
&= \frac{1}{2\pi} \int_0^{2\pi} 2^m \cos^m\theta \, e^{-in\theta} \, d\theta \\
&= \frac{2^{m-1}}{\pi} \int_0^{2\pi} \cos^m\theta \, \left( \cos(n\theta) - i \sin(n\theta) \right) d\theta
\end{align*}
Note that $\cos^m\theta$ is even and $\sin(n\theta)$ is odd about $\theta = \pi$.
\[ = \frac{2^{m-1}}{\pi} \int_0^{2\pi} \cos^m\theta \cos(n\theta) \, d\theta \]
Binomial Series. Now we find the binomial series expansion of $f(z)$.
\begin{align*}
\left( z + \frac{1}{z} \right)^m &= \sum_{n=0}^m \binom{m}{n} z^{m-n} \left( \frac{1}{z} \right)^n \\
&= \sum_{n=0}^m \binom{m}{n} z^{m-2n} \\
&= \sum_{\substack{n=-m \\ m-n \text{ even}}}^{m} \binom{m}{(m-n)/2} z^n
\end{align*}
The coefficients in the series $f(z) = \sum_{n=-\infty}^\infty a_n z^n$ are
\[ a_n = \begin{cases} \binom{m}{(m-n)/2} & -m \leq n \leq m \text{ and } m-n \text{ even} \\ 0 & \text{otherwise} \end{cases} \]
By equating the coefficients found by the two methods, we evaluate the desired integral.
\[ \int_0^{2\pi} (\cos\theta)^m \cos(n\theta) \, d\theta = \begin{cases} \frac{\pi}{2^{m-1}} \binom{m}{(m-n)/2} & -m \leq n \leq m \text{ and } m-n \text{ even} \\ 0 & \text{otherwise} \end{cases} \]
Solution 14.17
First we write $f(z)$ in the form
\[ f(z) = \frac{g(z)}{(z - i/2)(z - 2)^2}. \]
$g(z)$ is an entire function which grows no faster than $z^3$ at infinity. By expanding $g(z)$ in a Taylor series about the origin, we see that it is a polynomial of degree no greater than 3.
\[ f(z) = \frac{\alpha z^3 + \beta z^2 + \gamma z + \delta}{(z - i/2)(z - 2)^2} \]
Since $f(z)$ is a rational function we expand it in partial fractions to obtain a form that is convenient to integrate.
\[ f(z) = \frac{a}{z - i/2} + \frac{b}{z - 2} + \frac{c}{(z - 2)^2} + d \]
We use the value of the integrals of $f(z)$ to determine the constants, $a$, $b$, $c$ and $d$.
\begin{align*}
\oint_{|z|=1} \left( \frac{a}{z - i/2} + \frac{b}{z - 2} + \frac{c}{(z - 2)^2} + d \right) dz &= i2\pi \\
i2\pi a &= i2\pi \\
a &= 1
\end{align*}
\begin{align*}
\oint_{|z|=3} \left( \frac{1}{z - i/2} + \frac{b}{z - 2} + \frac{c}{(z - 2)^2} + d \right) dz &= 0 \\
i2\pi (1 + b) &= 0 \\
b &= -1
\end{align*}
Note that by applying the second constraint, we can change the third constraint to
\[ \oint_{|z|=3} z f(z) \, dz = 0. \]
\begin{align*}
\oint_{|z|=3} z \left( \frac{1}{z - i/2} - \frac{1}{z - 2} + \frac{c}{(z - 2)^2} + d \right) dz &= 0 \\
\oint_{|z|=3} \left( \frac{(z - i/2) + i/2}{z - i/2} - \frac{(z - 2) + 2}{z - 2} + \frac{c(z - 2) + 2c}{(z - 2)^2} \right) dz &= 0 \\
i2\pi \left( \frac{i}{2} - 2 + c \right) &= 0 \\
c &= 2 - \frac{i}{2}
\end{align*}
Thus we see that the function is
\[ f(z) = \frac{1}{z - i/2} - \frac{1}{z - 2} + \frac{2 - i/2}{(z - 2)^2} + d, \]
where $d$ is an arbitrary constant. We can also write the function in the form:
\[ f(z) = \frac{4d(z - i/2)(z - 2)^2 + 15 - i8}{4(z - i/2)(z - 2)^2}. \]
Complete Laurent Series. We nd the complete Laurent series about z = 0 for each of the terms in the
partial fraction expansion of f(z).
1
z i/2
=
i2
1 +i2z
= i2

n=0
(i2z)
n
, for [ i2z[ < 1
=

n=0
(i2)
n+1
z
n
, for [z[ < 1/2
1
z i/2
=
1/z
1 i/(2z)
=
1
z

n=0
_
i
2z
_
n
, for [i/(2z)[ < 1
=

n=0
_
i
2
_
n
z
n1
, for [z[ < 2
=
1

n=
_
i
2
_
n1
z
n
, for [z[ < 2
=
1

n=
(i2)
n+1
z
n
, for [z[ < 2
485

1
z 2
=
1/2
1 z/2
=
1
2

n=0
_
z
2
_
n
, for [z/2[ < 1
=

n=0
z
n
2
n+1
, for [z[ < 2

1
z 2
=
1/z
1 2/z
=
1
z

n=0
_
2
z
_
n
, for [2/z[ < 1
=

n=0
2
n
z
n1
, for [z[ > 2
=
1

n=
2
n1
z
n
, for [z[ > 2
486
2 i/2
(z 2)
2
= (2 i/2)
1
4
(1 z/2)
2
=
4 i
8

n=0
_
2
n
_
_

z
2
_
n
, for [z/2[ < 1
=
4 i
8

n=0
(1)
n
(n + 1)(1)
n
2
n
z
n
, for [z[ < 2
=
4 i
8

n=0
n + 1
2
n
z
n
, for [z[ < 2
2 i/2
(z 2)
2
=
2 i/2
z
2
_
1
2
z
_
2
=
2 i/2
z
2

n=0
_
2
n
__

2
z
_
n
, for [2/z[ < 1
= (2 i/2)

n=0
(1)
n
(n + 1)(1)
n
2
n
z
n2
, for [z[ > 2
= (2 i/2)
2

n=
(n 1)2
n2
z
n
, for [z[ > 2
= (2 i/2)
2

n=
n + 1
2
n+2
z
n
, for [z[ > 2
We take the appropriate combination of these series to find the Laurent series expansions in the regions: |z| < 1/2, 1/2 < |z| < 2 and 2 < |z|. For |z| < 1/2, we have
$$f(z) = -\sum_{n=0}^{\infty}(-i2)^{n+1} z^n + \sum_{n=0}^{\infty}\frac{z^n}{2^{n+1}} + \frac{4-i}{8}\sum_{n=0}^{\infty}\frac{n+1}{2^n}\, z^n + d$$
$$f(z) = \sum_{n=0}^{\infty}\left( -(-i2)^{n+1} + \frac{1}{2^{n+1}} + \frac{4-i}{8}\,\frac{n+1}{2^n} \right) z^n + d$$
$$f(z) = \sum_{n=0}^{\infty}\left( -(-i2)^{n+1} + \frac{1}{2^{n+1}}\left(1 + \frac{4-i}{4}(n+1)\right) \right) z^n + d, \quad \text{for } |z| < 1/2$$
For 1/2 < |z| < 2, we have
$$f(z) = \sum_{n=-\infty}^{-1}(-i2)^{n+1} z^n + \sum_{n=0}^{\infty}\frac{z^n}{2^{n+1}} + \frac{4-i}{8}\sum_{n=0}^{\infty}\frac{n+1}{2^n}\, z^n + d$$
$$f(z) = \sum_{n=-\infty}^{-1}(-i2)^{n+1} z^n + \sum_{n=0}^{\infty}\frac{1}{2^{n+1}}\left(1 + \frac{4-i}{4}(n+1)\right) z^n + d, \quad \text{for } 1/2 < |z| < 2$$
For 2 < |z|, we have
$$f(z) = \sum_{n=-\infty}^{-1}(-i2)^{n+1} z^n - \sum_{n=-\infty}^{-1} 2^{-n-1} z^n - (2-i/2)\sum_{n=-\infty}^{-2}\frac{n+1}{2^{n+2}}\, z^n + d$$
Since the n = -1 terms of the first two sums cancel,
$$f(z) = \sum_{n=-\infty}^{-2}\left( (-i2)^{n+1} - \frac{1}{2^{n+1}}\left(1 + \left(1-\frac{i}{4}\right)(n+1)\right) \right) z^n + d, \quad \text{for } 2 < |z|.$$
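As a sanity check on the three expansions just derived, the partial sums can be compared against f(z) directly at a point in each region. This numerical sketch is an addition, not part of the original solution; the value d = 0.7 is an arbitrary choice of the free constant.

```python
# Numerical spot-check of the three Laurent expansions of
#   f(z) = 1/(z - i/2) - 1/(z - 2) + (2 - i/2)/(z - 2)^2 + d
# in the regions |z| < 1/2, 1/2 < |z| < 2 and |z| > 2.
d = 0.7  # the arbitrary constant; any value works

def f(z):
    return 1/(z - 0.5j) - 1/(z - 2) + (2 - 0.5j)/(z - 2)**2 + d

def series_inner(z, N=80):
    # |z| < 1/2
    return d + sum((-(-2j)**(n + 1)
                    + 2.0**-(n + 1)*(1 + (4 - 1j)/4*(n + 1)))*z**n
                   for n in range(N))

def series_middle(z, N=80):
    # 1/2 < |z| < 2
    s = sum((-2j)**(n + 1)*z**n for n in range(-N, 0))
    s += sum(2.0**-(n + 1)*(1 + (4 - 1j)/4*(n + 1))*z**n for n in range(N))
    return d + s

def series_outer(z, N=80):
    # |z| > 2
    return d + sum(((-2j)**(n + 1)
                    - 2.0**-(n + 1)*(1 + (1 - 0.25j)*(n + 1)))*z**n
                   for n in range(-N, -1))
```

Evaluating each truncated series at a point of its annulus of convergence reproduces f(z) to near machine precision, which also confirms the cancellation of the n = -1 terms in the outer region.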
Solution 14.18
The radius of convergence of the series for f(z) is
$$R = \lim_{k\to\infty}\left| \frac{k^3/3^k}{(k+1)^3/3^{k+1}} \right| = 3\lim_{k\to\infty}\left| \frac{k^3}{(k+1)^3} \right| = 3.$$
Thus f(z) is a function which is analytic inside the circle of radius 3.
1. The integrand is analytic. Thus by Cauchy's theorem the value of the integral is zero.
$$\oint_{|z|=1} e^{iz} f(z)\, dz = 0$$
2. We use Cauchy's integral formula to evaluate the integral.
$$\oint_{|z|=1} \frac{f(z)}{z^4}\, dz = \frac{i2\pi}{3!} f^{(3)}(0) = \frac{i2\pi}{3!}\,\frac{3!\,3^3}{3^3} = i2\pi$$
$$\oint_{|z|=1} \frac{f(z)}{z^4}\, dz = i2\pi.$$
3. We use Cauchy's integral formula to evaluate the integral.
$$\oint_{|z|=1} \frac{f(z)\, e^z}{z^2}\, dz = \frac{i2\pi}{1!}\left.\frac{d}{dz}\left( f(z)\, e^z \right)\right|_{z=0} = i2\pi\,\frac{1!\,1^3}{3^1}$$
$$\oint_{|z|=1} \frac{f(z)\, e^z}{z^2}\, dz = \frac{i2\pi}{3}$$
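The three integrals can be spot-checked numerically. This sketch (an addition, not part of the original solution) assumes the series in question is $f(z) = \sum_k k^3 (z/3)^k$, which is consistent with the ratio test above.

```python
# Numerical check of parts 1-3 of Solution 14.18 for
# f(z) = sum_k k^3 (z/3)^k, radius of convergence 3.
import cmath, math

def f(z, K=300):
    return sum(k**3*(z/3)**k for k in range(1, K))

def circle_integral(g, n=1000):
    # trapezoidal rule on |z| = 1; spectrally accurate for periodic integrands
    total = 0j
    for j in range(n):
        z = cmath.exp(2j*math.pi*j/n)
        total += g(z)*1j*z
    return total*2*math.pi/n

I1 = circle_integral(lambda z: cmath.exp(1j*z)*f(z))   # part 1: 0
I2 = circle_integral(lambda z: f(z)/z**4)              # part 2: i2*pi
I3 = circle_integral(lambda z: f(z)*cmath.exp(z)/z**2) # part 3: i2*pi/3
```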
Chapter 15
The Residue Theorem
Man will occasionally stumble over the truth, but most of the time he will pick himself up and continue on.
- Winston Churchill
15.1 The Residue Theorem
We will find that many integrals on closed contours may be evaluated in terms of the residues of a function. We first define residues and then prove the Residue Theorem.
Result 15.1.1 Residues. Let f(z) be single-valued and analytic in a deleted neighborhood of $z_0$. Then f(z) has the Laurent series expansion
$$f(z) = \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n.$$
The residue of f(z) at $z = z_0$ is the coefficient of the $\frac{1}{z-z_0}$ term:
$$\operatorname{Res}(f(z), z_0) = a_{-1}.$$
The residue at a branch point or non-isolated singularity is undefined as the Laurent series does not exist. If f(z) has a pole of order n at $z = z_0$ then we can use the Residue Formula:
$$\operatorname{Res}(f(z), z_0) = \lim_{z\to z_0}\left( \frac{1}{(n-1)!}\,\frac{d^{n-1}}{dz^{n-1}}\left[ (z-z_0)^n f(z) \right] \right).$$
See Exercise 15.1 for a proof of the Residue Formula.
Example 15.1.1 In Example 10.4.5 we showed that $f(z) = z/\sin z$ has first order poles at $z = n\pi$, $n \in \mathbb{Z}\setminus\{0\}$. Now we find the residues at these isolated singularities.
$$\operatorname{Res}\left( \frac{z}{\sin z}, z = n\pi \right) = \lim_{z\to n\pi}\left( (z-n\pi)\frac{z}{\sin z} \right) = n\pi\lim_{z\to n\pi}\frac{z-n\pi}{\sin z} = n\pi\lim_{z\to n\pi}\frac{1}{\cos z} = n\pi\,\frac{1}{(-1)^n} = (-1)^n n\pi$$
Residue Theorem. We can evaluate many integrals in terms of the residues of a function. Suppose f(z) has only one singularity, (at $z = z_0$), inside the simple, closed, positively oriented contour C. f(z) has a convergent Laurent series in some deleted disk about $z_0$. We deform C to lie in the disk. See Figure 15.1. We now evaluate $\oint_C f(z)\, dz$ by deforming the contour and using the Laurent series expansion of the function.

Figure 15.1: Deform the contour to lie in the deleted disk.
$$\oint_C f(z)\, dz = \oint_B f(z)\, dz = \oint_B \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n\, dz$$
$$= \sum_{\substack{n=-\infty \\ n\neq -1}}^{\infty} a_n\left[ \frac{(z-z_0)^{n+1}}{n+1} \right]_{r e^{i\theta}}^{r e^{i(\theta+2\pi)}} + a_{-1}\Big[ \log(z-z_0) \Big]_{r e^{i\theta}}^{r e^{i(\theta+2\pi)}} = a_{-1}\, i2\pi$$
$$\oint_C f(z)\, dz = i2\pi\operatorname{Res}(f(z), z_0)$$
Now assume that f(z) has n singularities at $z_1, \dots, z_n$. We deform C to n contours $C_1, \dots, C_n$ which enclose the singularities and lie in deleted disks about the singularities in which f(z) has convergent Laurent series. See Figure 15.2. We evaluate $\oint_C f(z)\, dz$ by deforming the contour.
$$\oint_C f(z)\, dz = \sum_{k=1}^{n}\oint_{C_k} f(z)\, dz = i2\pi\sum_{k=1}^{n}\operatorname{Res}(f(z), z_k)$$
Now instead let f(z) be analytic outside and on C except for isolated singularities at $\zeta_n$ in the domain outside C and perhaps an isolated singularity at infinity. Let a be any point in the interior of C. To evaluate $\oint_C f(z)\, dz$ we make the change of variables $\zeta = 1/(z-a)$. This maps the contour C to C'. (Note that C' is negatively oriented.) All the points outside C are mapped to points inside C' and vice versa. We can then evaluate the integral in terms of the singularities inside C'.
Figure 15.2: Deform the contour to n contours which enclose the n singularities.

$$\oint_C f(z)\, dz = \oint_{C'} f\!\left(\frac{1}{\zeta}+a\right)\frac{1}{\zeta^2}\, d\zeta = \oint_{C'} \frac{1}{z^2}\, f\!\left(\frac{1}{z}+a\right) dz$$
$$= i2\pi\sum_n\operatorname{Res}\left( \frac{1}{z^2}\, f\!\left(\frac{1}{z}+a\right), \frac{1}{\zeta_n-a} \right) + i2\pi\operatorname{Res}\left( \frac{1}{z^2}\, f\!\left(\frac{1}{z}+a\right), 0 \right).$$

Figure 15.3: The change of variables $\zeta = 1/(z-a)$.
Result 15.1.2 Residue Theorem. If f(z) is analytic in a compact, closed, connected domain D except for isolated singularities at $z_n$ in the interior of D then
$$\oint_{\partial D} f(z)\, dz = \sum_k\oint_{C_k} f(z)\, dz = i2\pi\sum_n\operatorname{Res}(f(z), z_n).$$
Here the set of contours $C_k$ make up the positively oriented boundary $\partial D$ of the domain D. If the boundary of the domain is a single contour C then the formula simplifies.
$$\oint_C f(z)\, dz = i2\pi\sum_n\operatorname{Res}(f(z), z_n)$$
If instead f(z) is analytic outside and on C except for isolated singularities at $\zeta_n$ in the domain outside C and perhaps an isolated singularity at infinity then
$$\oint_C f(z)\, dz = i2\pi\sum_n\operatorname{Res}\left( \frac{1}{z^2}\, f\!\left(\frac{1}{z}+a\right), \frac{1}{\zeta_n-a} \right) + i2\pi\operatorname{Res}\left( \frac{1}{z^2}\, f\!\left(\frac{1}{z}+a\right), 0 \right).$$
Here a is any point in the interior of C.
Example 15.1.2 Consider
$$\frac{1}{2\pi i}\oint_C \frac{\sin z}{z(z-1)}\, dz$$
where C is the positively oriented circle of radius 2 centered at the origin. Since the integrand is single-valued with only isolated singularities, the Residue Theorem applies. The value of the integral is the sum of the residues from singularities inside the contour.
The only places that the integrand could have singularities are z = 0 and z = 1. Since
$$\lim_{z\to 0}\frac{\sin z}{z} = \lim_{z\to 0}\frac{\cos z}{1} = 1,$$
there is a removable singularity at the point z = 0. There is no residue at this point.
Now we consider the point z = 1. Since $\sin(z)/z$ is analytic and nonzero at z = 1, that point is a first order pole of the integrand. The residue there is
$$\operatorname{Res}\left( \frac{\sin z}{z(z-1)}, z=1 \right) = \lim_{z\to 1}(z-1)\frac{\sin z}{z(z-1)} = \sin(1).$$
There is only one singular point with a residue inside the path of integration. The residue at this point is sin(1). Thus the value of the integral is
$$\frac{1}{2\pi i}\oint_C \frac{\sin z}{z(z-1)}\, dz = \sin(1)$$
Example 15.1.3 Evaluate the integral
$$\oint_C \frac{\cot z\coth z}{z^3}\, dz$$
where C is the unit circle about the origin in the positive direction.
The integrand is
$$\frac{\cot z\coth z}{z^3} = \frac{\cos z\cosh z}{z^3\sin z\sinh z}$$
$\sin z$ has zeros at $n\pi$. $\sinh z$ has zeros at $in\pi$. Thus the only pole inside the contour of integration is at z = 0. Since $\sin z$ and $\sinh z$ both have simple zeros at z = 0,
$$\sin z = z + O(z^3), \qquad \sinh z = z + O(z^3),$$
the integrand has a pole of order 5 at the origin. The residue at z = 0 is
$$\lim_{z\to 0}\frac{1}{4!}\frac{d^4}{dz^4}\left( z^5\,\frac{\cot z\coth z}{z^3} \right) = \lim_{z\to 0}\frac{1}{4!}\frac{d^4}{dz^4}\left( z^2\cot z\coth z \right)$$
$$= \frac{1}{4!}\lim_{z\to 0}\Big( 24\cot z\coth z\csc^2 z - 32z\coth z\csc^4 z - 16z\cos(2z)\coth z\csc^4 z$$
$$\quad + 22z^2\cot z\coth z\csc^4 z + 2z^2\cos(3z)\coth z\csc^5 z + 24\cot z\coth z\operatorname{csch}^2 z$$
$$\quad + 24\csc^2 z\operatorname{csch}^2 z - 48z\cot z\csc^2 z\operatorname{csch}^2 z - 48z\coth z\csc^2 z\operatorname{csch}^2 z$$
$$\quad + 24z^2\cot z\coth z\csc^2 z\operatorname{csch}^2 z + 16z^2\csc^4 z\operatorname{csch}^2 z + 8z^2\cos(2z)\csc^4 z\operatorname{csch}^2 z$$
$$\quad - 32z\cot z\operatorname{csch}^4 z - 16z\cosh(2z)\cot z\operatorname{csch}^4 z + 22z^2\cot z\coth z\operatorname{csch}^4 z$$
$$\quad + 16z^2\csc^2 z\operatorname{csch}^4 z + 8z^2\cosh(2z)\csc^2 z\operatorname{csch}^4 z + 2z^2\cosh(3z)\cot z\operatorname{csch}^5 z \Big)$$
$$= \frac{1}{4!}\left( -\frac{56}{15} \right) = -\frac{7}{45}$$
Since taking the fourth derivative of $z^2\cot z\coth z$ really sucks, we would like a more elegant way of finding the residue. We expand the functions in the integrand in Taylor series about the origin.
$$\frac{\cos z\cosh z}{z^3\sin z\sinh z} = \frac{\left(1-\frac{z^2}{2}+\frac{z^4}{24}-\cdots\right)\left(1+\frac{z^2}{2}+\frac{z^4}{24}+\cdots\right)}{z^3\left(z-\frac{z^3}{6}+\frac{z^5}{120}-\cdots\right)\left(z+\frac{z^3}{6}+\frac{z^5}{120}+\cdots\right)}$$
$$= \frac{1-\frac{z^4}{6}+\cdots}{z^3\left(z^2+z^6\left(\frac{1}{60}-\frac{1}{36}\right)+\cdots\right)} = \frac{1}{z^5}\,\frac{1-\frac{z^4}{6}+\cdots}{1-\frac{z^4}{90}+\cdots}$$
$$= \frac{1}{z^5}\left(1-\frac{z^4}{6}+\cdots\right)\left(1+\frac{z^4}{90}+\cdots\right)$$
$$= \frac{1}{z^5}\left(1-\frac{7}{45}z^4+\cdots\right)$$
$$= \frac{1}{z^5}-\frac{7}{45}\,\frac{1}{z}+\cdots$$
Thus we see that the residue is $-\frac{7}{45}$. Now we can evaluate the integral.
$$\oint_C \frac{\cot z\coth z}{z^3}\, dz = -i\frac{14}{45}\pi$$
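The series computation above can be repeated with exact rational arithmetic. This sketch (an addition, not part of the original text) uses Python's `fractions` module to multiply and invert truncated Taylor series.

```python
# Verify the residue -7/45 of cot(z)coth(z)/z^3 at z = 0 exactly, using
# rational Taylor-series arithmetic.
from fractions import Fraction as F
from math import factorial

N = 10  # truncation order of the power series

def mul(a, b):
    # product of two truncated series
    c = [F(0)]*N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i]*b[j]
    return c

def inv(a):
    # reciprocal of a series with constant term a[0] = 1
    b = [F(0)]*N
    b[0] = F(1)
    for n in range(1, N):
        b[n] = -sum(a[k]*b[n - k] for k in range(1, n + 1))
    return b

cosz  = [F((-1)**(k//2), factorial(k)) if k % 2 == 0 else F(0) for k in range(N)]
coshz = [F(1, factorial(k)) if k % 2 == 0 else F(0) for k in range(N)]
sinz_over_z  = [F((-1)**(k//2), factorial(k + 1)) if k % 2 == 0 else F(0) for k in range(N)]
sinhz_over_z = [F(1, factorial(k + 1)) if k % 2 == 0 else F(0) for k in range(N)]

# cot z coth z / z^3 = (cos z cosh z)/(z^5 (sin z/z)(sinh z/z)), so the
# residue (the 1/z coefficient) is the z^4 coefficient of the regular part.
regular = mul(mul(cosz, coshz), inv(mul(sinz_over_z, sinhz_over_z)))
residue = regular[4]
```

Because the arithmetic is exact, `residue` equals -7/45 with no rounding.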
15.2 Cauchy Principal Value for Real Integrals

15.2.1 The Cauchy Principal Value
First we recap improper integrals. If f(x) has a singularity at $x_0 \in (a \dots b)$ then
$$\int_a^b f(x)\, dx \equiv \lim_{\epsilon\to 0^+}\int_a^{x_0-\epsilon} f(x)\, dx + \lim_{\delta\to 0^+}\int_{x_0+\delta}^b f(x)\, dx.$$
For integrals on $(-\infty \dots \infty)$,
$$\int_{-\infty}^{\infty} f(x)\, dx \equiv \lim_{a,b\to\infty}\int_{-a}^{b} f(x)\, dx.$$
Example 15.2.1 $\int_{-1}^{1}\frac{1}{x}\, dx$ is divergent. We show this with the definition of improper integrals.
$$\int_{-1}^{1}\frac{1}{x}\, dx = \lim_{\epsilon\to 0^+}\int_{-1}^{-\epsilon}\frac{1}{x}\, dx + \lim_{\delta\to 0^+}\int_{\delta}^{1}\frac{1}{x}\, dx = \lim_{\epsilon\to 0^+}\big[\ln|x|\big]_{-1}^{-\epsilon} + \lim_{\delta\to 0^+}\big[\ln|x|\big]_{\delta}^{1} = \lim_{\epsilon\to 0^+}\ln\epsilon - \lim_{\delta\to 0^+}\ln\delta$$
The integral diverges because $\epsilon$ and $\delta$ approach zero independently.
Since 1/x is an odd function, it appears that the area under the curve is zero. Consider what would happen if $\epsilon$ and $\delta$ were not independent. If they approached zero symmetrically, $\delta = \epsilon$, then the value of the integral would be zero.
$$\lim_{\epsilon\to 0^+}\left( \int_{-1}^{-\epsilon} + \int_{\epsilon}^{1} \right)\frac{1}{x}\, dx = \lim_{\epsilon\to 0^+}(\ln\epsilon - \ln\epsilon) = 0$$
We could make the integral have any value we pleased by choosing $\delta = c\epsilon$.¹
$$\lim_{\epsilon\to 0^+}\left( \int_{-1}^{-\epsilon} + \int_{c\epsilon}^{1} \right)\frac{1}{x}\, dx = \lim_{\epsilon\to 0^+}(\ln\epsilon - \ln(c\epsilon)) = -\ln c$$
We have seen it is reasonable that
$$\int_{-1}^{1}\frac{1}{x}\, dx$$
¹ This may remind you of conditionally convergent series. You can rearrange the terms to make the series sum to any number.
has some meaning, and if we could evaluate the integral, the most reasonable value would be zero. The Cauchy principal value provides us with a way of evaluating such integrals. If f(x) is continuous on (a, b) except at the point $x_0 \in (a, b)$ then the Cauchy principal value of the integral is defined
$$-\!\!\!\!\int_a^b f(x)\, dx \equiv \lim_{\epsilon\to 0^+}\left( \int_a^{x_0-\epsilon} f(x)\, dx + \int_{x_0+\epsilon}^b f(x)\, dx \right).$$
The Cauchy principal value is obtained by approaching the singularity symmetrically. The principal value of the integral may exist when the integral diverges. If the integral exists, it is equal to the principal value of the integral.
The Cauchy principal value of $\int_{-1}^{1}\frac{1}{x}\, dx$ is defined
$$-\!\!\!\!\int_{-1}^{1}\frac{1}{x}\, dx \equiv \lim_{\epsilon\to 0^+}\left( \int_{-1}^{-\epsilon}\frac{1}{x}\, dx + \int_{\epsilon}^{1}\frac{1}{x}\, dx \right) = \lim_{\epsilon\to 0^+}\left( \big[\log|x|\big]_{-1}^{-\epsilon} + \big[\log|x|\big]_{\epsilon}^{1} \right) = \lim_{\epsilon\to 0^+}\left( \log|-\epsilon| - \log|\epsilon| \right) = 0.$$
(Another notation for the principal value of an integral is $\mathrm{PV}\int f(x)\, dx$.) Since the limits of integration approach zero symmetrically, the two halves of the integral cancel. If the limits of integration approached zero independently, (the definition of the integral), then the two halves would both diverge.
Example 15.2.2 $\int_{-\infty}^{\infty}\frac{x}{x^2+1}\, dx$ is divergent. We show this with the definition of improper integrals.
$$\int_{-\infty}^{\infty}\frac{x}{x^2+1}\, dx = \lim_{a,b\to\infty}\int_{-a}^{b}\frac{x}{x^2+1}\, dx = \lim_{a,b\to\infty}\left[ \frac{1}{2}\ln(x^2+1) \right]_{-a}^{b} = \frac{1}{2}\lim_{a,b\to\infty}\ln\left( \frac{b^2+1}{a^2+1} \right)$$
The integral diverges because a and b approach infinity independently. Now consider what would happen if a and b were not independent. If they approached infinity symmetrically, a = b, then the value of the integral would be zero.
$$\frac{1}{2}\lim_{b\to\infty}\ln\left( \frac{b^2+1}{b^2+1} \right) = 0$$
We could make the integral have any value we pleased by choosing a = cb.
We could make the integral have any value we pleased by choosing a = cb.
We can assign a meaning to divergent integrals of the form
_

f(x) dx with the Cauchy principal value. The


Cauchy principal value of the integral is dened

f(x) dx = lim
a
_
a
a
f(x) dx.
The Cauchy principal value is obtained by approaching innity symmetrically.
The Cauchy principal value of
_

x
x
2
+1
dx is dened

x
x
2
+ 1
dx = lim
a
_
a
a
x
x
2
+ 1
dx
= lim
a
_
1
2
ln
_
x
2
+ 1
_
_
a
a
= 0.
501
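The dependence on how the limits are taken can be illustrated numerically. This sketch (an addition, not in the original) contrasts a symmetric truncation of the integral above with a lopsided one, a = 2b, which tends to $-\ln 2$ instead of 0.

```python
# Symmetric vs. lopsided truncation of the divergent integral
# ∫ x/(x²+1) dx; the symmetric limit is the principal value 0.
import math

def trunc_integral(a, b, n=300000):
    # midpoint rule for ∫_{-a}^{b} x/(x²+1) dx
    h = (a + b)/n
    total = 0.0
    for k in range(n):
        x = -a + (k + 0.5)*h
        total += x/(x*x + 1)
    return total*h

sym = trunc_integral(1000.0, 1000.0)   # -> 0 (the principal value)
lop = trunc_integral(2000.0, 1000.0)   # -> (1/2) ln((b²+1)/(a²+1)) ≈ -ln 2
```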
Result 15.2.1 Cauchy Principal Value. If f(x) is continuous on (a, b) except at the point $x_0 \in (a, b)$ then the integral of f(x) is defined
$$\int_a^b f(x)\, dx = \lim_{\epsilon\to 0^+}\int_a^{x_0-\epsilon} f(x)\, dx + \lim_{\delta\to 0^+}\int_{x_0+\delta}^b f(x)\, dx.$$
The Cauchy principal value of the integral is defined
$$-\!\!\!\!\int_a^b f(x)\, dx = \lim_{\epsilon\to 0^+}\left( \int_a^{x_0-\epsilon} f(x)\, dx + \int_{x_0+\epsilon}^b f(x)\, dx \right).$$
If f(x) is continuous on $(-\infty, \infty)$ then the integral of f(x) is defined
$$\int_{-\infty}^{\infty} f(x)\, dx = \lim_{a,b\to\infty}\int_{-a}^{b} f(x)\, dx.$$
The Cauchy principal value of the integral is defined
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, dx = \lim_{a\to\infty}\int_{-a}^{a} f(x)\, dx.$$
The principal value of the integral may exist when the integral diverges. If the integral exists, it is equal to the principal value of the integral.
Example 15.2.3 Clearly $\int_{-\infty}^{\infty} x\, dx$ diverges, however the Cauchy principal value exists.
$$-\!\!\!\!\int_{-\infty}^{\infty} x\, dx = \lim_{a\to\infty}\left[ \frac{x^2}{2} \right]_{-a}^{a} = 0$$
In general, if f(x) is an odd function with no singularities on the finite real axis then
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, dx = 0.$$
15.3 Cauchy Principal Value for Contour Integrals
Example 15.3.1 Consider the integral
$$\oint_{C_r} \frac{1}{z-1}\, dz,$$
where $C_r$ is the positively oriented circle of radius r and center at the origin. From the residue theorem, we know that the integral is
$$\oint_{C_r} \frac{1}{z-1}\, dz = \begin{cases} 0 & \text{for } r < 1, \\ i2\pi & \text{for } r > 1. \end{cases}$$
When r = 1, the integral diverges, as there is a first order pole on the path of integration. However, the principal value of the integral exists.
$$-\!\!\!\!\oint_{C_r} \frac{1}{z-1}\, dz = \lim_{\epsilon\to 0^+}\int_{\epsilon}^{2\pi-\epsilon} \frac{1}{e^{i\theta}-1}\, i e^{i\theta}\, d\theta = \lim_{\epsilon\to 0^+}\Big[ \log\left( e^{i\theta}-1 \right) \Big]_{\epsilon}^{2\pi-\epsilon}$$
We choose the branch of the logarithm with a branch cut on the positive real axis and $\arg\log z \in (0, 2\pi)$.
$$= \lim_{\epsilon\to 0^+}\left( \log\left( e^{i(2\pi-\epsilon)}-1 \right) - \log\left( e^{i\epsilon}-1 \right) \right)$$
$$= \lim_{\epsilon\to 0^+}\left( \log\left( \left(1-i\epsilon+O(\epsilon^2)\right)-1 \right) - \log\left( \left(1+i\epsilon+O(\epsilon^2)\right)-1 \right) \right)$$
$$= \lim_{\epsilon\to 0^+}\left( \log\left( -i\epsilon+O(\epsilon^2) \right) - \log\left( i\epsilon+O(\epsilon^2) \right) \right)$$
$$= \lim_{\epsilon\to 0^+}\left( \operatorname{Log}\left( \epsilon+O(\epsilon^2) \right) + i\arg\left( -i\epsilon+O(\epsilon^2) \right) - \operatorname{Log}\left( \epsilon+O(\epsilon^2) \right) - i\arg\left( i\epsilon+O(\epsilon^2) \right) \right)$$
$$= i\frac{3\pi}{2} - i\frac{\pi}{2} = i\pi$$
Thus we obtain
$$-\!\!\!\!\oint_{C_r} \frac{1}{z-1}\, dz = \begin{cases} 0 & \text{for } r < 1, \\ i\pi & \text{for } r = 1, \\ i2\pi & \text{for } r > 1. \end{cases}$$
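The principal value just derived can be approached numerically by integrating over $\theta \in (\epsilon, 2\pi-\epsilon)$ and shrinking $\epsilon$. A sketch, not part of the original text:

```python
# Principal value of ∮_{|z|=1} dz/(z-1) via symmetric exclusion of the
# pole at θ = 0; the limit is iπ.
import cmath, math

def pv_on_circle(eps, n=200000):
    h = (2*math.pi - 2*eps)/n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j*(eps + (k + 0.5)*h))
        total += 1j*z/(z - 1)   # dz = i e^{iθ} dθ
    return total*h
```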
In the above example we evaluated the contour integral by parameterizing the contour. This approach is
only feasible when the integrand is simple. We would like to use the residue theorem to more easily evaluate the
principal value of the integral. But before we do that, we will need a preliminary result.
Result 15.3.1 Let f(z) have a first order pole at $z = z_0$ and let $(z-z_0)f(z)$ be analytic in some neighborhood of $z_0$. Let the contour $C_\epsilon$ be a circular arc from $z_0+\epsilon e^{i\alpha}$ to $z_0+\epsilon e^{i\beta}$. (We assume that $\beta > \alpha$ and $\beta-\alpha < 2\pi$.)
$$\lim_{\epsilon\to 0^+}\int_{C_\epsilon} f(z)\, dz = i(\beta-\alpha)\operatorname{Res}(f(z), z_0)$$
The contour is shown in Figure 15.4. (See Exercise 15.6 for a proof of this result.)

Figure 15.4: The $C_\epsilon$ Contour
Example 15.3.2 Consider
$$-\!\!\!\!\oint_C \frac{1}{z-1}\, dz$$
where C is the unit circle. Let $C_p$ be the circular arc of radius 1 that starts and ends a distance of $\epsilon$ from z = 1. Let $C_\epsilon$ be the positive, circular arc of radius $\epsilon$ with center at z = 1 that joins the endpoints of $C_p$. Let $C_i$ be the union of $C_p$ and $C_\epsilon$. ($C_p$ stands for Principal value Contour; $C_i$ stands for Indented Contour.) $C_i$ is an indented contour that avoids the first order pole at z = 1. Figure 15.5 shows the three contours.

Figure 15.5: The Indented Contour.

Note that the principal value of the integral is
$$-\!\!\!\!\oint_C \frac{1}{z-1}\, dz = \lim_{\epsilon\to 0^+}\int_{C_p} \frac{1}{z-1}\, dz.$$
We can calculate the integral along $C_i$ with the residue theorem.
$$\int_{C_i} \frac{1}{z-1}\, dz = i2\pi$$
We can calculate the integral along $C_\epsilon$ using Result 15.3.1. Note that as $\epsilon\to 0^+$, the contour becomes a semi-circle, a circular arc of $\pi$ radians.
$$\lim_{\epsilon\to 0^+}\int_{C_\epsilon} \frac{1}{z-1}\, dz = i\pi\operatorname{Res}\left( \frac{1}{z-1}, 1 \right) = i\pi$$
Now we can write the principal value of the integral along C in terms of the two known integrals.
$$-\!\!\!\!\oint_C \frac{1}{z-1}\, dz = \int_{C_i} \frac{1}{z-1}\, dz - \int_{C_\epsilon} \frac{1}{z-1}\, dz = i2\pi - i\pi = i\pi$$
In the previous example, we formed an indented contour that included the first order pole. You can show that if we had indented the contour to exclude the pole, we would obtain the same result. (See Exercise 15.8.)
We can extend the residue theorem to principal values of integrals. (See Exercise 15.7.)
Result 15.3.2 Residue Theorem for Principal Values. Let f(z) be analytic inside and on a simple, closed, positive contour C, except for isolated singularities at $z_1, \dots, z_m$ inside the contour and first order poles at $\zeta_1, \dots, \zeta_n$ on the contour. Further, let the contour be $C^1$ at the locations of these first order poles. (i.e., the contour does not have a corner at any of the first order poles.) Then the principal value of the integral of f(z) along C is
$$-\!\!\!\!\oint_C f(z)\, dz = i2\pi\sum_{j=1}^{m}\operatorname{Res}(f(z), z_j) + i\pi\sum_{j=1}^{n}\operatorname{Res}(f(z), \zeta_j).$$
15.4 Integrals on the Real Axis
Example 15.4.1 We wish to evaluate the integral
$$\int_{-\infty}^{\infty} \frac{1}{x^2+1}\, dx.$$
We can evaluate this integral directly using calculus.
$$\int_{-\infty}^{\infty} \frac{1}{x^2+1}\, dx = \big[ \arctan x \big]_{-\infty}^{\infty} = \pi$$
Now we will evaluate the integral using contour integration. Let $C_R$ be the semicircular arc from R to $-R$ in the upper half plane. Let C be the union of $C_R$ and the interval $[-R, R]$.
We can evaluate the integral along C with the residue theorem. The integrand has first order poles at $z = \pm i$. For R > 1, we have
$$\oint_C \frac{1}{z^2+1}\, dz = i2\pi\operatorname{Res}\left( \frac{1}{z^2+1}, i \right) = i2\pi\,\frac{1}{i2} = \pi.$$
Now we examine the integral along $C_R$. We use the maximum modulus integral bound to show that the value of the integral vanishes as $R\to\infty$.
$$\left| \int_{C_R} \frac{1}{z^2+1}\, dz \right| \le \pi R\max_{z\in C_R}\left| \frac{1}{z^2+1} \right| = \pi R\,\frac{1}{R^2-1} \to 0 \text{ as } R\to\infty.$$
Now we are prepared to evaluate the original real integral.
$$\oint_C \frac{1}{z^2+1}\, dz = \int_{-R}^{R} \frac{1}{x^2+1}\, dx + \int_{C_R} \frac{1}{z^2+1}\, dz = \pi$$
We take the limit as $R\to\infty$.
$$\int_{-\infty}^{\infty} \frac{1}{x^2+1}\, dx = \pi$$
We would get the same result by closing the path of integration in the lower half plane. Note that in this case the closed contour would be in the negative direction.
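The two pieces of the contour can be evaluated numerically to illustrate the argument: the segment plus the arc reproduce $\pi$, and the arc obeys the maximum modulus bound. A sketch (not in the original), with R = 100 an arbitrary choice:

```python
# Segment [-R, R] plus semicircular arc for ∮ dz/(z²+1) = π.
import cmath, math

def segment(R, n=200000):
    h = 2*R/n
    return sum(1/((-R + (k + 0.5)*h)**2 + 1) for k in range(n))*h

def arc(R, n=20000):
    h = math.pi/n
    total = 0j
    for k in range(n):
        z = R*cmath.exp(1j*(k + 0.5)*h)
        total += 1j*z/(z*z + 1)   # dz = i R e^{iθ} dθ
    return total*h

R = 100.0
closed = segment(R) + arc(R)   # ≈ π for any R > 1
```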
If you are really observant, you may have noticed that we did something a little funny in evaluating
$$\int_{-\infty}^{\infty} \frac{1}{x^2+1}\, dx.$$
The definition of this improper integral is
$$\int_{-\infty}^{\infty} \frac{1}{x^2+1}\, dx = \lim_{a\to+\infty}\int_{-a}^{0} \frac{1}{x^2+1}\, dx + \lim_{b\to+\infty}\int_{0}^{b} \frac{1}{x^2+1}\, dx.$$
In the above example we instead computed
$$\lim_{R\to+\infty}\int_{-R}^{R} \frac{1}{x^2+1}\, dx.$$
Note that for some integrands, the former and latter are not the same. Consider the integral of $\frac{x}{x^2+1}$.
$$\int_{-\infty}^{\infty} \frac{x}{x^2+1}\, dx = \lim_{a\to+\infty}\int_{-a}^{0} \frac{x}{x^2+1}\, dx + \lim_{b\to+\infty}\int_{0}^{b} \frac{x}{x^2+1}\, dx = \lim_{a\to+\infty}\left( -\frac{1}{2}\log|a^2+1| \right) + \lim_{b\to+\infty}\left( \frac{1}{2}\log|b^2+1| \right)$$
Note that the limits do not exist and hence the integral diverges. We get a different result if the limits of integration approach infinity symmetrically.
$$\lim_{R\to+\infty}\int_{-R}^{R} \frac{x}{x^2+1}\, dx = \lim_{R\to+\infty}\left( \frac{1}{2}\left( \log|R^2+1| - \log|R^2+1| \right) \right) = 0$$
(Note that the integrand is an odd function, so the integral from $-R$ to R is zero.) We call this the principal value of the integral and denote it by writing "PV" in front of the integral sign or putting a dash through the integral.
$$\mathrm{PV}\int_{-\infty}^{\infty} f(x)\, dx \equiv -\!\!\!\!\int_{-\infty}^{\infty} f(x)\, dx \equiv \lim_{R\to+\infty}\int_{-R}^{R} f(x)\, dx$$
The principal value of an integral may exist when the integral diverges. If the integral does converge, then it is equal to its principal value.
We can use the method of Example 15.4.1 to evaluate the principal value of integrals of functions that vanish fast enough at infinity.
Result 15.4.1 Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Let $C_R$ be the semi-circle from R to $-R$ in the upper half plane. If
$$\lim_{R\to\infty}\left( R\max_{z\in C_R}|f(z)| \right) = 0$$
then
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, dx = i2\pi\sum_{k=1}^{m}\operatorname{Res}(f(z), z_k) + i\pi\sum_{k=1}^{n}\operatorname{Res}(f(z), x_k)$$
where $z_1, \dots, z_m$ are the singularities of f(z) in the upper half plane and $x_1, \dots, x_n$ are the first order poles on the real axis.
Now let $C_R$ be the semi-circle from R to $-R$ in the lower half plane. If
$$\lim_{R\to\infty}\left( R\max_{z\in C_R}|f(z)| \right) = 0$$
then
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, dx = -i2\pi\sum_{k=1}^{m}\operatorname{Res}(f(z), z_k) - i\pi\sum_{k=1}^{n}\operatorname{Res}(f(z), x_k)$$
where $z_1, \dots, z_m$ are the singularities of f(z) in the lower half plane and $x_1, \dots, x_n$ are the first order poles on the real axis.
This result is proved in Exercise 15.9. Of course we can use this result to evaluate the integrals of the form
$$\int_0^{\infty} f(z)\, dz,$$
where f(x) is an even function.
15.5 Fourier Integrals
In order to do Fourier transforms, which are useful in solving differential equations, it is necessary to be able to calculate Fourier integrals. Fourier integrals have the form
$$\int_{-\infty}^{\infty} e^{i\omega x} f(x)\, dx.$$
We evaluate these integrals by closing the path of integration in the lower or upper half plane and using techniques of contour integration.
Consider the integral
$$\int_0^{\pi/2} e^{-R\sin\theta}\, d\theta.$$
Since $2\theta/\pi \le \sin\theta$ for $0 \le \theta \le \pi/2$,
$$e^{-R\sin\theta} \le e^{-R2\theta/\pi} \quad \text{for } 0 \le \theta \le \pi/2$$
$$\int_0^{\pi/2} e^{-R\sin\theta}\, d\theta \le \int_0^{\pi/2} e^{-R2\theta/\pi}\, d\theta = \left[ -\frac{\pi}{2R}\, e^{-R2\theta/\pi} \right]_0^{\pi/2} = -\frac{\pi}{2R}\left( e^{-R}-1 \right) \le \frac{\pi}{2R} \to 0 \text{ as } R\to\infty$$
We can use this to prove the following Result 15.5.1. (See Exercise 15.13.)
Result 15.5.1 Jordan's Lemma.
$$\int_0^{\pi} e^{-R\sin\theta}\, d\theta < \frac{\pi}{R}.$$
Suppose that f(z) vanishes as $|z|\to\infty$. If $\omega$ is a (positive/negative) real number and $C_R$ is a semi-circle of radius R in the (upper/lower) half plane then the integral
$$\int_{C_R} f(z)\, e^{i\omega z}\, dz$$
vanishes as $R\to\infty$.
We can use Jordan's Lemma and the Residue Theorem to evaluate many Fourier integrals. Consider
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx,$$
where $\omega$ is a positive real number. Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Let C be the contour from $-R$ to R on the real axis and then back to $-R$ along a semi-circle in the upper half plane. If R is large enough so that C encloses all the singularities of f(z) in the upper half plane then
$$\oint_C f(z)\, e^{i\omega z}\, dz = i2\pi\sum_{k=1}^{m}\operatorname{Res}(f(z)\, e^{i\omega z}, z_k) + i\pi\sum_{k=1}^{n}\operatorname{Res}(f(z)\, e^{i\omega z}, x_k)$$
where $z_1, \dots, z_m$ are the singularities of f(z) in the upper half plane and $x_1, \dots, x_n$ are the first order poles on the real axis. If f(z) vanishes as $|z|\to\infty$ then the integral on $C_R$ vanishes as $R\to\infty$ by Jordan's Lemma.
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx = i2\pi\sum_{k=1}^{m}\operatorname{Res}(f(z)\, e^{i\omega z}, z_k) + i\pi\sum_{k=1}^{n}\operatorname{Res}(f(z)\, e^{i\omega z}, x_k)$$
For negative $\omega$ we close the path of integration in the lower half plane. Note that the contour is then in the negative direction.
Result 15.5.2 Fourier Integrals. Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that f(z) vanishes as $|z|\to\infty$. If $\omega$ is a positive real number then
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx = i2\pi\sum_{k=1}^{m}\operatorname{Res}(f(z)\, e^{i\omega z}, z_k) + i\pi\sum_{k=1}^{n}\operatorname{Res}(f(z)\, e^{i\omega z}, x_k)$$
where $z_1, \dots, z_m$ are the singularities of f(z) in the upper half plane and $x_1, \dots, x_n$ are the first order poles on the real axis. If $\omega$ is a negative real number then
$$-\!\!\!\!\int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx = -i2\pi\sum_{k=1}^{m}\operatorname{Res}(f(z)\, e^{i\omega z}, z_k) - i\pi\sum_{k=1}^{n}\operatorname{Res}(f(z)\, e^{i\omega z}, x_k)$$
where $z_1, \dots, z_m$ are the singularities of f(z) in the lower half plane and $x_1, \dots, x_n$ are the first order poles on the real axis.
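Result 15.5.2 can be spot-checked on $f(x) = 1/(1+x^2)$ with $\omega = 2$: the single upper-half-plane pole at z = i gives $i2\pi\operatorname{Res}(e^{i\omega z}/(1+z^2), i) = \pi e^{-\omega}$. A numerical sketch, not part of the original text:

```python
# Fourier integral ∫ e^{iωx}/(1+x²) dx ≈ π e^{-ω} for ω = 2.
import cmath, math

def fourier_integral(omega, R=500.0, n=1000000):
    h = 2*R/n
    total = 0j
    for k in range(n):
        x = -R + (k + 0.5)*h
        total += cmath.exp(1j*omega*x)/(1 + x*x)
    return total*h

val = fourier_integral(2.0)
```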
15.6 Fourier Cosine and Sine Integrals
Fourier cosine and sine integrals have the form,
$$\int_0^{\infty} f(x)\cos(\omega x)\, dx \quad \text{and} \quad \int_0^{\infty} f(x)\sin(\omega x)\, dx.$$
If f(x) is even/odd then we can evaluate the cosine/sine integral with the method we developed for Fourier integrals.
Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that f(x) is an even function and that f(z) vanishes as $|z|\to\infty$. We consider real $\omega > 0$.
$$-\!\!\!\!\int_0^{\infty} f(x)\cos(\omega x)\, dx = \frac{1}{2}\, -\!\!\!\!\int_{-\infty}^{\infty} f(x)\cos(\omega x)\, dx$$
Since $f(x)\sin(\omega x)$ is an odd function,
$$\frac{1}{2}\, -\!\!\!\!\int_{-\infty}^{\infty} f(x)\sin(\omega x)\, dx = 0.$$
Thus
$$-\!\!\!\!\int_0^{\infty} f(x)\cos(\omega x)\, dx = \frac{1}{2}\, -\!\!\!\!\int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx$$
Now we apply Result 15.5.2.
$$-\!\!\!\!\int_0^{\infty} f(x)\cos(\omega x)\, dx = i\pi\sum_{k=1}^{m}\operatorname{Res}(f(z)\, e^{i\omega z}, z_k) + \frac{i\pi}{2}\sum_{k=1}^{n}\operatorname{Res}(f(z)\, e^{i\omega z}, x_k)$$
where $z_1, \dots, z_m$ are the singularities of f(z) in the upper half plane and $x_1, \dots, x_n$ are the first order poles on the real axis.
If f(x) is an odd function, we note that $f(x)\cos(\omega x)$ is an odd function to obtain the analogous result for Fourier sine integrals.

Result 15.6.1 Fourier Cosine and Sine Integrals. Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that f(x) is an even function and that f(z) vanishes as $|z|\to\infty$. We consider real $\omega > 0$.
$$-\!\!\!\!\int_0^{\infty} f(x)\cos(\omega x)\, dx = i\pi\sum_{k=1}^{m}\operatorname{Res}(f(z)\, e^{i\omega z}, z_k) + \frac{i\pi}{2}\sum_{k=1}^{n}\operatorname{Res}(f(z)\, e^{i\omega z}, x_k)$$
where $z_1, \dots, z_m$ are the singularities of f(z) in the upper half plane and $x_1, \dots, x_n$ are the first order poles on the real axis. If f(x) is an odd function then,
$$-\!\!\!\!\int_0^{\infty} f(x)\sin(\omega x)\, dx = \pi\sum_{k=1}^{m}\operatorname{Res}(f(z)\, e^{-i\omega z}, \zeta_k) + \frac{\pi}{2}\sum_{k=1}^{n}\operatorname{Res}(f(z)\, e^{-i\omega z}, x_k)$$
where $\zeta_1, \dots, \zeta_m$ are the singularities of f(z) in the lower half plane and $x_1, \dots, x_n$ are the first order poles on the real axis.
Now suppose that f(x) is neither even nor odd. We can evaluate integrals of the form:
$$\int_{-\infty}^{\infty} f(x)\cos(\omega x)\, dx \quad \text{and} \quad \int_{-\infty}^{\infty} f(x)\sin(\omega x)\, dx$$
by writing them in terms of Fourier integrals
$$\int_{-\infty}^{\infty} f(x)\cos(\omega x)\, dx = \frac{1}{2}\int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx + \frac{1}{2}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx$$
$$\int_{-\infty}^{\infty} f(x)\sin(\omega x)\, dx = -\frac{i}{2}\int_{-\infty}^{\infty} f(x)\, e^{i\omega x}\, dx + \frac{i}{2}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx$$
15.7 Contour Integration and Branch Cuts
Example 15.7.1 Consider
$$\int_0^{\infty} \frac{x^{-a}}{x+1}\, dx, \quad 0 < a < 1,$$
where $x^{-a}$ denotes $\exp(-a\ln(x))$. We choose the branch of the function
$$f(z) = \frac{z^{-a}}{z+1}, \quad |z| > 0,\ 0 < \arg z < 2\pi,$$
with a branch cut on the positive real axis.
Let $C_\epsilon$ and $C_R$ denote the circular arcs of radius $\epsilon$ and R where $\epsilon < 1 < R$. $C_\epsilon$ is negatively oriented; $C_R$ is positively oriented. Consider the closed contour C that is traced by a point moving from $C_\epsilon$ to $C_R$ above the branch cut, next around $C_R$, then below the cut to $C_\epsilon$, and finally around $C_\epsilon$. (See Figure 15.6.)
We write f(z) in polar coordinates.
$$f(z) = \frac{\exp(-a\log z)}{z+1} = \frac{\exp(-a(\log r + i\theta))}{r e^{i\theta}+1}$$
We evaluate the function above, $(z = r e^{i0})$, and below, $(z = r e^{i2\pi})$, the branch cut.
$$f(r e^{i0}) = \frac{\exp(-a(\log r + i0))}{r+1} = \frac{r^{-a}}{r+1}$$
$$f(r e^{i2\pi}) = \frac{\exp(-a(\log r + i2\pi))}{r+1} = \frac{r^{-a}\, e^{-i2\pi a}}{r+1}.$$

Figure 15.6:
We use the residue theorem to evaluate the integral along C.
$$\oint_C f(z)\, dz = i2\pi\operatorname{Res}(f(z), -1)$$
$$\int_\epsilon^R \frac{r^{-a}}{r+1}\, dr + \int_{C_R} f(z)\, dz - \int_\epsilon^R \frac{r^{-a}\, e^{-i2\pi a}}{r+1}\, dr + \int_{C_\epsilon} f(z)\, dz = i2\pi\operatorname{Res}(f(z), -1)$$
The residue is
$$\operatorname{Res}(f(z), -1) = \exp(-a\log(-1)) = \exp(-a(\log 1 + i\pi)) = e^{-i\pi a}.$$
We bound the integrals along $C_\epsilon$ and $C_R$ with the maximum modulus integral bound.
$$\left| \int_{C_\epsilon} f(z)\, dz \right| \le 2\pi\epsilon\,\frac{\epsilon^{-a}}{1-\epsilon} = 2\pi\,\frac{\epsilon^{1-a}}{1-\epsilon} \qquad \left| \int_{C_R} f(z)\, dz \right| \le 2\pi R\,\frac{R^{-a}}{R-1} = 2\pi\,\frac{R^{1-a}}{R-1}$$
Since 0 < a < 1, the values of the integrals tend to zero as $\epsilon\to 0$ and $R\to\infty$. Thus we have
$$\int_0^{\infty} \frac{r^{-a}}{r+1}\, dr = \frac{i2\pi\, e^{-i\pi a}}{1-e^{-i2\pi a}}$$
$$\int_0^{\infty} \frac{x^{-a}}{x+1}\, dx = \frac{\pi}{\sin(\pi a)}$$
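The closed form $\pi/\sin(\pi a)$ can be checked numerically. The substitution $x = e^t$ used below is an addition, not part of the original text; it turns the integral into one with a rapidly decaying integrand on the whole real line.

```python
# Check ∫₀^∞ x^{-a}/(1+x) dx = π/sin(πa) for a = 1/3 via x = e^t:
#   ∫_ℝ e^{(1-a)t}/(1+e^t) dt.
import math

def branch_integral(a, T=60.0, n=300000):
    h = 2*T/n
    total = 0.0
    for k in range(n):
        t = -T + (k + 0.5)*h
        total += math.exp((1 - a)*t)/(1 + math.exp(t))
    return total*h

val = branch_integral(1.0/3.0)
```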
Result 15.7.1 Integrals from Zero to Infinity. Let f(z) be a single-valued analytic function with only isolated singularities and no singularities on the positive, real axis, $[0, \infty)$. Let $a\notin\mathbb{Z}$. If the integrals exist then,
$$\int_0^{\infty} f(x)\, dx = -\sum_{k=1}^{n}\operatorname{Res}(f(z)\log z, z_k),$$
$$\int_0^{\infty} x^a f(x)\, dx = \frac{i2\pi}{1-e^{i2\pi a}}\sum_{k=1}^{n}\operatorname{Res}(z^a f(z), z_k),$$
$$\int_0^{\infty} f(x)\log x\, dx = -\frac{1}{2}\sum_{k=1}^{n}\operatorname{Res}\left( f(z)\log^2 z, z_k \right) + i\pi\sum_{k=1}^{n}\operatorname{Res}(f(z)\log z, z_k),$$
$$\int_0^{\infty} x^a f(x)\log x\, dx = \frac{i2\pi}{1-e^{i2\pi a}}\sum_{k=1}^{n}\operatorname{Res}(z^a f(z)\log z, z_k) + \frac{\pi^2}{\sin^2(\pi a)}\sum_{k=1}^{n}\operatorname{Res}(z^a f(z), z_k),$$
$$\int_0^{\infty} x^a f(x)\log^m x\, dx = \frac{\partial^m}{\partial a^m}\left( \frac{i2\pi}{1-e^{i2\pi a}}\sum_{k=1}^{n}\operatorname{Res}(z^a f(z), z_k) \right),$$
where $z_1, \dots, z_n$ are the singularities of f(z) and there is a branch cut on the positive real axis with $0 < \arg(z) < 2\pi$.
15.8 Exploiting Symmetry
We have already used symmetry of the integrand to evaluate certain integrals. For f(x) an even function we were able to evaluate $\int_0^\infty f(x)\, dx$ by extending the range of integration from $-\infty$ to $\infty$. For $\int_0^\infty x^\alpha f(x)\, dx$ we put a branch cut on the positive real axis and noted that the value of the integrand below the branch cut is a constant multiple of the value of the function above the branch cut. This enabled us to evaluate the real integral with contour integration. In this section we will use other kinds of symmetry to evaluate integrals. We will discover that periodicity of the integrand will produce this symmetry.

15.8.1 Wedge Contours
We note that $z^n = r^n e^{in\theta}$ is periodic in $\theta$ with period $2\pi/n$. The real and imaginary parts of $z^n$ are odd periodic in $\theta$ with period $\pi/n$. This observation suggests that certain integrals on the positive real axis may be evaluated by closing the path of integration with a wedge contour.
Example 15.8.1 Consider
$$\int_0^{\infty} \frac{1}{1+x^n}\, dx$$
where $n\in\mathbb{N}$, $n\ge 2$. We can evaluate this integral using Result 15.7.1.
$$\int_0^{\infty} \frac{1}{1+x^n}\, dx = -\sum_{k=0}^{n-1}\operatorname{Res}\left( \frac{\log z}{1+z^n}, e^{i\pi(1+2k)/n} \right)$$
$$= -\sum_{k=0}^{n-1}\lim_{z\to e^{i\pi(1+2k)/n}}\left( \frac{\left(z-e^{i\pi(1+2k)/n}\right)\log z}{1+z^n} \right)$$
$$= -\sum_{k=0}^{n-1}\lim_{z\to e^{i\pi(1+2k)/n}}\left( \frac{\log z + \left(z-e^{i\pi(1+2k)/n}\right)/z}{n z^{n-1}} \right)$$
$$= -\sum_{k=0}^{n-1}\frac{i\pi(1+2k)/n}{n\, e^{i\pi(1+2k)(n-1)/n}}$$
$$= -\frac{i\pi}{n^2}\, e^{-i\pi(n-1)/n}\sum_{k=0}^{n-1}(1+2k)\, e^{i2\pi k/n}$$
$$= \frac{i2\pi\, e^{i\pi/n}}{n^2}\sum_{k=1}^{n-1} k\, e^{i2\pi k/n}$$
$$= \frac{i2\pi\, e^{i\pi/n}}{n^2}\,\frac{n}{e^{i2\pi/n}-1}$$
$$= \frac{\pi}{n\sin(\pi/n)}$$
This is a bit grungy. To find a spiffier way to evaluate the integral we note that if we write the integrand as a function of r and $\theta$, it is periodic in $\theta$ with period $2\pi/n$.
$$\frac{1}{1+z^n} = \frac{1}{1+r^n e^{in\theta}}$$
The integrand along the rays $\theta = 2\pi/n, 4\pi/n, 6\pi/n, \dots$ has the same value as the integrand on the real axis. Consider the contour C that is the boundary of the wedge $0 < r < R$, $0 < \theta < 2\pi/n$. There is one singularity inside the contour. We evaluate the residue there.
$$\operatorname{Res}\left( \frac{1}{1+z^n}, e^{i\pi/n} \right) = \lim_{z\to e^{i\pi/n}}\frac{z-e^{i\pi/n}}{1+z^n} = \lim_{z\to e^{i\pi/n}}\frac{1}{n z^{n-1}} = -\frac{e^{i\pi/n}}{n}$$
We evaluate the integral along C with the residue theorem.
$$\oint_C \frac{1}{1+z^n}\, dz = -\frac{i2\pi\, e^{i\pi/n}}{n}$$
Let $C_R$ be the circular arc. The integral along $C_R$ vanishes as $R\to\infty$.
$$\left| \int_{C_R} \frac{1}{1+z^n}\, dz \right| \le \frac{2\pi R}{n}\max_{z\in C_R}\left| \frac{1}{1+z^n} \right| \le \frac{2\pi R}{n}\,\frac{1}{R^n-1} \to 0 \text{ as } R\to\infty$$
We parametrize the contour to evaluate the desired integral.
$$\int_0^{\infty} \frac{1}{1+x^n}\, dx + \int_{\infty}^{0} \frac{1}{1+x^n}\, e^{i2\pi/n}\, dx = -\frac{i2\pi\, e^{i\pi/n}}{n}$$
$$\int_0^{\infty} \frac{1}{1+x^n}\, dx = \frac{-i2\pi\, e^{i\pi/n}}{n\left(1-e^{i2\pi/n}\right)}$$
$$\int_0^{\infty} \frac{1}{1+x^n}\, dx = \frac{\pi}{n\sin(\pi/n)}$$
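The closed form can be spot-checked for a few n. The substitution $x = e^t$ below is an addition, not part of the original text:

```python
# Check ∫₀^∞ dx/(1+xⁿ) = π/(n sin(π/n)) for n = 2, 3, 5 via x = e^t,
# which gives ∫_ℝ e^t/(1+e^{nt}) dt with rapidly decaying tails.
import math

def wedge_value(n, T=60.0, m=300000):
    h = 2*T/m
    total = 0.0
    for k in range(m):
        t = -T + (k + 0.5)*h
        total += math.exp(t)/(1 + math.exp(n*t))
    return total*h
```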
15.8.2 Box Contours
Recall that $e^z = e^{x+iy}$ is periodic in y with period $2\pi$. This implies that the hyperbolic trigonometric functions $\cosh z$, $\sinh z$ and $\tanh z$ are periodic in y with period $2\pi$ and odd periodic in y with period $\pi$. We can exploit this property to evaluate certain integrals on the real axis by closing the path of integration with a box contour.
Example 15.8.2 Consider the integral
$$\int_{-\infty}^{\infty} \frac{1}{\cosh x}\, dx = \left[ i\log\left( \tanh\left( \frac{i\pi}{4}+\frac{x}{2} \right) \right) \right]_{-\infty}^{\infty} = i\log(1) - i\log(-1) = \pi.$$
We will evaluate this integral using contour integration. Note that
$$\cosh(x+i\pi) = \frac{e^{x+i\pi}+e^{-x-i\pi}}{2} = -\cosh(x).$$
Consider the box contour C that is the boundary of the region $-R < x < R$, $0 < y < \pi$. The only singularity of the integrand inside the contour is a first order pole at $z = i\pi/2$. We evaluate the integral along C with the residue theorem.
$$\oint_C \frac{1}{\cosh z}\, dz = i2\pi\operatorname{Res}\left( \frac{1}{\cosh z}, \frac{i\pi}{2} \right) = i2\pi\lim_{z\to i\pi/2}\frac{z-i\pi/2}{\cosh z} = i2\pi\lim_{z\to i\pi/2}\frac{1}{\sinh z} = 2\pi$$
The integrals along the sides of the box vanish as $R\to\infty$.
$$\left| \int_R^{R+i\pi} \frac{1}{\cosh z}\, dz \right| \le \pi\max_{z\in[R\dots R+i\pi]}\left| \frac{1}{\cosh z} \right| \le \pi\max_{y\in[0\dots\pi]}\left| \frac{2}{e^{R+iy}+e^{-R-iy}} \right| = \frac{2\pi}{e^R-e^{-R}} = \frac{\pi}{\sinh R} \to 0 \text{ as } R\to\infty$$
The value of the integrand on the top of the box is the negative of its value on the bottom. We take the limit as $R\to\infty$.
$$\int_{-\infty}^{\infty} \frac{1}{\cosh x}\, dx + \int_{-\infty}^{\infty} \frac{1}{\cosh x}\, dx = 2\pi$$
$$\int_{-\infty}^{\infty} \frac{1}{\cosh x}\, dx = \pi$$
15.9 Definite Integrals Involving Sine and Cosine
Example 15.9.1 For real-valued a, evaluate the integral:
$$f(a) = \int_0^{2\pi} \frac{d\theta}{1+a\sin\theta}.$$
What is the value of the integral for complex-valued a?
Real-Valued a. For $-1 < a < 1$, the integrand is bounded, hence the integral exists. For $|a| = 1$, the integrand has a second order pole on the path of integration. For $|a| > 1$ the integrand has two first order poles on the path of integration. The integral is divergent for these two cases. Thus we see that the integral exists for $-1 < a < 1$.
For a = 0, the value of the integral is $2\pi$. Now consider $a\ne 0$. We make the change of variables $z = e^{i\theta}$. The real integral from $\theta = 0$ to $\theta = 2\pi$ becomes a contour integral along the unit circle, $|z| = 1$. We write the sine, cosine and the differential in terms of z.
$$\sin\theta = \frac{z-z^{-1}}{2i}, \quad \cos\theta = \frac{z+z^{-1}}{2}, \quad dz = i e^{i\theta}\, d\theta, \quad d\theta = \frac{dz}{iz}$$
We write f(a) as an integral along C, the positively oriented unit circle $|z| = 1$.
$$f(a) = \oint_C \frac{1/(iz)}{1+a(z-z^{-1})/(2i)}\, dz = \oint_C \frac{2/a}{z^2+(i2/a)z-1}\, dz$$
We factor the denominator of the integrand.
$$f(a) = \oint_C \frac{2/a}{(z-z_1)(z-z_2)}\, dz$$
$$z_1 = i\left( \frac{-1+\sqrt{1-a^2}}{a} \right), \qquad z_2 = i\left( \frac{-1-\sqrt{1-a^2}}{a} \right)$$
Because $|a| < 1$, the second root is outside the unit circle.
$$|z_2| = \frac{1+\sqrt{1-a^2}}{|a|} > 1.$$
Since $|z_1 z_2| = 1$, $|z_1| < 1$. Thus the pole at $z_1$ is inside the contour and the pole at $z_2$ is outside. We evaluate the contour integral with the residue theorem.
$$f(a) = \oint_C \frac{2/a}{z^2+(i2/a)z-1}\, dz = i2\pi\,\frac{2/a}{z_1-z_2} = i2\pi\,\frac{1}{i\sqrt{1-a^2}}$$
$$f(a) = \frac{2\pi}{\sqrt{1-a^2}}$$
Complex-Valued a. We note that the integral converges except for real-valued a satisfying $|a|\ge 1$. On any closed subset of $\mathbb{C}\setminus\{a\in\mathbb{R} \mid |a|\ge 1\}$ the integral is uniformly convergent. Thus except for the values $\{a\in\mathbb{R} \mid |a|\ge 1\}$, we can differentiate the integral with respect to a. f(a) is analytic in the complex plane except for the set of points on the real axis: $a\in(-\infty\dots-1]$ and $a\in[1\dots\infty)$. The value of the analytic function f(a) on the real axis for the interval $(-1\dots 1)$ is
$$f(a) = \frac{2\pi}{\sqrt{1-a^2}}.$$
By analytic continuation we see that the value of f(a) in the complex plane is the branch of the function
$$f(a) = \frac{2\pi}{(1-a^2)^{1/2}}$$
where f(a) is positive, real-valued for $a\in(-1\dots 1)$ and there are branch cuts on the real axis on the intervals: $(-\infty\dots-1]$ and $[1\dots\infty)$.
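A numerical spot-check of $f(a) = 2\pi/\sqrt{1-a^2}$ at a = 1/2 (added here, not part of the original text). Because the integrand is smooth and periodic, the midpoint rule converges very quickly:

```python
# Check f(a) = ∫₀^{2π} dθ/(1 + a sin θ) = 2π/√(1-a²) at a = 1/2.
import math

def f_num(a, n=100000):
    h = 2*math.pi/n
    return sum(1.0/(1 + a*math.sin((k + 0.5)*h)) for k in range(n))*h
```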
Result 15.9.1 For evaluating integrals of the form
$$\int_a^{a+2\pi} F(\sin\theta, \cos\theta)\, d\theta$$
it may be useful to make the change of variables $z = e^{i\theta}$. This gives us a contour integral along the unit circle about the origin. We can write the sine, cosine and differential in terms of z.
$$\sin\theta = \frac{z-z^{-1}}{2i}, \quad \cos\theta = \frac{z+z^{-1}}{2}, \quad d\theta = \frac{dz}{iz}$$
15.10 Infinite Sums
The function $g(z) = \pi\cot(\pi z)$ has simple poles at $z = n\in\mathbb{Z}$. The residues at these points are all unity.
$$\operatorname{Res}(\pi\cot(\pi z), n) = \lim_{z\to n}\frac{\pi(z-n)\cos(\pi z)}{\sin(\pi z)} = \lim_{z\to n}\frac{\pi\cos(\pi z)-\pi^2(z-n)\sin(\pi z)}{\pi\cos(\pi z)} = 1$$
Let $C_n$ be the square contour with corners at $z = (n+1/2)(\pm 1\pm i)$. Recall that
$$\cos z = \cos x\cosh y - i\sin x\sinh y \quad \text{and} \quad \sin z = \sin x\cosh y + i\cos x\sinh y.$$
First we bound the modulus of $\cot(\pi z)$.
$$|\cot(\pi z)| = \left| \frac{\cos(\pi x)\cosh(\pi y)-i\sin(\pi x)\sinh(\pi y)}{\sin(\pi x)\cosh(\pi y)+i\cos(\pi x)\sinh(\pi y)} \right| = \sqrt{ \frac{\cos^2(\pi x)\cosh^2(\pi y)+\sin^2(\pi x)\sinh^2(\pi y)}{\sin^2(\pi x)\cosh^2(\pi y)+\cos^2(\pi x)\sinh^2(\pi y)} } \le \sqrt{ \frac{\cosh^2(\pi y)}{\sinh^2(\pi y)} } = |\coth(\pi y)|$$
The hyperbolic cotangent, $\coth(\pi y)$, has a simple pole at y = 0 and tends to $\pm 1$ as $y\to\pm\infty$.
Along the top and bottom of $C_n$, $(z = x\pm i(n+1/2))$, we bound the modulus of $g(z) = \pi\cot(\pi z)$.
$$|\cot(\pi z)| \le |\coth(\pi(n+1/2))|$$
Along the left and right sides of $C_n$, $(z = \pm(n+1/2)+iy)$, the modulus of the function is bounded by a constant.
$$|g(\pm(n+1/2)+iy)| = \pi\left| \frac{\cos(\pi(n+1/2))\cosh(\pi y)-i\sin(\pi(n+1/2))\sinh(\pi y)}{\sin(\pi(n+1/2))\cosh(\pi y)+i\cos(\pi(n+1/2))\sinh(\pi y)} \right| = \pi\left| {-i}\tanh(\pi y) \right| \le \pi$$
Thus the modulus of $\pi\cot(\pi z)$ can be bounded by a constant M on $C_n$.
Let f(z) be analytic except for isolated singularities. Consider the integral,
$$\oint_{C_n} \pi\cot(\pi z) f(z)\, dz.$$
We use the maximum modulus integral bound.
$$\left| \oint_{C_n} \pi\cot(\pi z) f(z)\, dz \right| \le (8n+4)M\max_{z\in C_n}|f(z)|$$
Note that if
$$\lim_{|z|\to\infty} |zf(z)| = 0,$$
then
$$\lim_{n\to\infty}\oint_{C_n} \pi\cot(\pi z) f(z)\, dz = 0.$$
This implies that the sum of all residues of $\pi\cot(\pi z)f(z)$ is zero. Suppose further that f(z) is analytic at $z = n\in\mathbb{Z}$. The residues of $\pi\cot(\pi z)f(z)$ at z = n are f(n). This means
$$\sum_{n=-\infty}^{\infty} f(n) = -(\text{ sum of the residues of } \pi\cot(\pi z)f(z) \text{ at the poles of } f(z)\ ).$$
Result 15.10.1 If

\lim_{|z| \to \infty} |z f(z)| = 0,

then the sum of all the residues of \pi \cot(\pi z) f(z) is zero. If in addition f(z) is analytic at z = n \in \mathbb{Z} then

\sum_{n=-\infty}^{\infty} f(n) = -(\text{sum of the residues of } \pi \cot(\pi z) f(z) \text{ at the poles of } f(z)).
Example 15.10.1 Consider the sum

\sum_{n=-\infty}^{\infty} \frac{1}{(n + a)^2}, \quad a \notin \mathbb{Z}.

By Result 15.10.1 with f(z) = 1/(z + a)^2 we have

\sum_{n=-\infty}^{\infty} \frac{1}{(n + a)^2} = -\operatorname{Res}\left( \pi \cot(\pi z) \frac{1}{(z + a)^2}, -a \right)
= -\lim_{z \to -a} \frac{d}{dz} \left[ \pi \cot(\pi z) \right]
= -\lim_{z \to -a} \frac{-\pi^2 \sin^2(\pi z) - \pi^2 \cos^2(\pi z)}{\sin^2(\pi z)}.

\sum_{n=-\infty}^{\infty} \frac{1}{(n + a)^2} = \frac{\pi^2}{\sin^2(\pi a)}
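A brute-force check of Example 15.10.1 (a hedged sketch of ours, not from the text): the symmetric partial sums should approach \pi^2/\sin^2(\pi a).

```python
import math

def direct_sum(a, N=100000):
    """Symmetric partial sum of the series 1/(n+a)^2 over n = -N..N."""
    return sum(1.0 / (n + a) ** 2 for n in range(-N, N + 1))

def residue_value(a):
    """Closed form pi^2 / sin^2(pi a) from Result 15.10.1 with f(z) = 1/(z+a)^2."""
    return (math.pi / math.sin(math.pi * a)) ** 2

for a in (0.25, 0.3, -0.7):
    # the neglected tail is about 2/N, well inside the tolerance
    print(abs(direct_sum(a) - residue_value(a)) < 1e-4)  # True
```

The agreement for several non-integer values of a corroborates the residue computation.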
Example 15.10.2 Derive \pi/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - \cdots.
Consider the integral

I_n = \frac{1}{i2\pi} \oint_{C_n} \frac{dw}{w (w - z) \sin w}

where C_n is the square with corners at w = (n + 1/2)\pi(\pm 1 \pm i), n \in \mathbb{Z}^+. With the substitution w = x + iy,

|\sin w|^2 = \sin^2 x + \sinh^2 y,

we see that |1/\sin w| \leq 1 on C_n. Thus I_n \to 0 as n \to \infty. We use the residue theorem and take the limit n \to \infty.

0 = \sum_{n=1}^{\infty} \left( \frac{(-1)^n}{n\pi (n\pi - z)} + \frac{(-1)^n}{n\pi (n\pi + z)} \right) + \frac{1}{z \sin z} - \frac{1}{z^2}

\frac{1}{\sin z} = \frac{1}{z} - 2z \sum_{n=1}^{\infty} \frac{(-1)^n}{n^2 \pi^2 - z^2}
= \frac{1}{z} - \sum_{n=1}^{\infty} \left( \frac{(-1)^n}{n\pi - z} - \frac{(-1)^n}{n\pi + z} \right)

We substitute z = \pi/2 into the above expression to obtain

\pi/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - \cdots
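The derived Leibniz series converges slowly but verifiably; as a quick cross-check (our own, not part of the original text):

```python
import math

# Partial sum of the Leibniz series 1 - 1/3 + 1/5 - 1/7 + ...
partial = sum((-1) ** n / (2 * n + 1) for n in range(200000))
# alternating-series remainder is below the first omitted term, ~2.5e-6
print(abs(partial - math.pi / 4) < 1e-5)  # True
```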
15.11 Exercises
The Residue Theorem
Exercise 15.1
Let f(z) have a pole of order n at z = z_0. Prove the Residue Formula:

\operatorname{Res}(f(z), z_0) = \lim_{z \to z_0} \left( \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z - z_0)^n f(z) \right] \right).

Hint, Solution
Exercise 15.2
Consider the function

f(z) = \frac{z^4}{z^2 + 1}.

Classify the singularities of f(z) in the extended complex plane. Calculate the residue at each pole and at infinity. Find the Laurent series expansions and their domains of convergence about the points z = 0, z = i and z = \infty.
Hint, Solution
Exercise 15.3
Let P(z) be a polynomial none of whose roots lie on the closed contour \Gamma. Show that

\frac{1}{i2\pi} \oint_\Gamma \frac{P'(z)}{P(z)}\, dz = \text{number of roots of } P(z) \text{ which lie inside } \Gamma,

where the roots are counted according to their multiplicity.
Hint: From the fundamental theorem of algebra, it is always possible to factor P(z) in the form P(z) = (z - z_1)(z - z_2) \cdots (z - z_n). Using this form of P(z) the integrand P'(z)/P(z) reduces to a very simple expression.
Hint, Solution
Exercise 15.4
Find the value of

\oint_C \frac{e^z}{(z - \pi) \tan z}\, dz

where C is the positively-oriented circle
1. |z| = 2
2. |z| = 4
Hint, Solution
Cauchy Principal Value for Real Integrals
Solution 15.1
Show that the integral

\int_{-1}^{1} \frac{1}{x}\, dx

is divergent. Evaluate the integral

\int_{-1}^{1} \frac{1}{x - i\alpha}\, dx, \quad \alpha \in \mathbb{R}, \quad \alpha \neq 0.

Evaluate

\lim_{\alpha \to 0^+} \int_{-1}^{1} \frac{1}{x - i\alpha}\, dx

and

\lim_{\alpha \to 0^-} \int_{-1}^{1} \frac{1}{x - i\alpha}\, dx.

The integral exists for \alpha arbitrarily close to zero, but diverges when \alpha = 0. Plot the real and imaginary part of the integrand. If one were to assign meaning to the integral for \alpha = 0, what would the value of the integral be?
Exercise 15.5
Do the principal values of the following integrals exist?

1. \int_{-1}^{1} \frac{1}{x^2}\, dx,

2. \int_{-1}^{1} \frac{1}{x^3}\, dx,

3. \int_{-1}^{1} \frac{f(x)}{x^3}\, dx.

Assume that f(x) is real analytic on the interval (-1, 1).
Hint, Solution
Cauchy Principal Value for Contour Integrals
Exercise 15.6
Let f(z) have a first order pole at z = z_0 and let (z - z_0) f(z) be analytic in some neighborhood of z_0. Let the contour C_\epsilon be a circular arc from z_0 + \epsilon e^{i\alpha} to z_0 + \epsilon e^{i\beta}. (Assume that \beta > \alpha and \beta - \alpha < 2\pi.) Show that

\lim_{\epsilon \to 0^+} \int_{C_\epsilon} f(z)\, dz = i(\beta - \alpha) \operatorname{Res}(f(z), z_0)

Hint, Solution
Exercise 15.7
Let f(z) be analytic inside and on a simple, closed, positive contour C, except for isolated singularities at z_1, \ldots, z_m inside the contour and first order poles at \xi_1, \ldots, \xi_n on the contour. Further, let the contour be C^1 at the locations of these first order poles. (i.e., the contour does not have a corner at any of the first order poles.) Show that the principal value of the integral of f(z) along C is

\mathrm{PV} \oint_C f(z)\, dz = i2\pi \sum_{j=1}^{m} \operatorname{Res}(f(z), z_j) + i\pi \sum_{j=1}^{n} \operatorname{Res}(f(z), \xi_j).

Hint, Solution
Exercise 15.8
Let C be the unit circle. Evaluate

\mathrm{PV} \oint_C \frac{1}{z - 1}\, dz

by indenting the contour to exclude the first order pole at z = 1.
Hint, Solution
Integrals from -\infty to \infty
Exercise 15.9
Prove Result 15.4.1.
Hint, Solution
Exercise 15.10
Evaluate

\mathrm{PV} \int_{-\infty}^{\infty} \frac{2x}{x^2 + x + 1}\, dx.

Hint, Solution
Exercise 15.11
Use contour integration to evaluate the integrals

1. \int_{-\infty}^{\infty} \frac{dx}{1 + x^4},

2. \int_{-\infty}^{\infty} \frac{x^2\, dx}{(1 + x^2)^2},

3. \int_{-\infty}^{\infty} \frac{\cos(x)}{1 + x^2}\, dx.

Hint, Solution
Exercise 15.12
Evaluate by contour integration

\int_0^{\infty} \frac{x^6}{(x^4 + 1)^2}\, dx.

Hint, Solution
Fourier Integrals
Exercise 15.13
Suppose that f(z) vanishes as |z| \to \infty. If \omega is a (positive / negative) real number and C_R is a semi-circle of radius R in the (upper / lower) half plane then show that the integral

\int_{C_R} f(z)\, e^{i\omega z}\, dz

vanishes as R \to \infty.
Hint, Solution
Exercise 15.14
Evaluate by contour integration

\int_{-\infty}^{\infty} \frac{\cos 2x}{x - i\pi}\, dx.

Hint, Solution
Fourier Cosine and Sine Integrals
Exercise 15.15
Evaluate

\int_{-\infty}^{\infty} \frac{\sin x}{x}\, dx.

Hint, Solution
Exercise 15.16
Evaluate

\int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2}\, dx.

Hint, Solution
Exercise 15.17
Evaluate

\int_0^{\infty} \frac{\sin(\pi x)}{x (1 - x^2)}\, dx.

Hint, Solution
Contour Integration and Branch Cuts
Exercise 15.18
Show that

1. \int_0^{\infty} \frac{(\ln x)^2}{1 + x^2}\, dx = \frac{\pi^3}{8},

2. \int_0^{\infty} \frac{\ln x}{1 + x^2}\, dx = 0.

Hint, Solution
Exercise 15.19
By methods of contour integration find

\int_0^{\infty} \frac{dx}{x^2 + 5x + 6}

[ Recall the trick of considering \int f(z) \log z\, dz with a suitably chosen contour and branch for \log z. ]
Hint, Solution
Exercise 15.20
Show that

\int_0^{\infty} \frac{x^a}{(x + 1)^2}\, dx = \frac{\pi a}{\sin(\pi a)} \quad \text{for } -1 < \Re(a) < 1.

From this derive that

\int_0^{\infty} \frac{\log x}{(x + 1)^2}\, dx = 0, \qquad \int_0^{\infty} \frac{\log^2 x}{(x + 1)^2}\, dx = \frac{\pi^2}{3}.

Hint, Solution
Exercise 15.21
Consider the integral

I(a) = \int_0^{\infty} \frac{x^a}{1 + x^2}\, dx.

1. For what values of a does the integral exist?
2. Evaluate the integral. Show that

I(a) = \frac{\pi}{2 \cos(\pi a / 2)}

3. Deduce from your answer in part (b) the results

\int_0^{\infty} \frac{\log x}{1 + x^2}\, dx = 0, \qquad \int_0^{\infty} \frac{\log^2 x}{1 + x^2}\, dx = \frac{\pi^3}{8}.

You may assume that it is valid to differentiate under the integral sign.
Hint, Solution
Exercise 15.22
Let f(z) be a single-valued analytic function with only isolated singularities and no singularities on the positive real axis, [0, \infty). Give sufficient conditions on f(x) for absolute convergence of the integral

\int_0^{\infty} x^a f(x)\, dx.

Assume that a is not an integer. Evaluate the integral by considering the integral of z^a f(z) on a suitable contour. (Consider the branch of z^a on which 1^a = 1.)
Hint, Solution
Exercise 15.23
Using the solution to Exercise 15.22, evaluate

\int_0^{\infty} x^a f(x) \log x\, dx,

and

\int_0^{\infty} x^a f(x) \log^m x\, dx,

where m is a positive integer.
Hint, Solution
Exercise 15.24
Using the solution to Exercise 15.22, evaluate

\int_0^{\infty} f(x)\, dx,

i.e. examine a = 0. The solution will suggest a way to evaluate the integral with contour integration. Do the contour integration to corroborate the value of

\int_0^{\infty} f(x)\, dx.

Hint, Solution
Exercise 15.25
Let f(z) be an analytic function with only isolated singularities and no singularities on the positive real axis, [0, \infty). Give sufficient conditions on f(x) for absolute convergence of the integral

\int_0^{\infty} f(x) \log x\, dx

Evaluate the integral with contour integration.
Hint, Solution
Exercise 15.26
For what values of a does the following integral exist?

\int_0^{\infty} \frac{x^a}{1 + x^4}\, dx.

Evaluate the integral. (Consider the branch of x^a on which 1^a = 1.)
Hint, Solution
Exercise 15.27
By considering the integral of f(z) = z^{1/2} \log z / (z + 1)^2 on a suitable contour, show that

\int_0^{\infty} \frac{x^{1/2} \log x}{(x + 1)^2}\, dx = \pi, \qquad \int_0^{\infty} \frac{x^{1/2}}{(x + 1)^2}\, dx = \frac{\pi}{2}.

Hint, Solution
Exploiting Symmetry
Exercise 15.28
Evaluate by contour integration, the principal value integral

I(a) = \mathrm{PV} \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}}\, dx

for a real and |a| < 1. [Hint: Consider the contour that is the boundary of the box, -R < x < R, 0 < y < \pi, but indented around z = 0 and z = i\pi.]
Hint, Solution
Exercise 15.29
Evaluate the following integrals.
1. \int_0^{\infty} \frac{dx}{(1 + x^2)^2},

2. \int_0^{\infty} \frac{dx}{1 + x^3}.

Hint, Solution
Exercise 15.30
Find the value of the integral I

I = \int_0^{\infty} \frac{dx}{1 + x^6}

by considering the contour integral

\oint_\Gamma \frac{dz}{1 + z^6}

with an appropriately chosen contour \Gamma.
Hint, Solution
Exercise 15.31
Let C be the boundary of the sector 0 < r < R, 0 < \theta < \pi/4. By integrating e^{-z^2} on C and letting R \to \infty show that

\int_0^{\infty} \cos(x^2)\, dx = \int_0^{\infty} \sin(x^2)\, dx = \frac{1}{\sqrt{2}} \int_0^{\infty} e^{-x^2}\, dx.

Hint, Solution
Exercise 15.32
Evaluate

\int_{-\infty}^{\infty} \frac{x}{\sinh x}\, dx

using contour integration.
Hint, Solution
Exercise 15.33
Show that

\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1}\, dx = \frac{\pi}{\sin(\pi a)} \quad \text{for } 0 < a < 1.

Use this to derive that

\int_{-\infty}^{\infty} \frac{\cosh(bx)}{\cosh x}\, dx = \frac{\pi}{\cos(\pi b / 2)} \quad \text{for } -1 < b < 1.

Hint, Solution
Exercise 15.34
Using techniques of contour integration find for real a and b:

F(a, b) = \int_0^{\pi} \frac{d\theta}{(a + b \cos\theta)^2}

What are the restrictions on a and b if any? Can the result be applied for complex a, b? How?
Hint, Solution
Exercise 15.35
Show that

\int_{-\infty}^{\infty} \frac{\cos x}{e^x + e^{-x}}\, dx = \frac{\pi}{e^{\pi/2} + e^{-\pi/2}}

[ Hint: Begin by considering the integral of e^{iz} / (e^z + e^{-z}) around a rectangle with vertices: \pm R, \pm R + i\pi. ]
Hint, Solution
Definite Integrals Involving Sine and Cosine
Exercise 15.36
Use contour integration to evaluate the integrals

1. \int_0^{2\pi} \frac{d\theta}{2 + \sin(\theta)},

2. \int_{-\pi}^{\pi} \frac{\cos(n\theta)}{1 - 2a \cos(\theta) + a^2}\, d\theta \quad \text{for } |a| < 1, \; n \in \mathbb{Z}^{0+}.

Hint, Solution
Exercise 15.37
By integration around the unit circle, suitably indented, show that

\mathrm{PV} \int_0^{\pi} \frac{\cos(n\theta)}{\cos\theta - \cos\alpha}\, d\theta = \pi \frac{\sin(n\alpha)}{\sin\alpha}.

Hint, Solution
Exercise 15.38
Evaluate

\int_0^{1} \frac{x^2}{(1 + x^2)\sqrt{1 - x^2}}\, dx.

Hint, Solution
Infinite Sums
Exercise 15.39
Evaluate

\sum_{n=1}^{\infty} \frac{1}{n^4}.

Hint, Solution
Exercise 15.40
Sum the following series using contour integration:

\sum_{n=-\infty}^{\infty} \frac{1}{n^2 - \alpha^2}

Hint, Solution
15.12 Hints
The Residue Theorem
Hint 15.1
Substitute the Laurent series into the formula and simplify.
Hint 15.2
Use that the sum of all residues of the function in the extended complex plane is zero in calculating the residue at infinity. To obtain the Laurent series expansion about z = i, write the function as a proper rational function, (numerator has a lower degree than the denominator) and expand in partial fractions.
Hint 15.3
Hint 15.4
Cauchy Principal Value for Real Integrals
Hint 15.5
Hint 15.6
For the third part, does the integrand have a term that behaves like 1/x^2?
Cauchy Principal Value for Contour Integrals
Hint 15.7
Expand f(z) in a Laurent series. Only the first term will make a contribution to the integral in the limit as \epsilon \to 0^+.
Hint 15.8
Use the result of Exercise 15.6.
Hint 15.9
Look at Example 15.3.2.
Integrals from -\infty to \infty
Hint 15.10
Close the path of integration in the upper or lower half plane with a semi-circle. Use the maximum modulus integral bound, (Result 12.1.1), to show that the integral along the semi-circle vanishes.
Hint 15.11
Make the change of variables x = 1/\xi.
Hint 15.12
Use Result 15.4.1.
Hint 15.13
Fourier Integrals
Hint 15.14
Use

\int_0^{\pi} e^{-R \sin\theta}\, d\theta < \frac{\pi}{R}.

Hint 15.15
Fourier Cosine and Sine Integrals
Hint 15.16
Consider the integral of

\frac{e^{ix}}{ix}.

Hint 15.17
Show that

\int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2}\, dx = \mathrm{PV} \int_{-\infty}^{\infty} \frac{1 - e^{ix}}{x^2}\, dx.

Hint 15.18
Show that

\int_0^{\infty} \frac{\sin(\pi x)}{x (1 - x^2)}\, dx = -\frac{i}{2} \,\mathrm{PV} \int_{-\infty}^{\infty} \frac{e^{i\pi x}}{x (1 - x^2)}\, dx.

Contour Integration and Branch Cuts
Hint 15.19
Integrate a branch of (\log z)^2 / (1 + z^2) along the boundary of the domain \epsilon < r < R, 0 < \theta < \pi.
Hint 15.20
Hint 15.21
Note that

\int_0^{1} x^a\, dx

converges for \Re(a) > -1; and

\int_1^{\infty} x^a\, dx

converges for \Re(a) < -1.
Consider f(z) = z^a / (z + 1)^2 with a branch cut along the positive real axis and the contour in Figure 15.10 in the limit as \epsilon \to 0 and R \to \infty.
To derive the last two integrals, differentiate with respect to a.
Hint 15.22
Hint 15.23
Consider the integral of z^a f(z) on the contour in Figure 15.10.
Hint 15.24
Differentiate with respect to a.
Hint 15.25
Take the limit as a \to 0. Use L'Hospital's rule. To corroborate the result, consider the integral of f(z) \log z on an appropriate contour.
Hint 15.26
Consider the integral of f(z) \log^2 z on the contour in Figure 15.10.
Hint 15.27
Consider the integral of

f(z) = \frac{z^a}{1 + z^4}

on the boundary of the region \epsilon < r < R, 0 < \theta < \pi/2. Take the limits as \epsilon \to 0 and R \to \infty.
Hint 15.28
Consider the branch of f(z) = z^{1/2} \log z / (z + 1)^2 with a branch cut on the positive real axis and 0 < \arg z < 2\pi. Integrate this function on the contour in Figure 15.10.
Exploiting Symmetry
Hint 15.29
Hint 15.30
For the second part, consider the integral along the boundary of the region, 0 < r < R, 0 < \theta < 2\pi/3.
Hint 15.31
Hint 15.32
To show that the integral on the quarter-circle vanishes as R \to \infty establish the inequality,

\cos 2\theta \geq 1 - \frac{4}{\pi}\theta, \quad 0 \leq \theta \leq \frac{\pi}{4}.

Hint 15.33
Consider the box contour C that is the boundary of the rectangle, -R \leq x \leq R, 0 \leq y \leq \pi. The value of the integral is \pi^2 / 2.
Hint 15.34
Consider the rectangular contour with corners at \pm R and \pm R + i2\pi. Let R \to \infty.
Hint 15.35
Hint 15.36
Definite Integrals Involving Sine and Cosine
Hint 15.37
Hint 15.38
Hint 15.39
Make the changes of variables x = \sin\xi and then z = e^{i\xi}.
Infinite Sums
Hint 15.40
Use Result 15.10.1.
Hint 15.41
15.13 Solutions
The Residue Theorem
Solution 15.2
Since f(z) has an isolated pole of order n at z = z_0, it has a Laurent series that is convergent in a deleted neighborhood about that point. We substitute this Laurent series into the Residue Formula to verify it.

\operatorname{Res}(f(z), z_0) = \lim_{z \to z_0} \left( \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z - z_0)^n f(z) \right] \right)
= \lim_{z \to z_0} \left( \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z - z_0)^n \sum_{k=-n}^{\infty} a_k (z - z_0)^k \right] \right)
= \lim_{z \to z_0} \left( \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \sum_{k=0}^{\infty} a_{k-n} (z - z_0)^k \right)
= \lim_{z \to z_0} \left( \frac{1}{(n-1)!} \sum_{k=n-1}^{\infty} a_{k-n} \frac{k!}{(k - n + 1)!} (z - z_0)^{k-n+1} \right)
= \lim_{z \to z_0} \left( \frac{1}{(n-1)!} \sum_{k=0}^{\infty} a_{k-1} \frac{(k + n - 1)!}{k!} (z - z_0)^k \right)
= \frac{1}{(n-1)!} a_{-1} \frac{(n-1)!}{0!}
= a_{-1}

This proves the Residue Formula.
Solution 15.3
Classify Singularities.

f(z) = \frac{z^4}{z^2 + 1} = \frac{z^4}{(z - i)(z + i)}.

There are first order poles at z = \pm i. Since the function behaves like z^2 at infinity, there is a second order pole there. To see this more slowly, we can make the substitution z = 1/\zeta and examine the point \zeta = 0.

f\left( \frac{1}{\zeta} \right) = \frac{\zeta^{-4}}{\zeta^{-2} + 1} = \frac{1}{\zeta^2 + \zeta^4} = \frac{1}{\zeta^2 (1 + \zeta^2)}

f(1/\zeta) has a second order pole at \zeta = 0, which implies that f(z) has a second order pole at infinity.
Residues. The residues at z = \pm i are,

\operatorname{Res}\left( \frac{z^4}{z^2 + 1}, i \right) = \lim_{z \to i} \frac{z^4}{z + i} = -\frac{i}{2},

\operatorname{Res}\left( \frac{z^4}{z^2 + 1}, -i \right) = \lim_{z \to -i} \frac{z^4}{z - i} = \frac{i}{2}.

The residue at infinity is

\operatorname{Res}(f(z), \infty) = \operatorname{Res}\left( -\frac{1}{\zeta^2} f\left( \frac{1}{\zeta} \right), \zeta = 0 \right)
= \operatorname{Res}\left( -\frac{\zeta^{-4}}{1 + \zeta^2}, \zeta = 0 \right)

Here we could use the residue formula, but it's easier to find the Laurent expansion.

= \operatorname{Res}\left( -\sum_{n=0}^{\infty} (-1)^n \zeta^{2n-4}, \zeta = 0 \right)
= 0

We could also calculate the residue at infinity by recalling that the sum of all residues of this function in the extended complex plane is zero.

-\frac{i}{2} + \frac{i}{2} + \operatorname{Res}(f(z), \infty) = 0

\operatorname{Res}(f(z), \infty) = 0

Laurent Series about z = 0. Since the nearest singularities are at z = \pm i, the Taylor series will converge in the disk |z| < 1.

\frac{z^4}{z^2 + 1} = z^4 \frac{1}{1 - (-z^2)}
= z^4 \sum_{n=0}^{\infty} (-z^2)^n
= z^4 \sum_{n=0}^{\infty} (-1)^n z^{2n}
= \sum_{n=2}^{\infty} (-1)^n z^{2n}

This geometric series converges for |{-z^2}| < 1, or |z| < 1. The series expansion of the function is

\frac{z^4}{z^2 + 1} = \sum_{n=2}^{\infty} (-1)^n z^{2n} \quad \text{for } |z| < 1

Laurent Series about z = i. We expand f(z) in partial fractions. First we write the function as a proper rational function, (i.e. the numerator has lower degree than the denominator). By polynomial division, we see that

f(z) = z^2 - 1 + \frac{1}{z^2 + 1}.

Now we expand the last term in partial fractions.

f(z) = z^2 - 1 + \frac{-i/2}{z - i} + \frac{i/2}{z + i}

Since the nearest singularity is at z = -i, the Laurent series will converge in the annulus 0 < |z - i| < 2.

z^2 - 1 = ((z - i) + i)^2 - 1
= (z - i)^2 + i2(z - i) - 2

\frac{i/2}{z + i} = \frac{i/2}{i2 + (z - i)}
= \frac{1/4}{1 - i(z - i)/2}
= \frac{1}{4} \sum_{n=0}^{\infty} \left( \frac{i(z - i)}{2} \right)^n
= \frac{1}{4} \sum_{n=0}^{\infty} \frac{i^n}{2^n} (z - i)^n

This geometric series converges for |i(z - i)/2| < 1, or |z - i| < 2. The series expansion of f(z) is

f(z) = \frac{-i/2}{z - i} - 2 + i2(z - i) + (z - i)^2 + \frac{1}{4} \sum_{n=0}^{\infty} \frac{i^n}{2^n} (z - i)^n.

\frac{z^4}{z^2 + 1} = \frac{-i/2}{z - i} - 2 + i2(z - i) + (z - i)^2 + \frac{1}{4} \sum_{n=0}^{\infty} \frac{i^n}{2^n} (z - i)^n \quad \text{for } |z - i| < 2

Laurent Series about z = \infty. Since the nearest singularities are at z = \pm i, the Laurent series will converge in the annulus 1 < |z| < \infty.

\frac{z^4}{z^2 + 1} = \frac{z^2}{1 + 1/z^2}
= z^2 \sum_{n=0}^{\infty} \left( -\frac{1}{z^2} \right)^n
= \sum_{n=-\infty}^{0} (-1)^n z^{2(n+1)}
= \sum_{n=-\infty}^{1} (-1)^{n+1} z^{2n}

This geometric series converges for |{-1/z^2}| < 1, or |z| > 1. The series expansion of f(z) is

\frac{z^4}{z^2 + 1} = \sum_{n=-\infty}^{1} (-1)^{n+1} z^{2n} \quad \text{for } 1 < |z| < \infty
Solution 15.4
Method 1: Residue Theorem. We factor P(z). Let m be the number of roots, counting multiplicities, that lie inside the contour \Gamma. We find a simple expression for P'(z)/P(z).

P(z) = c \prod_{k=1}^{n} (z - z_k)

P'(z) = c \sum_{k=1}^{n} \prod_{\substack{j=1 \\ j \neq k}}^{n} (z - z_j)

\frac{P'(z)}{P(z)} = \frac{c \sum_{k=1}^{n} \prod_{j=1,\, j \neq k}^{n} (z - z_j)}{c \prod_{k=1}^{n} (z - z_k)}
= \sum_{k=1}^{n} \frac{\prod_{j=1,\, j \neq k}^{n} (z - z_j)}{\prod_{j=1}^{n} (z - z_j)}
= \sum_{k=1}^{n} \frac{1}{z - z_k}

Now we do the integration using the residue theorem.

\frac{1}{i2\pi} \oint_\Gamma \frac{P'(z)}{P(z)}\, dz = \frac{1}{i2\pi} \oint_\Gamma \sum_{k=1}^{n} \frac{1}{z - z_k}\, dz
= \sum_{k=1}^{n} \frac{1}{i2\pi} \oint_\Gamma \frac{1}{z - z_k}\, dz
= \sum_{z_k \text{ inside } \Gamma} \frac{1}{i2\pi} \oint_\Gamma \frac{1}{z - z_k}\, dz
= \sum_{z_k \text{ inside } \Gamma} 1
= m

Method 2: Fundamental Theorem of Calculus. We factor the polynomial, P(z) = c \prod_{k=1}^{n} (z - z_k). Let m be the number of roots, counting multiplicities, that lie inside the contour \Gamma.

\frac{1}{i2\pi} \oint_\Gamma \frac{P'(z)}{P(z)}\, dz = \frac{1}{i2\pi} \left[ \log P(z) \right]_\Gamma
= \frac{1}{i2\pi} \left[ \log \prod_{k=1}^{n} (z - z_k) \right]_\Gamma
= \frac{1}{i2\pi} \left[ \sum_{k=1}^{n} \log(z - z_k) \right]_\Gamma

The value of the logarithm changes by i2\pi for the terms in which z_k is inside the contour. Its value does not change for the terms in which z_k is outside the contour.

= \frac{1}{i2\pi} \left[ \sum_{z_k \text{ inside } \Gamma} \log(z - z_k) \right]_\Gamma
= \frac{1}{i2\pi} \sum_{z_k \text{ inside } \Gamma} i2\pi
= m
Solution 15.5
1.

\oint_C \frac{e^z}{(z - \pi) \tan z}\, dz = \oint_C \frac{e^z \cos z}{(z - \pi) \sin z}\, dz

The integrand has first order poles at z = n\pi, n \in \mathbb{Z}, n \neq 1 and a double pole at z = \pi. The only pole inside the contour occurs at z = 0. We evaluate the integral with the residue theorem.

\oint_C \frac{e^z \cos z}{(z - \pi) \sin z}\, dz = i2\pi \operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi) \sin z}, z = 0 \right)
= i2\pi \lim_{z \to 0} \frac{z\, e^z \cos z}{(z - \pi) \sin z}
= -i2 \lim_{z \to 0} \frac{z}{\sin z}
= -i2 \lim_{z \to 0} \frac{1}{\cos z}
= -i2

\oint_C \frac{e^z}{(z - \pi) \tan z}\, dz = -i2

2. The integrand has first order poles at z = 0, -\pi and a second order pole at z = \pi inside the contour. The value of the integral is i2\pi times the sum of the residues at these points. From the previous part we know the residue at z = 0.

\operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi) \sin z}, z = 0 \right) = -\frac{1}{\pi}

We find the residue at z = -\pi with the residue formula.

\operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi) \sin z}, z = -\pi \right) = \lim_{z \to -\pi} (z + \pi) \frac{e^z \cos z}{(z - \pi) \sin z}
= \frac{e^{-\pi} (-1)}{-2\pi} \lim_{z \to -\pi} \frac{z + \pi}{\sin z}
= \frac{e^{-\pi} (-1)}{-2\pi} \lim_{z \to -\pi} \frac{1}{\cos z}
= \frac{e^{-\pi} (-1)}{-2\pi} (-1)
= -\frac{e^{-\pi}}{2\pi}

We find the residue at z = \pi by finding the first few terms in the Laurent series of the integrand.

\frac{e^z \cos z}{(z - \pi) \sin z} = \frac{\left( -e^{\pi} - e^{\pi}(z - \pi) + O((z - \pi)^2) \right) \left( 1 + O((z - \pi)^2) \right)}{(z - \pi) \left( -(z - \pi) + O((z - \pi)^3) \right)}
= \frac{e^{\pi} + e^{\pi}(z - \pi) + O((z - \pi)^2)}{(z - \pi)^2 + O((z - \pi)^4)}
= \frac{\frac{e^{\pi}}{(z - \pi)^2} + \frac{e^{\pi}}{z - \pi} + O(1)}{1 + O((z - \pi)^2)}
= \left( \frac{e^{\pi}}{(z - \pi)^2} + \frac{e^{\pi}}{z - \pi} + O(1) \right) \left( 1 + O\left( (z - \pi)^2 \right) \right)
= \frac{e^{\pi}}{(z - \pi)^2} + \frac{e^{\pi}}{z - \pi} + O(1)

With this we see that

\operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi) \sin z}, z = \pi \right) = e^{\pi}.

The integral is

\oint_C \frac{e^z \cos z}{(z - \pi) \sin z}\, dz = i2\pi \left( \operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi) \sin z}, z = -\pi \right) + \operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi) \sin z}, z = 0 \right) + \operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi) \sin z}, z = \pi \right) \right)
= i2\pi \left( -\frac{e^{-\pi}}{2\pi} - \frac{1}{\pi} + e^{\pi} \right)

\oint_C \frac{e^z}{(z - \pi) \tan z}\, dz = i \left( 2\pi e^{\pi} - e^{-\pi} - 2 \right)
Cauchy Principal Value for Real Integrals
Solution 15.6
Consider the integral

\int_{-1}^{1} \frac{1}{x}\, dx.

By the definition of improper integrals we have

\int_{-1}^{1} \frac{1}{x}\, dx = \lim_{\epsilon \to 0^+} \int_{-1}^{-\epsilon} \frac{1}{x}\, dx + \lim_{\delta \to 0^+} \int_{\delta}^{1} \frac{1}{x}\, dx
= \lim_{\epsilon \to 0^+} \left[ \log|x| \right]_{-1}^{-\epsilon} + \lim_{\delta \to 0^+} \left[ \log|x| \right]_{\delta}^{1}
= \lim_{\epsilon \to 0^+} \log \epsilon - \lim_{\delta \to 0^+} \log \delta

This limit diverges. Thus the integral diverges.
Now consider the integral

\int_{-1}^{1} \frac{1}{x - i\alpha}\, dx

where \alpha \in \mathbb{R}, \alpha \neq 0. Since the integrand is bounded, the integral exists.

\int_{-1}^{1} \frac{1}{x - i\alpha}\, dx = \int_{-1}^{1} \frac{x + i\alpha}{x^2 + \alpha^2}\, dx
= \int_{-1}^{1} \frac{i\alpha}{x^2 + \alpha^2}\, dx
= 2i \int_0^{1} \frac{\alpha}{x^2 + \alpha^2}\, dx
= 2i \int_0^{1/\alpha} \frac{1}{\xi^2 + 1}\, d\xi
= 2i \left[ \arctan \xi \right]_0^{1/\alpha}
= 2i \arctan\left( \frac{1}{\alpha} \right)

Note that the integral exists for all nonzero real \alpha and that

\lim_{\alpha \to 0^+} \int_{-1}^{1} \frac{1}{x - i\alpha}\, dx = i\pi

and

\lim_{\alpha \to 0^-} \int_{-1}^{1} \frac{1}{x - i\alpha}\, dx = -i\pi.

The integral exists for \alpha arbitrarily close to zero, but diverges when \alpha = 0. The real part of the integrand is an odd function with two humps that get thinner and taller with decreasing |\alpha|. The imaginary part of the integrand is an even function with a hump that gets thinner and taller with decreasing |\alpha|. (See Figure 15.7.)

\Re\left( \frac{1}{x - i\alpha} \right) = \frac{x}{x^2 + \alpha^2}, \qquad \Im\left( \frac{1}{x - i\alpha} \right) = \frac{\alpha}{x^2 + \alpha^2}

Figure 15.7: The real and imaginary part of the integrand for several values of \alpha.

Note that

\Re \int_0^{1} \frac{1}{x - i\alpha}\, dx \to +\infty \quad \text{as } \alpha \to 0

and

\Re \int_{-1}^{0} \frac{1}{x - i\alpha}\, dx \to -\infty \quad \text{as } \alpha \to 0.

However,

\lim_{\alpha \to 0} \Re \int_{-1}^{1} \frac{1}{x - i\alpha}\, dx = 0

because the two integrals above cancel each other.
Now note that when \alpha = 0, the integrand is real. Of course the integral doesn't converge for this case, but if we could assign some value to

\int_{-1}^{1} \frac{1}{x}\, dx

it would be a real number. Since

\lim_{\alpha \to 0} \int_{-1}^{1} \Re\left( \frac{1}{x - i\alpha} \right) dx = 0,

this number should be zero.
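A numerical illustration of this solution (a sketch of ours, not from the text): the midpoint rule reproduces 2i\arctan(1/\alpha), and the value drifts toward i\pi as \alpha \to 0^+.

```python
import math

def integral(alpha, n=200001):
    """Midpoint rule for the integral of 1/(x - i*alpha) over [-1, 1]."""
    h = 2.0 / n
    return sum(h / ((-1 + (k + 0.5) * h) - 1j * alpha) for k in range(n))

for alpha in (0.1, -0.1, 0.01):
    exact = 2j * math.atan(1.0 / alpha)      # the closed form derived above
    print(abs(integral(alpha) - exact) < 1e-4)   # True
# as alpha -> 0+ the value approaches i*pi
print(abs(integral(0.001) - 1j * math.pi) < 0.01)  # True
```

The symmetric grid also makes the divergent real part cancel, matching the argument in the text.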
Solution 15.7
1.

\mathrm{PV} \int_{-1}^{1} \frac{1}{x^2}\, dx = \lim_{\epsilon \to 0^+} \left( \int_{-1}^{-\epsilon} \frac{1}{x^2}\, dx + \int_{\epsilon}^{1} \frac{1}{x^2}\, dx \right)
= \lim_{\epsilon \to 0^+} \left( \left[ -\frac{1}{x} \right]_{-1}^{-\epsilon} + \left[ -\frac{1}{x} \right]_{\epsilon}^{1} \right)
= \lim_{\epsilon \to 0^+} \left( \frac{1}{\epsilon} - 1 - 1 + \frac{1}{\epsilon} \right)

The principal value of the integral does not exist.

2.

\mathrm{PV} \int_{-1}^{1} \frac{1}{x^3}\, dx = \lim_{\epsilon \to 0^+} \left( \int_{-1}^{-\epsilon} \frac{1}{x^3}\, dx + \int_{\epsilon}^{1} \frac{1}{x^3}\, dx \right)
= \lim_{\epsilon \to 0^+} \left( \left[ -\frac{1}{2x^2} \right]_{-1}^{-\epsilon} + \left[ -\frac{1}{2x^2} \right]_{\epsilon}^{1} \right)
= \lim_{\epsilon \to 0^+} \left( -\frac{1}{2(-\epsilon)^2} + \frac{1}{2(-1)^2} - \frac{1}{2(1)^2} + \frac{1}{2\epsilon^2} \right)
= 0

3. Since f(x) is real analytic,

f(x) = \sum_{n=0}^{\infty} f_n x^n \quad \text{for } x \in (-1, 1).

We can rewrite the integrand as

\frac{f(x)}{x^3} = \frac{f_0}{x^3} + \frac{f_1}{x^2} + \frac{f_2}{x} + \frac{f(x) - f_0 - f_1 x - f_2 x^2}{x^3}.

Note that the final term is real analytic on (-1, 1). The principal values of the f_0/x^3 and f_2/x terms vanish by the odd symmetry, while by the first part the principal value of the f_1/x^2 term does not exist unless f_1 vanishes. Thus the principal value of the integral exists if and only if f_1 = 0.
Cauchy Principal Value for Contour Integrals
Solution 15.8
We can write f(z) as

f(z) = \frac{f_0}{z - z_0} + \frac{(z - z_0) f(z) - f_0}{z - z_0}.

Note that the second term is analytic in a neighborhood of z_0. Thus it is bounded on the contour. Let M_\epsilon be the maximum modulus of

\frac{(z - z_0) f(z) - f_0}{z - z_0}

on C_\epsilon. By using the maximum modulus integral bound, we have

\left| \int_{C_\epsilon} \frac{(z - z_0) f(z) - f_0}{z - z_0}\, dz \right| \leq (\beta - \alpha)\, \epsilon\, M_\epsilon
\to 0 \quad \text{as } \epsilon \to 0^+.

Thus we see that

\lim_{\epsilon \to 0^+} \int_{C_\epsilon} f(z)\, dz = \lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{f_0}{z - z_0}\, dz.

We parameterize the path of integration with

z = z_0 + \epsilon e^{i\theta}, \quad \theta \in (\alpha, \beta).

Now we evaluate the integral.

\lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{f_0}{z - z_0}\, dz = \lim_{\epsilon \to 0^+} \int_{\alpha}^{\beta} \frac{f_0}{\epsilon e^{i\theta}}\, i \epsilon e^{i\theta}\, d\theta
= \lim_{\epsilon \to 0^+} \int_{\alpha}^{\beta} i f_0\, d\theta
= i(\beta - \alpha) f_0
= i(\beta - \alpha) \operatorname{Res}(f(z), z_0)

This proves the result.
Solution 15.9
Let C_i be the contour that is indented with circular arcs of radius \epsilon at each of the first order poles on C so as to enclose these poles. Let A_1, \ldots, A_n be these circular arcs of radius \epsilon centered at the points \xi_1, \ldots, \xi_n. Let C_p be the contour, (not necessarily connected), obtained by subtracting each of the A_j's from C_i.
Since the curve is C^1, (or continuously differentiable), at each of the first order poles on C, the A_j's become semi-circles as \epsilon \to 0^+. Thus

\int_{A_j} f(z)\, dz = i\pi \operatorname{Res}(f(z), \xi_j) \quad \text{for } j = 1, \ldots, n.

The principal value of the integral along C is

\mathrm{PV} \int_C f(z)\, dz = \lim_{\epsilon \to 0^+} \int_{C_p} f(z)\, dz
= \lim_{\epsilon \to 0^+} \left( \int_{C_i} f(z)\, dz - \sum_{j=1}^{n} \int_{A_j} f(z)\, dz \right)
= i2\pi \left( \sum_{j=1}^{m} \operatorname{Res}(f(z), z_j) + \sum_{j=1}^{n} \operatorname{Res}(f(z), \xi_j) \right) - i\pi \sum_{j=1}^{n} \operatorname{Res}(f(z), \xi_j)

\mathrm{PV} \int_C f(z)\, dz = i2\pi \sum_{j=1}^{m} \operatorname{Res}(f(z), z_j) + i\pi \sum_{j=1}^{n} \operatorname{Res}(f(z), \xi_j).
Solution 15.10
Consider

\mathrm{PV} \oint_C \frac{1}{z - 1}\, dz

where C is the unit circle. Let C_p be the circular arc of radius 1 that starts and ends a distance of \epsilon from z = 1. Let C_\epsilon be the negative, circular arc of radius \epsilon with center at z = 1 that joins the endpoints of C_p. Let C_i be the union of C_p and C_\epsilon. (C_p stands for Principal value Contour; C_i stands for Indented Contour.) C_i is an indented contour that avoids the first order pole at z = 1. Figure 15.8 shows the three contours.

Figure 15.8: The Indented Contour.

Note that the principal value of the integral is

\mathrm{PV} \oint_C \frac{1}{z - 1}\, dz = \lim_{\epsilon \to 0^+} \int_{C_p} \frac{1}{z - 1}\, dz.

We can calculate the integral along C_i with Cauchy's theorem. The integrand is analytic inside the contour.

\int_{C_i} \frac{1}{z - 1}\, dz = 0

We can calculate the integral along C_\epsilon using Result 15.3.1. Note that as \epsilon \to 0^+, the contour becomes a semi-circle, a circular arc of \pi radians in the negative direction.

\lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{1}{z - 1}\, dz = -i\pi \operatorname{Res}\left( \frac{1}{z - 1}, 1 \right) = -i\pi

Now we can write the principal value of the integral along C in terms of the two known integrals.

\mathrm{PV} \oint_C \frac{1}{z - 1}\, dz = \int_{C_i} \frac{1}{z - 1}\, dz - \int_{C_\epsilon} \frac{1}{z - 1}\, dz
= 0 - (-i\pi)
= i\pi
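Directly parameterizing the indented unit circle gives the same value numerically (our own cross-check, not part of the original text):

```python
import cmath
import math

def pv_unit_circle(delta, n=200000):
    """Integral of 1/(z-1) over z = e^{it}, t in (delta, 2*pi - delta):
    the unit circle with a symmetric gap around z = 1."""
    h = (2 * math.pi - 2 * delta) / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * (delta + (k + 0.5) * h))
        total += 1j * z / (z - 1) * h        # dz = i e^{it} dt
    return total

# the symmetric-gap integral tends to i*pi as the indentation shrinks
print(abs(pv_unit_circle(0.01) - 1j * math.pi) < 0.02)  # True
```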
Integrals from -\infty to \infty
Solution 15.11
Let C_R be the semicircular arc from R to -R in the upper half plane. Let C be the union of C_R and the interval [-R, R]. We can evaluate the principal value of the integral along C with Result 15.3.2.

\mathrm{PV} \oint_C f(x)\, dx = i2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) + i\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k)

We examine the integral along C_R as R \to \infty.

\left| \int_{C_R} f(z)\, dz \right| \leq \pi R \max_{z \in C_R} |f(z)|
\to 0 \quad \text{as } R \to \infty.

Now we are prepared to evaluate the real integral.

\mathrm{PV} \int_{-\infty}^{\infty} f(x)\, dx = \lim_{R \to \infty} \mathrm{PV} \int_{-R}^{R} f(x)\, dx
= \lim_{R \to \infty} \mathrm{PV} \oint_C f(z)\, dz
= i2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) + i\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k)

If we close the path of integration in the lower half plane, the contour will be in the negative direction.

\mathrm{PV} \int_{-\infty}^{\infty} f(x)\, dx = -i2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) - i\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k)
Solution 15.12
We consider

\mathrm{PV} \int_{-\infty}^{\infty} \frac{2x}{x^2 + x + 1}\, dx.

With the change of variables x = 1/\xi, this becomes

\mathrm{PV} \int_{-\infty}^{\infty} \frac{2\xi^{-1}}{\xi^{-2} + \xi^{-1} + 1} \frac{1}{\xi^2}\, d\xi,

(the factor -1/\xi^2 from dx = -\xi^{-2}\, d\xi combines with the reversal of orientation on each half line)

\mathrm{PV} \int_{-\infty}^{\infty} \frac{2\xi^{-1}}{\xi^2 + \xi + 1}\, d\xi

There are first order poles at \xi = 0 and \xi = -1/2 \pm i\sqrt{3}/2. We close the path of integration in the upper half plane with a semi-circle. Since the integrand decays like \xi^{-3} the integrand along the semi-circle vanishes as the radius tends to infinity. The value of the integral is thus

i\pi \operatorname{Res}\left( \frac{2z^{-1}}{z^2 + z + 1}, z = 0 \right) + i2\pi \operatorname{Res}\left( \frac{2z^{-1}}{z^2 + z + 1}, z = \frac{-1 + i\sqrt{3}}{2} \right)

i\pi \lim_{z \to 0} \frac{2}{z^2 + z + 1} + i2\pi \lim_{z \to (-1 + i\sqrt{3})/2} \frac{2z^{-1}}{z + (1 + i\sqrt{3})/2}

\mathrm{PV} \int_{-\infty}^{\infty} \frac{2x}{x^2 + x + 1}\, dx = -\frac{2\pi}{\sqrt{3}}
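The value -2\pi/\sqrt{3} is easy to corroborate with a symmetric-limit quadrature (a sketch of ours, not part of the original text; the finite cutoff R introduces a bias of roughly 2/R):

```python
import math

def pv(R, n=1000000):
    """Midpoint rule for the symmetric integral of 2x/(x^2+x+1) over [-R, R]."""
    h = 2 * R / n
    total = 0.0
    for k in range(n):
        x = -R + (k + 0.5) * h
        total += 2 * x / (x * x + x + 1) * h
    return total

print(abs(pv(400.0) - (-2 * math.pi / math.sqrt(3))) < 0.05)  # True
```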
Solution 15.13
1. Consider

\int_{-\infty}^{\infty} \frac{1}{x^4 + 1}\, dx.

The integrand \frac{1}{z^4 + 1} is analytic on the real axis and has isolated singularities at the points z = e^{i\pi/4}, e^{i3\pi/4}, e^{i5\pi/4}, e^{i7\pi/4}. Let C_R be the semi-circle of radius R in the upper half plane. Since

\lim_{R \to \infty} \left( R \max_{z \in C_R} \left| \frac{1}{z^4 + 1} \right| \right) = \lim_{R \to \infty} \left( R \frac{1}{R^4 - 1} \right) = 0,

we can apply Result 15.4.1.

\int_{-\infty}^{\infty} \frac{1}{x^4 + 1}\, dx = i2\pi \left( \operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{i\pi/4} \right) + \operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{i3\pi/4} \right) \right)

The appropriate residues are,

\operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{i\pi/4} \right) = \lim_{z \to e^{i\pi/4}} \frac{z - e^{i\pi/4}}{z^4 + 1}
= \lim_{z \to e^{i\pi/4}} \frac{1}{4z^3}
= \frac{1}{4} e^{-i3\pi/4}
= \frac{-1 - i}{4\sqrt{2}},

\operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{i3\pi/4} \right) = \frac{1}{4 (e^{i3\pi/4})^3}
= \frac{1}{4} e^{-i\pi/4}
= \frac{1 - i}{4\sqrt{2}}.

We evaluate the integral with the residue theorem.

\int_{-\infty}^{\infty} \frac{1}{x^4 + 1}\, dx = i2\pi \left( \frac{-1 - i}{4\sqrt{2}} + \frac{1 - i}{4\sqrt{2}} \right)

\int_{-\infty}^{\infty} \frac{1}{x^4 + 1}\, dx = \frac{\pi}{\sqrt{2}}

2. Now consider

\int_{-\infty}^{\infty} \frac{x^2}{(x^2 + 1)^2}\, dx.

The integrand is analytic on the real axis and has second order poles at z = \pm i. Since the integrand decays sufficiently fast at infinity,

\lim_{R \to \infty} \left( R \max_{z \in C_R} \left| \frac{z^2}{(z^2 + 1)^2} \right| \right) = \lim_{R \to \infty} \left( R \frac{R^2}{(R^2 - 1)^2} \right) = 0

we can apply Result 15.4.1.

\int_{-\infty}^{\infty} \frac{x^2}{(x^2 + 1)^2}\, dx = i2\pi \operatorname{Res}\left( \frac{z^2}{(z^2 + 1)^2}, z = i \right)

\operatorname{Res}\left( \frac{z^2}{(z^2 + 1)^2}, z = i \right) = \lim_{z \to i} \frac{d}{dz} \left( (z - i)^2 \frac{z^2}{(z^2 + 1)^2} \right)
= \lim_{z \to i} \frac{d}{dz} \left( \frac{z^2}{(z + i)^2} \right)
= \lim_{z \to i} \frac{(z + i)^2 \, 2z - z^2 \, 2(z + i)}{(z + i)^4}
= -\frac{i}{4}

\int_{-\infty}^{\infty} \frac{x^2}{(x^2 + 1)^2}\, dx = \frac{\pi}{2}

3. Since

\frac{\sin(x)}{1 + x^2}

is an odd function,

\int_{-\infty}^{\infty} \frac{\cos(x)}{1 + x^2}\, dx = \int_{-\infty}^{\infty} \frac{e^{ix}}{1 + x^2}\, dx

Since e^{iz} / (1 + z^2) is analytic except for simple poles at z = \pm i and the integrand decays sufficiently fast in the upper half plane,

\lim_{R \to \infty} \left( R \max_{z \in C_R} \left| \frac{e^{iz}}{1 + z^2} \right| \right) = \lim_{R \to \infty} \left( R \frac{1}{R^2 - 1} \right) = 0

we can apply Result 15.4.1.

\int_{-\infty}^{\infty} \frac{e^{ix}}{1 + x^2}\, dx = i2\pi \operatorname{Res}\left( \frac{e^{iz}}{(z - i)(z + i)}, z = i \right)
= i2\pi \frac{e^{-1}}{i2}

\int_{-\infty}^{\infty} \frac{\cos(x)}{1 + x^2}\, dx = \frac{\pi}{e}
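All three results check out numerically (a sketch of ours, not part of the original text; the finite cutoffs leave small tail errors reflected in the tolerances):

```python
import math

def midpoint(f, R, n):
    """Midpoint rule for the integral of f over [-R, R]."""
    h = 2 * R / n
    return sum(f(-R + (k + 0.5) * h) for k in range(n)) * h

print(abs(midpoint(lambda x: 1 / (1 + x ** 4), 100, 400000)
          - math.pi / math.sqrt(2)) < 1e-4)                       # True
print(abs(midpoint(lambda x: x * x / (1 + x * x) ** 2, 1000, 1000000)
          - math.pi / 2) < 5e-3)                                  # True
print(abs(midpoint(lambda x: math.cos(x) / (1 + x * x), 100, 400000)
          - math.pi / math.e) < 1e-3)                             # True
```

The second integrand decays only like 1/x^2, which is why it needs the largest cutoff and the loosest tolerance.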
Solution 15.14
Consider the function

f(z) = \frac{z^6}{(z^4 + 1)^2}.

The value of the function on the imaginary axis:

\frac{-y^6}{(y^4 + 1)^2}

is a constant multiple of the value of the function on the real axis:

\frac{x^6}{(x^4 + 1)^2}.

Thus to evaluate the real integral we consider the path of integration, C, which starts at the origin, follows the real axis to R, follows a circular path to iR and then follows the imaginary axis back down to the origin. f(z) has second order poles at the fourth roots of -1: (\pm 1 \pm i)/\sqrt{2}. Of these only (1 + i)/\sqrt{2} lies inside the path of integration. We evaluate the contour integral with the Residue Theorem. For R > 1:

\oint_C \frac{z^6}{(z^4 + 1)^2}\, dz = i2\pi \operatorname{Res}\left( \frac{z^6}{(z^4 + 1)^2}, z = e^{i\pi/4} \right)
= i2\pi \lim_{z \to e^{i\pi/4}} \frac{d}{dz} \left[ (z - e^{i\pi/4})^2 \frac{z^6}{(z^4 + 1)^2} \right]
= i2\pi \lim_{z \to e^{i\pi/4}} \frac{d}{dz} \left[ \frac{z^6}{(z - e^{i3\pi/4})^2 (z - e^{i5\pi/4})^2 (z - e^{i7\pi/4})^2} \right]
= i2\pi \lim_{z \to e^{i\pi/4}} \frac{z^6}{(z - e^{i3\pi/4})^2 (z - e^{i5\pi/4})^2 (z - e^{i7\pi/4})^2} \left( \frac{6}{z} - \frac{2}{z - e^{i3\pi/4}} - \frac{2}{z - e^{i5\pi/4}} - \frac{2}{z - e^{i7\pi/4}} \right)
= i2\pi \frac{3\sqrt{2}}{32} (1 - i)
= \frac{3\pi}{8\sqrt{2}} (1 + i)

The integral along the circular part of the contour, C_R, vanishes as R \to \infty. We demonstrate this with the maximum modulus integral bound.

\left| \int_{C_R} \frac{z^6}{(z^4 + 1)^2}\, dz \right| \leq \frac{\pi R}{2} \max_{z \in C_R} \left| \frac{z^6}{(z^4 + 1)^2} \right|
= \frac{\pi R}{2} \frac{R^6}{(R^4 - 1)^2}
\to 0 \quad \text{as } R \to \infty

Taking the limit R \to \infty, we have:

\int_0^{\infty} \frac{x^6}{(x^4 + 1)^2}\, dx + \int_{\infty}^{0} \frac{(iy)^6}{((iy)^4 + 1)^2}\, i\, dy = \frac{3\pi}{8\sqrt{2}} (1 + i)

\int_0^{\infty} \frac{x^6}{(x^4 + 1)^2}\, dx + i \int_0^{\infty} \frac{y^6}{(y^4 + 1)^2}\, dy = \frac{3\pi}{8\sqrt{2}} (1 + i)

(1 + i) \int_0^{\infty} \frac{x^6}{(x^4 + 1)^2}\, dx = \frac{3\pi}{8\sqrt{2}} (1 + i)

\int_0^{\infty} \frac{x^6}{(x^4 + 1)^2}\, dx = \frac{3\pi}{8\sqrt{2}}
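A quick numerical corroboration (our own sketch, not from the text): the substitution x \to 1/x maps the piece of the integral on (1, \infty) onto \int_0^1 dx/(x^4+1)^2, so the whole integral collapses to a well-behaved integral over the unit interval.

```python
import math

# x -> 1/x maps the (1, inf) piece of x^6/(x^4+1)^2 onto 1/(x^4+1)^2 on (0, 1),
# so the full integral equals the integral of (x^6 + 1)/(x^4+1)^2 over (0, 1).
n = 100000
h = 1.0 / n
total = 0.0
for k in range(n):
    x = (k + 0.5) * h
    total += (x ** 6 + 1) / (1 + x ** 4) ** 2 * h
print(abs(total - 3 * math.pi / (8 * math.sqrt(2))) < 1e-8)  # True
```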
Fourier Integrals
Solution 15.15
We know that

\int_0^{\pi} e^{-R \sin\theta}\, d\theta < \frac{\pi}{R}.

First take the case that \omega is positive and the semi-circle is in the upper half plane.

\left| \int_{C_R} f(z)\, e^{i\omega z}\, dz \right| \leq \int_{C_R} \left| e^{i\omega z} \right| |dz| \, \max_{z \in C_R} |f(z)|
= \int_0^{\pi} \left| e^{i\omega R e^{i\theta}} \right| R\, d\theta \, \max_{z \in C_R} |f(z)|
= R \int_0^{\pi} e^{-\omega R \sin\theta}\, d\theta \, \max_{z \in C_R} |f(z)|
< R \frac{\pi}{\omega R} \max_{z \in C_R} |f(z)|
= \frac{\pi}{\omega} \max_{z \in C_R} |f(z)|
\to 0 \quad \text{as } R \to \infty

The procedure is almost the same for negative \omega.
Solution 15.16
First we write the integral in terms of Fourier integrals.

\int_{-\infty}^{\infty} \frac{\cos 2x}{x - i\pi}\, dx = \int_{-\infty}^{\infty} \frac{e^{i2x}}{2(x - i\pi)}\, dx + \int_{-\infty}^{\infty} \frac{e^{-i2x}}{2(x - i\pi)}\, dx

Note that \frac{1}{2(z - i\pi)} vanishes as |z| \to \infty. We close the former Fourier integral in the upper half plane and the latter in the lower half plane. There is a first order pole at z = i\pi in the upper half plane.

\int_{-\infty}^{\infty} \frac{e^{i2x}}{2(x - i\pi)}\, dx = i2\pi \operatorname{Res}\left( \frac{e^{i2z}}{2(z - i\pi)}, z = i\pi \right)
= i2\pi \frac{e^{-2\pi}}{2}

There are no singularities in the lower half plane.

\int_{-\infty}^{\infty} \frac{e^{-i2x}}{2(x - i\pi)}\, dx = 0

Thus the value of the original real integral is

\int_{-\infty}^{\infty} \frac{\cos 2x}{x - i\pi}\, dx = i\pi e^{-2\pi}
Fourier Cosine and Sine Integrals
Solution 15.17
We are considering the integral

\int_{-\infty}^{\infty} \frac{\sin x}{x}\, dx.

The integrand is an entire function. So it doesn't appear that the residue theorem would directly apply. Also the integrand is unbounded as x \to +i\infty and x \to -i\infty, so closing the integral in the upper or lower half plane is not directly applicable. In order to proceed, we must write the integrand in a different form. Note that

\mathrm{PV} \int_{-\infty}^{\infty} \frac{\cos x}{x}\, dx = 0

since the integrand is odd and has only a first order pole at x = 0. Thus

\int_{-\infty}^{\infty} \frac{\sin x}{x}\, dx = \mathrm{PV} \int_{-\infty}^{\infty} \frac{e^{ix}}{ix}\, dx.

Let C_R be the semicircular arc in the upper half plane from R to -R. Let C be the closed contour that is the union of C_R and the real interval [-R, R]. If we close the path of integration with a semicircular arc in the upper half plane, we have

\int_{-\infty}^{\infty} \frac{\sin x}{x}\, dx = \lim_{R \to \infty} \left( \mathrm{PV} \int_C \frac{e^{iz}}{iz}\, dz - \int_{C_R} \frac{e^{iz}}{iz}\, dz \right),

provided that all the integrals exist.
The integral along C_R vanishes as R \to \infty by Jordan's lemma. By the residue theorem for principal values we have

\mathrm{PV} \int_C \frac{e^{iz}}{iz}\, dz = i\pi \operatorname{Res}\left( \frac{e^{iz}}{iz}, 0 \right) = \pi.

Combining these results,

\int_{-\infty}^{\infty} \frac{\sin x}{x}\, dx = \pi.
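The sine integral converges only conditionally, so a naive truncation converges slowly; averaging two partial integrals that end half an oscillation apart kills the leading error term (a numerical sketch of ours, not from the text):

```python
import math

def partial(b, n):
    """Midpoint rule for the integral of sin(x)/x over (0, b)."""
    h = b / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h       # midpoints never hit x = 0
        total += math.sin(x) / x * h
    return total

# the tail after N*pi alternates in sign, so averaging two consecutive
# partial integrals cancels the leading 1/N error term
s1 = partial(100 * math.pi, 80000)
s2 = partial(101 * math.pi, 80800)
approx = (s1 + s2) / 2           # approximately pi/2
print(abs(2 * approx - math.pi) < 1e-3)  # full-line integral is twice this
```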
Solution 15.18
Note that (1 - \cos x)/x^2 has a removable singularity at x = 0. The integrand decays like 1/x^2 at infinity, so the integral exists. Since (\sin x)/x^2 is an odd function with a simple pole at x = 0, the principal value of its integral vanishes.

\mathrm{PV} \int_{-\infty}^{\infty} \frac{\sin x}{x^2}\, dx = 0

\int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2}\, dx = \mathrm{PV} \int_{-\infty}^{\infty} \frac{1 - \cos x - i \sin x}{x^2}\, dx = \mathrm{PV} \int_{-\infty}^{\infty} \frac{1 - e^{ix}}{x^2}\, dx

Let C_R be the semi-circle of radius R in the upper half plane. Since

\lim_{R \to \infty} \left( R \max_{z \in C_R} \left| \frac{1 - e^{iz}}{z^2} \right| \right) = \lim_{R \to \infty} R \frac{2}{R^2} = 0

the integral along C_R vanishes as R \to \infty.

\int_{C_R} \frac{1 - e^{iz}}{z^2}\, dz \to 0 \quad \text{as } R \to \infty

We can apply Result 15.4.1.

\mathrm{PV} \int_{-\infty}^{\infty} \frac{1 - e^{ix}}{x^2}\, dx = i\pi \operatorname{Res}\left( \frac{1 - e^{iz}}{z^2}, z = 0 \right)
= i\pi \lim_{z \to 0} \frac{1 - e^{iz}}{z}
= i\pi \lim_{z \to 0} \frac{-i e^{iz}}{1}

\int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2}\, dx = \pi
Solution 15.19
Consider

\int_0^{\infty} \frac{\sin(\pi x)}{x (1 - x^2)}\, dx.

Note that the integrand has removable singularities at the points x = 0, \pm 1 and is an even function.

\int_0^{\infty} \frac{\sin(\pi x)}{x (1 - x^2)}\, dx = \frac{1}{2} \int_{-\infty}^{\infty} \frac{\sin(\pi x)}{x (1 - x^2)}\, dx.

Note that

\frac{\cos(\pi x)}{x (1 - x^2)}

is an odd function with first order poles at x = 0, \pm 1.

\mathrm{PV} \int_{-\infty}^{\infty} \frac{\cos(\pi x)}{x (1 - x^2)}\, dx = 0

\int_0^{\infty} \frac{\sin(\pi x)}{x (1 - x^2)}\, dx = -\frac{i}{2} \,\mathrm{PV} \int_{-\infty}^{\infty} \frac{e^{i\pi x}}{x (1 - x^2)}\, dx.

Let C_R be the semi-circle of radius R in the upper half plane. Since

\lim_{R \to \infty} \left( R \max_{z \in C_R} \left| \frac{e^{i\pi z}}{z (1 - z^2)} \right| \right) = \lim_{R \to \infty} R \frac{1}{R (R^2 - 1)} = 0

the integral along C_R vanishes as R \to \infty.

\int_{C_R} \frac{e^{i\pi z}}{z (1 - z^2)}\, dz \to 0 \quad \text{as } R \to \infty

We can apply Result 15.4.1.

-\frac{i}{2} \,\mathrm{PV} \int_{-\infty}^{\infty} \frac{e^{i\pi x}}{x (1 - x^2)}\, dx = -\frac{i}{2}\, i\pi \left( \operatorname{Res}\left( \frac{e^{i\pi z}}{z (1 - z^2)}, z = 0 \right) + \operatorname{Res}\left( \frac{e^{i\pi z}}{z (1 - z^2)}, z = 1 \right) + \operatorname{Res}\left( \frac{e^{i\pi z}}{z (1 - z^2)}, z = -1 \right) \right)
= \frac{\pi}{2} \left( \lim_{z \to 0} \frac{e^{i\pi z}}{1 - z^2} - \lim_{z \to 1} \frac{e^{i\pi z}}{z (1 + z)} + \lim_{z \to -1} \frac{e^{i\pi z}}{z (1 - z)} \right)
= \frac{\pi}{2} \left( 1 + \frac{1}{2} + \frac{1}{2} \right)

\int_0^{\infty} \frac{\sin(\pi x)}{x (1 - x^2)}\, dx = \pi
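Since all three singularities of this integrand are removable, direct quadrature is safe and reproduces the value \pi (our own numerical check, not part of the original text):

```python
import math

def f(x):
    """sin(pi x) / (x (1 - x^2)); singularities at 0 and 1 are removable."""
    return math.sin(math.pi * x) / (x * (1 - x * x))

n, R = 200000, 100.0
h = R / n
# midpoints (k + 0.5) h never land exactly on x = 0 or x = 1
total = sum(f((k + 0.5) * h) for k in range(n)) * h
print(abs(total - math.pi) < 1e-3)  # True
```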
Contour Integration and Branch Cuts
Solution 15.20
Let $C$ be the boundary of the region $\epsilon < r < R$, $0 < \theta < \pi$. Choose the branch of the logarithm with a branch cut on the negative imaginary axis and the angle range $-\pi/2 < \theta < 3\pi/2$. We consider the integral of $(\log z)^2/(1+z^2)$ on this contour.
\[ \int_C \frac{(\log z)^2}{1+z^2}\,dz = i2\pi \operatorname{Res}\left( \frac{(\log z)^2}{1+z^2}, z=i \right) = i2\pi \lim_{z\to i} \frac{(\log z)^2}{z+i} = i2\pi\,\frac{(i\pi/2)^2}{2i} = -\frac{\pi^3}{4} \]
Let $C_R$ be the semi-circle from $R$ to $-R$ in the upper half plane. We show that the integral along $C_R$ vanishes as $R\to\infty$ with the maximum modulus integral bound.
\[ \left| \int_{C_R} \frac{(\log z)^2}{1+z^2}\,dz \right| \le \pi R \max_{z\in C_R}\left| \frac{(\log z)^2}{1+z^2} \right| \le \pi R\,\frac{(\ln R)^2 + 2\pi\ln R + \pi^2}{R^2-1} \to 0 \quad\text{as } R\to\infty \]
Let $C_\epsilon$ be the semi-circle from $-\epsilon$ to $\epsilon$ in the upper half plane. We show that the integral along $C_\epsilon$ vanishes as $\epsilon\to0$ with the maximum modulus integral bound.
\[ \left| \int_{C_\epsilon} \frac{(\log z)^2}{1+z^2}\,dz \right| \le \pi\epsilon \max_{z\in C_\epsilon}\left| \frac{(\log z)^2}{1+z^2} \right| \le \pi\epsilon\,\frac{(\ln\epsilon)^2 - 2\pi\ln\epsilon + \pi^2}{1-\epsilon^2} \to 0 \quad\text{as } \epsilon\to0 \]
Now we take the limit as $\epsilon\to0$ and $R\to\infty$ for the integral along $C$.
\[ \int_C \frac{(\log z)^2}{1+z^2}\,dz = -\frac{\pi^3}{4} \]
\[ \int_0^\infty \frac{(\ln r)^2}{1+r^2}\,dr + \int_0^\infty \frac{(\ln r + i\pi)^2}{1+r^2}\,dr = -\frac{\pi^3}{4} \]
\[ 2\int_0^\infty \frac{(\ln x)^2}{1+x^2}\,dx + i2\pi\int_0^\infty \frac{\ln x}{1+x^2}\,dx = \pi^2 \int_0^\infty \frac{1}{1+x^2}\,dx - \frac{\pi^3}{4} \tag{15.1} \]
We evaluate the integral of $1/(1+x^2)$ by extending the path of integration to $(-\infty\ldots\infty)$ and closing the path of integration in the upper half plane. Since
\[ \lim_{R\to\infty}\left( R \max_{z\in C_R}\left| \frac{1}{1+z^2} \right| \right) \le \lim_{R\to\infty}\left( R\,\frac{1}{R^2-1} \right) = 0, \]
the integral of $1/(1+z^2)$ along $C_R$ vanishes as $R\to\infty$. We evaluate the integral with the Residue Theorem.
\[ \pi^2 \int_0^\infty \frac{1}{1+x^2}\,dx = \frac{\pi^2}{2} \int_{-\infty}^\infty \frac{1}{1+x^2}\,dx = \frac{\pi^2}{2}\,i2\pi \operatorname{Res}\left( \frac{1}{1+z^2}, z=i \right) = i\pi^3 \lim_{z\to i} \frac{1}{z+i} = \frac{\pi^3}{2} \]
Now we return to Equation 15.1.
\[ 2\int_0^\infty \frac{(\ln x)^2}{1+x^2}\,dx + i2\pi\int_0^\infty \frac{\ln x}{1+x^2}\,dx = \frac{\pi^3}{4} \]
We equate the real and imaginary parts to solve for the desired integrals.
\[ \int_0^\infty \frac{(\ln x)^2}{1+x^2}\,dx = \frac{\pi^3}{8} \]
\[ \int_0^\infty \frac{\ln x}{1+x^2}\,dx = 0 \]
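Both results can be cross-checked numerically (our addition, not part of the text): the substitution $x \to 1/x$ shows the integral over $(1,\infty)$ equals the one over $(0,1)$, and $x = e^{-t}$ then produces a rapidly convergent smooth integral.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# x -> 1/x maps int_1^inf onto int_0^1; then x = e^{-t} on (0, 1) gives
# int_0^inf t^2 e^{-t}/(1 + e^{-2t}) dt, smooth and exponentially decaying.
g = lambda t: t * t * math.exp(-t) / (1 + math.exp(-2 * t))
approx = 2 * simpson(g, 0.0, 60.0, 20000)
exact = math.pi**3 / 8
print(approx, exact)
```

The same symmetry $x \to 1/x$ makes the vanishing of $\int_0^\infty \ln x/(1+x^2)\,dx$ immediate: the substitution flips the sign of the integrand.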
Solution 15.21
We consider the branch of the function
\[ f(z) = \frac{\log z}{z^2 + 5z + 6} \]
with a branch cut on the positive real axis and $0 < \arg(z) < 2\pi$.
Let $C_\epsilon$ and $C_R$ denote the circles of radius $\epsilon$ and $R$ where $\epsilon < 1 < R$. $C_\epsilon$ is negatively oriented; $C_R$ is positively oriented. Consider the closed contour $C$ that is traced by a point moving from $\epsilon$ to $R$ above the branch cut, next around $C_R$ back to $R$, then below the cut to $\epsilon$, and finally around $C_\epsilon$ back to $\epsilon$. (See Figure 15.9.)

Figure 15.9: The path of integration.

We can evaluate the integral of $f(z)$ along $C$ with the residue theorem. For $R > 3$, there are first order poles inside the path of integration at $z = -2$ and $z = -3$.
\[ \int_C \frac{\log z}{z^2+5z+6}\,dz = i2\pi\left[ \operatorname{Res}\left( \frac{\log z}{z^2+5z+6}, z=-2 \right) + \operatorname{Res}\left( \frac{\log z}{z^2+5z+6}, z=-3 \right) \right] \]
\[ = i2\pi\left[ \lim_{z\to-2} \frac{\log z}{z+3} + \lim_{z\to-3} \frac{\log z}{z+2} \right] = i2\pi\left[ \frac{\log(-2)}{1} + \frac{\log(-3)}{-1} \right] \]
\[ = i2\pi\left( \log 2 + i\pi - \log 3 - i\pi \right) = i2\pi \log\left( \frac{2}{3} \right) \]
In the limit as $\epsilon\to0$, the integral along $C_\epsilon$ vanishes. We demonstrate this with the maximum modulus integral bound.
\[ \left| \int_{C_\epsilon} \frac{\log z}{z^2+5z+6}\,dz \right| \le 2\pi\epsilon \max_{z\in C_\epsilon}\left| \frac{\log z}{z^2+5z+6} \right| \le 2\pi\epsilon\, \frac{2\pi - \log\epsilon}{6 - 5\epsilon - \epsilon^2} \to 0 \quad\text{as }\epsilon\to0 \]
In the limit as $R\to\infty$, the integral along $C_R$ vanishes. We again demonstrate this with the maximum modulus integral bound.
\[ \left| \int_{C_R} \frac{\log z}{z^2+5z+6}\,dz \right| \le 2\pi R \max_{z\in C_R}\left| \frac{\log z}{z^2+5z+6} \right| \le 2\pi R\, \frac{\log R + 2\pi}{R^2 - 5R - 6} \to 0 \quad\text{as }R\to\infty \]
Taking the limit as $\epsilon\to0$ and $R\to\infty$, the integral along $C$ is
\[ \int_C \frac{\log z}{z^2+5z+6}\,dz = \int_0^\infty \frac{\log x}{x^2+5x+6}\,dx + \int_\infty^0 \frac{\log x + i2\pi}{x^2+5x+6}\,dx = -i2\pi \int_0^\infty \frac{dx}{x^2+5x+6}. \]
Now we can evaluate the real integral.
\[ -i2\pi \int_0^\infty \frac{dx}{x^2+5x+6} = i2\pi \log\left( \frac{2}{3} \right) \]
\[ \int_0^\infty \frac{dx}{x^2+5x+6} = \log\left( \frac{3}{2} \right) \]
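A numerical sanity check (our addition): substituting $x = t/(1-t)$ maps the integral onto $[0,1]$ with a smooth rational integrand, since $(x^2+5x+6)(1-t)^2 = t^2 + 5t(1-t) + 6(1-t)^2$ never vanishes there.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# x = t/(1-t) compactifies (0, inf) to (0, 1); dx = dt/(1-t)^2 cancels
# against the quadratic growth of the denominator.
f = lambda t: 1 / (t * t + 5 * t * (1 - t) + 6 * (1 - t) ** 2)
approx = simpson(f, 0.0, 1.0, 10000)
exact = math.log(1.5)
print(approx, exact)
```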
Solution 15.22
We consider the integral
\[ I(a) = \int_0^\infty \frac{x^a}{(x+1)^2}\,dx. \]
To examine convergence, we split the domain of integration.
\[ \int_0^\infty \frac{x^a}{(x+1)^2}\,dx = \int_0^1 \frac{x^a}{(x+1)^2}\,dx + \int_1^\infty \frac{x^a}{(x+1)^2}\,dx \]
First we work with the integral on $(0\ldots1)$.
\[ \left| \int_0^1 \frac{x^a}{(x+1)^2}\,dx \right| \le \int_0^1 \left| \frac{x^a}{(x+1)^2} \right| |dx| = \int_0^1 \frac{x^{\Re(a)}}{(x+1)^2}\,dx \le \int_0^1 x^{\Re(a)}\,dx \]
This integral converges for $\Re(a) > -1$.
Next we work with the integral on $(1\ldots\infty)$.
\[ \left| \int_1^\infty \frac{x^a}{(x+1)^2}\,dx \right| \le \int_1^\infty \left| \frac{x^a}{(x+1)^2} \right| |dx| = \int_1^\infty \frac{x^{\Re(a)}}{(x+1)^2}\,dx \le \int_1^\infty x^{\Re(a)-2}\,dx \]
This integral converges for $\Re(a) < 1$.
Thus we see that the integral defining $I(a)$ converges in the strip $-1 < \Re(a) < 1$. The integral converges uniformly in any closed subset of this domain. Uniform convergence means that we can differentiate the integral with respect to $a$ and interchange the order of integration and differentiation.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{(x+1)^2}\,dx \]
Thus we see that $I(a)$ is analytic for $-1 < \Re(a) < 1$.
For $-1 < \Re(a) < 1$ and $a \ne 0$, $z^a$ is multi-valued. Consider the branch of the function $f(z) = z^a/(z+1)^2$ with a branch cut on the positive real axis and $0 < \arg(z) < 2\pi$. We integrate along the contour in Figure 15.10.
The integral on $C_\epsilon$ vanishes as $\epsilon\to0$. We show this with the maximum modulus integral bound. First we write $z^a$ in modulus-argument form, where $z = \epsilon\,e^{i\theta}$ and $a = \alpha + i\beta$.
\[ z^a = e^{a \log z} = e^{(\alpha+i\beta)(\ln\epsilon + i\theta)} = e^{\alpha\ln\epsilon - \beta\theta + i(\beta\ln\epsilon + \alpha\theta)} = \epsilon^\alpha\, e^{-\beta\theta}\, e^{i(\beta\ln\epsilon + \alpha\theta)} \]
Now we bound the integral.
\[ \left| \int_{C_\epsilon} \frac{z^a}{(z+1)^2}\,dz \right| \le 2\pi\epsilon \max_{z\in C_\epsilon}\left| \frac{z^a}{(z+1)^2} \right| \le 2\pi\epsilon\, \frac{\epsilon^\alpha\,e^{2\pi|\beta|}}{(1-\epsilon)^2} \to 0 \quad\text{as }\epsilon\to0 \]
The integral on $C_R$ vanishes as $R\to\infty$.
\[ \left| \int_{C_R} \frac{z^a}{(z+1)^2}\,dz \right| \le 2\pi R \max_{z\in C_R}\left| \frac{z^a}{(z+1)^2} \right| \le 2\pi R\, \frac{R^\alpha\,e^{2\pi|\beta|}}{(R-1)^2} \to 0 \quad\text{as }R\to\infty \]
Above the branch cut, ($z = r\,e^{i0}$), the integrand is
\[ f(r\,e^{i0}) = \frac{r^a}{(r+1)^2}. \]
Below the branch cut, ($z = r\,e^{i2\pi}$), we have
\[ f(r\,e^{i2\pi}) = \frac{e^{i2\pi a}\,r^a}{(r+1)^2}. \]
Now we use the residue theorem.
\[ \int_0^\infty \frac{r^a}{(r+1)^2}\,dr + \int_\infty^0 \frac{e^{i2\pi a}\,r^a}{(r+1)^2}\,dr = i2\pi \operatorname{Res}\left( \frac{z^a}{(z+1)^2}, -1 \right) \]
\[ \left( 1 - e^{i2\pi a} \right) \int_0^\infty \frac{r^a}{(r+1)^2}\,dr = i2\pi \lim_{z\to-1} \frac{d}{dz}\left( z^a \right) \]
\[ \int_0^\infty \frac{r^a}{(r+1)^2}\,dr = \frac{i2\pi\,a\,e^{i\pi(a-1)}}{1 - e^{i2\pi a}} \]
\[ \int_0^\infty \frac{r^a}{(r+1)^2}\,dr = \frac{i2\pi a}{e^{i\pi a} - e^{-i\pi a}} \]
\[ \int_0^\infty \frac{x^a}{(x+1)^2}\,dx = \frac{\pi a}{\sin(\pi a)} \quad\text{for } -1 < \Re(a) < 1,\ a \ne 0 \]
The right side has a removable singularity at $a = 0$. We use analytic continuation to extend the answer to $a = 0$.
\[ I(a) = \int_0^\infty \frac{x^a}{(x+1)^2}\,dx = \begin{cases} \dfrac{\pi a}{\sin(\pi a)} & \text{for } -1 < \Re(a) < 1,\ a \ne 0 \\ 1 & \text{for } a = 0 \end{cases} \]
We can derive the last two integrals by differentiating this formula with respect to $a$ and taking the limit $a \to 0$.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{(x+1)^2}\,dx, \qquad I''(a) = \int_0^\infty \frac{x^a \log^2 x}{(x+1)^2}\,dx \]
\[ I'(0) = \int_0^\infty \frac{\log x}{(x+1)^2}\,dx, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{(x+1)^2}\,dx \]
We can find $I'(0)$ and $I''(0)$ either by differentiating the expression for $I(a)$ or by finding the first few terms in the Taylor series expansion of $I(a)$ about $a = 0$. The latter approach is a little easier.
\[ I(a) = \sum_{n=0}^\infty \frac{I^{(n)}(0)}{n!}\,a^n \]
\[ I(a) = \frac{\pi a}{\sin(\pi a)} = \frac{\pi a}{\pi a - (\pi a)^3/6 + O(a^5)} = \frac{1}{1 - (\pi a)^2/6 + O(a^4)} = 1 + \frac{\pi^2 a^2}{6} + O(a^4) \]
\[ I'(0) = \int_0^\infty \frac{\log x}{(x+1)^2}\,dx = 0 \]
\[ I''(0) = \int_0^\infty \frac{\log^2 x}{(x+1)^2}\,dx = \frac{\pi^2}{3} \]
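The value $I(1/2) = \pi/2$ predicted by $\pi a/\sin(\pi a)$ can be confirmed directly (our addition, not part of the text): the substitution $x = \tan^2\phi$ turns $\int_0^\infty \sqrt{x}/(x+1)^2\,dx$ into the smooth integral $\int_0^{\pi/2} 2\sin^2\phi\,d\phi$.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# x = tan^2(phi): sqrt(x) dx/(x+1)^2 = 2 sin^2(phi) dphi on (0, pi/2).
approx = simpson(lambda p: 2 * math.sin(p) ** 2, 0.0, math.pi / 2, 2000)
exact = math.pi * 0.5 / math.sin(math.pi * 0.5)  # pi a / sin(pi a) at a = 1/2
print(approx, exact)
```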
Solution 15.23
1. We consider the integral
\[ I(a) = \int_0^\infty \frac{x^a}{1+x^2}\,dx. \]
To examine convergence, we split the domain of integration.
\[ \int_0^\infty \frac{x^a}{1+x^2}\,dx = \int_0^1 \frac{x^a}{1+x^2}\,dx + \int_1^\infty \frac{x^a}{1+x^2}\,dx \]
First we work with the integral on $(0\ldots1)$.
\[ \left| \int_0^1 \frac{x^a}{1+x^2}\,dx \right| \le \int_0^1 \left| \frac{x^a}{1+x^2} \right| |dx| = \int_0^1 \frac{x^{\Re(a)}}{1+x^2}\,dx \le \int_0^1 x^{\Re(a)}\,dx \]
This integral converges for $\Re(a) > -1$.
Next we work with the integral on $(1\ldots\infty)$.
\[ \left| \int_1^\infty \frac{x^a}{1+x^2}\,dx \right| \le \int_1^\infty \left| \frac{x^a}{1+x^2} \right| |dx| = \int_1^\infty \frac{x^{\Re(a)}}{1+x^2}\,dx \le \int_1^\infty x^{\Re(a)-2}\,dx \]
This integral converges for $\Re(a) < 1$.
Thus we see that the integral defining $I(a)$ converges in the strip $-1 < \Re(a) < 1$. The integral converges uniformly in any closed subset of this domain. Uniform convergence means that we can differentiate the integral with respect to $a$ and interchange the order of integration and differentiation.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{1+x^2}\,dx \]
Thus we see that $I(a)$ is analytic for $-1 < \Re(a) < 1$.
2. For $-1 < \Re(a) < 1$ and $a \ne 0$, $z^a$ is multi-valued. Consider the branch of the function $f(z) = z^a/(1+z^2)$ with a branch cut on the positive real axis and $0 < \arg(z) < 2\pi$. We integrate along the contour in Figure 15.10.

Figure 15.10: The path of integration.

The integral on $C_\epsilon$ vanishes as $\epsilon\to0$. We show this with the maximum modulus integral bound. First we write $z^a$ in modulus-argument form, where $z = \epsilon\,e^{i\theta}$ and $a = \alpha + i\beta$.
\[ z^a = e^{a \log z} = e^{(\alpha+i\beta)(\ln\epsilon + i\theta)} = e^{\alpha\ln\epsilon - \beta\theta + i(\beta\ln\epsilon + \alpha\theta)} = \epsilon^\alpha\,e^{-\beta\theta}\,e^{i(\beta\ln\epsilon + \alpha\theta)} \]
Now we bound the integral.
\[ \left| \int_{C_\epsilon} \frac{z^a}{1+z^2}\,dz \right| \le 2\pi\epsilon \max_{z\in C_\epsilon}\left| \frac{z^a}{1+z^2} \right| \le 2\pi\epsilon\, \frac{\epsilon^\alpha\,e^{2\pi|\beta|}}{1-\epsilon^2} \to 0 \quad\text{as }\epsilon\to0 \]
The integral on $C_R$ vanishes as $R\to\infty$.
\[ \left| \int_{C_R} \frac{z^a}{1+z^2}\,dz \right| \le 2\pi R \max_{z\in C_R}\left| \frac{z^a}{1+z^2} \right| \le 2\pi R\, \frac{R^\alpha\,e^{2\pi|\beta|}}{R^2-1} \to 0 \quad\text{as }R\to\infty \]
Above the branch cut, ($z = r\,e^{i0}$), the integrand is
\[ f(r\,e^{i0}) = \frac{r^a}{1+r^2}. \]
Below the branch cut, ($z = r\,e^{i2\pi}$), we have
\[ f(r\,e^{i2\pi}) = \frac{e^{i2\pi a}\,r^a}{1+r^2}. \]
Now we use the residue theorem.
\[ \int_0^\infty \frac{r^a}{1+r^2}\,dr + \int_\infty^0 \frac{e^{i2\pi a}\,r^a}{1+r^2}\,dr = i2\pi\left[ \operatorname{Res}\left( \frac{z^a}{1+z^2}, i \right) + \operatorname{Res}\left( \frac{z^a}{1+z^2}, -i \right) \right] \]
\[ \left( 1 - e^{i2\pi a} \right) \int_0^\infty \frac{x^a}{1+x^2}\,dx = i2\pi\left[ \lim_{z\to i} \frac{z^a}{z+i} + \lim_{z\to -i} \frac{z^a}{z-i} \right] \]
\[ \left( 1 - e^{i2\pi a} \right) \int_0^\infty \frac{x^a}{1+x^2}\,dx = i2\pi\left[ \frac{e^{i\pi a/2}}{2i} + \frac{e^{i3\pi a/2}}{-2i} \right] \]
\[ \int_0^\infty \frac{x^a}{1+x^2}\,dx = \pi\,\frac{e^{i\pi a/2} - e^{i3\pi a/2}}{1 - e^{i2\pi a}} \]
\[ \int_0^\infty \frac{x^a}{1+x^2}\,dx = \pi\,\frac{e^{i\pi a/2}\left( 1 - e^{i\pi a} \right)}{\left( 1 + e^{i\pi a} \right)\left( 1 - e^{i\pi a} \right)} = \frac{\pi}{e^{-i\pi a/2} + e^{i\pi a/2}} \]
\[ \int_0^\infty \frac{x^a}{1+x^2}\,dx = \frac{\pi}{2\cos(\pi a/2)} \quad\text{for } -1 < \Re(a) < 1,\ a \ne 0 \]
We use analytic continuation to extend the answer to $a = 0$.
\[ I(a) = \int_0^\infty \frac{x^a}{1+x^2}\,dx = \frac{\pi}{2\cos(\pi a/2)} \quad\text{for } -1 < \Re(a) < 1 \]
3. We can derive the last two integrals by differentiating this formula with respect to $a$ and taking the limit $a \to 0$.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{1+x^2}\,dx, \qquad I''(a) = \int_0^\infty \frac{x^a \log^2 x}{1+x^2}\,dx \]
\[ I'(0) = \int_0^\infty \frac{\log x}{1+x^2}\,dx, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{1+x^2}\,dx \]
We can find $I'(0)$ and $I''(0)$ either by differentiating the expression for $I(a)$ or by finding the first few terms in the Taylor series expansion of $I(a)$ about $a = 0$. The latter approach is a little easier.
\[ I(a) = \sum_{n=0}^\infty \frac{I^{(n)}(0)}{n!}\,a^n \]
\[ I(a) = \frac{\pi}{2\cos(\pi a/2)} = \frac{\pi}{2}\,\frac{1}{1 - (\pi a/2)^2/2 + O(a^4)} = \frac{\pi}{2}\left( 1 + \frac{(\pi a/2)^2}{2} + O(a^4) \right) = \frac{\pi}{2} + \frac{\pi^3}{16}\,a^2 + O(a^4) \]
\[ I'(0) = \int_0^\infty \frac{\log x}{1+x^2}\,dx = 0 \]
\[ I''(0) = \int_0^\infty \frac{\log^2 x}{1+x^2}\,dx = \frac{\pi^3}{8} \]
Solution 15.24
Convergence. If $x^a f(x) \sim x^\alpha$ as $x\to0$ for some $\alpha > -1$ then the integral
\[ \int_0^1 x^a f(x)\,dx \]
will converge absolutely. If $x^a f(x) \sim x^\beta$ as $x\to\infty$ for some $\beta < -1$ then the integral
\[ \int_1^\infty x^a f(x)\,dx \]
will converge absolutely. These are sufficient conditions for the absolute convergence of
\[ \int_0^\infty x^a f(x)\,dx. \]
Contour Integration. We put a branch cut on the positive real axis and choose $0 < \arg(z) < 2\pi$. We consider the integral of $z^a f(z)$ on the contour in Figure 15.10. Let the singularities of $f(z)$ occur at $z_1, \ldots, z_n$. By the residue theorem,
\[ \int_C z^a f(z)\,dz = i2\pi \sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right). \]
On the circle of radius $\epsilon$, the integrand is $o(\epsilon^{-1})$. Since the length of $C_\epsilon$ is $2\pi\epsilon$, the integral on $C_\epsilon$ vanishes as $\epsilon\to0$. On the circle of radius $R$, the integrand is $o(R^{-1})$. Since the length of $C_R$ is $2\pi R$, the integral on $C_R$ vanishes as $R\to\infty$.
The value of the integrand below the branch cut, $z = x\,e^{i2\pi}$, is
\[ x^a\,e^{i2\pi a} f(x). \]
In the limit as $\epsilon\to0$ and $R\to\infty$ we have
\[ \int_0^\infty x^a f(x)\,dx + \int_\infty^0 x^a\,e^{i2\pi a} f(x)\,dx = i2\pi \sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right). \]
\[ \int_0^\infty x^a f(x)\,dx = \frac{i2\pi}{1 - e^{i2\pi a}} \sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right). \]
Solution 15.25
In the interval of uniform convergence of the integral, we can differentiate the formula
\[ \int_0^\infty x^a f(x)\,dx = \frac{i2\pi}{1 - e^{i2\pi a}} \sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right) \]
with respect to $a$ to obtain
\[ \int_0^\infty x^a f(x) \log x\,dx = \frac{i2\pi}{1 - e^{i2\pi a}} \sum_{k=1}^n \operatorname{Res}\left( z^a f(z) \log z, z_k \right) - \frac{4\pi^2\,e^{i2\pi a}}{\left( 1 - e^{i2\pi a} \right)^2} \sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right). \]
\[ \int_0^\infty x^a f(x) \log x\,dx = \frac{i2\pi}{1 - e^{i2\pi a}} \sum_{k=1}^n \operatorname{Res}\left( z^a f(z) \log z, z_k \right) + \frac{\pi^2}{\sin^2(\pi a)} \sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right). \]
Differentiating the solution of Exercise 15.22 $m$ times with respect to $a$ yields
\[ \int_0^\infty x^a f(x) \log^m x\,dx = \frac{\partial^m}{\partial a^m} \left[ \frac{i2\pi}{1 - e^{i2\pi a}} \sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right) \right]. \]
Solution 15.26
Taking the limit as $a \to 0$ in the solution of Exercise 15.22 yields
\[ \int_0^\infty f(x)\,dx = i2\pi \lim_{a\to0} \left[ \frac{\sum_{k=1}^n \operatorname{Res}\left( z^a f(z), z_k \right)}{1 - e^{i2\pi a}} \right]. \]
The numerator vanishes because the sum of all residues of $f(z)$ is zero. Thus we can use L'Hospital's rule.
\[ \int_0^\infty f(x)\,dx = i2\pi \lim_{a\to0} \left[ \frac{\sum_{k=1}^n \operatorname{Res}\left( z^a f(z) \log z, z_k \right)}{-i2\pi\,e^{i2\pi a}} \right] \]
\[ \int_0^\infty f(x)\,dx = -\sum_{k=1}^n \operatorname{Res}\left( f(z) \log z, z_k \right) \]
This suggests that we could have derived the result directly by considering the integral of $f(z)\log z$ on the contour in Figure 15.10. We put a branch cut on the positive real axis and choose the branch $0 < \arg z < 2\pi$. Recall that we have assumed that $f(z)$ has only isolated singularities and no singularities on the positive real axis, $[0,\infty)$. By the residue theorem,
\[ \int_C f(z)\log z\,dz = i2\pi \sum_{k=1}^n \operatorname{Res}\left( f(z)\log z, z = z_k \right). \]
By assuming that $f(z) \sim z^\alpha$ as $z\to0$ where $\alpha > -1$, the integral on $C_\epsilon$ will vanish as $\epsilon\to0$. By assuming that $f(z) \sim z^\beta$ as $z\to\infty$ where $\beta < -1$, the integral on $C_R$ will vanish as $R\to\infty$. The value of the integrand below the branch cut, $z = x\,e^{i2\pi}$, is $f(x)(\log x + i2\pi)$. Taking the limit as $\epsilon\to0$ and $R\to\infty$, we have
\[ \int_0^\infty f(x)\log x\,dx + \int_\infty^0 f(x)\left( \log x + i2\pi \right) dx = i2\pi \sum_{k=1}^n \operatorname{Res}\left( f(z)\log z, z_k \right). \]
Thus we corroborate the result.
\[ \int_0^\infty f(x)\,dx = -\sum_{k=1}^n \operatorname{Res}\left( f(z)\log z, z_k \right) \]
Solution 15.27
Consider the integral of $f(z)\log^2 z$ on the contour in Figure 15.10. We put a branch cut on the positive real axis and choose the branch $0 < \arg z < 2\pi$. Let $z_1, \ldots, z_n$ be the singularities of $f(z)$. By the residue theorem,
\[ \int_C f(z)\log^2 z\,dz = i2\pi \sum_{k=1}^n \operatorname{Res}\left( f(z)\log^2 z, z_k \right). \]
If $f(z) \sim z^\alpha$ as $z\to0$ for some $\alpha > -1$ then the integral on $C_\epsilon$ will vanish as $\epsilon\to0$. If $f(z) \sim z^\beta$ as $z\to\infty$ for some $\beta < -1$ then the integral on $C_R$ will vanish as $R\to\infty$. Below the branch cut the integrand is $f(x)(\log x + i2\pi)^2$. Thus we have
\[ \int_0^\infty f(x)\log^2 x\,dx + \int_\infty^0 f(x)\left( \log^2 x + i4\pi\log x - 4\pi^2 \right) dx = i2\pi \sum_{k=1}^n \operatorname{Res}\left( f(z)\log^2 z, z_k \right). \]
\[ -i4\pi \int_0^\infty f(x)\log x\,dx + 4\pi^2 \int_0^\infty f(x)\,dx = i2\pi \sum_{k=1}^n \operatorname{Res}\left( f(z)\log^2 z, z_k \right). \]
Substituting $\int_0^\infty f(x)\,dx = -\sum_k \operatorname{Res}\left( f(z)\log z, z_k \right)$ from the previous solution and solving for the remaining integral,
\[ \int_0^\infty f(x)\log x\,dx = -\frac{1}{2} \sum_{k=1}^n \operatorname{Res}\left( f(z)\log^2 z, z_k \right) + i\pi \sum_{k=1}^n \operatorname{Res}\left( f(z)\log z, z_k \right) \]
Solution 15.28
Convergence. We consider
\[ \int_0^\infty \frac{x^a}{1+x^4}\,dx. \]
Since the integrand behaves like $x^a$ near $x = 0$ we must have $\Re(a) > -1$. Since the integrand behaves like $x^{a-4}$ at infinity we must have $\Re(a-4) < -1$. The integral converges for $-1 < \Re(a) < 3$.
Contour Integration. The function
\[ f(z) = \frac{z^a}{1+z^4} \]
has first order poles at $z = (\pm1 \pm i)/\sqrt{2}$ and a branch point at $z = 0$. We could evaluate the real integral by putting a branch cut on the positive real axis with $0 < \arg(z) < 2\pi$ and integrating $f(z)$ on the contour in Figure 15.11.

Figure 15.11: Possible Path of Integration for $f(z) = \frac{z^a}{1+z^4}$

Integrating on this contour would work because the value of the integrand below the branch cut is a constant times the value of the integrand above the branch cut. After demonstrating that the integrals along $C_\epsilon$ and $C_R$ vanish in the limits as $\epsilon\to0$ and $R\to\infty$ we would see that the value of the integral is a constant times the sum of the residues at the four poles. However, this is not the only, (and not the best), contour that can be used to evaluate the real integral. Consider the value of the integrand on the line $\arg(z) = \theta$.
\[ f(r\,e^{i\theta}) = \frac{r^a\,e^{ia\theta}}{1 + r^4\,e^{i4\theta}} \]
If $\theta$ is an integer multiple of $\pi/2$ then the integrand is a constant multiple of
\[ f(x) = \frac{r^a}{1+r^4}. \]
Thus any of the contours in Figure 15.12 can be used to evaluate the real integral. The only difference is how many residues we have to calculate. Thus we choose the first contour in Figure 15.12. We put a branch cut on the negative real axis and choose the branch $-\pi < \arg(z) < \pi$ to satisfy $f(1) = 1$.
We evaluate the integral along $C$ with the Residue Theorem.
\[ \int_C \frac{z^a}{1+z^4}\,dz = i2\pi \operatorname{Res}\left( \frac{z^a}{1+z^4}, z = \frac{1+i}{\sqrt{2}} \right) \]

Figure 15.12: Possible Paths of Integration for $f(z) = \frac{z^a}{1+z^4}$

Let $a = \alpha + i\beta$ and $z = r\,e^{i\theta}$. Note that
\[ |z^a| = \left| (r\,e^{i\theta})^{\alpha+i\beta} \right| = r^\alpha\,e^{-\beta\theta}. \]
The integral on $C_\epsilon$ vanishes as $\epsilon\to0$. We demonstrate this with the maximum modulus integral bound.
\[ \left| \int_{C_\epsilon} \frac{z^a}{1+z^4}\,dz \right| \le \frac{\pi\epsilon}{2} \max_{z\in C_\epsilon}\left| \frac{z^a}{1+z^4} \right| \le \frac{\pi\epsilon}{2}\,\frac{\epsilon^\alpha\,e^{\pi|\beta|/2}}{1-\epsilon^4} \to 0 \quad\text{as }\epsilon\to0 \]
The integral on $C_R$ vanishes as $R\to\infty$.
\[ \left| \int_{C_R} \frac{z^a}{1+z^4}\,dz \right| \le \frac{\pi R}{2} \max_{z\in C_R}\left| \frac{z^a}{1+z^4} \right| \le \frac{\pi R}{2}\,\frac{R^\alpha\,e^{\pi|\beta|/2}}{R^4-1} \to 0 \quad\text{as }R\to\infty \]
The value of the integrand on the positive imaginary axis, $z = x\,e^{i\pi/2}$, is
\[ \frac{(x\,e^{i\pi/2})^a}{1 + (x\,e^{i\pi/2})^4} = \frac{x^a\,e^{i\pi a/2}}{1+x^4}. \]
We take the limit as $\epsilon\to0$ and $R\to\infty$.
\[ \int_0^\infty \frac{x^a}{1+x^4}\,dx + \int_\infty^0 \frac{x^a\,e^{i\pi a/2}}{1+x^4}\,e^{i\pi/2}\,dx = i2\pi \operatorname{Res}\left( \frac{z^a}{1+z^4}, e^{i\pi/4} \right) \]
\[ \left( 1 - e^{i\pi(a+1)/2} \right) \int_0^\infty \frac{x^a}{1+x^4}\,dx = i2\pi \lim_{z\to e^{i\pi/4}} \left( \frac{z^a\,(z - e^{i\pi/4})}{1+z^4} \right) \]
\[ \int_0^\infty \frac{x^a}{1+x^4}\,dx = \frac{i2\pi}{1 - e^{i\pi(a+1)/2}} \lim_{z\to e^{i\pi/4}} \left( \frac{a z^{a-1}(z - e^{i\pi/4}) + z^a}{4z^3} \right) \]
\[ \int_0^\infty \frac{x^a}{1+x^4}\,dx = \frac{i2\pi}{1 - e^{i\pi(a+1)/2}}\,\frac{e^{i\pi a/4}}{4\,e^{i3\pi/4}} \]
\[ \int_0^\infty \frac{x^a}{1+x^4}\,dx = \frac{i\pi}{2\left( e^{i\pi(a+1)/4} - e^{-i\pi(a+1)/4} \right)} \]
\[ \int_0^\infty \frac{x^a}{1+x^4}\,dx = \frac{\pi}{4} \csc\left( \frac{\pi(a+1)}{4} \right) \]
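For $a = 1$ the formula gives $(\pi/4)\csc(\pi/2) = \pi/4$, agreeing with the elementary evaluation $\int_0^\infty x/(1+x^4)\,dx = \tfrac12\int_0^\infty du/(1+u^2)$ via $u = x^2$. A numerical check (our addition), after compactifying with $x = t/(1-t)$:

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# x = t/(1-t): x dx/(1 + x^4) = t(1-t) dt/((1-t)^4 + t^4) on (0, 1);
# the denominator is bounded away from zero there.
f = lambda t: t * (1 - t) / ((1 - t) ** 4 + t ** 4)
approx = simpson(f, 0.0, 1.0, 10000)
exact = (math.pi / 4) / math.sin(math.pi * (1 + 1) / 4)  # formula at a = 1
print(approx, exact)
```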
Solution 15.29
Consider the branch of $f(z) = z^{1/2}\log z/(z+1)^2$ with a branch cut on the positive real axis and $0 < \arg z < 2\pi$. We integrate this function on the contour in Figure 15.10.
We use the maximum modulus integral bound to show that the integral on $C_\epsilon$ vanishes as $\epsilon\to0$.
\[ \left| \int_{C_\epsilon} \frac{z^{1/2}\log z}{(z+1)^2}\,dz \right| \le 2\pi\epsilon \max_{C_\epsilon}\left| \frac{z^{1/2}\log z}{(z+1)^2} \right| = 2\pi\epsilon\,\frac{\epsilon^{1/2}\left( 2\pi - \log\epsilon \right)}{(1-\epsilon)^2} \to 0 \quad\text{as }\epsilon\to0 \]
The integral on $C_R$ vanishes as $R\to\infty$.
\[ \left| \int_{C_R} \frac{z^{1/2}\log z}{(z+1)^2}\,dz \right| \le 2\pi R \max_{C_R}\left| \frac{z^{1/2}\log z}{(z+1)^2} \right| = 2\pi R\,\frac{R^{1/2}\left( \log R + 2\pi \right)}{(R-1)^2} \to 0 \quad\text{as }R\to\infty \]
Above the branch cut, ($z = x\,e^{i0}$), the integrand is
\[ f(x\,e^{i0}) = \frac{x^{1/2}\log x}{(x+1)^2}. \]
Below the branch cut, ($z = x\,e^{i2\pi}$), we have
\[ f(x\,e^{i2\pi}) = \frac{-x^{1/2}\left( \log x + i2\pi \right)}{(x+1)^2}. \]
Taking the limit as $\epsilon\to0$ and $R\to\infty$, the residue theorem gives us
\[ \int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2}\,dx + \int_\infty^0 \frac{-x^{1/2}\left( \log x + i2\pi \right)}{(x+1)^2}\,dx = i2\pi \operatorname{Res}\left( \frac{z^{1/2}\log z}{(z+1)^2}, -1 \right). \]
\[ 2\int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2}\,dx + i2\pi \int_0^\infty \frac{x^{1/2}}{(x+1)^2}\,dx = i2\pi \lim_{z\to-1} \frac{d}{dz}\left( z^{1/2}\log z \right) \]
\[ 2\int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2}\,dx + i2\pi \int_0^\infty \frac{x^{1/2}}{(x+1)^2}\,dx = i2\pi \lim_{z\to-1} \left( \frac{1}{2} z^{-1/2}\log z + z^{1/2}\,\frac{1}{z} \right) \]
\[ 2\int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2}\,dx + i2\pi \int_0^\infty \frac{x^{1/2}}{(x+1)^2}\,dx = i2\pi \left( \frac{1}{2}(-i)(i\pi) - i \right) \]
\[ 2\int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2}\,dx + i2\pi \int_0^\infty \frac{x^{1/2}}{(x+1)^2}\,dx = 2\pi + i\pi^2 \]
Equating real and imaginary parts,
\[ \int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2}\,dx = \pi, \qquad \int_0^\infty \frac{x^{1/2}}{(x+1)^2}\,dx = \frac{\pi}{2}. \]
Exploiting Symmetry
Solution 15.30
Convergence. The integrand,
\[ \frac{e^{ax}}{e^x - e^{-x}} = \frac{e^{ax}}{2\sinh(x)}, \]
has first order poles at $x = i\pi n$, $n \in \mathbb{Z}$. To study convergence, we split the domain of integration.
\[ \int_{-\infty}^\infty = \int_{-\infty}^{-1} + \int_{-1}^1 + \int_1^\infty \]
The principal value integral
\[ \mathrm{p.v.}\int_{-1}^1 \frac{e^{ax}}{e^x - e^{-x}}\,dx \]
exists for any $a$ because the integrand has only a first order pole on the path of integration.
Now consider the integral on $(1\ldots\infty)$.
\[ \left| \int_1^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx \right| = \int_1^\infty \frac{e^{(a-1)x}}{1 - e^{-2x}}\,dx \le \frac{1}{1 - e^{-2}} \int_1^\infty e^{(a-1)x}\,dx \]
This integral converges for $a - 1 < 0$; $a < 1$.
Finally consider the integral on $(-\infty\ldots-1)$.
\[ \left| \int_{-\infty}^{-1} \frac{e^{ax}}{e^x - e^{-x}}\,dx \right| = \int_{-\infty}^{-1} \frac{e^{(a+1)x}}{1 - e^{2x}}\,dx \le \frac{1}{1 - e^{-2}} \int_{-\infty}^{-1} e^{(a+1)x}\,dx \]
This integral converges for $a + 1 > 0$; $a > -1$.
Thus we see that the integral for $I(a)$ converges for real $a$, $|a| < 1$.
Choice of Contour. Consider the contour $C$ that is the boundary of the region: $-R < x < R$, $0 < y < \pi$. The integrand has no singularities inside the contour. There are first order poles on the contour at $z = 0$ and $z = i\pi$. The value of the integral along the contour is $i\pi$ times the sum of these two residues.
The integrals along the vertical sides of the contour vanish as $R\to\infty$.
\[ \left| \int_R^{R+i\pi} \frac{e^{az}}{e^z - e^{-z}}\,dz \right| \le \pi \max_{z\in(R\ldots R+i\pi)} \left| \frac{e^{az}}{e^z - e^{-z}} \right| \le \pi\,\frac{e^{aR}}{e^R - e^{-R}} \to 0 \quad\text{as }R\to\infty \]
\[ \left| \int_{-R}^{-R+i\pi} \frac{e^{az}}{e^z - e^{-z}}\,dz \right| \le \pi \max_{z\in(-R\ldots-R+i\pi)} \left| \frac{e^{az}}{e^z - e^{-z}} \right| \le \pi\,\frac{e^{-aR}}{e^R - e^{-R}} \to 0 \quad\text{as }R\to\infty \]
Evaluating the Integral. We take the limit as $R\to\infty$ and apply the residue theorem.
\[ \mathrm{p.v.}\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx + \mathrm{p.v.}\int_{\infty+i\pi}^{-\infty+i\pi} \frac{e^{az}}{e^z - e^{-z}}\,dz = i\pi \operatorname{Res}\left( \frac{e^{az}}{e^z - e^{-z}}, z=0 \right) + i\pi \operatorname{Res}\left( \frac{e^{az}}{e^z - e^{-z}}, z=i\pi \right) \]
Since $e^{x+i\pi} - e^{-x-i\pi} = -\left( e^x - e^{-x} \right)$, the integral along the top of the contour equals $e^{i\pi a}$ times the integral along the bottom.
\[ \left( 1 + e^{i\pi a} \right) \mathrm{p.v.}\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx = i\pi \lim_{z\to0} \frac{z\,e^{az}}{2\sinh(z)} + i\pi \lim_{z\to i\pi} \frac{(z - i\pi)\,e^{az}}{2\sinh(z)} \]
\[ \left( 1 + e^{i\pi a} \right) \mathrm{p.v.}\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx = i\pi \lim_{z\to0} \frac{e^{az} + az\,e^{az}}{2\cosh(z)} + i\pi \lim_{z\to i\pi} \frac{e^{az} + a(z - i\pi)\,e^{az}}{2\cosh(z)} \]
\[ \left( 1 + e^{i\pi a} \right) \mathrm{p.v.}\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx = i\pi\,\frac{1}{2} - i\pi\,\frac{e^{i\pi a}}{2} \]
\[ \mathrm{p.v.}\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx = \frac{i\pi\left( 1 - e^{i\pi a} \right)}{2\left( 1 + e^{i\pi a} \right)} \]
\[ \mathrm{p.v.}\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx = \frac{\pi}{2}\,\frac{i\left( e^{-i\pi a/2} - e^{i\pi a/2} \right)}{e^{-i\pi a/2} + e^{i\pi a/2}} \]
\[ \mathrm{p.v.}\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}}\,dx = \frac{\pi}{2}\tan\left( \frac{\pi a}{2} \right) \]
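The principal value can be checked numerically (our addition, not part of the text). Splitting off the odd singular part, $\mathrm{p.v.}\int e^{ax}/(2\sinh x)\,dx = \int (e^{ax}-1)/(2\sinh x)\,dx$, since the principal value of the odd term $1/(2\sinh x)$ vanishes; the remainder is regular at the origin with limit $a/2$. For $a = 1/2$ the result should be $(\pi/2)\tan(\pi/4) = \pi/2$.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a = 0.5
def regularized(x):
    # (e^{ax} - 1)/(2 sinh x): smooth across x = 0 with limit a/2 there.
    return a / 2 if abs(x) < 1e-10 else (math.exp(a * x) - 1) / (2 * math.sinh(x))

approx = simpson(regularized, -60.0, 60.0, 40000)
exact = (math.pi / 2) * math.tan(math.pi * a / 2)
print(approx, exact)
```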
Solution 15.31
1.
\[ \int_0^\infty \frac{dx}{(1+x^2)^2} = \frac{1}{2} \int_{-\infty}^\infty \frac{dx}{(1+x^2)^2} \]
We apply Result 15.4.1 to the integral on the real axis. First we verify that the integrand vanishes fast enough in the upper half plane.
\[ \lim_{R\to\infty}\left( R \max_{z\in C_R}\left| \frac{1}{(1+z^2)^2} \right| \right) = \lim_{R\to\infty}\left( \frac{R}{(R^2-1)^2} \right) = 0 \]
Then we evaluate the integral with the residue theorem.
\[ \int_{-\infty}^\infty \frac{dx}{(1+x^2)^2} = i2\pi \operatorname{Res}\left( \frac{1}{(1+z^2)^2}, z=i \right) = i2\pi \operatorname{Res}\left( \frac{1}{(z-i)^2(z+i)^2}, z=i \right) \]
\[ = i2\pi \lim_{z\to i} \frac{d}{dz}\,\frac{1}{(z+i)^2} = i2\pi \lim_{z\to i} \frac{-2}{(z+i)^3} = \frac{\pi}{2} \]
\[ \int_0^\infty \frac{dx}{(1+x^2)^2} = \frac{\pi}{4} \]
2. We wish to evaluate
\[ \int_0^\infty \frac{dx}{x^3+1}. \]
Let the contour $C$ be the boundary of the region $0 < r < R$, $0 < \theta < 2\pi/3$. We factor the denominator of the integrand to see that the contour encloses the simple pole at $e^{i\pi/3}$ for $R > 1$.
\[ z^3 + 1 = \left( z - e^{i\pi/3} \right)(z+1)\left( z - e^{-i\pi/3} \right) \]
We calculate the residue at that point.
\[ \operatorname{Res}\left( \frac{1}{z^3+1}, z = e^{i\pi/3} \right) = \lim_{z\to e^{i\pi/3}} \left( \frac{1}{(z+1)\left( z - e^{-i\pi/3} \right)} \right) = \frac{1}{\left( e^{i\pi/3}+1 \right)\left( e^{i\pi/3} - e^{-i\pi/3} \right)} \]
We use the residue theorem to evaluate the integral.
\[ \oint_C \frac{dz}{z^3+1} = \frac{i2\pi}{\left( e^{i\pi/3}+1 \right)\left( e^{i\pi/3} - e^{-i\pi/3} \right)} \]
Let $C_R$ be the circular arc portion of the contour.
\[ \oint_C \frac{dz}{z^3+1} = \int_0^R \frac{dx}{x^3+1} + \int_{C_R} \frac{dz}{z^3+1} - \int_0^R \frac{e^{i2\pi/3}\,dx}{x^3+1} = \left( 1 + e^{-i\pi/3} \right) \int_0^R \frac{dx}{x^3+1} + \int_{C_R} \frac{dz}{z^3+1} \]
We show that the integral along $C_R$ vanishes as $R\to\infty$ with the maximum modulus integral bound.
\[ \left| \int_{C_R} \frac{dz}{z^3+1} \right| \le \frac{2\pi R}{3}\,\frac{1}{R^3-1} \to 0 \quad\text{as }R\to\infty \]
We take $R\to\infty$ and solve for the desired integral.
\[ \left( 1 + e^{-i\pi/3} \right) \int_0^\infty \frac{dx}{x^3+1} = \frac{i2\pi}{\left( e^{i\pi/3}+1 \right)\left( e^{i\pi/3} - e^{-i\pi/3} \right)} \]
\[ \int_0^\infty \frac{dx}{x^3+1} = \frac{i2\pi}{\left( 1 + e^{-i\pi/3} \right)\left( 1 + e^{i\pi/3} \right)\left( e^{i\pi/3} - e^{-i\pi/3} \right)} \]
\[ \int_0^\infty \frac{dx}{x^3+1} = \frac{i2\pi}{\frac{1}{2}\left( 3 - i\sqrt{3} \right)\,\frac{1}{2}\left( 3 + i\sqrt{3} \right)\left( i\sqrt{3} \right)} \]
\[ \int_0^\infty \frac{dx}{x^3+1} = \frac{2\pi}{3\sqrt{3}} \]
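A numerical check of part 2 (our addition), again compactifying the half-line with $x = t/(1-t)$:

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# x = t/(1-t): dx/(x^3 + 1) = (1-t) dt/((1-t)^3 + t^3); the denominator
# is bounded below by 1/4 on [0, 1].
f = lambda t: (1 - t) / ((1 - t) ** 3 + t ** 3)
approx = simpson(f, 0.0, 1.0, 10000)
exact = 2 * math.pi / (3 * math.sqrt(3))
print(approx, exact)
```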
Solution 15.32
Method 1: Semi-Circle Contour. We wish to evaluate the integral
\[ I = \int_0^\infty \frac{dx}{1+x^6}. \]
We note that the integrand is an even function and express $I$ as an integral over the whole real axis.
\[ I = \frac{1}{2} \int_{-\infty}^\infty \frac{dx}{1+x^6} \]
Now we will evaluate the integral using contour integration. We close the path of integration in the upper half plane. Let $\Gamma_R$ be the semicircular arc from $R$ to $-R$ in the upper half plane. Let $\Gamma$ be the union of $\Gamma_R$ and the interval $[-R, R]$. (See Figure 15.13.)

Figure 15.13: The semi-circle contour.

We can evaluate the integral along $\Gamma$ with the residue theorem. The integrand has first order poles at $z = e^{i\pi(1+2k)/6}$, $k = 0, 1, 2, 3, 4, 5$. Three of these poles are in the upper half plane. For $R > 1$, we have
\[ \oint_\Gamma \frac{1}{z^6+1}\,dz = i2\pi \sum_{k=0}^2 \operatorname{Res}\left( \frac{1}{z^6+1}, e^{i\pi(1+2k)/6} \right) = i2\pi \sum_{k=0}^2 \lim_{z\to e^{i\pi(1+2k)/6}} \frac{z - e^{i\pi(1+2k)/6}}{z^6+1} \]
Since the numerator and denominator vanish, we apply L'Hospital's rule.
\[ = i2\pi \sum_{k=0}^2 \lim_{z\to e^{i\pi(1+2k)/6}} \frac{1}{6z^5} = \frac{i\pi}{3} \sum_{k=0}^2 e^{-i5\pi(1+2k)/6} = \frac{i\pi}{3}\left( e^{-i5\pi/6} + e^{-i15\pi/6} + e^{-i25\pi/6} \right) \]
\[ = \frac{i\pi}{3}\left( e^{-i5\pi/6} + e^{-i\pi/2} + e^{-i\pi/6} \right) = \frac{i\pi}{3}\left( \frac{-\sqrt{3}-i}{2} - i + \frac{\sqrt{3}-i}{2} \right) = \frac{2\pi}{3} \]
Now we examine the integral along $\Gamma_R$. We use the maximum modulus integral bound to show that the value of the integral vanishes as $R\to\infty$.
\[ \left| \int_{\Gamma_R} \frac{1}{z^6+1}\,dz \right| \le \pi R \max_{z\in\Gamma_R}\left| \frac{1}{z^6+1} \right| = \pi R\,\frac{1}{R^6-1} \to 0 \quad\text{as }R\to\infty. \]
Now we are prepared to evaluate the original real integral.
\[ \oint_\Gamma \frac{1}{z^6+1}\,dz = \frac{2\pi}{3} \]
\[ \int_{-R}^R \frac{1}{x^6+1}\,dx + \int_{\Gamma_R} \frac{1}{z^6+1}\,dz = \frac{2\pi}{3} \]
We take the limit as $R\to\infty$.
\[ \int_{-\infty}^\infty \frac{1}{x^6+1}\,dx = \frac{2\pi}{3} \]
\[ \int_0^\infty \frac{1}{x^6+1}\,dx = \frac{\pi}{3} \]
We would get the same result by closing the path of integration in the lower half plane. Note that in this case the closed contour would be in the negative direction.
Method 2: Wedge Contour. Consider the contour $\Gamma$, which starts at the origin, goes to the point $R$ along the real axis, then to the point $R\,e^{i\pi/3}$ along a circle of radius $R$ and then back to the origin along the ray $\theta = \pi/3$. (See Figure 15.14.)

Figure 15.14: The wedge contour.

We can evaluate the integral along $\Gamma$ with the residue theorem. The integrand has one first order pole inside the contour at $z = e^{i\pi/6}$. For $R > 1$, we have
\[ \oint_\Gamma \frac{1}{z^6+1}\,dz = i2\pi \operatorname{Res}\left( \frac{1}{z^6+1}, e^{i\pi/6} \right) = i2\pi \lim_{z\to e^{i\pi/6}} \frac{z - e^{i\pi/6}}{z^6+1} \]
Since the numerator and denominator vanish, we apply L'Hospital's rule.
\[ = i2\pi \lim_{z\to e^{i\pi/6}} \frac{1}{6z^5} = \frac{i\pi}{3}\,e^{-i5\pi/6} = \frac{\pi}{3}\,e^{-i\pi/3} \]
Now we examine the integral along the circular arc, $\Gamma_R$. We use the maximum modulus integral bound to show that the value of the integral vanishes as $R\to\infty$.
\[ \left| \int_{\Gamma_R} \frac{1}{z^6+1}\,dz \right| \le \frac{\pi R}{3} \max_{z\in\Gamma_R}\left| \frac{1}{z^6+1} \right| = \frac{\pi R}{3}\,\frac{1}{R^6-1} \to 0 \quad\text{as }R\to\infty. \]
Now we are prepared to evaluate the original real integral.
\[ \oint_\Gamma \frac{1}{z^6+1}\,dz = \frac{\pi}{3}\,e^{-i\pi/3} \]
\[ \int_0^R \frac{1}{x^6+1}\,dx + \int_{\Gamma_R} \frac{1}{z^6+1}\,dz - \int_0^R \frac{1}{x^6+1}\,e^{i\pi/3}\,dx = \frac{\pi}{3}\,e^{-i\pi/3} \]
We take the limit as $R\to\infty$.
\[ \left( 1 - e^{i\pi/3} \right) \int_0^\infty \frac{1}{x^6+1}\,dx = \frac{\pi}{3}\,e^{-i\pi/3} \]
\[ \int_0^\infty \frac{1}{x^6+1}\,dx = \frac{\pi}{3}\,\frac{e^{-i\pi/3}}{1 - e^{i\pi/3}} \]
\[ \int_0^\infty \frac{1}{x^6+1}\,dx = \frac{\pi}{3}\,\frac{(1 - i\sqrt{3})/2}{1 - (1 + i\sqrt{3})/2} \]
\[ \int_0^\infty \frac{1}{x^6+1}\,dx = \frac{\pi}{3} \]
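Both methods agree with a direct numerical evaluation (our addition), using the compactification $x = t/(1-t)$:

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# x = t/(1-t): dx/(1 + x^6) = (1-t)^4 dt/((1-t)^6 + t^6) on (0, 1);
# the denominator has minimum 1/32 at t = 1/2.
f = lambda t: (1 - t) ** 4 / ((1 - t) ** 6 + t ** 6)
approx = simpson(f, 0.0, 1.0, 10000)
exact = math.pi / 3
print(approx, exact)
```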
Solution 15.33
First note that
\[ \cos(2\theta) \ge 1 - \frac{4\theta}{\pi}, \qquad 0 \le \theta \le \frac{\pi}{4}. \]
These two functions are plotted in Figure 15.15. To prove this inequality analytically, note that the two functions are equal at the endpoints of the interval and that $\cos(2\theta)$ is concave downward on the interval,
\[ \frac{d^2}{d\theta^2}\cos(2\theta) = -4\cos(2\theta) \le 0 \quad\text{for } 0 \le \theta \le \frac{\pi}{4}, \]
while $1 - 4\theta/\pi$ is linear.

Figure 15.15: $\cos(2\theta)$ and $1 - \frac{4\theta}{\pi}$

Let $C_R$ be the quarter circle of radius $R$ from $\theta = 0$ to $\theta = \pi/4$. The integral along this contour vanishes as $R\to\infty$.
\[ \left| \int_{C_R} e^{-z^2}\,dz \right| \le \int_0^{\pi/4} \left| e^{-(R e^{i\theta})^2} \right| \left| R i\,e^{i\theta} \right| d\theta \le \int_0^{\pi/4} R\,e^{-R^2\cos(2\theta)}\,d\theta \le \int_0^{\pi/4} R\,e^{-R^2(1 - 4\theta/\pi)}\,d\theta \]
\[ = \left[ \frac{\pi}{4R}\,e^{-R^2(1 - 4\theta/\pi)} \right]_0^{\pi/4} = \frac{\pi}{4R}\left( 1 - e^{-R^2} \right) \to 0 \quad\text{as }R\to\infty \]
Let $C$ be the boundary of the domain $0 < r < R$, $0 < \theta < \pi/4$. Since the integrand is analytic inside $C$, the integral along $C$ is zero. Taking the limit as $R\to\infty$, the integral from $r = 0$ to $\infty$ along $\theta = 0$ is equal to the integral from $r = 0$ to $\infty$ along $\theta = \pi/4$.
\[ \int_0^\infty e^{-x^2}\,dx = \int_0^\infty e^{-\left( \frac{1+i}{\sqrt{2}}\,x \right)^2}\,\frac{1+i}{\sqrt{2}}\,dx \]
\[ \int_0^\infty e^{-x^2}\,dx = \frac{1+i}{\sqrt{2}} \int_0^\infty e^{-ix^2}\,dx \]
\[ \int_0^\infty e^{-x^2}\,dx = \frac{1+i}{\sqrt{2}} \int_0^\infty \left( \cos(x^2) - i\sin(x^2) \right) dx \]
\[ \int_0^\infty e^{-x^2}\,dx = \frac{1}{\sqrt{2}}\left( \int_0^\infty \cos(x^2)\,dx + \int_0^\infty \sin(x^2)\,dx \right) + \frac{i}{\sqrt{2}}\left( \int_0^\infty \cos(x^2)\,dx - \int_0^\infty \sin(x^2)\,dx \right) \]
We equate the imaginary part of this equation to see that the integrals of $\cos(x^2)$ and $\sin(x^2)$ are equal.
\[ \int_0^\infty \cos(x^2)\,dx = \int_0^\infty \sin(x^2)\,dx \]
The real part of the equation then gives us the desired identity.
\[ \int_0^\infty \cos(x^2)\,dx = \int_0^\infty \sin(x^2)\,dx = \frac{1}{\sqrt{2}} \int_0^\infty e^{-x^2}\,dx \]
Solution 15.34
Consider the box contour $C$ that is the boundary of the rectangle $-R \le x \le R$, $0 \le y \le \pi$. There is a removable singularity at $z = 0$ and a first order pole at $z = i\pi$. By the residue theorem for principal values,
\[ \mathrm{p.v.}\oint_C \frac{z}{\sinh z}\,dz = i\pi \operatorname{Res}\left( \frac{z}{\sinh z}, i\pi \right) = i\pi \lim_{z\to i\pi} \frac{z(z - i\pi)}{\sinh z} = i\pi \lim_{z\to i\pi} \frac{2z - i\pi}{\cosh z} = \pi^2 \]
The integrals along the sides of the box vanish as $R\to\infty$.
\[ \left| \int_{\pm R}^{\pm R + i\pi} \frac{z}{\sinh z}\,dz \right| \le \pi \max_{z\in[\pm R, \pm R + i\pi]}\left| \frac{z}{\sinh z} \right| \le \pi\,\frac{R + \pi}{\sinh R} \to 0 \quad\text{as }R\to\infty \]
The value of the integrand on the top of the box is
\[ \frac{x + i\pi}{\sinh(x + i\pi)} = -\frac{x + i\pi}{\sinh x}. \]
Taking the limit as $R\to\infty$,
\[ \int_{-\infty}^\infty \frac{x}{\sinh x}\,dx + \mathrm{p.v.}\int_{-\infty}^\infty \frac{x + i\pi}{\sinh x}\,dx = \pi^2. \]
Note that
\[ \mathrm{p.v.}\int_{-\infty}^\infty \frac{1}{\sinh x}\,dx = 0 \]
as there is a first order pole at $x = 0$ and the integrand is odd.
\[ \int_{-\infty}^\infty \frac{x}{\sinh x}\,dx = \frac{\pi^2}{2} \]
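The result is easy to confirm numerically (our addition): the integrand is even with a removable singularity at the origin and decays like $x\,e^{-x}$.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def integrand(x):
    # x/sinh(x) with the removable singularity at x = 0 filled in.
    return 1.0 if abs(x) < 1e-10 else x / math.sinh(x)

approx = 2 * simpson(integrand, 0.0, 60.0, 20000)  # even integrand
exact = math.pi**2 / 2
print(approx, exact)
```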
Solution 15.35
First we evaluate
\[ \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1}\,dx. \]
Consider the rectangular contour in the positive direction with corners at $\pm R$ and $\pm R + i2\pi$. With the maximum modulus integral bound we see that the integrals on the vertical sides of the contour vanish as $R\to\infty$.
\[ \left| \int_R^{R+i2\pi} \frac{e^{az}}{e^z + 1}\,dz \right| \le 2\pi\,\frac{e^{aR}}{e^R - 1} \to 0 \quad\text{as }R\to\infty \]
\[ \left| \int_{-R+i2\pi}^{-R} \frac{e^{az}}{e^z + 1}\,dz \right| \le 2\pi\,\frac{e^{-aR}}{1 - e^{-R}} \to 0 \quad\text{as }R\to\infty \]
In the limit as $R$ tends to infinity, the integral on the rectangular contour is the sum of the integrals along the top and bottom sides.
\[ \oint_C \frac{e^{az}}{e^z + 1}\,dz = \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1}\,dx + \int_\infty^{-\infty} \frac{e^{a(x+i2\pi)}}{e^{x+i2\pi} + 1}\,dx \]
\[ \oint_C \frac{e^{az}}{e^z + 1}\,dz = \left( 1 - e^{i2\pi a} \right) \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1}\,dx \]
The only singularity of the integrand inside the contour is a first order pole at $z = i\pi$. We use the residue theorem to evaluate the integral.
\[ \oint_C \frac{e^{az}}{e^z + 1}\,dz = i2\pi \operatorname{Res}\left( \frac{e^{az}}{e^z + 1}, i\pi \right) = i2\pi \lim_{z\to i\pi} \frac{(z - i\pi)\,e^{az}}{e^z + 1} = i2\pi \lim_{z\to i\pi} \frac{a(z - i\pi)\,e^{az} + e^{az}}{e^z} = -i2\pi\,e^{i\pi a} \]
We equate the two results for the value of the contour integral.
\[ \left( 1 - e^{i2\pi a} \right) \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1}\,dx = -i2\pi\,e^{i\pi a} \]
\[ \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1}\,dx = \frac{i2\pi}{e^{i\pi a} - e^{-i\pi a}} \]
\[ \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1}\,dx = \frac{\pi}{\sin(\pi a)} \]
Now we derive the value of
\[ \int_{-\infty}^\infty \frac{\cosh(bx)}{\cosh x}\,dx. \]
First make the change of variables $x \to 2x$ in the previous result.
\[ \int_{-\infty}^\infty \frac{e^{2ax}}{e^{2x} + 1}\,2\,dx = \frac{\pi}{\sin(\pi a)} \]
\[ \int_{-\infty}^\infty \frac{2\,e^{(2a-1)x}}{e^x + e^{-x}}\,dx = \frac{\pi}{\sin(\pi a)} \]
Now we set $b = 2a - 1$.
\[ \int_{-\infty}^\infty \frac{e^{bx}}{\cosh x}\,dx = \frac{\pi}{\sin(\pi(b+1)/2)} = \frac{\pi}{\cos(\pi b/2)} \quad\text{for } -1 < b < 1 \]
Since the cosine is an even function, we also have,
\[ \int_{-\infty}^\infty \frac{e^{-bx}}{\cosh x}\,dx = \frac{\pi}{\cos(\pi b/2)} \quad\text{for } -1 < b < 1 \]
Adding these two equations and dividing by 2 yields the desired result.
\[ \int_{-\infty}^\infty \frac{\cosh(bx)}{\cosh x}\,dx = \frac{\pi}{\cos(\pi b/2)} \quad\text{for } -1 < b < 1 \]
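A quick numerical check of the final formula at $b = 1/2$ (our addition), where the predicted value is $\pi/\cos(\pi/4) = \pi\sqrt{2}$:

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

b = 0.5
# cosh(bx)/cosh(x) is even and decays like e^{-(1-b)|x|}.
approx = 2 * simpson(lambda x: math.cosh(b * x) / math.cosh(x), 0.0, 80.0, 20000)
exact = math.pi / math.cos(math.pi * b / 2)
print(approx, exact)
```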
Solution 15.36
Real-Valued Parameters. For $b = 0$, the integral has the value $\pi/a^2$. If $b$ is nonzero, then we can write the integral as
\[ F(a,b) = \frac{1}{b^2} \int_0^\pi \frac{d\theta}{(a/b + \cos\theta)^2}. \]
We define the new parameter $c = a/b$ and the function,
\[ G(c) = b^2 F(a,b) = \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2}. \]
If $-1 \le c \le 1$ then the integrand has a double pole on the path of integration. The integral diverges. Otherwise the integral exists. To evaluate the integral, we extend the range of integration to $(0\ldots2\pi)$ and make the change of variables $z = e^{i\theta}$ to integrate along the unit circle in the complex plane.
\[ G(c) = \frac{1}{2} \int_0^{2\pi} \frac{d\theta}{(c + \cos\theta)^2} \]
For this change of variables, we have,
\[ \cos\theta = \frac{z + z^{-1}}{2}, \qquad d\theta = \frac{dz}{iz}. \]
\[ G(c) = \frac{1}{2} \oint_C \frac{dz/(iz)}{\left( c + (z + z^{-1})/2 \right)^2} = -i2 \oint_C \frac{z}{\left( 2cz + z^2 + 1 \right)^2}\,dz = -i2 \oint_C \frac{z}{\left( z + c + \sqrt{c^2-1} \right)^2\left( z + c - \sqrt{c^2-1} \right)^2}\,dz \]
If $c > 1$, then $-c - \sqrt{c^2-1}$ is outside the unit circle and $-c + \sqrt{c^2-1}$ is inside the unit circle. The integrand has a second order pole inside the path of integration. We evaluate the integral with the residue theorem.
\[ G(c) = (-i2)(i2\pi) \operatorname{Res}\left( \frac{z}{\left( z + c + \sqrt{c^2-1} \right)^2\left( z + c - \sqrt{c^2-1} \right)^2}, z = -c + \sqrt{c^2-1} \right) \]
\[ = 4\pi \lim_{z\to -c+\sqrt{c^2-1}} \frac{d}{dz}\,\frac{z}{\left( z + c + \sqrt{c^2-1} \right)^2} \]
\[ = 4\pi \lim_{z\to -c+\sqrt{c^2-1}} \left( \frac{1}{\left( z + c + \sqrt{c^2-1} \right)^2} - \frac{2z}{\left( z + c + \sqrt{c^2-1} \right)^3} \right) \]
\[ = 4\pi \lim_{z\to -c+\sqrt{c^2-1}} \frac{c + \sqrt{c^2-1} - z}{\left( z + c + \sqrt{c^2-1} \right)^3} = 4\pi\,\frac{2c}{\left( 2\sqrt{c^2-1} \right)^3} = \frac{\pi c}{\sqrt{(c^2-1)^3}} \]
If $c < -1$, then $-c - \sqrt{c^2-1}$ is inside the unit circle and $-c + \sqrt{c^2-1}$ is outside the unit circle.
\[ G(c) = (-i2)(i2\pi) \operatorname{Res}\left( \frac{z}{\left( z + c + \sqrt{c^2-1} \right)^2\left( z + c - \sqrt{c^2-1} \right)^2}, z = -c - \sqrt{c^2-1} \right) \]
\[ = 4\pi \lim_{z\to -c-\sqrt{c^2-1}} \frac{d}{dz}\,\frac{z}{\left( z + c - \sqrt{c^2-1} \right)^2} \]
\[ = 4\pi \lim_{z\to -c-\sqrt{c^2-1}} \left( \frac{1}{\left( z + c - \sqrt{c^2-1} \right)^2} - \frac{2z}{\left( z + c - \sqrt{c^2-1} \right)^3} \right) \]
\[ = 4\pi \lim_{z\to -c-\sqrt{c^2-1}} \frac{c - \sqrt{c^2-1} - z}{\left( z + c - \sqrt{c^2-1} \right)^3} = 4\pi\,\frac{2c}{\left( -2\sqrt{c^2-1} \right)^3} = -\frac{\pi c}{\sqrt{(c^2-1)^3}} \]
Thus we see that
\[ G(c) \begin{cases} = \dfrac{\pi c}{\sqrt{(c^2-1)^3}} & \text{for } c > 1, \\[1ex] = -\dfrac{\pi c}{\sqrt{(c^2-1)^3}} & \text{for } c < -1, \\[1ex] \text{is divergent} & \text{for } -1 \le c \le 1. \end{cases} \]
In terms of $F(a,b)$, this is
\[ F(a,b) \begin{cases} = \dfrac{\pi a}{\sqrt{(a^2-b^2)^3}} & \text{for } a/b > 1, \\[1ex] = -\dfrac{\pi a}{\sqrt{(a^2-b^2)^3}} & \text{for } a/b < -1, \\[1ex] \text{is divergent} & \text{for } -1 \le a/b \le 1. \end{cases} \]
Complex-Valued Parameters. Consider
\[ G(c) = \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2}, \]
for complex $c$. Except for real-valued $c$ between $-1$ and $1$, the integral converges uniformly. We can interchange differentiation and integration. The derivative of $G(c)$ is
\[ G'(c) = \frac{d}{dc} \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2} = \int_0^\pi \frac{-2}{(c + \cos\theta)^3}\,d\theta \]
Thus we see that $G(c)$ is analytic in the complex plane with a cut on the real axis from $-1$ to $1$. The value of the function on the positive real axis for $c > 1$ is
\[ G(c) = \frac{\pi c}{\sqrt{(c^2-1)^3}}. \]
We use analytic continuation to determine $G(c)$ for complex $c$. By inspection we see that $G(c)$ is the branch of
\[ \frac{\pi c}{(c^2-1)^{3/2}}, \]
with a branch cut on the real axis from $-1$ to $1$ and which is real-valued and positive for real $c > 1$. Using $F(a,b) = G(c)/b^2$ we can determine $F$ for complex-valued $a$ and $b$.
Solution 15.37
First note that
\[ \int_{-\infty}^\infty \frac{\cos x}{e^x + e^{-x}}\,dx = \int_{-\infty}^\infty \frac{e^{ix}}{e^x + e^{-x}}\,dx \]
since $\sin x/(e^x + e^{-x})$ is an odd function. For the function
\[ f(z) = \frac{e^{iz}}{e^z + e^{-z}} \]
we have
\[ f(x + i\pi) = \frac{e^{ix - \pi}}{e^{x+i\pi} + e^{-x-i\pi}} = -e^{-\pi}\,\frac{e^{ix}}{e^x + e^{-x}} = -e^{-\pi} f(x). \]
Thus we consider the integral
\[ \oint_C \frac{e^{iz}}{e^z + e^{-z}}\,dz \]
where $C$ is the box contour with corners at $\pm R$ and $\pm R + i\pi$. We can evaluate this integral with the residue theorem. We can write the integrand as
\[ \frac{e^{iz}}{2\cosh z}. \]
We see that the integrand has first order poles at $z = i\pi(n + 1/2)$. The only pole inside the path of integration is at $z = i\pi/2$.
\[ \oint_C \frac{e^{iz}}{e^z + e^{-z}}\,dz = i2\pi \operatorname{Res}\left( \frac{e^{iz}}{e^z + e^{-z}}, z = \frac{i\pi}{2} \right) = i2\pi \lim_{z\to i\pi/2} \frac{(z - i\pi/2)\,e^{iz}}{e^z + e^{-z}} \]
\[ = i2\pi \lim_{z\to i\pi/2} \frac{e^{iz} + i(z - i\pi/2)\,e^{iz}}{e^z - e^{-z}} = i2\pi\,\frac{e^{-\pi/2}}{e^{i\pi/2} - e^{-i\pi/2}} = \pi\,e^{-\pi/2} \]
The integrals along the vertical sides of the box vanish as $R\to\infty$.
\[ \left| \int_{\pm R}^{\pm R + i\pi} \frac{e^{iz}}{e^z + e^{-z}}\,dz \right| \le \pi \max_{y\in[0\ldots\pi]}\left| \frac{1}{e^{\pm R + iy} + e^{\mp R - iy}} \right| \le \frac{\pi}{e^R - e^{-R}} = \frac{\pi}{2\sinh R} \to 0 \quad\text{as }R\to\infty \]
Taking the limit as $R\to\infty$, we have
\[ \int_{-\infty}^\infty \frac{e^{ix}}{e^x + e^{-x}}\,dx + \int_{\infty+i\pi}^{-\infty+i\pi} \frac{e^{iz}}{e^z + e^{-z}}\,dz = \pi\,e^{-\pi/2} \]
\[ \left( 1 + e^{-\pi} \right) \int_{-\infty}^\infty \frac{e^{ix}}{e^x + e^{-x}}\,dx = \pi\,e^{-\pi/2} \]
\[ \int_{-\infty}^\infty \frac{e^{ix}}{e^x + e^{-x}}\,dx = \frac{\pi}{e^{\pi/2} + e^{-\pi/2}} \]
Finally we have,
\[ \int_{-\infty}^\infty \frac{\cos x}{e^x + e^{-x}}\,dx = \frac{\pi}{e^{\pi/2} + e^{-\pi/2}}. \]
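A numerical check (our addition): the integrand is even (both $\cos x$ and $e^x + e^{-x}$ are even) and decays like $e^{-|x|}$, so a truncated quadrature converges quickly.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

approx = 2 * simpson(lambda x: math.cos(x) / (math.exp(x) + math.exp(-x)),
                     0.0, 60.0, 20000)
exact = math.pi / (math.exp(math.pi / 2) + math.exp(-math.pi / 2))
print(approx, exact)
```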
Definite Integrals Involving Sine and Cosine
Solution 15.38
1. Let $C$ be the positively oriented unit circle about the origin. We parametrize this contour.
\[ z = e^{i\theta}, \qquad dz = i\,e^{i\theta}\,d\theta, \qquad \theta \in (0\ldots2\pi) \]
We write $\sin\theta$ and the differential $d\theta$ in terms of $z$. Then we evaluate the integral with the Residue theorem.
\[ \int_0^{2\pi} \frac{1}{2 + \sin\theta}\,d\theta = \oint_C \frac{1}{2 + (z - 1/z)/(2i)}\,\frac{dz}{iz} = \oint_C \frac{2}{z^2 + i4z - 1}\,dz \]
\[ = \oint_C \frac{2}{\left( z + i\left( 2 + \sqrt{3} \right) \right)\left( z + i\left( 2 - \sqrt{3} \right) \right)}\,dz \]
\[ = i2\pi \operatorname{Res}\left( \frac{2}{\left( z + i\left( 2 + \sqrt{3} \right) \right)\left( z + i\left( 2 - \sqrt{3} \right) \right)}, z = -i\left( 2 - \sqrt{3} \right) \right) \]
\[ = i2\pi\,\frac{2}{i2\sqrt{3}} = \frac{2\pi}{\sqrt{3}} \]
2. First consider the case $a = 0$.
\[ \int_{-\pi}^\pi \cos(n\theta)\,d\theta = \begin{cases} 0 & \text{for } n \in \mathbb{Z}^+ \\ 2\pi & \text{for } n = 0 \end{cases} \]
Now we consider $|a| < 1$, $a \ne 0$. Since
\[ \frac{\sin(n\theta)}{1 - 2a\cos\theta + a^2} \]
is an even function,

wait — it is an odd function, so its integral vanishes and
\[ \int_{-\pi}^\pi \frac{\cos(n\theta)}{1 - 2a\cos\theta + a^2}\,d\theta = \int_{-\pi}^\pi \frac{e^{in\theta}}{1 - 2a\cos\theta + a^2}\,d\theta \]
Let $C$ be the positively oriented unit circle about the origin. We parametrize this contour.
\[ z = e^{i\theta}, \qquad dz = i\,e^{i\theta}\,d\theta, \qquad \theta \in (-\pi\ldots\pi) \]
We write the integrand and the differential $d\theta$ in terms of $z$. Then we evaluate the integral with the Residue theorem.
\[ \int_{-\pi}^\pi \frac{e^{in\theta}}{1 - 2a\cos\theta + a^2}\,d\theta = \oint_C \frac{z^n}{1 - a(z + 1/z) + a^2}\,\frac{dz}{iz} \]
\[ = -i \oint_C \frac{z^n}{-az^2 + (1 + a^2)z - a}\,dz \]
\[ = \frac{i}{a} \oint_C \frac{z^n}{z^2 - (a + 1/a)z + 1}\,dz \]
\[ = \frac{i}{a} \oint_C \frac{z^n}{(z - a)(z - 1/a)}\,dz \]
\[ = i2\pi\,\frac{i}{a} \operatorname{Res}\left( \frac{z^n}{(z - a)(z - 1/a)}, z = a \right) \]
\[ = -\frac{2\pi}{a}\,\frac{a^n}{a - 1/a} = \frac{2\pi a^n}{1 - a^2} \]
We write the value of the integral for $|a| < 1$ and $n \in \mathbb{Z}^{0+}$.
\[ \int_{-\pi}^\pi \frac{\cos(n\theta)}{1 - 2a\cos\theta + a^2}\,d\theta = \begin{cases} 2\pi & \text{for } a = 0,\ n = 0 \\[1ex] \dfrac{2\pi a^n}{1 - a^2} & \text{otherwise} \end{cases} \]
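This Poisson-kernel-type identity is easy to verify numerically (our addition); here at $n = 2$, $a = 1/2$ the predicted value is $2\pi(1/4)/(3/4) = 2\pi/3$.

```python
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

r, m = 0.5, 2  # a = 1/2, n = 2 in the formula above
f = lambda t: math.cos(m * t) / (1 - 2 * r * math.cos(t) + r * r)
approx = simpson(f, -math.pi, math.pi, 20000)
exact = 2 * math.pi * r**m / (1 - r * r)
print(approx, exact)
```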
Solution 15.39
Convergence. We consider the integral
\[ I(\alpha) = \int_0^{\pi} \frac{\cos(n\theta)}{\cos\theta - \cos\alpha}\,d\theta = \pi \frac{\sin(n\alpha)}{\sin\alpha}. \]
We assume that α is real-valued. If α is an integer multiple of π, then the integrand has a second order pole on the path of integration and the principal value of the integral does not exist. If α is real, but not an integer multiple of π, then the integrand has a first order pole on the path of integration. The integral diverges, but its principal value exists.
Contour Integration. We will evaluate the integral for real α that is not an integer multiple of π.
\[ I(\alpha) = \int_0^{\pi} \frac{\cos(n\theta)}{\cos\theta - \cos\alpha}\,d\theta = \frac{1}{2} \int_0^{2\pi} \frac{\cos(n\theta)}{\cos\theta - \cos\alpha}\,d\theta = \frac{1}{2} \int_0^{2\pi} \frac{e^{in\theta}}{\cos\theta - \cos\alpha}\,d\theta \]
We make the change of variables z = e^{iθ}.
\[ I(\alpha) = \frac{1}{2} \oint_C \frac{z^n}{(z + 1/z)/2 - \cos\alpha} \frac{dz}{iz} = -i \oint_C \frac{z^n}{\left(z - e^{i\alpha}\right)\left(z - e^{-i\alpha}\right)}\,dz \]
Now we use the residue theorem. Both first order poles lie on the contour, so the principal value picks up iπ times each residue.
\[ I(\alpha) = -i\,(i\pi) \left[ \operatorname{Res}\left( \frac{z^n}{\left(z - e^{i\alpha}\right)\left(z - e^{-i\alpha}\right)},\ z = e^{i\alpha} \right) + \operatorname{Res}\left( \frac{z^n}{\left(z - e^{i\alpha}\right)\left(z - e^{-i\alpha}\right)},\ z = e^{-i\alpha} \right) \right] \]
\[ = \pi \left[ \lim_{z \to e^{i\alpha}} \frac{z^n}{z - e^{-i\alpha}} + \lim_{z \to e^{-i\alpha}} \frac{z^n}{z - e^{i\alpha}} \right] \]
\[ = \pi \left[ \frac{e^{in\alpha}}{e^{i\alpha} - e^{-i\alpha}} + \frac{e^{-in\alpha}}{e^{-i\alpha} - e^{i\alpha}} \right] = \pi\,\frac{e^{in\alpha} - e^{-in\alpha}}{e^{i\alpha} - e^{-i\alpha}} = \pi\,\frac{\sin(n\alpha)}{\sin\alpha} \]
\[ I(\alpha) = \int_0^{\pi} \frac{\cos(n\theta)}{\cos\theta - \cos\alpha}\,d\theta = \pi \frac{\sin(n\alpha)}{\sin\alpha}. \]
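This principal value can be checked numerically (an editorial sketch, with arbitrary sample values n = 3, α = 1): pairing points symmetrically about the pole cancels the 1/(θ − α) parts exactly, leaving a smooth integrand.

```python
import math

def simpson(f, a, b, n=20000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

n_, alpha = 3, 1.0   # sample values; the simple pole sits at theta = alpha
f = lambda t: math.cos(n_ * t) / (math.cos(t) - math.cos(alpha))

# Principal value: on (alpha - delta, alpha + delta) pair up symmetric points,
# so the odd singular parts cancel and only a smooth remainder is integrated.
delta = 0.5
paired = lambda s: f(alpha + s) + f(alpha - s)
pv = (simpson(f, 0.0, alpha - delta)
      + simpson(paired, 1e-7, delta)   # skip a negligible neighborhood of 0
      + simpson(f, alpha + delta, math.pi))
exact = math.pi * math.sin(n_ * alpha) / math.sin(alpha)
```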
Solution 15.40
Consider the integral
\[ \int_0^1 \frac{x^2}{(1 + x^2)\sqrt{1 - x^2}}\,dx. \]
We make the change of variables x = sin ξ to obtain,
\[ \int_0^{\pi/2} \frac{\sin^2\xi}{\left(1 + \sin^2\xi\right)\sqrt{1 - \sin^2\xi}}\,\cos\xi\,d\xi \]
\[ \int_0^{\pi/2} \frac{\sin^2\xi}{1 + \sin^2\xi}\,d\xi \]
\[ \int_0^{\pi/2} \frac{1 - \cos(2\xi)}{3 - \cos(2\xi)}\,d\xi \]
\[ \frac{1}{4} \int_0^{2\pi} \frac{1 - \cos\theta}{3 - \cos\theta}\,d\theta \]
Now we make the change of variables z = e^{iθ} to obtain a contour integral on the unit circle.
\[ \frac{1}{4} \oint_C \frac{1 - (z + 1/z)/2}{3 - (z + 1/z)/2} \left( \frac{-i}{z} \right) dz \]
\[ \frac{-i}{4} \oint_C \frac{(z - 1)^2}{z\left(z - 3 + 2\sqrt{2}\right)\left(z - 3 - 2\sqrt{2}\right)}\,dz \]
There are two first order poles inside the contour. The value of the integral is
\[ i2\pi\,\frac{-i}{4} \left[ \operatorname{Res}\left( \frac{(z - 1)^2}{z\left(z - 3 + 2\sqrt{2}\right)\left(z - 3 - 2\sqrt{2}\right)},\ z = 0 \right) + \operatorname{Res}\left( \frac{(z - 1)^2}{z\left(z - 3 + 2\sqrt{2}\right)\left(z - 3 - 2\sqrt{2}\right)},\ z = 3 - 2\sqrt{2} \right) \right] \]
\[ = \frac{\pi}{2} \left[ \lim_{z \to 0} \frac{(z - 1)^2}{\left(z - 3 + 2\sqrt{2}\right)\left(z - 3 - 2\sqrt{2}\right)} + \lim_{z \to 3 - 2\sqrt{2}} \frac{(z - 1)^2}{z\left(z - 3 - 2\sqrt{2}\right)} \right] = \frac{\pi}{2}\left[ 1 - \frac{1}{\sqrt{2}} \right]. \]
\[ \int_0^1 \frac{x^2}{(1 + x^2)\sqrt{1 - x^2}}\,dx = \frac{\left(2 - \sqrt{2}\right)\pi}{4} \]
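An editorial numerical check: after the substitution x = sin ξ used above, the integrand is smooth on [0, π/2], so an ordinary quadrature rule applies directly.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# Smooth form of the integral after the substitution x = sin(xi).
numeric = simpson(lambda t: math.sin(t)**2 / (1 + math.sin(t)**2), 0.0, math.pi / 2)
exact = (2 - math.sqrt(2)) * math.pi / 4
```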
Infinite Sums
Solution 15.41
From Result 15.10.1 we see that the sum of the residues of π cot(πz)/z⁴ is zero. This function has simple poles at the nonzero integers z = n with residue 1/n⁴. There is a fifth order pole at z = 0. Finding the residue with the formula
\[ \frac{1}{4!} \lim_{z \to 0} \frac{d^4}{dz^4} \left( \pi z \cot(\pi z) \right) \]
would be a real pain. After doing the differentiation, we would have to apply L'Hospital's rule multiple times. A better way of finding the residue is with the Laurent series expansion of the function. Note that
\[ \frac{1}{\sin(\pi z)} = \frac{1}{\pi z - (\pi z)^3/6 + (\pi z)^5/120 - \cdots} = \frac{1}{\pi z}\,\frac{1}{1 - (\pi z)^2/6 + (\pi z)^4/120 - \cdots} \]
\[ = \frac{1}{\pi z} \left( 1 + \left( \frac{\pi^2}{6} z^2 - \frac{\pi^4}{120} z^4 + \cdots \right) + \left( \frac{\pi^2}{6} z^2 - \frac{\pi^4}{120} z^4 + \cdots \right)^2 + \cdots \right). \]
Now we find the z^{-1} term in the Laurent series expansion of π cot(πz)/z⁴.
\[ \frac{\pi \cos(\pi z)}{z^4 \sin(\pi z)} = \frac{\pi}{z^4} \left( 1 - \frac{\pi^2}{2} z^2 + \frac{\pi^4}{24} z^4 - \cdots \right) \frac{1}{\pi z} \left( 1 + \left( \frac{\pi^2}{6} z^2 - \frac{\pi^4}{120} z^4 + \cdots \right) + \left( \frac{\pi^2}{6} z^2 - \frac{\pi^4}{120} z^4 + \cdots \right)^2 + \cdots \right) \]
\[ = \frac{1}{z^5} \left( \cdots + \left( -\frac{\pi^4}{120} + \frac{\pi^4}{36} - \frac{\pi^4}{12} + \frac{\pi^4}{24} \right) z^4 + \cdots \right) = \cdots - \frac{\pi^4}{45}\,\frac{1}{z} + \cdots \]
Thus the residue at z = 0 is -π⁴/45. Summing the residues,
\[ \sum_{n=-\infty}^{-1} \frac{1}{n^4} - \frac{\pi^4}{45} + \sum_{n=1}^{\infty} \frac{1}{n^4} = 0. \]
\[ \sum_{n=1}^{\infty} \frac{1}{n^4} = \frac{\pi^4}{90} \]
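As an editorial check, a partial sum confirms the value: the tail beyond N is bounded by ∫_N^∞ x^{-4} dx = 1/(3N³), which is below machine precision for N = 10⁵.

```python
import math

# Partial sum of the series; the neglected tail is smaller than ~3e-16.
partial = sum(1.0 / n**4 for n in range(1, 100001))
exact = math.pi**4 / 90
```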
Solution 15.42
For this problem we will use the following result: If
\[ \lim_{|z| \to \infty} |z f(z)| = 0, \]
then the sum of all the residues of π cot(πz) f(z) is zero. If in addition, f(z) is analytic at z = n ∈ ℤ then
\[ \sum_{n=-\infty}^{\infty} f(n) = -\left( \text{sum of the residues of } \pi\cot(\pi z) f(z) \text{ at the poles of } f(z) \right). \]
We assume that α is not an integer, otherwise the sum is not defined. Consider f(z) = 1/(z² − α²). Since
\[ \lim_{|z| \to \infty} \left| z\,\frac{1}{z^2 - \alpha^2} \right| = 0, \]
and f(z) is analytic at z = n, n ∈ ℤ, we have
\[ \sum_{n=-\infty}^{\infty} \frac{1}{n^2 - \alpha^2} = -\left( \text{sum of the residues of } \pi\cot(\pi z) f(z) \text{ at the poles of } f(z) \right). \]
f(z) has first order poles at z = ±α.
\[ \sum_{n=-\infty}^{\infty} \frac{1}{n^2 - \alpha^2} = -\operatorname{Res}\left( \frac{\pi\cot(\pi z)}{z^2 - \alpha^2},\ z = \alpha \right) - \operatorname{Res}\left( \frac{\pi\cot(\pi z)}{z^2 - \alpha^2},\ z = -\alpha \right) \]
\[ = -\lim_{z \to \alpha} \frac{\pi\cot(\pi z)}{z + \alpha} - \lim_{z \to -\alpha} \frac{\pi\cot(\pi z)}{z - \alpha} = -\frac{\pi\cot(\pi\alpha)}{2\alpha} - \frac{\pi\cot(\pi\alpha)}{2\alpha} \]
\[ \sum_{n=-\infty}^{\infty} \frac{1}{n^2 - \alpha^2} = -\frac{\pi\cot(\pi\alpha)}{\alpha} \]
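An editorial numerical check of this sum, with the arbitrary sample value α = 1/4: the symmetric partial sum converges like 2/N, so N = 10⁵ gives roughly five correct digits.

```python
import math

alpha = 0.25          # any non-integer value works
N = 100000
total = sum(1.0 / (n**2 - alpha**2) for n in range(-N, N + 1))
exact = -math.pi / (math.tan(math.pi * alpha) * alpha)   # -pi cot(pi a)/a
```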
Part IV
Ordinary Differential Equations
Chapter 16
First Order Differential Equations
Don't show me your technique. Show me your heart.
-Tetsuyasu Uekuma
16.1 Notation
A differential equation is an equation involving a function, its derivatives, and independent variables. If there is only one independent variable, then it is an ordinary differential equation. Identities such as
\[ \frac{d}{dx}\left( f^2(x) \right) = 2 f(x) f'(x), \quad \text{and} \quad \frac{dy}{dx}\frac{dx}{dy} = 1 \]
are not differential equations.
The order of a differential equation is the order of the highest derivative. The following equations are first, second and third order, respectively.
\[ y' = x y^2 \]
\[ y'' + 3x y' + 2y = x^2 \]
\[ y''' = y'' y \]
The degree of a differential equation is the highest power of the highest derivative in the equation. The following equations are first, second and third degree, respectively.
\[ y' - 3y = \sin x \]
\[ (y'')^2 + 2x y = e^x \]
\[ (y')^3 + y^5 = 0 \]
An equation is said to be linear if it is linear in the dependent variable.
\[ y'' \cos x + x^2 y = 0 \text{ is a linear differential equation.} \]
\[ y' + x y^2 = 0 \text{ is a nonlinear differential equation.} \]
A differential equation is homogeneous if it has no terms that are functions of the independent variable alone. Thus an inhomogeneous equation is one in which there are terms that are functions of the independent variable alone.
\[ y'' + x y + y = 0 \text{ is a homogeneous equation.} \]
\[ y' + y + x^2 = 0 \text{ is an inhomogeneous equation.} \]
A first order differential equation may be written in terms of differentials. Recall that for the function y(x) the differential dy is defined dy = y'(x) dx. Thus the differential equations
\[ y' = x^2 y \quad \text{and} \quad y' + x y^2 = \sin(x) \]
can be denoted:
\[ dy = x^2 y\,dx \quad \text{and} \quad dy + x y^2\,dx = \sin(x)\,dx. \]
A solution of a differential equation is a function which when substituted into the equation yields an identity. For example, y = x log x is a solution of
\[ y' - \frac{y}{x} = 1 \]
and y = c e^x is a solution of
\[ y'' - y = 0 \]
for any value of the parameter c.
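Both claims are easy to verify by direct substitution; the following editorial sketch does so numerically with central differences (step size and sample points are arbitrary choices).

```python
import math

h = 1e-5  # central-difference step

d1 = lambda f, x: (f(x + h) - f(x - h)) / (2 * h)          # f'(x), O(h^2)
d2 = lambda f, x: (f(x + h) - 2 * f(x) + f(x - h)) / h**2  # f''(x), O(h^2)

# y = x log x should satisfy y' - y/x = 1.
y = lambda x: x * math.log(x)
res_first = max(abs(d1(y, x) - y(x) / x - 1.0) for x in (0.5, 1.0, 2.0, 5.0))

# y = c e^x should satisfy y'' - y = 0 for any c.
c = 1.7   # arbitrary parameter value
w = lambda x: c * math.exp(x)
res_second = max(abs(d2(w, x) - w(x)) for x in (-1.0, 0.0, 1.0))
```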
16.2 One Parameter Families of Functions
Consider the equation
\[ F(x, y(x); c) = 0, \tag{16.1} \]
which implicitly defines a one-parameter family of functions y(x). (We assume that F has a non-trivial dependence on y, that is F_y ≠ 0.) Differentiating this equation with respect to x yields
\[ F_x + F_y y' = 0. \]
This gives us two equations involving the independent variable x, the dependent variable y(x) and its derivative, and the parameter c. If we algebraically eliminate c between the two equations, the eliminant will be a first order differential equation for y(x). Thus we see that every equation of the form (16.1) defines a one-parameter family of functions y(x) which satisfy a first order differential equation. This y(x) is the primitive of the differential equation. Later we will discuss why y(x) is the general solution of the differential equation.
Example 16.2.1 Consider the family of circles of radius c centered about the origin,
\[ x^2 + y^2 = c^2. \]
Differentiating this yields,
\[ 2x + 2y y' = 0. \]
It is trivial to eliminate the parameter and obtain a differential equation for the family of circles.
\[ x + y y' = 0 \]
We can see the geometric meaning in this equation by writing it in the form
\[ y' = -\frac{x}{y}. \]
The slope of the tangent to a circle at a point is the negative of the cotangent of the angle.
Example 16.2.2 Consider the one-parameter family of functions,
\[ y(x) = f(x) + c g(x), \]
where f(x) and g(x) are known functions. The derivative is
\[ y' = f' + c g'. \]
Eliminating the parameter yields
\[ g y' - g' y = g f' - g' f \]
\[ y' - \frac{g'}{g} y = f' - \frac{g'}{g} f. \]
Thus we see that y(x) = f(x) + c g(x) satisfies a first order linear differential equation.
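The parameter c really does drop out of the eliminant. An editorial check with sample choices f = sin and g = exp (any smooth pair would do):

```python
import math

c = 2.5   # family parameter; it should cancel from the relation below
f, fp = math.sin, math.cos       # f and f'
g, gp = math.exp, math.exp       # g and g'
y  = lambda x: f(x) + c * g(x)
yp = lambda x: fp(x) + c * gp(x)

# Residual of g y' - g' y = g f' - g' f at a few sample points.
residual = max(
    abs(g(x) * yp(x) - gp(x) * y(x) - (g(x) * fp(x) - gp(x) * f(x)))
    for x in (-1.0, 0.0, 0.5, 2.0)
)
```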
We know that every one-parameter family of functions satisfies a first order differential equation. The converse is true as well.

Result 16.2.1 Every first order differential equation has a one-parameter family of solutions, y(x), defined by an equation of the form:
\[ F(x, y(x); c) = 0. \]
This y(x) is called the general solution. If the equation is linear then the general solution expresses the totality of solutions of the differential equation. If the equation is nonlinear, there may be other special singular solutions, which do not depend on a parameter.

This is strictly an existence result. It does not say that the general solution of a first order differential equation can be determined by some method, it just says that it exists. There is no method for solving the general first order differential equation. However, there are some special forms that are soluble. We will devote the rest of this chapter to studying these forms.
16.3 Exact Equations
Any first order ordinary differential equation of the first degree can be written as the total differential equation,
\[ P(x, y)\,dx + Q(x, y)\,dy = 0. \]
If this equation can be integrated directly, that is if there is a primitive, u(x, y), such that
\[ du = P\,dx + Q\,dy, \]
then this equation is called exact. The (implicit) solution of the differential equation is
\[ u(x, y) = c, \]
where c is an arbitrary constant. Since the differential of a function, u(x, y), is
\[ du \equiv \frac{\partial u}{\partial x}\,dx + \frac{\partial u}{\partial y}\,dy, \]
P and Q are the partial derivatives of u:
\[ P(x, y) = \frac{\partial u}{\partial x}, \quad Q(x, y) = \frac{\partial u}{\partial y}. \]
In an alternate notation, the differential equation
\[ P(x, y) + Q(x, y) \frac{dy}{dx} = 0, \tag{16.2} \]
is exact if there is a primitive u(x, y) such that
\[ \frac{du}{dx} \equiv \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y}\frac{dy}{dx} = P(x, y) + Q(x, y) \frac{dy}{dx}. \]
The solution of the differential equation is u(x, y) = c.
Example 16.3.1
\[ x + y \frac{dy}{dx} = 0 \]
is an exact differential equation since
\[ \frac{d}{dx}\left( \frac{1}{2}\left(x^2 + y^2\right) \right) = x + y \frac{dy}{dx} \]
The solution of the differential equation is
\[ \frac{1}{2}\left(x^2 + y^2\right) = c. \]
Example 16.3.2 Let f(x) and g(x) be known functions.
\[ g(x) y' + g'(x) y = f(x) \]
is an exact differential equation since
\[ \frac{d}{dx}\left( g(x) y(x) \right) = g y' + g' y. \]
The solution of the differential equation is
\[ g(x) y(x) = \int f(x)\,dx + c \]
\[ y(x) = \frac{1}{g(x)} \int f(x)\,dx + \frac{c}{g(x)}. \]
A necessary condition for exactness. The solution of the exact equation P + Q y' = 0 is u = c where u is the primitive of the equation, du/dx = P + Q y'. At present the only method we have for determining the primitive is guessing. This is fine for simple equations, but for more difficult cases we would like a method more concrete than divine inspiration. As a first step toward this goal we determine a criterion for determining if an equation is exact.
Consider the exact equation,
\[ P + Q y' = 0, \]
with primitive u, where we assume that the functions P and Q are continuously differentiable. Since the mixed partial derivatives of u are equal,
\[ \frac{\partial^2 u}{\partial x \partial y} = \frac{\partial^2 u}{\partial y \partial x}, \]
a necessary condition for exactness is
\[ \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}. \]
641
A sucient condition for exactness. This necessary condition for exactness is also a sucient condition.
We demonstrate this by deriving the general solution of (16.2). Assume that P +Qy
t
= 0 is not necessarily exact,
but satises the condition P
y
= Q
x
. If the equation has a primitive,
du
dx

u
x
+
u
y
dy
dx
= P(x, y) +Q(x, y)
dy
dx
,
then it satises
u
x
= P,
u
y
= Q. (16.3)
Integrating the rst equation of (16.3), we see that the primitive has the form
u(x, y) =
_
x
x
0
P(, y) d +f(y),
for some f(y). Now we substitute this form into the second equation of (16.3).
u
y
= Q(x, y)
_
x
x
0
P
y
(, y) d +f
t
(y) = Q(x, y)
Now we use the condition P
y
= Q
x
.
_
x
x
0
Q
x
(, y) d +f
t
(y) = Q(x, y)
Q(x, y) Q(x
0
, y) +f
t
(y) = Q(x, y)
f
t
(y) = Q(x
0
, y)
f(y) =
_
y
y
0
Q(x
0
, ) d
642
Thus we see that
u =
_
x
x
0
P(, y) d +
_
y
y
0
Q(x
0
, ) d
is a primitive of the derivative; the equation is exact. The solution of the dierential equation is
_
x
x
0
P(, y) d +
_
y
y
0
Q(x
0
, ) d = c.
Even though there are three arbitrary constants: x
0
, y
0
and c, the solution is a one-parameter family. This is
because changing x
0
or y
0
only changes the left side by an additive constant.
Result 16.3.1 Any first order differential equation of the first degree can be written in the form
\[ P(x, y) + Q(x, y) \frac{dy}{dx} = 0. \]
This equation is exact if and only if
\[ P_y = Q_x. \]
In this case the solution of the differential equation is given by
\[ \int_{x_0}^{x} P(\xi, y)\,d\xi + \int_{y_0}^{y} Q(x_0, \psi)\,d\psi = c. \]
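The construction above can be exercised on a concrete example. The following editorial sketch uses the hypothetical exact equation with primitive u = x²y + y² (so P = u_x = 2xy, Q = u_y = x² + 2y), checks P_y = Q_x, and rebuilds u from the two-integral formula with x₀ = y₀ = 0.

```python
import math

# Hypothetical exact equation: u(x, y) = x^2 y + y^2, P = u_x, Q = u_y.
P = lambda x, y: 2 * x * y
Q = lambda x, y: x**2 + 2 * y

# Necessary condition P_y = Q_x (both equal 2x), via central differences.
h = 1e-5
Py = lambda x, y: (P(x, y + h) - P(x, y - h)) / (2 * h)
Qx = lambda x, y: (Q(x + h, y) - Q(x - h, y)) / (2 * h)
cond = max(abs(Py(x, y) - Qx(x, y)) for x, y in [(1.0, 2.0), (-0.5, 3.0)])

def simpson(f, a, b, n=200):
    """Composite Simpson's rule with n (even) subintervals."""
    hh = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * hh)
    return s * hh / 3

# Primitive from the formula with x0 = y0 = 0.
def u(x, y):
    return simpson(lambda xi: P(xi, y), 0.0, x) + simpson(lambda psi: Q(0.0, psi), 0.0, y)

diff = abs(u(1.5, 2.0) - (1.5**2 * 2.0 + 2.0**2))
```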
16.3.1 Separable Equations
Any differential equation that can be written in the form
\[ P(x) + Q(y) y' = 0 \]
is a separable equation, (because the dependent and independent variables are separated). We can obtain an implicit solution by integrating with respect to x.
\[ \int P(x)\,dx + \int Q(y) \frac{dy}{dx}\,dx = c \]
\[ \int P(x)\,dx + \int Q(y)\,dy = c \]

Result 16.3.2 The general solution to the separable equation P(x) + Q(y) y' = 0 is
\[ \int P(x)\,dx + \int Q(y)\,dy = c \]
Example 16.3.3 Consider the equation y' = x y².
\[ \frac{dy}{dx} = x y^2 \]
\[ y^{-2}\,dy = x\,dx \]
\[ \int y^{-2}\,dy = \int x\,dx + c \]
\[ -y^{-1} = \frac{1}{2} x^2 + c \]
\[ y = \frac{-1}{\frac{1}{2} x^2 + c} \]
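An editorial spot check that the family just found solves y' = xy², with an arbitrary value of c and a numerical derivative:

```python
import math

c = 1.0
y = lambda x: -1.0 / (0.5 * x**2 + c)
h = 1e-5
yp = lambda x: (y(x + h) - y(x - h)) / (2 * h)   # central difference

residual = max(abs(yp(x) - x * y(x)**2) for x in (-2.0, -0.5, 0.0, 1.0, 3.0))
```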
Example 16.3.4 The equation
\[ y' = y - y^2, \]
is separable.
\[ \frac{y'}{y - y^2} = 1 \]
We expand in partial fractions and integrate.
\[ \left( \frac{1}{y} - \frac{1}{y - 1} \right) y' = 1 \]
\[ \log(y) - \log(y - 1) = x + c \]
Then we solve for y(x).
\[ \log\left( \frac{y}{y - 1} \right) = x + c \]
\[ \frac{y}{y - 1} = e^{x + c} \]
\[ y = \frac{e^{x + c}}{e^{x + c} - 1} \]
Finally we substitute a = e^{-c} to write the solution in a nice form.
\[ y = \frac{1}{1 - a\,e^{-x}} \]
16.3.2 Homogeneous Coefficient Equations
Euler's Theorem on Homogeneous Functions. The function F(x, y) is homogeneous of degree n if
\[ F(\lambda x, \lambda y) = \lambda^n F(x, y). \]
From this definition we see that
\[ F(x, y) = x^n F\left( 1, \frac{y}{x} \right). \]
(Just formally substitute 1/x for λ.) For example,
\[ x y^2, \quad \frac{x^2 y + 2 y^3}{x + y}, \quad x \cos(y/x) \]
are homogeneous functions of degrees 3, 2 and 1, respectively.
Euler's theorem for a homogeneous function of degree n is:
\[ x F_x + y F_y = n F. \]
To prove this, we define ξ = λx, ψ = λy. From the definition of homogeneous functions, we have
\[ F(\xi, \psi) = \lambda^n F(x, y). \]
We differentiate this equation with respect to λ.
\[ \frac{\partial F(\xi, \psi)}{\partial \xi}\frac{\partial \xi}{\partial \lambda} + \frac{\partial F(\xi, \psi)}{\partial \psi}\frac{\partial \psi}{\partial \lambda} = n \lambda^{n-1} F(x, y) \]
\[ x F_\xi + y F_\psi = n \lambda^{n-1} F(x, y) \]
Setting λ = 1, (and hence ξ = x, ψ = y), proves Euler's theorem.

Result 16.3.3 Euler's Theorem. If F(x, y) is a homogeneous function of degree n, then
\[ x F_x + y F_y = n F. \]
Homogeneous Coefficient Differential Equations. If the coefficient functions P(x, y) and Q(x, y) are homogeneous of degree n then the differential equation,
\[ P(x, y) + Q(x, y) \frac{dy}{dx} = 0, \]
is called a homogeneous coefficient equation. They are often referred to as simply homogeneous equations. We can write the equation in the form,
\[ x^n P\left( 1, \frac{y}{x} \right) + x^n Q\left( 1, \frac{y}{x} \right) \frac{dy}{dx} = 0, \]
\[ P\left( 1, \frac{y}{x} \right) + Q\left( 1, \frac{y}{x} \right) \frac{dy}{dx} = 0. \]
This suggests the change of dependent variable u(x) = y(x)/x.
\[ P(1, u) + Q(1, u)\left( u + x \frac{du}{dx} \right) = 0 \]
This equation is separable.
\[ P(1, u) + u\,Q(1, u) + x\,Q(1, u) \frac{du}{dx} = 0 \]
\[ \frac{1}{x} + \frac{Q(1, u)}{P(1, u) + u\,Q(1, u)}\frac{du}{dx} = 0 \]
\[ \log x + \int \frac{1}{u + P(1, u)/Q(1, u)}\,du = c \]
By substituting log c for c, we can write this in the form,
\[ \int \frac{1}{u + P(1, u)/Q(1, u)}\,du = \log\left( \frac{c}{x} \right). \]
Example 16.3.5 Consider the homogeneous coefficient equation
\[ x^2 - y^2 + x y \frac{dy}{dx} = 0. \]
The solution for u(x) = y(x)/x is determined by
\[ \int \frac{1}{u + \frac{1 - u^2}{u}}\,du = \log\left( \frac{c}{x} \right) \]
\[ \int u\,du = \log\left( \frac{c}{x} \right) \]
\[ \frac{1}{2} u^2 = \log\left( \frac{c}{x} \right) \]
\[ u = \pm\sqrt{2 \log(c/x)} \]
Thus the solution of the differential equation is
\[ y = \pm x \sqrt{2 \log(c/x)} \]
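That this family really solves the equation can be spot-checked numerically. The following editorial sketch takes the + branch, an arbitrary c = 4, and sample points inside the domain 0 < x < c where log(c/x) > 0:

```python
import math

c = 4.0
y = lambda x: x * math.sqrt(2.0 * math.log(c / x))   # valid for 0 < x < c
h = 1e-6
yp = lambda x: (y(x + h) - y(x - h)) / (2 * h)       # central difference

# Residual of x^2 - y^2 + x y y' = 0 at a few sample points.
residual = max(abs(x**2 - y(x)**2 + x * y(x) * yp(x)) for x in (0.5, 1.0, 2.0))
```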
Result 16.3.4 Homogeneous Coefficient Differential Equations. If P(x, y) and Q(x, y) are homogeneous functions of degree n, then the equation
\[ P(x, y) + Q(x, y) \frac{dy}{dx} = 0 \]
is made separable by the change of dependent variable u(x) = y(x)/x. The solution is determined by
\[ \int \frac{1}{u + P(1, u)/Q(1, u)}\,du = \log\left( \frac{c}{x} \right). \]
16.4 The First Order, Linear Differential Equation
16.4.1 Homogeneous Equations
The first order, linear, homogeneous equation has the form
\[ \frac{dy}{dx} + p(x) y = 0. \]
Note that this equation is separable.
\[ \frac{y'}{y} = -p(x) \]
\[ \log(y) = -\int p(x)\,dx + a \]
\[ y = e^{-\int p(x)\,dx + a} \]
\[ y = c\,e^{-\int p(x)\,dx} \]
Example 16.4.1 Consider the equation
\[ \frac{dy}{dx} + \frac{1}{x} y = 0. \]
\[ y(x) = c\,e^{-\int 1/x\,dx} \]
\[ y(x) = c\,e^{-\log x} \]
\[ y(x) = \frac{c}{x} \]
16.4.2 Inhomogeneous Equations
The first order, linear, inhomogeneous differential equation has the form
\[ \frac{dy}{dx} + p(x) y = f(x). \tag{16.4} \]
This equation is not separable. Note that it is similar to the exact equation we solved in Example 16.3.2,
\[ g(x) y'(x) + g'(x) y(x) = f(x). \]
To solve Equation 16.4, we multiply by an integrating factor. Multiplying a differential equation by its integrating factor changes it to an exact equation. Multiplying Equation 16.4 by the function, I(x), yields,
\[ I(x) \frac{dy}{dx} + p(x) I(x) y = f(x) I(x). \]
In order that I(x) be an integrating factor, it must satisfy
\[ \frac{d}{dx} I(x) = p(x) I(x). \]
This is a first order, linear, homogeneous equation with the solution
\[ I(x) = c\,e^{\int p(x)\,dx}. \]
This is an integrating factor for any constant c. For simplicity we will choose c = 1.
To solve Equation 16.4 we multiply by the integrating factor and integrate. Let P(x) = ∫ p(x) dx.
\[ e^{P(x)} \frac{dy}{dx} + p(x)\,e^{P(x)} y = e^{P(x)} f(x) \]
\[ \frac{d}{dx}\left( e^{P(x)} y \right) = e^{P(x)} f(x) \]
\[ y = e^{-P(x)} \int e^{P(x)} f(x)\,dx + c\,e^{-P(x)} \]
\[ y \equiv y_p + c\,y_h \]
Note that the general solution is the sum of a particular solution, y_p, that satisfies y' + p(x) y = f(x), and an arbitrary constant times a homogeneous solution, y_h, that satisfies y' + p(x) y = 0.
Example 16.4.2 Consider the differential equation
\[ y' + \frac{1}{x} y = x^2. \]
The integrating factor is
\[ I(x) = \exp\left( \int \frac{1}{x}\,dx \right) = e^{\log x} = x. \]
Multiplying by the integrating factor and integrating,
\[ \frac{d}{dx}(x y) = x^3 \]
\[ x y = \frac{1}{4} x^4 + c \]
\[ y = \frac{1}{4} x^3 + \frac{c}{x}. \]
We see that the particular and homogeneous solutions are
\[ y_p = \frac{1}{4} x^3 \quad \text{and} \quad y_h = \frac{1}{x}. \]
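An editorial check by direct substitution, with an arbitrary value of c (the derivative here is computed analytically, since the family is explicit):

```python
c = 3.0
y  = lambda x: x**3 / 4 + c / x
yp = lambda x: 3 * x**2 / 4 - c / x**2   # exact derivative of the family

# Residual of y' + y/x = x^2 at sample points; it vanishes identically.
residual = max(abs(yp(x) + y(x) / x - x**2) for x in (0.5, 1.0, 2.0, 4.0))
```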
Note that the general solution to the differential equation is a one-parameter family of functions. The general solution is plotted in Figure 16.1 for various values of c.

Figure 16.1: Solutions to y' + y/x = x². (plot omitted)

16.4.3 Variation of Parameters
We could also have found the particular solution with the method of variation of parameters. Although we can solve first order equations without this method, it will become important in the study of higher order
inhomogeneous equations. We begin by assuming that the particular solution has the form y_p = u(x) y_h(x) where u(x) is an unknown function. We substitute this into the differential equation.
\[ \frac{d}{dx} y_p + p(x) y_p = f(x) \]
\[ \frac{d}{dx}\left( u\,y_h \right) + p(x)\,u\,y_h = f(x) \]
\[ u' y_h + u\left( y_h' + p(x) y_h \right) = f(x) \]
Since y_h is a homogeneous solution, y_h' + p(x) y_h = 0.
\[ u' = \frac{f(x)}{y_h} \]
\[ u = \int \frac{f(x)}{y_h(x)}\,dx \]
Recall that the homogeneous solution is y_h = e^{-P(x)}.
\[ u = \int e^{P(x)} f(x)\,dx \]
Thus the particular solution is
\[ y_p = e^{-P(x)} \int e^{P(x)} f(x)\,dx. \]
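The integrating-factor formulas above translate directly into a small numerical solver. This editorial sketch implements them with nested Simpson rules and checks the result against Example 16.4.2 with the initial condition y(1) = 1 (which forces c = 3/4):

```python
import math

def simpson(f, a, b, n=400):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def solve_linear(p, f, x0, y0, x):
    """Solve y' + p(x) y = f(x), y(x0) = y0, via the integrating factor.

    Uses P(t) = int_{x0}^t p, so P(x0) = 0 and the constant is just y0."""
    P = lambda t: simpson(p, x0, t)
    particular = simpson(lambda xi: math.exp(P(xi)) * f(xi), x0, x)
    return math.exp(-P(x)) * particular + y0 * math.exp(-P(x))

# Example 16.4.2 with y(1) = 1: exact solution y = x^3/4 + (3/4)/x.
numeric = solve_linear(lambda x: 1.0 / x, lambda x: x**2, 1.0, 1.0, 2.0)
exact = 2.0**3 / 4 + 0.75 / 2.0
```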
16.5 Initial Conditions
In physical problems involving first order differential equations, the solution satisfies both the differential equation and a constraint which we call the initial condition. Consider a first order linear differential equation subject to the initial condition y(x_0) = y_0. The general solution is
\[ y = y_p + c\,y_h = e^{-P(x)} \int e^{P(x)} f(x)\,dx + c\,e^{-P(x)}. \]
For the moment, we will assume that this problem is well-posed. A problem is well-posed if there is a unique solution to the differential equation that satisfies the constraint(s). Recall that ∫ e^{P(x)} f(x) dx denotes any integral of e^{P(x)} f(x). For convenience, we choose ∫_{x_0}^{x} e^{P(ξ)} f(ξ) dξ. The initial condition requires that
\[ y(x_0) = y_0 = e^{-P(x_0)} \int_{x_0}^{x_0} e^{P(\xi)} f(\xi)\,d\xi + c\,e^{-P(x_0)} = c\,e^{-P(x_0)}. \]
Thus c = y_0 e^{P(x_0)}. The solution subject to the initial condition is
\[ y = e^{-P(x)} \int_{x_0}^{x} e^{P(\xi)} f(\xi)\,d\xi + y_0\,e^{P(x_0) - P(x)}. \]
Example 16.5.1 Consider the problem
\[ y' + (\cos x) y = x, \quad y(0) = 2. \]
From Result 16.5.1, the solution subject to the initial condition is
\[ y = e^{-\sin x} \int_0^{x} \xi\,e^{\sin \xi}\,d\xi + 2\,e^{-\sin x}. \]
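As an independent editorial check, the closed-form solution can be compared against a direct numerical integration of the initial value problem; classical fourth-order Runge–Kutta is used here (an arbitrary choice of method).

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def y_formula(x):
    """The closed-form solution from Example 16.5.1."""
    return (math.exp(-math.sin(x))
            * simpson(lambda xi: xi * math.exp(math.sin(xi)), 0.0, x)
            + 2.0 * math.exp(-math.sin(x)))

def rk4(x_end, steps=2000):
    """Integrate y' = x - cos(x) y, y(0) = 2, with classical RK4."""
    x, y = 0.0, 2.0
    h = x_end / steps
    f = lambda x, y: x - math.cos(x) * y
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

diff = abs(y_formula(3.0) - rk4(3.0))
```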
16.5.1 Piecewise Continuous Coefficients and Inhomogeneities
If the coefficient function p(x) and the inhomogeneous term f(x) in the first order linear differential equation
\[ \frac{dy}{dx} + p(x) y = f(x) \]
are continuous, then the solution is continuous and has a continuous first derivative. To see this, we note that the solution
\[ y = e^{-P(x)} \int e^{P(x)} f(x)\,dx + c\,e^{-P(x)} \]
is continuous since the integral of a piecewise continuous function is continuous. The first derivative of the solution can be found directly from the differential equation.
\[ y' = -p(x) y + f(x) \]
Since p(x), y, and f(x) are continuous, y' is continuous.
If p(x) or f(x) is only piecewise continuous, then the solution will be continuous since the integral of a piecewise continuous function is continuous. The first derivative of the solution will be piecewise continuous.
Example 16.5.2 Consider the problem
\[ y' - y = H(x - 1), \quad y(0) = 1, \]
where H(x) is the Heaviside function.
\[ H(x) = \begin{cases} 1 & \text{for } x > 0, \\ 0 & \text{for } x < 0. \end{cases} \]
To solve this problem, we divide it into two equations on separate domains.
\[ y_1' - y_1 = 0, \quad y_1(0) = 1, \quad \text{for } x < 1 \]
\[ y_2' - y_2 = 1, \quad y_2(1) = y_1(1), \quad \text{for } x > 1 \]
With the condition y_2(1) = y_1(1) on the second equation, we demand that the solution be continuous. The solution to the first equation is y = e^x. The solution for the second equation is
\[ y = e^x \int_1^{x} e^{-\xi}\,d\xi + e\,e^{x - 1} = -1 + e^{x - 1} + e^x. \]
Thus the solution over the whole domain is
\[ y = \begin{cases} e^x & \text{for } x < 1, \\ \left(1 + e^{-1}\right) e^x - 1 & \text{for } x > 1. \end{cases} \]
The solution is graphed in Figure 16.2.
Figure 16.2: Solution to y' - y = H(x - 1). (plot omitted)

Example 16.5.3 Consider the problem,
\[ y' + \operatorname{sign}(x)\,y = 0, \quad y(1) = 1. \]
Recall that
\[ \operatorname{sign} x = \begin{cases} -1 & \text{for } x < 0 \\ 0 & \text{for } x = 0 \\ 1 & \text{for } x > 0. \end{cases} \]
Since sign x is piecewise defined, we solve the two problems,
\[ y_+' + y_+ = 0, \quad y_+(1) = 1, \quad \text{for } x > 0 \]
\[ y_-' - y_- = 0, \quad y_-(0) = y_+(0), \quad \text{for } x < 0, \]
and define the solution, y, to be
\[ y(x) = \begin{cases} y_+(x) & \text{for } x \ge 0, \\ y_-(x) & \text{for } x \le 0. \end{cases} \]
The initial condition for y_- demands that the solution be continuous.
Solving the two problems for positive and negative x, we obtain
\[ y(x) = \begin{cases} e^{1 - x} & \text{for } x > 0, \\ e^{1 + x} & \text{for } x < 0. \end{cases} \]
This can be simplified to
\[ y(x) = e^{1 - |x|}. \]
This solution is graphed in Figure 16.3.

Figure 16.3: Solution to y' + sign(x) y = 0. (plot omitted)
Result 16.5.1 Existence, Uniqueness Theorem. Let p(x) and f(x) be piecewise continuous on the interval [a, b] and let x_0 ∈ [a, b]. Consider the problem,
\[ \frac{dy}{dx} + p(x) y = f(x), \quad y(x_0) = y_0. \]
The general solution of the differential equation is
\[ y = e^{-P(x)} \int e^{P(x)} f(x)\,dx + c\,e^{-P(x)}. \]
The unique, continuous solution of the differential equation subject to the initial condition is
\[ y = e^{-P(x)} \int_{x_0}^{x} e^{P(\xi)} f(\xi)\,d\xi + y_0\,e^{P(x_0) - P(x)}, \]
where P(x) = ∫ p(x) dx.
16.6 Well-Posed Problems
Example 16.6.1 Consider the problem,
\[ y' - \frac{1}{x} y = 0, \quad y(0) = 1. \]
The general solution is y = cx. Applying the initial condition demands that 1 = c · 0, which cannot be satisfied. The general solution for various values of c is plotted in Figure 16.4.

Figure 16.4: Solutions to y' - y/x = 0. (plot omitted)
Example 16.6.2 Consider the problem
\[ y' - \frac{1}{x} y = -\frac{1}{x}, \quad y(0) = 1. \]
The general solution is
\[ y = 1 + cx. \]
The initial condition is satisfied for any value of c so there are an infinite number of solutions.
Example 16.6.3 Consider the problem
\[ y' + \frac{1}{x} y = 0, \quad y(0) = 1. \]
The general solution is y = c/x. Depending on whether c is nonzero, the solution is either singular or zero at the origin and cannot satisfy the initial condition.
The above problems in which there were either no solutions or an infinite number of solutions are said to be ill-posed. If there is a unique solution that satisfies the initial condition, the problem is said to be well-posed. We should have suspected that we would run into trouble in the above examples as the initial condition was given at a singularity of the coefficient function, p(x) = ±1/x.
Consider the problem,
\[ y' + p(x) y = f(x), \quad y(x_0) = y_0. \]
We assume that f(x) is bounded in a neighborhood of x = x_0. The differential equation has the general solution,
\[ y = e^{-P(x)} \int e^{P(x)} f(x)\,dx + c\,e^{-P(x)}. \]
If the homogeneous solution, e^{-P(x)}, is nonzero and finite at x = x_0, then there is a unique value of c for which the initial condition is satisfied. If the homogeneous solution vanishes at x = x_0 then either the initial condition cannot be satisfied or the initial condition is satisfied for all values of c. The homogeneous solution can vanish or be infinite only if P(x) → ±∞ as x → x_0. This can occur only if the coefficient function, p(x), is unbounded at that point.

Result 16.6.1 If the initial condition is given where the homogeneous solution to a first order, linear differential equation is zero or infinite then the problem may be ill-posed. This may occur only if the coefficient function, p(x), is unbounded at that point.
16.7 Equations in the Complex Plane
16.7.1 Ordinary Points
Consider the first order homogeneous equation
\[ \frac{dw}{dz} + p(z) w = 0, \]
where p(z), a function of a complex variable, is analytic in some domain D. The integrating factor,
\[ I(z) = \exp\left( \int p(z)\,dz \right), \]
is an analytic function in that domain. As with the case of real variables, multiplying by the integrating factor and integrating yields the solution,
\[ w(z) = c \exp\left( -\int p(z)\,dz \right). \]
We see that the solution is analytic in D.
Example 16.7.1 It does not make sense to pose the equation
\[ \frac{dw}{dz} + |z| w = 0. \]
For the solution to exist, w and hence w'(z) must be analytic. Since p(z) = |z| is not analytic anywhere in the complex plane, the equation has no solution.
Any point at which p(z) is analytic is called an ordinary point of the differential equation. Since the solution is analytic we can expand it in a Taylor series about an ordinary point. The radius of convergence of the series will be at least the distance to the nearest singularity of p(z) in the complex plane.
Example 16.7.2 Consider the equation
\[ \frac{dw}{dz} - \frac{1}{1 - z} w = 0. \]
The general solution is w = c/(1 − z). Expanding this solution about the origin,
\[ w = \frac{c}{1 - z} = c \sum_{n=0}^{\infty} z^n. \]
The radius of convergence of the series is,
\[ R = \lim_{n \to \infty} \left| \frac{a_n}{a_{n+1}} \right| = 1, \]
which is the distance from the origin to the nearest singularity of p(z) = 1/(1 − z).
We do not need to solve the differential equation to find the Taylor series expansion of the homogeneous solution. We could substitute a general Taylor series expansion into the differential equation and solve for the coefficients. Since we can always solve first order equations, this method is of limited usefulness. However, when we consider higher order equations in which we cannot solve the equations exactly, this will become an important method.
Example 16.7.3 Again consider the equation
\[ \frac{dw}{dz} - \frac{1}{1 - z} w = 0. \]
Since we know that the solution has a Taylor series expansion about z = 0, we substitute w = Σ_{n=0}^∞ a_n zⁿ into the differential equation.
\[ (1 - z) \frac{d}{dz} \sum_{n=0}^{\infty} a_n z^n - \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=1}^{\infty} n a_n z^{n-1} - \sum_{n=1}^{\infty} n a_n z^n - \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=0}^{\infty} (n + 1) a_{n+1} z^n - \sum_{n=0}^{\infty} n a_n z^n - \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=0}^{\infty} \left( (n + 1) a_{n+1} - (n + 1) a_n \right) z^n = 0. \]
Now we equate powers of z to zero. For zⁿ, the equation is (n + 1)a_{n+1} − (n + 1)a_n = 0, or a_{n+1} = a_n. Thus we have that a_n = a_0 for all n ≥ 1. The solution is then
\[ w = a_0 \sum_{n=0}^{\infty} z^n, \]
which is the result we obtained by expanding the solution in Example 16.7.2.
Result 16.7.1 Consider the equation
\[ \frac{dw}{dz} + p(z) w = 0. \]
If p(z) is analytic at z = z_0 then z_0 is called an ordinary point of the differential equation. The Taylor series expansion of the solution can be found by substituting w = Σ_{n=0}^∞ a_n (z − z_0)ⁿ into the equation and equating powers of (z − z_0). The radius of convergence of the series is at least the distance to the nearest singularity of p(z) in the complex plane.
16.7.2 Regular Singular Points
If the coefficient function p(z) has a simple pole at z = z_0 then z_0 is a regular singular point of the first order differential equation.
Example 16.7.4 Consider the equation
\[ \frac{dw}{dz} + \frac{\alpha}{z} w = 0, \quad \alpha \neq 0. \]
This equation has a regular singular point at z = 0. The solution is w = c z^{-α}. Depending on the value of α, the solution can have three different kinds of behavior.
α is a negative integer. The solution is analytic in the finite complex plane.
α is a positive integer. The solution has a pole at the origin. w is analytic in the annulus, 0 < |z|.
α is not an integer. w has a branch point at z = 0. The solution is analytic in the cut annulus 0 < |z| < ∞, θ_0 < arg z < θ_0 + 2π.
Consider the differential equation
\[ \frac{dw}{dz} + p(z) w = 0, \]
where p(z) has a simple pole at the origin and is analytic in the annulus, 0 < |z| < r, for some positive r. Recall that the solution is
\[ w = c \exp\left( -\int p(z)\,dz \right) = c \exp\left( -\int \frac{b_0}{z} + p(z) - \frac{b_0}{z}\,dz \right) \]
\[ = c \exp\left( -b_0 \log z - \int \frac{z p(z) - b_0}{z}\,dz \right) = c\,z^{-b_0} \exp\left( -\int \frac{z p(z) - b_0}{z}\,dz \right) \]
The exponential factor has a removable singularity at z = 0 and is analytic in |z| < r. We consider the following cases for the z^{-b_0} factor:
b_0 is a negative integer. Since z^{-b_0} is analytic at the origin, the solution to the differential equation is analytic in the circle |z| < r.
b_0 is a positive integer. The solution has a pole of order b_0 at the origin and is analytic in the annulus 0 < |z| < r.
b_0 is not an integer. The solution has a branch point at the origin and thus is not single-valued. The solution is analytic in the cut annulus 0 < |z| < r, θ_0 < arg z < θ_0 + 2π.
Since the exponential factor has a convergent Taylor series in |z| < r, the solution can be expanded in a series of the form
\[ w = z^{-b_0} \sum_{n=0}^{\infty} a_n z^n, \quad \text{where } a_0 \neq 0 \text{ and } b_0 = \lim_{z \to 0} z\,p(z). \]
In the case of a regular singular point at z = z_0, the series is
\[ w = (z - z_0)^{-b_0} \sum_{n=0}^{\infty} a_n (z - z_0)^n, \quad \text{where } a_0 \neq 0 \text{ and } b_0 = \lim_{z \to z_0} (z - z_0)\,p(z). \]
Series of this form are known as Frobenius series. Since we can write the solution as
\[ w = c\,(z - z_0)^{-b_0} \exp\left( -\int \left( p(z) - \frac{b_0}{z - z_0} \right) dz \right), \]
we see that the Frobenius expansion of the solution will have a radius of convergence at least the distance to the nearest singularity of p(z).
Result 16.7.2 Consider the equation,
\[ \frac{dw}{dz} + p(z) w = 0, \]
where p(z) has a simple pole at z = z_0, p(z) is analytic in some annulus, 0 < |z − z_0| < r, and lim_{z→z_0} (z − z_0) p(z) = β. The solution to the differential equation has a Frobenius series expansion of the form
\[ w = (z - z_0)^{-\beta} \sum_{n=0}^{\infty} a_n (z - z_0)^n, \quad a_0 \neq 0. \]
The radius of convergence of the expansion will be at least the distance to the nearest singularity of p(z).
Example 16.7.5 We will find the first two nonzero terms in the series solution about z = 0 of the differential equation,
\[ \frac{dw}{dz} + \frac{1}{\sin z} w = 0. \]
First we note that the coefficient function has a simple pole at z = 0 and
\[ \lim_{z \to 0} \frac{z}{\sin z} = \lim_{z \to 0} \frac{1}{\cos z} = 1. \]
Thus we look for a series solution of the form
\[ w = z^{-1} \sum_{n=0}^{\infty} a_n z^n, \quad a_0 \neq 0. \]
The nearest singularities of 1/sin z in the complex plane are at z = ±π. Thus the radius of convergence of the series will be at least π.
Substituting the first three terms of the expansion into the differential equation,
\[ \frac{d}{dz}\left( a_0 z^{-1} + a_1 + a_2 z \right) + \frac{1}{\sin z}\left( a_0 z^{-1} + a_1 + a_2 z \right) = O(z). \]
Recall that the Taylor expansion of sin z is sin z = z − z³/6 + O(z⁵).
\[ \left( z - \frac{z^3}{6} + O(z^5) \right) \left( -a_0 z^{-2} + a_2 \right) + \left( a_0 z^{-1} + a_1 + a_2 z \right) = O(z^2) \]
\[ -a_0 z^{-1} + \left( a_2 + \frac{a_0}{6} \right) z + a_0 z^{-1} + a_1 + a_2 z = O(z^2) \]
\[ a_1 + \left( 2 a_2 + \frac{a_0}{6} \right) z = O(z^2) \]
a_0 is arbitrary. Equating powers of z,
\[ z^0: \quad a_1 = 0. \]
\[ z^1: \quad 2 a_2 + \frac{a_0}{6} = 0. \]
Thus the solution has the expansion,
\[ w = a_0 \left( z^{-1} - \frac{z}{12} \right) + O(z^2). \]
In Figure 16.5 the exact solution is plotted in a solid line and the two term approximation is plotted in a dashed line. The two term approximation is very good near the point z = 0.

Figure 16.5: Plot of the Exact Solution and the Two Term Approximation. (plot omitted)
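Since ∫ dz/sin z = log tan(z/2), the exact solution here is w = c cot(z/2); with c = a₀/2 its small-z expansion reproduces the two-term approximation above. The following editorial sketch checks both the ODE residual of the exact solution and the quality of the truncation near z = 0:

```python
import math

a0 = 1.0
exact  = lambda z: (a0 / 2.0) / math.tan(z / 2.0)    # w = (a0/2) cot(z/2)
approx = lambda z: a0 * (1.0 / z - z / 12.0)         # two-term Frobenius series

# Residual of w' + w/sin z = 0 for the exact solution; w' = -a0/(4 sin^2(z/2)).
wp = lambda z: -a0 / (4.0 * math.sin(z / 2.0)**2)
residual = abs(wp(0.7) + exact(0.7) / math.sin(0.7))

# Truncation error is O(z); it should shrink markedly as z -> 0.
err_small = abs(exact(0.1) - approx(0.1))
err_large = abs(exact(1.0) - approx(1.0))
```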
Example 16.7.6 Find the first two nonzero terms in the series expansion about z = 0 of the solution to
\[ w' - i\,\frac{\cos z}{z}\,w = 0. \]
Since cos z / z has a simple pole at z = 0 and lim_{z→0} (−i cos z) = −i we see that the Frobenius series will have the form
\[ w = z^{i} \sum_{n=0}^{\infty} a_n z^n, \quad a_0 \neq 0. \]
Recall that cos z has the Taylor expansion Σ_{n=0}^∞ (−1)ⁿ z^{2n}/(2n)!. Substituting the Frobenius expansion into the differential equation yields
\[ z \left( i z^{i-1} \sum_{n=0}^{\infty} a_n z^n + z^{i} \sum_{n=0}^{\infty} n a_n z^{n-1} \right) - i \left( \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n}}{(2n)!} \right) \left( z^{i} \sum_{n=0}^{\infty} a_n z^n \right) = 0 \]
\[ \sum_{n=0}^{\infty} (n + i) a_n z^n - i \left( \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n}}{(2n)!} \right) \left( \sum_{n=0}^{\infty} a_n z^n \right) = 0. \]
Equating powers of z,
\[ z^0: \quad i a_0 - i a_0 = 0 \quad \Rightarrow \quad a_0 \text{ is arbitrary} \]
\[ z^1: \quad (1 + i) a_1 - i a_1 = 0 \quad \Rightarrow \quad a_1 = 0 \]
\[ z^2: \quad (2 + i) a_2 - i a_2 + \frac{i}{2} a_0 = 0 \quad \Rightarrow \quad a_2 = -\frac{i}{4} a_0. \]
Thus the solution is
\[ w = a_0 z^{i} \left( 1 - \frac{i}{4} z^2 + O(z^3) \right). \]
16.7.3 Irregular Singular Points
If a point is not an ordinary point or a regular singular point then it is called an irregular singular point. The following equations have irregular singular points at the origin.
\[ w' + \sqrt{z}\,w = 0 \]
\[ w' - z^{-2} w = 0 \]
\[ w' + \exp(1/z) w = 0 \]
Example 16.7.7 Consider the differential equation
\[ \frac{dw}{dz} + \alpha z^{\beta} w = 0, \quad \alpha \neq 0, \quad \beta \neq -1, 0, 1, 2, \ldots \]
This equation has an irregular singular point at the origin. Solving this equation,
\[ \frac{d}{dz}\left( \exp\left( \int \alpha z^{\beta}\,dz \right) w \right) = 0 \]
\[ w = c \exp\left( -\frac{\alpha}{\beta + 1} z^{\beta + 1} \right) = c \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \left( \frac{\alpha}{\beta + 1} \right)^n z^{(\beta + 1) n}. \]
If β is not an integer, then the solution has a branch point at the origin. If β is an integer, β < −1, then the solution has an essential singularity at the origin. The solution cannot be expanded in a Frobenius series, w = z^{λ} Σ_{n=0}^∞ a_n zⁿ.
Although we will not show it, this result holds for any irregular singular point of the differential equation. We cannot approximate the solution near an irregular singular point using a Frobenius expansion.
Now would be a good time to summarize what we have discovered about solutions of first order differential equations in the complex plane.
Result 16.7.3 Consider the first order differential equation
\[ \frac{dw}{dz} + p(z) w = 0. \]
Ordinary Points If p(z) is analytic at z = z_0 then z_0 is an ordinary point of the differential equation. The solution can be expanded in the Taylor series w = Σ_{n=0}^∞ a_n (z − z_0)ⁿ. The radius of convergence of the series is at least the distance to the nearest singularity of p(z) in the complex plane.
Regular Singular Points If p(z) has a simple pole at z = z_0 and is analytic in some annulus 0 < |z − z_0| < r then z_0 is a regular singular point of the differential equation. The solution at z_0 will either be analytic, have a pole, or have a branch point. The solution can be expanded in the Frobenius series w = (z − z_0)^{−β} Σ_{n=0}^∞ a_n (z − z_0)ⁿ where a_0 ≠ 0 and β = lim_{z→z_0} (z − z_0) p(z). The radius of convergence of the Frobenius series will be at least the distance to the nearest singularity of p(z).
Irregular Singular Points If the point z = z_0 is not an ordinary point or a regular singular point, then it is an irregular singular point of the differential equation. The solution cannot be expanded in a Frobenius series about that point.
16.7.4 The Point at Infinity
Now we consider the behavior of first order linear differential equations at the point at infinity. Recall from complex variables that the complex plane together with the point at infinity is called the extended complex plane. To study the behavior of a function f(z) at infinity, we make the transformation $z = \frac{1}{\zeta}$ and study the behavior of $f(1/\zeta)$ at $\zeta = 0$.
Example 16.7.8 Let's examine the behavior of sin z at infinity. We make the substitution $z = 1/\zeta$ and find the Laurent expansion about $\zeta = 0$.
$$\sin(1/\zeta) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} \zeta^{-(2n+1)}$$
Since $\sin(1/\zeta)$ has an essential singularity at $\zeta = 0$, sin z has an essential singularity at infinity.
We use the same approach if we want to examine the behavior at infinity of a differential equation. Starting with the first order differential equation,
$$\frac{dw}{dz} + p(z) w = 0,$$
we make the substitution
$$z = \frac{1}{\zeta}, \qquad \frac{d}{dz} = -\zeta^2 \frac{d}{d\zeta}, \qquad w(z) = u(\zeta)$$
to obtain
$$-\zeta^2 \frac{du}{d\zeta} + p(1/\zeta) u = 0$$
$$\frac{du}{d\zeta} - \frac{p(1/\zeta)}{\zeta^2} u = 0.$$
Result 16.7.4 The behavior at infinity of
$$\frac{dw}{dz} + p(z) w = 0$$
is the same as the behavior at $\zeta = 0$ of
$$\frac{du}{d\zeta} - \frac{p(1/\zeta)}{\zeta^2} u = 0.$$
Example 16.7.9 Classify the singular points of the equation
$$\frac{dw}{dz} + \frac{1}{z^2 + 9} w = 0.$$
Rewriting this equation as
$$\frac{dw}{dz} + \frac{1}{(z - 3i)(z + 3i)} w = 0,$$
we see that $z = 3i$ and $z = -3i$ are regular singular points. The transformation $z = 1/\zeta$ yields the differential equation
$$\frac{du}{d\zeta} - \frac{1}{\zeta^2} \frac{1}{(1/\zeta)^2 + 9} u = 0$$
$$\frac{du}{d\zeta} - \frac{1}{9\zeta^2 + 1} u = 0$$
Since the equation for u has an ordinary point at $\zeta = 0$, $z = \infty$ is an ordinary point of the equation for w.
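A small numerical sanity check (illustrative, not from the text): for $p(z) = 1/(z^2+9)$ the transformed coefficient $-p(1/\zeta)/\zeta^2$ simplifies to $-1/(9\zeta^2+1)$, which stays finite as $\zeta \to 0$, consistent with an ordinary point at infinity.

```python
# Hedged sketch: evaluate the transformed coefficient -p(1/zeta)/zeta**2
# for p(z) = 1/(z**2 + 9); algebraically it equals -1/(9*zeta**2 + 1).
def p(z):
    return 1.0 / (z**2 + 9.0)

def q(zeta):
    return -p(1.0 / zeta) / zeta**2

for zeta in (0.1, 0.01, 0.001):
    print(zeta, q(zeta))   # tends to -1 as zeta -> 0, a finite limit
```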
16.8 Exercises
Exact Equations
Exercise 16.1 (mathematica/ode/first order/exact.nb)
Find the general solution y = y(x) of the equations
1. $\dfrac{dy}{dx} = \dfrac{x^2 + xy + y^2}{x^2}$,
2. $(4y - 3x)\,dx + (y - 2x)\,dy = 0$.
Hint, Solution
Exercise 16.2 (mathematica/ode/first order/exact.nb)
Determine whether or not the following equations can be made exact. If so find the corresponding general solution.
1. $(3x^2 - 2xy + 2)\,dx + (6y^2 - x^2 + 3)\,dy = 0$
2. $\dfrac{dy}{dx} = -\dfrac{ax + by}{bx + cy}$
Hint, Solution
Exercise 16.3 (mathematica/ode/first order/exact.nb)
Find the solutions of the following differential equations which satisfy the given initial condition. In each case determine the interval in which the solution is defined.
1. $\dfrac{dy}{dx} = (1 - 2x) y^2$, $\quad y(0) = -1/6$.
2. $x\,dx + y\,e^{-x}\,dy = 0$, $\quad y(0) = 1$.
Hint, Solution
Exercise 16.4
Show that
$$\mu(x, y) = \frac{1}{x M(x, y) + y N(x, y)}$$
is an integrating factor for the homogeneous equation,
$$M(x, y) + N(x, y) \frac{dy}{dx} = 0.$$
Hint, Solution
Exercise 16.5
Are the following equations exact? If so, solve them.
1. $(4y - x) y' - (9x^2 + y - 1) = 0$
2. $(2x - 2y) y' + (2x + 4y) = 0$.
Hint, Solution
Exercise 16.6
Solve the following differential equations by inspection. That is, group terms into exact derivatives and then integrate. f(x) and g(x) are known functions.
1. $g(x)\, y'(x) + g'(x)\, y(x) = f(x)$
2. $\dfrac{y'(x)}{y(x)} = f(x)$
3. $y^\alpha(x)\, y'(x) = f(x)$
4. $\dfrac{y'}{\cos x} + y \dfrac{\tan x}{\cos x} = \cos x$
Hint, Solution
Exercise 16.7 (mathematica/ode/first order/exact.nb)
Suppose we have a differential equation of the form dy/dt = f(y/t). Differential equations of this form are called homogeneous equations. Since the right side only depends on the single variable y/t, it suggests itself to make the substitution y/t = v or y = tv.
1. Show that this substitution replaces the equation dy/dt = f(y/t) by the equivalent equation $t\,dv/dt + v = f(v)$, which is separable.
2. Find the general solution of the equation $dy/dt = 2(y/t) + (y/t)^2$.
Hint, Solution
Exercise 16.8 (mathematica/ode/first order/exact.nb)
Find all functions f(t) such that the differential equation
$$y^2 \sin t + y f(t) \frac{dy}{dt} = 0 \tag{16.5}$$
is exact. Solve the differential equation for these f(t).
Hint, Solution
The First Order, Linear Differential Equation
Exercise 16.9 (mathematica/ode/first order/linear.nb)
Solve the differential equation
$$y' + \frac{y}{\sin x} = 0.$$
Hint, Solution
Exercise 16.10 (mathematica/ode/first order/linear.nb)
Solve the differential equation
$$y' - \frac{1}{x} y = x^\alpha.$$
Hint, Solution
Initial Conditions
Exercise 16.11 (mathematica/ode/first order/exact.nb)
Find the solutions of the following differential equations which satisfy the given initial conditions:
1. $\dfrac{dy}{dx} + xy = x^{2n+1}$, $\quad y(1) = 1$, $\quad n \in \mathbb{Z}$
2. $\dfrac{dy}{dx} - 2xy = 1$, $\quad y(0) = 1$
Hint, Solution
Exercise 16.12 (mathematica/ode/first order/exact.nb)
Show that if $\alpha > 0$ and $\lambda > 0$, then for any real $\beta$, every solution of
$$\frac{dy}{dx} + \alpha y(x) = \beta e^{-\lambda x}$$
satisfies $\lim_{x \to +\infty} y(x) = 0$. (The case $\alpha = \lambda$ requires special treatment.) Find the solution for $\beta = \lambda = 1$ which satisfies $y(0) = 1$. Sketch this solution for $0 \leq x < \infty$ for several values of $\alpha$. In particular, show what happens when $\alpha \to 0$ and $\alpha \to \infty$.
Hint, Solution
Well-Posed Problems
Exercise 16.13
Find the solutions of
$$t \frac{dy}{dt} + A y = 1 + t^2$$
which are bounded at t = 0. Consider all (real) values of A.
Hint, Solution
Equations in the Complex Plane
Exercise 16.14
Find the Taylor series expansion about the origin of the solution to
$$\frac{dw}{dz} + \frac{1}{1 - z} w = 0$$
with the substitution $w = \sum_{n=0}^{\infty} a_n z^n$. What is the radius of convergence of the series? What is the distance to the nearest singularity of $\frac{1}{1-z}$?
Hint, Solution
Exercise 16.15
Classify the singular points of the following first order differential equations, (include the point at infinity).
1. $w' + \dfrac{\sin z}{z}\, w = 0$
2. $w' + \dfrac{1}{z - 3}\, w = 0$
3. $w' + z^{1/2}\, w = 0$
Hint, Solution
Exercise 16.16
Consider the equation
$$w' + z^{-2} w = 0.$$
The point z = 0 is an irregular singular point of the differential equation. Thus we know that we cannot expand the solution about z = 0 in a Frobenius series. Try substituting the series solution
$$w = z^\lambda \sum_{n=0}^{\infty} a_n z^n, \qquad a_0 \neq 0$$
into the differential equation anyway. What happens?
Hint, Solution
16.9 Hints
Exact Equations
Hint 16.1
1.
2.
Hint 16.2
1. The equation is exact. Determine the primitive u by solving the equations $u_x = P$, $u_y = Q$.
2. The equation can be made exact.
Hint 16.3
1. This equation is separable. Integrate to get the general solution. Apply the initial condition to determine the constant of integration.
2. Ditto. You will have to numerically solve an equation to determine where the solution is defined.
Hint 16.4
Hint 16.5
Hint 16.6
1. $\dfrac{d}{dx}[u v] = u' v + u v'$
2. $\dfrac{d}{dx} \log u = \dfrac{u'}{u}$
3. $\dfrac{d}{dx} u^c = c\, u^{c-1} u'$
Hint 16.7
Hint 16.8
The First Order, Linear Differential Equation
Hint 16.9
Look in the appendix for the integral of csc x.
Hint 16.10
Make sure you consider the case $\alpha = 0$.
Initial Conditions
Hint 16.11
Hint 16.12
Well-Posed Problems
Hint 16.13
Equations in the Complex Plane
Hint 16.14
The radius of convergence of the series and the distance to the nearest singularity of $\frac{1}{1-z}$ are not the same.
Hint 16.15
Hint 16.16
Try to find the value of $\lambda$ by substituting the series into the differential equation and equating powers of z.
16.10 Solutions
Exact Equations
Solution 16.1
1.
$$\frac{dy}{dx} = \frac{x^2 + xy + y^2}{x^2}$$
Since the right side is a homogeneous function of order zero, this is a homogeneous differential equation. We make the change of variables u = y/x and then solve the differential equation for u.
$$x u' + u = 1 + u + u^2$$
$$\frac{du}{1 + u^2} = \frac{dx}{x}$$
$$\arctan(u) = \ln|x| + c$$
$$u = \tan(\ln|cx|)$$
$$y = x \tan(\ln|cx|)$$
2.
$$(4y - 3x)\,dx + (y - 2x)\,dy = 0$$
Since the coefficients are homogeneous functions of order one, this is a homogeneous differential equation. We make the change of variables u = y/x and then solve the differential equation for u.
$$\left(4 \frac{y}{x} - 3\right) dx + \left(\frac{y}{x} - 2\right) dy = 0$$
$$(4u - 3)\,dx + (u - 2)(u\,dx + x\,du) = 0$$
$$(u^2 + 2u - 3)\,dx + x(u - 2)\,du = 0$$
$$\frac{dx}{x} + \frac{u - 2}{(u + 3)(u - 1)}\,du = 0$$
$$\frac{dx}{x} + \left(\frac{5/4}{u + 3} - \frac{1/4}{u - 1}\right) du = 0$$
$$\ln(x) + \frac{5}{4} \ln(u + 3) - \frac{1}{4} \ln(u - 1) = c$$
$$\frac{x^4 (u + 3)^5}{u - 1} = c$$
$$\frac{x^4 (y/x + 3)^5}{y/x - 1} = c$$
$$\frac{(y + 3x)^5}{y - x} = c$$
Solution 16.2
1.
$$(3x^2 - 2xy + 2)\,dx + (6y^2 - x^2 + 3)\,dy = 0$$
We check if this form of the equation, P dx + Q dy = 0, is exact.
$$P_y = -2x, \qquad Q_x = -2x$$
Since $P_y = Q_x$, the equation is exact. Now we find the primitive u(x, y) which satisfies
$$du = (3x^2 - 2xy + 2)\,dx + (6y^2 - x^2 + 3)\,dy.$$
The primitive satisfies the partial differential equations
$$u_x = P, \qquad u_y = Q. \tag{16.6}$$
We integrate the first equation of 16.6 to determine u up to a function of integration.
$$u_x = 3x^2 - 2xy + 2$$
$$u = x^3 - x^2 y + 2x + f(y)$$
We substitute this into the second equation of 16.6 to determine the function of integration up to an additive constant.
$$-x^2 + f'(y) = 6y^2 - x^2 + 3$$
$$f'(y) = 6y^2 + 3$$
$$f(y) = 2y^3 + 3y$$
The solution of the differential equation is determined by the implicit equation u = c.
$$x^3 - x^2 y + 2x + 2y^3 + 3y = c$$
2.
$$\frac{dy}{dx} = -\frac{ax + by}{bx + cy}$$
$$(ax + by)\,dx + (bx + cy)\,dy = 0$$
We check if this form of the equation, P dx + Q dy = 0, is exact.
$$P_y = b, \qquad Q_x = b$$
Since $P_y = Q_x$, the equation is exact. Now we find the primitive u(x, y) which satisfies
$$du = (ax + by)\,dx + (bx + cy)\,dy$$
The primitive satisfies the partial differential equations
$$u_x = P, \qquad u_y = Q. \tag{16.7}$$
We integrate the first equation of 16.7 to determine u up to a function of integration.
$$u_x = ax + by$$
$$u = \frac{1}{2} a x^2 + b x y + f(y)$$
We substitute this into the second equation of 16.7 to determine the function of integration up to an additive constant.
$$b x + f'(y) = b x + c y$$
$$f'(y) = c y$$
$$f(y) = \frac{1}{2} c y^2$$
The solution of the differential equation is determined by the implicit equation u = d.
$$a x^2 + 2 b x y + c y^2 = d$$
Solution 16.3
Note that since these equations are nonlinear, we cannot predict where the solutions will be defined from the equation alone.
1. This equation is separable. We integrate to get the general solution.
$$\frac{dy}{dx} = (1 - 2x) y^2$$
$$\frac{dy}{y^2} = (1 - 2x)\,dx$$
$$-\frac{1}{y} = x - x^2 + c$$
$$y = \frac{1}{x^2 - x - c}$$
Now we apply the initial condition.
$$y(0) = -\frac{1}{c} = -\frac{1}{6}$$
$$y = \frac{1}{x^2 - x - 6}$$
$$y = \frac{1}{(x + 2)(x - 3)}$$
The solution is defined on the interval $(-2 \ldots 3)$.
2. This equation is separable. We integrate to get the general solution.
$$x\,dx + y\,e^{-x}\,dy = 0$$
$$x\,e^x\,dx + y\,dy = 0$$
$$(x - 1)\,e^x + \frac{1}{2} y^2 = c$$
$$y = \sqrt{2\left(c + (1 - x)\,e^x\right)}$$
We apply the initial condition to determine the constant of integration.
$$y(0) = \sqrt{2(c + 1)} = 1$$
$$c = -\frac{1}{2}$$
$$y = \sqrt{2(1 - x)\,e^x - 1}$$
The function $2(1 - x)\,e^x - 1$ is plotted in Figure 16.6. We see that the argument of the square root in the solution is non-negative only on an interval about the origin. Because $2(1 - x)\,e^x - 1 = 0$ is a mixed algebraic / transcendental equation, we cannot solve it analytically. The solution of the differential equation is defined on the interval $(-1.67835 \ldots 0.768039)$.
Figure 16.6: The function $2(1 - x)\,e^x - 1$.
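The interval endpoints quoted above can be confirmed numerically. This is an illustrative sketch (the bracketing intervals are assumptions read off from the sign of the function, not from the text), finding the two roots of $2(1-x)e^x - 1 = 0$ by bisection.

```python
import math

# Hedged sketch: locate the roots of g(x) = 2*(1 - x)*exp(x) - 1 by bisection.
def g(x):
    return 2.0 * (1.0 - x) * math.exp(x) - 1.0

def bisect(a, b, tol=1e-10):
    # assumes g(a) and g(b) have opposite signs
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

left = bisect(-2.0, -1.0)    # g(-2) < 0, g(-1) > 0
right = bisect(0.0, 1.0)     # g(0) > 0, g(1) < 0
print(left, right)           # approximately -1.67835 and 0.768039
```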
Solution 16.4
We consider the homogeneous equation,
$$M(x, y) + N(x, y) \frac{dy}{dx} = 0.$$
That is, both M and N are homogeneous of degree n. Multiplying by
$$\mu(x, y) = \frac{1}{x M(x, y) + y N(x, y)}$$
will make the equation exact. To prove this we use the result that
$$P(x, y) + Q(x, y) \frac{dy}{dx} = 0$$
is exact if and only if $P_y = Q_x$.
$$P_y = \frac{\partial}{\partial y}\left[\frac{M}{xM + yN}\right] = \frac{M_y (xM + yN) - M (x M_y + N + y N_y)}{(xM + yN)^2}$$
$$Q_x = \frac{\partial}{\partial x}\left[\frac{N}{xM + yN}\right] = \frac{N_x (xM + yN) - N (M + x M_x + y N_x)}{(xM + yN)^2}$$
Setting $P_y = Q_x$,
$$M_y (xM + yN) - M (x M_y + N + y N_y) = N_x (xM + yN) - N (M + x M_x + y N_x)$$
$$y M_y N - y M N_y = x M N_x - x M_x N$$
$$x M_x N + y M_y N = x M N_x + y M N_y$$
With Euler's theorem, this reduces to the identity,
$$n M N = n M N.$$
Thus the equation is exact. $\mu(x, y)$ is an integrating factor for the homogeneous equation.
Solution 16.5
1. We consider the differential equation,
$$(4y - x) y' - (9x^2 + y - 1) = 0.$$
$$P_y = \frac{\partial}{\partial y}\left(1 - y - 9x^2\right) = -1$$
$$Q_x = \frac{\partial}{\partial x}(4y - x) = -1$$
This equation is exact. It is simplest to solve the equation by rearranging terms to form exact derivatives.
$$4 y y' - x y' - y + 1 - 9x^2 = 0$$
$$\frac{d}{dx}\left[2y^2 - xy\right] + 1 - 9x^2 = 0$$
$$2y^2 - xy + x - 3x^3 + c = 0$$
$$y = \frac{1}{4}\left(x \pm \sqrt{x^2 - 8(c + x - 3x^3)}\right)$$
2. We consider the differential equation,
$$(2x - 2y) y' + (2x + 4y) = 0.$$
$$P_y = \frac{\partial}{\partial y}(2x + 4y) = 4$$
$$Q_x = \frac{\partial}{\partial x}(2x - 2y) = 2$$
Since $P_y \neq Q_x$, this is not an exact equation.
Solution 16.6
1.
$$g(x) y'(x) + g'(x) y(x) = f(x)$$
$$\frac{d}{dx}\left[g(x) y(x)\right] = f(x)$$
$$y(x) = \frac{1}{g(x)} \int f(x)\,dx + \frac{c}{g(x)}$$
2.
$$\frac{y'(x)}{y(x)} = f(x)$$
$$\frac{d}{dx} \log(y(x)) = f(x)$$
$$\log(y(x)) = \int f(x)\,dx + c$$
$$y(x) = e^{\int f(x)\,dx + c}$$
$$y(x) = a\,e^{\int f(x)\,dx}$$
3.
$$y^\alpha(x)\, y'(x) = f(x)$$
$$\frac{y^{\alpha+1}(x)}{\alpha + 1} = \int f(x)\,dx + c$$
$$y(x) = \left((\alpha + 1) \int f(x)\,dx + a\right)^{1/(\alpha+1)}$$
4.
$$\frac{y'}{\cos x} + y \frac{\tan x}{\cos x} = \cos x$$
$$\frac{d}{dx}\left[\frac{y}{\cos x}\right] = \cos x$$
$$\frac{y}{\cos x} = \sin x + c$$
$$y(x) = \sin x \cos x + c \cos x$$
Solution 16.7
1. We substitute y = tv into the differential equation and simplify.
$$y' = f\left(\frac{y}{t}\right)$$
$$t v' + v = f(v)$$
$$t v' = f(v) - v$$
$$\frac{v'}{f(v) - v} = \frac{1}{t} \tag{16.8}$$
The final equation is separable.
2. We start with the homogeneous differential equation:
$$\frac{dy}{dt} = 2\left(\frac{y}{t}\right) + \left(\frac{y}{t}\right)^2.$$
We substitute y = tv to obtain Equation 16.8, and solve the separable equation.
$$\frac{v'}{v^2 + v} = \frac{1}{t}$$
$$\frac{v'}{v(v + 1)} = \frac{1}{t}$$
$$\frac{v'}{v} - \frac{v'}{v + 1} = \frac{1}{t}$$
$$\log v - \log(v + 1) = \log t + c$$
$$\log\left(\frac{v}{v + 1}\right) = \log(ct)$$
$$\frac{v}{v + 1} = ct$$
$$v = \frac{ct}{1 - ct}$$
$$v = \frac{t}{c - t}$$
$$y = \frac{t^2}{c - t}$$
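The final answer can be spot-checked numerically. This is an illustrative sketch; the constant $c = 4$ and the sample point are arbitrary choices, not from the text.

```python
# Hedged sketch: verify that y = t**2/(c - t) satisfies
# dy/dt = 2*(y/t) + (y/t)**2, using an assumed constant c = 4.
c = 4.0

def y(t):
    return t * t / (c - t)

t, h = 1.5, 1e-6
dy = (y(t + h) - y(t - h)) / (2 * h)   # numerical derivative
v = y(t) / t                            # the ratio y/t
print(dy - (2 * v + v * v))             # approximately 0
```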
Solution 16.8
Recall that the differential equation
$$P(x, y) + Q(x, y) y' = 0$$
is exact if and only if $P_y = Q_x$. For Equation 16.5, this criterion is
$$2 y \sin t = y f'(t)$$
$$f'(t) = 2 \sin t$$
$$f(t) = 2(a - \cos t).$$
In this case, the differential equation is
$$y^2 \sin t + 2 y y' (a - \cos t) = 0.$$
We can integrate this exact equation by inspection.
$$\frac{d}{dt}\left[y^2 (a - \cos t)\right] = 0$$
$$y^2 (a - \cos t) = c$$
$$y = \frac{c}{\sqrt{a - \cos t}}$$
The First Order, Linear Differential Equation
Solution 16.9
Consider the differential equation
$$y' + \frac{y}{\sin x} = 0.$$
The solution is
$$y = c\,e^{-\int 1/\sin x\,dx} = c\,e^{-\log(\tan(x/2))}$$
$$y = c \cot\left(\frac{x}{2}\right).$$
Solution 16.10
$$y' - \frac{1}{x} y = x^\alpha$$
The integrating factor is
$$\exp\left(\int -\frac{1}{x}\,dx\right) = \exp(-\log x) = \frac{1}{x}.$$
$$\frac{1}{x} y' - \frac{1}{x^2} y = x^{\alpha - 1}$$
$$\frac{d}{dx}\left[\frac{1}{x} y\right] = x^{\alpha - 1}$$
$$\frac{1}{x} y = \int x^{\alpha - 1}\,dx + c$$
$$y = x \int x^{\alpha - 1}\,dx + c x$$
$$y = \begin{cases} \dfrac{x^{\alpha+1}}{\alpha} + c x & \text{for } \alpha \neq 0, \\ x \log x + c x & \text{for } \alpha = 0. \end{cases}$$
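A quick numerical check of the first case (illustrative; the values $\alpha = 2$, $c = 5$ and the sample point are assumptions, not from the text):

```python
# Hedged sketch: verify y = x**(alpha+1)/alpha + c*x satisfies
# y' - y/x = x**alpha for assumed values alpha = 2, c = 5.
alpha, c = 2.0, 5.0

def y(x):
    return x**(alpha + 1) / alpha + c * x

x, h = 1.3, 1e-6
dy = (y(x + h) - y(x - h)) / (2 * h)    # numerical derivative
print(dy - y(x) / x - x**alpha)         # approximately 0
```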
Initial Conditions
Solution 16.11
1.
$$y' + x y = x^{2n+1}, \qquad y(1) = 1, \qquad n \in \mathbb{Z}$$
The integrating factor is
$$I(x) = e^{\int x\,dx} = e^{x^2/2}.$$
We multiply by the integrating factor and integrate. Since the initial condition is given at x = 1, we will take the lower bound of integration to be that point.
$$\frac{d}{dx}\left[e^{x^2/2} y\right] = x^{2n+1} e^{x^2/2}$$
$$y = e^{-x^2/2} \int_1^x \xi^{2n+1} e^{\xi^2/2}\,d\xi + c\,e^{-x^2/2}$$
We choose the constant of integration to satisfy the initial condition.
$$y = e^{-x^2/2} \int_1^x \xi^{2n+1} e^{\xi^2/2}\,d\xi + e^{(1 - x^2)/2}$$
If $n \geq 0$ then we can use integration by parts to write the integral as a sum of terms. If $n < 0$ we can write the integral in terms of the exponential integral function. However, the integral form above is as nice as any other and we leave the answer in that form.
2.
$$\frac{dy}{dx} - 2 x y(x) = 1, \qquad y(0) = 1.$$
The integrating factor is
$$I(x) = e^{\int -2x\,dx} = e^{-x^2}.$$
$$\frac{d}{dx}\left[e^{-x^2} y\right] = e^{-x^2}$$
$$y = e^{x^2} \int_0^x e^{-\xi^2}\,d\xi + c\,e^{x^2}$$
We choose the constant of integration to satisfy the initial condition.
$$y = e^{x^2}\left(1 + \int_0^x e^{-\xi^2}\,d\xi\right)$$
We can write the answer in terms of the Error function,
$$\operatorname{erf}(x) \equiv \frac{2}{\sqrt{\pi}} \int_0^x e^{-\xi^2}\,d\xi.$$
$$y = e^{x^2}\left(1 + \frac{\sqrt{\pi}}{2} \operatorname{erf}(x)\right)$$
Solution 16.12
The integrating factor is,
$$I(x) = e^{\int \alpha\,dx} = e^{\alpha x}.$$
$$\frac{d}{dx}\left(e^{\alpha x} y\right) = \beta e^{(\alpha - \lambda) x}$$
$$y = e^{-\alpha x} \int \beta e^{(\alpha - \lambda) x}\,dx + c\,e^{-\alpha x}$$
For $\alpha \neq \lambda$, the solution is
$$y = e^{-\alpha x} \frac{\beta e^{(\alpha - \lambda) x}}{\alpha - \lambda} + c\,e^{-\alpha x}$$
$$y = \frac{\beta}{\alpha - \lambda} e^{-\lambda x} + c\,e^{-\alpha x}$$
Clearly the solution vanishes as $x \to \infty$.
For $\alpha = \lambda$, the solution is
$$y = \beta x\,e^{-\alpha x} + c\,e^{-\alpha x}$$
$$y = (c + \beta x)\,e^{-\alpha x}$$
We use L'Hospital's rule to show that the solution vanishes as $x \to \infty$.
$$\lim_{x \to \infty} \frac{c + \beta x}{e^{\alpha x}} = \lim_{x \to \infty} \frac{\beta}{\alpha e^{\alpha x}} = 0$$
For $\beta = \lambda = 1$, the solution is
$$y = \begin{cases} \frac{1}{\alpha - 1} e^{-x} + c\,e^{-\alpha x} & \text{for } \alpha \neq 1, \\ (c + x)\,e^{-x} & \text{for } \alpha = 1. \end{cases}$$
The solution which satisfies the initial condition is
$$y = \begin{cases} \frac{1}{\alpha - 1}\left(e^{-x} + (\alpha - 2)\,e^{-\alpha x}\right) & \text{for } \alpha \neq 1, \\ (1 + x)\,e^{-x} & \text{for } \alpha = 1. \end{cases}$$
In Figure 16.7 the solution is plotted for $\alpha = 1/16, 1/8, \ldots, 16$.
Consider the solution in the limit as $\alpha \to 0$.
$$\lim_{\alpha \to 0} y(x) = \lim_{\alpha \to 0} \frac{1}{\alpha - 1}\left(e^{-x} + (\alpha - 2)\,e^{-\alpha x}\right) = 2 - e^{-x}$$
Figure 16.7: The Solution for a Range of $\alpha$
In the limit as $\alpha \to \infty$ we have,
$$\lim_{\alpha \to \infty} y(x) = \lim_{\alpha \to \infty} \frac{1}{\alpha - 1}\left(e^{-x} + (\alpha - 2)\,e^{-\alpha x}\right) = \lim_{\alpha \to \infty} \frac{\alpha - 2}{\alpha - 1}\,e^{-\alpha x} = \begin{cases} 1 & \text{for } x = 0, \\ 0 & \text{for } x > 0. \end{cases}$$
This behavior is shown in Figure 16.8. The first graph plots the solutions for $\alpha = 1/128, 1/64, \ldots, 1$. The second graph plots the solutions for $\alpha = 1, 2, \ldots, 128$.
Figure 16.8: The Solution as $\alpha \to 0$ and $\alpha \to \infty$
Well-Posed Problems
Solution 16.13
First we write the differential equation in the standard form.
$$\frac{dy}{dt} + \frac{A}{t} y = \frac{1}{t} + t$$
The integrating factor is
$$I(t) = e^{\int A/t\,dt} = e^{A \log t} = t^A$$
We multiply the differential equation by the integrating factor and integrate.
$$\frac{d}{dt}\left[t^A y\right] = t^{A-1} + t^{A+1}$$
$$t^A y = \begin{cases} \dfrac{t^A}{A} + \dfrac{t^{A+2}}{A+2} + c, & A \neq 0, -2 \\ \log t + \dfrac{1}{2} t^2 + c, & A = 0 \\ -\dfrac{1}{2} t^{-2} + \log t + c, & A = -2 \end{cases}$$
$$y = \begin{cases} \dfrac{1}{A} + \dfrac{t^2}{A+2} + c\,t^{-A}, & A \neq 0, -2 \\ \log t + \dfrac{1}{2} t^2 + c, & A = 0 \\ -\dfrac{1}{2} + t^2 \log t + c\,t^2, & A = -2 \end{cases}$$
For positive A, the solution is bounded at the origin only for c = 0. For A = 0, there are no bounded solutions. For negative A, the solution is bounded there for any value of c and thus we have a one-parameter family of solutions.
In summary, the solutions which are bounded at the origin are:
$$y = \begin{cases} \dfrac{1}{A} + \dfrac{t^2}{A+2}, & A > 0 \\ \dfrac{1}{A} + \dfrac{t^2}{A+2} + c\,t^{-A}, & A < 0, A \neq -2 \\ -\dfrac{1}{2} + t^2 \log t + c\,t^2, & A = -2 \end{cases}$$
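The bounded solution for a positive A can be spot-checked. This is an illustrative sketch; the choice A = 1 (giving $y = 1 + t^2/3$) and the sample point are assumptions, not from the text.

```python
# Hedged sketch: for A = 1 the bounded solution is y(t) = 1 + t**2/3;
# check that it satisfies t*y' + y = 1 + t**2.
def y(t):
    return 1.0 + t * t / 3.0

t, h = 0.9, 1e-6
dy = (y(t + h) - y(t - h)) / (2 * h)    # numerical derivative
print(t * dy + y(t) - (1 + t * t))      # approximately 0
```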
Equations in the Complex Plane
Solution 16.14
We substitute $w = \sum_{n=0}^{\infty} a_n z^n$ into the equation $\frac{dw}{dz} + \frac{1}{1-z} w = 0$.
$$\frac{d}{dz} \sum_{n=0}^{\infty} a_n z^n + \frac{1}{1 - z} \sum_{n=0}^{\infty} a_n z^n = 0$$
$$(1 - z) \sum_{n=1}^{\infty} n a_n z^{n-1} + \sum_{n=0}^{\infty} a_n z^n = 0$$
$$\sum_{n=0}^{\infty} (n + 1) a_{n+1} z^n - \sum_{n=0}^{\infty} n a_n z^n + \sum_{n=0}^{\infty} a_n z^n = 0$$
$$\sum_{n=0}^{\infty} \left((n + 1) a_{n+1} - (n - 1) a_n\right) z^n = 0$$
Equating powers of z to zero, we obtain the relation,
$$a_{n+1} = \frac{n - 1}{n + 1} a_n.$$
$a_0$ is arbitrary. We can compute the rest of the coefficients from the recurrence relation.
$$a_1 = \frac{-1}{1} a_0 = -a_0$$
$$a_2 = \frac{0}{2} a_1 = 0$$
We see that the coefficients are zero for $n \geq 2$. Thus the Taylor series expansion, (and the exact solution), is
$$w = a_0 (1 - z).$$
The radius of convergence of the series is infinite. The nearest singularity of $\frac{1}{1-z}$ is at z = 1. Thus we see the radius of convergence can be greater than the distance to the nearest singularity of the coefficient function, p(z).
Solution 16.15
1. Consider the equation $w' + \frac{\sin z}{z} w = 0$. The point z = 0 is the only point we need to examine in the finite plane. Since $\frac{\sin z}{z}$ has a removable singularity at z = 0, there are no singular points in the finite plane. The substitution $z = \frac{1}{\zeta}$ yields the equation
$$u' - \frac{\sin(1/\zeta)}{\zeta} u = 0.$$
Since $\frac{\sin(1/\zeta)}{\zeta}$ has an essential singularity at $\zeta = 0$, the point at infinity is an irregular singular point of the original differential equation.
2. Consider the equation $w' + \frac{1}{z - 3} w = 0$. Since $\frac{1}{z-3}$ has a simple pole at z = 3, the differential equation has a regular singular point there. Making the substitution $z = 1/\zeta$, $w(z) = u(\zeta)$,
$$u' - \frac{1}{\zeta^2 (1/\zeta - 3)} u = 0$$
$$u' - \frac{1}{\zeta (1 - 3\zeta)} u = 0.$$
Since this equation has a simple pole at $\zeta = 0$, the original equation has a regular singular point at infinity.
3. Consider the equation $w' + z^{1/2} w = 0$. There is an irregular singular point at z = 0. With the substitution $z = 1/\zeta$, $w(z) = u(\zeta)$,
$$u' - \frac{\zeta^{-1/2}}{\zeta^2} u = 0$$
$$u' - \zeta^{-5/2} u = 0.$$
We see that the point at infinity is also an irregular singular point of the original differential equation.
Solution 16.16
We start with the equation
$$w' + z^{-2} w = 0.$$
Substituting $w = z^\lambda \sum_{n=0}^{\infty} a_n z^n$, $a_0 \neq 0$ yields
$$\frac{d}{dz}\left[z^\lambda \sum_{n=0}^{\infty} a_n z^n\right] + z^{-2} z^\lambda \sum_{n=0}^{\infty} a_n z^n = 0$$
$$\lambda z^{\lambda - 1} \sum_{n=0}^{\infty} a_n z^n + z^\lambda \sum_{n=1}^{\infty} n a_n z^{n-1} + z^\lambda \sum_{n=0}^{\infty} a_n z^{n-2} = 0$$
The lowest power of z in the expansion is $z^{\lambda - 2}$. The coefficient of this term is $a_0$. Equating powers of z demands that $a_0 = 0$ which contradicts our initial assumption that it was nonzero. Thus we cannot find a $\lambda$ such that the solution can be expanded in the form,
$$w = z^\lambda \sum_{n=0}^{\infty} a_n z^n, \qquad a_0 \neq 0.$$
Chapter 17
First Order Systems of Differential Equations
We all agree that your theory is crazy, but is it crazy enough?
- Niels Bohr
17.1 Matrices and Jordan Canonical Form
Functions of Square Matrices. Consider a function f(x) with a Taylor series.
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n$$
We can define the function to take square matrices as arguments. The function of the square matrix A is defined in terms of the Taylor series.
$$f(A) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} A^n$$
(Note that this definition is usually not the most convenient method for computing a function of a matrix. Use the Jordan canonical form for that.)
Eigenvalues and Eigenvectors. Consider a square matrix A. A nonzero vector x is an eigenvector of the matrix with eigenvalue $\lambda$ if
$$A x = \lambda x.$$
Note that we can write this equation as
$$(A - \lambda I) x = 0.$$
This equation has solutions for nonzero x if and only if $A - \lambda I$ is singular, ($\det(A - \lambda I) = 0$). We define the characteristic polynomial of the matrix $\chi(\lambda)$ as this determinant.
$$\chi(\lambda) = \det(A - \lambda I)$$
The roots of the characteristic polynomial are the eigenvalues of the matrix. The eigenvectors of distinct eigenvalues are linearly independent. Thus if a matrix has distinct eigenvalues, the eigenvectors form a basis.
If $\lambda$ is a root of $\chi(\lambda)$ of multiplicity m then there are up to m linearly independent eigenvectors corresponding to that eigenvalue. That is, it has from 1 to m eigenvectors.
Diagonalizing Matrices. Consider an $n \times n$ matrix A that has a complete set of n linearly independent eigenvectors. A may or may not have distinct eigenvalues. Consider the matrix S with eigenvectors as columns.
$$S = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix}$$
A is diagonalized by the similarity transformation:
$$\Lambda = S^{-1} A S.$$
$\Lambda$ is a diagonal matrix with the eigenvalues of A as the diagonal elements. Furthermore, the $k^{\text{th}}$ diagonal element is $\lambda_k$, the eigenvalue corresponding to the eigenvector $x_k$.
Generalized Eigenvectors. A vector $x_k$ is a generalized eigenvector of rank k if
$$(A - \lambda I)^k x_k = 0 \quad \text{but} \quad (A - \lambda I)^{k-1} x_k \neq 0.$$
Eigenvectors are generalized eigenvectors of rank 1. An $n \times n$ matrix has n linearly independent generalized eigenvectors. A chain of generalized eigenvectors generated by the rank m generalized eigenvector $x_m$ is the set $\{x_1, x_2, \ldots, x_m\}$, where
$$x_k = (A - \lambda I) x_{k+1}, \quad \text{for } k = m-1, \ldots, 1.$$
Computing Generalized Eigenvectors. Let $\lambda$ be an eigenvalue of multiplicity m. Let n be the smallest integer such that
$$\operatorname{rank}\left(\operatorname{nullspace}\left((A - \lambda I)^n\right)\right) = m.$$
Let $N_k$ denote the number of generalized eigenvectors of rank k. These have the value:
$$N_k = \operatorname{rank}\left(\operatorname{nullspace}\left((A - \lambda I)^k\right)\right) - \operatorname{rank}\left(\operatorname{nullspace}\left((A - \lambda I)^{k-1}\right)\right).$$
One can compute the generalized eigenvectors of a matrix by looping through the following three steps until all the $N_k$ are zero:
1. Select the largest k for which $N_k$ is positive. Find a generalized eigenvector $x_k$ of rank k which is linearly independent of all the generalized eigenvectors found thus far.
2. From $x_k$ generate the chain of eigenvectors $x_1, x_2, \ldots, x_k$. Add this chain to the known generalized eigenvectors.
3. Decrement each positive $N_k$ by one.
Example 17.1.1 Consider the matrix
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}.$$
The characteristic polynomial of the matrix is
$$\chi(\lambda) = \begin{vmatrix} 1-\lambda & 1 & 1 \\ 2 & 1-\lambda & -1 \\ -3 & 2 & 4-\lambda \end{vmatrix}$$
$$= (1-\lambda)^2 (4-\lambda) + 3 + 4 + 3(1-\lambda) - 2(4-\lambda) + 2(1-\lambda)$$
$$= -(\lambda - 2)^3.$$
Thus we see that $\lambda = 2$ is an eigenvalue of multiplicity 3. $A - 2I$ is
$$A - 2I = \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix}$$
The rank of the nullspace of $A - 2I$ is less than 3.
$$(A - 2I)^2 = \begin{pmatrix} 0 & 0 & 0 \\ -1 & 1 & 1 \\ 1 & -1 & -1 \end{pmatrix}$$
The rank of $\operatorname{nullspace}((A - 2I)^2)$ is less than 3 as well, so we have to take one more step.
$$(A - 2I)^3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
The rank of $\operatorname{nullspace}((A - 2I)^3)$ is 3. Thus there are generalized eigenvectors of ranks 1, 2 and 3. The generalized eigenvector of rank 3 satisfies
$$(A - 2I)^3 x_3 = 0,$$
which any vector satisfies. We choose the solution
$$x_3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.$$
Now to compute the chain generated by $x_3$.
$$x_2 = (A - 2I) x_3 = \begin{pmatrix} -1 \\ 2 \\ -3 \end{pmatrix}$$
$$x_1 = (A - 2I) x_2 = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}$$
Thus a set of generalized eigenvectors corresponding to the eigenvalue $\lambda = 2$ are
$$x_1 = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}, \quad x_2 = \begin{pmatrix} -1 \\ 2 \\ -3 \end{pmatrix}, \quad x_3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}.$$
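The chain computation above can be reproduced with plain list arithmetic. This is an illustrative sketch, not part of the text; it applies $B = A - 2I$ repeatedly to the chosen rank-3 vector.

```python
# Hedged sketch: reproduce the chain from Example 17.1.1.
# B = A - 2I for A = [[1, 1, 1], [2, 1, -1], [-3, 2, 4]].
B = [[-1, 1, 1],
     [2, -1, -1],
     [-3, 2, 2]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

x3 = [1, 0, 0]
x2 = matvec(B, x3)             # the rank-2 generalized eigenvector
x1 = matvec(B, x2)             # the true eigenvector
print(x2, x1, matvec(B, x1))   # (A - 2I) x1 = 0
```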
Jordan Block. A Jordan block is a square matrix which has the constant $\lambda$ on the diagonal and ones on the first super-diagonal:
$$\begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda & 1 & \cdots & 0 & 0 \\ 0 & 0 & \lambda & \ddots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}$$
Jordan Canonical Form. A matrix J is in Jordan canonical form if all the elements are zero except for Jordan blocks $J_k$ along the diagonal.
$$J = \begin{pmatrix} J_1 & 0 & \cdots & 0 & 0 \\ 0 & J_2 & \ddots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ddots & J_{n-1} & 0 \\ 0 & 0 & \cdots & 0 & J_n \end{pmatrix}$$
The Jordan canonical form of a matrix is obtained with the similarity transformation:
$$J = S^{-1} A S,$$
where S is the matrix of the generalized eigenvectors of A and the generalized eigenvectors are grouped in chains.
Example 17.1.2 Again consider the matrix
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}.$$
Since $\lambda = 2$ is an eigenvalue of multiplicity 3, the Jordan canonical form of the matrix is
$$J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}.$$
In Example 17.1.1 we found the generalized eigenvectors of A. We define the matrix with generalized eigenvectors as columns:
$$S = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix}.$$
We can verify that $J = S^{-1} A S$.
$$J = S^{-1} A S = \begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix} \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}$$
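The similarity transformation can be checked with integer matrix products. An illustrative sketch, not part of the text:

```python
# Hedged sketch: verify J = S^{-1} A S for Example 17.1.2 with nested lists.
A = [[1, 1, 1], [2, 1, -1], [-3, 2, 4]]
S = [[0, -1, 1], [-1, 2, 0], [1, -3, 0]]
Sinv = [[0, -3, -2], [0, -1, -1], [1, -1, -1]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

J = matmul(Sinv, matmul(A, S))
print(J)   # [[2, 1, 0], [0, 2, 1], [0, 0, 2]]
```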
Functions of Matrices in Jordan Canonical Form. The function of an $n \times n$ Jordan block is the upper-triangular matrix:
$$f(J_k) = \begin{pmatrix} f(\lambda) & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} & \cdots & \frac{f^{(n-2)}(\lambda)}{(n-2)!} & \frac{f^{(n-1)}(\lambda)}{(n-1)!} \\ 0 & f(\lambda) & \frac{f'(\lambda)}{1!} & \cdots & \frac{f^{(n-3)}(\lambda)}{(n-3)!} & \frac{f^{(n-2)}(\lambda)}{(n-2)!} \\ 0 & 0 & f(\lambda) & \ddots & \frac{f^{(n-4)}(\lambda)}{(n-4)!} & \frac{f^{(n-3)}(\lambda)}{(n-3)!} \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & f(\lambda) & \frac{f'(\lambda)}{1!} \\ 0 & 0 & 0 & \cdots & 0 & f(\lambda) \end{pmatrix}$$
The function of a matrix in Jordan canonical form is
$$f(J) = \begin{pmatrix} f(J_1) & 0 & \cdots & 0 & 0 \\ 0 & f(J_2) & \ddots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ddots & f(J_{n-1}) & 0 \\ 0 & 0 & \cdots & 0 & f(J_n) \end{pmatrix}$$
The Jordan canonical form of a matrix satisfies:
$$f(J) = S^{-1} f(A) S,$$
where S is the matrix of the generalized eigenvectors of A. This gives us a convenient method for computing functions of matrices.
Example 17.1.3 Consider the matrix exponential function $e^A$ for our old friend:
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}.$$
In Example 17.1.2 we showed that the Jordan canonical form of the matrix is
$$J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}.$$
Since all the derivatives of $e^\lambda$ are just $e^\lambda$, it is especially easy to compute $e^J$.
$$e^J = \begin{pmatrix} e^2 & e^2 & e^2/2 \\ 0 & e^2 & e^2 \\ 0 & 0 & e^2 \end{pmatrix}$$
We find $e^A$ with a similarity transformation of $e^J$. We use the matrix of generalized eigenvectors found in Example 17.1.2.
$$e^A = S e^J S^{-1}$$
$$e^A = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix} \begin{pmatrix} e^2 & e^2 & e^2/2 \\ 0 & e^2 & e^2 \\ 0 & 0 & e^2 \end{pmatrix} \begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix}$$
$$e^A = \begin{pmatrix} 0 & 2 & 2 \\ 3 & 1 & -1 \\ -5 & 3 & 5 \end{pmatrix} \frac{e^2}{2}$$
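The Jordan-form result can be cross-checked against the defining Taylor series $e^A = \sum_n A^n/n!$. An illustrative sketch, not part of the text; the truncation at 30 terms is an assumption that is more than enough here.

```python
import math

# Hedged sketch: compare S e^J S^{-1} from Example 17.1.3 with a truncated
# Taylor series applied directly to A.
A = [[1, 1, 1], [2, 1, -1], [-3, 2, 4]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(M, terms=30):
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I = M^0
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = matmul(power, M)                       # M^k
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

E = expm(A)
e2 = math.exp(2.0)
closed = [[v * e2 / 2.0 for v in row]
          for row in [[0, 2, 2], [3, 1, -1], [-5, 3, 5]]]
max_err = max(abs(E[i][j] - closed[i][j]) for i in range(3) for j in range(3))
print(max_err)   # tiny: the two computations agree
```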
17.2 Systems of Differential Equations
The homogeneous differential equation
$$x'(t) = A x(t)$$
has the solution
$$x(t) = e^{A t} c$$
where c is a vector of constants. The solution subject to the initial condition, $x(t_0) = x_0$ is
$$x(t) = e^{A (t - t_0)} x_0.$$
The homogeneous differential equation
$$x'(t) = \frac{1}{t} A x(t)$$
has the solution
$$x(t) = t^A c \equiv e^{A \log t} c,$$
where c is a vector of constants. The solution subject to the initial condition, $x(t_0) = x_0$ is
$$x(t) = \left(\frac{t}{t_0}\right)^A x_0 \equiv e^{A \log(t/t_0)} x_0.$$
The inhomogeneous problem
$$x'(t) = A x(t) + f(t), \qquad x(t_0) = x_0$$
has the solution
$$x(t) = e^{A (t - t_0)} x_0 + e^{A t} \int_{t_0}^t e^{-A \tau} f(\tau)\,d\tau.$$
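The first solution formula can be illustrated on a small concrete system. This is a hedged sketch with an assumed example not taken from the text: for $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ the matrix exponential is the rotation $e^{At} = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}$, so $x(t) = e^{At} c$ should satisfy $x' = A x$.

```python
import math

# Hedged sketch: x(t) = e^{At} c for the assumed matrix A = [[0,1],[-1,0]],
# whose exponential is a rotation; check x' = A x numerically.
def x(t, c):
    return [math.cos(t) * c[0] + math.sin(t) * c[1],
            -math.sin(t) * c[0] + math.cos(t) * c[1]]

c, t, h = [1.0, 2.0], 0.8, 1e-6
xt = x(t, c)
dx = [(a - b) / (2 * h) for a, b in zip(x(t + h, c), x(t - h, c))]
# For this A, (A x)_1 = x_2 and (A x)_2 = -x_1.
print(dx[0] - xt[1], dx[1] + xt[0])   # both approximately zero
```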
Example 17.2.1 Consider the system
$$\frac{dx}{dt} = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix} x.$$
The general solution of the system of differential equations is
$$x(t) = e^{A t} c.$$
In Example 17.1.3 we found $e^A$. $A t$ is just a constant times A. The eigenvalues of $A t$ are $\lambda_k t$ where $\lambda_k$ are the eigenvalues of A. The generalized eigenvectors of $A t$ are the same as those of A.
Consider $e^{J t}$. The derivatives of $f(\lambda) = e^{\lambda t}$ are $f'(\lambda) = t\,e^{\lambda t}$ and $f''(\lambda) = t^2\,e^{\lambda t}$. Thus we have
$$e^{J t} = \begin{pmatrix} e^{2t} & t\,e^{2t} & \frac{t^2}{2} e^{2t} \\ 0 & e^{2t} & t\,e^{2t} \\ 0 & 0 & e^{2t} \end{pmatrix}$$
$$e^{J t} = \begin{pmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} e^{2t}$$
We find $e^{A t}$ with a similarity transformation.
$$e^{A t} = S e^{J t} S^{-1}$$
$$e^{A t} = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix} \begin{pmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} e^{2t} \begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix}$$
$$e^{A t} = \begin{pmatrix} 1 - t & t & t \\ 2t - t^2/2 & 1 - t + t^2/2 & -t + t^2/2 \\ -3t + t^2/2 & 2t - t^2/2 & 1 + 2t - t^2/2 \end{pmatrix} e^{2t}$$
The solution of the system of differential equations is
$$x(t) = \left(c_1 \begin{pmatrix} 1 - t \\ 2t - t^2/2 \\ -3t + t^2/2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 1 - t + t^2/2 \\ 2t - t^2/2 \end{pmatrix} + c_3 \begin{pmatrix} t \\ -t + t^2/2 \\ 1 + 2t - t^2/2 \end{pmatrix}\right) e^{2t}$$
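One column of the solution can be spot-checked against the system. An illustrative sketch, not from the text; the sample time $t = 0.3$ is an arbitrary choice.

```python
import math

# Hedged sketch: check that the first solution column from Example 17.2.1,
# x(t) = (1 - t, 2t - t**2/2, -3t + t**2/2) * exp(2t), satisfies x' = A x.
A = [[1, 1, 1], [2, 1, -1], [-3, 2, 4]]

def x(t):
    e = math.exp(2 * t)
    return [(1 - t) * e, (2 * t - t * t / 2) * e, (-3 * t + t * t / 2) * e]

t, h = 0.3, 1e-6
xt = x(t)
dx = [(a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h))]
Ax = [sum(A[i][j] * xt[j] for j in range(3)) for i in range(3)]
resid = max(abs(d - v) for d, v in zip(dx, Ax))
print(resid)   # small: the column solves the system
```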
Example 17.2.2 Consider the Euler equation system
$$\frac{dx}{dt} = \frac{1}{t} A x \equiv \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} x.$$
The solution is $x(t) = t^A c$. Note that A is almost in Jordan canonical form. It has a one on the sub-diagonal instead of the super-diagonal. It is clear that a function of A is defined
$$f(A) = \begin{pmatrix} f(1) & 0 \\ f'(1) & f(1) \end{pmatrix}.$$
The function $f(\lambda) = t^\lambda$ has the derivative $f'(\lambda) = t^\lambda \log t$. Thus the solution of the system is
$$x(t) = \begin{pmatrix} t & 0 \\ t \log t & t \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = c_1 \begin{pmatrix} t \\ t \log t \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ t \end{pmatrix}$$
Example 17.2.3 Consider an inhomogeneous system of differential equations.
$$\frac{dx}{dt} = A x + f(t) = \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} x + \begin{pmatrix} t^{-3} \\ t^{-2} \end{pmatrix}, \qquad t > 0.$$
The general solution is
$$x(t) = e^{A t} c + e^{A t} \int e^{-A t} f(t)\,dt.$$
First we find homogeneous solutions. The characteristic equation for the matrix is
$$\chi(\lambda) = \begin{vmatrix} 4 - \lambda & -2 \\ 8 & -4 - \lambda \end{vmatrix} = \lambda^2 = 0$$
$\lambda = 0$ is an eigenvalue of multiplicity 2. Thus the Jordan canonical form of the matrix is
$$J = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Since $\operatorname{rank}(\operatorname{nullspace}(A - 0 I)) = 1$ there is only one eigenvector. A generalized eigenvector of rank 2 satisfies
$$(A - 0 I)^2 x_2 = 0$$
We choose
$$x_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$
Now we generate the chain from $x_2$.
$$x_1 = (A - 0 I) x_2 = \begin{pmatrix} 4 \\ 8 \end{pmatrix}$$
We define the matrix of generalized eigenvectors S.
$$S = \begin{pmatrix} 4 & 1 \\ 8 & 0 \end{pmatrix}$$
The derivative of $f(\lambda) = e^{\lambda t}$ is $f'(\lambda) = t\,e^{\lambda t}$. Thus
$$e^{J t} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$$
The homogeneous solution of the differential equation system is $x_h = e^{A t} c$ where
$$e^{A t} = S e^{J t} S^{-1} = \begin{pmatrix} 4 & 1 \\ 8 & 0 \end{pmatrix} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1/8 \\ 1 & -1/2 \end{pmatrix}$$
$$e^{A t} = \begin{pmatrix} 1 + 4t & -2t \\ 8t & 1 - 4t \end{pmatrix}$$
The general solution of the inhomogeneous system of equations is
$$x(t) = e^{A t} c + e^{A t} \int e^{-A t} f(t)\,dt$$
$$x(t) = \begin{pmatrix} 1 + 4t & -2t \\ 8t & 1 - 4t \end{pmatrix} c + \begin{pmatrix} 1 + 4t & -2t \\ 8t & 1 - 4t \end{pmatrix} \int \begin{pmatrix} 1 - 4t & 2t \\ -8t & 1 + 4t \end{pmatrix} \begin{pmatrix} t^{-3} \\ t^{-2} \end{pmatrix} dt$$
$$x(t) = c_1 \begin{pmatrix} 1 + 4t \\ 8t \end{pmatrix} + c_2 \begin{pmatrix} -2t \\ 1 - 4t \end{pmatrix} + \begin{pmatrix} 2 - 2 \log t + \frac{6}{t} - \frac{1}{2t^2} \\ 4 - 4 \log t + \frac{13}{t} \end{pmatrix}$$
We can tidy up the answer a little bit. First we take linear combinations of the homogeneous solutions to obtain a simpler form.
$$x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} 2t \\ 4t - 1 \end{pmatrix} + \begin{pmatrix} 2 - 2 \log t + \frac{6}{t} - \frac{1}{2t^2} \\ 4 - 4 \log t + \frac{13}{t} \end{pmatrix}$$
Then we subtract 2 times the first homogeneous solution from the particular solution.
$$x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} 2t \\ 4t - 1 \end{pmatrix} + \begin{pmatrix} -2 \log t + \frac{6}{t} - \frac{1}{2t^2} \\ -4 \log t + \frac{13}{t} \end{pmatrix}$$
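A useful structural fact in this example is that A is nilpotent, so the exponential series terminates. An illustrative sketch, not part of the text: since $A^2 = 0$, $e^{At} = I + At$ exactly, and $A\,e^{At} = A$.

```python
# Hedged sketch: A = [[4,-2],[8,-4]] is nilpotent, so e^{At} = I + A t.
A = [[4, -2], [8, -4]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
t = 0.37
eAt = [[1 + 4 * t, -2 * t], [8 * t, 1 - 4 * t]]   # I + A t
print(A2, matmul(A, eAt))   # A^2 = 0, and A e^{At} = A
```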
17.3 Exercises
Exercise 17.1 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as $t \to \infty$.
$$x' = A x \equiv \begin{pmatrix} -2 & 1 \\ -5 & 4 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 3 \end{pmatrix}$$
Hint, Solution
Exercise 17.2 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as $t \to \infty$.
$$x' = A x \equiv \begin{pmatrix} 1 & 1 & 2 \\ 0 & 2 & 2 \\ -1 & 1 & 3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}$$
Hint, Solution
Exercise 17.3 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as $t \to \infty$.
$$x' = A x \equiv \begin{pmatrix} 1 & -5 \\ 1 & -3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$
Hint, Solution
Exercise 17.4 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as $t \to \infty$.
$$x' = A x \equiv \begin{pmatrix} -3 & 0 & 2 \\ 1 & -1 & 0 \\ -2 & -1 & 0 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$$
Hint, Solution
Exercise 17.5 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as $t \to \infty$.
$$x' = A x \equiv \begin{pmatrix} 1 & -4 \\ 4 & -7 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 3 \\ 2 \end{pmatrix}$$
Hint, Solution
Exercise 17.6 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as $t \to \infty$.
$$x' = A x \equiv \begin{pmatrix} 1 & 0 & 0 \\ -4 & 1 & 0 \\ 3 & 6 & 2 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} -1 \\ 2 \\ -30 \end{pmatrix}$$
Hint, Solution
Exercise 17.7
1. Consider the system
$$x' = A x = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix} x. \tag{17.1}$$
(a) Show that $\lambda = 2$ is an eigenvalue of multiplicity 3 of the coefficient matrix A, and that there is only one corresponding eigenvector, namely
$$\xi^{(1)} = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}.$$
(b) Using the information in part (a), write down one solution $x^{(1)}(t)$ of the system (17.1). There is no other solution of a purely exponential form $x = \xi\,e^{\lambda t}$.
(c) To find a second solution use the form $x = \xi t\,e^{2t} + \eta\,e^{2t}$, and find appropriate vectors $\xi$ and $\eta$. This gives a solution of the system (17.1) which is independent of the one obtained in part (b).
(d) To find a third linearly independent solution use the form $x = \xi (t^2/2)\,e^{2t} + \eta t\,e^{2t} + \zeta\,e^{2t}$. Show that $\xi$, $\eta$ and $\zeta$ satisfy the equations
$$(A - 2I)\xi = 0, \qquad (A - 2I)\eta = \xi, \qquad (A - 2I)\zeta = \eta.$$
The first two equations can be taken to coincide with those obtained in part (c). Solve the third equation, and write down a third independent solution of the system (17.1).
2. Consider the system
$$x' = A x = \begin{pmatrix} 5 & -3 & -2 \\ 8 & -5 & -4 \\ -4 & 3 & 3 \end{pmatrix} x. \tag{17.2}$$
(a) Show that $\lambda = 1$ is an eigenvalue of multiplicity 3 of the coefficient matrix A, and that there are only two linearly independent eigenvectors, which we may take as
$$\xi^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \qquad \xi^{(2)} = \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix}$$
Find two independent solutions of equation (17.2).
(b) To find a third solution use the form $x = \xi t\,e^{t} + \eta\,e^{t}$; then show that $\xi$ and $\eta$ must satisfy
$$(A - I)\xi = 0, \qquad (A - I)\eta = \xi.$$
Show that the most general solution of the first of these equations is $\xi = c_1 \xi^{(1)} + c_2 \xi^{(2)}$, where $c_1$ and $c_2$ are arbitrary constants. Show that, in order to solve the second of these equations it is necessary to take $c_1 = c_2$. Obtain such a vector $\eta$, and use it to obtain a third independent solution of the system (17.2).
Hint, Solution
Exercise 17.8 (mathematica/ode/systems/systems.nb)
Consider the system of ODEs
\[ \frac{d\mathbf{x}}{dt} = A\mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0, \]
where A is the constant 3 \times 3 matrix
\[ A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix}. \]
1. Find the eigenvalues and associated eigenvectors of A. [HINT: notice that \lambda = -1 is a root of the characteristic polynomial of A.]
2. Use the results from part (a) to construct e^{At} and therefore the solution to the initial value problem above.
3. Use the results of part (a) to find the general solution to
\[ \frac{d\mathbf{x}}{dt} = \frac{1}{t} A\mathbf{x}. \]
Hint, Solution
Exercise 17.9 (mathematica/ode/systems/systems.nb)
1. Find the general solution to
\[ \frac{d\mathbf{x}}{dt} = A\mathbf{x} \]
where
\[ A = \begin{pmatrix} 2 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & -1 & 3 \end{pmatrix}. \]
2. Solve
\[ \frac{d\mathbf{x}}{dt} = A\mathbf{x} + \mathbf{g}(t), \qquad \mathbf{x}(0) = \mathbf{0}, \]
using A from part (a).
Hint, Solution
Exercise 17.10
Let A be an n \times n matrix of constants. The system
\[ \frac{d\mathbf{x}}{dt} = \frac{1}{t} A\mathbf{x}, \tag{17.3} \]
is analogous to the Euler equation.
1. Verify that when A is a 2 \times 2 constant matrix, elimination of (17.3) yields a second order Euler differential equation.
2. Now assume that A is an n \times n matrix of constants. Show that this system, in analogy with the Euler equation, has solutions of the form \mathbf{x} = \mathbf{a}\,t^{\lambda} where \mathbf{a} is a constant vector, provided \mathbf{a} and \lambda satisfy certain conditions.
3. Based on your experience with the treatment of multiple roots in the solution of constant coefficient systems, what form will the general solution of (17.3) take if \lambda is a multiple eigenvalue in the eigenvalue problem derived in part (b)?
4. Verify your prediction by deriving the general solution for the system
\[ \frac{d\mathbf{x}}{dt} = \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \mathbf{x}. \]
Hint, Solution
Exercise 17.11
Use the matrix form of the method of variation of parameters to find the general solution of
\[ \frac{d\mathbf{x}}{dt} = \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \mathbf{x} + \begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix}, \qquad t > 0. \]
Hint, Solution
17.4 Hints
Hint 17.1
Hint 17.2
Hint 17.3
Hint 17.4
Hint 17.5
Hint 17.6
Hint 17.7
Hint 17.8
Hint 17.9
Hint 17.10
Hint 17.11
17.5 Solutions
Solution 17.1
We consider an initial value problem.
\[ \mathbf{x}' = A\mathbf{x} = \begin{pmatrix} -2 & 1 \\ -5 & 4 \end{pmatrix} \mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0 = \begin{pmatrix} 1 \\ 3 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues \lambda_1 = -1, \lambda_2 = 3. The corresponding eigenvectors are
\[ \mathbf{x}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 1 \\ 5 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ \mathbf{x} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + c_2 \begin{pmatrix} 1 \\ 5 \end{pmatrix} e^{3t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 1 & 1 \\ 1 & 5 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \qquad c_1 = \frac{1}{2}, \quad c_2 = \frac{1}{2} \]
The solution subject to the initial condition is
\[ \mathbf{x} = \frac{1}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + \frac{1}{2} \begin{pmatrix} 1 \\ 5 \end{pmatrix} e^{3t}. \]
For large t, the solution looks like
\[ \mathbf{x} \approx \frac{1}{2} \begin{pmatrix} 1 \\ 5 \end{pmatrix} e^{3t}. \]
[Figure 17.1: Homogeneous solutions in the phase plane.]
Both coordinates tend to infinity.
Figure 17.1 shows some homogeneous solutions in the phase plane.
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -1 & 0 \\ 0 & 3 \end{pmatrix}. \]
The solution of the initial value problem is \mathbf{x} = e^{At} \mathbf{x}_0.
\begin{align*}
\mathbf{x} &= e^{At} \mathbf{x}_0 = S e^{Jt} S^{-1} \mathbf{x}_0 \\
&= \begin{pmatrix} 1 & 1 \\ 1 & 5 \end{pmatrix} \begin{pmatrix} e^{-t} & 0 \\ 0 & e^{3t} \end{pmatrix} \frac{1}{4} \begin{pmatrix} 5 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \end{pmatrix} \\
&= \frac{1}{2} \begin{pmatrix} e^{-t} + e^{3t} \\ e^{-t} + 5 e^{3t} \end{pmatrix}
\end{align*}
\[ \mathbf{x} = \frac{1}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + \frac{1}{2} \begin{pmatrix} 1 \\ 5 \end{pmatrix} e^{3t} \]
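As a quick sanity check (ours, not the text's; it assumes the matrix A = [[-2,1],[-5,4]] as reconstructed above), the solution of Solution 17.1 can be verified numerically: it should satisfy both the initial condition and x' = Ax.

```python
import math

# Verify x(t) = 1/2 (1,1) e^-t + 1/2 (1,5) e^3t against x' = A x, x(0) = (1,3).
A = [[-2.0, 1.0], [-5.0, 4.0]]

def x(t):
    return [0.5 * math.exp(-t) + 0.5 * math.exp(3 * t),
            0.5 * math.exp(-t) + 2.5 * math.exp(3 * t)]

def residual(t, h=1e-6):
    # central-difference derivative compared with A x(t)
    xp = [(a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h))]
    ax = [sum(A[i][j] * x(t)[j] for j in range(2)) for i in range(2)]
    return max(abs(a - b) for a, b in zip(xp, ax))

assert residual(0.7) < 1e-4
assert all(abs(a - b) < 1e-12 for a, b in zip(x(0.0), [1.0, 3.0]))
```

The same pattern (compare a finite-difference derivative with Ax) works for every initial value problem in this section.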
Solution 17.2
We consider an initial value problem.
\[ \mathbf{x}' = A\mathbf{x} = \begin{pmatrix} 1 & 1 & 2 \\ 0 & 2 & 2 \\ -1 & 1 & 3 \end{pmatrix} \mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0 = \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues \lambda_1 = 1, \lambda_2 = 2, \lambda_3 = 3. The corresponding eigenvectors are
\[ \mathbf{x}_1 = \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \qquad \mathbf{x}_3 = \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ \mathbf{x} = c_1 \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} e^{t} + c_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} e^{2t} + c_3 \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix} e^{3t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 0 & 1 & 2 \\ -2 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}, \qquad c_1 = 1, \quad c_2 = 2, \quad c_3 = 0 \]
The solution subject to the initial condition is
\[ \mathbf{x} = \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} e^{t} + 2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} e^{2t}. \]
As t \to \infty, all coordinates tend to infinity.
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}. \]
The solution of the initial value problem is \mathbf{x} = e^{At} \mathbf{x}_0.
\begin{align*}
\mathbf{x} &= e^{At} \mathbf{x}_0 = S e^{Jt} S^{-1} \mathbf{x}_0 \\
&= \begin{pmatrix} 0 & 1 & 2 \\ -2 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} e^{t} & 0 & 0 \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{3t} \end{pmatrix} \frac{1}{2} \begin{pmatrix} 1 & -1 & 0 \\ 4 & -2 & -4 \\ -1 & 1 & 2 \end{pmatrix} \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} \\
&= \begin{pmatrix} 2 e^{2t} \\ -2 e^{t} + 2 e^{2t} \\ e^{t} \end{pmatrix}
\end{align*}
\[ \mathbf{x} = \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} e^{t} + \begin{pmatrix} 2 \\ 2 \\ 0 \end{pmatrix} e^{2t}. \]
Solution 17.3
We consider an initial value problem.
\[ \mathbf{x}' = A\mathbf{x} = \begin{pmatrix} 1 & -5 \\ 1 & -3 \end{pmatrix} \mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues \lambda_1 = -1 - i, \lambda_2 = -1 + i. The corresponding eigenvectors are
\[ \mathbf{x}_1 = \begin{pmatrix} 2 - i \\ 1 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 2 + i \\ 1 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ \mathbf{x} = c_1 \begin{pmatrix} 2 - i \\ 1 \end{pmatrix} e^{(-1-i)t} + c_2 \begin{pmatrix} 2 + i \\ 1 \end{pmatrix} e^{(-1+i)t}. \]
We can take the real and imaginary parts of either of these solutions to obtain real-valued solutions.
\[ \begin{pmatrix} 2 + i \\ 1 \end{pmatrix} e^{(-1+i)t} = \begin{pmatrix} 2\cos t - \sin t \\ \cos t \end{pmatrix} e^{-t} + i \begin{pmatrix} \cos t + 2\sin t \\ \sin t \end{pmatrix} e^{-t} \]
\[ \mathbf{x} = c_1 \begin{pmatrix} 2\cos t - \sin t \\ \cos t \end{pmatrix} e^{-t} + c_2 \begin{pmatrix} \cos t + 2\sin t \\ \sin t \end{pmatrix} e^{-t} \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad c_1 = 1, \quad c_2 = -1 \]
The solution subject to the initial condition is
\[ \mathbf{x} = \begin{pmatrix} \cos t - 3\sin t \\ \cos t - \sin t \end{pmatrix} e^{-t}. \]
Plotted in the phase plane, the solution spirals in to the origin as t increases. Both coordinates tend to zero as t \to \infty.
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -1 - i & 0 \\ 0 & -1 + i \end{pmatrix}. \]
The solution of the initial value problem is \mathbf{x} = e^{At} \mathbf{x}_0.
\begin{align*}
\mathbf{x} &= e^{At} \mathbf{x}_0 = S e^{Jt} S^{-1} \mathbf{x}_0 \\
&= \begin{pmatrix} 2 - i & 2 + i \\ 1 & 1 \end{pmatrix} \begin{pmatrix} e^{(-1-i)t} & 0 \\ 0 & e^{(-1+i)t} \end{pmatrix} \frac{1}{2} \begin{pmatrix} i & 1 - 2i \\ -i & 1 + 2i \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \\
&= \begin{pmatrix} (\cos t - 3\sin t)\,e^{-t} \\ (\cos t - \sin t)\,e^{-t} \end{pmatrix}
\end{align*}
\[ \mathbf{x} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} \cos t - \begin{pmatrix} 3 \\ 1 \end{pmatrix} e^{-t} \sin t \]
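A short check (ours, not the text's; it assumes the reconstructed matrix A = [[1,-5],[1,-3]]): the complex eigenpairs above satisfy Av = λv, and the real solution meets the initial condition.

```python
import math

# Verify the eigenpairs and the real-valued solution of Solution 17.3.
A = [[1, -5], [1, -3]]
for lam, v in [(-1 - 1j, (2 - 1j, 1)), (-1 + 1j, (2 + 1j, 1))]:
    Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
    assert all(abs(a - lam * b) < 1e-12 for a, b in zip(Av, v))

def x(t):
    # real-valued solution found above: x = (cos t - 3 sin t, cos t - sin t) e^-t
    return [(math.cos(t) - 3 * math.sin(t)) * math.exp(-t),
            (math.cos(t) - math.sin(t)) * math.exp(-t)]

assert x(0.0) == [1.0, 1.0]
```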
Solution 17.4
We consider an initial value problem.
\[ \mathbf{x}' = A\mathbf{x} = \begin{pmatrix} -3 & 0 & 2 \\ 1 & -1 & 0 \\ -2 & -1 & 0 \end{pmatrix} \mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues \lambda_1 = -2, \lambda_2 = -1 - i\sqrt{2}, \lambda_3 = -1 + i\sqrt{2}. The corresponding eigenvectors are
\[ \mathbf{x}_1 = \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 2 + i\sqrt{2} \\ -1 + i\sqrt{2} \\ 3 \end{pmatrix}, \qquad \mathbf{x}_3 = \begin{pmatrix} 2 - i\sqrt{2} \\ -1 - i\sqrt{2} \\ 3 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ \mathbf{x} = c_1 \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} e^{-2t} + c_2 \begin{pmatrix} 2 + i\sqrt{2} \\ -1 + i\sqrt{2} \\ 3 \end{pmatrix} e^{(-1 - i\sqrt{2})t} + c_3 \begin{pmatrix} 2 - i\sqrt{2} \\ -1 - i\sqrt{2} \\ 3 \end{pmatrix} e^{(-1 + i\sqrt{2})t}. \]
We can take the real and imaginary parts of the second or third solution to obtain two real-valued solutions.
\[ \begin{pmatrix} 2 + i\sqrt{2} \\ -1 + i\sqrt{2} \\ 3 \end{pmatrix} e^{(-1 - i\sqrt{2})t} = \begin{pmatrix} 2\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ 3\cos(\sqrt{2}t) \end{pmatrix} e^{-t} + i \begin{pmatrix} \sqrt{2}\cos(\sqrt{2}t) - 2\sin(\sqrt{2}t) \\ \sqrt{2}\cos(\sqrt{2}t) + \sin(\sqrt{2}t) \\ -3\sin(\sqrt{2}t) \end{pmatrix} e^{-t} \]
\[ \mathbf{x} = c_1 \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} e^{-2t} + c_2 \begin{pmatrix} 2\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ 3\cos(\sqrt{2}t) \end{pmatrix} e^{-t} + c_3 \begin{pmatrix} \sqrt{2}\cos(\sqrt{2}t) - 2\sin(\sqrt{2}t) \\ \sqrt{2}\cos(\sqrt{2}t) + \sin(\sqrt{2}t) \\ -3\sin(\sqrt{2}t) \end{pmatrix} e^{-t} \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 2 & 2 & \sqrt{2} \\ -2 & -1 & \sqrt{2} \\ 1 & 3 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \qquad c_1 = \frac{1}{3}, \quad c_2 = -\frac{1}{9}, \quad c_3 = \frac{5}{9\sqrt{2}} \]
The solution subject to the initial condition is
\[ \mathbf{x} = \frac{1}{3} \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} e^{-2t} + \frac{1}{6} \begin{pmatrix} 2\cos(\sqrt{2}t) - 4\sqrt{2}\sin(\sqrt{2}t) \\ 4\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -2\cos(\sqrt{2}t) - 5\sqrt{2}\sin(\sqrt{2}t) \end{pmatrix} e^{-t}. \]
As t \to \infty, all coordinates tend to zero. Plotted in the phase plane, the solution would spiral in to the origin.
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -2 & 0 & 0 \\ 0 & -1 - i\sqrt{2} & 0 \\ 0 & 0 & -1 + i\sqrt{2} \end{pmatrix}. \]
The solution of the initial value problem is \mathbf{x} = e^{At} \mathbf{x}_0.
\begin{align*}
\mathbf{x} &= e^{At} \mathbf{x}_0 = S e^{Jt} S^{-1} \mathbf{x}_0 \\
&= \frac{1}{3} \begin{pmatrix} 6 & 2 + i\sqrt{2} & 2 - i\sqrt{2} \\ -6 & -1 + i\sqrt{2} & -1 - i\sqrt{2} \\ 3 & 3 & 3 \end{pmatrix} \begin{pmatrix} e^{-2t} & 0 & 0 \\ 0 & e^{(-1 - i\sqrt{2})t} & 0 \\ 0 & 0 & e^{(-1 + i\sqrt{2})t} \end{pmatrix} \frac{1}{6} \begin{pmatrix} 2 & -2 & -2 \\ -1 - i5\sqrt{2}/2 & 1 - i2\sqrt{2} & 4 + i\sqrt{2} \\ -1 + i5\sqrt{2}/2 & 1 + i2\sqrt{2} & 4 - i\sqrt{2} \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}
\end{align*}
\[ \mathbf{x} = \frac{1}{3} \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} e^{-2t} + \frac{1}{6} \begin{pmatrix} 2\cos(\sqrt{2}t) - 4\sqrt{2}\sin(\sqrt{2}t) \\ 4\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -2\cos(\sqrt{2}t) - 5\sqrt{2}\sin(\sqrt{2}t) \end{pmatrix} e^{-t}. \]
Solution 17.5
We consider an initial value problem.
\[ \mathbf{x}' = A\mathbf{x} = \begin{pmatrix} 1 & -4 \\ 4 & -7 \end{pmatrix} \mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0 = \begin{pmatrix} 3 \\ 2 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the double eigenvalue \lambda_1 = \lambda_2 = -3. There is only one corresponding eigenvector. We compute a chain of generalized eigenvectors.
\begin{align*}
(A + 3I)^2 \mathbf{x}_2 &= 0 \\
0\,\mathbf{x}_2 &= 0 \\
\mathbf{x}_2 &= \begin{pmatrix} 1 \\ 0 \end{pmatrix} \\
(A + 3I)\mathbf{x}_2 &= \mathbf{x}_1 \\
\mathbf{x}_1 &= \begin{pmatrix} 4 \\ 4 \end{pmatrix}
\end{align*}
The general solution of the system of differential equations is
\[ \mathbf{x} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-3t} + c_2 \left( \begin{pmatrix} 4 \\ 4 \end{pmatrix} t + \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right) e^{-3t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \qquad c_1 = 2, \quad c_2 = 1 \]
The solution subject to the initial condition is
\[ \mathbf{x} = \begin{pmatrix} 3 + 4t \\ 2 + 4t \end{pmatrix} e^{-3t}. \]
Both coordinates tend to zero as t \to \infty.
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -3 & 1 \\ 0 & -3 \end{pmatrix}. \]
The solution of the initial value problem is \mathbf{x} = e^{At} \mathbf{x}_0.
\begin{align*}
\mathbf{x} &= e^{At} \mathbf{x}_0 = S e^{Jt} S^{-1} \mathbf{x}_0 \\
&= \begin{pmatrix} 1 & 1/4 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} e^{-3t} & t e^{-3t} \\ 0 & e^{-3t} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 4 & -4 \end{pmatrix} \begin{pmatrix} 3 \\ 2 \end{pmatrix}
\end{align*}
\[ \mathbf{x} = \begin{pmatrix} 3 + 4t \\ 2 + 4t \end{pmatrix} e^{-3t}. \]
Solution 17.6
We consider an initial value problem.
\[ \mathbf{x}' = A\mathbf{x} = \begin{pmatrix} -1 & 0 & 0 \\ -4 & 1 & 0 \\ -3 & -6 & 2 \end{pmatrix} \mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0 = \begin{pmatrix} 1 \\ -2 \\ -30 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues \lambda_1 = -1, \lambda_2 = 1, \lambda_3 = 2. The corresponding eigenvectors are
\[ \mathbf{x}_1 = \begin{pmatrix} 1 \\ 2 \\ 5 \end{pmatrix}, \qquad \mathbf{x}_2 = \begin{pmatrix} 0 \\ 1 \\ 6 \end{pmatrix}, \qquad \mathbf{x}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ \mathbf{x} = c_1 \begin{pmatrix} 1 \\ 2 \\ 5 \end{pmatrix} e^{-t} + c_2 \begin{pmatrix} 0 \\ 1 \\ 6 \end{pmatrix} e^{t} + c_3 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} e^{2t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 5 & 6 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \\ -30 \end{pmatrix}, \qquad c_1 = 1, \quad c_2 = -4, \quad c_3 = -11 \]
The solution subject to the initial condition is
\[ \mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 5 \end{pmatrix} e^{-t} - 4 \begin{pmatrix} 0 \\ 1 \\ 6 \end{pmatrix} e^{t} - 11 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} e^{2t}. \]
As t \to \infty, the first coordinate vanishes, while the second and third coordinates tend to -\infty.
Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}. \]
The solution of the initial value problem is \mathbf{x} = e^{At} \mathbf{x}_0.
\begin{align*}
\mathbf{x} &= e^{At} \mathbf{x}_0 = S e^{Jt} S^{-1} \mathbf{x}_0 \\
&= \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 5 & 6 & 1 \end{pmatrix} \begin{pmatrix} e^{-t} & 0 & 0 \\ 0 & e^{t} & 0 \\ 0 & 0 & e^{2t} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 7 & -6 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ -2 \\ -30 \end{pmatrix}
\end{align*}
\[ \mathbf{x} = \begin{pmatrix} 1 \\ 2 \\ 5 \end{pmatrix} e^{-t} - 4 \begin{pmatrix} 0 \\ 1 \\ 6 \end{pmatrix} e^{t} - 11 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} e^{2t}. \]
Solution 17.7
1. (a) We compute the eigenvalues of the matrix.
\[ \chi(\lambda) = \begin{vmatrix} 1 - \lambda & 1 & 1 \\ 2 & 1 - \lambda & -1 \\ -3 & 2 & 4 - \lambda \end{vmatrix} = -\lambda^3 + 6\lambda^2 - 12\lambda + 8 = -(\lambda - 2)^3 \]
\lambda = 2 is an eigenvalue of multiplicity 3. The null space of A - 2I has dimension 1. (The first two rows are linearly independent, but the third is a linear combination of the first two.)
\[ A - 2I = \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \]
Thus there is only one eigenvector.
\[ \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = 0, \qquad \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \]
(b) One solution of the system of differential equations is
\[ \mathbf{x}^{(1)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} e^{2t}. \]
(c) We substitute the form \mathbf{x} = \boldsymbol{\xi}\,t\,e^{2t} + \boldsymbol{\eta}\,e^{2t} into the differential equation.
\begin{align*}
\mathbf{x}' &= A\mathbf{x} \\
\boldsymbol{\xi}\,e^{2t} + 2\boldsymbol{\xi}\,t\,e^{2t} + 2\boldsymbol{\eta}\,e^{2t} &= A\boldsymbol{\xi}\,t\,e^{2t} + A\boldsymbol{\eta}\,e^{2t} \\
(A - 2I)\boldsymbol{\xi} &= 0, \qquad (A - 2I)\boldsymbol{\eta} = \boldsymbol{\xi}
\end{align*}
We already have a solution of the first equation; we need the generalized eigenvector \boldsymbol{\eta}. Note that \boldsymbol{\eta} is only determined up to a constant times \boldsymbol{\xi}. Thus we look for the solution whose second component vanishes to simplify the algebra.
\[ (A - 2I)\boldsymbol{\eta} = \boldsymbol{\xi}, \qquad \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \begin{pmatrix} \eta_1 \\ 0 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \]
\[ -\eta_1 + \eta_3 = 0, \qquad 2\eta_1 - \eta_3 = 1, \qquad -3\eta_1 + 2\eta_3 = -1 \]
\[ \boldsymbol{\eta} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \]
A second linearly independent solution is
\[ \mathbf{x}^{(2)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} t\,e^{2t} + \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} e^{2t}. \]
(d) To find a third solution we substitute the form \mathbf{x} = \boldsymbol{\xi}\,(t^2/2)\,e^{2t} + \boldsymbol{\eta}\,t\,e^{2t} + \boldsymbol{\zeta}\,e^{2t} into the differential equation.
\begin{align*}
\mathbf{x}' &= A\mathbf{x} \\
2\boldsymbol{\xi}\,(t^2/2)\,e^{2t} + (\boldsymbol{\xi} + 2\boldsymbol{\eta})\,t\,e^{2t} + (\boldsymbol{\eta} + 2\boldsymbol{\zeta})\,e^{2t} &= A\boldsymbol{\xi}\,(t^2/2)\,e^{2t} + A\boldsymbol{\eta}\,t\,e^{2t} + A\boldsymbol{\zeta}\,e^{2t} \\
(A - 2I)\boldsymbol{\xi} &= 0, \qquad (A - 2I)\boldsymbol{\eta} = \boldsymbol{\xi}, \qquad (A - 2I)\boldsymbol{\zeta} = \boldsymbol{\eta}
\end{align*}
We have already solved the first two equations; we need the generalized eigenvector \boldsymbol{\zeta}. Note that \boldsymbol{\zeta} is only determined up to a constant times \boldsymbol{\xi}. Thus we look for the solution whose second component vanishes to simplify the algebra.
\[ (A - 2I)\boldsymbol{\zeta} = \boldsymbol{\eta}, \qquad \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \begin{pmatrix} \zeta_1 \\ 0 \\ \zeta_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \]
\[ -\zeta_1 + \zeta_3 = 1, \qquad 2\zeta_1 - \zeta_3 = 0, \qquad -3\zeta_1 + 2\zeta_3 = 1 \]
\[ \boldsymbol{\zeta} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} \]
A third linearly independent solution is
\[ \mathbf{x}^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} (t^2/2)\,e^{2t} + \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} t\,e^{2t} + \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} e^{2t} \]
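The chain relations just derived can be confirmed mechanically (a check of ours, not part of the text):

```python
# Verify (A-2I) xi = 0, (A-2I) eta = xi, (A-2I) zeta = eta for system (17.1).
A = [[1, 1, 1], [2, 1, -1], [-3, 2, 4]]
B = [[A[i][j] - 2 * (i == j) for j in range(3)] for i in range(3)]  # A - 2I

def mul(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

xi, eta, zeta = [0, 1, -1], [1, 0, 1], [1, 0, 2]
assert mul(B, xi) == [0, 0, 0]
assert mul(B, eta) == xi
assert mul(B, zeta) == eta
```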
2. (a) We compute the eigenvalues of the matrix.
\[ \chi(\lambda) = \begin{vmatrix} 5 - \lambda & -3 & -2 \\ 8 & -5 - \lambda & -4 \\ -4 & 3 & 3 - \lambda \end{vmatrix} = -\lambda^3 + 3\lambda^2 - 3\lambda + 1 = -(\lambda - 1)^3 \]
\lambda = 1 is an eigenvalue of multiplicity 3. The null space of A - I has dimension 2. (The second and third rows are multiples of the first.)
\[ A - I = \begin{pmatrix} 4 & -3 & -2 \\ 8 & -6 & -4 \\ -4 & 3 & 2 \end{pmatrix} \]
Thus there are two eigenvectors.
\[ \begin{pmatrix} 4 & -3 & -2 \\ 8 & -6 & -4 \\ -4 & 3 & 2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = 0, \qquad \boldsymbol{\xi}^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \quad \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix} \]
Two linearly independent solutions of the differential equation are
\[ \mathbf{x}^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} e^{t}, \qquad \mathbf{x}^{(2)} = \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix} e^{t}. \]
(b) We substitute the form \mathbf{x} = \boldsymbol{\xi}\,t\,e^{t} + \boldsymbol{\eta}\,e^{t} into the differential equation.
\begin{align*}
\mathbf{x}' &= A\mathbf{x} \\
\boldsymbol{\xi}\,e^{t} + \boldsymbol{\xi}\,t\,e^{t} + \boldsymbol{\eta}\,e^{t} &= A\boldsymbol{\xi}\,t\,e^{t} + A\boldsymbol{\eta}\,e^{t} \\
(A - I)\boldsymbol{\xi} &= 0, \qquad (A - I)\boldsymbol{\eta} = \boldsymbol{\xi}
\end{align*}
The general solution of the first equation is a linear combination of the two solutions we found in the previous part.
\[ \boldsymbol{\xi} = c_1 \boldsymbol{\xi}^{(1)} + c_2 \boldsymbol{\xi}^{(2)} \]
Now we find the generalized eigenvector \boldsymbol{\eta}. Note that \boldsymbol{\eta} is only determined up to a linear combination of \boldsymbol{\xi}^{(1)} and \boldsymbol{\xi}^{(2)}. Thus we can take the first two components of \boldsymbol{\eta} to be zero.
\[ \begin{pmatrix} 4 & -3 & -2 \\ 8 & -6 & -4 \\ -4 & 3 & 2 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ \eta_3 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix} \]
\[ -2\eta_3 = c_1, \qquad -4\eta_3 = 2 c_2, \qquad 2\eta_3 = 2 c_1 - 3 c_2 \]
\[ c_1 = c_2, \qquad \eta_3 = -\frac{c_1}{2} \]
We see that we must take c_1 = c_2 in order to obtain a solution. We choose c_1 = c_2 = 2. A third linearly independent solution of the differential equation is
\[ \mathbf{x}^{(3)} = \begin{pmatrix} 2 \\ 4 \\ -2 \end{pmatrix} t\,e^{t} + \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix} e^{t}. \]
Solution 17.8
1. The characteristic polynomial of the matrix is
\begin{align*}
\chi(\lambda) &= \begin{vmatrix} 1 - \lambda & 1 & 1 \\ 2 & 1 - \lambda & -1 \\ -8 & -5 & -3 - \lambda \end{vmatrix} \\
&= (1 - \lambda)^2 (-3 - \lambda) + 8 - 10 - 5(1 - \lambda) + 2(3 + \lambda) + 8(1 - \lambda) \\
&= -\lambda^3 - \lambda^2 + 4\lambda + 4 \\
&= -(\lambda + 2)(\lambda + 1)(\lambda - 2)
\end{align*}
Thus we see that the eigenvalues are \lambda = -2, -1, 2. The eigenvectors satisfy
\[ (A - \lambda I)\boldsymbol{\xi} = 0. \]
For \lambda = -2, we have
\[ (A + 2I)\boldsymbol{\xi} = 0, \qquad \begin{pmatrix} 3 & 1 & 1 \\ 2 & 3 & -1 \\ -8 & -5 & -1 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \]
If we take \xi_3 = 1 then the first two rows give us the system,
\[ \begin{pmatrix} 3 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}, \]
which has the solution \xi_1 = -4/7, \xi_2 = 5/7. For the first eigenvector we choose:
\[ \boldsymbol{\xi} = \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix} \]
For \lambda = -1, we have
\[ (A + I)\boldsymbol{\xi} = 0, \qquad \begin{pmatrix} 2 & 1 & 1 \\ 2 & 2 & -1 \\ -8 & -5 & -2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \]
If we take \xi_3 = 1 then the first two rows give us the system,
\[ \begin{pmatrix} 2 & 1 \\ 2 & 2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}, \]
which has the solution \xi_1 = -3/2, \xi_2 = 2. For the second eigenvector we choose:
\[ \boldsymbol{\xi} = \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix} \]
For \lambda = 2, we have
\[ (A - 2I)\boldsymbol{\xi} = 0, \qquad \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -8 & -5 & -5 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \]
If we take \xi_3 = 1 then the first two rows give us the system,
\[ \begin{pmatrix} -1 & 1 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix}, \]
which has the solution \xi_1 = 0, \xi_2 = -1. For the third eigenvector we choose:
\[ \boldsymbol{\xi} = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \]
In summary, the eigenvalues and eigenvectors are
\[ \lambda = -2, -1, 2, \qquad \boldsymbol{\xi} = \left\{ \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix}, \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \right\} \]
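Each eigenpair can be confirmed by direct multiplication (a check of ours, not part of the text):

```python
# Verify A v = lambda v for the three eigenpairs of Solution 17.8.
A = [[1, 1, 1], [2, 1, -1], [-8, -5, -3]]

def mul(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

pairs = [(-2, [-4, 5, 7]), (-1, [-3, 4, 2]), (2, [0, -1, 1])]
for lam, v in pairs:
    assert mul(A, v) == [lam * c for c in v]
```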
2. The matrix is diagonalized with the similarity transformation
\[ J = S^{-1} A S, \]
where S is the matrix with eigenvectors as columns:
\[ S = \begin{pmatrix} -4 & -3 & 0 \\ 5 & 4 & -1 \\ 7 & 2 & 1 \end{pmatrix} \]
The matrix exponential, e^{At}, is given by
\[ e^{At} = S e^{Jt} S^{-1}, \]
\[ e^{At} = \begin{pmatrix} -4 & -3 & 0 \\ 5 & 4 & -1 \\ 7 & 2 & 1 \end{pmatrix} \begin{pmatrix} e^{-2t} & 0 & 0 \\ 0 & e^{-t} & 0 \\ 0 & 0 & e^{2t} \end{pmatrix} \frac{1}{12} \begin{pmatrix} 6 & 3 & 3 \\ -12 & -4 & -4 \\ -18 & -13 & -1 \end{pmatrix}. \]
\[ e^{At} = \begin{pmatrix} -2 e^{-2t} + 3 e^{-t} & -e^{-2t} + e^{-t} & -e^{-2t} + e^{-t} \\ \dfrac{5 e^{-2t} - 8 e^{-t} + 3 e^{2t}}{2} & \dfrac{15 e^{-2t} - 16 e^{-t} + 13 e^{2t}}{12} & \dfrac{15 e^{-2t} - 16 e^{-t} + e^{2t}}{12} \\ \dfrac{7 e^{-2t} - 4 e^{-t} - 3 e^{2t}}{2} & \dfrac{21 e^{-2t} - 8 e^{-t} - 13 e^{2t}}{12} & \dfrac{21 e^{-2t} - 8 e^{-t} - e^{2t}}{12} \end{pmatrix} \]
The solution of the initial value problem is e^{At} \mathbf{x}_0.
3. The general solution of the Euler equation is
\[ c_1 \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix} t^{-2} + c_2 \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix} t^{-1} + c_3 \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} t^{2}. \]
We could also write the solution as
\[ \mathbf{x} = t^A \mathbf{c} \equiv e^{A \log t} \mathbf{c}. \]
Solution 17.9
1. The characteristic polynomial of the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 2 - \lambda & 0 & 1 \\ 0 & 2 - \lambda & 0 \\ 0 & -1 & 3 - \lambda \end{vmatrix} = (2 - \lambda)^2 (3 - \lambda) \]
Thus we see that the eigenvalues are \lambda = 2, 2, 3. Consider
\[ A - 2I = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & -1 & 1 \end{pmatrix}. \]
Since the null space of A - 2I has dimension 1, there is one eigenvector and one generalized eigenvector of rank two for \lambda = 2. The generalized eigenvector of rank two satisfies
\[ (A - 2I)^2 \boldsymbol{\xi}^{(2)} = 0, \qquad \begin{pmatrix} 0 & -1 & 1 \\ 0 & 0 & 0 \\ 0 & -1 & 1 \end{pmatrix} \boldsymbol{\xi}^{(2)} = 0. \]
We choose the solution
\[ \boldsymbol{\xi}^{(2)} = \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}. \]
The eigenvector for \lambda = 2 is
\[ \boldsymbol{\xi}^{(1)} = (A - 2I)\boldsymbol{\xi}^{(2)} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}. \]
The eigenvector for \lambda = 3 satisfies
\[ (A - 3I)\boldsymbol{\xi} = 0, \qquad \begin{pmatrix} -1 & 0 & 1 \\ 0 & -1 & 0 \\ 0 & -1 & 0 \end{pmatrix} \boldsymbol{\xi} = 0. \]
We choose the solution
\[ \boldsymbol{\xi} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}. \]
The eigenvalues and generalized eigenvectors are
\[ \lambda = 2, 2, 3, \qquad \boldsymbol{\xi} = \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}. \]
The matrix of eigenvectors and its inverse is
\[ S = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}, \qquad S^{-1} = \begin{pmatrix} 1 & 1 & -1 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}. \]
The Jordan canonical form of the matrix, which satisfies J = S^{-1} A S, is
\[ J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} \]
Recall that the function of a Jordan block is:
\[ f\left( \begin{pmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{pmatrix} \right) = \begin{pmatrix} f(\lambda) & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} & \frac{f'''(\lambda)}{3!} \\ 0 & f(\lambda) & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} \\ 0 & 0 & f(\lambda) & \frac{f'(\lambda)}{1!} \\ 0 & 0 & 0 & f(\lambda) \end{pmatrix}, \]
and that the function of a matrix in Jordan canonical form is
\[ f\left( \begin{pmatrix} J_1 & 0 & 0 & 0 \\ 0 & J_2 & 0 & 0 \\ 0 & 0 & J_3 & 0 \\ 0 & 0 & 0 & J_4 \end{pmatrix} \right) = \begin{pmatrix} f(J_1) & 0 & 0 & 0 \\ 0 & f(J_2) & 0 & 0 \\ 0 & 0 & f(J_3) & 0 \\ 0 & 0 & 0 & f(J_4) \end{pmatrix}. \]
We want to compute e^{Jt} so we consider the function f(\lambda) = e^{\lambda t}, which has the derivative f'(\lambda) = t e^{\lambda t}. Thus we see that
\[ e^{Jt} = \begin{pmatrix} e^{2t} & t e^{2t} & 0 \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{3t} \end{pmatrix} \]
The exponential matrix is
\[ e^{At} = S e^{Jt} S^{-1}, \]
\[ e^{At} = \begin{pmatrix} e^{2t} & (1 + t) e^{2t} - e^{3t} & -e^{2t} + e^{3t} \\ 0 & e^{2t} & 0 \\ 0 & e^{2t} - e^{3t} & e^{3t} \end{pmatrix}. \]
The general solution of the homogeneous differential equation is
\[ \mathbf{x} = e^{At} \mathbf{C}. \]
2. The solution of the inhomogeneous differential equation subject to the initial condition is
\[ \mathbf{x} = e^{At}\,\mathbf{0} + e^{At} \int_0^t e^{-A\tau} \mathbf{g}(\tau)\,d\tau \]
\[ \mathbf{x} = e^{At} \int_0^t e^{-A\tau} \mathbf{g}(\tau)\,d\tau \]
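The closed form for e^{At} can be double-checked numerically (a check of ours, not part of the text; it assumes the reconstructed matrix A = [[2,0,1],[0,2,0],[0,-1,3]]): at t = 0 it must reduce to the identity, and it must satisfy d/dt e^{At} = A e^{At}.

```python
import math

# Verify the matrix exponential of Solution 17.9 entrywise.
A = [[2, 0, 1], [0, 2, 0], [0, -1, 3]]

def expAt(t):
    e2, e3 = math.exp(2 * t), math.exp(3 * t)
    return [[e2, (1 + t) * e2 - e3, -e2 + e3],
            [0, e2, 0],
            [0, e2 - e3, e3]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

assert expAt(0.0) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
h, t = 1e-6, 0.3
D = [[(a - b) / (2 * h) for a, b in zip(ra, rb)]
     for ra, rb in zip(expAt(t + h), expAt(t - h))]   # d/dt e^{At}
AE = matmul(A, expAt(t))
assert max(abs(D[i][j] - AE[i][j]) for i in range(3) for j in range(3)) < 1e-4
```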
Solution 17.10
1.
\[ \frac{d\mathbf{x}}{dt} = \frac{1}{t} A\mathbf{x}, \qquad t \begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \]
The first component of this equation is
\[ t x_1' = a x_1 + b x_2. \]
We differentiate and multiply by t to obtain a second order coupled equation for x_1. We use (17.3) to eliminate the dependence on x_2.
\begin{align*}
t^2 x_1'' + t x_1' &= a t x_1' + b t x_2' \\
t^2 x_1'' + (1 - a) t x_1' &= b (c x_1 + d x_2) \\
t^2 x_1'' + (1 - a) t x_1' - b c x_1 &= d (t x_1' - a x_1) \\
t^2 x_1'' + (1 - a - d) t x_1' + (a d - b c) x_1 &= 0
\end{align*}
Thus we see that x_1 satisfies a second order, Euler equation. By symmetry (interchanging a with d and b with c) we see that x_2 satisfies,
\[ t^2 x_2'' + (1 - a - d) t x_2' + (a d - b c) x_2 = 0. \]
2. We substitute \mathbf{x} = \mathbf{a}\,t^{\lambda} into (17.3).
\[ \lambda \mathbf{a}\,t^{\lambda - 1} = \frac{1}{t} A \mathbf{a}\,t^{\lambda}, \qquad A\mathbf{a} = \lambda \mathbf{a} \]
Thus we see that \mathbf{x} = \mathbf{a}\,t^{\lambda} is a solution if \lambda is an eigenvalue of A with eigenvector \mathbf{a}.
3. Suppose that \lambda = \alpha is an eigenvalue of multiplicity 2. If \lambda = \alpha has two linearly independent eigenvectors, \mathbf{a} and \mathbf{b}, then \mathbf{a}\,t^{\alpha} and \mathbf{b}\,t^{\alpha} are linearly independent solutions. If \lambda = \alpha has only one linearly independent eigenvector, \mathbf{a}, then \mathbf{a}\,t^{\alpha} is a solution. We look for a second solution of the form
\[ \mathbf{x} = \boldsymbol{\xi}\,t^{\alpha} \log t + \boldsymbol{\eta}\,t^{\alpha}. \]
Substituting this into the differential equation yields
\[ \alpha \boldsymbol{\xi}\,t^{\alpha - 1} \log t + \boldsymbol{\xi}\,t^{\alpha - 1} + \alpha \boldsymbol{\eta}\,t^{\alpha - 1} = A\boldsymbol{\xi}\,t^{\alpha - 1} \log t + A\boldsymbol{\eta}\,t^{\alpha - 1}. \]
We equate coefficients of t^{\alpha - 1} \log t and t^{\alpha - 1} to determine \boldsymbol{\xi} and \boldsymbol{\eta}.
\[ (A - \alpha I)\boldsymbol{\xi} = 0, \qquad (A - \alpha I)\boldsymbol{\eta} = \boldsymbol{\xi} \]
These equations have solutions because \lambda = \alpha has generalized eigenvectors of first and second order.
Note that the change of independent variable \tau = \log t, \mathbf{y}(\tau) = \mathbf{x}(t), will transform (17.3) into a constant coefficient system.
\[ \frac{d\mathbf{y}}{d\tau} = A\mathbf{y} \]
Thus all the methods for solving constant coefficient systems carry over directly to solving (17.3). In the case of eigenvalues with multiplicity greater than one, we will have solutions of the form,
\[ \boldsymbol{\xi}\,t^{\alpha}, \qquad \boldsymbol{\xi}\,t^{\alpha} \log t + \boldsymbol{\eta}\,t^{\alpha}, \qquad \boldsymbol{\xi}\,t^{\alpha} (\log t)^2 + \boldsymbol{\eta}\,t^{\alpha} \log t + \boldsymbol{\zeta}\,t^{\alpha}, \quad \ldots, \]
analogous to the form of the solutions for a constant coefficient system,
\[ \boldsymbol{\xi}\,e^{\alpha \tau}, \qquad \boldsymbol{\xi}\,\tau\,e^{\alpha \tau} + \boldsymbol{\eta}\,e^{\alpha \tau}, \qquad \boldsymbol{\xi}\,\tau^2 e^{\alpha \tau} + \boldsymbol{\eta}\,\tau\,e^{\alpha \tau} + \boldsymbol{\zeta}\,e^{\alpha \tau}, \quad \ldots. \]
4. Method 1. Now we consider
\[ \frac{d\mathbf{x}}{dt} = \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \mathbf{x}. \]
The characteristic polynomial of the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 1 - \lambda & 0 \\ 1 & 1 - \lambda \end{vmatrix} = (1 - \lambda)^2. \]
\lambda = 1 is an eigenvalue of multiplicity 2. The equation for the associated eigenvectors is
\[ \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
There is only one linearly independent eigenvector, which we choose to be
\[ \mathbf{a} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
One solution of the differential equation is
\[ \mathbf{x}_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} t. \]
We look for a second solution of the form
\[ \mathbf{x}_2 = \mathbf{a}\,t \log t + \boldsymbol{\eta}\,t. \]
\boldsymbol{\eta} satisfies the equation
\[ (A - I)\boldsymbol{\eta} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \boldsymbol{\eta} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
The solution is determined only up to an additive multiple of \mathbf{a}. We choose
\[ \boldsymbol{\eta} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \]
Thus a second linearly independent solution is
\[ \mathbf{x}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} t \log t + \begin{pmatrix} 1 \\ 0 \end{pmatrix} t. \]
The general solution of the differential equation is
\[ \mathbf{x} = c_1 \begin{pmatrix} 0 \\ 1 \end{pmatrix} t + c_2 \left( \begin{pmatrix} 0 \\ 1 \end{pmatrix} t \log t + \begin{pmatrix} 1 \\ 0 \end{pmatrix} t \right). \]
Method 2. Note that the matrix is lower triangular.
\[ \begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \tag{17.4} \]
We have an uncoupled equation for x_1.
\[ x_1' = \frac{1}{t} x_1, \qquad x_1 = c_1 t \]
By substituting the solution for x_1 into (17.4), we obtain an uncoupled equation for x_2.
\begin{align*}
x_2' &= \frac{1}{t} (c_1 t + x_2) \\
x_2' - \frac{1}{t} x_2 &= c_1 \\
\left( \frac{1}{t} x_2 \right)' &= \frac{c_1}{t} \\
\frac{1}{t} x_2 &= c_1 \log t + c_2 \\
x_2 &= c_1 t \log t + c_2 t
\end{align*}
Thus the solution of the system is
\[ \mathbf{x} = \begin{pmatrix} c_1 t \\ c_1 t \log t + c_2 t \end{pmatrix} = c_1 \begin{pmatrix} t \\ t \log t \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ t \end{pmatrix}, \]
which is equivalent to the solution we obtained previously.
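Both solutions of this Euler-type system can be verified numerically (a check of ours, not part of the text):

```python
import math

# Verify that x1 = (0, t) and x2 = (t, t log t) satisfy x' = (1/t) [[1,0],[1,1]] x.
def residual(x, t, h=1e-6):
    xp = [(a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h))]
    rhs = [x(t)[0] / t, (x(t)[0] + x(t)[1]) / t]
    return max(abs(a - b) for a, b in zip(xp, rhs))

x1 = lambda t: [0.0, t]
x2 = lambda t: [t, t * math.log(t)]
assert residual(x1, 2.0) < 1e-6
assert residual(x2, 2.0) < 1e-6
```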
Solution 17.11
Homogeneous Solution, Method 1. We designate the inhomogeneous system of differential equations
\[ \mathbf{x}' = A\mathbf{x} + \mathbf{g}(t). \]
First we find homogeneous solutions. The characteristic equation for the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 4 - \lambda & -2 \\ 8 & -4 - \lambda \end{vmatrix} = \lambda^2 = 0 \]
\lambda = 0 is an eigenvalue of multiplicity 2. The eigenvectors satisfy
\[ \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
Thus we see that there is only one linearly independent eigenvector. We choose
\[ \boldsymbol{\xi} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
One homogeneous solution is then
\[ \mathbf{x}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{0 t} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
We look for a second homogeneous solution of the form
\[ \mathbf{x}_2 = \boldsymbol{\xi}\,t + \boldsymbol{\eta}. \]
We substitute this into the homogeneous equation.
\[ \mathbf{x}_2' = A\mathbf{x}_2, \qquad \boldsymbol{\xi} = A(\boldsymbol{\xi}\,t + \boldsymbol{\eta}) \]
We see that \boldsymbol{\xi} and \boldsymbol{\eta} satisfy
\[ A\boldsymbol{\xi} = 0, \qquad A\boldsymbol{\eta} = \boldsymbol{\xi}. \]
We choose \boldsymbol{\xi} to be the eigenvector that we found previously. The equation for \boldsymbol{\eta} is then
\[ \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
\boldsymbol{\eta} is determined up to an additive multiple of \boldsymbol{\xi}. We choose
\[ \boldsymbol{\eta} = \begin{pmatrix} 0 \\ -1/2 \end{pmatrix}. \]
Thus a second homogeneous solution is
\[ \mathbf{x}_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} t + \begin{pmatrix} 0 \\ -1/2 \end{pmatrix}. \]
The general homogeneous solution of the system is
\[ \mathbf{x}_h = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 2t - 1/2 \end{pmatrix} \]
We can write this in matrix notation using the fundamental matrix \Phi(t).
\[ \mathbf{x}_h = \Phi(t)\,\mathbf{c} = \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \]
Homogeneous Solution, Method 2. The similarity transform C^{-1} A C with
\[ C = \begin{pmatrix} 1 & 0 \\ 2 & -1/2 \end{pmatrix} \]
will convert the matrix
\[ A = \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \]
to Jordan canonical form. We make the change of variables
\[ \mathbf{x} = \begin{pmatrix} 1 & 0 \\ 2 & -1/2 \end{pmatrix} \mathbf{y}. \]
The homogeneous system becomes
\[ \frac{d\mathbf{y}}{dt} = \begin{pmatrix} 1 & 0 \\ 4 & -2 \end{pmatrix} \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & -1/2 \end{pmatrix} \mathbf{y} \]
\[ \begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \]
The equation for y_2 is
\[ y_2' = 0, \qquad y_2 = c_2. \]
The equation for y_1 becomes
\[ y_1' = c_2, \qquad y_1 = c_1 + c_2 t. \]
The solution for \mathbf{y} is then
\[ \mathbf{y} = c_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 1 \end{pmatrix}. \]
We multiply this by C to obtain the homogeneous solution for \mathbf{x}.
\[ \mathbf{x}_h = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 2t - 1/2 \end{pmatrix} \]
Inhomogeneous Solution. By the method of variation of parameters, a particular solution is
\[ \mathbf{x}_p = \Phi(t) \int \Phi^{-1}(t)\,\mathbf{g}(t)\,dt. \]
\begin{align*}
\mathbf{x}_p &= \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \int \begin{pmatrix} 1 - 4t & 2t \\ 4 & -2 \end{pmatrix} \begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix} dt \\
&= \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \int \begin{pmatrix} -2t^{-1} - 4t^{-2} + t^{-3} \\ 2t^{-2} + 4t^{-3} \end{pmatrix} dt \\
&= \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \begin{pmatrix} -2\log t + 4t^{-1} - \frac{1}{2} t^{-2} \\ -2t^{-1} - 2t^{-2} \end{pmatrix} \\
&= \begin{pmatrix} -2 - 2\log t + 2t^{-1} - \frac{1}{2} t^{-2} \\ -4 - 4\log t + 5t^{-1} \end{pmatrix}
\end{align*}
By adding 2 times our first homogeneous solution, we obtain
\[ \mathbf{x}_p = \begin{pmatrix} -2\log t + 2t^{-1} - \frac{1}{2} t^{-2} \\ -4\log t + 5t^{-1} \end{pmatrix} \]
The general solution of the system of differential equations is
\[ \mathbf{x} = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 2t - 1/2 \end{pmatrix} + \begin{pmatrix} -2\log t + 2t^{-1} - \frac{1}{2} t^{-2} \\ -4\log t + 5t^{-1} \end{pmatrix} \]
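The particular solution from the variation-of-parameters computation can be checked directly (a check of ours, not part of the text): it should satisfy x' = Ax + g.

```python
import math

# Verify the particular solution of Solution 17.11 for A = [[4,-2],[8,-4]],
# g(t) = (t^-3, -t^-2).
def xp(t):
    return [-2 * math.log(t) + 2 / t - 1 / (2 * t * t),
            -4 * math.log(t) + 5 / t]

def residual(t, h=1e-6):
    d = [(a - b) / (2 * h) for a, b in zip(xp(t + h), xp(t - h))]
    x = xp(t)
    rhs = [4 * x[0] - 2 * x[1] + t ** -3, 8 * x[0] - 4 * x[1] - t ** -2]
    return max(abs(a - b) for a, b in zip(d, rhs))

assert residual(1.5) < 1e-6
```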
Chapter 18
Theory of Linear Ordinary Differential Equations
A little partyin' is good for the soul.
-Matt Metz
18.1 Nature of Solutions
Result 18.1.1 Consider the n^{th} order ordinary differential equation of the form
\[ L[y] = \frac{d^n y}{dx^n} + p_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x) \frac{dy}{dx} + p_0(x)\,y = f(x). \tag{18.1} \]
If the coefficient functions p_{n-1}(x), \ldots, p_0(x) and the inhomogeneity f(x) are continuous on some interval a < x < b then the differential equation subject to the conditions,
\[ y(x_0) = v_0, \quad y'(x_0) = v_1, \quad \ldots, \quad y^{(n-1)}(x_0) = v_{n-1}, \qquad a < x_0 < b, \]
has a unique solution on the interval.
Linearity of the Operator. The differential operator L is linear. To verify this,
\begin{align*}
L[cy] &= \frac{d^n}{dx^n}(cy) + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}}(cy) + \cdots + p_1(x) \frac{d}{dx}(cy) + p_0(x)(cy) \\
&= c\,\frac{d^n}{dx^n} y + c\,p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}} y + \cdots + c\,p_1(x) \frac{d}{dx} y + c\,p_0(x)\,y \\
&= c\,L[y]
\end{align*}
\begin{align*}
L[y_1 + y_2] &= \frac{d^n}{dx^n}(y_1 + y_2) + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}}(y_1 + y_2) + \cdots + p_1(x) \frac{d}{dx}(y_1 + y_2) + p_0(x)(y_1 + y_2) \\
&= \frac{d^n}{dx^n}(y_1) + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}}(y_1) + \cdots + p_1(x) \frac{d}{dx}(y_1) + p_0(x)(y_1) \\
&\quad + \frac{d^n}{dx^n}(y_2) + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}}(y_2) + \cdots + p_1(x) \frac{d}{dx}(y_2) + p_0(x)(y_2) \\
&= L[y_1] + L[y_2].
\end{align*}
Homogeneous Solutions. The general homogeneous equation has the form
\[ L[y] = \frac{d^n y}{dx^n} + p_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x) \frac{dy}{dx} + p_0(x)\,y = 0. \]
From the linearity of L, we see that if y_1 and y_2 are solutions to the homogeneous equation then c_1 y_1 + c_2 y_2 is also a solution, (L[c_1 y_1 + c_2 y_2] = 0).
On any interval where the coefficient functions are continuous, the n^{th} order linear homogeneous equation has n linearly independent solutions, y_1, y_2, \ldots, y_n. (We will study linear independence in Section 18.3.) The general solution to the homogeneous problem is then
\[ y_h = c_1 y_1 + c_2 y_2 + \cdots + c_n y_n. \]
Particular Solutions. Any function, y_p, that satisfies the inhomogeneous equation, L[y_p] = f(x), is called a particular solution or particular integral of the equation. Note that for linear differential equations the particular solution is not unique. If y_p is a particular solution then y_p + y_h is also a particular solution where y_h is any homogeneous solution.
The general solution to the problem L[y] = f(x) is the sum of a particular solution and a linear combination of the homogeneous solutions
\[ y = y_p + c_1 y_1 + \cdots + c_n y_n. \]
Example 18.1.1 Consider the differential equation
\[ y'' - y' = 1. \]
You can verify that two homogeneous solutions are e^x and 1. A particular solution is -x. Thus the general solution is
\[ y = -x + c_1 e^x + c_2. \]
Real-Valued Solutions. If the coefficient functions and the inhomogeneity in Equation 18.1 are real-valued, then the general solution can be written in terms of real-valued functions. Let y be any homogeneous solution (perhaps complex-valued). By taking the complex conjugate of the equation L[y] = 0 we show that \bar{y} is a homogeneous solution as well.
\begin{align*}
L[y] &= 0 \\
\overline{L[y]} &= 0 \\
\overline{y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_0 y} &= 0 \\
\bar{y}^{(n)} + p_{n-1} \bar{y}^{(n-1)} + \cdots + p_0 \bar{y} &= 0 \\
L[\bar{y}] &= 0
\end{align*}
For the same reason, if y_p is a particular solution, then \bar{y}_p is a particular solution as well.
Since the real and imaginary parts of a function y are linear combinations of y and \bar{y},
\[ \Re(y) = \frac{y + \bar{y}}{2}, \qquad \Im(y) = \frac{y - \bar{y}}{2i}, \]
if y is a homogeneous solution then both \Re(y) and \Im(y) are homogeneous solutions. Likewise, if y_p is a particular solution then \Re(y_p) is a particular solution.
\[ L[\Re(y_p)] = L\left[ \frac{y_p + \bar{y}_p}{2} \right] = \frac{f}{2} + \frac{f}{2} = f \]
Thus we see that the homogeneous solution, the particular solution and the general solution of a linear differential equation with real-valued coefficients and inhomogeneity can be written in terms of real-valued functions.
Result 18.1.2 The differential equation
\[ L[y] = \frac{d^n y}{dx^n} + p_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x) \frac{dy}{dx} + p_0(x)\,y = f(x) \]
with continuous coefficients and inhomogeneity has a general solution of the form
\[ y = y_p + c_1 y_1 + \cdots + c_n y_n \]
where y_p is a particular solution, L[y_p] = f, and the y_k are linearly independent homogeneous solutions, L[y_k] = 0. If the coefficient functions and inhomogeneity are real-valued, then the general solution can be written in terms of real-valued functions.
18.2 Transformation to a First Order System
Any linear differential equation can be put in the form of a system of first order differential equations. Consider
\[ y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_0 y = f(x). \]
We introduce the functions,
\[ y_1 = y, \quad y_2 = y', \quad \ldots, \quad y_n = y^{(n-1)}. \]
The differential equation is equivalent to the system
\begin{align*}
y_1' &= y_2 \\
y_2' &= y_3 \\
&\;\;\vdots \\
y_n' &= f(x) - p_{n-1} y_n - \cdots - p_0 y_1.
\end{align*}
The first order system is more useful when numerically solving the differential equation.
Example 18.2.1 Consider the differential equation
\[ y'' + x^2 y' + \cos x \, y = \sin x. \]
The corresponding system of first order equations is
\begin{align*}
y_1' &= y_2 \\
y_2' &= \sin x - x^2 y_2 - \cos x \, y_1.
\end{align*}
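This is exactly the form a numerical integrator consumes. A minimal sketch (ours, not the text's; the step size, initial conditions y(0) = 1, y'(0) = 0, and the forward Euler method are our illustrative choices):

```python
import math

# Integrate the first order system for y'' + x^2 y' + cos(x) y = sin(x)
# with the forward Euler method.
def step(x, y1, y2, h):
    # y1' = y2,  y2' = sin x - x^2 y2 - cos x * y1
    return y1 + h * y2, y2 + h * (math.sin(x) - x * x * y2 - math.cos(x) * y1)

x, y1, y2, h = 0.0, 1.0, 0.0, 1e-3
for _ in range(1000):          # integrate from x = 0 to x = 1
    y1, y2 = step(x, y1, y2, h)
    x += h
assert abs(x - 1.0) < 1e-6 and abs(y1) < 5.0   # solution stays bounded
```

A higher-order method (e.g. Runge-Kutta) would use the same right-hand side; only `step` changes.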
18.3 The Wronskian
18.3.1 Derivative of a Determinant.
Before investigating the Wronskian, we will need a preliminary result from matrix theory. Consider an n \times n matrix A whose elements a_{ij}(x) are functions of x. We will denote the determinant by \Delta[A(x)]. We then have the following theorem.
Result 18.3.1 Let a_{ij}(x), the elements of the matrix A, be differentiable functions of x. Then
\[ \frac{d}{dx} \Delta[A(x)] = \sum_{k=1}^{n} \Delta_k[A(x)] \]
where \Delta_k[A(x)] is the determinant of the matrix A with the k^{th} row replaced by the derivative of the k^{th} row.
Example 18.3.1 Consider the matrix
\[ A(x) = \begin{pmatrix} x & x^2 \\ x^2 & x^4 \end{pmatrix} \]
The determinant is x^5 - x^4, thus the derivative of the determinant is 5x^4 - 4x^3. To check the theorem,
\begin{align*}
\frac{d}{dx} \Delta[A(x)] &= \begin{vmatrix} 1 & 2x \\ x^2 & x^4 \end{vmatrix} + \begin{vmatrix} x & x^2 \\ 2x & 4x^3 \end{vmatrix} \\
&= x^4 - 2x^3 + 4x^4 - 2x^3 \\
&= 5x^4 - 4x^3.
\end{align*}
18.3.2 The Wronskian of a Set of Functions.
A set of functions \{y_1, y_2, \ldots, y_n\} is linearly dependent on an interval if there are constants c_1, \ldots, c_n, not all zero, such that
\[ c_1 y_1 + c_2 y_2 + \cdots + c_n y_n = 0 \tag{18.2} \]
identically on the interval. The set is linearly independent if all of the constants must be zero to satisfy c_1 y_1 + \cdots + c_n y_n = 0 on the interval.
Consider a set of functions \{y_1, y_2, \ldots, y_n\} that are linearly dependent on a given interval and n - 1 times differentiable. There is a set of constants, not all zero, that satisfies equation 18.2.
Differentiating equation 18.2 n - 1 times gives the equations,
\begin{align*}
c_1 y_1' + c_2 y_2' + \cdots + c_n y_n' &= 0 \\
c_1 y_1'' + c_2 y_2'' + \cdots + c_n y_n'' &= 0 \\
&\;\;\vdots \\
c_1 y_1^{(n-1)} + c_2 y_2^{(n-1)} + \cdots + c_n y_n^{(n-1)} &= 0.
\end{align*}
We could write the problem to find the constants as
\[ \begin{pmatrix} y_1 & y_2 & \ldots & y_n \\ y_1' & y_2' & \ldots & y_n' \\ y_1'' & y_2'' & \ldots & y_n'' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \ldots & y_n^{(n-1)} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_n \end{pmatrix} = 0 \]
From linear algebra, we know that this equation has a solution for a nonzero constant vector only if the determinant of the matrix is zero. Here we define the Wronskian, W(x), of a set of functions.
\[ W(x) = \begin{vmatrix} y_1 & y_2 & \ldots & y_n \\ y_1' & y_2' & \ldots & y_n' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \ldots & y_n^{(n-1)} \end{vmatrix} \]
Thus if a set of functions is linearly dependent on an interval, then the Wronskian is identically zero on that interval. Alternatively, if the Wronskian is identically zero, then the above matrix equation has a solution for a nonzero constant vector. This implies that the set of functions is linearly dependent.
Result 18.3.2 The Wronskian of a set of functions vanishes identically over an interval if and only if the set of functions is linearly dependent on that interval. The Wronskian of a set of linearly independent functions does not vanish except possibly at isolated points.
Example 18.3.2 Consider the set \{x, x^2\}. The Wronskian is
\[ W(x) = \begin{vmatrix} x & x^2 \\ 1 & 2x \end{vmatrix} = 2x^2 - x^2 = x^2. \]
Thus the functions are independent.
Example 18.3.3 Consider the set \{\sin x, \cos x, e^{ix}\}. The Wronskian is
\[ W(x) = \begin{vmatrix} \sin x & \cos x & e^{ix} \\ \cos x & -\sin x & i e^{ix} \\ -\sin x & -\cos x & -e^{ix} \end{vmatrix}. \]
Since the last row is a constant multiple of the first row, the determinant is zero. The functions are dependent. We could also see this with the identity e^{ix} = \cos x + i \sin x.
18.3.3 The Wronskian of the Solutions to a Differential Equation
Consider the n^{th} order linear homogeneous differential equation
\[ y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_0(x) y = 0. \]
Let y_1, y_2, \ldots, y_n be any set of n linearly independent solutions. Let Y(x) be the matrix such that W(x) = \Delta[Y(x)]. Now let's differentiate W(x).
\[ W'(x) = \frac{d}{dx} \Delta[Y(x)] = \sum_{k=1}^{n} \Delta_k[Y(x)] \]
We note that all but the last term in this sum are zero. To see this, let's take a look at the first term.
\[ \Delta_1[Y(x)] = \begin{vmatrix} y_1' & y_2' & \cdots & y_n' \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} \]
The first two rows in the matrix are identical. Since the rows are dependent, the determinant is zero.
The last term in the sum is
\[ \Delta_n[Y(x)] = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} \end{vmatrix}. \]
In the last row of this matrix we make the substitution y_i^{(n)} = -p_{n-1}(x) y_i^{(n-1)} - \cdots - p_0(x) y_i. Recalling that we can add a multiple of a row to another without changing the determinant, we add p_0(x) times the first row, and p_1(x) times the second row, etc., to the last row. Thus we have the determinant,
\begin{align*}
W'(x) &= \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ -p_{n-1}(x) y_1^{(n-1)} & -p_{n-1}(x) y_2^{(n-1)} & \cdots & -p_{n-1}(x) y_n^{(n-1)} \end{vmatrix} \\
&= -p_{n-1}(x) \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} \\
&= -p_{n-1}(x) W(x)
\end{align*}
Thus the Wronskian satisfies the first order differential equation,
\[ W'(x) = -p_{n-1}(x) W(x). \]
Solving this equation we get a result known as Abel's formula.
\[ W(x) = c \exp\left( -\int p_{n-1}(x)\,dx \right) \]
Thus regardless of the particular set of solutions that we choose, we can compute their Wronskian up to a constant
factor.
Result 18.3.3 The Wronskian of any linearly independent set of solutions to the equation

    y^{(n)} + p_{n-1}(x) y^{(n-1)} + ... + p_0(x) y = 0

is, up to a multiplicative constant, given by

    W(x) = exp( -∫ p_{n-1}(x) dx ).
Example 18.3.4 Consider the differential equation

    y'' - 3y' + 2y = 0.

The Wronskian of the two independent solutions is

    W(x) = c exp( -∫ (-3) dx ) = c e^{3x}.

For the choice of solutions {e^x, e^{2x}}, the Wronskian is

    W(x) = det[ e^x    e^{2x}
                e^x   2 e^{2x} ] = 2 e^{3x} - e^{3x} = e^{3x}.
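As a quick sanity check of Example 18.3.4 (a sketch assuming SymPy is available; the book itself works by hand), the Wronskian of e^x and e^{2x} agrees with Abel's formula:

```python
# Sketch: confirm the Wronskian of Example 18.3.4 agrees with Abel's formula.
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.exp(x), sp.exp(2 * x)

# W = y1 y2' - y2 y1' for this pair of solutions of y'' - 3y' + 2y = 0.
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
print(W)  # exp(3*x), i.e. c exp(-int(-3) dx) with c = 1
```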
18.4 Well-Posed Problems

Consider the initial value problem for an n^th order linear differential equation,

    d^n y/dx^n + p_{n-1}(x) d^{n-1}y/dx^{n-1} + ... + p_1(x) dy/dx + p_0(x) y = f(x),
    y(x_0) = v_1,  y'(x_0) = v_2,  ...,  y^{(n-1)}(x_0) = v_n.

Since the general solution to the differential equation is a linear combination of the n homogeneous solutions plus the particular solution,

    y = y_p + c_1 y_1 + c_2 y_2 + ... + c_n y_n,
the problem of finding the constants c_i can be written

    [ y_1(x_0)          y_2(x_0)          ...  y_n(x_0)
      y_1'(x_0)         y_2'(x_0)         ...  y_n'(x_0)
      ...
      y_1^{(n-1)}(x_0)  y_2^{(n-1)}(x_0)  ...  y_n^{(n-1)}(x_0) ]  [ c_1; c_2; ...; c_n ]

    + [ y_p(x_0); y_p'(x_0); ...; y_p^{(n-1)}(x_0) ]  =  [ v_1; v_2; ...; v_n ].

From linear algebra we know that this system of equations has a unique solution only if the determinant of the matrix is nonzero. Note that the determinant of the matrix is just the Wronskian evaluated at x_0. Thus if the Wronskian vanishes at x_0, the initial value problem for the differential equation either has no solutions or infinitely many solutions. Such problems are said to be ill-posed. From Abel's formula for the Wronskian,

    W(x) = exp( -∫ p_{n-1}(x) dx ),

we see that the only way the Wronskian can vanish is if the value of the integral goes to +∞.
Example 18.4.1 Consider the initial value problem

    y'' - (2/x) y' + (2/x^2) y = 0,    y(0) = y'(0) = 1.

The Wronskian,

    W(x) = exp( -∫ (-2/x) dx ) = exp(2 log x) = x^2,

vanishes at x = 0. Thus this problem is not well-posed.
The general solution of the differential equation is

    y = c_1 x + c_2 x^2.

We see that the general solution cannot satisfy the initial conditions. If instead we had the initial conditions y(0) = 0, y'(0) = 1, then there would be an infinite number of solutions.
Example 18.4.2 Consider the initial value problem

    y'' - (2/x^2) y = 0,    y(0) = y'(0) = 1.

The Wronskian,

    W(x) = exp( -∫ 0 dx ) = 1,

does not vanish anywhere. However, this problem is not well-posed.
The general solution,

    y = c_1 x^{-1} + c_2 x^2,

cannot satisfy the initial conditions. Thus we see that a non-vanishing Wronskian does not imply that the problem is well-posed.
Result 18.4.1 Consider the initial value problem

    d^n y/dx^n + p_{n-1}(x) d^{n-1}y/dx^{n-1} + ... + p_1(x) dy/dx + p_0(x) y = 0,
    y(x_0) = v_1,  y'(x_0) = v_2,  ...,  y^{(n-1)}(x_0) = v_n.

If the Wronskian,

    W(x) = exp( -∫ p_{n-1}(x) dx ),

vanishes at x = x_0 then the problem is ill-posed. The problem may be ill-posed even if the Wronskian does not vanish.
18.5 The Fundamental Set of Solutions

Consider a set of linearly independent solutions {u_1, u_2, ..., u_n} to an n^th order linear homogeneous differential equation. This is called the fundamental set of solutions at x_0 if they satisfy the relations

    u_1(x_0) = 1          u_2(x_0) = 0          ...  u_n(x_0) = 0
    u_1'(x_0) = 0         u_2'(x_0) = 1         ...  u_n'(x_0) = 0
    ...
    u_1^{(n-1)}(x_0) = 0  u_2^{(n-1)}(x_0) = 0  ...  u_n^{(n-1)}(x_0) = 1.

Knowing the fundamental set of solutions is handy because it makes the task of solving an initial value problem trivial. Say we are given the initial conditions,

    y(x_0) = v_1,  y'(x_0) = v_2,  ...,  y^{(n-1)}(x_0) = v_n.

If the u_i's are a fundamental set then the solution that satisfies these constraints is just

    y = v_1 u_1(x) + v_2 u_2(x) + ... + v_n u_n(x).
Of course in general, a set of solutions is not the fundamental set. If the Wronskian of the solutions is nonzero and finite we can generate a fundamental set of solutions that are linear combinations of our original set. Consider the case of a second order equation. Let y_1, y_2 be two linearly independent solutions. We will generate the fundamental set of solutions, u_1, u_2,

    [ u_1 ]   [ c_11  c_12 ] [ y_1 ]
    [ u_2 ] = [ c_21  c_22 ] [ y_2 ].

For u_1, u_2 to satisfy the relations that define a fundamental set, they must satisfy the matrix equation

    [ u_1(x_0)  u_1'(x_0) ]   [ c_11  c_12 ] [ y_1(x_0)  y_1'(x_0) ]   [ 1  0 ]
    [ u_2(x_0)  u_2'(x_0) ] = [ c_21  c_22 ] [ y_2(x_0)  y_2'(x_0) ] = [ 0  1 ],

that is,

    [ c_11  c_12 ]   [ y_1(x_0)  y_1'(x_0) ]^{-1}
    [ c_21  c_22 ] = [ y_2(x_0)  y_2'(x_0) ]    .

If the Wronskian is non-zero and finite, we can solve for the constants c_ij, and thus find the fundamental set of solutions. To generalize this result to an equation of order n, simply replace all the 2x2 matrices and vectors of length 2 with nxn matrices and vectors of length n. I presented the case of n = 2 simply to save having to write out all the ellipses involved in the general case. (It also makes for easier reading.)
Example 18.5.1 Two linearly independent solutions to the differential equation y'' + y = 0 are y_1 = e^{ix} and y_2 = e^{-ix}.

    [ y_1(0)  y_1'(0) ]   [ 1   i ]
    [ y_2(0)  y_2'(0) ] = [ 1  -i ]

To find the fundamental set of solutions, u_1, u_2, at x = 0 we solve the equation

    [ c_11  c_12 ]   [ 1   i ]^{-1}     1   [ i   i ]
    [ c_21  c_22 ] = [ 1  -i ]      = ---- [ 1  -1 ].
                                      2i

The fundamental set is

    u_1 = (e^{ix} + e^{-ix})/2,    u_2 = (e^{ix} - e^{-ix})/(2i).

Using trigonometric identities we can rewrite these as

    u_1 = cos x,    u_2 = sin x.
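The same matrix inversion can be scripted. This sketch (assuming SymPy is available) reproduces u_1 = cos x and u_2 = sin x for Example 18.5.1:

```python
# Sketch: the fundamental-set construction of Example 18.5.1 in SymPy.
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.exp(sp.I * x), sp.exp(-sp.I * x)

# Rows [y_k(0), y_k'(0)]; the coefficient matrix is the inverse of this.
Y0 = sp.Matrix([[y.subs(x, 0), sp.diff(y, x).subs(x, 0)] for y in (y1, y2)])
C = Y0.inv()
u1 = C[0, 0] * y1 + C[0, 1] * y2
u2 = C[1, 0] * y1 + C[1, 1] * y2
print(sp.expand((u1 - sp.cos(x)).rewrite(sp.exp)))  # 0
print(sp.expand((u2 - sp.sin(x)).rewrite(sp.exp)))  # 0
```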
Result 18.5.1 The fundamental set of solutions at x = x_0, {u_1, u_2, ..., u_n}, to an n^th order linear differential equation, satisfy the relations

    u_1(x_0) = 1          u_2(x_0) = 0          ...  u_n(x_0) = 0
    u_1'(x_0) = 0         u_2'(x_0) = 1         ...  u_n'(x_0) = 0
    ...
    u_1^{(n-1)}(x_0) = 0  u_2^{(n-1)}(x_0) = 0  ...  u_n^{(n-1)}(x_0) = 1.

If the Wronskian of the solutions is nonzero and finite at the point x_0 then you can generate the fundamental set of solutions from any linearly independent set of solutions.
18.6 Adjoint Equations

For the n^th order linear differential operator

    L[y] = p_n d^n y/dx^n + p_{n-1} d^{n-1}y/dx^{n-1} + ... + p_0 y

(where the p_j are complex-valued functions) we define the adjoint of L,

    L*[y] = (-1)^n d^n/dx^n (conj(p_n) y) + (-1)^{n-1} d^{n-1}/dx^{n-1} (conj(p_{n-1}) y) + ... + conj(p_0) y.

Here conj(f) denotes the complex conjugate of f.
Example 18.6.1

    L[y] = x y'' + (1/x) y' + y

has the adjoint

    L*[y] = d^2/dx^2 [x y] - d/dx [(1/x) y] + y
          = x y'' + 2y' - (1/x) y' + (1/x^2) y + y
          = x y'' + (2 - 1/x) y' + (1 + 1/x^2) y.

Taking the adjoint of L* yields

    L**[y] = d^2/dx^2 [x y] - d/dx [(2 - 1/x) y] + (1 + 1/x^2) y
           = x y'' + 2y' - (2 - 1/x) y' - (1/x^2) y + (1 + 1/x^2) y
           = x y'' + (1/x) y' + y.

Thus by taking the adjoint of L*, we obtain the original operator. In general, L** = L.
Consider L[y] = p_n y^{(n)} + ... + p_0 y. If each of the p_k is k times continuously differentiable and u and v are n times continuously differentiable on some interval, then on that interval

    v L[u] - u L*[v] = d/dx B[u, v],

where B[u, v], the bilinear concomitant, is the bilinear form

    B[u, v] = sum_{m=1}^{n}  sum_{j+k=m-1, j>=0, k>=0}  (-1)^j u^{(k)} (p_m v)^{(j)}.

This equation is known as Lagrange's identity. If L is a second order operator then

    v L[u] - u L*[v] = d/dx [ u p_1 v + u' p_2 v - u (p_2 v)' ]
                     = u'' p_2 v + u' p_1 v - u ( p_2 v'' + (2 p_2' - p_1) v' + (p_2'' - p_1') v ).
Example 18.6.2 Verify Lagrange's identity for the second order operator, L[y] = p_2 y'' + p_1 y' + p_0 y.

    v L[u] - u L*[v] = v (p_2 u'' + p_1 u' + p_0 u) - u ( d^2/dx^2 (p_2 v) - d/dx (p_1 v) + p_0 v )
                     = v (p_2 u'' + p_1 u' + p_0 u) - u ( p_2 v'' + (2 p_2' - p_1) v' + (p_2'' - p_1' + p_0) v )
                     = u'' p_2 v + u' p_1 v - u ( p_2 v'' + (2 p_2' - p_1) v' + (p_2'' - p_1') v ).

We will not verify Lagrange's identity for the general case.
Integrating Lagrange's identity on its interval of validity gives us Green's formula,

    ∫_a^b ( v L[u] - u L*[v] ) dx = B[u, v]|_{x=b} - B[u, v]|_{x=a}.

Result 18.6.1 The adjoint of the operator

    L[y] = p_n d^n y/dx^n + p_{n-1} d^{n-1}y/dx^{n-1} + ... + p_0 y

is defined

    L*[y] = (-1)^n d^n/dx^n (conj(p_n) y) + (-1)^{n-1} d^{n-1}/dx^{n-1} (conj(p_{n-1}) y) + ... + conj(p_0) y.

If each of the p_k is k times continuously differentiable and u and v are n times continuously differentiable, then Lagrange's identity states

    v L[u] - u L*[v] = d/dx B[u, v] = d/dx  sum_{m=1}^{n}  sum_{j+k=m-1, j>=0, k>=0}  (-1)^j u^{(k)} (p_m v)^{(j)}.

Integrating Lagrange's identity on its domain of validity yields Green's formula,

    ∫_a^b ( v L[u] - u L*[v] ) dx = B[u, v]|_{x=b} - B[u, v]|_{x=a}.
18.7 Exercises

Exercise 18.1
Determine a necessary condition for a second order linear differential equation to be exact.
Determine an equation for the integrating factor for a second order linear differential equation.
Hint, Solution

Exercise 18.2
Show that

    y'' + x y' + y = 0

is exact. Find the solution.
Hint, Solution

Nature of Solutions

Exercise 18.3
On what intervals do the following problems have unique solutions?

1. x y'' + 3y = x
2. x(x - 1) y'' + 3x y' + 4y = 2
3. e^x y'' + x^2 y' + y = tan x

Hint, Solution

Exercise 18.4
Suppose you are able to find three linearly independent particular solutions u_1(x), u_2(x) and u_3(x) of the second order linear differential equation L[y] = f(x). What is the general solution?
Hint, Solution

Transformation to a First Order System

The Wronskian

Well-Posed Problems

The Fundamental Set of Solutions

Exercise 18.5
Two solutions of y'' - y = 0 are e^x and e^{-x}. Show that the solutions are independent. Find the fundamental set of solutions at x = 0.
Hint, Solution

Adjoint Equations

Exercise 18.6
Find the adjoint of the Bessel equation of order ν,

    x^2 y'' + x y' + (x^2 - ν^2) y = 0,

and the Legendre equation of order λ,

    (1 - x^2) y'' - 2x y' + λ(λ + 1) y = 0.

Hint, Solution

Exercise 18.7
Find the adjoint of

    x^2 y'' - x y' + 3y = 0.

Hint, Solution
18.8 Hints

Hint 18.1

Hint 18.2

Nature of Solutions

Hint 18.3

Hint 18.4
The difference of any two of the u_i's is a homogeneous solution.

Transformation to a First Order System

The Wronskian

Well-Posed Problems

The Fundamental Set of Solutions

Hint 18.5

Adjoint Equations

Hint 18.6

Hint 18.7
18.9 Solutions

Solution 18.1
The second order, linear, homogeneous differential equation is

    P(x) y'' + Q(x) y' + R(x) y = 0.    (18.3)

The second order, linear, homogeneous, exact differential equation is

    d/dx [ P(x) dy/dx ] + d/dx [ f(x) y ] = 0,    (18.4)

that is,

    P(x) y'' + (P'(x) + f(x)) y' + f'(x) y = 0.

Equating the coefficients of Equations 18.3 and 18.4 yields the set of equations,

    P'(x) + f(x) = Q(x),    f'(x) = R(x).

We differentiate the first equation and substitute in the expression for f'(x) from the second equation to determine a necessary condition for exactness,

    P''(x) - Q'(x) + R(x) = 0.

We multiply Equation 18.3 by the integrating factor μ(x) to obtain,

    μ(x) P(x) y'' + μ(x) Q(x) y' + μ(x) R(x) y = 0.    (18.5)

The corresponding exact equation is of the form,

    d/dx [ μ(x) P(x) dy/dx ] + d/dx [ f(x) y ] = 0,    (18.6)

that is,

    μ(x) P(x) y'' + (μ'(x) P(x) + μ(x) P'(x) + f(x)) y' + f'(x) y = 0.

Equating the coefficients of Equations 18.5 and 18.6 yields the set of equations,

    μ' P + μ P' + f = μ Q,    f' = μ R.

We differentiate the first equation and substitute in the expression for f' from the second equation to find a differential equation for μ(x).

    μ'' P + μ' P' + μ' P' + μ P'' + μ R = μ' Q + μ Q'
    P μ'' + (2 P' - Q) μ' + (P'' - Q' + R) μ = 0
Solution 18.2
We consider the differential equation,

    y'' + x y' + y = 0.

Since

    (1)'' - (x)' + 1 = 0

we see that this is an exact equation. We rearrange terms to form exact derivatives and then integrate.

    (y')' + (x y)' = 0
    y' + x y = c
    d/dx [ e^{x^2/2} y ] = c e^{x^2/2}
    y = c e^{-x^2/2} ∫ e^{x^2/2} dx + d e^{-x^2/2}
Nature of Solutions

Solution 18.3
Consider the initial value problem,

    y'' + p(x) y' + q(x) y = f(x),
    y(x_0) = y_0,  y'(x_0) = y_1.

If p(x), q(x) and f(x) are continuous on an interval (a...b) with x_0 in (a...b), then the problem has a unique solution on that interval.

1.  x y'' + 3y = x
    y'' + (3/x) y = 1

    Unique solutions exist on the intervals (-∞...0) and (0...∞).

2.  x(x - 1) y'' + 3x y' + 4y = 2
    y'' + (3/(x - 1)) y' + (4/(x(x - 1))) y = 2/(x(x - 1))

    Unique solutions exist on the intervals (-∞...0), (0...1) and (1...∞).

3.  e^x y'' + x^2 y' + y = tan x
    y'' + x^2 e^{-x} y' + e^{-x} y = e^{-x} tan x

    Unique solutions exist on the intervals ( (2n-1)π/2 ... (2n+1)π/2 ) for n in Z.
Solution 18.4
We know that the general solution is

    y = y_p + c_1 y_1 + c_2 y_2,

where y_p is a particular solution and y_1 and y_2 are linearly independent homogeneous solutions. Since y_p can be any particular solution, we choose y_p = u_1. Now we need to find two homogeneous solutions. Since L[u_i] = f(x), L[u_1 - u_2] = L[u_2 - u_3] = 0. Finally, we note that since the u_i's are linearly independent, y_1 = u_1 - u_2 and y_2 = u_2 - u_3 are linearly independent. Thus the general solution is

    y = u_1 + c_1 (u_1 - u_2) + c_2 (u_2 - u_3).
Transformation to a First Order System

The Wronskian

Well-Posed Problems

The Fundamental Set of Solutions

Solution 18.5
The Wronskian of the solutions is

    W(x) = det[ e^x    e^{-x}
                e^x   -e^{-x} ] = -2.

Since the Wronskian is nonzero, the solutions are independent.
The fundamental set of solutions, {u_1, u_2}, is a linear combination of e^x and e^{-x},

    [ u_1 ]   [ c_11  c_12 ] [ e^x    ]
    [ u_2 ] = [ c_21  c_22 ] [ e^{-x} ].

The coefficients are

    [ c_11  c_12 ]   [ e^0   e^0 ]^{-1}   [ 1   1 ]^{-1}    1  [ 1   1 ]
    [ c_21  c_22 ] = [ e^0  -e^0 ]      = [ 1  -1 ]      = --- [ 1  -1 ],
                                                            2

so

    u_1 = (e^x + e^{-x})/2,    u_2 = (e^x - e^{-x})/2.

The fundamental set of solutions at x = 0 is

    { cosh x, sinh x }.
Adjoint Equations

Solution 18.6
1. The Bessel equation of order ν is

    x^2 y'' + x y' + (x^2 - ν^2) y = 0.

The adjoint equation is

    x^2 μ'' + (4x - x) μ' + (2 - 1 + x^2 - ν^2) μ = 0
    x^2 μ'' + 3x μ' + (1 + x^2 - ν^2) μ = 0.

2. The Legendre equation of order λ is

    (1 - x^2) y'' - 2x y' + λ(λ + 1) y = 0.

The adjoint equation is

    (1 - x^2) μ'' + (-4x + 2x) μ' + (-2 + 2 + λ(λ + 1)) μ = 0
    (1 - x^2) μ'' - 2x μ' + λ(λ + 1) μ = 0.
Solution 18.7
The adjoint of

    x^2 y'' - x y' + 3y = 0

is

    d^2/dx^2 (x^2 y) + d/dx (x y) + 3y = 0
    (x^2 y'' + 4x y' + 2y) + (x y' + y) + 3y = 0
    x^2 y'' + 5x y' + 6y = 0.
Chapter 19

Techniques for Linear Differential Equations

    My new goal in life is to take the meaningless drivel out of human interaction.
        -Dave Ozenne

The n^th order linear homogeneous differential equation has the form

    y^{(n)} + a_{n-1}(x) y^{(n-1)} + ... + a_1(x) y' + a_0(x) y = 0.

In general it is not possible to solve second order and higher linear differential equations. In this chapter we will examine equations that have special forms which allow us to either reduce the order of the equation or solve it.

19.1 Constant Coefficient Equations

The n^th order constant coefficient differential equation has the form

    y^{(n)} + a_{n-1} y^{(n-1)} + ... + a_1 y' + a_0 y = 0.

We will find that solving a constant coefficient differential equation is no more difficult than finding the roots of a polynomial.
19.1.1 Second Order Equations

Factoring the Differential Equation. Consider the second order constant coefficient differential equation

    y'' + 2a y' + b y = 0.    (19.1)

Just as we can factor the polynomial,

    λ^2 + 2aλ + b = (λ - α)(λ - β),    (19.2)

where

    α = -a + sqrt(a^2 - b)  and  β = -a - sqrt(a^2 - b),

we can factor the differential equation,

    [ d^2/dx^2 + 2a d/dx + b ] y = [ d/dx - α ][ d/dx - β ] y.

Once we have factored the differential equation, we can solve it by solving a sequence of two first order differential equations. We set u = [ d/dx - β ] y to obtain a first order equation,

    [ d/dx - α ] u = 0,

which has the solution

    u = c_1 e^{αx}.

To find the solution of Equation 19.1, we solve

    [ d/dx - β ] y = u = c_1 e^{αx}.

We multiply by the integrating factor and integrate.

    d/dx [ e^{-βx} y ] = c_1 e^{(α-β)x}
    y = c_1 e^{βx} ∫ e^{(α-β)x} dx + c_2 e^{βx}

We first consider the case when α and β are distinct.

    y = c_1 e^{βx} (1/(α-β)) e^{(α-β)x} + c_2 e^{βx}

We choose new constants to write the solution in a better form.

    y = c_1 e^{αx} + c_2 e^{βx}

Now we consider the case α = β.

    y = c_1 e^{αx} ∫ 1 dx + c_2 e^{αx}
    y = c_1 x e^{αx} + c_2 e^{αx}

The solution of Equation 19.1 is

    y = c_1 e^{αx} + c_2 e^{βx},    α ≠ β,
    y = c_1 e^{αx} + c_2 x e^{αx},  α = β.
Example 19.1.1 Consider the differential equation: y'' + y = 0. We factor the equation,

    [ d/dx - i ][ d/dx + i ] y = 0.

The general solution of the differential equation is

    y = c_1 e^{ix} + c_2 e^{-ix}.

Example 19.1.2 Consider the differential equation: y'' = 0. We factor the equation,

    [ d/dx - 0 ][ d/dx - 0 ] y = 0.

The general solution of the differential equation is

    y = c_1 e^{0x} + c_2 x e^{0x}
      = c_1 + c_2 x.
Substituting the Form of the Solution into the Differential Equation. Note that if we substitute y = e^{λx} into the differential equation 19.1, we will obtain the quadratic polynomial equation 19.2 for λ.

    y'' + 2a y' + b y = 0
    λ^2 e^{λx} + 2aλ e^{λx} + b e^{λx} = 0
    λ^2 + 2aλ + b = 0

This gives us a superficially different method for solving constant coefficient equations. We substitute y = e^{λx} into the differential equation. Let α and β be the roots of the quadratic in λ. If the roots are distinct, then the linearly independent solutions are y_1 = e^{αx} and y_2 = e^{βx}. If the quadratic has a double root at λ = α, then the linearly independent solutions are y_1 = e^{αx} and y_2 = x e^{αx}.

Example 19.1.3 Consider the equation

    y'' - 3y' + 2y = 0.

The substitution y = e^{λx} yields

    λ^2 - 3λ + 2 = (λ - 1)(λ - 2) = 0.

Thus the solutions are e^x and e^{2x}.

Example 19.1.4 Consider the equation

    y'' - 4y' + 4y = 0.

The substitution y = e^{λx} yields

    λ^2 - 4λ + 4 = (λ - 2)^2 = 0.

Thus the solutions are e^{2x} and x e^{2x}.
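The method reduces to factoring a polynomial, which is easy to automate. A sketch assuming SymPy (the book's exercises use Mathematica notebooks):

```python
# Sketch: characteristic polynomials of Examples 19.1.3 and 19.1.4.
import sympy as sp

lam = sp.symbols('lam')
print(sp.factor(lam**2 - 3*lam + 2))  # factors as (lam - 1)(lam - 2)
print(sp.roots(lam**2 - 4*lam + 4))   # {2: 2}: a double root, so e^(2x), x e^(2x)
```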
Shift Invariance. Note that if u(x) is a solution of a constant coefficient equation, then u(x + c) is also a solution. This is useful in applying initial or boundary conditions.

Example 19.1.5 Consider the problem

    y'' - 3y' + 2y = 0,    y(0) = a,  y'(0) = b.

We know that the general solution is

    y = c_1 e^x + c_2 e^{2x}.

Applying the initial conditions, we obtain the equations,

    c_1 + c_2 = a,    c_1 + 2c_2 = b.

The solution is

    y = (2a - b) e^x + (b - a) e^{2x}.

Now suppose we wish to solve the same differential equation with the boundary conditions y(1) = a and y'(1) = b. All we have to do is shift the solution to the right.

    y = (2a - b) e^{x-1} + (b - a) e^{2(x-1)}.
Result 19.1.1 Consider the second order constant coefficient equation

    y'' + 2a y' + b y = 0.

The general solution of this differential equation is

    y = e^{-ax} ( c_1 e^{sqrt(a^2-b) x} + c_2 e^{-sqrt(a^2-b) x} )      if a^2 > b,
    y = e^{-ax} ( c_1 cos(sqrt(b-a^2) x) + c_2 sin(sqrt(b-a^2) x) )    if a^2 < b,
    y = e^{-ax} ( c_1 + c_2 x )                                        if a^2 = b.

The fundamental set of solutions at x = 0 is

    { e^{-ax} [ cosh(sqrt(a^2-b) x) + (a/sqrt(a^2-b)) sinh(sqrt(a^2-b) x) ],
      e^{-ax} (1/sqrt(a^2-b)) sinh(sqrt(a^2-b) x) }                          if a^2 > b,

    { e^{-ax} [ cos(sqrt(b-a^2) x) + (a/sqrt(b-a^2)) sin(sqrt(b-a^2) x) ],
      e^{-ax} (1/sqrt(b-a^2)) sin(sqrt(b-a^2) x) }                           if a^2 < b,

    { (1 + ax) e^{-ax},  x e^{-ax} }                                         if a^2 = b.

To obtain the fundamental set of solutions at the point x = ξ, substitute (x - ξ) for x in the above solutions.
19.1.2 Higher Order Equations

The constant coefficient equation of order n has the form

    L[y] = y^{(n)} + a_{n-1} y^{(n-1)} + ... + a_1 y' + a_0 y = 0.    (19.3)

The substitution y = e^{λx} will transform this differential equation into an algebraic equation.

    L[e^{λx}] = λ^n e^{λx} + a_{n-1} λ^{n-1} e^{λx} + ... + a_1 λ e^{λx} + a_0 e^{λx} = 0
    ( λ^n + a_{n-1} λ^{n-1} + ... + a_1 λ + a_0 ) e^{λx} = 0
    λ^n + a_{n-1} λ^{n-1} + ... + a_1 λ + a_0 = 0

Assume that the roots of this equation, λ_1, ..., λ_n, are distinct. Then the n linearly independent solutions of Equation 19.3 are

    e^{λ_1 x}, ..., e^{λ_n x}.

If the roots of the algebraic equation are not distinct then we will not obtain all the solutions of the differential equation. Suppose that λ_1 = α is a double root. We substitute y = e^{λx} into the differential equation.

    L[e^{λx}] = [ (λ - α)^2 (λ - λ_3) ... (λ - λ_n) ] e^{λx} = 0

Setting λ = α will make the left side of the equation zero. Thus y = e^{αx} is a solution. Now we differentiate both sides of the equation with respect to λ and interchange the order of differentiation.

    d/dλ L[e^{λx}] = L[ d/dλ e^{λx} ] = L[ x e^{λx} ]

Let p(λ) = (λ - λ_3) ... (λ - λ_n). We calculate L[ x e^{λx} ] by applying L and then differentiating with respect to λ.

    L[ x e^{λx} ] = d/dλ L[e^{λx}]
                  = d/dλ [ (λ - α)^2 (λ - λ_3) ... (λ - λ_n) ] e^{λx}
                  = d/dλ [ (λ - α)^2 p(λ) ] e^{λx}
                  = [ 2(λ - α) p(λ) + (λ - α)^2 p'(λ) + (λ - α)^2 p(λ) x ] e^{λx}
                  = (λ - α) [ 2 p(λ) + (λ - α) p'(λ) + (λ - α) p(λ) x ] e^{λx}

Since setting λ = α will make this expression zero, L[ x e^{αx} ] = 0, so x e^{αx} is a solution of Equation 19.3. You can verify that e^{αx} and x e^{αx} are linearly independent. Now we have generated all of the solutions for the differential equation.

If λ = α is a root of multiplicity m then by repeatedly differentiating with respect to λ you can show that the corresponding solutions are

    e^{αx}, x e^{αx}, x^2 e^{αx}, ..., x^{m-1} e^{αx}.
Example 19.1.6 Consider the equation

    y''' - 3y' + 2y = 0.

The substitution y = e^{λx} yields

    λ^3 - 3λ + 2 = (λ - 1)^2 (λ + 2) = 0.

Thus the general solution is

    y = c_1 e^x + c_2 x e^x + c_3 e^{-2x}.
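Each member of the solution set in Example 19.1.6 can be checked against the equation directly. A sketch assuming SymPy:

```python
# Sketch: verify the three solutions of Example 19.1.6.
import sympy as sp

x = sp.symbols('x')
# Solutions coming from the factorization (lam - 1)^2 (lam + 2) = 0.
for f in (sp.exp(x), x * sp.exp(x), sp.exp(-2 * x)):
    residual = sp.diff(f, x, 3) - 3 * sp.diff(f, x) + 2 * f
    print(sp.simplify(residual))  # 0 for each solution
```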
19.1.3 Real-Valued Solutions

If the coefficients of the differential equation are real, then the solution can be written in terms of real-valued functions (Result 18.1.2). For a real root λ = α of the polynomial in λ, the corresponding solution, y = e^{αx}, is real-valued.

Now recall that the complex roots of a polynomial with real coefficients occur in complex conjugate pairs. Assume that α ± iβ are roots of

    λ^n + a_{n-1} λ^{n-1} + ... + a_1 λ + a_0 = 0.

The corresponding solutions of the differential equation are e^{(α+iβ)x} and e^{(α-iβ)x}. Note that the linear combinations

    ( e^{(α+iβ)x} + e^{(α-iβ)x} ) / 2 = e^{αx} cos(βx),    ( e^{(α+iβ)x} - e^{(α-iβ)x} ) / (i2) = e^{αx} sin(βx),

are real-valued solutions of the differential equation. We could also obtain real-valued solutions by taking the real and imaginary parts of either e^{(α+iβ)x} or e^{(α-iβ)x}.

    Re( e^{(α+iβ)x} ) = e^{αx} cos(βx),    Im( e^{(α+iβ)x} ) = e^{αx} sin(βx)
Result 19.1.2 Consider the n^th order constant coefficient equation

    d^n y/dx^n + a_{n-1} d^{n-1}y/dx^{n-1} + ... + a_1 dy/dx + a_0 y = 0.

Let the factorization of the algebraic equation obtained with the substitution y = e^{λx} be

    (λ - λ_1)^{m_1} (λ - λ_2)^{m_2} ... (λ - λ_p)^{m_p} = 0.

A set of linearly independent solutions is given by

    { e^{λ_1 x}, x e^{λ_1 x}, ..., x^{m_1 - 1} e^{λ_1 x}, ..., e^{λ_p x}, x e^{λ_p x}, ..., x^{m_p - 1} e^{λ_p x} }.

If the coefficients of the differential equation are real, then we can find a real-valued set of solutions.
Example 19.1.7 Consider the equation

    d^4 y/dx^4 + 2 d^2 y/dx^2 + y = 0.

The substitution y = e^{λx} yields

    λ^4 + 2λ^2 + 1 = (λ - i)^2 (λ + i)^2 = 0.

Thus the linearly independent solutions are

    e^{ix}, x e^{ix}, e^{-ix} and x e^{-ix}.

Noting that

    e^{ix} = cos(x) + i sin(x),

we can write the general solution in terms of sines and cosines.

    y = c_1 cos x + c_2 sin x + c_3 x cos x + c_4 x sin x

Example 19.1.8 Consider the equation

    y'' - 2y' + 2y = 0.

The substitution y = e^{λx} yields

    λ^2 - 2λ + 2 = (λ - 1 - i)(λ - 1 + i) = 0.

The linearly independent solutions are

    e^{(1+i)x} and e^{(1-i)x}.

We can write the general solution in terms of real functions.

    y = c_1 e^x cos x + c_2 e^x sin x
19.2 Euler Equations

Consider the equation

    L[y] = x^2 d^2y/dx^2 + a x dy/dx + b y = 0,    x > 0.

Let's say, for example, that y has units of distance and x has units of time. Note that each term in the differential equation has the same dimension.

    (time)^2 (distance)/(time)^2 = (time) (distance)/(time) = (distance)

Thus this is a second order Euler, or equidimensional, equation. We know that the first order Euler equation, x y' + a y = 0, has the solution y = c x^{-a}. Thus for the second order equation we will try a solution of the form y = x^λ. The substitution y = x^λ will transform the differential equation into an algebraic equation.

    L[x^λ] = x^2 d^2/dx^2 [x^λ] + a x d/dx [x^λ] + b x^λ = 0
    λ(λ - 1) x^λ + aλ x^λ + b x^λ = 0
    λ(λ - 1) + aλ + b = 0

Factoring yields

    (λ - λ_1)(λ - λ_2) = 0.

If the two roots, λ_1 and λ_2, are distinct then the general solution is

    y = c_1 x^{λ_1} + c_2 x^{λ_2}.

If the roots are not distinct, λ_1 = λ_2 = α, then we only have the one solution, y = x^α. To generate the other solution we use the same approach as for the constant coefficient equation. We substitute y = x^λ into the differential equation and differentiate with respect to λ.

    d/dλ L[x^λ] = L[ d/dλ x^λ ] = L[ ln x · x^λ ]

Note that

    d/dλ x^λ = d/dλ e^{λ ln x} = ln x · e^{λ ln x} = ln x · x^λ.

Now we apply L and then differentiate with respect to λ.

    d/dλ L[x^λ] = d/dλ (λ - α)^2 x^λ
                = 2(λ - α) x^λ + (λ - α)^2 ln x · x^λ

Equating these two results,

    L[ ln x · x^λ ] = 2(λ - α) x^λ + (λ - α)^2 ln x · x^λ.

Setting λ = α will make the right hand side zero. Thus y = ln x · x^α is a solution.

If you are in the mood for a little algebra you can show by repeatedly differentiating with respect to λ that if λ = α is a root of multiplicity m in an n^th order Euler equation then the associated solutions are

    x^α, ln x · x^α, (ln x)^2 x^α, ..., (ln x)^{m-1} x^α.

Example 19.2.1 Consider the Euler equation

    x y'' - y' + y/x = 0.

The substitution y = x^λ yields the algebraic equation

    λ(λ - 1) - λ + 1 = (λ - 1)^2 = 0.

Thus the general solution is

    y = c_1 x + c_2 x ln x.
19.2.1 Real-Valued Solutions

If the coefficients of the Euler equation are real, then the solution can be written in terms of functions that are real-valued when x is real and positive (Result 18.1.2). If α ± iβ are the roots of

    λ(λ - 1) + aλ + b = 0

then the corresponding solutions of the Euler equation are

    x^{α + iβ} and x^{α - iβ}.

We can rewrite these as

    x^α e^{iβ ln x} and x^α e^{-iβ ln x}.

Note that the linear combinations

    ( x^α e^{iβ ln x} + x^α e^{-iβ ln x} ) / 2 = x^α cos(β ln x),  and
    ( x^α e^{iβ ln x} - x^α e^{-iβ ln x} ) / (2i) = x^α sin(β ln x),

are real-valued solutions when x is real and positive. Equivalently, we could take the real and imaginary parts of either x^{α + iβ} or x^{α - iβ}.

    Re( x^α e^{iβ ln x} ) = x^α cos(β ln x),    Im( x^α e^{iβ ln x} ) = x^α sin(β ln x)
Result 19.2.1 Consider the second order Euler equation

    x^2 y'' + (2a + 1) x y' + b y = 0.

The general solution of this differential equation is

    y = x^{-a} ( c_1 x^{sqrt(a^2-b)} + c_2 x^{-sqrt(a^2-b)} )                 if a^2 > b,
    y = x^{-a} ( c_1 cos(sqrt(b-a^2) ln x) + c_2 sin(sqrt(b-a^2) ln x) )     if a^2 < b,
    y = x^{-a} ( c_1 + c_2 ln x )                                            if a^2 = b.

The fundamental set of solutions at x = ξ is

    { (x/ξ)^{-a} [ cosh( sqrt(a^2-b) ln(x/ξ) ) + (a/sqrt(a^2-b)) sinh( sqrt(a^2-b) ln(x/ξ) ) ],
      (x/ξ)^{-a} (ξ/sqrt(a^2-b)) sinh( sqrt(a^2-b) ln(x/ξ) ) }                                    if a^2 > b,

    { (x/ξ)^{-a} [ cos( sqrt(b-a^2) ln(x/ξ) ) + (a/sqrt(b-a^2)) sin( sqrt(b-a^2) ln(x/ξ) ) ],
      (x/ξ)^{-a} (ξ/sqrt(b-a^2)) sin( sqrt(b-a^2) ln(x/ξ) ) }                                     if a^2 < b,

    { (x/ξ)^{-a} ( 1 + a ln(x/ξ) ),  (x/ξ)^{-a} ξ ln(x/ξ) }                                       if a^2 = b.
Example 19.2.2 Consider the Euler equation

    x^2 y'' - 3x y' + 13y = 0.

The substitution y = x^λ yields

    λ(λ - 1) - 3λ + 13 = (λ - 2 - i3)(λ - 2 + i3) = 0.

The linearly independent solutions are

    { x^{2+i3}, x^{2-i3} }.

We can put this in a more understandable form,

    x^{2+i3} = x^2 e^{i3 ln x} = x^2 cos(3 ln x) + i x^2 sin(3 ln x).

We can write the general solution in terms of real-valued functions.

    y = c_1 x^2 cos(3 ln x) + c_2 x^2 sin(3 ln x)
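The real-valued solutions of Example 19.2.2 can be checked against the Euler equation directly. A sketch assuming SymPy:

```python
# Sketch: verify the real-valued solutions of Example 19.2.2.
import sympy as sp

x = sp.symbols('x', positive=True)
for f in (x**2 * sp.cos(3 * sp.log(x)), x**2 * sp.sin(3 * sp.log(x))):
    residual = x**2 * sp.diff(f, x, 2) - 3 * x * sp.diff(f, x) + 13 * f
    print(sp.simplify(residual))  # 0 for each solution
```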
Result 19.2.2 Consider the n^th order Euler equation

    x^n d^n y/dx^n + a_{n-1} x^{n-1} d^{n-1}y/dx^{n-1} + ... + a_1 x dy/dx + a_0 y = 0.

Let the factorization of the algebraic equation obtained with the substitution y = x^λ be

    (λ - λ_1)^{m_1} (λ - λ_2)^{m_2} ... (λ - λ_p)^{m_p} = 0.

A set of linearly independent solutions is given by

    { x^{λ_1}, ln x · x^{λ_1}, ..., (ln x)^{m_1 - 1} x^{λ_1}, ..., x^{λ_p}, ln x · x^{λ_p}, ..., (ln x)^{m_p - 1} x^{λ_p} }.

If the coefficients of the differential equation are real, then we can find a set of solutions that are real-valued when x is real and positive.
19.3 Exact Equations

Exact equations have the form

    d/dx F(x, y, y', y'', ...) = f(x).

If you can write an equation in the form of an exact equation, you can integrate to reduce the order by one (or solve the equation, for first order). We will consider a few examples to illustrate the method.

Example 19.3.1 Consider the equation

    y'' + x^2 y' + 2x y = 0.

We can rewrite this as

    d/dx [ y' + x^2 y ] = 0.

Integrating yields a first order inhomogeneous equation,

    y' + x^2 y = c_1.

We multiply by the integrating factor I(x) = exp( ∫ x^2 dx ) to make this an exact equation.

    d/dx [ e^{x^3/3} y ] = c_1 e^{x^3/3}
    e^{x^3/3} y = c_1 ∫ e^{x^3/3} dx + c_2
    y = c_1 e^{-x^3/3} ∫ e^{x^3/3} dx + c_2 e^{-x^3/3}
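The key step in Example 19.3.1 is recognizing the left side as a total derivative; that can be confirmed symbolically. A sketch assuming SymPy:

```python
# Sketch: confirm that the equation of Example 19.3.1 is exact,
# i.e. a total derivative of y' + x^2 y.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

lhs = sp.diff(y, x, 2) + x**2 * sp.diff(y, x) + 2 * x * y
total = sp.diff(sp.diff(y, x) + x**2 * y, x)
print(sp.simplify(lhs - total))  # 0: the equation is exact
```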
Result 19.3.1 If you can write a differential equation in the form

    d/dx F(x, y, y', y'', ...) = f(x),

then you can integrate to reduce the order of the equation,

    F(x, y, y', y'', ...) = ∫ f(x) dx + c.
19.4 Equations Without Explicit Dependence on y

Example 19.4.1 Consider the equation

    y'' + sqrt(x) y' = 0.

This is a second order equation for y, but note that it is a first order equation for y'. We can solve directly for y'.

    d/dx [ exp( (2/3) x^{3/2} ) y' ] = 0
    y' = c_1 exp( -(2/3) x^{3/2} )

Now we just integrate to get the solution for y.

    y = c_1 ∫ exp( -(2/3) x^{3/2} ) dx + c_2

Result 19.4.1 If an n^th order equation does not explicitly depend on y then you can consider it as an equation of order n - 1 for y'.
19.5 Reduction of Order

Consider the second order linear equation

    L[y] ≡ y'' + p(x) y' + q(x) y = f(x).

Suppose that we know one homogeneous solution y_1. We make the substitution y = u y_1 and use that L[y_1] = 0.

    L[u y_1] = u'' y_1 + 2u' y_1' + u y_1'' + p (u' y_1 + u y_1') + q u y_1 = 0
    u'' y_1 + u' (2 y_1' + p y_1) + u (y_1'' + p y_1' + q y_1) = 0
    u'' y_1 + u' (2 y_1' + p y_1) = 0

Thus we have reduced the problem to a first order equation for u'. An analogous result holds for higher order equations.

Result 19.5.1 Consider the n^th order linear differential equation

    y^{(n)} + p_{n-1}(x) y^{(n-1)} + ... + p_1(x) y' + p_0(x) y = f(x).

Let y_1 be a solution of the homogeneous equation. The substitution y = u y_1 will transform the problem into an (n-1)^th order equation for u'. For the second order problem

    y'' + p(x) y' + q(x) y = f(x)

this reduced equation is

    u'' y_1 + u' (2 y_1' + p y_1) = f(x).
Example 19.5.1 Consider the equation

    y'' + x y' - y = 0.

By inspection we see that y_1 = x is a solution. We would like to find another linearly independent solution. The substitution y = x u yields

    x u'' + (2 + x^2) u' = 0
    u'' + (2/x + x) u' = 0.

The integrating factor is I(x) = exp(2 ln x + x^2/2) = x^2 exp(x^2/2).

    d/dx [ x^2 e^{x^2/2} u' ] = 0
    u' = c_1 x^{-2} e^{-x^2/2}
    u = c_1 ∫ x^{-2} e^{-x^2/2} dx + c_2
    y = c_1 x ∫ x^{-2} e^{-x^2/2} dx + c_2 x

Thus we see that a second solution is

    y_2 = x ∫ x^{-2} e^{-x^2/2} dx.
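The second solution of Example 19.5.1 can be checked without evaluating the integral. This is a sketch assuming SymPy; the lower limit 1 just fixes one particular antiderivative.

```python
# Sketch: check the reduction-of-order solution of Example 19.5.1.
import sympy as sp

x, t = sp.symbols('x t', positive=True)
g = sp.Integral(sp.exp(-t**2 / 2) / t**2, (t, 1, x))  # one antiderivative
y2 = x * g

# Residual of y'' + x y' - y; the unevaluated integrals cancel.
residual = sp.diff(y2, x, 2) + x * sp.diff(y2, x) - y2
print(sp.simplify(residual))  # 0
```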
19.6 *Reduction of Order and the Adjoint Equation

Let L be the linear differential operator

    L[y] = p_n d^n y/dx^n + p_{n-1} d^{n-1}y/dx^{n-1} + ... + p_0 y,

where each p_j is a j times continuously differentiable complex valued function. Recall that the adjoint of L is

    L*[y] = (-1)^n d^n/dx^n (conj(p_n) y) + (-1)^{n-1} d^{n-1}/dx^{n-1} (conj(p_{n-1}) y) + ... + conj(p_0) y.

If u and v are n times continuously differentiable, then Lagrange's identity states

    v L[u] - u L*[v] = d/dx B[u, v],

where

    B[u, v] = sum_{m=1}^{n}  sum_{j+k=m-1, j>=0, k>=0}  (-1)^j u^{(k)} (p_m v)^{(j)}.

For second order equations,

    B[u, v] = u p_1 v + u' p_2 v - u (p_2 v)'.

(See Section 18.6.)

If we can find a solution to the homogeneous adjoint equation, L*[y] = 0, then we can reduce the order of the equation L[y] = f(x). Let ψ satisfy L*[ψ] = 0. Substituting u = y, v = ψ into Lagrange's identity yields

    ψ L[y] - y L*[ψ] = d/dx B[y, ψ]
    ψ L[y] = d/dx B[y, ψ].

The equation L[y] = f(x) is equivalent to the equation

    d/dx B[y, ψ] = ψ f
    B[y, ψ] = ∫ ψ(x) f(x) dx,

which is a linear equation in y of order n - 1.

Example 19.6.1 Consider the equation

    L[y] = y'' - x^2 y' - 2x y = 0.

Method 1. Note that this is an exact equation.

    d/dx ( y' - x^2 y ) = 0
    y' - x^2 y = c_1
    d/dx [ e^{-x^3/3} y ] = c_1 e^{-x^3/3}
    y = c_1 e^{x^3/3} ∫ e^{-x^3/3} dx + c_2 e^{x^3/3}

Method 2. The adjoint equation is

    L*[y] = y'' + x^2 y' = 0.

By inspection we see that ψ = (constant) is a solution of the adjoint equation. To simplify the algebra we will choose ψ = 1. Thus the equation L[y] = 0 is equivalent to

    B[y, 1] = c_1
    y (-x^2) + d/dx[y] (1) - y d/dx[1] = c_1
    y' - x^2 y = c_1.

By using the adjoint equation to reduce the order we obtain the same solution as with Method 1.
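Method 2 of Example 19.6.1 can be confirmed symbolically: with ψ = 1 the bilinear concomitant B[y, 1] = y' - x^2 y recovers L[y] on differentiation. A sketch assuming SymPy:

```python
# Sketch: d/dx B[y, 1] reproduces L[y] for Example 19.6.1.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

B = sp.diff(y, x) - x**2 * y                             # B[y, 1]
L = sp.diff(y, x, 2) - x**2 * sp.diff(y, x) - 2 * x * y  # L[y]
print(sp.simplify(sp.diff(B, x) - L))  # 0
```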
19.7 Exercises
Constant Coecient Equations
Exercise 19.1 (mathematica/ode/techniques linear/constant.nb)
Find the solution of each one of the following initial value problems. Sketch the graph of the solution and describe
its behavior as t increases.
1. 6y
tt
5y
t
+y = 0, y(0) = 4, y
t
(0) = 0
2. y
tt
2y
t
+ 5y = 0, y(/2) = 0, y
t
(/2) = 2
3. y
tt
+ 4y
t
+ 4y = 0, y(1) = 2, y
t
(1) = 1
Hint, Solution
Exercise 19.2 (mathematica/ode/techniques linear/constant.nb)
Substitute y = e
x
to nd two linearly independent solutions to
y
tt
4y
t
+ 13y = 0.
that are real-valued when x is real-valued.
Hint, Solution
Exercise 19.3 (mathematica/ode/techniques linear/constant.nb)
Find the general solution to
y
ttt
y
tt
+y
t
y = 0.
Write the solution in terms of functions that are real-valued when x is real-valued.
Hint, Solution
807
Exercise 19.4
Substitute y = e^{λx} to find the fundamental set of solutions at x = 0 for the equations:

1. y'' + y = 0,
2. y'' - y = 0,
3. y'' = 0.

What are the fundamental sets of solutions at x = 1 for these equations?
Hint, Solution

Exercise 19.5
Find the general solution of

    y'' + 2ay' + by = 0

for a, b ∈ R. There are three distinct forms of the solution depending on the sign of a^2 - b.
Hint, Solution

Exercise 19.6
Find the fundamental set of solutions of

    y'' + 2ay' + by = 0

at the point x = 0, for a, b ∈ R. Use the general solutions obtained in Exercise 19.5.
Hint, Solution
Exercise 19.7
Consider a ball of mass m hanging by an ideal spring of spring constant k. The ball is suspended in a fluid which damps the motion. This resistance has a coefficient of friction, μ. Find the differential equation for the displacement of the mass from its equilibrium position by balancing forces. Denote this displacement by y(t). If the damping force is weak, the mass will have a decaying, oscillatory motion. If the damping force is strong, the mass will not oscillate. The displacement will decay to zero. The value of the damping which separates these two behaviors is called critical damping.

Find the solution which satisfies the initial conditions y(0) = 0, y'(0) = 1. Use the solutions obtained in Exercise 19.6 or refer to Result 19.1.1.

Consider the case m = k = 1. Find the coefficient of friction for which the displacement of the mass decays most rapidly. Plot the displacement for strong, weak and critical damping.
Hint, Solution
Exercise 19.8
Show that y = c cos(x - φ) is the general solution of y'' + y = 0 where c and φ are constants of integration. (It is not sufficient to show that y = c cos(x - φ) satisfies the differential equation. y = 0 satisfies the differential equation, but it is certainly not the general solution.) Find constants c and φ such that y = sin(x).

Is y = c cosh(x - φ) the general solution of y'' - y = 0? Are there constants c and φ such that y = sinh(x)?
Hint, Solution
Exercise 19.9 (mathematica/ode/techniques linear/constant.nb)
Let y(t) be the solution of the initial-value problem

    y'' + 5y' + 6y = 0;  y(0) = 1, y'(0) = V.

For what values of V does y(t) remain nonnegative for all t > 0?
Hint, Solution

Exercise 19.10 (mathematica/ode/techniques linear/constant.nb)
Find two linearly independent solutions of

    y'' + sign(x) y = 0,  -∞ < x < ∞,

where sign(x) = ±1 according as x is positive or negative. (The solution should be continuous and have a continuous first derivative.)
Hint, Solution
Euler Equations

Exercise 19.11
Find the general solution of

    x^2 y'' + xy' + y = 0,  x > 0.

Hint, Solution

Exercise 19.12
Substitute y = x^λ to find the general solution of

    x^2 y'' - 2xy' + 2y = 0.

Hint, Solution

Exercise 19.13 (mathematica/ode/techniques linear/constant.nb)
Substitute y = x^λ to find the general solution of

    x y''' + y'' + (1/x) y' = 0.

Write the solution in terms of functions that are real-valued when x is real-valued and positive.
Hint, Solution
Exercise 19.14
Find the general solution of

    x^2 y'' + (2a + 1)xy' + by = 0.

Hint, Solution

Exercise 19.15
Show that

    y1 = e^{ax},  y2 = lim_{α→a} (e^{αx} - e^{-αx})/α

are linearly independent solutions of

    y'' - a^2 y = 0

for all values of a. It is common to abuse notation and write the second solution as

    y2 = (e^{ax} - e^{-ax})/a

where the limit is taken if a = 0. Likewise show that

    y1 = x^a,  y2 = (x^a - x^{-a})/a

are linearly independent solutions of

    x^2 y'' + xy' - a^2 y = 0

for all values of a.
Hint, Solution
Exercise 19.16 (mathematica/ode/techniques linear/constant.nb)
Find two linearly independent solutions (i.e., the general solution) of

    (a) x^2 y'' - 2xy' + 2y = 0,  (b) x^2 y'' - 2y = 0,  (c) x^2 y'' - xy' + y = 0.

Hint, Solution

Exact Equations

Exercise 19.17
Solve the differential equation

    y'' + y' sin x + y cos x = 0.

Hint, Solution

Equations Without Explicit Dependence on y

Reduction of Order

Exercise 19.18
Consider

    (1 - x^2)y'' - 2xy' + 2y = 0,  -1 < x < 1.

Verify that y = x is a solution. Find the general solution.
Hint, Solution
Exercise 19.19
Consider the differential equation

    y'' - ((x + 1)/x) y' + (1/x) y = 0.

Since the coefficients sum to zero, (1 - (x+1)/x + 1/x = 0), y = e^x is a solution. Find another linearly independent solution.
Hint, Solution

Exercise 19.20
One solution of

    (1 - 2x)y'' + 4xy' - 4y = 0

is y = x. Find the general solution.
Hint, Solution

Exercise 19.21
Find the general solution of

    (x - 1)y'' - xy' + y = 0,

given that one solution is y = e^x. (You may assume x > 1.)
Hint, Solution
*Reduction of Order and the Adjoint Equation
19.8 Hints

Constant Coefficient Equations

Hint 19.1

Hint 19.2

Hint 19.3
It is a constant coefficient equation.

Hint 19.4
Use the fact that if u(x) is a solution of a constant coefficient equation, then u(x + c) is also a solution.

Hint 19.5
Substitute y = e^{λx} into the differential equation.

Hint 19.6
The fundamental set of solutions is a linear combination of the homogeneous solutions.

Hint 19.7
The force on the mass due to the spring is -ky(t). The frictional force is -μy'(t).
Note that the initial conditions describe the second fundamental solution at t = 0.
Note that for large t, t e^{-αt} is much smaller than e^{-βt} if β < α. (Prove this.)
Hint 19.8
By definition, the general solution of a second order differential equation is a two-parameter family of functions that satisfies the differential equation. The trigonometric identities in Appendix Q may be useful.

Hint 19.9

Hint 19.10

Euler Equations

Hint 19.11

Hint 19.12

Hint 19.13

Hint 19.14
Substitute y = x^λ into the differential equation. Consider the three cases: a^2 > b, a^2 < b and a^2 = b.

Hint 19.15
Hint 19.16

Exact Equations

Hint 19.17
It is an exact equation.

Equations Without Explicit Dependence on y

Reduction of Order

Hint 19.18

Hint 19.19
Use reduction of order to find the other solution.

Hint 19.20
Use reduction of order to find the other solution.

Hint 19.21

*Reduction of Order and the Adjoint Equation
19.9 Solutions

Constant Coefficient Equations

Solution 19.1
1. We consider the problem

    6y'' - 5y' + y = 0,  y(0) = 4, y'(0) = 0.

We make the substitution y = e^{λt} in the differential equation.

    6λ^2 - 5λ + 1 = 0
    (2λ - 1)(3λ - 1) = 0
    λ ∈ {1/3, 1/2}

The general solution of the differential equation is

    y = c1 e^{t/3} + c2 e^{t/2}.

We apply the initial conditions to determine the constants.

    c1 + c2 = 4,  c1/3 + c2/2 = 0
    c1 = 12,  c2 = -8

The solution subject to the initial conditions is

    y = 12 e^{t/3} - 8 e^{t/2}.

The solution is plotted in Figure 19.1. The solution tends to -∞ as t → ∞.
Figure 19.1: The solution of 6y'' - 5y' + y = 0, y(0) = 4, y'(0) = 0.
2. We consider the problem

    y'' - 2y' + 5y = 0,  y(π/2) = 0, y'(π/2) = 2.

We make the substitution y = e^{λt} in the differential equation.

    λ^2 - 2λ + 5 = 0
    λ = 1 ± √(1 - 5)
    λ = 1 + i2, 1 - i2

The general solution of the differential equation is

    y = c1 e^t cos(2t) + c2 e^t sin(2t).

We apply the initial conditions to determine the constants.

    y(π/2) = 0  ⇒  -c1 e^{π/2} = 0  ⇒  c1 = 0
    y'(π/2) = 2  ⇒  -2c2 e^{π/2} = 2  ⇒  c2 = -e^{-π/2}

The solution subject to the initial conditions is

    y = -e^{t-π/2} sin(2t).

The solution is plotted in Figure 19.2. The solution oscillates with an amplitude that tends to ∞ as t → ∞.
Figure 19.2: The solution of y'' - 2y' + 5y = 0, y(π/2) = 0, y'(π/2) = 2.
3. We consider the problem

    y'' + 4y' + 4y = 0,  y(-1) = 2, y'(-1) = 1.

We make the substitution y = e^{λt} in the differential equation.

    λ^2 + 4λ + 4 = 0
    (λ + 2)^2 = 0
    λ = -2

The general solution of the differential equation is

    y = c1 e^{-2t} + c2 t e^{-2t}.

We apply the initial conditions to determine the constants.

    c1 e^2 - c2 e^2 = 2,  -2c1 e^2 + 3c2 e^2 = 1
    c1 = 7 e^{-2},  c2 = 5 e^{-2}

The solution subject to the initial conditions is

    y = (7 + 5t) e^{-2(t+1)}.

The solution is plotted in Figure 19.3. The solution vanishes as t → ∞.

    lim_{t→∞} (7 + 5t) e^{-2(t+1)} = lim_{t→∞} (7 + 5t)/e^{2(t+1)} = lim_{t→∞} 5/(2 e^{2(t+1)}) = 0
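The answer to part 1 can be confirmed numerically. A minimal Python sketch (the derivatives are written out analytically; the sample times are arbitrary) checks both the equation and the initial conditions:

```python
import math

# Verify y = 12 e^(t/3) - 8 e^(t/2) against 6y'' - 5y' + y = 0,
# with y(0) = 4 and y'(0) = 0.
def y(t):   return 12*math.exp(t/3) - 8*math.exp(t/2)
def yp(t):  return 4*math.exp(t/3) - 4*math.exp(t/2)      # y'
def ypp(t): return (4/3)*math.exp(t/3) - 2*math.exp(t/2)  # y''

assert abs(y(0) - 4) < 1e-12
assert abs(yp(0)) < 1e-12
for t in [0.0, 0.5, 1.0, 2.0]:
    assert abs(6*ypp(t) - 5*yp(t) + y(t)) < 1e-9
```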
Solution 19.2

    y'' - 4y' + 13y = 0.

With the substitution y = e^{λx} we obtain

    λ^2 e^{λx} - 4λ e^{λx} + 13 e^{λx} = 0
    λ^2 - 4λ + 13 = 0
    λ = 2 ± 3i.

Thus two linearly independent solutions are

    e^{(2+3i)x},  and  e^{(2-3i)x}.
Figure 19.3: The solution of y'' + 4y' + 4y = 0, y(-1) = 2, y'(-1) = 1.
Noting that

    e^{(2+3i)x} = e^{2x} [cos(3x) + i sin(3x)]
    e^{(2-3i)x} = e^{2x} [cos(3x) - i sin(3x)],

we can write the two linearly independent solutions

    y1 = e^{2x} cos(3x),  y2 = e^{2x} sin(3x).
Solution 19.3
We note that

    y''' - y'' + y' - y = 0

is a constant coefficient equation. The substitution, y = e^{λx}, yields

    λ^3 - λ^2 + λ - 1 = 0
    (λ - 1)(λ - i)(λ + i) = 0.

The corresponding solutions are e^x, e^{ix}, and e^{-ix}. We can write the general solution as

    y = c1 e^x + c2 cos x + c3 sin x.
Solution 19.4
We start with the equation y'' + y = 0. We substitute y = e^{λx} into the differential equation to obtain

    λ^2 + 1 = 0,  λ = ±i.

A linearly independent set of solutions is

    { e^{ix}, e^{-ix} }.

The fundamental set of solutions has the form

    y1 = c1 e^{ix} + c2 e^{-ix},
    y2 = c3 e^{ix} + c4 e^{-ix}.

By applying the constraints

    y1(0) = 1, y1'(0) = 0,
    y2(0) = 0, y2'(0) = 1,

we obtain

    y1 = (e^{ix} + e^{-ix})/2 = cos x,
    y2 = (e^{ix} - e^{-ix})/(2i) = sin x.

Now consider the equation y'' - y = 0. By substituting y = e^{λx} we find that a set of solutions is

    { e^x, e^{-x} }.
By taking linear combinations of these we see that another set of solutions is

    { cosh x, sinh x }.

Note that this is the fundamental set of solutions.

Next consider y'' = 0. We can find the solutions by substituting y = e^{λx} or by integrating the equation twice. The fundamental set of solutions at x = 0 is

    { 1, x }.

Note that if u(x) is a solution of a constant coefficient differential equation, then u(x + c) is also a solution. Also note that if u(x) satisfies y(0) = a, y'(0) = b, then u(x - x0) satisfies y(x0) = a, y'(x0) = b. Thus the fundamental sets of solutions at x = 1 are

1. { cos(x - 1), sin(x - 1) },
2. { cosh(x - 1), sinh(x - 1) },
3. { 1, x - 1 }.
Solution 19.5
We substitute y = e^{λx} into the differential equation.

    y'' + 2ay' + by = 0
    λ^2 + 2aλ + b = 0
    λ = -a ± √(a^2 - b)

If a^2 > b then the two roots are distinct and real. The general solution is

    y = c1 e^{(-a+√(a^2-b))x} + c2 e^{(-a-√(a^2-b))x}.
If a^2 < b then the two roots are distinct and complex-valued. We can write them as

    λ = -a ± i√(b - a^2).

The general solution is

    y = c1 e^{(-a+i√(b-a^2))x} + c2 e^{(-a-i√(b-a^2))x}.

By taking the sum and difference of the two linearly independent solutions above, we can write the general solution as

    y = c1 e^{-ax} cos(√(b - a^2) x) + c2 e^{-ax} sin(√(b - a^2) x).

If a^2 = b then the only root is λ = -a. The general solution in this case is then

    y = c1 e^{-ax} + c2 x e^{-ax}.

In summary, the general solution is

    y = e^{-ax} ( c1 e^{√(a^2-b) x} + c2 e^{-√(a^2-b) x} )        if a^2 > b,
    y = e^{-ax} ( c1 cos(√(b-a^2) x) + c2 sin(√(b-a^2) x) )      if a^2 < b,
    y = e^{-ax} ( c1 + c2 x )                                    if a^2 = b.
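The three-case summary can be exercised numerically. A minimal Python sketch evaluates the formula with c1 = c2 = 1 (an arbitrary choice) and checks the residual of y'' + 2ay' + by with central finite differences:

```python
import math

# Evaluate the three-case general solution of y'' + 2a y' + b y = 0
# with c1 = c2 = 1 and check the residual by central differences.
def y(x, a, b):
    d = a*a - b
    if d > 0:
        s = math.sqrt(d)
        return math.exp(-a*x) * (math.exp(s*x) + math.exp(-s*x))
    if d < 0:
        s = math.sqrt(-d)
        return math.exp(-a*x) * (math.cos(s*x) + math.sin(s*x))
    return math.exp(-a*x) * (1 + x)

h = 1e-5
for a, b in [(2.0, 1.0), (1.0, 5.0), (3.0, 9.0)]:   # one sample per case
    for x in [0.1, 0.8, 1.5]:
        yp  = (y(x+h, a, b) - y(x-h, a, b)) / (2*h)
        ypp = (y(x+h, a, b) - 2*y(x, a, b) + y(x-h, a, b)) / h**2
        assert abs(ypp + 2*a*yp + b*y(x, a, b)) < 1e-4
```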
Solution 19.6
First we note that the general solution can be written,

    y = e^{-ax} ( c1 cosh(√(a^2-b) x) + c2 sinh(√(a^2-b) x) )    if a^2 > b,
    y = e^{-ax} ( c1 cos(√(b-a^2) x) + c2 sin(√(b-a^2) x) )      if a^2 < b,
    y = e^{-ax} ( c1 + c2 x )                                    if a^2 = b.
We first consider the case a^2 > b. The derivative is

    y' = e^{-ax} [ (-ac1 + √(a^2-b) c2) cosh(√(a^2-b) x) + (-ac2 + √(a^2-b) c1) sinh(√(a^2-b) x) ].

The conditions, y1(0) = 1 and y1'(0) = 0, for the first solution become,

    c1 = 1,  -ac1 + √(a^2-b) c2 = 0,
    c1 = 1,  c2 = a/√(a^2-b).

The conditions, y2(0) = 0 and y2'(0) = 1, for the second solution become,

    c1 = 0,  -ac1 + √(a^2-b) c2 = 1,
    c1 = 0,  c2 = 1/√(a^2-b).

The fundamental set of solutions is

    { e^{-ax} [ cosh(√(a^2-b) x) + (a/√(a^2-b)) sinh(√(a^2-b) x) ],  e^{-ax} (1/√(a^2-b)) sinh(√(a^2-b) x) }.

Now consider the case a^2 < b. The derivative is

    y' = e^{-ax} [ (-ac1 + √(b-a^2) c2) cos(√(b-a^2) x) + (-ac2 - √(b-a^2) c1) sin(√(b-a^2) x) ].

Clearly, the fundamental set of solutions is

    { e^{-ax} [ cos(√(b-a^2) x) + (a/√(b-a^2)) sin(√(b-a^2) x) ],  e^{-ax} (1/√(b-a^2)) sin(√(b-a^2) x) }.

Finally we consider the case a^2 = b. The derivative is

    y' = e^{-ax} (-ac1 + c2 - ac2 x).
The conditions, y1(0) = 1 and y1'(0) = 0, for the first solution become,

    c1 = 1,  -ac1 + c2 = 0,
    c1 = 1,  c2 = a.

The conditions, y2(0) = 0 and y2'(0) = 1, for the second solution become,

    c1 = 0,  -ac1 + c2 = 1,
    c1 = 0,  c2 = 1.

The fundamental set of solutions is

    { (1 + ax) e^{-ax},  x e^{-ax} }.

In summary, the fundamental set of solutions at x = 0 is

    { e^{-ax} [ cosh(√(a^2-b) x) + (a/√(a^2-b)) sinh(√(a^2-b) x) ],  e^{-ax} (1/√(a^2-b)) sinh(√(a^2-b) x) }  if a^2 > b,
    { e^{-ax} [ cos(√(b-a^2) x) + (a/√(b-a^2)) sin(√(b-a^2) x) ],  e^{-ax} (1/√(b-a^2)) sin(√(b-a^2) x) }    if a^2 < b,
    { (1 + ax) e^{-ax},  x e^{-ax} }                                                                          if a^2 = b.
Solution 19.7
Let y(t) denote the displacement of the mass from equilibrium. The forces on the mass are -ky(t) due to the spring and -μy'(t) due to friction. We equate the external forces to my''(t) to find the differential equation of the motion.

    my'' = -ky - μy'
    y'' + (μ/m) y' + (k/m) y = 0

The solution which satisfies the initial conditions y(0) = 0, y'(0) = 1 is

    y(t) = e^{-μt/(2m)} (2m/√(μ^2-4km)) sinh( √(μ^2-4km) t/(2m) )   if μ^2 > 4km,
    y(t) = e^{-μt/(2m)} (2m/√(4km-μ^2)) sin( √(4km-μ^2) t/(2m) )    if μ^2 < 4km,
    y(t) = t e^{-μt/(2m)}                                           if μ^2 = 4km.
We respectively call these cases: strongly damped, weakly damped and critically damped. In the case that m = k = 1 the solution is

    y(t) = e^{-μt/2} (2/√(μ^2-4)) sinh( √(μ^2-4) t/2 )   if μ > 2,
    y(t) = e^{-μt/2} (2/√(4-μ^2)) sin( √(4-μ^2) t/2 )    if μ < 2,
    y(t) = t e^{-t}                                      if μ = 2.

Note that when t is large, t e^{-t} is much smaller than e^{-μt/2} for μ < 2. To prove this we examine the ratio of these functions as t → ∞.

    lim_{t→∞} t e^{-t} / e^{-μt/2} = lim_{t→∞} t / e^{(1-μ/2)t} = lim_{t→∞} 1 / ((1 - μ/2) e^{(1-μ/2)t}) = 0

Using this result, we see that the critically damped solution decays faster than the weakly damped solution.

We can write the strongly damped solution as

    e^{-μt/2} (1/√(μ^2-4)) ( e^{√(μ^2-4) t/2} - e^{-√(μ^2-4) t/2} ).

For large t, the dominant factor is e^{(-μ+√(μ^2-4)) t/2}. Note that for μ > 2,

    √(μ^2-4) = √((μ + 2)(μ - 2)) > μ - 2.

Therefore we have the bounds

    -2 < -μ + √(μ^2-4) < 0.

This shows that the critically damped solution decays faster than the strongly damped solution. μ = 2 gives the fastest decaying solution. Figure 19.4 shows the solution for μ = 4, μ = 1 and μ = 2.
Figure 19.4: Strongly, weakly and critically damped solutions.
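The decay comparison can be illustrated directly. A minimal Python sketch (the evaluation time t = 20 is an arbitrary "late" time) evaluates the three branches for m = k = 1 and confirms that critical damping has decayed the most:

```python
import math

# Three damping branches for m = k = 1 with coefficient of friction mu.
def y(t, mu):
    if mu > 2:
        s = math.sqrt(mu*mu - 4)
        return math.exp(-mu*t/2) * (2/s) * math.sinh(s*t/2)
    if mu < 2:
        s = math.sqrt(4 - mu*mu)
        return math.exp(-mu*t/2) * (2/s) * math.sin(s*t/2)
    return t * math.exp(-t)

t = 20.0
strong, weak, critical = abs(y(t, 4.0)), abs(y(t, 1.0)), abs(y(t, 2.0))
# Critical damping (mu = 2) should be far smaller than both other cases.
assert critical < strong and critical < weak
```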
Solution 19.8
Clearly y = c cos(x - φ) satisfies the differential equation y'' + y = 0. Since it is a two-parameter family of functions, it must be the general solution.

Using a trigonometric identity we can rewrite the solution as

    y = c cos φ cos x + c sin φ sin x.

Setting this equal to sin x gives us the two equations

    c cos φ = 0,
    c sin φ = 1,

which has the solutions c = 1, φ = (2n + 1/2)π, and c = -1, φ = (2n - 1/2)π, for n ∈ Z.

Clearly y = c cosh(x - φ) satisfies the differential equation y'' - y = 0. Since it is a two-parameter family of functions, it must be the general solution.

Using a hyperbolic identity we can rewrite the solution as

    y = c cosh φ cosh x - c sinh φ sinh x.

Setting this equal to sinh x gives us the two equations

    c cosh φ = 0,
    -c sinh φ = 1,

which has the solutions c = i, φ = i(2n + 1/2)π, and c = -i, φ = i(2n - 1/2)π, for n ∈ Z.
Solution 19.9
We substitute y = e^{λt} into the differential equation.

    λ^2 e^{λt} + 5λ e^{λt} + 6 e^{λt} = 0
    λ^2 + 5λ + 6 = 0
    (λ + 2)(λ + 3) = 0

The general solution of the differential equation is

    y = c1 e^{-2t} + c2 e^{-3t}.

The initial conditions give us the constraints:

    c1 + c2 = 1,
    -2c1 - 3c2 = V.

The solution subject to the initial conditions is

    y = (3 + V) e^{-2t} - (2 + V) e^{-3t}.

This solution will be non-negative for t > 0 if V ≥ -3.
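The threshold V = -3 can be probed numerically. A minimal Python sketch scans the solution on a time grid (the grid and the sample values of V are arbitrary):

```python
import math

# Scan y = (3+V) e^(-2t) - (2+V) e^(-3t) for nonnegativity on a grid.
def stays_nonneg(V):
    return all((3+V)*math.exp(-2*t) - (2+V)*math.exp(-3*t) >= 0
               for t in (0.01*k for k in range(1, 2001)))

assert stays_nonneg(-3.0)       # at the threshold: y = e^(-3t) > 0
assert stays_nonneg(0.0)        # above the threshold
assert not stays_nonneg(-3.5)   # below the threshold y eventually goes negative
```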
Solution 19.10
For negative x, the differential equation is

    y'' - y = 0.

We substitute y = e^{λx} into the differential equation to find the solutions.

    λ^2 - 1 = 0
    λ = ±1
    y = { e^x, e^{-x} }

We can take linear combinations to write the solutions in terms of the hyperbolic sine and cosine.

    y = { cosh(x), sinh(x) }

For positive x, the differential equation is

    y'' + y = 0.

We substitute y = e^{λx} into the differential equation to find the solutions.

    λ^2 + 1 = 0
    λ = ±i
    y = { e^{ix}, e^{-ix} }

We can take linear combinations to write the solutions in terms of the sine and cosine.

    y = { cos(x), sin(x) }

We will find the fundamental set of solutions at x = 0. That is, we will find a set of solutions, y1, y2 that satisfy the conditions:

    y1(0) = 1,  y1'(0) = 0
    y2(0) = 0,  y2'(0) = 1

Clearly these solutions are

    y1 = { cosh(x)  for x < 0;  cos(x)  for x ≥ 0 }
    y2 = { sinh(x)  for x < 0;  sin(x)  for x ≥ 0 }
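The matching conditions at x = 0 (continuity of the solution and of its first derivative) can be checked directly. A minimal Python sketch:

```python
import math

# Piecewise solutions of y'' + sign(x) y = 0 and their derivatives.
y1  = lambda x: math.cosh(x) if x < 0 else math.cos(x)
y1p = lambda x: math.sinh(x) if x < 0 else -math.sin(x)
y2  = lambda x: math.sinh(x) if x < 0 else math.sin(x)
y2p = lambda x: math.cosh(x) if x < 0 else math.cos(x)

# Values and first derivatives agree across x = 0.
eps = 1e-8
for f in (y1, y1p, y2, y2p):
    assert abs(f(-eps) - f(eps)) < 1e-7
# The fundamental-set conditions at x = 0.
assert y1(0) == 1 and y1p(0) == 0 and y2(0) == 0 and y2p(0) == 1
```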
Euler Equations

Solution 19.11
We consider an Euler equation,

    x^2 y'' + xy' + y = 0,  x > 0.

We make the change of independent variable ξ = ln x, u(ξ) = y(x) to obtain

    u'' + u = 0.

We make the substitution u(ξ) = e^{λξ}.

    λ^2 + 1 = 0
    λ = ±i

A set of linearly independent solutions for u(ξ) is

    { e^{iξ}, e^{-iξ} }.

Since

    cos ξ = (e^{iξ} + e^{-iξ})/2  and  sin ξ = (e^{iξ} - e^{-iξ})/(2i),

another linearly independent set of solutions is

    { cos ξ, sin ξ }.

The general solution for y(x) is

    y(x) = c1 cos(ln x) + c2 sin(ln x).
Solution 19.12
Consider the differential equation

    x^2 y'' - 2xy' + 2y = 0.

With the substitution y = x^λ this equation becomes

    λ(λ - 1) - 2λ + 2 = 0
    λ^2 - 3λ + 2 = 0
    λ = 1, 2.

The general solution is then

    y = c1 x + c2 x^2.
Solution 19.13
We note that

    x y''' + y'' + (1/x) y' = 0

is an Euler equation. The substitution y = x^λ yields

    λ^3 - 3λ^2 + 2λ + λ^2 - λ + λ = 0
    λ^3 - 2λ^2 + 2λ = 0.

The three roots of this algebraic equation are

    λ = 0,  λ = 1 + i,  λ = 1 - i.

The corresponding solutions to the differential equation are

    y = x^0,  y = x^{1+i},  y = x^{1-i}
    y = 1,   y = x e^{i ln x},  y = x e^{-i ln x}.

We can write the general solution as

    y = c1 + c2 x cos(ln x) + c3 x sin(ln x).
Solution 19.14
We substitute y = x^λ into the differential equation.

    x^2 y'' + (2a + 1)xy' + by = 0
    λ(λ - 1) + (2a + 1)λ + b = 0
    λ^2 + 2aλ + b = 0
    λ = -a ± √(a^2 - b)

For a^2 > b the general solution is

    y = c1 x^{-a+√(a^2-b)} + c2 x^{-a-√(a^2-b)}.

For a^2 < b, the general solution is

    y = c1 x^{-a+i√(b-a^2)} + c2 x^{-a-i√(b-a^2)}.

By taking the sum and difference of these solutions, we can write the general solution as

    y = c1 x^{-a} cos(√(b-a^2) ln x) + c2 x^{-a} sin(√(b-a^2) ln x).

For a^2 = b, the quadratic in λ has a double root at λ = -a. The general solution of the differential equation is

    y = c1 x^{-a} + c2 x^{-a} ln x.

In summary, the general solution is:

    y = x^{-a} ( c1 x^{√(a^2-b)} + c2 x^{-√(a^2-b)} )               if a^2 > b,
    y = x^{-a} ( c1 cos(√(b-a^2) ln x) + c2 sin(√(b-a^2) ln x) )   if a^2 < b,
    y = x^{-a} ( c1 + c2 ln x )                                    if a^2 = b.
Solution 19.15
For a ≠ 0, two linearly independent solutions of

    y'' - a^2 y = 0

are

    y1 = e^{ax},  y2 = e^{-ax}.

For a = 0, we have

    y1 = e^{0x} = 1,  y2 = x e^{0x} = x.

In this case the solutions are defined by

    y1 = [ e^{ax} ]_{a=0},  y2 = [ (d/da) e^{ax} ]_{a=0}.

By the definition of differentiation, f'(0) is

    f'(0) = lim_{a→0} (f(a) - f(-a)) / (2a).

Thus the second solution in the case a = 0 is

    y2 = lim_{a→0} (e^{ax} - e^{-ax}) / a.

Consider the solutions

    y1 = e^{ax},  y2 = lim_{α→a} (e^{αx} - e^{-αx}) / α.

Clearly y1 is a solution for all a. For a ≠ 0, y2 is a linear combination of e^{ax} and e^{-ax} and is thus a solution. Since the coefficient of e^{-ax} in this linear combination is non-zero, it is linearly independent to y1. For a = 0, one half of y2 is the derivative of e^{ax} with respect to a evaluated at a = 0. Thus it is a solution.
For a ≠ 0, two linearly independent solutions of

    x^2 y'' + xy' - a^2 y = 0

are

    y1 = x^a,  y2 = x^{-a}.

For a = 0, we have

    y1 = [ x^a ]_{a=0} = 1,  y2 = [ (d/da) x^a ]_{a=0} = ln x.

Consider the solutions

    y1 = x^a,  y2 = (x^a - x^{-a}) / a.

Clearly y1 is a solution for all a. For a ≠ 0, y2 is a linear combination of x^a and x^{-a} and is thus a solution. For a = 0, one half of y2 is the derivative of x^a with respect to a evaluated at a = 0. Thus it is a solution.
Solution 19.16
1.

    x^2 y'' - 2xy' + 2y = 0

We substitute y = x^λ into the differential equation.

    λ(λ - 1) - 2λ + 2 = 0
    λ^2 - 3λ + 2 = 0
    (λ - 1)(λ - 2) = 0

    y = c1 x + c2 x^2

2.

    x^2 y'' - 2y = 0

We substitute y = x^λ into the differential equation.

    λ(λ - 1) - 2 = 0
    λ^2 - λ - 2 = 0
    (λ + 1)(λ - 2) = 0

    y = c1/x + c2 x^2

3.

    x^2 y'' - xy' + y = 0

We substitute y = x^λ into the differential equation.

    λ(λ - 1) - λ + 1 = 0
    λ^2 - 2λ + 1 = 0
    (λ - 1)^2 = 0

Since there is a double root, the solution is:

    y = c1 x + c2 x ln x.
Exact Equations

Solution 19.17
We note that

    y'' + y' sin x + y cos x = 0

is an exact equation.

    d/dx [ y' + y sin x ] = 0
    y' + y sin x = c1
    d/dx [ y e^{-cos x} ] = c1 e^{-cos x}
    y = c1 e^{cos x} ∫ e^{-cos x} dx + c2 e^{cos x}
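The exactness claim says the left side is the derivative of y' + y sin x. A minimal Python sketch checks this with an arbitrary test function y = e^x (any smooth y would do) and a central difference:

```python
import math

# With y = e^x:  F = y' + y sin x = e^x (1 + sin x).
# Exactness means F' = y'' + y' sin x + y cos x.
F = lambda x: math.exp(x) * (1 + math.sin(x))
h = 1e-5
for x in [0.2, 0.9, 1.7]:
    dF = (F(x+h) - F(x-h)) / (2*h)
    lhs = math.exp(x) + math.exp(x)*math.sin(x) + math.exp(x)*math.cos(x)
    assert abs(dF - lhs) < 1e-6
```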
Equations Without Explicit Dependence on y

Reduction of Order

Solution 19.18

    (1 - x^2)y'' - 2xy' + 2y = 0,  -1 < x < 1

We substitute y = x into the differential equation to check that it is a solution.

    (1 - x^2)(0) - 2x(1) + 2x = 0

We look for a second solution of the form y = xu. We substitute this into the differential equation and use the fact that x is a solution.

    (1 - x^2)(xu'' + 2u') - 2x(xu' + u) + 2xu = 0
    (1 - x^2)(xu'' + 2u') - 2x(xu') = 0
    (1 - x^2)xu'' + (2 - 4x^2)u' = 0

    u''/u' = (2 - 4x^2) / (x(x^2 - 1))
    u''/u' = -2/x + 1/(1 - x) - 1/(1 + x)
    ln(u') = -2 ln(x) - ln(1 - x) - ln(1 + x) + const
    ln(u') = ln( c / (x^2 (1 - x)(1 + x)) )
    u' = c / (x^2 (1 - x)(1 + x))
    u' = c ( 1/x^2 + 1/(2(1 - x)) + 1/(2(1 + x)) )
    u = c ( -1/x - (1/2) ln(1 - x) + (1/2) ln(1 + x) ) + const
    u = c ( -1/x + (1/2) ln((1 + x)/(1 - x)) ) + const

A second linearly independent solution is

    y = -1 + (x/2) ln( (1 + x)/(1 - x) ).
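A minimal Python sketch verifies this second solution against the differential equation on (-1, 1), using central differences (the sample points are arbitrary):

```python
import math

# Check y = -1 + (x/2) ln((1+x)/(1-x)) against (1 - x^2) y'' - 2 x y' + 2 y = 0.
y = lambda x: -1 + (x/2)*math.log((1+x)/(1-x))
h = 1e-5
for x in [-0.6, -0.2, 0.3, 0.7]:
    yp  = (y(x+h) - y(x-h)) / (2*h)
    ypp = (y(x+h) - 2*y(x) + y(x-h)) / h**2
    assert abs((1 - x*x)*ypp - 2*x*yp + 2*y(x)) < 1e-4
```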
Solution 19.19
We are given that y = e^x is a solution of

    y'' - ((x + 1)/x) y' + (1/x) y = 0.

To find another linearly independent solution, we will use reduction of order. Substituting

    y = u e^x
    y' = (u' + u) e^x
    y'' = (u'' + 2u' + u) e^x

into the differential equation yields

    u'' + 2u' + u - ((x + 1)/x)(u' + u) + (1/x)u = 0
    u'' + ((x - 1)/x) u' = 0
    d/dx [ u' exp( ∫ (1 - 1/x) dx ) ] = 0
    u' e^{x - ln x} = c1
    u' = c1 x e^{-x}
    u = c1 ∫ x e^{-x} dx + c2
    u = c1 (x e^{-x} + e^{-x}) + c2
    y = c1 (x + 1) + c2 e^x

Thus a second linearly independent solution is

    y = x + 1.
Solution 19.20
We are given that y = x is a solution of

    (1 - 2x)y'' + 4xy' - 4y = 0.

To find another linearly independent solution, we will use reduction of order. Substituting

    y = xu
    y' = xu' + u
    y'' = xu'' + 2u'

into the differential equation yields

    (1 - 2x)(xu'' + 2u') + 4x(xu' + u) - 4xu = 0,
    (1 - 2x)xu'' + (4x^2 - 4x + 2)u' = 0,
    u''/u' = (4x^2 - 4x + 2) / (x(2x - 1)),
    u''/u' = 2 - 2/x + 2/(2x - 1),
    ln(u') = 2x - 2 ln x + ln(2x - 1) + const,
    u' = c1 ( 2/x - 1/x^2 ) e^{2x},
    u = c1 (1/x) e^{2x} + c2,
    y = c1 e^{2x} + c2 x.
Solution 19.21
One solution of

    (x - 1)y'' - xy' + y = 0,

is y1 = e^x. We find a second solution with reduction of order. We make the substitution y2 = u e^x in the differential equation. We determine u up to an additive constant.

    (x - 1)(u'' + 2u' + u) e^x - x(u' + u) e^x + u e^x = 0
    (x - 1)u'' + (x - 2)u' = 0
    u''/u' = -(x - 2)/(x - 1) = -1 + 1/(x - 1)
    ln |u'| = -x + ln |x - 1| + c
    u' = c (x - 1) e^{-x}
    u = -cx e^{-x}

The second solution of the differential equation is y2 = x.
*Reduction of Order and the Adjoint Equation
Chapter 20

Techniques for Nonlinear Differential Equations

In mathematics you don't understand things. You just get used to them.
- John von Neumann

20.1 Bernoulli Equations

Sometimes it is possible to solve a nonlinear equation by making a change of the dependent variable that converts it into a linear equation. One of the most important such equations is the Bernoulli equation

    dy/dt + p(t)y = q(t)y^α,  α ≠ 1.

The change of dependent variable u = y^{1-α} will yield a first order linear equation for u which when solved will give us an implicit solution for y. (See Exercise ??.)
Result 20.1.1 The Bernoulli equation y' + p(t)y = q(t)y^α, α ≠ 1 can be transformed to the first order linear equation

    du/dt + (1 - α)p(t)u = (1 - α)q(t)

with the change of variables u = y^{1-α}.
Example 20.1.1 Consider the Bernoulli equation

    y' = (2/x)y + y^2.

First we divide by y^2.

    y^{-2} y' = (2/x) y^{-1} + 1

We make the change of variable u = y^{-1}.

    -u' = (2/x)u + 1
    u' + (2/x)u = -1

The integrating factor is I(x) = exp( ∫ 2/x dx ) = x^2.

    d/dx (x^2 u) = -x^2
    x^2 u = -x^3/3 + c
    u = -x/3 + c/x^2
    y = ( -x/3 + c/x^2 )^{-1}

Thus the solution for y is

    y = 3x^2 / (c - x^3).
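A minimal Python sketch verifies this solution against the Bernoulli equation for one (arbitrary) choice of the constant, using a central difference for y':

```python
import math

# Check y = 3x^2/(c - x^3) against y' = (2/x) y + y^2.
c = 5.0
y = lambda x: 3*x*x / (c - x**3)
h = 1e-6
for x in [0.5, 1.0, 1.4]:
    yp = (y(x+h) - y(x-h)) / (2*h)
    assert abs(yp - ((2/x)*y(x) + y(x)**2)) < 1e-6
```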
20.2 Riccati Equations

Factoring Second Order Operators. Consider the second order linear equation

    L[y] = ( d^2/dx^2 + p(x) d/dx + q(x) ) y = y'' + p(x)y' + q(x)y = f(x).

If we were able to factor the linear operator L into the form

    L = ( d/dx + a(x) ) ( d/dx + b(x) ),    (20.1)

then we would be able to solve the differential equation. Factoring reduces the problem to a system of first order equations. We start with the factored equation

    ( d/dx + a(x) ) ( d/dx + b(x) ) y = f(x).
We set u = ( d/dx + b(x) ) y and solve the problem

    ( d/dx + a(x) ) u = f(x).

Then to obtain the solution we solve

    ( d/dx + b(x) ) y = u.
Example 20.2.1 Consider the equation

    y'' + ( x - 1/x ) y' + ( 1/x^2 - 1 ) y = 0.

Let's say by some insight or just random luck we are able to see that this equation can be factored into

    ( d/dx + x ) ( d/dx - 1/x ) y = 0.

We first solve the equation

    ( d/dx + x ) u = 0.

    u' + xu = 0
    d/dx ( e^{x^2/2} u ) = 0
    u = c1 e^{-x^2/2}
Then we solve for y with the equation

    ( d/dx - 1/x ) y = u = c1 e^{-x^2/2}.

    y' - (1/x) y = c1 e^{-x^2/2}
    d/dx ( x^{-1} y ) = c1 x^{-1} e^{-x^2/2}
    y = c1 x ∫ x^{-1} e^{-x^2/2} dx + c2 x
If we were able to solve for a and b in Equation 20.1 in terms of p and q then we would be able to solve any second order differential equation. Equating the two operators,

    d^2/dx^2 + p d/dx + q = ( d/dx + a ) ( d/dx + b ) = d^2/dx^2 + (a + b) d/dx + (b' + ab).

Thus we have the two equations

    a + b = p,  and  b' + ab = q.

Eliminating a,

    b' + (p - b)b = q
    b' = b^2 - pb + q

Now we have a nonlinear equation for b that is no easier to solve than the original second order linear equation.
Riccati Equations. Equations of the form

    y' = a(x)y^2 + b(x)y + c(x)

are called Riccati equations. From the above derivation we see that for every second order differential equation there is a corresponding Riccati equation. Now we will show that the converse is true.

We make the substitution

    y = -u'/(au),  y' = -u''/(au) + (u')^2/(au^2) + a'u'/(a^2 u),

in the Riccati equation.

    y' = ay^2 + by + c
    -u''/(au) + (u')^2/(au^2) + a'u'/(a^2 u) = a (u')^2/(a^2 u^2) - b u'/(au) + c
    -u''/(au) + a'u'/(a^2 u) + b u'/(au) - c = 0
    u'' - ( a'/a + b ) u' + acu = 0

Now we have a second order linear equation for u.
Now we have a second order linear equation for u.
Result 20.2.1 The substitution y =
u

au
transforms the Riccati equation
y
/
= a(x)y
2
+b(x)y +c(x)
into the second order linear equation
u
//

_
a
/
a
+b
_
u
/
+acu = 0.
Example 20.2.2 Consider the Riccati equation

    y' = y^2 + (1/x) y + 1/x^2.

With the substitution y = -u'/u we obtain

    u'' - (1/x) u' + (1/x^2) u = 0.

This is an Euler equation. The substitution u = x^λ yields

    λ(λ - 1) - λ + 1 = (λ - 1)^2 = 0.

Thus the general solution for u is

    u = c1 x + c2 x log x.

Since y = -u'/u,

    y = - ( c1 + c2 (1 + log x) ) / ( c1 x + c2 x log x )

    y = - ( 1 + c (1 + log x) ) / ( x + cx log x )
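A minimal Python sketch confirms the result for one (arbitrary) choice of constants, comparing a central-difference derivative with the right side of the Riccati equation:

```python
import math

# Check y = -(c1 + c2 (1 + log x)) / (c1 x + c2 x log x)
# against y' = y^2 + y/x + 1/x^2.
c1, c2 = 1.0, 0.5
y = lambda x: -(c1 + c2*(1 + math.log(x))) / (c1*x + c2*x*math.log(x))
h = 1e-6
for x in [0.8, 1.5, 2.5]:
    yp = (y(x+h) - y(x-h)) / (2*h)
    assert abs(yp - (y(x)**2 + y(x)/x + 1/x**2)) < 1e-6
```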
20.3 Exchanging the Dependent and Independent Variables

Some differential equations can be put in a more elementary form by exchanging the dependent and independent variables. If the new equation can be solved, you will have an implicit solution for the initial equation. We will consider a few examples to illustrate the method.
Example 20.3.1 Consider the equation

    y' = 1 / (y^3 - xy^2).

Instead of considering y to be a function of x, consider x to be a function of y. That is, x = x(y), x' = dx/dy.

    dy/dx = 1 / (y^3 - xy^2)
    dx/dy = y^3 - xy^2
    x' + y^2 x = y^3

Now we have a first order equation for x.

    d/dy ( e^{y^3/3} x ) = y^3 e^{y^3/3}
    x = e^{-y^3/3} ∫ y^3 e^{y^3/3} dy + c e^{-y^3/3}
Example 20.3.2 Consider the equation

    y' = y / (y^2 + 2x).

Interchanging the dependent and independent variables yields

    1/x' = y / (y^2 + 2x)
    x' = y + 2x/y
    x' - 2x/y = y
    d/dy ( y^{-2} x ) = y^{-1}
    y^{-2} x = log y + c
    x = y^2 log y + cy^2
Result 20.3.1 Some differential equations can be put in a simpler form by exchanging the dependent and independent variables. Thus a differential equation for y(x) can be written as an equation for x(y). Solving the equation for x(y) will give an implicit solution for y(x).
20.4 Autonomous Equations

Autonomous equations have no explicit dependence on x. The following are examples.

    y'' + 3y' - 2y = 0
    y'' = y + (y')^2
    y''' + y'' y = 0
The change of variables u(y) = y' reduces an nth order autonomous equation in y to a non-autonomous equation of order n - 1 in u(y). Writing the derivatives of y in terms of u,

    y' = u(y)
    y'' = d/dx u(y) = (dy/dx) (d/dy) u(y) = y' u' = u'u
    y''' = ( u''u + (u')^2 ) u.

Thus we see that the equation for u(y) will have an order of one less than the original equation.
Result 20.4.1 Consider an autonomous differential equation for y(x) (autonomous equations have no explicit dependence on x). The change of variables u(y) = y' reduces an nth order autonomous equation in y to a non-autonomous equation of order n - 1 in u(y).
Example 20.4.1 Consider the equation

    y'' = y + (y')^2.

With the substitution u(y) = y', the equation becomes

    u'u = y + u^2
    u' = u + y u^{-1}.
We recognize this as a Bernoulli equation. The substitution v = u^2 yields

    (1/2)v' = v + y
    v' - 2v = 2y
    d/dy ( e^{-2y} v ) = 2y e^{-2y}
    v(y) = c1 e^{2y} + e^{2y} ∫ 2y e^{-2y} dy
    v(y) = c1 e^{2y} + e^{2y} ( -y e^{-2y} + ∫ e^{-2y} dy )
    v(y) = c1 e^{2y} + e^{2y} ( -y e^{-2y} - (1/2) e^{-2y} )
    v(y) = c1 e^{2y} - y - 1/2.

Now we solve for u.

    u(y) = ( c1 e^{2y} - y - 1/2 )^{1/2}
    dy/dx = ( c1 e^{2y} - y - 1/2 )^{1/2}

This equation is separable.

    dx = dy / ( c1 e^{2y} - y - 1/2 )^{1/2}
    x + c2 = ∫ dy / ( c1 e^{2y} - y - 1/2 )^{1/2}
Thus we finally have arrived at an implicit solution for y(x).

Example 20.4.2 Consider the equation

    y'' + y^3 = 0.

With the change of variables, u(y) = y', the equation becomes

    u'u + y^3 = 0.

This equation is separable.

    u du = -y^3 dy
    (1/2)u^2 = -(1/4)y^4 + c1
    u = ( 2c1 - (1/2)y^4 )^{1/2}
    y' = ( 2c1 - (1/2)y^4 )^{1/2}
    dy / ( 2c1 - (1/2)y^4 )^{1/2} = dx

Integrating gives us the implicit solution

    ∫ dy / ( 2c1 - (1/2)y^4 )^{1/2} = x + c2.
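The separable relation above says that (1/2)u^2 + (1/4)y^4 is a first integral of y'' = -y^3. A minimal Python sketch integrates the equation with a simple leapfrog step (an arbitrary choice of integrator and initial data) and checks that this quantity stays essentially constant:

```python
# Integrate y'' = -y^3 and monitor the first integral E = u^2/2 + y^4/4,
# where u = y'.  Leapfrog (velocity Verlet) keeps E bounded near E0.
y, u, dt = 1.0, 0.0, 1e-3
E0 = 0.5*u*u + 0.25*y**4
for _ in range(20000):
    u += 0.5*dt*(-y**3)   # half kick
    y += dt*u             # drift
    u += 0.5*dt*(-y**3)   # half kick
assert abs(0.5*u*u + 0.25*y**4 - E0) < 1e-5
```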
20.5 *Equidimensional-in-x Equations

Differential equations that are invariant under the change of variables x = cξ are said to be equidimensional-in-x. For a familiar example from linear equations, we note that the Euler equation is equidimensional-in-x. Writing the new derivatives under the change of variables,

    x = cξ,  d/dx = (1/c) d/dξ,  d^2/dx^2 = (1/c^2) d^2/dξ^2,  . . . .
Example 20.5.1 Consider the Euler equation

    y'' + (2/x) y' + (3/x^2) y = 0.

Under the change of variables, x = cξ, y(x) = u(ξ), this equation becomes

    (1/c^2) u'' + (2/(cξ)) (1/c) u' + (3/(c^2 ξ^2)) u = 0
    u'' + (2/ξ) u' + (3/ξ^2) u = 0.

Thus this equation is invariant under the change of variables x = cξ.
Example 20.5.2 For a nonlinear example, consider the equation

    y'' y' + y''/(xy) + y'/x^2 = 0.

With the change of variables x = cξ, y(x) = u(ξ) the equation becomes

    (u''/c^2)(u'/c) + u''/(c^3 ξ u) + u'/(c^3 ξ^2) = 0
    u'' u' + u''/(ξu) + u'/ξ^2 = 0.

We see that this equation is also equidimensional-in-x.
You may recall that the change of variables x = e^t reduces an Euler equation to a constant coefficient equation. To generalize this result to nonlinear equations we will see that the same change of variables reduces an equidimensional-in-x equation to an autonomous equation.

Writing the derivatives with respect to x in terms of t,

    x = e^t,  d/dx = (dt/dx) d/dt = e^{-t} d/dt
    x d/dx = d/dt
    x^2 d^2/dx^2 = x d/dx ( x d/dx ) - x d/dx = d^2/dt^2 - d/dt.
Example 20.5.3 Consider the equation in Example 20.5.2

    y'' y' + y''/(xy) + y'/x^2 = 0.

Applying the change of variables x = e^t, y(x) = u(t) yields an autonomous equation for u(t).

    (x^2 y'')(x y') + (x^2 y'')/y + x y' = 0
    (u'' - u')u' + (u'' - u')/u + u' = 0
Result 20.5.1 A differential equation that is invariant under the change of variables x = cξ is equidimensional-in-x. Such an equation can be reduced to an autonomous equation of the same order with the change of variables, x = e^t.
20.6 *Equidimensional-in-y Equations

A differential equation is said to be equidimensional-in-y if it is invariant under the change of variables y(x) = cv(x). Note that all linear homogeneous equations are equidimensional-in-y.

Example 20.6.1 Consider the linear equation

    y'' + p(x)y' + q(x)y = 0.

With the change of variables y(x) = cv(x) the equation becomes

    cv'' + p(x)cv' + q(x)cv = 0
    v'' + p(x)v' + q(x)v = 0

Thus we see that the equation is invariant under the change of variables.

Example 20.6.2 For a nonlinear example, consider the equation

    y'' y + (y')^2 - y^2 = 0.

Under the change of variables y(x) = cv(x) the equation becomes

    cv'' cv + (cv')^2 - (cv)^2 = 0
    v'' v + (v')^2 - v^2 = 0.

Thus we see that this equation is also equidimensional-in-y.
The change of variables y(x) = e^{u(x)} reduces an nth order equidimensional-in-y equation to an equation of order n - 1 for u'. Writing the derivatives of e^{u(x)},

    d/dx e^u = u' e^u
    d^2/dx^2 e^u = ( u'' + (u')^2 ) e^u
    d^3/dx^3 e^u = ( u''' + 3u'u'' + (u')^3 ) e^u.
Example 20.6.3 Consider the linear equation in Example 20.6.1

    y'' + p(x)y' + q(x)y = 0.

Under the change of variables y(x) = e^{u(x)} the equation becomes

    ( u'' + (u')^2 ) e^u + p(x)u' e^u + q(x) e^u = 0
    u'' + (u')^2 + p(x)u' + q(x) = 0.

Thus we have a Riccati equation for u'. This transformation might seem rather useless since linear equations are usually easier to work with than nonlinear equations, but it is often useful in determining the asymptotic behavior of the equation.
Example 20.6.4 From Example 20.6.2 we have the equation
$$y''y + (y')^2 - y^2 = 0.$$
The change of variables y(x) = e^{u(x)} yields
$$\left(u'' + (u')^2\right)e^u\,e^u + \left(u'e^u\right)^2 - \left(e^u\right)^2 = 0$$
$$u'' + 2(u')^2 - 1 = 0$$
$$u'' = -2(u')^2 + 1$$
Now we have a Riccati equation for u'. We make the substitution u' = v'/(2v).
$$\frac{v''}{2v} - \frac{(v')^2}{2v^2} = -2\frac{(v')^2}{4v^2} + 1$$
$$v'' - 2v = 0$$
$$v = c_1 e^{\sqrt{2}\,x} + c_2 e^{-\sqrt{2}\,x}$$
$$u' = \frac{1}{2}\,\frac{\sqrt{2}\,c_1 e^{\sqrt{2}\,x} - \sqrt{2}\,c_2 e^{-\sqrt{2}\,x}}{c_1 e^{\sqrt{2}\,x} + c_2 e^{-\sqrt{2}\,x}}$$
$$u = \frac{1}{2}\int \frac{\sqrt{2}\,c_1 e^{\sqrt{2}\,x} - \sqrt{2}\,c_2 e^{-\sqrt{2}\,x}}{c_1 e^{\sqrt{2}\,x} + c_2 e^{-\sqrt{2}\,x}}\,dx + c_3$$
$$u = \frac{1}{2}\log\left(c_1 e^{\sqrt{2}\,x} + c_2 e^{-\sqrt{2}\,x}\right) + c_3$$
$$y = \left(c_1 e^{\sqrt{2}\,x} + c_2 e^{-\sqrt{2}\,x}\right)^{1/2} e^{c_3}$$
The constants are redundant; the general solution is
$$y = \left(c_1 e^{\sqrt{2}\,x} + c_2 e^{-\sqrt{2}\,x}\right)^{1/2}$$
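As a sanity check (not part of the text), the closed form above can be tested numerically. The sketch below, a minimal Python script with finite-difference derivatives, evaluates the residual $y''y + (y')^2 - y^2$ for arbitrarily chosen constants $c_1, c_2$ and sample points; all choices are mine, not the author's.

```python
import math

S2 = math.sqrt(2.0)

def y(x, c1=1.0, c2=2.0):
    # candidate general solution: y = (c1 e^{sqrt(2)x} + c2 e^{-sqrt(2)x})^{1/2}
    return math.sqrt(c1 * math.exp(S2 * x) + c2 * math.exp(-S2 * x))

def residual(x, h=1e-4):
    # residual of y'' y + (y')^2 - y^2, derivatives by central differences
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return ypp * y(x) + yp ** 2 - y(x) ** 2

print(max(abs(residual(x)) for x in (-1.0, -0.3, 0.4, 1.0)))
```

The residual is zero up to finite-difference error, consistent with the derivation.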
Result 20.6.1 A differential equation is equidimensional-in-y if it is invariant under the change of variables y(x) = cv(x). An n-th order equidimensional-in-y equation can be reduced to an equation of order n − 1 in u' with the change of variables y(x) = e^{u(x)}.
20.7 *Scale-Invariant Equations
Result 20.7.1 An equation is scale invariant if it is invariant under the change of variables, x = cξ, y(x) = c^α v(ξ), for some value of α. A scale-invariant equation can be transformed to an equidimensional-in-x equation with the change of variables, y(x) = x^α u(x).
Example 20.7.1 Consider the equation
$$y'' + x^2 y^2 = 0.$$
Under the change of variables x = cξ, y(x) = c^α v(ξ) this equation becomes
$$c^{\alpha}c^{-2}v''(\xi) + c^2\xi^2 c^{2\alpha}v^2(\xi) = 0.$$
Equating powers of c in the two terms yields α = −4.
Introducing the change of variables y(x) = x^{−4}u(x) yields
$$\frac{d^2}{dx^2}\left[x^{-4}u(x)\right] + x^2\left(x^{-4}u(x)\right)^2 = 0$$
$$x^{-4}u'' - 8x^{-5}u' + 20x^{-6}u + x^{-6}u^2 = 0$$
$$x^2u'' - 8xu' + 20u + u^2 = 0.$$
We see that the equation for u is equidimensional-in-x.
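The scale invariance itself can be checked numerically. For any smooth function $f$, the residual operator $R[y] = y'' + x^2y^2$ satisfies $R[c^{-4}f(\cdot/c)](x) = c^{-6}R[f](x/c)$, which is exactly invariance with $\alpha = -4$. The sketch below (my own illustration; the test function, scale factor, and sample point are arbitrary) verifies this identity with finite differences.

```python
import math

def R(y, x, h=1e-4):
    # residual of y'' + x^2 y^2 at x, second derivative by central differences
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return ypp + x ** 2 * y(x) ** 2

f = math.sin            # arbitrary smooth test function, not a solution
c = 1.7                 # arbitrary scale factor
yc = lambda x: c ** -4 * f(x / c)   # scaled function, alpha = -4

x = 0.9
print(abs(R(yc, x) - c ** -6 * R(f, x / c)))
```

Both sides agree to finite-difference accuracy, confirming the exponent α = −4.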
20.8 Exercises
Exercise 20.1
1. Find the general solution and the singular solution of the Clairaut equation,
$$y = xp + p^2.$$
2. Show that the singular solution is the envelope of the general solution.
Hint, Solution
Bernoulli Equations
Exercise 20.2 (mathematica/ode/techniques nonlinear/bernoulli.nb)
Consider the Bernoulli equation
$$\frac{dy}{dt} + p(t)y = q(t)y^{\alpha}.$$
1. Solve the Bernoulli equation for $\alpha = 1$.
2. Show that for $\alpha \neq 1$ the substitution $u = y^{1-\alpha}$ reduces Bernoulli's equation to a linear equation.
3. Find the general solution to the following equations.
(a) $t^2\dfrac{dy}{dt} + 2ty - y^3 = 0$, $t > 0$
(b) $\dfrac{dy}{dx} + 2xy + y^2 = 0$
Hint, Solution
Exercise 20.3
Consider a population, y. Let the birth rate of the population be proportional to y with constant of proportionality 1. Let the death rate of the population be proportional to $y^2$ with constant of proportionality 1/1000. Assume that the population is large enough so that you can consider y to be continuous. What is the population as a function of time if the initial population is $y_0$?
Hint, Solution
Exercise 20.4
Show that the transformation $u = y^{1-n}$ reduces the equation to a linear first order equation. Solve the equations
1. $t^2\dfrac{dy}{dt} + 2ty - y^3 = 0$, $t > 0$
2. $\dfrac{dy}{dt} = (\cos t + T)\,y - y^3$, $T$ a real constant. (From a fluid flow stability problem.)
Hint, Solution
Riccati Equations
Exercise 20.5
1. Consider the Riccati equation,
$$\frac{dy}{dx} = a(x)y^2 + b(x)y + c(x).$$
Substitute
$$y = y_p(x) + \frac{1}{u(x)}$$
into the Riccati equation, where $y_p$ is some particular solution, to obtain a first order linear differential equation for u.
2. Consider a Riccati equation,
$$y' = 1 + x^2 - 2xy + y^2.$$
Verify that $y_p(x) = x$ is a particular solution. Make the substitution $y = y_p + 1/u$ to find the general solution.
What would happen if you continued this method, taking the general solution for $y_p$? Would you be able to find a more general solution?
3. The substitution
$$y = -\frac{u'}{au}$$
gives us the second order, linear, homogeneous differential equation,
$$u'' - \left(\frac{a'}{a} + b\right)u' + acu = 0.$$
The general solution for u has two constants of integration. However, the solution for y should only have one constant of integration as it satisfies a first order equation. Write y in terms of the solution for u and verify that y has only one constant of integration.
Hint, Solution
Exchanging the Dependent and Independent Variables
Exercise 20.6
Solve the differential equation
$$y' = \frac{\sqrt{y}}{xy + y}.$$
Hint, Solution
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
20.9 Hints
Hint 20.1
Bernoulli Equations
Hint 20.2
Hint 20.3
The differential equation governing the population is
$$\frac{dy}{dt} = y - \frac{y^2}{1000}, \qquad y(0) = y_0.$$
This is a Bernoulli equation.
Hint 20.4
Riccati Equations
Hint 20.5
Exchanging the Dependent and Independent Variables
Hint 20.6
Exchange the dependent and independent variables.
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
20.10 Solutions
Solution 20.1
We consider the Clairaut equation,
$$y = xp + p^2. \qquad (20.2)$$
1. We differentiate Equation 20.2 with respect to x to obtain a second order differential equation.
$$y' = y' + xy'' + 2y'y''$$
$$y''(2y' + x) = 0$$
Equating the first or second factor to zero will lead us to two distinct solutions.
$$y'' = 0 \qquad \text{or} \qquad y' = -\frac{x}{2}$$
If y'' = 0 then y' ≡ p is a constant, (say y' = c). From Equation 20.2 we see that the general solution is,
$$y(x) = cx + c^2. \qquad (20.3)$$
Recall that the general solution of a first order differential equation has one constant of integration.
If y' = −x/2 then y = −x²/4 + const. We determine the constant by substituting the expression into Equation 20.2.
$$-\frac{x^2}{4} + c = x\left(-\frac{x}{2}\right) + \left(-\frac{x}{2}\right)^2$$
Thus we see that a singular solution of the Clairaut equation is
$$y(x) = -\frac{1}{4}x^2. \qquad (20.4)$$
Recall that a singular solution of a first order nonlinear differential equation has no constant of integration.
2. Equating the general and singular solutions, y(x), and their derivatives, y'(x), gives us the system of equations,
$$cx + c^2 = -\frac{1}{4}x^2, \qquad c = -\frac{1}{2}x.$$
Since the first equation is satisfied for c = −x/2, we see that the solution y = cx + c² is tangent to the solution y = −x²/4 at the point (−2c, −c²). The solution y = cx + c² is plotted for c = ..., −1/4, 0, 1/4, ... in Figure 20.1.
Figure 20.1: The Envelope of y = cx + c².
The envelope of a one-parameter family F(x, y, c) = 0 is given by the system of equations,
$$F(x, y, c) = 0, \qquad F_c(x, y, c) = 0.$$
For the family of solutions y = cx + c² these equations are
$$y = cx + c^2, \qquad 0 = x + 2c.$$
Substituting the solution of the second equation, c = −x/2, into the first equation gives the envelope,
$$y = \left(-\frac{1}{2}x\right)x + \left(-\frac{1}{2}x\right)^2 = -\frac{1}{4}x^2.$$
Thus we see that the singular solution is the envelope of the general solution.
Bernoulli Equations
Solution 20.2
1.
$$\frac{dy}{dt} + p(t)y = q(t)y$$
$$\frac{dy}{y} = (q - p)\,dt$$
$$\ln y = \int (q - p)\,dt + c$$
$$y = c\,e^{\int(q-p)\,dt}$$
2. We consider the Bernoulli equation,
$$\frac{dy}{dt} + p(t)y = q(t)y^{\alpha}, \qquad \alpha \neq 1.$$
We divide by $y^{\alpha}$.
$$y^{-\alpha}y' + p(t)y^{1-\alpha} = q(t)$$
This suggests the change of dependent variable $u = y^{1-\alpha}$, $u' = (1-\alpha)y^{-\alpha}y'$.
$$\frac{1}{1-\alpha}\frac{d}{dt}y^{1-\alpha} + p(t)y^{1-\alpha} = q(t)$$
$$\frac{du}{dt} + (1-\alpha)p(t)u = (1-\alpha)q(t)$$
Thus we obtain a linear equation for u which when solved will give us an implicit solution for y.
3. (a)
$$t^2\frac{dy}{dt} + 2ty - y^3 = 0, \qquad t > 0$$
$$t^2\frac{y'}{y^3} + 2t\frac{1}{y^2} = 1$$
We make the change of variables $u = y^{-2}$.
$$-\frac{1}{2}t^2u' + 2tu = 1$$
$$u' - \frac{4}{t}u = -\frac{2}{t^2}$$
The integrating factor is
$$\mu = e^{\int(-4/t)\,dt} = e^{-4\ln t} = t^{-4}.$$
We multiply by the integrating factor and integrate to obtain the solution.
$$\frac{d}{dt}\left[t^{-4}u\right] = -2t^{-6}$$
$$u = \frac{2}{5}t^{-1} + ct^4$$
$$y^{-2} = \frac{2}{5}t^{-1} + ct^4$$
$$y = \pm\frac{1}{\sqrt{\frac{2}{5}t^{-1} + ct^4}}$$
$$y = \pm\frac{\sqrt{5t}}{\sqrt{2 + ct^5}}$$
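A quick numeric check of part (a) (my own addition, not from the text): the sketch below substitutes the solution family $y = \sqrt{5t/(2 + ct^5)}$ into $t^2y' + 2ty - y^3$ using a central-difference derivative, with an arbitrary constant c and sample points.

```python
import math

def y(t, c=3.0):
    # solution family y = sqrt(5 t / (2 + c t^5)), valid for t > 0 here
    return math.sqrt(5 * t / (2 + c * t ** 5))

def residual(t, h=1e-5):
    # residual of t^2 y' + 2 t y - y^3
    yp = (y(t + h) - y(t - h)) / (2 * h)
    return t ** 2 * yp + 2 * t * y(t) - y(t) ** 3

print(abs(residual(1.0)), abs(residual(0.5)))
```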
(b)
$$\frac{dy}{dx} + 2xy + y^2 = 0$$
$$\frac{y'}{y^2} + \frac{2x}{y} = -1$$
We make the change of variables $u = y^{-1}$.
$$u' - 2xu = 1$$
The integrating factor is
$$\mu = e^{\int(-2x)\,dx} = e^{-x^2}.$$
We multiply by the integrating factor and integrate to obtain the solution.
$$\frac{d}{dx}\left[e^{-x^2}u\right] = e^{-x^2}$$
$$u = e^{x^2}\int e^{-x^2}\,dx + c\,e^{x^2}$$
$$y = \frac{e^{-x^2}}{\int e^{-x^2}\,dx + c}$$
Solution 20.3
The differential equation governing the population is
$$\frac{dy}{dt} = y - \frac{y^2}{1000}, \qquad y(0) = y_0.$$
We recognize this as a Bernoulli equation. The substitution u(t) = 1/y(t) yields
$$-\frac{du}{dt} = u - \frac{1}{1000}, \qquad u(0) = \frac{1}{y_0}.$$
$$u' + u = \frac{1}{1000}$$
$$u = \frac{1}{y_0}e^{-t} + \frac{e^{-t}}{1000}\int_0^t e^{\tau}\,d\tau$$
$$u = \frac{1}{1000} + \left(\frac{1}{y_0} - \frac{1}{1000}\right)e^{-t}$$
Solving for y(t),
$$y(t) = \left(\frac{1}{1000} + \left(\frac{1}{y_0} - \frac{1}{1000}\right)e^{-t}\right)^{-1}.$$
As a check, we see that as t → ∞, y(t) → 1000, which is an equilibrium solution of the differential equation.
$$\frac{dy}{dt} = 0 = y - \frac{y^2}{1000} \qquad \Rightarrow \qquad y = 1000.$$
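The closed-form population can also be sanity-checked numerically. The sketch below (my own illustration; the initial population and sample times are arbitrary) confirms the initial condition, the equilibrium limit, and that the formula satisfies the logistic equation to finite-difference accuracy.

```python
import math

def y(t, y0=10.0):
    # y(t) = [1/1000 + (1/y0 - 1/1000) e^{-t}]^{-1}
    return 1.0 / (1.0 / 1000 + (1.0 / y0 - 1.0 / 1000) * math.exp(-t))

def residual(t, h=1e-5):
    # residual of y' - (y - y^2/1000)
    yp = (y(t + h) - y(t - h)) / (2 * h)
    return yp - (y(t) - y(t) ** 2 / 1000)

print(y(0.0), y(50.0), abs(residual(2.0)))
```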
Solution 20.4
1.
$$t^2\frac{dy}{dt} + 2ty - y^3 = 0$$
$$\frac{dy}{dt} + 2t^{-1}y = t^{-2}y^3$$
We make the change of variables $u(t) = y^{-2}(t)$.
$$u' - 4t^{-1}u = -2t^{-2}$$
This gives us a first order, linear equation. The integrating factor is
$$I(t) = e^{\int -4t^{-1}\,dt} = e^{-4\log t} = t^{-4}.$$
We multiply by the integrating factor and integrate.
$$\frac{d}{dt}\left[t^{-4}u\right] = -2t^{-6}$$
$$t^{-4}u = \frac{2}{5}t^{-5} + c$$
$$u = \frac{2}{5}t^{-1} + ct^4$$
Finally we write the solution in terms of y(t).
$$y(t) = \pm\frac{1}{\sqrt{\frac{2}{5}t^{-1} + ct^4}}$$
$$y(t) = \pm\frac{\sqrt{5t}}{\sqrt{2 + ct^5}}$$
2.
$$\frac{dy}{dt} - (\cos t + T)\,y = -y^3$$
We make the change of variables $u(t) = y^{-2}(t)$.
$$u' + 2(\cos t + T)u = 2$$
This gives us a first order, linear equation. The integrating factor is
$$I(t) = e^{\int 2(\cos t + T)\,dt} = e^{2(\sin t + Tt)}$$
We multiply by the integrating factor and integrate.
$$\frac{d}{dt}\left[e^{2(\sin t + Tt)}u\right] = 2e^{2(\sin t + Tt)}$$
$$u = 2e^{-2(\sin t + Tt)}\left(\int e^{2(\sin t + Tt)}\,dt + c\right)$$
Finally we write the solution in terms of y(t).
$$y = \pm\frac{e^{\sin t + Tt}}{\sqrt{2\left(\int e^{2(\sin t + Tt)}\,dt + c\right)}}$$
Riccati Equations
Solution 20.5
We consider the Riccati equation,
$$\frac{dy}{dx} = a(x)y^2 + b(x)y + c(x). \qquad (20.5)$$
1. We substitute
$$y = y_p(x) + \frac{1}{u(x)}$$
into the Riccati equation, where $y_p$ is some particular solution.
$$y_p' - \frac{u'}{u^2} = a(x)\left(y_p^2 + 2\frac{y_p}{u} + \frac{1}{u^2}\right) + b(x)\left(y_p + \frac{1}{u}\right) + c(x)$$
$$-\frac{u'}{u^2} = b(x)\frac{1}{u} + a(x)\left(2\frac{y_p}{u} + \frac{1}{u^2}\right)$$
$$u' = -(b + 2ay_p)u - a$$
We obtain a first order linear differential equation for u whose solution will contain one constant of integration.
2. We consider a Riccati equation,
$$y' = 1 + x^2 - 2xy + y^2. \qquad (20.6)$$
We verify that $y_p(x) = x$ is a solution.
$$1 = 1 + x^2 - 2x\,x + x^2$$
Substituting $y = y_p + 1/u$ into Equation 20.6 yields,
$$u' = -(-2x + 2x)u - 1$$
$$u = -x + c$$
$$y = x + \frac{1}{c - x}$$
What would happen if we continued this method? Since $y = x + \frac{1}{c-x}$ is a solution of the Riccati equation we can make the substitution,
$$y = x + \frac{1}{c - x} + \frac{1}{u(x)}, \qquad (20.7)$$
which will lead to a solution for y which has two constants of integration. Then we could repeat the process, substituting the sum of that solution and 1/u(x) into the Riccati equation to find a solution with three constants of integration. We know that the general solution of a first order, ordinary differential equation has only one constant of integration. Does this method for Riccati equations violate this theorem? There's only one way to find out. We substitute Equation 20.7 into the Riccati equation.
$$u' = -\left(-2x + 2\left(x + \frac{1}{c - x}\right)\right)u - 1$$
$$u' = -\frac{2}{c - x}u - 1$$
$$u' + \frac{2}{c - x}u = -1$$
The integrating factor is
$$I(x) = e^{\int 2/(c-x)\,dx} = e^{-2\log(c-x)} = \frac{1}{(c - x)^2}.$$
Upon multiplying by the integrating factor, the equation becomes exact.
$$\frac{d}{dx}\left[\frac{1}{(c - x)^2}u\right] = -\frac{1}{(c - x)^2}$$
$$u = -(c - x)^2\frac{1}{c - x} + b(c - x)^2$$
$$u = x - c + b(c - x)^2$$
Thus the Riccati equation has the solution,
$$y = x + \frac{1}{c - x} + \frac{1}{x - c + b(c - x)^2}.$$
It appears that we have found a solution that has two constants of integration, but appearances can be deceptive. We do a little algebraic simplification of the solution.
$$y = x + \frac{1}{c - x} + \frac{1}{(b(c - x) - 1)(c - x)}$$
$$y = x + \frac{(b(c - x) - 1) + 1}{(b(c - x) - 1)(c - x)}$$
$$y = x + \frac{b}{b(c - x) - 1}$$
$$y = x + \frac{1}{(c - 1/b) - x}$$
This is actually a solution, (namely the solution we had before), with one constant of integration, (namely c − 1/b). Thus we see that repeated applications of the procedure will not produce more general solutions.
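As a check on part 2 (my own addition): the sketch below substitutes $y = x + 1/(c - x)$ into the Riccati equation $y' = 1 + x^2 - 2xy + y^2$ using a central-difference derivative; the constant c and sample points are arbitrary.

```python
def y(x, c=2.0):
    # general solution y = x + 1/(c - x) of the Riccati equation above
    return x + 1.0 / (c - x)

def residual(x, h=1e-5):
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return yp - (1 + x ** 2 - 2 * x * y(x) + y(x) ** 2)

print(abs(residual(0.5)), abs(residual(-1.0)))
```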
3. The substitution
$$y = -\frac{u'}{au}$$
gives us the second order, linear, homogeneous differential equation,
$$u'' - \left(\frac{a'}{a} + b\right)u' + acu = 0.$$
The solution to this linear equation is a linear combination of two homogeneous solutions, $u_1$ and $u_2$.
$$u = c_1 u_1(x) + c_2 u_2(x)$$
The solution of the Riccati equation is then
$$y = -\frac{c_1 u_1'(x) + c_2 u_2'(x)}{a(x)(c_1 u_1(x) + c_2 u_2(x))}.$$
Since we can divide the numerator and denominator by either $c_1$ or $c_2$, this answer has only one constant of integration, (namely $c_1/c_2$ or $c_2/c_1$).
Exchanging the Dependent and Independent Variables
Solution 20.6
Exchanging the dependent and independent variables in the differential equation,
$$y' = \frac{\sqrt{y}}{xy + y},$$
yields
$$x'(y) = y^{1/2}x + y^{1/2}.$$
This is a first order differential equation for x(y).
$$x' - y^{1/2}x = y^{1/2}$$
$$\frac{d}{dy}\left[x\exp\left(-\frac{2y^{3/2}}{3}\right)\right] = y^{1/2}\exp\left(-\frac{2y^{3/2}}{3}\right)$$
$$x\exp\left(-\frac{2y^{3/2}}{3}\right) = -\exp\left(-\frac{2y^{3/2}}{3}\right) + c_1$$
$$x = -1 + c_1\exp\left(\frac{2y^{3/2}}{3}\right)$$
$$\frac{x + 1}{c_1} = \exp\left(\frac{2y^{3/2}}{3}\right)$$
$$\log\left(\frac{x + 1}{c_1}\right) = \frac{2}{3}y^{3/2}$$
$$y = \left(\frac{3}{2}\log\left(\frac{x + 1}{c_1}\right)\right)^{2/3}$$
$$y = \left(c + \frac{3}{2}\log(x + 1)\right)^{2/3}$$
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
Chapter 21
Transformations and Canonical Forms
Prize intensity more than extent. Excellence resides in quality not in quantity. The best is always few and
rare - abundance lowers value. Even among men, the giants are usually really dwarfs. Some reckon books by
the thickness, as if they were written to exercise the brawn more than the brain. Extent alone never rises above
mediocrity; it is the misfortune of universal geniuses that in attempting to be at home everywhere are so nowhere.
Intensity gives eminence and rises to the heroic in matters sublime.
-Balthasar Gracian
21.1 The Constant Coefficient Equation
The solution of any second order linear homogeneous differential equation can be written in terms of the solutions to either
$$y'' = 0, \qquad \text{or} \qquad y'' - y = 0.$$
Consider the general equation
$$y'' + ay' + by = 0.$$
We can solve this differential equation by making the substitution $y = e^{\lambda x}$. This yields the algebraic equation
$$\lambda^2 + a\lambda + b = 0, \qquad \lambda = \frac{1}{2}\left(-a \pm \sqrt{a^2 - 4b}\right).$$
There are two cases to consider. If $a^2 \neq 4b$ then the solutions are
$$y_1 = e^{(-a + \sqrt{a^2 - 4b})x/2}, \qquad y_2 = e^{(-a - \sqrt{a^2 - 4b})x/2}.$$
If $a^2 = 4b$ then we have
$$y_1 = e^{-ax/2}, \qquad y_2 = x\,e^{-ax/2}.$$
Note that regardless of the values of a and b the solutions are of the form
$$y = e^{-ax/2}u(x).$$
We would like to write the solutions to the general differential equation in terms of the solutions to simpler differential equations. We make the substitution
$$y = e^{\lambda x}u.$$
The derivatives of y are
$$y' = e^{\lambda x}(u' + \lambda u), \qquad y'' = e^{\lambda x}(u'' + 2\lambda u' + \lambda^2 u).$$
Substituting these into the differential equation yields
$$u'' + (2\lambda + a)u' + (\lambda^2 + a\lambda + b)u = 0.$$
In order to get rid of the u' term we choose
$$\lambda = -\frac{a}{2}.$$
The equation is then
$$u'' + \left(b - \frac{a^2}{4}\right)u = 0.$$
There are now two cases to consider.
Case 1. If $b = a^2/4$ then the differential equation is
$$u'' = 0,$$
which has solutions 1 and x. The general solution for y is then
$$y = e^{-ax/2}(c_1 + c_2 x).$$
Case 2. If $b \neq a^2/4$ then the differential equation is
$$u'' - \left(\frac{a^2}{4} - b\right)u = 0.$$
We make the change of variables
$$u(x) = v(\xi), \qquad x = \mu\xi.$$
The derivatives in terms of ξ are
$$\frac{d}{dx} = \frac{d\xi}{dx}\frac{d}{d\xi} = \frac{1}{\mu}\frac{d}{d\xi}, \qquad \frac{d^2}{dx^2} = \frac{1}{\mu}\frac{d}{d\xi}\frac{1}{\mu}\frac{d}{d\xi} = \frac{1}{\mu^2}\frac{d^2}{d\xi^2}.$$
The differential equation for v is
$$\frac{1}{\mu^2}v'' - \left(\frac{a^2}{4} - b\right)v = 0$$
$$v'' - \mu^2\left(\frac{a^2}{4} - b\right)v = 0$$
We choose
$$\mu = \left(\frac{a^2}{4} - b\right)^{-1/2}$$
to obtain
$$v'' - v = 0,$$
which has solutions $e^{\pm\xi}$. The solution for y is
$$y = e^{-ax/2}\left(c_1 e^{x/\mu} + c_2 e^{-x/\mu}\right)$$
$$y = e^{-ax/2}\left(c_1 e^{\sqrt{a^2/4 - b}\,x} + c_2 e^{-\sqrt{a^2/4 - b}\,x}\right)$$
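A numeric sanity check of the final formula (my own addition; the coefficients, constants, and sample point are arbitrary choices with $a^2/4 - b > 0$):

```python
import math

A, B = 3.0, 1.0         # sample coefficients with A^2/4 - B > 0

def y(x, c1=1.0, c2=2.0):
    # y = e^{-Ax/2} (c1 e^{kx} + c2 e^{-kx}) with k = sqrt(A^2/4 - B)
    k = math.sqrt(A * A / 4 - B)
    return math.exp(-A * x / 2) * (c1 * math.exp(k * x) + c2 * math.exp(-k * x))

def residual(x, h=1e-4):
    # residual of y'' + A y' + B y, derivatives by central differences
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return ypp + A * yp + B * y(x)

print(abs(residual(0.7)))
```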
21.2 Normal Form
21.2.1 Second Order Equations
Consider the second order equation
$$y'' + p(x)y' + q(x)y = 0. \qquad (21.1)$$
Through a change of dependent variable, this equation can be transformed to
$$u'' + I(x)u = 0.$$
This is known as the normal form of (21.1). The function I(x) is known as the invariant of the equation.
Now to find the change of variables that will accomplish this transformation. We make the substitution y(x) = a(x)u(x) in (21.1).
$$au'' + 2a'u' + a''u + p(au' + a'u) + qau = 0$$
$$u'' + \left(2\frac{a'}{a} + p\right)u' + \left(\frac{a''}{a} + \frac{pa'}{a} + q\right)u = 0$$
To eliminate the u' term, a(x) must satisfy
$$2\frac{a'}{a} + p = 0$$
$$a' + \frac{1}{2}pa = 0$$
$$a = c\,\exp\left(-\frac{1}{2}\int p(x)\,dx\right).$$
For this choice of a, our differential equation for u becomes
$$u'' + \left(q - \frac{p^2}{4} - \frac{p'}{2}\right)u = 0.$$
Two differential equations having the same normal form are called equivalent.
Result 21.2.1 The change of variables
$$y(x) = \exp\left(-\frac{1}{2}\int p(x)\,dx\right)u(x)$$
transforms the differential equation
$$y'' + p(x)y' + q(x)y = 0$$
into its normal form
$$u'' + I(x)u = 0$$
where the invariant of the equation, I(x), is
$$I(x) = q - \frac{p^2}{4} - \frac{p'}{2}.$$
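The transformation identity behind Result 21.2.1 can be verified numerically: for any smooth u and $y = e^{-P/2}u$ with $P' = p$, we should have $y'' + py' + qy = e^{-P/2}(u'' + Iu)$. The sketch below (my own illustration; p, q, u, and the sample point are arbitrary choices) checks this with finite differences.

```python
import math

p = lambda x: x          # arbitrary smooth coefficient
q = lambda x: x ** 3     # arbitrary smooth coefficient
P = lambda x: x * x / 2  # antiderivative of p
u = math.sin             # arbitrary smooth test function

y = lambda x: math.exp(-P(x) / 2) * u(x)

def d1(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

x = 0.8
I = q(x) - p(x) ** 2 / 4 - d1(p, x) / 2     # invariant I = q - p^2/4 - p'/2
lhs = d2(y, x) + p(x) * d1(y, x) + q(x) * y(x)
rhs = math.exp(-P(x) / 2) * (d2(u, x) + I * u(x))
print(abs(lhs - rhs))
```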
21.2.2 Higher Order Differential Equations
Consider the third order differential equation
$$y''' + p(x)y'' + q(x)y' + r(x)y = 0.$$
We can eliminate the y'' term. Making the change of dependent variable
$$y = u\,\exp\left(-\frac{1}{3}\int p(x)\,dx\right)$$
$$y' = \left(u' - \frac{1}{3}pu\right)\exp\left(-\frac{1}{3}\int p(x)\,dx\right)$$
$$y'' = \left(u'' - \frac{2}{3}pu' + \frac{1}{9}(p^2 - 3p')u\right)\exp\left(-\frac{1}{3}\int p(x)\,dx\right)$$
$$y''' = \left(u''' - pu'' + \frac{1}{3}(p^2 - 3p')u' + \frac{1}{27}(9pp' - 9p'' - p^3)u\right)\exp\left(-\frac{1}{3}\int p(x)\,dx\right)$$
yields the differential equation
$$u''' + \frac{1}{3}(3q - 3p' - p^2)u' + \frac{1}{27}(27r - 9pq - 9p'' + 2p^3)u = 0.$$
Result 21.2.2 The change of variables
$$y(x) = \exp\left(-\frac{1}{n}\int p_{n-1}(x)\,dx\right)u(x)$$
transforms the differential equation
$$y^{(n)} + p_{n-1}(x)y^{(n-1)} + p_{n-2}(x)y^{(n-2)} + \cdots + p_0(x)y = 0$$
into the form
$$u^{(n)} + a_{n-2}(x)u^{(n-2)} + a_{n-3}(x)u^{(n-3)} + \cdots + a_0(x)u = 0.$$
21.3 Transformations of the Independent Variable
21.3.1 Transformation to the form u'' + a(ξ)u = 0
Consider the second order linear differential equation
$$y'' + p(x)y' + q(x)y = 0.$$
We make the change of independent variable
$$\xi = f(x), \qquad u(\xi) = y(x).$$
The derivatives in terms of ξ are
$$\frac{d}{dx} = \frac{d\xi}{dx}\frac{d}{d\xi} = f'\frac{d}{d\xi}, \qquad \frac{d^2}{dx^2} = f'\frac{d}{d\xi}f'\frac{d}{d\xi} = (f')^2\frac{d^2}{d\xi^2} + f''\frac{d}{d\xi}$$
The differential equation becomes
$$(f')^2u'' + f''u' + pf'u' + qu = 0.$$
In order to eliminate the u' term, f must satisfy
$$f'' + pf' = 0$$
$$f' = \exp\left(-\int p(x)\,dx\right)$$
$$f = \int\exp\left(-\int p(x)\,dx\right)dx.$$
The differential equation for u is then
$$u'' + \frac{q}{(f')^2}u = 0$$
$$u''(\xi) + q(x)\,\exp\left(2\int p(x)\,dx\right)u(\xi) = 0.$$
Result 21.3.1 The change of variables
$$\xi = \int\exp\left(-\int p(x)\,dx\right)dx, \qquad u(\xi) = y(x)$$
transforms the differential equation
$$y'' + p(x)y' + q(x)y = 0$$
into
$$u''(\xi) + q(x)\,\exp\left(2\int p(x)\,dx\right)u(\xi) = 0.$$
21.3.2 Transformation to a Constant Coefficient Equation
Consider the second order linear differential equation
$$y'' + p(x)y' + q(x)y = 0.$$
With the change of independent variable
$$\xi = f(x), \qquad u(\xi) = y(x),$$
the differential equation becomes
$$(f')^2u'' + (f'' + pf')u' + qu = 0.$$
For this to be a constant coefficient equation we must have
$$(f')^2 = c_1 q, \qquad \text{and} \qquad f'' + pf' = c_2 q,$$
for some constants $c_1$ and $c_2$. Solving the first condition,
$$f' = c\sqrt{q},$$
$$f = c\int\sqrt{q(x)}\,dx.$$
The second constraint becomes
$$\frac{f'' + pf'}{q} = \text{const}$$
$$\frac{\frac{1}{2}cq^{-1/2}q' + pcq^{1/2}}{q} = \text{const}$$
$$\frac{q' + 2pq}{q^{3/2}} = \text{const}.$$
Result 21.3.2 Consider the differential equation
$$y'' + p(x)y' + q(x)y = 0.$$
If the expression
$$\frac{q' + 2pq}{q^{3/2}}$$
is a constant then the change of variables
$$\xi = c\int\sqrt{q(x)}\,dx, \qquad u(\xi) = y(x),$$
will yield a constant coefficient differential equation. (Here c is an arbitrary constant.)
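The criterion in Result 21.3.2 can be exercised on a concrete family. For the Euler equation written as $y'' + (a_1/x)y' + (a_0/x^2)y = 0$, the expression $(q' + 2pq)/q^{3/2}$ reduces to the constant $2(a_1 - 1)/\sqrt{a_0}$, so the Euler equation is transformable to constant coefficients (consistent with Exercise 21.4). The sketch below (my own addition; the coefficients and sample points are arbitrary) evaluates the criterion at several x.

```python
def crit(p, q, qp, x):
    # (q' + 2 p q) / q^(3/2); constant iff transformable to constant coefficients
    return (qp(x) + 2 * p(x) * q(x)) / q(x) ** 1.5

a1, a0 = 3.0, 4.0                 # sample Euler coefficients
p = lambda x: a1 / x
q = lambda x: a0 / x ** 2
qp = lambda x: -2 * a0 / x ** 3   # q'

vals = [crit(p, q, qp, x) for x in (0.5, 1.0, 2.0, 7.0)]
print(vals)   # constant 2 (a1 - 1) / sqrt(a0) = 2.0 here
```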
21.4 Integral Equations
Volterra's Equations. Volterra's integral equation of the first kind has the form
$$\int_a^x N(x,\xi)y(\xi)\,d\xi = f(x).$$
The Volterra equation of the second kind is
$$y(x) = f(x) + \int_a^x N(x,\xi)y(\xi)\,d\xi.$$
N(x, ξ) is known as the kernel of the equation.
Fredholm's Equations. Fredholm's integral equations of the first and second kinds are
$$\int_a^b N(x,\xi)y(\xi)\,d\xi = f(x),$$
$$y(x) = f(x) + \int_a^b N(x,\xi)y(\xi)\,d\xi.$$
21.4.1 Initial Value Problems
Consider the initial value problem
$$y'' + p(x)y' + q(x)y = f(x), \qquad y(a) = \alpha, \qquad y'(a) = \beta.$$
Integrating this equation twice yields
$$\int_a^x\int_a^{\xi} y''(\psi) + p(\psi)y'(\psi) + q(\psi)y(\psi)\,d\psi\,d\xi = \int_a^x\int_a^{\xi} f(\psi)\,d\psi\,d\xi$$
$$\int_a^x (x - \xi)[y''(\xi) + p(\xi)y'(\xi) + q(\xi)y(\xi)]\,d\xi = \int_a^x (x - \xi)f(\xi)\,d\xi.$$
Now we use integration by parts.
$$\left[(x - \xi)y'(\xi)\right]_a^x + \int_a^x y'(\xi)\,d\xi + \left[(x - \xi)p(\xi)y(\xi)\right]_a^x - \int_a^x [(x - \xi)p'(\xi) - p(\xi)]y(\xi)\,d\xi + \int_a^x (x - \xi)q(\xi)y(\xi)\,d\xi = \int_a^x (x - \xi)f(\xi)\,d\xi.$$
$$-(x - a)y'(a) + y(x) - y(a) - (x - a)p(a)y(a) - \int_a^x [(x - \xi)p'(\xi) - p(\xi)]y(\xi)\,d\xi + \int_a^x (x - \xi)q(\xi)y(\xi)\,d\xi = \int_a^x (x - \xi)f(\xi)\,d\xi.$$
We obtain a Volterra integral equation of the second kind for y(x).
$$y(x) = \int_a^x (x - \xi)f(\xi)\,d\xi + (x - a)(\alpha p(a) + \beta) + \alpha + \int_a^x \big[(x - \xi)[p'(\xi) - q(\xi)] - p(\xi)\big]y(\xi)\,d\xi.$$
Note that the initial conditions for the differential equation are built into the Volterra equation. Setting x = a in the Volterra equation yields y(a) = α. Differentiating the Volterra equation,
$$y'(x) = \int_a^x f(\xi)\,d\xi + (\alpha p(a) + \beta) - p(x)y(x) + \int_a^x [p'(\xi) - q(\xi)]y(\xi)\,d\xi$$
and setting x = a yields
$$y'(a) = \alpha p(a) + \beta - p(a)\alpha = \beta.$$
(Recall from calculus that
$$\frac{d}{dx}\int^x g(x,\xi)\,d\xi = g(x,x) + \int^x \frac{\partial}{\partial x}[g(x,\xi)]\,d\xi.)$$
Result 21.4.1 The initial value problem
$$y'' + p(x)y' + q(x)y = f(x), \qquad y(a) = \alpha, \qquad y'(a) = \beta$$
is equivalent to the Volterra equation of the second kind
$$y(x) = F(x) + \int_a^x N(x,\xi)y(\xi)\,d\xi$$
where
$$F(x) = \int_a^x (x - \xi)f(\xi)\,d\xi + (x - a)(\alpha p(a) + \beta) + \alpha$$
$$N(x,\xi) = (x - \xi)[p'(\xi) - q(\xi)] - p(\xi).$$
21.4.2 Boundary Value Problems
Consider the boundary value problem
$$y'' = f(x), \qquad y(a) = \alpha, \qquad y(b) = \beta. \qquad (21.2)$$
To obtain a problem with homogeneous boundary conditions, we make the change of variable
$$y(x) = u(x) + \alpha + \frac{\beta - \alpha}{b - a}(x - a)$$
to obtain the problem
$$u'' = f(x), \qquad u(a) = u(b) = 0.$$
Now we will use Green functions to write the solution as an integral. First we solve the problem
$$G'' = \delta(x - \xi), \qquad G(a|\xi) = G(b|\xi) = 0.$$
The homogeneous solutions of the differential equation that satisfy the left and right boundary conditions are
$$c_1(x - a) \qquad \text{and} \qquad c_2(x - b).$$
Thus the Green's function has the form
$$G(x|\xi) = \begin{cases} c_1(x - a), & \text{for } x \le \xi \\ c_2(x - b), & \text{for } x \ge \xi \end{cases}$$
Imposing continuity of G(x|ξ) at x = ξ and a unit jump of $G_x(x|\xi)$ at x = ξ, we obtain
$$G(x|\xi) = \begin{cases} \dfrac{(x - a)(\xi - b)}{b - a}, & \text{for } x \le \xi \\ \dfrac{(x - b)(\xi - a)}{b - a}, & \text{for } x \ge \xi \end{cases}$$
Thus the solution of (21.2) is
$$y(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)f(\xi)\,d\xi.$$
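A numeric sanity check of this Green function (my own addition): for $f \equiv 1$ on $[0,1]$ with zero boundary data, the exact solution of $y'' = 1$, $y(0) = y(1) = 0$ is $y = x(x-1)/2$, and the integral $\int_0^1 G(x|\xi)\,d\xi$ should reproduce it. The sketch below uses the trapezoidal rule; the sample point and panel count are arbitrary.

```python
def G(x, xi, a=0.0, b=1.0):
    # Green function for u'' = f, u(a) = u(b) = 0
    if x <= xi:
        return (x - a) * (xi - b) / (b - a)
    return (x - b) * (xi - a) / (b - a)

def y(x, n=4000, a=0.0, b=1.0):
    # y(x) = integral_a^b G(x|xi) f(xi) d xi with f = 1, trapezoidal rule
    h = (b - a) / n
    s = sum((0.5 if k in (0, n) else 1.0) * G(x, a + k * h) for k in range(n + 1))
    return h * s

x = 0.3
print(abs(y(x) - x * (x - 1) / 2))   # compare with the exact solution
```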
Now consider the boundary value problem
$$y'' + p(x)y' + q(x)y = f(x), \qquad y(a) = \alpha, \qquad y(b) = \beta.$$
From the above result we can see that the solution satisfies
$$y(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)[f(\xi) - p(\xi)y'(\xi) - q(\xi)y(\xi)]\,d\xi.$$
Using integration by parts, we can write
$$-\int_a^b G(x|\xi)p(\xi)y'(\xi)\,d\xi = -\left[G(x|\xi)p(\xi)y(\xi)\right]_a^b + \int_a^b \left[\frac{\partial G(x|\xi)}{\partial\xi}p(\xi) + G(x|\xi)p'(\xi)\right]y(\xi)\,d\xi$$
$$= \int_a^b \left[\frac{\partial G(x|\xi)}{\partial\xi}p(\xi) + G(x|\xi)p'(\xi)\right]y(\xi)\,d\xi.$$
Substituting this into our expression for y(x),
$$y(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)f(\xi)\,d\xi + \int_a^b\left[\frac{\partial G(x|\xi)}{\partial\xi}p(\xi) + G(x|\xi)[p'(\xi) - q(\xi)]\right]y(\xi)\,d\xi,$$
we obtain a Fredholm integral equation of the second kind.
Result 21.4.2 The boundary value problem
$$y'' + p(x)y' + q(x)y = f(x), \qquad y(a) = \alpha, \qquad y(b) = \beta$$
is equivalent to the Fredholm equation of the second kind
$$y(x) = F(x) + \int_a^b N(x,\xi)y(\xi)\,d\xi$$
where
$$F(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)f(\xi)\,d\xi,$$
$$N(x,\xi) = H(x|\xi),$$
$$G(x|\xi) = \begin{cases} \dfrac{(x - a)(\xi - b)}{b - a}, & \text{for } x \le \xi \\ \dfrac{(x - b)(\xi - a)}{b - a}, & \text{for } x \ge \xi, \end{cases}$$
$$H(x|\xi) = \begin{cases} \dfrac{x - a}{b - a}\,p(\xi) + \dfrac{(x - a)(\xi - b)}{b - a}[p'(\xi) - q(\xi)], & \text{for } x \le \xi \\ \dfrac{x - b}{b - a}\,p(\xi) + \dfrac{(x - b)(\xi - a)}{b - a}[p'(\xi) - q(\xi)], & \text{for } x \ge \xi. \end{cases}$$
21.5 Exercises
The Constant Coefficient Equation
Normal Form
Exercise 21.1
Solve the differential equation
$$y'' + \left(2 + \frac{4}{3}x\right)y' + \frac{1}{9}\left(24 + 12x + 4x^2\right)y = 0.$$
Hint, Solution
Transformations of the Independent Variable
Integral Equations
Exercise 21.2
Show that the solution of the differential equation
$$y'' + 2(a + bx)y' + (c + dx + ex^2)y = 0$$
can be written in terms of one of the following canonical forms:
$$v'' + (\xi^2 + A)v = 0$$
$$v'' = \xi v$$
$$v'' + v = 0$$
$$v'' = 0.$$
Hint, Solution
Exercise 21.3
Show that the solution of the differential equation
$$y'' + 2\left(a + \frac{b}{x}\right)y' + \left(c + \frac{d}{x} + \frac{e}{x^2}\right)y = 0$$
can be written in terms of one of the following canonical forms:
$$v'' + \left(1 + \frac{A}{\xi} + \frac{B}{\xi^2}\right)v = 0$$
$$v'' + \left(\frac{1}{\xi} + \frac{A}{\xi^2}\right)v = 0$$
$$v'' + \frac{A}{\xi^2}v = 0$$
Hint, Solution
Exercise 21.4
Show that the second order Euler equation
$$x^2\frac{d^2y}{dx^2} + a_1 x\frac{dy}{dx} + a_0 y = 0$$
can be transformed to a constant coefficient equation.
Hint, Solution
Exercise 21.5
Solve Bessel's equation of order 1/2,
$$y'' + \frac{1}{x}y' + \left(1 - \frac{1}{4x^2}\right)y = 0.$$
Hint, Solution
21.6 Hints
The Constant Coefficient Equation
Normal Form
Hint 21.1
Transform the equation to normal form.
Transformations of the Independent Variable
Integral Equations
Hint 21.2
Transform the equation to normal form and then apply the scale transformation x = λξ + μ.
Hint 21.3
Transform the equation to normal form and then apply the scale transformation x = λξ.
Hint 21.4
Make the change of variables x = e^t, y(x) = u(t). Write the derivatives with respect to x in terms of t.
$$x = e^t, \qquad dx = e^t\,dt, \qquad \frac{d}{dx} = e^{-t}\frac{d}{dt}, \qquad x\frac{d}{dx} = \frac{d}{dt}$$
Hint 21.5
Transform the equation to normal form.
21.7 Solutions
The Constant Coefficient Equation
Normal Form
Solution 21.1
$$y'' + \left(2 + \frac{4}{3}x\right)y' + \frac{1}{9}\left(24 + 12x + 4x^2\right)y = 0$$
To transform the equation to normal form we make the substitution
$$y = \exp\left(-\frac{1}{2}\int\left(2 + \frac{4}{3}x\right)dx\right)u = e^{-x - x^2/3}u.$$
The invariant of the equation is
$$I(x) = \frac{1}{9}\left(24 + 12x + 4x^2\right) - \frac{1}{4}\left(2 + \frac{4}{3}x\right)^2 - \frac{1}{2}\frac{d}{dx}\left(2 + \frac{4}{3}x\right) = 1.$$
The normal form of the differential equation is then
$$u'' + u = 0$$
which has the general solution
$$u = c_1\cos x + c_2\sin x$$
Thus the equation for y has the general solution
$$y = c_1 e^{-x - x^2/3}\cos x + c_2 e^{-x - x^2/3}\sin x.$$
Transformations of the Independent Variable
Integral Equations
Solution 21.2
The substitution that will transform the equation to normal form is
$$y = \exp\left(-\frac{1}{2}\int 2(a + bx)\,dx\right)u = e^{-ax - bx^2/2}u.$$
The invariant of the equation is
$$I(x) = c + dx + ex^2 - \frac{1}{4}\left(2(a + bx)\right)^2 - \frac{1}{2}\frac{d}{dx}\left(2(a + bx)\right)$$
$$= c - b - a^2 + (d - 2ab)x + (e - b^2)x^2$$
$$\equiv \alpha + \beta x + \gamma x^2$$
The normal form of the differential equation is
$$u'' + (\alpha + \beta x + \gamma x^2)u = 0$$
We consider the following cases:
γ = 0.
β = 0.
α = 0. We immediately have the equation
$$u'' = 0.$$
α ≠ 0. With the change of variables
$$v(\xi) = u(x), \qquad x = \alpha^{-1/2}\xi,$$
we obtain
$$v'' + v = 0.$$
β ≠ 0. We have the equation
$$u'' + (\alpha + \beta x)u = 0.$$
The scale transformation x = λξ + μ yields
$$v'' + \lambda^2(\alpha + \beta(\lambda\xi + \mu))v = 0$$
$$v'' = -[\beta\lambda^3\xi + \lambda^2(\alpha + \beta\mu)]v.$$
Choosing
$$\lambda = (-\beta)^{-1/3}, \qquad \mu = -\frac{\alpha}{\beta}$$
yields the differential equation
$$v'' = \xi v.$$
γ ≠ 0. The scale transformation x = λξ + μ yields
$$v'' + \lambda^2[\alpha + \beta(\lambda\xi + \mu) + \gamma(\lambda\xi + \mu)^2]v = 0$$
$$v'' + \lambda^2[\alpha + \beta\mu + \gamma\mu^2 + \lambda(\beta + 2\gamma\mu)\xi + \gamma\lambda^2\xi^2]v = 0.$$
Choosing
$$\lambda = \gamma^{-1/4}, \qquad \mu = -\frac{\beta}{2\gamma}$$
yields the differential equation
$$v'' + (\xi^2 + A)v = 0$$
where
$$A = \alpha\gamma^{-1/2} - \frac{1}{4}\beta^2\gamma^{-3/2}.$$
Solution 21.3
The substitution that will transform the equation to normal form is
$$y = \exp\left(-\frac{1}{2}\int 2\left(a + \frac{b}{x}\right)dx\right)u = x^{-b}e^{-ax}u.$$
The invariant of the equation is
$$I(x) = c + \frac{d}{x} + \frac{e}{x^2} - \frac{1}{4}\left(2\left(a + \frac{b}{x}\right)\right)^2 - \frac{1}{2}\frac{d}{dx}\left(2\left(a + \frac{b}{x}\right)\right)$$
$$= c - a^2 + \frac{d - 2ab}{x} + \frac{e + b - b^2}{x^2}$$
$$\equiv \alpha + \frac{\beta}{x} + \frac{\gamma}{x^2}.$$
The invariant form of the differential equation is
$$u'' + \left(\alpha + \frac{\beta}{x} + \frac{\gamma}{x^2}\right)u = 0.$$
We consider the following cases:
α = 0.
β = 0. We immediately have the equation
$$u'' + \frac{\gamma}{x^2}u = 0.$$
β ≠ 0. We have the equation
$$u'' + \left(\frac{\beta}{x} + \frac{\gamma}{x^2}\right)u = 0.$$
The scale transformation u(x) = v(ξ), x = λξ yields
$$v'' + \left(\frac{\beta\lambda}{\xi} + \frac{\gamma}{\xi^2}\right)v = 0.$$
Choosing λ = β^{−1}, we obtain
$$v'' + \left(\frac{1}{\xi} + \frac{\gamma}{\xi^2}\right)v = 0.$$
α ≠ 0. The scale transformation x = λξ yields
$$v'' + \left(\alpha\lambda^2 + \frac{\beta\lambda}{\xi} + \frac{\gamma}{\xi^2}\right)v = 0.$$
Choosing λ = α^{−1/2}, we obtain
$$v'' + \left(1 + \frac{\beta\alpha^{-1/2}}{\xi} + \frac{\gamma}{\xi^2}\right)v = 0.$$
Solution 21.4
We write the derivatives with respect to x in terms of t.
$$x = e^t, \qquad dx = e^t\,dt, \qquad \frac{d}{dx} = e^{-t}\frac{d}{dt}, \qquad x\frac{d}{dx} = \frac{d}{dt}$$
Now we express $x^2\frac{d^2}{dx^2}$ in terms of t.
$$x^2\frac{d^2}{dx^2} = x\frac{d}{dx}\left(x\frac{d}{dx}\right) - x\frac{d}{dx} = \frac{d^2}{dt^2} - \frac{d}{dt}$$
Thus under the change of variables, x = e^t, y(x) = u(t), the Euler equation becomes
$$u'' - u' + a_1 u' + a_0 u = 0$$
$$u'' + (a_1 - 1)u' + a_0 u = 0.$$
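A numeric check of this reduction (my own addition): with the sample coefficients $a_1 = 3$, $a_0 = 1$, the reduced equation is $u'' + 2u' + u = 0$, solved by $u = (1 + t)e^{-t}$ (double root $\lambda = -1$), so $y(x) = u(\log x)$ should satisfy the original Euler equation. The sketch below verifies this with finite differences at an arbitrary point.

```python
import math

a1, a0 = 3.0, 1.0                            # sample Euler coefficients

u = lambda t: (1 + t) * math.exp(-t)         # solves u'' + (a1-1)u' + a0 u = 0
y = lambda x: u(math.log(x))                 # transform back with x = e^t

def residual(x, h=1e-4):
    # residual of x^2 y'' + a1 x y' + a0 y
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return x * x * ypp + a1 * x * yp + a0 * y(x)

print(abs(residual(1.5)))
```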
Solution 21.5
The transformation
$$y = \exp\left(-\frac{1}{2}\int\frac{1}{x}\,dx\right)u = x^{-1/2}u$$
will put the equation in normal form. The invariant is
$$I(x) = \left(1 - \frac{1}{4x^2}\right) - \frac{1}{4}\left(\frac{1}{x}\right)^2 - \frac{1}{2}\frac{d}{dx}\left(\frac{1}{x}\right) = 1.$$
Thus we have the differential equation
$$u'' + u = 0,$$
with the solution
$$u = c_1\cos x + c_2\sin x.$$
The solution of Bessel's equation of order 1/2 is
$$y = c_1 x^{-1/2}\cos x + c_2 x^{-1/2}\sin x.$$
Chapter 22
The Dirac Delta Function
I do not know what I may appear to the world; but to myself I seem to have been only like a boy playing on a seashore, and diverting myself now and then by finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.
- Sir Isaac Newton
22.1 Derivative of the Heaviside Function
The Heaviside function H(x) is defined
$$H(x) = \begin{cases} 0 & \text{for } x < 0, \\ 1 & \text{for } x > 0. \end{cases}$$
The derivative of the Heaviside function is zero for x ≠ 0. At x = 0 the derivative is undefined. We will represent the derivative of the Heaviside function by the Dirac delta function, δ(x). The delta function is zero for x ≠ 0 and infinite at the point x = 0. Since the derivative of H(x) is undefined, δ(x) is not a function in the conventional sense of the word. One can derive the properties of the delta function rigorously, but the treatment in this text will be almost entirely heuristic.
The Dirac delta function is defined by the properties
$$\delta(x) = \begin{cases} 0 & \text{for } x \neq 0, \\ \infty & \text{for } x = 0, \end{cases} \qquad \text{and} \qquad \int_{-\infty}^{\infty}\delta(x)\,dx = 1.$$
The second property comes from the fact that δ(x) represents the derivative of H(x). The Dirac delta function is conceptually pictured in Figure 22.1.
Figure 22.1: The Dirac Delta Function.
Let f(x) be a continuous function that vanishes at infinity. Consider the integral
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx.$$
Using integration by parts,
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx = \left[f(x)H(x)\right]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f'(x)H(x)\,dx = -\int_0^{\infty} f'(x)\,dx = \left[-f(x)\right]_0^{\infty} = f(0).$$
We assumed that f(x) vanishes at infinity in order to use integration by parts to evaluate the integral. However, since the delta function is zero for x ≠ 0, the integrand is nonzero only at x = 0. Thus the behavior of the function at infinity should not affect the value of the integral. Thus it is reasonable that
$$f(0) = \int_{-\infty}^{\infty} f(x)\delta(x)\,dx$$
holds for all continuous functions. Changing variables and noting that δ(x) is symmetric we have
$$f(x) = \int_{-\infty}^{\infty} f(\xi)\delta(x - \xi)\,d\xi.$$
This formula is very important in solving inhomogeneous differential equations.
This formula is very important in solving inhomogeneous dierential equations.
22.2 The Delta Function as a Limit
Consider a function b(x, ) dened by
b(x, ) =
_
0 for [x[ > /2
1

for [x[ < /2.


The graph of b(x, 1/10) is shown in Figure 22.2.
The Dirac delta function (x) can be thought of as b(x, ) in the limit as 0. Note that the delta function
so dened satises the properties,
(x) =
_
0 for x ,= 0
for x = 0
and
_

(x) dx = 1
906
-1 1
5
10
Figure 22.2: Graph of b(x, 1/10).
Delayed Limiting Process. When the Dirac delta function appears inside an integral, we can think of the
delta function as a delayed limiting process. That is,
_

f(x)(x) dx lim
0
_

f(x)b(x, ) dx.
Let f(x) be a continuous function and let F
t
(x) = f(x). The integral of f(x)(x) is then
_

f(x)(x) dx = lim
0
1

_
/2
/2
f(x) dx
= lim
0
1

[F(x)]
/2
/2
= lim
0
F(/2) F(/2)

= F
t
(0)
= f(0).
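The delayed limiting process is easy to see numerically. The sketch below (my own addition; the test function and values of ε are arbitrary) evaluates $\int f(x)\,b(x,\epsilon)\,dx$ by the midpoint rule for shrinking ε and watches it approach f(0).

```python
import math

def b_integral(f, eps, n=5001):
    # integral of f(x) b(x, eps) = (1/eps) * integral of f over [-eps/2, eps/2]
    h = eps / n
    return sum(f(-eps / 2 + (k + 0.5) * h) for k in range(n)) * h / eps

f = lambda x: math.exp(x) * math.cos(x)   # arbitrary continuous test function
vals = [b_integral(f, eps) for eps in (0.1, 0.01, 0.001)]
print(vals)   # approaches f(0) = 1 as eps -> 0
```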
22.3 Higher Dimensions
We can define a Dirac delta function in n-dimensional Cartesian space, $\delta_n(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^n$. It is defined by the following two properties.
$$\delta_n(\mathbf{x}) = 0 \quad \text{for } \mathbf{x} \neq \mathbf{0}, \qquad \int_{\mathbb{R}^n}\delta_n(\mathbf{x})\,d\mathbf{x} = 1$$
It is easy to verify that the n-dimensional Dirac delta function can be written as a product of 1-dimensional Dirac delta functions.
$$\delta_n(\mathbf{x}) = \prod_{k=1}^{n}\delta(x_k)$$
22.4 Non-Rectangular Coordinate Systems
We can derive Dirac delta functions in non-rectangular coordinate systems by making a change of variables in the relation,
$$\int_{\mathbb{R}^n}\delta_n(\mathbf{x})\,d\mathbf{x} = 1$$
Where the transformation is non-singular, one merely divides the Dirac delta function by the Jacobian of the transformation to the coordinate system.
Example 22.4.1 Consider the Dirac delta function in cylindrical coordinates, (r, θ, z). The Jacobian is J = r.
$$\int_{-\infty}^{\infty}\int_0^{2\pi}\int_0^{\infty}\delta_3(\mathbf{x} - \mathbf{x}_0)\,r\,dr\,d\theta\,dz = 1$$
For $r_0 \neq 0$, the Dirac delta function is
$$\delta_3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{r}\delta(r - r_0)\,\delta(\theta - \theta_0)\,\delta(z - z_0)$$
since it satisfies the two defining properties.
$$\frac{1}{r}\delta(r - r_0)\,\delta(\theta - \theta_0)\,\delta(z - z_0) = 0 \quad \text{for } (r,\theta,z) \neq (r_0,\theta_0,z_0)$$
$$\int_{-\infty}^{\infty}\int_0^{2\pi}\int_0^{\infty}\frac{1}{r}\delta(r - r_0)\,\delta(\theta - \theta_0)\,\delta(z - z_0)\,r\,dr\,d\theta\,dz = \int_0^{\infty}\delta(r - r_0)\,dr\int_0^{2\pi}\delta(\theta - \theta_0)\,d\theta\int_{-\infty}^{\infty}\delta(z - z_0)\,dz = 1$$
For $r_0 = 0$, we have
$$\delta_3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{2\pi r}\delta(r)\,\delta(z - z_0)$$
since this again satisfies the two defining properties.
$$\frac{1}{2\pi r}\delta(r)\,\delta(z - z_0) = 0 \quad \text{for } (r,z) \neq (0,z_0)$$
$$\int_{-\infty}^{\infty}\int_0^{2\pi}\int_0^{\infty}\frac{1}{2\pi r}\delta(r)\,\delta(z - z_0)\,r\,dr\,d\theta\,dz = \frac{1}{2\pi}\int_0^{\infty}\delta(r)\,dr\int_0^{2\pi}d\theta\int_{-\infty}^{\infty}\delta(z - z_0)\,dz = 1$$
22.5 Exercises
Exercise 22.1
Let f(x) be a function that is continuous except for a jump discontinuity at x = 0. Using a delayed limiting process, show that
$$\frac{f(0^-) + f(0^+)}{2} = \int_{-\infty}^{\infty} f(x)\delta(x)\,dx.$$
Hint, Solution
Exercise 22.2
Let y = y(x) be defined on some interval. Assume y(x) is continuously differentiable and that y'(x) ≠ 0. Show that
$$\delta(x - x_0) = \left(\frac{dy}{dx}\right)^{-1}\delta(y - y_0)$$
where $y_0 = y(x_0)$.
Hint, Solution
Exercise 22.3
Determine the Dirac delta function in spherical coordinates, (r, θ, φ).
$$x = r\cos\theta\sin\phi, \qquad y = r\sin\theta\sin\phi, \qquad z = r\cos\phi$$
Hint, Solution
22.6 Hints
Hint 22.1
Hint 22.2
Make a change of variables in the integral
$$\int\delta(x - x_0)\,dx.$$
Hint 22.3
Consider the special cases φ₀ = 0, π and r₀ = 0.
22.7 Solutions
Solution 22.1
Let F'(x) = f(x).
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx = \lim_{\epsilon\to 0}\int_{-\infty}^{\infty} f(x)b(x,\epsilon)\,dx$$
$$= \lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\int_{-\epsilon/2}^{0} f(x)\,dx + \int_{0}^{\epsilon/2} f(x)\,dx\right)$$
$$= \lim_{\epsilon\to 0}\frac{1}{\epsilon}\left([F(0) - F(-\epsilon/2)] + [F(\epsilon/2) - F(0)]\right)$$
$$= \lim_{\epsilon\to 0}\frac{1}{2}\left(\frac{F(0) - F(-\epsilon/2)}{\epsilon/2} + \frac{F(\epsilon/2) - F(0)}{\epsilon/2}\right)$$
$$= \frac{F'(0^-) + F'(0^+)}{2}$$
$$= \frac{f(0^-) + f(0^+)}{2}$$
Solution 22.2
We prove the identity by making a change of variables in the integral of $\delta(x - x_0)$.
$$\int_a^b \delta(x - x_0) \, dx = \int_{y(a)}^{y(b)} \delta(y - y_0) \left( \frac{dy}{dx} \right)^{-1} dy$$
$$\delta(x - x_0) = \left( \frac{dy}{dx} \right)^{-1} \delta(y - y_0)$$
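As a numerical sanity check of this identity (not part of the original text), one can replace the delta function by a narrow Gaussian and integrate. The integral of $\delta_\epsilon(y(x) - y_0)$ over $x$ should approach $|y'(x_0)|^{-1}$, which is exactly what the identity predicts. This is a minimal sketch; the test function $y(x) = x^3 + x$, the Gaussian width, and the quadrature parameters are arbitrary choices for illustration.

```python
import math

def nascent_delta(t, eps):
    """Gaussian approximation to the Dirac delta function, width eps."""
    return math.exp(-t * t / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def trapezoid(g, a, b, n):
    """Composite trapezoid rule on [a, b] with n panels."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n))
    return s * h

# y(x) = x^3 + x is monotonic with y'(x) = 3x^2 + 1 != 0 (hypothetical test case).
y = lambda x: x**3 + x
dy = lambda x: 3 * x**2 + 1
x0, eps = 1.0, 1e-3
y0 = y(x0)

# Integrating delta(y(x) - y0) over x should give 1/|y'(x0)|,
# in agreement with delta(x - x0) = (dy/dx)^(-1) delta(y - y0).
lhs = trapezoid(lambda x: nascent_delta(y(x) - y0, eps), 0.5, 1.5, 200000)
rhs = 1.0 / abs(dy(x0))
print(lhs, rhs)  # both approximately 0.25
```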
Solution 22.3
We consider the Dirac delta function in spherical coordinates, $(r, \theta, \phi)$. The Jacobian is $J = r^2 \sin(\phi)$.
$$\int_0^{\pi} \int_0^{2\pi} \int_0^{\infty} \delta^3(\mathbf{x} - \mathbf{x}_0) \, r^2 \sin(\phi) \, dr \, d\theta \, d\phi = 1$$
For $r_0 \neq 0$, and $\phi_0 \neq 0, \pi$, the Dirac delta function is
$$\delta^3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{r^2 \sin(\phi)} \, \delta(r - r_0) \, \delta(\theta - \theta_0) \, \delta(\phi - \phi_0)$$
since it satisfies the two defining properties.
$$\frac{1}{r^2 \sin(\phi)} \, \delta(r - r_0) \, \delta(\theta - \theta_0) \, \delta(\phi - \phi_0) = 0 \quad \text{for } (r, \theta, \phi) \neq (r_0, \theta_0, \phi_0)$$
$$\int_0^{\pi} \int_0^{2\pi} \int_0^{\infty} \frac{1}{r^2 \sin(\phi)} \, \delta(r - r_0) \, \delta(\theta - \theta_0) \, \delta(\phi - \phi_0) \, r^2 \sin(\phi) \, dr \, d\theta \, d\phi = \int_0^{\infty} \delta(r - r_0) \, dr \int_0^{2\pi} \delta(\theta - \theta_0) \, d\theta \int_0^{\pi} \delta(\phi - \phi_0) \, d\phi = 1$$
For $\phi_0 = 0$ or $\phi_0 = \pi$, the Dirac delta function is
$$\delta^3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{2\pi r^2 \sin(\phi)} \, \delta(r - r_0) \, \delta(\phi - \phi_0).$$
We check that the value of the integral is unity.
$$\int_0^{\pi} \int_0^{2\pi} \int_0^{\infty} \frac{1}{2\pi r^2 \sin(\phi)} \, \delta(r - r_0) \, \delta(\phi - \phi_0) \, r^2 \sin(\phi) \, dr \, d\theta \, d\phi = \frac{1}{2\pi} \int_0^{\infty} \delta(r - r_0) \, dr \int_0^{2\pi} d\theta \int_0^{\pi} \delta(\phi - \phi_0) \, d\phi = 1$$
For $r_0 = 0$ the Dirac delta function is
$$\delta^3(\mathbf{x}) = \frac{1}{4\pi r^2} \, \delta(r)$$
We verify that the value of the integral is unity.
$$\int_0^{\pi} \int_0^{2\pi} \int_0^{\infty} \frac{1}{4\pi r^2} \, \delta(r) \, r^2 \sin(\phi) \, dr \, d\theta \, d\phi = \frac{1}{4\pi} \int_0^{\infty} \delta(r) \, dr \int_0^{2\pi} d\theta \int_0^{\pi} \sin(\phi) \, d\phi = 1$$
Chapter 23

Inhomogeneous Differential Equations

Feelin' stupid? I know I am!
- Homer Simpson
23.1 Particular Solutions

Consider the $n^{\text{th}}$ order linear homogeneous equation
$$L[y] \equiv y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_1(x) y' + p_0(x) y = 0.$$
Let $y_1, y_2, \ldots, y_n$ be a set of linearly independent homogeneous solutions, $L[y_k] = 0$. We know that the general solution of the homogeneous equation is a linear combination of the homogeneous solutions.
$$y_h = \sum_{k=1}^{n} c_k y_k(x)$$
Now consider the $n^{\text{th}}$ order linear inhomogeneous equation
$$L[y] \equiv y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_1(x) y' + p_0(x) y = f(x).$$
Any function $y_p$ which satisfies this equation is called a particular solution of the differential equation. We want to know the general solution of the inhomogeneous equation. Later in this chapter we will cover methods of constructing this solution; now we consider the form of the solution.

Let $y_p$ be a particular solution. Note that $y_p + h$ is a particular solution if $h$ satisfies the homogeneous equation.
$$L[y_p + h] = L[y_p] + L[h] = f + 0 = f$$
Therefore $y_p + y_h$ satisfies the inhomogeneous equation. We show that this is the general solution of the inhomogeneous equation. Let $y_p$ and $\eta_p$ both be solutions of the inhomogeneous equation $L[y] = f$. The difference of $y_p$ and $\eta_p$ is a homogeneous solution.
$$L[y_p - \eta_p] = L[y_p] - L[\eta_p] = f - f = 0$$
$y_p$ and $\eta_p$ differ by a linear combination of the homogeneous solutions $y_k$. Therefore the general solution of $L[y] = f$ is the sum of any particular solution $y_p$ and the general homogeneous solution $y_h$.
$$y_p + y_h = y_p(x) + \sum_{k=1}^{n} c_k y_k(x)$$

Result 23.1.1 The general solution of the $n^{\text{th}}$ order linear inhomogeneous equation $L[y] = f(x)$ is
$$y = y_p + c_1 y_1 + c_2 y_2 + \cdots + c_n y_n,$$
where $y_p$ is a particular solution, $y_1, \ldots, y_n$ is a set of linearly independent homogeneous solutions, and the $c_k$'s are arbitrary constants.

Example 23.1.1 The differential equation
$$y'' + y = \sin(2x)$$
has the two homogeneous solutions
$$y_1 = \cos x, \quad y_2 = \sin x,$$
and a particular solution
$$y_p = -\frac{1}{3} \sin(2x).$$
We can add any combination of the homogeneous solutions to $y_p$ and it will still be a particular solution. For example,
$$\eta_p = -\frac{1}{3} \sin(2x) - \frac{1}{3} \sin x = -\frac{2}{3} \sin\left( \frac{3x}{2} \right) \cos\left( \frac{x}{2} \right)$$
is a particular solution.
23.2 Method of Undetermined Coefficients

The first method we present for computing particular solutions is the method of undetermined coefficients. For some simple differential equations (primarily constant coefficient equations) and some simple inhomogeneities we are able to guess the form of a particular solution. This form will contain some unknown parameters. We substitute this form into the differential equation to determine the parameters and thus determine a particular solution.

Later in this chapter we will present general methods which work for any linear differential equation and any inhomogeneity. Thus one might wonder why I would present a method that works only for some simple problems. (And why it is called a method if it amounts to no more than guessing.) The answer is that guessing an answer is less grungy than computing it with the formulas we will develop later. Also, the process of this guessing is not random; there is rhyme and reason to it.
Consider an $n^{\text{th}}$ order constant coefficient, inhomogeneous equation.
$$L[y] \equiv y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = f(x)$$
If $f(x)$ is one of a few simple forms, then we can guess the form of a particular solution. Below we enumerate some cases.

$f = p(x)$. If $f$ is an $m^{\text{th}}$ order polynomial, $f(x) = p_m x^m + \cdots + p_1 x + p_0$, then guess
$$y_p = c_m x^m + \cdots + c_1 x + c_0.$$

$f = p(x) e^{ax}$. If $f$ is a polynomial times an exponential then guess
$$y_p = (c_m x^m + \cdots + c_1 x + c_0) e^{ax}.$$

$f = p(x) e^{ax} \cos(bx)$. If $f$ is a cosine or sine times a polynomial and perhaps an exponential, $f(x) = p(x) e^{ax} \cos(bx)$ or $f(x) = p(x) e^{ax} \sin(bx)$, then guess
$$y_p = (c_m x^m + \cdots + c_1 x + c_0) e^{ax} \cos(bx) + (d_m x^m + \cdots + d_1 x + d_0) e^{ax} \sin(bx).$$
Likewise for hyperbolic sines and hyperbolic cosines.
Example 23.2.1 Consider
$$y'' - 2y' + y = t^2.$$
The homogeneous solutions are $y_1 = e^t$ and $y_2 = t e^t$. We guess a particular solution of the form
$$y_p = a t^2 + b t + c.$$
We substitute the expression into the differential equation and equate coefficients of powers of $t$ to determine the parameters.
$$y_p'' - 2y_p' + y_p = t^2$$
$$(2a) - 2(2at + b) + (a t^2 + b t + c) = t^2$$
$$(a - 1) t^2 + (b - 4a) t + (2a - 2b + c) = 0$$
$$a - 1 = 0, \quad b - 4a = 0, \quad 2a - 2b + c = 0$$
$$a = 1, \quad b = 4, \quad c = 6$$
A particular solution is
$$y_p = t^2 + 4t + 6.$$
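The result of Example 23.2.1 can be checked mechanically: substituting $y_p = t^2 + 4t + 6$ (whose derivatives are computed by hand below) into the left side of the equation should reproduce $t^2$ at every $t$. This is a small verification sketch, not part of the original text.

```python
def yp(t):
    """Proposed particular solution from Example 23.2.1."""
    return t**2 + 4*t + 6

def residual(t):
    """y'' - 2 y' + y - t^2 with yp's derivatives taken by hand."""
    d2, d1 = 2.0, 2*t + 4  # yp'' = 2, yp' = 2t + 4
    return d2 - 2*d1 + yp(t) - t**2

checks = [residual(t) for t in (-3.0, 0.0, 1.5, 10.0)]
print(checks)  # all zeros
```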
If the inhomogeneity is a sum of terms, $L[y] = f \equiv f_1 + \cdots + f_k$, you can solve the problems $L[y] = f_1$, \ldots, $L[y] = f_k$ independently and then take the sum of the solutions as a particular solution of $L[y] = f$.
Example 23.2.2 Consider
$$L[y] \equiv y'' - 2y' + y = t^2 + e^{2t}. \tag{23.1}$$
The homogeneous solutions are $y_1 = e^t$ and $y_2 = t e^t$. We already know a particular solution to $L[y] = t^2$. We seek a particular solution to $L[y] = e^{2t}$. We guess a particular solution of the form
$$y_p = a e^{2t}.$$
We substitute the expression into the differential equation to determine the parameter.
$$y_p'' - 2y_p' + y_p = e^{2t}$$
$$4a e^{2t} - 4a e^{2t} + a e^{2t} = e^{2t}$$
$$a = 1$$
A particular solution of $L[y] = e^{2t}$ is $y_p = e^{2t}$. Thus a particular solution of Equation 23.1 is
$$y_p = t^2 + 4t + 6 + e^{2t}.$$

The above guesses will not work if the inhomogeneity is a homogeneous solution. In this case, multiply the guess by the lowest power of $x$ such that the guess does not contain homogeneous solutions.
Example 23.2.3 Consider
$$L[y] \equiv y'' - 2y' + y = e^t.$$
The homogeneous solutions are $y_1 = e^t$ and $y_2 = t e^t$. Guessing a particular solution of the form $y_p = a e^t$ would not work because $L[e^t] = 0$. We guess a particular solution of the form
$$y_p = a t^2 e^t$$
We substitute the expression into the differential equation and equate coefficients of like terms to determine the parameters.
$$y_p'' - 2y_p' + y_p = e^t$$
$$(a t^2 + 4at + 2a) e^t - 2(a t^2 + 2at) e^t + a t^2 e^t = e^t$$
$$2a e^t = e^t$$
$$a = \frac{1}{2}$$
A particular solution is
$$y_p = \frac{t^2}{2} e^t.$$
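One can also check a proposed particular solution without differentiating by hand, using central finite differences; this avoids transcription errors in the derivatives. A minimal sketch (not from the text) verifying $y_p = \frac{t^2}{2} e^t$ for $y'' - 2y' + y = e^t$; the step size $h$ is an arbitrary choice:

```python
import math

def yp(t):
    """Proposed particular solution from Example 23.2.3."""
    return 0.5 * t**2 * math.exp(t)

def residual(t, h=1e-4):
    """Finite-difference estimate of y'' - 2 y' + y - e^t."""
    d1 = (yp(t + h) - yp(t - h)) / (2 * h)
    d2 = (yp(t + h) - 2 * yp(t) + yp(t - h)) / (h * h)
    return d2 - 2 * d1 + yp(t) - math.exp(t)

checks = [abs(residual(t)) for t in (-1.0, 0.0, 0.5, 1.0)]
print(checks)  # all near zero, up to finite-difference error
```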
Example 23.2.4 Consider
$$y'' + \frac{1}{x} y' + \frac{1}{x^2} y = x, \quad x > 0.$$
The homogeneous solutions are $y_1 = \cos(\ln x)$ and $y_2 = \sin(\ln x)$. We guess a particular solution of the form
$$y_p = a x^3$$
We substitute the expression into the differential equation and equate coefficients of like terms to determine the parameter.
$$y_p'' + \frac{1}{x} y_p' + \frac{1}{x^2} y_p = x$$
$$6ax + 3ax + ax = x$$
$$a = \frac{1}{10}$$
A particular solution is
$$y_p = \frac{x^3}{10}.$$
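The same kind of check works for this Euler equation. With $y_p = x^3/10$ the derivatives are $y_p' = 3x^2/10$ and $y_p'' = 6x/10$, so the left side collapses to $10ax = x$ exactly as in the example. A small verification sketch (not part of the original text):

```python
def residual(x):
    """y'' + y'/x + y/x^2 - x for yp = x^3/10, derivatives by hand."""
    yp, d1, d2 = x**3 / 10, 3 * x**2 / 10, 6 * x / 10
    return d2 + d1 / x + yp / x**2 - x

checks = [residual(x) for x in (0.1, 1.0, 2.5, 7.0)]
print(checks)  # all zero to rounding error
```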
23.3 Variation of Parameters

In this section we present a method for computing a particular solution of an inhomogeneous equation given that we know the homogeneous solutions. We will first consider second order equations and then generalize the result for $n^{\text{th}}$ order equations.

23.3.1 Second Order Differential Equations

Consider the second order inhomogeneous equation,
$$L[y] \equiv y'' + p(x) y' + q(x) y = f(x), \quad \text{on } a < x < b.$$
We assume that the coefficient functions in the differential equation are continuous on $[a \ldots b]$. Let $y_1(x)$ and $y_2(x)$ be two linearly independent solutions to the homogeneous equation. Since the Wronskian,
$$W(x) = \exp\left( -\int p(x) \, dx \right),$$
is non-vanishing, we know that these solutions exist. We seek a particular solution of the form,
$$y_p = u_1(x) y_1 + u_2(x) y_2.$$
We compute the derivatives of $y_p$.
$$y_p' = u_1' y_1 + u_1 y_1' + u_2' y_2 + u_2 y_2'$$
$$y_p'' = u_1'' y_1 + 2 u_1' y_1' + u_1 y_1'' + u_2'' y_2 + 2 u_2' y_2' + u_2 y_2''$$
We substitute the expression for $y_p$ and its derivatives into the inhomogeneous equation and use the fact that $y_1$ and $y_2$ are homogeneous solutions to simplify the equation.
$$u_1'' y_1 + 2 u_1' y_1' + u_1 y_1'' + u_2'' y_2 + 2 u_2' y_2' + u_2 y_2'' + p(u_1' y_1 + u_1 y_1' + u_2' y_2 + u_2 y_2') + q(u_1 y_1 + u_2 y_2) = f$$
$$u_1'' y_1 + 2 u_1' y_1' + u_2'' y_2 + 2 u_2' y_2' + p(u_1' y_1 + u_2' y_2) = f$$
This is an ugly equation for $u_1$ and $u_2$; however, we have an ace up our sleeve. Since $u_1$ and $u_2$ are undetermined functions of $x$, we are free to impose a constraint. We choose this constraint to simplify the algebra.
$$u_1' y_1 + u_2' y_2 = 0$$
This constraint simplifies the derivatives of $y_p$,
$$y_p' = u_1' y_1 + u_1 y_1' + u_2' y_2 + u_2 y_2' = u_1 y_1' + u_2 y_2'$$
$$y_p'' = u_1' y_1' + u_1 y_1'' + u_2' y_2' + u_2 y_2''.$$
We substitute the new expressions for $y_p$ and its derivatives into the inhomogeneous differential equation to obtain a much simpler equation than before.
$$u_1' y_1' + u_1 y_1'' + u_2' y_2' + u_2 y_2'' + p(u_1 y_1' + u_2 y_2') + q(u_1 y_1 + u_2 y_2) = f(x)$$
$$u_1' y_1' + u_2' y_2' + u_1 L[y_1] + u_2 L[y_2] = f(x)$$
$$u_1' y_1' + u_2' y_2' = f(x).$$
With the constraint, we have a system of linear equations for $u_1'$ and $u_2'$.
$$u_1' y_1 + u_2' y_2 = 0$$
$$u_1' y_1' + u_2' y_2' = f(x).$$
$$\begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix} \begin{pmatrix} u_1' \\ u_2' \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix}$$
We solve this system using Cramer's rule. (See Appendix S.)
$$u_1' = -\frac{f(x) y_2}{W(x)}, \quad u_2' = \frac{f(x) y_1}{W(x)}$$
Here $W(x)$ is the Wronskian.
$$W(x) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}$$
We integrate to get $u_1$ and $u_2$. This gives us a particular solution.
$$y_p = -y_1 \int \frac{f(x) y_2(x)}{W(x)} \, dx + y_2 \int \frac{f(x) y_1(x)}{W(x)} \, dx.$$
Result 23.3.1 Let $y_1$ and $y_2$ be linearly independent homogeneous solutions of
$$L[y] = y'' + p(x) y' + q(x) y = f(x).$$
A particular solution is
$$y_p = -y_1(x) \int \frac{f(x) y_2(x)}{W(x)} \, dx + y_2(x) \int \frac{f(x) y_1(x)}{W(x)} \, dx,$$
where $W(x)$ is the Wronskian of $y_1$ and $y_2$.
Example 23.3.1 Consider the equation,
$$y'' + y = \cos(2x).$$
The homogeneous solutions are $y_1 = \cos x$ and $y_2 = \sin x$. We compute the Wronskian.
$$W(x) = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = \cos^2 x + \sin^2 x = 1$$
We use variation of parameters to find a particular solution.
$$y_p = -\cos(x) \int \cos(2x) \sin(x) \, dx + \sin(x) \int \cos(2x) \cos(x) \, dx$$
$$= -\frac{1}{2} \cos(x) \int \left( \sin(3x) - \sin(x) \right) dx + \frac{1}{2} \sin(x) \int \left( \cos(3x) + \cos(x) \right) dx$$
$$= -\frac{1}{2} \cos(x) \left( -\frac{1}{3} \cos(3x) + \cos(x) \right) + \frac{1}{2} \sin(x) \left( \frac{1}{3} \sin(3x) + \sin(x) \right)$$
$$= \frac{1}{2} \left( \sin^2(x) - \cos^2(x) \right) + \frac{1}{6} \left( \cos(3x) \cos(x) + \sin(3x) \sin(x) \right)$$
$$= -\frac{1}{2} \cos(2x) + \frac{1}{6} \cos(2x)$$
$$= -\frac{1}{3} \cos(2x)$$
The general solution of the inhomogeneous equation is
$$y = -\frac{1}{3} \cos(2x) + c_1 \cos(x) + c_2 \sin(x).$$
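The variation of parameters formula can be exercised numerically for this example. Taking both indefinite integrals from $0$ to $x$ (an arbitrary choice of integration constants) adds the homogeneous piece $\frac{1}{3}\cos x$ to the book's answer, so the quadrature result should equal $-\frac{1}{3}\cos(2x) + \frac{1}{3}\cos x$. A minimal sketch, not part of the original text; quadrature parameters are arbitrary:

```python
import math

def trapezoid(g, a, b, n=4000):
    """Composite trapezoid rule on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + k * h) for k in range(1, n)))

f = lambda x: math.cos(2 * x)
y1, y2 = math.cos, math.sin  # homogeneous solutions; Wronskian W = 1

def yp_num(x):
    """yp = -y1 int f y2 / W + y2 int f y1 / W, integrals taken from 0."""
    return (-y1(x) * trapezoid(lambda t: f(t) * y2(t), 0.0, x)
            + y2(x) * trapezoid(lambda t: f(t) * y1(t), 0.0, x))

# Lower limit 0 shifts the answer by the homogeneous solution (1/3) cos x.
diffs = [abs(yp_num(x) - (-math.cos(2 * x) / 3 + math.cos(x) / 3))
         for x in (0.5, 1.2, 2.5)]
print(diffs)  # all tiny
```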
23.3.2 Higher Order Differential Equations

Consider the $n^{\text{th}}$ order inhomogeneous equation,
$$L[y] = y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_1(x) y' + p_0(x) y = f(x), \quad \text{on } a < x < b.$$
We assume that the coefficient functions in the differential equation are continuous on $[a \ldots b]$. Let $\{y_1, \ldots, y_n\}$ be a set of linearly independent solutions to the homogeneous equation. Since the Wronskian,
$$W(x) = \exp\left( -\int p_{n-1}(x) \, dx \right),$$
is non-vanishing, we know that these solutions exist. We seek a particular solution of the form
$$y_p = u_1 y_1 + u_2 y_2 + \cdots + u_n y_n.$$
Since $u_1, \ldots, u_n$ are undetermined functions of $x$, we are free to impose $n - 1$ constraints. We choose these constraints to simplify the algebra.
$$u_1' y_1 + u_2' y_2 + \cdots + u_n' y_n = 0$$
$$u_1' y_1' + u_2' y_2' + \cdots + u_n' y_n' = 0$$
$$\vdots$$
$$u_1' y_1^{(n-2)} + u_2' y_2^{(n-2)} + \cdots + u_n' y_n^{(n-2)} = 0$$
We differentiate the expression for $y_p$, utilizing our constraints.
$$y_p = u_1 y_1 + u_2 y_2 + \cdots + u_n y_n$$
$$y_p' = u_1 y_1' + u_2 y_2' + \cdots + u_n y_n'$$
$$y_p'' = u_1 y_1'' + u_2 y_2'' + \cdots + u_n y_n''$$
$$\vdots$$
$$y_p^{(n)} = u_1 y_1^{(n)} + u_2 y_2^{(n)} + \cdots + u_n y_n^{(n)} + u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)}$$
We substitute $y_p$ and its derivatives into the inhomogeneous differential equation and use the fact that the $y_k$ are homogeneous solutions.
$$u_1 y_1^{(n)} + \cdots + u_n y_n^{(n)} + u_1' y_1^{(n-1)} + \cdots + u_n' y_n^{(n-1)} + p_{n-1} \left( u_1 y_1^{(n-1)} + \cdots + u_n y_n^{(n-1)} \right) + \cdots + p_0 \left( u_1 y_1 + \cdots + u_n y_n \right) = f$$
$$u_1 L[y_1] + u_2 L[y_2] + \cdots + u_n L[y_n] + u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)} = f$$
$$u_1' y_1^{(n-1)} + u_2' y_2^{(n-1)} + \cdots + u_n' y_n^{(n-1)} = f.$$
With the constraints, we have a system of linear equations for $u_1', \ldots, u_n'$.
$$\begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix} \begin{pmatrix} u_1' \\ u_2' \\ \vdots \\ u_n' \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ f \end{pmatrix}.$$
We solve this system using Cramer's rule. (See Appendix S.)
$$u_k' = (-1)^{n+k+1} \frac{W[y_1, \ldots, y_{k-1}, y_{k+1}, \ldots, y_n]}{W[y_1, y_2, \ldots, y_n]} f, \quad \text{for } k = 1, \ldots, n.$$
Here $W$ is the Wronskian. We integrate to obtain the $u_k$'s.
$$u_k = (-1)^{n+k+1} \int \frac{W[y_1, \ldots, y_{k-1}, y_{k+1}, \ldots, y_n](x)}{W[y_1, y_2, \ldots, y_n](x)} f(x) \, dx, \quad \text{for } k = 1, \ldots, n$$
Result 23.3.2 Let $y_1, \ldots, y_n$ be linearly independent homogeneous solutions of
$$L[y] = y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_1(x) y' + p_0(x) y = f(x), \quad \text{on } a < x < b.$$
A particular solution is
$$y_p = u_1 y_1 + u_2 y_2 + \cdots + u_n y_n,$$
where
$$u_k = (-1)^{n+k+1} \int \frac{W[y_1, \ldots, y_{k-1}, y_{k+1}, \ldots, y_n](x)}{W[y_1, y_2, \ldots, y_n](x)} f(x) \, dx, \quad \text{for } k = 1, \ldots, n,$$
and $W[y_1, y_2, \ldots, y_n](x)$ is the Wronskian of $y_1(x), \ldots, y_n(x)$.
23.4 Piecewise Continuous Coefficients and Inhomogeneities

Example 23.4.1 Consider the problem
$$y'' - y = e^{-\alpha|x|}, \quad y(\pm\infty) = 0, \quad \alpha > 0, \quad \alpha \neq 1.$$
The homogeneous solutions of the differential equation are $e^x$ and $e^{-x}$. We use variation of parameters to find a particular solution for $x > 0$.
$$y_p = -e^x \int^x \frac{e^{-\xi} e^{-\alpha\xi}}{-2} \, d\xi + e^{-x} \int^x \frac{e^{\xi} e^{-\alpha\xi}}{-2} \, d\xi$$
$$= \frac{1}{2} e^x \int^x e^{-(\alpha+1)\xi} \, d\xi - \frac{1}{2} e^{-x} \int^x e^{(1-\alpha)\xi} \, d\xi$$
$$= -\frac{1}{2(\alpha+1)} e^{-\alpha x} + \frac{1}{2(\alpha-1)} e^{-\alpha x}$$
$$= \frac{e^{-\alpha x}}{\alpha^2 - 1}, \quad \text{for } x > 0$$
A particular solution for $x < 0$ is
$$y_p = \frac{e^{\alpha x}}{\alpha^2 - 1}, \quad \text{for } x < 0.$$
Thus a particular solution is
$$y_p = \frac{e^{-\alpha|x|}}{\alpha^2 - 1}.$$
The general solution is
$$y = \frac{1}{\alpha^2 - 1} e^{-\alpha|x|} + c_1 e^x + c_2 e^{-x}.$$
Applying the boundary conditions, we see that $c_1 = c_2 = 0$. Apparently the solution is
$$y = \frac{e^{-\alpha|x|}}{\alpha^2 - 1}.$$
This function is plotted in Figure 23.1. This function satisfies the differential equation for positive and negative $x$. It also satisfies the boundary conditions. However, this is NOT a solution to the differential equation. Since the differential equation has no singular points and the inhomogeneous term is continuous, the solution must be twice continuously differentiable. Since the derivative of $e^{-\alpha|x|}/(\alpha^2 - 1)$ has a jump discontinuity at $x = 0$, the second derivative does not exist. Thus this function could not possibly be a solution to the differential equation. In the next example we examine the right way to solve this problem.
Figure 23.1: The Incorrect and Correct Solution to the Differential Equation.
Example 23.4.2 Again consider
$$y'' - y = e^{-\alpha|x|}, \quad y(\pm\infty) = 0, \quad \alpha > 0, \quad \alpha \neq 1.$$
Separating this into two problems for positive and negative $x$,
$$y_-'' - y_- = e^{\alpha x}, \quad y_-(-\infty) = 0, \quad \text{on } -\infty < x \leq 0,$$
$$y_+'' - y_+ = e^{-\alpha x}, \quad y_+(\infty) = 0, \quad \text{on } 0 \leq x < \infty.$$
In order for the solution over the whole domain to be twice differentiable, the solution and its first derivative must be continuous. Thus we impose the additional boundary conditions
$$y_-(0) = y_+(0), \quad y_-'(0) = y_+'(0).$$
The solutions that satisfy the two differential equations and the boundary conditions at infinity are
$$y_- = \frac{e^{\alpha x}}{\alpha^2 - 1} + c_- e^x, \quad y_+ = \frac{e^{-\alpha x}}{\alpha^2 - 1} + c_+ e^{-x}.$$
The two additional boundary conditions give us the equations
$$y_-(0) = y_+(0) \quad \Rightarrow \quad c_- = c_+$$
$$y_-'(0) = y_+'(0) \quad \Rightarrow \quad \frac{\alpha}{\alpha^2 - 1} + c_- = -\frac{\alpha}{\alpha^2 - 1} - c_+.$$
We solve these two equations to determine $c_-$ and $c_+$.
$$c_- = c_+ = -\frac{\alpha}{\alpha^2 - 1}$$
Thus the solution over the whole domain is
$$y = \begin{cases} \dfrac{e^{\alpha x} - \alpha e^x}{\alpha^2 - 1} & \text{for } x < 0, \\[1ex] \dfrac{e^{-\alpha x} - \alpha e^{-x}}{\alpha^2 - 1} & \text{for } x > 0 \end{cases}$$
$$y = \frac{e^{-\alpha|x|} - \alpha e^{-|x|}}{\alpha^2 - 1}.$$
This function is plotted in Figure 23.1. You can verify that this solution is twice continuously differentiable.
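A quick computational check of the matched solution (not part of the original text): away from $x = 0$ the hand-computed second derivative should give a zero residual in $y'' - y = e^{-\alpha|x|}$, and the one-sided slopes at the origin should agree (both are zero for this solution, since $y'(0^\pm) = \mp\alpha \pm \alpha = 0$ up to the common factor). The value $\alpha = 2$ is an arbitrary test choice.

```python
import math

alpha = 2.0  # any alpha > 0 with alpha != 1

def y(x):
    """Matched solution from Example 23.4.2."""
    a = alpha
    return (math.exp(-a * abs(x)) - a * math.exp(-abs(x))) / (a * a - 1)

def residual(x):
    """y'' - y - e^(-alpha |x|) for x != 0, second derivative by hand."""
    a, s = alpha, abs(x)
    d2 = (a * a * math.exp(-a * s) - a * math.exp(-s)) / (a * a - 1)
    return d2 - y(x) - math.exp(-a * s)

checks = [abs(residual(x)) for x in (-2.0, -0.5, 0.5, 2.0)]
h = 1e-6
slope_right = (y(h) - y(0.0)) / h
slope_left = (y(0.0) - y(-h)) / h
print(checks, slope_left, slope_right)
```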
23.5 Inhomogeneous Boundary Conditions

23.5.1 Eliminating Inhomogeneous Boundary Conditions

Consider the $n^{\text{th}}$ order equation
$$L[y] = f(x), \quad \text{for } a < x < b,$$
subject to the linear inhomogeneous boundary conditions
$$B_j[y] = \gamma_j, \quad \text{for } j = 1, \ldots, n,$$
where the boundary conditions are of the form
$$B[y] \equiv \alpha_0 y(a) + \alpha_1 y'(a) + \cdots + \alpha_{n-1} y^{(n-1)}(a) + \beta_0 y(b) + \beta_1 y'(b) + \cdots + \beta_{n-1} y^{(n-1)}(b).$$
Let $g(x)$ be an $n$-times continuously differentiable function that satisfies the boundary conditions. Substituting $y = u + g$ into the differential equation and boundary conditions yields
$$L[u] = f(x) - L[g], \quad B_j[u] = \gamma_j - B_j[g] = 0 \quad \text{for } j = 1, \ldots, n.$$
Note that the problem for $u$ has homogeneous boundary conditions. Thus a problem with inhomogeneous boundary conditions can be reduced to one with homogeneous boundary conditions. This technique is of limited usefulness for ordinary differential equations but is important for solving some partial differential equation problems.
Example 23.5.1 Consider the problem
$$y'' + y = \cos 2x, \quad y(0) = 1, \quad y(\pi) = 2.$$
$g(x) = \frac{x}{\pi} + 1$ satisfies the boundary conditions. Substituting $y = u + g$ yields
$$u'' + u = \cos 2x - \frac{x}{\pi} - 1, \quad u(0) = u(\pi) = 0.$$
Example 23.5.2 Consider
$$y'' + y = \cos 2x, \quad y'(0) = y(\pi) = 1.$$
$g(x) = \sin x - \cos x$ satisfies the inhomogeneous boundary conditions. Substituting $y = u + \sin x - \cos x$ yields
$$u'' + u = \cos 2x, \quad u'(0) = u(\pi) = 0.$$
Note that since $g(x)$ satisfies the homogeneous equation, the inhomogeneous term in the equation for $u$ is the same as that in the equation for $y$.
Example 23.5.3 Consider
$$y'' + y = \cos 2x, \quad y(0) = \frac{2}{3}, \quad y(\pi) = -\frac{4}{3}.$$
$g(x) = \cos x - \frac{1}{3}$ satisfies the boundary conditions. Substituting $y = u + \cos x - \frac{1}{3}$ yields
$$u'' + u = \cos 2x + \frac{1}{3}, \quad u(0) = u(\pi) = 0.$$
Result 23.5.1 The $n^{\text{th}}$ order differential equation with boundary conditions
$$L[y] = f(x), \quad B_j[y] = \gamma_j, \quad \text{for } j = 1, \ldots, n$$
has the solution $y = u + g$ where $u$ satisfies
$$L[u] = f(x) - L[g], \quad B_j[u] = 0, \quad \text{for } j = 1, \ldots, n$$
and $g$ is any $n$-times continuously differentiable function that satisfies the inhomogeneous boundary conditions.
23.5.2 Separating Inhomogeneous Equations and Inhomogeneous Boundary Conditions

Now consider a problem with inhomogeneous boundary conditions
$$L[y] = f(x), \quad B_1[y] = \gamma_1, \quad B_2[y] = \gamma_2.$$
In order to solve this problem, we solve the two problems
$$L[u] = f(x), \quad B_1[u] = B_2[u] = 0, \quad \text{and}$$
$$L[v] = 0, \quad B_1[v] = \gamma_1, \quad B_2[v] = \gamma_2.$$
The solution for the problem with an inhomogeneous equation and inhomogeneous boundary conditions will be the sum of $u$ and $v$. To verify this,
$$L[u + v] = L[u] + L[v] = f(x) + 0 = f(x),$$
$$B_i[u + v] = B_i[u] + B_i[v] = 0 + \gamma_i = \gamma_i.$$
This will be a useful technique when we develop Green functions.

Result 23.5.2 The solution to
$$L[y] = f(x), \quad B_1[y] = \gamma_1, \quad B_2[y] = \gamma_2,$$
is $y = u + v$ where
$$L[u] = f(x), \quad B_1[u] = 0, \quad B_2[u] = 0, \quad \text{and}$$
$$L[v] = 0, \quad B_1[v] = \gamma_1, \quad B_2[v] = \gamma_2.$$
23.5.3 Existence of Solutions of Problems with Inhomogeneous Boundary Conditions

Consider the $n^{\text{th}}$ order inhomogeneous differential equation
$$L[y] = y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_1 y' + p_0 y = f(x), \quad \text{for } a < x < b,$$
subject to the $n$ inhomogeneous boundary conditions
$$B_j[y] = \gamma_j, \quad \text{for } j = 1, \ldots, n$$
where each boundary condition is of the form
$$B[y] \equiv \alpha_0 y(a) + \alpha_1 y'(a) + \cdots + \alpha_{n-1} y^{(n-1)}(a) + \beta_0 y(b) + \beta_1 y'(b) + \cdots + \beta_{n-1} y^{(n-1)}(b).$$
We assume that the coefficients in the differential equation are continuous on $[a, b]$. Since the Wronskian of the solutions of the differential equation,
$$W(x) = \exp\left( -\int p_{n-1}(x) \, dx \right),$$
is non-vanishing on $[a, b]$, there are $n$ linearly independent solutions on that range. Let $y_1, \ldots, y_n$ be a set of linearly independent solutions of the homogeneous equation. From Result 23.3.2 we know that a particular solution $y_p$ exists. The general solution of the differential equation is
$$y = y_p + c_1 y_1 + c_2 y_2 + \cdots + c_n y_n.$$
The $n$ boundary conditions impose the matrix equation,
$$\begin{pmatrix} B_1[y_1] & B_1[y_2] & \cdots & B_1[y_n] \\ B_2[y_1] & B_2[y_2] & \cdots & B_2[y_n] \\ \vdots & \vdots & \ddots & \vdots \\ B_n[y_1] & B_n[y_2] & \cdots & B_n[y_n] \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} \gamma_1 - B_1[y_p] \\ \gamma_2 - B_2[y_p] \\ \vdots \\ \gamma_n - B_n[y_p] \end{pmatrix}$$
This equation has a unique solution if and only if the equation
$$\begin{pmatrix} B_1[y_1] & B_1[y_2] & \cdots & B_1[y_n] \\ B_2[y_1] & B_2[y_2] & \cdots & B_2[y_n] \\ \vdots & \vdots & \ddots & \vdots \\ B_n[y_1] & B_n[y_2] & \cdots & B_n[y_n] \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
has only the trivial solution. (This is the case if and only if the determinant of the matrix is nonzero.) Thus the problem
$$L[y] = y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_1 y' + p_0 y = f(x), \quad \text{for } a < x < b,$$
subject to the $n$ inhomogeneous boundary conditions
$$B_j[y] = \gamma_j, \quad \text{for } j = 1, \ldots, n,$$
has a unique solution if and only if the problem
$$L[y] = y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_1 y' + p_0 y = 0, \quad \text{for } a < x < b,$$
subject to the $n$ homogeneous boundary conditions
$$B_j[y] = 0, \quad \text{for } j = 1, \ldots, n,$$
has only the trivial solution.
Result 23.5.3 The problem
$$L[y] = y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_1 y' + p_0 y = f(x), \quad \text{for } a < x < b,$$
subject to the $n$ inhomogeneous boundary conditions
$$B_j[y] = \gamma_j, \quad \text{for } j = 1, \ldots, n,$$
has a unique solution if and only if the problem
$$L[y] = y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_1 y' + p_0 y = 0, \quad \text{for } a < x < b,$$
subject to
$$B_j[y] = 0, \quad \text{for } j = 1, \ldots, n,$$
has only the trivial solution.
23.6 Green Functions for First Order Equations

Consider the first order inhomogeneous equation
$$L[y] \equiv y' + p(x) y = f(x), \quad \text{for } x > a, \tag{23.2}$$
subject to a homogeneous initial condition, $B[y] \equiv y(a) = 0$.

The Green function $G(x|\xi)$ is defined as the solution to
$$L[G(x|\xi)] = \delta(x - \xi) \quad \text{subject to} \quad G(a|\xi) = 0.$$
We can represent the solution to the inhomogeneous problem in Equation 23.2 as an integral involving the Green function. To show that
$$y(x) = \int_a^{\infty} G(x|\xi) f(\xi) \, d\xi$$
is the solution, we apply the linear operator $L$ to the integral. (Assume that the integral is uniformly convergent.)
$$L\left[ \int_a^{\infty} G(x|\xi) f(\xi) \, d\xi \right] = \int_a^{\infty} L[G(x|\xi)] f(\xi) \, d\xi = \int_a^{\infty} \delta(x - \xi) f(\xi) \, d\xi = f(x)$$
The integral also satisfies the initial condition.
$$B\left[ \int_a^{\infty} G(x|\xi) f(\xi) \, d\xi \right] = \int_a^{\infty} B[G(x|\xi)] f(\xi) \, d\xi = \int_a^{\infty} (0) f(\xi) \, d\xi = 0$$
Now we consider the qualitative behavior of the Green function. For $x \neq \xi$, the Green function is simply a homogeneous solution of the differential equation; however at $x = \xi$ we expect some singular behavior. $G'(x|\xi)$ will have a Dirac delta function type singularity. This means that $G(x|\xi)$ will have a jump discontinuity at $x = \xi$. We integrate the differential equation on the vanishing interval $(\xi^- \ldots \xi^+)$ to determine this jump.
$$G' + p(x) G = \delta(x - \xi)$$
$$G(\xi^+|\xi) - G(\xi^-|\xi) + \int_{\xi^-}^{\xi^+} p(x) G(x|\xi) \, dx = 1$$
$$G(\xi^+|\xi) - G(\xi^-|\xi) = 1 \tag{23.3}$$
The homogeneous solution of the differential equation is
$$y_h = e^{-\int p(x) \, dx}$$
Since the Green function satisfies the homogeneous equation for $x \neq \xi$, it will be a constant times this homogeneous solution for $x < \xi$ and $x > \xi$.
$$G(x|\xi) = \begin{cases} c_1 \, e^{-\int p(x) \, dx} & a < x < \xi \\ c_2 \, e^{-\int p(x) \, dx} & \xi < x \end{cases}$$
In order to satisfy the homogeneous initial condition $G(a|\xi) = 0$, the Green function must vanish on the interval $(a \ldots \xi)$.
$$G(x|\xi) = \begin{cases} 0 & a < x < \xi \\ c \, e^{-\int p(x) \, dx} & \xi < x \end{cases}$$
The jump condition, (Equation 23.3), gives us the constraint $G(\xi^+|\xi) = 1$. This determines the constant in the homogeneous solution for $x > \xi$.
$$G(x|\xi) = \begin{cases} 0 & a < x < \xi \\ e^{-\int_{\xi}^{x} p(t) \, dt} & \xi < x \end{cases}$$
We can use the Heaviside function to write the Green function without using a case statement.
$$G(x|\xi) = e^{-\int_{\xi}^{x} p(t) \, dt} H(x - \xi)$$
Clearly the Green function is of little value in solving the inhomogeneous differential equation in Equation 23.2, as we can solve that problem directly. However, we will encounter first order Green function problems in solving some partial differential equations.
Result 23.6.1 The first order inhomogeneous differential equation with homogeneous initial condition
$$L[y] \equiv y' + p(x) y = f(x), \quad \text{for } a < x, \quad y(a) = 0,$$
has the solution
$$y = \int_a^{\infty} G(x|\xi) f(\xi) \, d\xi,$$
where $G(x|\xi)$ satisfies the equation
$$L[G(x|\xi)] = \delta(x - \xi), \quad \text{for } a < x, \quad G(a|\xi) = 0.$$
The Green function is
$$G(x|\xi) = e^{-\int_{\xi}^{x} p(t) \, dt} H(x - \xi)$$
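The first order Green function above can be exercised numerically. For the constant-coefficient test case $y' + y = 1$ with $y(a) = y(0) = 0$ (an arbitrary choice for illustration), the exact solution is $y = 1 - e^{-x}$, and $\int_0^x G(x|\xi) f(\xi)\, d\xi$ should reproduce it. A minimal sketch, not part of the original text:

```python
import math

# Test case: y' + y = 1, y(0) = 0, exact solution y = 1 - e^(-x).
f = lambda t: 1.0

def green(x, xi):
    """G(x|xi) = exp(-int_xi^x p dt) H(x - xi); with p = 1 the exponent is -(x - xi)."""
    return math.exp(-(x - xi)) if x >= xi else 0.0

def solve(x, n=4000):
    """y(x) = int_0^x G(x|xi) f(xi) dxi by the trapezoid rule."""
    h = x / n
    s = 0.5 * (green(x, 0.0) * f(0.0) + green(x, x) * f(x))
    s += sum(green(x, k * h) * f(k * h) for k in range(1, n))
    return s * h

errs = [abs(solve(x) - (1 - math.exp(-x))) for x in (0.5, 1.0, 3.0)]
print(errs)  # all tiny
```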
23.7 Green Functions for Second Order Equations

Consider the second order inhomogeneous equation
$$L[y] = y'' + p(x) y' + q(x) y = f(x), \quad \text{for } a < x < b, \tag{23.4}$$
subject to the homogeneous boundary conditions
$$B_1[y] = B_2[y] = 0.$$
The Green function $G(x|\xi)$ is defined as the solution to
$$L[G(x|\xi)] = \delta(x - \xi) \quad \text{subject to} \quad B_1[G] = B_2[G] = 0.$$
The Green function is useful because you can represent the solution to the inhomogeneous problem in Equation 23.4 as an integral involving the Green function. To show that
$$y(x) = \int_a^b G(x|\xi) f(\xi) \, d\xi$$
is the solution, we apply the linear operator $L$ to the integral. (Assume that the integral is uniformly convergent.)
$$L\left[ \int_a^b G(x|\xi) f(\xi) \, d\xi \right] = \int_a^b L[G(x|\xi)] f(\xi) \, d\xi = \int_a^b \delta(x - \xi) f(\xi) \, d\xi = f(x)$$
The integral also satisfies the boundary conditions.
$$B_i\left[ \int_a^b G(x|\xi) f(\xi) \, d\xi \right] = \int_a^b B_i[G(x|\xi)] f(\xi) \, d\xi = \int_a^b [0] f(\xi) \, d\xi = 0$$
One of the advantages of using Green functions is that once you find the Green function for a linear operator and certain homogeneous boundary conditions,
$$L[G] = \delta(x - \xi), \quad B_1[G] = B_2[G] = 0,$$
you can write the solution for any inhomogeneity, $f(x)$.
$$L[y] = f(x), \quad B_1[y] = B_2[y] = 0$$
You do not need to do any extra work to obtain the solution for a different inhomogeneous term.

Qualitatively, what kind of behavior will the Green function for a second order differential equation have? Will it have a delta function singularity; will it be continuous? To answer these questions we will first look at the behavior of integrals and derivatives of $\delta(x)$.

The integral of $\delta(x)$ is the Heaviside function, $H(x)$.
$$H(x) = \int_{-\infty}^{x} \delta(t) \, dt = \begin{cases} 0 & \text{for } x < 0 \\ 1 & \text{for } x > 0 \end{cases}$$
The integral of the Heaviside function is the ramp function, $r(x)$.
$$r(x) = \int_{-\infty}^{x} H(t) \, dt = \begin{cases} 0 & \text{for } x < 0 \\ x & \text{for } x > 0 \end{cases}$$
The derivative of the delta function is zero for $x \neq 0$. At $x = 0$ it goes from 0 up to $+\infty$, down to $-\infty$ and then back up to 0.

In Figure 23.2 we see conceptually the behavior of the ramp function, the Heaviside function, the delta function, and the derivative of the delta function.

Figure 23.2: $r(x)$, $H(x)$, $\delta(x)$ and $\frac{d}{dx}\delta(x)$
We write the differential equation for the Green function.
$$G''(x|\xi) + p(x) G'(x|\xi) + q(x) G(x|\xi) = \delta(x - \xi)$$
We see that only the $G''(x|\xi)$ term can have a delta function type singularity. If one of the other terms had a delta function type singularity then $G''(x|\xi)$ would be more singular than a delta function and there would be nothing in the right hand side of the equation to match this kind of singularity. Analogous to the progression from a delta function to a Heaviside function to a ramp function, we see that $G'(x|\xi)$ will have a jump discontinuity and $G(x|\xi)$ will be continuous.

Let $y_1$ and $y_2$ be two linearly independent solutions to the homogeneous equation, $L[y] = 0$. Since the Green function satisfies the homogeneous equation for $x \neq \xi$, it will be a linear combination of the homogeneous solutions.
$$G(x|\xi) = \begin{cases} c_1 y_1 + c_2 y_2 & \text{for } x < \xi \\ d_1 y_1 + d_2 y_2 & \text{for } x > \xi \end{cases}$$
We require that $G(x|\xi)$ be continuous.
$$\lim_{x \to \xi^-} G(x|\xi) = \lim_{x \to \xi^+} G(x|\xi)$$
We can write this in terms of the homogeneous solutions.
$$c_1 y_1(\xi) + c_2 y_2(\xi) = d_1 y_1(\xi) + d_2 y_2(\xi)$$
We integrate $L[G(x|\xi)] = \delta(x - \xi)$ from $\xi^-$ to $\xi^+$.
$$\int_{\xi^-}^{\xi^+} \left[ G''(x|\xi) + p(x) G'(x|\xi) + q(x) G(x|\xi) \right] dx = \int_{\xi^-}^{\xi^+} \delta(x - \xi) \, dx.$$
Since $G(x|\xi)$ is continuous and $G'(x|\xi)$ has only a jump discontinuity, two of the terms vanish.
$$\int_{\xi^-}^{\xi^+} p(x) G'(x|\xi) \, dx = 0 \quad \text{and} \quad \int_{\xi^-}^{\xi^+} q(x) G(x|\xi) \, dx = 0$$
$$\int_{\xi^-}^{\xi^+} G''(x|\xi) \, dx = \int_{\xi^-}^{\xi^+} \delta(x - \xi) \, dx$$
$$\left[ G'(x|\xi) \right]_{\xi^-}^{\xi^+} = \left[ H(x - \xi) \right]_{\xi^-}^{\xi^+}$$
$$G'(\xi^+|\xi) - G'(\xi^-|\xi) = 1$$
We write this jump condition in terms of the homogeneous solutions.
$$d_1 y_1'(\xi) + d_2 y_2'(\xi) - c_1 y_1'(\xi) - c_2 y_2'(\xi) = 1$$
Combined with the two boundary conditions, this gives us a total of four equations to determine our four constants, $c_1$, $c_2$, $d_1$, and $d_2$.
Result 23.7.1 The second order inhomogeneous differential equation with homogeneous boundary conditions
$$L[y] = y'' + p(x) y' + q(x) y = f(x), \quad \text{for } a < x < b, \quad B_1[y] = B_2[y] = 0,$$
has the solution
$$y = \int_a^b G(x|\xi) f(\xi) \, d\xi,$$
where $G(x|\xi)$ satisfies the equation
$$L[G(x|\xi)] = \delta(x - \xi), \quad \text{for } a < x < b, \quad B_1[G(x|\xi)] = B_2[G(x|\xi)] = 0.$$
$G(x|\xi)$ is continuous and $G'(x|\xi)$ has a jump discontinuity of height 1 at $x = \xi$.
Example 23.7.1 Solve the boundary value problem
$$y'' = f(x), \quad y(0) = y(1) = 0,$$
using a Green function.

A pair of solutions to the homogeneous equation are $y_1 = 1$ and $y_2 = x$. First note that only the trivial solution to the homogeneous equation satisfies the homogeneous boundary conditions. Thus there is a unique solution to this problem.

The Green function satisfies
$$G''(x|\xi) = \delta(x - \xi), \quad G(0|\xi) = G(1|\xi) = 0.$$
The Green function has the form
$$G(x|\xi) = \begin{cases} c_1 + c_2 x & \text{for } x < \xi \\ d_1 + d_2 x & \text{for } x > \xi. \end{cases}$$
Applying the two boundary conditions, we see that $c_1 = 0$ and $d_1 = -d_2$. The Green function now has the form
$$G(x|\xi) = \begin{cases} c x & \text{for } x < \xi \\ d(x - 1) & \text{for } x > \xi. \end{cases}$$
Since the Green function must be continuous,
$$c \xi = d(\xi - 1) \quad \Rightarrow \quad d = c \frac{\xi}{\xi - 1}.$$
From the jump condition,
$$\frac{d}{dx} \left. c \frac{\xi}{\xi - 1} (x - 1) \right|_{x=\xi} - \frac{d}{dx} \left. c x \right|_{x=\xi} = 1$$
$$c \frac{\xi}{\xi - 1} - c = 1$$
$$c = \xi - 1.$$
Thus the Green function is
$$G(x|\xi) = \begin{cases} (\xi - 1) x & \text{for } x < \xi \\ \xi (x - 1) & \text{for } x > \xi. \end{cases}$$
The Green function is plotted in Figure 23.3 for various values of $\xi$. The solution to $y'' = f(x)$ is
$$y(x) = \int_0^1 G(x|\xi) f(\xi) \, d\xi$$
$$y(x) = (x - 1) \int_0^x \xi f(\xi) \, d\xi + x \int_x^1 (\xi - 1) f(\xi) \, d\xi.$$
Figure 23.3: Plot of $G(x|0.05)$, $G(x|0.25)$, $G(x|0.5)$ and $G(x|0.75)$.
Example 23.7.2 Solve the boundary value problem
$$y'' = f(x), \quad y(0) = 1, \quad y(1) = 2.$$
In Example 23.7.1 we saw that the solution to
$$u'' = f(x), \quad u(0) = u(1) = 0$$
is
$$u(x) = (x - 1) \int_0^x \xi f(\xi) \, d\xi + x \int_x^1 (\xi - 1) f(\xi) \, d\xi.$$
Now we have to find the solution to
$$v'' = 0, \quad v(0) = 1, \quad v(1) = 2.$$
The general solution is
$$v = c_1 + c_2 x.$$
Applying the boundary conditions yields
$$v = 1 + x.$$
Thus the solution for $y$ is
$$y = 1 + x + (x - 1) \int_0^x \xi f(\xi) \, d\xi + x \int_x^1 (\xi - 1) f(\xi) \, d\xi.$$
Example 23.7.3 Consider
$$y'' = x, \quad y(0) = y(1) = 0.$$
Method 1. Integrating the differential equation twice yields
$$y = \frac{1}{6} x^3 + c_1 x + c_2.$$
Applying the boundary conditions, we find that the solution is
$$y = \frac{1}{6} (x^3 - x).$$
Method 2. Using the Green function to find the solution,
$$y = (x - 1) \int_0^x \xi^2 \, d\xi + x \int_x^1 \xi (\xi - 1) \, d\xi$$
$$= (x - 1) \frac{1}{3} x^3 + x \left( \frac{1}{3} - \frac{1}{2} - \frac{1}{3} x^3 + \frac{1}{2} x^2 \right)$$
$$y = \frac{1}{6} (x^3 - x).$$
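The agreement between the two methods of Example 23.7.3 can also be confirmed by evaluating the Green function formula of Example 23.7.1 with numerical quadrature and comparing against $\frac{1}{6}(x^3 - x)$. A minimal sketch, not part of the original text; the quadrature resolution is an arbitrary choice:

```python
def trapezoid(g, a, b, n=2000):
    """Composite trapezoid rule on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + k * h) for k in range(1, n)))

f = lambda x: x  # the inhomogeneity from Example 23.7.3

def y_green(x):
    """y(x) = (x-1) int_0^x xi f(xi) dxi + x int_x^1 (xi-1) f(xi) dxi."""
    return ((x - 1) * trapezoid(lambda xi: xi * f(xi), 0.0, x)
            + x * trapezoid(lambda xi: (xi - 1) * f(xi), x, 1.0))

errs = [abs(y_green(x) - (x**3 - x) / 6) for x in (0.2, 0.5, 0.8)]
print(errs)  # all tiny
```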
Example 23.7.4 Find the solution to the differential equation
$$y'' - y = \sin x,$$
that is bounded for all $x$.

The Green function for this problem satisfies
$$G''(x|\xi) - G(x|\xi) = \delta(x - \xi).$$
The homogeneous solutions are $y_1 = e^x$ and $y_2 = e^{-x}$. The Green function has the form
$$G(x|\xi) = \begin{cases} c_1 e^x + c_2 e^{-x} & \text{for } x < \xi \\ d_1 e^x + d_2 e^{-x} & \text{for } x > \xi. \end{cases}$$
Since the solution must be bounded for all $x$, the Green function must also be bounded. Thus $c_2 = d_1 = 0$. The Green function now has the form
$$G(x|\xi) = \begin{cases} c \, e^x & \text{for } x < \xi \\ d \, e^{-x} & \text{for } x > \xi. \end{cases}$$
Requiring that $G(x|\xi)$ be continuous gives us the condition
$$c \, e^{\xi} = d \, e^{-\xi} \quad \Rightarrow \quad d = c \, e^{2\xi}.$$
$G'(x|\xi)$ has a jump discontinuity of height 1 at $x = \xi$.
$$\frac{d}{dx} \left. c \, e^{2\xi} e^{-x} \right|_{x=\xi} - \frac{d}{dx} \left. c \, e^x \right|_{x=\xi} = 1$$
$$-c \, e^{2\xi} e^{-\xi} - c \, e^{\xi} = 1$$
$$c = -\frac{1}{2} e^{-\xi}$$
The Green function is then
$$G(x|\xi) = \begin{cases} -\frac{1}{2} e^{x - \xi} & \text{for } x < \xi \\ -\frac{1}{2} e^{-x + \xi} & \text{for } x > \xi \end{cases}$$
$$G(x|\xi) = -\frac{1}{2} e^{-|x - \xi|}.$$
A plot of $G(x|0)$ is given in Figure 23.4. The solution to $y'' - y = \sin x$ is
$$y(x) = \int_{-\infty}^{\infty} -\frac{1}{2} e^{-|x - \xi|} \sin \xi \, d\xi$$
$$= -\frac{1}{2} \left( \int_{-\infty}^{x} \sin \xi \, e^{\xi - x} \, d\xi + \int_x^{\infty} \sin \xi \, e^{x - \xi} \, d\xi \right)$$
$$= -\frac{1}{2} \left( \frac{\sin x - \cos x}{2} + \frac{\sin x + \cos x}{2} \right)$$
$$y = -\frac{1}{2} \sin x.$$
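The Green function integral of Example 23.7.4 can be evaluated numerically by truncating the infinite range (the exponential kernel makes the truncation error negligible) and splitting the quadrature at the kink $\xi = x$. The result should match $-\frac{1}{2}\sin x$. This is an illustrative sketch, not part of the original text; the truncation span and panel count are arbitrary choices:

```python
import math

def trapezoid(g, a, b, n=20000):
    """Composite trapezoid rule on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + k * h) for k in range(1, n)))

def y_green(x, span=40.0):
    """y(x) = int -1/2 e^(-|x - xi|) sin(xi) dxi, truncated to |xi - x| <= span."""
    g = lambda xi: -0.5 * math.exp(-abs(x - xi)) * math.sin(xi)
    # split at xi = x, where the integrand has a kink
    return trapezoid(g, x - span, x) + trapezoid(g, x, x + span)

errs = [abs(y_green(x) - (-0.5 * math.sin(x))) for x in (0.0, 1.0, 2.0)]
print(errs)  # all small
```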
23.7.1 Green Functions for Sturm-Liouville Problems

Consider the problem
$$L[y] = (p(x) y')' + q(x) y = f(x), \quad \text{subject to}$$
$$B_1[y] = \alpha_1 y(a) + \alpha_2 y'(a) = 0, \quad B_2[y] = \beta_1 y(b) + \beta_2 y'(b) = 0.$$
Figure 23.4: Plot of $G(x|0)$.
This is known as a Sturm-Liouville problem. Equations of this type often occur when solving partial differential equations. The Green function associated with this problem satisfies
$$L[G(x|\xi)] = \delta(x - \xi), \quad B_1[G(x|\xi)] = B_2[G(x|\xi)] = 0.$$
Let $y_1$ and $y_2$ be two non-trivial homogeneous solutions that satisfy the left and right boundary conditions, respectively.
$$L[y_1] = 0, \quad B_1[y_1] = 0, \quad L[y_2] = 0, \quad B_2[y_2] = 0.$$
The Green function satisfies the homogeneous equation for $x \neq \xi$ and satisfies the homogeneous boundary conditions. Thus it must have the following form.
$$G(x|\xi) = \begin{cases} c_1(\xi) y_1(x) & \text{for } a \leq x \leq \xi, \\ c_2(\xi) y_2(x) & \text{for } \xi \leq x \leq b, \end{cases}$$
Here $c_1$ and $c_2$ are unknown functions of $\xi$.

The first constraint on $c_1$ and $c_2$ comes from the continuity condition.
$$G(\xi^-|\xi) = G(\xi^+|\xi)$$
$$c_1(\xi) y_1(\xi) = c_2(\xi) y_2(\xi)$$
We write the inhomogeneous equation in the standard form.
$$G''(x|\xi) + \frac{p'}{p} G'(x|\xi) + \frac{q}{p} G(x|\xi) = \frac{\delta(x - \xi)}{p}$$
The second constraint on $c_1$ and $c_2$ comes from the jump condition.
$$G'(\xi^+|\xi) - G'(\xi^-|\xi) = \frac{1}{p(\xi)}$$
$$c_2(\xi) y_2'(\xi) - c_1(\xi) y_1'(\xi) = \frac{1}{p(\xi)}$$
Now we have a system of equations to determine $c_1$ and $c_2$.
$$c_1(\xi) y_1(\xi) - c_2(\xi) y_2(\xi) = 0$$
$$c_1(\xi) y_1'(\xi) - c_2(\xi) y_2'(\xi) = -\frac{1}{p(\xi)}$$
We solve this system with Cramer's rule.
$$c_1(\xi) = \frac{y_2(\xi)}{p(\xi) W(\xi)}, \quad c_2(\xi) = \frac{y_1(\xi)}{p(\xi) W(\xi)}$$
Here $W(x)$ is the Wronskian of $y_1(x)$ and $y_2(x)$. The Green function is
$$G(x|\xi) = \begin{cases} \dfrac{y_1(x) y_2(\xi)}{p(\xi) W(\xi)} & \text{for } a \leq x \leq \xi, \\[1ex] \dfrac{y_2(x) y_1(\xi)}{p(\xi) W(\xi)} & \text{for } \xi \leq x \leq b. \end{cases}$$
The solution of the Sturm-Liouville problem is
$$y = \int_a^b G(x|\xi) f(\xi) \, d\xi.$$
Result 23.7.2 The problem
$$ L[y] = (p(x) y')' + q(x) y = f(x), \quad \text{subject to} $$
$$ B_1[y] = \alpha_1 y(a) + \alpha_2 y'(a) = 0, \qquad B_2[y] = \beta_1 y(b) + \beta_2 y'(b) = 0, $$
has the Green function
$$ G(x|\xi) = \begin{cases} \dfrac{y_1(x) \, y_2(\xi)}{p(\xi) W(\xi)} & \text{for } a \le x \le \xi, \\[1.5ex] \dfrac{y_2(x) \, y_1(\xi)}{p(\xi) W(\xi)} & \text{for } \xi \le x \le b, \end{cases} $$
where $y_1$ and $y_2$ are non-trivial homogeneous solutions that satisfy $B_1[y_1] = B_2[y_2] = 0$, and $W(x)$ is the Wronskian of $y_1$ and $y_2$.
Example 23.7.5 Consider the equation
$$ y'' - y = f(x), \qquad y(0) = y(1) = 0. $$
A set of solutions to the homogeneous equation is $\{e^x, e^{-x}\}$. Equivalently, one could use the set $\{\cosh x, \sinh x\}$. Note that $\sinh x$ satisfies the left boundary condition and $\sinh(x-1)$ satisfies the right boundary condition. The Wronskian of these two homogeneous solutions is
$$ W(x) = \begin{vmatrix} \sinh x & \sinh(x-1) \\ \cosh x & \cosh(x-1) \end{vmatrix} = \sinh x \cosh(x-1) - \cosh x \sinh(x-1) $$
$$ = \frac{1}{2}\left[ \sinh(2x-1) + \sinh(1) \right] - \frac{1}{2}\left[ \sinh(2x-1) - \sinh(1) \right] = \sinh(1). $$
The Green function for the problem is then
$$ G(x|\xi) = \begin{cases} \dfrac{\sinh x \, \sinh(\xi - 1)}{\sinh(1)} & \text{for } 0 \le x \le \xi, \\[1.5ex] \dfrac{\sinh(x-1) \, \sinh \xi}{\sinh(1)} & \text{for } \xi \le x \le 1. \end{cases} $$
The solution to the problem is
$$ y = \frac{\sinh(x-1)}{\sinh(1)} \int_0^x \sinh(\xi) f(\xi) \, d\xi + \frac{\sinh(x)}{\sinh(1)} \int_x^1 \sinh(\xi - 1) f(\xi) \, d\xi. $$
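A short Python check (my addition, not from the text) of the Green function in Example 23.7.5: for the test inhomogeneity $f = 1$ the exact solution of $y'' - y = 1$, $y(0) = y(1) = 0$ is $y = \cosh x + \frac{1 - \cosh 1}{\sinh 1}\sinh x - 1$, which the quadrature of $\int_0^1 G(x|\xi) f(\xi)\,d\xi$ should reproduce.

```python
import math

def G(x, xi):
    # Green function from Example 23.7.5 for y'' - y = f, y(0) = y(1) = 0
    if x <= xi:
        return math.sinh(x) * math.sinh(xi - 1) / math.sinh(1)
    return math.sinh(x - 1) * math.sinh(xi) / math.sinh(1)

def solve(x, f, n=4000):
    # y(x) = integral_0^1 G(x|xi) f(xi) dxi  (trapezoidal rule)
    h = 1.0 / n
    s = 0.5 * (G(x, 0.0) * f(0.0) + G(x, 1.0) * f(1.0))
    for k in range(1, n):
        xi = k * h
        s += G(x, xi) * f(xi)
    return s * h

def exact(x):
    # exact solution of y'' - y = 1 with these boundary conditions
    return math.cosh(x) + (1 - math.cosh(1)) / math.sinh(1) * math.sinh(x) - 1

for x in (0.0, 0.25, 0.5, 0.9, 1.0):
    assert abs(solve(x, lambda t: 1.0) - exact(x)) < 1e-6
```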
23.7.2 Initial Value Problems
Consider
$$ L[y] = y'' + p(x) y' + q(x) y = f(x), \quad \text{for } a < x < b, $$
subject to the initial conditions
$$ y(a) = \gamma_1, \qquad y'(a) = \gamma_2. $$
The solution is $y = u + v$ where
$$ u'' + p(x) u' + q(x) u = f(x), \qquad u(a) = 0, \quad u'(a) = 0, $$
and
$$ v'' + p(x) v' + q(x) v = 0, \qquad v(a) = \gamma_1, \quad v'(a) = \gamma_2. $$
Since the Wronskian
$$ W(x) = c \exp\left( -\int p(x) \, dx \right) $$
is non-vanishing, the solutions of the differential equation for $v$ are linearly independent. Thus there is a unique solution for $v$ that satisfies the initial conditions.

The Green function for $u$ satisfies
$$ G''(x|\xi) + p(x) G'(x|\xi) + q(x) G(x|\xi) = \delta(x - \xi), \qquad G(a|\xi) = 0, \quad G'(a|\xi) = 0. $$
The continuity and jump conditions are
$$ G(\xi^-|\xi) = G(\xi^+|\xi), \qquad G'(\xi^-|\xi) + 1 = G'(\xi^+|\xi). $$
Let $u_1$ and $u_2$ be two linearly independent solutions of the differential equation. For $x < \xi$, $G(x|\xi)$ is a linear combination of these solutions. Since the Wronskian is non-vanishing, only the trivial solution satisfies the homogeneous initial conditions. The Green function must be
$$ G(x|\xi) = \begin{cases} 0 & \text{for } x < \xi, \\ u_\xi(x) & \text{for } x > \xi, \end{cases} $$
where $u_\xi(x)$ is the linear combination of $u_1$ and $u_2$ that satisfies
$$ u_\xi(\xi) = 0, \qquad u_\xi'(\xi) = 1. $$
Note that the non-vanishing Wronskian ensures a unique solution for $u_\xi$. We can write the Green function in the form
$$ G(x|\xi) = H(x - \xi) \, u_\xi(x). $$
This is known as the causal solution. The solution for $u$ is
$$ u = \int_a^b G(x|\xi) f(\xi) \, d\xi = \int_a^b H(x - \xi) u_\xi(x) f(\xi) \, d\xi = \int_a^x u_\xi(x) f(\xi) \, d\xi. $$
Now we have the solution for $y$:
$$ y = v + \int_a^x u_\xi(x) f(\xi) \, d\xi. $$

Result 23.7.3 The solution of the problem
$$ y'' + p(x) y' + q(x) y = f(x), \qquad y(a) = \gamma_1, \quad y'(a) = \gamma_2, $$
is
$$ y = y_h + \int_a^x y_\xi(x) f(\xi) \, d\xi, $$
where $y_h$ is the combination of the homogeneous solutions of the equation that satisfies the initial conditions and $y_\xi(x)$ is the linear combination of homogeneous solutions that satisfies $y_\xi(\xi) = 0$, $y_\xi'(\xi) = 1$.
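A quick Python illustration of Result 23.7.3 (my addition, not from the text): for $y'' - y = f$ the combination with $u_\xi(\xi) = 0$, $u_\xi'(\xi) = 1$ is $u_\xi(x) = \sinh(x - \xi)$, and for $f = 1$, $y(0) = y'(0) = 0$, the causal-solution integral should reproduce $y = \cosh x - 1$.

```python
import math

def u_xi(x, xi):
    # for y'' - y: the homogeneous combination with u(xi) = 0, u'(xi) = 1
    return math.sinh(x - xi)

def solve(x, f, a=0.0, n=2000):
    # y(x) = integral_a^x u_xi(x) f(xi) dxi  (the causal / Heaviside form)
    h = (x - a) / n
    s = 0.5 * (u_xi(x, a) * f(a) + u_xi(x, x) * f(x))
    for k in range(1, n):
        xi = a + k * h
        s += u_xi(x, xi) * f(xi)
    return s * h

# f = 1 with y(0) = y'(0) = 0 gives y = cosh x - 1 (check: y'' - y = 1).
for x in (0.5, 1.0, 2.0):
    assert abs(solve(x, lambda t: 1.0) - (math.cosh(x) - 1)) < 1e-5
```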
23.7.3 Problems with Unmixed Boundary Conditions
Consider
$$ L[y] = y'' + p(x) y' + q(x) y = f(x), \quad \text{for } a < x < b, $$
subject to the unmixed boundary conditions
$$ \alpha_1 y(a) + \alpha_2 y'(a) = \gamma_1, \qquad \beta_1 y(b) + \beta_2 y'(b) = \gamma_2. $$
The solution is $y = u + v$ where
$$ u'' + p(x) u' + q(x) u = f(x), \qquad \alpha_1 u(a) + \alpha_2 u'(a) = 0, \quad \beta_1 u(b) + \beta_2 u'(b) = 0, $$
and
$$ v'' + p(x) v' + q(x) v = 0, \qquad \alpha_1 v(a) + \alpha_2 v'(a) = \gamma_1, \quad \beta_1 v(b) + \beta_2 v'(b) = \gamma_2. $$
The problem for $v$ may have no solution, a unique solution or an infinite number of solutions. We consider only the case that there is a unique solution for $v$. In this case the homogeneous equation subject to homogeneous boundary conditions has only the trivial solution.

The Green function for $u$ satisfies
$$ G''(x|\xi) + p(x) G'(x|\xi) + q(x) G(x|\xi) = \delta(x - \xi), $$
$$ \alpha_1 G(a|\xi) + \alpha_2 G'(a|\xi) = 0, \qquad \beta_1 G(b|\xi) + \beta_2 G'(b|\xi) = 0. $$
The continuity and jump conditions are
$$ G(\xi^-|\xi) = G(\xi^+|\xi), \qquad G'(\xi^-|\xi) + 1 = G'(\xi^+|\xi). $$
Let $u_1$ and $u_2$ be two solutions of the homogeneous equation that satisfy the left and right boundary conditions, respectively. The non-vanishing of the Wronskian ensures that these solutions exist. Let $W(x)$ denote the Wronskian of $u_1$ and $u_2$. Since the homogeneous equation with homogeneous boundary conditions has only the trivial solution, $W(x)$ is nonzero on $[a, b]$. The Green function has the form
$$ G(x|\xi) = \begin{cases} c_1 u_1 & \text{for } x < \xi, \\ c_2 u_2 & \text{for } x > \xi. \end{cases} $$
The continuity and jump conditions for the Green function give us the equations
$$ c_1 u_1(\xi) - c_2 u_2(\xi) = 0 $$
$$ c_1 u_1'(\xi) - c_2 u_2'(\xi) = -1. $$
Using Cramer's rule, the solution is
$$ c_1 = \frac{u_2(\xi)}{W(\xi)}, \qquad c_2 = \frac{u_1(\xi)}{W(\xi)}. $$
Thus the Green function is
$$ G(x|\xi) = \begin{cases} \dfrac{u_1(x) \, u_2(\xi)}{W(\xi)} & \text{for } x < \xi, \\[1.5ex] \dfrac{u_1(\xi) \, u_2(x)}{W(\xi)} & \text{for } x > \xi. \end{cases} $$
The solution for $u$ is
$$ u = \int_a^b G(x|\xi) f(\xi) \, d\xi. $$
Thus if there is a unique solution for $v$, the solution for $y$ is
$$ y = v + \int_a^b G(x|\xi) f(\xi) \, d\xi. $$
Result 23.7.4 Consider the problem
$$ y'' + p(x) y' + q(x) y = f(x), $$
$$ \alpha_1 y(a) + \alpha_2 y'(a) = \gamma_1, \qquad \beta_1 y(b) + \beta_2 y'(b) = \gamma_2. $$
If the homogeneous differential equation subject to the inhomogeneous boundary conditions has the unique solution $y_h$, then the problem has the unique solution
$$ y = y_h + \int_a^b G(x|\xi) f(\xi) \, d\xi, $$
where
$$ G(x|\xi) = \begin{cases} \dfrac{u_1(x) \, u_2(\xi)}{W(\xi)} & \text{for } x < \xi, \\[1.5ex] \dfrac{u_1(\xi) \, u_2(x)}{W(\xi)} & \text{for } x > \xi, \end{cases} $$
$u_1$ and $u_2$ are solutions of the homogeneous differential equation that satisfy the left and right boundary conditions, respectively, and $W(x)$ is the Wronskian of $u_1$ and $u_2$.
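A minimal Python check of Result 23.7.4 (my addition, not from the text) on the simplest unmixed problem $y'' = f(x)$, $y(0) = y(1) = 0$: here $u_1 = x$, $u_2 = x - 1$, and $W = u_1 u_2' - u_1' u_2 = 1$, so $G(x|\xi) = x(\xi - 1)$ for $x < \xi$ and $\xi(x - 1)$ for $x > \xi$; for $f = 1$ the exact solution is $y = x(x-1)/2$.

```python
def G(x, xi):
    # u1 = x (left BC), u2 = x - 1 (right BC), W = u1 u2' - u1' u2 = 1
    return x * (xi - 1) if x < xi else xi * (x - 1)

def solve(x, f, n=2000):
    # trapezoidal rule; exact here since the integrand is piecewise linear
    # in xi with the kink at xi = x landing on a grid node
    h = 1.0 / n
    s = 0.5 * (G(x, 0.0) * f(0.0) + G(x, 1.0) * f(1.0))
    for k in range(1, n):
        s += G(x, k * h) * f(k * h)
    return s * h

# f = 1: exact solution of y'' = 1, y(0) = y(1) = 0 is y = x(x - 1)/2.
for x in (0.2, 0.5, 0.8):
    assert abs(solve(x, lambda t: 1.0) - x * (x - 1) / 2) < 1e-9
```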
23.7.4 Problems with Mixed Boundary Conditions
Consider
$$ L[y] = y'' + p(x) y' + q(x) y = f(x), \quad \text{for } a < x < b, $$
subject to the mixed boundary conditions
$$ B_1[y] = \alpha_{11} y(a) + \alpha_{12} y'(a) + \beta_{11} y(b) + \beta_{12} y'(b) = \gamma_1, $$
$$ B_2[y] = \alpha_{21} y(a) + \alpha_{22} y'(a) + \beta_{21} y(b) + \beta_{22} y'(b) = \gamma_2. $$
The solution is $y = u + v$ where
$$ u'' + p(x) u' + q(x) u = f(x), \qquad B_1[u] = 0, \quad B_2[u] = 0, $$
and
$$ v'' + p(x) v' + q(x) v = 0, \qquad B_1[v] = \gamma_1, \quad B_2[v] = \gamma_2. $$
The problem for $v$ may have no solution, a unique solution or an infinite number of solutions. Again we consider only the case that there is a unique solution for $v$. In this case the homogeneous equation subject to homogeneous boundary conditions has only the trivial solution.

Let $y_1$ and $y_2$ be two solutions of the homogeneous equation that satisfy the boundary conditions $B_1[y_1] = 0$ and $B_2[y_2] = 0$. Since the completely homogeneous problem has no nontrivial solutions, we know that $B_1[y_2]$ and $B_2[y_1]$ are nonzero. The solution for $v$ has the form
$$ v = c_1 y_1 + c_2 y_2. $$
Applying the two boundary conditions yields
$$ v = \frac{\gamma_2}{B_2[y_1]} y_1 + \frac{\gamma_1}{B_1[y_2]} y_2. $$
The Green function for $u$ satisfies
$$ G''(x|\xi) + p(x) G'(x|\xi) + q(x) G(x|\xi) = \delta(x - \xi), \qquad B_1[G] = 0, \quad B_2[G] = 0. $$
The continuity and jump conditions are
$$ G(\xi^-|\xi) = G(\xi^+|\xi), \qquad G'(\xi^-|\xi) + 1 = G'(\xi^+|\xi). $$
We write the Green function as the sum of the causal solution and the two homogeneous solutions:
$$ G(x|\xi) = H(x - \xi) y_\xi(x) + c_1 y_1(x) + c_2 y_2(x). $$
With this form, the continuity and jump conditions are automatically satisfied. Applying the boundary conditions yields
$$ B_1[G] = B_1[H(x - \xi) y_\xi] + c_2 B_1[y_2] = 0, $$
$$ B_2[G] = B_2[H(x - \xi) y_\xi] + c_1 B_2[y_1] = 0, $$
$$ B_1[G] = \beta_{11} y_\xi(b) + \beta_{12} y_\xi'(b) + c_2 B_1[y_2] = 0, $$
$$ B_2[G] = \beta_{21} y_\xi(b) + \beta_{22} y_\xi'(b) + c_1 B_2[y_1] = 0, $$
$$ G(x|\xi) = H(x - \xi) y_\xi(x) - \frac{\beta_{21} y_\xi(b) + \beta_{22} y_\xi'(b)}{B_2[y_1]} y_1(x) - \frac{\beta_{11} y_\xi(b) + \beta_{12} y_\xi'(b)}{B_1[y_2]} y_2(x). $$
Note that the Green function is well defined since $B_2[y_1]$ and $B_1[y_2]$ are nonzero. The solution for $u$ is
$$ u = \int_a^b G(x|\xi) f(\xi) \, d\xi. $$
Thus if there is a unique solution for $v$, the solution for $y$ is
$$ y = \int_a^b G(x|\xi) f(\xi) \, d\xi + \frac{\gamma_2}{B_2[y_1]} y_1 + \frac{\gamma_1}{B_1[y_2]} y_2. $$
Result 23.7.5 Consider the problem
$$ y'' + p(x) y' + q(x) y = f(x), $$
$$ B_1[y] = \alpha_{11} y(a) + \alpha_{12} y'(a) + \beta_{11} y(b) + \beta_{12} y'(b) = \gamma_1, $$
$$ B_2[y] = \alpha_{21} y(a) + \alpha_{22} y'(a) + \beta_{21} y(b) + \beta_{22} y'(b) = \gamma_2. $$
If the homogeneous differential equation subject to the homogeneous boundary conditions has no nontrivial solution, then the problem has the unique solution
$$ y = \int_a^b G(x|\xi) f(\xi) \, d\xi + \frac{\gamma_2}{B_2[y_1]} y_1 + \frac{\gamma_1}{B_1[y_2]} y_2, $$
where
$$ G(x|\xi) = H(x - \xi) y_\xi(x) - \frac{\beta_{21} y_\xi(b) + \beta_{22} y_\xi'(b)}{B_2[y_1]} y_1(x) - \frac{\beta_{11} y_\xi(b) + \beta_{12} y_\xi'(b)}{B_1[y_2]} y_2(x), $$
$y_1$ and $y_2$ are solutions of the homogeneous differential equation that satisfy the first and second boundary conditions, respectively, and $y_\xi(x)$ is the solution of the homogeneous equation that satisfies $y_\xi(\xi) = 0$, $y_\xi'(\xi) = 1$.
23.8 Green Functions for Higher Order Problems

Consider the $n^{\text{th}}$ order differential equation
$$ L[y] = y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_1(x) y' + p_0(x) y = f(x) \quad \text{on } a < x < b, $$
subject to the $n$ independent boundary conditions
$$ B_j[y] = \gamma_j, $$
where the boundary conditions are of the form
$$ B[y] \equiv \sum_{k=0}^{n-1} \alpha_k y^{(k)}(a) + \sum_{k=0}^{n-1} \beta_k y^{(k)}(b). $$
We assume that the coefficient functions in the differential equation are continuous on $[a, b]$. The solution is $y = u + v$ where $u$ and $v$ satisfy
$$ L[u] = f(x), \quad \text{with } B_j[u] = 0, $$
and
$$ L[v] = 0, \quad \text{with } B_j[v] = \gamma_j. $$
From Result 23.5.3, we know that if the completely homogeneous problem
$$ L[w] = 0, \quad \text{with } B_j[w] = 0, $$
has only the trivial solution, then the solution for $y$ exists and is unique. We will construct this solution using Green functions.
962
First we consider the problem for v. Let y
1
, . . . , y
n
be a set of linearly independent solutions. The solution
for v has the form
v = c
1
y
1
+ +c
n
y
n
where the constants are determined by the matrix equation
_
_
_
_
_
B
1
[y
1
] B
1
[y
2
] B
1
[y
n
]
B
2
[y
1
] B
2
[y
2
] B
2
[y
n
]
.
.
.
.
.
.
.
.
.
.
.
.
B
n
[y
1
] B
n
[y
2
] B
n
[y
n
]
_
_
_
_
_
_
_
_
_
_
c
1
c
2
.
.
.
c
n
_
_
_
_
_
=
_
_
_
_
_

2
.
.
.

n
_
_
_
_
_
.
To solve the problem for $u$ we consider the Green function satisfying
$$ L[G(x|\xi)] = \delta(x - \xi), \quad \text{with } B_j[G] = 0. $$
Let $y_\xi(x)$ be the linear combination of the homogeneous solutions that satisfies the conditions
$$ y_\xi(\xi) = 0, \quad y_\xi'(\xi) = 0, \quad \ldots, \quad y_\xi^{(n-2)}(\xi) = 0, \quad y_\xi^{(n-1)}(\xi) = 1. $$
The causal solution is then
$$ y_c(x) = H(x - \xi) y_\xi(x). $$
The Green function has the form
$$ G(x|\xi) = H(x - \xi) y_\xi(x) + d_1 y_1(x) + \cdots + d_n y_n(x). $$
The constants are determined by the matrix equation
$$ \begin{pmatrix} B_1[y_1] & B_1[y_2] & \cdots & B_1[y_n] \\ B_2[y_1] & B_2[y_2] & \cdots & B_2[y_n] \\ \vdots & \vdots & \ddots & \vdots \\ B_n[y_1] & B_n[y_2] & \cdots & B_n[y_n] \end{pmatrix} \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix} = \begin{pmatrix} -B_1[H(x - \xi) y_\xi(x)] \\ -B_2[H(x - \xi) y_\xi(x)] \\ \vdots \\ -B_n[H(x - \xi) y_\xi(x)] \end{pmatrix}. $$
The solution for $u$ then is
$$ u = \int_a^b G(x|\xi) f(\xi) \, d\xi. $$
Result 23.8.1 Consider the $n^{\text{th}}$ order differential equation
$$ L[y] = y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_1(x) y' + p_0(x) y = f(x) \quad \text{on } a < x < b, $$
subject to the $n$ independent boundary conditions
$$ B_j[y] = \gamma_j. $$
If the homogeneous differential equation subject to the homogeneous boundary conditions has only the trivial solution, then the problem has the unique solution
$$ y = \int_a^b G(x|\xi) f(\xi) \, d\xi + c_1 y_1 + \cdots + c_n y_n, $$
where
$$ G(x|\xi) = H(x - \xi) y_\xi(x) + d_1 y_1(x) + \cdots + d_n y_n(x), $$
$y_1, \ldots, y_n$ is a set of solutions of the homogeneous differential equation, and the constants $c_j$ and $d_j$ can be determined by solving sets of linear equations.
Example 23.8.1 Consider the problem
$$ y''' - y'' + y' - y = f(x), $$
$$ y(0) = 1, \quad y'(0) = 2, \quad y(1) = 3. $$
The completely homogeneous associated problem is
$$ w''' - w'' + w' - w = 0, \qquad w(0) = w'(0) = w(1) = 0. $$
The solution of the differential equation is
$$ w = c_1 \cos x + c_2 \sin x + c_3 e^x. $$
The boundary conditions give us the equation
$$ \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ \cos 1 & \sin 1 & e \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. $$
The determinant of the matrix is $e - \cos 1 - \sin 1 \neq 0$. Thus the homogeneous problem has only the trivial solution and the inhomogeneous problem has a unique solution.
and the inhomogeneous problem has a unique solution.
We separate the inhomogeneous problem into the two problems
u
ttt
u
tt
+u
t
u = f(x), u(0) = u
t
(0) = u(1) = 0,
v
ttt
v
tt
+v
t
v = 0, v(0) = 1, v
t
(0) = 2, v(1) = 3,
First we solve the problem for v. The solution of the dierential equation is
v = c
1
cos x +c
2
sin x +c
2
e
x
.
The boundary conditions yields the equation
_
_
1 0 1
0 1 1
cos 1 sin 1 e
_
_
_
_
c
1
c
2
c
3
_
_
=
_
_
1
2
3
_
_
.
The solution for v is
v =
1
e cos 1 sin 1
_
(e + sin 1 3) cos x + (2e cos 1 3) sin x + (3 cos 1 2 sin 1) e
x

.
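A Python sanity check (my addition, not from the text) that solves the $3 \times 3$ system for $v$ with Cramer's rule and confirms the closed-form coefficients quoted above.

```python
import math

def det3(m):
    # determinant of a 3x3 matrix by cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [math.cos(1), math.sin(1), math.e]]
b = [1.0, 2.0, 3.0]

D = det3(A)
assert abs(D - (math.e - math.cos(1) - math.sin(1))) < 1e-12  # nonzero determinant

def col_replaced(j):
    # Cramer's rule: replace column j of A with the right-hand side b
    return [[b[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]

c = [det3(col_replaced(j)) / D for j in range(3)]

def v(x):
    return c[0] * math.cos(x) + c[1] * math.sin(x) + c[2] * math.exp(x)

# compare against the closed form derived in the text
Delta = math.e - math.cos(1) - math.sin(1)
assert abs(c[0] - (math.e + math.sin(1) - 3) / Delta) < 1e-12
assert abs(c[1] - (2 * math.e - math.cos(1) - 3) / Delta) < 1e-12
assert abs(c[2] - (3 - math.cos(1) - 2 * math.sin(1)) / Delta) < 1e-12
assert abs(v(0.0) - 1.0) < 1e-12 and abs(v(1.0) - 3.0) < 1e-12
```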
Now we find the Green function for the problem in $u$. The causal solution is
$$ H(x - \xi) u_\xi(x) = H(x - \xi) \frac{1}{2} \left( (\sin \xi - \cos \xi) \cos x - (\sin \xi + \cos \xi) \sin x + e^{-\xi} e^x \right), $$
$$ H(x - \xi) u_\xi(x) = \frac{1}{2} H(x - \xi) \left( e^{x - \xi} - \cos(x - \xi) - \sin(x - \xi) \right). $$
The Green function has the form
$$ G(x|\xi) = H(x - \xi) u_\xi(x) + c_1 \cos x + c_2 \sin x + c_3 e^x. $$
The constants are determined by the three conditions
$$ \left[ c_1 \cos x + c_2 \sin x + c_3 e^x \right]_{x=0} = 0, $$
$$ \left[ \frac{\partial}{\partial x} \left( c_1 \cos x + c_2 \sin x + c_3 e^x \right) \right]_{x=0} = 0, $$
$$ \left[ u_\xi(x) + c_1 \cos x + c_2 \sin x + c_3 e^x \right]_{x=1} = 0. $$
The Green function is
$$ G(x|\xi) = \frac{1}{2} H(x - \xi) \left( e^{x-\xi} - \cos(x - \xi) - \sin(x - \xi) \right) + \frac{\cos(1 - \xi) + \sin(1 - \xi) - e^{1-\xi}}{2(\cos 1 + \sin 1 - e)} \left( \cos x + \sin x - e^x \right). $$
The solution for $u$ is
$$ u = \int_0^1 G(x|\xi) f(\xi) \, d\xi. $$
Thus the solution for $y$ is
$$ y = \int_0^1 G(x|\xi) f(\xi) \, d\xi + \frac{1}{e - \cos 1 - \sin 1} \left( (e + \sin 1 - 3) \cos x + (2e - \cos 1 - 3) \sin x + (3 - \cos 1 - 2 \sin 1) e^x \right). $$
23.9 Fredholm Alternative Theorem

Orthogonality. Two real vectors, $u$ and $v$, are orthogonal if $u \cdot v = 0$. Consider two functions, $u(x)$ and $v(x)$, defined on $[a, b]$. The dot product in vector space is analogous to the integral
$$ \int_a^b u(x) v(x) \, dx $$
in function space. Thus two real functions are orthogonal if
$$ \int_a^b u(x) v(x) \, dx = 0. $$
Consider the $n^{\text{th}}$ order linear inhomogeneous differential equation
$$ L[y] = f(x) \quad \text{on } [a, b], $$
subject to the linear homogeneous boundary conditions
$$ B_j[y] = 0, \quad \text{for } j = 1, 2, \ldots, n. $$
The Fredholm alternative theorem tells us if the problem has a unique solution, an infinite number of solutions, or no solution. Before presenting the theorem, we will consider a few motivating examples.

No Nontrivial Homogeneous Solutions. In the section on Green functions we showed that if the completely homogeneous problem has only the trivial solution then the inhomogeneous problem has a unique solution.

Nontrivial Homogeneous Solutions Exist. If there are nonzero solutions to the homogeneous problem $L[y] = 0$ that satisfy the homogeneous boundary conditions $B_j[y] = 0$, then the inhomogeneous problem $L[y] = f(x)$ subject to the same boundary conditions either has no solution or an infinite number of solutions.

Suppose there is a particular solution $y_p$ that satisfies the boundary conditions. If there is a solution $y_h$ to the homogeneous equation that satisfies the boundary conditions then there will be an infinite number of solutions since $y_p + c y_h$ is also a particular solution.

The question now remains: Given that there are homogeneous solutions that satisfy the boundary conditions, how do we know if a particular solution that satisfies the boundary conditions exists? Before we address this question we will consider a few examples.
Example 23.9.1 Consider the problem
$$ y'' + y = \cos x, \qquad y(0) = y(\pi) = 0. $$
The two homogeneous solutions of the differential equation are
$$ y_1 = \cos x, \quad \text{and} \quad y_2 = \sin x. $$
$y_2 = \sin x$ satisfies the boundary conditions. Thus we know that there are either no solutions or an infinite number of solutions. A particular solution is
$$ y_p = -\cos x \int \frac{\cos x \sin x}{1} \, dx + \sin x \int \frac{\cos^2 x}{1} \, dx $$
$$ = -\cos x \int \frac{1}{2} \sin(2x) \, dx + \sin x \int \left( \frac{1}{2} + \frac{1}{2} \cos(2x) \right) dx $$
$$ = \frac{1}{4} \cos x \cos(2x) + \sin x \left( \frac{1}{2} x + \frac{1}{4} \sin(2x) \right) $$
$$ = \frac{1}{2} x \sin x + \frac{1}{4} \left( \cos x \cos(2x) + \sin x \sin(2x) \right) $$
$$ = \frac{1}{2} x \sin x + \frac{1}{4} \cos x. $$
The general solution is
$$ y = \frac{1}{2} x \sin x + c_1 \cos x + c_2 \sin x. $$
Applying the two boundary conditions yields
$$ y = \frac{1}{2} x \sin x + c \sin x. $$
Thus there are an infinite number of solutions.
Example 23.9.2 Consider the differential equation
$$ y'' + y = \sin x, \qquad y(0) = y(\pi) = 0. $$
The general solution is
$$ y = -\frac{1}{2} x \cos x + c_1 \cos x + c_2 \sin x. $$
Applying the boundary conditions:
$$ y(0) = 0 \quad \Rightarrow \quad c_1 = 0 $$
$$ y(\pi) = 0 \quad \Rightarrow \quad -\frac{1}{2} \pi \cos(\pi) + c_2 \sin(\pi) = 0 \quad \Rightarrow \quad \frac{\pi}{2} = 0. $$
Since this equation has no solution, there are no solutions to the inhomogeneous problem.

In both of the above examples there is a homogeneous solution $y = \sin x$ that satisfies the boundary conditions. In Example 23.9.1, the inhomogeneous term is $\cos x$ and there are an infinite number of solutions. In Example 23.9.2, the inhomogeneity is $\sin x$ and there are no solutions. In general, if the inhomogeneous term is orthogonal to all the homogeneous solutions that satisfy the boundary conditions then there are an infinite number of solutions. If not, there are no inhomogeneous solutions.
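The orthogonality integrals behind these two examples can be checked numerically; this Python sketch (my addition, not from the text) confirms that $\cos x$ is orthogonal to $\sin x$ on $[0, \pi]$ while $\sin x$ is not orthogonal to itself, matching the solvable and unsolvable cases above.

```python
import math

def trapz(g, a, b, n=20000):
    # composite trapezoidal rule
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        s += g(a + k * h)
    return s * h

# Example 23.9.1: inhomogeneity cos x is orthogonal to sin x on [0, pi]
# -> infinitely many solutions.
assert abs(trapz(lambda x: math.sin(x) * math.cos(x), 0, math.pi)) < 1e-8

# Example 23.9.2: inhomogeneity sin x is NOT orthogonal to sin x
# (the integral is pi/2) -> no solution.
assert abs(trapz(lambda x: math.sin(x) ** 2, 0, math.pi) - math.pi / 2) < 1e-8
```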
Result 23.9.1 Fredholm Alternative Theorem. Consider the $n^{\text{th}}$ order inhomogeneous problem
$$ L[y] = f(x) \quad \text{on } [a, b] \quad \text{subject to} \quad B_j[y] = 0 \quad \text{for } j = 1, 2, \ldots, n, $$
and the associated homogeneous problem
$$ L[y] = 0 \quad \text{on } [a, b] \quad \text{subject to} \quad B_j[y] = 0 \quad \text{for } j = 1, 2, \ldots, n. $$
If the homogeneous problem has only the trivial solution then the inhomogeneous problem has a unique solution. If the homogeneous problem has $m$ independent solutions, $y_1, y_2, \ldots, y_m$, then there are two possibilities:

- If $f(x)$ is orthogonal to each of the homogeneous solutions then there are an infinite number of solutions of the form
$$ y = y_p + \sum_{j=1}^{m} c_j y_j. $$
- If $f(x)$ is not orthogonal to each of the homogeneous solutions then there are no inhomogeneous solutions.
Example 23.9.3 Consider the problem
$$ y'' + y = \cos 2x, \qquad y(0) = 1, \quad y(\pi) = 2. $$
$\cos x$ and $\sin x$ are two linearly independent solutions to the homogeneous equation. $\sin x$ satisfies the homogeneous boundary conditions. Thus there are either an infinite number of solutions, or no solution.

To transform this problem to one with homogeneous boundary conditions, we note that $g(x) = \frac{x}{\pi} + 1$ satisfies the boundary conditions and make the change of variables $y = u + g$ to obtain
$$ u'' + u = \cos 2x - \frac{x}{\pi} - 1, \qquad u(0) = 0, \quad u(\pi) = 0. $$
Since $\cos 2x - \frac{x}{\pi} - 1$ is not orthogonal to $\sin x$, there is no solution to the inhomogeneous problem.

To check this, the general solution is
$$ y = -\frac{1}{3} \cos 2x + c_1 \cos x + c_2 \sin x. $$
Applying the boundary conditions:
$$ y(0) = 1 \quad \Rightarrow \quad c_1 = \frac{4}{3} $$
$$ y(\pi) = 2 \quad \Rightarrow \quad -\frac{1}{3} - \frac{4}{3} = 2. $$
Thus we see that the right boundary condition cannot be satisfied.
Example 23.9.4 Consider
$$ y'' + y = \cos 2x, \qquad y'(0) = y(\pi) = 1. $$
There are no solutions to the homogeneous equation that satisfy the homogeneous boundary conditions. To check this, note that all solutions of the homogeneous equation have the form $u_h = c_1 \cos x + c_2 \sin x$:
$$ u_h'(0) = 0 \quad \Rightarrow \quad c_2 = 0, $$
$$ u_h(\pi) = 0 \quad \Rightarrow \quad c_1 = 0. $$
From the Fredholm Alternative Theorem we see that the inhomogeneous problem has a unique solution.

To find the solution, start with
$$ y = -\frac{1}{3} \cos 2x + c_1 \cos x + c_2 \sin x. $$
$$ y'(0) = 1 \quad \Rightarrow \quad c_2 = 1 $$
$$ y(\pi) = 1 \quad \Rightarrow \quad -\frac{1}{3} - c_1 = 1 $$
Thus the solution is
$$ y = -\frac{1}{3} \cos 2x - \frac{4}{3} \cos x + \sin x. $$
Example 23.9.5 Consider
$$ y'' + y = \cos 2x, \qquad y(0) = \frac{2}{3}, \quad y(\pi) = -\frac{4}{3}. $$
$\cos x$ and $\sin x$ satisfy the homogeneous differential equation. $\sin x$ satisfies the homogeneous boundary conditions. Since $g(x) = \cos x - 1/3$ satisfies the boundary conditions, the substitution $y = u + g$ yields
$$ u'' + u = \cos 2x + \frac{1}{3}, \qquad u(0) = 0, \quad u(\pi) = 0. $$
Now we check if $\sin x$ is orthogonal to $\cos 2x + \frac{1}{3}$:
$$ \int_0^\pi \sin x \left( \cos 2x + \frac{1}{3} \right) dx = \int_0^\pi \frac{1}{2} \sin 3x - \frac{1}{2} \sin x + \frac{1}{3} \sin x \, dx $$
$$ = \left[ -\frac{1}{6} \cos 3x + \frac{1}{6} \cos x \right]_0^\pi $$
$$ = 0. $$
Since $\sin x$ is orthogonal to the inhomogeneity, there are an infinite number of solutions to the problem for $u$ (and hence the problem for $y$).

As a check, the general solution for $y$ is
$$ y = -\frac{1}{3} \cos 2x + c_1 \cos x + c_2 \sin x. $$
Applying the boundary conditions:
$$ y(0) = \frac{2}{3} \quad \Rightarrow \quad c_1 = 1 $$
$$ y(\pi) = -\frac{4}{3} \quad \Rightarrow \quad -\frac{4}{3} = -\frac{4}{3}. $$
Thus we see that $c_2$ is arbitrary. There are an infinite number of solutions of the form
$$ y = -\frac{1}{3} \cos 2x + \cos x + c \sin x. $$
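A small Python check (my addition, not from the text) that the one-parameter family found in Example 23.9.5 satisfies both boundary conditions and the differential equation for arbitrary $c$, using a second-difference approximation of $y''$.

```python
import math

def y(x, c):
    # the infinite family of solutions found in Example 23.9.5
    return -math.cos(2 * x) / 3 + math.cos(x) + c * math.sin(x)

def residual(x, c, h=1e-4):
    # second-difference approximation of y'' + y - cos 2x
    ypp = (y(x + h, c) - 2 * y(x, c) + y(x - h, c)) / h**2
    return ypp + y(x, c) - math.cos(2 * x)

for c in (-2.0, 0.0, 5.0):
    assert abs(y(0.0, c) - 2.0 / 3.0) < 1e-12       # y(0) = 2/3
    assert abs(y(math.pi, c) + 4.0 / 3.0) < 1e-12   # y(pi) = -4/3
    for x in (0.3, 1.1, 2.5):
        assert abs(residual(x, c)) < 1e-5
```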
23.10 Exercises

Undetermined Coefficients

Exercise 23.1 (mathematica/ode/inhomogeneous/undetermined.nb)
Find the general solution of the following equations.
1. $y'' + 2y' + 5y = 3 \sin(2t)$
2. $2y'' + 3y' + y = t^2 + 3 \sin(t)$
Hint, Solution

Exercise 23.2 (mathematica/ode/inhomogeneous/undetermined.nb)
Find the solution of each one of the following initial value problems.
1. $y'' - 2y' + y = t e^t + 4$, $y(0) = 1$, $y'(0) = 1$
2. $y'' + 2y' + 5y = 4 e^{-t} \cos(2t)$, $y(0) = 1$, $y'(0) = 0$
Hint, Solution

Variation of Parameters

Exercise 23.3 (mathematica/ode/inhomogeneous/variation.nb)
Use the method of variation of parameters to find a particular solution of the given differential equation.
1. $y'' - 5y' + 6y = 2 e^t$
2. $y'' + y = \tan(t)$, $0 < t < \pi/2$
3. $y'' - 5y' + 6y = g(t)$, for a given function $g$.
Hint, Solution

Exercise 23.4 (mathematica/ode/inhomogeneous/variation.nb)
Solve
$$ y''(x) + y(x) = x, \qquad y(0) = 1, \quad y'(0) = 0. $$
Hint, Solution
Exercise 23.5 (mathematica/ode/inhomogeneous/variation.nb)
Solve
$$ x^2 y''(x) - x y'(x) + y(x) = x. $$
Hint, Solution

Exercise 23.6 (mathematica/ode/inhomogeneous/variation.nb)
1. Find the general solution of $y'' + y = e^x$.
2. Solve $y'' + \lambda^2 y = \sin x$, $y(0) = y'(0) = 0$. $\lambda$ is an arbitrary real constant. Is there anything special about $\lambda = 1$?
Hint, Solution

Exercise 23.7 (mathematica/ode/inhomogeneous/variation.nb)
Consider the problem of solving the initial value problem
$$ y'' + y = g(t), \qquad y(0) = 0, \quad y'(0) = 0. $$
1. Show that the general solution of $y'' + y = g(t)$ is
$$ y(t) = \left( c_1 - \int_a^t g(\tau) \sin \tau \, d\tau \right) \cos t + \left( c_2 + \int_b^t g(\tau) \cos \tau \, d\tau \right) \sin t, $$
where $c_1$ and $c_2$ are arbitrary constants and $a$ and $b$ are any conveniently chosen points.
2. Using the result of part (a) show that the solution satisfying the initial conditions $y(0) = 0$ and $y'(0) = 0$ is given by
$$ y(t) = \int_0^t g(\tau) \sin(t - \tau) \, d\tau. $$
Notice that this equation gives a formula for computing the solution of the original initial value problem for any given inhomogeneous term $g(t)$. The integral is referred to as the convolution of $g(t)$ with $\sin t$.
3. Use the result of part (b) to solve the initial value problem,
$$ y'' + y = \sin(\lambda t), \qquad y(0) = 0, \quad y'(0) = 0, $$
where $\lambda$ is a real constant. How does the solution for $\lambda = 1$ differ from that for $\lambda \neq 1$? The $\lambda = 1$ case provides an example of resonant forcing. Plot the solution for resonant and non-resonant forcing.
Hint, Solution
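The convolution formula in Exercise 23.7 part (b) is easy to test numerically without giving away the exercise; this Python sketch (my addition, not from the text) uses the forcing $g(t) = t$, for which the convolution evaluates to $y = t - \sin t$, a function that indeed satisfies $y'' + y = t$ with zero initial data.

```python
import math

def conv_solution(t, g, n=4000):
    # y(t) = integral_0^t g(tau) sin(t - tau) dtau  (trapezoidal rule)
    h = t / n
    s = 0.5 * (g(0.0) * math.sin(t) + g(t) * math.sin(0.0))
    for k in range(1, n):
        tau = k * h
        s += g(tau) * math.sin(t - tau)
    return s * h

# For g(t) = t, the convolution gives y = t - sin t,
# which satisfies y'' + y = t, y(0) = y'(0) = 0.
for t in (0.5, 1.5, 3.0):
    assert abs(conv_solution(t, lambda u: u) - (t - math.sin(t))) < 1e-6
```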
Exercise 23.8
Find the variation of parameters solution for the third order differential equation
$$ y''' + p_2(x) y'' + p_1(x) y' + p_0(x) y = f(x). $$
Hint, Solution

Green Functions

Exercise 23.9
Use a Green function to solve
$$ y'' = f(x), \qquad y(-\infty) = y'(-\infty) = 0. $$
Verify that the solution satisfies the differential equation.
Hint, Solution
Exercise 23.10
Solve the initial value problem
$$ y'' + \frac{1}{x} y' - \frac{1}{x^2} y = x^2, \qquad y(0) = 0, \quad y'(0) = 1. $$
First use variation of parameters, and then solve the problem with a Green function.
Hint, Solution

Exercise 23.11
What are the continuity conditions at $x = \xi$ for the Green function for the problem
$$ y''' + p_2(x) y'' + p_1(x) y' + p_0(x) y = f(x). $$
Hint, Solution

Exercise 23.12
Use variation of parameters and Green functions to solve
$$ x^2 y'' - 2x y' + 2y = e^x, \qquad y(1) = 0, \quad y'(1) = 1. $$
Hint, Solution

Exercise 23.13
Find the Green function for
$$ y'' - y = f(x), \qquad y'(0) = y(1) = 0. $$
Hint, Solution

Exercise 23.14
Find the Green function for
$$ y'' - y = f(x), \qquad y(0) = y(\infty) = 0. $$
Hint, Solution
Exercise 23.15
Find the Green function for each of the following:
a) $x u'' + u' = f(x)$, $u(0^+)$ bounded, $u(1) = 0$.
b) $u'' - u = f(x)$, $u(-a) = u(a) = 0$.
c) $u'' - u = f(x)$, $u(x)$ bounded as $|x| \to \infty$.
d) Show that the Green function for (b) approaches that for (c) as $a \to \infty$.
Hint, Solution

Exercise 23.16
1. For what values of $\lambda$ does the problem
$$ y'' + \lambda y = f(x), \qquad y(0) = y(\pi) = 0, \tag{23.5} $$
have a unique solution? Find the Green functions for these cases.
2. For what values of $\alpha$ does the problem
$$ y'' + 9y = 1 + \alpha x, \qquad y(0) = y(\pi) = 0, $$
have a solution? Find the solution.
3. For $\lambda = n^2$, $n \in \mathbb{Z}^+$, state in general the conditions on $f$ in Equation 23.5 so that a solution will exist. What is the appropriate modified Green function (in terms of eigenfunctions)?
Hint, Solution
Exercise 23.17
Show that the inhomogeneous boundary value problem
$$ L u \equiv (p u')' + q u = f(x), \quad a < x < b, \qquad u(a) = \alpha, \quad u(b) = \beta, $$
has the solution
$$ u(x) = \int_a^b g(x;\xi) f(\xi) \, d\xi - \alpha \, p(a) \, g_\xi(x;a) + \beta \, p(b) \, g_\xi(x;b). $$
Hint, Solution

Exercise 23.18
The Green function for
$$ u'' - k^2 u = f(x), \qquad -\infty < x < \infty, $$
subject to $|u(\pm\infty)| < \infty$ is
$$ G(x;\xi) = -\frac{1}{2k} e^{-k|x-\xi|}. $$
(We assume that $k > 0$.) Use the image method to find the Green function for the same equation on the semi-infinite interval $0 < x < \infty$ satisfying the boundary conditions,
i) $u(0) = 0$, $|u(\infty)| < \infty$,
ii) $u'(0) = 0$, $|u(\infty)| < \infty$.
Express these results in simplified forms without absolute values.
Hint, Solution

Exercise 23.19
1. Determine the Green function for solving:
$$ y'' - a^2 y = f(x), \qquad y(0) = y'(L) = 0. $$
2. Take the limit as $L \to \infty$ to find the Green function on $(0, \infty)$ for the boundary conditions: $y(0) = 0$, $y'(\infty) = 0$. We assume here that $a > 0$. Use the limiting Green function to solve:
$$ y'' - a^2 y = e^{-x}, \qquad y(0) = 0, \quad y'(\infty) = 0. $$
Check that your solution satisfies all the conditions of the problem.
Hint, Solution
23.11 Hints

Undetermined Coefficients

Hint 23.1

Hint 23.2

Variation of Parameters

Hint 23.3

Hint 23.4

Hint 23.5

Hint 23.6

Hint 23.7

Hint 23.8
Look for a particular solution of the form
$$ y_p = u_1 y_1 + u_2 y_2 + u_3 y_3, $$
where the $y_j$'s are homogeneous solutions. Impose the constraints
$$ u_1' y_1 + u_2' y_2 + u_3' y_3 = 0 $$
$$ u_1' y_1' + u_2' y_2' + u_3' y_3' = 0. $$
To avoid some messy algebra when solving for $u_j'$, use Cramer's rule.

Green Functions

Hint 23.9

Hint 23.10

Hint 23.11

Hint 23.12

Hint 23.13
$\cosh(x)$ and $\sinh(x-1)$ are homogeneous solutions that satisfy the left and right boundary conditions, respectively.

Hint 23.14
$\sinh(x)$ and $e^{-x}$ are homogeneous solutions that satisfy the left and right boundary conditions, respectively.

Hint 23.15
The Green function for the differential equation
$$ L[y] \equiv \frac{d}{dx}\left( p(x) y' \right) + q(x) y = f(x), $$
subject to unmixed, homogeneous boundary conditions is
$$ G(x|\xi) = \frac{y_1(x_<) \, y_2(x_>)}{p(\xi) W(\xi)}, $$
$$ G(x|\xi) = \begin{cases} \dfrac{y_1(x) \, y_2(\xi)}{p(\xi) W(\xi)} & \text{for } a \le x \le \xi, \\[1.5ex] \dfrac{y_1(\xi) \, y_2(x)}{p(\xi) W(\xi)} & \text{for } \xi \le x \le b, \end{cases} $$
where $y_1$ and $y_2$ are homogeneous solutions that satisfy the left and right boundary conditions, respectively.
Recall that if $y(x)$ is a solution of a homogeneous, constant coefficient differential equation then $y(x + c)$ is also a solution.

Hint 23.16
The problem has a Green function if and only if the inhomogeneous problem has a unique solution. The inhomogeneous problem has a unique solution if and only if the homogeneous problem has only the trivial solution.

Hint 23.17
Show that $g_\xi(x;a)$ and $g_\xi(x;b)$ are solutions of the homogeneous differential equation. Determine the value of these solutions at the boundary.

Hint 23.18

Hint 23.19
23.12 Solutions

Undetermined Coefficients

Solution 23.1
1. We consider
$$ y'' + 2y' + 5y = 3 \sin(2t). $$
We first find the homogeneous solution with the substitution $y = e^{\lambda t}$:
$$ \lambda^2 + 2\lambda + 5 = 0 $$
$$ \lambda = -1 \pm 2i. $$
The homogeneous solution is
$$ y_h = c_1 e^{-t} \cos(2t) + c_2 e^{-t} \sin(2t). $$
We guess a particular solution of the form
$$ y_p = a \cos(2t) + b \sin(2t). $$
We substitute this into the differential equation to determine the coefficients:
$$ y_p'' + 2y_p' + 5y_p = 3 \sin(2t) $$
$$ -4a \cos(2t) - 4b \sin(2t) - 4a \sin(2t) + 4b \cos(2t) + 5a \cos(2t) + 5b \sin(2t) = 3 \sin(2t) $$
$$ (a + 4b) \cos(2t) + (-3 - 4a + b) \sin(2t) = 0 $$
$$ a + 4b = 0, \qquad -4a + b = 3 $$
$$ a = -\frac{12}{17}, \qquad b = \frac{3}{17}. $$
A particular solution is
$$ y_p = \frac{3}{17} \left( \sin(2t) - 4 \cos(2t) \right). $$
The general solution of the differential equation is
$$ y = c_1 e^{-t} \cos(2t) + c_2 e^{-t} \sin(2t) + \frac{3}{17} \left( \sin(2t) - 4 \cos(2t) \right). $$
2. We consider
$$ 2y'' + 3y' + y = t^2 + 3 \sin(t). $$
We first find the homogeneous solution with the substitution $y = e^{\lambda t}$:
$$ 2\lambda^2 + 3\lambda + 1 = 0 $$
$$ \lambda = -1, \ -1/2. $$
The homogeneous solution is
$$ y_h = c_1 e^{-t} + c_2 e^{-t/2}. $$
We guess a particular solution of the form
$$ y_p = a t^2 + b t + c + d \cos(t) + e \sin(t). $$
We substitute this into the differential equation to determine the coefficients:
$$ 2y_p'' + 3y_p' + y_p = t^2 + 3 \sin(t) $$
$$ 2(2a - d \cos(t) - e \sin(t)) + 3(2at + b - d \sin(t) + e \cos(t)) + a t^2 + b t + c + d \cos(t) + e \sin(t) = t^2 + 3 \sin(t) $$
$$ (a - 1) t^2 + (6a + b) t + (4a + 3b + c) + (-d + 3e) \cos(t) - (3 + 3d + e) \sin(t) = 0 $$
$$ a - 1 = 0, \quad 6a + b = 0, \quad 4a + 3b + c = 0, \quad -d + 3e = 0, \quad 3 + 3d + e = 0 $$
$$ a = 1, \quad b = -6, \quad c = 14, \quad d = -\frac{9}{10}, \quad e = -\frac{3}{10}. $$
A particular solution is
$$ y_p = t^2 - 6t + 14 - \frac{3}{10} \left( 3 \cos(t) + \sin(t) \right). $$
The general solution of the differential equation is
$$ y = c_1 e^{-t} + c_2 e^{-t/2} + t^2 - 6t + 14 - \frac{3}{10} \left( 3 \cos(t) + \sin(t) \right). $$
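A Python residual check of both particular solutions in Solution 23.1 (my addition, not from the text), using central and second differences for the derivatives.

```python
import math

def residual1(t, h=1e-4):
    # part 1: y_p = (3/17)(sin 2t - 4 cos 2t) in y'' + 2y' + 5y - 3 sin 2t
    y = lambda s: 3.0 / 17.0 * (math.sin(2*s) - 4*math.cos(2*s))
    yp = (y(t + h) - y(t - h)) / (2*h)
    ypp = (y(t + h) - 2*y(t) + y(t - h)) / h**2
    return ypp + 2*yp + 5*y(t) - 3*math.sin(2*t)

def residual2(t, h=1e-4):
    # part 2: y_p = t^2 - 6t + 14 - (3/10)(3 cos t + sin t)
    #         in 2y'' + 3y' + y - (t^2 + 3 sin t)
    y = lambda s: s*s - 6*s + 14 - 0.3*(3*math.cos(s) + math.sin(s))
    yp = (y(t + h) - y(t - h)) / (2*h)
    ypp = (y(t + h) - 2*y(t) + y(t - h)) / h**2
    return 2*ypp + 3*yp + y(t) - (t*t + 3*math.sin(t))

for t in (0.0, 1.0, 2.7):
    assert abs(residual1(t)) < 1e-5
    assert abs(residual2(t)) < 1e-5
```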
Solution 23.2
1. We consider the problem
$$ y'' - 2y' + y = t e^t + 4, \qquad y(0) = 1, \quad y'(0) = 1. $$
First we solve the homogeneous equation with the substitution $y = e^{\lambda t}$:
$$ \lambda^2 - 2\lambda + 1 = 0 $$
$$ (\lambda - 1)^2 = 0 $$
$$ \lambda = 1. $$
The homogeneous solution is
$$ y_h = c_1 e^t + c_2 t e^t. $$
We guess a particular solution of the form
$$ y_p = a t^3 e^t + b t^2 e^t + 4. $$
We substitute this into the inhomogeneous differential equation to determine the coefficients:
$$ y_p'' - 2y_p' + y_p = t e^t + 4 $$
$$ \left( a(t^3 + 6t^2 + 6t) + b(t^2 + 4t + 2) \right) e^t - 2 \left( a(t^3 + 3t^2) + b(t^2 + 2t) \right) e^t + a t^3 e^t + b t^2 e^t + 4 = t e^t + 4 $$
$$ (6a - 1) t + 2b = 0 $$
$$ 6a - 1 = 0, \qquad 2b = 0 $$
$$ a = \frac{1}{6}, \qquad b = 0. $$
A particular solution is
$$ y_p = \frac{t^3}{6} e^t + 4. $$
The general solution of the differential equation is
$$ y = c_1 e^t + c_2 t e^t + \frac{t^3}{6} e^t + 4. $$
We use the initial conditions to determine the constants of integration:
$$ y(0) = 1, \qquad y'(0) = 1 $$
$$ c_1 + 4 = 1, \qquad c_1 + c_2 = 1 $$
$$ c_1 = -3, \qquad c_2 = 4. $$
The solution of the initial value problem is
$$ y = \left( \frac{t^3}{6} + 4t - 3 \right) e^t + 4. $$
2. We consider the problem
$$ y'' + 2y' + 5y = 4 e^{-t} \cos(2t), \qquad y(0) = 1, \quad y'(0) = 0. $$
First we solve the homogeneous equation with the substitution $y = e^{\lambda t}$:
$$ \lambda^2 + 2\lambda + 5 = 0 $$
$$ \lambda = -1 \pm \sqrt{1 - 5} $$
$$ \lambda = -1 \pm i 2. $$
The homogeneous solution is
$$ y_h = c_1 e^{-t} \cos(2t) + c_2 e^{-t} \sin(2t). $$
We guess a particular solution of the form
$$ y_p = t e^{-t} \left( a \cos(2t) + b \sin(2t) \right). $$
We substitute this into the inhomogeneous differential equation to determine the coefficients:
$$ y_p'' + 2y_p' + 5y_p = 4 e^{-t} \cos(2t) $$
$$ e^{-t} \left( (-(2 + 3t)a + 4(1 - t)b) \cos(2t) + (4(t - 1)a - (2 + 3t)b) \sin(2t) \right) $$
$$ + 2 e^{-t} \left( ((1 - t)a + 2tb) \cos(2t) + (-2ta + (1 - t)b) \sin(2t) \right) $$
$$ + 5 e^{-t} \left( ta \cos(2t) + tb \sin(2t) \right) = 4 e^{-t} \cos(2t) $$
$$ 4(b - 1) \cos(2t) - 4a \sin(2t) = 0 $$
$$ a = 0, \qquad b = 1. $$
A particular solution is
$$ y_p = t e^{-t} \sin(2t). $$
The general solution of the differential equation is
$$ y = c_1 e^{-t} \cos(2t) + c_2 e^{-t} \sin(2t) + t e^{-t} \sin(2t). $$
We use the initial conditions to determine the constants of integration:
$$ y(0) = 1, \qquad y'(0) = 0 $$
$$ c_1 = 1, \qquad -c_1 + 2c_2 = 0 $$
$$ c_1 = 1, \qquad c_2 = \frac{1}{2}. $$
The solution of the initial value problem is
$$ y = \frac{1}{2} e^{-t} \left( 2 \cos(2t) + (2t + 1) \sin(2t) \right). $$
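A Python check (my addition, not from the text) that both final answers of Solution 23.2 satisfy their initial conditions and differential equations, again using finite differences.

```python
import math

# Part 1: y = (t^3/6 + 4t - 3) e^t + 4
y1 = lambda t: (t**3 / 6 + 4*t - 3) * math.exp(t) + 4
# Part 2: y = (1/2) e^{-t} (2 cos 2t + (2t + 1) sin 2t)
y2 = lambda t: 0.5 * math.exp(-t) * (2*math.cos(2*t) + (2*t + 1)*math.sin(2*t))

h = 1e-4
d  = lambda f, t: (f(t + h) - f(t - h)) / (2 * h)
dd = lambda f, t: (f(t + h) - 2 * f(t) + f(t - h)) / h**2

# initial conditions
assert abs(y1(0.0) - 1.0) < 1e-12 and abs(d(y1, 0.0) - 1.0) < 1e-6
assert abs(y2(0.0) - 1.0) < 1e-12 and abs(d(y2, 0.0)) < 1e-6

# differential equations
for t in (0.5, 1.3):
    assert abs(dd(y1, t) - 2*d(y1, t) + y1(t) - (t*math.exp(t) + 4)) < 1e-4
    assert abs(dd(y2, t) + 2*d(y2, t) + 5*y2(t) - 4*math.exp(-t)*math.cos(2*t)) < 1e-4
```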
Variation of Parameters

Solution 23.3
1. We consider the equation
$$ y'' - 5y' + 6y = 2 e^t. $$
We find homogeneous solutions with the substitution $y = e^{\lambda t}$:
$$ \lambda^2 - 5\lambda + 6 = 0 $$
$$ \lambda = 2, \ 3. $$
The homogeneous solutions are
$$ y_1 = e^{2t}, \qquad y_2 = e^{3t}. $$
We compute the Wronskian of these solutions:
$$ W(t) = \begin{vmatrix} e^{2t} & e^{3t} \\ 2 e^{2t} & 3 e^{3t} \end{vmatrix} = e^{5t}. $$
We find a particular solution with variation of parameters:
$$ y_p = -e^{2t} \int \frac{2 e^t e^{3t}}{e^{5t}} \, dt + e^{3t} \int \frac{2 e^t e^{2t}}{e^{5t}} \, dt $$
$$ = -2 e^{2t} \int e^{-t} \, dt + 2 e^{3t} \int e^{-2t} \, dt $$
$$ = 2 e^t - e^t $$
$$ y_p = e^t. $$
2. We consider the equation
$$ y'' + y = \tan(t), \qquad 0 < t < \frac{\pi}{2}. $$
We find homogeneous solutions with the substitution $y = e^{\lambda t}$:
$$ \lambda^2 + 1 = 0 $$
$$ \lambda = \pm i. $$
The homogeneous solutions are
$$ y_1 = \cos(t), \qquad y_2 = \sin(t). $$
We compute the Wronskian of these solutions:
$$ W(t) = \begin{vmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{vmatrix} = \cos^2(t) + \sin^2(t) = 1. $$
We find a particular solution with variation of parameters:
$$ y_p = -\cos(t) \int \tan(t) \sin(t) \, dt + \sin(t) \int \tan(t) \cos(t) \, dt $$
$$ = -\cos(t) \int \frac{\sin^2(t)}{\cos(t)} \, dt + \sin(t) \int \sin(t) \, dt $$
$$ = \cos(t) \left( \ln\left( \frac{\cos(t/2) - \sin(t/2)}{\cos(t/2) + \sin(t/2)} \right) + \sin(t) \right) - \sin(t) \cos(t) $$
$$ y_p = \cos(t) \ln\left( \frac{\cos(t/2) - \sin(t/2)}{\cos(t/2) + \sin(t/2)} \right). $$
3. We consider the equation
$$ y'' - 5y' + 6y = g(t). $$
The homogeneous solutions are
$$ y_1 = e^{2t}, \qquad y_2 = e^{3t}. $$
The Wronskian of these solutions is $W(t) = e^{5t}$. We find a particular solution with variation of parameters:
$$ y_p = -e^{2t} \int \frac{g(t) e^{3t}}{e^{5t}} \, dt + e^{3t} \int \frac{g(t) e^{2t}}{e^{5t}} \, dt $$
$$ y_p = -e^{2t} \int g(t) e^{-2t} \, dt + e^{3t} \int g(t) e^{-3t} \, dt. $$
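A Python check (my addition, not from the text) of the particular solutions found in Solution 23.3 parts 1 and 2, verifying the differential equations by finite differences.

```python
import math

h = 1e-4
d  = lambda f, t: (f(t + h) - f(t - h)) / (2 * h)
dd = lambda f, t: (f(t + h) - 2 * f(t) + f(t - h)) / h**2

# Part 1: y_p = e^t solves y'' - 5y' + 6y = 2 e^t.
yp1 = math.exp
for t in (0.0, 0.8):
    assert abs(dd(yp1, t) - 5*d(yp1, t) + 6*yp1(t) - 2*math.exp(t)) < 1e-4

# Part 2: y_p = cos t * ln((cos(t/2) - sin(t/2)) / (cos(t/2) + sin(t/2)))
# solves y'' + y = tan t on 0 < t < pi/2.
def yp2(t):
    return math.cos(t) * math.log((math.cos(t/2) - math.sin(t/2)) /
                                  (math.cos(t/2) + math.sin(t/2)))

for t in (0.4, 1.0):
    assert abs(dd(yp2, t) + yp2(t) - math.tan(t)) < 1e-4
```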
Solution 23.4
Solve
$$ y''(x) + y(x) = x, \qquad y(0) = 1, \quad y'(0) = 0. $$
The solutions of the homogeneous equation are
$$ y_1(x) = \cos x, \qquad y_2(x) = \sin x. $$
The Wronskian of these solutions is
$$ W[\cos x, \sin x] = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = \cos^2 x + \sin^2 x = 1. $$
The variation of parameters solution for the particular solution is
$$ y_p = -\cos x \int x \sin x \, dx + \sin x \int x \cos x \, dx $$
$$ = -\cos x \left( -x \cos x + \int \cos x \, dx \right) + \sin x \left( x \sin x - \int \sin x \, dx \right) $$
$$ = -\cos x \left( -x \cos x + \sin x \right) + \sin x \left( x \sin x + \cos x \right) $$
$$ = x \cos^2 x - \cos x \sin x + x \sin^2 x + \cos x \sin x $$
$$ = x. $$
The general solution of the differential equation is thus
$$ y = c_1 \cos x + c_2 \sin x + x. $$
Applying the two initial conditions gives us the equations
$$ c_1 = 1, \qquad c_2 + 1 = 0. $$
The solution subject to the initial conditions is
$$ y = \cos x - \sin x + x. $$
Solution 23.5
Solve
\[ x^2 y''(x) - x y'(x) + y(x) = x. \]
The homogeneous equation is
\[ x^2 y''(x) - x y'(x) + y(x) = 0. \]
Substituting $y = x^\lambda$ into the homogeneous differential equation yields
\begin{align*}
x^2 \lambda(\lambda - 1) x^{\lambda - 2} - x \lambda x^{\lambda - 1} + x^\lambda &= 0 \\
\lambda^2 - 2\lambda + 1 &= 0 \\
(\lambda - 1)^2 &= 0 \\
\lambda &= 1.
\end{align*}
The homogeneous solutions are
\[ y_1 = x, \quad y_2 = x \log x. \]
The Wronskian of the homogeneous solutions is
\[ W[x, x\log x] = \begin{vmatrix} x & x \log x \\ 1 & 1 + \log x \end{vmatrix} = x + x \log x - x \log x = x. \]
Writing the inhomogeneous equation in the standard form:
\[ y''(x) - \frac{1}{x} y'(x) + \frac{1}{x^2} y(x) = \frac{1}{x}. \]
Using variation of parameters to find the particular solution,
\begin{align*}
y_p &= -x \int \frac{\log x}{x} \, dx + x \log x \int \frac{1}{x} \, dx \\
&= -x \, \frac{1}{2} \log^2 x + x \log x \, \log x \\
&= \frac{1}{2} x \log^2 x.
\end{align*}
Thus the general solution of the inhomogeneous differential equation is
\[ y = c_1 x + c_2 x \log x + \frac{1}{2} x \log^2 x. \]
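Not part of the original text: the particular solution just found can be verified numerically against the Euler equation; the names below are my own.

```python
import math

def yp(x):
    # Particular solution (1/2) x log^2 x from the text.
    return 0.5 * x * math.log(x) ** 2

def euler_residual(x, h=1e-5):
    # Residual of x^2 y'' - x y' + y - x, via central differences.
    ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    yprime = (yp(x + h) - yp(x - h)) / (2 * h)
    return x**2 * ypp - x * yprime + yp(x) - x
```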
Solution 23.6
1. First we find the homogeneous solutions. We substitute $y = e^{\lambda x}$ into the homogeneous differential equation.
\begin{align*}
y'' + y &= 0 \\
\lambda^2 + 1 &= 0 \\
\lambda &= \pm i \\
y &= \left\{ e^{i x}, e^{-i x} \right\}
\end{align*}
We can also write the solutions in terms of real-valued functions.
\[ y = \{ \cos x, \sin x \} \]
The Wronskian of the homogeneous solutions is
\[ W[\cos x, \sin x] = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = \cos^2 x + \sin^2 x = 1. \]
We obtain a particular solution with the variation of parameters formula.
\[ y_p = -\cos x \int e^x \sin x \, dx + \sin x \int e^x \cos x \, dx \]
\[ y_p = -\cos x \, \frac{1}{2} e^x (\sin x - \cos x) + \sin x \, \frac{1}{2} e^x (\sin x + \cos x) \]
\[ y_p = \frac{1}{2} e^x \]
The general solution is the particular solution plus a linear combination of the homogeneous solutions.
\[ y = \frac{1}{2} e^x + c_1 \cos x + c_2 \sin x \]
2.
\[ y'' + \omega^2 y = \sin x, \quad y(0) = y'(0) = 0 \]
Assume that $\omega$ is positive. First we find the homogeneous solutions by substituting $y = e^{\lambda x}$ into the homogeneous differential equation.
\begin{align*}
y'' + \omega^2 y &= 0 \\
\lambda^2 + \omega^2 &= 0 \\
\lambda &= \pm i\omega \\
y &= \left\{ e^{i\omega x}, e^{-i\omega x} \right\} \\
y &= \{ \cos(\omega x), \sin(\omega x) \}
\end{align*}
The Wronskian of these homogeneous solutions is
\[ W[\cos(\omega x), \sin(\omega x)] = \begin{vmatrix} \cos(\omega x) & \sin(\omega x) \\ -\omega\sin(\omega x) & \omega\cos(\omega x) \end{vmatrix} = \omega\cos^2(\omega x) + \omega\sin^2(\omega x) = \omega. \]
We obtain a particular solution with the variation of parameters formula.
\[ y_p = -\cos(\omega x) \int \frac{\sin(\omega x)\sin x}{\omega} \, dx + \sin(\omega x) \int \frac{\cos(\omega x)\sin x}{\omega} \, dx \]
We evaluate the integrals for $\omega \neq 1$.
\[ y_p = -\cos(\omega x)\, \frac{\sin(\omega x)\cos x - \omega\cos(\omega x)\sin x}{\omega(\omega^2 - 1)} + \sin(\omega x)\, \frac{\omega\sin(\omega x)\sin x + \cos(\omega x)\cos x}{\omega(\omega^2 - 1)} \]
\[ y_p = \frac{\sin x}{\omega^2 - 1} \]
The general solution for $\omega \neq 1$ is
\[ y = \frac{\sin x}{\omega^2 - 1} + c_1 \cos(\omega x) + c_2 \sin(\omega x). \]
The initial conditions give us the constraints:
\[ c_1 = 0, \quad \frac{1}{\omega^2 - 1} + c_2 \omega = 0. \]
For $\omega \neq 1$, (non-resonant forcing), the solution subject to the initial conditions is
\[ y = \frac{\omega\sin x - \sin(\omega x)}{\omega(\omega^2 - 1)}. \]
Now consider the case $\omega = 1$. We obtain a particular solution with the variation of parameters formula.
\[ y_p = -\cos(x) \int \sin^2(x) \, dx + \sin(x) \int \cos(x)\sin x \, dx \]
\[ y_p = -\cos(x)\, \frac{1}{2}\left( x - \cos(x)\sin(x) \right) + \sin(x) \left( -\frac{1}{2}\cos^2(x) \right) \]
\[ y_p = -\frac{1}{2} x \cos(x) \]
The general solution for $\omega = 1$ is
\[ y = -\frac{1}{2} x \cos(x) + c_1 \cos(x) + c_2 \sin(x). \]
The initial conditions give us the constraints:
\begin{align*}
c_1 &= 0 \\
-\frac{1}{2} + c_2 &= 0
\end{align*}
For $\omega = 1$, (resonant forcing), the solution subject to the initial conditions is
\[ y = \frac{1}{2}\left( \sin(x) - x\cos x \right). \]
Solution 23.7
1. A set of linearly independent, homogeneous solutions is $\{\cos t, \sin t\}$. The Wronskian of these solutions is
\[ W(t) = \begin{vmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{vmatrix} = \cos^2 t + \sin^2 t = 1. \]
We use variation of parameters to find a particular solution.
\[ y_p = -\cos t \int g(t)\sin t \, dt + \sin t \int g(t)\cos t \, dt \]
The general solution can be written in the form,
\[ y(t) = \left( c_1 - \int_a^t g(\tau)\sin\tau \, d\tau \right) \cos t + \left( c_2 + \int_b^t g(\tau)\cos\tau \, d\tau \right) \sin t. \]
2. Since the initial conditions are given at $t = 0$ we choose the lower bounds of integration in the general solution to be that point.
\[ y = \left( c_1 - \int_0^t g(\tau)\sin\tau \, d\tau \right) \cos t + \left( c_2 + \int_0^t g(\tau)\cos\tau \, d\tau \right) \sin t \]
The initial condition $y(0) = 0$ gives the constraint, $c_1 = 0$. The derivative of $y(t)$ is then,
\[ y'(t) = -g(t)\sin t\cos t + \int_0^t g(\tau)\sin\tau \, d\tau \, \sin t + g(t)\cos t\sin t + \left( c_2 + \int_0^t g(\tau)\cos\tau \, d\tau \right)\cos t, \]
\[ y'(t) = \int_0^t g(\tau)\sin\tau \, d\tau \, \sin t + \left( c_2 + \int_0^t g(\tau)\cos\tau \, d\tau \right)\cos t. \]
The initial condition $y'(0) = 0$ gives the constraint $c_2 = 0$. The solution subject to the initial conditions is
\[ y = \int_0^t g(\tau)\left( \sin t\cos\tau - \cos t\sin\tau \right) d\tau \]
\[ y = \int_0^t g(\tau)\sin(t - \tau) \, d\tau \]
3. The solution of the initial value problem
\[ y'' + y = \sin(\omega t), \quad y(0) = 0, \; y'(0) = 0, \]
is
\[ y = \int_0^t \sin(\omega\tau)\sin(t - \tau) \, d\tau. \]
For $\omega \neq 1$, this is
\begin{align*}
y &= \frac{1}{2} \int_0^t \left( \cos(t - (1+\omega)\tau) - \cos(t + (\omega - 1)\tau) \right) d\tau \\
&= \frac{1}{2} \left[ -\frac{\sin(t - (1+\omega)\tau)}{1 + \omega} - \frac{\sin(t + (\omega - 1)\tau)}{\omega - 1} \right]_0^t \\
&= \frac{1}{2} \left( \frac{\sin(\omega t) + \sin t}{1 + \omega} - \frac{\sin(\omega t) - \sin t}{\omega - 1} \right)
\end{align*}
\[ y = \frac{\omega\sin t - \sin(\omega t)}{\omega^2 - 1}. \tag{23.6} \]
The solution is the sum of two periodic functions of period $2\pi$ and $2\pi/\omega$. This solution is plotted in Figure 23.5 on the interval $t \in [0, 16\pi]$ for the values $\omega = 1/4, 7/8, 5/2$.
Figure 23.5: Non-resonant Forcing
For $\omega = 1$, we have
\begin{align*}
y &= \frac{1}{2} \int_0^t \left( \cos(t - 2\tau) - \cos(t) \right) d\tau \\
&= \frac{1}{2} \left[ -\frac{1}{2}\sin(t - 2\tau) - \tau\cos t \right]_0^t
\end{align*}
\[ y = \frac{1}{2}\left( \sin t - t\cos t \right). \tag{23.7} \]
The solution has both a periodic term and a term whose amplitude grows linearly in $t$. This solution is plotted in Figure 23.6 on the interval $t \in [0, 16\pi]$.
Figure 23.6: Resonant Forcing
Note that we can derive (23.7) from (23.6) by taking the limit as $\omega \to 1$.
\[ \lim_{\omega \to 1} \frac{\omega\sin t - \sin(\omega t)}{\omega^2 - 1} = \lim_{\omega \to 1} \frac{\sin t - t\cos(\omega t)}{2\omega} = \frac{1}{2}\left( \sin t - t\cos t \right) \]
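The derivation above can be spot-checked numerically; this sketch (not part of the original text, names my own) verifies that (23.6) satisfies the differential equation and that it approaches the resonant solution (23.7) as $\omega \to 1$.

```python
import math

def y_nonres(t, w):
    # Solution (23.6) for non-resonant forcing, w != 1.
    return (w * math.sin(t) - math.sin(w * t)) / (w**2 - 1)

def y_res(t):
    # Solution (23.7) for resonant forcing, w = 1.
    return 0.5 * (math.sin(t) - t * math.cos(t))

def residual(t, w, h=1e-4):
    # y'' + y - sin(w t), estimated with central differences.
    ypp = (y_nonres(t + h, w) - 2 * y_nonres(t, w) + y_nonres(t - h, w)) / h**2
    return ypp + y_nonres(t, w) - math.sin(w * t)
```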
Solution 23.8
Let $y_1$, $y_2$ and $y_3$ be linearly independent homogeneous solutions to the differential equation
\[ L[y] = y''' + p_2 y'' + p_1 y' + p_0 y = f(x). \]
We will look for a particular solution of the form
\[ y_p = u_1 y_1 + u_2 y_2 + u_3 y_3. \]
Since the $u_j$'s are undetermined functions, we are free to impose two constraints. We choose the constraints to simplify the algebra.
\begin{align*}
u_1' y_1 + u_2' y_2 + u_3' y_3 &= 0 \\
u_1' y_1' + u_2' y_2' + u_3' y_3' &= 0
\end{align*}
Differentiating the expression for $y_p$,
\begin{align*}
y_p' &= u_1' y_1 + u_1 y_1' + u_2' y_2 + u_2 y_2' + u_3' y_3 + u_3 y_3' \\
&= u_1 y_1' + u_2 y_2' + u_3 y_3' \\
y_p'' &= u_1' y_1' + u_1 y_1'' + u_2' y_2' + u_2 y_2'' + u_3' y_3' + u_3 y_3'' \\
&= u_1 y_1'' + u_2 y_2'' + u_3 y_3'' \\
y_p''' &= u_1' y_1'' + u_1 y_1''' + u_2' y_2'' + u_2 y_2''' + u_3' y_3'' + u_3 y_3'''
\end{align*}
Substituting the expressions for $y_p$ and its derivatives into the differential equation,
\begin{align*}
u_1' y_1'' + u_1 y_1''' + u_2' y_2'' + u_2 y_2''' + u_3' y_3'' + u_3 y_3''' + p_2\left( u_1 y_1'' + u_2 y_2'' + u_3 y_3'' \right) \qquad & \\
{} + p_1\left( u_1 y_1' + u_2 y_2' + u_3 y_3' \right) + p_0\left( u_1 y_1 + u_2 y_2 + u_3 y_3 \right) &= f(x) \\
u_1' y_1'' + u_2' y_2'' + u_3' y_3'' + u_1 L[y_1] + u_2 L[y_2] + u_3 L[y_3] &= f(x) \\
u_1' y_1'' + u_2' y_2'' + u_3' y_3'' &= f(x).
\end{align*}
With the two constraints, we have the system of equations,
\begin{align*}
u_1' y_1 + u_2' y_2 + u_3' y_3 &= 0 \\
u_1' y_1' + u_2' y_2' + u_3' y_3' &= 0 \\
u_1' y_1'' + u_2' y_2'' + u_3' y_3'' &= f(x)
\end{align*}
We solve for the $u_j'$ using Cramer's rule.
\[ u_1' = \frac{\left( y_2 y_3' - y_2' y_3 \right) f(x)}{W(x)}, \quad u_2' = -\frac{\left( y_1 y_3' - y_1' y_3 \right) f(x)}{W(x)}, \quad u_3' = \frac{\left( y_1 y_2' - y_1' y_2 \right) f(x)}{W(x)} \]
Here $W(x)$ is the Wronskian of $\{y_1, y_2, y_3\}$. Integrating the expressions for $u_j'$, the particular solution is
\[ y_p = y_1 \int \frac{\left( y_2 y_3' - y_2' y_3 \right) f(x)}{W(x)} \, dx + y_2 \int \frac{\left( y_3 y_1' - y_3' y_1 \right) f(x)}{W(x)} \, dx + y_3 \int \frac{\left( y_1 y_2' - y_1' y_2 \right) f(x)}{W(x)} \, dx. \]
Green Functions
Solution 23.9
We consider the Green function problem
\[ G'' = \delta(x - \xi), \quad G(-\infty|\xi) = G'(-\infty|\xi) = 0. \]
The homogeneous solution is $y = c_1 + c_2 x$. The homogeneous solution that satisfies the boundary conditions is $y = 0$. Thus the Green function has the form
\[ G(x|\xi) = \begin{cases} 0 & x < \xi, \\ c_1 + c_2 x & x > \xi. \end{cases} \]
The continuity and jump conditions are then
\[ G(\xi^+|\xi) = 0, \quad G'(\xi^+|\xi) = 1. \]
Thus the Green function is
\[ G(x|\xi) = \begin{cases} 0 & x < \xi, \\ x - \xi & x > \xi \end{cases} = (x - \xi)H(x - \xi). \]
The solution of the problem
\[ y'' = f(x), \quad y(-\infty) = y'(-\infty) = 0 \]
is
\[ y = \int_{-\infty}^{\infty} f(\xi) G(x|\xi) \, d\xi \]
\[ y = \int_{-\infty}^{\infty} f(\xi)(x - \xi)H(x - \xi) \, d\xi \]
\[ y = \int_{-\infty}^{x} f(\xi)(x - \xi) \, d\xi \]
We differentiate this solution to verify that it satisfies the differential equation.
\[ y' = \left[ f(\xi)(x - \xi) \right]_{\xi = x} + \int_{-\infty}^{x} \frac{\partial}{\partial x}\left( f(\xi)(x - \xi) \right) d\xi = \int_{-\infty}^{x} f(\xi) \, d\xi \]
\[ y'' = \left[ f(\xi) \right]_{\xi = x} = f(x) \]
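As an illustration that is not part of the original text, the Green-function solution can be evaluated by quadrature and compared against a known case: with $f(\xi) = e^\xi$ the integral gives $y = e^x$, which indeed satisfies $y'' = e^x$ with vanishing conditions at $-\infty$. The truncation of the lower limit and all names are my own choices.

```python
import math

def green_solution(x, f, lower=-40.0, steps=100000):
    # y(x) = integral_{-inf}^{x} f(xi) (x - xi) d xi, with the lower limit
    # truncated at `lower` (an assumption; fine when f decays to the left).
    h = (x - lower) / steps
    total = 0.0
    for i in range(steps + 1):
        xi = lower + i * h
        w = 0.5 if i in (0, steps) else 1.0  # trapezoid-rule weights
        total += w * f(xi) * (x - xi)
    return total * h
```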
Solution 23.10
Since we are dealing with an Euler equation, we substitute $y = x^\lambda$ to find the homogeneous solutions.
\begin{align*}
\lambda(\lambda - 1) + \lambda - 1 &= 0 \\
(\lambda - 1)(\lambda + 1) &= 0
\end{align*}
\[ y_1 = x, \quad y_2 = \frac{1}{x} \]
Variation of Parameters. The Wronskian of the homogeneous solutions is
\[ W(x) = \begin{vmatrix} x & 1/x \\ 1 & -1/x^2 \end{vmatrix} = -\frac{1}{x} - \frac{1}{x} = -\frac{2}{x}. \]
A particular solution is
\begin{align*}
y_p &= -x \int \frac{x^2 (1/x)}{-2/x} \, dx + \frac{1}{x} \int \frac{x^2 \, x}{-2/x} \, dx \\
&= -x \int \left( -\frac{x^2}{2} \right) dx + \frac{1}{x} \int \left( -\frac{x^4}{2} \right) dx \\
&= \frac{x^4}{6} - \frac{x^4}{10} \\
&= \frac{x^4}{15}.
\end{align*}
The general solution is
\[ y = \frac{x^4}{15} + c_1 x + c_2 \frac{1}{x}. \]
Applying the initial conditions,
\begin{align*}
y(0) = 0 \quad &\Rightarrow \quad c_2 = 0 \\
y'(0) = 1 \quad &\Rightarrow \quad c_1 = 1.
\end{align*}
Thus we have the solution
\[ y = \frac{x^4}{15} + x. \]
Green Function. Since this problem has both an inhomogeneous term in the differential equation and inhomogeneous boundary conditions, we separate it into the two problems
\begin{align*}
u'' + \frac{1}{x}u' - \frac{1}{x^2}u &= x^2, \quad u(0) = u'(0) = 0, \\
v'' + \frac{1}{x}v' - \frac{1}{x^2}v &= 0, \quad v(0) = 0, \; v'(0) = 1.
\end{align*}
First we solve the inhomogeneous differential equation with the homogeneous boundary conditions. The Green function for this problem satisfies
\[ L[G(x|\xi)] = \delta(x - \xi), \quad G(0|\xi) = G'(0|\xi) = 0. \]
Since the Green function must satisfy the homogeneous boundary conditions, it has the form
\[ G(x|\xi) = \begin{cases} 0 & \text{for } x < \xi \\ cx + d/x & \text{for } x > \xi. \end{cases} \]
From the continuity condition,
\[ 0 = c\xi + d/\xi. \]
The jump condition yields
\[ c - d/\xi^2 = 1. \]
Solving these two equations, we obtain
\[ G(x|\xi) = \begin{cases} 0 & \text{for } x < \xi \\ \dfrac{1}{2}x - \dfrac{\xi^2}{2x} & \text{for } x > \xi \end{cases} \]
Thus the solution is
\begin{align*}
u(x) &= \int_0^\infty G(x|\xi)\,\xi^2 \, d\xi \\
&= \int_0^x \left( \frac{1}{2}x - \frac{\xi^2}{2x} \right)\xi^2 \, d\xi \\
&= \frac{1}{6}x^4 - \frac{1}{10}x^4 \\
&= \frac{x^4}{15}.
\end{align*}
Now to solve the homogeneous differential equation with inhomogeneous boundary conditions. The general solution for $v$ is
\[ v = cx + d/x. \]
Applying the two boundary conditions gives
\[ v = x. \]
Thus the solution for $y$ is
\[ y = x + \frac{x^4}{15}. \]
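As a numerical aside (not part of the original text), the solution obtained by both methods can be checked against the standard-form equation; the names below are my own.

```python
def y(x):
    # Solution from the text: y = x + x^4 / 15.
    return x + x**4 / 15

def residual(x, h=1e-5):
    # y'' + y'/x - y/x^2 - x^2, via central differences; should be ~0.
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp + yp / x - y(x) / x**2 - x**2
```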
Solution 23.11
The Green function satisfies
\[ G'''(x|\xi) + p_2(x)G''(x|\xi) + p_1(x)G'(x|\xi) + p_0(x)G(x|\xi) = \delta(x - \xi). \]
First note that only the $G'''(x|\xi)$ term can have a delta function singularity. If a lower derivative had a delta function type singularity, then $G'''(x|\xi)$ would be more singular than a delta function and there would be no other term in the equation to balance that behavior. Thus we see that $G'''(x|\xi)$ will have a delta function singularity; $G''(x|\xi)$ will have a jump discontinuity; $G'(x|\xi)$ will be continuous at $x = \xi$. Integrating the differential equation from $\xi^-$ to $\xi^+$ yields
\begin{align*}
\int_{\xi^-}^{\xi^+} G'''(x|\xi) \, dx &= \int_{\xi^-}^{\xi^+} \delta(x - \xi) \, dx \\
G''(\xi^+|\xi) - G''(\xi^-|\xi) &= 1.
\end{align*}
Thus we have the three continuity conditions:
\begin{align*}
G''(\xi^+|\xi) &= G''(\xi^-|\xi) + 1 \\
G'(\xi^+|\xi) &= G'(\xi^-|\xi) \\
G(\xi^+|\xi) &= G(\xi^-|\xi)
\end{align*}
Solution 23.12
Variation of Parameters. Consider the problem
\[ x^2 y'' - 2xy' + 2y = e^{-x}, \quad y(1) = 0, \; y'(1) = 1. \]
Previously we showed that two homogeneous solutions are
\[ y_1 = x, \quad y_2 = x^2. \]
The Wronskian of these solutions is
\[ W(x) = \begin{vmatrix} x & x^2 \\ 1 & 2x \end{vmatrix} = 2x^2 - x^2 = x^2. \]
In the variation of parameters formula, we will choose 1 as the lower bound of integration. (This will simplify the algebra in applying the initial conditions.)
\begin{align*}
y_p &= -x \int_1^x \frac{e^{-\xi}\,\xi^2}{\xi^4} \, d\xi + x^2 \int_1^x \frac{e^{-\xi}\,\xi}{\xi^4} \, d\xi \\
&= -x \int_1^x \frac{e^{-\xi}}{\xi^2} \, d\xi + x^2 \int_1^x \frac{e^{-\xi}}{\xi^3} \, d\xi \\
&= -x \left( e^{-1} - \frac{e^{-x}}{x} - \int_1^x \frac{e^{-\xi}}{\xi} \, d\xi \right) + x^2 \left( \frac{e^{-x}}{2x} - \frac{e^{-x}}{2x^2} + \frac{1}{2}\int_1^x \frac{e^{-\xi}}{\xi} \, d\xi \right) \\
&= -x e^{-1} + \frac{1}{2}(1 + x)e^{-x} + \left( x + \frac{x^2}{2} \right)\int_1^x \frac{e^{-\xi}}{\xi} \, d\xi
\end{align*}
If you wanted to, you could write the last integral in terms of exponential integral functions.
The general solution is
\[ y = c_1 x + c_2 x^2 - x e^{-1} + \frac{1}{2}(1 + x)e^{-x} + \left( x + \frac{x^2}{2} \right)\int_1^x \frac{e^{-\xi}}{\xi} \, d\xi \]
Applying the boundary conditions,
\begin{align*}
y(1) = 0 \quad &\Rightarrow \quad c_1 + c_2 = 0 \\
y'(1) = 1 \quad &\Rightarrow \quad c_1 + 2c_2 = 1,
\end{align*}
we find that $c_1 = -1$, $c_2 = 1$.
Thus the solution subject to the initial conditions is
\[ y = -\left( 1 + e^{-1} \right)x + x^2 + \frac{1}{2}(1 + x)e^{-x} + \left( x + \frac{x^2}{2} \right)\int_1^x \frac{e^{-\xi}}{\xi} \, d\xi \]
Green Functions. The solution to the problem is $y = u + v$ where
\[ u'' - \frac{2}{x}u' + \frac{2}{x^2}u = \frac{e^{-x}}{x^2}, \quad u(1) = 0, \; u'(1) = 0, \]
and
\[ v'' - \frac{2}{x}v' + \frac{2}{x^2}v = 0, \quad v(1) = 0, \; v'(1) = 1. \]
The problem for $v$ has the solution
\[ v = -x + x^2. \]
The Green function for $u$ is
\[ G(x|\xi) = H(x - \xi)u_\xi(x) \]
where
\[ u_\xi(\xi) = 0, \quad \text{and} \quad u_\xi'(\xi) = 1. \]
Thus the Green function is
\[ G(x|\xi) = H(x - \xi)\left( -x + \frac{x^2}{\xi} \right). \]
The solution for $u$ is then
\begin{align*}
u &= \int_1^\infty G(x|\xi)\frac{e^{-\xi}}{\xi^2} \, d\xi \\
&= \int_1^x \left( -x + \frac{x^2}{\xi} \right)\frac{e^{-\xi}}{\xi^2} \, d\xi \\
&= -x e^{-1} + \frac{1}{2}(1 + x)e^{-x} + \left( x + \frac{x^2}{2} \right)\int_1^x \frac{e^{-\xi}}{\xi} \, d\xi.
\end{align*}
Thus we find the solution for $y$ is
\[ y = -\left( 1 + e^{-1} \right)x + x^2 + \frac{1}{2}(1 + x)e^{-x} + \left( x + \frac{x^2}{2} \right)\int_1^x \frac{e^{-\xi}}{\xi} \, d\xi \]
Solution 23.13
The differential equation for the Green function is
\[ G'' - G = \delta(x - \xi), \quad G_x(0|\xi) = G(1|\xi) = 0. \]
Note that $\cosh(x)$ and $\sinh(x - 1)$ are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The Wronskian of these two solutions is
\begin{align*}
W(x) &= \begin{vmatrix} \cosh(x) & \sinh(x-1) \\ \sinh(x) & \cosh(x-1) \end{vmatrix} \\
&= \cosh(x)\cosh(x-1) - \sinh(x)\sinh(x-1) \\
&= \frac{1}{4}\left( \left( e^x + e^{-x} \right)\left( e^{x-1} + e^{-x+1} \right) - \left( e^x - e^{-x} \right)\left( e^{x-1} - e^{-x+1} \right) \right) \\
&= \frac{1}{2}\left( e^{-1} + e^1 \right) \\
&= \cosh(1).
\end{align*}
The Green function for the problem is then
\[ G(x|\xi) = \frac{\cosh(x_<)\sinh(x_> - 1)}{\cosh(1)}, \]
\[ G(x|\xi) = \begin{cases} \dfrac{\cosh(x)\sinh(\xi - 1)}{\cosh(1)} & \text{for } 0 \le x \le \xi, \\[1ex] \dfrac{\cosh(\xi)\sinh(x - 1)}{\cosh(1)} & \text{for } \xi \le x \le 1. \end{cases} \]
Solution 23.14
The differential equation for the Green function is
\[ G'' - G = \delta(x - \xi), \quad G(0|\xi) = G(\infty|\xi) = 0. \]
Note that $\sinh(x)$ and $e^{-x}$ are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The Wronskian of these two solutions is
\begin{align*}
W(x) &= \begin{vmatrix} \sinh(x) & e^{-x} \\ \cosh(x) & -e^{-x} \end{vmatrix} \\
&= -\sinh(x)e^{-x} - \cosh(x)e^{-x} \\
&= -\frac{1}{2}\left( e^x - e^{-x} \right)e^{-x} - \frac{1}{2}\left( e^x + e^{-x} \right)e^{-x} \\
&= -1
\end{align*}
The Green function for the problem is then
\[ G(x|\xi) = -\sinh(x_<)e^{-x_>} \]
\[ G(x|\xi) = \begin{cases} -\sinh(x)e^{-\xi} & \text{for } 0 \le x \le \xi, \\ -\sinh(\xi)e^{-x} & \text{for } \xi \le x. \end{cases} \]
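Not part of the original text: the defining properties of this Green function — continuity at $x = \xi$, a unit jump in $G_x$, and the left boundary condition — can be checked numerically. Names are my own.

```python
import math

def G(x, xi):
    # Green function for y'' - y = f, y(0) = 0, y bounded as x -> infinity.
    lo, hi = min(x, xi), max(x, xi)
    return -math.sinh(lo) * math.exp(-hi)

def jump(xi, h=1e-5):
    # One-sided difference quotients on each side of x = xi; the jump
    # G_x(xi+) - G_x(xi-) should be 1.
    left = (G(xi - h, xi) - G(xi - 2 * h, xi)) / h
    right = (G(xi + 2 * h, xi) - G(xi + h, xi)) / h
    return right - left
```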
Solution 23.15
a) The Green function problem is
\[ xG''(x|\xi) + G'(x|\xi) = \delta(x - \xi), \quad G(0|\xi) \text{ bounded}, \quad G(1|\xi) = 0. \]
First we find the homogeneous solutions of the differential equation.
\[ xy'' + y' = 0 \]
This is an exact equation.
\begin{align*}
\frac{d}{dx}\left[ xy' \right] &= 0 \\
y' &= \frac{c_1}{x} \\
y &= c_1 \log x + c_2
\end{align*}
The homogeneous solutions $y_1 = 1$ and $y_2 = \log x$ satisfy the left and right boundary conditions, respectively. The Wronskian of these solutions is
\[ W(x) = \begin{vmatrix} 1 & \log x \\ 0 & 1/x \end{vmatrix} = \frac{1}{x}. \]
The Green function is
\[ G(x|\xi) = \frac{1 \cdot \log x_>}{\xi(1/\xi)}, \]
\[ G(x|\xi) = \log x_>. \]
b) The Green function problem is
\[ G''(x|\xi) - G(x|\xi) = \delta(x - \xi), \quad G(-a|\xi) = G(a|\xi) = 0. \]
$\left\{ e^x, e^{-x} \right\}$ and $\{\cosh x, \sinh x\}$ are both linearly independent sets of homogeneous solutions. $\sinh(x + a)$ and $\sinh(x - a)$ are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The Wronskian of these two solutions is,
\[ W(x) = \begin{vmatrix} \sinh(x+a) & \sinh(x-a) \\ \cosh(x+a) & \cosh(x-a) \end{vmatrix} = \sinh(x+a)\cosh(x-a) - \sinh(x-a)\cosh(x+a) = \sinh(2a) \]
The Green function is
\[ G(x|\xi) = \frac{\sinh(x_< + a)\sinh(x_> - a)}{\sinh(2a)}. \]
1014
c) The Green function problem is
G
tt
(x[) G(x[) = (x ), G(x[) bounded as [x[ .
e
x
and e
x
are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The
Wronskian of these solutions is
W(x) =

e
x
e
x
e
x
e
x

= 2.
The Green function is
G(x[) =
e
x<
e
x>
2
,
G(x[) =
1
2
e
x<x>
.
d) The Green function from part (b) is,
G(x[) =
sinh(x
<
+a) sinh(x
>
a)
sinh(2a)
.
We take the limit as a .
lim
a
sinh(x
<
+a) sinh(x
>
a)
sinh(2a)
= lim
a
( e
x<+a
e
x<a
) ( e
x>a
e
x>+a
)
2 ( e
2a
e
2a
)
= lim
a
e
x<x>
+ e
x<+x>2a
+ e
x<x>2a
e
x<+x>4a
2 2 e
4a
=
e
x<x>
2
Thus we see that the solution from part (b) approaches the solution from part (c) as a .
1015
Solution 23.16
1. The problem,
\[ y'' + \lambda y = f(x), \quad y(0) = y(\pi) = 0, \]
has a Green function if and only if it has a unique solution. This inhomogeneous problem has a unique solution if and only if the homogeneous problem has only the trivial solution.
First consider the case $\lambda = 0$. We find the general solution of the homogeneous differential equation.
\[ y = c_1 + c_2 x \]
Only the trivial solution satisfies the boundary conditions. The problem has a unique solution for $\lambda = 0$.
Now consider non-zero $\lambda$. We find the general solution of the homogeneous differential equation.
\[ y = c_1 \cos\left( \sqrt{\lambda}\,x \right) + c_2 \sin\left( \sqrt{\lambda}\,x \right). \]
The solution that satisfies the left boundary condition is
\[ y = c \sin\left( \sqrt{\lambda}\,x \right). \]
We apply the right boundary condition and find nontrivial solutions.
\[ \sin\left( \sqrt{\lambda}\,\pi \right) = 0 \]
\[ \lambda = n^2, \quad n \in \mathbb{Z}^+ \]
Thus the problem has a unique solution for all complex $\lambda$ except $\lambda = n^2$, $n \in \mathbb{Z}^+$.
Consider the case $\lambda = 0$. We find solutions of the homogeneous equation that satisfy the left and right boundary conditions, respectively.
\[ y_1 = x, \quad y_2 = x - \pi. \]
We compute the Wronskian of these functions.
\[ W(x) = \begin{vmatrix} x & x - \pi \\ 1 & 1 \end{vmatrix} = \pi. \]
The Green function for this case is
\[ G(x|\xi) = \frac{x_<\left( x_> - \pi \right)}{\pi}. \]
We consider the case $\lambda \neq n^2$, $\lambda \neq 0$. We find the solutions of the homogeneous equation that satisfy the left and right boundary conditions, respectively.
\[ y_1 = \sin\left( \sqrt{\lambda}\,x \right), \quad y_2 = \sin\left( \sqrt{\lambda}\,(x - \pi) \right). \]
We compute the Wronskian of these functions.
\[ W(x) = \begin{vmatrix} \sin\left( \sqrt{\lambda}\,x \right) & \sin\left( \sqrt{\lambda}\,(x - \pi) \right) \\ \sqrt{\lambda}\cos\left( \sqrt{\lambda}\,x \right) & \sqrt{\lambda}\cos\left( \sqrt{\lambda}\,(x - \pi) \right) \end{vmatrix} = \sqrt{\lambda}\sin\left( \sqrt{\lambda}\,\pi \right) \]
The Green function for this case is
\[ G(x|\xi) = \frac{\sin\left( \sqrt{\lambda}\,x_< \right)\sin\left( \sqrt{\lambda}\,(x_> - \pi) \right)}{\sqrt{\lambda}\sin\left( \sqrt{\lambda}\,\pi \right)}. \]
2. Now we consider the problem
\[ y'' + 9y = 1 + \alpha x, \quad y(0) = y(\pi) = 0. \]
The homogeneous solutions of the problem are constant multiples of $\sin(3x)$. Thus for each value of $\alpha$, the problem either has no solution or an infinite number of solutions. There will be an infinite number of solutions if the inhomogeneity $1 + \alpha x$ is orthogonal to the homogeneous solution $\sin(3x)$ and no solution otherwise.
\[ \int_0^\pi (1 + \alpha x)\sin(3x) \, dx = \frac{\pi\alpha + 2}{3} \]
The problem has a solution only for $\alpha = -2/\pi$. For this case the general solution of the inhomogeneous differential equation is
\[ y = \frac{1}{9}\left( 1 - \frac{2x}{\pi} \right) + c_1 \cos(3x) + c_2 \sin(3x). \]
The one-parameter family of solutions that satisfies the boundary conditions is
\[ y = \frac{1}{9}\left( 1 - \frac{2x}{\pi} - \cos(3x) \right) + c\sin(3x). \]
3. For $\lambda = n^2$, $n \in \mathbb{Z}^+$, $y = \sin(nx)$ is a solution of the homogeneous equation that satisfies the boundary conditions. Equation 23.5 has a (non-unique) solution only if $f$ is orthogonal to $\sin(nx)$.
\[ \int_0^\pi f(x)\sin(nx) \, dx = 0 \]
The modified Green function satisfies
\[ G'' + n^2 G = \delta(x - \xi) - \frac{\sin(nx)\sin(n\xi)}{\pi/2}. \]
We expand $G$ in a series of the eigenfunctions.
\[ G(x|\xi) = \sum_{k=1}^{\infty} g_k \sin(kx) \]
We substitute the expansion into the differential equation to determine the coefficients. This will not determine $g_n$. We choose $g_n = 0$, which is one of the choices that will make the modified Green function symmetric in $x$ and $\xi$.
\[ \sum_{k=1}^{\infty} g_k\left( n^2 - k^2 \right)\sin(kx) = \frac{2}{\pi}\sum_{\substack{k=1 \\ k \neq n}}^{\infty} \sin(kx)\sin(k\xi) \]
\[ G(x|\xi) = \frac{2}{\pi}\sum_{\substack{k=1 \\ k \neq n}}^{\infty} \frac{\sin(kx)\sin(k\xi)}{n^2 - k^2} \]
The solution of the inhomogeneous problem is
\[ y(x) = \int_0^\pi f(\xi)G(x|\xi) \, d\xi. \]
Solution 23.17
We separate the problem for $u$ into the two problems:
\begin{align*}
Lv &\equiv (pv')' + qv = f(x), \quad a < x < b, \quad v(a) = 0, \; v(b) = 0 \\
Lw &\equiv (pw')' + qw = 0, \quad a < x < b, \quad w(a) = \alpha, \; w(b) = \beta
\end{align*}
and note that the solution for $u$ is $u = v + w$.
The problem for $v$ has the solution,
\[ v = \int_a^b g(x;\xi)f(\xi) \, d\xi, \]
with the Green function,
\[ g(x;\xi) = \frac{v_1(x_<)v_2(x_>)}{p(\xi)W(\xi)} \equiv \begin{cases} \dfrac{v_1(x)v_2(\xi)}{p(\xi)W(\xi)} & \text{for } a \le x \le \xi, \\[1ex] \dfrac{v_1(\xi)v_2(x)}{p(\xi)W(\xi)} & \text{for } \xi \le x \le b. \end{cases} \]
Here $v_1$ and $v_2$ are homogeneous solutions that respectively satisfy the left and right homogeneous boundary conditions.
Since $g(x;\xi)$ is a solution of the homogeneous equation for $x \neq \xi$, $\frac{\partial g}{\partial\xi}(x;\xi)$ is a solution of the homogeneous equation for $x \neq \xi$. This is because for $x \neq \xi$,
\[ L\left[ \frac{\partial g}{\partial\xi} \right] = \frac{\partial}{\partial\xi}L[g] = \frac{\partial}{\partial\xi}\delta(x - \xi) = 0. \]
If $\xi$ is outside of the domain, $(a, b)$, then $g(x;\xi)$ and $\frac{\partial g}{\partial\xi}(x;\xi)$ are homogeneous solutions on that domain. In particular $\frac{\partial g}{\partial\xi}(x;a)$ and $\frac{\partial g}{\partial\xi}(x;b)$ are homogeneous solutions,
\[ L\left[ \frac{\partial g}{\partial\xi}(x;a) \right] = L\left[ \frac{\partial g}{\partial\xi}(x;b) \right] = 0. \]
Now we use the definition of the Green function and $v_1(a) = v_2(b) = 0$ to determine simple expressions for these homogeneous solutions.
\begin{align*}
\frac{\partial g}{\partial\xi}(x;a) &= \frac{v_1'(a)v_2(x)}{p(a)W(a)} - \frac{\left( p'(a)W(a) + p(a)W'(a) \right)v_1(a)v_2(x)}{\left( p(a)W(a) \right)^2} \\
&= \frac{v_1'(a)v_2(x)}{p(a)W(a)} = \frac{v_1'(a)v_2(x)}{p(a)\left( v_1(a)v_2'(a) - v_1'(a)v_2(a) \right)} = \frac{v_1'(a)v_2(x)}{-p(a)v_1'(a)v_2(a)} \\
&= -\frac{v_2(x)}{p(a)v_2(a)}
\end{align*}
We note that this solution has the boundary values,
\[ \frac{\partial g}{\partial\xi}(a;a) = -\frac{v_2(a)}{p(a)v_2(a)} = -\frac{1}{p(a)}, \quad \frac{\partial g}{\partial\xi}(b;a) = -\frac{v_2(b)}{p(a)v_2(a)} = 0. \]
We examine the second solution.
\begin{align*}
\frac{\partial g}{\partial\xi}(x;b) &= \frac{v_1(x)v_2'(b)}{p(b)W(b)} - \frac{\left( p'(b)W(b) + p(b)W'(b) \right)v_1(x)v_2(b)}{\left( p(b)W(b) \right)^2} \\
&= \frac{v_1(x)v_2'(b)}{p(b)W(b)} = \frac{v_1(x)v_2'(b)}{p(b)\left( v_1(b)v_2'(b) - v_1'(b)v_2(b) \right)} = \frac{v_1(x)v_2'(b)}{p(b)v_1(b)v_2'(b)} \\
&= \frac{v_1(x)}{p(b)v_1(b)}
\end{align*}
This solution has the boundary values,
\[ \frac{\partial g}{\partial\xi}(a;b) = \frac{v_1(a)}{p(b)v_1(b)} = 0, \quad \frac{\partial g}{\partial\xi}(b;b) = \frac{v_1(b)}{p(b)v_1(b)} = \frac{1}{p(b)}. \]
Thus we see that the solution of
\[ Lw = (pw')' + qw = 0, \quad a < x < b, \quad w(a) = \alpha, \; w(b) = \beta, \]
is
\[ w = -\alpha p(a)\frac{\partial g}{\partial\xi}(x;a) + \beta p(b)\frac{\partial g}{\partial\xi}(x;b). \]
Therefore the solution of the problem for $u$ is
\[ u = \int_a^b g(x;\xi)f(\xi) \, d\xi - \alpha p(a)\frac{\partial g}{\partial\xi}(x;a) + \beta p(b)\frac{\partial g}{\partial\xi}(x;b). \]
1021
-4 -2 2 4
-0.5
-0.4
-0.3
-0.2
-0.1
Figure 23.7: G(x; 1) and G(x; 1)
Solution 23.18
Figure 23.7 shows a plot of $G(x;1)$ and $G(x;-1)$ for $k = 1$.
First we consider the boundary condition $u(0) = 0$. Note that the solution of
\[ G'' - k^2 G = \delta(x - \xi) - \delta(x + \xi), \quad |G(\pm\infty;\xi)| < \infty, \]
satisfies the condition $G(0;\xi) = 0$. Thus the Green function which satisfies $G(0;\xi) = 0$ is
\[ G(x;\xi) = -\frac{1}{2k}e^{-k|x-\xi|} + \frac{1}{2k}e^{-k|x+\xi|}. \]
Since $x, \xi > 0$ we can write this as
\begin{align*}
G(x;\xi) &= -\frac{1}{2k}e^{-k|x-\xi|} + \frac{1}{2k}e^{-k(x+\xi)} \\
&= \begin{cases} -\frac{1}{2k}e^{-k(\xi-x)} + \frac{1}{2k}e^{-k(x+\xi)}, & \text{for } x < \xi \\ -\frac{1}{2k}e^{-k(x-\xi)} + \frac{1}{2k}e^{-k(x+\xi)}, & \text{for } \xi < x \end{cases} \\
&= \begin{cases} -\frac{1}{k}e^{-k\xi}\sinh(kx), & \text{for } x < \xi \\ -\frac{1}{k}e^{-kx}\sinh(k\xi), & \text{for } \xi < x \end{cases}
\end{align*}
\[ G(x;\xi) = -\frac{1}{k}e^{-kx_>}\sinh(kx_<) \]
Now consider the boundary condition $u'(0) = 0$. Note that the solution of
\[ G'' - k^2 G = \delta(x - \xi) + \delta(x + \xi), \quad |G(\pm\infty;\xi)| < \infty, \]
satisfies the boundary condition $G_x(0;\xi) = 0$. Thus the Green function is
\[ G(x;\xi) = -\frac{1}{2k}e^{-k|x-\xi|} - \frac{1}{2k}e^{-k|x+\xi|}. \]
Since $x, \xi > 0$ we can write this as
\begin{align*}
G(x;\xi) &= -\frac{1}{2k}e^{-k|x-\xi|} - \frac{1}{2k}e^{-k(x+\xi)} \\
&= \begin{cases} -\frac{1}{2k}e^{-k(\xi-x)} - \frac{1}{2k}e^{-k(x+\xi)}, & \text{for } x < \xi \\ -\frac{1}{2k}e^{-k(x-\xi)} - \frac{1}{2k}e^{-k(x+\xi)}, & \text{for } \xi < x \end{cases} \\
&= \begin{cases} -\frac{1}{k}e^{-k\xi}\cosh(kx), & \text{for } x < \xi \\ -\frac{1}{k}e^{-kx}\cosh(k\xi), & \text{for } \xi < x \end{cases}
\end{align*}
\[ G(x;\xi) = -\frac{1}{k}e^{-kx_>}\cosh(kx_<) \]
The Green functions which satisfy $G(0;\xi) = 0$ and $G_x(0;\xi) = 0$ are shown in Figure 23.8.
Solution 23.19
1. The Green function satisfies
\[ g'' - a^2 g = \delta(x - \xi), \quad g(0;\xi) = g'(L;\xi) = 0. \]
Figure 23.8: G(x; 1) and G(x; 1)
We can write the set of homogeneous solutions as
\[ \left\{ e^{ax}, e^{-ax} \right\} \quad \text{or} \quad \{\cosh(ax), \sinh(ax)\}. \]
The solutions that respectively satisfy the left and right boundary conditions are
\[ u_1 = \sinh(ax), \quad u_2 = \cosh(a(x - L)). \]
The Wronskian of these solutions is
\[ W(x) = \begin{vmatrix} \sinh(ax) & \cosh(a(x-L)) \\ a\cosh(ax) & a\sinh(a(x-L)) \end{vmatrix} = -a\cosh(aL). \]
Thus the Green function is
\[ g(x;\xi) = \begin{cases} -\dfrac{\sinh(ax)\cosh(a(\xi - L))}{a\cosh(aL)} & \text{for } x \le \xi, \\[1ex] -\dfrac{\sinh(a\xi)\cosh(a(x - L))}{a\cosh(aL)} & \text{for } \xi \le x \end{cases} = -\frac{\sinh(ax_<)\cosh(a(x_> - L))}{a\cosh(aL)}. \]
2. We take the limit as $L \to \infty$.
\begin{align*}
g(x;\xi) &= \lim_{L\to\infty} -\frac{\sinh(ax_<)\cosh(a(x_> - L))}{a\cosh(aL)} \\
&= \lim_{L\to\infty} -\frac{\sinh(ax_<)}{a}\,\frac{\cosh(ax_>)\cosh(aL) - \sinh(ax_>)\sinh(aL)}{\cosh(aL)} \\
&= -\frac{\sinh(ax_<)}{a}\left( \cosh(ax_>) - \sinh(ax_>) \right)
\end{align*}
\[ g(x;\xi) = -\frac{1}{a}\sinh(ax_<)e^{-ax_>} \]
The solution of
\[ y'' - a^2 y = e^{-x}, \quad y(0) = y'(\infty) = 0 \]
is
\begin{align*}
y &= \int_0^\infty g(x;\xi)e^{-\xi} \, d\xi \\
&= -\frac{1}{a}\int_0^\infty \sinh(ax_<)e^{-ax_>}e^{-\xi} \, d\xi \\
&= -\frac{1}{a}\left( \int_0^x \sinh(a\xi)e^{-ax}e^{-\xi} \, d\xi + \int_x^\infty \sinh(ax)e^{-a\xi}e^{-\xi} \, d\xi \right)
\end{align*}
We first consider the case that $a \neq 1$.
\begin{align*}
&= -\frac{1}{a}\left( \frac{e^{-ax}}{a^2 - 1}\left( -a + e^{-x}\left( a\cosh(ax) + \sinh(ax) \right) \right) + \frac{1}{a + 1}e^{-(a+1)x}\sinh(ax) \right) \\
&= \frac{e^{-ax} - e^{-x}}{a^2 - 1}
\end{align*}
For $a = 1$, we have
\[ y = -\left( \frac{1}{4}e^{-x}\left( -1 + 2x + e^{-2x} \right) + \frac{1}{2}e^{-2x}\sinh(x) \right) = -\frac{1}{2}x e^{-x}. \]
Thus the solution of the problem is
\[ y = \begin{cases} \dfrac{e^{-ax} - e^{-x}}{a^2 - 1} & \text{for } a \neq 1, \\[1ex] -\dfrac{1}{2}x e^{-x} & \text{for } a = 1. \end{cases} \]
We note that this solution satisfies the differential equation and the boundary conditions.
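The closing remark can be made concrete with a short numerical check (not part of the original text; names my own): the claimed solution satisfies $y'' - a^2 y = e^{-x}$ and $y(0) = 0$ for both the $a \neq 1$ and $a = 1$ cases.

```python
import math

def y(x, a):
    # Solution of y'' - a^2 y = e^{-x}, y(0) = y'(inf) = 0, from the text.
    if abs(a - 1.0) < 1e-12:
        return -0.5 * x * math.exp(-x)
    return (math.exp(-a * x) - math.exp(-x)) / (a**2 - 1)

def residual(x, a, h=1e-4):
    # Central-difference estimate of y'' - a^2 y - e^{-x}; should be ~0.
    ypp = (y(x + h, a) - 2 * y(x, a) + y(x - h, a)) / h**2
    return ypp - a**2 * y(x, a) - math.exp(-x)
```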
Chapter 24
Difference Equations
Televisions should have a dial to turn up the intelligence. There is a brightness knob, but it doesn't work.
-?
24.1 Introduction
Example 24.1.1 Gambler's ruin problem. Consider a gambler that initially has $n$ dollars. He plays a game in which he has a probability $p$ of winning a dollar and $q$ of losing a dollar. (Note that $p + q = 1$.) The gambler has decided that if he attains $N$ dollars he will stop playing the game. In this case we will say that he has succeeded. Of course if he runs out of money before that happens, we will say that he is ruined. What is the probability of the gambler's ruin? Let us denote this probability by $a_n$. We know that if he has no money left, then his ruin is certain, so $a_0 = 1$. If he reaches $N$ dollars he will quit the game, so that $a_N = 0$. If he is somewhere in between ruin and success then the probability of his ruin is equal to $p$ times the probability of his ruin if he had $n + 1$ dollars plus $q$ times the probability of his ruin if he had $n - 1$ dollars. Writing this in an equation,
\[ a_n = p a_{n+1} + q a_{n-1} \quad \text{subject to} \quad a_0 = 1, \; a_N = 0. \]
This is an example of a difference equation. You will learn how to solve this particular problem in the section on constant coefficient equations.
Consider the sequence $a_1, a_2, a_3, \ldots$. Analogous to a derivative of a continuous function, we can define a discrete derivative on the sequence
\[ D a_n = a_{n+1} - a_n. \]
The second discrete derivative is then defined as
\[ D^2 a_n = D[a_{n+1} - a_n] = a_{n+2} - 2a_{n+1} + a_n. \]
The discrete integral of $a_n$ is
\[ \sum_{i=n_0}^{n} a_i. \]
Corresponding to
\[ \int_\alpha^\beta \frac{df}{dx} \, dx = f(\beta) - f(\alpha), \]
in the discrete realm we have
\[ \sum_{n=\alpha}^{\beta - 1} D[a_n] = \sum_{n=\alpha}^{\beta - 1} (a_{n+1} - a_n) = a_\beta - a_\alpha. \]
Linear difference equations have the form
\[ D^r a_n + p_{r-1}(n)D^{r-1}a_n + \cdots + p_1(n)Da_n + p_0(n)a_n = f(n). \]
From the definition of the discrete derivative an equivalent form is
\[ a_{n+r} + q_{r-1}(n)a_{n+r-1} + \cdots + q_1(n)a_{n+1} + q_0(n)a_n = f(n). \]
Besides being important in their own right, we will need to solve difference equations in order to develop series solutions of differential equations. Also, some methods of solving differential equations numerically are based on approximating them with difference equations.
There are many similarities between differential and difference equations. Like differential equations, an $r^{\text{th}}$ order homogeneous difference equation has $r$ linearly independent solutions. The general solution to the $r^{\text{th}}$ order inhomogeneous equation is the sum of the particular solution and an arbitrary linear combination of the homogeneous solutions.
For an $r^{\text{th}}$ order difference equation, the initial condition is given by specifying the values of the first $r$ $a_n$'s.
Example 24.1.2 Consider the difference equation $a_{n+2} - a_{n+1} - a_n = 0$ subject to the initial condition $a_1 = a_2 = 1$. Note that although we may not know a closed-form formula for the $a_n$ we can calculate the $a_n$ in order by substituting into the difference equation. The first few $a_n$ are $1, 1, 2, 3, 5, 8, 13, 21, \ldots$ We recognize this as the Fibonacci sequence.
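The "calculate in order by substituting into the difference equation" procedure can be sketched in a few lines of Python (not part of the original text; the name `fib_terms` is my own):

```python
def fib_terms(n):
    """First n terms of a_{n+2} = a_{n+1} + a_n with a_1 = a_2 = 1."""
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b  # step the recurrence forward
    return out
```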
24.2 Exact Equations
Consider the sequence $a_1, a_2, \ldots$. Exact difference equations on this sequence have the form
\[ D[F(a_n, a_{n+1}, \ldots, n)] = g(n). \]
We can reduce the order of, (or solve for first order), this equation by summing from 1 to $n - 1$.
\begin{align*}
\sum_{j=1}^{n-1} D[F(a_j, a_{j+1}, \ldots, j)] &= \sum_{j=1}^{n-1} g(j) \\
F(a_n, a_{n+1}, \ldots, n) - F(a_1, a_2, \ldots, 1) &= \sum_{j=1}^{n-1} g(j) \\
F(a_n, a_{n+1}, \ldots, n) &= \sum_{j=1}^{n-1} g(j) + F(a_1, a_2, \ldots, 1)
\end{align*}
Result 24.2.1 We can reduce the order of the exact difference equation
\[ D[F(a_n, a_{n+1}, \ldots, n)] = g(n), \quad \text{for } n \ge 1 \]
by summing both sides of the equation to obtain
\[ F(a_n, a_{n+1}, \ldots, n) = \sum_{j=1}^{n-1} g(j) + F(a_1, a_2, \ldots, 1). \]
Example 24.2.1 Consider the difference equation, $D[n a_n] = 1$. Summing both sides of this equation,
\begin{align*}
\sum_{j=1}^{n-1} D[j a_j] &= \sum_{j=1}^{n-1} 1 \\
n a_n - a_1 &= n - 1 \\
a_n &= \frac{n + a_1 - 1}{n}.
\end{align*}
24.3 Homogeneous First Order
Consider the homogeneous first order difference equation
\[ a_{n+1} = p(n)a_n, \quad \text{for } n \ge 1. \]
We can directly solve for $a_n$.
\begin{align*}
a_n &= a_1 \frac{a_n}{a_{n-1}}\frac{a_{n-1}}{a_{n-2}} \cdots \frac{a_2}{a_1} \\
&= a_1 p(n-1)p(n-2)\cdots p(1) \\
&= a_1 \prod_{j=1}^{n-1} p(j)
\end{align*}
Alternatively, we could solve this equation by making it exact. Analogous to an integrating factor for differential equations, we multiply the equation by the summing factor
\[ S(n) = \left( \prod_{j=1}^{n} p(j) \right)^{-1}. \]
\begin{align*}
a_{n+1} - p(n)a_n &= 0 \\
\frac{a_{n+1}}{\prod_{j=1}^{n} p(j)} - \frac{a_n}{\prod_{j=1}^{n-1} p(j)} &= 0 \\
D\left[ \frac{a_n}{\prod_{j=1}^{n-1} p(j)} \right] &= 0
\end{align*}
Now we sum from 1 to $n - 1$.
\[ \frac{a_n}{\prod_{j=1}^{n-1} p(j)} - a_1 = 0 \]
\[ a_n = a_1 \prod_{j=1}^{n-1} p(j) \]
Result 24.3.1 The solution of the homogeneous first order difference equation
\[ a_{n+1} = p(n)a_n, \quad \text{for } n \ge 1, \]
is
\[ a_n = a_1 \prod_{j=1}^{n-1} p(j). \]
Example 24.3.1 Consider the equation $a_{n+1} = n a_n$ with the initial condition $a_1 = 1$.
\[ a_n = a_1 \prod_{j=1}^{n-1} j = (1)(n-1)! = \Gamma(n) \]
Recall that $\Gamma(z)$ is the generalization of the factorial function. For positive integral values of the argument, $\Gamma(n) = (n-1)!$.
24.4 Inhomogeneous First Order
Consider the equation
\[ a_{n+1} = p(n)a_n + q(n) \quad \text{for } n \ge 1. \]
Multiplying by $S(n) = \left( \prod_{j=1}^{n} p(j) \right)^{-1}$ yields
\[ \frac{a_{n+1}}{\prod_{j=1}^{n} p(j)} - \frac{a_n}{\prod_{j=1}^{n-1} p(j)} = \frac{q(n)}{\prod_{j=1}^{n} p(j)}. \]
The left hand side is a discrete derivative.
\[ D\left[ \frac{a_n}{\prod_{j=1}^{n-1} p(j)} \right] = \frac{q(n)}{\prod_{j=1}^{n} p(j)} \]
Summing both sides from 1 to $n - 1$,
\[ \frac{a_n}{\prod_{j=1}^{n-1} p(j)} - a_1 = \sum_{k=1}^{n-1} \left( \frac{q(k)}{\prod_{j=1}^{k} p(j)} \right) \]
\[ a_n = \left( \prod_{m=1}^{n-1} p(m) \right)\left( \sum_{k=1}^{n-1} \left( \frac{q(k)}{\prod_{j=1}^{k} p(j)} \right) + a_1 \right). \]
Result 24.4.1 The solution of the inhomogeneous first order difference equation
\[ a_{n+1} = p(n)a_n + q(n) \quad \text{for } n \ge 1 \]
is
\[ a_n = \left( \prod_{m=1}^{n-1} p(m) \right)\left( \sum_{k=1}^{n-1} \left( \frac{q(k)}{\prod_{j=1}^{k} p(j)} \right) + a_1 \right). \]
Example 24.4.1 Consider the equation $a_{n+1} = n a_n + 1$ for $n \ge 1$. The summing factor is
\[ S(n) = \left( \prod_{j=1}^{n} j \right)^{-1} = \frac{1}{n!}. \]
Multiplying the difference equation by the summing factor,
\begin{align*}
\frac{a_{n+1}}{n!} - \frac{a_n}{(n-1)!} &= \frac{1}{n!} \\
D\left[ \frac{a_n}{(n-1)!} \right] &= \frac{1}{n!} \\
\frac{a_n}{(n-1)!} - a_1 &= \sum_{k=1}^{n-1} \frac{1}{k!} \\
a_n &= (n-1)!\left( \sum_{k=1}^{n-1} \frac{1}{k!} + a_1 \right).
\end{align*}
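The closed form just derived can be checked in exact rational arithmetic (an aside, not part of the original text; names my own):

```python
from fractions import Fraction
from math import factorial

def iterate(n_max, a1=Fraction(1)):
    # Iterate a_{n+1} = n a_n + 1 starting from a_1.
    a = {1: a1}
    for n in range(1, n_max):
        a[n + 1] = n * a[n] + 1
    return a

def closed_form(n, a1=Fraction(1)):
    # a_n = (n-1)! ( sum_{k=1}^{n-1} 1/k! + a_1 )
    return factorial(n - 1) * (sum(Fraction(1, factorial(k)) for k in range(1, n)) + a1)
```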
Example 24.4.2 Consider the equation
\[ a_{n+1} = \lambda a_n + \mu, \quad \text{for } n \ge 0. \]
From the above result, (with the products and sums starting at zero instead of one), the solution is
\begin{align*}
a_n &= \left( \prod_{m=0}^{n-1} \lambda \right)\left( \sum_{k=0}^{n-1} \left( \frac{\mu}{\prod_{j=0}^{k} \lambda} \right) + a_0 \right) \\
&= \lambda^n \left( \sum_{k=0}^{n-1} \frac{\mu}{\lambda^{k+1}} + a_0 \right) \\
&= \lambda^n \left( \mu\frac{\lambda^{-n} - 1}{1 - \lambda} + a_0 \right) \\
&= \mu\frac{1 - \lambda^n}{1 - \lambda} + a_0\lambda^n.
\end{align*}
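A brief numerical confirmation of this closed form (not part of the original text; names my own), valid for $\lambda \neq 1$:

```python
def iterate(n, lam, mu, a0):
    # Iterate a_{k+1} = lam * a_k + mu for n steps starting from a_0.
    a = a0
    for _ in range(n):
        a = lam * a + mu
    return a

def closed_form(n, lam, mu, a0):
    # a_n = mu (1 - lam^n)/(1 - lam) + a0 lam^n, for lam != 1.
    return mu * (1 - lam**n) / (1 - lam) + a0 * lam**n
```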
24.5 Homogeneous Constant Coefficient Equations
Homogeneous constant coefficient equations have the form
\[ a_{n+N} + p_{N-1}a_{n+N-1} + \cdots + p_1 a_{n+1} + p_0 a_n = 0. \]
The substitution $a_n = r^n$ yields
\[ r^N + p_{N-1}r^{N-1} + \cdots + p_1 r + p_0 = 0 \]
\[ (r - r_1)^{m_1} \cdots (r - r_k)^{m_k} = 0. \]
If $r_1$ is a distinct root then the associated linearly independent solution is $r_1^n$. If $r_1$ is a root of multiplicity $m > 1$ then the associated solutions are $r_1^n, n r_1^n, n^2 r_1^n, \ldots, n^{m-1} r_1^n$.
Result 24.5.1 Consider the homogeneous constant coefficient difference equation
\[ a_{n+N} + p_{N-1}a_{n+N-1} + \cdots + p_1 a_{n+1} + p_0 a_n = 0. \]
The substitution $a_n = r^n$ yields the equation
\[ (r - r_1)^{m_1} \cdots (r - r_k)^{m_k} = 0. \]
A set of linearly independent solutions is
\[ \left\{ r_1^n, n r_1^n, \ldots, n^{m_1 - 1} r_1^n, \ldots, r_k^n, n r_k^n, \ldots, n^{m_k - 1} r_k^n \right\}. \]
Example 24.5.1 Consider the equation $a_{n+2} - 3a_{n+1} + 2a_n = 0$ with the initial conditions $a_1 = 1$ and $a_2 = 3$. The substitution $a_n = r^n$ yields
\[ r^2 - 3r + 2 = (r - 1)(r - 2) = 0. \]
Thus the general solution is
\[ a_n = c_1 1^n + c_2 2^n. \]
The initial conditions give the two equations,
\begin{align*}
a_1 = 1 &= c_1 + 2c_2 \\
a_2 = 3 &= c_1 + 4c_2
\end{align*}
Since $c_1 = -1$ and $c_2 = 1$, the solution to the difference equation subject to the initial conditions is
\[ a_n = 2^n - 1. \]
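As a sanity check (not part of the original text; names my own), the closed form $a_n = 2^n - 1$ can be compared against direct iteration of the recurrence:

```python
def a_closed(n):
    # Claimed solution a_n = 2^n - 1.
    return 2**n - 1

def a_iterated(n_max):
    # Iterate a_{n+2} = 3 a_{n+1} - 2 a_n with a_1 = 1, a_2 = 3.
    a = {1: 1, 2: 3}
    for n in range(1, n_max - 1):
        a[n + 2] = 3 * a[n + 1] - 2 * a[n]
    return a
```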
Example 24.5.2 Consider the gambler's ruin problem that was introduced in Example 24.1.1. The equation for the probability of the gambler's ruin at $n$ dollars is
\[ a_n = p a_{n+1} + q a_{n-1} \quad \text{subject to} \quad a_0 = 1, \; a_N = 0. \]
We assume that $0 < p < 1$. With the substitution $a_n = r^n$ we obtain
\[ r = pr^2 + q. \]
The roots of this equation are
\begin{align*}
r &= \frac{1 \pm \sqrt{1 - 4pq}}{2p} \\
&= \frac{1 \pm \sqrt{1 - 4p(1 - p)}}{2p} \\
&= \frac{1 \pm \sqrt{(1 - 2p)^2}}{2p} \\
&= \frac{1 \pm |1 - 2p|}{2p}.
\end{align*}
We will consider the two cases $p \neq 1/2$ and $p = 1/2$.
$p \neq 1/2$. If $p < 1/2$, the roots are
\[ r = \frac{1 \pm (1 - 2p)}{2p} \]
\[ r_1 = \frac{1 - p}{p} = \frac{q}{p}, \quad r_2 = 1. \]
If $p > 1/2$ the roots are
\[ r = \frac{1 \pm (2p - 1)}{2p} \]
\[ r_1 = 1, \quad r_2 = \frac{1 - p}{p} = \frac{q}{p}. \]
Thus the general solution for $p \neq 1/2$ is
\[ a_n = c_1 + c_2\left( \frac{q}{p} \right)^n. \]
The boundary condition $a_0 = 1$ requires that $c_1 + c_2 = 1$. From the boundary condition $a_N = 0$ we have
\begin{align*}
(1 - c_2) + c_2\left( \frac{q}{p} \right)^N &= 0 \\
c_2 &= \frac{1}{1 - (q/p)^N} \\
c_2 &= \frac{p^N}{p^N - q^N}.
\end{align*}
Solving for $c_1$,
\[ c_1 = 1 - \frac{p^N}{p^N - q^N} \]
\[ c_1 = \frac{-q^N}{p^N - q^N}. \]
Thus we have
\[ a_n = \frac{-q^N}{p^N - q^N} + \frac{p^N}{p^N - q^N}\left( \frac{q}{p} \right)^n. \]
$p = 1/2$. In this case, the two roots of the polynomial are both 1. The general solution is
\[ a_n = c_1 + c_2 n. \]
The left boundary condition demands that $c_1 = 1$. From the right boundary condition we obtain
\[ 1 + c_2 N = 0 \]
\[ c_2 = -\frac{1}{N}. \]
Thus the solution for this case is
\[ a_n = 1 - \frac{n}{N}. \]
As a check that this formula makes sense, we see that for $n = N/2$ the probability of ruin is $1 - \frac{N/2}{N} = \frac{1}{2}$.
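The ruin-probability formula above can be verified directly against the boundary conditions and the recurrence (not part of the original text; names my own):

```python
def ruin_probability(n, N, p):
    # a_n from the text: probability of ruin starting with n dollars.
    q = 1.0 - p
    if p == 0.5:
        return 1.0 - n / N
    r = q / p
    return -q**N / (p**N - q**N) + p**N / (p**N - q**N) * r**n
```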
24.6 Reduction of Order
Consider the difference equation
\[ (n+1)(n+2)a_{n+2} - 3(n+1)a_{n+1} + 2a_n = 0 \quad \text{for } n \ge 0 \tag{24.1} \]
We see that one solution to this equation is $a_n = 1/n!$. Analogous to the reduction of order for differential equations, the substitution $a_n = b_n/n!$ will reduce the order of the difference equation.
\begin{align*}
\frac{(n+1)(n+2)b_{n+2}}{(n+2)!} - \frac{3(n+1)b_{n+1}}{(n+1)!} + \frac{2b_n}{n!} &= 0 \\
b_{n+2} - 3b_{n+1} + 2b_n &= 0 \tag{24.2}
\end{align*}
At first glance it appears that we have not reduced the order of the equation, but writing it in terms of discrete derivatives
\[ D^2 b_n - D b_n = 0 \]
shows that we now have a first order difference equation for $D b_n$. The substitution $b_n = r^n$ in equation 24.2 yields the algebraic equation
\[ r^2 - 3r + 2 = (r - 1)(r - 2) = 0. \]
Thus the solutions are $b_n = 1$ and $b_n = 2^n$. Only the $b_n = 2^n$ solution will give us another linearly independent solution for $a_n$. Thus the second solution for $a_n$ is $a_n = b_n/n! = 2^n/n!$. The general solution to equation 24.1 is then
\[ a_n = c_1 \frac{1}{n!} + c_2 \frac{2^n}{n!}. \]
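A short numerical check of this general solution (not part of the original text; names and the sample constants are my own): for arbitrary $c_1, c_2$ the combination should satisfy equation 24.1 identically.

```python
from math import factorial

def a(n, c1=2.0, c2=-3.0):
    # General solution of equation 24.1: a_n = c1/n! + c2 2^n/n!.
    return (c1 + c2 * 2**n) / factorial(n)

def lhs(n):
    # Left side of (n+1)(n+2) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n; should be 0.
    return (n + 1) * (n + 2) * a(n + 2) - 3 * (n + 1) * a(n + 1) + 2 * a(n)
```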
Result 24.6.1 Let $a_n = s_n$ be a homogeneous solution of a linear difference equation. The substitution $a_n = s_n b_n$ will yield a difference equation for $b_n$ that is of order one less than the equation for $a_n$.
24.7 Exercises
Exercise 24.1
Find a formula for the $n^{\text{th}}$ term in the Fibonacci sequence $1, 1, 2, 3, 5, 8, 13, \ldots$.
Hint, Solution
Exercise 24.2
Solve the difference equation
\[ a_{n+2} = \frac{2}{n}a_n, \quad a_1 = a_2 = 1. \]
Hint, Solution
24.8 Hints
Hint 24.1
The difference equation corresponding to the Fibonacci sequence is
\[ a_{n+2} - a_{n+1} - a_n = 0, \quad a_1 = a_2 = 1. \]
Hint 24.2
Consider this exercise as two first order difference equations; one for the even terms, one for the odd terms.
1041
24.9 Solutions
Solution 24.1
We can describe the Fibonacci sequence with the difference equation
\[ a_{n+2} - a_{n+1} - a_n = 0, \qquad a_1 = a_2 = 1. \]
With the substitution a_n = r^n we obtain the equation
\[ r^2 - r - 1 = 0. \]
This equation has the two distinct roots
\[ r_1 = \frac{1+\sqrt{5}}{2}, \qquad r_2 = \frac{1-\sqrt{5}}{2}. \]
Thus the general solution is
\[ a_n = c_1 \left( \frac{1+\sqrt{5}}{2} \right)^n + c_2 \left( \frac{1-\sqrt{5}}{2} \right)^n. \]
From the initial conditions we have
\[ c_1 r_1 + c_2 r_2 = 1, \qquad c_1 r_1^2 + c_2 r_2^2 = 1. \]
Solving for c_2 in the first equation,
\[ c_2 = \frac{1}{r_2} (1 - c_1 r_1). \]
We substitute this into the second equation.
\[ c_1 r_1^2 + \frac{1}{r_2} (1 - c_1 r_1) r_2^2 = 1 \]
\[ c_1 (r_1^2 - r_1 r_2) = 1 - r_2 \]
\[ c_1 = \frac{1 - r_2}{r_1^2 - r_1 r_2} = \frac{1 - \frac{1-\sqrt{5}}{2}}{\frac{1+\sqrt{5}}{2}\sqrt{5}} = \frac{\frac{1+\sqrt{5}}{2}}{\frac{1+\sqrt{5}}{2}\sqrt{5}} = \frac{1}{\sqrt{5}} \]
Substitute this result into the equation for c_2.
\[ c_2 = \frac{1}{r_2} \left( 1 - \frac{1}{\sqrt{5}} r_1 \right) = \frac{2}{1-\sqrt{5}} \left( 1 - \frac{1}{\sqrt{5}} \, \frac{1+\sqrt{5}}{2} \right) = \frac{2}{1-\sqrt{5}} \left( \frac{\sqrt{5}-1}{2\sqrt{5}} \right) = -\frac{1}{\sqrt{5}} \]
Thus the nth term in the Fibonacci sequence has the formula
\[ a_n = \frac{1}{\sqrt{5}} \left( \frac{1+\sqrt{5}}{2} \right)^n - \frac{1}{\sqrt{5}} \left( \frac{1-\sqrt{5}}{2} \right)^n. \]
It is interesting to note that although the Fibonacci sequence is defined in terms of integers, one cannot express the formula for the nth element in terms of rational numbers.
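The closed-form (Binet-style) formula above can be checked against the recurrence itself; a minimal sketch:

```python
from math import sqrt

# Sketch: compare the closed form a_n found above with the Fibonacci
# recurrence a_{n+2} = a_{n+1} + a_n, a_1 = a_2 = 1.
def fib_closed(n):
    s = sqrt(5)
    return ((1 + s) / 2) ** n / s - ((1 - s) / 2) ** n / s

fib = [1, 1]
for _ in range(20):
    fib.append(fib[-1] + fib[-2])

for n, value in enumerate(fib, start=1):
    assert round(fib_closed(n)) == value
```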
Solution 24.2
We can consider
\[ a_{n+2} = \frac{2}{n} a_n, \qquad a_1 = a_2 = 1 \]
to be a first order difference equation. First consider the odd terms.
\[ a_1 = 1, \qquad a_3 = \frac{2}{1}, \qquad a_5 = \frac{2^2}{3 \cdot 1}, \qquad \ldots \qquad a_n = \frac{2^{(n-1)/2}}{(n-2)(n-4)\cdots(1)}. \]
For the even terms,
\[ a_2 = 1, \qquad a_4 = \frac{2}{2}, \qquad a_6 = \frac{2^2}{4 \cdot 2}, \qquad \ldots \qquad a_n = \frac{2^{(n-2)/2}}{(n-2)(n-4)\cdots(2)}. \]
Thus
\[ a_n = \begin{cases} \dfrac{2^{(n-1)/2}}{(n-2)(n-4)\cdots(1)} & \text{for odd } n, \\[2ex] \dfrac{2^{(n-2)/2}}{(n-2)(n-4)\cdots(2)} & \text{for even } n. \end{cases} \]
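The odd/even closed forms can be verified directly against the recurrence; a minimal sketch:

```python
# Sketch: verify the odd/even closed forms above against the recurrence
# a_{n+2} = (2/n) a_n with a_1 = a_2 = 1.
def a_closed(n):
    prod = 1
    k = n - 2
    while k >= 1:       # (n-2)(n-4)... down to 1 (odd n) or 2 (even n)
        prod *= k
        k -= 2
    power = (n - 1) // 2 if n % 2 == 1 else (n - 2) // 2
    return 2**power / prod

a = {1: 1.0, 2: 1.0}
for n in range(1, 20):
    a[n + 2] = 2 / n * a[n]

for n in range(1, 22):
    assert abs(a_closed(n) - a[n]) < 1e-12
```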
Chapter 25
Series Solutions of Differential Equations
Skill beats honesty any day.
-?
25.1 Ordinary Points
Big O and Little o Notation. The notation O(z^n) means "terms no bigger than z^n." This gives us a convenient shorthand for manipulating series. For example,
\[ \sin z = z - \frac{z^3}{6} + O(z^5) \]
\[ \frac{1}{1-z} = 1 + O(z) \]
The notation o(z^n) means "terms smaller than z^n." For example,
\[ \cos z = 1 + o(1) \]
\[ e^z = 1 + z + o(z) \]
Example 25.1.1 Consider the equation
\[ w'' - 3w' + 2w = 0. \]
The general solution to this constant coefficient equation is
\[ w = c_1 e^z + c_2 e^{2z}. \]
The functions e^z and e^{2z} are analytic in the finite complex plane. Recall that a function is analytic at a point z_0 if and only if the function has a Taylor series about z_0 with a nonzero radius of convergence. If we substitute the Taylor series expansions about z = 0 of e^z and e^{2z} into the general solution, we obtain
\[ w = c_1 \sum_{n=0}^\infty \frac{z^n}{n!} + c_2 \sum_{n=0}^\infty \frac{2^n z^n}{n!}. \]
Thus we have a series solution of the differential equation.
Alternatively, we could try substituting a Taylor series into the differential equation and solving for the coefficients. Substituting w = \sum_{n=0}^\infty a_n z^n into the differential equation yields
\[ \frac{d^2}{dz^2} \sum_{n=0}^\infty a_n z^n - 3 \frac{d}{dz} \sum_{n=0}^\infty a_n z^n + 2 \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=2}^\infty n(n-1) a_n z^{n-2} - 3 \sum_{n=1}^\infty n a_n z^{n-1} + 2 \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=0}^\infty (n+2)(n+1) a_{n+2} z^n - 3 \sum_{n=0}^\infty (n+1) a_{n+1} z^n + 2 \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=0}^\infty \left[ (n+2)(n+1) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n \right] z^n = 0. \]
Equating powers of z, we obtain the difference equation
\[ (n+2)(n+1) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n = 0, \qquad n \geq 0. \]
We see that a_n = 1/n! is one solution since
\[ \frac{(n+2)(n+1)}{(n+2)!} - 3\,\frac{n+1}{(n+1)!} + 2\,\frac{1}{n!} = \frac{1-3+2}{n!} = 0. \]
We use reduction of order for difference equations to find the other solution. Substituting a_n = b_n/n! into the difference equation yields
\[ (n+2)(n+1) \frac{b_{n+2}}{(n+2)!} - 3(n+1) \frac{b_{n+1}}{(n+1)!} + 2 \frac{b_n}{n!} = 0 \]
\[ b_{n+2} - 3 b_{n+1} + 2 b_n = 0. \]
At first glance it appears that we have not reduced the order of the difference equation. However, writing this equation in terms of discrete derivatives,
\[ D^2 b_n - D b_n = 0, \]
we see that this is a first order difference equation for D b_n. Since this is a constant coefficient difference equation we substitute b_n = r^n into the equation to obtain an algebraic equation for r.
\[ r^2 - 3r + 2 = (r-1)(r-2) = 0 \]
Thus the two solutions are b_n = 1^n b_0 and b_n = 2^n b_0. Only b_n = 2^n b_0 will give us a second independent solution for a_n. Thus the two solutions for a_n are
\[ a_n = \frac{a_0}{n!} \quad \text{and} \quad a_n = \frac{2^n a_0}{n!}. \]
Thus we can write the general solution to the differential equation as
\[ w = c_1 \sum_{n=0}^\infty \frac{z^n}{n!} + c_2 \sum_{n=0}^\infty \frac{2^n z^n}{n!}. \]
We recognize these two sums as the Taylor expansions of e^z and e^{2z}. Thus we obtain the same result as we did solving the differential equation directly.
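Running the difference equation forward from both sets of starting values reproduces the Taylor coefficients of e^z and e^{2z}; a minimal sketch:

```python
from math import factorial

# Sketch: iterate (n+2)(n+1) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n = 0
# from two sets of starting values and compare the resulting
# coefficients with those of e^z and e^{2z}.
def series_from(a0, a1, terms=15):
    a = [a0, a1]
    for n in range(terms - 2):
        a.append((3 * (n + 1) * a[n + 1] - 2 * a[n]) / ((n + 2) * (n + 1)))
    return a

ez  = series_from(1, 1)   # a_0 = a_1 = 1     ->  1/n!
e2z = series_from(1, 2)   # a_0 = 1, a_1 = 2  ->  2^n/n!
for n in range(15):
    assert abs(ez[n] - 1 / factorial(n)) < 1e-12
    assert abs(e2z[n] - 2**n / factorial(n)) < 1e-9
```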
Of course it would be pretty silly to go through all the grunge involved in developing a series expansion of the solution in a problem like Example 25.1.1 since we can solve the problem exactly. However, if we could not solve a differential equation, then having a Taylor series expansion of the solution about a point z_0 would be useful in determining the behavior of the solutions near that point.
For this method of substituting a Taylor series into the differential equation to be useful we have to know at what points the solutions are analytic. Let's say we were considering a second order differential equation whose solutions were
\[ w_1 = \frac{1}{z}, \quad \text{and} \quad w_2 = \log z. \]
Trying to find a Taylor series expansion of the solutions about the point z = 0 would fail because the solutions are not analytic at z = 0. This brings us to two important questions.
1. Can we tell if the solutions to a linear differential equation are analytic at a point without knowing the solutions?
2. If there are Taylor series expansions of the solutions to a differential equation, what are the radii of convergence of the series?
In order to answer these questions, we will introduce the concept of an ordinary point. Consider the nth order linear homogeneous equation
\[ \frac{d^n w}{dz^n} + p_{n-1}(z) \frac{d^{n-1} w}{dz^{n-1}} + \cdots + p_1(z) \frac{dw}{dz} + p_0(z) w = 0. \]
If each of the coefficient functions p_i(z) is analytic at z = z_0 then z_0 is an ordinary point of the differential equation.
For reasons of typography we will restrict our attention to second order equations and the point z_0 = 0 for a while. The generalization to an nth order equation will be apparent. Considering the point z_0 ≠ 0 is only trivially more general, as we could introduce the transformation z − z_0 → z to move the point to the origin.
In the chapter on first order differential equations we showed that the solution is analytic at ordinary points. One would guess that this remains true for higher order equations. Consider the second order equation
\[ y'' + p(z) y' + q(z) y = 0, \]
where p and q are analytic at the origin.
\[ p(z) = \sum_{n=0}^\infty p_n z^n, \quad \text{and} \quad q(z) = \sum_{n=0}^\infty q_n z^n \]
Assume that one of the solutions is not analytic at the origin and behaves like z^λ at z = 0 where λ ≠ 0, 1, 2, .... That is, we can approximate the solution with w(z) = z^λ + o(z^λ). Let's substitute w = z^λ + o(z^λ) into the differential equation and look at the lowest power of z in each of the terms.
\[ \left[ \lambda(\lambda-1) z^{\lambda-2} + o(z^{\lambda-2}) \right] + \left[ \lambda z^{\lambda-1} + o(z^{\lambda-1}) \right] \sum_{n=0}^\infty p_n z^n + \left[ z^\lambda + o(z^\lambda) \right] \sum_{n=0}^\infty q_n z^n = 0. \]
We see that the solution could not possibly behave like z^λ, λ ≠ 0, 1, 2, ..., because there is no term on the left to cancel out the z^{λ-2} term. The terms on the left side could not add to zero.
You could also check that a solution could not possibly behave like log z at the origin. Though we will not prove it, if z_0 is an ordinary point of a homogeneous differential equation, then all the solutions are analytic at the point z_0. Since the solution is analytic at z_0 we can expand it in a Taylor series.
Now we are prepared to answer our second question. From complex variables, we know that the radius of convergence of the Taylor series expansion of a function is the distance to the nearest singularity of that function. Since the solutions to a differential equation are analytic at ordinary points of the equation, the series expansion about an ordinary point will have a radius of convergence at least as large as the distance to the nearest singularity of the coefficient functions.
Example 25.1.2 Consider the equation
\[ w'' + \frac{1}{\cos z} w' + z^2 w = 0. \]
If we expand the solution to the differential equation in Taylor series about z = 0, the radius of convergence will be at least π/2. This is because the coefficient functions are analytic at the origin, and the nearest singularities of 1/cos z are at z = ±π/2.
25.1.1 Taylor Series Expansion for a Second Order Differential Equation
Consider the differential equation
\[ w'' + p(z) w' + q(z) w = 0 \]
where p(z) and q(z) are analytic in some neighborhood of the origin.
\[ p(z) = \sum_{n=0}^\infty p_n z^n \quad \text{and} \quad q(z) = \sum_{n=0}^\infty q_n z^n \]
We substitute a Taylor series and its derivatives
\[ w = \sum_{n=0}^\infty a_n z^n \]
\[ w' = \sum_{n=1}^\infty n a_n z^{n-1} = \sum_{n=0}^\infty (n+1) a_{n+1} z^n \]
\[ w'' = \sum_{n=2}^\infty n(n-1) a_n z^{n-2} = \sum_{n=0}^\infty (n+2)(n+1) a_{n+2} z^n \]
into the differential equation to obtain
\[ \sum_{n=0}^\infty (n+2)(n+1) a_{n+2} z^n + \left( \sum_{n=0}^\infty p_n z^n \right) \left( \sum_{n=0}^\infty (n+1) a_{n+1} z^n \right) + \left( \sum_{n=0}^\infty q_n z^n \right) \left( \sum_{n=0}^\infty a_n z^n \right) = 0 \]
\[ \sum_{n=0}^\infty (n+2)(n+1) a_{n+2} z^n + \sum_{n=0}^\infty \left[ \sum_{m=0}^n (m+1) a_{m+1} p_{n-m} \right] z^n + \sum_{n=0}^\infty \left[ \sum_{m=0}^n a_m q_{n-m} \right] z^n = 0 \]
\[ \sum_{n=0}^\infty \left[ (n+2)(n+1) a_{n+2} + \sum_{m=0}^n \left( (m+1) a_{m+1} p_{n-m} + a_m q_{n-m} \right) \right] z^n = 0. \]
Equating coefficients of powers of z,
\[ (n+2)(n+1) a_{n+2} + \sum_{m=0}^n \left( (m+1) a_{m+1} p_{n-m} + a_m q_{n-m} \right) = 0 \quad \text{for } n \geq 0. \]
We see that a_0 and a_1 are arbitrary and the rest of the coefficients are determined by the recurrence relation
\[ a_{n+2} = -\frac{1}{(n+1)(n+2)} \sum_{m=0}^n \left( (m+1) a_{m+1} p_{n-m} + a_m q_{n-m} \right) \quad \text{for } n \geq 0. \]
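The general recurrence relation translates directly into code. The sketch below tests it on the equation w'' + w = 0 (p = 0, q = 1), whose solution with a_0 = 1, a_1 = 0 is cos z, so the even coefficients should be (−1)^{n/2}/n!.

```python
from math import factorial

# Sketch: the general recurrence above. p and q return the Taylor
# coefficients p_n, q_n of the equation w'' + p(z) w' + q(z) w = 0.
def taylor_coefficients(p, q, a0, a1, terms):
    a = [a0, a1]
    for n in range(terms - 2):
        s = sum((m + 1) * a[m + 1] * p(n - m) + a[m] * q(n - m)
                for m in range(n + 1))
        a.append(-s / ((n + 1) * (n + 2)))
    return a

# Test case: w'' + w = 0 with w(0) = 1, w'(0) = 0, i.e. w = cos z.
a = taylor_coefficients(lambda n: 0.0,
                        lambda n: 1.0 if n == 0 else 0.0,
                        1.0, 0.0, 12)
for n in range(12):
    expected = (-1) ** (n // 2) / factorial(n) if n % 2 == 0 else 0.0
    assert abs(a[n] - expected) < 1e-12
```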
Example 25.1.3 Consider the problem
\[ y'' + \frac{1}{\cos x} y' + e^x y = 0, \qquad y(0) = y'(0) = 1. \]
Let's expand the solution in a Taylor series about the origin.
\[ y(x) = \sum_{n=0}^\infty a_n x^n \]
Since y(0) = a_0 and y'(0) = a_1, we see that a_0 = a_1 = 1. The Taylor expansions of the coefficient functions are
\[ \frac{1}{\cos x} = 1 + O(x), \quad \text{and} \quad e^x = 1 + O(x). \]
Now we can calculate a_2 from the recurrence relation.
\[ a_2 = -\frac{1}{1 \cdot 2} \sum_{m=0}^0 \left( (m+1) a_{m+1} p_{0-m} + a_m q_{0-m} \right) = -\frac{1}{2} (1 \cdot 1 \cdot 1 + 1 \cdot 1) = -1 \]
Thus the solution to the problem is
\[ y(x) = 1 + x - x^2 + O(x^3). \]
In Figure 25.1 the numerical solution is plotted in a solid line and the sum of the first three terms of the Taylor series is plotted in a dashed line.
The general recurrence relation for the a_n's is useful if you only want to calculate the first few terms in the Taylor expansion. However, for many problems substituting the Taylor series for the coefficient functions into the differential equation will enable you to find a simpler form of the solution. We consider the following example to illustrate this point.
Example 25.1.4 Develop a series expansion of the solution to the initial value problem
\[ w'' + \frac{1}{z^2+1} w = 0, \qquad w(0) = 1, \quad w'(0) = 0. \]
Solution using the General Recurrence Relation. The coefficient function has the Taylor expansion
\[ \frac{1}{1+z^2} = \sum_{n=0}^\infty (-1)^n z^{2n}. \]
From the initial conditions we obtain a_0 = 1 and a_1 = 0. Thus we see that the solution is
\[ w = \sum_{n=0}^\infty a_n z^n, \]
[Figure 25.1: Plot of the numerical solution and the first three terms in the Taylor series.]
where
\[ a_{n+2} = -\frac{1}{(n+1)(n+2)} \sum_{m=0}^n a_m q_{n-m} \]
and
\[ q_n = \begin{cases} 0 & \text{for odd } n, \\ (-1)^{n/2} & \text{for even } n. \end{cases} \]
Although this formula is fine if you only want to calculate the first few a_n's, it is just a tad unwieldy to work with. Let's see if we can get a better expression for the solution.
Substitute the Taylor Series into the Differential Equation. Substituting a Taylor series for w yields
\[ \frac{d^2}{dz^2} \sum_{n=0}^\infty a_n z^n + \frac{1}{z^2+1} \sum_{n=0}^\infty a_n z^n = 0. \]
Note that the algebra will be easier if we multiply by z^2 + 1. The polynomial z^2 + 1 has only two terms, but the Taylor series for 1/(z^2+1) has an infinite number of terms.
\[ (z^2+1) \frac{d^2}{dz^2} \sum_{n=0}^\infty a_n z^n + \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=2}^\infty n(n-1) a_n z^n + \sum_{n=2}^\infty n(n-1) a_n z^{n-2} + \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=0}^\infty n(n-1) a_n z^n + \sum_{n=0}^\infty (n+2)(n+1) a_{n+2} z^n + \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=0}^\infty \left[ (n+2)(n+1) a_{n+2} + n(n-1) a_n + a_n \right] z^n = 0 \]
Equating powers of z gives us the difference equation
\[ a_{n+2} = -\frac{n^2 - n + 1}{(n+2)(n+1)} a_n, \qquad \text{for } n \geq 0. \]
From the initial conditions we see that a_0 = 1 and a_1 = 0. All of the odd terms in the series will be zero. For the even terms, it is easier to reformulate the problem with the change of variables b_n = a_{2n}. In terms of b_n the difference equation is
\[ b_{n+1} = -\frac{(2n)^2 - 2n + 1}{(2n+2)(2n+1)} b_n, \qquad b_0 = 1. \]
This is a first order difference equation with the solution
\[ b_n = \prod_{j=0}^{n-1} \left( -\frac{4j^2 - 2j + 1}{(2j+2)(2j+1)} \right). \]
Thus we have that
\[ a_n = \begin{cases} \displaystyle \prod_{j=0}^{n/2-1} \left( -\frac{4j^2 - 2j + 1}{(2j+2)(2j+1)} \right) & \text{for even } n, \\ 0 & \text{for odd } n. \end{cases} \]
Note that the nearest singularities of 1/(z^2+1) in the complex plane are at z = ±i. Thus the radius of convergence must be at least 1. Applying the ratio test, the series converges for values of |z| such that
\[ \lim_{n\to\infty} \left| \frac{a_{n+2} z^{n+2}}{a_n z^n} \right| < 1 \]
\[ \lim_{n\to\infty} \left| \frac{n^2 - n + 1}{(n+2)(n+1)} \right| |z|^2 < 1 \]
\[ |z|^2 < 1. \]
The radius of convergence is 1.
The first few terms in the Taylor expansion are
\[ w = 1 - \frac{1}{2} z^2 + \frac{1}{8} z^4 - \frac{13}{240} z^6 + \cdots. \]
In Figure 25.2 the plot of the first two nonzero terms is shown in a short dashed line, the plot of the first four nonzero terms is shown in a long dashed line, and the numerical solution is shown in a solid line.
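The first order difference equation for b_n reproduces the coefficients quoted above; a minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Sketch: generate the even coefficients b_n = a_{2n} of Example 25.1.4
# from b_{n+1} = -((2n)^2 - 2n + 1)/((2n+2)(2n+1)) b_n, b_0 = 1, and
# compare with the series w = 1 - z^2/2 + z^4/8 - 13 z^6/240 + ...
b = [Fraction(1)]
for n in range(5):
    factor = Fraction(-((2 * n) ** 2 - 2 * n + 1), (2 * n + 2) * (2 * n + 1))
    b.append(factor * b[-1])

assert b[:4] == [Fraction(1), Fraction(-1, 2), Fraction(1, 8), Fraction(-13, 240)]
```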
In general, if the coefficient functions are rational functions, that is, they are fractions of polynomials, multiplying the equation by the denominators will reduce the algebra involved in finding the series solution.
[Figure 25.2: Plot of the solution and approximations.]
Example 25.1.5 If we were going to find the Taylor series expansion about z = 0 of the solution to
\[ w'' + \frac{z}{1+z} w' + \frac{1}{1-z^2} w = 0, \]
we would first want to multiply the equation by 1 − z^2 to obtain
\[ (1-z^2) w'' + z(1-z) w' + w = 0. \]
Example 25.1.6 Find the series expansions about z = 0 of the fundamental set of solutions for
\[ w'' + z^2 w = 0. \]
Recall that the fundamental set of solutions w_1, w_2 satisfy
\[ w_1(0) = 1, \quad w_2(0) = 0, \]
\[ w_1'(0) = 0, \quad w_2'(0) = 1. \]
Thus if
\[ w_1 = \sum_{n=0}^\infty a_n z^n \quad \text{and} \quad w_2 = \sum_{n=0}^\infty b_n z^n, \]
then the coefficients must satisfy
\[ a_0 = 1, \quad a_1 = 0, \quad \text{and} \quad b_0 = 0, \quad b_1 = 1. \]
Substituting the Taylor expansion w = \sum_{n=0}^\infty c_n z^n into the differential equation,
\[ \sum_{n=2}^\infty n(n-1) c_n z^{n-2} + \sum_{n=0}^\infty c_n z^{n+2} = 0 \]
\[ \sum_{n=0}^\infty (n+2)(n+1) c_{n+2} z^n + \sum_{n=2}^\infty c_{n-2} z^n = 0 \]
\[ 2c_2 + 6c_3 z + \sum_{n=2}^\infty \left[ (n+2)(n+1) c_{n+2} + c_{n-2} \right] z^n = 0 \]
Equating coefficients of powers of z,
\[ z^0: \quad c_2 = 0 \]
\[ z^1: \quad c_3 = 0 \]
\[ z^n: \quad (n+2)(n+1) c_{n+2} + c_{n-2} = 0, \quad \text{for } n \geq 2 \]
\[ c_{n+4} = -\frac{c_n}{(n+4)(n+3)} \]
For our first solution we have the difference equation
\[ a_0 = 1, \quad a_1 = 0, \quad a_2 = 0, \quad a_3 = 0, \quad a_{n+4} = -\frac{a_n}{(n+4)(n+3)}. \]
For our second solution,
\[ b_0 = 0, \quad b_1 = 1, \quad b_2 = 0, \quad b_3 = 0, \quad b_{n+4} = -\frac{b_n}{(n+4)(n+3)}. \]
The first few terms in the fundamental set of solutions are
\[ w_1 = 1 - \frac{1}{12} z^4 + \frac{1}{672} z^8 - \cdots, \qquad w_2 = z - \frac{1}{20} z^5 + \frac{1}{1440} z^9 - \cdots. \]
In Figure 25.3 the five term approximation is graphed in a coarse dashed line, the ten term approximation is graphed in a fine dashed line, and the numerical solution of w_1 is graphed in a solid line. The same is done for w_2.
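The recurrence c_{n+4} = −c_n/((n+4)(n+3)) builds both fundamental solutions; a minimal sketch checking the leading terms quoted above:

```python
from fractions import Fraction

# Sketch: build the fundamental solutions of w'' + z^2 w = 0 from the
# recurrence c_{n+4} = -c_n / ((n+4)(n+3)) and check the leading terms
# quoted in Example 25.1.6.
def coefficients(c0, c1, terms=10):
    c = [Fraction(c0), Fraction(c1), Fraction(0), Fraction(0)]
    for n in range(terms - 4):
        c.append(-c[n] / ((n + 4) * (n + 3)))
    return c

a = coefficients(1, 0)  # w_1
b = coefficients(0, 1)  # w_2
assert a[4] == Fraction(-1, 12) and a[8] == Fraction(1, 672)
assert b[5] == Fraction(-1, 20) and b[9] == Fraction(1, 1440)
```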
Result 25.1.1 Consider the nth order linear homogeneous equation
\[ \frac{d^n w}{dz^n} + p_{n-1}(z) \frac{d^{n-1} w}{dz^{n-1}} + \cdots + p_1(z) \frac{dw}{dz} + p_0(z) w = 0. \]
If each of the coefficient functions p_i(z) is analytic at z = z_0 then z_0 is an ordinary point of the differential equation. The solution is analytic in some region containing z_0 and can be expanded in a Taylor series. The radius of convergence of the series will be at least the distance to the nearest singularity of the coefficient functions in the complex plane.
[Figure 25.3: The graph of approximations and numerical solution of w_1 and w_2.]
25.2 Regular Singular Points of Second Order Equations
Consider the differential equation
\[ w'' + \frac{p(z)}{z - z_0} w' + \frac{q(z)}{(z - z_0)^2} w = 0. \]
If z = z_0 is not an ordinary point but both p(z) and q(z) are analytic at z = z_0 then z_0 is a regular singular point of the differential equation. The following equations have a regular singular point at z = 0.
\[ w'' + \frac{1}{z} w' + z^2 w = 0 \]
\[ w'' + \frac{1}{\sin z} w' - w = 0 \]
\[ w'' - z w' + \frac{1}{z \sin z} w = 0 \]
Concerning regular singular points of second order linear equations there is good news and bad news.
The Good News. We will find that with the use of the Frobenius method we can always find series expansions of two linearly independent solutions at a regular singular point. We will illustrate this theory with several examples.
The Bad News. Instead of a tidy little theory like we have for ordinary points, the solutions can be of several different forms. Also, for some of the problems the algebra can get pretty ugly.
Example 25.2.1 Consider the equation
\[ w'' + \frac{3(1+z)}{16 z^2} w = 0. \]
We wish to find series solutions about the point z = 0. First we try a Taylor series w = \sum_{n=0}^\infty a_n z^n. Substituting this into the differential equation,
\[ z^2 \sum_{n=2}^\infty n(n-1) a_n z^{n-2} + \frac{3}{16} (1+z) \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=0}^\infty n(n-1) a_n z^n + \frac{3}{16} \sum_{n=0}^\infty a_n z^n + \frac{3}{16} \sum_{n=1}^\infty a_{n-1} z^n = 0. \]
Equating powers of z,
\[ z^0: \quad a_0 = 0 \]
\[ z^n: \quad \left[ n(n-1) + \frac{3}{16} \right] a_n + \frac{3}{16} a_{n-1} = 0, \qquad a_n = -\frac{3}{16 \, n(n-1) + 3} a_{n-1}. \]
This difference equation has the solution a_n = 0 for all n. Thus we have obtained only the trivial solution to the differential equation. We must try an expansion of a more general form. We recall that for regular singular points of first order equations we can always find a solution in the form of a Frobenius series w = z^λ \sum_{n=0}^\infty a_n z^n, a_0 ≠ 0. We substitute this series into the differential equation.
\[ z^2 \sum_{n=0}^\infty \left[ \lambda(\lambda-1) + 2\lambda n + n(n-1) \right] a_n z^{n+\lambda-2} + \frac{3}{16} (1+z) z^\lambda \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \sum_{n=0}^\infty \left[ \lambda(\lambda-1) + 2\lambda n + n(n-1) \right] a_n z^n + \frac{3}{16} \sum_{n=0}^\infty a_n z^n + \frac{3}{16} \sum_{n=1}^\infty a_{n-1} z^n = 0 \]
Equating the z^0 term to zero yields the equation
\[ \left[ \lambda(\lambda-1) + \frac{3}{16} \right] a_0 = 0. \]
Since we have assumed that a_0 ≠ 0, the polynomial in λ must be zero. The two roots of the polynomial are
\[ \lambda_1 = \frac{1 + \sqrt{1 - 3/4}}{2} = \frac{3}{4}, \qquad \lambda_2 = \frac{1 - \sqrt{1 - 3/4}}{2} = \frac{1}{4}. \]
Thus our two series solutions will be of the form
\[ w_1 = z^{3/4} \sum_{n=0}^\infty a_n z^n, \qquad w_2 = z^{1/4} \sum_{n=0}^\infty b_n z^n. \]
Substituting the first series into the differential equation,
\[ \sum_{n=0}^\infty \left[ -\frac{3}{16} + 2\lambda_1 n + n(n-1) + \frac{3}{16} \right] a_n z^n + \frac{3}{16} \sum_{n=1}^\infty a_{n-1} z^n = 0. \]
Equating powers of z, we see that a_0 is arbitrary and
\[ a_n = -\frac{3}{16 \, n(n+1)} a_{n-1} \quad \text{for } n \geq 1. \]
This difference equation has the solution
\[ a_n = a_0 \prod_{j=1}^n \left( -\frac{3}{16 \, j(j+1)} \right) = a_0 \left( -\frac{3}{16} \right)^n \prod_{j=1}^n \frac{1}{j(j+1)} = a_0 \left( -\frac{3}{16} \right)^n \frac{1}{n! \, (n+1)!} \quad \text{for } n \geq 1. \]
Substituting the second series into the differential equation,
\[ \sum_{n=0}^\infty \left[ -\frac{3}{16} + 2\lambda_2 n + n(n-1) + \frac{3}{16} \right] b_n z^n + \frac{3}{16} \sum_{n=1}^\infty b_{n-1} z^n = 0. \]
We see that the difference equation for b_n is the same as the equation for a_n. Thus we can write the general solution to the differential equation as
\[ w = c_1 z^{3/4} \left[ 1 + \sum_{n=1}^\infty \left( -\frac{3}{16} \right)^n \frac{1}{n!(n+1)!} z^n \right] + c_2 z^{1/4} \left[ 1 + \sum_{n=1}^\infty \left( -\frac{3}{16} \right)^n \frac{1}{n!(n+1)!} z^n \right] \]
\[ = \left( c_1 z^{3/4} + c_2 z^{1/4} \right) \left[ 1 + \sum_{n=1}^\infty \left( -\frac{3}{16} \right)^n \frac{1}{n!(n+1)!} z^n \right]. \]
25.2.1 Indicial Equation
Now let's consider the general equation for a regular singular point at z = 0,
\[ w'' + \frac{p(z)}{z} w' + \frac{q(z)}{z^2} w = 0. \]
Since p(z) and q(z) are analytic at z = 0 we can expand them in Taylor series.
\[ p(z) = \sum_{n=0}^\infty p_n z^n, \qquad q(z) = \sum_{n=0}^\infty q_n z^n \]
Substituting a Frobenius series w = z^λ \sum_{n=0}^\infty a_n z^n, a_0 ≠ 0, and the Taylor series for p(z) and q(z) into the differential equation yields
\[ \sum_{n=0}^\infty (\lambda+n)(\lambda+n-1) a_n z^n + \left( \sum_{n=0}^\infty p_n z^n \right) \left( \sum_{n=0}^\infty (\lambda+n) a_n z^n \right) + \left( \sum_{n=0}^\infty q_n z^n \right) \left( \sum_{n=0}^\infty a_n z^n \right) = 0 \]
\[ \sum_{n=0}^\infty \left[ (\lambda+n)^2 - (\lambda+n) + p_0 (\lambda+n) + q_0 \right] a_n z^n + \left( \sum_{n=1}^\infty p_n z^n \right) \left( \sum_{n=0}^\infty (\lambda+n) a_n z^n \right) + \left( \sum_{n=1}^\infty q_n z^n \right) \left( \sum_{n=0}^\infty a_n z^n \right) = 0 \]
\[ \sum_{n=0}^\infty \left[ (\lambda+n)^2 + (p_0-1)(\lambda+n) + q_0 \right] a_n z^n + \sum_{n=1}^\infty \left[ \sum_{j=0}^{n-1} (\lambda+j) a_j p_{n-j} \right] z^n + \sum_{n=1}^\infty \left[ \sum_{j=0}^{n-1} a_j q_{n-j} \right] z^n = 0 \]
Equating powers of z,
\[ z^0: \quad \left[ \lambda^2 + (p_0-1)\lambda + q_0 \right] a_0 = 0 \]
\[ z^n: \quad \left[ (\lambda+n)^2 + (p_0-1)(\lambda+n) + q_0 \right] a_n = -\sum_{j=0}^{n-1} \left[ (\lambda+j) p_{n-j} + q_{n-j} \right] a_j. \]
Let
\[ I(\lambda) = \lambda^2 + (p_0-1)\lambda + q_0 = 0. \]
This is known as the indicial equation. The indicial equation gives us the form of the solutions. The equation for a_0 is I(λ) a_0 = 0. Since we assumed that a_0 is nonzero, I(λ) = 0. Let the two roots of I(λ) be λ_1 and λ_2 where ℜ(λ_1) ≥ ℜ(λ_2).
Rewriting the difference equation for a_n(λ),
\[ I(\lambda+n) a_n(\lambda) = -\sum_{j=0}^{n-1} \left[ (\lambda+j) p_{n-j} + q_{n-j} \right] a_j(\lambda) \quad \text{for } n \geq 1. \tag{25.1} \]
If the roots are distinct and do not differ by an integer then we can use Equation 25.1 to solve for a_n(λ_1) and a_n(λ_2), which will give us the two solutions
\[ w_1 = z^{\lambda_1} \sum_{n=0}^\infty a_n(\lambda_1) z^n, \quad \text{and} \quad w_2 = z^{\lambda_2} \sum_{n=0}^\infty a_n(\lambda_2) z^n. \]
If the roots are not distinct, λ_1 = λ_2, we will only have one solution and will have to generate another. If the roots differ by an integer, λ_1 − λ_2 = N, there is one solution corresponding to λ_1, but when we try to solve Equation 25.1 for a_n(λ_2), we will encounter the equation
\[ I(\lambda_2+N) a_N(\lambda_2) = I(\lambda_1) a_N(\lambda_2) = 0 \cdot a_N(\lambda_2) = -\sum_{j=0}^{N-1} \left[ (\lambda_2+j) p_{N-j} + q_{N-j} \right] a_j(\lambda_2). \]
If the right side of the equation is nonzero, then a_N(λ_2) is undefined. On the other hand, if the right side is zero then a_N(λ_2) is arbitrary. The rest of this section is devoted to considering the cases λ_1 = λ_2 and λ_1 − λ_2 = N.
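The indicial equation itself is just a quadratic in λ; a minimal sketch of solving it (the ordering convention ℜ(λ_1) ≥ ℜ(λ_2) follows the text):

```python
import cmath

# Sketch: solve the indicial equation I(l) = l^2 + (p0 - 1) l + q0 = 0
# for the exponents at a regular singular point z = 0.
def indicial_roots(p0, q0):
    disc = cmath.sqrt((p0 - 1) ** 2 - 4 * q0)
    l1 = (-(p0 - 1) + disc) / 2
    l2 = (-(p0 - 1) - disc) / 2
    # Order so that Re(l1) >= Re(l2), as in the text.
    return (l1, l2) if l1.real >= l2.real else (l2, l1)

# Example 25.2.2 below: p(z) = 0, q(z) = (1+z)/4, so p0 = 0, q0 = 1/4,
# which gives the double root 1/2.
assert indicial_roots(0, 0.25) == (0.5, 0.5)
```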
25.2.2 The Case: Double Root
Consider a second order equation L[w] = 0 with a regular singular point at z = 0. Suppose the indicial equation has a double root.
\[ I(\lambda) = (\lambda - \lambda_1)^2 = 0 \]
One solution has the form
\[ w_1 = z^{\lambda_1} \sum_{n=0}^\infty a_n z^n. \]
In order to find the second solution, we will differentiate with respect to the parameter λ. Let a_n(λ) satisfy Equation 25.1. Substituting the Frobenius expansion into the differential equation,
\[ L\left[ z^\lambda \sum_{n=0}^\infty a_n(\lambda) z^n \right] = 0. \]
Setting λ = λ_1 will make the left hand side of the equation zero. Differentiating this equation with respect to λ,
\[ \frac{\partial}{\partial\lambda} L\left[ z^\lambda \sum_{n=0}^\infty a_n(\lambda) z^n \right] = 0. \]
Interchanging the order of differentiation,
\[ L\left[ \log z \, z^\lambda \sum_{n=0}^\infty a_n(\lambda) z^n + z^\lambda \sum_{n=0}^\infty \frac{d a_n(\lambda)}{d\lambda} z^n \right] = 0. \]
Since setting λ = λ_1 will make the left hand side of this equation zero, the second linearly independent solution is
\[ w_2 = \log z \, z^{\lambda_1} \sum_{n=0}^\infty a_n(\lambda_1) z^n + z^{\lambda_1} \sum_{n=0}^\infty \left. \frac{d a_n(\lambda)}{d\lambda} \right|_{\lambda=\lambda_1} z^n \]
\[ w_2 = w_1 \log z + z^{\lambda_1} \sum_{n=0}^\infty a_n'(\lambda_1) z^n. \]
Example 25.2.2 Consider the differential equation
\[ w'' + \frac{1+z}{4z^2} w = 0. \]
There is a regular singular point at z = 0. The indicial equation is
\[ \lambda(\lambda-1) + \frac{1}{4} = \left( \lambda - \frac{1}{2} \right)^2 = 0. \]
One solution will have the form
\[ w_1 = z^{1/2} \sum_{n=0}^\infty a_n z^n, \qquad a_0 \neq 0. \]
Substituting the Frobenius expansion
\[ z^\lambda \sum_{n=0}^\infty a_n(\lambda) z^n \]
into the differential equation yields
\[ z^2 w'' + \frac{1}{4}(1+z) w = 0 \]
\[ \sum_{n=0}^\infty \left[ \lambda(\lambda-1) + 2\lambda n + n(n-1) \right] a_n(\lambda) z^{n+\lambda} + \frac{1}{4} \sum_{n=0}^\infty a_n(\lambda) z^{n+\lambda} + \frac{1}{4} \sum_{n=0}^\infty a_n(\lambda) z^{n+\lambda+1} = 0. \]
Divide by z^λ and adjust the summation indices.
\[ \sum_{n=0}^\infty \left[ \lambda(\lambda-1) + 2\lambda n + n(n-1) \right] a_n(\lambda) z^n + \frac{1}{4} \sum_{n=0}^\infty a_n(\lambda) z^n + \frac{1}{4} \sum_{n=1}^\infty a_{n-1}(\lambda) z^n = 0 \]
\[ \left[ \lambda(\lambda-1) + \frac{1}{4} \right] a_0 + \sum_{n=1}^\infty \left\{ \left[ \lambda(\lambda-1) + 2\lambda n + n(n-1) + \frac{1}{4} \right] a_n(\lambda) + \frac{1}{4} a_{n-1}(\lambda) \right\} z^n = 0 \]
Equating the coefficient of z^0 to zero yields I(λ) a_0 = 0. Equating the coefficients of z^n to zero yields the difference equation
\[ \left[ \lambda(\lambda-1) + 2\lambda n + n(n-1) + \frac{1}{4} \right] a_n(\lambda) + \frac{1}{4} a_{n-1}(\lambda) = 0 \]
\[ a_n(\lambda) = -\left[ \frac{n(n+1)}{4} + \lambda(\lambda-1) + \frac{1}{16} \right] a_{n-1}(\lambda). \]
The first few a_n's are
\[ a_0, \quad -\left[ \lambda(\lambda-1) + \frac{9}{16} \right] a_0, \quad \left[ \lambda(\lambda-1) + \frac{25}{16} \right] \left[ \lambda(\lambda-1) + \frac{9}{16} \right] a_0, \quad \ldots \]
Setting λ = 1/2, the coefficients for the first solution are
\[ a_0, \quad -\frac{5}{16} a_0, \quad \frac{105}{256} a_0, \quad \ldots \]
The second solution has the form
\[ w_2 = w_1 \log z + z^{1/2} \sum_{n=0}^\infty a_n'(1/2) z^n. \]
Differentiating the a_n(λ),
\[ \frac{d a_0}{d\lambda} = 0, \qquad \frac{d a_1(\lambda)}{d\lambda} = -(2\lambda-1) a_0, \qquad \frac{d a_2(\lambda)}{d\lambda} = (2\lambda-1) \left\{ \left[ \lambda(\lambda-1) + \frac{9}{16} \right] + \left[ \lambda(\lambda-1) + \frac{25}{16} \right] \right\} a_0, \quad \ldots \]
Setting λ = 1/2 in these equations yields
\[ a_0' = 0, \quad a_1'(1/2) = 0, \quad a_2'(1/2) = 0, \quad \ldots \]
Thus the second solution is
\[ w_2 = w_1 \log z. \]
The first few terms in the general solution are
\[ (c_1 + c_2 \log z) \, z^{1/2} \left( 1 - \frac{5}{16} z + \frac{105}{256} z^2 - \cdots \right). \]
25.2.3 The Case: Roots Differ by an Integer
Consider the case in which the roots of the indicial equation λ_1 and λ_2 differ by an integer (λ_1 − λ_2 = N). Recall the equation that determines a_n(λ),
\[ I(\lambda+n) a_n = \left[ (\lambda+n)^2 + (p_0-1)(\lambda+n) + q_0 \right] a_n = -\sum_{j=0}^{n-1} \left[ (\lambda+j) p_{n-j} + q_{n-j} \right] a_j. \]
When λ = λ_2 the equation for a_N is
\[ I(\lambda_2+N) a_N(\lambda_2) = 0 \cdot a_N(\lambda_2) = -\sum_{j=0}^{N-1} \left[ (\lambda_2+j) p_{N-j} + q_{N-j} \right] a_j. \]
If the right hand side of this equation is zero, then a_N is arbitrary. There will be two solutions of the Frobenius form.
\[ w_1 = z^{\lambda_1} \sum_{n=0}^\infty a_n(\lambda_1) z^n \quad \text{and} \quad w_2 = z^{\lambda_2} \sum_{n=0}^\infty a_n(\lambda_2) z^n. \]
If the right hand side of the equation is nonzero then a_N(λ_2) will be undefined. We will have to generate the second solution. Let
\[ w(z, \lambda) = z^\lambda \sum_{n=0}^\infty a_n(\lambda) z^n, \]
where a_n(λ) satisfies the recurrence formula. Substituting this series into the differential equation yields
\[ L[w(z,\lambda)] = 0. \]
We will multiply by (λ − λ_2), differentiate this equation with respect to λ and then set λ = λ_2. This will generate a linearly independent solution.
\[ \frac{\partial}{\partial\lambda} L[(\lambda-\lambda_2) w(z,\lambda)] = L\left[ \frac{\partial}{\partial\lambda} \left( (\lambda-\lambda_2) w(z,\lambda) \right) \right] = L\left[ \frac{\partial}{\partial\lambda} \left( (\lambda-\lambda_2) z^\lambda \sum_{n=0}^\infty a_n(\lambda) z^n \right) \right] \]
\[ = L\left[ \log z \, z^\lambda \sum_{n=0}^\infty (\lambda-\lambda_2) a_n(\lambda) z^n + z^\lambda \sum_{n=0}^\infty \frac{d}{d\lambda} \left[ (\lambda-\lambda_2) a_n(\lambda) \right] z^n \right] \]
Setting λ = λ_2 will make this expression zero, thus
\[ \log z \, z^{\lambda_2} \sum_{n=0}^\infty \lim_{\lambda\to\lambda_2} \left[ (\lambda-\lambda_2) a_n(\lambda) \right] z^n + z^{\lambda_2} \sum_{n=0}^\infty \lim_{\lambda\to\lambda_2} \left\{ \frac{d}{d\lambda} \left[ (\lambda-\lambda_2) a_n(\lambda) \right] \right\} z^n \]
is a solution. Now let's look at the first term in this solution,
\[ \log z \, z^{\lambda_2} \sum_{n=0}^\infty \lim_{\lambda\to\lambda_2} \left[ (\lambda-\lambda_2) a_n(\lambda) \right] z^n. \]
The first N terms in the sum will be zero. That is because a_0, ..., a_{N-1} are finite, so multiplying by (λ − λ_2) and taking the limit as λ → λ_2 will make the coefficients vanish. The equation for a_N(λ) is
\[ I(\lambda+N) a_N(\lambda) = -\sum_{j=0}^{N-1} \left[ (\lambda+j) p_{N-j} + q_{N-j} \right] a_j(\lambda). \]
Thus the coefficient of the Nth term is
\[ \lim_{\lambda\to\lambda_2} (\lambda-\lambda_2) a_N(\lambda) = \lim_{\lambda\to\lambda_2} \left[ -\frac{\lambda-\lambda_2}{I(\lambda+N)} \sum_{j=0}^{N-1} \left[ (\lambda+j) p_{N-j} + q_{N-j} \right] a_j(\lambda) \right] \]
\[ = \lim_{\lambda\to\lambda_2} \left[ -\frac{\lambda-\lambda_2}{(\lambda+N-\lambda_1)(\lambda+N-\lambda_2)} \sum_{j=0}^{N-1} \left[ (\lambda+j) p_{N-j} + q_{N-j} \right] a_j(\lambda) \right]. \]
Since λ_1 = λ_2 + N, lim_{λ→λ_2} (λ − λ_2)/(λ + N − λ_1) = 1, so this is
\[ = -\frac{1}{\lambda_1 - \lambda_2} \sum_{j=0}^{N-1} \left[ (\lambda_2+j) p_{N-j} + q_{N-j} \right] a_j(\lambda_2). \]
Using this you can show that the first term in the solution can be written
\[ d_{-1} \log z \, w_1, \]
where d_{-1} is a constant. Thus the second linearly independent solution is
\[ w_2 = d_{-1} \log z \, w_1 + z^{\lambda_2} \sum_{n=0}^\infty d_n z^n, \]
where
\[ d_{-1} = -\frac{1}{a_0} \, \frac{1}{\lambda_1 - \lambda_2} \sum_{j=0}^{N-1} \left[ (\lambda_2+j) p_{N-j} + q_{N-j} \right] a_j(\lambda_2) \]
and
\[ d_n = \lim_{\lambda\to\lambda_2} \left\{ \frac{d}{d\lambda} \left[ (\lambda-\lambda_2) a_n(\lambda) \right] \right\} \quad \text{for } n \geq 0. \]
Example 25.2.3 Consider the differential equation
\[ w'' + \left( 1 - \frac{2}{z} \right) w' + \frac{2}{z^2} w = 0. \]
The point z = 0 is a regular singular point. In order to find series expansions of the solutions, we first calculate the indicial equation. We can write the coefficient functions in the form
\[ \frac{p(z)}{z} = \frac{1}{z} (-2+z), \qquad \frac{q(z)}{z^2} = \frac{1}{z^2} (2). \]
Thus the indicial equation is
\[ \lambda^2 + (-2-1)\lambda + 2 = 0 \]
\[ (\lambda-1)(\lambda-2) = 0. \]
The First Solution. The first solution will have the Frobenius form
\[ w_1 = z^2 \sum_{n=0}^\infty a_n(\lambda_1) z^n. \]
Substituting a Frobenius series into the differential equation,
\[ z^2 w'' + (z^2 - 2z) w' + 2w = 0 \]
\[ \sum_{n=0}^\infty (n+\lambda)(n+\lambda-1) a_n z^{n+\lambda} + (z^2 - 2z) \sum_{n=0}^\infty (n+\lambda) a_n z^{n+\lambda-1} + 2 \sum_{n=0}^\infty a_n z^{n+\lambda} = 0 \]
\[ \left[ \lambda^2 - 3\lambda + 2 \right] a_0 + \sum_{n=1}^\infty \left[ (n+\lambda)(n+\lambda-1) a_n + (n+\lambda-1) a_{n-1} - 2(n+\lambda) a_n + 2 a_n \right] z^n = 0. \]
Equating powers of z,
\[ \left[ (n+\lambda)(n+\lambda-1) - 2(n+\lambda) + 2 \right] a_n = -(n+\lambda-1) a_{n-1} \]
\[ a_n = -\frac{a_{n-1}}{n+\lambda-2}. \]
Setting λ = λ_1 = 2, the recurrence relation becomes
\[ a_n(\lambda_1) = -\frac{a_{n-1}(\lambda_1)}{n} = a_0 \frac{(-1)^n}{n!}. \]
The first solution is
\[ w_1 = a_0 z^2 \sum_{n=0}^\infty \frac{(-1)^n}{n!} z^n = a_0 z^2 e^{-z}. \]
The Second Solution. The equation for a_1(λ_2) is
\[ 0 \cdot a_1(\lambda_2) = -a_0. \]
Since the right hand side of this equation is not zero, the second solution will have the form
\[ w_2 = d_{-1} \log z \, w_1 + z^{\lambda_2} \sum_{n=0}^\infty \lim_{\lambda\to\lambda_2} \left\{ \frac{d}{d\lambda} \left[ (\lambda-\lambda_2) a_n(\lambda) \right] \right\} z^n. \]
First we will calculate d_{-1} as we defined it previously.
\[ d_{-1} = -\frac{1}{a_0} \cdot \frac{1}{2-1} \cdot a_0 = -1. \]
The expression for a_n(λ) is
\[ a_n(\lambda) = \frac{(-1)^n a_0}{(\lambda+n-2)(\lambda+n-3)\cdots(\lambda-1)}. \]
The first few a_n(λ) are
\[ a_1(\lambda) = -\frac{a_0}{\lambda-1}, \qquad a_2(\lambda) = \frac{a_0}{\lambda(\lambda-1)}, \qquad a_3(\lambda) = -\frac{a_0}{(\lambda+1)\lambda(\lambda-1)}. \]
We would like to calculate
\[ d_n = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ (\lambda-1) a_n(\lambda) \right] \right\}. \]
The first few d_n are
\[ d_0 = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ (\lambda-1) a_0 \right] \right\} = a_0 \]
\[ d_1 = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ (\lambda-1) \left( -\frac{a_0}{\lambda-1} \right) \right] \right\} = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ -a_0 \right] \right\} = 0 \]
\[ d_2 = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ (\lambda-1) \frac{a_0}{\lambda(\lambda-1)} \right] \right\} = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ \frac{a_0}{\lambda} \right] \right\} = -a_0 \]
\[ d_3 = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ (\lambda-1) \left( -\frac{a_0}{(\lambda+1)\lambda(\lambda-1)} \right) \right] \right\} = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ -\frac{a_0}{\lambda(\lambda+1)} \right] \right\} = \frac{3}{4} a_0. \]
It will take a little work to find the general expression for d_n. We will need the following relations.
\[ \Gamma(n) = (n-1)!, \qquad \Gamma'(z) = \Gamma(z)\psi(z), \qquad \psi(n) = -\gamma + \sum_{k=1}^{n-1} \frac{1}{k}. \]
See the chapter on the Gamma function for explanations of these equations.
\[ d_n = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ (\lambda-1) \frac{(-1)^n a_0}{(\lambda+n-2)(\lambda+n-3)\cdots(\lambda-1)} \right] \right\} \]
\[ = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ \frac{(-1)^n a_0}{(\lambda+n-2)(\lambda+n-3)\cdots\lambda} \right] \right\} \]
\[ = \lim_{\lambda\to 1} \left\{ \frac{d}{d\lambda} \left[ \frac{(-1)^n a_0 \, \Gamma(\lambda)}{\Gamma(\lambda+n-1)} \right] \right\} \]
\[ = (-1)^n a_0 \lim_{\lambda\to 1} \left[ \frac{\Gamma(\lambda)\psi(\lambda)}{\Gamma(\lambda+n-1)} - \frac{\Gamma(\lambda)\psi(\lambda+n-1)}{\Gamma(\lambda+n-1)} \right] \]
\[ = (-1)^n a_0 \lim_{\lambda\to 1} \left[ \frac{\Gamma(\lambda) \left[ \psi(\lambda) - \psi(\lambda+n-1) \right]}{\Gamma(\lambda+n-1)} \right] \]
\[ = (-1)^n a_0 \frac{\psi(1) - \psi(n)}{(n-1)!} = \frac{(-1)^{n+1} a_0}{(n-1)!} \sum_{k=1}^{n-1} \frac{1}{k} \]
Thus the second solution is
\[ w_2 = -w_1 \log z + z \sum_{n=0}^\infty \left[ \frac{(-1)^{n+1} a_0}{(n-1)!} \sum_{k=1}^{n-1} \frac{1}{k} \right] z^n. \]
The general solution is
\[ w = c_1 z^2 e^{-z} - c_2 z^2 e^{-z} \log z + c_2 z \sum_{n=0}^\infty \left[ \frac{(-1)^{n+1}}{(n-1)!} \sum_{k=1}^{n-1} \frac{1}{k} \right] z^n. \]
We see that even in problems that are chosen for their simplicity, the algebra involved in the Frobenius method can be pretty involved.
Example 25.2.4 Consider a series expansion about the origin of the equation
\[ w'' + \frac{1-z}{z} w' - \frac{1}{z^2} w = 0. \]
The indicial equation is
\[ \lambda^2 - 1 = 0, \qquad \lambda = \pm 1. \]
Substituting a Frobenius series into the differential equation,
\[ z^2 \sum_{n=0}^\infty (n+\lambda)(n+\lambda-1) a_n z^{n+\lambda-2} + (z - z^2) \sum_{n=0}^\infty (n+\lambda) a_n z^{n+\lambda-1} - \sum_{n=0}^\infty a_n z^{n+\lambda} = 0 \]
\[ \sum_{n=0}^\infty (n+\lambda)(n+\lambda-1) a_n z^n + \sum_{n=0}^\infty (n+\lambda) a_n z^n - \sum_{n=1}^\infty (n+\lambda-1) a_{n-1} z^n - \sum_{n=0}^\infty a_n z^n = 0 \]
\[ \left[ \lambda(\lambda-1) + \lambda - 1 \right] a_0 + \sum_{n=1}^\infty \left[ (n+\lambda)(n+\lambda-1) a_n + (n+\lambda) a_n - (n+\lambda-1) a_{n-1} - a_n \right] z^n = 0. \]
Equating powers of z to zero,
\[ a_n(\lambda) = \frac{a_{n-1}(\lambda)}{n+\lambda+1}. \]
We know that the first solution has the form
\[ w_1 = z \sum_{n=0}^\infty a_n z^n. \]
Setting λ = 1 in the recurrence formula,
\[ a_n = \frac{a_{n-1}}{n+2} = \frac{2 a_0}{(n+2)!}. \]
Thus the first solution is
\[ w_1 = z \sum_{n=0}^\infty \frac{2 a_0}{(n+2)!} z^n = 2 a_0 \frac{1}{z} \sum_{n=0}^\infty \frac{z^{n+2}}{(n+2)!} = \frac{2 a_0}{z} \left( \sum_{n=0}^\infty \frac{z^n}{n!} - 1 - z \right) = \frac{2 a_0}{z} \left( e^z - 1 - z \right). \]
Now to find the second solution. Setting λ = −1 in the recurrence formula,
\[ a_n = \frac{a_{n-1}}{n} = \frac{a_0}{n!}. \]
We see that in this case there is no trouble in defining a_2(λ_2). The second solution is
\[ w_2 = \frac{a_0}{z} \sum_{n=0}^\infty \frac{z^n}{n!} = \frac{a_0}{z} e^z. \]
Thus we see that the general solution is
\[ w = \frac{c_1}{z} \left( e^z - 1 - z \right) + \frac{c_2}{z} e^z \]
\[ w = \frac{d_1}{z} e^z + d_2 \left( 1 + \frac{1}{z} \right). \]
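Both closed forms found in this example can be checked numerically with finite differences; a minimal sketch:

```python
import math

# Sketch: check that w1 = (e^z - 1 - z)/z and w2 = e^z/z from
# Example 25.2.4 satisfy w'' + ((1 - z)/z) w' - (1/z^2) w = 0,
# using central differences for the derivatives.
def residual(w, z, h=1e-4):
    w2d = (w(z + h) - 2 * w(z) + w(z - h)) / h**2
    w1d = (w(z + h) - w(z - h)) / (2 * h)
    return w2d + (1 - z) / z * w1d - w(z) / z**2

w1 = lambda z: (math.exp(z) - 1 - z) / z
w2 = lambda z: math.exp(z) / z

for z in [0.5, 1.0, 2.0, 3.0]:
    assert abs(residual(w1, z)) < 1e-4
    assert abs(residual(w2, z)) < 1e-4
```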
25.3 Irregular Singular Points
If a point z_0 of a differential equation is not ordinary or regular singular, then it is an irregular singular point. At least one of the solutions at an irregular singular point will not be of the Frobenius form. We will examine how to obtain series expansions about an irregular singular point in the chapter on asymptotic expansions.
25.4 The Point at Infinity
If we want to determine the behavior of a function f(z) at infinity, we can make the transformation t = 1/z and examine the point t = 0.
Example 25.4.1 Consider the behavior of f(z) = sin z at infinity. This is the same as considering the point t = 0 of sin(1/t), which has the series expansion
\[ \sin\left( \frac{1}{t} \right) = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)! \, t^{2n+1}}. \]
Thus we see that the point t = 0 is an essential singularity of sin(1/t). Hence sin z has an essential singularity at z = ∞.
Example 25.4.2 Consider the behavior at infinity of z e^{1/z}. With the transformation t = 1/z the function is
\[ \frac{1}{t} e^t = \frac{1}{t} \sum_{n=0}^\infty \frac{t^n}{n!}. \]
Thus z e^{1/z} has a pole of order 1 at infinity.
In order to classify the point at infinity of a differential equation in w(z), we apply the transformation t = 1/z, u(t) = w(z). Writing the derivatives with respect to z in terms of t yields
\[ z = \frac{1}{t}, \qquad dz = -\frac{1}{t^2} \, dt, \]
\[ \frac{d}{dz} = -t^2 \frac{d}{dt}, \]
\[ \frac{d^2}{dz^2} = -t^2 \frac{d}{dt} \left( -t^2 \frac{d}{dt} \right) = t^4 \frac{d^2}{dt^2} + 2 t^3 \frac{d}{dt}. \]
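The operator identity above can be confirmed numerically: for u(t) = w(1/t), the quantity t^4 u'' + 2t^3 u' should equal d^2 w/dz^2 at z = 1/t. A minimal sketch using w(z) = sin z as the test function:

```python
import math

# Sketch: numerically confirm d^2 w/dz^2 = t^4 u'' + 2 t^3 u'
# for u(t) = w(1/t), at corresponding points z = 1/t.
def diff1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def diff2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

w = math.sin
u = lambda t: w(1.0 / t)

for t in [0.5, 0.8, 1.3]:
    z = 1.0 / t
    lhs = diff2(w, z)                                  # d^2 w / dz^2
    rhs = t**4 * diff2(u, t) + 2 * t**3 * diff1(u, t)  # transformed form
    assert abs(lhs - rhs) < 1e-4
```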
Applying the transformation to the differential equation
\[ w'' + p(z) w' + q(z) w = 0 \]
yields
\[ t^4 u'' + 2 t^3 u' + p(1/t)(-t^2) u' + q(1/t) u = 0 \]
\[ u'' + \left( \frac{2}{t} - \frac{p(1/t)}{t^2} \right) u' + \frac{q(1/t)}{t^4} u = 0. \]
Example 25.4.3 Classify the singular points of the differential equation
\[ w'' + \frac{1}{z} w' + 2w = 0. \]
There is a regular singular point at z = 0. To examine the point at infinity we make the transformation t = 1/z, u(t) = w(z). The equation in u is
\[ u'' + \left( \frac{2}{t} - \frac{1}{t} \right) u' + \frac{2}{t^4} u = 0 \]
\[ u'' + \frac{1}{t} u' + \frac{2}{t^4} u = 0. \]
Thus we see that the differential equation for w(z) has an irregular singular point at infinity.
25.5 Exercises
Exercise 25.1 (mathematica/ode/series/series.nb)
f(x) satisfies the Hermite equation
\[ \frac{d^2 f}{dx^2} - 2x \frac{df}{dx} + 2\lambda f = 0. \]
Construct two linearly independent solutions of the equation as Taylor series about x = 0. For what values of x do the series converge?
Show that for certain values of λ, called eigenvalues, one of the solutions is a polynomial, called an eigenfunction. Calculate the first four eigenfunctions H_0(x), H_1(x), H_2(x), H_3(x), ordered by degree.
Hint, Solution
Exercise 25.2
Consider the Legendre equation
\[ (1-x^2) y'' - 2x y' + \lambda(\lambda+1) y = 0. \]
1. Find two linearly independent solutions in the form of power series about x = 0.
2. Compute the radius of convergence of the series. Explain why it is possible to predict the radius of convergence without actually deriving the series.
3. Show that if λ = 2n, with n an integer and n ≥ 0, the series for one of the solutions reduces to an even polynomial of degree 2n.
4. Show that if λ = 2n+1, with n an integer and n ≥ 0, the series for one of the solutions reduces to an odd polynomial of degree 2n+1.
5. Show that the first 4 polynomial solutions P_n(x) (known as Legendre polynomials), ordered by their degree and normalized so that P_n(1) = 1, are
\[ P_0 = 1, \quad P_1 = x, \quad P_2 = \frac{1}{2}(3x^2 - 1), \quad P_3 = \frac{1}{2}(5x^3 - 3x). \]
6. Show that the Legendre equation can also be written as
\[ \left( (1-x^2) y' \right)' = -\lambda(\lambda+1) y. \]
Note that two Legendre polynomials P_n(x) and P_m(x) must satisfy this relation for λ = n and λ = m respectively. By multiplying the first relation by P_m(x) and the second by P_n(x) and integrating by parts, show that Legendre polynomials satisfy the orthogonality relation
\[ \int_{-1}^{1} P_n(x) P_m(x) \, dx = 0 \quad \text{if } n \neq m. \]
If n = m, it can be shown that the value of the integral is 2/(2n+1). Verify this for the first three polynomials (but you needn't prove it in general).
Hint, Solution
Exercise 25.3
Find the forms of two linearly independent series expansions about the point z = 0 for the differential equation
\[ w'' + \frac{1}{\sin z} w' + \frac{1 - z}{z^2} w = 0, \]
such that the series are real-valued on the positive real axis. Do not calculate the coefficients in the expansions.
Hint, Solution
Exercise 25.4
Classify the singular points of the equation
\[ w'' + \frac{w'}{z - 1} + 2w = 0. \]
Hint, Solution
Exercise 25.5
Find the series expansions about z = 0 for
\[ w'' + \frac{5}{4z} w' + \frac{z - 1}{8z^2} w = 0. \]
Hint, Solution
Exercise 25.6
Find the series expansions about z = 0 of the fundamental solutions of
\[ w'' + z w' + w = 0. \]
Hint, Solution
Exercise 25.7
Find the series expansions about z = 0 of the two linearly independent solutions of
\[ w'' + \frac{1}{2z} w' + \frac{1}{z} w = 0. \]
Hint, Solution
Exercise 25.8
Classify the singularity at infinity of the differential equation
\[ w'' + \left( \frac{2}{z} + \frac{3}{z^2} \right) w' + \frac{1}{z^2} w = 0. \]
Find the forms of the series solutions of the differential equation about infinity that are real-valued when z is real-valued and positive. Do not calculate the coefficients in the expansions.
Hint, Solution
Exercise 25.9
Consider the second order differential equation
\[ x \frac{d^2 y}{dx^2} + (b - x) \frac{dy}{dx} - a y = 0, \]
where a, b are real constants.
1. Show that x = 0 is a regular singular point. Determine the location of any additional singular points and classify them. Include the point at infinity.
2. Compute the indicial equation for the point x = 0.
3. By solving an appropriate recursion relation, show that one solution has the form
\[ y_1(x) = 1 + \frac{a x}{b} + \frac{(a)_2 x^2}{(b)_2 \, 2!} + \cdots + \frac{(a)_n x^n}{(b)_n \, n!} + \cdots \]
where the notation (a)_n is defined by
\[ (a)_n = a(a+1)(a+2) \cdots (a + n - 1), \qquad (a)_0 = 1. \]
Assume throughout this problem that b \neq -n where n is a non-negative integer.
4. Show that when a = -m, where m is a non-negative integer, there are polynomial solutions to this equation. Compute the radius of convergence of the series above when a \neq -m. Verify that the result you get is in accord with the Frobenius theory.
5. Show that if b = n + 1 where n = 0, 1, 2, \ldots, then the second solution of this equation has logarithmic terms. Indicate the form of the second solution in this case. You need not compute any coefficients.
Hint, Solution
Exercise 25.10
Consider the equation
\[ x y'' + 2x y' + 6 e^x y = 0. \]
Find the first three non-zero terms in each of two linearly independent series solutions about x = 0.
Hint, Solution
25.6 Hints
Hint 25.1
Hint 25.2
Hint 25.3
Hint 25.4
Hint 25.5
Hint 25.6
Hint 25.7
Hint 25.8
Hint 25.9
Hint 25.10
25.7 Solutions
Solution 25.1
f(x) is a Taylor series about x = 0.
\[ f(x) = \sum_{n=0}^{\infty} a_n x^n \]
\[ f'(x) = \sum_{n=1}^{\infty} n a_n x^{n-1} = \sum_{n=0}^{\infty} n a_n x^{n-1} \]
\[ f''(x) = \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} x^n \]
We substitute the Taylor series into the differential equation.
\[ f''(x) - 2x f'(x) + 2\lambda f = 0 \]
\[ \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} x^n - 2 \sum_{n=0}^{\infty} n a_n x^n + 2\lambda \sum_{n=0}^{\infty} a_n x^n = 0 \]
Equating coefficients gives us a difference equation for a_n:
\[ (n+2)(n+1) a_{n+2} - 2n a_n + 2\lambda a_n = 0 \]
\[ a_{n+2} = 2 \, \frac{n - \lambda}{(n+1)(n+2)} \, a_n. \]
The first two coefficients, a_0 and a_1, are arbitrary. The remaining coefficients are determined by the recurrence relation. We will find the fundamental set of solutions at x = 0. That is, for the first solution we choose a_0 = 1 and a_1 = 0; for the second solution we choose a_0 = 0, a_1 = 1. The difference equation for y_1 is
\[ a_{n+2} = 2 \, \frac{n - \lambda}{(n+1)(n+2)} \, a_n, \qquad a_0 = 1, \quad a_1 = 0, \]
which has the solution
\[ a_{2n} = \frac{2^n \prod_{k=0}^{n-1} (2k - \lambda)}{(2n)!}, \qquad a_{2n+1} = 0. \]
The difference equation for y_2 is
\[ a_{n+2} = 2 \, \frac{n - \lambda}{(n+1)(n+2)} \, a_n, \qquad a_0 = 0, \quad a_1 = 1, \]
which has the solution
\[ a_{2n} = 0, \qquad a_{2n+1} = \frac{2^n \prod_{k=0}^{n-1} (2k + 1 - \lambda)}{(2n+1)!}. \]
A set of linearly independent solutions, (in fact the fundamental set of solutions at x = 0), is
\[ y_1(x) = \sum_{n=0}^{\infty} \frac{2^n \prod_{k=0}^{n-1} (2k - \lambda)}{(2n)!} x^{2n}, \qquad y_2(x) = \sum_{n=0}^{\infty} \frac{2^n \prod_{k=0}^{n-1} (2k + 1 - \lambda)}{(2n+1)!} x^{2n+1}. \]
Since the coefficient functions in the differential equation do not have any singularities in the finite complex plane, the radius of convergence of the series is infinite.
If \lambda = n is a positive even integer, then the first solution, y_1, is a polynomial of order n. If \lambda = n is a positive odd integer, then the second solution, y_2, is a polynomial of order n. For \lambda = 0, 1, 2, 3, we have
\[ H_0(x) = 1 \]
\[ H_1(x) = x \]
\[ H_2(x) = 1 - 2x^2 \]
\[ H_3(x) = x - \frac{2}{3} x^3 \]
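A quick sanity check of this recurrence (not part of the original text): the Python sketch below generates the series coefficients with exact rational arithmetic and confirms that the series terminate for \lambda = 2 and \lambda = 3, reproducing H_2(x) = 1 - 2x^2 and H_3(x) = x - (2/3)x^3. The function name and truncation order are arbitrary choices of mine.

```python
from fractions import Fraction

def hermite_series_coeffs(lam, a0, a1, nmax):
    """Taylor coefficients a_n of a solution of f'' - 2x f' + 2*lam*f = 0,
    generated from the recurrence a_{n+2} = 2 (n - lam) / ((n+1)(n+2)) a_n."""
    a = [Fraction(0)] * (nmax + 1)
    a[0], a[1] = Fraction(a0), Fraction(a1)
    for n in range(nmax - 1):
        a[n + 2] = 2 * Fraction(n - lam, (n + 1) * (n + 2)) * a[n]
    return a

# lam = 2: the even solution terminates, giving H_2(x) = 1 - 2x^2
assert hermite_series_coeffs(2, 1, 0, 6)[:4] == [1, 0, -2, 0]
# lam = 3: the odd solution terminates, giving H_3(x) = x - (2/3) x^3
assert hermite_series_coeffs(3, 0, 1, 6)[:4] == [0, 1, 0, Fraction(-2, 3)]
```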
Solution 25.2
1. First we write the differential equation in the standard form.
\[ (1 - x^2) y'' - 2x y' + \alpha(\alpha + 1) y = 0 \tag{25.2} \]
\[ y'' - \frac{2x}{1 - x^2} y' + \frac{\alpha(\alpha + 1)}{1 - x^2} y = 0. \tag{25.3} \]
Since the coefficients of y' and y are analytic in a neighborhood of x = 0, we can find two Taylor series solutions about that point. We find the Taylor series for y and its derivatives.
\[ y = \sum_{n=0}^{\infty} a_n x^n \]
\[ y' = \sum_{n=1}^{\infty} n a_n x^{n-1} \]
\[ y'' = \sum_{n=2}^{\infty} (n-1) n a_n x^{n-2} = \sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} x^n \]
Here we used index shifting to explicitly write the two forms that we will need for y''. Note that we can take the lower bound of summation to be n = 0 for all the above sums. The terms added by this operation are zero. We substitute the Taylor series into Equation 25.2.
\[ \sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} x^n - \sum_{n=0}^{\infty} (n-1) n a_n x^n - 2 \sum_{n=0}^{\infty} n a_n x^n + \alpha(\alpha + 1) \sum_{n=0}^{\infty} a_n x^n = 0 \]
\[ \sum_{n=0}^{\infty} \Big[ (n+1)(n+2) a_{n+2} - \big( (n-1)n + 2n - \alpha(\alpha + 1) \big) a_n \Big] x^n = 0 \]
We equate coefficients of x^n to obtain a recurrence relation.
\[ (n+1)(n+2) a_{n+2} = \big( n(n+1) - \alpha(\alpha + 1) \big) a_n \]
\[ a_{n+2} = \frac{n(n+1) - \alpha(\alpha + 1)}{(n+1)(n+2)} \, a_n, \qquad n \geq 0. \]
We can solve this difference equation to determine the a_n's. (a_0 and a_1 are arbitrary.)
\[ a_n = \begin{cases} \dfrac{a_0}{n!} \displaystyle\prod_{\substack{k=0 \\ \text{even } k}}^{n-2} \big( k(k+1) - \alpha(\alpha + 1) \big), & \text{even } n, \\[2ex] \dfrac{a_1}{n!} \displaystyle\prod_{\substack{k=1 \\ \text{odd } k}}^{n-2} \big( k(k+1) - \alpha(\alpha + 1) \big), & \text{odd } n \end{cases} \]
We will find the fundamental set of solutions at x = 0, that is the set {y_1, y_2} that satisfies
\[ y_1(0) = 1, \quad y_1'(0) = 0, \qquad y_2(0) = 0, \quad y_2'(0) = 1. \]
For y_1 we take a_0 = 1 and a_1 = 0; for y_2 we take a_0 = 0 and a_1 = 1. The rest of the coefficients are determined from the recurrence relation.
\[ y_1 = \sum_{\substack{n=0 \\ \text{even } n}}^{\infty} \left( \frac{1}{n!} \prod_{\substack{k=0 \\ \text{even } k}}^{n-2} \big( k(k+1) - \alpha(\alpha + 1) \big) \right) x^n \]
\[ y_2 = \sum_{\substack{n=1 \\ \text{odd } n}}^{\infty} \left( \frac{1}{n!} \prod_{\substack{k=1 \\ \text{odd } k}}^{n-2} \big( k(k+1) - \alpha(\alpha + 1) \big) \right) x^n \]
2. We determine the radius of convergence of the series solutions with the ratio test.
\[ \lim_{n \to \infty} \left| \frac{a_{n+2} x^{n+2}}{a_n x^n} \right| < 1 \]
\[ \lim_{n \to \infty} \left| \frac{\frac{n(n+1) - \alpha(\alpha+1)}{(n+1)(n+2)} a_n x^{n+2}}{a_n x^n} \right| < 1 \]
\[ \lim_{n \to \infty} \left| \frac{n(n+1) - \alpha(\alpha+1)}{(n+1)(n+2)} \right| \left| x^2 \right| < 1 \]
\[ \left| x^2 \right| < 1 \]
Thus we see that the radius of convergence of the series is 1. We knew that the radius of convergence would be at least one, because the nearest singularities of the coefficients of (25.3) occur at x = \pm 1, a distance of 1 from the origin. This implies that the solutions of the equation are analytic in the unit circle about x = 0. The radius of convergence of the Taylor series expansion of an analytic function is the distance to the nearest singularity.
3. If \alpha = 2n then a_{2n+2} = 0 in our first solution. From the recurrence relation, we see that all subsequent coefficients are also zero. The solution becomes an even polynomial.
\[ y_1 = \sum_{\substack{m=0 \\ \text{even } m}}^{2n} \left( \frac{1}{m!} \prod_{\substack{k=0 \\ \text{even } k}}^{m-2} \big( k(k+1) - \alpha(\alpha + 1) \big) \right) x^m \]
4. If \alpha = 2n+1 then a_{2n+3} = 0 in our second solution. From the recurrence relation, we see that all subsequent coefficients are also zero. The solution becomes an odd polynomial.
\[ y_2 = \sum_{\substack{m=1 \\ \text{odd } m}}^{2n+1} \left( \frac{1}{m!} \prod_{\substack{k=1 \\ \text{odd } k}}^{m-2} \big( k(k+1) - \alpha(\alpha + 1) \big) \right) x^m \]
5. From our solutions above, the first four polynomials are
\[ 1, \qquad x, \qquad 1 - 3x^2, \qquad x - \frac{5}{3} x^3. \]
To obtain the Legendre polynomials we normalize these to have value unity at x = 1.
\[ P_0 = 1 \qquad P_1 = x \qquad P_2 = \frac{1}{2}\left( 3x^2 - 1 \right) \qquad P_3 = \frac{1}{2}\left( 5x^3 - 3x \right) \]
These four Legendre polynomials are plotted in Figure 25.4.
Figure 25.4: The First Four Legendre Polynomials
6. We note that the first two terms in the Legendre equation form an exact derivative. Thus the Legendre equation can also be written as
\[ \left( (1 - x^2) y' \right)' = -\alpha(\alpha + 1) y. \]
P_n and P_m are solutions of the Legendre equation.
\[ \left( (1 - x^2) P_n' \right)' = -n(n+1) P_n, \qquad \left( (1 - x^2) P_m' \right)' = -m(m+1) P_m \tag{25.4} \]
We multiply the first relation of Equation 25.4 by P_m and integrate by parts.
\[ \left( (1 - x^2) P_n' \right)' P_m = -n(n+1) P_n P_m \]
\[ \int_{-1}^{1} \left( (1 - x^2) P_n' \right)' P_m \, dx = -n(n+1) \int_{-1}^{1} P_n P_m \, dx \]
\[ \Big[ (1 - x^2) P_n' P_m \Big]_{-1}^{1} - \int_{-1}^{1} (1 - x^2) P_n' P_m' \, dx = -n(n+1) \int_{-1}^{1} P_n P_m \, dx \]
\[ \int_{-1}^{1} (1 - x^2) P_n' P_m' \, dx = n(n+1) \int_{-1}^{1} P_n P_m \, dx \]
We multiply the second relation of Equation 25.4 by P_n and integrate by parts to obtain a different expression for \int_{-1}^{1} (1 - x^2) P_m' P_n' \, dx.
\[ \int_{-1}^{1} (1 - x^2) P_m' P_n' \, dx = m(m+1) \int_{-1}^{1} P_m P_n \, dx \]
We equate the two expressions for \int_{-1}^{1} (1 - x^2) P_m' P_n' \, dx to obtain an orthogonality relation.
\[ \big( n(n+1) - m(m+1) \big) \int_{-1}^{1} P_n P_m \, dx = 0 \]
\[ \int_{-1}^{1} P_n(x) P_m(x) \, dx = 0 \quad \text{if } n \neq m. \]
We verify that for the first four polynomials the value of the integral is 2/(2n + 1) for n = m.
\[ \int_{-1}^{1} P_0(x) P_0(x) \, dx = \int_{-1}^{1} 1 \, dx = 2 \]
\[ \int_{-1}^{1} P_1(x) P_1(x) \, dx = \int_{-1}^{1} x^2 \, dx = \left[ \frac{x^3}{3} \right]_{-1}^{1} = \frac{2}{3} \]
\[ \int_{-1}^{1} P_2(x) P_2(x) \, dx = \int_{-1}^{1} \frac{1}{4} \left( 9x^4 - 6x^2 + 1 \right) dx = \left[ \frac{1}{4} \left( \frac{9x^5}{5} - 2x^3 + x \right) \right]_{-1}^{1} = \frac{2}{5} \]
\[ \int_{-1}^{1} P_3(x) P_3(x) \, dx = \int_{-1}^{1} \frac{1}{4} \left( 25x^6 - 30x^4 + 9x^2 \right) dx = \left[ \frac{1}{4} \left( \frac{25x^7}{7} - 6x^5 + 3x^3 \right) \right]_{-1}^{1} = \frac{2}{7} \]
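The orthogonality relation and the normalization 2/(2n + 1) can also be checked mechanically. The sketch below (my own, not from the text) integrates products of the four polynomials exactly over [-1, 1] using rational arithmetic.

```python
from fractions import Fraction

# Legendre polynomials P_0..P_3 as coefficient lists, p[k] = coefficient of x^k
P = [
    [Fraction(1)],                                                 # P_0 = 1
    [Fraction(0), Fraction(1)],                                    # P_1 = x
    [Fraction(-1, 2), Fraction(0), Fraction(3, 2)],                # P_2 = (3x^2 - 1)/2
    [Fraction(0), Fraction(-3, 2), Fraction(0), Fraction(5, 2)],   # P_3 = (5x^3 - 3x)/2
]

def integrate_product(p, q):
    """Exact integral of p(x) q(x) over [-1, 1]."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            prod[i + j] += pi * qj
    # integral of x^k over [-1, 1] is 2/(k+1) for even k, 0 for odd k
    return sum(2 * c / (k + 1) for k, c in enumerate(prod) if k % 2 == 0)

for n in range(4):
    for m in range(4):
        expected = Fraction(2, 2 * n + 1) if n == m else 0
        assert integrate_product(P[n], P[m]) == expected
print("orthogonality and normalization 2/(2n+1) verified for P_0..P_3")
```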
Solution 25.3
The indicial equation for this problem is
\[ \lambda^2 + 1 = 0. \]
Since the two roots \lambda_1 = i and \lambda_2 = -i are distinct and do not differ by an integer, there are two solutions in the Frobenius form.
\[ w_1 = z^{i} \sum_{n=0}^{\infty} a_n z^n, \qquad w_2 = z^{-i} \sum_{n=0}^{\infty} b_n z^n \]
However, these series are not real-valued on the positive real axis. Recalling that
\[ z^{i} = e^{i \log z} = \cos(\log z) + i \sin(\log z), \quad \text{and} \quad z^{-i} = e^{-i \log z} = \cos(\log z) - i \sin(\log z), \]
we can write a new set of solutions that are real-valued on the positive real axis as linear combinations of w_1 and w_2.
\[ u_1 = \frac{1}{2}(w_1 + w_2), \qquad u_2 = \frac{1}{2i}(w_1 - w_2) \]
\[ u_1 = \cos(\log z) \sum_{n=0}^{\infty} c_n z^n, \qquad u_2 = \sin(\log z) \sum_{n=0}^{\infty} d_n z^n \]
Solution 25.4
Consider the equation w'' + w'/(z - 1) + 2w = 0.
We see that there is a regular singular point at z = 1. All other finite values of z are ordinary points of the equation. To examine the point at infinity we introduce the transformation z = 1/t, w(z) = u(t). Writing the derivatives with respect to z in terms of t yields
\[ \frac{d}{dz} = -t^2 \frac{d}{dt}, \qquad \frac{d^2}{dz^2} = t^4 \frac{d^2}{dt^2} + 2t^3 \frac{d}{dt}. \]
Substituting into the differential equation gives us
\[ t^4 u'' + 2t^3 u' - \frac{t^2 u'}{1/t - 1} + 2u = 0 \]
\[ u'' + \left( \frac{2}{t} - \frac{1}{t(1 - t)} \right) u' + \frac{2}{t^4} u = 0. \]
Since t = 0 is an irregular singular point in the equation for u(t), z = \infty is an irregular singular point in the equation for w(z).
Solution 25.5
Find the series expansions about z = 0 for
\[ w'' + \frac{5}{4z} w' + \frac{z - 1}{8z^2} w = 0. \]
We see that z = 0 is a regular singular point of the equation. The indicial equation is
\[ \lambda^2 + \frac{1}{4} \lambda - \frac{1}{8} = 0 \]
\[ \left( \lambda + \frac{1}{2} \right)\left( \lambda - \frac{1}{4} \right) = 0. \]
Since the roots \lambda_1 = 1/4 and \lambda_2 = -1/2 are distinct and do not differ by an integer, there will be two solutions in the Frobenius form.
\[ w_1 = z^{1/4} \sum_{n=0}^{\infty} a_n(\lambda_1) z^n, \qquad w_2 = z^{-1/2} \sum_{n=0}^{\infty} a_n(\lambda_2) z^n \]
We multiply the differential equation by 8z^2 to put it in a better form. Substituting a Frobenius series into the differential equation,
\[ 8z^2 \sum_{n=0}^{\infty} (n+\lambda)(n+\lambda-1) a_n z^{n+\lambda-2} + 10z \sum_{n=0}^{\infty} (n+\lambda) a_n z^{n+\lambda-1} + (z - 1) \sum_{n=0}^{\infty} a_n z^{n+\lambda} = 0 \]
\[ 8 \sum_{n=0}^{\infty} (n+\lambda)(n+\lambda-1) a_n z^n + 10 \sum_{n=0}^{\infty} (n+\lambda) a_n z^n + \sum_{n=1}^{\infty} a_{n-1} z^n - \sum_{n=0}^{\infty} a_n z^n = 0. \]
Equating coefficients of powers of z,
\[ \big[ 8(n+\lambda)(n+\lambda-1) + 10(n+\lambda) - 1 \big] a_n = -a_{n-1} \]
\[ a_n = -\frac{a_{n-1}}{8(n+\lambda)^2 + 2(n+\lambda) - 1}. \]
The First Solution. Setting \lambda = 1/4 in the recurrence formula,
\[ a_n(\lambda_1) = -\frac{a_{n-1}}{8(n + 1/4)^2 + 2(n + 1/4) - 1} = -\frac{a_{n-1}}{2n(4n+3)}. \]
Thus the first solution is
\[ w_1 = z^{1/4} \sum_{n=0}^{\infty} a_n(\lambda_1) z^n = a_0 z^{1/4} \left( 1 - \frac{1}{14} z + \frac{1}{616} z^2 + \cdots \right). \]
The Second Solution. Setting \lambda = -1/2 in the recurrence formula,
\[ a_n(\lambda_2) = -\frac{a_{n-1}}{8(n - 1/2)^2 + 2(n - 1/2) - 1} = -\frac{a_{n-1}}{2n(4n-3)}. \]
Thus the second linearly independent solution is
\[ w_2 = z^{-1/2} \sum_{n=0}^{\infty} a_n(\lambda_2) z^n = a_0 z^{-1/2} \left( 1 - \frac{1}{2} z + \frac{1}{40} z^2 + \cdots \right). \]
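As a numeric aside (not in the original), the recurrence can be iterated directly to confirm the displayed coefficients 1/14, 1/616 and 1/2, 1/40; exact rational arithmetic avoids any roundoff question. The function name is my own.

```python
from fractions import Fraction

def frobenius_coeffs(lam, nmax):
    """a_n(lam) from a_n = -a_{n-1} / (8(n+lam)^2 + 2(n+lam) - 1), with a_0 = 1."""
    a = [Fraction(1)]
    for n in range(1, nmax + 1):
        s = n + lam
        a.append(-a[-1] / (8 * s * s + 2 * s - 1))
    return a

# lam = 1/4: denominators are 2n(4n+3), giving 1, -1/14, 1/616, ...
assert frobenius_coeffs(Fraction(1, 4), 2) == [1, Fraction(-1, 14), Fraction(1, 616)]
# lam = -1/2: denominators are 2n(4n-3), giving 1, -1/2, 1/40, ...
assert frobenius_coeffs(Fraction(-1, 2), 2) == [1, Fraction(-1, 2), Fraction(1, 40)]
print("series coefficients match the text")
```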
Solution 25.6
We consider the series solutions of
\[ w'' + z w' + w = 0. \]
We would like to find the expansions of the fundamental set of solutions about z = 0. Since z = 0 is a regular point, (the coefficient functions are analytic there), we expand the solutions in Taylor series. Differentiating the series expansions for w(z),
\[ w = \sum_{n=0}^{\infty} a_n z^n \]
\[ w' = \sum_{n=1}^{\infty} n a_n z^{n-1} \]
\[ w'' = \sum_{n=2}^{\infty} n(n-1) a_n z^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n \]
We may take the lower limit of summation to be zero without changing the sums. Substituting these expressions into the differential equation,
\[ \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n + \sum_{n=0}^{\infty} n a_n z^n + \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=0}^{\infty} \big[ (n+2)(n+1) a_{n+2} + (n+1) a_n \big] z^n = 0. \]
Equating the coefficient of the z^n term gives us
\[ (n+2)(n+1) a_{n+2} + (n+1) a_n = 0, \qquad n \geq 0 \]
\[ a_{n+2} = -\frac{a_n}{n+2}, \qquad n \geq 0. \]
a_0 and a_1 are arbitrary. We determine the rest of the coefficients from the recurrence relation. We consider the cases for even and odd n separately.
\[ a_{2n} = -\frac{a_{2n-2}}{2n} = \frac{a_{2n-4}}{(2n)(2n-2)} = \cdots = (-1)^n \frac{a_0}{(2n)(2n-2) \cdots 4 \cdot 2} = (-1)^n \frac{a_0}{\prod_{m=1}^{n} 2m}, \qquad n \geq 0 \]
\[ a_{2n+1} = -\frac{a_{2n-1}}{2n+1} = \frac{a_{2n-3}}{(2n+1)(2n-1)} = \cdots = (-1)^n \frac{a_1}{(2n+1)(2n-1) \cdots 5 \cdot 3} = (-1)^n \frac{a_1}{\prod_{m=1}^{n} (2m+1)}, \qquad n \geq 0 \]
If {w_1, w_2} is the fundamental set of solutions, then the initial conditions demand that w_1 = 1 + 0 \cdot z + \cdots and w_2 = 0 + z + \cdots. We see that w_1 will have only even powers of z and w_2 will have only odd powers of z.
\[ w_1 = \sum_{n=0}^{\infty} \frac{(-1)^n}{\prod_{m=1}^{n} 2m} z^{2n}, \qquad w_2 = \sum_{n=0}^{\infty} \frac{(-1)^n}{\prod_{m=1}^{n} (2m+1)} z^{2n+1} \]
Since the coefficient functions in the differential equation are entire, (analytic in the finite complex plane), the radius of convergence of these series solutions is infinite.
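Since \prod_{m=1}^{n} 2m = 2^n n!, the even solution here sums to e^{-z^2/2}. The sketch below (mine, with an arbitrary finite-difference step h) checks this identity and verifies numerically that both truncated series satisfy the differential equation.

```python
import math

def w1(z, nterms=30):
    """Even-series solution of w'' + z w' + w = 0: sum (-1)^n z^(2n) / (2^n n!)."""
    return sum((-1) ** n * z ** (2 * n) / (2 ** n * math.factorial(n))
               for n in range(nterms))

def w2(z, nterms=30):
    """Odd-series solution: sum (-1)^n z^(2n+1) / (3 * 5 * ... * (2n+1))."""
    total, term = 0.0, z
    for n in range(nterms):
        total += term
        term *= -z * z / (2 * n + 3)  # ratio of consecutive terms
    return total

# The even series sums to exp(-z^2/2) since prod_{m=1}^n 2m = 2^n n!
for z in (0.0, 0.5, 1.0, 2.0):
    assert abs(w1(z) - math.exp(-z * z / 2)) < 1e-12

# Residual check of w'' + z w' + w = 0 via central differences (step h is arbitrary)
h = 1e-4
for f in (w1, w2):
    for z in (0.3, 1.1):
        second = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
        first = (f(z + h) - f(z - h)) / (2 * h)
        assert abs(second + z * first + f(z)) < 1e-5
print("series solutions satisfy the ODE numerically")
```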
Solution 25.7
\[ w'' + \frac{1}{2z} w' + \frac{1}{z} w = 0. \]
We can find the indicial equation by substituting w = z^{\lambda} + O(z^{\lambda+1}) into the differential equation.
\[ \lambda(\lambda - 1) z^{\lambda-2} + \frac{1}{2} \lambda z^{\lambda-2} + z^{\lambda-1} = O(z^{\lambda-1}) \]
Equating the coefficient of the z^{\lambda-2} term,
\[ \lambda(\lambda - 1) + \frac{1}{2} \lambda = 0 \]
\[ \lambda = 0, \; \frac{1}{2}. \]
Since the roots are distinct and do not differ by an integer, the solutions are of the form
\[ w_1 = \sum_{n=0}^{\infty} a_n z^n, \qquad w_2 = z^{1/2} \sum_{n=0}^{\infty} b_n z^n. \]
Differentiating the series for the first solution,
\[ w_1 = \sum_{n=0}^{\infty} a_n z^n \]
\[ w_1' = \sum_{n=1}^{\infty} n a_n z^{n-1} = \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n \]
\[ w_1'' = \sum_{n=1}^{\infty} n(n+1) a_{n+1} z^{n-1}. \]
Substituting this series into the differential equation,
\[ \sum_{n=1}^{\infty} n(n+1) a_{n+1} z^{n-1} + \frac{1}{2z} \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n + \frac{1}{z} \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=1}^{\infty} \left[ n(n+1) a_{n+1} + \frac{1}{2}(n+1) a_{n+1} + a_n \right] z^{n-1} + \frac{1}{2z} a_1 + \frac{1}{z} a_0 = 0. \]
Equating powers of z,
\[ z^{-1}: \quad \frac{a_1}{2} + a_0 = 0 \quad \Rightarrow \quad a_1 = -2a_0 \]
\[ z^{n-1}: \quad \left( n + \frac{1}{2} \right)(n+1) a_{n+1} + a_n = 0 \quad \Rightarrow \quad a_{n+1} = -\frac{a_n}{(n + 1/2)(n+1)}. \]
We can combine the above two equations for a_n.
\[ a_{n+1} = -\frac{a_n}{(n + 1/2)(n+1)}, \qquad \text{for } n \geq 0 \]
Solving this difference equation for a_n,
\[ a_n = a_0 \prod_{j=0}^{n-1} \frac{-1}{(j + 1/2)(j+1)} = a_0 \frac{(-1)^n}{n!} \prod_{j=0}^{n-1} \frac{1}{j + 1/2}. \]
Now let's find the second solution. Differentiating w_2,
\[ w_2' = \sum_{n=0}^{\infty} (n + 1/2) b_n z^{n-1/2} \]
\[ w_2'' = \sum_{n=0}^{\infty} (n + 1/2)(n - 1/2) b_n z^{n-3/2}. \]
Substituting these expansions into the differential equation,
\[ \sum_{n=0}^{\infty} (n + 1/2)(n - 1/2) b_n z^{n-3/2} + \frac{1}{2} \sum_{n=0}^{\infty} (n + 1/2) b_n z^{n-3/2} + \sum_{n=1}^{\infty} b_{n-1} z^{n-3/2} = 0. \]
Equating the coefficient of the z^{-3/2} term,
\[ \frac{1}{2} \left( -\frac{1}{2} \right) b_0 + \frac{1}{2} \cdot \frac{1}{2} b_0 = 0, \]
we see that b_0 is arbitrary. Equating the other coefficients of powers of z,
\[ (n + 1/2)(n - 1/2) b_n + \frac{1}{2}(n + 1/2) b_n + b_{n-1} = 0 \]
\[ b_n = -\frac{b_{n-1}}{n(n + 1/2)} \]
Calculating the b_n's,
\[ b_1 = -\frac{b_0}{1 \cdot \frac{3}{2}} \]
\[ b_2 = \frac{b_0}{1 \cdot 2 \cdot \frac{3}{2} \cdot \frac{5}{2}} \]
\[ b_n = \frac{(-1)^n 2^n b_0}{n! \; 3 \cdot 5 \cdots (2n+1)} \]
Thus the second solution is
\[ w_2 = b_0 z^{1/2} \sum_{n=0}^{\infty} \frac{(-1)^n 2^n z^n}{n! \; 3 \cdot 5 \cdots (2n+1)}. \]
Solution 25.8
\[ w'' + \left( \frac{2}{z} + \frac{3}{z^2} \right) w' + \frac{1}{z^2} w = 0. \]
In order to analyze the behavior at infinity we make the change of variables t = 1/z, u(t) = w(z) and examine the point t = 0. Writing the derivatives with respect to z in terms of t yields
\[ z = \frac{1}{t}, \qquad dz = -\frac{1}{t^2}\,dt \]
\[ \frac{d}{dz} = -t^2 \frac{d}{dt} \]
\[ \frac{d^2}{dz^2} = -t^2 \frac{d}{dt}\left( -t^2 \frac{d}{dt} \right) = t^4 \frac{d^2}{dt^2} + 2t^3 \frac{d}{dt}. \]
The equation for u is then
\[ t^4 u'' + 2t^3 u' + (2t + 3t^2)(-t^2) u' + t^2 u = 0 \]
\[ u'' - 3u' + \frac{1}{t^2} u = 0 \]
We see that t = 0 is a regular singular point. To find the indicial equation, we substitute u = t^{\lambda} + O(t^{\lambda+1}) into the differential equation.
\[ \lambda(\lambda - 1) t^{\lambda-2} - 3\lambda t^{\lambda-1} + t^{\lambda-2} = O(t^{\lambda-1}) \]
Equating the coefficients of the t^{\lambda-2} terms,
\[ \lambda(\lambda - 1) + 1 = 0 \]
\[ \lambda = \frac{1 \pm i\sqrt{3}}{2} \]
Since the roots of the indicial equation are distinct and do not differ by an integer, a set of solutions has the form
\[ \left\{ t^{(1 + i\sqrt{3})/2} \sum_{n=0}^{\infty} a_n t^n, \quad t^{(1 - i\sqrt{3})/2} \sum_{n=0}^{\infty} b_n t^n \right\}. \]
Noting that
\[ t^{(1 + i\sqrt{3})/2} = t^{1/2} \exp\left( i \frac{\sqrt{3}}{2} \log t \right), \quad \text{and} \quad t^{(1 - i\sqrt{3})/2} = t^{1/2} \exp\left( -i \frac{\sqrt{3}}{2} \log t \right), \]
we can take the sum and difference of the above solutions to obtain the form
\[ u_1 = t^{1/2} \cos\left( \frac{\sqrt{3}}{2} \log t \right) \sum_{n=0}^{\infty} a_n t^n, \qquad u_2 = t^{1/2} \sin\left( \frac{\sqrt{3}}{2} \log t \right) \sum_{n=0}^{\infty} b_n t^n. \]
Putting the answer in terms of z, we have the form of the two Frobenius expansions about infinity.
\[ w_1 = z^{-1/2} \cos\left( \frac{\sqrt{3}}{2} \log z \right) \sum_{n=0}^{\infty} a_n z^{-n}, \qquad w_2 = z^{-1/2} \sin\left( \frac{\sqrt{3}}{2} \log z \right) \sum_{n=0}^{\infty} b_n z^{-n}. \]
Solution 25.9
1. We write the equation in the standard form.
\[ y'' + \frac{b - x}{x} y' - \frac{a}{x} y = 0 \]
Since (b - x)/x has no worse than a first order pole and a/x has no worse than a second order pole at x = 0, that is a regular singular point. Since the coefficient functions have no other singularities in the finite complex plane, all the other points in the finite complex plane are regular points.
Now to examine the point at infinity. We make the change of variables u(\xi) = y(x), \xi = 1/x.
\[ y' = \frac{d\xi}{dx} \frac{d}{d\xi} u = -\frac{1}{x^2} u' = -\xi^2 u' \]
\[ y'' = -\xi^2 \frac{d}{d\xi} \left( -\xi^2 \frac{d}{d\xi} \right) u = \xi^4 u'' + 2\xi^3 u' \]
The differential equation becomes
\[ x y'' + (b - x) y' - a y = 0 \]
\[ \frac{1}{\xi} \left( \xi^4 u'' + 2\xi^3 u' \right) + \left( b - \frac{1}{\xi} \right) \left( -\xi^2 u' \right) - a u = 0 \]
\[ \xi^3 u'' + \left( (2 - b)\xi^2 + \xi \right) u' - a u = 0 \]
\[ u'' + \left( \frac{2 - b}{\xi} + \frac{1}{\xi^2} \right) u' - \frac{a}{\xi^3} u = 0 \]
Since this equation has an irregular singular point at \xi = 0, the equation for y(x) has an irregular singular point at infinity.
2. The coefficient functions are
\[ p(x) \equiv \frac{1}{x} \sum_{n=0}^{\infty} p_n x^n = \frac{1}{x}(b - x), \qquad q(x) \equiv \frac{1}{x^2} \sum_{n=0}^{\infty} q_n x^n = \frac{1}{x^2}(0 - ax). \]
The indicial equation is
\[ \lambda^2 + (p_0 - 1)\lambda + q_0 = 0 \]
\[ \lambda^2 + (b - 1)\lambda + 0 = 0 \]
\[ \lambda(\lambda + b - 1) = 0. \]
3. Since one of the roots of the indicial equation is zero, and (by our assumption on b) the other root, 1 - b, is not a positive integer, one of the solutions of the differential equation is a Taylor series.
\[ y_1 = \sum_{k=0}^{\infty} c_k x^k \]
\[ y_1' = \sum_{k=1}^{\infty} k c_k x^{k-1} = \sum_{k=0}^{\infty} (k+1) c_{k+1} x^k = \sum_{k=0}^{\infty} k c_k x^{k-1} \]
\[ y_1'' = \sum_{k=2}^{\infty} k(k-1) c_k x^{k-2} = \sum_{k=1}^{\infty} (k+1)k c_{k+1} x^{k-1} = \sum_{k=0}^{\infty} (k+1)k c_{k+1} x^{k-1} \]
We substitute the Taylor series into the differential equation.
\[ x y'' + (b - x) y' - a y = 0 \]
\[ \sum_{k=0}^{\infty} (k+1)k c_{k+1} x^k + b \sum_{k=0}^{\infty} (k+1) c_{k+1} x^k - \sum_{k=0}^{\infty} k c_k x^k - a \sum_{k=0}^{\infty} c_k x^k = 0 \]
We equate coefficients to determine a recurrence relation for the coefficients.
\[ (k+1)k c_{k+1} + b(k+1) c_{k+1} - k c_k - a c_k = 0 \]
\[ c_{k+1} = \frac{k + a}{(k+1)(k+b)} c_k \]
For c_0 = 1, the recurrence relation has the solution
\[ c_k = \frac{(a)_k}{(b)_k \, k!}. \]
Thus one solution is
\[ y_1(x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k \, k!} x^k. \]
4. If a = -m, where m is a non-negative integer, then (a)_k = 0 for k > m. This makes y_1 a polynomial:
\[ y_1(x) = \sum_{k=0}^{m} \frac{(a)_k}{(b)_k \, k!} x^k. \]
5. If b = n + 1, where n is a non-negative integer, the indicial equation is
\[ \lambda(\lambda + n) = 0. \]
For the case n = 0, the indicial equation has a double root at zero. Thus the solutions have the form:
\[ y_1(x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k \, k!} x^k, \qquad y_2(x) = y_1(x) \log x + \sum_{k=0}^{\infty} d_k x^k \]
For the case n > 0 the roots of the indicial equation differ by an integer. The solutions have the form:
\[ y_1(x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k \, k!} x^k, \qquad y_2(x) = d_1 y_1(x) \log x + x^{-n} \sum_{k=0}^{\infty} d_k x^k \]
The form of the solution for y_2 can be substituted into the equation to determine the coefficients d_k.
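A small check of the Pochhammer-series solution (not part of the text): with a = -2 and b = 3 the series truncates to the quadratic 1 - 2x/3 + x^2/12, which can be verified to satisfy the equation exactly. The function names and the rational test point are arbitrary choices of mine.

```python
from fractions import Fraction
from math import factorial

def pochhammer(a, n):
    """(a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    result = Fraction(1)
    for k in range(n):
        result *= a + k
    return result

def y1_coeffs(a, b, kmax):
    """Taylor coefficients c_k = (a)_k / ((b)_k k!) of the series solution y_1."""
    return [pochhammer(a, k) / (pochhammer(b, k) * factorial(k))
            for k in range(kmax + 1)]

# a = -2 truncates the series: y_1 = 1 - 2x/3 + x^2/12 for b = 3
c = y1_coeffs(Fraction(-2), Fraction(3), 6)
assert c[:3] == [1, Fraction(-2, 3), Fraction(1, 12)]
assert all(ck == 0 for ck in c[3:])

# Check that this polynomial satisfies x y'' + (b - x) y' - a y = 0 exactly
x = Fraction(5, 7)  # arbitrary rational test point
y = 1 - 2 * x / 3 + x * x / 12
yp = Fraction(-2, 3) + x / 6
ypp = Fraction(1, 6)
assert x * ypp + (3 - x) * yp + 2 * y == 0  # note -a y = +2 y here
```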
Solution 25.10
We write the equation in the standard form.
\[ x y'' + 2x y' + 6 e^x y = 0 \]
\[ y'' + 2 y' + \frac{6 e^x}{x} y = 0 \]
We see that x = 0 is a regular singular point. The indicial equation is
\[ \lambda^2 - \lambda = 0 \]
\[ \lambda = 0, \; 1. \]
The first solution has the Frobenius form.
\[ y_1 = x + a_2 x^2 + a_3 x^3 + O(x^4) \]
We substitute y_1 into the differential equation and equate coefficients of powers of x.
\[ x y'' + 2x y' + 6 e^x y = 0 \]
\[ x \left( 2a_2 + 6a_3 x + O(x^2) \right) + 2x \left( 1 + 2a_2 x + 3a_3 x^2 + O(x^3) \right) + 6 \left( 1 + x + x^2/2 + O(x^3) \right) \left( x + a_2 x^2 + a_3 x^3 + O(x^4) \right) = 0 \]
\[ \left( 2a_2 x + 6a_3 x^2 \right) + \left( 2x + 4a_2 x^2 \right) + \left( 6x + 6(1 + a_2) x^2 \right) = O(x^3) \]
\[ a_2 = -4, \qquad a_3 = \frac{17}{3} \]
\[ y_1 = x - 4x^2 + \frac{17}{3} x^3 + O(x^4) \]
Now we see if the second solution has the Frobenius form. There is no a_1 x term because y_2 is only determined up to an additive constant times y_1.
\[ y_2 = 1 + O(x^2) \]
We substitute y_2 into the differential equation and equate coefficients of powers of x.
\[ x y'' + 2x y' + 6 e^x y = 0 \]
\[ O(x) + O(x) + 6(1 + O(x))(1 + O(x^2)) = 0 \]
\[ 6 = O(x) \]
The substitution y_2 = 1 + O(x) has yielded a contradiction. Since the second solution is not of the Frobenius form, it has the following form:
\[ y_2 = y_1 \ln(x) + a_0 + a_2 x^2 + O(x^3) \]
The first three terms in the solution are
\[ y_2 = a_0 + x \ln x - 4x^2 \ln x + O(x^2). \]
We calculate the derivatives of y_2.
\[ y_2' = \ln(x) + O(1) \]
\[ y_2'' = \frac{1}{x} + O(\ln(x)) \]
We substitute y_2 into the differential equation and equate coefficients.
\[ x y'' + 2x y' + 6 e^x y = 0 \]
\[ (1 + O(x \ln x)) + 2 \left( O(x \ln x) \right) + 6 \left( a_0 + O(x \ln x) \right) = 0 \]
\[ 1 + 6a_0 = 0 \]
\[ y_2 = -\frac{1}{6} + x \ln x - 4x^2 \ln x + O(x^2) \]
Chapter 26
Asymptotic Expansions
The more you sweat in practice, the less you bleed in battle.
-Navy Seal Saying
26.1 Asymptotic Relations
The \ll and \sim symbols. First we will introduce two new symbols used in asymptotic relations.
\[ f(x) \ll g(x) \quad \text{as } x \to x_0, \]
is read, "f(x) is much smaller than g(x) as x tends to x_0". This means
\[ \lim_{x \to x_0} \frac{f(x)}{g(x)} = 0. \]
The notation
\[ f(x) \sim g(x) \quad \text{as } x \to x_0, \]
is read "f(x) is asymptotic to g(x) as x tends to x_0", which means
\[ \lim_{x \to x_0} \frac{f(x)}{g(x)} = 1. \]
A few simple examples are
\[ e^x \gg x \quad \text{as } x \to +\infty \]
\[ \sin x \sim x \quad \text{as } x \to 0 \]
\[ 1/x \ll 1 \quad \text{as } x \to +\infty \]
\[ e^{-1/x} \ll x^n \quad \text{as } x \to 0^+ \text{ for all } n \]
An equivalent definition of f(x) \sim g(x) as x \to x_0 is
\[ f(x) - g(x) \ll g(x) \quad \text{as } x \to x_0. \]
Note that it does not make sense to say that a function f(x) is asymptotic to zero. Using the above definition this would imply
\[ f(x) \ll 0 \quad \text{as } x \to x_0. \]
If you encounter an expression like f(x) + g(x) \sim 0, take this to mean f(x) \sim -g(x).
The Big O and Little o Notation. If |f(x)| \leq m|g(x)| for some constant m in some neighborhood of the point x = x_0, then we say that
\[ f(x) = O(g(x)) \quad \text{as } x \to x_0. \]
We read this as "f is big O of g as x goes to x_0". If g(x) does not vanish, an equivalent definition is that f(x)/g(x) is bounded as x \to x_0.
If for any given positive \epsilon there exists a neighborhood of x = x_0 in which |f(x)| \leq \epsilon |g(x)| then
\[ f(x) = o(g(x)) \quad \text{as } x \to x_0. \]
This is read, "f is little o of g as x goes to x_0".
For a few examples of the use of this notation,
\[ e^{-x} = o(x^{-n}) \quad \text{as } x \to \infty \text{ for any } n. \]
\[ \sin x = O(x) \quad \text{as } x \to 0. \]
\[ \cos x - 1 = o(1) \quad \text{as } x \to 0. \]
\[ \log x = o(x^{\epsilon}) \quad \text{as } x \to +\infty \text{ for any positive } \epsilon. \]
Operations on Asymptotic Relations. You can perform the ordinary arithmetic operations on asymptotic relations. Addition, multiplication, and division are valid.
You can always integrate an asymptotic relation. Integration is a smoothing operation. However, it is necessary to exercise some care.
Example 26.1.1 Consider
\[ f'(x) \sim \frac{1}{x^2} \quad \text{as } x \to \infty. \]
This does not imply that
\[ f(x) \sim -\frac{1}{x} \quad \text{as } x \to \infty. \]
We have forgotten the constant of integration. Integrating the asymptotic relation for f'(x) yields
\[ f(x) \sim -\frac{1}{x} + c \quad \text{as } x \to \infty. \]
If c is nonzero then
\[ f(x) \sim c \quad \text{as } x \to \infty. \]
It is not always valid to differentiate an asymptotic relation.
Example 26.1.2 Consider f(x) = \frac{1}{x} + \frac{1}{x^2} \sin(x^3).
\[ f(x) \sim \frac{1}{x} \quad \text{as } x \to \infty. \]
Differentiating this relation would yield
\[ f'(x) \sim -\frac{1}{x^2} \quad \text{as } x \to \infty. \]
However, this is not true since
\[ f'(x) = -\frac{1}{x^2} - \frac{2}{x^3} \sin(x^3) + 3 \cos(x^3) \not\sim -\frac{1}{x^2} \quad \text{as } x \to \infty. \]
The Controlling Factor. The controlling factor is the most rapidly varying factor in an asymptotic relation. Consider a function f(x) that is asymptotic to x^2 e^x as x goes to infinity. The controlling factor is e^x. For a few examples of this,
x \log x has the controlling factor x as x \to \infty.
x^2 e^{1/x} has the controlling factor e^{1/x} as x \to 0.
x^{-1} \sin x has the controlling factor \sin x as x \to \infty.
The Leading Behavior. Consider a function that is asymptotic to a sum of terms,
\[ f(x) \sim a_0(x) + a_1(x) + a_2(x) + \cdots, \quad \text{as } x \to x_0, \]
where
\[ a_0(x) \gg a_1(x) \gg a_2(x) \gg \cdots, \quad \text{as } x \to x_0. \]
The first term in the sum is the leading order behavior. For a few examples,
For \sin x \sim x - x^3/6 + x^5/120 - \cdots as x \to 0, the leading order behavior is x.
For f(x) \sim e^x (1 - 1/x + 1/x^2) as x \to \infty, the leading order behavior is e^x.
26.2 Leading Order Behavior of Differential Equations
It is often useful to know the leading order behavior of the solutions to a differential equation. If we are considering a regular point or a regular singular point, the approach is straightforward. We simply use a Taylor expansion or the Frobenius method. However, if we are considering an irregular singular point, we will have to be a little more creative. Instead of an all-encompassing theory like the Frobenius method, which always gives us the solution, we will use a heuristic approach that usually gives us the solution.
Example 26.2.1 Consider the Airy equation
\[ y'' = x y. \]
We^1 would like to know how the solutions of this equation behave as x \to +\infty. First we need to classify the point at infinity. The change of variables
\[ x = \frac{1}{t}, \quad y(x) = u(t), \quad \frac{d}{dx} = -t^2 \frac{d}{dt}, \quad \frac{d^2}{dx^2} = t^4 \frac{d^2}{dt^2} + 2t^3 \frac{d}{dt} \]
yields
\[ t^4 u'' + 2t^3 u' = \frac{1}{t} u \]
\[ u'' + \frac{2}{t} u' - \frac{1}{t^5} u = 0. \]
Since the equation for u has an irregular singular point at zero, the equation for y has an irregular singular point at infinity.
^1 Using "we" may be a bit presumptuous on my part. Even if you don't particularly want to know how the solutions behave, I urge you to just play along. This is an interesting section, I promise.
The Controlling Factor. Since the solutions at irregular singular points often have exponential behavior, we make the substitution y = e^{s(x)} into the differential equation for y.
\[ \frac{d^2}{dx^2} \left[ e^s \right] = x e^s \]
\[ \left[ s'' + (s')^2 \right] e^s = x e^s \]
\[ s'' + (s')^2 = x \]
The Dominant Balance. Now we have a differential equation for s that appears harder to solve than our equation for y. However, we did not introduce the substitution in order to obtain an equation that we could solve exactly. We are looking for an equation that we can solve approximately in the limit as x \to \infty. If one of the terms in the equation for s is much smaller than the other two as x \to \infty, then dropping that term and solving the simpler equation may give us an approximate solution. If one of the terms in the equation for s is much smaller than the others then we say that the remaining terms form a dominant balance in the limit as x \to \infty.
Assume that the s'' term is much smaller than the others, s'' \ll (s')^2, x as x \to \infty. This gives us
\[ (s')^2 \sim x \]
\[ s' \sim \pm\sqrt{x} \]
\[ s \sim \pm\frac{2}{3} x^{3/2} \quad \text{as } x \to \infty. \]
Now let's check our assumption that the s'' term is small. Assuming that we can differentiate the asymptotic relation s' \sim \pm\sqrt{x}, we obtain s'' \sim \pm\frac{1}{2} x^{-1/2} as x \to \infty.
\[ s'' \ll (s')^2, \; x \qquad \Leftrightarrow \qquad x^{-1/2} \ll x \quad \text{as } x \to \infty \]
Thus we see that the behavior we found for s is consistent with our assumption. The controlling factors for solutions to the Airy equation are \exp\left( \pm\frac{2}{3} x^{3/2} \right) as x \to \infty.
The Leading Order Behavior of the Decaying Solution. Let's find the leading order behavior as x goes to infinity of the solution with the controlling factor \exp\left( -\frac{2}{3} x^{3/2} \right). We substitute
\[ s(x) = -\frac{2}{3} x^{3/2} + t(x), \quad \text{where } t(x) \ll x^{3/2} \text{ as } x \to \infty \]
into the differential equation for s.
\[ s'' + (s')^2 = x \]
\[ -\frac{1}{2} x^{-1/2} + t'' + \left( -x^{1/2} + t' \right)^2 = x \]
\[ t'' + (t')^2 - 2x^{1/2} t' - \frac{1}{2} x^{-1/2} = 0 \]
Assume that we can differentiate t \ll x^{3/2} to obtain
\[ t' \ll x^{1/2}, \qquad t'' \ll x^{-1/2} \quad \text{as } x \to \infty. \]
Since t'' \ll \frac{1}{2} x^{-1/2}, we drop the t'' term. Also, t' \ll x^{1/2} implies that (t')^2 \ll 2x^{1/2} t', so we drop the (t')^2 term. This gives us
\[ -2x^{1/2} t' - \frac{1}{2} x^{-1/2} \sim 0 \]
\[ t' \sim -\frac{1}{4} x^{-1} \]
\[ t \sim -\frac{1}{4} \log x + c \]
\[ t \sim -\frac{1}{4} \log x \quad \text{as } x \to \infty. \]
Checking our assumptions about t,
\[ t' \ll x^{1/2}: \quad x^{-1} \ll x^{1/2} \qquad\qquad t'' \ll x^{-1/2}: \quad x^{-2} \ll x^{-1/2} \]
we see that the behavior of t is consistent with our assumptions.
So far we have
\[ y(x) \sim \exp\left( -\frac{2}{3} x^{3/2} - \frac{1}{4} \log x + u(x) \right) \quad \text{as } x \to \infty, \]
where u(x) \ll \log x as x \to \infty. To continue, we substitute t(x) = -\frac{1}{4} \log x + u(x) into the differential equation for t(x).
\[ t'' + (t')^2 - 2x^{1/2} t' - \frac{1}{2} x^{-1/2} = 0 \]
\[ \frac{1}{4} x^{-2} + u'' + \left( -\frac{1}{4} x^{-1} + u' \right)^2 - 2x^{1/2} \left( -\frac{1}{4} x^{-1} + u' \right) - \frac{1}{2} x^{-1/2} = 0 \]
\[ u'' + (u')^2 + \left( -\frac{1}{2} x^{-1} - 2x^{1/2} \right) u' + \frac{5}{16} x^{-2} = 0 \]
Assume that we can differentiate the asymptotic relation for u to obtain
\[ u' \ll x^{-1}, \qquad u'' \ll x^{-2} \quad \text{as } x \to \infty. \]
We know that \frac{1}{2} x^{-1} u' \ll 2x^{1/2} u'. Using our assumptions,
\[ u'' \ll x^{-2} \quad \Rightarrow \quad u'' \ll \frac{5}{16} x^{-2} \]
\[ u' \ll x^{-1} \quad \Rightarrow \quad (u')^2 \ll \frac{5}{16} x^{-2}. \]
Thus we obtain
\[ -2x^{1/2} u' + \frac{5}{16} x^{-2} \sim 0 \]
\[ u' \sim \frac{5}{32} x^{-5/2} \]
\[ u \sim -\frac{5}{48} x^{-3/2} + c \]
\[ u \to c \quad \text{as } x \to \infty. \]
Since u = c + o(1), e^u = e^c + o(1). The behavior of y is
\[ y \sim x^{-1/4} \exp\left( -\frac{2}{3} x^{3/2} \right) \left( e^c + o(1) \right) \quad \text{as } x \to \infty. \]
Thus the full leading order behavior of the decaying solution is
\[ y \sim (\text{const}) \, x^{-1/4} \exp\left( -\frac{2}{3} x^{3/2} \right) \quad \text{as } x \to \infty. \]
You can show that the leading behavior of the exponentially growing solution is
\[ y \sim (\text{const}) \, x^{-1/4} \exp\left( \frac{2}{3} x^{3/2} \right) \quad \text{as } x \to \infty. \]
Example 26.2.2 The Modified Bessel Equation. Consider the modified Bessel equation
\[ x^2 y'' + x y' - (x^2 + \nu^2) y = 0. \]
We would like to know how the solutions of this equation behave as x \to +\infty. First we need to classify the point at infinity. The change of variables x = \frac{1}{t}, y(x) = u(t) yields
\[ \frac{1}{t^2} \left( t^4 u'' + 2t^3 u' \right) + \frac{1}{t} \left( -t^2 u' \right) - \left( \frac{1}{t^2} + \nu^2 \right) u = 0 \]
\[ u'' + \frac{1}{t} u' - \left( \frac{1}{t^4} + \frac{\nu^2}{t^2} \right) u = 0 \]
Since u(t) has an irregular singular point at t = 0, y(x) has an irregular singular point at infinity.
The Controlling Factor. Since the solutions at irregular singular points often have exponential behavior, we make the substitution y = e^{s(x)} into the differential equation for y.
\[ x^2 \left( s'' + (s')^2 \right) e^s + x s' e^s - (x^2 + \nu^2) e^s = 0 \]
\[ s'' + (s')^2 + \frac{1}{x} s' - \left( 1 + \frac{\nu^2}{x^2} \right) = 0 \]
We make the assumption that s'' \ll (s')^2 as x \to \infty and we know that \nu^2/x^2 \ll 1 as x \to \infty. Thus we drop these two terms from the equation to obtain an approximate equation for s.
\[ (s')^2 + \frac{1}{x} s' - 1 \sim 0 \]
This is a quadratic equation for s', so we can solve it exactly. However, let us try to simplify the equation even further. Assume that as x goes to infinity one of the three terms is much smaller than the other two. If this is the case, there will be a balance between the two dominant terms and we can neglect the third. Let's check the three possibilities.
1. 1 is small.
\[ (s')^2 + \frac{1}{x} s' \sim 0 \quad \Rightarrow \quad s' \sim -\frac{1}{x}, \; 0 \]
Since 1 \not\ll \frac{1}{x^2}, \, 0 as x \to \infty, this balance is inconsistent.
2. \frac{1}{x} s' is small.
\[ (s')^2 - 1 \sim 0 \quad \Rightarrow \quad s' \sim \pm 1 \]
This balance is consistent as \frac{1}{x} \ll 1 as x \to \infty.
3. (s')^2 is small.
\[ \frac{1}{x} s' - 1 \sim 0 \quad \Rightarrow \quad s' \sim x \]
This balance is not consistent as x^2 \not\ll 1 as x \to \infty.
The only dominant balance that makes sense leads to s' \sim \pm 1 as x \to \infty. Integrating this relationship,
\[ s \sim \pm x + c \sim \pm x \quad \text{as } x \to \infty. \]
Now let's see if the assumption that we made to get the simplified equation for s is valid. Assuming that we can differentiate s' \sim \pm 1, the relation s'' \ll (s')^2 becomes
\[ \frac{d}{dx} \left[ \pm 1 + o(1) \right] \ll \left[ \pm 1 + o(1) \right]^2 \]
\[ 0 + o(1/x) \ll 1 \]
Thus we see that the behavior we obtained for s is consistent with our initial assumption.
We have found two controlling factors, e^x and e^{-x}. This is a good sign, as we know that there must be two linearly independent solutions to the equation.
Leading Order Behavior. Now let's find the full leading behavior of the solution with the controlling factor e^x. In order to find a better approximation for s, we substitute s(x) = x + t(x), where t(x) \ll x as x \to \infty, into the differential equation for s.
\[ s'' + (s')^2 + \frac{1}{x} s' - \left( 1 + \frac{\nu^2}{x^2} \right) = 0 \]
\[ t'' + (1 + t')^2 + \frac{1}{x} (1 + t') - \left( 1 + \frac{\nu^2}{x^2} \right) = 0 \]
\[ t'' + (t')^2 + \left( 2 + \frac{1}{x} \right) t' + \frac{1}{x} - \frac{\nu^2}{x^2} = 0 \]
We know that \frac{1}{x} \ll 2 and \frac{\nu^2}{x^2} \ll \frac{1}{x} as x \to \infty. Dropping these terms from the equation yields
\[ t'' + (t')^2 + 2t' + \frac{1}{x} \sim 0. \]
Assuming that we can differentiate the asymptotic relation for t, we obtain t' \ll 1 and t'' \ll \frac{1}{x} as x \to \infty. We can drop t''. Since t' vanishes as x goes to infinity, (t')^2 \ll t'. Thus we are left with
\[ 2t' + \frac{1}{x} \sim 0, \quad \text{as } x \to \infty. \]
Integrating this relationship,
\[ t \sim -\frac{1}{2} \log x + c \sim -\frac{1}{2} \log x \quad \text{as } x \to \infty. \]
Checking our assumptions about the behavior of t,
\[ t' \ll 1: \quad \frac{1}{2x} \ll 1 \qquad\qquad t'' \ll \frac{1}{x}: \quad \frac{1}{2x^2} \ll \frac{1}{x} \]
we see that the solution is consistent with our assumptions.
The leading order behavior of the solution with controlling factor e^x is
\[ y(x) \sim \exp\left( x - \frac{1}{2} \log x + u(x) \right) = x^{-1/2} e^{x + u(x)} \quad \text{as } x \to \infty, \]
where u(x) \ll \log x. We substitute t = -\frac{1}{2} \log x + u(x) into the differential equation for t in order to find the asymptotic behavior of u.
\[ t'' + (t')^2 + \left( 2 + \frac{1}{x} \right) t' + \frac{1}{x} - \frac{\nu^2}{x^2} = 0 \]
\[ \frac{1}{2x^2} + u'' + \left( -\frac{1}{2x} + u' \right)^2 + \left( 2 + \frac{1}{x} \right) \left( -\frac{1}{2x} + u' \right) + \frac{1}{x} - \frac{\nu^2}{x^2} = 0 \]
\[ u'' + (u')^2 + 2u' + \frac{1}{4x^2} - \frac{\nu^2}{x^2} = 0 \]
Assuming that we can differentiate the asymptotic relation for $u$, $u' \ll 1/x$ and $u'' \ll 1/x^2$ as $x \to \infty$. Thus we see that we can neglect the $u''$ and $(u')^2$ terms.
\[ 2u' + \left(\frac{1}{4} - \nu^2\right)\frac{1}{x^2} \approx 0 \]
\[ u' \approx \frac{1}{2}\left(\nu^2 - \frac{1}{4}\right)\frac{1}{x^2} \]
\[ u \approx \frac{1}{2}\left(\frac{1}{4} - \nu^2\right)\frac{1}{x} + c \]
\[ u \to c \quad \text{as } x \to \infty \]
Since $u = c + o(1)$, we can expand $e^u$ as $e^c + o(1)$. Thus we can write the leading order behavior as
\[ y \sim x^{-1/2} e^x \left(e^c + o(1)\right). \]
Thus the full leading order behavior is
\[ y \sim (\text{const})\, x^{-1/2} e^x \quad \text{as } x \to \infty. \]
You can verify that the solution with the controlling factor $e^{-x}$ has the leading order behavior
\[ y \sim (\text{const})\, x^{-1/2} e^{-x} \quad \text{as } x \to \infty. \]
Two linearly independent solutions to the modified Bessel equation are the modified Bessel functions $I_\nu(x)$ and $K_\nu(x)$. These functions have the asymptotic behavior
\[ I_\nu(x) \sim \frac{1}{\sqrt{2\pi x}} e^x, \qquad K_\nu(x) \sim \sqrt{\frac{\pi}{2x}}\, e^{-x} \quad \text{as } x \to \infty. \]
In Figure 26.1 $K_0(x)$ is plotted in a solid line and $\sqrt{\frac{\pi}{2x}}\, e^{-x}$ is plotted in a dashed line. We see that the leading order behavior of the solution as $x$ goes to infinity gives a good approximation to the behavior even for fairly small values of $x$.
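The quality of the leading order behavior is easy to check numerically. The following sketch (standard library only) evaluates $K_0(x)$ through the known integral representation $K_0(x) = \int_0^\infty e^{-x \cosh t}\,dt$ and compares it with the asymptotic form:

```python
import math

def k0(x, t_max=20.0, n=4000):
    """Modified Bessel function K_0 via the integral representation
    K_0(x) = integral of exp(-x cosh t) dt over t in (0, inf),
    evaluated with the trapezoid rule; the integrand decays like
    exp(-x e^t / 2), so truncation at t_max is harmless here."""
    h = t_max / n
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(t_max)))
    for k in range(1, n):
        total += math.exp(-x * math.cosh(k * h))
    return total * h

x = 5.0
exact = k0(x)
leading = math.sqrt(math.pi / (2 * x)) * math.exp(-x)
print(exact, leading, abs(exact - leading) / exact)
```

At $x = 5$ the relative error is already only a couple of percent, and it shrinks as $x$ grows, consistent with the plot in Figure 26.1.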
Figure 26.1: Plot of $K_0(x)$ and its leading order behavior.

26.3 Integration by Parts

Example 26.3.1 The complementary error function
\[ \operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\,dt \]
is used in statistics for its relation to the normal probability distribution. We would like to find an approximation to $\operatorname{erfc}(x)$ for large $x$. Using integration by parts,
\[ \operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty \left(-\frac{1}{2t}\right)\left(-2t\, e^{-t^2}\right) dt \]
\[ = \frac{2}{\sqrt{\pi}} \left[-\frac{1}{2t}\, e^{-t^2}\right]_x^\infty - \frac{2}{\sqrt{\pi}} \int_x^\infty \frac{1}{2}\, t^{-2}\, e^{-t^2}\,dt \]
\[ = \frac{1}{\sqrt{\pi}}\, x^{-1} e^{-x^2} - \frac{1}{\sqrt{\pi}} \int_x^\infty t^{-2}\, e^{-t^2}\,dt. \]
We examine the residual integral in this expression.
\[ \frac{1}{\sqrt{\pi}} \int_x^\infty t^{-2}\, e^{-t^2}\,dt \le \frac{1}{2\sqrt{\pi}}\, x^{-3} \int_x^\infty 2t\, e^{-t^2}\,dt = \frac{1}{2\sqrt{\pi}}\, x^{-3}\, e^{-x^2} \]
Thus we see that
\[ \frac{1}{\sqrt{\pi}}\, x^{-1} e^{-x^2} \gg \frac{1}{\sqrt{\pi}} \int_x^\infty t^{-2}\, e^{-t^2}\,dt \quad \text{as } x \to \infty. \]
Therefore,
\[ \operatorname{erfc}(x) \sim \frac{1}{\sqrt{\pi}}\, x^{-1} e^{-x^2} \quad \text{as } x \to \infty, \]
and we expect that $\frac{1}{\sqrt{\pi}} x^{-1} e^{-x^2}$ would be a good approximation to $\operatorname{erfc}(x)$ for large $x$. In Figure 26.2 $\log(\operatorname{erfc}(x))$ is graphed in a solid line and $\log\left(\frac{1}{\sqrt{\pi}} x^{-1} e^{-x^2}\right)$ is graphed in a dashed line. We see that this first approximation to the error function gives very good results even for moderate values of $x$. Table 26.1 gives the error in this first approximation for various values of $x$.

If we continue integrating by parts, we might get a better approximation to the complementary error function.
\[ \operatorname{erfc}(x) = \frac{1}{\sqrt{\pi}}\, x^{-1} e^{-x^2} - \frac{1}{\sqrt{\pi}} \int_x^\infty t^{-2}\, e^{-t^2}\,dt \]
\[ = \frac{1}{\sqrt{\pi}}\, x^{-1} e^{-x^2} - \frac{1}{\sqrt{\pi}} \left[-\frac{1}{2}\, t^{-3}\, e^{-t^2}\right]_x^\infty + \frac{1}{\sqrt{\pi}} \int_x^\infty \frac{3}{2}\, t^{-4}\, e^{-t^2}\,dt \]
\[ = \frac{1}{\sqrt{\pi}}\, e^{-x^2} \left(x^{-1} - \frac{1}{2} x^{-3}\right) + \frac{1}{\sqrt{\pi}} \int_x^\infty \frac{3}{2}\, t^{-4}\, e^{-t^2}\,dt \]
\[ = \frac{1}{\sqrt{\pi}}\, e^{-x^2} \left(x^{-1} - \frac{1}{2} x^{-3}\right) + \frac{1}{\sqrt{\pi}} \left[-\frac{3}{4}\, t^{-5}\, e^{-t^2}\right]_x^\infty - \frac{1}{\sqrt{\pi}} \int_x^\infty \frac{15}{4}\, t^{-6}\, e^{-t^2}\,dt \]
\[ = \frac{1}{\sqrt{\pi}}\, e^{-x^2} \left(x^{-1} - \frac{1}{2} x^{-3} + \frac{3}{4} x^{-5}\right) - \frac{1}{\sqrt{\pi}} \int_x^\infty \frac{15}{4}\, t^{-6}\, e^{-t^2}\,dt \]
Figure 26.2: Logarithm of the Approximation to the Complementary Error Function.

The error in approximating $\operatorname{erfc}(x)$ with the first three terms is given in Table 26.1. We see that for $x \ge 2$ the three terms give a much better approximation to $\operatorname{erfc}(x)$ than just the first term.

At this point you might guess that you could continue this process indefinitely. By repeated application of integration by parts, you can obtain the series expansion
\[ \operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}}\, e^{-x^2} \sum_{n=0}^\infty \frac{(-1)^n (2n)!}{n!\,(2x)^{2n+1}}. \]
 x   erfc(x)           One Term Relative Error   Three Term Relative Error
 1   0.157                 0.3203                    0.6497
 2   0.00468               0.1044                    0.0182
 3   2.21 x 10^-5          0.0507                    0.0020
 4   1.54 x 10^-8          0.0296                    3.9 x 10^-4
 5   1.54 x 10^-12         0.0192                    1.1 x 10^-4
 6   2.15 x 10^-17         0.0135                    3.7 x 10^-5
 7   4.18 x 10^-23         0.0100                    1.5 x 10^-5
 8   1.12 x 10^-29         0.0077                    6.9 x 10^-6
 9   4.14 x 10^-37         0.0061                    3.4 x 10^-6
10   2.09 x 10^-45         0.0049                    1.8 x 10^-6

Table 26.1: Relative error in the one term and three term approximations to erfc(x).
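The entries of Table 26.1 can be reproduced with a few lines of Python; `math.erfc` supplies the exact values, and the partial sums are those derived above. This is an illustrative sketch, not part of the derivation:

```python
import math

def erfc_asymptotic(x, terms):
    """Partial sum of the asymptotic series
    erfc(x) ~ (e^{-x^2}/sqrt(pi)) (1/x - 1/(2 x^3) + 3/(4 x^5) - ...).
    Consecutive terms differ by the factor -(2n+1)/(2 x^2)."""
    total, term = 0.0, 1.0 / x
    for n in range(terms):
        total += term
        term *= -(2 * n + 1) / (2 * x * x)
    return math.exp(-x * x) / math.sqrt(math.pi) * total

x = 2.0
exact = math.erfc(x)
one = erfc_asymptotic(x, 1)
three = erfc_asymptotic(x, 3)
print(abs(one - exact) / exact)    # close to the 0.1044 entry in the table
print(abs(three - exact) / exact)  # close to the 0.0182 entry
```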
This is a Taylor expansion about infinity. Let's find the radius of convergence.
\[ \lim_{n\to\infty} \left|\frac{a_{n+1}(x)}{a_n(x)}\right| < 1 \quad \Rightarrow \quad \lim_{n\to\infty} \left|\frac{(-1)^{n+1}\,(2(n+1))!\; n!\,(2x)^{2n+1}}{(n+1)!\,(2x)^{2(n+1)+1}\; (-1)^n\,(2n)!}\right| < 1 \]
\[ \Rightarrow \quad \lim_{n\to\infty} \left|\frac{(2n+2)(2n+1)}{(n+1)(2x)^2}\right| < 1 \quad \Rightarrow \quad \lim_{n\to\infty} \left|\frac{2(2n+1)}{(2x)^2}\right| < 1 \quad \Rightarrow \quad \left|\frac{1}{x}\right| = 0 \]
Thus we see that our series diverges for all $x$. Our conventional mathematical sense would tell us that this series is useless; however, we will see that this series is very useful as an asymptotic expansion of $\operatorname{erfc}(x)$.
Say we are working with a convergent series expansion of some function $f(x)$.
\[ f(x) = \sum_{n=0}^\infty a_n(x) \]
For fixed $x = x_0$,
\[ f(x_0) - \sum_{n=0}^N a_n(x_0) \to 0 \quad \text{as } N \to \infty. \]
For an asymptotic series we have a quite different behavior. If $g(x)$ is asymptotic to $\sum_{n=0}^\infty b_n(x)$ as $x \to x_0$, then for fixed $N$,
\[ g(x) - \sum_{n=0}^N b_n(x) \ll b_N(x) \quad \text{as } x \to x_0. \]
For the complementary error function, for fixed $N$,
\[ \operatorname{erfc}(x) - \frac{2}{\sqrt{\pi}}\, e^{-x^2} \sum_{n=0}^N \frac{(-1)^n (2n)!}{n!\,(2x)^{2n+1}} \ll x^{-2N-1} \quad \text{as } x \to \infty. \]
We say that the error function is asymptotic to the series as $x$ goes to infinity.
\[ \operatorname{erfc}(x) \sim \frac{2}{\sqrt{\pi}}\, e^{-x^2} \sum_{n=0}^\infty \frac{(-1)^n (2n)!}{n!\,(2x)^{2n+1}} \quad \text{as } x \to \infty \]
In Figure 26.3 the logarithm of the difference between the one term, ten term, and twenty term approximations and the complementary error function are graphed in coarse, medium, and fine dashed lines, respectively.
Figure 26.3: log(error in approximation)
*Optimal Asymptotic Series. Of the three approximations, the one term is best for $x \lesssim 2$, the ten term is best for $2 \lesssim x \lesssim 4$, and the twenty term is best for $4 \lesssim x$. This leads us to the concept of an optimal asymptotic approximation. An optimal asymptotic approximation contains the number of terms in the series that best approximates the true behavior.

In Figure 26.4 we see a plot of the number of terms in the approximation versus the logarithm of the error for $x = 3$. Thus we see that the optimal asymptotic approximation is the first nine terms. After nine terms the error gets larger. It was inevitable that the error would start to grow after some point, as the series diverges for all $x$.

Figure 26.4: The logarithm of the error in using n terms.

A good rule of thumb for finding the optimal series is to find the smallest term in the series and take all of the terms up to but not including the smallest term as the optimal approximation. This makes sense, because the $n^{\text{th}}$ term is an approximation of the error incurred by using the first $n-1$ terms. In Figure 26.5 there is a plot of $n$ versus the logarithm of the $n^{\text{th}}$ term in the asymptotic expansion of $\operatorname{erfc}(3)$. We see that the tenth term is the smallest. Thus, in this case, our rule of thumb predicts the actual optimal series.
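The rule of thumb is easy to test numerically. The sketch below generates the terms of the asymptotic series for erfc(3), locates the smallest one, and truncates just before it:

```python
import math

def series_terms(x, count):
    """Terms of the asymptotic series for erfc, including the prefactor:
    t_n = (2/sqrt(pi)) e^{-x^2} (-1)^n (2n)! / (n! (2x)^{2n+1})."""
    terms = []
    t = 2.0 / math.sqrt(math.pi) * math.exp(-x * x) / (2 * x)
    for n in range(count):
        terms.append(t)
        # ratio of consecutive terms: -(2n+1)(2n+2) / ((n+1)(2x)^2)
        t *= -(2 * n + 1) * (2 * n + 2) / ((n + 1) * (2 * x) ** 2)
    return terms

x = 3.0
terms = series_terms(x, 25)
smallest = min(range(len(terms)), key=lambda n: abs(terms[n]))
optimal = sum(terms[:smallest])  # all terms before the smallest one
print(smallest)                  # index 9: the tenth term is the smallest
print(abs(optimal - math.erfc(x)) / math.erfc(x))
```

The truncated sum of the first nine terms matches erfc(3) to a relative error of about $10^{-5}$, even though the full series diverges.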
26.4 Asymptotic Series

A function $f(x)$ has an asymptotic series expansion about $x = x_0$, $\sum_{n=0}^\infty a_n(x)$, if
\[ f(x) - \sum_{n=0}^N a_n(x) \ll a_N(x) \quad \text{as } x \to x_0, \quad \text{for all } N. \]
Figure 26.5: The logarithm of the $n^{\text{th}}$ term in the expansion for $x = 3$.
An asymptotic series may be convergent or divergent. Most of the asymptotic series you encounter will be divergent. If the series is convergent, then we have that
\[ f(x) - \sum_{n=0}^N a_n(x) \to 0 \quad \text{as } N \to \infty \text{ for fixed } x. \]
Let $\phi_n(x)$ be some set of gauge functions. The example that we are most familiar with is $\phi_n(x) = x^n$. If we say that
\[ \sum_{n=0}^\infty a_n \phi_n(x) \sim \sum_{n=0}^\infty b_n \phi_n(x), \]
then this means that $a_n = b_n$.
26.5 Asymptotic Expansions of Differential Equations

26.5.1 The Parabolic Cylinder Equation.

Controlling Factor. Let us examine the behavior of the bounded solution of the parabolic cylinder equation as $x \to +\infty$.
\[ y'' + \left(\nu + \frac{1}{2} - \frac{1}{4} x^2\right) y = 0 \]
This equation has an irregular singular point at infinity. With the substitution $y = e^s$, the equation becomes
\[ s'' + (s')^2 + \nu + \frac{1}{2} - \frac{1}{4} x^2 = 0. \]
We know that
\[ \nu + \frac{1}{2} \ll \frac{1}{4} x^2 \quad \text{as } x \to +\infty, \]
so we drop this term from the equation. Let us make the assumption that
\[ s'' \ll (s')^2 \quad \text{as } x \to +\infty. \]
Thus we are left with the equation
\[ (s')^2 \approx \frac{1}{4} x^2 \]
\[ s' \approx \pm \frac{1}{2} x \]
\[ s \approx \pm \frac{1}{4} x^2 + c \]
\[ s \approx \pm \frac{1}{4} x^2 \quad \text{as } x \to +\infty \]
Now let's check if our assumption is consistent. Substituting into $s'' \ll (s')^2$ yields $1/2 \ll x^2/4$ as $x \to +\infty$, which is true. Since the equation for $y$ is second order, we would expect that there are two different behaviors as $x \to +\infty$. This is confirmed by the fact that we found two behaviors for $s$. $s \approx -x^2/4$ corresponds to the solution that is bounded at $+\infty$. Thus the controlling factor of the leading behavior is $e^{-x^2/4}$.
Leading Order Behavior. Now we attempt to get a better approximation to $s$. We make the substitution $s = -\frac{1}{4}x^2 + t(x)$ into the equation for $s$, where $t \ll x^2$ as $x \to +\infty$.
\[ -\frac{1}{2} + t'' + \frac{1}{4} x^2 - x t' + (t')^2 + \nu + \frac{1}{2} - \frac{1}{4} x^2 = 0 \]
\[ t'' - x t' + (t')^2 + \nu = 0 \]
Since $t \ll x^2$, we assume that $t' \ll x$ and $t'' \ll 1$ as $x \to +\infty$. Note that this is only an assumption, since it is not always valid to differentiate an asymptotic relation. Thus $(t')^2 \ll x t'$ and $t'' \ll x t'$ as $x \to +\infty$; we drop these terms from the equation.
\[ t' \sim \frac{\nu}{x} \]
\[ t \sim \nu \log x + c \]
\[ t \sim \nu \log x \quad \text{as } x \to +\infty \]
Checking our assumptions for the derivatives of $t$,
\[ t' \ll x: \quad \frac{1}{x} \ll x, \qquad t'' \ll 1: \quad \frac{1}{x^2} \ll 1, \]
we see that they were consistent. Now we wish to refine our approximation for $t$ with the substitution $t(x) = \nu \log x + u(x)$. So far we have that
\[ y \sim \exp\left[-\frac{x^2}{4} + \nu \log x + u(x)\right] = x^\nu \exp\left[-\frac{x^2}{4} + u(x)\right] \quad \text{as } x \to +\infty. \]
We can try and determine $u(x)$ by substituting the expression $t(x) = \nu \log x + u(x)$ into the equation for $t$.
\[ -\frac{\nu}{x^2} + u'' - (\nu + x u') + \frac{\nu^2}{x^2} + \frac{2\nu}{x} u' + (u')^2 + \nu = 0 \]
After suitable simplification, this equation becomes
\[ u' \sim \frac{\nu^2 - \nu}{x^3} \quad \text{as } x \to +\infty. \]
Integrating this asymptotic relation,
\[ u \sim \frac{\nu - \nu^2}{2x^2} + c \quad \text{as } x \to +\infty. \]
Notice that $\frac{\nu - \nu^2}{2x^2} \ll c$ as $x \to +\infty$; thus this procedure fails to give us the behavior of $u(x)$. Further refinements to our approximation for $s$ go to a constant value as $x \to +\infty$. Thus we have that the leading behavior is
\[ y \sim c\, x^\nu \exp\left(-\frac{x^2}{4}\right) \quad \text{as } x \to +\infty. \]
Asymptotic Expansion. Since we have factored off the singular behavior of $y$, we might expect that what is left over is well behaved enough to be expanded in a Taylor series about infinity. Let us assume that we can expand the solution for $y$ in the form
\[ y(x) \sim x^\nu \exp\left(-\frac{x^2}{4}\right) \sigma(x) = x^\nu \exp\left(-\frac{x^2}{4}\right) \sum_{n=0}^\infty a_n x^{-n} \quad \text{as } x \to +\infty, \]
where $a_0 = 1$. Differentiating $y = x^\nu \exp\left(-\frac{x^2}{4}\right) \sigma(x)$,
\[ y' = \left(\nu x^{\nu-1} - \frac{1}{2} x^{\nu+1}\right) e^{-x^2/4}\, \sigma(x) + x^\nu e^{-x^2/4}\, \sigma'(x) \]
\[ y'' = \left(\nu(\nu-1) x^{\nu-2} - \frac{1}{2} \nu x^{\nu} - \frac{1}{2}(\nu+1) x^{\nu} + \frac{1}{4} x^{\nu+2}\right) e^{-x^2/4}\, \sigma(x) + 2\left(\nu x^{\nu-1} - \frac{1}{2} x^{\nu+1}\right) e^{-x^2/4}\, \sigma'(x) + x^\nu e^{-x^2/4}\, \sigma''(x). \]
Substituting this into the differential equation for $y$,
\[ \left(\nu(\nu-1) x^{-2} - \left(\nu + \frac{1}{2}\right) + \frac{1}{4} x^2\right) \sigma(x) + 2\left(\nu x^{-1} - \frac{1}{2} x\right) \sigma'(x) + \sigma''(x) + \left(\nu + \frac{1}{2} - \frac{1}{4} x^2\right) \sigma(x) = 0 \]
\[ \sigma''(x) + \left(2\nu x^{-1} - x\right) \sigma'(x) + \nu(\nu-1) x^{-2} \sigma(x) = 0 \]
\[ x^2 \sigma''(x) + \left(2\nu x - x^3\right) \sigma'(x) + \nu(\nu-1) \sigma(x) = 0. \]
Differentiating the expression for $\sigma(x)$,
\[ \sigma(x) = \sum_{n=0}^\infty a_n x^{-n} \]
\[ \sigma'(x) = -\sum_{n=1}^\infty n a_n x^{-n-1} = -\sum_{n=-1}^\infty (n+2) a_{n+2} x^{-n-3} \]
\[ \sigma''(x) = \sum_{n=1}^\infty n(n+1) a_n x^{-n-2}. \]
Substituting this into the differential equation for $\sigma(x)$,
\[ \sum_{n=1}^\infty n(n+1) a_n x^{-n} - 2\nu \sum_{n=1}^\infty n a_n x^{-n} + \sum_{n=-1}^\infty (n+2) a_{n+2} x^{-n} + \nu(\nu-1) \sum_{n=0}^\infty a_n x^{-n} = 0. \]
Equating the coefficient of $x^1$ to zero yields
\[ a_1 x = 0 \quad \Rightarrow \quad a_1 = 0. \]
Equating the coefficient of $x^0$,
\[ 2 a_2 + \nu(\nu-1) a_0 = 0 \quad \Rightarrow \quad a_2 = -\frac{1}{2} \nu(\nu-1). \]
From the coefficient of $x^{-n}$ for $n > 0$,
\[ n(n+1) a_n - 2\nu n a_n + (n+2) a_{n+2} + \nu(\nu-1) a_n = 0 \]
\[ (n+2) a_{n+2} = -\left[n(n+1) - 2\nu n + \nu(\nu-1)\right] a_n \]
\[ (n+2) a_{n+2} = -\left[n^2 + n - 2\nu n + \nu(\nu-1)\right] a_n \]
\[ (n+2) a_{n+2} = -(n - \nu)(n + 1 - \nu) a_n. \]
Thus the recursion formula for the $a_n$'s is
\[ a_{n+2} = -\frac{(n-\nu)(n+1-\nu)}{n+2}\, a_n, \qquad a_0 = 1, \quad a_1 = 0. \]
The first few terms in $\sigma(x)$ are
\[ \sigma(x) \sim 1 - \frac{\nu(\nu-1)}{2^1\, 1!}\, x^{-2} + \frac{\nu(\nu-1)(\nu-2)(\nu-3)}{2^2\, 2!}\, x^{-4} - \cdots \quad \text{as } x \to +\infty. \]
If we check the radius of convergence of this series,
\[ \lim_{n\to\infty} \left|\frac{a_{n+2} x^{-n-2}}{a_n x^{-n}}\right| < 1 \quad \Rightarrow \quad \lim_{n\to\infty} \left|\frac{(n-\nu)(n+1-\nu)}{(n+2)\, x^2}\right| < 1 \quad \Rightarrow \quad \left|\frac{1}{x}\right| = 0, \]
we see that the radius of convergence is zero. Thus if $\nu \ne 0, 1, 2, \ldots$, our asymptotic expansion for $y$,
\[ y \sim x^\nu e^{-x^2/4} \left[1 - \frac{\nu(\nu-1)}{2^1\, 1!}\, x^{-2} + \frac{\nu(\nu-1)(\nu-2)(\nu-3)}{2^2\, 2!}\, x^{-4} - \cdots\right], \]
diverges for all $x$. However, this solution is still very useful. If we only use a finite number of terms, we will get a very good numerical approximation for large $x$.

In Figure 26.6 the one term, two term, and three term asymptotic approximations are shown in rough, medium, and fine dashing, respectively. The numerical solution is plotted in a solid line.
Figure 26.6: Asymptotic Approximations to the Parabolic Cylinder Function.
Chapter 27

Hilbert Spaces

An expert is a man who has made all the mistakes which can be made, in a narrow field.
- Niels Bohr

WARNING: UNDER HEAVY CONSTRUCTION.

In this chapter we will introduce Hilbert spaces. We develop the two important examples: $l^2$, the space of square summable infinite vectors, and $L^2$, the space of square integrable functions.
27.1 Linear Spaces

A linear space is a set of elements $x, y, z, \ldots$ that is closed under addition and scalar multiplication. By closed under addition we mean: if $x$ and $y$ are elements, then $z = x + y$ is an element. The addition is commutative and associative.
\[ x + y = y + x \]
\[ (x + y) + z = x + (y + z) \]
Scalar multiplication is associative and distributive. Let $a$ and $b$ be scalars, $a, b \in \mathbb{C}$.
\[ (ab) x = a(b x) \]
\[ (a + b) x = a x + b x \]
\[ a(x + y) = a x + a y \]
All the linear spaces that we will work with have additional properties: The zero element $0$ is the additive identity.
\[ x + 0 = x \]
Multiplication by the scalar $1$ is the multiplicative identity.
\[ 1 x = x \]
Each element $x$ has an additive inverse, $-x$.
\[ x + (-x) = 0 \]
Consider a set of elements $x_1, x_2, \ldots$. Let the $c_i$ be scalars. If
\[ y = c_1 x_1 + c_2 x_2 + \cdots, \]
then $y$ is a linear combination of the $x_i$. A set of elements $x_1, x_2, \ldots$ is linearly independent if the equation
\[ c_1 x_1 + c_2 x_2 + \cdots = 0 \]
has only the trivial solution $c_1 = c_2 = \cdots = 0$. Otherwise the set is linearly dependent.

Let $e_1, e_2, \ldots$ be a linearly independent set of elements. If every element $x$ can be written as a linear combination of the $e_i$, then the set $\{e_i\}$ is a basis for the space. The $e_i$ are called base elements.
\[ x = \sum_i c_i e_i \]
The set $\{e_i\}$ is also called a coordinate system. The scalars $c_i$ are the coordinates or components of $x$. If the set $\{e_i\}$ is a basis, then we say that the set is complete.
27.2 Inner Products

$\langle x | y \rangle$ is an inner product of two elements $x$ and $y$ if it satisfies the properties:

1. Conjugate-commutative.
\[ \langle x | y \rangle = \overline{\langle y | x \rangle} \]
2. Linearity in the second argument.
\[ \langle x | a y + b z \rangle = a \langle x | y \rangle + b \langle x | z \rangle \]
3. Positive definite.
\[ \langle x | x \rangle \ge 0, \qquad \langle x | x \rangle = 0 \text{ if and only if } x = 0 \]

From these properties one can derive the properties:

1. Conjugate linearity in the first argument.
\[ \langle a x + b y | z \rangle = \bar{a} \langle x | z \rangle + \bar{b} \langle y | z \rangle \]
2. Schwarz Inequality.
\[ |\langle x | y \rangle|^2 \le \langle x | x \rangle \langle y | y \rangle \]

One inner product of vectors is the Euclidean inner product.
\[ \langle x | y \rangle \equiv \bar{x} \cdot y = \sum_{i=0}^{n} \bar{x}_i y_i \]
One inner product of functions defined on $(a \ldots b)$ is
\[ \langle u | v \rangle = \int_a^b \bar{u}(x) v(x)\, dx. \]
If $\sigma(x)$ is a positive-valued function, then we can define the inner product:
\[ \langle u | \sigma | v \rangle = \int_a^b \bar{u}(x) \sigma(x) v(x)\, dx. \]
This is called the inner product with respect to the weighting function $\sigma(x)$. It is also denoted $\langle u | v \rangle_\sigma$.
27.3 Norms

A norm is a real-valued function on a space which satisfies the following properties.

1. Positive.
\[ \|x\| \ge 0 \]
2. Definite.
\[ \|x\| = 0 \text{ if and only if } x = 0 \]
3. Multiplication by a scalar, $c \in \mathbb{C}$.
\[ \|c x\| = |c| \|x\| \]
4. Triangle inequality.
\[ \|x + y\| \le \|x\| + \|y\| \]
Example 27.3.1 Consider a vector space (finite or infinite dimension) with elements $x = (x_1, x_2, x_3, \ldots)$. Here are some common norms.

Norm generated by the inner product.
\[ \|x\| = \sqrt{\langle x | x \rangle} \]
The $l^p$ norm.
\[ \|x\|_p = \left(\sum_{k=1}^\infty |x_k|^p\right)^{1/p} \]
There are three common cases of the $l^p$ norm.

Euclidean norm, or $l^2$ norm.
\[ \|x\|_2 = \sqrt{\sum_{k=1}^\infty |x_k|^2} \]
$l^1$ norm.
\[ \|x\|_1 = \sum_{k=1}^\infty |x_k| \]
$l^\infty$ norm.
\[ \|x\|_\infty = \max_k |x_k| \]
Example 27.3.2 Consider a space of functions defined on the interval $(a \ldots b)$. Here are some common norms.

Norm generated by the inner product.
\[ \|u\| = \sqrt{\langle u | u \rangle} \]
The $L^p$ norm.
\[ \|u\|_p = \left(\int_a^b |u(x)|^p\, dx\right)^{1/p} \]
There are three common cases of the $L^p$ norm.

Euclidean norm, or $L^2$ norm.
\[ \|u\|_2 = \sqrt{\int_a^b |u(x)|^2\, dx} \]
$L^1$ norm.
\[ \|u\|_1 = \int_a^b |u(x)|\, dx \]
$L^\infty$ norm.
\[ \|u\|_\infty = \limsup_{x \in (a \ldots b)} |u(x)| \]

Distance. Using the norm, we can define the distance between elements $u$ and $v$.
\[ d(u, v) \equiv \|u - v\| \]
Note that $d(u, v) = 0$ does not necessarily imply that $u = v$. CONTINUE.
27.4 Linear Independence.

27.5 Orthogonality

Orthogonality.
\[ \langle \phi_j | \phi_k \rangle = 0 \quad \text{if } j \ne k \]
Orthonormality.
\[ \langle \phi_j | \phi_k \rangle = \delta_{jk} \]

Example 27.5.1 Infinite vectors. $e_j$ has all zeros except for a 1 in the $j^{\text{th}}$ position.
\[ e_j = (0, 0, \ldots, 0, 1, 0, \ldots) \]

Example 27.5.2 $L^2$ functions on $(0 \ldots 2\pi)$.
\[ \phi_j = \frac{1}{\sqrt{2\pi}}\, e^{i j x}, \quad j \in \mathbb{Z} \]
\[ \phi_0 = \frac{1}{\sqrt{2\pi}}, \qquad \phi_j^{(1)} = \frac{1}{\sqrt{\pi}} \cos(j x), \qquad \phi_j^{(2)} = \frac{1}{\sqrt{\pi}} \sin(j x), \quad j \in \mathbb{Z}^+ \]
27.6 Gramm-Schmidt Orthogonalization

Let $\psi_1(x), \ldots, \psi_n(x)$ be a set of linearly independent functions. Using the Gramm-Schmidt orthogonalization process we can construct a set of orthogonal functions $\phi_1(x), \ldots, \phi_n(x)$ that has the same span as the set of $\psi_n$'s with the formulas
\[ \phi_1 = \psi_1 \]
\[ \phi_2 = \psi_2 - \frac{\langle \phi_1 | \psi_2 \rangle}{\|\phi_1\|^2}\, \phi_1 \]
\[ \phi_3 = \psi_3 - \frac{\langle \phi_1 | \psi_3 \rangle}{\|\phi_1\|^2}\, \phi_1 - \frac{\langle \phi_2 | \psi_3 \rangle}{\|\phi_2\|^2}\, \phi_2 \]
\[ \vdots \]
\[ \phi_n = \psi_n - \sum_{j=1}^{n-1} \frac{\langle \phi_j | \psi_n \rangle}{\|\phi_j\|^2}\, \phi_j. \]
You could verify that the $\phi_m$ are orthogonal with a proof by induction.
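The formulas above translate directly into code. Here is a minimal sketch (function and variable names are our own) that runs the orthogonalization on polynomials over $[-1, 1]$ with exact rational arithmetic; a polynomial is a list of coefficients, constant term first, and the inner product uses $\int_{-1}^1 x^m\,dx = 2/(m+1)$ for even $m$ and $0$ for odd $m$:

```python
from fractions import Fraction as F

def inner(p, q):
    # <p|q> = integral of p(x) q(x) over [-1, 1]
    prod = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    return sum(c * F(2, m + 1) for m, c in enumerate(prod) if m % 2 == 0)

def gram_schmidt(basis):
    ortho = []
    for psi in basis:
        phi = [F(c) for c in psi]
        for g in ortho:
            c = inner(g, psi) / inner(g, g)
            padded = g + [F(0)] * (len(phi) - len(g))
            phi = [a - c * b for a, b in zip(phi, padded)]
        ortho.append(phi)
    return ortho

# monomials 1, x, x^2, x^3, x^4
monomials = [[0] * k + [1] for k in range(5)]
for p in gram_schmidt(monomials):
    print(p)
```

The output reproduces the orthogonal polynomials $x^2 - \frac{1}{3}$, $x^3 - \frac{3}{5}x$, and $x^4 - \frac{6}{7}x^2 + \frac{3}{35}$ computed by hand in Example 27.6.1 below.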
Example 27.6.1 Suppose we would like a polynomial approximation to $\cos(\pi x)$ in the domain $[-1, 1]$. One way to do this is to find the Taylor expansion of the function about $x = 0$. Up to terms of order $x^4$, this is
\[ \cos(\pi x) = 1 - \frac{(\pi x)^2}{2} + \frac{(\pi x)^4}{24} + O(x^6). \]
In the first graph of Figure 27.1 $\cos(\pi x)$ and this fourth degree polynomial are plotted. We see that the approximation is very good near $x = 0$, but deteriorates as we move away from that point. This makes sense because the Taylor expansion only makes use of information about the function's behavior at the point $x = 0$.

As a second approach, we could find the least squares fit of a fourth degree polynomial to $\cos(\pi x)$. The set of functions $\{1, x, x^2, x^3, x^4\}$ is independent, but not orthogonal in the interval $[-1, 1]$. Using Gramm-Schmidt
orthogonalization,
\[ \phi_0 = 1 \]
\[ \phi_1 = x - \frac{\langle 1 | x \rangle}{\langle 1 | 1 \rangle} = x \]
\[ \phi_2 = x^2 - \frac{\langle 1 | x^2 \rangle}{\langle 1 | 1 \rangle} - \frac{\langle x | x^2 \rangle}{\langle x | x \rangle}\, x = x^2 - \frac{1}{3} \]
\[ \phi_3 = x^3 - \frac{3}{5}\, x \]
\[ \phi_4 = x^4 - \frac{6}{7}\, x^2 + \frac{3}{35} \]
A widely used set of functions in mathematics is the set of Legendre polynomials $P_0(x), P_1(x), \ldots$. They differ from the $\phi_n$'s that we generated only by constant factors. The first few are
\[ P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \frac{3x^2 - 1}{2}, \quad P_3(x) = \frac{5x^3 - 3x}{2}, \quad P_4(x) = \frac{35 x^4 - 30 x^2 + 3}{8}. \]
Expanding $\cos(\pi x)$ in Legendre polynomials,
\[ \cos(\pi x) \approx \sum_{n=0}^4 c_n P_n(x), \]
and calculating the generalized Fourier coefficients with the formula
\[ c_n = \frac{\langle P_n | \cos(\pi x) \rangle}{\langle P_n | P_n \rangle}, \]
yields
\[ \cos(\pi x) \approx -\frac{15}{\pi^2}\, P_2(x) - \frac{45(2\pi^2 - 21)}{\pi^4}\, P_4(x) = \frac{105}{8\pi^4} \left[(315 - 30\pi^2) x^4 + (24\pi^2 - 270) x^2 + (27 - 2\pi^2)\right]. \]
The cosine and this polynomial are plotted in the second graph in Figure 27.1. The least squares fit method uses information about the function on the entire interval. We see that the least squares fit does not give as good an approximation close to the point $x = 0$ as the Taylor expansion. However, the least squares fit gives a good approximation on the entire interval.

In order to expand a function in a Taylor series, the function must be analytic in some domain. One advantage of using the method of least squares is that the function being approximated does not even have to be continuous.
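The coefficient $c_2 = -15/\pi^2$ quoted above can be checked by quadrature. A short sketch using a hand-rolled Simpson rule (no external libraries):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

P2 = lambda x: (3 * x * x - 1) / 2
num = simpson(lambda x: P2(x) * math.cos(math.pi * x), -1.0, 1.0)
den = simpson(lambda x: P2(x) ** 2, -1.0, 1.0)
print(num / den, -15 / math.pi ** 2)  # both close to -1.5198
```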
Figure 27.1: Polynomial Approximations to $\cos(\pi x)$.
27.7 Orthonormal Function Expansion

Let $\{\phi_j\}$ be an orthonormal set of functions on the interval $(a, b)$. We expand a function $f(x)$ in the $\phi_j$.
\[ f(x) = \sum_j c_j \phi_j \]
We choose the coefficients to minimize the norm of the error.
\[ \Big\| f - \sum_j c_j \phi_j \Big\|^2 = \Big\langle f - \sum_j c_j \phi_j \,\Big|\, f - \sum_j c_j \phi_j \Big\rangle = \|f\|^2 - \Big\langle f \,\Big|\, \sum_j c_j \phi_j \Big\rangle - \Big\langle \sum_j c_j \phi_j \,\Big|\, f \Big\rangle + \Big\langle \sum_j c_j \phi_j \,\Big|\, \sum_j c_j \phi_j \Big\rangle \]
\[ \Big\| f - \sum_j c_j \phi_j \Big\|^2 = \|f\|^2 + \sum_j |c_j|^2 - \sum_j c_j \overline{\langle \phi_j | f \rangle} - \sum_j \bar{c}_j \langle \phi_j | f \rangle \qquad (27.1) \]
To complete the square, we add the constant $\sum_j \langle \phi_j | f \rangle \overline{\langle \phi_j | f \rangle}$. We see that the values of $c_j$ which minimize the norm of the error are those which minimize
\[ \|f\|^2 + \sum_j \left| c_j - \langle \phi_j | f \rangle \right|^2. \]
Clearly the unique minimum occurs for
\[ c_j = \langle \phi_j | f \rangle. \]
We substitute this value for $c_j$ into the right side of Equation 27.1 and note that this quantity, the squared norm of the error, is non-negative.
\[ \|f\|^2 + \sum_j |c_j|^2 - \sum_j |c_j|^2 - \sum_j |c_j|^2 \ge 0 \]
\[ \|f\|^2 \ge \sum_j |c_j|^2 \]
This is known as Bessel's Inequality. If the set of $\{\phi_j\}$ is complete, then the norm of the error is zero and we obtain Bessel's Equality.
\[ \|f\|^2 = \sum_j |c_j|^2 \]
27.8 Sets Of Functions

Orthogonality. Consider two complex valued functions of a real variable $\phi_1(x)$ and $\phi_2(x)$ defined on the interval $a \le x \le b$. The inner product of the two functions is defined
\[ \langle \phi_1 | \phi_2 \rangle = \int_a^b \bar{\phi}_1(x) \phi_2(x)\, dx. \]
The two functions are orthogonal if $\langle \phi_1 | \phi_2 \rangle = 0$. The $L^2$ norm of a function is defined $\|\phi\| = \sqrt{\langle \phi | \phi \rangle}$.

Let $\{\phi_1, \phi_2, \phi_3, \ldots\}$ be a set of complex valued functions. The set of functions is orthogonal if each pair of functions is orthogonal. That is,
\[ \langle \phi_n | \phi_m \rangle = 0 \quad \text{if } n \ne m. \]
If in addition the norm of each function is 1, then the set is orthonormal. That is,
\[ \langle \phi_n | \phi_m \rangle = \delta_{nm} = \begin{cases} 1 & \text{if } n = m \\ 0 & \text{if } n \ne m. \end{cases} \]
Example 27.8.1 The set of functions
\[ \left\{ \sqrt{\frac{2}{\pi}} \sin(x),\ \sqrt{\frac{2}{\pi}} \sin(2x),\ \sqrt{\frac{2}{\pi}} \sin(3x),\ \ldots \right\} \]
is orthonormal on the interval $[0, \pi]$. To verify this,
\[ \left\langle \sqrt{\frac{2}{\pi}} \sin(nx) \,\Big|\, \sqrt{\frac{2}{\pi}} \sin(nx) \right\rangle = \frac{2}{\pi} \int_0^\pi \sin^2(nx)\, dx = 1. \]
If $n \ne m$ then
\[ \left\langle \sqrt{\frac{2}{\pi}} \sin(nx) \,\Big|\, \sqrt{\frac{2}{\pi}} \sin(mx) \right\rangle = \frac{2}{\pi} \int_0^\pi \sin(nx) \sin(mx)\, dx = \frac{1}{\pi} \int_0^\pi \left(\cos[(n-m)x] - \cos[(n+m)x]\right) dx = 0. \]
Example 27.8.2 The set of functions
\[ \left\{ \ldots,\ \frac{1}{\sqrt{2\pi}}\, e^{-ix},\ \frac{1}{\sqrt{2\pi}},\ \frac{1}{\sqrt{2\pi}}\, e^{ix},\ \frac{1}{\sqrt{2\pi}}\, e^{2ix},\ \ldots \right\} \]
is orthonormal on the interval $[-\pi, \pi]$. To verify this,
\[ \left\langle \frac{1}{\sqrt{2\pi}}\, e^{inx} \,\Big|\, \frac{1}{\sqrt{2\pi}}\, e^{inx} \right\rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-inx} e^{inx}\, dx = \frac{1}{2\pi} \int_{-\pi}^{\pi} dx = 1. \]
If $n \ne m$ then
\[ \left\langle \frac{1}{\sqrt{2\pi}}\, e^{inx} \,\Big|\, \frac{1}{\sqrt{2\pi}}\, e^{imx} \right\rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-inx} e^{imx}\, dx = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i(m-n)x}\, dx = 0. \]
Orthogonal with Respect to a Weighting Function. Let $\sigma(x)$ be a real-valued, positive function on the interval $[a, b]$. We introduce the notation
\[ \langle \phi_n | \sigma | \phi_m \rangle \equiv \int_a^b \bar{\phi}_n \sigma \phi_m\, dx. \]
If the set of functions $\{\phi_1, \phi_2, \phi_3, \ldots\}$ satisfy
\[ \langle \phi_n | \sigma | \phi_m \rangle = 0 \quad \text{if } n \ne m, \]
then the functions are orthogonal with respect to the weighting function $\sigma(x)$.

If the functions satisfy
\[ \langle \phi_n | \sigma | \phi_m \rangle = \delta_{nm}, \]
then the set is orthonormal with respect to $\sigma(x)$.
Example 27.8.3 We know that the set of functions
\[ \left\{ \sqrt{\frac{2}{\pi}} \sin(x),\ \sqrt{\frac{2}{\pi}} \sin(2x),\ \sqrt{\frac{2}{\pi}} \sin(3x),\ \ldots \right\} \]
is orthonormal on the interval $[0, \pi]$. That is,
\[ \int_0^\pi \sqrt{\frac{2}{\pi}} \sin(nx) \sqrt{\frac{2}{\pi}} \sin(mx)\, dx = \delta_{nm}. \]
If we make the change of variables $x = \sqrt{t}$ in this integral, we obtain
\[ \int_0^{\pi^2} \frac{1}{2\sqrt{t}} \sqrt{\frac{2}{\pi}} \sin(n\sqrt{t}) \sqrt{\frac{2}{\pi}} \sin(m\sqrt{t})\, dt = \delta_{nm}. \]
Thus the set of functions
\[ \left\{ \sqrt{\frac{2}{\pi}} \sin(\sqrt{t}),\ \sqrt{\frac{2}{\pi}} \sin(2\sqrt{t}),\ \sqrt{\frac{2}{\pi}} \sin(3\sqrt{t}),\ \ldots \right\} \]
is orthonormal with respect to $\sigma(t) = \frac{1}{2\sqrt{t}}$ on the interval $[0, \pi^2]$.
Orthogonal Series. Suppose that a function $f(x)$ defined on $[a, b]$ can be written as a uniformly convergent sum of functions that are orthogonal with respect to $\sigma(x)$.
\[ f(x) = \sum_{n=1}^\infty c_n \phi_n(x) \]
We can solve for the $c_n$ by taking the inner product of $\phi_m(x)$ and each side of the equation with respect to $\sigma(x)$.
\[ \langle \phi_m | \sigma | f \rangle = \Big\langle \phi_m \,\Big|\, \sigma \,\Big|\, \sum_{n=1}^\infty c_n \phi_n \Big\rangle \]
\[ \langle \phi_m | \sigma | f \rangle = \sum_{n=1}^\infty c_n \langle \phi_m | \sigma | \phi_n \rangle \]
\[ \langle \phi_m | \sigma | f \rangle = c_m \langle \phi_m | \sigma | \phi_m \rangle \]
\[ c_m = \frac{\langle \phi_m | \sigma | f \rangle}{\langle \phi_m | \sigma | \phi_m \rangle} \]
The $c_m$ are known as generalized Fourier coefficients. If the functions in the expansion are orthonormal, the formula simplifies to
\[ c_m = \langle \phi_m | \sigma | f \rangle. \]
Example 27.8.4 The function $f(x) = x(\pi - x)$ has a uniformly convergent series expansion in the domain $[0, \pi]$ of the form
\[ x(\pi - x) = \sum_{n=1}^\infty c_n \sqrt{\frac{2}{\pi}} \sin(nx). \]
The Fourier coefficients are
\[ c_n = \left\langle \sqrt{\frac{2}{\pi}} \sin(nx) \,\Big|\, x(\pi - x) \right\rangle = \sqrt{\frac{2}{\pi}} \int_0^\pi x(\pi - x) \sin(nx)\, dx = \sqrt{\frac{2}{\pi}}\, \frac{2}{n^3} \left(1 - (-1)^n\right) = \begin{cases} \sqrt{\frac{2}{\pi}}\, \frac{4}{n^3} & \text{for odd } n \\ 0 & \text{for even } n. \end{cases} \]
Thus the expansion is
\[ x(\pi - x) = \sum_{\substack{n=1 \\ \text{odd } n}}^\infty \frac{8}{\pi n^3} \sin(nx) \quad \text{for } x \in [0, \pi]. \]
In the first graph of Figure 27.2 the first term in the expansion is plotted in a dashed line and $x(\pi - x)$ is plotted in a solid line. The second graph shows the two term approximation.
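As a quick check of the expansion, summing the odd terms $\frac{8}{\pi n^3} \sin(nx)$ rapidly reproduces $x(\pi - x)$; at $x = \pi/2$ the exact sum of the full series is $\pi^2/4$:

```python
import math

def partial_sum(x, n_max):
    """Partial sum of x(pi - x) = sum over odd n of 8/(pi n^3) sin(n x)."""
    return sum(8 / (math.pi * n ** 3) * math.sin(n * x)
               for n in range(1, n_max + 1, 2))

x = math.pi / 2
approx = partial_sum(x, 99)
print(approx, x * (math.pi - x))  # both close to pi^2 / 4
```

Because the coefficients decay like $1/n^3$, even a modest number of terms gives several digits of accuracy across the whole interval.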
Figure 27.2: Series Expansions of $x(\pi - x)$.
Example 27.8.5 The set $\left\{\ldots, \frac{1}{\sqrt{2\pi}} e^{-ix}, \frac{1}{\sqrt{2\pi}}, \frac{1}{\sqrt{2\pi}} e^{ix}, \frac{1}{\sqrt{2\pi}} e^{2ix}, \ldots\right\}$ is orthonormal on the interval $[-\pi, \pi]$. $f(x) = \operatorname{sign}(x)$ has the expansion
\[ \operatorname{sign}(x) \sim \sum_{n=-\infty}^\infty \left\langle \frac{1}{\sqrt{2\pi}}\, e^{in\xi} \,\Big|\, \operatorname{sign}(\xi) \right\rangle \frac{1}{\sqrt{2\pi}}\, e^{inx} \]
\[ = \frac{1}{2\pi} \sum_{n=-\infty}^\infty \int_{-\pi}^\pi e^{-in\xi} \operatorname{sign}(\xi)\, d\xi\ e^{inx} \]
\[ = \frac{1}{2\pi} \sum_{n=-\infty}^\infty \left( -\int_{-\pi}^0 e^{-in\xi}\, d\xi + \int_0^\pi e^{-in\xi}\, d\xi \right) e^{inx} \]
\[ = \frac{1}{\pi} \sum_{n=-\infty}^\infty \frac{1 - (-1)^n}{in}\, e^{inx}. \]
In terms of real functions, this is
\[ = \frac{1}{\pi} \sum_{n=-\infty}^\infty \frac{1 - (-1)^n}{in} \left(\cos(nx) + i \sin(nx)\right) \]
\[ = \frac{2}{\pi} \sum_{n=1}^\infty \frac{1 - (-1)^n}{n} \sin(nx) \]
\[ \operatorname{sign}(x) \sim \frac{4}{\pi} \sum_{\substack{n=1 \\ \text{odd } n}}^\infty \frac{1}{n} \sin(nx). \]
27.9 Least Squares Fit to a Function and Completeness

Let $\{\phi_1, \phi_2, \phi_3, \ldots\}$ be a set of real, square integrable functions that are orthonormal with respect to the weighting function $\sigma(x)$ on the interval $[a, b]$. That is,
\[ \langle \phi_n | \sigma | \phi_m \rangle = \delta_{nm}. \]
Let $f(x)$ be some square integrable function defined on the same interval. We would like to approximate the function $f(x)$ with a finite orthonormal series.
\[ f(x) \approx \sum_{n=1}^N \alpha_n \phi_n(x) \]
$f(x)$ may or may not have a uniformly convergent expansion in the orthonormal functions.

We would like to choose the $\alpha_n$ so that we get the best possible approximation to $f(x)$. The most common measure of how well a series approximates a function is the least squares measure. The error is defined as the integral of the weighting function times the square of the deviation.
\[ E = \int_a^b \sigma(x) \left[f(x) - \sum_{n=1}^N \alpha_n \phi_n(x)\right]^2 dx \]
The best fit is found by choosing the $\alpha_n$ that minimize $E$. Let $c_n$ be the Fourier coefficients of $f(x)$.
\[ c_n = \langle \phi_n | \sigma | f \rangle \]
We expand the integral for $E$.
we expand the integral for E.
E() =
_
b
a
(x)
_
f(x)
N

n=1

n
(x)
_
2
dx
=
_
f
N

n=1

f
N

n=1

n
_
= f[[f) 2
_
N

n=1

f
_
+
_
N

n=1

n=1

n
_
= f[[f) 2
N

n=1

n
[[f) +
N

n=1
N

m=1

n
[[
m
)
= f[[f) 2
N

n=1

n
c
n
+
N

n=1

2
n
= f[[f) +
N

n=1
(
n
c
n
)
2

n=1
c
2
n
Each term involving
n
in non-negative and is minimized for
n
= c
n
. The Fourier coecients give the least
squares approximation to a function. The least squares t to f(x) is thus
f(x)
N

n=1

n
[[f)
n
(x).
Result 27.9.1 If $\{\phi_1, \phi_2, \phi_3, \ldots\}$ is a set of real, square integrable functions that are orthogonal with respect to $\sigma(x)$, then the least squares fit of the first $N$ orthogonal functions to the square integrable function $f(x)$ is
\[ f(x) \approx \sum_{n=1}^N \frac{\langle \phi_n | \sigma | f \rangle}{\langle \phi_n | \sigma | \phi_n \rangle}\, \phi_n(x). \]
If the set is orthonormal, this formula reduces to
\[ f(x) \approx \sum_{n=1}^N \langle \phi_n | \sigma | f \rangle\, \phi_n(x). \]
Since the error in the approximation $E$ is a nonnegative number, we can obtain an inequality on the sum of the squared coefficients.
\[ E = \langle f | \sigma | f \rangle - \sum_{n=1}^N c_n^2 \ge 0 \]
\[ \sum_{n=1}^N c_n^2 \le \langle f | \sigma | f \rangle \]
This equation is known as Bessel's Inequality. Since $\langle f | \sigma | f \rangle$ is just a nonnegative number, independent of $N$, the sum $\sum_{n=1}^\infty c_n^2$ is convergent and $c_n \to 0$ as $n \to \infty$.
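Bessel's Inequality can be observed numerically. The sketch below uses $f(x) = x$ on $[-\pi, \pi]$ with the orthonormal functions $\sin(nx)/\sqrt{\pi}$ and weight $\sigma = 1$ (our own choice of example); every partial sum of $c_n^2$ stays below $\langle f|\sigma|f \rangle$ and creeps toward it:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

f = lambda x: x
norm_sq = simpson(lambda x: f(x) ** 2, -math.pi, math.pi)  # <f|f> = 2 pi^3 / 3

coeff_sq_sum = 0.0
for n in range(1, 51):
    # c_n = <sin(nx)/sqrt(pi) | f>, computed by quadrature
    c = simpson(lambda x, n=n: math.sin(n * x) / math.sqrt(math.pi) * f(x),
                -math.pi, math.pi)
    coeff_sq_sum += c * c
    assert coeff_sq_sum <= norm_sq + 1e-9  # Bessel's Inequality at every N

print(coeff_sq_sum, norm_sq)  # the partial sums approach <f|f>
```

Here the $c_n^2$ are $4\pi/n^2$, so the partial sums in fact converge to $\langle f|f \rangle = 2\pi^3/3$, anticipating Parseval's identity below.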
Convergence in the Mean. If the error $E$ goes to zero as $N$ tends to infinity,
\[ \lim_{N\to\infty} \int_a^b \sigma(x) \left[f(x) - \sum_{n=1}^N c_n \phi_n(x)\right]^2 dx = 0, \]
then the sum converges in the mean to $f(x)$ relative to the weighting function $\sigma(x)$. This implies that
\[ \lim_{N\to\infty} \left[\langle f | \sigma | f \rangle - \sum_{n=1}^N c_n^2\right] = 0 \]
\[ \sum_{n=1}^\infty c_n^2 = \langle f | \sigma | f \rangle. \]
This is known as Parseval's identity.

Completeness. Consider a set of functions $\{\phi_1, \phi_2, \phi_3, \ldots\}$ that is orthogonal with respect to the weighting function $\sigma(x)$. If every function $f(x)$ that is square integrable with respect to $\sigma(x)$ has an orthogonal series expansion
\[ f(x) \sim \sum_{n=1}^\infty c_n \phi_n(x) \]
that converges in the mean to $f(x)$, then the set is complete.
27.10 Closure Relation

Let $\{\phi_1, \phi_2, \ldots\}$ be an orthonormal, complete set on the domain $[a, b]$. For any square integrable function $f(x)$ we can write
\[ f(x) \sim \sum_{n=1}^\infty c_n \phi_n(x). \]
Here the $c_n$ are the generalized Fourier coefficients and the sum converges in the mean to $f(x)$. Substituting the expression for the Fourier coefficients into the sum yields
\[ f(x) \sim \sum_{n=1}^\infty \langle \phi_n | f \rangle\, \phi_n(x) = \sum_{n=1}^\infty \left(\int_a^b \bar{\phi}_n(\xi) f(\xi)\, d\xi\right) \phi_n(x). \]
Since the sum is not necessarily uniformly convergent, we are not justified in exchanging the order of summation and integration \ldots{} but what the heck, let's do it anyway.
\[ = \int_a^b \left(\sum_{n=1}^\infty \bar{\phi}_n(\xi) f(\xi) \phi_n(x)\right) d\xi = \int_a^b \left(\sum_{n=1}^\infty \bar{\phi}_n(\xi) \phi_n(x)\right) f(\xi)\, d\xi \]
The sum behaves like a Dirac delta function. Recall that $\delta(x - \xi)$ satisfies the equation
\[ f(x) = \int_a^b \delta(x - \xi) f(\xi)\, d\xi \quad \text{for } x \in (a, b). \]
Thus we could say that the sum is a representation of $\delta(x - \xi)$. Note that a series representation of the delta function could not be convergent, hence the necessity of throwing caution to the wind when we interchanged the summation and integration in deriving the series. The closure relation for an orthonormal, complete set states
\[ \sum_{n=1}^\infty \phi_n(x) \bar{\phi}_n(\xi) \sim \delta(x - \xi). \]
Alternatively, you can derive the closure relation by computing the generalized Fourier coefficients of the delta function.
\[ \delta(x - \xi) \sim \sum_{n=1}^\infty c_n \phi_n(x) \]
\[ c_n = \langle \phi_n | \delta(x - \xi) \rangle = \int_a^b \bar{\phi}_n(x) \delta(x - \xi)\, dx = \bar{\phi}_n(\xi) \]
\[ \delta(x - \xi) \sim \sum_{n=1}^\infty \phi_n(x) \bar{\phi}_n(\xi) \]

Result 27.10.1 If $\{\phi_1, \phi_2, \ldots\}$ is an orthogonal, complete set on the domain $[a, b]$, then
\[ \sum_{n=1}^\infty \frac{\phi_n(x) \bar{\phi}_n(\xi)}{\|\phi_n\|^2} \sim \delta(x - \xi). \]
If the set is orthonormal, then
\[ \sum_{n=1}^\infty \phi_n(x) \bar{\phi}_n(\xi) \sim \delta(x - \xi). \]
Example 27.10.1 The integral of the Dirac delta function is the Heaviside function. On the interval $x \in (-\pi, \pi)$,
\[ \int_{-\pi}^x \delta(t)\, dt = H(x) = \begin{cases} 1 & \text{for } 0 < x < \pi \\ 0 & \text{for } -\pi < x < 0. \end{cases} \]
Consider the orthonormal, complete set $\left\{\ldots, \frac{1}{\sqrt{2\pi}} e^{-ix}, \frac{1}{\sqrt{2\pi}}, \frac{1}{\sqrt{2\pi}} e^{ix}, \ldots\right\}$ on the domain $[-\pi, \pi]$. The delta function has the series
\[ \delta(t) \sim \sum_{n=-\infty}^\infty \frac{1}{\sqrt{2\pi}}\, e^{int}\, \frac{1}{\sqrt{2\pi}}\, e^{-i n \cdot 0} = \frac{1}{2\pi} \sum_{n=-\infty}^\infty e^{int}. \]
We will find the series expansion of the Heaviside function first by expanding directly and then by integrating the expansion for the delta function.
We will nd the series expansion of the Heaviside function rst by expanding directly and then by integrating
the expansion for the delta function.
Finding the series expansion of H(x) directly. The generalized Fourier coecients of H(x) are
c
0
=
_

2
H(x) dx
=
1

2
_

0
dx
=
_

2
c
n
=
_

2
e
inx
H(x) dx
=
1

2
_

0
e
inx
dx
=
1 (1)
n
in

2
.
Thus the Heaviside function has the expansion
\[ H(x) \sim \sqrt{\frac{\pi}{2}}\, \frac{1}{\sqrt{2\pi}} + \sum_{\substack{n=-\infty \\ n \ne 0}}^\infty \frac{1 - (-1)^n}{i n \sqrt{2\pi}}\, \frac{1}{\sqrt{2\pi}}\, e^{inx} = \frac{1}{2} + \frac{1}{\pi} \sum_{n=1}^\infty \frac{1 - (-1)^n}{n} \sin(nx) \]
\[ H(x) \sim \frac{1}{2} + \frac{2}{\pi} \sum_{\substack{n=1 \\ \text{odd } n}}^\infty \frac{1}{n} \sin(nx). \]
Integrating the series for $\delta(t)$.
\[ \int_{-\pi}^x \delta(t)\, dt \sim \frac{1}{2\pi} \int_{-\pi}^x \sum_{n=-\infty}^\infty e^{int}\, dt \]
\[ = \frac{1}{2\pi} \left[ (x + \pi) + \sum_{\substack{n=-\infty \\ n \ne 0}}^\infty \left[\frac{1}{in}\, e^{int}\right]_{-\pi}^x \right] \]
\[ = \frac{1}{2\pi} \left[ (x + \pi) + \sum_{\substack{n=-\infty \\ n \ne 0}}^\infty \frac{1}{in} \left(e^{inx} - (-1)^n\right) \right] \]
\[ = \frac{x}{2\pi} + \frac{1}{2} + \frac{1}{2\pi} \sum_{n=1}^\infty \frac{1}{in} \left(e^{inx} - e^{-inx} - (-1)^n + (-1)^n\right) \]
\[ = \frac{x}{2\pi} + \frac{1}{2} + \frac{1}{\pi} \sum_{n=1}^\infty \frac{1}{n} \sin(nx) \]
Expanding $\frac{x}{2\pi}$ in the orthonormal set,
\[ \frac{x}{2\pi} \sim \sum_{n=-\infty}^\infty c_n \frac{1}{\sqrt{2\pi}}\, e^{inx}, \]
\[ c_0 = \int_{-\pi}^\pi \frac{1}{\sqrt{2\pi}}\, \frac{x}{2\pi}\, dx = 0, \qquad c_n = \int_{-\pi}^\pi \frac{1}{\sqrt{2\pi}}\, e^{-inx}\, \frac{x}{2\pi}\, dx = \frac{i(-1)^n}{n\sqrt{2\pi}}, \]
\[ \frac{x}{2\pi} \sim \sum_{\substack{n=-\infty \\ n \ne 0}}^\infty \frac{i(-1)^n}{n\sqrt{2\pi}}\, \frac{1}{\sqrt{2\pi}}\, e^{inx} = -\frac{1}{\pi} \sum_{n=1}^\infty \frac{(-1)^n}{n} \sin(nx). \]
Substituting the series for $\frac{x}{2\pi}$ into the expression for the integral of the delta function,
\[ \int_{-\pi}^x \delta(t)\, dt \sim \frac{1}{2} + \frac{1}{\pi} \sum_{n=1}^\infty \frac{1 - (-1)^n}{n} \sin(nx) \]
\[ \int_{-\pi}^x \delta(t)\, dt \sim \frac{1}{2} + \frac{2}{\pi} \sum_{\substack{n=1 \\ \text{odd } n}}^\infty \frac{1}{n} \sin(nx). \]
Thus we see that the series expansions of the Heaviside function and the integral of the delta function are the same.
27.11 Linear Operators

27.12 Exercises

Exercise 27.1
1. Suppose $\{\phi_k(x)\}_{k=0}^\infty$ is an orthogonal system on $[a, b]$. Show that any finite set of the $\phi_j(x)$ is a linearly independent set on $[a, b]$. That is, if $\{\phi_{j_1}(x), \phi_{j_2}(x), \ldots, \phi_{j_n}(x)\}$ is the set and all the $j_\ell$ are distinct, then
\[ a_1 \phi_{j_1}(x) + a_2 \phi_{j_2}(x) + \cdots + a_n \phi_{j_n}(x) = 0 \quad \text{on } a \le x \le b \]
is true iff $a_1 = a_2 = \cdots = a_n = 0$.
2. Show that the complex functions $\{\phi_k(x)\} \equiv \{e^{i k \pi x / L}\}$, $k = 0, 1, 2, \ldots$, are orthogonal in the sense that $\int_{-L}^L \phi_k(x) \bar{\phi}_n(x)\, dx = 0$ for $n \ne k$. Here $\bar{\phi}_n(x)$ is the complex conjugate of $\phi_n(x)$.
Hint, Solution
27.13 Hints

Hint 27.1
27.14 Solutions

Solution 27.1
1.
\[ a_1 \phi_{j_1}(x) + a_2 \phi_{j_2}(x) + \cdots + a_n \phi_{j_n}(x) = 0 \]
\[ \sum_{k=1}^n a_k \phi_{j_k}(x) = 0 \]
We take the inner product with $\phi_{j_\ell}$ for any $\ell = 1, \ldots, n$. ($\langle \phi, \psi \rangle \equiv \int_a^b \phi(x) \bar{\psi}(x)\, dx$.)
\[ \Big\langle \sum_{k=1}^n a_k \phi_{j_k},\ \phi_{j_\ell} \Big\rangle = 0 \]
We interchange the order of summation and integration.
\[ \sum_{k=1}^n a_k \langle \phi_{j_k}, \phi_{j_\ell} \rangle = 0 \]
Since $\langle \phi_{j_k}, \phi_{j_\ell} \rangle = 0$ for $j_k \ne j_\ell$, this reduces to
\[ a_\ell \langle \phi_{j_\ell}, \phi_{j_\ell} \rangle = 0, \]
and since $\langle \phi_{j_\ell}, \phi_{j_\ell} \rangle \ne 0$,
\[ a_\ell = 0. \]
Thus we see that $a_1 = a_2 = \cdots = a_n = 0$.
2. For $k \ne n$, $\langle \phi_k, \phi_n \rangle = 0$.
\[ \langle \phi_k, \phi_n \rangle \equiv \int_{-L}^L \phi_k(x) \bar{\phi}_n(x)\, dx = \int_{-L}^L e^{i k \pi x / L} e^{-i n \pi x / L}\, dx = \int_{-L}^L e^{i (k-n) \pi x / L}\, dx \]
\[ = \left[\frac{e^{i(k-n)\pi x/L}}{i(k-n)\pi/L}\right]_{-L}^L = \frac{e^{i(k-n)\pi} - e^{-i(k-n)\pi}}{i(k-n)\pi/L} = \frac{2L \sin((k-n)\pi)}{(k-n)\pi} = 0 \]
Chapter 28

Self Adjoint Linear Operators

28.1 Adjoint Operators

The adjoint of an operator, $L^*$, satisfies
\[ \langle v | L u \rangle - \langle L^* v | u \rangle = 0 \]
for all elements $u$ and $v$. This is known as Green's Identity.

The adjoint of a matrix. For vectors, one can represent linear operators $L$ with matrix multiplication.
\[ L x \equiv A x \]
Let $B \equiv A^*$ be the adjoint of the matrix $A$. We determine the adjoint of $A$ from Green's Identity.
\[ \langle x | A y \rangle - \langle B x | y \rangle = 0 \]
\[ \bar{x} \cdot A y = \overline{B x} \cdot y \]
\[ \bar{x}^T A y = \overline{B x}^T y \]
\[ \bar{x}^T A y = \bar{x}^T \bar{B}^T y \]
\[ B = \bar{A}^T \]
Thus we see that the adjoint of a matrix is the conjugate transpose of the matrix, $A^* = \bar{A}^T$. The conjugate transpose is also called the Hermitian transpose and is denoted $A^H$.
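Green's Identity for matrices is easy to verify numerically. A sketch with an arbitrary complex $2 \times 2$ matrix (our own example data):

```python
def inner(x, y):
    # <x|y> = conjugate(x) . y
    return sum(a.conjugate() * b for a, b in zip(x, y))

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def conj_transpose(M):
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]
x = [1 - 1j, 2 + 3j]
y = [4 + 0j, -1 + 2j]

lhs = inner(x, matvec(A, y))                   # <x|Ay>
rhs = inner(matvec(conj_transpose(A), x), y)   # <A^H x|y>
print(lhs, rhs)  # equal
```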
The adjoint of a differential operator. Consider a second order linear differential operator acting on $C^2$ functions defined on $(a \ldots b)$ which satisfy certain boundary conditions.
\[ L u \equiv p_2(x) u'' + p_1(x) u' + p_0(x) u \]
28.2 Self-Adjoint Operators

Matrices. A matrix is self-adjoint if it is equal to its conjugate transpose, $A = A^H \equiv \bar{A}^T$. Such matrices are called Hermitian. For a Hermitian matrix $H$, Green's identity is
\[ \langle y | H x \rangle = \langle H y | x \rangle \]
\[ \bar{y} \cdot H x = \overline{H y} \cdot x \]
The eigenvalues of a Hermitian matrix are real. Let $x$ be an eigenvector with eigenvalue $\lambda$.
\[ \langle x | H x \rangle = \langle H x | x \rangle \]
\[ \lambda \langle x | x \rangle - \bar{\lambda} \langle x | x \rangle = 0 \]
\[ (\lambda - \bar{\lambda}) \langle x | x \rangle = 0 \]
\[ \lambda = \bar{\lambda} \]
The eigenvectors corresponding to distinct eigenvalues are orthogonal. Let $x$ and $y$ be eigenvectors with distinct eigenvalues $\lambda$ and $\mu$.
\[ \langle y | H x \rangle = \langle H y | x \rangle \]
\[ \lambda \langle y | x \rangle - \bar{\mu} \langle y | x \rangle = 0 \]
\[ (\lambda - \mu) \langle y | x \rangle = 0 \]
\[ \langle y | x \rangle = 0 \]
Furthermore, all Hermitian matrices are similar to a diagonal matrix and have a complete set of orthogonal eigenvectors.
Trigonometric Series. Consider the problem
\[ -y'' = \lambda y, \qquad y(0) = y(2\pi), \qquad y'(0) = y'(2\pi). \]
We verify that the differential operator $L = -\frac{d^2}{dx^2}$ with periodic boundary conditions is self-adjoint.
\[ \langle v | L u \rangle = \langle v | -u'' \rangle \]
\[ = \left[-\bar{v} u'\right]_0^{2\pi} + \langle v' | u' \rangle \]
\[ = \langle v' | u' \rangle \]
\[ = \left[\bar{v}' u\right]_0^{2\pi} - \langle v'' | u \rangle \]
\[ = \langle -v'' | u \rangle \]
\[ = \langle L v | u \rangle \]
The eigenvalues and eigenfunctions of this problem are
\[ \lambda_0 = 0, \quad \phi_0 = 1; \qquad \lambda_n = n^2, \quad \phi_n^{(1)} = \cos(nx), \quad \phi_n^{(2)} = \sin(nx), \quad n \in \mathbb{Z}^+ \]

28.3 Exercises

28.4 Hints

28.5 Solutions
Chapter 29

Self-Adjoint Boundary Value Problems

Seize the day and throttle it.
-Calvin

29.1 Summary of Adjoint Operators

The adjoint of the operator
\[ L[y] = p_n \frac{d^n y}{dx^n} + p_{n-1} \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_0 y \]
is defined
\[ L^*[y] = (-1)^n \frac{d^n}{dx^n} (\bar{p}_n y) + (-1)^{n-1} \frac{d^{n-1}}{dx^{n-1}} (\bar{p}_{n-1} y) + \cdots + \bar{p}_0 y. \]
If each of the $p_k$ is $k$ times continuously differentiable and $u$ and $v$ are $n$ times continuously differentiable on some interval, then on that interval Lagrange's identity states
\[ \bar{v} L[u] - u \overline{L^*[v]} = \frac{d}{dx}\, B[u, v], \]
1178
where B[u, v] is the bilinear form
B[u, v] =
n

m=1

j+k=m1
j0,k0
(1)
j
u
(k)
(p
m
v)
(j)
.
If L is a second order operator then
vL[u] uL

[v] = u
tt
p
2
v +u
t
p
1
v +u
_
p
2
v
tt
+ (2p
t
2
+p
1
)v
t
+ (p
tt
2
+p
t
1
)v

.
Integrating Lagranges identity on its interval of validity gives us Greens formula.
_
b
a
_
vL[u] uL

[v]
_
dx = v[L[u]) L

[v][u) = B[u, v]

x=b
B[u, v]

x=a
29.2 Formally Self-Adjoint Operators

Example 29.2.1 The linear operator
\[ L[y] = x^2 y'' + 2x y' + 3y \]
has the adjoint operator
\begin{align*}
L^\dagger[y] &= \frac{d^2}{dx^2}(x^2 y) - \frac{d}{dx}(2xy) + 3y \\
&= x^2 y'' + 4x y' + 2y - 2x y' - 2y + 3y \\
&= x^2 y'' + 2x y' + 3y.
\end{align*}

In Example 29.2.1, the adjoint operator is the same as the operator. If $L = L^\dagger$, the operator is said to be formally self-adjoint.
Most of the differential equations that we study in this book are second order, formally self-adjoint, with real-valued coefficient functions. Thus we wish to find the general form of this operator. Consider the operator
\[ L[y] = p_2 y'' + p_1 y' + p_0 y, \]
where the $p_j$'s are real-valued functions. The adjoint operator then is
\begin{align*}
L^\dagger[y] &= \frac{d^2}{dx^2}(p_2 y) - \frac{d}{dx}(p_1 y) + p_0 y \\
&= p_2 y'' + 2p_2' y' + p_2'' y - p_1 y' - p_1' y + p_0 y \\
&= p_2 y'' + (2p_2' - p_1) y' + (p_2'' - p_1' + p_0) y.
\end{align*}
Equating $L$ and $L^\dagger$ yields the two equations,
\[ 2p_2' - p_1 = p_1, \qquad p_2'' - p_1' + p_0 = p_0, \]
\[ p_2' = p_1, \qquad p_2'' = p_1'. \]
Thus second order, formally self-adjoint operators with real-valued coefficient functions have the form
\[ L[y] = p_2 y'' + p_2' y' + p_0 y, \]
which is equivalent to the form
\[ L[y] = \frac{d}{dx}(p y') + q y. \]
Any linear differential equation of the form
\[ L[y] = y'' + p_1 y' + p_0 y = f(x), \]
where each $p_j$ is $j$ times continuously differentiable and real-valued, can be written as a formally self adjoint equation. We just multiply by the factor,
\[ e^{P(x)} = \exp\left( \int^x p_1(\xi)\, d\xi \right) \]
to obtain
\[ \exp\left[ P(x) \right] \left( y'' + p_1 y' + p_0 y \right) = \exp\left[ P(x) \right] f(x) \]
\[ \frac{d}{dx}\left( \exp\left[ P(x) \right] y' \right) + \exp\left[ P(x) \right] p_0 y = \exp\left[ P(x) \right] f(x). \]

Example 29.2.2 Consider the equation
\[ y'' + \frac{1}{x} y' + y = 0. \]
Multiplying by the factor
\[ \exp\left( \int^x \frac{1}{\xi}\, d\xi \right) = e^{\log x} = x \]
will make the equation formally self-adjoint.
\[ x y'' + y' + x y = 0 \]
\[ \frac{d}{dx}\left( x y' \right) + x y = 0 \]
Result 29.2.1 If $L = L^\dagger$ then the linear operator $L$ is formally self-adjoint. Second order formally self-adjoint operators have the form
\[ L[y] = \frac{d}{dx}(p y') + q y. \]
Any differential equation of the form
\[ L[y] = y'' + p_1 y' + p_0 y = f(x), \]
where each $p_j$ is $j$ times continuously differentiable and real-valued, can be written as a formally self adjoint equation by multiplying the equation by the factor $\exp\left( \int^x p_1(\xi)\, d\xi \right)$.
29.3 Self-Adjoint Problems

Consider the $n^{\text{th}}$ order formally self-adjoint equation $L[y] = 0$, on the domain $a \le x \le b$, subject to the boundary conditions $B_j[y] = 0$ for $j = 1, \ldots, n$, where the boundary conditions can be written
\[ B_j[y] = \sum_{k=1}^n \alpha_{jk} y^{(k-1)}(a) + \beta_{jk} y^{(k-1)}(b) = 0. \]
If the boundary conditions are such that Green's formula reduces to
\[ \langle v | L[u] \rangle - \langle L[v] | u \rangle = 0 \]
then the problem is self-adjoint.
Example 29.3.1 Consider the formally self-adjoint equation $y'' = 0$, subject to the boundary conditions $y(0) = y(\pi) = 0$. Green's formula is
\[ \langle v | u'' \rangle - \langle v'' | u \rangle = \left[ u' \overline{v} - u \overline{v}' \right]_0^\pi = 0, \]
since $u$ and $v$ vanish at the endpoints. Thus this problem is self-adjoint.
29.4 Self-Adjoint Eigenvalue Problems

Associated with the self-adjoint problem
\[ L[y] = 0, \quad \text{subject to } B_j[y] = 0, \]
is the eigenvalue problem
\[ L[y] = \lambda y, \quad \text{subject to } B_j[y] = 0. \]
This is called a self-adjoint eigenvalue problem. The values of $\lambda$ for which there exist nontrivial solutions to this problem are called eigenvalues. The functions that satisfy the equation when $\lambda$ is an eigenvalue are called eigenfunctions.
Example 29.4.1 Consider the self-adjoint eigenvalue problem
\[ -y'' = \lambda y, \quad \text{subject to } y(0) = y(\pi) = 0. \]
First consider the case $\lambda = 0$. The general solution is
\[ y = c_1 + c_2 x. \]
Only the trivial solution satisfies the boundary conditions, so $\lambda = 0$ is not an eigenvalue. Now consider $\lambda \neq 0$. The general solution is
\[ y = c_1 \cos\left( \sqrt{\lambda}\, x \right) + c_2 \sin\left( \sqrt{\lambda}\, x \right). \]
The solution that satisfies the left boundary condition is
\[ y = c \sin\left( \sqrt{\lambda}\, x \right). \]
For non-trivial solutions, we must have
\[ \sin\left( \sqrt{\lambda}\, \pi \right) = 0, \qquad \lambda = n^2, \quad n \in \mathbb{N}. \]
Thus the eigenvalues $\lambda_n$ and eigenfunctions $\phi_n$ are
\[ \lambda_n = n^2, \qquad \phi_n = \sin(nx), \quad \text{for } n = 1, 2, 3, \ldots \]
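These eigenvalues can be corroborated numerically. The sketch below (an illustration, not from the text) discretizes $-y'' = \lambda y$, $y(0) = y(\pi) = 0$ with second-order finite differences and compares the smallest eigenvalues of the resulting matrix to $n^2$:

```python
import numpy as np

# Discretize -y'' = lambda*y, y(0) = y(pi) = 0 on the interior grid
# x_i = i*h, i = 1..N-1.  The operator -d^2/dx^2 becomes a tridiagonal
# matrix with 2/h^2 on the diagonal and -1/h^2 off the diagonal.
N = 400
h = np.pi / N
A = (np.diag(2.0 * np.ones(N - 1))
     + np.diag(-1.0 * np.ones(N - 2), 1)
     + np.diag(-1.0 * np.ones(N - 2), -1)) / h**2

eigenvalues = np.sort(np.linalg.eigvalsh(A))
print(eigenvalues[:4])  # approaches 1, 4, 9, 16 as h -> 0
```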
Self-adjoint eigenvalue problems have a number of interesting properties. We will devote the rest of this section to developing some of these properties.

Real Eigenvalues. The eigenvalues of a self-adjoint problem are real. Let $\lambda$ be an eigenvalue with the eigenfunction $\phi$. Green's formula states
\begin{align*}
\langle \phi | L[\phi] \rangle - \langle L[\phi] | \phi \rangle &= 0 \\
\lambda \langle \phi | \phi \rangle - \overline{\lambda} \langle \phi | \phi \rangle &= 0 \\
(\lambda - \overline{\lambda}) \langle \phi | \phi \rangle &= 0
\end{align*}
Since $\phi \not\equiv 0$, $\langle \phi | \phi \rangle > 0$. Thus $\lambda = \overline{\lambda}$ and $\lambda$ is real.
Orthogonal Eigenfunctions. The eigenfunctions corresponding to distinct eigenvalues are orthogonal. Let $\lambda_n$ and $\lambda_m$ be distinct eigenvalues with the eigenfunctions $\phi_n$ and $\phi_m$. Using Green's formula,
\begin{align*}
\langle \phi_n | L[\phi_m] \rangle - \langle L[\phi_n] | \phi_m \rangle &= 0 \\
\lambda_m \langle \phi_n | \phi_m \rangle - \overline{\lambda_n} \langle \phi_n | \phi_m \rangle &= 0 \\
(\lambda_m - \overline{\lambda_n}) \langle \phi_n | \phi_m \rangle &= 0.
\end{align*}
Since the eigenvalues are real,
\[ (\lambda_m - \lambda_n) \langle \phi_n | \phi_m \rangle = 0. \]
Since the two eigenvalues are distinct, $\langle \phi_n | \phi_m \rangle = 0$ and thus $\phi_n$ and $\phi_m$ are orthogonal.
*Enumerable Set of Eigenvalues. The eigenvalues of a self-adjoint eigenvalue problem form an enumerable set with no finite cluster point. Consider the problem
\[ L[y] = \lambda y \quad \text{on } a \le x \le b, \quad \text{subject to } B_j[y] = 0. \]
Let $\phi_1, \phi_2, \ldots, \phi_n$ be a fundamental set of solutions at $x = x_0$ for some $a \le x_0 \le b$. That is,
\[ \phi_j^{(k-1)}(x_0) = \delta_{jk}. \]
The key to showing that the eigenvalues are enumerable, is that the $\phi_j$ are entire functions of $\lambda$. That is, they are analytic functions of $\lambda$ for all finite $\lambda$. We will not prove this.
The boundary conditions are
\[ B_j[y] = \sum_{k=1}^n \left( \alpha_{jk} y^{(k-1)}(a) + \beta_{jk} y^{(k-1)}(b) \right) = 0. \]
The eigenvalue problem has a solution for a given value of $\lambda$ if $y = \sum_{k=1}^n c_k \phi_k$ satisfies the boundary conditions. That is,
\[ B_j\left[ \sum_{k=1}^n c_k \phi_k \right] = \sum_{k=1}^n c_k B_j[\phi_k] = 0 \quad \text{for } j = 1, \ldots, n. \]
1185
Dene an n n matrix M such that M
jk
= B
k
[
j
]. Then if c = (c
1
, c
2
, . . . , c
n
), the boundary conditions can
be written in terms of the matrix equation Mc = 0. This equation has a solution if and only if the determinant of
the matrix is zero. Since the
j
are entire functions of , [M] is an entire function of . The eigenvalues are real,
so [M] has only real roots. Since [M] is an entire function, (that is not identically zero), with only real roots,
the roots of [M] can only cluster at innity. Thus the eigenvalues of a self-adjoint problem are enumerable and
can only cluster at innity.
An example of a function whose roots have a nite cluster point is sin(1/x). This function, (graphed in
Figure 29.1), is clearly not analytic at the cluster point x = 0.
-1 1
Figure 29.1: Graph of sin(1/x).
1186
Infinite Number of Eigenvalues. Though we will not show it, self-adjoint problems have an infinite number of eigenvalues. Thus the eigenfunctions form an infinite orthogonal set.

Eigenvalues of Second Order Problems. Consider the second order, self-adjoint eigenvalue problem
\[ L[y] = (p y')' + q y = \lambda y, \quad \text{on } a \le x \le b, \quad \text{subject to } B_j[y] = 0. \]
Let $\lambda_n$ be an eigenvalue with the eigenfunction $\phi_n$.
\begin{align*}
\langle \phi_n | L[\phi_n] \rangle &= \langle \phi_n | \lambda_n \phi_n \rangle \\
\langle \phi_n | (p \phi_n')' + q \phi_n \rangle &= \lambda_n \langle \phi_n | \phi_n \rangle \\
\int_a^b \overline{\phi_n} \left( p \phi_n' \right)' dx + \langle \phi_n | q | \phi_n \rangle &= \lambda_n \langle \phi_n | \phi_n \rangle \\
\left[ \overline{\phi_n} p \phi_n' \right]_a^b - \int_a^b \overline{\phi_n}' p \phi_n'\, dx + \langle \phi_n | q | \phi_n \rangle &= \lambda_n \langle \phi_n | \phi_n \rangle \\
\lambda_n &= \frac{ \left[ p \overline{\phi_n} \phi_n' \right]_a^b - \langle \phi_n' | p | \phi_n' \rangle + \langle \phi_n | q | \phi_n \rangle }{ \langle \phi_n | \phi_n \rangle }
\end{align*}
Thus we can express each eigenvalue in terms of its eigenfunction. You might think that this formula is just a shade less than worthless. When solving an eigenvalue problem you have to find the eigenvalues before you determine the eigenfunctions. Thus this formula could not be used to compute the eigenvalues. However, we can often use the formula to obtain information about the eigenvalues before we solve a problem.
Example 29.4.2 Consider the self-adjoint eigenvalue problem
\[ -y'' = \lambda y, \qquad y(0) = y(\pi) = 0. \]
The eigenvalues are given by the formula
\begin{align*}
\lambda_n &= \frac{ \left[ (-1) \overline{\phi_n} \phi_n' \right]_0^\pi - \langle \phi_n' | (-1) | \phi_n' \rangle + \langle \phi_n | 0 | \phi_n \rangle }{ \langle \phi_n | \phi_n \rangle } \\
&= \frac{ 0 + \langle \phi_n' | \phi_n' \rangle + 0 }{ \langle \phi_n | \phi_n \rangle }.
\end{align*}
We see that $\lambda_n \ge 0$. If $\lambda_n = 0$ then $\langle \phi_n' | \phi_n' \rangle = 0$, which implies that $\phi_n = \text{const}$. The only constant that satisfies the boundary conditions is $\phi_n = 0$, which is not an eigenfunction since it is the trivial solution. Thus the eigenvalues are positive.
29.5 Inhomogeneous Equations

Let the problem,
\[ L[y] = 0, \qquad B_k[y] = 0, \]
be self-adjoint. If the inhomogeneous problem,
\[ L[y] = f, \qquad B_k[y] = 0, \]
has a solution, then we can write this solution in terms of the eigenfunctions of the associated eigenvalue problem,
\[ L[y] = \lambda y, \qquad B_k[y] = 0. \]
We denote the eigenvalues as $\lambda_n$ and the eigenfunctions as $\phi_n$ for $n \in \mathbb{Z}^+$. For the moment we assume that $\lambda = 0$ is not an eigenvalue and that the eigenfunctions are real-valued. We expand the function $f(x)$ in a series of the eigenfunctions.
\[ f(x) = \sum f_n \phi_n(x), \qquad f_n = \frac{\langle \phi_n | f \rangle}{\| \phi_n \|^2} \]
We expand the inhomogeneous solution in a series of eigenfunctions and substitute it into the differential equation.
\begin{align*}
L[y] &= f \\
L\left[ \sum y_n \phi_n(x) \right] &= \sum f_n \phi_n(x) \\
\sum \lambda_n y_n \phi_n(x) &= \sum f_n \phi_n(x) \\
y_n &= \frac{f_n}{\lambda_n}
\end{align*}
y(x) =


n
[f)

n
|
n
|

n
(x). (29.1)
As a special case we consider the Green function problem,
L[G] = (x ), B
k
[G] = 0,
We expand the Dirac delta function in an eigenfunction series.
(x ) =

n
[)
|
n
|

n
(x) =

n
()
n
(x)
|
n
|
The Green function is
G(x[) =

n
()
n
(x)

n
|
n
|
.
We corroborate Equation 29.1 by solving the inhomogeneous equation in terms of the Green function.
\begin{align*}
y &= \int_a^b G(x|\xi) f(\xi)\, d\xi \\
y &= \int_a^b \sum \frac{\phi_n(\xi) \phi_n(x)}{\lambda_n \| \phi_n \|^2} f(\xi)\, d\xi \\
y &= \sum \frac{ \int_a^b \phi_n(\xi) f(\xi)\, d\xi }{ \lambda_n \| \phi_n \|^2 } \phi_n(x) \\
y &= \sum \frac{\langle \phi_n | f \rangle}{\lambda_n \| \phi_n \|^2} \phi_n(x)
\end{align*}
Example 29.5.1 Consider the Green function problem
\[ G'' + G = \delta(x - \xi), \qquad G(0|\xi) = G(1|\xi) = 0. \]
First we examine the associated eigenvalue problem.
\begin{align*}
\phi'' + \phi &= \lambda \phi, \qquad \phi(0) = \phi(1) = 0 \\
\phi'' + (1 - \lambda) \phi &= 0, \qquad \phi(0) = \phi(1) = 0 \\
\lambda_n = 1 - (n\pi)^2, \qquad \phi_n &= \sin(n\pi x), \quad n \in \mathbb{Z}^+
\end{align*}
We write the Green function as a series of the eigenfunctions.
\[ G(x|\xi) = 2 \sum_{n=1}^{\infty} \frac{\sin(n\pi \xi) \sin(n\pi x)}{1 - (n\pi)^2} \]
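This eigenfunction series can be checked against the closed-form Green function for the same boundary value problem, $G(x|\xi) = \sin(x_<)\sin(x_> - 1)/\sin(1)$, with $x_< = \min(x,\xi)$ and $x_> = \max(x,\xi)$. The closed form is standard for this constant-coefficient problem, but the comparison itself is our illustration, not part of the text:

```python
import numpy as np

def G_series(x, xi, terms=20000):
    """Eigenfunction-series Green function for G'' + G = delta(x - xi)."""
    n = np.arange(1, terms + 1)
    return 2.0 * np.sum(np.sin(n * np.pi * xi) * np.sin(n * np.pi * x)
                        / (1.0 - (n * np.pi) ** 2))

def G_closed(x, xi):
    """Closed form built from sin(x) and sin(x - 1), which satisfy the
    left and right boundary conditions respectively."""
    lo, hi = min(x, xi), max(x, xi)
    return np.sin(lo) * np.sin(hi - 1.0) / np.sin(1.0)

print(G_series(0.3, 0.6), G_closed(0.3, 0.6))  # the two values agree
```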
29.6 Exercises

Exercise 29.1
Show that the operator adjoint to
\[ Ly = y^{(n)} + p_1(z) y^{(n-1)} + p_2(z) y^{(n-2)} + \cdots + p_n(z) y \]
is given by
\[ My = (-1)^n u^{(n)} + (-1)^{n-1} \left( p_1(z) u \right)^{(n-1)} + (-1)^{n-2} \left( p_2(z) u \right)^{(n-2)} + \cdots + p_n(z) u. \]
Hint, Solution
29.7 Hints

Hint 29.1
29.8 Solutions

Solution 29.1
Consider $u(x), v(x) \in C^n$. ($C^n$ is the set of $n$ times continuously differentiable functions). First we prove the preliminary result
\[ u v^{(n)} - (-1)^n u^{(n)} v = \frac{d}{dx} \sum_{k=0}^{n-1} (-1)^k u^{(k)} v^{(n-k-1)} \tag{29.2} \]
by simplifying the right side.
\begin{align*}
\frac{d}{dx} \sum_{k=0}^{n-1} (-1)^k u^{(k)} v^{(n-k-1)}
&= \sum_{k=0}^{n-1} (-1)^k \left( u^{(k)} v^{(n-k)} + u^{(k+1)} v^{(n-k-1)} \right) \\
&= \sum_{k=0}^{n-1} (-1)^k u^{(k)} v^{(n-k)} - \sum_{k=0}^{n-1} (-1)^{k+1} u^{(k+1)} v^{(n-k-1)} \\
&= \sum_{k=0}^{n-1} (-1)^k u^{(k)} v^{(n-k)} - \sum_{k=1}^{n} (-1)^k u^{(k)} v^{(n-k)} \\
&= (-1)^0 u^{(0)} v^{(n-0)} - (-1)^n u^{(n)} v^{(n-n)} \\
&= u v^{(n)} - (-1)^n u^{(n)} v
\end{align*}
We define $p_0(x) = 1$ so that we can write the operators in a nice form.
\[ Ly = \sum_{m=0}^n p_m(z) y^{(n-m)}, \qquad Mu = \sum_{m=0}^n (-1)^{n-m} \left( p_m(z) u \right)^{(n-m)} \]
Now we show that $M$ is the adjoint to $L$.
\begin{align*}
u L y - y M u &= u \sum_{m=0}^n p_m(z) y^{(n-m)} - y \sum_{m=0}^n (-1)^{n-m} \left( p_m(z) u \right)^{(n-m)} \\
&= \sum_{m=0}^n \left( u p_m(z) y^{(n-m)} - (-1)^{n-m} \left( p_m(z) u \right)^{(n-m)} y \right)
\end{align*}
We use Equation 29.2, with $p_m(z) u$ and $y$ in place of $u$ and $v$, and $n - m$ in place of $n$.
\begin{align*}
&= \sum_{m=0}^n \frac{d}{dz} \sum_{k=0}^{n-m-1} (-1)^k \left( u p_m(z) \right)^{(k)} y^{(n-m-k-1)} \\
u L y - y M u &= \frac{d}{dz} \sum_{m=0}^n \sum_{k=0}^{n-m-1} (-1)^k \left( u p_m(z) \right)^{(k)} y^{(n-m-k-1)}
\end{align*}
Chapter 30

Fourier Series

Every time I close my eyes
The noise inside me amplifies
I can't escape
I relive every moment of the day
Every misstep I have made
Finds a way it can invade
My every thought
And this is why I find myself awake

-Failure
-Tom Shear (Assemblage 23)

30.1 An Eigenvalue Problem.

A self adjoint eigenvalue problem. Consider the eigenvalue problem
\[ y'' + \lambda y = 0, \qquad y(-\pi) = y(\pi), \qquad y'(-\pi) = y'(\pi). \]
We rewrite the equation so the eigenvalue is on the right side.
\[ L[y] \equiv -y'' = \lambda y \]
We demonstrate that this eigenvalue problem is self adjoint.
\begin{align*}
\langle v | L[u] \rangle - \langle L[v] | u \rangle
&= \langle v | {-u''} \rangle - \langle -v'' | u \rangle \\
&= \left[ -\overline{v} u' \right]_{-\pi}^{\pi} + \langle v' | u' \rangle + \left[ \overline{v}' u \right]_{-\pi}^{\pi} - \langle v' | u' \rangle \\
&= -\overline{v}(\pi) u'(\pi) + \overline{v}(-\pi) u'(-\pi) + \overline{v}'(\pi) u(\pi) - \overline{v}'(-\pi) u(-\pi) \\
&= 0
\end{align*}
by the periodic boundary conditions. Since Green's Identity reduces to $\langle v | L[u] \rangle - \langle L[v] | u \rangle = 0$, the problem is self adjoint. This means that the eigenvalues are real and that eigenfunctions corresponding to distinct eigenvalues are orthogonal. We compute the Rayleigh quotient for an eigenvalue $\lambda$ with eigenfunction $\phi$.
\begin{align*}
\lambda &= \frac{ \left[ -\overline{\phi} \phi' \right]_{-\pi}^{\pi} + \langle \phi' | \phi' \rangle }{ \langle \phi | \phi \rangle } \\
&= \frac{ -\overline{\phi}(\pi) \phi'(\pi) + \overline{\phi}(-\pi) \phi'(-\pi) + \langle \phi' | \phi' \rangle }{ \langle \phi | \phi \rangle } \\
&= \frac{ \langle \phi' | \phi' \rangle }{ \langle \phi | \phi \rangle }
\end{align*}
We see that the eigenvalues are non-negative.
Computing the eigenvalues and eigenfunctions. Now we find the eigenvalues and eigenfunctions. First we consider the case $\lambda = 0$. The general solution of the differential equation is
\[ y = c_1 + c_2 x. \]
The solution that satisfies the boundary conditions is $y = \text{const}$.

Now consider $\lambda > 0$. The general solution of the differential equation is
\[ y = c_1 \cos\left( \sqrt{\lambda}\, x \right) + c_2 \sin\left( \sqrt{\lambda}\, x \right). \]
We apply the first boundary condition.
\begin{align*}
y(-\pi) &= y(\pi) \\
c_1 \cos\left( \sqrt{\lambda}\, \pi \right) - c_2 \sin\left( \sqrt{\lambda}\, \pi \right) &= c_1 \cos\left( \sqrt{\lambda}\, \pi \right) + c_2 \sin\left( \sqrt{\lambda}\, \pi \right) \\
c_2 \sin\left( \sqrt{\lambda}\, \pi \right) &= 0
\end{align*}
Then we apply the second boundary condition.
\begin{align*}
y'(-\pi) &= y'(\pi) \\
c_1 \sqrt{\lambda} \sin\left( \sqrt{\lambda}\, \pi \right) + c_2 \sqrt{\lambda} \cos\left( \sqrt{\lambda}\, \pi \right) &= -c_1 \sqrt{\lambda} \sin\left( \sqrt{\lambda}\, \pi \right) + c_2 \sqrt{\lambda} \cos\left( \sqrt{\lambda}\, \pi \right) \\
c_1 \sin\left( \sqrt{\lambda}\, \pi \right) &= 0
\end{align*}
To satisfy the two boundary conditions either $c_1 = c_2 = 0$ or $\sin\left( \sqrt{\lambda}\, \pi \right) = 0$. The former yields the trivial solution. The latter gives us the eigenvalues $\lambda_n = n^2$, $n \in \mathbb{Z}^+$. The corresponding solution is
\[ y_n = c_1 \cos(nx) + c_2 \sin(nx). \]
There are two eigenfunctions for each of the positive eigenvalues.

We choose the eigenvalues and eigenfunctions.
\[ \lambda_0 = 0, \quad \phi_0 = \frac{1}{2}, \]
\[ \lambda_n = n^2, \quad \phi_{2n-1} = \cos(nx), \quad \phi_{2n} = \sin(nx), \quad \text{for } n = 1, 2, 3, \ldots \]
Orthogonality of Eigenfunctions. We know that the eigenfunctions of distinct eigenvalues are orthogonal. In addition, the two eigenfunctions of each positive eigenvalue are orthogonal.
\[ \int_{-\pi}^{\pi} \cos(nx) \sin(nx)\, dx = \left[ \frac{1}{2n} \sin^2(nx) \right]_{-\pi}^{\pi} = 0 \]
Thus the eigenfunctions $\frac{1}{2}, \cos(x), \sin(x), \cos(2x), \sin(2x), \ldots$ are an orthogonal set.
30.2 Fourier Series.

A series of the eigenfunctions
\[ \phi_0 = \frac{1}{2}, \qquad \phi_n^{(1)} = \cos(nx), \qquad \phi_n^{(2)} = \sin(nx), \quad \text{for } n \ge 1 \]
is
\[ \frac{1}{2} a_0 + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right). \]
This is known as a Fourier series. (We choose $\phi_0 = \frac{1}{2}$ so all of the eigenfunctions have the same norm.) A fairly general class of functions can be expanded in Fourier series. Let $f(x)$ be a function defined on $-\pi < x < \pi$. Assume that $f(x)$ can be expanded in a Fourier series
\[ f(x) \sim \frac{1}{2} a_0 + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right). \tag{30.1} \]
Here the $\sim$ means "has the Fourier series". We have not said if the series converges yet. For now let's assume that the series converges uniformly so we can replace the $\sim$ with an $=$.
We integrate Equation 30.1 from $-\pi$ to $\pi$ to determine $a_0$.
\begin{align*}
\int_{-\pi}^{\pi} f(x)\, dx &= \frac{1}{2} a_0 \int_{-\pi}^{\pi} dx + \int_{-\pi}^{\pi} \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right) dx \\
\int_{-\pi}^{\pi} f(x)\, dx &= \pi a_0 + \sum_{n=1}^{\infty} \left( a_n \int_{-\pi}^{\pi} \cos(nx)\, dx + b_n \int_{-\pi}^{\pi} \sin(nx)\, dx \right) \\
\int_{-\pi}^{\pi} f(x)\, dx &= \pi a_0 \\
a_0 &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\, dx
\end{align*}
Multiplying by $\cos(mx)$ and integrating will enable us to solve for $a_m$.
\begin{align*}
\int_{-\pi}^{\pi} f(x) \cos(mx)\, dx &= \frac{1}{2} a_0 \int_{-\pi}^{\pi} \cos(mx)\, dx \\
&\quad + \sum_{n=1}^{\infty} \left( a_n \int_{-\pi}^{\pi} \cos(nx) \cos(mx)\, dx + b_n \int_{-\pi}^{\pi} \sin(nx) \cos(mx)\, dx \right)
\end{align*}
All but one of the terms on the right side vanishes due to the orthogonality of the eigenfunctions.
\begin{align*}
\int_{-\pi}^{\pi} f(x) \cos(mx)\, dx &= a_m \int_{-\pi}^{\pi} \cos(mx) \cos(mx)\, dx \\
\int_{-\pi}^{\pi} f(x) \cos(mx)\, dx &= a_m \int_{-\pi}^{\pi} \left( \frac{1}{2} + \frac{1}{2} \cos(2mx) \right) dx \\
\int_{-\pi}^{\pi} f(x) \cos(mx)\, dx &= \pi a_m \\
a_m &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(mx)\, dx.
\end{align*}
Note that this formula is valid for $m = 0, 1, 2, \ldots$.
Similarly, we can multiply by $\sin(mx)$ and integrate to solve for $b_m$. The result is
\[ b_m = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(mx)\, dx. \]
$a_n$ and $b_n$ are called Fourier coefficients.

Although we will not show it, Fourier series converge for a fairly general class of functions. Let $f(x^-)$ denote the left limit of $f(x)$ and $f(x^+)$ denote the right limit.

Example 30.2.1 For the function defined
\[ f(x) = \begin{cases} 0 & \text{for } x < 0, \\ x + 1 & \text{for } x \ge 0, \end{cases} \]
the left and right limits at $x = 0$ are
\[ f(0^-) = 0, \qquad f(0^+) = 1. \]
Result 30.2.1 Let $f(x)$ be a $2\pi$-periodic function for which $\int_{-\pi}^{\pi} |f(x)|\, dx$ exists. Define the Fourier coefficients
\[ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx, \qquad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx. \]
If $x$ is an interior point of an interval on which $f(x)$ has limited total fluctuation, then the Fourier series of $f(x)$,
\[ \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right), \]
converges to $\frac{1}{2}\left( f(x^-) + f(x^+) \right)$. If $f$ is continuous at $x$, then the series converges to $f(x)$.
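The coefficient formulas are easy to exercise numerically. This sketch (our illustration, not from the text) computes the coefficients of $f(x) = x$ on $(-\pi, \pi)$ by quadrature; for this odd function $a_n = 0$ and $b_n = 2(-1)^{n+1}/n$, a standard result:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 400001)

def integrate(y):
    """Trapezoid rule on the fixed grid x."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

f = x  # expand f(x) = x on (-pi, pi)

a = [integrate(f * np.cos(n * x)) / np.pi for n in range(4)]
b = [integrate(f * np.sin(n * x)) / np.pi for n in range(1, 4)]

print(a)  # all ≈ 0, since f is odd
print(b)  # ≈ [2, -1, 2/3], i.e. b_n = 2*(-1)^(n+1)/n
```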
Periodic Extension of a Function. Let $g(x)$ be a function that is arbitrarily defined on $-\pi \le x < \pi$. The Fourier series of $g(x)$ will represent the periodic extension of $g(x)$. The periodic extension, $f(x)$, is defined by the two conditions:
\[ f(x) = g(x) \quad \text{for } -\pi \le x < \pi, \]
\[ f(x + 2\pi) = f(x). \]
The periodic extension of $g(x) = x^2$ is shown in Figure 30.1.

Limited Fluctuation. A function that has limited total fluctuation can be written $f(x) = \psi_+(x) - \psi_-(x)$, where $\psi_+$ and $\psi_-$ are bounded, nondecreasing functions. An example of a function that does not have limited total fluctuation is $\sin(1/x)$, whose fluctuation is unlimited at the point $x = 0$.

Figure 30.1: The Periodic Extension of $g(x) = x^2$.
Functions with Jump Discontinuities. Let $f(x)$ be a discontinuous function that has a convergent Fourier series. Note that the series does not necessarily converge to $f(x)$. Instead it converges to $\hat{f}(x) = \frac{1}{2}\left( f(x^-) + f(x^+) \right)$.

Example 30.2.2 Consider the function defined by
\[ f(x) = \begin{cases} -x & \text{for } -\pi \le x < 0 \\ \pi - 2x & \text{for } 0 \le x < \pi. \end{cases} \]
The Fourier series converges to the function defined by
\[ \hat{f}(x) = \begin{cases} 0 & \text{for } x = -\pi \\ -x & \text{for } -\pi < x < 0 \\ \pi/2 & \text{for } x = 0 \\ \pi - 2x & \text{for } 0 < x < \pi. \end{cases} \]
The function $\hat{f}(x)$ is plotted in Figure 30.2.

Figure 30.2: Graph of $\hat{f}(x)$.
30.3 Least Squares Fit

Approximating a function with a Fourier series. Suppose we want to approximate a $2\pi$-periodic function $f(x)$ with a finite Fourier series.
\[ f(x) \approx \frac{a_0}{2} + \sum_{n=1}^{N} \left( a_n \cos(nx) + b_n \sin(nx) \right) \]
Here the coefficients are computed with the familiar formulas. Is this the best approximation to the function? That is, is it possible to choose coefficients $\alpha_n$ and $\beta_n$ such that
\[ f(x) \approx \frac{\alpha_0}{2} + \sum_{n=1}^{N} \left( \alpha_n \cos(nx) + \beta_n \sin(nx) \right) \]
would give a better approximation?

Least squared error fit. The most common criterion for finding the best fit to a function is the least squares fit. The best approximation to a function is defined as the one that minimizes the integral of the square of the deviation. Thus if $f(x)$ is to be approximated on the interval $a \le x \le b$ by a series
\[ f(x) \approx \sum_{n=1}^{N} c_n \phi_n(x), \tag{30.2} \]
the best approximation is found by choosing values of $c_n$ that minimize the error $E$.
\[ E \equiv \int_a^b \left( f(x) - \sum_{n=1}^{N} c_n \phi_n(x) \right)^2 dx \]
Generalized Fourier coefficients. We consider the case that the $\phi_n$ are orthogonal. For simplicity, we also assume that the $\phi_n$ are real-valued. Then most of the terms will vanish when we interchange the order of integration and summation.
\begin{align*}
E &= \int_a^b \left( f^2 - 2f \sum_{n=1}^N c_n \phi_n + \sum_{n=1}^N c_n \phi_n \sum_{m=1}^N c_m \phi_m \right) dx \\
E &= \int_a^b f^2\, dx - 2 \sum_{n=1}^N c_n \int_a^b f \phi_n\, dx + \sum_{n=1}^N \sum_{m=1}^N c_n c_m \int_a^b \phi_n \phi_m\, dx \\
E &= \int_a^b f^2\, dx - 2 \sum_{n=1}^N c_n \int_a^b f \phi_n\, dx + \sum_{n=1}^N c_n^2 \int_a^b \phi_n^2\, dx \\
E &= \int_a^b f^2\, dx + \sum_{n=1}^N \left( c_n^2 \int_a^b \phi_n^2\, dx - 2 c_n \int_a^b f \phi_n\, dx \right)
\end{align*}
We complete the square for each term.
\[ E = \int_a^b f^2\, dx + \sum_{n=1}^N \left[ \left( \int_a^b \phi_n^2\, dx \right) \left( c_n - \frac{\int_a^b f \phi_n\, dx}{\int_a^b \phi_n^2\, dx} \right)^2 - \frac{ \left( \int_a^b f \phi_n\, dx \right)^2 }{ \int_a^b \phi_n^2\, dx } \right] \]
Each term involving $c_n$ is non-negative, and is minimized for
\[ c_n = \frac{\int_a^b f \phi_n\, dx}{\int_a^b \phi_n^2\, dx}. \tag{30.3} \]
We call these the generalized Fourier coefficients.
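The claim that least-squares fitting reproduces Equation 30.3 can be checked numerically: fit $f$ on a fine grid with `numpy.linalg.lstsq` and compare against the quadrature ratio. This sketch is our illustration; the basis and test function are arbitrary choices:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
w = np.full_like(x, x[1] - x[0])  # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5

f = np.abs(x)  # an arbitrary test function
basis = [0.5 * np.ones_like(x)] + [np.cos(n * x) for n in (1, 2, 3)]

# Least-squares fit: minimize || sqrt(w) * (A c - f) ||^2.
A = np.column_stack(basis)
c_lstsq, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * f,
                              rcond=None)

# Generalized Fourier coefficients, Equation 30.3, by quadrature.
c_formula = np.array([np.sum(w * f * phi) / np.sum(w * phi * phi)
                      for phi in basis])

print(np.max(np.abs(c_lstsq - c_formula)))  # ≈ 0: the two agree
```

Because the basis functions are orthogonal, the normal equations of the least-squares problem decouple, which is exactly why the two computations coincide.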
We call these the generalized Fourier coecients.
For such a choice of the c
n
, the error is
E =
_
b
a
f
2
dx
N

n=1
c
2
n
_
b
a

2
n
dx.
1205
Since the error is non-negative, we have
_
b
a
f
2
dx
N

n=1
c
2
n
_
b
a

2
n
dx.
This is known as Bessels Inequality. If the series in Equation 30.2 converges in the mean to f(x), limN E =
0, then we have equality as N .
_
b
a
f
2
dx =

n=1
c
2
n
_
b
a

2
n
dx.
This is Parsevals equality.
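For $f(x) = x$ on $(-\pi, \pi)$, with $b_n = 2(-1)^{n+1}/n$ and $\int_{-\pi}^{\pi} \sin^2(nx)\,dx = \pi$, Parseval's equality reads $\int_{-\pi}^{\pi} x^2\,dx = 2\pi^3/3 = \pi \sum_n b_n^2$. A numerical check (our illustration, not from the text):

```python
import numpy as np

n = np.arange(1, 1_000_000 + 1)
b = 2.0 * (-1.0) ** (n + 1) / n   # Fourier sine coefficients of f(x) = x

lhs = 2.0 * np.pi**3 / 3.0        # integral of x^2 over (-pi, pi)
rhs = np.pi * np.sum(b**2)        # pi * sum of b_n^2

print(lhs, rhs)  # the partial sum approaches the integral
```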
Fourier coefficients. Previously we showed that if the series,
\[ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right), \]
converges uniformly then the coefficients in the series are the Fourier coefficients,
\[ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx, \qquad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx. \]
Now we show that by choosing the coefficients to minimize the squared error, we obtain the same result. We apply Equation 30.3 to the Fourier eigenfunctions.
\begin{align*}
a_0 &= \frac{ \int_{-\pi}^{\pi} \frac{1}{2} f\, dx }{ \int_{-\pi}^{\pi} \frac{1}{4}\, dx } = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\, dx \\
a_n &= \frac{ \int_{-\pi}^{\pi} f \cos(nx)\, dx }{ \int_{-\pi}^{\pi} \cos^2(nx)\, dx } = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx \\
b_n &= \frac{ \int_{-\pi}^{\pi} f \sin(nx)\, dx }{ \int_{-\pi}^{\pi} \sin^2(nx)\, dx } = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx
\end{align*}
30.4 Fourier Series for Functions Defined on Arbitrary Ranges

If $f(x)$ is defined on $c - d \le x < c + d$ and $f(x + 2d) = f(x)$, then $f(x)$ has a Fourier series of the form
\[ f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos\left( \frac{n\pi(x + c)}{d} \right) + b_n \sin\left( \frac{n\pi(x + c)}{d} \right). \]
Since
\[ \int_{c-d}^{c+d} \cos^2\left( \frac{n\pi(x + c)}{d} \right) dx = \int_{c-d}^{c+d} \sin^2\left( \frac{n\pi(x + c)}{d} \right) dx = d, \]
the Fourier coefficients are given by the formulas
\[ a_n = \frac{1}{d} \int_{c-d}^{c+d} f(x) \cos\left( \frac{n\pi(x + c)}{d} \right) dx, \]
\[ b_n = \frac{1}{d} \int_{c-d}^{c+d} f(x) \sin\left( \frac{n\pi(x + c)}{d} \right) dx. \]
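For example, on $(-1, 1)$ (so $c = 0$, $d = 1$) the even function $g(x) = x^2$ has the standard expansion $a_0 = 2/3$, $a_n = 4(-1)^n/(n\pi)^2$, $b_n = 0$, which the formulas above reproduce numerically. This check is our illustration, not from the text:

```python
import numpy as np

d = 1.0
x = np.linspace(-d, d, 200001)

def integrate(y):
    """Trapezoid rule on the fixed grid x."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

g = x**2
a = [integrate(g * np.cos(n * np.pi * x / d)) / d for n in range(4)]

print(a[0])        # ≈ 2/3
print(a[1], a[2])  # ≈ -4/pi^2 and 4/(2 pi)^2 = 1/pi^2
```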
Example 30.4.1 Consider the function defined by
\[ f(x) = \begin{cases} x + 1 & \text{for } -1 \le x < 0 \\ x & \text{for } 0 \le x < 1 \\ 3 - 2x & \text{for } 1 \le x < 2. \end{cases} \]
This function is graphed in Figure 30.3.

The Fourier series converges to $\hat{f}(x) = \left( f(x^-) + f(x^+) \right)/2$,
\[ \hat{f}(x) = \begin{cases} -\frac{1}{2} & \text{for } x = -1 \\ x + 1 & \text{for } -1 < x < 0 \\ \frac{1}{2} & \text{for } x = 0 \\ x & \text{for } 0 < x < 1 \\ 3 - 2x & \text{for } 1 \le x < 2. \end{cases} \]
$\hat{f}(x)$ is also graphed in Figure 30.3.

Figure 30.3: A Function Defined on the range $-1 \le x < 2$ and the Function to which the Fourier Series Converges.
The Fourier coefficients are
\begin{align*}
a_n &= \frac{1}{3/2} \int_{-1}^{2} f(x) \cos\left( \frac{2n\pi(x + 1/2)}{3} \right) dx \\
&= \frac{2}{3} \int_{-1/2}^{5/2} f(x - 1/2) \cos\left( \frac{2n\pi x}{3} \right) dx \\
&= \frac{2}{3} \int_{-1/2}^{1/2} (x + 1/2) \cos\left( \frac{2n\pi x}{3} \right) dx + \frac{2}{3} \int_{1/2}^{3/2} (x - 1/2) \cos\left( \frac{2n\pi x}{3} \right) dx \\
&\quad + \frac{2}{3} \int_{3/2}^{5/2} (4 - 2x) \cos\left( \frac{2n\pi x}{3} \right) dx \\
&= \frac{1}{(n\pi)^2} \sin\left( \frac{2n\pi}{3} \right) \left( 2(-1)^n n\pi + 9 \sin\left( \frac{n\pi}{3} \right) \right)
\end{align*}
\begin{align*}
b_n &= \frac{1}{3/2} \int_{-1}^{2} f(x) \sin\left( \frac{2n\pi(x + 1/2)}{3} \right) dx \\
&= \frac{2}{3} \int_{-1/2}^{5/2} f(x - 1/2) \sin\left( \frac{2n\pi x}{3} \right) dx \\
&= \frac{2}{3} \int_{-1/2}^{1/2} (x + 1/2) \sin\left( \frac{2n\pi x}{3} \right) dx + \frac{2}{3} \int_{1/2}^{3/2} (x - 1/2) \sin\left( \frac{2n\pi x}{3} \right) dx \\
&\quad + \frac{2}{3} \int_{3/2}^{5/2} (4 - 2x) \sin\left( \frac{2n\pi x}{3} \right) dx \\
&= \frac{2}{(n\pi)^2} \sin^2\left( \frac{n\pi}{3} \right) \left( 2(-1)^n n\pi + 4n\pi \cos\left( \frac{n\pi}{3} \right) - 3 \sin\left( \frac{n\pi}{3} \right) \right)
\end{align*}
30.5 Fourier Cosine Series

If $f(x)$ is an even function, ($f(-x) = f(x)$), then there will not be any sine terms in the Fourier series for $f(x)$. The Fourier sine coefficient is
\[ b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx. \]
Since $f(x)$ is an even function and $\sin(nx)$ is odd, $f(x)\sin(nx)$ is odd. $b_n$ is the integral of an odd function from $-\pi$ to $\pi$ and is thus zero. We can rewrite the cosine coefficients,
\[ a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx = \frac{2}{\pi} \int_0^{\pi} f(x) \cos(nx)\, dx. \]
Example 30.5.1 Consider the function defined on $[0, \pi)$ by
\[ f(x) = \begin{cases} x & \text{for } 0 \le x < \pi/2 \\ \pi - x & \text{for } \pi/2 \le x < \pi. \end{cases} \]
The Fourier cosine coefficients for this function are
\begin{align*}
a_n &= \frac{2}{\pi} \int_0^{\pi/2} x \cos(nx)\, dx + \frac{2}{\pi} \int_{\pi/2}^{\pi} (\pi - x) \cos(nx)\, dx \\
&= \begin{cases} \dfrac{\pi}{4} & \text{for } n = 0, \\[1ex] \dfrac{8}{\pi n^2} \cos\left( \dfrac{n\pi}{2} \right) \sin^2\left( \dfrac{n\pi}{4} \right) & \text{for } n \ge 1. \end{cases}
\end{align*}
In Figure 30.4 the even periodic extension of $f(x)$ is plotted in a dashed line and the sum of the first five nonzero terms in the Fourier cosine series are plotted in a solid line.

Figure 30.4: Fourier Cosine Series.
30.6 Fourier Sine Series

If $f(x)$ is an odd function, ($f(-x) = -f(x)$), then there will not be any cosine terms in the Fourier series. Since $f(x)\cos(nx)$ is an odd function, the cosine coefficients will be zero. Since $f(x)\sin(nx)$ is an even function, we can rewrite the sine coefficients
\[ b_n = \frac{2}{\pi} \int_0^{\pi} f(x) \sin(nx)\, dx. \]
Example 30.6.1 Consider the function defined on $[0, \pi)$ by
\[ f(x) = \begin{cases} x & \text{for } 0 \le x < \pi/2 \\ \pi - x & \text{for } \pi/2 \le x < \pi. \end{cases} \]
The Fourier sine coefficients for this function are
\begin{align*}
b_n &= \frac{2}{\pi} \int_0^{\pi/2} x \sin(nx)\, dx + \frac{2}{\pi} \int_{\pi/2}^{\pi} (\pi - x) \sin(nx)\, dx \\
&= \frac{16}{\pi n^2} \cos\left( \frac{n\pi}{4} \right) \sin^3\left( \frac{n\pi}{4} \right)
\end{align*}
In Figure 30.5 the odd periodic extension of $f(x)$ is plotted in a dashed line and the sum of the first five nonzero terms in the Fourier sine series are plotted in a solid line.
30.7 Complex Fourier Series and Parseval's Theorem

By writing $\sin(nx)$ and $\cos(nx)$ in terms of $e^{inx}$ and $e^{-inx}$ we can obtain the complex form for a Fourier series.
\begin{align*}
\frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right)
&= \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \frac{1}{2} \left( e^{inx} + e^{-inx} \right) + b_n \frac{1}{2i} \left( e^{inx} - e^{-inx} \right) \right) \\
&= \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( \frac{1}{2} (a_n - i b_n) e^{inx} + \frac{1}{2} (a_n + i b_n) e^{-inx} \right) \\
&= \sum_{n=-\infty}^{\infty} c_n e^{inx}
\end{align*}
where
\[ c_n = \begin{cases} \frac{1}{2}(a_n - i b_n) & \text{for } n \ge 1 \\ \frac{a_0}{2} & \text{for } n = 0 \\ \frac{1}{2}(a_{-n} + i b_{-n}) & \text{for } n \le -1. \end{cases} \]
Figure 30.5: Fourier Sine Series.
The functions $\ldots, e^{-ix}, 1, e^{ix}, e^{2ix}, \ldots$ satisfy the relation
\[ \int_{-\pi}^{\pi} e^{inx} e^{-imx}\, dx = \int_{-\pi}^{\pi} e^{i(n-m)x}\, dx = \begin{cases} 2\pi & \text{for } n = m \\ 0 & \text{for } n \neq m. \end{cases} \]
Starting with the complex form of the Fourier series of a function $f(x)$,
\[ f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{inx}, \]
we multiply by $e^{-imx}$ and integrate from $-\pi$ to $\pi$ to obtain
\[ \int_{-\pi}^{\pi} f(x) e^{-imx}\, dx = \int_{-\pi}^{\pi} \sum_{n=-\infty}^{\infty} c_n e^{inx} e^{-imx}\, dx \]
\[ c_m = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-imx}\, dx \]
If $f(x)$ is real-valued then
\[ \overline{c_m} = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, \overline{e^{-imx}}\, dx = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{imx}\, dx = c_{-m}, \]
where $\overline{z}$ denotes the complex conjugate of $z$.
Assume that $f(x)$ has a uniformly convergent Fourier series.
\begin{align*}
\int_{-\pi}^{\pi} f^2(x)\, dx &= \int_{-\pi}^{\pi} \left( \sum_{m=-\infty}^{\infty} c_m e^{imx} \right) \left( \sum_{n=-\infty}^{\infty} c_n e^{inx} \right) dx \\
&= 2\pi \sum_{n=-\infty}^{\infty} c_n c_{-n} \\
&= 2\pi \left( \sum_{n=-\infty}^{-1} \frac{1}{4} (a_{-n} + i b_{-n})(a_{-n} - i b_{-n}) + \frac{a_0}{2} \frac{a_0}{2} + \sum_{n=1}^{\infty} \frac{1}{4} (a_n - i b_n)(a_n + i b_n) \right) \\
&= 2\pi \left( \frac{a_0^2}{4} + \frac{1}{2} \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right) \right)
\end{align*}
This yields a result known as Parseval's theorem which holds even when the Fourier series of $f(x)$ is not uniformly convergent.
Result 30.7.1 Parseval's Theorem. If $f(x)$ has the Fourier series
\[ f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n \sin(nx) \right), \]
then
\[ \int_{-\pi}^{\pi} f^2(x)\, dx = \frac{\pi}{2} a_0^2 + \pi \sum_{n=1}^{\infty} \left( a_n^2 + b_n^2 \right). \]
30.8 Behavior of Fourier Coefficients

Before we jump hip-deep into the grunge involved in determining the behavior of the Fourier coefficients, let's take a step back and get some perspective on what we should be looking for.

One of the important questions is whether the Fourier series converges uniformly. From Result 14.2.1 we know that a uniformly convergent series represents a continuous function. Thus we know that the Fourier series of a discontinuous function cannot be uniformly convergent. From Section 14.2 we know that a series is uniformly convergent if it can be bounded by a series of positive terms. If the Fourier coefficients, $a_n$ and $b_n$, are $O(1/n^\alpha)$ where $\alpha > 1$ then the series can be bounded by $(\text{const}) \sum_{n=1}^{\infty} 1/n^\alpha$ and will thus be uniformly convergent.

Let $f(x)$ be a function that meets the conditions for having a Fourier series and in addition is bounded. Let $(-\pi, p_1), (p_1, p_2), (p_2, p_3), \ldots, (p_m, \pi)$ be a partition into a finite number of intervals of the domain, $(-\pi, \pi)$ such that on each interval $f(x)$ and all its derivatives are continuous. Let $f(p^-)$ denote the left limit of $f(p)$ and $f(p^+)$ denote the right limit.
\[ f(p^-) = \lim_{\epsilon \to 0^+} f(p - \epsilon), \qquad f(p^+) = \lim_{\epsilon \to 0^+} f(p + \epsilon) \]
Example 30.8.1 The function shown in Figure 30.6 would be partitioned into the intervals
\[ (-2, -1), \quad (-1, 0), \quad (0, 1), \quad (1, 2). \]

Figure 30.6: A Function that can be Partitioned.
Suppose $f(x)$ has the Fourier series
\[ f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(nx) + b_n \sin(nx). \]
We can use the integral formula to find the $a_n$'s.
\begin{align*}
a_n &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx)\, dx \\
&= \frac{1}{\pi} \left( \int_{-\pi}^{p_1} f(x) \cos(nx)\, dx + \int_{p_1}^{p_2} f(x) \cos(nx)\, dx + \cdots + \int_{p_m}^{\pi} f(x) \cos(nx)\, dx \right)
\end{align*}
Using integration by parts,
\begin{align*}
&= \frac{1}{n\pi} \left( \left[ f(x) \sin(nx) \right]_{-\pi}^{p_1} + \left[ f(x) \sin(nx) \right]_{p_1}^{p_2} + \cdots + \left[ f(x) \sin(nx) \right]_{p_m}^{\pi} \right) \\
&\quad - \frac{1}{n\pi} \left( \int_{-\pi}^{p_1} f'(x) \sin(nx)\, dx + \int_{p_1}^{p_2} f'(x) \sin(nx)\, dx + \cdots + \int_{p_m}^{\pi} f'(x) \sin(nx)\, dx \right) \\
&= \frac{1}{n\pi} \left( \left( f(p_1^-) - f(p_1^+) \right) \sin(np_1) + \cdots + \left( f(p_m^-) - f(p_m^+) \right) \sin(np_m) \right) - \frac{1}{n} \frac{1}{\pi} \int_{-\pi}^{\pi} f'(x) \sin(nx)\, dx \\
&= \frac{1}{n} A_n - \frac{1}{n} b_n'
\end{align*}
where
\[ A_n = \frac{1}{\pi} \sum_{j=1}^{m} \sin(np_j) \left( f(p_j^-) - f(p_j^+) \right) \]
and the $b_n'$ are the sine coefficients of $f'(x)$.
Since $f(x)$ is bounded, $A_n = O(1)$. Since $f'(x)$ is bounded,
\[ b_n' = \frac{1}{\pi} \int_{-\pi}^{\pi} f'(x) \sin(nx)\, dx = O(1). \]
Thus $a_n = O(1/n)$ as $n \to \infty$. (Actually, from the Riemann-Lebesgue Lemma, $b_n' = o(1)$.)
Now we repeat this analysis for the sine coefficients.
\begin{align*}
b_n &= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx)\, dx \\
&= \frac{1}{\pi} \left( \int_{-\pi}^{p_1} f(x) \sin(nx)\, dx + \int_{p_1}^{p_2} f(x) \sin(nx)\, dx + \cdots + \int_{p_m}^{\pi} f(x) \sin(nx)\, dx \right) \\
&= -\frac{1}{n\pi} \left( \left[ f(x) \cos(nx) \right]_{-\pi}^{p_1} + \left[ f(x) \cos(nx) \right]_{p_1}^{p_2} + \cdots + \left[ f(x) \cos(nx) \right]_{p_m}^{\pi} \right) \\
&\quad + \frac{1}{n\pi} \left( \int_{-\pi}^{p_1} f'(x) \cos(nx)\, dx + \int_{p_1}^{p_2} f'(x) \cos(nx)\, dx + \cdots + \int_{p_m}^{\pi} f'(x) \cos(nx)\, dx \right) \\
&= \frac{1}{n} B_n + \frac{1}{n} a_n'
\end{align*}
where
\[ B_n = \frac{(-1)^n}{\pi} \left( f(-\pi) - f(\pi) \right) - \frac{1}{\pi} \sum_{j=1}^{m} \cos(np_j) \left( f(p_j^-) - f(p_j^+) \right) \]
and the $a_n'$ are the cosine coefficients of $f'(x)$.

Since $f(x)$ and $f'(x)$ are bounded, $B_n, a_n' = O(1)$ and thus $b_n = O(1/n)$ as $n \to \infty$.
With integration by parts on the Fourier coefficients of $f'(x)$ we could find that
\[ a_n' = \frac{1}{n} A_n' - \frac{1}{n} b_n'' \]
where $A_n' = \frac{1}{\pi} \sum_{j=1}^{m} \sin(np_j) \left[ f'(p_j^-) - f'(p_j^+) \right]$ and the $b_n''$ are the sine coefficients of $f''(x)$, and
\[ b_n' = \frac{1}{n} B_n' + \frac{1}{n} a_n'' \]
where $B_n' = \frac{(-1)^n}{\pi} \left[ f'(-\pi) - f'(\pi) \right] - \frac{1}{\pi} \sum_{j=1}^{m} \cos(np_j) \left[ f'(p_j^-) - f'(p_j^+) \right]$ and the $a_n''$ are the cosine coefficients of $f''(x)$.
Now we can rewrite $a_n$ and $b_n$ as
\[ a_n = \frac{1}{n} A_n - \frac{1}{n^2} B_n' - \frac{1}{n^2} a_n'' \]
\[ b_n = \frac{1}{n} B_n + \frac{1}{n^2} A_n' - \frac{1}{n^2} b_n''. \]
Continuing this process we could define $A_n^{(j)}$ and $B_n^{(j)}$ so that
\[ a_n = \frac{1}{n} A_n - \frac{1}{n^2} B_n' - \frac{1}{n^3} A_n'' + \frac{1}{n^4} B_n''' + \cdots \]
\[ b_n = \frac{1}{n} B_n + \frac{1}{n^2} A_n' - \frac{1}{n^3} B_n'' - \frac{1}{n^4} A_n''' + \cdots. \]
For any bounded function, the Fourier coefficients satisfy $a_n, b_n = O(1/n)$ as $n \to \infty$. If $A_n$ and $B_n$ are zero then the Fourier coefficients will be $O(1/n^2)$. A sufficient condition for this is that the periodic extension of $f(x)$ is continuous. We see that if the periodic extension of $f'(x)$ is continuous then $A_n'$ and $B_n'$ will be zero and the Fourier coefficients will be $O(1/n^3)$.
Result 30.8.1 Let $f(x)$ be a bounded function for which there is a partition of the range $(-\pi, \pi)$ into a finite number of intervals such that $f(x)$ and all its derivatives are continuous on each of the intervals. If $f(x)$ is not continuous then the Fourier coefficients are $O(1/n)$. If $f(x), f'(x), \ldots, f^{(k-2)}(x)$ are continuous then the Fourier coefficients are $O(1/n^k)$.

If the periodic extension of $f(x)$ is continuous, then the Fourier coefficients will be $O(1/n^2)$. The series $\sum_{n=1}^{\infty} |a_n \cos(nx) + b_n \sin(nx)|$ can be bounded by $M \sum_{n=1}^{\infty} 1/n^2$ where $M = \max_n \left( n^2 (|a_n| + |b_n|) \right)$. Thus the Fourier series converges to $f(x)$ uniformly.
Result 30.8.2 If the periodic extension of $f(x)$ is continuous then the Fourier series of $f(x)$ will converge uniformly for all $x$.

If the periodic extension of $f(x)$ is not continuous, we have the following result.

Result 30.8.3 If $f(x)$ is continuous in the interval $c < x < d$, then the Fourier series is uniformly convergent in the interval $c + \epsilon \le x \le d - \epsilon$ for any $\epsilon > 0$.
Example 30.8.2 Different Rates of Convergence.

A Discontinuous Function. Consider the function defined by
\[ f_1(x) = \begin{cases} -1 & \text{for } -1 < x < 0 \\ 1 & \text{for } 0 < x < 1. \end{cases} \]
This function has jump discontinuities, so we know that the Fourier coefficients are $O(1/n)$.

Since this function is odd, there will only be sine terms in its Fourier expansion. Furthermore, since the function is symmetric about $x = 1/2$, there will be only odd sine terms. Computing these terms,
\begin{align*}
b_n &= 2 \int_0^1 \sin(n\pi x)\, dx \\
&= 2 \left[ -\frac{1}{n\pi} \cos(n\pi x) \right]_0^1 \\
&= 2 \left( -\frac{(-1)^n}{n\pi} + \frac{1}{n\pi} \right) \\
&= \begin{cases} \frac{4}{n\pi} & \text{for odd } n \\ 0 & \text{for even } n. \end{cases}
\end{align*}
The function and the sum of the first three terms in the expansion are plotted, in dashed and solid lines respectively, in Figure 30.7. Although the three term sum follows the general shape of the function, it is clearly not a good approximation.

Figure 30.7: Three Term Approximation for a Function with Jump Discontinuities and a Continuous Function.
A Continuous Function. Consider the function defined by
\[ f_2(x) = \begin{cases} -x - 1 & \text{for } -1 < x < -1/2 \\ x & \text{for } -1/2 < x < 1/2 \\ -x + 1 & \text{for } 1/2 < x < 1. \end{cases} \]

Figure 30.8: Three Term Approximation for a Function with Continuous First Derivative and Comparison of the Rates of Convergence.
Since this function is continuous, the Fourier coefficients will be $O(1/n^2)$. Also we see that there will only be odd sine terms in the expansion.
\begin{align*}
b_n &= \int_{-1}^{-1/2} (-x - 1) \sin(n\pi x)\, dx + \int_{-1/2}^{1/2} x \sin(n\pi x)\, dx + \int_{1/2}^{1} (-x + 1) \sin(n\pi x)\, dx \\
&= 2 \int_0^{1/2} x \sin(n\pi x)\, dx + 2 \int_{1/2}^{1} (1 - x) \sin(n\pi x)\, dx \\
&= \frac{4}{(n\pi)^2} \sin(n\pi/2) \\
&= \begin{cases} \frac{4}{(n\pi)^2} (-1)^{(n-1)/2} & \text{for odd } n \\ 0 & \text{for even } n. \end{cases}
\end{align*}
The function and the sum of the first three terms in the expansion are plotted, in dashed and solid lines respectively, in Figure 30.7. We see that the convergence is much better than for the function with jump discontinuities.
A Function with a Continuous First Derivative. Consider the function dened by
f
3
(x) =
_
x(1 +x) for 1 < x < 0
x(1 x) for 0 < x < 1.
Since the periodic extension of this function is continuous and has a continuous rst derivative, the Fourier
coecients will be O(1/n
3
). We see that the Fourier expansion will contain only odd sine terms.
b
n
=
_
0
1
x(1 +x) sin(nx) dx +
_
1
0
x(1 x) sin(nx) dx
= 2
_
1
0
x(1 x) sin(nx) dx
=
4(1 (1)
n
)
(n)
3
=
_
4
(n)
3
for odd n
0 for even n.
The function and the sum of the first three terms in the expansion are plotted in Figure 30.8. We see that the first three terms give a very good approximation to the function. The plots of the function (in a dashed line) and the three-term approximation (in a solid line) are almost indistinguishable.
In Figure 30.8 the convergence of the first three terms to f_1(x), f_2(x), and f_3(x) is compared. In the last graph we see a closeup of f_3(x) and its Fourier expansion to show the error.
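These decay rates are easy to corroborate numerically. The sketch below is ours, not part of the text: it computes the sine coefficients $b_n = 2\int_0^1 f(x)\sin(n\pi x)\,dx$ by quadrature for the three examples of this section (taking $f_1$ to be the unit step of the first example) and compares $b_1$ to $b_9$; the ratios come out near $9$, $81$ and $729$, matching $1/n$, $1/n^2$ and $1/n^3$ decay.

```python
import numpy as np
from scipy.integrate import quad

def sine_coeff(f, n):
    # b_n = 2 * integral_0^1 f(x) sin(n pi x) dx (odd, period-2 extension)
    return 2 * quad(lambda x: f(x) * np.sin(n * np.pi * x), 0, 1)[0]

f1 = lambda x: 1.0                      # jump discontinuity  -> b_n ~ 1/n
f2 = lambda x: x if x < 0.5 else 1 - x  # continuous          -> b_n ~ 1/n^2
f3 = lambda x: x * (1 - x)              # continuous f'       -> b_n ~ 1/n^3

# Ratios of the n = 1 and n = 9 coefficients reveal the decay exponent.
r1 = sine_coeff(f1, 1) / sine_coeff(f1, 9)  # ~ 9
r2 = sine_coeff(f2, 1) / sine_coeff(f2, 9)  # ~ 81
r3 = sine_coeff(f3, 1) / sine_coeff(f3, 9)  # ~ 729
```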
30.9 Gibbs Phenomenon
The Fourier expansion of
\[ f(x) = \begin{cases} 1 & \text{for } 0 \le x < 1,\\ -1 & \text{for } -1 \le x < 0, \end{cases} \]
is
\[ f(x) \sim \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{1}{n}\sin(n\pi x). \]
For any fixed $x$, the series converges to $\frac{1}{2}(f(x^-)+f(x^+))$. For any $\epsilon > 0$, the convergence is uniform in the
intervals $-1+\epsilon \le x \le -\epsilon$ and $\epsilon \le x \le 1-\epsilon$. How will the nonuniform convergence at integral values of $x$
affect the Fourier series? Finite Fourier series are plotted in Figure 30.9 for 5, 10, 50 and 100 terms. (The plot
for 100 terms is a closeup of the behavior near $x = 0$.) Note that at each discontinuous point there is a series of
overshoots and undershoots that are pushed closer to the discontinuity by increasing the number of terms, but
do not seem to decrease in height. In fact, as the number of terms goes to infinity, the height of the overshoots
and undershoots does not vanish. This is known as Gibbs phenomenon.
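A quick numerical experiment (ours, not from the text) makes the persistence of the overshoot concrete: for the square-wave series above, the maximum of the partial sums stays near $1.179$, about a $9\%$ overshoot of the jump, no matter how many terms are taken; only its location moves toward the discontinuity.

```python
import numpy as np

def partial_sum(x, N):
    # S_N(x) = (4/pi) * sum over odd n <= N of sin(n pi x) / n
    n = np.arange(1, N + 1, 2)
    return (4 / np.pi) * np.sum(np.sin(np.pi * np.outer(x, n)) / n, axis=1)

x = np.linspace(0.001, 0.2, 2000)        # a window just right of the jump at 0
overshoots = [partial_sum(x, N).max() for N in (50, 100, 200, 400)]
# Each maximum is near 1.179; increasing N does not shrink it.
```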
30.10 Integrating and Differentiating Fourier Series
Integrating Fourier Series. Since integration is a smoothing operation, any convergent Fourier series can be
integrated term by term to yield another convergent Fourier series.
Example 30.10.1 Consider the step function
\[ f(x) = \begin{cases} \pi & \text{for } 0 \le x < \pi,\\ -\pi & \text{for } -\pi \le x < 0. \end{cases} \]
Figure 30.9: Finite Fourier series of the step function for 5, 10, 50 and 100 terms.
Since this is an odd function, there are no cosine terms in the Fourier series.
\[
\begin{aligned}
b_n &= \frac{2}{\pi}\int_0^{\pi}\pi\sin(nx)\,dx
= 2\left[-\frac{1}{n}\cos(nx)\right]_0^{\pi}
= \frac{2}{n}\left(1-(-1)^n\right)
= \begin{cases} \dfrac{4}{n} & \text{for odd } n,\\[4pt] 0 & \text{for even } n.\end{cases}
\end{aligned}
\]
\[ f(x) \sim \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n}\sin(nx) \]
Integrating this relation,
\[ \int_{-\pi}^{x} f(t)\,dt \sim \int_{-\pi}^{x}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n}\sin(nt)\,dt \]
\[
\begin{aligned}
F(x) &\sim \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n}\int_{-\pi}^{x}\sin(nt)\,dt
= \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n}\left[-\frac{1}{n}\cos(nt)\right]_{-\pi}^{x}\\
&= \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n^2}\left(-\cos(nx)+(-1)^n\right)
= -4\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{1}{n^2} - 4\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\cos(nx)}{n^2}
\end{aligned}
\]
Since this series converges uniformly,
\[ -4\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{1}{n^2} - 4\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\cos(nx)}{n^2}
= F(x) = \begin{cases} -\pi x - \pi^2 & \text{for } -\pi \le x < 0,\\ \pi x - \pi^2 & \text{for } 0 \le x < \pi. \end{cases} \]
The value of the constant term is
\[ -4\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{1}{n^2} = \frac{1}{\pi}\int_0^{\pi}F(x)\,dx = -\frac{\pi^2}{2}. \]
Thus
\[ \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\cos(nx)}{n^2}
= \begin{cases} \dfrac{\pi^2}{8} + \dfrac{\pi x}{4} & \text{for } -\pi \le x < 0,\\[6pt] \dfrac{\pi^2}{8} - \dfrac{\pi x}{4} & \text{for } 0 \le x < \pi. \end{cases} \]
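The closed form can be checked numerically; the sketch below (ours, not part of the text) sums the integrated series directly and compares it against $\pi^2/8 - \pi|x|/4$, which is the two-branch formula written compactly.

```python
import numpy as np

n = np.arange(1, 200001, 2)              # odd n only
x = np.linspace(-np.pi, np.pi, 7)
lhs = np.array([np.sum(np.cos(k * n) / n**2) for k in x])
rhs = np.pi**2 / 8 - np.pi * np.abs(x) / 4
err = np.abs(lhs - rhs).max()            # the 1/n^2 tail makes this tiny
```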
Differentiating Fourier Series. Recall that in general, a series can only be differentiated if it is uniformly
convergent. The necessary and sufficient condition that a Fourier series be uniformly convergent is that the
periodic extension of the function is continuous.
Result 30.10.1 The Fourier series of a function f(x) can be differentiated only if the
periodic extension of f(x) is continuous.
Example 30.10.2 Consider the function defined by
\[ f(x) = \begin{cases} \pi & \text{for } 0 \le x < \pi,\\ -\pi & \text{for } -\pi \le x < 0. \end{cases} \]
f(x) has the Fourier series
\[ f(x) \sim \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n}\sin(nx). \]
The function has a derivative except at the points $x = n\pi$. Differentiating the Fourier series yields
\[ f'(x) \sim 4\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\cos(nx). \]
For $x \ne n\pi$, this implies
\[ 0 = 4\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\cos(nx), \]
which is false. The series does not converge. This is as we expected since the Fourier series for f(x) is not
uniformly convergent.
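The failure is visible numerically as well. In this sketch (ours, not from the text) the partial sums of the termwise-differentiated series are evaluated at $x = 1$; they keep oscillating with order-one amplitude instead of settling toward $0$.

```python
import numpy as np

# Partial sums of the termwise-differentiated series 4 * sum over odd n of cos(n x),
# evaluated at x = 1, a point where the differentiated series "should" give 0.
x = 1.0
n = np.arange(1, 2001, 2)
partials = np.cumsum(4 * np.cos(n * x))

spread = partials.max() - partials.min()
tail_spread = partials[-100:].max() - partials[-100:].min()
# Both spreads stay of order 1: the partial sums never settle, so the series diverges.
```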
30.11 Exercises
Exercise 30.1
1. Consider a $2\pi$-periodic function $f(x)$ expressed as a Fourier series with partial sums
\[ S_N(x) = \frac{a_0}{2} + \sum_{n=1}^{N} a_n\cos(nx) + b_n\sin(nx). \]
Assuming that the Fourier series converges in the mean, i.e.
\[ \lim_{N\to\infty}\int_{-\pi}^{\pi}\left(f(x)-S_N(x)\right)^2 dx = 0, \]
show
\[ \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\left(a_n^2 + b_n^2\right) = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)^2\,dx. \]
This is called Parseval's equation.
2. Find the Fourier series for $f(x) = x$ on $-\pi \le x < \pi$ (and repeating periodically). Use this to show
\[ \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}. \]
3. Similarly, by choosing appropriate functions $f(x)$, use Parseval's equation to determine
\[ \sum_{n=1}^{\infty}\frac{1}{n^4} \qquad\text{and}\qquad \sum_{n=1}^{\infty}\frac{1}{n^6}. \]
Exercise 30.2
Consider the Fourier series of $f(x) = x$ on $-\pi \le x < \pi$ as found above. Investigate the convergence at the points
of discontinuity.
1. Let $S_N$ be the sum of the first $N$ terms in the Fourier series. Show that
\[ \frac{dS_N}{dx} = 1 - (-1)^N\frac{\cos\left(\left(N+\frac{1}{2}\right)x\right)}{\cos\left(\frac{x}{2}\right)}. \]
2. Now use this to show that
\[ x - S_N = \int_0^x\frac{\sin\left(\left(N+\frac{1}{2}\right)(\xi-\pi)\right)}{\sin\left(\frac{\xi-\pi}{2}\right)}\,d\xi. \]
3. Finally investigate the maxima of this difference around $x = \pi$ and provide an estimate (good to two decimal
places) of the overshoot in the limit $N \to \infty$.
Exercise 30.3
Consider the boundary value problem on the interval $0 < x < 1$,
\[ y'' + 2y = 1, \qquad y(0) = y(1) = 0. \]
1. Choose an appropriate periodic extension and find a Fourier series solution.
2. Solve directly and find the Fourier series of the solution (using the same extension). Compare the result to
the previous step and verify the series agree.
Exercise 30.4
Consider the boundary value problem on $0 < x < \pi$,
\[ y'' + 2y = \sin x, \qquad y'(0) = y'(\pi) = 0. \]
1. Find a Fourier series solution.
2. Suppose the ODE is slightly modified: $y'' + 4y = \sin x$ with the same boundary conditions. Attempt to find
a Fourier series solution and discuss in as much detail as possible what goes wrong.
Exercise 30.5
Find the Fourier cosine and sine series for $f(x) = x^2$ on $0 \le x < \pi$. Are the series differentiable?
Exercise 30.6
Find the Fourier series of $\cos^n(x)$.
Exercise 30.7
For what values of $x$ does the Fourier series
\[ \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos(nx) = x^2 \]
converge? What is the value of the above Fourier series for all $x$? From this relation show that
\[ \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}, \qquad \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^2} = \frac{\pi^2}{12}. \]
Exercise 30.8
1. Compute the Fourier sine series for the function
\[ f(x) = \cos x - 1 + \frac{2x}{\pi}, \qquad 0 \le x \le \pi. \]
2. How fast do the Fourier coefficients $a_n$, where
\[ f(x) = \sum_{n=1}^{\infty}a_n\sin(nx), \]
decrease with increasing $n$? Explain this rate of decrease.
Exercise 30.9
Determine the cosine and sine series of
\[ f(x) = x\sin x, \qquad (0 < x < \pi). \]
Estimate, before doing the calculation, the rate of decrease of the Fourier coefficients, $a_n$, $b_n$, for large $n$.
Exercise 30.10
Determine the Fourier cosine series of the function
\[ f(x) = \cos(\nu x), \qquad 0 \le x \le \pi, \]
where $\nu$ is an arbitrary real number. From this series deduce that for $\nu \ne n$,
\[ \frac{\pi}{\sin(\pi\nu)} = \frac{1}{\nu} + \sum_{n=1}^{\infty}(-1)^n\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right), \]
\[ \pi\cot(\pi\nu) = \frac{1}{\nu} + \sum_{n=1}^{\infty}\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right). \]
Integrate the last formula with respect to $\nu$ from $\nu = 0$ to $\nu = \theta$, $(0 < \theta < 1)$, to show that
\[ \frac{\sin(\pi\theta)}{\pi\theta} = \prod_{n=1}^{\infty}\left(1-\frac{\theta^2}{n^2}\right). \]
The symbol $\prod_{n=1}^{\infty}u_n$ denotes the infinite product $u_1u_2u_3\cdots$.
Exercise 30.11
1. Show that
\[ \log\left|\cos\frac{x}{2}\right| = -\log 2 - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\cos(nx), \qquad -\pi < x < \pi. \]
Hint: use the identity
\[ \log(1-z) = -\sum_{n=1}^{\infty}\frac{z^n}{n} \qquad\text{for } |z| \le 1,\ z \ne 1. \]
2. From this series deduce that
\[ \int_0^{\pi}\log\left|\cos\frac{x}{2}\right|dx = -\pi\log 2. \]
3. Show that
\[ \frac{1}{2}\log\left|\frac{\sin((x+\xi)/2)}{\sin((x-\xi)/2)}\right| = \sum_{n=1}^{\infty}\frac{\sin(nx)\sin(n\xi)}{n}, \qquad x \ne \pm\xi + 2k\pi. \]
Exercise 30.12
Solve the problem
\[ y'' + \lambda y = f(x), \qquad y(a) = y(b) = 0, \]
with an eigenfunction expansion. Assume that $\lambda \ne \left(n\pi/(b-a)\right)^2$, $n \in \mathbb{N}$.
Exercise 30.13
Solve the problem
\[ y'' + \lambda y = f(x), \qquad y(a) = A, \quad y(b) = B, \]
with an eigenfunction expansion. Assume that $\lambda \ne \left(n\pi/(b-a)\right)^2$, $n \in \mathbb{N}$.
Exercise 30.14
Find the trigonometric series and the simple closed-form expressions for $A(r,x)$ and $B(r,x)$ where $z = r\,\mathrm{e}^{ix}$ and
$|r| < 1$.
a) $A + iB = \dfrac{1}{1-z^2} = 1 + z^2 + z^4 + \cdots$
b) $A + iB = \log(1+z) = z - \dfrac{1}{2}z^2 + \dfrac{1}{3}z^3 - \cdots$
Find $A_n$ and $B_n$, and the trigonometric sum for them where:
c) $A_n + iB_n = 1 + z + z^2 + \cdots + z^n$.
Exercise 30.15
1. Is the trigonometric system
\[ \{1, \sin x, \cos x, \sin 2x, \cos 2x, \ldots\} \]
orthogonal on the interval $[0,\pi]$? Is the system orthogonal on any interval of length $\pi$? Why, in each case?
2. Show that each of the systems
\[ \{1, \cos x, \cos 2x, \ldots\} \qquad\text{and}\qquad \{\sin x, \sin 2x, \ldots\} \]
is orthogonal on $[0,\pi]$. Make them orthonormal too.
Exercise 30.16
Let $S_N(x)$ be the $N^{\text{th}}$ partial sum of the Fourier series for $f(x) \equiv |x|$ on $-\pi < x < \pi$. Find $N$ such that
\[ |f(x) - S_N(x)| < 10^{-1} \quad\text{on } |x| < \pi. \]
Exercise 30.17
The set $\{\sin(nx)\}_{n=1}^{\infty}$ is orthogonal and complete on $[0,\pi]$.
1. Find the Fourier sine series for $f(x) \equiv 1$ on $0 \le x \le \pi$.
2. Find a convergent series for $g(x) = x$ on $0 \le x \le \pi$ by integrating the series for part (a).
3. Apply Parseval's relation to the series in (a) to find:
\[ \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}. \]
Check this result by evaluating the series in (b) at $x = \pi$.
Exercise 30.18
1. Show that the Fourier cosine series expansion on $[0,\pi]$ of:
\[ f(x) \equiv \begin{cases} 1, & 0 \le x < \frac{\pi}{2},\\[2pt] \frac{1}{2}, & x = \frac{\pi}{2},\\[2pt] 0, & \frac{\pi}{2} < x \le \pi, \end{cases} \]
is
\[ S(x) = \frac{1}{2} + \frac{2}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1}\cos((2n+1)x). \]
2. Show that the $N^{\text{th}}$ partial sum of the series in (a) is
\[ S_N(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^{x-\pi/2}\frac{\sin(2(N+1)t)}{\sin t}\,dt. \]
(Hint: Consider the difference of $\sum_{n=1}^{2N+1}(\mathrm{e}^{iy})^n$ and $\sum_{n=1}^{N}(\mathrm{e}^{i2y})^n$, where $y = x - \pi/2$.)
3. Show that $dS_N(x)/dx = 0$ at $x = x_n = \frac{n\pi}{2(N+1)}$ for $n = 0, 1, \ldots, N, N+2, \ldots, 2N+2$.
4. Show that at $x = x_N$, the maximum of $S_N(x)$ nearest to $\pi/2$ in $(0,\pi/2)$ is
\[ S_N(x_N) = \frac{1}{2} + \frac{1}{\pi}\int_0^{\pi/(2(N+1))}\frac{\sin(2(N+1)t)}{\sin t}\,dt. \]
Clearly $x_N \to \pi/2$ as $N \to \infty$.
5. Show that also in this limit,
\[ S_N(x_N) \to \frac{1}{2} + \frac{1}{\pi}\int_0^{\pi}\frac{\sin t}{t}\,dt \approx 1.0895. \]
How does this compare with $f(\pi/2-0)$? This overshoot is the Gibbs phenomenon that occurs at each
discontinuity. It is a manifestation of the non-uniform convergence of the Fourier series for $f(x)$ on $[0,\pi]$.
Exercise 30.19
Prove the Isoperimetric Inequality: $L^2 \ge 4\pi A$, where $L$ is the length of the perimeter and $A$ the area of any
piecewise smooth plane figure. Show that equality is attained only for the circle. (Hints: The closed curve is
represented parametrically as
\[ x = x(s), \quad y = y(s), \qquad 0 \le s \le L, \]
where $s$ is the arclength. In terms of $t = 2\pi s/L$ we have
\[ \left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2 = \left(\frac{L}{2\pi}\right)^2. \]
Integrate this relation over $[0, 2\pi]$. The area is given by
\[ A = \int_0^{2\pi}x\frac{dy}{dt}\,dt. \]
Express $x(t)$ and $y(t)$ as Fourier series and use the completeness and orthogonality relations to show that $L^2 - 4\pi A \ge 0$.)
Exercise 30.20
1. Find the Fourier sine series expansion and the Fourier cosine series expansion of
\[ g(x) = x(1-x), \qquad\text{on } 0 \le x \le 1. \]
Which is better and why over the indicated interval?
2. Use these expansions to show that:
\[ \text{i)}\ \sum_{k=1}^{\infty}\frac{1}{k^2} = \frac{\pi^2}{6}, \qquad
\text{ii)}\ \sum_{k=1}^{\infty}\frac{(-1)^k}{k^2} = -\frac{\pi^2}{12}, \qquad
\text{iii)}\ \sum_{k=1}^{\infty}\frac{(-1)^k}{(2k-1)^3} = -\frac{\pi^3}{32}. \]
Note: Some useful integration by parts formulas are:
\[ \int x\sin(nx)\,dx = \frac{1}{n^2}\sin(nx) - \frac{x}{n}\cos(nx); \qquad
\int x\cos(nx)\,dx = \frac{1}{n^2}\cos(nx) + \frac{x}{n}\sin(nx); \]
\[ \int x^2\sin(nx)\,dx = \frac{2x}{n^2}\sin(nx) - \frac{n^2x^2-2}{n^3}\cos(nx); \qquad
\int x^2\cos(nx)\,dx = \frac{2x}{n^2}\cos(nx) + \frac{n^2x^2-2}{n^3}\sin(nx). \]
30.12 Hints
Hint 30.1
Hint 30.2
Hint 30.3
Hint 30.4
Hint 30.5
Hint 30.6
Expand
\[ \cos^n(x) = \left(\frac{1}{2}\left(\mathrm{e}^{ix} + \mathrm{e}^{-ix}\right)\right)^n \]
using Newton's binomial formula.
Hint 30.7
Hint 30.8
Hint 30.9
Hint 30.10
Hint 30.11
Hint 30.12
Hint 30.13
Hint 30.14
Hint 30.15
Hint 30.16
Hint 30.17
Hint 30.18
Hint 30.19
Hint 30.20
30.13 Solutions
Solution 30.1
1. We start by assuming that the Fourier series converges in the mean.
\[ \lim_{N\to\infty}\int_{-\pi}^{\pi}\left(f(x) - \frac{a_0}{2} - \sum_{n=1}^{N}\left(a_n\cos(nx)+b_n\sin(nx)\right)\right)^2 dx = 0 \]
We interchange the order of integration and summation.
\[
\begin{aligned}
&\int_{-\pi}^{\pi}(f(x))^2\,dx - a_0\int_{-\pi}^{\pi}f(x)\,dx
- 2\sum_{n=1}^{\infty}\left(a_n\int_{-\pi}^{\pi}f(x)\cos(nx)\,dx + b_n\int_{-\pi}^{\pi}f(x)\sin(nx)\,dx\right)\\
&\quad + \frac{\pi a_0^2}{2} + a_0\sum_{n=1}^{\infty}\int_{-\pi}^{\pi}\left(a_n\cos(nx)+b_n\sin(nx)\right)dx\\
&\quad + \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\int_{-\pi}^{\pi}\left(a_n\cos(nx)+b_n\sin(nx)\right)\left(a_m\cos(mx)+b_m\sin(mx)\right)dx = 0
\end{aligned}
\]
Most of the terms vanish because the eigenfunctions are orthogonal.
\[
\int_{-\pi}^{\pi}(f(x))^2\,dx - a_0\int_{-\pi}^{\pi}f(x)\,dx
- 2\sum_{n=1}^{\infty}\left(a_n\int_{-\pi}^{\pi}f(x)\cos(nx)\,dx + b_n\int_{-\pi}^{\pi}f(x)\sin(nx)\,dx\right)
+ \frac{\pi a_0^2}{2} + \sum_{n=1}^{\infty}\int_{-\pi}^{\pi}\left(a_n^2\cos^2(nx)+b_n^2\sin^2(nx)\right)dx = 0
\]
We use the definition of the Fourier coefficients to evaluate the integrals in the last sum.
\[ \int_{-\pi}^{\pi}(f(x))^2\,dx - \pi a_0^2 - 2\pi\sum_{n=1}^{\infty}\left(a_n^2+b_n^2\right)
+ \frac{\pi a_0^2}{2} + \pi\sum_{n=1}^{\infty}\left(a_n^2+b_n^2\right) = 0 \]
\[ \frac{a_0^2}{2} + \sum_{n=1}^{\infty}\left(a_n^2+b_n^2\right) = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)^2\,dx \]
2. We determine the Fourier coefficients for $f(x) = x$. Since $f(x)$ is odd, all of the $a_n$ are zero.
\[
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}x\sin(nx)\,dx
= \frac{1}{\pi}\left(\left[-\frac{1}{n}x\cos(nx)\right]_{-\pi}^{\pi} + \int_{-\pi}^{\pi}\frac{1}{n}\cos(nx)\,dx\right)
= \frac{2(-1)^{n+1}}{n}
\]
The Fourier series is
\[ x = \sum_{n=1}^{\infty}\frac{2(-1)^{n+1}}{n}\sin(nx) \quad\text{for } x \in (-\pi \ldots \pi). \]
We apply Parseval's theorem for this series to find the value of $\sum_{n=1}^{\infty}\frac{1}{n^2}$.
\[ \sum_{n=1}^{\infty}\frac{4}{n^2} = \frac{1}{\pi}\int_{-\pi}^{\pi}x^2\,dx = \frac{2\pi^2}{3} \]
\[ \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6} \]
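As a sanity check (ours, not part of the text), the two sides of Parseval's equation for this series can be compared numerically, along with the partial sums of $\sum 1/n^2$ against $\pi^2/6$.

```python
import numpy as np
from scipy.integrate import quad

# Fourier sine coefficients of f(x) = x on (-pi, pi): b_n = 2*(-1)^(n+1)/n
n = np.arange(1, 200001)
bn = 2.0 * (-1.0) ** (n + 1) / n

parseval_lhs = np.sum(bn**2)                                   # sum of b_n^2
parseval_rhs = quad(lambda x: x**2, -np.pi, np.pi)[0] / np.pi  # (1/pi) * integral of x^2
basel = np.sum(1.0 / n**2)                                     # partial sum of 1/n^2
```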
3. Consider $f(x) = x^2$. Since the function is even, there are no sine terms in the Fourier series. The coefficients
in the cosine series are
\[ a_0 = \frac{2}{\pi}\int_0^{\pi}x^2\,dx = \frac{2\pi^2}{3}, \qquad
a_n = \frac{2}{\pi}\int_0^{\pi}x^2\cos(nx)\,dx = \frac{4(-1)^n}{n^2}. \]
Thus the Fourier series is
\[ x^2 = \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos(nx) \quad\text{for } x \in (-\pi \ldots \pi). \]
We apply Parseval's theorem for this series to find the value of $\sum_{n=1}^{\infty}\frac{1}{n^4}$.
\[ \frac{2\pi^4}{9} + 16\sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{1}{\pi}\int_{-\pi}^{\pi}x^4\,dx = \frac{2\pi^4}{5} \]
\[ \sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{\pi^4}{90} \]
Now we integrate the series for $f(x) = x^2$.
\[ \int_0^x\left(\xi^2 - \frac{\pi^2}{3}\right)d\xi = 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\int_0^x\cos(n\xi)\,d\xi \]
\[ \frac{x^3}{3} - \frac{\pi^2}{3}x = 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^3}\sin(nx) \]
We apply Parseval's theorem for this series to find the value of $\sum_{n=1}^{\infty}\frac{1}{n^6}$.
\[ 16\sum_{n=1}^{\infty}\frac{1}{n^6} = \frac{1}{\pi}\int_{-\pi}^{\pi}\left(\frac{x^3}{3}-\frac{\pi^2}{3}x\right)^2 dx = \frac{16\pi^6}{945} \]
\[ \sum_{n=1}^{\infty}\frac{1}{n^6} = \frac{\pi^6}{945} \]
Solution 30.2
1. We differentiate the partial sum of the Fourier series and evaluate the sum.
\[ S_N = \sum_{n=1}^{N}\frac{2(-1)^{n+1}}{n}\sin(nx) \]
\[ S_N' = 2\sum_{n=1}^{N}(-1)^{n+1}\cos(nx)
= 2\,\Re\left(\sum_{n=1}^{N}(-1)^{n+1}\,\mathrm{e}^{inx}\right)
= 2\,\Re\left(\frac{1-(-1)^{N+2}\,\mathrm{e}^{i(N+1)x}}{1+\mathrm{e}^{ix}}\right) \]
\[ S_N' = \Re\left(\frac{1 + \mathrm{e}^{-ix} - (-1)^N\,\mathrm{e}^{i(N+1)x} - (-1)^N\,\mathrm{e}^{iNx}}{1+\cos(x)}\right)
= 1 - (-1)^N\,\frac{\cos((N+1)x)+\cos(Nx)}{1+\cos(x)} \]
\[ S_N' = 1 - (-1)^N\,\frac{\cos\left(\left(N+\frac{1}{2}\right)x\right)\cos\left(\frac{x}{2}\right)}{\cos^2\left(\frac{x}{2}\right)} \]
\[ \frac{dS_N}{dx} = 1 - (-1)^N\,\frac{\cos\left(\left(N+\frac{1}{2}\right)x\right)}{\cos\left(\frac{x}{2}\right)} \]
2. We integrate $S_N'$.
\[ S_N(x) - S_N(0) = x - \int_0^x(-1)^N\,\frac{\cos\left(\left(N+\frac{1}{2}\right)\xi\right)}{\cos\left(\frac{\xi}{2}\right)}\,d\xi \]
\[ x - S_N = \int_0^x\frac{\sin\left(\left(N+\frac{1}{2}\right)(\xi-\pi)\right)}{\sin\left(\frac{\xi-\pi}{2}\right)}\,d\xi \]
3. We find the extrema of the overshoot $E = x - S_N$ with the first derivative test.
\[ E' = \frac{\sin\left(\left(N+\frac{1}{2}\right)(x-\pi)\right)}{\sin\left(\frac{x-\pi}{2}\right)} = 0 \]
We look for extrema in the range $(-\pi \ldots \pi)$.
\[ \left(N+\frac{1}{2}\right)(x-\pi) = -n\pi, \qquad x = \pi\left(1 - \frac{n}{N+1/2}\right), \quad n \in [1 \ldots 2N] \]
The closest of these extrema to $x = \pi$ is
\[ x = \pi\left(1 - \frac{1}{N+1/2}\right). \]
Let $E_0$ be the overshoot at this point. We approximate $E_0$ for large $N$.
\[ E_0 = \int_0^{\pi(1-1/(N+1/2))}\frac{\sin\left(\left(N+\frac{1}{2}\right)(\xi-\pi)\right)}{\sin\left(\frac{\xi-\pi}{2}\right)}\,d\xi \]
We shift the limits of integration.
\[ E_0 = \int_{\pi/(N+1/2)}^{\pi}\frac{\sin\left(\left(N+\frac{1}{2}\right)\xi\right)}{\sin\left(\frac{\xi}{2}\right)}\,d\xi \]
We add and subtract an integral over $[0 \ldots \pi/(N+1/2)]$.
\[ E_0 = \int_0^{\pi}\frac{\sin\left(\left(N+\frac{1}{2}\right)\xi\right)}{\sin\left(\frac{\xi}{2}\right)}\,d\xi
- \int_0^{\pi/(N+1/2)}\frac{\sin\left(\left(N+\frac{1}{2}\right)\xi\right)}{\sin\left(\frac{\xi}{2}\right)}\,d\xi \]
We can evaluate the first integral with contour integration on the unit circle $C$.
\[ \int_0^{\pi}\frac{\sin\left(\left(N+\frac{1}{2}\right)x\right)}{\sin\left(\frac{x}{2}\right)}\,dx
= \int_0^{\pi}\frac{\sin((2N+1)x)}{\sin(x)}\,dx
= \frac{1}{2}\int_{-\pi}^{\pi}\frac{\sin((2N+1)x)}{\sin(x)}\,dx \]
\[ = \frac{1}{2}\,\Im\left(\oint_C\frac{z^{2N+1}}{(z-1/z)/(i2)}\,\frac{dz}{iz}\right)
= \Im\left(\oint_C\frac{z^{2N+1}}{z^2-1}\,dz\right) \]
\[ = \Im\left(i\pi\,\mathrm{Res}\left(\frac{z^{2N+1}}{(z+1)(z-1)}, -1\right)
+ i\pi\,\mathrm{Res}\left(\frac{z^{2N+1}}{(z+1)(z-1)}, 1\right)\right) \]
\[ = \pi\left(\frac{1^{2N+1}}{2} + \frac{(-1)^{2N+1}}{-2}\right) = \pi \]
We approximate the second integral.
\[ \int_0^{\pi/(N+1/2)}\frac{\sin\left(\left(N+\frac{1}{2}\right)x\right)}{\sin\left(\frac{x}{2}\right)}\,dx
= \frac{2}{2N+1}\int_0^{\pi}\frac{\sin(x)}{\sin\left(\frac{x}{2N+1}\right)}\,dx \approx 2\int_0^{\pi}\frac{\sin(x)}{x}\,dx \]
\[ = 2\int_0^{\pi}\frac{1}{x}\sum_{n=0}^{\infty}\frac{(-1)^nx^{2n+1}}{(2n+1)!}\,dx
= 2\sum_{n=0}^{\infty}\int_0^{\pi}\frac{(-1)^nx^{2n}}{(2n+1)!}\,dx
= 2\sum_{n=0}^{\infty}\frac{(-1)^n\pi^{2n+1}}{(2n+1)(2n+1)!} \approx 3.70387 \]
In the limit as $N \to \infty$, the overshoot is
\[ |\pi - 3.70387| \approx 0.56. \]
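The limiting overshoot of about $0.56$ can be corroborated directly from the partial sums (our check, not part of the text): the most negative value of $x - S_N(x)$ just left of $x = \pi$ stays near $\pi - 2\operatorname{Si}(\pi) \approx -0.562$ for every $N$.

```python
import numpy as np

def SN(x, N):
    # Partial sum of the sawtooth series: sum of 2*(-1)^(n+1)/n * sin(n x)
    n = np.arange(1, N + 1)
    return np.sum(2 * (-1.0) ** (n + 1) / n * np.sin(np.outer(x, n)), axis=1)

x = np.linspace(2.5, np.pi - 1e-6, 4000)
m = [np.min(x - SN(x, N)) for N in (100, 200, 400)]
# Each minimum sits near pi - 2*Si(pi) ~ -0.562, independent of N.
```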
Solution 30.3
1. The eigenfunctions of the self-adjoint problem
\[ -y'' = \lambda y, \qquad y(0) = y(1) = 0, \]
are
\[ \phi_n = \sin(n\pi x), \qquad n \in \mathbb{Z}^+. \]
We find the series expansion of the inhomogeneity $f(x) = 1$.
\[ 1 = \sum_{n=1}^{\infty}f_n\sin(n\pi x), \qquad
f_n = 2\int_0^1\sin(n\pi x)\,dx = 2\left[-\frac{\cos(n\pi x)}{n\pi}\right]_0^1 = \frac{2}{n\pi}\left(1-(-1)^n\right)
= \begin{cases} \dfrac{4}{n\pi} & \text{for odd } n,\\[4pt] 0 & \text{for even } n.\end{cases} \]
We expand the solution in a series of the eigenfunctions.
\[ y = \sum_{n=1}^{\infty}a_n\sin(n\pi x) \]
We substitute the series into the differential equation.
\[ y'' + 2y = 1 \]
\[ -\sum_{n=1}^{\infty}a_n\pi^2n^2\sin(n\pi x) + 2\sum_{n=1}^{\infty}a_n\sin(n\pi x)
= \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n\pi}\sin(n\pi x) \]
\[ a_n = \begin{cases} \dfrac{4}{n\pi(2-\pi^2n^2)} & \text{for odd } n,\\[4pt] 0 & \text{for even } n\end{cases}
\qquad\qquad
y = \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{4}{n\pi(2-\pi^2n^2)}\sin(n\pi x) \]
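A numerical cross-check (ours, not from the text): summing the series with the coefficients above reproduces the closed-form solution $y = \frac12\bigl(1-\cos\sqrt2\,x + \frac{\cos\sqrt2-1}{\sin\sqrt2}\sin\sqrt2\,x\bigr)$ obtained by solving the boundary value problem directly.

```python
import numpy as np

x = np.linspace(0, 1, 101)
n = np.arange(1, 400, 2)                          # odd n
coef = 4 / (n * np.pi * (2 - (n * np.pi) ** 2))
y_series = (np.sin(np.pi * np.outer(x, n)) * coef).sum(axis=1)

r2 = np.sqrt(2.0)
y_exact = 0.5 * (1 - np.cos(r2 * x)
                 + (np.cos(r2) - 1) / np.sin(r2) * np.sin(r2 * x))
err = np.max(np.abs(y_series - y_exact))          # tiny: the series converges fast
```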
2. Now we solve the boundary value problem directly.
\[ y'' + 2y = 1, \qquad y(0) = y(1) = 0 \]
The general solution of the differential equation is
\[ y = c_1\cos\left(\sqrt{2}\,x\right) + c_2\sin\left(\sqrt{2}\,x\right) + \frac{1}{2}. \]
We apply the boundary conditions to find the solution.
\[ c_1 + \frac{1}{2} = 0, \qquad c_1\cos\left(\sqrt{2}\right) + c_2\sin\left(\sqrt{2}\right) + \frac{1}{2} = 0 \]
\[ c_1 = -\frac{1}{2}, \qquad c_2 = \frac{\cos\left(\sqrt{2}\right)-1}{2\sin\left(\sqrt{2}\right)} \]
\[ y = \frac{1}{2}\left(1 - \cos\left(\sqrt{2}\,x\right) + \frac{\cos\left(\sqrt{2}\right)-1}{\sin\left(\sqrt{2}\right)}\sin\left(\sqrt{2}\,x\right)\right) \]
We find the Fourier sine series of the solution.
\[ y = \sum_{n=1}^{\infty}a_n\sin(n\pi x), \qquad a_n = 2\int_0^1 y(x)\sin(n\pi x)\,dx \]
\[ a_n = \int_0^1\left(1 - \cos\left(\sqrt{2}\,x\right) + \frac{\cos\left(\sqrt{2}\right)-1}{\sin\left(\sqrt{2}\right)}\sin\left(\sqrt{2}\,x\right)\right)\sin(n\pi x)\,dx \]
\[ a_n = \frac{2\left(1-(-1)^n\right)}{n\pi(2-\pi^2n^2)}
= \begin{cases} \dfrac{4}{n\pi(2-\pi^2n^2)} & \text{for odd } n,\\[4pt] 0 & \text{for even } n\end{cases} \]
We obtain the same series as in the first part.
Solution 30.4
1. The eigenfunctions of the self-adjoint problem
\[ -y'' = \lambda y, \qquad y'(0) = y'(\pi) = 0, \]
are
\[ \phi_0 = \frac{1}{2}, \qquad \phi_n = \cos(nx), \quad n \in \mathbb{Z}^+. \]
We find the series expansion of the inhomogeneity $f(x) = \sin(x)$.
\[ f(x) = \frac{f_0}{2} + \sum_{n=1}^{\infty}f_n\cos(nx) \]
\[ f_0 = \frac{2}{\pi}\int_0^{\pi}\sin(x)\,dx = \frac{4}{\pi} \]
\[ f_n = \frac{2}{\pi}\int_0^{\pi}\sin(x)\cos(nx)\,dx = \frac{2\left(1+(-1)^n\right)}{\pi(1-n^2)}
= \begin{cases} \dfrac{4}{\pi(1-n^2)} & \text{for even } n,\\[4pt] 0 & \text{for odd } n\end{cases} \]
We expand the solution in a series of the eigenfunctions.
\[ y = \frac{a_0}{2} + \sum_{n=1}^{\infty}a_n\cos(nx) \]
We substitute the series into the differential equation.
\[ y'' + 2y = \sin(x) \]
\[ -\sum_{n=1}^{\infty}a_nn^2\cos(nx) + a_0 + 2\sum_{n=1}^{\infty}a_n\cos(nx)
= \frac{2}{\pi} + \sum_{\substack{n=2\\\text{even }n}}^{\infty}\frac{4}{\pi(1-n^2)}\cos(nx) \]
\[ y = \frac{1}{\pi} + \sum_{\substack{n=2\\\text{even }n}}^{\infty}\frac{4}{\pi(1-n^2)(2-n^2)}\cos(nx) \]
2. We expand the solution in a series of the eigenfunctions.
\[ y = \frac{a_0}{2} + \sum_{n=1}^{\infty}a_n\cos(nx) \]
We substitute the series into the differential equation.
\[ y'' + 4y = \sin(x) \]
\[ -\sum_{n=1}^{\infty}a_nn^2\cos(nx) + 2a_0 + 4\sum_{n=1}^{\infty}a_n\cos(nx)
= \frac{2}{\pi} + \sum_{\substack{n=2\\\text{even }n}}^{\infty}\frac{4}{\pi(1-n^2)}\cos(nx) \]
It is not possible to solve for the $a_2$ coefficient. That equation is
\[ (0)a_2 = -\frac{4}{3\pi}. \]
This problem is to be expected, as this boundary value problem does not have a solution. The solution of
the differential equation is
\[ y = c_1\cos(2x) + c_2\sin(2x) + \frac{1}{3}\sin(x). \]
The boundary conditions give us an inconsistent set of constraints.
\[ y'(0) = 0, \qquad y'(\pi) = 0 \]
\[ 2c_2 + \frac{1}{3} = 0, \qquad 2c_2 - \frac{1}{3} = 0 \]
Thus the problem has no solution.
Solution 30.5
Cosine Series. The coefficients in the cosine series are
\[ a_0 = \frac{2}{\pi}\int_0^{\pi}x^2\,dx = \frac{2\pi^2}{3}, \qquad
a_n = \frac{2}{\pi}\int_0^{\pi}x^2\cos(nx)\,dx = \frac{4(-1)^n}{n^2}. \]
Thus the Fourier cosine series is
\[ f(x) = \frac{\pi^2}{3} + \sum_{n=1}^{\infty}\frac{4(-1)^n}{n^2}\cos(nx). \]
In Figure 30.10 the even periodic extension of $f(x)$ is plotted in a dashed line and the sum of the first five terms
in the Fourier series is plotted in a solid line. Since the even periodic extension is continuous, the cosine series is
differentiable.
Figure 30.10: The Fourier Cosine and Sine Series of $f(x) = x^2$.
Sine Series. The coefficients in the sine series are
\[ b_n = \frac{2}{\pi}\int_0^{\pi}x^2\sin(nx)\,dx
= -\frac{2\pi(-1)^n}{n} - \frac{4\left(1-(-1)^n\right)}{\pi n^3}
= \begin{cases} -\dfrac{2\pi(-1)^n}{n} & \text{for even } n,\\[6pt] -\dfrac{2\pi(-1)^n}{n} - \dfrac{8}{\pi n^3} & \text{for odd } n.\end{cases} \]
Thus the Fourier sine series is
\[ f(x) \sim -\sum_{n=1}^{\infty}\left(\frac{2\pi(-1)^n}{n} + \frac{4\left(1-(-1)^n\right)}{\pi n^3}\right)\sin(nx). \]
In Figure 30.10 the odd periodic extension of $f(x)$ and the sum of the first five terms in the sine series are plotted.
Since the odd periodic extension of $f(x)$ is not continuous, the series is not differentiable.
Solution 30.6
We could find the expansion by integrating to find the Fourier coefficients, but it is easier to expand $\cos^n(x)$
directly.
\[ \cos^n(x) = \left(\frac{1}{2}\left(\mathrm{e}^{ix}+\mathrm{e}^{-ix}\right)\right)^n
= \frac{1}{2^n}\left(\binom{n}{0}\mathrm{e}^{inx} + \binom{n}{1}\mathrm{e}^{i(n-2)x} + \cdots
+ \binom{n}{n-1}\mathrm{e}^{-i(n-2)x} + \binom{n}{n}\mathrm{e}^{-inx}\right) \]
If $n$ is odd,
\[
\begin{aligned}
\cos^n(x) &= \frac{1}{2^n}\left(\binom{n}{0}\left(\mathrm{e}^{inx}+\mathrm{e}^{-inx}\right)
+ \binom{n}{1}\left(\mathrm{e}^{i(n-2)x}+\mathrm{e}^{-i(n-2)x}\right) + \cdots
+ \binom{n}{(n-1)/2}\left(\mathrm{e}^{ix}+\mathrm{e}^{-ix}\right)\right)\\
&= \frac{1}{2^n}\left(\binom{n}{0}2\cos(nx) + \binom{n}{1}2\cos((n-2)x) + \cdots + \binom{n}{(n-1)/2}2\cos(x)\right)\\
&= \frac{1}{2^{n-1}}\sum_{m=0}^{(n-1)/2}\binom{n}{m}\cos((n-2m)x)
= \frac{1}{2^{n-1}}\sum_{\substack{k=1\\\text{odd }k}}^{n}\binom{n}{(n-k)/2}\cos(kx).
\end{aligned}
\]
If $n$ is even,
\[
\begin{aligned}
\cos^n(x) &= \frac{1}{2^n}\left(\binom{n}{0}\left(\mathrm{e}^{inx}+\mathrm{e}^{-inx}\right)
+ \binom{n}{1}\left(\mathrm{e}^{i(n-2)x}+\mathrm{e}^{-i(n-2)x}\right) + \cdots
+ \binom{n}{n/2-1}\left(\mathrm{e}^{i2x}+\mathrm{e}^{-i2x}\right) + \binom{n}{n/2}\right)\\
&= \frac{1}{2^n}\left(\binom{n}{0}2\cos(nx) + \binom{n}{1}2\cos((n-2)x) + \cdots
+ \binom{n}{n/2-1}2\cos(2x) + \binom{n}{n/2}\right)\\
&= \frac{1}{2^n}\binom{n}{n/2} + \frac{1}{2^{n-1}}\sum_{m=0}^{(n-2)/2}\binom{n}{m}\cos((n-2m)x)
= \frac{1}{2^n}\binom{n}{n/2} + \frac{1}{2^{n-1}}\sum_{\substack{k=2\\\text{even }k}}^{n}\binom{n}{(n-k)/2}\cos(kx).
\end{aligned}
\]
We may denote,
\[ \cos^n(x) = \frac{a_0}{2} + \sum_{k=1}^{n}a_k\cos(kx), \]
where
\[ a_k = \frac{1+(-1)^{n-k}}{2}\,\frac{1}{2^{n-1}}\binom{n}{(n-k)/2}. \]
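The closed form for $a_k$ can be verified numerically; this sketch (ours, not from the text) rebuilds $\cos^5(x)$ from the coefficients using Python's exact binomials.

```python
import numpy as np
from math import comb

def cosn_coeffs(n):
    # a_k = [1 + (-1)^(n-k)]/2 * C(n, (n-k)/2) / 2^(n-1); the parity factor
    # zeroes out the terms where (n-k) is odd, so the floor division is safe.
    return [((1 + (-1) ** (n - k)) // 2) * comb(n, (n - k) // 2) / 2 ** (n - 1)
            for k in range(n + 1)]

n = 5
a = cosn_coeffs(n)
x = np.linspace(0, np.pi, 201)
series = a[0] / 2 + sum(a[k] * np.cos(k * x) for k in range(1, n + 1))
err = np.max(np.abs(series - np.cos(x) ** n))
```

For $n = 5$ this reproduces $\cos^5 x = (10\cos x + 5\cos 3x + \cos 5x)/16$.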
Solution 30.7
We expand $f(x)$ in a cosine series. The coefficients in the cosine series are
\[ a_0 = \frac{2}{\pi}\int_0^{\pi}x^2\,dx = \frac{2\pi^2}{3}, \qquad
a_n = \frac{2}{\pi}\int_0^{\pi}x^2\cos(nx)\,dx = \frac{4(-1)^n}{n^2}. \]
Thus the Fourier cosine series is
\[ f(x) = \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos(nx). \]
The Fourier series converges to the even periodic extension of
\[ f(x) = x^2 \quad\text{for } 0 < x < \pi, \]
which is
\[ \hat f(x) = \left(x - 2\pi\left\lfloor\frac{x+\pi}{2\pi}\right\rfloor\right)^2. \]
($\lfloor\cdot\rfloor$ denotes the floor or greatest integer function.) This periodic extension is a continuous function. Since $x^2$ is
an even function, we have
\[ \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos nx = x^2 \quad\text{for } -\pi \le x \le \pi. \]
We substitute $x = \pi$ into the Fourier series.
\[ \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos(n\pi) = \pi^2 \]
\[ \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6} \]
We substitute $x = 0$ into the Fourier series.
\[ \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2} = 0 \]
\[ \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^2} = \frac{\pi^2}{12} \]
Solution 30.8
1. The Fourier sine coefficients are
\[ a_n = \frac{2}{\pi}\int_0^{\pi}f(x)\sin(nx)\,dx
= \frac{2}{\pi}\int_0^{\pi}\left(\cos x - 1 + \frac{2x}{\pi}\right)\sin(nx)\,dx \]
\[ a_n = \frac{2\left(1+(-1)^n\right)}{\pi(n^3-n)}, \quad n \ge 2 \qquad (\text{and } a_1 = 0). \]
2. From our work in the previous part, we see that the Fourier coefficients decay as $1/n^3$. The Fourier sine
series converges to the odd periodic extension of $f(x)$. We can determine the rate of decay of the Fourier
coefficients from the smoothness of $\hat f(x)$. For $-\pi < x < \pi$, the odd periodic extension of $f(x)$ is defined
\[ \hat f(x) = \begin{cases}
f_+(x) = \cos(x) - 1 + \dfrac{2x}{\pi}, & 0 < x < \pi,\\[6pt]
f_-(x) = -f_+(-x) = -\cos(x) + 1 + \dfrac{2x}{\pi}, & -\pi < x < 0.
\end{cases} \]
Since
\[ f_+(0) = f_-(0) = 0 \quad\text{and}\quad f_+(\pi) = f_-(-\pi) = 0, \]
$\hat f(x)$ is continuous, $C^0$. Since
\[ f_+'(0) = f_-'(0) = \frac{2}{\pi} \quad\text{and}\quad f_+'(\pi) = f_-'(-\pi) = \frac{2}{\pi}, \]
$\hat f(x)$ is continuously differentiable, $C^1$. However, since
\[ f_+''(0) = -1, \qquad f_-''(0) = 1, \]
$\hat f(x)$ is not $C^2$. Since $\hat f(x)$ is $C^1$ we know that the Fourier coefficients decay as $1/n^3$.
Solution 30.9
Cosine Series. The even periodic extension of $f(x)$ is a $C^0$, continuous, function (see Figure 30.11). Thus the
coefficients in the cosine series will decay as $1/n^2$. The Fourier cosine coefficients are
\[ a_0 = \frac{2}{\pi}\int_0^{\pi}x\sin x\,dx = 2, \qquad
a_1 = \frac{2}{\pi}\int_0^{\pi}x\sin x\cos x\,dx = -\frac{1}{2}, \]
\[ a_n = \frac{2}{\pi}\int_0^{\pi}x\sin x\cos(nx)\,dx = \frac{2(-1)^{n+1}}{n^2-1}, \quad\text{for } n \ge 2. \]
The Fourier cosine series is
\[ \hat f(x) = 1 - \frac{1}{2}\cos x - 2\sum_{n=2}^{\infty}\frac{(-1)^n}{n^2-1}\cos(nx). \]
Figure 30.11: The even periodic extension of $x\sin x$.
Sine Series. The odd periodic extension of $f(x)$ is a $C^1$, continuously differentiable, function (see Figure 30.12). Thus the coefficients in the sine series will decay as $1/n^3$. The Fourier sine coefficients are
\[ b_1 = \frac{2}{\pi}\int_0^{\pi}x\sin^2 x\,dx = \frac{\pi}{2}, \qquad
b_n = \frac{2}{\pi}\int_0^{\pi}x\sin x\sin(nx)\,dx = -\frac{4\left(1+(-1)^n\right)n}{\pi(n^2-1)^2}, \quad\text{for } n \ge 2. \]
The Fourier sine series is
\[ \hat f(x) = \frac{\pi}{2}\sin x - \frac{4}{\pi}\sum_{n=2}^{\infty}\frac{\left(1+(-1)^n\right)n}{(n^2-1)^2}\sin(nx). \]
Figure 30.12: The odd periodic extension of $x\sin x$.
Solution 30.10
If $\nu = n$ is an integer, then the Fourier cosine series is $\cos(|n|x)$.
We note that for $\nu \ne n$, the even periodic extension of $\cos(\nu x)$ is $C^0$ so that the series converges to $\cos(\nu x)$
for $-\pi \le x \le \pi$ and the coefficients decay as $1/n^2$. If $\nu$ is not an integer, then the Fourier cosine coefficients are
\[ a_0 = \frac{2}{\pi}\int_0^{\pi}\cos(\nu x)\,dx = \frac{2}{\pi\nu}\sin(\pi\nu) \]
\[ a_n = \frac{2}{\pi}\int_0^{\pi}\cos(\nu x)\cos(nx)\,dx
= (-1)^n\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right)\frac{\sin(\pi\nu)}{\pi} \]
The Fourier cosine series is
\[ \cos(\nu x) = \frac{\sin(\pi\nu)}{\pi\nu}
+ \frac{\sin(\pi\nu)}{\pi}\sum_{n=1}^{\infty}(-1)^n\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right)\cos(nx). \]
For $\nu \ne n$ we substitute $x = 0$ into the Fourier cosine series.
\[ 1 = \frac{\sin(\pi\nu)}{\pi\nu}
+ \frac{\sin(\pi\nu)}{\pi}\sum_{n=1}^{\infty}(-1)^n\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right) \]
\[ \frac{\pi}{\sin(\pi\nu)} = \frac{1}{\nu} + \sum_{n=1}^{\infty}(-1)^n\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right) \]
For $\nu \ne n$ we substitute $x = \pi$ into the Fourier cosine series.
\[ \cos(\pi\nu) = \frac{\sin(\pi\nu)}{\pi\nu}
+ \frac{\sin(\pi\nu)}{\pi}\sum_{n=1}^{\infty}(-1)^n\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right)(-1)^n \]
\[ \pi\cot(\pi\nu) = \frac{1}{\nu} + \sum_{n=1}^{\infty}\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right) \]
We write the last formula as
\[ \pi\cot(\pi\nu) - \frac{1}{\nu} = \sum_{n=1}^{\infty}\left(\frac{1}{\nu-n}+\frac{1}{\nu+n}\right). \]
We integrate from $\nu = 0$ to $\nu = \theta < 1$.
\[ \left[\log\left(\frac{\sin(\pi\nu)}{\pi\nu}\right)\right]_0^{\theta}
= \sum_{n=1}^{\infty}\left(\left[\log(n-\nu)\right]_0^{\theta} + \left[\log(n+\nu)\right]_0^{\theta}\right) \]
\[ \log\left(\frac{\sin(\pi\theta)}{\pi\theta}\right)
= \sum_{n=1}^{\infty}\left(\log\left(\frac{n-\theta}{n}\right)+\log\left(\frac{n+\theta}{n}\right)\right)
= \sum_{n=1}^{\infty}\log\left(1-\frac{\theta^2}{n^2}\right)
= \log\left(\prod_{n=1}^{\infty}\left(1-\frac{\theta^2}{n^2}\right)\right) \]
\[ \frac{\sin(\pi\theta)}{\pi\theta} = \prod_{n=1}^{\infty}\left(1-\frac{\theta^2}{n^2}\right) \]
Solution 30.11
1. We will consider the principal branch of the logarithm, $-\pi < \arg(\log z) \le \pi$. For $-\pi < x < \pi$, $\cos(x/2)$ is
positive so that $\log(\cos(x/2))$ is real-valued. At $x = \pm\pi$, $\log(\cos(x/2))$ is singular. However, the function is
integrable so it has a Fourier series which converges except at $x = (2k+1)\pi$, $k \in \mathbb{Z}$.
\[ \log\left(\cos\frac{x}{2}\right) = \log\left(\frac{\mathrm{e}^{ix/2}+\mathrm{e}^{-ix/2}}{2}\right)
= -\log 2 + \log\left(\mathrm{e}^{-ix/2}\left(1+\mathrm{e}^{ix}\right)\right)
= -\log 2 - i\frac{x}{2} + \log\left(1+\mathrm{e}^{ix}\right) \]
Since $|\mathrm{e}^{ix}| \le 1$ and $\mathrm{e}^{ix} \ne -1$ for $\Im(x) \ge 0$, $x \ne (2k+1)\pi$, we can expand the last term in a Taylor series
in that domain.
\[ = -\log 2 - i\frac{x}{2} - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\left(\mathrm{e}^{ix}\right)^n
= -\log 2 - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\cos nx
- i\left(\frac{x}{2} + \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\sin nx\right) \]
For $-\pi < x < \pi$, $\log(\cos(x/2))$ is real-valued. We equate the real parts of the equation on this domain to
obtain the Fourier series of $\log(\cos(x/2))$.
\[ \log\left(\cos\frac{x}{2}\right) = -\log 2 - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\cos(nx), \qquad -\pi < x < \pi. \]
The domain of convergence for this series is $\Im(x) = 0$, $x \ne (2k+1)\pi$. The Fourier series converges to the
periodic extension of the function.
\[ \log\left|\cos\frac{x}{2}\right| = -\log 2 - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\cos(nx), \qquad x \ne (2k+1)\pi,\ k \in \mathbb{Z} \]
2. Now we integrate the function from $0$ to $\pi$.
\[ \int_0^{\pi}\log\left(\cos\frac{x}{2}\right)dx
= \int_0^{\pi}\left(-\log 2 - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\cos(nx)\right)dx
= -\pi\log 2 - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\left[\frac{\sin(nx)}{n}\right]_0^{\pi} \]
\[ \int_0^{\pi}\log\left(\cos\frac{x}{2}\right)dx = -\pi\log 2 \]
3.
\[ \frac{1}{2}\log\left|\frac{\sin((x+\xi)/2)}{\sin((x-\xi)/2)}\right|
= \frac{1}{2}\log\left|\sin((x+\xi)/2)\right| - \frac{1}{2}\log\left|\sin((x-\xi)/2)\right| \]
Consider the function $\log|\sin(y/2)|$. Since $\sin(y/2) = \cos((y-\pi)/2)$, we can use the result of part (a) to obtain,
\[ \log\left|\sin\frac{y}{2}\right| = \log\left|\cos\frac{y-\pi}{2}\right|
= -\log 2 - \sum_{n=1}^{\infty}\frac{(-1)^n}{n}\cos(n(y-\pi))
= -\log 2 - \sum_{n=1}^{\infty}\frac{1}{n}\cos(ny), \quad\text{for } y \ne 2k\pi,\ k \in \mathbb{Z}. \]
We return to the original function:
\[ \frac{1}{2}\log\left|\frac{\sin((x+\xi)/2)}{\sin((x-\xi)/2)}\right|
= \frac{1}{2}\left(-\log 2 - \sum_{n=1}^{\infty}\frac{1}{n}\cos(n(x+\xi))
+ \log 2 + \sum_{n=1}^{\infty}\frac{1}{n}\cos(n(x-\xi))\right), \]
for $x \ne \pm\xi + 2k\pi$.
\[ \frac{1}{2}\log\left|\frac{\sin((x+\xi)/2)}{\sin((x-\xi)/2)}\right|
= \sum_{n=1}^{\infty}\frac{\sin(nx)\sin(n\xi)}{n}, \qquad x \ne \pm\xi + 2k\pi. \]
Solution 30.12
The eigenfunction problem associated with this problem is
\[ \phi'' + \sigma^2\phi = 0, \qquad \phi(a) = \phi(b) = 0, \]
which has the solutions,
\[ \sigma_n = \frac{n\pi}{b-a}, \qquad \phi_n = \sin\left(\frac{n\pi(x-a)}{b-a}\right), \quad n \in \mathbb{N}. \]
We expand the solution and the inhomogeneity in the eigenfunctions.
\[ y(x) = \sum_{n=1}^{\infty}y_n\sin\left(\frac{n\pi(x-a)}{b-a}\right) \]
\[ f(x) = \sum_{n=1}^{\infty}f_n\sin\left(\frac{n\pi(x-a)}{b-a}\right), \qquad
f_n = \frac{2}{b-a}\int_a^b f(x)\sin\left(\frac{n\pi(x-a)}{b-a}\right)dx \]
Since the solution $y(x)$ satisfies the same homogeneous boundary conditions as the eigenfunctions, we can differentiate the series. We substitute the series expansions into the differential equation.
\[ y'' + \lambda y = f(x) \]
\[ \sum_{n=1}^{\infty}y_n\left(-\sigma_n^2+\lambda\right)\sin\left(\sigma_n(x-a)\right)
= \sum_{n=1}^{\infty}f_n\sin\left(\sigma_n(x-a)\right), \qquad
y_n = \frac{f_n}{\lambda-\sigma_n^2} \]
Thus the solution of the problem has the series representation,
\[ y(x) = \sum_{n=1}^{\infty}\frac{f_n}{\lambda-\sigma_n^2}\sin\left(\frac{n\pi(x-a)}{b-a}\right). \]
Solution 30.13
The eigenfunction problem associated with this problem is
\[ \phi'' + \sigma^2\phi = 0, \qquad \phi(a) = \phi(b) = 0, \]
which has the solutions,
\[ \sigma_n = \frac{n\pi}{b-a}, \qquad \phi_n = \sin\left(\frac{n\pi(x-a)}{b-a}\right), \quad n \in \mathbb{N}. \]
We expand the solution and the inhomogeneity in the eigenfunctions.
\[ y(x) = \sum_{n=1}^{\infty}y_n\sin\left(\frac{n\pi(x-a)}{b-a}\right) \]
\[ f(x) = \sum_{n=1}^{\infty}f_n\sin\left(\frac{n\pi(x-a)}{b-a}\right), \qquad
f_n = \frac{2}{b-a}\int_a^b f(x)\sin\left(\frac{n\pi(x-a)}{b-a}\right)dx \]
Since the solution $y(x)$ does not satisfy the same homogeneous boundary conditions as the eigenfunctions, we cannot
differentiate the series. We multiply the differential equation by an eigenfunction and integrate from $a$ to $b$. We
use integration by parts to move derivatives from $y$ to the eigenfunction.
\[ y'' + \lambda y = f(x) \]
\[ \int_a^b y''(x)\sin\left(\sigma_m(x-a)\right)dx + \lambda\int_a^b y(x)\sin\left(\sigma_m(x-a)\right)dx
= \int_a^b f(x)\sin\left(\sigma_m(x-a)\right)dx \]
\[ \left[y'\sin\left(\sigma_m(x-a)\right)\right]_a^b - \int_a^b y'\sigma_m\cos\left(\sigma_m(x-a)\right)dx
+ \lambda\frac{b-a}{2}y_m = \frac{b-a}{2}f_m \]
\[ -\left[y\,\sigma_m\cos\left(\sigma_m(x-a)\right)\right]_a^b - \int_a^b y\,\sigma_m^2\sin\left(\sigma_m(x-a)\right)dx
+ \lambda\frac{b-a}{2}y_m = \frac{b-a}{2}f_m \]
\[ \sigma_m\left(A - (-1)^mB\right) + \left(\lambda - \sigma_m^2\right)\frac{b-a}{2}y_m = \frac{b-a}{2}f_m \]
\[ y_m = \frac{f_m + \frac{2\sigma_m}{b-a}\left((-1)^mB - A\right)}{\lambda - \sigma_m^2} \]
Thus the solution of the problem has the series representation,
\[ y(x) = \sum_{n=1}^{\infty}\frac{f_n + \frac{2\sigma_n}{b-a}\left((-1)^nB - A\right)}{\lambda - \sigma_n^2}
\sin\left(\frac{n\pi(x-a)}{b-a}\right). \]
Solution 30.14
1.
\[ A + iB = \frac{1}{1-z^2} = \sum_{n=0}^{\infty}z^{2n} = \sum_{n=0}^{\infty}r^{2n}\,\mathrm{e}^{i2nx}
= \sum_{n=0}^{\infty}r^{2n}\cos(2nx) + i\sum_{n=1}^{\infty}r^{2n}\sin(2nx) \]
\[ A = \sum_{n=0}^{\infty}r^{2n}\cos(2nx), \qquad B = \sum_{n=1}^{\infty}r^{2n}\sin(2nx) \]
\[ A + iB = \frac{1}{1-z^2} = \frac{1}{1-r^2\,\mathrm{e}^{2ix}}
= \frac{1}{1-r^2\cos(2x)-ir^2\sin(2x)}
= \frac{1-r^2\cos(2x)+ir^2\sin(2x)}{\left(1-r^2\cos(2x)\right)^2 + \left(r^2\sin(2x)\right)^2} \]
\[ A = \frac{1-r^2\cos(2x)}{1-2r^2\cos(2x)+r^4}, \qquad B = \frac{r^2\sin(2x)}{1-2r^2\cos(2x)+r^4} \]
2. We consider the principal branch of the logarithm.
\[ A + iB = \log(1+z) = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}z^n
= \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}r^n\,\mathrm{e}^{inx}
= \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}r^n\left(\cos(nx)+i\sin(nx)\right) \]
\[ A = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}r^n\cos(nx), \qquad
B = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}r^n\sin(nx) \]
\[ A + iB = \log(1+z) = \log\left(1+r\,\mathrm{e}^{ix}\right) = \log\left(1+r\cos x + ir\sin x\right) \]
\[ = \log|1+r\cos x+ir\sin x| + i\arg\left(1+r\cos x+ir\sin x\right)
= \log\sqrt{(1+r\cos x)^2+(r\sin x)^2} + i\arctan\left(1+r\cos x, r\sin x\right) \]
\[ A = \frac{1}{2}\log\left(1+2r\cos x+r^2\right), \qquad B = \arctan\left(1+r\cos x, r\sin x\right) \]
3.
\[ A_n + iB_n = \sum_{k=0}^{n}z^k = \frac{1-z^{n+1}}{1-z}
= \frac{1-r^{n+1}\,\mathrm{e}^{i(n+1)x}}{1-r\,\mathrm{e}^{ix}}
= \frac{1 - r\,\mathrm{e}^{-ix} - r^{n+1}\,\mathrm{e}^{i(n+1)x} + r^{n+2}\,\mathrm{e}^{inx}}{1-2r\cos x+r^2} \]
\[ A_n = \frac{1 - r\cos x - r^{n+1}\cos((n+1)x) + r^{n+2}\cos(nx)}{1-2r\cos x+r^2} \]
\[ B_n = \frac{r\sin x - r^{n+1}\sin((n+1)x) + r^{n+2}\sin(nx)}{1-2r\cos x+r^2} \]
\[ A_n + iB_n = \sum_{k=0}^{n}z^k = \sum_{k=0}^{n}r^k\,\mathrm{e}^{ikx}, \qquad
A_n = \sum_{k=0}^{n}r^k\cos(kx), \quad B_n = \sum_{k=1}^{n}r^k\sin(kx) \]
Solution 30.15
1.
\[ \int_0^{\pi}1\cdot\sin x\,dx = \left[-\cos x\right]_0^{\pi} = 2 \]
Thus the system is not orthogonal on the interval $[0,\pi]$. Consider the interval $[a, a+\pi]$.
\[ \int_a^{a+\pi}1\cdot\sin x\,dx = \left[-\cos x\right]_a^{a+\pi} = 2\cos a, \qquad
\int_a^{a+\pi}1\cdot\cos x\,dx = \left[\sin x\right]_a^{a+\pi} = -2\sin a \]
Since there is no value of $a$ for which both $\cos a$ and $\sin a$ vanish, the system is not orthogonal for any
interval of length $\pi$.
2. First note that
\[ \int_0^{\pi}\cos(nx)\,dx = 0 \quad\text{for } n \in \mathbb{N}. \]
If $n \ne m$, $n \ge 1$ and $m \ge 0$ then
\[ \int_0^{\pi}\cos(nx)\cos(mx)\,dx = \frac{1}{2}\int_0^{\pi}\left(\cos((n-m)x)+\cos((n+m)x)\right)dx = 0 \]
Thus the set $\{1, \cos x, \cos 2x, \ldots\}$ is orthogonal on $[0,\pi]$. Since
\[ \int_0^{\pi}dx = \pi, \qquad \int_0^{\pi}\cos^2(nx)\,dx = \frac{\pi}{2}, \]
the set
\[ \left\{\frac{1}{\sqrt{\pi}}, \sqrt{\frac{2}{\pi}}\cos x, \sqrt{\frac{2}{\pi}}\cos 2x, \ldots\right\} \]
is orthonormal on $[0,\pi]$.
If $n \ne m$, $n \ge 1$ and $m \ge 1$ then
\[ \int_0^{\pi}\sin(nx)\sin(mx)\,dx = \frac{1}{2}\int_0^{\pi}\left(\cos((n-m)x)-\cos((n+m)x)\right)dx = 0 \]
Thus the set $\{\sin x, \sin 2x, \ldots\}$ is orthogonal on $[0,\pi]$. Since
\[ \int_0^{\pi}\sin^2(nx)\,dx = \frac{\pi}{2}, \]
the set
\[ \left\{\sqrt{\frac{2}{\pi}}\sin x, \sqrt{\frac{2}{\pi}}\sin 2x, \ldots\right\} \]
is orthonormal on $[0,\pi]$.
Solution 30.16
Since the periodic extension of $|x|$ in $[-\pi,\pi]$ is an even function its Fourier series is a cosine series. Because of the
anti-symmetry about $x = \pi/2$ we see that except for the constant term, there will only be odd cosine terms. Since
the periodic extension is a continuous function, but has a discontinuous first derivative, the Fourier coefficients
will decay as $1/n^2$.
\[ |x| = \sum_{n=0}^{\infty}a_n\cos(nx), \quad\text{for } x \in [-\pi,\pi] \]
\[ a_0 = \frac{1}{\pi}\int_0^{\pi}x\,dx = \frac{1}{\pi}\left[\frac{x^2}{2}\right]_0^{\pi} = \frac{\pi}{2} \]
\[ a_n = \frac{2}{\pi}\int_0^{\pi}x\cos(nx)\,dx
= \frac{2}{\pi}\left(\left[x\frac{\sin(nx)}{n}\right]_0^{\pi} - \int_0^{\pi}\frac{\sin(nx)}{n}\,dx\right)
= \frac{2}{\pi}\left[\frac{\cos(nx)}{n^2}\right]_0^{\pi}
= \frac{2}{\pi n^2}\left(\cos(n\pi)-1\right)
= -\frac{2\left(1-(-1)^n\right)}{\pi n^2} \]
\[ |x| = \frac{\pi}{2} - \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{1}{n^2}\cos(nx)
\quad\text{for } x \in [-\pi,\pi] \]
Define $R_N(x) = f(x) - S_N(x)$. We seek an upper bound on $|R_N(x)|$.
\[ |R_N(x)| = \left|\frac{4}{\pi}\sum_{\substack{n=N+1\\\text{odd }n}}^{\infty}\frac{1}{n^2}\cos(nx)\right|
\le \frac{4}{\pi}\sum_{\substack{n=N+1\\\text{odd }n}}^{\infty}\frac{1}{n^2}
= \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{1}{n^2}
- \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{N}\frac{1}{n^2} \]
Since
\[ \sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{8}, \]
we can bound the error with,
\[ |R_N(x)| \le \frac{\pi}{2} - \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{N}\frac{1}{n^2}. \]
$N = 7$ is the smallest number for which our error bound is less than $10^{-1}$. $N \ge 7$ is sufficient to make the error
less than $0.1$.
\[ |R_7(x)| \le \frac{\pi}{2} - \frac{4}{\pi}\left(1 + \frac{1}{9} + \frac{1}{25} + \frac{1}{49}\right) \approx 0.079 \]
$N \ge 7$ is also necessary because
\[ |R_N(0)| = \frac{4}{\pi}\sum_{\substack{n=N+1\\\text{odd }n}}^{\infty}\frac{1}{n^2}. \]
Solution 30.17
1.
\[ 1 \sim \sum_{n=1}^{\infty}a_n\sin(nx), \qquad 0 \le x \le \pi \]
Since the odd periodic extension of the function is discontinuous, the Fourier coefficients will decay as $1/n$.
Because of the symmetry about $x = \pi/2$, there will be only odd sine terms.
\[ a_n = \frac{2}{\pi}\int_0^{\pi}1\cdot\sin(nx)\,dx
= \frac{2}{n\pi}\left(-\cos(n\pi)+\cos(0)\right)
= \frac{2}{n\pi}\left(1-(-1)^n\right) \]
\[ 1 \sim \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\sin(nx)}{n} \]
2. It's always OK to integrate a Fourier series term by term. We integrate the series in part (a).
\[ \int_a^x 1\,d\xi \sim \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\int_a^x\frac{\sin(n\xi)}{n}\,d\xi \]
\[ x - a \sim \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\cos(na)-\cos(nx)}{n^2} \]
Since the series converges uniformly, we can replace the $\sim$ with $=$.
\[ x - a = \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\cos(na)}{n^2}
- \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\cos(nx)}{n^2} \]
Now we have a Fourier cosine series. The first sum on the right is the constant term. If we choose $a = \pi/2$
this sum vanishes since $\cos(n\pi/2) = 0$ for odd integer $n$.
\[ x = \frac{\pi}{2} - \frac{4}{\pi}\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\frac{\cos(nx)}{n^2} \]
3. If $f(x)$ has the Fourier series
\[ f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos(nx)+b_n\sin(nx)\right), \]
then Parseval's theorem states that
\[ \int_{-\pi}^{\pi}f^2(x)\,dx = \frac{\pi}{2}a_0^2 + \pi\sum_{n=1}^{\infty}\left(a_n^2+b_n^2\right). \]
We apply this to the Fourier sine series from part (a).
\[ \int_{-\pi}^{\pi}f^2(x)\,dx = \pi\sum_{\substack{n=1\\\text{odd }n}}^{\infty}\left(\frac{4}{\pi n}\right)^2 \]
\[ \int_{-\pi}^{0}(-1)^2\,dx + \int_0^{\pi}(1)^2\,dx = \frac{16}{\pi}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2} \]
\[ \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2} = \frac{\pi^2}{8} \]
We substitute $x = \pi$ into the series from part (b) to corroborate the result.
\[ x = \frac{\pi}{2} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{\cos((2n-1)x)}{(2n-1)^2} \]
\[ \pi = \frac{\pi}{2} - \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{\cos((2n-1)\pi)}{(2n-1)^2} \]
\[ \sum_{n=1}^{\infty}\frac{1}{(2n-1)^2} = \frac{\pi^2}{8} \]
Solution 30.18
1.
\[ f(x) \sim a_0 + \sum_{n=1}^{\infty}a_n\cos(nx) \]
Since the periodic extension of the function is discontinuous, the Fourier coefficients will decay like $1/n$.
Because of the anti-symmetry about $x = \pi/2$, there will be only odd cosine terms.
\[ a_0 = \frac{1}{\pi}\int_0^{\pi}f(x)\,dx = \frac{1}{2} \]
\[ a_n = \frac{2}{\pi}\int_0^{\pi}f(x)\cos(nx)\,dx
= \frac{2}{\pi}\int_0^{\pi/2}\cos(nx)\,dx
= \frac{2}{n\pi}\sin(n\pi/2)
= \begin{cases} \dfrac{2}{n\pi}(-1)^{(n-1)/2}, & \text{for odd } n,\\[4pt] 0 & \text{for even } n\end{cases} \]
The Fourier cosine series of $f(x)$ is
\[ f(x) \sim \frac{1}{2} + \frac{2}{\pi}\sum_{n=0}^{\infty}\frac{(-1)^n}{2n+1}\cos((2n+1)x). \]
2. The $N^{\text{th}}$ partial sum is
\[ S_N(x) = \frac{1}{2} + \frac{2}{\pi}\sum_{n=0}^{N}\frac{(-1)^n}{2n+1}\cos((2n+1)x). \]
We wish to evaluate the sum from part (a). First we make the change of variables $y = x - \pi/2$ to get rid
of the $(-1)^n$ factor.
\[ \sum_{n=0}^{N}\frac{(-1)^n}{2n+1}\cos((2n+1)x)
= \sum_{n=0}^{N}\frac{(-1)^n}{2n+1}\cos((2n+1)(y+\pi/2))
= \sum_{n=0}^{N}\frac{(-1)^n}{2n+1}(-1)^{n+1}\sin((2n+1)y)
= -\sum_{n=0}^{N}\frac{1}{2n+1}\sin((2n+1)y) \]
We write the summand as an integral and interchange the order of summation and integration to get rid of
the 1/(2n + 1) factor.
$$ = -\sum_{n=0}^{N}\int_0^y \cos((2n+1)t)\,dt = -\int_0^y \sum_{n=0}^{N}\cos((2n+1)t)\,dt $$
$$ = -\int_0^y \left(\sum_{n=1}^{2N+1}\cos(nt) - \sum_{n=1}^{N}\cos(2nt)\right)dt $$
$$ = -\int_0^y \Re\left(\sum_{n=1}^{2N+1} e^{int} - \sum_{n=1}^{N} e^{i2nt}\right)dt $$
$$ = -\int_0^y \Re\left(\frac{e^{it} - e^{i(2N+2)t}}{1 - e^{it}} - \frac{e^{i2t} - e^{i2(N+1)t}}{1 - e^{i2t}}\right)dt $$
$$ = -\int_0^y \Re\left(\frac{(e^{it} - e^{i2(N+1)t})(1 - e^{i2t}) - (e^{i2t} - e^{i2(N+1)t})(1 - e^{it})}{(1 - e^{it})(1 - e^{i2t})}\right)dt $$
$$ = -\int_0^y \Re\left(\frac{e^{it} - e^{i2t} + e^{i(2N+4)t} - e^{i(2N+3)t}}{(1 - e^{it})(1 - e^{i2t})}\right)dt $$
$$ = -\int_0^y \Re\left(\frac{e^{it} - e^{i(2N+3)t}}{1 - e^{i2t}}\right)dt $$
$$ = -\int_0^y \Re\left(\frac{e^{i(2N+2)t} - 1}{e^{it} - e^{-it}}\right)dt $$
$$ = -\int_0^y \Re\left(\frac{-i\,e^{i2(N+1)t} + i}{2\sin t}\right)dt $$
$$ = -\frac{1}{2}\int_0^y \frac{\sin(2(N+1)t)}{\sin t}\,dt = -\frac{1}{2}\int_0^{x-\pi/2} \frac{\sin(2(N+1)t)}{\sin t}\,dt $$
Now we have a tidy representation of the partial sum.
$$ S_N(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^{x-\pi/2}\frac{\sin(2(N+1)t)}{\sin t}\,dt $$
3. We solve dS_N(x)/dx = 0 to find the relative extrema of S_N(x).
$$ S_N'(x) = 0 $$
$$ -\frac{1}{\pi}\,\frac{\sin(2(N+1)(x-\pi/2))}{\sin(x-\pi/2)} = 0 $$
$$ \frac{(-1)^{N+1}\sin(2(N+1)x)}{-\cos(x)} = 0 $$
$$ \frac{\sin(2(N+1)x)}{\cos(x)} = 0 $$
$$ x = x_n = \frac{n\pi}{2(N+1)}, \quad n = 0, 1, \ldots, N, N+2, \ldots, 2N+2 $$
Note that x_{N+1} = π/2 is not a solution as the denominator vanishes there. The function has a removable
singularity at x = π/2 with limiting value 2(N+1)(−1)^N.
4.
$$ S_N(x_N) = \frac{1}{2} - \frac{1}{\pi}\int_0^{\frac{N\pi}{2(N+1)}-\frac{\pi}{2}}\frac{\sin(2(N+1)t)}{\sin t}\,dt $$
We note that the integrand is even.
$$ \int_0^{\frac{N\pi}{2(N+1)}-\frac{\pi}{2}} = \int_0^{-\frac{\pi}{2(N+1)}} = -\int_0^{\frac{\pi}{2(N+1)}} $$
$$ S_N(x_N) = \frac{1}{2} + \frac{1}{\pi}\int_0^{\frac{\pi}{2(N+1)}}\frac{\sin(2(N+1)t)}{\sin t}\,dt $$
5. We make the change of variables 2(N + 1)t → t.
$$ S_N(x_N) = \frac{1}{2} + \frac{1}{\pi}\int_0^{\pi}\frac{\sin(t)}{2(N+1)\sin(t/(2(N+1)))}\,dt $$
Note that
$$ \lim_{\epsilon\to 0}\frac{\sin(\epsilon t)}{\epsilon} = \lim_{\epsilon\to 0}\frac{t\cos(\epsilon t)}{1} = t. $$
$$ S_N(x_N) \to \frac{1}{2} + \frac{1}{\pi}\int_0^{\pi}\frac{\sin(t)}{t}\,dt \approx 1.0895 \quad \text{as } N \to \infty $$
This is not equal to the limiting value of f(x), f(π/2 − 0) = 1.
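The overshoot constant above can be confirmed numerically. This is an added sketch (a simple midpoint rule is assumed to be accurate enough here) that evaluates 1/2 + (1/π)∫₀^π sin(t)/t dt:

```python
import math

def sinc_integral(a, b, steps=10000):
    """Midpoint rule for the integral of sin(t)/t on [a, b]."""
    h = (b - a) / steps
    return sum(math.sin(a + (k + 0.5) * h) / (a + (k + 0.5) * h) * h
               for k in range(steps))

# The Gibbs overshoot value: not the limit f(pi/2 - 0) = 1.
overshoot = 0.5 + sinc_integral(0.0, math.pi) / math.pi
assert abs(overshoot - 1.0895) < 1e-3
```

The midpoint rule avoids evaluating sin(t)/t at t = 0, so no special handling of the removable singularity is needed.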
Solution 30.19
With the parametrization in t, x(t) and y(t) are continuous functions on the range [0, 2π]. Since the curve is
closed, we have x(0) = x(2π) and y(0) = y(2π). This means that the periodic extensions of x(t) and y(t) are
continuous functions. Thus we can differentiate their Fourier series. First we define formal Fourier series for x(t)
and y(t).
$$ x(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos(nt) + b_n\sin(nt)\right) $$
$$ y(t) = \frac{c_0}{2} + \sum_{n=1}^{\infty}\left(c_n\cos(nt) + d_n\sin(nt)\right) $$
$$ x'(t) = \sum_{n=1}^{\infty}\left(nb_n\cos(nt) - na_n\sin(nt)\right) $$
$$ y'(t) = \sum_{n=1}^{\infty}\left(nd_n\cos(nt) - nc_n\sin(nt)\right) $$
In this problem we will be dealing with integrals on [0, 2π] of products of Fourier series. We derive a general
formula for later use.
$$ \int_0^{2\pi} xy\,dt = \int_0^{2\pi}\left(\frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos(nt)+b_n\sin(nt)\right)\right)\left(\frac{c_0}{2} + \sum_{n=1}^{\infty}\left(c_n\cos(nt)+d_n\sin(nt)\right)\right)dt $$
$$ = \int_0^{2\pi}\left(\frac{a_0 c_0}{4} + \sum_{n=1}^{\infty}\left(a_n c_n\cos^2(nt) + b_n d_n\sin^2(nt)\right)\right)dt $$
$$ = \pi\left(\frac{1}{2}a_0 c_0 + \sum_{n=1}^{\infty}\left(a_n c_n + b_n d_n\right)\right) $$
In the arclength parametrization we have
$$ \left(\frac{dx}{ds}\right)^2 + \left(\frac{dy}{ds}\right)^2 = 1. $$
In terms of t = 2πs/L this is
$$ \left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2 = \left(\frac{L}{2\pi}\right)^2. $$
We integrate this identity on [0, 2π].
$$ \frac{L^2}{2\pi} = \int_0^{2\pi}\left(\left(\frac{dx}{dt}\right)^2 + \left(\frac{dy}{dt}\right)^2\right)dt $$
$$ = \pi\left(\sum_{n=1}^{\infty}\left((nb_n)^2 + (na_n)^2\right) + \sum_{n=1}^{\infty}\left((nd_n)^2 + (nc_n)^2\right)\right) $$
$$ = \pi\sum_{n=1}^{\infty} n^2\left(a_n^2 + b_n^2 + c_n^2 + d_n^2\right) $$
$$ L^2 = 2\pi^2\sum_{n=1}^{\infty} n^2\left(a_n^2 + b_n^2 + c_n^2 + d_n^2\right) $$
We assume that the curve is parametrized so that the area is positive. (Reversing the orientation changes the
sign of the area as defined above.) The area is
$$ A = \int_0^{2\pi} x\frac{dy}{dt}\,dt = \int_0^{2\pi}\left(\frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos(nt)+b_n\sin(nt)\right)\right)\left(\sum_{n=1}^{\infty}\left(nd_n\cos(nt) - nc_n\sin(nt)\right)\right)dt $$
$$ = \pi\sum_{n=1}^{\infty} n\left(a_n d_n - b_n c_n\right) $$
Now we find an upper bound on the area. We will use the inequality |ab| ≤ (1/2)(a² + b²), which follows from
expanding (a − b)² ≥ 0.
$$ A \le \frac{\pi}{2}\sum_{n=1}^{\infty} n\left(a_n^2 + b_n^2 + c_n^2 + d_n^2\right) \le \frac{\pi}{2}\sum_{n=1}^{\infty} n^2\left(a_n^2 + b_n^2 + c_n^2 + d_n^2\right) $$
We can express this in terms of the perimeter.
$$ = \frac{L^2}{4\pi} $$
$$ L^2 \ge 4\pi A $$
Now we determine the curves for which L² = 4πA. To do this we find conditions for which A is equal to the
upper bound we obtained for it above. First note that
$$ \sum_{n=1}^{\infty} n\left(a_n^2 + b_n^2 + c_n^2 + d_n^2\right) = \sum_{n=1}^{\infty} n^2\left(a_n^2 + b_n^2 + c_n^2 + d_n^2\right) $$
implies that all the coefficients except a₀, c₀, a₁, b₁, c₁ and d₁ are zero. The constraint,
$$ \pi\sum_{n=1}^{\infty} n\left(a_n d_n - b_n c_n\right) = \frac{\pi}{2}\sum_{n=1}^{\infty} n\left(a_n^2 + b_n^2 + c_n^2 + d_n^2\right) $$
then becomes
$$ a_1 d_1 - b_1 c_1 = \frac{1}{2}\left(a_1^2 + b_1^2 + c_1^2 + d_1^2\right). $$
This implies that d₁ = a₁ and c₁ = −b₁. a₀ and c₀ are arbitrary. Thus curves for which L² = 4πA have the
parametrization
$$ x(t) = \frac{a_0}{2} + a_1\cos t + b_1\sin t, \qquad y(t) = \frac{c_0}{2} - b_1\cos t + a_1\sin t. $$
Note that
$$ \left(x(t) - \frac{a_0}{2}\right)^2 + \left(y(t) - \frac{c_0}{2}\right)^2 = a_1^2 + b_1^2. $$
The curve is a circle of radius √(a₁² + b₁²) and center (a₀/2, c₀/2).
Solution 30.20
1. The Fourier sine series has the form
$$ x(1-x) = \sum_{n=1}^{\infty} a_n\sin(n\pi x). $$
The norm of the eigenfunctions is
$$ \int_0^1 \sin^2(n\pi x)\,dx = \frac{1}{2}. $$
The coefficients in the expansion are
$$ a_n = 2\int_0^1 x(1-x)\sin(n\pi x)\,dx = \frac{2}{\pi^3 n^3}\left(2 - 2\cos(n\pi) - n\pi\sin(n\pi)\right) = \frac{4}{\pi^3 n^3}\left(1 - (-1)^n\right). $$
Thus the Fourier sine series is
$$ x(1-x) = \frac{8}{\pi^3}\sum_{\substack{n=1\\ \text{odd }n}}^{\infty}\frac{\sin(n\pi x)}{n^3} = \frac{8}{\pi^3}\sum_{n=1}^{\infty}\frac{\sin((2n-1)\pi x)}{(2n-1)^3}. $$
The Fourier cosine series has the form
$$ x(1-x) = \sum_{n=0}^{\infty} a_n\cos(n\pi x). $$
The norm of the eigenfunctions is
$$ \int_0^1 1^2\,dx = 1, \qquad \int_0^1 \cos^2(n\pi x)\,dx = \frac{1}{2}. $$
The coefficients in the expansion are
$$ a_0 = \int_0^1 x(1-x)\,dx = \frac{1}{6}, $$
$$ a_n = 2\int_0^1 x(1-x)\cos(n\pi x)\,dx = -\frac{2}{\pi^2 n^2}\left(1 + (-1)^n\right). $$
Thus the Fourier cosine series is
$$ x(1-x) = \frac{1}{6} - \frac{4}{\pi^2}\sum_{\substack{n=2\\ \text{even }n}}^{\infty}\frac{\cos(n\pi x)}{n^2} = \frac{1}{6} - \frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{\cos(2n\pi x)}{n^2}. $$
Figure 30.13: The odd and even periodic extension of x(1 − x), 0 ≤ x ≤ 1.
The Fourier sine series converges to the odd periodic extension of the function. Since this function is C¹,
continuously differentiable, we know that the Fourier coefficients must decay as 1/n³. The Fourier cosine
series converges to the even periodic extension of the function. Since this function is only C⁰, continuous,
the Fourier coefficients must decay as 1/n². The odd and even periodic extensions are shown in Figure 30.13.
The sine series is better because of the faster convergence of the series.
2. (a) We substitute x = 0 into the cosine series.
$$ 0 = \frac{1}{6} - \frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2} $$
$$ \sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6} $$
(b) We substitute x = 1/2 into the cosine series.
$$ \frac{1}{4} = \frac{1}{6} - \frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{\cos(n\pi)}{n^2} $$
$$ \sum_{n=1}^{\infty}\frac{(-1)^n}{n^2} = -\frac{\pi^2}{12} $$
(c) We substitute x = 1/2 into the sine series.
$$ \frac{1}{4} = \frac{8}{\pi^3}\sum_{n=1}^{\infty}\frac{\sin((2n-1)\pi/2)}{(2n-1)^3} $$
$$ \sum_{n=1}^{\infty}\frac{(-1)^n}{(2n-1)^3} = -\frac{\pi^3}{32} $$
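The three sums just derived are easy to corroborate numerically. This added check evaluates partial sums of each series:

```python
import math

def psum(term, terms=200000):
    """Partial sum of term(n) for n = 1..terms."""
    return sum(term(n) for n in range(1, terms + 1))

assert abs(psum(lambda n: 1 / n**2) - math.pi**2 / 6) < 1e-4
assert abs(psum(lambda n: (-1)**n / n**2) + math.pi**2 / 12) < 1e-6
assert abs(psum(lambda n: (-1)**n / (2*n - 1)**3) + math.pi**3 / 32) < 1e-9
```

The alternating series converge much faster than the first one, matching the decay rates of the Fourier coefficients discussed above.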
Chapter 31

Regular Sturm-Liouville Problems

I learned there are troubles
Of more than one kind.
Some come from ahead
And some come from behind.
But I've bought a big bat.
I'm all ready, you see.
Now my troubles are going
To have troubles with me!

-I Had Trouble in Getting to Solla Sollew
-Theodor S. Geisel, (Dr. Seuss)
31.1 Derivation of the Sturm-Liouville Form

Consider the eigenvalue problem on the finite interval [a, b]
$$ p_2(x)y'' + p_1(x)y' + p_0(x)y = \mu y, $$
subject to the homogeneous unmixed boundary conditions
$$ \alpha_1 y(a) + \alpha_2 y'(a) = 0, \qquad \beta_1 y(b) + \beta_2 y'(b) = 0. $$
Here the p_j's are real and continuous and p₂ > 0 on the interval [a, b]. The α_j's and β_j's are real. (Note that if
p₂ were negative we could multiply the equation by (−1) and replace μ by −μ.)
We would like to write this problem in a form that can be used to obtain qualitative information about the
problem. First we will write the operator in self-adjoint form. Since p₂ is positive on the interval,
$$ y'' + \frac{p_1}{p_2}y' + \frac{p_0}{p_2}y = \frac{\mu}{p_2}y. $$
Multiplying by the factor
$$ \exp\left(\int^x \frac{p_1}{p_2}\,d\xi\right) = e^{P(x)} $$
yields
$$ e^{P(x)}\left(y'' + \frac{p_1}{p_2}y' + \frac{p_0}{p_2}y\right) = e^{P(x)}\frac{\mu}{p_2}y $$
$$ \left(e^{P(x)}y'\right)' + e^{P(x)}\frac{p_0}{p_2}y = e^{P(x)}\frac{\mu}{p_2}y. $$
We define the following functions
$$ p = e^{P(x)}, \quad q = e^{P(x)}\frac{p_0}{p_2}, \quad \sigma = e^{P(x)}\frac{1}{p_2}, \quad \lambda = -\mu. $$
Since the p_j's are continuous and p₂ is positive, p, q, and σ are continuous. p and σ are positive functions. The
problem now has the form
$$ (py')' + qy + \lambda\sigma y = 0, $$
subject to the boundary conditions
$$ \alpha_1 y(a) + \alpha_2 y'(a) = 0, \qquad \beta_1 y(b) + \beta_2 y'(b) = 0. $$
This is known as a Regular Sturm-Liouville problem. We will devote much of this chapter to studying the
properties of this problem. We will encounter many results that are analogous to the properties of self-adjoint
eigenvalue problems.

Example 31.1.1
$$ \frac{d}{dx}\left(\log x\,\frac{dy}{dx}\right) + \lambda xy = 0, \qquad y(1) = y(2) = 0 $$
is not a regular Sturm-Liouville problem since log x vanishes at x = 1.
Result 31.1.1 Any eigenvalue problem of the form
$$ p_2 y'' + p_1 y' + p_0 y = \mu y, \quad \text{for } a \le x \le b, $$
$$ \alpha_1 y(a) + \alpha_2 y'(a) = 0, \qquad \beta_1 y(b) + \beta_2 y'(b) = 0, $$
where the p_j's are real and continuous, p₂ > 0 on [a, b], and the α_j's and β_j's are real can
be written in the form of a regular Sturm-Liouville problem
$$ (py')' + qy + \lambda\sigma y = 0, \quad \text{on } a \le x \le b, $$
$$ \alpha_1 y(a) + \alpha_2 y'(a) = 0, \qquad \beta_1 y(b) + \beta_2 y'(b) = 0. $$
31.2 Properties of Regular Sturm-Liouville Problems

Self-Adjoint. Writing the problem in the form
$$ L[y] = (py')' + qy = -\lambda\sigma y, $$
we see that the operator is formally self-adjoint. Now to see if the problem is self-adjoint.
$$ \langle v|L[u]\rangle - \langle L[v]|u\rangle = \langle v|(pu')' + qu\rangle - \langle (pv')' + qv|u\rangle $$
$$ = [\bar{v}pu']_a^b - \langle v'|p|u'\rangle + \langle v|q|u\rangle - [p\bar{v}'u]_a^b + \langle pv'|u'\rangle - \langle qv|u\rangle $$
$$ = [\bar{v}pu']_a^b - [p\bar{v}'u]_a^b $$
$$ = p(b)\left(\bar{v}(b)u'(b) - \bar{v}'(b)u(b)\right) - p(a)\left(\bar{v}(a)u'(a) - \bar{v}'(a)u(a)\right) $$
$$ = p(b)\left(\bar{v}(b)\left(-\frac{\beta_1}{\beta_2}\right)u(b) - \overline{\left(-\frac{\beta_1}{\beta_2}\right)}\bar{v}(b)u(b)\right) - p(a)\left(\bar{v}(a)\left(-\frac{\alpha_1}{\alpha_2}\right)u(a) - \overline{\left(-\frac{\alpha_1}{\alpha_2}\right)}\bar{v}(a)u(a)\right) $$
$$ = 0 $$
Note that the α_i and β_i are real so
$$ \overline{\left(-\frac{\beta_1}{\beta_2}\right)} = -\frac{\beta_1}{\beta_2}, \qquad \overline{\left(-\frac{\alpha_1}{\alpha_2}\right)} = -\frac{\alpha_1}{\alpha_2}. $$
Thus L[y] subject to the boundary conditions is self-adjoint.
Real Eigenvalues. Let λ be an eigenvalue with the eigenfunction φ. Starting with Green's formula,
$$ \langle\phi|L[\phi]\rangle - \langle L[\phi]|\phi\rangle = 0 $$
$$ \langle\phi|-\lambda\sigma\phi\rangle - \langle-\lambda\sigma\phi|\phi\rangle = 0 $$
$$ -\lambda\langle\phi|\sigma|\phi\rangle + \bar{\lambda}\langle\phi|\sigma|\phi\rangle = 0 $$
$$ (\bar{\lambda} - \lambda)\langle\phi|\sigma|\phi\rangle = 0. $$
Since ⟨φ|σ|φ⟩ > 0, λ − λ̄ = 0. Thus the eigenvalues are real.

Infinite Number of Eigenvalues. There are an infinite number of eigenvalues which have no finite cluster point. This
result is analogous to the result that we derived for self-adjoint eigenvalue problems. When we cover the Rayleigh
quotient, we will find that there is a least eigenvalue. Since the eigenvalues are distinct and have no finite cluster
point, λₙ → ∞ as n → ∞. Thus the eigenvalues form an ordered sequence,
$$ \lambda_1 < \lambda_2 < \lambda_3 < \cdots. $$

Orthogonal Eigenfunctions. Let λ and μ be two distinct eigenvalues with the eigenfunctions φ and ψ. Green's
formula states
$$ \langle\phi|L[\psi]\rangle - \langle L[\phi]|\psi\rangle = 0 $$
$$ \langle\phi|-\mu\sigma\psi\rangle - \langle-\lambda\sigma\phi|\psi\rangle = 0 $$
$$ -\mu\langle\phi|\sigma|\psi\rangle + \lambda\langle\phi|\sigma|\psi\rangle = 0 $$
$$ (\lambda - \mu)\langle\phi|\sigma|\psi\rangle = 0 $$
Since the eigenvalues are distinct, ⟨φ|σ|ψ⟩ = 0. Thus eigenfunctions corresponding to distinct eigenvalues are
orthogonal with respect to σ.
Unique Eigenfunctions. Let λ be an eigenvalue. Suppose φ and ψ are two independent eigenfunctions corresponding to λ. The eigenfunctions satisfy the equations
$$ L[\phi] + \lambda\sigma\phi = 0 $$
$$ L[\psi] + \lambda\sigma\psi = 0. $$
Taking the difference of ψ times the first equation and φ times the second equation gives us
$$ \psi L[\phi] - \phi L[\psi] = 0 $$
$$ \psi(p\phi')' - \phi(p\psi')' = 0 $$
$$ \left(p(\psi\phi' - \psi'\phi)\right)' = 0 $$
$$ p(\psi\phi' - \psi'\phi) = \text{const}. $$
In order to satisfy the boundary conditions, the constant must be zero.
$$ p(\psi\phi' - \psi'\phi) = 0 $$
Since p > 0,
$$ \psi\phi' - \psi'\phi = 0 $$
$$ \psi^2\frac{d}{dx}\left(\frac{\phi}{\psi}\right) = 0 $$
$$ \frac{d}{dx}\left(\frac{\phi}{\psi}\right) = 0 $$
$$ \frac{\phi}{\psi} = \text{const}. $$
φ and ψ are not independent. Thus each eigenvalue has a unique, (to within a multiplicative constant),
eigenfunction.
Real Eigenfunctions. If λ is an eigenvalue with eigenfunction φ, then
$$ (p\phi')' + q\phi + \lambda\sigma\phi = 0. $$
Taking the complex conjugate of this equation,
$$ \left(p\bar{\phi}'\right)' + q\bar{\phi} + \lambda\sigma\bar{\phi} = 0. $$
Thus φ̄ is also an eigenfunction corresponding to λ. Are φ and φ̄ independent functions, or do they just differ by
a multiplicative constant? (For example, e^{ix} and e^{−ix} are independent functions, but ix and −ix are dependent.)
From our argument on unique eigenfunctions, we see that
$$ \bar{\phi} = (\text{const})\phi. $$
Since φ and φ̄ only differ by a multiplicative constant, the eigenfunctions can be chosen so that they are real-valued
functions.
Rayleigh's Quotient. Let λ be an eigenvalue with the eigenfunction φ.
$$ \langle\phi|L[\phi]\rangle = \langle\phi|-\lambda\sigma\phi\rangle $$
$$ \langle\phi|(p\phi')' + q\phi\rangle = -\lambda\langle\phi|\sigma|\phi\rangle $$
$$ \left[\bar{\phi}p\phi'\right]_a^b - \langle\phi'|p|\phi'\rangle + \langle\phi|q|\phi\rangle = -\lambda\langle\phi|\sigma|\phi\rangle $$
$$ \lambda = \frac{-\left[\bar{\phi}p\phi'\right]_a^b + \langle\phi'|p|\phi'\rangle - \langle\phi|q|\phi\rangle}{\langle\phi|\sigma|\phi\rangle} $$
This is known as Rayleigh's quotient. It is useful for obtaining qualitative information about the eigenvalues.
Minimum Property of Rayleigh's Quotient. Note that since p, q, and σ are bounded functions, the
Rayleigh quotient is bounded below. Thus there is a least eigenvalue. If we restrict u to be a real continuous
function that satisfies the boundary conditions, then
$$ \lambda_1 = \min_u \frac{-[puu']_a^b + \langle u'|p|u'\rangle - \langle u|q|u\rangle}{\langle u|\sigma|u\rangle}, $$
where λ₁ is the least eigenvalue. This form allows us to get upper and lower bounds on λ₁.
To derive this formula, we first write it in terms of the operator L.
$$ \lambda_1 = \min_u \frac{-\langle u|L[u]\rangle}{\langle u|\sigma|u\rangle} $$
Since u is continuous and satisfies the boundary conditions, we can expand u in a series of the eigenfunctions.
$$ \frac{-\langle u|L[u]\rangle}{\langle u|\sigma|u\rangle} = \frac{-\left\langle \sum_{n=1}^{\infty} c_n\phi_n \,\middle|\, L\!\left[\sum_{m=1}^{\infty} c_m\phi_m\right]\right\rangle}{\left\langle \sum_{n=1}^{\infty} c_n\phi_n \,\middle|\, \sigma \,\middle|\, \sum_{m=1}^{\infty} c_m\phi_m\right\rangle} = \frac{\left\langle \sum_{n=1}^{\infty} c_n\phi_n \,\middle|\, \sum_{m=1}^{\infty} c_m\lambda_m\sigma\phi_m\right\rangle}{\left\langle \sum_{n=1}^{\infty} c_n\phi_n \,\middle|\, \sigma \,\middle|\, \sum_{m=1}^{\infty} c_m\phi_m\right\rangle} $$
Assuming that we can interchange summation and integration,
$$ = \frac{\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \bar{c}_n c_m\lambda_m\langle\phi_m|\sigma|\phi_n\rangle}{\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \bar{c}_n c_m\langle\phi_m|\sigma|\phi_n\rangle} = \frac{\sum_{n=1}^{\infty}|c_n|^2\lambda_n\langle\phi_n|\sigma|\phi_n\rangle}{\sum_{n=1}^{\infty}|c_n|^2\langle\phi_n|\sigma|\phi_n\rangle} \ge \lambda_1\frac{\sum_{n=1}^{\infty}|c_n|^2\langle\phi_n|\sigma|\phi_n\rangle}{\sum_{n=1}^{\infty}|c_n|^2\langle\phi_n|\sigma|\phi_n\rangle} = \lambda_1. $$
We see that the minimum value of Rayleigh's quotient is λ₁. The minimum is attained when cₙ = 0 for all n ≥ 2,
that is, when u = c₁φ₁.
Completeness. The set of the eigenfunctions of a regular Sturm-Liouville problem is complete. That is, any
piecewise continuous function defined on [a, b] can be expanded in a series of the eigenfunctions
$$ f(x) \sim \sum_{n=1}^{\infty} c_n\phi_n(x), $$
where the cₙ are the generalized Fourier coefficients
$$ c_n = \frac{\langle\phi_n|\sigma|f\rangle}{\langle\phi_n|\sigma|\phi_n\rangle}. $$
Here the sum is convergent in the mean. For any fixed x, the sum converges to (1/2)(f(x⁻) + f(x⁺)). If f(x) is
continuous and satisfies the boundary conditions, then the convergence is uniform.
Result 31.2.1 Properties of regular Sturm-Liouville problems.
• The eigenvalues are real.
• There are an infinite number of eigenvalues
$$ \lambda_1 < \lambda_2 < \lambda_3 < \cdots. $$
There is a least eigenvalue λ₁ but there is no greatest eigenvalue, (λₙ → ∞ as n → ∞).
• For each eigenvalue, there is one unique, (to within a multiplicative constant), eigenfunction φₙ. The eigenfunctions can be chosen to be real-valued. (Assume the φₙ following are real-valued.) The eigenfunction φₙ has exactly n − 1 zeros in the open interval a < x < b.
• The eigenfunctions are orthogonal with respect to the weighting function σ(x).
$$ \int_a^b \phi_n(x)\phi_m(x)\sigma(x)\,dx = 0 \quad \text{if } n \ne m. $$
• The eigenfunctions are complete. Any piecewise continuous function f(x) defined on a ≤ x ≤ b can be expanded in a series of eigenfunctions
$$ f(x) \sim \sum_{n=1}^{\infty} c_n\phi_n(x), $$
where
$$ c_n = \frac{\int_a^b f(x)\phi_n(x)\sigma(x)\,dx}{\int_a^b \phi_n^2(x)\sigma(x)\,dx}. $$
The sum converges to (1/2)(f(x⁻) + f(x⁺)).
• The eigenvalues can be related to the eigenfunctions with a formula known as the Rayleigh quotient.
$$ \lambda_n = \frac{-\left[p\phi_n\frac{d\phi_n}{dx}\right]_a^b + \int_a^b\left(p\left(\frac{d\phi_n}{dx}\right)^2 - q\phi_n^2\right)dx}{\int_a^b \phi_n^2\sigma\,dx} $$
Example 31.2.1 A simple example of a Sturm-Liouville problem is
$$ \frac{d}{dx}\left(\frac{dy}{dx}\right) + \lambda y = 0, \qquad y(0) = y(\pi) = 0. $$
Bounding The Least Eigenvalue. The Rayleigh quotient for the first eigenvalue is
$$ \lambda_1 = \frac{\int_0^{\pi}(\phi_1')^2\,dx}{\int_0^{\pi}\phi_1^2\,dx}. $$
Immediately we see that the eigenvalues are non-negative. If ∫₀^π (φ₁')² dx = 0 then φ = (const). The only constant
that satisfies the boundary conditions is φ = 0. Since the trivial solution is not an eigenfunction, λ = 0 is not an
eigenvalue. Thus all the eigenvalues are positive.
Now we get an upper bound for the first eigenvalue.
$$ \lambda_1 = \min_u \frac{\int_0^{\pi}(u')^2\,dx}{\int_0^{\pi}u^2\,dx} $$
where u is continuous and satisfies the boundary conditions. We choose u = x(x − π) as a trial function.
$$ \lambda_1 \le \frac{\int_0^{\pi}(u')^2\,dx}{\int_0^{\pi}u^2\,dx} = \frac{\int_0^{\pi}(2x-\pi)^2\,dx}{\int_0^{\pi}(x^2-\pi x)^2\,dx} = \frac{\pi^3/3}{\pi^5/30} = \frac{10}{\pi^2} \approx 1.013 $$
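The trial-function bound is easy to reproduce by quadrature. This added sketch (the function `rayleigh` is mine, not from the text) evaluates the Rayleigh quotient for q = 0, p = σ = 1 with a midpoint rule:

```python
import math

def rayleigh(u, du, a, b, steps=20000):
    """Rayleigh quotient int (u')^2 dx / int u^2 dx by the midpoint rule."""
    h = (b - a) / steps
    num = sum(du(a + (k + 0.5) * h) ** 2 * h for k in range(steps))
    den = sum(u(a + (k + 0.5) * h) ** 2 * h for k in range(steps))
    return num / den

# Trial function u = x(x - pi) for -y'' = lambda y, y(0) = y(pi) = 0.
q = rayleigh(lambda x: x * (x - math.pi), lambda x: 2 * x - math.pi,
             0.0, math.pi)
assert abs(q - 10 / math.pi ** 2) < 1e-5
assert q >= 1.0  # the true least eigenvalue is lambda_1 = 1
```

Using the exact first eigenfunction sin(x) as the trial function returns the minimum value 1, illustrating the minimum property of the quotient.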
Finding the Eigenvalues and Eigenfunctions. We consider the cases of negative, zero, and positive eigenvalues to check our results above.

λ < 0. The general solution is
$$ y = c\,e^{\sqrt{-\lambda}\,x} + d\,e^{-\sqrt{-\lambda}\,x}. $$
The only solution that satisfies the boundary conditions is the trivial solution, y = 0. Thus there are no
negative eigenvalues.

λ = 0. The general solution is
$$ y = c + dx. $$
Again only the trivial solution satisfies the boundary conditions, so λ = 0 is not an eigenvalue.

λ > 0. The general solution is
$$ y = c\cos(\sqrt{\lambda}\,x) + d\sin(\sqrt{\lambda}\,x). $$
Applying the boundary conditions,
$$ y(0) = 0 \;\Rightarrow\; c = 0 $$
$$ y(\pi) = 0 \;\Rightarrow\; d\sin(\sqrt{\lambda}\,\pi) = 0 $$
The nontrivial solutions are
$$ \sqrt{\lambda} = n = 1, 2, 3, \ldots, \qquad y = d\sin(nx). $$
Thus the eigenvalues and eigenfunctions are
$$ \lambda_n = n^2, \quad \phi_n = \sin(nx), \quad \text{for } n = 1, 2, 3, \ldots $$
We can verify that this example satisfies all the properties listed in Result 31.2.1. Note that there are an
infinite number of eigenvalues. There is a least eigenvalue λ₁ = 1 but there is no greatest eigenvalue. For each
eigenvalue, there is one eigenfunction. The n^th eigenfunction sin(nx) has n − 1 zeros in the interval 0 < x < π.
Since a series of the eigenfunctions is the familiar Fourier sine series, we know that the eigenfunctions are
orthogonal and complete. Checking Rayleigh's quotient,
$$ \lambda_n = \frac{-\left[p\phi_n\frac{d\phi_n}{dx}\right]_0^{\pi} + \int_0^{\pi}\left(p\left(\frac{d\phi_n}{dx}\right)^2 - q\phi_n^2\right)dx}{\int_0^{\pi}\phi_n^2\sigma\,dx} = \frac{-\left[\sin(nx)\frac{d\sin(nx)}{dx}\right]_0^{\pi} + \int_0^{\pi}\left(\frac{d\sin(nx)}{dx}\right)^2 dx}{\int_0^{\pi}\sin^2(nx)\,dx} = \frac{\int_0^{\pi} n^2\cos^2(nx)\,dx}{\pi/2} = n^2. $$
Example 31.2.2 Consider the eigenvalue problem
$$ x^2 y'' + xy' + y = \mu y, \qquad y(1) = y(2) = 0. $$
Since x² > 0 on [1, 2], we can write this problem in terms of a regular Sturm-Liouville eigenvalue problem.
Dividing by x²,
$$ y'' + \frac{1}{x}y' + \frac{1}{x^2}(1-\mu)y = 0. $$
We multiply by the factor exp(∫^x (1/ξ) dξ) = exp(log x) = x and make the substitution, λ = 1 − μ to obtain the
Sturm-Liouville form
$$ xy'' + y' + \frac{\lambda}{x}y = 0 $$
$$ (xy')' + \frac{\lambda}{x}y = 0. $$
We see that the eigenfunctions will be orthogonal with respect to the weighting function σ = 1/x.
From the Rayleigh quotient,
$$ \lambda = \frac{-\left[\bar{\phi}p\phi'\right]_a^b + \langle\phi'|x|\phi'\rangle}{\langle\phi|\frac{1}{x}|\phi\rangle} = \frac{\langle\phi'|x|\phi'\rangle}{\langle\phi|\frac{1}{x}|\phi\rangle}. $$
If φ' = 0, then only the trivial solution, φ = 0, satisfies the boundary conditions. Thus the eigenvalues λ are
positive.
Returning to the original problem, we see that the eigenvalues, μ, satisfy μ < 1. Since this is an Euler equation,
the substitution y = x^α yields
$$ \alpha(\alpha-1) + \alpha + 1 - \mu = 0 $$
$$ \alpha^2 + 1 - \mu = 0. $$
Since μ < 1,
$$ \alpha = \pm i\sqrt{1-\mu}. $$
The general solution is
$$ y = c_1 x^{i\sqrt{1-\mu}} + c_2 x^{-i\sqrt{1-\mu}}. $$
We know that the eigenfunctions can be written as real functions. We can rewrite the solution as
$$ y = c_1 e^{i\sqrt{1-\mu}\,\log x} + c_2 e^{-i\sqrt{1-\mu}\,\log x}. $$
An equivalent form is
$$ y = c_1\cos\left(\sqrt{1-\mu}\,\log x\right) + c_2\sin\left(\sqrt{1-\mu}\,\log x\right). $$
Applying the boundary conditions,
$$ y(1) = 0 \;\Rightarrow\; c_1 = 0 $$
$$ y(2) = 0 \;\Rightarrow\; \sin\left(\sqrt{1-\mu}\,\log 2\right) = 0 $$
$$ \Rightarrow\; \sqrt{1-\mu}\,\log 2 = n\pi, \quad \text{for } n = 1, 2, \ldots $$
Thus the eigenvalues and eigenfunctions are
$$ \mu_n = 1 - \left(\frac{n\pi}{\log 2}\right)^2, \qquad \phi_n = \sin\left(n\pi\frac{\log x}{\log 2}\right) \quad \text{for } n = 1, 2, \ldots $$
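The predicted orthogonality with respect to σ = 1/x can be verified by quadrature. This added check evaluates the weighted inner product of two eigenfunctions on [1, 2]:

```python
import math

def inner(n, m, steps=20000):
    """Midpoint rule for int_1^2 phi_n(x) phi_m(x) (1/x) dx with
    phi_n(x) = sin(n pi log(x)/log(2))."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = 1.0 + (k + 0.5) * h
        pn = math.sin(n * math.pi * math.log(x) / math.log(2))
        pm = math.sin(m * math.pi * math.log(x) / math.log(2))
        total += pn * pm / x * h
    return total

assert abs(inner(1, 2)) < 1e-6                       # distinct modes: orthogonal
assert abs(inner(1, 1) - math.log(2) / 2) < 1e-6     # squared norm is log(2)/2
```

The substitution u = π log(x)/log(2) maps these integrals onto the familiar sine orthogonality on [0, π], which is why the squared norm comes out to log(2)/2.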
31.3 Solving Differential Equations With Eigenfunction Expansions

Linear Algebra. Consider the eigenvalue problem,
$$ A\mathbf{x} = \lambda\mathbf{x}. $$
If the matrix A has a complete, orthonormal set of eigenvectors ξ_k with eigenvalues λ_k then we can represent
any vector as a linear combination of the eigenvectors.
$$ \mathbf{y} = \sum_{k=1}^{n} a_k\boldsymbol{\xi}_k, \qquad a_k = \boldsymbol{\xi}_k\cdot\mathbf{y} $$
$$ \mathbf{y} = \sum_{k=1}^{n}(\boldsymbol{\xi}_k\cdot\mathbf{y})\boldsymbol{\xi}_k $$
This property allows us to solve the inhomogeneous equation
$$ A\mathbf{x} - \mu\mathbf{x} = \mathbf{b}. \tag{31.1} $$
Before we try to solve this equation, we should consider the existence/uniqueness of the solution. If μ is not an
eigenvalue, then the range of L ≡ A − μI is Rⁿ. The problem has a unique solution. If μ is an eigenvalue, then the
null space of L is the span of the eigenvectors of μ. That is, if μ = λᵢ, then nullspace(L) = span(ξᵢ₁, ξᵢ₂, ..., ξᵢₘ).
(ξᵢ₁, ξᵢ₂, ..., ξᵢₘ are the eigenvectors corresponding to λᵢ.) If b is orthogonal to nullspace(L) then Equation 31.1 has a solution,
but it is not unique. If y is a solution then we can add any linear combination of ξᵢⱼ to obtain another solution.
Thus the solutions have the form
$$ \mathbf{x} = \mathbf{y} + \sum_{j=1}^{m} c_j\boldsymbol{\xi}_{i_j}. $$
If b is not orthogonal to nullspace(L) then Equation 31.1 has no solution.
Now we solve Equation 31.1. We assume that μ is not an eigenvalue. We expand the solution x and the
inhomogeneity in the orthonormal eigenvectors.
$$ \mathbf{x} = \sum_{k=1}^{n} a_k\boldsymbol{\xi}_k, \qquad \mathbf{b} = \sum_{k=1}^{n} b_k\boldsymbol{\xi}_k $$
We substitute the expansions into Equation 31.1.
$$ A\sum_{k=1}^{n} a_k\boldsymbol{\xi}_k - \mu\sum_{k=1}^{n} a_k\boldsymbol{\xi}_k = \sum_{k=1}^{n} b_k\boldsymbol{\xi}_k $$
$$ \sum_{k=1}^{n} a_k\lambda_k\boldsymbol{\xi}_k - \mu\sum_{k=1}^{n} a_k\boldsymbol{\xi}_k = \sum_{k=1}^{n} b_k\boldsymbol{\xi}_k $$
$$ a_k = \frac{b_k}{\lambda_k - \mu} $$
The solution is
$$ \mathbf{x} = \sum_{k=1}^{n}\frac{b_k}{\lambda_k - \mu}\boldsymbol{\xi}_k. $$
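The expansion method above can be sketched on a concrete 2×2 instance (the matrix and data are my choices for illustration). A = [[2, 1], [1, 2]] has eigenvalues 1 and 3 with orthonormal eigenvectors (1, −1)/√2 and (1, 1)/√2:

```python
import math

s = 1.0 / math.sqrt(2.0)
eig = [(1.0, (s, -s)), (3.0, (s, s))]   # (lambda_k, xi_k) pairs
b = (1.0, 2.0)
mu = 0.5

# x = sum_k b_k / (lambda_k - mu) * xi_k, with b_k = xi_k . b
x = [0.0, 0.0]
for lam, xi in eig:
    bk = xi[0] * b[0] + xi[1] * b[1]
    coeff = bk / (lam - mu)
    x[0] += coeff * xi[0]
    x[1] += coeff * xi[1]

# Verify A x - mu x = b componentwise.
r0 = 2 * x[0] + x[1] - mu * x[0]
r1 = x[0] + 2 * x[1] - mu * x[1]
assert abs(r0 - b[0]) < 1e-12 and abs(r1 - b[1]) < 1e-12
```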
Inhomogeneous Boundary Value Problems. Consider the self-adjoint eigenvalue problem,
$$ Ly = \lambda y, \quad a < x < b, $$
$$ B_1[y] = B_2[y] = 0. $$
If the problem has a complete, orthonormal set of eigenfunctions φ_k with eigenvalues λ_k then we can represent
any square-integrable function as a linear combination of the eigenfunctions.
$$ f = \sum_k f_k\phi_k, \qquad f_k = \langle\phi_k|f\rangle = \int_a^b \phi_k(x)f(x)\,dx $$
$$ f = \sum_k \langle\phi_k|f\rangle\phi_k $$
This property allows us to solve the inhomogeneous differential equation
$$ Ly - \mu y = f, \quad a < x < b, \tag{31.2} $$
$$ B_1[y] = B_2[y] = 0. $$
Before we try to solve this equation, we should consider the existence/uniqueness of the solution. If μ is not
an eigenvalue, then the range of L − μ is the space of square-integrable functions. The problem has a unique
solution. If μ is an eigenvalue, then the null space of L − μ is the span of the eigenfunctions of μ. That is, if μ = λᵢ,
then nullspace(L − μ) = span(φᵢ₁, φᵢ₂, ..., φᵢₘ). (φᵢ₁, φᵢ₂, ..., φᵢₘ are the eigenfunctions corresponding to λᵢ.) If f is orthogonal to
nullspace(L − μ) then Equation 31.2 has a solution, but it is not unique. If u is a solution then we can add any
linear combination of φᵢⱼ to obtain another solution. Thus the solutions have the form
$$ y = u + \sum_{j=1}^{m} c_j\phi_{i_j}. $$
If f is not orthogonal to nullspace(L − μ) then Equation 31.2 has no solution.
Now we solve Equation 31.2. We assume that μ is not an eigenvalue. We expand the solution y and the
inhomogeneity in the orthonormal eigenfunctions.
$$ y = \sum_k y_k\phi_k, \qquad f = \sum_k f_k\phi_k $$
It would be handy if we could substitute the expansions into Equation 31.2. However, the expansion of a function
is not necessarily differentiable. Thus we demonstrate that since y is C²(a...b) and satisfies the boundary
conditions B₁[y] = B₂[y] = 0, we are justified in substituting it into the differential equation. In particular, we
will show that
$$ L[y] = L\left[\sum_k y_k\phi_k\right] = \sum_k y_k L[\phi_k] = \sum_k y_k\lambda_k\phi_k. $$
To do this we will use Green's identity. If u and v are C²(a...b) and satisfy the boundary conditions B₁[y] =
B₂[y] = 0 then
$$ \langle u|L[v]\rangle = \langle L[u]|v\rangle. $$
First we assume that we can differentiate y term-by-term.
$$ L[y] = \sum_k y_k\lambda_k\phi_k $$
Now we directly expand L[y] and show that we get the same result.
$$ L[y] = \sum_k c_k\phi_k $$
$$ c_k = \langle\phi_k|L[y]\rangle = \langle L[\phi_k]|y\rangle = \langle\lambda_k\phi_k|y\rangle = \lambda_k\langle\phi_k|y\rangle = \lambda_k y_k $$
$$ L[y] = \sum_k \lambda_k y_k\phi_k $$
The series representation of y may not be differentiable, but we are justified in applying L term-by-term.
Now we substitute the expansions into Equation 31.2.
$$ L\left[\sum_k y_k\phi_k\right] - \mu\sum_k y_k\phi_k = \sum_k f_k\phi_k $$
$$ \sum_k \lambda_k y_k\phi_k - \mu\sum_k y_k\phi_k = \sum_k f_k\phi_k $$
$$ y_k = \frac{f_k}{\lambda_k - \mu} $$
The solution is
$$ y = \sum_k \frac{f_k}{\lambda_k - \mu}\phi_k. $$
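The formula can be illustrated with L[y] = −y'' on (0, π), y(0) = y(π) = 0, whose eigenfunctions are sin(nx) with λₙ = n² (an added example; the choice f = sin(2x) is mine). Only one coefficient is nonzero, so the series solution is y = sin(2x)/(4 − μ), and we can check the residual pointwise:

```python
import math

mu = 0.5
y = lambda x: math.sin(2 * x) / (4 - mu)          # series solution, one term
ypp = lambda x: -4 * math.sin(2 * x) / (4 - mu)   # second derivative of y

# Check that -y'' - mu*y = f with f = sin(2x), at several interior points.
for k in range(1, 10):
    x = k * math.pi / 10
    residual = -ypp(x) - mu * y(x) - math.sin(2 * x)
    assert abs(residual) < 1e-12
```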
Consider a second order, inhomogeneous problem.
$$ L[y] = f(x), \qquad B_1[y] = b_1, \quad B_2[y] = b_2 $$
We will expand the solution in an orthogonal basis.
$$ y = \sum_n a_n\phi_n $$
We would like to substitute the series into the differential equation, but in general we are not allowed to differentiate
such series. To get around this, we use integration by parts to move derivatives from the solution y, to the φₙ.
Example 31.3.1 Consider the problem,
$$ y'' + \lambda y = f(x), \qquad y(0) = a, \quad y(\pi) = b, $$
where λ ≠ n², n ∈ Z⁺. We expand the solution in a cosine series.
$$ y(x) = \frac{y_0}{\sqrt{\pi}} + \sum_{n=1}^{\infty} y_n\sqrt{\frac{2}{\pi}}\cos(nx) $$
We also expand the inhomogeneous term.
$$ f(x) = \frac{f_0}{\sqrt{\pi}} + \sum_{n=1}^{\infty} f_n\sqrt{\frac{2}{\pi}}\cos(nx) $$
We multiply the differential equation by the orthonormal functions and integrate over the interval. We neglect
the special case φ₀ = 1/√π for now.
$$ \int_0^{\pi}\sqrt{\frac{2}{\pi}}\cos(nx)\,y''\,dx + \lambda\int_0^{\pi}\sqrt{\frac{2}{\pi}}\cos(nx)\,y\,dx = \int_0^{\pi}\sqrt{\frac{2}{\pi}}\cos(nx)\,f(x)\,dx $$
$$ \left[\sqrt{\frac{2}{\pi}}\cos(nx)y'(x)\right]_0^{\pi} + \int_0^{\pi}\sqrt{\frac{2}{\pi}}\,n\sin(nx)\,y'(x)\,dx + \lambda y_n = f_n $$
$$ \sqrt{\frac{2}{\pi}}\left((-1)^n y'(\pi) - y'(0)\right) + \left[\sqrt{\frac{2}{\pi}}\,n\sin(nx)y(x)\right]_0^{\pi} - \int_0^{\pi}\sqrt{\frac{2}{\pi}}\,n^2\cos(nx)\,y(x)\,dx + \lambda y_n = f_n $$
$$ \sqrt{\frac{2}{\pi}}\left((-1)^n y'(\pi) - y'(0)\right) - n^2 y_n + \lambda y_n = f_n $$
Unfortunately we don't know the values of y'(0) and y'(π).
CONTINUE HERE
31.4 Exercises

Exercise 31.1
Find the eigenvalues and eigenfunctions of
$$ y'' + 2\mu y' + \lambda y = 0, \qquad y(a) = y(b) = 0, $$
where a < b.
Write the problem in Sturm Liouville form. Verify that the eigenvalues and eigenfunctions satisfy the properties
of regular Sturm-Liouville problems. Find the coefficients in the expansion of an arbitrary function f(x) in a series
of the eigenfunctions.
Hint, Solution
Exercise 31.2
Find the eigenvalues and eigenfunctions of the boundary value problem
$$ y'' + \frac{\lambda}{(z+1)^2}y = 0 $$
on the interval 1 ≤ z ≤ 2 with boundary conditions y(1) = y(2) = 0. Discuss how the results confirm the concepts
presented in class relating to boundary value problems of this type.
Hint, Solution
Exercise 31.3
Find the eigenvalues and eigenfunctions of
$$ y'' + \frac{2\mu+1}{x}y' + \frac{\lambda}{x^2}y = 0, \qquad y(a) = y(b) = 0, $$
where 0 < a < b. Write the problem in Sturm Liouville form. Verify that the eigenvalues and eigenfunctions
satisfy the properties of regular Sturm-Liouville problems. Find the coefficients in the expansion of an arbitrary
function f(x) in a series of the eigenfunctions.
Hint, Solution
Exercise 31.4
Find the eigenvalues and eigenfunctions of
$$ y'' - y' + \lambda y = 0, \qquad y(0) = y(1) = 0. $$
Find the coefficients in the expansion of an arbitrary function, f(x), in a series of the eigenfunctions.
Hint, Solution
Exercise 31.5
Find the eigenvalues and eigenfunctions for,
$$ y'' + \lambda y = 0, \qquad y(0) = 0, \quad y(1) + y'(1) = 0. $$
Show that the transcendental equation for λ has infinitely many roots λ₁ < λ₂ < λ₃ < ···. Find the limit of
λₙ as n → ∞. How is this limit approached?
Hint, Solution
Exercise 31.6
Consider
$$ y'' + \lambda y = f(x), \qquad y(0) = 0, \quad y(1) + y'(1) = 0. $$
Find the eigenfunctions for this problem and the equation which the eigenvalues satisfy. Give the general solution
in terms of these eigenfunctions.
Hint, Solution
Exercise 31.7
Show that the eigenvalue problem,
$$ y'' + \lambda y = 0, \qquad y(0) = 0, \quad y'(0) - y(1) = 0, $$
(note the mixed boundary condition), has only one real eigenvalue. Find it and the corresponding eigenfunction.
Show that this problem is not self-adjoint. Thus the proof, valid for unmixed, homogeneous boundary conditions,
that all eigenvalues are real fails in this case.
Hint, Solution
Exercise 31.8
Determine the Rayleigh quotient, R[φ] for,
$$ y'' + \frac{1}{x}y' + \lambda y = 0, \qquad |y(0)| < \infty, \quad y(1) = 0. $$
Use the trial function φ = 1 − x in R[φ] to deduce that the smallest zero of J₀(x), the Bessel function of the first
kind and order zero, is less than √6.
Hint, Solution
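The arithmetic behind this exercise's bound is short enough to check directly. In self-adjoint form the equation is (xy')' + λxy = 0 (as the hint suggests), so R[φ] = ∫₀¹ x(φ')² dx / ∫₀¹ xφ² dx; for φ = 1 − x both integrals are elementary. This is an added verification, not the requested derivation:

```python
import math

num = 1.0 / 2.0                            # int_0^1 x * (-1)^2 dx = 1/2
den = 1.0 / 2.0 - 2.0 / 3.0 + 1.0 / 4.0    # int_0^1 x (1-x)^2 dx = 1/12
R = num / den                              # Rayleigh quotient of the trial function
assert abs(R - 6.0) < 1e-12

# lambda_1 = j_0^2 <= R = 6, so j_0 <= sqrt(6) ~ 2.449.
j0 = 2.404825557695773                     # smallest zero of J_0 (known value)
assert j0 < math.sqrt(R)
```

The true value j₀ ≈ 2.405 shows the trial-function bound √6 ≈ 2.449 is fairly tight.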
Exercise 31.9
Discuss the eigenvalues of the equation
$$ y'' + \lambda q(z)y = 0, \qquad y(0) = y(\pi) = 0 $$
where
$$ q(z) = \begin{cases} a > 0, & 0 \le z \le l \\ b > 0, & l < z \le \pi. \end{cases} $$
This is an example that indicates that the results we obtained in class for eigenfunctions and eigenvalues with
q(z) continuous and bounded also hold if q(z) is simply integrable; that is,
$$ \int_0^{\pi} |q(z)|\,dz $$
is finite.
Hint, Solution
Exercise 31.10
1. Find conditions on the smooth real functions p(x), q(x), r(x) and s(x) so that the eigenvalues, λ, of:
$$ Lv \equiv (p(x)v''(x))'' - (q(x)v'(x))' + r(x)v(x) = \lambda s(x)v(x), \quad a < x < b $$
$$ v(a) = v''(a) = 0 $$
$$ v''(b) = 0, \qquad p(b)v'''(b) - q(b)v'(b) = 0 $$
are positive. Prove the assertion.
2. Show that for any smooth p(x), q(x), r(x) and s(x) the eigenfunctions belonging to distinct eigenvalues are
orthogonal relative to the weight s(x). That is:
$$ \int_a^b v_m(x)v_k(x)s(x)\,dx = 0 \quad \text{if } \lambda_k \ne \lambda_m. $$
3. Find the eigenvalues and eigenfunctions for:
$$ \frac{d^4\phi}{dx^4} = \lambda\phi, \qquad \phi(0) = \phi''(0) = 0, \quad \phi(1) = \phi''(1) = 0. $$
Hint, Solution
31.5 Hints

Hint 31.1

Hint 31.2

Hint 31.3

Hint 31.4
Write the problem in Sturm-Liouville form to show that the eigenfunctions are orthogonal with respect to the
weighting function σ = e^{−x}.

Hint 31.5
Note that the problem is a regular Sturm-Liouville problem and thus the eigenvalues are real. Use the Rayleigh
quotient to show that there are only positive eigenvalues. Informally show that there are an infinite number of
eigenvalues with a graph.

Hint 31.6

Hint 31.7
Find the solution for λ = 0, λ < 0 and λ > 0. A problem is self-adjoint if it satisfies Green's identity.

Hint 31.8
Write the equation in self-adjoint form. The Bessel function of the first kind and order zero satisfies the problem,
$$ y'' + \frac{1}{x}y' + y = 0, \qquad |y(0)| < \infty, \quad y(r) = 0, $$
where r is a positive root of J₀(x). Make the change of variables ξ = x/r, u(ξ) = y(x).

Hint 31.9

Hint 31.10
31.6 Solutions

Solution 31.1
Recall that constant coefficient equations are shift invariant. If u(x) is a solution, then so is u(x − c).
We substitute y = e^{αx} into the constant coefficient equation.
$$ y'' + 2\mu y' + \lambda y = 0 $$
$$ \alpha^2 + 2\mu\alpha + \lambda = 0 $$
$$ \alpha = -\mu \pm \sqrt{\mu^2 - \lambda} $$
First we consider the case λ = μ². A set of solutions of the differential equation is
$$ \left\{e^{-\mu x},\ x\,e^{-\mu x}\right\} $$
The homogeneous solution that satisfies the left boundary condition y(a) = 0 is
$$ y = c(x-a)\,e^{-\mu x}. $$
Since only the trivial solution with c = 0 satisfies the right boundary condition, λ = μ² is not an eigenvalue.
Next we consider the case λ ≠ μ². We write
$$ \alpha = -\mu \pm i\sqrt{\lambda - \mu^2}. $$
Note that ℜ(√(λ − μ²)) ≥ 0. A set of solutions of the differential equation is
$$ \left\{e^{(-\mu\pm i\sqrt{\lambda-\mu^2})x}\right\} $$
By taking the sum and difference of these solutions we obtain a new set of linearly independent solutions.
$$ \left\{e^{-\mu x}\cos\left(\sqrt{\lambda-\mu^2}\,x\right),\ e^{-\mu x}\sin\left(\sqrt{\lambda-\mu^2}\,x\right)\right\} $$
The solution which satisfies the left boundary condition is
$$ y = c\,e^{-\mu x}\sin\left(\sqrt{\lambda-\mu^2}\,(x-a)\right). $$
For nontrivial solutions, the right boundary condition y(b) = 0 imposes the constraint
$$ e^{-\mu b}\sin\left(\sqrt{\lambda-\mu^2}\,(b-a)\right) = 0 $$
$$ \sqrt{\lambda-\mu^2}\,(b-a) = n\pi, \quad n \in \mathbb{Z} $$
We have the eigenvalues
$$ \lambda_n = \mu^2 + \left(\frac{n\pi}{b-a}\right)^2, \quad n \in \mathbb{Z}^+ $$
with the eigenfunctions
$$ \phi_n = e^{-\mu x}\sin\left(n\pi\frac{x-a}{b-a}\right). $$
To write the problem in Sturm-Liouville form, we multiply by the integrating factor
$$ e^{\int 2\mu\,dx} = e^{2\mu x}. $$
$$ \left(e^{2\mu x}y'\right)' + \lambda e^{2\mu x}y = 0, \qquad y(a) = y(b) = 0 $$
Now we verify that the Sturm-Liouville properties are satisfied.
• The eigenvalues
$$ \lambda_n = \mu^2 + \left(\frac{n\pi}{b-a}\right)^2, \quad n \in \mathbb{Z}^+ $$
are real.
• There are an infinite number of eigenvalues
$$ \lambda_1 < \lambda_2 < \lambda_3 < \cdots, $$
$$ \mu^2 + \left(\frac{\pi}{b-a}\right)^2 < \mu^2 + \left(\frac{2\pi}{b-a}\right)^2 < \mu^2 + \left(\frac{3\pi}{b-a}\right)^2 < \cdots. $$
• There is a least eigenvalue
$$ \lambda_1 = \mu^2 + \left(\frac{\pi}{b-a}\right)^2, $$
but there is no greatest eigenvalue, (λₙ → ∞ as n → ∞).
• For each eigenvalue, we found one unique, (to within a multiplicative constant), eigenfunction φₙ. We were
able to choose the eigenfunctions to be real-valued. The eigenfunction
$$ \phi_n = e^{-\mu x}\sin\left(n\pi\frac{x-a}{b-a}\right) $$
has exactly n − 1 zeros in the open interval a < x < b.
• The eigenfunctions are orthogonal with respect to the weighting function σ(x) = e^{2μx}.
$$ \int_a^b \phi_n(x)\phi_m(x)\sigma(x)\,dx = \int_a^b e^{-\mu x}\sin\left(n\pi\frac{x-a}{b-a}\right)e^{-\mu x}\sin\left(m\pi\frac{x-a}{b-a}\right)e^{2\mu x}\,dx $$
$$ = \int_a^b \sin\left(n\pi\frac{x-a}{b-a}\right)\sin\left(m\pi\frac{x-a}{b-a}\right)dx $$
$$ = \frac{b-a}{\pi}\int_0^{\pi}\sin(nx)\sin(mx)\,dx $$
$$ = \frac{b-a}{2\pi}\int_0^{\pi}\left(\cos((n-m)x) - \cos((n+m)x)\right)dx $$
$$ = 0 \quad \text{if } n \ne m $$
• The eigenfunctions are complete. Any piecewise continuous function f(x) defined on a ≤ x ≤ b can be
expanded in a series of eigenfunctions
$$ f(x) \sim \sum_{n=1}^{\infty} c_n\phi_n(x), $$
where
$$ c_n = \frac{\int_a^b f(x)\phi_n(x)\sigma(x)\,dx}{\int_a^b \phi_n^2(x)\sigma(x)\,dx}. $$
The sum converges to (1/2)(f(x⁻) + f(x⁺)). (We do not prove this property.)
• The eigenvalues can be related to the eigenfunctions with the Rayleigh quotient.
$$ \lambda_n = \frac{-\left[p\phi_n\frac{d\phi_n}{dx}\right]_a^b + \int_a^b\left(p\left(\frac{d\phi_n}{dx}\right)^2 - q\phi_n^2\right)dx}{\int_a^b \phi_n^2\sigma\,dx} $$
$$ = \frac{\int_a^b e^{2\mu x}\left(e^{-\mu x}\left(\frac{n\pi}{b-a}\cos\left(n\pi\frac{x-a}{b-a}\right) - \mu\sin\left(n\pi\frac{x-a}{b-a}\right)\right)\right)^2 dx}{\int_a^b \left(e^{-\mu x}\sin\left(n\pi\frac{x-a}{b-a}\right)\right)^2 e^{2\mu x}\,dx} $$
$$ = \frac{\int_a^b \left(\left(\frac{n\pi}{b-a}\right)^2\cos^2\left(n\pi\frac{x-a}{b-a}\right) - 2\mu\frac{n\pi}{b-a}\cos\left(n\pi\frac{x-a}{b-a}\right)\sin\left(n\pi\frac{x-a}{b-a}\right) + \mu^2\sin^2\left(n\pi\frac{x-a}{b-a}\right)\right)dx}{\int_a^b \sin^2\left(n\pi\frac{x-a}{b-a}\right)dx} $$
$$ = \frac{\int_0^{\pi}\left(\left(\frac{n\pi}{b-a}\right)^2\cos^2(x) - 2\mu\frac{n\pi}{b-a}\cos(x)\sin(x) + \mu^2\sin^2(x)\right)dx}{\int_0^{\pi}\sin^2(x)\,dx} $$
$$ = \mu^2 + \left(\frac{n\pi}{b-a}\right)^2 $$
Now we expand a function f(x) in a series of the eigenfunctions.
$$ f(x) \sim \sum_{n=1}^{\infty} c_n\,e^{-\mu x}\sin\left(n\pi\frac{x-a}{b-a}\right), $$
where
$$ c_n = \frac{\int_a^b f(x)\phi_n(x)\sigma(x)\,dx}{\int_a^b \phi_n^2(x)\sigma(x)\,dx} = \frac{2}{b-a}\int_a^b f(x)\,e^{\mu x}\sin\left(n\pi\frac{x-a}{b-a}\right)dx $$
Solution 31.2
This is an Euler equation. We substitute y = (z + 1)^α into the equation.
$$ y'' + \frac{\lambda}{(z+1)^2}y = 0 $$
$$ \alpha(\alpha-1) + \lambda = 0 $$
$$ \alpha = \frac{1 \pm \sqrt{1-4\lambda}}{2} $$
First consider the case λ = 1/4. A set of solutions is
$$ \left\{\sqrt{z+1},\ \sqrt{z+1}\,\log(z+1)\right\}. $$
Another set of solutions is
$$ \left\{\sqrt{z+1},\ \sqrt{z+1}\,\log\left(\frac{z+1}{2}\right)\right\}. $$
The solution which satisfies the boundary condition y(1) = 0 is
$$ y = c\sqrt{z+1}\,\log\left(\frac{z+1}{2}\right). $$
Since only the trivial solution satisfies y(2) = 0, λ = 1/4 is not an eigenvalue.
Now consider the case λ ≠ 1/4. A set of solutions is
$$ \left\{(z+1)^{(1+\sqrt{1-4\lambda})/2},\ (z+1)^{(1-\sqrt{1-4\lambda})/2}\right\}. $$
We can write this in terms of the exponential and the logarithm.
$$ \left\{\sqrt{z+1}\exp\left(\frac{i\sqrt{4\lambda-1}}{2}\log(z+1)\right),\ \sqrt{z+1}\exp\left(-\frac{i\sqrt{4\lambda-1}}{2}\log(z+1)\right)\right\}. $$
Note that
$$ \left\{\sqrt{z+1}\exp\left(\frac{i\sqrt{4\lambda-1}}{2}\log\left(\frac{z+1}{2}\right)\right),\ \sqrt{z+1}\exp\left(-\frac{i\sqrt{4\lambda-1}}{2}\log\left(\frac{z+1}{2}\right)\right)\right\} $$
is also a set of solutions. The new factor of 2 in the logarithm just multiplies the solutions by a constant. We
write the solution in terms of the cosine and sine.
$$ \left\{\sqrt{z+1}\cos\left(\frac{\sqrt{4\lambda-1}}{2}\log\left(\frac{z+1}{2}\right)\right),\ \sqrt{z+1}\sin\left(\frac{\sqrt{4\lambda-1}}{2}\log\left(\frac{z+1}{2}\right)\right)\right\}. $$
The solution of the differential equation which satisfies the boundary condition y(1) = 0 is
$$ y = c\sqrt{z+1}\sin\left(\frac{\sqrt{4\lambda-1}}{2}\log\left(\frac{z+1}{2}\right)\right). $$
Now we use the second boundary condition to find the eigenvalues.
$$ y(2) = 0 $$
$$ \sin\left(\frac{\sqrt{4\lambda-1}}{2}\log\left(\frac{3}{2}\right)\right) = 0 $$
$$ \frac{\sqrt{4\lambda-1}}{2}\log\left(\frac{3}{2}\right) = n\pi, \quad n \in \mathbb{Z} $$
$$ \lambda = \frac{1}{4}\left(1 + \left(\frac{2n\pi}{\log(3/2)}\right)^2\right), \quad n \in \mathbb{Z} $$
n = 0 gives us a trivial solution, so we discard it. Discarding duplicate solutions, the eigenvalues and eigenfunctions are
$$ \lambda_n = \frac{1}{4} + \left(\frac{n\pi}{\log(3/2)}\right)^2, \qquad y_n = \sqrt{z+1}\sin\left(n\pi\frac{\log((z+1)/2)}{\log(3/2)}\right), \quad n \in \mathbb{Z}^+. $$
Now we verify that the eigenvalues and eigenfunctions satisfy the properties of regular Sturm-Liouville problems.
• The eigenvalues are real.
• There are an infinite number of eigenvalues
$$ \lambda_1 < \lambda_2 < \lambda_3 < \cdots, $$
$$ \frac{1}{4} + \left(\frac{\pi}{\log(3/2)}\right)^2 < \frac{1}{4} + \left(\frac{2\pi}{\log(3/2)}\right)^2 < \frac{1}{4} + \left(\frac{3\pi}{\log(3/2)}\right)^2 < \cdots $$
• There is a least eigenvalue
$$ \lambda_1 = \frac{1}{4} + \left(\frac{\pi}{\log(3/2)}\right)^2, $$
but there is no greatest eigenvalue.
The eigenfunctions are orthogonal with respect to the weighting function (z) = 1/(z + 1)
2
. Let n ,= m.
_
2
1
y
n
(z)y
m
(z)(z) dz
=
_
2
1

z + 1 sin
_
n
log((z + 1)/2)
log(3/2)
_

z + 1 sin
_
m
log((z + 1)/2)
log(3/2)
_
1
(z + 1)
2
dz
=
_

0
sin(nx) sin(mx)
log(3/2)

dx
=
log(3/2)
2
_

0
(cos((n m)x) cos((n +m)x)) dx
= 0
The eigenfunctions are complete. A function f(x) dened on (1 . . . 2) has the series representation
f(x)

n=1
c
n
y
n
(x) =

n=1
c
n

z + 1 sin
_
n
log((z + 1)/2)
log(3/2)
_
,
where
c
n
=
y
n
[1/(z + 1)
2
[f)
y
n
[1/(z + 1)
2
[y
n
)
=
2
log(3/2)
_
2
1
sin
_
n
log((z + 1)/2)
log(3/2)
_
1
(z + 1)
3/2
f(x) dz
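As a quick numerical sanity check (not part of the original text), we can verify by quadrature that the eigenfunctions $y_n(z) = \sqrt{z+1}\,\sin(n\pi\log((z+1)/2)/\log(3/2))$ are orthogonal on $(1,2)$ with weight $\sigma(z) = 1/(z+1)^2$, and that $\langle y_n|\sigma|y_n\rangle = \log(3/2)/2$:

```python
import numpy as np

# Numerical check of the eigenfunctions
# y_n(z) = sqrt(z+1) sin(n pi log((z+1)/2) / log(3/2)) on (1, 2):
# they should be orthogonal with weight sigma(z) = 1/(z+1)^2, and
# <y_n | sigma | y_n> should equal log(3/2)/2.

z = np.linspace(1.0, 2.0, 20001)
sigma = 1.0 / (z + 1.0) ** 2

def y(n):
    return np.sqrt(z + 1.0) * np.sin(n * np.pi * np.log((z + 1.0) / 2.0) / np.log(1.5))

def inner(n, m):
    f = y(n) * y(m) * sigma
    return np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(z))  # trapezoid rule

print(inner(1, 2))  # approximately 0
print(inner(3, 3))  # approximately log(3/2)/2 ~= 0.2027
```

The second value matches the normalization constant that appears in the coefficient formula for $c_n$.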
Solution 31.3
Recall that Euler equations are scale invariant. If $u(x)$ is a solution, then so is $u(cx)$ for any nonzero constant $c$.
We substitute $y = x^\gamma$ into the Euler equation.
\begin{align*}
y'' + \frac{2a+1}{x} y' + \frac{\lambda}{x^2} y &= 0 \\
\gamma(\gamma - 1) + (2a+1)\gamma + \lambda &= 0 \\
\gamma^2 + 2a\gamma + \lambda &= 0 \\
\gamma &= -a \pm \sqrt{a^2 - \lambda}
\end{align*}
First we consider the case $\lambda = a^2$. A set of solutions of the differential equation is
\[ \left\{ x^{-a},\; x^{-a} \log x \right\}. \]
The homogeneous solution that satisfies the left boundary condition $y(a) = 0$ is
\[ y = c\, x^{-a} (\log x - \log a) = c\, x^{-a} \log\left(\frac{x}{a}\right). \]
Since only the trivial solution with $c = 0$ satisfies the right boundary condition, $\lambda = a^2$ is not an eigenvalue.
Next we consider the case $\lambda \neq a^2$. We write
\[ \gamma = -a \pm i\sqrt{\lambda - a^2}. \]
Note that $\Re(\sqrt{\lambda - a^2}) \geq 0$. A set of solutions of the differential equation is
\[ \left\{ x^{-a \pm i\sqrt{\lambda - a^2}} \right\} = \left\{ x^{-a}\, e^{\pm i\sqrt{\lambda - a^2}\, \log x} \right\}. \]
By taking the sum and difference of these solutions we obtain a new set of linearly independent solutions.
\[ \left\{ x^{-a} \cos\left( \sqrt{\lambda - a^2}\, \log x \right),\; x^{-a} \sin\left( \sqrt{\lambda - a^2}\, \log x \right) \right\} \]
The solution which satisfies the left boundary condition is
\[ y = c\, x^{-a} \sin\left( \sqrt{\lambda - a^2}\, \log\left(\frac{x}{a}\right) \right). \]
For nontrivial solutions, the right boundary condition $y(b) = 0$ imposes the constraint
\begin{align*}
b^{-a} \sin\left( \sqrt{\lambda - a^2}\, \log\left(\frac{b}{a}\right) \right) &= 0 \\
\sqrt{\lambda - a^2}\, \log\left(\frac{b}{a}\right) &= n\pi, \quad n \in \mathbb{Z}
\end{align*}
We have the eigenvalues
\[ \lambda_n = a^2 + \left( \frac{n\pi}{\log(b/a)} \right)^2, \quad n \in \mathbb{Z} \]
with the eigenfunctions
\[ \phi_n = x^{-a} \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right). \]
To write the problem in Sturm-Liouville form, we multiply by the integrating factor
\[ e^{\int (2a+1)/x\, dx} = e^{(2a+1)\log x} = x^{2a+1}. \]
\[ \left( x^{2a+1} y' \right)' + \lambda x^{2a-1} y = 0, \quad y(a) = y(b) = 0 \]
Now we verify that the Sturm-Liouville properties are satisfied.
The eigenvalues
\[ \lambda_n = a^2 + \left( \frac{n\pi}{\log(b/a)} \right)^2, \quad n \in \mathbb{Z} \]
are real.
There are an infinite number of eigenvalues, $\lambda_1 < \lambda_2 < \lambda_3 < \cdots$:
\[ a^2 + \left( \frac{\pi}{\log(b/a)} \right)^2 < a^2 + \left( \frac{2\pi}{\log(b/a)} \right)^2 < a^2 + \left( \frac{3\pi}{\log(b/a)} \right)^2 < \cdots \]
There is a least eigenvalue,
\[ \lambda_1 = a^2 + \left( \frac{\pi}{\log(b/a)} \right)^2, \]
but there is no greatest eigenvalue ($\lambda_n \to \infty$ as $n \to \infty$).
For each eigenvalue we found one unique (to within a multiplicative constant) eigenfunction $\phi_n$. We were able to choose the eigenfunctions to be real-valued. The eigenfunction
\[ \phi_n = x^{-a} \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right) \]
has exactly $n - 1$ zeros in the open interval $a < x < b$.
The eigenfunctions are orthogonal with respect to the weighting function $\sigma(x) = x^{2a-1}$.
\begin{align*}
\int_a^b \phi_n(x)\, \phi_m(x)\, \sigma(x)\, dx
&= \int_a^b x^{-a} \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right) x^{-a} \sin\left( \frac{m\pi \log(x/a)}{\log(b/a)} \right) x^{2a-1}\, dx \\
&= \int_a^b \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right) \sin\left( \frac{m\pi \log(x/a)}{\log(b/a)} \right) \frac{1}{x}\, dx \\
&= \frac{\log(b/a)}{\pi} \int_0^\pi \sin(nx)\, \sin(mx)\, dx \\
&= \frac{\log(b/a)}{2\pi} \int_0^\pi \left( \cos((n-m)x) - \cos((n+m)x) \right) dx \\
&= 0 \quad \text{if } n \neq m
\end{align*}
The eigenfunctions are complete. Any piecewise continuous function $f(x)$ defined on $a \leq x \leq b$ can be expanded in a series of eigenfunctions,
\[ f(x) \sim \sum_{n=1}^\infty c_n \phi_n(x), \]
where
\[ c_n = \frac{\int_a^b f(x)\, \phi_n(x)\, \sigma(x)\, dx}{\int_a^b \phi_n^2(x)\, \sigma(x)\, dx}. \]
The sum converges to $\frac{1}{2}\left( f(x^-) + f(x^+) \right)$. (We do not prove this property.)
The eigenvalues can be related to the eigenfunctions with the Rayleigh quotient.
\begin{align*}
\lambda_n &= \frac{\left[ -p\, \phi_n \frac{d\phi_n}{dx} \right]_a^b + \int_a^b \left( p \left( \frac{d\phi_n}{dx} \right)^2 - q\, \phi_n^2 \right) dx}{\int_a^b \phi_n^2\, \sigma\, dx} \\
&= \frac{\int_a^b x^{2a+1} \left( x^{-a-1} \left( \frac{n\pi}{\log(b/a)} \cos\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right) - a \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right) \right) \right)^2 dx}{\int_a^b \left( x^{-a} \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right) \right)^2 x^{2a-1}\, dx} \\
&= \frac{\int_a^b \left( \left( \frac{n\pi}{\log(b/a)} \right)^2 \cos^2(\theta) - 2a\, \frac{n\pi}{\log(b/a)} \cos(\theta) \sin(\theta) + a^2 \sin^2(\theta) \right) x^{-1}\, dx}{\int_a^b \sin^2(\theta)\, x^{-1}\, dx}, \quad \theta \equiv \frac{n\pi \log(x/a)}{\log(b/a)} \\
&= \frac{\int_0^\pi \left( \left( \frac{n\pi}{\log(b/a)} \right)^2 \cos^2(nx) - 2a\, \frac{n\pi}{\log(b/a)} \cos(nx) \sin(nx) + a^2 \sin^2(nx) \right) dx}{\int_0^\pi \sin^2(nx)\, dx} \\
&= a^2 + \left( \frac{n\pi}{\log(b/a)} \right)^2
\end{align*}
Now we expand a function $f(x)$ in a series of the eigenfunctions,
\[ f(x) \sim \sum_{n=1}^\infty c_n\, x^{-a} \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right), \]
where
\[ c_n = \frac{\int_a^b f(x)\, \phi_n(x)\, \sigma(x)\, dx}{\int_a^b \phi_n^2(x)\, \sigma(x)\, dx} = \frac{2}{\log(b/a)} \int_a^b f(x)\, x^{a-1} \sin\left( \frac{n\pi \log(x/a)}{\log(b/a)} \right) dx. \]
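A numerical sketch of the Rayleigh-quotient identity above. Since the symbol $a$ plays two roles in the text (exponent parameter and left endpoint), the code uses the hypothetical values `a_exp = 0.7` for the exponent and the interval `(x0, x1) = (1, 3)` standing in for $(a, b)$:

```python
import numpy as np

# Numerical sketch for lambda_n = a^2 + (n pi / log(b/a))^2, with
# hypothetical parameter values (a_exp plays the role of "a" in the
# exponent, (x0, x1) plays the role of the interval (a, b)).

a_exp, x0, x1 = 0.7, 1.0, 3.0
L = np.log(x1 / x0)
n = 3
x = np.linspace(x0, x1, 200001)
phi = x ** (-a_exp) * np.sin(n * np.pi * np.log(x / x0) / L)

def trapz(f):
    return np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(x))

p = x ** (2 * a_exp + 1)        # Sturm-Liouville coefficient p(x)
sigma = x ** (2 * a_exp - 1)    # weighting function sigma(x)
dphi = np.gradient(phi, x)      # second-order finite differences
rayleigh = trapz(p * dphi ** 2) / trapz(phi ** 2 * sigma)
print(rayleigh, a_exp ** 2 + (n * np.pi / L) ** 2)  # nearly equal
```

The boundary term $[-p\phi_n\phi_n']_a^b$ vanishes because $\phi_n$ is zero at both endpoints, and $q = 0$ here, so the quotient reduces to the ratio computed above.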
Solution 31.4
\[ y'' - y' + \lambda y = 0, \quad y(0) = y(1) = 0. \]
The factor that will put this equation in Sturm-Liouville form is
\[ F(x) = \exp\left( \int^x -1\, dx \right) = e^{-x}. \]
The differential equation becomes
\[ \frac{d}{dx}\left( e^{-x} y' \right) + \lambda\, e^{-x} y = 0. \]
Thus we see that the eigenfunctions will be orthogonal with respect to the weighting function $\sigma = e^{-x}$.
Substituting $y = e^{\gamma x}$ into the differential equation yields
\begin{align*}
\gamma^2 - \gamma + \lambda &= 0 \\
\gamma &= \frac{1 \pm \sqrt{1 - 4\lambda}}{2} = \frac{1}{2} \pm \sqrt{1/4 - \lambda}.
\end{align*}
If $\lambda < 1/4$ then the solutions to the differential equation are exponential and only the trivial solution satisfies the boundary conditions.
If $\lambda = 1/4$ then the solution is $y = c_1 e^{x/2} + c_2 x\, e^{x/2}$ and again only the trivial solution satisfies the boundary conditions.
Now consider the case $\lambda > 1/4$.
\[ \gamma = \frac{1}{2} \pm i\sqrt{\lambda - 1/4} \]
The solutions are
\[ e^{x/2} \cos\left( \sqrt{\lambda - 1/4}\, x \right), \quad e^{x/2} \sin\left( \sqrt{\lambda - 1/4}\, x \right). \]
The left boundary condition gives us
\[ y = c\, e^{x/2} \sin\left( \sqrt{\lambda - 1/4}\, x \right). \]
The right boundary condition demands that
\[ \sqrt{\lambda - 1/4} = n\pi, \quad n = 1, 2, \ldots \]
Thus we see that the eigenvalues and eigenfunctions are
\[ \lambda_n = \frac{1}{4} + (n\pi)^2, \quad y_n = e^{x/2} \sin(n\pi x). \]
If $f(x)$ is a piecewise continuous function then we can expand it in a series of the eigenfunctions.
\[ f(x) = \sum_{n=1}^\infty a_n\, e^{x/2} \sin(n\pi x) \]
The coefficients are
\begin{align*}
a_n &= \frac{\int_0^1 f(x)\, e^{-x}\, e^{x/2} \sin(n\pi x)\, dx}{\int_0^1 e^{-x} \left( e^{x/2} \sin(n\pi x) \right)^2 dx} \\
&= \frac{\int_0^1 f(x)\, e^{-x/2} \sin(n\pi x)\, dx}{\int_0^1 \sin^2(n\pi x)\, dx} \\
&= 2 \int_0^1 f(x)\, e^{-x/2} \sin(n\pi x)\, dx.
\end{align*}
Solution 31.5
Since this is a Sturm-Liouville problem, there are only real eigenvalues. By the Rayleigh quotient, the eigenvalues are
\begin{align*}
\lambda &= \frac{\left[ -\phi \frac{d\phi}{dx} \right]_0^1 + \int_0^1 \left( \frac{d\phi}{dx} \right)^2 dx}{\int_0^1 \phi^2\, dx} \\
&= \frac{\phi^2(1) + \int_0^1 \left( \frac{d\phi}{dx} \right)^2 dx}{\int_0^1 \phi^2\, dx}.
\end{align*}
This demonstrates that there are only positive eigenvalues. The general solution of the differential equation for positive, real $\lambda$ is
\[ y = c_1 \cos\left( \sqrt{\lambda}\, x \right) + c_2 \sin\left( \sqrt{\lambda}\, x \right). \]
The solution that satisfies the left boundary condition is
\[ y = c \sin\left( \sqrt{\lambda}\, x \right). \]
For nontrivial solutions we must have
\begin{align*}
\sin\left( \sqrt{\lambda} \right) + \sqrt{\lambda} \cos\left( \sqrt{\lambda} \right) &= 0 \\
\sqrt{\lambda} &= -\tan\left( \sqrt{\lambda} \right).
\end{align*}
The positive solutions of this equation are eigenvalues with corresponding eigenfunctions $\sin\left( \sqrt{\lambda}\, x \right)$. In Figure 31.1 we plot the functions $x$ and $-\tan(x)$ and draw vertical lines at $x = (n - 1/2)\pi$, $n \in \mathbb{N}$.
Figure 31.1: $x$ and $-\tan(x)$.
From this we see that there are an infinite number of eigenvalues, $\lambda_1 < \lambda_2 < \lambda_3 < \cdots$. In the limit as $n \to \infty$, $\sqrt{\lambda_n} \to (n - 1/2)\pi$. The limit is approached from above.
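The graphical argument can be checked numerically: in each interval $((n - 1/2)\pi,\, n\pi)$ the function $g(x) = x + \tan(x)$ changes sign exactly once, bracketing $x_n = \sqrt{\lambda_n}$. A minimal bisection sketch:

```python
import math

# Solve sqrt(lambda) = -tan(sqrt(lambda)): bisect g(x) = x + tan(x)
# on each interval ((n - 1/2) pi, n pi), where g changes sign once.

def g(x):
    return x + math.tan(x)

def bisect(lo, hi):
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

roots = [bisect((n - 0.5) * math.pi + 1e-9, n * math.pi - 1e-9)
         for n in range(1, 6)]
print(roots)                    # each just above (n - 1/2) pi
print([r * r for r in roots])   # the eigenvalues lambda_n
```

The first root is about 2.0288, and the roots approach $(n - 1/2)\pi$ from above as claimed.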
Solution 31.6
Consider the eigenvalue problem
\[ y'' + y = \lambda y, \quad y(0) = 0, \quad y(1) + y'(1) = 0. \]
From Exercise 31.5 we see that the eigenvalues satisfy
\[ \sqrt{1 - \lambda} = -\tan\left( \sqrt{1 - \lambda} \right) \]
and that there are an infinite number of eigenvalues. For large $n$, $\sqrt{1 - \lambda_n} \approx (n - 1/2)\pi$. The eigenfunctions are
\[ \phi_n = \sin\left( \sqrt{1 - \lambda_n}\, x \right). \]
To solve the inhomogeneous problem, we expand the solution and the inhomogeneity in a series of the eigenfunctions.
\[ f = \sum_{n=1}^\infty f_n \phi_n, \quad f_n = \frac{\int_0^1 f(x)\, \phi_n(x)\, dx}{\int_0^1 \phi_n^2(x)\, dx} \]
\[ y = \sum_{n=1}^\infty y_n \phi_n \]
We substitute the expansions into the differential equation to determine the coefficients.
\begin{align*}
y'' + y &= f \\
\sum_{n=1}^\infty \lambda_n y_n \phi_n &= \sum_{n=1}^\infty f_n \phi_n \\
y &= \sum_{n=1}^\infty \frac{f_n}{\lambda_n} \sin\left( \sqrt{1 - \lambda_n}\, x \right)
\end{align*}
Solution 31.7
First consider $\lambda = 0$. The general solution is
\[ y = c_1 + c_2 x. \]
$y = cx$ satisfies the boundary conditions. Thus $\lambda = 0$ is an eigenvalue.
Now consider negative real $\lambda$. The general solution is
\[ y = c_1 \cosh\left( \sqrt{-\lambda}\, x \right) + c_2 \sinh\left( \sqrt{-\lambda}\, x \right). \]
The solution that satisfies the left boundary condition is
\[ y = c \sinh\left( \sqrt{-\lambda}\, x \right). \]
For nontrivial solutions of the boundary value problem, there must be negative real solutions of
\[ \sqrt{-\lambda} - \sinh\left( \sqrt{-\lambda} \right) = 0. \]
Since $x = \sinh x$ has no nonzero real solutions, this equation has no solutions for negative real $\lambda$. There are no negative real eigenvalues.
Finally consider positive real $\lambda$. The general solution is
\[ y = c_1 \cos\left( \sqrt{\lambda}\, x \right) + c_2 \sin\left( \sqrt{\lambda}\, x \right). \]
The solution that satisfies the left boundary condition is
\[ y = c \sin\left( \sqrt{\lambda}\, x \right). \]
For nontrivial solutions of the boundary value problem, there must be positive real solutions of
\[ \sqrt{\lambda} - \sin\left( \sqrt{\lambda} \right) = 0. \]
Since $x = \sin x$ has no nonzero real solutions, this equation has no solutions for positive real $\lambda$. There are no positive real eigenvalues.
There is only one real eigenvalue, $\lambda = 0$, with corresponding eigenfunction $\phi = x$.
The difficulty with the boundary conditions, $y(0) = 0$, $y'(0) - y(1) = 0$, is that the problem is not self-adjoint. We demonstrate this by showing that the problem does not satisfy Green's identity. Let $u$ and $v$ be two functions that satisfy the boundary conditions, but not necessarily the differential equation.
\begin{align*}
\langle u, L[v] \rangle - \langle L[u], v \rangle &= \langle u, v'' \rangle - \langle u'', v \rangle \\
&= [u v']_0^1 - \langle u', v' \rangle - [u' v]_0^1 + \langle u', v' \rangle \\
&= u(1) v'(1) - u'(1) v(1)
\end{align*}
Green's identity is not satisfied,
\[ \langle u, L[v] \rangle - \langle L[u], v \rangle \neq 0; \]
the problem is not self-adjoint.
Solution 31.8
First we write the equation in formally self-adjoint form,
\[ L[y] \equiv (x y')' = -\lambda x y, \quad |y(0)| < \infty, \quad y(1) = 0. \]
Let $\lambda$ be an eigenvalue with corresponding eigenfunction $\phi$. We derive the Rayleigh quotient for $\lambda$.
\begin{align*}
\langle \phi, L[\phi] \rangle &= \langle \phi, -\lambda x \phi \rangle \\
[\phi\, x \phi']_0^1 - \langle \phi', x \phi' \rangle &= -\lambda \langle \phi, x \phi \rangle
\end{align*}
We apply the boundary conditions and solve for $\lambda$.
\[ \lambda = \frac{\langle \phi', x \phi' \rangle}{\langle \phi, x \phi \rangle} \]
The Bessel equation of the first kind and order zero satisfies the problem,
\[ y'' + \frac{1}{x} y' + y = 0, \quad |y(0)| < \infty, \quad y(r) = 0, \]
where $r$ is a positive root of $J_0(x)$. We make the change of variables $\xi = x/r$, $u(\xi) = y(x)$ to obtain the problem
\begin{align*}
\frac{1}{r^2} u'' + \frac{1}{r\xi} \frac{1}{r} u' + u &= 0, \quad |u(0)| < \infty, \quad u(1) = 0, \\
u'' + \frac{1}{\xi} u' + r^2 u &= 0, \quad |u(0)| < \infty, \quad u(1) = 0.
\end{align*}
Now $r^2$ is the eigenvalue of the problem for $u(\xi)$. From the Rayleigh quotient, the minimum eigenvalue obeys the inequality
\[ r^2 \leq \frac{\langle \phi', x \phi' \rangle}{\langle \phi, x \phi \rangle}, \]
where $\phi$ is any test function that satisfies the boundary conditions. Taking $\phi = 1 - x$ we obtain,
\[ r^2 \leq \frac{\int_0^1 (-1)\, x\, (-1)\, dx}{\int_0^1 (1 - x)\, x\, (1 - x)\, dx} = 6, \quad r \leq \sqrt{6}. \]
Thus the smallest zero of $J_0(x)$ is less than or equal to $\sqrt{6} \approx 2.4494$. (The smallest zero of $J_0(x)$ is approximately 2.40483.)
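A short sketch confirming the comparison with $\sqrt{6}$: locate the first zero of $J_0$ by bisection, computing $J_0$ from its power series $J_0(x) = \sum_k (-1)^k (x/2)^{2k}/(k!)^2$ (rapidly convergent for $x \leq 3$).

```python
import math

# Locate the first zero of J_0 by bisection, using the power series
# J_0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2, and compare with sqrt(6).

def j0(x):
    return sum((-1) ** k * (x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(30))

lo, hi = 2.0, 3.0  # J_0(2) > 0 > J_0(3)
for _ in range(60):
    mid = (lo + hi) / 2.0
    if j0(lo) * j0(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
first_zero = (lo + hi) / 2.0

print(first_zero, math.sqrt(6))  # about 2.40483 <= 2.44949
```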
Solution 31.9
We assume that $0 < l < \pi$.
Recall that the solution of a second order differential equation with piecewise continuous coefficient functions is piecewise $C^2$. This means that the solution is $C^2$ except for a finite number of points where it is $C^1$.
First consider the case $\lambda = 0$. A set of linearly independent solutions of the differential equation is $\{1, z\}$. The solution which satisfies $y(0) = 0$ is $y_1 = c_1 z$. The solution which satisfies $y(\pi) = 0$ is $y_2 = c_2 (\pi - z)$. There is a solution for the problem if there are values of $c_1$ and $c_2$ such that $y_1$ and $y_2$ have the same position and slope at $z = l$.
\[ y_1(l) = y_2(l), \quad y_1'(l) = y_2'(l) \]
\[ c_1 l = c_2 (\pi - l), \quad c_1 = -c_2 \]
Since there is only the trivial solution, $c_1 = c_2 = 0$, $\lambda = 0$ is not an eigenvalue.
Now consider $\lambda \neq 0$. For $0 \leq z \leq l$ a set of linearly independent solutions is
\[ \left\{ \cos(\sqrt{\lambda a}\, z),\; \sin(\sqrt{\lambda a}\, z) \right\}. \]
The solution which satisfies $y(0) = 0$ is
\[ y_1 = c_1 \sin(\sqrt{\lambda a}\, z). \]
For $l < z \leq \pi$ a set of linearly independent solutions is
\[ \left\{ \cos(\sqrt{\lambda b}\, z),\; \sin(\sqrt{\lambda b}\, z) \right\}. \]
The solution which satisfies $y(\pi) = 0$ is
\[ y_2 = c_2 \sin(\sqrt{\lambda b}\, (\pi - z)). \]
$\lambda \neq 0$ is an eigenvalue if there are nontrivial solutions of
\[ y_1(l) = y_2(l), \quad y_1'(l) = y_2'(l) \]
\[ c_1 \sin(\sqrt{\lambda a}\, l) = c_2 \sin(\sqrt{\lambda b}\, (\pi - l)), \quad c_1 \sqrt{\lambda a} \cos(\sqrt{\lambda a}\, l) = -c_2 \sqrt{\lambda b} \cos(\sqrt{\lambda b}\, (\pi - l)) \]
We divide the second equation by $\sqrt{\lambda}$ (since $\lambda \neq 0$) and write this as a linear algebra problem.
\[ \begin{pmatrix} \sin(\sqrt{\lambda a}\, l) & -\sin(\sqrt{\lambda b}\, (\pi - l)) \\ \sqrt{a} \cos(\sqrt{\lambda a}\, l) & \sqrt{b} \cos(\sqrt{\lambda b}\, (\pi - l)) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \]
This system of equations has nontrivial solutions if and only if the determinant of the matrix is zero.
\[ \sqrt{b} \sin(\sqrt{\lambda a}\, l) \cos(\sqrt{\lambda b}\, (\pi - l)) + \sqrt{a} \cos(\sqrt{\lambda a}\, l) \sin(\sqrt{\lambda b}\, (\pi - l)) = 0 \]
We can use trigonometric identities to write this equation as
\[ (\sqrt{b} - \sqrt{a}) \sin\left( \sqrt{\lambda} \left( l\sqrt{a} - (\pi - l)\sqrt{b} \right) \right) + (\sqrt{b} + \sqrt{a}) \sin\left( \sqrt{\lambda} \left( l\sqrt{a} + (\pi - l)\sqrt{b} \right) \right) = 0 \]
Clearly this equation has an infinite number of solutions for real, positive $\lambda$. However, it is not clear that this equation does not have non-real solutions. In order to prove that, we will show that the problem is self-adjoint. Before going on to that we note that the eigenfunctions have the form
\[ \phi_n(z) = \begin{cases} \sin\left( \sqrt{a \lambda_n}\, z \right) & 0 \leq z \leq l \\ \sin\left( \sqrt{b \lambda_n}\, (\pi - z) \right) & l < z \leq \pi. \end{cases} \]
Now we prove that the problem is self-adjoint. We consider the class of functions which are $C^2$ in $(0 \ldots \pi)$ except at the interior point $x = l$, where they are $C^1$, and which satisfy the boundary conditions $y(0) = y(\pi) = 0$. Note that the differential operator is not defined at the point $x = l$. Thus Green's identity,
\[ \langle u \,|\, L v \rangle = \langle L u \,|\, v \rangle, \]
is not well-defined. To remedy this we must define a new inner product. We choose
\[ \langle u \,|\, v \rangle \equiv \int_0^l u v\, dx + \int_l^\pi u v\, dx. \]
This new inner product does not require differentiability at the point $x = l$.
The problem is self-adjoint if Green's identity is satisfied. Let $u$ and $v$ be elements of our class of functions. In addition to the boundary conditions, we will use the fact that $u$ and $v$ satisfy $y(l^-) = y(l^+)$ and $y'(l^-) = y'(l^+)$.
\begin{align*}
\langle v \,|\, L u \rangle &= \int_0^l v u''\, dx + \int_l^\pi v u''\, dx \\
&= [v u']_0^l - \int_0^l v' u'\, dx + [v u']_l^\pi - \int_l^\pi v' u'\, dx \\
&= v(l) u'(l) - \int_0^l v' u'\, dx - v(l) u'(l) - \int_l^\pi v' u'\, dx \\
&= -\int_0^l v' u'\, dx - \int_l^\pi v' u'\, dx \\
&= -[v' u]_0^l + \int_0^l v'' u\, dx - [v' u]_l^\pi + \int_l^\pi v'' u\, dx \\
&= -v'(l) u(l) + \int_0^l v'' u\, dx + v'(l) u(l) + \int_l^\pi v'' u\, dx \\
&= \int_0^l v'' u\, dx + \int_l^\pi v'' u\, dx \\
&= \langle L v \,|\, u \rangle
\end{align*}
The problem is self-adjoint. Hence the eigenvalues are real. There are an infinite number of positive, real eigenvalues $\lambda_n$.
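The determinant condition can be solved numerically for concrete coefficients. A sketch with the assumed sample values $a = 1$, $b = 4$, $l = \pi/2$ (the text leaves $a$, $b$, $l$ general): scan $D(\lambda) = \sqrt{b}\sin(\sqrt{\lambda a}\,l)\cos(\sqrt{\lambda b}(\pi - l)) + \sqrt{a}\cos(\sqrt{\lambda a}\,l)\sin(\sqrt{\lambda b}(\pi - l))$ for sign changes and bisect.

```python
import math

# Assumed sample values; the text leaves a, b, l general.
a, b, l = 1.0, 4.0, math.pi / 2.0

def D(lam):
    A = math.sqrt(lam * a) * l
    B = math.sqrt(lam * b) * (math.pi - l)
    return (math.sqrt(b) * math.sin(A) * math.cos(B)
            + math.sqrt(a) * math.cos(A) * math.sin(B))

eigenvalues = []
grid = [0.01 * k for k in range(1, 4001)]
for lo, hi in zip(grid[:-1], grid[1:]):
    if D(lo) * D(hi) < 0.0:          # sign change brackets a root
        for _ in range(60):          # bisection
            mid = (lo + hi) / 2.0
            if D(lo) * D(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        eigenvalues.append((lo + hi) / 2.0)

print(eigenvalues[:5])  # first few positive eigenvalues
```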
Solution 31.10
1. Let $v$ be an eigenfunction with the eigenvalue $\lambda$. We start with the differential equation and then take the inner product with $v$.
\begin{align*}
(p v'')'' - (q v')' + r v &= \lambda s v \\
\langle v, (p v'')'' - (q v')' + r v \rangle &= \langle v, \lambda s v \rangle
\end{align*}
We use integration by parts and utilize the homogeneous boundary conditions.
\begin{align*}
[v (p v'')']_a^b - \langle v', (p v'')' \rangle - [v q v']_a^b + \langle v', q v' \rangle + \langle v, r v \rangle &= \lambda \langle v, s v \rangle \\
-[v' p v'']_a^b + \langle v'', p v'' \rangle + \langle v', q v' \rangle + \langle v, r v \rangle &= \lambda \langle v, s v \rangle \\
\lambda &= \frac{\langle v'', p v'' \rangle + \langle v', q v' \rangle + \langle v, r v \rangle}{\langle v, s v \rangle}
\end{align*}
We see that if $p, q, r, s \geq 0$ then the eigenvalues will be positive. (Of course we assume that $p$ and $s$ are not identically zero.)
2. First we prove that this problem is self-adjoint. Let $u$ and $v$ be functions that satisfy the boundary conditions, but do not necessarily satisfy the differential equation.
\[ \langle v, L[u] \rangle - \langle L[v], u \rangle = \langle v, (p u'')'' - (q u')' + r u \rangle - \langle (p v'')'' - (q v')' + r v, u \rangle \]
Following our work in part (a) we use integration by parts to move the derivatives.
\begin{align*}
&= \left( \langle v'', p u'' \rangle + \langle v', q u' \rangle + \langle v, r u \rangle \right) - \left( \langle p v'', u'' \rangle + \langle q v', u' \rangle + \langle r v, u \rangle \right) \\
&= 0
\end{align*}
This problem satisfies Green's identity,
\[ \langle v, L[u] \rangle - \langle L[v], u \rangle = 0, \]
and is thus self-adjoint.
Let $v_k$ and $v_m$ be eigenfunctions corresponding to the distinct eigenvalues $\lambda_k$ and $\lambda_m$. We start with Green's identity.
\begin{align*}
\langle v_k, L[v_m] \rangle - \langle L[v_k], v_m \rangle &= 0 \\
\langle v_k, \lambda_m s v_m \rangle - \langle \lambda_k s v_k, v_m \rangle &= 0 \\
(\lambda_m - \lambda_k) \langle v_k, s v_m \rangle &= 0 \\
\langle v_k, s v_m \rangle &= 0
\end{align*}
The eigenfunctions are orthogonal with respect to the weighting function $s$.
3. From part (a) we know that there are only positive eigenvalues. The general solution of the differential equation is
\[ \phi = c_1 \cos(\lambda^{1/4} x) + c_2 \cosh(\lambda^{1/4} x) + c_3 \sin(\lambda^{1/4} x) + c_4 \sinh(\lambda^{1/4} x). \]
Applying the condition $\phi(0) = 0$ we obtain
\[ \phi = c_1 \left( \cos(\lambda^{1/4} x) - \cosh(\lambda^{1/4} x) \right) + c_2 \sin(\lambda^{1/4} x) + c_3 \sinh(\lambda^{1/4} x). \]
The condition $\phi''(0) = 0$ reduces this to
\[ \phi = c_1 \sin(\lambda^{1/4} x) + c_2 \sinh(\lambda^{1/4} x). \]
We substitute the solution into the two right boundary conditions.
\begin{align*}
c_1 \sin(\lambda^{1/4}) + c_2 \sinh(\lambda^{1/4}) &= 0 \\
-c_1 \lambda^{1/2} \sin(\lambda^{1/4}) + c_2 \lambda^{1/2} \sinh(\lambda^{1/4}) &= 0
\end{align*}
We see that $\sin(\lambda^{1/4}) = 0$. The eigenvalues and eigenfunctions are
\[ \lambda_n = (n\pi)^4, \quad \phi_n = \sin(n\pi x), \quad n \in \mathbb{N}. \]
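A one-line numerical confirmation that $\phi_n = \sin(n\pi x)$ satisfies the fourth-order equation $\phi'''' = \lambda_n \phi$ with $\lambda_n = (n\pi)^4$, via the standard five-point finite-difference approximation of the fourth derivative:

```python
import math

# Check phi'''' = (n pi)^4 phi for phi(x) = sin(n pi x) at a sample
# interior point, using the five-point fourth-derivative stencil.

n, x, h = 2, 0.3, 1e-2
phi = lambda t: math.sin(n * math.pi * t)
lam = (n * math.pi) ** 4

d4 = (phi(x - 2*h) - 4*phi(x - h) + 6*phi(x) - 4*phi(x + h) + phi(x + 2*h)) / h**4
print(d4, lam * phi(x))  # nearly equal
```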
Chapter 32
Integrals and Convergence
Never try to teach a pig to sing. It wastes your time and annoys the pig.
-?
32.1 Uniform Convergence of Integrals
Consider the improper integral
\[ \int_c^\infty f(x, t)\, dt. \]
The integral is convergent to $S(x)$ if, given any $\epsilon > 0$, there exists $T(x, \epsilon)$ such that
\[ \left| \int_c^\tau f(x, t)\, dt - S(x) \right| < \epsilon \quad \text{for all } \tau > T(x, \epsilon). \]
The integral is uniformly convergent if $T$ is independent of $x$.
Similar to the Weierstrass M-test for infinite sums we have a uniform convergence test for integrals. If there exists a continuous function $M(t)$ such that $|f(x, t)| \leq M(t)$ and $\int_c^\infty M(t)\, dt$ is convergent, then $\int_c^\infty f(x, t)\, dt$ is uniformly convergent.
If $\int_c^\infty f(x, t)\, dt$ is uniformly convergent, we have the following properties:
If $f(x, t)$ is continuous for $x \in [a, b]$ and $t \in [c, \infty)$ then for $a < x_0 < b$,
\[ \lim_{x \to x_0} \int_c^\infty f(x, t)\, dt = \int_c^\infty \left( \lim_{x \to x_0} f(x, t) \right) dt. \]
If $a \leq x_1 < x_2 \leq b$ then we can interchange the order of integration.
\[ \int_{x_1}^{x_2} \left( \int_c^\infty f(x, t)\, dt \right) dx = \int_c^\infty \left( \int_{x_1}^{x_2} f(x, t)\, dx \right) dt \]
If $\frac{\partial f}{\partial x}$ is continuous, then
\[ \frac{d}{dx} \int_c^\infty f(x, t)\, dt = \int_c^\infty \frac{\partial}{\partial x} f(x, t)\, dt. \]
32.2 The Riemann-Lebesgue Lemma
Result 32.2.1 If $\int_a^b |f(x)|\, dx$ exists, then $\int_a^b f(x) \sin(\lambda x)\, dx \to 0$ as $\lambda \to \infty$.
Before we try to justify the Riemann-Lebesgue lemma, we will need a preliminary result. Let $\lambda$ be a positive constant.
\[ \left| \int_a^b \sin(\lambda x)\, dx \right| = \left| \left[ -\frac{1}{\lambda} \cos(\lambda x) \right]_a^b \right| \leq \frac{2}{\lambda} \]
We will prove the Riemann-Lebesgue lemma for the case when $f(x)$ has limited total fluctuation on the interval $(a, b)$. We can express $f(x)$ as the difference of two functions,
\[ f(x) = \psi^+(x) - \psi^-(x), \]
where $\psi^+$ and $\psi^-$ are positive, increasing, bounded functions.
From the mean value theorem for positive, increasing functions, there exists an $x_0$, $a \leq x_0 \leq b$, such that
\[ \left| \int_a^b \psi^+(x) \sin(\lambda x)\, dx \right| = \left| \psi^+(b) \int_{x_0}^b \sin(\lambda x)\, dx \right| \leq |\psi^+(b)| \frac{2}{\lambda}. \]
Similarly,
\[ \left| \int_a^b \psi^-(x) \sin(\lambda x)\, dx \right| \leq |\psi^-(b)| \frac{2}{\lambda}. \]
Thus
\begin{align*}
\left| \int_a^b f(x) \sin(\lambda x)\, dx \right| &\leq \left( |\psi^+(b)| + |\psi^-(b)| \right) \frac{2}{\lambda} \\
&\to 0 \text{ as } \lambda \to \infty.
\end{align*}
32.3 Cauchy Principal Value
32.3.1 Integrals on an Infinite Domain
The improper integral $\int_{-\infty}^\infty f(x)\, dx$ is defined
\[ \int_{-\infty}^\infty f(x)\, dx = \lim_{a \to \infty} \int_{-a}^0 f(x)\, dx + \lim_{b \to \infty} \int_0^b f(x)\, dx, \]
when these limits exist. The Cauchy principal value of the integral is defined
\[ \mathrm{PV} \int_{-\infty}^\infty f(x)\, dx = \lim_{a \to \infty} \int_{-a}^a f(x)\, dx. \]
The principal value may exist when the integral diverges.
Example 32.3.1 $\int_{-\infty}^\infty x\, dx$ diverges, but
\[ \mathrm{PV} \int_{-\infty}^\infty x\, dx = \lim_{a \to \infty} \int_{-a}^a x\, dx = \lim_{a \to \infty} (0) = 0. \]
If the improper integral converges, then the Cauchy principal value exists and is equal to the value of the integral. The principal value of the integral of an odd function is zero. If the principal value of the integral of an even function exists, then the integral converges.
32.3.2 Singular Functions
Let $f(x)$ have a singularity at $x = 0$. Let $a$ and $b$ satisfy $a < 0 < b$. The integral of $f(x)$ is defined
\[ \int_a^b f(x)\, dx = \lim_{\epsilon_1 \to 0^+} \int_a^{-\epsilon_1} f(x)\, dx + \lim_{\epsilon_2 \to 0^+} \int_{\epsilon_2}^b f(x)\, dx, \]
when the limits exist. The Cauchy principal value of the integral is defined
\[ \mathrm{PV} \int_a^b f(x)\, dx = \lim_{\epsilon \to 0^+} \left( \int_a^{-\epsilon} f(x)\, dx + \int_\epsilon^b f(x)\, dx \right), \]
when the limit exists.
Example 32.3.2 The integral
\[ \int_{-1}^2 \frac{1}{x}\, dx \]
diverges, but the principal value exists.
\begin{align*}
\mathrm{PV} \int_{-1}^2 \frac{1}{x}\, dx &= \lim_{\epsilon \to 0^+} \left( \int_{-1}^{-\epsilon} \frac{1}{x}\, dx + \int_\epsilon^2 \frac{1}{x}\, dx \right) \\
&= \lim_{\epsilon \to 0^+} \left( -\int_\epsilon^1 \frac{1}{x}\, dx + \int_\epsilon^2 \frac{1}{x}\, dx \right) \\
&= \int_1^2 \frac{1}{x}\, dx \\
&= \log 2
\end{align*}
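The symmetric-cutoff cancellation in Example 32.3.2 can be seen numerically: excise $(-\epsilon, \epsilon)$, integrate the two pieces, and the divergent contributions cancel, leaving $\log 2$.

```python
import numpy as np

# Approximate PV int_{-1}^{2} dx/x with a symmetric excision (-eps, eps);
# the divergences on the two sides cancel and the value tends to log 2.

def pv(eps, n=200001):
    def integrate(a, b):
        x = np.linspace(a, b, n)
        f = 1.0 / x
        return np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(x))  # trapezoid
    return integrate(-1.0, -eps) + integrate(eps, 2.0)

print(pv(1e-3), np.log(2.0))  # nearly equal
```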
Chapter 33
The Laplace Transform
33.1 The Laplace Transform
The Laplace transform of the function $f(t)$ is defined
\[ \mathcal{L}[f(t)] = \int_0^\infty e^{-st} f(t)\, dt, \]
for all values of $s$ for which the integral exists. The Laplace transform of $f(t)$ is a function of $s$ which we will denote $\hat{f}(s)$. (Denoting the Laplace transform of $f(t)$ as $F(s)$ is also common.)
A function $f(t)$ is of exponential order $\alpha$ if there exist constants $t_0$ and $M$ such that
\[ |f(t)| < M\, e^{\alpha t}, \quad \text{for all } t > t_0. \]
If $\int_0^{t_0} f(t)\, dt$ exists and $f(t)$ is of exponential order $\alpha$ then the Laplace transform $\hat{f}(s)$ exists for $\Re(s) > \alpha$.
Here are a few examples of these concepts.
$\sin t$ is of exponential order 0.
$t\, e^{2t}$ is of exponential order $\alpha$ for any $\alpha > 2$.
$e^{t^2}$ is not of exponential order $\alpha$ for any $\alpha$.
$t^n$ is of exponential order $\alpha$ for any $\alpha > 0$.
$t^{-2}$ does not have a Laplace transform as the integral diverges.
Example 33.1.1 Consider the Laplace transform of $f(t) = 1$. Since $f(t) = 1$ is of exponential order $\alpha$ for any $\alpha > 0$, the Laplace transform integral converges for $\Re(s) > 0$.
\begin{align*}
\hat{f}(s) &= \int_0^\infty e^{-st}\, dt \\
&= \left[ -\frac{1}{s} e^{-st} \right]_0^\infty \\
&= \frac{1}{s}
\end{align*}
Example 33.1.2 The function $f(t) = t\, e^t$ is of exponential order $\alpha$ for any $\alpha > 1$. We compute the Laplace transform of this function.
\begin{align*}
\hat{f}(s) &= \int_0^\infty e^{-st}\, t\, e^t\, dt \\
&= \int_0^\infty t\, e^{(1-s)t}\, dt \\
&= \left[ \frac{1}{1-s}\, t\, e^{(1-s)t} \right]_0^\infty - \int_0^\infty \frac{1}{1-s}\, e^{(1-s)t}\, dt \\
&= \left[ -\frac{1}{(1-s)^2}\, e^{(1-s)t} \right]_0^\infty \\
&= \frac{1}{(1-s)^2} \quad \text{for } \Re(s) > 1.
\end{align*}
Example 33.1.3 Consider the Laplace transform of the Heaviside function,
\[ H(t - c) = \begin{cases} 0 & \text{for } t < c \\ 1 & \text{for } t > c, \end{cases} \]
where $c > 0$.
\begin{align*}
\mathcal{L}[H(t - c)] &= \int_0^\infty e^{-st} H(t - c)\, dt \\
&= \int_c^\infty e^{-st}\, dt \\
&= \left[ \frac{e^{-st}}{-s} \right]_c^\infty \\
&= \frac{e^{-cs}}{s} \quad \text{for } \Re(s) > 0
\end{align*}
Example 33.1.4 Next consider $H(t - c) f(t - c)$.
\begin{align*}
\mathcal{L}[H(t - c) f(t - c)] &= \int_0^\infty e^{-st} H(t - c) f(t - c)\, dt \\
&= \int_c^\infty e^{-st} f(t - c)\, dt \\
&= \int_0^\infty e^{-s(t+c)} f(t)\, dt \\
&= e^{-cs} \hat{f}(s)
\end{align*}
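The entries above are easy to spot-check by truncating the Laplace integral at a large $T$ (the tail is $O(e^{-sT})$ for a bounded integrand). A sketch for a sample real $s > 0$:

```python
import numpy as np

# Verify L[1] = 1/s and L[H(t - c)] = e^{-cs}/s numerically by
# truncating the Laplace integral at T = 40.

def laplace(f, s, T=40.0, n=400001):
    t = np.linspace(0.0, T, n)
    g = np.exp(-s * t) * f(t)
    return np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(t))  # trapezoid rule

s, c = 1.5, 2.0
print(laplace(lambda t: np.ones_like(t), s), 1.0 / s)
print(laplace(lambda t: (t > c).astype(float), s), np.exp(-c * s) / s)
```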
33.2 The Inverse Laplace Transform
The inverse Laplace transform is denoted
\[ f(t) = \mathcal{L}^{-1}[\hat{f}(s)]. \]
We compute the inverse Laplace transform with the Mellin inversion formula,
\[ f(t) = \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} e^{st} \hat{f}(s)\, ds. \]
Here $\alpha$ is a real constant that is to the right of the singularities of $\hat{f}(s)$.
To see why the Mellin inversion formula is correct, we take the Laplace transform of it. Assume that $f(t)$ is of exponential order $\gamma$. Then $\alpha$ will be to the right of the singularities of $\hat{f}(s)$.
\begin{align*}
\mathcal{L}[\mathcal{L}^{-1}[\hat{f}(s)]] &= \mathcal{L}\left[ \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} e^{zt} \hat{f}(z)\, dz \right] \\
&= \int_0^\infty e^{-st} \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} e^{zt} \hat{f}(z)\, dz\, dt
\end{align*}
We interchange the order of integration.
\[ = \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} \hat{f}(z) \int_0^\infty e^{(z-s)t}\, dt\, dz \]
Since $\Re(z) = \alpha$, the integral in $t$ exists for $\Re(s) > \alpha$.
\[ = \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} \frac{\hat{f}(z)}{s - z}\, dz \]
We would like to evaluate this integral by closing the path of integration with a semi-circle of radius $R$ in the right half plane and applying the residue theorem. However, in order for the integral along the semi-circle to vanish as $R \to \infty$, $\hat{f}(z)$ must vanish as $|z| \to \infty$. If $\hat{f}(z)$ vanishes we can use the maximum modulus bound to show that the integral along the semi-circle vanishes. Thus we assume that $\hat{f}(z)$ vanishes at infinity.
Consider the integral,
\[ \frac{1}{i 2\pi} \oint_C \frac{\hat{f}(z)}{s - z}\, dz, \]
Figure 33.1: The Laplace Transform Pair Contour.
where $C$ is the contour that starts at $\alpha - iR$, goes straight up to $\alpha + iR$, and then follows a semi-circle back down to $\alpha - iR$. This contour is shown in Figure 33.1.
If $s$ is inside the contour then
\[ \frac{1}{i 2\pi} \oint_C \frac{\hat{f}(z)}{s - z}\, dz = \hat{f}(s). \]
Note that the contour is traversed in the negative direction. Since $\hat{f}(z)$ decays as $|z| \to \infty$, the semicircular contribution to the integral will vanish as $R \to \infty$. Thus
\[ \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} \frac{\hat{f}(z)}{s - z}\, dz = \hat{f}(s). \]
Therefore, we have shown that
\[ \mathcal{L}[\mathcal{L}^{-1}[\hat{f}(s)]] = \hat{f}(s). \]
$f(t)$ and $\hat{f}(s)$ are known as Laplace transform pairs.
33.2.1 $F(s)$ with Poles
Example 33.2.1 Consider the inverse Laplace transform of $1/s^2$. $\alpha = 1$ is to the right of the singularity of $1/s^2$.
\[ \mathcal{L}^{-1}\left[ \frac{1}{s^2} \right] = \frac{1}{i 2\pi} \int_{1 - i\infty}^{1 + i\infty} e^{st} \frac{1}{s^2}\, ds \]
Let $B_R$ be the contour starting at $1 - iR$ and following a straight line to $1 + iR$; let $C_R$ be the contour starting at $1 + iR$ and following a semicircular path down to $1 - iR$. Let $C$ be the combination of $B_R$ and $C_R$. This contour is shown in Figure 33.2.
Figure 33.2: The Path of Integration for the Inverse Laplace Transform.
Consider the line integral on $C$ for $R > 1$.
\begin{align*}
\frac{1}{i 2\pi} \oint_C e^{st} \frac{1}{s^2}\, ds &= \operatorname{Res}\left( e^{st} \frac{1}{s^2}, 0 \right) \\
&= \left. \frac{d}{ds}\, e^{st} \right|_{s=0} \\
&= t
\end{align*}
If $t \geq 0$, the integral along $C_R$ vanishes as $R \to \infty$. We parameterize $s$.
\[ s = 1 + R\, e^{i\theta}, \quad \frac{\pi}{2} \leq \theta \leq \frac{3\pi}{2} \]
\[ \left| e^{st} \right| = \left| e^{t(1 + R e^{i\theta})} \right| = e^t\, e^{tR\cos\theta} \leq e^t \]
\begin{align*}
\left| \int_{C_R} e^{st} \frac{1}{s^2}\, ds \right| &\leq \int_{C_R} \left| e^{st} \frac{1}{s^2} \right| ds \\
&\leq \pi R\, e^t\, \frac{1}{(R - 1)^2} \\
&\to 0 \text{ as } R \to \infty
\end{align*}
Thus the inverse Laplace transform of $1/s^2$ is
\[ \mathcal{L}^{-1}\left[ \frac{1}{s^2} \right] = t, \quad \text{for } t \geq 0. \]
Let $\hat{f}(s)$ be analytic except for isolated poles at $s_1, s_2, \ldots, s_N$ and let $\alpha$ be to the right of these poles. Also, let $\hat{f}(s) \to 0$ as $|s| \to \infty$. Define $B_R$ to be the straight line from $\alpha - iR$ to $\alpha + iR$ and $C_R$ to be the semicircular path from $\alpha + iR$ to $\alpha - iR$. If $R$ is large enough to enclose all the poles, then
\begin{align*}
\frac{1}{i 2\pi} \oint_{B_R + C_R} e^{st} \hat{f}(s)\, ds &= \sum_{n=1}^N \operatorname{Res}\left( e^{st} \hat{f}(s), s_n \right) \\
\frac{1}{i 2\pi} \int_{B_R} e^{st} \hat{f}(s)\, ds &= \sum_{n=1}^N \operatorname{Res}\left( e^{st} \hat{f}(s), s_n \right) - \frac{1}{i 2\pi} \int_{C_R} e^{st} \hat{f}(s)\, ds.
\end{align*}
Now let's examine the integral along $C_R$. Let the maximum of $|\hat{f}(s)|$ on $C_R$ be $M_R$. We can parameterize the contour with $s = \alpha + R\, e^{i\theta}$, $\pi/2 < \theta < 3\pi/2$.
\begin{align*}
\left| \int_{C_R} e^{st} \hat{f}(s)\, ds \right| &= \left| \int_{\pi/2}^{3\pi/2} e^{t(\alpha + R e^{i\theta})} \hat{f}(\alpha + R\, e^{i\theta})\, R\, i\, e^{i\theta}\, d\theta \right| \\
&\leq \int_{\pi/2}^{3\pi/2} e^{\alpha t}\, e^{tR\cos\theta}\, R\, M_R\, d\theta \\
&= R\, M_R\, e^{\alpha t} \int_0^\pi e^{-tR\sin\theta}\, d\theta
\end{align*}
If $t \geq 0$ we can use Jordan's Lemma to obtain,
\begin{align*}
&< R\, M_R\, e^{\alpha t}\, \frac{\pi}{tR} \\
&= M_R\, e^{\alpha t}\, \frac{\pi}{t}.
\end{align*}
We use that $M_R \to 0$ as $R \to \infty$:
\[ \to 0 \text{ as } R \to \infty. \]
Thus we have an expression for the inverse Laplace transform of $\hat{f}(s)$.
\begin{align*}
\frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} e^{st} \hat{f}(s)\, ds &= \sum_{n=1}^N \operatorname{Res}\left( e^{st} \hat{f}(s), s_n \right) \\
\mathcal{L}^{-1}[\hat{f}(s)] &= \sum_{n=1}^N \operatorname{Res}\left( e^{st} \hat{f}(s), s_n \right)
\end{align*}
Result 33.2.1 If $\hat{f}(s)$ is analytic except for poles at $s_1, s_2, \ldots, s_N$ and $\hat{f}(s) \to 0$ as $|s| \to \infty$ then the inverse Laplace transform of $\hat{f}(s)$ is
\[ f(t) = \mathcal{L}^{-1}[\hat{f}(s)] = \sum_{n=1}^N \operatorname{Res}\left( e^{st} \hat{f}(s), s_n \right), \quad \text{for } t > 0. \]
Example 33.2.2 Consider the inverse Laplace transform of
\[ \frac{1}{s^3 - s^2}. \]
First we factor the denominator.
\[ \frac{1}{s^3 - s^2} = \frac{1}{s^2}\, \frac{1}{s - 1}. \]
Taking the inverse Laplace transform,
\begin{align*}
\mathcal{L}^{-1}\left[ \frac{1}{s^3 - s^2} \right] &= \operatorname{Res}\left( e^{st} \frac{1}{s^2} \frac{1}{s-1}, 0 \right) + \operatorname{Res}\left( e^{st} \frac{1}{s^2} \frac{1}{s-1}, 1 \right) \\
&= \left. \frac{d}{ds} \frac{e^{st}}{s - 1} \right|_{s=0} + e^t \\
&= \left. \left( \frac{t\, e^{st}}{s - 1} - \frac{e^{st}}{(s-1)^2} \right) \right|_{s=0} + e^t \\
&= -t - 1 + e^t
\end{align*}
Thus we have that
\[ \mathcal{L}^{-1}\left[ \frac{1}{s^3 - s^2} \right] = e^t - t - 1, \quad \text{for } t > 0. \]
Example 33.2.3 Consider the inverse Laplace transform of
\[ \frac{s^2 + s - 1}{s^3 - 2s^2 + s - 2}. \]
We factor the denominator.
\[ \frac{s^2 + s - 1}{(s - 2)(s - i)(s + i)}. \]
Then we take the inverse Laplace transform.
\begin{align*}
\mathcal{L}^{-1}\left[ \frac{s^2 + s - 1}{s^3 - 2s^2 + s - 2} \right]
&= \operatorname{Res}\left( e^{st} \frac{s^2 + s - 1}{(s-2)(s-i)(s+i)}, 2 \right) + \operatorname{Res}\left( e^{st} \frac{s^2 + s - 1}{(s-2)(s-i)(s+i)}, i \right) \\
&\quad + \operatorname{Res}\left( e^{st} \frac{s^2 + s - 1}{(s-2)(s-i)(s+i)}, -i \right) \\
&= e^{2t} + e^{it} \frac{1}{i2} - e^{-it} \frac{1}{i2}
\end{align*}
Thus we have
\[ \mathcal{L}^{-1}\left[ \frac{s^2 + s - 1}{s^3 - 2s^2 + s - 2} \right] = e^{2t} + \sin t, \quad \text{for } t > 0. \]
33.2.2 $\hat{f}(s)$ with Branch Points
Example 33.2.4 Consider the inverse Laplace transform of $\frac{1}{\sqrt{s}}$. Here $\sqrt{s}$ denotes the principal branch of $s^{1/2}$. There is a branch cut from $s = 0$ to $s = -\infty$ and
\[ \frac{1}{\sqrt{s}} = \frac{e^{-i\theta/2}}{\sqrt{r}}, \quad \text{for } -\pi < \theta < \pi. \]
Let $\alpha$ be any positive number. The inverse Laplace transform of $\frac{1}{\sqrt{s}}$ is
\[ f(t) = \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} e^{st} \frac{1}{\sqrt{s}}\, ds. \]
We will evaluate the integral by deforming it to wrap around the branch cut. Consider the integral on the contour shown in Figure 33.3. $C_R^+$ and $C_R^-$ are circular arcs of radius $R$. $B$ is the vertical line at $\Re(s) = \alpha$ joining the two arcs. $C_\epsilon$ is a semi-circle in the right half plane joining $-i\epsilon$ and $i\epsilon$. $L^+$ and $L^-$ are lines joining the circular arcs at $\Im(s) = \pm\epsilon$.
Since there are no residues inside the contour, we have
\[ \frac{1}{i 2\pi} \left( \int_B + \int_{C_R^+} + \int_{L^+} + \int_{C_\epsilon} + \int_{L^-} + \int_{C_R^-} \right) e^{st} \frac{1}{\sqrt{s}}\, ds = 0. \]
We will evaluate the inverse Laplace transform for $t > 0$.
First we will show that the integral along $C_R^+$ vanishes as $R \to \infty$. We split the arc at the angle $\theta = \pi/2$:
\[ \int_{C_R^+} \cdots\, ds = \int_{\pi/2 - \delta}^{\pi/2} \cdots\, d\theta + \int_{\pi/2}^{\pi} \cdots\, d\theta. \]
Figure 33.3: Path of Integration for $1/\sqrt{s}$.
The first integral vanishes by the maximum modulus bound. Note that the length of this part of the path of integration is less than $2\alpha$.
\begin{align*}
\left| \int_{\pi/2 - \delta}^{\pi/2} \cdots\, d\theta \right| &\leq \left( \max_{s \in C_R^+} \left| e^{st} \frac{1}{\sqrt{s}} \right| \right) (2\alpha) \\
&= e^{\alpha t}\, \frac{1}{\sqrt{R}}\, (2\alpha) \\
&\to 0 \text{ as } R \to \infty
\end{align*}
The second integral vanishes by Jordan's Lemma. A parameterization of $C_R^+$ is $s = R\, e^{i\theta}$.
\begin{align*}
\left| \int_{\pi/2}^{\pi} e^{R e^{i\theta} t}\, \frac{1}{\sqrt{R e^{i\theta}}}\, i R\, e^{i\theta}\, d\theta \right|
&\leq \int_{\pi/2}^{\pi} \left| e^{R e^{i\theta} t} \right| \frac{1}{\sqrt{R}}\, R\, d\theta \\
&= \sqrt{R} \int_{\pi/2}^{\pi} e^{Rt\cos\theta}\, d\theta \\
&= \sqrt{R} \int_0^{\pi/2} e^{-Rt\sin\theta}\, d\theta \\
&< \sqrt{R}\, \frac{\pi}{2Rt} \\
&\to 0 \text{ as } R \to \infty
\end{align*}
We could show that the integral along $C_R^-$ vanishes by the same method. Now we have
\[ \frac{1}{i 2\pi} \left( \int_B + \int_{L^+} + \int_{C_\epsilon} + \int_{L^-} \right) e^{st} \frac{1}{\sqrt{s}}\, ds = 0. \]
We can show that the integral along $C_\epsilon$ vanishes as $\epsilon \to 0$ with the maximum modulus bound.
\begin{align*}
\left| \int_{C_\epsilon} e^{st} \frac{1}{\sqrt{s}}\, ds \right| &\leq \left( \max_{s \in C_\epsilon} \left| e^{st} \frac{1}{\sqrt{s}} \right| \right) (\pi \epsilon) \\
&< e^{\epsilon t}\, \frac{1}{\sqrt{\epsilon}}\, \pi \epsilon \\
&\to 0 \text{ as } \epsilon \to 0
\end{align*}
Now we can express the inverse Laplace transform in terms of the integrals along $L^+$ and $L^-$.
\[ f(t) = \frac{1}{i 2\pi} \int_{\alpha - i\infty}^{\alpha + i\infty} e^{st} \frac{1}{\sqrt{s}}\, ds = -\frac{1}{i 2\pi} \int_{L^+} e^{st} \frac{1}{\sqrt{s}}\, ds - \frac{1}{i 2\pi} \int_{L^-} e^{st} \frac{1}{\sqrt{s}}\, ds \]
On $L^+$, $s = r\, e^{i\pi}$, $ds = e^{i\pi}\, dr = -dr$; on $L^-$, $s = r\, e^{-i\pi}$, $ds = e^{-i\pi}\, dr = -dr$. We can combine the integrals along the top and bottom of the branch cut.
\begin{align*}
f(t) &= \frac{1}{i 2\pi} \int_0^\infty e^{-rt}\, \frac{i 2}{\sqrt{r}}\, dr \\
&= \frac{1}{\pi} \int_0^\infty e^{-rt}\, \frac{1}{\sqrt{r}}\, dr
\end{align*}
We make the change of variables $x = rt$.
\[ = \frac{1}{\pi \sqrt{t}} \int_0^\infty e^{-x}\, \frac{1}{\sqrt{x}}\, dx \]
We recognize this integral as $\Gamma(1/2)$.
\[ = \frac{\Gamma(1/2)}{\pi \sqrt{t}} = \frac{1}{\sqrt{\pi t}} \]
Thus the inverse Laplace transform of $\frac{1}{\sqrt{s}}$ is
\[ f(t) = \frac{1}{\sqrt{\pi t}}, \quad \text{for } t > 0. \]
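The transform pair can be spot-checked numerically. The substitution $t = u^2$ removes the integrable singularity at $t = 0$, giving $\int_0^\infty e^{-st}/\sqrt{\pi t}\, dt = (2/\sqrt{\pi}) \int_0^\infty e^{-s u^2}\, du$:

```python
import numpy as np

# Verify L[1/sqrt(pi t)] = 1/sqrt(s) at a sample s, using t = u^2 to
# remove the singularity at t = 0.

s = 2.5
u = np.linspace(0.0, 10.0, 100001)
g = np.exp(-s * u**2)
integral = (2.0 / np.sqrt(np.pi)) * np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(u))
print(integral, 1.0 / np.sqrt(s))  # nearly equal
```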
33.2.3 Asymptotic Behavior of $F(s)$
Consider the behavior of
\[ \hat{f}(s) = \int_0^\infty e^{-st} f(t)\, dt \]
as $s \to +\infty$. Assume that $f(t)$ is analytic in a neighborhood of $t = 0$. Only the behavior of the integrand near $t = 0$ will make a significant contribution to the value of the integral. As you move away from $t = 0$, the $e^{-st}$ term dominates. Thus we could approximate the value of $\hat{f}(s)$ by replacing $f(t)$ with the first few terms in its Taylor series expansion about the origin.
\[ \hat{f}(s) \sim \int_0^\infty e^{-st} \left( f(0) + t f'(0) + \frac{t^2}{2} f''(0) + \cdots \right) dt \quad \text{as } s \to +\infty \]
Using
\[ \mathcal{L}[t^n] = \frac{n!}{s^{n+1}} \]
we obtain
\[ \hat{f}(s) \sim \frac{f(0)}{s} + \frac{f'(0)}{s^2} + \frac{f''(0)}{s^3} + \cdots \quad \text{as } s \to +\infty. \]
Example 33.2.5 The Taylor series expansion of $\sin t$ about the origin is
\[ \sin t = t - \frac{t^3}{6} + O(t^5). \]
Thus the Laplace transform of $\sin t$ has the behavior
\[ \mathcal{L}[\sin t] \sim \frac{1}{s^2} - \frac{1}{s^4} + O(s^{-6}) \quad \text{as } s \to +\infty. \]
1361
We corroborate this by expanding L[sin t].
L[sin t] =
1
s
2
+ 1
=
s
2
1 +s
2
= s
2

n=0
(1)
n
s
2n
=
1
s
2

1
s
4
+O(s
6
)
33.3 Properties of the Laplace Transform
In this section we will list several useful properties of the Laplace transform. If a result is not derived, it is shown in the Problems section. Unless otherwise stated, assume that $f(t)$ and $g(t)$ are piecewise continuous and of exponential order $\alpha$.
$\mathcal{L}[a f(t) + b g(t)] = a \mathcal{L}[f(t)] + b \mathcal{L}[g(t)]$
$\mathcal{L}[e^{ct} f(t)] = \hat{f}(s - c)$ for $s > c + \alpha$
$\mathcal{L}[t^n f(t)] = (-1)^n \frac{d^n}{ds^n} \hat{f}(s)$ for $n = 1, 2, \ldots$
If $\int_0^\epsilon \frac{f(t)}{t}\, dt$ exists for positive $\epsilon$ then
\[ \mathcal{L}\left[ \frac{f(t)}{t} \right] = \int_s^\infty \hat{f}(\sigma)\, d\sigma. \]
$\mathcal{L}\left[ \int_0^t f(\tau)\, d\tau \right] = \frac{\hat{f}(s)}{s}$
\[ \mathcal{L}\left[ \frac{d}{dt} f(t) \right] = s \hat{f}(s) - f(0) \]
\[ \mathcal{L}\left[ \frac{d^2}{dt^2} f(t) \right] = s^2 \hat{f}(s) - s f(0) - f'(0) \]
To derive these formulas,
\begin{align*}
\mathcal{L}\left[ \frac{d}{dt} f(t) \right] &= \int_0^\infty e^{-st} f'(t)\, dt \\
&= \left[ e^{-st} f(t) \right]_0^\infty - \int_0^\infty (-s) e^{-st} f(t)\, dt \\
&= -f(0) + s \hat{f}(s)
\end{align*}
\begin{align*}
\mathcal{L}\left[ \frac{d^2}{dt^2} f(t) \right] &= s \mathcal{L}[f'(t)] - f'(0) \\
&= s^2 \hat{f}(s) - s f(0) - f'(0)
\end{align*}
Let $f(t)$ and $g(t)$ be continuous. The convolution of $f(t)$ and $g(t)$ is defined
\[ h(t) = (f * g)(t) = \int_0^t f(\tau)\, g(t - \tau)\, d\tau = \int_0^t f(t - \tau)\, g(\tau)\, d\tau. \]
The convolution theorem states
\[ \hat{h}(s) = \hat{f}(s)\, \hat{g}(s). \]
To show this,
\begin{align*}
\hat{h}(s) &= \int_0^\infty e^{-st} \int_0^t f(\tau)\, g(t - \tau)\, d\tau\, dt \\
&= \int_0^\infty \int_\tau^\infty e^{-st} f(\tau)\, g(t - \tau)\, dt\, d\tau \\
&= \int_0^\infty e^{-s\tau} f(\tau) \int_\tau^\infty e^{-s(t - \tau)} g(t - \tau)\, dt\, d\tau \\
&= \int_0^\infty e^{-s\tau} f(\tau)\, d\tau \int_0^\infty e^{-s\eta} g(\eta)\, d\eta \\
&= \hat{f}(s)\, \hat{g}(s)
\end{align*}
If $f(t)$ is periodic with period $T$ then
\[ \mathcal{L}[f(t)] = \frac{\int_0^T e^{-st} f(t)\, dt}{1 - e^{-sT}}. \]
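The convolution theorem can be checked numerically with the assumed sample pair $f(t) = e^{-t}$, $g(t) = \sin t$, whose transforms multiply to $1/((s+1)(s^2+1))$:

```python
import numpy as np

# Build h = f*g by quadrature, transform it, and compare with
# L[f] L[g] = 1/((s + 1)(s^2 + 1)) at a sample s.

s = 2.0
n = 4000
t = np.linspace(0.0, 40.0, n + 1)
dt = t[1] - t[0]

# h(t_k) = int_0^{t_k} e^{-tau} sin(t_k - tau) d tau, trapezoid rule.
h = np.zeros(n + 1)
for k in range(1, n + 1):
    integrand = np.exp(-t[:k + 1]) * np.sin(t[k] - t[:k + 1])
    h[k] = np.sum((integrand[1:] + integrand[:-1]) / 2.0) * dt

g2 = np.exp(-s * t) * h
lh = np.sum((g2[1:] + g2[:-1]) / 2.0) * dt
print(lh, 1.0 / ((s + 1.0) * (s**2 + 1.0)))  # nearly equal (1/15 at s = 2)
```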
Example 33.3.1 Consider the inverse Laplace transform of $\frac{1}{s^3 - s^2}$. First we factor the denominator.
\[ \frac{1}{s^3 - s^2} = \frac{1}{s^2}\, \frac{1}{s - 1} \]
We know the inverse Laplace transforms of each term.
\[ \mathcal{L}^{-1}\left[ \frac{1}{s^2} \right] = t, \quad \mathcal{L}^{-1}\left[ \frac{1}{s - 1} \right] = e^t \]
We apply the convolution theorem.
\begin{align*}
\mathcal{L}^{-1}\left[ \frac{1}{s^2}\, \frac{1}{s - 1} \right] &= \int_0^t \tau\, e^{t - \tau}\, d\tau \\
&= e^t \left[ -\tau\, e^{-\tau} \right]_0^t + e^t \int_0^t e^{-\tau}\, d\tau \\
&= -t - 1 + e^t
\end{align*}
\[ \mathcal{L}^{-1}\left[ \frac{1}{s^2}\, \frac{1}{s - 1} \right] = e^t - t - 1. \]
Example 33.3.2 We can find the inverse Laplace transform of
\[ \frac{s^2 + s - 1}{s^3 - 2s^2 + s - 2} \]
with the aid of a table of Laplace transform pairs. We factor the denominator.
\[ \frac{s^2 + s - 1}{(s - 2)(s - i)(s + i)} \]
We expand the function in partial fractions and then invert each term.
\begin{align*}
\frac{s^2 + s - 1}{(s - 2)(s - i)(s + i)} &= \frac{1}{s - 2} - \frac{i/2}{s - i} + \frac{i/2}{s + i} \\
&= \frac{1}{s - 2} + \frac{1}{s^2 + 1}
\end{align*}
\[ \mathcal{L}^{-1}\left[ \frac{1}{s - 2} + \frac{1}{s^2 + 1} \right] = e^{2t} + \sin t \]
33.4 Constant Coefficient Differential Equations
Example 33.4.1 Consider the differential equation
\[ y' + y = \cos t, \quad \text{for } t > 0, \quad y(0) = 1. \]
We take the Laplace transform of this equation.
\begin{align*}
s \hat{y}(s) - y(0) + \hat{y}(s) &= \frac{s}{s^2 + 1} \\
\hat{y}(s) &= \frac{s}{(s + 1)(s^2 + 1)} + \frac{1}{s + 1} \\
\hat{y}(s) &= \frac{1/2}{s + 1} + \frac{1}{2}\, \frac{s + 1}{s^2 + 1}
\end{align*}
Now we invert $\hat{y}(s)$.
\[ y(t) = \frac{1}{2} e^{-t} + \frac{1}{2} \cos t + \frac{1}{2} \sin t, \quad \text{for } t > 0 \]
Notice that the initial condition was included when we took the Laplace transform.
One can see from this example that taking the Laplace transform of a constant coefficient differential equation reduces the differential equation for $y(t)$ to an algebraic equation for $\hat{y}(s)$.
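A sketch cross-checking the result: integrate $y' + y = \cos t$, $y(0) = 1$ directly with classical RK4 and compare against $y(t) = \tfrac12 e^{-t} + \tfrac12\cos t + \tfrac12\sin t$.

```python
import math

# Integrate y' = cos(t) - y with RK4 and compare with the
# Laplace-transform solution at t = 5.

def rhs(t, y):
    return math.cos(t) - y

t, y, h = 0.0, 1.0, 1e-3
while t < 5.0 - 1e-12:
    k1 = rhs(t, y)
    k2 = rhs(t + h/2, y + h*k1/2)
    k3 = rhs(t + h/2, y + h*k2/2)
    k4 = rhs(t + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    t += h

exact = math.exp(-t)/2 + math.cos(t)/2 + math.sin(t)/2
print(y, exact)  # nearly equal
```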
Example 33.4.2 Consider the differential equation
\[ y'' + y = \cos(2t), \quad \text{for } t > 0, \quad y(0) = 1, \; y'(0) = 0. \]
We take the Laplace transform of this equation.
\begin{align*}
s^2 \hat{y}(s) - s y(0) - y'(0) + \hat{y}(s) &= \frac{s}{s^2 + 4} \\
\hat{y}(s) &= \frac{s}{(s^2 + 1)(s^2 + 4)} + \frac{s}{s^2 + 1}
\end{align*}
From the table of Laplace transform pairs we know
\[ \mathcal{L}^{-1}\left[ \frac{s}{s^2 + 1} \right] = \cos t, \quad \mathcal{L}^{-1}\left[ \frac{1}{s^2 + 4} \right] = \frac{1}{2} \sin(2t). \]
We use the convolution theorem to find the inverse Laplace transform of $\hat{y}(s)$.
\begin{align*}
y(t) &= \int_0^t \frac{1}{2} \sin(2\tau) \cos(t - \tau)\, d\tau + \cos t \\
&= \frac{1}{4} \int_0^t \left( \sin(t + \tau) + \sin(3\tau - t) \right) d\tau + \cos t \\
&= \frac{1}{4} \left[ -\cos(t + \tau) - \frac{1}{3} \cos(3\tau - t) \right]_0^t + \cos t \\
&= \frac{1}{4} \left( -\cos(2t) + \cos t - \frac{1}{3} \cos(2t) + \frac{1}{3} \cos t \right) + \cos t \\
&= -\frac{1}{3} \cos(2t) + \frac{4}{3} \cos t
\end{align*}
Alternatively, we can find the inverse Laplace transform of $\hat{y}(s)$ by first finding its partial fraction expansion.
\begin{align*}
\hat{y}(s) &= \frac{s/3}{s^2 + 1} - \frac{s/3}{s^2 + 4} + \frac{s}{s^2 + 1} \\
&= -\frac{s/3}{s^2 + 4} + \frac{4s/3}{s^2 + 1}
\end{align*}
\[ y(t) = -\frac{1}{3} \cos(2t) + \frac{4}{3} \cos t \]
Example 33.4.3 Consider the initial value problem
\[ y'' + 5y' + 2y = 0, \quad y(0) = 1, \; y'(0) = 2. \]
Without taking a Laplace transform, we know that since
\[ y(t) = 1 + 2t + O(t^2) \]
the Laplace transform has the behavior
\[ \hat{y}(s) \sim \frac{1}{s} + \frac{2}{s^2} + O(s^{-3}), \quad \text{as } s \to +\infty. \]
33.5 Systems of Constant Coefficient Differential Equations
The Laplace transform can be used to transform a system of constant coefficient differential equations into a system of algebraic equations. This should not be surprising, as a system of differential equations can be written as a single differential equation, and vice versa.
Example 33.5.1 Consider the set of differential equations
\begin{align*}
y_1' &= y_2 \\
y_2' &= y_3 \\
y_3' &= -y_3 - y_2 - y_1 + t^3
\end{align*}
with the initial conditions
\[ y_1(0) = y_2(0) = y_3(0) = 0. \]
We take the Laplace transform of this system.
\begin{align*}
s \hat{y}_1 - y_1(0) &= \hat{y}_2 \\
s \hat{y}_2 - y_2(0) &= \hat{y}_3 \\
s \hat{y}_3 - y_3(0) &= -\hat{y}_3 - \hat{y}_2 - \hat{y}_1 + \frac{6}{s^4}
\end{align*}
The first two equations can be written as
\[ \hat{y}_1 = \frac{\hat{y}_3}{s^2}, \quad \hat{y}_2 = \frac{\hat{y}_3}{s}. \]
We substitute this into the third equation.
\begin{align*}
s \hat{y}_3 &= -\hat{y}_3 - \frac{\hat{y}_3}{s} - \frac{\hat{y}_3}{s^2} + \frac{6}{s^4} \\
(s^3 + s^2 + s + 1)\, \hat{y}_3 &= \frac{6}{s^2} \\
\hat{y}_3 &= \frac{6}{s^2 (s^3 + s^2 + s + 1)}.
\end{align*}
We solve for y
1
.
y
1
=
6
s
4
(s
3
+s
2
+s + 1)
y
1
=
1
s
4

1
s
3
+
1
2(s + 1)
+
1 s
2(s
2
+ 1)
We then take the inverse Laplace transform of y
1
.
y
1
=
t
3
6

t
2
2
+
1
2
e
t
+
1
2
sin t
1
2
cos t.
We can nd y
2
and y
3
by dierentiating the expression for y
1
.
y
2
=
t
2
2
t
1
2
e
t
+
1
2
cos t +
1
2
sin t
y
3
= t 1 +
1
2
e
t

1
2
sin t +
1
2
cos t
1369
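A numerical sanity check of the system (not part of the original text): the functions below are the closed-form solutions for the forcing t³, with the factor of 6 from L[t³] = 6/s⁴ carried through the partial-fraction expansion. The check verifies y₁' = y₂, y₂' = y₃, y₃' = −y₃ − y₂ − y₁ + t³, and the zero initial conditions.

```python
import math

def y1(t): return t**3 - 3*t**2 + 3*math.exp(-t) + 3*math.sin(t) - 3*math.cos(t)
def y2(t): return 3*t**2 - 6*t - 3*math.exp(-t) + 3*math.cos(t) + 3*math.sin(t)
def y3(t): return 6*t - 6 + 3*math.exp(-t) - 3*math.sin(t) + 3*math.cos(t)

def d(f, t, h=1e-5):
    # central-difference first derivative
    return (f(t + h) - f(t - h)) / (2 * h)

max_err = max(
    max(abs(d(y1, t) - y2(t)),
        abs(d(y2, t) - y3(t)),
        abs(d(y3, t) - (-y3(t) - y2(t) - y1(t) + t**3)))
    for t in (0.5, 1.0, 2.0)
)
ics = (y1(0), y2(0), y3(0))   # all three should vanish
```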
33.6 Exercises

Exercise 33.1
Find the Laplace transform of the following functions:
1. f(t) = e^{at}
2. f(t) = sin(at)
3. f(t) = cos(at)
4. f(t) = sinh(at)
5. f(t) = cosh(at)
6. f(t) = sin(at)/t
7. f(t) = ∫₀ᵗ sin(au)/u du
8. f(t) = 1 for 0 ≤ t < π, f(t) = 0 for π ≤ t < 2π, and f(t + 2π) = f(t) for t > 0. That is, f(t) is periodic for t > 0.
Hint, Solution
Exercise 33.2
Show that L[a f(t) + b g(t)] = a L[f(t)] + b L[g(t)].
Hint, Solution
Exercise 33.3
Show that if f(t) is of exponential order α, then

    L[e^{ct} f(t)] = F(s − c)  for s > c + α.

Hint, Solution
Exercise 33.4
Show that

    L[tⁿ f(t)] = (−1)ⁿ (dⁿ/dsⁿ) f̂(s)  for n = 1, 2, ...

Hint, Solution
Exercise 33.5
Show that if ∫₀^∞ f(t)/t dt exists for positive s then

    L[f(t)/t] = ∫ₛ^∞ F(σ) dσ.

Hint, Solution
Exercise 33.6
Show that

    L[∫₀ᵗ f(τ) dτ] = f̂(s)/s.

Hint, Solution
Exercise 33.7
Show that if f(t) is periodic with period T then

    L[f(t)] = ∫₀ᵀ e^{−st} f(t) dt / (1 − e^{−sT}).

Hint, Solution
Exercise 33.8
The function f(t), t ≥ 0, is periodic with period 2T; i.e. f(t + 2T) ≡ f(t), and is also odd with period T; i.e. f(t + T) = −f(t). Further,

    ∫₀ᵀ f(t) e^{−st} dt = g(s).

Show that the Laplace transform of f(t) is f̂(s) = g(s)/(1 + e^{−sT}). Find f(t) such that f̂(s) = s⁻¹ tanh(sT/2).
Hint, Solution
Exercise 33.9
Find the Laplace transform of t^ν, ν > −1 by two methods.
1. Assume that s is complex-valued. Make the change of variables z = st and use integration in the complex plane.
2. Show that the Laplace transform of t^ν is an analytic function for ℜ(s) > 0. Assume that s is real-valued. Make the change of variables x = st and evaluate the integral. Then use analytic continuation to extend the result to complex-valued s.
Hint, Solution
Exercise 33.10 (mathematica/ode/laplace/laplace.nb)
Show that the Laplace transform of f(t) = ln t is

    f̂(s) = −(Log s)/s − γ/s,  where γ = −∫₀^∞ e^{−t} ln t dt.

[γ = 0.5772... is known as Euler's constant.]
Hint, Solution
Exercise 33.11
Find the Laplace transform of t^ν ln t. Write the answer in terms of the digamma function, ψ(ν) = Γ'(ν)/Γ(ν). What is the answer for ν = 0?
Hint, Solution
Exercise 33.12
Find the inverse Laplace transform of

    f̂(s) = 1/(s³ − 2s² + s − 2)

with the following methods.
1. Expand f̂(s) using partial fractions and then use the table of Laplace transforms.
2. Factor the denominator into (s − 2)(s² + 1) and then use the convolution theorem.
3. Use Result 33.2.1.
Hint, Solution
Exercise 33.13
Solve the differential equation

    y'' + εy' + y = sin t,  y(0) = y'(0) = 0,  0 < ε ≪ 1

using the Laplace transform. This equation represents a weakly damped, driven, linear oscillator.
Hint, Solution
Exercise 33.14
Solve the problem,

    y'' − ty' + y = 0,  y(0) = 0,  y'(0) = 1,

with the Laplace transform.
Hint, Solution
Exercise 33.15
Prove the following relation between the inverse Laplace transform and the inverse Fourier transform,

    L⁻¹[f̂(s)] = (1/(2π)) e^{ct} F⁻¹[f̂(c + iω)],

where c is to the right of the singularities of f̂(s).
Hint, Solution
Exercise 33.16 (mathematica/ode/laplace/laplace.nb)
Show by evaluating the Laplace inversion integral that if

    f̂(s) = (π/s)^{1/2} e^{−2(as)^{1/2}},  s^{1/2} = √s for s > 0,

then f(t) = e^{−a/t}/√t. Hint: cut the s-plane along the negative real axis and deform the contour onto the cut. Remember that

    ∫₀^∞ e^{−ax²} cos(bx) dx = √(π/(4a)) e^{−b²/(4a)}.

Hint, Solution
Exercise 33.17 (mathematica/ode/laplace/laplace.nb)
Use Laplace transforms to solve the initial value problem

    d⁴y/dt⁴ − y = t,  y(0) = y'(0) = y''(0) = y'''(0) = 0.

Hint, Solution
Exercise 33.18 (mathematica/ode/laplace/laplace.nb)
Solve, by Laplace transforms,

    dy/dt = sin t + ∫₀ᵗ y(τ) cos(t − τ) dτ,  y(0) = 0.

Hint, Solution
Exercise 33.19 (mathematica/ode/laplace/laplace.nb)
Suppose u(t) satisfies the difference-differential equation

    du/dt + u(t) − u(t − 1) = 0,  t ≥ 0,

and the initial condition u(t) = u₀(t), −1 ≤ t ≤ 0, where u₀(t) is given. Show that the Laplace transform û(s) of u(t) satisfies

    û(s) = u₀(0)/(1 + s − e^{−s}) + (e^{−s}/(1 + s − e^{−s})) ∫₋₁⁰ e^{−st} u₀(t) dt.

Find u(t), t ≥ 0, when u₀(t) = 1. Check the result.
Hint, Solution
Exercise 33.20
Let the function f(t) be defined by

    f(t) = 1 for 0 ≤ t < π,  f(t) = 0 for π ≤ t < 2π,

and for all positive values of t so that f(t + 2π) = f(t). That is, f(t) is periodic with period 2π. Find the solution of the initial value problem

    d²y/dt² − y = f(t);  y(0) = 1,  y'(0) = 0.

Examine the continuity of the solution at t = nπ, where n is a positive integer, and verify that the solution is continuous and has a continuous derivative at these points.
Hint, Solution
Exercise 33.21
Use Laplace transforms to solve

    dy/dt + ∫₀ᵗ y(τ) dτ = e^{−t},  y(0) = 1.

Hint, Solution
Exercise 33.22
An electric circuit gives rise to the system

    L di₁/dt + R i₁ + q/C = E₀
    L di₂/dt + R i₂ − q/C = 0
    dq/dt = i₁ − i₂

with initial conditions

    i₁(0) = i₂(0) = E₀/(2R),  q(0) = 0.

Solve the system by Laplace transform methods and show that

    i₁ = E₀/(2R) + (E₀/(2ωL)) e^{−αt} sin(ωt)

where

    α = R/(2L)  and  ω² = 2/(LC) − α².

Hint, Solution
33.7 Hints

Hint 33.1
Use the differentiation and integration properties of the Laplace transform where appropriate.

Hint 33.2

Hint 33.3

Hint 33.4
If the integral is uniformly convergent and ∂g/∂s is continuous then

    d/ds ∫ₐᵇ g(s, t) dt = ∫ₐᵇ ∂g/∂s (s, t) dt

Hint 33.5

    ∫ₛ^∞ e^{−tx} dt = (1/x) e^{−sx}

Hint 33.6
Use integration by parts.

Hint 33.7

    ∫₀^∞ e^{−st} f(t) dt = Σ_{n=0}^∞ ∫_{nT}^{(n+1)T} e^{−st} f(t) dt

The sum can be put in the form of a geometric series.

    Σ_{n=0}^∞ αⁿ = 1/(1 − α),  for |α| < 1

Hint 33.8

Hint 33.9
Write the answer in terms of the Gamma function.

Hint 33.10

Hint 33.11

Hint 33.12

Hint 33.13

Hint 33.14

Hint 33.15

Hint 33.16

Hint 33.17

Hint 33.18

Hint 33.19

Hint 33.20

Hint 33.21

Hint 33.22
33.8 Solutions

Solution 33.1
1.

    L[e^{at}] = ∫₀^∞ e^{−st} e^{at} dt
             = ∫₀^∞ e^{−(s−a)t} dt
             = [−e^{−(s−a)t}/(s − a)]₀^∞,  for ℜ(s) > ℜ(a)

    L[e^{at}] = 1/(s − a)

2.

    L[sin(at)] = ∫₀^∞ e^{−st} sin(at) dt
              = (1/(2i)) ∫₀^∞ (e^{(−s+ia)t} − e^{(−s−ia)t}) dt
              = (1/(2i)) [−e^{(−s+ia)t}/(s − ia) + e^{(−s−ia)t}/(s + ia)]₀^∞
              = (1/(2i)) (1/(s − ia) − 1/(s + ia))

    L[sin(at)] = a/(s² + a²).

3.

    L[cos(at)] = L[(d/dt)(sin(at)/a)] = s L[sin(at)/a] − sin(0)/a

    L[cos(at)] = s/(s² + a²)

4.

    L[sinh(at)] = ∫₀^∞ e^{−st} sinh(at) dt
               = (1/2) ∫₀^∞ (e^{(−s+a)t} − e^{(−s−a)t}) dt
               = (1/2) [−e^{(−s+a)t}/(s − a) + e^{(−s−a)t}/(s + a)]₀^∞,  for ℜ(s) > |ℜ(a)|
               = (1/2) (1/(s − a) − 1/(s + a))

    L[sinh(at)] = a/(s² − a²),  for ℜ(s) > |ℜ(a)|

5.

    L[cosh(at)] = L[(d/dt)(sinh(at)/a)] = s L[sinh(at)/a] − sinh(0)/a

    L[cosh(at)] = s/(s² − a²)

6. First note that

    L[sin(at)/t](s) = ∫ₛ^∞ L[sin(at)](σ) dσ.

Now we use the Laplace transform of sin(at) to compute the Laplace transform of sin(at)/t.

    L[sin(at)/t] = ∫ₛ^∞ a/(σ² + a²) dσ
                = ∫ₛ^∞ (1/((σ/a)² + 1)) dσ/a
                = [arctan(σ/a)]ₛ^∞
                = π/2 − arctan(s/a)

    L[sin(at)/t] = arctan(a/s)

7.

    L[∫₀ᵗ sin(aτ)/τ dτ] = (1/s) L[sin(at)/t]

    L[∫₀ᵗ sin(aτ)/τ dτ] = (1/s) arctan(a/s)

8.

    L[f(t)] = ∫₀^{2π} e^{−st} f(t) dt / (1 − e^{−2πs})
           = ∫₀^π e^{−st} dt / (1 − e^{−2πs})
           = (1 − e^{−πs})/(s(1 − e^{−2πs}))

    L[f(t)] = 1/(s(1 + e^{−πs}))
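The pairs above can be spot-checked by direct quadrature (this helper and its parameters are ad hoc, not from the text): approximate the transform integral on a truncated interval and compare with the closed forms for a = 2, s = 3.

```python
import math

def laplace(f, s, T=40.0, n=150000):
    # midpoint-rule approximation of the integral of e^{-st} f(t) over [0, T];
    # for s = 3 and these f the truncated tail is negligible
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * f((k + 0.5) * h) for k in range(n)) * h

a, s = 2.0, 3.0
errors = [
    abs(laplace(lambda t: math.exp(a * t), s) - 1 / (s - a)),         # pair 1
    abs(laplace(lambda t: math.sin(a * t), s) - a / (s**2 + a**2)),   # pair 2
    abs(laplace(lambda t: math.cos(a * t), s) - s / (s**2 + a**2)),   # pair 3
    abs(laplace(lambda t: math.sinh(a * t), s) - a / (s**2 - a**2)),  # pair 4
    abs(laplace(lambda t: math.cosh(a * t), s) - s / (s**2 - a**2)),  # pair 5
]
```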
Solution 33.2

    L[af(t) + bg(t)] = ∫₀^∞ e^{−st} (af(t) + bg(t)) dt
                    = a ∫₀^∞ e^{−st} f(t) dt + b ∫₀^∞ e^{−st} g(t) dt
                    = a L[f(t)] + b L[g(t)]
Solution 33.3
If f(t) is of exponential order α, then e^{ct} f(t) is of exponential order c + α.

    L[e^{ct} f(t)] = ∫₀^∞ e^{−st} e^{ct} f(t) dt
                  = ∫₀^∞ e^{−(s−c)t} f(t) dt
                  = f̂(s − c)  for s > c + α
Solution 33.4
First consider the Laplace transform of t⁰ f(t).

    L[t⁰ f(t)] = f̂(s)

Now consider the Laplace transform of tⁿ f(t) for n ≥ 1.

    L[tⁿ f(t)] = ∫₀^∞ e^{−st} tⁿ f(t) dt
              = −(d/ds) ∫₀^∞ e^{−st} t^{n−1} f(t) dt
              = −(d/ds) L[t^{n−1} f(t)]

Thus we have a difference equation for the Laplace transform of tⁿ f(t) with the solution

    L[tⁿ f(t)] = (−1)ⁿ (dⁿ/dsⁿ) L[t⁰ f(t)]  for n ∈ Z⁰⁺,
    L[tⁿ f(t)] = (−1)ⁿ (dⁿ/dsⁿ) f̂(s)  for n ∈ Z⁰⁺.
Solution 33.5
If ∫₀^∞ f(t)/t dt exists for positive s and f(t) is of exponential order α then the Laplace transform of f(t)/t is defined for s > α.

    L[f(t)/t] = ∫₀^∞ e^{−st} (1/t) f(t) dt
             = ∫₀^∞ ∫ₛ^∞ e^{−σt} dσ f(t) dt
             = ∫ₛ^∞ ∫₀^∞ e^{−σt} f(t) dt dσ
             = ∫ₛ^∞ f̂(σ) dσ
Solution 33.6

    L[∫₀ᵗ f(τ) dτ] = ∫₀^∞ e^{−st} ∫₀ᵗ f(τ) dτ dt
                  = [−(e^{−st}/s) ∫₀ᵗ f(τ) dτ]₀^∞ + ∫₀^∞ (e^{−st}/s) (d/dt) (∫₀ᵗ f(τ) dτ) dt
                  = (1/s) ∫₀^∞ e^{−st} f(t) dt
                  = f̂(s)/s
Solution 33.7
f(t) is periodic with period T.

    L[f(t)] = ∫₀^∞ e^{−st} f(t) dt
           = ∫₀ᵀ e^{−st} f(t) dt + ∫_T^{2T} e^{−st} f(t) dt + ···
           = Σ_{n=0}^∞ ∫_{nT}^{(n+1)T} e^{−st} f(t) dt
           = Σ_{n=0}^∞ ∫₀ᵀ e^{−s(t+nT)} f(t + nT) dt
           = Σ_{n=0}^∞ e^{−snT} ∫₀ᵀ e^{−st} f(t) dt
           = ∫₀ᵀ e^{−st} f(t) dt Σ_{n=0}^∞ e^{−snT}
           = ∫₀ᵀ e^{−st} f(t) dt / (1 − e^{−sT})
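The periodic-function formula can be checked numerically against the square wave from Exercise 33.1.8, whose transform works out to 1/(s(1 + e^{−πs})); the quadrature parameters below are ad hoc choices.

```python
import math

def f(t):
    # square wave of period 2π: one on [0, π), zero on [π, 2π)
    return 1.0 if (t % (2 * math.pi)) < math.pi else 0.0

def laplace(g, s, T=60.0, n=300000):
    # midpoint-rule approximation of the transform integral, truncated at T
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

s = 1.5
numeric = laplace(f, s)
closed_form = 1 / (s * (1 + math.exp(-math.pi * s)))
error = abs(numeric - closed_form)
```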
Solution 33.8

    f̂(s) = ∫₀^∞ e^{−st} f(t) dt
        = Σ_{n=0}^∞ ∫_{nT}^{(n+1)T} e^{−st} f(t) dt
        = Σ_{n=0}^∞ ∫₀ᵀ e^{−s(t+nT)} f(t + nT) dt
        = Σ_{n=0}^∞ e^{−snT} ∫₀ᵀ e^{−st} (−1)ⁿ f(t) dt
        = ∫₀ᵀ e^{−st} f(t) dt Σ_{n=0}^∞ (−1)ⁿ (e^{−sT})ⁿ

    f̂(s) = g(s)/(1 + e^{−sT}),  for ℜ(s) > 0

Consider f̂(s) = s⁻¹ tanh(sT/2).

    s⁻¹ tanh(sT/2) = s⁻¹ (e^{sT/2} − e^{−sT/2})/(e^{sT/2} + e^{−sT/2})
                  = s⁻¹ (1 − e^{−sT})/(1 + e^{−sT})

We have

    g(s) ≡ ∫₀ᵀ f(t) e^{−st} dt = (1 − e^{−sT})/s.

By inspection we see that this is satisfied for f(t) = 1 for 0 < t < T. We conclude:

    f(t) = 1 for t ∈ [2nT, (2n+1)T),  f(t) = −1 for t ∈ [(2n+1)T, (2n+2)T),

where n ∈ Z.
Solution 33.9
The Laplace transform of t^ν, ν > −1 is

    f̂(s) = ∫₀^∞ e^{−st} t^ν dt.

Assume s is complex-valued. The integral converges for ℜ(s) > 0 and ν > −1.

Method 1. We make the change of variables z = st.

    f̂(s) = ∫_C e^{−z} (z/s)^ν (1/s) dz
         = s^{−(ν+1)} ∫_C e^{−z} z^ν dz

C is the path from 0 to ∞ along arg(z) = arg(s). (Shown in Figure 33.4).

[Figure 33.4: The Path of Integration — a ray from the origin at angle arg(s).]

Since the integrand is analytic in the domain ε < r < R, 0 < θ < arg(s), the integral along the boundary of this domain vanishes.

    (∫_ε^R + ∫_R^{R e^{i arg(s)}} + ∫_{R e^{i arg(s)}}^{ε e^{i arg(s)}} + ∫_{ε e^{i arg(s)}}^ε) e^{−z} z^ν dz = 0

We show that the integral along C_R, the circular arc of radius R, vanishes as R → ∞ with the maximum modulus integral bound.

    |∫_{C_R} e^{−z} z^ν dz| ≤ R |arg(s)| max_{z∈C_R} |e^{−z} z^ν|
                           = R |arg(s)| e^{−R cos(arg(s))} R^ν
                           → 0 as R → ∞.

The integral along C_ε, the circular arc of radius ε, vanishes as ε → 0. We demonstrate this with the maximum modulus integral bound.

    |∫_{C_ε} e^{−z} z^ν dz| ≤ ε |arg(s)| max_{z∈C_ε} |e^{−z} z^ν|
                           = ε |arg(s)| e^{−ε cos(arg(s))} ε^ν
                           → 0 as ε → 0.

Taking the limit as ε → 0 and R → ∞, we see that the integral along C is equal to the integral along the real axis.

    ∫_C e^{−z} z^ν dz = ∫₀^∞ e^{−z} z^ν dz

We can evaluate the Laplace transform of t^ν in terms of this integral.

    L[t^ν] = s^{−(ν+1)} ∫₀^∞ e^{−t} t^ν dt

    L[t^ν] = Γ(ν + 1)/s^{ν+1}

In the case that ν is a non-negative integer ν = n > −1 we can write this in terms of the factorial.

    L[tⁿ] = n!/s^{n+1}

Method 2. First note that the integral

    f̂(s) = ∫₀^∞ e^{−st} t^ν dt

exists for ℜ(s) > 0. It converges uniformly for ℜ(s) ≥ c > 0. On this domain of uniform convergence we can interchange differentiation and integration.

    df̂/ds = (d/ds) ∫₀^∞ e^{−st} t^ν dt
          = ∫₀^∞ (∂/∂s)(e^{−st} t^ν) dt
          = −∫₀^∞ t e^{−st} t^ν dt
          = −∫₀^∞ e^{−st} t^{ν+1} dt

Since f̂'(s) is defined for ℜ(s) > 0, f̂(s) is analytic for ℜ(s) > 0.

Let σ be real and positive. We make the change of variables x = σt.

    f̂(σ) = ∫₀^∞ e^{−x} (x/σ)^ν (1/σ) dx
         = σ^{−(ν+1)} ∫₀^∞ e^{−x} x^ν dx
         = Γ(ν + 1)/σ^{ν+1}

Note that the function

    f̂(s) = Γ(ν + 1)/s^{ν+1}

is the analytic continuation of f̂(σ). Thus we can define the Laplace transform for all complex s in the right half plane.

    f̂(s) = Γ(ν + 1)/s^{ν+1}
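A numerical spot-check of L[t^ν] = Γ(ν + 1)/s^{ν+1} (helper and parameters are ad hoc, not from the text):

```python
import math

def laplace_power(nu, s, T=40.0, n=200000):
    # midpoint-rule approximation of the transform integral of t^nu, truncated at T
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * ((k + 0.5) * h)**nu for k in range(n)) * h

s = 2.0
errors = [abs(laplace_power(nu, s) - math.gamma(nu + 1) / s**(nu + 1))
          for nu in (0.5, 2.0)]
```

For ν = 2 this reduces to the familiar L[t²] = 2!/s³.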
Solution 33.10
Note that f̂(s) is an analytic function for ℜ(s) > 0. Consider real-valued s > 0. By definition, f̂(s) is

    f̂(s) = ∫₀^∞ e^{−st} ln t dt.

We make the change of variables x = st.

    f̂(s) = ∫₀^∞ e^{−x} ln(x/s) (dx/s)
         = (1/s) ∫₀^∞ e^{−x} (ln x − ln s) dx
         = −(ln s)/s ∫₀^∞ e^{−x} dx + (1/s) ∫₀^∞ e^{−x} ln x dx
         = −(ln s)/s − γ/s,  for real s > 0

The analytic continuation of f̂(s) into the right half-plane is

    f̂(s) = −(Log s)/s − γ/s.
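As a numerical check of L[ln t] = −(Log s + γ)/s (helper and parameters are ad hoc; the logarithmic singularity at t = 0 is integrable, so a fine midpoint rule suffices):

```python
import math

EULER_GAMMA = 0.5772156649015329   # Euler's constant

def laplace_log(s, T=60.0, n=600000):
    # midpoint-rule approximation of the transform integral of ln t, truncated at T
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * math.log((k + 0.5) * h) for k in range(n)) * h

s = 2.0
error = abs(laplace_log(s) - (-(math.log(s) + EULER_GAMMA) / s))
```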
Solution 33.11
Define

    f̂(s) = L[t^ν ln t] = ∫₀^∞ e^{−st} t^ν ln t dt.

This integral defines f̂(s) for ℜ(s) > 0. Note that the integral converges uniformly for ℜ(s) ≥ c > 0. On this domain we can interchange differentiation and integration.

    f̂'(s) = ∫₀^∞ (∂/∂s)(e^{−st} t^ν ln t) dt = −∫₀^∞ t e^{−st} t^ν ln t dt

Since f̂'(s) also exists for ℜ(s) > 0, f̂(s) is analytic in that domain.

Let σ be real and positive. We make the change of variables x = σt.

    f̂(σ) = L[t^ν ln t]
         = ∫₀^∞ e^{−σt} t^ν ln t dt
         = ∫₀^∞ e^{−x} (x/σ)^ν ln(x/σ) (dx/σ)
         = (1/σ^{ν+1}) ∫₀^∞ e^{−x} x^ν (ln x − ln σ) dx
         = (1/σ^{ν+1}) (∫₀^∞ e^{−x} x^ν ln x dx − ln σ ∫₀^∞ e^{−x} x^ν dx)
         = (1/σ^{ν+1}) (∫₀^∞ (∂/∂ν)(e^{−x} x^ν) dx − ln σ Γ(ν + 1))
         = (1/σ^{ν+1}) ((d/dν) ∫₀^∞ e^{−x} x^ν dx − ln σ Γ(ν + 1))
         = (1/σ^{ν+1}) ((d/dν) Γ(ν + 1) − ln σ Γ(ν + 1))
         = (1/σ^{ν+1}) Γ(ν + 1) (Γ'(ν + 1)/Γ(ν + 1) − ln σ)
         = (Γ(ν + 1)/σ^{ν+1}) (ψ(ν + 1) − ln σ)

Note that the function

    f̂(s) = (Γ(ν + 1)/s^{ν+1}) (ψ(ν + 1) − ln s)

is an analytic continuation of f̂(σ). Thus we can define the Laplace transform for all s in the right half plane.

    L[t^ν ln t] = (Γ(ν + 1)/s^{ν+1}) (ψ(ν + 1) − ln s)  for ℜ(s) > 0.

For the case ν = 0, we have

    L[ln t] = (Γ(1)/s) (ψ(1) − ln s)

    L[ln t] = −(ln s + γ)/s,

where γ is Euler's constant

    γ = −∫₀^∞ e^{−x} ln x dx = 0.5772156629...
Solution 33.12
Method 1. We factor the denominator.

    f̂(s) = 1/((s − 2)(s² + 1)) = 1/((s − 2)(s − i)(s + i))

We expand the function in partial fractions and simplify the result.

    1/((s − 2)(s − i)(s + i)) = (1/5)/(s − 2) − ((1 − i2)/10)/(s − i) − ((1 + i2)/10)/(s + i)

    f̂(s) = (1/5) (1/(s − 2)) − (1/5) ((s + 2)/(s² + 1))

We use a table of Laplace transforms to do the inversion.

    L[e^{2t}] = 1/(s − 2),   L[cos t] = s/(s² + 1),   L[sin t] = 1/(s² + 1)

    f(t) = (1/5) (e^{2t} − cos t − 2 sin t)

Method 2. We factor the denominator.

    f̂(s) = (1/(s − 2)) (1/(s² + 1))

From a table of Laplace transforms we note

    L[e^{2t}] = 1/(s − 2),   L[sin t] = 1/(s² + 1).

We apply the convolution theorem.

    f(t) = ∫₀ᵗ sin τ e^{2(t−τ)} dτ

    f(t) = (1/5) (e^{2t} − cos t − 2 sin t)

Method 3. We factor the denominator.

    f̂(s) = 1/((s − 2)(s − i)(s + i))

f̂(s) is analytic except for poles and vanishes at infinity.

    f(t) = Σ_{s_n = 2, i, −i} Res(e^{st}/((s − 2)(s − i)(s + i)), s_n)
        = e^{2t}/((2 − i)(2 + i)) + e^{it}/((i − 2)(2i)) + e^{−it}/((−i − 2)(−2i))
        = e^{2t}/5 − (1 − i2) e^{it}/10 − (1 + i2) e^{−it}/10
        = e^{2t}/5 − (e^{it} + e^{−it})/10 + i (e^{it} − e^{−it})/5

    f(t) = (1/5) (e^{2t} − cos t − 2 sin t)
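The inverse transform found by all three methods can be checked by transforming it back numerically (the helper and its parameters are ad hoc, not from the text):

```python
import math

def f(t):
    # f(t) = (1/5)(e^{2t} - cos t - 2 sin t), the inverse transform found above
    return (math.exp(2 * t) - math.cos(t) - 2 * math.sin(t)) / 5

def laplace(g, s, T=50.0, n=250000):
    # midpoint-rule approximation of the transform integral, truncated at T;
    # valid here for s > 2 so that e^{-st} f(t) decays
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

s = 3.0
error = abs(laplace(f, s) - 1 / ((s - 2) * (s**2 + 1)))
```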
Solution 33.13

    y'' + εy' + y = sin t,  y(0) = y'(0) = 0,  0 < ε ≪ 1

We take the Laplace transform of this equation.

    (s² ŷ(s) − s y(0) − y'(0)) + ε (s ŷ(s) − y(0)) + ŷ(s) = L[sin t]

    (s² + εs + 1) ŷ(s) = L[sin t]

    ŷ(s) = L[sin t]/(s² + εs + 1)

    ŷ(s) = L[sin t]/((s + ε/2)² + 1 − ε²/4)

We use a table of Laplace transforms to find the inverse Laplace transform of the first factor.

    L⁻¹[1/((s + ε/2)² + 1 − ε²/4)] = (1/√(1 − ε²/4)) e^{−εt/2} sin(√(1 − ε²/4) t)

We define

    η = √(1 − ε²/4)

to get rid of some clutter. Now we apply the convolution theorem to invert ŷ(s).²

    y(t) = ∫₀ᵗ (1/η) e^{−ετ/2} sin(ητ) sin(t − τ) dτ

    y(t) = e^{−εt/2} ((1/ε) cos(ηt) + (1/(2η)) sin(ηt)) − (1/ε) cos t

The solution is plotted in Figure 33.5 for ε = 0.05.

[Figure 33.5: The Weakly Damped, Driven Oscillator — the response for ε = 0.05 on 0 ≤ t ≤ 100, growing to an amplitude near 1/ε.]

² Evaluate the convolution integral by inspection.
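A finite-difference spot-check of the oscillator solution for ε = 0.05 (not part of the text; step sizes and sample points are ad hoc):

```python
import math

eps = 0.05
eta = math.sqrt(1 - eps**2 / 4)

def y(t):
    # the closed-form solution derived above
    return (math.exp(-eps * t / 2) * (math.cos(eta * t) / eps
            + math.sin(eta * t) / (2 * eta)) - math.cos(t) / eps)

def residual(t, h=1e-4):
    # y'' + eps*y' + y - sin t, approximated with central differences
    ypp = (y(t - h) - 2 * y(t) + y(t + h)) / h**2
    yp = (y(t + h) - y(t - h)) / (2 * h)
    return ypp + eps * yp + y(t) - math.sin(t)

max_residual = max(abs(residual(t)) for t in (0.5, 3.0, 10.0))
ic = y(0)   # should vanish
```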
Solution 33.14
We consider the solutions of

    y'' − ty' + y = 0,  y(0) = 0,  y'(0) = 1

which are of exponential order α for any α > 0. We take the Laplace transform of the differential equation.

    s² ŷ − 1 + (d/ds)(s ŷ) + ŷ = 0

    ŷ' + (s + 2/s) ŷ = 1/s

    ŷ(s) = 1/s² + c e^{−s²/2}/s²

We use that

    ŷ(s) ~ y(0)/s + y'(0)/s² + ···

to conclude that c = 0.

    ŷ(s) = 1/s²

    y(t) = t
Solution 33.15

    L⁻¹[f̂(s)] = (1/(2πi)) ∫_{c−i∞}^{c+i∞} e^{st} f̂(s) ds

First we make the change of variable s = c + σ.

    L⁻¹[f̂(s)] = (1/(2πi)) e^{ct} ∫_{−i∞}^{i∞} e^{σt} f̂(c + σ) dσ

Then we make the change of variable σ = iω.

    L⁻¹[f̂(s)] = (1/(2π)) e^{ct} ∫_{−∞}^∞ e^{iωt} f̂(c + iω) dω

    L⁻¹[f̂(s)] = (1/(2π)) e^{ct} F⁻¹[f̂(c + iω)]
Solution 33.16
We assume that ℜ(a) ≥ 0. We are considering the principal branch of the square root: s^{1/2} = √s. There is a branch cut on the negative real axis. f̂(s) is singular at s = 0 and along the negative real axis. Let β be any positive number. The inverse Laplace transform of (π/s)^{1/2} e^{−2(as)^{1/2}} is

    f(t) = (1/(2πi)) ∫_{β−i∞}^{β+i∞} e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}} ds.

We will evaluate the integral by deforming it to wrap around the branch cut. Consider the integral on the contour shown in Figure 33.6. C_R⁺ and C_R⁻ are circular arcs of radius R. B is the vertical line at ℜ(s) = β joining the two arcs. C_ε is a semi-circle of radius ε in the right half plane joining iε and −iε. L⁺ and L⁻ are lines joining the circular arcs at ℑ(s) = ±ε.

[Figure 33.6: Path of Integration — the Bromwich line B closed by the arcs C_R⁺ and C_R⁻, the lines L⁺ and L⁻ along the cut, and the small semi-circle C_ε about the origin.]

Since there are no residues inside the contour, we have

    (1/(2πi)) (∫_B + ∫_{C_R⁺} + ∫_{L⁺} + ∫_{C_ε} + ∫_{L⁻} + ∫_{C_R⁻}) e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}} ds = 0.

We will evaluate the inverse Laplace transform for t > 0.

First we will show that the integral along C_R⁺ vanishes as R → ∞. We parametrize the path of integration with s = R e^{iθ} and write the integral along C_R⁺ as the sum of two integrals: one over θ ∈ [π/2 − δ, π/2], where sin δ = β/R, and one over θ ∈ [π/2, π].

The first integral vanishes by the maximum modulus bound. Note that the length of the path of integration is less than 2β.

    |∫_{π/2−δ}^{π/2}| ≤ (max |e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}}|) (2β)
                     = e^{βt} √(π/R) (2β)
                     → 0 as R → ∞

The second integral vanishes by Jordan's Lemma.

    |∫_{π/2}^π e^{R e^{iθ} t} √(π/(R e^{iθ})) e^{−2√(aR) e^{iθ/2}} i R e^{iθ} dθ|
        ≤ √(πR) ∫_{π/2}^π e^{Rt cos θ} dθ
        = √(πR) ∫₀^{π/2} e^{−Rt sin φ} dφ
        < √(πR) π/(2Rt)
        → 0 as R → ∞

We could show that the integral along C_R⁻ vanishes by the same method. Now we have

    (1/(2πi)) (∫_B + ∫_{L⁺} + ∫_{C_ε} + ∫_{L⁻}) e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}} ds = 0.

We show that the integral along C_ε vanishes as ε → 0 with the maximum modulus bound.

    |∫_{C_ε} e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}} ds| ≤ (max_{s∈C_ε} |e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}}|) (πε)
                                                   ≤ e^{εt} √(π/ε) πε
                                                   → 0 as ε → 0.

Now we can express the inverse Laplace transform in terms of the integrals along L⁺ and L⁻.

    f(t) = (1/(2πi)) ∫_{β−i∞}^{β+i∞} e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}} ds
         = −(1/(2πi)) ∫_{L⁺} e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}} ds
           − (1/(2πi)) ∫_{L⁻} e^{st} (π/s)^{1/2} e^{−2(as)^{1/2}} ds.

On L⁺, s = r e^{iπ}, ds = e^{iπ} dr = −dr; on L⁻, s = r e^{−iπ}, ds = e^{−iπ} dr = −dr. We can combine the integrals along the top and bottom of the branch cut.

    f(t) = −(1/(2πi)) ∫_∞^0 e^{−rt} (√π/(i√r)) e^{−i2√(ar)} (−dr)
           − (1/(2πi)) ∫_0^∞ e^{−rt} (√π/(−i√r)) e^{i2√(ar)} (−dr)
         = (1/(2√π)) ∫₀^∞ e^{−rt} (1/√r) (e^{i2√(ar)} + e^{−i2√(ar)}) dr
         = (1/(2√π)) ∫₀^∞ (1/√r) e^{−rt} 2 cos(2√(ar)) dr

We make the change of variables x = √r.

    f(t) = (1/√π) ∫₀^∞ (1/x) e^{−tx²} cos(2√a x) 2x dx
         = (2/√π) ∫₀^∞ e^{−tx²} cos(2√a x) dx
         = (2/√π) √(π/(4t)) e^{−4a/(4t)}

Thus the inverse Laplace transform is

    f(t) = e^{−a/t}/√t
Solution 33.17
We consider the problem

    d⁴y/dt⁴ − y = t,  y(0) = y'(0) = y''(0) = y'''(0) = 0.

We take the Laplace transform of the differential equation.

    s⁴ ŷ(s) − s³ y(0) − s² y'(0) − s y''(0) − y'''(0) − ŷ(s) = 1/s²

    s⁴ ŷ(s) − ŷ(s) = 1/s²

    ŷ(s) = 1/(s²(s⁴ − 1))

There are several ways in which we could carry out the inverse Laplace transform to find y(t). We could expand the right side in partial fractions and then use a table of Laplace transforms. Since the function is analytic except for isolated singularities and vanishes as s → ∞ we could use the result,

    L⁻¹[f̂(s)] = Σ_{n=1}^N Res(e^{st} f̂(s), s_n),

where {s_k}_{k=1}^N are the singularities of f̂(s). Since we can write the function as a product of simpler terms we could also apply the convolution theorem.

We will first do the inverse Laplace transform by expanding the function in partial fractions to obtain simpler rational functions.

    1/(s²(s⁴ − 1)) = 1/(s²(s − 1)(s + 1)(s − i)(s + i))
                  = a/s² + b/s + c/(s − 1) + d/(s + 1) + e/(s − i) + f/(s + i)

    a = [1/(s⁴ − 1)]_{s=0} = −1
    b = [(d/ds)(1/(s⁴ − 1))]_{s=0} = 0
    c = [1/(s²(s + 1)(s − i)(s + i))]_{s=1} = 1/4
    d = [1/(s²(s − 1)(s − i)(s + i))]_{s=−1} = −1/4
    e = [1/(s²(s − 1)(s + 1)(s + i))]_{s=i} = −i/4
    f = [1/(s²(s − 1)(s + 1)(s − i))]_{s=−i} = i/4

Now we have simple functions that we can look up in a table.

    ŷ(s) = −1/s² + (1/4)/(s − 1) − (1/4)/(s + 1) + (1/2)/(s² + 1)

    y(t) = (−t + (1/4) eᵗ − (1/4) e⁻ᵗ + (1/2) sin t) H(t)

    y(t) = (−t + (1/2)(sinh t + sin t)) H(t)

We can also do the inversion with the convolution theorem.

    1/(s²(s⁴ − 1)) = (1/s²) (1/(s² + 1)) (1/(s² − 1))

From a table of Laplace transforms we know,

    L⁻¹[1/s²] = t,
    L⁻¹[1/(s² + 1)] = sin t,
    L⁻¹[1/(s² − 1)] = sinh t.

Now we use the convolution theorem to find the solution for t > 0.

    L⁻¹[1/(s⁴ − 1)] = ∫₀ᵗ sinh(τ) sin(t − τ) dτ
                   = (1/2)(sinh t − sin t)

    L⁻¹[1/(s²(s⁴ − 1))] = ∫₀ᵗ (1/2)(sinh τ − sin τ)(t − τ) dτ
                        = −t + (1/2)(sinh t + sin t)
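The solution can be checked two ways numerically (helper and parameters are ad hoc): transform it back and compare with ŷ, and verify the identity y'''' − y = t using the hand-computed fourth derivative of sinh and sin.

```python
import math

def y(t):
    # y(t) = -t + (sinh t + sin t)/2, the solution for t > 0
    return -t + (math.sinh(t) + math.sin(t)) / 2

def laplace(g, s, T=50.0, n=250000):
    # midpoint-rule approximation of the transform integral, truncated at T
    h = T / n
    return sum(math.exp(-s * (k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

s = 2.0
transform_error = abs(laplace(y, s) - 1 / (s**2 * (s**4 - 1)))

# d^4/dt^4 of (sinh t + sin t)/2 is itself, so y'''' - y should equal t:
ode_error = max(abs((math.sinh(t) + math.sin(t)) / 2 - y(t) - t) for t in (0.4, 1.1, 2.0))
```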
Solution 33.18

    dy/dt = sin t + ∫₀ᵗ y(τ) cos(t − τ) dτ

    s ŷ(s) − y(0) = 1/(s² + 1) + ŷ(s) s/(s² + 1)

    (s³ + s) ŷ(s) − s ŷ(s) = 1

    ŷ(s) = 1/s³

    y(t) = t²/2
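The integro-differential equation can be verified directly (not from the text; the quadrature resolution is an ad hoc choice): with y = t²/2 we have y' = t, which should equal sin t plus the convolution integral.

```python
import math

def conv(t, n=20000):
    # midpoint approximation of the integral of (tau^2/2) cos(t - tau) over [0, t]
    h = t / n
    return sum(((k + 0.5) * h)**2 / 2 * math.cos(t - (k + 0.5) * h) for k in range(n)) * h

# check y'(t) = sin t + (y * cos)(t) at a few points
max_err = max(abs(t - (math.sin(t) + conv(t))) for t in (0.5, 1.0, 2.0))
```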
Solution 33.19
The Laplace transform of u(t − 1) is

    L[u(t − 1)] = ∫₀^∞ e^{−st} u(t − 1) dt
               = ∫₋₁^∞ e^{−s(t+1)} u(t) dt
               = e^{−s} ∫₋₁⁰ e^{−st} u(t) dt + e^{−s} ∫₀^∞ e^{−st} u(t) dt
               = e^{−s} ∫₋₁⁰ e^{−st} u₀(t) dt + e^{−s} û(s).

We take the Laplace transform of the difference-differential equation.

    s û(s) − u(0) + û(s) − e^{−s} ∫₋₁⁰ e^{−st} u₀(t) dt − e^{−s} û(s) = 0

    (1 + s − e^{−s}) û(s) = u₀(0) + e^{−s} ∫₋₁⁰ e^{−st} u₀(t) dt

    û(s) = u₀(0)/(1 + s − e^{−s}) + (e^{−s}/(1 + s − e^{−s})) ∫₋₁⁰ e^{−st} u₀(t) dt

Consider the case u₀(t) = 1.

    û(s) = 1/(1 + s − e^{−s}) + (e^{−s}/(1 + s − e^{−s})) ∫₋₁⁰ e^{−st} dt

    û(s) = 1/(1 + s − e^{−s}) + (e^{−s}/(1 + s − e^{−s})) (−1/s + (1/s) e^s)

    û(s) = (1/s + 1 − e^{−s}/s)/(1 + s − e^{−s})

    û(s) = 1/s

    u(t) = 1

Clearly this solution satisfies the difference-differential equation.
Solution 33.20
We consider the problem,

    d²y/dt² − y = f(t),  y(0) = 1,  y'(0) = 0,

where f(t) is periodic with period 2π and is defined by,

    f(t) = 1 for 0 ≤ t < π,  f(t) = 0 for π ≤ t < 2π.

We take the Laplace transform of the differential equation.

    s² ŷ(s) − s y(0) − y'(0) − ŷ(s) = f̂(s)

    s² ŷ(s) − s − ŷ(s) = f̂(s)

    ŷ(s) = s/(s² − 1) + f̂(s)/(s² − 1)

By inspection, (of a table of Laplace transforms), we see that

    L⁻¹[s/(s² − 1)] = cosh(t) H(t),
    L⁻¹[1/(s² − 1)] = sinh(t) H(t).

Now we use the convolution theorem.

    L⁻¹[f̂(s)/(s² − 1)] = ∫₀ᵗ f(τ) sinh(t − τ) dτ

The solution for positive t is

    y(t) = cosh(t) + ∫₀ᵗ f(τ) sinh(t − τ) dτ.

Clearly the solution is continuous because the integral of a bounded function is continuous. The first derivative of the solution is

    y'(t) = sinh t + f(t) sinh(0) + ∫₀ᵗ f(τ) cosh(t − τ) dτ

    y'(t) = sinh t + ∫₀ᵗ f(τ) cosh(t − τ) dτ

We see that the first derivative is also continuous.
Solution 33.21
We consider the problem

    dy/dt + ∫₀ᵗ y(τ) dτ = e^{−t},  y(0) = 1.

We take the Laplace transform of the equation and solve for ŷ.

    s ŷ − y(0) + ŷ/s = 1/(s + 1)

    ŷ = s(s + 2)/((s + 1)(s² + 1))

We expand the right side in partial fractions.

    ŷ = −1/(2(s + 1)) + (1 + 3s)/(2(s² + 1))

We use a table of Laplace transforms to do the inversion.

    y = −(1/2) e^{−t} + (1/2)(sin(t) + 3 cos(t))
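The solution can be verified exactly with hand-computed derivative and antiderivative (a self-contained check, not from the text; note the −½ e^{−t} term, which makes y(0) = 1):

```python
import math

def y(t):
    # y(t) = -(1/2) e^{-t} + (1/2)(sin t + 3 cos t)
    return -math.exp(-t) / 2 + (math.sin(t) + 3 * math.cos(t)) / 2

def yp(t):
    # y'(t), differentiated by hand
    return math.exp(-t) / 2 + (math.cos(t) - 3 * math.sin(t)) / 2

def Y(t):
    # integral of y over [0, t], computed by hand; Y(0) = 0
    return math.exp(-t) / 2 + (3 * math.sin(t) - math.cos(t)) / 2

# the equation y' + Y = e^{-t} should hold identically
max_err = max(abs(yp(t) + Y(t) - math.exp(-t)) for t in (0.0, 0.8, 2.3))
ic = y(0)   # should equal 1
```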
Solution 33.22
We consider the problem

    L di₁/dt + R i₁ + q/C = E₀
    L di₂/dt + R i₂ − q/C = 0
    dq/dt = i₁ − i₂

    i₁(0) = i₂(0) = E₀/(2R),  q(0) = 0.

We take the Laplace transform of the system of differential equations.

    L (s î₁ − E₀/(2R)) + R î₁ + q̂/C = E₀/s
    L (s î₂ − E₀/(2R)) + R î₂ − q̂/C = 0
    s q̂ = î₁ − î₂

We solve for î₁, î₂ and q̂.

    î₁ = (E₀/2) (1/(Rs) + (1/L)/(s² + Rs/L + 2/(CL)))
    î₂ = (E₀/2) (1/(Rs) − (1/L)/(s² + Rs/L + 2/(CL)))
    q̂ = (CE₀/2) (1/s − (s + R/L)/(s² + Rs/L + 2/(CL)))

We factor the polynomials in the denominators.

    î₁ = (E₀/2) (1/(Rs) + (1/L)/((s + α − iω)(s + α + iω)))
    î₂ = (E₀/2) (1/(Rs) − (1/L)/((s + α − iω)(s + α + iω)))
    q̂ = (CE₀/2) (1/s − (s + 2α)/((s + α − iω)(s + α + iω)))

Here we have defined

    α = R/(2L)  and  ω² = 2/(LC) − α².

We expand the functions in partial fractions.

    î₁ = (E₀/2) (1/(Rs) + (i/(2ωL)) (1/(s + α + iω) − 1/(s + α − iω)))
    î₂ = (E₀/2) (1/(Rs) − (i/(2ωL)) (1/(s + α + iω) − 1/(s + α − iω)))
    q̂ = (CE₀/2) (1/s + (i/(2ω)) ((α + iω)/(s + α − iω) − (α − iω)/(s + α + iω)))

Now we can do the inversion with a table of Laplace transforms.

    i₁ = (E₀/2) (1/R + (i/(2ωL)) (e^{−(α+iω)t} − e^{−(α−iω)t}))
    i₂ = (E₀/2) (1/R − (i/(2ωL)) (e^{−(α+iω)t} − e^{−(α−iω)t}))
    q = (CE₀/2) (1 + (i/(2ω)) ((α + iω) e^{−(α−iω)t} − (α − iω) e^{−(α+iω)t}))

We simplify the expressions to obtain the solutions.

    i₁ = (E₀/2) (1/R + (1/(ωL)) e^{−αt} sin(ωt))
    i₂ = (E₀/2) (1/R − (1/(ωL)) e^{−αt} sin(ωt))
    q = (CE₀/2) (1 − e^{−αt} (cos(ωt) + (α/ω) sin(ωt)))
Chapter 34

The Fourier Transform

34.1 Derivation from a Fourier Series

Consider the eigenvalue problem

    y'' + λy = 0,  y(−L) = y(L),  y'(−L) = y'(L).

The eigenvalues and eigenfunctions are

    λ_n = (nπ/L)²  for n ∈ Z⁰⁺,
    φ_n = e^{inπx/L},  for n ∈ Z.

The eigenfunctions form an orthogonal set. A piecewise continuous function defined on [−L, L] can be expanded in a series of the eigenfunctions.

    f(x) ~ Σ_{n=−∞}^∞ c_n e^{inπx/L}
The Fourier coefficients are

    c_n = ⟨e^{inπx/L}, f(x)⟩ / ⟨e^{inπx/L}, e^{inπx/L}⟩
        = (1/(2L)) ∫₋L^L e^{−inπx/L} f(x) dx.

We substitute the expression for c_n into the series for f(x).

    f(x) ~ Σ_{n=−∞}^∞ ((1/(2L)) ∫₋L^L e^{−inπξ/L} f(ξ) dξ) e^{inπx/L}.

We let ω_n = nπ/L and Δω = π/L.

    f(x) ~ Σ_{n=−∞}^∞ ((1/(2π)) ∫₋L^L e^{−iω_n ξ} f(ξ) dξ) e^{iω_n x} Δω.

In the limit as L → ∞, (and thus Δω → 0), the sum becomes an integral.

    f(x) ~ ∫_{−∞}^∞ ((1/(2π)) ∫_{−∞}^∞ e^{−iωξ} f(ξ) dξ) e^{iωx} dω.

Thus the expansion of f(x) for finite L

    f(x) ~ Σ_{n=−∞}^∞ c_n e^{inπx/L},   c_n = (1/(2L)) ∫₋L^L e^{−inπx/L} f(x) dx

in the limit as L → ∞ becomes

    f(x) ~ ∫_{−∞}^∞ f̂(ω) e^{iωx} dω,   f̂(ω) = (1/(2π)) ∫_{−∞}^∞ f(x) e^{−iωx} dx.

Of course this derivation is only heuristic. In the next section we will explore these formulas more carefully.
34.2 The Fourier Transform

Let f(x) be piecewise continuous and let ∫_{−∞}^∞ |f(x)| dx exist. We define the function I(x, L).

    I(x, L) = (1/(2π)) ∫₋L^L (∫_{−∞}^∞ f(ξ) e^{−iωξ} dξ) e^{iωx} dω.

Since the integral in parentheses is uniformly convergent, we can interchange the order of integration.

    = (1/(2π)) ∫_{−∞}^∞ (∫₋L^L f(ξ) e^{−iω(ξ−x)} dω) dξ
    = (1/(2π)) ∫_{−∞}^∞ [−f(ξ) e^{−iω(ξ−x)}/(i(ξ − x))]_{ω=−L}^{ω=L} dξ
    = (1/(2π)) ∫_{−∞}^∞ f(ξ) (1/(i(ξ − x))) (e^{iL(ξ−x)} − e^{−iL(ξ−x)}) dξ
    = (1/π) ∫_{−∞}^∞ f(ξ) sin(L(ξ − x))/(ξ − x) dξ
    = (1/π) ∫_{−∞}^∞ f(ξ + x) sin(Lξ)/ξ dξ.

In Example 34.3.3 we will show that

    ∫₀^∞ sin(Lξ)/ξ dξ = π/2.
Continuous Functions. Suppose that f(x) is continuous.

    f(x) = (1/π) ∫_{−∞}^∞ f(x) sin(Lξ)/ξ dξ

    I(x, L) − f(x) = (1/π) ∫_{−∞}^∞ ((f(x + ξ) − f(x))/ξ) sin(Lξ) dξ.

If f(x) has a left and right derivative at x then (f(x + ξ) − f(x))/ξ is bounded and ∫_{−∞}^∞ |(f(x + ξ) − f(x))/ξ| dξ < ∞. We use the Riemann-Lebesgue lemma to show that the integral vanishes as L → ∞.

    (1/π) ∫_{−∞}^∞ ((f(x + ξ) − f(x))/ξ) sin(Lξ) dξ → 0 as L → ∞.

Now we have an identity for f(x).

    f(x) = (1/(2π)) ∫_{−∞}^∞ (∫_{−∞}^∞ f(ξ) e^{−iωξ} dξ) e^{iωx} dω.
Piecewise Continuous Functions. Now consider the case that f(x) is only piecewise continuous.

    f(x⁺)/2 = (1/π) ∫₀^∞ f(x⁺) sin(Lξ)/ξ dξ

    f(x⁻)/2 = (1/π) ∫_{−∞}^0 f(x⁻) sin(Lξ)/ξ dξ

    I(x, L) − (f(x⁺) + f(x⁻))/2 = (1/π) ∫_{−∞}^0 ((f(x + ξ) − f(x⁻))/ξ) sin(Lξ) dξ
                                 + (1/π) ∫₀^∞ ((f(x + ξ) − f(x⁺))/ξ) sin(Lξ) dξ

If f(x) has a left and right derivative at x, then

    (f(x + ξ) − f(x⁻))/ξ is bounded for ξ ≤ 0, and
    (f(x + ξ) − f(x⁺))/ξ is bounded for ξ ≥ 0.

Again using the Riemann-Lebesgue lemma we see that

    (f(x⁺) + f(x⁻))/2 = (1/(2π)) ∫_{−∞}^∞ (∫_{−∞}^∞ f(ξ) e^{−iωξ} dξ) e^{iωx} dω.
Result 34.2.1 Let f(x) be piecewise continuous with ∫_{−∞}^∞ |f(x)| dx < ∞. The Fourier transform of f(x) is defined

    f̂(ω) = F[f(x)] = (1/(2π)) ∫_{−∞}^∞ f(x) e^{−iωx} dx.

We see that the integral is uniformly convergent. The inverse Fourier transform is defined

    (f(x⁺) + f(x⁻))/2 = F⁻¹[f̂(ω)] = ∫_{−∞}^∞ f̂(ω) e^{iωx} dω.

If f(x) is continuous then this reduces to

    f(x) = F⁻¹[f̂(ω)] = ∫_{−∞}^∞ f̂(ω) e^{iωx} dω.
34.2.1 A Word of Caution

Other texts may define the Fourier transform differently. The important relation is

    f(x) = ∫_{−∞}^∞ ((1/(2π)) ∫_{−∞}^∞ f(ξ) e^{−iωξ} dξ) e^{iωx} dω.

Multiplying the right side of this equation by 1 = √(2π)/√(2π) yields

    f(x) = (1/√(2π)) ∫_{−∞}^∞ ((1/√(2π)) ∫_{−∞}^∞ f(ξ) e^{−iωξ} dξ) e^{iωx} dω.

Splitting the factor of 1/(2π) this way and choosing the sign in the exponentials gives us the Fourier transform pair

    f̂(ω) = (1/√(2π)) ∫_{−∞}^∞ f(x) e^{−iωx} dx
    f(x) = (1/√(2π)) ∫_{−∞}^∞ f̂(ω) e^{iωx} dω.

Other equally valid pairs are

    f̂(ω) = ∫_{−∞}^∞ f(x) e^{−iωx} dx,
    f(x) = (1/(2π)) ∫_{−∞}^∞ f̂(ω) e^{iωx} dω,

and

    f̂(ω) = ∫_{−∞}^∞ f(x) e^{iωx} dx,
    f(x) = (1/(2π)) ∫_{−∞}^∞ f̂(ω) e^{−iωx} dω.

Be aware of the different definitions when reading other texts or consulting tables of Fourier transforms.
34.3 Evaluating Fourier Integrals

34.3.1 Integrals that Converge

If the Fourier integral

    F[f(x)] = (1/(2π)) ∫_{−∞}^∞ f(x) e^{−iωx} dx,

converges for real ω, then finding the transform of a function is just a matter of direct integration. We will consider several examples of such garden variety functions in this subsection. Later on we will consider the more interesting cases when the integral does not converge for real ω.

Example 34.3.1 Consider the Fourier transform of e^{−α|x|}, where α > 0. Since the integral of e^{−α|x|} is absolutely convergent, we know that the Fourier transform integral converges for real ω. We write out the integral.

    F[e^{−α|x|}] = (1/(2π)) ∫_{−∞}^∞ e^{−α|x|} e^{−iωx} dx
               = (1/(2π)) ∫_{−∞}^0 e^{αx − iωx} dx + (1/(2π)) ∫₀^∞ e^{−αx − iωx} dx
               = (1/(2π)) ∫_{−∞}^0 e^{(α − iℜ(ω) + ℑ(ω))x} dx + (1/(2π)) ∫₀^∞ e^{(−α − iℜ(ω) + ℑ(ω))x} dx

The integral converges for |ℑ(ω)| < α. This domain is shown in Figure 34.1.

[Figure 34.1: The Domain of Convergence — the horizontal strip |ℑ(ω)| < α in the complex ω plane.]
Now we do the integration.

    F[e^{−α|x|}] = (1/(2π)) ∫_{−∞}^0 e^{(α−iω)x} dx + (1/(2π)) ∫₀^∞ e^{−(α+iω)x} dx
               = (1/(2π)) [e^{(α−iω)x}/(α − iω)]_{−∞}^0 + (1/(2π)) [−e^{−(α+iω)x}/(α + iω)]₀^∞
               = (1/(2π)) (1/(α − iω) + 1/(α + iω))
               = α/(π(ω² + α²)),  for |ℑ(ω)| < α

We can extend the domain of the Fourier transform with analytic continuation.

    F[e^{−α|x|}] = α/(π(ω² + α²)),  for ω ≠ ±iα
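The pair just derived can be spot-checked by quadrature in this text's convention (the helper and its parameters are ad hoc): since e^{−α|x|} is even, only the cosine part of the kernel contributes.

```python
import math

alpha = 1.5

def fourier(w, X=40.0, n=200000):
    # (1/2*pi) * integral of e^{-alpha|x|} cos(w x) over [-X, X];
    # the sine part vanishes by symmetry
    h = 2 * X / n
    return sum(math.exp(-alpha * abs(-X + (k + 0.5) * h))
               * math.cos(w * (-X + (k + 0.5) * h)) for k in range(n)) * h / (2 * math.pi)

w = 2.0
error = abs(fourier(w) - alpha / (math.pi * (w**2 + alpha**2)))
```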
Example 34.3.2 Consider the Fourier transform of f(x) = 1/(x − iα), α > 0.

    F[1/(x − iα)] = (1/(2π)) ∫_{−∞}^∞ (1/(x − iα)) e^{−iωx} dx

The integral converges for ℑ(ω) = 0. We will evaluate the integral for positive and negative real values of ω.

For ω > 0, we will close the path of integration in the lower half-plane. Let C_R be the contour from x = R to x = −R following a semicircular path in the lower half-plane. The integral along C_R vanishes as R → ∞ by Jordan's Lemma.

    ∫_{C_R} (1/(x − iα)) e^{−iωx} dx → 0 as R → ∞.

Since the integrand is analytic in the lower half-plane the integral vanishes.

    F[1/(x − iα)] = 0

For ω < 0, we will close the path of integration in the upper half-plane. Let C_R denote the semicircular contour from x = R to x = −R in the upper half-plane. The integral along C_R vanishes as R goes to infinity by Jordan's Lemma. We evaluate the Fourier transform integral with the Residue Theorem.

    F[1/(x − iα)] = (1/(2π)) 2πi Res(e^{−iωx}/(x − iα), iα)
                 = i e^{αω}

We combine the results for positive and negative values of ω.

    F[1/(x − iα)] = 0 for ω > 0,  i e^{αω} for ω < 0
34.3.2 Cauchy Principal Value and Integrals that are Not Absolutely Convergent.

That the integral of f(x) is absolutely convergent is a sufficient but not a necessary condition that the Fourier transform of f(x) exists. The integral ∫_{−∞}^∞ f(x) e^{−iωx} dx may converge even if ∫_{−∞}^∞ |f(x)| dx does not. Furthermore, if the Fourier transform integral diverges, its principal value may exist. We will say that the Fourier transform of f(x) exists if the principal value of the integral exists.

    F[f(x)] = (1/(2π)) PV ∫_{−∞}^∞ f(x) e^{−iωx} dx

Example 34.3.3 Consider the Fourier transform of f(x) = 1/x.

    f̂(ω) = (1/(2π)) PV ∫_{−∞}^∞ (1/x) e^{−iωx} dx

If ω > 0, we can close the contour in the lower half-plane. The integral along the semi-circle vanishes due to Jordan's Lemma.

    lim_{R→∞} ∫_{C_R} (1/x) e^{−iωx} dx = 0

We can evaluate the Fourier transform with the Residue Theorem.

    f̂(ω) = (1/(2π)) (1/2) (−2πi) Res(e^{−iωx}/x, 0)

    f̂(ω) = −i/2,  for ω > 0.

The factor of 1/2 in the above derivation arises because the path of integration is in the negative, (clockwise), direction and the path of integration crosses through the first order pole at x = 0. The path of integration is shown in Figure 34.2.

[Figure 34.2: The Path of Integration — the real axis indented at the origin, closed in the lower half-plane.]

If ω < 0, we can close the contour in the upper half plane to obtain

    f̂(ω) = i/2,  for ω < 0.

For ω = 0 the integral vanishes because 1/x is an odd function.

    f̂(0) = (1/(2π)) PV ∫_{−∞}^∞ (1/x) dx = 0
We collect the results in one formula.

    f̂(ω) = −(i/2) sign(ω)

We write the integrand for ω > 0 as the sum of an odd and an even function.

    (1/(2π)) PV ∫_{−∞}^∞ (1/x) e^{−iωx} dx = −i/2

    PV ∫_{−∞}^∞ (1/x) cos(ωx) dx − i PV ∫_{−∞}^∞ (1/x) sin(ωx) dx = −iπ

The principal value of the integral of any odd function is zero.

    PV ∫_{−∞}^∞ (1/x) cos(ωx) dx = 0

If the principal value of the integral of an even function exists, then the integral converges.

    ∫_{−∞}^∞ (1/x) sin(ωx) dx = π

    ∫₀^∞ (1/x) sin(ωx) dx = π/2

Thus we have evaluated an integral that we used in deriving the Fourier transform.
34.3.3 Analytic Continuation

Consider the Fourier transform of f(x) = 1. The Fourier integral is not convergent, and its principal value does not exist. Thus we will have to be a little creative in order to define the Fourier transform. Define the two functions

    f⁺(x) = 1 for x > 0,  1/2 for x = 0,  0 for x < 0;
    f⁻(x) = 0 for x > 0,  1/2 for x = 0,  1 for x < 0.

Note that 1 = f⁻(x) + f⁺(x).

The Fourier transform of f⁺(x) converges for ℑ(ω) < 0.

    F[f⁺(x)] = (1/(2π)) ∫₀^∞ e^{−iωx} dx
            = (1/(2π)) ∫₀^∞ e^{(−iℜ(ω) + ℑ(ω))x} dx
            = (1/(2π)) [e^{−iωx}/(−iω)]₀^∞
            = −i/(2πω)  for ℑ(ω) < 0

Using analytic continuation, we can define the Fourier transform of f⁺(x) for all ω except the point ω = 0.

    F[f⁺(x)] = −i/(2πω)

We follow the same procedure for f⁻(x). The integral converges for ℑ(ω) > 0.

    F[f⁻(x)] = (1/(2π)) ∫_{−∞}^0 e^{−iωx} dx
            = (1/(2π)) ∫_{−∞}^0 e^{(−iℜ(ω) + ℑ(ω))x} dx
            = (1/(2π)) [e^{−iωx}/(−iω)]_{−∞}^0
            = i/(2πω).

Using analytic continuation we can define the transform for all nonzero ω.

    F[f⁻(x)] = i/(2πω)

Now we are prepared to define the Fourier transform of f(x) = 1.

    F[1] = F[f⁻(x)] + F[f⁺(x)]
         = −i/(2πω) + i/(2πω)
         = 0,  for ω ≠ 0

When ω = 0 the integral diverges. When we consider the closure relation for the Fourier transform we will see that

    F[1] = δ(ω).
34.4 Properties of the Fourier Transform
In this section we will explore various properties of the Fourier Transform. I would like to avoid stating assumptions
on various functions at the beginning of each subsection. Unless otherwise indicated, assume that the integrals
1428
converge.
34.4.1 Closure Relation.
Recall the closure relation for an orthonormal set of functions $\{\phi_1, \phi_2, \ldots\}$,
\[
\sum_{n=1}^{\infty}\phi_n(x)\overline{\phi_n}(\xi) \sim \delta(x-\xi).
\]
There is a similar closure relation for Fourier integrals. We compute the Fourier transform of $\delta(x-\xi)$.
\begin{align*}
\mathcal{F}[\delta(x-\xi)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(x-\xi)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\, e^{-i\omega\xi}
\end{align*}
Next we take the inverse Fourier transform.
\begin{align*}
\delta(x-\xi) &\sim \int_{-\infty}^{\infty}\frac{1}{2\pi}\, e^{-i\omega\xi}\, e^{i\omega x}\,d\omega \\
\delta(x-\xi) &\sim \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega(x-\xi)}\,d\omega.
\end{align*}
Note that the integral is divergent, but it would be impossible to represent $\delta(x-\xi)$ with a convergent integral.
34.4.2 Fourier Transform of a Derivative.
Consider the Fourier transform of $y'(x)$.
\begin{align*}
\mathcal{F}[y'(x)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} y'(x)\, e^{-i\omega x}\,dx \\
&= \left[\frac{1}{2\pi}\, y(x)\, e^{-i\omega x}\right]_{-\infty}^{\infty} - \frac{1}{2\pi}\int_{-\infty}^{\infty}(-i\omega) y(x)\, e^{-i\omega x}\,dx \\
&= i\omega\,\frac{1}{2\pi}\int_{-\infty}^{\infty} y(x)\, e^{-i\omega x}\,dx \\
&= i\omega\,\mathcal{F}[y(x)]
\end{align*}
Next consider $y''(x)$.
\begin{align*}
\mathcal{F}[y''(x)] &= \mathcal{F}\left[\frac{d}{dx}\big(y'(x)\big)\right] \\
&= i\omega\,\mathcal{F}[y'(x)] \\
&= (i\omega)^2\,\mathcal{F}[y(x)] \\
&= -\omega^2\,\mathcal{F}[y(x)]
\end{align*}
In general,
\[
\mathcal{F}\left[y^{(n)}(x)\right] = (i\omega)^n\,\mathcal{F}[y(x)].
\]
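The derivative rule above can be verified numerically by quadrature. The following sketch is not from the original text; the Gaussian test function, grid, and frequency are arbitrary choices made for illustration, using this chapter's $\frac{1}{2\pi}\int f(x)e^{-i\omega x}dx$ convention.

```python
import numpy as np

# Numerical check of the derivative rule F[y'] = i*w*F[y], with this chapter's
# convention F[y](w) = (1/2pi) * integral y(x) e^{-iwx} dx.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
y = np.exp(-x**2)
yp = -2.0 * x * np.exp(-x**2)          # y'(x), computed analytically

def transform(f, omega):
    return np.sum(f * np.exp(-1j * omega * x)) * dx / (2.0 * np.pi)

omega = 1.5
lhs = transform(yp, omega)             # F[y'](w)
rhs = 1j * omega * transform(y, omega) # i*w*F[y](w)
print(abs(lhs - rhs))                  # should be near zero
```

The agreement is limited only by the quadrature error of the Riemann sum, which is negligible for a rapidly decaying integrand on this grid.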
Example 34.4.1 The Dirac delta function can be expressed as the derivative of the Heaviside function.
\[
H(x-c) = \begin{cases} 0 & \text{for } x<c, \\ 1 & \text{for } x>c \end{cases}
\]
Thus we can express the Fourier transform of $H(x-c)$ in terms of the Fourier transform of the delta function.
\begin{align*}
\mathcal{F}[\delta(x-c)] &= i\omega\,\mathcal{F}[H(x-c)] \\
\frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(x-c)\, e^{-i\omega x}\,dx &= i\omega\,\mathcal{F}[H(x-c)] \\
\frac{1}{2\pi}\, e^{-i\omega c} &= i\omega\,\mathcal{F}[H(x-c)] \\
\mathcal{F}[H(x-c)] &= \frac{1}{2\pi i\omega}\, e^{-i\omega c}
\end{align*}
34.4.3 Fourier Convolution Theorem.
Consider the Fourier transform of a product of two functions.
\begin{align*}
\mathcal{F}[f(x)g(x)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)g(x)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}\hat{f}(\eta)\, e^{i\eta x}\,d\eta\right] g(x)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}\hat{f}(\eta) g(x)\, e^{-i(\omega-\eta)x}\,dx\right] d\eta \\
&= \int_{-\infty}^{\infty}\hat{f}(\eta)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} g(x)\, e^{-i(\omega-\eta)x}\,dx\right] d\eta \\
&= \int_{-\infty}^{\infty}\hat{f}(\eta)\,\hat{g}(\omega-\eta)\,d\eta
\end{align*}
The convolution of two functions is defined
\[
f * g(x) = \int_{-\infty}^{\infty} f(\xi) g(x-\xi)\,d\xi.
\]
Thus
\[
\mathcal{F}[f(x)g(x)] = \hat{f} * \hat{g}(\omega) = \int_{-\infty}^{\infty}\hat{f}(\eta)\,\hat{g}(\omega-\eta)\,d\eta.
\]
Now consider the inverse Fourier transform of a product of two functions.
\begin{align*}
\mathcal{F}^{-1}[\hat{f}(\omega)\hat{g}(\omega)] &= \int_{-\infty}^{\infty}\hat{f}(\omega)\hat{g}(\omega)\, e^{i\omega x}\,d\omega \\
&= \int_{-\infty}^{\infty}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)\, e^{-i\omega\xi}\,d\xi\right]\hat{g}(\omega)\, e^{i\omega x}\,d\omega \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} f(\xi)\,\hat{g}(\omega)\, e^{i\omega(x-\xi)}\,d\omega\right] d\xi \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)\left[\int_{-\infty}^{\infty}\hat{g}(\omega)\, e^{i\omega(x-\xi)}\,d\omega\right] d\xi \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi) g(x-\xi)\,d\xi
\end{align*}
Thus
\[
\mathcal{F}^{-1}[\hat{f}(\omega)\hat{g}(\omega)] = \frac{1}{2\pi}\, f*g(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)g(x-\xi)\,d\xi,
\qquad
\mathcal{F}[f*g(x)] = 2\pi\,\hat{f}(\omega)\hat{g}(\omega).
\]
These relations are known as the Fourier convolution theorem.
Example 34.4.2 Using the convolution theorem and the table of Fourier transform pairs in the appendix, we can find the Fourier transform of
\[
f(x) = \frac{1}{x^4 + 5x^2 + 4}.
\]
We factor the fraction.
\[
f(x) = \frac{1}{(x^2+1)(x^2+4)}
\]
From the table, we know that
\[
\mathcal{F}\left[\frac{2c}{x^2+c^2}\right] = e^{-c|\omega|} \quad\text{for } c>0.
\]
We apply the convolution theorem.
\begin{align*}
\mathcal{F}[f(x)] &= \mathcal{F}\left[\frac{1}{8}\,\frac{2}{x^2+1}\,\frac{4}{x^2+4}\right] \\
&= \frac{1}{8}\int_{-\infty}^{\infty} e^{-|\eta|}\, e^{-2|\omega-\eta|}\,d\eta \\
&= \frac{1}{8}\left(\int_{-\infty}^{0} e^{\eta}\, e^{-2|\omega-\eta|}\,d\eta + \int_0^{\infty} e^{-\eta}\, e^{-2|\omega-\eta|}\,d\eta\right)
\end{align*}
First consider the case $\omega>0$.
\begin{align*}
\mathcal{F}[f(x)] &= \frac{1}{8}\left(\int_{-\infty}^{0} e^{-2\omega+3\eta}\,d\eta + \int_0^{\omega} e^{-2\omega+\eta}\,d\eta + \int_{\omega}^{\infty} e^{2\omega-3\eta}\,d\eta\right) \\
&= \frac{1}{8}\left(\frac{1}{3}\, e^{-2\omega} + e^{-\omega} - e^{-2\omega} + \frac{1}{3}\, e^{-\omega}\right) \\
&= \frac{1}{6}\, e^{-\omega} - \frac{1}{12}\, e^{-2\omega}
\end{align*}
Now consider the case $\omega<0$.
\begin{align*}
\mathcal{F}[f(x)] &= \frac{1}{8}\left(\int_{-\infty}^{\omega} e^{-2\omega+3\eta}\,d\eta + \int_{\omega}^{0} e^{2\omega-\eta}\,d\eta + \int_0^{\infty} e^{2\omega-3\eta}\,d\eta\right) \\
&= \frac{1}{8}\left(\frac{1}{3}\, e^{\omega} + e^{\omega} - e^{2\omega} + \frac{1}{3}\, e^{2\omega}\right) \\
&= \frac{1}{6}\, e^{\omega} - \frac{1}{12}\, e^{2\omega}
\end{align*}
We collect the result for positive and negative $\omega$.
\[
\mathcal{F}[f(x)] = \frac{1}{6}\, e^{-|\omega|} - \frac{1}{12}\, e^{-2|\omega|}
\]
A better way to find the Fourier transform of
\[
f(x) = \frac{1}{x^4 + 5x^2 + 4}
\]
is to first expand the function in partial fractions.
\[
f(x) = \frac{1/3}{x^2+1} - \frac{1/3}{x^2+4}
\]
\begin{align*}
\mathcal{F}[f(x)] &= \frac{1}{6}\,\mathcal{F}\left[\frac{2}{x^2+1}\right] - \frac{1}{12}\,\mathcal{F}\left[\frac{4}{x^2+4}\right] \\
&= \frac{1}{6}\, e^{-|\omega|} - \frac{1}{12}\, e^{-2|\omega|}
\end{align*}
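The result of Example 34.4.2 can be checked by direct numerical quadrature. This sketch is not part of the original text; the grid extent and sample frequencies are arbitrary numerical choices.

```python
import numpy as np

# Numerical check of Example 34.4.2: with F[f](w) = (1/2pi) * integral f(x) e^{-iwx} dx,
# the transform of f(x) = 1/(x^4+5x^2+4) should equal e^{-|w|}/6 - e^{-2|w|}/12.
x = np.linspace(-200.0, 200.0, 400001)
dx = x[1] - x[0]
f = 1.0 / (x**4 + 5.0 * x**2 + 4.0)

def fhat(w):
    # simple quadrature; the integrand decays like 1/x^4, so the truncated
    # tails contribute only about 1e-8
    return np.sum(f * np.exp(-1j * w * x)) * dx / (2.0 * np.pi)

for w in (0.5, 1.0, 2.0):
    exact = np.exp(-abs(w)) / 6.0 - np.exp(-2.0 * abs(w)) / 12.0
    print(w, abs(fhat(w) - exact))     # differences should be tiny
```

Since $f$ is even and real, the computed transform is real up to rounding error, in agreement with the closed form.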
34.4.4 Parseval's Theorem.
Recall Parseval's theorem for Fourier series. If $f(x)$ is a complex valued function with the Fourier series $\sum_{n=-\infty}^{\infty} c_n e^{inx}$ then
\[
2\pi\sum_{n=-\infty}^{\infty}|c_n|^2 = \int_{-\pi}^{\pi}|f(x)|^2\,dx.
\]
Analogous to this result is Parseval's theorem for Fourier transforms.

Let $f(x)$ be a complex valued function that is both absolutely integrable and square integrable.
\[
\int_{-\infty}^{\infty}|f(x)|\,dx < \infty
\quad\text{and}\quad
\int_{-\infty}^{\infty}|f(x)|^2\,dx < \infty
\]
The Fourier transform of $\overline{f}(-x)$ is $\overline{\hat{f}}(\omega)$.
\begin{align*}
\mathcal{F}\left[\overline{f}(-x)\right] &= \frac{1}{2\pi}\int_{-\infty}^{\infty}\overline{f}(-x)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\overline{f}(x)\, e^{i\omega x}\,dx \\
&= \overline{\frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx} \\
&= \overline{\hat{f}}(\omega)
\end{align*}
We apply the convolution theorem.
\begin{align*}
\mathcal{F}^{-1}\left[2\pi\,\hat{f}(\omega)\overline{\hat{f}}(\omega)\right] &= \int_{-\infty}^{\infty} f(\xi)\,\overline{f}(-(x-\xi))\,d\xi \\
\int_{-\infty}^{\infty} 2\pi\,\hat{f}(\omega)\overline{\hat{f}}(\omega)\, e^{i\omega x}\,d\omega &= \int_{-\infty}^{\infty} f(\xi)\,\overline{f}(\xi-x)\,d\xi
\end{align*}
We set $x=0$.
\begin{align*}
2\pi\int_{-\infty}^{\infty}\hat{f}(\omega)\overline{\hat{f}}(\omega)\,d\omega &= \int_{-\infty}^{\infty} f(\xi)\overline{f}(\xi)\,d\xi \\
2\pi\int_{-\infty}^{\infty}|\hat{f}(\omega)|^2\,d\omega &= \int_{-\infty}^{\infty}|f(x)|^2\,dx
\end{align*}
This is known as Parseval's theorem.
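Parseval's theorem can be checked numerically. The sketch below is not from the original text; the Gaussian test function $f(x)=e^{-x^2}$ (for which both sides equal $\sqrt{\pi/2}$) and the grids are arbitrary choices.

```python
import numpy as np

# Numerical check of Parseval's theorem:
#   2pi * integral |f^(w)|^2 dw  =  integral |f(x)|^2 dx,
# for the (assumed) test function f(x) = exp(-x^2).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2)

w = np.linspace(-40.0, 40.0, 8001)
dw = w[1] - w[0]
# f^(w) = (1/2pi) * integral f(x) e^{-iwx} dx, evaluated on a frequency grid
fhat = np.array([np.sum(f * np.exp(-1j * wi * x)) for wi in w]) * dx / (2.0 * np.pi)

lhs = 2.0 * np.pi * np.sum(np.abs(fhat)**2) * dw
rhs = np.sum(np.abs(f)**2) * dx
print(lhs, rhs)                        # the two sides should agree
```

Both sums converge to $\sqrt{\pi/2}\approx1.2533$, confirming the factor of $2\pi$ in this convention.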
34.4.5 Shift Property.
The Fourier transform of $f(x+c)$ is
\begin{align*}
\mathcal{F}[f(x+c)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x+c)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega(x-c)}\,dx
\end{align*}
\[
\mathcal{F}[f(x+c)] = e^{i\omega c}\,\hat{f}(\omega)
\]
The inverse Fourier transform of $\hat{f}(\omega+c)$ is
\begin{align*}
\mathcal{F}^{-1}[\hat{f}(\omega+c)] &= \int_{-\infty}^{\infty}\hat{f}(\omega+c)\, e^{i\omega x}\,d\omega \\
&= \int_{-\infty}^{\infty}\hat{f}(\omega)\, e^{i(\omega-c)x}\,d\omega
\end{align*}
\[
\mathcal{F}^{-1}[\hat{f}(\omega+c)] = e^{-icx} f(x)
\]
34.4.6 Fourier Transform of x f(x).
The Fourier transform of $x f(x)$ is
\begin{align*}
\mathcal{F}[x f(x)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} x f(x)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} i f(x)\,\frac{\partial}{\partial\omega}\big(e^{-i\omega x}\big)\,dx \\
&= i\,\frac{\partial}{\partial\omega}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx\right]
\end{align*}
\[
\mathcal{F}[x f(x)] = i\,\frac{\partial\hat{f}}{\partial\omega}.
\]
Similarly, you can show that
\[
\mathcal{F}[x^n f(x)] = (i)^n\,\frac{\partial^n\hat{f}}{\partial\omega^n}.
\]
34.5 Solving Differential Equations with the Fourier Transform
The Fourier transform is useful in solving some differential equations on the domain $(-\infty,\infty)$ with homogeneous boundary conditions at infinity. We take the Fourier transform of the differential equation $L[y]=f$ and solve for $\hat{y}$. We take the inverse transform to determine the solution $y$. Note that this process is only applicable if the Fourier transform of $y$ exists. Hence the requirement for homogeneous boundary conditions at infinity.
We will use the table of Fourier transforms in the appendix in solving the examples in this section.
Example 34.5.1 Consider the problem
\[
y'' - y = e^{-\alpha|x|}, \quad y(\pm\infty)=0, \quad \alpha>0,\ \alpha\neq1.
\]
We take the Fourier transform of this equation.
\[
-\omega^2\hat{y}(\omega) - \hat{y}(\omega) = \frac{\alpha/\pi}{\omega^2+\alpha^2}
\]
We take the inverse Fourier transform to determine the solution.
\begin{align*}
\hat{y}(\omega) &= -\frac{\alpha/\pi}{(\omega^2+\alpha^2)(\omega^2+1)} \\
&= -\frac{\alpha/\pi}{\alpha^2-1}\left(\frac{1}{\omega^2+1} - \frac{1}{\omega^2+\alpha^2}\right) \\
&= \frac{1}{\alpha^2-1}\left(\frac{\alpha/\pi}{\omega^2+\alpha^2} - \alpha\,\frac{1/\pi}{\omega^2+1}\right)
\end{align*}
\[
y(x) = \frac{e^{-\alpha|x|} - \alpha\, e^{-|x|}}{\alpha^2-1}
\]
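The solution of Example 34.5.1 can be verified with a finite-difference residual test. This sketch is not from the original text; the value $\alpha=2$ and the sample points are arbitrary admissible choices.

```python
import numpy as np

# Finite-difference check of Example 34.5.1 with alpha = 2:
# y(x) = (e^{-alpha|x|} - alpha e^{-|x|}) / (alpha^2 - 1)
# should satisfy y'' - y = e^{-alpha|x|} away from the kink at x = 0.
alpha = 2.0
h = 1e-4
y = lambda x: (np.exp(-alpha * abs(x)) - alpha * np.exp(-abs(x))) / (alpha**2 - 1.0)

for x0 in (0.5, 1.0, 3.0):
    ypp = (y(x0 + h) - 2.0 * y(x0) + y(x0 - h)) / h**2
    print(x0, ypp - y(x0) - np.exp(-alpha * x0))  # residuals should be ~0
```

The solution also decays at infinity, as required by the homogeneous boundary conditions.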
Example 34.5.2 Consider the Green function problem
\[
G'' - G = \delta(x-\xi), \quad y(\pm\infty)=0.
\]
We take the Fourier transform of this equation.
\begin{align*}
-\omega^2\hat{G} - \hat{G} &= \mathcal{F}[\delta(x-\xi)] \\
\hat{G} &= -\frac{1}{\omega^2+1}\,\mathcal{F}[\delta(x-\xi)]
\end{align*}
We use the Table of Fourier transforms.
\[
\hat{G} = -\pi\,\mathcal{F}\left[e^{-|x|}\right]\mathcal{F}[\delta(x-\xi)]
\]
We use the convolution theorem to do the inversion.
\begin{align*}
G &= -\frac{1}{2}\int_{-\infty}^{\infty} e^{-|x-\eta|}\,\delta(\eta-\xi)\,d\eta \\
G(x|\xi) &= -\frac{1}{2}\, e^{-|x-\xi|}
\end{align*}
The inhomogeneous differential equation
\[
y'' - y = f(x), \quad y(\pm\infty)=0,
\]
has the solution
\[
y = -\frac{1}{2}\int_{-\infty}^{\infty} f(\xi)\, e^{-|x-\xi|}\,d\xi.
\]
When solving the differential equation $L[y]=f$ with the Fourier transform, it is quite common to use the convolution theorem. With this approach we have no need to compute the Fourier transform of the right side. We merely denote it as $\mathcal{F}[f]$ until we use $f$ in the convolution integral.
34.6 The Fourier Cosine and Sine Transform
34.6.1 The Fourier Cosine Transform
Suppose $f(x)$ is an even function. In this case the Fourier transform of $f(x)$ coincides with the Fourier cosine transform of $f(x)$.
\begin{align*}
\mathcal{F}[f(x)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\big(\cos(\omega x) - i\sin(\omega x)\big)\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx
\end{align*}
The Fourier cosine transform is defined:
\[
\mathcal{F}_c[f(x)] = \hat{f}_c(\omega) = \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx.
\]
Note that $\hat{f}_c(\omega)$ is an even function. The inverse Fourier cosine transform is
\begin{align*}
\mathcal{F}_c^{-1}[\hat{f}_c(\omega)] &= \int_{-\infty}^{\infty}\hat{f}_c(\omega)\, e^{i\omega x}\,d\omega \\
&= \int_{-\infty}^{\infty}\hat{f}_c(\omega)\big(\cos(\omega x) + i\sin(\omega x)\big)\,d\omega \\
&= \int_{-\infty}^{\infty}\hat{f}_c(\omega)\cos(\omega x)\,d\omega \\
&= 2\int_0^{\infty}\hat{f}_c(\omega)\cos(\omega x)\,d\omega.
\end{align*}
Thus we have the Fourier cosine transform pair
\[
f(x) = \mathcal{F}_c^{-1}[\hat{f}_c(\omega)] = 2\int_0^{\infty}\hat{f}_c(\omega)\cos(\omega x)\,d\omega,
\qquad
\hat{f}_c(\omega) = \mathcal{F}_c[f(x)] = \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx.
\]
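The cosine transform pair can be exercised numerically. The following sketch is not from the original text; it assumes the test function $f(x)=e^{-x}$, whose cosine transform is $\frac{1}{\pi(1+\omega^2)}$, and truncates the semi-infinite integrals at arbitrarily chosen finite limits.

```python
import numpy as np

# Numerical sanity check of the cosine-transform pair for the (assumed)
# test function f(x) = e^{-x}, whose cosine transform is 1/(pi(1 + w^2)).
x = np.linspace(0.0, 50.0, 500001)
dx = x[1] - x[0]

def trap(vals, d):
    # plain trapezoid rule (avoids np.trapz, which was removed in NumPy 2.0)
    return (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * d

def fc(w):
    # Fc[f](w) = (1/pi) * integral_0^inf e^{-x} cos(wx) dx
    return trap(np.exp(-x) * np.cos(w * x), dx) / np.pi

# Inverse: f(x0) = 2 * integral_0^inf Fc(w) cos(w x0) dw, truncated at w = 200.
wgrid = np.linspace(0.0, 200.0, 200001)
dw = wgrid[1] - wgrid[0]
x0 = 0.7
recovered = 2.0 * trap(np.cos(wgrid * x0) / (np.pi * (1.0 + wgrid**2)), dw)
print(fc(2.0), 1.0 / (5.0 * np.pi))   # forward transform vs closed form
print(recovered, np.exp(-x0))         # inversion recovers f(x0) = e^{-x0}
```

The inverse integral converges slowly (the integrand decays like $1/\omega^2$), so the recovered value agrees with $e^{-x_0}$ only to a few parts in $10^4$ at this truncation.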
34.6.2 The Fourier Sine Transform
Suppose $f(x)$ is an odd function. In this case the Fourier transform of $f(x)$ coincides with the Fourier sine transform of $f(x)$.
\begin{align*}
\mathcal{F}[f(x)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\big(\cos(\omega x) - i\sin(\omega x)\big)\,dx \\
&= -\frac{i}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx
\end{align*}
Note that $\hat{f}(\omega) = \mathcal{F}[f(x)]$ is an odd function of $\omega$. The inverse Fourier transform of $\hat{f}(\omega)$ is
\begin{align*}
\mathcal{F}^{-1}[\hat{f}(\omega)] &= \int_{-\infty}^{\infty}\hat{f}(\omega)\, e^{i\omega x}\,d\omega \\
&= 2i\int_0^{\infty}\hat{f}(\omega)\sin(\omega x)\,d\omega.
\end{align*}
Thus we have that
\begin{align*}
f(x) &= 2i\int_0^{\infty}\left[-\frac{i}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx\right]\sin(\omega x)\,d\omega \\
&= 2\int_0^{\infty}\left[\frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx\right]\sin(\omega x)\,d\omega.
\end{align*}
This gives us the Fourier sine transform pair
\[
f(x) = \mathcal{F}_s^{-1}[\hat{f}_s(\omega)] = 2\int_0^{\infty}\hat{f}_s(\omega)\sin(\omega x)\,d\omega,
\qquad
\hat{f}_s(\omega) = \mathcal{F}_s[f(x)] = \frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx.
\]
Result 34.6.1 The Fourier cosine transform pair is defined:
\begin{align*}
f(x) &= \mathcal{F}_c^{-1}[\hat{f}_c(\omega)] = 2\int_0^{\infty}\hat{f}_c(\omega)\cos(\omega x)\,d\omega \\
\hat{f}_c(\omega) &= \mathcal{F}_c[f(x)] = \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx
\end{align*}
The Fourier sine transform pair is defined:
\begin{align*}
f(x) &= \mathcal{F}_s^{-1}[\hat{f}_s(\omega)] = 2\int_0^{\infty}\hat{f}_s(\omega)\sin(\omega x)\,d\omega \\
\hat{f}_s(\omega) &= \mathcal{F}_s[f(x)] = \frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx
\end{align*}
34.7 Properties of the Fourier Cosine and Sine Transform
34.7.1 Transforms of Derivatives
Cosine Transform. Using integration by parts we can find the Fourier cosine transform of derivatives. Let $y$ be a function for which the Fourier cosine transform of $y$ and its first and second derivatives exists. Further assume that $y$ and $y'$ vanish at infinity. We calculate the transforms of the first and second derivatives.
\begin{align*}
\mathcal{F}_c[y'] &= \frac{1}{\pi}\int_0^{\infty} y'\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\big[y\cos(\omega x)\big]_0^{\infty} + \frac{\omega}{\pi}\int_0^{\infty} y\sin(\omega x)\,dx \\
&= \omega\,\hat{y}_s(\omega) - \frac{1}{\pi}\, y(0)
\end{align*}
\begin{align*}
\mathcal{F}_c[y''] &= \frac{1}{\pi}\int_0^{\infty} y''\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\big[y'\cos(\omega x)\big]_0^{\infty} + \frac{\omega}{\pi}\int_0^{\infty} y'\sin(\omega x)\,dx \\
&= -\frac{1}{\pi}\, y'(0) + \frac{\omega}{\pi}\big[y\sin(\omega x)\big]_0^{\infty} - \frac{\omega^2}{\pi}\int_0^{\infty} y\cos(\omega x)\,dx \\
&= -\omega^2\hat{y}_c(\omega) - \frac{1}{\pi}\, y'(0)
\end{align*}
Sine Transform. You can show (see Exercise 34.3) that the Fourier sine transforms of the first and second derivatives are
\begin{align*}
\mathcal{F}_s[y'] &= -\omega\,\hat{y}_c(\omega) \\
\mathcal{F}_s[y''] &= -\omega^2\hat{y}_s(\omega) + \frac{\omega}{\pi}\, y(0).
\end{align*}
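The second-derivative rule for the cosine transform can be sanity-checked numerically. This sketch is not from the original text; it assumes the test function $y(x)=e^{-x}$, for which $y''=e^{-x}$ and $y'(0)=-1$, with arbitrarily chosen quadrature limits.

```python
import numpy as np

# Numerical check of Fc[y''] = -w^2 * yc(w) - y'(0)/pi for the (assumed)
# test function y(x) = e^{-x}.
x = np.linspace(0.0, 60.0, 600001)
dx = x[1] - x[0]

def trap(vals, d):
    return (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * d

def cosine_transform(vals, w):
    return trap(vals * np.cos(w * x), dx) / np.pi

y = np.exp(-x)
ypp = np.exp(-x)                        # second derivative of e^{-x}
w = 1.7
lhs = cosine_transform(ypp, w)
rhs = -w**2 * cosine_transform(y, w) - (-1.0) / np.pi
print(lhs, rhs)                         # the two sides should agree
```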
34.7.2 Convolution Theorems
Cosine Transform of a Product. Consider the Fourier cosine transform of a product of functions. Let $f(x)$ and $g(x)$ be two functions defined for $x\geq0$. Let $\mathcal{F}_c[f(x)] = \hat{f}_c(\omega)$, and $\mathcal{F}_c[g(x)] = \hat{g}_c(\omega)$.
\begin{align*}
\mathcal{F}_c[f(x)g(x)] &= \frac{1}{\pi}\int_0^{\infty} f(x)g(x)\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty}\left[2\int_0^{\infty}\hat{f}_c(\eta)\cos(\eta x)\,d\eta\right] g(x)\cos(\omega x)\,dx \\
&= \frac{2}{\pi}\int_0^{\infty}\int_0^{\infty}\hat{f}_c(\eta) g(x)\cos(\eta x)\cos(\omega x)\,dx\,d\eta
\end{align*}
We use the identity $\cos a\cos b = \frac{1}{2}\big(\cos(a-b)+\cos(a+b)\big)$.
\begin{align*}
&= \frac{1}{\pi}\int_0^{\infty}\int_0^{\infty}\hat{f}_c(\eta) g(x)\big[\cos((\omega-\eta)x) + \cos((\omega+\eta)x)\big]\,dx\,d\eta \\
&= \int_0^{\infty}\hat{f}_c(\eta)\left[\frac{1}{\pi}\int_0^{\infty} g(x)\cos((\omega-\eta)x)\,dx + \frac{1}{\pi}\int_0^{\infty} g(x)\cos((\omega+\eta)x)\,dx\right] d\eta \\
&= \int_0^{\infty}\hat{f}_c(\eta)\big[\hat{g}_c(\omega-\eta) + \hat{g}_c(\omega+\eta)\big]\,d\eta
\end{align*}
$\hat{g}_c(\omega)$ is an even function. If we have only defined $\hat{g}_c(\omega)$ for positive argument, then $\hat{g}_c(\omega) = \hat{g}_c(|\omega|)$.
\[
= \int_0^{\infty}\hat{f}_c(\eta)\big[\hat{g}_c(|\omega-\eta|) + \hat{g}_c(\omega+\eta)\big]\,d\eta
\]
Inverse Cosine Transform of a Product. Now consider the inverse Fourier cosine transform of a product of functions. Let $\mathcal{F}_c[f(x)] = \hat{f}_c(\omega)$, and $\mathcal{F}_c[g(x)] = \hat{g}_c(\omega)$.
\begin{align*}
\mathcal{F}_c^{-1}[\hat{f}_c(\omega)\hat{g}_c(\omega)] &= 2\int_0^{\infty}\hat{f}_c(\omega)\hat{g}_c(\omega)\cos(\omega x)\,d\omega \\
&= 2\int_0^{\infty}\left[\frac{1}{\pi}\int_0^{\infty} f(\xi)\cos(\omega\xi)\,d\xi\right]\hat{g}_c(\omega)\cos(\omega x)\,d\omega \\
&= \frac{2}{\pi}\int_0^{\infty}\int_0^{\infty} f(\xi)\,\hat{g}_c(\omega)\cos(\omega\xi)\cos(\omega x)\,d\omega\,d\xi \\
&= \frac{1}{\pi}\int_0^{\infty}\int_0^{\infty} f(\xi)\,\hat{g}_c(\omega)\big[\cos(\omega(x-\xi)) + \cos(\omega(x+\xi))\big]\,d\omega\,d\xi \\
&= \frac{1}{2\pi}\int_0^{\infty} f(\xi)\left[2\int_0^{\infty}\hat{g}_c(\omega)\cos(\omega(x-\xi))\,d\omega + 2\int_0^{\infty}\hat{g}_c(\omega)\cos(\omega(x+\xi))\,d\omega\right] d\xi \\
&= \frac{1}{2\pi}\int_0^{\infty} f(\xi)\big[g(|x-\xi|) + g(x+\xi)\big]\,d\xi
\end{align*}
Sine Transform of a Product. You can show (see Exercise 34.5) that the Fourier sine transform of a product of functions is
\[
\mathcal{F}_s[f(x)g(x)] = \int_0^{\infty}\hat{f}_s(\eta)\big[\hat{g}_c(|\omega-\eta|) - \hat{g}_c(\omega+\eta)\big]\,d\eta.
\]
Inverse Sine Transform of a Product. You can also show (see Exercise 34.6) that the inverse Fourier sine transform of a product of functions is
\[
\mathcal{F}_s^{-1}[\hat{f}_s(\omega)\hat{g}_c(\omega)] = \frac{1}{2\pi}\int_0^{\infty} f(\xi)\big[g(|x-\xi|) - g(x+\xi)\big]\,d\xi.
\]
Result 34.7.1 The Fourier cosine and sine transform convolution theorems are
\begin{align*}
\mathcal{F}_c[f(x)g(x)] &= \int_0^{\infty}\hat{f}_c(\eta)\big[\hat{g}_c(|\omega-\eta|) + \hat{g}_c(\omega+\eta)\big]\,d\eta \\
\mathcal{F}_c^{-1}[\hat{f}_c(\omega)\hat{g}_c(\omega)] &= \frac{1}{2\pi}\int_0^{\infty} f(\xi)\big[g(|x-\xi|) + g(x+\xi)\big]\,d\xi \\
\mathcal{F}_s[f(x)g(x)] &= \int_0^{\infty}\hat{f}_s(\eta)\big[\hat{g}_c(|\omega-\eta|) - \hat{g}_c(\omega+\eta)\big]\,d\eta \\
\mathcal{F}_s^{-1}[\hat{f}_s(\omega)\hat{g}_c(\omega)] &= \frac{1}{2\pi}\int_0^{\infty} f(\xi)\big[g(|x-\xi|) - g(x+\xi)\big]\,d\xi
\end{align*}
34.7.3 Cosine and Sine Transform in Terms of the Fourier Transform
We can express the Fourier cosine and sine transform in terms of the Fourier transform. First consider the Fourier cosine transform. Let $f(x)$ be an even function.
\[
\mathcal{F}_c[f(x)] = \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx
\]
We extend the domain of integration because the integrand is even.
\[
= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\cos(\omega x)\,dx
\]
Note that $\int_{-\infty}^{\infty} f(x)\sin(\omega x)\,dx = 0$ because the integrand is odd.
\[
= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx = \mathcal{F}[f(x)]
\]
\[
\mathcal{F}_c[f(x)] = \mathcal{F}[f(x)], \quad\text{for even } f(x).
\]
For general $f(x)$, use the even extension, $f(|x|)$, to write the result.
\[
\mathcal{F}_c[f(x)] = \mathcal{F}[f(|x|)]
\]
There is an analogous result for the inverse Fourier cosine transform.
\[
\mathcal{F}_c^{-1}\big[\hat{f}(\omega)\big] = \mathcal{F}^{-1}\big[\hat{f}(|\omega|)\big]
\]
For the sine series, we have
\[
\mathcal{F}_s[f(x)] = i\,\mathcal{F}\big[\operatorname{sign}(x)f(|x|)\big],
\qquad
\mathcal{F}_s^{-1}\big[\hat{f}(\omega)\big] = -i\,\mathcal{F}^{-1}\big[\operatorname{sign}(\omega)\hat{f}(|\omega|)\big]
\]
Result 34.7.2 The results:
\[
\mathcal{F}_c[f(x)] = \mathcal{F}[f(|x|)],
\qquad
\mathcal{F}_c^{-1}\big[\hat{f}(\omega)\big] = \mathcal{F}^{-1}\big[\hat{f}(|\omega|)\big]
\]
\[
\mathcal{F}_s[f(x)] = i\,\mathcal{F}\big[\operatorname{sign}(x)f(|x|)\big],
\qquad
\mathcal{F}_s^{-1}\big[\hat{f}(\omega)\big] = -i\,\mathcal{F}^{-1}\big[\operatorname{sign}(\omega)\hat{f}(|\omega|)\big]
\]
allow us to evaluate Fourier cosine and sine transforms in terms of the Fourier transform. This enables us to use contour integration methods to do the integrals.
34.8 Solving Differential Equations with the Fourier Cosine and Sine Transforms
Example 34.8.1 Consider the problem
\[
y'' - y = 0, \quad y(0)=1, \quad y(\infty)=0.
\]
Since the initial condition is $y(0)=1$ and the sine transform of $y''$ is $-\omega^2\hat{y}_s(\omega) + \frac{\omega}{\pi}y(0)$, we take the Fourier sine transform of both sides of the differential equation.
\begin{align*}
-\omega^2\hat{y}_s(\omega) + \frac{\omega}{\pi}\, y(0) - \hat{y}_s(\omega) &= 0 \\
(\omega^2+1)\,\hat{y}_s(\omega) &= \frac{\omega}{\pi} \\
\hat{y}_s(\omega) &= \frac{\omega}{\pi(\omega^2+1)}
\end{align*}
We use the table of Fourier sine transforms.
\[
y = e^{-x}
\]
Example 34.8.2 Consider the problem
\[
y'' - y = e^{-2x}, \quad y'(0)=0, \quad y(\infty)=0.
\]
Since the initial condition is $y'(0)=0$, we take the Fourier cosine transform of the differential equation. From the table of cosine transforms, $\mathcal{F}_c[e^{-2x}] = \frac{2}{\pi(\omega^2+4)}$.
\begin{align*}
-\omega^2\hat{y}_c(\omega) - \frac{1}{\pi}\, y'(0) - \hat{y}_c(\omega) &= \frac{2}{\pi(\omega^2+4)} \\
\hat{y}_c(\omega) &= -\frac{2}{\pi(\omega^2+4)(\omega^2+1)} \\
&= -\frac{2}{\pi}\left(\frac{1/3}{\omega^2+1} - \frac{1/3}{\omega^2+4}\right) \\
&= \frac{1}{3}\,\frac{2/\pi}{\omega^2+4} - \frac{2}{3}\,\frac{1/\pi}{\omega^2+1}
\end{align*}
\[
y = \frac{1}{3}\, e^{-2x} - \frac{2}{3}\, e^{-x}
\]
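The solution of Example 34.8.2 can be confirmed with a finite-difference residual test. This sketch is not from the original text; the step size and sample point are arbitrary choices.

```python
import numpy as np

# Finite-difference check of Example 34.8.2: y(x) = e^{-2x}/3 - 2 e^{-x}/3
# should satisfy y'' - y = e^{-2x} with y'(0) = 0.
y = lambda x: np.exp(-2.0 * x) / 3.0 - 2.0 * np.exp(-x) / 3.0
h = 1e-5
yp0 = (y(h) - y(-h)) / (2.0 * h)       # y'(0) by a centered difference
x0 = 1.3
ypp = (y(x0 + h) - 2.0 * y(x0) + y(x0 - h)) / h**2
print(yp0)                             # should be ~0
print(ypp - y(x0) - np.exp(-2.0 * x0)) # should be ~0
```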
34.9 Exercises
Exercise 34.1
Show that
\[
\mathcal{F}[H(x+c) - H(x-c)] = \frac{\sin(c\omega)}{\pi\omega}.
\]
Exercise 34.2
Using contour integration, find the Fourier transform of
\[
f(x) = \frac{1}{x^2+c^2},
\]
where $\Re(c)\neq0$.
Exercise 34.3
Find the Fourier sine transforms of $y'(x)$ and $y''(x)$.
Exercise 34.4
Prove the following identities.
1. $\mathcal{F}[f(x-a)] = e^{-i\omega a}\hat{f}(\omega)$
2. $\mathcal{F}[f(ax)] = \frac{1}{|a|}\hat{f}\!\left(\frac{\omega}{a}\right)$
Exercise 34.5
Show that
\[
\mathcal{F}_s[f(x)g(x)] = \int_0^{\infty}\hat{f}_s(\eta)\big[\hat{g}_c(|\omega-\eta|) - \hat{g}_c(\omega+\eta)\big]\,d\eta.
\]
Exercise 34.6
Show that
\[
\mathcal{F}_s^{-1}[\hat{f}_s(\omega)\hat{g}_c(\omega)] = \frac{1}{2\pi}\int_0^{\infty} f(\xi)\big[g(|x-\xi|) - g(x+\xi)\big]\,d\xi.
\]
Exercise 34.7
Let $\hat{f}_c(\omega) = \mathcal{F}_c[f(x)]$, $\hat{f}_s(\omega) = \mathcal{F}_s[f(x)]$, and assume the cosine and sine transforms of $x f(x)$ exist. Express $\mathcal{F}_c[x f(x)]$ and $\mathcal{F}_s[x f(x)]$ in terms of $\hat{f}_c(\omega)$ and $\hat{f}_s(\omega)$.
Exercise 34.8
Solve the problem
\[
y'' - y = e^{-2x}, \quad y(0)=1, \quad y(\infty)=0,
\]
using the Fourier sine transform.
Exercise 34.9
Show that
\[
\mathcal{F}_s[f(x)] = i\,\mathcal{F}\big[\operatorname{sign}(x)f(|x|)\big],
\qquad
\mathcal{F}_s^{-1}\big[\hat{f}(\omega)\big] = -i\,\mathcal{F}^{-1}\big[\operatorname{sign}(\omega)\hat{f}(|\omega|)\big].
\]
Exercise 34.10
Let $\hat{f}_c(\omega) = \mathcal{F}_c[f(x)]$ and $\hat{f}_s(\omega) = \mathcal{F}_s[f(x)]$. Show that
1. $\mathcal{F}_c[x f(x)] = \frac{\partial}{\partial\omega}\hat{f}_s(\omega)$
2. $\mathcal{F}_s[x f(x)] = -\frac{\partial}{\partial\omega}\hat{f}_c(\omega)$
3. $\mathcal{F}_c[f(cx)] = \frac{1}{c}\hat{f}_c\!\left(\frac{\omega}{c}\right)$ for $c>0$
4. $\mathcal{F}_s[f(cx)] = \frac{1}{c}\hat{f}_s\!\left(\frac{\omega}{c}\right)$ for $c>0$.
Exercise 34.11
Solve the integral equation,
\[
\int_{-\infty}^{\infty} u(\xi)\, e^{-a(x-\xi)^2}\,d\xi = e^{-bx^2},
\]
where $a,b>0$, $a\neq b$, with the Fourier transform.
Exercise 34.12
Evaluate
\[
\frac{1}{\pi}\int_0^{\infty}\frac{1}{x}\, e^{-cx}\sin(\omega x)\,dx,
\]
where $\omega$ is a positive, real number and $\Re(c)>0$.
Exercise 34.13
Use the Fourier transform to solve the equation
\[
y'' - a^2 y = e^{-a|x|}
\]
on the domain $-\infty<x<\infty$ with boundary conditions $y(\pm\infty)=0$.
Exercise 34.14
1. Use the cosine transform to solve
\[
y'' - a^2 y = 0 \text{ on } x\geq0 \text{ with } y'(0)=b,\ y(\infty)=0.
\]
2. Use the cosine transform to show that the Green function for the above with $b=0$ is
\[
G(x,\xi) = -\frac{1}{2a}\, e^{-a|x-\xi|} - \frac{1}{2a}\, e^{-a(x+\xi)}.
\]
Exercise 34.15
1. Use the sine transform to solve
\[
y'' - a^2 y = 0 \text{ on } x\geq0 \text{ with } y(0)=b,\ y(\infty)=0.
\]
2. Try using the Laplace transform on this problem. Why isn't it as convenient as the Fourier transform?
3. Use the sine transform to show that the Green function for the above with $b=0$ is
\[
g(x;\xi) = -\frac{1}{2a}\left(e^{-a|x-\xi|} - e^{-a(x+\xi)}\right).
\]
Exercise 34.16
1. Find the Green function which solves the equation
\[
y'' + 2\beta y' + (\beta^2+\nu^2) y = \delta(x-\xi), \quad \beta>0,\ \nu>0,
\]
in the range $-\infty<x<\infty$ with boundary conditions $y(-\infty)=y(\infty)=0$.
2. Use this Green's function to show that the solution of
\[
y'' + 2\beta y' + (\beta^2+\nu^2) y = g(x), \quad \beta>0,\ \nu>0, \quad y(-\infty)=y(\infty)=0,
\]
with $g(\pm\infty)=0$ in the limit as $\beta\to0$ is
\[
y = \frac{1}{\nu}\int_{-\infty}^{x} g(\xi)\sin\big(\nu(x-\xi)\big)\,d\xi.
\]
You may assume that the interchange of limits is permitted.
Exercise 34.17
Using Fourier transforms, find the solution $u(x)$ to the integral equation
\[
\int_{-\infty}^{\infty}\frac{u(\xi)}{(x-\xi)^2+a^2}\,d\xi = \frac{1}{x^2+b^2}, \quad 0<a<b.
\]
Exercise 34.18
The Fourier cosine transform is defined by
\[
F_c(\omega) = \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx.
\]
1. From the Fourier theorem show that the inverse cosine transform is given by
\[
f(x) = 2\int_0^{\infty} F_c(\omega)\cos(\omega x)\,d\omega.
\]
2. Show that the cosine transform of $f''(x)$ is
\[
-\omega^2 F_c(\omega) - \frac{f'(0)}{\pi}.
\]
3. Use the cosine transform to solve
\[
y'' - a^2 y = 0 \text{ on } x>0 \text{ with } y'(0)=b,\ y(\infty)=0.
\]
Exercise 34.19
The Fourier sine transform is defined by
\[
F_s(\omega) = \frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx.
\]
1. Show that the inverse sine transform is given by
\[
f(x) = 2\int_0^{\infty} F_s(\omega)\sin(\omega x)\,d\omega.
\]
2. Show that the sine transform of $f''(x)$ is
\[
\frac{\omega}{\pi}\, f(0) - \omega^2 F_s(\omega).
\]
3. Use this property to solve the equation
\[
y'' - a^2 y = 0 \text{ on } x>0 \text{ with } y(0)=b,\ y(\infty)=0.
\]
4. Try using the Laplace transform on this problem. Why isn't it as convenient as the Fourier transform?
Exercise 34.20
Show that
\[
\mathcal{F}[f(x)] = \frac{1}{2}\big(\mathcal{F}_c[f(x)+f(-x)] - i\,\mathcal{F}_s[f(x)-f(-x)]\big)
\]
where $\mathcal{F}$, $\mathcal{F}_c$ and $\mathcal{F}_s$ are respectively the Fourier transform, Fourier cosine transform and Fourier sine transform.
Exercise 34.21
Find $u(x)$ as the solution to the integral equation:
\[
\int_{-\infty}^{\infty}\frac{u(\xi)}{(x-\xi)^2+a^2}\,d\xi = \frac{1}{x^2+b^2}, \quad 0<a<b.
\]
Use Fourier transforms and the inverse transform. Justify the choice of any contours used in the complex plane.
34.10 Hints
Hint 34.1
\[
H(x+c) - H(x-c) = \begin{cases} 1 & \text{for } |x|<c, \\ 0 & \text{for } |x|>c \end{cases}
\]
Hint 34.2
Consider the two cases $\omega<0$ and $\omega>0$, closing the path of integration with a semi-circle in the lower or upper half plane.
Hint 34.3
Hint 34.4
Hint 34.5
Hint 34.6
Hint 34.7
Hint 34.8
Hint 34.9
Hint 34.10
Hint 34.11
The left side is the convolution of $u(x)$ and $e^{-ax^2}$.
Hint 34.12
Hint 34.13
Hint 34.14
Hint 34.15
Hint 34.16
Hint 34.17
Hint 34.18
Hint 34.19
Hint 34.20
Hint 34.21
34.11 Solutions
Solution 34.1
\begin{align*}
\mathcal{F}[H(x+c)-H(x-c)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty}\big(H(x+c)-H(x-c)\big)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-c}^{c} e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\left[\frac{e^{-i\omega x}}{-i\omega}\right]_{-c}^{c} \\
&= \frac{1}{2\pi}\left(\frac{e^{i\omega c}}{i\omega} - \frac{e^{-i\omega c}}{i\omega}\right)
\end{align*}
\[
\mathcal{F}[H(x+c)-H(x-c)] = \frac{\sin(c\omega)}{\pi\omega}
\]
Solution 34.2
\begin{align*}
\mathcal{F}\left[\frac{1}{x^2+c^2}\right] &= \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{x^2+c^2}\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{-i\omega x}}{(x-ic)(x+ic)}\,dx
\end{align*}
If $\omega<0$ then we close the path of integration with a semi-circle in the upper half plane.
\begin{align*}
\mathcal{F}\left[\frac{1}{x^2+c^2}\right] &= \frac{1}{2\pi}\,2\pi i\,\operatorname{Res}\left[\frac{e^{-i\omega x}}{(x-ic)(x+ic)},\, x=ic\right] \\
&= \frac{1}{2c}\, e^{\omega c}
\end{align*}
If $\omega>0$ then we close the path of integration in the lower half plane.
\begin{align*}
\mathcal{F}\left[\frac{1}{x^2+c^2}\right] &= -\frac{1}{2\pi}\,2\pi i\,\operatorname{Res}\left[\frac{e^{-i\omega x}}{(x-ic)(x+ic)},\, x=-ic\right] \\
&= \frac{1}{2c}\, e^{-\omega c}
\end{align*}
Thus we have that
\[
\mathcal{F}\left[\frac{1}{x^2+c^2}\right] = \frac{1}{2c}\, e^{-c|\omega|}, \quad\text{for } \Re(c)\neq0.
\]
Solution 34.3
\begin{align*}
\mathcal{F}_s[y'] &= \frac{1}{\pi}\int_0^{\infty} y'\sin(\omega x)\,dx \\
&= \frac{1}{\pi}\big[y\sin(\omega x)\big]_0^{\infty} - \frac{\omega}{\pi}\int_0^{\infty} y\cos(\omega x)\,dx \\
&= -\omega\,\hat{y}_c(\omega)
\end{align*}
\begin{align*}
\mathcal{F}_s[y''] &= \frac{1}{\pi}\int_0^{\infty} y''\sin(\omega x)\,dx \\
&= \frac{1}{\pi}\big[y'\sin(\omega x)\big]_0^{\infty} - \frac{\omega}{\pi}\int_0^{\infty} y'\cos(\omega x)\,dx \\
&= -\frac{\omega}{\pi}\big[y\cos(\omega x)\big]_0^{\infty} - \frac{\omega^2}{\pi}\int_0^{\infty} y\sin(\omega x)\,dx \\
&= -\omega^2\hat{y}_s(\omega) + \frac{\omega}{\pi}\, y(0).
\end{align*}
Solution 34.4
1.
\begin{align*}
\mathcal{F}[f(x-a)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x-a)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega(x+a)}\,dx \\
&= e^{-i\omega a}\,\frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx
\end{align*}
\[
\mathcal{F}[f(x-a)] = e^{-i\omega a}\,\hat{f}(\omega)
\]
2. If $a>0$, then
\begin{align*}
\mathcal{F}[f(ax)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(ax)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(\xi)\, e^{-i\omega\xi/a}\,\frac{1}{a}\,d\xi \\
&= \frac{1}{a}\,\hat{f}\!\left(\frac{\omega}{a}\right).
\end{align*}
If $a<0$, then
\begin{align*}
\mathcal{F}[f(ax)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(ax)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{\infty}^{-\infty} f(\xi)\, e^{-i\omega\xi/a}\,\frac{1}{a}\,d\xi \\
&= -\frac{1}{a}\,\hat{f}\!\left(\frac{\omega}{a}\right).
\end{align*}
Thus
\[
\mathcal{F}[f(ax)] = \frac{1}{|a|}\,\hat{f}\!\left(\frac{\omega}{a}\right).
\]
Solution 34.5
\begin{align*}
\mathcal{F}_s[f(x)g(x)] &= \frac{1}{\pi}\int_0^{\infty} f(x)g(x)\sin(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty}\left[2\int_0^{\infty}\hat{f}_s(\eta)\sin(\eta x)\,d\eta\right] g(x)\sin(\omega x)\,dx \\
&= \frac{2}{\pi}\int_0^{\infty}\int_0^{\infty}\hat{f}_s(\eta) g(x)\sin(\eta x)\sin(\omega x)\,dx\,d\eta
\end{align*}
Use the identity, $\sin a\sin b = \frac{1}{2}\big[\cos(a-b) - \cos(a+b)\big]$.
\begin{align*}
&= \frac{1}{\pi}\int_0^{\infty}\int_0^{\infty}\hat{f}_s(\eta) g(x)\big[\cos((\omega-\eta)x) - \cos((\omega+\eta)x)\big]\,dx\,d\eta \\
&= \int_0^{\infty}\hat{f}_s(\eta)\left[\frac{1}{\pi}\int_0^{\infty} g(x)\cos((\omega-\eta)x)\,dx - \frac{1}{\pi}\int_0^{\infty} g(x)\cos((\omega+\eta)x)\,dx\right] d\eta
\end{align*}
\[
\mathcal{F}_s[f(x)g(x)] = \int_0^{\infty}\hat{f}_s(\eta)\big[\hat{g}_c(|\omega-\eta|) - \hat{g}_c(\omega+\eta)\big]\,d\eta
\]
Solution 34.6
\begin{align*}
\mathcal{F}_s^{-1}[\hat{f}_s(\omega)\hat{g}_c(\omega)] &= 2\int_0^{\infty}\hat{f}_s(\omega)\hat{g}_c(\omega)\sin(\omega x)\,d\omega \\
&= 2\int_0^{\infty}\left[\frac{1}{\pi}\int_0^{\infty} f(\xi)\sin(\omega\xi)\,d\xi\right]\hat{g}_c(\omega)\sin(\omega x)\,d\omega \\
&= \frac{2}{\pi}\int_0^{\infty}\int_0^{\infty} f(\xi)\,\hat{g}_c(\omega)\sin(\omega\xi)\sin(\omega x)\,d\omega\,d\xi \\
&= \frac{1}{\pi}\int_0^{\infty}\int_0^{\infty} f(\xi)\,\hat{g}_c(\omega)\big[\cos(\omega(x-\xi)) - \cos(\omega(x+\xi))\big]\,d\omega\,d\xi \\
&= \frac{1}{2\pi}\int_0^{\infty} f(\xi)\left[2\int_0^{\infty}\hat{g}_c(\omega)\cos(\omega(x-\xi))\,d\omega - 2\int_0^{\infty}\hat{g}_c(\omega)\cos(\omega(x+\xi))\,d\omega\right] d\xi \\
&= \frac{1}{2\pi}\int_0^{\infty} f(\xi)\big[g(x-\xi) - g(x+\xi)\big]\,d\xi
\end{align*}
\[
\mathcal{F}_s^{-1}[\hat{f}_s(\omega)\hat{g}_c(\omega)] = \frac{1}{2\pi}\int_0^{\infty} f(\xi)\big[g(|x-\xi|) - g(x+\xi)\big]\,d\xi
\]
Solution 34.7
\begin{align*}
\mathcal{F}_c[x f(x)] &= \frac{1}{\pi}\int_0^{\infty} x f(x)\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty} f(x)\,\frac{\partial}{\partial\omega}\big(\sin(\omega x)\big)\,dx \\
&= \frac{\partial}{\partial\omega}\left[\frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx\right] \\
&= \frac{\partial\hat{f}_s}{\partial\omega}(\omega)
\end{align*}
\begin{align*}
\mathcal{F}_s[x f(x)] &= \frac{1}{\pi}\int_0^{\infty} x f(x)\sin(\omega x)\,dx \\
&= -\frac{1}{\pi}\int_0^{\infty} f(x)\,\frac{\partial}{\partial\omega}\big(\cos(\omega x)\big)\,dx \\
&= -\frac{\partial}{\partial\omega}\left[\frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx\right] \\
&= -\frac{\partial\hat{f}_c}{\partial\omega}(\omega)
\end{align*}
Solution 34.8
\[
y'' - y = e^{-2x}, \quad y(0)=1, \quad y(\infty)=0
\]
We take the Fourier sine transform of the differential equation.
\begin{align*}
-\omega^2\hat{y}_s(\omega) + \frac{\omega}{\pi}\, y(0) - \hat{y}_s(\omega) &= \frac{\omega/\pi}{\omega^2+4} \\
\hat{y}_s(\omega) &= -\frac{\omega/\pi}{(\omega^2+4)(\omega^2+1)} + \frac{\omega/\pi}{\omega^2+1} \\
&= \frac{\omega/(3\pi)}{\omega^2+4} - \frac{\omega/(3\pi)}{\omega^2+1} + \frac{\omega/\pi}{\omega^2+1} \\
&= \frac{2}{3}\,\frac{\omega/\pi}{\omega^2+1} + \frac{1}{3}\,\frac{\omega/\pi}{\omega^2+4}
\end{align*}
\[
y = \frac{2}{3}\, e^{-x} + \frac{1}{3}\, e^{-2x}
\]
Solution 34.9
Consider the Fourier sine transform. Let $f(x)$ be an odd function.
\[
\mathcal{F}_s[f(x)] = \frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx
\]
Extend the integration because the integrand is even.
\[
= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\sin(\omega x)\,dx
\]
Note that $\int_{-\infty}^{\infty} f(x)\cos(\omega x)\,dx = 0$ as the integrand is odd.
\[
= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, i\, e^{-i\omega x}\,dx
= i\,\mathcal{F}[f(x)]
\]
\[
\mathcal{F}_s[f(x)] = i\,\mathcal{F}[f(x)], \quad\text{for odd } f(x).
\]
For general $f(x)$, use the odd extension, $\operatorname{sign}(x)f(|x|)$, to write the result.
\[
\mathcal{F}_s[f(x)] = i\,\mathcal{F}\big[\operatorname{sign}(x)f(|x|)\big]
\]
Now consider the inverse Fourier sine transform. Let $\hat{f}(\omega)$ be an odd function.
\[
\mathcal{F}_s^{-1}\big[\hat{f}(\omega)\big] = 2\int_0^{\infty}\hat{f}(\omega)\sin(\omega x)\,d\omega
\]
Extend the integration because the integrand is even.
\[
= \int_{-\infty}^{\infty}\hat{f}(\omega)\sin(\omega x)\,d\omega
\]
Note that $\int_{-\infty}^{\infty}\hat{f}(\omega)\cos(\omega x)\,d\omega = 0$ as the integrand is odd.
\[
= \int_{-\infty}^{\infty}\hat{f}(\omega)(-i)\, e^{i\omega x}\,d\omega
= -i\,\mathcal{F}^{-1}\big[\hat{f}(\omega)\big]
\]
\[
\mathcal{F}_s^{-1}\big[\hat{f}(\omega)\big] = -i\,\mathcal{F}^{-1}\big[\hat{f}(\omega)\big], \quad\text{for odd } \hat{f}(\omega).
\]
For general $\hat{f}(\omega)$, use the odd extension, $\operatorname{sign}(\omega)\hat{f}(|\omega|)$, to write the result.
\[
\mathcal{F}_s^{-1}\big[\hat{f}(\omega)\big] = -i\,\mathcal{F}^{-1}\big[\operatorname{sign}(\omega)\hat{f}(|\omega|)\big]
\]
Solution 34.10
\begin{align*}
\mathcal{F}_c[x f(x)] &= \frac{1}{\pi}\int_0^{\infty} x f(x)\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty} f(x)\,\frac{\partial}{\partial\omega}\sin(\omega x)\,dx \\
&= \frac{\partial}{\partial\omega}\,\frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx \\
&= \frac{\partial\hat{f}_s}{\partial\omega}(\omega)
\end{align*}
\begin{align*}
\mathcal{F}_s[x f(x)] &= \frac{1}{\pi}\int_0^{\infty} x f(x)\sin(\omega x)\,dx \\
&= -\frac{1}{\pi}\int_0^{\infty} f(x)\,\frac{\partial}{\partial\omega}\big(\cos(\omega x)\big)\,dx \\
&= -\frac{\partial}{\partial\omega}\,\frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx \\
&= -\frac{\partial\hat{f}_c}{\partial\omega}(\omega)
\end{align*}
\begin{align*}
\mathcal{F}_c[f(cx)] &= \frac{1}{\pi}\int_0^{\infty} f(cx)\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty} f(\xi)\cos\!\left(\frac{\omega\xi}{c}\right)\frac{d\xi}{c} \\
&= \frac{1}{c}\,\hat{f}_c\!\left(\frac{\omega}{c}\right)
\end{align*}
\begin{align*}
\mathcal{F}_s[f(cx)] &= \frac{1}{\pi}\int_0^{\infty} f(cx)\sin(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty} f(\xi)\sin\!\left(\frac{\omega\xi}{c}\right)\frac{d\xi}{c} \\
&= \frac{1}{c}\,\hat{f}_s\!\left(\frac{\omega}{c}\right)
\end{align*}
Solution 34.11
\[
\int_{-\infty}^{\infty} u(\xi)\, e^{-a(x-\xi)^2}\,d\xi = e^{-bx^2}
\]
We take the Fourier transform and solve for $\hat{U}(\omega)$.
\begin{align*}
2\pi\,\hat{U}(\omega)\,\mathcal{F}\!\left[e^{-ax^2}\right] &= \mathcal{F}\!\left[e^{-bx^2}\right] \\
2\pi\,\hat{U}(\omega)\,\frac{1}{\sqrt{4\pi a}}\, e^{-\omega^2/(4a)} &= \frac{1}{\sqrt{4\pi b}}\, e^{-\omega^2/(4b)} \\
\hat{U}(\omega) &= \frac{1}{2\pi}\sqrt{\frac{a}{b}}\, e^{-\omega^2(a-b)/(4ab)}
\end{align*}
Now we take the inverse Fourier transform.
\begin{align*}
\hat{U}(\omega) &= \frac{1}{2\pi}\sqrt{\frac{a}{b}}\,\sqrt{\frac{4\pi ab}{a-b}}\,\frac{1}{\sqrt{4\pi ab/(a-b)}}\, e^{-\omega^2(a-b)/(4ab)} \\
u(x) &= \frac{a}{\sqrt{\pi(a-b)}}\, e^{-abx^2/(a-b)}
\end{align*}
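The formula obtained in Solution 34.11 can be checked by evaluating the convolution numerically. This sketch is not from the original text; the values $a=2$, $b=1$ and the grid are arbitrary admissible choices (with $a>b$ so the Gaussian for $u$ has positive variance).

```python
import numpy as np

# Numerical check of Solution 34.11 with a = 2, b = 1:
# u(x) = a/sqrt(pi(a-b)) * exp(-a*b*x^2/(a-b)) should reproduce exp(-b*x^2)
# when convolved with exp(-a*x^2).
a, b = 2.0, 1.0
xi = np.linspace(-20.0, 20.0, 200001)
dxi = xi[1] - xi[0]
u = a / np.sqrt(np.pi * (a - b)) * np.exp(-a * b * xi**2 / (a - b))

for x0 in (0.0, 0.8, 2.0):
    conv = np.sum(u * np.exp(-a * (x0 - xi)**2)) * dxi
    print(x0, conv, np.exp(-b * x0**2))   # columns 2 and 3 should agree
```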
Solution 34.12
\begin{align*}
I &= \frac{1}{\pi}\int_0^{\infty}\frac{1}{x}\, e^{-cx}\sin(\omega x)\,dx \\
&= \frac{1}{\pi}\int_0^{\infty}\left[\int_c^{\infty} e^{-zx}\,dz\right]\sin(\omega x)\,dx \\
&= \frac{1}{\pi}\int_c^{\infty}\int_0^{\infty} e^{-zx}\sin(\omega x)\,dx\,dz \\
&= \frac{1}{\pi}\int_c^{\infty}\frac{\omega}{z^2+\omega^2}\,dz \\
&= \frac{1}{\pi}\left[\arctan\!\left(\frac{z}{\omega}\right)\right]_c^{\infty} \\
&= \frac{1}{\pi}\left(\frac{\pi}{2} - \arctan\!\left(\frac{c}{\omega}\right)\right) \\
&= \frac{1}{\pi}\arctan\!\left(\frac{\omega}{c}\right)
\end{align*}
Solution 34.13
We consider the differential equation
\[
y'' - a^2 y = e^{-a|x|}
\]
on the domain $-\infty<x<\infty$ with boundary conditions $y(\pm\infty)=0$. We take the Fourier transform of the differential equation.
\[
-\omega^2\hat{y} - a^2\hat{y} = \frac{a}{\pi(\omega^2+a^2)}
\]
We solve for $\hat{y}(\omega)$.
\[
\hat{y}(\omega) = -\frac{a}{\pi(\omega^2+a^2)^2}
\]
We take the inverse Fourier transform to find the solution of the differential equation.
\[
y(x) = -\int_{-\infty}^{\infty}\frac{a}{\pi(\omega^2+a^2)^2}\, e^{i\omega x}\,d\omega
\]
Note that since $\hat{y}(\omega)$ is a real-valued, even function, $y(x)$ is a real-valued, even function. Thus we only need to evaluate the integral for positive $x$. If we replace $x$ by $|x|$ in this expression we will have the solution that is valid for all $x$.
For $x>0$, we evaluate the integral by closing the path of integration in the upper half plane and using the Residue Theorem and Jordan's Lemma.
\begin{align*}
y(x) &= -\frac{a}{\pi}\int_{-\infty}^{\infty}\frac{e^{i\omega x}}{(\omega-ia)^2(\omega+ia)^2}\,d\omega \\
&= -i2\pi\,\frac{a}{\pi}\,\operatorname{Res}\left[\frac{e^{i\omega x}}{(\omega-ia)^2(\omega+ia)^2},\,\omega=ia\right] \\
&= -i2a\lim_{\omega\to ia}\frac{d}{d\omega}\left[\frac{e^{i\omega x}}{(\omega+ia)^2}\right] \\
&= -i2a\lim_{\omega\to ia}\left[\frac{ix\, e^{i\omega x}}{(\omega+ia)^2} - \frac{2\, e^{i\omega x}}{(\omega+ia)^3}\right] \\
&= -i2a\left(-\frac{ix\, e^{-ax}}{4a^2} - \frac{i\, e^{-ax}}{4a^3}\right) \\
&= -\frac{(1+ax)\, e^{-ax}}{2a^2}, \quad\text{for } x\geq0
\end{align*}
The solution of the differential equation is
\[
y(x) = -\frac{1}{2a^2}\big(1+a|x|\big)\, e^{-a|x|}.
\]
Solution 34.14
1. We take the Fourier cosine transform of the differential equation.
\begin{align*}
-\omega^2\hat{y}(\omega) - \frac{b}{\pi} - a^2\hat{y}(\omega) &= 0 \\
\hat{y}(\omega) &= -\frac{b}{\pi(\omega^2+a^2)}
\end{align*}
Now we take the inverse Fourier cosine transform. We use the fact that $\hat{y}(\omega)$ is an even function.
\begin{align*}
y(x) &= \mathcal{F}_c^{-1}\left[-\frac{b}{\pi(\omega^2+a^2)}\right] \\
&= \mathcal{F}^{-1}\left[-\frac{b}{\pi(\omega^2+a^2)}\right] \\
&= -\frac{b}{\pi}\, i2\pi\,\operatorname{Res}\left[\frac{e^{i\omega x}}{\omega^2+a^2},\,\omega=ia\right] \\
&= -i2b\lim_{\omega\to ia}\frac{e^{i\omega x}}{\omega+ia}, \quad\text{for } x\geq0
\end{align*}
\[
y(x) = -\frac{b}{a}\, e^{-ax}
\]
2. The Green function problem is
\[
g'' - a^2 g = \delta(x-\xi) \text{ on } x,\xi>0, \quad g'(0;\xi)=0,\ g(\infty;\xi)=0.
\]
We take the Fourier cosine transform and solve for $\hat{g}(\omega;\xi)$.
\begin{align*}
-\omega^2\hat{g} - a^2\hat{g} &= \mathcal{F}_c[\delta(x-\xi)] \\
\hat{g}(\omega;\xi) &= -\frac{1}{\omega^2+a^2}\,\mathcal{F}_c[\delta(x-\xi)]
\end{align*}
We express the right side as a product of Fourier cosine transforms.
\[
\hat{g}(\omega;\xi) = -\frac{\pi}{a}\,\mathcal{F}_c[e^{-ax}]\,\mathcal{F}_c[\delta(x-\xi)]
\]
Now we can apply the Fourier cosine convolution theorem,
\[
\mathcal{F}_c^{-1}\big[\mathcal{F}_c[f(x)]\,\mathcal{F}_c[g(x)]\big] = \frac{1}{2\pi}\int_0^{\infty} f(t)\big[g(|x-t|)+g(x+t)\big]\,dt,
\]
to obtain
\[
g(x;\xi) = -\frac{\pi}{a}\,\frac{1}{2\pi}\int_0^{\infty}\delta(t-\xi)\big[e^{-a|x-t|} + e^{-a(x+t)}\big]\,dt
\]
\[
g(x;\xi) = -\frac{1}{2a}\big(e^{-a|x-\xi|} + e^{-a(x+\xi)}\big)
\]
Solution 34.15
1. We take the Fourier sine transform of the differential equation.
\begin{align*}
-\omega^2\hat{y}(\omega) + \frac{\omega b}{\pi} - a^2\hat{y}(\omega) &= 0 \\
\hat{y}(\omega) &= \frac{b\omega}{\pi(\omega^2+a^2)}
\end{align*}
Now we take the inverse Fourier sine transform. We use the fact that $\hat{y}(\omega)$ is an odd function.
\begin{align*}
y(x) &= \mathcal{F}_s^{-1}\left[\frac{b\omega}{\pi(\omega^2+a^2)}\right] \\
&= -i\,\mathcal{F}^{-1}\left[\frac{b\omega}{\pi(\omega^2+a^2)}\right] \\
&= -i\,\frac{b}{\pi}\, i2\pi\,\operatorname{Res}\left[\frac{\omega\, e^{i\omega x}}{\omega^2+a^2},\,\omega=ia\right] \\
&= 2b\lim_{\omega\to ia}\frac{\omega\, e^{i\omega x}}{\omega+ia} \\
&= b\, e^{-ax} \quad\text{for } x\geq0
\end{align*}
\[
y(x) = b\, e^{-ax}
\]
2. Now we solve the differential equation with the Laplace transform.
\begin{align*}
y'' - a^2 y &= 0 \\
s^2\hat{y}(s) - s\, y(0) - y'(0) - a^2\hat{y}(s) &= 0
\end{align*}
We don't know the value of $y'(0)$, so we treat it as an unknown constant.
\begin{align*}
\hat{y}(s) &= \frac{bs + y'(0)}{s^2 - a^2} \\
y(x) &= b\cosh(ax) + \frac{y'(0)}{a}\sinh(ax)
\end{align*}
In order to satisfy the boundary condition at infinity we must choose $y'(0) = -ab$.
\[
y(x) = b\, e^{-ax}
\]
We see that solving the differential equation with the Laplace transform is not as convenient, because the boundary condition at infinity is not automatically satisfied. We had to find a value of $y'(0)$ so that $y(\infty)=0$.
3. The Green function problem is
\[
g'' - a^2 g = \delta(x-\xi) \text{ on } x,\xi>0, \quad g(0;\xi)=0,\ g(\infty;\xi)=0.
\]
We take the Fourier sine transform and solve for $\hat{g}(\omega;\xi)$.
\begin{align*}
-\omega^2\hat{g} - a^2\hat{g} &= \mathcal{F}_s[\delta(x-\xi)] \\
\hat{g}(\omega;\xi) &= -\frac{1}{\omega^2+a^2}\,\mathcal{F}_s[\delta(x-\xi)]
\end{align*}
We write the right side as a product of Fourier cosine transforms and sine transforms.
\[
\hat{g}(\omega;\xi) = -\frac{\pi}{a}\,\mathcal{F}_c[e^{-ax}]\,\mathcal{F}_s[\delta(x-\xi)]
\]
Now we can apply the Fourier sine convolution theorem,
\[
\mathcal{F}_s^{-1}\big[\mathcal{F}_s[f(x)]\,\mathcal{F}_c[g(x)]\big] = \frac{1}{2\pi}\int_0^{\infty} f(t)\big[g(|x-t|)-g(x+t)\big]\,dt,
\]
to obtain
\[
g(x;\xi) = -\frac{\pi}{a}\,\frac{1}{2\pi}\int_0^{\infty}\delta(t-\xi)\big[e^{-a|x-t|} - e^{-a(x+t)}\big]\,dt
\]
\[
g(x;\xi) = -\frac{1}{2a}\big(e^{-a|x-\xi|} - e^{-a(x+\xi)}\big)
\]
Solution 34.16
1. We take the Fourier transform of the differential equation, solve for $\hat{G}$ and then invert.
\begin{align*}
G'' + 2\beta G' + (\beta^2+\nu^2)G &= \delta(x-\xi) \\
-\omega^2\hat{G} + i2\beta\omega\hat{G} + (\beta^2+\nu^2)\hat{G} &= \frac{e^{-i\omega\xi}}{2\pi} \\
\hat{G} &= \frac{e^{-i\omega\xi}}{2\pi(\beta^2+\nu^2+i2\beta\omega-\omega^2)}
\end{align*}
\begin{align*}
G &= \int_{-\infty}^{\infty}\frac{e^{-i\omega\xi}\, e^{i\omega x}}{2\pi(\beta^2+\nu^2+i2\beta\omega-\omega^2)}\,d\omega \\
G &= -\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i\omega(x-\xi)}}{(\omega+\nu-i\beta)(\omega-\nu-i\beta)}\,d\omega
\end{align*}
For $x>\xi$ we close the path of integration in the upper half plane and use the Residue theorem. There are two simple poles in the upper half plane. For $x<\xi$ we close the path of integration in the lower half plane. Since the integrand is analytic there, the integral is zero. $G(x;\xi)=0$ for $x<\xi$. For $x>\xi$ we have
\begin{align*}
G(x;\xi) &= -\frac{1}{2\pi}\, i2\pi\left(\operatorname{Res}\left[\frac{e^{i\omega(x-\xi)}}{(\omega+\nu-i\beta)(\omega-\nu-i\beta)},\,\omega=\nu+i\beta\right] + \operatorname{Res}\left[\frac{e^{i\omega(x-\xi)}}{(\omega+\nu-i\beta)(\omega-\nu-i\beta)},\,\omega=-\nu+i\beta\right]\right) \\
G(x;\xi) &= -i\left(\frac{e^{i(\nu+i\beta)(x-\xi)}}{2\nu} - \frac{e^{i(-\nu+i\beta)(x-\xi)}}{2\nu}\right) \\
G(x;\xi) &= \frac{1}{\nu}\, e^{-\beta(x-\xi)}\sin\big(\nu(x-\xi)\big).
\end{align*}
Thus the Green function is
\[
G(x;\xi) = \frac{1}{\nu}\, e^{-\beta(x-\xi)}\sin\big(\nu(x-\xi)\big)\, H(x-\xi).
\]
2. The solution of the inhomogeneous equation
\[
y'' + 2\beta y' + (\beta^2+\nu^2) y = g(x), \quad y(-\infty)=y(\infty)=0,
\]
is
\begin{align*}
y(x) &= \int_{-\infty}^{\infty} g(\xi)\, G(x;\xi)\,d\xi \\
y(x) &= \frac{1}{\nu}\int_{-\infty}^{x} g(\xi)\, e^{-\beta(x-\xi)}\sin\big(\nu(x-\xi)\big)\,d\xi.
\end{align*}
Taking the limit $\beta\to0$ we have
\[
y = \frac{1}{\nu}\int_{-\infty}^{x} g(\xi)\sin\big(\nu(x-\xi)\big)\,d\xi.
\]
You may assume that the interchange of limits is permitted.
Solution 34.17
First we consider the Fourier transform of $f(x) = 1/(x^2+c^2)$ where $\Re(c)>0$.
\begin{align*}
\hat{f}(\omega) &= \mathcal{F}\left[\frac{1}{x^2+c^2}\right] \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{1}{x^2+c^2}\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{-i\omega x}}{(x-ic)(x+ic)}\,dx
\end{align*}
If $\omega<0$ then we close the path of integration with a semi-circle in the upper half plane.
\begin{align*}
\hat{f}(\omega) &= \frac{1}{2\pi}\,2\pi i\,\operatorname{Res}\left[\frac{e^{-i\omega x}}{(x-ic)(x+ic)},\, x=ic\right] \\
&= \frac{e^{\omega c}}{2c}, \quad\text{for }\omega<0
\end{align*}
Note that $f(x) = 1/(x^2+c^2)$ is an even function of $x$ so that $\hat{f}(\omega)$ is an even function of $\omega$. If $\hat{f}(\omega) = g(\omega)$ for $\omega<0$ then $\hat{f}(\omega) = g(-|\omega|)$ for all $\omega$. Thus
\[
\mathcal{F}\left[\frac{1}{x^2+c^2}\right] = \frac{1}{2c}\, e^{-c|\omega|}.
\]
Now we consider the integral equation
\[
\int_{-\infty}^{\infty}\frac{u(\xi)}{(x-\xi)^2+a^2}\,d\xi = \frac{1}{x^2+b^2}, \quad 0<a<b.
\]
We take the Fourier transform, utilizing the convolution theorem.
\begin{align*}
2\pi\,\hat{u}(\omega)\,\frac{e^{-a|\omega|}}{2a} &= \frac{e^{-b|\omega|}}{2b} \\
\hat{u}(\omega) &= \frac{a\, e^{-(b-a)|\omega|}}{2\pi b} \\
u(x) &= \frac{a}{2\pi b}\,\frac{2(b-a)}{x^2+(b-a)^2}
\end{align*}
\[
u(x) = \frac{a(b-a)}{\pi b\,\big(x^2+(b-a)^2\big)}
\]
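The solution of the integral equation can be verified by numerical convolution. This sketch is not from the original text; the values $a=1$, $b=3$ and the truncation of the integral are arbitrary choices satisfying $0<a<b$.

```python
import numpy as np

# Numerical check of Solution 34.17 with a = 1, b = 3:
# u(x) = a(b-a)/(pi*b*(x^2+(b-a)^2)) should satisfy
#   integral u(xi)/((x-xi)^2 + a^2) dxi = 1/(x^2 + b^2).
a, b = 1.0, 3.0
xi = np.linspace(-1000.0, 1000.0, 2000001)
dxi = xi[1] - xi[0]
u = a * (b - a) / (np.pi * b * (xi**2 + (b - a)**2))

for x0 in (0.0, 1.0):
    conv = np.sum(u / ((x0 - xi)**2 + a**2)) * dxi
    print(x0, conv, 1.0 / (x0**2 + b**2))  # columns 2 and 3 should agree
```

The integrand decays like $1/\xi^4$, so truncating the infinite range at $\pm1000$ introduces only a negligible error.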
Solution 34.18
1. Note that $F_c(\omega)$ is an even function. The inverse Fourier cosine transform is
\begin{align*}
f(x) &= \mathcal{F}_c^{-1}[F_c(\omega)] \\
&= \int_{-\infty}^{\infty} F_c(\omega)\, e^{i\omega x}\,d\omega \\
&= \int_{-\infty}^{\infty} F_c(\omega)\big(\cos(\omega x) + i\sin(\omega x)\big)\,d\omega \\
&= \int_{-\infty}^{\infty} F_c(\omega)\cos(\omega x)\,d\omega \\
&= 2\int_0^{\infty} F_c(\omega)\cos(\omega x)\,d\omega.
\end{align*}
2.
\begin{align*}
\mathcal{F}_c[y''] &= \frac{1}{\pi}\int_0^{\infty} y''\cos(\omega x)\,dx \\
&= \frac{1}{\pi}\big[y'\cos(\omega x)\big]_0^{\infty} + \frac{\omega}{\pi}\int_0^{\infty} y'\sin(\omega x)\,dx \\
&= -\frac{1}{\pi}\, y'(0) + \frac{\omega}{\pi}\big[y\sin(\omega x)\big]_0^{\infty} - \frac{\omega^2}{\pi}\int_0^{\infty} y\cos(\omega x)\,dx
\end{align*}
\[
\mathcal{F}_c[y''] = -\omega^2 F_c(\omega) - \frac{y'(0)}{\pi}
\]
3. We take the Fourier cosine transform of the differential equation.
\begin{align*}
-\omega^2\hat{y}(\omega) - \frac{b}{\pi} - a^2\hat{y}(\omega) &= 0 \\
\hat{y}(\omega) &= -\frac{b}{\pi(\omega^2+a^2)}
\end{align*}
Now we take the inverse Fourier cosine transform. We use the fact that $\hat{y}(\omega)$ is an even function.
\begin{align*}
y(x) &= \mathcal{F}_c^{-1}\left[-\frac{b}{\pi(\omega^2+a^2)}\right] \\
&= \mathcal{F}^{-1}\left[-\frac{b}{\pi(\omega^2+a^2)}\right] \\
&= -\frac{b}{\pi}\, i2\pi\,\operatorname{Res}\left[\frac{e^{i\omega x}}{\omega^2+a^2},\,\omega=ia\right] \\
&= -i2b\lim_{\omega\to ia}\frac{e^{i\omega x}}{\omega+ia}, \quad\text{for } x\geq0
\end{align*}
\[
y(x) = -\frac{b}{a}\, e^{-ax}
\]
Solution 34.19
1. Suppose $f(x)$ is an odd function. The Fourier transform of $f(x)$ is
\begin{align*}
\mathcal{F}[f(x)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\big(\cos(\omega x) - i\sin(\omega x)\big)\,dx \\
&= -\frac{i}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx.
\end{align*}
Note that $F(\omega) = \mathcal{F}[f(x)]$ is an odd function of $\omega$. The inverse Fourier transform of $F(\omega)$ is
\begin{align*}
\mathcal{F}^{-1}[F(\omega)] &= \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega x}\,d\omega \\
&= 2i\int_0^{\infty} F(\omega)\sin(\omega x)\,d\omega.
\end{align*}
Thus we have that
\begin{align*}
f(x) &= 2i\int_0^{\infty}\left[-\frac{i}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx\right]\sin(\omega x)\,d\omega \\
&= 2\int_0^{\infty}\left[\frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx\right]\sin(\omega x)\,d\omega.
\end{align*}
This gives us the Fourier sine transform pair
\[
f(x) = 2\int_0^{\infty} F_s(\omega)\sin(\omega x)\,d\omega,
\qquad
F_s(\omega) = \frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx.
\]
2.
\begin{align*}
\mathcal{F}_s[y''] &= \frac{1}{\pi}\int_0^{\infty} y''\sin(\omega x)\,dx \\
&= \frac{1}{\pi}\big[y'\sin(\omega x)\big]_0^{\infty} - \frac{\omega}{\pi}\int_0^{\infty} y'\cos(\omega x)\,dx \\
&= -\frac{\omega}{\pi}\big[y\cos(\omega x)\big]_0^{\infty} - \frac{\omega^2}{\pi}\int_0^{\infty} y\sin(\omega x)\,dx
\end{align*}
\[
\mathcal{F}_s[y''] = -\omega^2 F_s(\omega) + \frac{\omega}{\pi}\, y(0)
\]
3. We take the Fourier sine transform of the differential equation.
\begin{align*}
-\omega^2\hat{y}(\omega) + \frac{\omega b}{\pi} - a^2\hat{y}(\omega) &= 0 \\
\hat{y}(\omega) &= \frac{b\omega}{\pi(\omega^2+a^2)}
\end{align*}
Now we take the inverse Fourier sine transform. We use the fact that $\hat{y}(\omega)$ is an odd function.
\begin{align*}
y(x) &= \mathcal{F}_s^{-1}\left[\frac{b\omega}{\pi(\omega^2+a^2)}\right] \\
&= -i\,\mathcal{F}^{-1}\left[\frac{b\omega}{\pi(\omega^2+a^2)}\right] \\
&= -i\,\frac{b}{\pi}\, i2\pi\,\operatorname{Res}\left[\frac{\omega\, e^{i\omega x}}{\omega^2+a^2},\,\omega=ia\right] \\
&= 2b\lim_{\omega\to ia}\frac{\omega\, e^{i\omega x}}{\omega+ia} \\
&= b\, e^{-ax} \quad\text{for } x\geq0
\end{align*}
\[
y(x) = b\, e^{-ax}
\]
4. Now we solve the differential equation with the Laplace transform.
\begin{align*}
y'' - a^2 y &= 0 \\
s^2\hat{y}(s) - s\, y(0) - y'(0) - a^2\hat{y}(s) &= 0
\end{align*}
We don't know the value of $y'(0)$, so we treat it as an unknown constant.
\begin{align*}
\hat{y}(s) &= \frac{bs + y'(0)}{s^2 - a^2} \\
y(x) &= b\cosh(ax) + \frac{y'(0)}{a}\sinh(ax)
\end{align*}
In order to satisfy the boundary condition at infinity we must choose $y'(0) = -ab$.
\[
y(x) = b\, e^{-ax}
\]
We see that solving the differential equation with the Laplace transform is not as convenient, because the boundary condition at infinity is not automatically satisfied. We had to find a value of $y'(0)$ so that $y(\infty)=0$.
Solution 34.20
The Fourier, Fourier cosine and Fourier sine transforms are defined:
\begin{align*}
\mathcal{F}[f(x)] &= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx, \\
\mathcal{F}_c[f(x)] &= \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx, \\
\mathcal{F}_s[f(x)] &= \frac{1}{\pi}\int_0^{\infty} f(x)\sin(\omega x)\,dx.
\end{align*}
We start with the right side of the identity and apply the usual tricks of integral calculus to reduce the expression to the left side.
\begin{align*}
&\frac{1}{2}\big(\mathcal{F}_c[f(x)+f(-x)] - i\,\mathcal{F}_s[f(x)-f(-x)]\big) \\
&= \frac{1}{2\pi}\left(\int_0^{\infty} f(x)\cos(\omega x)\,dx + \int_0^{\infty} f(-x)\cos(\omega x)\,dx - i\int_0^{\infty} f(x)\sin(\omega x)\,dx + i\int_0^{\infty} f(-x)\sin(\omega x)\,dx\right) \\
&= \frac{1}{2\pi}\left(\int_0^{\infty} f(x)\cos(\omega x)\,dx + \int_{-\infty}^{0} f(x)\cos(\omega x)\,dx - i\int_0^{\infty} f(x)\sin(\omega x)\,dx - i\int_{-\infty}^{0} f(x)\sin(\omega x)\,dx\right) \\
&= \frac{1}{2\pi}\left(\int_{-\infty}^{\infty} f(x)\cos(\omega x)\,dx - i\int_{-\infty}^{\infty} f(x)\sin(\omega x)\,dx\right) \\
&= \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\,dx \\
&= \mathcal{F}[f(x)]
\end{align*}
Solution 34.21
We take the Fourier transform of the integral equation, noting that the left side is the convolution of $u(x)$ and $\frac{1}{x^2+a^2}$.
\[
2\pi\,\hat{u}(\omega)\,\mathcal{F}\left[\frac{1}{x^2+a^2}\right] = \mathcal{F}\left[\frac{1}{x^2+b^2}\right]
\]
We find the Fourier transform of $f(x) = \frac{1}{x^2+c^2}$. Note that since $f(x)$ is an even, real-valued function, $\hat{f}(\omega)$ is an even, real-valued function.
\[
\mathcal{F}\left[\frac{1}{x^2+c^2}\right] = \frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{-i\omega x}}{x^2+c^2}\,dx
\]
For $\omega<0$ we close the path of integration in the upper half plane and apply Jordan's Lemma to evaluate the integral in terms of the residues.
\begin{align*}
&= \frac{1}{2\pi}\, i2\pi\,\operatorname{Res}\left[\frac{e^{-i\omega x}}{(x-ic)(x+ic)},\, x=ic\right] \\
&= i\,\frac{e^{-i\omega ic}}{2ic} \\
&= \frac{1}{2c}\, e^{\omega c}
\end{align*}
Since $\hat{f}(\omega)$ is an even function, we have
\[
\mathcal{F}\left[\frac{1}{x^2+c^2}\right] = \frac{1}{2c}\, e^{-c|\omega|}.
\]
Our equation for $\hat{u}(\omega)$ becomes,
\begin{align*}
2\pi\,\hat{u}(\omega)\,\frac{1}{2a}\, e^{-a|\omega|} &= \frac{1}{2b}\, e^{-b|\omega|} \\
\hat{u}(\omega) &= \frac{a}{2\pi b}\, e^{-(b-a)|\omega|}.
\end{align*}
We take the inverse Fourier transform using the transform pair we derived above.
\[
u(x) = \frac{a}{2\pi b}\,\frac{2(b-a)}{x^2+(b-a)^2}
\]
\[
u(x) = \frac{a(b-a)}{\pi b\,\big(x^2+(b-a)^2\big)}
\]
Chapter 35
The Gamma Function
35.1 Euler's Formula
For non-negative, integral $n$ the factorial function is
\[
n! = n(n-1)\cdots(1), \quad\text{with } 0! = 1.
\]
We would like to extend the factorial function so it is defined for all complex numbers.
Consider the function $\Gamma(z)$ defined by Euler's formula
\[
\Gamma(z) = \int_0^{\infty} e^{-t}\, t^{z-1}\,dt.
\]
(Here we take the principal value of $t^{z-1}$.) The integral converges for $\Re(z)>0$. If $\Re(z)\leq0$ then the integrand will be at least as singular as $1/t$ at $t=0$ and thus the integral will diverge.
Difference Equation. Using integration by parts,
\begin{align*}
\Gamma(z+1) &= \int_0^{\infty} e^{-t}\, t^{z}\,dt \\
&= \big[-e^{-t}\, t^z\big]_0^{\infty} + \int_0^{\infty} e^{-t}\, z t^{z-1}\,dt.
\end{align*}
Since $\Re(z)>0$ the first term vanishes.
\begin{align*}
&= z\int_0^{\infty} e^{-t}\, t^{z-1}\,dt \\
&= z\,\Gamma(z)
\end{align*}
Thus $\Gamma(z)$ satisfies the difference equation
\[
\Gamma(z+1) = z\,\Gamma(z).
\]
For general $z$ it is not possible to express the integral in terms of elementary functions. However, we can evaluate the integral for some $z$. The value $z=1$ looks particularly simple to do.
\[
\Gamma(1) = \int_0^{\infty} e^{-t}\,dt = \big[-e^{-t}\big]_0^{\infty} = 1.
\]
Using the difference equation we can find the value of $\Gamma(n)$ for any positive, integral $n$.
\begin{align*}
\Gamma(1) &= 1 \\
\Gamma(2) &= 1 \\
\Gamma(3) &= (2)(1) = 2 \\
\Gamma(4) &= (3)(2)(1) = 6 \\
&\ \ \vdots \\
\Gamma(n+1) &= n!.
\end{align*}
Thus the Gamma function, $\Gamma(z)$, extends the factorial function to all complex $z$ in the right half-plane. For non-negative, integral $n$ we have
\[
\Gamma(n+1) = n!.
\]
Analyticity. The derivative of $\Gamma(z)$ is
\[
\Gamma'(z) = \int_0^{\infty} e^{-t}\, t^{z-1}\log t\,dt.
\]
Since this integral converges for $\Re(z)>0$, $\Gamma(z)$ is analytic in that domain.
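Euler's formula and the difference equation can be checked by numerical quadrature. This sketch is not from the original text; the truncation at $t=80$ and the grid resolution are arbitrary numerical choices.

```python
import numpy as np

# Quadrature check of Euler's formula: Gamma(n+1) = n!, and the difference
# equation Gamma(z+1) = z Gamma(z).
t = np.linspace(1e-12, 80.0, 800001)
dt = t[1] - t[0]

def gamma(z):
    # trapezoid approximation of integral_0^inf e^{-t} t^{z-1} dt; accurate
    # here for z >= 1, where the integrand is bounded at t = 0
    vals = np.exp(-t) * t**(z - 1.0)
    return (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * dt

print(gamma(5.0))                      # close to 4! = 24
print(gamma(4.5) - 3.5 * gamma(3.5))   # close to 0 by the difference equation
```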
35.2 Hankel's Formula
We would like to find the analytic continuation of the Gamma function into the left half-plane. We accomplish this with Hankel's formula
\[
\Gamma(z) = \frac{1}{2i\sin(\pi z)}\int_C e^t\, t^{z-1}\,dt.
\]
Here $C$ is the contour starting at $-\infty$ below the real axis, enclosing the origin and returning to $-\infty$ above the real axis. A graph of this contour is shown in Figure 35.1. Again we use the principal value of $t^{z-1}$ so there is a branch cut on the negative real axis.
The integral in Hankel's formula converges for all complex $z$. For non-positive, integral $z$ the integral does not vanish. Thus because of the sine term the Gamma function has simple poles at $z = 0, -1, -2, \ldots$. For positive, integral $z$, the integrand is entire and thus the integral vanishes. Using L'Hospital's rule you can show that the points $z = 1, 2, 3, \ldots$ are removable singularities and the Gamma function is analytic at these points. Since the only zeroes of $\sin(\pi z)$ occur for integral $z$, $\Gamma(z)$ is analytic in the entire plane except for the points $z = 0, -1, -2, \ldots$.
Figure 35.1: The Hankel Contour.
Difference Equation. Using integration by parts we can derive the difference equation from Hankel's formula.
\[ \Gamma(z+1) = \frac{1}{2i \sin(\pi(z+1))} \int_C e^t t^z \, dt = \frac{-1}{2i \sin(\pi z)} \left( \left[ e^t t^z \right]_{-\infty - 0i}^{-\infty + 0i} - \int_C z\, e^t t^{z-1} \, dt \right) \]
The boundary term vanishes since $e^t \to 0$ at the endpoints of the contour.
\[ = \frac{1}{2i \sin(\pi z)}\, z \int_C e^t t^{z-1} \, dt = z \Gamma(z). \]
Evaluating $\Gamma(1)$,
\[ \Gamma(1) = \lim_{z \to 1} \frac{\int_C e^t t^{z-1} \, dt}{2i \sin(\pi z)}. \]
Both the numerator and denominator vanish. Using L'Hospital's rule,
\[ = \lim_{z \to 1} \frac{\int_C e^t t^{z-1} \log t \, dt}{2\pi i \cos(\pi z)} = \frac{\int_C e^t \log t \, dt}{-2\pi i}. \]
Let $C_r$ be the circle of radius $r$ starting at $-\pi$ radians and going to $\pi$ radians.
\[ = -\frac{1}{2\pi i} \left( \int_{-\infty}^{-r} e^t \left[ \log(-t) - i\pi \right] dt + \int_{C_r} e^t \log t \, dt + \int_{-r}^{-\infty} e^t \left[ \log(-t) + i\pi \right] dt \right) \]
\[ = -\frac{1}{2\pi i} \left( \int_{-r}^{-\infty} e^t \left[ \log(-t) + i\pi \right] dt - \int_{-r}^{-\infty} e^t \left[ \log(-t) - i\pi \right] dt + \int_{C_r} e^t \log t \, dt \right) \]
\[ = -\frac{1}{2\pi i} \left( \int_{-r}^{-\infty} e^t\, 2\pi i \, dt + \int_{C_r} e^t \log t \, dt \right) \]
The integral on $C_r$ vanishes as $r \to 0$.
\[ = \frac{1}{2\pi i}\, 2\pi i \int_{-\infty}^0 e^t \, dt = 1. \]
Thus we obtain the same value as with Euler's formula. It can be shown that Hankel's formula is the analytic continuation of the Gamma function into the left half-plane.
35.3 Gauss' Formula

Gauss defined the Gamma function as an infinite product. This form is useful in deriving some of its properties. We can obtain the product form from Euler's formula. First recall that
\[ e^{-t} = \lim_{n \to \infty} \left( 1 - \frac{t}{n} \right)^n. \]
Substituting this into Euler's formula,
\[ \Gamma(z) = \int_0^\infty e^{-t} t^{z-1} \, dt = \lim_{n \to \infty} \int_0^n \left( 1 - \frac{t}{n} \right)^n t^{z-1} \, dt. \]
With the substitution $\tau = t/n$,
\[ = \lim_{n \to \infty} \int_0^1 (1 - \tau)^n\, n^{z-1} \tau^{z-1}\, n \, d\tau = \lim_{n \to \infty} n^z \int_0^1 (1 - \tau)^n \tau^{z-1} \, d\tau. \]
Let $n$ be an integer. Using integration by parts we can evaluate the integral.
\[ \int_0^1 (1-\tau)^n \tau^{z-1} \, d\tau = \left[ (1-\tau)^n \frac{\tau^z}{z} \right]_0^1 + \int_0^1 n (1-\tau)^{n-1} \frac{\tau^z}{z} \, d\tau \]
\[ = \frac{n}{z} \int_0^1 (1-\tau)^{n-1} \tau^z \, d\tau = \frac{n(n-1)}{z(z+1)} \int_0^1 (1-\tau)^{n-2} \tau^{z+1} \, d\tau \]
\[ = \frac{n(n-1)\cdots(1)}{z(z+1)\cdots(z+n-1)} \int_0^1 \tau^{z+n-1} \, d\tau = \frac{n(n-1)\cdots(1)}{z(z+1)\cdots(z+n-1)} \left[ \frac{\tau^{z+n}}{z+n} \right]_0^1 = \frac{n!}{z(z+1)\cdots(z+n)} \]
Thus we have that
\[ \Gamma(z) = \lim_{n \to \infty} \frac{n^z\, n!}{z(z+1)\cdots(z+n)} \]
\[ = \frac{1}{z} \lim_{n \to \infty} \frac{(1)(2)\cdots(n)}{(z+1)(z+2)\cdots(z+n)}\, n^z = \frac{1}{z} \lim_{n \to \infty} \frac{1}{(1+z)(1+z/2)\cdots(1+z/n)}\, n^z \]
\[ = \frac{1}{z} \lim_{n \to \infty} \frac{1}{(1+z)(1+z/2)\cdots(1+z/n)}\, \frac{2^z\, 3^z \cdots n^z}{1^z\, 2^z \cdots (n-1)^z} \]
Since $\lim_{n\to\infty} (n+1)^z / n^z = 1$ we can multiply by that factor.
\[ = \frac{1}{z} \lim_{n \to \infty} \frac{1}{(1+z)(1+z/2)\cdots(1+z/n)}\, \frac{2^z\, 3^z \cdots (n+1)^z}{1^z\, 2^z \cdots n^z} = \frac{1}{z} \prod_{n=1}^\infty \left[ \frac{1}{1 + z/n}\, \frac{(n+1)^z}{n^z} \right] \]
Thus we have Gauss' formula for the Gamma function
\[ \Gamma(z) = \frac{1}{z} \prod_{n=1}^\infty \left[ \left( 1 + \frac{1}{n} \right)^z \left( 1 + \frac{z}{n} \right)^{-1} \right]. \]
We derived this formula from Euler's formula, which is valid only in the right half-plane. However, the product formula is valid for all $z$ except $z = 0, -1, -2, \ldots$.
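The partial products of Gauss' formula can be evaluated directly. The sketch below (illustrative helper name, not from the text) shows the slow $O(1/N)$ convergence, including at a negative non-integer argument where Euler's integral diverges:

```python
# Partial products of Gauss's formula,
#   Gamma(z) ~ (1/z) * prod_{n=1}^{N} (1 + 1/n)^z (1 + z/n)^{-1}.
# Convergence is only O(1/N), but the formula also works for negative
# non-integer z, where Euler's integral diverges. (An illustrative sketch.)
import math

def gamma_gauss(z, terms=100000):
    p = 1.0 / z
    for n in range(1, terms + 1):
        p *= (1.0 + 1.0 / n) ** z / (1.0 + z / n)
    return p

for z in (0.5, 2.5, -1.5):
    print(z, gamma_gauss(z), math.gamma(z))
```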
35.4 Weierstrass' Formula

The Euler-Mascheroni Constant. Before deriving Weierstrass' product formula for the Gamma function we will need to define the Euler-Mascheroni constant
\[ \gamma = \lim_{n \to \infty} \left[ \left( 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} \right) - \log n \right] = 0.5772\ldots. \]
In deriving the Euler product formula, we had the equation
\[ \Gamma(z) = \lim_{n \to \infty} \left[ \frac{n^z\, n!}{z(z+1)\cdots(z+n)} \right] = \lim_{n \to \infty} \left[ z^{-1} \left( 1 + \frac{z}{1} \right)^{-1} \left( 1 + \frac{z}{2} \right)^{-1} \cdots \left( 1 + \frac{z}{n} \right)^{-1} n^z \right]. \]
Taking the reciprocal,
\[ \frac{1}{\Gamma(z)} = \lim_{n \to \infty} \left[ z \left( 1 + \frac{z}{1} \right) \left( 1 + \frac{z}{2} \right) \cdots \left( 1 + \frac{z}{n} \right) e^{-z \log n} \right] \]
\[ = \lim_{n \to \infty} \left[ z \left( 1 + \frac{z}{1} \right) e^{-z} \left( 1 + \frac{z}{2} \right) e^{-z/2} \cdots \left( 1 + \frac{z}{n} \right) e^{-z/n} \exp\left( \left( 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \log n \right) z \right) \right]. \]
Weierstrass' formula for the Gamma function is then
\[ \frac{1}{\Gamma(z)} = z\, e^{\gamma z} \prod_{n=1}^\infty \left[ \left( 1 + \frac{z}{n} \right) e^{-z/n} \right]. \]
Since the product is uniformly convergent, $1/\Gamma(z)$ is an entire function. Since $1/\Gamma(z)$ has no singularities, we see that $\Gamma(z)$ has no zeros.
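Weierstrass' product and the defining limit for $\gamma$ can both be checked numerically. This is an illustrative sketch (helper names are not from the text); the factor $(1 + z/n)$ vanishing exactly at $z = -n$ shows why $1/\Gamma$ has zeros at the non-positive integers:

```python
# Sketch of Weierstrass's formula:
#   1/Gamma(z) = z e^{gamma z} prod_{n>=1} (1 + z/n) e^{-z/n}.
# The Euler-Mascheroni constant is recomputed from its defining limit.
import math

EULER_GAMMA = 0.5772156649015329

def euler_gamma_limit(n=10**6):
    return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

def inv_gamma(z, terms=100000):
    p = z * math.exp(EULER_GAMMA * z)
    for n in range(1, terms + 1):
        p *= (1.0 + z / n) * math.exp(-z / n)
    return p

print(euler_gamma_limit())                      # approaches 0.57721...
print(1.0 / inv_gamma(2.5), math.gamma(2.5))    # agree to ~4 digits
print(inv_gamma(-2.0))                          # the n = 2 factor makes this 0
```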
Result 35.4.1 Euler's formula for the Gamma function is valid for $\Re(z) > 0$.
\[ \Gamma(z) = \int_0^\infty e^{-t} t^{z-1} \, dt \]
Hankel's formula defines $\Gamma(z)$ for the entire complex plane except for the points $z = 0, -1, -2, \ldots$.
\[ \Gamma(z) = \frac{1}{2i \sin(\pi z)} \int_C e^t t^{z-1} \, dt \]
Gauss' and Weierstrass' product formulas are, respectively,
\[ \Gamma(z) = \frac{1}{z} \prod_{n=1}^\infty \left[ \left( 1 + \frac{1}{n} \right)^z \left( 1 + \frac{z}{n} \right)^{-1} \right] \quad \text{and} \quad \frac{1}{\Gamma(z)} = z\, e^{\gamma z} \prod_{n=1}^\infty \left[ \left( 1 + \frac{z}{n} \right) e^{-z/n} \right]. \]
35.5 Stirling's Approximation

In this section we will try to get an approximation to the Gamma function for large positive argument. Euler's formula is
\[ \Gamma(x) = \int_0^\infty e^{-t} t^{x-1} \, dt. \]
We could first try to approximate the integral by only looking at the domain where the integrand is large. In Figure 35.2 the integrand in the formula for $\Gamma(10)$, $e^{-t} t^9$, is plotted.

Figure 35.2: Plot of the integrand for $\Gamma(10)$
We see that the important part of the integrand is the hump centered around $t = 9$. If we find where the integrand of $\Gamma(x)$ has its maximum,
\[ \frac{d}{dt} \left( e^{-t} t^{x-1} \right) = 0, \qquad -e^{-t} t^{x-1} + (x-1)\, e^{-t} t^{x-2} = 0, \qquad (x-1) - t = 0, \qquad t = x - 1, \]
we see that the maximum varies with $x$. This could complicate our analysis. To take care of this problem we introduce the change of variables $t = xs$.
\[ \Gamma(x) = \int_0^\infty e^{-xs} (xs)^{x-1}\, x \, ds = x^x \int_0^\infty e^{-xs} s^x s^{-1} \, ds = x^x \int_0^\infty e^{-x(s - \log s)} s^{-1} \, ds \]
The integrands, $e^{-x(s - \log s)} s^{-1}$, for $\Gamma(5)$ and $\Gamma(20)$ are plotted in Figure 35.3.

Figure 35.3: Plot of the integrand for $\Gamma(5)$ and $\Gamma(20)$.
We see that the important part of the integrand is the hump that seems to be centered about $s = 1$. Also note that the hump becomes narrower with increasing $x$. This makes sense, as the $e^{-x(s - \log s)}$ term is the most rapidly varying term. Instead of integrating from zero to infinity, we could get a good approximation to the integral by just integrating over some small neighborhood centered at $s = 1$. Since $s - \log s$ has a minimum at $s = 1$, $e^{-x(s - \log s)}$ has a maximum there. Because the important part of the integrand is the small area around $s = 1$, it makes sense to approximate $s - \log s$ with its Taylor series about that point.
\[ s - \log s = 1 + \frac{1}{2} (s-1)^2 + O\left( (s-1)^3 \right) \]
Since the hump becomes increasingly narrow with increasing $x$, we will approximate the $1/s$ term in the integrand with its value at $s = 1$. Substituting these approximations into the integral, we obtain
\[ \Gamma(x) \approx x^x \int_{1-\delta}^{1+\delta} e^{-x\left( 1 + (s-1)^2/2 \right)} \, ds = x^x e^{-x} \int_{1-\delta}^{1+\delta} e^{-x(s-1)^2/2} \, ds \]
As $x \to \infty$ both of the integrals
\[ \int_{-\infty}^{1-\delta} e^{-x(s-1)^2/2} \, ds \quad \text{and} \quad \int_{1+\delta}^{\infty} e^{-x(s-1)^2/2} \, ds \]
are exponentially small. Thus instead of integrating from $1-\delta$ to $1+\delta$ we can integrate from $-\infty$ to $\infty$.
\[ \Gamma(x) \approx x^x e^{-x} \int_{-\infty}^{\infty} e^{-x(s-1)^2/2} \, ds = x^x e^{-x} \int_{-\infty}^{\infty} e^{-x s^2/2} \, ds = x^x e^{-x} \sqrt{\frac{2\pi}{x}} \]
\[ \Gamma(x) \sim \sqrt{2\pi}\, x^{x - 1/2} e^{-x} \quad \text{as } x \to \infty. \]
This is known as Stirling's approximation to the Gamma function. In the table below, we see that the approximation is pretty good even for relatively small argument.
n      Gamma(n)          sqrt(2 pi) x^{x-1/2} e^{-x}    relative error
5      24                23.6038                        0.0165
15     8.71783e10        8.66954e10                     0.0055
25     6.20448e23        6.18384e23                     0.0033
35     2.95233e38        2.94531e38                     0.0024
45     2.65827e54        2.65335e54                     0.0019

In deriving Stirling's approximation to the Gamma function we did a lot of hand waving. However, all of the steps can be justified, and better approximations can be obtained by using Laplace's method for finding the asymptotic behavior of integrals.
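The relative-error column of the table is straightforward to reproduce (a minimal sketch; the helper name is illustrative):

```python
# Reproduce the relative errors of the table above: Stirling's approximation
# sqrt(2 pi) x^{x-1/2} e^{-x} versus Gamma(n) = (n-1)!.
import math

def stirling(x):
    return math.sqrt(2.0 * math.pi) * x ** (x - 0.5) * math.exp(-x)

for n in (5, 15, 25, 35, 45):
    exact = math.gamma(n)
    rel = (exact - stirling(n)) / exact
    print(n, exact, stirling(n), round(rel, 4))
```

The relative error behaves like $1/(12x)$, which matches the roughly threefold improvement between successive rows.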
35.6 Exercises

Exercise 35.1
Given that
\[ \int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi}, \]
deduce the value of $\Gamma(1/2)$. Now find the value of $\Gamma(n + 1/2)$.

Exercise 35.2
Evaluate $\int_0^\infty e^{-x^3} \, dx$ in terms of the Gamma function.

Exercise 35.3
Show that
\[ \int_0^\infty e^{-x} \sin(\log x) \, dx = \frac{\Gamma(i) + \Gamma(-i)}{2}. \]
35.7 Hints

Hint 35.1
Use the change of variables $\xi = x^2$ in the integral. To find the value of $\Gamma(n+1/2)$ use the difference relation.

Hint 35.2
Make the change of variable $\xi = x^3$.
35.8 Solutions

Solution 35.1
\[ \int_{-\infty}^{\infty} e^{-x^2} \, dx = \sqrt{\pi} \qquad\Rightarrow\qquad \int_0^\infty e^{-x^2} \, dx = \frac{\sqrt{\pi}}{2} \]
Make the change of variables $\xi = x^2$.
\[ \int_0^\infty e^{-\xi}\, \frac{1}{2}\, \xi^{-1/2} \, d\xi = \frac{\sqrt{\pi}}{2} \qquad\Rightarrow\qquad \Gamma(1/2) = \sqrt{\pi} \]
Recall the difference relation for the Gamma function, $\Gamma(z+1) = z\Gamma(z)$.
\[ \Gamma(n+1/2) = (n - 1/2)\, \Gamma(n - 1/2) = \frac{2n-1}{2}\, \Gamma(n - 1/2) = \frac{(2n-3)(2n-1)}{2^2}\, \Gamma(n - 3/2) = \cdots = \frac{(1)(3)(5)\cdots(2n-1)}{2^n}\, \Gamma(1/2) \]
\[ \Gamma(n+1/2) = \frac{(1)(3)(5)\cdots(2n-1)}{2^n}\, \sqrt{\pi} \]
Solution 35.2
We make the change of variable $\xi = x^3$, $x = \xi^{1/3}$, $dx = \frac{1}{3} \xi^{-2/3} \, d\xi$.
\[ \int_0^\infty e^{-x^3} \, dx = \int_0^\infty e^{-\xi}\, \frac{1}{3}\, \xi^{-2/3} \, d\xi = \frac{1}{3}\, \Gamma\!\left( \frac{1}{3} \right) \]
Solution 35.3
\[ \int_0^\infty e^{-x} \sin(\log x) \, dx = \int_0^\infty e^{-x}\, \frac{1}{2i} \left( e^{i \log x} - e^{-i \log x} \right) dx = \frac{1}{2i} \int_0^\infty e^{-x} \left( x^i - x^{-i} \right) dx \]
\[ = \frac{1}{2i} \left( \Gamma(1+i) - \Gamma(1-i) \right) = \frac{1}{2i} \left( i\,\Gamma(i) + i\,\Gamma(-i) \right) = \frac{\Gamma(i) + \Gamma(-i)}{2} \]
Chapter 36

Bessel Functions

Ideas are angels. Implementations are a bitch.

36.1 Bessel's Equation

A commonly encountered differential equation in applied mathematics is Bessel's equation
\[ y'' + \frac{1}{z} y' + \left( 1 - \frac{\nu^2}{z^2} \right) y = 0. \]
For our purposes, we will consider $\nu \in \mathbb{R}^{0+}$. This equation arises when solving certain partial differential equations with the method of separation of variables in cylindrical coordinates. For this reason, the solutions of this equation are sometimes called cylindrical functions.

This equation cannot be solved directly. However, we can find series representations of the solutions. There is a regular singular point at $z = 0$, so the Frobenius method is applicable there. The point at infinity is an irregular singularity, so we will look for asymptotic series about that point. Additionally, we will use Laplace's method to find definite integral representations of the solutions.
Note that Bessel's equation depends only on $\nu^2$ and not $\nu$ alone. Thus if we find a solution, (which of course depends on this parameter), $y_\nu(z)$, we know that $y_{-\nu}(z)$ is also a solution. For this reason, we will consider $\nu \in \mathbb{R}^{0+}$. Whether or not $y_\nu(z)$ and $y_{-\nu}(z)$ are linearly independent, (distinct solutions), remains to be seen.

Example 36.1.1 Consider the differential equation
\[ y'' + \frac{1}{z} y' - \frac{\nu^2}{z^2} y = 0. \]
One solution is $y_\nu(z) = z^\nu$. Since the equation depends only on $\nu^2$, another solution is $y_{-\nu}(z) = z^{-\nu}$. For $\nu \neq 0$, these two solutions are linearly independent.

Now consider the differential equation
\[ y'' + \nu^2 y = 0. \]
One solution is $y_\nu(z) = \cos(\nu z)$. Therefore, another solution is $y_{-\nu}(z) = \cos(-\nu z) = \cos(\nu z)$. However, these two solutions are not linearly independent.
36.2 Frobenius Series Solution about z = 0

We note that $z = 0$ is a regular singular point, (the only singular point of Bessel's equation in the finite complex plane). We will use the Frobenius method at that point to analyze the solutions. We assume that $\nu \geq 0$.

The indicial equation is
\[ \alpha(\alpha - 1) + \alpha - \nu^2 = 0 \qquad\Rightarrow\qquad \alpha = \pm\nu. \]
If $\pm\nu$ do not differ by an integer, (that is, if $2\nu$ is not an integer), then there will be two series solutions of the Frobenius form.
\[ y_1(z) = z^\nu \sum_{k=0}^\infty a_k z^k, \qquad y_2(z) = z^{-\nu} \sum_{k=0}^\infty b_k z^k \]
If $2\nu$ is an integer, the second solution may or may not be in the Frobenius form. In any case, there will always be at least one solution in the Frobenius form. We will determine that series solution. $y(z)$ and its derivatives are
\[ y = \sum_{k=0}^\infty a_k z^{k+\nu}, \qquad y' = \sum_{k=0}^\infty (k+\nu)\, a_k z^{k+\nu-1}, \qquad y'' = \sum_{k=0}^\infty (k+\nu)(k+\nu-1)\, a_k z^{k+\nu-2}. \]
We substitute the Frobenius series into the differential equation.
\[ z^2 y'' + z y' + \left( z^2 - \nu^2 \right) y = 0 \]
\[ \sum_{k=0}^\infty (k+\nu)(k+\nu-1)\, a_k z^{k+\nu} + \sum_{k=0}^\infty (k+\nu)\, a_k z^{k+\nu} + \sum_{k=0}^\infty a_k z^{k+\nu+2} - \sum_{k=0}^\infty \nu^2 a_k z^{k+\nu} = 0 \]
\[ \sum_{k=0}^\infty \left( k^2 + 2k\nu \right) a_k z^k + \sum_{k=2}^\infty a_{k-2} z^k = 0 \]
We equate powers of $z$ to obtain equations that determine the coefficients. The coefficient of $z^0$ is the equation $0 \cdot a_0 = 0$. This corroborates that $a_0$ is arbitrary, (but non-zero). The coefficient of $z^1$ is the equation
\[ (1 + 2\nu)\, a_1 = 0 \qquad\Rightarrow\qquad a_1 = 0. \]
The coefficient of $z^k$ for $k \geq 2$ gives us
\[ \left( k^2 + 2k\nu \right) a_k + a_{k-2} = 0, \qquad a_k = -\frac{a_{k-2}}{k^2 + 2k\nu} = -\frac{a_{k-2}}{k(k + 2\nu)}. \]
From the recurrence relation we see that all the odd coefficients are zero, $a_{2k+1} = 0$. The even coefficients are
\[ a_{2k} = -\frac{a_{2k-2}}{4k(k+\nu)} = \frac{(-1)^k a_0}{2^{2k}\, k!\, (\nu+1)(\nu+2)\cdots(\nu+k)} = \frac{(-1)^k a_0\, \Gamma(\nu+1)}{2^{2k}\, k!\, \Gamma(k+\nu+1)}. \]
Thus we have the series solution
\[ y(z) = a_0\, \Gamma(\nu+1) \sum_{k=0}^\infty \frac{(-1)^k}{2^{2k}\, k!\, \Gamma(k+\nu+1)}\, z^{2k+\nu}. \]
$a_0$ is arbitrary. We choose $a_0 = \frac{1}{2^\nu\, \Gamma(\nu+1)}$. We call this solution the Bessel function of the first kind and order $\nu$ and denote it with $J_\nu(z)$.
\[ J_\nu(z) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\, \Gamma(k + \nu + 1)} \left( \frac{z}{2} \right)^{2k+\nu} \]
Recall that the Gamma function is non-zero and finite for all real arguments except non-positive integers: $\Gamma(x)$ has singularities at $x = 0, -1, -2, \ldots$. Therefore, $J_{-\nu}(z)$ is well-defined when $\nu$ is not a positive integer. Since $J_\nu(z) \sim z^\nu$ at $z = 0$, $J_{-\nu}(z)$ is clearly linearly independent of $J_\nu(z)$ for non-integer $\nu$. In particular we note that there are two solutions of the Frobenius form when $\nu$ is a half odd integer.
\[ J_{-\nu}(z) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\, \Gamma(k - \nu + 1)} \left( \frac{z}{2} \right)^{2k-\nu}, \qquad \text{for } \nu \notin \mathbb{Z}^+ \]
Of course, for $\nu = 0$, $J_\nu(z)$ and $J_{-\nu}(z)$ are identical. Consider the case that $\nu = n$ is a positive integer. Since $\Gamma(x) \to +\infty$ as $x \to 0, -1, -2, \ldots$, we see that the coefficients in the series for $J_{-n}(z)$ vanish for $k = 0, \ldots, n-1$.
\[ J_{-n}(z) = \sum_{k=n}^\infty \frac{(-1)^k}{k!\, \Gamma(k - n + 1)} \left( \frac{z}{2} \right)^{2k-n} = \sum_{k=0}^\infty \frac{(-1)^{k+n}}{(k+n)!\, \Gamma(k+1)} \left( \frac{z}{2} \right)^{2k+n} = (-1)^n \sum_{k=0}^\infty \frac{(-1)^k}{k!\, (k+n)!} \left( \frac{z}{2} \right)^{2k+n} = (-1)^n J_n(z) \]
Thus we see that $J_{-n}(z)$ and $J_n(z)$ are not linearly independent for integer $n$.
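The identity $J_{-n}(z) = (-1)^n J_n(z)$ can be checked directly from the truncated series. This is an illustrative sketch (names are not from the text); the guard on the Gamma argument encodes the fact that $1/\Gamma$ vanishes at the non-positive integers:

```python
# Truncated series for J_n with integer order n (possibly negative), treating
# 1/Gamma(k) as zero when k is a non-positive integer, as in the text.
import math

def J(n, z, terms=40):
    total = 0.0
    for m in range(terms):
        k = m + n + 1                 # Gamma argument in the denominator
        if k <= 0:
            continue                  # 1/Gamma(0), 1/Gamma(-1), ... = 0
        total += (-1.0) ** m / (math.factorial(m) * math.gamma(k)) \
                 * (z / 2.0) ** (2 * m + n)
    return total

z = 3.7
for n in (1, 2, 5):
    print(n, J(-n, z), (-1) ** n * J(n, z))   # the two columns agree
```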
36.2.1 Behavior at Infinity

With the change of variables $z = 1/t$, $w(z) = u(t)$, Bessel's equation becomes
\[ t^4 u'' + 2t^3 u' - t\left( t^2 u' \right) + \left( 1 - \nu^2 t^2 \right) u = 0, \]
\[ u'' + \frac{1}{t} u' + \left( \frac{1}{t^4} - \frac{\nu^2}{t^2} \right) u = 0. \]
The point $t = 0$, and hence the point $z = \infty$, is an irregular singular point. We will find the leading order asymptotic behavior of the solutions as $z \to +\infty$.
Controlling Factor. Starting with Bessel's equation for real argument,
\[ y'' + \frac{1}{x} y' + \left( 1 - \frac{\nu^2}{x^2} \right) y = 0, \]
we make the substitution $y = e^{s(x)}$ to obtain
\[ s'' + (s')^2 + \frac{1}{x} s' + 1 - \frac{\nu^2}{x^2} = 0. \]
We know that $\nu^2 / x^2 \ll 1$ as $x \to \infty$; we will assume that $s'' \ll (s')^2$ as $x \to \infty$. This gives us
\[ (s')^2 + \frac{1}{x} s' + 1 \sim 0 \qquad \text{as } x \to \infty. \]
To simplify the equation further, we will try the possible two-term balances.

1. $(s')^2 + \frac{1}{x} s' \sim 0 \;\Rightarrow\; s' \sim -\frac{1}{x}$. This balance is not consistent as it violates the assumption that $1$ is smaller than the other terms.
2. $(s')^2 + 1 \sim 0 \;\Rightarrow\; s' \sim \pm i$. This balance is consistent.
3. $\frac{1}{x} s' + 1 \sim 0 \;\Rightarrow\; s' \sim -x$. This balance is inconsistent as $(s')^2$ isn't smaller than the other terms.

Thus the only dominant balance is $s' \sim \pm i$. This balance is consistent with our initial assumption that $s'' \ll (s')^2$. Thus $s \sim \pm i x$ and the controlling factor is $e^{\pm i x}$.
Leading Order Behavior. In order to find the leading order behavior, we substitute $s = ix + t(x)$, where $t(x) \ll x$ as $x \to \infty$, into the differential equation for $s$. We first consider the case $s = ix + t(x)$. We assume that $t' \ll 1$ and $t'' \ll 1/x$.
\[ t'' + (i + t')^2 + \frac{1}{x} (i + t') + 1 - \frac{\nu^2}{x^2} = 0 \]
\[ t'' + 2i t' + (t')^2 + \frac{i}{x} + \frac{1}{x} t' - \frac{\nu^2}{x^2} = 0 \]
Using our assumptions about the behavior of $t'$ and $t''$,
\[ 2i t' + \frac{i}{x} \sim 0 \qquad\Rightarrow\qquad t' \sim -\frac{1}{2x} \qquad\Rightarrow\qquad t \sim -\frac{1}{2} \log x \quad \text{as } x \to \infty. \]
This asymptotic behavior is consistent with our assumptions.

Substituting $s = -ix + t(x)$ will also yield $t \sim -\frac{1}{2} \log x$. Thus the leading order behavior of the solutions is
\[ y \sim c\, e^{\pm i x - \frac{1}{2} \log x + u(x)} = c\, x^{-1/2} e^{\pm i x + u(x)} \quad \text{as } x \to \infty, \]
where $u(x) \ll \log x$ as $x \to \infty$.

By substituting $t = -\frac{1}{2} \log x + u(x)$ into the differential equation for $t$, you could show that $u(x) \to \text{const}$ as $x \to \infty$. Thus the full leading order behavior of the solutions is
\[ y \sim c\, x^{-1/2} e^{\pm i x + u(x)} \quad \text{as } x \to \infty, \]
where $u(x) \to 0$ as $x \to \infty$, (the constant having been absorbed into $c$). Writing this in terms of sines and cosines yields
\[ y_1 \sim x^{-1/2} \cos(x + u_1(x)), \qquad y_2 \sim x^{-1/2} \sin(x + u_2(x)), \quad \text{as } x \to \infty, \]
where $u_1, u_2 \to 0$ as $x \to \infty$.
Result 36.2.1 Bessel's equation for real argument is
\[ y'' + \frac{1}{x} y' + \left( 1 - \frac{\nu^2}{x^2} \right) y = 0. \]
If $\nu$ is not an integer then the solutions behave as linear combinations of
\[ y_1 = x^\nu \quad \text{and} \quad y_2 = x^{-\nu} \]
at $x = 0$. If $\nu$ is an integer, then the solutions behave as linear combinations of
\[ y_1 = x^\nu \quad \text{and} \quad y_2 = x^{-\nu} + c\, x^\nu \log x \]
at $x = 0$. The solutions are asymptotic to a linear combination of
\[ y_1 = x^{-1/2} \sin(x + u_1(x)) \quad \text{and} \quad y_2 = x^{-1/2} \cos(x + u_2(x)) \]
as $x \to +\infty$, where $u_1, u_2 \to 0$ as $x \to \infty$.
36.3 Bessel Functions of the First Kind

Consider the function $\exp\left( \frac{1}{2} z (t - 1/t) \right)$. We can expand this function in a Laurent series in powers of $t$,
\[ e^{\frac{1}{2} z (t - 1/t)} = \sum_{n=-\infty}^\infty J_n(z)\, t^n, \]
where the coefficient functions $J_n(z)$ are
\[ J_n(z) = \frac{1}{2\pi i} \oint \tau^{-n-1} e^{\frac{1}{2} z (\tau - 1/\tau)} \, d\tau. \]
Here the path of integration is any positive closed path around the origin. $\exp\left( \frac{1}{2} z (t - 1/t) \right)$ is the generating function for the Bessel functions of the first kind.
36.3.1 The Bessel Function Satisfies Bessel's Equation

We would like to expand $J_n(z)$ in powers of $z$. The first step in doing this is to make the substitution $\tau = 2t/z$.
\[ J_n(z) = \frac{1}{2\pi i} \oint \left( \frac{2t}{z} \right)^{-n-1} \exp\left[ \frac{1}{2} z \left( \frac{2t}{z} - \frac{z}{2t} \right) \right] \frac{2}{z} \, dt = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint t^{-n-1} e^{t - z^2/4t} \, dt \]
Differentiating the expression for $J_n(z)$,
\[ J_n'(z) = \frac{1}{2\pi i} \frac{n z^{n-1}}{2^n} \oint t^{-n-1} e^{t - z^2/4t} \, dt + \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint t^{-n-1} \left( -\frac{2z}{4t} \right) e^{t - z^2/4t} \, dt \]
\[ = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint \left( \frac{n}{z} - \frac{z}{2t} \right) t^{-n-1} e^{t - z^2/4t} \, dt \]
\[ J_n''(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint \left[ \frac{n}{z} \left( \frac{n}{z} - \frac{z}{2t} \right) + \left( -\frac{n}{z^2} - \frac{1}{2t} \right) - \frac{z}{2t} \left( \frac{n}{z} - \frac{z}{2t} \right) \right] t^{-n-1} e^{t - z^2/4t} \, dt \]
\[ = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint \left[ \frac{n^2}{z^2} - \frac{n}{2t} - \frac{n}{z^2} - \frac{1}{2t} - \frac{n}{2t} + \frac{z^2}{4t^2} \right] t^{-n-1} e^{t - z^2/4t} \, dt \]
\[ = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint \left[ \frac{n(n-1)}{z^2} - \frac{2n+1}{2t} + \frac{z^2}{4t^2} \right] t^{-n-1} e^{t - z^2/4t} \, dt. \]
Substituting $J_n(z)$ into Bessel's equation,
\[ J_n'' + \frac{1}{z} J_n' + \left( 1 - \frac{n^2}{z^2} \right) J_n = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint \left[ \left( \frac{n(n-1)}{z^2} - \frac{2n+1}{2t} + \frac{z^2}{4t^2} \right) + \left( \frac{n}{z^2} - \frac{1}{2t} \right) + \left( 1 - \frac{n^2}{z^2} \right) \right] t^{-n-1} e^{t - z^2/4t} \, dt \]
\[ = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint \left[ 1 - \frac{n+1}{t} + \frac{z^2}{4t^2} \right] t^{-n-1} e^{t - z^2/4t} \, dt = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint \frac{d}{dt} \left[ t^{-n-1} e^{t - z^2/4t} \right] dt. \]
Since $t^{-n-1} e^{t - z^2/4t}$ is analytic in $0 < |t| < \infty$ when $n$ is an integer, the integral vanishes.
\[ = 0. \]
Thus for integer $n$, $J_n(z)$ satisfies Bessel's equation.

$J_n(z)$ is called the Bessel function of the first kind. The subscript is the order. Thus $J_1(z)$ is a Bessel function of order $1$. $J_0(x)$ and $J_1(x)$ are plotted in the first graph in Figure 36.1. $J_5(x)$ is plotted in the second graph in Figure 36.1. Note that for non-negative integer $n$, $J_n(z)$ behaves as $z^n$ at $z = 0$.
36.3.2 Series Expansion of the Bessel Function

Expanding $\exp(-z^2/4t)$ in the integral expression for $J_n$,
\[ J_n(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint t^{-n-1} e^{t - z^2/4t} \, dt = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \oint t^{-n-1} e^t \left[ \sum_{m=0}^\infty \left( \frac{-z^2}{4t} \right)^m \frac{1}{m!} \right] dt \]

Figure 36.1: Plot of $J_0(x)$, $J_1(x)$ and $J_5(x)$.
For the path of integration, we are free to choose any contour that encloses the origin. Consider the circular path $|t| = 1$. Since the integral is uniformly convergent, we can interchange the order of integration and summation.
\[ J_n(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^n \sum_{m=0}^\infty \frac{(-1)^m z^{2m}}{2^{2m}\, m!} \oint t^{-n-m-1} e^t \, dt \]
If $n$ is a non-negative integer,
\[ \frac{1}{2\pi i} \oint t^{-n-m-1} e^t \, dt = \lim_{z \to 0} \left[ \frac{1}{(n+m)!} \frac{d^{n+m}}{dz^{n+m}} \left( e^z \right) \right] = \frac{1}{(n+m)!}. \]
Thus we have the series expansion
\[ J_n(z) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\, (n+m)!} \left( \frac{z}{2} \right)^{n+2m} \qquad \text{for } n \geq 0. \]
Now consider $J_{-n}(z)$, ($n$ positive).
\[ J_{-n}(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^{-n} \sum_{m=0}^\infty \frac{(-1)^m z^{2m}}{2^{2m}\, m!} \oint t^{n-m-1} e^t \, dt \]
For $m \geq n$, the integrand has a pole of order $m - n + 1$ at the origin.
\[ \frac{1}{2\pi i} \oint t^{n-m-1} e^t \, dt = \begin{cases} \dfrac{1}{(m-n)!} & \text{for } m \geq n \\[1ex] 0 & \text{for } m < n \end{cases} \]
The expression for $J_{-n}$ is then
\[ J_{-n}(z) = \sum_{m=n}^\infty \frac{(-1)^m}{m!\, (m-n)!} \left( \frac{z}{2} \right)^{-n+2m} = \sum_{m=0}^\infty \frac{(-1)^{m+n}}{(m+n)!\, m!} \left( \frac{z}{2} \right)^{n+2m} = (-1)^n J_n(z). \]
Thus we have that
\[ J_{-n}(z) = (-1)^n J_n(z) \qquad \text{for integer } n. \]
36.3.3 Bessel Functions of Non-Integral Order

The generalization of the factorial function is the Gamma function. For integer values of $n$, $n! = \Gamma(n+1)$. The Gamma function is defined for all complex-valued arguments. Thus one would guess that if the Bessel function of the first kind were defined for non-integer order, it would have the definition
\[ J_\nu(z) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\, \Gamma(\nu + m + 1)} \left( \frac{z}{2} \right)^{\nu + 2m}. \]

The Integrand for Non-Integral $\nu$. Recall the definition of the Bessel function
\[ J_\nu(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \oint t^{-\nu-1} e^{t - z^2/4t} \, dt. \]
When $\nu$ is an integer, the integrand is single-valued. Thus if you start at any point and follow any path around the origin, the integrand will return to its original value. This property was the key to $J_n$ satisfying Bessel's equation. If $\nu$ is not an integer, then this property does not hold for arbitrary paths around the origin.
A New Contour. First, since the integrand is multiple-valued, we need to define what branch of the function we are talking about. We will take the principal value of the integrand and introduce a branch cut on the negative real axis. Let $C$ be a contour that starts at $t = -\infty$ below the branch cut, circles the origin, and returns to the point $t = -\infty$ above the branch cut. This contour is shown in Figure 36.2.

Thus we define
\[ J_\nu(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \int_C t^{-\nu-1} e^{t - z^2/4t} \, dt. \]

Bessel's Equation. Substituting $J_\nu(z)$ into Bessel's equation yields
\[ J_\nu'' + \frac{1}{z} J_\nu' + \left( 1 - \frac{\nu^2}{z^2} \right) J_\nu = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \int_C \frac{d}{dt} \left[ t^{-\nu-1} e^{t - z^2/4t} \right] dt. \]
Since $t^{-\nu-1} e^{t - z^2/4t}$ is analytic in $0 < |t| < \infty$ with $|\arg(t)| < \pi$, and it vanishes at $t = -\infty$, the integral is zero. Thus the Bessel function of the first kind satisfies Bessel's equation for all complex orders.
Figure 36.2: The Contour of Integration.
Series Expansion. Because of the $e^t$ factor in the integrand, the integral defining $J_\nu$ converges uniformly. Expanding $e^{-z^2/4t}$ in a Taylor series yields
\[ J_\nu(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \sum_{m=0}^\infty \frac{(-1)^m z^{2m}}{2^{2m}\, m!} \int_C t^{-\nu-m-1} e^t \, dt \]
Since
\[ \frac{1}{\Gamma(\alpha)} = \frac{1}{2\pi i} \int_C t^{-\alpha} e^t \, dt, \]
we have the series expansion of the Bessel function
\[ J_\nu(z) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\, \Gamma(\nu + m + 1)} \left( \frac{z}{2} \right)^{\nu + 2m}. \]
Linear Independence. The Wronskian of Bessel's equation is
\[ W(z) = \exp\left( -\int^z \frac{1}{\zeta} \, d\zeta \right) = e^{-\log z} = \frac{1}{z}. \]
Thus to within a function of $\nu$, the Wronskian is $1/z$. For any given $\nu$, there are two linearly independent solutions. Note that Bessel's equation is unchanged under the transformation $\nu \to -\nu$. Thus both $J_\nu$ and $J_{-\nu}$ satisfy Bessel's equation. Now we must determine if they are linearly independent. We have already shown that for integer values of $\nu$ they are not independent, ($J_{-n} = (-1)^n J_n$). Assume that $\nu$ is not an integer. The Wronskian of $J_\nu$ and $J_{-\nu}$ is
\[ W[J_\nu, J_{-\nu}] = \begin{vmatrix} J_\nu & J_{-\nu} \\ J_\nu' & J_{-\nu}' \end{vmatrix} = J_\nu J_{-\nu}' - J_\nu' J_{-\nu}. \]
Substituting in the expansions for $J_{\pm\nu}$,
\[ = \left( \sum_{m=0}^\infty \frac{(-1)^m}{m!\, \Gamma(\nu+m+1)} \left( \frac{z}{2} \right)^{\nu+2m} \right) \left( \sum_{n=0}^\infty \frac{(-1)^n (-\nu+2n)}{n!\, \Gamma(-\nu+n+1)\, 2} \left( \frac{z}{2} \right)^{-\nu+2n-1} \right) \]
\[ \qquad - \left( \sum_{m=0}^\infty \frac{(-1)^m}{m!\, \Gamma(-\nu+m+1)} \left( \frac{z}{2} \right)^{-\nu+2m} \right) \left( \sum_{n=0}^\infty \frac{(-1)^n (\nu+2n)}{n!\, \Gamma(\nu+n+1)\, 2} \left( \frac{z}{2} \right)^{\nu+2n-1} \right). \]
Since the Wronskian is a function of $\nu$ times $1/z$, the coefficients of all of the powers of $z$ except $1/z$ must vanish.
\[ = -\frac{\nu}{z\, \Gamma(\nu+1)\, \Gamma(-\nu+1)} - \frac{\nu}{z\, \Gamma(\nu+1)\, \Gamma(-\nu+1)} = -\frac{2}{z\, \Gamma(\nu)\, \Gamma(1-\nu)} \]
Using an identity for the Gamma function simplifies this expression.
\[ = -\frac{2}{\pi z} \sin(\pi\nu) \]
Since the Wronskian is nonzero for non-integer $\nu$, $J_\nu$ and $J_{-\nu}$ are independent functions when $\nu$ is not an integer. The general solution to the equation is then $a J_\nu + b J_{-\nu}$.
36.3.4 Recursion Formulas

In showing that $J_\nu$ satisfies Bessel's equation for arbitrary complex $\nu$, we obtained
\[ \int_C \frac{d}{dt} \left[ t^{-\nu} e^{t - z^2/4t} \right] dt = 0. \]
Expanding the integral,
\[ \int_C \left[ -\nu t^{-\nu-1} + t^{-\nu} + \frac{z^2}{4}\, t^{-\nu-2} \right] e^{t - z^2/4t} \, dt = 0, \]
\[ \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \int_C \left[ -\nu t^{-\nu-1} + t^{-\nu} + \frac{z^2}{4}\, t^{-\nu-2} \right] e^{t - z^2/4t} \, dt = 0. \]
Since $J_\nu(z) = \frac{1}{2\pi i} (z/2)^\nu \int_C t^{-\nu-1} e^{t - z^2/4t} \, dt$,
\[ -\nu J_\nu + \left( \frac{2}{z} \right)^{-1} J_{\nu-1} + \frac{z^2}{4} \left( \frac{2}{z} \right) J_{\nu+1} = 0, \]
\[ J_{\nu-1} + J_{\nu+1} = \frac{2\nu}{z} J_\nu. \]
Differentiating the integral expression for $J_\nu$,
\[ J_\nu'(z) = \frac{1}{2\pi i} \frac{\nu z^{\nu-1}}{2^\nu} \int_C t^{-\nu-1} e^{t - z^2/4t} \, dt + \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \int_C t^{-\nu-1} \left( -\frac{z}{2t} \right) e^{t - z^2/4t} \, dt \]
\[ J_\nu'(z) = \frac{\nu}{z} \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \int_C t^{-\nu-1} e^{t - z^2/4t} \, dt - \frac{1}{2\pi i} \left( \frac{z}{2} \right)^{\nu+1} \int_C t^{-\nu-2} e^{t - z^2/4t} \, dt \]
\[ J_\nu' = \frac{\nu}{z} J_\nu - J_{\nu+1} \]
From the two relations we have derived you can show that
\[ J_\nu' = \frac{1}{2} \left( J_{\nu-1} - J_{\nu+1} \right) \quad \text{and} \quad J_\nu' = J_{\nu-1} - \frac{\nu}{z} J_\nu. \]
Result 36.3.1 The Bessel function of the first kind, $J_\nu(z)$, is defined
\[ J_\nu(z) = \frac{1}{2\pi i} \left( \frac{z}{2} \right)^\nu \int_C t^{-\nu-1} e^{t - z^2/4t} \, dt. \]
The Bessel function has the expansion
\[ J_\nu(z) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\, \Gamma(\nu + m + 1)} \left( \frac{z}{2} \right)^{\nu + 2m}. \]
The Wronskian of $J_\nu(z)$ and $J_{-\nu}(z)$ is
\[ W(z) = -\frac{2}{\pi z} \sin(\pi\nu). \]
Thus $J_\nu(z)$ and $J_{-\nu}(z)$ are independent when $\nu$ is not an integer. The Bessel functions satisfy the recursion relations
\[ J_{\nu-1} + J_{\nu+1} = \frac{2\nu}{z} J_\nu \qquad J_\nu' = \frac{\nu}{z} J_\nu - J_{\nu+1} \]
\[ J_\nu' = \frac{1}{2} \left( J_{\nu-1} - J_{\nu+1} \right) \qquad J_\nu' = J_{\nu-1} - \frac{\nu}{z} J_\nu. \]
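The first recursion relation holds for non-integer orders as well, which is easy to spot-check from the series (an illustrative sketch; the helper name and the choice $\nu = 1/3$ are assumptions of this sketch):

```python
# Numerical check of J_{v-1}(z) + J_{v+1}(z) = (2 v / z) J_v(z) for the
# non-integer order v = 1/3, using the series with math.gamma.
import math

def J(v, z, terms=40):
    return sum((-1.0) ** m / (math.factorial(m) * math.gamma(v + m + 1))
               * (z / 2.0) ** (2 * m + v) for m in range(terms))

v, z = 1.0 / 3.0, 1.5
lhs = J(v - 1.0, z) + J(v + 1.0, z)
rhs = (2.0 * v / z) * J(v, z)
print(lhs, rhs)
```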
36.3.5 Bessel Functions of Half-Integral Order

Consider $J_{1/2}(z)$. Start with the series expansion
\[ J_{1/2}(z) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\, \Gamma(1/2 + m + 1)} \left( \frac{z}{2} \right)^{1/2 + 2m}. \]
Use the identity $\Gamma(n + 1/2) = \frac{(1)(3)\cdots(2n-1)}{2^n} \sqrt{\pi}$.
\[ = \sum_{m=0}^\infty \frac{(-1)^m\, 2^{m+1}}{m!\, (1)(3)\cdots(2m+1)\, \sqrt{\pi}} \left( \frac{z}{2} \right)^{1/2 + 2m} = \sum_{m=0}^\infty \frac{(-1)^m\, 2^{m+1}}{(2)(4)\cdots(2m)\, (1)(3)\cdots(2m+1)\, \sqrt{\pi}} \left( \frac{1}{2} \right)^{1/2 + m} z^{1/2 + 2m} \]
\[ = \left( \frac{2}{\pi z} \right)^{1/2} \sum_{m=0}^\infty \frac{(-1)^m}{(2m+1)!}\, z^{2m+1} \]
We recognize the sum as the Taylor series expansion of $\sin z$.
\[ = \left( \frac{2}{\pi z} \right)^{1/2} \sin z \]
Using the recurrence relations,
\[ J_{\nu+1} = \frac{\nu}{z} J_\nu - J_\nu' \quad \text{and} \quad J_{\nu-1} = \frac{\nu}{z} J_\nu + J_\nu', \]
we can find $J_{n+1/2}$ for any integer $n$.
Example 36.3.1 To find $J_{3/2}(z)$,
\[ J_{3/2}(z) = \frac{1/2}{z} J_{1/2}(z) - J_{1/2}'(z) \]
\[ = \frac{1/2}{z} \left( \frac{2}{\pi} \right)^{1/2} z^{-1/2} \sin z - \left[ -\frac{1}{2} \left( \frac{2}{\pi} \right)^{1/2} z^{-3/2} \sin z + \left( \frac{2}{\pi} \right)^{1/2} z^{-1/2} \cos z \right] \]
\[ = 2^{-1/2} \pi^{-1/2} z^{-3/2} \sin z + 2^{-1/2} \pi^{-1/2} z^{-3/2} \sin z - 2^{1/2} \pi^{-1/2} z^{-1/2} \cos z \]
\[ = \left( \frac{2}{\pi} \right)^{1/2} z^{-3/2} \sin z - \left( \frac{2}{\pi} \right)^{1/2} z^{-1/2} \cos z = \left( \frac{2}{\pi} \right)^{1/2} \left( z^{-3/2} \sin z - z^{-1/2} \cos z \right). \]
You can show that
\[ J_{-1/2}(z) = \left( \frac{2}{\pi z} \right)^{1/2} \cos z. \]
Note that at a first glance it appears that $J_{3/2} \sim z^{-1/2}$ as $z \to 0$. However, if you expand the sine and cosine you will see that the $z^{-1/2}$ terms cancel and thus $J_{3/2}(z) \sim z^{3/2}$ as $z \to 0$, as we showed previously.

Recall that we showed the asymptotic behavior as $x \to +\infty$ of Bessel functions to be linear combinations of
\[ x^{-1/2} \sin(x + U_1(x)) \quad \text{and} \quad x^{-1/2} \cos(x + U_2(x)), \]
where $U_1, U_2 \to 0$ as $x \to +\infty$.
36.4 Neumann Expansions

Consider expanding an analytic function in a series of Bessel functions of the form
\[ f(z) = \sum_{n=0}^\infty a_n J_n(z). \]
If $f(z)$ is analytic in the disk $|z| \leq r$ then we can write
\[ f(z) = \frac{1}{2\pi i} \oint \frac{f(\zeta)}{\zeta - z} \, d\zeta, \]
where the path of integration is $|\zeta| = r$ and $|z| < r$. If we were able to expand the function $\frac{1}{\zeta - z}$ in a series of Bessel functions, then we could interchange the order of summation and integration to get a Bessel series expansion of $f(z)$.
The Expansion of $1/(\zeta - z)$. Assume that $\frac{1}{\zeta - z}$ has the uniformly convergent expansion
\[ \frac{1}{\zeta - z} = c_0(\zeta) J_0(z) + 2 \sum_{n=1}^\infty c_n(\zeta) J_n(z), \]
where each $c_n(\zeta)$ is analytic. Note that
\[ \left( \frac{\partial}{\partial \zeta} + \frac{\partial}{\partial z} \right) \frac{1}{\zeta - z} = -\frac{1}{(\zeta - z)^2} + \frac{1}{(\zeta - z)^2} = 0. \]
Thus we have
\[ \left( \frac{\partial}{\partial \zeta} + \frac{\partial}{\partial z} \right) \left[ c_0(\zeta) J_0(z) + 2 \sum_{n=1}^\infty c_n(\zeta) J_n(z) \right] = 0, \]
\[ \left[ c_0' J_0 + 2 \sum_{n=1}^\infty c_n' J_n \right] + \left[ c_0 J_0' + 2 \sum_{n=1}^\infty c_n J_n' \right] = 0. \]
Using the identity $2 J_n' = J_{n-1} - J_{n+1}$ and $J_0' = -J_1$,
\[ \left[ c_0' J_0 + 2 \sum_{n=1}^\infty c_n' J_n \right] + \left[ c_0 (-J_1) + \sum_{n=1}^\infty c_n \left( J_{n-1} - J_{n+1} \right) \right] = 0. \]
Collecting coefficients of $J_n$,
\[ \left( c_0' + c_1 \right) J_0 + \sum_{n=1}^\infty \left( 2 c_n' + c_{n+1} - c_{n-1} \right) J_n = 0. \]
Equating the coefficients of $J_n$, we see that the $c_n$ are given by the relations
\[ c_1 = -c_0', \qquad \text{and} \qquad c_{n+1} = c_{n-1} - 2 c_n'. \]
We can evaluate $c_0(\zeta)$. Setting $z = 0$,
\[ \frac{1}{\zeta} = c_0(\zeta) J_0(0) + 2 \sum_{n=1}^\infty c_n(\zeta) J_n(0) = c_0(\zeta). \]
Using the recurrence relations we can calculate the $c_n$'s. The first few are:
\[ c_1 = -\left( -\frac{1}{\zeta^2} \right) = \frac{1}{\zeta^2}, \qquad c_2 = \frac{1}{\zeta} - 2 \left( -\frac{2}{\zeta^3} \right) = \frac{1}{\zeta} + \frac{4}{\zeta^3}, \qquad c_3 = \frac{1}{\zeta^2} - 2 \left( -\frac{1}{\zeta^2} - \frac{12}{\zeta^4} \right) = \frac{3}{\zeta^2} + \frac{24}{\zeta^4}. \]
We see that $c_n$ is a polynomial of degree $n+1$ in $1/\zeta$. One can show that
\[ c_n(\zeta) = \begin{cases} \dfrac{2^{n-1} n!}{\zeta^{n+1}} \left[ 1 + \dfrac{\zeta^2}{2(2n-2)} + \dfrac{\zeta^4}{2 \cdot 4\, (2n-2)(2n-4)} + \cdots + \dfrac{\zeta^n}{2 \cdot 4 \cdots n\, (2n-2)(2n-4) \cdots (2n-n)} \right] & \text{for even } n \\[2ex] \dfrac{2^{n-1} n!}{\zeta^{n+1}} \left[ 1 + \dfrac{\zeta^2}{2(2n-2)} + \dfrac{\zeta^4}{2 \cdot 4\, (2n-2)(2n-4)} + \cdots + \dfrac{\zeta^{n-1}}{2 \cdot 4 \cdots (n-1)\, (2n-2)(2n-4) \cdots (2n-(n-1))} \right] & \text{for odd } n \end{cases} \]
Uniform Convergence of the Series. We assumed before that the series expansion of $\frac{1}{\zeta - z}$ is uniformly convergent. The behavior of $c_n$ and $J_n$ is
\[ c_n(\zeta) = \frac{2^{n-1} n!}{\zeta^{n+1}} + O\left( \zeta^{-n} \right), \qquad J_n(z) = \frac{z^n}{2^n\, n!} + O\left( z^{n+1} \right). \]
This gives us
\[ c_n(\zeta) J_n(z) = \frac{1}{2} \left( \frac{z}{\zeta} \right)^n + O\left( \frac{1}{\zeta} \left( \frac{z}{\zeta} \right)^{n+1} \right). \]
If $\left| \frac{z}{\zeta} \right| = \rho < 1$ we can bound the series with the geometric series $\sum \rho^n$. Thus the series is uniformly convergent.
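The recurrence $c_1 = -c_0'$, $c_{n+1} = c_{n-1} - 2c_n'$ is easy to mechanize, since each $c_n$ is a polynomial in $1/\zeta$. The sketch below (illustrative names; it assumes $|z/\zeta| < 1$) regenerates the coefficients and checks the expansion of $1/(\zeta - z)$ numerically:

```python
# Check 1/(zeta - z) = c_0(zeta) J_0(z) + 2 sum_{n>=1} c_n(zeta) J_n(z),
# generating the c_n from c_0 = 1/zeta via c_1 = -c_0', c_{n+1} = c_{n-1} - 2 c_n'.
# Each c_n is stored as a dict {k: a_k} representing sum_k a_k zeta^{-k}.
import math

def J(n, z, terms=40):
    return sum((-1.0) ** m / (math.factorial(m) * math.factorial(m + n))
               * (z / 2.0) ** (2 * m + n) for m in range(terms))

def c_polys(N):
    def deriv(p):      # d/dzeta of sum a_k zeta^{-k} -> sum (-k a_k) zeta^{-k-1}
        return {k + 1: -k * a for k, a in p.items()}
    def sub2(p, q):    # p - 2 q
        keys = set(p) | set(q)
        return {k: p.get(k, 0) - 2 * q.get(k, 0) for k in keys}
    polys = [{1: 1}]                                          # c_0 = 1/zeta
    polys.append({k: -a for k, a in deriv(polys[0]).items()}) # c_1 = -c_0'
    for n in range(1, N):
        polys.append(sub2(polys[n - 1], deriv(polys[n])))     # c_{n+1}
    return polys

def c_eval(p, zeta):
    return sum(a * zeta ** (-k) for k, a in p.items())

zeta, z = 3.0, 1.0
polys = c_polys(25)
total = c_eval(polys[0], zeta) * J(0, z) + 2.0 * sum(
    c_eval(polys[n], zeta) * J(n, z) for n in range(1, 25))
print(total, 1.0 / (zeta - z))
```

Since $c_n J_n \approx \tfrac12 (z/\zeta)^n$, twenty-five terms at $z/\zeta = 1/3$ are more than enough here.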
Neumann Expansion of an Analytic Function. Let $f(z)$ be a function that is analytic in the disk $|z| \leq r$. Consider $|z| < r$ and the path of integration along $|\zeta| = r$. Cauchy's integral formula tells us that
\[ f(z) = \frac{1}{2\pi i} \oint \frac{f(\zeta)}{\zeta - z} \, d\zeta. \]
Substituting the expansion for $\frac{1}{\zeta - z}$,
\[ = \frac{1}{2\pi i} \oint f(\zeta) \left[ c_0(\zeta) J_0(z) + 2 \sum_{n=1}^\infty c_n(\zeta) J_n(z) \right] d\zeta \]
\[ = J_0(z)\, \frac{1}{2\pi i} \oint \frac{f(\zeta)}{\zeta} \, d\zeta + \sum_{n=1}^\infty \frac{J_n(z)}{\pi i} \oint c_n(\zeta) f(\zeta) \, d\zeta \]
\[ = J_0(z)\, f(0) + \sum_{n=1}^\infty \frac{J_n(z)}{\pi i} \oint c_n(\zeta) f(\zeta) \, d\zeta. \]
Result 36.4.1 Let $f(z)$ be analytic in the disk $|z| \leq r$. Consider $|z| < r$ and the path of integration along $|\zeta| = r$. $f(z)$ has the Bessel function series expansion
\[ f(z) = J_0(z)\, f(0) + \sum_{n=1}^\infty \frac{J_n(z)}{\pi i} \oint c_n(\zeta) f(\zeta) \, d\zeta, \]
where the $c_n$ satisfy
\[ \frac{1}{\zeta - z} = c_0(\zeta) J_0(z) + 2 \sum_{n=1}^\infty c_n(\zeta) J_n(z). \]
36.5 Bessel Functions of the Second Kind

When $\nu$ is an integer, $J_\nu$ and $J_{-\nu}$ are not linearly independent. In order to find a second linearly independent solution, we define the Bessel function of the second kind, (also called Weber's function),
\[ Y_\nu(z) = \begin{cases} \dfrac{J_\nu(z) \cos(\nu\pi) - J_{-\nu}(z)}{\sin(\nu\pi)} & \text{when } \nu \text{ is not an integer,} \\[2ex] \lim\limits_{\mu \to \nu} \dfrac{J_\mu(z) \cos(\mu\pi) - J_{-\mu}(z)}{\sin(\mu\pi)} & \text{when } \nu \text{ is an integer.} \end{cases} \]
$J_\nu$ and $Y_\nu$ are linearly independent for all $\nu$.

In Figure 36.3, $Y_0$ and $Y_1$ are plotted in solid and dashed lines, respectively.

Figure 36.3: Bessel Functions of the Second Kind
Result 36.5.1 The Bessel function of the second kind, $Y_\nu(z)$, is defined
\[ Y_\nu(z) = \begin{cases} \dfrac{J_\nu(z) \cos(\nu\pi) - J_{-\nu}(z)}{\sin(\nu\pi)} & \text{when } \nu \text{ is not an integer,} \\[2ex] \lim\limits_{\mu \to \nu} \dfrac{J_\mu(z) \cos(\mu\pi) - J_{-\mu}(z)}{\sin(\mu\pi)} & \text{when } \nu \text{ is an integer.} \end{cases} \]
The Wronskian of $J_\nu(z)$ and $Y_\nu(z)$ is
\[ W[J_\nu, Y_\nu] = \frac{2}{\pi z}. \]
Thus $J_\nu(z)$ and $Y_\nu(z)$ are independent for all $\nu$. The Bessel functions of the second kind satisfy the recursion relations
\[ Y_{\nu-1} + Y_{\nu+1} = \frac{2\nu}{z} Y_\nu \qquad Y_\nu' = \frac{\nu}{z} Y_\nu - Y_{\nu+1} \]
\[ Y_\nu' = \frac{1}{2} \left( Y_{\nu-1} - Y_{\nu+1} \right) \qquad Y_\nu' = Y_{\nu-1} - \frac{\nu}{z} Y_\nu. \]
36.6 Hankel Functions

Another set of solutions to Bessel's equation is the Hankel functions,
\[ H_\nu^{(1)}(z) = J_\nu(z) + i\, Y_\nu(z), \qquad H_\nu^{(2)}(z) = J_\nu(z) - i\, Y_\nu(z). \]

Result 36.6.1 The Hankel functions are defined
\[ H_\nu^{(1)}(z) = J_\nu(z) + i\, Y_\nu(z), \qquad H_\nu^{(2)}(z) = J_\nu(z) - i\, Y_\nu(z). \]
The Wronskian of $H_\nu^{(1)}(z)$ and $H_\nu^{(2)}(z)$ is
\[ W\!\left[ H_\nu^{(1)}, H_\nu^{(2)} \right] = -\frac{4i}{\pi z}. \]
The Hankel functions are independent for all $\nu$. The Hankel functions satisfy the same recurrence relations as the other Bessel functions.
36.7 The Modified Bessel Equation

The modified Bessel equation is
\[ w'' + \frac{1}{z} w' - \left( 1 + \frac{\nu^2}{z^2} \right) w = 0. \]
This equation is identical to the Bessel equation except for a sign change in the last term. If we make the change of variables $\xi = iz$, $u(\xi) = w(z)$, we obtain the equation
\[ -u'' - \frac{1}{\xi} u' - \left( 1 - \frac{\nu^2}{\xi^2} \right) u = 0, \]
\[ u'' + \frac{1}{\xi} u' + \left( 1 - \frac{\nu^2}{\xi^2} \right) u = 0. \]
This is the Bessel equation. Thus $J_\nu(iz)$ is a solution to the modified Bessel equation. This motivates us to define the modified Bessel function of the first kind
\[ I_\nu(z) = i^{-\nu} J_\nu(iz). \]
Since $J_\nu$ and $J_{-\nu}$ are linearly independent solutions when $\nu$ is not an integer, $I_\nu$ and $I_{-\nu}$ are linearly independent solutions to the modified Bessel equation when $\nu$ is not an integer.

The Taylor series expansion of $I_\nu(z)$ about $z = 0$ is
\[ I_\nu(z) = i^{-\nu} J_\nu(iz) = i^{-\nu} \sum_{m=0}^\infty \frac{(-1)^m}{m!\, \Gamma(\nu + m + 1)} \left( \frac{iz}{2} \right)^{\nu + 2m} = i^{-\nu} \sum_{m=0}^\infty \frac{(-1)^m\, i^\nu\, i^{2m}}{m!\, \Gamma(\nu + m + 1)} \left( \frac{z}{2} \right)^{\nu + 2m} = \sum_{m=0}^\infty \frac{1}{m!\, \Gamma(\nu + m + 1)} \left( \frac{z}{2} \right)^{\nu + 2m}. \]
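The relation $I_\nu(z) = i^{-\nu} J_\nu(iz)$ can be checked with complex arithmetic for integer orders (an illustrative sketch; function names are assumptions of this sketch):

```python
# Check I_v(z) = i^{-v} J_v(i z) for integer v, comparing the all-positive
# series for I_v against the J series evaluated at an imaginary argument.
import math

def J_series(n, z, terms=40):        # works for complex z, integer n >= 0
    return sum((-1.0) ** m / (math.factorial(m) * math.factorial(m + n))
               * (z / 2.0) ** (2 * m + n) for m in range(terms))

def I_series(n, x, terms=40):
    return sum(1.0 / (math.factorial(m) * math.factorial(m + n))
               * (x / 2.0) ** (2 * m + n) for m in range(terms))

x = 1.7
for n in (0, 1, 3):
    rhs = (1j) ** (-n) * J_series(n, 1j * x)
    print(n, I_series(n, x), rhs)    # rhs is real up to roundoff
```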
Modified Bessel Functions of the Second Kind. In order to have a second linearly independent solution when $\nu$ is an integer, we define the modified Bessel function of the second kind
\[ K_\nu(z) = \begin{cases} \dfrac{\pi}{2}\, \dfrac{I_{-\nu} - I_\nu}{\sin(\nu\pi)} & \text{when } \nu \text{ is not an integer,} \\[2ex] \lim\limits_{\mu \to \nu} \dfrac{\pi}{2}\, \dfrac{I_{-\mu} - I_\mu}{\sin(\mu\pi)} & \text{when } \nu \text{ is an integer.} \end{cases} \]
$I_\nu$ and $K_\nu$ are linearly independent for all $\nu$. In Figure 36.4, $I_0$ and $K_0$ are plotted in solid and dashed lines, respectively.

Figure 36.4: Modified Bessel Functions
Result 36.7.1 The modified Bessel functions of the first and second kind, $I_\nu(z)$ and $K_\nu(z)$, are defined
\[ I_\nu(z) = i^{-\nu} J_\nu(iz), \]
\[ K_\nu(z) = \begin{cases} \dfrac{\pi}{2}\, \dfrac{I_{-\nu} - I_\nu}{\sin(\nu\pi)} & \text{when } \nu \text{ is not an integer,} \\[2ex] \lim\limits_{\mu \to \nu} \dfrac{\pi}{2}\, \dfrac{I_{-\mu} - I_\mu}{\sin(\mu\pi)} & \text{when } \nu \text{ is an integer.} \end{cases} \]
The modified Bessel function of the first kind has the expansion
\[ I_\nu(z) = \sum_{m=0}^\infty \frac{1}{m!\, \Gamma(\nu + m + 1)} \left( \frac{z}{2} \right)^{\nu + 2m}. \]
The Wronskian of $I_\nu(z)$ and $I_{-\nu}(z)$ is
\[ W[I_\nu, I_{-\nu}] = -\frac{2}{\pi z} \sin(\pi\nu). \]
$I_\nu(z)$ and $I_{-\nu}(z)$ are linearly independent when $\nu$ is not an integer. The Wronskian of $I_\nu(z)$ and $K_\nu(z)$ is
\[ W[I_\nu, K_\nu] = -\frac{1}{z}. \]
$I_\nu(z)$ and $K_\nu(z)$ are independent for all $\nu$. The modified Bessel functions satisfy the recursion relations
\[ A_{\nu-1} - A_{\nu+1} = \frac{2\nu}{z} A_\nu \qquad A_\nu' = A_{\nu+1} + \frac{\nu}{z} A_\nu \]
\[ A_\nu' = \frac{1}{2} \left( A_{\nu-1} + A_{\nu+1} \right) \qquad A_\nu' = A_{\nu-1} - \frac{\nu}{z} A_\nu, \]
where $A$ stands for either $I$ or $K$.
36.8 Exercises

Exercise 36.1
Consider Bessel's equation
\[ z^2 y''(z) + z y'(z) + \left( z^2 - \nu^2 \right) y = 0, \]
where $\nu \geq 0$. Find the Frobenius series solution that is asymptotic to $z^\nu$ as $z \to 0$. By multiplying this solution by a constant, define the solution
\[ J_\nu(z) = \sum_{k=0}^\infty \frac{(-1)^k}{k!\, \Gamma(k + \nu + 1)} \left( \frac{z}{2} \right)^{2k+\nu}. \]
This is called the Bessel function of the first kind and order $\nu$. Clearly $J_{-\nu}(z)$ is defined and is linearly independent of $J_\nu(z)$ if $\nu$ is not an integer. What happens when $\nu$ is an integer?
Exercise 36.2
Consider Bessel's equation for integer $n$,
\[ z^2 y'' + z y' + \left( z^2 - n^2 \right) y = 0. \]
Using the kernel
\[ K(z, t) = e^{\frac{1}{2} z \left( t - \frac{1}{t} \right)}, \]
find two solutions of Bessel's equation. (For $n = 0$ you will find only one solution.) Are the two solutions linearly independent? Define the Bessel function of the first kind and order $n$,
\[ J_n(z) = \frac{1}{2\pi i} \int_C t^{-n-1} e^{\frac{1}{2} z (t - 1/t)} \, dt, \]
where $C$ is a simple, closed contour about the origin. Verify that
\[ e^{\frac{1}{2} z (t - 1/t)} = \sum_{n=-\infty}^\infty J_n(z)\, t^n. \]
This is the generating function for the Bessel functions.
Exercise 36.3
Use the generating function
\[ e^{\frac{1}{2} z (t - 1/t)} = \sum_{n=-\infty}^\infty J_n(z)\, t^n \]
to show that $J_n$ satisfies Bessel's equation
\[ z^2 y'' + z y' + \left( z^2 - n^2 \right) y = 0. \]
Exercise 36.4
Using
\[ J_{n-1} + J_{n+1} = \frac{2n}{z} J_n \quad \text{and} \quad J_n' = \frac{n}{z} J_n - J_{n+1}, \]
show that
\[ J_n' = \frac{1}{2} \left( J_{n-1} - J_{n+1} \right) \quad \text{and} \quad J_n' = J_{n-1} - \frac{n}{z} J_n. \]
Exercise 36.5
Find the general solution of
\[ w'' + \frac{1}{z} w' + \left( 1 - \frac{1}{4z^2} \right) w = z. \]
Exercise 36.6
Show that $J_\nu(z)$ and $Y_\nu(z)$ are linearly independent for all $\nu$.

Exercise 36.7
Compute $W[I_\nu, I_{-\nu}]$ and $W[I_\nu, K_\nu]$.
Exercise 36.8
Using the generating function,

    \exp\left[\frac{z}{2}\left(t - \frac{1}{t}\right)\right] = \sum_{n=-\infty}^{+\infty} J_n(z) t^n,

verify the following identities:

1.
    \frac{2n}{z} J_n(z) = J_{n-1}(z) + J_{n+1}(z).
This relation is useful for recursively computing the values of the higher order Bessel functions.

2.
    J_n'(z) = \frac{1}{2}\left(J_{n-1} - J_{n+1}\right).
This relation is useful for computing the derivatives of the Bessel functions once you have the values of the Bessel functions of adjacent order.

3.
    \frac{d}{dz}\left[z^{-n} J_n(z)\right] = -z^{-n} J_{n+1}(z).
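The first identity can be used exactly as the remark suggests. A minimal sketch (our code, with our helper names; note that the upward recurrence is numerically stable only while the order stays below $z$, and for $n \gg z$ one recurses downward instead):

```python
import math

def J_series(n, z, terms=40):
    # Reference values from the power series for J_n.
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n)) * (z / 2) ** (2 * k + n)
               for k in range(terms))

def J_up(nmax, z):
    # Upward recurrence J_{n+1} = (2n/z) J_n - J_{n-1}, seeded with J_0 and J_1.
    vals = [J_series(0, z), J_series(1, z)]
    for n in range(1, nmax):
        vals.append(2 * n / z * vals[n] - vals[n - 1])
    return vals

z = 10.0
vals = J_up(5, z)
print(abs(vals[4] - J_series(4, z)))  # small
```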
Exercise 36.9
Use the Wronskian of $J_\nu(z)$ and $J_{-\nu}(z)$,

    W[J_\nu(z), J_{-\nu}(z)] = -\frac{2 \sin(\pi\nu)}{\pi z},

to derive the identity

    J_{\nu+1}(z) J_{-\nu}(z) + J_\nu(z) J_{-\nu-1}(z) = -\frac{2}{\pi z} \sin(\pi\nu).
Exercise 36.10
Show that, using the generating function or otherwise,

    J_0(z) + 2 J_2(z) + 2 J_4(z) + 2 J_6(z) + \cdots = 1
    J_0(z) - 2 J_2(z) + 2 J_4(z) - 2 J_6(z) + \cdots = \cos z
    2 J_1(z) - 2 J_3(z) + 2 J_5(z) - \cdots = \sin z
    J_0^2(z) + 2 J_1^2(z) + 2 J_2^2(z) + 2 J_3^2(z) + \cdots = 1
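All four sums can be spot-checked numerically before proving them. The following sketch (ours, using the power series for $J_n$ with integer order) verifies the first and the last identity at one sample point:

```python
import math

def J(n, z, terms=50):
    # Power series for J_n, integer order n >= 0.
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n)) * (z / 2) ** (2 * k + n)
               for k in range(terms))

z = 3.0
s1 = J(0, z) + 2 * sum(J(2 * n, z) for n in range(1, 25))      # should be 1
s2 = J(0, z) ** 2 + 2 * sum(J(n, z) ** 2 for n in range(1, 50))  # should be 1
print(s1, s2)
```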
Exercise 36.11
It is often possible to solve certain ordinary differential equations by converting them into the Bessel equation by means of various transformations. For example, show that the solution of

    y'' + x^{p-2} y = 0

can be written in terms of Bessel functions:

    y(x) = c_1 x^{1/2} J_{1/p}\left(\frac{2}{p} x^{p/2}\right) + c_2 x^{1/2} Y_{1/p}\left(\frac{2}{p} x^{p/2}\right).

Here $c_1$ and $c_2$ are arbitrary constants. Thus show that the Airy equation,

    y'' + x y = 0,

can be solved in terms of Bessel functions.
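A numerical spot-check of the Airy claim (our sketch; the fractional-order series uses the Gamma function, and the helper names are ours): the function $y(x) = \sqrt{x}\, J_{1/3}\!\left(\frac{2}{3} x^{3/2}\right)$ should satisfy $y'' + x y = 0$, which we test with a central finite difference.

```python
import math

def J(nu, z, terms=60):
    # J_nu(z) = sum_k (-1)^k / (k! Gamma(k+nu+1)) (z/2)^(2k+nu)
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + nu + 1)) * (z / 2) ** (2 * k + nu)
               for k in range(terms))

def y(x):
    # Claimed Airy-equation solution: y = sqrt(x) J_{1/3}((2/3) x^{3/2})
    return math.sqrt(x) * J(1.0 / 3.0, 2.0 / 3.0 * x ** 1.5)

# Check y'' + x y = 0 at x = 1 with a central difference for y''.
x, h = 1.0, 1e-3
ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
print(abs(ypp + x * y(x)))  # O(h^2) residual
```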
Exercise 36.12
The spherical Bessel functions are defined by

    j_n(z) = \sqrt{\frac{\pi}{2z}} J_{n+1/2}(z),    y_n(z) = \sqrt{\frac{\pi}{2z}} Y_{n+1/2}(z),
    k_n(z) = \sqrt{\frac{\pi}{2z}} K_{n+1/2}(z),    i_n(z) = \sqrt{\frac{\pi}{2z}} I_{n+1/2}(z).

Show that

    j_1(z) = \frac{\sin z}{z^2} - \frac{\cos z}{z},
    i_0(z) = \frac{\sinh z}{z},
    k_0(z) = \frac{\pi}{2z} \exp(-z).
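The first closed form can be verified directly from the definition (an illustrative sketch, ours; the series helper `J` with fractional order is an assumption of the sketch, not part of the text):

```python
import math

def J(nu, z, terms=60):
    # Power series for J_nu (fractional order via the Gamma function).
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + nu + 1)) * (z / 2) ** (2 * k + nu)
               for k in range(terms))

def j1(z):
    # Definition: j_1(z) = sqrt(pi/(2z)) J_{3/2}(z)
    return math.sqrt(math.pi / (2 * z)) * J(1.5, z)

z = 2.3
closed = math.sin(z) / z ** 2 - math.cos(z) / z
print(abs(j1(z) - closed))  # rounding error only
```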
Exercise 36.13
Show that, as $x \to \infty$,

    K_n(x) \sim \frac{e^{-x}}{\sqrt{x}} \left[1 + \frac{4n^2 - 1}{8x} + \frac{(4n^2 - 1)(4n^2 - 9)}{128 x^2} + \cdots\right].
36.9 Hints
Hint 36.2

Hint 36.3

Hint 36.4
Use the generating function

    e^{\frac{1}{2} z (t - 1/t)} = \sum_{n=-\infty}^{\infty} J_n(z) t^n

to show that $J_n$ satisfies Bessel's equation

    z^2 y'' + z y' + (z^2 - n^2) y = 0.

Hint 36.6
Use variation of parameters and the Wronskian that was derived in the text.

Hint 36.7
Compute the Wronskian of $J_\nu(z)$ and $Y_\nu(z)$. Use the relation

    W[J_\nu, J_{-\nu}] = -\frac{2}{\pi z} \sin(\pi\nu).

Hint 36.8
Derive $W[I_\nu, I_{-\nu}]$ from the value of $W[J_\nu, J_{-\nu}]$. Derive $W[I_\nu, K_\nu]$ from the value of $W[I_\nu, I_{-\nu}]$.

Hint 36.9

Hint 36.10

Hint 36.11

Hint 36.12

Hint 36.13

Hint 36.14
36.10 Solutions
Solution 36.1
Bessel's equation is

    L[y] \equiv z^2 y'' + z y' + (z^2 - n^2) y = 0.

We consider a solution of the form

    y(z) = \int_C e^{\frac{1}{2} z (t - 1/t)} v(t) \, dt.

We substitute the form of the solution into Bessel's equation.

    \int_C L\left[e^{\frac{1}{2} z (t - 1/t)}\right] v(t) \, dt = 0
    \int_C \left[\frac{z^2}{4}\left(t + \frac{1}{t}\right)^2 + \frac{z}{2}\left(t - \frac{1}{t}\right) - n^2\right] e^{\frac{1}{2} z (t - 1/t)} v(t) \, dt = 0    (36.1)

By considering

    \frac{d}{dt}\left[t \, e^{\frac{1}{2} z (t - 1/t)}\right] = \left[\frac{z}{2}\left(t + \frac{1}{t}\right) + 1\right] e^{\frac{1}{2} z (t - 1/t)}
    \frac{d^2}{dt^2}\left[t^2 e^{\frac{1}{2} z (t - 1/t)}\right] = \left[\frac{z^2}{4}\left(t + \frac{1}{t}\right)^2 + z\left(2t + \frac{1}{t}\right) + 2\right] e^{\frac{1}{2} z (t - 1/t)}

we see that

    L\left[e^{\frac{1}{2} z (t - 1/t)}\right] = \left[\frac{d^2}{dt^2} t^2 - 3 \frac{d}{dt} t + (1 - n^2)\right] e^{\frac{1}{2} z (t - 1/t)}.

Thus Equation 36.1 becomes

    \int_C \left[\frac{d^2}{dt^2}\left(t^2 e^{\frac{1}{2} z (t - 1/t)}\right) - 3 \frac{d}{dt}\left(t \, e^{\frac{1}{2} z (t - 1/t)}\right) + (1 - n^2) e^{\frac{1}{2} z (t - 1/t)}\right] v(t) \, dt = 0.

We apply integration by parts to move derivatives from the kernel to $v(t)$.

    \left[\frac{d}{dt}\left(t^2 e^{\frac{1}{2} z (t-1/t)}\right) v(t)\right]_C - \left[t^2 e^{\frac{1}{2} z (t-1/t)} v'(t)\right]_C - \left[3 t \, e^{\frac{1}{2} z (t-1/t)} v(t)\right]_C + \int_C e^{\frac{1}{2} z (t-1/t)} \left[t^2 v''(t) + 3 t v'(t) + (1 - n^2) v(t)\right] dt = 0
    \left[e^{\frac{1}{2} z (t-1/t)} \left(\left(\frac{z}{2}(t^2+1) - t\right) v(t) - t^2 v'(t)\right)\right]_C + \int_C e^{\frac{1}{2} z (t-1/t)} \left[t^2 v''(t) + 3 t v'(t) + (1 - n^2) v(t)\right] dt = 0

In order that the integral vanish, $v(t)$ must be a solution of the differential equation

    t^2 v'' + 3 t v' + (1 - n^2) v = 0.

This is an Euler equation with the solutions $t^{n-1}$, $t^{-n-1}$ for non-zero $n$ and $t^{-1}$, $t^{-1} \log t$ for $n = 0$.

Consider the case of non-zero $n$. Since the boundary term

    e^{\frac{1}{2} z (t-1/t)} \left[\left(\frac{z}{2}(t^2+1) - t\right) v(t) - t^2 v'(t)\right]

is single-valued and analytic for $t \neq 0$ for the functions $v(t) = t^{n-1}$ and $v(t) = t^{-n-1}$, the boundary term will vanish if $C$ is any closed contour that does not pass through the origin. Note that the integrand in our solution,

    e^{\frac{1}{2} z (t-1/t)} v(t),

is analytic and single-valued except at the origin and infinity, where it has essential singularities. Consider a simple closed contour that does not enclose the origin. The integral along such a path would vanish and give us $y(z) = 0$. This is not an interesting solution. Since

    e^{\frac{1}{2} z (t-1/t)} v(t)

has non-zero residues for $v(t) = t^{n-1}$ and $v(t) = t^{-n-1}$, choosing any simple, positive, closed contour about the origin will give us a non-trivial solution of Bessel's equation. These solutions are

    y_1(z) = \int_C t^{n-1} e^{\frac{1}{2} z (t-1/t)} \, dt,    y_2(z) = \int_C t^{-n-1} e^{\frac{1}{2} z (t-1/t)} \, dt.

Now consider the case $n = 0$. The two solutions above coincide and we have the solution

    y(z) = \int_C t^{-1} e^{\frac{1}{2} z (t-1/t)} \, dt.

Choosing $v(t) = t^{-1} \log t$ would make both the boundary terms and the integrand multi-valued. We do not pursue the possibility of a solution of this form.

The solutions $y_1(z)$ and $y_2(z)$ are not linearly independent. To demonstrate this we make the change of variables $t \to -1/t$ in the integral representation of $y_1(z)$.

    y_1(z) = \int_C t^{n-1} e^{\frac{1}{2} z (t-1/t)} \, dt
           = \int_C \left(-\frac{1}{t}\right)^{n-1} e^{\frac{1}{2} z (-1/t + t)} \frac{1}{t^2} \, dt
           = \int_C (-1)^n t^{-n-1} e^{\frac{1}{2} z (t-1/t)} \, dt
           = (-1)^n y_2(z)

Thus we see that a solution of Bessel's equation for integer $n$ is

    y(z) = \int_C t^{-n-1} e^{\frac{1}{2} z (t-1/t)} \, dt,

where $C$ is any simple, closed contour about the origin.

Therefore, the Bessel function of the first kind and order $n$,

    J_n(z) = \frac{1}{i 2\pi} \int_C t^{-n-1} e^{\frac{1}{2} z (t-1/t)} \, dt,

is a solution of Bessel's equation for integer $n$. Note that $J_n(z)$ is the coefficient of $t^n$ in the Laurent series of $e^{\frac{1}{2} z (t-1/t)}$. This establishes the generating function for the Bessel functions.

    e^{\frac{1}{2} z (t-1/t)} = \sum_{n=-\infty}^{\infty} J_n(z) t^n
Solution 36.2
The generating function is

    e^{\frac{z}{2}(t - 1/t)} = \sum_{n=-\infty}^{\infty} J_n(z) t^n.

In order to show that $J_n$ satisfies Bessel's equation we seek to show that

    \sum_{n=-\infty}^{\infty} \left[z^2 J_n''(z) + z J_n'(z) + (z^2 - n^2) J_n(z)\right] t^n = 0.

To get the appropriate terms in the sum we will differentiate the generating function with respect to $z$ and $t$. First we differentiate it with respect to $z$.

    \frac{1}{2}\left(t - \frac{1}{t}\right) e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} J_n'(z) t^n
    \frac{1}{4}\left(t - \frac{1}{t}\right)^2 e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} J_n''(z) t^n

Now we differentiate with respect to $t$ and multiply by $t$ to get the $n^2 J_n$ term.

    \frac{z}{2}\left(1 + \frac{1}{t^2}\right) e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} n J_n(z) t^{n-1}
    \frac{z}{2}\left(t + \frac{1}{t}\right) e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} n J_n(z) t^n
    \frac{z}{2}\left(1 - \frac{1}{t^2}\right) e^{\frac{z}{2}(t-1/t)} + \frac{z^2}{4t}\left(t + \frac{1}{t}\right)^2 e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} n^2 J_n(z) t^{n-1}
    \frac{z}{2}\left(t - \frac{1}{t}\right) e^{\frac{z}{2}(t-1/t)} + \frac{z^2}{4}\left(t + \frac{1}{t}\right)^2 e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} n^2 J_n(z) t^n

Now we can evaluate the desired sum.

    \sum_{n=-\infty}^{\infty} \left[z^2 J_n''(z) + z J_n'(z) + (z^2 - n^2) J_n(z)\right] t^n
      = \left[\frac{z^2}{4}\left(t - \frac{1}{t}\right)^2 + \frac{z}{2}\left(t - \frac{1}{t}\right) + z^2 - \frac{z}{2}\left(t - \frac{1}{t}\right) - \frac{z^2}{4}\left(t + \frac{1}{t}\right)^2\right] e^{\frac{z}{2}(t-1/t)}
    \sum_{n=-\infty}^{\infty} \left[z^2 J_n''(z) + z J_n'(z) + (z^2 - n^2) J_n(z)\right] t^n = 0

Equating coefficients of $t^n$,

    z^2 J_n''(z) + z J_n'(z) + (z^2 - n^2) J_n(z) = 0.

Thus $J_n$ satisfies Bessel's equation.
Solution 36.3
    J_n' = \frac{n}{z} J_n - J_{n+1}
         = \frac{1}{2}\left(J_{n-1} + J_{n+1}\right) - J_{n+1}
         = \frac{1}{2}\left(J_{n-1} - J_{n+1}\right)

    J_n' = \frac{n}{z} J_n - J_{n+1}
         = \frac{n}{z} J_n - \left(\frac{2n}{z} J_n - J_{n-1}\right)
         = J_{n-1} - \frac{n}{z} J_n
Solution 36.4
The linearly independent homogeneous solutions are $J_{1/2}$ and $J_{-1/2}$. The Wronskian is

    W[J_{1/2}, J_{-1/2}] = -\frac{2}{\pi z} \sin(\pi/2) = -\frac{2}{\pi z}.

Using variation of parameters, a particular solution is

    y_p = -J_{1/2}(z) \int^z \frac{\zeta J_{-1/2}(\zeta)}{-2/(\pi \zeta)} \, d\zeta + J_{-1/2}(z) \int^z \frac{\zeta J_{1/2}(\zeta)}{-2/(\pi \zeta)} \, d\zeta
        = \frac{\pi}{2} J_{1/2}(z) \int^z \zeta^2 J_{-1/2}(\zeta) \, d\zeta - \frac{\pi}{2} J_{-1/2}(z) \int^z \zeta^2 J_{1/2}(\zeta) \, d\zeta.

Thus the general solution is

    y = c_1 J_{1/2}(z) + c_2 J_{-1/2}(z) + \frac{\pi}{2} J_{1/2}(z) \int^z \zeta^2 J_{-1/2}(\zeta) \, d\zeta - \frac{\pi}{2} J_{-1/2}(z) \int^z \zeta^2 J_{1/2}(\zeta) \, d\zeta.

We could substitute

    J_{1/2}(z) = \left(\frac{2}{\pi z}\right)^{1/2} \sin z    and    J_{-1/2}(z) = \left(\frac{2}{\pi z}\right)^{1/2} \cos z

into the solution, but we cannot evaluate the integrals in terms of elementary functions. (You can write the solution in terms of Fresnel integrals.)
Solution 36.5
Using $Y_\nu = \cot(\pi\nu) J_\nu - \csc(\pi\nu) J_{-\nu}$,

    W[J_\nu, Y_\nu] = \begin{vmatrix} J_\nu & \cot(\pi\nu) J_\nu - \csc(\pi\nu) J_{-\nu} \\ J_\nu' & \cot(\pi\nu) J_\nu' - \csc(\pi\nu) J_{-\nu}' \end{vmatrix}
      = \cot(\pi\nu) \begin{vmatrix} J_\nu & J_\nu \\ J_\nu' & J_\nu' \end{vmatrix} - \csc(\pi\nu) \begin{vmatrix} J_\nu & J_{-\nu} \\ J_\nu' & J_{-\nu}' \end{vmatrix}
      = -\csc(\pi\nu) \left(-\frac{2}{\pi z} \sin(\pi\nu)\right)
      = \frac{2}{\pi z}.

Since the Wronskian does not vanish identically, the functions are independent for all values of $\nu$.
Solution 36.6
Using $I_\nu(z) = i^{-\nu} J_\nu(iz)$,

    W[I_\nu, I_{-\nu}] = \begin{vmatrix} i^{-\nu} J_\nu(iz) & i^{\nu} J_{-\nu}(iz) \\ i^{-\nu} i J_\nu'(iz) & i^{\nu} i J_{-\nu}'(iz) \end{vmatrix}
      = i \begin{vmatrix} J_\nu(iz) & J_{-\nu}(iz) \\ J_\nu'(iz) & J_{-\nu}'(iz) \end{vmatrix}
      = i \left(-\frac{2}{\pi i z} \sin(\pi\nu)\right)
      = -\frac{2}{\pi z} \sin(\pi\nu).

Using $K_\nu = \frac{\pi}{2} \csc(\pi\nu) (I_{-\nu} - I_\nu)$,

    W[I_\nu, K_\nu] = \begin{vmatrix} I_\nu & \frac{\pi}{2}\csc(\pi\nu)(I_{-\nu} - I_\nu) \\ I_\nu' & \frac{\pi}{2}\csc(\pi\nu)(I_{-\nu}' - I_\nu') \end{vmatrix}
      = \frac{\pi}{2} \csc(\pi\nu) \left(\begin{vmatrix} I_\nu & I_{-\nu} \\ I_\nu' & I_{-\nu}' \end{vmatrix} - \begin{vmatrix} I_\nu & I_\nu \\ I_\nu' & I_\nu' \end{vmatrix}\right)
      = \frac{\pi}{2} \csc(\pi\nu) \left(-\frac{2}{\pi z} \sin(\pi\nu)\right)
      = -\frac{1}{z}.
Solution 36.7
1. We differentiate the generating function with respect to $t$.

    e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} J_n(z) t^n
    \frac{z}{2}\left(1 + \frac{1}{t^2}\right) e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} n J_n(z) t^{n-1}
    \left(1 + \frac{1}{t^2}\right) \sum_{n=-\infty}^{\infty} J_n(z) t^n = \frac{2}{z} \sum_{n=-\infty}^{\infty} n J_n(z) t^{n-1}
    \sum_{n=-\infty}^{\infty} J_n(z) t^n + \sum_{n=-\infty}^{\infty} J_n(z) t^{n-2} = \frac{2}{z} \sum_{n=-\infty}^{\infty} n J_n(z) t^{n-1}
    \sum_{n=-\infty}^{\infty} J_{n-1}(z) t^{n-1} + \sum_{n=-\infty}^{\infty} J_{n+1}(z) t^{n-1} = \frac{2}{z} \sum_{n=-\infty}^{\infty} n J_n(z) t^{n-1}
    J_{n-1}(z) + J_{n+1}(z) = \frac{2n}{z} J_n(z)
    \frac{2n}{z} J_n(z) = J_{n-1}(z) + J_{n+1}(z)

2. We differentiate the generating function with respect to $z$.

    e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} J_n(z) t^n
    \frac{1}{2}\left(t - \frac{1}{t}\right) e^{\frac{z}{2}(t-1/t)} = \sum_{n=-\infty}^{\infty} J_n'(z) t^n
    \frac{1}{2}\left(t - \frac{1}{t}\right) \sum_{n=-\infty}^{\infty} J_n(z) t^n = \sum_{n=-\infty}^{\infty} J_n'(z) t^n
    \frac{1}{2}\left(\sum_{n=-\infty}^{\infty} J_n(z) t^{n+1} - \sum_{n=-\infty}^{\infty} J_n(z) t^{n-1}\right) = \sum_{n=-\infty}^{\infty} J_n'(z) t^n
    \frac{1}{2}\left(\sum_{n=-\infty}^{\infty} J_{n-1}(z) t^n - \sum_{n=-\infty}^{\infty} J_{n+1}(z) t^n\right) = \sum_{n=-\infty}^{\infty} J_n'(z) t^n
    \frac{1}{2}\left(J_{n-1}(z) - J_{n+1}(z)\right) = J_n'(z)
    J_n'(z) = \frac{1}{2}\left(J_{n-1} - J_{n+1}\right)

3.

    \frac{d}{dz}\left[z^{-n} J_n(z)\right] = -n z^{-n-1} J_n(z) + z^{-n} J_n'(z)
      = -\frac{1}{2} z^{-n} \frac{2n}{z} J_n(z) + z^{-n} \frac{1}{2}\left(J_{n-1}(z) - J_{n+1}(z)\right)
      = -\frac{1}{2} z^{-n} \left(J_{n+1}(z) + J_{n-1}(z)\right) + \frac{1}{2} z^{-n} \left(J_{n-1}(z) - J_{n+1}(z)\right)
    \frac{d}{dz}\left[z^{-n} J_n(z)\right] = -z^{-n} J_{n+1}(z)
Solution 36.8
For this part we will use the identities

    J_\nu'(z) = \frac{\nu}{z} J_\nu(z) - J_{\nu+1}(z),    J_\nu'(z) = J_{\nu-1}(z) - \frac{\nu}{z} J_\nu(z).

Substituting them into the Wronskian,

    \begin{vmatrix} J_\nu(z) & J_{-\nu}(z) \\ J_\nu'(z) & J_{-\nu}'(z) \end{vmatrix} = -\frac{2 \sin(\pi\nu)}{\pi z}
    \begin{vmatrix} J_\nu(z) & J_{-\nu}(z) \\ \frac{\nu}{z} J_\nu(z) - J_{\nu+1}(z) & J_{-\nu-1}(z) + \frac{\nu}{z} J_{-\nu}(z) \end{vmatrix} = -\frac{2 \sin(\pi\nu)}{\pi z}

Expanding the determinant, the $\frac{\nu}{z} J_\nu J_{-\nu}$ terms cancel, leaving

    J_\nu(z) J_{-\nu-1}(z) + J_{\nu+1}(z) J_{-\nu}(z) = -\frac{2 \sin(\pi\nu)}{\pi z}
    J_{\nu+1}(z) J_{-\nu}(z) + J_\nu(z) J_{-\nu-1}(z) = -\frac{2}{\pi z} \sin(\pi\nu).
Solution 36.9
The generating function for the Bessel functions is

    e^{\frac{1}{2} z (t-1/t)} = \sum_{n=-\infty}^{\infty} J_n(z) t^n.    (36.2)

1. We substitute $t = 1$ into Equation 36.2.

    \sum_{n=-\infty}^{\infty} J_n(z) = 1
    J_0(z) + \sum_{n=1}^{\infty} J_n(z) + \sum_{n=1}^{\infty} J_{-n}(z) = 1

We use the identity $J_{-n} = (-1)^n J_n$.

    J_0(z) + \sum_{n=1}^{\infty} \left(1 + (-1)^n\right) J_n(z) = 1
    J_0(z) + 2 \sum_{\substack{n=2 \\ \text{even } n}}^{\infty} J_n(z) = 1
    J_0(z) + 2 \sum_{n=1}^{\infty} J_{2n}(z) = 1

2. We substitute $t = i$ into Equation 36.2.

    \sum_{n=-\infty}^{\infty} J_n(z) i^n = e^{iz}
    J_0(z) + \sum_{n=1}^{\infty} J_n(z) i^n + \sum_{n=1}^{\infty} J_{-n}(z) i^{-n} = e^{iz}
    J_0(z) + \sum_{n=1}^{\infty} J_n(z) i^n + \sum_{n=1}^{\infty} (-1)^n J_n(z) (-i)^n = e^{iz}
    J_0(z) + 2 \sum_{n=1}^{\infty} J_n(z) i^n = e^{iz}    (36.3)

Substituting $t = -i$ into Equation 36.2 yields

    J_0(z) + 2 \sum_{n=1}^{\infty} (-1)^n J_n(z) i^n = e^{-iz}.    (36.4)

Dividing the sum of (36.3) and (36.4) by 2 gives us the desired identity.

    J_0(z) + \sum_{n=1}^{\infty} \left(1 + (-1)^n\right) J_n(z) i^n = \cos z
    J_0(z) + 2 \sum_{\substack{n=2 \\ \text{even } n}}^{\infty} J_n(z) i^n = \cos z
    J_0(z) + 2 \sum_{\substack{n=2 \\ \text{even } n}}^{\infty} (-1)^{n/2} J_n(z) = \cos z
    J_0(z) + 2 \sum_{n=1}^{\infty} (-1)^n J_{2n}(z) = \cos z

3. Dividing the difference of (36.3) and (36.4) by $2i$ gives us the other identity.

    -i \sum_{n=1}^{\infty} \left(1 - (-1)^n\right) J_n(z) i^n = \sin z
    2 \sum_{\substack{n=1 \\ \text{odd } n}}^{\infty} J_n(z) i^{n-1} = \sin z
    2 \sum_{\substack{n=1 \\ \text{odd } n}}^{\infty} (-1)^{(n-1)/2} J_n(z) = \sin z
    2 \sum_{n=0}^{\infty} (-1)^n J_{2n+1}(z) = \sin z

4. Substituting $-t$ for $t$ in Equation 36.2 yields

    e^{-\frac{1}{2} z (t-1/t)} = \sum_{n=-\infty}^{\infty} J_n(z) (-t)^n.    (36.5)

We take the product of (36.2) and (36.5) to obtain the final identity.

    \left(\sum_{n=-\infty}^{\infty} J_n(z) t^n\right) \left(\sum_{m=-\infty}^{\infty} J_m(z) (-t)^m\right) = e^{\frac{1}{2} z (t-1/t)} e^{-\frac{1}{2} z (t-1/t)} = 1

Note that the coefficients of all powers of $t$ except $t^0$ in the product of sums must vanish.

    \sum_{n=-\infty}^{\infty} J_n(z) t^n J_{-n}(z) (-t)^{-n} = 1
    \sum_{n=-\infty}^{\infty} J_n^2(z) = 1
    J_0^2(z) + 2 \sum_{n=1}^{\infty} J_n^2(z) = 1
Solution 36.10
First we make the change of variables $y(x) = x^{1/2} v(x)$. We compute the derivatives of $y(x)$.

    y' = x^{1/2} v' + \frac{1}{2} x^{-1/2} v
    y'' = x^{1/2} v'' + x^{-1/2} v' - \frac{1}{4} x^{-3/2} v

We substitute these into the differential equation for $y$.

    y'' + x^{p-2} y = 0
    x^{1/2} v'' + x^{-1/2} v' - \frac{1}{4} x^{-3/2} v + x^{p-3/2} v = 0
    x^2 v'' + x v' + \left(x^p - \frac{1}{4}\right) v = 0

Then we make the change of variables $v(x) = u(\xi)$, $\xi = \frac{2}{p} x^{p/2}$. We write the derivatives in terms of $\xi$.

    x \frac{d}{dx} = x \frac{d\xi}{dx} \frac{d}{d\xi} = x \, x^{p/2 - 1} \frac{d}{d\xi} = \frac{p}{2} \xi \frac{d}{d\xi}
    x^2 \frac{d^2}{dx^2} + x \frac{d}{dx} = \left(x \frac{d}{dx}\right)\left(x \frac{d}{dx}\right) = \frac{p}{2} \xi \frac{d}{d\xi} \, \frac{p}{2} \xi \frac{d}{d\xi} = \frac{p^2}{4} \xi^2 \frac{d^2}{d\xi^2} + \frac{p^2}{4} \xi \frac{d}{d\xi}

We write the differential equation for $u(\xi)$.

    \frac{p^2}{4} \xi^2 u'' + \frac{p^2}{4} \xi u' + \left(\frac{p^2}{4} \xi^2 - \frac{1}{4}\right) u = 0
    u'' + \frac{1}{\xi} u' + \left(1 - \frac{1}{p^2 \xi^2}\right) u = 0

This is the Bessel equation of order $1/p$. We can write the general solution for $u$ in terms of Bessel functions of the first kind if $p \neq \pm 1$. Otherwise, we use a Bessel function of the second kind.

    u(\xi) = c_1 J_{1/p}(\xi) + c_2 J_{-1/p}(\xi)    for p \neq 0, \pm 1
    u(\xi) = c_1 J_{1/p}(\xi) + c_2 Y_{1/p}(\xi)    for p \neq 0

We write the solution in terms of $y(x)$.

    y(x) = c_1 \sqrt{x} \, J_{1/p}\left(\frac{2}{p} x^{p/2}\right) + c_2 \sqrt{x} \, J_{-1/p}\left(\frac{2}{p} x^{p/2}\right)    for p \neq 0, \pm 1
    y(x) = c_1 \sqrt{x} \, J_{1/p}\left(\frac{2}{p} x^{p/2}\right) + c_2 \sqrt{x} \, Y_{1/p}\left(\frac{2}{p} x^{p/2}\right)    for p \neq 0

The Airy equation $y'' + x y = 0$ is the case $p = 3$. The general solution of the Airy equation is

    y(x) = c_1 \sqrt{x} \, J_{1/3}\left(\frac{2}{3} x^{3/2}\right) + c_2 \sqrt{x} \, J_{-1/3}\left(\frac{2}{3} x^{3/2}\right).
Solution 36.11
Consider $J_{1/2}(z)$. We start with the series expansion.

    J_{1/2}(z) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m! \, \Gamma(1/2 + m + 1)} \left(\frac{z}{2}\right)^{1/2 + 2m}

Use the identity $\Gamma(n + 1/2) = \frac{(1)(3)\cdots(2n-1)}{2^n}\sqrt{\pi}$.

    = \sum_{m=0}^{\infty} \frac{(-1)^m 2^{m+1}}{m! \, (1)(3)\cdots(2m+1) \sqrt{\pi}} \left(\frac{z}{2}\right)^{1/2 + 2m}
    = \sum_{m=0}^{\infty} \frac{(-1)^m 2^{m+1}}{(2)(4)\cdots(2m) \, (1)(3)\cdots(2m+1) \sqrt{\pi}} \left(\frac{1}{2}\right)^{1/2 + m} z^{1/2 + 2m}
    = \left(\frac{2}{\pi z}\right)^{1/2} \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m+1)!} z^{2m+1}

We recognize the sum as the Taylor series expansion of $\sin z$.

    = \left(\frac{2}{\pi z}\right)^{1/2} \sin z

Using the recurrence relations

    J_{\nu+1} = \frac{\nu}{z} J_\nu - J_\nu'    and    J_{\nu-1} = \frac{\nu}{z} J_\nu + J_\nu',

we can find $J_{n+1/2}$ for any integral $n$.

We need $J_{3/2}(z)$ to determine $j_1(z)$. To find $J_{3/2}(z)$,

    J_{3/2}(z) = \frac{1/2}{z} J_{1/2}(z) - J_{1/2}'(z)
      = \frac{1/2}{z} \left(\frac{2}{\pi}\right)^{1/2} z^{-1/2} \sin z + \frac{1}{2} \left(\frac{2}{\pi}\right)^{1/2} z^{-3/2} \sin z - \left(\frac{2}{\pi}\right)^{1/2} z^{-1/2} \cos z
      = 2^{-1/2} \pi^{-1/2} z^{-3/2} \sin z + 2^{-1/2} \pi^{-1/2} z^{-3/2} \sin z - 2^{1/2} \pi^{-1/2} z^{-1/2} \cos z
      = \left(\frac{2}{\pi}\right)^{1/2} z^{-3/2} \sin z - \left(\frac{2}{\pi}\right)^{1/2} z^{-1/2} \cos z
      = \left(\frac{2}{\pi}\right)^{1/2} \left(z^{-3/2} \sin z - z^{-1/2} \cos z\right).

The spherical Bessel function $j_1(z)$ is

    j_1(z) = \frac{\sin z}{z^2} - \frac{\cos z}{z}.

The modified Bessel function of the first kind is

    I_\nu(z) = i^{-\nu} J_\nu(iz).

We can determine $I_{1/2}(z)$ from $J_{1/2}(z)$.

    I_{1/2}(z) = i^{-1/2} \left(\frac{2}{\pi i z}\right)^{1/2} \sin(iz)
      = -i \left(\frac{2}{\pi z}\right)^{1/2} i \sinh(z)
      = \left(\frac{2}{\pi z}\right)^{1/2} \sinh(z)

The spherical Bessel function $i_0(z)$ is

    i_0(z) = \frac{\sinh z}{z}.

The modified Bessel function of the second kind is

    K_\nu(z) = \lim_{\mu \to \nu} \frac{\pi}{2} \frac{I_{-\mu} - I_\mu}{\sin(\pi\mu)}.

Thus $K_{1/2}(z)$ can be determined in terms of $I_{-1/2}(z)$ and $I_{1/2}(z)$.

    K_{1/2}(z) = \frac{\pi}{2} \left(I_{-1/2} - I_{1/2}\right)

We determine $I_{-1/2}$ with the recursion relation

    I_{\nu-1}(z) = I_\nu'(z) + \frac{\nu}{z} I_\nu(z).

    I_{-1/2}(z) = I_{1/2}'(z) + \frac{1}{2z} I_{1/2}(z)
      = \left(\frac{2}{\pi}\right)^{1/2} z^{-1/2} \cosh(z) - \frac{1}{2} \left(\frac{2}{\pi}\right)^{1/2} z^{-3/2} \sinh(z) + \frac{1}{2z} \left(\frac{2}{\pi}\right)^{1/2} z^{-1/2} \sinh(z)
      = \left(\frac{2}{\pi z}\right)^{1/2} \cosh(z)

Now we can determine $K_{1/2}(z)$.

    K_{1/2}(z) = \frac{\pi}{2} \left(\left(\frac{2}{\pi z}\right)^{1/2} \cosh(z) - \left(\frac{2}{\pi z}\right)^{1/2} \sinh(z)\right)
      = \left(\frac{\pi}{2z}\right)^{1/2} e^{-z}

The spherical Bessel function $k_0(z)$ is

    k_0(z) = \frac{\pi}{2z} e^{-z}.
Solution 36.12
The Point at Infinity. With the change of variables $z = 1/t$, $w(z) = u(t)$, the modified Bessel equation becomes

    w'' + \frac{1}{z} w' - \left(1 + \frac{n^2}{z^2}\right) w = 0
    t^4 u'' + 2 t^3 u' - t^3 u' - (1 + n^2 t^2) u = 0
    u'' + \frac{1}{t} u' - \left(\frac{1}{t^4} + \frac{n^2}{t^2}\right) u = 0.

The point $t = 0$, and hence the point $z = \infty$, is an irregular singular point. We will find the leading order asymptotic behavior of the solutions as $z \to +\infty$.

Controlling Factor. Starting with the modified Bessel equation for real argument,

    y'' + \frac{1}{x} y' - \left(1 + \frac{n^2}{x^2}\right) y = 0,

we make the substitution $y = e^{s(x)}$ to obtain

    s'' + (s')^2 + \frac{1}{x} s' - 1 - \frac{n^2}{x^2} = 0.

We know that $n^2/x^2 \ll 1$ as $x \to \infty$; we will assume that $s'' \ll (s')^2$ as $x \to \infty$. This gives us

    (s')^2 + \frac{1}{x} s' - 1 \sim 0    as x \to \infty.

To simplify the equation further, we will try the possible two-term balances.

1. $(s')^2 + \frac{1}{x} s' \sim 0 \Rightarrow s' \sim -\frac{1}{x}$. This balance is not consistent, as it violates the assumption that 1 is smaller than the other terms.

2. $(s')^2 - 1 \sim 0 \Rightarrow s' \sim \pm 1$. This balance is consistent.

3. $\frac{1}{x} s' - 1 \sim 0 \Rightarrow s' \sim x$. This balance is inconsistent, as $(s')^2$ isn't smaller than the other terms.

Thus the only dominant balance is $s' \sim \pm 1$. This balance is consistent with our initial assumption that $s'' \ll (s')^2$. Thus $s \sim \pm x$ and the controlling factor is $e^{\pm x}$. We are interested in the decaying solution, so we will work with the controlling factor $e^{-x}$.

Leading Order Behavior. In order to find the leading order behavior, we substitute $s = -x + t(x)$, where $t(x) \ll x$ as $x \to \infty$, into the differential equation for $s$. We assume that $t' \ll 1$ and $t'' \ll 1/x$.

    t'' + (-1 + t')^2 + \frac{1}{x}(-1 + t') - 1 - \frac{n^2}{x^2} = 0
    t'' - 2 t' + (t')^2 - \frac{1}{x} + \frac{1}{x} t' - \frac{n^2}{x^2} = 0

Using our assumptions about the behavior of $t'$ and $t''$,

    -2 t' - \frac{1}{x} \sim 0
    t' \sim -\frac{1}{2x}
    t \sim -\frac{1}{2} \log x    as x \to \infty.

This asymptotic behavior is consistent with our assumptions.

Thus the leading order behavior of the decaying solution is

    y \sim c \, e^{-x - \frac{1}{2} \log x + u(x)} = c \, x^{-1/2} e^{-x + u(x)}    as x \to \infty,

where $u(x) \ll \log x$ as $x \to \infty$. By substituting $t = -\frac{1}{2} \log x + u(x)$ into the differential equation for $t$, you could show that $u(x) \to \text{const}$ as $x \to \infty$. Thus the full leading order behavior of the decaying solution is

    y \sim c \, x^{-1/2} e^{-x}    as x \to \infty.

Asymptotic Series. Now we find the full asymptotic series for $K_n(x)$ as $x \to \infty$. We substitute

    K_n(x) \sim \frac{e^{-x}}{\sqrt{x}} w(x)

into the modified Bessel equation, where $w(x)$ is a Taylor series about $x = \infty$, i.e.,

    K_n(x) \sim \frac{e^{-x}}{\sqrt{x}} \sum_{k=0}^{\infty} a_k x^{-k}.

First we differentiate the expression for $K_n(x)$.

    K_n'(x) \sim \frac{e^{-x}}{\sqrt{x}} \left(w' - \left(1 + \frac{1}{2x}\right) w\right)
    K_n''(x) \sim \frac{e^{-x}}{\sqrt{x}} \left(w'' - \left(2 + \frac{1}{x}\right) w' + \left(1 + \frac{1}{x} + \frac{3}{4x^2}\right) w\right)

We substitute these expressions into the modified Bessel equation.

    x^2 y'' + x y' - (x^2 + n^2) y = 0
    x^2 w'' - (2x^2 + x) w' + \left(x^2 + x + \frac{3}{4}\right) w + x w' - \left(x + \frac{1}{2}\right) w - (x^2 + n^2) w = 0
    x^2 w'' - 2 x^2 w' + \left(\frac{1}{4} - n^2\right) w = 0

The derivatives of the Taylor series are

    w' = \sum_{k=1}^{\infty} (-k) a_k x^{-k-1} = \sum_{k=0}^{\infty} -(k+1) a_{k+1} x^{-k-2},
    w'' = \sum_{k=1}^{\infty} (-k)(-k-1) a_k x^{-k-2} = \sum_{k=0}^{\infty} k(k+1) a_k x^{-k-2}.

We substitute these expressions into the differential equation.

    x^2 \sum_{k=0}^{\infty} k(k+1) a_k x^{-k-2} + 2 x^2 \sum_{k=0}^{\infty} (k+1) a_{k+1} x^{-k-2} + \left(\frac{1}{4} - n^2\right) \sum_{k=0}^{\infty} a_k x^{-k} = 0
    \sum_{k=0}^{\infty} k(k+1) a_k x^{-k} + 2 \sum_{k=0}^{\infty} (k+1) a_{k+1} x^{-k} + \left(\frac{1}{4} - n^2\right) \sum_{k=0}^{\infty} a_k x^{-k} = 0

We equate coefficients of $x^{-k}$ to obtain a recurrence relation for the coefficients.

    k(k+1) a_k + 2(k+1) a_{k+1} + \left(\frac{1}{4} - n^2\right) a_k = 0
    a_{k+1} = \frac{n^2 - 1/4 - k(k+1)}{2(k+1)} a_k
    a_{k+1} = \frac{n^2 - (k + 1/2)^2}{2(k+1)} a_k
    a_{k+1} = \frac{4n^2 - (2k+1)^2}{8(k+1)} a_k

We choose $a_0 = 1$ and use the recurrence relation to determine the rest of the coefficients,

    a_k = \frac{1}{k! \, 8^k} \prod_{j=1}^{k} \left(4n^2 - (2j-1)^2\right).

The asymptotic expansion of the modified Bessel function of the second kind is

    K_n(x) \sim \frac{e^{-x}}{\sqrt{x}} \sum_{k=0}^{\infty} \frac{1}{k! \, 8^k} \left[\prod_{j=1}^{k} \left(4n^2 - (2j-1)^2\right)\right] x^{-k},    as x \to \infty.

Convergence. We determine the domain of convergence of the series with the ratio test. A Taylor series about infinity would converge outside of some circle.

    \lim_{k \to \infty} \left|\frac{a_{k+1} x^{-k-1}}{a_k x^{-k}}\right| < 1
    \lim_{k \to \infty} \left|\frac{4n^2 - (2k+1)^2}{8(k+1)}\right| |x|^{-1} < 1
    \infty < |x|

The series does not converge for any $x$ in the finite complex plane. However, if we take only a finite number of terms in the series, it gives a good approximation of $K_n(x)$ for large, positive $x$. At $x = 10$, the one, two and three term approximations give relative errors of 0.01, 0.0006 and 0.00006, respectively.
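Those quoted relative errors can be reproduced with a short computation (ours, not from the text). We take the integral representation $K_n(x) = \int_0^\infty e^{-x \cosh t} \cosh(nt)\, dt$ as the reference value, and restore the constant $\sqrt{\pi/2}$ that the normalization $a_0 = 1$ absorbs:

```python
import math

def K(n, x, tmax=8.0, m=8000):
    # Integral representation K_n(x) = int_0^inf e^{-x cosh t} cosh(n t) dt,
    # truncated at tmax (the integrand is negligible there), trapezoid rule.
    h = tmax / m
    s = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(tmax)) * math.cosh(n * tmax))
    for i in range(1, m):
        t = i * h
        s += math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return s * h

def K_asym(n, x, kterms):
    # Truncated asymptotic series sqrt(pi/(2x)) e^{-x} sum_k a_k x^{-k},
    # with a_{k+1} = (4n^2 - (2k+1)^2)/(8(k+1)) a_k and a_0 = 1.
    a, total = 1.0, 1.0
    for k in range(1, kterms):
        a *= (4 * n * n - (2 * k - 1) ** 2) / (8 * k)
        total += a / x ** k
    return math.sqrt(math.pi / (2 * x)) * math.exp(-x) * total

exact = K(0, 10.0)
for kterms in (1, 2, 3):
    print(abs(K_asym(0, 10.0, kterms) / exact - 1))  # roughly 0.01, 0.0007, 0.00006
```

The quadrature can also be sanity-checked against the exact value $K_{1/2}(x) = \sqrt{\pi/(2x)}\, e^{-x}$ derived earlier.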
Part V

Partial Differential Equations
Chapter 37
Transforming Equations
Let $x_i$ denote rectangular coordinates. Let $\mathbf{a}_i$ be unit basis vectors in the orthogonal coordinate system $\xi_i$. The distance metric coefficients $h_i$ can be defined

    h_i = \sqrt{\left(\frac{\partial x_1}{\partial \xi_i}\right)^2 + \left(\frac{\partial x_2}{\partial \xi_i}\right)^2 + \left(\frac{\partial x_3}{\partial \xi_i}\right)^2}.

The gradient, divergence, etc., follow.

    \nabla u = \frac{\mathbf{a}_1}{h_1} \frac{\partial u}{\partial \xi_1} + \frac{\mathbf{a}_2}{h_2} \frac{\partial u}{\partial \xi_2} + \frac{\mathbf{a}_3}{h_3} \frac{\partial u}{\partial \xi_3}
    \nabla \cdot \mathbf{v} = \frac{1}{h_1 h_2 h_3} \left[\frac{\partial}{\partial \xi_1}(h_2 h_3 v_1) + \frac{\partial}{\partial \xi_2}(h_3 h_1 v_2) + \frac{\partial}{\partial \xi_3}(h_1 h_2 v_3)\right]
    \nabla^2 u = \frac{1}{h_1 h_2 h_3} \left[\frac{\partial}{\partial \xi_1}\left(\frac{h_2 h_3}{h_1} \frac{\partial u}{\partial \xi_1}\right) + \frac{\partial}{\partial \xi_2}\left(\frac{h_3 h_1}{h_2} \frac{\partial u}{\partial \xi_2}\right) + \frac{\partial}{\partial \xi_3}\left(\frac{h_1 h_2}{h_3} \frac{\partial u}{\partial \xi_3}\right)\right]
37.1 Exercises
Exercise 37.1
Find the Laplacian in cylindrical coordinates $(r, \theta, z)$.

    x = r \cos\theta,    y = r \sin\theta,    z = z

Exercise 37.2
Find the Laplacian in spherical coordinates $(r, \theta, \phi)$.

    x = r \sin\theta \cos\phi,    y = r \sin\theta \sin\phi,    z = r \cos\theta
37.2 Hints
37.3 Solutions
Solution 37.1

    h_1 = \sqrt{(\cos\theta)^2 + (\sin\theta)^2 + 0} = 1
    h_2 = \sqrt{(-r \sin\theta)^2 + (r \cos\theta)^2 + 0} = r
    h_3 = \sqrt{0 + 0 + 1^2} = 1

    \nabla^2 u = \frac{1}{r} \left[\frac{\partial}{\partial r}\left(r \frac{\partial u}{\partial r}\right) + \frac{\partial}{\partial \theta}\left(\frac{1}{r} \frac{\partial u}{\partial \theta}\right) + \frac{\partial}{\partial z}\left(r \frac{\partial u}{\partial z}\right)\right]
    \nabla^2 u = \frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial u}{\partial r}\right) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2} + \frac{\partial^2 u}{\partial z^2}

Solution 37.2

    h_1 = \sqrt{(\sin\theta \cos\phi)^2 + (\sin\theta \sin\phi)^2 + (\cos\theta)^2} = 1
    h_2 = \sqrt{(r \cos\theta \cos\phi)^2 + (r \cos\theta \sin\phi)^2 + (-r \sin\theta)^2} = r
    h_3 = \sqrt{(-r \sin\theta \sin\phi)^2 + (r \sin\theta \cos\phi)^2 + 0} = r \sin\theta

    \nabla^2 u = \frac{1}{r^2 \sin\theta} \left[\frac{\partial}{\partial r}\left(r^2 \sin\theta \frac{\partial u}{\partial r}\right) + \frac{\partial}{\partial \theta}\left(\sin\theta \frac{\partial u}{\partial \theta}\right) + \frac{\partial}{\partial \phi}\left(\frac{1}{\sin\theta} \frac{\partial u}{\partial \phi}\right)\right]
    \nabla^2 u = \frac{1}{r^2} \frac{\partial}{\partial r}\left(r^2 \frac{\partial u}{\partial r}\right) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta}\left(\sin\theta \frac{\partial u}{\partial \theta}\right) + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 u}{\partial \phi^2}
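The polar form of the Laplacian can be spot-checked against a Cartesian computation (an illustrative sketch, ours; the test function and finite-difference step are arbitrary choices). For $u = x^2 y$ the Cartesian Laplacian is exactly $2y$, so the polar expression, evaluated by finite differences, should match it.

```python
import math

def u_cart(x, y):
    return x * x * y  # test function; its Cartesian Laplacian is exactly 2y

def u_pol(r, th):
    return u_cart(r * math.cos(th), r * math.sin(th))

def lap_polar(r, th, h=1e-4):
    # (1/r) d/dr (r u_r) + (1/r^2) u_{theta theta}, all by central differences
    ur = lambda rr: (u_pol(rr + h, th) - u_pol(rr - h, th)) / (2 * h)
    term_r = (((r + h) * ur(r + h) - (r - h) * ur(r - h)) / (2 * h)) / r
    term_th = (u_pol(r, th + h) - 2 * u_pol(r, th) + u_pol(r, th - h)) / (h * h) / (r * r)
    return term_r + term_th

r, th = 1.3, 0.7
x, y = r * math.cos(th), r * math.sin(th)
print(abs(lap_polar(r, th) - 2 * y))  # small finite-difference residual
```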
Chapter 38
Classification of Partial Differential Equations
38.1 Classification of Second Order Quasi-Linear Equations
Consider the general second order quasi-linear partial differential equation in two variables,

    a(x,y) u_{xx} + 2 b(x,y) u_{xy} + c(x,y) u_{yy} = F(x, y, u, u_x, u_y).    (38.1)

We classify the equation by the sign of the discriminant. At a given point $x_0$, $y_0$, the equation is classified as one of the following types:

    b^2 - ac > 0 : hyperbolic
    b^2 - ac = 0 : parabolic
    b^2 - ac < 0 : elliptic

If an equation has a particular type for all points $x, y$ in a domain then the equation is said to be of that type in the domain. Each of these types has a canonical form that can be obtained through a change of independent variables. The type of an equation indicates much about the nature of its solution.

We seek a change of independent variables, (a different coordinate system), such that Equation 38.1 has a simpler form. We will find that a second order quasi-linear partial differential equation in two variables can be transformed to one of the canonical forms:

    u_{\xi\eta} = G(\xi, \eta, u, u_\xi, u_\eta),    hyperbolic
    u_{\xi\xi} = G(\xi, \eta, u, u_\xi, u_\eta),    parabolic
    u_{\xi\xi} + u_{\eta\eta} = G(\xi, \eta, u, u_\xi, u_\eta),    elliptic
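The classification rule is mechanical enough to code directly. A tiny sketch (ours) that reports the type at a point from the coefficients $a$, $b$, $c$:

```python
def classify(a, b, c):
    # The sign of the discriminant b^2 - ac at a point decides the type.
    d = b * b - a * c
    if d > 0:
        return "hyperbolic"
    if d == 0:
        return "parabolic"
    return "elliptic"

print(classify(1, 0, -1))  # wave operator u_tt - c^2 u_xx (c=1): hyperbolic
print(classify(1, 0, 0))   # heat equation u_t = u_xx, written with a=1, b=c=0: parabolic
print(classify(1, 0, 1))   # Laplace operator u_xx + u_yy: elliptic
```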
Consider the change of independent variables

    \xi = \xi(x, y),    \eta = \eta(x, y).

The partial derivatives of $u$ are

    u_x = \xi_x u_\xi + \eta_x u_\eta
    u_y = \xi_y u_\xi + \eta_y u_\eta
    u_{xx} = \xi_x^2 u_{\xi\xi} + 2 \xi_x \eta_x u_{\xi\eta} + \eta_x^2 u_{\eta\eta} + \xi_{xx} u_\xi + \eta_{xx} u_\eta
    u_{xy} = \xi_x \xi_y u_{\xi\xi} + (\xi_x \eta_y + \xi_y \eta_x) u_{\xi\eta} + \eta_x \eta_y u_{\eta\eta} + \xi_{xy} u_\xi + \eta_{xy} u_\eta
    u_{yy} = \xi_y^2 u_{\xi\xi} + 2 \xi_y \eta_y u_{\xi\eta} + \eta_y^2 u_{\eta\eta} + \xi_{yy} u_\xi + \eta_{yy} u_\eta.

Substituting these into Equation 38.1 yields an equation in $\xi$ and $\eta$.

    \left(a \xi_x^2 + 2 b \xi_x \xi_y + c \xi_y^2\right) u_{\xi\xi} + 2 \left(a \xi_x \eta_x + b(\xi_x \eta_y + \xi_y \eta_x) + c \xi_y \eta_y\right) u_{\xi\eta} + \left(a \eta_x^2 + 2 b \eta_x \eta_y + c \eta_y^2\right) u_{\eta\eta} = H(\xi, \eta, u, u_\xi, u_\eta)
    \alpha(\xi, \eta) u_{\xi\xi} + \beta(\xi, \eta) u_{\xi\eta} + \gamma(\xi, \eta) u_{\eta\eta} = H(\xi, \eta, u, u_\xi, u_\eta)    (38.2)
38.1.1 Hyperbolic Equations
We start with a hyperbolic equation, ($b^2 - ac > 0$). We seek a change of independent variables that will put Equation 38.1 in the form

    u_{\xi\eta} = G(\xi, \eta, u, u_\xi, u_\eta).    (38.3)

We require that the $u_{\xi\xi}$ and $u_{\eta\eta}$ terms vanish. That is $\alpha = \gamma = 0$ in Equation 38.2. This gives us two constraints on $\xi$ and $\eta$:

    a \xi_x^2 + 2 b \xi_x \xi_y + c \xi_y^2 = 0,    a \eta_x^2 + 2 b \eta_x \eta_y + c \eta_y^2 = 0    (38.4)
    \frac{\xi_x}{\xi_y} = \frac{-b + \sqrt{b^2 - ac}}{a},    \frac{\eta_x}{\eta_y} = \frac{-b - \sqrt{b^2 - ac}}{a}.

Here we chose the signs in the quadratic formulas to get different solutions for $\xi$ and $\eta$.

Consider $\xi(x, y) = \text{const}$ as an implicit equation for $y$ in terms of $x$. We differentiate with respect to $x$.

    \frac{d\xi}{dx} = \xi_x + \xi_y \frac{dy}{dx} = 0

The derivative of $y(x)$ is

    \frac{dy}{dx} = -\frac{\xi_x}{\xi_y} = \frac{b - \sqrt{b^2 - ac}}{a}.

Solving this ordinary differential equation for $y(x)$ determines $\xi(x, y)$. We just write the solution for $y(x)$ in the form $F(x, y(x)) = \text{const}$. We then have $\xi = F(x, y)$. Upon solving for $\xi$ and $\eta$ we divide Equation 38.2 by $\beta(\xi, \eta)$ to obtain the canonical form.

Note that we could have solved for $\xi_y / \xi_x$ in Equation 38.4:

    \frac{dx}{dy} = -\frac{\xi_y}{\xi_x} = \frac{b + \sqrt{b^2 - ac}}{c}.

This form is useful if $a$ vanishes.

Another canonical form for hyperbolic equations is

    u_{\sigma\sigma} - u_{\tau\tau} = K(\sigma, \tau, u, u_\sigma, u_\tau).    (38.5)

We can transform Equation 38.3 to this form with the change of variables

    \sigma = \xi + \eta,    \tau = \xi - \eta.

Equation 38.3 becomes

    u_{\sigma\sigma} - u_{\tau\tau} = G\left(\frac{\sigma + \tau}{2}, \frac{\sigma - \tau}{2}, u, u_\sigma + u_\tau, u_\sigma - u_\tau\right).
Example 38.1.1 Consider the wave equation with a source,

    u_{tt} - c^2 u_{xx} = s(x, t).

Since $0 - (1)(-c^2) > 0$, the equation is hyperbolic. We find the new variables.

    \frac{dx}{dt} = -c,    x = -ct + \text{const},    \xi = x + ct
    \frac{dx}{dt} = c,    x = ct + \text{const},    \eta = x - ct

Then we determine $t$ and $x$ in terms of $\xi$ and $\eta$.

    t = \frac{\xi - \eta}{2c},    x = \frac{\xi + \eta}{2}

We calculate the derivatives of $\xi$ and $\eta$.

    \xi_t = c,    \xi_x = 1
    \eta_t = -c,    \eta_x = 1

Then we calculate the derivatives of $u$.

    u_{tt} = c^2 u_{\xi\xi} - 2 c^2 u_{\xi\eta} + c^2 u_{\eta\eta}
    u_{xx} = u_{\xi\xi} + 2 u_{\xi\eta} + u_{\eta\eta}

Finally we transform the equation to canonical form.

    -4 c^2 u_{\xi\eta} = s\left(\frac{\xi + \eta}{2}, \frac{\xi - \eta}{2c}\right)
    u_{\xi\eta} = -\frac{1}{4 c^2} s\left(\frac{\xi + \eta}{2}, \frac{\xi - \eta}{2c}\right)

If $s(x, t) = 0$, then the equation is $u_{\xi\eta} = 0$; we can integrate with respect to $\xi$ and $\eta$ to obtain the solution, $u = f(\xi) + g(\eta)$. Here $f$ and $g$ are arbitrary $C^2$ functions. In terms of $t$ and $x$, we have

    u(x, t) = f(x + ct) + g(x - ct).
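The d'Alembert form of the solution is easy to test numerically (an illustrative sketch, ours; the particular $f$, $g$ and the step size are arbitrary): for any smooth $f$ and $g$, finite differences of $u = f(x + ct) + g(x - ct)$ should satisfy $u_{tt} - c^2 u_{xx} \approx 0$.

```python
import math

c = 2.0
f = math.sin                     # any C^2 function
g = lambda s: math.exp(-s * s)   # any C^2 function

def u(x, t):
    # d'Alembert form: u = f(x + c t) + g(x - c t)
    return f(x + c * t) + g(x - c * t)

def second(fn, s, h=1e-3):
    # Central second difference
    return (fn(s + h) - 2 * fn(s) + fn(s - h)) / (h * h)

x, t = 0.4, 0.9
utt = second(lambda tt: u(x, tt), t)
uxx = second(lambda xx: u(xx, t), x)
print(abs(utt - c * c * uxx))  # O(h^2) residual
```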
To put the wave equation in the form of Equation 38.5 we make the change of variables

    \sigma = \xi + \eta = 2x,    \tau = \xi - \eta = 2ct.

    u_{tt} - c^2 u_{xx} = s(x, t)
    4 c^2 u_{\tau\tau} - 4 c^2 u_{\sigma\sigma} = s\left(\frac{\sigma}{2}, \frac{\tau}{2c}\right)
    u_{\sigma\sigma} - u_{\tau\tau} = -\frac{1}{4 c^2} s\left(\frac{\sigma}{2}, \frac{\tau}{2c}\right)
Example 38.1.2 Consider

    y^2 u_{xx} - x^2 u_{yy} = 0.

For $x \neq 0$ and $y \neq 0$ this equation is hyperbolic. We find the new variables.

    \frac{dy}{dx} = -\frac{\sqrt{x^2 y^2}}{y^2} = -\frac{x}{y},    y \, dy = -x \, dx,    \frac{y^2}{2} = -\frac{x^2}{2} + \text{const},    \xi = y^2 + x^2
    \frac{dy}{dx} = \frac{\sqrt{x^2 y^2}}{y^2} = \frac{x}{y},    y \, dy = x \, dx,    \frac{y^2}{2} = \frac{x^2}{2} + \text{const},    \eta = y^2 - x^2

We calculate the derivatives of $\xi$ and $\eta$.

    \xi_x = 2x,    \xi_y = 2y
    \eta_x = -2x,    \eta_y = 2y

Then we calculate the derivatives of $u$.

    u_x = 2x (u_\xi - u_\eta)
    u_y = 2y (u_\xi + u_\eta)
    u_{xx} = 4x^2 (u_{\xi\xi} - 2 u_{\xi\eta} + u_{\eta\eta}) + 2 (u_\xi - u_\eta)
    u_{yy} = 4y^2 (u_{\xi\xi} + 2 u_{\xi\eta} + u_{\eta\eta}) + 2 (u_\xi + u_\eta)

Finally we transform the equation to canonical form.

    y^2 u_{xx} - x^2 u_{yy} = 0
    -16 x^2 y^2 u_{\xi\eta} + 2 y^2 (u_\xi - u_\eta) - 2 x^2 (u_\xi + u_\eta) = 0
    16 \cdot \frac{1}{2}(\xi - \eta) \cdot \frac{1}{2}(\xi + \eta) \, u_{\xi\eta} = 2 \eta u_\xi - 2 \xi u_\eta
    u_{\xi\eta} = \frac{\eta u_\xi - \xi u_\eta}{2 (\xi^2 - \eta^2)}
Example 38.1.3 Consider Laplace's equation,

    u_{xx} + u_{yy} = 0.

Since $0 - (1)(1) < 0$, the equation is elliptic. We will transform this equation to the canonical form of Equation 38.3. We find the new variables.

    \frac{dy}{dx} = i,    y = ix + \text{const},    \xi = x + iy
    \frac{dy}{dx} = -i,    y = -ix + \text{const},    \eta = x - iy

We calculate the derivatives of $\xi$ and $\eta$.

    \xi_x = 1,    \xi_y = i
    \eta_x = 1,    \eta_y = -i

Then we calculate the derivatives of $u$.

    u_{xx} = u_{\xi\xi} + 2 u_{\xi\eta} + u_{\eta\eta}
    u_{yy} = -u_{\xi\xi} + 2 u_{\xi\eta} - u_{\eta\eta}

Finally we transform the equation to canonical form.

    4 u_{\xi\eta} = 0
    u_{\xi\eta} = 0

We integrate with respect to $\xi$ and $\eta$ to obtain the solution, $u = f(\xi) + g(\eta)$. Here $f$ and $g$ are arbitrary $C^2$ functions. In terms of $x$ and $y$, we have

    u(x, y) = f(x + iy) + g(x - iy).

This solution makes a lot of sense, because the real and imaginary parts of an analytic function are harmonic.
38.1.2 Parabolic equations
38.1.3 Elliptic Equations
We start with an elliptic equation, ($b^2 - ac < 0$). We seek a change of independent variables that will put Equation 38.1 in the form

    u_{\sigma\sigma} + u_{\tau\tau} = G(\sigma, \tau, u, u_\sigma, u_\tau).    (38.6)

If we make the change of variables determined by

    \frac{\xi_x}{\xi_y} = \frac{-b + i\sqrt{ac - b^2}}{a},    \frac{\eta_x}{\eta_y} = \frac{-b - i\sqrt{ac - b^2}}{a},

the equation will have the form

    u_{\xi\eta} = G(\xi, \eta, u, u_\xi, u_\eta).

$\xi$ and $\eta$ are complex-valued. If we then make the change of variables

    \sigma = \frac{\xi + \eta}{2},    \tau = \frac{\xi - \eta}{2i},

we will obtain the canonical form of Equation 38.6. Note that since $\xi$ and $\eta$ are complex conjugates, $\sigma$ and $\tau$ are real-valued.
Example 38.1.4 Consider

    y^2 u_{xx} + x^2 u_{yy} = 0.    (38.7)

For $x \neq 0$ and $y \neq 0$ this equation is elliptic. We find new variables that will put this equation in the form $u_{\xi\eta} = G(\cdot)$. From Example 38.1.2 we see that they are

    \frac{dy}{dx} = -i \frac{\sqrt{x^2 y^2}}{y^2} = -i \frac{x}{y},    y \, dy = -i x \, dx,    \frac{y^2}{2} = -i \frac{x^2}{2} + \text{const},    \xi = y^2 + i x^2
    \frac{dy}{dx} = i \frac{\sqrt{x^2 y^2}}{y^2} = i \frac{x}{y},    y \, dy = i x \, dx,    \frac{y^2}{2} = i \frac{x^2}{2} + \text{const},    \eta = y^2 - i x^2

The variables that will put Equation 38.7 in canonical form are

    \sigma = \frac{\xi + \eta}{2} = y^2,    \tau = \frac{\xi - \eta}{2i} = x^2.

We calculate the derivatives of $\sigma$ and $\tau$.

    \sigma_x = 0,    \sigma_y = 2y
    \tau_x = 2x,    \tau_y = 0

Then we calculate the derivatives of $u$.

    u_x = 2x u_\tau
    u_y = 2y u_\sigma
    u_{xx} = 4x^2 u_{\tau\tau} + 2 u_\tau
    u_{yy} = 4y^2 u_{\sigma\sigma} + 2 u_\sigma

Finally we transform the equation to canonical form.

    y^2 u_{xx} + x^2 u_{yy} = 0
    \sigma (4 \tau u_{\tau\tau} + 2 u_\tau) + \tau (4 \sigma u_{\sigma\sigma} + 2 u_\sigma) = 0
    u_{\sigma\sigma} + u_{\tau\tau} = -\frac{1}{2\sigma} u_\sigma - \frac{1}{2\tau} u_\tau
38.2 Equilibrium Solutions
Example 38.2.1 Consider the equilibrium solution for the following problem.

    u_t = u_{xx},    u(x, 0) = x,    u_x(0, t) = u_x(1, t) = 0

Setting $u_t = 0$ we have the ordinary differential equation

    \frac{d^2 u}{dx^2} = 0.

This equation has the solution

    u = ax + b.

Applying the boundary conditions we see that

    u = b.

To determine the constant, we note that the heat energy in the rod is constant in time.

    \int_0^1 u(x, t) \, dx = \int_0^1 u(x, 0) \, dx
    \int_0^1 b \, dx = \int_0^1 x \, dx

Thus the equilibrium solution is

    u(x) = \frac{1}{2}.
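The relaxation to this equilibrium can be watched directly in a simulation (an illustrative sketch, ours; the grid size, time step and step count are arbitrary choices). A conservative finite-volume scheme with zero flux at both ends preserves the heat energy exactly, so the numerical solution must settle to the constant 1/2.

```python
# Explicit finite volumes for u_t = u_xx on [0,1] with u(x,0) = x and
# insulated (zero-flux) ends; the solution should relax to the constant 1/2.
m = 20
dx = 1.0 / m
dt = 0.4 * dx * dx                      # stable: dt <= dx^2 / 2
u = [(i + 0.5) * dx for i in range(m)]  # initial data sampled at cell centers
for _ in range(4000):
    # interface fluxes; zero at the two insulated boundaries
    flux = [0.0] + [(u[i] - u[i - 1]) / dx for i in range(1, m)] + [0.0]
    u = [u[i] + dt * (flux[i + 1] - flux[i]) / dx for i in range(m)]
print(min(u), max(u))  # both near 0.5
```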
38.3 Exercises
Exercise 38.1
Classify as hyperbolic, parabolic, or elliptic in a region $R$ each of the equations:

    (a) u_t = (p u_x)_x
    (b) u_{tt} = c^2 u_{xx} - \gamma u
    (c) (q u_x)_x + (q u_t)_t = 0

where $p(x)$, $c(x, t)$, $q(x, t)$, and $\gamma(x)$ are given functions that take on only positive values in a region $R$ of the $(x, t)$ plane.

Exercise 38.2
Transform each of the following equations for $\phi(x, y)$ into canonical form in appropriate regions:

    (a) \phi_{xx} - y^2 \phi_{yy} + \phi_x + x^2 = 0
    (b) \phi_{xx} + x \phi_{yy} = 0

The equation in part (b) is known as Tricomi's equation and is a model for transonic fluid flow in which the flow speed changes from supersonic to subsonic.
38.4 Hints
Hint 38.1

Hint 38.2
38.5 Solutions
Solution 38.1
1.

    u_t = (p u_x)_x
    p u_{xx} + 0 \, u_{xt} + 0 \, u_{tt} + p_x u_x - u_t = 0

Since $0^2 - p \cdot 0 = 0$, the equation is parabolic.

2.

    u_{tt} = c^2 u_{xx} - \gamma u
    u_{tt} + 0 \, u_{tx} - c^2 u_{xx} + \gamma u = 0

Since $0^2 - (1)(-c^2) > 0$, the equation is hyperbolic.

3.

    (q u_x)_x + (q u_t)_t = 0
    q u_{xx} + 0 \, u_{xt} + q u_{tt} + q_x u_x + q_t u_t = 0

Since $0^2 - q \cdot q < 0$, the equation is elliptic.
Solution 38.2
1. For $y \neq 0$, the equation is hyperbolic. We find the new independent variables.

    \frac{dy}{dx} = -\frac{\sqrt{y^2}}{1} = -y,    y = c \, e^{-x},    e^x y = c,    \xi = e^x y
    \frac{dy}{dx} = y,    y = c \, e^x,    e^{-x} y = c,    \eta = e^{-x} y

Next we determine $x$ and $y$ in terms of $\xi$ and $\eta$.

    \xi \eta = y^2,    y = \sqrt{\xi \eta}
    \xi = e^x \sqrt{\xi \eta},    e^x = \sqrt{\xi / \eta},    x = \frac{1}{2} \log\left(\frac{\xi}{\eta}\right)

We calculate the derivatives of $\xi$ and $\eta$.

    \xi_x = e^x y = \xi,    \xi_y = e^x = \sqrt{\xi/\eta}
    \eta_x = -e^{-x} y = -\eta,    \eta_y = e^{-x} = \sqrt{\eta/\xi}

Then we calculate the derivatives of $\phi$.

    \phi_x = \xi \phi_\xi - \eta \phi_\eta,    \phi_y = \sqrt{\xi/\eta} \, \phi_\xi + \sqrt{\eta/\xi} \, \phi_\eta
    \phi_{xx} = \xi^2 \phi_{\xi\xi} - 2 \xi \eta \phi_{\xi\eta} + \eta^2 \phi_{\eta\eta} + \xi \phi_\xi + \eta \phi_\eta,    \phi_{yy} = \frac{\xi}{\eta} \phi_{\xi\xi} + 2 \phi_{\xi\eta} + \frac{\eta}{\xi} \phi_{\eta\eta}

Finally we transform the equation to canonical form.

    \phi_{xx} - y^2 \phi_{yy} + \phi_x + x^2 = 0
    -4 \xi \eta \phi_{\xi\eta} + 2 \xi \phi_\xi + \log^2\left(\sqrt{\xi/\eta}\right) = 0
    \phi_{\xi\eta} = \frac{1}{2\eta} \phi_\xi + \frac{1}{4 \xi \eta} \log^2\left(\sqrt{\xi/\eta}\right)

For $y = 0$ we have the ordinary differential equation

    \phi_{xx} + \phi_x + x^2 = 0.
2. For $x < 0$, the equation is hyperbolic. We find the new independent variables.

    \frac{dy}{dx} = \sqrt{-x},    y = \frac{2}{3} x \sqrt{-x} + c,    \xi = \frac{2}{3} x \sqrt{-x} - y
    \frac{dy}{dx} = -\sqrt{-x},    y = -\frac{2}{3} x \sqrt{-x} + c,    \eta = \frac{2}{3} x \sqrt{-x} + y

Next we determine $x$ and $y$ in terms of $\xi$ and $\eta$.

    x = -\left(-\frac{3}{4}(\xi + \eta)\right)^{2/3},    y = \frac{\eta - \xi}{2}

We calculate the derivatives of $\xi$ and $\eta$.

    \xi_x = \sqrt{-x} = \left(-\frac{3}{4}(\xi + \eta)\right)^{1/3},    \xi_y = -1
    \eta_x = \sqrt{-x},    \eta_y = 1

Then we calculate the derivatives of $\phi$.

    \phi_x = \sqrt{-x} \, (\phi_\xi + \phi_\eta),    \phi_y = \phi_\eta - \phi_\xi
    \phi_{xx} = -x \, (\phi_{\xi\xi} + 2 \phi_{\xi\eta} + \phi_{\eta\eta}) - \frac{1}{2\sqrt{-x}} (\phi_\xi + \phi_\eta)
    \phi_{yy} = \phi_{\xi\xi} - 2 \phi_{\xi\eta} + \phi_{\eta\eta}

Finally we transform the equation to canonical form.

    \phi_{xx} + x \phi_{yy} = 0
    -4 x \phi_{\xi\eta} - \frac{1}{2\sqrt{-x}} (\phi_\xi + \phi_\eta) = 0
    \phi_{\xi\eta} = \frac{\phi_\xi + \phi_\eta}{8 (-x)^{3/2}}
    \phi_{\xi\eta} = -\frac{\phi_\xi + \phi_\eta}{6 (\xi + \eta)}

For $x > 0$, the equation is elliptic. The variables we defined before are complex-valued,

    \xi = i \frac{2}{3} x^{3/2} - y,    \eta = i \frac{2}{3} x^{3/2} + y.

We choose the new real-valued variables

    \sigma = \eta - \xi,    \tau = -i (\xi + \eta),

that is, $\sigma = 2y$ and $\tau = \frac{4}{3} x^{3/2}$. We write the derivatives in terms of $\sigma$ and $\tau$.

    \phi_\xi = -\phi_\sigma - i \phi_\tau,    \phi_\eta = \phi_\sigma - i \phi_\tau,    \phi_{\xi\eta} = -(\phi_{\sigma\sigma} + \phi_{\tau\tau})

We transform the equation to canonical form.

    -(\phi_{\sigma\sigma} + \phi_{\tau\tau}) = -\frac{-2i \phi_\tau}{6 i \tau}
    \phi_{\sigma\sigma} + \phi_{\tau\tau} = -\frac{1}{3\tau} \phi_\tau
Chapter 39
Separation of Variables
39.1 Eigensolutions of Homogeneous Equations
39.2 Homogeneous Equations with Homogeneous Boundary Conditions
The method of separation of variables is a useful technique for finding special solutions of partial differential equations. We can combine these special solutions to solve certain problems. Consider the temperature of a one-dimensional rod of length $h$. [1] The left end is held at zero temperature, the right end is insulated and the initial temperature distribution is known at time $t = 0$. To find the temperature we solve the problem:

    \frac{\partial u}{\partial t} = \kappa \frac{\partial^2 u}{\partial x^2},    0 < x < h,  t > 0
    u(0, t) = u_x(h, t) = 0
    u(x, 0) = f(x)

[1] Why $h$? Because $l$ looks like $1$ and we use $L$ to denote linear operators.

We look for special solutions of the form $u(x, t) = X(x) T(t)$. Substituting this into the partial differential equation yields

    X(x) T'(t) = \kappa X''(x) T(t)
    \frac{T'(t)}{\kappa T(t)} = \frac{X''(x)}{X(x)}

Since the left side is only dependent on $t$, the right side is only dependent on $x$, and the relation is valid for all $t$ and $x$, both sides of the equation must be constant.

    \frac{T'}{\kappa T} = \frac{X''}{X} = -\lambda

Here $-\lambda$ is an arbitrary constant. (You'll see later that this form is convenient.) $u(x, t) = X(x) T(t)$ will satisfy the partial differential equation if $X(x)$ and $T(t)$ satisfy the ordinary differential equations

    T' = -\kappa \lambda T    and    X'' = -\lambda X.

Now we see how lucky we are that this problem happens to have homogeneous boundary conditions. [2] If the left boundary condition had been $u(0, t) = 1$, this would imply $X(0) T(t) = 1$, which tells us nothing very useful about either $X$ or $T$. However the boundary condition $u(0, t) = X(0) T(t) = 0$ tells us that either $X(0) = 0$ or $T(t) = 0$. Since the latter case would give us the trivial solution, we must have $X(0) = 0$. Likewise by looking at the right boundary condition we obtain $X'(h) = 0$.

We have a regular Sturm-Liouville problem for $X(x)$:

    X'' + \lambda X = 0,    X(0) = X'(h) = 0.

The eigenvalues and orthonormal eigenfunctions are

    \lambda_n = \left(\frac{(2n-1)\pi}{2h}\right)^2,    X_n = \sqrt{\frac{2}{h}} \sin\left(\frac{(2n-1)\pi}{2h} x\right),    n \in \mathbb{Z}^+.

[2] Actually luck has nothing to do with it. I planned it that way.

Now we solve the equation for $T(t)$.

    T' = -\kappa \lambda_n T
    T = c \, e^{-\kappa \lambda_n t}

The eigen-solutions of the partial differential equation that satisfy the homogeneous boundary conditions are

    u_n(x, t) = \sqrt{\frac{2}{h}} \sin\left(\sqrt{\lambda_n} \, x\right) e^{-\kappa \lambda_n t}.

We seek a solution of the problem that is a linear combination of these eigen-solutions.

    u(x, t) = \sum_{n=1}^{\infty} a_n \sqrt{\frac{2}{h}} \sin\left(\sqrt{\lambda_n} \, x\right) e^{-\kappa \lambda_n t}

We apply the initial condition to find the coefficients in the expansion.

    u(x, 0) = \sum_{n=1}^{\infty} a_n \sqrt{\frac{2}{h}} \sin\left(\sqrt{\lambda_n} \, x\right) = f(x)
    a_n = \sqrt{\frac{2}{h}} \int_0^h \sin\left(\sqrt{\lambda_n} \, x\right) f(x) \, dx
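The coefficient formula can be exercised numerically (an illustrative sketch, ours; the quadrature rule, truncation at 59 terms, and the sample initial condition $f(x) = x$ are arbitrary choices). Computing the $a_n$ by quadrature and summing the series at $t = 0$ should approximately reproduce $f$:

```python
import math

h = 1.0

def lam(n):
    # eigenvalues ((2n-1) pi / (2h))^2
    return ((2 * n - 1) * math.pi / (2 * h)) ** 2

def X(n, x):
    # orthonormal eigenfunctions sqrt(2/h) sin(sqrt(lam_n) x)
    return math.sqrt(2 / h) * math.sin(math.sqrt(lam(n)) * x)

f = lambda x: x  # sample initial condition

def a(n, m=4000):
    # a_n = integral_0^h X_n(x) f(x) dx, midpoint rule
    dx = h / m
    return sum(X(n, (i + 0.5) * dx) * f((i + 0.5) * dx) for i in range(m)) * dx

x0 = 0.3
approx = sum(a(n) * X(n, x0) for n in range(1, 60))
print(abs(approx - f(x0)))  # small truncation error
```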
39.3 Time-Independent Sources and Boundary Conditions
Consider the temperature in a one-dimensional rod of length h. The ends are held at temperatures α and β, respectively, and the initial temperature is known at time t = 0. Additionally, there is a heat source, s(x), that is independent of time. We find the temperature by solving the problem
$$u_t = u_{xx} + s(x), \quad u(0,t) = \alpha, \quad u(h,t) = \beta, \quad u(x,0) = f(x). \tag{39.1}$$
Because of the source term, the equation is not separable, so we cannot directly apply separation of variables.
Furthermore, we have the added complication of inhomogeneous boundary conditions. Instead of attacking this
problem directly, we seek a transformation that will yield a homogeneous equation and homogeneous boundary
conditions.
Consider the equilibrium temperature, μ(x). It satisfies the problem
$$\mu''(x) = -s(x), \quad \mu(0) = \alpha, \quad \mu(h) = \beta.$$
The Green function for this problem is
$$G(x;\xi) = \frac{x_<(x_> - h)}{h},$$
where $x_< = \min(x,\xi)$ and $x_> = \max(x,\xi)$. The equilibrium temperature distribution is
$$\mu(x) = \alpha\,\frac{h-x}{h} + \beta\,\frac{x}{h} - \frac{1}{h}\int_0^h x_<(x_> - h)\,s(\xi)\,d\xi,$$
$$\mu(x) = \alpha + (\beta - \alpha)\frac{x}{h} - \frac{1}{h}\left[(x-h)\int_0^x \xi\,s(\xi)\,d\xi + x\int_x^h (\xi - h)\,s(\xi)\,d\xi\right].$$
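The split-integral form of μ(x) evaluates directly. The sketch below (my own; the function name and the midpoint-rule quadrature are assumptions) computes μ(x) from that formula. For s(x) = 1 with α = β = 0 and h = 1, the exact equilibrium profile is x(1 − x)/2, which the formula reproduces.

```python
import math

def equilibrium_temperature(s, alpha, beta, h, n_quad=4000):
    """mu(x) = alpha + (beta - alpha) x/h
               - (1/h) [ (x-h) int_0^x xi s(xi) dxi + x int_x^h (xi-h) s(xi) dxi ],
    with both integrals approximated by the midpoint rule."""
    def integrate(g, lo, hi):
        if hi <= lo:
            return 0.0
        m = max(1, int(n_quad * (hi - lo) / h))
        d = (hi - lo) / m
        return sum(g(lo + (j + 0.5) * d) for j in range(m)) * d

    def mu(x):
        i1 = integrate(lambda xi: xi * s(xi), 0.0, x)
        i2 = integrate(lambda xi: (xi - h) * s(xi), x, h)
        return alpha + (beta - alpha) * x / h - ((x - h) * i1 + x * i2) / h
    return mu
```

With no source (s ≡ 0) the formula reduces to the linear profile α + (β − α)x/h, as it should.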
Now we substitute u(x, t) = v(x, t) + μ(x) into Equation 39.1.
$$\frac{\partial}{\partial t}\left(v + \mu(x)\right) = \frac{\partial^2}{\partial x^2}\left(v + \mu(x)\right) + s(x)$$
$$v_t = v_{xx} + \mu''(x) + s(x)$$
$$v_t = v_{xx} \tag{39.2}$$
Since the equilibrium solution satisfies the inhomogeneous boundary conditions, v(x, t) satisfies homogeneous boundary conditions:
$$v(0,t) = v(h,t) = 0.$$
The initial value of v is
$$v(x,0) = f(x) - \mu(x).$$
We seek a solution for v(x, t) that is a linear combination of eigen-solutions of the heat equation. We substitute the separation of variables, v(x, t) = X(x)T(t), into Equation 39.2.
$$\frac{T'}{T} = \frac{X''}{X} = -\lambda$$
This gives us two ordinary differential equations:
$$X'' + \lambda X = 0, \quad X(0) = X(h) = 0,$$
$$T' = -\lambda T.$$
The Sturm-Liouville problem for X(x) has the eigenvalues and orthonormal eigenfunctions
$$\lambda_n = \left(\frac{n\pi}{h}\right)^2, \quad X_n = \sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right), \quad n \in \mathbb{Z}^+.$$
We solve for T(t):
$$T_n = c\,\mathrm{e}^{-(n\pi/h)^2 t}.$$
The eigen-solutions of the partial differential equation are
$$v_n(x,t) = \sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right)\mathrm{e}^{-(n\pi/h)^2 t}.$$
The solution for v(x, t) is a linear combination of these.
$$v(x,t) = \sum_{n=1}^{\infty} a_n \sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right)\mathrm{e}^{-(n\pi/h)^2 t}$$
We determine the coefficients in the series with the initial condition.
$$v(x,0) = \sum_{n=1}^{\infty} a_n \sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right) = f(x) - \mu(x)$$
$$a_n = \sqrt{\frac{2}{h}}\int_0^h \sin\left(\frac{n\pi x}{h}\right)\left(f(x) - \mu(x)\right)dx$$
The temperature of the rod is
$$u(x,t) = \mu(x) + \sum_{n=1}^{\infty} a_n \sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right)\mathrm{e}^{-(n\pi/h)^2 t}.$$
39.4 Inhomogeneous Equations with Homogeneous Boundary Conditions
Now consider the heat equation with a time dependent source, s(x, t).
$$u_t = u_{xx} + s(x,t), \quad u(0,t) = u(h,t) = 0, \quad u(x,0) = f(x). \tag{39.3}$$
In general we cannot transform the problem to one with a homogeneous differential equation, so we cannot represent the solution in a series of the eigen-solutions of the partial differential equation. Instead, we will do the next best thing and expand the solution in a series of the eigenfunctions X_n(x), where the coefficients depend on time.
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t)X_n(x)$$
We find these eigenfunctions with the separation of variables u(x, t) = X(x)T(t) applied to the homogeneous equation, u_t = u_{xx}, which yields
$$X_n(x) = \sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right), \quad n \in \mathbb{Z}^+.$$
We expand the heat source in the eigenfunctions.
$$s(x,t) = \sum_{n=1}^{\infty} s_n(t)\sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right)$$
$$s_n(t) = \sqrt{\frac{2}{h}}\int_0^h \sin\left(\frac{n\pi x}{h}\right)s(x,t)\,dx$$
We substitute the series solution into Equation 39.3.
$$\sum_{n=1}^{\infty} u_n'(t)\sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right) = -\sum_{n=1}^{\infty} u_n(t)\left(\frac{n\pi}{h}\right)^2\sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right) + \sum_{n=1}^{\infty} s_n(t)\sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right)$$
$$u_n'(t) + \left(\frac{n\pi}{h}\right)^2 u_n(t) = s_n(t)$$
Now we have a first order, ordinary differential equation for each of the u_n(t). We obtain initial conditions from the initial condition for u(x, t).
$$u(x,0) = \sum_{n=1}^{\infty} u_n(0)\sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right) = f(x)$$
$$u_n(0) = \sqrt{\frac{2}{h}}\int_0^h \sin\left(\frac{n\pi x}{h}\right)f(x)\,dx \equiv f_n$$
The temperature is given by
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t)\sqrt{\frac{2}{h}}\sin\left(\frac{n\pi x}{h}\right),$$
$$u_n(t) = f_n\,\mathrm{e}^{-(n\pi/h)^2 t} + \int_0^t \mathrm{e}^{-(n\pi/h)^2(t-\tau)}\,s_n(\tau)\,d\tau.$$
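The formula for u_n(t) is a variation-of-parameters (Duhamel) integral, which can be approximated directly. The sketch below (my own; the function name and the midpoint-rule quadrature are assumptions) evaluates a single modal amplitude. For a constant source s_n ≡ c the integral has the closed form c(1 − e^{−λt})/λ to compare against.

```python
import math

def mode_amplitude(f_n, s_n, lam, t, m=2000):
    """u_n(t) = f_n e^{-lam t} + int_0^t e^{-lam (t - tau)} s_n(tau) dtau,
    with the integral approximated by the midpoint rule on m panels."""
    d = t / m
    integral = sum(math.exp(-lam * (t - (j + 0.5) * d)) * s_n((j + 0.5) * d)
                   for j in range(m)) * d
    return f_n * math.exp(-lam * t) + integral
```

Here lam plays the role of (nπ/h)² and s_n is the n-th source coefficient as a function of time.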
39.5 Inhomogeneous Boundary Conditions
Consider the temperature of a one-dimensional rod of length h. The left end is held at the temperature α(t), the heat flow at the right end is specified, there is a time-dependent source, and the initial temperature distribution is known at time t = 0. To find the temperature we solve the problem:
$$u_t = u_{xx} + s(x,t), \quad 0 < x < h, \quad t > 0 \tag{39.4}$$
$$u(0,t) = \alpha(t), \quad u_x(h,t) = \beta(t), \quad u(x,0) = f(x)$$
Transformation to a homogeneous equation. Because of the inhomogeneous boundary conditions, we cannot directly apply the method of separation of variables. However, we can transform the problem to an inhomogeneous equation with homogeneous boundary conditions. To do this, we first find a function, μ(x, t), which satisfies the boundary conditions. We note that
$$\mu(x,t) = \alpha(t) + x\beta(t)$$
does the trick. We make the change of variables
$$u(x,t) = v(x,t) + \mu(x,t)$$
in Equation 39.4.
$$v_t + \mu_t = (v_{xx} + \mu_{xx}) + s(x,t)$$
$$v_t = v_{xx} + s(x,t) - \mu_t$$
The boundary and initial conditions become
$$v(0,t) = 0, \quad v_x(h,t) = 0, \quad v(x,0) = f(x) - \mu(x,0).$$
Thus we have a heat equation with the source s(x, t) − μ_t(x, t). We could apply separation of variables to find a solution of the form
$$u(x,t) = \mu(x,t) + \sum_{n=1}^{\infty} u_n(t)\sqrt{\frac{2}{h}}\sin\left(\frac{(2n-1)\pi x}{2h}\right).$$
Direct eigenfunction expansion. Alternatively, we could seek a direct eigenfunction expansion of u(x, t):
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t)\sqrt{\frac{2}{h}}\sin\left(\frac{(2n-1)\pi x}{2h}\right).$$
Note that the eigenfunctions satisfy the homogeneous boundary conditions while u(x, t) does not. If we choose any fixed time t = t₀ and form the periodic extension of the function u(x, t₀) to define it for x outside the range (0, h), then this function will have jump discontinuities. This means that our eigenfunction expansion will not converge uniformly. We are not allowed to differentiate the series with respect to x, so we can't just plug the series into the partial differential equation to determine the coefficients. Instead, we will multiply Equation 39.4 by an eigenfunction and integrate from x = 0 to x = h. To avoid differentiating the series with respect to x, we will use integration by parts to move derivatives from u(x, t) to the eigenfunction. (We will denote $\lambda_n = \left(\frac{(2n-1)\pi}{2h}\right)^2$.)
$$\sqrt{\frac{2}{h}}\int_0^h \sin\left(\sqrt{\lambda_n}\,x\right)(u_t - u_{xx})\,dx = \sqrt{\frac{2}{h}}\int_0^h \sin\left(\sqrt{\lambda_n}\,x\right)s(x,t)\,dx$$
$$u_n'(t) - \sqrt{\frac{2}{h}}\left[u_x\sin\left(\sqrt{\lambda_n}\,x\right)\right]_0^h + \sqrt{\frac{2}{h}}\sqrt{\lambda_n}\int_0^h u_x\cos\left(\sqrt{\lambda_n}\,x\right)dx = s_n(t)$$
$$u_n'(t) + \sqrt{\frac{2}{h}}(-1)^n u_x(h,t) + \sqrt{\frac{2}{h}}\sqrt{\lambda_n}\left[u\cos\left(\sqrt{\lambda_n}\,x\right)\right]_0^h + \sqrt{\frac{2}{h}}\lambda_n\int_0^h u\sin\left(\sqrt{\lambda_n}\,x\right)dx = s_n(t)$$
$$u_n'(t) + \sqrt{\frac{2}{h}}(-1)^n\beta(t) - \sqrt{\frac{2}{h}}\sqrt{\lambda_n}\,u(0,t) + \lambda_n u_n(t) = s_n(t)$$
$$u_n'(t) + \lambda_n u_n(t) = \sqrt{\frac{2}{h}}\left(\sqrt{\lambda_n}\,\alpha(t) - (-1)^n\beta(t)\right) + s_n(t)$$
Now we have an ordinary differential equation for each of the u_n(t). We obtain initial conditions for them using the initial condition for u(x, t).
$$u(x,0) = \sum_{n=1}^{\infty} u_n(0)\sqrt{\frac{2}{h}}\sin\left(\sqrt{\lambda_n}\,x\right) = f(x)$$
$$u_n(0) = \sqrt{\frac{2}{h}}\int_0^h \sin\left(\sqrt{\lambda_n}\,x\right)f(x)\,dx \equiv f_n$$
Thus the temperature is given by
$$u(x,t) = \sqrt{\frac{2}{h}}\sum_{n=1}^{\infty} u_n(t)\sin\left(\sqrt{\lambda_n}\,x\right),$$
$$u_n(t) = f_n\,\mathrm{e}^{-\lambda_n t} + \int_0^t \mathrm{e}^{-\lambda_n(t-\tau)}\left[\sqrt{\frac{2}{h}}\left(\sqrt{\lambda_n}\,\alpha(\tau) - (-1)^n\beta(\tau)\right) + s_n(\tau)\right]d\tau.$$
39.6 The Wave Equation
Consider an elastic string with a free end at x = 0 and attached to a massless spring at x = 1. The partial differential equation that models this problem is
$$u_{tt} = u_{xx}$$
$$u_x(0,t) = 0, \quad u_x(1,t) = -u(1,t), \quad u(x,0) = f(x), \quad u_t(x,0) = g(x).$$
We make the substitution u(x, t) = φ(x)ψ(t) to obtain
$$\frac{\psi''}{\psi} = \frac{\phi''}{\phi} = -\lambda.$$
First we consider the problem for φ:
$$\phi'' + \lambda\phi = 0, \quad \phi'(0) = \phi(1) + \phi'(1) = 0.$$
To find the eigenvalues we consider the following three cases:

λ < 0. The general solution is
$$\phi = a\cosh\left(\sqrt{-\lambda}\,x\right) + b\sinh\left(\sqrt{-\lambda}\,x\right).$$
$$\phi'(0) = 0 \quad\Rightarrow\quad b = 0.$$
$$\phi(1) + \phi'(1) = 0 \quad\Rightarrow\quad a\cosh\left(\sqrt{-\lambda}\right) + a\sqrt{-\lambda}\sinh\left(\sqrt{-\lambda}\right) = 0 \quad\Rightarrow\quad a = 0.$$
Since there is only the trivial solution, there are no negative eigenvalues.

λ = 0. The general solution is
$$\phi = ax + b.$$
$$\phi'(0) = 0 \quad\Rightarrow\quad a = 0.$$
$$\phi(1) + \phi'(1) = 0 \quad\Rightarrow\quad b + 0 = 0.$$
Thus λ = 0 is not an eigenvalue.

λ > 0. The general solution is
$$\phi = a\cos\left(\sqrt{\lambda}\,x\right) + b\sin\left(\sqrt{\lambda}\,x\right).$$
$$\phi'(0) = 0 \quad\Rightarrow\quad b = 0.$$
$$\phi(1) + \phi'(1) = 0 \quad\Rightarrow\quad a\cos\left(\sqrt{\lambda}\right) - a\sqrt{\lambda}\sin\left(\sqrt{\lambda}\right) = 0$$
$$\cos\left(\sqrt{\lambda}\right) = \sqrt{\lambda}\sin\left(\sqrt{\lambda}\right)$$
$$\sqrt{\lambda} = \cot\left(\sqrt{\lambda}\right)$$
By looking at Figure 39.1 (the plot shows the functions f(x) = x and f(x) = cot x, with vertical lines at x = nπ), we see that there are an infinite number of positive eigenvalues and that
$$\lambda_n \to (n\pi)^2 \quad\text{as } n \to \infty.$$
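The transcendental condition √λ = cot(√λ) is easy to solve numerically. Writing x = √λ, the roots of cos x − x sin x = 0 lie one per interval (kπ, kπ + π/2). The bisection sketch below is my own (the function name and indexing from k = 0 are assumptions, not the text's).

```python
import math

def cot_eigenvalues(count):
    """Eigenvalues lambda_k = x_k**2 where x_k solves x = cot(x).

    Equivalently, g(x) = cos(x) - x sin(x) = 0; the k-th root lies in
    (k pi, k pi + pi/2). Solved by bisection."""
    g = lambda x: math.cos(x) - x * math.sin(x)
    lams = []
    for k in range(count):
        lo, hi = k * math.pi + 1e-9, k * math.pi + math.pi / 2
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if g(lo) * g(mid) <= 0:
                hi = mid
            else:
                lo = mid
        lams.append((0.5 * (lo + hi)) ** 2)
    return lams
```

The first root is x₀ ≈ 0.860334 (so λ₀ ≈ 0.740), and for large k the roots approach kπ from above, consistent with the asymptotic behavior noted above.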
The eigenfunctions are
$$\phi_n = \cos\left(\sqrt{\lambda_n}\,x\right).$$
The solution for ψ is
$$\psi_n = a_n\cos\left(\sqrt{\lambda_n}\,t\right) + b_n\sin\left(\sqrt{\lambda_n}\,t\right).$$
Thus the solution to the differential equation is
$$u(x,t) = \sum_{n=1}^{\infty}\cos\left(\sqrt{\lambda_n}\,x\right)\left[a_n\cos\left(\sqrt{\lambda_n}\,t\right) + b_n\sin\left(\sqrt{\lambda_n}\,t\right)\right].$$
Let
$$f(x) = \sum_{n=1}^{\infty} f_n\cos\left(\sqrt{\lambda_n}\,x\right), \qquad g(x) = \sum_{n=1}^{\infty} g_n\cos\left(\sqrt{\lambda_n}\,x\right).$$
Figure 39.1: Plot of x and cot x.
From the initial value we have
$$\sum_{n=1}^{\infty}\cos\left(\sqrt{\lambda_n}\,x\right)a_n = \sum_{n=1}^{\infty} f_n\cos\left(\sqrt{\lambda_n}\,x\right) \quad\Rightarrow\quad a_n = f_n.$$
The initial velocity condition gives us
$$\sum_{n=1}^{\infty}\cos\left(\sqrt{\lambda_n}\,x\right)\sqrt{\lambda_n}\,b_n = \sum_{n=1}^{\infty} g_n\cos\left(\sqrt{\lambda_n}\,x\right) \quad\Rightarrow\quad b_n = \frac{g_n}{\sqrt{\lambda_n}}.$$
Thus the solution is
$$u(x,t) = \sum_{n=1}^{\infty}\cos\left(\sqrt{\lambda_n}\,x\right)\left[f_n\cos\left(\sqrt{\lambda_n}\,t\right) + \frac{g_n}{\sqrt{\lambda_n}}\sin\left(\sqrt{\lambda_n}\,t\right)\right].$$
39.7 General Method
Here is an outline detailing the method of separation of variables for a linear partial differential equation for u(x, y, z, . . . ).
1. Substitute u(x, y, z, . . . ) = X(x)Y(y)Z(z)· · · into the partial differential equation. Separate the equation into ordinary differential equations.
2. Translate the boundary conditions for u into boundary conditions for X, Y, Z, . . . . The continuity of u may give additional boundary conditions and boundedness conditions.
3. Solve the differential equation(s) that determine the eigenvalues. Make sure to consider all cases. The eigenfunctions will be determined up to a multiplicative constant.
4. Solve the rest of the differential equations subject to the homogeneous boundary conditions. The eigenvalues will be a parameter in the solution. The solutions will be determined up to a multiplicative constant.
5. The eigen-solutions are the product of the solutions of the ordinary differential equations.
$$\phi_n = X_n Y_n Z_n \cdots$$
The solution of the partial differential equation is a linear combination of the eigen-solutions.
$$u(x, y, z, \dots) = \sum a_n\phi_n$$
6. Solve for the coefficients, a_n, using the inhomogeneous boundary conditions.
39.8 Exercises
Exercise 39.1
Obtain Poisson's formula to solve the Dirichlet problem for the circular region 0 ≤ r < R, 0 ≤ θ < 2π. That is, determine a solution φ(r, θ) to Laplace's equation
$$\nabla^2\phi = 0$$
in polar coordinates given φ(R, θ). Show that
$$\phi(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi}\phi(R,\alpha)\frac{R^2 - r^2}{R^2 + r^2 - 2Rr\cos(\theta - \alpha)}\,d\alpha.$$
Exercise 39.2
Consider the temperature of a ring of unit radius. Solve the problem
$$u_t = u_{\theta\theta}, \quad u(\theta,0) = f(\theta)$$
with separation of variables.
Exercise 39.3
Solve Laplace's equation by separation of variables.
$$\Delta u \equiv u_{xx} + u_{yy} = 0, \quad 0 < x < 1, \quad 0 < y < 1,$$
$$u(x,0) = f(x), \quad u(x,1) = 0, \quad u(0,y) = 0, \quad u(1,y) = 0$$
Here f(x) is an arbitrary function which is known.
Exercise 39.4
Solve the following problem by separation of variables:
$$\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} = 0, \quad 0 < r < 1,$$
$$u(1,\theta) = f(\theta).$$
That is, you must find the function u = u(r, θ) which satisfies the partial differential equation inside the unit circle and which takes on the values of f(θ) on the circumference.
Exercise 39.5
Find the normal modes of oscillation of a drum head of unit radius. The drum head obeys the wave equation with zero displacement on the boundary.
$$\Delta v \equiv \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial v}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 v}{\partial\theta^2} = \frac{1}{c^2}\frac{\partial^2 v}{\partial t^2}, \quad v(1,\theta,t) = 0$$
Exercise 39.6
Solve the equation
$$\phi_t = a^2\phi_{xx}, \quad 0 < x < l, \quad t > 0$$
with boundary conditions φ(0, t) = φ(l, t) = 0, and initial conditions
$$\phi(x,0) = \begin{cases} x, & 0 \le x \le l/2, \\ l - x, & l/2 < x \le l. \end{cases}$$
Comment on the differentiability (that is, the number of finite derivatives with respect to x) at time t = 0 and at time t = ε, where ε > 0 and ε ≪ 1.
Exercise 39.7
Consider a one-dimensional rod of length L with initial temperature distribution f(x). The temperatures at the left and right ends of the rod are held at T₀ and T₁, respectively. To find the temperature of the rod for t > 0, solve
$$u_t = u_{xx}, \quad 0 < x < L, \quad t > 0,$$
$$u(0,t) = T_0, \quad u(L,t) = T_1, \quad u(x,0) = f(x),$$
with separation of variables.
Exercise 39.8
For 0 < x < l solve the problem
$$\phi_t = a^2\phi_{xx} + w(x,t) \tag{39.5}$$
$$\phi(0,t) = 0, \quad \phi_x(l,t) = 0, \quad \phi(x,0) = f(x)$$
by means of a series expansion involving the eigenfunctions of
$$\frac{d^2\phi(x)}{dx^2} + \lambda\phi(x) = 0, \quad \phi(0) = \phi'(l) = 0.$$
Here w(x, t) and f(x) are prescribed functions.
Exercise 39.9
Solve the heat equation of Exercise 39.8 with the same initial conditions but with the boundary conditions
$$\phi(0,t) = 0, \quad c\phi(l,t) + \phi_x(l,t) = 0.$$
Here c > 0 is a constant. Although it is not possible to solve for the eigenvalues in closed form, show that the eigenvalues assume a simple form for large values of λ.
Exercise 39.10
Use a series expansion technique to solve the problem
$$\phi_t = a^2\phi_{xx} + 1, \quad t > 0, \quad 0 < x < l$$
with boundary and initial conditions given by
$$\phi(x,0) = 0, \quad \phi(0,t) = t, \quad \phi_x(l,t) = -c\phi(l,t)$$
where c > 0 is a constant.
Exercise 39.11
Let φ(x, t) satisfy the equation
$$\phi_t = a^2\phi_{xx}$$
for 0 < x < l, t > 0, with initial conditions φ(x, 0) = 0 for 0 < x < l, with boundary conditions φ(0, t) = 0 for t > 0, and φ(l, t) + φ_x(l, t) = 1 for t > 0. Obtain two series solutions for this problem, one which is useful for large t and the other useful for small t.
Exercise 39.12
A rod occupies the portion 1 < x < 2 of the x-axis. The thermal conductivity depends on x in such a manner that the temperature φ(x, t) satisfies the equation
$$\phi_t = A^2\left(x^2\phi_x\right)_x \tag{39.6}$$
where A is a constant. For φ(1, t) = φ(2, t) = 0 for t > 0, with φ(x, 0) = f(x) for 1 < x < 2, show that the appropriate series expansion involves the eigenfunctions
$$\phi_n(x) = \frac{1}{\sqrt{x}}\sin\left(\frac{n\pi\log x}{\log 2}\right).$$
Work out the series expansion for the given boundary and initial conditions.
Exercise 39.13
Consider a string of length L with a fixed left end and a free right end. Initially the string is at rest with displacement f(x). Find the motion of the string by solving
$$u_{tt} = c^2 u_{xx}, \quad 0 < x < L, \quad t > 0,$$
$$u(0,t) = 0, \quad u_x(L,t) = 0,$$
$$u(x,0) = f(x), \quad u_t(x,0) = 0,$$
with separation of variables.
Exercise 39.14
Consider the equilibrium temperature distribution in a two-dimensional block of width a and height b. There is a heat source given by the function f(x, y). The vertical sides of the block are held at zero temperature; the horizontal sides are insulated. To find this equilibrium temperature distribution, solve the potential equation
$$u_{xx} + u_{yy} = f(x,y), \quad 0 < x < a, \quad 0 < y < b,$$
$$u(0,y) = u(a,y) = 0, \quad u_y(x,0) = u_y(x,b) = 0,$$
with separation of variables.
Exercise 39.15
Consider the vibrations of a stiff beam of length L. More precisely, consider the transverse vibrations of an unloaded beam, whose weight can be neglected compared to its stiffness. The beam is simply supported at x = 0, L. (That is, it is resting on fulcrums there. u(0, t) = 0 means that the beam is resting on the fulcrum; u_xx(0, t) = 0 indicates that there is no bending force at that point.) The beam has initial displacement f(x) and velocity g(x). To determine the motion of the beam, solve
$$u_{tt} + a^2 u_{xxxx} = 0, \quad 0 < x < L, \quad t > 0,$$
$$u(x,0) = f(x), \quad u_t(x,0) = g(x),$$
$$u(0,t) = u_{xx}(0,t) = 0, \quad u(L,t) = u_{xx}(L,t) = 0,$$
with separation of variables.
Exercise 39.16
The temperature along a magnet winding of length L carrying a current I satisfies, for some α > 0,
$$u_t = u_{xx} + \alpha I^2 u.$$
The ends of the winding are kept at zero, i.e.,
$$u(0,t) = u(L,t) = 0,$$
and the initial temperature distribution is
$$u(x,0) = g(x).$$
Find u(x, t) and determine the critical current I_CR, which is defined as the least current at which the winding begins to heat up exponentially. Suppose that α < 0, so that the winding has a negative coefficient of resistance with respect to temperature. What can you say about the critical current in this case?
Exercise 39.17
The e-folding time of a decaying function of time is the time interval, τ_e, in which the magnitude of the function is reduced by at least 1/e. Thus if u(x, t) = e^{−αt} f(x) + e^{−βt} g(x) with α > β > 0, then τ_e = 1/β. A heat-conducting body has its exterior surface maintained at temperature zero. Initially the interior of the body is at the uniform temperature T > 0. Find the e-folding time of the body if it is:
a) An infinite slab of thickness a.
b) An infinite cylinder of radius a.
c) A sphere of radius a.
Note that in (a) the temperature varies only in the z direction and in time; in (b) and (c) the temperature varies only in the radial direction and in time.
d) What are the e-folding times if the surfaces are perfectly insulated (i.e., ∂u/∂n = 0, where n is the exterior normal at the surface)?
Exercise 39.18
Solve the heat equation with a time-dependent diffusivity in the rectangle 0 < x < a, 0 < y < b. The top and bottom sides are held at temperature zero; the lateral sides are insulated. We have the initial-boundary value problem:
$$u_t = \kappa(t)\left(u_{xx} + u_{yy}\right), \quad 0 < x < a, \quad 0 < y < b, \quad t > 0,$$
$$u(x,0,t) = u(x,b,t) = 0,$$
$$u_x(0,y,t) = u_x(a,y,t) = 0,$$
$$u(x,y,0) = f(x,y).$$
The diffusivity, κ(t), is a known, positive function.
Exercise 39.19
A semi-circular rod of infinite extent is maintained at temperature T = 0 on the flat side and at T = 1 on the curved surface:
$$x^2 + y^2 = 1, \quad y > 0.$$
Find the steady state temperature in a cross section of the rod using separation of variables.
Exercise 39.20
Use separation of variables to find the steady state temperature u(x, y) in a slab, x ≥ 0, 0 ≤ y ≤ 1, which has zero temperature on the faces y = 0 and y = 1 and has a given distribution u(0, y) = f(y) on the edge x = 0, 0 ≤ y ≤ 1.
Exercise 39.21
Find u(r, θ) which satisfies
$$\Delta u = 0, \quad 0 < \theta < \alpha, \quad a < r < b,$$
subject to the boundary conditions
$$u(r,0) = u(r,\alpha) = 0, \quad u(a,\theta) = 0, \quad u(b,\theta) = f(\theta).$$
Exercise 39.22
a) A piano string of length L is struck, at time t = 0, by a flat hammer of width 2d centered at a point ξ, having velocity v. Find the ensuing motion, u(x, t), of the string, for which the wave speed is c.
b) Suppose the hammer is curved, rather than flat as above, so that the initial velocity distribution is
$$u_t(x,0) = \begin{cases} v\cos\left(\dfrac{\pi(x-\xi)}{2d}\right), & |x - \xi| < d, \\ 0, & |x - \xi| > d. \end{cases}$$
Find the ensuing motion.
c) Compare the kinetic energies of each harmonic in the two solutions. Where should the string be struck in order to maximize the energy in the n-th harmonic in each case?
Exercise 39.23
If the striking hammer is not perfectly rigid, then its effect must be included as a time dependent forcing term of the form
$$s(x,t) = \begin{cases} v\cos\left(\dfrac{\pi(x-\xi)}{2d}\right)\sin\left(\dfrac{\pi t}{\delta}\right), & \text{for } |x - \xi| < d,\ 0 < t < \delta, \\ 0, & \text{otherwise.} \end{cases}$$
Find the motion of the string for t > δ. Discuss the effects of the width of the hammer and duration of the blow with regard to the energy in overtones.
Exercise 39.24
Find the propagating modes in a square waveguide of side L for harmonic signals of frequency ω when the propagation speed of the medium is c. That is, we seek those solutions of
$$u_{tt} - c^2\Delta u = 0,$$
where u = u(x, y, z, t) has the form u(x, y, z, t) = v(x, y, z) e^{iωt}, which satisfy the conditions
$$u(x,y,z,t) = 0 \quad\text{for } x = 0, L, \quad y = 0, L, \quad z > 0,$$
$$\lim_{z\to\infty}|u| \ne \infty \quad\text{and}\quad \ne 0.$$
Indicate, in terms of inequalities involving k = ω/c and appropriate eigenvalues, λ_{n,m} say, for which n and m the solutions u_{n,m} satisfy the conditions.
Exercise 39.25
Find the modes of oscillation and their frequencies for a rectangular drum head of width a and height b. The modes of oscillation are eigensolutions of
$$u_{tt} = c^2\Delta u, \quad 0 < x < a, \quad 0 < y < b,$$
$$u(0,y) = u(a,y) = u(x,0) = u(x,b) = 0.$$
Exercise 39.26
Using separation of variables solve the heat equation
$$\phi_t = a^2\left(\phi_{xx} + \phi_{yy}\right)$$
in the rectangle 0 < x < l_x, 0 < y < l_y with initial conditions
$$\phi(x,y,0) = 1,$$
and boundary conditions
$$\phi(0,y,t) = \phi(l_x,y,t) = 0, \quad \phi_y(x,0,t) = \phi_y(x,l_y,t) = 0.$$
Exercise 39.27
Using polar coordinates and separation of variables solve the heat equation
$$\phi_t = a^2\nabla^2\phi$$
in the circle 0 < r < R₀ with initial conditions
$$\phi(r,\theta,0) = V,$$
where V is a constant, and boundary conditions
$$\phi(R_0,\theta,t) = 0.$$
(a) Show that for t > 0,
$$\phi(r,\theta,t) = 2V\sum_{n=1}^{\infty}\exp\left(-\frac{a^2 j_{0,n}^2}{R_0^2}\,t\right)\frac{J_0(j_{0,n}r/R_0)}{j_{0,n}J_1(j_{0,n})},$$
where j_{0,n} are the roots of J₀(x):
$$J_0(j_{0,n}) = 0, \quad n = 1, 2, \dots$$
Hint: The following identities may be of some help:
$$\int_0^{R_0} r\,J_0(j_{0,n}r/R_0)\,J_0(j_{0,m}r/R_0)\,dr = 0, \quad m \ne n,$$
$$\int_0^{R_0} r\,J_0^2(j_{0,n}r/R_0)\,dr = \frac{R_0^2}{2}J_1^2(j_{0,n}),$$
$$\int_0^{r_0} r\,J_0(\beta r)\,dr = \frac{r_0}{\beta}J_1(\beta r_0) \quad\text{for any } \beta.$$
(b) For any fixed r, 0 < r < R₀, use the asymptotic approximation for the J_n Bessel functions for large argument (this can be found in the notes for second quarter, AMa95b, or in any standard math tables) to determine the rate of decay of the terms of the series solution for φ at time t = 0.
Exercise 39.28
Consider the solution of the diffusion equation in spherical coordinates given by
$$x = r\sin\theta\cos\phi, \quad y = r\sin\theta\sin\phi, \quad z = r\cos\theta,$$
where r is the radius, θ is the polar angle, and φ is the azimuthal angle. We wish to solve the equation on the surface of the sphere given by r = R, 0 < θ < π, and 0 < φ < 2π. The diffusion equation for the solution Ψ(θ, φ, t) in these coordinates on the surface of the sphere becomes
$$\frac{\partial\Psi}{\partial t} = \frac{a^2}{R^2}\left[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Psi}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2\Psi}{\partial\phi^2}\right], \tag{39.7}$$
where a is a positive constant.
(a) Using separation of variables show that a solution can be found in the form
$$\Psi(\theta,\phi,t) = T(t)\Theta(\theta)\Phi(\phi),$$
where T, Θ, Φ obey ordinary differential equations in t, θ, and φ respectively. Derive the ordinary differential equations for T and Θ, and show that the differential equation obeyed by Φ is given by
$$\frac{d^2\Phi}{d\phi^2} - c\Phi = 0,$$
where c is a constant.
(b) Assuming that Ψ(θ, φ, t) is determined over the full range of the azimuthal angle, 0 < φ < 2π, determine the allowable values of the separation constant c and the corresponding allowable functions Φ. Using these values of c and letting x = cos θ, rewrite in terms of the variable x the differential equation satisfied by Θ. What are appropriate boundary conditions for Θ? The resulting equation is known as the generalized or associated Legendre equation.
(c) Assume next that the initial conditions for Ψ are chosen such that
$$\Psi(\theta,\phi,t=0) = f(\theta),$$
where f(θ) is a specified function which is regular at the north and south poles (that is, θ = 0 and θ = π). Note that the initial condition is independent of the azimuthal angle φ. Show that in this case the method of separation of variables gives a series solution for Ψ of the form
$$\Psi(\theta,t) = \sum_{l=0}^{\infty} A_l\exp\left(-\lambda_l^2 t\right)P_l(\cos\theta),$$
where P_l(x) is the l-th Legendre polynomial, and determine the constants λ_l as a function of the index l.
(d) Solve for Ψ(θ, t), t > 0, given that f(θ) = 2cos²θ − 1.
Useful facts:
$$\frac{d}{dx}\left[(1 - x^2)\frac{dP_l(x)}{dx}\right] + l(l+1)P_l(x) = 0$$
$$P_0(x) = 1, \quad P_1(x) = x, \quad P_2(x) = \frac{3}{2}x^2 - \frac{1}{2}$$
$$\int_{-1}^{1} P_l(x)P_m(x)\,dx = \begin{cases} 0 & \text{if } l \ne m, \\ \dfrac{2}{2l+1} & \text{if } l = m. \end{cases}$$
Exercise 39.29
Let φ(x, y) satisfy Laplace's equation
$$\phi_{xx} + \phi_{yy} = 0$$
in the rectangle 0 < x < 1, 0 < y < 2, with φ(x, 2) = x(1 − x), and with φ = 0 on the other three sides. Use a series solution to determine φ inside the rectangle. How many terms are required to give φ(½, 1) with about 1% (also 0.1%) accuracy? How about φ_x(½, 1)?
Exercise 39.30
Let u(r, θ, φ) satisfy Laplace's equation in spherical coordinates in each of the two regions r < a, r > a, with u → 0 as r → ∞. Let
$$\lim_{r\to a^+} u(r,\theta,\phi) - \lim_{r\to a^-} u(r,\theta,\phi) = 0,$$
$$\lim_{r\to a^+} u_r(r,\theta,\phi) - \lim_{r\to a^-} u_r(r,\theta,\phi) = P_n^m(\cos\theta)\sin(m\phi),$$
where m and n ≥ m are integers. Find u in r < a and r > a. In electrostatics, this problem corresponds to that of determining the potential of a spherical harmonic type charge distribution over the surface of the sphere. In this way one can determine the potential due to an arbitrary surface charge distribution, since any charge distribution can be expressed as a series of spherical harmonics.
Exercise 39.31
Obtain a formula analogous to the Poisson formula to solve the Neumann problem for the circular region 0 ≤ r < R, 0 ≤ θ < 2π. That is, determine a solution φ(r, θ) to Laplace's equation
$$\nabla^2\phi = 0$$
in polar coordinates given φ_r(R, θ). Show that
$$\phi(r,\theta) = -\frac{R}{2\pi}\int_0^{2\pi}\phi_r(R,\alpha)\ln\left(1 - \frac{2r}{R}\cos(\theta - \alpha) + \frac{r^2}{R^2}\right)d\alpha$$
within an arbitrary additive constant.
Exercise 39.32
Investigate solutions of
$$\phi_t = a^2\phi_{xx}$$
obtained by setting the separation constant C = (α + iβ)² in the equations obtained by assuming φ = X(x)T(t):
$$\frac{T'}{T} = C, \qquad \frac{X''}{X} = \frac{C}{a^2}.$$
39.9 Hints
Hint 39.1
Hint 39.2
Impose the boundary conditions
$$u(0,t) = u(2\pi,t), \quad u_\theta(0,t) = u_\theta(2\pi,t).$$
Hint 39.3
Apply the separation of variables u(x, y) = X(x)Y(y). Solve an eigenvalue problem for X(x).
Hint 39.4
Hint 39.5
Hint 39.6
Hint 39.7
There are two ways to solve the problem. For the first method, expand the solution in a series of the form
$$u(x,t) = \sum_{n=1}^{\infty} a_n(t)\sin\left(\frac{n\pi x}{L}\right).$$
Because of the inhomogeneous boundary conditions, the convergence of the series will not be uniform. You can differentiate the series with respect to t, but not with respect to x. Multiply the partial differential equation by the eigenfunction sin(nπx/L) and integrate from x = 0 to x = L. Use integration by parts to move derivatives in x from u to the eigenfunctions. This process will yield a first order, ordinary differential equation for each of the a_n's.
For the second method: Make the change of variables v(x, t) = u(x, t) − μ(x), where μ(x) is the equilibrium temperature distribution, to obtain a problem with homogeneous boundary conditions.
Hint 39.8
Hint 39.9
Hint 39.10
Hint 39.11
Hint 39.12
Hint 39.13
Use separation of variables to find eigen-solutions of the partial differential equation that satisfy the homogeneous boundary conditions. There will be two eigen-solutions for each eigenvalue. Expand u(x, t) in a series of the eigen-solutions. Use the two initial conditions to determine the constants.
Hint 39.14
Expand the solution in a series of eigenfunctions in x. Determine these eigenfunctions by using separation of variables on the homogeneous partial differential equation. You will find that the answer has the form
$$u(x,y) = \sum_{n=1}^{\infty} u_n(y)\sin\left(\frac{n\pi x}{a}\right).$$
Substitute this series into the partial differential equation to determine ordinary differential equations for each of the u_n's. The boundary conditions on u(x, y) will give you boundary conditions for the u_n's. Solve these ordinary differential equations with Green functions.
Hint 39.15
Solve this problem by expanding the solution in a series of eigen-solutions that satisfy the partial differential equation and the homogeneous boundary conditions. Use the initial conditions to determine the coefficients in the expansion.
Hint 39.16
Use separation of variables to find eigen-solutions that satisfy the partial differential equation and the homogeneous boundary conditions. The solution is a linear combination of the eigen-solutions. The whole solution will be exponentially decaying if each of the eigen-solutions is exponentially decaying.
Hint 39.17
For parts (a), (b) and (c) use separation of variables. For part (b) the eigen-solutions will involve Bessel functions. For part (c) the eigen-solutions will involve spherical Bessel functions. Part (d) is trivial.
Hint 39.18
The solution is a linear combination of eigen-solutions of the partial differential equation that satisfy the homogeneous boundary conditions. Determine the coefficients in the expansion with the initial condition.
Hint 39.19
The problem is
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0, \quad 0 < r < 1, \quad 0 < \theta < \pi,$$
$$u(r,0) = u(r,\pi) = 0, \quad u(0,\theta) = 0, \quad u(1,\theta) = 1.$$
The solution is a linear combination of eigen-solutions that satisfy the partial differential equation and the three homogeneous boundary conditions.
Hint 39.20
Hint 39.21
Hint 39.22
Hint 39.23
Hint 39.24
Hint 39.25
Hint 39.26
Hint 39.27
Hint 39.28
Hint 39.29
Hint 39.30
Hint 39.31
Hint 39.32
39.10 Solutions
Solution 39.1
We expand the solution in a Fourier series.
$$\phi = \frac{1}{2}a_0(r) + \sum_{n=1}^{\infty} a_n(r)\cos(n\theta) + \sum_{n=1}^{\infty} b_n(r)\sin(n\theta)$$
We substitute the series into Laplace's equation to determine ordinary differential equations for the coefficients.
$$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\phi}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2\phi}{\partial\theta^2} = 0$$
$$a_0'' + \frac{1}{r}a_0' = 0, \quad a_n'' + \frac{1}{r}a_n' - \frac{n^2}{r^2}a_n = 0, \quad b_n'' + \frac{1}{r}b_n' - \frac{n^2}{r^2}b_n = 0$$
The solutions that are bounded at r = 0 are, to within multiplicative constants,
$$a_0(r) = 1, \quad a_n(r) = r^n, \quad b_n(r) = r^n.$$
Thus φ(r, θ) has the form
$$\phi(r,\theta) = \frac{1}{2}c_0 + \sum_{n=1}^{\infty} c_n r^n\cos(n\theta) + \sum_{n=1}^{\infty} d_n r^n\sin(n\theta)$$
We apply the boundary condition at r = R.
$$\phi(R,\theta) = \frac{1}{2}c_0 + \sum_{n=1}^{\infty} c_n R^n\cos(n\theta) + \sum_{n=1}^{\infty} d_n R^n\sin(n\theta)$$
The coefficients are
$$c_0 = \frac{1}{\pi}\int_0^{2\pi}\phi(R,\alpha)\,d\alpha, \quad c_n = \frac{1}{\pi R^n}\int_0^{2\pi}\phi(R,\alpha)\cos(n\alpha)\,d\alpha, \quad d_n = \frac{1}{\pi R^n}\int_0^{2\pi}\phi(R,\alpha)\sin(n\alpha)\,d\alpha.$$
We substitute the coefficients into our series solution.
$$\phi(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi}\phi(R,\alpha)\,d\alpha + \frac{1}{\pi}\sum_{n=1}^{\infty}\left(\frac{r}{R}\right)^n\int_0^{2\pi}\phi(R,\alpha)\cos(n(\theta-\alpha))\,d\alpha$$
$$\phi(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi}\phi(R,\alpha)\,d\alpha + \frac{1}{\pi}\int_0^{2\pi}\phi(R,\alpha)\,\Re\left[\sum_{n=1}^{\infty}\left(\frac{r}{R}\right)^n\mathrm{e}^{in(\theta-\alpha)}\right]d\alpha$$
$$\phi(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi}\phi(R,\alpha)\,d\alpha + \frac{1}{\pi}\int_0^{2\pi}\phi(R,\alpha)\,\Re\left[\frac{\frac{r}{R}\mathrm{e}^{i(\theta-\alpha)}}{1 - \frac{r}{R}\mathrm{e}^{i(\theta-\alpha)}}\right]d\alpha$$
$$\phi(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi}\phi(R,\alpha)\,d\alpha + \frac{1}{\pi}\int_0^{2\pi}\phi(R,\alpha)\,\frac{\frac{r}{R}\cos(\theta-\alpha) - \left(\frac{r}{R}\right)^2}{1 - 2\frac{r}{R}\cos(\theta-\alpha) + \left(\frac{r}{R}\right)^2}\,d\alpha$$
$$\phi(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi}\phi(R,\alpha)\,d\alpha + \frac{1}{\pi}\int_0^{2\pi}\phi(R,\alpha)\,\frac{Rr\cos(\theta-\alpha) - r^2}{R^2 + r^2 - 2Rr\cos(\theta-\alpha)}\,d\alpha$$
$$\phi(r,\theta) = \frac{1}{2\pi}\int_0^{2\pi}\phi(R,\alpha)\,\frac{R^2 - r^2}{R^2 + r^2 - 2Rr\cos(\theta-\alpha)}\,d\alpha$$
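Poisson's formula can be checked numerically. The sketch below (my own; the function name and midpoint-rule quadrature are assumptions) evaluates the integral for given boundary data. Two sanity checks: constant boundary data must give the same constant (the kernel has mean value one), and boundary data cos α must give the harmonic extension (r/R) cos θ.

```python
import math

def poisson_disk(boundary, R, r, theta, m=4000):
    """Evaluate phi(r, theta) =
    (1/2pi) int_0^{2pi} boundary(a) (R^2 - r^2) / (R^2 + r^2 - 2 R r cos(theta - a)) da
    with the midpoint rule (spectrally accurate for smooth periodic data)."""
    d = 2 * math.pi / m
    total = 0.0
    for j in range(m):
        a = (j + 0.5) * d
        kernel = (R**2 - r**2) / (R**2 + r**2 - 2 * R * r * math.cos(theta - a))
        total += boundary(a) * kernel
    return total * d / (2 * math.pi)
```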
Solution 39.2
In order that the solution be continuously differentiable (which it must be in order to satisfy the differential equation), we impose the boundary conditions
$$u(0,t) = u(2\pi,t), \quad u_\theta(0,t) = u_\theta(2\pi,t).$$
We apply the separation of variables u(θ, t) = Θ(θ)T(t).
$$u_t = u_{\theta\theta}$$
$$\Theta T' = \Theta''T$$
$$\frac{T'}{T} = \frac{\Theta''}{\Theta} = -\lambda$$
We have the self-adjoint eigenvalue problem
$$\Theta'' + \lambda\Theta = 0, \quad \Theta(0) = \Theta(2\pi), \quad \Theta'(0) = \Theta'(2\pi),$$
which has the eigenvalues and orthonormal eigenfunctions
$$\lambda_n = n^2, \quad \Theta_n = \frac{1}{\sqrt{2\pi}}\mathrm{e}^{in\theta}, \quad n \in \mathbb{Z}.$$
Now we solve the problems for T_n(t) to obtain eigen-solutions of the heat equation.
$$T_n' = -n^2 T_n \quad\Rightarrow\quad T_n = \mathrm{e}^{-n^2 t}$$
The solution is a linear combination of the eigen-solutions.
$$u(\theta,t) = \sum_{n=-\infty}^{\infty} u_n\,\frac{1}{\sqrt{2\pi}}\,\mathrm{e}^{in\theta}\,\mathrm{e}^{-n^2 t}$$
We use the initial conditions to determine the coefficients.
$$u(\theta,0) = \sum_{n=-\infty}^{\infty} u_n\,\frac{1}{\sqrt{2\pi}}\,\mathrm{e}^{in\theta} = f(\theta)$$
$$u_n = \frac{1}{\sqrt{2\pi}}\int_0^{2\pi}\mathrm{e}^{-in\theta}f(\theta)\,d\theta$$
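This complex Fourier solution can be evaluated numerically. The sketch below (mine, not the text's; names and quadrature choices are assumptions) computes the coefficients u_n by the midpoint rule and sums a truncated, symmetric range of modes. For f(θ) = cos θ the exact solution is e^{−t} cos θ.

```python
import cmath
import math

def ring_heat(f, theta, t, n_max=16, m=4096):
    """u(theta, t) = sum_n u_n e^{i n theta} e^{-n^2 t} / sqrt(2 pi), where
    u_n = (1/sqrt(2 pi)) int_0^{2 pi} e^{-i n s} f(s) ds (midpoint rule)."""
    d = 2 * math.pi / m
    samples = [((j + 0.5) * d, f((j + 0.5) * d)) for j in range(m)]
    u = 0.0 + 0.0j
    for n in range(-n_max, n_max + 1):
        un = sum(cmath.exp(-1j * n * s) * fs for s, fs in samples) \
             * d / math.sqrt(2 * math.pi)
        u += un * cmath.exp(1j * n * theta) * math.exp(-n * n * t) \
             / math.sqrt(2 * math.pi)
    return u.real
```

Since the modes come in conjugate pairs, the imaginary parts cancel and the real part is returned.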
Solution 39.3
Substituting u(x, y) = X(x)Y(y) into the partial differential equation yields
$$\frac{X''}{X} = -\frac{Y''}{Y} = -\lambda.$$
With the homogeneous boundary conditions, we have the two problems
$$X'' + \lambda X = 0, \quad X(0) = X(1) = 0,$$
$$Y'' - \lambda Y = 0, \quad Y(1) = 0.$$
The eigenvalues and orthonormal eigenfunctions for X(x) are
$$\lambda_n = (n\pi)^2, \quad X_n = \sqrt{2}\sin(n\pi x).$$
The general solution for Y is
$$Y_n = a\cosh(n\pi y) + b\sinh(n\pi y).$$
The solution that satisfies the right homogeneous boundary condition (up to a multiplicative constant) is
$$Y_n = \sinh(n\pi(1 - y)).$$
u(x, y) is a linear combination of the eigen-solutions.
$$u(x,y) = \sum_{n=1}^{\infty} u_n\sqrt{2}\sin(n\pi x)\sinh(n\pi(1 - y))$$
We use the inhomogeneous boundary condition to determine the coefficients.
$$u(x,0) = \sum_{n=1}^{\infty} u_n\sqrt{2}\sin(n\pi x)\sinh(n\pi) = f(x)$$
$$u_n = \frac{\sqrt{2}}{\sinh(n\pi)}\int_0^1 \sin(n\pi\xi)f(\xi)\,d\xi$$
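This series is straightforward to evaluate numerically. The sketch below (my own; names and the midpoint-rule quadrature are assumptions) computes the coefficients and sums a truncated series. For f(x) = sin(πx), only the n = 1 term survives and the exact solution is sin(πx) sinh(π(1 − y))/sinh(π).

```python
import math

def laplace_square(f, x, y, n_terms=40, m=2000):
    """u(x, y) = sum_n u_n sqrt(2) sin(n pi x) sinh(n pi (1 - y)), with
    u_n = sqrt(2)/sinh(n pi) * int_0^1 sin(n pi xi) f(xi) dxi (midpoint rule)."""
    d = 1.0 / m
    xs = [(j + 0.5) * d for j in range(m)]
    total = 0.0
    for n in range(1, n_terms + 1):
        # Fourier sine coefficient of f against sqrt(2) sin(n pi x).
        coeff = math.sqrt(2) * sum(math.sin(n * math.pi * xi) * f(xi) for xi in xs) * d
        total += (coeff / math.sinh(n * math.pi)) * math.sqrt(2) \
                 * math.sin(n * math.pi * x) * math.sinh(n * math.pi * (1 - y))
    return total
```

Note that the ratio sinh(nπ(1 − y))/sinh(nπ) ≈ e^{−nπy} decays rapidly in n, so the series converges quickly for y > 0.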
Solution 39.4
Substituting u(r, θ) = R(r)Θ(θ) yields
$$R''\Theta + \frac{1}{r}R'\Theta + \frac{1}{r^2}R\Theta'' = 0$$
$$r^2\frac{R''}{R} + r\frac{R'}{R} = -\frac{\Theta''}{\Theta} = \lambda$$
$$r^2 R'' + rR' - \lambda R = 0, \qquad \Theta'' + \lambda\Theta = 0$$
We assume that u is a strong solution of the partial differential equation and is thus twice continuously differentiable (u ∈ C²). In particular, this implies that R and Θ are bounded and that Θ is continuous and has a continuous first derivative along θ = 0. This gives us the problems
$$\Theta'' + \lambda\Theta = 0, \quad \Theta(0) = \Theta(2\pi), \quad \Theta'(0) = \Theta'(2\pi),$$
$$r^2 R'' + rR' - \lambda R = 0, \quad R \text{ is bounded.}$$
We consider negative, zero and positive values of λ in solving the equation for Θ.

λ < 0. The general solution for Θ is
$$\Theta = a\cosh\left(\sqrt{-\lambda}\,\theta\right) + b\sinh\left(\sqrt{-\lambda}\,\theta\right).$$
$$\Theta(0) = \Theta(2\pi) \quad\Rightarrow\quad a = 0, \qquad \Theta'(0) = \Theta'(2\pi) \quad\Rightarrow\quad b = 0.$$
There are no negative eigenvalues.
λ = 0. The general solution for Θ is
$$\Theta = a + b\theta.$$
$$\Theta(0) = \Theta(2\pi) \quad\Rightarrow\quad b = 0, \qquad \Theta'(0) = \Theta'(2\pi) \quad\Rightarrow\quad a \text{ is arbitrary.}$$
We have the eigenvalue and eigenfunction
$$\lambda_0 = 0, \quad \Theta_0 = \frac{1}{2}.$$

λ > 0. The general solution is
$$\Theta = a\cos\left(\sqrt{\lambda}\,\theta\right) + b\sin\left(\sqrt{\lambda}\,\theta\right).$$
Applying the boundary conditions to find the eigenvalues and eigenfunctions,
$$\Theta(0) = \Theta(2\pi) \quad\Rightarrow\quad a = a\cos\left(2\pi\sqrt{\lambda}\right) + b\sin\left(2\pi\sqrt{\lambda}\right)$$
$$\cos\left(2\pi\sqrt{\lambda}\right) = 1, \quad \sin\left(2\pi\sqrt{\lambda}\right) = 0$$
$$\sqrt{\lambda} = n, \quad\text{for } n = 1, 2, 3, \dots$$
$$\lambda = n^2, \quad\text{for } n = 1, 2, 3, \dots$$
The boundary condition Θ'(0) = Θ'(2π) is satisfied for these values of λ. This gives us the eigenvalues and eigenfunctions
$$\lambda_n = n^2, \quad \Theta_n^{(1)} = \cos(n\theta), \quad \Theta_n^{(2)} = \sin(n\theta), \quad\text{for } n = 1, 2, 3, \dots$$
Now to find the bounded solutions of the equation for R. Substituting R = r^σ yields
$$\sigma(\sigma - 1) + \sigma - \lambda = 0 \quad\Rightarrow\quad \sigma = \pm\sqrt{\lambda}.$$
There are two cases to consider.

λ₀ = 0.
$$R = a + b\log r$$
Boundedness demands that b = 0. Thus we have the solution
$$R = 1.$$

λ_n = n² > 0.
$$R = ar^n + br^{-n}$$
Boundedness demands that b = 0. Thus we have the solution
$$R = r^n.$$

The general solution for u is
$$u(r,\theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos(n\theta) + b_n\sin(n\theta)\right]r^n.$$
The inhomogeneous boundary condition will determine the coefficients.
$$u(1,\theta) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos(n\theta) + b_n\sin(n\theta)\right] = f(\theta)$$
The coefficients are the Fourier coefficients of f(θ).
$$a_n = \frac{1}{\pi}\int_0^{2\pi} f(\alpha)\cos(n\alpha)\,d\alpha, \qquad b_n = \frac{1}{\pi}\int_0^{2\pi} f(\alpha)\sin(n\alpha)\,d\alpha$$
Solution 39.5
A normal mode of frequency ω satisfies
$$v(r,\theta,t) = u(r,\theta)\,\mathrm{e}^{i\omega t}.$$
Substituting this into the partial differential equation and the boundary condition yields
$$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} = -\frac{\omega^2}{c^2}u, \quad u(1,\theta) = 0,$$
$$\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} + k^2 u = 0, \quad u(1,\theta) = 0,$$
where k = ω/c. Applying separation of variables to the partial differential equation for u, with u = R(r)Θ(θ),
$$r^2 R''\Theta + rR'\Theta + R\Theta'' + k^2 r^2 R\Theta = 0,$$
$$r^2\frac{R''}{R} + r\frac{R'}{R} + k^2 r^2 = -\frac{\Theta''}{\Theta} = \nu^2.$$
Now we have the two ordinary differential equations
$$R'' + \frac{1}{r}R' + \left(k^2 - \frac{\nu^2}{r^2}\right)R = 0, \quad R(0) \text{ bounded}, \quad R(1) = 0,$$
$$\Theta'' + \nu^2\Theta = 0, \quad \Theta(-\pi) = \Theta(\pi), \quad \Theta'(-\pi) = \Theta'(\pi).$$
The eigenvalues and eigenfunctions for Θ are
$$\nu_n = n, \quad n = 0, 1, 2, \dots,$$
$$\Theta_0 = \frac{1}{2}, \quad \Theta_n^{(1)} = \cos(n\theta), \quad \Theta_n^{(2)} = \sin(n\theta), \quad n = 1, 2, 3, \dots$$
The differential equation for R is then
$$R'' + \frac{1}{r}R' + \left(k^2 - \frac{n^2}{r^2}\right)R = 0, \quad R(0) \text{ bounded}, \quad R(1) = 0.$$
1620
The general solution of the dierential equation is a linear combination of Bessel functions of order n.
R(r) = c
1
J
n
(kr) +c
2
Y
n
(kr)
Since Y
n
(kr) is unbounded at r = 0, the solution has the form
R(r) = cJ
n
(kr).
Applying the second boundary condition yields
J
n
(k) = 0.
Thus the eigenvalues and eigenfunctions for R are
k
nm
= j
nm
, R
nm
= J
n
(j
nm
r),
where j
nm
is the m
th
positive root of J
n
. Combining the above results, the normal modes of oscillation are
v
0m
=
1
2
J
0
(j
0m
r) e
it
, m = 1, 2, 3, . . . ,
v
nm
= cos(n +)J
nm
(j
nm
r) e
it
, n, m = 1, 2, 3, . . . .
u
22
and u
33
are plotted in Figure 39.2.
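The mode frequencies are fixed by the Bessel zeros, $\omega_{nm} = c\,j_{nm}$. A quick check with SciPy (assuming a unit-radius membrane; `c` is the wave speed):

```python
import numpy as np
from scipy.special import jn_zeros, jv

def drum_frequencies(nmax, mmax, c=1.0):
    """Normal-mode frequencies omega_{nm} = c * j_{nm} of a unit disk,
    where j_{nm} is the m-th positive zero of J_n."""
    return {(n, m): c * jn_zeros(n, mmax)[m - 1]
            for n in range(nmax + 1) for m in range(1, mmax + 1)}
```

The fundamental is $\omega_{01} = c\,j_{01} \approx 2.4048\,c$, and $J_0$ indeed vanishes there.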
Figure 39.2: The Normal Modes $u_{22}$ and $u_{33}$

Solution 39.6
We will expand the solution in a complete, orthogonal set of functions $X_n(x)$, where the coefficients are functions of $t$,
$$\phi = \sum_n T_n(t) X_n(x).$$
We will use separation of variables to determine a convenient set $X_n$. We substitute $\phi = T(t)X(x)$ into the diffusion equation.
$$\phi_t = a^2 \phi_{xx}$$
$$X T' = a^2 X'' T$$
$$\frac{T'}{a^2 T} = \frac{X''}{X} = -\lambda$$
$$T' = -a^2 \lambda T, \qquad X'' + \lambda X = 0$$
Note that in order to satisfy $\phi(0,t) = \phi(l,t) = 0$, the $X_n$ must satisfy the same homogeneous boundary conditions, $X_n(0) = X_n(l) = 0$. This gives us a Sturm-Liouville problem for $X(x)$.
$$X'' + \lambda X = 0, \qquad X(0) = X(l) = 0$$
$$\lambda_n = \left(\frac{n\pi}{l}\right)^2, \qquad X_n = \sin\left(\frac{n\pi x}{l}\right), \quad n \in \mathbb{Z}^+$$
Thus we seek a solution of the form
$$\phi = \sum_{n=1}^\infty T_n(t) \sin\left(\frac{n\pi x}{l}\right). \tag{39.8}$$
This solution automatically satisfies the boundary conditions. We will assume that we can differentiate it. We will substitute this form into the diffusion equation and the initial condition to determine the coefficients in the series, $T_n(t)$. First we substitute Equation 39.8 into the partial differential equation for $\phi$ to determine ordinary differential equations for the $T_n$.
$$\phi_t = a^2 \phi_{xx}$$
$$\sum_{n=1}^\infty T_n'(t) \sin\left(\frac{n\pi x}{l}\right) = -a^2 \sum_{n=1}^\infty \left(\frac{n\pi}{l}\right)^2 T_n(t) \sin\left(\frac{n\pi x}{l}\right)$$
$$T_n' = -\left(\frac{a n\pi}{l}\right)^2 T_n$$
Now we substitute Equation 39.8 into the initial condition for $\phi$ to determine initial conditions for the $T_n$.
$$\sum_{n=1}^\infty T_n(0) \sin\left(\frac{n\pi x}{l}\right) = \phi(x, 0)$$
$$T_n(0) = \frac{\int_0^l \sin(n\pi x/l)\,\phi(x,0)\,dx}{\int_0^l \sin^2(n\pi x/l)\,dx} = \frac{2}{l}\int_0^l \sin\left(\frac{n\pi x}{l}\right)\phi(x,0)\,dx$$
$$T_n(0) = \frac{2}{l}\int_0^{l/2} \sin\left(\frac{n\pi x}{l}\right) x\,dx + \frac{2}{l}\int_{l/2}^{l} \sin\left(\frac{n\pi x}{l}\right)(l - x)\,dx$$
$$T_n(0) = \frac{4l}{n^2\pi^2}\sin\left(\frac{n\pi}{2}\right)$$
$$T_{2n-1}(0) = (-1)^{n+1}\frac{4l}{(2n-1)^2\pi^2}, \qquad T_{2n}(0) = 0, \quad n \in \mathbb{Z}^+$$
We solve the ordinary differential equations for $T_n$ subject to the initial conditions.
$$T_{2n-1}(t) = (-1)^{n+1}\frac{4l}{(2n-1)^2\pi^2}\exp\left( -\left(\frac{a\pi(2n-1)}{l}\right)^2 t \right), \qquad T_{2n}(t) = 0, \quad n \in \mathbb{Z}^+$$
This determines the series representation of the solution.
$$\phi = \frac{4}{l}\sum_{n=1}^\infty (-1)^{n+1}\left(\frac{l}{\pi(2n-1)}\right)^2 \exp\left( -\left(\frac{a\pi(2n-1)}{l}\right)^2 t \right)\sin\left(\frac{(2n-1)\pi x}{l}\right)$$
From the initial condition, we know that the solution at $t = 0$ is $C^0$. That is, it is continuous, but not differentiable. The series representation of the solution at $t = 0$ is
$$\phi = \frac{4}{l}\sum_{n=1}^\infty (-1)^{n+1}\left(\frac{l}{\pi(2n-1)}\right)^2 \sin\left(\frac{(2n-1)\pi x}{l}\right).$$
That the coefficients decay as $1/n^2$ corroborates that $\phi(x,0)$ is $C^0$.
The derivatives of $\phi$ with respect to $x$ are
$$\frac{\partial^{2m-1}\phi}{\partial x^{2m-1}} = \frac{4(-1)^{m+1}}{l}\sum_{n=1}^\infty (-1)^{n+1}\left(\frac{(2n-1)\pi}{l}\right)^{2m-3}\exp\left( -\left(\frac{a\pi(2n-1)}{l}\right)^2 t \right)\cos\left(\frac{(2n-1)\pi x}{l}\right)$$
$$\frac{\partial^{2m}\phi}{\partial x^{2m}} = \frac{4(-1)^{m}}{l}\sum_{n=1}^\infty (-1)^{n+1}\left(\frac{(2n-1)\pi}{l}\right)^{2m-2}\exp\left( -\left(\frac{a\pi(2n-1)}{l}\right)^2 t \right)\sin\left(\frac{(2n-1)\pi x}{l}\right)$$
For any fixed $t > 0$, the coefficients in the series for $\partial^n\phi/\partial x^n$ decay exponentially. These series are uniformly convergent in $x$. Thus for any fixed $t > 0$, $\phi$ is $C^\infty$ in $x$.
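A partial sum of this series is straightforward to evaluate; the sketch below assumes unit diffusivity and unit length by default and uses the triangular initial condition from the problem.

```python
import numpy as np

def phi(x, t, a=1.0, l=1.0, nterms=200):
    """Partial sum of the series solution of phi_t = a^2 phi_xx with the
    triangular initial condition phi(x,0) = x on [0,l/2], l - x on [l/2,l]."""
    n = np.arange(1, nterms + 1)
    k = 2 * n - 1                                 # only odd modes appear
    coeff = (-1.0)**(n + 1) * 4.0 * l / (k * np.pi)**2
    decay = np.exp(-(a * np.pi * k / l)**2 * t)
    return float(np.sum(coeff * decay * np.sin(k * np.pi * x / l)))
```

At $t = 0$ the partial sum reproduces the triangle to within the $O(1/n)$ truncation tail; for $t > 0$ the exponential factors make the sum decay rapidly.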
Solution 39.7
$$u_t = u_{xx}, \quad 0 < x < L, \quad t > 0,$$
$$u(0,t) = T_0, \qquad u(L,t) = T_1, \qquad u(x,0) = f(x).$$

Method 1. We solve this problem with an eigenfunction expansion in $x$. To find an appropriate set of eigenfunctions, we apply the separation of variables $u(x,t) = X(x)T(t)$ to the partial differential equation with the homogeneous boundary conditions, $u(0,t) = u(L,t) = 0$.
$$(XT)_t = (XT)_{xx}$$
$$XT' = X''T$$
$$\frac{T'}{T} = \frac{X''}{X} = -\lambda^2$$
We have the eigenvalue problem,
$$X'' + \lambda^2 X = 0, \qquad X(0) = X(L) = 0,$$
which has the solutions,
$$\lambda_n = \frac{n\pi}{L}, \qquad X_n = \sin\left(\frac{n\pi x}{L}\right), \quad n \in \mathbb{N}.$$
We expand the solution of the partial differential equation in terms of these eigenfunctions.
$$u(x,t) = \sum_{n=1}^\infty a_n(t)\sin\left(\frac{n\pi x}{L}\right)$$
Because of the inhomogeneous boundary conditions, the convergence of the series will not be uniform. We can differentiate the series with respect to $t$, but not with respect to $x$. We multiply the partial differential equation by an eigenfunction and integrate from $x = 0$ to $x = L$. We use integration by parts to move derivatives from $u$ to the eigenfunction.
$$u_t - u_{xx} = 0$$
$$\int_0^L (u_t - u_{xx})\sin\left(\frac{m\pi x}{L}\right)dx = 0$$
$$\int_0^L \left( \sum_{n=1}^\infty a_n'(t)\sin\left(\frac{n\pi x}{L}\right) \right)\sin\left(\frac{m\pi x}{L}\right)dx - \left[ u_x\sin\left(\frac{m\pi x}{L}\right) \right]_0^L + \frac{m\pi}{L}\int_0^L u_x\cos\left(\frac{m\pi x}{L}\right)dx = 0$$
$$\frac{L}{2}a_m'(t) + \frac{m\pi}{L}\left[ u\cos\left(\frac{m\pi x}{L}\right) \right]_0^L + \left(\frac{m\pi}{L}\right)^2\int_0^L u\sin\left(\frac{m\pi x}{L}\right)dx = 0$$
$$\frac{L}{2}a_m'(t) + \frac{m\pi}{L}\left( (-1)^m u(L,t) - u(0,t) \right) + \left(\frac{m\pi}{L}\right)^2\int_0^L \left( \sum_{n=1}^\infty a_n(t)\sin\left(\frac{n\pi x}{L}\right) \right)\sin\left(\frac{m\pi x}{L}\right)dx = 0$$
$$\frac{L}{2}a_m'(t) + \frac{m\pi}{L}\left( (-1)^m T_1 - T_0 \right) + \frac{L}{2}\left(\frac{m\pi}{L}\right)^2 a_m(t) = 0$$
$$a_m'(t) + \left(\frac{m\pi}{L}\right)^2 a_m(t) = \frac{2m\pi}{L^2}\left( T_0 - (-1)^m T_1 \right)$$
Now we have a first order differential equation for each of the $a_n$'s. We obtain initial conditions for each of the $a_n$'s from the initial condition for $u(x,t)$.
$$u(x,0) = f(x)$$
$$\sum_{n=1}^\infty a_n(0)\sin\left(\frac{n\pi x}{L}\right) = f(x)$$
$$a_n(0) = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx \equiv f_n$$
By solving the first order differential equation for $a_n(t)$, we obtain
$$a_n(t) = \frac{2(T_0 - (-1)^n T_1)}{n\pi} + e^{-(n\pi/L)^2 t}\left( f_n - \frac{2(T_0 - (-1)^n T_1)}{n\pi} \right).$$
Note that the series does not converge uniformly due to the $1/n$ term.

Method 2. For our second method we transform the problem to one with homogeneous boundary conditions so that we can use the partial differential equation to determine the time dependence of the eigen-solutions. We make the change of variables $v(x,t) = u(x,t) - \mu(x)$ where $\mu(x)$ is some function that satisfies the inhomogeneous boundary conditions. If possible, we want $\mu(x)$ to satisfy the partial differential equation as well. For this problem we can choose $\mu(x)$ to be the equilibrium solution which satisfies
$$\mu''(x) = 0, \qquad \mu(0) = T_0, \quad \mu(L) = T_1.$$
This has the solution
$$\mu(x) = T_0 + \frac{T_1 - T_0}{L}x.$$
With the change of variables,
$$v(x,t) = u(x,t) - \left( T_0 + \frac{T_1 - T_0}{L}x \right),$$
we obtain the problem
$$v_t = v_{xx}, \quad 0 < x < L, \quad t > 0,$$
$$v(0,t) = 0, \qquad v(L,t) = 0, \qquad v(x,0) = f(x) - \left( T_0 + \frac{T_1 - T_0}{L}x \right).$$
Now we substitute the separation of variables $v(x,t) = X(x)T(t)$ into the partial differential equation.
$$(XT)_t = (XT)_{xx}$$
$$\frac{T'}{T} = \frac{X''}{X} = -\lambda^2$$
Utilizing the boundary conditions at $x = 0, L$ we obtain the two ordinary differential equations,
$$T' = -\lambda^2 T,$$
$$X'' = -\lambda^2 X, \qquad X(0) = X(L) = 0.$$
The problem for $X$ is a regular Sturm-Liouville problem and has the solutions
$$\lambda_n = \frac{n\pi}{L}, \qquad X_n = \sin\left(\frac{n\pi x}{L}\right), \quad n \in \mathbb{N}.$$
The ordinary differential equation for $T$ becomes,
$$T_n' = -\left(\frac{n\pi}{L}\right)^2 T_n,$$
which, (up to a multiplicative constant), has the solution,
$$T_n = e^{-(n\pi/L)^2 t}.$$
Thus the eigenvalues and eigen-solutions of the partial differential equation are,
$$\lambda_n = \frac{n\pi}{L}, \qquad v_n = \sin\left(\frac{n\pi x}{L}\right)e^{-(n\pi/L)^2 t}, \quad n \in \mathbb{N}.$$
Let $v(x,t)$ have the series expansion,
$$v(x,t) = \sum_{n=1}^\infty a_n\sin\left(\frac{n\pi x}{L}\right)e^{-(n\pi/L)^2 t}.$$
We determine the coefficients in the expansion from the initial condition,
$$v(x,0) = \sum_{n=1}^\infty a_n\sin\left(\frac{n\pi x}{L}\right) = f(x) - \left( T_0 + \frac{T_1 - T_0}{L}x \right).$$
The coefficients in the expansion are the Fourier sine coefficients of $f(x) - \left( T_0 + \frac{T_1 - T_0}{L}x \right)$.
$$a_n = \frac{2}{L}\int_0^L \left( f(x) - \left( T_0 + \frac{T_1 - T_0}{L}x \right) \right)\sin\left(\frac{n\pi x}{L}\right)dx$$
$$a_n = f_n - \frac{2(T_0 - (-1)^n T_1)}{n\pi}$$
With the coefficients defined above, the solution for $u(x,t)$ is
$$u(x,t) = T_0 + \frac{T_1 - T_0}{L}x + \sum_{n=1}^\infty \left( f_n - \frac{2(T_0 - (-1)^n T_1)}{n\pi} \right)\sin\left(\frac{n\pi x}{L}\right)e^{-(n\pi/L)^2 t}.$$
Since the coefficients in the sum decay exponentially for $t > 0$, we see that the series is uniformly convergent for positive $t$. It is clear that the two solutions we have obtained are equivalent.
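Method 2 is the easier of the two to evaluate numerically: subtract the equilibrium from the initial data, take sine coefficients, and attach the exponential factors. A sketch (unit length and diffusivity assumed; the sine coefficients are computed with a hand-rolled composite trapezoidal rule):

```python
import numpy as np

def u(x, t, T0, T1, f, L=1.0, nterms=100, npts=2001):
    """Method-2 series: u = equilibrium solution + decaying sine series."""
    xg = np.linspace(0.0, L, npts)
    dx = xg[1] - xg[0]
    v0 = f(xg) - (T0 + (T1 - T0) * xg / L)   # initial data minus equilibrium
    total = T0 + (T1 - T0) * x / L
    for n in range(1, nterms + 1):
        g = v0 * np.sin(n * np.pi * xg / L)
        an = 2.0 / L * (np.sum(g) - 0.5 * (g[0] + g[-1])) * dx  # trapezoid
        total += an * np.sin(n * np.pi * x / L) * np.exp(-(n * np.pi / L)**2 * t)
    return total
```

If the initial data already equals the equilibrium profile, every $a_n$ vanishes and the solution is time-independent; for any other data the series relaxes to the equilibrium as $t$ grows.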
Solution 39.8
First we solve the eigenvalue problem for $\phi(x)$, which is the problem we would obtain if we applied separation of variables to the partial differential equation, $\psi_t = a^2\psi_{xx}$. We have the eigenvalues and orthonormal eigenfunctions
$$\lambda_n = \left(\frac{(2n-1)\pi}{2l}\right)^2, \qquad \phi_n(x) = \sqrt{\frac{2}{l}}\sin\left(\frac{(2n-1)\pi x}{2l}\right), \quad n \in \mathbb{Z}^+.$$
We expand the solution and inhomogeneity in Equation 39.5 in a series of the eigenfunctions.
$$\psi(x,t) = \sum_{n=1}^\infty T_n(t)\phi_n(x)$$
$$w(x,t) = \sum_{n=1}^\infty w_n(t)\phi_n(x), \qquad w_n(t) = \int_0^l \phi_n(x)w(x,t)\,dx$$
Since $\psi$ satisfies the same homogeneous boundary conditions as the $\phi_n$, we substitute the series into Equation 39.5 to determine differential equations for the $T_n(t)$.
$$\sum_{n=1}^\infty T_n'(t)\phi_n(x) = a^2\sum_{n=1}^\infty T_n(t)(-\lambda_n)\phi_n(x) + \sum_{n=1}^\infty w_n(t)\phi_n(x)$$
$$T_n'(t) = -a^2\left(\frac{(2n-1)\pi}{2l}\right)^2 T_n(t) + w_n(t)$$
Now we substitute the series for $\psi$ into its initial condition to determine initial conditions for the $T_n$.
$$\psi(x,0) = \sum_{n=1}^\infty T_n(0)\phi_n(x) = f(x)$$
$$T_n(0) = \int_0^l \phi_n(x)f(x)\,dx$$
We solve for $T_n(t)$ to determine the solution, $\psi(x,t)$.
$$T_n(t) = \exp\left( -\left(\frac{(2n-1)a\pi}{2l}\right)^2 t \right)\left( T_n(0) + \int_0^t w_n(\tau)\exp\left( \left(\frac{(2n-1)a\pi}{2l}\right)^2\tau \right)d\tau \right)$$
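The last formula is just variation of parameters for a scalar first-order equation; a minimal sketch (symbols as above, `w` an arbitrary forcing function):

```python
import numpy as np
from scipy.integrate import quad

def T_n(t, lam, T0, w):
    """Variation-of-parameters solution of T'(t) = -lam*T(t) + w(t), T(0) = T0:
    T(t) = e^(-lam t) * (T0 + integral_0^t w(tau) e^(lam tau) dtau)."""
    integral, _ = quad(lambda tau: w(tau) * np.exp(lam * tau), 0.0, t)
    return np.exp(-lam * t) * (T0 + integral)
```

For constant forcing $w \equiv c$ the exact solution is $c/\lambda + (T_0 - c/\lambda)e^{-\lambda t}$, which the quadrature reproduces.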
Solution 39.9
Separation of variables leads to the eigenvalue problem
$$\phi'' + \lambda\phi = 0, \qquad \phi(0) = 0, \qquad \phi'(l) + c\,\phi(l) = 0.$$
First we consider the case $\lambda = 0$. A set of solutions of the differential equation is $\{1, x\}$. The solution that satisfies the left boundary condition is $\phi(x) = x$. The right boundary condition imposes the constraint $1 + cl = 0$. Since $c$ is positive, this has no solutions. $\lambda = 0$ is not an eigenvalue.
Now we consider $\lambda \neq 0$. A set of solutions of the differential equation is $\{\cos(\sqrt{\lambda}\,x), \sin(\sqrt{\lambda}\,x)\}$. The solution that satisfies the left boundary condition is $\phi = \sin(\sqrt{\lambda}\,x)$. The right boundary condition imposes the constraint
$$c\sin\left(\sqrt{\lambda}\,l\right) + \sqrt{\lambda}\cos\left(\sqrt{\lambda}\,l\right) = 0$$
$$\tan\left(\sqrt{\lambda}\,l\right) = -\frac{\sqrt{\lambda}}{c}$$
For large $\lambda$, we can determine approximate solutions.
$$\sqrt{\lambda_n}\,l \approx \frac{(2n-1)\pi}{2}, \quad n \in \mathbb{Z}^+$$
$$\lambda_n \approx \left(\frac{(2n-1)\pi}{2l}\right)^2, \quad n \in \mathbb{Z}^+$$
The eigenfunctions are
$$\phi_n(x) = \frac{\sin\left(\sqrt{\lambda_n}\,x\right)}{\sqrt{\int_0^l \sin^2\left(\sqrt{\lambda_n}\,x\right)dx}}, \quad n \in \mathbb{Z}^+.$$
We expand $\psi(x,t)$ and $w(x,t)$ in series of the eigenfunctions.
$$\psi(x,t) = \sum_{n=1}^\infty T_n(t)\phi_n(x)$$
$$w(x,t) = \sum_{n=1}^\infty w_n(t)\phi_n(x), \qquad w_n(t) = \int_0^l \phi_n(x)w(x,t)\,dx$$
Since $\psi$ satisfies the same homogeneous boundary conditions as the $\phi_n$, we substitute the series into Equation 39.5 to determine differential equations for the $T_n(t)$.
$$\sum_{n=1}^\infty T_n'(t)\phi_n(x) = a^2\sum_{n=1}^\infty T_n(t)(-\lambda_n)\phi_n(x) + \sum_{n=1}^\infty w_n(t)\phi_n(x)$$
$$T_n'(t) = -a^2\lambda_n T_n(t) + w_n(t)$$
Now we substitute the series for $\psi$ into its initial condition to determine initial conditions for the $T_n$.
$$\psi(x,0) = \sum_{n=1}^\infty T_n(0)\phi_n(x) = f(x)$$
$$T_n(0) = \int_0^l \phi_n(x)f(x)\,dx$$
We solve for $T_n(t)$ to determine the solution, $\psi(x,t)$.
$$T_n(t) = \exp\left(-a^2\lambda_n t\right)\left( T_n(0) + \int_0^t w_n(\tau)\exp\left(a^2\lambda_n\tau\right)d\tau \right)$$
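The transcendental condition $\tan(\sqrt{\lambda}\,l) = -\sqrt{\lambda}/c$ has exactly one root of $s = \sqrt{\lambda}$ in each interval $\big((n - \tfrac12)\pi/l,\; n\pi/l\big)$, which makes bracketed root-finding reliable. A sketch (unit $l$ and $c$ assumed by default):

```python
import numpy as np
from scipy.optimize import brentq

def robin_eigenvalues(nmax, l=1.0, c=1.0):
    """Eigenvalues lambda_n from c*sin(s l) + s*cos(s l) = 0, s = sqrt(lambda).
    The n-th root of s lies in ((n - 1/2) pi / l, n pi / l) and approaches
    (2n - 1) pi / (2 l) for large n."""
    g = lambda s: c * np.sin(s * l) + s * np.cos(s * l)
    roots = [brentq(g, (n - 0.5) * np.pi / l + 1e-12, n * np.pi / l - 1e-12)
             for n in range(1, nmax + 1)]
    return np.array(roots)**2
```

With $l = c = 1$ the fifth root of $s$ is already within $0.1$ of the asymptotic value $9\pi/2$.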
Solution 39.10
First we seek a function $u(x,t)$ that satisfies the boundary conditions $u(0,t) = t$, $u_x(l,t) = -c\,u(l,t)$. We try a function of the form $u = (ax + b)t$. The left boundary condition imposes the constraint $b = 1$. We then apply the right boundary condition to determine $u$.
$$at = -c(al + 1)t$$
$$a = -\frac{c}{1 + cl}$$
$$u(x,t) = \left( 1 - \frac{cx}{1 + cl} \right)t$$
Now we define $\psi$ to be the difference of $\phi$ and $u$.
$$\psi(x,t) = \phi(x,t) - u(x,t)$$
$\psi$ satisfies an inhomogeneous diffusion equation with homogeneous boundary conditions.
$$(\psi + u)_t = a^2(\psi + u)_{xx} + 1$$
$$\psi_t = a^2\psi_{xx} + 1 + a^2 u_{xx} - u_t$$
$$\psi_t = a^2\psi_{xx} + \frac{cx}{1 + cl}$$
The initial and boundary conditions for $\psi$ are
$$\psi(x,0) = 0, \qquad \psi(0,t) = 0, \qquad \psi_x(l,t) = -c\,\psi(l,t).$$
We solved this system in Exercise 39.9. Just take
$$w(x,t) = \frac{cx}{1 + cl}, \qquad f(x) = 0.$$
The solution is
$$\psi(x,t) = \sum_{n=1}^\infty T_n(t)\phi_n(x),$$
$$T_n(t) = \int_0^t w_n\exp\left(-a^2\lambda_n(t - \tau)\right)d\tau,$$
$$w_n = \int_0^l \phi_n(x)\frac{cx}{1 + cl}\,dx.$$
This determines the solution for $\phi$.
Solution 39.11
First we solve this problem with a series expansion. We transform the problem to one with homogeneous boundary conditions. Note that
$$u(x) = \frac{x}{l + 1}$$
satisfies the boundary conditions. (It is the equilibrium solution.) We make the change of variables $\psi = \phi - u$. The problem for $\psi$ is
$$\psi_t = a^2\psi_{xx},$$
$$\psi(0,t) = 0, \qquad \psi(l,t) + \psi_x(l,t) = 0, \qquad \psi(x,0) = -\frac{x}{l + 1}.$$
This is a particular case of what we solved in Exercise 39.9. We apply the result of that problem. The solution for $\phi(x,t)$ is
$$\phi(x,t) = \frac{x}{l + 1} + \sum_{n=1}^\infty T_n(t)\phi_n(x),$$
$$\phi_n(x) = \frac{\sin\left(\sqrt{\lambda_n}\,x\right)}{\sqrt{\int_0^l \sin^2\left(\sqrt{\lambda_n}\,x\right)dx}}, \quad n \in \mathbb{Z}^+, \qquad \tan\left(\sqrt{\lambda}\,l\right) = -\sqrt{\lambda},$$
$$T_n(t) = T_n(0)\exp\left(-a^2\lambda_n t\right),$$
$$T_n(0) = -\int_0^l \phi_n(x)\frac{x}{l + 1}\,dx.$$
This expansion is useful for large $t$ because the coefficients decay exponentially with increasing $t$.

Now we solve this problem with the Laplace transform.
$$\phi_t = a^2\phi_{xx}, \qquad \phi(0,t) = 0, \qquad \phi(l,t) + \phi_x(l,t) = 1, \qquad \phi(x,0) = 0$$
$$s\hat\phi = a^2\hat\phi_{xx}, \qquad \hat\phi(0,s) = 0, \qquad \hat\phi(l,s) + \hat\phi_x(l,s) = \frac{1}{s}$$
$$\hat\phi_{xx} - \frac{s}{a^2}\hat\phi = 0, \qquad \hat\phi(0,s) = 0, \qquad \hat\phi(l,s) + \hat\phi_x(l,s) = \frac{1}{s}$$
The solution that satisfies the left boundary condition is
$$\hat\phi = c\sinh\left(\frac{\sqrt{s}\,x}{a}\right).$$
We apply the right boundary condition to determine the constant.
$$\hat\phi = \frac{\sinh\left(\frac{\sqrt{s}\,x}{a}\right)}{s\left( \sinh\left(\frac{\sqrt{s}\,l}{a}\right) + \frac{\sqrt{s}}{a}\cosh\left(\frac{\sqrt{s}\,l}{a}\right) \right)}$$
We expand this in a series of simpler functions of $s$.
$$\hat\phi = \frac{2\sinh\left(\frac{\sqrt{s}\,x}{a}\right)}{s\left( \left(1 + \frac{\sqrt{s}}{a}\right)\exp\left(\frac{\sqrt{s}\,l}{a}\right) - \left(1 - \frac{\sqrt{s}}{a}\right)\exp\left(-\frac{\sqrt{s}\,l}{a}\right) \right)}$$
$$\hat\phi = \frac{2\sinh\left(\frac{\sqrt{s}\,x}{a}\right)}{s\exp\left(\frac{\sqrt{s}\,l}{a}\right)\left(1 + \frac{\sqrt{s}}{a}\right)} \cdot \frac{1}{1 - \frac{1 - \sqrt{s}/a}{1 + \sqrt{s}/a}\exp\left(-\frac{2\sqrt{s}\,l}{a}\right)}$$
$$\hat\phi = \frac{\exp\left(\frac{\sqrt{s}\,x}{a}\right) - \exp\left(-\frac{\sqrt{s}\,x}{a}\right)}{s\left(1 + \frac{\sqrt{s}}{a}\right)\exp\left(\frac{\sqrt{s}\,l}{a}\right)}\sum_{n=0}^\infty \left( \frac{1 - \sqrt{s}/a}{1 + \sqrt{s}/a} \right)^n \exp\left(-\frac{2n\sqrt{s}\,l}{a}\right)$$
$$\hat\phi = \frac{1}{s}\left( \sum_{n=0}^\infty \frac{\left(1 - \sqrt{s}/a\right)^n}{\left(1 + \sqrt{s}/a\right)^{n+1}}\exp\left(-\frac{\sqrt{s}\,((2n+1)l - x)}{a}\right) - \sum_{n=0}^\infty \frac{\left(1 - \sqrt{s}/a\right)^n}{\left(1 + \sqrt{s}/a\right)^{n+1}}\exp\left(-\frac{\sqrt{s}\,((2n+1)l + x)}{a}\right) \right)$$
By expanding
$$\frac{\left(1 - \sqrt{s}/a\right)^n}{\left(1 + \sqrt{s}/a\right)^{n+1}}$$
in binomial series, all the terms would be of the form
$$s^{-m/2 - 3/2}\exp\left(-\frac{\sqrt{s}\,((2n+1)l \mp x)}{a}\right).$$
Taking the first term in each series yields
$$\hat\phi \sim \frac{a}{s^{3/2}}\left( \exp\left(-\frac{\sqrt{s}\,(l - x)}{a}\right) - \exp\left(-\frac{\sqrt{s}\,(l + x)}{a}\right) \right), \quad \text{as } s \to \infty.$$
We take the inverse Laplace transform, using $\mathcal{L}^{-1}\left[ s^{-3/2}e^{-b\sqrt{s}} \right] = 2\sqrt{t/\pi}\,e^{-b^2/(4t)} - b\operatorname{erfc}\left(b/(2\sqrt{t})\right)$, to obtain an approximation of the solution for $t \ll 1$.
$$\phi(x,t) \approx 2a\sqrt{\frac{t}{\pi}}\left( \exp\left(-\frac{(l-x)^2}{4a^2 t}\right) - \exp\left(-\frac{(l+x)^2}{4a^2 t}\right) \right) - (l - x)\operatorname{erfc}\left(\frac{l - x}{2a\sqrt{t}}\right) + (l + x)\operatorname{erfc}\left(\frac{l + x}{2a\sqrt{t}}\right), \quad \text{for } t \ll 1$$
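The short-time approximation is immediate to evaluate with SciPy's `erfc`. This sketch assumes unit $a$ and $l$ by default:

```python
import numpy as np
from scipy.special import erfc

def phi_small_t(x, t, a=1.0, l=1.0):
    """Leading short-time approximation from the Laplace-transform expansion:
    phi ~ 2 a sqrt(t/pi) (e1 - e2) - (l - x) erfc(b1) + (l + x) erfc(b2)."""
    b1 = (l - x) / (2.0 * a * np.sqrt(t))
    b2 = (l + x) / (2.0 * a * np.sqrt(t))
    gauss = 2.0 * a * np.sqrt(t / np.pi) * (np.exp(-b1**2) - np.exp(-b2**2))
    return gauss - (l - x) * erfc(b1) + (l + x) * erfc(b2)
```

As expected physically, for small $t$ the approximation is exponentially small away from the boundary $x = l$ and grows with $t$ at the boundary.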
Solution 39.12
We apply the separation of variables $\phi(x,t) = X(x)T(t)$.
$$\phi_t = A^2\left( x^2\phi_x \right)_x$$
$$XT' = TA^2\left( x^2 X' \right)'$$
$$\frac{T'}{A^2 T} = \frac{\left( x^2 X' \right)'}{X} = -\lambda$$
This gives us a regular Sturm-Liouville problem.
$$\left( x^2 X' \right)' + \lambda X = 0, \qquad X(1) = X(2) = 0$$
This is an Euler equation. We make the substitution $X = x^\alpha$ to find the solutions.
$$x^2 X'' + 2xX' + \lambda X = 0 \tag{39.9}$$
$$\alpha(\alpha - 1) + 2\alpha + \lambda = 0$$
$$\alpha = \frac{-1 \pm \sqrt{1 - 4\lambda}}{2} = -\frac{1}{2} \pm i\sqrt{\lambda - 1/4}$$
First we consider the case of a double root when $\lambda = 1/4$. The solutions of Equation 39.9 are $x^{-1/2}$, $x^{-1/2}\ln x$. The solution that satisfies the left boundary condition is $X = x^{-1/2}\ln x$. Since this does not satisfy the right boundary condition, $\lambda = 1/4$ is not an eigenvalue.
Now we consider $\lambda \neq 1/4$. The solutions of Equation 39.9 are
$$\left\{ \frac{1}{\sqrt{x}}\cos\left(\sqrt{\lambda - 1/4}\,\ln x\right),\; \frac{1}{\sqrt{x}}\sin\left(\sqrt{\lambda - 1/4}\,\ln x\right) \right\}.$$
The solution that satisfies the left boundary condition is
$$\frac{1}{\sqrt{x}}\sin\left(\sqrt{\lambda - 1/4}\,\ln x\right).$$
The right boundary condition imposes the constraint
$$\sqrt{\lambda - 1/4}\,\ln 2 = n\pi, \quad n \in \mathbb{Z}^+.$$
This gives us the eigenvalues and eigenfunctions.
$$\lambda_n = \frac{1}{4} + \left(\frac{n\pi}{\ln 2}\right)^2, \qquad X_n(x) = \frac{1}{\sqrt{x}}\sin\left(\frac{n\pi\ln x}{\ln 2}\right), \quad n \in \mathbb{Z}^+.$$
We normalize the eigenfunctions.
$$\int_1^2 \frac{1}{x}\sin^2\left(\frac{n\pi\ln x}{\ln 2}\right)dx = \ln 2\int_0^1 \sin^2(n\pi\xi)\,d\xi = \frac{\ln 2}{2}$$
$$X_n(x) = \sqrt{\frac{2}{\ln 2}}\,\frac{1}{\sqrt{x}}\sin\left(\frac{n\pi\ln x}{\ln 2}\right), \quad n \in \mathbb{Z}^+.$$
From separation of variables, we have differential equations for the $T_n$.
$$T_n' = -A^2\left( \frac{1}{4} + \left(\frac{n\pi}{\ln 2}\right)^2 \right)T_n$$
$$T_n(t) = \exp\left( -A^2\left( \frac{1}{4} + \left(\frac{n\pi}{\ln 2}\right)^2 \right)t \right)$$
We expand $\phi$ in a series of the eigen-solutions.
$$\phi(x,t) = \sum_{n=1}^\infty c_n X_n(x)T_n(t)$$
We substitute the expansion for $\phi$ into the initial condition to determine the coefficients.
$$\phi(x,0) = \sum_{n=1}^\infty c_n X_n(x) = f(x)$$
$$c_n = \int_1^2 X_n(x)f(x)\,dx$$
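The normalization can be verified directly: the $X_n$ above are orthonormal on $[1, 2]$ with respect to the plain measure $dx$, since $X_n^2 = (2/\ln 2)\,x^{-1}\sin^2(n\pi\ln x/\ln 2)$. A quick numerical check:

```python
import numpy as np
from scipy.integrate import quad

def X(n, x):
    """Normalized eigenfunctions on [1, 2]:
    X_n(x) = sqrt(2/ln 2) * x^(-1/2) * sin(n pi ln x / ln 2)."""
    return (np.sqrt(2.0 / np.log(2.0)) / np.sqrt(x)
            * np.sin(n * np.pi * np.log(x) / np.log(2.0)))
```

Both the unit norm of $X_1$ and the orthogonality of $X_1$ and $X_2$ hold to quadrature accuracy.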
Solution 39.13
$$u_{tt} = c^2 u_{xx}, \quad 0 < x < L, \quad t > 0,$$
$$u(0,t) = 0, \qquad u_x(L,t) = 0,$$
$$u(x,0) = f(x), \qquad u_t(x,0) = 0.$$
We substitute the separation of variables $u(x,t) = X(x)T(t)$ into the partial differential equation.
$$(XT)_{tt} = c^2(XT)_{xx}$$
$$\frac{T''}{c^2 T} = \frac{X''}{X} = -\lambda^2$$
With the boundary conditions at $x = 0, L$, we have the ordinary differential equations,
$$T'' = -c^2\lambda^2 T,$$
$$X'' = -\lambda^2 X, \qquad X(0) = X'(L) = 0.$$
The problem for $X$ is a regular Sturm-Liouville eigenvalue problem. From the Rayleigh quotient,
$$\lambda^2 = \frac{-\left[\phi\phi'\right]_0^L + \int_0^L (\phi')^2\,dx}{\int_0^L \phi^2\,dx} = \frac{\int_0^L (\phi')^2\,dx}{\int_0^L \phi^2\,dx},$$
we see that there are only positive eigenvalues. For $\lambda^2 > 0$ the general solution of the ordinary differential equation is
$$X = a_1\cos(\lambda x) + a_2\sin(\lambda x).$$
The solution that satisfies the left boundary condition is
$$X = a\sin(\lambda x).$$
For non-trivial solutions, the right boundary condition imposes the constraint,
$$\cos(\lambda L) = 0,$$
$$\lambda = \frac{\pi}{L}\left( n - \frac{1}{2} \right), \quad n \in \mathbb{N}.$$
The eigenvalues and eigenfunctions are
$$\lambda_n = \frac{(2n-1)\pi}{2L}, \qquad X_n = \sin\left(\frac{(2n-1)\pi x}{2L}\right), \quad n \in \mathbb{N}.$$
The differential equation for $T$ becomes
$$T'' = -c^2\left(\frac{(2n-1)\pi}{2L}\right)^2 T,$$
which has the two linearly independent solutions,
$$T_n^{(1)} = \cos\left(\frac{(2n-1)\pi ct}{2L}\right), \qquad T_n^{(2)} = \sin\left(\frac{(2n-1)\pi ct}{2L}\right).$$
The eigenvalues and eigen-solutions of the partial differential equation are,
$$\lambda_n = \frac{(2n-1)\pi}{2L}, \quad n \in \mathbb{N},$$
$$u_n^{(1)} = \sin\left(\frac{(2n-1)\pi x}{2L}\right)\cos\left(\frac{(2n-1)\pi ct}{2L}\right), \qquad u_n^{(2)} = \sin\left(\frac{(2n-1)\pi x}{2L}\right)\sin\left(\frac{(2n-1)\pi ct}{2L}\right).$$
We expand $u(x,t)$ in a series of the eigen-solutions.
$$u(x,t) = \sum_{n=1}^\infty \sin\left(\frac{(2n-1)\pi x}{2L}\right)\left( a_n\cos\left(\frac{(2n-1)\pi ct}{2L}\right) + b_n\sin\left(\frac{(2n-1)\pi ct}{2L}\right) \right).$$
We impose the initial condition $u_t(x,0) = 0$,
$$u_t(x,0) = \sum_{n=1}^\infty b_n\frac{(2n-1)\pi c}{2L}\sin\left(\frac{(2n-1)\pi x}{2L}\right) = 0,$$
$$b_n = 0.$$
The initial condition $u(x,0) = f(x)$ allows us to determine the remaining coefficients,
$$u(x,0) = \sum_{n=1}^\infty a_n\sin\left(\frac{(2n-1)\pi x}{2L}\right) = f(x),$$
$$a_n = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{(2n-1)\pi x}{2L}\right)dx.$$
The series solution for $u(x,t)$ is,
$$u(x,t) = \sum_{n=1}^\infty a_n\sin\left(\frac{(2n-1)\pi x}{2L}\right)\cos\left(\frac{(2n-1)\pi ct}{2L}\right).$$
Solution 39.14
$$u_{xx} + u_{yy} = f(x,y), \quad 0 < x < a, \quad 0 < y < b,$$
$$u(0,y) = u(a,y) = 0, \qquad u_y(x,0) = u_y(x,b) = 0.$$
We will solve this problem with an eigenfunction expansion in $x$. To determine a suitable set of eigenfunctions, we substitute the separation of variables $u(x,y) = X(x)Y(y)$ into the homogeneous partial differential equation.
$$u_{xx} + u_{yy} = 0$$
$$(XY)_{xx} + (XY)_{yy} = 0$$
$$\frac{X''}{X} = -\frac{Y''}{Y} = -\lambda^2$$
With the boundary conditions at $x = 0, a$, we have the regular Sturm-Liouville problem,
$$X'' = -\lambda^2 X, \qquad X(0) = X(a) = 0,$$
which has the solutions,
$$\lambda_n = \frac{n\pi}{a}, \qquad X_n = \sin\left(\frac{n\pi x}{a}\right), \quad n \in \mathbb{N}.$$
We expand $u(x,y)$ in a series of the eigenfunctions,
$$u(x,y) = \sum_{n=1}^\infty u_n(y)\sin\left(\frac{n\pi x}{a}\right).$$
Substituting this series into the partial differential equation and boundary conditions at $y = 0, b$, we obtain,
$$\sum_{n=1}^\infty \left( -\left(\frac{n\pi}{a}\right)^2 u_n(y)\sin\left(\frac{n\pi x}{a}\right) + u_n''(y)\sin\left(\frac{n\pi x}{a}\right) \right) = f(x,y),$$
$$\sum_{n=1}^\infty u_n'(0)\sin\left(\frac{n\pi x}{a}\right) = \sum_{n=1}^\infty u_n'(b)\sin\left(\frac{n\pi x}{a}\right) = 0.$$
Expanding $f(x,y)$ in the Fourier sine series,
$$f(x,y) = \sum_{n=1}^\infty f_n(y)\sin\left(\frac{n\pi x}{a}\right), \qquad f_n(y) = \frac{2}{a}\int_0^a f(x,y)\sin\left(\frac{n\pi x}{a}\right)dx,$$
we obtain the ordinary differential equations,
$$u_n''(y) - \left(\frac{n\pi}{a}\right)^2 u_n(y) = f_n(y), \qquad u_n'(0) = u_n'(b) = 0, \quad n \in \mathbb{N}.$$
We will solve these ordinary differential equations with Green functions. Consider the Green function problem,
$$g_n''(y;\sigma) - \left(\frac{n\pi}{a}\right)^2 g_n(y;\sigma) = \delta(y - \sigma), \qquad g_n'(0;\sigma) = g_n'(b;\sigma) = 0.$$
The homogeneous solutions
$$\cosh\left(\frac{n\pi y}{a}\right) \quad \text{and} \quad \cosh\left(\frac{n\pi(y - b)}{a}\right)$$
satisfy the left and right boundary conditions, respectively. The Wronskian of these two solutions is
$$W(y) = \begin{vmatrix} \cosh(n\pi y/a) & \cosh(n\pi(y - b)/a) \\ \frac{n\pi}{a}\sinh(n\pi y/a) & \frac{n\pi}{a}\sinh(n\pi(y - b)/a) \end{vmatrix}$$
$$= \frac{n\pi}{a}\left( \cosh\left(\frac{n\pi y}{a}\right)\sinh\left(\frac{n\pi(y - b)}{a}\right) - \sinh\left(\frac{n\pi y}{a}\right)\cosh\left(\frac{n\pi(y - b)}{a}\right) \right) = -\frac{n\pi}{a}\sinh\left(\frac{n\pi b}{a}\right).$$
Thus the Green function is
$$g_n(y;\sigma) = -\frac{a\cosh(n\pi y_</a)\cosh(n\pi(y_> - b)/a)}{n\pi\sinh(n\pi b/a)}.$$
The solutions for the coefficients in the expansion are
$$u_n(y) = \int_0^b g_n(y;\sigma)f_n(\sigma)\,d\sigma.$$
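The closed form of the Green function can be sanity-checked numerically: away from $\sigma$ it should satisfy $g'' - (n\pi/a)^2 g = 0$, and its derivative should jump by $1$ across $y = \sigma$. A sketch with $n = 1$, $a = b = 1$:

```python
import numpy as np

def greens(y, sigma, n=1, a=1.0, b=1.0):
    """Green function for g'' - (n pi/a)^2 g = delta(y - sigma),
    g'(0) = g'(b) = 0."""
    k = n * np.pi / a
    ylo, yhi = min(y, sigma), max(y, sigma)
    return -np.cosh(k * ylo) * np.cosh(k * (yhi - b)) / (k * np.sinh(k * b))
```

The finite-difference checks below confirm the unit jump in $g'$ and the homogeneous equation away from the source point.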
Solution 39.15
$$u_{tt} + a^2 u_{xxxx} = 0, \quad 0 < x < L, \quad t > 0,$$
$$u(x,0) = f(x), \qquad u_t(x,0) = g(x),$$
$$u(0,t) = u_{xx}(0,t) = 0, \qquad u(L,t) = u_{xx}(L,t) = 0.$$
We will solve this problem by expanding the solution in a series of eigen-solutions that satisfy the partial differential equation and the homogeneous boundary conditions. We will use the initial conditions to determine the coefficients in the expansion. We substitute the separation of variables $u(x,t) = X(x)T(t)$ into the partial differential equation.
$$(XT)_{tt} + a^2(XT)_{xxxx} = 0$$
$$\frac{T''}{a^2 T} = -\frac{X''''}{X} = -\lambda^4$$
Here we make the assumption that $0 \le \arg(\lambda) < \pi/2$, i.e., $\lambda$ lies in the first quadrant of the complex plane. Note that $\lambda^4$ covers the entire complex plane. We have the ordinary differential equation,
$$T'' = -a^2\lambda^4 T,$$
and with the boundary conditions at $x = 0, L$, the eigenvalue problem,
$$X'''' = \lambda^4 X, \qquad X(0) = X''(0) = X(L) = X''(L) = 0.$$
For $\lambda = 0$, the general solution of the differential equation is
$$X = c_1 + c_2 x + c_3 x^2 + c_4 x^3.$$
Only the trivial solution satisfies the boundary conditions. $\lambda = 0$ is not an eigenvalue. For $\lambda \neq 0$, a set of linearly independent solutions is
$$\left\{ e^{\lambda x}, e^{i\lambda x}, e^{-\lambda x}, e^{-i\lambda x} \right\}.$$
Another linearly independent set, (which will be more useful for this problem), is
$$\left\{ \cos(\lambda x), \sin(\lambda x), \cosh(\lambda x), \sinh(\lambda x) \right\}.$$
Both $\sin(\lambda x)$ and $\sinh(\lambda x)$ satisfy the left boundary conditions. Consider the linear combination $c_1\cos(\lambda x) + c_2\cosh(\lambda x)$. The left boundary conditions impose the two constraints $c_1 + c_2 = 0$, $c_1 - c_2 = 0$. Only the trivial linear combination of $\cos(\lambda x)$ and $\cosh(\lambda x)$ can satisfy the left boundary conditions. Thus the solution has the form,
$$X = c_1\sin(\lambda x) + c_2\sinh(\lambda x).$$
The right boundary conditions impose the constraints,
$$\begin{cases} c_1\sin(\lambda L) + c_2\sinh(\lambda L) = 0, \\ -c_1\lambda^2\sin(\lambda L) + c_2\lambda^2\sinh(\lambda L) = 0 \end{cases} \quad\Rightarrow\quad \begin{cases} c_1\sin(\lambda L) + c_2\sinh(\lambda L) = 0, \\ -c_1\sin(\lambda L) + c_2\sinh(\lambda L) = 0. \end{cases}$$
This set of equations has a nontrivial solution if and only if the determinant is zero,
$$\begin{vmatrix} \sin(\lambda L) & \sinh(\lambda L) \\ -\sin(\lambda L) & \sinh(\lambda L) \end{vmatrix} = 2\sin(\lambda L)\sinh(\lambda L) = 0.$$
Since $\sinh(z)$ is nonzero in $0 \le \arg(z) < \pi/2$, $z \neq 0$, and $\sin(z)$ has the zeros $z = n\pi$, $n \in \mathbb{N}$ in this domain, the eigenvalues and eigenfunctions are,
$$\lambda_n = \frac{n\pi}{L}, \qquad X_n = \sin\left(\frac{n\pi x}{L}\right), \quad n \in \mathbb{N}.$$
The differential equation for $T$ becomes,
$$T'' = -a^2\left(\frac{n\pi}{L}\right)^4 T,$$
which has the solutions,
$$\left\{ \cos\left( a\left(\frac{n\pi}{L}\right)^2 t \right), \sin\left( a\left(\frac{n\pi}{L}\right)^2 t \right) \right\}.$$
The eigen-solutions of the partial differential equation are,
$$u_n^{(1)} = \sin\left(\frac{n\pi x}{L}\right)\cos\left( a\left(\frac{n\pi}{L}\right)^2 t \right), \qquad u_n^{(2)} = \sin\left(\frac{n\pi x}{L}\right)\sin\left( a\left(\frac{n\pi}{L}\right)^2 t \right), \quad n \in \mathbb{N}.$$
We expand the solution of the partial differential equation in a series of the eigen-solutions.
$$u(x,t) = \sum_{n=1}^\infty \sin\left(\frac{n\pi x}{L}\right)\left( c_n\cos\left( a\left(\frac{n\pi}{L}\right)^2 t \right) + d_n\sin\left( a\left(\frac{n\pi}{L}\right)^2 t \right) \right)$$
The initial conditions for $u(x,t)$ and $u_t(x,t)$ allow us to determine the coefficients in the expansion.
$$u(x,0) = \sum_{n=1}^\infty c_n\sin\left(\frac{n\pi x}{L}\right) = f(x)$$
$$u_t(x,0) = \sum_{n=1}^\infty d_n a\left(\frac{n\pi}{L}\right)^2\sin\left(\frac{n\pi x}{L}\right) = g(x)$$
$c_n$ and $d_n$ are coefficients in Fourier sine series.
$$c_n = \frac{2}{L}\int_0^L f(x)\sin\left(\frac{n\pi x}{L}\right)dx$$
$$d_n = \frac{2L}{a\pi^2 n^2}\int_0^L g(x)\sin\left(\frac{n\pi x}{L}\right)dx$$
Solution 39.16
$$u_t = \nu u_{xx} + I^2\alpha u, \quad 0 < x < L, \quad t > 0,$$
$$u(0,t) = u(L,t) = 0, \qquad u(x,0) = g(x).$$
We will solve this problem with an expansion in eigen-solutions of the partial differential equation. We substitute the separation of variables $u(x,t) = X(x)T(t)$ into the partial differential equation.
$$(XT)_t = \nu(XT)_{xx} + I^2\alpha XT$$
$$\frac{T'}{\nu T} - \frac{I^2\alpha}{\nu} = \frac{X''}{X} = -\lambda^2$$
Now we have an ordinary differential equation for $T$ and a Sturm-Liouville eigenvalue problem for $X$. (Note that we have followed the rule of thumb that the problem will be easier if we move all the parameters out of the eigenvalue problem.)
$$T' = \left( I^2\alpha - \nu\lambda^2 \right)T$$
$$X'' = -\lambda^2 X, \qquad X(0) = X(L) = 0$$
The eigenvalues and eigenfunctions for $X$ are
$$\lambda_n = \frac{n\pi}{L}, \qquad X_n = \sin\left(\frac{n\pi x}{L}\right), \quad n \in \mathbb{N}.$$
The differential equation for $T$ becomes,
$$T_n' = \left( I^2\alpha - \nu\left(\frac{n\pi}{L}\right)^2 \right)T_n,$$
which has the solution,
$$T_n = c\exp\left( \left( I^2\alpha - \nu\left(\frac{n\pi}{L}\right)^2 \right)t \right).$$
From this solution, we see that the critical current is
$$I_{CR} = \frac{\pi}{L}\sqrt{\frac{\nu}{\alpha}}.$$
If $I$ is greater than this, then the eigen-solution for $n = 1$ will be exponentially growing. This would make the whole solution exponentially growing. For $I < I_{CR}$, each of the $T_n$ is exponentially decaying. The eigen-solutions of the partial differential equation are,
$$u_n = \exp\left( \left( I^2\alpha - \nu\left(\frac{n\pi}{L}\right)^2 \right)t \right)\sin\left(\frac{n\pi x}{L}\right), \quad n \in \mathbb{N}.$$
We expand $u(x,t)$ in its eigen-solutions, $u_n$.
$$u(x,t) = \sum_{n=1}^\infty a_n\exp\left( \left( I^2\alpha - \nu\left(\frac{n\pi}{L}\right)^2 \right)t \right)\sin\left(\frac{n\pi x}{L}\right)$$
We determine the coefficients $a_n$ from the initial condition.
$$u(x,0) = \sum_{n=1}^\infty a_n\sin\left(\frac{n\pi x}{L}\right) = g(x)$$
$$a_n = \frac{2}{L}\int_0^L g(x)\sin\left(\frac{n\pi x}{L}\right)dx.$$
If $\alpha < 0$, then the solution is exponentially decaying regardless of current. Thus there is no critical current.
Solution 39.17
a) The problem is
$$u_t(x,y,z,t) = \kappa\Delta u(x,y,z,t), \quad -\infty < x < \infty, \quad -\infty < y < \infty, \quad 0 < z < a, \quad t > 0,$$
$$u(x,y,z,0) = T, \qquad u(x,y,0,t) = u(x,y,a,t) = 0.$$
Because of symmetry, the partial differential equation in four variables is reduced to a problem in two variables,
$$u_t(z,t) = \kappa u_{zz}(z,t), \quad 0 < z < a, \quad t > 0,$$
$$u(z,0) = T, \qquad u(0,t) = u(a,t) = 0.$$
We will solve this problem with an expansion in eigen-solutions of the partial differential equation that satisfy the homogeneous boundary conditions. We substitute the separation of variables $u(z,t) = Z(z)T(t)$ into the partial differential equation.
$$ZT' = \kappa Z''T$$
$$\frac{T'}{\kappa T} = \frac{Z''}{Z} = -\lambda^2$$
With the boundary conditions at $z = 0, a$ we have the Sturm-Liouville eigenvalue problem,
$$Z'' = -\lambda^2 Z, \qquad Z(0) = Z(a) = 0,$$
which has the solutions,
$$\lambda_n = \frac{n\pi}{a}, \qquad Z_n = \sin\left(\frac{n\pi z}{a}\right), \quad n \in \mathbb{N}.$$
The problem for $T$ becomes,
$$T_n' = -\kappa\left(\frac{n\pi}{a}\right)^2 T_n,$$
with the solution,
$$T_n = \exp\left( -\kappa\left(\frac{n\pi}{a}\right)^2 t \right).$$
The eigen-solutions are
$$u_n(z,t) = \sin\left(\frac{n\pi z}{a}\right)\exp\left( -\kappa\left(\frac{n\pi}{a}\right)^2 t \right).$$
The solution for $u$ is a linear combination of the eigen-solutions. The slowest decaying eigen-solution is
$$u_1(z,t) = \sin\left(\frac{\pi z}{a}\right)\exp\left( -\kappa\left(\frac{\pi}{a}\right)^2 t \right).$$
Thus the e-folding time is
$$\tau_e = \frac{a^2}{\kappa\pi^2}.$$

b) The problem is
$$u_t(r,\theta,z,t) = \kappa\Delta u(r,\theta,z,t), \quad 0 < r < a, \quad 0 < \theta < 2\pi, \quad -\infty < z < \infty, \quad t > 0,$$
$$u(r,\theta,z,0) = T, \qquad u(0,\theta,z,t) \text{ is bounded}, \qquad u(a,\theta,z,t) = 0.$$
The Laplacian in cylindrical coordinates is
$$\Delta u = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} + u_{zz}.$$
Because of symmetry, the solution does not depend on $\theta$ or $z$.
$$u_t(r,t) = \kappa\left( u_{rr}(r,t) + \frac{1}{r}u_r(r,t) \right), \quad 0 < r < a, \quad t > 0,$$
$$u(r,0) = T, \qquad u(0,t) \text{ is bounded}, \qquad u(a,t) = 0.$$
We will solve this problem with an expansion in eigen-solutions of the partial differential equation that satisfy the homogeneous boundary conditions at $r = 0$ and $r = a$. We substitute the separation of variables $u(r,t) = R(r)T(t)$ into the partial differential equation.
$$RT' = \kappa\left( R''T + \frac{1}{r}R'T \right)$$
$$\frac{T'}{\kappa T} = \frac{R''}{R} + \frac{R'}{rR} = -\lambda^2$$
We have the eigenvalue problem,
$$R'' + \frac{1}{r}R' + \lambda^2 R = 0, \qquad R(0) \text{ is bounded}, \quad R(a) = 0.$$
Recall that the Bessel equation,
$$y'' + \frac{1}{x}y' + \left( 1 - \frac{\nu^2}{x^2} \right)y = 0,$$
has the general solution $y = c_1 J_\nu(x) + c_2 Y_\nu(x)$. We discard the Bessel function of the second kind, $Y_\nu$, as it is unbounded at the origin. The solution for $R(r)$ is
$$R(r) = J_0(\lambda r).$$
Applying the boundary condition at $r = a$, we see that the eigenvalues and eigenfunctions are
$$\lambda_n = \frac{\beta_n}{a}, \qquad R_n = J_0\left(\frac{\beta_n r}{a}\right), \quad n \in \mathbb{N},$$
where $\beta_n$ are the positive roots of the Bessel function $J_0$.
The differential equation for $T$ becomes,
$$T_n' = -\kappa\left(\frac{\beta_n}{a}\right)^2 T_n,$$
which has the solutions,
$$T_n = \exp\left( -\kappa\left(\frac{\beta_n}{a}\right)^2 t \right).$$
The eigen-solutions of the partial differential equation for $u(r,t)$ are,
$$u_n(r,t) = J_0\left(\frac{\beta_n r}{a}\right)\exp\left( -\kappa\left(\frac{\beta_n}{a}\right)^2 t \right).$$
The solution $u(r,t)$ is a linear combination of the eigen-solutions, $u_n$. The slowest decaying eigenfunction is,
$$u_1(r,t) = J_0\left(\frac{\beta_1 r}{a}\right)\exp\left( -\kappa\left(\frac{\beta_1}{a}\right)^2 t \right).$$
Thus the e-folding time is
$$\tau_e = \frac{a^2}{\kappa\beta_1^2}.$$

c) The problem is
$$u_t(r,\theta,\phi,t) = \kappa\Delta u(r,\theta,\phi,t), \quad 0 < r < a, \quad 0 < \theta < 2\pi, \quad 0 < \phi < \pi, \quad t > 0,$$
$$u(r,\theta,\phi,0) = T, \qquad u(0,\theta,\phi,t) \text{ is bounded}, \qquad u(a,\theta,\phi,t) = 0.$$
The Laplacian in spherical coordinates is,
$$\Delta u = u_{rr} + \frac{2}{r}u_r + \frac{1}{r^2}u_{\phi\phi} + \frac{\cos\phi}{r^2\sin\phi}u_\phi + \frac{1}{r^2\sin^2\phi}u_{\theta\theta}.$$
Because of symmetry, the solution does not depend on $\theta$ or $\phi$.
$$u_t(r,t) = \kappa\left( u_{rr}(r,t) + \frac{2}{r}u_r(r,t) \right), \quad 0 < r < a, \quad t > 0,$$
$$u(r,0) = T, \qquad u(0,t) \text{ is bounded}, \qquad u(a,t) = 0.$$
We will solve this problem with an expansion in eigen-solutions of the partial differential equation that satisfy the homogeneous boundary conditions at $r = 0$ and $r = a$. We substitute the separation of variables $u(r,t) = R(r)T(t)$ into the partial differential equation.
$$RT' = \kappa\left( R''T + \frac{2}{r}R'T \right)$$
$$\frac{T'}{\kappa T} = \frac{R''}{R} + \frac{2R'}{rR} = -\lambda^2$$
We have the eigenvalue problem,
$$R'' + \frac{2}{r}R' + \lambda^2 R = 0, \qquad R(0) \text{ is bounded}, \quad R(a) = 0.$$
Recall that the equation,
$$y'' + \frac{2}{x}y' + \left( 1 - \frac{\nu(\nu + 1)}{x^2} \right)y = 0,$$
has the general solution $y = c_1 j_\nu(x) + c_2 y_\nu(x)$, where $j_\nu$ and $y_\nu$ are the spherical Bessel functions of the first and second kind. We discard $y_\nu$ as it is unbounded at the origin. (The spherical Bessel functions are related to the Bessel functions by
$$j_\nu(x) = \sqrt{\frac{\pi}{2x}}\,J_{\nu + 1/2}(x).)$$
The solution for $R(r)$ is
$$R = j_0(\lambda r).$$
Applying the boundary condition at $r = a$, we see that the eigenvalues and eigenfunctions are
$$\lambda_n = \frac{\gamma_n}{a}, \qquad R_n = j_0\left(\frac{\gamma_n r}{a}\right), \quad n \in \mathbb{N},$$
where $\gamma_n$ are the positive roots of $j_0$. Since $j_0(x) = \sin x/x$, the roots are $\gamma_n = n\pi$.
The problem for $T$ becomes
$$T_n' = -\kappa\left(\frac{\gamma_n}{a}\right)^2 T_n,$$
which has the solutions,
$$T_n = \exp\left( -\kappa\left(\frac{\gamma_n}{a}\right)^2 t \right).$$
The eigen-solutions of the partial differential equation are,
$$u_n(r,t) = j_0\left(\frac{\gamma_n r}{a}\right)\exp\left( -\kappa\left(\frac{\gamma_n}{a}\right)^2 t \right).$$
The slowest decaying eigen-solution is,
$$u_1(r,t) = j_0\left(\frac{\gamma_1 r}{a}\right)\exp\left( -\kappa\left(\frac{\gamma_1}{a}\right)^2 t \right).$$
Thus the e-folding time is
$$\tau_e = \frac{a^2}{\kappa\gamma_1^2} = \frac{a^2}{\kappa\pi^2}.$$

d) If the edges are perfectly insulated, then no heat escapes through the boundary. The temperature is constant for all time. There is no e-folding time.
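The three e-folding times differ only through the first zero of the relevant radial eigenfunction: $\pi$ for the slab and the sphere, and the first zero of $J_0$ for the infinite cylinder. A small comparison (symbols `a` and `kappa` as in the solution above):

```python
import numpy as np
from scipy.special import jn_zeros

def efold_times(a=1.0, kappa=1.0):
    """e-folding time a^2/(kappa * mu^2) of the slowest decaying mode:
    mu = pi for the slab and the sphere, mu = beta_1 (first positive
    zero of J_0) for the infinite cylinder."""
    beta1 = jn_zeros(0, 1)[0]
    return {"slab": a**2 / (kappa * np.pi**2),
            "cylinder": a**2 / (kappa * beta1**2),
            "sphere": a**2 / (kappa * np.pi**2)}
```

Since $\beta_1 \approx 2.405 < \pi$, the cylinder of radius $a$ cools more slowly than the slab of thickness $a$, while the slab and sphere share the same e-folding time.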
Solution 39.18
We will solve this problem with an eigenfunction expansion. Since the partial differential equation is homogeneous, we will find eigenfunctions in both $x$ and $y$. We substitute the separation of variables $u(x,y,t) = X(x)Y(y)T(t)$ into the partial differential equation.
$$XYT' = \nu(t)\left( X''YT + XY''T \right)$$
$$\frac{T'}{\nu(t)T} = \frac{X''}{X} + \frac{Y''}{Y} = -\lambda^2$$
$$\frac{X''}{X} = -\frac{Y''}{Y} - \lambda^2 = -\mu^2$$
First we have a Sturm-Liouville eigenvalue problem for $X$,
$$X'' = -\mu^2 X, \qquad X'(0) = X'(a) = 0,$$
which has the solutions,
$$\mu_m = \frac{m\pi}{a}, \qquad X_m = \cos\left(\frac{m\pi x}{a}\right), \quad m = 0, 1, 2, \ldots.$$
Now we have a Sturm-Liouville eigenvalue problem for $Y$,
$$Y'' = -\left( \lambda^2 - \left(\frac{m\pi}{a}\right)^2 \right)Y, \qquad Y(0) = Y(b) = 0,$$
which has the solutions,
$$\lambda_{mn} = \sqrt{ \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2 }, \qquad Y_n = \sin\left(\frac{n\pi y}{b}\right), \quad m = 0, 1, 2, \ldots, \quad n = 1, 2, 3, \ldots.$$
A few of the eigenfunctions, $\cos\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right)$, are shown in Figure 39.3.

Figure 39.3: The eigenfunctions $\cos(m\pi x/a)\sin(n\pi y/b)$ for $m = 0, 1, 2$ and $n = 1, 2, 3$.

The differential equation for $T$ becomes,
$$T_{mn}' = -\left( \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2 \right)\nu(t)T_{mn},$$
which has the solutions,
$$T_{mn} = \exp\left( -\left( \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2 \right)\int_0^t \nu(\tau)\,d\tau \right).$$
The eigen-solutions of the partial differential equation are,
$$u_{mn} = \cos\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right)\exp\left( -\left( \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2 \right)\int_0^t \nu(\tau)\,d\tau \right).$$
The solution of the partial differential equation is,
$$u(x,y,t) = \sum_{m=0}^\infty\sum_{n=1}^\infty c_{mn}\cos\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right)\exp\left( -\left( \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2 \right)\int_0^t \nu(\tau)\,d\tau \right).$$
We determine the coefficients from the initial condition.
$$u(x,y,0) = \sum_{m=0}^\infty\sum_{n=1}^\infty c_{mn}\cos\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right) = f(x,y)$$
$$c_{0n} = \frac{2}{ab}\int_0^a\int_0^b f(x,y)\sin\left(\frac{n\pi y}{b}\right)dy\,dx$$
$$c_{mn} = \frac{4}{ab}\int_0^a\int_0^b f(x,y)\cos\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right)dy\,dx$$
Solution 39.19
The steady state temperature satisfies Laplace's equation, $\Delta u = 0$. The Laplacian in cylindrical coordinates is,
$$\Delta u(r,\theta,z) = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} + u_{zz}.$$
Because of the homogeneity in the $z$ direction, we reduce the partial differential equation to,
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} = 0, \quad 0 < r < 1, \quad 0 < \theta < \pi.$$
The boundary conditions are,
$$u(r,0) = u(r,\pi) = 0, \qquad u(0,\theta) = 0, \qquad u(1,\theta) = 1.$$
We will solve this problem with an eigenfunction expansion. We substitute the separation of variables $u(r,\theta) = R(r)\Theta(\theta)$ into the partial differential equation.
$$R''\Theta + \frac{1}{r}R'\Theta + \frac{1}{r^2}R\Theta'' = 0$$
$$\frac{r^2R''}{R} + \frac{rR'}{R} = -\frac{\Theta''}{\Theta} = \lambda^2$$
We have the regular Sturm-Liouville eigenvalue problem,
$$\Theta'' = -\lambda^2\Theta, \qquad \Theta(0) = \Theta(\pi) = 0,$$
which has the solutions,
$$\lambda_n = n, \qquad \Theta_n = \sin(n\theta), \quad n \in \mathbb{N}.$$
The problem for $R$ becomes,
$$r^2R'' + rR' - n^2 R = 0, \qquad R(0) = 0.$$
This is an Euler equation. We substitute $R = r^\alpha$ into the differential equation to obtain,
$$\alpha(\alpha - 1) + \alpha - n^2 = 0,$$
$$\alpha = \pm n.$$
The general solution of the differential equation for $R$ is
$$R_n = c_1 r^n + c_2 r^{-n}.$$
The solution that vanishes at $r = 0$ is
$$R_n = c\,r^n.$$
The eigen-solutions of the differential equation are,
$$u_n = r^n\sin(n\theta).$$
The solution of the partial differential equation is
$$u(r,\theta) = \sum_{n=1}^\infty a_n r^n\sin(n\theta).$$
We determine the coefficients from the boundary condition at $r = 1$.
$$u(1,\theta) = \sum_{n=1}^\infty a_n\sin(n\theta) = 1$$
$$a_n = \frac{2}{\pi}\int_0^\pi \sin(n\theta)\,d\theta = \frac{2}{n\pi}\left( 1 - (-1)^n \right)$$
The solution of the partial differential equation is
$$u(r,\theta) = \frac{4}{\pi}\sum_{\substack{n=1 \\ \text{odd } n}}^\infty \frac{r^n}{n}\sin(n\theta).$$
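This series has a well-known closed form: $\sum_{\text{odd }n} r^n\sin(n\theta)/n = \tfrac12\arctan\!\big(2r\sin\theta/(1 - r^2)\big)$ for $r < 1$, so $u = \tfrac{2}{\pi}\arctan\!\big(2r\sin\theta/(1 - r^2)\big)$. A partial sum converges geometrically and matches:

```python
import numpy as np

def u(r, theta, nterms=400):
    """u(r,theta) = (4/pi) * sum over odd n of r^n sin(n theta)/n,
    the solution of Laplace's equation on the half-disk with u(1,theta) = 1."""
    n = np.arange(1, 2 * nterms, 2)   # odd indices 1, 3, 5, ...
    return 4.0 / np.pi * float(np.sum(r**n * np.sin(n * theta) / n))
```

At the center of the disk the solution vanishes, consistent with the zero boundary values along the diameter.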
Solution 39.20
The problem is
$$u_{xx} + u_{yy} = 0, \quad 0 < x, \quad 0 < y < 1,$$
$$u(x,0) = u(x,1) = 0, \qquad u(0,y) = f(y).$$
We substitute the separation of variables $u(x,y) = X(x)Y(y)$ into the partial differential equation.
$$X''Y + XY'' = 0$$
$$\frac{X''}{X} = -\frac{Y''}{Y} = \lambda^2$$
We have the regular Sturm-Liouville problem,
$$Y'' = -\lambda^2 Y, \qquad Y(0) = Y(1) = 0,$$
which has the solutions,
$$\lambda_n = n\pi, \qquad Y_n = \sin(n\pi y), \quad n \in \mathbb{N}.$$
The problem for $X$ becomes,
$$X_n'' = (n\pi)^2 X,$$
which has the general solution,
$$X_n = c_1 e^{n\pi x} + c_2 e^{-n\pi x}.$$
The solution that is bounded as $x \to \infty$ is,
$$X_n = c\,e^{-n\pi x}.$$
The eigen-solutions of the partial differential equation are,
$$u_n = e^{-n\pi x}\sin(n\pi y), \quad n \in \mathbb{N}.$$
The solution of the partial differential equation is,
$$u(x,y) = \sum_{n=1}^\infty a_n e^{-n\pi x}\sin(n\pi y).$$
We find the coefficients from the boundary condition at $x = 0$.
$$u(0,y) = \sum_{n=1}^\infty a_n\sin(n\pi y) = f(y)$$
$$a_n = 2\int_0^1 f(y)\sin(n\pi y)\,dy$$
Solution 39.21
The Laplacian in circular coordinates is
\[ \Delta u = u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta}. \]
Since we have homogeneous boundary conditions at $\theta = 0$ and $\theta = \alpha$, we will solve this problem with an eigenfunction expansion. We substitute the separation of variables $u(r,\theta) = R(r)T(\theta)$ into the partial differential equation.
\[ R'' T + \frac{1}{r}R' T + \frac{1}{r^2}R T'' = 0 \]
\[ r^2\frac{R''}{R} + r\frac{R'}{R} = -\frac{T''}{T} = \lambda^2. \]
We have the regular Sturm-Liouville eigenvalue problem,
\[ T'' = -\lambda^2 T, \quad T(0) = T(\alpha) = 0, \]
which has the solutions,
\[ \lambda_n = \frac{n\pi}{\alpha}, \quad T_n = \sin\left(\frac{n\pi\theta}{\alpha}\right), \quad n \in \mathbb{N}. \]
The differential equation for $R$ becomes,
\[ r^2 R'' + rR' - \left(\frac{n\pi}{\alpha}\right)^2 R = 0, \quad R(a) = 0. \]
This is an Euler equation. We make the substitution $R = r^\beta$.
\[ \beta(\beta - 1) + \beta - \left(\frac{n\pi}{\alpha}\right)^2 = 0 \]
\[ \beta = \pm\frac{n\pi}{\alpha} \]
The general solution of the equation for $R$ is
\[ R = c_1 r^{n\pi/\alpha} + c_2 r^{-n\pi/\alpha}. \]
The solution, (up to a multiplicative constant), that vanishes at $r = a$ is
\[ R = r^{n\pi/\alpha} - a^{2n\pi/\alpha}r^{-n\pi/\alpha}. \]
Thus the series expansion of our solution is,
\[ u(r,\theta) = \sum_{n=1}^\infty c_n\left(r^{n\pi/\alpha} - a^{2n\pi/\alpha}r^{-n\pi/\alpha}\right)\sin\left(\frac{n\pi\theta}{\alpha}\right). \]
We determine the coefficients from the boundary condition at $r = b$.
\[ u(b,\theta) = \sum_{n=1}^\infty c_n\left(b^{n\pi/\alpha} - a^{2n\pi/\alpha}b^{-n\pi/\alpha}\right)\sin\left(\frac{n\pi\theta}{\alpha}\right) = f(\theta) \]
\[ c_n = \frac{2}{\alpha\left(b^{n\pi/\alpha} - a^{2n\pi/\alpha}b^{-n\pi/\alpha}\right)}\int_0^\alpha f(\theta)\sin\left(\frac{n\pi\theta}{\alpha}\right)d\theta \]
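The Euler-equation solution for $R$ can be verified directly; the sketch below (sample values for $a$, $\alpha$ and $n$, not taken from the text) checks the boundary condition and the ODE residual with central differences.

```python
import numpy as np

# Check R(r) = r^p - a^(2p) r^(-p), p = n*pi/alpha, against the Euler
# equation r^2 R'' + r R' - p^2 R = 0 and the condition R(a) = 0.
a, alpha, n = 0.5, 2.0, 3          # hypothetical sample values
p = n * np.pi / alpha

def R(r):
    return r ** p - a ** (2 * p) * r ** (-p)

assert abs(R(a)) < 1e-12           # vanishes at r = a

h = 1e-5
for r in (0.7, 1.0, 1.3):
    Rp = (R(r + h) - R(r - h)) / (2 * h)
    Rpp = (R(r + h) - 2 * R(r) + R(r - h)) / h ** 2
    residual = r ** 2 * Rpp + r * Rp - p ** 2 * R(r)
    assert abs(residual) < 1e-4
```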
Solution 39.22
a) The mathematical statement of the problem is
\[ u_{tt} = c^2 u_{xx}, \quad 0 < x < L, \quad t > 0, \]
\[ u(0,t) = u(L,t) = 0, \]
\[ u(x,0) = 0, \quad u_t(x,0) = \begin{cases} v & \text{for } |x-\xi| < d \\ 0 & \text{for } |x-\xi| > d. \end{cases} \]
Because we are interested in the harmonics of the motion, we will solve this problem with an eigenfunction expansion in $x$. We substitute the separation of variables $u(x,t) = X(x)T(t)$ into the wave equation.
\[ XT'' = c^2 X''T \]
\[ \frac{T''}{c^2 T} = \frac{X''}{X} = -\lambda^2 \]
The eigenvalue problem for $X$ is,
\[ X'' = -\lambda^2 X, \quad X(0) = X(L) = 0, \]
which has the solutions,
\[ \lambda_n = \frac{n\pi}{L}, \quad X_n = \sin\left(\frac{n\pi x}{L}\right), \quad n \in \mathbb{N}. \]
The ordinary differential equations for the $T_n$ are,
\[ T_n'' = -\left(\frac{n\pi c}{L}\right)^2 T_n, \]
which have the linearly independent solutions,
\[ \cos\left(\frac{n\pi ct}{L}\right), \quad \sin\left(\frac{n\pi ct}{L}\right). \]
The solution for $u(x,t)$ is a linear combination of the eigen-solutions.
\[ u(x,t) = \sum_{n=1}^\infty \sin\left(\frac{n\pi x}{L}\right)\left(a_n\cos\left(\frac{n\pi ct}{L}\right) + b_n\sin\left(\frac{n\pi ct}{L}\right)\right) \]
Since the string initially has zero displacement, each of the $a_n$ is zero.
\[ u(x,t) = \sum_{n=1}^\infty b_n\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{n\pi ct}{L}\right) \]
Now we use the initial velocity to determine the coefficients in the expansion. Because the position is a continuous function of $x$, and there is a jump discontinuity in the velocity as a function of $x$, the coefficients in the expansion will decay as $1/n^2$.
\[ u_t(x,0) = \sum_{n=1}^\infty \frac{n\pi c}{L}b_n\sin\left(\frac{n\pi x}{L}\right) = \begin{cases} v & \text{for } |x-\xi| < d \\ 0 & \text{for } |x-\xi| > d \end{cases} \]
\[ \frac{n\pi c}{L}b_n = \frac{2}{L}\int_0^L u_t(x,0)\sin\left(\frac{n\pi x}{L}\right)dx \]
\[ b_n = \frac{2}{n\pi c}\int_{\xi-d}^{\xi+d} v\sin\left(\frac{n\pi x}{L}\right)dx = \frac{4Lv}{n^2\pi^2 c}\sin\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right) \]
The solution for $u(x,t)$ is,
\[ u(x,t) = \frac{4Lv}{\pi^2 c}\sum_{n=1}^\infty \frac{1}{n^2}\sin\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{n\pi ct}{L}\right). \]
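The closed form for $b_n$ can be checked against numerical quadrature (the parameter values below are hypothetical samples, not from the text):

```python
import numpy as np

# b_n = 2/(n pi c) * int_{xi-d}^{xi+d} v sin(n pi x / L) dx
#     = 4 L v / (n^2 pi^2 c) * sin(n pi d / L) * sin(n pi xi / L)
L, c, v, xi, d = 2.0, 1.0, 0.3, 0.7, 0.2   # hypothetical sample values

def b_quad(n, N=100000):
    # midpoint-rule quadrature over the hammer width
    x = xi - d + (np.arange(N) + 0.5) * (2 * d / N)
    return 2.0 / (n * np.pi * c) * np.sum(v * np.sin(n * np.pi * x / L)) * (2 * d / N)

def b_exact(n):
    return (4 * L * v / (n ** 2 * np.pi ** 2 * c)
            * np.sin(n * np.pi * d / L) * np.sin(n * np.pi * xi / L))

for n in range(1, 8):
    assert abs(b_quad(n) - b_exact(n)) < 1e-9
```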
b) The form of the solution is again,
\[ u(x,t) = \sum_{n=1}^\infty b_n\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{n\pi ct}{L}\right). \]
We determine the coefficients in the expansion from the initial velocity.
\[ u_t(x,0) = \sum_{n=1}^\infty \frac{n\pi c}{L}b_n\sin\left(\frac{n\pi x}{L}\right) = \begin{cases} v\cos\left(\frac{\pi(x-\xi)}{2d}\right) & \text{for } |x-\xi| < d \\ 0 & \text{for } |x-\xi| > d \end{cases} \]
\[ \frac{n\pi c}{L}b_n = \frac{2}{L}\int_0^L u_t(x,0)\sin\left(\frac{n\pi x}{L}\right)dx \]
\[ b_n = \frac{2}{n\pi c}\int_{\xi-d}^{\xi+d} v\cos\left(\frac{\pi(x-\xi)}{2d}\right)\sin\left(\frac{n\pi x}{L}\right)dx \]
\[ b_n = \begin{cases} \dfrac{8dL^2v}{n\pi^2 c\,(L^2 - 4d^2n^2)}\cos\left(\dfrac{n\pi d}{L}\right)\sin\left(\dfrac{n\pi\xi}{L}\right) & \text{for } d \ne \dfrac{L}{2n}, \\[2ex] \dfrac{v}{n^2\pi^2 c}\left(2n\pi d + L\sin\left(\dfrac{2n\pi d}{L}\right)\right)\sin\left(\dfrac{n\pi\xi}{L}\right) & \text{for } d = \dfrac{L}{2n}. \end{cases} \]
The solution for $u(x,t)$ is,
\[ u(x,t) = \frac{8dL^2v}{\pi^2 c}\sum_{n=1}^\infty \frac{1}{n(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{n\pi ct}{L}\right) \quad \text{for } d \ne \frac{L}{2n}, \]
\[ u(x,t) = \frac{v}{\pi^2 c}\sum_{n=1}^\infty \frac{1}{n^2}\left(2n\pi d + L\sin\left(\frac{2n\pi d}{L}\right)\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{n\pi ct}{L}\right) \quad \text{for } d = \frac{L}{2n}. \]
c) The kinetic energy of the string is
\[ E = \frac{1}{2}\int_0^L \rho\,(u_t(x,t))^2\,dx, \]
where $\rho$ is the density of the string per unit length.

Flat Hammer. The $n^{\text{th}}$ harmonic is
\[ u_n = \frac{4Lv}{n^2\pi^2 c}\sin\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{n\pi ct}{L}\right). \]
The kinetic energy of the $n^{\text{th}}$ harmonic is
\[ E_n = \frac{\rho}{2}\int_0^L\left(\frac{\partial u_n}{\partial t}\right)^2 dx = \frac{4L\rho v^2}{n^2\pi^2}\sin^2\left(\frac{n\pi d}{L}\right)\sin^2\left(\frac{n\pi\xi}{L}\right)\cos^2\left(\frac{n\pi ct}{L}\right). \]
This will be maximized if
\[ \sin^2\left(\frac{n\pi\xi}{L}\right) = 1, \]
\[ \frac{n\pi\xi}{L} = \frac{(2m-1)\pi}{2}, \quad m = 1,\ldots,n, \]
\[ \xi = \frac{(2m-1)L}{2n}, \quad m = 1,\ldots,n. \]
We note that the kinetic energies of the $n^{\text{th}}$ harmonic decay as $1/n^2$.

Curved Hammer. We assume that $d \ne \frac{L}{2n}$. The $n^{\text{th}}$ harmonic is
\[ u_n = \frac{8dL^2v}{n\pi^2 c(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{n\pi ct}{L}\right). \]
The kinetic energy of the $n^{\text{th}}$ harmonic is
\[ E_n = \frac{\rho}{2}\int_0^L\left(\frac{\partial u_n}{\partial t}\right)^2 dx = \frac{16d^2L^3v^2\rho}{\pi^2(L^2-4d^2n^2)^2}\cos^2\left(\frac{n\pi d}{L}\right)\sin^2\left(\frac{n\pi\xi}{L}\right)\cos^2\left(\frac{n\pi ct}{L}\right). \]
This will be maximized if
\[ \sin^2\left(\frac{n\pi\xi}{L}\right) = 1, \quad \xi = \frac{(2m-1)L}{2n}, \quad m = 1,\ldots,n. \]
We note that the kinetic energies of the $n^{\text{th}}$ harmonic decay as $1/n^4$.
Solution 39.23
In mathematical notation, the problem is
\[ u_{tt} - c^2u_{xx} = s(x,t), \quad 0 < x < L, \quad t > 0, \]
\[ u(0,t) = u(L,t) = 0, \quad u(x,0) = u_t(x,0) = 0. \]
Since this is an inhomogeneous partial differential equation, we will expand the solution in a series of eigenfunctions in $x$ for which the coefficients are functions of $t$. The solution for $u$ has the form,
\[ u(x,t) = \sum_{n=1}^\infty u_n(t)\sin\left(\frac{n\pi x}{L}\right). \]
Substituting this expression into the inhomogeneous partial differential equation will give us ordinary differential equations for each of the $u_n$.
\[ \sum_{n=1}^\infty\left(u_n'' + c^2\left(\frac{n\pi}{L}\right)^2 u_n\right)\sin\left(\frac{n\pi x}{L}\right) = s(x,t). \]
We expand the right side in a series of the eigenfunctions.
\[ s(x,t) = \sum_{n=1}^\infty s_n(t)\sin\left(\frac{n\pi x}{L}\right). \]
For $0 < t < \delta$ we have
\[ s_n(t) = \frac{2}{L}\int_0^L s(x,t)\sin\left(\frac{n\pi x}{L}\right)dx = \frac{2}{L}\int_{\xi-d}^{\xi+d} v\cos\left(\frac{\pi(x-\xi)}{2d}\right)\sin\left(\frac{\pi t}{\delta}\right)\sin\left(\frac{n\pi x}{L}\right)dx \]
\[ = \frac{8dLv}{\pi(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{\pi t}{\delta}\right). \]
For $t > \delta$, $s_n(t) = 0$. Substituting this into the partial differential equation yields,
\[ u_n'' + \left(\frac{n\pi c}{L}\right)^2 u_n = \begin{cases} \dfrac{8dLv}{\pi(L^2-4d^2n^2)}\cos\left(\dfrac{n\pi d}{L}\right)\sin\left(\dfrac{n\pi\xi}{L}\right)\sin\left(\dfrac{\pi t}{\delta}\right) & \text{for } t < \delta, \\ 0 & \text{for } t > \delta. \end{cases} \]
Since the initial position and velocity of the string are zero, we have
\[ u_n(0) = u_n'(0) = 0. \]
First we solve the differential equation on the range $0 < t < \delta$. The homogeneous solutions are
\[ \cos\left(\frac{n\pi ct}{L}\right), \quad \sin\left(\frac{n\pi ct}{L}\right). \]
Since the right side of the ordinary differential equation is a constant times $\sin(\pi t/\delta)$, which is an eigenfunction of the differential operator, we can guess the form of a particular solution, $p_n(t)$.
\[ p_n(t) = A\sin\left(\frac{\pi t}{\delta}\right) \]
We substitute this into the ordinary differential equation to determine the multiplicative constant $A$.
\[ p_n(t) = -\frac{8d\delta^2L^3v}{\pi^3(L^2-c^2\delta^2n^2)(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{\pi t}{\delta}\right) \]
The general solution for $u_n(t)$ is
\[ u_n(t) = a\cos\left(\frac{n\pi ct}{L}\right) + b\sin\left(\frac{n\pi ct}{L}\right) - \frac{8d\delta^2L^3v}{\pi^3(L^2-c^2\delta^2n^2)(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{\pi t}{\delta}\right). \]
We use the initial conditions to determine the constants $a$ and $b$. The solution for $0 < t < \delta$ is
\[ u_n(t) = \frac{8d\delta^2L^3v}{\pi^3(L^2-c^2\delta^2n^2)(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\left(\frac{L}{cn\delta}\sin\left(\frac{n\pi ct}{L}\right) - \sin\left(\frac{\pi t}{\delta}\right)\right). \]
For $t > \delta$ the solution is a linear combination of the homogeneous solutions. This linear combination is determined by the position and velocity at $t = \delta$. We use the above solution to determine these quantities.
\[ u_n(\delta) = \frac{8d\delta L^4v}{\pi^3cn(L^2-c^2\delta^2n^2)(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\sin\left(\frac{n\pi c\delta}{L}\right) \]
\[ u_n'(\delta) = \frac{8d\delta L^3v}{\pi^2(L^2-c^2\delta^2n^2)(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right)\left(1 + \cos\left(\frac{n\pi c\delta}{L}\right)\right) \]
The fundamental set of solutions at $t = \delta$ is
\[ \left\{\cos\left(\frac{n\pi c(t-\delta)}{L}\right),\ \frac{L}{n\pi c}\sin\left(\frac{n\pi c(t-\delta)}{L}\right)\right\}. \]
From the initial conditions at $t = \delta$, we see that the solution for $t > \delta$ is
\[ u_n(t) = \frac{8d\delta L^4v}{\pi^3cn(L^2-c^2\delta^2n^2)(L^2-4d^2n^2)}\cos\left(\frac{n\pi d}{L}\right)\sin\left(\frac{n\pi\xi}{L}\right) \]
\[ \times\left(\sin\left(\frac{n\pi c\delta}{L}\right)\cos\left(\frac{n\pi c(t-\delta)}{L}\right) + \left(1 + \cos\left(\frac{n\pi c\delta}{L}\right)\right)\sin\left(\frac{n\pi c(t-\delta)}{L}\right)\right). \]
Width of the Hammer. The $n^{\text{th}}$ harmonic has the width dependent factor,
\[ \frac{d}{L^2-4d^2n^2}\cos\left(\frac{n\pi d}{L}\right). \]
Differentiating this expression and trying to find zeros to determine extrema would give us an equation with both algebraic and transcendental terms. Thus we don't attempt to find the maxima exactly. We know that $d < L$. The cosine factor is large when
\[ \frac{n\pi d}{L} \approx m\pi, \quad m = 1,2,\ldots,n-1, \]
\[ d \approx \frac{mL}{n}, \quad m = 1,2,\ldots,n-1. \]
Substituting $d = mL/n$ into the width dependent factor gives us
\[ \frac{d}{L^2(1-4m^2)}(-1)^m. \]
Thus we see that the amplitude of the $n^{\text{th}}$ harmonic, and hence its kinetic energy, will be maximized for
\[ d \approx \frac{L}{n}. \]
The cosine term in the width dependent factor vanishes when
\[ d = \frac{(2m-1)L}{2n}, \quad m = 1,2,\ldots,n. \]
The kinetic energy of the $n^{\text{th}}$ harmonic is minimized for these widths.
For the lower harmonics, $n \ll \frac{L}{2d}$, the kinetic energy is proportional to $d^2$; for the higher harmonics, $n \gg \frac{L}{2d}$, the kinetic energy is proportional to $1/d^2$.
Duration of the Blow. The $n^{\text{th}}$ harmonic has the duration dependent factor,
\[ \frac{\delta}{L^2-n^2c^2\delta^2}\left(\sin\left(\frac{n\pi c\delta}{L}\right)\cos\left(\frac{n\pi c(t-\delta)}{L}\right) + \left(1 + \cos\left(\frac{n\pi c\delta}{L}\right)\right)\sin\left(\frac{n\pi c(t-\delta)}{L}\right)\right). \]
If we assume that $\delta$ is small, then
\[ \sin\left(\frac{n\pi c\delta}{L}\right) \approx \frac{n\pi c\delta}{L} \]
and
\[ 1 + \cos\left(\frac{n\pi c\delta}{L}\right) \approx 2. \]
Thus the duration dependent factor is about,
\[ \frac{2\delta}{L^2-n^2c^2\delta^2}\sin\left(\frac{n\pi c(t-\delta)}{L}\right). \]
Thus for the lower harmonics, (those satisfying $n \ll \frac{L}{c\delta}$), the amplitude is proportional to $\delta$, which means that the kinetic energy is proportional to $\delta^2$. For the higher harmonics, (those with $n \gg \frac{L}{c\delta}$), the amplitude is proportional to $1/\delta$, which means that the kinetic energy is proportional to $1/\delta^2$.
Solution 39.24
Substituting $u(x,y,z,t) = v(x,y,z)\,e^{-i\omega t}$ into the wave equation will give us a Helmholtz equation.
\[ -\omega^2v\,e^{-i\omega t} - c^2(v_{xx}+v_{yy}+v_{zz})\,e^{-i\omega t} = 0 \]
\[ v_{xx}+v_{yy}+v_{zz}+k^2v = 0. \]
We find the propagating modes with separation of variables. We substitute $v = X(x)Y(y)Z(z)$ into the Helmholtz equation.
\[ X''YZ + XY''Z + XYZ'' + k^2XYZ = 0 \]
\[ -\frac{X''}{X} = \frac{Y''}{Y} + \frac{Z''}{Z} + k^2 = \lambda^2 \]
The eigenvalue problem in $x$ is
\[ X'' = -\lambda^2X, \quad X(0) = X(L) = 0, \]
which has the solutions,
\[ \lambda_n = \frac{n\pi}{L}, \quad X_n = \sin\left(\frac{n\pi x}{L}\right). \]
We continue with the separation of variables.
\[ -\frac{Y''}{Y} = \frac{Z''}{Z} + k^2 - \left(\frac{n\pi}{L}\right)^2 = \mu^2 \]
The eigenvalue problem in $y$ is
\[ Y'' = -\mu^2Y, \quad Y(0) = Y(L) = 0, \]
which has the solutions,
\[ \mu_m = \frac{m\pi}{L}, \quad Y_m = \sin\left(\frac{m\pi y}{L}\right). \]
Now we have an ordinary differential equation for $Z$,
\[ Z'' + \left(k^2 - \left(\frac{\pi}{L}\right)^2\left(n^2+m^2\right)\right)Z = 0. \]
We define the eigenvalues,
\[ \beta_{n,m}^2 = k^2 - \left(\frac{\pi}{L}\right)^2\left(n^2+m^2\right). \]
If $k^2 - \left(\frac{\pi}{L}\right)^2(n^2+m^2) < 0$, then the solutions for $Z$ are,
\[ \exp\left(\pm\sqrt{\left(\frac{\pi}{L}\right)^2(n^2+m^2) - k^2}\;z\right). \]
We discard this case, as the solutions are not bounded as $z \to \pm\infty$.
If $k^2 - \left(\frac{\pi}{L}\right)^2(n^2+m^2) = 0$, then the solutions for $Z$ are,
\[ 1, \quad z. \]
The solution $Z = 1$ satisfies the boundedness and nonzero condition at infinity. This corresponds to a standing wave.
If $k^2 - \left(\frac{\pi}{L}\right)^2(n^2+m^2) > 0$, then the solutions for $Z$ are,
\[ e^{\pm i\beta_{n,m}z}. \]
These satisfy the boundedness and nonzero conditions at infinity. For values of $n$, $m$ satisfying $k^2 - \left(\frac{\pi}{L}\right)^2(n^2+m^2) \ge 0$, there are the propagating modes,
\[ u_{n,m} = \sin\left(\frac{n\pi x}{L}\right)\sin\left(\frac{m\pi y}{L}\right)e^{i(\omega t - \beta_{n,m}z)}. \]
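The propagating-mode condition $k^2 > (\pi/L)^2(n^2+m^2)$ is an interior-of-a-circle condition on the mode indices and is easy to enumerate. The sketch below uses hypothetical values of $k$ and $L$ (not from the text) and lists the modes inside the cutoff.

```python
import math

# Modes propagate when n^2 + m^2 < (k L / pi)^2.
L = 1.0
k = 5.0 * math.pi / L            # hypothetical wavenumber; cutoff radius kL/pi = 5
limit = (k * L / math.pi) ** 2   # = 25

modes = [(n, m) for n in range(1, 10) for m in range(1, 10)
         if n ** 2 + m ** 2 < limit]
beta = {(n, m): math.sqrt(k ** 2 - (math.pi / L) ** 2 * (n ** 2 + m ** 2))
        for (n, m) in modes}

assert len(modes) == 13          # 13 propagating (n, m) pairs for this k
assert (1, 1) in modes and (3, 4) not in modes
assert all(b > 0 for b in beta.values())
```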
Solution 39.25
\[ u_{tt} = c^2\Delta u, \quad 0 < x < a, \quad 0 < y < b, \tag{39.10} \]
\[ u(0,y) = u(a,y) = u(x,0) = u(x,b) = 0. \]
We substitute the separation of variables $u(x,y,t) = X(x)Y(y)T(t)$ into Equation 39.10.
\[ \frac{T''}{c^2T} = \frac{X''}{X} + \frac{Y''}{Y} = -\nu \]
\[ \frac{X''}{X} = -\frac{Y''}{Y} - \nu = -\mu \]
This gives us differential equations for $X(x)$, $Y(y)$ and $T(t)$.
\[ X'' = -\mu X, \quad X(0) = X(a) = 0 \]
\[ Y'' = -(\nu-\mu)Y, \quad Y(0) = Y(b) = 0 \]
\[ T'' = -c^2\nu T \]
First we solve the problem for $X$.
\[ \mu_m = \left(\frac{m\pi}{a}\right)^2, \quad X_m = \sin\left(\frac{m\pi x}{a}\right) \]
Then we solve the problem for $Y$.
\[ \nu_{m,n} = \left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2, \quad Y_{m,n} = \sin\left(\frac{n\pi y}{b}\right) \]
Finally we determine $T$.
\[ T_{m,n} = \left\{\begin{matrix}\cos\\ \sin\end{matrix}\right\}\left(c\sqrt{\left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2}\;t\right) \]
The modes of oscillation are
\[ u_{m,n} = \sin\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right)\left\{\begin{matrix}\cos\\ \sin\end{matrix}\right\}\left(c\sqrt{\left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2}\;t\right). \]
The frequencies are
\[ \omega_{m,n} = c\sqrt{\left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2}. \]
Figure 39.4 shows a few of the modes of oscillation in surface and density plots.
Solution 39.26
We substitute the separation of variables $\phi = X(x)Y(y)T(t)$ into the differential equation
\[ \phi_t = a^2(\phi_{xx} + \phi_{yy}) \tag{39.11} \]
\[ XYT' = a^2(X''YT + XY''T) \]
\[ \frac{T'}{a^2T} = \frac{X''}{X} + \frac{Y''}{Y} = -\lambda \]
\[ \frac{T'}{a^2T} = -\lambda, \quad \frac{X''}{X} = -\frac{Y''}{Y} - \lambda = -\mu \]
First we solve the eigenvalue problem for $X$.
\[ X'' + \mu X = 0, \quad X(0) = X(l_x) = 0 \]
\[ \mu_m = \left(\frac{m\pi}{l_x}\right)^2, \quad X_m(x) = \sin\left(\frac{m\pi x}{l_x}\right), \quad m \in \mathbb{Z}^+ \]
(Figure 39.4 shows surface and density plots of the modes $u_{m,n}$ for $m, n = 1, 2, 3$.)
Figure 39.4: The modes of oscillation of a rectangular drum head.
Then we solve the eigenvalue problem for $Y$.
\[ Y'' + (\lambda - \mu_m)Y = 0, \quad Y'(0) = Y'(l_y) = 0 \]
\[ \lambda_{mn} = \mu_m + \left(\frac{n\pi}{l_y}\right)^2, \quad Y_{mn}(y) = \cos\left(\frac{n\pi y}{l_y}\right), \quad n \in \mathbb{Z}^{0+} \]
Next we solve the differential equation for $T$, (up to a multiplicative constant).
\[ T' = -a^2\lambda_{mn}T \]
\[ T(t) = \exp\left(-a^2\lambda_{mn}t\right) \]
The eigensolutions of Equation 39.11 are
\[ \sin\left(\sqrt{\mu_m}\,x\right)\cos\left(\frac{n\pi y}{l_y}\right)\exp\left(-a^2\lambda_{mn}t\right), \quad m \in \mathbb{Z}^+, \quad n \in \mathbb{Z}^{0+}. \]
We choose the eigensolutions $\phi_{mn}$ to be orthonormal on the $xy$ domain at $t = 0$.
\[ \phi_{m0}(x,y,t) = \sqrt{\frac{2}{l_xl_y}}\sin\left(\sqrt{\mu_m}\,x\right)\exp\left(-a^2\lambda_{m0}t\right), \quad m \in \mathbb{Z}^+ \]
\[ \phi_{mn}(x,y,t) = \frac{2}{\sqrt{l_xl_y}}\sin\left(\sqrt{\mu_m}\,x\right)\cos\left(\frac{n\pi y}{l_y}\right)\exp\left(-a^2\lambda_{mn}t\right), \quad m \in \mathbb{Z}^+, \quad n \in \mathbb{Z}^+ \]
The solution of Equation 39.11 is a linear combination of the eigensolutions.
\[ \phi(x,y,t) = \sum_{m=1}^\infty\sum_{n=0}^\infty c_{mn}\phi_{mn}(x,y,t) \]
We determine the coefficients from the initial condition.
\[ \phi(x,y,0) = 1 \]
\[ \sum_{m=1}^\infty\sum_{n=0}^\infty c_{mn}\phi_{mn}(x,y,0) = 1 \]
\[ c_{mn} = \int_0^{l_x}\int_0^{l_y}\phi_{mn}(x,y,0)\,dy\,dx \]
\[ c_{m0} = \sqrt{\frac{2}{l_xl_y}}\int_0^{l_x}\int_0^{l_y}\sin\left(\sqrt{\mu_m}\,x\right)dy\,dx = \sqrt{2l_xl_y}\,\frac{1-(-1)^m}{m\pi}, \quad m \in \mathbb{Z}^+ \]
\[ c_{mn} = \frac{2}{\sqrt{l_xl_y}}\int_0^{l_x}\int_0^{l_y}\sin\left(\sqrt{\mu_m}\,x\right)\cos\left(\frac{n\pi y}{l_y}\right)dy\,dx = 0, \quad m \in \mathbb{Z}^+, \quad n \in \mathbb{Z}^+ \]
\[ \phi(x,y,t) = \sum_{m=1}^\infty c_{m0}\phi_{m0}(x,y,t) \]
\[ \phi(x,y,t) = \sum_{\substack{m=1\\ \text{odd } m}}^\infty \frac{4}{m\pi}\sin\left(\frac{m\pi x}{l_x}\right)\exp\left(-a^2\mu_m t\right) \]
Addendum. Note that an equivalent problem to the one specified is
\[ \phi_t = a^2(\phi_{xx}+\phi_{yy}), \quad 0 < x < l_x, \quad -\infty < y < \infty, \]
\[ \phi(x,y,0) = 1, \quad \phi(0,y,t) = \phi(l_x,y,t) = 0. \]
Here we have done an even periodic continuation of the problem in the $y$ variable. Thus the boundary conditions
\[ \phi_y(x,0,t) = \phi_y(x,l_y,t) = 0 \]
are automatically satisfied. Note that this problem does not depend on $y$. Thus we only had to solve
\[ \phi_t = a^2\phi_{xx}, \quad 0 < x < l_x, \]
\[ \phi(x,0) = 1, \quad \phi(0,t) = \phi(l_x,t) = 0. \]
Solution 39.27
1. Since the initial and boundary conditions do not depend on $\theta$, neither does $\phi$. We apply the separation of variables $\phi = u(r)T(t)$.
\[ \phi_t = a^2\Delta\phi \tag{39.12} \]
\[ \phi_t = a^2\frac{1}{r}(r\phi_r)_r \tag{39.13} \]
\[ \frac{T'}{a^2T} = \frac{(ru')'}{ru} = -\lambda \tag{39.14} \]
We solve the eigenvalue problem for $u(r)$.
\[ (ru')' + \lambda ru = 0, \quad u(0) \text{ bounded}, \quad u(R) = 0 \]
First we write the general solution.
\[ u(r) = c_1J_0\left(\sqrt{\lambda}\,r\right) + c_2Y_0\left(\sqrt{\lambda}\,r\right) \]
The Bessel function of the second kind, $Y_0$, is not bounded at $r = 0$, so $c_2 = 0$. We use the boundary condition at $r = R$ to determine the eigenvalues.
\[ \lambda_n = \left(\frac{j_{0,n}}{R}\right)^2, \quad u_n(r) = cJ_0\left(\frac{j_{0,n}r}{R}\right) \]
We choose the constant $c$ so that the eigenfunctions are orthonormal with respect to the weighting function $r$.
\[ u_n(r) = \frac{J_0\left(\frac{j_{0,n}r}{R}\right)}{\sqrt{\int_0^R rJ_0^2\left(\frac{j_{0,n}r}{R}\right)dr}} = \frac{\sqrt{2}}{RJ_1(j_{0,n})}J_0\left(\frac{j_{0,n}r}{R}\right) \]
Now we solve the differential equation for $T$.
\[ T' = -a^2\lambda_nT \]
\[ T_n = \exp\left(-\left(\frac{aj_{0,n}}{R}\right)^2t\right) \]
The eigensolutions of Equation 39.12 are
\[ \phi_n(r,t) = \frac{\sqrt{2}}{RJ_1(j_{0,n})}J_0\left(\frac{j_{0,n}r}{R}\right)\exp\left(-\left(\frac{aj_{0,n}}{R}\right)^2t\right) \]
The solution is a linear combination of the eigensolutions.
\[ \phi = \sum_{n=1}^\infty c_n\frac{\sqrt{2}}{RJ_1(j_{0,n})}J_0\left(\frac{j_{0,n}r}{R}\right)\exp\left(-\left(\frac{aj_{0,n}}{R}\right)^2t\right) \]
We determine the coefficients from the initial condition.
\[ \phi(r,\theta,0) = V \]
\[ \sum_{n=1}^\infty c_n\frac{\sqrt{2}}{RJ_1(j_{0,n})}J_0\left(\frac{j_{0,n}r}{R}\right) = V \]
\[ c_n = \int_0^R Vr\,\frac{\sqrt{2}}{RJ_1(j_{0,n})}J_0\left(\frac{j_{0,n}r}{R}\right)dr = V\frac{\sqrt{2}}{RJ_1(j_{0,n})}\frac{R^2}{j_{0,n}}J_1(j_{0,n}) = \frac{\sqrt{2}\,VR}{j_{0,n}} \]
\[ \phi(r,\theta,t) = 2V\sum_{n=1}^\infty \frac{J_0\left(\frac{j_{0,n}r}{R}\right)}{j_{0,n}J_1(j_{0,n})}\exp\left(-\left(\frac{aj_{0,n}}{R}\right)^2t\right) \]
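The Bessel-function facts used above can be verified without a special-function library, using the integral representation $J_\nu(x) = \frac{1}{\pi}\int_0^\pi\cos(\nu\tau - x\sin\tau)\,d\tau$ for integer $\nu$ (a numerical sketch, assuming NumPy):

```python
import numpy as np

NQ = 400
T = ((np.arange(NQ) + 0.5) * np.pi / NQ)[:, None]   # quadrature nodes

def J(nu, x):
    """J_nu(x) via (1/pi) int_0^pi cos(nu*t - x*sin(t)) dt, midpoint rule."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.mean(np.cos(nu * T - np.sin(T) * x), axis=0)

# First zero of J0, j_{0,1} ~ 2.404826, located by bisection.
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if J(0, lo)[0] * J(0, mid)[0] <= 0.0:
        hi = mid
    else:
        lo = mid
j01 = 0.5 * (lo + hi)
assert abs(j01 - 2.404826) < 1e-5

# Identity behind c_n:  int_0^R r J0(j_{0,n} r/R) dr = R^2 J1(j_{0,n}) / j_{0,n}.
R, N = 2.0, 10000
r = (np.arange(N) + 0.5) * R / N
lhs = np.sum(r * J(0, j01 * r / R)) * (R / N)
rhs = R ** 2 * J(1, j01)[0] / j01
assert abs(lhs - rhs) < 1e-6
```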
2. We use the asymptotic forms
\[ J_\nu(r) \sim \sqrt{\frac{2}{\pi r}}\cos\left(r - \frac{\nu\pi}{2} - \frac{\pi}{4}\right), \quad r \to +\infty, \]
\[ j_{\nu,n} \sim \left(n + \frac{\nu}{2} - \frac{1}{4}\right)\pi. \]
For large $n$, the terms in the series solution at $t = 0$ are
\[ \frac{J_0\left(\frac{j_{0,n}r}{R}\right)}{j_{0,n}J_1(j_{0,n})} \sim \frac{\sqrt{\frac{2R}{\pi j_{0,n}r}}\cos\left(\frac{j_{0,n}r}{R} - \frac{\pi}{4}\right)}{j_{0,n}\sqrt{\frac{2}{\pi j_{0,n}}}\cos\left(j_{0,n} - \frac{3\pi}{4}\right)} \sim \sqrt{\frac{R}{r}}\,\frac{\cos\left(\frac{(n-1/4)\pi r}{R} - \frac{\pi}{4}\right)}{(n-1/4)\pi\cos\left((n-1)\pi\right)}. \]
The coefficients decay as $1/n$.
Solution 39.28
1. We substitute the separation of variables $\phi = T(t)\Theta(\theta)\Phi(\varphi)$ into Equation 39.7.
\[ T'\Theta\Phi = \frac{a^2}{R^2}\left(\frac{1}{\sin\theta}\left(\sin\theta\,\Theta'\right)'T\Phi + \frac{1}{\sin^2\theta}\Theta\Phi''T\right) \]
\[ \frac{R^2T'}{a^2T} = \frac{\left(\sin\theta\,\Theta'\right)'}{\sin\theta\,\Theta} + \frac{1}{\sin^2\theta}\frac{\Phi''}{\Phi} = -\mu \]
\[ \frac{\sin\theta\left(\sin\theta\,\Theta'\right)'}{\Theta} + \mu\sin^2\theta = -\frac{\Phi''}{\Phi} = \nu \]
We have differential equations for each of $T$, $\Theta$ and $\Phi$.
\[ T' = -\mu\frac{a^2}{R^2}T, \quad \frac{1}{\sin\theta}\left(\sin\theta\,\Theta'\right)' + \left(\mu - \frac{\nu}{\sin^2\theta}\right)\Theta = 0, \quad \Phi'' + \nu\Phi = 0 \]
2. In order that the solution be continuously differentiable, we need the periodic boundary conditions
\[ \Phi(0) = \Phi(2\pi), \quad \Phi'(0) = \Phi'(2\pi). \]
The eigenvalues and eigenfunctions for $\Phi$ are
\[ \nu_n = n^2, \quad \Phi_n = \frac{1}{\sqrt{2\pi}}e^{in\varphi}, \quad n \in \mathbb{Z}. \]
Now we deal with the equation for $\Theta$. We make the change of variables
\[ x = \cos\theta, \quad \Theta(\theta) = P(x), \quad \sin^2\theta = 1-x^2, \quad \frac{d}{dx} = -\frac{1}{\sin\theta}\frac{d}{d\theta}. \]
\[ \left(\left(1-x^2\right)P'\right)' + \left(\mu - \frac{n^2}{1-x^2}\right)P = 0 \]
$P(x)$ should be bounded at the endpoints, $x = -1$ and $x = 1$.
3. If the solution does not depend on $\varphi$, then the only one of the $\Phi_n$ that will appear in the solution is $\Phi_0 = 1/\sqrt{2\pi}$. The equations for $P$ and $T$ become
\[ \left(\left(1-x^2\right)P'\right)' + \mu P = 0, \quad P(\pm 1) \text{ bounded}, \]
\[ T' = -\mu\frac{a^2}{R^2}T. \]
The solutions for $P$ are the Legendre polynomials.
\[ \mu_l = l(l+1), \quad P_l(\cos\theta), \quad l \in \mathbb{Z}^{0+} \]
We solve the differential equation for $T$.
\[ T' = -l(l+1)\frac{a^2}{R^2}T \]
\[ T_l = \exp\left(-\frac{a^2l(l+1)}{R^2}t\right) \]
The eigensolutions of the partial differential equation are
\[ \phi_l = P_l(\cos\theta)\exp\left(-\frac{a^2l(l+1)}{R^2}t\right). \]
The solution is a linear combination of the eigensolutions.
\[ \phi = \sum_{l=0}^\infty A_lP_l(\cos\theta)\exp\left(-\frac{a^2l(l+1)}{R^2}t\right) \]
4. We determine the coefficients in the expansion from the initial condition.
\[ \phi(\theta,0) = 2\cos^2\theta - 1 \]
\[ \sum_{l=0}^\infty A_lP_l(\cos\theta) = 2\cos^2\theta - 1 \]
\[ A_0 + A_1\cos\theta + A_2\left(\frac{3}{2}\cos^2\theta - \frac{1}{2}\right) + \cdots = 2\cos^2\theta - 1 \]
\[ A_0 = -\frac{1}{3}, \quad A_1 = 0, \quad A_2 = \frac{4}{3}, \quad A_3 = A_4 = \cdots = 0 \]
\[ \phi(\theta,t) = -\frac{1}{3}P_0(\cos\theta) + \frac{4}{3}P_2(\cos\theta)\exp\left(-\frac{6a^2}{R^2}t\right) \]
\[ \phi(\theta,t) = -\frac{1}{3} + \left(2\cos^2\theta - \frac{2}{3}\right)\exp\left(-\frac{6a^2}{R^2}t\right) \]
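The Legendre coefficients follow from $2\cos^2\theta - 1 = -\frac{1}{3}P_0(x) + \frac{4}{3}P_2(x)$ with $x = \cos\theta$; NumPy's Legendre module confirms this conversion:

```python
import numpy as np
from numpy.polynomial import legendre

# Power-series coefficients of 2x^2 - 1 (constant term first), converted to
# Legendre coefficients; expect A0 = -1/3, A1 = 0, A2 = 4/3.
A = legendre.poly2leg([-1.0, 0.0, 2.0])
assert np.allclose(A, [-1.0 / 3.0, 0.0, 4.0 / 3.0])

# Sanity check: the t -> infinity limit of the solution is the constant mode,
# the mean of 2x^2 - 1 over x in [-1, 1], which is -1/3.
N = 100000
x = -1.0 + (np.arange(N) + 0.5) * (2.0 / N)   # midpoint nodes
avg = np.mean(2 * x ** 2 - 1)
assert abs(avg + 1.0 / 3.0) < 1e-8
```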
Solution 39.29
Since we have homogeneous boundary conditions at $x = 0$ and $x = 1$, we will expand the solution in a series of eigenfunctions in $x$. We determine a suitable set of eigenfunctions with the separation of variables, $\phi = X(x)Y(y)$.
\[ \phi_{xx} + \phi_{yy} = 0 \tag{39.15} \]
\[ \frac{X''}{X} = -\frac{Y''}{Y} = -\lambda \]
We have differential equations for $X$ and $Y$.
\[ X'' + \lambda X = 0, \quad X(0) = X(1) = 0 \]
\[ Y'' - \lambda Y = 0, \quad Y(0) = 0 \]
The eigenvalues and orthonormal eigenfunctions for $X$ are
\[ \lambda_n = (n\pi)^2, \quad X_n(x) = \sqrt{2}\sin(n\pi x), \quad n \in \mathbb{Z}^+. \]
The solutions for $Y$ are, (up to a multiplicative constant),
\[ Y_n(y) = \sinh(n\pi y). \]
The solution of Equation 39.15 is a linear combination of the eigensolutions.
\[ \phi(x,y) = \sum_{n=1}^\infty a_n\sqrt{2}\sin(n\pi x)\sinh(n\pi y) \]
We determine the coefficients from the boundary condition at $y = 2$.
\[ x(1-x) = \sum_{n=1}^\infty a_n\sqrt{2}\sin(n\pi x)\sinh(2n\pi) \]
\[ a_n\sinh(2n\pi) = \sqrt{2}\int_0^1 x(1-x)\sin(n\pi x)\,dx \]
\[ a_n = \frac{2\sqrt{2}\left(1-(-1)^n\right)}{n^3\pi^3\sinh(2n\pi)} \]
\[ \phi(x,y) = \frac{8}{\pi^3}\sum_{\substack{n=1\\ \text{odd } n}}^\infty \frac{1}{n^3}\sin(n\pi x)\frac{\sinh(n\pi y)}{\sinh(2n\pi)} \]
The solution at $x = 1/2$, $y = 1$ is
\[ \phi(1/2,1) = \frac{8}{\pi^3}\sum_{\substack{n=1\\ \text{odd } n}}^\infty \frac{\sin(n\pi/2)}{n^3}\frac{\sinh(n\pi)}{\sinh(2n\pi)}. \]
Let $R_k$ be the relative error at that point incurred by taking $k$ terms.
\[ R_k = \left|\frac{\displaystyle\sum_{\substack{n=k+2\\ \text{odd } n}}^\infty \frac{\sin(n\pi/2)}{n^3}\frac{\sinh(n\pi)}{\sinh(2n\pi)}}{\displaystyle\sum_{\substack{n=1\\ \text{odd } n}}^\infty \frac{\sin(n\pi/2)}{n^3}\frac{\sinh(n\pi)}{\sinh(2n\pi)}}\right| \]
Since $R_1 \approx 0.0000693169$, we see that one term is sufficient for 1% or 0.1% accuracy.
Now consider $\phi_x(1/2,1)$.
\[ \phi_x(x,y) = \frac{8}{\pi^2}\sum_{\substack{n=1\\ \text{odd } n}}^\infty \frac{1}{n^2}\cos(n\pi x)\frac{\sinh(n\pi y)}{\sinh(2n\pi)} \]
\[ \phi_x(1/2,1) = 0 \]
Since all the terms in the series are zero, accuracy is not an issue.
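The quoted size of $R_1$ can be reproduced directly, writing $\sinh(n\pi)/\sinh(2n\pi)$ in terms of decaying exponentials to avoid overflow (a numerical sketch, assuming NumPy):

```python
import numpy as np

def term(n):
    # sin(n pi/2)/n^3 * sinh(n pi)/sinh(2 n pi), with
    # sinh(n pi)/sinh(2 n pi) = e^{-n pi} (1 - e^{-2 n pi}) / (1 - e^{-4 n pi})
    ratio = (np.exp(-n * np.pi) * (1 - np.exp(-2 * n * np.pi))
             / (1 - np.exp(-4 * n * np.pi)))
    return np.sin(n * np.pi / 2) / n ** 3 * ratio

total = sum(term(n) for n in range(1, 100, 2))
R1 = abs((total - term(1)) / total)
assert 6.8e-5 < R1 < 7.0e-5     # consistent with R_1 ~ 0.0000693
```

The common prefactor $8/\pi^3$ cancels in the ratio, so it is omitted.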
Solution 39.30
The solution has the form
\[ \phi = \begin{cases} \alpha r^{-n-1}P_n^m(\cos\theta)\sin(m\varphi), & r > a \\ \beta r^nP_n^m(\cos\theta)\sin(m\varphi), & r < a. \end{cases} \]
The boundary condition on $\phi$ at $r = a$ gives us the constraint
\[ \alpha a^{-n-1} - \beta a^n = 0, \quad \alpha = \beta a^{2n+1}. \]
Then we apply the boundary condition on $\phi_r$ at $r = a$.
\[ -(n+1)\alpha a^{-n-2} - n\beta a^{n-1} = 1 \]
\[ \beta = -\frac{a^{1-n}}{2n+1}, \quad \alpha = -\frac{a^{n+2}}{2n+1} \]
\[ \phi = \begin{cases} -\dfrac{a^{n+2}}{2n+1}r^{-n-1}P_n^m(\cos\theta)\sin(m\varphi), & r > a \\[1ex] -\dfrac{a^{1-n}}{2n+1}r^nP_n^m(\cos\theta)\sin(m\varphi), & r < a. \end{cases} \]
Solution 39.31
We expand the solution in a Fourier series.
\[ \phi = \frac{1}{2}a_0(r) + \sum_{n=1}^\infty a_n(r)\cos(n\theta) + \sum_{n=1}^\infty b_n(r)\sin(n\theta) \]
We substitute the series into Laplace's equation to determine ordinary differential equations for the coefficients.
\[ \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\phi}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2\phi}{\partial\theta^2} = 0 \]
\[ a_0'' + \frac{1}{r}a_0' = 0, \quad a_n'' + \frac{1}{r}a_n' - \frac{n^2}{r^2}a_n = 0, \quad b_n'' + \frac{1}{r}b_n' - \frac{n^2}{r^2}b_n = 0 \]
The solutions that are bounded at $r = 0$ are, (to within multiplicative constants),
\[ a_0(r) = 1, \quad a_n(r) = r^n, \quad b_n(r) = r^n. \]
Thus $\phi(r,\theta)$ has the form
\[ \phi(r,\theta) = \frac{1}{2}c_0 + \sum_{n=1}^\infty c_nr^n\cos(n\theta) + \sum_{n=1}^\infty d_nr^n\sin(n\theta) \]
We apply the boundary condition at $r = R$.
\[ \phi_r(R,\theta) = \sum_{n=1}^\infty nc_nR^{n-1}\cos(n\theta) + \sum_{n=1}^\infty nd_nR^{n-1}\sin(n\theta) \]
In order that $\phi_r(R,\theta)$ have a Fourier series of this form, it is necessary that
\[ \int_0^{2\pi}\phi_r(R,\theta)\,d\theta = 0. \]
In that case $c_0$ is arbitrary in our solution. The coefficients are
\[ c_n = \frac{1}{n\pi R^{n-1}}\int_0^{2\pi}\phi_r(R,\theta)\cos(n\theta)\,d\theta, \quad d_n = \frac{1}{n\pi R^{n-1}}\int_0^{2\pi}\phi_r(R,\theta)\sin(n\theta)\,d\theta. \]
We substitute the coefficients into our series solution to determine it up to the additive constant.
\[ \phi(r,\theta) = \frac{R}{\pi}\sum_{n=1}^\infty\frac{1}{n}\left(\frac{r}{R}\right)^n\int_0^{2\pi}\phi_r(R,\psi)\cos(n(\theta-\psi))\,d\psi \]
\[ \phi(r,\theta) = \frac{R}{\pi}\int_0^{2\pi}\phi_r(R,\psi)\sum_{n=1}^\infty\frac{1}{n}\left(\frac{r}{R}\right)^n\cos(n(\theta-\psi))\,d\psi \]
\[ \phi(r,\theta) = \frac{R}{\pi}\int_0^{2\pi}\phi_r(R,\psi)\,\Re\left[\int_0^r\frac{1}{\rho}\sum_{n=1}^\infty\left(\frac{\rho}{R}\right)^ne^{in(\theta-\psi)}\,d\rho\right]d\psi \]
\[ \phi(r,\theta) = \frac{R}{\pi}\int_0^{2\pi}\phi_r(R,\psi)\,\Re\left[\int_0^r\frac{\frac{1}{R}e^{i(\theta-\psi)}}{1-\frac{\rho}{R}e^{i(\theta-\psi)}}\,d\rho\right]d\psi \]
\[ \phi(r,\theta) = \frac{R}{\pi}\int_0^{2\pi}\phi_r(R,\psi)\,\Re\left[-\log\left(1-\frac{r}{R}e^{i(\theta-\psi)}\right)\right]d\psi \]
\[ \phi(r,\theta) = -\frac{R}{\pi}\int_0^{2\pi}\phi_r(R,\psi)\log\left|1-\frac{r}{R}e^{i(\theta-\psi)}\right|d\psi \]
\[ \phi(r,\theta) = -\frac{R}{2\pi}\int_0^{2\pi}\phi_r(R,\psi)\log\left(1 - \frac{2r}{R}\cos(\theta-\psi) + \frac{r^2}{R^2}\right)d\psi \]
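The summation that produces the logarithmic kernel, $\sum_{n\ge1}\frac{x^n}{n}\cos(n\alpha) = -\frac{1}{2}\log(1 - 2x\cos\alpha + x^2)$ for $|x| < 1$, can be checked numerically:

```python
import numpy as np

# Compare truncated series against the closed form for several (x, alpha).
for x in (0.3, 0.7):
    for alpha in (0.5, 2.0):
        n = np.arange(1, 2000)
        series = np.sum(x ** n * np.cos(n * alpha) / n)
        closed = -0.5 * np.log(1 - 2 * x * np.cos(alpha) + x ** 2)
        assert abs(series - closed) < 1e-10
```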
Solution 39.32
We will assume that both $\alpha$ and $\beta$ are nonzero. The cases of real and pure imaginary separation constants have already been covered. We solve the ordinary differential equations, (up to a multiplicative constant), to find special solutions of the diffusion equation.
\[ \frac{T'}{T} = (\alpha+i\beta)^2, \quad \frac{X''}{X} = \frac{(\alpha+i\beta)^2}{a^2} \]
\[ T = \exp\left((\alpha+i\beta)^2t\right), \quad X = \exp\left(\pm\frac{\alpha+i\beta}{a}x\right) \]
\[ T = \exp\left(\left(\alpha^2-\beta^2\right)t + i2\alpha\beta t\right), \quad X = \exp\left(\pm\frac{\alpha}{a}x \pm i\frac{\beta}{a}x\right) \]
\[ \phi = \exp\left(\left(\alpha^2-\beta^2\right)t \pm \frac{\alpha}{a}x + i\left(2\alpha\beta t \pm \frac{\beta}{a}x\right)\right) \]
We take the sum and difference of these solutions to obtain
\[ \phi = \exp\left(\left(\alpha^2-\beta^2\right)t \pm \frac{\alpha}{a}x\right)\left\{\begin{matrix}\cos\\ \sin\end{matrix}\right\}\left(2\alpha\beta t \pm \frac{\beta}{a}x\right). \]
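That these really solve the diffusion equation $\phi_t = a^2\phi_{xx}$ can be confirmed with finite differences (the values of $\alpha$, $\beta$ and $a$ below are samples, not from the text):

```python
import numpy as np

# phi = exp((al^2 - be^2) t - (al/a) x) cos(2 al be t - (be/a) x)
al, be, a = 0.4, 0.9, 1.3       # sample separation constants and diffusivity

def phi(x, t):
    return (np.exp((al ** 2 - be ** 2) * t - (al / a) * x)
            * np.cos(2 * al * be * t - (be / a) * x))

x0, t0, h = 0.8, 0.5, 1e-5
phi_t = (phi(x0, t0 + h) - phi(x0, t0 - h)) / (2 * h)
phi_xx = (phi(x0 + h, t0) - 2 * phi(x0, t0) + phi(x0 - h, t0)) / h ** 2
assert abs(phi_t - a ** 2 * phi_xx) < 1e-4
```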
Chapter 40
Finite Transforms
Example 40.0.1 Consider the problem
\[ \Delta u - \frac{1}{c^2}\frac{\partial^2u}{\partial t^2} = \delta(x-\xi)\delta(y-\eta)\,e^{-i\omega t} \quad \text{on } -\infty < x < \infty, \quad 0 < y < b, \]
with
\[ u_y(x,0,t) = u_y(x,b,t) = 0. \]
Substituting $u(x,y,t) = v(x,y)\,e^{-i\omega t}$ into the partial differential equation yields the problem
\[ \Delta v + k^2v = \delta(x-\xi)\delta(y-\eta) \quad \text{on } -\infty < x < \infty, \quad 0 < y < b, \]
with
\[ v_y(x,0) = v_y(x,b) = 0. \]
We assume that the solution has the form
\[ v(x,y) = \frac{1}{2}c_0(x) + \sum_{n=1}^\infty c_n(x)\cos\left(\frac{n\pi y}{b}\right), \tag{40.1} \]
and apply a finite cosine transform in the $y$ direction. Integrating from $0$ to $b$ yields
\[ \int_0^b\left(v_{xx}+v_{yy}+k^2v\right)dy = \int_0^b\delta(x-\xi)\delta(y-\eta)\,dy, \]
\[ \left[v_y\right]_0^b + \int_0^b\left(v_{xx}+k^2v\right)dy = \delta(x-\xi), \]
\[ \int_0^b\left(v_{xx}+k^2v\right)dy = \delta(x-\xi). \]
Substituting in Equation 40.1 and using the orthogonality of the cosines gives us
\[ c_0''(x) + k^2c_0(x) = \frac{2}{b}\delta(x-\xi). \]
Multiplying by $\cos(n\pi y/b)$ and integrating from $0$ to $b$ yields
\[ \int_0^b\left(v_{xx}+v_{yy}+k^2v\right)\cos\left(\frac{n\pi y}{b}\right)dy = \int_0^b\delta(x-\xi)\delta(y-\eta)\cos\left(\frac{n\pi y}{b}\right)dy. \]
The $v_{yy}$ term becomes
\[ \int_0^bv_{yy}\cos\left(\frac{n\pi y}{b}\right)dy = \left[v_y\cos\left(\frac{n\pi y}{b}\right)\right]_0^b + \int_0^b\frac{n\pi}{b}v_y\sin\left(\frac{n\pi y}{b}\right)dy \]
\[ = \left[\frac{n\pi}{b}v\sin\left(\frac{n\pi y}{b}\right)\right]_0^b - \int_0^b\left(\frac{n\pi}{b}\right)^2v\cos\left(\frac{n\pi y}{b}\right)dy. \]
The right-hand-side becomes
\[ \int_0^b\delta(x-\xi)\delta(y-\eta)\cos\left(\frac{n\pi y}{b}\right)dy = \delta(x-\xi)\cos\left(\frac{n\pi\eta}{b}\right). \]
Thus the partial differential equation becomes
\[ \int_0^b\left(v_{xx} - \left(\frac{n\pi}{b}\right)^2v + k^2v\right)\cos\left(\frac{n\pi y}{b}\right)dy = \delta(x-\xi)\cos\left(\frac{n\pi\eta}{b}\right). \]
Substituting in Equation 40.1 and using the orthogonality of the cosines gives us
\[ c_n''(x) + \left(k^2 - \left(\frac{n\pi}{b}\right)^2\right)c_n(x) = \frac{2}{b}\delta(x-\xi)\cos\left(\frac{n\pi\eta}{b}\right). \]
Now we need to solve for the coefficients in the expansion of $v(x,y)$. The homogeneous solutions for $c_0(x)$ are $e^{\pm ikx}$. The solution for $u(x,y,t)$ must satisfy the radiation condition. The waves at $x = -\infty$ travel to the left and the waves at $x = +\infty$ travel to the right. The two solutions that will satisfy these conditions are, respectively,
\[ y_1 = e^{-ikx}, \quad y_2 = e^{ikx}. \]
The Wronskian of these two solutions is $2ik$. Thus the solution for $c_0(x)$ is
\[ c_0(x) = \frac{e^{-ikx_<}\,e^{ikx_>}}{ibk}. \]
We need to consider three cases for the equation for $c_n$.

$k > n\pi/b$: Let $\lambda = \sqrt{k^2 - (n\pi/b)^2}$. The homogeneous solutions that satisfy the radiation condition are
\[ y_1 = e^{-i\lambda x}, \quad y_2 = e^{i\lambda x}. \]
The Wronskian of the two solutions is $2i\lambda$. Thus the solution is
\[ c_n(x) = \frac{e^{-i\lambda x_<}\,e^{i\lambda x_>}}{ib\lambda}\cos\left(\frac{n\pi\eta}{b}\right). \]
In the case that $\cos(n\pi\eta/b) = 0$ this reduces to the trivial solution.

$k = n\pi/b$: The homogeneous solutions that are bounded at infinity are
\[ y_1 = 1, \quad y_2 = 1. \]
If the right-hand-side is nonzero there is no way to combine these solutions to satisfy both the continuity and the derivative jump conditions. Thus if $\cos(n\pi\eta/b) \ne 0$ there is no bounded solution. If $\cos(n\pi\eta/b) = 0$ then the solution is not unique.
\[ c_n(x) = \text{const}. \]

$k < n\pi/b$: Let $\lambda = \sqrt{(n\pi/b)^2 - k^2}$. The homogeneous solutions that are bounded at infinity are
\[ y_1 = e^{\lambda x}, \quad y_2 = e^{-\lambda x}. \]
The Wronskian of these solutions is $-2\lambda$. Thus the solution is
\[ c_n(x) = -\frac{e^{\lambda x_<}\,e^{-\lambda x_>}}{b\lambda}\cos\left(\frac{n\pi\eta}{b}\right). \]
In the case that $\cos(n\pi\eta/b) = 0$ this reduces to the trivial solution.
40.1 Exercises
Exercise 40.1
A slab is perfectly insulated at the surface $x = 0$ and has a specified time varying temperature $f(t)$ at the surface $x = L$. Initially the temperature is zero. Find the temperature $u(x,t)$ if the heat conductivity in the slab is $\kappa = 1$.
Exercise 40.2
Solve
\[ u_{xx} + u_{yy} = 0, \quad 0 < x < L, \quad y > 0, \]
\[ u(x,0) = f(x), \quad u(0,y) = g(y), \quad u(L,y) = h(y), \]
with an eigenfunction expansion.
40.2 Hints
Hint 40.1
Hint 40.2
40.3 Solutions
Solution 40.1
The problem is
\[ u_t = u_{xx}, \quad 0 < x < L, \quad t > 0, \]
\[ u_x(0,t) = 0, \quad u(L,t) = f(t), \quad u(x,0) = 0. \]
We will solve this problem with an eigenfunction expansion. We find these eigenfunctions by replacing the inhomogeneous boundary condition with the homogeneous one, $u(L,t) = 0$. We substitute the separation of variables $v(x,t) = X(x)T(t)$ into the homogeneous partial differential equation.
\[ XT' = X''T \]
\[ \frac{T'}{T} = \frac{X''}{X} = -\lambda^2. \]
This gives us the regular Sturm-Liouville eigenvalue problem,
\[ X'' = -\lambda^2X, \quad X'(0) = X(L) = 0, \]
which has the solutions,
\[ \lambda_n = \frac{(2n-1)\pi}{2L}, \quad X_n = \cos(\lambda_nx), \quad n \in \mathbb{N}. \]
Our solution for $u(x,t)$ will be an eigenfunction expansion in these eigenfunctions. Since the inhomogeneous boundary condition is a function of $t$, the coefficients will be functions of $t$.
\[ u(x,t) = \sum_{n=1}^\infty a_n(t)\cos(\lambda_nx) \]
Since $u(x,t)$ does not satisfy the homogeneous boundary conditions of the eigenfunctions, the series is not uniformly convergent and we are not allowed to differentiate it with respect to $x$. We substitute the expansion into the partial differential equation, multiply by the eigenfunction and integrate from $x = 0$ to $x = L$. We use integration by parts to move derivatives from $u$ to the eigenfunctions.
\[ u_t = u_{xx} \]
\[ \int_0^Lu_t\cos(\lambda_mx)\,dx = \int_0^Lu_{xx}\cos(\lambda_mx)\,dx \]
\[ \int_0^L\left(\sum_{n=1}^\infty a_n'(t)\cos(\lambda_nx)\right)\cos(\lambda_mx)\,dx = \left[u_x\cos(\lambda_mx)\right]_0^L + \lambda_m\int_0^Lu_x\sin(\lambda_mx)\,dx \]
\[ \frac{L}{2}a_m'(t) = \left[\lambda_mu\sin(\lambda_mx)\right]_0^L - \lambda_m^2\int_0^Lu\cos(\lambda_mx)\,dx \]
\[ \frac{L}{2}a_m'(t) = \lambda_mu(L,t)\sin(\lambda_mL) - \lambda_m^2\int_0^L\left(\sum_{n=1}^\infty a_n(t)\cos(\lambda_nx)\right)\cos(\lambda_mx)\,dx \]
\[ \frac{L}{2}a_m'(t) = (-1)^{m+1}\lambda_mf(t) - \lambda_m^2\frac{L}{2}a_m(t) \]
From the initial condition $u(x,0) = 0$ we see that $a_m(0) = 0$. Thus we have a first order differential equation and an initial condition for each of the $a_m(t)$.
\[ a_m'(t) + \lambda_m^2a_m(t) = (-1)^{m+1}\frac{2\lambda_m}{L}f(t), \quad a_m(0) = 0 \]
This equation has the solution,
\[ a_m(t) = (-1)^{m+1}\frac{2\lambda_m}{L}\int_0^te^{-\lambda_m^2(t-\tau)}f(\tau)\,d\tau. \]
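The closed form for the coefficients can be checked against the first-order equation it solves. The sketch below uses the normalization $a_m' + \lambda_m^2a_m = (-1)^{m+1}\frac{2\lambda_m}{L}f(t)$ that follows from the $L/2$ projection factor, with a hypothetical forcing $f(t) = \sin t$ and sample $L$, $m$:

```python
import numpy as np

L, m = 1.0, 2                       # sample slab length and mode index
lam = (2 * m - 1) * np.pi / (2 * L)
f = np.sin                          # hypothetical surface temperature f(t)

def a(t, N=20000):
    """a_m(t) = (-1)^{m+1} (2 lam/L) int_0^t e^{-lam^2 (t-tau)} f(tau) dtau."""
    tau = (np.arange(N) + 0.5) * t / N      # midpoint-rule quadrature
    return ((-1) ** (m + 1) * (2 * lam / L)
            * np.sum(np.exp(-lam ** 2 * (t - tau)) * f(tau)) * (t / N))

assert abs(a(0.0)) < 1e-12          # initial condition a_m(0) = 0

t0, h = 0.8, 1e-4
lhs = (a(t0 + h) - a(t0 - h)) / (2 * h) + lam ** 2 * a(t0)
rhs = (-1) ** (m + 1) * (2 * lam / L) * f(t0)
assert abs(lhs - rhs) < 1e-3
```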
Solution 40.2
\[ u_{xx} + u_{yy} = 0, \quad 0 < x < L, \quad y > 0, \]
\[ u(x,0) = f(x), \quad u(0,y) = g(y), \quad u(L,y) = h(y). \]
We seek a solution of the form,
\[ u(x,y) = \sum_{n=1}^\infty u_n(y)\sin\left(\frac{n\pi x}{L}\right). \]
Since we have inhomogeneous boundary conditions at $x = 0, L$, we cannot differentiate the series representation with respect to $x$. We multiply Laplace's equation by the eigenfunction and integrate from $x = 0$ to $x = L$.
\[ \int_0^L\left(u_{xx}+u_{yy}\right)\sin\left(\frac{m\pi x}{L}\right)dx = 0 \]
We use integration by parts to move derivatives from $u$ to the eigenfunctions.
\[ \left[u_x\sin\left(\frac{m\pi x}{L}\right)\right]_0^L - \frac{m\pi}{L}\int_0^Lu_x\cos\left(\frac{m\pi x}{L}\right)dx + \frac{L}{2}u_m''(y) = 0 \]
\[ -\frac{m\pi}{L}\left[u\cos\left(\frac{m\pi x}{L}\right)\right]_0^L - \left(\frac{m\pi}{L}\right)^2\int_0^Lu\sin\left(\frac{m\pi x}{L}\right)dx + \frac{L}{2}u_m''(y) = 0 \]
\[ -\frac{m\pi}{L}(-1)^mh(y) + \frac{m\pi}{L}g(y) - \frac{L}{2}\left(\frac{m\pi}{L}\right)^2u_m(y) + \frac{L}{2}u_m''(y) = 0 \]
\[ u_m''(y) - \left(\frac{m\pi}{L}\right)^2u_m(y) = \frac{2m\pi}{L^2}\left((-1)^mh(y) - g(y)\right) \]
Now we have ordinary differential equations for the $u_n(y)$. In order that the solution is bounded, we require that each $u_n(y)$ is bounded as $y \to \infty$. We use the boundary condition $u(x,0) = f(x)$ to determine boundary conditions for the $u_m(y)$ at $y = 0$.
\[ u(x,0) = \sum_{n=1}^\infty u_n(0)\sin\left(\frac{n\pi x}{L}\right) = f(x) \]
\[ u_n(0) = f_n \equiv \frac{2}{L}\int_0^Lf(x)\sin\left(\frac{n\pi x}{L}\right)dx \]
Thus we have the problems,
\[ u_n''(y) - \left(\frac{n\pi}{L}\right)^2u_n(y) = \frac{2n\pi}{L^2}\left((-1)^nh(y) - g(y)\right), \quad u_n(0) = f_n, \quad u_n(+\infty) \text{ bounded}, \]
for the coefficients in the expansion. We will solve these with Green functions. Consider the associated Green function problem
\[ G_n''(y;\eta) - \left(\frac{n\pi}{L}\right)^2G_n(y;\eta) = \delta(y-\eta), \quad G_n(0;\eta) = 0, \quad G_n(+\infty;\eta) \text{ bounded}. \]
The homogeneous solutions that satisfy the boundary conditions are
\[ \sinh\left(\frac{n\pi y}{L}\right) \quad \text{and} \quad e^{-n\pi y/L}, \]
respectively. The Wronskian of these solutions is
\[ \begin{vmatrix} \sinh\left(\frac{n\pi y}{L}\right) & e^{-n\pi y/L} \\[0.5ex] \frac{n\pi}{L}\cosh\left(\frac{n\pi y}{L}\right) & -\frac{n\pi}{L}e^{-n\pi y/L} \end{vmatrix} = -\frac{n\pi}{L}e^{-n\pi y/L}\left(\sinh\left(\frac{n\pi y}{L}\right) + \cosh\left(\frac{n\pi y}{L}\right)\right) = -\frac{n\pi}{L}. \]
Thus the Green function is
\[ G_n(y;\eta) = -\frac{L}{n\pi}\sinh\left(\frac{n\pi y_<}{L}\right)e^{-n\pi y_>/L}. \]
Using the Green function we determine the $u_n(y)$ and thus the solution of Laplace's equation.
\[ u_n(y) = f_ne^{-n\pi y/L} + \frac{2n\pi}{L^2}\int_0^\infty G_n(y;\eta)\left((-1)^nh(\eta) - g(\eta)\right)d\eta \]
\[ u(x,y) = \sum_{n=1}^\infty u_n(y)\sin\left(\frac{n\pi x}{L}\right). \]
Chapter 41
Waves
41.1 Exercises
Exercise 41.1
Sketch the solution to the wave equation:
\[ u(x,t) = \frac{1}{2}\left(u(x+ct,0) + u(x-ct,0)\right) + \frac{1}{2c}\int_{x-ct}^{x+ct}u_t(\tau,0)\,d\tau, \]
for various values of $t$ corresponding to the initial conditions:
1. $u(x,0) = 0$, $u_t(x,0) = \sin(\omega x)$, where $\omega$ is a constant,
2. $u(x,0) = 0$, $u_t(x,0) = \begin{cases} 1 & \text{for } 0 < x < 1 \\ -1 & \text{for } -1 < x < 0 \\ 0 & \text{for } |x| > 1. \end{cases}$
Exercise 41.2
1. Consider the solution of the wave equation for $u(x,t)$:
\[ u_{tt} = c^2u_{xx} \]
on the infinite interval $-\infty < x < \infty$ with initial displacement of the form
\[ u(x,0) = \begin{cases} h(x) & \text{for } x > 0, \\ -h(-x) & \text{for } x < 0, \end{cases} \]
and with initial velocity
\[ u_t(x,0) = 0. \]
Show that the solution of the wave equation satisfying these initial conditions also solves the following semi-infinite problem: Find $u(x,t)$ satisfying the wave equation $u_{tt} = c^2u_{xx}$ in $0 < x < \infty$, $t > 0$, with initial conditions $u(x,0) = h(x)$, $u_t(x,0) = 0$, and with the fixed end condition $u(0,t) = 0$. Here $h(x)$ is any given function with $h(0) = 0$.
2. Use a similar idea to explain how you could use the general solution of the wave equation to solve the finite interval problem ($0 < x < l$) in which $u(0,t) = u(l,t) = 0$ for all $t$, with $u(x,0) = h(x)$ and $u_t(x,0) = 0$. Take $h(0) = h(l) = 0$.
Exercise 41.3
The deflection $u(x,T) = \varphi(x)$ and velocity $u_t(x,T) = \psi(x)$ for an infinite string (governed by $u_{tt} = c^2u_{xx}$) are measured at time $T$, and we are asked to determine what the initial displacement and velocity profiles $u(x,0)$ and $u_t(x,0)$ must have been. An alert AMa95c student suggests that this problem is equivalent to that of determining the solution of the wave equation at time $T$ when initial conditions $u(x,0) = \varphi(x)$, $u_t(x,0) = -\psi(x)$ are prescribed. Is she correct? If not, can you rescue her idea?
Exercise 41.4
In obtaining the general solution of the wave equation the interval was chosen to be infinite in order to simplify the evaluation of the functions $\phi(\xi)$ and $\psi(\xi)$ in the general solution
\[ u(x,t) = \phi(x+ct) + \psi(x-ct). \]
But this general solution is in fact valid for any interval, be it infinite or finite. We need only choose appropriate functions $\phi(\xi)$, $\psi(\xi)$ to satisfy the appropriate initial and boundary conditions. This is not always convenient, but there are other situations besides the solution for $u(x,t)$ in an infinite domain in which the general solution is of use. Consider the "whip-cracking" problem (this is not meant to be a metaphor for AMa95c):
\[ u_{tt} = c^2u_{xx}, \]
(with $c$ a constant) in the domain $x > 0$, $t > 0$ with initial conditions
\[ u(x,0) = u_t(x,0) = 0, \quad x > 0, \]
and boundary conditions
\[ u(0,t) = \gamma(t) \]
prescribed for all $t > 0$. Here $\gamma(0) = 0$. Find $\phi$ and $\psi$ so as to determine $u$ for $x > 0$, $t > 0$.
Hint: (From physical considerations conclude that you can take $\phi(\xi) = 0$. Your solution will corroborate this.) Use the initial conditions to determine $\phi(\xi)$ and $\psi(\xi)$ for $\xi > 0$. Then use the boundary condition to determine $\psi(\xi)$ for $\xi < 0$.
Exercise 41.5
Let $u(x,t)$ satisfy the equation
\[ u_{tt} = c^2u_{xx} \]
(with $c$ a constant) in some region of the $(x,t)$ plane.
1. Show that the quantity $(u_t - cu_x)$ is constant along each straight line defined by $x - ct = \text{constant}$, and that $(u_t + cu_x)$ is constant along each straight line of the form $x + ct = \text{constant}$. These straight lines are called characteristics; we will refer to typical members of the two families as $C_+$ and $C_-$ characteristics, respectively. Thus the line $x - ct = \text{constant}$ is a $C_+$ characteristic.
2. Let $u(x,0)$ and $u_t(x,0)$ be prescribed for all values of $x$ in $-\infty < x < \infty$, and let $(x_0,t_0)$ be some point in the $(x,t)$ plane, with $t_0 > 0$. Draw the $C_+$ and $C_-$ characteristics through $(x_0,t_0)$ and let them intersect the $x$-axis at the points $A$, $B$. Use the properties of these curves derived in part (a) to determine $u_t(x_0,t_0)$ in terms of initial data at points $A$ and $B$. Using a similar technique to obtain $u_t(x_0,\tau)$ with $0 < \tau < t$, determine $u(x_0,t_0)$ by integration with respect to $\tau$, and compare this with the solution derived in class:
\[ u(x,t) = \frac{1}{2}\left(u(x+ct,0) + u(x-ct,0)\right) + \frac{1}{2c}\int_{x-ct}^{x+ct}u_t(\tau,0)\,d\tau. \]
Observe that this method of characteristics again shows that $u(x_0,t_0)$ depends only on that part of the initial data between points $A$ and $B$.
Exercise 41.6
The temperature $u(x,t)$ at a depth $x$ below the Earth's surface at time $t$ satisfies
\[ u_t = \kappa u_{xx}. \]
The surface $x = 0$ is heated by the sun according to the periodic rule:
\[ u(0,t) = T\cos(\omega t). \]
Seek a solution of the form
\[ u(x,t) = \Re\left(A\,e^{i\omega t - \alpha x}\right). \]
a) Find $u(x,t)$ satisfying $u \to 0$ as $x \to +\infty$, (i.e. deep into the Earth).
b) Find the temperature variation at a fixed depth, $h$, below the surface.
c) Find the phase lag $\theta(x)$ such that when the maximum temperature occurs at $t_0$ on the surface, the maximum at depth $x$ occurs at $t_0 + \theta(x)$.
d) Show that the seasonal, (i.e. yearly), temperature changes and daily temperature changes penetrate to depths in the ratio:
\[ \frac{x_{\text{year}}}{x_{\text{day}}} = \sqrt{365}, \]
where $x_{\text{year}}$ and $x_{\text{day}}$ are the depths of same temperature variation caused by the different periods of the source.
Exercise 41.7
An infinite cylinder of radius a produces an external acoustic pressure field u satisfying:

u_tt = c² Δu,

by a pure harmonic oscillation of its surface at r = a. That is, it moves so that

u(a, θ, t) = f(θ) e^{iωt}

where f(θ) is a known function. Note that the waves must be outgoing at infinity, (radiation condition at infinity). Find the solution, u(r, θ, t). We seek a periodic solution of the form,

u(r, θ, t) = v(r, θ) e^{iωt}.
Exercise 41.8
Plane waves are incident on a soft cylinder of radius a whose axis is parallel to the plane of the waves. Find the field scattered by the cylinder. In particular, examine the leading term of the solution when a is much smaller than the wavelength of the incident waves. If v(x, y, t) is the scattered field it must satisfy:

Wave Equation: v_tt = c² Δv, x² + y² > a²;
Soft Cylinder: v(x, y, t) = −e^{i(ka cos θ − ωt)}, on r = a, 0 ≤ θ < 2π;
Scattered: v is outgoing as r → ∞.

Here k = ω/c. Use polar coordinates in the (x, y) plane.
Exercise 41.9
Consider the flow of electricity in a transmission line. The current, I(x, t), and the voltage, V(x, t), obey the telegrapher's system of equations:

−I_x = C V_t + G V,
−V_x = L I_t + R I,

where C is the capacitance, G is the conductance, L is the inductance and R is the resistance.
a) Show that both I and V satisfy a damped wave equation.
b) Find the relationship between the physical constants, C, G, L and R such that there exist damped traveling wave solutions of the form:

V(x, t) = e^{−λt} (f(x − at) + g(x + at)).

What is the wave speed?
41.2 Hints
Hint 41.1
Hint 41.2
Hint 41.3
Hint 41.4
From physical considerations conclude that you can take φ(ξ) = 0. Your solution will corroborate this. Use the initial conditions to determine φ(ξ) and ψ(ξ) for ξ > 0. Then use the boundary condition to determine ψ(ξ) for ξ < 0.
Hint 41.5
Hint 41.6
a) Substitute u(x, t) = ℜ(A e^{iωt − αx}) into the partial differential equation and solve for α. Assume that α has positive real part so that the solution vanishes as x → +∞.
Hint 41.7
Seek a periodic solution of the form,
u(r, θ, t) = v(r, θ) e^{iωt}.
Solve the Helmholtz equation for v with a Fourier series expansion,
v(r, θ) = Σ_{n=−∞}^{∞} v_n(r) e^{inθ}.
You will find that the v_n satisfy Bessel's equation. Choose the v_n so that u satisfies the boundary condition at r = a and the radiation condition at infinity.
The Bessel functions have the asymptotic behavior,
J_n(ξ) ∼ √(2/(πξ)) cos(ξ − nπ/2 − π/4), as ξ → ∞,
Y_n(ξ) ∼ √(2/(πξ)) sin(ξ − nπ/2 − π/4), as ξ → ∞,
H_n^(1)(ξ) ∼ √(2/(πξ)) e^{i(ξ − nπ/2 − π/4)}, as ξ → ∞,
H_n^(2)(ξ) ∼ √(2/(πξ)) e^{−i(ξ − nπ/2 − π/4)}, as ξ → ∞.
Hint 41.8
Hint 41.9
41.3 Solutions
Solution 41.1
1.
u(x, t) = ½ (u(x + ct, 0) + u(x − ct, 0)) + (1/(2c)) ∫_{x−ct}^{x+ct} u_t(τ, 0) dτ
u(x, t) = (1/(2c)) ∫_{x−ct}^{x+ct} sin(ωτ) dτ
u(x, t) = sin(ωx) sin(ωct) / (ωc)
Figure 41.1 shows the solution for c = 1 and ω = 1/10.
2. We can write the initial velocity in terms of the Heaviside function.
u_t(x, 0) = { 1 for 0 < x < 1; −1 for −1 < x < 0; 0 for |x| > 1 }
u_t(x, 0) = −H(x + 1) + 2H(x) − H(x − 1)
We integrate the Heaviside function.
∫_a^b H(x − c) dx = { 0 for b < c; b − a for a > c; b − c otherwise }
If a < b, we can express this as
∫_a^b H(x − c) dx = min(b − a, max(b − c, 0)).
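The min/max closed form above can be corroborated numerically. The brute-force midpoint quadrature below is an assumption of this sketch, not part of the text's derivation.

```python
# Numeric check of  ∫_a^b H(x−c) dx = min(b−a, max(b−c, 0))  for a < b.
def H(x):
    return 1.0 if x > 0 else 0.0

def integral(a, b, c, n=100000):
    # midpoint rule for ∫_a^b H(x−c) dx
    h = (b - a) / n
    return sum(H(a + (i + 0.5) * h - c) for i in range(n)) * h

def closed_form(a, b, c):
    return min(b - a, max(b - c, 0.0))

# cases: c inside (a,b), c above b, c inside, c below a
for (a, b, c) in [(-2.0, 3.0, 1.0), (0.0, 1.0, 2.0), (-3.0, -1.0, -2.5), (0.0, 2.0, -1.0)]:
    assert abs(integral(a, b, c) - closed_form(a, b, c)) < 1e-3
print("Heaviside integral identity verified")
```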
1710
-50
0
50
x
0
20
40
60
t
-10
-5
0
5
10
u
-50
0
50
x
Figure 41.1: Solution of the wave equation.
Now we find an expression for the solution.
u(x, t) = ½ (u(x + ct, 0) + u(x − ct, 0)) + (1/(2c)) ∫_{x−ct}^{x+ct} u_t(τ, 0) dτ
u(x, t) = (1/(2c)) ∫_{x−ct}^{x+ct} (−H(τ + 1) + 2H(τ) − H(τ − 1)) dτ
u(x, t) = (1/(2c)) [ −min(2ct, max(x + ct + 1, 0)) + 2 min(2ct, max(x + ct, 0)) − min(2ct, max(x + ct − 1, 0)) ]
Figure 41.2 shows the solution for c = 1.
Figure 41.2: Solution of the wave equation.
Solution 41.2
1. The solution on the interval (−∞ … ∞) is
u(x, t) = ½ (h(x + ct) + h(x − ct)).
Now we solve the problem on (0 … ∞). We define the odd extension of h(x).
h̃(x) = { h(x) for x > 0; −h(−x) for x < 0 } = sign(x) h(|x|)
Note that
h̃′(0⁻) = [d/dx (−h(−x))]_{x→0} = h′(0⁺) = h̃′(0⁺).
Thus h̃(x) is piecewise C². Clearly
u(x, t) = ½ (h̃(x + ct) + h̃(x − ct))
satisfies the differential equation on (0 … ∞). We verify that it satisfies the initial condition and boundary condition.
u(x, 0) = ½ (h̃(x) + h̃(x)) = h(x)
u(0, t) = ½ (h̃(ct) + h̃(−ct)) = ½ (h(ct) − h(ct)) = 0
2. First we define the odd extension of h(x) on the interval (−l … l).
h̃(x) = sign(x) h(|x|), x ∈ (−l … l)
Then we form the odd periodic extension of h(x) defined on (−∞ … ∞).
h̃(x) = sign( x − 2l ⌊(x + l)/(2l)⌋ ) h( |x − 2l ⌊(x + l)/(2l)⌋| ), x ∈ (−∞ … ∞)
We note that h̃(x) is piecewise C². Also note that h̃(x) is odd about the points x = nl, n ∈ ℤ. That is, h̃(nl − x) = −h̃(nl + x). Clearly
u(x, t) = ½ (h̃(x + ct) + h̃(x − ct))
satisfies the differential equation on (0 … l). We verify that it satisfies the initial condition and boundary conditions.
u(x, 0) = ½ (h̃(x) + h̃(x))
u(x, 0) = h̃(x)
u(x, 0) = sign( x − 2l ⌊(x + l)/(2l)⌋ ) h( |x − 2l ⌊(x + l)/(2l)⌋| )
u(x, 0) = h(x)
u(0, t) = ½ (h̃(ct) + h̃(−ct)) = ½ (h̃(ct) − h̃(ct)) = 0
u(l, t) = ½ (h̃(l + ct) + h̃(l − ct)) = ½ (h̃(l + ct) − h̃(l + ct)) = 0
Solution 41.3
Change of Variables. Let u(x, t) be the solution of the problem with deflection u(x, T) = φ(x) and velocity u_t(x, T) = ψ(x). Define
v(x, τ) = u(x, T − τ).
We note that u(x, 0) = v(x, T). v(τ) satisfies the wave equation.
v_ττ = c² v_xx
The initial conditions for v are
v(x, 0) = u(x, T) = φ(x), v_τ(x, 0) = −u_t(x, T) = −ψ(x).
Thus we see that the student was correct.
Direct Solution. D'Alembert's solution is valid for all x and t. We formally substitute t − T for t in this solution to solve the problem with deflection u(x, T) = φ(x) and velocity u_t(x, T) = ψ(x).
u(x, t) = ½ (φ(x + c(t − T)) + φ(x − c(t − T))) + (1/(2c)) ∫_{x−c(t−T)}^{x+c(t−T)} ψ(τ) dτ
This satisfies the wave equation, because the equation is shift-invariant. It also satisfies the initial conditions.
u(x, T) = ½ (φ(x) + φ(x)) + (1/(2c)) ∫_x^x ψ(τ) dτ = φ(x)
u_t(x, t) = ½ (c φ′(x + c(t − T)) − c φ′(x − c(t − T))) + ½ (ψ(x + c(t − T)) + ψ(x − c(t − T)))
u_t(x, T) = ½ (c φ′(x) − c φ′(x)) + ½ (ψ(x) + ψ(x)) = ψ(x)
Solution 41.4
Since the solution is a wave moving to the right, we conclude that we could take φ(ξ) = 0. Our solution will corroborate this.
The form of the solution is
u(x, t) = φ(x + ct) + ψ(x − ct).
We substitute the solution into the initial conditions.
u(x, 0) = φ(ξ) + ψ(ξ) = 0, ξ > 0
u_t(x, 0) = c φ′(ξ) − c ψ′(ξ) = 0, ξ > 0
We integrate the second equation to obtain the system
φ(ξ) + ψ(ξ) = 0, ξ > 0,
φ(ξ) − ψ(ξ) = 2k, ξ > 0,
which has the solution
φ(ξ) = k, ψ(ξ) = −k, ξ > 0.
Now we substitute the solution into the boundary condition.
u(0, t) = φ(ct) + ψ(−ct) = γ(t), t > 0
φ(ξ) + ψ(−ξ) = γ(ξ/c), ξ > 0
ψ(ξ) = γ(−ξ/c) − k, ξ < 0
This determines u(x, t) for x > 0 as it depends on φ(ξ) only for ξ > 0. The constant k is arbitrary. Changing k does not change u(x, t). For simplicity, we take k = 0.
u(x, t) = ψ(x − ct)
u(x, t) = { 0 for x − ct > 0; γ(t − x/c) for x − ct > 0? — more precisely, γ(t − x/c) for x − ct < 0 }
u(x, t) = γ(t − x/c) H(ct − x)
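The whip-cracking solution can be spot-checked numerically. The finite-difference test and the sample boundary motion γ(t) = sin²(t) below are assumptions of this sketch, not part of the text.

```python
import math

# Check that u(x,t) = γ(t − x/c) H(ct − x) satisfies u_tt = c² u_xx away from
# the wave front x = ct, with the hypothetical smooth choice γ(t) = sin²(t), γ(0) = 0.
c = 2.0

def gamma(t):
    return math.sin(t) ** 2

def u(x, t):
    return gamma(t - x / c) if c * t - x > 0 else 0.0

def residual(x, t, h=1e-3):
    # centered second differences for u_tt and u_xx
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_tt - c**2 * u_xx

assert abs(residual(1.0, 3.0)) < 1e-4   # behind the front (x < ct)
assert abs(residual(5.0, 1.0)) < 1e-4   # ahead of the front (x > ct), u ≡ 0
assert abs(u(0.0, 1.5) - gamma(1.5)) < 1e-12   # boundary condition u(0,t) = γ(t)
print("whip-cracking solution checked")
```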
Solution 41.5
1. We write the value of u along the line x − ct = k as a function of t: u(k + ct, t). We differentiate u_t − c u_x with respect to t to see how the quantity varies.
d/dt [u_t(k + ct, t) − c u_x(k + ct, t)] = c u_xt + u_tt − c² u_xx − c u_xt = u_tt − c² u_xx = 0
Thus u_t − c u_x is constant along the line x − ct = k. Now we examine u_t + c u_x along the line x + ct = k.
d/dt [u_t(k − ct, t) + c u_x(k − ct, t)] = −c u_xt + u_tt − c² u_xx + c u_xt = u_tt − c² u_xx = 0
u_t + c u_x is constant along the line x + ct = k.
2. From part (a) we know
u_t(x₀, t₀) − c u_x(x₀, t₀) = u_t(x₀ − ct₀, 0) − c u_x(x₀ − ct₀, 0)
u_t(x₀, t₀) + c u_x(x₀, t₀) = u_t(x₀ + ct₀, 0) + c u_x(x₀ + ct₀, 0).
We add these equations to find u_t(x₀, t₀).
u_t(x₀, t₀) = ½ [u_t(x₀ − ct₀, 0) − c u_x(x₀ − ct₀, 0) + u_t(x₀ + ct₀, 0) + c u_x(x₀ + ct₀, 0)]
Since t₀ was arbitrary, we have
u_t(x₀, τ) = ½ [u_t(x₀ − cτ, 0) − c u_x(x₀ − cτ, 0) + u_t(x₀ + cτ, 0) + c u_x(x₀ + cτ, 0)]
for 0 < τ < t₀. We integrate with respect to τ to determine u(x₀, t₀).
u(x₀, t₀) = u(x₀, 0) + ∫₀^{t₀} ½ [u_t(x₀ − cτ, 0) − c u_x(x₀ − cτ, 0) + u_t(x₀ + cτ, 0) + c u_x(x₀ + cτ, 0)] dτ
= u(x₀, 0) + ½ ∫₀^{t₀} [−c u_x(x₀ − cτ, 0) + c u_x(x₀ + cτ, 0)] dτ + ½ ∫₀^{t₀} [u_t(x₀ − cτ, 0) + u_t(x₀ + cτ, 0)] dτ
= u(x₀, 0) + ½ [u(x₀ − ct₀, 0) − u(x₀, 0) + u(x₀ + ct₀, 0) − u(x₀, 0)] + (1/(2c)) ∫_{x₀−ct₀}^{x₀} u_t(τ, 0) dτ + (1/(2c)) ∫_{x₀}^{x₀+ct₀} u_t(τ, 0) dτ
= ½ [u(x₀ − ct₀, 0) + u(x₀ + ct₀, 0)] + (1/(2c)) ∫_{x₀−ct₀}^{x₀+ct₀} u_t(τ, 0) dτ
We have D'Alembert's solution.
u(x, t) = ½ (u(x − ct, 0) + u(x + ct, 0)) + (1/(2c)) ∫_{x−ct}^{x+ct} u_t(τ, 0) dτ
Solution 41.6
a) We substitute u(x, t) = A e^{iωt − αx} into the partial differential equation and take the real part as the solution. We assume that α has positive real part so the solution vanishes as x → +∞.
iω A e^{iωt − αx} = α² A e^{iωt − αx}
iω = α²
α = ±(1 + i) √(ω/2)
A solution of the partial differential equation is,
u(x, t) = ℜ( A exp( iωt − (1 + i) √(ω/2) x ) ),
u(x, t) = A exp( −√(ω/2) x ) cos( ωt − √(ω/2) x ).
Applying the boundary condition, u(0, t) = T cos(ωt), we obtain,
u(x, t) = T exp( −√(ω/2) x ) cos( ωt − √(ω/2) x ).
b) At a fixed depth x = h, the temperature is
u(h, t) = T exp( −√(ω/2) h ) cos( ωt − √(ω/2) h ).
Thus the temperature variation is
−T exp( −√(ω/2) h ) ≤ u(h, t) ≤ T exp( −√(ω/2) h ).
c) The solution is an exponentially decaying, traveling wave that propagates into the Earth with speed ω / √(ω/2) = √(2ω). More generally, the wave
e^{−bt} cos(ωt − ax)
travels in the positive direction with speed ω/a. Figure 41.3 shows such a wave for a sequence of times.
Figure 41.3: An Exponentially Decaying, Traveling Wave
The phase lag, δ(x), is the time that it takes for the wave to reach a depth of x. It satisfies,
δ(x) ω − √(ω/2) x = 0,
δ(x) = x / √(2ω).
d) Let ω_year be the frequency for annual temperature variation, then ω_day = 365 ω_year. If x_year is the depth that a particular yearly temperature variation reaches and x_day is the depth that this same variation in daily temperature reaches, then
exp( −√(ω_year/2) x_year ) = exp( −√(ω_day/2) x_day ),
√(ω_year/2) x_year = √(ω_day/2) x_day,
x_year / x_day = √365.
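Both the dispersion relation and the depth ratio above can be corroborated with a quick computation; the numeric values of ω and the daily depth below are arbitrary assumptions of this check.

```python
import cmath
import math

# Dispersion relation iω = α² for α = (1+i)√(ω/2).
omega = 3.0
alpha = (1 + 1j) * cmath.sqrt(omega / 2)
assert abs(alpha**2 - 1j * omega) < 1e-12

# Equal attenuation exp(−√(ω/2) x) at frequencies ω_year and ω_day = 365 ω_year
# forces √(ω_year/2)·x_year = √(ω_day/2)·x_day, i.e. x_year/x_day = √365.
w_year = 2 * math.pi / 365.0
x_day = 1.0                                                   # hypothetical daily depth
x_year = math.sqrt(365 * w_year / 2) * x_day / math.sqrt(w_year / 2)
assert abs(x_year / x_day - math.sqrt(365)) < 1e-12
print("dispersion relation and depth ratio verified")
```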
Solution 41.7
We seek a periodic solution of the form,
u(r, θ, t) = v(r, θ) e^{iωt}.
Substituting this into the wave equation will give us a Helmholtz equation for v.
−ω² v = c² Δv
v_rr + (1/r) v_r + (1/r²) v_θθ + (ω²/c²) v = 0
We have the boundary condition v(a, θ) = f(θ) and the radiation condition at infinity. We expand v in a Fourier series in θ in which the coefficients are functions of r. You can check that e^{inθ} are the eigenfunctions obtained with separation of variables.
v(r, θ) = Σ_{n=−∞}^{∞} v_n(r) e^{inθ}
We substitute this expression into the Helmholtz equation to obtain ordinary differential equations for the coefficients v_n.
Σ_{n=−∞}^{∞} [ v_n″ + (1/r) v_n′ + ( ω²/c² − n²/r² ) v_n ] e^{inθ} = 0
The differential equations for the v_n are
v_n″ + (1/r) v_n′ + ( ω²/c² − n²/r² ) v_n = 0,
which has as linearly independent solutions the Bessel and Neumann functions,
J_n(ωr/c), Y_n(ωr/c),
or the Hankel functions,
H_n^(1)(ωr/c), H_n^(2)(ωr/c).
The functions have the asymptotic behavior,
J_n(ξ) ∼ √(2/(πξ)) cos(ξ − nπ/2 − π/4), as ξ → ∞,
Y_n(ξ) ∼ √(2/(πξ)) sin(ξ − nπ/2 − π/4), as ξ → ∞,
H_n^(1)(ξ) ∼ √(2/(πξ)) e^{i(ξ − nπ/2 − π/4)}, as ξ → ∞,
H_n^(2)(ξ) ∼ √(2/(πξ)) e^{−i(ξ − nπ/2 − π/4)}, as ξ → ∞.
u(r, θ, t) will be an outgoing wave at infinity if it is the sum of terms of the form e^{i(ωt − const·r)}. Thus the v_n must have the form
v_n(r) = b_n H_n^(2)(ωr/c)
for some constants, b_n. The solution for v(r, θ) is
v(r, θ) = Σ_{n=−∞}^{∞} b_n H_n^(2)(ωr/c) e^{inθ}.
We determine the constants b_n from the boundary condition at r = a.
v(a, θ) = Σ_{n=−∞}^{∞} b_n H_n^(2)(ωa/c) e^{inθ} = f(θ)
b_n = (1/(2π H_n^(2)(ωa/c))) ∫₀^{2π} f(θ) e^{−inθ} dθ
u(r, θ, t) = e^{iωt} Σ_{n=−∞}^{∞} b_n H_n^(2)(ωr/c) e^{inθ}
Solution 41.8
We substitute the form v(x, y, t) = u(r, θ) e^{−iωt} into the wave equation to obtain a Helmholtz equation.
c² Δu + ω² u = 0
u_rr + (1/r) u_r + (1/r²) u_θθ + k² u = 0
We solve the Helmholtz equation with separation of variables. We expand u in a Fourier series.
u(r, θ) = Σ_{n=−∞}^{∞} u_n(r) e^{inθ}
We substitute the sum into the Helmholtz equation to determine ordinary differential equations for the coefficients.
u_n″ + (1/r) u_n′ + ( k² − n²/r² ) u_n = 0
This is Bessel's equation, which has as solutions the Bessel and Neumann functions, J_n(kr), Y_n(kr), or the Hankel functions, H_n^(1)(kr), H_n^(2)(kr).
Recall that the solutions of the Bessel equation have the asymptotic behavior,
J_n(ξ) ∼ √(2/(πξ)) cos(ξ − nπ/2 − π/4), as ξ → ∞,
Y_n(ξ) ∼ √(2/(πξ)) sin(ξ − nπ/2 − π/4), as ξ → ∞,
H_n^(1)(ξ) ∼ √(2/(πξ)) e^{i(ξ − nπ/2 − π/4)}, as ξ → ∞,
H_n^(2)(ξ) ∼ √(2/(πξ)) e^{−i(ξ − nπ/2 − π/4)}, as ξ → ∞.
From this we see that only the Hankel function of the first kind will give us outgoing waves as ξ → ∞. Our solution for u becomes,
u(r, θ) = Σ_{n=−∞}^{∞} b_n H_n^(1)(kr) e^{inθ}.
We determine the coefficients in the expansion from the boundary condition at r = a.
u(a, θ) = Σ_{n=−∞}^{∞} b_n H_n^(1)(ka) e^{inθ} = −e^{ika cos θ}
b_n = −(1/(2π H_n^(1)(ka))) ∫₀^{2π} e^{ika cos θ} e^{−inθ} dθ
We evaluate the integral with the identities,
J_n(x) = (1/(2π iⁿ)) ∫₀^{2π} e^{ix cos θ} e^{−inθ} dθ,
J_{−n}(x) = (−1)ⁿ J_n(x).
Thus we obtain,
u(r, θ) = −Σ_{n=−∞}^{∞} ( iⁿ J_n(ka) / H_n^(1)(ka) ) H_n^(1)(kr) e^{inθ}.
When a ≪ 1/k, i.e. ka ≪ 1, the Bessel function has the behavior,
J_n(ka) ≈ (ka/2)ⁿ / n!.
In this case, the n ≠ 0 terms in the sum are much smaller than the n = 0 term. The approximate solution is,
u(r, θ) ≈ −H_0^(1)(kr) / H_0^(1)(ka),
v(r, θ, t) ≈ −( H_0^(1)(kr) / H_0^(1)(ka) ) e^{−iωt}.
Solution 41.9
a)
−I_x = C V_t + G V,
−V_x = L I_t + R I
First we derive a single partial differential equation for I. We differentiate the two partial differential equations with respect to x and t, respectively and then eliminate the V_xt terms.
−I_xx = C V_tx + G V_x,
−V_xt = L I_tt + R I_t
−I_xx + LC I_tt + RC I_t = G V_x
We use the initial set of equations to write V_x in terms of I.
−I_xx + LC I_tt + RC I_t + G(L I_t + R I) = 0
I_tt + ((RC + GL)/(LC)) I_t + (GR/(LC)) I − (1/(LC)) I_xx = 0
Now we derive a single partial differential equation for V. We differentiate the two partial differential equations with respect to t and x, respectively and then eliminate the I_xt terms.
−I_xt = C V_tt + G V_t,
−V_xx = L I_tx + R I_x
V_xx = −R I_x + LC V_tt + LG V_t
We use the initial set of equations to write I_x in terms of V.
LC V_tt + LG V_t − V_xx + R(C V_t + G V) = 0
V_tt + ((RC + LG)/(LC)) V_t + (RG/(LC)) V − (1/(LC)) V_xx = 0.
Thus we see that I and V both satisfy the same damped wave equation.
b) We substitute V(x, t) = e^{−λt} (f(x − at) + g(x + at)) into the damped wave equation for V.
( λ² − ((RC + LG)/(LC)) λ + RG/(LC) ) e^{−λt} (f + g) + ( −2λ + (RC + LG)/(LC) ) a e^{−λt} (−f′ + g′) + a² e^{−λt} (f″ + g″) − (1/(LC)) e^{−λt} (f″ + g″) = 0
Since f and g are arbitrary functions, the coefficients of e^{−λt}(f + g), e^{−λt}(−f′ + g′) and e^{−λt}(f″ + g″) must vanish. This gives us three constraints.
a² − 1/(LC) = 0, −2λ + (RC + LG)/(LC) = 0, λ² − ((RC + LG)/(LC)) λ + RG/(LC) = 0
The first equation determines the wave speed to be a = 1/√(LC). We substitute the value of λ from the second equation into the third equation.
λ = (RC + LG)/(2LC), −λ² + RG/(LC) = 0
In order for damped waves to propagate, the physical constants must satisfy,
RG/(LC) − ( (RC + LG)/(2LC) )² = 0,
4RGLC − (RC + LG)² = 0,
(RC − LG)² = 0,
RC = LG.
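The distortionless condition RC = LG can be corroborated numerically; the circuit constants and pulse profile below are arbitrary assumptions of this sketch.

```python
import math

# With RC = LG, a = 1/√(LC) and λ = (RC+LG)/(2LC), check by finite differences
# that V(x,t) = e^{−λt} f(x − at) satisfies
#   V_tt + ((RC+LG)/LC) V_t + (RG/LC) V − (1/LC) V_xx = 0.
L, C, R = 2.0, 0.5, 3.0
G = R * C / L                     # enforce the distortionless condition RC = LG
a = 1.0 / math.sqrt(L * C)
lam = (R * C + L * G) / (2 * L * C)
f = lambda s: math.exp(-s * s)    # hypothetical pulse shape

def V(x, t):
    return math.exp(-lam * t) * f(x - a * t)

def residual(x, t, h=1e-4):
    V_tt = (V(x, t + h) - 2 * V(x, t) + V(x, t - h)) / h**2
    V_t = (V(x, t + h) - V(x, t - h)) / (2 * h)
    V_xx = (V(x + h, t) - 2 * V(x, t) + V(x - h, t)) / h**2
    return V_tt + (R * C + L * G) / (L * C) * V_t + R * G / (L * C) * V(x, t) - V_xx / (L * C)

for (x, t) in [(0.0, 0.0), (0.5, 0.3), (-1.0, 1.0)]:
    assert abs(residual(x, t)) < 1e-4
print("distortionless damped wave verified")
```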
Chapter 42
The Diffusion Equation
42.1 Exercises
Exercise 42.1
Derive the heat equation for a general 3 dimensional body, with non-uniform density ρ(x), specific heat c(x), and conductivity k(x). Show that
∂u(x, t)/∂t = (1/(cρ)) ∇·(k ∇u(x, t))
where u is the temperature, and you may assume there are no internal sources or sinks.
Exercise 42.2
Verify Duhamel's Principle: If u(x, t, τ) is the solution of the initial value problem:
u_t = u_xx, u(x, 0, τ) = f(x, τ),
then the solution of
w_t = w_xx + f(x, t), w(x, 0) = 0
is
w(x, t) = ∫₀^t u(x, t − τ, τ) dτ.
Exercise 42.3
Modify the derivation of the diffusion equation
φ_t = a² φ_xx, a² = k/(cρ), (42.1)
so that it is valid for diffusion in a non-homogeneous medium for which c and k are functions of x and φ and so that it is valid for a geometry in which A is a function of x. Show that Equation (42.1) above is in this case replaced by
cρA φ_t = (kA φ_x)_x.
Recall that c is the specific heat, k is the thermal conductivity, ρ is the density, φ is the temperature and A is the cross-sectional area.
42.2 Hints
Hint 42.1
Hint 42.2
Check that the expression for w(x, t) satisfies the partial differential equation and initial condition. Recall that
d/dx ∫_a^x h(x, ξ) dξ = ∫_a^x h_x(x, ξ) dξ + h(x, x).
Hint 42.3
42.3 Solutions
Solution 42.1
Consider a region of material, R. Let u be the temperature and φ be the heat flux. The amount of heat energy in the region is
∫_R cρu dx.
We equate the rate of change of heat energy in the region with the heat flux across the boundary of the region.
d/dt ∫_R cρu dx = −∫_{∂R} φ·n ds
We apply the divergence theorem to change the surface integral to a volume integral.
d/dt ∫_R cρu dx = −∫_R ∇·φ dx
∫_R ( cρ ∂u/∂t + ∇·φ ) dx = 0
Since the region is arbitrary, the integral must vanish identically.
cρ ∂u/∂t = −∇·φ
We apply Fourier's law of heat conduction, φ = −k∇u, to obtain the heat equation.
∂u/∂t = (1/(cρ)) ∇·(k∇u)
Solution 42.2
We verify Duhamel's principle by showing that the integral expression for w(x, t) satisfies the partial differential equation and the initial condition. Clearly the initial condition is satisfied.
w(x, 0) = ∫₀^0 u(x, 0 − τ, τ) dτ = 0
Now we substitute the expression for w(x, t) into the partial differential equation.
∂/∂t ∫₀^t u(x, t − τ, τ) dτ = ∂²/∂x² ∫₀^t u(x, t − τ, τ) dτ + f(x, t)
u(x, t − t, t) + ∫₀^t u_t(x, t − τ, τ) dτ = ∫₀^t u_xx(x, t − τ, τ) dτ + f(x, t)
f(x, t) + ∫₀^t u_t(x, t − τ, τ) dτ = ∫₀^t u_xx(x, t − τ, τ) dτ + f(x, t)
∫₀^t ( u_t(x, t − τ, τ) − u_xx(x, t − τ, τ) ) dτ = 0
Since u_t(x, t − τ, τ) − u_xx(x, t − τ, τ) = 0, this equation is an identity.
Solution 42.3
We equate the rate of change of thermal energy in the segment (α … β) with the heat entering the segment through the endpoints.
∂/∂t ∫_α^β cρA φ dx = k(β, φ(β)) A(β) φ_x(β, t) − k(α, φ(α)) A(α) φ_x(α, t)
∂/∂t ∫_α^β cρA φ dx = [kA φ_x]_α^β
∫_α^β cρA φ_t dx = ∫_α^β (kA φ_x)_x dx
∫_α^β ( cρA φ_t − (kA φ_x)_x ) dx = 0
Since the domain is arbitrary, we conclude that
cρA φ_t = (kA φ_x)_x.
Chapter 43
Similarity Methods
Introduction. Consider the partial differential equation (not necessarily linear)
F( ∂u/∂t, ∂u/∂x, u, t, x ) = 0.
Say the solution is
u(x, t) = (x/t) sin( t^{1/2} x^{−1/2} ).
Making the change of variables η = x/t, f(η) = u(x, t), we could rewrite this equation as
f(η) = η sin( η^{−1/2} ).
We see now that if we had guessed that the solution of this partial differential equation was only dependent on powers of x/t we could have changed variables to η and f and instead solved the ordinary differential equation
G( df/dη, f, η ) = 0.
By using similarity methods one can reduce the number of independent variables in some PDEs.
Example 43.0.1 Consider the partial differential equation
x ∂u/∂t + t ∂u/∂x − u = 0.
One way to find a similarity variable is to introduce a transformation to the temporary variables u′, t′, x′, and the parameter λ.
u = u′ λ
t = t′ λ^m
x = x′ λ^n
where n and m are unknown. Rewriting the partial differential equation in terms of the temporary variables,
x′ λ^n (∂u′/∂t′) λ^{1−m} + t′ λ^m (∂u′/∂x′) λ^{1−n} − u′ λ = 0
x′ (∂u′/∂t′) λ^{n−m} + t′ (∂u′/∂x′) λ^{m−n} − u′ = 0
There is a similarity variable if λ can be eliminated from the equation. Equating the coefficients of the powers of λ in each term,
n − m = m − n = 0.
This has the solution m = n. The similarity variable, η, will be unchanged under the transformation to the temporary variables. One choice is
η = t/x = (t′ λ^m)/(x′ λ^n) = t′/x′.
Writing the two partial derivatives in terms of η,
∂/∂t = (∂η/∂t) d/dη = (1/x) d/dη
∂/∂x = (∂η/∂x) d/dη = −(t/x²) d/dη
The partial differential equation becomes
du/dη − η² du/dη − u = 0
du/dη = u/(1 − η²)
Thus we have reduced the partial differential equation to an ordinary differential equation that is much easier to solve.
u(η) = exp( ∫ dη/(1 − η²) )
u(η) = exp( ∫ [ (1/2)/(1 − η) + (1/2)/(1 + η) ] dη )
u(η) = exp( −½ log(1 − η) + ½ log(1 + η) )
u(η) = (1 − η)^{−1/2} (1 + η)^{1/2}
u(x, t) = ( (1 + t/x)/(1 − t/x) )^{1/2}
Thus we have found a similarity solution to the partial differential equation. Note that the existence of a similarity solution does not mean that all solutions of the differential equation are similarity solutions.
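The similarity solution just found can be corroborated by substituting it back into the equation numerically; the finite-difference scheme and sample points are assumptions of this sketch.

```python
# Check that u(x,t) = ((1 + t/x)/(1 − t/x))^(1/2) satisfies x u_t + t u_x − u = 0
# in a region where t/x < 1.
def u(x, t):
    return ((1 + t / x) / (1 - t / x)) ** 0.5

def residual(x, t, h=1e-5):
    # centered first differences for u_t and u_x
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return x * u_t + t * u_x - u(x, t)

for (x, t) in [(2.0, 0.5), (3.0, 1.0), (5.0, -2.0)]:
    assert abs(residual(x, t)) < 1e-6
print("similarity solution verified")
```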
Another Method. Another method is to substitute η = x^α t and determine if there is an α that makes η a similarity variable. The partial derivatives become
∂/∂t = (∂η/∂t) d/dη = x^α d/dη
∂/∂x = (∂η/∂x) d/dη = α x^{α−1} t d/dη
The partial differential equation becomes
x^{α+1} du/dη + α x^{α−1} t² du/dη − u = 0.
If there is a value of α such that we can write this equation in terms of η, then η = x^α t is a similarity variable. If α = −1 then the coefficient of the first term is trivially in terms of η. The coefficient of the second term then becomes −x^{−2} t². Thus we see η = x^{−1} t is a similarity variable.
Example 43.0.2 To see another application of similarity variables, any partial differential equation of the form
F( tx, u, u_t/x, u_x/t ) = 0
is equivalent to the ODE
F( η, u, du/dη, du/dη ) = 0
where η = tx. Performing the change of variables,
(1/x) ∂u/∂t = (1/x) (∂η/∂t) du/dη = (1/x) x du/dη = du/dη
(1/t) ∂u/∂x = (1/t) (∂η/∂x) du/dη = (1/t) t du/dη = du/dη.
For example the partial differential equation
u ∂u/∂t + (x/t) ∂u/∂x + t x² u = 0,
which can be rewritten
u (1/x) ∂u/∂t + (1/t) ∂u/∂x + t x u = 0,
is equivalent to
u du/dη + du/dη + η u = 0
where η = tx.
43.1 Exercises
Exercise 43.1
With η = x^α t, find α such that for some function f, φ = f(η) is a solution of
φ_t = a² φ_xx.
Find f(η) as well.

43.2 Hints
Hint 43.1
43.3 Solutions
Solution 43.1
We write the derivatives of φ in terms of f.
φ_t = (∂η/∂t) ∂_η f = x^α f′ = t^{−1} η f′
φ_x = (∂η/∂x) ∂_η f = α x^{α−1} t f′
φ_xx = f′ ∂/∂x( α x^{α−1} t ) + α x^{α−1} t · α x^{α−1} t ∂_η f′
φ_xx = α² x^{2α−2} t² f″ + α(α − 1) x^{α−2} t f′
φ_xx = x^{−2} ( α² η² f″ + α(α − 1) η f′ )
We substitute these expressions into the diffusion equation.
η f′ = a² x^{−2} t ( α² η² f″ + α(α − 1) η f′ )
In order for this equation to depend only on the variable η, we must have α = −2. For this choice, x^{−2} t = η, and we obtain an ordinary differential equation for f(η).
f′ = a² ( 4η² f″ + 6η f′ )
f″/f′ = (1 − 6a²η)/(4a²η²)
f″/f′ = 1/(4a²η²) − 3/(2η)
log(f′) = −1/(4a²η) − (3/2) log η + c
f′ = c₁ η^{−3/2} e^{−1/(4a²η)}
f(η) = c₁ ∫^η t^{−3/2} e^{−1/(4a²t)} dt + c₂
f(η) = c₁ ∫^{1/(2a√η)} e^{−t²} dt + c₂
f(η) = c₁ erf( 1/(2a√η) ) + c₂
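With η = x^{−2} t, the solution f(η) = erf(1/(2a√η)) is φ(x, t) = erf(x/(2a√t)), which can be spot-checked against the diffusion equation; the constant a and sample points below are arbitrary assumptions of this sketch.

```python
import math

# Finite-difference check that φ(x,t) = erf(x/(2a√t)) satisfies φ_t = a² φ_xx.
a = 0.8

def phi(x, t):
    return math.erf(x / (2 * a * math.sqrt(t)))

def residual(x, t, h=1e-4):
    p_t = (phi(x, t + h) - phi(x, t - h)) / (2 * h)
    p_xx = (phi(x + h, t) - 2 * phi(x, t) + phi(x - h, t)) / h**2
    return p_t - a**2 * p_xx

for (x, t) in [(0.5, 1.0), (1.5, 0.7), (-1.0, 2.0)]:
    assert abs(residual(x, t)) < 1e-5
print("similarity solution of the diffusion equation verified")
```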
Chapter 44
Method of Characteristics
44.1 The Method of Characteristics and the Wave Equation
Consider the one dimensional wave equation
u_tt = c² u_xx.
With the change of variables, v = u_x, w = u_t, we have the system of equations,
v_t − w_x = 0,
w_t − c² v_x = 0.
We can write this as the matrix equation,
( v, w )ᵀ_t + [ 0 −1 ; −c² 0 ] ( v, w )ᵀ_x = 0.
The eigenvalues and eigenvectors of the matrix are
λ₁ = −c, λ₂ = c, ξ₁ = ( 1, c )ᵀ, ξ₂ = ( 1, −c )ᵀ.
The matrix is diagonalized by the similarity transformation,
[ −c 0 ; 0 c ] = [ 1 1 ; c −c ]⁻¹ [ 0 −1 ; −c² 0 ] [ 1 1 ; c −c ].
We make the change of variables
( v, w )ᵀ = [ 1 1 ; c −c ] ( α, β )ᵀ.
The partial differential equation becomes
[ 1 1 ; c −c ] ( α, β )ᵀ_t + [ 0 −1 ; −c² 0 ] [ 1 1 ; c −c ] ( α, β )ᵀ_x = 0.
Now we left multiply by the inverse of the matrix of eigenvectors to obtain
( α, β )ᵀ_t + [ −c 0 ; 0 c ] ( α, β )ᵀ_x = 0.
This is two un-coupled partial differential equations of first order with solutions
α(x, t) = p(x + ct), β(x, t) = q(x − ct),
where p, q ∈ C² are arbitrary functions. Changing variables back to v and w,
v(x, t) = p(x + ct) + q(x − ct), w(x, t) = c p(x + ct) − c q(x − ct).
Since v = u_x, w = u_t, we have
u = f(x + ct) + g(x − ct),
where f, g ∈ C² are arbitrary functions. This is the general solution of the one-dimensional wave equation. Note that for any given problem, f and g are only determined to within an additive constant. For any constant k, adding k to f and subtracting it from g does not change the solution.
u = ( f(x + ct) + k ) + ( g(x − ct) − k )
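The general solution can be corroborated numerically for arbitrary smooth f and g; the sample profiles and finite-difference scheme below are assumptions of this sketch.

```python
import math

# Check that u(x,t) = f(x+ct) + g(x−ct) solves u_tt = c² u_xx,
# for hypothetical sample C² functions f and g.
c = 1.5
f = lambda s: math.exp(-s * s)
g = lambda s: math.sin(s)
u = lambda x, t: f(x + c * t) + g(x - c * t)

def residual(x, t, h=1e-3):
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_tt - c**2 * u_xx

for (x, t) in [(0.3, 0.7), (-1.2, 2.0), (2.5, -0.4)]:
    assert abs(residual(x, t)) < 1e-4
print("general solution satisfies the wave equation")
```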
44.2 The Method of Characteristics for an Infinite Domain
Consider the problem
u_tt = c² u_xx, −∞ < x < ∞, t > 0
u(x, 0) = p(x), u_t(x, 0) = q(x).
We know that the solution has the form
u(x, t) = f(x + ct) + g(x − ct). (44.1)
The initial conditions give us the two equations
f(x) + g(x) = p(x), c f′(x) − c g′(x) = q(x).
We integrate the second equation.
f(x) − g(x) = (1/c) Q(x) + 2k
Here Q(x) = ∫ q(x) dx and k is an arbitrary constant. We solve the system of equations for f and g.
f(x) = ½ p(x) + (1/(2c)) Q(x) + k, g(x) = ½ p(x) − (1/(2c)) Q(x) − k
Note that the value of k does not affect the solution, u(x, t). For simplicity we take k = 0. We substitute f and g into Equation 44.1 to determine the solution.
u(x, t) = ½ (p(x + ct) + p(x − ct)) + (1/(2c)) (Q(x + ct) − Q(x − ct))
u(x, t) = ½ (p(x + ct) + p(x − ct)) + (1/(2c)) ∫_{x−ct}^{x+ct} q(ξ) dξ
u(x, t) = ½ (u(x + ct, 0) + u(x − ct, 0)) + (1/(2c)) ∫_{x−ct}^{x+ct} u_t(ξ, 0) dξ
44.3 The Method of Characteristics for a Semi-Infinite Domain
Consider the problem
∂²u/∂t² = c² ∂²u/∂x², 0 ≤ x < ∞, t > 0
u(x, 0) = f(x), ∂u(x, 0)/∂t = 0, u(0, t) = h(t).
We assume that f(0) = h(0) = u₀. We write the solution as u(x, t) = F(x − ct) + G(x + ct). Following the previous section we see that
G(ξ) = ½ f(ξ) + k, for ξ > 0,
F(ξ) = ½ f(ξ) − k, for ξ > 0.
The boundary condition yields
F(−ct) + G(ct) = h(t), for t > 0
F(ξ) + G(−ξ) = h(−ξ/c), for ξ < 0
F(ξ) = h(−ξ/c) − ½ f(−ξ) − k, for ξ < 0.
Since u(0, 0) = F(0) + G(0) = f(0) = h(0) = u₀, we take k = −½ u₀ so that the two branches of F agree at ξ = 0. Now F and G are
F(ξ) = { ½ f(ξ) + ½ u₀, for ξ > 0; h(−ξ/c) − ½ f(−ξ) + ½ u₀, for ξ < 0 }
G(ξ) = ½ f(ξ) − ½ u₀, for ξ > 0.
Thus the solution is
u(x, t) = { ½ (f(x − ct) + f(x + ct)), for x − ct > 0; h(t − x/c) + ½ (f(x + ct) − f(ct − x)), for x − ct < 0. }
44.4 Envelopes of Curves
Consider the tangent lines to the parabola y = x². The slope of the tangent at the point (x, x²) is 2x. The set of tangents form a one parameter family of lines,
f(x, t) = t² + (x − t) 2t = 2tx − t².
The parabola and some of its tangents are plotted in Figure 44.1.
Figure 44.1: A parabola and its tangents.
The parabola is the envelope of the family of tangent lines. Each point on the parabola is tangent to one of the lines. Given a curve, we can generate a family of lines that envelope the curve. We can also do the opposite, given a family of lines, we can determine the curve that they envelope. More generally, given a family of curves, we can determine the curve that they envelope. Let the one parameter family of curves be given by the equation F(x, y, t) = 0. For the example of the tangents to the parabola this equation would be y − 2tx + t² = 0.
Let y(x) be the envelope of F(x, y, t) = 0. Then the points on y(x) must lie on the family of curves. Thus y(x) must satisfy the equation F(x, y, t) = 0. The points that lie on the envelope have the property,
∂F(x, y, t)/∂t = 0.
We can solve this equation for t in terms of x and y, t = t(x, y). The equation for the envelope is then
F(x, y, t(x, y)) = 0.
Consider the example of the tangents to the parabola. The equation of the one-parameter family of curves is
F(x, y, t) ≡ y − 2tx + t² = 0.
The condition F_t(x, y, t) = 0 gives us the constraint,
−2x + 2t = 0.
Solving this for t gives us t(x, y) = x. The equation for the envelope is then,
y − 2xx + x² = 0,
y = x².
Example 44.4.1 Consider the one parameter family of curves,
(x − t)² + (y − t)² − 1 = 0.
These are circles of unit radius and center (t, t). To determine the envelope of the family, we first use the constraint F_t(x, y, t) = 0 to solve for t(x, y).
F_t(x, y, t) = −2(x − t) − 2(y − t) = 0
t(x, y) = (x + y)/2
Now we substitute this into the equation F(x, y, t) = 0 to determine the envelope.
F( x, y, (x + y)/2 ) = ( x − (x + y)/2 )² + ( y − (x + y)/2 )² − 1 = 0
( (x − y)/2 )² + ( (y − x)/2 )² − 1 = 0
(x − y)² = 2
y = x ± √2
The one parameter family of curves and its envelope is shown in Figure 44.2.
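The tangency of the envelope can be confirmed with a distance computation: a line is tangent to a unit circle exactly when its distance from the center is 1. This check is a sketch added here, not part of the text.

```python
import math

# The envelope lines y = x ± √2 can be written x − y ± √2 = 0.
# Distance from the circle center (t, t) to that line should equal the radius 1
# for every t, confirming tangency to each member of the family.
def distance_to_line(t, sign):
    # distance from (t, t) to x − y + sign·√2 = 0, with normal (1, −1)
    return abs(t - t + sign * math.sqrt(2)) / math.hypot(1.0, -1.0)

for t in [-3.0, 0.0, 1.7, 10.0]:
    assert abs(distance_to_line(t, +1) - 1.0) < 1e-12
    assert abs(distance_to_line(t, -1) - 1.0) < 1e-12
print("envelope lines are tangent to every circle in the family")
```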
Figure 44.2: The envelope of (x − t)² + (y − t)² − 1 = 0.
44.5 Exercises
Exercise 44.1
Consider a semi-infinite string, x > 0. For all time the end of the string is displaced according to u(0, t) = f(t). Find the motion of the string, u(x, t) with the method of characteristics and then with a Fourier transform in time. The wave speed is c.
Exercise 44.2
Solve using characteristics:
u u_x + u_y = 1, u|_{x=y} = x/2.
Exercise 44.3
Solve using characteristics:
(y + u) u_x + y u_y = x − y, u|_{y=1} = 1 + x.

44.6 Hints
Hint 44.1
Hint 44.2
Hint 44.3
44.7 Solutions
Solution 44.1
Method of characteristics. The problem is
u_tt − c² u_xx = 0, x > 0, −∞ < t < ∞,
u(0, t) = f(t).
By the method of characteristics, we know that the solution has the form,
u(x, t) = F(x − ct).
That is, it is a wave moving to the right with speed c. Substituting this into the boundary condition yields,
F(−ct) = f(t)
F(ξ) = f(−ξ/c)
Now we can write the solution.
u(x, t) = f(t − x/c)
Fourier transform. We take the Fourier transform in time of the wave equation and the boundary condition.
u_tt = c² u_xx, u(0, t) = f(t)
−ω² û = c² û_xx, û(0, ω) = f̂(ω)
û_xx + (ω²/c²) û = 0, û(0, ω) = f̂(ω)
The general solution of this ordinary differential equation is
û(x, ω) = a(ω) e^{iωx/c} + b(ω) e^{−iωx/c}.
The radiation condition, (u(x, t) must be a wave traveling in the positive direction), and the boundary condition at x = 0 will determine the constants a and b. u is the inverse Fourier transform of û.
u(x, t) = ∫_{−∞}^{∞} ( a(ω) e^{iωx/c} + b(ω) e^{−iωx/c} ) e^{iωt} dω
u(x, t) = ∫_{−∞}^{∞} ( a(ω) e^{iω(t + x/c)} + b(ω) e^{iω(t − x/c)} ) dω
The first and second terms in the integrand are left and right traveling waves, respectively. In order that u is a right traveling wave, it must be a superposition of right traveling waves. We conclude that a(ω) = 0. Applying the boundary condition at x = 0, we solve for û.
û(x, ω) = f̂(ω) e^{−iωx/c}
Finally we take the inverse Fourier transform.
u(x, t) = ∫_{−∞}^{∞} f̂(ω) e^{iω(t − x/c)} dω
u(x, t) = f(t − x/c)
Solution 44.2
u u_x + u_y = 1, u|_{x=y} = x/2 (44.2)
We form du/dy.
du/dy = u_x dx/dy + u_y
We compare this with Equation 44.2 to obtain differential equations for x and u.
dx/dy = u, du/dy = 1. (44.3)
The initial data is
x(y = ξ) = ξ, u(y = ξ) = ξ/2. (44.4)
We solve the differential equation for u (44.3) subject to the initial condition (44.4).
u(x(y), y) = y − ξ/2
The differential equation for x becomes
dx/dy = y − ξ/2.
We solve this subject to the initial condition (44.4).
x(y) = ½ ( y² + ξ(2 − y) )
This defines the characteristic starting at the point (ξ, ξ). We solve for ξ.
ξ = (y² − 2x)/(y − 2)
We substitute this value for ξ into the solution for u.
u(x, y) = ( y(y − 4) + 2x )/( 2(y − 2) )
This solution is defined for y ≠ 2. This is because at (x, y) = (2, 2), the characteristic is parallel to the line x = y. Figure 44.3 has a plot of the solution that shows the singularity at y = 2.
Figure 44.3: The solution u(x, y).
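The solution obtained by characteristics can be spot-checked against the equation and the initial data; the sample points below are assumptions of this sketch.

```python
# Check that u(x,y) = (y(y−4) + 2x)/(2(y−2)) satisfies u u_x + u_y = 1
# and the data u = x/2 on the line x = y.
def u(x, y):
    return (y * (y - 4) + 2 * x) / (2 * (y - 2))

def residual(x, y, h=1e-6):
    u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return u(x, y) * u_x + u_y - 1.0

for (x, y) in [(0.0, 0.0), (1.0, 3.0), (-2.0, 0.5)]:
    assert abs(residual(x, y)) < 1e-6
for s in [-1.0, 0.5, 3.0]:        # points on x = y, away from y = 2
    assert abs(u(s, s) - s / 2) < 1e-12
print("characteristic solution verified")
```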
Solution 44.3
(y + u) u_x + y u_y = x − y, u|_{y=1} = 1 + x (44.5)
We differentiate u with respect to s.
du/ds = u_x dx/ds + u_y dy/ds
We compare this with Equation 44.5 to obtain differential equations for x, y and u.
dx/ds = y + u, dy/ds = y, du/ds = x − y
We parametrize the initial data in terms of s.
x(s = 0) = ξ, y(s = 0) = 1, u(s = 0) = 1 + ξ
We solve the equation for y subject to the initial condition.
y(s) = e^s
This gives us a coupled set of differential equations for x and u.
dx/ds = e^s + u, du/ds = x − e^s
The solutions subject to the initial conditions are
x(s) = (ξ + 1) e^s − e^{−s}, u(s) = ξ e^s + e^{−s}.
We substitute y(s) = e^s into these solutions.
x = (ξ + 1) y − 1/y, u = ξ y + 1/y
We solve the first equation for ξ and substitute it into the second equation to obtain the solution.
u(x, y) = (2 + xy − y²)/y
This solution is valid for y > 0. The characteristic passing through (ξ, 1) is
x(s) = (ξ + 1) e^s − e^{−s}, y(s) = e^s.
Hence we see that the characteristics satisfy y(s) > 0 for all real s. Figure 44.4 shows some characteristics in the (x, y) plane with starting points from (−5, 1) to (5, 1) and a plot of the solution.
Figure 44.4: Some characteristics and the solution u(x, y).
Chapter 45
Transform Methods
45.1 Fourier Transform for Partial Dierential Equations
Solve Laplaces equation in the upper half plane

2
u = 0 < x < , y > 0
u(x, 0) = f(x) < x <
Taking the Fourier transform in the x variable of the equation and the boundary condition,
T
_

2
u
x
2
+

2
u
y
2
_
= 0, T [u(x, 0)] = T [f(x)]

2
U(, y) +

2
y
2
U(, y) = 0, U(, 0) = F().
The general solution to the equation is
U(, y) = a e
y
+b e
y
.
1759
Remember that in solving the differential equation here we consider $\omega$ to be a parameter. Requiring that the solution be bounded for $y \in [0, \infty)$ yields

$U(\omega, y) = a e^{-|\omega| y}.$

Applying the boundary condition,

$U(\omega, y) = F(\omega) e^{-|\omega| y}.$

The inverse Fourier transform of $e^{-|\omega| y}$ is

$\mathcal{F}^{-1}\left[ e^{-|\omega| y} \right] = \frac{2y}{x^2 + y^2}.$

Thus

$U(\omega, y) = F(\omega) \, \mathcal{F}\left[ \frac{2y}{x^2 + y^2} \right]$

$\mathcal{F}[u(x, y)] = \mathcal{F}[f(x)] \, \mathcal{F}\left[ \frac{2y}{x^2 + y^2} \right].$

Recall that the convolution theorem is

$\mathcal{F}\left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} f(x - \xi) g(\xi) \, d\xi \right] = F(\omega) G(\omega).$

Applying the convolution theorem to the equation for $U$,

$u(x, y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(x - \xi) \frac{2y}{\xi^2 + y^2} \, d\xi$

$u(x, y) = \frac{y}{\pi} \int_{-\infty}^{\infty} \frac{f(x - \xi)}{\xi^2 + y^2} \, d\xi.$
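This final formula can be sanity-checked numerically. The sketch below is not part of the original text; the boundary function $f$ and the evaluation points are arbitrary choices. For $f(x) = 1/(1+x^2)$ the half-plane solution is known in closed form, because convolving two Cauchy densities gives another Cauchy density.

```python
import numpy as np
from scipy.integrate import quad

def poisson_halfplane(f, x, y):
    # u(x, y) = (y/pi) * integral of f(x - xi) / (xi^2 + y^2) over all xi
    integrand = lambda xi: f(x - xi) / (xi**2 + y**2)
    val, _ = quad(integrand, -np.inf, np.inf)
    return y / np.pi * val

# boundary data with a known closed-form half-plane solution
f = lambda x: 1.0 / (1.0 + x**2)
exact = lambda x, y: (1.0 + y) / (x**2 + (1.0 + y)**2)

print(poisson_halfplane(f, 0.7, 2.0), exact(0.7, 2.0))
```

The two printed numbers should agree to quadrature accuracy, supporting the derived integral representation.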
45.2 The Fourier Sine Transform

Consider the problem

$u_t = u_{xx}, \qquad x > 0, \quad t > 0$

$u(0, t) = 0, \qquad u(x, 0) = f(x)$

Since we are given the position at $x = 0$ we apply the Fourier sine transform.

$\hat{u}_t = -\omega^2 \hat{u} + \frac{\omega}{\pi} u(0, t)$

$\hat{u}_t = -\omega^2 \hat{u}$

$\hat{u}(\omega, t) = c(\omega) e^{-\omega^2 t}$

The initial condition is

$\hat{u}(\omega, 0) = \hat{f}(\omega).$

We solve the first order differential equation to determine $\hat{u}$.

$\hat{u}(\omega, t) = \hat{f}(\omega) e^{-\omega^2 t}$

$\hat{u}(\omega, t) = \hat{f}(\omega) \, \mathcal{F}_c\left[ \sqrt{\frac{\pi}{t}} \, e^{-x^2/(4t)} \right]$

We take the inverse sine transform with the convolution theorem.

$u(x, t) = \frac{1}{2\sqrt{\pi t}} \int_0^{\infty} f(\xi) \left( e^{-|x - \xi|^2/(4t)} - e^{-(x + \xi)^2/(4t)} \right) d\xi$
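The boxed result is the image-source form of the half-line heat solution. As a check (not from the text; the initial profile is an arbitrary choice), it should agree with the full-line heat kernel applied to the odd extension of $f$, and it should vanish at $x = 0$.

```python
import numpy as np
from scipy.integrate import quad

def u_half_line(f, x, t):
    # difference of heat kernels centered at xi and at the image point -xi
    k = lambda xi: np.exp(-(x - xi)**2/(4*t)) - np.exp(-(x + xi)**2/(4*t))
    val, _ = quad(lambda xi: f(xi) * k(xi), 0, np.inf)
    return val / (2*np.sqrt(np.pi*t))

def u_odd_extension(f, x, t):
    # full-line heat kernel applied to the odd extension of f
    fo = lambda xi: f(xi) if xi >= 0 else -f(-xi)
    val, _ = quad(lambda xi: fo(xi)*np.exp(-(x - xi)**2/(4*t)), -np.inf, np.inf)
    return val / np.sqrt(4*np.pi*t)

f = lambda x: x*np.exp(-x)          # sample initial condition
print(u_half_line(f, 1.3, 0.5), u_odd_extension(f, 1.3, 0.5))
print(u_half_line(f, 0.0, 0.5))     # boundary value, should be zero
```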
45.3 Fourier Transform

Consider the problem

$u_t - u_x + u = 0, \qquad -\infty < x < \infty, \quad t > 0,$

$u(x, 0) = f(x).$

Taking the Fourier transform of the partial differential equation and the initial condition yields

$U_t - i \omega U + U = 0,$

$U(\omega, 0) = F(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(x) e^{-i \omega x} \, dx.$

Now we have a first order differential equation for $U(\omega, t)$ with the solution

$U(\omega, t) = F(\omega) e^{(-1 + i \omega) t}.$

Now we apply the inverse Fourier transform.

$u(x, t) = \int_{-\infty}^{\infty} F(\omega) e^{(-1 + i \omega) t} e^{i \omega x} \, d\omega$

$u(x, t) = e^{-t} \int_{-\infty}^{\infty} F(\omega) e^{i \omega (x + t)} \, d\omega$

$u(x, t) = e^{-t} f(x + t)$
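The closed-form answer can be verified symbolically for an arbitrary profile $f$; this check is an addition, not part of the original text.

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')              # arbitrary initial profile
u = sp.exp(-t) * f(x + t)         # claimed solution

# residual of u_t - u_x + u = 0
residual = sp.diff(u, t) - sp.diff(u, x) + u
print(sp.simplify(residual))      # should print 0
print(u.subs(t, 0))               # recovers the initial condition f(x)
```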
45.4 Exercises

Exercise 45.1
Find an integral representation of the solution $u(x, y)$, of

$u_{xx} + u_{yy} = 0 \quad \text{in} \quad -\infty < x < \infty, \quad 0 < y < \infty,$

subject to the boundary conditions:

$u(x, 0) = f(x), \qquad -\infty < x < \infty;$

$u(x, y) \to 0 \quad \text{as} \quad x^2 + y^2 \to \infty.$
Exercise 45.2
Solve the Cauchy problem for the one-dimensional heat equation in the domain $-\infty < x < \infty$, $t > 0$,

$u_t = u_{xx}, \qquad u(x, 0) = f(x),$

with the Fourier transform.
Exercise 45.3
Let $\phi(x, t)$ satisfy the equation

$\phi_t = a^2 \phi_{xx}, \tag{45.1}$

for $-\infty < x < \infty$, $t > 0$ with initial conditions $\phi(x, 0) = f(x)$ in $-\infty < x < \infty$. Boundary conditions cannot be given here because both endpoints are infinite. In this case all we can ask is that the solution be regular as $x \to \pm\infty$. Show that the Laplace transform of $\phi(x, t)$ is given by

$\hat{\phi}(x, s) = \frac{1}{2 a \sqrt{s}} \int_{-\infty}^{\infty} f(\xi) \exp\left( -\frac{\sqrt{s}}{a} |x - \xi| \right) d\xi,$

and hence deduce that

$\phi(x, t) = \frac{1}{2 a \sqrt{\pi t}} \int_{-\infty}^{\infty} f(\xi) \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) d\xi.$
Exercise 45.4
1. In Exercise 45.3 above, let $f(-x) = -f(x)$ for all $x$ and verify that $\phi(x, t)$ so obtained is the solution, for $x > 0$, of the following problem: find $\phi(x, t)$ satisfying

$\phi_t = a^2 \phi_{xx}$

in $0 < x < \infty$, $t > 0$, with boundary condition $\phi(0, t) = 0$ and initial condition $\phi(x, 0) = f(x)$. This technique, in which the solution for a semi-infinite interval is obtained from that for an infinite interval, is an example of what is called the method of images.

2. How would you modify the result of part (a) if the boundary condition $\phi(0, t) = 0$ were replaced by $\phi_x(0, t) = 0$?
Exercise 45.5
Solve the Cauchy problem for the one-dimensional wave equation in the domain $-\infty < x < \infty$, $t > 0$,

$u_{tt} = c^2 u_{xx}, \qquad u(x, 0) = f(x), \qquad u_t(x, 0) = g(x),$

with the Fourier transform.
Exercise 45.6
Solve the Cauchy problem for the one-dimensional wave equation in the domain $-\infty < x < \infty$, $t > 0$,

$u_{tt} = c^2 u_{xx}, \qquad u(x, 0) = f(x), \qquad u_t(x, 0) = g(x),$

with the Laplace transform.
Exercise 45.7
Consider the problem of determining $\phi(x, t)$ in the region $0 < x < \infty$, $0 < t < \infty$, such that

$\phi_t = a^2 \phi_{xx}, \tag{45.2}$

with initial and boundary conditions

$\phi(x, 0) = 0 \quad \text{for all } x > 0,$

$\phi(0, t) = f(t) \quad \text{for all } t > 0,$

where $f(t)$ is a given function.

1. Obtain the formula for the Laplace transform of $\phi(x, t)$, $\hat{\phi}(x, s)$, and use the convolution theorem for Laplace transforms to show that

$\phi(x, t) = \frac{x}{2 a \sqrt{\pi}} \int_0^t f(t - \tau) \frac{1}{\tau^{3/2}} \exp\left( -\frac{x^2}{4 a^2 \tau} \right) d\tau.$

2. Discuss the special case obtained by setting $f(t) = 1$ and also that in which $f(t) = 1$ for $0 < t < T$, with $f(t) = 0$ for $t > T$. Here $T$ is some positive constant.
Exercise 45.8
Solve the radiating half space problem:

$u_t = u_{xx}, \qquad x > 0, \quad t > 0,$

$u_x(0, t) - \alpha u(0, t) = 0, \qquad u(x, 0) = f(x).$

To do this, define

$v(x, t) = u_x(x, t) - \alpha u(x, t)$

and find the half space problem that $v$ satisfies. Solve this problem and then show that

$u(x, t) = -\int_x^{\infty} e^{\alpha (x - \xi)} v(\xi, t) \, d\xi.$
Exercise 45.9
Show that

$\int_0^{\infty} \omega \, e^{-c \omega^2} \sin(\omega x) \, d\omega = \frac{\sqrt{\pi} \, x}{4 c^{3/2}} \, e^{-x^2/(4c)}.$

Use the sine transform to solve:

$u_t = u_{xx}, \qquad x > 0, \quad t > 0,$

$u(0, t) = g(t), \qquad u(x, 0) = 0.$
Exercise 45.10
Use the Fourier sine transform to find the steady state temperature $u(x, y)$ in a slab: $x \geq 0$, $0 \leq y \leq 1$, which has zero temperature on the faces $y = 0$ and $y = 1$ and has a given distribution $u(0, y) = f(y)$ on the edge $x = 0$, $0 \leq y \leq 1$.
Exercise 45.11
Find a harmonic function $u(x, y)$ in the upper half plane which takes on the value $g(x)$ on the x-axis. Assume that $u$ and $u_x$ vanish as $|x| \to \infty$. Use the Fourier transform with respect to $x$. Express the solution as a single integral by using the convolution formula.
Exercise 45.12
Find the bounded solution of

$u_t = u_{xx} - a^2 u, \qquad 0 < x < \infty, \quad t > 0,$

$u_x(0, t) = f(t), \qquad u(x, 0) = 0.$
Exercise 45.13
The left end of a taut string of length $L$ is displaced according to $u(0, t) = f(t)$. The right end is fixed, $u(L, t) = 0$. Initially the string is at rest with no displacement. If $c$ is the wave speed for the string, find its motion for all $t > 0$.
Exercise 45.14
Let $\nabla^2 \phi = 0$ in the $(x, y)$-plane region defined by $0 < y < l$, $-\infty < x < \infty$, with $\phi(x, 0) = \delta(x - \xi)$, $\phi(x, l) = 0$, and $\phi \to 0$ as $|x| \to \infty$. Solve for $\phi$ using Fourier transforms. You may leave your answer in the form of an integral but in fact it is possible to use techniques of contour integration to show that

$\phi(x, y | \xi) = \frac{1}{2l} \left( \frac{\sin(\pi y / l)}{\cosh[\pi (x - \xi)/l] - \cos(\pi y / l)} \right).$

Note that as $l \to \infty$ we recover the result derived in class:

$\phi \to \frac{1}{\pi} \frac{y}{(x - \xi)^2 + y^2},$

which clearly approaches $\delta(x - \xi)$ as $y \to 0$.
45.5 Hints

Hint 45.1
The desired solution form is: $u(x, y) = \int_{-\infty}^{\infty} K(x - \xi, y) f(\xi) \, d\xi$. You must find the correct $K$. Take the Fourier transform with respect to $x$ and solve for $\hat{u}(\omega, y)$ recalling that $\widehat{u_{xx}} = -\omega^2 \hat{u}$. By $\widehat{u_{xx}}$ we denote the Fourier transform with respect to $x$ of $u_{xx}(x, y)$.

Hint 45.2
Use the Fourier convolution theorem and the table of Fourier transforms in the appendix.

Hint 45.3

Hint 45.4

Hint 45.5
Use the Fourier convolution theorem. The transform pairs,

$\mathcal{F}[\pi (\delta(x + \tau) + \delta(x - \tau))] = \cos(\omega \tau),$

$\mathcal{F}[\pi (H(x + \tau) - H(x - \tau))] = \frac{\sin(\omega \tau)}{\omega},$

will be useful.

Hint 45.6
Hint 45.7

Hint 45.8
$v(x, t)$ satisfies the same partial differential equation. You can solve the problem for $v(x, t)$ with the Fourier sine transform. Use the convolution theorem to invert the transform.
To show that

$u(x, t) = -\int_x^{\infty} e^{\alpha (x - \xi)} v(\xi, t) \, d\xi,$

find the solution of

$u_x - \alpha u = v$

that is bounded as $x \to \infty$.

Hint 45.9
Note that

$\int_0^{\infty} \omega \, e^{-c \omega^2} \sin(\omega x) \, d\omega = -\frac{\partial}{\partial x} \int_0^{\infty} e^{-c \omega^2} \cos(\omega x) \, d\omega.$

Write the integral as a Fourier transform.
Take the Fourier sine transform of the heat equation to obtain a first order, ordinary differential equation for $\hat{u}(\omega, t)$. Solve the differential equation and do the inversion with the convolution theorem.
Hint 45.10
Hint 45.11
Hint 45.12
Hint 45.13
Hint 45.14
45.6 Solutions
Solution 45.1
1. We take the Fourier transform of the integral equation, noting that the left side is the convolution of $u(x)$ and $\frac{1}{x^2 + a^2}$.

$2\pi \hat{u}(\omega) \, \mathcal{F}\left[ \frac{1}{x^2 + a^2} \right] = \mathcal{F}\left[ \frac{1}{x^2 + b^2} \right]$

We find the Fourier transform of $f(x) = \frac{1}{x^2 + c^2}$. Note that since $f(x)$ is an even, real-valued function, $\hat{f}(\omega)$ is an even, real-valued function.

$\mathcal{F}\left[ \frac{1}{x^2 + c^2} \right] = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{1}{x^2 + c^2} e^{-i \omega x} \, dx$

For $\omega < 0$ we close the path of integration in the upper half plane and apply Jordan's Lemma to evaluate the integral in terms of the residues.

$= \frac{1}{2\pi} \, i 2\pi \operatorname{Res}\left( \frac{e^{-i \omega x}}{(x - i c)(x + i c)}, \; x = i c \right)$

$= i \frac{e^{-i \omega i c}}{2 i c}$

$= \frac{1}{2c} e^{\omega c}$

Since $\hat{f}(\omega)$ is an even function, we have

$\mathcal{F}\left[ \frac{1}{x^2 + c^2} \right] = \frac{1}{2c} e^{-c |\omega|}.$
Our equation for $\hat{u}(\omega)$ becomes,

$2\pi \hat{u}(\omega) \frac{1}{2a} e^{-a |\omega|} = \frac{1}{2b} e^{-b |\omega|}$

$\hat{u}(\omega) = \frac{a}{2 \pi b} e^{-(b - a) |\omega|}.$

We take the inverse Fourier transform using the transform pair we derived above.

$u(x) = \frac{a}{2 \pi b} \frac{2 (b - a)}{x^2 + (b - a)^2}$

$u(x) = \frac{a (b - a)}{\pi b (x^2 + (b - a)^2)}$

2. We take the Fourier transform of the partial differential equation and the boundary condition.

$u_{xx} + u_{yy} = 0, \qquad u(x, 0) = f(x)$

$-\omega^2 \hat{u}(\omega, y) + \hat{u}_{yy}(\omega, y) = 0, \qquad \hat{u}(\omega, 0) = \hat{f}(\omega)$

This is an ordinary differential equation for $\hat{u}$ in which $\omega$ is a parameter. The general solution is

$\hat{u} = c_1 e^{\omega y} + c_2 e^{-\omega y}.$

We apply the boundary conditions that $\hat{u}(\omega, 0) = \hat{f}(\omega)$ and $\hat{u} \to 0$ as $y \to \infty$.

$\hat{u}(\omega, y) = \hat{f}(\omega) e^{-|\omega| y}$

We take the inverse transform using the convolution theorem.

$u(x, y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{2y}{(x - \xi)^2 + y^2} f(\xi) \, d\xi$
Solution 45.2

$u_t = u_{xx}, \qquad u(x, 0) = f(x)$

We take the Fourier transform of the heat equation and the initial condition.

$\hat{u}_t = -\omega^2 \hat{u}, \qquad \hat{u}(\omega, 0) = \hat{f}(\omega)$

This is a first order ordinary differential equation which has the solution,

$\hat{u}(\omega, t) = \hat{f}(\omega) e^{-\omega^2 t}.$

Using a table of Fourier transforms we can write this in a form that is conducive to applying the convolution theorem.

$\hat{u}(\omega, t) = \hat{f}(\omega) \, \mathcal{F}\left[ \sqrt{\frac{\pi}{t}} \, e^{-x^2/(4t)} \right]$

$u(x, t) = \frac{1}{\sqrt{4 \pi t}} \int_{-\infty}^{\infty} e^{-(x - \xi)^2/(4t)} f(\xi) \, d\xi$
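The heat-kernel formula can be checked against a case with a closed-form answer; this check is an addition, with the Gaussian initial data chosen only because the convolution of two Gaussians is again a Gaussian.

```python
import numpy as np
from scipy.integrate import quad

def heat_solution(f, x, t):
    # u(x, t) = 1/sqrt(4 pi t) * integral of exp(-(x - xi)^2/(4t)) f(xi)
    val, _ = quad(lambda xi: np.exp(-(x - xi)**2/(4*t)) * f(xi), -np.inf, np.inf)
    return val / np.sqrt(4*np.pi*t)

f = lambda x: np.exp(-x**2)                            # Gaussian initial data
exact = lambda x, t: np.exp(-x**2/(1 + 4*t)) / np.sqrt(1 + 4*t)

print(heat_solution(f, 0.8, 0.3), exact(0.8, 0.3))
```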
Solution 45.3
We take the Laplace transform of Equation 45.1.

$s \hat{\phi} - \phi(x, 0) = a^2 \hat{\phi}_{xx}$

$\hat{\phi}_{xx} - \frac{s}{a^2} \hat{\phi} = -\frac{f(x)}{a^2} \tag{45.3}$

The Green function problem for Equation 45.3 is

$G'' - \frac{s}{a^2} G = \delta(x - \xi), \qquad G(\pm\infty; \xi) \text{ is bounded.}$

The homogeneous solutions that satisfy the left and right boundary conditions are, respectively,

$\exp\left( \frac{\sqrt{s}}{a} x \right), \qquad \exp\left( -\frac{\sqrt{s}}{a} x \right).$

We compute the Wronskian of these solutions.

$W = \begin{vmatrix} \exp\left( \frac{\sqrt{s}}{a} x \right) & \exp\left( -\frac{\sqrt{s}}{a} x \right) \\ \frac{\sqrt{s}}{a} \exp\left( \frac{\sqrt{s}}{a} x \right) & -\frac{\sqrt{s}}{a} \exp\left( -\frac{\sqrt{s}}{a} x \right) \end{vmatrix} = -\frac{2 \sqrt{s}}{a}$

The Green function is

$G(x; \xi) = \frac{\exp\left( \frac{\sqrt{s}}{a} x_< \right) \exp\left( -\frac{\sqrt{s}}{a} x_> \right)}{-2\sqrt{s}/a}$

$G(x; \xi) = -\frac{a}{2 \sqrt{s}} \exp\left( -\frac{\sqrt{s}}{a} |x - \xi| \right).$

Now we solve Equation 45.3 using the Green function.

$\hat{\phi}(x, s) = \int_{-\infty}^{\infty} \left( -\frac{f(\xi)}{a^2} \right) G(x; \xi) \, d\xi$

$\hat{\phi}(x, s) = \frac{1}{2 a \sqrt{s}} \int_{-\infty}^{\infty} f(\xi) \exp\left( -\frac{\sqrt{s}}{a} |x - \xi| \right) d\xi$

Finally we take the inverse Laplace transform to obtain the solution of Equation 45.1.

$\phi(x, t) = \frac{1}{2 a \sqrt{\pi t}} \int_{-\infty}^{\infty} f(\xi) \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) d\xi$
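The last inversion rests on the transform pair $\mathcal{L}\left[\frac{1}{\sqrt{\pi t}} e^{-k^2/(4t)}\right] = \frac{1}{\sqrt{s}} e^{-k\sqrt{s}}$. That pair is easy to confirm numerically by evaluating the forward Laplace integral; this check is an addition, with $k$ and the test values of $s$ chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import quad

def laplace(fn, s):
    # forward Laplace transform by quadrature
    val, _ = quad(lambda t: np.exp(-s*t)*fn(t), 0, np.inf)
    return val

k = 1.5
fn = lambda t: np.exp(-k**2/(4*t)) / np.sqrt(np.pi*t)
for s in (0.5, 2.0):
    print(laplace(fn, s), np.exp(-k*np.sqrt(s))/np.sqrt(s))
```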
Solution 45.4
1. Clearly the solution satisfies the differential equation. We must verify that it satisfies the boundary condition, $\phi(0, t) = 0$. We use the fact that $f$ is odd, $f(-x) = -f(x)$.

$\phi(x, t) = \frac{1}{2 a \sqrt{\pi t}} \int_{-\infty}^{\infty} f(\xi) \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) d\xi$

$\phi(x, t) = \frac{1}{2 a \sqrt{\pi t}} \int_{-\infty}^{0} f(\xi) \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) d\xi + \frac{1}{2 a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) d\xi$

$\phi(x, t) = \frac{1}{2 a \sqrt{\pi t}} \int_{0}^{\infty} f(-\xi) \exp\left( -\frac{(x + \xi)^2}{4 a^2 t} \right) d\xi + \frac{1}{2 a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) d\xi$

$\phi(x, t) = -\frac{1}{2 a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{(x + \xi)^2}{4 a^2 t} \right) d\xi + \frac{1}{2 a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) d\xi$

$\phi(x, t) = \frac{1}{2 a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \left( \exp\left( -\frac{(x - \xi)^2}{4 a^2 t} \right) - \exp\left( -\frac{(x + \xi)^2}{4 a^2 t} \right) \right) d\xi$

$\phi(x, t) = \frac{1}{2 a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{x^2 + \xi^2}{4 a^2 t} \right) \left( \exp\left( \frac{x \xi}{2 a^2 t} \right) - \exp\left( -\frac{x \xi}{2 a^2 t} \right) \right) d\xi$

$\phi(x, t) = \frac{1}{a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{x^2 + \xi^2}{4 a^2 t} \right) \sinh\left( \frac{x \xi}{2 a^2 t} \right) d\xi$

Since the integrand is zero for $x = 0$, the solution satisfies the boundary condition there.

2. For the boundary condition $\phi_x(0, t) = 0$ we would choose $f(x)$ to be even, $f(-x) = f(x)$. The solution is

$\phi(x, t) = \frac{1}{a \sqrt{\pi t}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{x^2 + \xi^2}{4 a^2 t} \right) \cosh\left( \frac{x \xi}{2 a^2 t} \right) d\xi$

The derivative with respect to $x$ is

$\phi_x(x, t) = \frac{1}{2 a^3 \sqrt{\pi} \, t^{3/2}} \int_{0}^{\infty} f(\xi) \exp\left( -\frac{x^2 + \xi^2}{4 a^2 t} \right) \left( \xi \sinh\left( \frac{x \xi}{2 a^2 t} \right) - x \cosh\left( \frac{x \xi}{2 a^2 t} \right) \right) d\xi.$

Since the integrand is zero for $x = 0$, the solution satisfies the boundary condition there.
Solution 45.5
With the change of variables

$\tau = c t, \qquad \frac{\partial}{\partial \tau} = \frac{\partial t}{\partial \tau} \frac{\partial}{\partial t} = \frac{1}{c} \frac{\partial}{\partial t}, \qquad v(x, \tau) = u(x, t),$

the problem becomes

$v_{\tau\tau} = v_{xx}, \qquad v(x, 0) = f(x), \qquad v_{\tau}(x, 0) = \frac{1}{c} g(x).$

(This change of variables isn't necessary, it just gives us fewer constants to carry around.) We take the Fourier transform in $x$ of the equation and the initial conditions, (we consider $\tau$ to be a parameter),

$\hat{v}_{\tau\tau}(\omega, \tau) = -\omega^2 \hat{v}(\omega, \tau), \qquad \hat{v}(\omega, 0) = \hat{f}(\omega), \qquad \hat{v}_{\tau}(\omega, 0) = \frac{1}{c} \hat{g}(\omega).$

Now we have an ordinary differential equation for $\hat{v}(\omega, \tau)$, (now we consider $\omega$ to be a parameter). The general solution of this constant coefficient differential equation is,

$\hat{v}(\omega, \tau) = a(\omega) \cos(\omega \tau) + b(\omega) \sin(\omega \tau),$

where $a$ and $b$ are constants that depend on the parameter $\omega$. Applying the initial conditions, we see that

$\hat{v}(\omega, \tau) = \hat{f}(\omega) \cos(\omega \tau) + \frac{1}{c \omega} \hat{g}(\omega) \sin(\omega \tau).$

With the Fourier transform pairs

$\mathcal{F}[\pi (\delta(x + \tau) + \delta(x - \tau))] = \cos(\omega \tau),$

$\mathcal{F}[\pi (H(x + \tau) - H(x - \tau))] = \frac{\sin(\omega \tau)}{\omega},$

we can write $\hat{v}(\omega, \tau)$ in a form that is conducive to applying the Fourier convolution theorem.

$\hat{v}(\omega, \tau) = \mathcal{F}[f(x)] \, \mathcal{F}[\pi (\delta(x + \tau) + \delta(x - \tau))] + \frac{1}{c} \mathcal{F}[g(x)] \, \mathcal{F}[\pi (H(x + \tau) - H(x - \tau))]$

$v(x, \tau) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(\xi) \, \pi (\delta(x - \xi + \tau) + \delta(x - \xi - \tau)) \, d\xi + \frac{1}{c} \frac{1}{2\pi} \int_{-\infty}^{\infty} g(\xi) \, \pi (H(x - \xi + \tau) - H(x - \xi - \tau)) \, d\xi$

$v(x, \tau) = \frac{1}{2} (f(x + \tau) + f(x - \tau)) + \frac{1}{2c} \int_{x - \tau}^{x + \tau} g(\xi) \, d\xi$

Finally we make the change of variables $t = \tau / c$, $u(x, t) = v(x, \tau)$ to obtain d'Alembert's solution of the wave equation,

$u(x, t) = \frac{1}{2} (f(x - c t) + f(x + c t)) + \frac{1}{2c} \int_{x - c t}^{x + c t} g(\xi) \, d\xi.$
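D'Alembert's formula can be spot-checked by finite differences; this sketch is an addition, with $f$, $g$, and the test point chosen arbitrarily (the antiderivative of $g(x) = \cos x$ is used so the integral term is exact).

```python
import numpy as np

c = 2.0
f = lambda x: np.exp(-x**2)
G = np.sin                      # antiderivative of g(x) = cos(x)

def u(x, t):
    # d'Alembert solution with the g-integral evaluated in closed form
    return 0.5*(f(x - c*t) + f(x + c*t)) + (G(x + c*t) - G(x - c*t))/(2*c)

h = 1e-4
x0, t0 = 0.4, 0.7
u_tt = (u(x0, t0+h) - 2*u(x0, t0) + u(x0, t0-h)) / h**2
u_xx = (u(x0+h, t0) - 2*u(x0, t0) + u(x0-h, t0)) / h**2
print(u_tt, c**2 * u_xx)        # the wave equation residual should be tiny
print(u(x0, 0.0), f(x0))        # initial displacement is recovered
```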
Solution 45.6
With the change of variables

$\tau = c t, \qquad \frac{\partial}{\partial \tau} = \frac{\partial t}{\partial \tau} \frac{\partial}{\partial t} = \frac{1}{c} \frac{\partial}{\partial t}, \qquad v(x, \tau) = u(x, t),$

the problem becomes

$v_{\tau\tau} = v_{xx}, \qquad v(x, 0) = f(x), \qquad v_{\tau}(x, 0) = \frac{1}{c} g(x).$

We take the Laplace transform in $\tau$ of the equation, (we consider $x$ to be a parameter),

$s^2 V(x, s) - s v(x, 0) - v_{\tau}(x, 0) = V_{xx}(x, s),$

$V_{xx}(x, s) - s^2 V(x, s) = -s f(x) - \frac{1}{c} g(x).$

Now we have an ordinary differential equation for $V(x, s)$, (now we consider $s$ to be a parameter). We impose the boundary conditions that the solution is bounded at $x = \pm\infty$. Consider the Green function problem

$g''(x; \xi) - s^2 g(x; \xi) = \delta(x - \xi), \qquad g(\pm\infty; \xi) \text{ bounded.}$

$e^{s x}$ is a homogeneous solution that is bounded at $x = -\infty$. $e^{-s x}$ is a homogeneous solution that is bounded at $x = +\infty$. The Wronskian of these solutions is

$W(x) = \begin{vmatrix} e^{s x} & e^{-s x} \\ s e^{s x} & -s e^{-s x} \end{vmatrix} = -2 s.$

Thus the Green function is

$g(x; \xi) = \begin{cases} -\frac{1}{2s} e^{s x} e^{-s \xi} & \text{for } x < \xi, \\ -\frac{1}{2s} e^{s \xi} e^{-s x} & \text{for } x > \xi \end{cases} = -\frac{1}{2s} e^{-s |x - \xi|}.$

The solution for $V(x, s)$ is

$V(x, s) = -\frac{1}{2s} \int_{-\infty}^{\infty} e^{-s |x - \xi|} \left( -s f(\xi) - \frac{1}{c} g(\xi) \right) d\xi,$

$V(x, s) = \frac{1}{2} \int_{-\infty}^{\infty} e^{-s |x - \xi|} f(\xi) \, d\xi + \frac{1}{2 c s} \int_{-\infty}^{\infty} e^{-s |x - \xi|} g(\xi) \, d\xi,$

$V(x, s) = \frac{1}{2} \int_{-\infty}^{\infty} e^{-s |\xi|} f(x - \xi) \, d\xi + \frac{1}{2 c} \int_{-\infty}^{\infty} \frac{e^{-s |\xi|}}{s} g(x - \xi) \, d\xi.$

Now we take the inverse Laplace transform and interchange the order of integration.

$v(x, \tau) = \frac{1}{2} \mathcal{L}^{-1}\left[ \int_{-\infty}^{\infty} e^{-s |\xi|} f(x - \xi) \, d\xi \right] + \frac{1}{2 c} \mathcal{L}^{-1}\left[ \int_{-\infty}^{\infty} \frac{e^{-s |\xi|}}{s} g(x - \xi) \, d\xi \right]$

$v(x, \tau) = \frac{1}{2} \int_{-\infty}^{\infty} \mathcal{L}^{-1}\left[ e^{-s |\xi|} \right] f(x - \xi) \, d\xi + \frac{1}{2 c} \int_{-\infty}^{\infty} \mathcal{L}^{-1}\left[ \frac{e^{-s |\xi|}}{s} \right] g(x - \xi) \, d\xi$

$v(x, \tau) = \frac{1}{2} \int_{-\infty}^{\infty} \delta(\tau - |\xi|) f(x - \xi) \, d\xi + \frac{1}{2 c} \int_{-\infty}^{\infty} H(\tau - |\xi|) g(x - \xi) \, d\xi$

$v(x, \tau) = \frac{1}{2} (f(x - \tau) + f(x + \tau)) + \frac{1}{2 c} \int_{-\tau}^{\tau} g(x - \xi) \, d\xi$

$v(x, \tau) = \frac{1}{2} (f(x - \tau) + f(x + \tau)) + \frac{1}{2 c} \int_{x - \tau}^{x + \tau} g(\xi) \, d\xi$

Now we make the change of variables $t = \tau / c$, $u(x, t) = v(x, \tau)$ to obtain d'Alembert's solution of the wave equation,

$u(x, t) = \frac{1}{2} (f(x - c t) + f(x + c t)) + \frac{1}{2c} \int_{x - c t}^{x + c t} g(\xi) \, d\xi.$
Solution 45.7
1. We take the Laplace transform of Equation 45.2.

$s \hat{\phi} - \phi(x, 0) = a^2 \hat{\phi}_{xx}$

$\hat{\phi}_{xx} - \frac{s}{a^2} \hat{\phi} = 0 \tag{45.4}$

We take the Laplace transform of the boundary condition, $\phi(0, t) = f(t)$, and use that $\hat{\phi}(x, s)$ vanishes as $x \to \infty$ to obtain boundary conditions for $\hat{\phi}(x, s)$.

$\hat{\phi}(0, s) = \hat{f}(s), \qquad \hat{\phi}(\infty, s) = 0$

The solutions of Equation 45.4 are

$\exp\left( \pm \frac{\sqrt{s}}{a} x \right).$

The solution that satisfies the boundary conditions is

$\hat{\phi}(x, s) = \hat{f}(s) \exp\left( -\frac{\sqrt{s}}{a} x \right).$

We write this as the product of two Laplace transforms.

$\hat{\phi}(x, s) = \hat{f}(s) \, \mathcal{L}\left[ \frac{x}{2 a \sqrt{\pi} \, t^{3/2}} \exp\left( -\frac{x^2}{4 a^2 t} \right) \right]$

We invert using the convolution theorem.

$\phi(x, t) = \frac{x}{2 a \sqrt{\pi}} \int_0^t f(t - \tau) \frac{1}{\tau^{3/2}} \exp\left( -\frac{x^2}{4 a^2 \tau} \right) d\tau.$

2. Consider the case $f(t) = 1$.

$\phi(x, t) = \frac{x}{2 a \sqrt{\pi}} \int_0^t \frac{1}{\tau^{3/2}} \exp\left( -\frac{x^2}{4 a^2 \tau} \right) d\tau$

We make the substitution

$\eta = \frac{x}{2 a \sqrt{\tau}}, \qquad d\eta = -\frac{x}{4 a \tau^{3/2}} \, d\tau.$

$\phi(x, t) = \frac{2}{\sqrt{\pi}} \int_{x/(2 a \sqrt{t})}^{\infty} e^{-\eta^2} \, d\eta$

$\phi(x, t) = \operatorname{erfc}\left( \frac{x}{2 a \sqrt{t}} \right)$

Now consider the case in which $f(t) = 1$ for $0 < t < T$, with $f(t) = 0$ for $t > T$. For $t < T$, $\phi$ is the same as before.

$\phi(x, t) = \operatorname{erfc}\left( \frac{x}{2 a \sqrt{t}} \right), \qquad \text{for } 0 < t < T$

Consider $t > T$.

$\phi(x, t) = \frac{x}{2 a \sqrt{\pi}} \int_{t - T}^{t} \frac{1}{\tau^{3/2}} \exp\left( -\frac{x^2}{4 a^2 \tau} \right) d\tau$

$\phi(x, t) = \frac{2}{\sqrt{\pi}} \int_{x/(2 a \sqrt{t})}^{x/(2 a \sqrt{t - T})} e^{-\eta^2} \, d\eta$

$\phi(x, t) = \operatorname{erf}\left( \frac{x}{2 a \sqrt{t - T}} \right) - \operatorname{erf}\left( \frac{x}{2 a \sqrt{t}} \right)$
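The $f(t) = 1$ special case gives a convenient numerical cross-check of the convolution formula; this check is an addition, with $a$ and the test point chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

a = 1.3

def phi_const(x, t):
    # convolution formula specialized to f(t) = 1
    val, _ = quad(lambda tau: tau**-1.5 * np.exp(-x**2/(4*a**2*tau)), 0, t)
    return x/(2*a*np.sqrt(np.pi)) * val

x, t = 0.9, 2.0
print(phi_const(x, t), erfc(x/(2*a*np.sqrt(t))))
```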
Solution 45.8

$u_t = u_{xx}, \qquad x > 0, \quad t > 0,$

$u_x(0, t) - \alpha u(0, t) = 0, \qquad u(x, 0) = f(x).$

First we find the partial differential equation that $v$ satisfies. We start with the partial differential equation for $u$,

$u_t = u_{xx}.$

Differentiating this equation with respect to $x$ yields,

$u_{tx} = u_{xxx}.$

Subtracting $\alpha$ times the former equation from the latter yields,

$u_{tx} - \alpha u_t = u_{xxx} - \alpha u_{xx},$

$\frac{\partial}{\partial t} (u_x - \alpha u) = \frac{\partial^2}{\partial x^2} (u_x - \alpha u),$

$v_t = v_{xx}.$

Thus $v$ satisfies the same partial differential equation as $u$. This is because the equation for $u$ is linear and homogeneous and $v$ is a linear combination of $u$ and its derivatives. The problem for $v$ is,

$v_t = v_{xx}, \qquad x > 0, \quad t > 0,$

$v(0, t) = 0, \qquad v(x, 0) = f'(x) - \alpha f(x).$

With this new boundary condition, we can solve the problem with the Fourier sine transform. We take the sine transform of the partial differential equation and the initial condition.

$\hat{v}_t(\omega, t) = -\omega^2 \hat{v}(\omega, t) + \frac{\omega}{\pi} v(0, t), \qquad \hat{v}(\omega, 0) = \mathcal{F}_s[f'(x) - \alpha f(x)]$

$\hat{v}_t(\omega, t) = -\omega^2 \hat{v}(\omega, t), \qquad \hat{v}(\omega, 0) = \mathcal{F}_s[f'(x) - \alpha f(x)]$

Now we have a first order, ordinary differential equation for $\hat{v}$. The general solution is,

$\hat{v}(\omega, t) = c \, e^{-\omega^2 t}.$

The solution subject to the initial condition is,

$\hat{v}(\omega, t) = \mathcal{F}_s[f'(x) - \alpha f(x)] \, e^{-\omega^2 t}.$

Now we take the inverse sine transform to find $v$. We utilize the Fourier cosine transform pair,

$\mathcal{F}_c^{-1}\left[ e^{-\omega^2 t} \right] = \sqrt{\frac{\pi}{t}} \, e^{-x^2/(4t)},$

to write $\hat{v}$ in a form that is suitable for the convolution theorem.

$\hat{v}(\omega, t) = \mathcal{F}_s[f'(x) - \alpha f(x)] \, \mathcal{F}_c\left[ \sqrt{\frac{\pi}{t}} \, e^{-x^2/(4t)} \right]$

Recall that the Fourier sine convolution theorem is,

$\mathcal{F}_s\left[ \frac{1}{2\pi} \int_0^{\infty} f(\xi) \left( g(|x - \xi|) - g(x + \xi) \right) d\xi \right] = \mathcal{F}_s[f(x)] \, \mathcal{F}_c[g(x)].$

Thus $v(x, t)$ is

$v(x, t) = \frac{1}{2 \sqrt{\pi t}} \int_0^{\infty} (f'(\xi) - \alpha f(\xi)) \left( e^{-|x - \xi|^2/(4t)} - e^{-(x + \xi)^2/(4t)} \right) d\xi.$

With $v$ determined, we have a first order, ordinary differential equation for $u$,

$u_x - \alpha u = v.$

We solve this equation by multiplying by the integrating factor and integrating.

$\frac{\partial}{\partial x} \left( e^{-\alpha x} u \right) = e^{-\alpha x} v$

$e^{-\alpha x} u = \int^x e^{-\alpha \xi} v(\xi, t) \, d\xi + c(t)$

$u = \int^x e^{\alpha (x - \xi)} v(\xi, t) \, d\xi + e^{\alpha x} c(t)$

The solution that vanishes as $x \to \infty$ is

$u(x, t) = -\int_x^{\infty} e^{\alpha (x - \xi)} v(\xi, t) \, d\xi.$
Solution 45.9

$\int_0^{\infty} \omega \, e^{-c \omega^2} \sin(\omega x) \, d\omega = -\frac{\partial}{\partial x} \int_0^{\infty} e^{-c \omega^2} \cos(\omega x) \, d\omega$

$= -\frac{1}{2} \frac{\partial}{\partial x} \int_{-\infty}^{\infty} e^{-c \omega^2} \cos(\omega x) \, d\omega$

$= -\frac{1}{2} \frac{\partial}{\partial x} \int_{-\infty}^{\infty} e^{-c \omega^2 + i \omega x} \, d\omega$

$= -\frac{1}{2} \frac{\partial}{\partial x} \int_{-\infty}^{\infty} e^{-c (\omega - i x/(2c))^2} e^{-x^2/(4c)} \, d\omega$

$= -\frac{1}{2} \frac{\partial}{\partial x} \left( e^{-x^2/(4c)} \int_{-\infty}^{\infty} e^{-c \omega^2} \, d\omega \right)$

$= -\frac{1}{2} \sqrt{\frac{\pi}{c}} \frac{\partial}{\partial x} e^{-x^2/(4c)}$

$= \frac{\sqrt{\pi} \, x}{4 c^{3/2}} \, e^{-x^2/(4c)}$
Now we solve

$u_t = u_{xx}, \qquad x > 0, \quad t > 0,$

$u(0, t) = g(t), \qquad u(x, 0) = 0.$

We take the Fourier sine transform of the partial differential equation and the initial condition.

$\hat{u}_t(\omega, t) = -\omega^2 \hat{u}(\omega, t) + \frac{\omega}{\pi} g(t), \qquad \hat{u}(\omega, 0) = 0$

Now we have a first order, ordinary differential equation for $\hat{u}(\omega, t)$.

$\frac{\partial}{\partial t} \left( e^{\omega^2 t} \hat{u}(\omega, t) \right) = \frac{\omega}{\pi} g(t) \, e^{\omega^2 t}$

$\hat{u}(\omega, t) = \frac{\omega}{\pi} e^{-\omega^2 t} \int_0^t g(\tau) \, e^{\omega^2 \tau} \, d\tau + c(\omega) \, e^{-\omega^2 t}$

The initial condition is satisfied for $c(\omega) = 0$.

$\hat{u}(\omega, t) = \frac{\omega}{\pi} \int_0^t g(\tau) \, e^{-\omega^2 (t - \tau)} \, d\tau$

We take the inverse sine transform to find $u$.

$u(x, t) = \mathcal{F}_s^{-1}\left[ \frac{\omega}{\pi} \int_0^t g(\tau) \, e^{-\omega^2 (t - \tau)} \, d\tau \right]$

$u(x, t) = \int_0^t g(\tau) \, \mathcal{F}_s^{-1}\left[ \frac{\omega}{\pi} \, e^{-\omega^2 (t - \tau)} \right] d\tau$

$u(x, t) = \int_0^t g(\tau) \frac{x}{2 \sqrt{\pi} \, (t - \tau)^{3/2}} \, e^{-x^2/(4 (t - \tau))} \, d\tau$

$u(x, t) = \frac{x}{2 \sqrt{\pi}} \int_0^t g(\tau) \frac{e^{-x^2/(4 (t - \tau))}}{(t - \tau)^{3/2}} \, d\tau$
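The Gaussian sine integral used in the inversion can be confirmed numerically; this check is an addition, with the sample values of $x$ and $c$ chosen arbitrarily.

```python
import numpy as np
from scipy.integrate import quad

def lhs(x, c):
    # integral of w * exp(-c w^2) * sin(w x); the integrand is negligible past w = 60
    val, _ = quad(lambda w: w*np.exp(-c*w**2)*np.sin(w*x), 0, 60, limit=200)
    return val

def rhs(x, c):
    return np.sqrt(np.pi)*x/(4*c**1.5) * np.exp(-x**2/(4*c))

for x, c in [(1.0, 0.5), (2.5, 1.2)]:
    print(lhs(x, c), rhs(x, c))
```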
Solution 45.10
The problem is

$u_{xx} + u_{yy} = 0, \qquad 0 < x, \quad 0 < y < 1,$

$u(x, 0) = u(x, 1) = 0, \qquad u(0, y) = f(y).$

We take the Fourier sine transform in $x$ of the partial differential equation and the boundary conditions.

$-\omega^2 \hat{u}(\omega, y) + \frac{\omega}{\pi} u(0, y) + \hat{u}_{yy}(\omega, y) = 0$

$\hat{u}_{yy}(\omega, y) - \omega^2 \hat{u}(\omega, y) = -\frac{\omega}{\pi} f(y), \qquad \hat{u}(\omega, 0) = \hat{u}(\omega, 1) = 0$

This is an inhomogeneous, ordinary differential equation that we can solve with Green functions. The homogeneous solutions are

$\cosh(\omega y), \qquad \sinh(\omega y).$

The homogeneous solutions that satisfy the left and right boundary conditions are

$y_1 = \sinh(\omega y), \qquad y_2 = \sinh(\omega (y - 1)).$

The Wronskian of these two solutions is,

$W(y) = \begin{vmatrix} \sinh(\omega y) & \sinh(\omega (y - 1)) \\ \omega \cosh(\omega y) & \omega \cosh(\omega (y - 1)) \end{vmatrix} = \omega \left( \sinh(\omega y) \cosh(\omega (y - 1)) - \cosh(\omega y) \sinh(\omega (y - 1)) \right) = \omega \sinh(\omega).$

The Green function is

$G(y | \eta) = \frac{\sinh(\omega y_<) \sinh(\omega (y_> - 1))}{\omega \sinh(\omega)}.$

The solution of the ordinary differential equation for $\hat{u}(\omega, y)$ is

$\hat{u}(\omega, y) = -\frac{\omega}{\pi} \int_0^1 f(\eta) G(y | \eta) \, d\eta$

$= -\frac{1}{\pi} \int_0^y f(\eta) \frac{\sinh(\omega \eta) \sinh(\omega (y - 1))}{\sinh(\omega)} \, d\eta - \frac{1}{\pi} \int_y^1 f(\eta) \frac{\sinh(\omega y) \sinh(\omega (\eta - 1))}{\sinh(\omega)} \, d\eta.$

With some uninteresting grunge, you can show that,

$-2 \int_0^{\infty} \frac{\sinh(\omega \eta) \sinh(\omega (y - 1))}{\sinh(\omega)} \sin(\omega x) \, d\omega = 2 \frac{\sin(\pi \eta) \sin(\pi y)}{(\cosh(\pi x) - \cos(\pi (y - \eta)))(\cosh(\pi x) - \cos(\pi (y + \eta)))}.$

Taking the inverse Fourier sine transform of $\hat{u}(\omega, y)$ and interchanging the order of integration yields,

$u(x, y) = \frac{2}{\pi} \int_0^y f(\eta) \frac{\sin(\pi \eta) \sin(\pi y)}{(\cosh(\pi x) - \cos(\pi (y - \eta)))(\cosh(\pi x) - \cos(\pi (y + \eta)))} \, d\eta + \frac{2}{\pi} \int_y^1 f(\eta) \frac{\sin(\pi y) \sin(\pi \eta)}{(\cosh(\pi x) - \cos(\pi (\eta - y)))(\cosh(\pi x) - \cos(\pi (\eta + y)))} \, d\eta.$

$u(x, y) = \frac{2}{\pi} \int_0^1 f(\eta) \frac{\sin(\pi \eta) \sin(\pi y)}{(\cosh(\pi x) - \cos(\pi (y - \eta)))(\cosh(\pi x) - \cos(\pi (y + \eta)))} \, d\eta$
Solution 45.11
The problem for $u(x, y)$ is,

$u_{xx} + u_{yy} = 0, \qquad -\infty < x < \infty, \quad y > 0,$

$u(x, 0) = g(x).$

We take the Fourier transform of the partial differential equation and the boundary condition.

$-\omega^2 \hat{u}(\omega, y) + \hat{u}_{yy}(\omega, y) = 0, \qquad \hat{u}(\omega, 0) = \hat{g}(\omega).$

This is an ordinary differential equation for $\hat{u}(\omega, y)$. So far we only have one boundary condition. In order that $u$ is bounded we impose the second boundary condition that $\hat{u}(\omega, y)$ is bounded as $y \to \infty$. The general solution of the differential equation is

$\hat{u}(\omega, y) = \begin{cases} c_1(\omega) e^{\omega y} + c_2(\omega) e^{-\omega y}, & \text{for } \omega \neq 0, \\ c_1(\omega) + c_2(\omega) y, & \text{for } \omega = 0. \end{cases}$

Note that $e^{\omega y}$ is the bounded solution for $\omega < 0$, $1$ is the bounded solution for $\omega = 0$ and $e^{-\omega y}$ is the bounded solution for $\omega > 0$. Thus the bounded solution is

$\hat{u}(\omega, y) = c(\omega) e^{-|\omega| y}.$

The boundary condition at $y = 0$ determines the constant of integration.

$\hat{u}(\omega, y) = \hat{g}(\omega) e^{-|\omega| y}$

Now we take the inverse Fourier transform to obtain the solution for $u(x, y)$. To do this we use the Fourier transform pair,

$\mathcal{F}\left[ \frac{2c}{x^2 + c^2} \right] = e^{-c |\omega|},$

and the convolution theorem,

$\mathcal{F}\left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} f(\xi) g(x - \xi) \, d\xi \right] = \hat{f}(\omega) \hat{g}(\omega).$

$u(x, y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(\xi) \frac{2y}{(x - \xi)^2 + y^2} \, d\xi.$
Solution 45.12
Since the derivative of $u$ is specified at $x = 0$, we take the cosine transform of the partial differential equation and the initial condition.

$\hat{u}_t(\omega, t) = \left( -\omega^2 \hat{u}(\omega, t) - \frac{1}{\pi} u_x(0, t) \right) - a^2 \hat{u}(\omega, t), \qquad \hat{u}(\omega, 0) = 0$

$\hat{u}_t + (\omega^2 + a^2) \hat{u} = -\frac{1}{\pi} f(t), \qquad \hat{u}(\omega, 0) = 0$

This first order, ordinary differential equation for $\hat{u}(\omega, t)$ has the solution,

$\hat{u}(\omega, t) = -\frac{1}{\pi} \int_0^t e^{-(\omega^2 + a^2)(t - \tau)} f(\tau) \, d\tau.$

We take the inverse Fourier cosine transform to find the solution $u(x, t)$.

$u(x, t) = -\frac{1}{\pi} \mathcal{F}_c^{-1}\left[ \int_0^t e^{-(\omega^2 + a^2)(t - \tau)} f(\tau) \, d\tau \right]$

$u(x, t) = -\frac{1}{\pi} \int_0^t \mathcal{F}_c^{-1}\left[ e^{-\omega^2 (t - \tau)} \right] e^{-a^2 (t - \tau)} f(\tau) \, d\tau$

$u(x, t) = -\frac{1}{\pi} \int_0^t \sqrt{\frac{\pi}{t - \tau}} \, e^{-x^2/(4 (t - \tau))} \, e^{-a^2 (t - \tau)} f(\tau) \, d\tau$

$u(x, t) = -\frac{1}{\sqrt{\pi}} \int_0^t \frac{e^{-x^2/(4 (t - \tau)) - a^2 (t - \tau)}}{\sqrt{t - \tau}} \, f(\tau) \, d\tau$
Solution 45.13
Mathematically stated we have

$u_{tt} = c^2 u_{xx}, \qquad 0 < x < L, \quad t > 0,$

$u(x, 0) = u_t(x, 0) = 0,$

$u(0, t) = f(t), \qquad u(L, t) = 0.$

We take the Laplace transform of the partial differential equation and the boundary conditions.

$s^2 \hat{u}(x, s) - s u(x, 0) - u_t(x, 0) = c^2 \hat{u}_{xx}(x, s)$

$\hat{u}_{xx} = \frac{s^2}{c^2} \hat{u}, \qquad \hat{u}(0, s) = \hat{f}(s), \quad \hat{u}(L, s) = 0$

Now we have an ordinary differential equation. A set of solutions is

$\left\{ \cosh\left( \frac{s x}{c} \right), \; \sinh\left( \frac{s x}{c} \right) \right\}.$

The solution that satisfies the right boundary condition is

$\hat{u} = a \sinh\left( \frac{s (L - x)}{c} \right).$

The left boundary condition determines the multiplicative constant.

$\hat{u}(x, s) = \hat{f}(s) \frac{\sinh(s (L - x)/c)}{\sinh(s L / c)}$

If we can find the inverse Laplace transform of

$\hat{u}(x, s) = \frac{\sinh(s (L - x)/c)}{\sinh(s L / c)}$

then we can use the convolution theorem to write $u$ in terms of a single integral. We proceed by expanding this function in a sum.

$\frac{\sinh(s (L - x)/c)}{\sinh(s L / c)} = \frac{e^{s (L - x)/c} - e^{-s (L - x)/c}}{e^{s L / c} - e^{-s L / c}}$

$= \frac{e^{-s x / c} - e^{-s (2L - x)/c}}{1 - e^{-2 s L / c}}$

$= \left( e^{-s x / c} - e^{-s (2L - x)/c} \right) \sum_{n=0}^{\infty} e^{-2 n s L / c}$

$= \sum_{n=0}^{\infty} e^{-s (2 n L + x)/c} - \sum_{n=0}^{\infty} e^{-s (2 (n + 1) L - x)/c}$

$= \sum_{n=0}^{\infty} e^{-s (2 n L + x)/c} - \sum_{n=1}^{\infty} e^{-s (2 n L - x)/c}$

Now we use the Laplace transform pair:

$\mathcal{L}[\delta(t - a)] = e^{-s a}.$

$\mathcal{L}^{-1}\left[ \frac{\sinh(s (L - x)/c)}{\sinh(s L / c)} \right] = \sum_{n=0}^{\infty} \delta(t - (2 n L + x)/c) - \sum_{n=1}^{\infty} \delta(t - (2 n L - x)/c)$

We write $\hat{u}$ in the form,

$\hat{u}(x, s) = \mathcal{L}[f(t)] \, \mathcal{L}\left[ \sum_{n=0}^{\infty} \delta(t - (2 n L + x)/c) - \sum_{n=1}^{\infty} \delta(t - (2 n L - x)/c) \right].$

By the convolution theorem we have

$u(x, t) = \int_0^t f(\tau) \left( \sum_{n=0}^{\infty} \delta(t - \tau - (2 n L + x)/c) - \sum_{n=1}^{\infty} \delta(t - \tau - (2 n L - x)/c) \right) d\tau.$

We can simplify this a bit. First we determine which Dirac delta functions have their singularities in the range $(0 \ldots t)$. For the first sum, this condition is

$0 < t - (2 n L + x)/c < t.$

The right inequality is always satisfied. The left inequality becomes

$(2 n L + x)/c < t,$

$n < \frac{c t - x}{2 L}.$

For the second sum, the condition is

$0 < t - (2 n L - x)/c < t.$

Again the right inequality is always satisfied. The left inequality becomes

$n < \frac{c t + x}{2 L}.$

We change the index range to reflect the nonzero contributions and do the integration.

$u(x, t) = \int_0^t f(\tau) \left( \sum_{n=0}^{\lfloor (c t - x)/(2L) \rfloor} \delta(t - \tau - (2 n L + x)/c) - \sum_{n=1}^{\lfloor (c t + x)/(2L) \rfloor} \delta(t - \tau - (2 n L - x)/c) \right) d\tau$

$u(x, t) = \sum_{n=0}^{\lfloor (c t - x)/(2L) \rfloor} f(t - (2 n L + x)/c) - \sum_{n=1}^{\lfloor (c t + x)/(2L) \rfloor} f(t - (2 n L - x)/c)$
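The finite reflection sum is easy to implement and to check against the boundary conditions; this sketch is an addition, with $L$, $c$, the end motion $f$, and the test time chosen arbitrarily ($f(\tau) = 0$ for $\tau < 0$, consistent with the quiescent initial state).

```python
import math

def string_solution(f, x, t, L, c):
    # sum over direct and reflected signals; f(tau) = 0 for tau < 0
    total = 0.0
    for n in range(0, int(math.floor((c*t - x)/(2*L))) + 1):
        total += f(t - (2*n*L + x)/c)
    for n in range(1, int(math.floor((c*t + x)/(2*L))) + 1):
        total -= f(t - (2*n*L - x)/c)
    return total

L, c = 1.0, 1.0
f = lambda tau: math.sin(tau)**2 if tau > 0 else 0.0

print(string_solution(f, 0.0, 2.7, L, c), f(2.7))   # reproduces the end motion
print(string_solution(f, L, 2.7, L, c))             # fixed end stays at zero
```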
Solution 45.14
We take the Fourier transform in $x$ of the partial differential equation and the boundary conditions.

$-\omega^2 \hat{\phi} + \hat{\phi}_{yy} = 0, \qquad \hat{\phi}(\omega, 0) = \frac{1}{2\pi} e^{-i \omega \xi}, \qquad \hat{\phi}(\omega, l) = 0$

We solve this boundary value problem.

$\hat{\phi}(\omega, y) = c_1 \cosh(\omega (l - y)) + c_2 \sinh(\omega (l - y))$

$\hat{\phi}(\omega, y) = \frac{1}{2\pi} e^{-i \omega \xi} \frac{\sinh(\omega (l - y))}{\sinh(\omega l)}$

We take the inverse Fourier transform to obtain an expression for the solution.

$\phi(x, y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i \omega (x - \xi)} \frac{\sinh(\omega (l - y))}{\sinh(\omega l)} \, d\omega$
Chapter 46
Green Functions
46.1 Inhomogeneous Equations and Homogeneous Boundary Conditions

Consider a linear differential equation on the domain $\Omega$ subject to homogeneous boundary conditions.

$L[u(x)] = f(x) \text{ for } x \in \Omega, \qquad B[u(x)] = 0 \text{ for } x \in \partial\Omega \tag{46.1}$

For example, $L[u]$ might be

$L[u] = u_t - \Delta u, \qquad \text{or} \qquad L[u] = u_{tt} - c^2 \Delta u,$

and $B[u]$ might be $u = 0$, or $\nabla u \cdot \hat{n} = 0$.

If we find a Green function $G(x; \xi)$ that satisfies

$L[G(x; \xi)] = \delta(x - \xi), \qquad B[G(x; \xi)] = 0$
then the solution to Equation 46.1 is

$u(x) = \int_{\Omega} G(x; \xi) f(\xi) \, d\xi.$

We verify that this solution satisfies the equation and boundary condition.

$L[u(x)] = \int_{\Omega} L[G(x; \xi)] f(\xi) \, d\xi = \int_{\Omega} \delta(x - \xi) f(\xi) \, d\xi = f(x)$

$B[u(x)] = \int_{\Omega} B[G(x; \xi)] f(\xi) \, d\xi = \int_{\Omega} 0 \cdot f(\xi) \, d\xi = 0$
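A one-dimensional illustration (an addition, not from the text): for $L[u] = u''$ on $(0, 1)$ with $u(0) = u(1) = 0$, the Green function is $G(x; \xi) = x_< (x_> - 1)$, and the superposition integral reproduces the known solution of $u'' = 1$.

```python
import numpy as np
from scipy.integrate import quad

def G(x, xi):
    # Green function for u'' = delta(x - xi), u(0) = u(1) = 0
    return min(x, xi) * (max(x, xi) - 1.0)

def solve(f, x):
    # u(x) = integral of G(x; xi) f(xi) over the domain
    val, _ = quad(lambda xi: G(x, xi)*f(xi), 0.0, 1.0, points=[x])
    return val

f = lambda x: 1.0
exact = lambda x: 0.5*x*(x - 1.0)   # solves u'' = 1, u(0) = u(1) = 0
print(solve(f, 0.3), exact(0.3))
```

Note also the symmetry $G(x; \xi) = G(\xi; x)$, visible directly in the formula.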
46.2 Homogeneous Equations and Inhomogeneous Boundary Conditions

Consider a homogeneous linear differential equation on the domain $\Omega$ subject to inhomogeneous boundary conditions,

$L[u(x)] = 0 \text{ for } x \in \Omega, \qquad B[u(x)] = h(x) \text{ for } x \in \partial\Omega. \tag{46.2}$

If we find a Green function $g(x; \xi)$ that satisfies

$L[g(x; \xi)] = 0, \qquad B[g(x; \xi)] = \delta(x - \xi)$

then the solution to Equation 46.2 is

$u(x) = \int_{\partial\Omega} g(x; \xi) h(\xi) \, d\xi.$

We verify that this solution satisfies the equation and boundary condition.

$L[u(x)] = \int_{\partial\Omega} L[g(x; \xi)] h(\xi) \, d\xi = \int_{\partial\Omega} 0 \cdot h(\xi) \, d\xi = 0$

$B[u(x)] = \int_{\partial\Omega} B[g(x; \xi)] h(\xi) \, d\xi = \int_{\partial\Omega} \delta(x - \xi) h(\xi) \, d\xi = h(x)$

Example 46.2.1 Consider the Cauchy problem for the homogeneous heat equation.

$u_t = u_{xx}, \qquad -\infty < x < \infty, \quad t > 0$

$u(x, 0) = h(x), \qquad u(\pm\infty, t) = 0$

We find a Green function that satisfies

$g_t = g_{xx}, \qquad -\infty < x < \infty, \quad t > 0$

$g(x, 0; \xi) = \delta(x - \xi), \qquad g(\pm\infty, t; \xi) = 0.$

Then we write the solution

$u(x, t) = \int_{-\infty}^{\infty} g(x, t; \xi) h(\xi) \, d\xi.$
To find the Green function for this problem, we apply a Fourier transform to the equation and boundary condition for $g$.

$\hat{g}_t = -\omega^2 \hat{g}, \qquad \hat{g}(\omega, 0; \xi) = \mathcal{F}[\delta(x - \xi)]$

$\hat{g}(\omega, t; \xi) = \mathcal{F}[\delta(x - \xi)] \, e^{-\omega^2 t}$

$\hat{g}(\omega, t; \xi) = \mathcal{F}[\delta(x - \xi)] \, \mathcal{F}\left[ \sqrt{\frac{\pi}{t}} \exp\left( -\frac{x^2}{4t} \right) \right]$

We invert using the convolution theorem.

$g(x, t; \xi) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \delta(\eta - \xi) \sqrt{\frac{\pi}{t}} \exp\left( -\frac{(x - \eta)^2}{4t} \right) d\eta = \frac{1}{\sqrt{4 \pi t}} \exp\left( -\frac{(x - \xi)^2}{4t} \right)$

The solution of the heat equation is

$u(x, t) = \frac{1}{\sqrt{4 \pi t}} \int_{-\infty}^{\infty} \exp\left( -\frac{(x - \xi)^2}{4t} \right) h(\xi) \, d\xi.$
46.3 Eigenfunction Expansions for Elliptic Equations

Consider a Green function problem for an elliptic equation on a finite domain.

$L[G] = \delta(x - \xi), \qquad x \in \Omega \tag{46.3}$

$B[G] = 0, \qquad x \in \partial\Omega$

Let the set of functions $\{\phi_n\}$ be orthonormal and complete on $\Omega$. (Here $n$ is the multi-index $n = n_1, \ldots, n_d$.)

$\int_{\Omega} \phi_n(x) \overline{\phi_m(x)} \, dx = \delta_{nm}$

In addition, let the $\phi_n$ be eigenfunctions of $L$ subject to the homogeneous boundary conditions.

$L[\phi_n] = \lambda_n \phi_n, \qquad B[\phi_n] = 0$

We expand the Green function in the eigenfunctions.

$G = \sum_n g_n \phi_n(x)$

Then we expand the Dirac Delta function.

$\delta(x - \xi) = \sum_n d_n \phi_n(x)$

$d_n = \int_{\Omega} \overline{\phi_n(x)} \delta(x - \xi) \, dx = \overline{\phi_n(\xi)}$

We substitute the series expansions for the Green function and the Dirac Delta function into Equation 46.3.

$\sum_n g_n \lambda_n \phi_n(x) = \sum_n \overline{\phi_n(\xi)} \phi_n(x)$

We equate coefficients to solve for the $g_n$ and hence determine the Green function.

$g_n = \frac{\overline{\phi_n(\xi)}}{\lambda_n}$

$G(x; \xi) = \sum_n \frac{\overline{\phi_n(\xi)} \phi_n(x)}{\lambda_n}$

Example 46.3.1 Consider the Green function for the reduced wave equation, $\Delta u - k^2 u$, in the rectangle, $0 \leq x \leq a$, $0 \leq y \leq b$, and vanishing on the sides.
First we find the eigenfunctions of the operator $L = \Delta - k^2$. Note that $\phi = X(x) Y(y)$ is an eigenfunction of $L$ if $X$ is an eigenfunction of $\frac{\partial^2}{\partial x^2}$ and $Y$ is an eigenfunction of $\frac{\partial^2}{\partial y^2}$. Thus we consider the two regular Sturm-Liouville eigenvalue problems:

$X'' = \lambda X, \qquad X(0) = X(a) = 0$

$Y'' = \lambda Y, \qquad Y(0) = Y(b) = 0$

This leads us to the eigenfunctions

$\phi_{mn} = \sin\left( \frac{m \pi x}{a} \right) \sin\left( \frac{n \pi y}{b} \right).$

We use the orthogonality relation

$\int_0^a \sin\left( \frac{m \pi x}{a} \right) \sin\left( \frac{n \pi x}{a} \right) dx = \frac{a}{2} \delta_{mn}$

to make the eigenfunctions orthonormal.

$\phi_{mn} = \frac{2}{\sqrt{a b}} \sin\left( \frac{m \pi x}{a} \right) \sin\left( \frac{n \pi y}{b} \right), \qquad m, n \in \mathbb{Z}^+$

The $\phi_{mn}$ are eigenfunctions of $L$.

$L[\phi_{mn}] = -\left( \left( \frac{m \pi}{a} \right)^2 + \left( \frac{n \pi}{b} \right)^2 + k^2 \right) \phi_{mn}$

By expanding the Green function and the Dirac Delta function in the $\phi_{mn}$ and substituting into the differential equation we obtain the solution.

$G = \sum_{m,n=1}^{\infty} \frac{ \frac{2}{\sqrt{a b}} \sin\left( \frac{m \pi \xi}{a} \right) \sin\left( \frac{n \pi \eta}{b} \right) \, \frac{2}{\sqrt{a b}} \sin\left( \frac{m \pi x}{a} \right) \sin\left( \frac{n \pi y}{b} \right) }{ -\left( \left( \frac{m \pi}{a} \right)^2 + \left( \frac{n \pi}{b} \right)^2 + k^2 \right) }$

$G(x, y; \xi, \eta) = -4 a b \sum_{m,n=1}^{\infty} \frac{ \sin\left( \frac{m \pi x}{a} \right) \sin\left( \frac{m \pi \xi}{a} \right) \sin\left( \frac{n \pi y}{b} \right) \sin\left( \frac{n \pi \eta}{b} \right) }{ (m \pi b)^2 + (n \pi a)^2 + (k a b)^2 }$
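The same mode-by-mode division underlying the expansion can be exercised numerically: expand a source $f$ in the sine eigenfunctions, divide each coefficient by $\lambda_{mn}$, and confirm by finite differences that the resulting $u$ satisfies $(\Delta - k^2) u = f$. This sketch is an addition; the rectangle dimensions, $k$, and the polynomial source (whose sine coefficients are known in closed form) are arbitrary choices.

```python
import numpy as np

a, b, k = 1.0, 1.5, 2.0
f = lambda x, y: x*(a - x)*y*(b - y)

def u(x, y, M=40):
    # u = sum over modes of (coefficient of f) / lambda_mn * phi_mn;
    # for this f only odd modes appear, with coefficients 8 a^2 / (pi^3 m^3) etc.
    s = 0.0
    for m in range(1, M+1, 2):
        for n in range(1, M+1, 2):
            c = (8*a**2/(np.pi**3*m**3)) * (8*b**2/(np.pi**3*n**3))
            lam = -((m*np.pi/a)**2 + (n*np.pi/b)**2 + k**2)
            s += c/lam * np.sin(m*np.pi*x/a)*np.sin(n*np.pi*y/b)
    return s

h = 1e-3
x0, y0 = 0.4, 0.6
lap = (u(x0+h, y0) + u(x0-h, y0) + u(x0, y0+h) + u(x0, y0-h) - 4*u(x0, y0))/h**2
print(lap - k**2*u(x0, y0), f(x0, y0))   # the two numbers should agree
```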
Example 46.3.2 Consider the Green function for Laplace's equation, $\Delta u = 0$ in the disk, $|r| < a$, and vanishing at $r = a$.

First we find the eigenfunctions of the operator

$\Delta = \frac{\partial^2}{\partial r^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}.$

We will look for eigenfunctions of the form $\phi = \Theta(\theta) R(r)$. We choose the $\Theta$ to be eigenfunctions of $\frac{d^2}{d\theta^2}$ subject to the periodic boundary conditions in $\theta$.

$\Theta'' = \lambda \Theta, \qquad \Theta(0) = \Theta(2\pi), \quad \Theta'(0) = \Theta'(2\pi)$

$\Theta_n = e^{i n \theta}, \qquad n \in \mathbb{Z}$

We determine $R(r)$ by requiring that $\phi$ be an eigenfunction of $\Delta$.

$\Delta \phi = \lambda \phi$

$(\Theta_n R)_{rr} + \frac{1}{r} (\Theta_n R)_r + \frac{1}{r^2} (\Theta_n R)_{\theta\theta} = \lambda \Theta_n R$

$\Theta_n R'' + \frac{1}{r} \Theta_n R' + \frac{1}{r^2} (-n^2) \Theta_n R = \lambda \Theta_n R$

For notational convenience, we denote $\lambda = -\mu^2$.

$R'' + \frac{1}{r} R' + \left( \mu^2 - \frac{n^2}{r^2} \right) R = 0, \qquad R(0) \text{ bounded}, \quad R(a) = 0$

The general solution for $R$ is

$R = c_1 J_n(\mu r) + c_2 Y_n(\mu r).$

The left boundary condition demands that $c_2 = 0$. The right boundary condition determines the eigenvalues.

$R_{nm} = J_n\left( \frac{j_{n,m} r}{a} \right), \qquad \mu_{nm} = \frac{j_{n,m}}{a}$

Here $j_{n,m}$ is the $m^{\text{th}}$ positive root of $J_n$. This leads us to the eigenfunctions

$\phi_{nm} = e^{i n \theta} J_n\left( \frac{j_{n,m} r}{a} \right)$

We use the orthogonality relations

$\int_0^{2\pi} e^{-i m \theta} e^{i n \theta} \, d\theta = 2\pi \delta_{mn},$

$\int_0^1 r J_\nu(j_{\nu,m} r) J_\nu(j_{\nu,n} r) \, dr = \frac{1}{2} \left( J_\nu'(j_{\nu,n}) \right)^2 \delta_{mn}$

to make the eigenfunctions orthonormal.

$\phi_{nm} = \frac{1}{\sqrt{\pi} \, a \, |J_n'(j_{n,m})|} \, e^{i n \theta} J_n\left( \frac{j_{n,m} r}{a} \right), \qquad n \in \mathbb{Z}, \quad m \in \mathbb{Z}^+$

The $\phi_{nm}$ are eigenfunctions of $\Delta$.

$\Delta \phi_{nm} = -\left( \frac{j_{n,m}}{a} \right)^2 \phi_{nm}$

By expanding the Green function and the Dirac Delta function in the $\phi_{nm}$ and substituting into the differential equation we obtain the solution.

$G = \sum_{n=-\infty}^{\infty} \sum_{m=1}^{\infty} \frac{ \overline{ \frac{e^{i n \vartheta} J_n\left( j_{n,m} \rho / a \right)}{\sqrt{\pi} \, a \, |J_n'(j_{n,m})|} } \; \frac{e^{i n \theta} J_n\left( j_{n,m} r / a \right)}{\sqrt{\pi} \, a \, |J_n'(j_{n,m})|} }{ -\left( j_{n,m} / a \right)^2 }$

$G(r, \theta; \rho, \vartheta) = -\sum_{n=-\infty}^{\infty} \sum_{m=1}^{\infty} \frac{1}{\pi \left( j_{n,m} J_n'(j_{n,m}) \right)^2} \, e^{i n (\theta - \vartheta)} J_n\left( \frac{j_{n,m} \rho}{a} \right) J_n\left( \frac{j_{n,m} r}{a} \right)$
46.4 The Method of Images

Consider the problem

$\nabla^2 u = f(x, y), \qquad -\infty < x < \infty, \quad y > 0$

$u(x, 0) = 0, \qquad u(x, y) \to 0 \text{ as } x^2 + y^2 \to \infty.$

The equations for the Green function are

$\nabla^2 g = \delta(x - \xi) \delta(y - \eta), \qquad -\infty < x < \infty, \quad y > 0$

$g(x, 0; \xi, \eta) = 0, \qquad g(x, y; \xi, \eta) \to 0 \text{ as } x^2 + y^2 \to \infty.$

To solve this problem we will use the method of images. We expand the domain to include the lower half plane and solve the problem

$\nabla^2 g = \delta(x - \xi) \delta(y - \eta) - \delta(x - \xi) \delta(y + \eta), \qquad -\infty < x, \, y < \infty, \quad \eta > 0$

$g(x, y; \xi, \eta) \to 0 \text{ as } x^2 + y^2 \to \infty.$

Because of symmetry, $g$ is zero for $y = 0$.

We solve the differential equation using the infinite space Green functions.

$g = \frac{1}{4\pi} \log\left( (x - \xi)^2 + (y - \eta)^2 \right) - \frac{1}{4\pi} \log\left( (x - \xi)^2 + (y + \eta)^2 \right)$

$= \frac{1}{4\pi} \log\left( \frac{(x - \xi)^2 + (y - \eta)^2}{(x - \xi)^2 + (y + \eta)^2} \right)$

Thus we can write the solution

$u(x, y) = \int_0^{\infty} \int_{-\infty}^{\infty} \frac{1}{4\pi} \log\left( \frac{(x - \xi)^2 + (y - \eta)^2}{(x - \xi)^2 + (y + \eta)^2} \right) f(\xi, \eta) \, d\xi \, d\eta.$
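The image construction's two defining properties, vanishing on the boundary $y = 0$ and harmonicity away from the source, can be checked directly; this sketch is an addition, with the source location and test points chosen arbitrarily.

```python
import numpy as np

def g(x, y, xi, eta):
    # half-plane Green function: free-space source at (xi, eta), image at (xi, -eta)
    return (np.log((x - xi)**2 + (y - eta)**2)
            - np.log((x - xi)**2 + (y + eta)**2)) / (4*np.pi)

xi, eta = 0.5, 1.0
print(g(2.0, 0.0, xi, eta))      # exactly zero on the boundary y = 0

# discrete Laplacian away from the source should be ~ 0
h = 1e-3
x0, y0 = 1.5, 2.0
lap = (g(x0+h, y0, xi, eta) + g(x0-h, y0, xi, eta)
       + g(x0, y0+h, xi, eta) + g(x0, y0-h, xi, eta)
       - 4*g(x0, y0, xi, eta)) / h**2
print(lap)
```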
46.5 Exercises

Exercise 46.1
Derive the causal Green function for the one dimensional wave equation on $(-\infty \ldots \infty)$. That is, solve

$G_{tt} - c^2 G_{xx} = \delta(x - \xi) \delta(t - \tau),$

$G(x, t; \xi, \tau) = 0 \text{ for } t < \tau.$
Exercise 46.2
By reducing the problem to a series of one dimensional Green function problems, determine $G(x, \xi)$ if

$\nabla^2 G = \delta(x - \xi)$

(a) on the rectangle $0 < x < L$, $0 < y < H$ and

$G(0, y; \xi, \eta) = G_x(L, y; \xi, \eta) = G_y(x, 0; \xi, \eta) = G_y(x, H; \xi, \eta) = 0$

(b) on the box $0 < x < L$, $0 < y < H$, $0 < z < W$ with $G = 0$ on the boundary.

(c) on the semi-circle $0 < r < a$, $0 < \theta < \pi$ with $G = 0$ on the boundary.

(d) on the quarter-circle $0 < r < a$, $0 < \theta < \pi/2$ with $G = 0$ on the straight sides and $G_r = 0$ at $r = a$.
Exercise 46.3
Using the method of multi-dimensional eigenfunction expansions, determine $G(x, x_0)$ if

$\nabla^2 G = \delta(x - x_0)$

and

(a) on the rectangle ($0 < x < L$, $0 < y < H$),

$G = 0$ at $x = 0$, $\qquad \frac{\partial G}{\partial y} = 0$ at $y = 0$,

$\frac{\partial G}{\partial x} = 0$ at $x = L$, $\qquad \frac{\partial G}{\partial y} = 0$ at $y = H$

(b) on the rectangular shaped box ($0 < x < L$, $0 < y < H$, $0 < z < W$) with $G = 0$ on the six sides.

(c) on the semi-circle ($0 < r < a$, $0 < \theta < \pi$) with $G = 0$ on the entire boundary.

(d) on the quarter-circle ($0 < r < a$, $0 < \theta < \pi/2$) with $G = 0$ on the straight sides and $\partial G / \partial r = 0$ at $r = a$.

Exercise 46.4
Using the method of images solve

$\nabla^2 G = \delta(x - x_0)$

in the first quadrant ($x \geq 0$ and $y \geq 0$) with $G = 0$ at $x = 0$ and $\partial G / \partial y = 0$ at $y = 0$. Use the Green function to solve in the first quadrant

$\nabla^2 u = 0$

$u(0, y) = g(y)$

$u_y(x, 0) = h(x).$
Exercise 46.5
Consider the wave equation defined on the half-line $x > 0$:

$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2} + Q(x, t),$

$u(x, 0) = f(x)$

$u_t(x, 0) = g(x)$

$u(0, t) = h(t)$

(a) Determine the appropriate Green's function using the method of images.

(b) Solve for $u(x, t)$ if $Q(x, t) = 0$, $f(x) = 0$, and $g(x) = 0$.

(c) For what values of $t$ does $h(t)$ influence $u(x_1, t_1)$? Interpret this result physically.
Exercise 46.6
Derive the Green functions for the one dimensional wave equation on $(-\infty \ldots \infty)$ for non-homogeneous initial conditions. Solve the two problems

$g_{tt} - c^2 g_{xx} = 0, \qquad g(x, 0; \xi) = \delta(x - \xi), \quad g_t(x, 0; \xi) = 0,$

$\gamma_{tt} - c^2 \gamma_{xx} = 0, \qquad \gamma(x, 0; \xi) = 0, \quad \gamma_t(x, 0; \xi) = \delta(x - \xi),$

using the Fourier transform.
Exercise 46.7
Use the Green functions from Problem 46.1 and Problem 46.6 to solve

$u_{tt} - c^2 u_{xx} = f(x, t), \qquad -\infty < x < \infty, \quad t > 0,$

$u(x, 0) = p(x), \qquad u_t(x, 0) = q(x).$

Use the solution to determine the domain of dependence of the solution.
Exercise 46.8
Show that the Green function for the reduced wave equation, $\Delta u - k^2 u = 0$ in the rectangle, $0 \leq x \leq a$, $0 \leq y \leq b$, and vanishing on the sides is:

$G(x, y; \xi, \eta) = \frac{2}{a} \sum_{n=1}^{\infty} \frac{\sinh(\lambda_n y_<) \sinh(\lambda_n (y_> - b))}{\lambda_n \sinh(\lambda_n b)} \sin\left( \frac{n \pi x}{a} \right) \sin\left( \frac{n \pi \xi}{a} \right),$

where

$\lambda_n = \sqrt{k^2 + \frac{n^2 \pi^2}{a^2}}.$
Exercise 46.9
Find the Green function for the reduced wave equation, $\Delta u - k^2 u = 0$, in the quarter plane: $0 < x < \infty$, $0 < y < \infty$, subject to the mixed boundary conditions:
\[ u(x, 0) = 0, \qquad u_x(0, y) = 0. \]
Find two distinct integral representations for $G(x, y; \xi, \eta)$.
Exercise 46.10
Show that in polar coordinates the Green function for $\Delta u = 0$ in the infinite sector, $0 < \theta < \alpha$, $0 < r < \infty$, and vanishing on the sides is given by,
\[ G(r, \theta, \rho, \vartheta) = \frac{1}{4\pi} \log\left[ \frac{\cosh\left(\frac{\pi}{\alpha} \log\frac{r}{\rho}\right) - \cos\left(\frac{\pi}{\alpha}(\theta - \vartheta)\right)}{\cosh\left(\frac{\pi}{\alpha} \log\frac{r}{\rho}\right) - \cos\left(\frac{\pi}{\alpha}(\theta + \vartheta)\right)} \right]. \]
Use this to find the harmonic function $u(r, \theta)$ in the given sector which takes on the boundary values:
\[ u(r, 0) = u(r, \alpha) = \begin{cases} 0 & \text{for } r < c \\ 1 & \text{for } r > c. \end{cases} \]
Exercise 46.11
The Green function for the initial value problem,
\[ u_t - \kappa u_{xx} = 0, \qquad u(x, 0) = f(x), \]
on $-\infty < x < \infty$ is
\[ G(x, t; \xi) = \frac{1}{\sqrt{4 \pi \kappa t}} \,\mathrm{e}^{-(x - \xi)^2/(4 \kappa t)}. \]
Use the method of images to find the corresponding Green function for the mixed initial-boundary problems:
i) $u_t = \kappa u_{xx}$, $u(x, 0) = f(x)$ for $x > 0$, $u(0, t) = 0$,
ii) $u_t = \kappa u_{xx}$, $u(x, 0) = f(x)$ for $x > 0$, $u_x(0, t) = 0$.
Exercise 46.12
Find the Green function (expansion) for the one dimensional wave equation $u_{tt} - c^2 u_{xx} = 0$ on the interval $0 < x < L$, subject to the boundary conditions:
a) $u(0, t) = u_x(L, t) = 0$,
b) $u_x(0, t) = u_x(L, t) = 0$.
Write the final forms in terms showing the propagation properties of the wave equation, i.e., with arguments $((x \pm \xi) \pm c(t - \tau))$.
Exercise 46.13
Solve, using the above determined Green function,
\[ u_{tt} - c^2 u_{xx} = 0, \qquad 0 < x < 1, \quad t > 0, \]
\[ u_x(0, t) = u_x(1, t) = 0, \]
\[ u(x, 0) = x^2 (1 - x)^2, \qquad u_t(x, 0) = 1. \]
For $c = 1$, find $u(x, t)$ at $x = 3/4$, $t = 7/2$.
46.6 Hints
Hint 46.1
Hint 46.2
Take a Fourier transform in $x$. This will give you an ordinary differential equation Green function problem for $\hat{G}$. Find the continuity and jump conditions at $t = \tau$. After solving for $\hat{G}$, do the inverse transform with the aid of a table.
Hint 46.3
Hint 46.4
Hint 46.5
Hint 46.6
Hint 46.7
Hint 46.8
Use Fourier sine and cosine transforms.
Hint 46.9
Use the conformal mapping $z = w^{\pi/\alpha}$ to map the sector to the upper half plane. The new problem will be
\[ G_{xx} + G_{yy} = \delta(x - \xi)\,\delta(y - \eta), \qquad -\infty < x < \infty, \quad 0 < y < \infty, \]
\[ G(x, 0, \xi, \eta) = 0, \qquad G(x, y, \xi, \eta) \to 0 \text{ as } x, y \to \infty. \]
Solve this problem with the image method.
Hint 46.10
Hint 46.11
Hint 46.12
46.7 Solutions
Solution 46.1
\[ G_{tt} - c^2 G_{xx} = \delta(x - \xi)\,\delta(t - \tau), \qquad G(x, t; \xi, \tau) = 0 \text{ for } t < \tau. \]
We take the Fourier transform in $x$.
\[ \hat{G}_{tt} + c^2 \omega^2 \hat{G} = \mathcal{F}[\delta(x - \xi)]\,\delta(t - \tau), \qquad \hat{G}(\omega, 0; \xi, \tau) = \hat{G}_t(\omega, 0; \xi, \tau) = 0 \]
Now we have an ordinary differential equation Green function problem for $\hat{G}$. We have written the causality condition, the Green function is zero for $t < \tau$, in terms of initial conditions. The homogeneous solutions of the ordinary differential equation are
\[ \cos(c \omega t), \quad \sin(c \omega t). \]
It will be handy to use the fundamental set of solutions at $t = \tau$:
\[ \left\{ \cos(c \omega (t - \tau)), \ \frac{1}{c \omega} \sin(c \omega (t - \tau)) \right\}. \]
We write the solution for $\hat{G}$ and invert using the convolution theorem.
\[ \hat{G} = \mathcal{F}[\delta(x - \xi)]\, H(t - \tau) \frac{1}{c \omega} \sin(c \omega (t - \tau)) \]
\[ \hat{G} = H(t - \tau)\, \mathcal{F}[\delta(x - \xi)]\, \mathcal{F}\!\left[ \frac{\pi}{c} H(c (t - \tau) - |x|) \right] \]
\[ G = H(t - \tau) \frac{\pi}{c} \frac{1}{2 \pi} \int_{-\infty}^{\infty} \delta(y - \xi)\, H(c (t - \tau) - |x - y|) \,\mathrm{d}y \]
\[ G = \frac{1}{2 c} H(t - \tau)\, H(c (t - \tau) - |x - \xi|) \]
\[ G = \frac{1}{2 c} H(c (t - \tau) - |x - \xi|) \]
The Green function for $\xi = \tau = 0$ and $c = 1$ is plotted in Figure 46.1 on the domain $x \in (-1 \ldots 1)$, $t \in (0 \ldots 1)$. The Green function is a displacement of height $\frac{1}{2c}$ that propagates out from the point $x = \xi$ in both directions with speed $c$. The Green function shows the range of influence of a disturbance at the point $x = \xi$ and time $t = \tau$. The disturbance influences the solution for all $\xi - c t < x < \xi + c t$ and $t > \tau$.
Figure 46.1: Green function for the wave equation.
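As a quick numerical sanity check (my own sketch, not part of the original text, with hypothetical helper names), the boxed Green function can be coded directly. The snippet below confirms the height-$\frac{1}{2c}$ plateau inside the cone $|x - \xi| < c(t - \tau)$, the vanishing outside it, and causality:

```python
def G(x, t, xi=0.0, tau=0.0, c=1.0):
    """Causal Green function for u_tt - c^2 u_xx = delta(x-xi) delta(t-tau):
    G = H(c(t-tau) - |x-xi|) / (2c)."""
    if t <= tau:
        return 0.0                        # causality: zero before the impulse
    inside = c * (t - tau) > abs(x - xi)  # inside the light cone?
    return (1.0 / (2 * c)) if inside else 0.0

c = 2.0
assert G(0.1, 1.0, c=c) == 1 / (2 * c)    # inside the cone: plateau of height 1/(2c)
assert G(3.0, 1.0, c=c) == 0.0            # outside the cone: |x - xi| > c(t - tau)
assert G(0.0, -1.0, c=c) == 0.0           # before the impulse: t < tau
```

The support growing as $\xi \pm ct$ is exactly the range of influence described above.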
Solution 46.2
1. We expand the Green function in eigenfunctions in $x$.
\[ G(x; \boldsymbol{\xi}) = \sum_{n=1}^{\infty} a_n(y) \sqrt{\frac{2}{L}} \sin\left( \frac{(2n-1) \pi x}{2L} \right) \]
We substitute the expansion into the differential equation.
\[ \sum_{n=1}^{\infty} \left( a_n''(y) - \left( \frac{(2n-1)\pi}{2L} \right)^2 a_n(y) \right) \sqrt{\frac{2}{L}} \sin\left( \frac{(2n-1)\pi x}{2L} \right) = \delta(x - \xi)\,\delta(y - \psi) \]
\[ \sum_{n=1}^{\infty} \left( a_n''(y) - \left( \frac{(2n-1)\pi}{2L} \right)^2 a_n(y) \right) \sqrt{\frac{2}{L}} \sin\left( \frac{(2n-1)\pi x}{2L} \right) = \delta(y - \psi) \sum_{n=1}^{\infty} \sqrt{\frac{2}{L}} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \sqrt{\frac{2}{L}} \sin\left( \frac{(2n-1)\pi x}{2L} \right) \]
\[ a_n''(y) - \left( \frac{(2n-1)\pi}{2L} \right)^2 a_n(y) = \sqrt{\frac{2}{L}} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \delta(y - \psi) \]
From the boundary conditions at $y = 0$ and $y = H$, we obtain boundary conditions for the $a_n(y)$.
\[ a_n'(0) = a_n'(H) = 0. \]
The solutions that satisfy the left and right boundary conditions are
\[ a_{n1} = \cosh\left( \frac{(2n-1)\pi y}{2L} \right), \qquad a_{n2} = \cosh\left( \frac{(2n-1)\pi (H - y)}{2L} \right). \]
The Wronskian of these solutions is
\[ W = -\frac{(2n-1)\pi}{2L} \sinh\left( \frac{(2n-1)\pi H}{2L} \right). \]
Thus the solution for $a_n(y)$ is
\[ a_n(y) = -\sqrt{\frac{2}{L}} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \frac{\cosh\left( \frac{(2n-1)\pi y_<}{2L} \right) \cosh\left( \frac{(2n-1)\pi (H - y_>)}{2L} \right)}{\frac{(2n-1)\pi}{2L} \sinh\left( \frac{(2n-1)\pi H}{2L} \right)} \]
\[ a_n(y) = -\frac{2\sqrt{2L}}{(2n-1)\pi} \operatorname{csch}\left( \frac{(2n-1)\pi H}{2L} \right) \cosh\left( \frac{(2n-1)\pi y_<}{2L} \right) \cosh\left( \frac{(2n-1)\pi (H - y_>)}{2L} \right) \sin\left( \frac{(2n-1)\pi \xi}{2L} \right). \]
This determines the Green function.
\[ G(x; \boldsymbol{\xi}) = -\frac{4}{\pi} \sum_{n=1}^{\infty} \frac{1}{2n-1} \operatorname{csch}\left( \frac{(2n-1)\pi H}{2L} \right) \cosh\left( \frac{(2n-1)\pi y_<}{2L} \right) \cosh\left( \frac{(2n-1)\pi (H - y_>)}{2L} \right) \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \sin\left( \frac{(2n-1)\pi x}{2L} \right) \]
2. We seek a solution of the form
\[ G(x; \boldsymbol{\xi}) = \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} a_{mn}(z) \frac{2}{\sqrt{LH}} \sin\left( \frac{m \pi x}{L} \right) \sin\left( \frac{n \pi y}{H} \right). \]
We substitute this into the differential equation.
\[ \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \left( a_{mn}''(z) - \left( \left( \frac{m\pi}{L} \right)^2 + \left( \frac{n\pi}{H} \right)^2 \right) a_{mn}(z) \right) \frac{2}{\sqrt{LH}} \sin\left( \frac{m\pi x}{L} \right) \sin\left( \frac{n\pi y}{H} \right) = \delta(x-\xi)\,\delta(y-\psi)\,\delta(z-\zeta) \]
\[ \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \left( a_{mn}''(z) - \left( \left( \frac{m\pi}{L} \right)^2 + \left( \frac{n\pi}{H} \right)^2 \right) a_{mn}(z) \right) \frac{2}{\sqrt{LH}} \sin\left( \frac{m\pi x}{L} \right) \sin\left( \frac{n\pi y}{H} \right) = \delta(z-\zeta) \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \frac{2}{\sqrt{LH}} \sin\left( \frac{m\pi\xi}{L} \right) \sin\left( \frac{n\pi\psi}{H} \right) \frac{2}{\sqrt{LH}} \sin\left( \frac{m\pi x}{L} \right) \sin\left( \frac{n\pi y}{H} \right) \]
\[ a_{mn}''(z) - \left( \left( \frac{m\pi}{L} \right)^2 + \left( \frac{n\pi}{H} \right)^2 \right) a_{mn}(z) = \frac{2}{\sqrt{LH}} \sin\left( \frac{m\pi\xi}{L} \right) \sin\left( \frac{n\pi\psi}{H} \right) \delta(z-\zeta) \]
From the boundary conditions on $G$, we obtain boundary conditions for the $a_{mn}$.
\[ a_{mn}(0) = a_{mn}(W) = 0 \]
The solutions that satisfy the left and right boundary conditions are
\[ a_{mn1} = \sinh(\sigma_{mn} z), \qquad a_{mn2} = \sinh(\sigma_{mn} (W - z)), \qquad \text{where } \sigma_{mn} = \sqrt{\left( \frac{m\pi}{L} \right)^2 + \left( \frac{n\pi}{H} \right)^2}. \]
The Wronskian of these solutions is
\[ W = -\sigma_{mn} \sinh(\sigma_{mn} W). \]
Thus the solution for $a_{mn}(z)$ is
\[ a_{mn}(z) = -\frac{2}{\sqrt{LH}} \sin\left( \frac{m\pi\xi}{L} \right) \sin\left( \frac{n\pi\psi}{H} \right) \frac{\sinh(\sigma_{mn} z_<) \sinh(\sigma_{mn}(W - z_>))}{\sigma_{mn} \sinh(\sigma_{mn} W)} \]
\[ a_{mn}(z) = -\frac{2}{\sigma_{mn} \sqrt{LH}} \operatorname{csch}(\sigma_{mn} W) \sin\left( \frac{m\pi\xi}{L} \right) \sin\left( \frac{n\pi\psi}{H} \right) \sinh(\sigma_{mn} z_<) \sinh(\sigma_{mn}(W - z_>)). \]
This determines the Green function.
\[ G(x; \boldsymbol{\xi}) = -\frac{4}{LH} \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \frac{1}{\sigma_{mn}} \operatorname{csch}(\sigma_{mn} W) \sin\left( \frac{m\pi\xi}{L} \right) \sin\left( \frac{m\pi x}{L} \right) \sin\left( \frac{n\pi\psi}{H} \right) \sin\left( \frac{n\pi y}{H} \right) \sinh(\sigma_{mn} z_<) \sinh(\sigma_{mn}(W - z_>)) \]
3. First we write the problem in circular coordinates.
\[ \nabla^2 G = \delta(\mathbf{x} - \boldsymbol{\xi}) \]
\[ G_{rr} + \frac{1}{r} G_r + \frac{1}{r^2} G_{\theta\theta} = \frac{1}{r} \delta(r - \rho)\,\delta(\theta - \vartheta), \]
\[ G(r, 0; \rho, \vartheta) = G(r, \pi; \rho, \vartheta) = G(0, \theta; \rho, \vartheta) = G(a, \theta; \rho, \vartheta) = 0 \]
Because the Green function vanishes at $\theta = 0$ and $\theta = \pi$ we expand it in a series of the form
\[ G = \sum_{n=1}^{\infty} g_n(r) \sin(n\theta). \]
We substitute the series into the differential equation.
\[ \sum_{n=1}^{\infty} \left( g_n''(r) + \frac{1}{r} g_n'(r) - \frac{n^2}{r^2} g_n(r) \right) \sin(n\theta) = \frac{1}{r} \delta(r - \rho) \sum_{n=1}^{\infty} \frac{2}{\pi} \sin(n\vartheta) \sin(n\theta) \]
\[ g_n''(r) + \frac{1}{r} g_n'(r) - \frac{n^2}{r^2} g_n(r) = \frac{2}{\pi r} \sin(n\vartheta)\,\delta(r - \rho) \]
From the boundary conditions on $G$, we obtain boundary conditions for the $g_n$.
\[ g_n(0) = g_n(a) = 0 \]
The solutions that satisfy the left and right boundary conditions are
\[ g_{n1} = r^n, \qquad g_{n2} = \left( \frac{r}{a} \right)^n - \left( \frac{a}{r} \right)^n. \]
The Wronskian of these solutions is
\[ W = \frac{2 n a^n}{r}. \]
Thus the solution for $g_n(r)$ is
\[ g_n(r) = \frac{2}{\pi} \sin(n\vartheta) \frac{r_<^n \left( \left( \frac{r_>}{a} \right)^n - \left( \frac{a}{r_>} \right)^n \right)}{2 n a^n} \]
\[ g_n(r) = \frac{1}{\pi n} \sin(n\vartheta) \left( \frac{r_<}{a} \right)^n \left( \left( \frac{r_>}{a} \right)^n - \left( \frac{a}{r_>} \right)^n \right). \]
This determines the solution.
\[ G = \frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} \left( \frac{r_<}{a} \right)^n \left( \left( \frac{r_>}{a} \right)^n - \left( \frac{a}{r_>} \right)^n \right) \sin(n\vartheta) \sin(n\theta) \]
4. First we write the problem in circular coordinates.
\[ G_{rr} + \frac{1}{r} G_r + \frac{1}{r^2} G_{\theta\theta} = \frac{1}{r} \delta(r - \rho)\,\delta(\theta - \vartheta), \]
\[ G(r, 0; \rho, \vartheta) = G(r, \pi/2; \rho, \vartheta) = G(0, \theta; \rho, \vartheta) = G_r(a, \theta; \rho, \vartheta) = 0 \]
Because the Green function vanishes at $\theta = 0$ and $\theta = \pi/2$ we expand it in a series of the form
\[ G = \sum_{n=1}^{\infty} g_n(r) \sin(2n\theta). \]
We substitute the series into the differential equation.
\[ \sum_{n=1}^{\infty} \left( g_n''(r) + \frac{1}{r} g_n'(r) - \frac{4n^2}{r^2} g_n(r) \right) \sin(2n\theta) = \frac{1}{r} \delta(r - \rho) \sum_{n=1}^{\infty} \frac{4}{\pi} \sin(2n\vartheta) \sin(2n\theta) \]
\[ g_n''(r) + \frac{1}{r} g_n'(r) - \frac{4n^2}{r^2} g_n(r) = \frac{4}{\pi r} \sin(2n\vartheta)\,\delta(r - \rho) \]
From the boundary conditions on $G$, we obtain boundary conditions for the $g_n$.
\[ g_n(0) = g_n'(a) = 0 \]
The solutions that satisfy the left and right boundary conditions are
\[ g_{n1} = r^{2n}, \qquad g_{n2} = \left( \frac{r}{a} \right)^{2n} + \left( \frac{a}{r} \right)^{2n}. \]
The Wronskian of these solutions is
\[ W = -\frac{4 n a^{2n}}{r}. \]
Thus the solution for $g_n(r)$ is
\[ g_n(r) = \frac{4}{\pi} \sin(2n\vartheta) \frac{r_<^{2n} \left( \left( \frac{r_>}{a} \right)^{2n} + \left( \frac{a}{r_>} \right)^{2n} \right)}{-4 n a^{2n}} \]
\[ g_n(r) = -\frac{1}{\pi n} \sin(2n\vartheta) \left( \frac{r_<}{a} \right)^{2n} \left( \left( \frac{r_>}{a} \right)^{2n} + \left( \frac{a}{r_>} \right)^{2n} \right). \]
This determines the solution.
\[ G = -\frac{1}{\pi} \sum_{n=1}^{\infty} \frac{1}{n} \left( \frac{r_<}{a} \right)^{2n} \left( \left( \frac{r_>}{a} \right)^{2n} + \left( \frac{a}{r_>} \right)^{2n} \right) \sin(2n\vartheta) \sin(2n\theta) \]
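The radial solutions used in parts 3 and 4 are easy to spot-check numerically. The sketch below (my own illustration, assumed helper names, central differences) verifies for part 3 that $g_{n2}$ vanishes on the outer boundary and that the Wronskian is $2 n a^n / r$:

```python
def g1(r, n):
    """Radial solution regular at the origin."""
    return r**n

def g2(r, n, a):
    """Radial solution vanishing at r = a."""
    return (r / a)**n - (a / r)**n

def wronskian(r, n, a, h=1e-6):
    """Numerical Wronskian g1*g2' - g1'*g2 via central differences."""
    dg1 = (g1(r + h, n) - g1(r - h, n)) / (2 * h)
    dg2 = (g2(r + h, n, a) - g2(r - h, n, a)) / (2 * h)
    return g1(r, n) * dg2 - dg1 * g2(r, n, a)

a, n = 2.0, 3
assert g2(a, n, a) == 0.0                      # boundary condition at r = a
for r in (0.5, 1.0, 1.7):
    exact = 2 * n * a**n / r                   # the Wronskian claimed above
    assert abs(wronskian(r, n, a) - exact) < 1e-4 * exact
```

The same check with $r^{2n}$ and $(r/a)^{2n} + (a/r)^{2n}$ reproduces the magnitude $4 n a^{2n}/r$ of part 4's Wronskian.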
Solution 46.3
1. The set
\[ \left\{ X_m \right\} = \left\{ \sin\left( \frac{(2m-1)\pi x}{2L} \right) \right\}_{m=1}^{\infty} \]
are eigenfunctions of $\nabla^2$ and satisfy the boundary conditions $X_m(0) = X_m'(L) = 0$. The set
\[ \left\{ Y_n \right\} = \left\{ \cos\left( \frac{n\pi y}{H} \right) \right\}_{n=0}^{\infty} \]
are eigenfunctions of $\nabla^2$ and satisfy the boundary conditions $Y_n'(0) = Y_n'(H) = 0$. The set
\[ \left\{ \sin\left( \frac{(2m-1)\pi x}{2L} \right) \cos\left( \frac{n\pi y}{H} \right) \right\}_{m=1,\, n=0}^{\infty} \]
are eigenfunctions of $\nabla^2$ and satisfy the boundary conditions of this problem. We expand the Green function in a series of these eigenfunctions.
\[ G = \sum_{m=1}^{\infty} g_{m0} \sqrt{\frac{2}{LH}} \sin\left( \frac{(2m-1)\pi x}{2L} \right) + \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} g_{mn} \frac{2}{\sqrt{LH}} \sin\left( \frac{(2m-1)\pi x}{2L} \right) \cos\left( \frac{n\pi y}{H} \right) \]
We substitute the series into the Green function differential equation.
\[ \Delta G = \delta(x - \xi)\,\delta(y - \psi) \]
\[ -\sum_{m=1}^{\infty} g_{m0} \left( \frac{(2m-1)\pi}{2L} \right)^2 \sqrt{\frac{2}{LH}} \sin\left( \frac{(2m-1)\pi x}{2L} \right) - \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} g_{mn} \left( \left( \frac{(2m-1)\pi}{2L} \right)^2 + \left( \frac{n\pi}{H} \right)^2 \right) \frac{2}{\sqrt{LH}} \sin\left( \frac{(2m-1)\pi x}{2L} \right) \cos\left( \frac{n\pi y}{H} \right) \]
\[ = \sum_{m=1}^{\infty} \sqrt{\frac{2}{LH}} \sin\left( \frac{(2m-1)\pi \xi}{2L} \right) \sqrt{\frac{2}{LH}} \sin\left( \frac{(2m-1)\pi x}{2L} \right) + \sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \frac{2}{\sqrt{LH}} \sin\left( \frac{(2m-1)\pi \xi}{2L} \right) \cos\left( \frac{n\pi \psi}{H} \right) \frac{2}{\sqrt{LH}} \sin\left( \frac{(2m-1)\pi x}{2L} \right) \cos\left( \frac{n\pi y}{H} \right) \]
We equate terms and solve for the coefficients $g_{mn}$.
\[ g_{m0} = -\sqrt{\frac{2}{LH}} \left( \frac{2L}{(2m-1)\pi} \right)^2 \sin\left( \frac{(2m-1)\pi \xi}{2L} \right) \]
\[ g_{mn} = -\frac{2}{\sqrt{LH}} \frac{1}{\pi^2 \left( \left( \frac{2m-1}{2L} \right)^2 + \left( \frac{n}{H} \right)^2 \right)} \sin\left( \frac{(2m-1)\pi \xi}{2L} \right) \cos\left( \frac{n\pi \psi}{H} \right) \]
This determines the Green function.
2. Note that
\[ \left\{ \sqrt{\frac{8}{LHW}} \sin\left( \frac{k\pi x}{L} \right) \sin\left( \frac{m\pi y}{H} \right) \sin\left( \frac{n\pi z}{W} \right) : k, m, n \in \mathbb{Z}^+ \right\} \]
is orthonormal and complete on $(0 \ldots L) \times (0 \ldots H) \times (0 \ldots W)$. The functions are eigenfunctions of $\nabla^2$. We expand the Green function in a series of these eigenfunctions.
\[ G = \sum_{k,m,n=1}^{\infty} g_{kmn} \sqrt{\frac{8}{LHW}} \sin\left( \frac{k\pi x}{L} \right) \sin\left( \frac{m\pi y}{H} \right) \sin\left( \frac{n\pi z}{W} \right) \]
We substitute the series into the Green function differential equation.
\[ \Delta G = \delta(x - \xi)\,\delta(y - \psi)\,\delta(z - \zeta) \]
\[ -\sum_{k,m,n=1}^{\infty} g_{kmn} \left( \left( \frac{k\pi}{L} \right)^2 + \left( \frac{m\pi}{H} \right)^2 + \left( \frac{n\pi}{W} \right)^2 \right) \sqrt{\frac{8}{LHW}} \sin\left( \frac{k\pi x}{L} \right) \sin\left( \frac{m\pi y}{H} \right) \sin\left( \frac{n\pi z}{W} \right) = \sum_{k,m,n=1}^{\infty} \sqrt{\frac{8}{LHW}} \sin\left( \frac{k\pi\xi}{L} \right) \sin\left( \frac{m\pi\psi}{H} \right) \sin\left( \frac{n\pi\zeta}{W} \right) \sqrt{\frac{8}{LHW}} \sin\left( \frac{k\pi x}{L} \right) \sin\left( \frac{m\pi y}{H} \right) \sin\left( \frac{n\pi z}{W} \right) \]
We equate terms and solve for the coefficients $g_{kmn}$.
\[ g_{kmn} = -\frac{\sqrt{\frac{8}{LHW}} \sin\left( \frac{k\pi\xi}{L} \right) \sin\left( \frac{m\pi\psi}{H} \right) \sin\left( \frac{n\pi\zeta}{W} \right)}{\pi^2 \left( \left( \frac{k}{L} \right)^2 + \left( \frac{m}{H} \right)^2 + \left( \frac{n}{W} \right)^2 \right)} \]
This determines the Green function.
3. The Green function problem is
\[ \Delta G \equiv G_{rr} + \frac{1}{r} G_r + \frac{1}{r^2} G_{\theta\theta} = \frac{1}{r} \delta(r - \rho)\,\delta(\theta - \vartheta). \]
We seek a set of functions $\{\Theta_n(\theta) R_{nm}(r)\}$ which are orthogonal and complete on $(0 \ldots a) \times (0 \ldots \pi)$ and which are eigenfunctions of the Laplacian. For the $\Theta_n$ we choose eigenfunctions of $\frac{\partial^2}{\partial\theta^2}$.
\[ \Theta'' = -\nu^2 \Theta, \qquad \Theta(0) = \Theta(\pi) = 0 \]
\[ \nu_n = n, \qquad \Theta_n = \sin(n\theta), \qquad n \in \mathbb{Z}^+ \]
Now we look for eigenfunctions of the Laplacian.
\[ (R_n \Theta_n)_{rr} + \frac{1}{r} (R_n \Theta_n)_r + \frac{1}{r^2} (R_n \Theta_n)_{\theta\theta} = -\lambda^2 R_n \Theta_n \]
\[ R_n'' \Theta_n + \frac{1}{r} R_n' \Theta_n - \frac{n^2}{r^2} R_n \Theta_n = -\lambda^2 R_n \Theta_n \]
\[ R'' + \frac{1}{r} R' + \left( \lambda^2 - \frac{n^2}{r^2} \right) R = 0, \qquad R(0) = R(a) = 0 \]
The general solution for $R$ is
\[ R = c_1 J_n(\lambda r) + c_2 Y_n(\lambda r). \]
The solution that satisfies the left boundary condition is $R = c J_n(\lambda r)$. We use the right boundary condition to determine the eigenvalues.
\[ \lambda_m = \frac{j_{n,m}}{a}, \qquad R_{nm} = J_n\left( \frac{j_{n,m}\, r}{a} \right), \qquad m, n \in \mathbb{Z}^+ \]
Here $j_{n,m}$ is the $m^{\text{th}}$ positive root of $J_n$.
Note that
\[ \left\{ \sin(n\theta)\, J_n\left( \frac{j_{n,m}\, r}{a} \right) : m, n \in \mathbb{Z}^+ \right\} \]
is orthogonal and complete on $(r, \theta) \in (0 \ldots a) \times (0 \ldots \pi)$. We use the identities
\[ \int_0^{\pi} \sin^2(n\theta) \,\mathrm{d}\theta = \frac{\pi}{2}, \qquad \int_0^1 r\, J_n^2(j_{n,m}\, r) \,\mathrm{d}r = \frac{1}{2} J_{n+1}^2(j_{n,m}) \]
to make the functions orthonormal.
\[ \left\{ \frac{2}{\sqrt{\pi}\, a\, |J_{n+1}(j_{n,m})|} \sin(n\theta)\, J_n\left( \frac{j_{n,m}\, r}{a} \right) : m, n \in \mathbb{Z}^+ \right\} \]
We expand the Green function in a series of these eigenfunctions.
\[ G = \sum_{n,m=1}^{\infty} g_{nm} \frac{2}{\sqrt{\pi}\, a\, |J_{n+1}(j_{n,m})|} J_n\left( \frac{j_{n,m}\, r}{a} \right) \sin(n\theta) \]
We substitute the series into the Green function differential equation.
\[ G_{rr} + \frac{1}{r} G_r + \frac{1}{r^2} G_{\theta\theta} = \frac{1}{r} \delta(r - \rho)\,\delta(\theta - \vartheta) \]
\[ -\sum_{n,m=1}^{\infty} \left( \frac{j_{n,m}}{a} \right)^2 g_{nm} \frac{2}{\sqrt{\pi}\, a\, |J_{n+1}(j_{n,m})|} J_n\left( \frac{j_{n,m}\, r}{a} \right) \sin(n\theta) = \sum_{n,m=1}^{\infty} \frac{2}{\sqrt{\pi}\, a\, |J_{n+1}(j_{n,m})|} J_n\left( \frac{j_{n,m}\, \rho}{a} \right) \sin(n\vartheta) \frac{2}{\sqrt{\pi}\, a\, |J_{n+1}(j_{n,m})|} J_n\left( \frac{j_{n,m}\, r}{a} \right) \sin(n\theta) \]
We equate terms and solve for the coefficients $g_{nm}$.
\[ g_{nm} = -\left( \frac{a}{j_{n,m}} \right)^2 \frac{2}{\sqrt{\pi}\, a\, |J_{n+1}(j_{n,m})|} J_n\left( \frac{j_{n,m}\, \rho}{a} \right) \sin(n\vartheta) \]
This determines the Green function.
4. The Green function problem is
\[ \Delta G \equiv G_{rr} + \frac{1}{r} G_r + \frac{1}{r^2} G_{\theta\theta} = \frac{1}{r} \delta(r - \rho)\,\delta(\theta - \vartheta). \]
We seek a set of functions $\{\Theta_n(\theta) R_{nm}(r)\}$ which are orthogonal and complete on $(0 \ldots a) \times (0 \ldots \pi/2)$ and which are eigenfunctions of the Laplacian. For the $\Theta_n$ we choose eigenfunctions of $\frac{\partial^2}{\partial\theta^2}$.
\[ \Theta'' = -\nu^2 \Theta, \qquad \Theta(0) = \Theta(\pi/2) = 0 \]
\[ \nu_n = 2n, \qquad \Theta_n = \sin(2n\theta), \qquad n \in \mathbb{Z}^+ \]
Now we look for eigenfunctions of the Laplacian.
\[ R'' + \frac{1}{r} R' + \left( \lambda^2 - \frac{(2n)^2}{r^2} \right) R = 0, \qquad R(0) = R'(a) = 0 \]
The general solution for $R$ is
\[ R = c_1 J_{2n}(\lambda r) + c_2 Y_{2n}(\lambda r). \]
The solution that satisfies the left boundary condition is $R = c J_{2n}(\lambda r)$. We use the right boundary condition to determine the eigenvalues.
\[ \lambda_m = \frac{j'_{2n,m}}{a}, \qquad R_{nm} = J_{2n}\left( \frac{j'_{2n,m}\, r}{a} \right), \qquad m, n \in \mathbb{Z}^+ \]
Here $j'_{\nu,m}$ is the $m^{\text{th}}$ positive root of $J'_{\nu}$.
Note that
\[ \left\{ \sin(2n\theta)\, J_{2n}\left( \frac{j'_{2n,m}\, r}{a} \right) : m, n \in \mathbb{Z}^+ \right\} \]
is orthogonal and complete on $(r, \theta) \in (0 \ldots a) \times (0 \ldots \pi/2)$. We use the identities
\[ \int_0^{\pi} \sin(m\theta) \sin(n\theta) \,\mathrm{d}\theta = \frac{\pi}{2} \delta_{mn}, \qquad \int_0^1 r\, J_{\nu}(j'_{\nu,m}\, r)\, J_{\nu}(j'_{\nu,n}\, r) \,\mathrm{d}r = \frac{j'^2_{\nu,n} - \nu^2}{2\, j'^2_{\nu,n}} \left( J_{\nu}(j'_{\nu,n}) \right)^2 \delta_{mn} \]
to make the functions orthonormal.
\[ \left\{ \frac{2\sqrt{2}\, j'_{2n,m}}{\sqrt{\pi}\, a \sqrt{j'^2_{2n,m} - 4n^2}\, |J_{2n}(j'_{2n,m})|} \sin(2n\theta)\, J_{2n}\left( \frac{j'_{2n,m}\, r}{a} \right) : m, n \in \mathbb{Z}^+ \right\} \]
We expand the Green function in a series of these eigenfunctions.
\[ G = \sum_{n,m=1}^{\infty} g_{nm} \frac{2\sqrt{2}\, j'_{2n,m}}{\sqrt{\pi}\, a \sqrt{j'^2_{2n,m} - 4n^2}\, |J_{2n}(j'_{2n,m})|} J_{2n}\left( \frac{j'_{2n,m}\, r}{a} \right) \sin(2n\theta) \]
We substitute the series into the Green function differential equation and, as in the previous part, equate terms and solve for the coefficients $g_{nm}$.
\[ g_{nm} = -\left( \frac{a}{j'_{2n,m}} \right)^2 \frac{2\sqrt{2}\, j'_{2n,m}}{\sqrt{\pi}\, a \sqrt{j'^2_{2n,m} - 4n^2}\, |J_{2n}(j'_{2n,m})|} J_{2n}\left( \frac{j'_{2n,m}\, \rho}{a} \right) \sin(2n\vartheta) \]
This determines the Green function.
1824
Solution 46.4
We start with the equation
\[ \nabla^2 G = \delta(x - \xi)\,\delta(y - \psi). \]
We do an odd reflection across the $y$ axis so that $G(0, y; \xi, \psi) = 0$.
\[ \nabla^2 G = \delta(x - \xi)\,\delta(y - \psi) - \delta(x + \xi)\,\delta(y - \psi) \]
Then we do an even reflection across the $x$ axis so that $G_y(x, 0; \xi, \psi) = 0$.
\[ \nabla^2 G = \delta(x - \xi)\,\delta(y - \psi) - \delta(x + \xi)\,\delta(y - \psi) + \delta(x - \xi)\,\delta(y + \psi) - \delta(x + \xi)\,\delta(y + \psi) \]
We solve this problem using the infinite space Green function.
\[ G = \frac{1}{4\pi} \log\left( (x-\xi)^2 + (y-\psi)^2 \right) - \frac{1}{4\pi} \log\left( (x+\xi)^2 + (y-\psi)^2 \right) + \frac{1}{4\pi} \log\left( (x-\xi)^2 + (y+\psi)^2 \right) - \frac{1}{4\pi} \log\left( (x+\xi)^2 + (y+\psi)^2 \right) \]
\[ G = \frac{1}{4\pi} \log\left[ \frac{\left( (x-\xi)^2 + (y-\psi)^2 \right) \left( (x-\xi)^2 + (y+\psi)^2 \right)}{\left( (x+\xi)^2 + (y-\psi)^2 \right) \left( (x+\xi)^2 + (y+\psi)^2 \right)} \right] \]
Now we solve the boundary value problem.
\[ u(\xi, \psi) = \oint_S \left( u(\mathbf{x}) \frac{\partial G}{\partial n} - G \frac{\partial u(\mathbf{x})}{\partial n} \right) \mathrm{d}S + \int_V G\, \Delta u \,\mathrm{d}V \]
\[ u(\xi, \psi) = \int_0^{\infty} u(0, y) \left( -G_x(0, y; \xi, \psi) \right) \mathrm{d}y + \int_0^{\infty} G(x, 0; \xi, \psi)\, u_y(x, 0) \,\mathrm{d}x \]
\[ u(\xi, \psi) = -\int_0^{\infty} g(y)\, G_x(0, y; \xi, \psi) \,\mathrm{d}y + \int_0^{\infty} G(x, 0; \xi, \psi)\, h(x) \,\mathrm{d}x \]
\[ u(\xi, \psi) = \frac{\xi}{\pi} \int_0^{\infty} \left( \frac{1}{\xi^2 + (y-\psi)^2} + \frac{1}{\xi^2 + (y+\psi)^2} \right) g(y) \,\mathrm{d}y + \frac{1}{2\pi} \int_0^{\infty} \log\left[ \frac{(x-\xi)^2 + \psi^2}{(x+\xi)^2 + \psi^2} \right] h(x) \,\mathrm{d}x \]
Exchanging the names of the field and source variables,
\[ u(x, y) = \frac{x}{\pi} \int_0^{\infty} \left( \frac{1}{x^2 + (y-\psi)^2} + \frac{1}{x^2 + (y+\psi)^2} \right) g(\psi) \,\mathrm{d}\psi + \frac{1}{2\pi} \int_0^{\infty} \log\left[ \frac{(x-\xi)^2 + y^2}{(x+\xi)^2 + y^2} \right] h(\xi) \,\mathrm{d}\xi \]
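The image construction can be sanity-checked numerically. The sketch below (my own check, not from the text) evaluates the four-image Green function and confirms the two boundary conditions, $G = 0$ on $x = 0$ and $G_y = 0$ on $y = 0$:

```python
import math

def G(x, y, xi, psi):
    """Quarter-plane Green function built from four images, as derived above."""
    sq = lambda a, b: a * a + b * b
    return (math.log(sq(x - xi, y - psi)) - math.log(sq(x + xi, y - psi))
            + math.log(sq(x - xi, y + psi)) - math.log(sq(x + xi, y + psi))) / (4 * math.pi)

xi, psi = 1.3, 0.7
# Dirichlet condition on the y axis: G vanishes identically at x = 0.
assert abs(G(0.0, 2.0, xi, psi)) < 1e-12
# Neumann condition on the x axis: G is even in y, so G_y(x, 0) = 0.
h = 1e-6
dGdy = (G(2.0, h, xi, psi) - G(2.0, -h, xi, psi)) / (2 * h)
assert abs(dGdy) < 1e-6
```

The odd reflection in $x$ makes the first assertion exact, and the even reflection in $y$ makes the central difference vanish to rounding error.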
Solution 46.5
First we find the infinite space Green function.
\[ G_{tt} - c^2 G_{xx} = \delta(x - \xi)\,\delta(t - \tau), \qquad G = G_t = 0 \text{ for } t < \tau \]
We solve this problem with the Fourier transform.
\[ \hat{G}_{tt} + c^2 \omega^2 \hat{G} = \mathcal{F}[\delta(x - \xi)]\,\delta(t - \tau) \]
\[ \hat{G} = \mathcal{F}[\delta(x - \xi)]\, H(t - \tau) \frac{1}{c\omega} \sin(c\omega(t - \tau)) \]
\[ \hat{G} = H(t - \tau)\, \mathcal{F}[\delta(x - \xi)]\, \mathcal{F}\!\left[ \frac{\pi}{c} H(c(t-\tau) - |x|) \right] \]
\[ G = H(t - \tau) \frac{\pi}{c} \frac{1}{2\pi} \int_{-\infty}^{\infty} \delta(y - \xi)\, H(c(t-\tau) - |x - y|) \,\mathrm{d}y \]
\[ G = \frac{1}{2c} H(t - \tau)\, H(c(t-\tau) - |x - \xi|) \]
\[ G = \frac{1}{2c} H(c(t-\tau) - |x - \xi|) \]
1. So that the Green function vanishes at $x = 0$ we do an odd reflection about that point.
\[ G_{tt} - c^2 G_{xx} = \delta(x - \xi)\,\delta(t - \tau) - \delta(x + \xi)\,\delta(t - \tau) \]
\[ G = \frac{1}{2c} H(c(t-\tau) - |x - \xi|) - \frac{1}{2c} H(c(t-\tau) - |x + \xi|) \]
2. Note that the Green function satisfies the symmetry relation
\[ G(x, t; \xi, \tau) = G(\xi, -\tau; x, -t). \]
This implies that
\[ G_{xx} = G_{\xi\xi}, \qquad G_{tt} = G_{\tau\tau}. \]
We write the Green function problem and the inhomogeneous differential equation for $u$ in terms of $\xi$ and $\tau$.
\[ G_{\tau\tau} - c^2 G_{\xi\xi} = \delta(x - \xi)\,\delta(t - \tau) \qquad (46.4) \]
\[ u_{\tau\tau} - c^2 u_{\xi\xi} = Q(\xi, \tau) \qquad (46.5) \]
We take the difference of $u$ times Equation 46.4 and $G$ times Equation 46.5 and integrate this over the domain $(0 \ldots \infty) \times (0 \ldots t^+)$.
\[ \int_0^{t^+} \!\!\int_0^{\infty} \left( u\, \delta(x-\xi)\,\delta(t-\tau) - G Q \right) \mathrm{d}\xi \,\mathrm{d}\tau = \int_0^{t^+} \!\!\int_0^{\infty} \left( u G_{\tau\tau} - u_{\tau\tau} G - c^2 (u G_{\xi\xi} - u_{\xi\xi} G) \right) \mathrm{d}\xi \,\mathrm{d}\tau \]
\[ u(x, t) = \int_0^{t^+} \!\!\int_0^{\infty} G Q \,\mathrm{d}\xi \,\mathrm{d}\tau + \int_0^{t^+} \!\!\int_0^{\infty} \left( \frac{\partial}{\partial\tau} (u G_{\tau} - u_{\tau} G) - c^2 \frac{\partial}{\partial\xi} (u G_{\xi} - u_{\xi} G) \right) \mathrm{d}\xi \,\mathrm{d}\tau \]
\[ u(x, t) = \int_0^{t^+} \!\!\int_0^{\infty} G Q \,\mathrm{d}\xi \,\mathrm{d}\tau + \int_0^{\infty} \left[ u G_{\tau} - u_{\tau} G \right]_{\tau=0}^{t^+} \mathrm{d}\xi - c^2 \int_0^{t^+} \left[ u G_{\xi} - u_{\xi} G \right]_{\xi=0}^{\infty} \mathrm{d}\tau \]
\[ u(x, t) = \int_0^{t^+} \!\!\int_0^{\infty} G Q \,\mathrm{d}\xi \,\mathrm{d}\tau - \int_0^{\infty} \left[ u G_{\tau} - u_{\tau} G \right]_{\tau=0} \mathrm{d}\xi + c^2 \int_0^{t^+} \left[ u G_{\xi} \right]_{\xi=0} \mathrm{d}\tau \]
We consider the case $Q(x, t) = f(x) = g(x) = 0$.
\[ u(x, t) = c^2 \int_0^{t^+} h(\tau)\, G_{\xi}(x, t; 0, \tau) \,\mathrm{d}\tau \]
We calculate $G_{\xi}$.
\[ G = \frac{1}{2c} \left( H(c(t-\tau) - |x - \xi|) - H(c(t-\tau) - |x + \xi|) \right) \]
\[ G_{\xi} = \frac{1}{2c} \left( \delta(c(t-\tau) - |x - \xi|)\, \mathrm{sign}(x - \xi) + \delta(c(t-\tau) - |x + \xi|)\, \mathrm{sign}(x + \xi) \right) \]
\[ G_{\xi}(x, t; 0, \tau) = \frac{1}{c} \delta(c(t-\tau) - |x|)\, \mathrm{sign}(x) \]
We are interested in $x > 0$.
\[ G_{\xi}(x, t; 0, \tau) = \frac{1}{c} \delta(c(t-\tau) - x) \]
Now we can calculate the solution $u$.
\[ u(x, t) = c^2 \int_0^{t^+} h(\tau) \frac{1}{c} \delta(c(t-\tau) - x) \,\mathrm{d}\tau \]
\[ u(x, t) = \int_0^{t^+} h(\tau)\, \delta\!\left( (t - \tau) - \frac{x}{c} \right) \mathrm{d}\tau \]
\[ u(x, t) = h\!\left( t - \frac{x}{c} \right) \]
3. The boundary condition influences the solution $u(x_1, t_1)$ only at the point $t = t_1 - x_1/c$. The contribution from the boundary condition $u(0, t) = h(t)$ is a wave moving to the right with speed $c$.
Solution 46.6
\[ g_{tt} - c^2 g_{xx} = 0, \qquad g(x, 0; \xi) = \delta(x - \xi), \qquad g_t(x, 0; \xi) = 0 \]
\[ \hat{g}_{tt} + c^2 \omega^2 \hat{g} = 0, \qquad \hat{g}(\omega, 0; \xi) = \mathcal{F}[\delta(x - \xi)], \qquad \hat{g}_t(\omega, 0; \xi) = 0 \]
\[ \hat{g} = \mathcal{F}[\delta(x - \xi)] \cos(c \omega t) \]
\[ \hat{g} = \mathcal{F}[\delta(x - \xi)]\, \mathcal{F}[\pi (\delta(x + c t) + \delta(x - c t))] \]
\[ g = \frac{1}{2} \int_{-\infty}^{\infty} \delta(\psi - \xi) \left( \delta(x - \psi + c t) + \delta(x - \psi - c t) \right) \mathrm{d}\psi \]
\[ g(x, t; \xi) = \frac{1}{2} \left( \delta(x - \xi + c t) + \delta(x - \xi - c t) \right) \]
\[ \gamma_{tt} - c^2 \gamma_{xx} = 0, \qquad \gamma(x, 0; \xi) = 0, \qquad \gamma_t(x, 0; \xi) = \delta(x - \xi) \]
\[ \hat{\gamma}_{tt} + c^2 \omega^2 \hat{\gamma} = 0, \qquad \hat{\gamma}(\omega, 0; \xi) = 0, \qquad \hat{\gamma}_t(\omega, 0; \xi) = \mathcal{F}[\delta(x - \xi)] \]
\[ \hat{\gamma} = \mathcal{F}[\delta(x - \xi)] \frac{1}{c\omega} \sin(c \omega t) \]
\[ \hat{\gamma} = \mathcal{F}[\delta(x - \xi)]\, \mathcal{F}\!\left[ \frac{\pi}{c} \left( H(x + c t) - H(x - c t) \right) \right] \]
\[ \gamma = \frac{1}{2c} \int_{-\infty}^{\infty} \delta(\psi - \xi) \left( H(x - \psi + c t) - H(x - \psi - c t) \right) \mathrm{d}\psi \]
\[ \gamma(x, t; \xi) = \frac{1}{2c} \left( H(x - \xi + c t) - H(x - \xi - c t) \right) \]
Solution 46.7
\[ u(x, t) = \int_0^{\infty} \!\!\int_{-\infty}^{\infty} G(x, t; \xi, \tau)\, f(\xi, \tau) \,\mathrm{d}\xi \,\mathrm{d}\tau + \int_{-\infty}^{\infty} g(x, t; \xi)\, p(\xi) \,\mathrm{d}\xi + \int_{-\infty}^{\infty} \gamma(x, t; \xi)\, q(\xi) \,\mathrm{d}\xi \]
\[ u(x, t) = \frac{1}{2c} \int_0^{\infty} \!\!\int_{-\infty}^{\infty} H(t - \tau) \left( H(x - \xi + c(t-\tau)) - H(x - \xi - c(t-\tau)) \right) f(\xi, \tau) \,\mathrm{d}\xi \,\mathrm{d}\tau \]
\[ \qquad + \frac{1}{2} \int_{-\infty}^{\infty} \left( \delta(x - \xi + c t) + \delta(x - \xi - c t) \right) p(\xi) \,\mathrm{d}\xi + \frac{1}{2c} \int_{-\infty}^{\infty} \left( H(x - \xi + c t) - H(x - \xi - c t) \right) q(\xi) \,\mathrm{d}\xi \]
\[ u(x, t) = \frac{1}{2c} \int_0^{t} \!\!\int_{-\infty}^{\infty} \left( H(x - \xi + c(t-\tau)) - H(x - \xi - c(t-\tau)) \right) f(\xi, \tau) \,\mathrm{d}\xi \,\mathrm{d}\tau + \frac{1}{2} \left( p(x + c t) + p(x - c t) \right) + \frac{1}{2c} \int_{x - c t}^{x + c t} q(\xi) \,\mathrm{d}\xi \]
\[ u(x, t) = \frac{1}{2c} \int_0^{t} \int_{x - c(t-\tau)}^{x + c(t-\tau)} f(\xi, \tau) \,\mathrm{d}\xi \,\mathrm{d}\tau + \frac{1}{2} \left( p(x + c t) + p(x - c t) \right) + \frac{1}{2c} \int_{x - c t}^{x + c t} q(\xi) \,\mathrm{d}\xi \]
This solution demonstrates the domain of dependence of the solution. The first term is an integral over the triangle domain $\{(\xi, \tau) : 0 < \tau < t,\ x - c(t - \tau) < \xi < x + c(t - \tau)\}$. The second term involves only the points $(x \pm c t, 0)$. The third term is an integral on the line segment $\{(\xi, 0) : x - c t < \xi < x + c t\}$. In totality, this is just the triangle domain. This is shown graphically in Figure 46.2.
Figure 46.2: Domain of dependence for the wave equation.
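The boxed d'Alembert-type formula can be verified against two textbook solutions of the homogeneous wave equation. The sketch below (my own check; the helper name `dalembert` and the trapezoid rule are my choices, not from the text) takes $f = 0$ and compares against $p(x) = \sin x,\ q = 0 \Rightarrow u = \sin x \cos(ct)$ and $p = 0,\ q(x) = \cos x \Rightarrow u = \cos x \sin(ct)/c$:

```python
import math

def dalembert(x, t, p, q, c=1.0, m=2000):
    """u = (p(x+ct) + p(x-ct))/2 + (1/(2c)) * integral_{x-ct}^{x+ct} q(s) ds,
    with the q-integral done by the trapezoid rule (f = 0 case)."""
    a, b = x - c * t, x + c * t
    h = (b - a) / m
    integral = h * (q(a) / 2 + sum(q(a + k * h) for k in range(1, m)) + q(b) / 2)
    return (p(b) + p(a)) / 2 + integral / (2 * c)

c, x, t = 1.5, 0.3, 0.8
u1 = dalembert(x, t, math.sin, lambda s: 0.0, c)
assert abs(u1 - math.sin(x) * math.cos(c * t)) < 1e-9     # p = sin, q = 0
u2 = dalembert(x, t, lambda s: 0.0, math.cos, c)
assert abs(u2 - math.cos(x) * math.sin(c * t) / c) < 1e-6  # p = 0, q = cos
```

Both checks succeed because the initial-position term is exact and the trapezoid rule resolves the smooth $q$-integral over $(x - ct,\ x + ct)$.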
Solution 46.8
Single Sum Representation. First we find the eigenfunctions of the homogeneous problem $\Delta u - k^2 u = 0$. We substitute the separation of variables, $u(x, y) = X(x) Y(y)$, into the partial differential equation.
\[ X'' Y + X Y'' - k^2 X Y = 0 \]
\[ \frac{X''}{X} = k^2 - \frac{Y''}{Y} = -\lambda^2 \]
We have the regular Sturm-Liouville eigenvalue problem,
\[ X'' = -\lambda^2 X, \qquad X(0) = X(a) = 0, \]
which has the solutions,
\[ \lambda_n = \frac{n \pi}{a}, \qquad X_n = \sin\left( \frac{n \pi x}{a} \right), \qquad n \in \mathbb{N}. \]
We expand the solution $u$ in a series of these eigenfunctions.
\[ G(x, y; \xi, \eta) = \sum_{n=1}^{\infty} c_n(y) \sin\left( \frac{n \pi x}{a} \right) \]
We substitute this series into the partial differential equation to find equations for the $c_n(y)$.
\[ \sum_{n=1}^{\infty} \left( -\left( \frac{n\pi}{a} \right)^2 c_n(y) + c_n''(y) - k^2 c_n(y) \right) \sin\left( \frac{n \pi x}{a} \right) = \delta(x - \xi)\,\delta(y - \eta) \]
The series expansion of the right side is,
\[ \delta(x - \xi)\,\delta(y - \eta) = \sum_{n=1}^{\infty} d_n(y) \sin\left( \frac{n \pi x}{a} \right) \]
\[ d_n(y) = \frac{2}{a} \int_0^a \delta(x - \xi)\,\delta(y - \eta) \sin\left( \frac{n \pi x}{a} \right) \mathrm{d}x \]
\[ d_n(y) = \frac{2}{a} \sin\left( \frac{n \pi \xi}{a} \right) \delta(y - \eta). \]
The equations for the $c_n(y)$ are
\[ c_n''(y) - \left( k^2 + \left( \frac{n\pi}{a} \right)^2 \right) c_n(y) = \frac{2}{a} \sin\left( \frac{n \pi \xi}{a} \right) \delta(y - \eta), \qquad c_n(0) = c_n(b) = 0. \]
The homogeneous solutions are $\cosh(\sigma_n y)$ and $\sinh(\sigma_n y)$, where $\sigma_n = \sqrt{k^2 + (n\pi/a)^2}$. The solutions that satisfy the boundary conditions at $y = 0$ and $y = b$ are, $\sinh(\sigma_n y)$ and $\sinh(\sigma_n (y - b))$, respectively. The Wronskian of these solutions is,
\[ W(y) = \begin{vmatrix} \sinh(\sigma_n y) & \sinh(\sigma_n (y - b)) \\ \sigma_n \cosh(\sigma_n y) & \sigma_n \cosh(\sigma_n (y - b)) \end{vmatrix} = \sigma_n \left( \sinh(\sigma_n y) \cosh(\sigma_n (y - b)) - \sinh(\sigma_n (y - b)) \cosh(\sigma_n y) \right) = \sigma_n \sinh(\sigma_n b). \]
The solution for $c_n(y)$ is
\[ c_n(y) = \frac{2}{a} \sin\left( \frac{n \pi \xi}{a} \right) \frac{\sinh(\sigma_n y_<) \sinh(\sigma_n (y_> - b))}{\sigma_n \sinh(\sigma_n b)}. \]
The Green function for the partial differential equation is
\[ G(x, y; \xi, \eta) = \frac{2}{a} \sum_{n=1}^{\infty} \frac{\sinh(\sigma_n y_<) \sinh(\sigma_n (y_> - b))}{\sigma_n \sinh(\sigma_n b)} \sin\left( \frac{n \pi x}{a} \right) \sin\left( \frac{n \pi \xi}{a} \right). \]
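Two structural properties of this representation are easy to test with a truncated sum: the Green function vanishes on the boundary, and it is symmetric under exchange of field and source points. The sketch below (my own check with hypothetical parameter choices) does both:

```python
import math

def G(x, y, xi, eta, a=1.0, b=1.0, k=1.0, terms=200):
    """Partial sum of the single-sum representation derived above."""
    total = 0.0
    for n in range(1, terms + 1):
        s = math.sqrt(k * k + (n * math.pi / a)**2)        # sigma_n
        ylo, yhi = min(y, eta), max(y, eta)                # y_< and y_>
        total += (math.sinh(s * ylo) * math.sinh(s * (yhi - b))
                  / (s * math.sinh(s * b))
                  * math.sin(n * math.pi * x / a) * math.sin(n * math.pi * xi / a))
    return 2 * total / a

assert abs(G(0.0, 0.5, 0.3, 0.6)) < 1e-12                  # vanishes on x = 0
assert abs(G(0.3, 1.0, 0.4, 0.5)) < 1e-12                  # vanishes on y = b
assert abs(G(0.2, 0.3, 0.6, 0.7) - G(0.6, 0.7, 0.2, 0.3)) < 1e-10  # reciprocity
```

The $y_< / y_>$ notation makes the reciprocity $G(\mathbf{x}; \boldsymbol{\xi}) = G(\boldsymbol{\xi}; \mathbf{x})$ manifest term by term.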
Solution 46.9
We take the Fourier cosine transform in $x$ of the partial differential equation and the boundary condition along $y = 0$.
\[ G_{xx} + G_{yy} - k^2 G = \delta(x - \xi)\,\delta(y - \eta) \]
\[ -\alpha^2 \hat{G}(\alpha, y) - \frac{1}{\pi} G_x(0, y) + \hat{G}_{yy}(\alpha, y) - k^2 \hat{G}(\alpha, y) = \frac{1}{\pi} \cos(\alpha \xi)\,\delta(y - \eta) \]
\[ \hat{G}_{yy}(\alpha, y) - (k^2 + \alpha^2) \hat{G}(\alpha, y) = \frac{1}{\pi} \cos(\alpha \xi)\,\delta(y - \eta), \qquad \hat{G}(\alpha, 0) = 0 \]
Then we take the Fourier sine transform in $y$.
\[ -\beta^2 \tilde{G}(\alpha, \beta) + \frac{\beta}{\pi} \hat{G}(\alpha, 0) - (k^2 + \alpha^2) \tilde{G}(\alpha, \beta) = \frac{1}{\pi^2} \cos(\alpha \xi) \sin(\beta \eta) \]
\[ \tilde{G} = -\frac{\cos(\alpha \xi) \sin(\beta \eta)}{\pi^2 (k^2 + \alpha^2 + \beta^2)} \]
We take two inverse transforms to find the solution. For one integral representation of the Green function we take the inverse sine transform followed by the inverse cosine transform. Using the convolution theorem for the sine and cosine transforms,
\[ \hat{G}(\alpha, y) = -\cos(\alpha \xi) \frac{1}{2\pi} \int_0^{\infty} \delta(z - \eta) \frac{1}{\sqrt{k^2 + \alpha^2}} \left( \mathrm{e}^{-\sqrt{k^2 + \alpha^2}\,|y - z|} - \mathrm{e}^{-\sqrt{k^2 + \alpha^2}\,(y + z)} \right) \mathrm{d}z \]
\[ \hat{G}(\alpha, y) = -\frac{\cos(\alpha \xi)}{2\pi \sqrt{k^2 + \alpha^2}} \left( \mathrm{e}^{-\sqrt{k^2 + \alpha^2}\,|y - \eta|} - \mathrm{e}^{-\sqrt{k^2 + \alpha^2}\,(y + \eta)} \right) \]
\[ G(x, y; \xi, \eta) = -\frac{1}{\pi} \int_0^{\infty} \frac{\cos(\alpha x) \cos(\alpha \xi)}{\sqrt{k^2 + \alpha^2}} \left( \mathrm{e}^{-\sqrt{k^2 + \alpha^2}\,|y - \eta|} - \mathrm{e}^{-\sqrt{k^2 + \alpha^2}\,(y + \eta)} \right) \mathrm{d}\alpha \]
For another integral representation of the Green function, we take the inverse cosine transform followed by the inverse sine transform.
\[ \tilde{G}(x, \beta) = -\sin(\beta \eta) \frac{1}{2\pi} \int_0^{\infty} \delta(z - \xi) \frac{1}{\sqrt{k^2 + \beta^2}} \left( \mathrm{e}^{-\sqrt{k^2 + \beta^2}\,|x - z|} + \mathrm{e}^{-\sqrt{k^2 + \beta^2}\,(x + z)} \right) \mathrm{d}z \]
\[ \tilde{G}(x, \beta) = -\frac{\sin(\beta \eta)}{2\pi \sqrt{k^2 + \beta^2}} \left( \mathrm{e}^{-\sqrt{k^2 + \beta^2}\,|x - \xi|} + \mathrm{e}^{-\sqrt{k^2 + \beta^2}\,(x + \xi)} \right) \]
\[ G(x, y; \xi, \eta) = -\frac{1}{\pi} \int_0^{\infty} \frac{\sin(\beta y) \sin(\beta \eta)}{\sqrt{k^2 + \beta^2}} \left( \mathrm{e}^{-\sqrt{k^2 + \beta^2}\,|x - \xi|} + \mathrm{e}^{-\sqrt{k^2 + \beta^2}\,(x + \xi)} \right) \mathrm{d}\beta \]
Solution 46.10
The problem is:
\[ G_{rr} + \frac{1}{r} G_r + \frac{1}{r^2} G_{\theta\theta} = \frac{\delta(r - \rho)\,\delta(\theta - \vartheta)}{r}, \qquad 0 < r < \infty, \quad 0 < \theta < \alpha, \]
\[ G(r, 0, \rho, \vartheta) = G(r, \alpha, \rho, \vartheta) = 0, \qquad G(0, \theta, \rho, \vartheta) = 0, \qquad G(r, \theta, \rho, \vartheta) \to 0 \text{ as } r \to \infty. \]
Let $w = r \,\mathrm{e}^{\mathrm{i}\theta}$ and $z = x + \mathrm{i} y$. We use the conformal mapping, $z = w^{\pi/\alpha}$, to map the sector to the upper half $z$ plane. The problem in $(x, y)$ space is
\[ G_{xx} + G_{yy} = \delta(x - \xi)\,\delta(y - \eta), \qquad -\infty < x < \infty, \quad 0 < y < \infty, \]
\[ G(x, 0, \xi, \eta) = 0, \qquad G(x, y, \xi, \eta) \to 0 \text{ as } x, y \to \infty. \]
We will solve this problem with the method of images. Note that the solution of,
\[ G_{xx} + G_{yy} = \delta(x - \xi)\,\delta(y - \eta) - \delta(x - \xi)\,\delta(y + \eta), \qquad -\infty < x < \infty, \quad -\infty < y < \infty, \]
\[ G(x, y, \xi, \eta) \to 0 \text{ as } x, y \to \infty, \]
satisfies the condition, $G(x, 0, \xi, \eta) = 0$. Since the infinite space Green function for the Laplacian in two dimensions is
\[ \frac{1}{4\pi} \log\left( (x - \xi)^2 + (y - \eta)^2 \right), \]
the solution of this problem is,
\[ G(x, y, \xi, \eta) = \frac{1}{4\pi} \log\left( (x - \xi)^2 + (y - \eta)^2 \right) - \frac{1}{4\pi} \log\left( (x - \xi)^2 + (y + \eta)^2 \right) = \frac{1}{4\pi} \log\left[ \frac{(x - \xi)^2 + (y - \eta)^2}{(x - \xi)^2 + (y + \eta)^2} \right]. \]
Now we solve for $x$ and $y$ in the conformal mapping.
\[ z = w^{\pi/\alpha} = (r \,\mathrm{e}^{\mathrm{i}\theta})^{\pi/\alpha} \]
\[ x + \mathrm{i} y = r^{\pi/\alpha} \left( \cos(\pi\theta/\alpha) + \mathrm{i} \sin(\pi\theta/\alpha) \right) \]
\[ x = r^{\pi/\alpha} \cos(\pi\theta/\alpha), \qquad y = r^{\pi/\alpha} \sin(\pi\theta/\alpha) \]
We substitute these expressions into $G(x, y, \xi, \eta)$ to obtain $G(r, \theta, \rho, \vartheta)$.
\[ G(r, \theta, \rho, \vartheta) = \frac{1}{4\pi} \log\left[ \frac{\left( r^{\pi/\alpha} \cos(\pi\theta/\alpha) - \rho^{\pi/\alpha} \cos(\pi\vartheta/\alpha) \right)^2 + \left( r^{\pi/\alpha} \sin(\pi\theta/\alpha) - \rho^{\pi/\alpha} \sin(\pi\vartheta/\alpha) \right)^2}{\left( r^{\pi/\alpha} \cos(\pi\theta/\alpha) - \rho^{\pi/\alpha} \cos(\pi\vartheta/\alpha) \right)^2 + \left( r^{\pi/\alpha} \sin(\pi\theta/\alpha) + \rho^{\pi/\alpha} \sin(\pi\vartheta/\alpha) \right)^2} \right] \]
\[ = \frac{1}{4\pi} \log\left[ \frac{r^{2\pi/\alpha} + \rho^{2\pi/\alpha} - 2 r^{\pi/\alpha} \rho^{\pi/\alpha} \cos(\pi(\theta - \vartheta)/\alpha)}{r^{2\pi/\alpha} + \rho^{2\pi/\alpha} - 2 r^{\pi/\alpha} \rho^{\pi/\alpha} \cos(\pi(\theta + \vartheta)/\alpha)} \right] \]
\[ = \frac{1}{4\pi} \log\left[ \frac{(r/\rho)^{\pi/\alpha}/2 + (\rho/r)^{\pi/\alpha}/2 - \cos(\pi(\theta - \vartheta)/\alpha)}{(r/\rho)^{\pi/\alpha}/2 + (\rho/r)^{\pi/\alpha}/2 - \cos(\pi(\theta + \vartheta)/\alpha)} \right] \]
\[ = \frac{1}{4\pi} \log\left[ \frac{\mathrm{e}^{(\pi/\alpha) \log(r/\rho)}/2 + \mathrm{e}^{(\pi/\alpha) \log(\rho/r)}/2 - \cos(\pi(\theta - \vartheta)/\alpha)}{\mathrm{e}^{(\pi/\alpha) \log(r/\rho)}/2 + \mathrm{e}^{(\pi/\alpha) \log(\rho/r)}/2 - \cos(\pi(\theta + \vartheta)/\alpha)} \right] \]
\[ G(r, \theta, \rho, \vartheta) = \frac{1}{4\pi} \log\left[ \frac{\cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) - \cos\left( \frac{\pi}{\alpha}(\theta - \vartheta) \right)}{\cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) - \cos\left( \frac{\pi}{\alpha}(\theta + \vartheta) \right)} \right] \]
Now recall that the solution of
\[ \Delta u = f(\mathbf{x}), \]
subject to the boundary condition,
\[ u(\mathbf{x}) = g(\mathbf{x}), \]
is
\[ u(\mathbf{x}) = \iint f(\boldsymbol{\xi})\, G(\mathbf{x}; \boldsymbol{\xi}) \,\mathrm{d}A_{\boldsymbol{\xi}} + \oint g(\boldsymbol{\xi})\, \nabla_{\boldsymbol{\xi}} G(\mathbf{x}; \boldsymbol{\xi}) \cdot \hat{\mathbf{n}} \,\mathrm{d}s_{\boldsymbol{\xi}}. \]
The normal directions along the lower and upper edges of the sector are $-\hat{\boldsymbol{\vartheta}}$ and $\hat{\boldsymbol{\vartheta}}$, respectively. The gradient in polar coordinates is
\[ \nabla_{\boldsymbol{\xi}} = \hat{\boldsymbol{\rho}} \frac{\partial}{\partial\rho} + \frac{\hat{\boldsymbol{\vartheta}}}{\rho} \frac{\partial}{\partial\vartheta}. \]
We only need to compute the $\hat{\boldsymbol{\vartheta}}$ component of the gradient of $G$. This is
\[ \frac{1}{\rho} \frac{\partial G}{\partial\vartheta} = -\frac{\sin(\pi(\theta - \vartheta)/\alpha)}{4 \alpha \rho \left( \cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) - \cos\left( \frac{\pi}{\alpha}(\theta - \vartheta) \right) \right)} - \frac{\sin(\pi(\theta + \vartheta)/\alpha)}{4 \alpha \rho \left( \cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) - \cos\left( \frac{\pi}{\alpha}(\theta + \vartheta) \right) \right)} \]
Along $\vartheta = 0$, this is
\[ \frac{1}{\rho} G_{\vartheta}(r, \theta, \rho, 0) = -\frac{\sin(\pi\theta/\alpha)}{2 \alpha \rho \left( \cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) - \cos\left( \frac{\pi\theta}{\alpha} \right) \right)}. \]
Along $\vartheta = \alpha$, this is
\[ \frac{1}{\rho} G_{\vartheta}(r, \theta, \rho, \alpha) = \frac{\sin(\pi\theta/\alpha)}{2 \alpha \rho \left( \cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) + \cos\left( \frac{\pi\theta}{\alpha} \right) \right)}. \]
The outward normal on the lower edge is $-\hat{\boldsymbol{\vartheta}}$ and on the upper edge is $+\hat{\boldsymbol{\vartheta}}$, so the solution of our problem is
\[ u(r, \theta) = \int_c^{\infty} \frac{\sin(\pi\theta/\alpha)}{2 \alpha \rho \left( \cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) - \cos\left( \frac{\pi\theta}{\alpha} \right) \right)} \,\mathrm{d}\rho + \int_c^{\infty} \frac{\sin(\pi\theta/\alpha)}{2 \alpha \rho \left( \cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) + \cos\left( \frac{\pi\theta}{\alpha} \right) \right)} \,\mathrm{d}\rho \]
\[ u(r, \theta) = \frac{1}{\alpha} \sin\left( \frac{\pi\theta}{\alpha} \right) \int_c^{\infty} \frac{1}{\rho} \frac{\cosh\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right)}{\cosh^2\left( \frac{\pi}{\alpha} \log\frac{r}{\rho} \right) - \cos^2\left( \frac{\pi\theta}{\alpha} \right)} \,\mathrm{d}\rho \]
With the substitution $x = \log(\rho/r)$,
\[ u(r, \theta) = \frac{1}{\alpha} \sin\left( \frac{\pi\theta}{\alpha} \right) \int_{\log(c/r)}^{\infty} \frac{\cosh(\pi x/\alpha)}{\cosh^2(\pi x/\alpha) - \cos^2(\pi\theta/\alpha)} \,\mathrm{d}x \]
\[ u(r, \theta) = \frac{2}{\alpha} \sin\left( \frac{\pi\theta}{\alpha} \right) \int_{\log(c/r)}^{\infty} \frac{\cosh(\pi x/\alpha)}{\cosh(2\pi x/\alpha) - \cos(2\pi\theta/\alpha)} \,\mathrm{d}x \]
Solution 46.11
First consider the Green function for
\[ u_t - \kappa u_{xx} = 0, \qquad u(x, 0) = f(x). \]
The differential equation and initial condition is
\[ G_t = \kappa G_{xx}, \qquad G(x, 0; \xi) = \delta(x - \xi). \]
The Green function is a solution of the homogeneous heat equation for the initial condition of a unit amount of heat concentrated at the point $x = \xi$. You can verify that the Green function is a solution of the heat equation for $t > 0$ and that it has the property:
\[ \int_{-\infty}^{\infty} G(x, t; \xi) \,\mathrm{d}x = 1, \qquad \text{for } t > 0. \]
This property demonstrates that the total amount of heat is the constant 1. At time $t = 0$ the heat is concentrated at the point $x = \xi$. As time increases, the heat diffuses out from this point.
The solution for $u(x, t)$ is the linear combination of the Green functions that satisfies the initial condition $u(x, 0) = f(x)$. This linear combination is
\[ u(x, t) = \int_{-\infty}^{\infty} G(x, t; \xi)\, f(\xi) \,\mathrm{d}\xi. \]
$G(x, t; 1)$ and $G(x, t; -1)$ are plotted in Figure 46.3 for the domain $t \in [1/100 \ldots 1/4]$, $x \in [-2 \ldots 2]$ and $\kappa = 1$.
Now we consider the problem
\[ u_t = \kappa u_{xx}, \qquad u(x, 0) = f(x) \text{ for } x > 0, \qquad u(0, t) = 0. \]
Note that the solution of
\[ G_t = \kappa G_{xx}, \qquad x > 0, \quad t > 0, \qquad G(x, 0; \xi) = \delta(x - \xi) - \delta(x + \xi), \]
satisfies the boundary condition $G(0, t; \xi) = 0$. We write the solution as the difference of infinite space Green functions.
\[ G(x, t; \xi) = \frac{1}{\sqrt{4\pi\kappa t}} \,\mathrm{e}^{-(x-\xi)^2/(4\kappa t)} - \frac{1}{\sqrt{4\pi\kappa t}} \,\mathrm{e}^{-(x+\xi)^2/(4\kappa t)} = \frac{1}{\sqrt{4\pi\kappa t}} \left( \mathrm{e}^{-(x-\xi)^2/(4\kappa t)} - \mathrm{e}^{-(x+\xi)^2/(4\kappa t)} \right) \]
Figure 46.3: $G(x, t; 1)$ and $G(x, t; -1)$
\[ G(x, t; \xi) = \frac{1}{\sqrt{\pi\kappa t}} \,\mathrm{e}^{-(x^2+\xi^2)/(4\kappa t)} \sinh\left( \frac{x\xi}{2\kappa t} \right) \]
Next we consider the problem
\[ u_t = \kappa u_{xx}, \qquad u(x, 0) = f(x) \text{ for } x > 0, \qquad u_x(0, t) = 0. \]
Note that the solution of
\[ G_t = \kappa G_{xx}, \qquad x > 0, \quad t > 0, \qquad G(x, 0; \xi) = \delta(x - \xi) + \delta(x + \xi), \]
satisfies the boundary condition $G_x(0, t; \xi) = 0$. We write the solution as the sum of infinite space Green functions.
\[ G(x, t; \xi) = \frac{1}{\sqrt{4\pi\kappa t}} \,\mathrm{e}^{-(x-\xi)^2/(4\kappa t)} + \frac{1}{\sqrt{4\pi\kappa t}} \,\mathrm{e}^{-(x+\xi)^2/(4\kappa t)} \]
\[ G(x, t; \xi) = \frac{1}{\sqrt{\pi\kappa t}} \,\mathrm{e}^{-(x^2+\xi^2)/(4\kappa t)} \cosh\left( \frac{x\xi}{2\kappa t} \right) \]
The Green functions for the two boundary conditions are shown in Figure 46.4.
Figure 46.4: Green functions for the boundary conditions $u(0, t) = 0$ and $u_x(0, t) = 0$.
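The sinh and cosh closed forms above are just algebraic rewrites of the image sums; this is easy to verify at a point. The sketch below (my own check, hypothetical parameter values) compares the image sums and the closed forms, and confirms the Dirichlet condition at $x = 0$:

```python
import math

def g_free(x, xi, t, kappa=1.0):
    """Infinite space heat kernel."""
    return math.exp(-(x - xi)**2 / (4 * kappa * t)) / math.sqrt(4 * math.pi * kappa * t)

def g_dirichlet(x, xi, t, kappa=1.0):
    """Closed form for the half-line with u(0,t) = 0 (difference of images)."""
    return (math.exp(-(x * x + xi * xi) / (4 * kappa * t))
            * math.sinh(x * xi / (2 * kappa * t)) / math.sqrt(math.pi * kappa * t))

def g_neumann(x, xi, t, kappa=1.0):
    """Closed form for the half-line with u_x(0,t) = 0 (sum of images)."""
    return (math.exp(-(x * x + xi * xi) / (4 * kappa * t))
            * math.cosh(x * xi / (2 * kappa * t)) / math.sqrt(math.pi * kappa * t))

x, xi, t, kappa = 0.8, 0.5, 0.1, 0.7
diff = g_free(x, xi, t, kappa) - g_free(x, -xi, t, kappa)
summ = g_free(x, xi, t, kappa) + g_free(x, -xi, t, kappa)
assert abs(g_dirichlet(x, xi, t, kappa) - diff) < 1e-12
assert abs(g_neumann(x, xi, t, kappa) - summ) < 1e-12
assert abs(g_dirichlet(0.0, xi, t, kappa)) < 1e-15    # G = 0 at x = 0
```

The factor $1/\sqrt{\pi\kappa t}$ (rather than $1/\sqrt{4\pi\kappa t}$) absorbs the 2 from $\mathrm{e}^{u} \mp \mathrm{e}^{-u} = 2\sinh u,\ 2\cosh u$.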
Solution 46.12
a) The Green function problem is
\[ G_{tt} - c^2 G_{xx} = \delta(t - \tau)\,\delta(x - \xi), \qquad 0 < x < L, \quad t > 0, \]
\[ G(0, t; \xi, \tau) = G_x(L, t; \xi, \tau) = 0, \qquad G(x, t; \xi, \tau) = 0 \text{ for } t < \tau. \]
The condition that $G$ is zero for $t < \tau$ makes this a causal Green function. We solve this problem by expanding $G$ in a series of eigenfunctions of the $x$ variable. The coefficients in the expansion will be functions of $t$. First we find the eigenfunctions of $x$ in the homogeneous problem. We substitute the separation of variables $u = X(x) T(t)$ into the homogeneous partial differential equation.
\[ X T'' = c^2 X'' T \]
\[ \frac{T''}{c^2 T} = \frac{X''}{X} = -\lambda^2 \]
The eigenvalue problem is
\[ X'' = -\lambda^2 X, \qquad X(0) = X'(L) = 0, \]
which has the solutions,
\[ \lambda_n = \frac{(2n-1)\pi}{2L}, \qquad X_n = \sin\left( \frac{(2n-1)\pi x}{2L} \right), \qquad n \in \mathbb{N}. \]
The series expansion of the Green function has the form,
\[ G(x, t; \xi, \tau) = \sum_{n=1}^{\infty} g_n(t) \sin\left( \frac{(2n-1)\pi x}{2L} \right). \]
We determine the coefficients by substituting the expansion into the Green function differential equation.
\[ G_{tt} - c^2 G_{xx} = \delta(x - \xi)\,\delta(t - \tau) \]
\[ \sum_{n=1}^{\infty} \left( g_n''(t) + \left( \frac{(2n-1)\pi c}{2L} \right)^2 g_n(t) \right) \sin\left( \frac{(2n-1)\pi x}{2L} \right) = \delta(x - \xi)\,\delta(t - \tau) \]
We need to expand the right side of the equation in the sine series
\[ \delta(x - \xi)\,\delta(t - \tau) = \sum_{n=1}^{\infty} d_n(t) \sin\left( \frac{(2n-1)\pi x}{2L} \right) \]
\[ d_n(t) = \frac{2}{L} \int_0^L \delta(x - \xi)\,\delta(t - \tau) \sin\left( \frac{(2n-1)\pi x}{2L} \right) \mathrm{d}x \]
\[ d_n(t) = \frac{2}{L} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \delta(t - \tau) \]
By equating coefficients in the sine series, we obtain ordinary differential equation Green function problems for the $g_n$'s.
\[ g_n''(t; \tau) + \left( \frac{(2n-1)\pi c}{2L} \right)^2 g_n(t; \tau) = \frac{2}{L} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \delta(t - \tau) \]
From the causality condition for $G$, we have the causality conditions for the $g_n$'s,
\[ g_n(t; \tau) = g_n'(t; \tau) = 0 \quad \text{for } t < \tau. \]
The continuity and jump conditions for the $g_n$ are
\[ g_n(\tau^+; \tau) = 0, \qquad g_n'(\tau^+; \tau) = \frac{2}{L} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right). \]
A set of homogeneous solutions of the ordinary differential equation are
\[ \left\{ \cos\left( \frac{(2n-1)\pi c t}{2L} \right), \ \sin\left( \frac{(2n-1)\pi c t}{2L} \right) \right\} \]
Since the continuity and jump conditions are given at the point $t = \tau$, a handy set of solutions to use for this problem is the fundamental set of solutions at that point:
\[ \left\{ \cos\left( \frac{(2n-1)\pi c (t-\tau)}{2L} \right), \ \frac{2L}{(2n-1)\pi c} \sin\left( \frac{(2n-1)\pi c (t-\tau)}{2L} \right) \right\} \]
The solution that satisfies the causality condition and the continuity and jump conditions is,
\[ g_n(t; \tau) = \frac{4}{(2n-1)\pi c} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \sin\left( \frac{(2n-1)\pi c (t-\tau)}{2L} \right) H(t - \tau). \]
Substituting this into the sum yields,
\[ G(x, t; \xi, \tau) = \frac{4}{\pi c} H(t - \tau) \sum_{n=1}^{\infty} \frac{1}{2n-1} \sin\left( \frac{(2n-1)\pi \xi}{2L} \right) \sin\left( \frac{(2n-1)\pi c (t-\tau)}{2L} \right) \sin\left( \frac{(2n-1)\pi x}{2L} \right). \]
We use trigonometric identities to write this in terms of traveling waves.
\[ G(x, t; \xi, \tau) = \frac{1}{\pi c} H(t - \tau) \sum_{n=1}^{\infty} \frac{1}{2n-1} \left( -\sin\left( \frac{(2n-1)\pi ((x-\xi) - c(t-\tau))}{2L} \right) + \sin\left( \frac{(2n-1)\pi ((x-\xi) + c(t-\tau))}{2L} \right) \right. \]
\[ \left. \qquad + \sin\left( \frac{(2n-1)\pi ((x+\xi) - c(t-\tau))}{2L} \right) - \sin\left( \frac{(2n-1)\pi ((x+\xi) + c(t-\tau))}{2L} \right) \right) \]
b) Now we consider the Green function with the boundary conditions,
\[ u_x(0, t) = u_x(L, t) = 0. \]
First we find the eigenfunctions in $x$ of the homogeneous problem. The eigenvalue problem is
\[ X'' = -\lambda^2 X, \qquad X'(0) = X'(L) = 0, \]
which has the solutions,
\[ \lambda_0 = 0, \quad X_0 = 1, \qquad \lambda_n = \frac{n\pi}{L}, \quad X_n = \cos\left( \frac{n\pi x}{L} \right), \quad n = 1, 2, \ldots. \]
The series expansion of the Green function for $t > \tau$ has the form,
\[ G(x, t; \xi, \tau) = \frac{1}{2} g_0(t) + \sum_{n=1}^{\infty} g_n(t) \cos\left( \frac{n\pi x}{L} \right). \]
(Note the factor of $1/2$ in front of $g_0(t)$. With this, the integral formulas for all the coefficients are the same.) We determine the coefficients by substituting the expansion into the partial differential equation.
\[ G_{tt} - c^2 G_{xx} = \delta(x - \xi)\,\delta(t - \tau) \]
\[ \frac{1}{2} g_0''(t) + \sum_{n=1}^{\infty} \left( g_n''(t) + \left( \frac{n\pi c}{L} \right)^2 g_n(t) \right) \cos\left( \frac{n\pi x}{L} \right) = \delta(x - \xi)\,\delta(t - \tau) \]
We expand the right side of the equation in the cosine series.
\[ \delta(x - \xi)\,\delta(t - \tau) = \frac{1}{2} d_0(t) + \sum_{n=1}^{\infty} d_n(t) \cos\left( \frac{n\pi x}{L} \right) \]
\[ d_n(t) = \frac{2}{L} \int_0^L \delta(x - \xi)\,\delta(t - \tau) \cos\left( \frac{n\pi x}{L} \right) \mathrm{d}x = \frac{2}{L} \cos\left( \frac{n\pi \xi}{L} \right) \delta(t - \tau) \]
By equating coefficients in the cosine series, we obtain ordinary differential equations for the $g_n$.
\[ g_n''(t; \tau) + \left( \frac{n\pi c}{L} \right)^2 g_n(t; \tau) = \frac{2}{L} \cos\left( \frac{n\pi \xi}{L} \right) \delta(t - \tau), \qquad n = 0, 1, 2, \ldots \]
From the causality condition for $G$, we have the causality conditions for the $g_n$,
\[ g_n(t; \tau) = g_n'(t; \tau) = 0 \quad \text{for } t < \tau. \]
The continuity and jump conditions for the $g_n$ are
\[ g_n(\tau^+; \tau) = 0, \qquad g_n'(\tau^+; \tau) = \frac{2}{L} \cos\left( \frac{n\pi \xi}{L} \right). \]
The homogeneous solutions of the ordinary differential equation for $n = 0$ and $n > 0$ are respectively,
\[ \{1, \ t\}, \qquad \left\{ \cos\left( \frac{n\pi c t}{L} \right), \ \sin\left( \frac{n\pi c t}{L} \right) \right\}. \]
Since the continuity and jump conditions are given at the point $t = \tau$, a handy set of solutions to use for this problem is the fundamental set of solutions at that point:
\[ \{1, \ t - \tau\}, \qquad \left\{ \cos\left( \frac{n\pi c (t-\tau)}{L} \right), \ \frac{L}{n\pi c} \sin\left( \frac{n\pi c (t-\tau)}{L} \right) \right\}. \]
The solutions that satisfy the causality condition and the continuity and jump conditions are,
\[ g_0(t) = \frac{2}{L} (t - \tau) H(t - \tau), \qquad g_n(t) = \frac{2}{n\pi c} \cos\left( \frac{n\pi \xi}{L} \right) \sin\left( \frac{n\pi c (t-\tau)}{L} \right) H(t - \tau). \]
Substituting this into the sum yields,
\[ G(x, t; \xi, \tau) = H(t - \tau) \left( \frac{t - \tau}{L} + \frac{2}{\pi c} \sum_{n=1}^{\infty} \frac{1}{n} \cos\left( \frac{n\pi \xi}{L} \right) \sin\left( \frac{n\pi c (t-\tau)}{L} \right) \cos\left( \frac{n\pi x}{L} \right) \right). \]
We can write this as the sum of traveling waves.
\[ G(x, t; \xi, \tau) = \frac{t - \tau}{L} H(t - \tau) + \frac{1}{2\pi c} H(t - \tau) \sum_{n=1}^{\infty} \frac{1}{n} \left( -\sin\left( \frac{n\pi ((x-\xi) - c(t-\tau))}{L} \right) + \sin\left( \frac{n\pi ((x-\xi) + c(t-\tau))}{L} \right) \right. \]
\[ \left. \qquad - \sin\left( \frac{n\pi ((x+\xi) - c(t-\tau))}{L} \right) + \sin\left( \frac{n\pi ((x+\xi) + c(t-\tau))}{L} \right) \right) \]
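The standing-wave and traveling-wave forms of part b) are algebraically identical term by term, so their truncated sums should agree to rounding error. The sketch below (my own check, hypothetical sample points, $L = c = 1$) compares them:

```python
import math

def G_standing(x, t, xi, tau, L=1.0, c=1.0, N=200):
    """Truncated eigenfunction (standing-wave) form of part b)."""
    if t <= tau:
        return 0.0
    s = (t - tau) / L
    for n in range(1, N + 1):
        s += (2 / (n * math.pi * c) * math.cos(n * math.pi * xi / L)
              * math.sin(n * math.pi * c * (t - tau) / L) * math.cos(n * math.pi * x / L))
    return s

def G_traveling(x, t, xi, tau, L=1.0, c=1.0, N=200):
    """Same Green function rewritten as traveling waves."""
    if t <= tau:
        return 0.0
    s = (t - tau) / L
    for n in range(1, N + 1):
        a = n * math.pi / L
        s += (-math.sin(a * ((x - xi) - c * (t - tau)))
              + math.sin(a * ((x - xi) + c * (t - tau)))
              - math.sin(a * ((x + xi) - c * (t - tau)))
              + math.sin(a * ((x + xi) + c * (t - tau)))) / (2 * math.pi * c * n)
    return s

for (x, t, xi, tau) in [(0.3, 0.9, 0.6, 0.2), (0.1, 1.4, 0.8, 0.5)]:
    assert abs(G_standing(x, t, xi, tau) - G_traveling(x, t, xi, tau)) < 1e-9
```

This exercises exactly the product-to-sum identity used in the rewrite.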
Solution 46.13
First we derive Green's identity for this problem. We consider the integral of $u \mathcal{L}[v] - \mathcal{L}[u] v$ on the domain $0 < x < 1$, $0 < t < T$.
\[ \int_0^T \!\!\int_0^1 \left( u \mathcal{L}[v] - \mathcal{L}[u] v \right) \mathrm{d}x \,\mathrm{d}t = \int_0^T \!\!\int_0^1 \left( u (v_{tt} - c^2 v_{xx}) - (u_{tt} - c^2 u_{xx}) v \right) \mathrm{d}x \,\mathrm{d}t \]
\[ = \int_0^T \!\!\int_0^1 \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial t} \right) \cdot \left( -c^2 (u v_x - u_x v),\ u v_t - u_t v \right) \mathrm{d}x \,\mathrm{d}t \]
Now we can use the divergence theorem to write this as an integral along the boundary of the domain.
\[ = \oint_{\partial\Omega} \left( -c^2 (u v_x - u_x v),\ u v_t - u_t v \right) \cdot \hat{\mathbf{n}} \,\mathrm{d}s \]
The domain and the outward normal vectors are shown in Figure 46.5.
[Figure: the rectangle $0 < x < 1$, $0 < t < T$ with outward normals $\hat{\mathbf{n}} = (0, -1)$ on $t = 0$, $(1, 0)$ on $x = 1$, $(0, 1)$ on $t = T$, and $(-1, 0)$ on $x = 0$.]
Figure 46.5: Outward normal vectors of the domain.
Writing out the boundary integrals, Greens identity for this problem is,
_
T
0
_
1
0
_
u(v
tt
c
2
v
xx
) (u
tt
c
2
u
xx
)v
_
dx dt =
_
1
0
(uv
t
u
t
v)
t=0
dx
+
_
0
1
(uv
t
u
t
v)
t=T
dx c
2
_
T
0
(uv
x
u
x
v)
x=1
dt +c
2
_
1
T
(uv
x
u
x
v)
x=0
dt
The Green function problem is
G
tt
c
2
G
xx
= (x )(t ), 0 < x, < 1, t, > 0,
G
x
(0, t; , ) = G
x
(1, t; , ) = 0, t > 0, G(x, t; , ) = 0 for t < .
1847
If we consider G as a function of (, ) with (x, t) as parameters, then it satises:
G

c
2
G

= (x )(t ),
G

(x, t; 0, ) = G

(x, t; 1, ) = 0, > 0, G(x, t; , ) = 0 for > t.


Now we apply Green's identity for u = u(\xi,\tau), (the solution of the wave equation), and v = G(x,t;\xi,\tau), (the Green function), and integrate in the (\xi,\tau) variables. The left side of Green's identity becomes:

\int_0^T \int_0^1 \left[ u(G_{\tau\tau} - c^2 G_{\xi\xi}) - (u_{\tau\tau} - c^2 u_{\xi\xi}) G \right] d\xi\,d\tau
= \int_0^T \int_0^1 \left( u\,\delta(x-\xi)\delta(t-\tau) - (0) G \right) d\xi\,d\tau
= u(x,t).

Since the normal derivative of u and G vanish on the sides of the domain, the integrals along \xi = 0 and \xi = 1 in Green's identity vanish. If we take T > t, then G is zero for \tau = T and the integral along \tau = T vanishes. The one remaining integral is

-\int_0^1 \left( u(\xi,0) G_\tau(x,t;\xi,0) - u_\tau(\xi,0) G(x,t;\xi,0) \right) d\xi.

Thus Green's identity allows us to write the solution of the inhomogeneous problem.

u(x,t) = \int_0^1 \left( u_\tau(\xi,0) G(x,t;\xi,0) - u(\xi,0) G_\tau(x,t;\xi,0) \right) d\xi.

With the specified initial conditions this becomes

u(x,t) = \int_0^1 \left( G(x,t;\xi,0) - \xi^2 (1-\xi)^2 G_\tau(x,t;\xi,0) \right) d\xi.
Now we substitute in the Green function that we found in the previous exercise. The Green function and its derivative are,

G(x,t;\xi,0) = t + \sum_{n=1}^\infty \frac{2}{n\pi c} \cos(n\pi\xi) \sin(n\pi c t) \cos(n\pi x),

G_\tau(x,t;\xi,0) = -1 - 2 \sum_{n=1}^\infty \cos(n\pi\xi) \cos(n\pi c t) \cos(n\pi x).

The integral of the first term is,

\int_0^1 \left( t + \sum_{n=1}^\infty \frac{2}{n\pi c} \cos(n\pi\xi) \sin(n\pi c t) \cos(n\pi x) \right) d\xi = t.

The integral of the second term is

\int_0^1 \xi^2 (1-\xi)^2 \left( 1 + 2 \sum_{n=1}^\infty \cos(n\pi\xi) \cos(n\pi c t) \cos(n\pi x) \right) d\xi = \frac{1}{30} - 3 \sum_{n=1}^\infty \frac{1}{n^4 \pi^4} \cos(2n\pi x) \cos(2n\pi c t).

Thus the solution is

u(x,t) = \frac{1}{30} + t - 3 \sum_{n=1}^\infty \frac{1}{n^4 \pi^4} \cos(2n\pi x) \cos(2n\pi c t).
For c = 1, the solution at x = 3/4, t = 7/2 is,

u(3/4, 7/2) = \frac{1}{30} + \frac{7}{2} - 3 \sum_{n=1}^\infty \frac{1}{n^4 \pi^4} \cos(3n\pi/2) \cos(7n\pi).

Note that the summand is nonzero only for even terms. Replacing n with 2n,

u(3/4, 7/2) = \frac{53}{15} - \frac{3}{16\pi^4} \sum_{n=1}^\infty \frac{1}{n^4} \cos(3n\pi) \cos(14n\pi)
= \frac{53}{15} - \frac{3}{16\pi^4} \sum_{n=1}^\infty \frac{(-1)^n}{n^4}
= \frac{53}{15} + \frac{3}{16\pi^4} \cdot \frac{7\pi^4}{720}

u(3/4, 7/2) = \frac{13575}{3840} = \frac{905}{256}
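Since the series converges like 1/n^4, a short partial sum reproduces the closed-form value above to high accuracy. A minimal numerical sketch (the function name and truncation length are arbitrary choices):

```python
import math

def u(x, t, c=1.0, terms=2000):
    """Partial sum of u(x,t) = 1/30 + t - 3 sum_{n>=1} cos(2n pi x) cos(2n pi c t)/(n^4 pi^4)."""
    s = sum(math.cos(2*n*math.pi*x) * math.cos(2*n*math.pi*c*t) / n**4
            for n in range(1, terms + 1))
    return 1/30 + t - 3*s / math.pi**4

value = u(0.75, 3.5)  # should match 53/15 + 7/3840 = 13575/3840
```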
Chapter 47
Conformal Mapping
47.1 Exercises

Exercise 47.1
\zeta = \xi + i\eta is an analytic function of z, \zeta = \zeta(z). We assume that \zeta'(z) is nonzero on the domain of interest. u(x,y) is an arbitrary smooth function of x and y. When expressed in terms of \xi and \eta, u(x,y) = \Phi(\xi,\eta). In Exercise 10.13 we showed that

\frac{\partial^2\Phi}{\partial\xi^2} + \frac{\partial^2\Phi}{\partial\eta^2} = \left|\frac{d\zeta}{dz}\right|^{-2} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right).

1. Show that if u satisfies Laplace's equation in the z-plane,
u_{xx} + u_{yy} = 0,
then \Phi satisfies Laplace's equation in the \zeta-plane,
\Phi_{\xi\xi} + \Phi_{\eta\eta} = 0.

2. Show that if u satisfies Helmholtz's equation in the z-plane,
u_{xx} + u_{yy} = \lambda u,
then in the \zeta-plane \Phi satisfies
\Phi_{\xi\xi} + \Phi_{\eta\eta} = \lambda \left|\frac{dz}{d\zeta}\right|^2 \Phi.

3. Show that if u satisfies Poisson's equation in the z-plane,
u_{xx} + u_{yy} = f(x,y),
then \Phi satisfies Poisson's equation in the \zeta-plane,
\Phi_{\xi\xi} + \Phi_{\eta\eta} = \left|\frac{dz}{d\zeta}\right|^2 \phi(\xi,\eta),
where \phi(\xi,\eta) = f(x,y).

4. Show that if in the z-plane, u satisfies the Green function problem,
u_{xx} + u_{yy} = \delta(x-x_0)\delta(y-y_0),
then in the \zeta-plane, \Phi satisfies the Green function problem,
\Phi_{\xi\xi} + \Phi_{\eta\eta} = \delta(\xi-\xi_0)\delta(\eta-\eta_0).
Exercise 47.2
A semi-circular rod of infinite extent is maintained at temperature T = 0 on the flat side and at T = 1 on the curved surface:
x^2 + y^2 = 1, \quad y > 0.
Use the conformal mapping
w = \xi + i\eta = \frac{1+z}{1-z}, \quad z = x + iy,
to formulate the problem in terms of \xi and \eta. Solve the problem in terms of these variables. This problem is solved with an eigenfunction expansion in Exercise ??. Verify that the two solutions agree.
Exercise 47.3
Consider Laplace's equation on the domain -\infty < x < \infty, 0 < y < \pi, subject to the mixed boundary conditions,
u = 1 on y = 0, x > 0,
u = 0 on y = \pi, x > 0,
u_y = 0 on y = 0 and y = \pi, x < 0.
Because of the mixed boundary conditions, (u and u_y are given on separate parts of the same boundary), this problem cannot be solved with separation of variables. Verify that the conformal map,
\zeta = \cosh^{-1}(e^z),
with z = x + iy, \zeta = \xi + i\eta maps the infinite strip into the semi-infinite strip, \xi > 0, 0 < \eta < \pi. Solve Laplace's equation with the appropriate boundary conditions in the \zeta plane by inspection. Write the solution u in terms of x and y.
47.2 Hints

Hint 47.1

Hint 47.2
Show that w = (1+z)/(1-z) maps the semi-disc, 0 < r < 1, 0 < \theta < \pi to the first quadrant of the w plane. Solve the problem for v(\xi,\eta) by taking Fourier sine transforms in \xi and \eta.
To show that the solution for v(\xi,\eta) is equivalent to the series expression for u(r,\theta), first find an analytic function g(w) of which v(\xi,\eta) is the imaginary part. Change variables to z to obtain the analytic function f(z) = g(w). Expand f(z) in a Taylor series and take the imaginary part to show the equivalence of the solutions.

Hint 47.3
To see how the boundary is mapped, consider the map,
z = \log(\cosh\zeta).
The problem in the \zeta plane is,
v_{\xi\xi} + v_{\eta\eta} = 0, \quad \xi > 0, \quad 0 < \eta < \pi,
v_\xi(0,\eta) = 0, \quad v(\xi,0) = 1, \quad v(\xi,\pi) = 0.
To solve this, find a plane that satisfies the boundary conditions.
47.3 Solutions

Solution 47.1
We use the result of Exercise 10.13:

\frac{\partial^2\Phi}{\partial\xi^2} + \frac{\partial^2\Phi}{\partial\eta^2} = \left|\frac{d\zeta}{dz}\right|^{-2} \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right).

1.
u_{xx} + u_{yy} = 0
\left|\frac{d\zeta}{dz}\right|^2 (\Phi_{\xi\xi} + \Phi_{\eta\eta}) = 0
\Phi_{\xi\xi} + \Phi_{\eta\eta} = 0

2.
u_{xx} + u_{yy} = \lambda u
\left|\frac{d\zeta}{dz}\right|^2 (\Phi_{\xi\xi} + \Phi_{\eta\eta}) = \lambda\Phi
\Phi_{\xi\xi} + \Phi_{\eta\eta} = \lambda \left|\frac{dz}{d\zeta}\right|^2 \Phi

3.
u_{xx} + u_{yy} = f(x,y)
\left|\frac{d\zeta}{dz}\right|^2 (\Phi_{\xi\xi} + \Phi_{\eta\eta}) = \phi(\xi,\eta)
\Phi_{\xi\xi} + \Phi_{\eta\eta} = \left|\frac{dz}{d\zeta}\right|^2 \phi(\xi,\eta)

4. The Jacobian of the mapping is
J = x_\xi y_\eta - x_\eta y_\xi = x_\xi^2 + y_\xi^2,
using the Cauchy-Riemann equations x_\eta = -y_\xi, y_\eta = x_\xi. Thus the Dirac delta function on the right side gets mapped to
\frac{1}{x_\xi^2 + y_\xi^2} \,\delta(\xi-\xi_0)\delta(\eta-\eta_0).
Next we show that |dz/d\zeta|^2 has the same value as the Jacobian.
\left|\frac{dz}{d\zeta}\right|^2 = (x_\xi + i y_\xi)(x_\xi - i y_\xi) = x_\xi^2 + y_\xi^2
Now we transform the Green function problem.
u_{xx} + u_{yy} = \delta(x-x_0)\delta(y-y_0)
\left|\frac{d\zeta}{dz}\right|^2 (\Phi_{\xi\xi} + \Phi_{\eta\eta}) = \frac{1}{x_\xi^2 + y_\xi^2} \,\delta(\xi-\xi_0)\delta(\eta-\eta_0)
\Phi_{\xi\xi} + \Phi_{\eta\eta} = \delta(\xi-\xi_0)\delta(\eta-\eta_0)
Solution 47.2
The mapping,
w = \frac{1+z}{1-z},
maps the unit semi-disc to the first quadrant of the complex plane.
We write the mapping in terms of r and \theta.
\xi + i\eta = \frac{1 + r e^{i\theta}}{1 - r e^{i\theta}} = \frac{1 - r^2 + i\,2r\sin\theta}{1 + r^2 - 2r\cos\theta}
\xi = \frac{1 - r^2}{1 + r^2 - 2r\cos\theta}, \qquad \eta = \frac{2r\sin\theta}{1 + r^2 - 2r\cos\theta}
Consider a semi-circle of radius r. The image of this under the conformal mapping is a semi-circle of radius \frac{2r}{1-r^2} and center \frac{1+r^2}{1-r^2} in the first quadrant of the w plane. This semi-circle intersects the \xi axis at \frac{1-r}{1+r} and \frac{1+r}{1-r}. As r ranges from zero to one, these semi-circles cover the first quadrant of the w plane. (See Figure 47.1.)

[Figure 47.1: The conformal map, w = (1+z)/(1-z).]

We also note how the boundary of the semi-disc is mapped to the boundary of the first quadrant of the w plane. The line segment \theta = 0 is mapped to the real axis \xi > 1. The line segment \theta = \pi is mapped to the real axis 0 < \xi < 1. Finally, the semi-circle r = 1 is mapped to the positive imaginary axis.
The problem for v(\xi,\eta) is,
v_{\xi\xi} + v_{\eta\eta} = 0, \quad \xi > 0, \quad \eta > 0,
v(\xi,0) = 0, \quad v(0,\eta) = 1.
We will solve this problem with the Fourier sine transform. We take the Fourier sine transform of the partial differential equation, first in \xi and then in \eta.

-\alpha^2 \hat v(\alpha,\eta) + \frac{\alpha}{\pi} v(0,\eta) + \hat v_{\eta\eta}(\alpha,\eta) = 0, \quad \hat v(\alpha,0) = 0
\hat v_{\eta\eta}(\alpha,\eta) - \alpha^2 \hat v(\alpha,\eta) + \frac{\alpha}{\pi} = 0, \quad \hat v(\alpha,0) = 0
-\beta^2 \hat{\hat v}(\alpha,\beta) - \alpha^2 \hat{\hat v}(\alpha,\beta) + \frac{\alpha}{\pi^2\beta} = 0
\hat{\hat v}(\alpha,\beta) = \frac{\alpha}{\pi^2 \beta (\alpha^2 + \beta^2)}

Now we utilize the Fourier sine transform pair,
\mathcal{F}_s\left[ e^{-cx} \right] = \frac{\alpha/\pi}{\alpha^2 + c^2},
to take the inverse sine transform in \alpha.
\hat v(\xi,\beta) = \frac{1}{\pi\beta} e^{-\beta\xi}
With the Fourier sine transform pair,
\mathcal{F}_s\left[ \frac{2}{\pi} \arctan\left(\frac{x}{c}\right) \right] = \frac{1}{\pi\beta} e^{-c\beta},
we take the inverse sine transform in \beta to obtain the solution.
v(\xi,\eta) = \frac{2}{\pi} \arctan\left(\frac{\eta}{\xi}\right)
Since v is harmonic, it is the imaginary part of an analytic function g(w). By inspection, we see that this function is
g(w) = \frac{2}{\pi} \log(w).
We change variables to z, f(z) = g(w).
f(z) = \frac{2}{\pi} \log\left(\frac{1+z}{1-z}\right)
We expand f(z) in a Taylor series about z = 0,
f(z) = \frac{4}{\pi} \sum_{\text{odd } n \ge 1} \frac{z^n}{n},
and write the result in terms of r and \theta, z = r e^{i\theta}.
f(z) = \frac{4}{\pi} \sum_{\text{odd } n \ge 1} \frac{r^n e^{i n\theta}}{n}
u(r,\theta) is the imaginary part of f(z).
u(r,\theta) = \frac{4}{\pi} \sum_{\text{odd } n \ge 1} \frac{1}{n} r^n \sin(n\theta)
This demonstrates that the solutions obtained with conformal mapping and with an eigenfunction expansion in Exercise ?? agree.
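The equivalence can also be checked numerically: the arctangent solution evaluated at the image point w = (1+z)/(1-z) should match a partial sum of the sine series at (r, theta). A sketch (truncation length and sample points are arbitrary choices; for interior points of the semi-disc, w lies in the first quadrant, so atan2 agrees with arctan(eta/xi)):

```python
import cmath
import math

def u_series(r, theta, terms=201):
    """Partial sum of the eigenfunction expansion (4/pi) sum over odd n of r^n sin(n theta)/n."""
    return 4/math.pi * sum(r**n * math.sin(n*theta) / n for n in range(1, terms, 2))

def u_conformal(r, theta):
    """Map z = r e^{i theta} to w = (1+z)/(1-z), then evaluate v = (2/pi) arctan(eta/xi)."""
    w = (1 + cmath.rect(r, theta)) / (1 - cmath.rect(r, theta))
    return 2/math.pi * math.atan2(w.imag, w.real)
```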
Solution 47.3
Instead of working with the conformal map from the z plane to the \zeta plane,
\zeta = \cosh^{-1}(e^z),
it will be more convenient to work with the inverse map,
z = \log(\cosh\zeta),
which maps the semi-infinite strip to the infinite one. We determine how the boundary of the domain is mapped so that we know the appropriate boundary conditions for the semi-infinite strip domain.

A : \{\xi > 0, \eta = 0\} \to \{\log(\cosh\xi) : \xi > 0\} = \{z : x > 0, y = 0\}
B : \{\xi > 0, \eta = \pi\} \to \{\log(-\cosh\xi) : \xi > 0\} = \{z : x > 0, y = \pi\}
C : \{\xi = 0, 0 < \eta < \pi/2\} \to \{\log(\cos\eta) : 0 < \eta < \pi/2\} = \{z : x < 0, y = 0\}
D : \{\xi = 0, \pi/2 < \eta < \pi\} \to \{\log(\cos\eta) : \pi/2 < \eta < \pi\} = \{z : x < 0, y = \pi\}

From the mapping of the boundary, we see that the solution v(\xi,\eta) = u(x,y), is 1 on the bottom of the semi-infinite strip, 0 on the top. The normal derivative of v vanishes on the vertical boundary. See Figure 47.2.

[Figure 47.2: The mapping of the boundary conditions under z = log(cosh zeta): u = 1 on y = 0, x > 0; u = 0 on y = pi, x > 0; u_y = 0 on y = 0 and y = pi, x < 0 correspond to v = 1 on eta = 0, v = 0 on eta = pi, and v_xi = 0 on xi = 0.]
In the \zeta plane, the problem is,
v_{\xi\xi} + v_{\eta\eta} = 0, \quad \xi > 0, \quad 0 < \eta < \pi,
v_\xi(0,\eta) = 0, \quad v(\xi,0) = 1, \quad v(\xi,\pi) = 0.
By inspection, we see that the solution of this problem is,
v(\xi,\eta) = 1 - \frac{\eta}{\pi}.
The solution in the z plane is
u(x,y) = 1 - \frac{1}{\pi} \Im\left( \cosh^{-1}(e^z) \right),
where z = x + iy. We will find the imaginary part of \cosh^{-1}(e^z) in order to write this explicitly in terms of x and y. Recall that we can write the \cosh^{-1} in terms of the logarithm.
\cosh^{-1}(w) = \log\left( w + \sqrt{w^2 - 1} \right)
\cosh^{-1}(e^z) = \log\left( e^z + \sqrt{e^{2z} - 1} \right) = \log\left( e^z \left( 1 + \sqrt{1 - e^{-2z}} \right) \right) = z + \log\left( 1 + \sqrt{1 - e^{-2z}} \right)
Now we need to find the imaginary part. We'll work from the inside out. First recall,
\sqrt{x + iy} = \sqrt[4]{x^2 + y^2}\, \exp\left( \frac{i}{2} \tan^{-1}\left( \frac{y}{x} \right) \right),
so that we can write the innermost factor as,
\sqrt{1 - e^{-2z}} = \sqrt{1 - e^{-2x}\cos(2y) + i\, e^{-2x}\sin(2y)}
= \sqrt[4]{\left( 1 - e^{-2x}\cos(2y) \right)^2 + \left( e^{-2x}\sin(2y) \right)^2}\, \exp\left( \frac{i}{2} \tan^{-1}\left( \frac{e^{-2x}\sin(2y)}{1 - e^{-2x}\cos(2y)} \right) \right)
= \sqrt[4]{1 - 2 e^{-2x}\cos(2y) + e^{-4x}}\, \exp\left( \frac{i}{2} \tan^{-1}\left( \frac{\sin(2y)}{e^{2x} - \cos(2y)} \right) \right)
We substitute this into the logarithm.
\log\left( 1 + \sqrt{1 - e^{-2z}} \right) = \log\left( 1 + \sqrt[4]{1 - 2 e^{-2x}\cos(2y) + e^{-4x}}\, \exp\left( \frac{i}{2} \tan^{-1}\left( \frac{\sin(2y)}{e^{2x} - \cos(2y)} \right) \right) \right)
Now we can write \eta.
\eta = \Im\left( z + \log\left( 1 + \sqrt{1 - e^{-2z}} \right) \right)
= y + \tan^{-1}\left( \frac{ \sqrt[4]{1 - 2 e^{-2x}\cos(2y) + e^{-4x}}\, \sin\left( \frac{1}{2} \tan^{-1}\left( \frac{\sin(2y)}{e^{2x} - \cos(2y)} \right) \right) }{ 1 + \sqrt[4]{1 - 2 e^{-2x}\cos(2y) + e^{-4x}}\, \cos\left( \frac{1}{2} \tan^{-1}\left( \frac{\sin(2y)}{e^{2x} - \cos(2y)} \right) \right) } \right)
Finally we have the solution, u(x,y).
u(x,y) = 1 - \frac{y}{\pi} - \frac{1}{\pi} \tan^{-1}\left( \frac{ \sqrt[4]{1 - 2 e^{-2x}\cos(2y) + e^{-4x}}\, \sin\left( \frac{1}{2} \tan^{-1}\left( \frac{\sin(2y)}{e^{2x} - \cos(2y)} \right) \right) }{ 1 + \sqrt[4]{1 - 2 e^{-2x}\cos(2y) + e^{-4x}}\, \cos\left( \frac{1}{2} \tan^{-1}\left( \frac{\sin(2y)}{e^{2x} - \cos(2y)} \right) \right) } \right)
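A quick numerical check of u(x,y) = 1 - Im(cosh^{-1}(e^z))/pi against the boundary conditions, a sketch assuming that Python's principal-branch acosh agrees with the branch used above on this domain (which it does for the test points chosen here):

```python
import cmath
import math

def u(x, y):
    """u(x,y) = 1 - Im(acosh(exp(z)))/pi with z = x + iy (principal branch)."""
    return 1 - cmath.acosh(cmath.exp(complex(x, y))).imag / math.pi

bottom = u(2.0, 0.0)       # y = 0, x > 0: expect u = 1
top = u(2.0, math.pi)      # y = pi, x > 0: expect u = 0
# On y = 0, x < 0 the normal derivative should vanish:
slope = (u(-2.0, 1e-5) - u(-2.0, 0.0)) / 1e-5
```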
Chapter 48

Non-Cartesian Coordinates

48.1 Spherical Coordinates
Writing rectangular coordinates in terms of spherical coordinates,
x = r \cos\theta \sin\phi
y = r \sin\theta \sin\phi
z = r \cos\phi.
The Jacobian is
\left| \begin{matrix} \cos\theta\sin\phi & -r\sin\theta\sin\phi & r\cos\theta\cos\phi \\ \sin\theta\sin\phi & r\cos\theta\sin\phi & r\sin\theta\cos\phi \\ \cos\phi & 0 & -r\sin\phi \end{matrix} \right|
= r^2 \sin\phi \left| \begin{matrix} \cos\theta\sin\phi & -\sin\theta & \cos\theta\cos\phi \\ \sin\theta\sin\phi & \cos\theta & \sin\theta\cos\phi \\ \cos\phi & 0 & -\sin\phi \end{matrix} \right|
= r^2 \sin\phi \left| -\cos^2\theta\sin^2\phi - \sin^2\theta\cos^2\phi - \cos^2\theta\cos^2\phi - \sin^2\theta\sin^2\phi \right|
= r^2 \sin\phi \left( \sin^2\phi + \cos^2\phi \right)
= r^2 \sin\phi.
Thus we have that
\iiint_V f(x,y,z) \,dx\,dy\,dz = \iiint_V f(r,\theta,\phi)\, r^2 \sin\phi \,dr\,d\theta\,d\phi.
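The determinant can be spot-checked numerically by building the Jacobian matrix from central differences; here theta is the azimuth and phi the polar angle, matching the convention above (step size and sample point are arbitrary choices):

```python
import math

def xyz(r, theta, phi):
    """Rectangular coordinates in terms of spherical coordinates, as above."""
    return (r*math.cos(theta)*math.sin(phi),
            r*math.sin(theta)*math.sin(phi),
            r*math.cos(phi))

def jacobian_det(r, theta, phi, h=1e-6):
    """det d(x,y,z)/d(r,theta,phi) via central differences."""
    cols = []
    for i in range(3):
        d = [0.0, 0.0, 0.0]
        d[i] = h
        fp = xyz(r + d[0], theta + d[1], phi + d[2])
        fm = xyz(r - d[0], theta - d[1], phi - d[2])
        cols.append([(p - m) / (2*h) for p, m in zip(fp, fm)])
    a, b, c = cols  # columns of the Jacobian matrix
    # determinant as the scalar triple product a . (b x c)
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
          - a[1]*(b[0]*c[2] - b[2]*c[0])
          + a[2]*(b[0]*c[1] - b[1]*c[0]))

r, theta, phi = 1.7, 0.6, 1.1
det = jacobian_det(r, theta, phi)  # |det| should equal r^2 sin(phi)
```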
48.2 Laplace's Equation in a Disk
Consider Laplace's equation in polar coordinates
\frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial u}{\partial r} \right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} = 0, \quad 0 \le r \le 1
subject to the boundary conditions
1. u(1,\theta) = f(\theta)
2. u_r(1,\theta) = g(\theta).
We separate variables with u(r,\theta) = R(r)T(\theta).
\frac{1}{r}(R'T + rR''T) + \frac{1}{r^2} RT'' = 0
r^2 \frac{R''}{R} + r\frac{R'}{R} = -\frac{T''}{T} = \lambda
Thus we have the two ordinary differential equations
T'' + \lambda T = 0, \quad T(0) = T(2\pi), \quad T'(0) = T'(2\pi)
r^2 R'' + r R' - \lambda R = 0, \quad |R(0)| < \infty.
The eigenvalues and eigenfunctions for the equation in T are
\lambda_0 = 0, \quad T_0 = \frac{1}{2}
\lambda_n = n^2, \quad T_n^{(1)} = \cos(n\theta), \quad T_n^{(2)} = \sin(n\theta)
(I chose T_0 = 1/2 so that all the eigenfunctions have the same norm.)
For \lambda = 0 the general solution for R is
R = c_1 + c_2 \log r.
Requiring that the solution be bounded gives us
R_0 = 1.
For \lambda = n^2 > 0 the general solution for R is
R = c_1 r^n + c_2 r^{-n}.
Requiring that the solution be bounded gives us
R_n = r^n.
Thus the general solution for u is
u(r,\theta) = \frac{a_0}{2} + \sum_{n=1}^\infty r^n \left[ a_n \cos(n\theta) + b_n \sin(n\theta) \right].
For the boundary condition u(1,\theta) = f(\theta) we have the equation
f(\theta) = \frac{a_0}{2} + \sum_{n=1}^\infty \left[ a_n \cos(n\theta) + b_n \sin(n\theta) \right].
If f(\theta) has a Fourier series then the coefficients are
a_0 = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \,d\theta
a_n = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \cos(n\theta) \,d\theta
b_n = \frac{1}{\pi} \int_0^{2\pi} f(\theta) \sin(n\theta) \,d\theta.
For the boundary condition u_r(1,\theta) = g(\theta) we have the equation
g(\theta) = \sum_{n=1}^\infty n \left[ a_n \cos(n\theta) + b_n \sin(n\theta) \right].
g(\theta) has a series of this form only if
\int_0^{2\pi} g(\theta) \,d\theta = 0.
The coefficients are
a_n = \frac{1}{n\pi} \int_0^{2\pi} g(\theta) \cos(n\theta) \,d\theta
b_n = \frac{1}{n\pi} \int_0^{2\pi} g(\theta) \sin(n\theta) \,d\theta.
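The coefficient formulas are easy to verify numerically: computing a_n, b_n by quadrature for a boundary function built from a few harmonics should recover the harmonic amplitudes (the trapezoid rule is spectrally accurate for smooth periodic integrands). A sketch with the arbitrary choice f(theta) = cos(2 theta) + (1/2) sin(3 theta):

```python
import math

def fourier_coefficients(f, n_max, samples=4096):
    """a_n = (1/pi) int_0^{2 pi} f cos(n theta) dtheta, and b_n likewise with sin."""
    d = 2*math.pi / samples
    pts = [k*d for k in range(samples)]
    a = [d/math.pi * sum(f(t)*math.cos(n*t) for t in pts) for n in range(n_max + 1)]
    b = [d/math.pi * sum(f(t)*math.sin(n*t) for t in pts) for n in range(n_max + 1)]
    return a, b

f = lambda t: math.cos(2*t) + 0.5*math.sin(3*t)
a, b = fourier_coefficients(f, 4)
# The disk solution is then u(r, theta) = a[0]/2 + sum_n r^n (a[n] cos + b[n] sin).
```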
48.3 Laplace's Equation in an Annulus
Consider the problem
\nabla^2 u = \frac{1}{r}\frac{\partial}{\partial r}\left( r\frac{\partial u}{\partial r} \right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} = 0, \quad 0 \le r < a, \quad -\pi < \theta \le \pi,
with the boundary condition
u(a,\theta) = \theta^2.
So far this problem only has one boundary condition. By requiring that the solution be finite, we get the boundary condition
|u(0,\theta)| < \infty.
By specifying that the solution be C^1, (continuous and continuous first derivative) we obtain
u(r,-\pi) = u(r,\pi) \quad \text{and} \quad \frac{\partial u}{\partial\theta}(r,-\pi) = \frac{\partial u}{\partial\theta}(r,\pi).
We will use the method of separation of variables. We seek solutions of the form
u(r,\theta) = R(r)\Theta(\theta).
Substituting into the partial differential equation,
\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2} = 0
R''\Theta + \frac{1}{r}R'\Theta = -\frac{1}{r^2}R\Theta''
r^2\frac{R''}{R} + r\frac{R'}{R} = -\frac{\Theta''}{\Theta} = \lambda
Now we have the boundary value problem for \Theta,
\Theta''(\theta) + \lambda\Theta(\theta) = 0, \quad -\pi < \theta \le \pi,
subject to
\Theta(-\pi) = \Theta(\pi) \quad \text{and} \quad \Theta'(-\pi) = \Theta'(\pi)
We consider the following three cases for the eigenvalue, \lambda.
\lambda < 0. No linear combination of the solutions, \Theta = \exp(\sqrt{-\lambda}\,\theta), \exp(-\sqrt{-\lambda}\,\theta), can satisfy the boundary conditions. Thus there are no negative eigenvalues.
\lambda = 0. The general solution is \Theta = a + b\theta. By applying the boundary conditions, we get \Theta = a. Thus we have the eigenvalue and eigenfunction,
\lambda_0 = 0, \quad A_0 = 1.
\lambda > 0. The general solution is \Theta = a\cos(\sqrt{\lambda}\,\theta) + b\sin(\sqrt{\lambda}\,\theta). Applying the boundary conditions yields the eigenvalues
\lambda_n = n^2, \quad n = 1, 2, 3, \ldots
with the associated eigenfunctions
A_n = \cos(n\theta) \quad \text{and} \quad B_n = \sin(n\theta).
The equation for R is
r^2 R'' + r R' - \lambda_n R = 0.
In the case \lambda_0 = 0, this becomes
R'' = -\frac{1}{r}R'
R' = \frac{a}{r}
R = a\log r + b
Requiring that the solution be bounded at r = 0 yields (to within a constant multiple)
R_0 = 1.
For \lambda_n = n^2, n \ge 1, we have
r^2 R'' + r R' - n^2 R = 0
Recognizing that this is an Euler equation and making the substitution R = r^\alpha,
\alpha(\alpha - 1) + \alpha - n^2 = 0
\alpha = \pm n
R = a r^n + b r^{-n}.
Requiring that the solution be bounded at r = 0 we obtain (to within a constant multiple)
R_n = r^n
The general solution to the partial differential equation is a linear combination of the eigenfunctions
u(r,\theta) = c_0 + \sum_{n=1}^\infty \left[ c_n r^n \cos(n\theta) + d_n r^n \sin(n\theta) \right].
We determine the coefficients of the expansion with the boundary condition
u(a,\theta) = \theta^2 = c_0 + \sum_{n=1}^\infty \left[ c_n a^n \cos(n\theta) + d_n a^n \sin(n\theta) \right].
We note that the eigenfunctions 1, \cos(n\theta), and \sin(n\theta) are orthogonal on -\pi \le \theta \le \pi. Integrating the boundary condition from -\pi to \pi yields
\int_{-\pi}^\pi \theta^2 \,d\theta = \int_{-\pi}^\pi c_0 \,d\theta
c_0 = \frac{\pi^2}{3}.
Multiplying the boundary condition by \cos(m\theta) and integrating gives
\int_{-\pi}^\pi \theta^2 \cos(m\theta) \,d\theta = c_m a^m \int_{-\pi}^\pi \cos^2(m\theta) \,d\theta
c_m = \frac{(-1)^m 4}{m^2 a^m}.
We multiply by \sin(m\theta) and integrate to get
\int_{-\pi}^\pi \theta^2 \sin(m\theta) \,d\theta = d_m a^m \int_{-\pi}^\pi \sin^2(m\theta) \,d\theta
d_m = 0
Thus the solution is
u(r,\theta) = \frac{\pi^2}{3} + \sum_{n=1}^\infty \frac{(-1)^n 4}{n^2 a^n} r^n \cos(n\theta).
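At r = a the series must reduce to the classical Fourier expansion theta^2 = pi^2/3 + 4 sum (-1)^n cos(n theta)/n^2 on (-pi, pi], which gives a direct numerical check (the radius a, the evaluation angles, and the truncation length are arbitrary choices):

```python
import math

def u(r, theta, a=2.0, terms=20000):
    """Partial sum of u(r,theta) = pi^2/3 + sum 4 (-1)^n r^n cos(n theta) / (n^2 a^n)."""
    s = math.pi**2 / 3
    for n in range(1, terms + 1):
        s += 4*(-1)**n * (r/a)**n * math.cos(n*theta) / n**2
    return s

boundary = u(2.0, math.pi/2)  # r = a: expect theta^2 = (pi/2)^2
```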
Part VI
Calculus of Variations
Chapter 49
Calculus of Variations
49.1 Exercises

Exercise 49.1
Discuss the problem of minimizing \int_0^\alpha \left( (y')^4 - 6(y')^2 \right) dx, \quad y(0) = 0, \quad y(\alpha) = \beta. Consider both C^1[0,\alpha] and C_p^1[0,\alpha], and comment (with reasons) on whether your answers are weak or strong minima.

Exercise 49.2
Consider
1. \int_{x_0}^{x_1} \left( a(y')^2 + b y y' + c y^2 \right) dx, \quad y(x_0) = y_0, \quad y(x_1) = y_1, \quad a \ne 0,
2. \int_{x_0}^{x_1} (y')^3 \,dx, \quad y(x_0) = y_0, \quad y(x_1) = y_1.
Can these functionals have broken extremals, and if so, find them.
Exercise 49.3
Discuss finding a weak extremum for the following:
1. \int_0^1 \left( (y'')^2 - 2xy \right) dx, \quad y(0) = y'(0) = 0, \quad y(1) = \frac{1}{120}
2. \int_0^1 \left( \frac{1}{2}(y')^2 + y y' + y' + y \right) dx
3. \int_a^b \left( y^2 + 2 x y y' \right) dx, \quad y(a) = A, \quad y(b) = B
4. \int_0^1 \left( x y + y^2 - 2 y^2 y' \right) dx, \quad y(0) = 1, \quad y(1) = 2

Exercise 49.4
Find the natural boundary conditions associated with the following functionals:
1. \iint_D F(x, y, u, u_x, u_y) \,dx\,dy
2. \iint_D \left[ p(x,y)\left( u_x^2 + u_y^2 \right) - q(x,y) u^2 \right] dx\,dy + \oint_\Gamma \sigma(x,y) u^2 \,ds
Here D represents a closed boundary domain with boundary \Gamma, and ds is the arc-length differential. p and q are known in D, and \sigma is known on \Gamma.
Exercise 49.5
The equations for water waves with free surface y = h(x,t) and bottom y = 0 are
\phi_{xx} + \phi_{yy} = 0, \quad 0 < y < h(x,t),
\phi_t + \frac{1}{2}\phi_x^2 + \frac{1}{2}\phi_y^2 + g y = 0 \quad \text{on } y = h(x,t),
h_t + \phi_x h_x - \phi_y = 0 \quad \text{on } y = h(x,t),
\phi_y = 0 \quad \text{on } y = 0,
where the fluid motion is described by \phi(x,y,t) and g is the acceleration of gravity. Show that all these equations may be obtained by varying the functions \phi(x,y,t) and h(x,t) in the variational principle
\delta \iint_R \left( \int_0^{h(x,t)} \left( \phi_t + \frac{1}{2}\phi_x^2 + \frac{1}{2}\phi_y^2 + g y \right) dy \right) dx\,dt = 0,
where R is an arbitrary region in the (x,t) plane.

Exercise 49.6
Extremize the functional \int_a^b F(x, y, y') \,dx, y(a) = A, y(b) = B given that the admissible curves can not penetrate the interior of a given region R in the (x,y) plane. Apply your results to find the curves which extremize \int_0^{10} (y')^3 \,dx, y(0) = 0, y(10) = 0 given that the admissible curves can not penetrate the interior of the circle (x-5)^2 + y^2 = 9.
Exercise 49.7
Consider the functional \int y \,ds where ds is the arc-length differential (ds = \sqrt{(dx)^2 + (dy)^2}). Find the curve or curves from a given vertical line to a given fixed point B = (x_1, y_1) which minimize this functional. Consider both the classes C^1 and C_p^1.

Exercise 49.8
A perfectly flexible uniform rope of length L hangs in equilibrium with one end fixed at (x_1, y_1) so that it passes over a frictionless pin at (x_2, y_2). What is the position of the free end of the rope?

Exercise 49.9
The drag on a supersonic airfoil of chord c and shape y = y(x) is proportional to
D = \int_0^c \left( \frac{dy}{dx} \right)^2 dx.
Find the shape for minimum drag if the moment of inertia of the contour with respect to the x-axis is specified; that is, find the shape for minimum drag if
\int_0^c y^2 \,dx = A, \quad y(0) = y(c) = 0, \quad (c, A \text{ given}).
Exercise 49.10
The deflection y of a beam executing free (small) vibrations of frequency \omega satisfies the differential equation
\frac{d^2}{dx^2}\left( EI \frac{d^2 y}{dx^2} \right) - \rho\omega^2 y = 0,
where EI is the flexural rigidity and \rho is the linear mass density. Show that the deflection modes are extremals of the problem
\delta\omega^2 \equiv \delta\left( \frac{\int_0^L EI (y'')^2 \,dx}{\int_0^L \rho y^2 \,dx} \right) = 0, \quad (L = \text{length of beam})
when appropriate homogeneous end conditions are prescribed. Show that stationary values of the ratio are the squares of the natural frequencies.

Exercise 49.11
A boatman wishes to steer his boat so as to minimize the transit time required to cross a river of width l. The path of the boat is given parametrically by
x = X(t), \quad y = Y(t),
for 0 \le t \le T. The river has no cross currents, so the current velocity is directed downstream in the y-direction. v_0 is the constant boat speed relative to the surrounding water, and w = w(x,y,t) denotes the downstream river current at point (x,y) at time t. Then,
\dot X(t) = v_0 \cos\alpha(t), \quad \dot Y(t) = v_0 \sin\alpha(t) + w,
where \alpha(t) is the steering angle of the boat at time t. Find the steering control function \alpha(t) and the final time T that will transfer the boat from the initial state (X(0), Y(0)) = (0,0) to the final state X(T) = l in such a way as to minimize T.
Exercise 49.12
Two particles of equal mass m are connected by an inextensible string which passes through a hole in a smooth horizontal table. The first particle is on the table moving with angular velocity \omega = \sqrt{g/\alpha} in a circular path, of radius \alpha, around the hole. The second particle is suspended vertically and is in equilibrium. At time t = 0, the suspended mass is pulled downward a short distance and released while the first mass continues to rotate.
1. If x represents the distance of the second mass below its equilibrium at time t and \theta represents the angular position of the first particle at time t, show that the Lagrangian is given by
L = m\left( \dot x^2 + \frac{1}{2}(\alpha - x)^2 \dot\theta^2 + g x \right)
and obtain the equations of motion.
2. In the case where the displacement of the suspended mass from equilibrium is small, show that the suspended mass performs small vertical oscillations and find the period of these oscillations.
Exercise 49.13
A rocket is propelled vertically upward so as to reach a prescribed height h in minimum time while using a given fixed quantity of fuel. The vertical distance x(t) above the surface satisfies,
m\ddot x = -mg + mU(t), \quad x(0) = 0, \quad \dot x(0) = 0,
where U(t) is the acceleration provided by engine thrust. We impose the terminal constraint x(T) = h, and we wish to find the particular thrust function U(t) which will minimize T assuming that the total thrust of the rocket engine over the entire thrust time is limited by the condition,
\int_0^T U^2(t) \,dt = k^2.
Here k is a given positive constant which measures the total amount of fuel available.

Exercise 49.14
A space vehicle moves along a straight path in free space. x(t) is the distance to its docking pad, and a, b are its position and speed at time t = 0. The equation of motion is
\ddot x = M \sin V, \quad x(0) = a, \quad \dot x(0) = b,
where the control function V(t) is related to the rocket acceleration U(t) by U = M \sin V, M = \text{const}. We wish to dock the vehicle in minimum time; that is, we seek a thrust function U(t) which will minimize the final time T while bringing the vehicle to rest at the origin with x(T) = 0, \dot x(T) = 0. Find U(t), and in the (x, \dot x)-plane plot the corresponding trajectory which transfers the state of the system from (a,b) to (0,0). Account for all values of a and b.
Exercise 49.15
Find a minimum for the functional I(y) = \int_0^m \sqrt{y + h}\, \sqrt{1 + (y')^2} \,dx in which h > 0, y(0) = 0, y(m) = M > -h. Discuss the nature of the minimum, (i.e., weak, strong, ...).

Exercise 49.16
Show that for the functional \int n(x,y) \sqrt{1 + (y')^2} \,dx, where n(x,y) > 0 in some domain D, the Weierstrass E function E(x, y, q, y') is non-negative for arbitrary finite q and y' at any point of D. What is the implication of this for Fermat's Principle?

Exercise 49.17
Consider the integral \int \frac{1 + y^2}{(y')^2} \,dx between fixed limits. Find the extremals, (hyperbolic sines), and discuss the Jacobi, Legendre, and Weierstrass conditions and their implications regarding weak and strong extrema. Also consider the value of the integral on any extremal compared with its value on the illustrated strong variation. Comment!
The P_i Q_i are vertical segments, and the lines Q_i P_{i+1} are tangent to the extremal at P_{i+1}.

Exercise 49.18
Consider I = \int_{x_0}^{x_1} y'(1 + x^2 y') \,dx, \quad y(x_0) = y_0, \quad y(x_1) = y_1. Can you find continuous curves which will minimize I if
(i) x_0 = 1, y_0 = 1, x_1 = 2, y_1 = 4,
(ii) x_0 = 1, y_0 = 3, x_1 = 2, y_1 = 5,
(iii) x_0 = 1, y_0 = 1, x_1 = 2, y_1 = 1.
Exercise 49.19
Starting from
\iint_D (Q_x - P_y) \,dx\,dy = \oint_\Gamma (P \,dx + Q \,dy)
prove that
(a) \iint_D \phi_{xx}\psi \,dx\,dy = \iint_D \phi\psi_{xx} \,dx\,dy + \oint_\Gamma (\phi_x\psi - \phi\psi_x) \,dy,
(b) \iint_D \phi_{yy}\psi \,dx\,dy = \iint_D \phi\psi_{yy} \,dx\,dy - \oint_\Gamma (\phi_y\psi - \phi\psi_y) \,dx,
(c) \iint_D \phi_{xy}\psi \,dx\,dy = \iint_D \phi\psi_{xy} \,dx\,dy - \frac{1}{2}\oint_\Gamma (\phi_x\psi - \phi\psi_x) \,dx + \frac{1}{2}\oint_\Gamma (\phi_y\psi - \phi\psi_y) \,dy.
Then, consider
I(u) = \int_{t_0}^{t_1} \iint_D \left[ (u_{xx} + u_{yy})^2 + 2(1 - \mu)\left( u_{xx}u_{yy} - u_{xy}^2 \right) \right] dx\,dy\,dt.
Show that
\delta I = \int_{t_0}^{t_1} \iint_D (\nabla^4 u)\,\delta u \,dx\,dy\,dt + \int_{t_0}^{t_1} \oint_\Gamma \left[ P(u)\,\delta u + M(u) \frac{\partial(\delta u)}{\partial n} \right] ds\,dt,
where P and M are the expressions we derived in class for the problem of the vibrating plate.
Exercise 49.20
For the following functionals use the Rayleigh-Ritz method to find an approximate solution of the problem of minimizing the functionals and compare your answers with the exact solutions.

\int_0^1 \left( (y')^2 - y^2 - 2xy \right) dx, \quad y(0) = 0 = y(1).
For this problem take an approximate solution of the form
y = x(1-x)\left( a_0 + a_1 x + \cdots + a_n x^n \right),
and carry out the solutions for n = 0 and n = 1.

\int_0^2 \left( (y')^2 + y^2 + 2xy \right) dx, \quad y(0) = 0 = y(2).

\int_1^2 \left( x(y')^2 - \frac{x^2 + 1}{x} y^2 - 2x^2 y \right) dx, \quad y(1) = 0 = y(2)
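For the first functional, the n = 0 Ritz computation can be carried out numerically: with the trial function y = a0 x(1-x), the functional is quadratic in a0, so its minimizer follows from two quadrature evaluations. A sketch (quadrature resolution is an arbitrary choice; the expected minimizer a0 = 5/18 is the standard result for this trial space, stated here as a check rather than derived):

```python
def functional(a0, samples=20000):
    """I(a0) = int_0^1 ((y')^2 - y^2 - 2 x y) dx for y = a0 x (1 - x), by the midpoint rule."""
    total = 0.0
    for i in range(samples):
        x = (i + 0.5) / samples
        y = a0 * x * (1 - x)
        yp = a0 * (1 - 2*x)
        total += (yp*yp - y*y - 2*x*y) / samples
    return total

# I is quadratic in a0: I = A a0^2 + B a0, so the minimizer is -B/(2A).
A = (functional(1.0) + functional(-1.0)) / 2
B = (functional(1.0) - functional(-1.0)) / 2
a0_best = -B / (2*A)
```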
Exercise 49.21
Let K(x) belong to L^1(-\infty, \infty) and define the operator T on L^2(-\infty, \infty) by
Tf(x) = \int_{-\infty}^\infty K(x-y) f(y) \,dy.
1. Show that the spectrum of T consists of the range of the Fourier transform \hat K of K, (that is, the set of all values \hat K(y) with -\infty < y < \infty), plus 0 if this is not already in the range. (Note: From the assumption on K it follows that \hat K is continuous and approaches zero at \pm\infty.)
2. For \lambda in the spectrum of T, show that \lambda is an eigenvalue if and only if \hat K takes on the value \lambda on at least some interval of positive length and that every other \lambda in the spectrum belongs to the continuous spectrum.
3. Find an explicit representation for (T - \lambda I)^{-1} f for \lambda not in the spectrum, and verify directly that this result agrees with that given by the Neumann series if \lambda is large enough.
Exercise 49.22
Let U be the space of twice continuously differentiable functions f on [-1,1] satisfying f(-1) = f(1) = 0, and W = C[-1,1]. Let L : U \to W be the operator \frac{d^2}{dx^2}. Call \lambda in the spectrum of L if the following does not occur: There is a bounded linear transformation T : W \to U such that (L - \lambda I)Tf = f for all f \in W and T(L - \lambda I)f = f for all f \in U. Determine the spectrum of L.
Exercise 49.23
Solve the integral equations
1. \phi(x) = x + \lambda \int_0^1 \left( x^2 y - y^2 \right) \phi(y) \,dy
2. \phi(x) = x + \lambda \int_0^x K(x,y) \phi(y) \,dy
where
K(x,y) = \begin{cases} \sin(xy) & \text{for } x \ge 1 \text{ and } y \le 1, \\ 0 & \text{otherwise} \end{cases}
In both cases state for which values of \lambda the solution obtained is valid.
Exercise 49.24
1. Suppose that K = L_1 L_2, where L_1 L_2 - L_2 L_1 = I. Show that if x is an eigenvector of K corresponding to the eigenvalue \lambda, then L_1 x is an eigenvector of K corresponding to the eigenvalue \lambda - 1, and L_2 x is an eigenvector corresponding to the eigenvalue \lambda + 1.
2. Find the eigenvalues and eigenfunctions of the operator K \equiv -\frac{d^2}{dt^2} + \frac{t^2}{4} in the space of functions u \in L^2(-\infty, \infty). (Hint: L_1 = \frac{t}{2} + \frac{d}{dt}, L_2 = \frac{t}{2} - \frac{d}{dt}. \ e^{-t^2/4} is the eigenfunction corresponding to the eigenvalue 1/2.)
Exercise 49.25
Prove that if the value of \lambda = \lambda_1 is in the residual spectrum of T, then \bar\lambda_1 is in the discrete spectrum of T^*.

Exercise 49.26
Solve
1.
u''(t) + \int_0^1 \sin(k(s-t))\, u(s) \,ds = f(t), \quad u(0) = u'(0) = 0.
2.
u(x) = \lambda \int_0^\pi K(x,s)\, u(s) \,ds
where
K(x,s) = \frac{1}{2} \log\left| \frac{\sin\left( \frac{x+s}{2} \right)}{\sin\left( \frac{x-s}{2} \right)} \right| \equiv \sum_{n=1}^\infty \frac{\sin(nx)\sin(ns)}{n}
3.
\phi(s) = \lambda \int_0^{2\pi} \frac{1}{2\pi} \frac{1 - h^2}{1 - 2h\cos(s-t) + h^2}\, \phi(t) \,dt, \quad |h| < 1
4.
\phi(x) = \lambda \int_{-\pi}^\pi \cos^n(x - \xi)\, \phi(\xi) \,d\xi
Exercise 49.27
Let K(x,s) = 2\pi^2 - 6\pi|x - s| + 3(x - s)^2.
1. Find the eigenvalues and eigenfunctions of
\phi(x) = \lambda \int_0^{2\pi} K(x,s)\, \phi(s) \,ds.
(Hint: Try to find an expansion of the form
K(x,s) = \sum_{n=-\infty}^\infty c_n e^{in(x-s)}.)
2. Do the eigenfunctions form a complete set? If not, show that a complete set may be obtained by adding a suitable set of solutions of
\int_0^{2\pi} K(x,s)\, \phi(s) \,ds = 0.
3. Find the resolvent kernel \Gamma(x, s, \lambda).
Exercise 49.28
Let K(x,s) be a bounded self-adjoint kernel on the finite interval (a,b), and let T be the integral operator on L^2(a,b) with kernel K(x,s). For a polynomial p(t) = a_0 + a_1 t + \cdots + a_n t^n we define the operator p(T) = a_0 I + a_1 T + \cdots + a_n T^n. Prove that the eigenvalues of p(T) are exactly the numbers p(\lambda) with \lambda an eigenvalue of T.
Exercise 49.29
Show that if f(x) is continuous, the solution of
\phi(x) = f(x) + \lambda \int_0^\infty \cos(2xs)\, \phi(s) \,ds
is
\phi(x) = \frac{ f(x) + \lambda \int_0^\infty f(s) \cos(2xs) \,ds }{ 1 - \pi\lambda^2/4 }.
Exercise 49.30
Consider
Lu = 0 \text{ in } D, \quad u = f \text{ on } C,
where
Lu \equiv u_{xx} + u_{yy} + a u_x + b u_y + c u.
Here a, b and c are continuous functions of (x,y) on D + C. Show that the adjoint L^* is given by
L^* v = v_{xx} + v_{yy} - a v_x - b v_y + (c - a_x - b_y) v
and that
\int_D (v L u - u L^* v) = \oint_C H(u,v), \quad (49.1)
where
H(u,v) \equiv (v u_x - u v_x + a u v) \,dy - (v u_y - u v_y + b u v) \,dx
= \left( v \frac{\partial u}{\partial n} - u \frac{\partial v}{\partial n} + a u v \frac{\partial x}{\partial n} + b u v \frac{\partial y}{\partial n} \right) ds.
Take v in (49.1) to be the harmonic Green function G given by
G(x,y;\xi,\eta) = \frac{1}{2\pi} \log\left( \frac{1}{\sqrt{(x-\xi)^2 + (y-\eta)^2}} \right) + \cdots,
and show formally, (use Delta functions), that (49.1) becomes
u(\xi,\eta) - \int_D u \left( L^* - \nabla^2 \right) G \,dx\,dy = \oint_C H(u, G) \quad (49.2)
where u satisfies Lu = 0, (\nabla^2 G = -\delta in D, G = 0 on C). Show that (49.2) can be put into the forms
u - \int_D \left[ (c - a_x - b_y) G - a G_x - b G_y \right] u \,dx\,dy = U \quad (49.3)
and
u - \int_D (a u_x + b u_y + c u)\, G \,dx\,dy = U, \quad (49.4)
where U is the known harmonic function in D which assumes the boundary values prescribed for u. Finally, rigorously show that the integrodifferential equation (49.4) can be solved by successive approximations when the domain D is small enough.
Exercise 49.31
Find the eigenvalues and eigenfunctions of the following kernels on the interval [0,1].
1. K(x,s) = \min(x,s)
2. K(x,s) = e^{\min(x,s)}
(Hint: \phi'' + \phi' + \lambda e^x \phi = 0 can be solved in terms of Bessel functions.)
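The first kernel can be explored numerically before solving analytically: discretizing the integral operator on a grid and running power iteration should give a largest eigenvalue near 4/pi^2, which is what the equivalent boundary value problem phi'' = -mu phi, phi(0) = 0, phi'(1) = 0 predicts (stated here as a check, not derived; grid size and iteration count are arbitrary choices):

```python
import math

def top_eigenvalue(n=200, iters=60):
    """Largest eigenvalue of (T phi)(x) = int_0^1 min(x,s) phi(s) ds,
    via midpoint discretization and power iteration."""
    xs = [(i + 0.5)/n for i in range(n)]
    k = [[min(p, q)/n for q in xs] for p in xs]
    v = [1.0]*n
    lam = 0.0
    for _ in range(iters):
        w = [sum(k[i][j]*v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x*x for x in w))
        v = [x/lam for x in w]
    return lam

lam = top_eigenvalue()  # expect about 4/pi^2 ~ 0.405
```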
Exercise 49.32
Use Hilbert transforms to evaluate
1. \int_{-\infty}^\infty \frac{\sin(kx)\sin(lx)}{x^2 - z^2} \,dx
2. \int_{-\infty}^\infty \frac{\cos(px) - \cos(qx)}{x^2} \,dx
3. \int_{-\infty}^\infty \frac{(x^2 - ab)\sin x + (a+b)x\cos x}{x(x^2 + a^2)(x^2 + b^2)} \,dx
Exercise 49.33
Show that
\int_{-1}^1 \frac{(1 - t^2)^{1/2} \log(1+t)}{t - x} \,dt = \pi\left( x\log 2 - 1 + (1 - x^2)^{1/2}\left( \frac{\pi}{2} - \arcsin(x) \right) \right).
Exercise 49.34
Let C be a simple closed contour. Let g(t) be a given function and consider
\frac{1}{i\pi} \oint_C \frac{f(t) \,dt}{t - t_0} = g(t_0) \quad (49.5)
Note that the left side can be written as F^+(t_0) + F^-(t_0). Define a function W(z) such that W(z) = F(z) for z inside C and W(z) = -F(z) for z outside C. Proceeding in this way, show that the solution of (49.5) is given by
f(t_0) = \frac{1}{i\pi} \oint_C \frac{g(t) \,dt}{t - t_0}.
Exercise 49.35
If C is an arc with endpoints \alpha and \beta, evaluate
(i) \frac{1}{i\pi} \int_C \frac{1}{(\tau - \alpha)^{1-\gamma} (\beta - \tau)^\gamma (\tau - \zeta)} \,d\tau, where 0 < \gamma < 1
(ii) \frac{1}{i\pi} \int_C \left( \frac{\tau - \alpha}{\beta - \tau} \right)^\gamma \frac{\tau^n}{\tau - \zeta} \,d\tau, where 0 < \gamma < 1, integer n \ge 0.
Exercise 49.36
Solve
\int_{-1}^1 \frac{\phi(y)}{y^2 - x^2} \,dy = f(x).
Exercise 49.37
Solve
\frac{1}{i\pi} \int_0^1 \frac{f(t)}{t - x} \,dt = \lambda f(x), \quad \text{where } -1 < \lambda < 1.
Are there any solutions for \lambda > 1? (The operator on the left is self-adjoint. Its spectrum is -1 \le \lambda \le 1.)
Exercise 49.38
Show that the general solution of
\frac{\tan(x)}{\pi} \int_0^1 \frac{f(t)}{t - x} \,dt = f(x)
is
f(x) = \frac{k \sin(x)}{(1 - x)^{1 - x/\pi}\, x^{x/\pi}}.
Exercise 49.39
Show that the general solution of
f'(x) + \lambda \oint_C \frac{f(t)}{t - x} \,dt = 1
is given by
f(x) = \frac{1}{i\pi\lambda} + k \,e^{-i\pi\lambda x},
(k is a constant). Here C is a simple closed contour, \lambda a constant and f(x) a differentiable function on C. Generalize the result to the case of an arbitrary function g(x) on the right side, where g(x) is analytic inside C.
Exercise 49.40
Show that the solution of
\oint_C \left( \frac{1}{t - x} + P(t - x) \right) f(t) \,dt = g(x)
is given by
f(t) = -\frac{1}{\pi^2} \oint_C \frac{g(\tau)}{\tau - t} \,d\tau - \frac{1}{\pi^2} \oint_C g(\tau)\, P(\tau - t) \,d\tau.
Here C is a simple closed curve, and P(t) is a given entire function of t.
Exercise 49.41
Solve
\int_0^1 \frac{f(t)}{t - x} \,dt + \int_2^3 \frac{f(t)}{t - x} \,dt = x
where this equation is to hold for x in either (0,1) or (2,3).
Exercise 49.42
Solve
\int_0^x \frac{f(t)}{\sqrt{x - t}} \,dt + A \int_x^1 \frac{f(t)}{\sqrt{t - x}} \,dt = 1
where A is a real positive constant. Outline briefly the appropriate method when A is a function of x.
49.2 Hints

Hint 49.1

Hint 49.2

Hint 49.3

Hint 49.4

Hint 49.5

Hint 49.6

Hint 49.7

Hint 49.8

Hint 49.9

Hint 49.10

Hint 49.11

Hint 49.12

Hint 49.13

Hint 49.14

Hint 49.15

Hint 49.16

Hint 49.17

Hint 49.18

Hint 49.19

Hint 49.20

Hint 49.21

Hint 49.22

Hint 49.23

Hint 49.24

Hint 49.25

Hint 49.26

Hint 49.27

Hint 49.28

Hint 49.29

Hint 49.30

Hint 49.31

Hint 49.32

Hint 49.33

Hint 49.34

Hint 49.35

Hint 49.36

Hint 49.37

Hint 49.38

Hint 49.39

Hint 49.40

Hint 49.41

Hint 49.42
49.3 Solutions

Solution 49.1
C^1[0,\alpha] Extremals
Admissible Extremal. First we consider continuously differentiable extremals. Because the Lagrangian is a function of y' alone, we know that the extremals are straight lines. Thus the admissible extremal is
\hat y = \frac{\beta}{\alpha} x.
Legendre Condition.
\hat F_{,y'y'} = 12(\hat y')^2 - 12 = 12\left( \left( \frac{\beta}{\alpha} \right)^2 - 1 \right)
\begin{cases} < 0 & \text{for } |\beta/\alpha| < 1 \\ = 0 & \text{for } |\beta/\alpha| = 1 \\ > 0 & \text{for } |\beta/\alpha| > 1 \end{cases}
Thus we see that (\beta/\alpha)x may be a minimum for |\beta/\alpha| \ge 1 and may be a maximum for |\beta/\alpha| \le 1.
Jacobi Condition. Jacobi's accessory equation for this problem is
(\hat F_{,y'y'} h')' = 0
\left( 12\left( \left( \frac{\beta}{\alpha} \right)^2 - 1 \right) h' \right)' = 0
h'' = 0
The problem h'' = 0, h(0) = 0, h(c) = 0 has only the trivial solution for c > 0. Thus we see that there are no conjugate points and the admissible extremal satisfies the strengthened Jacobi condition.
A Weak Minimum. For |\beta/\alpha| > 1 the admissible extremal (\beta/\alpha)x is a solution of the Euler equation, and satisfies the strengthened Jacobi and Legendre conditions. Thus it is a weak minimum. (For |\beta/\alpha| < 1 it is a weak maximum for the same reasons.)
Weierstrass Excess Function. The Weierstrass excess function is
E(x, \hat y, \hat y', w) = F(w) - F(\hat y') - (w - \hat y') F_{,y'}(\hat y')
= w^4 - 6w^2 - (\hat y')^4 + 6(\hat y')^2 - (w - \hat y')\left( 4(\hat y')^3 - 12\hat y' \right)
= w^4 - 6w^2 - \left( \frac{\beta}{\alpha} \right)^4 + 6\left( \frac{\beta}{\alpha} \right)^2 - \left( w - \frac{\beta}{\alpha} \right)\left( 4\left( \frac{\beta}{\alpha} \right)^3 - 12\frac{\beta}{\alpha} \right)
= w^4 - 6w^2 - 4\frac{\beta}{\alpha}\left( \left( \frac{\beta}{\alpha} \right)^2 - 3 \right) w + 3\left( \frac{\beta}{\alpha} \right)^4 - 6\left( \frac{\beta}{\alpha} \right)^2
We can find the stationary points of the excess function by examining its derivative. (Let \mu = \beta/\alpha.)
E'(w) = 4w^3 - 12w - 4\mu(\mu^2 - 3) = 0
w_1 = \mu, \quad w_2 = \frac{1}{2}\left( -\mu - \sqrt{3}\sqrt{4 - \mu^2} \right), \quad w_3 = \frac{1}{2}\left( -\mu + \sqrt{3}\sqrt{4 - \mu^2} \right)
The excess function evaluated at these points is
E(w_1) = 0,
E(w_2) = \frac{3}{2}\left( 3\mu^4 - 6\mu^2 - 6 - \sqrt{3}\,\mu\left( 4 - \mu^2 \right)^{3/2} \right),
E(w_3) = \frac{3}{2}\left( 3\mu^4 - 6\mu^2 - 6 + \sqrt{3}\,\mu\left( 4 - \mu^2 \right)^{3/2} \right).
E(w_2) is negative for 1 < \mu < \sqrt{3} and E(w_3) is negative for -\sqrt{3} < \mu < -1. This implies that the weak minimum \hat y = \beta x/\alpha is not a strong local minimum for 1 < |\mu| < \sqrt{3}. Since E(w_1) = 0, we cannot use the Weierstrass excess function to determine if \hat y = \beta x/\alpha is a strong local minimum for |\mu| > \sqrt{3}.
C_p^1[0,\alpha] Extremals
Erdmann's Corner Conditions. Erdmann's corner conditions require that
\hat F_{,y'} = 4(\hat y')^3 - 12\hat y'
and
\hat F - \hat y' \hat F_{,y'} = (\hat y')^4 - 6(\hat y')^2 - \hat y'\left( 4(\hat y')^3 - 12\hat y' \right)
are continuous at corners. Thus the quantities
(\hat y')^3 - 3\hat y' \quad \text{and} \quad (\hat y')^4 - 2(\hat y')^2
are continuous. Denoting p = \hat y'_- and q = \hat y'_+, the first condition has the solutions
p = q, \quad p = \frac{1}{2}\left( -q \pm \sqrt{3}\sqrt{4 - q^2} \right).
The second condition has the solutions,
p = \pm q, \quad p = \pm\sqrt{2 - q^2}
Combining these, we have
p = q, \quad \text{or} \quad p = \sqrt{3}, \; q = -\sqrt{3}, \quad \text{or} \quad p = -\sqrt{3}, \; q = \sqrt{3}.
Thus we see that there can be a corner only when \hat y'_- = \pm\sqrt{3} and \hat y'_+ = \mp\sqrt{3}.
Case 1, \beta = \pm\sqrt{3}\,\alpha. Notice that the Lagrangian is minimized point-wise if y' = \pm\sqrt{3}. For this case the unique, strong global minimum is
\hat y = \sqrt{3}\,\text{sign}(\beta)\, x.
Case 2, |\beta| < \sqrt{3}\,|\alpha|. For this case there are an infinite number of strong minima. Any piecewise linear curve with y' = \pm\sqrt{3} on each segment and y(0) = 0, y(\alpha) = \beta is a strong minimum.
Case 3, |\beta| > \sqrt{3}\,|\alpha|. First note that the extremal cannot have corners. Thus the unique extremal is \hat y = \frac{\beta}{\alpha} x. We know that this extremal is a weak local minimum.
Solution 49.2
1.
$$\int_{x_0}^{x_1} \left(a (y')^2 + b y y' + c y^2\right) dx, \qquad y(x_0) = y_0, \quad y(x_1) = y_1, \quad a \neq 0$$
Erdmann's First Corner Condition. $\hat F_{,y'} = 2 a \hat y' + b \hat y$ must be continuous at a corner. Since $\hat y$ is continuous, this implies that $\hat y'$ must be continuous, i.e., there are no corners.
The functional cannot have broken extremals.
2.
$$\int_{x_0}^{x_1} (y')^3\, dx, \qquad y(x_0) = y_0, \quad y(x_1) = y_1$$
Erdmann's First Corner Condition. $\hat F_{,y'} = 3 (\hat y')^2$ must be continuous at a corner. This implies that
$$\hat y'_- = \pm \hat y'_+.$$
Erdmann's Second Corner Condition. $\hat F - \hat y' \hat F_{,y'} = (\hat y')^3 - \hat y' \cdot 3 (\hat y')^2 = -2 (\hat y')^3$ must be continuous at a corner. This implies that $\hat y'$ is continuous at a corner, i.e. there are no corners.
The functional cannot have broken extremals.
Solution 49.3
1.
$$\int_0^1 \left((y'')^2 - 2 x y\right) dx, \qquad y(0) = y'(0) = 0, \quad y(1) = \frac{1}{120}$$
Euler's Differential Equation. We will consider $C^4$ extremals which satisfy Euler's DE,
$$(\hat F_{,y''})'' - (\hat F_{,y'})' + \hat F_{,y} = 0.$$
For the given Lagrangian, this is,
$$(2 \hat y'')'' - 2 x = 0.$$
Natural Boundary Condition. The first variation of the performance index is
$$\delta J = \int_0^1 \left(\hat F_{,y}\,\delta y + \hat F_{,y'}\,\delta y' + \hat F_{,y''}\,\delta y''\right) dx.$$
From the given boundary conditions we have $\delta y(0) = \delta y'(0) = \delta y(1) = 0$. Using Euler's DE, we have,
$$\delta J = \int_0^1 \left(\left(\hat F_{,y'} - (\hat F_{,y''})'\right)' \delta y + \hat F_{,y'}\,\delta y' + \hat F_{,y''}\,\delta y''\right) dx.$$
Now we apply integration by parts.
$$\delta J = \left[\left(\hat F_{,y'} - (\hat F_{,y''})'\right)\delta y\right]_0^1 + \int_0^1 \left(-\left(\hat F_{,y'} - (\hat F_{,y''})'\right)\delta y' + \hat F_{,y'}\,\delta y' + \hat F_{,y''}\,\delta y''\right) dx$$
$$= \int_0^1 \left((\hat F_{,y''})'\,\delta y' + \hat F_{,y''}\,\delta y''\right) dx = \left[\hat F_{,y''}\,\delta y'\right]_0^1 = \hat F_{,y''}(1)\,\delta y'(1)$$
In order that the first variation vanish, we need the natural boundary condition $\hat F_{,y''}(1) = 0$. For the given Lagrangian, this condition is
$$\hat y''(1) = 0.$$
The Extremal BVP. The extremal boundary value problem is
$$y'''' = x, \qquad y(0) = y'(0) = y''(1) = 0, \quad y(1) = \frac{1}{120}.$$
The general solution of the differential equation is
$$y = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \frac{1}{120} x^5.$$
Applying the boundary conditions, we see that the unique admissible extremal is
$$\hat y = \frac{x^2}{120}\left(x^3 - 5 x + 5\right).$$
This may be a weak extremum for the problem.
Legendre's Condition. Since
$$\hat F_{,y''y''} = 2 > 0,$$
the strengthened Legendre condition is satisfied.
Jacobi's Condition. The second variation for $F(x, y, y'')$ is
$$\left.\frac{d^2 J}{d\epsilon^2}\right|_{\epsilon=0} = \int_a^b \left(\hat F_{,y''y''}\,(h'')^2 + 2 \hat F_{,yy''}\, h h'' + \hat F_{,yy}\, h^2\right) dx$$
Jacobi's accessory equation is,
$$\left(2 \hat F_{,y''y''}\, h'' + 2 \hat F_{,yy''}\, h\right)'' + 2 \hat F_{,yy''}\, h'' + 2 \hat F_{,yy}\, h = 0,$$
$$(h'')'' = 0$$
Since the boundary value problem,
$$h'''' = 0, \qquad h(0) = h'(0) = h(c) = h''(c) = 0,$$
has only the trivial solution for all $c > 0$, the strengthened Jacobi condition is satisfied.
A Weak Minimum. Since the admissible extremal,
$$\hat y = \frac{x^2}{120}\left(x^3 - 5 x + 5\right),$$
satisfies the strengthened Legendre and Jacobi conditions, we conclude that it is a weak minimum.
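As a sanity check (not part of the original solution), the admissible extremal can be verified in exact arithmetic: it satisfies the Euler equation $y'''' = x$ and all four boundary conditions.

```python
from fractions import Fraction as Fr

# Exact-arithmetic check that y = (x^2/120)(x^3 - 5x + 5), i.e.
# y = x^5/120 - x^3/24 + x^2/24, satisfies y'''' = x together with
# y(0) = y'(0) = y''(1) = 0 and y(1) = 1/120.
coeffs = {5: Fr(1, 120), 3: Fr(-1, 24), 2: Fr(1, 24)}  # {power: coefficient}

def deriv(c, k=1):
    # k-th derivative of a sparse polynomial {power: coefficient}
    for _ in range(k):
        c = {p - 1: a * p for p, a in c.items() if p >= 1}
    return c

def evaluate(c, x):
    return sum(a * x ** p for p, a in c.items())

y4 = deriv(coeffs, 4)                 # fourth derivative; should be the monomial x
bc = (evaluate(coeffs, 0),            # y(0)
      evaluate(deriv(coeffs, 1), 0),  # y'(0)
      evaluate(deriv(coeffs, 2), 1),  # y''(1)
      evaluate(coeffs, 1))            # y(1)
```
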
2.
$$\int_0^1 \left(\frac{1}{2}(y')^2 + y y' + y' + y\right) dx$$
Boundary Conditions. Since no boundary conditions are specified, we have the Euler boundary conditions,
$$\hat F_{,y'}(0) = 0, \qquad \hat F_{,y'}(1) = 0.$$
The derivatives of the integrand are,
$$F_{,y} = y' + 1, \qquad F_{,y'} = y' + y + 1.$$
The Euler boundary conditions are then
$$\hat y'(0) + \hat y(0) + 1 = 0, \qquad \hat y'(1) + \hat y(1) + 1 = 0.$$
Erdmann's Corner Conditions. Erdmann's first corner condition specifies that
$$\hat F_{,y'}(x) = \hat y'(x) + \hat y(x) + 1$$
must be continuous at a corner. This implies that $\hat y'(x)$ is continuous at corners, which means that there are no corners.
Euler's Differential Equation. Euler's DE is
$$(F_{,y'})' = F_{,y},$$
$$y'' + y' = y' + 1,$$
$$y'' = 1.$$
The general solution is
$$y = c_0 + c_1 x + \frac{1}{2} x^2.$$
The boundary conditions give us the constraints,
$$c_0 + c_1 + 1 = 0,$$
$$c_0 + 2 c_1 + \frac{5}{2} = 0.$$
The extremal that satisfies the Euler DE and the Euler BCs is
$$\hat y = \frac{1}{2}\left(x^2 - 3 x + 1\right).$$
Legendre's Condition. The strengthened Legendre condition is satisfied,
$$\hat F_{,y'y'}(x) = 1 > 0,$$
so the extremal is a candidate for a weak local minimum of the problem.
Jacobi's Condition. Jacobi's accessory equation for this problem is,
$$\left(\hat F_{,y'y'}\, h'\right)' - \left(\hat F_{,yy} - (\hat F_{,yy'})'\right) h = 0, \qquad h(0) = h(c) = 0,$$
$$(h')' + \left((1)'\right) h = 0, \qquad h(0) = h(c) = 0,$$
$$h'' = 0, \qquad h(0) = h(c) = 0.$$
Since this has only trivial solutions for $c > 0$, we conclude that there are no conjugate points. The extremal satisfies the strengthened Jacobi condition.
The only admissible extremal,
$$\hat y = \frac{1}{2}\left(x^2 - 3 x + 1\right),$$
satisfies the strengthened Legendre and Jacobi conditions and is thus a weak extremum.
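A quick exact-arithmetic check (not part of the original text): the extremal satisfies $y'' = 1$ by construction, and both Euler boundary conditions hold.

```python
from fractions import Fraction as Fr

# The extremal from the solution: y = (x^2 - 3x + 1)/2, with y' = x - 3/2.
# Verify the Euler (natural) boundary conditions y' + y + 1 = 0 at x = 0, 1.
def y(x):  return (x * x - 3 * x + 1) / Fr(2)
def yp(x): return x - Fr(3, 2)

bc0 = yp(0) + y(0) + 1   # condition at x = 0
bc1 = yp(1) + y(1) + 1   # condition at x = 1
```
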
3.
$$\int_a^b \left(y^2 + 2 x y y'\right) dx, \qquad y(a) = A, \quad y(b) = B$$
Euler's Differential Equation. Euler's differential equation,
$$(F_{,y'})' = F_{,y},$$
$$(2 x y)' = 2 y + 2 x y',$$
$$2 y + 2 x y' = 2 y + 2 x y',$$
is trivial. Every $C^1$ function satisfies the Euler DE.
Erdmann's Corner Conditions. The expressions,
$$\hat F_{,y'} = 2 x \hat y, \qquad \hat F - \hat y' \hat F_{,y'} = \hat y^2 + 2 x \hat y \hat y' - \hat y'\left(2 x \hat y\right) = \hat y^2$$
are continuous at a corner. The conditions are trivial and do not restrict corners in the extremal.
Extremal. Any piecewise smooth function that satisfies the boundary conditions $y(a) = A$, $y(b) = B$ is an admissible extremal.
An Exact Derivative. At this point we note that
$$\int_a^b \left(y^2 + 2 x y y'\right) dx = \int_a^b \frac{d}{dx}\left(x y^2\right) dx = \left[x y^2\right]_a^b = b B^2 - a A^2.$$
The integral has the same value for all piecewise smooth functions $y$ that satisfy the boundary conditions.
Since the integral has the same value for all piecewise smooth functions that satisfy the boundary conditions, all such functions are weak extrema.
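The path independence can be illustrated numerically (a sketch, not in the original; the endpoint values below are arbitrary samples): two different admissible paths give the same value $b B^2 - a A^2$.

```python
import math

# The integrand y^2 + 2 x y y' is the exact derivative d/dx (x y^2), so the
# functional equals b*B^2 - a*A^2 for every path with y(a) = A, y(b) = B.
def functional(y, yp, a, b, n=20000):
    h = (b - a) / n
    return sum((y(x) ** 2 + 2 * x * y(x) * yp(x)) * h   # midpoint rule
               for x in ((a + (i + 0.5) * h) for i in range(n)))

a, b, A, B = 1.0, 2.0, 3.0, 5.0
line = (lambda x: 2 * x + 1, lambda x: 2.0)   # straight path, y(1)=3, y(2)=5
wavy = (lambda x: 2 * x + 1 + math.sin(math.pi * (x - 1)),
        lambda x: 2 + math.pi * math.cos(math.pi * (x - 1)))
I_line = functional(*line, a, b)
I_wavy = functional(*wavy, a, b)
exact = b * B ** 2 - a * A ** 2               # = 41
```
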
4.
$$\int_0^1 \left(x y + y^2 - 2 y^2 y'\right) dx, \qquad y(0) = 1, \quad y(1) = 2$$
Erdmann's Corner Conditions. Erdmann's first corner condition requires $\hat F_{,y'} = -2 \hat y^2$ to be continuous, which is trivial. Erdmann's second corner condition requires that
$$\hat F - \hat y' \hat F_{,y'} = x \hat y + \hat y^2 - 2 \hat y^2 \hat y' - \hat y'\left(-2 \hat y^2\right) = x \hat y + \hat y^2$$
is continuous. This condition is also trivial. Thus the extremal may have corners at any point.
Euler's Differential Equation. Euler's DE is
$$(F_{,y'})' = F_{,y},$$
$$(-2 y^2)' = x + 2 y - 4 y y',$$
$$y = -\frac{x}{2}.$$
Extremal. There is no piecewise smooth function that satisfies Euler's differential equation on its smooth segments and satisfies the boundary conditions $y(0) = 1$, $y(1) = 2$. We conclude that there is no weak extremum.
Solution 49.4
1. We require that the first variation vanishes
$$\iint_D \left(F_u h + F_{u_x} h_x + F_{u_y} h_y\right) dx\, dy = 0.$$
We rewrite the integrand as
$$\iint_D \left(F_u h + (F_{u_x} h)_x + (F_{u_y} h)_y - (F_{u_x})_x h - (F_{u_y})_y h\right) dx\, dy = 0,$$
$$\iint_D \left(F_u - (F_{u_x})_x - (F_{u_y})_y\right) h\, dx\, dy + \iint_D \left((F_{u_x} h)_x + (F_{u_y} h)_y\right) dx\, dy = 0.$$
Using the Divergence theorem, we obtain,
$$\iint_D \left(F_u - (F_{u_x})_x - (F_{u_y})_y\right) h\, dx\, dy + \oint_{\partial D} (F_{u_x}, F_{u_y}) \cdot \mathbf{n}\, h\, ds = 0.$$
In order that the line integral vanish we have the natural boundary condition,
$$(F_{u_x}, F_{u_y}) \cdot \mathbf{n} = 0 \quad \text{for } (x, y) \in \partial D.$$
We can also write this as
$$F_{u_x} \frac{dy}{ds} - F_{u_y} \frac{dx}{ds} = 0 \quad \text{for } (x, y) \in \partial D.$$
The Euler differential equation for this problem is
$$F_u - (F_{u_x})_x - (F_{u_y})_y = 0.$$
2. We consider the natural boundary conditions for
$$\iint_D F(x, y, u, u_x, u_y)\, dx\, dy + \oint_{\partial D} G(x, y, u)\, ds.$$
We require that the first variation vanishes.
$$\iint_D \left(F_u - (F_{u_x})_x - (F_{u_y})_y\right) h\, dx\, dy + \oint_{\partial D} (F_{u_x}, F_{u_y}) \cdot \mathbf{n}\, h\, ds + \oint_{\partial D} G_u h\, ds = 0,$$
$$\iint_D \left(F_u - (F_{u_x})_x - (F_{u_y})_y\right) h\, dx\, dy + \oint_{\partial D} \left((F_{u_x}, F_{u_y}) \cdot \mathbf{n} + G_u\right) h\, ds = 0.$$
In order that the line integral vanishes, we have the natural boundary conditions,
$$(F_{u_x}, F_{u_y}) \cdot \mathbf{n} + G_u = 0 \quad \text{for } (x, y) \in \partial D.$$
For the given integrand this is,
$$(2 p u_x, 2 p u_y) \cdot \mathbf{n} + 2 \sigma u = 0 \quad \text{for } (x, y) \in \partial D,$$
$$p\, \nabla u \cdot \mathbf{n} + \sigma u = 0 \quad \text{for } (x, y) \in \partial D.$$
We can also denote this as
$$p \frac{\partial u}{\partial n} + \sigma u = 0 \quad \text{for } (x, y) \in \partial D.$$
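The divergence-theorem step used in both parts can be spot-checked numerically. This sketch (not from the text) uses the unit square and an arbitrary sample field $P = x^2 y$, $Q = x y^3$:

```python
# Check of the divergence theorem on the unit square:
#   double integral of (P_x + Q_y) over D  =  flux of (P, Q) through the boundary.
def P(x, y): return x * x * y
def Q(x, y): return x * y ** 3
def Px(x, y): return 2 * x * y
def Qy(x, y): return 3 * x * y * y

n = 400
h = 1.0 / n
area = sum(Px((i + .5) * h, (j + .5) * h) + Qy((i + .5) * h, (j + .5) * h)
           for i in range(n) for j in range(n)) * h * h
flux = (sum(P(1.0, (j + .5) * h) for j in range(n)) * h     # right edge,  n = ( 1, 0)
        - sum(P(0.0, (j + .5) * h) for j in range(n)) * h   # left edge,   n = (-1, 0)
        + sum(Q((i + .5) * h, 1.0) for i in range(n)) * h   # top edge,    n = ( 0, 1)
        - sum(Q((i + .5) * h, 0.0) for i in range(n)) * h)  # bottom edge, n = ( 0,-1)
```
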
Solution 49.5
First we vary $\phi$.
$$\Phi(\epsilon) = \iint_R \left(\int_0^{h(x,t)} \left(\phi_t + \epsilon \psi_t + \frac{1}{2}\left(\phi_x + \epsilon \psi_x\right)^2 + \frac{1}{2}\left(\phi_y + \epsilon \psi_y\right)^2 + g y\right) dy\right) dx\, dt$$
$$\Phi'(0) = \iint_R \left(\int_0^{h(x,t)} \left(\psi_t + \phi_x \psi_x + \phi_y \psi_y\right) dy\right) dx\, dt = 0$$
$$\Phi'(0) = \iint_R \left(\frac{\partial}{\partial t}\int_0^{h(x,t)} \psi\, dy - \left[\psi\, h_t\right]_{y=h(x,t)} + \frac{\partial}{\partial x}\int_0^{h(x,t)} \psi\, \phi_x\, dy - \left[\psi\, \phi_x h_x\right]_{y=h(x,t)} - \int_0^{h(x,t)} \psi\, \phi_{xx}\, dy + \left[\psi\, \phi_y\right]_0^{h(x,t)} - \int_0^{h(x,t)} \psi\, \phi_{yy}\, dy\right) dx\, dt = 0$$
Since $\psi$ vanishes on the boundary of $R$, we have
$$\Phi'(0) = \iint_R \left(\left[\psi\left(-h_t - \phi_x h_x + \phi_y\right)\right]_{y=h(x,t)} - \left[\psi\, \phi_y\right]_{y=0} - \int_0^{h(x,t)} \psi\left(\phi_{xx} + \phi_{yy}\right) dy\right) dx\, dt = 0.$$
From the variations which vanish on $y = 0$ and $y = h(x, t)$ we have
$$\nabla^2 \phi = 0.$$
This leaves us with
$$\Phi'(0) = \iint_R \left(\left[\psi\left(-h_t - \phi_x h_x + \phi_y\right)\right]_{y=h(x,t)} - \left[\psi\, \phi_y\right]_{y=0}\right) dx\, dt = 0.$$
By considering variations which vanish on $y = 0$ we obtain,
$$h_t + \phi_x h_x - \phi_y = 0 \quad \text{on } y = h(x, t).$$
Finally we have
$$\phi_y = 0 \quad \text{on } y = 0.$$
Next we vary $h(x, t)$.
$$\Phi(\epsilon) = \iint_R \left(\int_0^{h(x,t)+\epsilon \eta(x,t)} \left(\phi_t + \frac{1}{2}\phi_x^2 + \frac{1}{2}\phi_y^2 + g y\right) dy\right) dx\, dt$$
$$\Phi'(0) = \iint_R \eta \left(\phi_t + \frac{1}{2}\phi_x^2 + \frac{1}{2}\phi_y^2 + g y\right)_{y=h(x,t)} dx\, dt = 0$$
This gives us the boundary condition,
$$\phi_t + \frac{1}{2}\phi_x^2 + \frac{1}{2}\phi_y^2 + g y = 0 \quad \text{on } y = h(x, t).$$
Solution 49.6
The parts of the extremizing curve which lie outside the boundary of the region $R$ must be extremals, (i.e., solutions of Euler's equation), since if we restrict our variations to admissible curves outside of $R$ and its boundary, we immediately obtain Euler's equation. Therefore an extremum can be reached only on curves consisting of arcs of extremals and parts of the boundary of region $R$.
Thus, our problem is to find the points of transition of the extremal to the boundary of $R$. Let the boundary of $R$ be given by $\phi(x)$. Consider an extremum that starts at the point $(a, A)$, follows an extremal to the point $(x_0, \phi(x_0))$, follows the boundary of $R$ to $(x_1, \phi(x_1))$, then follows an extremal to the point $(b, B)$. We seek transversality conditions for the points $x_0$ and $x_1$. We will extremize the expression,
$$I(y) = \int_a^{x_0} F(x, y, y')\, dx + \int_{x_0}^{x_1} F(x, \phi, \phi')\, dx + \int_{x_1}^b F(x, y, y')\, dx.$$
Let $c$ be any point between $x_0$ and $x_1$. Then extremizing $I(y)$ is equivalent to extremizing the two functionals,
$$I_1(y) = \int_a^{x_0} F(x, y, y')\, dx + \int_{x_0}^c F(x, \phi, \phi')\, dx,$$
$$I_2(y) = \int_c^{x_1} F(x, \phi, \phi')\, dx + \int_{x_1}^b F(x, y, y')\, dx,$$
$$\delta I = 0 \quad \Rightarrow \quad \delta I_1 = \delta I_2 = 0.$$
We will extremize $I_1(y)$ and then use the derived transversality condition on all points where the extremals meet the boundary of $R$. The general variation of $I_1$ is,
$$\delta I_1(y) = \int_a^{x_0} \left(F_y - \frac{d}{dx} F_{y'}\right) \delta y\, dx + \left[F_{y'}\, \delta y\right]_a^{x_0} + \left[\left(F - y' F_{y'}\right) \delta x\right]_a^{x_0} + \left[F_{\phi'}\, \delta \phi\right]_{x_0}^c + \left[\left(F - \phi' F_{\phi'}\right) \delta x\right]_{x_0}^c = 0$$
Note that $\delta x = \delta y = 0$ at $x = a, c$. That is, $x = x_0$ is the only point that varies. Also note that $\delta \phi(x)$ is not independent of $\delta x$: $\delta \phi(x) \to \phi'(x)\, \delta x$. At the point $x_0$ we have $\delta y \to \phi'(x)\, \delta x$.
$$\delta I_1(y) = \int_a^{x_0} \left(F_y - \frac{d}{dx} F_{y'}\right) \delta y\, dx + \left(F_{y'}\, \phi'\, \delta x\right)\Big|_{x_0} + \left(\left(F - y' F_{y'}\right) \delta x\right)\Big|_{x_0} - \left(F_{\phi'}\, \phi'\, \delta x\right)\Big|_{x_0} - \left(\left(F - \phi' F_{\phi'}\right) \delta x\right)\Big|_{x_0} = 0$$
$$\delta I_1(y) = \int_a^{x_0} \left(F_y - \frac{d}{dx} F_{y'}\right) \delta y\, dx + \left(\left(F(x, y, y') - F(x, \phi, \phi') + \left(\phi' - y'\right) F_{y'}\right) \delta x\right)\Big|_{x_0} = 0$$
Since $\delta I_1$ vanishes for those variations satisfying $\delta x_0 = 0$ we obtain the Euler differential equation,
$$F_y - \frac{d}{dx} F_{y'} = 0.$$
Then we have
$$\left(\left(F(x, y, y') - F(x, \phi, \phi') + \left(\phi' - y'\right) F_{y'}\right) \delta x\right)\Big|_{x_0} = 0$$
for all variations $\delta x_0$. This implies that
$$\left.\left(F(x, y, y') - F(x, \phi, \phi') + \left(\phi' - y'\right) F_{y'}\right)\right|_{x_0} = 0.$$
Two solutions of this equation are
$$y'(x_0) = \phi'(x_0) \qquad \text{and} \qquad F_{y'} = 0.$$
Transversality condition. If $F_{y'}$ is not identically zero, the extremal must be tangent to the boundary of $R$ at the points of contact.
Now we apply this result to find the curves which extremize $\int_0^{10} (y')^3\, dx$, $y(0) = 0$, $y(10) = 0$, given that the admissible curves can not penetrate the interior of the circle $(x - 5)^2 + y^2 = 9$. Since the Lagrangian is a function of $y'$ alone, the extremals are straight lines.
The Erdmann corner conditions require that
$$F_{y'} = 3 (y')^2 \qquad \text{and} \qquad F - y' F_{y'} = (y')^3 - y' \cdot 3 (y')^2 = -2 (y')^3$$
are continuous at corners. This implies that $y'$ is continuous. There are no corners.
We see that the extrema are
$$y(x) = \begin{cases} \mp\frac{3}{4}\, x, & 0 \le x \le \frac{16}{5}, \\[4pt] \mp\sqrt{9 - (x - 5)^2}, & \frac{16}{5} \le x \le \frac{34}{5}, \\[4pt] \pm\frac{3}{4}\left(x - 10\right), & \frac{34}{5} \le x \le 10. \end{cases}$$
Note that the extremizing curves neither minimize nor maximize the integral.
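The tangency construction behind these extremals can be checked with a little arithmetic (not part of the original solution):

```python
import math

# The line y = -(3/4) x should be tangent to the circle (x-5)^2 + y^2 = 9
# at x = 16/5, and the three pieces of the extremal should join continuously.
x0, x1 = 16 / 5, 34 / 5
p0 = (x0, -(3 / 4) * x0)                    # claimed tangent point (16/5, -12/5)
on_circle = (p0[0] - 5) ** 2 + p0[1] ** 2   # should equal 9
# tangency: the radius at p0 is perpendicular to the line direction (1, -3/4)
dot = (p0[0] - 5) * 1.0 + p0[1] * (-3 / 4)
# continuity of the pieces at the two junction points
gap_left = -(3 / 4) * x0 - (-math.sqrt(9 - (x0 - 5) ** 2))
gap_right = (3 / 4) * (x1 - 10) - (-math.sqrt(9 - (x1 - 5) ** 2))
```
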
Solution 49.7
$C^1$ Extremals. Without loss of generality, we take the vertical line to be the $y$ axis. We will consider $x_1, y_1 > 1$. With $ds = \sqrt{1 + (y')^2}\, dx$ we extremize the integral,
$$\int_0^{x_1} \sqrt{y}\,\sqrt{1 + (y')^2}\, dx.$$
Since the Lagrangian is independent of $x$, we know that the Euler differential equation has a first integral.
$$\frac{d}{dx} F_{y'} - F_y = 0$$
$$y'\, F_{y'y} + y''\, F_{y'y'} - F_y = 0$$
$$\frac{d}{dx}\left(y' F_{y'} - F\right) = 0$$
$$y' F_{y'} - F = \text{const}$$
For the given Lagrangian, this is
$$y'\, \frac{\sqrt{y}\, y'}{\sqrt{1 + (y')^2}} - \sqrt{y}\,\sqrt{1 + (y')^2} = \text{const},$$
$$(y')^2 \sqrt{y} - \sqrt{y}\left(1 + (y')^2\right) = \text{const}\,\sqrt{1 + (y')^2},$$
$$\sqrt{y} = \text{const}\,\sqrt{1 + (y')^2}$$
$y = \text{const}$ is one solution. To find the others we solve for $y'$ and then solve the differential equation.
$$y = a\left(1 + (y')^2\right)$$
$$y' = \sqrt{\frac{y - a}{a}}$$
$$dx = \sqrt{\frac{a}{y - a}}\, dy$$
$$x + b = 2\sqrt{a\left(y - a\right)}$$
$$y = \frac{x^2}{4a} + \frac{b x}{2a} + \frac{b^2}{4a} + a$$
The natural boundary condition is
$$F_{y'}\Big|_{x=0} = \frac{\sqrt{y}\, y'}{\sqrt{1 + (y')^2}}\Bigg|_{x=0} = 0,$$
$$y'(0) = 0.$$
The extremal that satisfies this boundary condition is
$$y = \frac{x^2}{4a} + a.$$
Now we apply $y(x_1) = y_1$ to obtain
$$a = \frac{1}{2}\left(y_1 \pm \sqrt{y_1^2 - x_1^2}\right)$$
for $y_1 \ge x_1$. The value of the integral is
$$\int_0^{x_1} \sqrt{\frac{x^2}{4a} + a}\;\sqrt{1 + \left(\frac{x}{2a}\right)^2}\, dx = \frac{x_1\left(x_1^2 + 12 a^2\right)}{12\, a^{3/2}}.$$
By denoting $y_1 = c x_1$, $c \ge 1$, we have
$$a = \frac{1}{2}\left(c x_1 \pm x_1 \sqrt{c^2 - 1}\right).$$
The values of the integral for these two values of $a$ are
$$\sqrt{2}\,(x_1)^{3/2}\,\frac{3 c^2 - 1 \pm 3 c\sqrt{c^2 - 1}}{3\left(c \pm \sqrt{c^2 - 1}\right)^{3/2}}.$$
The values are equal only when $c = 1$. These values, (divided by $\sqrt{x_1}$), are plotted in Figure 49.1 as a function of $c$. The former and latter are fine and coarse dashed lines, respectively. The extremal with
$$a = \frac{1}{2}\left(y_1 + \sqrt{y_1^2 - x_1^2}\right)$$
has the smaller performance index. The value of the integral is
$$\frac{x_1\left(x_1^2 + 3\left(y_1 + \sqrt{y_1^2 - x_1^2}\right)^2\right)}{3\sqrt{2}\left(y_1 + \sqrt{y_1^2 - x_1^2}\right)^{3/2}}.$$
The function $y = y_1$ is an admissible extremal for all $x_1$. The value of the integral for this extremal is $x_1\sqrt{y_1}$, which is larger than the integral of the quadratic we analyzed before for $y_1 > x_1$.
Thus we see that
$$y = \frac{x^2}{4a} + a, \qquad a = \frac{1}{2}\left(y_1 + \sqrt{y_1^2 - x_1^2}\right)$$
is the extremal with the smaller integral and is the minimizing curve in $C^1$ for $y_1 \ge x_1$. For $y_1 < x_1$ the $C^1$ extremum is $y = y_1$.

Figure 49.1:
$C^1_p$ Extremals. Consider the parametric form of the Lagrangian.
$$\int_{t_0}^{t_1} \sqrt{y(t)}\,\sqrt{\left(x'(t)\right)^2 + \left(y'(t)\right)^2}\, dt$$
The Euler differential equations are
$$\frac{d}{dt} f_{x'} - f_x = 0 \qquad \text{and} \qquad \frac{d}{dt} f_{y'} - f_y = 0.$$
If one of the equations is satisfied, then the other is automatically satisfied, (or the extremal is straight). With either of these equations we could derive the quadratic extremal and the $y = \text{const}$ extremal that we found previously. We will find one more extremal by considering the first parametric Euler differential equation.
$$\frac{d}{dt} f_{x'} - f_x = 0$$
$$\frac{d}{dt}\left(\frac{\sqrt{y(t)}\, x'(t)}{\sqrt{\left(x'(t)\right)^2 + \left(y'(t)\right)^2}}\right) = 0$$
$$\frac{\sqrt{y(t)}\, x'(t)}{\sqrt{\left(x'(t)\right)^2 + \left(y'(t)\right)^2}} = \text{const}$$
Note that $x(t) = \text{const}$ is a solution. Thus the extremals are of the three forms,
$$x = \text{const}, \qquad y = \text{const}, \qquad y = \frac{x^2}{4a} + \frac{b x}{2a} + \frac{b^2}{4a} + a.$$
The Erdmann corner conditions require that
$$F_{y'} = \frac{\sqrt{y}\, y'}{\sqrt{1 + (y')^2}}, \qquad F - y' F_{y'} = \sqrt{y}\,\sqrt{1 + (y')^2} - \frac{\sqrt{y}\,(y')^2}{\sqrt{1 + (y')^2}} = \frac{\sqrt{y}}{\sqrt{1 + (y')^2}}$$
are continuous at corners. There can be corners only if $y = 0$.
Now we piece the three forms together to obtain $C^1_p$ extremals that satisfy the Erdmann corner conditions. The only possibility that is not $C^1$ is the extremal that is a horizontal line from $(0, 0)$ to $(x_1, 0)$ and then a vertical line from $(x_1, 0)$ to $(x_1, y_1)$. The value of the integral for this extremal is
$$\int_0^{y_1} \sqrt{t}\, dt = \frac{2}{3}\,(y_1)^{3/2}.$$
Equating the performance indices of the quadratic extremum and the piecewise smooth extremum,
$$\frac{x_1\left(x_1^2 + 3\left(y_1 + \sqrt{y_1^2 - x_1^2}\right)^2\right)}{3\sqrt{2}\left(y_1 + \sqrt{y_1^2 - x_1^2}\right)^{3/2}} = \frac{2}{3}\,(y_1)^{3/2},$$
$$y_1 = x_1\sqrt{\frac{3 \pm 2\sqrt{3}}{3}}.$$
The only real positive solution is
$$y_1 = x_1\sqrt{\frac{3 + 2\sqrt{3}}{3}} \approx 1.46789\, x_1.$$
The piecewise smooth extremal has the smaller performance index for $y_1$ smaller than this value and the quadratic extremal has the smaller performance index for $y_1$ greater than this value.
The $C^1_p$ extremum is the piecewise smooth extremal for $y_1 \le x_1\sqrt{(3 + 2\sqrt{3})/3}$ and is the quadratic extremal for $y_1 \ge x_1\sqrt{(3 + 2\sqrt{3})/3}$.
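The closed form used repeatedly above can be confirmed by quadrature (a sketch, not in the original; the values of $a$ and $x_1$ below are arbitrary samples):

```python
import math

# For the extremal y = x^2/(4a) + a, check that
#   integral of sqrt(y) sqrt(1 + (y')^2) over [0, x1]
# equals x1 (x1^2 + 12 a^2) / (12 a^{3/2}).
def integral(a, x1, n=100000):
    h = x1 / n
    s = 0.0
    for i in range(n):            # midpoint rule
        x = (i + 0.5) * h
        y = x * x / (4 * a) + a
        yp = x / (2 * a)
        s += math.sqrt(y * (1 + yp * yp)) * h
    return s

def closed_form(a, x1):
    return x1 * (x1 ** 2 + 12 * a ** 2) / (12 * a ** 1.5)
```
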
Solution 49.8
The shape of the rope will be a catenary between $x_1$ and $x_2$ and be a vertically hanging segment after that. Let the length of the vertical segment be $z$. Without loss of generality we take $x_1 = y_2 = 0$. The potential energy, (relative to $y = 0$), of a length of rope $ds$ in $0 \le x \le x_2$ is $m g y = g y\, ds$. The total potential energy of the vertically hanging rope is $m(\text{center of mass})g = -z (z/2) g$. Thus we seek to minimize,
$$g \int_0^{x_2} y\, ds - \frac{1}{2} g z^2, \qquad y(0) = y_1, \quad y(x_2) = 0,$$
subject to the isoperimetric constraint,
$$\int_0^{x_2} ds + z = L.$$
Writing the arc-length differential as $ds = \sqrt{1 + (y')^2}\, dx$ we minimize
$$g \int_0^{x_2} y\,\sqrt{1 + (y')^2}\, dx - \frac{1}{2} g z^2, \qquad y(0) = y_1, \quad y(x_2) = 0,$$
subject to,
$$\int_0^{x_2} \sqrt{1 + (y')^2}\, dx + z = L.$$
Consider the more general problem of finding functions $y(x)$ and numbers $z$ which extremize $I \equiv \int_a^b F(x, y, y')\, dx + f(z)$ subject to $J \equiv \int_a^b G(x, y, y')\, dx + g(z) = L$.
Suppose $y(x)$ and $z$ are the desired solutions and form the comparison families, $y(x) + \epsilon_1 \eta_1(x) + \epsilon_2 \eta_2(x)$, $z + \epsilon_1 \zeta_1 + \epsilon_2 \zeta_2$. Then, there exists a constant $\lambda$ such that
$$\frac{\partial}{\partial \epsilon_1}\left(I + \lambda J\right)\bigg|_{\epsilon_1, \epsilon_2 = 0} = 0,$$
$$\frac{\partial}{\partial \epsilon_2}\left(I + \lambda J\right)\bigg|_{\epsilon_1, \epsilon_2 = 0} = 0.$$
These equations are
$$\int_a^b \left(\frac{d}{dx} H_{,y'} - H_{,y}\right) \eta_1\, dx + h'(z)\, \zeta_1 = 0,$$
and
$$\int_a^b \left(\frac{d}{dx} H_{,y'} - H_{,y}\right) \eta_2\, dx + h'(z)\, \zeta_2 = 0,$$
where $H = F + \lambda G$ and $h = f + \lambda g$. From this we conclude that
$$\frac{d}{dx} H_{,y'} - H_{,y} = 0, \qquad h'(z) = 0,$$
with $\lambda$ determined by
$$J = \int_a^b G(x, y, y')\, dx + g(z) = L.$$
Now we apply these results to our problem. Since $f(z) = -\frac{1}{2} g z^2$ and $g(z) = z$ we have
$$-g z + \lambda = 0,$$
$$z = \frac{\lambda}{g}.$$
It was shown in class that the solution of the Euler differential equation is a family of catenaries,
$$y = -\frac{\lambda}{g} + c_1 \cosh\left(\frac{x - c_2}{c_1}\right).$$
One can find $c_1$ and $c_2$ in terms of $\lambda$ by applying the end conditions $y(0) = y_1$ and $y(x_2) = 0$. Then the expression for $y(x)$ and $z = \lambda/g$ are substituted into the isoperimetric constraint to determine $\lambda$.
Consider the special case that $(x_1, y_1) = (0, 0)$ and $(x_2, y_2) = (1, 0)$. In this case we can use the fact that $y(0) = y(1)$ to solve for $c_2$ and write $y$ in the form
$$y = -\frac{\lambda}{g} + c_1 \cosh\left(\frac{x - 1/2}{c_1}\right).$$
Applying the condition $y(0) = 0$ would give us the algebraic-transcendental equation,
$$y(0) = -\frac{\lambda}{g} + c_1 \cosh\left(\frac{1}{2 c_1}\right) = 0,$$
which we can't solve in closed form. Since we ran into a dead end in applying the boundary condition, we turn to the isoperimetric constraint.
$$\int_0^1 \sqrt{1 + (y')^2}\, dx + z = L$$
$$\int_0^1 \cosh\left(\frac{x - 1/2}{c_1}\right) dx + z = L$$
$$2 c_1 \sinh\left(\frac{1}{2 c_1}\right) + z = L$$
With the isoperimetric constraint, the algebraic-transcendental equation and $z = \lambda/g$ we now have
$$z = c_1 \cosh\left(\frac{1}{2 c_1}\right),$$
$$z = L - 2 c_1 \sinh\left(\frac{1}{2 c_1}\right).$$
For any fixed $L$, we can numerically solve for $c_1$ and thus obtain $z$. You can derive that there are no solutions unless $L$ is greater than about $1.9366$. If $L$ is smaller than this, the rope would slip off the pin. For $L = 2$, $c_1$ has the values $0.4265$ and $0.7524$. The larger value of $c_1$ gives the smaller potential energy. The position of the end of the rope is $z = 0.9248$.
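The quoted numerical values can be reproduced with a simple bisection (a sketch, not in the original text):

```python
import math

# The two conditions combine into a single equation for c1:
#   c1 cosh(1/(2 c1)) + 2 c1 sinh(1/(2 c1)) = L,
# since z = c1 cosh(1/(2 c1)) and the catenary arc has length 2 c1 sinh(1/(2 c1)).
def f(c, L=2.0):
    return c * math.cosh(1 / (2 * c)) + 2 * c * math.sinh(1 / (2 * c)) - L

def bisect(lo, hi, tol=1e-12):
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

c_small = bisect(0.30, 0.55)                 # first root, near 0.4265
c_large = bisect(0.60, 1.00)                 # second root, near 0.7524
z = c_large * math.cosh(1 / (2 * c_large))   # hanging length, near 0.9248
# smallest total length L for which a solution exists (about 1.9366):
min_len = min(f(0.3 + i * 0.001, 0.0) for i in range(701))
```
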
Solution 49.9
Using the method of Lagrange multipliers, we look for stationary values of $\int_0^c \left((y')^2 + \lambda y^2\right) dx$,
$$\delta \int_0^c \left((y')^2 + \lambda y^2\right) dx = 0.$$
The Euler differential equation is
$$\frac{d}{dx} F_{,y'} - F_{,y} = 0,$$
$$\frac{d}{dx}\left(2 y'\right) - 2 \lambda y = 0.$$
Together with the homogeneous boundary conditions, we have the problem
$$y'' - \lambda y = 0, \qquad y(0) = y(c) = 0,$$
which has the solutions,
$$\lambda_n = -\left(\frac{n \pi}{c}\right)^2, \qquad y_n = a_n \sin\left(\frac{n \pi x}{c}\right), \qquad n \in \mathbb{Z}^+.$$
Now we determine the constants $a_n$ with the moment of inertia constraint.
$$\int_0^c a_n^2 \sin^2\left(\frac{n \pi x}{c}\right) dx = \frac{c\, a_n^2}{2} = A$$
Thus we have the extremals,
$$y_n = \sqrt{\frac{2 A}{c}} \sin\left(\frac{n \pi x}{c}\right), \qquad n \in \mathbb{Z}^+.$$
The drag for these extremals is
$$D = \frac{2 A}{c} \int_0^c \left(\frac{n \pi}{c}\right)^2 \cos^2\left(\frac{n \pi x}{c}\right) dx = \frac{A\, n^2 \pi^2}{c^2}.$$
We see that the drag is minimum for $n = 1$. The shape for minimum drag is
$$\hat y = \sqrt{\frac{2 A}{c}} \sin\left(\frac{\pi x}{c}\right).$$
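The drag formula can be spot-checked by quadrature (not in the original; $A$ and $c$ below are arbitrary samples):

```python
import math

# With y_n = sqrt(2A/c) sin(n pi x / c), the drag is
#   D = (2A/c) * integral of (n pi / c)^2 cos^2(n pi x / c) over [0, c]
#     = A n^2 pi^2 / c^2.
A, c = 1.5, 2.0

def drag(n, m=100000):
    h = c / m
    s = 0.0
    for i in range(m):    # midpoint rule
        x = (i + 0.5) * h
        s += (2 * A / c) * (n * math.pi / c) ** 2 * math.cos(n * math.pi * x / c) ** 2 * h
    return s
```
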
Solution 49.10
Consider the general problem of determining the stationary values of the quantity $\omega^2$ given by
$$\omega^2 = \frac{\int_a^b F(x, y, y', y'')\, dx}{\int_a^b G(x, y, y', y'')\, dx} \equiv \frac{I}{J}.$$
The variation of $\omega^2$ is
$$\delta \omega^2 = \frac{J\,\delta I - I\,\delta J}{J^2} = \frac{1}{J}\left(\delta I - \frac{I}{J}\,\delta J\right) = \frac{1}{J}\left(\delta I - \omega^2\,\delta J\right).$$
If the values of $y$ and $y'$ are specified on the boundary, then the variations of $I$ and $J$ are
$$\delta I = \int_a^b \left(\frac{d^2}{dx^2} F_{,y''} - \frac{d}{dx} F_{,y'} + F_{,y}\right)\delta y\, dx, \qquad \delta J = \int_a^b \left(\frac{d^2}{dx^2} G_{,y''} - \frac{d}{dx} G_{,y'} + G_{,y}\right)\delta y\, dx.$$
Thus $\delta \omega^2 = 0$ becomes
$$\frac{\int_a^b \left(\frac{d^2}{dx^2} H_{,y''} - \frac{d}{dx} H_{,y'} + H_{,y}\right)\delta y\, dx}{\int_a^b G\, dx} = 0,$$
where $H = F - \omega^2 G$. A necessary condition for an extremum is
$$\frac{d^2}{dx^2} H_{,y''} - \frac{d}{dx} H_{,y'} + H_{,y} = 0 \quad \text{where } H \equiv F - \omega^2 G.$$
For our problem we have $F = EI (y'')^2$ and $G = \rho y^2$ so that the extremals are solutions of
$$\frac{d^2}{dx^2}\left(EI \frac{d^2 y}{dx^2}\right) - \rho\, \omega^2 y = 0.$$
With homogeneous boundary conditions we have an eigenvalue problem with deflection modes $y_n(x)$ and corresponding natural frequencies $\omega_n$.
Solution 49.11
We assume that $v_0 > w(x, y, t)$ so that the problem has a solution for any end point. The crossing time is
$$T = \int_0^l \left(\dot X(t)\right)^{-1} dx = \frac{1}{v_0}\int_0^l \sec \alpha(t)\, dx.$$
Note that
$$\frac{dy}{dx} = \frac{w + v_0 \sin \alpha}{v_0 \cos \alpha} = \frac{w}{v_0} \sec \alpha + \tan \alpha = \frac{w}{v_0} \sec \alpha + \sqrt{\sec^2 \alpha - 1}.$$
We solve this relation for $\sec \alpha$.
$$\left(y' - \frac{w}{v_0} \sec \alpha\right)^2 = \sec^2 \alpha - 1$$
$$(y')^2 - 2\,\frac{w}{v_0}\, y' \sec \alpha + \frac{w^2}{v_0^2} \sec^2 \alpha = \sec^2 \alpha - 1$$
$$\left(v_0^2 - w^2\right) \sec^2 \alpha + 2 v_0 w y' \sec \alpha - v_0^2\left((y')^2 + 1\right) = 0$$
$$\sec \alpha = \frac{-2 v_0 w y' \pm \sqrt{4 v_0^2 w^2 (y')^2 + 4\left(v_0^2 - w^2\right) v_0^2\left((y')^2 + 1\right)}}{2\left(v_0^2 - w^2\right)}$$
$$\sec \alpha = v_0\, \frac{-w y' \pm \sqrt{v_0^2\left((y')^2 + 1\right) - w^2}}{v_0^2 - w^2}$$
Since the steering angle $\alpha$ satisfies $-\pi/2 \le \alpha \le \pi/2$, only the positive solution is relevant.
$$\sec \alpha = v_0\, \frac{-w y' + \sqrt{v_0^2\left((y')^2 + 1\right) - w^2}}{v_0^2 - w^2}$$
Time Independent Current. If we make the assumption that $w = w(x, y)$ then we can write the crossing time as an integral of a function of $x$ and $y$.
$$T(y) = \int_0^l \frac{-w y' + \sqrt{v_0^2\left((y')^2 + 1\right) - w^2}}{v_0^2 - w^2}\, dx$$
A necessary condition for a minimum is $\delta T = 0$. The Euler differential equation for this problem is
$$\frac{d}{dx} F_{,y'} - F_{,y} = 0,$$
$$\frac{d}{dx}\left[\frac{1}{v_0^2 - w^2}\left(-w + \frac{v_0^2\, y'}{\sqrt{v_0^2\left((y')^2 + 1\right) - w^2}}\right)\right] - \frac{w_y}{\left(v_0^2 - w^2\right)^2}\left(\frac{w\left(v_0^2\left(1 + 2 (y')^2\right) - w^2\right)}{\sqrt{v_0^2\left((y')^2 + 1\right) - w^2}} - y'\left(v_0^2 + w^2\right)\right) = 0.$$
By solving this second order differential equation subject to the boundary conditions $y(0) = 0$, $y(l) = y_1$, we obtain the path of minimum crossing time.
Current $w = w(x)$. If the current is only a function of $x$, then the Euler differential equation can be integrated to obtain,
$$\frac{1}{v_0^2 - w^2}\left(-w + \frac{v_0^2\, y'}{\sqrt{v_0^2\left((y')^2 + 1\right) - w^2}}\right) = c_0.$$
Solving for $y'$,
$$y' = \frac{w + c_0\left(v_0^2 - w^2\right)}{v_0\sqrt{1 - 2 c_0 w - c_0^2\left(v_0^2 - w^2\right)}}.$$
Since $y(0) = 0$, we have
$$y(x) = \int_0^x \frac{w(\xi) + c_0\left(v_0^2 - (w(\xi))^2\right)}{v_0\sqrt{1 - 2 c_0 w(\xi) - c_0^2\left(v_0^2 - (w(\xi))^2\right)}}\, d\xi.$$
For any given $w(x)$ we can use the condition $y(l) = y_1$ to solve for the constant $c_0$.
Constant Current. If the current is constant then the Lagrangian is a function of $y'$ alone. The admissible extremals are straight lines. The solution is then
$$y(x) = \frac{y_1 x}{l}.$$
Solution 49.12
1. The kinetic energy of the first particle is $\frac{1}{2} m \dot x^2 + \frac{1}{2} m \left((\ell - x)\dot\theta\right)^2$. Its potential energy, relative to the table top, is zero. The kinetic energy of the second particle is $\frac{1}{2} m \dot x^2$. Its potential energy, relative to its equilibrium position, is $-m g x$. The Lagrangian is the difference of kinetic and potential energy.
$$L = m\left(\dot x^2 + \frac{1}{2}\left(\ell - x\right)^2 \dot\theta^2 + g x\right)$$
The Euler differential equations are the equations of motion.
$$\frac{d}{dt} L_{,\dot x} - L_{,x} = 0, \qquad \frac{d}{dt} L_{,\dot\theta} - L_{,\theta} = 0$$
$$\frac{d}{dt}\left(2 m \dot x\right) + m\left(\ell - x\right)\dot\theta^2 - m g = 0, \qquad \frac{d}{dt}\left(m\left(\ell - x\right)^2 \dot\theta\right) = 0$$
$$2 \ddot x + \left(\ell - x\right)\dot\theta^2 - g = 0, \qquad \left(\ell - x\right)^2 \dot\theta = \text{const}$$
When $x = 0$, $\dot\theta = \omega = \sqrt{g/\ell}$. This determines the constant in the equation of motion for $\theta$.
$$\dot\theta = \frac{\sqrt{g}\,\ell^{3/2}}{\left(\ell - x\right)^2}$$
Now we substitute the expression for $\dot\theta$ into the equation of motion for $x$.
$$2 \ddot x + \left(\ell - x\right)\frac{\ell^3 g}{\left(\ell - x\right)^4} - g = 0$$
$$2 \ddot x + \left(\frac{\ell^3}{\left(\ell - x\right)^3} - 1\right) g = 0$$
$$2 \ddot x + \left(\frac{1}{\left(1 - x/\ell\right)^3} - 1\right) g = 0$$
2. For small oscillations, $|x/\ell| \ll 1$. Recall the binomial expansion,
$$(1 + z)^a = \sum_{n=0}^\infty \binom{a}{n} z^n \quad \text{for } |z| < 1, \qquad (1 + z)^a \approx 1 + a z \quad \text{for } |z| \ll 1.$$
We make the approximation,
$$\frac{1}{\left(1 - x/\ell\right)^3} \approx 1 + 3\,\frac{x}{\ell},$$
to obtain the linearized equation of motion,
$$2 \ddot x + \frac{3 g}{\ell}\, x = 0.$$
This is the equation of a harmonic oscillator with solution
$$x = a \sin\left(\sqrt{\frac{3 g}{2 \ell}}\left(t - b\right)\right).$$
The period of oscillation is,
$$T = 2\pi\sqrt{\frac{2\ell}{3 g}}.$$
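The linearized period can be checked against a direct integration of the nonlinear equation (a sketch, not in the original; $g$ and $\ell$ are arbitrary sample values):

```python
import math

# Integrate 2 x'' + (1/(1 - x/l)^3 - 1) g = 0 with RK4 at small amplitude
# and compare the measured period with T = 2 pi sqrt(2 l / (3 g)).
g, l = 9.81, 1.0

def accel(x):
    # from 2 x'' = -g (1/(1 - x/l)^3 - 1)
    return -0.5 * g * ((1 - x / l) ** -3 - 1)

def period(x0, dt=1e-4):
    x, v, t = x0, 0.0, 0.0
    crossings = []
    while len(crossings) < 2:
        k1x, k1v = v, accel(x)
        k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
        xn = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        vn = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += dt
        if x > 0 >= xn:  # downward zero crossing, linearly interpolated
            crossings.append(t - dt + dt * x / (x - xn))
        x, v = xn, vn
    return crossings[1] - crossings[0]

T_linear = 2 * math.pi * math.sqrt(2 * l / (3 * g))
T_sim = period(1e-3)
```
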
Solution 49.13
We write the equation of motion and boundary conditions,
$$\ddot x = U(t) - g, \qquad x(0) = \dot x(0) = 0, \quad x(T) = h,$$
as the first order system,
$$\dot x = y, \qquad x(0) = 0, \quad x(T) = h,$$
$$\dot y = U(t) - g, \qquad y(0) = 0.$$
We seek to minimize,
$$T = \int_0^T dt,$$
subject to the constraints,
$$\dot x - y = 0,$$
$$\dot y - U(t) + g = 0,$$
$$\int_0^T U^2(t)\, dt = k^2.$$
Thus we seek extrema of
$$\int_0^T H\, dt \equiv \int_0^T \left(1 + \lambda(t)\left(\dot x - y\right) + \mu(t)\left(\dot y - U(t) + g\right) + \nu\, U^2(t)\right) dt.$$
Since $y$ is not specified at $t = T$, we have the natural boundary condition,
$$H_{,\dot y}\big|_{t=T} = 0,$$
$$\mu(T) = 0.$$
The first Euler differential equation is
$$\frac{d}{dt} H_{,\dot x} - H_{,x} = 0,$$
$$\frac{d}{dt}\lambda(t) = 0.$$
We see that $\lambda(t) = \lambda$ is constant. The next Euler DE is
$$\frac{d}{dt} H_{,\dot y} - H_{,y} = 0,$$
$$\frac{d}{dt}\mu(t) + \lambda = 0,$$
$$\mu(t) = -\lambda t + \text{const}.$$
With the natural boundary condition, $\mu(T) = 0$, we have
$$\mu(t) = \lambda\left(T - t\right).$$
The final Euler DE is,
$$\frac{d}{dt} H_{,\dot U} - H_{,U} = 0,$$
$$\mu(t) - 2 \nu\, U(t) = 0.$$
Thus we have
$$U(t) = \frac{\lambda\left(T - t\right)}{2\nu}.$$
This is the required thrust function. We use the constraints to find $\lambda$, $\nu$ and $T$.
Substituting $U(t) = \lambda(T - t)/(2\nu)$ into the isoperimetric constraint $\int_0^T U^2(t)\, dt = k^2$ yields
$$\frac{\lambda^2 T^3}{12 \nu^2} = k^2,$$
$$U(t) = \frac{\sqrt{3}\, k}{T^{3/2}}\left(T - t\right).$$
The equation of motion for $x$ is
$$\ddot x = U(t) - g = \frac{\sqrt{3}\, k}{T^{3/2}}\left(T - t\right) - g.$$
Integrating and applying the initial conditions $x(0) = \dot x(0) = 0$ yields,
$$x(t) = \frac{k\, t^2\left(3 T - t\right)}{2\sqrt{3}\, T^{3/2}} - \frac{1}{2} g t^2.$$
Applying the condition $x(T) = h$ gives us,
$$\frac{k}{\sqrt{3}}\, T^{3/2} - \frac{1}{2} g T^2 = h,$$
$$\frac{1}{4} g^2 T^4 - \frac{k^2}{3} T^3 + g h T^2 + h^2 = 0.$$
If $k^2 \ge 4\sqrt{2/3}\, g^{3/2}\sqrt{h}$ then this fourth degree polynomial has positive, real solutions for $T$. With strict inequality, the minimum time is the smaller of the two positive, real solutions. If $k^2 < 4\sqrt{2/3}\, g^{3/2}\sqrt{h}$ then there is not enough fuel to reach the target height.
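Two of the intermediate claims can be checked numerically (a sketch, not in the original; $g$, $k$, $T$ below are arbitrary sample values):

```python
import math

# With U(t) = sqrt(3) k (T - t) / T^{3/2}:
#  - the fuel constraint: integral of U^2 over [0, T] equals k^2,
#  - the height h = k T^{3/2}/sqrt(3) - g T^2/2 satisfies the quartic
#    g^2 T^4/4 - k^2 T^3/3 + g h T^2 + h^2 = 0.
g, k, T = 9.81, 50.0, 4.0

def U(t):
    return math.sqrt(3) * k * (T - t) / T ** 1.5

n = 100000
h = T / n
fuel = sum(U((i + 0.5) * h) ** 2 for i in range(n)) * h   # midpoint rule
height = k * T ** 1.5 / math.sqrt(3) - 0.5 * g * T ** 2
quartic = (g ** 2 * T ** 4 / 4 - k ** 2 * T ** 3 / 3
           + g * height * T ** 2 + height ** 2)
```
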
Solution 49.14
We have $\ddot x = U(t)$ where $U(t)$ is the acceleration furnished by the thrust of the vehicle's engine. In practice, the engine will be designed to operate within certain bounds, say $-M \le U(t) \le M$, where $\pm M$ is the maximum forward/backward acceleration. To account for the inequality constraint we write $U = M \sin V(t)$ for some suitable $V(t)$. More generally, if we had $\alpha(t) \le U(t) \le \beta(t)$, we could write this as
$$U(t) = \frac{\alpha + \beta}{2} + \frac{\beta - \alpha}{2}\sin V(t).$$
We write the equation of motion as a first order system,
$$\dot x = y, \qquad x(0) = a, \quad x(T) = 0,$$
$$\dot y = M \sin V, \qquad y(0) = b, \quad y(T) = 0.$$
Thus we minimize
$$T = \int_0^T dt$$
subject to the constraints,
$$\dot x - y = 0,$$
$$\dot y - M \sin V = 0.$$
Consider
$$H = 1 + \lambda(t)\left(\dot x - y\right) + \mu(t)\left(\dot y - M \sin V\right).$$
The Euler differential equations are
$$\frac{d}{dt} H_{,\dot x} - H_{,x} = 0 \quad\Rightarrow\quad \frac{d}{dt}\lambda(t) = 0 \quad\Rightarrow\quad \lambda(t) = \text{const},$$
$$\frac{d}{dt} H_{,\dot y} - H_{,y} = 0 \quad\Rightarrow\quad \frac{d}{dt}\mu(t) + \lambda = 0 \quad\Rightarrow\quad \mu(t) = -\lambda t + \text{const},$$
$$\frac{d}{dt} H_{,\dot V} - H_{,V} = 0 \quad\Rightarrow\quad \mu(t)\, M \cos V(t) = 0 \quad\Rightarrow\quad V(t) = \frac{\pi}{2} + n\pi.$$
Thus we see that
$$U(t) = M \sin\left(\frac{\pi}{2} + n\pi\right) = \pm M.$$
Therefore, if the rocket is to be transferred from its initial state to its specified final state in minimum time with a limited source of thrust, ($|U| \le M$), then the engine should operate at full power at all times except possibly for a finite number of switching times. (Indeed, if some power were not being used, we would expect the transfer would be speeded up by using the additional power suitably.)
To see how this bang-bang process works, we'll look at the phase plane. The problem
$$\dot x = y, \qquad x(0) = c,$$
$$\dot y = \pm M, \qquad y(0) = d,$$
has the solution
$$x(t) = c + d t \pm M \frac{t^2}{2}, \qquad y(t) = d \pm M t.$$
We can eliminate $t$ to get
$$x = \pm\frac{y^2}{2 M} + c \mp \frac{d^2}{2 M}.$$
These curves are plotted in Figure 49.2.

Figure 49.2:

There is only one curve in each case which transfers the initial state to the origin. We will denote these curves $\gamma_+$ and $\gamma_-$, respectively. Only if the initial point $(a, b)$ lies on one of these two curves can we transfer the state of the system to the origin along an extremal without switching. If $a = \frac{b^2}{2 M}$ and $b < 0$ then this is possible using $U(t) = M$. If $a = -\frac{b^2}{2 M}$ and $b > 0$ then this is possible using $U(t) = -M$. Otherwise we follow an extremal that intersects the initial position until this curve intersects $\gamma_+$ or $\gamma_-$. We then follow $\gamma_+$ or $\gamma_-$ to the origin.
Solution 49.15
Since the integrand does not explicitly depend on $x$, the Euler differential equation has the first integral,
$$F - y' F_{y'} = \text{const}.$$
$$\sqrt{y + h}\,\sqrt{1 + (y')^2} - y'\,\frac{\sqrt{y + h}\; y'}{\sqrt{1 + (y')^2}} = \text{const}$$
$$\frac{\sqrt{y + h}}{\sqrt{1 + (y')^2}} = \text{const}$$
$$y + h = c_1^2\left(1 + (y')^2\right)$$
$$\sqrt{y + h - c_1^2} = c_1 y'$$
$$\frac{c_1\, dy}{\sqrt{y + h - c_1^2}} = dx$$
$$2 c_1\sqrt{y + h - c_1^2} = x - c_2$$
$$4 c_1^2\left(y + h - c_1^2\right) = \left(x - c_2\right)^2$$
Since the extremal passes through the origin, we have
$$4 c_1^2\left(h - c_1^2\right) = c_2^2.$$
$$4 c_1^2 y = x^2 - 2 c_2 x \qquad (49.6)$$
Introduce as a parameter $\alpha$ the slope of the extremal at the origin; that is, $y'(0) = \alpha$. Then differentiating (49.6) at $x = 0$ yields $4 c_1^2 \alpha = -2 c_2$. Together with $c_2^2 = 4 c_1^2 (h - c_1^2)$ we obtain $c_1^2 = \frac{h}{1 + \alpha^2}$ and $c_2 = -\frac{2 h \alpha}{1 + \alpha^2}$. Thus the equation of the pencil (49.6) will have the form
$$y = \alpha x + \frac{1 + \alpha^2}{4 h}\, x^2. \qquad (49.7)$$
To find the envelope of this family we differentiate (49.7) with respect to $\alpha$ to obtain $0 = x + \frac{\alpha}{2 h} x^2$ and eliminate $\alpha$ between this and (49.7) to obtain
$$y = -h + \frac{x^2}{4 h}.$$

Figure 49.3: Some Extremals and the Envelope.

See Figure 49.3 for a plot of some extremals and the envelope.
All extremals (49.7) lie above the envelope, which in ballistics is called the parabola of safety. If $(m, M)$ lies outside the parabola, $M < -h + \frac{m^2}{4 h}$, then it cannot be joined to $(0, 0)$ by an extremal. If $(m, M)$ is above the envelope then there are two candidates. Clearly we rule out the one that touches the envelope because of the occurrence of conjugate points. For the other extremal, problem 2 shows that $E \ge 0$ for all $y'$. Clearly we can embed this extremal in an extremal pencil, so Jacobi's test is satisfied. Therefore the parabola that does not touch the envelope is a strong minimum.
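The envelope computation is easy to confirm numerically (a sketch, not in the original; $h$ below is an arbitrary sample value):

```python
# The pencil y = alpha*x + (1 + alpha^2) x^2 / (4h) lies on or above the
# envelope y = -h + x^2/(4h), touching it exactly at alpha = -2h/x.
h = 2.0

def pencil(alpha, x):
    return alpha * x + (1 + alpha * alpha) * x * x / (4 * h)

def envelope(x):
    return -h + x * x / (4 * h)

touch_residuals = []
gaps = []
for x in (0.5, 1.0, 3.0, 7.0):
    a_star = -2 * h / x                      # slope of the touching extremal
    touch_residuals.append(abs(pencil(a_star, x) - envelope(x)))
    gaps.append(min(pencil(a_star + d, x) - envelope(x)
                    for d in (-1.0, -0.1, 0.1, 1.0)))
```
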
Solution 49.16
$$E = F(x, y, y') - F(x, y, p) - \left(y' - p\right) F_{y'}(x, y, p)$$
$$= n\sqrt{1 + (y')^2} - n\sqrt{1 + p^2} - \left(y' - p\right)\frac{n p}{\sqrt{1 + p^2}}$$
$$= \frac{n}{\sqrt{1 + p^2}}\left(\sqrt{1 + (y')^2}\,\sqrt{1 + p^2} - \left(1 + p^2\right) - \left(y' - p\right) p\right)$$
$$= \frac{n}{\sqrt{1 + p^2}}\left(\sqrt{1 + (y')^2 + p^2 + (y')^2 p^2 - 2 y' p + 2 y' p} - \left(1 + p y'\right)\right)$$
$$= \frac{n}{\sqrt{1 + p^2}}\left(\sqrt{\left(1 + p y'\right)^2 + \left(y' - p\right)^2} - \left(1 + p y'\right)\right) \ge 0$$
The speed of light in an inhomogeneous medium is $\frac{ds}{dt} = \frac{1}{n(x, y)}$. The time of transit is then
$$T = \int_{(a,A)}^{(b,B)} \frac{dt}{ds}\, ds = \int_a^b n(x, y)\sqrt{1 + (y')^2}\, dx.$$
Since $E \ge 0$, light traveling on extremals follows the time optimal path as long as the extremals do not intersect.
Solution 49.17
Extremals. Since the integrand does not depend explicitly on $x$, the Euler differential equation has the first integral,
$$F - y' F_{,y'} = \text{const},$$
$$\frac{1 + y^2}{(y')^2} - y'\,\frac{-2\left(1 + y^2\right)}{(y')^3} = \text{const},$$
$$\frac{1 + y^2}{(y')^2} = \text{const},$$
$$\frac{dy}{\sqrt{1 + y^2}} = \text{const}\; dx,$$
$$\operatorname{arcsinh}(y) = c_1 x + c_2,$$
$$y = \sinh\left(c_1 x + c_2\right).$$
Jacobi Test. We can see by inspection that no conjugate points exist. Consider the central field through $(0, 0)$, $\sinh(c x)$, (see Figure 49.4).

Figure 49.4: $\sinh(c x)$

We can also easily arrive at this conclusion analytically as follows: Solutions $u_1$ and $u_2$ of the Jacobi equation are given by
$$u_1 = \frac{\partial y}{\partial c_2} = \cosh\left(c_1 x + c_2\right), \qquad u_2 = \frac{\partial y}{\partial c_1} = x \cosh\left(c_1 x + c_2\right).$$
Since $u_2 / u_1 = x$ is monotone for all $x$, there are no conjugate points.
Weierstrass Test.
$$E = F(x, y, y') - F(x, y, p) - \left(y' - p\right) F_{,y'}(x, y, p)$$
$$= \frac{1 + y^2}{(y')^2} - \frac{1 + y^2}{p^2} - \left(y' - p\right)\frac{-2\left(1 + y^2\right)}{p^3}$$
$$= \frac{1 + y^2}{(y')^2 p^3}\left(p^3 - p (y')^2 + 2 (y')^3 - 2 p (y')^2\right)$$
$$= \frac{1 + y^2}{(y')^2 p^3}\left(p - y'\right)^2\left(p + 2 y'\right)$$
For $p = p(x, y)$ bounded away from zero, $E$ is one-signed for values of $y'$ close to $p$. However, since the factor $(p + 2 y')$ can have any sign for arbitrary values of $y'$, the conditions for a strong minimum are not satisfied. Furthermore, since the extremals are $y = \sinh(c_1 x + c_2)$, the slope function $p(x, y)$ will be of one sign only if the range of integration is such that we are on a monotonic piece of the $\sinh$. If we span both an increasing and decreasing section, $E$ changes sign even for weak variations.
Legendre Condition.
$$F_{,y'y'} = \frac{6\left(1 + y^2\right)}{(y')^4} > 0$$
Note that $F$ cannot be represented in a Taylor series for arbitrary values of $y'$ due to the presence of a discontinuity in $F$ when $y' = 0$. However, $F_{,y'y'} > 0$ on an extremal implies a weak minimum is provided by the extremal.
Strong Variations. Consider $\int \frac{1 + y^2}{(y')^2}\, dx$ on both an extremal and on the special piecewise continuous variation in the figure. On $PQ$ we have $y' = \infty$, which implies that $\frac{1 + y^2}{(y')^2} = 0$, so that there is no contribution to the integral from $PQ$.
On $QR$ the value of $y'$ is greater than its value along the extremal $PR$ while the value of $y$ on $QR$ is less than the value of $y$ along $PR$. Thus on $QR$ the quantity $\frac{1 + y^2}{(y')^2}$ is less than it is on the extremal $PR$.
$$\int_{QR} \frac{1 + y^2}{(y')^2}\, dx < \int_{PR} \frac{1 + y^2}{(y')^2}\, dx$$
Thus the weak minimum along the extremal can be weakened by a strong variation.
Solution 49.18
The Euler differential equation is
$$\frac{d}{dx} F_{,y'} - F_{,y} = 0.$$
$$\frac{d}{dx}\left(1 + 2 x^2 y'\right) = 0$$
$$1 + 2 x^2 y' = \text{const}$$
$$y' = \text{const}\cdot\frac{1}{x^2}$$
$$y = \frac{c_1}{x} + c_2$$
(i) No continuous extremal exists in $-1 \le x \le 2$ that satisfies $y(-1) = 1$ and $y(2) = 4$.
(ii) The continuous extremal that satisfies the boundary conditions is $y = 7 - \frac{4}{x}$. Since $F_{,y'y'} = 2 x^2 > 0$ has a Taylor series representation for all $y'$, this extremal provides a strong minimum.
(iii) The continuous extremal that satisfies the boundary conditions is $y = 1$. This is a strong minimum.
Solution 49.19
For identity (a) we take P = 0 and Q =
x

x
. For identity (b) we take P =
y

y
and Q = 0. For
identity (c) we take P =
1
2
(
x

x
) and Q =
1
2
(
y

y
).
For identity (c), Green's theorem gives
\[
\iint_D \left[ \frac12\left(\phi\psi_y - \psi\phi_y\right)_x + \frac12\left(\phi\psi_x - \psi\phi_x\right)_y \right] dx\,dy
= \oint_\Gamma \left( -\frac12(\phi\psi_x - \psi\phi_x)\,dx + \frac12(\phi\psi_y - \psi\phi_y)\,dy \right)
\]
\[
\iint_D \left[ \frac12\left(\phi_x\psi_y + \phi\psi_{xy} - \psi_x\phi_y - \psi\phi_{xy}\right) + \frac12\left(\phi_y\psi_x + \phi\psi_{xy} - \psi_y\phi_x - \psi\phi_{xy}\right) \right] dx\,dy
\]
\[
\iint_D \phi\psi_{xy}\,dx\,dy = \iint_D \psi\phi_{xy}\,dx\,dy - \frac12\oint_\Gamma(\phi\psi_x - \psi\phi_x)\,dx + \frac12\oint_\Gamma(\phi\psi_y - \psi\phi_y)\,dy
\]
The variation of $I$ is
\[
\delta I = \int_{t_0}^{t_1}\iint_D \Big( 2(u_{xx}+u_{yy})(\delta u_{xx}+\delta u_{yy}) + 2(1-\sigma)\big(u_{xx}\,\delta u_{yy} + u_{yy}\,\delta u_{xx} - 2u_{xy}\,\delta u_{xy}\big) \Big)\,dx\,dy\,dt.
\]
From (a) we have
\[
\iint_D 2(u_{xx}+u_{yy})\,\delta u_{xx}\,dx\,dy = \iint_D 2(u_{xx}+u_{yy})_{xx}\,\delta u\,dx\,dy + \oint_\Gamma 2\big((u_{xx}+u_{yy})\,\delta u_x - (u_{xx}+u_{yy})_x\,\delta u\big)\,dy.
\]
From (b) we have
\[
\iint_D 2(u_{xx}+u_{yy})\,\delta u_{yy}\,dx\,dy = \iint_D 2(u_{xx}+u_{yy})_{yy}\,\delta u\,dx\,dy - \oint_\Gamma 2\big((u_{xx}+u_{yy})\,\delta u_y - (u_{xx}+u_{yy})_y\,\delta u\big)\,dx.
\]
From (a) and (b) we get
\[
\iint_D 2(1-\sigma)(u_{xx}\,\delta u_{yy} + u_{yy}\,\delta u_{xx})\,dx\,dy
= \iint_D 2(1-\sigma)(u_{xxyy}+u_{yyxx})\,\delta u\,dx\,dy
\]
\[
+ \oint_\Gamma 2(1-\sigma)\big( -(u_{xx}\,\delta u_y - u_{xxy}\,\delta u)\,dx + (u_{yy}\,\delta u_x - u_{yyx}\,\delta u)\,dy \big).
\]
Using (c) gives us
\[
\iint_D 2(1-\sigma)(-2u_{xy}\,\delta u_{xy})\,dx\,dy = \iint_D 2(1-\sigma)(-2u_{xyxy}\,\delta u)\,dx\,dy
\]
\[
+ \oint_\Gamma 2(1-\sigma)(u_{xy}\,\delta u_x - u_{xyx}\,\delta u)\,dx - \oint_\Gamma 2(1-\sigma)(u_{xy}\,\delta u_y - u_{xyy}\,\delta u)\,dy.
\]
Note that
\[
\frac{\partial(\delta u)}{\partial n}\,ds = \delta u_x\,dy - \delta u_y\,dx.
\]
Using the above results, we obtain
\[
\delta I = 2\int_{t_0}^{t_1}\iint_D (\nabla^4 u)\,\delta u\,dx\,dy\,dt
+ 2\int_{t_0}^{t_1}\oint_\Gamma \left( -\frac{\partial(\nabla^2 u)}{\partial n}\,\delta u + (\nabla^2 u)\,\frac{\partial(\delta u)}{\partial n} \right) ds\,dt
\]
\[
+ 2(1-\sigma)\int_{t_0}^{t_1}\oint_\Gamma \big( (u_{yy}\,\delta u_x - u_{xy}\,\delta u_y)\,dy + (u_{xy}\,\delta u_x - u_{xx}\,\delta u_y)\,dx \big)\,dt.
\]
Solution 49.20
1. Exact Solution. The Euler differential equation is
\[ \frac{d}{dx}F_{,y'} = F_{,y} \]
\[ \frac{d}{dx}[2y'] = -2y - 2x \]
\[ y'' + y = -x. \]
The general solution is
\[ y = c_1\cos x + c_2\sin x - x. \]
Applying the boundary conditions we obtain
\[ y = \frac{\sin x}{\sin 1} - x. \]
The value of the integral for this extremal is
\[ J\!\left[\frac{\sin x}{\sin 1} - x\right] = \cot(1) - \frac{2}{3} \approx -0.0245741. \]
n = 0. We consider an approximate solution of the form $y(x) = a\,x(1-x)$. We substitute this into the functional.
\[ J(a) = \int_0^1 \left( (y')^2 - y^2 - 2xy \right) dx = \frac{3}{10}a^2 - \frac{1}{6}a \]
The only stationary point is
\[ J'(a) = \frac{3}{5}a - \frac{1}{6} = 0, \qquad a = \frac{5}{18}. \]
Since
\[ J''\!\left(\frac{5}{18}\right) = \frac{3}{5} > 0, \]
we see that this point is a minimum. The approximate solution is
\[ y(x) = \frac{5}{18}x(1-x). \]
[Figure 49.5: One Term Approximation and Exact Solution.]
This one term approximation and the exact solution are plotted in Figure 49.5. The value of the functional is
\[ J = -\frac{5}{216} \approx -0.0231481. \]
n = 1. We consider an approximate solution of the form $y(x) = x(1-x)(a + bx)$. We substitute this into the functional.
\[ J(a,b) = \int_0^1 \left( (y')^2 - y^2 - 2xy \right) dx = \frac{1}{210}\left( 63a^2 + 63ab + 26b^2 - 35a - 21b \right) \]
We find the stationary points.
\[ J_a = \frac{1}{30}(18a + 9b - 5) = 0 \]
\[ J_b = \frac{1}{210}(63a + 52b - 21) = 0 \]
\[ a = \frac{71}{369}, \qquad b = \frac{7}{41} \]
Since the Hessian matrix
\[
H = \begin{pmatrix} J_{aa} & J_{ab} \\ J_{ba} & J_{bb} \end{pmatrix}
= \begin{pmatrix} \frac{3}{5} & \frac{3}{10} \\ \frac{3}{10} & \frac{26}{105} \end{pmatrix},
\]
is positive definite,
\[ \frac{3}{5} > 0, \qquad \det(H) = \frac{41}{700}, \]
we see that this point is a minimum. The approximate solution is
\[ y(x) = x(1-x)\left( \frac{71}{369} + \frac{7}{41}x \right). \]
This two term approximation and the exact solution are plotted in Figure 49.6. The value of the functional is
\[ J = -\frac{136}{5535} \approx -0.0245709. \]
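The Rayleigh–Ritz computation above is easy to verify numerically. The sketch below (assuming NumPy is available; the grid resolutions are arbitrary choices) evaluates $J[y] = \int_0^1((y')^2 - y^2 - 2xy)\,dx$ over the one-term trial family $y = a\,x(1-x)$ and locates the minimum, which should land at $a = 5/18$ with $J = -5/216$:

```python
import numpy as np

def trapz(f, x):
    # simple composite trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

def J(a, x):
    # J[y] = ∫₀¹ ((y')² − y² − 2xy) dx with the trial function y = a x(1−x)
    y, dy = a * x * (1 - x), a * (1 - 2 * x)
    return trapz(dy**2 - y**2 - 2 * x * y, x)

x = np.linspace(0.0, 1.0, 20001)
a_grid = np.linspace(0.0, 0.6, 601)
vals = [J(a, x) for a in a_grid]
a_best = a_grid[int(np.argmin(vals))]

assert abs(a_best - 5 / 18) < 1e-2      # minimizer a = 5/18
assert abs(min(vals) + 5 / 216) < 1e-4  # J = −5/216 ≈ −0.0231481
```

A brute-force scan is used instead of setting $J'(a) = 0$ so that the check is independent of the algebra being verified.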
2. Exact Solution. The Euler differential equation is
\[ \frac{d}{dx}F_{,y'} = F_{,y} \]
\[ \frac{d}{dx}[2y'] = 2y + 2x \]
\[ y'' - y = x. \]
[Figure 49.6: Two Term Approximation and Exact Solution.]
The general solution is
\[ y = c_1\cosh x + c_2\sinh x - x. \]
Applying the boundary conditions, we obtain
\[ y = \frac{2\sinh x}{\sinh 2} - x. \]
The value of the integral for this extremal is
\[ J = -\frac{2(e^4 - 13)}{3(e^4 - 1)} \approx -0.517408. \]
Polynomial Approximation. Consider an approximate solution of the form
\[ y(x) = x(2-x)\left( a_0 + a_1 x + \cdots + a_n x^n \right). \]
The one term approximate solution is
\[ y(x) = -\frac{5}{14}x(2-x). \]
[Figure 49.7: One Term Approximation and Exact Solution.]
This one term approximation and the exact solution are plotted in Figure 49.7. The value of the functional is
\[ J = -\frac{10}{21} \approx -0.47619. \]
The two term approximate solution is
\[ y(x) = x(2-x)\left( -\frac{33}{161} - \frac{7}{46}x \right). \]
[Figure 49.8: Two Term Approximation and Exact Solution.]
This two term approximation and the exact solution are plotted in Figure 49.8. The value of the functional is
\[ J = -\frac{416}{805} \approx -0.51677. \]
Sine Series Approximation. Consider an approximate solution of the form
\[ y(x) = a_1\sin\left(\frac{\pi x}{2}\right) + a_2\sin(\pi x) + \cdots + a_n\sin\left(\frac{n\pi x}{2}\right). \]
The one term approximate solution is
\[ y(x) = -\frac{16}{\pi(\pi^2+4)}\sin\left(\frac{\pi x}{2}\right). \]
[Figure 49.9: One Term Sine Series Approximation and Exact Solution.]
This one term approximation and the exact solution are plotted in Figure 49.9. The value of the functional is
\[ J = -\frac{64}{\pi^2(\pi^2+4)} \approx -0.467537. \]
The two term approximate solution is
\[ y(x) = -\frac{16}{\pi(\pi^2+4)}\sin\left(\frac{\pi x}{2}\right) + \frac{2}{\pi(\pi^2+1)}\sin(\pi x). \]
This two term approximation and the exact solution are plotted in Figure 49.10. The value of the functional is
\[ J = -\frac{4(17\pi^2+20)}{\pi^2(\pi^4+5\pi^2+4)} \approx -0.504823. \]
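As with the polynomial trials, the sine-series coefficient can be checked numerically. This sketch (assuming NumPy; grid sizes are arbitrary) minimizes $J[y] = \int_0^2((y')^2 + y^2 + 2xy)\,dx$ over the one-term family $y = a\sin(\pi x/2)$ and compares with $a = -16/(\pi(\pi^2+4))$:

```python
import numpy as np

def trapz(f, x):
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 2.0, 20001)

def J2(a):
    # J[y] = ∫₀² ((y')² + y² + 2xy) dx with trial function y = a sin(πx/2)
    y = a * np.sin(np.pi * x / 2)
    dy = a * (np.pi / 2) * np.cos(np.pi * x / 2)
    return trapz(dy**2 + y**2 + 2 * x * y, x)

a_grid = np.linspace(-1.0, 0.0, 1001)
vals = [J2(a) for a in a_grid]
a_best = a_grid[int(np.argmin(vals))]

a_exact = -16 / (np.pi * (np.pi**2 + 4))
assert abs(a_best - a_exact) < 1e-2
assert abs(min(vals) - (-64 / (np.pi**2 * (np.pi**2 + 4)))) < 1e-4
```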
3. Exact Solution. The Euler differential equation is
\[ \frac{d}{dx}F_{,y'} = F_{,y} \]
\[ \frac{d}{dx}[2xy'] = -2\frac{x^2-1}{x}y - 2x^2 \]
\[ y'' + \frac{1}{x}y' + \left( 1 - \frac{1}{x^2} \right)y = -x \]
The general solution is
\[ y = c_1 J_1(x) + c_2 Y_1(x) - x. \]
[Figure 49.10: Two Term Sine Series Approximation and Exact Solution.]
Applying the boundary conditions we obtain
\[ y = \frac{(Y_1(2) - 2Y_1(1))J_1(x) + (2J_1(1) - J_1(2))Y_1(x)}{J_1(1)Y_1(2) - Y_1(1)J_1(2)} - x. \]
The value of the integral for this extremal is
\[ J \approx -0.310947. \]
Polynomial Approximation. Consider an approximate solution of the form
\[ y(x) = (x-1)(2-x)\left( a_0 + a_1 x + \cdots + a_n x^n \right). \]
The one term approximate solution is
\[ y(x) = (x-1)(2-x)\,\frac{23}{6(40\log 2 - 23)}. \]
[Figure 49.11: One Term Polynomial Approximation and Exact Solution.]
This one term approximation and the exact solution are plotted in Figure 49.11. The one term approximation is surprisingly close to the exact solution. The value of the functional is
\[ J = -\frac{529}{360(40\log 2 - 23)} \approx -0.310935. \]
Solution 49.21
1. The spectrum of $T$ is the set,
\[ \{\lambda : (T - \lambda I) \text{ is not invertible}\}. \]
\[ (T - \lambda I)f = g \]
\[ \int_{-\infty}^{\infty} K(x-y)f(y)\,dy - \lambda f(x) = g(x) \]
\[ 2\pi\hat K(\omega)\hat f(\omega) - \lambda\hat f(\omega) = \hat g(\omega) \]
\[ \left( 2\pi\hat K(\omega) - \lambda \right)\hat f(\omega) = \hat g(\omega) \]
We may not be able to solve for $\hat f(\omega)$, (and hence invert $T - \lambda I$), if $\lambda = 2\pi\hat K(\omega)$. Thus all values of $2\pi\hat K(\omega)$ are in the spectrum. If $\hat K(\omega)$ is everywhere nonzero we consider the case $\lambda = 0$. We have the equation,
\[ \int_{-\infty}^{\infty} K(x-y)f(y)\,dy = 0 \]
Since there are an infinite number of $L^2(-\infty,\infty)$ functions which satisfy this, (those which are nonzero on a set of measure zero), we cannot invert the equation. Thus $\lambda = 0$ is in the spectrum. The spectrum of $T$ is the range of $2\pi\hat K(\omega)$ plus zero.

2. Let $\lambda$ be a nonzero eigenvalue with eigenfunction $\phi$.
\[ (T - \lambda I)\phi = 0, \quad \forall x \]
\[ \int_{-\infty}^{\infty} K(x-y)\phi(y)\,dy - \lambda\phi(x) = 0, \quad \forall x \]
Since $K$ is continuous, $T\phi$ is continuous. This implies that the eigenfunction $\phi$ is continuous. We take the Fourier transform of the above equation.
\[ 2\pi\hat K(\omega)\hat\phi(\omega) - \lambda\hat\phi(\omega) = 0, \quad \forall\omega \]
\[ \left( 2\pi\hat K(\omega) - \lambda \right)\hat\phi(\omega) = 0, \quad \forall\omega \]
If $\phi(x)$ is absolutely integrable, then $\hat\phi(\omega)$ is continuous. Since $\phi(x)$ is not identically zero, $\hat\phi(\omega)$ is not identically zero. Continuity implies that $\hat\phi(\omega)$ is nonzero on some interval of positive length, $(a,b)$. From the above equation we see that $2\pi\hat K(\omega) = \lambda$ for $\omega \in (a,b)$.

Now assume that $2\pi\hat K(\omega) = \lambda$ in some interval $(a,b)$. Any function $\hat\phi(\omega)$ that is nonzero only for $\omega \in (a,b)$ satisfies
\[ \left( 2\pi\hat K(\omega) - \lambda \right)\hat\phi(\omega) = 0, \quad \forall\omega. \]
By taking the inverse Fourier transform we obtain an eigenfunction $\phi(x)$ of the eigenvalue $\lambda$.

3. First we use the Fourier transform to find an explicit representation of $u = (T - \lambda I)^{-1}f$.
\[ u = (T - \lambda I)^{-1}f \qquad (T - \lambda I)u = f \]
\[ \int_{-\infty}^{\infty} K(x-y)u(y)\,dy - \lambda u = f \]
\[ 2\pi\hat K\hat u - \lambda\hat u = \hat f \]
\[ \hat u = \frac{\hat f}{2\pi\hat K - \lambda} \]
\[ \hat u = -\frac{1}{\lambda}\,\frac{\hat f}{1 - 2\pi\hat K/\lambda} \]
For $|\lambda| > |2\pi\hat K|$ we can expand the denominator in a geometric series.
\[ \hat u = -\frac{1}{\lambda}\sum_{n=0}^{\infty}\left( \frac{2\pi\hat K}{\lambda} \right)^n \hat f \]
\[ u = -\frac{1}{\lambda}\sum_{n=0}^{\infty}\frac{1}{\lambda^n}\int_{-\infty}^{\infty} K_n(x-y)f(y)\,dy \]
Here $K_n$ is the $n^{\text{th}}$ iterated kernel. Now we form the Neumann series expansion.
\[
u = (T - \lambda I)^{-1}f = -\frac{1}{\lambda}\left( I - \frac{1}{\lambda}T \right)^{-1}f
= -\frac{1}{\lambda}\sum_{n=0}^{\infty}\frac{1}{\lambda^n}T^n f
= -\frac{1}{\lambda}\sum_{n=0}^{\infty}\frac{1}{\lambda^n}\int_{-\infty}^{\infty} K_n(x-y)f(y)\,dy
\]
The Neumann series is the same as the series we derived with the Fourier transform.
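The agreement between the direct resolvent and the Neumann series can be illustrated numerically. In the sketch below (assuming NumPy; the Gaussian kernel, domain, and grid are arbitrary illustrative choices, not part of the problem), the operator $T$ is discretized by quadrature and $(T - \lambda I)^{-1}f$ is compared with the truncated series $-\frac{1}{\lambda}\sum_n T^n f/\lambda^n$ for $|\lambda|$ larger than the operator norm:

```python
import numpy as np

# Discretize Tf(x) = ∫ K(x−y) f(y) dy with a Gaussian kernel on [−L, L].
L, n = 10.0, 801
x = np.linspace(-L, L, n)
h = x[1] - x[0]
K = np.exp(-np.subtract.outer(x, x) ** 2)  # K(x−y) = exp(−(x−y)²)
T = K * h                                   # quadrature matrix approximating T

f = np.exp(-x**2) * np.cos(x)
lam = 10.0                                  # |λ| exceeds ‖T‖, so the series converges

u_direct = np.linalg.solve(T - lam * np.eye(n), f)

# Neumann series: u = −(1/λ) Σ_{k≥0} T^k f / λ^k
u_series = np.zeros(n)
Tkf = f.copy()
for k in range(60):
    u_series += Tkf / lam**k
    Tkf = T @ Tkf
u_series *= -1 / lam

assert np.max(np.abs(u_series - u_direct)) < 1e-10
```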
Solution 49.22
We seek a transformation $T$ such that
\[ (L - \lambda I)Tf = f. \]
We denote $u = Tf$ to obtain a boundary value problem,
\[ u'' - \lambda u = f, \quad u(-1) = u(1) = 0. \]
This problem has a unique solution if and only if the homogeneous adjoint problem has only the trivial solution.
\[ u'' - \lambda u = 0, \quad u(-1) = u(1) = 0. \]
This homogeneous problem has the eigenvalues and eigenfunctions,
\[ \lambda_n = -\left( \frac{n\pi}{2} \right)^2, \qquad u_n = \sin\left( \frac{n\pi}{2}(x+1) \right), \quad n \in \mathbb{N}. \]
The inhomogeneous problem has the unique solution
\[ u(x) = \int_{-1}^{1} G(x,\xi;\lambda)f(\xi)\,d\xi \]
where
\[
G(x,\xi;\lambda) = \begin{cases}
-\dfrac{\sin\left(\sqrt{-\lambda}\,(x_<+1)\right)\sin\left(\sqrt{-\lambda}\,(1-x_>)\right)}{\sqrt{-\lambda}\,\sin\left(2\sqrt{-\lambda}\right)}, & \lambda < 0, \\[2ex]
-\dfrac{1}{2}(x_<+1)(1-x_>), & \lambda = 0, \\[2ex]
-\dfrac{\sinh\left(\sqrt{\lambda}\,(x_<+1)\right)\sinh\left(\sqrt{\lambda}\,(1-x_>)\right)}{\sqrt{\lambda}\,\sinh\left(2\sqrt{\lambda}\right)}, & \lambda > 0,
\end{cases}
\]
for $\lambda \ne -(n\pi/2)^2$, $n \in \mathbb{N}$. We set
\[ Tf = \int_{-1}^{1} G(x,\xi;\lambda)f(\xi)\,d\xi \]
and note that since the kernel is continuous this is a bounded linear transformation. If $f \in W$, then
\[
(L - \lambda I)Tf = (L - \lambda I)\int_{-1}^{1} G(x,\xi;\lambda)f(\xi)\,d\xi
= \int_{-1}^{1} (L - \lambda I)[G(x,\xi;\lambda)]f(\xi)\,d\xi
= \int_{-1}^{1} \delta(x-\xi)f(\xi)\,d\xi = f(x).
\]
If $f \in U$ then
\[
T(L - \lambda I)f = \int_{-1}^{1} G(x,\xi;\lambda)\left( f''(\xi) - \lambda f(\xi) \right) d\xi
\]
\[
= \left[ G(x,\xi;\lambda)f'(\xi) \right]_{-1}^{1} - \int_{-1}^{1} G_\xi(x,\xi;\lambda)f'(\xi)\,d\xi - \lambda\int_{-1}^{1} G(x,\xi;\lambda)f(\xi)\,d\xi
\]
\[
= -\left[ G_\xi(x,\xi;\lambda)f(\xi) \right]_{-1}^{1} + \int_{-1}^{1} G_{\xi\xi}(x,\xi;\lambda)f(\xi)\,d\xi - \lambda\int_{-1}^{1} G(x,\xi;\lambda)f(\xi)\,d\xi
\]
\[
= \int_{-1}^{1} \left( G_{\xi\xi}(x,\xi;\lambda) - \lambda G(x,\xi;\lambda) \right)f(\xi)\,d\xi
= \int_{-1}^{1} \delta(x-\xi)f(\xi)\,d\xi = f(x).
\]
$L$ has the point spectrum $\lambda_n = -(n\pi/2)^2$, $n \in \mathbb{N}$.
Solution 49.23
1. We see that the solution is of the form $\phi(x) = a + x + bx^2$ for some constants $a$ and $b$. We substitute this into the integral equation.
\[ \phi(x) = x + \lambda\int_0^1 \left( x^2 y - y^2 \right)\phi(y)\,dy \]
\[ a + x + bx^2 = x + \lambda\int_0^1 \left( x^2 y - y^2 \right)\left( a + y + by^2 \right) dy \]
\[ a + bx^2 = \frac{\lambda}{60}\left( -(15 + 20a + 12b) + (20 + 30a + 15b)x^2 \right) \]
By equating the coefficients of $x^0$ and $x^2$ we solve for $a$ and $b$.
\[ a = -\frac{\lambda(\lambda+60)}{4(\lambda^2 + 5\lambda + 60)}, \qquad b = -\frac{5\lambda(\lambda-24)}{6(\lambda^2 + 5\lambda + 60)} \]
Thus the solution of the integral equation is
\[ \phi(x) = x - \frac{\lambda}{\lambda^2 + 5\lambda + 60}\left( \frac{5(\lambda-24)}{6}x^2 + \frac{\lambda+60}{4} \right). \]

2. For $x < 1$ the integral equation reduces to
\[ \phi(x) = x. \]
For $x \ge 1$ the integral equation becomes,
\[ \phi(x) = x + \lambda\int_0^1 \sin(xy)\phi(y)\,dy. \]
We could solve this problem by writing down the Neumann series. Instead we will use an eigenfunction expansion. Let $\lambda_n$ and $\phi_n$ be the eigenvalues and orthonormal eigenfunctions of
\[ \phi(x) = \lambda\int_0^1 \sin(xy)\phi(y)\,dy. \]
We expand $\phi(x)$ and $x$ in terms of the eigenfunctions.
\[ \phi(x) = \sum_{n=1}^{\infty} a_n\phi_n(x) \]
\[ x = \sum_{n=1}^{\infty} b_n\phi_n(x), \qquad b_n = \langle x, \phi_n(x)\rangle \]
We determine the coefficients $a_n$ by substituting the series expansions into the Fredholm equation and equating coefficients of the eigenfunctions.
\[ \phi(x) = x + \lambda\int_0^1 \sin(xy)\phi(y)\,dy \]
\[ \sum_{n=1}^{\infty} a_n\phi_n(x) = \sum_{n=1}^{\infty} b_n\phi_n(x) + \lambda\int_0^1 \sin(xy)\sum_{n=1}^{\infty} a_n\phi_n(y)\,dy \]
\[ \sum_{n=1}^{\infty} a_n\phi_n(x) = \sum_{n=1}^{\infty} b_n\phi_n(x) + \lambda\sum_{n=1}^{\infty} a_n\frac{1}{\lambda_n}\phi_n(x) \]
\[ a_n\left( 1 - \frac{\lambda}{\lambda_n} \right) = b_n \]
If $\lambda$ is not an eigenvalue then we can solve for the $a_n$ to obtain the unique solution.
\[ a_n = \frac{b_n}{1 - \lambda/\lambda_n} = \frac{\lambda_n b_n}{\lambda_n - \lambda} = b_n + \frac{\lambda b_n}{\lambda_n - \lambda} \]
\[ \phi(x) = x + \sum_{n=1}^{\infty} \frac{\lambda b_n}{\lambda_n - \lambda}\phi_n(x), \quad \text{for } x \ge 1. \]
If $\lambda = \lambda_m$, and $\langle x, \phi_m\rangle = 0$ then there is the one parameter family of solutions,
\[ \phi(x) = x + c\,\phi_m(x) + \sum_{\substack{n=1 \\ n \ne m}}^{\infty} \frac{\lambda b_n}{\lambda_n - \lambda}\phi_n(x), \quad \text{for } x \ge 1. \]
If $\lambda = \lambda_m$, and $\langle x, \phi_m\rangle \ne 0$ then there is no solution.
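The closed form found in part 1 of Solution 49.23 can be spot-checked by quadrature. This sketch (assuming NumPy; $\lambda = 1$ is an arbitrary test value) verifies that $\phi$ satisfies $\phi(x) = x + \lambda\int_0^1 (x^2y - y^2)\phi(y)\,dy$:

```python
import numpy as np

lam = 1.0
d = lam**2 + 5 * lam + 60

def phi(x):
    # closed-form solution from part 1
    return x - lam / d * (5 * (lam - 24) / 6 * x**2 + (lam + 60) / 4)

# verify φ(x) = x + λ ∫₀¹ (x²y − y²) φ(y) dy at several points
y = np.linspace(0.0, 1.0, 20001)
h = y[1] - y[0]
w = np.full_like(y, h)
w[0] = w[-1] = h / 2  # trapezoid quadrature weights
for x in (0.0, 0.3, 0.7, 1.0):
    rhs = x + lam * np.sum(w * (x**2 * y - y**2) * phi(y))
    assert abs(phi(x) - rhs) < 1e-6
```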
Solution 49.24
1. Suppose $Kx = L_1L_2x = \lambda x$.
\[ L_1L_2(L_1x) = L_1(L_2L_1)x = L_1(L_1L_2 - I)x = L_1(\lambda x - x) = (\lambda - 1)(L_1x) \]
\[ L_1L_2(L_2x) = (L_2L_1 + I)L_2x = L_2\lambda x + L_2x = (\lambda + 1)(L_2x) \]

2.
\[
L_1L_2 - L_2L_1 = \left( \frac{d}{dt} + \frac{t}{2} \right)\left( -\frac{d}{dt} + \frac{t}{2} \right) - \left( -\frac{d}{dt} + \frac{t}{2} \right)\left( \frac{d}{dt} + \frac{t}{2} \right)
\]
\[
= -\frac{d^2}{dt^2} + \frac{t}{2}\frac{d}{dt} + \frac{1}{2}I - \frac{t}{2}\frac{d}{dt} + \frac{t^2}{4}I - \left( -\frac{d^2}{dt^2} - \frac{t}{2}\frac{d}{dt} - \frac{1}{2}I + \frac{t}{2}\frac{d}{dt} + \frac{t^2}{4}I \right) = I
\]
\[
L_1L_2 = -\frac{d^2}{dt^2} + \frac{1}{2}I + \frac{t^2}{4}I = K + \frac{1}{2}I
\]
We note that $e^{-t^2/4}$ is an eigenfunction corresponding to the eigenvalue $\lambda = 1/2$. Since $L_1 e^{-t^2/4} = 0$ the result of this problem does not produce any negative eigenvalues. However, $L_2^n e^{-t^2/4}$ is the product of $e^{-t^2/4}$ and a polynomial of degree $n$ in $t$. Since this function is square integrable it is an eigenfunction. Thus we have the eigenvalues and eigenfunctions,
\[ \lambda_n = n - \frac{1}{2}, \qquad \phi_n = \left( \frac{t}{2} - \frac{d}{dt} \right)^{n-1} e^{-t^2/4}, \quad \text{for } n \in \mathbb{N}. \]
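The eigenvalue claim in Solution 49.24 can be checked with finite differences (a sketch assuming NumPy; grid extent and spacing are arbitrary). Applying $K = -\frac{d^2}{dt^2} + \frac{t^2}{4}$ to the first two eigenfunctions should reproduce $\lambda_n = n - \frac12$:

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 4001)
h = t[1] - t[0]

def K_apply(u):
    # K = −d²/dt² + t²/4, second derivative by central differences (interior points)
    d2 = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return -d2 + (t[1:-1] ** 2 / 4) * u[1:-1]

# φ₁ = e^{−t²/4} (λ = 1/2) and φ₂ = (t/2 − d/dt) e^{−t²/4} = t e^{−t²/4} (λ = 3/2)
for n, phi in ((1, np.exp(-t**2 / 4)), (2, t * np.exp(-t**2 / 4))):
    lam = n - 0.5
    err = np.max(np.abs(K_apply(phi) - lam * phi[1:-1]))
    assert err < 1e-3
```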
Solution 49.25
Since $\lambda_1$ is in the residual spectrum of $T$, there exists a nonzero $y$ such that
\[ \langle (T - \lambda_1 I)x, y\rangle = 0 \]
for all $x$. Now we apply the definition of the adjoint.
\[ \langle x, (T - \lambda_1 I)^* y\rangle = 0, \quad \forall x \]
\[ \langle x, (T^* - \bar\lambda_1 I)y\rangle = 0, \quad \forall x \]
\[ (T^* - \bar\lambda_1 I)y = 0 \]
$y$ is an eigenfunction of $T^*$ corresponding to the eigenvalue $\bar\lambda_1$.
Solution 49.26
1.
\[ u''(t) + \int_0^1 \sin(k(s-t))u(s)\,ds = f(t), \qquad u(0) = u'(0) = 0 \]
\[ u''(t) + \cos(kt)\int_0^1 \sin(ks)u(s)\,ds - \sin(kt)\int_0^1 \cos(ks)u(s)\,ds = f(t) \]
\[ u''(t) + c_1\cos(kt) - c_2\sin(kt) = f(t) \]
\[ u''(t) = f(t) - c_1\cos(kt) + c_2\sin(kt) \]
The solution of
\[ u''(t) = g(t), \qquad u(0) = u'(0) = 0 \]
using Green functions is
\[ u(t) = \int_0^t (t-\tau)g(\tau)\,d\tau. \]
Thus the solution of our problem has the form,
\[ u(t) = \int_0^t (t-\tau)f(\tau)\,d\tau - c_1\int_0^t (t-\tau)\cos(k\tau)\,d\tau + c_2\int_0^t (t-\tau)\sin(k\tau)\,d\tau \]
\[ u(t) = \int_0^t (t-\tau)f(\tau)\,d\tau - c_1\frac{1-\cos(kt)}{k^2} + c_2\frac{kt-\sin(kt)}{k^2} \]
We could determine the constants by multiplying in turn by $\cos(kt)$ and $\sin(kt)$ and integrating from 0 to 1. This would yield a set of two linear equations for $c_1$ and $c_2$.
2.
\[ u(x) = \lambda\int_0^{\pi} \sum_{n=1}^{\infty} \frac{\sin nx\,\sin ns}{n}\,u(s)\,ds \]
We expand $u(x)$ in a sine series.
\[ \sum_{n=1}^{\infty} a_n\sin nx = \lambda\int_0^{\pi} \left( \sum_{n=1}^{\infty} \frac{\sin nx\,\sin ns}{n} \right)\left( \sum_{m=1}^{\infty} a_m\sin ms \right) ds \]
\[ \sum_{n=1}^{\infty} a_n\sin nx = \lambda\sum_{n=1}^{\infty} \frac{\sin nx}{n}\sum_{m=1}^{\infty} \int_0^{\pi} a_m\sin ns\,\sin ms\,ds \]
\[ \sum_{n=1}^{\infty} a_n\sin nx = \lambda\sum_{n=1}^{\infty} \frac{\sin nx}{n}\sum_{m=1}^{\infty} \frac{\pi}{2}a_m\delta_{mn} \]
\[ \sum_{n=1}^{\infty} a_n\sin nx = \frac{\lambda\pi}{2}\sum_{n=1}^{\infty} \frac{a_n\sin nx}{n} \]
The eigenvalues and eigenfunctions are
\[ \lambda_n = \frac{2n}{\pi}, \qquad u_n = \sin nx, \quad n \in \mathbb{N}. \]
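The eigenvalue relation $\lambda_n = 2n/\pi$ can be confirmed by truncating the kernel sum and applying the operator numerically (a sketch assuming NumPy; the truncation order and grid size are arbitrary):

```python
import numpy as np

M = 50  # truncation of the kernel sum Σ sin(nx) sin(ns)/n
s = np.linspace(0.0, np.pi, 4001)
h = s[1] - s[0]
w = np.full_like(s, h)
w[0] = w[-1] = h / 2  # trapezoid weights

def apply_K(f):
    # (Kf)(x) = ∫₀^π Σ_m sin(mx) sin(ms)/m · f(s) ds, term by term
    out = np.zeros_like(s)
    for m in range(1, M + 1):
        out += np.sin(m * s) * np.sum(w * np.sin(m * s) * f) / m
    return out

for n in (1, 2, 5):
    u = np.sin(n * s)
    lam = 2 * n / np.pi
    assert np.max(np.abs(lam * apply_K(u) - u)) < 1e-6
```

The trapezoid rule on a uniform grid integrates products of integer-frequency sines over $[0,\pi]$ essentially exactly, so the check is tight.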
3.
\[ \phi(\theta) = \lambda\int_0^{2\pi} \frac{1}{2\pi}\,\frac{1-r^2}{1 - 2r\cos(\theta-t) + r^2}\,\phi(t)\,dt, \qquad |r| < 1 \]
We use Poisson's formula.
\[ \phi(\theta) = \lambda u(r,\theta), \]
where $u(r,\theta)$ is harmonic in the unit disk and satisfies, $u(1,\theta) = \phi(\theta)$. For a solution we need $\lambda = 1$ and that $u(r,\theta)$ is independent of $r$. In this case $u(\theta)$ satisfies
\[ u''(\theta) = 0, \qquad u(\theta) = \phi(\theta). \]
The solution is $\phi(\theta) = c_1 + c_2\theta$. There is only one eigenvalue and corresponding eigenfunction,
\[ \lambda = 1, \qquad \phi = c_1 + c_2\theta. \]
4.
\[ \phi(x) = \lambda\int_{-\pi}^{\pi} \cos^n(x-\xi)\phi(\xi)\,d\xi \]
We expand the kernel in a Fourier series. We could find the expansion by integrating to find the Fourier coefficients, but it is easier to expand $\cos^n(x)$ directly.
\[
\cos^n(x) = \left( \frac12\left( e^{ix} + e^{-ix} \right) \right)^n
= \frac{1}{2^n}\left( \binom{n}{0} e^{inx} + \binom{n}{1} e^{i(n-2)x} + \cdots + \binom{n}{n-1} e^{-i(n-2)x} + \binom{n}{n} e^{-inx} \right)
\]
If $n$ is odd,
\[
\cos^n(x) = \frac{1}{2^n}\left( \binom{n}{0}\left( e^{inx} + e^{-inx} \right) + \binom{n}{1}\left( e^{i(n-2)x} + e^{-i(n-2)x} \right) + \cdots + \binom{n}{(n-1)/2}\left( e^{ix} + e^{-ix} \right) \right)
\]
\[
= \frac{1}{2^n}\left( \binom{n}{0}2\cos(nx) + \binom{n}{1}2\cos((n-2)x) + \cdots + \binom{n}{(n-1)/2}2\cos(x) \right)
\]
\[
= \frac{1}{2^{n-1}}\sum_{m=0}^{(n-1)/2}\binom{n}{m}\cos((n-2m)x)
= \frac{1}{2^{n-1}}\sum_{\substack{k=1 \\ k \text{ odd}}}^{n}\binom{n}{(n-k)/2}\cos(kx).
\]
If $n$ is even,
\[
\cos^n(x) = \frac{1}{2^n}\left( \binom{n}{0}\left( e^{inx} + e^{-inx} \right) + \cdots + \binom{n}{n/2-1}\left( e^{i2x} + e^{-i2x} \right) + \binom{n}{n/2} \right)
\]
\[
= \frac{1}{2^n}\binom{n}{n/2} + \frac{1}{2^{n-1}}\sum_{m=0}^{(n-2)/2}\binom{n}{m}\cos((n-2m)x)
= \frac{1}{2^n}\binom{n}{n/2} + \frac{1}{2^{n-1}}\sum_{\substack{k=2 \\ k \text{ even}}}^{n}\binom{n}{(n-k)/2}\cos(kx).
\]
We will denote,
\[ \cos^n(x-\xi) = \frac{a_0}{2} + \sum_{k=1}^{n} a_k\cos(k(x-\xi)), \]
where
\[ a_k = \frac{1 + (-1)^{n-k}}{2}\,\frac{1}{2^{n-1}}\binom{n}{(n-k)/2}. \]
We substitute this into the integral equation.
\[ \phi(x) = \lambda\int_{-\pi}^{\pi} \left( \frac{a_0}{2} + \sum_{k=1}^{n} a_k\cos(k(x-\xi)) \right)\phi(\xi)\,d\xi \]
\[ \phi(x) = \lambda\frac{a_0}{2}\int_{-\pi}^{\pi}\phi(\xi)\,d\xi + \lambda\sum_{k=1}^{n} a_k\left( \cos(kx)\int_{-\pi}^{\pi}\cos(k\xi)\phi(\xi)\,d\xi + \sin(kx)\int_{-\pi}^{\pi}\sin(k\xi)\phi(\xi)\,d\xi \right) \]
For even $n$, substituting $\phi(x) = 1$ yields $\lambda = \frac{1}{\pi a_0}$. For $n$ and $m$ both even or odd, substituting $\phi(x) = \cos(mx)$ or $\phi(x) = \sin(mx)$ yields $\lambda = \frac{1}{\pi a_m}$. For even $n$ we have the eigenvalues and eigenvectors,
\[ \lambda_0 = \frac{1}{\pi a_0}, \qquad \phi_0 = 1, \]
\[ \lambda_m = \frac{1}{\pi a_{2m}}, \qquad \phi_m^{(1)} = \cos(2mx), \quad \phi_m^{(2)} = \sin(2mx), \quad m = 1,2,\dots,n/2. \]
For odd $n$ we have the eigenvalues and eigenvectors,
\[ \lambda_m = \frac{1}{\pi a_{2m-1}}, \qquad \phi_m^{(1)} = \cos((2m-1)x), \quad \phi_m^{(2)} = \sin((2m-1)x), \quad m = 1,2,\dots,(n+1)/2. \]
Solution 49.27
1. First we shift the range of integration to rewrite the kernel.
\[ \phi(x) = \lambda\int_0^{2\pi} \left( 2\pi^2 - 6\pi|x-s| + 3(x-s)^2 \right)\phi(s)\,ds \]
\[ \phi(x) = \lambda\int_{-x}^{2\pi - x} \left( 2\pi^2 - 6\pi|y| + 3y^2 \right)\phi(x+y)\,dy \]
We expand the kernel in a Fourier series.
\[ K(y) = 2\pi^2 - 6\pi|y| + 3y^2 = \sum_{n=-\infty}^{\infty} c_n e^{iny} \]
\[ c_n = \frac{1}{2\pi}\int_{-x}^{2\pi - x} K(y) e^{-iny}\,dy = \begin{cases} \frac{6}{n^2}, & n \ne 0, \\ 0, & n = 0 \end{cases} \]
\[ K(y) = \sum_{\substack{n=-\infty \\ n \ne 0}}^{\infty} \frac{6}{n^2} e^{iny} = \sum_{n=1}^{\infty} \frac{12}{n^2}\cos(ny) \]
\[ K(x,s) = \sum_{n=1}^{\infty} \frac{12}{n^2}\cos(n(x-s)) = \sum_{n=1}^{\infty} \frac{12}{n^2}\left( \cos(nx)\cos(ns) + \sin(nx)\sin(ns) \right) \]
Now we substitute the Fourier series expression for the kernel into the eigenvalue problem.
\[ \phi(x) = 12\lambda\int_0^{2\pi} \left( \sum_{n=1}^{\infty} \frac{1}{n^2}\left( \cos(nx)\cos(ns) + \sin(nx)\sin(ns) \right) \right)\phi(s)\,ds \]
From this we obtain the eigenvalues and eigenfunctions,
\[ \lambda_n = \frac{n^2}{12\pi}, \qquad \phi_n^{(1)} = \frac{1}{\sqrt\pi}\cos(nx), \quad \phi_n^{(2)} = \frac{1}{\sqrt\pi}\sin(nx), \quad n \in \mathbb{N}. \]

2. The set of eigenfunctions do not form a complete set. Only those functions with a vanishing integral on $[0,2\pi]$ can be represented. We consider the equation
\[ \int_0^{2\pi} K(x,s)\phi(s)\,ds = 0 \]
\[ \int_0^{2\pi} \left( \sum_{n=1}^{\infty} \frac{12}{n^2}\left( \cos(nx)\cos(ns) + \sin(nx)\sin(ns) \right) \right)\phi(s)\,ds = 0 \]
This has the solutions $\phi = \text{const}$. The set of eigenfunctions
\[ \phi_0 = \frac{1}{\sqrt{2\pi}}, \qquad \phi_n^{(1)} = \frac{1}{\sqrt\pi}\cos(nx), \quad \phi_n^{(2)} = \frac{1}{\sqrt\pi}\sin(nx), \quad n \in \mathbb{N}, \]
is a complete set. We can also write the eigenfunctions as
\[ \phi_n = \frac{1}{\sqrt{2\pi}} e^{inx}, \quad n \in \mathbb{Z}. \]

3. We consider the problem
\[ u - \lambda Tu = f. \]
For $\lambda \ne \lambda_n$, ($\lambda$ not an eigenvalue), we can obtain a unique solution for $u$.
\[ u(x) = f(x) + \lambda\int_0^{2\pi} \Gamma(x,s,\lambda)f(s)\,ds \]
Since $K(x,s)$ is self-adjoint and $L^2(0,2\pi)$, we have
\[
\Gamma(x,s,\lambda) = \sum_{\substack{n=-\infty \\ n \ne 0}}^{\infty} \frac{\phi_n(x)\overline{\phi_n(s)}}{\lambda_n - \lambda}
= \sum_{\substack{n=-\infty \\ n \ne 0}}^{\infty} \frac{\frac{1}{2\pi} e^{inx} e^{-ins}}{\frac{n^2}{12\pi} - \lambda}
= 6\sum_{\substack{n=-\infty \\ n \ne 0}}^{\infty} \frac{e^{in(x-s)}}{n^2 - 12\pi\lambda}
\]
\[ \Gamma(x,s,\lambda) = 12\sum_{n=1}^{\infty} \frac{\cos(n(x-s))}{n^2 - 12\pi\lambda} \]
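The Fourier coefficients $c_n = 6/n^2$, $c_0 = 0$ can be checked by direct quadrature (a sketch assuming NumPy; here the coefficients are computed over the symmetric period $[-\pi,\pi]$, where the kernel has the same form):

```python
import numpy as np

# K(y) = 2π² − 6π|y| + 3y²; expected Fourier coefficients: c_0 = 0, c_n = 6/n²
y = np.linspace(-np.pi, np.pi, 20001)
h = y[1] - y[0]
w = np.full_like(y, h)
w[0] = w[-1] = h / 2  # trapezoid weights
K = 2 * np.pi**2 - 6 * np.pi * np.abs(y) + 3 * y**2

c0 = np.sum(w * K) / (2 * np.pi)
assert abs(c0) < 1e-6
for n in (1, 2, 3, 7):
    cn = np.sum(w * K * np.cos(n * y)) / (2 * np.pi)
    assert abs(cn - 6 / n**2) < 1e-6
```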
Solution 49.28
First assume that $\lambda$ is an eigenvalue of $T$, $T\phi = \lambda\phi$.
\[ p(T)\phi = \sum_{k=0}^{n} a_k T^k\phi = \sum_{k=0}^{n} a_k\lambda^k\phi = p(\lambda)\phi \]
$p(\lambda)$ is an eigenvalue of $p(T)$.

Now assume that $\mu$ is an eigenvalue of $p(T)$, $p(T)\phi = \mu\phi$. We assume that $T$ has a complete, orthonormal set of eigenfunctions, $\{\phi_n\}$, corresponding to the set of eigenvalues $\{\lambda_n\}$. We expand $\phi$ in these eigenfunctions.
\[ p(T)\phi = \mu\phi \]
\[ p(T)\sum c_n\phi_n = \mu\sum c_n\phi_n \]
\[ \sum c_n p(\lambda_n)\phi_n = \sum c_n\mu\phi_n \]
\[ p(\lambda_n) = \mu, \quad \forall n \text{ such that } c_n \ne 0 \]
Thus all eigenvalues of $p(T)$ are of the form $p(\lambda)$ with $\lambda$ an eigenvalue of $T$.
Solution 49.29
The Fourier cosine transform is defined,
\[ \hat f(\omega) = \frac{1}{\pi}\int_0^{\infty} f(x)\cos(\omega x)\,dx, \qquad f(x) = 2\int_0^{\infty} \hat f(\omega)\cos(\omega x)\,d\omega. \]
We can write the integral equation in terms of the Fourier cosine transform.
\[ \phi(x) = f(x) + \lambda\int_0^{\infty} \cos(2xs)\phi(s)\,ds \]
\[ \phi(x) = f(x) + \lambda\pi\hat\phi(2x) \tag{49.8} \]
We multiply the integral equation by $\frac{1}{\pi}\cos(2xs)$ and integrate.
\[ \frac{1}{\pi}\int_0^{\infty}\cos(2xs)\phi(x)\,dx = \frac{1}{\pi}\int_0^{\infty}\cos(2xs)f(x)\,dx + \lambda\int_0^{\infty}\cos(2xs)\hat\phi(2x)\,dx \]
\[ \hat\phi(2s) = \hat f(2s) + \frac{\lambda}{2}\int_0^{\infty}\cos(xs)\hat\phi(x)\,dx \]
\[ \hat\phi(2s) = \hat f(2s) + \frac{\lambda}{4}\phi(s) \tag{49.9} \]
We eliminate $\hat\phi$ between (49.8) and (49.9).
\[ \left( 1 - \frac{\pi\lambda^2}{4} \right)\phi(x) = f(x) + \lambda\pi\hat f(2x) \]
\[ \phi(x) = \frac{f(x) + \lambda\int_0^{\infty} f(s)\cos(2xs)\,ds}{1 - \pi\lambda^2/4} \]
Solution 49.30
_
D
vLudxdy =
_
D
v(u
xx
+u
yy
+au
x
+bu
y
+cu) dxdy
=
_
D
(v
2
u +avu
x
+bvu
y
+cuv) dxdy
=
_
D
(u
2
v +avu
x
+bvu
y
+cuv) dxdy +
_
C
(vu uv) nds
=
_
D
(u
2
v auv
x
buv
y
uva
x
uvb
y
+cuv) dx dy +
_
C
_
auv
x
n
+buv
y
n
_
ds +
_
C
_
v
u
n
u
v
n
_
ds
Thus we see that
_
D
(vLu uL

v) dx dy =
_
C
H(u, v) ds,
where
L

v = v
xx
+v
yy
av
x
bv
y
+ (c a
x
b
y
)v
and
H(u, v) =
_
v
u
n
u
v
n
+auv
x
n
+buv
y
n
_
.
Let G be the harmonic Green function, which satises,
G = in D, G = 0 on C.
Let u satisfy Lu = 0.
_
D
(GLu uL

G) dxdy =
_
C
H(u, G) ds

_
D
uL

Gdx dy =
_
C
H(u, G) ds

_
D
uGdxdy
_
D
u(L

)Gdxdy =
_
C
H(u, G) ds

_
D
u(x )(y ) dxdy
_
D
u(L

)Gdxdy =
_
C
H(u, G) ds
u(, )
_
D
u(L

)Gdx dy =
_
C
H(u, G) ds
We expand the operators to obtain the rst form.
u +
_
D
u(aG
x
bG
y
+ (c a
x
b
y
)G) dx dy =
_
C
_
G
u
n
u
G
n
+auG
x
n
+buG
y
n
_
ds
u +
_
D
((c a
x
b
y
)GaG
x
bG
y
)udxdy =
_
C
u
G
n
ds
u +
_
D
((c a
x
b
y
)GaG
x
bG
y
)udxdy = U
Here U is the harmonic function that satises U = f on C.
We use integration by parts to obtain the second form.
u +
_
D
(cuGa
x
uGb
y
uGauG
x
buG
y
) dx dy = U
u +
_
D
(cuGa
x
uGb
y
uG+ (au)
x
G+ (bu)
y
G) dxdy
_
C
_
auG
y
n
+buG
x
n
_
ds = U
u +
_
D
(cuGa
x
uGb
y
uG+a
x
uG +au
x
G+b
y
uG+bu
y
G) dxdy = U
u +
_
D
(au
x
+bu
y
+cu)Gdxdy = U
Solution 49.31
1. First we differentiate to obtain a differential equation.
\[ \phi(x) = \lambda\int_0^1 \min(x,s)\phi(s)\,ds = \lambda\left( \int_0^x s\,\phi(s)\,ds + \int_x^1 x\,\phi(s)\,ds \right) \]
\[ \phi'(x) = \lambda\left( x\phi(x) + \int_x^1 \phi(s)\,ds - x\phi(x) \right) = \lambda\int_x^1 \phi(s)\,ds \]
\[ \phi''(x) = -\lambda\phi(x) \]
We note that $\phi(x)$ satisfies the constraints,
\[ \phi(0) = \lambda\int_0^1 0\cdot\phi(s)\,ds = 0, \qquad \phi'(1) = \lambda\int_1^1 \phi(s)\,ds = 0. \]
Thus we have the problem,
\[ \phi'' + \lambda\phi = 0, \qquad \phi(0) = \phi'(1) = 0. \]
The general solution of the differential equation is
\[
\phi(x) = \begin{cases}
a + bx & \text{for } \lambda = 0 \\
a\cos\left(\sqrt{\lambda}\,x\right) + b\sin\left(\sqrt{\lambda}\,x\right) & \text{for } \lambda > 0 \\
a\cosh\left(\sqrt{-\lambda}\,x\right) + b\sinh\left(\sqrt{-\lambda}\,x\right) & \text{for } \lambda < 0
\end{cases}
\]
We see that for $\lambda = 0$ and $\lambda < 0$ only the trivial solution satisfies the homogeneous boundary conditions. For positive $\lambda$ the left boundary condition demands that $a = 0$. The right boundary condition is then
\[ b\sqrt{\lambda}\cos\left(\sqrt{\lambda}\right) = 0 \]
The eigenvalues and eigenfunctions are
\[ \lambda_n = \left( \frac{(2n-1)\pi}{2} \right)^2, \qquad \phi_n(x) = \sin\left( \frac{(2n-1)\pi}{2}x \right), \quad n \in \mathbb{N} \]
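The eigenpairs for the kernel $\min(x,s)$ can be verified by discretizing the integral operator (a sketch assuming NumPy; the grid size is arbitrary):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
w = np.full_like(x, h)
w[0] = w[-1] = h / 2        # trapezoid weights
K = np.minimum.outer(x, x)  # kernel matrix K[i, j] = min(x_i, x_j)

# check φ(x) = λ ∫₀¹ min(x, s) φ(s) ds for the first few eigenpairs
for n in (1, 2, 3):
    lam = ((2 * n - 1) * np.pi / 2) ** 2
    phi = np.sin((2 * n - 1) * np.pi * x / 2)
    assert np.max(np.abs(lam * (K @ (w * phi)) - phi)) < 1e-3
```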
2. First we differentiate the integral equation.
\[ \phi(x) = \lambda\left( \int_0^x e^s\phi(s)\,ds + \int_x^1 e^x\phi(s)\,ds \right) \]
\[ \phi'(x) = \lambda\left( e^x\phi(x) + e^x\int_x^1 \phi(s)\,ds - e^x\phi(x) \right) = \lambda e^x\int_x^1 \phi(s)\,ds \]
\[ \phi''(x) = \lambda\left( e^x\int_x^1 \phi(s)\,ds - e^x\phi(x) \right) \]
$\phi(x)$ satisfies the differential equation
\[ \phi'' - \phi' + \lambda e^x\phi = 0. \]
We note the boundary conditions,
\[ \phi(0) - \phi'(0) = 0, \qquad \phi'(1) = 0. \]
In self-adjoint form, the problem is
\[ \left( e^{-x}\phi' \right)' + \lambda\phi = 0, \qquad \phi(0) - \phi'(0) = 0, \quad \phi'(1) = 0. \]
The Rayleigh quotient is
\[
\lambda = \frac{-\left[ e^{-x}\phi\phi' \right]_0^1 + \int_0^1 e^{-x}(\phi')^2\,dx}{\int_0^1 \phi^2\,dx}
= \frac{\phi(0)\phi'(0) + \int_0^1 e^{-x}(\phi')^2\,dx}{\int_0^1 \phi^2\,dx}
= \frac{(\phi(0))^2 + \int_0^1 e^{-x}(\phi')^2\,dx}{\int_0^1 \phi^2\,dx}
\]
Thus we see that there are only positive eigenvalues. The differential equation has the general solution
\[ \phi(x) = e^{x/2}\left( a J_1\left( 2\sqrt{\lambda}\,e^{x/2} \right) + b Y_1\left( 2\sqrt{\lambda}\,e^{x/2} \right) \right) \]
We define the functions,
\[ u(x;\lambda) = e^{x/2} J_1\left( 2\sqrt{\lambda}\,e^{x/2} \right), \qquad v(x;\lambda) = e^{x/2} Y_1\left( 2\sqrt{\lambda}\,e^{x/2} \right). \]
We write the solution to automatically satisfy the right boundary condition, $\phi'(1) = 0$,
\[ \phi(x) = v'(1;\lambda)u(x;\lambda) - u'(1;\lambda)v(x;\lambda). \]
We determine the eigenvalues from the left boundary condition, $\phi(0) - \phi'(0) = 0$. The first few are
\[ \lambda_1 \approx 0.678298, \quad \lambda_2 \approx 7.27931, \quad \lambda_3 \approx 24.9302, \quad \lambda_4 \approx 54.2593, \quad \lambda_5 \approx 95.3057. \]
The eigenfunctions are,
\[ \phi_n(x) = v'(1;\lambda_n)u(x;\lambda_n) - u'(1;\lambda_n)v(x;\lambda_n). \]
Solution 49.32
1. First note that
sin(kx) sin(lx) = sign (kl) sin(ax) sin(bx)
where
a = max([k[, [l[), b = min([k[, [l[).
Consider the analytic function,
e
i(ab)x
e
i(a+b)
2
= sin(ax) sin(bx) i cos(ax) sin(bx).

sin(kx) sin(lx)
x
2
z
2
dx = sign (kl)
_

sin(ax) sin(bx)
x
2
z
2
dx
= sign (kl)
1
2z

_

_
sin(ax) sin(bx)
x z

sin(ax) sin(bx)
x +z
_
dx
= sign (kl)
1
2z
(cos(az) sin(bz) + cos(az) sin(bz))

sin(kx) sin(lx)
x
2
z
2
dx = sign (kl)

z
cos(az) sin(bz)
2. Consider the analytic function,
e
i[p[x
e
i[q[x
x
=
cos([p[x) cos([q[x) +i(sin([p[x) sin([q[x))
x
.

cos(px) cos(qx)
x
2
dx =
_

cos([p[x) cos([q[x)
x
2
dx
= lim
x0
sin([p[x) sin([q[x)
x

cos(px) cos(qx)
x
2
dx = ([q[ [p[)
3. We use the analytic function,
i(x ia)(x ib) e
ix
(x
2
+a
2
)(x
2
+b
2
)
=
(x
2
ab) sin x + (a +b)xcos x +i((x
2
ab) cos x + (a +b)x sin x)
(x
2
+a
2
)(x
2
+b
2
)

(x
2
ab) sin x + (a +b)xcos x
x(x
2
+a
2
)(x
2
+b
2
)
= lim
x0
(x
2
ab) cos x + (a +b)x sin x
(x
2
+a
2
)(x
2
+b
2
)
=
ab
a
2
b
2

(x
2
ab) sin x + (a +b)xcos x
(x
2
+a
2
)(x
2
+b
2
)
=

ab
Solution 49.33
We consider the function
G(z) =
_
(1 z
2
)
1/2
+iz
_
log(1 +z).
For (1 z
2
)
1/2
= (1 z)
1/2
(1 +z)
1/2
we choose the angles,
< arg(1 z) < , 0 < arg(1 +z) < 2,
so that there is a branch cut on the interval (1, 1). With this choice of branch, G(z) vanishes at innity. For
the logarithm we choose the principal branch,
< arg(1 +z) < .
For t (1, 1),
G
+
(t) =
_

1 t
2
+it
_
log(1 +t),
G

(t) =
_

1 t
2
+it
_
log(1 +t),
G
+
(t) G

(t) = 2

1 t
2
log(1 +t),
1
2
_
G
+
(t) +G

(t)
_
= it log(1 +t).
For t (, 1),
G
+
(t) = i
_

1 t
2
+t
_
(log(t 1) +i) ,
G

(t) = i
_

1 t
2
+t
_
(log(t 1) i) ,
G
+
(t) G

(t) = 2
_

t
2
1 +t
_
.
For x (1, 1) we have
G(x) =
1
2
_
G
+
(x) +G

(x)
_
= ixlog(1 +x)
=
1
i2
_

1
2(

t
2
1 +t)
t x
dt +
1
i2
_
1
1
2

1 t
2
log(1 +t)
t x
dt
From this we have
_
1
1

1 t
2
log(1 +t)
t x
dt
= x log(1 +x) +
_

1
t

t
2
1
t +x
dt
=
_
xlog(1 +x) 1 +

2

1 x
2

1 x
2
arcsin(x) +xlog(2) +xlog(1 +x)
_
_
1
1

1 t
2
log(1 +t)
t x
dt =
_
x log x 1 +

1 x
2
_

2
arcsin(x)
__
Solution 49.34
Let $F(z)$ denote the value of the integral.
\[ F(z) = \frac{1}{i\pi}\,\mathrm{PV}\!\!\int_C \frac{f(t)\,dt}{t-z} \]
From the Plemelj formula we have,
\[ F^+(t_0) + F^-(t_0) = \frac{1}{i\pi}\,\mathrm{PV}\!\!\int_C \frac{f(t)}{t-t_0}\,dt, \qquad f(t_0) = F^+(t_0) - F^-(t_0). \]
With $W(z)$ defined as above, we have
\[ W^+(t_0) + W^-(t_0) = F^+(t_0) - F^-(t_0) = f(t_0), \]
and also
\[
W^+(t_0) + W^-(t_0) = \frac{1}{i\pi}\,\mathrm{PV}\!\!\int_C \frac{W^+(t) - W^-(t)}{t-t_0}\,dt
= \frac{1}{i\pi}\,\mathrm{PV}\!\!\int_C \frac{F^+(t) + F^-(t)}{t-t_0}\,dt
= \frac{1}{i\pi}\,\mathrm{PV}\!\!\int_C \frac{g(t)}{t-t_0}\,dt.
\]
Thus the solution of the integral equation is
\[ f(t_0) = \frac{1}{i\pi}\,\mathrm{PV}\!\!\int_C \frac{g(t)}{t-t_0}\,dt. \]
Solution 49.35
(i)
G() = ( )
1
_


_

G
+
() = ( )
1
_


_

() = e
i2
G
+
()
G
+
() G

() = (1 e
i2
)( )
1
_


_

G
+
() +G

() = (1 + e
i2
)( )
1
_


_

G
+
() +G

() =
1
i

_
C
(1 e
i2
) d
( )
1
( )

( )
1
i

_
C
d
( )
1
( )

( )
= i cot()
( )
1
( )

(ii) Consider the branch of


_
z
z
_

that tends to unity as z . We nd a series expansion of this function about innity.
_
z
z
_

=
_
1

z
_
_
1

z
_

=
_

j=0
(1)
j
_

j
__

z
_
j
__

k=0
(1)
k
_

k
_
_

z
_
k
_
=

j=0
_
j

k=0
(1)
j
_

j k
__

k
_

jk

k
_
z
j
Dene the polynomial
Q(z) =
n

j=0
_
j

k=0
(1)
j
_

j k
__

k
_

jk

k
_
z
nj
.
Then the function
G(z) =
_
z
z
_

z
n
Q(z)
vanishes at innity.
G
+
() =
_


_

n
Q()
G

() = e
i2
_


_

n
Q()
G
+
() G

() =
_


_

n
_
1 e
i2
_
G
+
() +G

() =
_


_

n
_
1 + e
i2
_
2Q()
1
i

_
C
_


_

n
_
1 e
i2
_
1

d =
_


_

n
_
1 + e
i2
_
2Q()
1
i

_
C
_


_

n

d = i cot()
_


_

n
(1 i cot())Q()
1
i

_
C
_


_

n

d = i cot()
__


_

n
Q()
_
Q()
Solution 49.36

_
1
1
(y)
y
2
x
2
dy =
1
2x

_
1
1
(y)
y x
dy
1
2x

_
1
1
(y)
y +x
dy
=
1
2x

_
1
1
(y)
y x
dy +
1
2x

_
1
1
(y)
y x
dy
=
1
2x

_
1
1
(y) +(y)
y x
dy
1
2x

_
1
1
(y) +(y)
y x
dy = f(x)
1
i

_
1
1
(y) +(y)
y x
dy =
2x
i
f(x)
(x) +(x) =
1
i

1 x
2

_
1
1
2y
i
f(y)
_
1 y
2
1
y x
dy +
k

1 x
2
(x) +(x) =
1

1 x
2

_
1
1
2yf(y)
_
1 y
2
y x
dy +
k

1 x
2
(x) =
1

1 x
2

_
1
1
yf(y)
_
1 y
2
y x
dy +
k

1 x
2
+g(x)
Here k is an arbitrary constant and g(x) is an arbitrary odd function.
Solution 49.37
We dene
F(z) =
1
i2

_
1
0
f(t)
t z
dt.
The Plemelj formulas and the integral equation give us,
F
+
(x) F

(x) = f(x)
F
+
(x) +F

(x) = f(x).
We solve for F
+
and F

.
F
+
(x) = ( + 1)f(x)
F

(x) = ( 1)f(x)
By writing
F
+
(x)
F

(x)
=
+ 1
1
we seek to determine F to within a multiplicative constant.
log F
+
(x) log F

(x) = log
_
+ 1
1
_
log F
+
(x) log F

(x) = log
_
1 +
1
_
+i
log F
+
(x) log F

(x) = +i
We have left o the additive term of i2n in the above equation, which will introduce factors of z
k
and (z 1)
m
in F(z). We will choose these factors so that F(z) has integrable algebraic singularites and vanishes at innity.
Note that we have dened to be the real parameter,
= log
_
1 +
1
_
.
By the discontinuity theorem,
log F(z) =
1
i2
_
1
0
+i
t z
dz
=
_
1
2
i

2
_
log
_
1 z
z
_
= log
_
_
z 1
z
_
1/2i/(2)
_
F(z) =
_
z 1
z
_
1/2i/(2)
z
k
(z 1)
m
F(z) =
1
_
z(z 1)
_
z 1
z
_
i/(2)
F

(x) =
e
i(i/(2))
_
x(1 x)
_
1 x
x
_
i/(2)
F

(x) =
e
/2
_
x(1 x)
_
1 x
x
_
i/(2)
Dene
f(x) =
1
_
x(1 x)
_
1 x
x
_
i/(2)
.
We apply the Plemelj formulas.
1
i

_
1
0
_
e
/2
e
/2
_
f(t)
t x
dt =
_
e
/2
+ e
/2
_
f(x)
1
i

_
1
0
f(t)
t x
dt = tanh
_

2
_
f(x)
Thus we see that the eigenfunctions are
(x) =
1
_
x(1 x)
_
1 x
x
_
i tanh
1
()/
for 1 < < 1.
The method used in this problem cannot be used to construct eigenfunctions for > 1. For this case we
cannot nd an F(z) that has integrable algebraic singularities and vanishes at innity.
Solution 49.38
1
i

_
1
0
f(t)
t x
dt =
i
tan(x)
f(x)
We dene the function,
F(z) =
1
i2

_
1
0
f(t)
t z
dt.
The Plemelj formula are,
F
+
(x) F

(x) = f(x)
F
+
(x) +F

(x) =
i
tan(x)
f(x).
We solve for F
+
and F

.
F

(x) =
1
2
_
1
i
tan(x)
_
f(x)
From this we see
F
+
(x)
F

(x)
=
1 i/ tan(x)
1 i/ tan(x)
= e
i2x
.
We seek to determine F(z) up to a multiplicative constant. Taking the logarithm of this equation yields
log F
+
(x) log F

(x) = i2x +i2n.


The i2n term will give us the factors (z 1)
k
and z
m
in the solution for F(z). We will choose the integers k and
m so that F(z) has only algebraic singularities and vanishes at innity. We drop the i2n term for now.
log F(z) =
1
i2
_
1
0
i2t
t z
dt
log F(z) =
1

+
z

log
_
1 z
z
_
F(z) = e
1/
_
z 1
z
_
z/
We replace e
1/
by a multiplicative constant and multiply by (z 1)
1
to give F(z) the desired properties.
F(z) =
c
(z 1)
1z/
z
z/
We evaluate F(z) above and below the branch cut.
F

(x) =
c
e
(iix)
(1 x)
1x/
x
x/
=
c e
ix
(1 x)
1x/
x
x/
Finally we use the Plemelj formulas to determine f(x).
f(x) = F
+
(x) F

(x) =
k sin(x)
(1 x)
1x/
x
x/
Solution 49.39
Consider the equation,
\[ f'(z) + \lambda\int_C \frac{f(t)}{t-z}\,dt = 1. \]
Since the integral is an analytic function of $z$ off $C$ we know that $f(z)$ is analytic off $C$. We use Cauchy's theorem to evaluate the integral and obtain a differential equation for $f(x)$.
\[ f'(x) + \lambda\int_C \frac{f(t)}{t-x}\,dt = 1 \]
\[ f'(x) + i\pi\lambda f(x) = 1 \]
\[ f(x) = \frac{1}{i\pi\lambda} + c\,e^{-i\pi\lambda x} \]
Consider the equation,
\[ f'(z) + \lambda\int_C \frac{f(t)}{t-z}\,dt = g(z). \]
Since the integral and $g(z)$ are analytic functions inside $C$ we know that $f(z)$ is analytic inside $C$. We use Cauchy's theorem to evaluate the integral and obtain a differential equation for $f(x)$.
\[ f'(x) + \lambda\int_C \frac{f(t)}{t-x}\,dt = g(x) \]
\[ f'(x) + i\pi\lambda f(x) = g(x) \]
\[ f(x) = \int_{z_0}^{x} e^{-i\pi\lambda(x-\xi)}g(\xi)\,d\xi + c\,e^{-i\pi\lambda x} \]
Here $z_0$ is any point inside $C$.
Solution 49.40

_
C
_
1
t x
+P(t x)
_
f(t) dt = g(x)
1
i

_
C
f(t)
t x
dt =
1
i
g(x)
1
i
_
C
P(t x)f(t) dt
We know that if
1
i

_
C
f()

d = g()
then
f() =
1
i

_
C
g()

d.
We apply this theorem to the integral equation.
f(x) =
1

2

_
C
g(t)
t x
dt +
1

2

_
C
__
C
P( t)f() d
_
1
t x
dt
=
1

2

_
C
g(t)
t x
dt +
1

2
_
C
_

_
C
P( t)
t x
dt
_
f() d
=
1

2

_
C
g(t)
t x
dt
1
i
_
C
P(t x)f(t) dt
Now we substitute the non-analytic part of f(t) into the integral. (The analytic part integrates to zero.)
=
1

2

_
C
g(t)
t x
dt
1
i
_
C
P(t x)
_

2

_
C
g()
t
d
_
dt
=
1

2

_
C
g(t)
t x
dt
1

2
_
C
_

1
i

_
C
P(t x)
t
dt
_
g() d
=
1

2

_
C
g(t)
t x
dt
1

2
_
C
P( x)g() d
f(x) =
1

2

_
C
g(t)
t x
dt
1

2
_
C
P(t x)g(t) dt
Solution 49.41
Solution 49.42
Part VII
Nonlinear Differential Equations
Chapter 50
Nonlinear Ordinary Differential Equations
50.1 Exercises
Exercise 50.1
A model set of equations to describe an epidemic, in which $x(t)$ is the number infected, $y(t)$ is the number susceptible, is
\[ \frac{dx}{dt} = rxy - \beta x, \qquad \frac{dy}{dt} = -rxy + \mu, \]
where $r > 0$, $\beta \ge 0$, $\mu \ge 0$. Initially $x = x_0$, $y = y_0$ at $t = 0$. Directly from the equations, without using the phase plane:
1. Find the solution, $x(t)$, $y(t)$, in the case $\beta = \mu = 0$.
2. Show for the case $\mu = 0$, $\beta \ne 0$ that $x(t)$ first decreases or increases according as $ry_0 < \beta$ or $ry_0 > \beta$. Show that $x(t) \to 0$ as $t \to \infty$ in both cases. Find $x$ as a function of $y$.
3. In the phase plane: Find the position of the singular point and its type when $\beta > 0$, $\mu > 0$.
Exercise 50.2
Find the singular points and their types for the system
\[ \frac{du}{dx} = ru + v(1-v)(p-v), \quad r > 0,\ 0 < p < 1, \]
\[ \frac{dv}{dx} = u, \]
which comes from one of our nonlinear diffusion problems. Note that there is a solution with
\[ u = \alpha v(1-v) \]
for special values of $\alpha$ and $r$. Find $v(x)$ for this special case.
Exercise 50.3
Check that $r = 1$ is a limit cycle for
\[ \frac{dx}{dt} = -y + x(1-r^2), \qquad \frac{dy}{dt} = x + y(1-r^2) \]
($r^2 = x^2 + y^2$), and that all solution curves spiral into it.
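A numerical sketch of this behavior (assuming NumPy; the step size, integration time, and initial points are arbitrary choices): integrating the system with RK4 from points inside and outside the unit circle, the radius approaches 1 in both cases.

```python
import numpy as np

# ẋ = −y + x(1−r²), ẏ = x + y(1−r²), with r² = x² + y²
def f(u):
    x, y = u
    r2 = x * x + y * y
    return np.array([-y + x * (1 - r2), x + y * (1 - r2)])

def rk4_step(u, dt):
    k1 = f(u); k2 = f(u + dt / 2 * k1)
    k3 = f(u + dt / 2 * k2); k4 = f(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

for u0 in (np.array([0.1, 0.0]), np.array([2.0, 1.0])):
    u = u0.copy()
    for _ in range(20000):      # integrate to t = 20
        u = rk4_step(u, 1e-3)
    assert abs(np.hypot(*u) - 1.0) < 1e-4  # the orbit has spiraled onto r = 1
```

This is only an illustration; the analytic proof follows from $\dot r = r(1-r^2)$.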
Exercise 50.4
Consider
\[ \dot y = f(y) - x, \qquad \dot x = y. \]
Introduce new coordinates, $R$, $\theta$ given by
\[ x = R\cos\theta, \qquad y = R\sin\theta, \]
and obtain the exact differential equations for $R(t)$, $\theta(t)$. Show that $R(t)$ continually increases with $t$ when $R \ne 0$. Show that $\theta(t)$ continually decreases when $R > 1$.
Exercise 50.5
One choice of the Lorenz equations is
\[ \dot x = -10x + 10y, \qquad \dot y = Rx - y - xz, \qquad \dot z = -\frac{8}{3}z + xy, \]
where $R$ is a positive parameter.
1. Investigate the nature of the singular point at $(0,0,0)$ by finding the eigenvalues and their behavior for all $0 < R < \infty$.
2. Find the other singular points when $R > 1$.
3. Show that the appropriate eigenvalues for these other singular points satisfy the cubic
\[ 3\lambda^3 + 41\lambda^2 + 8(10+R)\lambda + 160(R-1) = 0. \]
4. There is a special value of $R$, call it $R_c$, for which the cubic has two pure imaginary roots, $\pm i\mu$ say. Find $R_c$ and $\mu$; then find the third root.
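One way to explore part 4 numerically (a sketch assuming NumPy; not part of the exercise statement): a pure imaginary pair $\lambda = \pm i\mu$ of the cubic requires the real and imaginary parts to vanish separately, giving $41\mu^2 = 160(R-1)$ and $3\mu^2 = 8(10+R)$; eliminating $\mu$ yields a candidate $R_c = 470/19$, which can be checked with a polynomial root finder:

```python
import numpy as np

R_c = 470 / 19
mu = np.sqrt(8 * (10 + R_c) / 3)

# roots of 3λ³ + 41λ² + 8(10+R)λ + 160(R−1) at R = R_c
roots = np.roots([3, 41, 8 * (10 + R_c), 160 * (R_c - 1)])
pure_imag = [r for r in roots if abs(r.real) < 1e-6]
real_root = [r for r in roots if abs(r.imag) < 1e-6]

assert len(pure_imag) == 2 and len(real_root) == 1
assert abs(abs(pure_imag[0].imag) - mu) < 1e-6
# the third root is −41/3, since the roots sum to −41/3 and the pair sums to zero
assert abs(real_root[0].real + 41 / 3) < 1e-6
```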
Exercise 50.6
In polar coordinates $(r,\theta)$, Einstein's equations lead to the equation
\[ \frac{d^2v}{d\theta^2} + v = 1 + \epsilon v^2, \qquad v = \frac{1}{r}, \]
for planetary orbits. For Mercury, $\epsilon = 8\times10^{-8}$. When $\epsilon = 0$ (Newtonian theory) the orbit is given by
\[ v = 1 + A\cos\theta, \quad \text{period } 2\pi. \]
Introduce $s = \omega\theta$ and use perturbation expansions for $v(s)$ and $\omega$ in powers of $\epsilon$ to find the corrections proportional to $\epsilon$.
[$A$ is not small; $\epsilon$ is the small parameter.]
Exercise 50.7
Consider the problem
\[ \ddot x + \omega_0^2 x + x^2 = 0, \qquad x = a,\ \dot x = 0 \text{ at } t = 0. \]
Use expansions
\[ x = a\cos\theta + a^2 x_2(\theta) + a^3 x_3(\theta) + \cdots, \qquad \theta = \omega t, \]
\[ \omega = \omega_0 + a^2\omega_2 + \cdots, \]
to find a periodic solution and its natural frequency $\omega$.
Note that, with the expansions given, there are no secular term troubles in the determination of $x_2(\theta)$, but $x_2(\theta)$ is needed in the subsequent determination of $x_3(\theta)$ and $\omega_2$.
Show that a term $a\omega_1$ in the expansion for $\omega$ would have caused trouble, so $\omega_1$ would have to be taken equal to zero.
Exercise 50.8
Consider the linearized traffic problem
\[ \frac{dp_n(t)}{dt} = \alpha\left[ p_{n-1}(t) - p_n(t) \right], \quad n \ge 1, \]
\[ p_n(0) = 0, \quad n \ge 1, \]
\[ p_0(t) = a\,e^{i\omega t}, \quad t > 0. \]
(We take the imaginary part of $p_n(t)$ in the final answers.)
1. Find $p_1(t)$ directly from the equation for $n = 1$ and note the behavior as $t \to \infty$.
2. Find the generating function
\[ G(s,t) = \sum_{n=1}^{\infty} p_n(t)s^n. \]
3. Deduce that
\[ p_n(t) \sim A_n e^{i\omega t}, \quad \text{as } t \to \infty, \]
and find the expression for $A_n$. Find the imaginary part of this $p_n(t)$.
Exercise 50.9
1. For the equation modified with a reaction time, namely
    (d/dt) p_n(t + τ) = λ [ p_{n-1}(t) - p_n(t) ],    n ≥ 1,
find a solution of the form in 1(c) by direct substitution in the equation. Again take its imaginary part.
2. Find a condition that the disturbance is stable, i.e. p_n(t) remains bounded as n → ∞.
3. In the stable case show that the disturbance is wave-like and find the wave velocity.
50.2 Hints
Hint 50.1
Hint 50.2
Hint 50.3
Hint 50.4
Hint 50.5
Hint 50.6
Hint 50.7
Hint 50.8
Hint 50.9
50.3 Solutions
Solution 50.1
1. When α = β = 0 the equations are
    dx/dt = r x y,    dy/dt = -r x y.
Adding these two equations we see that
    dx/dt = -dy/dt.
Integrating and applying the initial conditions x(0) = x0 and y(0) = y0 we obtain
    x = x0 + y0 - y.
Substituting this into the differential equation for y,
    dy/dt = -r (x0 + y0 - y) y
    dy/dt = -r (x0 + y0) y + r y².
We recognize this as a Bernoulli equation and make the substitution u = y^{-1}.
    -y^{-2} dy/dt = r (x0 + y0) y^{-1} - r
    du/dt = r (x0 + y0) u - r
    (d/dt)[ e^{-r(x0+y0)t} u ] = -r e^{-r(x0+y0)t}
    u = e^{r(x0+y0)t} ∫^t -r e^{-r(x0+y0)t} dt + c e^{r(x0+y0)t}
    u = 1/(x0 + y0) + c e^{r(x0+y0)t}
    y = [ 1/(x0 + y0) + c e^{r(x0+y0)t} ]^{-1}.
Applying the initial condition for y,
    [ 1/(x0 + y0) + c ]^{-1} = y0
    c = 1/y0 - 1/(x0 + y0).
The solution for y is then
    y = [ 1/(x0 + y0) + ( 1/y0 - 1/(x0 + y0) ) e^{r(x0+y0)t} ]^{-1}.
Since x = x0 + y0 - y, the solution to the system of differential equations is
    x = x0 + y0 - [ 1/(x0 + y0) + ( 1/y0 - 1/(x0 + y0) ) e^{r(x0+y0)t} ]^{-1},
    y = [ 1/(x0 + y0) + ( 1/y0 - 1/(x0 + y0) ) e^{r(x0+y0)t} ]^{-1}.
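The closed form above can be checked numerically. The sketch below integrates the pair of equations with a hand-rolled Runge-Kutta step and compares against the formula for y; it assumes the sign convention dx/dt = rxy, dy/dt = -rxy used in this reconstruction, and the values of r, x0, y0 are arbitrary test choices.

```python
import math

# Check of the closed-form solution of part 1, assuming the sign convention
# dx/dt = r*x*y, dy/dt = -r*x*y used above. r, x0, y0 are arbitrary test values.

def y_exact(t, r, x0, y0):
    s = x0 + y0
    return 1.0 / (1.0 / s + (1.0 / y0 - 1.0 / s) * math.exp(r * s * t))

def rk4(r, x0, y0, t_end, n):
    """Classical fourth-order Runge-Kutta for the pair of ODEs."""
    h = t_end / n
    x, y = x0, y0
    f = lambda x, y: (r * x * y, -r * x * y)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

r, x0, y0, T = 0.8, 2.0, 1.0, 3.0
x_num, y_num = rk4(r, x0, y0, T, 4000)
err_y = abs(y_num - y_exact(T, r, x0, y0))
err_conservation = abs((x_num + y_num) - (x0 + y0))
```

Note that x + y is conserved exactly by the scheme, since each Runge-Kutta stage preserves dx/dt + dy/dt = 0.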
2. For α = 0, β ≠ 0, the equation for x is
    dx/dt = r x y - β x.
At t = 0,
    dx/dt(0) = x0 (r y0 - β).
Thus we see that if r y0 < β, x is initially decreasing. If r y0 > β, x is initially increasing.
Now to show that x(t) → 0 as t → ∞. First note that if the initial conditions satisfy x0, y0 > 0 then
x(t), y(t) > 0 for all t ≥ 0 because the axes are a separatrix. y(t) is a strictly decreasing function of time.
Thus we see that at some time the quantity x(r y - β) will become negative. Since y is decreasing, this
quantity will remain negative. Thus after some time, x will become a strictly decreasing quantity. Finally
we see that regardless of the initial conditions (as long as they are positive), x(t) → 0 as t → ∞.
Taking the ratio of the two differential equations,
    dx/dy = -1 + β/(r y)
    x = -y + (β/r) ln y + c.
Applying the initial condition,
    x0 = -y0 + (β/r) ln y0 + c
    c = x0 + y0 - (β/r) ln y0.
Thus the solution for x is
    x = x0 + (y0 - y) + (β/r) ln( y/y0 ).
3. When α > 0 and β > 0 the system of equations is
    dx/dt = r x y - β x
    dy/dt = -r x y + α.
The equilibrium solutions occur when
    x (r y - β) = 0
    α - r x y = 0.
Thus the singular point is
    x = α/β,    y = β/r.
Now to classify the point. We make the substitution u = x - α/β, v = y - β/r.
    du/dt = r (u + α/β)(v + β/r) - β (u + α/β)
    dv/dt = -r (u + α/β)(v + β/r) + α
    du/dt = (r α/β) v + r u v
    dv/dt = -β u - (r α/β) v - r u v.
The linearized system is
    du/dt = (r α/β) v
    dv/dt = -β u - (r α/β) v.
Finding the eigenvalues of the linearized system,
    | -λ      r α/β       |
    | -β      -r α/β - λ  |  =  λ² + (r α/β) λ + r α = 0
    λ = [ -r α/β ± sqrt( (r α/β)² - 4 r α ) ] / 2.
Since both eigenvalues have negative real part, we see that the singular point is asymptotically stable. A
plot of the vector field for r = α = β = 1 is attached. We note that there appears to be a stable singular
point at x = y = 1 which corroborates the previous results.
Solution 50.2
The singular points are
    u = 0, v = 0;    u = 0, v = 1;    u = 0, v = p.

The point u = 0, v = 0. The linearized system about u = 0, v = 0 is
    du/dx = r u
    dv/dx = u.
The eigenvalues are
    | r - λ    0  |
    | 1       -λ  |  =  λ² - r λ = 0,    λ = 0, r.
Since there are positive eigenvalues, this point is a source. The critical point is unstable.

The point u = 0, v = 1. Linearizing the system about u = 0, v = 1, we make the substitution w = v - 1.
    du/dx = r u + (w + 1)(-w)(p - 1 - w)
    dw/dx = u
    du/dx = r u + (1 - p) w
    dw/dx = u
    | r - λ    1 - p |
    | 1        -λ    |  =  λ² - r λ + p - 1 = 0
    λ = [ r ± sqrt( r² - 4(p - 1) ) ] / 2.
Thus we see that this point is a saddle point. The critical point is unstable.

The point u = 0, v = p. Linearizing the system about u = 0, v = p, we make the substitution w = v - p.
    du/dx = r u + (w + p)(1 - p - w)(-w)
    dw/dx = u
    du/dx = r u - p(1 - p) w
    dw/dx = u
    | r - λ    -p(1 - p) |
    | 1        -λ        |  =  λ² - r λ + p(1 - p) = 0
    λ = [ r ± sqrt( r² - 4p(1 - p) ) ] / 2.
Thus we see that this point is a source. The critical point is unstable.

The solution for special values of α and r. Differentiating u = α v(1 - v),
    du/dv = α (1 - 2v).
Taking the ratio of the two differential equations,
    du/dv = r + v(1 - v)(p - v)/u
          = r + v(1 - v)(p - v)/( α v(1 - v) )
          = r + (p - v)/α.
Equating these two expressions,
    α - 2αv = r + p/α - v/α.
Equating coefficients of v, we see that α = 1/√2. Equating the constant terms,
    1/√2 = r + √2 p.
Thus we have the solution u = (1/√2) v(1 - v) when r = 1/√2 - √2 p. In this case, the differential equation for v is
    dv/dx = (1/√2) v(1 - v)
    v^{-2} dv/dx = (1/√2) v^{-1} - 1/√2.
We make the change of variables y = v^{-1}.
    dy/dx = -(1/√2) y + 1/√2
    (d/dx)[ e^{x/√2} y ] = (1/√2) e^{x/√2}
    y = e^{-x/√2} ∫^x (1/√2) e^{x/√2} dx + c e^{-x/√2}
    y = 1 + c e^{-x/√2}.
The solution for v is
    v(x) = 1/( 1 + c e^{-x/√2} ).
Solution 50.3
We make the change of variables
    x = r cos θ,    y = r sin θ.
Differentiating these expressions with respect to time,
    x' = r' cos θ - r θ' sin θ
    y' = r' sin θ + r θ' cos θ.
Substituting the new variables into the pair of differential equations,
    r' cos θ - r θ' sin θ = -r sin θ + r cos θ (1 - r²)
    r' sin θ + r θ' cos θ = r cos θ + r sin θ (1 - r²).
Multiplying the equations by cos θ and sin θ and taking their sum and difference yields
    r' = r (1 - r²)
    r θ' = r.
We can integrate the second equation.
    r' = r (1 - r²)
    θ = t + θ0.
At this point we could note that r' > 0 in (0, 1) and r' < 0 in (1, ∞). Thus if r is not initially zero, then the
solution tends to r = 1.
Alternatively, we can solve the equation for r exactly.
    r' = r - r³
    r'/r³ = 1/r² - 1.
We make the change of variables u = 1/r².
    -(1/2) u' = u - 1
    u' + 2u = 2
    u = e^{-2t} ∫^t 2 e^{2t} dt + c e^{-2t}
    u = 1 + c e^{-2t}
    r = 1/sqrt( 1 + c e^{-2t} ).
Thus we see that if r is initially nonzero, the solution tends to 1 as t → ∞.
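The approach to the limit cycle can be confirmed numerically. This sketch integrates the original planar system with a small Runge-Kutta stepper and compares the radius with the closed form r(t) = (1 + c e^{-2t})^{-1/2}; the starting point is an arbitrary choice inside the unit circle.

```python
import math

# Numerical check that x' = -y + x(1 - r^2), y' = x + y(1 - r^2) spirals onto
# the limit cycle r = 1, matching r(t) = (1 + c*exp(-2t))^(-1/2).

def step(x, y, h):
    f = lambda x, y: (-y + x * (1 - (x * x + y * y)),
                      x + y * (1 - (x * x + y * y)))
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 0.1, 0.0                      # arbitrary start inside the unit circle
c = 1.0 / (x * x + y * y) - 1.0      # constant fixed by the initial radius
t, h = 0.0, 0.001
for _ in range(8000):                # integrate to t = 8
    x, y = step(x, y, h)
    t += h
r_num = math.hypot(x, y)
r_exact = 1.0 / math.sqrt(1.0 + c * math.exp(-2 * t))
```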
Solution 50.4
The set of differential equations is
    ε² y' = f(y) - x
    x' = y.
We make the change of variables
    x = R cos θ,    y = (1/ε) R sin θ.
Differentiating x and y,
    x' = R' cos θ - R θ' sin θ
    y' = (1/ε)( R' sin θ + R θ' cos θ ).
The pair of differential equations become
    R' sin θ + R θ' cos θ = (1/ε)[ f( (1/ε) R sin θ ) - R cos θ ]
    R' cos θ - R θ' sin θ = (1/ε) R sin θ.
Multiplying by cos θ and sin θ and taking the sum and difference of these differential equations yields
    R' = (1/ε) sin θ f( (1/ε) R sin θ )
    R θ' = -(1/ε) R + (1/ε) cos θ f( (1/ε) R sin θ ).
Dividing by R in the second equation,
    R' = (1/ε) sin θ f( (1/ε) R sin θ )
    θ' = -(1/ε) + (1/ε) (cos θ / R) f( (1/ε) R sin θ ).
We make the assumptions that 0 < ε < 1 and that f(y) is an odd function that is nonnegative for positive y
and satisfies |f(y)| ≤ 1 for all y.
Since sin θ is odd,
    sin θ f( (1/ε) R sin θ )
is nonnegative. Thus R(t) continually increases with t when R ≠ 0.
If R > 1 then
    | (cos θ / R) f( (1/ε) R sin θ ) | ≤ | f( (1/ε) R sin θ ) | ≤ 1.
Thus the value of
    θ' = -(1/ε) + (1/ε)(cos θ / R) f( (1/ε) R sin θ )
is always nonpositive. Thus θ(t) continually decreases with t.
Solution 50.5
1. Linearizing the Lorenz equations about (0, 0, 0) yields
    ( x' )     ( -10   10     0   ) ( x )
    ( y' )  =  (  R    -1     0   ) ( y )
    ( z' )     (  0     0   -8/3  ) ( z )
The eigenvalues of the matrix are
    λ1 = -8/3,
    λ2 = ( -11 - sqrt(81 + 40R) )/2,
    λ3 = ( -11 + sqrt(81 + 40R) )/2.
There are three cases for the eigenvalues of the linearized system.
R < 1. There are three negative, real eigenvalues. In the linearized and also the nonlinear system, the
origin is a stable sink.
R = 1. There are two negative, real eigenvalues and one zero eigenvalue. In the linearized system the
origin is stable and has a center manifold plane. The linearized system does not tell us if the nonlinear
system is stable or unstable.
R > 1. There are two negative, real eigenvalues, and one positive, real eigenvalue. The origin is a saddle
point.
2. The other singular points when R > 1 are
    ( ± sqrt( (8/3)(R - 1) ), ± sqrt( (8/3)(R - 1) ), R - 1 ).
3. Linearizing about the point ( sqrt((8/3)(R-1)), sqrt((8/3)(R-1)), R - 1 ) yields
    ( X' )     ( -10                 10                   0                 ) ( X )
    ( Y' )  =  (  1                  -1                  -sqrt((8/3)(R-1)) ) ( Y )
    ( Z' )     (  sqrt((8/3)(R-1))   sqrt((8/3)(R-1))    -8/3              ) ( Z )
The characteristic polynomial of the matrix is
    λ³ + (41/3)λ² + (8(10 + R)/3)λ + (160/3)(R - 1).
Thus the eigenvalues of the matrix satisfy the polynomial
    3λ³ + 41λ² + 8(10 + R)λ + 160(R - 1) = 0.
Linearizing about the point ( -sqrt((8/3)(R-1)), -sqrt((8/3)(R-1)), R - 1 ) yields
    ( X' )     ( -10                  10                   0                ) ( X )
    ( Y' )  =  (  1                   -1                   sqrt((8/3)(R-1)) ) ( Y )
    ( Z' )     ( -sqrt((8/3)(R-1))   -sqrt((8/3)(R-1))    -8/3              ) ( Z )
The characteristic polynomial of the matrix is again
    λ³ + (41/3)λ² + (8(10 + R)/3)λ + (160/3)(R - 1).
Thus the eigenvalues satisfy the same polynomial,
    3λ³ + 41λ² + 8(10 + R)λ + 160(R - 1) = 0.
4. If the characteristic polynomial has two pure imaginary roots ±iω and one real root r, then it has the form
    (λ - r)(λ² + ω²) = λ³ - rλ² + ω²λ - rω².
Equating the λ² term and the λ term with the characteristic polynomial yields
    r = -41/3,    ω = sqrt( 8(10 + R)/3 ).
Equating the constant term gives us the equation
    (41/3)(8/3)(10 + R_c) = (160/3)(R_c - 1),
which has the solution
    R_c = 470/19.
For this critical value of R the characteristic polynomial has the roots
    λ1 = -41/3,
    λ2 = (4/19) sqrt(2090) i,
    λ3 = -(4/19) sqrt(2090) i.
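The critical value and the roots found above can be verified by direct substitution into the cubic; the sketch below evaluates the characteristic polynomial at R = R_c with complex arithmetic.

```python
import math

# Check that at R_c = 470/19 the cubic 3λ³ + 41λ² + 8(10+R)λ + 160(R-1)
# has roots ±i(4/19)√2090 and -41/3.

def cubic(lam, R):
    return 3 * lam**3 + 41 * lam**2 + 8 * (10 + R) * lam + 160 * (R - 1)

Rc = 470.0 / 19.0
omega = 4.0 * math.sqrt(2090.0) / 19.0

res_imag = abs(cubic(complex(0.0, omega), Rc))   # pure imaginary root
res_real = abs(cubic(-41.0 / 3.0, Rc))           # real root
```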
Solution 50.6
The form of the perturbation expansion is
    v(θ) = 1 + A cos θ + ε u(θ) + O(ε²),
    ω = 1 + ε ω1 + O(ε²),
where θ = ωφ. Writing the derivatives in terms of θ,
    d/dφ = (1 + ε ω1 + ···) d/dθ
    d²/dφ² = (1 + 2ε ω1 + ···) d²/dθ².
Substituting these expressions into the differential equation for v(φ),
    [ 1 + 2ε ω1 + O(ε²) ][ -A cos θ + ε u'' + O(ε²) ] + 1 + A cos θ + ε u(θ) + O(ε²)
        = 1 + ε [ 1 + 2A cos θ + A² cos² θ ] + O(ε²)
    u'' + u - 2 ω1 A cos θ = 1 + 2A cos θ + A² cos² θ + O(ε).
Equating the coefficient of ε,
    u'' + u = 1 + 2(1 + ω1) A cos θ + (1/2) A² (cos 2θ + 1)
    u'' + u = ( 1 + (1/2)A² ) + 2(1 + ω1) A cos θ + (1/2) A² cos 2θ.
To avoid secular terms, we must have ω1 = -1. A particular solution for u is
    u = 1 + (1/2) A² - (1/6) A² cos 2θ.
The solution for v is then
    v(φ) = 1 + A cos( (1 - ε)φ ) + ε [ 1 + (1/2)A² - (1/6)A² cos( 2(1 - ε)φ ) ] + O(ε²).
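The O(ε) correction found above is easy to spot-check: with the cos θ forcing removed by the choice ω1 = -1, u(θ) should satisfy u'' + u = (1 + A²/2) + (A²/2) cos 2θ. The sketch below checks this on a grid; A = 0.3 is an arbitrary test value.

```python
import math

# Verify that u(θ) = 1 + A²/2 - (A²/6) cos 2θ solves
# u'' + u = (1 + A²/2) + (A²/2) cos 2θ.

A = 0.3
u   = lambda th: 1 + A * A / 2 - (A * A / 6) * math.cos(2 * th)
upp = lambda th: (2 * A * A / 3) * math.cos(2 * th)   # analytic u''
rhs = lambda th: (1 + A * A / 2) + (A * A / 2) * math.cos(2 * th)

max_residual = max(abs(upp(th) + u(th) - rhs(th))
                   for th in [k * 0.1 for k in range(100)])
```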
Solution 50.7
Substituting the expressions for x and ω into the differential equation yields
    a² [ ω0² ( x2'' + x2 ) + β cos² θ ]
      + a³ [ ω0² ( x3'' + x3 ) - 2 ω0 ω2 cos θ + 2 β x2 cos θ ] + O(a⁴) = 0.
Equating the coefficient of a² gives us the differential equation
    x2'' + x2 = -( β/(2ω0²) ) (1 + cos 2θ).
The solution subject to the initial conditions x2(0) = x2'(0) = 0 is
    x2 = ( β/(6ω0²) ) ( -3 + 2 cos θ + cos 2θ ).
Equating the coefficient of a³ gives us the differential equation
    ω0² ( x3'' + x3 ) + β²/(3ω0²) - ( 2 ω0 ω2 + (5β²)/(6ω0²) ) cos θ
        + ( β²/(3ω0²) ) cos 2θ + ( β²/(6ω0²) ) cos 3θ = 0.
To avoid secular terms we must have
    ω2 = -5β²/(12ω0³).
Solving the differential equation for x3 subject to the initial conditions x3(0) = x3'(0) = 0,
    x3 = ( β²/(144ω0⁴) ) ( -48 + 29 cos θ + 16 cos 2θ + 3 cos 3θ ).
Thus our solution for x(t) is
    x(t) = a cos θ + a² ( β/(6ω0²) )( -3 + 2 cos θ + cos 2θ )
         + a³ ( β²/(144ω0⁴) )( -48 + 29 cos θ + 16 cos 2θ + 3 cos 3θ ) + O(a⁴),
where
    θ = ( ω0 - (5 a² β²)/(12 ω0³) ) t.
Now to see why we didn't need an a ω1 term. Assume that
    x = a cos θ + a² x2(θ) + O(a³),    θ = ωt,
    ω = ω0 + a ω1 + O(a²).
Substituting these expressions into the differential equation for x yields
    a² [ ω0² ( x2'' + x2 ) - 2 ω0 ω1 cos θ + β cos² θ ] = O(a³)
    x2'' + x2 = 2 (ω1/ω0) cos θ - ( β/(2ω0²) )(1 + cos 2θ).
In order to eliminate secular terms, we need ω1 = 0.
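The second-order term can be verified directly. The sketch below checks, with analytic derivatives, that x2(θ) satisfies its forced oscillator equation and the zero initial conditions; β and ω0 are arbitrary test values.

```python
import math

# Verify x2(θ) = (β/(6ω0²))(-3 + 2cos θ + cos 2θ) solves
# x2'' + x2 = -(β/(2ω0²))(1 + cos 2θ) with x2(0) = x2'(0) = 0.

beta, w0 = 1.7, 1.3
c = beta / (6 * w0**2)
x2   = lambda th: c * (-3 + 2 * math.cos(th) + math.cos(2 * th))
x2p  = lambda th: c * (-2 * math.sin(th) - 2 * math.sin(2 * th))
x2pp = lambda th: c * (-2 * math.cos(th) - 4 * math.cos(2 * th))
rhs  = lambda th: -(beta / (2 * w0**2)) * (1 + math.cos(2 * th))

max_residual = max(abs(x2pp(th) + x2(th) - rhs(th))
                   for th in [k * 0.2 for k in range(60)])
ic_ok = abs(x2(0.0)) < 1e-14 and abs(x2p(0.0)) < 1e-14
```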
Solution 50.8
1. The equation for p1(t) is
    dp1(t)/dt = λ [ p0(t) - p1(t) ]
    dp1(t)/dt = λ [ a e^{iωt} - p1(t) ]
    (d/dt)[ e^{λt} p1(t) ] = a λ e^{λt} e^{iωt}
    p1(t) = ( aλ/(λ + iω) ) e^{iωt} + c e^{-λt}.
Applying the initial condition, p1(0) = 0,
    p1(t) = ( aλ/(λ + iω) ) ( e^{iωt} - e^{-λt} ).
As t → ∞ the transient e^{-λt} decays and p1(t) oscillates with frequency ω.
2. We start with the differential equation for p_n(t).
    dp_n(t)/dt = λ [ p_{n-1}(t) - p_n(t) ].
Multiply by s^n and sum from n = 1 to ∞.
    Σ_{n=1}^∞ p_n'(t) s^n = λ Σ_{n=1}^∞ [ p_{n-1}(t) - p_n(t) ] s^n
    ∂G(s,t)/∂t = λ Σ_{n=0}^∞ p_n s^{n+1} - λ G(s,t)
    ∂G(s,t)/∂t = λ s p0 + λ Σ_{n=1}^∞ p_n s^{n+1} - λ G(s,t)
    ∂G(s,t)/∂t = a λ s e^{iωt} + λ s G(s,t) - λ G(s,t)
    ∂G(s,t)/∂t = a λ s e^{iωt} + λ (s - 1) G(s,t)
    (∂/∂t)[ e^{λ(1-s)t} G(s,t) ] = a λ s e^{λ(1-s)t} e^{iωt}
    G(s,t) = ( a λ s/(λ(1 - s) + iω) ) e^{iωt} + C(s) e^{λ(s-1)t}.
The initial condition is
    G(s, 0) = Σ_{n=1}^∞ p_n(0) s^n = 0.
The generating function is then
    G(s,t) = ( a λ s/(λ(1 - s) + iω) ) ( e^{iωt} - e^{λ(s-1)t} ).
3. Assume that |s| < 1. In the limit t → ∞ we have
    G(s,t) → ( a λ s/(λ(1 - s) + iω) ) e^{iωt}
    G(s,t) → ( a s/(1 + iω/λ - s) ) e^{iωt}
    G(s,t) → ( a s/(1 + iω/λ) ) e^{iωt} / ( 1 - s/(1 + iω/λ) )
    G(s,t) → a e^{iωt} Σ_{n=1}^∞ s^n/(1 + iω/λ)^n.
Thus we have
    p_n(t) → ( a/(1 + iω/λ)^n ) e^{iωt}    as t → ∞,
so that A_n = a/(1 + iω/λ)^n. Writing 1 + iω/λ = ( 1 + (ω/λ)² )^{1/2} e^{iψ} with ψ = arctan(ω/λ), the
imaginary part of this p_n(t) is
    Im( p_n(t) ) = a ( 1 + (ω/λ)² )^{-n/2} sin( ωt - n arctan(ω/λ) ).
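The part 1 result is straightforward to check with complex arithmetic: p1(t) should satisfy p1' = λ(a e^{iωt} - p1) with p1(0) = 0. The values of a, λ, ω below are arbitrary test choices.

```python
import cmath

# Check p1(t) = (aλ/(λ+iω))(e^{iωt} - e^{-λt}) against its ODE.

a, lam, w = 1.0, 2.0, 3.0

def p1(t):
    return a * lam / (lam + 1j * w) * (cmath.exp(1j * w * t) - cmath.exp(-lam * t))

def p1_prime(t):
    return a * lam / (lam + 1j * w) * (1j * w * cmath.exp(1j * w * t)
                                       + lam * cmath.exp(-lam * t))

t = 0.7
residual = abs(p1_prime(t) - lam * (a * cmath.exp(1j * w * t) - p1(t)))
initial = abs(p1(0.0))
```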
Solution 50.9
1. Substituting p_n = A_n e^{iωt} into the differential equation yields
    A_n iω e^{iω(t+τ)} = λ [ A_{n-1} e^{iωt} - A_n e^{iωt} ]
    A_n ( λ + iω e^{iωτ} ) = λ A_{n-1}.
We make the substitution A_n = r^n.
    r^n ( λ + iω e^{iωτ} ) = λ r^{n-1}
    r = λ/( λ + iω e^{iωτ} ).
Thus we have
    p_n(t) = ( 1/( 1 + i(ω/λ) e^{iωτ} ) )^n e^{iωt}.
Taking the imaginary part: writing
    1 + i(ω/λ) e^{iωτ} = 1 - (ω/λ) sin(ωτ) + i(ω/λ) cos(ωτ) = ρ e^{iψ},
where
    ρ² = 1 - 2(ω/λ) sin(ωτ) + (ω/λ)²,
    ψ = arctan[ (ω/λ) cos(ωτ) / ( 1 - (ω/λ) sin(ωτ) ) ],
we obtain
    Im( p_n(t) ) = ρ^{-n} sin( ωt - nψ ).
2. p_n(t) will remain bounded in time as n → ∞ if
    | 1 + i(ω/λ) e^{iωτ} | ≥ 1
    1 - 2(ω/λ) sin(ωτ) + (ω/λ)² ≥ 1
    ω/λ ≥ 2 sin(ωτ).
3.
Chapter 51
Nonlinear Partial Differential Equations
51.1 Exercises
Exercise 51.1
Solve the equation
    φ_t + (1 + x) φ_x + φ = 0    in -∞ < x < ∞, t > 0,
with initial condition φ(x, 0) = f(x).
Exercise 51.2
Solve the equation
    φ_t + φ_x + α φ/(1 + x) = 0
in the region 0 < x < ∞, t > 0 with initial condition φ(x, 0) = 0, and boundary condition φ(0, t) = g(t). [Here α
is a positive constant.]
Exercise 51.3
Solve the equation
    φ_t + φ_x + φ² = 0
in -∞ < x < ∞, t > 0 with initial condition φ(x, 0) = f(x). Note that the solution could become infinite in
finite time.
Exercise 51.4
Consider
    c_t + c c_x + α c = 0,    -∞ < x < ∞, t > 0.
1. Use the method of characteristics to solve the problem with
    c = F(x) at t = 0.
(α is a positive constant.)
2. Find equations for the envelope of characteristics in the case F'(x) < 0.
3. Deduce an inequality relating max |F'(x)| and α which decides whether breaking does or does not occur.
Exercise 51.5
For water waves in a channel the so-called shallow water equations are
    h_t + (hv)_x = 0    (51.1)
    (hv)_t + ( h v² + (1/2) g h² )_x = 0,    g = constant.    (51.2)
Investigate whether there are solutions with v = V(h), where V(h) is not posed in advance but is obtained from
requiring consistency between the h equation obtained from (1) and the h equation obtained from (2).
There will be two possible choices for V(h) depending on a choice of sign. Consider each case separately. In
each case fix the arbitrary constant that arises in V(h) by stipulating that before the waves arrive, h is equal to
the undisturbed depth h0 and V(h0) = 0.
Find the h equation and the wave speed c(h) in each case.
Exercise 51.6
After a change of variables, the chemical exchange equations can be put in the form
    ρ_t + σ_x = 0    (51.3)
    σ_t = α ρ - β σ - γ ρ σ;    α, β, γ = positive constants.    (51.4)
1. Investigate wave solutions in which ρ = ρ(X), σ = σ(X), X = x - Ut, U = constant, and show that ρ(X)
must satisfy an ordinary differential equation of the form
    dρ/dX = quadratic in ρ.
2. Discuss the smooth shock solution as we did for a different example in class. In particular find the
expression for U in terms of the values of ρ as X → ±∞, and find the sign of dρ/dX. Check that
    U = ( σ2 - σ1 )/( ρ2 - ρ1 )
in agreement with the discontinuous theory.
Exercise 51.7
Find solitary wave solutions for the following equations:
1. φ_t + φ_x + 6φφ_x - φ_xxt = 0. (Regularized long wave or B.B.M. equation)
2. u_tt - u_xx - ( (3/2) u² )_xx - u_xxxx = 0. (Boussinesq)
3. φ_tt - φ_xx + 2φ_x φ_xt + φ_xx φ_t - φ_xxxx = 0. (The solitary wave form is for u = φ_x.)
4. u_t + 30u²u_1 + 20u_1u_2 + 10uu_3 + u_5 = 0. (Here the subscripts denote x derivatives.)
51.2 Hints
Hint 51.1
Hint 51.2
Hint 51.3
Hint 51.4
Hint 51.5
Hint 51.6
Hint 51.7
51.3 Solutions
Solution 51.1
The method of characteristics gives us the differential equations
    x'(t) = 1 + x,    x(0) = ξ
    dφ/dt = -φ,    φ(ξ, 0) = f(ξ).
Solving the first differential equation,
    x(t) = c e^t - 1,    x(0) = ξ
    x(t) = (ξ + 1) e^t - 1.
The second differential equation then becomes
    φ(x(t), t) = f(ξ) e^{-t},    ξ = (x + 1) e^{-t} - 1.
Thus the solution to the partial differential equation is
    φ(x, t) = f( (x + 1) e^{-t} - 1 ) e^{-t}.
Solution 51.2
    dφ/dt = φ_t + x'(t) φ_x = -α φ/(1 + x).
The characteristic curves x(t) satisfy x'(t) = 1, so x(t) = t + c. The characteristic curve that separates the region
with domain of dependence on the x axis and domain of dependence on the t axis is x(t) = t. Thus we consider
the two cases x > t and x < t.
For x > t, x(t) = t + ξ. For x < t, x(t) = t - τ.
Now we solve the differential equation for φ in the two domains.
For x > t:
    dφ/dt = -α φ/(1 + x),    φ(ξ, 0) = 0,    ξ = x - t
    dφ/dt = -α φ/(1 + t + ξ)
    φ = c exp( -α ∫^t dt/(t + ξ + 1) )
    φ = c exp( -α log(t + ξ + 1) )
    φ = c (t + ξ + 1)^{-α}.
Applying the initial condition, we see that
    φ = 0.
For x < t:
    dφ/dt = -α φ/(1 + x),    φ(0, τ) = g(τ),    τ = t - x
    dφ/dt = -α φ/(1 + t - τ)
    φ = c (t + 1 - τ)^{-α}
    φ = g(τ)(t + 1 - τ)^{-α}
    φ = g(t - x)(x + 1)^{-α}.
Thus the solution to the partial differential equation is
    φ(x, t) = 0    for x > t,
    φ(x, t) = g(t - x)(x + 1)^{-α}    for x < t.
Solution 51.3
The method of characteristics gives us the differential equations
    x'(t) = 1,    x(0) = ξ
    dφ/dt = -φ²,    φ(ξ, 0) = f(ξ).
Solving the first differential equation,
    x(t) = t + ξ.
The second differential equation is then
    dφ/dt = -φ²,    φ(ξ, 0) = f(ξ),    ξ = x - t
    -φ^{-2} dφ = -dt
    φ^{-1} = t + c
    φ = 1/( t + 1/f(ξ) )
    φ = 1/( t + 1/f(x - t) ).
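The formula just obtained can be sanity-checked with finite differences. This sketch uses an arbitrary smooth test profile f(x) = 2 + sin x (kept positive so no blow-up occurs on the sample region) and confirms that φ_t + φ_x + φ² vanishes at a sample point.

```python
import math

# Finite-difference check that φ(x,t) = 1/(t + 1/f(x - t)) solves
# φ_t + φ_x + φ² = 0 for a sample initial profile f.

f = lambda x: 2.0 + math.sin(x)
phi = lambda x, t: 1.0 / (t + 1.0 / f(x - t))

h = 1e-5
x, t = 0.4, 0.3
phi_t = (phi(x, t + h) - phi(x, t - h)) / (2 * h)
phi_x = (phi(x + h, t) - phi(x - h, t)) / (2 * h)
residual = abs(phi_t + phi_x + phi(x, t)**2)
```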
Solution 51.4
1. Taking the total derivative of c with respect to t,
    dc/dt = c_t + (dx/dt) c_x.
Equating terms with the partial differential equation, we have the system of differential equations
    dx/dt = c
    dc/dt = -α c,
subject to the initial conditions
    x(0) = ξ,    c(ξ, 0) = F(ξ).
We can solve the second ODE directly.
    c(ξ, t) = c1 e^{-αt}
    c(ξ, t) = F(ξ) e^{-αt}.
Substituting this result and solving the first ODE,
    dx/dt = F(ξ) e^{-αt}
    x(t) = -( F(ξ)/α ) e^{-αt} + c2
    x(t) = ( F(ξ)/α )( 1 - e^{-αt} ) + ξ.
The solution to the problem at the point (x, t) is found by first solving
    x = ( F(ξ)/α )( 1 - e^{-αt} ) + ξ
for ξ and then using this value to compute
    c(x, t) = F(ξ) e^{-αt}.
2. The characteristic lines are given by the equation
    x(t) = ( F(ξ)/α )( 1 - e^{-αt} ) + ξ.
The points on the envelope of characteristics also satisfy
    ∂x(t)/∂ξ = 0.
Thus the points on the envelope satisfy the system of equations
    x = ( F(ξ)/α )( 1 - e^{-αt} ) + ξ
    0 = ( F'(ξ)/α )( 1 - e^{-αt} ) + 1.
By substituting
    1 - e^{-αt} = -α/F'(ξ)
into the first equation we can eliminate its t dependence.
    x = -F(ξ)/F'(ξ) + ξ.
Now we can solve the second equation in the system for t.
    e^{-αt} = 1 + α/F'(ξ)
    t = -(1/α) log( 1 + α/F'(ξ) ).
Thus the equations that describe the envelope are
    x = -F(ξ)/F'(ξ) + ξ
    t = -(1/α) log( 1 + α/F'(ξ) ).
3. The second equation for the envelope has a solution for positive t if there is some x that satisfies
    -1 < α/F'(x) < 0.
This is equivalent to
    -∞ < F'(x) < -α.
So in the case that F'(x) < 0, there will be breaking iff
    max |F'(x)| > α.
Solution 51.5
With the substitution v = V(h), the two equations become
    h_t + ( V + h V' ) h_x = 0
    ( V + h V' ) h_t + ( V² + 2hVV' + gh ) h_x = 0.
We can rewrite the second equation as
    h_t + [ ( V² + 2hVV' + gh )/( V + hV' ) ] h_x = 0.
Requiring that the two equations be consistent gives us a differential equation for V.
    V + hV' = ( V² + 2hVV' + gh )/( V + hV' )
    V² + 2hVV' + h²(V')² = V² + 2hVV' + gh
    (V')² = g/h.
There are two choices depending on which sign we choose when taking the square root of the above equation.

Positive V'.
    V' = sqrt(g/h)
    V = 2 sqrt(gh) + const.
We apply the initial condition V(h0) = 0.
    V = 2 sqrt(g) ( sqrt(h) - sqrt(h0) ).
The partial differential equation for h is then
    h_t + ( 2 sqrt(g)( sqrt(h) - sqrt(h0) ) h )_x = 0
    h_t + sqrt(g) ( 3 sqrt(h) - 2 sqrt(h0) ) h_x = 0.
The wave speed is
    c(h) = sqrt(g) ( 3 sqrt(h) - 2 sqrt(h0) ).

Negative V'.
    V' = -sqrt(g/h)
    V = -2 sqrt(gh) + const.
We apply the initial condition V(h0) = 0.
    V = 2 sqrt(g) ( sqrt(h0) - sqrt(h) ).
The partial differential equation for h is then
    h_t + sqrt(g) ( 2 sqrt(h0) - 3 sqrt(h) ) h_x = 0.
The wave speed is
    c(h) = sqrt(g) ( 2 sqrt(h0) - 3 sqrt(h) ).
Solution 51.6
1. Making the substitutions ρ = ρ(X), σ = σ(X), X = x - Ut, the system of partial differential equations
becomes
    -U ρ' + σ' = 0
    -U σ' = α ρ - β σ - γ ρ σ.
Integrating the first equation yields
    -U ρ + σ = c,
    σ = c + U ρ.
Now we substitute the expression for σ into the second partial differential equation.
    -U² ρ' = α ρ - β (c + Uρ) - γ ρ (c + Uρ)
Thus ρ(X) satisfies the ordinary differential equation
    ρ' = (γ/U) ρ² + ( (γc + βU - α)/U² ) ρ + βc/U²,
which is quadratic in ρ.
2. Assume that
    ρ(X) → ρ1 as X → +∞,
    ρ(X) → ρ2 as X → -∞,
    ρ'(X) → 0 as X → ±∞.
Integrating the ordinary differential equation for ρ,
    X = ∫ dρ / [ (γ/U) ρ² + ( (γc + βU - α)/U² ) ρ + βc/U² ].
We see that the roots of the denominator of the integrand must be ρ1 and ρ2. Thus we can write the
ordinary differential equation for ρ(X) as
    ρ'(X) = (γ/U)(ρ - ρ1)(ρ - ρ2) = (γ/U)[ ρ² - (ρ1 + ρ2) ρ + ρ1 ρ2 ].
Equating coefficients in the polynomial with the differential equation for part 1, we obtain the two equations
    ρ1 ρ2 = βc/(γU),
    ρ1 + ρ2 = -( γc + βU - α )/(γU).
Solving the first equation for c,
    c = γ U ρ1 ρ2 / β.
Now we substitute the expression for c into the second equation:
    γ(ρ1 + ρ2) = α/U - β - γ² ρ1 ρ2/β.
Thus we see that U is
    U = α β / ( β² + β γ (ρ1 + ρ2) + γ² ρ1 ρ2 ) = α β / ( (β + γρ1)(β + γρ2) ).
Since the quadratic polynomial in the ordinary differential equation for ρ(X) is convex, it is negative valued
between its two roots. Thus we see that
    dρ/dX < 0.
Using the expression for σ that we obtained in part 1,
    ( σ2 - σ1 )/( ρ2 - ρ1 ) = ( (c + Uρ2) - (c + Uρ1) )/( ρ2 - ρ1 ) = U.
Now let's return to the ordinary differential equation for ρ(X),
    ρ'(X) = (γ/U)(ρ - ρ1)(ρ - ρ2)
    X = (U/γ) ∫^ρ dρ / ( (ρ - ρ1)(ρ - ρ2) )
    X = ( U/(γ(ρ2 - ρ1)) ) ∫^ρ [ 1/(ρ - ρ2) - 1/(ρ - ρ1) ] dρ
    X - X0 = ( U/(γ(ρ2 - ρ1)) ) ln( (ρ2 - ρ)/(ρ - ρ1) )
    (ρ2 - ρ)/(ρ - ρ1) = exp( (γ/U)(ρ2 - ρ1)(X - X0) ).
Thus we obtain a closed form solution for ρ,
    ρ = ( ρ2 + ρ1 exp( (γ/U)(ρ2 - ρ1)(X - X0) ) ) / ( 1 + exp( (γ/U)(ρ2 - ρ1)(X - X0) ) ),
which tends to ρ2 as X → -∞ and to ρ1 as X → +∞.
Solution 51.7
1. φ_t + φ_x + 6φφ_x - φ_xxt = 0.
We make the substitution
    φ(x, t) = z(X),    X = x - Ut.
    (1 - U)z' + 6zz' + Uz''' = 0
    (1 - U)z + 3z² + Uz'' = 0
    (1/2)(1 - U)z² + z³ + (1/2)U(z')² = 0
    (z')² = ( (U - 1)/U ) z² - (2/U) z³
    z(X) = ( (U - 1)/2 ) sech²( (1/2) sqrt( (U - 1)/U ) X )
    φ(x, t) = ( (U - 1)/2 ) sech²( (1/2)[ sqrt( (U - 1)/U ) x - sqrt( (U - 1)U ) t ] ).
The linearized equation is
    φ_t + φ_x - φ_xxt = 0.
Substituting φ = e^{κx + Ωt} into this equation yields
    Ω + κ - κ²Ω = 0
    Ω = -κ/(1 - κ²).
We set
    κ² = (U - 1)/U.
Ω is then
    Ω = -κ/(1 - κ²) = -κU = -sqrt( (U - 1)U ).
The solution for φ becomes
    φ = ( Uκ²/2 ) sech²( (κx + Ωt)/2 )
where
    Ω = -κ/(1 - κ²).

2. u_tt - u_xx - ( (3/2)u² )_xx - u_xxxx = 0.
We make the substitution
    u(x, t) = z(X),    X = x - Ut.
    (U² - 1)z'' - ( (3/2)z² )'' - z'''' = 0
    (U² - 1)z' - ( (3/2)z² )' - z''' = 0
    (U² - 1)z - (3/2)z² - z'' = 0.
We multiply by z' and integrate.
    (1/2)(U² - 1)z² - (1/2)z³ - (1/2)(z')² = 0
    (z')² = (U² - 1)z² - z³
    z = (U² - 1) sech²( (1/2) sqrt(U² - 1) X )
    u(x, t) = (U² - 1) sech²( (1/2)( sqrt(U² - 1) x - U sqrt(U² - 1) t ) ).
The linearized equation is
    u_tt - u_xx - u_xxxx = 0.
Substituting u = e^{κx + Ωt} into this equation yields
    Ω² - κ² - κ⁴ = 0
    Ω² = κ²(κ² + 1).
We set
    κ = sqrt(U² - 1).
Ω is then
    Ω² = κ²(κ² + 1) = (U² - 1)U²,    Ω = -U sqrt(U² - 1).
The solution for u becomes
    u(x, t) = κ² sech²( (κx + Ωt)/2 )
where
    Ω² = κ²(κ² + 1).

3. φ_tt - φ_xx + 2φ_x φ_xt + φ_xx φ_t - φ_xxxx = 0.
We make the substitution
    φ(x, t) = z(X),    X = x - Ut.
    (U² - 1)z'' - 2Uz'z'' - Uz''z' - z'''' = 0
    (U² - 1)z'' - 3Uz'z'' - z'''' = 0
    (U² - 1)z' - (3U/2)(z')² - z''' = 0.
Multiply by z'' and integrate.
    (1/2)(U² - 1)(z')² - (U/2)(z')³ - (1/2)(z'')² = 0
    (z'')² = (U² - 1)(z')² - U(z')³
    z' = ( (U² - 1)/U ) sech²( (1/2) sqrt(U² - 1) X )
    φ_x(x, t) = ( (U² - 1)/U ) sech²( (1/2)( sqrt(U² - 1) x - U sqrt(U² - 1) t ) ).
The linearized equation is
    φ_tt - φ_xx - φ_xxxx = 0.
Substituting φ = e^{κx + Ωt} into this equation yields
    Ω² = κ²(κ² + 1).
The solution for φ_x becomes
    φ_x = ( κ²/U ) sech²( (κx + Ωt)/2 )
where
    Ω² = κ²(κ² + 1).

4. u_t + 30u²u_1 + 20u_1u_2 + 10uu_3 + u_5 = 0.
We make the substitution
    u(x, t) = z(X),    X = x - Ut.
    -Uz' + 30z²z' + 20z'z'' + 10zz''' + z⁽⁵⁾ = 0.
Note that (zz'')' = z'z'' + zz'''. Thus
    -Uz' + 30z²z' + 10z'z'' + 10(zz'')' + z⁽⁵⁾ = 0
    -Uz + 10z³ + 5(z')² + 10zz'' + z⁽⁴⁾ = 0.
Multiply by z' and integrate.
    -(1/2)Uz² + (5/2)z⁴ + 5z(z')² - (1/2)(z'')² + z'z''' = 0.
Assume that
    (z')² = P(z).
Differentiating this relation,
    2z'z'' = P'(z)z'
    z'' = (1/2)P'(z)
    z''' = (1/2)P''(z)z'
    z'''z' = (1/2)P''(z)P(z).
Substituting these expressions into the differential equation for z,
    -(1/2)Uz² + (5/2)z⁴ + 5zP(z) - (1/8)(P'(z))² + (1/2)P''(z)P(z) = 0
    -4Uz² + 20z⁴ + 40zP(z) - (P'(z))² + 4P''(z)P(z) = 0.
Substituting P(z) = az³ + bz² yields
    (20 + 40a + 15a²)z⁴ + (40b + 20ab)z³ + (4b² - 4U)z² = 0.
This equation is satisfied by b² = U, a = -2. Thus we have
    (z')² = sqrt(U) z² - 2z³
    z = ( sqrt(U)/2 ) sech²( (1/2) U^{1/4} X )
    u(x, t) = ( sqrt(U)/2 ) sech²( (1/2)( U^{1/4} x - U^{5/4} t ) ).
The linearized equation is
    u_t + u_5 = 0.
Substituting u = e^{κx + Ωt} into this equation yields
    Ω + κ⁵ = 0,    Ω = -κ⁵.
We set
    κ = U^{1/4}.
The solution for u(x, t) becomes
    u = ( κ²/2 ) sech²( (κx + Ωt)/2 )
where
    Ω = -κ⁵.
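As a check on part 1, the B.B.M. solitary wave z(X) = ((U-1)/2) sech²(kX), k = (1/2)√((U-1)/U), should satisfy the twice-integrated equation (1 - U)z + 3z² + Uz'' = 0. The sketch below verifies this with an analytic second derivative of sech²; U = 2.5 is an arbitrary test value.

```python
import math

# Residual check of the B.B.M. solitary wave in the ODE
# (1 - U)z + 3z² + Uz'' = 0.

U = 2.5
k = 0.5 * math.sqrt((U - 1) / U)
A = (U - 1) / 2

def sech(x):
    return 1.0 / math.cosh(x)

def z(X):
    return A * sech(k * X)**2

def zpp(X):
    # d²/du² sech²(u) = 4 sech²(u) - 6 sech⁴(u); chain rule brings in k².
    u = k * X
    return A * k * k * (4 * sech(u)**2 - 6 * sech(u)**4)

max_residual = max(abs((1 - U) * z(X) + 3 * z(X)**2 + U * zpp(X))
                   for X in [-3 + 0.1 * j for j in range(61)])
```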
Part VIII
Appendices
Appendix A
Greek Letters
Name      Lower   Upper
alpha       α       A
beta        β       B
gamma       γ       Γ
delta       δ       Δ
epsilon     ε       E
iota        ι       I
kappa       κ       K
lambda      λ       Λ
mu          μ       M
nu          ν       N
omicron     ο       O
pi          π       Π
rho         ρ       P
sigma       σ       Σ
tau         τ       T
theta       θ       Θ
phi         φ       Φ
psi         ψ       Ψ
chi         χ       X
omega       ω       Ω
upsilon     υ       Υ
xi          ξ       Ξ
eta         η       H
zeta        ζ       Z
Appendix B
Notation
C           class of continuous functions
C^n         class of n-times continuously differentiable functions
ℂ           set of complex numbers
δ(x)        Dirac delta function
F[·]        Fourier transform
F_c[·]      Fourier cosine transform
F_s[·]      Fourier sine transform
γ           Euler's constant, γ = -∫_0^∞ e^{-x} log x dx
Γ(ν)        Gamma function
H(x)        Heaviside function
H_ν^{(1)}(x)   Hankel function of the first kind and order ν
H_ν^{(2)}(x)   Hankel function of the second kind and order ν
J_ν(x)      Bessel function of the first kind and order ν
K_ν(x)      Modified Bessel function of the first kind and order ν
L[·]        Laplace transform
ℕ           set of natural numbers (positive integers)
N_ν(x)      Modified Bessel function of the second kind and order ν
ℝ           set of real numbers
ℝ^+         set of positive real numbers
ℝ^-         set of negative real numbers
o(z)        terms smaller than z
O(z)        terms no bigger than z
−∫          principal value of the integral
ψ(ν)        digamma function, ψ(ν) = (d/dν) log Γ(ν)
ψ^{(n)}(ν)  polygamma function, ψ^{(n)}(ν) = (d^n/dν^n) ψ(ν)
u^{(n)}(x)       ∂^n u/∂x^n
u^{(n,m)}(x, y)  ∂^{n+m} u/∂x^n ∂y^m
Y_ν(x)      Bessel function of the second kind and order ν, Neumann function
ℤ           set of integers
ℤ^+         set of positive integers
Appendix C
Formulas from Complex Variables
Analytic Functions. A function f(z) is analytic in a domain if the derivative f'(z) exists in that domain.
If f(z) = u(x, y) + iv(x, y) is defined in some neighborhood of z0 = x0 + iy0 and the partial derivatives of u
and v are continuous and satisfy the Cauchy-Riemann equations
    u_x = v_y,    u_y = -v_x,
then f'(z0) exists.

Residues. If f(z) has the Laurent expansion
    f(z) = Σ_{n=-∞}^∞ a_n (z - z0)^n,
then the residue of f(z) at z = z0 is
    Res( f(z), z0 ) = a_{-1}.

Residue Theorem. Let C be a positively oriented, simple, closed contour. If f(z) is analytic in and on C
except for isolated singularities at z1, z2, ..., zN inside C then
    ∮_C f(z) dz = 2πi Σ_{n=1}^N Res( f(z), z_n ).
If in addition f(z) is analytic outside C in the finite complex plane then
    ∮_C f(z) dz = 2πi Res( (1/z²) f(1/z), 0 ).

Residues of a pole of order n. If f(z) has a pole of order n at z = z0 then
    Res( f(z), z0 ) = lim_{z→z0} [ (1/(n-1)!) (d^{n-1}/dz^{n-1}) ( (z - z0)^n f(z) ) ].

Jordan's Lemma.
    ∫_0^π e^{-R sin θ} dθ < π/R.
Let a be a positive constant. If f(z) vanishes as |z| → ∞ then the integral
    ∫_C f(z) e^{iaz} dz
along the semi-circle of radius R in the upper half plane vanishes as R → ∞.

Taylor Series. Let f(z) be a function that is analytic and single valued in the disk |z - z0| < R.
    f(z) = Σ_{n=0}^∞ ( f^{(n)}(z0)/n! ) (z - z0)^n.
The series converges for |z - z0| < R.

Laurent Series. Let f(z) be a function that is analytic and single valued in the annulus r < |z - z0| < R. In
this annulus f(z) has the convergent series,
    f(z) = Σ_{n=-∞}^∞ c_n (z - z0)^n,
where
    c_n = (1/(2πi)) ∮ f(z)/(z - z0)^{n+1} dz
and the path of integration is any simple, closed, positive contour around z0 and lying in the annulus. The path
of integration is shown in Figure C.1.

[Figure C.1: The Path of Integration. (A closed contour C around z0, lying in the annulus between the circles of radii r and R in the complex plane.)]
Appendix D
Table of Derivatives
Note: c denotes a constant and ' denotes differentiation.

d/dx (fg) = (df/dx) g + f (dg/dx)

d/dx (f/g) = ( f'g - fg' )/g²

d/dx (f^c) = c f^{c-1} f'

d/dx [f(g)] = f'(g) g'

d²/dx² [f(g)] = f''(g)(g')² + f'(g) g''

d^n/dx^n (fg) = (n choose 0)(d^n f/dx^n) g + (n choose 1)(d^{n-1}f/dx^{n-1})(dg/dx)
    + (n choose 2)(d^{n-2}f/dx^{n-2})(d²g/dx²) + ··· + (n choose n) f (d^n g/dx^n)

d/dx (log x) = 1/x

d/dx (c^x) = c^x log c

d/dx (f^g) = g f^{g-1} (df/dx) + f^g log f (dg/dx)

d/dx (sin x) = cos x

d/dx (cos x) = -sin x

d/dx (tan x) = sec² x

d/dx (csc x) = -csc x cot x

d/dx (sec x) = sec x tan x

d/dx (cot x) = -csc² x

d/dx (arcsin x) = 1/sqrt(1 - x²),    -π/2 ≤ arcsin x ≤ π/2

d/dx (arccos x) = -1/sqrt(1 - x²),    0 ≤ arccos x ≤ π

d/dx (arctan x) = 1/(1 + x²),    -π/2 ≤ arctan x ≤ π/2

d/dx (sinh x) = cosh x

d/dx (cosh x) = sinh x

d/dx (tanh x) = sech² x

d/dx (csch x) = -csch x coth x

d/dx (sech x) = -sech x tanh x

d/dx (coth x) = -csch² x

d/dx (arcsinh x) = 1/sqrt(x² + 1)

d/dx (arccosh x) = 1/sqrt(x² - 1),    x > 1, arccosh x > 0

d/dx (arctanh x) = 1/(1 - x²),    x² < 1

d/dx ∫_c^x f(ξ) dξ = f(x)

d/dx ∫_x^c f(ξ) dξ = -f(x)

d/dx ∫_g^h f(ξ, x) dξ = ∫_g^h ( ∂f(ξ, x)/∂x ) dξ + f(h, x) h' - f(g, x) g'
Appendix E
Table of Integrals
∫ u (dv/dx) dx = uv - ∫ v (du/dx) dx

∫ f'(x)/f(x) dx = log f(x)

∫ f'(x)/( 2 sqrt(f(x)) ) dx = sqrt(f(x))

∫ x^α dx = x^{α+1}/(α + 1)    for α ≠ -1

∫ 1/x dx = log x

∫ e^{ax} dx = e^{ax}/a

∫ a^{bx} dx = a^{bx}/(b log a)    for a > 0

∫ log x dx = x log x - x

∫ 1/(x² + a²) dx = (1/a) arctan(x/a)

∫ 1/(x² - a²) dx = (1/(2a)) log( (a - x)/(a + x) )    for x² < a²
                 = (1/(2a)) log( (x - a)/(x + a) )    for x² > a²

∫ 1/sqrt(a² - x²) dx = arcsin(x/|a|) = -arccos(x/|a|)    for x² < a²

∫ 1/sqrt(x² - a²) dx = log( x + sqrt(x² - a²) )

∫ 1/( x sqrt(x² - a²) ) dx = (1/|a|) sec⁻¹(x/a)

∫ 1/( x sqrt(a² - x²) ) dx = -(1/a) log( ( a + sqrt(a² - x²) )/x )

∫ sin(ax) dx = -(1/a) cos(ax)

∫ cos(ax) dx = (1/a) sin(ax)

∫ tan(ax) dx = -(1/a) log cos(ax)

∫ csc(ax) dx = (1/a) log tan(ax/2)

∫ sec(ax) dx = (1/a) log tan( π/4 + ax/2 )

∫ cot(ax) dx = (1/a) log sin(ax)

∫ sinh(ax) dx = (1/a) cosh(ax)

∫ cosh(ax) dx = (1/a) sinh(ax)

∫ tanh(ax) dx = (1/a) log cosh(ax)

∫ csch(ax) dx = (1/a) log tanh(ax/2)

∫ sech(ax) dx = (i/a) log tanh( iπ/4 + ax/2 )

∫ coth(ax) dx = (1/a) log sinh(ax)

∫ x sin(ax) dx = (1/a²) sin(ax) - (x/a) cos(ax)

∫ x² sin(ax) dx = (2x/a²) sin(ax) + ( (2 - a²x²)/a³ ) cos(ax)

∫ x cos(ax) dx = (1/a²) cos(ax) + (x/a) sin(ax)

∫ x² cos(ax) dx = (2x/a²) cos(ax) + ( (a²x² - 2)/a³ ) sin(ax)
Appendix F
Definite Integrals
Integrals from -∞ to ∞. Let f(z) be analytic except for isolated singularities, none of which lie on the real
axis. Let a1, ..., am be the singularities of f(z) in the upper half plane; and let C_R be the semi-circle from R to -R
in the upper half plane. If
    lim_{R→∞} ( R max_{z ∈ C_R} |f(z)| ) = 0
then
    ∫_{-∞}^∞ f(x) dx = i2π Σ_{j=1}^m Res( f(z), a_j ).
Let b1, ..., bn be the singularities of f(z) in the lower half plane. Let C_R be the semi-circle from R to -R in the
lower half plane. If
    lim_{R→∞} ( R max_{z ∈ C_R} |f(z)| ) = 0
then
    ∫_{-∞}^∞ f(x) dx = -i2π Σ_{j=1}^n Res( f(z), b_j ).

Integrals from 0 to ∞. Let f(z) be analytic except for isolated singularities, none of which lie on the positive
real axis [0, ∞). Let z1, ..., zn be the singularities of f(z). If f(z) ~ z^α as z → 0 for some α > -1 and f(z) ~ z^β
as z → ∞ for some β < -1 then
    ∫_0^∞ f(x) dx = -Σ_{k=1}^n Res( f(z) log z, z_k ),
    ∫_0^∞ f(x) log x dx = -(1/2) Σ_{k=1}^n Res( f(z) log² z, z_k ) + iπ Σ_{k=1}^n Res( f(z) log z, z_k ).
Assume that a is not an integer. If z^a f(z) ~ z^α as z → 0 for some α > -1 and z^a f(z) ~ z^β as z → ∞ for some
β < -1 then
    ∫_0^∞ x^a f(x) dx = ( i2π/(1 - e^{i2πa}) ) Σ_{k=1}^n Res( z^a f(z), z_k ),
    ∫_0^∞ x^a f(x) log x dx = ( i2π/(1 - e^{i2πa}) ) Σ_{k=1}^n Res( z^a f(z) log z, z_k )
        + ( π²a/sin²(πa) ) Σ_{k=1}^n Res( z^a f(z), z_k ).

Fourier Integrals. Let f(z) be analytic except for isolated singularities, none of which lie on the real axis.
Suppose that f(z) vanishes as |z| → ∞. If ω is a positive real number then
    ∫_{-∞}^∞ f(x) e^{iωx} dx = i2π Σ_{k=1}^n Res( f(z) e^{iωz}, z_k ),
where z1, ..., zn are the singularities of f(z) in the upper half plane. If ω is a negative real number then
    ∫_{-∞}^∞ f(x) e^{iωx} dx = -i2π Σ_{k=1}^n Res( f(z) e^{iωz}, z_k ),
where z1, ..., zn are the singularities of f(z) in the lower half plane.
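The upper-half-plane formula can be sanity-checked on f(z) = 1/(1 + z²): the only singularity in the upper half plane is z = i with residue 1/(2i), so the formula predicts ∫ dx/(1 + x²) = i2π · 1/(2i) = π. The sketch below compares that prediction with a midpoint-rule quadrature over [-L, L] plus the exact tail contribution.

```python
import math

# Compare the residue-formula prediction π with direct quadrature of
# ∫ dx/(1 + x²) over the real line.

predicted = math.pi

L, n = 50.0, 100_000
h = 2 * L / n
quad = sum(h / (1 + (-L + (j + 0.5) * h)**2) for j in range(n))
tail = 2 * (math.pi / 2 - math.atan(L))   # exact contribution of |x| > L
error = abs(quad + tail - predicted)
```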
Appendix G
Table of Sums
Σ_{n=1}^∞ r^n = r/(1 - r),    for |r| < 1

Σ_{n=1}^N r^n = ( r - r^{N+1} )/(1 - r)

Σ_{n=a}^b n = (a + b)(b + 1 - a)/2

Σ_{n=1}^N n = N(N + 1)/2

Σ_{n=a}^b n² = ( b(b + 1)(2b + 1) - a(a - 1)(2a - 1) )/6

Σ_{n=1}^N n² = N(N + 1)(2N + 1)/6

Σ_{n=1}^∞ (-1)^{n+1}/n = log(2)

Σ_{n=1}^∞ 1/n² = π²/6

Σ_{n=1}^∞ (-1)^{n+1}/n² = π²/12

Σ_{n=1}^∞ 1/n³ = ζ(3)

Σ_{n=1}^∞ (-1)^{n+1}/n³ = 3ζ(3)/4

Σ_{n=1}^∞ 1/n⁴ = π⁴/90

Σ_{n=1}^∞ (-1)^{n+1}/n⁴ = 7π⁴/720

Σ_{n=1}^∞ 1/n⁵ = ζ(5)

Σ_{n=1}^∞ (-1)^{n+1}/n⁵ = 15ζ(5)/16

Σ_{n=1}^∞ 1/n⁶ = π⁶/945

Σ_{n=1}^∞ (-1)^{n+1}/n⁶ = 31π⁶/30240
2065
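Two of these entries can be spot-checked with partial sums (a numeric check, not part of the table):

```python
import math

# Partial sums of sum 1/n^2 and sum (-1)^(n+1)/n^4 from the table above.
N = 200_000
s2 = sum(1.0 / n**2 for n in range(1, N + 1))
s4 = sum((-1) ** (n + 1) / n**4 for n in range(1, N + 1))

assert abs(s2 - math.pi**2 / 6) < 1e-4       # tail of sum 1/n^2 is ~1/N
assert abs(s4 - 7 * math.pi**4 / 720) < 1e-10  # alternating series converges fast
print(s2, s4)
```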
Appendix H

Table of Taylor Series

$$(1-z)^{-1} = \sum_{n=0}^{\infty} z^n \qquad |z| < 1$$

$$(1-z)^{-2} = \sum_{n=0}^{\infty} (n+1) z^n \qquad |z| < 1$$

$$(1+z)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} z^n \qquad |z| < 1$$

$$e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!} \qquad |z| < \infty$$

$$\log(1-z) = -\sum_{n=1}^{\infty} \frac{z^n}{n} \qquad |z| < 1$$

$$\log\left(\frac{1+z}{1-z}\right) = 2 \sum_{n=1}^{\infty} \frac{z^{2n-1}}{2n-1} \qquad |z| < 1$$

$$\cos z = \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n}}{(2n)!} \qquad |z| < \infty$$

$$\sin z = \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n+1}}{(2n+1)!} \qquad |z| < \infty$$

$$\tan z = z + \frac{z^3}{3} + \frac{2z^5}{15} + \frac{17z^7}{315} + \cdots \qquad |z| < \frac{\pi}{2}$$

$$\cos^{-1} z = \frac{\pi}{2} - \left( z + \frac{z^3}{2 \cdot 3} + \frac{1 \cdot 3\, z^5}{2 \cdot 4 \cdot 5} + \frac{1 \cdot 3 \cdot 5\, z^7}{2 \cdot 4 \cdot 6 \cdot 7} + \cdots \right) \qquad |z| < 1$$

$$\sin^{-1} z = z + \frac{z^3}{2 \cdot 3} + \frac{1 \cdot 3\, z^5}{2 \cdot 4 \cdot 5} + \frac{1 \cdot 3 \cdot 5\, z^7}{2 \cdot 4 \cdot 6 \cdot 7} + \cdots \qquad |z| < 1$$

$$\tan^{-1} z = \sum_{n=1}^{\infty} \frac{(-1)^{n+1} z^{2n-1}}{2n-1} \qquad |z| < 1$$

$$\cosh z = \sum_{n=0}^{\infty} \frac{z^{2n}}{(2n)!} \qquad |z| < \infty$$

$$\sinh z = \sum_{n=0}^{\infty} \frac{z^{2n+1}}{(2n+1)!} \qquad |z| < \infty$$

$$\tanh z = z - \frac{z^3}{3} + \frac{2z^5}{15} - \frac{17z^7}{315} + \cdots \qquad |z| < \frac{\pi}{2}$$

$$J_{\nu}(z) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\,\Gamma(\nu+n+1)} \left(\frac{z}{2}\right)^{\nu+2n} \qquad |z| < \infty$$

$$I_{\nu}(z) = \sum_{n=0}^{\infty} \frac{1}{n!\,\Gamma(\nu+n+1)} \left(\frac{z}{2}\right)^{\nu+2n} \qquad |z| < \infty$$
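The truncated $\tan z$ series above can be checked at a small argument, where the first neglected term ($62 z^9 / 2835$) bounds the error (a numeric check, not from the text):

```python
import math

# Compare the four-term tan z series from the table against math.tan.
z = 0.1
series = z + z**3 / 3 + 2 * z**5 / 15 + 17 * z**7 / 315

# Next term is 62 z^9 / 2835 ~ 2e-11 at z = 0.1.
assert abs(series - math.tan(z)) < 1e-10
print(series, math.tan(z))
```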
Appendix I

Table of Laplace Transforms

Let $f(t)$ be piecewise continuous and of exponential order $\alpha$. Unless otherwise noted, the transform is defined for $s > 0$.

$$f(t) \quad\longleftrightarrow\quad \int_0^{\infty} e^{-st} f(t)\,dt$$

$$\frac{1}{i2\pi} \int_{c-i\infty}^{c+i\infty} e^{st} F(s)\,ds \quad\longleftrightarrow\quad F(s)$$

$$a f(t) + b g(t) \quad\longleftrightarrow\quad a F(s) + b G(s)$$

$$e^{ct} f(t) \quad\longleftrightarrow\quad F(s-c) \qquad s > c + \alpha$$

$$f(t-c)\,H(t-c), \quad c > 0 \quad\longleftrightarrow\quad e^{-cs} F(s)$$

$$t f(t) \quad\longleftrightarrow\quad -\frac{d}{ds}\left[F(s)\right]$$

$$t^n f(t) \quad\longleftrightarrow\quad (-1)^n \frac{d^n}{ds^n}\left[F(s)\right]$$

$$\frac{f(t)}{t}, \quad \int_0^1 \frac{f(t)}{t}\,dt \text{ exists} \quad\longleftrightarrow\quad \int_s^{\infty} F(\sigma)\,d\sigma$$

$$\int_0^t f(\tau)\,d\tau \quad\longleftrightarrow\quad \frac{F(s)}{s}$$

$$\int_0^t \int_0^{\tau} f(\sigma)\,d\sigma\,d\tau \quad\longleftrightarrow\quad \frac{F(s)}{s^2}$$

$$\frac{d}{dt} f(t) \quad\longleftrightarrow\quad s F(s) - f(0)$$

$$\frac{d^2}{dt^2} f(t) \quad\longleftrightarrow\quad s^2 F(s) - s f(0) - f'(0)$$

$$\frac{d^n}{dt^n} f(t) \quad\longleftrightarrow\quad s^n F(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \cdots - f^{(n-1)}(0)$$

$$\int_0^t f(\tau) g(t-\tau)\,d\tau, \quad f, g \in C^0 \quad\longleftrightarrow\quad F(s) G(s)$$

$$\frac{1}{c} f(t/c), \quad c > 0 \quad\longleftrightarrow\quad F(cs)$$

$$f(t), \quad f(t+T) = f(t) \quad\longleftrightarrow\quad \frac{\int_0^T e^{-st} f(t)\,dt}{1 - e^{-sT}}$$

$$f(t), \quad f(t+T) = -f(t) \quad\longleftrightarrow\quad \frac{\int_0^T e^{-st} f(t)\,dt}{1 + e^{-sT}}$$

$$H(t) \quad\longleftrightarrow\quad \frac{1}{s}$$

$$t\,H(t) \quad\longleftrightarrow\quad \frac{1}{s^2}$$

$$t^n\,H(t), \quad n = 0, 1, 2, \ldots \quad\longleftrightarrow\quad \frac{n!}{s^{n+1}}$$

$$t^{1/2}\,H(t) \quad\longleftrightarrow\quad \frac{\sqrt{\pi}}{2} s^{-3/2}$$

$$t^{-1/2}\,H(t) \quad\longleftrightarrow\quad \sqrt{\pi}\, s^{-1/2}$$

$$t^{n-1/2}\,H(t), \quad n \in \mathbb{Z}^+ \quad\longleftrightarrow\quad \frac{(1)(3)(5)\cdots(2n-1)\sqrt{\pi}}{2^n s^{n+1/2}}$$

$$t^{\nu}\,H(t), \quad \Re(\nu) > -1 \quad\longleftrightarrow\quad \frac{\Gamma(\nu+1)}{s^{\nu+1}}$$

$$\operatorname{Log} t\,H(t) \quad\longleftrightarrow\quad -\frac{\operatorname{Log} s + \gamma}{s}$$

$$t^{\nu} \operatorname{Log} t\,H(t), \quad \Re(\nu) > -1 \quad\longleftrightarrow\quad \frac{\Gamma(\nu+1)}{s^{\nu+1}} \left( \psi(\nu+1) - \operatorname{Log} s \right)$$

$$\delta(t) \quad\longleftrightarrow\quad 1 \qquad s > 0$$

$$\delta^{(n)}(t), \quad n \in \mathbb{Z}^{0+} \quad\longleftrightarrow\quad s^n \qquad s > 0$$

$$e^{ct}\,H(t) \quad\longleftrightarrow\quad \frac{1}{s-c} \qquad s > c$$

$$t\,e^{ct}\,H(t) \quad\longleftrightarrow\quad \frac{1}{(s-c)^2} \qquad s > c$$

$$\frac{t^{n-1} e^{ct}}{(n-1)!}\,H(t), \quad n \in \mathbb{Z}^+ \quad\longleftrightarrow\quad \frac{1}{(s-c)^n} \qquad s > c$$

$$\sin(ct)\,H(t) \quad\longleftrightarrow\quad \frac{c}{s^2+c^2}$$

$$\cos(ct)\,H(t) \quad\longleftrightarrow\quad \frac{s}{s^2+c^2}$$

$$\sinh(ct)\,H(t) \quad\longleftrightarrow\quad \frac{c}{s^2-c^2} \qquad s > |c|$$

$$\cosh(ct)\,H(t) \quad\longleftrightarrow\quad \frac{s}{s^2-c^2} \qquad s > |c|$$

$$t \sin(ct)\,H(t) \quad\longleftrightarrow\quad \frac{2cs}{(s^2+c^2)^2}$$

$$t \cos(ct)\,H(t) \quad\longleftrightarrow\quad \frac{s^2-c^2}{(s^2+c^2)^2}$$

$$t^n\,e^{ct}\,H(t), \quad n \in \mathbb{Z}^+ \quad\longleftrightarrow\quad \frac{n!}{(s-c)^{n+1}}$$

$$e^{dt} \sin(ct)\,H(t) \quad\longleftrightarrow\quad \frac{c}{(s-d)^2+c^2} \qquad s > d$$

$$e^{dt} \cos(ct)\,H(t) \quad\longleftrightarrow\quad \frac{s-d}{(s-d)^2+c^2} \qquad s > d$$

$$\delta(t-c) \quad\longleftrightarrow\quad \begin{cases} 0 & \text{for } c < 0 \\ e^{-sc} & \text{for } c > 0 \end{cases}$$

$$H(t-c) = \begin{cases} 0 & \text{for } t < c \\ 1 & \text{for } t > c \end{cases} \quad\longleftrightarrow\quad \frac{1}{s}\,e^{-cs}$$

$$J_{\nu}(ct)\,H(t) \quad\longleftrightarrow\quad \frac{c^{\nu}}{\sqrt{s^2+c^2}\left(s + \sqrt{s^2+c^2}\right)^{\nu}} \qquad \nu > -1$$

$$I_{\nu}(ct)\,H(t) \quad\longleftrightarrow\quad \frac{c^{\nu}}{\sqrt{s^2-c^2}\left(s + \sqrt{s^2-c^2}\right)^{\nu}} \qquad \Re(s) > c, \quad \nu > -1$$
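One entry of the table, $\mathcal{L}[\sin(ct)] = c/(s^2+c^2)$, can be verified by direct quadrature of the defining integral (a numeric check, not from the text):

```python
import math

# Check L{sin(c t)} = c/(s^2 + c^2) by computing Integral_0^inf e^{-st} sin(ct) dt.
c, s = 3.0, 2.0

def integrand(t):
    return math.exp(-s * t) * math.sin(c * t)

# Midpoint rule on [0, T]; the integrand decays like e^{-2t}, so the
# neglected tail beyond T = 20 is ~e^{-40}.
n, T = 200_000, 20.0
h = T / n
numeric = sum(integrand((k + 0.5) * h) for k in range(n)) * h

assert abs(numeric - c / (s**2 + c**2)) < 1e-6
print(numeric)
```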
Appendix J

Table of Fourier Transforms

$$f(x) \quad\longleftrightarrow\quad \frac{1}{2\pi} \int_{-\infty}^{\infty} f(x)\,e^{-i\omega x}\,dx$$

$$\int_{-\infty}^{\infty} F(\omega)\,e^{i\omega x}\,d\omega \quad\longleftrightarrow\quad F(\omega)$$

$$a f(x) + b g(x) \quad\longleftrightarrow\quad a F(\omega) + b G(\omega)$$

$$f^{(n)}(x) \quad\longleftrightarrow\quad (i\omega)^n F(\omega)$$

$$x^n f(x) \quad\longleftrightarrow\quad i^n F^{(n)}(\omega)$$

$$f(x+c) \quad\longleftrightarrow\quad e^{i\omega c} F(\omega)$$

$$e^{-icx} f(x) \quad\longleftrightarrow\quad F(\omega+c)$$

$$f(cx) \quad\longleftrightarrow\quad |c|^{-1} F(\omega/c)$$

$$f(x) g(x) \quad\longleftrightarrow\quad F * G(\omega) = \int_{-\infty}^{\infty} F(\eta) G(\omega-\eta)\,d\eta$$

$$\frac{1}{2\pi} f * g(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(\xi) g(x-\xi)\,d\xi \quad\longleftrightarrow\quad F(\omega) G(\omega)$$

$$e^{-cx^2}, \quad c > 0 \quad\longleftrightarrow\quad \frac{1}{\sqrt{4\pi c}}\,e^{-\omega^2/(4c)}$$

$$e^{-c|x|}, \quad c > 0 \quad\longleftrightarrow\quad \frac{c/\pi}{\omega^2+c^2}$$

$$\frac{2c}{x^2+c^2}, \quad c > 0 \quad\longleftrightarrow\quad e^{-c|\omega|}$$

$$\frac{1}{x-i\alpha}, \quad \alpha > 0 \quad\longleftrightarrow\quad \begin{cases} 0 & \text{for } \omega > 0 \\ i\,e^{\alpha\omega} & \text{for } \omega < 0 \end{cases}$$

$$\frac{1}{x-i\alpha}, \quad \alpha < 0 \quad\longleftrightarrow\quad \begin{cases} -i\,e^{\alpha\omega} & \text{for } \omega > 0 \\ 0 & \text{for } \omega < 0 \end{cases}$$

$$\frac{1}{x} \quad\longleftrightarrow\quad -\frac{i}{2}\operatorname{sign}(\omega)$$

$$H(x-c) = \begin{cases} 0 & \text{for } x < c \\ 1 & \text{for } x > c \end{cases} \quad\longleftrightarrow\quad \frac{1}{i2\pi\omega}\,e^{-ic\omega}$$

$$e^{-cx}\,H(x), \quad \Re(c) > 0 \quad\longleftrightarrow\quad \frac{1}{2\pi(c+i\omega)}$$

$$e^{cx}\,H(-x), \quad \Re(c) > 0 \quad\longleftrightarrow\quad \frac{1}{2\pi(c-i\omega)}$$

$$1 \quad\longleftrightarrow\quad \delta(\omega)$$

$$\delta(x-\xi) \quad\longleftrightarrow\quad \frac{1}{2\pi}\,e^{-i\omega\xi}$$

$$\delta(x+\xi) + \delta(x-\xi) \quad\longleftrightarrow\quad \frac{1}{\pi}\cos(\omega\xi)$$

$$i\left(\delta(x+\xi) - \delta(x-\xi)\right) \quad\longleftrightarrow\quad -\frac{1}{\pi}\sin(\omega\xi)$$

$$H(c-|x|) = \begin{cases} 1 & \text{for } |x| < c \\ 0 & \text{for } |x| > c \end{cases}, \quad c > 0 \quad\longleftrightarrow\quad \frac{\sin(c\omega)}{\pi\omega}$$
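The pair $e^{-c|x|} \leftrightarrow (c/\pi)/(\omega^2+c^2)$ can be checked numerically under this table's convention $F(\omega) = \frac{1}{2\pi}\int f(x)\,e^{-i\omega x}\,dx$ (a sanity check, not from the text):

```python
import math

# Check e^{-c|x|} <-> (c/pi)/(w^2 + c^2) with F(w) = (1/2pi) Int f(x) e^{-iwx} dx.
c, w = 1.5, 0.7

# Midpoint rule on [-L, L]; by symmetry only the cosine part survives.
n, L = 200_000, 40.0
h = 2 * L / n
total = 0.0
for k in range(n):
    x = -L + (k + 0.5) * h
    total += math.exp(-c * abs(x)) * math.cos(w * x)
total *= h / (2 * math.pi)

assert abs(total - (c / math.pi) / (w**2 + c**2)) < 1e-6
print(total)
```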
Appendix K

Table of Fourier Transforms in n Dimensions

$$f(\mathbf{x}) \quad\longleftrightarrow\quad \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} f(\mathbf{x})\,e^{-i\boldsymbol{\omega}\cdot\mathbf{x}}\,d\mathbf{x}$$

$$\int_{\mathbb{R}^n} F(\boldsymbol{\omega})\,e^{i\boldsymbol{\omega}\cdot\mathbf{x}}\,d\boldsymbol{\omega} \quad\longleftrightarrow\quad F(\boldsymbol{\omega})$$

$$a f(\mathbf{x}) + b g(\mathbf{x}) \quad\longleftrightarrow\quad a F(\boldsymbol{\omega}) + b G(\boldsymbol{\omega})$$

$$\left(\frac{\pi}{c}\right)^{n/2} e^{-x^2/(4c)} \quad\longleftrightarrow\quad e^{-c\omega^2}$$
Appendix L

Table of Fourier Cosine Transforms

Here $C(\omega) = \mathcal{F}_c[f(x)]$ and $S(\omega) = \mathcal{F}_s[f(x)]$ denote the Fourier cosine and sine transforms.

$$f(x) \quad\longleftrightarrow\quad \frac{1}{\pi} \int_0^{\infty} f(x) \cos(\omega x)\,dx$$

$$2 \int_0^{\infty} C(\omega) \cos(\omega x)\,d\omega \quad\longleftrightarrow\quad C(\omega)$$

$$f'(x) \quad\longleftrightarrow\quad \omega S(\omega) - \frac{1}{\pi} f(0)$$

$$f''(x) \quad\longleftrightarrow\quad -\omega^2 C(\omega) - \frac{1}{\pi} f'(0)$$

$$x f(x) \quad\longleftrightarrow\quad \frac{\partial}{\partial\omega} \mathcal{F}_s[f(x)]$$

$$f(cx), \quad c > 0 \quad\longleftrightarrow\quad \frac{1}{c}\,C\!\left(\frac{\omega}{c}\right)$$

$$\frac{2c}{x^2+c^2} \quad\longleftrightarrow\quad e^{-c\omega}$$

$$e^{-cx} \quad\longleftrightarrow\quad \frac{c/\pi}{\omega^2+c^2}$$

$$e^{-cx^2} \quad\longleftrightarrow\quad \frac{1}{\sqrt{4\pi c}}\,e^{-\omega^2/(4c)}$$

$$\sqrt{\frac{\pi}{c}}\,e^{-x^2/(4c)} \quad\longleftrightarrow\quad e^{-c\omega^2}$$
Appendix M

Table of Fourier Sine Transforms

Here $C(\omega) = \mathcal{F}_c[f(x)]$ and $S(\omega) = \mathcal{F}_s[f(x)]$ denote the Fourier cosine and sine transforms.

$$f(x) \quad\longleftrightarrow\quad \frac{1}{\pi} \int_0^{\infty} f(x) \sin(\omega x)\,dx$$

$$2 \int_0^{\infty} S(\omega) \sin(\omega x)\,d\omega \quad\longleftrightarrow\quad S(\omega)$$

$$f'(x) \quad\longleftrightarrow\quad -\omega C(\omega)$$

$$f''(x) \quad\longleftrightarrow\quad -\omega^2 S(\omega) + \frac{\omega}{\pi} f(0)$$

$$x f(x) \quad\longleftrightarrow\quad -\frac{\partial}{\partial\omega} \mathcal{F}_c[f(x)]$$

$$f(cx), \quad c > 0 \quad\longleftrightarrow\quad \frac{1}{c}\,S\!\left(\frac{\omega}{c}\right)$$

$$\frac{2x}{x^2+c^2} \quad\longleftrightarrow\quad e^{-c\omega}$$

$$e^{-cx} \quad\longleftrightarrow\quad \frac{\omega/\pi}{\omega^2+c^2}$$

$$2\arctan\!\left(\frac{x}{c}\right) \quad\longleftrightarrow\quad \frac{1}{\omega}\,e^{-c\omega}$$

$$\frac{1}{x}\,e^{-cx} \quad\longleftrightarrow\quad \frac{1}{\pi}\arctan\!\left(\frac{\omega}{c}\right)$$

$$1 \quad\longleftrightarrow\quad \frac{1}{\pi\omega}$$

$$\frac{1}{x} \quad\longleftrightarrow\quad \frac{1}{2}$$

$$x\,e^{-cx^2} \quad\longleftrightarrow\quad \frac{\omega}{4\sqrt{\pi}\,c^{3/2}}\,e^{-\omega^2/(4c)}$$

$$\frac{\sqrt{\pi}\,x}{2c^{3/2}}\,e^{-x^2/(4c)} \quad\longleftrightarrow\quad \omega\,e^{-c\omega^2}$$
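The pair $e^{-cx} \leftrightarrow (\omega/\pi)/(\omega^2+c^2)$ can be checked directly from the definition $S(\omega) = \frac{1}{\pi}\int_0^\infty f(x)\sin(\omega x)\,dx$ (a numeric check, not from the text):

```python
import math

# Check F_s[e^{-cx}] = (w/pi)/(w^2 + c^2) by midpoint quadrature.
c, w = 2.0, 1.3

n, L = 200_000, 30.0          # integrand decays like e^{-2x}; tail beyond 30 ~ e^{-60}
h = L / n
total = sum(
    math.exp(-c * ((k + 0.5) * h)) * math.sin(w * (k + 0.5) * h) for k in range(n)
) * h / math.pi

assert abs(total - (w / math.pi) / (w**2 + c**2)) < 1e-6
print(total)
```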
Appendix N

Table of Wronskians

$$W[x-a,\; x-b] = b-a$$

$$W\left[e^{ax},\; e^{bx}\right] = (b-a)\,e^{(a+b)x}$$

$$W[\cos(ax),\; \sin(ax)] = a$$

$$W[\cosh(ax),\; \sinh(ax)] = a$$

$$W\left[e^{ax}\cos(bx),\; e^{ax}\sin(bx)\right] = b\,e^{2ax}$$

$$W\left[e^{ax}\cosh(bx),\; e^{ax}\sinh(bx)\right] = b\,e^{2ax}$$

$$W[\sin(c(x-a)),\; \sin(c(x-b))] = c\sin(c(b-a))$$

$$W[\cos(c(x-a)),\; \cos(c(x-b))] = c\sin(c(b-a))$$

$$W[\sin(c(x-a)),\; \cos(c(x-b))] = -c\cos(c(b-a))$$

$$W[\sinh(c(x-a)),\; \sinh(c(x-b))] = c\sinh(c(b-a))$$

$$W[\cosh(c(x-a)),\; \cosh(c(x-b))] = -c\sinh(c(b-a))$$

$$W[\sinh(c(x-a)),\; \cosh(c(x-b))] = -c\cosh(c(b-a))$$

$$W\left[e^{dx}\sin(c(x-a)),\; e^{dx}\sin(c(x-b))\right] = c\,e^{2dx}\sin(c(b-a))$$

$$W\left[e^{dx}\cos(c(x-a)),\; e^{dx}\cos(c(x-b))\right] = c\,e^{2dx}\sin(c(b-a))$$

$$W\left[e^{dx}\sin(c(x-a)),\; e^{dx}\cos(c(x-b))\right] = -c\,e^{2dx}\cos(c(b-a))$$

$$W\left[e^{dx}\sinh(c(x-a)),\; e^{dx}\sinh(c(x-b))\right] = c\,e^{2dx}\sinh(c(b-a))$$

$$W\left[e^{dx}\cosh(c(x-a)),\; e^{dx}\cosh(c(x-b))\right] = -c\,e^{2dx}\sinh(c(b-a))$$

$$W\left[e^{dx}\sinh(c(x-a)),\; e^{dx}\cosh(c(x-b))\right] = -c\,e^{2dx}\cosh(c(b-a))$$

$$W\left[(x-a)\,e^{cx},\; (x-b)\,e^{cx}\right] = (b-a)\,e^{2cx}$$
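Any of these entries can be spot-checked numerically, e.g. $W[\cos(ax), \sin(ax)] = a$ via central-difference derivatives (a numeric check, not from the text):

```python
import math

# Numeric check of W[u, v] = u v' - u' v = a for u = cos(ax), v = sin(ax).
a, x, h = 2.5, 0.8, 1e-5

def d(f, x):
    # Central difference derivative, O(h^2) accurate.
    return (f(x + h) - f(x - h)) / (2 * h)

u = lambda t: math.cos(a * t)
v = lambda t: math.sin(a * t)
W = u(x) * d(v, x) - d(u, x) * v(x)

assert abs(W - a) < 1e-8
print(W)
```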
Appendix O

Sturm-Liouville Eigenvalue Problems

$$y'' + \lambda^2 y = 0, \quad y(a) = y(b) = 0$$
$$\lambda_n = \frac{n\pi}{b-a}, \quad y_n = \sin\left(\frac{n\pi(x-a)}{b-a}\right), \quad n \in \mathbb{N}$$
$$\langle y_n, y_n \rangle = \frac{b-a}{2}$$

$$y'' + \lambda^2 y = 0, \quad y(a) = y'(b) = 0$$
$$\lambda_n = \frac{(2n-1)\pi}{2(b-a)}, \quad y_n = \sin\left(\frac{(2n-1)\pi(x-a)}{2(b-a)}\right), \quad n \in \mathbb{N}$$
$$\langle y_n, y_n \rangle = \frac{b-a}{2}$$

$$y'' + \lambda^2 y = 0, \quad y'(a) = y(b) = 0$$
$$\lambda_n = \frac{(2n-1)\pi}{2(b-a)}, \quad y_n = \cos\left(\frac{(2n-1)\pi(x-a)}{2(b-a)}\right), \quad n \in \mathbb{N}$$
$$\langle y_n, y_n \rangle = \frac{b-a}{2}$$

$$y'' + \lambda^2 y = 0, \quad y'(a) = y'(b) = 0$$
$$\lambda_n = \frac{n\pi}{b-a}, \quad y_n = \cos\left(\frac{n\pi(x-a)}{b-a}\right), \quad n = 0, 1, 2, \ldots$$
$$\langle y_0, y_0 \rangle = b-a, \qquad \langle y_n, y_n \rangle = \frac{b-a}{2} \text{ for } n \in \mathbb{N}$$
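The boundary conditions and the norm $\langle y_n, y_n \rangle = (b-a)/2$ for the first problem can be verified by quadrature (a numeric check, not from the text):

```python
import math

# Check <y_n, y_n> = (b-a)/2 for y_n = sin(n pi (x-a)/(b-a)) on [a, b].
a, b, n = 1.0, 4.0, 3

def y(x):
    return math.sin(n * math.pi * (x - a) / (b - a))

m = 100_000
h = (b - a) / m
norm2 = sum(y(a + (k + 0.5) * h) ** 2 for k in range(m)) * h

assert abs(norm2 - (b - a) / 2) < 1e-6
assert abs(y(a)) < 1e-12 and abs(y(b)) < 1e-12   # Dirichlet conditions hold
print(norm2)
```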
Appendix P

Green Functions for Ordinary Differential Equations

$$G' + p(x) G = \delta(x-\xi), \quad G(\xi^-|\xi) = 0$$
$$G(x|\xi) = \exp\left( -\int_{\xi}^{x} p(t)\,dt \right) H(x-\xi)$$

$$y'' = 0, \quad y(a) = y(b) = 0$$
$$G(x|\xi) = \frac{(x_< - a)(x_> - b)}{b-a}$$

$$y'' = 0, \quad y(a) = y'(b) = 0$$
$$G(x|\xi) = a - x_<$$

$$y'' = 0, \quad y'(a) = y(b) = 0$$
$$G(x|\xi) = x_> - b$$

$$y'' - c^2 y = 0, \quad y(a) = y(b) = 0$$
$$G(x|\xi) = \frac{\sinh(c(x_< - a))\,\sinh(c(x_> - b))}{c\,\sinh(c(b-a))}$$

$$y'' - c^2 y = 0, \quad y(a) = y'(b) = 0$$
$$G(x|\xi) = \frac{\sinh(c(x_< - a))\,\cosh(c(x_> - b))}{c\,\cosh(c(b-a))}$$

$$y'' - c^2 y = 0, \quad y'(a) = y(b) = 0$$
$$G(x|\xi) = \frac{\cosh(c(x_< - a))\,\sinh(c(x_> - b))}{c\,\cosh(c(b-a))}$$

$$y'' + c^2 y = 0, \quad y(a) = y(b) = 0, \quad c \neq \frac{n\pi}{b-a}, \; n \in \mathbb{N}$$
$$G(x|\xi) = \frac{\sin(c(x_< - a))\,\sin(c(x_> - b))}{c\,\sin(c(b-a))}$$

$$y'' + c^2 y = 0, \quad y(a) = y'(b) = 0, \quad c \neq \frac{(2n-1)\pi}{2(b-a)}, \; n \in \mathbb{N}$$
$$G(x|\xi) = \frac{\sin(c(x_< - a))\,\cos(c(x_> - b))}{c\,\cos(c(b-a))}$$

$$y'' + c^2 y = 0, \quad y'(a) = y(b) = 0, \quad c \neq \frac{(2n-1)\pi}{2(b-a)}, \; n \in \mathbb{N}$$
$$G(x|\xi) = \frac{\cos(c(x_< - a))\,\sin(c(x_> - b))}{c\,\cos(c(b-a))}$$

$$y'' + 2cy' + dy = 0, \quad y(a) = y(b) = 0, \quad c^2 > d$$
$$G(x|\xi) = \frac{e^{-cx_<}\sinh\left(\sqrt{c^2-d}\,(x_< - a)\right)\,e^{-cx_>}\sinh\left(\sqrt{c^2-d}\,(x_> - b)\right)}{\sqrt{c^2-d}\;e^{-2c\xi}\,\sinh\left(\sqrt{c^2-d}\,(b-a)\right)}$$

$$y'' + 2cy' + dy = 0, \quad y(a) = y(b) = 0, \quad c^2 < d, \quad \sqrt{d-c^2} \neq \frac{n\pi}{b-a}, \; n \in \mathbb{N}$$
$$G(x|\xi) = \frac{e^{-cx_<}\sin\left(\sqrt{d-c^2}\,(x_< - a)\right)\,e^{-cx_>}\sin\left(\sqrt{d-c^2}\,(x_> - b)\right)}{\sqrt{d-c^2}\;e^{-2c\xi}\,\sin\left(\sqrt{d-c^2}\,(b-a)\right)}$$

$$y'' + 2cy' + dy = 0, \quad y(a) = y(b) = 0, \quad c^2 = d$$
$$G(x|\xi) = \frac{(x_< - a)\,e^{-cx_<}\,(x_> - b)\,e^{-cx_>}}{(b-a)\,e^{-2c\xi}}$$
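The first boundary-value entry can be checked by using it to solve $y'' = 1$, $y(a) = y(b) = 0$, whose exact solution is $y = (x-a)(x-b)/2$ (a numeric check, not from the text):

```python
# Check G(x|xi) = (x_< - a)(x_> - b)/(b-a) for y'' = f, y(a) = y(b) = 0,
# by comparing y(x) = Integral_a^b G(x|xi) f(xi) dxi against the exact solution.
a, b = 0.0, 2.0

def G(x, xi):
    lo, hi = min(x, xi), max(x, xi)
    return (lo - a) * (hi - b) / (b - a)

def solve(x, f, m=20_000):
    # Midpoint quadrature in xi.
    h = (b - a) / m
    return sum(G(x, a + (k + 0.5) * h) * f(a + (k + 0.5) * h) for k in range(m)) * h

x = 0.7
y = solve(x, lambda xi: 1.0)
assert abs(y - (x - a) * (x - b) / 2) < 1e-6
print(y)
```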
Appendix Q

Trigonometric Identities

Q.1 Circular Functions

Pythagorean Identities

$$\sin^2 x + \cos^2 x = 1, \qquad 1 + \tan^2 x = \sec^2 x, \qquad 1 + \cot^2 x = \csc^2 x$$

Angle Sum and Difference Identities

$$\sin(x+y) = \sin x \cos y + \cos x \sin y$$
$$\sin(x-y) = \sin x \cos y - \cos x \sin y$$
$$\cos(x+y) = \cos x \cos y - \sin x \sin y$$
$$\cos(x-y) = \cos x \cos y + \sin x \sin y$$

Function Sum and Difference Identities

$$\sin x + \sin y = 2 \sin\tfrac{1}{2}(x+y) \cos\tfrac{1}{2}(x-y)$$
$$\sin x - \sin y = 2 \cos\tfrac{1}{2}(x+y) \sin\tfrac{1}{2}(x-y)$$
$$\cos x + \cos y = 2 \cos\tfrac{1}{2}(x+y) \cos\tfrac{1}{2}(x-y)$$
$$\cos x - \cos y = -2 \sin\tfrac{1}{2}(x+y) \sin\tfrac{1}{2}(x-y)$$

Double Angle Identities

$$\sin 2x = 2 \sin x \cos x, \qquad \cos 2x = \cos^2 x - \sin^2 x$$

Half Angle Identities

$$\sin^2\frac{x}{2} = \frac{1 - \cos x}{2}, \qquad \cos^2\frac{x}{2} = \frac{1 + \cos x}{2}$$

Function Product Identities

$$\sin x \sin y = \tfrac{1}{2}\cos(x-y) - \tfrac{1}{2}\cos(x+y)$$
$$\cos x \cos y = \tfrac{1}{2}\cos(x-y) + \tfrac{1}{2}\cos(x+y)$$
$$\sin x \cos y = \tfrac{1}{2}\sin(x+y) + \tfrac{1}{2}\sin(x-y)$$
$$\cos x \sin y = \tfrac{1}{2}\sin(x+y) - \tfrac{1}{2}\sin(x-y)$$

Exponential Identities

$$e^{ix} = \cos x + i \sin x, \qquad \sin x = \frac{e^{ix} - e^{-ix}}{2i}, \qquad \cos x = \frac{e^{ix} + e^{-ix}}{2}$$

Q.2 Hyperbolic Functions

Exponential Identities

$$\sinh x = \frac{e^x - e^{-x}}{2}, \qquad \cosh x = \frac{e^x + e^{-x}}{2}$$
$$\tanh x = \frac{\sinh x}{\cosh x} = \frac{e^x - e^{-x}}{e^x + e^{-x}}$$

Reciprocal Identities

$$\operatorname{csch} x = \frac{1}{\sinh x}, \qquad \operatorname{sech} x = \frac{1}{\cosh x}, \qquad \coth x = \frac{1}{\tanh x}$$

Pythagorean Identities

$$\cosh^2 x - \sinh^2 x = 1, \qquad \tanh^2 x + \operatorname{sech}^2 x = 1$$

Relation to Circular Functions

$$\sinh(ix) = i \sin x \qquad \sinh x = -i \sin(ix)$$
$$\cosh(ix) = \cos x \qquad \cosh x = \cos(ix)$$
$$\tanh(ix) = i \tan x \qquad \tanh x = -i \tan(ix)$$

Angle Sum and Difference Identities

$$\sinh(x \pm y) = \sinh x \cosh y \pm \cosh x \sinh y$$
$$\cosh(x \pm y) = \cosh x \cosh y \pm \sinh x \sinh y$$
$$\tanh(x \pm y) = \frac{\tanh x \pm \tanh y}{1 \pm \tanh x \tanh y} = \frac{\sinh 2x \pm \sinh 2y}{\cosh 2x + \cosh 2y}$$
$$\coth(x \pm y) = \frac{1 \pm \coth x \coth y}{\coth x \pm \coth y} = \frac{\sinh 2x \mp \sinh 2y}{\cosh 2x - \cosh 2y}$$

Function Sum and Difference Identities

$$\sinh x \pm \sinh y = 2 \sinh\tfrac{1}{2}(x \pm y) \cosh\tfrac{1}{2}(x \mp y)$$
$$\cosh x + \cosh y = 2 \cosh\tfrac{1}{2}(x+y) \cosh\tfrac{1}{2}(x-y)$$
$$\cosh x - \cosh y = 2 \sinh\tfrac{1}{2}(x+y) \sinh\tfrac{1}{2}(x-y)$$
$$\tanh x \pm \tanh y = \frac{\sinh(x \pm y)}{\cosh x \cosh y}$$
$$\coth x \pm \coth y = \pm\frac{\sinh(x \pm y)}{\sinh x \sinh y}$$

Double Angle Identities

$$\sinh 2x = 2 \sinh x \cosh x, \qquad \cosh 2x = \cosh^2 x + \sinh^2 x$$

Half Angle Identities

$$\sinh^2\frac{x}{2} = \frac{\cosh x - 1}{2}, \qquad \cosh^2\frac{x}{2} = \frac{\cosh x + 1}{2}$$

Function Product Identities

$$\sinh x \sinh y = \tfrac{1}{2}\cosh(x+y) - \tfrac{1}{2}\cosh(x-y)$$
$$\cosh x \cosh y = \tfrac{1}{2}\cosh(x+y) + \tfrac{1}{2}\cosh(x-y)$$
$$\sinh x \cosh y = \tfrac{1}{2}\sinh(x+y) + \tfrac{1}{2}\sinh(x-y)$$

See Figure Q.1 for plots of the hyperbolic functions.

[Figure Q.1: cosh x, sinh x, and tanh x]
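Identities like these are easy to spot-check at random arguments (a numeric check, not from the text):

```python
import math
import random

# Spot-check a product identity and a Pythagorean identity from the tables.
random.seed(0)
for _ in range(100):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    # sin x cos y = (1/2) sin(x+y) + (1/2) sin(x-y)
    lhs = math.sin(x) * math.cos(y)
    rhs = 0.5 * math.sin(x + y) + 0.5 * math.sin(x - y)
    assert abs(lhs - rhs) < 1e-12
    # cosh^2 x - sinh^2 x = 1
    assert abs(math.cosh(x) ** 2 - math.sinh(x) ** 2 - 1) < 1e-9
print("identities hold")
```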
Appendix R

Bessel Functions

R.1 Definite Integrals

Let $\nu > -1$.

$$\int_0^1 r J_{\nu}(j_{\nu,m} r) J_{\nu}(j_{\nu,n} r)\,dr = \frac{1}{2}\left(J'_{\nu}(j_{\nu,n})\right)^2 \delta_{mn}$$

$$\int_0^1 r J_{\nu}(j'_{\nu,m} r) J_{\nu}(j'_{\nu,n} r)\,dr = \frac{j'^{\,2}_{\nu,n} - \nu^2}{2 j'^{\,2}_{\nu,n}} \left(J_{\nu}(j'_{\nu,n})\right)^2 \delta_{mn}$$

$$\int_0^1 r J_{\nu}(\lambda_m r) J_{\nu}(\lambda_n r)\,dr = \frac{1}{2\lambda_n^2}\left(\frac{a^2}{b^2} + \lambda_n^2 - \nu^2\right)\left(J_{\nu}(\lambda_n)\right)^2 \delta_{mn}$$

Here $\lambda_n$ is the $n^{\text{th}}$ positive root of $a J_{\nu}(r) + b r J'_{\nu}(r)$, where $a, b \in \mathbb{R}$.
Appendix S

Formulas from Linear Algebra

Cramer's Rule. Consider the matrix equation
$$A \vec{x} = \vec{b}.$$
This equation has a unique solution if and only if $\det(A) \neq 0$. If the determinant vanishes then there are either no solutions or an infinite number of solutions. If the determinant is nonzero, the solution for each $x_j$ can be written
$$x_j = \frac{\det A_j}{\det A}$$
where $A_j$ is the matrix formed by replacing the $j^{\text{th}}$ column of $A$ with $\vec{b}$.

Example S.0.1 The matrix equation
$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 5 \\ 6 \end{pmatrix},$$
has the solution
$$x_1 = \frac{\begin{vmatrix} 5 & 2 \\ 6 & 4 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix}} = \frac{8}{-2} = -4, \qquad x_2 = \frac{\begin{vmatrix} 1 & 5 \\ 3 & 6 \end{vmatrix}}{\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix}} = \frac{-9}{-2} = \frac{9}{2}.$$
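The $2 \times 2$ example works out exactly in rational arithmetic (a numeric check, not from the text):

```python
from fractions import Fraction

# Cramer's rule for A = [[1, 2], [3, 4]], b = (5, 6).
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
b = [Fraction(5), Fraction(6)]

d = det2(A)                                           # det A = -2
x1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]]) / d     # replace column 1 with b
x2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]]) / d     # replace column 2 with b

assert (x1, x2) == (Fraction(-4), Fraction(9, 2))
print(x1, x2)
```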
Appendix T

Vector Analysis

Rectangular Coordinates

$$f = f(x, y, z), \qquad \vec{g} = g_x\,\vec{i} + g_y\,\vec{j} + g_z\,\vec{k}$$

$$\nabla f = \frac{\partial f}{\partial x}\,\vec{i} + \frac{\partial f}{\partial y}\,\vec{j} + \frac{\partial f}{\partial z}\,\vec{k}$$

$$\nabla \cdot \vec{g} = \frac{\partial g_x}{\partial x} + \frac{\partial g_y}{\partial y} + \frac{\partial g_z}{\partial z}$$

$$\nabla \times \vec{g} = \begin{vmatrix} \vec{i} & \vec{j} & \vec{k} \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ g_x & g_y & g_z \end{vmatrix}$$

$$\Delta f = \nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}$$

Spherical Coordinates

$$x = r\cos\theta\sin\phi, \qquad y = r\sin\theta\sin\phi, \qquad z = r\cos\phi$$

$$f = f(r, \theta, \phi), \qquad \vec{g} = g_r\,\hat{r} + g_{\theta}\,\hat{\theta} + g_{\phi}\,\hat{\phi}$$

Divergence Theorem.
$$\iint \nabla \cdot \vec{u}\,dx\,dy = \oint \vec{u} \cdot \vec{n}\,ds$$

Stokes' Theorem.
$$\iint (\nabla \times \vec{u}) \cdot d\vec{s} = \oint \vec{u} \cdot d\vec{r}$$
Appendix U

Partial Fractions

A proper rational function
$$\frac{p(x)}{q(x)} = \frac{p(x)}{(x-\alpha)^n r(x)}$$
can be written in the form
$$\frac{p(x)}{(x-\alpha)^n r(x)} = \left( \frac{a_0}{(x-\alpha)^n} + \frac{a_1}{(x-\alpha)^{n-1}} + \cdots + \frac{a_{n-1}}{x-\alpha} \right) + (\cdots)$$
where the $a_k$'s are constants and the last ellipsis represents the partial fractions expansion of the roots of $r(x)$. The coefficients are
$$a_k = \frac{1}{k!} \left. \frac{d^k}{dx^k} \left( \frac{p(x)}{r(x)} \right) \right|_{x=\alpha}.$$

Example U.0.2 Consider the partial fraction expansion of
$$\frac{1+x+x^2}{(x-1)^3}.$$
The expansion has the form
$$\frac{a_0}{(x-1)^3} + \frac{a_1}{(x-1)^2} + \frac{a_2}{x-1}.$$
The coefficients are
$$a_0 = \frac{1}{0!} (1+x+x^2)\big|_{x=1} = 3,$$
$$a_1 = \frac{1}{1!} \frac{d}{dx}(1+x+x^2)\Big|_{x=1} = (1+2x)\big|_{x=1} = 3,$$
$$a_2 = \frac{1}{2!} \frac{d^2}{dx^2}(1+x+x^2)\Big|_{x=1} = \frac{1}{2}(2)\big|_{x=1} = 1.$$
Thus we have
$$\frac{1+x+x^2}{(x-1)^3} = \frac{3}{(x-1)^3} + \frac{3}{(x-1)^2} + \frac{1}{x-1}.$$

Example U.0.3 Consider the partial fraction expansion of
$$\frac{1+x+x^2}{x^2(x-1)^2}.$$
The expansion has the form
$$\frac{a_0}{x^2} + \frac{a_1}{x} + \frac{b_0}{(x-1)^2} + \frac{b_1}{x-1}.$$
The coefficients are
$$a_0 = \frac{1}{0!} \left( \frac{1+x+x^2}{(x-1)^2} \right)\bigg|_{x=0} = 1,$$
$$a_1 = \frac{1}{1!} \frac{d}{dx} \left( \frac{1+x+x^2}{(x-1)^2} \right)\bigg|_{x=0} = \left( \frac{1+2x}{(x-1)^2} - \frac{2(1+x+x^2)}{(x-1)^3} \right)\bigg|_{x=0} = 3,$$
$$b_0 = \frac{1}{0!} \left( \frac{1+x+x^2}{x^2} \right)\bigg|_{x=1} = 3,$$
$$b_1 = \frac{1}{1!} \frac{d}{dx} \left( \frac{1+x+x^2}{x^2} \right)\bigg|_{x=1} = \left( \frac{1+2x}{x^2} - \frac{2(1+x+x^2)}{x^3} \right)\bigg|_{x=1} = -3.$$
Thus we have
$$\frac{1+x+x^2}{x^2(x-1)^2} = \frac{1}{x^2} + \frac{3}{x} + \frac{3}{(x-1)^2} - \frac{3}{x-1}.$$

If the rational function has real coefficients and the denominator has complex roots, then you can reduce the work in finding the partial fraction expansion with the following trick: Let $\alpha$ and $\bar{\alpha}$ be complex conjugate pairs of roots of the denominator.
$$\frac{p(x)}{(x-\alpha)^n (x-\bar{\alpha})^n r(x)} = \left( \frac{a_0}{(x-\alpha)^n} + \frac{a_1}{(x-\alpha)^{n-1}} + \cdots + \frac{a_{n-1}}{x-\alpha} \right) + \left( \frac{\bar{a}_0}{(x-\bar{\alpha})^n} + \frac{\bar{a}_1}{(x-\bar{\alpha})^{n-1}} + \cdots + \frac{\bar{a}_{n-1}}{x-\bar{\alpha}} \right) + (\cdots)$$
Thus we don't have to calculate the coefficients for the root at $\bar{\alpha}$. We just take the complex conjugate of the coefficients for $\alpha$.

Example U.0.4 Consider the partial fraction expansion of
$$\frac{1+x}{x^2+1}.$$
The expansion has the form
$$\frac{a_0}{x-i} + \frac{\bar{a}_0}{x+i}.$$
The coefficients are
$$a_0 = \frac{1}{0!} \left( \frac{1+x}{x+i} \right)\bigg|_{x=i} = \frac{1}{2}(1-i),$$
$$\bar{a}_0 = \overline{\frac{1}{2}(1-i)} = \frac{1}{2}(1+i).$$
Thus we have
$$\frac{1+x}{x^2+1} = \frac{1-i}{2(x-i)} + \frac{1+i}{2(x+i)}.$$
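Example U.0.2 can be verified exactly with rational arithmetic at arbitrary sample points (a check, not from the text):

```python
from fractions import Fraction
import random

# Verify (1+x+x^2)/(x-1)^3 = 3/(x-1)^3 + 3/(x-1)^2 + 1/(x-1) at random rationals.
random.seed(1)
for _ in range(20):
    x = Fraction(random.randint(2, 50), random.randint(1, 7))
    if x == 1:
        continue  # avoid the pole
    lhs = (1 + x + x**2) / (x - 1) ** 3
    rhs = 3 / (x - 1) ** 3 + 3 / (x - 1) ** 2 + 1 / (x - 1)
    assert lhs == rhs
print("decomposition verified")
```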
Appendix V

Finite Math

Newton's Binomial Formula.
$$(a+b)^n = \sum_{k=0}^{n} \binom{n}{k} a^{n-k} b^k = a^n + n a^{n-1} b + \frac{n(n-1)}{2} a^{n-2} b^2 + \cdots + n a b^{n-1} + b^n,$$
The binomial coefficients are,
$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$$
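The formula can be checked by expanding term by term and comparing against direct exponentiation (a numeric check, not from the text):

```python
import math

# Expand (a+b)^n with the binomial formula and compare to (a+b)**n.
a, b, n = 3, 5, 7
expanded = sum(math.comb(n, k) * a ** (n - k) * b ** k for k in range(n + 1))

assert expanded == (a + b) ** n
print(expanded)
```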
Appendix W

Probability

W.1 Independent Events

Once upon a time I was talking with the father of one of my colleagues at Caltech. He was an educated man. I think that he had studied Russian literature and language back when he was in college. We were discussing gambling. He told me that he had a scheme for winning money at the game of 21. I was familiar with counting cards. Being a mathematician, I was not interested in hearing about conditional probability from a literature major, but I said nothing and prepared to hear about his particular technique. I was quite surprised with his method: He said that when he was on a winning streak he would bet more and when he was on a losing streak he would bet less. He conceded that he lost more hands than he won, but since he bet more when he was winning, he made money in the end.

I respectfully and thoroughly explained to him the concept of an independent event. Also, if one is not counting cards then each hand in 21 is essentially an independent event. The outcome of the previous hand has no bearing on the current. Throughout the explanation he nodded his head and agreed with my reasoning. When I was finished he replied, "Yes, that's true. But you see, I have a method. When I'm on my winning streak I bet more and when I'm on my losing streak I bet less."

I pretended that I understood. I didn't want to be rude. After all, he had taken the time to explain the concept of a winning streak to me. And everyone knows that mathematicians often do not easily understand practical matters, particularly games of chance.

Never explain mathematics to the layperson.

W.2 Playing the Odds

Years ago in a classroom not so far away, your author was being subjected to a presentation of a lengthy proof. About five minutes into the lecture, the entire class was hopelessly lost. At the forty-five minute mark the professor had a combinatorial expression that covered most of a chalk board. From his previous queries the professor knew that none of the students had a clue what was going on. This pleased him and he had become more animated as the lecture progressed. He gestured to the board with a smirk and asked for the value of the expression. Without a moment's hesitation, I nonchalantly replied, "zero." The professor was taken aback. He was clearly impressed that I was able to evaluate the expression, especially because I had done it in my head and so quickly. He enquired as to my method. "Probability," I replied. "Professors often present difficult problems that have simple, elegant solutions. Zero is the most elegant of numerical answers and thus most likely to be the correct answer. My second guess would have been one." The professor was not amused.

Whenever a professor asks the class a question which has a numeric answer, immediately respond, "zero." If you are asked about your method, casually say something vague about symmetry. Speak with confidence and give non-verbal cues that you consider the problem to be elementary. This tactic will usually suffice. It's quite likely that some kind of symmetry is involved. And if it isn't, your response will puzzle the professor. They may continue with the next topic, not wanting to admit that they don't see the symmetry in such an elementary problem. If they press further, start mumbling to yourself. Pretend that you are lost in thought, perhaps considering some generalization of the result. They may be a little irked that you are ignoring them, but it's better than divulging your true method.
Appendix X

Economics

There are two important concepts in economics. The first is "Buy low, sell high," which is self-explanatory. The second is opportunity cost, the highest valued alternative that must be sacrificed to attain something or otherwise satisfy a want. I discovered this concept as an undergraduate at Caltech. I was never very into computer games, but I found myself randomly playing tetris. Out of the blue I was struck by a revelation: I could be having sex right now. I haven't played a computer game since.
Appendix Y

Glossary

Phrases often have different meanings in mathematics than in everyday usage. Here I have collected definitions of some mathematical terms which might confuse the novice.

beyond the scope of this text: Beyond the comprehension of the author.

difficult: Essentially impossible. Note that mathematicians never refer to problems they have solved as being difficult. This would either be boastful, (claiming that you can solve difficult problems), or self-deprecating, (admitting that you found the problem to be difficult).

interesting: This word is grossly overused in math and science. It is often used to describe any work that the author has done, regardless of the work's significance or novelty. It may also be used as a synonym for difficult. It has a completely different meaning when used by the non-mathematician. When I tell people that I am a mathematician they typically respond with, "That must be interesting," which means, "I don't know anything about math or what mathematicians do." I typically answer, "No. Not really."

non-obvious or non-trivial: Real fuckin' hard.

one can prove that . . .: The one that proved it was a genius like Gauss. The phrase literally means "you haven't got a chance in hell of proving that . . ."

simple: Mathematicians communicate their prowess to colleagues and students by referring to all problems as simple or trivial. If you ever become a math professor, introduce every example as being "really quite trivial." [1]

[1] For even more fun say it in your best Elmer Fudd accent. "This next pwobwem is weawy quite twiviaw."
Index

a + i b form, 158
Abel's formula, 766
absolute convergence, 423
adjoint
  of a differential operator, 773
  of operators, 1178
analytic, 304
analytic continuation, 356
  Fourier integrals, 1426
analytic functions, 2049
anti-derivative, 390
Argand diagram, 154
argument
  of a complex number, 155
argument theorem, 411
asymptotic expansions, 1113
  integration by parts, 1126
asymptotic relations, 1113
autonomous D.E., 850
average value theorem, 410
Bernoulli equations, 842
Bessel functions, 1501
  generating function, 1508
  of the first kind, 1507
  second kind, 1523
Bessel's equation, 1501
Bessel's inequality, 1160, 1206
bilinear concomitant, 774
binomial coefficients, 2103
binomial formula, 2103
boundary value problems, 968
branch
  principal, 6
branch point, 226
branches, 6
calculus of variations, 1873
canonical forms
  constant coefficient equation, 878
  of differential equations, 878
cardinality
  of a set, 3
Cartesian form, 158
Cartesian product
  of sets, 3
Cauchy convergence, 423
Cauchy principal value, 498, 1424
Cauchy's inequality, 407
Cauchy-Riemann equations, 310, 2049
clockwise, 204
closed interval, 3
closure relation
  and Fourier transform, 1429
  discrete sets of functions, 1161
codomain, 4
comparison test, 426
completeness
  of sets of functions, 1161
  sets of vectors, 24
complex conjugate, 152, 153
complex derivative, 303, 304
complex line integral, 383
complex number, 152
  magnitude, 154
  modulus, 154
complex numbers, 151
  arithmetic, 163
  set of, 3
  vectors, 163
complex plane, 154
  first order differential equations, 661
computer games, 2106
connected region, 203
constant coefficient differential equations, 786
continuity, 42
  uniform, 44
continuous
  piecewise, 44
continuous functions, 42, 432, 435
contour, 203
  traversal of, 204
convergence
  absolute, 423
  Cauchy, 423
  comparison test, 426
  in the mean, 1160
  integral test, 427
  of integrals, 1342
  ratio test, 428
  root test, 430
  sequences, 422
  series, 423
  uniform, 432
convolution theorem
  and Fourier transform, 1431
  for Laplace transforms, 1363
convolutions, 1363
counter-clockwise, 204
Cramer's rule, 2095
curve, 203
  closed, 203
  continuous, 203
  Jordan, 204
  piecewise smooth, 203
  simple, 203
  smooth, 203
definite integral, 106
degree
  of a differential equation, 636
del, 137
delta function
  Kronecker, 23
derivative
  complex, 304
determinant
  derivative of, 762
difference
  of sets, 4
difference equations
  constant coefficient equations, 1035
  exact equations, 1029
  first order homogeneous, 1030
  first order inhomogeneous, 1032
differential calculus, 37
differential equations
  autonomous, 850
  constant coefficient, 786
  degree, 636
  equidimensional-in-x, 854
  equidimensional-in-y, 856
  Euler, 795
  exact, 639, 801
  first order, 635, 649
  homogeneous, 636
  homogeneous coefficient, 645
  inhomogeneous, 636
  linear, 636
  order, 635
  ordinary, 635
  scale-invariant, 859
  separable, 643
  without explicit dep. on y, 802
differential operator
  linear, 758
Dirac delta function, 904, 1162
direction
  negative, 204
  positive, 204
directional derivative, 138
discontinuous functions, 43, 1202
discrete derivative, 1028
discrete integral, 1028
disjoint sets, 4
domain, 4
economics, 2106
eigenfunctions, 1195
eigenvalue problems, 1195
eigenvalues, 1195
elements
  of a set, 2
empty set, 2
entire, 304
equidimensional differential equations, 795
equidimensional-in-x D.E., 854
equidimensional-in-y D.E., 856
Euler differential equations, 795
Euler's formula, 159
Euler's theorem, 645
Euler-Mascheroni constant, 1490
exact differential equations, 801
exact equations, 639
exchanging dep. and indep. var., 848
extremum modulus theorem, 411
Fibonacci sequence, 1040
formally self-adjoint operators, 1179
Fourier coefficients, 1156, 1200
  behavior of, 1215
Fourier convolution theorem, 1431
Fourier cosine series, 1210
Fourier cosine transform, 1440
  of derivatives, 1442
  table of, 2078
Fourier series, 1195
  and Fourier transform, 1415
  uniform convergence, 1219
Fourier sine series, 1211, 1301
Fourier sine transform, 1441
  of derivatives, 1442
  table of, 2080
Fourier transform
  alternate definitions, 1420
  closure relation, 1429
  convolution theorem, 1431
  of a derivative, 1430
  Parseval's theorem, 1435
  shift property, 1436
  table of, 2074, 2077
Fredholm alternative theorem, 968
Fredholm equations, 888
Frobenius series
  first order differential equation, 666
function
  bijective, 5
  injective, 5
  inverse of, 6
  multi-valued, 6
  single-valued, 4
  surjective, 5
function elements, 356
functional equation, 327
fundamental set of solutions
  of a differential equation, 770
fundamental theorem of algebra, 409
fundamental theorem of calculus, 109
gambler's ruin problem, 1027, 1036
Gamma function, 1484
  difference equation, 1484
  Euler's formula, 1484
  Gauss' formula, 1488
  Hankel's formula, 1486
  Weierstrass' formula, 1490
generating function
  for Bessel functions, 1507
geometric series, 425
Gibbs phenomenon, 1224
gradient, 137
Gram-Schmidt orthogonalization, 1147
greatest integer function, 5
Green's formula, 775, 1179
harmonic conjugate, 316
harmonic series, 426, 457
Heaviside function, 655, 904
holomorphic, 304
homogeneous coefficient equations, 645
homogeneous differential equations, 636
homogeneous functions, 645
homogeneous solution, 651
homogeneous solutions
  of differential equations, 759
identity map, 4
ill-posed problems, 659
  linear differential equations, 768
image
  of a mapping, 5
imaginary part, 152
improper integrals, 114
indefinite integral, 100, 390
indicial equation, 1063
infinity
  first order differential equation, 671
inhomogeneous differential equations, 636
initial conditions, 653
inner product
  of functions, 1152
integers
  set of, 2
integral bound
  maximum modulus, 386
integral calculus, 100
integral equations, 888
  boundary value problems, 888
  initial value problems, 888
integrals
  improper, 114
integrating factor, 650
integration
  techniques of, 111
intermediate value theorem, 44
intersection
  of sets, 4
interval
  closed, 3
  open, 3
inverse function, 6
inverse image, 5
irregular singular points, 1079
  first order differential equations, 669
Jordan curve, 204
Jordan's lemma, 2050
Kronecker delta function, 23
L'Hospital's rule, 64
Lagrange's identity, 774, 805, 1178
Laplace transform
  inverse, 1349
Laplace transform pairs, 1352
Laplace transforms, 1347
  convolution theorem, 1363
  of derivatives, 1363
Laurent expansions, 491, 2049
Laurent series, 452, 2051
  first order differential equation, 665
leading order behavior
  for differential equations, 1117
least integer function, 5
least squares fit
  Fourier series, 1204
Legendre polynomials, 1149
limit
  left and right, 39
limits of functions, 37
line integral, 382
  complex, 383
linear differential equations, 636
linear differential operator, 758
linear space, 1141
Liouville's theorem, 408
magnitude, 154
maximum modulus integral bound, 386
maximum modulus theorem, 411
Mellin inversion formula, 1350
minimum modulus theorem, 411
modulus, 154
multi-valued function, 6
nabla, 137
natural boundary, 356
Newton's binomial formula, 2103
norm
  of functions, 1152
normal form
  of differential equations, 881
null vector, 12
one-to-one mapping, 5
open interval, 3
opportunity cost, 2106
optimal asymptotic approximations, 1132
order
  of a differential equation, 635
  of a set, 3
ordinary points
  first order differential equations, 661
  of linear differential equations, 1046
orthogonal series, 1155
orthogonality
  weighting functions, 1154
orthonormal, 1152
Parseval's equality, 1206
Parseval's theorem
  for Fourier transform, 1435
partial derivative, 135
particular solution, 651
  of an ODE, 915
particular solutions
  of differential equations, 759
periodic extension, 1201
piecewise continuous, 44
point at infinity
  differential equations, 1079
polar form, 158
power series
  definition of, 436
  differentiation of, 443
  integration of, 443
  radius of convergence, 438
  uniformly convergent, 436
principal argument, 155
principal branch, 6
principal root, 168
principal value, 498, 1424
pure imaginary number, 152
range
  of a mapping, 5
ratio test, 428
rational numbers
  set of, 2
Rayleigh's quotient, 1297
  minimum property, 1297
real numbers
  set of, 2
real part, 152
rectangular unit vectors, 13
reduction of order, 803
  and the adjoint equation, 804
  difference equations, 1038
region
  connected, 203
  multiply-connected, 203
  simply-connected, 203
regular, 304
regular singular points
  first order differential equations, 664
regular Sturm-Liouville problems, 1291
  properties of, 1300
residuals
  of series, 424
residue theorem, 495, 2050
  principal values, 507
residues, 491, 2049
  of a pole of order n, 491, 2050
Riccati equations, 844
Riemann zeta function, 426
Riemann-Lebesgue lemma, 1343
root test, 430
Rouché's theorem, 413
scalar field, 135
scale-invariant D.E., 859
separable equations, 643
sequences
  convergence of, 422
series, 422
  comparison test, 426
  convergence of, 422, 423
  geometric, 425
  integral test, 427
  ratio test, 428
  residuals, 424
  root test, 430
  tail of, 423
set, 2
similarity transformation, 1734
single-valued function, 4
singularity, 320
  branch point, 321
Stirling's approximation, 1492
subset, 3
  proper, 3
Taylor series, 446, 2050
  first order differential equations, 662
  table of, 2066
transformations
  of differential equations, 878
  of independent variable, 885
  to constant coefficient equation, 886
  to integral equations, 888
trigonometric identities, 2089
uniform continuity, 44
uniform convergence, 432
  of Fourier series, 1219
  of integrals, 1342
union
  of sets, 3
variation of parameters
  first order equation, 651
vector
  components of, 13
  rectangular unit, 13
vector calculus, 134
vector field, 135
vector-valued functions, 134
Volterra equations, 888
wave equation
  d'Alembert's solution, 1776
  Fourier transform solution, 1776
  Laplace transform solution, 1777
Weber's function, 1523
Weierstrass M-test, 433
well-posed problems, 659
  linear differential equations, 768
Wronskian, 763, 764
zero vector, 12