# An Introduction to Mathematical Physics via Oscillations

R. L. Herman August 11, 2006

Contents

0 Prologue
  0.1 Introduction
  0.2 What is Mathematical Physics?
  0.3 An Overview of the Course
  0.4 Famous Quotes

1 Introduction
  1.1 What Do I Need To Know From Calculus?
    1.1.1 Introduction
    1.1.2 Trigonometric Functions
    1.1.3 Other Elementary Functions
    1.1.4 Derivatives
    1.1.5 Integrals
  1.2 What I Need From My Intro Physics Class?
  1.3 Technology and Tables
  1.4 Back of the Envelope Computations
  1.5 Chapter 1 Problems

2 Free Fall and Harmonic Oscillators
  2.1 Free Fall and Terminal Velocity
    2.1.1 First Order Differential Equations
    2.1.2 Terminal Velocity
  2.2 The Simple Harmonic Oscillator
    2.2.1 Mass-Spring Systems
    2.2.2 The Simple Pendulum
  2.3 Second Order Linear Differential Equations
    2.3.1 Constant Coefficient Equations
  2.4 LRC Circuits
    2.4.1 Special Cases
  2.5 Damped Oscillations
  2.6 Forced Oscillations
    2.6.1 Method of Undetermined Coefficients
    2.6.2 Method of Variation of Parameters
  2.7 Numerical Solutions of ODEs
  2.8 Coupled Oscillators
  2.9 The Nonlinear Pendulum - Optional
  2.10 Cauchy-Euler Equations - Optional

3 Linear Algebra
  3.1 Vector Spaces
  3.2 Linear Transformations
  3.3 Matrices
  3.4 Eigenvalue Problems
    3.4.1 An Introduction to Coupled Systems
    3.4.2 Example of an Eigenvalue Problem
    3.4.3 Eigenvalue Problems - A Summary
    3.4.4 Rotations of Conics
  3.5 A Return to Coupled Systems
  3.6 Solving Constant Coefficient Systems in 2D
  3.7 Examples of the Matrix Method
  3.8 Inner Product Spaces - Optional

4 The Harmonics of Vibrating Strings
  4.1 Harmonics and Vibrations
  4.2 Boundary Value Problems
  4.3 Partial Differential Equations
  4.4 The 1D Heat Equation
  4.5 The 1D Wave Equation
  4.6 Introduction to Fourier Series
    4.6.1 Trigonometric Series
    4.6.2 Fourier Series Over Other Intervals
    4.6.3 Sine and Cosine Series
  4.7 Solution of the Heat Equation
  4.8 Finite Length Strings

5 Complex Representations of Functions
  5.1 Complex Representations of Waves
  5.2 Complex Numbers
  5.3 Complex Valued Functions
  5.4 Complex Differentiation
  5.5 Harmonic Functions and Laplace's Equation
  5.6 Complex Integration
    5.6.1 Complex Path Integrals
    5.6.2 Cauchy's Theorem
    5.6.3 Analytic Functions and Cauchy's Integral Formula
    5.6.4 Geometric Series
  5.7 Complex Series Representations
  5.8 Singularities and the Residue Theorem
  5.9 Computing Real Integrals

6 Transform Techniques in Physics
  6.1 Introduction
    6.1.1 The Linearized KdV Equation
    6.1.2 The Free Particle Wave Function
    6.1.3 Transform Schemes
  6.2 Complex Exponential Fourier Series
  6.3 Exponential Fourier Transform
  6.4 The Dirac Delta Function
  6.5 Properties of the Fourier Transform
    6.5.1 Fourier Transform Examples
  6.6 The Convolution Theorem - Optional
  6.7 Applications of the Convolution Theorem - Optional
  6.8 The Laplace Transform
    6.8.1 Solution of ODEs Using Laplace Transforms
    6.8.2 Step and Impulse Functions
    6.8.3 The Inverse Laplace Transform

7 Electromagnetic Waves
  7.1 Maxwell's Equations
  7.2 Vector Analysis
    7.2.1 Vector Products
    7.2.2 Div, Grad, Curl
    7.2.3 Vector Identities
  7.3 Electromagnetic Waves

8 Problems in Higher Dimensions
  8.1 Vibrations of Rectangular Membranes
  8.2 Vibrations of a Kettle Drum
  8.3 Sturm-Liouville Problems
  8.4 Properties of Sturm-Liouville Problems
    8.4.1 Identities and Adjoint Operators
    8.4.2 Orthogonality of Eigenfunctions
  8.5 The Eigenfunction Expansion Method
  8.6 Problems in Three Dimensions
  8.7 Spherical Symmetry
    8.7.1 Laplace's Equation
    8.7.2 Example
  8.8 Other Applications
    8.8.1 Temperature Distribution in Igloos
    8.8.2 Waveguides
    8.8.3 Optical Fibers
    8.8.4 The Hydrogen Atom

9 Special Functions
  9.1 Classical Orthogonal Polynomials
  9.2 Legendre Polynomials
  9.3 Spherical Harmonics
  9.4 Gamma Function
  9.5 Bessel Functions
  9.6 Hypergeometric Functions

A Sequences and Series
  A.1 Sequences of Real Numbers
  A.2 Convergence of Sequences
  A.3 Limit Theorems
  A.4 Infinite Series
  A.5 Geometric Series
  A.6 Convergence Tests
  A.7 The Order of Sequences and Functions
  A.8 The Binomial Expansion
  A.9 Series of Functions
  A.10 Infinite Series of Functions

Chapter 0

Prologue

0.1 Introduction

This is a set of notes originally designed to supplement a standard textbook in Mathematical Physics for undergraduate students who have completed a year long introductory course in physics. The intent of the course is to introduce students to many of the mathematical techniques useful in their undergraduate physics education long before they are exposed to more focused topics in physics.

The typical topics covered in a course on mathematical physics are vector analysis, vector spaces, linear algebra, ordinary and partial differential equations, Fourier series, Laplace and Fourier transforms, complex variables, Sturm-Liouville theory, special functions and possibly other more advanced topics, such as tensors, group theory, the calculus of variations, power series, or approximation techniques. We will cover many of these topics, but will do so in the guise of exploring specific physical problems. Most texts on mathematical physics are encyclopedic works which can never be covered in one semester and are often presented as a list of the above topics with some examples from physics inserted to highlight the connection of the particular topic to the real world.

Most of the topics have equivalent semester long courses which go into the details and proofs of the main conjectures in that topic. Students may decide to later enroll in such courses during their undergraduate, or graduate, study. The point of these excursions is to introduce the student to a variety of topics and not to delve into the rigor that one would find in some mathematics courses. However, often the relevance to physics must be found in later studies in physics when the particular methods are used for specialized applications.

So, why not teach the methods in the physics courses as they are needed? Part of the reason is that going into the details can take away from the global view of the course. Students often get lost in the mathematical details, as the proverbial tree can be lost in a forest of trees. Collecting these techniques in one place, such as a course in mathematical physics, can help to provide a uniform background for students entering later courses in specialized topics in physics. Repeated exposure to standard methods can also help ingrain these methods. Furthermore, in such a course as this, one can see the connections between different fields and one can thus tie together some of the physical ideas between seemingly different courses. Many of the mathematical techniques used in one course can be found in other courses.

0.2 What is Mathematical Physics?

What do you think when you hear the phrase "mathematical physics"? If one does a search on Google, one finds in Wikipedia (http://en.wikipedia.org/wiki/Mathematical_physics) the following:

"Mathematical physics is an interdisciplinary field of academic study in between mathematics and physics, aimed at studying and solving problems inspired by physics within a mathematically rigorous framework. Although mathematical physics and theoretical physics are related, these two notions are often distinguished. Mathematical physics emphasizes the mathematical rigor of the same type as found in mathematics while theoretical physics emphasizes the links to actual observations and experimental physics, which often requires the theoretical physicists to use heuristic, intuitive, and approximate arguments. Arguably, mathematical physics is closer to mathematics, and theoretical physics is closer to physics. Because of the required rigor, mathematical physicists often deal with questions that theoretical physicists have considered to be solved for decades. However, the mathematical physicists can sometimes (but neither commonly nor easily) show that the previous solution was incorrect. Other subjects researched by mathematical physicists include operator algebras, geometric algebra, noncommutative geometry, string theory, group theory, statistical mechanics, random fields, etc."

However, we will not adhere to the rigor suggested by this definition of mathematical physics, but will aim more towards the theoretical physics approach; thus this course should really be called "A Course in Mathematical Methods in Physics". With this in mind, the course will be designed as a study of physical topics leading to the use of standard mathematical techniques. However, we should keep in mind Freeman Dyson's words:

"For a physicist mathematics is not just a tool by means of which phenomena can be calculated, it is the main source of concepts and principles by means of which new theories can be created." from Mathematics in the Physical Sciences

It has not always been the case that we had to think about the differences between mathematics and physics. Until about a century ago people did not view physics and mathematics as separate disciplines. The Greeks did not separate the subjects, but developed an understanding of the natural sciences as part of their philosophical systems. Later, many of the big name physicists and mathematicians actually worked in both areas, only to be placed in these categories through historical hindsight. People like Newton and Maxwell made just as many contributions to mathematics as they had to physics while trying to investigate the workings of the physical universe. Mathematicians such as Gauss, Leibniz and Euler had their share of contributions to physics. Quantum mechanics cannot be understood without a good knowledge of mathematics. It is not surprising, then, that its developed version under the name of quantum field theory is one of the most abstract, mathematically-based areas of physical sciences, being backward-influential to mathematics.

In the 1800's the climate changed. The study of symmetry lead to group theory, problems of convergence of the trigonometric series used by Fourier and others lead to the need for rigor in analysis, the appearance of non-Euclidean geometries challenged the millennia old Euclidean geometry, and the foundations of logic were challenged shortly after the turn of the century. This lead to a whole population of mathematicians interested in abstracting mathematics and putting it on a firmer foundation without much attention to applications in the real world. This split is summarized by Freeman Dyson:

"I am acutely aware of the fact that the marriage between mathematics and physics, which was so enormously fruitful in past centuries, has recently ended in divorce." from Missed Opportunities, 1972 (Gibbs Lecture)

In the meantime, over the past century a number of physicists with a strong bent towards mathematics have emerged as mathematical physicists. So, Dyson's report of a divorce might be premature. Some of the most important fields at the forefront of physics are steeped in mathematics. Einstein's general theory of relativity, a theory of gravitation, involves a good dose of differential geometry. String theory is also highly mathematical. While we will not get into these areas in this course, I would hope that students reading this book at least get a feel for the need to maintain the needed balance between mathematics and physics. Likewise, many mathematicians are interested in applying and extending their methods to other fields, such as physics, chemistry, biology and economics. These applied mathematicians have helped to mediate the divorce.

0.3 An Overview of the Course

One of the problems with courses in mathematical physics is that students do not always see the tie with physics. In this class we hope to enable students to see the mathematical techniques needed to enhance their future studies in physics. We will not provide the mathematical topics devoid of physical motivation. We will instead introduce the methods studied in this course while studying one underlying theme from physics. We will tie the class mainly to the idea of oscillation in physics.

Even though this theme does not capture the full collection of applications seen in physics, it is one of the most pervasive and has proven to be at the center of the revolutions of twentieth century physics.

Generally, some topics in the course might seem difficult the first time through, especially not having had the upper level physics at the time the topics are introduced. This is true of all physics classes. However, like all topics in physics, you will eventually grasp certain topics as you see them repeated throughout your studies. It will become clear that the more adept one becomes in the mathematical background, the better your understanding of the physics. The importance of mathematical physics will become clearer as you progress through the curriculum. The successful student will need to develop patience as the story unfolds into graduate school.

You should read through this set of notes and then listen to the lectures. As you read the notes, be prepared to fill in the gaps in derivations and calculations. This is not a spectator sport, but a participatory adventure. Feel free to go back and reread your old calculus and physics texts. Discuss the difficult points with others and your instructor. Work on problems as soon as possible. These are not problems that you can do the night before they are due.

There are many topics that could/might be included in the class depending upon the time that we have set aside. We conclude this section with an overview of the course in terms of the theme of oscillations, even though at this writing there might be other topics introduced as the course is developed. The list is meant to be a reference, and additional topics may be added as we get further into the course. The tentative chapters/topics and their contents are:

1. Introduction

In this chapter we will review some of the key computational tools that you have seen in your first two courses in calculus and recall some of the basic formulae for elementary functions. Then we will provide a short overview of your basic physics background, which will be useful in this course. As the aim of this course is to introduce techniques useful in exploring the basic physics concepts in more detail through computation, we will also provide an overview of how one can use mathematical tables and computer algebra systems to help with the tedious tasks often encountered in solving physics problems. We will end with an example of how simple estimates in physics can lead to "back of the envelope" computations using simple dimensional analysis. While such computations do not require (at face value) the complex machinery seen in this course, they do use something that can be explained using the more abstract techniques of similarity analysis.

(a) What Do I Need to Know From Calculus?
(b) What I Need From My Intro Physics Class?
(c) Using Technology and Tables
(d) Back of the Envelope Computations
(e) What Else is There?

2. Free Fall and Harmonic Oscillators

A major theme throughout the course will be to look at problems involving oscillations of various types. Even before introducing differential equations for solving problems involving simple harmonic motion, we will first look at differential equations for simpler examples. We will begin with a discussion of free fall and terminal velocity. As you have been exposed to simple differential equations in your calculus class, we will review some of the basics needed to solve common applications in physics. We then begin with the simplest oscillations, simple harmonic motion. We will look at the mass on a spring, LRC circuits and oscillating pendula, and will need to solve constant coefficient differential equations along the way. More complicated problems involve coupled systems, and this will allow us to explore linear systems of differential equations. Such systems can be posed using matrices, and the solutions are then obtained by solving eigenvalue problems. Other techniques for studying such problems involve power series and Laplace transforms. These ideas will be explored later in the course. One hot topic in physics is that of nonlinear systems, so we will also see an application near the end of the chapter.

(a) Free Fall and Terminal Velocity
(b) The Simple Harmonic Oscillator
(c) LRC Circuits
(d) Damped and Forced Oscillations
(e) Coupled Oscillators

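As a small preview of the numerical side of this chapter, the simple harmonic oscillator x'' = -(k/m) x can be integrated in a few lines of Python. This is only an illustrative sketch, not a method from the text; the parameter values (k, m, dt, t_max) are arbitrary choices for demonstration.

```python
import math

# Semi-implicit (symplectic) Euler integration of the simple harmonic
# oscillator x'' = -(k/m) x, compared against the exact solution
# x(t) = x0 cos(omega t).  All parameter values are illustrative.
def integrate_sho(k=1.0, m=1.0, x0=1.0, v0=0.0, dt=1e-4, t_max=10.0):
    omega = math.sqrt(k / m)
    x, v, t = x0, v0, 0.0
    while t < t_max:
        v -= (k / m) * x * dt  # update velocity first (semi-implicit Euler)
        x += v * dt            # then position, which keeps the orbit stable
        t += dt
    return x, x0 * math.cos(omega * t)  # (numerical, exact) at final time

num, exact = integrate_sho()
print(abs(num - exact))  # small discretization error
```

The velocity-first update is the design choice here: a plain (explicit) Euler step would make the amplitude grow steadily, while the semi-implicit variant keeps the energy bounded, which is why it is a common first integrator for oscillatory problems.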
3. Linear Algebra

One of the most important mathematical topics in physics is linear algebra. The mathematical basis of much of physics relies on an understanding of both finite and infinite dimensional vector spaces. Linear algebra is important in the study of ordinary and partial differential equations, Fourier analysis, quantum mechanics and general relativity. Nowadays, the linear algebra course in most mathematics departments has evolved into a hybrid course covering matrix manipulations and some basics from vector spaces. However, it is seldom the case that applications, especially from physics, are covered. In this chapter we will introduce vector spaces, linear transformations, and view matrices as representations of linear transformations. The main theorem of linear algebra is the spectral theorem, which means studying eigenvalue problems. In this chapter we will first see this in the solution of coupled systems of ordinary differential equations. We will return to this idea throughout the text.

(a) Vector Spaces
(b) Linear Transformations
(c) Matrices
(d) Eigenvalue Problems
(e) Coupled Systems

4. The Harmonics of Vibrating Strings

The next simplest type of oscillation is provided by finite length vibrating strings. We will derive the one dimensional wave equation and look at techniques for solving the wave equation. The standard technique is to use separation of variables, turning the solution of a partial differential equation into the solution of several ordinary differential equations. The resulting solution will be an infinite series of sinusoidal functions. This leads to the study of Fourier series, which is the basis of Fourier analysis and many other techniques in which one explores complicated signals to determine their spectral content.

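To make the eigenvalue idea concrete, recall that for a 2x2 matrix the eigenvalues follow directly from the characteristic polynomial det(A - lambda I) = lambda^2 - tr(A) lambda + det(A) = 0. The short sketch below applies the quadratic formula to an arbitrary symmetric matrix chosen purely for illustration; it is not an example taken from the text.

```python
import math

# Eigenvalues of a 2x2 matrix [[a, b], [c, d]] from its characteristic
# polynomial lambda^2 - tr(A) lambda + det(A) = 0.
# Assumes real eigenvalues (true for symmetric matrices).
def eigenvalues_2x2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # discriminant of the quadratic
    return (tr + disc) / 2, (tr - disc) / 2

lam1, lam2 = eigenvalues_2x2(2, 1, 1, 2)
print(lam1, lam2)  # 3.0 1.0
```

The same two numbers reappear as the normal-mode frequencies when such a matrix arises from a coupled spring system, which is exactly the connection this chapter develops.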
We will also introduce the heat, or diffusion, equation as another generic one dimensional example of the methods of this chapter. This will be the beginning of our study of initial-boundary value problems, which pervade upper level physics courses. We will return to the infinite dimensional string when we discuss transform techniques.

(a) The Heat Equation in 1D
(b) Boundary Value Problems
(c) Harmonics and Vibrations
(d) The Wave Equation in 1D
(e) Fourier Series and Forced Oscillators
(f) Finite Length Strings

5. Complex Representations of The Real World

As a simple example, we will first introduce a problem in fluid flow in two dimensions, which involves solving Laplace's equation. We will also explore dispersion relations, relations between frequency and wave number for wave propagation, and the computation of complicated integrals such as those encountered in computing induced current using Faraday's Law. However, useful results can only be obtained after first introducing complex variable techniques. So, we will spend some time exploring complex variable techniques and introducing the calculus of complex functions. We will apply these techniques to solving some special problems. This will lead to the study of Fourier Transforms, useful later for studying electromagnetic waves, especially in electromagnetic theory and quantum mechanics. Finally, we will introduce the Gamma function and compute the volume of a hypersphere.

(a) Complex Representations of Waves
(b) Complex Numbers
(c) Complex Functions and Their Derivatives
(d) Harmonic Functions and Laplace's Equation
(e) Complex Series Representations
(f) Fluid Flow in 2D Using Complex Functions
(g) Singularities and Dispersion Relations
(h) Computing Real Integrals Using the Residue Theorem
(i) The Volume of a Hypersphere

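The claim that an infinite series of sinusoids can represent even a discontinuous signal can be previewed numerically. The sketch below sums the standard Fourier sine series of a square wave, f(x) = (4/pi) * sum over odd n of sin(n x)/n, which converges to 1 for 0 < x < pi; the evaluation point and number of terms are arbitrary illustrative choices.

```python
import math

# Partial sums of the Fourier sine series of a square wave:
#   f(x) = (4/pi) * sum_{k>=0} sin((2k+1) x) / (2k+1)
# which converges to 1 on the open interval (0, pi).
def square_wave_partial_sum(x, terms):
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * x) / (2 * k + 1)
                               for k in range(terms))

approx = square_wave_partial_sum(math.pi / 2, 10000)
print(approx)  # close to 1
```

Increasing the number of terms drives the partial sum toward 1 at interior points, while near the jump at x = 0 or x = pi the overshoot of the Gibbs phenomenon persists no matter how many terms are kept, a point taken up when Fourier series are studied in detail.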
6. Transforms of the Wave and Heat Equations

For problems defined on an infinite interval, solutions are no longer given in terms of infinite series. They can be represented in terms of integrals, which are associated with integral transforms. We will explore Fourier and Laplace transform methods for solving both ordinary and partial differential equations. By transforming our equations, we are led to simpler equations in transform space. We will apply these methods to ordinary differential equations modeling forced oscillations and to the heat and wave equations.

(a) Transform Theory
(b) Exponential Fourier Transform
(c) The Dirac Delta Function
(d) The Laplace Transform
(e) Applications to Oscillations

7. Electromagnetic Waves

One of the major theories is that of electromagnetism. We will recall Maxwell's equations and use vector identities and vector theorems to derive the wave equation for electromagnetic fields. This will require us to recall some vector calculus from Calculus III. In particular, we will review vector products, gradients, divergence and curl. This will lead us to some needed vector identities useful for deriving the existence of electromagnetic waves from Maxwell's equations. In the next chapter we will solve the resulting wave equation for some physically interesting systems.

(a) Maxwell's Equations
(b) Vector Analysis
(c) Electromagnetic Waves

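A transform pair can also be checked numerically before any theory is developed. The sketch below approximates the defining integral of the Laplace transform of e^(-a t), namely the integral from 0 to infinity of e^(-s t) e^(-a t) dt = 1/(s + a), by the trapezoidal rule on a truncated domain; the values of a, s, and the truncation are illustrative choices, not from the text.

```python
import math

# Numerical check of the Laplace transform pair
#   L{e^{-a t}}(s) = 1 / (s + a)
# using trapezoidal quadrature on the truncated domain [0, t_max].
def laplace_of_exp(a, s, t_max=50.0, n=100000):
    dt = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0  # half weights at the endpoints
        total += w * math.exp(-(s + a) * t)
    return total * dt

val = laplace_of_exp(a=1.0, s=2.0)
print(val, 1 / (1.0 + 2.0))  # both near 1/3
```

The truncation at t_max is harmless here because the integrand decays exponentially; this is the numerical counterpart of the convergence condition on s that the chapter makes precise.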
8. Problems in Higher Dimensions

Having studied one dimensional oscillations, we will be prepared to move on to higher dimensional applications. These will involve solving problems in different geometries, and thus we pause to look at generalized coordinate systems and then extend our use of separation of variables to solve these problems. We will apply these methods to the solution of the wave and heat equations in higher dimensions. In order to fully appreciate the general techniques, we will develop the Sturm-Liouville theory with some excursion into the theory of infinite dimensional vector spaces. Another major equation of interest that you will encounter in Modern physics is the Schrödinger equation. We will introduce this equation and explore solution techniques obtaining the relevant special functions involved in describing the wavefunction for a hydrogenic electron.

(a) Vibrations of a Rectangular Membrane
(b) Vibrations of a Kettle Drum
(c) Sturm-Liouville Problems
(d) Curvilinear Coordinates
(e) Waveguides
(f) The Hydrogen Atom
(g) Geopotential Theory
(h) Temperature Distribution in Igloos
(i) Optical Fibers

9. Special Functions

In our studies of systems in higher dimensions we encounter a variety of new solutions of boundary value problems. These collectively are referred to as Special Functions and have been known for a long time. They appear later in the undergraduate curriculum and we will cover a couple of the important examples.

(a) Classical Orthogonal Polynomials
(b) Legendre Polynomials
(c) Spherical Harmonics
(d) Gamma Function
(e) Bessel Functions

10. Additional Topics

There are many other topics that can be explored beyond the standard topics above. If there is time, we will look at some topics that the student can take up on their own, or in honors research. One possible topic is the introduction of tensors. Tensors appear in both classic and modern physics. In particular, tensors are important in the study of dynamics, electrical properties of materials, and general relativity.

(a) The Inertia Tensor
(b) Metrics in Relativity and Beyond

Not all problems can be solved exactly. We try to introduce some approximation techniques in parts of the course.

0.4 Famous Quotes

There are many quotes that can be found about physics, or physics and mathematics. Here are a few for you to read and think about.

"Physics is much too hard for physicists." David Hilbert (1862-1943), from C. Reid, Hilbert.

"Physics isn't a religion. If it were, we'd have a much easier time raising money." Leon Lederman.

"Mathematics began to seem too much like puzzle solving. Physics is puzzle solving, too, but of puzzles created by nature, not by the mind of man." Maria Goeppert Mayer (1906-1972), from J. Dash, Maria Goeppert-Mayer, A Life of One's Own.

"Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say." Bertrand Russell (1872-1970), British philosopher, mathematician and social reformer, from The Scientific Outlook, 1931.

"All science is either physics or stamp collecting." Ernest Rutherford (1871-1937), 1st Baron Rutherford of Nelson, British physicist, prof. at Manchester and Cambridge.

"What I am going to tell you about is what we teach our physics students in the third or fourth year of graduate school... It is my task to convince you not to turn away because you don't understand it. You see my physics students don't understand it... That is because I don't understand it. Nobody does." Richard Feynman (1918-1988), US physicist, Nobel Prize 1965, from QED, The Strange Theory of Light and Matter, Penguin Books, London, 1990, p 9.

"One needn't be a crank to miss the scientific boat. The very paragon of genius, Albert Einstein, couldn't be persuaded to give quantum physics his unreserved endorsement. Here is Einstein's most frequently paraphrased statement of dissatisfaction with the theory: Quantum mechanics is very impressive. But an inner voice tells me that it is not yet the real thing. The theory yields a lot, but it hardly brings us any closer to the secret of the Old One. In any case I am convinced that He doesn't play dice." Albert Einstein (1879-1955), German-Swiss-American mathematical physicist, famous for his theories of relativity, from Letter to Max Born, December 4, 1926.

"There are many examples of old, incorrect theories that stubbornly persisted, sustained only by the prestige of foolish but well-connected scientists. Many of these theories have been killed off only when some decisive experiment exposed their incorrectness. Thus the yeoman work in any science, and especially physics, is done by the experimentalist, who must keep the theoreticians honest." Michio Kaku, from Hyperspace, Oxford University Press, 1995, p 263.

"To the pure geometer the radius of curvature is an incidental characteristic, like the grin of the Cheshire cat. To the physicist it is an indispensable characteristic. It would be going too far to say that to the physicist the cat is merely incidental to the grin. Physics is concerned with interrelatedness such as the interrelatedness of cats and grins. In this case the 'cat without a grin' and the 'grin without a cat' are equally set aside as purely mathematical phantasies." Sir Arthur Stanley Eddington (1882-1944), British astronomer and physicist, director of Cambridge observatory, from The Expanding Universe.

"When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong. Perhaps the adjective 'elderly' requires definition. In physics, mathematics, and astronautics it means over thirty; in the other disciplines, senile decay is sometimes postponed to..." Arthur C. Clarke (1917-), English author of science fiction, from 'Profiles of the Future' 1962 (Clarke's First Law).

"He is not a true man of science who does not bring some sympathy to his studies. The fact which interests us most is the life of the naturalist. The study of geometry is a petty and idle exercise of the mind, if it is applied to no larger system than the starry one. Mathematics should be mixed not only with physics but with ethics; that is mixed mathematics." Henry David Thoreau (1817-1862), American philosopher and naturalist, writer of Walden.

"The doctrine, as I understand it, consists in maintaining that the language of daily life, with words used in their ordinary meanings, suffices for philosophy, which has no need of technical terms or of changes in the significance of common terms... I object to it: 1. Because it is insincere. 2. Because it is capable of excusing ignorance of mathematics, physics and neurology in those who have had only a classical education. 3. Because it is advanced by some in a tone of unctuous rectitude, as if opposition to it were a sin against democracy. 4. Because it makes almost inevitable the perpetuation amongst philosophers of the muddle-headedness they have taken over from common sense." Bertrand Russell, from Portraits from Memory.

"Einstein, twenty-six years old, only three years away from crude privation, still a patent examiner, published in the Annalen der Physik in 1905 five papers on entirely different subjects. Three of them were among the greatest in the history of physics... Another dealt with the phenomenon of Brownian motion, the apparently erratic movement of tiny particles suspended in a liquid: Einstein showed that these movements satisfied a clear statistical law. This was like a conjuring trick... decent scientists could still doubt the concrete...
easy when explained: before it.Because it makes philosophy trivial. gave the quantum explanation of the photoelectric eﬀectit was this work for which. and expect to learn something by behavior as well as by application. consists in maintaining that the language of daily life. There are. I object to it: 1. It is childish to rest in the discovery of mere coincidences. scientists of over ﬁfty are good for nothing but board meetings. The purest science is still biographical. sixteen years later he was awarded the Nobel prize. 5. I ﬁnd myself totally unable to accept this view. 4.” Bertrand Russell (1872-1970) British philosopher. of course. FAMOUS QUOTES the forties.

so long as physics lasts. U. . Snow. pp 85-86.K. unaided. the bizarre conclusions. They contain very little mathematics. Variety of Men. To a surprisingly large extent. It is pretty safe to say that. Harmondsworth. time and matter into one fundamental unity. There is a good deal of verbal commentary. emerge as though with the greatest of ease: the reasoning is unbreakable. PROLOGUE existence of atoms and molecules: this paper was as near direct proof of their concreteness as a theoretician could give. All of them are written in a style unlike any other theoretical physicist’s.” Charles Percy Snow (1905-1980) Baron Snow of Leicester English author and physicist from C. no one will again hack out three major breakthroughs in one year.P. without listening to the opinions of others. It looks as though he had reached the conclusions by pure thought. that is precisely what he had done. The third paper was the special theory of relativity. The conclusions. This last paper contains no references and quotes no authority. 1969.14 CHAPTER 0. Penguin Books. which quietly amalgamated space.

Chapter 1

Introduction

Before we begin our study of mathematical physics, perhaps we should review some things from your past classes. You definitely need to know something before taking this class. It is assumed that you have taken calculus and are comfortable with differentiation and integration. You should also have taken some introductory physics class, preferably the calculus-based course. Of course, you are not expected to know every detail from these courses. However, some topics and methods will come up again, and it is useful to have a handy reference for what you should know, especially when it comes to exams. Most importantly, you should still have your physics and calculus texts, to which you can refer throughout the course.

Looking back on that old material, you will find that it appears easier than when you first encountered it. That is the nature of learning mathematics and physics. Your understanding continually evolves as you explore topics in more depth. It does not always sink in the first time you see it. In this chapter we will give a quick review of these topics. We will also mention a few new things that might be interesting. This review is meant to make sure that everyone starts at the same level.


1.1 What Do I Need To Know From Calculus?

1.1.1 Introduction

There are two main topics in calculus: derivatives and integrals. You learned that derivatives are useful in providing rates of change in either time or space. Integrals provide areas under curves, but are also useful in providing other types of sums over continuous bodies, such as lengths, areas, volumes, moments of inertia, or flux integrals. In physics, one can look at graphs of position versus time, and the slope (derivative) of such a function gives the velocity. Then, plotting velocity versus time, you can either look at the derivative to obtain acceleration, or you could look at the area under the curve and get the displacement:

x = ∫_{t_0}^{t} v dt.   (1.1)

Of course, you need to know how to differentiate and integrate given functions. Even before getting into differentiation and integration, you need to have a bag of functions useful in physics. Common functions are the polynomial and rational functions. You should be fairly familiar with these. Polynomial functions take the general form

f(x) = a_n x^n + a_{n-1} x^{n-1} + · · · + a_1 x + a_0,   (1.2)

where a_n ≠ 0. This is the form of a polynomial of degree n. Rational functions consist of ratios of polynomials. Their graphs can exhibit asymptotes.

Next are the exponential and logarithmic functions. The most common are the natural exponential and the natural logarithm. The natural exponential is given by f(x) = e^x, where e ≈ 2.718281828 . . . . The natural logarithm is the inverse of the exponential, denoted by ln x. (One needs to be careful, because some mathematics and physics books use log to mean the natural logarithm, whereas many of us were first trained to use it to mean the common logarithm, which is the 'log base 10'.)

The properties of the exponential function follow from our basic properties for exponents. Namely, we have:

e^0 = 1,   (1.3)

e^{-a} = 1/e^a,   (1.4)
e^a e^b = e^{a+b},   (1.5)
(e^a)^b = e^{ab}.   (1.6)

The relation between the natural logarithm and natural exponential is given by

y = e^x ⇔ x = ln y.   (1.7)

Some common logarithmic properties are

ln 1 = 0,   (1.8)
ln (1/a) = − ln a,   (1.9)
ln(ab) = ln a + ln b,   (1.10)
ln (a/b) = ln a − ln b,   (1.11)
ln (1/b) = − ln b.   (1.12)

We will see further applications of these relations as we progress through the course.
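As a quick aside (not part of the original text), the exponent and logarithm rules above can be spot-checked numerically with a few lines of Python:

```python
import math

# Spot-check the exponent rules (1.4)-(1.6) and logarithm rules (1.8)-(1.12)
# at arbitrary sample values.
a, b = 2.3, 0.7
assert math.isclose(math.exp(-a), 1 / math.exp(a))
assert math.isclose(math.exp(a) * math.exp(b), math.exp(a + b))
assert math.isclose(math.exp(a) ** b, math.exp(a * b))
assert math.isclose(math.log(1.0), 0.0)
assert math.isclose(math.log(1 / a), -math.log(a))
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log(a / b), math.log(a) - math.log(b))
```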

1.1.2 Trigonometric Functions

Another set of useful functions are the trigonometric functions. These functions have probably plagued you since high school. They have their origins as far back as the building of the pyramids. Typical applications in your introductory math classes probably included finding the heights of trees, flag poles, or buildings.

It was recognized a long time ago that similar right triangles have fixed ratios of any pair of corresponding sides. These ratios only change when the non-right angles change. Thus, the ratio of two sides of a right triangle depends only upon the angle. Since there are six possible ratios (think about it!), there are six possible functions. These are designated as sine, cosine, tangent and their reciprocals (cosecant, secant and cotangent). In your introductory

Table 1.1: Table of Trigonometric Values

 θ      cos θ    sin θ    tan θ
 0      1        0        0
 π/6    √3/2     1/2      √3/3
 π/3    1/2      √3/2     √3
 π/4    √2/2     √2/2     1
 π/2    0        1        undefined

physics class, you really only needed the first three. You also learned that they are represented as the ratios of the opposite to hypotenuse, adjacent to hypotenuse, etc. Hopefully, you have this down by now.

You should also know the exact values for the special angles θ = 0, π/6, π/3, π/4, π/2, and their corresponding angles in the second, third and fourth quadrants. This becomes internalized after much use, but we provide these values in Table 1.1 just in case you need a reminder.

The problems using trigonometric functions in later courses stem from using identities. We will have many an occasion to do so in this class as well. What is an identity? It is a relation that holds true all of the time. For example, the most common identity for trigonometric functions is

sin^2 θ + cos^2 θ = 1.   (1.13)

This holds true for every angle θ! An even simpler identity is

tan θ = sin θ / cos θ.   (1.14)

Other simple identities can be derived from this one. Dividing equation (1.13) by cos^2 θ or sin^2 θ yields

tan^2 θ + 1 = sec^2 θ,   (1.15)
1 + cot^2 θ = csc^2 θ.   (1.16)

Other useful identities stem from the use of the sine and cosine of the sum and difference of two angles. Namely, we have that

sin(A ± B) = sin A cos B ± sin B cos A,   (1.17)
cos(A ± B) = cos A cos B ∓ sin A sin B.   (1.18)

Note that the upper (lower) signs are taken together.

The double angle formulae are found by setting A = B:

sin(2A) = 2 sin A cos A,   (1.19)
cos(2A) = cos^2 A − sin^2 A.   (1.20)

Using equation (1.13), we can rewrite (1.20) as

cos(2A) = 2 cos^2 A − 1   (1.21)
        = 1 − 2 sin^2 A.   (1.22)

These, in turn, lead to the half angle formulae. Solving (1.21) and (1.22) for cos^2 A and sin^2 A, respectively, and writing A = α, we find that

sin^2 α = (1 − cos 2α)/2,   (1.23)
cos^2 α = (1 + cos 2α)/2.   (1.24)

Finally, another useful set of identities are the product identities. For example, if we add the identities for sin(A + B) and sin(A − B), the second terms cancel and we have

sin(A + B) + sin(A − B) = 2 sin A cos B.

Thus, we have that

sin A cos B = (1/2)(sin(A + B) + sin(A − B)).   (1.25)

Similarly, we have

cos A cos B = (1/2)(cos(A + B) + cos(A − B))   (1.26)

and

sin A sin B = (1/2)(cos(A − B) − cos(A + B)).   (1.27)

These are the most common trigonometric identities. They appear often and should just roll off of your tongue.

We will also need to understand the behaviors of trigonometric functions. In particular, we know that the sine and cosine functions are periodic.

They are not the only periodic functions, as we shall see. [Just visualize the teeth on a carpenter's saw.] However, they are the most common periodic functions. A periodic function f(x) satisfies the relation

f(x + p) = f(x),  for all x,

for some constant p. If p is the smallest such number, then p is called the period. Both the sine and cosine functions have period 2π. This means that the graph repeats its form every 2π units. Similarly, sin bx and cos bx have the common period p = 2π/b. We will make use of this fact in later chapters.
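The identities above are easy to spot-check numerically. As a quick aside (not part of the original text), here is a short Python sketch verifying the double angle and product identities at a few arbitrary angles:

```python
import math

# Spot-check the double-angle identities (1.19)-(1.22) and the
# product identity (1.25) at assorted sample angles.
for A in (0.3, 1.1, 2.7):
    for B in (0.5, 1.9):
        assert math.isclose(math.sin(2 * A), 2 * math.sin(A) * math.cos(A))
        assert math.isclose(math.cos(2 * A),
                            math.cos(A) ** 2 - math.sin(A) ** 2)
        assert math.isclose(math.cos(2 * A), 2 * math.cos(A) ** 2 - 1)
        assert math.isclose(math.sin(A) * math.cos(B),
                            0.5 * (math.sin(A + B) + math.sin(A - B)))
```

If any identity were misremembered, one of the assertions would fail; this makes a handy sanity check when recalling formulae from memory.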

1.1.3 Other Elementary Functions

So, are there any other functions that are useful in physics? Actually, there are many more. However, you have probably not seen many of them to date. We will see by the end of the semester that there are many important functions that arise as solutions of some fairly generic, but important, physics problems. In your calculus classes you have also seen that some relations can be represented in parametric form. However, there is at least one other set of elementary functions that you should know about. These are the hyperbolic functions. Such functions are useful in representing hanging cables, unbounded orbits, and special traveling waves called solitons. They also play a role in special and general relativity.

Hyperbolic functions are actually related to the trigonometric functions, as we shall see after a little bit of complex function theory. For now, we just want to recall a few definitions and an identity. Just as all of the trigonometric functions can be built from the sine and the cosine, the hyperbolic functions can be defined in terms of the hyperbolic sine and hyperbolic cosine:

sinh x = (e^x − e^{−x})/2,   (1.28)
cosh x = (e^x + e^{−x})/2.   (1.29)

There are four other hyperbolic functions. These are defined in terms of the above functions similarly to the relations between the trigonometric

Table 1.2: Table of Derivatives

 Function    Derivative
 a           0
 x^n         n x^{n−1}
 e^{ax}      a e^{ax}
 ln ax       1/x
 sin ax      a cos ax
 cos ax      −a sin ax
 tan ax      a sec^2 ax
 csc ax      −a csc ax cot ax
 sec ax      a sec ax tan ax
 cot ax      −a csc^2 ax
 sinh ax     a cosh ax
 cosh ax     a sinh ax
 tanh ax     a sech^2 ax
 csch ax     −a csch ax coth ax
 sech ax     −a sech ax tanh ax
 coth ax     −a csch^2 ax

functions. For example, we have

tanh x = sinh x / cosh x = (e^x − e^{−x})/(e^x + e^{−x}).

There are also a whole set of identities, similar to those for the trigonometric functions. Some of these are given by the following:

cosh^2 x − sinh^2 x = 1,   (1.30)
cosh(A ± B) = cosh A cosh B ± sinh A sinh B,   (1.31)
sinh(A ± B) = sinh A cosh B ± sinh B cosh A.   (1.32)

Others can be derived from these.
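As with the trigonometric identities, these can be spot-checked numerically. A short Python aside (not in the original text), verifying (1.30) and the tanh definition at sample points:

```python
import math

# Verify cosh^2 x - sinh^2 x = 1 (identity 1.30) and the definition
# tanh x = sinh x / cosh x = (e^x - e^-x)/(e^x + e^-x) at sample points.
for x in (-2.0, 0.0, 0.75, 3.0):
    s, c = math.sinh(x), math.cosh(x)
    assert math.isclose(c * c - s * s, 1.0)
    assert math.isclose(math.tanh(x),
                        (math.exp(x) - math.exp(-x))
                        / (math.exp(x) + math.exp(-x)))
```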

1.1.4 Derivatives

Now that we know our elementary functions, we can seek their derivatives. We will not spend time exploring the appropriate limits in any rigorous


way. We are only interested in the results. We provide these in Table 1.2. We expect that you know the meaning of the derivative and all of the usual rules, such as the product and quotient rules. Also, you should be familiar with the Chain Rule. Recall that this rule tells us that if we have a composition of functions, such as the elementary functions above, then we can compute the derivative of the composite function. Namely, if h(x) = f(g(x)), then

dh/dx = d/dx [f(g(x))] = (df/dg)|_{g(x)} · dg/dx = f'(g(x)) g'(x).   (1.33)

For example, let H(x) = 5 cos(π tanh 2x^2). This is a composition of three functions, H(x) = f(g(h(x))), where f(x) = 5 cos x, g(x) = π tanh x, and h(x) = 2x^2. Then the derivative becomes

H'(x) = 5 (− sin(π tanh 2x^2)) · d/dx [π tanh 2x^2]
      = −5π sin(π tanh 2x^2) sech^2(2x^2) · d/dx [2x^2]
      = −20πx sin(π tanh 2x^2) sech^2(2x^2).   (1.34)
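For readers who like to verify such computations, here is a brief Python aside (not part of the original text) that compares the chain rule result (1.34) against a central finite difference; the names H and H_prime are ours:

```python
import math

def H(x):
    # H(x) = 5 cos(pi * tanh(2 x^2)), the composite function from the text
    return 5 * math.cos(math.pi * math.tanh(2 * x**2))

def H_prime(x):
    # the chain-rule result (1.34)
    sech = 1 / math.cosh(2 * x**2)
    return (-20 * math.pi * x
            * math.sin(math.pi * math.tanh(2 * x**2)) * sech**2)

# Compare against a central finite difference at a few points.
h = 1e-6
for x in (0.1, 0.5, 1.2):
    numeric = (H(x + h) - H(x - h)) / (2 * h)
    assert math.isclose(numeric, H_prime(x), rel_tol=1e-4)
```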

1.1.5 Integrals

Integration is typically a bit harder. Imagine being given the result in (1.34) and having to figure out the integral. As you may recall from the Fundamental Theorem of Calculus, the integral is the inverse operation to differentiation:

∫ (df/dx) dx = f(x) + C.   (1.35)

However, it is not always easy to determine a given integral. In fact, some integrals are not even doable! However, you learned in calculus that there are some methods that might yield an answer. While you might be happier using a computer with a computer algebra system, such as Maple, you should know a few basic integrals and know how to use tables for some of the more complicated ones. In fact, it can be exhilarating when you can do a given integral without reference to a computer or a Table of Integrals. However, you should be prepared to do some integrals using what you have been taught in calculus. We will review a few of these methods and some of the standard integrals in this section.

First of all, there are some integrals you should be expected to know without any work. The basic integrals that students should know off the top of their heads are given in Table 1.3. These integrals appear often and are just an application of the Fundamental Theorem of Calculus to the previous Table 1.2.

Table 1.3: Table of Integrals

 Function            Indefinite Integral
 a                   ax
 x^n                 x^{n+1}/(n + 1)
 e^{ax}              (1/a) e^{ax}
 1/x                 ln x
 sin ax              −(1/a) cos ax
 cos ax              (1/a) sin ax
 sec^2 ax            (1/a) tan ax
 sinh ax             (1/a) cosh ax
 cosh ax             (1/a) sinh ax
 sech^2 ax           (1/a) tanh ax
 1/(a + bx)          (1/b) ln(a + bx)
 1/(a^2 + x^2)       (1/a) tan^{−1}(x/a)
 1/√(a^2 − x^2)      sin^{−1}(x/a)
 1/(x√(x^2 − a^2))   (1/a) sec^{−1}(x/a)

These are not the only integrals you should be able to do. However, we can expand the list by recalling a few of the techniques that you learned in calculus. There are just a few: the Method of Substitution, Integration by Parts, Integration Using Partial Fraction Decomposition, and Trigonometric Integrals.

Example 1 When confronted with an integral, you should first ask if a simple substitution would reduce the integral to one you know how to do. As an example, consider the following integral:

∫ x/√(x^2 + 1) dx.

The ugly part of this integral is the x^2 + 1 under the square root. So, we let u = x^2 + 1. Note that when u = f(x), we have du = f'(x) dx. For our example, du = 2x dx. Looking at the integral, part of the integrand can be written as x dx = (1/2) du. Then, our integral becomes

∫ x/√(x^2 + 1) dx = (1/2) ∫ du/√u.

The substitution has converted our integral into an integral over u. Also, this integral is doable! It is one of the integrals we should know. Namely, we can write it as

(1/2) ∫ du/√u = (1/2) ∫ u^{−1/2} du.

This is now easily finished after integrating and using our substitution variable to give

∫ x/√(x^2 + 1) dx = (1/2) · u^{1/2}/(1/2) + C = √(x^2 + 1) + C.

Note that we have added the required integration constant and that the derivative of the result easily gives the original integrand (after employing the Chain Rule).

Often we are faced with definite integrals, in which we integrate between two limits. There are several ways to use these limits. However, students often forget that a change of variables generally means that the limits have to change.

Example 2 Consider the above example with limits added:

∫_0^2 x/√(x^2 + 1) dx.

We proceed as before. We let u = x^2 + 1. Then du = 2x dx. As x goes from 0 to 2, u takes values from 1 to 5. So, our substitution gives

∫_0^2 x/√(x^2 + 1) dx = (1/2) ∫_1^5 du/√u = √u|_1^5 = √5 − 1.

When the Method of Substitution fails, there are other methods you can try. One of the most used is the Method of Integration by Parts. Recall the Integration by Parts Formula:

∫ u dv = uv − ∫ v du.   (1.36)
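As a numerical aside (not in the original text), the definite integral in Example 2 can be confirmed with composite Simpson's rule, which reproduces the value √5 − 1 to high accuracy:

```python
import math

def f(x):
    return x / math.sqrt(x**2 + 1)

# Composite Simpson's rule on [0, 2] with n even subintervals.
n = 1000
a, b = 0.0, 2.0
h = (b - a) / n
total = f(a) + f(b)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(a + i * h)
integral = total * h / 3

# The substitution result predicts sqrt(5) - 1.
assert math.isclose(integral, math.sqrt(5) - 1, rel_tol=1e-8)
```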

The idea is that you are given the integral on the left and you can relate it to an integral on the right. Hopefully, the new integral is one you can do, or at least it is an easier integral than the one you are trying to evaluate. However, you are not usually given the functions u and v. You have to determine them. The integral form that you really have is a function of another variable, say x. Another form of the formula can be given as

∫ f(x) g'(x) dx = f(x) g(x) − ∫ g(x) f'(x) dx.   (1.37)

This form is a bit more complicated in appearance, though it is clearer what is happening. The derivative has been moved from one function to the other. Recall that this formula was derived by integrating the product rule for differentiation. The two formulae are related by using the relations

u = f(x) → du = f'(x) dx,
v = g(x) → dv = g'(x) dx.   (1.38)

This also gives a method for applying the Integration by Parts Formula.

Example 3 Consider the integral ∫ x sin 2x dx. We choose u = x and dv = sin 2x dx. This gives the correct left side of the formula. We next determine v and du:

du = dx,
v = ∫ dv = ∫ sin 2x dx = −(1/2) cos 2x.

We note that one usually does not need the integration constant. Inserting these expressions into the Integration by Parts Formula, we have

∫ x sin 2x dx = −(1/2) x cos 2x + (1/2) ∫ cos 2x dx.

We see that the new integral is easier to do than the original integral. Had we picked u = sin 2x and dv = x dx, then the formula still works, but the resulting integral is not easier. For completeness, we can finish the integration. The result is

∫ x sin 2x dx = −(1/2) x cos 2x + (1/4) sin 2x + C.


As always, you can check your answer by differentiating the result, a step students often forget to do. Namely,

d/dx [−(1/2) x cos 2x + (1/4) sin 2x + C]
  = −(1/2) cos 2x + x sin 2x + (1/4)(2 cos 2x)
  = x sin 2x.   (1.39)

So, we do get back the integrand in the original integral.

We can also perform integration by parts on definite integrals. The general formula is written as

∫_a^b f(x) g'(x) dx = f(x) g(x)|_a^b − ∫_a^b g(x) f'(x) dx.   (1.40)

Example 4 Consider the integral

∫_0^π x^2 cos x dx.

This will require two integrations by parts. First, we let u = x^2 and dv = cos x dx. Then,

du = 2x dx,  v = sin x.

Inserting into the Integration by Parts Formula, we have

∫_0^π x^2 cos x dx = x^2 sin x|_0^π − 2 ∫_0^π x sin x dx
                   = −2 ∫_0^π x sin x dx.   (1.41)

We note that the resulting integral is easier than the given integral, but we still cannot do the integral off the top of our head (unless we look at Example 3!). So, we need to integrate by parts again. (Note: In your calculus class you may recall that there is a tabular method for carrying out multiple applications of the formula. However, we will leave that to the reader and proceed with the brute force computation.)

We apply integration by parts by letting U = x and dV = sin x dx. This gives dU = dx and V = −cos x. Therefore, we have

∫_0^π x sin x dx = −x cos x|_0^π + ∫_0^π cos x dx
                 = π + sin x|_0^π
                 = π.   (1.42)

The final result is

∫_0^π x^2 cos x dx = −2π.
Other types of integrals that you will see often are trigonometric integrals. In particular, integrals involving powers of sines and cosines. For odd powers, a simple substitution will turn the integrals into simple powers. Example 5 For example, consider cos3 x dx. This can be rewritten as cos3 x dx = cos2 x cos x dx.

Let u = sin x. Then du = cos x dx. Since cos2 x = 1 − sin2 x, we have cos3 x dx = = cos2 x cos x dx (1 − u2 ) du

1 = u − u3 + C 3 1 = sin x − sin3 x + C. 3 A quick check conﬁrms the answer: d 1 sin x − sin3 x + C dx 3

(1.43)

= cos x − sin2 x cos x = cos x(1 − sin2 x) = cos3 x.

Even powers of sines and cosines are a little more complicated, but doable. In these cases we need the half angle formulae:

sin^2 α = (1 − cos 2α)/2,   (1.44)
cos^2 α = (1 + cos 2α)/2.   (1.45)

Example 6 As an example, we will compute

∫_0^{2π} cos^2 x dx.

Substituting the half angle formula for cos^2 x, we have

∫_0^{2π} cos^2 x dx = (1/2) ∫_0^{2π} (1 + cos 2x) dx
                    = (1/2) [x − (1/2) sin 2x]_0^{2π}
                    = π.   (1.46)
We note that this result appears often in physics. When looking at root mean square averages of sinusoidal waves, one needs the average of the square of sines and cosines. Recall that the average of a function on the interval [a, b] is given as

f_ave = (1/(b − a)) ∫_a^b f(x) dx.   (1.47)

So, the average of cos^2 x over one period is

(1/(2π)) ∫_0^{2π} cos^2 x dx = 1/2.   (1.48)

The root mean square is then 1/√2.

So far, this is enough to get started in the course. We will recall other topics as we need them. In particular, we discuss the method of partial fraction decomposition when we discuss terminal velocity in the next chapter and applications of the Laplace transform later in the book.
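A short numerical aside (not in the original text): a Riemann sum over one period reproduces the average 1/2 from (1.48) and the root mean square 1/√2:

```python
import math

# Riemann-sum estimate of the average of cos^2 x over one period [0, 2*pi].
n = 100000
avg = sum(math.cos(2 * math.pi * i / n) ** 2 for i in range(n)) / n
assert math.isclose(avg, 0.5, rel_tol=1e-6)

# The corresponding root mean square is 1/sqrt(2).
rms = math.sqrt(avg)
assert math.isclose(rms, 1 / math.sqrt(2), rel_tol=1e-6)
```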

1.2 What I Need From My Intro Physics Class?

So, what do we need to know about physics? You should be comfortable with common terms from mechanics and electromagnetism. In some cases, we will review specific topics. However, it would be helpful to review some topics from your introductory physics text.

As you may recall, your study of physics began with the simplest systems. We first studied motion for point masses. We were introduced to the concepts of position, displacement, velocity and acceleration. We studied motion first in one dimension, and even then we could only do problems in which

the acceleration is constant, or piecewise constant. We looked at horizontal motion and then vertical motion, in terms of free fall. Finally, we moved into two dimensions and considered projectile motion. Some calculus was introduced and you learned how to represent vector quantities.

We then asked, "What causes a change in the state of motion of a body?" This leads to a discussion of forces. The types of forces encountered are the weight, the normal force, tension, the force of gravity and centripetal forces. You might have also seen spring forces, which, as we will see shortly, lead to oscillatory motion - the underlying theme of this book.

Next, you found out that there are well known conservation principles for energy and momentum. In these cases you were led to the concepts of work, kinetic energy and potential energy. You found out that even when mechanical energy is not conserved, you can account for the missing energy as the work done by nonconservative forces. Momentum becomes important in collision problems or when looking at impulses.

With these basic ideas under your belt, you proceeded to study more complicated systems. We can look at extended bodies, most notably rigid bodies. This leads to the study of rotational motion. One finds out that there are analogues to all of the previously discussed concepts for point masses. For example, we have rotational velocity and acceleration. The cause of rotational acceleration is the torque. The analogue to mass is the moment of inertia.

The next level of complication, which sometimes is not covered, is bulk systems. One can study fluids, solids and gases. These can be investigated by looking at things like mass density, pressure, volume and temperature. This leads to the study of thermodynamics, in which one studies the transfer of energy between a system and its surroundings. This involves the relationship between the work done on the system, the heat energy added to a system and its change in internal energy.

Bulk systems can also suffer deformations when a force per area is applied. Small deformations can lead to the propagation of energy throughout the system in the form of waves. We will later explore this wave motion in several systems.

The second course in physics is spent on electricity and magnetism, leading to electromagnetic waves. One first learns about charges and charge


distributions, electric fields, and electric potentials. Then we find out that moving charges produce magnetic fields and are affected by external magnetic fields. Furthermore, changing magnetic fields produce currents. This can all be summarized by Maxwell's equations, which we will recall later in the course. These equations, in turn, predict the existence of electromagnetic waves.

Depending on how far one delves into the book, one may see excursions into optics and the impact that trying to understand the existence of electromagnetic waves has had on the development of so-called "modern physics". For example, in trying to understand what medium electromagnetic waves might propagate through, Einstein proposed an answer that completely changed the way we understand the nature of space and time. In trying to understand how ionized gases radiate and interact with matter, Einstein and others were led down a path that has led to quantum mechanics and further challenges to our understanding of reality.

So, that is the introductory physics course in a nutshell. In fact, that is most of physics. The rest is detail, which you will explore in your other courses as you progress toward a degree in physics.

1.3 Technology and Tables

As we progress through the course, you will often have to compute integrals and derivatives by hand. However, many of you know that some of the tedium can be alleviated by using computers, or even looking up what you need in tables. In some cases you might even ﬁnd applets online that can quickly give you the answers you seek. However, you also need to be comfortable in doing many computations by hand. This is necessary, especially in your early studies, for several reasons. For example, you should try to evaluate integrals by hand when asked to do them. This reinforces the techniques, as outlined earlier. It exercises your brain in much the same way that you might jog daily to exercise your body. Who knows, keeping your brain active this way might even postpone Alzheimer’s. The more comfortable you are with derivations and evaluations, the easier it is to follow future lectures without getting bogged down by the details, wondering how your professor got from step A to step D. You can always use a computer algebra system, or a Table of

Integrals, to check on your work.

Problems can arise when depending purely on the output of computers, or other ”black boxes”. Once you have a ﬁrm grasp on the techniques and a feeling as to what answers should look like, then you can feel comfortable with what the computer gives you. Sometimes, programs like Maple can give you strange looking answers, and sometimes wrong answers. Also, Maple cannot do every integral, or solve every diﬀerential equation, that you ask it to do. Even some of the simplest looking expressions can cause computer algebra systems problems. Other times you might even provide wrong input, leading to erroneous results. Another source of indeﬁnite integrals, derivatives, series expansions, etc, is a Table of Mathematical Formulae. There are several good books that have been printed. Even some of these have typos in them, so you need to be careful. However, it may be worth the investment to have such a book in your personal library. Go to the library, or the bookstore, and look at some of these tables to see how useful they might be. There are plenty of online resources as well. For example, there is the Wolfram Integrator at http://integrals.wolfram.com/. There is also a wealth of information at the following sites: http://www.sosmath.com/, http://www.math2.org/, http://mathworld.wolfram.com/, and http://functions.wolfram.com/.

1.4 Back of the Envelope Computations

In the first chapter of your introductory physics text you were introduced to dimensional analysis. Dimensional analysis is useful for recalling particular relationships between variables by looking at the units involved, independent of the system of units employed, though most of the time you have used SI, or MKS, units in your physics problems.

There are certain basic units: length, mass and time. By the second course, you found out that you could add charge to the list. We can represent these as [L], [M], [T] and [C]. Other quantities typically have units that can be expressed in terms of the basic units. These are called derived units. So, we have that the units of acceleration are [L]/[T]^2 and units of mass density are [M]/[L]^3. Similarly, units of magnetic field can be


found, though with a little more effort. Recall that F = qvB sin θ for a charge q moving with speed v through a magnetic field B at an angle of θ. Since sin θ has no units,

[B] = [F]/([q][v]) = ([M][L]/[T]^2) / ([C] · [L]/[T]) = [M]/([C][T]).   (1.49)

Now, assume that you do not know how B depends on F, q and v, but you know the units of all of the quantities. Can you figure out the relationship between them? We could write

[B] = [F]^α [q]^β [v]^γ

and solve for the exponents by inserting the dimensions. Thus, we have

[M][C]^{−1}[T]^{−1} = ([M][L][T]^{−2})^α [C]^β ([L][T]^{−1})^γ.

**Right away we can see that α = 1 and β = −1 by looking at the powers of [M] and [C], respectively. Thus, [M][C]−1 [T]−1 = [M][L][T]−2 [C]−1 [L][T]−1
**

γ

= [M][C]−1 [L]1+γ [T]−2−γ .
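The exponent matching above is just a small linear-algebra problem, and it can be checked mechanically. Here is a minimal sketch in pure Python (the dimension vectors are transcribed from the text; the solving order — [M], then [C], then [L] — follows the argument above):

```python
# Dimension exponents in the order (M, L, T, C) for each quantity.
dims = {
    "F": (1, 1, -2, 0),   # force:  [M][L][T]^-2
    "q": (0, 0, 0, 1),    # charge: [C]
    "v": (0, 1, -1, 0),   # speed:  [L][T]^-1
    "B": (1, 0, -1, -1),  # field:  [M]/([C][T])
}

# [M] only enters through F, so matching [M] powers fixes alpha.
alpha = dims["B"][0] / dims["F"][0]
# [C] only enters through q (F and v carry no charge dimension).
beta = dims["B"][3] - alpha * dims["F"][3]
# Matching the [L] powers then fixes gamma (v carries [L]^1).
gamma = dims["B"][1] - alpha * dims["F"][1]

# Finally, verify that the [T] powers balance as well.
check_T = alpha * dims["F"][2] + beta * dims["q"][2] + gamma * dims["v"][2]
assert check_T == dims["B"][2]
print(alpha, beta, gamma)  # -> 1.0 -1.0 -1.0
```

The assertion on the leftover [T] powers is the same consistency check done by hand in the text: once α, β, γ are chosen, the remaining dimension must balance automatically.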

We see that picking γ = −1 balances the exponents and gives the correct relation

    [B] = [F][q]^(−1)[v]^(−1).

An important theorem at the heart of dimensional analysis is the Buckingham Π Theorem. In essence, this theorem tells us that physically meaningful equations in n variables can be written as an equation involving n − m dimensionless quantities, where m is the number of dimensions used. The importance of this theorem is that one can actually compute useful quantities without even knowing the exact form of the equation!

The Buckingham Π Theorem [1] was introduced by E. Buckingham in 1914. Let q_i be the n physical variables that are related by

    f(q_1, q_2, ..., q_n) = 0.   (1.50)

[1] http://en.wikipedia.org/wiki/Buckingham_Pi_theorem

Assuming that m dimensions are involved, we let the π_i be k = n − m dimensionless variables. Then equation (1.50) can be rewritten as a function of these dimensionless variables,

    F(π_1, π_2, ..., π_k) = 0,   (1.51)

where the π_i's can be written in terms of the physical variables as

    π_i = q_1^(k_1) q_2^(k_2) ··· q_n^(k_n),   i = 1, ..., k.   (1.52)

Well, this is our first new concept, and it is probably a mystery as to its importance. It also seems a bit abstract. However, this is the basis for some of the proverbial "back of the envelope calculations" which you might have heard about. So, let's see how it can be used.

Example 1. Let's consider the period of a simple pendulum; e.g., a point mass hanging on a massless string. The period, T, of the pendulum's swing could depend upon the string length, ℓ, the mass of the "pendulum bob", m, and gravity in the form of the acceleration due to gravity, g. These are the q_i's in the theorem. We have four physical variables. The only units involved are length, mass and time, so m = 3. This means that there are k = n − m = 1 dimensionless variables; call it π. So, there must be an equation of the form F(π) = 0 in terms of the dimensionless variable

    π = ℓ^(k_1) m^(k_2) T^(k_3) g^(k_4).   (1.53)

We just need to find the k_i's. This could be done by inspection, or we could write out the dimensions of each factor and determine how π can be dimensionless. Thus,

    [π] = [ℓ]^(k_1) [m]^(k_2) [T]^(k_3) [g]^(k_4)
        = [L]^(k_1) [M]^(k_2) [T]^(k_3) ([L]/[T]^2)^(k_4)
        = [L]^(k_1+k_4) [M]^(k_2) [T]^(k_3−2k_4).   (1.54)

So, π will be dimensionless when

    k_1 + k_4 = 0,   k_2 = 0,   k_3 − 2k_4 = 0.

This is a linear homogeneous system of three equations and four unknowns. (In linear algebra one learns how to solve this using matrix methods.) We can satisfy these equations by setting k_1 = −k_4, k_2 = 0, and k_3 = 2k_4. Thus, k_4 is arbitrary, so we can pick the simplest value, k_4 = 1. Then,

    π = ℓ^(−1) T^2 g = T^2 g / ℓ.

Therefore, F(T^2 g/ℓ) = 0. Assuming that this equation has one zero, z, which has to be verified by other means, we have that

    g T^2 / ℓ = z = const.   (1.55)

Thus, we have determined that the period is independent of the mass and proportional to the square root of the length. The constant can be determined by experiment as z = 4π^2.

Example 2. A more interesting example was provided by Sir Geoffrey Taylor in 1941 for determining the energy release of an atomic bomb. Let's assume that the energy is released in all directions from a single point. Possible physical variables are the time since the blast, t, the energy, E, the distance from the blast, r, the atmospheric density, ρ, and the atmospheric pressure, p. We have five physical variables and only three units. Thus, there should be two dimensionless quantities. Let's determine these.

We set π = E^(k_1) t^(k_2) r^(k_3) p^(k_4) ρ^(k_5). Inserting the respective units, we find that

    [π] = [E]^(k_1) [t]^(k_2) [r]^(k_3) [p]^(k_4) [ρ]^(k_5)
        = ([M][L]^2[T]^(−2))^(k_1) [T]^(k_2) [L]^(k_3) ([M][L]^(−1)[T]^(−2))^(k_4) ([M][L]^(−3))^(k_5)
        = [M]^(k_1+k_4+k_5) [L]^(2k_1+k_3−k_4−3k_5) [T]^(−2k_1+k_2−2k_4).   (1.56)

(Note: You should verify the units used. For example, the units of force can be found using F = ma, work (energy) is force times distance, and pressure is force per area.)

For π to be dimensionless, we have to solve the system:

    k_1 + k_4 + k_5 = 0,
    2k_1 + k_3 − k_4 − 3k_5 = 0,
    −2k_1 + k_2 − 2k_4 = 0.   (1.57)

This is a set of three equations and five unknowns. The only way to solve this system is to solve for three unknowns in terms of the remaining two. (In linear algebra one learns how to solve this using matrix methods.) Let's solve for k_1, k_2, and k_5 in terms of k_3 and k_4. The system can be written as

    k_1 + k_5 = −k_4,
    2k_1 − 3k_5 = k_4 − k_3,
    2k_1 + k_2 = −2k_4.

These can be solved by finding k_1 and k_5 from the first two equations and then finding k_2 from the last one. Solving this system yields:

    k_1 = −(2k_4 + k_3)/5,   k_2 = (2/5)(3k_4 − k_3),   k_5 = (k_3 − 3k_4)/5.

We have the freedom to pick values for k_3 and k_4. Two independent sets of simple values would be to pick one variable as zero and the other as one. This will give our two dimensionless variables:

Case I. k_3 = 1 and k_4 = 0. In this case we then have k_1 = −1/5, k_2 = −2/5, and k_5 = 1/5. This gives

    π_1 = E^(−1/5) t^(−2/5) r ρ^(1/5) = r (ρ/(E t^2))^(1/5).

Case II. k_3 = 0 and k_4 = 1. In this case we then have k_1 = −2/5, k_2 = 6/5, and k_5 = −3/5. This gives

    π_2 = E^(−2/5) t^(6/5) p ρ^(−3/5) = p (t^6/(ρ^3 E^2))^(1/5).

Thus, we have that the relation between the energy and the other variables is of the form

    F( r (ρ/(E t^2))^(1/5), p (t^6/(ρ^3 E^2))^(1/5) ) = 0.

Of course, this is not enough to determine the explicit equation. However, Taylor was able to use this information to get an energy estimate. Note that π_1 is dimensionless. It can be represented as a function of the dimensionless variable π_2. Thus, assuming that π_1 = h(π_2), we have that

    h(π_2) = r (ρ/(E t^2))^(1/5).

Note that for t = 1 second, the energy is expected to be huge, so π_2 ≈ 0. Simple experiments suggest that h(0) is of order one, so

    r ≈ (E t^2/ρ)^(1/5).

In 1947 Taylor applied his earlier analysis to movies of the first atomic bomb test in 1945 and his results were close to the actual values. We can rewrite the above result to get the energy estimate:

    E ≈ r^5 ρ / t^2.

As an exercise, you can find pictures of the first atomic bomb test with a superimposed length scale online [2], estimate the radius of the explosion at the given time, and determine the energy of the blast in so many tons of TNT.

[2] http://www.atomicarchive.com/Photos/Trinity/image7.shtml
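The estimate E ≈ r^5 ρ / t^2 takes one line to evaluate. The frame values below (a fireball radius of about 140 m at t = 0.025 s) are commonly quoted for the Trinity test, but they are assumptions here, not numbers taken from this text:

```python
# Taylor's blast-energy estimate: E ~ r^5 * rho / t^2.
# r and t below are commonly quoted Trinity-test values (assumed, not
# from the text); rho is a typical sea-level air density.
rho = 1.25            # air density, kg/m^3
r, t = 140.0, 0.025   # fireball radius (m) at time t (s)

E = r**5 * rho / t**2        # energy in joules
kilotons = E / 4.184e12      # 1 kiloton of TNT = 4.184e12 J
print(round(kilotons, 1))    # roughly 26 kt -- the right order of magnitude
```

With these inputs the estimate lands in the low tens of kilotons, consistent with Taylor's famous result that dimensional analysis alone pins down the blast energy to within a factor of order one.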

Figure 1.1: A photograph of the first atomic bomb test (http://www.atomicarchive.com/Photos/Trinity/image7.shtml).

1.5 Chapter 1 Problems

1. Compute the following integrals:

   (a) ∫ x e^(−x^2) dx,
   (b) ∫ cos^4(3x) dx,
   (c) ∫_0^3 5x/√(x^2 + 16) dx,
   (d) ∫ x^2 sin 3x dx.

Chapter 2

Free Fall and Harmonic Oscillators

2.1 Free Fall and Terminal Velocity

In this chapter we will study some common differential equations that appear in physics. We will begin with the simplest types of equations and standard techniques for solving them. We will end this part of the discussion by returning to the problem of free fall with air resistance. We will then turn to the study of oscillations, which are modelled by second order differential equations.

Let us begin with a simple example from introductory physics. We recall that free fall is the vertical motion of an object under the force of gravity. It is experimentally determined that an object at some distance from the center of the earth falls at a constant acceleration in the absence of other forces, such as air resistance. This constant acceleration is denoted by −g, where g is called the acceleration due to gravity. The negative sign is an indication that up is positive. We will be interested in determining the position, y(t), of the body as a function of time. From the definition of free fall, we have

    ÿ(t) = −g.   (2.1)

Note that we will occasionally use a dot to indicate time differentiation.

This notation is standard in physics and we will begin to introduce you to it here, though at times we might use the more familiar prime notation to indicate spatial, or general, differentiation.

In Equation (2.1) we know g. It is a constant. Near the earth's surface it is about 9.81 m/s^2 or 32.2 ft/s^2. (You might still be getting used to the fact that some letters are used to represent constants.) We will return to this point later.

This is our first differential equation. In fact, it is natural to see differential equations appear in physics, as Newton's Second Law, F = ma, plays an important role in classical physics. We will come back to the more general form after we see how to solve the differential equation.

So, how does one solve the differential equation in (2.1)? We can do so by using what we know about calculus. It might be easier to see how if we put in a particular number instead of g. Consider

    ÿ(t) = 5.   (2.2)

What we do not know is y(t). Recalling that the second derivative is just the derivative of a derivative, we can rewrite the equation as

    d/dt (dy/dt) = 5.   (2.3)

This tells us that the derivative of dy/dt is 5. Can you think of a function whose derivative is 5? (Do not forget that the independent variable is t.) Yes, the derivative of 5t with respect to t is 5. Is this the only function whose derivative is 5? No! You can also differentiate 5t + 1, 5t + π, 5t − 6, etc. In general, the derivative of 5t + C is 5. So, our equation can be reduced to

    dy/dt = 5t + C.   (2.4)

Now we ask if you know a function whose derivative is 5t + C. Well, you might be able to do this one in your head, but we just need to recall the Fundamental Theorem of Calculus, which relates integrals and derivatives. Thus, we have

    y(t) = (5/2) t^2 + Ct + D,

where D is a second integration constant.

This is a solution to the original equation. That means it is a function that, when placed into the differential equation, makes both sides of the equal sign the same. You can always check your answer by showing that it satisfies the equation. In this case we have

    ÿ(t) = d^2/dt^2 ((5/2)t^2 + Ct + D) = d/dt (5t + C) = 5.

So, it is a solution.

We also see that there are two arbitrary constants, C and D. Picking any values for these gives a whole family of solutions. As we will see, our equation is a linear second order ordinary differential equation, and the general solution of such an equation always has two arbitrary constants.

There seems to be a problem. We just determined that there are an infinite number of solutions to where the ball is at any time! Imagine dropping a ball that then undergoes free fall. Experience tells us that if you drop a ball you expect it to behave the same way every time. Or does it? Actually, there are many ways you can release the ball before it is in free fall. You could drop the ball from anywhere. You could also toss it up or throw it down. That is where the constants come in. They have physical meanings.

Let's return to the free fall problem. We solve it the same way. The only difference is that we can replace the constant 5 with the constant −g. Thus, we find that

    dy/dt = −gt + C,   (2.5)

and

    y(t) = −(1/2) g t^2 + Ct + D.   (2.6)

Once you get down the process, it only takes a line or two to solve.

That leaves us to determine C and D. If you set t = 0 in the equation, then you have that y(0) = D. Thus, D gives the initial position of the ball. Typically, we denote initial values with a subscript. So, we will write y(0) = y_0. Thus, D = y_0.

Now, recall that dy/dt, the derivative of the position, is the vertical velocity, v(t). It is positive when the ball moves upward. It appears first in Equation (2.5). Denoting the initial velocity

v(0) = v_0, we see that Equation (2.5) becomes ẏ(0) = C. This implies that C = v(0) = v_0. Putting this all together, we have the physical form of the solution for free fall:

    y(t) = −(1/2) g t^2 + v_0 t + y_0.   (2.7)

Doesn't this equation look familiar? Now we see that our infinite family of solutions consists of free fall resulting from initially dropping a ball at position y_0 with initial velocity v_0. The conditions y(0) = y_0 and ẏ(0) = v_0 are called the initial conditions. A solution of a differential equation satisfying a set of initial conditions is often called a particular solution.

So, we have solved the free fall equation. Along the way we have begun to see some of the features that will appear in the solutions of other problems that are modelled with differential equations. But are we done with free fall? Not at all! We can relax some of the conditions that we have imposed. We can add air resistance; we will visit this problem later in this chapter after introducing some more techniques. The object may also undergo projectile motion, which you may recall is a combination of horizontal motion and free fall. Later, we will extend our analysis to higher dimensions, in which case we will be faced with so-called partial differential equations, which involve the partial derivatives of functions of more than one variable.

Finally, we should also note that free fall at constant g only takes place near the surface of the Earth. What if a tile falls off the shuttle far from the surface? It will also fall to the earth. To look at this problem we need to go to the origins of the acceleration due to gravity. This comes out of Newton's Law of Gravitation. Consider a mass m at some distance h(t) from the surface of the (spherical) Earth. Letting M and R be the Earth's mass and radius, respectively, Newton's Law of Gravitation states that

    ma = F,
    m d^2h(t)/dt^2 = −G mM/(R + h(t))^2.   (2.8)
Throughout the book we will see several applications of diﬀerential equations. respectively.
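The free fall solution (2.7) is easy to sanity-check numerically. The sketch below (not from the text; the initial values are arbitrary choices) evaluates y(t) = −g t^2/2 + v_0 t + y_0 and confirms by finite differences that it satisfies ÿ = −g together with the initial conditions:

```python
g, y0, v0 = 9.81, 10.0, 2.0  # arbitrary illustrative values

def y(t):
    # The free fall solution (2.7): y(t) = -g t^2/2 + v0 t + y0.
    return -0.5 * g * t**2 + v0 * t + y0

h = 1e-3
# Check y'' = -g with a centered second difference at a few times.
for t in (0.0, 0.5, 1.0):
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    assert abs(ypp - (-g)) < 1e-6

# Check the initial conditions y(0) = y0 and y'(0) = v0.
assert y(0.0) == y0
v_approx = (y(h) - y(-h)) / (2 * h)
assert abs(v_approx - v0) < 1e-6
print(y(1.0))
```

Because y(t) is a quadratic, the centered second difference is exact up to rounding error, so the tolerances here are comfortably loose.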

Figure 2.1: Free fall far from the Earth from a height h(t) above the surface.

Thus, we arrive at a differential equation

    d^2h(t)/dt^2 = −GM/(R + h(t))^2.   (2.9)

This equation is not as easy to solve. We will leave it as a homework exercise for the reader.

2.1.1 First Order Differential Equations

Before moving on, we first define an n-th order ordinary differential equation. This is an equation for an unknown function y(x) that expresses a relationship between the unknown function and its first n derivatives. One could write this generally as

    F(y^(n), y^(n−1), ..., y′, y, x) = 0.   (2.10)

An initial value problem consists of the differential equation plus the values of the first n − 1 derivatives at a particular value of the independent variable, say x_0:

    y(x_0) = y_0,  y′(x_0) = y_1,  ...,  y^(n−1)(x_0) = y_{n−1}.   (2.11)

A linear nth order differential equation takes the form

    a_n(x) y^(n)(x) + a_{n−1}(x) y^(n−1)(x) + ... + a_1(x) y′(x) + a_0(x) y(x) = f(x).   (2.12)

If f(x) ≡ 0, then the equation is said to be homogeneous; otherwise it is nonhomogeneous.

Typically, the first differential equations encountered are first order equations. A first order differential equation takes the form

    F(y′, y, x) = 0.   (2.13)

There are two general forms for which one can formally obtain a solution. The first is the separable case and the second is the linear first order equation. We say formally, as one can indicate the needed integration that leads to a solution; however, these integrals are not always reducible to elementary functions, nor does one necessarily obtain explicit solutions when the integrals are doable.

A first order equation is separable if it can be written in the form

    dy/dx = f(x) g(y).   (2.14)

Special cases result when either f(x) = 1 or g(y) = 1. In the first case the equation is said to be autonomous.

The general solution to equation (2.14) is obtained in terms of two integrals:

    ∫ dy/g(y) = ∫ f(x) dx + C,   (2.15)

where C is an integration constant. This yields a family of solutions to the differential equation corresponding to different values of C. If one can solve (2.15) for y(x), then one obtains an explicit solution. Otherwise, one has a family of implicit solutions. If an initial condition is given as well, then one might be able to find a member of the family that satisfies this condition, which is often called a particular solution. We will return to these definitions as we explore a variety of examples.

Example 1. y′ = 2xy, y(0) = 2. Applying (2.15), one has

    ∫ dy/y = ∫ 2x dx + C.

Integrating yields

    ln |y| = x^2 + C.

Exponentiating, one obtains the general solution,

    y(x) = ± e^(x^2 + C) = A e^(x^2).

(Note that since C is arbitrary, then so is e^C. There is no loss in generality using A instead of e^C.) Next, one seeks a particular solution satisfying the initial condition. For y(0) = 2, one finds that A = 2. Thus, the particular solution is

    y(x) = 2 e^(x^2).

Example 2. yy′ = −x. Following the same procedure as in the last example, one obtains:

    ∫ y dy = −∫ x dx + C  ⇒  y^2 = −x^2 + A,  where A = 2C.

Thus, we obtain an implicit solution. Writing the solution as x^2 + y^2 = A, we see that this is a family of circles for A > 0 and the origin for A = 0.

The second type of first order equation encountered is the linear first order differential equation in the form

    y′(x) + p(x) y(x) = q(x).   (2.16)

In this case one seeks an integrating factor, μ(x), which is a function that one can multiply through the equation making the left side a perfect derivative. Multiplying the equation by μ, the resulting equation becomes

    d/dx (μ y) = μ q.   (2.17)

The integrating factor that works is μ(x) = exp(∫ p(x) dx). The resulting equation is then easily integrated to obtain

    y(x) = (1/μ(x)) [ ∫^x μ(ξ) q(ξ) dξ + C ].   (2.18)
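The particular solution of Example 1 can be verified numerically by integrating y′ = 2xy directly. A minimal sketch using the classical fourth-order Runge–Kutta method (the method and step size are my choices, not the text's):

```python
import math

# Integrate y' = 2 x y, y(0) = 2, with classical RK4 on [0, 1],
# then compare against the closed-form solution y(x) = 2 exp(x^2).
def f(x, y):
    return 2.0 * x * y

n, h = 1000, 0.001
y = 2.0
for i in range(n):
    x = i * h
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

exact = 2.0 * math.exp(1.0)  # y(1) = 2e
print(abs(y - exact))        # tiny: RK4 is fourth-order accurate
```

Agreement to many digits at x = 1 is strong evidence that y(x) = 2e^(x^2) really does satisfy both the equation and the initial condition.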

Example 3. xy′ + y = x, x > 0, y(1) = 0. One first notes that this is a linear first order differential equation. Solving for y′, one can see that it is not separable. However, it is not in the standard form (2.16). So, we first rewrite the equation as

    dy/dx + (1/x) y = 1.

Next, we determine the integrating factor

    μ(x) = exp( ∫^x dξ/ξ ) = e^(ln x) = x.   (2.19)

Multiplying the equation by μ(x) = x, equation (2.17) becomes

    (xy)′ = x.

(Expanding the derivative, (xy)′ = xy′ + y, we actually get back the original equation! In this case we have found that xy′ + y must have been the derivative of something to start with.) Integrating, one obtains

    xy = (1/2) x^2 + C,

or

    y(x) = (1/2) x + C/x.

Inserting this solution into the initial condition gives 0 = 1/2 + C. Therefore, C = −1/2. Thus, the solution of the initial value problem is

    y(x) = (1/2)(x − 1/x).

There are other first order equations that one can solve for closed form solutions. However, many equations are not solvable, or one is simply interested in the behavior of solutions. In such cases one turns to direction fields. We will return to a discussion of the qualitative behavior of differential equations later in the course.

2.1.2 Terminal Velocity

Now let's return to free fall. What if there is air resistance? We first need to model the air resistance. As an object falls faster and faster, the drag

force becomes greater. So, this resistive force is a function of the velocity. There are a couple of standard models that people use to test this. The idea is to write F = ma in the form

    m ÿ = −mg + f(v),   (2.20)

where f(v) gives the resistive force and mg is the weight. Recall that this applies to free fall near the Earth's surface. Also, for it to be resistive, f(v) should oppose the motion. If the body is falling, then f(v) should be positive. If it is rising, then f(v) would have to be negative to indicate the opposition to the motion.

One common determination derives from the drag force on an object moving through a fluid. This force is given by

    f(v) = (1/2) C A ρ v^2,   (2.21)

where C is the drag coefficient, A is the cross sectional area and ρ is the fluid density. For laminar flow the drag coefficient is constant.

Unless you are into aerodynamics, you do not need to get into the details of the constants. It is best to absorb all of the constants into one to simplify the computation, so we will write f(v) = bv^2. Our equation can then be rewritten as

    v̇ = kv^2 − g,   (2.22)

where k = b/m. Note that this is a first order equation for v(t). It is separable too! Formally, we can separate the variables and integrate over time to obtain

    t + K = ∫^v dz/(kz^2 − g).   (2.23)

(Note: We used an integration constant of K since C is the drag coefficient in this problem.) If we can do the integral, then we have a solution for v. In fact, we can do this integral. You need to recall another common method of integration, which we have not reviewed yet: Partial Fraction Decomposition. It involves factoring the denominator of the integrand. Of course, this is ugly because our constants are represented by letters and are not specific numbers. Letting α^2 = g/k, we can write the integrand as

    1/(kz^2 − g) = (1/k) · 1/(z^2 − α^2) = (1/(2αk)) [ 1/(z − α) − 1/(z + α) ].   (2.24)

Now, the integrand can be easily integrated, giving

    t + K = (1/(2αk)) ln |(v − α)/(v + α)|.   (2.25)

Solving for v, we have

    v(t) = α (1 − A e^(2αkt)) / (1 + A e^(2αkt)),   (2.26)

where A ≡ e^K. A can be determined using the initial velocity. (There are other forms for the solution, e.g., in terms of a tanh function, which the reader can determine as an exercise.)

One important conclusion is that for large times the ratio in the solution approaches −1, so that

    v → −α = −√(g/k).

This means that the falling object will reach a terminal velocity.

As a simple computation, we can determine the terminal velocity. We will take a 70 kg skydiver with a cross sectional area of about 0.093 m^2. (The skydiver is falling head first.) Assume that the air density is a constant 1.2 kg/m^3 and the drag coefficient is C = 2.0. We first note that

    v_terminal = −√(g/k) = −√(2mg/(CAρ)).

So,

    v_terminal = −√( 2(70)(9.8) / ((2.0)(0.093)(1.2)) ) ≈ −78 m/s.

This is about 175 mph, which is slightly higher than the actual terminal velocity of a skydiver. One would need a more accurate determination of C.

2.2 The Simple Harmonic Oscillator

The next physical problem of interest is that of simple harmonic motion. Such motion comes up in many places in physics and provides a generic first approximation to models of oscillatory motion. This is the beginning of a major thread running throughout our course. You have seen simple harmonic motion in your introductory physics class. We will review SHM (or SHO in some texts) by looking at springs and pendula (the plural of

pendulum). We will use this as our jumping board into second order differential equations and later see how such oscillatory motion occurs in AC circuits.

2.2.1 Mass-Spring Systems

We begin with the case of a single block on a spring as shown in Figure 2.2. The net force in this case is the restoring force of the spring given by Hooke's Law,

    F_s = −kx,

where k > 0 is the spring constant. Here x is the elongation, or displacement of the spring from equilibrium. When the displacement is positive, the spring force is negative, and when the displacement is negative the spring force is positive. We have depicted a horizontal system sitting on a frictionless surface. A similar model can be provided for vertically oriented springs; however, one needs to account for gravity to determine the location of equilibrium. Otherwise, the oscillatory motion about equilibrium is modelled the same.

Figure 2.2: Spring-Mass system.

From Newton's Second Law, F = mẍ, we obtain the equation for the motion of the mass on the spring:

    mẍ + kx = 0.

We will later derive solutions of such equations in a methodical way. For now we note that two solutions of this equation are given by

    x(t) = A cos ωt

and

    x(t) = A sin ωt,

where

    ω = √(k/m)   (2.27)

is the angular frequency, measured in rad/s. A is called the amplitude of the oscillation. The angular frequency is related to the frequency by

    ω = 2πf,

where f is measured in cycles per second, or Hertz. Finally, the frequency is related to the period of oscillation, the time it takes the mass to go through one cycle:

    T = 1/f.

2.2.2 The Simple Pendulum

The simple pendulum consists of a point mass m hanging on a string of length L from some support. [See Figure 2.3.] One pulls the mass back to some starting angle, θ_0, and releases it. The goal is to find the angular position as a function of time.

Figure 2.3: A simple pendulum consists of a point mass m attached to a string of length L. It is released from an angle θ_0.

There are a couple of possible derivations. We could either use Newton's Second Law of Motion, F = ma, or its rotational analogue in terms of torque. We will use the former only to limit the amount of physics background needed.

In Figure 2.4 these forces and their sum are shown. There are two forces acting on the point mass. The first is gravity. This points downward and has a magnitude of mg, where g is the standard symbol for the acceleration due to gravity. The other force is the tension in the string, T. Using the addition of these two vectors, the magnitude of the net force is easily found to be F = mg sin θ.

Figure 2.4: There are two forces acting on the mass, the weight mg and the tension T. The net force is found to be F = mg sin θ.

Now, Newton's Second Law of Motion tells us that the net force is the mass times the acceleration. So, we can write

    mẍ = −mg sin θ.

Next, we need to relate x and θ. Here x is the distance traveled, which is the length of the arc traced out by our point mass. The arclength is related to the angle, provided the angle is measured in radians: x = rθ, for r = L. Thus, we can write

    m L θ̈ = −mg sin θ.

Cancelling the masses then gives us our nonlinear pendulum equation

    L θ̈ + g sin θ = 0.   (2.28)

There are several variations of Equation (2.28) which will be used in this text. The first one is the linear pendulum. This is obtained by making a small angle approximation. For small angles we know that sin θ ≈ θ. Under this approximation (2.28) becomes

    L θ̈ + g θ = 0.   (2.29)
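How good is the small angle approximation? A quick numerical experiment (a sketch, not from the text; the release angle and parameters are illustrative choices) integrates the nonlinear equation (2.28) with RK4 and compares the resulting period against the linear prediction T = 2π√(L/g):

```python
import math

g, L = 9.8, 1.0
theta0 = 0.1  # release angle in radians (small)

def deriv(theta, omega):
    # State derivatives for L theta'' = -g sin(theta).
    return omega, -(g / L) * math.sin(theta)

# Released from rest at theta0, the mass first reaches theta = 0
# after a quarter period; integrate until the sign change.
dt = 1e-4
theta, omega, t = theta0, 0.0, 0.0
while theta > 0:
    k1 = deriv(theta, omega)
    k2 = deriv(theta + dt * k1[0] / 2, omega + dt * k1[1] / 2)
    k3 = deriv(theta + dt * k2[0] / 2, omega + dt * k2[1] / 2)
    k4 = deriv(theta + dt * k3[0], omega + dt * k3[1])
    theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    omega += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += dt

T_nonlinear = 4 * t
T_linear = 2 * math.pi * math.sqrt(L / g)
print(T_nonlinear, T_linear)  # nearly equal for this small theta0
```

For a release angle of 0.1 rad the two periods agree to a fraction of a percent; repeating the experiment with larger theta0 shows the nonlinear period growing, which is exactly where the small angle approximation breaks down.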

We note that this equation is of the same form as the mass-spring system. We define ω = √(g/L) and obtain the equation for simple harmonic motion,

    θ̈ + ω^2 θ = 0.

2.3 Second Order Linear Differential Equations

In the last section we saw how second order differential equations naturally appear in the derivations for simple oscillating systems. In this section we will look at more general second order linear differential equations. Second order differential equations are typically harder than first order ones. In most cases students are only exposed to second order linear differential equations. A general form is given by

    a(x) y″(x) + b(x) y′(x) + c(x) y(x) = f(x).   (2.30)

One can rewrite this equation using operator terminology. Namely, we first define the differential operator L = a(x)D^2 + b(x)D + c(x), where D = d/dx. Then equation (2.30) becomes

    L y = f.   (2.31)

The solutions of linear differential equations are found by making use of the linearity of L. An operator L is said to be linear if it satisfies two properties:

1. L(y_1 + y_2) = L(y_1) + L(y_2).
2. L(ay) = aL(y) for a a constant.

One typically solves (2.30) by finding the general solution of the homogeneous problem,

    L y_h = 0,

and a particular solution of the nonhomogeneous problem,

    L y_p = f.

Then the general solution of (2.30) is simply given as y = y_h + y_p. This is found to be true using the linearity of L. Namely,

    L y = L(y_h + y_p) = L y_h + L y_p = 0 + f = f.   (2.32)

There are methods for finding a particular solution, y_p(x), of the equation. These range from pure guessing to the Method of Undetermined Coefficients, or the Method of Variation of Parameters. Again, linearity is useful. In fact, others have studied a variety of second order linear equations and have saved us the trouble in the case of differential equations that keep reappearing in applications.

Determining solutions to the homogeneous problem is not always so easy. However, if y_1 and y_2 are solutions of the homogeneous equation, then the linear combination c_1 y_1 + c_2 y_2 is also a solution of the homogeneous equation. In fact, if y_1 and y_2 are linearly independent, namely

    c_1 y_1 + c_2 y_2 = 0  ⇔  c_1 = c_2 = 0,

then c_1 y_1 + c_2 y_2 is the general solution of the homogeneous problem. Linear independence is established if the Wronskian of the solutions is not zero:

    W(y_1, y_2) = y_1(x) y_2′(x) − y_1′(x) y_2(x) ≠ 0.   (2.33)

2.3.1 Constant Coefficient Equations

The simplest and most taught equations are those with constant coefficients. The general form for a homogeneous constant coefficient second order linear differential equation is given as

    a y″(x) + b y′(x) + c y(x) = 0.   (2.34)

Solutions to (2.34) are obtained by making a guess of y(x) = e^(rx) and determining what possible values of r will yield a solution. Inserting this guess into (2.34) leads to the characteristic equation

    a r^2 + b r + c = 0.   (2.35)

The roots of this equation lead to three types of solution, depending upon the nature of the roots.

1. Real, distinct roots r_1, r_2. In this case the solutions corresponding to each root are linearly independent. Therefore, the general solution is simply

    y(x) = c_1 e^(r_1 x) + c_2 e^(r_2 x).

2. Real, equal roots r_1 = r_2 = r = −b/(2a). In this case the solutions corresponding to each root are linearly dependent. To find a second linearly independent solution, one uses what is called the Method of Reduction of Order. We know that y_1(x) = e^(rx) is a solution, and we seek a second linearly independent solution of the form y_2(x) = v(x) y_1(x). Inserting this into the ODE gives an equation for v(x):

    a(v e^(rx))″ + b(v e^(rx))′ + c v e^(rx) = 0,

or

    [a(v″ + 2rv′) + bv′ + (ar^2 + br + c)v] e^(rx) = 0.

Since r satisfies the characteristic equation, the last term in the brackets vanishes. Cancelling out the exponential factor, which never vanishes, leaves a first order equation for v′:

    a v″ + (2ar + b) v′ = 0.

Since r = −b/(2a), the coefficient 2ar + b vanishes and we are left with v″ = 0. A solution of this equation is v(x) = x. This gives the second solution as x e^(rx). Therefore, the general solution is found as

    y(x) = (c_1 + c_2 x) e^(rx).

3. Complex conjugate roots r_1, r_2 = α ± iβ. In this case the solutions corresponding to each root are linearly independent. Making use of Euler's identity,

    e^(iθ) = cos θ + i sin θ

(we will return to this identity later), these complex exponentials can be rewritten in terms of trigonometric functions. Namely, one has that e^(αx) cos(βx) and e^(αx) sin(βx) are two linearly independent solutions. Therefore, the general solution becomes

    y(x) = e^(αx) (c_1 cos(βx) + c_2 sin(βx)).

The solution of constant coefficient equations now follows easily. One solves the characteristic equation and then determines which case applies. Then one simply writes down the general solution. We will demonstrate this with a couple of examples. (In the last section of this chapter we review the class of equations called the Cauchy-Euler equations. These equations occur often and follow a similar procedure. Students should be familiar with these kinds of equations as well.)

Example 1. y″ − y′ − 6y = 0, y(0) = 2, y′(0) = 0. The characteristic equation for this problem is r^2 − r − 6 = 0. The roots of this equation are found to be r = −2, 3. Since the roots are real and distinct, the general solution can be quickly written down:

    y(x) = c_1 e^(−2x) + c_2 e^(3x).

Note that there are two arbitrary constants in the general solution. According to the theory, one needs two pieces of information to find a particular solution. Of course, we have them with the information from the initial conditions. One needs

    y′(x) = −2c_1 e^(−2x) + 3c_2 e^(3x)

in order to attempt to satisfy the initial conditions. Evaluating y and y′ at x = 0 yields

    2 = c_1 + c_2,
    0 = −2c_1 + 3c_2.   (2.36)

These two equations in two unknowns can readily be solved to give c_1 = 6/5 and c_2 = 4/5. Therefore, the solution of the initial value problem is

    y(x) = (6/5) e^(−2x) + (4/5) e^(3x).

You should verify that this is indeed a solution.

Example 2. y″ + 4y = 0. The characteristic equation in this case is r^2 + 4 = 0. The roots are pure imaginary, r = ±2i, and the general solution consists purely of sinusoidal functions:

    y(x) = c_1 cos(2x) + c_2 sin(2x).

Example 3. y″ + 6y′ + 9y = 0. In this example we have r^2 + 6r + 9 = 0. There is only one root, r = −3. Again, the solution is found as

    y(x) = (c_1 + c_2 x) e^(−3x).

Example 4. y″ + 4y = sin x. This is an example of a nonhomogeneous problem. The homogeneous problem was actually solved in Example 2. Therefore, we need only seek a particular solution to the nonhomogeneous problem and add it to the solution of the last example to get the general solution. The particular solution can be obtained by purely guessing, making an educated guess, or using variation of parameters. We will not review all of these techniques at this time. Due to the simple form of the driving term, we will make an intelligent guess of y_p(x) = A sin x and determine what A needs to be. (Recall, this is the Method of Undetermined Coefficients.) Inserting our guess in the equation gives

    (−A + 4A) sin x = sin x,

so we see that A = 1/3 works. The general solution of the nonhomogeneous problem is therefore

    y(x) = c_1 cos(2x) + c_2 sin(2x) + (1/3) sin x.
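The three root cases can be packaged into a short routine. The sketch below (an illustration, not from the text) solves the characteristic equation with complex arithmetic and prints the form of the general solution:

```python
import cmath

def general_solution(a, b, c):
    """Classify the roots of a r^2 + b r + c = 0 and return the
    general solution of a y'' + b y' + c y = 0 as a string."""
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:   # real, distinct roots
        return f"c1 exp({r1.real:g} x) + c2 exp({r2.real:g} x)"
    if disc == 0:  # real, repeated root
        return f"(c1 + c2 x) exp({r1.real:g} x)"
    # complex conjugate roots alpha +/- i beta
    alpha, beta = r1.real, abs(r1.imag)
    return f"exp({alpha:g} x)(c1 cos({beta:g} x) + c2 sin({beta:g} x))"

print(general_solution(1, -1, -6))  # Example 1: roots 3, -2
print(general_solution(1, 0, 4))    # Example 2: roots +/- 2i
print(general_solution(1, 6, 9))    # Example 3: double root -3
```

Running it on the three examples from the text reproduces the solution forms derived above, one per case of the characteristic equation.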

As we have seen, one of the most important applications of such equations is in the study of oscillations. Typical systems are a mass on a spring, or a simple pendulum. For a mass m on a spring with spring constant k > 0, one has from Hooke's law that the position as a function of time, x(t), satisfies the equation

m x'' + k x = 0.

This constant coefficient equation has pure imaginary roots (α = 0) and the solutions are pure sines and cosines. This is called simple harmonic motion. Adding a damping term and periodic forcing complicates the dynamics, but is nonetheless solvable. We will return to damped oscillations later and also investigate nonlinear oscillations.

2.4 LRC Circuits

Another typical problem often encountered in a first year physics class is that of an LRC series circuit. This circuit is pictured in Figure 2.5. The resistor is a circuit element satisfying Ohm's Law. The capacitor is a device that stores electrical energy and an inductor, or coil, stores magnetic energy.

The physics for this problem stems from Kirchhoff's Rules for circuits: the sum of the drops in electric potential around a loop is set equal to the sum of the rises in electric potential. The potential drops across each circuit element are given by

1. Resistor: V = IR.
2. Capacitor: V = q/C.
3. Inductor: V = L dI/dt.

Adding these potential drops, we set them equal to the voltage supplied by the voltage source, V(t). Thus, we obtain

IR + q/C + L dI/dt = V(t).

Furthermore, we need to define the current as I = dq/dt, where q is the charge in the circuit. Since both q and I are unknown, we can replace the current by its expression in terms of the charge to obtain

L q'' + R q' + (1/C) q = V(t).

This is a second order equation for q(t).

Figure 2.5: LRC Circuit.

More complicated circuits are possible by looking at parallel connections, or other combinations, of resistors, capacitors and inductors. This will result in several equations, one for each loop in the circuit, leading to larger systems of differential equations. An example of another circuit setup is shown in Figure 2.6. This is not a problem that can be covered in the first year physics course. One can set up a system of second order equations and proceed to solve them.

Figure 2.6: LRC Circuit.

2.4.1 Special Cases

In this section we will look at special cases that arise for the series LRC circuit equation. These include RC circuits, solvable by first order methods, and LC circuits, leading to oscillatory behavior.

Case I. RC Circuits

We first consider the case of an RC circuit, in which there is no inductor. We will consider what happens when one charges a capacitor with a DC battery (V(t) = V0) and when one discharges a charged capacitor (V(t) = 0).

For charging a capacitor, we have the initial value problem

R dq/dt + q/C = V0,  q(0) = 0.   (2.37)

This equation is an example of a linear first order equation for q(t). However, since V0 is a constant, we can also rewrite it and solve it as a separable equation. We will do the former only, as another example of finding the integrating factor.

We first write the equation in standard form:

dq/dt + q/(RC) = V0/R.   (2.38)

The integrating factor is then

μ(t) = e^(t/RC).

Thus,

d/dt (q e^(t/RC)) = (V0/R) e^(t/RC).   (2.39)

Integrating, we have

q e^(t/RC) = (V0/R) ∫ e^(t/RC) dt = C V0 e^(t/RC) + K.   (2.40)

Note that we introduced the integration constant, K. (If we had forgotten it, we would not have gotten a correct solution for the differential equation.) Now divide out the exponential to get the general solution:

q = C V0 + K e^(-t/RC).   (2.41)

Next, we use the initial condition to get our particular solution. Namely, setting t = 0, we have 0 = q(0) = C V0 + K. So, K = -C V0. Inserting this into our solution, we have

q(t) = C V0 (1 - e^(-t/RC)).   (2.42)

Now we can study the behavior of this solution. For large times the second term goes to zero. Thus, the capacitor charges up, asymptotically, to the final value of q0 = C V0. This is what we expect, because the current is no longer flowing over R, and q0 = C V0 is just the relation between the charge on the capacitor plates and the potential difference across them once a charge of q0 is established.

Let's put in some values for the parameters. We let R = 2.00 kΩ, C = 6.00 mF, and V0 = 12 V. A plot of the solution is given in Figure 2.7. We see that the charge builds up to the asymptotic value q0 = C V0. If we use a smaller resistance, R = 200 Ω, we see in Figure 2.8 that the capacitor charges to the same value, but much faster.

Figure 2.7: The charge as a function of time for a charging capacitor with R = 2.00 kΩ, C = 6.00 mF, and V0 = 12 V.

The rate at which a capacitor charges, or discharges, is governed by the time constant, τ = RC. This is the constant factor in the exponential. The larger it is, the slower the exponential term decays.
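The charging solution can be checked numerically; a minimal Python sketch, using the component values R = 2.00 kΩ, C = 6.00 mF, V0 = 12 V from the text:

```python
import math

R, C, V0 = 2000.0, 6.0e-3, 12.0   # ohms, farads, volts (values from the text)
tau = R * C                        # time constant, tau = RC = 12 s

def q(t):
    # Charging solution q(t) = C V0 (1 - e^(-t/RC))
    return C * V0 * (1.0 - math.exp(-t / tau))

assert q(0.0) == 0.0                                         # starts uncharged
assert abs(q(tau) - (1 - math.exp(-1)) * C * V0) < 1e-12     # ~63% charged at t = tau
assert abs(q(100 * tau) - C * V0) < 1e-12                    # approaches q0 = C V0
```

The same function with tau = 200 Ω × 6.00 mF = 1.2 s reproduces the faster charging of Figure 2.8.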

If we set t = τ, we find that

q(τ) = C V0 (1 - e^(-1)) = (1 - 0.3678794412...) q0 ≈ 0.63 q0.

Thus, at time t = τ, the capacitor has charged to almost two thirds of its final value. For the first set of parameters, τ = 12 s. For the second set, τ = 1.2 s.

Figure 2.8: The charge as a function of time for a charging capacitor with R = 200 Ω, C = 6.00 mF, and V0 = 12 V.

Now, let's assume the capacitor is charged, with charges ±q0 on its plates. If we disconnect the battery and reconnect the wires to complete the circuit, the charge will then move off the plates, discharging the capacitor. The relevant form of our initial value problem becomes

R dq/dt + q/C = 0,  q(0) = q0.   (2.43)

This equation is simpler to solve. Rearranging, we have

dq/dt = -q/(RC).   (2.44)

This is a simple exponential decay problem, which you can solve using separation of variables. However, by now you should know how to immediately write down the solution to problems of the form y' = ky. The solution is

q(t) = q0 e^(-t/τ),  τ = RC.

We see that the charge decays exponentially. In principle, the capacitor never fully discharges. That is why you are often instructed to place a shunt across a discharged capacitor to fully discharge it.

In Figure 2.9 we show the discharging of our two previous RC circuits. Once again, τ = RC determines the behavior. At t = τ we have

q(τ) = q0 e^(-1) = (0.3678794412...) q0 ≈ 0.37 q0.

So, at this time the capacitor only has about a third of its original charge.

Figure 2.9: The charge as a function of time for a discharging capacitor with R = 2.00 kΩ or R = 200 Ω, C = 6.00 mF, and q0 = 2000 C.

Case II. LC Circuits

Another simple result comes from studying LC circuits. We will now connect a charged capacitor to an inductor.

In this case, we consider the initial value problem

L q'' + (1/C) q = 0,  q(0) = q0,  q'(0) = I(0) = 0.   (2.45)

Dividing out the inductance, we have

q'' + (1/(LC)) q = 0.   (2.46)

This equation is a second order, constant coefficient equation. It is of the same form as the ones for simple harmonic motion of a mass on a spring or the linear pendulum. So, we expect oscillatory behavior. The characteristic equation is

r^2 + 1/(LC) = 0.

The solutions are

r_{1,2} = ±i/√(LC).

Thus, the solution of (2.46) is of the form

q(t) = c1 cos(ωt) + c2 sin(ωt),  ω = (LC)^(-1/2).   (2.47)

Inserting the initial conditions yields

q(t) = q0 cos(ωt).   (2.48)

The oscillations that result are understandable. As the charge leaves the plates, the changing current induces a changing magnetic field in the inductor. The stored electrical energy in the capacitor changes to stored magnetic energy in the inductor. The process continues until the plates are charged with opposite polarity, and then the process begins in reverse. The charged capacitor then discharges, the capacitor eventually returns to its original state, and the whole system repeats this over and over.

The frequency of this simple harmonic motion is easily found. It is given by

f = ω/(2π) = (1/(2π)) (1/√(LC)).   (2.49)

This is called the tuning frequency because of its role in tuning circuits.
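As a concrete check of (2.49) — the component values below are hypothetical, chosen only for illustration:

```python
import math

def tuning_frequency(L, C):
    # f = 1 / (2 pi sqrt(LC)), equation (2.49)
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical values: a 1 mH inductor and a 1 nF capacitor
f = tuning_frequency(1.0e-3, 1.0e-9)
assert abs(f - 159154.94) < 1.0   # about 159 kHz
```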

Of course, this is an ideal situation. There is always resistance in the circuit, even if only a small amount from the wires. So, we really need to account for the resistance, or even add a resistor. This leads to a slightly more complicated system, in which damping will be present.

2.5 Damped Oscillations

As we have indicated, simple harmonic motion is an ideal situation. In real systems we often have to contend with some energy loss. This energy loss could be in the spring, in the way a pendulum is attached to its support, or in the resistance to the flow of current in an LC circuit. The simplest model of such resistance is the addition of a term proportional to the first derivative of the dependent variable. Thus, our three main examples with damping added look like:

m x'' + b x' + k x = 0,   (2.50)
L θ'' + b θ' + g θ = 0,   (2.51)
L q'' + R q' + (1/C) q = 0.   (2.52)

These are all examples of the general constant coefficient equation

a y''(x) + b y'(x) + c y(x) = 0.   (2.53)

We have seen that solutions are obtained by looking at the characteristic equation a r^2 + b r + c = 0. This leads to three different behaviors, depending on the discriminant in the quadratic formula:

r = (-b ± √(b^2 - 4ac)) / (2a).   (2.54)

We will consider the example of the damped spring. Then we have

r = (-b ± √(b^2 - 4mk)) / (2m).   (2.55)

For b > 0, there are three types of damping.
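The three regimes described next depend only on the sign of the discriminant b^2 - 4mk. A small Python sketch (the function name and sample values are illustrative, not from the text):

```python
def damping_type(m, b, k):
    # Classify the damped oscillator m x'' + b x' + k x = 0, b > 0
    disc = b * b - 4.0 * m * k
    if disc > 0:
        return "overdamped"          # two negative real roots
    if disc == 0:
        return "critically damped"   # one repeated root r = -b/(2m)
    return "underdamped"             # complex conjugate roots, decaying oscillation

assert damping_type(1.0, 3.0, 1.0) == "overdamped"         # 9 - 4 > 0
assert damping_type(1.0, 2.0, 1.0) == "critically damped"  # 4 - 4 = 0
assert damping_type(1.0, 1.0, 1.0) == "underdamped"        # 1 - 4 < 0
```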

I. Overdamped, b^2 > 4mk.

In this case we obtain two real roots. Since this is Case I for constant coefficient equations, we have that

x(t) = c1 e^(r1 t) + c2 e^(r2 t).

We note that b^2 - 4mk < b^2, so √(b^2 - 4mk) < b and the roots are both negative. Thus, both terms in the solution exponentially decay. The damping is so strong that there is no oscillation in the system.

II. Critically Damped, b^2 = 4mk.

In this case we obtain one real root. This is Case II for constant coefficient equations, and the solution is given by

x(t) = (c1 + c2 t) e^(rt),

where r = -b/(2m). The damping is just strong enough to hinder any oscillation. If it were any weaker, the discriminant would be negative and we would need the third case.

III. Underdamped, b^2 < 4mk.

In this case we have complex conjugate roots. We can write α = -b/(2m) and β = √(4mk - b^2)/(2m). Then the solution is

x(t) = e^(αt) (c1 cos βt + c2 sin βt).

These solutions exhibit oscillations, due to the trigonometric functions, but we see that the amplitude may decay in time due to the overall factor of e^(αt) when α < 0. Consider the case that the initial conditions give c1 = A and c2 = 0. (When is this?) Then, the solution,

x(t) = A e^(αt) cos βt,

looks like the plot in Figure 2.10.

2.6 Forced Oscillations

All of the systems presented in the last section exhibit the same general behavior when a damping term is present: the solution decays exponentially. An additional term can be added that causes even more complicated behavior.

In the case of LRC circuits, we have seen that the voltage source makes the system nonhomogeneous. It provides what is called a source term. Such terms can also arise in the mass-spring and pendulum systems. One can drive such systems by periodically pushing the mass, having the entire system moved, or otherwise impacting it with an outside force. Such systems are called forced, or driven.

Figure 2.10: A plot of underdamped oscillation given by x(t) = 2e^(-0.1t) cos 3t. The dashed lines are given by x(t) = ±2e^(-0.1t), indicating the bounds on the amplitude of the motion.

Typical systems in physics can be modeled by nonhomogeneous second order equations. Thus, we want to find solutions of equations of the form

L y(x) = a(x) y''(x) + b(x) y'(x) + c(x) y(x) = f(x).   (2.56)

Earlier we saw that the solution of such equations is found in two steps:

1. First, you solve the homogeneous equation for a general solution of L y_h = 0, y_h(x).
2. Then, you obtain a particular solution of the nonhomogeneous equation, y_p(x).

To date, we only know how to solve constant coefficient, homogeneous equations. So, by adding a nonhomogeneous term to such equations, we need to figure out what to do with the extra term. In other words, how does one find the particular solution? You could guess a solution, but that is not usually possible without a little bit of experience. So, we need some other methods. There are two main methods. In the first, the Method of Undetermined Coefficients, one makes an intelligent guess based on the form of f(x). In the second, the Method of Variation of Parameters, one develops the particular solution systematically. We will come back to the latter method later in this section.

2.6.1 Method of Undetermined Coefficients

Let's solve a simple differential equation highlighting how we can handle nonhomogeneous equations. Consider the equation

y'' + 2y' - 3y = 4.   (2.57)

The first step is to determine the solution of the homogeneous equation. Thus, we solve

y_h'' + 2y_h' - 3y_h = 0.   (2.58)

The characteristic equation is r^2 + 2r - 3 = 0. The roots are r = 1, -3. So, we can immediately write the homogeneous solution

y_h(x) = c1 e^x + c2 e^(-3x).

The second step is to find a particular solution of (2.57). What possible function can we insert into this equation such that only a 4 remains? If we try something proportional to x, then we are left with a linear function after inserting x and its derivatives. Perhaps a constant function, you might think. So, we could try an arbitrary constant, y = A. Let's see. Inserting y = A into (2.57), we obtain

-3A = 4.

Ah ha! We see that we can choose A = -4/3, and this works. So, we have a particular solution, y_p(x) = -4/3. This step is done. Combining our two solutions, we have the general solution to the original nonhomogeneous equation (2.57):

y(x) = y_h(x) + y_p(x) = c1 e^x + c2 e^(-3x) - 4/3.   (2.59)

Insert this solution into the equation and verify that it is indeed a solution. If we had been given initial conditions, we could now use them to determine the arbitrary constants.

What if we had a different source term? Consider the equation

y'' + 2y' - 3y = 4x.

The only thing that would change is our particular solution. So, we need a guess. We know a constant function does not work, by the last example. So, let's try y_p = Ax. Then, after substitution, we get

2A - 3Ax = 4x.

Picking A = -4/3 would get rid of the x terms, but will not cancel everything. We still have a constant left. So, we need something more general. Let's try a linear function, y_p(x) = Ax + B. Then we obtain

2A - 3(Ax + B) = 4x.

Equating the coefficients of the different powers of x on both sides, we find a system of equations for the undetermined coefficients:

-3A = 4,
2A - 3B = 0.   (2.60)

These are easily solved to obtain

A = -4/3,  B = (2/3)A = -8/9.   (2.61)

So, our particular solution is

y_p(x) = -(4/3) x - 8/9.
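Both particular solutions found so far can be checked numerically; a Python sketch using central differences (the sample points and tolerances are arbitrary choices):

```python
def lhs(y, x, h=1e-4):
    # Central-difference evaluation of y'' + 2y' - 3y at x
    ypp = (y(x - h) - 2 * y(x) + y(x + h)) / h**2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp + 2 * yp - 3 * y(x)

const_guess = lambda x: -4.0 / 3.0                     # particular solution for f(x) = 4
linear_guess = lambda x: -4.0 * x / 3.0 - 8.0 / 9.0    # particular solution for f(x) = 4x

for x in (0.0, 0.5, 1.0):
    assert abs(lhs(const_guess, x) - 4.0) < 1e-5
    assert abs(lhs(linear_guess, x) - 4.0 * x) < 1e-5
```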

This gives the general solution to the nonhomogeneous problem as

y(x) = y_h(x) + y_p(x) = c1 e^x + c2 e^(-3x) - (4/3) x - 8/9.   (2.62)

There are general forms that you can guess based upon the form of the driving term, f(x). Some examples are given in Table 2.1.

Table 2.1: Forms of the guess for typical driving terms.

  f(x)                                       Guess
  a_n x^n + a_{n-1} x^{n-1} + ... + a_0      A_n x^n + A_{n-1} x^{n-1} + ... + A_0
  a e^(bx)                                   A e^(bx)
  a cos ωx + b sin ωx                        A cos ωx + B sin ωx

So, the procedure is simple. Given f(x) in a particular form, you make an appropriate guess up to some unknown parameters, or coefficients. Inserting the guess leads to a system of equations for the unknown coefficients. Solve the system and you have your solution. This solution is then added to the general solution of the homogeneous differential equation. More general applications are covered in a standard text on differential equations.

As a final example, let's consider the equation

y'' + 2y' - 3y = 2e^(-3x).

According to the above, we would guess a solution of the form y_p = A e^(-3x). Inserting our guess, we find

0 = 2e^(-3x).

Oops! The coefficient, A, disappeared! We cannot solve for it. What went wrong? The answer lies in the general solution of the homogeneous problem. Note that e^x and e^(-3x) are solutions to the homogeneous problem. So, a multiple of e^(-3x) will not get us anywhere.

It turns out that there is one further modification of the method. If our driving term contains terms that are solutions of the homogeneous problem, then we need to make a guess consisting of the smallest possible power of x times the function which is no longer a solution of the homogeneous problem. Thus, we guess

y_p(x) = A x e^(-3x).

We compute the derivatives of our guess,

y_p' = A(1 - 3x) e^(-3x),
y_p'' = A(9x - 6) e^(-3x).

Inserting these into the equation, we obtain

[(9x - 6) + 2(1 - 3x) - 3x] A e^(-3x) = 2e^(-3x),

or -4A = 2. So, A = -1/2 and

y_p(x) = -(1/2) x e^(-3x).

2.6.2 Method of Variation of Parameters

A more systematic way to find particular solutions is through the use of the Method of Variation of Parameters. The derivation is a little messy, and the solution is sometimes messy, but the application of the method is straightforward if you can do the required integrals. We will first derive the needed equations and then do some examples.

We begin with the nonhomogeneous equation. Let's assume it is of the standard form

a(x) y''(x) + b(x) y'(x) + c(x) y(x) = f(x).   (2.63)

We know that the solution of the homogeneous equation can be written in terms of two linearly independent solutions, which we will call y1(x) and y2(x):

y_h(x) = c1 y1(x) + c2 y2(x).

If one replaces the constants with functions, then one no longer has a solution to the homogeneous equation. Is it possible that you could stumble across the right functions with which to replace the constants, and somehow end up with f(x) when the result is inserted into the left side of the differential equation? It turns out that you can.

So, let's assume that the constants are replaced with two unknown functions, which we will call c1(x) and c2(x). This change of the parameters is where the name of the method derives. Namely, we are assuming that a particular solution takes the form

y_p(x) = c1(x) y1(x) + c2(x) y2(x).   (2.64)

If this is to be a solution, then insertion into the differential equation should make it true. To do this, we will first need to compute some derivatives. The first derivative is given by

y_p'(x) = c1(x) y1'(x) + c2(x) y2'(x) + c1'(x) y1(x) + c2'(x) y2(x).   (2.65)

Next we will need the second derivative. But, this will give us eight terms. So, we will first make an assumption: let's assume that the last two terms add to zero,

c1'(x) y1(x) + c2'(x) y2(x) = 0.   (2.66)

It turns out that we would get the same results in the end if we did not assume this. The important thing is that it works! So, we now have the first derivative as

y_p'(x) = c1(x) y1'(x) + c2(x) y2'(x).   (2.67)

The second derivative then has only four terms:

y_p''(x) = c1(x) y1''(x) + c2(x) y2''(x) + c1'(x) y1'(x) + c2'(x) y2'(x).   (2.68)

Now that we have the derivatives, we can insert our guess into the differential equation. Thus, we have

f(x) = a(x) (c1(x) y1''(x) + c2(x) y2''(x) + c1'(x) y1'(x) + c2'(x) y2'(x))
     + b(x) (c1(x) y1'(x) + c2(x) y2'(x))
     + c(x) (c1(x) y1(x) + c2(x) y2(x)).   (2.69)

Regrouping the terms, we obtain

f(x) = c1(x) (a(x) y1''(x) + b(x) y1'(x) + c(x) y1(x))
     + c2(x) (a(x) y2''(x) + b(x) y2'(x) + c(x) y2(x))
     + a(x) (c1'(x) y1'(x) + c2'(x) y2'(x)).   (2.70)

Note that the first two terms vanish, since y1 and y2 are solutions of the homogeneous problem. This leaves the equation

c1'(x) y1'(x) + c2'(x) y2'(x) = f(x)/a(x).   (2.71)

In summary, we have assumed a particular solution of the form

y_p(x) = c1(x) y1(x) + c2(x) y2(x).

This is only possible if the unknown functions c1(x) and c2(x) satisfy the system of equations

c1'(x) y1(x) + c2'(x) y2(x) = 0,
c1'(x) y1'(x) + c2'(x) y2'(x) = f(x)/a(x).   (2.72)

It is standard to solve this system for the derivatives of the unknown functions and then present the integrated forms. However, one could just start from here.

Example. Consider the problem: y'' - y = e^(2x). We want the general solution of this nonhomogeneous problem. The general solution to the homogeneous problem, y_h'' - y_h = 0, is

y_h(x) = c1 e^x + c2 e^(-x).

In order to use the Method of Variation of Parameters, we seek a particular solution of the form

y_p(x) = c1(x) e^x + c2(x) e^(-x).

We find the unknown functions by solving the system in (2.72), which in this case becomes

c1'(x) e^x + c2'(x) e^(-x) = 0,
c1'(x) e^x - c2'(x) e^(-x) = e^(2x).   (2.73)

Adding these equations, we find that

2 c1'(x) e^x = e^(2x)  →  c1'(x) = (1/2) e^x.

Solving for c1(x), we find

c1(x) = (1/2) ∫ e^x dx = (1/2) e^x.

Subtracting the equations in the system yields

2 c2'(x) e^(-x) = -e^(2x)  →  c2'(x) = -(1/2) e^(3x).

Thus,

c2(x) = -(1/2) ∫ e^(3x) dx = -(1/6) e^(3x).

The particular solution is found by inserting these results into y_p:

y_p(x) = c1(x) y1(x) + c2(x) y2(x)
       = ((1/2) e^x) e^x + (-(1/6) e^(3x)) e^(-x)
       = (1/3) e^(2x).   (2.74)

Thus, we have the general solution of the nonhomogeneous problem as

y(x) = c1 e^x + c2 e^(-x) + (1/3) e^(2x).   (2.75)

Example. Now consider the problem: y'' + 4y = sin x. The solution to the homogeneous problem is

y_h(x) = c1 cos 2x + c2 sin 2x.   (2.76)

We now seek a particular solution of the form

y_p(x) = c1(x) cos 2x + c2(x) sin 2x.

We let y1(x) = cos 2x and y2(x) = sin 2x, a(x) = 1, and f(x) = sin x in system (2.72):

c1'(x) cos 2x + c2'(x) sin 2x = 0,
-2 c1'(x) sin 2x + 2 c2'(x) cos 2x = sin x.   (2.77)
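Before solving this second system by hand, it is worth confirming the first example numerically. The Python sketch below checks that c1' = e^x/2 and c2' = -e^(3x)/2 satisfy the system (2.73), and that y_p = e^(2x)/3 solves y'' - y = e^(2x) (sample points are arbitrary):

```python
import math

yp = lambda x: math.exp(2 * x) / 3.0   # particular solution found above

def ode_lhs(x, h=1e-4):
    # Central-difference evaluation of y'' - y at x
    ypp = (yp(x - h) - 2 * yp(x) + yp(x + h)) / h**2
    return ypp - yp(x)

def c1p(x): return 0.5 * math.exp(x)        # c1'(x), from adding the system's equations
def c2p(x): return -0.5 * math.exp(3 * x)   # c2'(x), from subtracting them

for x in (0.0, 0.3, 1.0):
    # System (2.73): c1' e^x + c2' e^(-x) = 0 and c1' e^x - c2' e^(-x) = e^(2x)
    assert abs(c1p(x) * math.exp(x) + c2p(x) * math.exp(-x)) < 1e-12
    assert abs(c1p(x) * math.exp(x) - c2p(x) * math.exp(-x) - math.exp(2 * x)) < 1e-10
    # And y_p = e^(2x)/3 indeed satisfies y'' - y = e^(2x)
    assert abs(ode_lhs(x) - math.exp(2 * x)) < 1e-4
```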

Now, use your favorite method for solving a system of two equations and two unknowns. In this case, we can multiply the first equation by 2 sin 2x and the second equation by cos 2x. Adding the resulting equations will eliminate the c1' terms. Thus, we have

c2'(x) = (1/2) sin x cos 2x = (1/2) (2 cos^2 x - 1) sin x.

Inserting this into the first equation of the system, we have

c1'(x) = -c2'(x) (sin 2x / cos 2x) = -(1/2) sin x sin 2x = -sin^2 x cos x.

These can easily be solved:

c1(x) = -∫ sin^2 x cos x dx = -(1/3) sin^3 x,
c2(x) = (1/2) ∫ (2 cos^2 x - 1) sin x dx = (1/2) (cos x - (2/3) cos^3 x).

The final step in getting the particular solution is to insert these functions into y_p(x). This gives

y_p(x) = c1(x) y1(x) + c2(x) y2(x)
       = (-(1/3) sin^3 x) cos 2x + ((1/2) cos x - (1/3) cos^3 x) sin 2x
       = (1/3) sin x.   (2.78)

So, the general solution is

y(x) = c1 cos 2x + c2 sin 2x + (1/3) sin x.

2.7 Numerical Solutions of ODEs

So far we have seen some of the standard methods for solving first and second order differential equations. However, we have had to restrict ourselves to very special cases in order to get nice analytical solutions to our initial value problems. While these are not the only equations for which we can get exact results (see Section 2.10 for another common class of second order differential equations), there are many cases in which exact solutions are not possible. In such cases we have to rely on approximation techniques, including the numerical solution of the equation at hand.

The use of numerical methods to obtain approximate solutions of differential equations and systems of differential equations has been known for some time. However, with the advent of powerful computers and desktop computers, we can now solve many of these problems with relative ease. The simple ideas used to solve first order differential equations can be extended to the solutions of more complicated systems of partial differential equations, such as the large scale problems of modelling ocean dynamics, weather systems and even cosmological problems stemming from general relativity.

In this section we will look at the simplest method for solving first order equations, Euler's Method. While it is not the most efficient method, it does provide us with a picture of how one proceeds and can be improved by introducing better techniques, which are typically covered in a numerical analysis text.

Let's consider the class of first order initial value problems of the form

dy/dx = f(x, y),  y(x0) = y0.   (2.79)

We are interested in finding the solution y(x) of this equation passing through the initial point (x0, y0) in the xy-plane, for values of x in the interval [a, b], where a = x0. We will seek approximations of the solution at N points, labeled x_n for n = 1, ..., N. For equally spaced points we have Δx = x1 - x0 = x2 - x1, etc. We can write these as

x_n = x0 + n Δx.

In Figure 2.11 we show three such points on the x-axis.

How do we find the solution for other x values? We first note that the differential equation gives us the slope of the tangent line at a point (x, y(x)) on the solution y(x). The slope is f(x, y(x)). We already know one point on the solution, (x0, y(x0)) = (x0, y0). Referring to Figure 2.11, we see the tangent line drawn at (x0, y0); we will rely on this figure in what follows.

Figure 2.11: The basics of Euler's Method. An interval of the x-axis is broken into N subintervals. The approximations to the solution are found using the slope of the tangent to the solution, given by f(x, y). Knowing previous approximations at (x_{n-1}, y_{n-1}), one can determine the next approximation, y_n.

While we do not know the solution, we do know, by the differential equation, that the slope of the tangent to the solution curve at (x0, y0) is y'(x0) = f(x0, y0). Referring to Figure 2.11, we can determine this tangent line. A vertical line at x = x1 intersects both the solution curve and the tangent line, and, as seen in the figure, the intersection with the tangent line is in theory close to the point on the solution curve. So, we will designate that intersection value, y1, as the approximation of the solution y(x1). We just need to determine y1.

The idea is simple. We approximate the derivative in our differential equation by the difference quotient

dy/dx ≈ (y1 - y0)/(x1 - x0) = (y1 - y0)/Δx.   (2.80)

But, we have by the differential equation that the slope of the tangent at (x0, y0) is f(x0, y0). Thus,

(y1 - y0)/Δx ≈ f(x0, y0).   (2.81)

So, we can solve this equation for y1 to obtain

y1 = y0 + Δx f(x0, y0).   (2.82)

This gives y1 in terms of quantities that we know.

We now proceed to approximate y(x2). Referring to Figure 2.11, we see that this can be done by using the slope of the solution curve at (x1, y1). The corresponding tangent line is shown passing through (x1, y1), and we can then get the value of y2. Following the previous argument, we find that

y2 = y1 + Δx f(x1, y1).   (2.83)

Continuing this procedure for all x_n, we arrive at the following numerical scheme for determining a numerical solution, Euler's Method:

y0 = y(x0),
y_n = y_{n-1} + Δx f(x_{n-1}, y_{n-1}),  n = 1, ..., N.   (2.84)

Example. We will consider a standard example for which we know the exact solution. This way we can compare our results. The problem is: given that

dy/dx = x + y,  y(0) = 1,   (2.85)

find an approximation for y(1).

First, we will do this by hand. We will break up the interval [0, 1], since we want the solution at x = 1 and the initial value is at x = 0. Let Δx = 0.5. Then, x0 = 0, x1 = 0.5 and x2 = 1.0. Note that N = (b - a)/Δx = 2.

We can carry out Euler's Method systematically by setting up a table for the needed values. Such a table is shown in Table 2.1. Note how the table is set up. There is a column for each x_n and y_n. The first row is the initial condition. We also made use of the function f(x, y) in computing the y_n's; this sometimes makes the computation easier. As a result, we find that the desired approximation is y(1) ≈ y2 = 2.5.

Table 2.1: Application of Euler's Method for y' = x + y, y(0) = 1 and Δx = 0.5.

  n   x_n   y_n = 0.5 x_{n-1} + 1.5 y_{n-1}
  0   0     1
  1   0.5   0.5(0) + 1.5(1.0) = 1.5
  2   1.0   0.5(0.5) + 1.5(1.5) = 2.5

Is this a good result? Well, we could make the spatial increments smaller. Let's repeat the procedure for Δx = 0.2, or N = 5. The results are in Table 2.2. Now we see that the approximation is y(1) ≈ 2.97664.

Table 2.2: Application of Euler's Method for y' = x + y, y(0) = 1 and Δx = 0.2.

  n   x_n   y_n = 0.2 x_{n-1} + 1.2 y_{n-1}
  0   0     1
  1   0.2   0.2(0) + 1.2(1.0) = 1.2
  2   0.4   0.2(0.2) + 1.2(1.2) = 1.48
  3   0.6   0.2(0.4) + 1.2(1.48) = 1.856
  4   0.8   0.2(0.6) + 1.2(1.856) = 2.3472
  5   1.0   0.2(0.8) + 1.2(2.3472) = 2.97664

So, it looks like the value is near 3, but we cannot say much more. Decreasing Δx further shows that we are beginning to converge to a solution. We see this in Table 2.3.

Table 2.3: Results of Euler's Method for y' = x + y, y(0) = 1 and varying Δx.

  Δx       y_N ≈ y(1)
  0.2      2.97664
  0.1      3.187484920
  0.01     3.409627659
  0.001    3.433847864
  0.0001   3.436291854

Of course, these values were not computed by hand. The last computation would have taken 1000 lines in our table, or at least 40 pages! One could use a computer to do this. A simple code in Maple would look like the following:

> restart:
> f:=(x,y)->y+x;
> a:=0: b:=1: N:=100: h:=(b-a)/N;
> x[0]:=0: y[0]:=1:
> for i from 1 to N do
>   y[i]:=y[i-1]+h*f(x[i-1],y[i-1]):
>   x[i]:=x[0]+h*i:
> od:
> evalf(y[N]);

In this case we can simply compare with the exact solution, which is easily found as

y(x) = 2e^x - x - 1.

(The reader can verify this.) So, the value we are seeking is

y(1) = 2e - 2 = 3.4365636....

Thus, even the last numerical solution was off by only about 0.00027.
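The same computation can be written in Python (a sketch paralleling the Maple code above); it reproduces the hand-computed entries of Tables 2.1 and 2.2:

```python
def euler(f, x0, y0, b, N):
    # Euler's Method: y_n = y_{n-1} + dx * f(x_{n-1}, y_{n-1}), equation (2.84)
    dx = (b - x0) / N
    x, y = x0, y0
    for _ in range(N):
        y = y + dx * f(x, y)
        x = x + dx
    return y

f = lambda x, y: x + y   # the example dy/dx = x + y, y(0) = 1

assert abs(euler(f, 0.0, 1.0, 1.0, 2) - 2.5) < 1e-12      # Table 2.1, dx = 0.5
assert abs(euler(f, 0.0, 1.0, 1.0, 5) - 2.97664) < 1e-9   # Table 2.2, dx = 0.2
```

Calling euler(f, 0.0, 1.0, 1.0, N) with larger N reproduces the convergence toward 2e - 2 seen in Table 2.3.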

Adding a few extra lines for plotting, we can see visually how well the approximations compare to the exact solution. The Maple code for doing such a plot is given below.

> with(plots):
> Data:=[seq([x[i],y[i]],i=0..N)]:
> P1:=pointplot(Data,symbol=DIAMOND):
> Sol:=t->-t-1+2*exp(t);
> P2:=plot(Sol(t),t=a..b,Sol=0..Sol(b)):
> display({P1,P2});

We show in Figures 2.12 and 2.13 the results for N = 10 and N = 100. In Figure 2.12 we can see how quickly the numerical solution diverges from the exact solution. In Figure 2.13 we can see that visually the solutions agree, but we note from Table 2.3 that for Δx = 0.01 the solution is still off in the second decimal place, with a relative error of about 0.8%.

Figure 2.12: A comparison of the results of Euler's Method to the exact solution for y' = x + y, y(0) = 1 and N = 10.

Figure 2.13: A comparison of the results of Euler's Method to the exact solution for y' = x + y, y(0) = 1 and N = 100.

Why would we use a numerical method when we have the exact solution? Exact solutions can serve as test cases for our methods. We can make sure our code works before applying it to problems whose solution is not known.

There are many other methods for solving first order equations. One commonly used method is the fourth order Runge-Kutta method. This method has smaller errors at each step as compared to Euler's Method. It is well suited for programming and comes built in to many packages like Maple and Matlab. We will not go further into the Runge-Kutta Method here; you can find more about it in a numerical analysis text.

Typically, such routines are set up to handle systems of first order equations. In fact, it is well known that nth order equations can be written as a system of n first order equations. Consider the simple second order equation

y'' = f(x, y).

This is a larger class of equations than our second order constant coefficient equations. We can turn this into a system of two first order differential equations by letting u = y and v = y' = u'. Then, v' = y'' = f(x, u). So, we have the first order system

u' = v,
v' = f(x, u).   (2.86)
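The rewriting u = y, v = y' can be spelled out in code. Below is a Python sketch applying Euler's Method to the system (2.86), tested on y'' = -y with y(0) = 1, y'(0) = 0, whose exact solution is cos x. The step size is an illustrative choice, not from the text:

```python
import math

def euler_system(f, x0, u0, v0, b, N):
    # Integrate u' = v, v' = f(x, u) with Euler steps
    dx = (b - x0) / N
    x, u, v = x0, u0, v0
    for _ in range(N):
        u, v = u + dx * v, v + dx * f(x, u)
        x = x + dx
    return u

# Test case: y'' = -y, y(0) = 1, y'(0) = 0, so y(x) = cos x
u1 = euler_system(lambda x, u: -u, 0.0, 1.0, 0.0, 1.0, 10000)
assert abs(u1 - math.cos(1.0)) < 1e-3
```

Note the simultaneous update of u and v, which keeps both variables one step behind, as Euler's Method requires.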

However, we will see that systems of differential equations do arise naturally in physics. Such systems are often coupled equations and lead to interesting behaviors. We will reserve solving the coupled system until the next chapter, after exploring the needed mathematics.

2.8 Coupled Oscillators

In the last section we saw that the numerical solution of second order equations, or higher, can be cast into systems of first order equations. Such systems are typically coupled, in the sense that the solution of at least one of the equations in the system depends on knowing one of the other solutions in the system.

There are many problems in physics that result in systems of equations. This is because the most basic law of physics is given by Newton's Second Law, F = ma, which states that if a body experiences a net force, it will accelerate. Since a = x'', we have in general a system of second order differential equations for three dimensional problems, or one second order differential equation for one dimensional problems. In many physical systems this coupling takes place naturally. We will introduce a simple model in this section to illustrate the coupling of simple oscillators.

We have already seen the simple problem of a mass on a spring, as shown in Figure 2.14. Recall that the net force in this case is the restoring force of the spring, given by Hooke's Law,

    F_s = -kx,

where k > 0 is the spring constant and x is the elongation of the spring. When x is positive, the spring force is negative, and when x is negative, the spring force is positive. The equation for simple harmonic motion for the mass-spring system was found to be

    m x'' + kx = 0.

Figure 2.14: Spring-Mass system.

We first set y = x' and then rewrite the second order equation in terms of x and y. Thus, we have

    x' = y,
    y' = -(k/m) x.                                         (2.87)

The coefficient matrix for this system is

    [  0    1 ]
    [ -ω²   0 ],

where ω² = k/m.

One can look at more complicated spring-mass systems. Consider two blocks attached with two springs, as in Figure 2.15. In this case we apply Newton's second law for each block. We will designate the elongations of each spring from equilibrium as x1 and x2. These are shown in Figure 2.15.

For mass m1, the forces acting on it are due to each spring. The first spring, with spring constant k1, provides a force on m1 of -k1 x1. The second spring is stretched, or compressed, based upon the relative locations of the two masses. So, the second spring will exert a force on m1 of k2(x2 - x1).

Similarly, the only force acting directly on mass m2 is provided by the restoring force from spring 2. So, that force is given by -k2(x2 - x1). The reader should think about the signs in each case.

Putting this all together, we apply Newton's Second Law to both masses. We obtain the two equations

    m1 x1'' = -k1 x1 + k2 (x2 - x1),
    m2 x2'' = -k2 (x2 - x1).                               (2.88)

Thus, we see that we have a coupled system of two second order differential equations.
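The coefficient matrix in (2.87) already encodes the oscillatory behavior: its characteristic equation is λ² + ω² = 0, so its eigenvalues are ±iω. A quick numerical check, with an assumed illustrative value k/m = 4:

```python
import cmath

omega_sq = 4.0                       # k/m, chosen for illustration
A = [[0.0, 1.0],
     [-omega_sq, 0.0]]               # coefficient matrix of x' = y, y' = -(k/m) x

# Eigenvalues of a 2x2 matrix from lambda^2 - tr(A) lambda + det(A) = 0:
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)                    # purely imaginary, +/- i*omega
```

Purely imaginary eigenvalues correspond to undamped oscillation, which is exactly the simple harmonic motion we started from.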

One can rewrite this system of two second order equations as a system of four first order equations by letting x3 = x1' and x4 = x2'. This leads to the system

    x1' = x3,
    x2' = x4,
    x3' = -(k1/m1) x1 + (k2/m1)(x2 - x1),
    x4' = -(k2/m2)(x2 - x1).                               (2.89)

This system can be written more compactly in matrix form:

    d/dt [x1]   [ 0             0        1  0 ] [x1]
         [x2] = [ 0             0        0  1 ] [x2]
         [x3]   [ -(k1+k2)/m1   k2/m1    0  0 ] [x3]
         [x4]   [ k2/m2        -k2/m2    0  0 ] [x4]       (2.90)

However, as we will see, before we can solve this system of first order equations, we need to recall a few things from linear algebra. This will be done in the next chapter.

Figure 2.15: Spring-Mass system with two masses, m1 and m2, coupled by springs with constants k1 and k2; x1 and x2 denote the displacements.

2.9 The Nonlinear Pendulum - Optional

We can also make the system more realistic by adding damping. This could be due to energy loss in the way the string is attached to the support

or due to the drag on the mass, etc. Assuming that the damping is proportional to the angular velocity, we have equations for the damped nonlinear and damped linear pendula:

    L θ'' + b θ' + g sin θ = 0,                            (2.91)
    L θ'' + b θ' + g θ = 0.                                (2.92)

Finally, we can add forcing. Imagine that the support is attached to a device that makes the system oscillate horizontally at some frequency. Then we could have equations such as

    L θ'' + b θ' + g sin θ = F cos ωt.                     (2.93)

We will look at these and other oscillation problems later in our discussion.

Before returning to a study of the equilibrium solutions of the nonlinear pendulum, we will look at how far we can get at obtaining analytical solutions. First, we investigate the simple linear pendulum. The linear pendulum equation (2.29) is a constant coefficient second order linear differential equation. The roots of the characteristic equation are r = ±√(g/L) i. Thus, the general solution takes the form

    θ(t) = c1 cos(√(g/L) t) + c2 sin(√(g/L) t).            (2.94)

We note that this is usually simplified by introducing the angular frequency

    ω ≡ √(g/L).                                            (2.95)

One consequence of this solution, which is used often in introductory physics, is an expression for the period of oscillation of a simple pendulum. The period is found to be

    T = 2π/ω = 2π √(L/g).                                  (2.96)

As we have seen, this value for the period of a simple pendulum was derived assuming a small angle approximation. How good is this

approximation? What is meant by a small angle? We could recall from calculus that the Taylor series approximation of sin θ about θ = 0 is

    sin θ = θ - θ³/3! + θ⁵/5! - ....                       (2.97)

One can obtain a bound on the error when truncating this series to one term after taking a numerical analysis course. But we can just simply plot the relative error, which is defined as

    Relative Error = |sin θ - θ| / |sin θ|.

A plot of the relative error is given in Figure 2.16. Thus, for θ ≈ 0.4 radians (about 23 degrees) the relative error is about 2.7%.

Figure 2.16: The relative error in percent when approximating sin θ by θ.

We would like to do better than this. So, we now turn to the nonlinear pendulum. We first rewrite Equation (2.93), without damping or forcing, in the simpler form

    θ'' + ω² sin θ = 0.                                    (2.98)

We next employ a technique that is useful for equations of the form

    θ'' + F(θ) = 0

when it is easy to integrate the function F(θ). Namely, we note that

    d/dt [ (1/2)(θ')² + ∫^{θ(t)} F(φ) dφ ] = (θ'' + F(θ)) θ'.
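The small angle figures quoted above are easy to reproduce. A short Python sketch of the relative error:

```python
import math

def rel_error_percent(theta):
    """Percent relative error when sin(theta) is replaced by theta."""
    return abs(math.sin(theta) - theta) / abs(math.sin(theta)) * 100

for theta in (0.1, 0.2, 0.3, 0.4):
    print(theta, rel_error_percent(theta))
```

At θ = 0.4 radians the error is still under 3%, but it grows quickly with the amplitude, which is why the linear pendulum is only trustworthy for small swings.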

For our problem, we multiply Equation (2.98) by θ',

    θ'' θ' + ω² θ' sin θ = 0,

and note that the left side of this equation is a perfect derivative. Thus,

    d/dt [ (1/2)(θ')² - ω² cos θ ] = 0.

Therefore, the quantity in the brackets is a constant. So, we can write

    (1/2)(θ')² - ω² cos θ = c.                             (2.99)

Solving for θ', we obtain

    dθ/dt = √( 2(c + ω² cos θ) ).

This equation is a separable first order equation, and we can rearrange and integrate the terms to find that

    t = ∫ dt = ∫ dθ / √( 2(c + ω² cos θ) ).

Of course, one needs to be able to do the integral. When one gets a solution in this implicit form, one says that the problem has been solved by quadratures. Namely, the solution is given in terms of some integral. In fact, the above integral can be transformed into what is known as an elliptic integral of the first kind. We will rewrite our result and then use it to obtain an approximation to the period of oscillation of our nonlinear pendulum, leading to corrections to the linear result found earlier.

We will first rewrite the constant found in (2.99). This requires a little physics. The swinging of a mass on a string, assuming no energy loss at the pivot point, is a conservative process. Namely, the total of the kinetic and gravitational potential energies is a constant. Thus, the total mechanical energy is conserved. The kinetic energy of the mass on the string is given as

    T = (1/2) m v² = (1/2) m L² (θ')².

The potential energy is the gravitational potential energy. If we set the potential energy to zero at the bottom of the swing, then the potential

energy is U = mgh, where h is the height of the mass above the bottom of the swing. A little trigonometry gives h = L(1 - cos θ). So,

    U = mgL(1 - cos θ).

Thus, the total mechanical energy is

    E = (1/2) m L² (θ')² + mgL(1 - cos θ).                 (2.100)

We note that a little rearranging shows that we can relate this to Equation (2.99):

    (1/2)(θ')² - ω² cos θ = (1/(mL²)) E - ω² = c.

We can use Equation (2.100) to get a value for the total energy. At the top of the swing the mass is not moving, if only for a moment. Thus, the kinetic energy is zero and the total energy is pure potential energy. Letting θ0 denote the angle at the highest position, we have that

    E = mgL(1 - cos θ0) = m L² ω² (1 - cos θ0).

Therefore, we have found that

    (1/2)(θ')² - ω² cos θ = -ω² cos θ0.                    (2.101)

Using the half angle formula,

    sin²(θ/2) = (1/2)(1 - cos θ),

we can rewrite Equation (2.101) as

    (1/2)(θ')² = 2ω² [ sin²(θ0/2) - sin²(θ/2) ].           (2.102)

Solving for θ', we have

    dθ/dt = 2ω [ sin²(θ0/2) - sin²(θ/2) ]^{1/2}.           (2.103)

One can now apply separation of variables and obtain an integral similar to the solution we had obtained previously. Noting that a motion from θ = 0 to θ = θ0 is a quarter of a cycle, we have that

    T = (2/ω) ∫_0^{θ0} dθ / √( sin²(θ0/2) - sin²(θ/2) ).   (2.104)

This result is not much different than our previous result, but we can now easily transform the integral into an elliptic integral. We define

    z = sin(θ/2) / sin(θ0/2)   and   k = sin(θ0/2).

Then Equation (2.104) becomes

    T = (4/ω) ∫_0^1 dz / √( (1 - z²)(1 - k²z²) ).          (2.105)

This is done by noting that dz = (1/(2k)) cos(θ/2) dθ = (1/(2k)) (1 - k²z²)^{1/2} dθ and that sin²(θ0/2) - sin²(θ/2) = k²(1 - z²).

The integral in this result is an elliptic integral of the first kind. In particular, the elliptic integral of the first kind is defined as

    F(φ, k) ≡ ∫_0^φ dθ / √(1 - k² sin²θ) = ∫_0^{sin φ} dz / √( (1 - z²)(1 - k²z²) ).

In some contexts, this is known as the incomplete elliptic integral of the first kind, and K(k) = F(π/2, k) is called the complete elliptic integral of the first kind. There are tables of values for elliptic integrals, and now one can use a computer algebra system to compute values of such integrals.

For small angles, we have that k is small. So, for small k, we can develop a series expansion for the period. This is simply done by first expanding the integrand as

    (1 - k²z²)^{-1/2} = 1 + (1/2) k²z² + (3/8) k⁴z⁴ + O((kz)⁶).
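Before carrying out the expansion, the exact result T = (4/ω)K(k) can be sanity-checked by direct quadrature. The sketch below uses Simpson's rule with assumed values L/g = 1 and amplitude θ0 = 0.5 rad, and compares the exact period with the linear value 2π/ω and with the three-term series for the period, T ≈ (2π/ω)(1 + k²/4 + 9k⁴/64), derived just below:

```python
import math

def K(k, n=2000):
    """Complete elliptic integral of the first kind via Simpson's rule (n even)."""
    h = (math.pi / 2) / n
    f = lambda t: 1.0 / math.sqrt(1.0 - (k * math.sin(t)) ** 2)
    s = f(0.0) + f(math.pi / 2)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

L_over_g = 1.0                   # assumed L/g for illustration
theta0 = 0.5                     # amplitude in radians
k = math.sin(theta0 / 2)

T_linear = 2 * math.pi * math.sqrt(L_over_g)
T_exact = 4 * math.sqrt(L_over_g) * K(k)
T_series = T_linear * (1 + k ** 2 / 4 + 9 * k ** 4 / 64)
print(T_linear, T_exact, T_series)
```

The nonlinear period comes out slightly longer than the linear one, and the truncated series already matches the quadrature to several decimal places at this amplitude.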

This is the binomial expansion, which we review later in the text. Inserting the expansion in the integrand and integrating term by term, one finds that

    T = 2π √(L/g) [ 1 + (1/4) k² + (9/64) k⁴ + ... ].      (2.106)

This expression gives further corrections to the linear result, which only provides the first term. In Figure 2.17 we show the relative errors incurred when keeping the k² and k⁴ terms versus not keeping them.

Figure 2.17: The relative error in percent when approximating the exact period of a nonlinear pendulum with one, two, or three terms in Equation (2.106).

2.10 Cauchy-Euler Equations - Optional

Another class of solvable second order differential equations of interest are the Cauchy-Euler equations. These are given by

    a x² y''(x) + b x y'(x) + c y(x) = 0.                  (2.107)

Note that in such equations the power of x in the coefficients matches the order of the derivative in that term. These equations are solved in a manner similar to the constant coefficient equations.

One begins by making the guess y(x) = x^r. This leads to the characteristic equation

    a r(r - 1) + b r + c = 0.                              (2.108)

Again, one has a quadratic equation, and the nature of the roots leads to three classes of solutions:

1. Real, distinct roots r1, r2. In this case the solutions corresponding to each root are linearly independent. Therefore, the general solution is simply y(x) = c1 x^{r1} + c2 x^{r2}.

2. Real, equal roots r1 = r2 = r. In this case the solutions corresponding to each root are linearly dependent. To find a second linearly independent solution, one uses the Method of Reduction of Order. This gives the second solution as x^r ln|x|. Therefore, the general solution is found as y(x) = (c1 + c2 ln|x|) x^r.

3. Complex conjugate roots r1, r2 = α ± iβ. In this case the solutions corresponding to each root are linearly independent. These complex exponentials can be rewritten in terms of trigonometric functions. Namely, one has that x^α cos(β ln|x|) and x^α sin(β ln|x|) are two linearly independent solutions. Therefore, the general solution becomes y(x) = x^α (c1 cos(β ln|x|) + c2 sin(β ln|x|)).

Example 1. x² y'' + 5x y' + 12y = 0

As with the constant coefficient equations, we begin by writing down the characteristic equation. A simple computation gives

    0 = r(r - 1) + 5r + 12 = r² + 4r + 12 = (r + 2)² + 8,  (2.109)

and one determines the roots to be r = -2 ± 2√2 i. Therefore, the general solution is

    y(x) = x^{-2} [ c1 cos(2√2 ln|x|) + c2 sin(2√2 ln|x|) ].
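The example can be checked numerically. The sketch below recomputes the roots with the quadratic formula and verifies by central finite differences that y(x) = x^{-2} cos(2√2 ln x) satisfies the equation; the sample point x = 1.7 is an arbitrary choice:

```python
import cmath
import math

# Characteristic equation of x^2 y'' + 5x y' + 12 y = 0:
# r(r - 1) + 5r + 12 = 0, i.e. r^2 + 4r + 12 = 0.
a, b, c = 1.0, 4.0, 12.0
disc = cmath.sqrt(b * b - 4 * a * c)
r1, r2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
print(r1, r2)                        # -2 +/- 2*sqrt(2) i

# Spot check one real solution by central finite differences.
y = lambda x: x ** -2 * math.cos(2 * math.sqrt(2) * math.log(x))
x, h = 1.7, 1e-5
yp = (y(x + h) - y(x - h)) / (2 * h)
ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
residual = x * x * ypp + 5 * x * yp + 12 * y(x)
print(residual)                      # approximately zero
```

A residual near machine-level noise confirms both the roots and the form of the general solution.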

Chapter 3: Linear Algebra

As the reader is aware by now, calculus has its roots in physics and has become a very useful tool for modelling the physical world. Another very important area of mathematics is linear algebra. Physics students who have taken a course in linear algebra in a mathematics department might not come away with this perception. It is not until students take more advanced classes in physics that they begin to realize that a good grounding in linear algebra can lead to a better understanding of the behavior of physical systems. In this chapter we will introduce some of the basics of linear algebra for finite dimensional vector spaces, and we will reinforce these concepts through generalizations in later chapters to infinite dimensional vector spaces. In keeping with the theme of our text, we will apply some of these ideas to the coupled systems introduced in the last chapter.

3.1 Vector Spaces

Much of the discussion and terminology that we will use comes from the theory of vector spaces. Until now you may only have dealt with finite dimensional vector spaces. Even then, you might only be comfortable with vectors in two and three dimensions. We will review a little of what we know about finite dimensional vector spaces, which will be useful in later applications in the text.

The notion of a vector space is a generalization of the three dimensional vector spaces that you have seen in introductory physics and calculus. In three dimensions, we have objects called vectors, which you first visualized as arrows of a specific length pointing in a given direction. To each vector we can associate a point in a three dimensional Cartesian system. We just attach the tail of the vector v to the origin, and the head lands at some point, (x, y, z). We then used the unit vectors i, j and k along the coordinate axes to write the vector in the form

    v = x i + y j + z k.

Having defined vectors, we then learned how to add vectors and multiply vectors by numbers, or scalars. Under these operations, we expected to get back new vectors. Then we learned that there were two types of multiplication of vectors. We could multiply two vectors to get either a scalar or a vector. This leads to the operations of dot and cross products, respectively. The dot product is useful for determining the length of a vector, the angle between two vectors, or whether the vectors are orthogonal. Cross products are useful in describing things like torque, τ = r × F, or the force on a moving charge in a magnetic field, F = qv × B. In physics you first learned about vector products when you defined work, W = F · r. We will return to these more complicated vector operations later when reviewing Maxwell's equations of electrodynamics.

The basic concept of a vector can be generalized to spaces of more than three dimensions. You may first have seen this in your linear algebra class. The properties outlined roughly above need to be preserved. So, we have to start with a space of vectors and the operations between them. We also need a set of scalars, which generally come from some field. However, in our applications the field will either be the set of real numbers or the set of complex numbers.

A vector space V over a field F is a set that is closed under addition and scalar multiplication and satisfies the following conditions: For any u, v, w ∈ V and a, b ∈ F,

1. u + v = v + u.

2. (u + v) + w = u + (v + w).
3. There exists a 0 such that 0 + v = v.
4. There exists a -v such that v + (-v) = 0.
5. a(bv) = (ab)v.
6. (a + b)v = av + bv.
7. a(u + v) = au + av.
8. 1(v) = v.

In three dimensions the unit vectors i, j and k play an important role. Any vector in the three dimensional space can be written as a linear combination of these vectors,

    v = x i + y j + z k.

In fact, given any three non-coplanar vectors, {a1, a2, a3}, all vectors can be written as a linear combination of those vectors,

    v = c1 a1 + c2 a2 + c3 a3.

Such vectors are said to span the space and are called a basis for the space.

We can generalize these ideas. In an n-dimensional vector space any vector in the space can be represented as a sum over n linearly independent vectors (the equivalent of non-coplanar vectors). Such a linearly independent set of vectors {v_j}, j = 1, ..., n, satisfies the condition

    Σ_{j=1}^{n} c_j v_j = 0   ⇔   c_j = 0.

Note that we will often use summation notation instead of writing out all of the terms in the sum.

The standard basis in an n-dimensional vector space is a generalization of the standard basis in three dimensions (i, j and k). We define

    e_k = (0, ..., 0, 1, 0, ..., 0),   k = 1, ..., n,      (3.1)

where the 1 appears in the kth slot.

Then, we can expand any v ∈ V as

    v = Σ_{k=1}^{n} v_k e_k,                               (3.2)

where the v_k's are called the components of the vector in this basis. Sometimes we will write v as an n-tuple (v1, v2, ..., vn). This is similar to the ambiguous use of (x, y, z) to denote both vectors and points in the three dimensional space.

The only other thing we will need at this point is to generalize the dot product. Recall that there are two forms for the dot product in three dimensions. First, one has that

    u · v = uv cos θ,                                      (3.3)

where u and v denote the lengths of the vectors. The other form is the component form:

    u · v = u1 v1 + u2 v2 + u3 v3 = Σ_{k=1}^{3} u_k v_k.   (3.4)

Of course, this form is easier to generalize. So, we define the scalar product between two n-dimensional vectors as

    < u, v > = Σ_{k=1}^{n} u_k v_k.                        (3.5)

Actually, there are a number of notations that are used in other texts. One can write the scalar product as (u, v), or even in the Dirac bra-ket notation < u | v >.

We note that the (real) scalar product satisfies some simple properties. For vectors v, w and real scalar α we have

1. < v, v > ≥ 0, and < v, v > = 0 if and only if v = 0.
2. < v, w > = < w, v >.
3. < αv, w > = α < v, w >.

While it does not always make sense to talk about angles between general vectors in higher dimensional vector spaces, there is one concept that is useful. It is that of orthogonality, which in three dimensions is another way of saying the vectors are perpendicular to each other. So, we also say that vectors u and v are orthogonal if and only if < u, v > = 0.

Recall that the length of a vector, v, is obtained as v = √(v · v). So, if we want to find a unit vector in the direction of v, we simply normalize it as

    v̂ = v / v.

Notice that we used a hat to indicate that we have a unit vector. The process of making basis vectors have unit length is called normalization, and it is simply done by dividing by the length of the vector. For example, if {a_j}, j = 1, ..., n, is a set of orthogonal basis vectors, then

    e_j = a_j / √( < a_j, a_j > ),   j = 1, ..., n,

is a basis of unit vectors.

Let {a_k}, k = 1, ..., n, be a set of orthogonal basis vectors for the vector space V; that is, < a_j, a_k > = 0 for j ≠ k. If, in addition, each basis vector is a unit vector, then one has an orthonormal basis. We will denote such a basis of unit vectors by e_j for j = 1, ..., n. This generalization of the unit basis can be expressed compactly as

    < e_j, e_k > = δ_jk,                                   (3.6)

where we have introduced the Kronecker delta

    δ_jk ≡ { 0,  j ≠ k;  1,  j = k }.                      (3.7)

We know that any vector v can be represented in terms of this basis, v = Σ_{k=1}^{n} v_k a_k. If we know the basis and the vector, can we find the components? The answer is yes. We can use the scalar product of v with each basis element a_j. Using the properties of the scalar product, we have

    < a_j, v > = < a_j, Σ_{k=1}^{n} v_k a_k >
               = Σ_{k=1}^{n} v_k < a_j, a_k >,             (3.8)

for j = 1, ..., n. Since we know the basis elements, we can easily compute the numbers

    A_jk ≡ < a_j, a_k >

and

    b_j ≡ < a_j, v >.

Therefore, the system (3.8) for the v_k's is a linear algebraic system, which takes the form

    b_j = Σ_{k=1}^{n} A_jk v_k.                            (3.9)

We can write this set of equations in a more compact form. The set of numbers A_jk, j, k = 1, ..., n, are the elements of an n × n matrix A, with A_jk being the element in the jth row and kth column. Also, v_j and b_j can be written as column vectors, v and b, respectively. Thus, system (3.8) can be written in matrix form as

    A v = b.

However, if the basis is orthogonal, then the matrix A_jk ≡ < a_j, a_k > is diagonal, and the system is easily solvable. Recall that two vectors are orthogonal if and only if

    < a_i, a_j > = 0,   i ≠ j.                             (3.10)

Thus, in this case we have that

    < a_j, v > = v_j < a_j, a_j >,                         (3.11)

or

    v_j = < a_j, v > / < a_j, a_j >.                       (3.12)

In fact, if the basis is orthonormal, i.e., the basis consists of an orthogonal set of unit vectors, then A is the identity matrix and the solution takes on a simpler form:

    v_j = < a_j, v >.                                      (3.13)
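Equation (3.12) is easy to try out. Below is a small Python sketch using an orthogonal (but not orthonormal) basis of R³ chosen for illustration; the components are computed by scalar products and the vector is then rebuilt from them as a check:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# An orthogonal basis of R^3, chosen for illustration:
a1, a2, a3 = [1.0, 1.0, 0.0], [1.0, -1.0, 0.0], [0.0, 0.0, 2.0]
basis = [a1, a2, a3]

v = [3.0, -1.0, 4.0]

# v_j = <a_j, v> / <a_j, a_j>, as in Equation (3.12):
comps = [dot(a, v) / dot(a, a) for a in basis]

# Reconstruct v = sum_j v_j a_j:
recon = [sum(c * a[i] for c, a in zip(comps, basis)) for i in range(3)]
print(comps, recon)
```

Because the basis is orthogonal, no linear system needs to be solved; each component comes from a single projection.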

3.2 Linear Transformations

A main theme in linear algebra is the study of linear transformations between vector spaces. These come in many forms, and there are an abundance of applications in physics. For example, the transformation between the spacetime coordinates of observers moving in inertial frames in the theory of special relativity constitutes such a transformation.

A simple example often encountered in physics courses is the rotation by a fixed angle. This is the description of points in space using two different coordinate bases, one just a rotation of the other by some angle. We begin with a vector v as described by a set of axes in the standard orientation, as shown in Figure 3.1. Also displayed in this figure are the unit vectors. To find the coordinates (x, y), one needs only draw perpendiculars to the axes and read the coordinates off the axes.

Figure 3.1: Vector v in a standard coordinate system. Projections to the axes are shown.

In order to derive the needed transformation we will make use of polar coordinates. In Figure 3.1 we see that the vector makes an angle of φ with respect to the positive x-axis. The components (x, y) of the vector can be determined from this angle and the magnitude of v as

    x = v cos φ,
    y = v sin φ.                                           (3.14)

We now consider another set of axes at an angle of θ to the old, as shown in Figure 3.2. We will designate these axes as x' and y'. Note that the basis vectors are different in this system. Comparing the coordinates in both systems, shown in

Figures 3.1-3.2, we see that the primed coordinates are not the same as the unprimed ones.

Figure 3.2: Vector v in a rotated coordinate system.

Figure 3.3: Comparison of the coordinate systems.

In Figure 3.3 the two systems are superimposed on each other. The polar form for the primed system is given by

    x' = v cos(φ + θ),
    y' = v sin(φ + θ).                                     (3.15)

We can use this form to find a relationship between the two systems. Namely, we use the addition formulas for trigonometric functions to obtain

    x' = v cos φ cos θ - v sin φ sin θ,
    y' = v sin φ cos θ + v cos φ sin θ.                    (3.16)

Noting that these expressions involve products of v with cos φ and sin φ, we can use the polar form (3.14) for x and y to find the desired form:

    x' = x cos θ - y sin θ,

    y' = x sin θ + y cos θ.                                (3.17)

This is an example of a transformation between two coordinate systems. It is called a rotation by θ. We can designate it generally by

    (x', y') = R̂_θ(x, y).

It is referred to as a passive transformation, because it does not affect the vector.

An active rotation is one in which one rotates the vector, such as shown in Figure 3.4. One can derive a similar transformation for how the coordinates of the vector change under such a transformation. Namely, we have

    x' = x cos θ + y sin θ,
    y' = -x sin θ + y cos θ.                               (3.18)

We note that the active and passive rotations are related. Namely,

    (x', y') = R̂_θ(x, y) = R_{-θ}(x, y).

Figure 3.4: Rotation of vector v.

3.3 Matrices

Linear transformations such as the rotation in the last section can be represented by matrices. Such matrix representations often become the

core of a linear algebra class, to the extent that one loses sight of their meaning. We will review matrix representations and show how they are useful in solving coupled systems of differential equations later in the chapter.

We begin with the rotation transformation as applied to a vector in Equation (3.18). We write vectors like v as a column matrix

    v = [ x ]
        [ y ].

We can also write the trigonometric functions in a 2 × 2 matrix form as

    R_θ = [ cos θ    sin θ ]
          [ -sin θ   cos θ ].

Then, the transformation takes the form

    [ x' ]   [ cos θ    sin θ ] [ x ]
    [ y' ] = [ -sin θ   cos θ ] [ y ]  =  R_θ v.           (3.19)

In using the matrix form of the transformation, we have employed the definition of matrix multiplication. Namely, we have multiplied a 2 × 2 matrix times a 2 × 1 matrix. (Note that an n × m matrix has n rows and m columns.) This operation can only be performed if the number of columns of the first matrix is the same as the number of rows of the second matrix. The multiplication proceeds by selecting the ith row of the first matrix and the jth column of the second matrix, multiplying corresponding elements of each, and adding them. Then, one places the result into the ijth entry of the product matrix. As an example, we multiply a 3 × 2 matrix times a 2 × 2 matrix to obtain a 3 × 2 matrix:

    [ 1   2 ] [ 3  2 ]   [ 1(3)+2(1)       1(2)+2(4)    ]   [  5  10 ]
    [ 5  -1 ] [ 1  4 ] = [ 5(3)+(-1)(1)    5(2)+(-1)(4) ] = [ 14   6 ]
    [ 3   2 ]            [ 3(3)+2(1)       3(2)+2(4)    ]   [ 11  14 ].   (3.20)
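The row-times-column rule in the example can be captured in a few generic lines of Python; the matrices below are the ones from Equation (3.20):

```python
def matmul(A, B):
    """Multiply matrices: entry ij is the ith row of A dotted with the jth column of B."""
    assert all(len(row) == len(B) for row in A), "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [5, -1], [3, 2]]        # 3 x 2
B = [[3, 2], [1, 4]]                 # 2 x 2
print(matmul(A, B))                  # the 3 x 2 product from the text
```

The dimension check at the top enforces the rule stated above: the product exists only when the inner dimensions agree.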

In the last section we also introduced active rotations. These were rotations of vectors, keeping the coordinate system fixed. Thus, we start with a vector v and rotate it by θ to get a new vector u. That transformation can be written as

    u = R̂_θ v,                                            (3.21)

where

    R̂_θ = [ cos θ   -sin θ ]
          [ sin θ    cos θ ].                              (3.22)

We see that if the 12 and 21 elements of this matrix are interchanged, we recover R_θ. This is an example of what is called the transpose of R̂_θ. Given a matrix A, its transpose A^T is the matrix obtained by interchanging the rows and columns of A. Formally, let A_ij be the elements of A; then

    (A^T)_ij = A_ji.

It is also the case that these matrices are inverses of each other. We can understand this in terms of the nature of rotations. Now consider a rotation by -θ. Due to the symmetry properties of the sines and cosines, we have

    R̂_{-θ} = [ cos θ    sin θ ]
             [ -sin θ   cos θ ].

We first rotate the vector by θ as u = R̂_θ v, and then rotate u by -θ, obtaining w = R̂_{-θ} u. Thus, the "composition" of these two transformations leads to

    w = R̂_{-θ} u = R̂_{-θ} (R̂_θ v).                      (3.23)

We can view this as a net transformation from v to w given by w = (R̂_{-θ} R̂_θ) v.
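That rotating a vector by θ and then by -θ returns it unchanged can be verified numerically. A short sketch with an arbitrary angle and vector:

```python
import math

def rot(theta):
    """Active rotation matrix R_theta for column vectors (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

theta = 0.7
v = [2.0, 1.0]
u = apply(rot(theta), v)             # rotate by theta ...
w = apply(rot(-theta), u)           # ... then rotate back by -theta
print(w)                             # recovers v, so the rotations are inverses
```

Up to floating point roundoff, w equals the original v, which is the numerical counterpart of the matrix identity worked out next.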

Here the transformation matrix for the composition is given by R̂_{-θ} R̂_θ. Actually, if you think about it, we should end up with the original vector, since we rotated by θ and then back by -θ. We can compute the resulting matrix by carrying out the multiplication. We obtain

    R̂_{-θ} R̂_θ = [ cos θ    sin θ ] [ cos θ   -sin θ ]   [ 1  0 ]
                  [ -sin θ   cos θ ] [ sin θ    cos θ ] = [ 0  1 ].   (3.24)

This is the 2 × 2 identity matrix. We note that the product of these two matrices yields the identity. This is like the multiplication of numbers: if ab = 1, then a and b are multiplicative inverses of each other. So, we see here that R̂_θ and R̂_{-θ} are inverses of each other as well. In fact, we have determined that

    R̂_{-θ} = R̂_θ^{-1} = R̂_θ^T,                          (3.25)

where the T designates the transpose. We note that matrices satisfying the relation A^T = A^{-1} are called orthogonal matrices.

We can generalize what we have seen with this simple example. We begin with a vector v in an n-dimensional vector space. We can consider a transformation L that takes v into a new vector u as

    u = L(v).

We will restrict ourselves to linear transformations. A linear transformation satisfies the following condition:

    L(αa + βb) = α L(a) + β L(b)                           (3.26)

for any vectors a and b and scalars α and β.

Such linear transformations can be represented by matrices. Take any vector v. It can be represented in terms of a basis. Let's use the standard basis {e_i}, i = 1, ..., n. Then we have

    v = Σ_{i=1}^{n} v_i e_i.                               (3.27)

Now consider the effect of the transformation L on v, using the linearity property:

    L(v) = L( Σ_{i=1}^{n} v_i e_i ) = Σ_{i=1}^{n} v_i L(e_i).

Thus, we see that determining how L acts on v requires that we know how L acts on the basis vectors. Namely, we need L(e_i). Since e_i is a vector, L(e_i) produces another vector in the space. But the resulting vector can be expanded in the basis. Let's assume that the resulting vector takes the form

    L(e_i) = Σ_{j=1}^{n} L_ji e_j,                         (3.28)

where L_ji is the jth component of L(e_i) for each i = 1, ..., n. The matrix of L_ji's is called the matrix representation of the operator L.

Now, what does this have to say about how L acts on any other vector in the space? We insert expression (3.28) into the expansion of L(v). Then we find

    L(v) = Σ_{i=1}^{n} v_i L(e_i)
         = Σ_{i=1}^{n} v_i ( Σ_{j=1}^{n} L_ji e_j )
         = Σ_{j=1}^{n} ( Σ_{i=1}^{n} L_ji v_i ) e_j.       (3.29)

Since L(v) = u, we see that the jth component of u can be written as

    u_j = Σ_{i=1}^{n} L_ji v_i,   j = 1, ..., n.           (3.30)

This equation can be written in matrix form as

    u = L v.

We used the standard basis above. However, you could have started with a different basis, such as one dictated by another coordinate system, and the matrix representation typically depends on the basis used. Furthermore, in a linear algebra class you usually start with matrices and do not see this connection to linear operators. However, there will be times that you will need this connection to understand why matrices are involved. We will not go further into this point at this time and just stick with the standard basis.

In the matrix equation u = Lv, L takes the role of a matrix, and the multiplication is similar to that of the rotation matrix times a vector, as seen in the last section. We will just work with matrix representations from here on.

Next, we can compose transformations like we had done with the two rotation matrices. Let u = A(v) and w = B(u) for two transformations A and B. (Thus, v → u → w.) Then a composition of these transformations is given by

    w = B(u) = B(A(v)).

This can be viewed as a transformation from v to w as

    w = BA(v),

where the matrix representation of BA is given by the product of the matrix representations of A and B.

To see this, we look at the ijth element of the matrix representation of BA. We first note that the transformation from v to w is given by

    w_i = Σ_{j=1}^{n} (BA)_ij v_j.                         (3.31)

However, if we use the successive transformations, we have

    w_i = Σ_{k=1}^{n} B_ik u_k
        = Σ_{k=1}^{n} B_ik ( Σ_{j=1}^{n} A_kj v_j )
        = Σ_{j=1}^{n} ( Σ_{k=1}^{n} B_ik A_kj ) v_j.       (3.32)

We have two expressions for w_i as sums over v_j, so the coefficients must be equal. This leads to our result:

    (BA)_ij = Σ_{k=1}^{n} B_ik A_kj.                       (3.33)

Thus, we have found the component form of matrix multiplication. This agrees

The ij-th component of the product is obtained by multiplying elements in the ith row of B and the jth column of A and summing.

There are many other properties of matrices and types of matrices that one will encounter. We will list a few. First of all, there is the n × n identity matrix, I. The identity is defined as that matrix satisfying

    IA = AI = A    (3.34)

for any n × n matrix A. The n × n identity matrix takes the form

    I = [ 1  0  · · ·  0 ]
        [ 0  1  · · ·  0 ]
        [ .  .    .    . ]
        [ 0  0  · · ·  1 ]    (3.35)

A component form is given by the Kronecker delta. Namely, we have that

    Iij = δij ≡ { 0,  i ≠ j
                { 1,  i = j    (3.36)

The inverse of matrix A is that matrix A⁻¹ such that

    AA⁻¹ = A⁻¹A = I.    (3.37)

While there is a systematic method for determining the inverse in terms of cofactors, we will not cover it here. It suffices to note that the inverse of a 2 × 2 matrix is easily obtained. Let

    A = [ a  b ]
        [ c  d ].

Now consider the matrix

    B = [ d   −b ]
        [ −c   a ].

Multiplying these matrices, we find that

    AB = [ a  b ] [ d   −b ]  =  [ ad − bc      0    ]
         [ c  d ] [ −c   a ]     [    0     ad − bc  ].

This is not quite the identity, but it is a multiple of the identity. We just need to divide by ad − bc. So, we have found the inverse matrix:

    A⁻¹ = 1/(ad − bc) [ d   −b ]
                      [ −c   a ].    (3.38)

We leave it to the reader to show that A⁻¹A = I.

The factor ad − bc is the difference between the products of the diagonal and off-diagonal elements of matrix A. This factor is called the determinant of A. It is denoted as det(A) or |A|. Thus, we define

    det(A) = | a  b |  = ad − bc.    (3.39)
             | c  d |

For higher dimensional matrices one can write a general definition of the determinant. We will for now just indicate the process for 3 × 3 matrices. We write matrix A as

    A = [ a11  a12  a13 ]
        [ a21  a22  a23 ]
        [ a31  a32  a33 ].

The determinant of A can be computed in terms of simpler 2 × 2 determinants. We define

    detA = | a11  a12  a13 |
           | a21  a22  a23 |
           | a31  a32  a33 |

         = a11 | a22  a23 |  −  a12 | a21  a23 |  +  a13 | a21  a22 |
               | a32  a33 |         | a31  a33 |         | a31  a32 |.    (3.40)

There are many other properties of determinants. For example, if two rows, or columns, of a matrix are multiples of each other, then detA = 0. If detA = 0, A is called a singular matrix. Otherwise, it is called nonsingular.

A standard application of determinants is the solution of a system of linear algebraic equations using Cramer's Rule. As an example, we consider a simple system of two equations and two unknowns.
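The 2 × 2 inverse formula (3.38) and the cofactor expansion (3.40) can both be sketched in a few lines of plain Python. The matrices below are made-up examples; exact rational arithmetic avoids rounding in the inverse.

```python
from fractions import Fraction

# 2x2 inverse from Equation (3.38): A^{-1} = (1/(ad-bc)) [[d,-b],[-c,a]],
# and the 3x3 determinant by the first-row cofactor expansion (3.40).

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c          # det(A) = ad - bc, Equation (3.39)
    if det == 0:
        raise ValueError("singular matrix")
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

A = [[1, 2], [3, 4]]
Ainv = inv2(A)
# A^{-1} A should be the identity matrix.
prod = [[sum(Ainv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)                                       # the 2x2 identity
# Rows here are multiples (in arithmetic progression), so det = 0.
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))    # 0
```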

We write the system for the two unknowns x and y in the form

    ax + by = e,
    cx + dy = f.    (3.41)

The standard way to solve this is to eliminate one of the variables. (Just imagine dealing with a bigger system!) For example, we can eliminate the x's by multiplying the first equation by c and the second equation by a and subtracting. We then get

    (bc − ad)y = ec − fa.

If bc − ad ≠ 0, then we can solve for y, getting

    y = (ec − fa)/(bc − ad).

Similarly, eliminating the y's, we find

    x = (ed − bf)/(ad − bc).

We note that the denominators can be replaced with the determinant of the matrix of coefficients,

    | a  b |
    | c  d |.

In fact, we can also replace each numerator with a determinant. Thus, our solutions may be written as

    x = | e  b | / | a  b |,        y = | a  e | / | a  b |.    (3.42)
        | f  d |   | c  d |             | c  f |   | c  d |

This is Cramer's Rule for writing out solutions of systems of equations. Note that each variable is determined by placing a determinant with e and f in the column of the coefficient matrix corresponding to the order of the variable in the equation.
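Cramer's Rule (3.42) is short enough to sketch directly. The system solved below (2x + y = 5, x − y = 1) is a made-up example, and exact rational arithmetic is used so the quotients come out clean.

```python
from fractions import Fraction

# Cramer's Rule for the 2x2 system (3.41): ax + by = e, cx + dy = f,
# with x and y given by the determinant quotients in Equation (3.42).

def det2(a, b, c, d):
    return a * d - b * c

def cramer(a, b, c, d, e, f):
    D = det2(a, b, c, d)                 # determinant of the coefficient matrix
    if D == 0:
        raise ValueError("coefficient matrix is singular")
    x = Fraction(det2(e, b, f, d), D)    # e, f replace the x-column
    y = Fraction(det2(a, e, c, f), D)    # e, f replace the y-column
    return x, y

# Solve 2x + y = 5, x - y = 1; the solution is x = 2, y = 1.
print(cramer(2, 1, 1, -1, 5, 1))
```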

The denominator is the determinant of the coefficient matrix. This construction is easily extended to larger systems of equations.

Another operation that we have seen earlier is the transpose of a matrix. The transpose of a matrix is a new matrix in which the rows and columns are interchanged. If we write an n × m matrix A in standard form as

    A = [ a11  a12  · · ·  a1m ]
        [ a21  a22  · · ·  a2m ]
        [  .    .     .     .  ]
        [ an1  an2  · · ·  anm ],    (3.43)

then the transpose is defined as

    Aᵀ = [ a11  a21  · · ·  an1 ]
         [ a12  a22  · · ·  an2 ]
         [  .    .     .     .  ]
         [ a1m  a2m  · · ·  anm ].    (3.44)

In index form, we have (Aᵀ)ij = Aji, i, j = 1, . . . , n. As we had seen in the last section, a matrix satisfying Aᵀ = A⁻¹, or AAᵀ = AᵀA = I, is called an orthogonal matrix. One can also show that (AB)ᵀ = BᵀAᵀ.

Finally, the trace of a square matrix is the sum of its diagonal elements:

    Tr(A) = a11 + a22 + . . . + ann = Σ_{i=1}^n aii.

One can show that for two square matrices Tr(AB) = Tr(BA).
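The trace identity Tr(AB) = Tr(BA) is worth seeing numerically, since it holds even though matrix multiplication is not commutative. A quick sketch with two made-up 2 × 2 matrices:

```python
# Check that Tr(AB) = Tr(BA) even though AB != BA in general.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]   # hypothetical example matrices
B = [[5, 6], [7, 8]]

print(matmul(A, B) == matmul(B, A))               # False: products differ
print(trace(matmul(A, B)), trace(matmul(B, A)))   # 69 69: traces agree
```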

3.4 Eigenvalue Problems

3.4.1 An Introduction to Coupled Systems

Recall that one of the reasons we have seemingly digressed into topics in linear algebra and matrices is to solve a coupled system of differential equations. The simplest example is a system of linear differential equations of the form

    dx/dt = ax + by,
    dy/dt = cx + dy.    (3.45)

We note that this system is coupled: we cannot solve either equation without knowing either x(t) or y(t).

A much easier problem would be to solve an uncoupled system like

    dx/dt = λ1 x,
    dy/dt = λ2 y.    (3.46)

The solutions are quickly found to be

    x(t) = c1 e^{λ1 t},    y(t) = c2 e^{λ2 t}.    (3.47)

Here c1 and c2 are two arbitrary constants. We can determine particular solutions of the system by specifying x(t0) = x0 and y(t0) = y0 at some time t0. Taking t0 = 0, this gives

    x(t) = x0 e^{λ1 t},    y(t) = y0 e^{λ2 t}.

Wouldn't it be nice if we could transform the more general system into one that is not coupled? Let's write these systems in a more general form. We write the coupled system as

    d/dt x = Ax    (3.48)

and the uncoupled system as

    d/dt y = Λy,    (3.49)

where

    Λ = [ λ1   0  ]
        [ 0    λ2 ].

We note that Λ is a diagonal matrix.

Now, we seek a transformation between x and y that will transform the coupled system into the uncoupled system. Thus, we define the transformation

    x = Sy.    (3.50)

[We can do this if we are dealing with an invertible transformation, i.e., a transformation in which we can get y from x as y = S⁻¹x.] Inserting this transformation into the coupled system, we have

    d/dt x = Ax  ⇒  d/dt (Sy) = ASy  ⇒  S d/dt y = ASy.

Multiplying both sides by S⁻¹, we obtain

    d/dt y = (S⁻¹AS) y.    (3.51)

Noting that d/dt y = Λy, we have

    Λ = S⁻¹AS.

The expression S⁻¹AS is called a similarity transformation of matrix A. So, in order to uncouple the system, we seek a similarity transformation that results in a diagonal matrix. This process is called the diagonalization of matrix A. We do not know S, nor do we know Λ. We can rewrite this equation as

    AS = SΛ.

We can solve this equation if S is real symmetric, i.e., Sᵀ = S. [In the case of complex matrices, we need the matrix to be Hermitian, S̄ᵀ = S, where the bar denotes complex conjugation.] We first show that SΛ = ΛS. We look at the ijth component of SΛ and rearrange the terms in the matrix product:

    (SΛ)ij = Σ_{k=1}^n Sik Λkj
           = Σ_{k=1}^n Sik λj Ikj
           = Σ_{k=1}^n λj Ijk (Sᵀ)ki
           = Σ_{k=1}^n Λjk Ski
           = (ΛS)ij.    (3.52)

This result leads us to the fact that S satisfies the equation

    AS = ΛS.

Therefore, one has that the columns of S (denoted v) satisfy an equation of the form

    Av = λv.    (3.53)

This is an equation for vectors v and numbers λ, given a matrix A. It is called an eigenvalue problem. The vectors are called eigenvectors and the numbers, λ, are called eigenvalues. In principle, we can solve the eigenvalue problem and this will lead us to solutions of the uncoupled system of differential equations.

3.4.2 Example of an Eigenvalue Problem

We will determine the eigenvalues and eigenvectors for

    A = [ 1   −2 ]
        [ −3   2 ].

In order to find the eigenvalues and eigenvectors of this matrix, we need to solve

    Av = λv.    (3.54)

Let v = (v1, v2)ᵀ. Then the eigenvalue problem can be written out. We have that

    Av = λv

    [ 1   −2 ] [ v1 ]  =  λ [ v1 ]
    [ −3   2 ] [ v2 ]       [ v2 ]

    [ v1 − 2v2   ]  =  [ λv1 ]
    [ −3v1 + 2v2 ]     [ λv2 ].    (3.55)

So, we see that our system becomes

    v1 − 2v2 = λv1,
    −3v1 + 2v2 = λv2.    (3.56)

This can be rewritten as

    (1 − λ)v1 − 2v2 = 0,
    −3v1 + (2 − λ)v2 = 0.    (3.57)

This is a homogeneous system. We can try to solve it using elimination, as we had done earlier when deriving Cramer's Rule. We find that multiplying the first equation by 2 − λ, the second by 2, and adding, we get

    [(1 − λ)(2 − λ) − 6] v1 = 0.

If the factor in the brackets is not zero, we obtain v1 = 0. Inserting this into the system gives v2 = 0 as well. Thus, we find that v is the zero vector. We could have guessed this solution, and it does not get us anywhere. This simple solution is a solution of all eigenvalue problems and is called the trivial solution. When solving eigenvalue problems, we only look for nontrivial solutions!

So, we have to stipulate that the factor in the brackets is zero. This means that v1 is still unknown. This situation will always occur for eigenvalue problems. The general eigenvalue problem can be written as

    Av − λv = 0,

or, by inserting the identity matrix,

    Av − λIv = 0,
    (A − λI)v = 0.    (3.58)

Note that the matrix A − λI is just A with λ's subtracted from the diagonal elements. The factor that has to be zero can now be seen as the determinant of this system. Thus, we require

    det(A − λI) = 0.

This will always be the starting point in solving eigenvalue problems. We write out this condition for the example at hand. We have that

    | 1 − λ    −2    |
    | −3      2 − λ  |  =  0.

Computing the determinant, we have

    (1 − λ)(2 − λ) − 6 = 0,

or

    λ² − 3λ − 4 = 0.

We have therefore obtained a condition on the eigenvalues! It is a quadratic and we can factor it:

    (λ − 4)(λ + 1) = 0.

So, our eigenvalues are λ = 4, −1.

The second step is to find the eigenvectors. We have to do this for each eigenvalue. We first insert λ = 4 into our system:

    −3v1 − 2v2 = 0,
    −3v1 − 2v2 = 0.    (3.59)

Note that these equations are the same. So, we have one equation in two unknowns. We will not get a unique solution. This is typical of eigenvalue problems.
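The quadratic condition just derived can be checked numerically. A short sketch (plain Python) solves λ² − (a + d)λ + (ad − bc) = 0 with the quadratic formula for the matrix of this example:

```python
import math

# Eigenvalues of A = [[1, -2], [-3, 2]] from the characteristic equation
# det(A - lambda*I) = lambda^2 - 3*lambda - 4 = 0, written in the general
# form lambda^2 - (a + d)*lambda + (ad - bc) = 0.

a, b, c, d = 1, -2, -3, 2
tr = a + d            # trace = 3
det = a * d - b * c   # determinant = 2 - 6 = -4

disc = math.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
print(lam1, lam2)     # 4.0 -1.0
```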

We can pick anything we want for v2 and then determine v1. For example, picking v2 = 1 gives v1 = −2/3. A nicer solution would be v2 = 3 and v1 = −2. These vectors are different, but they point in the same direction in the v1v2 plane.

For λ = −1, the system becomes

    2v1 − 2v2 = 0,
    −3v1 + 3v2 = 0.    (3.60)

While these equations do not at first look the same, we can divide out the constants and see that once again we get the same equation, v1 = v2. Picking v2 = 1, we get v1 = 1.

In summary, the solution to our eigenvalue problem is

    λ = 4,   v = [ −2 ]        λ = −1,   v = [ 1 ]
                 [  3 ];                     [ 1 ].

In the next subsection we will look at another problem that is a bit more geometric and will give us more insight into the process of diagonalization. We will return to our coupled system in a later section and provide more examples of solving eigenvalue problems.

3.4.3 Eigenvalue Problems — A Summary

In the last subsection we were introduced to eigenvalue problems as a way to obtain a solution to a coupled system of linear differential equations. Eigenvalue problems appear in many contexts in physical applications. In this section we will summarize the method of solution of eigenvalue problems based upon our discussion in the last section. Later in the course we will explore other types of eigenvalue problems.

We seek nontrivial solutions to the eigenvalue problem

    Av = λv.    (3.61)

We note that v = 0 is an obvious solution. However, it does not lead to anything useful; it is a trivial solution. Typically, we are given the matrix A and have to determine the eigenvalues, λ, and the associated eigenvectors, v, satisfying the above eigenvalue problem.

We begin to solve the eigenvalue problem for v = (v1, v2)ᵀ. Inserting this into Equation (3.61), we obtain the homogeneous algebraic system

    (a − λ)v1 + bv2 = 0,
    cv1 + (d − λ)v2 = 0.    (3.62)

The solution of such a system would be unique if the determinant of the system is not zero. However, this would give the trivial solution v1 = 0, v2 = 0. To get a nontrivial solution, we need to force the determinant to be zero, and then there are possibly an infinite number of solutions to the algebraic system. This yields the eigenvalue equation

    0 = | a − λ    b     |
        | c       d − λ  |  =  (a − λ)(d − λ) − bc.

If we expand the right side of the equation, we find that

    λ² − (a + d)λ + ad − bc = 0.

This is a quadratic equation for the eigenvalues that leads to nontrivial solutions. It is the same equation as the characteristic equation (3.90) for the general constant coefficient differential equation considered in the first chapter: the eigenvalues correspond to the solutions of the characteristic polynomial for the system.

The method for solving eigenvalue problems, as you have seen, consists of just a few simple steps:

a) Write the coefficient matrix.

b) Find the eigenvalues from the equation det(A − λI) = 0.

c) Solve the linear system (A − λI)v = 0 for each λ.

We will see this in the examples.

3.4.4 Rotations of Conics

You may have seen the general form of the equation for a conic in Cartesian coordinates in your calculus class. It is given by

    Ax² + 2Bxy + Cy² + Ex + Fy = D.    (3.63)

This equation can describe a variety of conics (ellipses, hyperbolae and parabolae), depending on the constants. The E and F terms result from a translation of the origin, and the B term is the result of a rotation of the coordinate system. We leave it to the reader to show that coordinate translations can be made to eliminate the linear terms. So, we will set E = F = 0 in our discussion and only consider quadratic equations of the form

    Ax² + 2Bxy + Cy² = D.

If B = 0, then the resulting equation could be an equation for the standard ellipse or hyperbola with center at the origin. In the case of an ellipse, the semimajor and semiminor axes lie along the coordinate axes. However, you could rotate the ellipse and that would introduce a B term, as we will see.

This conic equation can be written in matrix form. We note that

    ( x  y ) [ A  B ] [ x ]  =  Ax² + 2Bxy + Cy².
             [ B  C ] [ y ]

In shorthand matrix form, we thus have for our equation

    xᵀQx = D,

where Q is the matrix of coefficients A, B, and C.

We want to determine the transformation that puts this conic into a coordinate system in which there is no B term. Our goal is to obtain an equation of the form A′x′² + C′y′² = D′ in the new coordinates yᵀ = (x′, y′). The matrix form of this equation is given as

    yᵀ [ A′   0 ] y  =  D′.
       [ 0   C′ ]

We will denote the diagonal matrix by Λ. So, we let

    x = Ry,    (3.64)

where R is a rotation matrix. Inserting this transformation into our equation, we find that

    xᵀQx = (Ry)ᵀQRy = yᵀ(RᵀQR)y.    (3.65)

Comparing this result to the desired form, we have

    Λ = RᵀQR.    (3.66)

Recalling that the rotation matrix is an orthogonal matrix, Rᵀ = R⁻¹, we have

    Λ = R⁻¹QR.    (3.67)

Thus, the problem reduces to that of trying to diagonalize the matrix Q. The eigenvalues of Q will lead to the constants in the rotated equation, and the eigenvectors, as we will see, will give the directions of the principal axes (the semimajor and semiminor axes). We will first show this in an example.

Example Determine the principal axes of the ellipse given by

    13x² − 10xy + 13y² − 72 = 0.

A plot of this conic in Figure 3.5 shows that it is an ellipse. (Actually, we might not know this without plotting it. There are some conditions on the coefficients that do allow us to determine the conic, but you may not know these yet.) If the equation were in standard form, we could identify its general shape. So, we will use the method outlined above to find a coordinate system in which the ellipse appears in standard form.

The coefficient matrix for this equation is given by

    Q = [ 13   −5 ]
        [ −5   13 ].

We seek a solution to the eigenvalue problem Qv = λv. The first step is to get the eigenvalue equation from det(Q − λI) = 0. For this

problem we have

    | 13 − λ    −5     |
    | −5      13 − λ   |  =  0.

So, we have to solve

    (13 − λ)² − 25 = 0.

This is easily solved by taking square roots to get

    λ − 13 = ±5,

or λ = 13 ± 5 = 18, 8.

[Figure 3.5: Plot of the ellipse given by 13x² − 10xy + 13y² − 72 = 0.]

Thus, the equation in the new system is

    8x′² + 18y′² = 72.

Dividing out the 72 puts this into the standard form

    x′²/9 + y′²/4 = 1.    (3.68)

Now we can identify the ellipse in the new system. We show the two ellipses in Figure 3.6. We note that the given ellipse is the new one rotated by some angle, which we still need to determine.

[Figure 3.6: Plot of the ellipse given by 13x² − 10xy + 13y² − 72 = 0 and the ellipse x′²/9 + y′²/4 = 1, showing that the first ellipse is a rotated version of the second.]

Next, we seek the eigenvectors corresponding to each eigenvalue.

Eigenvalue 1: λ = 8. We insert the eigenvalue into the equation (Q − λI)v = 0. The system for the unknown eigenvector is

    [ 13 − 8    −5     ] [ v1 ]  =  0.    (3.69)
    [ −5      13 − 8   ] [ v2 ]

The first equation is

    5v1 − 5v2 = 0,    (3.70)

or v1 = v2. Thus, we can choose our eigenvector to be

    [ v1 ]  =  [ 1 ]
    [ v2 ]     [ 1 ].

Eigenvalue 2: λ = 18. In the same way, we insert this eigenvalue into the equation (Q − λI)v = 0 and obtain

    [ 13 − 18    −5      ] [ v1 ]  =  0.    (3.71)
    [ −5       13 − 18   ] [ v2 ]

The first equation is

    −5v1 − 5v2 = 0,    (3.72)

or v1 = −v2. Thus, we can choose our eigenvector to be

    [ v1 ]  =  [ −1 ]
    [ v2 ]     [  1 ].

In Figure 3.7 we superimpose the eigenvectors on our original ellipse. We see that the eigenvectors point in directions along the semimajor and semiminor axes and indicate the angle of rotation. Eigenvector one is at a 45° angle. Thus, our ellipse is a rotated version of one in standard position. Or, we could define new axes that are at 45° to the standard axes, and then the ellipse would take the standard form in the new coordinate system.

A general rotation of any conic can be performed. Consider the general equation

    Ax² + 2Bxy + Cy² + Ex + Fy = D.    (3.73)

We would like to find a rotation that puts it in the form

    λ1 x′² + λ2 y′² + E′x′ + F′y′ = D.    (3.74)

We use the rotation matrix

    Rθ = [ cos θ   −sin θ ]
         [ sin θ    cos θ ]

and define x′ = Rθᵀ x, or x = Rθ x′. The general equation can be written in matrix form:

    xᵀQx + fx = D,    (3.75)

where Q is the usual matrix of coefficients A, B, and C, and f = (E, F). Transforming this equation gives

    x′ᵀ (Rθᵀ Q Rθ) x′ + f Rθ x′ = D.    (3.76)
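Before continuing, the diagonalization in the example above can be verified numerically. The normalized eigenvectors (1, 1)/√2 and (−1, 1)/√2 form the columns of a 45° rotation matrix R, and RᵀQR should come out as the diagonal matrix diag(8, 18), as in Equation (3.66). A plain-Python sketch:

```python
import math

# Verify that the 45-degree rotation built from the eigenvectors of
# Q = [[13, -5], [-5, 13]] diagonalizes it: R^T Q R = diag(8, 18).

s = 1 / math.sqrt(2)
R = [[s, -s], [s, s]]              # columns are the normalized eigenvectors
Q = [[13, -5], [-5, 13]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

RT = [[R[j][i] for j in range(2)] for i in range(2)]
Lam = matmul(matmul(RT, Q), R)
print([[round(entry, 10) for entry in row] for row in Lam])   # diag(8, 18)
```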

[Figure 3.7: Plot of the ellipse given by 13x² − 10xy + 13y² − 72 = 0 and the eigenvectors. Note that they are along the semimajor and semiminor axes and indicate the angle of rotation.]

The resulting equation is of the form

    A′x′² + 2B′x′y′ + C′y′² + E′x′ + F′y′ = D,    (3.77)

where

    B′ = 2(C − A) sin θ cos θ + 2B(2cos²θ − 1).    (3.78)

(We only need B′ for this discussion.) If we want the nonrotated form, then we seek an angle θ such that B′ = 0. Noting that 2 sin θ cos θ = sin 2θ and 2cos²θ − 1 = cos 2θ, this gives

    tan(2θ) = 2B / (A − C).    (3.79)

So, in our previous example, with A = C = 13 and B = −5, we have tan(2θ) = ∞. Thus, 2θ = π/2, or θ = π/4.

Finally, we had noted that knowing the coefficients in the general quadratic is enough to determine the type of conic represented without doing any plotting. This is based on the fact that the determinant of the

coefficient matrix is invariant under rotation. We see this from the diagonalization equation:

    det(Λ) = det(Rθ⁻¹ Q Rθ)
           = det(Rθ⁻¹) det(Q) det(Rθ)
           = det(Rθ⁻¹ Rθ) det(Q)
           = det(Q).    (3.80)

Looking at Equation (3.74), we have λ1λ2 = AC − B². Therefore, we have three cases:

1. Ellipse: λ1λ2 > 0, or B² − AC < 0.

2. Hyperbola: λ1λ2 < 0, or B² − AC > 0.

3. Parabola: λ1λ2 = 0 and one eigenvalue is nonzero, or B² − AC = 0. Otherwise, the equation degenerates to a linear equation.

Example xy = 6.

As a final example, we consider this simple equation. The coefficient matrix for this equation is given by

    A = [ 0    0.5 ]
        [ 0.5  0   ].    (3.81)

The eigenvalue equation is

    | −λ    0.5 |
    | 0.5   −λ  |  =  0.    (3.82)

Thus, λ² − 0.25 = 0, or λ = ±0.5. Once again, tan(2θ) = ∞, so the new system is at 45° to the old. The equation in the new coordinates is

    0.5x′² + (−0.5)y′² = 6,

or x′² − y′² = 12. We can see that this is a rotated hyperbola by plotting y = 6/x. A plot is shown in Figure 3.8, and the rotated version is shown in Figure 3.9.

[Figure 3.8: Plot of the hyperbola given by xy = 6.]

[Figure 3.9: Plot of the rotated hyperbola given by x² − y² = 12.]
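The classification by the sign of B² − AC can be sketched directly, applied to the two conics just discussed:

```python
# Classify a conic Ax^2 + 2Bxy + Cy^2 = D from the sign of B^2 - AC,
# as summarized in the three cases above.

def classify(A, B, C):
    disc = B * B - A * C
    if disc < 0:
        return "ellipse"
    if disc > 0:
        return "hyperbola"
    return "parabola (or degenerate)"

print(classify(13, -5, 13))   # ellipse: the rotated ellipse example
print(classify(0, 0.5, 0))    # hyperbola: xy = 6
```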

3.5 A Return to Coupled Systems

We now return to examples of solving a coupled system of equations. While the general techniques have already been covered, we present a bit more detail for the interested reader. We will review some theory of linear systems with constant coefficients and show a few examples.

A general form for first order systems in the plane is given by a system of two equations for the unknowns x(t) and y(t):

    x′(t) = P(x, y, t),
    y′(t) = Q(x, y, t).    (3.83)

An autonomous system is one in which there is no explicit time dependence:

    x′(t) = P(x, y),
    y′(t) = Q(x, y).    (3.84)

Otherwise, the system is called nonautonomous.

A linear system takes the form

    x′ = a(t)x + b(t)y + e(t),
    y′ = c(t)x + d(t)y + f(t).    (3.85)

A homogeneous linear system results when e(t) = 0 and f(t) = 0. A linear, constant coefficient system of differential equations is given by

    x′ = ax + by + e,
    y′ = cx + dy + f.    (3.86)

As we will see later, such systems can result from a simple translation of the unknown functions. We will focus on linear, homogeneous systems of constant coefficient first order differential equations:

    x′ = ax + by,
    y′ = cx + dy.    (3.87)

These equations are said to be coupled if either b ≠ 0 or c ≠ 0.

We begin by noting that the system (3.87) can be rewritten as a second order constant coefficient ordinary differential equation, which we already know how to solve. We differentiate the first equation in the system and systematically replace occurrences of y and y′, since we also know from the first equation that y = (1/b)(x′ − ax). Thus, we have

    x″ = ax′ + by′
       = ax′ + b(cx + dy)
       = ax′ + bcx + d(x′ − ax).    (3.88)

Therefore, we have

    x″ − (a + d)x′ + (ad − bc)x = 0.    (3.89)

This is a linear, homogeneous, constant coefficient second order ordinary differential equation. We know that we can solve this by first looking at the roots of the characteristic equation

    r² − (a + d)r + ad − bc = 0    (3.90)

and writing down the appropriate general solution for x(t). Then we find y(t) using

    y(t) = (1/b)(x′ − ax).

We now demonstrate this for a specific example.

Example

    x′ = −x + 6y,
    y′ = x − 2y.    (3.91)

Carrying out the above steps, we have that x″ + 3x′ − 4x = 0. This has the characteristic equation r² + 3r − 4 = 0. The roots of this equation are r = 1, −4. Therefore, x(t) = c1 e^t + c2 e^{−4t}. But we still need y(t). From the first equation of the system we have

    y(t) = (1/6)(x′ + x) = (1/6)(2c1 e^t − 3c2 e^{−4t}).

Thus, the solution to our system is

    x(t) = c1 e^t + c2 e^{−4t},
    y(t) = (1/3) c1 e^t − (1/2) c2 e^{−4t}.    (3.92)
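A quick numerical spot check of the solution (3.92) can be done by differentiating it approximately and comparing against the right-hand sides of the system. The particular constants c1 = c2 = 1 and the sample time are arbitrary choices for the sketch:

```python
import math

# Spot-check the solution of x' = -x + 6y, y' = x - 2y from (3.92)
# with c1 = c2 = 1, using a small centered difference for the derivatives.

c1, c2 = 1.0, 1.0

def x(t):
    return c1 * math.exp(t) + c2 * math.exp(-4 * t)

def y(t):
    return c1 * math.exp(t) / 3 - c2 * math.exp(-4 * t) / 2

t, h = 0.7, 1e-6
xdot = (x(t + h) - x(t - h)) / (2 * h)   # centered difference
ydot = (y(t + h) - y(t - h)) / (2 * h)

print(abs(xdot - (-x(t) + 6 * y(t))) < 1e-4)   # True
print(abs(ydot - (x(t) - 2 * y(t))) < 1e-4)    # True
```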

Sometimes one needs initial conditions. For these systems we would specify conditions like x(0) = x0 and y(0) = y0. These would allow the determination of the arbitrary constants as before.

We will next recast our system in matrix form and present a different analysis. We start with the usual homogeneous system in Equation (3.87). Let the unknowns be represented by the vector

    x(t) = [ x(t) ]
           [ y(t) ].

Then we have that

    x′ = [ x′ ]  =  [ ax + by ]  =  [ a  b ] [ x ]  ≡  Ax.    (3.93)
         [ y′ ]     [ cx + dy ]     [ c  d ] [ y ]

Here we have introduced the coefficient matrix A. This is a first order vector equation, x′ = Ax. Formally, we can write the solution as

    x = x0 e^{At}.

Later, we will make some sense out of the exponential of a matrix. For now, we would like to investigate the solution of our system. Our investigations will lead to new techniques for solving linear systems using matrix methods, which can easily be extended to systems of first order differential equations with more than two unknowns.

We begin by recalling the solution to the specific problem (3.91). We obtained the solution to this system as

    x(t) = c1 e^t + c2 e^{−4t},
    y(t) = (1/3) c1 e^t − (1/2) c2 e^{−4t}.

This can be rewritten using matrix operations. Namely, we first write the solution in vector form:

    x = [ x(t) ]  =  [ c1 e^t + c2 e^{−4t}             ]
        [ y(t) ]     [ (1/3) c1 e^t − (1/2) c2 e^{−4t} ]

      = c1 [ 1   ] e^t  +  c2 [  1   ] e^{−4t}.    (3.94)
           [ 1/3 ]            [ −1/2 ]

We see that our solution is in the form of a linear combination of vectors of the form

    x = v e^{λt},

with v a constant vector and λ a constant number. This is similar to how we began to find solutions to second order constant coefficient equations. So, for the general problem (3.87) we insert this guess. Then,

    x′ = Ax  ⇒  λ v e^{λt} = A v e^{λt}.    (3.95)

For this to be true for all t, we then have that

    Av = λv.    (3.96)

This is an eigenvalue problem. A is a 2 × 2 matrix for our problem, but the formulation could easily be generalized to a system of n first order differential equations. We will confine our remarks for now to planar systems. Therefore, we need to recall how to solve eigenvalue problems and then see how solutions of eigenvalue problems can be used to obtain solutions to our systems of differential equations.

Often we are only interested in equilibrium solutions. For equilibrium solutions the system does not change in time, so we consider x′ = 0 and y′ = 0. Of course, this can only happen for constant solutions. Let x0 and y0 be equilibrium solutions. Then we have

    0 = ax0 + by0,
    0 = cx0 + dy0.    (3.97)

This is a linear system of homogeneous algebraic equations. One only has a unique solution when the determinant of the system is not zero, i.e., ad − bc ≠ 0. In this case, we only have the origin as a solution, (x0, y0) = (0, 0). However, if ad − bc = 0, then there are an infinite number of solutions.

Studies of equilibrium solutions and their stability occur more often in systems that do not readily yield to analytic solutions. Such is the case for many nonlinear systems, which are the basis of research in nonlinear dynamics and chaos.
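The claim that the constant vectors multiplying e^t and e^{−4t} in (3.94) are eigenvectors of the coefficient matrix can be checked directly. A sketch in plain Python, using exact rational arithmetic:

```python
from fractions import Fraction as F

# The vectors (1, 1/3) and (1, -1/2) from the solution (3.94) should be
# eigenvectors of A = [[-1, 6], [1, -2]] with eigenvalues 1 and -4.

A = [[F(-1), F(6)], [F(1), F(-2)]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

v1 = [F(1), F(1, 3)]
v2 = [F(1), F(-1, 2)]

print(apply(A, v1) == [1 * comp for comp in v1])    # True: eigenvalue 1
print(apply(A, v2) == [-4 * comp for comp in v2])   # True: eigenvalue -4
```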

3.6 Solving Constant Coefficient Systems in 2D

Before proceeding to examples, we first indicate the types of solutions that could result from the solution of a homogeneous, constant coefficient system of first order differential equations. The type of behavior depends upon the eigenvalues of the matrix A, i.e., upon the different types of roots that you obtain from solving the eigenvalue equation det(A − λI) = 0. The nature of these roots indicates the form of the general solution.

We begin with the linear system of differential equations in matrix form,

    dx/dt = [ a  b ] x  =  Ax.    (3.98)
            [ c  d ]

The procedure is to determine the eigenvalues and eigenvectors and use them to construct the general solution. The major work is in finding the linearly independent solutions. Thus, if x1(t) and x2(t) are two linearly independent solutions, then the general solution is given as

    x(t) = c1 x1(t) + c2 x2(t).

If you have an initial condition, x(t0) = x0, you can determine the two arbitrary constants in the general solution in order to obtain the particular solution. Then, setting t = 0 (for t0 = 0), you get two linear equations for c1 and c2:

    c1 x1(0) + c2 x2(0) = x0.

Case I: Two real, distinct roots. Solve the eigenvalue problem Av = λv for each eigenvalue, obtaining two eigenvectors v1, v2. Then write the general solution as a linear combination

    x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2.

Case II: One repeated root. Solve the eigenvalue problem Av = λv for the one eigenvalue λ, obtaining the first eigenvector v1. One then needs a second linearly independent solution. This is obtained by solving the nonhomogeneous problem

    Av2 − λv2 = v1

for v2. The general solution is then given by

    x(t) = c1 e^{λt} v1 + c2 e^{λt}(v2 + t v1).

We will look at a cleaner technique for this later in our discussion.

Case III: Two complex conjugate roots. Solve the eigenvalue problem Av = λv for one eigenvalue, λ = α + iβ, obtaining one eigenvector v. Note that this eigenvector may have complex entries. Thus, one can write the vector

    y(t) = e^{λt} v = e^{αt}(cos βt + i sin βt) v.

Now, construct two linearly independent solutions to the problem using the real and imaginary parts of y(t):

    y1(t) = Re(y(t))  and  y2(t) = Im(y(t)).

Then the general solution can be written as

    x(t) = c1 y1(t) + c2 y2(t).

The construction of the general solution in Case I is straightforward. However, the other two cases need a little explanation. We first look at Case III. Note that since the original system of equations does not have any i's, we would expect real solutions. So, we look at the real and imaginary parts of the complex solution. We have that the complex solution satisfies the equation

    d/dt [Re(y(t)) + i Im(y(t))] = A [Re(y(t)) + i Im(y(t))].

Differentiating the sum and splitting the real and imaginary parts of the equation gives

    d/dt Re(y(t)) + i d/dt Im(y(t)) = A[Re(y(t))] + i A[Im(y(t))].

Setting the real and imaginary parts equal, we have

    d/dt Re(y(t)) = A[Re(y(t))]

and

    d/dt Im(y(t)) = A[Im(y(t))].

Therefore, the real and imaginary parts are each linearly independent solutions of the system, and the general solution can be written as a linear combination of these expressions.

We now turn to Case II. Writing the system of first order equations as a second order equation for x(t), with the sole solution of the characteristic equation λ = ½(a + d), we have that the general solution takes the form

    x(t) = (c1 + c2 t) e^{λt}.

This suggests that the second linearly independent solution involves a term of the form v t e^{λt}. It turns out that the guess that works is

    x = t e^{λt} v1 + e^{λt} v2.

Inserting this guess into the system x′ = Ax yields

    (t e^{λt} v1 + e^{λt} v2)′ = A [ t e^{λt} v1 + e^{λt} v2 ]

    e^{λt} v1 + λ t e^{λt} v1 + λ e^{λt} v2 = λ t e^{λt} v1 + e^{λt} A v2.    (3.99)

Using the eigenvalue problem and noting that this must be true for all t, we find that

    v1 + λ v2 = A v2.    (3.100)

Therefore,

    (A − λI) v2 = v1.

We know everything in this equation except for v2. So, we just solve for it and obtain the second linearly independent solution.

3.7 Examples of the Matrix Method

Here we will give some examples of constant coefficient systems of differential equations for the three cases mentioned in the previous section. These are also examples of solving matrix eigenvalue problems.

Example 1.

    A = [ 4  2 ]
        [ 3  3 ].

Eigenvalues: We first determine the eigenvalues:

    0 = | 4 − λ    2    |
        | 3      3 − λ  |.    (3.101)

Therefore,

    0 = (4 − λ)(3 − λ) − 6
    0 = λ² − 7λ + 6
    0 = (λ − 1)(λ − 6).    (3.102)

The eigenvalues are then λ = 1, 6. This is an example of Case I.

Eigenvectors: Next we determine the eigenvectors associated with each of these eigenvalues. We have to solve the system Av = λv in each case.

Case λ = 1:

    [ 4  2 ] [ v1 ]  =  [ v1 ]    (3.103)
    [ 3  3 ] [ v2 ]     [ v2 ]

    [ 3  2 ] [ v1 ]  =  [ 0 ]    (3.104)
    [ 3  2 ] [ v2 ]     [ 0 ]

This gives 3v1 + 2v2 = 0. One possible solution yields an eigenvector of

    [ v1 ]  =  [  2 ]
    [ v2 ]     [ −3 ].

Case λ = 6:

    [ 4  2 ] [ v1 ]  =  6 [ v1 ]    (3.105)
    [ 3  3 ] [ v2 ]       [ v2 ]

    [ −2   2 ] [ v1 ]  =  [ 0 ]    (3.106)
    [ 3   −3 ] [ v2 ]     [ 0 ]

For this case we need to solve −2v1 + 2v2 = 0. This yields

    [ v1 ]  =  [ 1 ]
    [ v2 ]     [ 1 ].

General Solution: We can now construct the general solution:

    x(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2

         = c1 e^t [  2 ]  +  c2 e^{6t} [ 1 ]
                  [ −3 ]               [ 1 ]

         = [ 2c1 e^t + c2 e^{6t}  ]
           [ −3c1 e^t + c2 e^{6t} ].    (3.107)
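The two eigenpairs found in Example 1 can be checked in a couple of lines: applying A to each eigenvector should just scale it by its eigenvalue.

```python
# Check the eigenpairs of Example 1: A v = lambda v for
# lambda = 1, v = (2, -3) and lambda = 6, v = (1, 1).

A = [[4, 2], [3, 3]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

print(apply(A, [2, -3]))   # [2, -3]: unchanged, eigenvalue 1
print(apply(A, [1, 1]))    # [6, 6]: scaled by the eigenvalue 6
```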

Example 2.

    A = [ 3  −5 ]
        [ 1  −1 ].

Eigenvalues: Again, one solves the eigenvalue equation:

    0 = | 3 − λ    −5     |
        | 1      −1 − λ   |.    (3.108)

Therefore,

    0 = (3 − λ)(−1 − λ) + 5
    0 = λ² − 2λ + 2

    λ = [ −(−2) ± √(4 − 4(1)(2)) ] / 2 = 1 ± i.    (3.109)

The eigenvalues are then λ = 1 + i, 1 − i. This is an example of Case III.

Eigenvectors: In order to find the general solution, we need only find the eigenvector associated with λ = 1 + i. We have

    [ 3  −5 ] [ v1 ]  =  (1 + i) [ v1 ]
    [ 1  −1 ] [ v2 ]             [ v2 ]

    [ 2 − i    −5     ] [ v1 ]  =  [ 0 ].    (3.110)
    [ 1      −2 − i   ] [ v2 ]     [ 0 ]

We need to solve (2 − i)v1 − 5v2 = 0. Thus,

    [ v1 ]  =  [ 2 + i ]
    [ v2 ]     [   1   ].    (3.111)

Complex Solution: In order to get the two real linearly independent solutions, we need to compute the real and imaginary parts of v e^{λt}:

    e^{λt} [ 2 + i ]  =  e^{(1+i)t} [ 2 + i ]
           [   1   ]                [   1   ]

                      =  e^t (cos t + i sin t) [ 2 + i ]
                                               [   1   ]

                      =  e^t [ (2 + i)(cos t + i sin t) ]
                             [ cos t + i sin t          ]

                      =  e^t [ (2 cos t − sin t) + i(cos t + 2 sin t) ]
                             [ cos t + i sin t                        ]

                      =  e^t [ 2 cos t − sin t ]  +  i e^t [ cos t + 2 sin t ]
                             [ cos t           ]           [ sin t           ].    (3.112)

General Solution: Now we can construct the general solution:

    x(t) = c1 e^t [ 2 cos t − sin t ]  +  c2 e^t [ cos t + 2 sin t ]
                  [ cos t           ]            [ sin t           ]

         = e^t [ c1(2 cos t − sin t) + c2(cos t + 2 sin t) ]
               [ c1 cos t + c2 sin t                       ].

Note: This can be rewritten as

    x(t) = e^t cos t [ 2c1 + c2 ]  +  e^t sin t [ 2c2 − c1 ]
                     [ c1       ]               [ c2       ].

Example 3.

    A = [ 7  −1 ]
        [ 9   1 ].

Eigenvalues:

    0 = | 7 − λ    −1    |
        | 9       1 − λ  |.    (3.113)

Therefore,

    0 = (7 − λ)(1 − λ) + 9
    0 = λ² − 8λ + 16
    0 = (λ − 4)².    (3.114)

There is only one real eigenvalue, λ = 4. This is an example of Case II.

Eigenvectors: In this case we first solve for v1 and then get the second linearly independent vector:

    [ 7  −1 ] [ v1 ]  =  4 [ v1 ]
    [ 9   1 ] [ v2 ]       [ v2 ]

    [ 3  −1 ] [ v1 ]  =  [ 0 ].    (3.115)
    [ 9  −3 ] [ v2 ]     [ 0 ]

Therefore, we have 3v1 − v2 = 0, so

    [ v1 ; v2 ] = [ 1 ; 3 ].

Second Linearly Independent Solution: Now we need to solve Av2 − λv2 = v1. Writing v2 = [ u1 ; u2 ],

    [ 7  −1 ; 9  1 ] [ u1 ; u2 ] − 4 [ u1 ; u2 ] = [ 1 ; 3 ]
    [ 3  −1 ; 9  −3 ] [ u1 ; u2 ] = [ 1 ; 3 ].                     (3.116)

We therefore need to solve the system of equations

    3u1 − u2 = 1
    9u1 − 3u2 = 3.                                                 (3.117)

The solution is

    [ u1 ; u2 ] = [ 1 ; 2 ].

General Solution: We construct the general solution as

    y(t) = c1 e^{λt} v1 + c2 e^{λt} (v2 + t v1)
         = c1 e^{4t} [ 1 ; 3 ] + c2 e^{4t} ( [ 1 ; 2 ] + t [ 1 ; 3 ] )
         = e^{4t} [ c1 + c2(1 + t) ; 3c1 + c2(2 + 3t) ].           (3.118)

3.8 Inner Product Spaces (Optional)

This chapter has been about some of the linear algebra background that is needed in undergraduate physics. There is more that we could discuss, and with more rigor. We have only discussed finite dimensional vector spaces, linear transformations and their matrix representations, and solving eigenvalue problems. As we progress through the course we will return to these basics and some of their generalizations.
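The three worked examples can also be checked at once. The following is a hedged numerical sketch (my addition, not from the text): each general solution should satisfy the system x′ = Ax, which we test by comparing a central-difference derivative against Ax at a few times, for arbitrary constants c1 and c2.

```python
import numpy as np

c1, c2 = 1.3, -0.7   # arbitrary constants in the general solutions

def check(A, x, ts, h=1e-6):
    """Verify x'(t) = A x(t) at the given times via central differences."""
    for t in ts:
        deriv = (x(t + h) - x(t - h)) / (2 * h)
        assert np.allclose(deriv, A @ x(t), rtol=1e-5, atol=1e-5)

# Example 1: distinct real eigenvalues.
A1 = np.array([[4.0, 2.0], [3.0, 3.0]])
x1 = lambda t: (c1*np.exp(t)*np.array([2.0, -3.0])
                + c2*np.exp(6*t)*np.array([1.0, 1.0]))
check(A1, x1, [0.0, 0.3])

# Example 2: complex eigenvalues, real form of the solution.
A2 = np.array([[3.0, -5.0], [1.0, -1.0]])
x2 = lambda t: (c1*np.exp(t)*np.array([2*np.cos(t)-np.sin(t), np.cos(t)])
                + c2*np.exp(t)*np.array([np.cos(t)+2*np.sin(t), np.sin(t)]))
check(A2, x2, [0.0, 0.3])

# Example 3: repeated eigenvalue with a generalized eigenvector.
A3 = np.array([[7.0, -1.0], [9.0, 1.0]])
x3 = lambda t: np.exp(4*t)*np.array([c1 + c2*(1 + t), 3*c1 + c2*(2 + 3*t)])
check(A3, x3, [0.0, 0.3])
```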

An important generalization for physics is to infinite dimensional vector spaces, in particular function spaces. This conceptual framework is very important in areas such as quantum mechanics, as it is the basis of solving the eigenvalue problems that come up there so often with the Schrödinger equation. While we do not immediately need this understanding to carry out our computations, it can later help in the overall understanding of the methods of solution of linear partial differential equations. We will also see in the next chapter that the appropriate background spaces are function spaces, in which we can solve the wave and heat equations. One definitely needs to grasp these ideas in order to fully understand and appreciate quantum mechanics.

We will consider the space of functions of a certain type. They could be the space of continuous functions on [0, 1], or the space of differentiably continuous functions, or the set of functions integrable from a to b. As you can see, there are many types of function spaces. Later, we will specify the types of functions. For the time being, we are dealing just with real valued functions.

We will further need to be able to add functions and multiply them by scalars. Thus, the set of functions together with these operations will provide us with a vector space of functions. We will also need a scalar product defined on this space of functions. There are several types of scalar products, or inner products, that we can define. We need an inner product appropriate for such spaces. Thus, we define:

An inner product < , > on a real vector space V is a mapping from V × V into R such that for u, v, w ∈ V and α ∈ R one has

1. < v, v > ≥ 0 and < v, v > = 0 if and only if v = 0.
2. < v, w > = < w, v >.
3. < αv, w > = α < v, w >.
4. < u + v, w > = < u, w > + < v, w >.

A real vector space equipped with the above inner product leads to a real inner product space. A more general definition, with the second property replaced by < v, w > = < w, v >* (the complex conjugate), is needed for complex inner product spaces.

For an infinite dimensional space, we define the inner product as follows. Let f(x) and g(x) be functions defined on [a, b]. Then, we define the inner product, if the integral exists, as

    < f, g > = ∫_a^b f(x) g(x) dx.                                 (3.119)

Thus, we have function spaces equipped with an inner product. Can we find a basis for the space? For an n-dimensional space we need n basis vectors. For an infinite dimensional space, how many will we need? How do we know when we have enough? We will consider the answers to these questions as we proceed through the text.

Let's assume that we have a basis of functions {φn(x)}, n = 1, 2, .... Given a function f(x), how can we go about finding the components of f in this basis? In other words, let

    f(x) = Σ_{n=1}^∞ cn φn(x).                                     (3.120)

How do we find the cn's? Does this remind you of the problem we had earlier? Formally, we take the inner product of f with each φj, to find

    < φj, f > = < φj, Σ_{n=1}^∞ cn φn >
              = Σ_{n=1}^∞ cn < φj, φn >.                           (3.121)

If our basis is an orthogonal basis, then we have

    < φj, φn > = Nj δjn,

where δjn is the Kronecker delta. Thus,

    < φj, f > = Σ_{n=1}^∞ cn Nj δjn = cj Nj.                       (3.122)

So, the expansion coefficient is

    cj = < φj, f > / Nj = < φj, f > / < φj, φj >.

In our preparation for later sections, let's determine if the set of functions φn(x) = sin nx for n = 1, 2, ... is orthogonal on the interval [−π, π]. We need to show that < φn, φm > = 0 for n ≠ m. Thus, we have for n ≠ m

    < φn, φm > = ∫_{−π}^{π} sin nx sin mx dx
               = (1/2) ∫_{−π}^{π} [cos(n − m)x − cos(n + m)x] dx
               = (1/2) [ sin(n − m)x/(n − m) − sin(n + m)x/(n + m) ]_{−π}^{π} = 0.   (3.123)

Here we have made use of a trigonometric identity for the product of two sines. So, we have determined that the set φn(x) = sin nx for n = 1, 2, ... is an orthogonal set of functions on the interval [−π, π].

Just as with vectors in three dimensions, we can normalize our basis functions to arrive at an orthonormal basis. This is simply done by dividing by the length of the vector. Recall that the length of a vector is obtained as v = √(v · v). In the same way, we define the norm of our functions by

    ||f|| = √(< f, f >).

Note, there are many types of norms, but this one will be sufficient for us.

For the above basis of sine functions, we want to first compute the norm of each function. Then we would like to find a new basis from this one such that each basis function has unit length, and is therefore an orthonormal basis. We first compute

    ||φn||² = ∫_{−π}^{π} sin² nx dx
            = (1/2) ∫_{−π}^{π} [1 − cos 2nx] dx
            = (1/2) [ x − sin 2nx/(2n) ]_{−π}^{π} = π.             (3.124)
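These orthogonality and norm computations are easy to confirm numerically. A small sketch (my addition, not from the text; SciPy is assumed to be available):

```python
import numpy as np
from scipy.integrate import quad

# Inner product <f, g> = integral from -pi to pi of f(x) g(x) dx,
# evaluated by adaptive quadrature.
def inner(f, g):
    val, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
    return val

phi = lambda n: (lambda x: np.sin(n * x))

# <phi_n, phi_m> = 0 for n != m ...
assert abs(inner(phi(1), phi(2))) < 1e-8
assert abs(inner(phi(3), phi(5))) < 1e-8

# ... and ||phi_n||^2 = pi for every n.
for n in (1, 2, 3):
    assert abs(inner(phi(n), phi(n)) - np.pi) < 1e-8
```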

We have found from this computation that

    < φn, φm > = π δnm                                             (3.125)

and that ||φn|| = √π. Defining ψn(x) = (1/√π) φn(x), we have normalized the φn's and have obtained an orthonormal basis of functions on [−π, π].

Expansions of functions in trigonometric bases occur often and originally resulted from the study of partial differential equations. They have been named Fourier series and will be the topic of the next chapter.
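As a closing illustration (the sample function here is my own choice, not from the text), the coefficient formula cj = < φj, f > / < φj, φj > really does recover the components of a function built from this sine basis:

```python
import numpy as np

# Sample f(x) = sin x + 3 sin 2x on a uniform grid over [-pi, pi) and
# approximate the inner product integrals by Riemann sums, which are
# very accurate for smooth periodic integrands over a full period.
x = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dx = x[1] - x[0]
f = np.sin(x) + 3 * np.sin(2 * x)

coeffs = []
for j in range(1, 5):
    phi = np.sin(j * x)
    c_j = (np.sum(f * phi) * dx) / (np.sum(phi * phi) * dx)  # <phi_j,f>/N_j
    coeffs.append(c_j)

# Recovered coefficients: c_1 = 1, c_2 = 3, c_3 = c_4 = 0.
assert np.allclose(coeffs, [1.0, 3.0, 0.0, 0.0], atol=1e-8)
```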

Chapter 4

The Harmonics of Vibrating Strings

4.1 Harmonics and Vibrations

Until now we have studied oscillations in several physical systems. These led to ordinary differential equations describing the time evolution of the systems and required the solution of initial value problems. In this chapter we will extend our study to include oscillations in space. Such systems are governed by partial differential equations.

The typical example is the vibrating string. When one plucks a violin, or guitar, string, the string vibrates, exhibiting a variety of sounds. These are enhanced by the violin case, but we will only focus on the simpler vibrations of the string. We will consider the one dimensional wave motion in the string. Physically, the speed of these waves depends on the tension in the string and its mass density. The vibrations of a string are governed by the one dimensional wave equation. The frequencies we hear are then related to the string shape, or the allowed wavelengths across the string. We will be interested in the harmonics, or pure sinusoidal waves, of the vibrating string and how a general wave on a string can be represented as a sum over such harmonics. This will take us into the field of spectral, or Fourier, analysis.

Another simple partial differential equation is that of the heat, or diffusion, equation.

This equation governs heat flow. We will consider the flow of heat through a one dimensional rod. The solution of the heat equation also involves the use of Fourier analysis. However, in this case there are no oscillations in time.

There are many applications that are studied using spectral analysis. At the root of these studies is the belief that continuous waveforms are composed of a number of harmonics. Such ideas stretch back to the Pythagoreans' study of the vibrations of strings, which led to their program of a world of harmony. This idea was carried further by Johannes Kepler in his harmony of the spheres approach to planetary orbits.

In the 1700's others worked on the superposition theory for vibrating waves on a stretched string, starting with the wave equation and leading to the superposition of right and left traveling waves. This work was carried out by people such as John Wallis, Brook Taylor and Jean le Rond d'Alembert. In 1742 d'Alembert solved the wave equation

    c² ∂²y/∂x² − ∂²y/∂t² = 0,

where y(x, t) is the string height and c is the wave speed. However, his solution led himself and others, like Leonhard Euler and Daniel Bernoulli, to investigate what "functions" could be the solutions of this equation. In fact, this led to a more rigorous approach in the study of analysis by first coming to grips with the concept of a function. For example, in 1749 Euler sought the solution for a plucked string, in which case the initial condition y(x, 0) = h(x) has a discontinuous derivative!

In 1753 Daniel Bernoulli viewed the solutions as a superposition of simple vibrations, or harmonics. Such superpositions amounted to looking at solutions of the form

    y(x, t) = Σ_k a_k sin(kπx/L) cos(kπct/L),

where the string extends over the finite interval [0, L] with fixed ends at x = 0 and x = L. However, the initial conditions for such superpositions are represented as a sum of sinusoidal functions,

    y(x, 0) = Σ_k a_k sin(kπx/L).

It was determined that many functions could not be represented by a finite number of harmonics. More generally, the solution consists of an infinite series of trigonometric functions, even for the simply plucked string, given by an initial condition of the form

    y(x, 0) = { cx,        0 ≤ x ≤ L/2
              { c(L − x),  L/2 ≤ x ≤ L.

Thus, these studies led to very important questions, which in turn opened the doors to whole fields of analysis. Some of the problems raised were:

1. What functions can be represented as the sum of trigonometric functions?

2. How can a function with discontinuous derivatives, like the plucked string, be represented by a sum of smooth functions?

3. Do such infinite sums of trigonometric functions actually converge to the functions they represent?

There are many other systems in which it makes sense to interpret the solutions as sums of sinusoids of particular frequencies. For example, we can consider ocean waves. Ocean waves are affected by the gravitational pull of the moon and the sun, and numerous other forces. These lead to the tides, which in turn have their own periods of motion. In an analysis of wave heights, one can separate out the tidal components by making use of Fourier analysis.

Such series expansions were also of importance in Joseph Fourier's solution of the heat equation. The use of Fourier expansions became an important tool in the solution of linear partial differential equations, such as the wave equation and the heat equation. Using a technique called the Separation of Variables, higher dimensional problems could be reduced to one dimensional boundary value problems.

4.2 Boundary Value Problems

Until this point we have solved initial value problems. For an initial value problem one has to solve a differential equation subject to conditions on

the unknown function and its derivatives at one value of the independent variable. For example, for x = x(t) we could have the initial value problem

    x″ + x = 2,   x(0) = 1,  x′(0) = 0.                            (4.1)

In the next sections we will study boundary value problems and various tools for solving such problems. For a boundary value problem one has to solve a differential equation subject to conditions on the unknown function or its derivatives at more than one value of the independent variable.

Typically, initial value problems involve time dependent functions and boundary value problems are spatial. So, with an initial value problem one knows how a system evolves in terms of the differential equation and the state of the system at some fixed time. Then one seeks to determine the state of the system at a later time. For boundary value problems, one knows how each point responds to its neighbors, but there are conditions that have to be satisfied at the endpoints. An example would be a horizontal beam supported at the ends, like a bridge. The shape of the beam under the influence of gravity, or other forces, would lead to a differential equation, and the boundary conditions at the beam ends would affect the solution of the problem. There are also a variety of types of boundary conditions. In the case of a beam, one end could be fixed and the other end could be free to move. We will explore the effects of different boundary value conditions in our discussions and exercises.

As an example, we have a slight modification of the above problem: Find the solution x = x(t) for 0 ≤ t ≤ 1 that satisfies the problem

    x″ + x = 2,   x(0) = 0,  x(1) = 0.                             (4.2)

As with initial value problems, we need to find the general solution and then apply any conditions that we may have. This is a nonhomogeneous differential equation, so we have that the solution is the sum of a solution of the homogeneous equation and a particular solution of the nonhomogeneous equation, x(t) = xh(t) + xp(t). The solution of x″ + x = 0 is easily found as

    xh(t) = c1 cos t + c2 sin t.

The particular solution is easily found using the Method of Undetermined Coefficients: xp(t) = 2.

Thus, the general solution is

    x(t) = 2 + c1 cos t + c2 sin t.

We now apply the boundary conditions and see if there are values of c1 and c2 that yield a solution to our problem. The first condition, x(0) = 0, gives 0 = 2 + c1. Thus, c1 = −2. Using this value for c1, the second condition, x(1) = 0, gives 0 = 2 − 2 cos 1 + c2 sin 1. This yields

    c2 = 2(cos 1 − 1)/sin 1.

We have found that there is a solution to the boundary value problem, and it is given by

    x(t) = 2 ( 1 − cos t + [(cos 1 − 1)/sin 1] sin t ).

Boundary value problems arise in many physical systems, just as the initial value problems we have seen earlier. We will see in the next sections that boundary value problems for ordinary differential equations often appear in the solutions of partial differential equations. However, there is no guarantee that we will have unique solutions of our boundary value problems, as we had in our example above.
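The boundary value problem just solved can be checked directly. This is a sketch (my addition), assuming, as in the derivation above, the boundary conditions x(0) = 0 and x(1) = 0:

```python
import math

# Candidate solution x(t) = 2(1 - cos t) + [2(cos 1 - 1)/sin 1] sin t.
c2 = 2 * (math.cos(1) - 1) / math.sin(1)

def x(t):
    return 2 * (1 - math.cos(t)) + c2 * math.sin(t)

def xpp(t):
    # Analytic second derivative of x(t).
    return 2 * math.cos(t) - c2 * math.sin(t)

# Both boundary conditions hold ...
assert abs(x(0)) < 1e-12
assert abs(x(1)) < 1e-12

# ... and x'' + x = 2 at every test point.
for t in (0.25, 0.5, 0.75):
    assert abs(xpp(t) + x(t) - 2) < 1e-12
```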

4.3 Partial Differential Equations

In this section we will introduce some generic partial differential equations and see how the discussion of such equations leads naturally to the study of boundary value problems for ordinary differential equations. These in turn will lead us to the study of eigenvalue problems in function spaces. However, we will not derive the particular equations at this time.

For ordinary differential equations, the unknown functions are functions of a single variable, e.g., y = y(x). Partial differential equations involve an unknown function of several variables and its derivatives, e.g., u = u(x, t), u = u(x, y), etc. Here the derivatives are partial derivatives. We will use the standard notations

    ux = ∂u/∂x,   uxx = ∂²u/∂x²,   etc.

There are a few generic equations that one encounters. These can be studied in one to three dimensions and are all linear differential equations. A list is provided in Table 4.1. Here we have introduced the Laplacian operator, ∇²u = uxx + uyy + uzz. Depending on the types of boundary conditions imposed and on the geometry of the system (rectangular, cylindrical, spherical, etc.), one encounters many interesting boundary value problems for ordinary differential equations.

    Name                     2 Vars                        3D
    Heat Equation            ut = k uxx                    ut = k ∇²u
    Wave Equation            utt = c² uxx                  utt = c² ∇²u
    Laplace's Equation       uxx + uyy = 0                 ∇²u = 0
    Poisson's Equation       uxx + uyy = F(x, y)           ∇²u = F(x, y, z)
    Schrödinger's Equation   i ut = uxx + F(x, t) u        i ut = ∇²u + F(x, y, z, t) u

    Table 4.1: List of generic partial differential equations.

Let's look at the heat equation in one dimension. This could describe the heat conduction in a thin insulated rod of length L. It could also describe the diffusion of pollutant in a long narrow stream, or the flow of traffic down a road. In problems involving diffusion processes, one instead calls this equation the diffusion equation.

We now formulate a typical initial-boundary value problem for the heat equation. We begin with a thin rod of length L. We assume that it is laterally insulated, so that the heat flow is only through the ends of the rod. The rod initially has a temperature distribution u(x, 0) = f(x). Placing the rod in an ice bath, one has the boundary conditions u(0, t) = 0 and u(L, t) = 0. Of course, we are dealing with Celsius temperatures and we assume there is plenty of ice to keep that temperature fixed at each end. So, the problem one would need

to solve is given as

    PDE   ut = k uxx,       0 < t,  0 ≤ x ≤ L
    IC    u(x, 0) = f(x),   0 < x < L
    BC    u(0, t) = 0,      t > 0
          u(L, t) = 0,      t > 0                                  (4.3)

Here, k is the heat conduction constant and is determined using properties of the bar.

Another problem that will come up in later discussions is that of the vibrating string. A string of length L is stretched out horizontally with both ends fixed. Think of a violin string or a guitar string. Then the string is plucked, giving the string an initial profile. Let u(x, t) be the vertical displacement of the string at position x and time t. The motion of the string is governed by the one dimensional wave equation. The initial-boundary value problem for this problem is given as

    PDE   utt = c² uxx,     0 < t,  0 ≤ x ≤ L
    IC    u(x, 0) = f(x),   0 < x < L
    BC    u(0, t) = 0,      t > 0
          u(L, t) = 0,      t > 0                                  (4.4)

In this problem c is the wave speed in the string. It depends on the mass per unit length of the string and the tension placed on the string.

4.4 The 1D Heat Equation

We would like to see how the solution of such problems involving partial differential equations will lead naturally to studying boundary value problems for ordinary differential equations. We will see this as we solve the initial-boundary value problem given in Equation (4.3). We will employ a method typically used in studying linear partial differential equations, called the method of separation of variables. We assume that u can be written as a product of single variable functions of each independent variable,

    u(x, t) = X(x)T(t).

Substituting this guess into the heat equation, we find that

    X T′ = k X″ T.                                                 (4.5)

Dividing both sides by k and by u = XT, we then get

    (1/k) (T′/T) = X″/X.                                           (4.6)

We have separated the functions of time on one side and space on the other side. The only way that a function of t can equal a function of x is if both functions are constant functions. Therefore, we set each side equal to a constant, λ:

    (1/k) (T′/T)   =   X″/X   =   λ.
    [function of t]    [function of x]    [constant]

This leads to two equations:

    T′ = kλT,
    X″ = λX.

These are ordinary differential equations. The general solutions to these equations are readily found as

    T(t) = A e^{kλt},                                              (4.7)
    X(x) = c1 e^{√λ x} + c2 e^{−√λ x}.                             (4.8)

We need to be a little careful at this point. The aim is to force our product solutions to satisfy both the boundary conditions and the initial conditions. Also, we should note that λ is arbitrary and may be positive, zero, or negative.

We first look at how the boundary conditions on u lead to conditions on X. The first condition is u(0, t) = 0. This implies that X(0)T(t) = 0 for all t. The only way that this is true is if X(0) = 0. Similarly, u(L, t) = 0 implies that X(L) = 0. So, we have to solve the boundary value problem

    X″ − λX = 0,   X(0) = 0 = X(L).                                (4.9)

We are seeking nonzero solutions, as X(x) = 0 for all x is an obvious and uninteresting solution. We call such solutions trivial solutions. There are three cases to consider in solving the boundary value problem (4.9), depending on the sign of λ.

Case I. λ > 0

In this case we have the exponential solutions

    X(x) = c1 e^{√λ x} + c2 e^{−√λ x}.                             (4.10)

For X(0) = 0, we have 0 = c1 + c2. We will take c2 = −c1. Then,

    X(x) = c1 (e^{√λ x} − e^{−√λ x}) = 2c1 sinh(√λ x).

Applying the second condition, X(L) = 0 yields

    2c1 sinh(√λ L) = 0.

This will be true only if c1 = 0 or sinh(√λ L) = 0. The latter equation is true only if √λ L = 0. However, in this case λ > 0. So, c1 = 0 and the only solution in this case is X(x) = 0. This leads to a trivial solution.

Case II. λ = 0

For this case it is easier to set λ to zero in the differential equation from the start: X″ = 0. Integrating twice, one finds

    X(x) = c1 x + c2.

Setting x = 0, we have c2 = 0, leaving X(x) = c1 x. Setting x = L, we find c1 L = 0. So, c1 = 0 and we are once again left with a trivial solution.

Case III. λ < 0

In this case it would be simpler to write λ = −μ² < 0. Then the differential equation is

    X″ + μ² X = 0.

The general solution of this equation is

    X(x) = c1 cos μx + c2 sin μx.

At x = 0 we get 0 = c1. This leaves

    X(x) = c2 sin μx.

At x = L, we find

    0 = c2 sin μL.

So, either c2 = 0 or sin μL = 0. c2 = 0 leads to a trivial solution again. But, there are cases when the sine is zero. Namely,

    μL = nπ,   n = 1, 2, 3, ....

Note that n = 0 is not included, since this leads to a trivial solution. Also, negative values of n are redundant, since the sine function is an odd function.

In summary, we have found solutions to our boundary value problem (4.9) for particular values of λ. The solutions are

    Xn(x) = sin(nπx/L),   n = 1, 2, 3, ...,

for

    λn = −μn² = −(nπ/L)²,   n = 1, 2, 3, ....

Product solutions of the heat equation (4.3) satisfying the boundary conditions are therefore

    un(x, t) = bn e^{kλn t} sin(nπx/L),   n = 1, 2, 3, ...,

where bn is an arbitrary constant. However, these do not necessarily satisfy the initial condition u(x, 0) = f(x). What we do get is

    un(x, 0) = bn sin(nπx/L),   n = 1, 2, 3, ....

So, if our initial condition is in one of these forms, we can pick out the right n and we are done. For other initial conditions, we have to do more work. Note, since the heat equation is linear, we can write a linear combination of our product solutions and have a solution,

    u(x, t) = Σ_{n=1}^∞ bn e^{kλn t} sin(nπx/L).
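The product solutions can be verified directly. The following numerical sketch (my addition; k, L, and the sample points are arbitrary choices, and bn is taken to be 1) checks that un(x, t) = e^{kλn t} sin(nπx/L) satisfies ut = k uxx and vanishes at both ends:

```python
import numpy as np

k, L = 0.5, 2.0   # arbitrary heat constant and rod length for the test

def u(n, x, t):
    """Product solution u_n with b_n = 1."""
    lam = -(n * np.pi / L) ** 2
    return np.exp(k * lam * t) * np.sin(n * np.pi * x / L)

h = 1e-5
for n in (1, 2, 3):
    # Boundary conditions at x = 0 and x = L.
    assert abs(u(n, 0.0, 0.7)) < 1e-12
    assert abs(u(n, L, 0.7)) < 1e-9
    # Finite-difference check of u_t = k u_xx at interior points.
    for (x0, t0) in [(0.3, 0.1), (1.2, 0.5)]:
        ut = (u(n, x0, t0 + h) - u(n, x0, t0 - h)) / (2 * h)
        uxx = (u(n, x0 + h, t0) - 2 * u(n, x0, t0) + u(n, x0 - h, t0)) / h**2
        assert abs(ut - k * uxx) < 1e-4
```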

We note that this equation satisfies the boundary conditions. So, the only thing left to check is the initial condition:

    f(x) = u(x, 0) = Σ_{n=1}^∞ bn sin(nπx/L).

So, if we are given f(x), can we find the constants bn? If we can, then we will have the solution to the full initial-boundary value problem. This will be the subject of the next chapter.

Before moving on to the wave equation, we should note that (4.9) is an eigenvalue problem. We can recast the differential equation as

    LX = λX,

where

    L = D² = d²/dx²

is a linear differential operator. The solutions, Xn(x), are called eigenfunctions and the λn's are the eigenvalues. We will elaborate more on this characterization later in the book.

4.5 The 1D Wave Equation

In this section we will apply the method of separation of variables to the one dimensional wave equation, given by

    ∂²u/∂t² = c² ∂²u/∂x²                                           (4.11)

and subject to the conditions

    u(x, 0) = f(x),
    ut(x, 0) = g(x),
    u(0, t) = 0,
    u(L, t) = 0.                                                   (4.12)

This problem applies to the propagation of waves on a string of length L with both ends fixed so that they do not move. Here u(x, t) represents the

vertical displacement of the string over time. The constant c is the wave speed, given by

    c = √(T/μ),

where T is the tension in the string and μ is the mass per unit length. The derivation of the wave equation assumes that the vertical displacement is small and the string is uniform.

We can understand this equation in terms of string instruments. The tension can be adjusted to produce different tones, and the makeup of the string (nylon or steel, thick or thin) also has an effect. In some cases the mass density is changed simply by using thicker strings. Thus, the thicker strings in a piano produce lower frequency notes.

The utt term gives the acceleration of a piece of the string. The uxx term is the concavity of the string. Thus, for a positive concavity the string is curved upward near the point of interest, and neighboring points tend to pull the string upward towards the equilibrium position. If the concavity is negative, it would cause a negative acceleration.

The solution of this problem is easily found using separation of variables. We let u(x, t) = X(x)T(t). Then we find

    X T″ = c² X″ T,

which can be rewritten as

    (1/c²) (T″/T) = X″/X.

Again, we have separated the functions of time on one side and space on the other side. Therefore, as before, we set each side equal to a constant, λ:

    (1/c²) (T″/T)   =   X″/X   =   λ.
    [function of t]     [function of x]    [constant]

This leads to two equations:

    T″ = c²λT,                                                     (4.13)
    X″ = λX.                                                       (4.14)

As before, we have the boundary conditions on X(x):

    X(0) = 0, and X(L) = 0.

Again, this gives us

    Xn(x) = sin(nπx/L),   λn = −(nπ/L)².                           (4.15)

From Equation (4.13) we have to solve

    T″ + (nπc/L)² T = 0.                                           (4.16)

This equation takes a familiar form. We let

    ωn = nπc/L;

then we have

    T″ + ωn² T = 0.

The solutions are easily found as

    T(t) = An cos ωn t + Bn sin ωn t.

So, our product solutions are of the form sin(nπx/L) cos ωn t and sin(nπx/L) sin ωn t. The general solution, a superposition of all product solutions, is given by

    u(x, t) = Σ_{n=1}^∞ [ An cos(nπct/L) + Bn sin(nπct/L) ] sin(nπx/L).   (4.17)

This solution satisfies the wave equation and the boundary conditions. We still need to satisfy the initial conditions. Note that there are two initial conditions, since the wave equation is second order in time.

First, we have

    f(x) = u(x, 0) = Σ_{n=1}^∞ An sin(nπx/L).                      (4.18)

In order to obtain the condition on the initial velocity, ut(x, 0) = g(x), we need to differentiate the general solution with respect to t:

    ut(x, t) = Σ_{n=1}^∞ (nπc/L) [ −An sin(nπct/L) + Bn cos(nπct/L) ] sin(nπx/L).   (4.19)

Then, we have

    g(x) = ut(x, 0) = Σ_{n=1}^∞ (nπc/L) Bn sin(nπx/L).             (4.20)

In both cases we have that the given functions, f(x) and g(x), are represented as Fourier sine series. In order to complete the problem we need to determine the constants An and Bn for n = 1, 2, 3, .... Once we have these, we have the complete solution to the wave equation. We had seen similar results for the heat equation. In the next section we will find out how to determine the Fourier coefficients for such series of sinusoidal functions.

4.6 Introduction to Fourier Series

In this chapter we will look at trigonometric series. In your calculus courses you have probably seen that many functions could have series representations as expansions in powers of x, or x − a. This led to Maclaurin or Taylor series. When dealing with Taylor series, you often had to determine the expansion coefficients. For example, given an expansion of f(x) about x = a, you learned that the Taylor series is given by

    f(x) = Σ_{n=0}^∞ cn (x − a)ⁿ,

where the expansion coefficients are determined as

    cn = f⁽ⁿ⁾(a)/n!.

Then you found that the Taylor series converged for a certain range of x values. (We will review Taylor series in the next chapter when we study series representations of complex valued functions.)

We have seen from our previous product solutions of the heat and wave equations that series representations might also come in the form of expansions in terms of sine and cosine functions.
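The Taylor series recap above can be illustrated in a few lines (an aside, not from the text): for f(x) = e^x about a = 0 every derivative equals 1, so cn = 1/n!, and the partial sums converge rapidly to f(x).

```python
import math

def taylor_exp(x, N):
    """Partial sum of the Taylor series of e^x about 0, with c_n = 1/n!."""
    return sum(x**n / math.factorial(n) for n in range(N + 1))

# Twenty-one terms already reproduce e^x to near machine precision here.
for x in (0.5, 1.0, 2.0):
    assert abs(taylor_exp(x, 20) - math.exp(x)) < 1e-12
```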

[Figure 4.1: Plots of y(t) = A sin(2πf t) on [0, 5] for f = 2 Hz (top, y(t) = 2 sin 4πt) and f = 5 Hz (bottom, y(t) = sin 10πt).]

In this chapter, we will investigate the Fourier trigonometric series expansion

    f(x) = a0/2 + Σ_{n=1}^∞ [ an cos(nπx/L) + bn sin(nπx/L) ].

We will find expressions useful for determining the Fourier coefficients {an, bn} given a function f(x) defined on [−L, L]. We will also see if the resulting infinite series reproduces f(x). However, we first begin with some basic ideas involving simple sums of sinusoidal functions.

The natural appearance of such sums over sinusoidal functions is in music, or sound. A pure note can be represented as

    y(t) = A sin(2πf t),

where A is the amplitude, f is the frequency in hertz (Hz), and t is time in seconds. The amplitude is related to the volume of the sound: the larger the amplitude, the louder the sound. In Figure 4.1 we show plots of two such tones with f = 2 Hz in the top plot and f = 5 Hz in the bottom one. In these plots you should notice the difference due to the amplitudes and the frequencies. You can easily reproduce these plots and others in your favorite plotting utility.

As an aside, you should be cautious when plotting functions, or sampling

[Figure 4.2: Problems can occur while plotting. Here we plot the function y(t) = 2 sin(4πt) using N = 201, 200, 100, and 101 points.]

data. The plots you get might not be what you expect, even for a simple sine function. In Figure 4.2 we show four plots of the function y(t) = 2 sin(4πt). In the top left you see a proper rendering of this function. However, if you use a different number of points to plot this function, the results may be surprising. In this example we show what happens if you use N = 200, 100, or 101 points instead of the 201 points used in the first plot. Such disparities are not only possible when plotting functions, but are also present when collecting data. This is not to be unexpected. Typically, when you sample a set of data, you only gather a finite amount of information at a fixed rate. This could happen when getting data on ocean wave heights, digitizing music and other audio to put on your computer, or any other process in which you attempt to analyze a continuous signal.

Next, we want to consider what happens when we add several pure tones. After all, most of the sounds that we hear are in fact a combination of pure tones with different amplitudes and frequencies. In Figure 4.3 we see what happens when we add several sinusoids. One notes that as one adds more and more tones with different characteristics, the resulting signal gets more complicated. However, looking at the superpositions in Figure 4.3, we see that the sums yield functions that appear to be periodic. We recall that a periodic function is one in which the function values repeat over the domain of the function. The length of the smallest part of the domain which repeats is called the period. (We defined this more precisely in an earlier chapter.)
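The sampling pitfall described here can be reproduced in a couple of lines (an illustrative aside; the 1 Hz example is my own choice, picked so the effect is exact): sampling a sinusoid exactly once per period makes it look like a constant.

```python
import numpy as np

# Sample y(t) = sin(2*pi*t) once per second, i.e. once per period:
# every sample lands at a zero crossing, so the signal looks flat.
t_coarse = np.arange(0.0, 5.0, 1.0)
y_coarse = np.sin(2 * np.pi * t_coarse)
assert np.allclose(y_coarse, 0.0, atol=1e-9)

# A fine grid reveals the true oscillation between -1 and 1.
t_fine = np.linspace(0.0, 5.0, 1001)
y_fine = np.sin(2 * np.pi * t_fine)
assert y_fine.max() > 0.99 and y_fine.min() < -0.99
```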

[Figure 4.3: Superposition of several sinusoids. Top: y(t) = 2 sin(4πt) + sin(10πt). Bottom: y(t) = 2 sin(4πt) + sin(10πt) + 0.5 sin(16πt).]

Recall that one can determine the period by dividing the coefficient of t into 2π. For example, we began with y(t) = 2 sin(4πt). In this case we have

    T = 2π/(4π) = 1/2.

Looking at the top plot in Figure 4.1 we can verify this result. (You can count the full number of cycles in the graph and divide this into the total time to get a more accurate value than just looking at one period.) In general, if y(t) = A sin(2πf t), the period is found as

    T = 2π/(2πf) = 1/f.

Of course, this result makes sense, as the unit of frequency, the hertz, or cycles per second, is also defined as s⁻¹.

Returning to the superpositions in Figure 4.3, we consider the functions used there. We have that sin(10πt) has a period of 0.2 s and sin(16πt) has a period of 0.125 s. The two superpositions shown in Figure 4.3 retain the largest period of the signals added, which is 0.5 s, coming from the sin(4πt) terms.
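The periods quoted above are easy to confirm numerically (a small sketch, not part of the text): y(t + T) = y(t) for T = 1/f.

```python
import numpy as np

def is_periodic(y, T, ts):
    """Check y(t + T) = y(t) at each sample time t."""
    return all(abs(y(t + T) - y(t)) < 1e-9 for t in ts)

ts = np.linspace(0, 1, 17)
assert is_periodic(lambda t: 2 * np.sin(4 * np.pi * t),    0.5,   ts)  # f = 2 Hz
assert is_periodic(lambda t: np.sin(10 * np.pi * t),       0.2,   ts)  # f = 5 Hz
assert is_periodic(lambda t: 0.5 * np.sin(16 * np.pi * t), 0.125, ts)  # f = 8 Hz
```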

superpositions are still periodic, with a common period set by the signals added. (More precisely, the period of the sum is the least common multiple of the individual periods; for the sums in Figure 4.3 this is 1 s, which is larger than the 0.5 s period of the sin(4πt) terms alone.)

So, we have learned two things from these examples. First, sums of pure tones yield signals that are still periodic. Secondly, we can easily add shifted sine functions. This is because we can add sinusoidal functions that do not necessarily peak at the same time. We need only consider two signals that originate at diﬀerent times. This is similar to when your music teacher would make sections of the class sing a song like "Row, Row, Row Your Boat" starting at slightly diﬀerent times.

In Figure 4.4 we show the functions y(t) = 2 sin(4πt) and y(t) = 2 sin(4πt + 7π/8) and their sum. Note that this shifted sine function can be written as y(t) = 2 sin(4π(t + 7/32)). Thus, this corresponds to a time shift of −7/32.

Figure 4.4: Plot of the functions y(t) = 2 sin(4πt) and y(t) = 2 sin(4πt + 7π/8) and their sum.

Thus, we should account for shifted sine functions in our general sum. Of course, we would then need to determine the unknown time shift as well as the amplitudes of the sinusoidal functions that make up our signal. First of all, this means that using just sine functions will not be enough; secondly, this might involve an inﬁnite number of such terms. Thus, we will be studying inﬁnite series of functions. Our goal will be to start with a function and then determine the amplitudes of the simple sinusoids needed to sum to that function. While this is one approach that some researchers use to analyze signals,

there is a more common approach. This results from a reworking of the shifted function. Consider the general shifted function

y(t) = A sin(2πf t + φ).

Note that 2πf t + φ is called the phase of our sine function and φ is called the phase shift. We can use the trigonometric identity for the sine of the sum of two angles to obtain

y(t) = A sin(2πf t + φ) = A sin(φ) cos(2πf t) + A cos(φ) sin(2πf t).

Deﬁning a = A sin(φ) and b = A cos(φ), we can rewrite this as

y(t) = a cos(2πf t) + b sin(2πf t).

Thus, we see that our signal is a sum of sine and cosine functions with the same frequency and diﬀerent amplitudes. If we can ﬁnd a and b, then we can easily determine A and φ:

A = √(a² + b²),  tan φ = a/b.

We are now in a position to formally state our goal.

Goal: Given a function f(t), we would like to determine its frequency content by ﬁnding out what combinations of sines and cosines of varying frequencies and amplitudes will sum to the given function.

There are still a few problems in practice. First of all, there is a continuum of frequencies to add over. This typically means our sum is an integral! Secondly, our measuring devices do not sample frequently enough to get at high frequency content; we have to settle for a discrete set of frequencies. If we only have frequencies fn = n fs, for fs the sampling rate of our device, then our phases in the trigonometric series are 2πf t = 2πn fs t. Deﬁning x = 2πfs t, we then are led to sums over

yn(t) = an cos nx + bn sin nx,  n = 1, 2, . . . .

Then our sum would lead to an inﬁnite series. Actually, we will also need a zero frequency term, corresponding to n = 0, to account for any vertical shift in our function. In circuit analysis this
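The amplitude–phase conversion above is easy to verify numerically. A minimal sketch (Python, standard library only; the helper names `to_rect` and `to_polar` are introduced here, not from the text):

```python
import math

def to_rect(A, phi):
    """A sin(2πft + φ) = a cos(2πft) + b sin(2πft), with a = A sin φ, b = A cos φ."""
    return A * math.sin(phi), A * math.cos(phi)

def to_polar(a, b):
    """Invert the relations: A = sqrt(a² + b²), tan φ = a/b."""
    return math.hypot(a, b), math.atan2(a, b)

a, b = to_rect(2.0, 7 * math.pi / 8)
A, phi = to_polar(a, b)
print(A, phi)   # recovers the original amplitude and phase shift

# check that the two forms agree pointwise for f = 2 Hz
f = 2.0
for k in range(100):
    t = k / 100.0
    lhs = 2.0 * math.sin(2 * math.pi * f * t + 7 * math.pi / 8)
    rhs = a * math.cos(2 * math.pi * f * t) + b * math.sin(2 * math.pi * f * t)
    assert abs(lhs - rhs) < 1e-12
```

Using `atan2(a, b)` rather than `atan(a / b)` places φ in the correct quadrant even when b is negative, as it is here.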

corresponds to a background DC signal, and in general this term represents the average, or oﬀset, of our signal.

In general, we will only consider the case that t is continuous and the frequencies are discrete. Anything more would be the subject of texts on signal and image analysis, though this may be discussed further in a later section of the book. This is enough to get started.

4.6.1 Trigonometric Series

As we have seen in the last section, we seek representations of functions in terms of sines and cosines. Given a function f(x), we seek a representation in the form

f(x) ∼ a0/2 + Σ_{n=1}^∞ [an cos nx + bn sin nx].  (4.21)

Notice that we have opted to drop reference to the frequency form of the phase. We have also chosen to use x as an independent variable, instead of t. This will lead to a simpler discussion for now, and one can always make the transformation nx = 2πfn t when applying these ideas to applications. This is consistent with our solutions of the heat and wave equations earlier in the chapter.

The series representation in Equation (4.21) is called a Fourier series. The set of constants a0, an, bn, n = 1, 2, . . . , are called the Fourier coeﬃcients. The constant term is chosen in this form to make later computations simpler, though some other authors choose to write the constant term as a0. Our goal is to ﬁnd the Fourier series representation given f(x).

Also, note that we have used '∼' to indicate that equality may not hold for all x; at times we may formally use an equal sign, ignoring the convergence properties of the inﬁnite series. Later we will be interested in knowing what functions have such a representation, when the Fourier series converges, and to what function it converges.

From our discussion in the last section, we see that the inﬁnite series is periodic. The largest period of the trigonometric terms comes from the
(An aside on the other cases: we are interested in the frequency content of a given signal, or function. When both t and f are continuous, we are led to Fourier transforms; this is the case for analog signals. When these variables are both discrete, we are led to discrete transforms. Our discrete sampling also means that we do not gather data for a continuous range of times, or even for all times.)

In Figure 4. 1. . . Therefore. . .5 we show a function deﬁned on [0. the Fourier series has period 2π. Theorem The Fourier series representation of f (x) deﬁned on [0. f (x) sin nx dx.5 f(x) 1 0.22) . (4.5 0 −5 0 5 10 15 x Figure 4. Note that we could just as easily consider a function deﬁned on [−π. n = 1.5 f(x) 1 0. Thus. In the same ﬁgure. . .6. 2π] will lead to a representation of the original function. INTRODUCTION TO FOURIER SERIES f(x) on [0.5: Plot of the function f (t) deﬁned on [0. we should see its periodic extension. we will consider Fourier series representations of functions deﬁned on this interval. is given by (4. 2π] and its periodic extension.21) with Fourier coeﬃcients an = bn = 1 π 1 π 2π 0 2π 0 f (x) cos nx dx. The extension can now be represented by a Fourier series and restricting the Fourier series to [0. The periods of cos x and sin x are T = 2π. .5 0 −5 0 5 10 15 x Periodic Extension of f(x) 2 1.4. n = 1 terms. n = 0. 2. we could also consider functions that are deﬁned over one period. It is built from copies of the original function shifted by the period and glued together. 2π]. π] or any interval of length 2π. 2. 2π] when it exists. .2 π] 2 159 1. This means that the series should be able to represent functions that are periodic of period 2π. While this appears restrictive.

These expressions for the Fourier coeﬃcients are obtained by considering special integrations of the Fourier series. We will look at the derivations of the an's.

First we obtain a0. We begin by integrating the Fourier series in Equation (4.21):

∫_0^{2π} f(x) dx = ∫_0^{2π} (a0/2) dx + ∫_0^{2π} Σ_{n=1}^∞ [an cos nx + bn sin nx] dx.  (4.23)

We will assume that we can integrate the inﬁnite sum term by term. Then we will need to compute

∫_0^{2π} (a0/2) dx = (a0/2)(2π) = πa0,
∫_0^{2π} cos nx dx = [sin nx / n]_0^{2π} = 0,
∫_0^{2π} sin nx dx = [−cos nx / n]_0^{2π} = 0.  (4.24)

From these results we see that only one term in the integrated sum does not vanish, leaving

∫_0^{2π} f(x) dx = πa0.

This conﬁrms the value for a0.

Next, we need to ﬁnd an. We will multiply the Fourier series (4.21) by cos mx for some positive integer m. This is like multiplying by cos 2x, cos 5x, etc. We are multiplying by all possible cos mx functions at the same time. We will see that this will allow us to solve for the an's. We ﬁnd the integrated sum of the series times cos mx is given by

∫_0^{2π} f(x) cos mx dx = ∫_0^{2π} (a0/2) cos mx dx + ∫_0^{2π} Σ_{n=1}^∞ [an cos nx + bn sin nx] cos mx dx.  (4.25)

Integrating term by term, the right side becomes

(a0/2) ∫_0^{2π} cos mx dx + Σ_{n=1}^∞ [ an ∫_0^{2π} cos nx cos mx dx + bn ∫_0^{2π} sin nx cos mx dx ].  (4.26)

We have already established that ∫_0^{2π} cos mx dx = 0. Next we need to compute integrals of products of sines and cosines. This requires that we make use of some trigonometric identities. While you have seen such integrals before in your calculus class, we will review how to carry out such integrals.

We ﬁrst want to evaluate ∫_0^{2π} cos nx cos mx dx. We do this by using the product identity for cosines:

cos A cos B = (1/2)[cos(A + B) + cos(A − B)].

Setting A = mx and B = nx in the identity, we ﬁnd

∫_0^{2π} cos nx cos mx dx = (1/2) ∫_0^{2π} [cos(m + n)x + cos(m − n)x] dx
= (1/2) [ sin(m + n)x/(m + n) + sin(m − n)x/(m − n) ]_0^{2π} = 0.  (4.27)

There is a caveat when doing such integrals. What if one of the denominators m ± n vanishes? For our problem m + n ≠ 0, since both m and n are positive integers. However, it is possible for m = n. This means that the vanishing of our integral is only guaranteed when m ≠ n.

So, what can we do about the m = n case? One way is to start from scratch with our integration. (Another way is to carefully compute the limit as n approaches m in our result and use L'Hopital's Rule. Try it!)

So, for n = m we have to compute ∫_0^{2π} cos² mx dx. This can be handled using another trigonometric identity:

cos² θ = (1/2)(1 + cos 2θ).

Inserting this into our integral, we ﬁnd

∫_0^{2π} cos² mx dx = (1/2) ∫_0^{2π} (1 + cos 2mx) dx
= (1/2) [x + (1/(2m)) sin 2mx]_0^{2π}
= (1/2)(2π) = π.  (4.28)

To summarize, we have shown that

∫_0^{2π} cos nx cos mx dx = { 0, m ≠ n; π, m = n }.  (4.29)

This holds true for m, n = 1, 2, . . . (for m = n = 0 the integral equals 2π). From the above computations, we have shown that the set of functions {cos nx}_{n=0}^∞ is orthogonal on [0, 2π]. Actually, they are orthogonal on any interval of length 2π. When we have such a set of functions, they are said to be an orthogonal set over the integration interval.

Deﬁnition. A set of (real) functions {φn(x)} is said to be orthogonal on [a, b] if ∫_a^b φn(x) φm(x) dx = 0 when n ≠ m. Furthermore, if we also have that ∫_a^b φn²(x) dx = 1, these functions are called orthonormal.

In this case, we can make the functions orthonormal by dividing each function by √π. The notion of orthogonality is actually a generalization of the orthogonality of vectors in ﬁnite dimensional vector spaces. The integral ∫_a^b f(x) g(x) dx is the generalization of the dot product, and is called the scalar product (or inner product) of f(x) and g(x), which are thought of as vectors in an inﬁnite dimensional vector space spanned by a set of orthogonal functions. This was discussed in the last section of the previous chapter; the reader may want to go back and read that section at this time.

Returning to the evaluation of the integrals in equation (4.26), we still have to evaluate ∫_0^{2π} sin nx cos mx dx. This can also be evaluated using trigonometric identities. In this case, we need the identity involving products of sines and cosines:

sin A cos B = (1/2)[sin(A + B) + sin(A − B)].

Setting A = nx and B = mx, we ﬁnd that

∫_0^{2π} sin nx cos mx dx = (1/2) ∫_0^{2π} [sin(n + m)x + sin(n − m)x] dx
= (1/2) [ −cos(n + m)x/(n + m) − cos(n − m)x/(n − m) ]_0^{2π}
= (−1 + 1) + (−1 + 1) = 0.  (4.30)
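The orthogonality relations just derived can be confirmed numerically. A sketch (Python, standard library only; `integrate` is a helper name introduced here — the trapezoid rule is exact, up to roundoff, for trigonometric polynomials when the number of nodes exceeds the degree):

```python
import math

def integrate(g, m=2000):
    """Trapezoid-rule approximation of ∫_0^{2π} g(x) dx."""
    h = 2 * math.pi / m
    s = 0.5 * (g(0.0) + g(2 * math.pi))
    for k in range(1, m):
        s += g(k * h)
    return s * h

results = {}
for n in range(1, 5):
    for mm in range(1, 5):
        cc = integrate(lambda x: math.cos(n * x) * math.cos(mm * x))
        sc = integrate(lambda x: math.sin(n * x) * math.cos(mm * x))
        results[(n, mm)] = (cc, sc)   # expect (π δ_{n,mm}, 0)

print(results[(2, 2)][0])   # ≈ π
print(results[(2, 3)][0])   # ≈ 0
```

Every cos–cos integral with n ≠ m and every sin–cos integral vanishes, while the cos–cos integrals with n = m return π, matching (4.28)–(4.30).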

For these integrals we also should be careful about setting n = m. In this special case, we have

∫_0^{2π} sin mx cos mx dx = (1/2) ∫_0^{2π} sin 2mx dx = (1/2) [ −cos 2mx/(2m) ]_0^{2π} = 0.

Finally, we can ﬁnish our evaluation of (4.26). We have determined that all but one integral vanishes, leaving

∫_0^{2π} f(x) cos mx dx = am π.

Solving for am gives

am = (1/π) ∫_0^{2π} f(x) cos mx dx.

Since this is true for all m = 1, 2, . . . , we have proven this part of the theorem. The only part left is ﬁnding the bn's. For these, the procedure is similar and the derivation will be left as an exercise for the reader. We will explore Fourier series over other intervals in the next section.

We now consider some examples of ﬁnding Fourier coeﬃcients. In all cases we deﬁne f(x) on [0, 2π].

Example 1. f(x) = 3 cos 2x.

We ﬁrst compute the integrals for the Fourier coeﬃcients:

a0 = (1/π) ∫_0^{2π} 3 cos 2x dx = 0,
an = (1/π) ∫_0^{2π} 3 cos 2x cos nx dx = 0,  n ≠ 2,
a2 = (1/π) ∫_0^{2π} 3 cos² 2x dx = 3,
bn = (1/π) ∫_0^{2π} 3 cos 2x sin nx dx = 0,  ∀n.

Therefore, the only nonvanishing coeﬃcient is a2 = 3. So there is one term and f(x) = 3 cos 2x.

Well, we should have known this before doing all of these integrals. If we have a function expressed simply in terms of sums of simple sines and cosines, then it should be easy to write down the Fourier coeﬃcients without much work.

Example 2. f(x) = sin² x.

We could integrate as in the last example, but it is easier to use trigonometric identities. We know that

sin² x = (1/2)(1 − cos 2x) = 1/2 − (1/2) cos 2x.

There is a constant term, so a0 = 1. There is a cos 2x term, corresponding to n = 2, so a2 = −1/2. That leaves an = 0 for n ≠ 0, 2. There are no sine terms, so bn = 0 for all n.

Example 3. f(x) = 1 for 0 < x < π and f(x) = −1 for π < x < 2π.

This example will take a little more work. We cannot bypass evaluating any integrals this time. This function is discontinuous, so we will have to compute each integral by breaking up the integration into two integrals, one over [0, π] and the other over [π, 2π]:

a0 = (1/π) ∫_0^{2π} f(x) dx = (1/π) ∫_0^π dx + (1/π) ∫_π^{2π} (−1) dx
= (1/π)(π) + (1/π)(−2π + π) = 0.  (4.31)

an = (1/π) ∫_0^{2π} f(x) cos nx dx = (1/π) [ ∫_0^π cos nx dx − ∫_π^{2π} cos nx dx ]
= (1/π) [ (1/n) sin nx |_0^π − (1/n) sin nx |_π^{2π} ] = 0.  (4.32)

bn = (1/π) ∫_0^{2π} f(x) sin nx dx = (1/π) [ ∫_0^π sin nx dx − ∫_π^{2π} sin nx dx ]

= (1/π) [ −(1/n) cos nx |_0^π + (1/n) cos nx |_π^{2π} ]
= (1/π) [ −(1/n) cos nπ + 1/n + 1/n − (1/n) cos nπ ]
= (2/(nπ)) (1 − cos nπ).  (4.33)

We have found the Fourier coeﬃcients for this function. Before inserting them into the Fourier series (4.21), we note that cos nπ = (−1)ⁿ. Therefore,

1 − cos nπ = { 2, n odd; 0, n even }.  (4.34)

So, half of the bn's are zero. While we could write the Fourier series representation as

f(x) ∼ (4/π) Σ_{n=1, n odd}^∞ (1/n) sin nx,

we could instead let n = 2k − 1 and write

f(x) ∼ (4/π) Σ_{k=1}^∞ sin(2k − 1)x / (2k − 1).

But does this series converge? Does it converge to f(x)? We will answer this question by looking at several examples in the next section.

4.6.2 Fourier Series Over Other Intervals

In many applications we are interested in determining Fourier series representations of functions deﬁned on intervals other than [0, 2π]. In this section we will determine the form of the series expansion and the Fourier coeﬃcients in these cases.

The most general type of interval is given as [a, b]. However, this often is too general. More common intervals are of the form [−π, π], [0, L], or [−L/2, L/2]. The simplest generalization is to the interval [0, L]. Such
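The convergence question can be explored by evaluating partial sums of the series just derived. A sketch (Python, standard library only; `square_partial` is a name introduced here for illustration):

```python
import math

def square_partial(x, K):
    """Partial sum (4/π) Σ_{k=1}^{K} sin((2k-1)x)/(2k-1) of the series above."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * x) / (2 * k - 1) for k in range(1, K + 1)
    )

print(square_partial(math.pi / 2, 200))       # ≈ f(π/2) = 1
print(square_partial(3 * math.pi / 2, 200))   # ≈ f(3π/2) = -1
print(square_partial(math.pi, 200))           # = 0, the midpoint of the jump
```

Away from the discontinuities the partial sums approach ±1, while at the jump x = π the series returns the midpoint value 0 — a preview of the convergence behavior discussed in the next section.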

L].21): ∞ a0 + [an cos nx + bn sin nx] . A transformation relating these intervals is simply x = 2πt . Thus. THE HARMONICS OF VIBRATING STRINGS intervals arise often in applications. we still need to determine the Fourier coeﬃcients.38) . are also useful. Given an interval [0. dx = 2π dt. For example. we could apply a transformation to an interval of length 2π by simply rescaling our interval. Another problem would be to study the temperature distribution along a one dimensional rod of length L. Recall the form of the Fourier representation for f (x) in Equation (4. L]. (4. Then we could apply this transformation to our Fourier series representation to obtain an equivalent one useful for functions deﬁned on [0. We also will L need to transform the diﬀerential. 2 L L n=1 (4. a]. L]. L]. L]. But. that g(t) sin 2nπt dt. 1 2π f (x) cos nx dx. [−a. we ﬁnd that an = bn = 2 L L 0 Recall. 2 n=1 f (x) ∼ (4. π 0 We need to make a substitution in the integral of x = 2πt .36) This gives the form of the series expansion for g(t) with t ∈ [0. Given f (x).35) Inserting the transformation relating x and t. L (4. this transformation would result in a new L function g(t) = f (x(t)). Such problems lead to the original studies of Fourier series. one can study vibrations of a one dimensional string of length L and set up the axes with the left end at x = 0 and the right end at x = L. we have g(t) ∼ ∞ a0 2nπt 2nπt + an cos + bn sin . As we will see later. We would like to determine the Fourier series representation of this function. symmetric intervals. We begin with the representation for f (x). the resulting form for L our coeﬃcient is 2nπt 2 L g(t) cos dt.37) an = L 0 L Similarly. Deﬁne x ∈ [0.166 CHAPTER 4. 2π] and t ∈ [0. This function is deﬁned on [0.

We note ﬁrst that when L = 2π we get back the series representation that we ﬁrst studied. Also, the period of cos(2nπt/L) is L/n, and the largest of these, L, comes from the n = 1 terms, which means that the representation for g(t) has a period of L.

In Table 4.2 we summarize some commonly used Fourier series representations. We will end our discussion for now with some special cases and an example for a function deﬁned on [−π, π]. At the end of this section we present the derivation of the Fourier series representation for a general interval for the interested reader.

Example. Let f(x) = |x| on [−π, π].

We compute the coeﬃcients, beginning as usual with a0. We have

a0 = (1/π) ∫_{−π}^{π} |x| dx = (2/π) ∫_0^{π} x dx = π.  (4.45)

At this point we need to remind the reader about the integration of even and odd functions over symmetric intervals.

1. Even Functions: Recall that f(x) is an even function if f(−x) = f(x) for all x. One can recognize even functions as they are symmetric with respect to the y-axis. If one integrates an even function over a symmetric interval, then one has that

∫_{−a}^{a} f(x) dx = 2 ∫_0^{a} f(x) dx.  (4.46)

One can prove this by splitting oﬀ the integration over negative values of x, using the substitution x = −y, and employing the evenness of f(x):

∫_{−a}^{a} f(x) dx = ∫_{−a}^{0} f(x) dx + ∫_0^{a} f(x) dx
= −∫_{a}^{0} f(−y) dy + ∫_0^{a} f(x) dx
= ∫_0^{a} f(y) dy + ∫_0^{a} f(x) dx
= 2 ∫_0^{a} f(x) dx.  (4.47)

In the evaluation of a0 above, we made use of the fact that the integrand is an even function.

.41) an = bn = 2 L 2 L L 2 −L 2 L 2 −L 2 f (x) cos f (x) sin 2nπx dx. 2 L L n=1 (4. . . . 2. (4.39) an = bn = 2 L 2 L 2nπx dx.168 CHAPTER 4. 2 L L n=1 (4.43) an = bn = 1 π 1 π f (x) cos nx dx. . . L 0 L 2nπx f (x) sin dx. . . 2. n = 0.40) Fourier Series on [− L . n = 1.2: Special Fourier Series Representations on Diﬀerent Intervals Fourier Series on [0. 2. (4. n = 1. 2. . L] f (x) ∼ ∞ a0 2nπx 2nπx + an cos + bn sin . . . . π] f (x) ∼ ∞ a0 + [an cos nx + bn sin nx] . . THE HARMONICS OF VIBRATING STRINGS Table 4. n = 1. L ] 2 2 f (x) ∼ ∞ a0 2nπx 2nπx + + bn sin an cos . . L n = 0. n = 0. . . . . 1. . 1.44) . . 2 n=1 π −π π −π (4. . . 1. . f (x) sin nx dx. . L 2nπx dx. 2. L 0 f (x) cos L (4.42) Fourier Series on [−π. 2.

We now continue with our computation of the Fourier coeﬃcients for f(x) = |x| on [−π, π]. We have

an = (1/π) ∫_{−π}^{π} |x| cos nx dx = (2/π) ∫_0^{π} x cos nx dx.  (4.48)

Here we have made use of the fact that |x| cos nx is an even function. In order to compute the resulting integral, we need to use integration by parts,

∫_a^b u dv = uv |_a^b − ∫_a^b v du,

by letting u = x and dv = cos nx dx. Thus, du = dx and v = ∫ dv = (1/n) sin nx. Continuing with the computation, we have

an = (2/π) ∫_0^{π} x cos nx dx
= (2/π) [ (1/n) x sin nx |_0^{π} − (1/n) ∫_0^{π} sin nx dx ]
= −(2/(nπ)) [ −(1/n) cos nx ]_0^{π}
= −(2/(πn²)) (1 − (−1)ⁿ).  (4.49)

Here we have used the fact that cos nπ = (−1)ⁿ for any integer n. This leads to a factor (1 − (−1)ⁿ). This factor can be simpliﬁed as

1 − (−1)ⁿ = { 2, n odd; 0, n even }.  (4.50)

So, an = 0 for n even and an = −4/(πn²) for n odd.

2. Odd Functions: A similar computation could be done for odd functions. f(x) is an odd function if f(−x) = −f(x) for all x. The graphs of such functions are symmetric with respect to the origin. If one integrates an odd function over a symmetric interval, then one has that

∫_{−a}^{a} f(x) dx = 0.  (4.51)

Computing the bn's is simpler. We note that we have to integrate |x| sin nx from x = −π to π. The integrand is an odd function and this is a symmetric interval. So, the result is that bn = 0 for all n.

Putting this all together, the Fourier series representation of f(x) = |x| on [−π, π] is given as

f(x) ∼ π/2 − (4/π) Σ_{n=1, n odd}^∞ cos nx / n².  (4.52)

While this is correct, we can rewrite the sum over only odd n by reindexing. We let n = 2k − 1 for k = 1, 2, 3, . . . ; then we only get the odd integers. The series can then be written as

f(x) ∼ π/2 − (4/π) Σ_{k=1}^∞ cos(2k − 1)x / (2k − 1)².  (4.53)

Throughout our discussion we have referred to such results as Fourier representations. We have not yet looked at the convergence of these series. Here is an example of an inﬁnite series of functions. What does this series sum to? We show in Figure 4.6 the ﬁrst few partial sums. They appear to be converging to f(x) = |x| fairly quickly.

Figure 4.6: Plot of the ﬁrst partial sums of the Fourier series representation for f(x) = |x|.

Even though f(x) was deﬁned on [−π, π], we can still evaluate the Fourier series at values of x outside this interval. In Figure 4.7, we see that the
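The rapid convergence claimed for (4.53) can be checked directly, since the coefficients decay like 1/n². A sketch (Python, standard library only; `abs_series` is a name introduced here for illustration):

```python
import math

def abs_series(x, K):
    """Partial sum π/2 - (4/π) Σ_{k=1}^{K} cos((2k-1)x)/(2k-1)² from (4.53)."""
    s = sum(math.cos((2 * k - 1) * x) / (2 * k - 1) ** 2 for k in range(1, K + 1))
    return math.pi / 2 - (4 / math.pi) * s

for x in (0.0, 1.0, -2.0, math.pi / 2):
    print(x, abs_series(x, 2000))   # ≈ |x| on [-π, π]
```

The tail of the series is bounded by (4/π) Σ_{k>K} 1/(2k−1)², so with K = 2000 terms the partial sum matches |x| to about four decimal places everywhere on the interval.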

representation agrees with f(x) on the interval [−π, π]. Outside this interval, we have a periodic extension of f(x) with period 2π.

Figure 4.7: Plot of the ﬁrst 10 terms of the Fourier series representation for f(x) = |x| on the interval [−2π, 4π].

Another example is the Fourier series representation of f(x) = x on [−π, π]. This is determined to be

f(x) ∼ 2 Σ_{n=1}^∞ ((−1)^{n+1}/n) sin nx.  (4.54)

As seen in Figure 4.8, we again obtain the periodic extension of our function. In this case we needed many more terms. Also, the vertical parts of the ﬁrst plot are nonexistent; in the second plot we only plot the points and not the typical connected points that most software packages plot as the default style.

Fourier Series on [a, b] (Optional)

A Fourier series representation is also possible for a general interval, t ∈ [a, b]. As before, we just need to transform this interval to [0, 2π]. Let x = 2π(t − a)/(b − a).

Figure 4.8: Plot of the ﬁrst 10 terms and 200 terms of the Fourier series representation for f(x) = x on the interval [−2π, 4π].

Inserting this into the Fourier series (4.21) representation for f(x), we obtain

g(t) ∼ a0/2 + Σ_{n=1}^∞ [ an cos(2nπ(t − a)/(b − a)) + bn sin(2nπ(t − a)/(b − a)) ].  (4.55)

Well, this expansion is ugly. It is not like the last example, where the transformation was straightforward. If one were to apply the theory to applications, it might seem to make sense to just shift the data so that a = 0 and be done with any complicated expressions. However, mathematics students enjoy the challenge of developing such generalized expressions. So, let's see what is involved.

First, we apply the addition identities for trigonometric functions and rearrange the terms:

g(t) ∼ a0/2 + Σ_{n=1}^∞ [ an cos(2nπ(t − a)/(b − a)) + bn sin(2nπ(t − a)/(b − a)) ]
= a0/2 + Σ_{n=1}^∞ { an [ cos(2nπt/(b − a)) cos(2nπa/(b − a)) + sin(2nπt/(b − a)) sin(2nπa/(b − a)) ]
+ bn [ sin(2nπt/(b − a)) cos(2nπa/(b − a)) − cos(2nπt/(b − a)) sin(2nπa/(b − a)) ] }

= a0/2 + Σ_{n=1}^∞ { cos(2nπt/(b − a)) [ an cos(2nπa/(b − a)) − bn sin(2nπa/(b − a)) ]
+ sin(2nπt/(b − a)) [ an sin(2nπa/(b − a)) + bn cos(2nπa/(b − a)) ] }.  (4.56)

Deﬁning A0 = a0 and

An ≡ an cos(2nπa/(b − a)) − bn sin(2nπa/(b − a)),
Bn ≡ an sin(2nπa/(b − a)) + bn cos(2nπa/(b − a)),  (4.57)

we arrive at the more desirable form for the Fourier series representation of a function deﬁned on the interval [a, b]:

g(t) ∼ A0/2 + Σ_{n=1}^∞ [ An cos(2nπt/(b − a)) + Bn sin(2nπt/(b − a)) ].  (4.58)

We next need to ﬁnd expressions for the Fourier coeﬃcients. We insert the known expressions for an and bn and rearrange. First, we note that under the transformation x = 2π(t − a)/(b − a) we have

an = (1/π) ∫_0^{2π} f(x) cos nx dx = (2/(b − a)) ∫_a^b g(t) cos(2nπ(t − a)/(b − a)) dt,  (4.59)

bn = (1/π) ∫_0^{2π} f(x) sin nx dx = (2/(b − a)) ∫_a^b g(t) sin(2nπ(t − a)/(b − a)) dt.  (4.60)

Then, inserting these integrals into An, combining integrals, and making use of the addition formula for the cosine of the sum of two angles, we obtain

An ≡ an cos(2nπa/(b − a)) − bn sin(2nπa/(b − a))
= (2/(b − a)) ∫_a^b g(t) [ cos(2nπ(t − a)/(b − a)) cos(2nπa/(b − a)) − sin(2nπ(t − a)/(b − a)) sin(2nπa/(b − a)) ] dt
= (2/(b − a)) ∫_a^b g(t) cos(2nπt/(b − a)) dt.  (4.61)

A similar computation gives

Bn = (2/(b − a)) ∫_a^b g(t) sin(2nπt/(b − a)) dt.  (4.62)

Summarizing, we have shown that:

Theorem. The Fourier series representation of f(x) deﬁned on [a, b], when it exists, is given by

f(x) ∼ a0/2 + Σ_{n=1}^∞ [ an cos(2nπx/(b − a)) + bn sin(2nπx/(b − a)) ]  (4.63)

with Fourier coeﬃcients

an = (2/(b − a)) ∫_a^b f(x) cos(2nπx/(b − a)) dx,  n = 0, 1, 2, . . . ,
bn = (2/(b − a)) ∫_a^b f(x) sin(2nπx/(b − a)) dx,  n = 1, 2, . . . .  (4.64)

4.6.3 Sine and Cosine Series

In the last two examples we have seen Fourier series representations that contain only sine or cosine terms. As we know, the sine functions are odd functions and thus sum to odd functions. Similarly, cosine functions sum to even functions. Fourier representations involving just sines are called sine series, and those involving just cosines (and the constant term) are called cosine series.

Another interesting result, based upon these examples, is that the original functions, |x| and x, agree on the interval [0, π]. Note from Figures 4.6-4.8 that their Fourier series representations do as well. Thus, more than one series can be used to represent functions deﬁned on ﬁnite intervals. Such occurrences happen often in practice.

We have made the following observations from the previous examples:

1. There are several trigonometric series representations for a function deﬁned on a ﬁnite interval. All they need to do is to agree with the function over that particular interval. Sometimes one of these series is more useful because it has additional properties needed in the given application.

2. Odd functions on a symmetric interval are represented by sine series, and even functions on a symmetric interval are represented by cosine series.

These two observations are related and are the subject of this section.

We begin by deﬁning a function f(x) on the interval [0, L]. We have seen that the Fourier series representation of this function appears to converge to a periodic extension of the function. In Figure 4.9 we show a function deﬁned on [0, 1], graphed in the upper left corner. To its right is the periodic extension, obtained by adding replicas. This representation has a period of L = 1. The bottom left plot is obtained by ﬁrst reﬂecting f about the y-axis to make it an even function and then graphing the periodic extension of this new function. Its period will be 2L = 2. Finally, in the last plot we ﬂip the function about each axis and graph the periodic extension of the new odd function. It will also have a period of 2L = 2.

Figure 4.9: A sketch of a function and its various extensions. The original function f(x) is deﬁned on [0, 1]. To the right is its periodic extension to the whole real axis. The two lower plots are obtained by ﬁrst making the original function even or odd and then creating the periodic extensions of the new function.

In general, starting with f(x) deﬁned on [0, L], we obtain three diﬀerent periodic representations. In order to distinguish these, we will refer to them simply as the periodic, even and odd extensions. Now, we would like to

determine the Fourier series representations leading to these extensions. (For easy reference, the results are summarized in Table 4.3.)

We have already seen that the periodic extension of f(x) is obtained through the Fourier series representation in Equation (4.63) with a = 0 and b = L.

Given f(x) deﬁned on [0, L], the even periodic extension is obtained by simply computing the Fourier series representation for the even function

fe(x) ≡ { f(x), 0 < x < L; f(−x), −L < x < 0 }.  (4.65)

Since fe(x) is an even function on the symmetric interval [−L, L], we expect that the resulting Fourier series will not contain sine terms. Therefore, the series expansion will be given by [use the general case in (4.63) with a = −L and b = L]:

fe(x) ∼ a0/2 + Σ_{n=1}^∞ an cos(nπx/L)  (4.66)

with Fourier coeﬃcients

an = (1/L) ∫_{−L}^{L} fe(x) cos(nπx/L) dx,  n = 0, 1, 2, . . . .  (4.67)

However, we can simplify this by noting that the integrand is even, so the interval of integration can be replaced by [0, L]. On this interval fe(x) = f(x). So, we have the Cosine Series Representation of f(x) for x ∈ [0, L]:

f(x) ∼ a0/2 + Σ_{n=1}^∞ an cos(nπx/L),  (4.68)

where

an = (2/L) ∫_0^{L} f(x) cos(nπx/L) dx,  n = 0, 1, 2, . . . .  (4.69)

Similarly, given f(x) deﬁned on [0, L], the odd periodic extension is obtained by simply computing the Fourier series representation for the odd function

fo(x) ≡ { f(x), 0 < x < L; −f(−x), −L < x < 0 }.  (4.70)

The resulting series expansion leads to deﬁning the Sine Series Representation of f(x) for x ∈ [0,

L] as

f(x) ∼ Σ_{n=1}^∞ bn sin(nπx/L),  (4.71)

where

bn = (2/L) ∫_0^{L} f(x) sin(nπx/L) dx,  n = 1, 2, . . . .  (4.72)

Example. In Figure 4.9 we actually provided plots of the various extensions of the function f(x) = x² for x ∈ [0, 1]. Let's determine the representations of the periodic, even and odd extensions of this function.

For a change, we will use a CAS (Computer Algebra System) package to do the integrals. In this case we can use Maple. A general code for doing this for the periodic extension is as follows:

> restart:
> L:=1:
> f:=x^2:
> assume(n,integer):
> a0:=2/L*int(f,x=0..L);
                              a0 := 2/3
> an:=2/L*int(f*cos(2*n*Pi*x/L),x=0..L);
                                     1
                          an := --------
                                  2   2
                                n~  Pi
> bn:=2/L*int(f*sin(2*n*Pi*x/L),x=0..L);
                                    1
                          bn := - -----
                                  n~ Pi
> F:=a0/2+sum((1/(k*Pi)^2)*cos(2*k*Pi*x/L)
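For readers without Maple, the same coefficient values can be confirmed numerically. A sketch (Python, standard library only; `coeff` is a name introduced here, and the trapezoid rule with a fine grid is just one of several reasonable quadratures):

```python
import math

def coeff(kind, n, m=10000):
    """(2/L) ∫_0^L x² trig(2πnx/L) dx for L = 1, by the trapezoid rule."""
    trig = math.cos if kind == "cos" else math.sin
    h = 1.0 / m
    g = lambda x: x * x * trig(2 * math.pi * n * x)
    s = 0.5 * (g(0.0) + g(1.0))
    for k in range(1, m):
        s += g(k * h)
    return 2 * s * h

print(coeff("cos", 0))   # ≈ 2/3, matching a0
print(coeff("cos", 1))   # ≈ 1/π², matching an with n = 1
print(coeff("sin", 1))   # ≈ -1/π, matching bn with n = 1
```

The numbers agree with the Maple output a0 = 2/3, an = 1/(n²π²), and bn = −1/(nπ).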

      -1/(k*Pi)*sin(2*k*Pi*x/L),k=1..50):
> plot(F,x=-1..3,title=`Periodic Extension`,
      titlefont=[TIMES,ROMAN,14],font=[TIMES,ROMAN,14]);

(a) Periodic Extension - Trigonometric Fourier Series.

Using the above code, we have that a0 = 2/3, an = 1/(n²π²) and bn = −1/(nπ). Thus, the resulting series is given as

f(x) ∼ 1/3 + Σ_{n=1}^∞ [ (1/(n²π²)) cos 2nπx − (1/(nπ)) sin 2nπx ].

In Figure 4.10 we see the sum of the ﬁrst 50 terms of this series. Generally, we see that the series seems to be converging to the periodic extension of f. There appear to be some problems with the convergence around integer values of x. We will later see that this is because of the discontinuities in the periodic extension.

Figure 4.10: The periodic extension of f(x) = x² on [0, 1].

(b) Even Periodic Extension - Cosine Series.

In this case we compute a0 = 2/3 and an = 4(−1)ⁿ/(n²π²). Thus, we have

f(x) ∼ 1/3 + (4/π²) Σ_{n=1}^∞ ((−1)ⁿ/n²) cos nπx.

In Figure 4.11 we see the sum of the ﬁrst 50 terms of this series. In this case the convergence seems to be much better than in the periodic extension case. We also see that it is converging to the even extension.

Figure 4.11: The even periodic extension of f(x) = x² on [0, 1].

(c) Odd Periodic Extension - Sine Series.

Finally, we look at the sine series for this function. We ﬁnd that

bn = −(2/(n³π³)) (n²π²(−1)ⁿ − 2(−1)ⁿ + 2).

Therefore,

f(x) ∼ −(2/π³) Σ_{n=1}^∞ (1/n³) (n²π²(−1)ⁿ − 2(−1)ⁿ + 2) sin nπx.

Once again we see discontinuities in the extension, as seen in Figure 4.12. However, we have veriﬁed that our sine series appears to be converging to the odd extension as we ﬁrst sketched in Figure 4.9.

Table 4.3: Fourier Cosine and Sine Series Representations on [0, L]

Fourier Series on [0, L]:

   f(x) ~ a0/2 + sum_{n=1}^infinity [ an cos(2n pi x / L) + bn sin(2n pi x / L) ],   (4.73)

   an = (2/L) int_0^L f(x) cos(2n pi x / L) dx,  n = 0, 1, 2, . . . ,                (4.74)
   bn = (2/L) int_0^L f(x) sin(2n pi x / L) dx,  n = 1, 2, . . . .                   (4.75)

Fourier Cosine Series on [0, L]:

   f(x) ~ a0/2 + sum_{n=1}^infinity an cos(n pi x / L),                              (4.76)

where an = (2/L) int_0^L f(x) cos(n pi x / L) dx, n = 0, 1, 2, . . . .

Fourier Sine Series on [0, L]:

   f(x) ~ sum_{n=1}^infinity bn sin(n pi x / L),                                     (4.77)

where bn = (2/L) int_0^L f(x) sin(n pi x / L) dx, n = 1, 2, . . . .                  (4.78)

Figure 4.12: The odd periodic extension of f(x) = x^2 on [0, 1].

4.7 Solution of the Heat Equation

We started out the chapter seeking the solution of an initial-boundary value problem involving the heat equation and the wave equation. In particular, we found the general solution for the problem of heat flow in a one dimensional rod of length L with fixed zero temperature ends. The problem was given by

   PDE  u_t = k u_xx,     0 < t,  0 <= x <= L,
   IC   u(x, 0) = f(x),   0 < x < L,
   BC   u(0, t) = 0,      t > 0,
        u(L, t) = 0,      t > 0.                                (4.79)

We found the solution using separation of variables. This resulted in a sum over various product solutions:

   u(x, t) = sum_{n=1}^infinity bn e^{k lambda_n t} sin(n pi x / L),

where lambda_n = -(n pi / L)^2.

This equation satisfies the boundary conditions. However, we had only gotten to state the initial condition using this solution. Namely,

   f(x) = u(x, 0) = sum_{n=1}^infinity bn sin(n pi x / L).

We were left with having to determine the constants bn. Now we can get the Fourier coefficients when we are given the initial condition, f(x). They are given by

   bn = (2/L) int_0^L f(x) sin(n pi x / L) dx.

Once we know them, we have the solution. We consider a couple of examples with different initial conditions.

Example 1: f(x) = sin x for L = pi. In this case the solution takes the form

   u(x, t) = sum_{n=1}^infinity bn e^{k lambda_n t} sin nx,

where bn = (2/pi) int_0^pi f(x) sin nx dx. However, the initial condition takes the form of the first term in the expansion, i.e., the n = 1 term. So, we need not carry out the integral because we can immediately write b1 = 1 and bn = 0 for n = 2, 3, . . . . Therefore, the solution consists of just one term:

   u(x, t) = e^{-kt} sin x.

In Figure 4.13 we see how this solution behaves for k = 1 and t in [0, 1].

Example 2: f(x) = x(1 - x) for L = 1. This example requires a bit more work. The solution takes the form

   u(x, t) = sum_{n=1}^infinity bn e^{-n^2 pi^2 k t} sin n pi x,

where

   bn = 2 int_0^1 f(x) sin n pi x dx.

Figure 4.13: The evolution of the initial condition f(x) = sin x for L = pi and k = 1.

This integral is easily computed using integration by parts:

   bn = 2 int_0^1 x(1 - x) sin n pi x dx
      = [ -(2/(n pi)) x(1 - x) cos n pi x ]_0^1 + (2/(n pi)) int_0^1 (1 - 2x) cos n pi x dx
      = (2/(n^2 pi^2)) [ (1 - 2x) sin n pi x ]_0^1 + (4/(n^2 pi^2)) int_0^1 sin n pi x dx
      = -(4/(n^3 pi^3)) [ cos n pi x ]_0^1
      = -(4/(n^3 pi^3)) (cos n pi - 1)
      = { 8/(n^3 pi^3),  n odd,
        { 0,             n even.                                 (4.80)

So, we have that the solution can be written as

   u(x, t) = (8/pi^3) sum_{l=1}^infinity (1/(2l - 1)^3) e^{-(2l-1)^2 pi^2 k t} sin (2l - 1) pi x.
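The integration by parts above can be cross-checked numerically (an editorial addition, not part of the text): quadrature of the defining integral should reproduce bn = 8/(n^3 pi^3) for odd n and 0 for even n.

```python
import math

def trapz(g, a, b, n=20000):
    # Composite trapezoidal rule.
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

# b_n = 2 * int_0^1 x(1-x) sin(n pi x) dx for n = 1..4
bn = [2 * trapz(lambda x: x * (1 - x) * math.sin(n * math.pi * x), 0.0, 1.0)
      for n in (1, 2, 3, 4)]

max_err = max(abs(bn[0] - 8 / math.pi ** 3),       # n = 1: 8/pi^3
              abs(bn[1]),                          # n = 2: 0
              abs(bn[2] - 8 / (27 * math.pi ** 3)),# n = 3: 8/(27 pi^3)
              abs(bn[3]))                          # n = 4: 0
assert max_err < 1e-6
```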

Figure 4.14: The evolution of the initial condition f(x) = x(1 - x) for L = 1 and k = 1. Twenty terms were used.

In Figure 4.14 we see how this solution behaves for k = 1 and t in [0, 1]. We see that this solution diffuses much faster than in the last example. Most of the terms damp out quickly as the solution asymptotically approaches the first term.

4.8 Finite Length Strings

We now return to the physical example of wave propagation in a string. We have found that the general solution can be represented as a sum over product solutions. We will restrict our discussion to the special case that the initial velocity is zero and the original profile is given by u(x, 0) = f(x). The solution is then

   u(x, t) = sum_{n=1}^infinity An sin(n pi x / L) cos(n pi c t / L)          (4.81)

satisfying

   f(x) = sum_{n=1}^infinity An sin(n pi x / L).                              (4.82)

We have learned that the Fourier sine series coefficients are given by

   An = (2/L) int_0^L f(x) sin(n pi x / L) dx.                                (4.83)

Note that we are using An's only because of the development of the solution.

We can rewrite this solution in a more compact form. First, we define the wave numbers,

   kn = n pi / L,  n = 1, 2, . . . ,

and the angular frequencies,

   omega_n = c kn = n pi c / L.

Then the product solutions take the form sin kn x cos omega_n t. Using our trigonometric identities, these products can be written as

   sin kn x cos omega_n t = (1/2) [ sin(kn x + omega_n t) + sin(kn x - omega_n t) ].   (4.84)

Inserting this expression in our solution, we have

   u(x, t) = (1/2) sum_{n=1}^infinity An [ sin(kn x + omega_n t) + sin(kn x - omega_n t) ].

Since omega_n = c kn, we can put this into a more suggestive form:

   u(x, t) = (1/2) [ sum_{n=1}^infinity An sin kn(x + ct) + sum_{n=1}^infinity An sin kn(x - ct) ].   (4.85)

We see that each sum is simply the sine series for f(x) but evaluated at either x + ct or x - ct. Thus, the solution takes the form

   u(x, t) = (1/2) [ f(x + ct) + f(x - ct) ].                                 (4.86)

If t = 0, then we have u(x, 0) = (1/2)[f(x) + f(x)] = f(x). So, the solution satisfies the initial condition. At t = 1, the sum has a term f(x - c).
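The passage from the standing-wave sum (4.81) to the travelling-wave form (4.85)-(4.86) holds term by term, so it can be checked exactly at the level of partial sums. The snippet below (an editorial addition, not from the text) uses an arbitrary set of coefficients An:

```python
import math

L, c = 1.0, 2.0
A = [0.7, -0.3, 0.5, 0.1, -0.2]  # arbitrary sine-series coefficients A_n (illustrative values)

def u(x, t):
    # Standing-wave form of the solution, Eq. (4.81).
    return sum(a * math.sin((n + 1) * math.pi * x / L) * math.cos((n + 1) * math.pi * c * t / L)
               for n, a in enumerate(A))

def S(y):
    # The same sine series, evaluated at a shifted argument.
    return sum(a * math.sin((n + 1) * math.pi * y / L) for n, a in enumerate(A))

x, t = 0.3, 0.17
lhs = u(x, t)
rhs = 0.5 * (S(x + c * t) + S(x - c * t))   # d'Alembert-type form, Eq. (4.85)
err = abs(lhs - rhs)
assert err < 1e-12
```

Because the identity (4.84) is exact for each harmonic, the two expressions agree to machine precision.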

Recall from your mathematics classes that f(x - ct) is simply a shifted version of f(x). Namely, the function is shifted by ct to the right; the function (wave) shifts to the right with velocity c. Similarly, f(x + ct) is a wave travelling to the left with velocity -c. Thus, the waves on the string consist of waves travelling to the right and to the left. For larger values of t, this shift is further to the right. (Actually, the copies are (1/2) f(x +/- ct).)

However, the story does not stop here. We have to recall that our sine series representation for f(x) has a period of 2L. The original problem only defines f(x) on [0, L]. We have a problem when needing to shift f(x) across the boundaries. If we are not careful, we would think that the function leaves the interval, leaving nothing left inside. So, before we apply this shifting, we need to account for its periodicity. Namely, being a sine series, we really have the odd periodic extension of f(x) being shifted. This results in a wave that appears to reflect from the ends as time increases.

Finally, we can illustrate this with a few figures. We begin by plucking a string of length L. This can be represented by the function

   f(x) = { x/a,                 0 <= x <= a,
          { (L - x)/(L - a),     a <= x <= L,                       (4.87)

where the string is pulled up one unit at x = a. This is shown in Figure 4.15.

Figure 4.15: The initial profile for a string of length one plucked at x = 0.25.

Next, we create an odd function by extending the function to a period of 2L. This is shown in Figure 4.16.

Figure 4.16: The odd extension about the right end of the plucked string.

Finally, we construct the periodic extension of this to the entire line. In Figure 4.17 we show in the lower part of the figure copies of the periodic extension, one moving to the right and the other moving to the left. The top plot is the sum of these solutions. The physical string lies in the interval [0, 1]. The time evolution for this plucked string is shown for several times in Figure 4.18.

The relation between the angular frequency and the wave number, omega = ck, is called a dispersion relation. In this case omega depends on k linearly. If one knows the dispersion relation, then one can find the wave speed as c = omega/k. In this case, all of the harmonics travel at the same speed. The details of such analysis would take us too far from our current goal. In cases where
they do not.5 Figure 4. . THE HARMONICS OF VIBRATING STRINGS 1.17: Summing the odd periodic extensions. which we will discuss later. we have nonlinear dispersion. one moving to the right and the other moving to the left.188 CHAPTER 4.5 –4 –3 –2 –1 0 1 2 x 3 4 –0.5 1 0. The lower plot shows copies of the periodic extension. The upper plot is the sum.

2 0.6 0.8 0.8 1 0.8.8 –1 u(x.8 –1 u(x.4 0.8 –1 u(x.4 –0.2 0 –0.6 0.4 –0.t) 0.6 –0.8 1 0.4 –0.8 0. FINITE LENGTH STRINGS 189 1 0.8 1 (c) (d) 1 0.6 0.2 –0.8 0.8 0.6 0.2 0.6 0.8 1 (a) (b) 1 0.t) 1 0.6 –0.t) 1 0.4 0.t) 1 0.6 0.2 0.6 –0.4 0.6 0.4 –0.2 0 –0.4 x 0. .2 –0.2 0 –0.8 –1 u(x.2 –0.t) 0.2 –0.8 1 0.2 –0.2 0 –0.8 –1 u(x.4 x 0.4 –0.6 0.2 0 –0.2 –0.2 0.4 x 0.6 0.4 0.2 0 –0.18: This Figure shows the plucked string at six times in progression from (a) to (f).4 0.4 x 0.6 –0.8 0.8 0.2 0.6 –0.4 x 0.8 1 (e) (f) Figure 4.4 –0.t) 0.8 –1 u(x.4 0.2 0.6 0.6 0.6 0.6 –0.4 x 0.4.

THE HARMONICS OF VIBRATING STRINGS .190 CHAPTER 4.

Euler’s identity): 1 eiθ = cos θ + i sin θ. This is based on Euler’s formula (or. n! 2! n! 191 . 2 T T n=1 The coeﬃcients take forms like an = 2 T T 0 f (t) cos 2πnt dt. T ] by looking for the Fourier coeﬃcients in the Fourier series expansion f (t) = ∞ 2πnt 2πnt a0 + an cos + bn sin .1 Complex Representations of Waves We have seen that we can seek the frequency content of a function f (t) deﬁned on an interval [0. T However. trigonometric functions can be written in a complex exponential form. 1 Euler’s formula can be obtained using the Maclaurin expansion of ex : ∞ e = n=0 x xn 1 xn = 1 + x + x2 + · · · + + ···.Chapter 5 Complex Representations of Functions 5.

Subtracting the exponentials leads to an expression for the sine function. 2 eiθ − e−iθ . T 2 Later we will see that we can use this information to rewrite our series as a sum over complex exponentials in the form cos ∞ f (t) = n=−∞ cn e 2πint T where the Fourier coeﬃcients now take the form cn = We formally set x = iθ Then.192 CHAPTER 5. 2i (5. eiθ = n=0 (iθ)n n! = = = = (iθ)2 (iθ)3 (iθ)4 (iθ)5 + + + + ··· 2! 3! 4! 5! 2 3 4 5 (θ) (θ) (θ) (θ) 1 + iθ − −i + +i + ··· 2! 3! 4! 5! 2 4 (θ) (θ) (θ)3 (θ)5 1− + + · · · + i iθ − + + ··· 2! 4! 3! 5! 1 + iθ + cos θ + i sin θ.1) . Thus.2) 2πint 2πnt 1 2πint = (e T + e− T ). ∞ T 0 f (t)e− 2πint T . (5. we have 2 cos θ = eiθ + e−iθ . we have the important result that sines and cosines can be written as complex exponentials: cos θ = sin θ = So. COMPLEX REPRESENTATIONS OF FUNCTIONS The complex conjugate is found by replacing i with −i to obtain e−iθ = cos θ − i sin θ. Adding these expressions. we can write eiθ + e−iθ .

(5. t) = 1 4i + n=1 ∞ n=1 ∞ An eikn (x+ct) − e−ikn (x+ct) . The Fourier coeﬃcients in the representation can be complex valued functions and the evaluation of the integral may be done . in order to connect our analysis to ideal signals over an inﬁnite interval and containing a continuum of frequencies. So. (5. t) = 1 2 ∞ ∞ An sin kn (x + ct) + n=1 n=1 An sin kn (x − ct) . we are then lead to the complex representation u(x.1. t) = ∞ −∞ c(k)eik(x+ct) + d(k)eik(x−ct) dk. COMPLEX REPRESENTATIONS OF WAVES 193 In fact.4) An eikn (x−ct) − e−ikn (x−ct) Now. we can rewrite this solution in the form ∞ u(x. t) = n=−∞ cn eikn (x+ct) + dn eikn (x−ct) .5) Such representations are also possible for waves propagating over the entire real line.3) We can replace the sines with their complex forms as u(x. In such cases we are not restricted to discrete frequencies and wave numbers. (5. We can extend these ideas to develop a complex representation for waves. typically representing wave packets. which means that our sums become integrals. The integral represents a general wave form consisting of a sum over plane waves. We obtained the solution u(x. Recall from our discussion in the last chapter of waves on ﬁnite length strings. The sum of the harmonics will then be a sum over a continuous range. deﬁning k−n = −kn . (5.5. we will see the above sum become an integral and we will naturally ﬁnd ourselves needing to work with functions of complex variables and perform complex integrals.6) The forms eik(x+ct) and eik(x−ct) are complex representations of what are called plane waves in one dimension.

We would like to be able to compute such integrals. (5. of this vector is called the complex modulus of z. x is called the real part of z and y is the imaginary part of z.1 we can see that in terms of r and θ we have that x = r cos θ.2 Complex Numbers Complex numbers were ﬁrst introduced in order to solve some simple problems. Examples of such numbers are 3 + 3i.194 CHAPTER 5. z = x + iy = r(cos θ + i sin θ) = reiθ . including diﬀerentiation and complex integration. From Figure 5. In essence. which was not √ realized at ﬁrst. −1i. known as the complex plane C.8) (5.1. or length. Here we can think of the complex number z = x + iy as a point (x. The history of complex numbers only extends about two thousand years. This is given by the Argand diagram as shown in Figure 5. COMPLEX REPRESENTATIONS OF FUNCTIONS using methods from complex analysis. The magnitude. Note that 5 = 5 + 0i and 4i = 0 + 4i. a special symbol was introduced . y) in the complex plane or as a vector. This will lead us to the calculus of functions of a complex variable. Here we have used Euler’s formula. 5.the imaginary unit. The solution is x = ± −1. Thus. where x and y are real numbers. With the above ideas in mind. it was found that we need to ﬁnd the roots of √ equations such as x2 + 1 = 0. We can also use the geometric picture to develop a polar representation of complex numbers.7) . A complex number is a number of the form z = x + iy. There is a geometric representation of complex numbers in a two dimensional plane. we will now take a tour of complex analysis. Due to the usefulness of this concept. 4i and 5. i = −1. denoted by |z| = x2 + y 2 . We will ﬁrst review some facts about complex numbers and then introduce complex functions. y = r sin θ.

5.2. COMPLEX NUMBERS

195

iy x+iy r θ x

Figure 5.1: The Argand diagram for plotting complex numbers in the complex z-plane. So, given r and θ we have z = reiθ . However, given the Cartesian form, z = x + iy, we can also determine the polar form, since r= x2 + y 2 , y tan θ = . x

(5.9)

Note that r = |z|. Example Write 1 + i in polar form. If one locates 1 + i in the complex plane, then it might be possible to immediately determine the polar form from the angle and length of the “complex vector”. This is shown in Figure 5.2. If one did not see the polar form from the plot in the z-plane, then one can systematically determine the results. We want to write 1 + i in polar form: 1 + i = reiθ for some r and θ. Using the above √ y relations, we have r = x2 + y 2 = 2 and tan θ = x = 1. This gives π θ = 4 . So, we have found that 1+i= √ iπ/4 2e .

We also have the usual operations. We can add two complex numbers and obtain another complex number. This is simply done by adding the real

196 CHAPTER 5. COMPLEX REPRESENTATIONS OF FUNCTIONS

iy 2i i 1+i

1

2

x

Figure 5.2: Locating 1 + i in the complex z-plane. parts and the imaginary parts. So, (3 + 2i) + (1 − i) = 4 + i. We can also multiply two complex numbers just like we multiply any binomials, though we now can use the fact that i2 = −1. For example, we have (3 + 2i)(1 − i) = 3 + 2i − 3i + 2i(−i) = 5 − i. We can even divide one complex number into another one and get a complex number as the quotient. Before we do this, we need to introduce the complex conjugate, z , of a complex number. The complex conjugate of ¯ z = x + iy, where x and y are real numbers, is given as z = x − iy. Complex conjugates satisfy the following relations for complex numbers z and w and real number x. z+w =z+w zw = zw z=z x = x. One consequence is that the complex conjugate of z = reiθ is z = reiθ = cos θ + i sin θ = cos θ − i sin θ = re−iθ . (5.10)

5.2. COMPLEX NUMBERS Another consequence is that zz = reiθ re−iθ = r2 .

197

Thus, the product of a complex number with its complex conjugate is a real number. We can also write this result in the form zz = (x + iy)(x − iy) = x2 + y 2 = |z|2 . Therefore, we have |z|2 = zz. Now we are in a position to write the quotient of two complex numbers in the standard form of a real plus an imaginary number. As an example, we want to rewrite 3+2i . This is accomplished by multiplying the numerator 1−i and denominator of this expression by the complex conjugate of the denominator: 3 + 2i 1 + i 1 + 5i 3 + 2i = = . 1−i 1−i 1+i 2 Therefore, we have the quotient is

3+2i 1−i

=

1 2

5 + 2 i.

We can also look at powers of complex numbers. For example, (1 + i)2 = 2i, (1 + i)3 = (1 + i)(2i) = 2i − 2. √ But, what is (1 + i)1/2 = 1 + i? In general, we want to ﬁnd the nth root of a complex number. Let t = z 1/n . To ﬁnd t in this case is the same as asking for the solution of tn − z = 0 given z. But, this is the root of an nth degree equation, for which we expect n roots. We can answer our question if we write z in polar form, z = reiθ . Then, z 1/n = reiθ

1/n

= r1/n eiθ/n = r1/n cos θ θ + i sin . n n (5.11)

**198 CHAPTER 5. COMPLEX REPRESENTATIONS OF FUNCTIONS If we use this to obtain an answer to our problem, we get (1 + i)1/2 = √ iπ/4 2e
**

1/2

= 21/4 eiπ/8 .

But this is only one solution. We expected two solutions! The problem is that the polar representation for z is not unique. We note that e2kπi = 1, k = 0, ±1, ±2, . . . . So, we can rewrite z as z = reiθ e2kπi = rei(θ+2kπ) . Now, we have that z 1/n = r1/n ei(θ+2kπ)/n θ + 2kπ = r1/n cos n θ + 2kπ n

+ i sin

.

(5.12)

We note that we only get diﬀerent values for k = 0, 1, . . . , n − 1. Now, we can ﬁnish our example. (1 + i)1/2 √ = 2eiπ/4

1/2 e2kπi insert 1=e2kπi

= 21/4 ei(π/8+kπ) = 21/4 eiπ/8 , 21/4 e9πi/8 . (5.13)

√ √ Finally, what is n 1? Our ﬁrst guess would be n 1 = 1. But, we know that there should be n roots. These roots are called the nth roots of unity. Using the above result in Equation (5.12) with r = 1 and θ = 0, we have that √ 2πk 2πk n 1 = cos + i sin , k = 0, . . . , n − 1. n n For example, we have √ 2πk 2πk 3 1 = cos + i sin , 3 3 k = 0, 1, 2.

These three roots can be written out as √ √ √ 1 1 3 3 3 i, − − i. 1 = 1, − + 2 2 2 2

5.3. COMPLEX VALUED FUNCTIONS

199

iy i

-1 -i

1 x

Figure 5.3: Locating the cube roots of unity in the complex z-plane. Note, the reader can verify that these are indeed the cube roots of unity. We note that √ √ 2 3 1 3 1 i =− − i − + 2 2 2 2 and √ 1 3 i − + 2 2

3

√ 1 3 = − − i 2 2

√ 1 3 − + i 2 2

= 1.

We can locate these cube roots of unity in the complex plane. In Figure 5.3 we see that these points lie on the unit circle and are at the vertices of an equilateral triangle. In fact, all nth roots of unity lie on the unit circle and are the vertices of a regular n-gon.

5.3

Complex Valued Functions

We would like to next explore complex functions and the calculus of complex functions. We begin by deﬁning a function that takes complex numbers into complex numbers, f : C → C. It is diﬃcult to visualize such functions. One typically uses two copies of the complex plane to indicate how such functions behave. We will call the domain the z-plane and the image will lie in the w-plane. We show this is Figure 5.4.

**200 CHAPTER 5. COMPLEX REPRESENTATIONS OF FUNCTIONS
**

z-plane iy 2i i z 1 2 x f iv 2i i w 1 2 u w-plane

Figure 5.4: Deﬁning a complex valued function on C. We let z = x + iy and w = u + iv. Then we can deﬁne our function as w = f (z) = f (x + iy) = u(x, y) + iv(x, y). We see that one can view this function as a function of z or a function of x and y. Often, we have an interest in writing out the real and imaginary parts of the function, which can be viewed as functions of two variables. Example 1: f (z) = z 2 . For example, we can look at the simple function f (z) = z 2 . It is a simple matter to determine the real and imaginary parts of this function. Namely, we have z 2 = (x + iy)2 = x2 − y 2 + 2ixy. Therefore, we have that u(x, y) = x2 − y 2 , Example 2: f (z) = ez . For this case, we make use of Euler’s Formula. f (z) = ez = ex+iy = ex eiy = ex (cos y + i sin y). (5.14) v(x, y) = 2xy.

5.4. COMPLEX DIFFERENTIATION Thus, u(x, y) = ex cos y and v(x, y) = ex sin y. Example 3: f (z) = ln z. In this case we make use of the polar form, z = reiθ . Our ﬁrst thought would be to simply compute ln z = ln r + iθ.

201

However, the natural logarithm is multivalued, just like the nth root. Recalling that e2πik = 1 for k an integer, we have z = rei(θ+2πk) . Therefore, ln z = ln r + i(θ + 2πk), k = integer.

The natural logarithm is a multivalued function. In fact there are an inﬁnite number of values for a given z. Of course, this contradicts the deﬁnition of a function that you were ﬁrst taught. Thus, one typically will only report the principal value, ln z = ln r + iθ, for θ restricted to some interval of length 2π, such as [0, 2π). Sometimes the principal logarithm is denoted by Ln z. There are ways to handle multivalued functions. This involves introducing branch cuts and (at a more sophisticated level) Riemann surfaces. We will not go into these types of functions here, but refer the interested reader to other texts.

5.4

Complex Diﬀerentiation

Next we want to diﬀerentiate complex functions. We generalize our deﬁnition from single variable calculus, f (z) = lim provided this limit exists. The computation of this limit is similar to what we faced in multivariable calculus. Letting ∆z → 0 means that we get closer to z. There are many paths that one can take that will approach z. [See Figure 5.5.] It is suﬃcient to look at two paths in particular. We ﬁrst consider the path y = constant. Such a path is shown in Figure 5.6. For this path, f (z + ∆z) − f (z) , ∆z→0 ∆z (5.15)

202 CHAPTER 5. COMPLEX REPRESENTATIONS OF FUNCTIONS

iy 2i i z

1

2

x

Figure 5.5: There are many paths that approach z as ∆z → 0. ∆z = ∆x + i∆y = ∆x, since y does not change along the path. The derivative, if it exists, is then computed as f (z) = f (z + ∆z) − f (z) ∆z→0 ∆z u(x + ∆x, y) + iv(x + ∆x, y) − (u(x, y) + iv(x, y)) = lim ∆x→0 ∆x u(x + ∆x, y) − u(x, y) v(x + ∆x, y) − v(x, y) = lim + lim i . ∆x→0 ∆x→0 ∆x ∆x (5.16) lim

The last two limits are easily identiﬁed as partial derivatives of real valued functions of two variables. Thus, we have shown that when f (z) exists, f (z) = ∂u ∂v +i . ∂x ∂x (5.17)

A similar computation can be made if instead we take a path corresponding to x = constant. In this case ∆z = i∆y and f (z) = f (z + ∆z) − f (z) ∆z u(x, y + ∆y) + iv(x, y + ∆y) − (u(x, y) + iv(x, y)) = lim ∆y→0 i∆y u(x, y + ∆y) − u(x, y) v(x, y + ∆y) − v(x, y) = lim + lim . ∆y→0 ∆y→0 i∆y ∆y (5.18)

∆z→0

lim

5.4. COMPLEX DIFFERENTIATION

203

iy 2i i z

1

2

x

Figure 5.6: A path that approaches z with y = constant. Therefore, f (z) =

∂u ∂v −i . ∂y ∂y

(5.19)

We have found two diﬀerent expressions for f (z) by following two diﬀerent paths to z. If the derivative exists, then these two expressions must be the same. Equating the real and imaginary parts of these expressions, we have ∂v ∂u = ∂x ∂y ∂u ∂v =− . ∂x ∂y These are known as the Cauchy-Riemann equations and we have the following theorem: Theorem f (z) is holomorphic (diﬀerentiable) if and only if the Cauchy-Riemann equations are satisﬁed. Example 1: f (z) = z 2 . In this case we have already seen that z 2 = x2 − y 2 + 2ixy. Therefore, u(x, y) = x2 − y 2 and v(x, y) = 2xy. We ﬁrst check the Cauchy-Riemann equations. ∂u ∂v = 2x = ∂x ∂y ∂v ∂u = 2y = − . ∂x ∂y

(5.20)

(5.21)

204 CHAPTER 5. COMPLEX REPRESENTATIONS OF FUNCTIONS Therefore, f (z) = z 2 is diﬀerentiable. We can further compute the derivative using either Equation (5.17) or Equation (5.19). Thus, f (z) = ∂u ∂v +i = 2x + i(2y) = 2z. ∂x ∂x

This result is not surprising. Example 2: f (z) = z . ¯ In this case we have f (z) = x − iy. Therefore, u(x, y) = x and ∂v v(x, y) = −y. But, ∂u = 1 and ∂y = −1. Thus, the Cauchy-Riemann ∂x equations are not satisﬁed and we conclude the f (z) = z is not ¯ diﬀerentiable.

5.5

Harmonic Functions and Laplace’s Equation

Another consequence of the Cauchy-Riemann equations is that both u(x, y) and v(x, y) are harmonic functions. A real-valued function u(x, y) is harmonic if it satisﬁes Laplace’s equation in two dimensions, 2 u = 0, or ∂2u ∂2u + 2 = 0. ∂x2 ∂y Theorem f is diﬀerentiable if and only if u and v are harmonic functions. This is easily proven using the Cauchy-Riemann equations. ∂2u ∂x2 = = = = = ∂ ∂u ∂x ∂x ∂ ∂v ∂x ∂y ∂ ∂v ∂y ∂x ∂ ∂u − ∂y ∂y ∂2u − 2. ∂y

(5.22)

Is u(x. such that u + iv is diﬀerentiable. HARMONIC FUNCTIONS AND LAPLACE’S EQUATION 1. ∂y ∂x We can integrate the ﬁrst of these equations to obtain v(x. we diﬀerentiate our result with respect to y to ﬁnd that ∂v = 2x + c (y). the second equation must also hold. However. So. y) is diﬀerentiable? Such a v is called the harmonic conjugate. y) : ∂v ∂u =− = 2y. Here c(y) is an arbitrary function of y.5.5. One can check to see that this works by simply diﬀerentiating the result with respect to x. The Cauchy-Riemann equations tell us the following about the unknown function. y) = 2y dx = 2xy + c(y). it is. ∂x2 ∂y No. v(x. v(x. y). 205 Given a harmonic function u(x. y) = x2 + y 2 harmonic? ∂2u ∂2u + 2 = 2 + 2 = 0. v(x. ∂y . y) = x2 − y 2 harmonic? ∂2u ∂2u + 2 = 2 − 2 = 0. y) = x2 − y 2 is harmonic. y). Example u(x. ∂x ∂y ∂v ∂u = = 2x. can one ﬁnd a function. it is not. y). ﬁnd the harmonic conjugate. y) + iv(x. such f (z) = u(x. Is u(x. ∂x2 ∂y Yes. 2.

connected by a path Γ.6. we then have to ﬁnd a parametrization of the path and use methods from the third semester calculus class. y) + iv(x. we would like to deﬁne the integral of f (z) along Γ. . In order to carry out the integration. So. We have just shown that we get an inﬁnite number of functions. or holomorphic. 5. y) = 2xy + k. COMPLEX REPRESENTATIONS OF FUNCTIONS Since we were supposed to get 2x. for k = 0 this is nothing other than f (z) = z 2 . Given two points in the complex plane. Before carrying this out with some examples. We will see that contour integral methods are also useful in the computation of some real integrals. Thus. by writing Γ f (z) dz = Γ [u(x. such that f (z) = x2 − y 2 + i(2xy + k) is diﬀerentiable. In fact. v(x. we have that c (y) = 0. Γ f (z) dz. we will ﬁrst provide some deﬁnitions. we have f (z) = z 2 + ik. c(y) = k is a constant.6 Complex Integration In the last sections we were introduced to functions of a complex variable. A natural procedure would be to work in real variables. Now we will turn to integration in the complex plane. We will learn how to compute complex path integrals.206 CHAPTER 5. or contour integrals. y)] (dx + idy).1 Complex Path Integrals We begin by investigating the computation of complex path integrals. 5. We have also established when functions are diﬀerentiable as complex functions.

COMPLEX INTEGRATION 207 Γ z 1 z 2 Figure 5. for a point on the boundary. . Otherwise it is called disconnected.8: Examples of (a) a connected set and (b) a disconnected set. However. Deﬁnition A set D is connected if and only if for all z1 . the smaller the radii of such disks. (a) (b) Figure 5. and z2 in D there exists a piecewise smooth curve connecting z1 to z2 and lying in D. every such disk would contain points inside and outside the disk.7: We would like to integrate a complex function f (z) over the path Γ in the complex plane. an open set in the complex plane would not contain any of its boundary points. Examples are shown in Figure 5. For all points on the interior of the region one can ﬁnd at least one disk contained entirely in the region. Thus. The closer one is to the boundary.6.9 we show a region with two disks.8 Deﬁnition A set D is open if and only if for all z0 in D there exists an open disk |z − z0 | < ρ in D.5. In Figure 5.

Then. y(t))] ( dy dx + i )dt. 0 ≤ θ ≤ . y(t)) be a parametrization of Γ for t0 ≤ t ≤ t1 . dt dt Inserting these expressions into the integral leads to the above deﬁnition.208 CHAPTER 5. Let’s see how this works with a couple of examples. Example 1 C z 2 dz. y) + iv(x.9: Locations of open disks inside and on the boundary of a region. y(θ)) = (cos θ. Let (x(t). dz = dx + idy = dy dx dt + i dt. We see that we can write f (z) = u(x. sin θ).23) It is easy to see how this deﬁnition arises. Deﬁnition Let u and v be continuous in domain D. COMPLEX REPRESENTATIONS OF FUNCTIONS ρ z0 D Figure 5. 2 . C = the arc of the unit circle in the ﬁrst quadrant as shown in Figure 5. There are two ways we could do this. First. y) and z = x(t) + iy(t). Deﬁnition D is called a domain if it is both open and connected. dt dt (5.10. This deﬁnition gives us a prescription for computing path integrals. we note that the standard parametrization of the unit circle is π (x(θ). We ﬁrst specify the parametrization. Then Γ f (z) dz = t1 t0 [u(x(t). y(t)) + iv(x(t). and Γ a piecewise smooth curve in D.

So. dz = ieiθ dθ. We can multiply this out and integrate. we have z = cos θ + i sin θ and dz = (− sin θ + i cos θ)dθ.10: Contour for Example 1. We ﬁrst note that z = eiθ on C. This is simply the result of using the polar forms x = r cos θ y = r sin θ (5. While this is doable. having to perform some trigonometric integrations: π 2 0 [sin3 θ − 3 cos2 θ sin θ + i(cos3 θ − 3 cos θ sin2 θ)] dθ.24) for r = 1 and restricting θ to trace out a quarter of a circle.5. COMPLEX INTEGRATION 209 iy 2i i 1 2 x Figure 5. there is a simpler procedure. The integration then becomes z 2 dz = π 2 C 0 (eiθ )2 ieiθ dθ .6. Using this parametrization. Then. the path integral becomes z 2 dz = π 2 C 0 (cos θ + i sin θ)2 (− sin θ + i cos θ) dθ.

11. 1]. 3 Example 2 Γ z dz.210 CHAPTER 5. Over γ1 we note that y = 0. 2 . = i = π 2 0 e3iθ dθ π 2 ie3iθ 3i 0 1+i = − . Thus. So. In this problem we have a path that is a piecewise smooth curve. It is natural to take x as the parameter. 2 Γ z dz Combining these results.25) Γ is the path shown in Figure 5. The integral becomes z dz = 1 0 γ2 1 (1 + iy) idy = i − . dz = idy. we have = 1 2 + (i − 1 ) = i. COMPLEX REPRESENTATIONS OF FUNCTIONS iy 2i i γ2 γ1 1 2 x Figure 5. 2 For path γ2 we have that z = 1 + iy for y ∈ [0. Thus. We can compute the path integral by computing the values along the two segments of the path and adding up the results. (5.11: Contour for Example 2. z = x for x ∈ [0. dz = dx and we have z dz = 1 0 γ1 1 x dx = . 1]. Let the two segments be called γ1 and γ2 as shown in Figure 5.11.

it is not true that integrating over diﬀerent paths always yields the same results. In fact. γ3 is the path shown in Figure 5. x ∈ [0.12: Contour for Example 3.26) In the last case we found the same answer as in Example 2. 1]} = {z|z = x + ix2 .6.5. COMPLEX INTEGRATION 211 iy 2i i γ3 γ1 1 γ2 2 x Figure 5. We will now look into this notion of path independence. x ∈ [0. 1]}. (5. The integral becomes z dz = = 1 0 1 0 γ1 (x + ix2 )(1 + 2ix) dx (x + 2ix2 − 2x3 ) dx = i. Deﬁnition The integral f (z) dz is path independent if f (z) dz = f (z) dz Γ1 Γ2 for all paths from z1 to z2 . Example 3 z dz. Then. . But we should not take this as a general rule for all complex path integrals. dz = (1 + 2ix) dx. Let γ3 = {(x. y)|y = x2 . γ3 In this case we take a path from z = 0 to z = 1 + i along a diﬀerent path.12.

closed loops A common notation for integrating over closed loops is ﬁrst we have to deﬁne what we mean by a closed loop. (This makes the loop simple. consider an integral over a closed loop C as shown in Figure 5. − Then we make use of the path independence by deﬁning C2 to be the path along C2 but in the opposite direction. Deﬁnition A simple closed contour is a path satisfying a The end point is the same as the beginning point. We pick two points on the loop breaking it into two contours. then the integral of f (z) over all closed loops is zero.) b The are no self-intersections. f (z) dz = 0.14. (This makes the loop closed. C C f (z) dz.13: Γ1 f (z) dz = Γ2 f (z) dz for all paths from z1 to z2 when the integral of f (z) is path independent.212 CHAPTER 5. If f (z) dz is path independent. but it is not simple. COMPLEX REPRESENTATIONS OF FUNCTIONS Γ z 1 1 z Γ 2 2 Figure 5. (5. C1 and C2 .) A loop in the shape of a ﬁgure eight is closed. But f (z) dz = = C1 C1 f (z) dz + f (z) dz − C2 − C2 f (z) dz f (z) dz. Now. Then.27) .

Figure 5.14: The integral $\oint_C f(z)\,dz$ around $C$ is zero if the integral $\int_\Gamma f(z)\,dz$ is path independent.

Assuming that the integrals from point 1 to point 2 are path independent, the integrals over $C_1$ and $C_2^-$ are equal. Therefore, we have $\oint_C f(z)\,dz = 0$.

5.6.2 Cauchy's Theorem

Next we investigate whether we can determine that integrals over simple closed contours vanish without doing all the work of parametrizing the contour. First, we need to establish the direction in which we traverse the contour. We will define a positively oriented contour as one that is traversed with the outward normal pointing to the right. As one follows such a loop, the interior is then on the left. For example, the contours in Figure 5.14 are positively oriented.

Definition: A curve with parametrization $(x(t), y(t))$ has a normal
$$(n_x, n_y) = \left(\frac{dy}{dt}, -\frac{dx}{dt}\right).$$
Recall that the normal is a perpendicular to the curve. There are two such perpendiculars. The above normal points outward, and the other normal points toward the interior of a closed curve.

We now consider $\oint_C (u + iv)\,dz$ over a simple closed contour. This can be written in terms of two real integrals in the $xy$-plane:
$$\oint_C (u + iv)\,dz = \oint_C (u + iv)(dx + i\,dy) = \oint_C u\,dx - v\,dy + i\oint_C v\,dx + u\,dy. \tag{5.28}$$
These integrals in the plane can be evaluated using Green's Theorem in the Plane. Recall this theorem from your last semester of calculus:

Theorem (Green's Theorem in the Plane): Let $P(x,y)$ and $Q(x,y)$ be continuously differentiable functions on and inside the simple closed curve $C$. Denoting the enclosed region $S$, we have
$$\oint_C P\,dx + Q\,dy = \iint_S \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)dx\,dy. \tag{5.29}$$

Using Green's Theorem to rewrite the first integral in (5.28), we have
$$\oint_C u\,dx - v\,dy = \iint_S \left(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)dx\,dy.$$
If $u$ and $v$ satisfy the Cauchy-Riemann equations,
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y},$$
then the integrand in the double integral vanishes. Therefore, $\oint_C u\,dx - v\,dy = 0$. In a similar fashion, one can show that $\oint_C v\,dx + u\,dy = 0$. Therefore, $\oint_C (u + iv)\,dz = 0$. We have thus proven the following theorem:

Theorem: If $u$ and $v$ satisfy the Cauchy-Riemann equations inside and on the simple closed contour $C$, then
$$\oint_C (u + iv)\,dz = 0. \tag{5.30}$$

Corollary: $\oint_C f(z)\,dz = 0$ when $f$ is differentiable in a domain $D$ with $C \subset D$.

Either one of these results is referred to as Cauchy's Theorem.

Example: Consider $\oint_{|z-1|=3} z^4\,dz$. Since $f(z) = z^4$ is differentiable inside the circle $|z - 1| = 3$, this integral vanishes.

We can use Cauchy's Theorem to show that we can deform one contour into another, perhaps simpler, contour.

Theorem: If $f(z)$ is holomorphic between two simple closed contours, $C$ and $C'$, then $\oint_C f(z)\,dz = \oint_{C'} f(z)\,dz$.

Figure 5.15: The contours needed to prove that $\oint_C f(z)\,dz = \oint_{C'} f(z)\,dz$ when $f(z)$ is holomorphic between the contours $C$ and $C'$.

We consider the two curves as shown in Figure 5.15. Now connect the two contours with the contours $\Gamma_1$ and $\Gamma_2$ as shown. This splits $C$ into contours $C_1$ and $C_2$, and $C'$ into contours $C_1'$ and $C_2'$. $f(z)$ is differentiable inside the newly formed regions between the curves, and the boundaries of these regions are now simple closed curves. Therefore, Cauchy's Theorem tells us that the integrals of $f(z)$ over these regions are zero. Noting that integrations over contours in the opposite

direction introduce a negative sign, we have from Cauchy's Theorem that
$$\int_{C_1} f(z)\,dz + \int_{\Gamma_1} f(z)\,dz - \int_{C_1'} f(z)\,dz + \int_{\Gamma_2} f(z)\,dz = 0$$
and
$$\int_{C_2} f(z)\,dz - \int_{\Gamma_2} f(z)\,dz - \int_{C_2'} f(z)\,dz - \int_{\Gamma_1} f(z)\,dz = 0.$$
In the first equation we have traversed the contours in the following order: $C_1$, $\Gamma_1$, $C_1'$ backwards, and $\Gamma_2$. The second equation denotes the integration over the lower region, but going backwards over all contours except for $C_2$. Adding these two equations, we have
$$\int_{C_1} f(z)\,dz + \int_{C_2} f(z)\,dz - \int_{C_1'} f(z)\,dz - \int_{C_2'} f(z)\,dz = 0.$$
Noting that $C = C_1 + C_2$ and $C' = C_1' + C_2'$, we have
$$\oint_C f(z)\,dz = \oint_{C'} f(z)\,dz,$$
as was to be proven.

Example: Compute $\oint_R \frac{dz}{z}$ for $R$ the rectangle $[-2,2] \times [-2i,2i]$. We can do this integral by computing four separate integrals over the sides of this square in the complex plane: one simply parametrizes each line segment, performs the integration, and sums the four results. The last theorem tells us that we could instead integrate over a simpler contour by deforming the square into a circle, as long as $f(z) = \frac{1}{z}$ is differentiable in the region bounded by the square and the circle. Thus, we can use the unit circle, as shown in Figure 5.16. The theorem tells us that
$$\oint_R \frac{dz}{z} = \oint_{|z|=1} \frac{dz}{z}.$$
The latter integral can be computed using the parametrization $z = e^{i\theta}$ for $\theta \in [0, 2\pi]$. Thus,
$$\oint_{|z|=1} \frac{dz}{z} = \int_0^{2\pi} \frac{ie^{i\theta}}{e^{i\theta}}\,d\theta = i\int_0^{2\pi} d\theta = 2\pi i. \tag{5.31}$$

Figure 5.16: The contours used to compute $\oint_R \frac{dz}{z}$.

Therefore, we have found that $\oint_R \frac{dz}{z} = 2\pi i$ by deforming the original simple closed contour. Note that to compute the integral around $R$ we can deform the contour to the circle $C$ since $f(z)$ is differentiable in the region between the contours.

For fun, let's do this the long way to see how much effort was saved. We will label the contour as shown in Figure 5.17. The lower segment, $\gamma_1$, of the square can be simply parametrized by noting that along this segment $z = x - 2i$ for $x \in [-2,2]$. Then we have
$$\int_{\gamma_1} \frac{dz}{z} = \int_{-2}^{2} \frac{dx}{x - 2i} = \ln(x - 2i)\Big|_{-2}^{2} = \left(\ln(2\sqrt{2}) + \frac{7\pi i}{4}\right) - \left(\ln(2\sqrt{2}) + \frac{5\pi i}{4}\right) = \frac{\pi i}{2}. \tag{5.32}$$
We note that the arguments of the logarithms are determined from the angles made by the diagonals provided in Figure 5.17. [The reader should verify this!]

Similarly, the integral along the top segment is computed as
$$\int_{\gamma_3} \frac{dz}{z} = \int_{2}^{-2} \frac{dx}{x + 2i} = \left(\ln(2\sqrt{2}) + \frac{3\pi i}{4}\right) - \left(\ln(2\sqrt{2}) + \frac{\pi i}{4}\right) = \frac{\pi i}{2}. \tag{5.33}$$

Figure 5.17: The contours used to compute $\oint_R \frac{dz}{z}$ are shown. The added diagonals are for the reader to easily see the arguments of the logarithms that appear during the integration.

The integral over the right side is
$$\int_{\gamma_2} \frac{dz}{z} = \int_{-2}^{2} \frac{i\,dy}{2 + iy} = \left(\ln(2\sqrt{2}) + \frac{\pi i}{4}\right) - \left(\ln(2\sqrt{2}) - \frac{\pi i}{4}\right) = \frac{\pi i}{2}. \tag{5.34}$$
Finally, the integral over the left side is
$$\int_{\gamma_4} \frac{dz}{z} = \int_{2}^{-2} \frac{i\,dy}{-2 + iy} = \left(\ln(2\sqrt{2}) + \frac{5\pi i}{4}\right) - \left(\ln(2\sqrt{2}) + \frac{3\pi i}{4}\right) = \frac{\pi i}{2}. \tag{5.35}$$
Therefore, we have
$$\oint_R \frac{dz}{z} = 4\left(\frac{\pi i}{2}\right) = 2\pi i.$$
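The deformation argument can also be checked numerically. The minimal Python sketch below (a midpoint-rule approximation, offered only as a sanity check) evaluates $\oint \frac{dz}{z}$ over both the square and the unit circle; both results come out near $2\pi i$:

```python
import cmath

def contour_integral(f, param, N):
    """Midpoint-rule approximation of the contour integral of f over
    the closed curve z = param(t), t in [0, 1]."""
    total = 0j
    for k in range(N):
        t0, t1 = k / N, (k + 1) / N
        total += f(param((t0 + t1) / 2)) * (param(t1) - param(t0))
    return total

def square(t):
    # Counterclockwise around the square [-2, 2] x [-2i, 2i].
    s = 4 * t
    if s < 1: return complex(-2 + 4 * s, -2)       # bottom: gamma_1
    if s < 2: return complex(2, -2 + 4 * (s - 1))  # right:  gamma_2
    if s < 3: return complex(2 - 4 * (s - 2), 2)   # top:    gamma_3
    return complex(-2, 2 - 4 * (s - 3))            # left:   gamma_4

circle = lambda t: cmath.exp(2j * cmath.pi * t)

f = lambda z: 1 / z
I_square = contour_integral(f, square, 40000)
I_circle = contour_integral(f, circle, 40000)
print(I_square, I_circle)  # both close to 2*pi*1j
```

The agreement of the two values is exactly what the deformation theorem predicts, since $1/z$ is differentiable between the two contours.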

Note that we had obtained the same result using Cauchy's Theorem. However, the long way took quite a bit of computation!

The converse of Cauchy's Theorem is not true; namely, $\oint_C f(z)\,dz = 0$ does not imply that $f(z)$ is differentiable. What we do have, though, is Morera's Theorem, which is different from Cauchy's Theorem:

Theorem: Let $f$ be continuous in a domain $D$. Suppose that for every simple closed contour $C$ in $D$, $\oint_C f(z)\,dz = 0$. Then $f$ is differentiable in $D$.

The proof is a bit more detailed than we need to go into here. However, this result is used in the next section.

5.6.3 Analytic Functions and Cauchy's Integral Formula

In the previous section we saw that Cauchy's Theorem was useful for computing certain integrals without having to parametrize the contours, or for deforming certain contours to simpler ones. In this section we will generalize our integrand slightly so that we can integrate a larger family of complex functions. This will take the form of what is called Cauchy's Integral Formula. Namely, we will have to compute integrals like
$$\oint_C (z - z_0)^n\,dz.$$
The integrand needs to possess certain differentiability properties. We first need to explore the concept of analytic functions.

Definition: $f(z)$ is analytic in $D$ if, for every open disk $|z - z_0| < \rho$ lying in $D$, $f(z)$ can be represented as a power series about $z_0$,
$$f(z) = \sum_{n=0}^{\infty} c_n (z - z_0)^n,$$
with a radius of convergence $R$. This series converges uniformly and absolutely inside the circle of convergence, $|z - z_0| < R$.

Since $f(z)$ can be written as a uniformly convergent power series, we can integrate it term by term over any simple closed contour in $D$ containing $z_0$. As we will see in the homework exercises, the resulting integrals evaluate to zero. Thus, we can show that

Theorem: For $f(z)$ analytic in $D$ and any simple closed contour $C$ lying in $D$, $\oint_C f(z)\,dz = 0$.

Also, $f$ is a uniformly convergent sum of continuous functions, so $f(z)$ is also continuous. Thus, by Morera's Theorem, we have that $f(z)$ is differentiable if it is analytic. Often terms like analytic, differentiable and holomorphic are used interchangeably, though there is a subtle distinction due to their definitions.

Let's recall some manipulations from the study of series of real functions. Essentially, we will make repeated use of the result
$$\sum_{n=0}^{\infty} ar^n = \frac{a}{1-r}, \qquad |r| < 1.$$
The reader might need to recall how to sum geometric series; a review is given at the end of this section.

Example: $f(z) = \frac{1}{1+z}$ for $z_0 = 0$. This case is simple. $f(z)$ is the sum of a geometric series for $|z| < 1$. We have
$$f(z) = \frac{1}{1+z} = \sum_{n=0}^{\infty} (-z)^n.$$
Thus, this series expansion converges inside the unit circle in the complex plane.

Example: $f(z) = \frac{1}{1+z}$ for $z_0 = \frac{1}{2}$. We now look into an expansion about a different point. We seek an expansion in powers of $z - \frac{1}{2}$. We could compute the expansion coefficients using Taylor's formula for the coefficients. However, we can also make use of the formula for geometric series after rearranging the function. So, we rewrite the function in a form that has this term:
$$f(z) = \frac{1}{1+z} = \frac{1}{1 + \left(z - \frac{1}{2} + \frac{1}{2}\right)} = \frac{1}{\frac{3}{2} + \left(z - \frac{1}{2}\right)}.$$
This is not quite in the form we need. It would be nice if the denominator were of the form of one plus something. We can get the denominator into such a form by factoring out the $\frac{3}{2}$:
$$f(z) = \frac{2}{3}\,\frac{1}{1 + \frac{2}{3}\left(z - \frac{1}{2}\right)}.$$

The second factor now has the form $\frac{1}{1-r}$, which would be the sum of a geometric series with first term $a = 1$ and ratio $r = -\frac{2}{3}\left(z - \frac{1}{2}\right)$, provided that $|r| < 1$. Therefore, we have found that
$$f(z) = \frac{2}{3} \sum_{n=0}^{\infty} \left(-\frac{2}{3}\left(z - \frac{1}{2}\right)\right)^n$$
for $\left|-\frac{2}{3}\left(z - \frac{1}{2}\right)\right| < 1$. This convergence condition can be rewritten as
$$\left|z - \frac{1}{2}\right| < \frac{3}{2}.$$
This is a circle centered at $z = \frac{1}{2}$ with radius $\frac{3}{2}$.

Figure 5.18: Regions of convergence for expansions of $f(z) = \frac{1}{1+z}$ about $z = 0$ and $z = \frac{1}{2}$.

In Figure 5.18 we show the regions of convergence for the power series expansions of $f(z) = \frac{1}{1+z}$ about $z = 0$ and $z = \frac{1}{2}$. We note that the first expansion gives us that $f(z)$ is at least analytic inside the region $|z| < 1$. The second expansion shows that $f(z)$ is analytic in a region even further out, to the region $\left|z - \frac{1}{2}\right| < \frac{3}{2}$. We will see later that there are expansions outside of these regions, though some are series expansions involving negative powers of $z - z_0$.

We now present the Cauchy Integral Formula.
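Partial sums of the second expansion can be compared with $\frac{1}{1+z}$ directly. In the minimal Python sketch below (a numerical check only), the test point lies outside the unit circle but inside $\left|z - \frac{1}{2}\right| < \frac{3}{2}$, so the first expansion fails there while the second still applies:

```python
z = 1.2 + 0.3j              # |z| > 1, but |z - 1/2| < 3/2
exact = 1 / (1 + z)

r = -(2 / 3) * (z - 0.5)    # ratio of the expansion about z0 = 1/2
assert abs(r) < 1           # inside its circle of convergence
approx = (2 / 3) * sum(r ** n for n in range(200))
print(abs(approx - exact))  # close to 0
```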

Theorem (Cauchy Integral Formula): Let $f(z)$ be analytic in $|z - z_0| < \rho$ and let $C$ be the boundary (circle) of this disk. Then,
$$f(z_0) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - z_0}\,dz. \tag{5.36}$$

In order to prove this, we first make use of the analyticity of $f(z)$. We insert the power series expansion of $f(z)$ about $z_0$ into the integrand:
$$\frac{f(z)}{z - z_0} = \frac{1}{z - z_0} \sum_{n=0}^{\infty} c_n (z - z_0)^n = \frac{c_0 + c_1(z - z_0) + c_2(z - z_0)^2 + \cdots}{z - z_0} = \frac{c_0}{z - z_0} + h(z),$$
where $h(z) = c_1 + c_2(z - z_0) + \cdots$ is an analytic function, since it is representable as a Taylor series expansion about $z_0$. We have already shown that analytic functions are differentiable, so by Cauchy's Theorem $\oint_C h(z)\,dz = 0$. Noting also that $c_0 = f(z_0)$ is the first term of a Taylor series expansion about $z = z_0$, we have
$$\oint_C \frac{f(z)}{z - z_0}\,dz = \oint_C \left[\frac{c_0}{z - z_0} + h(z)\right] dz = f(z_0) \oint_C \frac{1}{z - z_0}\,dz. \tag{5.37}$$
We need only compute the integral $\oint_C \frac{1}{z - z_0}\,dz$ to finish the proof of Cauchy's Integral Formula. This is done by parametrizing the circle, $|z - z_0| = \rho$, as shown in Figure 5.19. Let $z - z_0 = \rho e^{i\theta}$. (Note that this has the right complex modulus, since $|e^{i\theta}| = 1$.) Then $dz = i\rho e^{i\theta}\,d\theta$. Using this parametrization, we have
$$\oint_C \frac{dz}{z - z_0} = \int_0^{2\pi} \frac{i\rho e^{i\theta}}{\rho e^{i\theta}}\,d\theta = i\int_0^{2\pi} d\theta = 2\pi i.$$

Figure 5.19: Circular contour used in proving the Cauchy Integral Formula.

Therefore,
$$\oint_C \frac{f(z)}{z - z_0}\,dz = 2\pi i\, f(z_0),$$
as was to be shown.

Example (Using the Cauchy Integral Formula): We now compute $\oint_{|z|=4} \frac{\cos z}{z^2 - 6z + 5}\,dz$. In order to apply the Cauchy Integral Formula, we need to factor the denominator, $z^2 - 6z + 5 = (z-1)(z-5)$. We next locate the zeros of the denominator. In Figure 5.20 we see the contour and the points $z = 1$ and $z = 5$. The only point inside the region bounded by the contour is $z = 1$. Therefore, we can apply the Cauchy Integral Formula for $f(z) = \frac{\cos z}{z-5}$ to the integral
$$\oint_{|z|=4} \frac{\cos z}{(z-1)(z-5)}\,dz = \oint_{|z|=4} \frac{f(z)}{z-1}\,dz = 2\pi i\, f(1).$$
Therefore, we have
$$\oint_{|z|=4} \frac{\cos z}{(z-1)(z-5)}\,dz = -\frac{\pi i \cos(1)}{2}.$$
We have shown that $f(z_0)$ has an integral representation for $f(z)$ analytic in $|z - z_0| < \rho$. In fact, all derivatives of an analytic function have an integral representation.
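This example can be confirmed numerically by parametrizing the circle $|z| = 4$ and summing. The minimal Python sketch below (a sanity check only) approximates the contour integral and compares it with $-\pi i \cos(1)/2$:

```python
import cmath, math

def circle_integral(f, radius, N=20000):
    # Midpoint-rule approximation of the integral of f around |z| = radius.
    total = 0j
    pts = [radius * cmath.exp(2j * cmath.pi * k / N) for k in range(N + 1)]
    for a, b in zip(pts, pts[1:]):
        total += f((a + b) / 2) * (b - a)
    return total

g = lambda z: cmath.cos(z) / (z * z - 6 * z + 5)
I = circle_integral(g, 4.0)
expected = -1j * math.pi * math.cos(1) / 2
print(abs(I - expected))  # close to 0
```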

Figure 5.20: Circular contour used in computing $\oint_{|z|=4} \frac{\cos z}{z^2 - 6z + 5}\,dz$.

This is given by
$$f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz. \tag{5.38}$$
This can be proven following a derivation similar to that for the Cauchy Integral Formula. One needs to recall that the coefficients of the Taylor series expansion for $f(z)$ are given by
$$c_n = \frac{f^{(n)}(z_0)}{n!}.$$
We also need the following lemma:

Lemma:
$$\oint_C \frac{dz}{(z - z_0)^{n+1}} = \begin{cases} 2\pi i, & n = 0, \\ 0, & n \neq 0. \end{cases} \tag{5.39}$$
The integrals are similar to the $n = 0$ case above. This will be a homework problem.

5.6.4 Geometric Series

In this section we have made use of geometric series. A geometric series is of the form
$$\sum_{n=0}^{\infty} ar^n = a + ar + ar^2 + ar^3 + \cdots + ar^n + \cdots. \tag{5.40}$$

Here $a$ is the first term and $r$ is called the ratio. It is called the ratio because the ratio of two consecutive terms in the sum is $r$.

The sum of a geometric series, when it converges, can easily be determined. We consider the $n$th partial sum:
$$s_n = a + ar + \cdots + ar^{n-2} + ar^{n-1}. \tag{5.41}$$
Now, multiply this equation by $r$:
$$rs_n = ar + ar^2 + \cdots + ar^{n-1} + ar^n. \tag{5.42}$$
Subtracting these two equations, while noting the many cancellations, we have
$$(1 - r)s_n = a - ar^n. \tag{5.43}$$
Thus, the $n$th partial sums can be written in the compact form
$$s_n = \frac{a(1 - r^n)}{1 - r}. \tag{5.44}$$
Recalling that the sum, if it exists, is given by $S = \lim_{n\to\infty} s_n$, then letting $n$ get large in the partial sum (5.44), we need only evaluate $\lim_{n\to\infty} r^n$. From our special limits we know that this limit is zero for $|r| < 1$. Thus, we have the sum of the geometric series:
$$\sum_{n=0}^{\infty} ar^n = \frac{a}{1 - r}, \qquad |r| < 1. \tag{5.45}$$
The reader should verify that the geometric series diverges for all other values of $r$. Namely, consider what happens for the separate cases $|r| > 1$, $r = 1$ and $r = -1$.

Next, we present a few typical examples of geometric series.

Example 1. $\sum_{n=0}^{\infty} \frac{1}{2^n}$. In this case we have that $a = 1$ and $r = \frac{1}{2}$. Therefore, this infinite series converges and the sum is
$$S = \frac{1}{1 - \frac{1}{2}} = 2.$$
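The closed form (5.44) and the limit (5.45) are easy to check numerically. The short Python sketch below (a verification only) does so for Example 1, where $a = 1$ and $r = \frac{1}{2}$:

```python
a, r, n = 1.0, 0.5, 20
direct = sum(a * r ** k for k in range(n))   # s_n summed term by term
closed = a * (1 - r ** n) / (1 - r)          # s_n from the compact form (5.44)
limit = a / (1 - r)                          # the sum (5.45); equals 2 here
print(direct, closed, limit)                 # the partial sums approach 2
```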

Example 2. $\sum_{k=2}^{\infty} \frac{4}{3^k}$. In this example we note that the first term occurs for $k = 2$. So, $a = \frac{4}{9}$ and $r = \frac{1}{3}$. Therefore,
$$S = \frac{\frac{4}{9}}{1 - \frac{1}{3}} = \frac{2}{3}.$$

Example 3. $\sum_{n=1}^{\infty} \left(\frac{3}{2^n} - \frac{2}{5^n}\right)$. Finally, in this case we do not have a geometric series, but we do have the difference of two geometric series. Of course, we need to be careful whenever rearranging infinite series. In this case it is allowed. Thus, we have
$$\sum_{n=1}^{\infty} \left(\frac{3}{2^n} - \frac{2}{5^n}\right) = \sum_{n=1}^{\infty} \frac{3}{2^n} - \sum_{n=1}^{\infty} \frac{2}{5^n}.$$
Now we can sum both geometric series:
$$\sum_{n=1}^{\infty} \left(\frac{3}{2^n} - \frac{2}{5^n}\right) = \frac{\frac{3}{2}}{1 - \frac{1}{2}} - \frac{\frac{2}{5}}{1 - \frac{1}{5}} = 3 - \frac{1}{2} = \frac{5}{2}.$$

5.7 Complex Series Representations

Until this point we have only talked about series whose terms have nonnegative powers of $z - z_0$. It is possible to have series representations in which there are negative powers. In the last section we investigated expansions of $f(z) = \frac{1}{1+z}$ about $z = 0$ and $z = \frac{1}{2}$. The regions of convergence for each series were shown in Figure 5.18. Let us reconsider each of these expansions, but now for values of $z$ outside the region of convergence previously found.

Example 1. $f(z) = \frac{1}{1+z}$ for $|z| > 1$. As before, we make use of the geometric series. Since $|z| > 1$, we instead rewrite our function as
$$f(z) = \frac{1}{1+z} = \frac{1}{z}\,\frac{1}{1 + \frac{1}{z}}.$$

We now have the function in the form of the sum of a geometric series with first term $a = 1$ and ratio $r = -\frac{1}{z}$. We note that $|z| > 1$ implies that $|r| < 1$. Thus, we have the geometric series
$$f(z) = \frac{1}{z} \sum_{n=0}^{\infty} \left(-\frac{1}{z}\right)^n = \sum_{n=0}^{\infty} (-1)^n z^{-n-1}.$$
This can be re-indexed using $j = n + 1$ to obtain
$$f(z) = \sum_{j=1}^{\infty} (-1)^{j-1} z^{-j}.$$
Note that this series, which converges outside the unit circle, $|z| > 1$, has negative powers of $z$.

Example 2. $f(z) = \frac{1}{1+z}$ for $\left|z - \frac{1}{2}\right| > \frac{3}{2}$. As before, we write
$$f(z) = \frac{1}{1+z} = \frac{1}{1 + \left(z - \frac{1}{2} + \frac{1}{2}\right)} = \frac{1}{\frac{3}{2} + \left(z - \frac{1}{2}\right)}.$$
Instead of factoring out the $\frac{3}{2}$, we now factor out the $\left(z - \frac{1}{2}\right)$ term:
$$f(z) = \frac{1}{z - \frac{1}{2}}\,\frac{1}{1 + \frac{3}{2}\left(z - \frac{1}{2}\right)^{-1}}.$$
Again, we identify $a = 1$ and $r = -\frac{3}{2}\left(z - \frac{1}{2}\right)^{-1}$. This leads to the series
$$f(z) = \frac{1}{z - \frac{1}{2}} \sum_{n=0}^{\infty} \left(-\frac{3}{2}\left(z - \frac{1}{2}\right)^{-1}\right)^n.$$
This converges for $\left|z - \frac{1}{2}\right| > \frac{3}{2}$ and can also be re-indexed to verify that this series involves negative powers of $z - \frac{1}{2}$.

This leads to the following theorem:

Theorem: Let $f(z)$ be analytic in an annulus, $R_1 < |z - z_0| < R_2$, with $C$ a positively oriented simple closed curve around $z_0$ and inside the annulus, as shown in Figure 5.21. Then,
$$f(z) = \sum_{j=0}^{\infty} a_j (z - z_0)^j + \sum_{j=1}^{\infty} b_j (z - z_0)^{-j},$$

Figure 5.21: This figure shows an annulus, $R_1 < |z - z_0| < R_2$, with $C$ a positively oriented simple closed curve around $z_0$ and inside the annulus.

with
$$a_j = \frac{1}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^{j+1}}\,dz$$
and
$$b_j = \frac{1}{2\pi i} \oint_C \frac{f(z)}{(z - z_0)^{-j+1}}\,dz.$$
The above series can be written in the more compact form
$$f(z) = \sum_{j=-\infty}^{\infty} c_j (z - z_0)^j.$$
Such a series expansion is called a Laurent expansion.

Example: Expand $f(z) = \frac{1}{(1-z)(2+z)}$ in the annulus $1 < |z| < 2$. Using partial fractions, we can write this as
$$f(z) = \frac{1}{3}\left[\frac{1}{1-z} + \frac{1}{2+z}\right].$$
We can expand the first fraction, $\frac{1}{1-z}$, as an analytic function in the region $|z| > 1$ and the second fraction, $\frac{1}{2+z}$, as an analytic function in $|z| < 2$. This is done as follows. First, we write
$$\frac{1}{2+z} = \frac{1}{2\left[1 - \left(-\frac{z}{2}\right)\right]} = \frac{1}{2} \sum_{n=0}^{\infty} \left(-\frac{z}{2}\right)^n.$$

Then we write
$$\frac{1}{1-z} = -\frac{1}{z\left[1 - \frac{1}{z}\right]} = -\frac{1}{z} \sum_{n=0}^{\infty} \frac{1}{z^n}.$$
Therefore, in the common region, $1 < |z| < 2$, we have that
$$\frac{1}{(1-z)(2+z)} = \frac{1}{3}\left[\frac{1}{2} \sum_{n=0}^{\infty} \left(-\frac{z}{2}\right)^n - \sum_{n=0}^{\infty} z^{-n-1}\right] = \sum_{n=0}^{\infty} \frac{(-1)^n}{6(2^n)}\, z^n + \sum_{n=1}^{\infty} \frac{(-1)}{3}\, z^{-n}. \tag{5.46}$$

5.8 Singularities and the Residue Theorem

In the last section we found that we could integrate functions satisfying some analyticity properties along contours without using detailed parametrizations around the contours. We can deform contours if the function is analytic in the region between the original and the new contour. In this section we will extend our tools for performing contour integrals.

The integrand in the Cauchy Integral Formula was of the form $g(z) = \frac{f(z)}{z - z_0}$, where $f(z)$ is well behaved at $z_0$. The point $z = z_0$ is called a singularity of $g(z)$, as $g(z)$ is not defined there. As we saw from the proof of the Cauchy Integral Formula, $g(z)$ has a Laurent series expansion about $z = z_0$,
$$g(z) = \frac{f(z_0)}{z - z_0} + f'(z_0) + \frac{1}{2} f''(z_0)(z - z_0) + \cdots.$$
We now define singularities and then classify isolated singularities.

Definition: A singularity of $f(z)$ is a point at which $f(z)$ fails to be analytic. Typically these are isolated singularities. In order to classify the singularities of $f(z)$, we look at the principal part of the Laurent series, $\sum_{j=1}^{\infty} b_j (z - z_0)^{-j}$. This is the part of the Laurent series containing only negative powers of $z - z_0$.
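The Laurent expansion (5.46) can be checked at a sample point of the annulus. The minimal Python sketch below (a numerical verification only) compares a truncation of the two sums with the function itself:

```python
z = 1.4 + 0.2j                     # a point with 1 < |z| < 2
exact = 1 / ((1 - z) * (2 + z))

M = 80                             # truncation order
analytic_part = sum((-1) ** n / (6 * 2 ** n) * z ** n for n in range(M))
principal_part = sum(-z ** (-n) / 3 for n in range(1, M))
print(abs(analytic_part + principal_part - exact))  # close to 0
```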

1. If $f(z)$ is bounded near $z_0$, then $z_0$ is a removable singularity.
2. If there are a finite number of terms in the principal part of the Laurent series, then one has a pole. One has a pole of order $n$ if there is a term $(z - z_0)^{-n}$ and no terms of the form $(z - z_0)^{-j}$ for $j > n$.
3. If there are an infinite number of terms in the principal part, then one has an essential singularity.

Example (Removable): $f(z) = \frac{\sin z}{z}$. At first it looks like there is a possible singularity at $z = 0$. However, we know from the first semester of calculus that $\lim_{z\to 0} \frac{\sin z}{z} = 1$. Furthermore, we can expand $\sin z$ about $z = 0$ and see that
$$\frac{\sin z}{z} = \frac{1}{z}\left(z - \frac{z^3}{3!} + \cdots\right) = 1 - \frac{z^2}{3!} + \cdots.$$
Thus, there are only nonnegative powers in the series expansion. So, this is an example of a removable singularity.

Example (Poles): $f(z) = \frac{e^z}{(z-1)^n}$. For $n = 1$ we have $f(z) = \frac{e^z}{z-1}$. This function has a singularity at $z = 1$ called a simple pole. The series expansion is found by expanding $e^z$ about $z = 1$:
$$f(z) = \frac{e}{z-1}\, e^{z-1} = \frac{e}{z-1} + e + \frac{e}{2!}(z-1) + \cdots.$$
Note that the principal part of the Laurent series expansion about $z = 1$ has only one term.

For $n = 2$ we have $f(z) = \frac{e^z}{(z-1)^2}$. The series expansion is found again by expanding $e^z$ about $z = 1$:
$$f(z) = \frac{e}{(z-1)^2}\, e^{z-1} = \frac{e}{(z-1)^2} + \frac{e}{z-1} + \frac{e}{2!} + \frac{e}{3!}(z-1) + \cdots.$$
Note that the principal part of the Laurent series has two terms involving $(z-1)^{-2}$ and $(z-1)^{-1}$. This is a pole of order 2.

Example (Essential): $f(z) = e^{1/z}$. In this case we have the series expansion about $z = 0$ given by
$$f(z) = e^{1/z} = \sum_{n=0}^{\infty} \frac{\left(\frac{1}{z}\right)^n}{n!} = \sum_{n=0}^{\infty} \frac{1}{n!}\, z^{-n}.$$

We see that there are an infinite number of terms in the principal part of the Laurent series. So, this function has an essential singularity at $z = 0$.

In the above examples we have seen poles of order one (a simple pole) and two. In general, we can define poles of order $k$:

Definition: $f(z)$ has a pole of order $k$ at $z_0$ if and only if $(z - z_0)^k f(z)$ has a removable singularity at $z_0$, but $(z - z_0)^{k-1} f(z)$ for $k > 0$ does not.

Let $\phi(z) = (z - z_0)^k f(z)$ be analytic. Then it has a Taylor series expansion about $z_0$. As we had seen in Equation (5.38) in the last section, we can write the integral representation of the $(k-1)$st derivative of an analytic function as
$$\phi^{(k-1)}(z_0) = \frac{(k-1)!}{2\pi i} \oint_C \frac{\phi(z)}{(z - z_0)^k}\,dz.$$
Inserting the definition of $\phi(z)$, we then have
$$\phi^{(k-1)}(z_0) = \frac{(k-1)!}{2\pi i} \oint_C f(z)\,dz.$$
Dividing out the factorial factor and evaluating the $\phi(z)$ derivative, we have that
$$\frac{1}{2\pi i} \oint_C f(z)\,dz = \frac{1}{(k-1)!}\, \phi^{(k-1)}(z_0) = \frac{1}{(k-1)!} \left.\frac{d^{k-1}}{dz^{k-1}}\left[(z - z_0)^k f(z)\right]\right|_{z=z_0}. \tag{5.47}$$
We note that from the integral representation of the coefficients for a Laurent series, this gives $c_{-1}$, or $b_1$: the coefficient of the $(z - z_0)^{-1}$ term. This particular coefficient plays a role in helping to compute contour integrals surrounding poles. It is called the residue of $f(z)$ at $z = z_0$. Thus, for a pole of order $k$ we define the residue as
$$\mathrm{Res}[f(z); z_0] = \lim_{z\to z_0} \frac{1}{(k-1)!}\, \frac{d^{k-1}}{dz^{k-1}}\left[(z - z_0)^k f(z)\right]. \tag{5.48}$$
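Since the residue is also $\frac{1}{2\pi i}\oint f(z)\,dz$ over a small circle around the pole, it can be estimated numerically. The minimal Python sketch below (a sanity check, not the derivation above) does this for $f(z) = e^z/(z-1)^2$, whose residue at the order-2 pole $z = 1$ is $e$ by formula (5.48):

```python
import cmath, math

def residue(f, z0, rho=0.1, N=20000):
    """Estimate Res[f; z0] as (1/2 pi i) times the integral of f over
    a small circle |z - z0| = rho (midpoint rule on the chords)."""
    total = 0j
    pts = [z0 + rho * cmath.exp(2j * cmath.pi * k / N) for k in range(N + 1)]
    for a, b in zip(pts, pts[1:]):
        total += f((a + b) / 2) * (b - a)
    return total / (2j * cmath.pi)

f = lambda z: cmath.exp(z) / (z - 1) ** 2   # pole of order 2 at z = 1
print(residue(f, 1.0))                      # close to e = 2.71828...
```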

Figure 5.22: Contour for computing $\oint_{|z|=1} \frac{dz}{\sin z}$.

Thus, we have shown that if $f(z)$ has one pole, $z_0$, of order $k$, inside a simple closed contour $C$, then
$$\oint_C f(z)\,dz = 2\pi i\, \mathrm{Res}[f(z); z_0]. \tag{5.49}$$

Example: $\oint_{|z|=1} \frac{dz}{\sin z}$. We begin by looking for the singularities of the integrand. These occur where $\sin z = 0$; that is, $z = 0, \pm\pi, \pm 2\pi, \ldots$ are the singularities. However, only $z = 0$ lies inside the contour, as shown in Figure 5.22. We note further that $z = 0$ is a simple pole, since
$$\lim_{z\to 0}\,(z - 0)\,\frac{1}{\sin z} = 1.$$
Therefore, the residue is one and we have
$$\oint_{|z|=1} \frac{dz}{\sin z} = 2\pi i.$$
In general, we could have several poles of different orders. For example, we will be computing
$$\oint_{|z|=2} \frac{dz}{z^2 - 1}.$$

The integrand has singularities at $z^2 - 1 = 0$, or $z = \pm 1$. Both poles are inside the contour, as seen in Figure 5.24. One could do a partial fraction decomposition and have two integrals with one pole each, which can be evaluated with single residue computations. However, in cases in which we have many poles, we can use the following theorem, known as the Residue Theorem.

Theorem: Let $f(z)$ be a function which has poles $z_j$, $j = 1, \ldots, N$, inside a simple closed contour $C$ and no other singularities in this region. Then,
$$\oint_C f(z)\,dz = 2\pi i \sum_{j=1}^{N} \mathrm{Res}[f(z); z_j], \tag{5.50}$$
where the residues are computed using Equation (5.48).

The proof of this theorem is based upon the contours shown in Figure 5.23. One constructs a new contour $C'$ by encircling each pole, as shown in the figure. Then one connects a path from $C$ to each circle. The new contour is then obtained by following $C$ and crossing each cut as it is encountered; then one goes around a circle in the negative sense and returns along the cut to proceed around $C$. In the figure two paths are shown only to indicate the direction followed on the cut. The sum of the contributions to the contour integration involves two integrals for each cut, which will cancel due to the opposing directions. Thus, we are left with
$$\oint_{C'} f(z)\,dz = \oint_C f(z)\,dz - \oint_{C_1} f(z)\,dz - \oint_{C_2} f(z)\,dz - \oint_{C_3} f(z)\,dz = 0.$$
Of course, the sum is zero because $f(z)$ is analytic in the enclosed region, since all singularities have been cut out. Solving for $\oint_C f(z)\,dz$, one has that this integral is the sum of the integrals around the separate poles, which can be evaluated with single residue computations. Thus, the result is that $\oint_C f(z)\,dz$ is $2\pi i$ times the sum of the residues of $f(z)$ at each pole.

Example 1: $\oint_{|z|=2} \frac{dz}{z^2 - 1}$. We first note that there are two poles in this integral, since
$$\frac{1}{z^2 - 1} = \frac{1}{(z-1)(z+1)}.$$

Figure 5.23: A depiction of how one cuts out poles to prove that the integral around $C$ is the sum of the integrals around circles with the poles at the center of each.

In Figure 5.24 we plot the contour and the two poles, each denoted by an "x". Since both poles are inside the contour, we need to compute the residues for each one. They are both simple poles, so we have
$$\mathrm{Res}\left[\frac{1}{z^2-1}; z = 1\right] = \lim_{z\to 1}\,(z-1)\,\frac{1}{z^2-1} = \lim_{z\to 1} \frac{1}{z+1} = \frac{1}{2} \tag{5.51}$$
and
$$\mathrm{Res}\left[\frac{1}{z^2-1}; z = -1\right] = \lim_{z\to -1}\,(z+1)\,\frac{1}{z^2-1} = \lim_{z\to -1} \frac{1}{z-1} = -\frac{1}{2}. \tag{5.52}$$
Then,
$$\oint_{|z|=2} \frac{dz}{z^2-1} = 2\pi i\left(\frac{1}{2} - \frac{1}{2}\right) = 0.$$

Example 2: $\int_0^{2\pi} \frac{d\theta}{2 + \cos\theta}$. Here we have a real integral in which there are no signs of complex functions. In fact, we could try to apply methods from our calculus class to do this integral, attempting to write $1 + \cos\theta = 2\cos^2\frac{\theta}{2}$. We do not, however, get very far.
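Example 1 makes a good sanity check for a numerical contour integrator, since the two residues cancel exactly. The minimal Python sketch below (a verification only) confirms that the integral is essentially zero:

```python
import cmath

def circle_integral(f, radius, N=40000):
    # Midpoint-rule approximation of the integral of f around |z| = radius.
    total = 0j
    pts = [radius * cmath.exp(2j * cmath.pi * k / N) for k in range(N + 1)]
    for a, b in zip(pts, pts[1:]):
        total += f((a + b) / 2) * (b - a)
    return total

I = circle_integral(lambda z: 1 / (z * z - 1), 2.0)
print(abs(I))  # close to 0, matching 2*pi*i*(1/2 - 1/2)
```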

Figure 5.24: Contour for computing $\oint_{|z|=2} \frac{dz}{z^2-1}$.

One trick, useful in computing integrals whose integrand is in the form $f(\cos\theta, \sin\theta)$, is to transform the integration to the complex plane through the transformation $z = e^{i\theta}$. Then,
$$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} = \frac{1}{2}\left(z + \frac{1}{z}\right),$$
$$\sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i} = -\frac{i}{2}\left(z - \frac{1}{z}\right).$$
Under this transformation, $z = e^{i\theta}$, the integration now takes place around the unit circle in the complex plane. Noting that $dz = ie^{i\theta}\,d\theta = iz\,d\theta$, we have
$$\int_0^{2\pi} \frac{d\theta}{2 + \cos\theta} = \oint_{|z|=1} \frac{\frac{dz}{iz}}{2 + \frac{1}{2}\left(z + \frac{1}{z}\right)} = -i \oint_{|z|=1} \frac{dz}{2z + \frac{1}{2}(z^2 + 1)} = -2i \oint_{|z|=1} \frac{dz}{z^2 + 4z + 1}. \tag{5.53}$$
We can apply the Residue Theorem to the resulting integral. The singularities occur for $z^2 + 4z + 1 = 0$. Using the quadratic formula, we have the roots $z = -2 \pm \sqrt{3}$. The locations of these poles are shown in Figure 5.25. Only $z = -2 + \sqrt{3}$ lies inside the integration contour.

Figure 5.25: Contour for computing $\int_0^{2\pi} \frac{d\theta}{2 + \cos\theta}$.

We will therefore need the residue of $f(z) = \frac{-2i}{z^2 + 4z + 1}$ at this simple pole:
$$\begin{aligned}
\mathrm{Res}[f(z); z = -2 + \sqrt{3}] &= \lim_{z\to -2+\sqrt{3}} \left(z - (-2 + \sqrt{3})\right) \frac{-2i}{z^2 + 4z + 1} \\
&= -2i \lim_{z\to -2+\sqrt{3}} \frac{z - (-2 + \sqrt{3})}{(z - (-2 + \sqrt{3}))(z - (-2 - \sqrt{3}))} \\
&= -2i \lim_{z\to -2+\sqrt{3}} \frac{1}{z - (-2 - \sqrt{3})} \\
&= \frac{-2i}{-2 + \sqrt{3} - (-2 - \sqrt{3})} = \frac{-i}{\sqrt{3}} = \frac{-i\sqrt{3}}{3}.
\end{aligned} \tag{5.54}$$
Therefore, we have
$$\int_0^{2\pi} \frac{d\theta}{2 + \cos\theta} = -2i \oint_{|z|=1} \frac{dz}{z^2 + 4z + 1} = 2\pi i\left(\frac{-i\sqrt{3}}{3}\right) = \frac{2\pi\sqrt{3}}{3}. \tag{5.55}$$

Example 3: $\oint_{|z|=3} \frac{z^2 + 1}{(z-1)^2(z+2)}\,dz$. In this example there are two poles inside the contour, $z = 1$ and $z = -2$. (See Figure 5.26.) Here $z = 1$ is a second order pole and $z = -2$ is a simple pole. Therefore, we need the residues at each pole of $f(z) = \frac{z^2+1}{(z-1)^2(z+2)}$. For the second order pole we have

Figure 5.26: Contour for computing $\oint_{|z|=3} \frac{z^2+1}{(z-1)^2(z+2)}\,dz$.

$$\mathrm{Res}[f(z); z = 1] = \lim_{z\to 1} \frac{1}{1!}\,\frac{d}{dz}\left[(z-1)^2\,\frac{z^2+1}{(z-1)^2(z+2)}\right] = \lim_{z\to 1} \frac{d}{dz}\left[\frac{z^2+1}{z+2}\right] = \lim_{z\to 1} \frac{z^2 + 4z - 1}{(z+2)^2} = \frac{4}{9}, \tag{5.56}$$
and for the simple pole,
$$\mathrm{Res}[f(z); z = -2] = \lim_{z\to -2}\,(z+2)\,\frac{z^2+1}{(z-1)^2(z+2)} = \lim_{z\to -2} \frac{z^2+1}{(z-1)^2} = \frac{5}{9}. \tag{5.57}$$
The evaluation of the integral is now $2\pi i$ times the sum of the residues:
$$\oint_{|z|=3} \frac{z^2+1}{(z-1)^2(z+2)}\,dz = 2\pi i\left(\frac{4}{9} + \frac{5}{9}\right) = 2\pi i.$$

5.9 Computing Real Integrals

As our final application of complex integration techniques, we will turn to the evaluation of infinite integrals of the form $\int_{-\infty}^{\infty} f(x)\,dx$. These types of integrals will appear later in the text and will help to tie in what seems to be a digression in our study of mathematical physics.
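Examples 2 and 3 can both be verified numerically. The minimal Python sketch below (a check only) compares a direct quadrature of the trigonometric integral with $\frac{2\pi\sqrt{3}}{3}$, and a midpoint-rule contour sum for Example 3 with $2\pi i$:

```python
import cmath, math

# Example 2: direct midpoint quadrature of the real integral over [0, 2*pi].
N = 100000
h = 2 * math.pi / N
I2 = sum(h / (2 + math.cos((k + 0.5) * h)) for k in range(N))
print(I2, 2 * math.pi * math.sqrt(3) / 3)   # the two values agree

# Example 3: midpoint-rule contour integral around |z| = 3.
M = 40000
pts = [3 * cmath.exp(2j * cmath.pi * k / M) for k in range(M + 1)]
f3 = lambda z: (z * z + 1) / ((z - 1) ** 2 * (z + 2))
I3 = sum(f3((a + b) / 2) * (b - a) for a, b in zip(pts, pts[1:]))
print(I3, 2j * math.pi)                     # the two values agree
```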

Recall that such integrals are improper integrals, and you have seen them in your calculus classes. The way that one determines if such integrals exist, or converge, is to compute the integral using a limit:
$$\int_{-\infty}^{\infty} f(x)\,dx = \lim_{R\to\infty} \int_{-R}^{R} f(x)\,dx.$$
For example,
$$\int_{-\infty}^{\infty} x\,dx = \lim_{R\to\infty} \int_{-R}^{R} x\,dx = \lim_{R\to\infty} \left(\frac{R^2}{2} - \frac{(-R)^2}{2}\right) = 0.$$
However, the integrals $\int_0^{\infty} x\,dx$ and $\int_{-\infty}^0 x\,dx$ do not exist. Note that
$$\int_0^{\infty} x\,dx = \lim_{R\to\infty} \int_0^R x\,dx = \lim_{R\to\infty} \frac{R^2}{2} = \infty.$$
Similarly, formally evaluating the antiderivative at the endpoints gives
$$\int_{-\infty}^{\infty} \frac{dx}{x^2} = \lim_{R\to\infty} \int_{-R}^{R} \frac{dx}{x^2} = \lim_{R\to\infty} \left(-\frac{2}{R}\right) = 0,$$
though here one must also be careful about the singularity of the integrand at $x = 0$.

Therefore,
$$\int_{-\infty}^{\infty} f(x)\,dx = \int_{-\infty}^{0} f(x)\,dx + \int_{0}^{\infty} f(x)\,dx$$
may not exist, while $\lim_{R\to\infty} \int_{-R}^{R} f(x)\,dx$ does exist. We will be interested in computing the latter type of integral. Such an integral is called the Cauchy Principal Value Integral and is denoted with either a $P$ or $PV$ prefix:
$$P\int_{-\infty}^{\infty} f(x)\,dx = \lim_{R\to\infty} \int_{-R}^{R} f(x)\,dx. \tag{5.58}$$
In our discussions we will be computing integrals over the real line in the Cauchy principal value sense. We now proceed to the evaluation of such principal value integrals using complex integration methods. We want to evaluate the integral

Figure 5.27: Contours for computing $P\int_{-\infty}^{\infty} f(x)\,dx$.

$\int_{-\infty}^{\infty} f(x)\,dx$. We will extend this into an integration in the complex plane. We extend $f(x)$ to $f(z)$ and assume that $f(z)$ is analytic in the upper half plane ($\mathrm{Im}(z) > 0$). We then consider the integral $\int_{-R}^{R} f(x)\,dx$ as an integral over the interval $(-R, R)$. We view this interval as a piece of a contour $C_R$ obtained by completing the contour with a semicircle $\Gamma_R$ of radius $R$ extending into the upper half plane, as shown in Figure 5.27. Note, a similar construction is sometimes needed to extend the integration into the lower half plane ($\mathrm{Im}(z) < 0$) when $f(z)$ is analytic there.

The integral around the entire contour $C_R$ can be computed using the Residue Theorem and is related to integrations over the pieces of the contour by
$$\oint_{C_R} f(z)\,dz = \int_{\Gamma_R} f(z)\,dz + \int_{-R}^{R} f(x)\,dx. \tag{5.59}$$
Taking the limit $R \to \infty$ and noting that the integral over $(-R, R)$ is the desired integral, we have
$$P\int_{-\infty}^{\infty} f(x)\,dx = \oint_C f(z)\,dz - \lim_{R\to\infty} \int_{\Gamma_R} f(z)\,dz, \tag{5.60}$$
where we have identified $C$ as the limiting contour as $R$ gets large,
$$\oint_C f(z)\,dz = \lim_{R\to\infty} \oint_{C_R} f(z)\,dz.$$
Now the key to carrying out the integration is that the integral over $\Gamma_R$ vanishes in the limit. This is true if $R|f(z)| \to 0$ along $\Gamma_R$ as $R \to \infty$. This

Figure 5.28: Contour for computing $\int_{-\infty}^{\infty} \frac{dx}{1+x^2}$.

can be seen by the following argument. We can parametrize the contour $\Gamma_R$ using $z = Re^{i\theta}$. We assume that the function is bounded on $\Gamma_R$ by some function of $R$; denote this function by $M(R)$, so that $|f(z)| < M(R)$. Then, we have
$$\left|\int_{\Gamma_R} f(z)\,dz\right| = \left|\int_0^{\pi} f(Re^{i\theta})\, iRe^{i\theta}\,d\theta\right| \le R \int_0^{\pi} \left|f(Re^{i\theta})\right| d\theta < RM(R) \int_0^{\pi} d\theta = \pi R M(R). \tag{5.61}$$
Thus, if $\lim_{R\to\infty} RM(R) = 0$, then $\lim_{R\to\infty} \int_{\Gamma_R} f(z)\,dz = 0$.

We now show how this applies in some examples.

Example 1: Evaluate $\int_{-\infty}^{\infty} \frac{dx}{1+x^2}$. We already know how to do this integral from our calculus classes. We have
$$\int_{-\infty}^{\infty} \frac{dx}{1+x^2} = \lim_{R\to\infty} 2\tan^{-1} R = 2\left(\frac{\pi}{2}\right) = \pi.$$
We will apply the methods of this section and confirm this result. The needed contours are shown in Figure 5.28, and the poles of the integrand are at $z = \pm i$.

We first note that $f(z) = \frac{1}{1+z^2}$ goes to zero fast enough on $\Gamma_R$ as $R$ gets large:

$$R|f(z)| = \frac{R}{|1 + R^2 e^{2i\theta}|} = \frac{R}{\sqrt{1 + 2R^2\cos 2\theta + R^4}}.$$

Thus, $R|f(z)| \to 0$ as $R \to \infty$. So,

$$\int_{-\infty}^{\infty} \frac{dx}{1+x^2} = \oint_C \frac{dz}{1+z^2}.$$

We need only compute the residue at the enclosed pole, $z = i$:

$$\mathrm{Res}[f(z); z = i] = \lim_{z\to i}\,(z - i)\,\frac{1}{1+z^2} = \lim_{z\to i}\frac{1}{z+i} = \frac{1}{2i}.$$

Then, using the Residue Theorem, we have

$$\int_{-\infty}^{\infty} \frac{dx}{1+x^2} = 2\pi i\left(\frac{1}{2i}\right) = \pi.$$

Example 2. Evaluate $P\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx$.

There are several new techniques that have to be introduced in order to carry out this integration. Since the integrand does not satisfy our previous condition on the bound, we need something called Jordan's Lemma to guarantee that the integral over the contour $\Gamma_R$ vanishes. In addition, we are faced for the first time with a pole lying on the contour, at $z = 0$, which we will need to handle in a special way.

[Figure 5.29: Contour for computing $P\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx$.]
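As a quick numerical sanity check of Example 1, one can compare a quadrature approximation of the real integral with the residue value $2\pi i \cdot \frac{1}{2i} = \pi$. This is a sketch using only the Python standard library; the helper `integrate`, the cutoff, and the panel count are our own choices, not part of the text.

```python
import math

def integrate(f, a, b, n):
    """Composite midpoint rule on [a, b] with n panels."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# int_{-R}^{R} dx/(1+x^2) for a large but finite R; the neglected
# tails contribute about 2/R to the true value pi.
approx = integrate(lambda x: 1.0 / (1.0 + x * x), -1.0e4, 1.0e4, 400000)
residue_value = math.pi  # 2*pi*i times the residue 1/(2i) at z = i
print(abs(approx - residue_value))  # small: tail plus quadrature error
```

The discrepancy is dominated by the truncated tails, of size roughly $2/R$.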

We now employ Jordan's Lemma:

If $f(z)$ converges uniformly to zero as $z \to \infty$, then
$$\lim_{R\to\infty} \int_{C_R} f(z)e^{ikz}\,dz = 0,$$
where $k > 0$ and $C_R$ is the upper half of the circle $|z| = R$. A similar result applies for $k < 0$, but one closes the contour in the lower half plane.

For this example the integrand is unbounded at $z = 0$. We cannot ignore this fact. We can proceed with our computation by carefully going around this pole with a small semicircle $C_\epsilon$ of radius $\epsilon$, as indicated in Figure 5.29. Then our principal value integral computation becomes

$$P\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx = \lim_{\epsilon\to 0}\left( \int_{-\infty}^{-\epsilon} \frac{\sin x}{x}\,dx + \int_{\epsilon}^{\infty} \frac{\sin x}{x}\,dx \right). \qquad (5.62)$$

We will also need to rewrite the sine function in terms of exponentials in this integral:

$$P\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx = \frac{1}{2i}\left( P\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\,dx - P\int_{-\infty}^{\infty} \frac{e^{-ix}}{x}\,dx \right). \qquad (5.63)$$

According to Jordan's Lemma, we will need to compute the above exponential integrals using two different contours. We first consider $P\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\,dx$ and use the contour in Figure 5.29. Constructing the contour as before, we have

$$\oint \frac{e^{iz}}{z}\,dz = \int_{\Gamma_R} \frac{e^{iz}}{z}\,dz + \int_{-R}^{-\epsilon} \frac{e^{iz}}{z}\,dz + \int_{C_\epsilon} \frac{e^{iz}}{z}\,dz + \int_{\epsilon}^{R} \frac{e^{iz}}{z}\,dz.$$

The integral around the full contour vanishes, since there are no poles enclosed in the contour! The integral over $\Gamma_R$ will vanish as $R$ gets large, according to Jordan's Lemma. The sum of the second and fourth integrals gives the integral we seek as $\epsilon \to 0$ and $R \to \infty$. The remaining integral, around the small semicircle, has to be done separately. We have

$$\int_{C_\epsilon} \frac{e^{iz}}{z}\,dz = \int_{\pi}^{0} \frac{\exp(i\epsilon e^{i\theta})}{\epsilon e^{i\theta}}\,i\epsilon e^{i\theta}\,d\theta = -\int_{0}^{\pi} i\exp(i\epsilon e^{i\theta})\,d\theta.$$

Taking the limit as $\epsilon$ goes to zero, the integrand goes to $i$ and we have

$$\int_{C_\epsilon} \frac{e^{iz}}{z}\,dz = -\pi i.$$

So far, being careful with the sign changes due to the orientations of the contours, we have that

$$P\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\,dx = -\lim_{\epsilon\to 0}\int_{C_\epsilon} \frac{e^{iz}}{z}\,dz = \pi i.$$

We can compute $P\int_{-\infty}^{\infty} \frac{e^{-ix}}{x}\,dx$ in a similar manner, this time closing the contour in the lower half plane, as shown in Figure 5.30. One finds

$$P\int_{-\infty}^{\infty} \frac{e^{-ix}}{x}\,dx = -\pi i.$$

[Figure 5.30: Contour in the lower half plane for computing $P\int_{-\infty}^{\infty} \frac{e^{-ix}}{x}\,dx$.]

Finally, we can compute the original integral:

$$P\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx = \frac{1}{2i}\left( P\int_{-\infty}^{\infty} \frac{e^{ix}}{x}\,dx - P\int_{-\infty}^{\infty} \frac{e^{-ix}}{x}\,dx \right) = \frac{1}{2i}(\pi i + \pi i) = \pi. \qquad (5.64)$$
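The value $P\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx = \pi$ can also be checked numerically. Since $\frac{\sin x}{x}$ is even with only a removable singularity at $x = 0$, no special principal-value handling is needed over a finite symmetric interval. The interval, panel count, and helper names in this standard-library sketch are our own choices.

```python
import math

def sinc(x):
    """sin(x)/x with its removable singularity filled in at x = 0."""
    return math.sin(x) / x if x != 0.0 else 1.0

def integrate(f, a, b, n):
    """Composite midpoint rule on [a, b] with n panels."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The tail of the integral oscillates and decays like cos(R)/R, so a
# moderate cutoff R already gives a few correct digits.
approx = integrate(sinc, -500.0, 500.0, 500000)
print(abs(approx - math.pi))  # of order 1/R
```

The slow $1/R$ decay of the tail is exactly why the contour-integration argument above is so much more effective than brute-force quadrature here.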


Chapter 6

Transform Techniques in Physics

6.1 Introduction

Some of the most powerful tools for solving problems in physics are transform methods. The idea is that one can transform the problem at hand into a new problem in a different space, hoping that the problem in the new space is easier to solve. Such transforms appear in many forms.

6.1.1 The Linearized KdV Equation

As a relatively simple example, we consider the linearized Korteweg-de Vries (KdV) equation:

$$u_t + cu_x + \beta u_{xxx} = 0, \qquad -\infty < x < \infty. \qquad (6.1)$$

This equation governs the propagation of some small amplitude water waves. Its nonlinear counterpart has been at the center of attention in the last 40 years as a generic nonlinear wave equation. We seek solutions that oscillate in space. So, we assume a solution of the form

$$u(x,t) = A(t)e^{ikx}. \qquad (6.2)$$

Such behavior was seen in the last chapter for the wave equation for vibrating strings. In that case, we found plane wave solutions of the form $e^{ik(x-ct)}$, which we could write as $e^{i(kx-\omega t)}$ by defining $\omega = kc$. We further note that one often seeks complex solutions of this form and then takes the real part in order to obtain a real physical solution. This should remind you of what we had done when using separation of variables: we first sought product solutions and then took a linear combination of the product solutions to give the general solution.

Inserting the guess (6.2) into the linearized KdV equation, we find that

$$\frac{dA}{dt} + i(ck - \beta k^3)A = 0. \qquad (6.3)$$

Thus, we have converted our problem of seeking a solution of the partial differential equation into seeking a solution to an ordinary differential equation. This new problem is easier to solve. We have

$$A(t) = A(0)e^{-i(ck - \beta k^3)t}. \qquad (6.4)$$

Therefore, the solution of the partial differential equation is

$$u(x,t) = A(0)e^{ik(x - (c - \beta k^2)t)}. \qquad (6.5)$$

We note that this takes the form $e^{i(kx - \omega t)}$, where

$$\omega = ck - \beta k^3.$$

In general, the equation $\omega = \omega(k)$, giving the angular frequency as a function of the wave number $k$, is called a dispersion relation. For $\beta = 0$, we see that $c$ is nothing but the wave speed. For $\beta \ne 0$, the wave speed is given by

$$v = \frac{\omega}{k} = c - \beta k^2.$$

This suggests that waves with different wave numbers will travel at different speeds. Recalling that wave numbers are related to wavelengths, $k = \frac{2\pi}{\lambda}$, this means that waves with different wavelengths will travel at different speeds. In fact, a linear combination of such solutions will not maintain its shape, as the waves of differing wavelengths tend to part company. It is said to disperse.

For a general initial condition, we need to write the solutions to the linearized KdV equation as a superposition of waves. We can do this as the equation is linear.
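The plane-wave solution (6.5) can be verified directly by finite differences. In the following standard-library sketch, the parameter values, grid step, sample point, and difference stencils are arbitrary choices of ours; the check is that the residual of the PDE vanishes to discretization accuracy.

```python
import cmath

c, beta, k = 1.0, 0.1, 3.0

def u(x, t):
    """Plane-wave solution u = exp(i k (x - (c - beta k^2) t))."""
    return cmath.exp(1j * k * (x - (c - beta * k**2) * t))

# Central differences for u_t, u_x and u_xxx at an arbitrary point.
h, x0, t0 = 1e-3, 0.7, 0.4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2 * h)
u_xxx = (u(x0 + 2*h, t0) - 2*u(x0 + h, t0)
         + 2*u(x0 - h, t0) - u(x0 - 2*h, t0)) / (2 * h**3)

residual = u_t + c * u_x + beta * u_xxx
print(abs(residual))  # limited only by the O(h^2) difference error
```

Changing `beta` changes the phase speed $c - \beta k^2$ of the wave but leaves the residual small, which is the content of the dispersion relation.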

Thus, we have the general solution

$$u(x,t) = \int_{-\infty}^{\infty} A(k,0)e^{ik(x - (c - \beta k^2)t)}\,dk. \qquad (6.6)$$

Note that we have now made $A$ a function of $k$. This is similar to introducing the $A_n$'s and $B_n$'s in the series solution for waves on a string. The wave numbers are not restricted to discrete values; we have a continuous range of values. Thus, summing over $k$ means that we have to integrate over the wave numbers.

How do we determine the $A(k,0)$'s? We introduce an initial condition, $u(x,0) = f(x)$. Then, we have

$$f(x) = u(x,0) = \int_{-\infty}^{\infty} A(k,0)e^{ikx}\,dk. \qquad (6.7)$$

Thus, given $f(x)$, we seek $A(k,0)$. This involves what is called the Fourier transform of $f(x)$. This is just one of the so-called integral transforms that we will consider in this section.

6.1.2 The Free Particle Wave Function

A more familiar example in physics comes from quantum mechanics. The Schrödinger equation gives the wave function $\Psi(x,t)$ for a particle under the influence of forces, represented through the corresponding potential function $V$. The one dimensional time dependent Schrödinger equation is given by

$$i\hbar\Psi_t = -\frac{\hbar^2}{2m}\Psi_{xx} + V\Psi. \qquad (6.8)$$

We consider the case of a free particle, in which there are no forces, $V = 0$. Then we have

$$i\hbar\Psi_t = -\frac{\hbar^2}{2m}\Psi_{xx}. \qquad (6.9)$$

Taking a hint from the study of the linearized KdV equation, we will assume that solutions of Equation (6.9) take the form

$$\Psi(x,t) = \int_{-\infty}^{\infty} \phi(k,t)e^{ikx}\,dk.$$

[Here we have opted to use the more traditional notation, $\phi(k,t)$, instead of $A(k,t)$ as above.]

Inserting this expression into (6.9), we have

$$i\hbar\int_{-\infty}^{\infty} \frac{d\phi(k,t)}{dt}e^{ikx}\,dk = -\frac{\hbar^2}{2m}\int_{-\infty}^{\infty} \phi(k,t)(ik)^2 e^{ikx}\,dk.$$

Since this is true for all $t$, we can equate the integrands, giving

$$i\hbar\frac{d\phi(k,t)}{dt} = \frac{\hbar^2 k^2}{2m}\phi(k,t).$$

This is easily solved. We obtain

$$\phi(k,t) = \phi(k,0)e^{-i\frac{\hbar k^2}{2m}t}.$$

Therefore, we have found the general solution to the time dependent problem for a free particle. It is given by

$$\Psi(x,t) = \int_{-\infty}^{\infty} \phi(k,0)e^{ik\left(x - \frac{\hbar k}{2m}t\right)}\,dk. \qquad (6.10)$$

We note that this takes the familiar form

$$\Psi(x,t) = \int_{-\infty}^{\infty} \phi(k,0)e^{i(kx - \omega t)}\,dk, \qquad \text{where} \qquad \omega = \frac{\hbar k^2}{2m}.$$

The wave speed is given by

$$v = \frac{\omega}{k} = \frac{\hbar k}{2m}.$$

As a special note, we see that this is not the particle velocity! Recall that the momentum is given by $p = \hbar k$. So, this wave speed is $v = \frac{p}{2m}$, which is only half the classical particle velocity! A simple manipulation of our result will clarify this "problem".

We assume that particles can be represented by a localized wave function. This is the case if the major contributions to the integral are centered about a central wave number, $k_0$. Thus, we can expand $\omega(k)$ about $k_0$:

$$\omega(k) = \omega_0 + \omega_0'(k - k_0) + \ldots.$$

Here $\omega_0 = \omega(k_0)$ and $\omega_0' = \omega'(k_0)$. Inserting this expression into our integral representation for $\Psi(x,t)$, making the change of variables $s = k - k_0$, and rearranging the factors, we find

$$\Psi(x,t) \approx \int_{-\infty}^{\infty} \phi(k_0+s,0)\,e^{i((k_0+s)x - (\omega_0 + \omega_0' s)t)}\,ds$$
$$= e^{i(-\omega_0 t + k_0\omega_0' t)}\int_{-\infty}^{\infty} \phi(k_0+s,0)\,e^{i(k_0+s)(x - \omega_0' t)}\,ds \qquad (6.11)$$
$$= e^{i(-\omega_0 t + k_0\omega_0' t)}\,\Psi(x - \omega_0' t, 0).$$

What we have found is that for a localized wave packet with wave numbers grouped around $k_0$, the wave function is a translated version of the initial wave function, up to a phase factor. The velocity of the wave packet is seen to be $\omega_0' = \frac{\hbar k_0}{m}$. This corresponds to the classical velocity of the particle. One usually defines $\omega_0'$ to be the group velocity,

$$v_g = \frac{d\omega}{dk},$$

and the former velocity as the phase velocity,

$$v_p = \frac{\omega}{k}.$$

6.1.3 Transform Schemes

These examples have illustrated one of the features of transform theory. Given a partial differential equation, we can transform the equation from spatial variables to wave number space, or from time variables to frequency space. In the new space the time evolution is simpler; in these cases, the evolution is governed by an ordinary differential equation. One solves the problem in the new space and then transforms back to the original space. This is depicted in Figure 6.1 for the Schrödinger equation.

This is similar to the solution of the system of ordinary differential equations $\dot{\mathbf{x}} = A\mathbf{x}$ in Chapter 3. In that case we diagonalized the system using the transformation $\mathbf{x} = S\mathbf{y}$. This led to a simpler system $\dot{\mathbf{y}} = \Lambda\mathbf{y}$.

[Figure 6.1: The scheme for solving the Schrödinger equation using Fourier transforms. Instead of a direct solution in coordinate space (on the left side), one can first transform the initial condition, obtaining $\phi(k,0)$ in wave number space. The governing equation in the new space is found by transforming the PDE to get an ODE. This simpler equation is solved to obtain $\phi(k,t)$. Then an inverse transform yields the solution of the original equation.]

Solving for $\mathbf{y}$, we inverted the solution to obtain $\mathbf{x}$. Similarly, one can apply this diagonalization to the solution of linear algebraic systems of equations. The general scheme is shown in Figure 6.2. Similar transform constructions occur for many other types of problems, and the goal is always the same: to solve for the unknown, such as $\Psi(x,t)$ given $\Psi(x,0)$, by passing through a space in which the problem is simpler. A similar scheme for using Laplace transforms is depicted in Figure 6.20.

In this chapter we will turn to the study of Fourier transforms. These will provide an integral representation of functions defined on the real line. Such functions can also represent analog signals. Analog signals are continuous signals which may be sums over a continuous set of frequencies, as opposed to the sums over discrete frequencies which Fourier series were used to represent in an earlier chapter. We will then investigate a related transform, the Laplace transform, which is useful in solving initial value problems such as those encountered in ordinary differential equations, particularly linear ordinary differential equations with constant coefficients.

[Figure 6.2: The scheme for solving the linear system $A\mathbf{x} = \mathbf{b}$. One finds a transformation between $\mathbf{x}$ and $\mathbf{y}$ of the form $\mathbf{x} = S\mathbf{y}$ which diagonalizes the system. The resulting system is easier to solve for $\mathbf{y}$. Then one uses the inverse transformation to obtain the solution to the original problem.]

Also, this scheme applies to solving the ODE system $\dot{\mathbf{x}} = A\mathbf{x}$, as we had seen in Chapter 3.

6.2 Complex Exponential Fourier Series

In this section we will see how to rewrite our trigonometric Fourier series as complex exponential series. Then we will extend our series to problems involving infinite periods.

We first recall the trigonometric Fourier series representation of a function defined on $[-\pi, \pi]$ with period $2\pi$. The Fourier series is given by

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left( a_n\cos nx + b_n\sin nx \right), \qquad (6.12)$$

where the Fourier coefficients were found as

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx, \qquad n = 0, 1, \ldots,$$
$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx, \qquad n = 1, 2, \ldots. \qquad (6.13)$$

In order to derive the exponential Fourier series, we replace the trigonometric functions with exponential functions and collect like terms.

This gives

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[ a_n\left(\frac{e^{inx} + e^{-inx}}{2}\right) + b_n\left(\frac{e^{inx} - e^{-inx}}{2i}\right) \right] \qquad (6.14)$$
$$= \frac{a_0}{2} + \sum_{n=1}^{\infty} \frac{a_n - ib_n}{2}e^{inx} + \sum_{n=1}^{\infty} \frac{a_n + ib_n}{2}e^{-inx}. \qquad (6.15)$$

The coefficients can be rewritten by defining

$$c_n = \frac{1}{2}(a_n + ib_n), \qquad n = 1, 2, \ldots. \qquad (6.16)$$

Then we also have that

$$\bar{c}_n = \frac{1}{2}(a_n - ib_n), \qquad n = 1, 2, \ldots.$$

This gives our representation as

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty} \bar{c}_n e^{inx} + \sum_{n=1}^{\infty} c_n e^{-inx}.$$

Reindexing the first sum, by letting $k = -n$, we can write

$$f(x) \sim \frac{a_0}{2} + \sum_{k=-1}^{-\infty} \bar{c}_{-k} e^{-ikx} + \sum_{n=1}^{\infty} c_n e^{-inx}.$$

Finally, we define $c_n = \bar{c}_{-n}$ for $n = -1, -2, \ldots$, and we note that we can take $c_0 = \frac{a_0}{2}$. Then we can write the complex exponential Fourier series representation as

$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{-inx}, \qquad (6.17)$$

where

$$c_n = \frac{1}{2}(a_n + ib_n), \qquad n = 1, 2, \ldots,$$
$$c_n = \frac{1}{2}(a_{-n} - ib_{-n}), \qquad n = -1, -2, \ldots,$$
$$c_0 = \frac{a_0}{2}. \qquad (6.18)$$

Given such a representation, we would like to write out the integral forms of the coefficients, $c_n$. So, we replace the $a_n$'s and $b_n$'s with their integral representations and replace the trigonometric functions with a complex exponential function using Euler's formula. Doing this, we have for $n = 1, 2, \ldots$,

$$c_n = \frac{1}{2}(a_n + ib_n) = \frac{1}{2}\left[ \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx + \frac{i}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx \right]$$
$$= \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\left(\cos nx + i\sin nx\right)dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{inx}\,dx. \qquad (6.19)$$

It is a simple matter to determine the $c_n$'s for the other values of $n$. For $n = 0$, we have that

$$c_0 = \frac{a_0}{2} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,dx.$$

For $n = -1, -2, \ldots$, we find that

$$c_n = \bar{c}_{-n} = \overline{\frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{-inx}\,dx} = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{inx}\,dx.$$

Therefore, for all $n$ we have shown that

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{inx}\,dx. \qquad (6.20)$$

We have converted our trigonometric series for functions defined on $[-\pi, \pi]$ to the complex exponential series in Equation (6.17) with Fourier coefficients given by (6.20). Thus, we have obtained the Complex Fourier Series representation

$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{-inx}, \qquad (6.21)$$

where the complex Fourier coefficients are given by

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)e^{inx}\,dx. \qquad (6.22)$$
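The relation $c_n = \frac{1}{2}(a_n + ib_n)$ is easy to test numerically. The standard-library sketch below (the sample function $f(x) = x^2 + x$, the quadrature helper, and the chosen index are our own illustrative choices) computes $c_3$ both from the exponential formula (6.22) and from the trigonometric coefficients:

```python
import cmath
import math

def midpoint(f, a, b, n=20000):
    """Composite midpoint rule on [a, b] with n panels."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x + x
n = 3

# c_n from the complex exponential formula (6.22).
c_n = midpoint(lambda x: f(x) * cmath.exp(1j * n * x),
               -math.pi, math.pi) / (2 * math.pi)

# a_n and b_n from the trigonometric formulas (6.13).
a_n = midpoint(lambda x: f(x) * math.cos(n * x), -math.pi, math.pi) / math.pi
b_n = midpoint(lambda x: f(x) * math.sin(n * x), -math.pi, math.pi) / math.pi

print(abs(c_n - (a_n + 1j * b_n) / 2))  # agreement to rounding error
```

Since $e^{inx} = \cos nx + i\sin nx$ holds pointwise, the two computations agree to rounding error regardless of the quadrature accuracy.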

We can easily extend the above analysis to other intervals. For example, for $x \in [-L, L]$ the Fourier trigonometric series is

$$f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\left( a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L} \right)$$

with Fourier coefficients

$$a_n = \frac{1}{L}\int_{-L}^{L} f(x)\cos\frac{n\pi x}{L}\,dx, \qquad n = 0, 1, \ldots,$$
$$b_n = \frac{1}{L}\int_{-L}^{L} f(x)\sin\frac{n\pi x}{L}\,dx, \qquad n = 1, 2, \ldots.$$

This can be rewritten as an exponential Fourier series of the form

$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{-in\pi x/L} \qquad \text{with} \qquad c_n = \frac{1}{2L}\int_{-L}^{L} f(x)e^{in\pi x/L}\,dx.$$

Finally, we note that these expressions can be put into the form

$$f(x) \sim \sum_{n=-\infty}^{\infty} c_n e^{-ik_n x} \qquad \text{with} \qquad c_n = \frac{1}{2L}\int_{-L}^{L} f(x)e^{ik_n x}\,dx,$$

where we have introduced the discrete set of wave numbers

$$k_n = \frac{n\pi}{L}.$$

At times, we will also be interested in functions of time. In this case we will have a function $g(t)$ defined on a time interval $[-T, T]$. The exponential Fourier series will then take the form

$$g(t) \sim \sum_{n=-\infty}^{\infty} c_n e^{-i\omega_n t},$$

with

$$c_n = \frac{1}{2T}\int_{-T}^{T} g(t)e^{i\omega_n t}\,dt.$$

Here we have introduced the discrete set of angular frequencies $\omega_n$, which can be related to the corresponding discrete set of frequencies by

$$\omega_n = 2\pi f_n = \frac{n\pi}{T} \qquad \text{with} \qquad f_n = \frac{n}{2T}.$$

6.3 Exponential Fourier Transform

Both the trigonometric and complex exponential Fourier series provide us with representations of a class of functions of finite wavelength, or functions of time, in terms of sums over a discrete set of wave numbers. On intervals $[-L, L]$ the sums are over the wavelengths $\lambda_n = \frac{2L}{n}$. This is a discrete, or countable, set of wavelengths. Writing the arguments in terms of wavelengths, we define $k = \frac{2\pi}{\lambda}$ and have $k_n = \frac{2\pi}{\lambda_n} = \frac{n\pi}{L}$. A similar argument can be made for time series, which occur more often in signal analysis.

We would now like to extend our interval to $x \in (-\infty, \infty)$ and to extend the discrete set of wave numbers to a continuous set of wave numbers. One can do this rigorously, but it amounts to letting $L$ and $n$ get large while keeping $\frac{n}{L}$ fixed; the sum over the discrete set of wave numbers then becomes an integral. Formally, we arrive at the Fourier transform

$$F[f] = \hat{f}(k) = \int_{-\infty}^{\infty} f(x)e^{ikx}\,dx. \qquad (6.23)$$

This is a generalization of the Fourier coefficients (6.20). Once we know the Fourier transform, then we can reconstruct our function using the inverse Fourier transform, which is given by

$$F^{-1}[\hat{f}] = f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(k)e^{-ikx}\,dk. \qquad (6.24)$$

We note that it can be proven that the Fourier transform exists when $f(x)$ is absolutely integrable, i.e.,

$$\int_{-\infty}^{\infty} |f(x)|\,dx < \infty.$$

Such functions are said to be $L^1$.

The Fourier transform and inverse Fourier transform are inverse operations. This means that

$$F^{-1}[F[f]] = f(x) \qquad \text{and} \qquad F[F^{-1}[\hat{f}]] = \hat{f}(k).$$

We will now prove the first of these equations; the second follows in a similar way. The proof is done by inserting the definition of the Fourier transform into the inverse transform definition and then interchanging the orders of integration. Thus, we have

$$F^{-1}[F[f]] = \frac{1}{2\pi}\int_{-\infty}^{\infty} F[f]\,e^{-ikx}\,dk = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[ \int_{-\infty}^{\infty} f(\xi)e^{ik\xi}\,d\xi \right] e^{-ikx}\,dk$$
$$= \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(\xi)e^{ik(\xi - x)}\,d\xi\,dk = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[ \int_{-\infty}^{\infty} e^{ik(\xi - x)}\,dk \right] f(\xi)\,d\xi. \qquad (6.25)$$

In order to complete the proof, we need to evaluate the inside integral, which does not depend upon $f(x)$. This is an improper integral, so we will define

$$D_L(x) = \int_{-L}^{L} e^{ikx}\,dk$$

and compute the inner integral as

$$\int_{-\infty}^{\infty} e^{ik(\xi - x)}\,dk = \lim_{L\to\infty} D_L(\xi - x).$$

We can compute $D_L(x)$. A simple evaluation yields

$$D_L(x) = \int_{-L}^{L} e^{ikx}\,dk = \left.\frac{e^{ikx}}{ix}\right|_{-L}^{L} = \frac{e^{ixL} - e^{-ixL}}{ix} = \frac{2\sin xL}{x}. \qquad (6.26)$$

[Figure 6.3: A plot of the function $D_L(x)$ for $L = 4$.]

A plot of this function is shown in Figure 6.3. For large $x$, the function tends to zero. We can also show that as $x \to 0$, $D_L(x) \to 2L$:

$$\lim_{x\to 0} D_L(x) = \lim_{x\to 0}\frac{2\sin xL}{x} = \lim_{x\to 0} 2L\,\frac{\sin xL}{xL} = 2L\lim_{y\to 0}\frac{\sin y}{y} = 2L. \qquad (6.27)$$

For large $L$ the peak grows and the values of $D_L(x)$ for $x \ne 0$ tend to zero, as shown in Figure 6.4.

[Figure 6.4: A plot of the function $D_L(x)$ for $L = 40$.]

We note that in the limit $L \to \infty$, $D_L(x) \to 0$ for $x \ne 0$ and it is infinite at $x = 0$. However, the area under each of these curves is constant. Using the substitution $y = Lx$ to carry out the integration, together with a previous result from complex analysis (in the last chapter we had shown that $P\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx = \pi$), we find

$$\int_{-\infty}^{\infty} D_L(x)\,dx = \int_{-\infty}^{\infty} \frac{2\sin xL}{x}\,dx = 2\int_{-\infty}^{\infty} \frac{\sin y}{y}\,dy = 2\pi. \qquad (6.28)$$

Thus, the limiting function is zero at most points but has constant area $2\pi$. The limit is not really a function. It is a generalized function, called the Dirac delta function, which is defined by

$$\delta(x) = 0 \text{ for } x \ne 0, \qquad \int_{-\infty}^{\infty} \delta(x)\,dx = 1.$$

This behavior can also be represented by the limit of other sequences of functions. Define the sequence of functions (not to be confused with frequencies)

$$f_n(x) = \begin{cases} 0, & |x| > \frac{1}{n}, \\[4pt] \frac{n}{2}, & |x| < \frac{1}{n}. \end{cases}$$

This is a sequence of functions, as shown in Figure 6.5. As $n \to \infty$, we find the limit is zero for $x \ne 0$ and is infinite for $x = 0$. However, the area under each member of the sequence is one.

[Figure 6.5: A plot of the functions $f_n(x)$ for $n = 2, 4, 8$.]

As a further note, we could have considered the sequence of functions

$$g_n(x) = \begin{cases} 0, & |x| > \frac{1}{n}, \\[4pt] 2n, & |x| < \frac{1}{n}. \end{cases}$$

This sequence differs from the $f_n$'s by the heights of the functions. As before, the limit as $n \to \infty$ is zero for $x \ne 0$ and is infinite for $x = 0$. However, the area under each member of this sequence is now $2n \times \frac{2}{n} = 4$. So, it is not enough that a sequence of functions consist of nonzero values at just one point in the limit; the height might be infinite, but the areas can vary! In this case $\lim_{n\to\infty} g_n(x) = 4\delta(x)$.

Before returning to the proof, we state one more property of the Dirac delta function. We have that

$$\int_{-\infty}^{\infty} \delta(x - a)f(x)\,dx = f(a).$$

This is called the sifting property, because it sifts out a value of the function $f(x)$. We will prove it in the next section.

Returning to the proof, we now have that

$$\int_{-\infty}^{\infty} e^{ik(\xi - x)}\,dk = \lim_{L\to\infty} D_L(\xi - x) = 2\pi\delta(\xi - x). \qquad (6.29)$$

Inserting this into (6.25), we have

$$F^{-1}[F[f]] = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left[ \int_{-\infty}^{\infty} e^{ik(\xi - x)}\,dk \right] f(\xi)\,d\xi = \frac{1}{2\pi}\int_{-\infty}^{\infty} 2\pi\delta(\xi - x)f(\xi)\,d\xi = f(x).$$

Thus, we have proven that the inverse transform of the Fourier transform of $f$ is $f$.

6.4 The Dirac Delta Function

In the last section we introduced the Dirac delta function, $\delta(x)$. This is one example of what is known as a generalized function, or a distribution. Dirac had introduced this function in the 1930's in his study of quantum mechanics as a useful tool. It was later studied in a general theory of distributions and found to be more than a simple tool used by physicists. The Dirac delta function, as any distribution, only makes sense under an integral.

Two properties were used in the last section. First, one has that the area under the delta function is one:

$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1.$$

More generally, integration over an arbitrary interval gives

$$\int_a^b \delta(x)\,dx = \begin{cases} 1, & 0 \in [a, b], \\ 0, & 0 \notin [a, b]. \end{cases}$$

The other property that was used was the sifting property:

$$\int_{-\infty}^{\infty} \delta(x - a)f(x)\,dx = f(a).$$

This can be seen by noting that the delta function is zero everywhere except at $x = a$. Therefore, the integrand is zero everywhere and the only contribution from $f(x)$ will be from $x = a$. So, we can replace $f(x)$ with $f(a)$ under the integral. Since $f(a)$ is a constant, we have that

$$\int_{-\infty}^{\infty} \delta(x - a)f(x)\,dx = \int_{-\infty}^{\infty} \delta(x - a)f(a)\,dx = f(a)\int_{-\infty}^{\infty} \delta(x - a)\,dx = f(a).$$

Other occurrences of the delta function are integrals of the form

$$\int_{-\infty}^{\infty} \delta(f(x))\,dx.$$

Such integrals can be converted into a useful form

depending upon the number of zeros of $f(x)$. If there is only one zero, $x_1$, with $f(x_1) = 0$, then one has that

$$\int_{-\infty}^{\infty} \delta(f(x))\,dx = \int_{-\infty}^{\infty} \frac{1}{|f'(x_1)|}\,\delta(x - x_1)\,dx.$$

This can be proven using the substitution $y = f(x)$ and is left as an exercise for the reader. This result is often written as

$$\delta(f(x)) = \frac{1}{|f'(x_1)|}\,\delta(x - x_1).$$

More generally, one can show that when $f(x_j) = 0$ for $x_j$, $j = 1, 2, \ldots, n$, then

$$\delta(f(x)) = \sum_{j=1}^{n} \frac{1}{|f'(x_j)|}\,\delta(x - x_j).$$

Example. Evaluate $\int_{-\infty}^{\infty} \delta(3x - 2)\,x^2\,dx$.

This is not a simple $\delta(x - a)$. So, we need to find the zeros of $f(x) = 3x - 2$. There is only one, $x = \frac{2}{3}$. Also, $|f'(x)| = 3$. Therefore, we have

$$\int_{-\infty}^{\infty} \delta(3x - 2)\,x^2\,dx = \int_{-\infty}^{\infty} \frac{1}{3}\,\delta\!\left(x - \frac{2}{3}\right)x^2\,dx = \frac{1}{3}\left(\frac{2}{3}\right)^2 = \frac{4}{27}.$$

One can also show that there is a relationship between the Heaviside, or step, function and the Dirac delta function. We define the Heaviside function as

$$H(x) = \begin{cases} 0, & x < 0, \\ 1, & x > 0. \end{cases}$$

Then, it is easy to see that $H'(x) = \delta(x)$. For $x \ne 0$, $H'(x) = 0$, while $H(x)$ has an infinite slope at $x = 0$. We need only check that the area is one:

$$\int_{-\infty}^{\infty} H'(x)\,dx = \lim_{L\to\infty}\left[ H(L) - H(-L) \right] = 1.$$

In some texts the notation $\theta(x)$ is used for the step function.
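The scaling rule $\delta(f(x)) = \sum_j \delta(x - x_j)/|f'(x_j)|$ can be tested by replacing $\delta$ with a narrow normalized Gaussian in its argument. In the following standard-library sketch (the width `eps`, the helper names, and the quadrature settings are our own choices), the value $\frac{4}{27}$ from the first example above is recovered:

```python
import math

def delta_approx(y, eps):
    """Normalized Gaussian of width eps, approximating delta(y)."""
    return math.exp(-y * y / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def midpoint(f, a, b, n=200000):
    """Composite midpoint rule on [a, b] with n panels."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

eps = 1e-3
# int x^2 delta(3x - 2) dx: the Gaussian in the argument 3x - 2 is
# centered at x = 2/3 and carries the 1/|f'| = 1/3 factor automatically.
approx = midpoint(lambda x: x * x * delta_approx(3 * x - 2, eps), 0.0, 2.0)
print(approx, 4 / 27)  # both approximately 0.14815
```

Shrinking `eps` (while refining the grid accordingly) drives the approximation toward the exact value $\frac{1}{3}\left(\frac{2}{3}\right)^2$.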

Example. Evaluate $\int_{-\infty}^{\infty} \delta(2x)(x^2 + 4)\,dx$.

This problem is deceiving. One cannot just plug $x = 0$ into the function $x^2 + 4$; one has to use the fact that the derivative of $2x$ is $2$. So, $\delta(2x) = \frac{1}{2}\delta(x)$, and

$$\int_{-\infty}^{\infty} \delta(2x)(x^2 + 4)\,dx = \frac{1}{2}\int_{-\infty}^{\infty} \delta(x)(x^2 + 4)\,dx = \frac{1}{2}(0 + 4) = 2.$$

6.5 Properties of the Fourier Transform

We now return to the Fourier transform. Before actually computing the Fourier transforms of some functions, we prove a few of the properties of the Fourier transform.

First we recall that there are several forms that one may encounter for the Fourier transform. In applications our functions can either be functions of time, $f(t)$, or space, $f(x)$. The corresponding Fourier transforms are then written as

$$\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)e^{i\omega t}\,dt \qquad (6.30)$$

or

$$\hat{f}(k) = \int_{-\infty}^{\infty} f(x)e^{ikx}\,dx. \qquad (6.31)$$

Here $\omega$ is called the angular frequency and is related to the frequency $\nu$ by $\omega = 2\pi\nu$. The units of frequency are typically given in Hertz (Hz). Sometimes the frequency is denoted by $f$ when there is no confusion. Recall that $k$ is called the wavenumber. It has units of inverse length and is related to the wavelength, $\lambda$, by $k = \frac{2\pi}{\lambda}$.

1. Linearity. For any functions $f(x)$ and $g(x)$ for which the Fourier transform exists, and constant $a$, we have

$$F[f + g] = F[f] + F[g] \qquad \text{and} \qquad F[af] = aF[f].$$

These simply follow from the properties of integration and establish the linearity of the Fourier transform.

2. $F\left[\frac{df}{dx}\right] = -ik\hat{f}(k)$.

This property can be shown using integration by parts:

$$F\left[\frac{df}{dx}\right] = \int_{-\infty}^{\infty} \frac{df}{dx}e^{ikx}\,dx = \lim_{L\to\infty} f(x)e^{ikx}\Big|_{-L}^{L} - ik\int_{-\infty}^{\infty} f(x)e^{ikx}\,dx. \qquad (6.32)$$

The boundary term will vanish if we assume that $\lim_{x\to\pm\infty} f(x) = 0$. The remaining integral is recognized as the Fourier transform of $f$, proving the given property:

$$F\left[\frac{df}{dx}\right] = -ik\hat{f}(k). \qquad (6.33)$$

3. $F\left[\frac{d^n f}{dx^n}\right] = (-ik)^n\hat{f}(k)$.

The proof of this property follows from the last result, or by doing several integrations by parts. We will consider the case when $n = 2$. Noting that the second derivative is the derivative of $f'(x)$ and applying the last result, we have

$$F\left[\frac{d^2 f}{dx^2}\right] = F\left[\frac{d}{dx}f'\right] = -ikF\left[\frac{df}{dx}\right] = (-ik)^2\hat{f}(k).$$

This result will be true if both $\lim_{x\to\pm\infty} f(x) = 0$ and $\lim_{x\to\pm\infty} f'(x) = 0$. Generalizations to the transform of the $n$th derivative easily follow.

4. $F[xf(x)] = -i\frac{d}{dk}\hat{f}(k)$.

This property can be shown by using the fact that $\frac{d}{dk}e^{ikx} = ixe^{ikx}$ and being able to differentiate an integral with respect to a parameter:

$$F[xf(x)] = \int_{-\infty}^{\infty} xf(x)e^{ikx}\,dx = \int_{-\infty}^{\infty} f(x)\frac{d}{dk}\left(\frac{1}{i}e^{ikx}\right)dx = -i\frac{d}{dk}\int_{-\infty}^{\infty} f(x)e^{ikx}\,dx = -i\frac{d\hat{f}}{dk}. \qquad (6.34)$$

5. Shifting Properties. For constant $a$, we have the following shifting properties:

$$f(x - a) \leftrightarrow e^{ika}\hat{f}(k), \qquad (6.35)$$
$$f(x)e^{-iax} \leftrightarrow \hat{f}(k - a). \qquad (6.36)$$

Here we have denoted the Fourier transform pairs as $f(x) \leftrightarrow \hat{f}(k)$. These are easily proven by inserting the desired forms into the definition of the Fourier transform, or inverse Fourier transform. The first shift property is shown by the following argument. We evaluate the Fourier transform:

$$F[f(x - a)] = \int_{-\infty}^{\infty} f(x - a)e^{ikx}\,dx.$$

Now perform the substitution $y = x - a$. Then,

$$F[f(x - a)] = \int_{-\infty}^{\infty} f(y)e^{ik(y + a)}\,dy = e^{ika}\int_{-\infty}^{\infty} f(y)e^{iky}\,dy = e^{ika}\hat{f}(k).$$

The second shift property follows in a similar way.

6. Convolution. We define the convolution of two functions $f(x)$ and $g(x)$ as

$$(f * g)(x) = \int_{-\infty}^{\infty} f(t)g(x - t)\,dt. \qquad (6.37)$$

Then

$$F[f * g] = \hat{f}(k)\hat{g}(k). \qquad (6.38)$$

We will return to the proof and examples of this property in a later section.

6.5.1 Fourier Transform Examples

In this section we will compute the Fourier transforms of several functions.

Example 1. $f(x) = e^{-ax^2/2}$.

This function is called the Gaussian function. It has many applications in areas such as quantum mechanics, molecular theory, probability, and heat diffusion.

We will compute the Fourier transform of this function and show that the Fourier transform of a Gaussian is a Gaussian. In the derivation we will introduce classic techniques for computing such integrals. We begin by applying the definition of the Fourier transform:

$$\hat{f}(k) = \int_{-\infty}^{\infty} f(x)e^{ikx}\,dx = \int_{-\infty}^{\infty} e^{-ax^2/2 + ikx}\,dx. \qquad (6.39)$$

The first step in computing this integral is to complete the square in the argument of the exponential. Our goal is to rewrite this integral so that a simple substitution will lead to a classic integral of the form $\int_{-\infty}^{\infty} e^{-\beta y^2}\,dy$, which we can integrate. The completion of the square follows as usual:

$$-\frac{a}{2}x^2 + ikx = -\frac{a}{2}\left(x^2 - \frac{2ik}{a}x\right) = -\frac{a}{2}\left(x^2 - \frac{2ik}{a}x + \left(\frac{ik}{a}\right)^2 - \left(\frac{ik}{a}\right)^2\right) = -\frac{a}{2}\left(x - \frac{ik}{a}\right)^2 - \frac{k^2}{2a}. \qquad (6.40)$$

Using this result in the integral and making the substitution $y = x - \frac{ik}{a}$, we have

$$\hat{f}(k) = e^{-\frac{k^2}{2a}}\int_{-\infty}^{\infty} e^{-\frac{a}{2}\left(x - \frac{ik}{a}\right)^2}\,dx = e^{-\frac{k^2}{2a}}\int_{-\infty - \frac{ik}{a}}^{\infty - \frac{ik}{a}} e^{-\beta y^2}\,dy, \qquad \beta = \frac{a}{2}. \qquad (6.41)$$

One would be tempted to absorb the $-\frac{ik}{a}$ terms in the limits of integration. In fact, this is what is usually done in texts. However, we need to be careful. We know from our previous study that the integration takes place over a contour in the complex plane. We can deform this horizontal contour to a contour along the real axis, since we will not cross any singularities of the integrand. So, we can safely write

$$\hat{f}(k) = e^{-\frac{k^2}{2a}}\int_{-\infty}^{\infty} e^{-\beta y^2}\,dy.$$

The resulting integral is a classic integral and can be performed using a standard trick. Let $I$ be given by

$$I = \int_{-\infty}^{\infty} e^{-\beta y^2}\,dy.$$

Then,

$$I^2 = \int_{-\infty}^{\infty} e^{-\beta y^2}\,dy\int_{-\infty}^{\infty} e^{-\beta x^2}\,dx.$$

Note that we needed to introduce a second integration variable. We can now write this product as a double integral:

$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\beta(x^2 + y^2)}\,dx\,dy.$$

This is an integral over the entire $xy$-plane. Since the integrand is a function of $x^2 + y^2$, it is natural to transform to polar coordinates. We have that $r^2 = x^2 + y^2$ and the area element is given by $dx\,dy = r\,dr\,d\theta$. Therefore, we have that

$$I^2 = \int_0^{2\pi}\int_0^{\infty} e^{-\beta r^2}\,r\,dr\,d\theta.$$

This integral is doable. Letting $z = r^2$, we have

$$\int_0^{\infty} e^{-\beta r^2}\,r\,dr = \frac{1}{2}\int_0^{\infty} e^{-\beta z}\,dz = \frac{1}{2\beta}.$$

This gives $I^2 = \frac{\pi}{\beta}$. So, the final result is found by taking the square root of both sides:

$$I = \sqrt{\frac{\pi}{\beta}}.$$

We can now insert this result into Equation (6.41) to give the Fourier transform of the Gaussian function:

$$\hat{f}(k) = \sqrt{\frac{2\pi}{a}}\,e^{-k^2/2a}. \qquad (6.42)$$
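A quick quadrature check of (6.42) is possible with the standard library; the parameter values, truncation interval, and helper below are our own choices for illustration.

```python
import cmath
import math

def midpoint(f, lo, hi, n=100000):
    """Composite midpoint rule on [lo, hi] with n panels."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a, k = 2.0, 1.5
# The Gaussian decays so fast that truncating at |x| = 20 loses nothing
# visible in double precision.
fhat = midpoint(lambda x: cmath.exp(-a * x * x / 2 + 1j * k * x),
                -20.0, 20.0)
exact = math.sqrt(2 * math.pi / a) * math.exp(-k * k / (2 * a))
print(abs(fhat - exact))  # agreement to quadrature accuracy
```

Repeating this for several values of $k$ traces out the Gaussian shape of $\hat{f}(k)$ predicted by the contour argument above.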

Example 2. $f(x) = \begin{cases} b, & |x| \le a, \\ 0, & |x| > a. \end{cases}$

This function is called the box function, or gate function. It is shown in Figure 6.6.

[Figure 6.6: A plot of the box function in Example 2.]

The Fourier transform of the box function is relatively easy to compute. It is given by

$$\hat{f}(k) = \int_{-\infty}^{\infty} f(x)e^{ikx}\,dx = \int_{-a}^{a} be^{ikx}\,dx = \left.\frac{b}{ik}e^{ikx}\right|_{-a}^{a} = \frac{2b}{k}\sin ka. \qquad (6.43)$$

We can rewrite this using the sinc function, $\mathrm{sinc}\,x \equiv \frac{\sin x}{x}$, as

$$\hat{f}(k) = 2ab\,\frac{\sin ka}{ka} = 2ab\,\mathrm{sinc}\,ka.$$

The sinc function appears often in signal analysis. A plot of this transform is shown in Figure 6.7. We will now consider special limiting values for the box function and its transform.

(a) $a \to \infty$ and $b$ fixed.

In this case, as $a$ gets large the box function approaches the constant function $f(x) = b$. At the same time, we see that the Fourier transform approaches a Dirac delta function. We had seen this function earlier when we first defined the Dirac delta function. Compare Figure 6.7 with Figure 6.3.
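The transform (6.43) can be confirmed by direct numerical integration over $[-a, a]$. This standard-library sketch uses arbitrary parameter values of our own choosing:

```python
import cmath
import math

def midpoint(f, lo, hi, n=50000):
    """Composite midpoint rule on [lo, hi] with n panels."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a, b, k = 1.0, 0.5, 2.7
# Only the interval [-a, a] contributes, since f vanishes outside it.
fhat = midpoint(lambda x: b * cmath.exp(1j * k * x), -a, a)
exact = 2 * b * math.sin(k * a) / k
print(abs(fhat - exact))  # small quadrature error
```

Sweeping `k` over a grid reproduces the oscillating sinc profile of Figure 6.7.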

[Figure 6.7: A plot of the Fourier transform of the box function in Example 2.]

In fact, $\hat{f}(k) = bD_a(k)$, where $D_a$ is the function defined in Equation (6.26). So, in the limit we obtain $\hat{f}(k) = 2\pi b\,\delta(k)$. This limit gives us the fact that the Fourier transform of $f(x) = 1$ is $\hat{f}(k) = 2\pi\delta(k)$. Namely, we have arrived at the result that

$$\int_{-\infty}^{\infty} e^{ikx}\,dx = 2\pi\delta(k). \qquad (6.44)$$

(b) $b \to \infty$, $a \to 0$, and $2ab = 1$.

In this case our box narrows and becomes steeper while maintaining a constant area of one. This is the way we had found a representation of the Dirac delta function previously. The Fourier transform approaches a constant in this limit: as $a$ approaches zero, the sinc function approaches one, leaving $\hat{f}(k) \to 2ab = 1$. Thus, the Fourier transform of the Dirac delta function is one. Namely, we have

$$\int_{-\infty}^{\infty} \delta(x)e^{ikx}\,dx = 1. \qquad (6.45)$$

(c) The Uncertainty Principle.

The widths of the box function and its Fourier transform are related, as we have seen in the last two limiting cases. In these cases we have that the more localized the function $f(x)$ is, the more spread out the Fourier transform is; and as $f(x)$ becomes wider, the Fourier transform becomes more localized. It is

a ∆k = 2π . Example 3 f (x) = e−ax . where is it most known. one obtains ∆x∆p ≥ ¯ . one needs to deﬁne the eﬀective widths more carefully. of the box function as ∆x = 2a. ∆k as the distance between the ﬁrst zeros on either side of the main lobe. a > 0. This function actually extends along the entire k-axis. the central peak becomes narrower. p = ¯ k. The ﬁrst zeros are at k = ± π . In quantum mechanics (or modern physics). sin ka = 0. but the main idea holds: ∆x∆k ≥ 1. a Combining the expressions for the two widths. x<0 . it appears in quantum mechanics. However. as ˆ f (k) becomes more localized. one ﬁnds that the momentum is given in terms of the wave number. 0. Thus. PROPERTIES OF THE FOURIER TRANSFORM natural to deﬁne the width. In particular. the uncertainty principle arises in other forms elsewhere. While this is a result of Fourier transforms. the less localized its transform (larger δk). This notion is referred to as the Uncertainty Principle. we ﬁnd that ∆x∆k = 4π. So. the zeros are at the zeros of the sine k function. ∆x. h This gives the famous uncertainty relation between the uncertainties in position and momentum. where h is Planck’s constant divided by h ¯ 2π. ˆ Since f (k) = 2b sin ka. 269 The width of the Fourier transform is a little trickier.5. we deﬁne the width of this function. For more general signals. x ≥ 0 .6. Inserting this into the above condition. Thus. the more localized a signal (smaller δx).

The Fourier transform of this function is

    f̂(k) = ∫_{-∞}^{∞} f(x) e^{ikx} dx = ∫_0^{∞} e^{ikx − ax} dx = 1/(a − ik).        (6.46)

Example 4 f̂(k) = 1/(a − ik).

In this example, we will compute the inverse Fourier transform of this result and recover the original function. The inverse Fourier transform of this function is

    f(x) = (1/2π) ∫_{-∞}^{∞} f̂(k) e^{-ikx} dk = (1/2π) ∫_{-∞}^{∞} e^{-ikx}/(a − ik) dk.

This integral can be evaluated using contour integral methods. Thus, we have to evaluate the integral

    I = ∫_{-∞}^{∞} e^{-ixz}/(a − iz) dz.

We recall Jordan's Lemma from the last chapter: if f(z) converges uniformly to zero as z → ∞, then

    lim_{R→∞} ∫_{C_R} f(z) e^{ikz} dz = 0,

where k > 0 and C_R is the upper half of the circle |z| = R. A similar result applies for k < 0, but one closes the contour in the lower half plane.

According to Jordan's Lemma, we need to enclose the contour with a semicircle in the upper half plane for x < 0 and in the lower half plane for x > 0. The integrations along the semicircles will vanish, and we will have

    f(x) = (1/2π) ∫_{-∞}^{∞} e^{-ikx}/(a − ik) dk = ±(1/2π) ∮_C e^{-ixz}/(a − iz) dz
         = { 0,                                x < 0
           { −(1/2π) 2πi Res[z = −ia],         x > 0
         = { 0,        x < 0
           { e^{-ax},  x > 0.        (6.47)

Example 5 f̂(ω) = πδ(ω + ω₀) + πδ(ω − ω₀).

We would like to find the inverse Fourier transform of this function. Instead of carrying out any integration, we will make use of the properties of Fourier transforms. Since the transforms of sums are the sums of transforms, we can look at each term individually. Consider δ(ω − ω₀). This is a shifted function. From the Shift Theorems we have

    e^{iω₀t} f(t) ↔ f̂(ω − ω₀).

Recalling from a previous example that

    ∫_{-∞}^{∞} 1 · e^{iωt} dt = 2πδ(ω),

we have

    F^{-1}[δ(ω − ω₀)] = (1/2π) e^{-iω₀t}.

The other term can be transformed similarly. Therefore, we have

    F^{-1}[πδ(ω + ω₀) + πδ(ω − ω₀)] = (1/2) e^{iω₀t} + (1/2) e^{-iω₀t} = cos ω₀t.

Example 6 The Finite Wave Train,

    f(t) = cos ω₀t, |t| ≤ a;   0, |t| > a.

For our last example, we consider the finite wave train, which often appears in signal analysis. A straightforward computation gives (the odd, imaginary part of the integrand integrates to zero)

    f̂(ω) = ∫_{-∞}^{∞} f(t) e^{iωt} dt
         = ∫_{-a}^{a} cos ω₀t e^{iωt} dt
         = ∫_{-a}^{a} cos ω₀t cos ωt dt
         = (1/2) ∫_{-a}^{a} [cos(ω₀ + ω)t + cos(ω₀ − ω)t] dt
         = sin(ω₀ + ω)a/(ω₀ + ω) + sin(ω₀ − ω)a/(ω₀ − ω).        (6.48)
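Equation (6.48) can be spot-checked numerically. In the sketch below, the values of ω₀, a, and ω are arbitrary test choices, not from the text; the integral is approximated on a fine grid:

```python
import numpy as np

# Numerical spot-check of Eq. (6.48) for the finite wave train.
# w0, a, and w are arbitrary test values.
w0, a, w = 2.0, 1.5, 0.7
t = np.linspace(-a, a, 200001)
dt = t[1] - t[0]

f_hat = np.sum(np.cos(w0 * t) * np.exp(1j * w * t)) * dt
analytic = np.sin((w0 + w) * a) / (w0 + w) + np.sin((w0 - w) * a) / (w0 - w)

print(abs(f_hat - analytic) < 1e-3)   # True
```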

6.6 The Convolution Theorem -Optional

In our list of properties, we defined the convolution of two functions, f(x) and g(x), to be the integral

    (f ∗ g)(x) = ∫_{-∞}^{∞} f(ξ) g(x − ξ) dξ.        (6.49)

In some sense one is looking at a sum of the overlaps of one of the functions and all of the shifted versions of the other function. The German word for convolution is Faltung, which means 'folding'.

First, we note that the convolution is commutative: f ∗ g = g ∗ f. This is easily shown by replacing x − ξ with a new variable, y:

    (g ∗ f)(x) = ∫_{-∞}^{∞} g(ξ) f(x − ξ) dξ
               = −∫_{∞}^{-∞} g(x − y) f(y) dy
               = ∫_{-∞}^{∞} f(y) g(x − y) dy = (f ∗ g)(x).        (6.50)

Example Graphical Convolution. In order to understand the convolution operation, we need to apply it to several functions. We will do this graphically for the box function

    f(x) = 1, |x| < 1;   0, |x| > 1,

and the triangular function

    g(x) = x, |x| < 1;   0, |x| > 1,

as shown in Figures 6.8 and 6.9.

Figure 6.8: A plot of the box function f(x).

Figure 6.9: A plot of the triangle function.

In order to determine the contributions to the integrand, we look at the shifted and reflected function g(ξ − x) for various values of ξ. For ξ = 0, we have g(−x). This is a reflection of the triangle function, as shown in Figure 6.10.

Figure 6.10: A plot of the reflected triangle function.

We then translate this function through horizontal shifts by ξ. In Figure 6.11 we show such a shifted and reflected g(x) for ξ = 2. The following figures show other shifts superimposed on f(x). The integrand is the product of f(x) and g(ξ − x), and the convolution evaluated at ξ is given by the shaded areas. In Figures 6.12 and 6.16 the area is zero, as there is no overlap of the functions. Intermediate shift values are displayed in Figures 6.13-6.15, and the convolution is shown by the area under the product of the two functions.

Figure 6.11: A plot of the reflected triangle function shifted by 2 units.

Figure 6.12: A plot of the box and triangle functions with the convolution indicated by the shaded area.
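The shaded-area construction can also be reproduced numerically. The sketch below (the grid spacing is an arbitrary choice) samples the box function and the triangular function g(x) = x on |x| < 1 and approximates the convolution integral with np.convolve:

```python
import numpy as np

# Numerical sketch of the graphical convolution: f is the box function
# (1 on |x| < 1) and g(x) = x on |x| < 1, both zero elsewhere.
dx = 0.001
x = np.arange(-4, 4, dx)
f = np.where(np.abs(x) < 1, 1.0, 0.0)
g = np.where(np.abs(x) < 1, x, 0.0)

# (f*g)(x) = \int f(xi) g(x - xi) dxi, approximated on the grid
conv_fg = np.convolve(f, g, mode='same') * dx
conv_gf = np.convolve(g, f, mode='same') * dx

print(np.allclose(conv_fg, conv_gf))                      # commutativity
print(np.max(np.abs(conv_fg[np.abs(x) > 2.01])) < 1e-12)  # zero where no overlap
```

Both checks print True: the numerical convolution is commutative, and it vanishes for |x| > 2, where the supports of the two functions no longer overlap.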

Figure 6.13: A plot of the box and triangle functions with the convolution indicated by the shaded area.

Figure 6.14: A plot of the box and triangle functions with the convolution indicated by the shaded area.

Figure 6.15: A plot of the box and triangle functions with the convolution indicated by the shaded area.

Figure 6.16: A plot of the box and triangle functions with the convolution indicated by the shaded area.

Figure 6.17: A plot of the convolution of the box and triangle functions.

The plot of the convolution of the box and triangle functions is given in Figure 6.17. We see that the value of the convolution integral builds up and then quickly drops to zero.

Next we would like to compute the Fourier transform of the convolution integral. First, we use the definitions of the Fourier transform and of the convolution to write the transform as

    F[f ∗ g] = ∫_{-∞}^{∞} (f ∗ g)(x) e^{ikx} dx
             = ∫_{-∞}^{∞} ( ∫_{-∞}^{∞} f(ξ) g(x − ξ) dξ ) e^{ikx} dx
             = ∫_{-∞}^{∞} ( ∫_{-∞}^{∞} g(x − ξ) e^{ikx} dx ) f(ξ) dξ.        (6.51)

Next, we substitute y = x − ξ in the inside integral and separate the integrals:

    F[f ∗ g] = ∫_{-∞}^{∞} ( ∫_{-∞}^{∞} g(y) e^{ik(y+ξ)} dy ) f(ξ) dξ        (6.52)
             = ( ∫_{-∞}^{∞} g(y) e^{iky} dy ) ( ∫_{-∞}^{∞} f(ξ) e^{ikξ} dξ ).        (6.53)

We see that the two integral factors are just the Fourier transforms of f and g. Therefore, the Fourier transform of a convolution is the product of the Fourier transforms of the functions involved:

    F[f ∗ g] = f̂(k) ĝ(k).

Example Convolution of two Gaussian functions. We will compute the convolution of two Gaussian functions with different widths. Let f(x) = e^{-ax²/2} and g(x) = e^{-bx²/2}. A direct evaluation of the integral would be to compute

    (f ∗ g)(x) = ∫_{-∞}^{∞} f(ξ) g(x − ξ) dξ = ∫_{-∞}^{∞} e^{-aξ²/2 − b(x−ξ)²/2} dξ.        (6.54)

This integral can be rewritten as

    (f ∗ g)(x) = e^{-bx²/2} ∫_{-∞}^{∞} e^{-(a+b)ξ²/2 + bxξ} dξ.

One could proceed to complete the square and finish carrying out the integration. However, we will use the Convolution Theorem to evaluate the convolution. Recalling the Fourier transform of a Gaussian, we have

    f̂(k) = F[e^{-ax²/2}] = √(2π/a) e^{-k²/2a}        (6.55)

and

    ĝ(k) = F[e^{-bx²/2}] = √(2π/b) e^{-k²/2b}.

Denoting the convolution function by h(x) = (f ∗ g)(x), the Convolution Theorem gives

    ĥ(k) = f̂(k) ĝ(k) = (2π/√(ab)) e^{-k²/2a} e^{-k²/2b}.

This is another Gaussian function, as seen by rewriting the Fourier transform of h(x) as

    ĥ(k) = (2π/√(ab)) e^{-(1/2)(1/a + 1/b)k²} = (2π/√(ab)) e^{-((a+b)/2ab)k²}.        (6.56)

To complete the evaluation of the convolution of these two Gaussian functions, we need to find the inverse transform of the Gaussian in Equation (6.56). We have first that

    F^{-1}[√(2π/a) e^{-k²/2a}] = e^{-ax²/2}.

Moving the constants, we then obtain

    F^{-1}[e^{-k²/2a}] = √(a/2π) e^{-ax²/2}.

We now make the substitution α = 1/2a:

    F^{-1}[e^{-αk²}] = (1/√(4πα)) e^{-x²/4α}.

This is in the form needed to invert (6.56). Thus, for α = (a + b)/2ab, we find

    (f ∗ g)(x) = h(x) = √(2π/(a + b)) e^{-(ab/2(a+b))x²}.

6.7 Applications of the Convolution Theorem -Optional

There are many applications of the convolution operation. In this section we will describe a few of them.

The first application is filtering signals. Let f(t) denote the amplitude of a given analog signal and f̂(ω) be the Fourier transform of this signal. Recall that the Fourier transform gives the frequency content of the signal and that ω = 2πν, where ν is the frequency in Hertz, or cycles per second (cps). An example is provided in Figure 6.18.

For a given signal there might be some noise, some undesirable high frequencies, or the device used for recording an analog signal might naturally not be able to record high frequencies. There are many ways to filter out unwanted frequencies. The simplest would be to just drop all of the frequencies with |ω| > ω₀, for some cutoff frequency ω₀. The Fourier transform of the filtered signal would then be zero for |ω| > ω₀. This could be accomplished by multiplying the Fourier

transform of the signal by a function that vanishes for |ω| > ω₀. For example, we could consider the gate function

    p_{ω₀}(ω) = 1, |ω| ≤ ω₀;   0, |ω| > ω₀.        (6.57)

Figure 6.18: Plot of a signal f(t) and its Fourier transform f̂(ω).

Figure 6.19 shows how the gate function is used to filter the signal. In general, we multiply the Fourier transform of the signal by some filtering function ĥ(ω) to get the Fourier transform of the filtered signal,

    ĝ(ω) = f̂(ω) ĥ(ω).

The new signal, g(t), is then the inverse Fourier transform of this product, giving the new signal as a convolution:

    g(t) = F^{-1}[f̂(ω) ĥ(ω)] = ∫_{-∞}^{∞} h(t − τ) f(τ) dτ.        (6.58)

Such processes occur often in systems theory as well. One thinks of f(t) as the input signal into some filtering device which in turn produces the output, g(t). The function h(t) is called the impulse response. This is because it is the response to the impulse function, δ(t). In that case, one has

    ∫_{-∞}^{∞} h(t − τ) δ(τ) dτ = h(t).

Figure 6.19: (a) Plot of the Fourier transform f̂(ω) of a signal. (b) The gate function p_{ω₀}(ω) used to filter out high frequencies. (c) The product of the functions in (a) and (b), ĝ(ω) = f̂(ω) p_{ω₀}(ω).

Another application of the convolution is in windowing. This represents what happens when one measures a real signal. Real signals cannot be recorded for all values of time.
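Before turning to windowing, the gate-function filtering just described can be illustrated concretely with the FFT. In the sketch below, the two-frequency signal, the sample rate, and the cutoff frequency are all invented for the example, not taken from the text:

```python
import numpy as np

# Low-pass filtering sketch: multiply the FFT of a signal by a gate that is
# 1 for frequencies below the cutoff and 0 above it (all values assumed).
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

freqs = np.fft.fftfreq(t.size, d=t[1] - t[0])   # frequencies nu in Hz
gate = np.abs(freqs) <= 20                      # cutoff nu0 = 20 Hz
filtered = np.fft.ifft(np.fft.fft(signal) * gate).real

# the 120 Hz component is removed; the 5 Hz component survives
print(np.max(np.abs(filtered - np.sin(2 * np.pi * 5 * t))) < 1e-8)   # True
```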

Instead, data is collected over a finite time interval. If the length of time the data is collected is T, then the resulting signal is zero outside this time interval. This can be modeled in the same way as with filtering, except the new signal will be the product of the old signal with the windowing function. The resulting Fourier transform of the new signal will be a convolution of the Fourier transforms of the original signal and the windowing function. We will later see that the effect of windowing is to change the spectral content of the signal we are trying to analyze. We will study these natural windowing and filtering effects from recording data in the last chapter.

We can also use the Convolution Theorem to derive Parseval's Equality:

    ∫_{-∞}^{∞} |f(t)|² dt = (1/2π) ∫_{-∞}^{∞} |f̂(ω)|² dω.        (6.59)

This equality has a physical meaning for signals. The integral on the left side is a measure of the energy content of the signal in the time domain. The right side provides a measure of the energy content of the transform of the signal. Parseval's equality, sometimes referred to as Plancherel's formula, is simply a statement that the energy is invariant under the transform.

Let's rewrite the Convolution Theorem in the form

    F^{-1}[f̂(k) ĝ(k)] = (f ∗ g)(t).        (6.60)

Then, by the definition of the inverse Fourier transform, we have

    ∫_{-∞}^{∞} f(t − u) g(u) du = (1/2π) ∫_{-∞}^{∞} f̂(ω) ĝ(ω) e^{-iωt} dω.

Setting t = 0,

    ∫_{-∞}^{∞} f(−u) g(u) du = (1/2π) ∫_{-∞}^{∞} f̂(ω) ĝ(ω) dω.        (6.61)

Now, let g(t) = f(−t) for a real-valued function f. Then the Fourier transform of g(t) is related to the Fourier transform of f(t):

    ĝ(ω) = ∫_{-∞}^{∞} f(−t) e^{iωt} dt
         = −∫_{∞}^{-∞} f(τ) e^{-iωτ} dτ
         = ∫_{-∞}^{∞} f(τ) e^{-iωτ} dτ,

which is the complex conjugate of f̂(ω).        (6.62)

So, inserting this result into Equation (6.61), we find that

    ∫_{-∞}^{∞} f(−u) f(−u) du = (1/2π) ∫_{-∞}^{∞} |f̂(ω)|² dω,

which implies Parseval's Equality.

6.8 The Laplace Transform

Up until this point we have only explored Fourier exponential transforms as one type of integral transform. The Fourier transform is useful on infinite domains. However, students are often introduced to another integral transform, called the Laplace transform, in their introductory differential equations class. These transforms are defined over semi-infinite domains and are useful for solving ordinary differential equations. They have also proven useful in engineering for solving circuit problems and doing systems analysis.

The Laplace transform of a function f(t) is defined as

    F(s) = L[f](s) = ∫_0^{∞} f(t) e^{-st} dt,   s > 0.        (6.63)

This is an improper integral, and one needs

    lim_{t→∞} f(t) e^{-st} = 0

to guarantee convergence.

It is typical that one makes use of Laplace transforms by referring to a table of transform pairs. A sample of such pairs is given in Table 6.1. Combining some of these simple Laplace transforms with the properties of the Laplace transform, as shown in Table 6.2, we can deal with many applications of the Laplace transform. In the next section we will show how these can be used to solve ordinary differential equations. We will first prove a few of the given Laplace transforms and show how they can be used to obtain new transform pairs. We begin with some simple transforms. These are found by simply using the definition of the Laplace transform.

    f(t)         F(s)                          f(t)             F(s)
    c            c/s,  s > 0                   e^{at}           1/(s−a),  s > a
    tⁿ           n!/s^{n+1},  s > 0            tⁿ e^{at}        n!/(s−a)^{n+1}
    sin ωt       ω/(s²+ω²)                     e^{at} sin ωt    ω/((s−a)²+ω²)
    cos ωt       s/(s²+ω²)                     e^{at} cos ωt    (s−a)/((s−a)²+ω²)
    t sin ωt     2ωs/(s²+ω²)²                  t cos ωt         (s²−ω²)/(s²+ω²)²
    sinh at      a/(s²−a²)                     cosh at          s/(s²−a²)
    H(t−a)       e^{-as}/s,  a ≥ 0, s > 0      δ(t−a)           e^{-as},  a ≥ 0, s > 0

Table 6.1: Table of selected Laplace transform pairs.

Example 1: L[1].

For this example, we insert f(t) = 1 into our integral:

    L[1] = ∫_0^{∞} e^{-st} dt.

This is an improper integral, and the computation is understood by introducing an upper limit of a and then letting a → ∞. We will not always write this limit, but it will be understood that this is how one computes such improper integrals. So,

    L[1] = lim_{a→∞} ∫_0^{a} e^{-st} dt
         = lim_{a→∞} [−(1/s) e^{-st}]_0^{a}
         = lim_{a→∞} (−(1/s) e^{-sa} + 1/s) = 1/s.        (6.64)

Thus, we have found that the Laplace transform of 1 is 1/s. This can be extended to any constant c using the linearity of the transform: since we can pull a constant factor out from under the integral, L[c] = cL[1]. Therefore, we have

    L[c] = c/s.
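This first pair is easy to confirm numerically. In the sketch below, c, s, and the truncation of the improper integral at t = 40 are arbitrary choices:

```python
import numpy as np

# Numerical spot-check of L[c] = c/s; c and s are arbitrary test values,
# and the integral is truncated at t = 40, where e^{-st} is ~1e-35.
c, s = 3.0, 2.0
t = np.linspace(0, 40, 400001)
dt = t[1] - t[0]

F = np.sum(c * np.exp(-s * t)) * dt   # approximates \int_0^infty c e^{-st} dt
print(abs(F - c / s) < 1e-3)          # c/s = 1.5; True
```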

Example 1: L[e^{at}].

For this example, we can easily compute the transform. It again is simply the integral of an exponential function:

    L[e^{at}] = ∫_0^{∞} e^{at} e^{-st} dt
              = ∫_0^{∞} e^{(a−s)t} dt
              = [1/(a−s)] e^{(a−s)t} |_0^{∞} = 1/(s−a).        (6.65)

Note that the last limit was computed as lim_{t→∞} e^{(a−s)t} = 0. This is only true if a − s < 0, or s > a. [Actually, a could be complex. In this case we would only need s to be greater than the real part of a.]

Example 2: L[cos at] and L[sin at].

In these cases, we could again insert the functions directly into the transform. For example,

    L[cos at] = ∫_0^{∞} e^{-st} cos at dt.

Recall how one does such integrals involving both a trigonometric function and an exponential function: one integrates by parts two times and then obtains an integral of the form with which one started. Rearranging the result, the answer can be obtained.

However, there is a much simpler way to compute these transforms. Recall that e^{iat} = cos at + i sin at. Making use of the linearity of the Laplace transform, we have

    L[e^{iat}] = L[cos at] + iL[sin at].

Thus, transforming this complex exponential and looking at the real and imaginary parts of the result will give both transforms at the same time! The transform is simply computed as

    L[e^{iat}] = ∫_0^{∞} e^{iat} e^{-st} dt = ∫_0^{∞} e^{-(s−ia)t} dt = 1/(s − ia).

Note that we could easily have used the result for the transform of an exponential, which was already proven. In this case, s > Re(ia) = 0. We now extract the real and imaginary parts of the result using the complex conjugate of the denominator:

    1/(s − ia) = [1/(s − ia)] [(s + ia)/(s + ia)] = (s + ia)/(s² + a²).

Reading off the real and imaginary parts gives

    L[cos at] = s/(s² + a²),
    L[sin at] = a/(s² + a²).        (6.66)

Example 3: L[t].

For this example we need to evaluate

    L[t] = ∫_0^{∞} t e^{-st} dt.

This integration can be done using integration by parts. (Pick u = t and dv = e^{-st} dt. Then du = dt and v = −(1/s) e^{-st}.)

    ∫_0^{∞} t e^{-st} dt = −t (1/s) e^{-st} |_0^{∞} + (1/s) ∫_0^{∞} e^{-st} dt = 1/s².        (6.67)

Example 4: L[tⁿ].

We can generalize the last example to powers greater than n = 1. In this case we have to do the integral

    L[tⁿ] = ∫_0^{∞} tⁿ e^{-st} dt.

Following the previous example, we integrate by parts:

    ∫_0^{∞} tⁿ e^{-st} dt = −tⁿ (1/s) e^{-st} |_0^{∞} + (n/s) ∫_0^{∞} t^{n−1} e^{-st} dt
                          = (n/s) ∫_0^{∞} t^{n−1} e^{-st} dt.        (6.68)

We could continue to integrate by parts until the final integral is computed. However, look at the integral that resulted after one integration by parts: it is just the Laplace transform of t^{n−1}. So, we can write the result as

    L[tⁿ] = (n/s) L[t^{n−1}].

This is an example of a recursive definition of a sequence, in this case a sequence of integrals. Denoting

    Iₙ = L[tⁿ] = ∫_0^{∞} tⁿ e^{-st} dt

and noting that I₀ = L[1] = 1/s, we have the following:

    Iₙ = (n/s) I_{n−1},   I₀ = 1/s.        (6.69)

This is also what is called a difference equation. It is a first order difference equation with an "initial condition", I₀. There is a whole theory of difference equations, which we will not get into here.

Our goal is to solve the above difference equation. It is easy to do by simple iteration. Note that replacing n with n − 1, we have

    I_{n−1} = ((n−1)/s) I_{n−2}.

Repeating the process, we find

    Iₙ = (n/s) I_{n−1} = (n/s)((n−1)/s) I_{n−2} = (n(n−1)/s²) I_{n−2}.        (6.70)

We can repeat this process until we get to I₀, which we know. This can be seen by watching for patterns. In some cases you need to be careful so that you can count the number of iterations of the process. So, we first ask what the result is after k steps.
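The difference equation can also be iterated programmatically and compared against the closed form n!/s^{n+1} obtained below. In this sketch, s = 2 and the range of n are arbitrary choices:

```python
from math import factorial

# Iterate I_n = (n/s) I_{n-1} with I_0 = 1/s, and compare against the
# closed form n!/s^(n+1); s and the range of n are arbitrary choices.
s = 2.0
I = 1.0 / s                                   # I_0 = L[1] = 1/s
for n in range(1, 6):
    I = (n / s) * I                           # the difference equation
    assert abs(I - factorial(n) / s ** (n + 1)) < 1e-12

print("I_5 =", I)   # 5!/2^6 = 1.875
```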

Continuing the iteration process, we have

    Iₙ = (n/s) I_{n−1}
       = (n(n−1)/s²) I_{n−2}
       = (n(n−1)(n−2)/s³) I_{n−3}
       = ⋯
       = (n(n−1)(n−2)⋯(n−k+1)/sᵏ) I_{n−k}.

Since we know I₀, we choose to stop at k = n, obtaining

    Iₙ = (n(n−1)(n−2)⋯(2)(1)/sⁿ) I₀ = n!/s^{n+1}.        (6.71)

Therefore, we have shown that L[tⁿ] = n!/s^{n+1}.

[Such iterative techniques are useful in obtaining a variety of integrals, such as Iₙ = ∫_{-∞}^{∞} x^{2n} e^{-x²} dx.]

As a final note, one can extend this result to cases when n is not an integer. To do this, one introduces what is called the Gamma function. This function is defined as

    Γ(x) = ∫_0^{∞} t^{x−1} e^{-t} dt.        (6.72)

Note the similarity to the Laplace transform of t^{x−1}:

    L[t^{x−1}] = ∫_0^{∞} t^{x−1} e^{-st} dt.

For x − 1 an integer and s = 1, we have that Γ(x) = (x−1)!. Thus, the Gamma function seems to be a generalization of the factorial. In fact, we show this result later and state here that

    L[tᵖ] = Γ(p+1)/s^{p+1}

for values of p > −1.

Example 5: L[df/dt].

We have to compute

    L[df/dt] = ∫_0^{∞} (df/dt) e^{-st} dt.

We can move the derivative off of f by integrating by parts. This is similar to what we had done when finding the Fourier transform of the derivative of a function. Letting u = e^{-st} and v = f(t), we have

    L[df/dt] = ∫_0^{∞} (df/dt) e^{-st} dt
             = f(t) e^{-st} |_0^{∞} + s ∫_0^{∞} f(t) e^{-st} dt
             = −f(0) + sF(s).        (6.73)

Here we have assumed that f(t) e^{-st} vanishes for large t. The final result is that

    L[df/dt] = sF(s) − f(0).

    Laplace Transform Properties
    L[af(t) + bg(t)] = aF(s) + bG(s)
    L[tf(t)] = −(d/ds) F(s)
    L[dy/dt] = sY(s) − y(0)
    L[d²y/dt²] = s²Y(s) − sy(0) − y′(0)
    L[e^{at} f(t)] = F(s − a)
    L[H(t − a) f(t − a)] = e^{-as} F(s)
    L[(f ∗ g)(t)] = L[∫_0^{t} f(t − u) g(u) du] = F(s)G(s)

Table 6.2: Table of selected Laplace transform properties.

Example 6: L[d²f/dt²].

We can compute this using two integrations by parts, or we could make use of the last result. Letting g(t) = df(t)/dt, we have

    L[d²f/dt²] = L[dg/dt] = sG(s) − g(0) = sG(s) − f′(0).

But,

    G(s) = L[df/dt] = sF(s) − f(0).

So,

    L[d²f/dt²] = sG(s) − f′(0)
               = s(sF(s) − f(0)) − f′(0)
               = s²F(s) − sf(0) − f′(0).        (6.74)

6.8.1 Solution of ODEs Using Laplace Transforms

One of the typical applications of Laplace transforms is the solution of nonhomogeneous linear constant coefficient differential equations. The general idea is that one transforms the equation for an unknown function y(t) into an algebraic equation for its transform, Y(s). Typically, the algebraic equation is easy to solve for Y(s) as a function of s. Then one transforms back into t-space using Laplace transform tables and the properties of Laplace transforms. Later we will see that there is an integral form for the inverse transform. This is typically not covered in introductory differential equations classes, as one needs to carry out integrations in the complex plane. The scheme is shown in Figure 6.20. In the following examples we will show how this works.

Example 1: Solve the initial value problem y′ + 3y = e^{2t}, y(0) = 1.

The first step is to perform a Laplace transform of the initial value problem. The transform of the left side of the equation is

    L[y′ + 3y] = sY − y(0) + 3Y = (s + 3)Y − 1.

Transforming the right hand side, we have

    L[e^{2t}] = 1/(s − 2).

Combining these, we obtain

    (s + 3)Y − 1 = 1/(s − 2).

The next step is to solve for Y(s):

    Y(s) = 1/(s + 3) + 1/((s − 2)(s + 3)).

Figure 6.20: The scheme for solving an ordinary differential equation using Laplace transforms. One transforms the initial value problem for y(t) and obtains an algebraic equation for Y(s). Solving for Y(s) and taking the inverse transform gives the solution to the initial value problem.

Now we need to find the inverse Laplace transform. Namely, we need to figure out what function has a Laplace transform of the above form. It would be easy if we only had the first term: the inverse transform of the first term is e^{-3t}. However, we have not seen anything that looks like the second form in the table of transforms that we have compiled so far.

We are not stuck, though. We know that we can rewrite the second term by using a partial fraction decomposition. Let's recall how to do this. The goal is to find constants A and B such that

    1/((s − 2)(s + 3)) = A/(s − 2) + B/(s + 3).

We picked this form because we know that recombining the two terms into one term will have the same denominator. We just need to make sure the numerators agree afterwards. So, adding the two terms, we have

    1/((s − 2)(s + 3)) = (A(s + 3) + B(s − 2))/((s − 2)(s + 3)).

Equating numerators, we have

    1 = A(s + 3) + B(s − 2).

This has to be true for all s. Rewriting the equation by gathering terms with common powers of s, we have

    (A + B)s + 3A − 2B = 1.

The only way that this can be true for all s is for the coefficients of the different powers of s to agree on both sides. This leads to two equations for A and B:

    A + B = 0,
    3A − 2B = 1.        (6.75)

The first equation gives A = −B, so the second equation becomes −5B = 1. The solution is then A = −B = 1/5. Returning to the problem, we have found that

    Y(s) = 1/(s + 3) + (1/5)[1/(s − 2) − 1/(s + 3)].

In order to finish the problem at hand, we need a function whose Laplace transform is of this form. We easily see that

    y(t) = e^{-3t} + (1/5)(e^{2t} − e^{-3t})

works. Simplifying, we have the solution of the initial value problem,

    y(t) = (1/5) e^{2t} + (4/5) e^{-3t}.

[Of course, we could have tried to guess the form of the partial fraction decomposition, as we had done earlier when talking about Laurent series.]

Example 2: Solve the initial value problem y″ + 4y = 0, y(0) = 1, y′(0) = 3.        (6.76)

We can probably solve this without Laplace transforms, but it is a simple exercise. Transforming the equation, we have

    0 = s²Y − sy(0) − y′(0) + 4Y = (s² + 4)Y − s − 3.
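Stepping back for a moment, the solution of Example 1 can be verified directly. The following sketch checks the differential equation by finite differences at a few sample points (the step size and sample points are arbitrary choices):

```python
import numpy as np

# Check that y(t) = (1/5) e^{2t} + (4/5) e^{-3t} solves y' + 3y = e^{2t}
# with y(0) = 1; sample points and step size are arbitrary choices.
y = lambda t: np.exp(2 * t) / 5 + 4 * np.exp(-3 * t) / 5

h = 1e-6
for t in [0.0, 0.5, 1.3]:
    yp = (y(t + h) - y(t - h)) / (2 * h)          # central difference for y'
    assert abs(yp + 3 * y(t) - np.exp(2 * t)) < 1e-6

print(abs(y(0.0) - 1.0) < 1e-12)   # initial condition; True
```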

Solving for Y, we have

    Y(s) = (s + 3)/(s² + 4).

We now ask if we recognize the transform pair needed. The denominator looks like the type needed for the transform of a sine or cosine. We just need to play with the numerator. Splitting the expression into two terms, we have

    Y(s) = s/(s² + 4) + 3/(s² + 4).

The first term is now recognizable as the transform of cos 2t. The second term is not the transform of sin 2t; it would be if the numerator were a 2. This can be corrected by multiplying and dividing by 2:

    3/(s² + 4) = (3/2) [2/(s² + 4)].

Our solution is then found as

    y(t) = L^{-1}[s/(s² + 4) + (3/2) · 2/(s² + 4)] = cos 2t + (3/2) sin 2t.

6.8.2 Step and Impulse Functions

The initial value problems that we have solved so far can be solved using the Method of Undetermined Coefficients or the Method of Variation of Parameters. However, using variation of parameters can be messy and involves some skill with integration. Many circuit designs can be modelled with systems of differential equations using Kirchhoff's Rules. Such systems can get fairly complicated. Laplace transforms can be used to solve such systems, and electrical engineers have long used such methods in circuit analysis.

In this section we add a couple more transform pairs and transform properties that are useful in accounting for things like turning on a driving force, using periodic functions like a square wave, or introducing an impulse force. We first recall the Heaviside step function, given by

    H(t) = 0, t < 0;   1, t > 0.        (6.77)

A more general version of the step function is the horizontally shifted step function, H(t − a). This function is shown in Figure 6.21.

Figure 6.21: A shifted Heaviside function, H(t − a).

The Laplace transform of this function is found, for a > 0, as

    L[H(t − a)] = ∫_0^{∞} H(t − a) e^{-st} dt
                = ∫_a^{∞} e^{-st} dt
                = [−e^{-st}/s]_a^{∞} = e^{-as}/s.        (6.78)

Just like the Fourier transform, the Laplace transform has two shift theorems, involving multiplication of f(t) or F(s) by exponentials. These are given by

    L[e^{at} f(t)] = F(s − a),        (6.79)
    L[f(t − a) H(t − a)] = e^{-as} F(s).        (6.80)

We prove the first shift theorem and leave the other proof as an exercise for the reader. Namely,

    L[e^{at} f(t)] = ∫_0^{∞} e^{at} f(t) e^{-st} dt
                   = ∫_0^{∞} f(t) e^{-(s−a)t} dt = F(s − a).        (6.81)

Example: Compute the Laplace transform of e^{-at} sin ωt.

This function arises as the solution of the underdamped harmonic oscillator. We first note that the exponential multiplies a sine function. The first shift theorem tells us that we need the transform of the sine function. So, for

    F(s) = ω/(s² + ω²),

we can write the solution as

    L[e^{-at} sin ωt] = F(s + a) = ω/((s + a)² + ω²).

More interesting examples can be found using piecewise defined functions. First we consider the function H(t) − H(t − a). For t < 0 both terms are zero. In the interval [0, a], the function H(t) = 1 and H(t − a) = 0, so H(t) − H(t − a) = 1 for t ∈ [0, a]. Finally, for t > a both functions are one, and therefore the difference is zero. This function is shown in Figure 6.22.

Figure 6.22: The box function, H(t) − H(t − a).

We now consider the piecewise defined function

    g(t) = f(t), 0 ≤ t ≤ a;   0, t < 0 or t > a.

This function can be rewritten in terms of step functions. We only need to multiply f(t) by the above box function,

    g(t) = f(t)[H(t) − H(t − a)].

We depict this in Figure 6.23.

Even more complicated functions can be written out in terms of step functions. We only need to look at sums of functions of the form f(t)[H(t − a) − H(t − b)] for b > a. This is just a box between a and b of height f(t). An example of a square wave function is shown in Figure 6.24. It can be represented as a sum of an infinite number of boxes,

    f(t) = Σ_{n=−∞}^{∞} [H(t − 2na) − H(t − (2n+1)a)].

Example: Laplace transform of a square wave turned on at t = 0,

    f(t) = Σ_{n=0}^{∞} [H(t − 2na) − H(t − (2n+1)a)].

Using the properties of the Heaviside function, we have

    L[f] = Σ_{n=0}^{∞} ( L[H(t − 2na)] − L[H(t − (2n+1)a)] )
         = Σ_{n=0}^{∞} ( e^{-2nas}/s − e^{-(2n+1)as}/s )
         = [(1 − e^{-as})/s] Σ_{n=0}^{∞} (e^{-2as})ⁿ
         = [(1 − e^{-as})/s] [1/(1 − e^{-2as})]
         = (1 − e^{-as}) / (s(1 − e^{-2as})).        (6.82)

Note that the third line in the derivation contains a geometric series; we summed this series to get the answer in a compact form.

Figure 6.23: Formation of a piecewise function, f(t)[H(t) − H(t − a)].

Another interesting example is the delta function. The delta function represents a point impulse, or point driving force. For example, while a mass on a spring is undergoing simple harmonic motion, one could hit it for an instant at time t = a. In such a case, we could represent the force as a multiple of δ(t − a). One would then need the Laplace transform of the delta function to solve the associated differential equation.
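As a quick check of the square wave transform just derived, the partial sums of the series can be compared against the closed form in (6.82). In this sketch, a and s are arbitrary test values:

```python
import numpy as np

# Compare the partial sums of the geometric series with the closed form
# (1 - e^{-as}) / (s (1 - e^{-2as})); a = 1 and s = 0.7 are test values.
a, s = 1.0, 0.7
series = sum((np.exp(-2 * n * a * s) - np.exp(-(2 * n + 1) * a * s)) / s
             for n in range(200))
closed = (1 - np.exp(-a * s)) / (s * (1 - np.exp(-2 * a * s)))

print(abs(series - closed) < 1e-12)   # True
```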

We find that, for a > 0,

    L[δ(t − a)] = ∫_0^{∞} δ(t − a) e^{-st} dt
                = ∫_{-∞}^{∞} δ(t − a) e^{-st} dt = e^{-as}.        (6.83)

Figure 6.24: A square wave, f(t) = Σ_{n=−∞}^{∞} [H(t − 2na) − H(t − (2n+1)a)].

Example: Solve the initial value problem y″ + 4π²y = δ(t − 2), y(0) = y′(0) = 0.

In this case we see that we have a nonhomogeneous spring problem. Without the forcing term, given by the delta function, this spring is initially at rest and not stretched. The delta function models a unit impulse at t = 2. Of course, we anticipate that at this time the spring will begin to oscillate. We will solve this problem using Laplace transforms.

First, transform the differential equation:

    s²Y − sy(0) − y′(0) + 4π²Y = e^{-2s}.

Inserting the initial conditions, we have

    (s² + 4π²)Y = e^{-2s}.

Solving for Y(s),

    Y(s) = e^{-2s}/(s² + 4π²).

We now seek the function for which this is the Laplace transform. The form of this function is an exponential times some F(s). Thus, we need the

second shift theorem,

    L[f(t − a) H(t − a)] = e^{-as} F(s).

First we need to find the f(t) corresponding to

    F(s) = 1/(s² + 4π²).

The denominator suggests a sine or cosine. Since the numerator is constant, we pick sine. From the tables of transforms, we have

    L[sin 2πt] = 2π/(s² + 4π²).

So, we write

    F(s) = (1/2π) [2π/(s² + 4π²)].

This gives f(t) = (2π)^{-1} sin 2πt. We now apply the second shift theorem with a = 2:

    y(t) = H(t − 2) f(t − 2) = (1/2π) H(t − 2) sin 2π(t − 2).        (6.84)

This solution tells us that the mass is at rest until t = 2 and then begins to oscillate at its natural frequency.

Finally, we consider the convolution of two functions. Often we are faced with having the product of two Laplace transforms that we know, and we seek the inverse transform of the product. For example, let's say you end up with Y(s) = 1/((s − 1)(s − 2)) while trying to solve a differential equation. We know how to invert this if we only have one of the denominators present. Of course, we could do a partial fraction decomposition. But there is another way to find the inverse transform, especially if we cannot perform a partial fraction decomposition.

We define the convolution of two functions defined on [0, ∞) much the same way as we had done for the Fourier transform. We define

    (f ∗ g)(t) = ∫_0^{t} f(u) g(t − u) du.

The convolution operation has two important properties:

1. The convolution is commutative: f ∗ g = g ∗ f.

(g ∗ f )(t) = t 0 g(u)f (t − u) du 0 t = − = t 0 g(t − y)f (y) dy f (y)g(t − y) dy (6. First.6. which needs more rigorous attention than will be provided here. L[f ∗ g] = = = = = = ∞ 0 ∞ 0 ∞ 0 ∞ 0 ∞ 0 0 0 0 t f (u)g(t − u) du e−st dt f (u)g(t − u) du e−st dt ∞ 0 ∞ 0 ∞ f (u) f (u) g(t − u)e−st dt g(τ )e−s(τ +u) dτ ∞ 0 du du du f (u)e−su g(τ )e−sτ dτ ∞ 0 ∞ f (u)e−su du g(τ )e−sτ dτ (6. Then a change of variables will allow us to split the integral into the product of two integrals that are recognized as a product of two Laplace transforms.8. The ﬁrst assumption will allow us to write the ﬁnite integral as an inﬁnite integral. f (t) = 0 and g(t) = 0 for t < 0. Secondly. The Convolution Theorem: The Laplace transform of a convolution is the product of the Laplace transforms of the individual functions: L[f ∗ g] = F (s)G(s) Proving this theorem takes a bit more work. we will assume that we can interchange integrals.85) = (f ∗ g)(t). We will make some assumptions that will work in many cases. THE LAPLACE TRANSFORM 299 Proof: The key is to make a substitution y = t − u int the integral to make f a simple function of the integration variable. 2. .86) = F (s)G(s). we assume that our functions are causal.

We make use of the Convolution Theorem to do the following example.

Example: y(t) = L^{−1}[1/((s − 1)(s − 2))].

We note that this is a product of two functions:

   Y(s) = 1/((s − 1)(s − 2)) = (1/(s − 1)) · (1/(s − 2)) = F(s) G(s).

We know the inverse transforms of the factors: f(t) = e^t and g(t) = e^{2t}. Using the Convolution Theorem, we find that y(t) = (f ∗ g)(t). We compute the convolution:

   y(t) = ∫_0^t f(u) g(t − u) du
        = ∫_0^t e^u e^{2(t − u)} du
        = e^{2t} ∫_0^t e^{−u} du
        = e^{2t} [−e^{−t} + 1] = e^{2t} − e^t.   (6.87)

You can confirm this by carrying out the partial fraction decomposition.

6.8.3 The Inverse Laplace Transform

Up until this point we have seen that the inverse Laplace transform can be found by making use of Laplace transform tables and properties of Laplace transforms. This is typically the way Laplace transforms are taught and used. One can do the same for Fourier transforms; however, in that case we introduced an inverse transform in the form of an integral. Does such an inverse exist for the Laplace transform? Yes, it does. In this section we will introduce the inverse Laplace transform integral and show how it is used.

We begin by considering a function f(t) which vanishes for t < 0, and define the function g(t) = f(t)e^{−ct}. For g(t) absolutely integrable,

   ∫_{−∞}^{∞} |g(t)| dt = ∫_0^∞ |f(t)| e^{−ct} dt < ∞,

we can write its Fourier transform,

   ĝ(ω) = ∫_{−∞}^{∞} g(t) e^{iωt} dt = ∫_0^∞ f(t) e^{iωt − ct} dt,

and the inverse Fourier transform,

   g(t) = f(t)e^{−ct} = (1/2π) ∫_{−∞}^{∞} ĝ(ω) e^{−iωt} dω.

Multiplying by e^{ct} and inserting ĝ(ω) into the integral for g(t), we find

   f(t) = (1/2π) ∫_{−∞}^{∞} [∫_0^∞ f(τ) e^{(iω − c)τ} dτ] e^{−(iω − c)t} dω.

Letting s = c − iω (so dω = i ds), we have

   f(t) = (i/2π) ∫_{c+i∞}^{c−i∞} [∫_0^∞ f(τ) e^{−sτ} dτ] e^{st} ds.

Note that the inside integral is simply F(s). Thus, we have

   f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(s) e^{st} ds.

This is the inverse Laplace transform, called the Bromwich integral. This integral is evaluated along a path in the complex s-plane. The typical way to compute this integral is to choose c so that all poles are to the left of the contour and to close the contour with a semicircle enclosing the poles. One has to verify that the integral over the semicircle vanishes as the radius goes to infinity; here one relies on Jordan's lemma extended into the second and third quadrants. Assuming that we have done this, the result is simply obtained as 2πi times the sum of the residues.

Example: Find the inverse Laplace transform of F(s) = 1/(s(s + 1)).

The integral we have to compute is

   f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} e^{st}/(s(s + 1)) ds.

This integral has poles at s = 0 and s = −1. We enclose the contour with a semicircle to the left of the path in the complex s-plane; the contour we will use is shown in Figure 6.25. The residues in this case are:

   Res[e^{zt}/(z(z + 1)); z = 0] = lim_{z→0} e^{zt}/(z + 1) = 1,

and

   Res[e^{zt}/(z(z + 1)); z = −1] = lim_{z→−1} e^{zt}/z = −e^{−t}.

Therefore, we have

   f(t) = (1/2πi) · 2πi [(1) + (−e^{−t})] = 1 − e^{−t}.

Figure 6.25: The contour used for applying the Bromwich integral to F(s) = 1/(s(s + 1)).

We can verify this result using the Convolution Theorem or using partial fraction decomposition. The decomposition is simplest:

   1/(s(s + 1)) = 1/s − 1/(s + 1).

The first term leads to an inverse transform of 1 and the second term gives e^{−t}. Thus, we have verified the result from doing a contour integration.
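The result can also be confirmed from the definition of the transform itself: integrating (1 − e^{−t}) e^{−st} over [0, ∞) should reproduce 1/(s(s + 1)). A numerical sketch (not from the text), truncating the integral where the integrand is negligible:

```python
import math

def laplace(f, s, t_max=40.0, n=100000):
    """Trapezoid-rule approximation of int_0^t_max f(t) e^{-s t} dt.
    For s >= 1 the tail beyond t_max = 40 is utterly negligible here."""
    h = t_max / n
    total = 0.5 * (f(0.0) + f(t_max) * math.exp(-s * t_max))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

f = lambda t: 1.0 - math.exp(-t)    # the inverse transform found above
```

At s = 1, 2, 3 the integral returns 1/2, 1/6, and 1/12, matching 1/(s(s + 1)).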

Chapter 7

Electromagnetic Waves

Up to this point we have mainly been confined to problems involving only one or two independent variables. In particular, the heat equation and the wave equation involved one time and one space dimension. However, we live in a world of three spatial dimensions. (Though some theoretical physicists live in worlds of many more dimensions, or at least they think so.) We will need to extend the study of heat flow and wave theory to three dimensions.

Recall that the one-dimensional wave equation takes the form

   ∂²u/∂t² = c² ∂²u/∂x².   (7.1)

We need to generalize the ∂²u/∂x² term. For the case of electromagnetic waves in a source-free environment, we will derive the three dimensional wave equation. It is given by

   ∂²u/∂t² = c² (∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z²).   (7.2)

This is the generic form of the linear wave equation in Cartesian coordinates. It can be written in a more compact form using the Laplacian operator, ∇²:

   ∂²u/∂t² = c² ∇²u.   (7.3)

The introduction of the Laplacian is common when generalizing to higher dimensions. In fact, we already presented some generic equations in Table 4.1, which we reproduce in this chapter in Table 7.1.

   Name                     2 Vars                      3D
   Heat Equation            u_t = k u_xx                u_t = k ∇²u
   Wave Equation            u_tt = c² u_xx              u_tt = c² ∇²u
   Laplace's Equation       u_xx + u_yy = 0             ∇²u = 0
   Poisson's Equation       u_xx + u_yy = F(x, y)       ∇²u = F(x, y, z)
   Schrödinger's Equation   i u_t = u_xx + F(x, t) u    i u_t = ∇²u + F(x, y, z, t) u

   Table 7.1: List of generic partial differential equations.

We have already studied the wave equation and the heat equation, and we saw Schrödinger's equation in the last chapter as a natural problem involving Fourier transforms. For steady-state, or equilibrium, heat flow problems, the heat equation no longer involves the time derivative. What is left is called Laplace's equation, which we have also seen in relation to complex functions. Adding an external heat source, Laplace's equation becomes what is known as Poisson's equation.

Using the Laplacian allows us not only to write these equations in a more compact form, but also in a coordinate-free representation. Many problems are more easily cast in other coordinate systems. For example, the propagation of electromagnetic waves in an optical fiber is naturally described in terms of cylindrical coordinates, the vibrations of a circular drumhead can be described using polar coordinates, and the heat flow inside a hemispherical igloo can be described using spherical coordinates. In each of these cases the Laplacian has to be written in terms of the needed coordinate system.

The solution of these partial differential equations can be handled using separation of variables or transform methods. This will lead to the study of ordinary differential equations, which will in turn lead to new sets of functions, other than our typical sines and cosines. In the next chapter we will look at several examples of applying separation of variables in higher dimensions.
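A concrete check of the three dimensional wave equation (a sketch, not from the text): a plane wave u = sin(k·r − ωt) with ω = c|k| should satisfy u_tt = c²∇²u, which we can test with central differences. The wave speed, wave vector, and evaluation point below are arbitrary choices.

```python
import math

c = 2.0                                        # wave speed (arbitrary choice)
k = (1.0, 2.0, 2.0)                            # wave vector (arbitrary choice)
w = c * math.sqrt(sum(ki * ki for ki in k))    # dispersion relation w = c|k|

def u(x, y, z, t):
    """Plane-wave solution of the 3D wave equation."""
    return math.sin(k[0] * x + k[1] * y + k[2] * z - w * t)

def second_partial(f, args, i, h=1e-4):
    """Central-difference second derivative in the i-th argument."""
    a = list(args)
    a[i] += h
    fp = f(*a)
    a[i] -= 2.0 * h
    fm = f(*a)
    return (fp - 2.0 * f(*args) + fm) / h ** 2

point = (0.3, -0.7, 1.1, 0.5)                  # arbitrary spacetime point
lap = sum(second_partial(u, point, i) for i in range(3))   # u_xx + u_yy + u_zz
utt = second_partial(u, point, 3)
```

For this u one also has u_tt = −ω²u, which the same finite differences confirm.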

7.1 Maxwell's Equations

There are many applications leading to the equations in Table 7.1. In this chapter our goal is to derive the three dimensional wave equation for electromagnetic waves. This derivation was first carried out by James Clerk Maxwell in 1860. At the time much was known about the relationship between electric and magnetic fields through the work of such people as Hans Christian Ørsted (1777-1851), Michael Faraday (1791-1867), and André-Marie Ampère. Maxwell provided a mathematical formalism for these relationships consisting of twenty partial differential equations in twenty unknowns. Later these equations were put into more compact notations, namely in terms of quaternions, only later to be cast in vector form. (Quaternions were introduced in 1843 by William Rowan Hamilton (1805-1865) as a four dimensional generalization of complex numbers.)

In vector form, the original Maxwell's equations are given as

   ∇ · D = ρ,
   ∇ × H = μ₀ J_tot,
   D = εE,
   J = σE,
   J_tot = J + ∂D/∂t,
   ∇ · J = −∂ρ/∂t,
   E = −∇φ − ∂A/∂t,
   μH = ∇ × A.   (7.4)

Note that Maxwell expressed the electric and magnetic fields in terms of the scalar and vector potentials, φ and A, respectively. Here H is the magnetic field, D is the electric displacement field, E is the electric field, J is the current density, ρ is the charge density, and σ is the conductivity. This set of equations differs from what we typically present in physics courses: several of these equations are defining quantities, given as defining relations between the various variables, and they are not cast as the core set of equations now referred to as Maxwell's equations. While the potentials are part of a course in electrodynamics, some of these equations, such as the continuity equation, ∇ · J = −∂ρ/∂t, have physical significance of their own.

Furthermore, the distinction between the magnetic field strength, H, and the magnetic flux density, B, only becomes important in the presence of magnetic materials. The two are related by B = μH, where μ is the magnetic permeability of a material. In the absence of magnetic materials, μ = μ₀, and in many applications of the propagation of electromagnetic waves, μ ≈ μ₀. In the case of a vacuum, μ = μ₀. Students are typically first introduced to B in introductory physics classes.

The equations that we will refer to as Maxwell's equations from now on are

   ∇ · E = ρ/ε₀,                    (Gauss' Law)
   ∇ · B = 0,
   ∇ × E = −∂B/∂t,                  (Faraday's Law)
   ∇ × B = μ₀J + μ₀ε₀ ∂E/∂t.        (Maxwell-Ampère Law)   (7.5)

We have noted the common names attributed to each law. The first law is Gauss' law; it allows one to determine the electric field due to specific charge distributions. The second law typically has no name attached to it, but in some cases it is called Gauss' law for magnetic fields; it simply states that there are no free magnetic poles. The third law is Faraday's law, indicating that changing magnetic flux induces electric potential differences. Lastly, the fourth law is a modification of Ampère's law, which states that electric currents produce magnetic fields. There are corresponding integral forms of these laws, which are often presented in introductory physics classes.

It should be noted that the last term in the fourth equation was introduced by Maxwell. As we will see, it made the equations mathematically consistent. In general, the divergence of the curl of any vector field is zero:

   ∇ · (∇ × V) = 0.

(For now, we will just rely on the fact that the del operator is a differential operator; later we will recall the exact form that you saw in your third semester calculus class.) We compute the divergence of the curl of the electric field. We find from Maxwell's equations that

   ∇ · (∇ × E) = −∇ · (∂B/∂t) = −(∂/∂t)(∇ · B) = 0.   (7.6)

Thus, the relation works here. However, before Maxwell, Ampère's law in differential form would have been written as

   ∇ × B = μ₀J.

Computing the divergence of the curl of the magnetic field, we would then have

   ∇ · (∇ × B) = μ₀ ∇ · J = −μ₀ ∂ρ/∂t.   (7.7)

Here we made use of the continuity equation. So, the vector identity does not apply here! Maxwell argued that we need to account for a changing charge distribution. He introduced what he called the displacement current, μ₀ε₀ ∂E/∂t, into the Ampère law. Now, computing the divergence of the curl of the magnetic field, we have

   ∇ · (∇ × B) = μ₀ ∇ · J + μ₀ε₀ (∂/∂t) ∇ · E
               = −μ₀ ∂ρ/∂t + μ₀ε₀ (∂/∂t)(ρ/ε₀)
               = 0.   (7.8)

So, Maxwell's introduction of the displacement current was not only physically important; it also made the equations mathematically consistent.

In this chapter we will review some of the vector analysis needed for the derivation of the three dimensional wave equation from Maxwell's equations. We will recall some of the basic vector operations (the dot and cross products); define the gradient, curl, and divergence; and introduce some of the standard vector identities that are often seen in physics courses.

7.2 Vector Analysis

7.2.1 Vector Products

At this point you might want to reread the first section of Chapter 3. In that chapter we introduced the formal definition of a vector space and some simple properties of vectors. We also discussed one of the common vector products, the dot product, which is defined as

   u · v = uv cos θ.   (7.9)

There is also a component form, which we can write as

   u · v = u₁v₁ + u₂v₂ + u₃v₃ = Σ_{k=1}^{3} u_k v_k.   (7.10)

One of the first physical examples using the dot product is the definition of work. The work done on a body by a constant force F during a displacement d is W = F · d. In the case of a nonconstant force, we have to add up the incremental contributions to the work, dW = F · dr, to obtain

   W = ∫ dW = ∫_C F · dr   (7.11)

over the path C. Note how much this looks like a path integral. It is a path integral, but the path lies in a real three dimensional space.

Another application of the dot product is in proving the Law of Cosines. Recall that this law gives the side opposite a given angle in terms of the angle and the other two sides of the triangle:

   c² = a² + b² − 2ab cos θ.   (7.12)

Consider the triangle in Figure 7.1. We draw the sides of the triangle as vectors; note that b = c + a. Also, recall that the square of the length of any vector can be written as the dot product of the vector with itself.

Thus, we have

   c² = c · c = (b − a) · (b − a) = a · a + b · b − 2a · b = a² + b² − 2ab cos θ.   (7.13)

Figure 7.1: The Law of Cosines can be derived using vectors.

We note that this also comes up in writing out inverse square laws in many applications. Namely, the vector a can locate a mass, or charge, and the vector b can point to an observation point. Then the inverse square law would involve the vector c, whose length is obtained as √(a² + b² − 2ab cos θ). Typically, one does not have a's and b's, but something like r₁ and r₂, or r and R. Then one is interested in approximating the expression of interest in terms of ratios like r/R when R ≫ r.

Another important vector product is the cross product. The cross product produces a vector, unlike the dot product, which results in a scalar. The magnitude of the cross product is given as

   |a × b| = ab sin θ.   (7.14)

Being a vector, we also have to specify the direction. The cross product produces a vector that is perpendicular to both vectors a and b; that is, the vector is normal to the plane in which these vectors live. There are two possible directions, and the direction taken is given by the right hand rule. This is shown in Figure 7.2. The direction can be determined using your right hand: curl your fingers from a through to b. The thumb will point in the direction of a × b.
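The vector proof of the Law of Cosines can be exercised on explicit numbers. A small sketch (not from the text; the vectors are arbitrary choices):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a = (3.0, 1.0, 2.0)                          # arbitrary side vectors
b = (1.0, 4.0, -2.0)
c = tuple(bi - ai for ai, bi in zip(a, b))   # c = b - a, so that b = c + a

na = math.sqrt(dot(a, a))
nb = math.sqrt(dot(b, b))
cos_theta = dot(a, b) / (na * nb)            # theta = angle between a and b
```

The squared length of c agrees with a² + b² − 2ab cos θ to machine precision, because the dot-product expansion above is an identity, not an approximation.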

Figure 7.2: The cross product is shown. The direction is obtained using the right hand rule: curl your fingers from a through to b; the thumb will point in the direction of a × b.

One of the first occurrences of the cross product in physics is in the definition of the torque, τ = r × F. Consider a rigid body in which a force is applied at a position r from the axis of rotation. (See Figure 7.3.) Then this force produces a torque with respect to the axis. Recall that the torque is the analogue to the force: a net torque will cause an angular acceleration. The direction of the torque is given by the right hand rule: point your fingers in the direction of r and rotate them towards F. Your thumb will point in the direction of the torque. In the figure this would be out of the page, indicating that the bar would rotate in a counterclockwise direction if this were the only force acting on it.

Figure 7.3: A force applied at a point located at r from the axis of rotation produces a torque τ = r × F with respect to the axis.

Another example is that of a body rotating about an axis, as shown in Figure 7.4. We locate the body with a position vector pointing from the origin of the coordinate system to the body. The tangential velocity of the body is related to the angular velocity by a cross product,

   v = ω × r.

The direction of the angular velocity is given by a right hand rule: curl the fingers of your right hand in the direction of the motion of the rotating mass. Your thumb will point in the direction of ω. Counterclockwise motion produces a positive angular velocity and clockwise motion gives a negative angular velocity. Note that for an origin at the center of rotation of the mass, we obtain the familiar expression v = rω.

Figure 7.4: A mass rotates at an angular velocity ω about a fixed axis of rotation. The tangential velocity with respect to a given origin is given by v = ω × r.

Figure 7.5: The magnitude of the cross product gives the area of the parallelogram defined by a and b.

There is also a geometric interpretation of the cross product. Consider the vectors a and b in Figure 7.5. Now draw a perpendicular from the tip of b to the vector a. This forms a triangle of height h. Slide the triangle over to form a rectangle of base a and height h. The area of this rectangle is

   A = ah = a(b sin θ) = |a × b|.   (7.15)

Therefore, the magnitude of the cross product is the area of the parallelogram defined by the vectors a and b.

The dot product was shown to have a simple form in terms of the components of the vectors. Similarly, we can write the cross product in component form. In order to do this we need a few properties of the cross product.

First of all, the cross product is not commutative. In fact, it is anticommutative:

   u × v = −v × u.

A simple consequence of this is that v × v = 0: just replace u with v in the anticommutativity rule and you have v × v = −v × v. Something that is its own negative must be zero.

The cross product also satisfies the distributive properties

   u × (v + w) = u × v + u × w,

and

   u × (av) = (au) × v = a(u × v).

We would like to expand the cross product of two vectors in terms of their components. Recall that we can expand any vector v as

   v = Σ_{k=1}^{n} v_k e_k,   (7.16)

where the e_k's are the standard basis vectors. Thus, for the cross product we have

   u × v = (Σ_{k=1}^{n} u_k e_k) × (Σ_{k=1}^{n} v_k e_k).

Expanding with the distributive properties, we will end up with cross products of the basis vectors:

   u × v = (Σ_i u_i e_i) × (Σ_j v_j e_j) = Σ_i Σ_j u_i v_j e_i × e_j.   (7.17)

First of all, the cross products e_i × e_j vanish when i = j by the anticommutativity of the cross product. For i ≠ j, it is slightly more difficult in this general index formalism; for the typical basis, {i, j, k}, it is simple. Imagine computing i × j. This is a vector of length |i × j| = |i||j| sin 90° = 1. It is perpendicular to both vectors; thus, it is either k or −k. Using the right hand rule, we have i × j = k. Similarly, we find the following:

   i × j = k,    j × k = i,    k × i = j,
   j × i = −k,   k × j = −i,   i × k = −j.   (7.18)

These cross products are simple to compute. Inserting them into our cross product for vectors in R³, we have

   u × v = (u₂v₃ − u₃v₂)i + (u₃v₁ − u₁v₃)j + (u₁v₂ − u₂v₁)k.   (7.19)

While this form is correct and useful, there are other forms that help in verifying identities or in making computation simpler, with less memorization. However, some of these can lead to problems for the novice, as dealing with indices is at first daunting. First of all, there is the familiar computation using determinants. The same result as above can be obtained by noting that

   u × v = det [ i  j  k ; u₁ u₂ u₃ ; v₁ v₂ v₃ ]
         = (u₂v₃ − u₃v₂)i + (u₃v₁ − u₁v₃)j + (u₁v₂ − u₂v₁)k,   (7.20)

where the 3×3 determinant is expanded along its first row into 2×2 minors.

A more compact form can be obtained by introducing the completely antisymmetric symbol, ε_ijk. This symbol is defined by the relations

   ε₁₂₃ = ε₂₃₁ = ε₃₁₂ = 1   and   ε₃₂₁ = ε₂₁₃ = ε₁₃₂ = −1,

and all other combinations, like ε₁₁₃, vanish. Note that all indices must differ; if the order of the indices is a cyclic permutation of {1, 2, 3}, then the value is +1. With this notation, we have that

   e_i × e_j = Σ_{k=1}^{3} ε_ijk e_k.   (7.21)

For example, consider the product e₂ × e₁:

   e₂ × e₁ = Σ_{k=1}^{3} ε₂₁ₖ e_k = ε₂₁₁ e₁ + ε₂₁₂ e₂ + ε₂₁₃ e₃ = −e₃.

Note that the first two terms vanished because of repeated indices; in the last term we used ε₂₁₃ = −1.

We can now write out the general cross product as

   u × v = Σ_i Σ_j u_i v_j e_i × e_j = Σ_{i,j,k=1}^{3} ε_ijk u_i v_j e_k.   (7.22)

Sometimes it is useful to note that the kth component of the cross product is given by

   (u × v)_k = Σ_{i,j=1}^{3} ε_ijk u_i v_j.   (7.23)

It is helpful to write out enough terms in these sums until you get familiar with manipulating the indices.
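The index formalism is easy to test in code. A sketch (not from the text; indices run over 0, 1, 2 rather than 1, 2, 3): the closed form (i − j)(j − k)(k − i)/2 reproduces ε_ijk, and building the cross product from it matches the component formula (7.19).

```python
def eps(i, j, k):
    """Levi-Civita symbol on the index values 0, 1, 2: +1 for cyclic
    permutations of (0, 1, 2), -1 for anticyclic ones, 0 on repeats."""
    return (i - j) * (j - k) * (k - i) // 2

def cross_eps(u, v):
    """k-th component of u x v as sum_ij eps(i, j, k) u_i v_j."""
    return tuple(sum(eps(i, j, k) * u[i] * v[j]
                     for i in range(3) for j in range(3))
                 for k in range(3))

def cross(u, v):
    """The familiar component formula (7.19)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])
```

Both routes give the same vector on any input, e.g. (1, 2, 3) × (4, 5, 6) = (−3, 6, −3).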

Also, in some more advanced texts, or in the case of relativistic computations with tensors, the summation symbol is suppressed, and one simply writes

   (u × v)_k = ε_ijk u_i v_j,

where it is understood that summation is performed over repeated indices. This is called the Einstein summation convention.

We should also note that, since the cross product is given as a determinant, the determinant itself can be written with the permutation symbol: for a matrix with entries a_ij,

   det [ a₁₁ a₁₂ a₁₃ ; a₂₁ a₂₂ a₂₃ ; a₃₁ a₃₂ a₃₃ ] = Σ_{i,j,k=1}^{3} ε_ijk a₁ᵢ a₂ⱼ a₃ₖ.

One useful identity is

   ε_jki ε_jℓm = δ_kℓ δ_im − δ_km δ_iℓ,

where δ_ij is the Kronecker delta. Note that the Einstein summation convention is used in this identity: summing over j is understood, and the indices only take the values 1, 2, or 3. So the left side is really a sum of three terms:

   ε_jki ε_jℓm = ε₁ki ε₁ℓm + ε₂ki ε₂ℓm + ε₃ki ε₃ℓm.

This identity is simple to understand. For nonzero values of the Levi-Civita symbol, we have to require that all indices differ in each factor on the left side of the equation: j ≠ k ≠ i and j ≠ ℓ ≠ m. Since the first slot of each factor holds the same j, for a given j there are only two possibilities for the remaining slots: either k = ℓ and i = m, or k = m and i = ℓ. The first case yields the term δ_kℓ δ_im, and the other case yields the second term on the right side of the identity. We just need to get the signs right: changing the order of ℓ and m introduces a minus sign, and a little care will show that the ordering above is correct.

We will end this section by looking at the triple products. There are only two ways to construct triple products. One starts with the cross product b × c, which is a vector. The only thing we can do with it is to multiply it by the vector a, to yield either a scalar or a vector. In the first case we obtain the triple scalar product, a · (b × c).
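The εδ identity above can be verified by brute force over all index values, which is a good exercise in keeping the slots straight. A sketch (not from the text; indices run over 0, 1, 2 rather than 1, 2, 3):

```python
def eps(i, j, k):
    """Levi-Civita symbol on the index values 0, 1, 2."""
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    """Kronecker delta."""
    return 1 if i == j else 0

def lhs(k, i, l, m):
    """eps_jki eps_jlm, summed over the repeated index j."""
    return sum(eps(j, k, i) * eps(j, l, m) for j in range(3))

def rhs(k, i, l, m):
    """delta_kl delta_im - delta_km delta_il."""
    return delta(k, l) * delta(i, m) - delta(k, m) * delta(i, l)

checks = [lhs(k, i, l, m) == rhs(k, i, l, m)
          for k in range(3) for i in range(3)
          for l in range(3) for m in range(3)]
```

All 81 index combinations agree, confirming the identity.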

Writing a · b × c could only mean one thing: if we computed a · b first, then we would get a scalar, and the cross product of a scalar with the vector c is not defined. So, leaving off the parentheses could only mean that we want the triple scalar product, by convention. Thus, we do not need the parentheses.

Let's consider the component form of this product. We will use the Einstein summation convention and the fact that the permutation symbol is cyclic in ijk:

   a · (b × c) = a_i (b × c)_i = ε_jki a_i b_j c_k = ε_ijk a_i b_j c_k = (a × b)_k c_k = (a × b) · c.   (7.24)

Thus, we have proven that

   a · (b × c) = (a × b) · c.

Now, imagine how much writing would be involved if we had expanded everything out in terms of all of the components.

There is a geometric interpretation of the scalar triple product. Consider the three vectors drawn as in Figure 7.6. If they do not all lie in a plane, then they form the sides of a parallelepiped, and the volume is given by the triple scalar product. The cross product a × b gives the area of the base, as we had seen earlier, and this cross product is perpendicular to the base. The dot product of c with the direction of this cross product gives the height of the parallelepiped. So, the triple scalar product, the height times the area of the base, is the volume of the parallelepiped. Actually, one gets a signed volume, as the cross product could be pointing below the base.

Figure 7.6: Three non-coplanar vectors define a parallelepiped.
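The triple scalar product identity just proved can be checked on explicit integer vectors, where the arithmetic is exact, along with the related identity a × (b × c) = b(a · c) − c(a · b) for the triple cross product. A sketch, not from the text:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def bac_cab(a, b, c):
    """b (a . c) - c (a . b)."""
    ac, ab = dot(a, c), dot(a, b)
    return tuple(bi * ac - ci * ab for bi, ci in zip(b, c))

a, b, c = (1, 2, 3), (4, 0, -1), (2, 5, 7)   # arbitrary integer vectors
```

Here a · (b × c) = (a × b) · c = 5, the signed volume of the parallelepiped on a, b, c.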

The second type of triple product is the triple cross product, a × (b × c). In this case we cannot drop the parentheses, as that would lead to a real ambiguity. Let's think a little about this product. The vector b × c is a vector that is perpendicular to both b and c. Computing the triple cross product would then produce a vector perpendicular to both a and b × c. But a vector perpendicular to b × c must lie in the plane spanned by b and c; therefore, the triple cross product must lie in the plane spanned by these vectors. In fact, there is an identity that tells us exactly the right combination of the vectors b and c. It is given by

   a × (b × c) = b(a · c) − c(a · b).   (7.25)

This rule is called the BAC-CAB rule because of the order of the right side of this equation. It will be left to the reader to prove this result.

7.2.2 Div, Grad, Curl

In studying functions of one variable in calculus, one is introduced to the derivative, df/dx. The derivative has several meanings. The standard meaning is that it gives the slope of the graph of f(x) at x. The derivative also tells us how rapidly f(x) varies when x is changed by dx. Recall that dx is called a differential; we can think of it as an infinitesimal increment in x. Then changing x by dx results in a change in f of

   df = (df/dx) dx.

We can extend this idea to functions of several variables. Consider the temperature T(x, y, z) at a point in space. The change in temperature depends on the direction in which one moves in space. Extending the above relation between differentials, we have

   dT = (∂T/∂x) dx + (∂T/∂y) dy + (∂T/∂z) dz.   (7.26)

If we introduce the vector

   dr = dx i + dy j + dz k,   (7.27)

and define

   ∇T = (∂T/∂x) i + (∂T/∂y) j + (∂T/∂z) k,   (7.28)

then we can write

   dT = ∇T · dr.   (7.29)

Equation (7.28) defines the gradient of the scalar function T, and Equation (7.29) gives the change in T as one moves in the direction dr. Using the definition of the dot product, we also have

   dT = |∇T||dr| cos θ.

Note that by fixing |dr| and varying θ, the maximum value of dT is obtained in the direction of the gradient. Thus, we see that the gradient can be viewed as an operator acting on T:

   ∇T = (i ∂/∂x + j ∂/∂y + k ∂/∂z) T.   (7.30)

The operator

   ∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z

is called the del, or gradient, operator. It is a differential vector operator. It can act on scalar functions to produce a vector field. However, we can also allow the del operator to act on vector fields. Recall that a vector field is simply a vector valued function. For example, a force field is a function defined at points in space indicating the force that would act on a mass placed at that location. We could denote it by F(x, y, z). Writing the vector field in component form,

   F = F₁(x, y, z) i + F₂(x, y, z) j + F₃(x, y, z) k.

How can we combine the vector operator ∇ and a vector field? Well, we could "multiply" them: we could either compute the dot product, ∇ · F, or we could compute the cross product, ∇ × F. The first expression is called the divergence of the vector field and the second is called the curl of the vector field. In some texts they are denoted by div F and curl F. These are typically encountered in a third semester calculus course. The divergence is computed the same as any other dot product.

Writing the vector field in component form, we find that the divergence is simply given as

   ∇ · F = (i ∂/∂x + j ∂/∂y + k ∂/∂z) · (F₁ i + F₂ j + F₃ k)
         = ∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z.   (7.31)

Similarly, we can compute the curl of F. Using the determinant form, we have

   ∇ × F = (i ∂/∂x + j ∂/∂y + k ∂/∂z) × (F₁ i + F₂ j + F₃ k)
         = det [ i  j  k ; ∂/∂x ∂/∂y ∂/∂z ; F₁ F₂ F₃ ]
         = (∂F₃/∂y − ∂F₂/∂z) i + (∂F₁/∂z − ∂F₃/∂x) j + (∂F₂/∂x − ∂F₁/∂y) k.   (7.32)

These operations also have interpretations. The divergence measures how much the vector field F spreads from a point. When the divergence of a vector field is nonzero around a point, that is an indication that there is a source (div F > 0) or a sink (div F < 0) there. For example, Gauss' law, ∇ · E = ρ/ε₀, indicates that there are sources contributing to the electric field: for a single charge, the field lines point radially towards (sink) or away from (source) the charge. A field in which the divergence is zero is called divergenceless.

The curl is an indication of a rotational field; it is a measure of how much a field curls around a point. Consider the flow of a stream. The velocity of each element of fluid can be represented by a velocity field. If the curl of the field is nonzero, then when we drop a leaf into the stream we will see it begin to rotate about some point. A field that has zero curl is called irrotational.

Maxwell's equations, as given in this chapter, are in differential form and only describe the physics locally. At times we would also like to provide global information, that is, information over a finite region. In this case one can derive various integral theorems, the finale of a three semester calculus sequence. We will not delve into these theorems here, as this would take us away from our goal of deriving the three dimensional wave

equation. However, these integral theorems are important and useful in deriving local conservation laws. They are all different versions of a generalized Fundamental Theorem of Calculus:

1. The Fundamental Theorem of Calculus: ∫_a^b (∂f/∂x) dx = f(b) − f(a).
2. ∫_C ∇T · dr = T(b) − T(a).
3. Gauss' Divergence Theorem: ∫_V ∇ · F dV = ∮_S F · da.
4. Stokes' Theorem: ∫_S (∇ × F) · da = ∮_C F · dr.

7.2.3 Vector Identities

In this section we list the common vector identities.

1. Triple Products:
   (a) A · (B × C) = B · (C × A) = C · (A × B)
   (b) A × (B × C) = B(A · C) − C(A · B)

2. First Derivatives:
   (a) ∇(fg) = f ∇g + g ∇f
   (b) ∇(A · B) = A × (∇ × B) + B × (∇ × A) + (A · ∇)B + (B · ∇)A
   (c) ∇ · (fA) = f ∇ · A + A · ∇f
   (d) ∇ · (A × B) = B · (∇ × A) − A · (∇ × B)
   (e) ∇ × (fA) = f ∇ × A − A × ∇f
   (f) ∇ × (A × B) = (B · ∇)A − (A · ∇)B + A(∇ · B) − B(∇ · A)

3. Second Derivatives:
   (a) ∇ · (∇ × A) = 0
   (b) ∇ × (∇f) = 0
   (c) ∇ · (∇f × ∇g) = 0
   (d) ∇²(fg) = f ∇²g + 2∇f · ∇g + g ∇²f
   (e) ∇ · (f ∇g − g ∇f) = f ∇²g − g ∇²f
   (f) ∇ × (∇ × A) = ∇(∇ · A) − ∇²A
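As a concrete instance of Gauss' Divergence Theorem (a sketch, not from the text): for F = (x², y², z²) on the unit cube, div F = 2x + 2y + 2z integrates to 3 over the volume, while the outward flux is 1 through each of the faces x = 1, y = 1, z = 1 and 0 through the three faces through the origin, also totalling 3.

```python
def volume_integral(n=40):
    """Midpoint-rule integral of div F = 2x + 2y + 2z over the unit cube;
    the midpoint rule is exact for this linear integrand."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                total += (2.0 * x + 2.0 * y + 2.0 * z) * h ** 3
    return total

def boundary_flux(n=40):
    """Outward flux of F = (x^2, y^2, z^2) through the cube's surface.
    F . n vanishes on the three faces through the origin and equals 1
    at every point of the faces x = 1, y = 1, z = 1."""
    h = 1.0 / n
    total = 0.0
    for _face in range(3):            # the faces x = 1, y = 1, z = 1
        for _i in range(n):
            for _j in range(n):
                total += 1.0 * h ** 2
    return total
```

Both sides of the theorem come out to 3, as the analytic computation predicts.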

7.3 Electromagnetic Waves

We are now ready to derive the wave equation for electromagnetic waves. We will consider the case of free space, in which there are no free charges or currents and the waves propagate in a vacuum. We then have Maxwell's equations in the form

   ∇ · E = 0,
   ∇ · B = 0,
   ∇ × E = −∂B/∂t,
   ∇ × B = μ₀ε₀ ∂E/∂t.   (7.33)

We will derive the wave equation for the electric field; you should confirm that a similar result can be obtained for the magnetic field. Consider the expression ∇ × (∇ × E). We note that our identities give

   ∇ × (∇ × E) = ∇(∇ · E) − ∇²E.

However, the divergence of E is zero, so we have

   ∇ × (∇ × E) = −∇²E.   (7.34)

We can also use Faraday's Law on the left side of this equation to obtain

   ∇ × (∇ × E) = ∇ × (−∂B/∂t).

Interchanging the time and space derivatives, and using the Ampère-Maxwell Law, we find

   ∇ × (∇ × E) = −(∂/∂t)(∇ × B) = −(∂/∂t)(μ₀ε₀ ∂E/∂t) = −μ₀ε₀ ∂²E/∂t².   (7.35)

Combining the two expressions for ∇ × (∇ × E), we have the sought result:

   μ₀ε₀ ∂²E/∂t² = ∇²E.

This is the three dimensional wave equation for an oscillating electric field. A similar equation can be found for the magnetic field:

   μ₀ε₀ ∂²B/∂t² = ∇²B.

Recalling that ε₀ = 8.85 × 10⁻¹² C²/N·m² and μ₀ = 4π × 10⁻⁷ N/A², one finds that the wave speed in a vacuum is c = 1/√(μ₀ε₀) = 3 × 10⁸ m/s.

One can derive more general equations. For example, we could look for waves in what are called linear media. In this case one has D = εE and B = μH. Here ε is called the electric permittivity and μ is the magnetic permeability of the material. Then the wave speed in a vacuum, c, is replaced by the wave speed in the medium, v. It is given by

   v = 1/√(με) = c/n.

Here n = √(με/μ₀ε₀) is the index of refraction. In many materials μ ≈ μ₀. Introducing the dielectric constant, κ = ε/ε₀, one finds that n ≈ √κ.

The wave equations lead to many of the properties of the electric and magnetic fields. We can also study systems in which these waves are confined, such as waveguides. In such cases we can impose boundary conditions and determine which modes are allowed to propagate within certain structures, such as optical fibers.

However, these equations involve unknown vector fields: we have to solve for several inter-related component functions. In the next chapter we will look at simpler models in order to get some ideas as to how one can solve scalar wave equations in higher dimensions.
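The numbers quoted above can be combined directly (a sketch, not from the text; the dielectric constant κ = 2.25 is an illustrative value, roughly that of glass):

```python
import math

eps0 = 8.85e-12               # C^2 / (N m^2), quoted above
mu0 = 4.0 * math.pi * 1e-7    # N / A^2, quoted above

c = 1.0 / math.sqrt(eps0 * mu0)   # wave speed in vacuum

kappa = 2.25                  # illustrative dielectric constant (about glass)
n_index = math.sqrt(kappa)    # n ~ sqrt(kappa) when mu ~ mu0
v = c / n_index               # wave speed in the medium
```

The computed c is within a fraction of a percent of 3 × 10⁸ m/s, and the wave in the medium travels slower by the factor n = 1.5.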

Chapter 8

Problems in Higher Dimensions

In this chapter we will explore several generic examples of the solution of initial-boundary value problems involving higher spatial dimensions. These are described by higher dimensional partial differential equations. The method of solution will be the method of separation of variables. This will result in a system of ordinary differential equations for each problem. Of these equations, several are boundary value problems, which are eigenvalue problems. We solve the eigenvalue problems for the eigenvalues and eigenfunctions, leading to a set of product solutions satisfying the partial differential equation and the boundary conditions. The general solution can then be written as a linear superposition of the product solutions.

As you go through these examples, you will see some common features. For example, the main equations that we have seen are the heat equation and the wave equation. For higher dimensional problems these take the form

$$u_t = k \nabla^2 u, \quad (8.1)$$
$$u_{tt} = c^2 \nabla^2 u. \quad (8.2)$$

One can first separate out the time dependence. Let $u(\mathbf{r}, t) = \phi(\mathbf{r}) T(t)$. For these two equations we have

$$T' \phi = k T \nabla^2 \phi, \quad (8.3)$$
$$T'' \phi = c^2 T \nabla^2 \phi. \quad (8.4)$$

Separating out the time and space dependence, we find

$$\frac{1}{k} \frac{T'}{T} = \frac{\nabla^2 \phi}{\phi} = -\lambda, \quad (8.5)$$
$$\frac{1}{c^2} \frac{T''}{T} = \frac{\nabla^2 \phi}{\phi} = -\lambda. \quad (8.6)$$

Note that in each case we have a function of time equal to a function of the spatial variables. Thus, they must be a constant, $-\lambda$. The sign of $\lambda$ is chosen because we expect decaying solutions in time for the heat equation and oscillations in time for the wave equation. This leads to the respective equations for $T(t)$:

$$T' = -\lambda k T, \quad (8.7)$$
$$T'' + c^2 \lambda T = 0. \quad (8.8)$$

These are easily solved. We have

$$T(t) = T(0) e^{-\lambda k t}, \quad (8.9)$$
$$T(t) = a \cos \omega t + b \sin \omega t, \qquad \omega = c \sqrt{\lambda}. \quad (8.10)$$

In both cases the spatial equation becomes

$$\nabla^2 \phi + \lambda \phi = 0. \quad (8.11)$$

This is called the Helmholtz equation. For one dimensional problems, which we have already solved, the Helmholtz equation takes the form $\phi'' + \lambda \phi = 0$. We had to impose the boundary conditions and found that there were a discrete set of eigenvalues, $\lambda_n$, and associated eigenfunctions, $\phi_n$.

In higher dimensional problems we need to further separate out the space dependence. We will again use the boundary conditions and find the eigenvalues and eigenfunctions, though they will be labelled with more than one index. The resulting boundary value problems belong to a class of eigenvalue problems called Sturm-Liouville problems. We will explore the properties of Sturm-Liouville eigenvalue problems later in the chapter. Also, in the solution of the ordinary differential equations we will find solutions other than the sines and cosines that we have seen in previous problems.

These special functions also form sets of orthogonal functions, leading to general Fourier-type series expansions. A more general discussion of these special functions is provided in the next chapter; we will refer to sections of that chapter for properties of the special functions that are encountered in this chapter.

8.1 Vibrations of Rectangular Membranes

Our first example will be the study of the vibrations of a rectangular membrane. You can think of this as a drum with a rectangular cross section, as shown in Figure 8.1. We stretch the membrane over the drumhead and fasten the material to the boundaries of the rectangle. The vibrating membrane is described by its height from equilibrium, $u(x, y, t)$. This problem is a much simpler example of higher dimensional vibrations than that posed by the oscillating electric and magnetic fields in the last chapter.

The problem is given by a partial differential equation,

$$u_{tt} = c^2 \nabla^2 u = c^2 (u_{xx} + u_{yy}), \quad t > 0, \quad 0 < x < L, \quad 0 < y < H, \quad (8.12)$$

a set of boundary conditions,

$$u(0, y, t) = 0, \quad u(L, y, t) = 0, \quad t > 0, \quad 0 < y < H,$$
$$u(x, 0, t) = 0, \quad u(x, H, t) = 0, \quad t > 0, \quad 0 < x < L, \quad (8.13)$$

and a pair of initial conditions (since the equation is second order in time),

$$u(x, y, 0) = f(x, y), \quad u_t(x, y, 0) = g(x, y). \quad (8.14)$$

The first step is to separate the variables: $u(x, y, t) = X(x) Y(y) T(t)$. Inserting this into the wave equation, we have

$$X(x) Y(y) T''(t) = c^2 \left[ X''(x) Y(y) T(t) + X(x) Y''(y) T(t) \right].$$

Dividing by both $u(x, y, t)$ and $c^2$, we obtain

$$\underbrace{\frac{1}{c^2} \frac{T''}{T}}_{\text{Function of } t} = \underbrace{\frac{X''}{X} + \frac{Y''}{Y}}_{\text{Function of } x \text{ and } y} = -\lambda. \quad (8.15)$$

Figure 8.1: The rectangular membrane of length $L$ and width $H$. There are fixed boundary conditions along the edges.

We see that we have a function of $t$ equal to a function of $x$ and $y$. Thus, both expressions are constant. We expect oscillations in time, so we choose the constant to be $\lambda > 0$. (Note: As usual, the primes mean differentiation with respect to the specific dependent variable, so there should be no ambiguity.) These lead to two equations:

$$T'' + c^2 \lambda T = 0 \quad (8.16)$$

and

$$\frac{X''}{X} + \frac{Y''}{Y} = -\lambda. \quad (8.17)$$

The first equation is easily solved. We have

$$T(t) = a \cos \omega t + b \sin \omega t, \quad (8.18)$$

where

$$\omega = c \sqrt{\lambda}. \quad (8.19)$$

This is the angular frequency in terms of the separation constant, or eigenvalue. It leads to the frequency of oscillations for the various harmonics of the vibrating membrane as

$$\nu = \frac{\omega}{2\pi} = \frac{c}{2\pi} \sqrt{\lambda}. \quad (8.20)$$

Once we know $\lambda$, we can compute these frequencies.

Now we solve the spatial equation. Again, we need to do a separation of variables. Rearranging the spatial equation, we have

$$\underbrace{\frac{X''}{X}}_{\text{Function of } x} = \underbrace{-\frac{Y''}{Y} - \lambda}_{\text{Function of } y} = -\mu. \quad (8.21)$$

Here we have a function of $x$ equal to a function of $y$. So, the two expressions are constant, which we indicate with a second separation constant, $-\mu < 0$. We pick the sign in this way because we expect oscillatory solutions for $X(x)$. This leads to two equations:

$$X'' + \mu X = 0,$$
$$Y'' + (\lambda - \mu) Y = 0. \quad (8.22)$$

We now need to use the boundary conditions. We have $u(0, y, t) = 0$ for all $t > 0$ and $0 < y < H$. This implies that $X(0) Y(y) T(t) = 0$ for all $t$ and $y$ in the domain. This is only true if $X(0) = 0$. Similarly, we find that $X(L) = 0$, $Y(0) = 0$, and $Y(H) = 0$. We note that homogeneous boundary conditions are important in carrying out this process. Nonhomogeneous boundary conditions could be imposed, but the techniques are a bit more complicated and we will not discuss them here.

The boundary value problems we need to solve are:

$$X'' + \mu X = 0, \quad X(0) = 0, \quad X(L) = 0,$$
$$Y'' + (\lambda - \mu) Y = 0, \quad Y(0) = 0, \quad Y(H) = 0. \quad (8.23)$$

We have seen the first of these problems before, except with a $\mu$ instead of a $\lambda$. The solutions are

$$X(x) = \sin \frac{n\pi x}{L}, \qquad \mu = \left( \frac{n\pi}{L} \right)^2, \quad n = 1, 2, 3, \ldots.$$

The second equation is solved in the same way. The differences are that our "eigenvalue" is $\lambda - \mu$, the independent variable is $y$, and the interval is $[0, H]$. So, we can quickly write down the solutions as

$$Y(y) = \sin \frac{m\pi y}{H}, \qquad \lambda - \mu = \left( \frac{m\pi}{H} \right)^2, \quad m = 1, 2, 3, \ldots.$$

We have successfully carried out the separation of variables for the wave equation for the vibrating rectangular membrane. The product solutions can be written as

$$u_{nm} = (a \cos \omega_{nm} t + b \sin \omega_{nm} t) \sin \frac{n\pi x}{L} \sin \frac{m\pi y}{H}. \quad (8.24)$$

Recall that $\omega$ is given in terms of $\lambda$. We have that

$$\lambda_{nm} - \mu_n = \left( \frac{m\pi}{H} \right)^2 \quad \text{and} \quad \mu_n = \left( \frac{n\pi}{L} \right)^2.$$

Therefore,

$$\lambda_{nm} = \left( \frac{n\pi}{L} \right)^2 + \left( \frac{m\pi}{H} \right)^2. \quad (8.25)$$

So,

$$\omega_{nm} = c \sqrt{ \left( \frac{n\pi}{L} \right)^2 + \left( \frac{m\pi}{H} \right)^2 }. \quad (8.26)$$

The most general solution can now be written as a linear combination of the product solutions, and we can solve for the expansion coefficients that will lead to a solution satisfying the initial conditions. However, we will first concentrate on the two dimensional harmonics of this membrane.

For the vibrating string the $n$th harmonic corresponded to the function $\sin \frac{n\pi x}{L}$. The various harmonics corresponded to the pure tones supported by the string. These then lead to the corresponding frequencies that one would hear. The actual shapes of the harmonics could be sketched by locating the nodes, or places on the string that did not move.

In the same way, we can explore the shapes of the harmonics of the vibrating membrane. These are given by the spatial functions

$$\phi_{nm}(x, y) = \sin \frac{n\pi x}{L} \sin \frac{m\pi y}{H}. \quad (8.27)$$

Instead of nodes, we will look for the nodal curves, or nodal lines. These are the points $(x, y)$ at which $\phi(x, y) = 0$. Of course, these depend on the indices, $n$ and $m$.
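The mode frequencies (8.26) are easy to tabulate numerically. The following sketch (the values $c = L = H = 1$ are arbitrary illustrative choices) also shows a feature worth noting: unlike the string, the ratios $\omega_{nm}/\omega_{11}$ are generally not integers, so the rectangular membrane does not produce a harmonic overtone series.

```python
import math

def omega(n, m, c=1.0, L=1.0, H=1.0):
    """Angular frequency omega_nm = c * sqrt((n pi/L)^2 + (m pi/H)^2)."""
    return c * math.sqrt((n * math.pi / L) ** 2 + (m * math.pi / H) ** 2)

# Ratios of a few mode frequencies to the fundamental omega_11.
modes = [(1, 1), (2, 1), (2, 2), (3, 1)]
ratios = [omega(n, m) / omega(1, 1) for n, m in modes]
for (n, m), rho in zip(modes, ratios):
    print(f"omega_{n}{m} / omega_11 = {rho:.4f}")
```

For a square membrane the (2,2) mode happens to be exactly twice the fundamental, but the (2,1) mode sits at the irrational ratio $\sqrt{5/2}$.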

Figure 8.2: The first few modes of the vibrating rectangular membrane. The dashed lines show the nodal lines indicating the points that do not move for the particular mode.

For example, when $n = 1$ and $m = 1$, we have

$$\sin \frac{\pi x}{L} \sin \frac{\pi y}{H} = 0.$$

This is zero when either

$$\sin \frac{\pi x}{L} = 0 \quad \text{or} \quad \sin \frac{\pi y}{H} = 0.$$

Of course, this can only happen for $x = 0, L$ and $y = 0, H$. Thus, there are no interior nodal lines.

When $n = 2$ and $m = 1$, we have $y = 0, H$ and

$$\sin \frac{2\pi x}{L} = 0,$$

i.e., $x = 0, \frac{L}{2}, L$. Thus, there is one interior nodal line at $x = \frac{L}{2}$. These points stay fixed during the oscillation and all other points oscillate on either side of this line. A similar solution shape results for the (1,2)-mode, i.e., $n = 1$ and $m = 2$.

Figure 8.3: A three dimensional view of the vibrating rectangular membrane for the lowest modes.

In Figure 8.2 we show the nodal lines for several modes for $n, m = 1, 2, 3$. The blocked regions appear to vibrate independently. A better view is the three dimensional view depicted in Figure 8.3. The frequencies of vibration are easily computed using the formula for $\omega_{nm}$.

For completeness, we now see how one satisfies the initial conditions. The general solution is given by a linear superposition of the product solutions. There are two indices to sum over. Thus, the general solution is

$$u(x, y, t) = \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} (a_{nm} \cos \omega_{nm} t + b_{nm} \sin \omega_{nm} t) \sin \frac{n\pi x}{L} \sin \frac{m\pi y}{H}. \quad (8.28)$$

The first initial condition is $u(x, y, 0) = f(x, y)$. Setting $t = 0$ in the general solution, we obtain

$$f(x, y) = \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} a_{nm} \sin \frac{n\pi x}{L} \sin \frac{m\pi y}{H}. \quad (8.29)$$

This is a double sine series. The goal is to find the unknown coefficients $a_{nm}$. This can be done knowing what we already know about Fourier sine series. We can write this as the single sum

$$f(x, y) = \sum_{n=1}^{\infty} A_n(y) \sin \frac{n\pi x}{L}, \quad (8.30)$$

where

$$A_n(y) = \sum_{m=1}^{\infty} a_{nm} \sin \frac{m\pi y}{H}. \quad (8.31)$$

These are two sine series. Recalling that the coefficients of sine series can be computed as integrals, we have

$$A_n(y) = \frac{2}{L} \int_0^L f(x, y) \sin \frac{n\pi x}{L} \, dx,$$
$$a_{nm} = \frac{2}{H} \int_0^H A_n(y) \sin \frac{m\pi y}{H} \, dy. \quad (8.32)$$

Inserting the first result into the second, we have

$$a_{nm} = \frac{4}{LH} \int_0^H \int_0^L f(x, y) \sin \frac{n\pi x}{L} \sin \frac{m\pi y}{H} \, dx \, dy. \quad (8.33)$$
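The double integral (8.33) is easy to approximate numerically. As a sanity check, if the initial profile $f$ is itself one of the modes, say the (2,1) mode, then every coefficient should vanish except $a_{21} = 1$. A sketch using the midpoint rule (the grid size $N$ is an arbitrary choice):

```python
import math

L, H, N = 1.0, 1.0, 100

def f(x, y):
    # Take the initial profile to be the (2,1) mode itself.
    return math.sin(2 * math.pi * x / L) * math.sin(math.pi * y / H)

def a(n, m):
    """Midpoint-rule approximation of
    a_nm = (4/LH) * double integral of f(x,y) sin(n pi x/L) sin(m pi y/H)."""
    dx, dy = L / N, H / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * dx
        sx = math.sin(n * math.pi * x / L)
        for j in range(N):
            y = (j + 0.5) * dy
            total += f(x, y) * sx * math.sin(m * math.pi * y / H)
    return 4.0 / (L * H) * total * dx * dy

print(a(2, 1), a(1, 1))
```

For trigonometric integrands like these, the midpoint rule is essentially exact, so the output is 1 and 0 to high accuracy.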

We can carry out the same process to satisfy the second initial condition, $u_t(x, y, 0) = g(x, y)$, for the initial velocity of each point. Inserting the general solution into this initial condition, we have

$$g(x, y) = \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} b_{nm} \omega_{nm} \sin \frac{n\pi x}{L} \sin \frac{m\pi y}{H}. \quad (8.34)$$

Again, we have a double sine series. But now we can write down the Fourier coefficients quickly:

$$b_{nm} = \frac{4}{\omega_{nm} LH} \int_0^H \int_0^L g(x, y) \sin \frac{n\pi x}{L} \sin \frac{m\pi y}{H} \, dx \, dy. \quad (8.35)$$

This completes the full solution of the vibrating rectangular membrane problem.

8.2 Vibrations of a Kettle Drum

In this section we consider the vibrations of a circular membrane of radius $a$, as shown in Figure 8.4. Again we are looking for the harmonics of the vibrating membrane, but with the membrane fixed around the circular boundary given by $x^2 + y^2 = a^2$. However, expressing the boundary condition in Cartesian coordinates would be awkward. It would be more natural to use polar coordinates, as indicated in the figure. A general point on the membrane is given by the distance from the center, $r$, and the angle, $\theta$. In this case the boundary condition would be $u = 0$ at $r = a$.

Figure 8.4: The circular membrane of radius $a$. There are fixed boundary conditions along the edge at $r = a$.

Before solving the initial-boundary value problem, we have to cast it in polar coordinates. This means that we need to rewrite the Laplacian in $r$ and $\theta$. To do so requires that we know how to transform derivatives in $x$ and $y$ into derivatives with respect to $r$ and $\theta$.

First recall that the transformations are

$$x = r \cos \theta, \quad y = r \sin \theta$$

and

$$r = \sqrt{x^2 + y^2}, \quad \tan \theta = \frac{y}{x}.$$

Now, consider a function $f = f(x(r, \theta), y(r, \theta)) = g(r, \theta)$. (Technically, once we transform a given function of Cartesian coordinates we obtain a new function $g$ of the polar coordinates. Many texts do not rigorously distinguish between the two, which is fine when this point is clear.) Thinking of $x = x(r, \theta)$ and $y = y(r, \theta)$, we have from the chain rule for functions of two variables:

$$\frac{\partial f}{\partial x} = \frac{\partial g}{\partial r} \frac{\partial r}{\partial x} + \frac{\partial g}{\partial \theta} \frac{\partial \theta}{\partial x} = \frac{\partial g}{\partial r} \frac{x}{r} - \frac{\partial g}{\partial \theta} \frac{y}{r^2} = \cos \theta \frac{\partial g}{\partial r} - \frac{\sin \theta}{r} \frac{\partial g}{\partial \theta}. \quad (8.36)$$

Here we have used

$$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2 + y^2}} = \frac{x}{r}$$

and

$$\frac{\partial \theta}{\partial x} = \frac{d}{dx} \tan^{-1} \frac{y}{x} = \frac{-y/x^2}{1 + \left( \frac{y}{x} \right)^2} = -\frac{y}{r^2}.$$

Similarly,

$$\frac{\partial f}{\partial y} = \frac{\partial g}{\partial r} \frac{\partial r}{\partial y} + \frac{\partial g}{\partial \theta} \frac{\partial \theta}{\partial y} = \frac{\partial g}{\partial r} \frac{y}{r} + \frac{\partial g}{\partial \theta} \frac{x}{r^2} = \sin \theta \frac{\partial g}{\partial r} + \frac{\cos \theta}{r} \frac{\partial g}{\partial \theta}. \quad (8.37)$$

θ. u(a. 0 < r < a. ut (r. r2 ∂θ2 (8. θ. θ). the boundary condition.39) t > 0.334 CHAPTER 8. t > 0. utt = c2 2 u = c2 ∂u 1 ∂ r r ∂r ∂r + 1 ∂2u . r ∂r ∂r r ∂θ (8. θ). 0) = g(r.40) u(r.38) The last form often occurs in texts because it is in the form of a Sturm-Liouville operator. 0) = f (r. (8. −π < < π. −π < θ < π. θ. t) = 0.41) . and the initial conditions. It is given by a partial diﬀerential equation. Now that we have written the Laplacian in polar coordinates we can pose the problem of a vibrating circular membrane. (8. PROBLEMS IN HIGHER DIMENSIONS The 2D Laplacian can now be computed as ∂2f ∂2f + 2 ∂x2 ∂y ∂ ∂f sin θ ∂ ∂f ∂ ∂f cos θ ∂ ∂f − + sin θ + ∂r ∂x r ∂θ ∂x ∂r ∂y r ∂θ ∂y ∂ ∂g sin θ ∂g sin θ ∂ ∂g sin θ ∂g = cos θ − − cos θ − cos θ ∂r ∂r r ∂θ r ∂θ ∂r r ∂θ ∂ ∂g cos θ ∂g cos θ ∂ ∂g cos θ ∂g + sin θ sin θ + + sin θ + ∂r ∂r r ∂θ r ∂θ ∂r r ∂θ 2g 2g ∂ sin θ ∂g sin θ ∂ = cos θ cos θ 2 + 2 − ∂r r ∂θ r ∂r∂θ = cos θ − sin θ r cos θ ∂g cos θ ∂g ∂2g sin θ ∂ 2 g − sin θ − − 2 ∂θ∂r r ∂θ ∂r r ∂θ + sin θ sin θ + = = cos θ r ∂ 2 g cos θ ∂ 2 g cos θ ∂g + − 2 ∂r2 r ∂r∂θ r ∂θ ∂2g cos θ ∂ 2 g ∂g sin θ ∂g + − + cos θ 2 ∂θ∂r r ∂θ ∂r r ∂θ sin θ 1 ∂2g ∂ 2 g 1 ∂g + + 2 2 ∂r2 r ∂r r ∂θ 1 ∂ ∂g 1 ∂2g r + 2 2.

Now we are ready to solve this problem using separation of variables. As before, we can separate out the time dependence. Let $u(r, \theta, t) = T(t) \phi(r, \theta)$. As usual, $T(t)$ can be written in terms of sines and cosines. This leaves the Helmholtz equation,

$$\nabla^2 \phi + \lambda \phi = 0.$$

We can separate this equation by letting $\phi(r, \theta) = R(r) \Theta(\theta)$. This gives

$$\frac{1}{r} \frac{\partial}{\partial r} \left( r \frac{\partial (R\Theta)}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 (R\Theta)}{\partial \theta^2} + \lambda R\Theta = 0. \quad (8.42)$$

Dividing by $\phi = R\Theta$, as usual, leads to

$$\frac{1}{rR} \frac{d}{dr} \left( r \frac{dR}{dr} \right) + \frac{1}{r^2 \Theta} \frac{d^2 \Theta}{d\theta^2} + \lambda = 0. \quad (8.43)$$

The last term is a constant. The first term is a function of $r$. However, the middle term involves both $r$ and $\theta$. This can be fixed by multiplying the equation by $r^2$. Rearranging the resulting equation, we can separate out the $\theta$-dependence from the radial dependence. Letting $\mu$ be the separation constant, we have

$$\frac{r}{R} \frac{d}{dr} \left( r \frac{dR}{dr} \right) + \lambda r^2 = -\frac{1}{\Theta} \frac{d^2 \Theta}{d\theta^2} = \mu. \quad (8.44)$$

This gives us two ordinary differential equations:

$$\frac{d^2 \Theta}{d\theta^2} + \mu \Theta = 0,$$
$$r \frac{d}{dr} \left( r \frac{dR}{dr} \right) + (\lambda r^2 - \mu) R = 0. \quad (8.45)$$

Let's consider the first of these equations. It should look familiar. If $\mu > 0$, then the general solution is

$$\Theta(\theta) = a \cos \sqrt{\mu}\, \theta + b \sin \sqrt{\mu}\, \theta.$$

Now we apply the boundary conditions in $\theta$. Looking at the boundary conditions in the problem, we do not see anything involving $\theta$. This is a case where the boundary conditions needed are implied and not stated outright.

We can derive the boundary conditions by making some observations. Consider the solution corresponding to the endpoints $\theta = \pm \pi$. Noting that at these values, for any $r < a$, we are at the same physical point, we would expect the solution to have the same value at $\theta = \pi$ as it has at $\theta = -\pi$. Namely, the solution is continuous at these physical points. Similarly, we expect the slope of the solution to be the same at these points. This tells us that

$$\Theta(\pi) = \Theta(-\pi), \qquad \Theta'(\pi) = \Theta'(-\pi).$$

Such boundary conditions are called periodic boundary conditions.

Let's apply these conditions to the general solution for $\Theta(\theta)$. First, we set $\Theta(\pi) = \Theta(-\pi)$:

$$a \cos \sqrt{\mu}\, \pi + b \sin \sqrt{\mu}\, \pi = a \cos \sqrt{\mu}\, \pi - b \sin \sqrt{\mu}\, \pi.$$

This implies that

$$\sin \sqrt{\mu}\, \pi = 0.$$

So, $\sqrt{\mu} = m$ for $m = 0, 1, 2, 3, \ldots$. For $\Theta'(\pi) = \Theta'(-\pi)$, we have

$$-am \sin m\pi + bm \cos m\pi = am \sin m\pi + bm \cos m\pi.$$

But, since $\sin m\pi = 0$, this gives no new information.

To summarize so far, we have found the general solutions to the temporal and angular equations. The product solutions will have various products of $\{\cos \omega t, \sin \omega t\}$ and $\{\cos m\theta, \sin m\theta\}_{m=0}^{\infty}$. We also know that $\mu = m^2$ and $\omega = c\sqrt{\lambda}$.

That leaves us with the radial equation. Inserting $\mu = m^2$, we have

$$r \frac{d}{dr} \left( r \frac{dR}{dr} \right) + (\lambda r^2 - m^2) R = 0. \quad (8.46)$$

This is not an equation we have encountered before (unless you took a course in differential equations). We need to find solutions to this equation. It turns out that under a simple transformation it becomes an equation whose solutions are well known.

Let $z = \sqrt{\lambda}\, r$ and $w(z) = R(r)$. We have changed the name of the dependent variable, since inserting the transformation into $R(r)$ leads to a function of $z$ that is not the same function. We need to change derivatives with respect to $r$ into derivatives with respect to $z$. We use the Chain Rule to obtain

$$\frac{dR}{dr} = \frac{dw}{dz} \frac{dz}{dr} = \sqrt{\lambda} \frac{dw}{dz}.$$

Thus, the derivatives transform as

$$r \frac{d}{dr} = z \frac{d}{dz}.$$

Inserting the transformation into the differential equation, we have

$$0 = r \frac{d}{dr} \left( r \frac{dR}{dr} \right) + (\lambda r^2 - m^2) R = z \frac{d}{dz} \left( z \frac{dw}{dz} \right) + (z^2 - m^2) w. \quad (8.47)$$

Expanding the derivative terms, we obtain Bessel's equation:

$$z^2 \frac{d^2 w}{dz^2} + z \frac{dw}{dz} + (z^2 - m^2) w = 0. \quad (8.48)$$

This equation has well known solutions, and they are discussed in Section 9.5.

The history of the solutions of this equation, called Bessel functions, does not originate in the study of partial differential equations. These solutions originally came up in the study of the Kepler problem, describing planetary motion. According to Watson in his Treatise on Bessel Functions, the formulation and solution of Kepler's Problem was discovered by Lagrange in 1770. Namely, the problem was to express the radial coordinate and what is called the eccentric anomaly, $E$, as functions of time. Lagrange found expressions for the coefficients in the expansions of $r$ and $E$ in trigonometric functions of time. However, he only computed the first few coefficients. In 1816 Bessel showed that the coefficients in the expansion for $r$ could be given an integral representation. In 1824 he presented a thorough study of these functions, which are now called Bessel functions.

There are two linearly independent solutions of this second order equation: $J_m(z)$, the Bessel function of the first kind of order $m$, and $N_m(z)$, the Bessel function of the second kind of order $m$.

Sometimes the $N_m$'s are called Neumann functions, so you might find the notation $Y_m(z)$ used in some books. Plots of these functions are shown in Figures 9.7 and 9.8. Notice that the Neumann functions are not well behaved at the origin. Also, looking at the plots of $J_m(z)$, we see that there are an infinite number of zeros, but they are not as easy as $\pi$! In Table 9.3 we list the $n$th zeros of $J_p$.

So, the general solution of our transformed equation is

$$w(z) = c_1 J_m(z) + c_2 N_m(z).$$

Transforming back into $r$ variables, this becomes

$$R(r) = c_1 J_m(\sqrt{\lambda}\, r) + c_2 N_m(\sqrt{\lambda}\, r).$$

Now we are ready to apply the boundary conditions to the radial factor in our product solutions. Looking at the original problem, we find only one condition: $u(a, \theta, t) = 0$ for $t > 0$ and $-\pi < \theta < \pi$. This gives $R(a) = 0$. But where is our second condition? This is another unstated boundary condition. Look again at the plots of the Bessel functions. Do you expect that our solution will become infinite at the center of the drum? No, the solutions should be finite at the center. So, the boundedness condition

$$|R(0)| < \infty$$

is the second boundary condition. Since the Neumann functions are not well behaved at the origin, this implies that $c_2 = 0$. We have also set $c_1 = 1$ for simplicity. Now we are left with

$$R(r) = J_m(\sqrt{\lambda}\, r).$$

We can apply the vanishing condition at $r = a$. This gives

$$J_m(\sqrt{\lambda}\, a) = 0.$$

Let's denote the $n$th zero of $J_m(z)$ by $z_{mn}$. Then our boundary condition tells us that

$$\sqrt{\lambda}\, a = z_{mn}.$$

This gives us our eigenvalues as

$$\lambda_{mn} = \left( \frac{z_{mn}}{a} \right)^2.$$

Thus, our radial function, satisfying the boundary conditions, is

$$R(r) = J_m\!\left( \frac{z_{mn}}{a} r \right).$$

Figure 8.5: The first few modes of the vibrating circular membrane. The dashed lines show the nodal lines indicating the points that do not move for the particular mode.

We are finally ready to write out the product solutions. They are given by

$$u(r, \theta, t) = \left\{ \begin{array}{c} \cos \omega_{mn} t \\ \sin \omega_{mn} t \end{array} \right\} \left\{ \begin{array}{c} \cos m\theta \\ \sin m\theta \end{array} \right\} J_m\!\left( \frac{z_{mn}}{a} r \right). \quad (8.49)$$

Here we have indicated the choices with the braces, leading to four different types of product solutions. Also, $m = 0, 1, 2, \ldots$, and

$$\omega_{mn} = \frac{z_{mn}}{a} c.$$

As with the rectangular membrane, we are interested in the shapes of the harmonics. So, we consider the spatial solution ($t = 0$)

$$\phi(r, \theta) = \cos m\theta \, J_m\!\left( \frac{z_{mn}}{a} r \right).$$

Adding the solutions involving $\sin m\theta$ will only rotate these modes. The nodal curves are given by $\phi(r, \theta) = 0$, or

$$\cos m\theta = 0, \quad \text{or} \quad J_m\!\left( \frac{z_{mn}}{a} r \right) = 0.$$

The various nodal curves are shown in Figure 8.5.

Figure 8.6: A three dimensional view of the vibrating circular membrane for the lowest modes.

For the angular part, we easily see that the nodal curves are radial lines. For $m = 0$, there are no solutions, since $\cos 0 = 1 \ne 0$. For $m = 1$, $\cos \theta = 0$ implies that $\theta = \pm \frac{\pi}{2}$. These values give the same line, as shown in Figure 8.5. For $m = 2$, $\cos 2\theta = 0$ implies that $\theta = \frac{\pi}{4}, \frac{3\pi}{4}$.

We can also consider the nodal curves defined by the Bessel functions. We seek values of $r$ for which $\frac{z_{mn}}{a} r$ is a zero of the Bessel function and lies in the interval $[0, a]$. Thus, we have

$$\frac{z_{mn}}{a} r = z_{mj}, \quad \text{or} \quad r = \frac{z_{mj}}{z_{mn}} a,$$

with $z_{mj} \le z_{mn}$. These give circles of these radii. The zeros can be found in Table 9.3. For $m = 0$ and $n = 1$, there is only one zero and $r = a$. For $m = 0$ and $n = 2$, we have two circles, $r = a$ and $r = \frac{2.405}{5.520} a \approx 0.436a$.
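The zeros $z_{0n}$ quoted above are not hard to compute from scratch. The sketch below evaluates $J_0$ from its power series, $J_0(z) = \sum_{k\ge 0} (-1)^k (z/2)^{2k}/(k!)^2$, brackets the first three zeros by bisection, and then forms the nodal radii $r = (z_{0j}/z_{0n})\,a$ for the $m = 0$, $n = 3$ mode (with $a = 1$); the bracketing intervals are chosen by eye from a plot of $J_0$.

```python
import math

def J0(z):
    """Power series for J0: sum of (-1)^k (z/2)^(2k) / (k!)^2."""
    term, total = 1.0, 1.0
    for k in range(1, 60):
        term *= -((z / 2) ** 2) / k ** 2
        total += term
    return total

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection for a root of f bracketed on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# First three zeros of J0, near 2.405, 5.520, 8.654 (cf. Table 9.3).
zeros = [bisect(J0, lo, hi) for lo, hi in [(2, 3), (5, 6), (8, 9)]]

# Nodal circles of the m = 0, n = 3 mode on a unit drum: r = z_0j / z_03.
radii = [z / zeros[2] for z in zeros]
print(zeros)
print(radii)
```

In practice one would use a special-function library for higher orders and zeros, but the series plus bisection suffices for the small arguments used here.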

For $m = 0$ and $n = 3$ we obtain circles of radii

$$r = a, \quad r = \frac{5.520}{8.654} a, \quad \text{and} \quad r = \frac{2.405}{8.654} a.$$

Similar computations result for larger values of $m$. Imagine that the various regions are oscillating independently and that the points on the nodal curves are not moving. For a three dimensional view, one can look at Figure 8.6.

Figure 8.7: A three dimensional view of the vibrating annular membrane for the lowest modes.

More complicated vibrations can be dreamt up for this geometry. We could consider an annulus, in which the drum is formed from two concentric circular cylinders and the membrane is stretched between the two with an annular cross section, specifying the radii as $a$ and $b$ with $b < a$. The separation of variables would follow as before, except now the boundary conditions are that the membrane is fixed around the two circular boundaries. In this case we cannot toss out the Neumann functions, because the origin is not part of the drum head. So, we have to satisfy the conditions

$$R(a) = c_1 J_m(\sqrt{\lambda}\, a) + c_2 N_m(\sqrt{\lambda}\, a) = 0,$$
$$R(b) = c_1 J_m(\sqrt{\lambda}\, b) + c_2 N_m(\sqrt{\lambda}\, b) = 0. \quad (8.50)$$

This leads to two homogeneous equations for $c_1$ and $c_2$. The determinant of the system has to vanish, giving a nice complicated condition on $\lambda$. In Figure 8.7 we show various modes for a particular choice of $a$ and $b$.

8.3 Sturm-Liouville Problems

In earlier chapters we have explored the solution of boundary value problems that led to trigonometric eigenfunctions. Such functions can be used to represent functions in Fourier series expansions. In this chapter we have seen that expansions in other functions are possible. We will also be introduced to new functions, which are gathered in the next chapter. We would like to generalize some of these techniques in order to solve other boundary value problems.

We will see that many physically interesting boundary value problems lead to a class of problems called Sturm-Liouville eigenvalue problems. This includes our previous examples of the wave and heat equations.

We begin by noting that in physics many problems arise in the form of boundary value problems involving second order ordinary differential equations. For example, we might want to solve the equation

$$a_2(x) y'' + a_1(x) y' + a_0(x) y = f(x) \quad (8.51)$$

subject to boundary conditions. We can write such an equation in operator form by defining the differential operator

$$L = a_2(x) \frac{d^2}{dx^2} + a_1(x) \frac{d}{dx} + a_0(x). \quad (8.52)$$

Then, Equation (8.51) takes the form

$$Lu = f.$$

In many of the problems we have encountered, the resulting solutions were written in terms of series expansions over a set of eigenfunctions.

This indicates that perhaps we might seek solutions to the eigenvalue problem $L\phi = \lambda\phi$ with homogeneous boundary conditions, and then seek a solution as an expansion in the eigenfunctions. Formally, we let

$$u = \sum_{n=1}^{\infty} c_n \phi_n(x).$$

However, we are not guaranteed a nice set of eigenfunctions. We need an appropriate set to form a basis in the function space. Also, it would be nice to have orthogonality so that we can solve for the expansion coefficients as was done in Section ??. It turns out that any linear second order operator can be turned into an operator that possesses just the right properties for us to carry out this procedure. The resulting operator is the Sturm-Liouville operator, which we will now explore.

We define the Sturm-Liouville operator as

$$L = \frac{d}{dx} \left( p(x) \frac{d}{dx} \right) + q(x). \quad (8.53)$$

The regular Sturm-Liouville eigenvalue problem is given by the differential equation

$$Lu = -\lambda \sigma(x) u,$$

or

$$\frac{d}{dx} \left( p(x) \frac{du}{dx} \right) + q(x) u + \lambda \sigma(x) u = 0, \quad (8.54)$$

for $x \in [a, b]$. The functions $p(x)$, $q(x)$ and $\sigma(x)$ are assumed to be continuous on $(a, b)$, with $p(x) > 0$ and $\sigma(x) > 0$ on $[a, b]$, together with the set of homogeneous boundary conditions

$$\alpha_1 u(a) + \beta_1 u'(a) = 0,$$
$$\alpha_2 u(b) + \beta_2 u'(b) = 0.$$

The $\alpha$'s and $\beta$'s are constants. For different values, one has special types of boundary conditions. For $\beta_i = 0$, we have what are called Dirichlet conditions. Namely, $u(a) = 0$ and $u(b) = 0$. For $\alpha_i = 0$, we have Neumann boundary conditions, $u'(a) = 0$ and $u'(b) = 0$. In terms of the heat equation example, Dirichlet conditions correspond to maintaining a fixed temperature at the ends of the rod. The Neumann boundary conditions would correspond to no heat flow across the ends, as there would be no temperature gradient at those points.

Another type of boundary condition that is often encountered is the periodic boundary condition. Consider the heated rod that has been bent to form a circle.

Then the two end points are physically the same. So, we would expect that the temperature and the temperature gradient should agree at those points. For this case we write $u(a) = u(b)$ and $u'(a) = u'(b)$.

Theorem. Any second order linear operator can be put into the form of a Sturm-Liouville operator.

The proof of this is straightforward. Consider the equation (8.51). If $a_1(x) = a_2'(x)$, then we can write the equation in the form

$$f(x) = a_2(x) y'' + a_1(x) y' + a_0(x) y = (a_2(x) y')' + a_0(x) y. \quad (8.55)$$

This is in the correct form. We just identify $p(x) = a_2(x)$ and $q(x) = a_0(x)$.

However, consider the differential equation

$$x^2 y'' + x y' + 2y = 0.$$

In this case $a_2(x) = x^2$ and $a_2'(x) = 2x \ne a_1(x)$. This equation is not of Sturm-Liouville type. But, we can change it to a Sturm-Liouville type of equation.

In the Sturm-Liouville operator the derivative terms are gathered together into one perfect derivative. This is similar to what we saw in the first chapter when we solved linear first order equations. In that case we sought an integrating factor. We can do the same thing here. We seek a multiplicative function $\mu(x)$ by which we can multiply (8.51) so that it can be written in Sturm-Liouville form. We first divide out the $a_2(x)$, giving

$$y'' + \frac{a_1(x)}{a_2(x)} y' + \frac{a_0(x)}{a_2(x)} y = \frac{f(x)}{a_2(x)}.$$

Now, we multiply the differential equation by $\mu$:

$$\mu(x) y'' + \mu(x) \frac{a_1(x)}{a_2(x)} y' + \mu(x) \frac{a_0(x)}{a_2(x)} y = \mu(x) \frac{f(x)}{a_2(x)}.$$

The first two terms can now be combined into an exact derivative $(\mu y')'$ if $\mu(x)$ satisfies

$$\frac{d\mu}{dx} = \mu(x) \frac{a_1(x)}{a_2(x)}.$$

Deﬁning the inner product of f (x) and g(x) as < f. (8. However. b). φn > δnm . (8.8. the original equation can be multiplied by a1 (x) a2 (x) dx = 1 . there is no largest eigenvalue and n → ∞.57) .56) then the orthogonality condition can be written in the form < φn . λn → ∞. . φm >=< φn .4. PROPERTIES OF STURM-LIOUVILLE PROBLEMS This is easily solved to give µ(x) = e 1 e a2 (x) to turn it into Sturm-Liouville form. x x 1 e x2 dx x a1 (x) a2 (x) 345 dx . countable. For the example above. we need only multiply by Therefore. we can write them as λ1 < λ2 < . 3. For each eigenvalue λn there exists an eigenfunction φn with n − 1 zeros on (a. Thus. x2 y + xy + 2y = 0. The eigenvalues are real. However. Eigenfunctions corresponding to diﬀerent eigenvalues are orthogonal with respect to the weight function. ordered and there is a smallest eigenvalue.4 Properties of Sturm-Liouville Problems There are several properties that can be proven for the Sturm-Liouville eigenvalue problems. . 2. we will not prove them all here. Thus. x 8. 1. we obtain 0 = xy + y + 2 2 y = (xy ) + y. σ(x). . g >= b a f (x)g(x)σ(x) dx.

4. The set of eigenfunctions is complete; i.e., any piecewise smooth function can be represented by a generalized Fourier series expansion of the eigenfunctions,

$$f(x) \sim \sum_{n=1}^{\infty} c_n \phi_n(x),$$

where

$$c_n = \frac{\langle f, \phi_n \rangle}{\langle \phi_n, \phi_n \rangle}.$$

Actually, one needs $f(x) \in L^2_\sigma[a, b]$, the set of square integrable functions over $[a, b]$ with weight function $\sigma(x)$: $\langle f, f \rangle < \infty$.

5. Multiply the eigenvalue problem $L\phi_n = -\lambda_n \sigma(x) \phi_n$ by $\phi_n$ and integrate. Solving this result for $\lambda_n$, one finds the Rayleigh Quotient

$$\lambda_n = \frac{ \displaystyle -p\, \phi_n \frac{d\phi_n}{dx} \Big|_a^b + \int_a^b \left[ p \left( \frac{d\phi_n}{dx} \right)^2 - q \phi_n^2 \right] dx }{ \langle \phi_n, \phi_n \rangle }.$$

The Rayleigh quotient is useful for getting estimates of eigenvalues and proving some of the other properties.

8.4.1 Identities and Adjoint Operators

Before turning to any proofs, we will need two more tools. For the Sturm-Liouville operator, $L$, we have two identities:

Lagrange's Identity:

$$uLv - vLu = [p(uv' - vu')]'.$$

The proof follows by simple manipulations of the operator:

$$\begin{aligned}
uLv - vLu &= u \left[ \frac{d}{dx} \left( p \frac{dv}{dx} \right) + qv \right] - v \left[ \frac{d}{dx} \left( p \frac{du}{dx} \right) + qu \right] \\
&= u \frac{d}{dx} \left( p \frac{dv}{dx} \right) - v \frac{d}{dx} \left( p \frac{du}{dx} \right) \\
&= u \frac{d}{dx} \left( p \frac{dv}{dx} \right) + p \frac{du}{dx} \frac{dv}{dx} - v \frac{d}{dx} \left( p \frac{du}{dx} \right) - p \frac{du}{dx} \frac{dv}{dx} \\
&= \frac{d}{dx} \left[ p u \frac{dv}{dx} - p v \frac{du}{dx} \right]. \quad (8.58)
\end{aligned}$$
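Returning for a moment to property 5, here is a small numerical illustration of the Rayleigh quotient. For the familiar problem $\phi'' + \lambda\phi = 0$, $\phi(0) = \phi(\pi) = 0$ (so $p = \sigma = 1$, $q = 0$, and the boundary term vanishes), the smallest eigenvalue is $\lambda_1 = 1$. The trial function $\phi(x) = x(\pi - x)$ satisfies the boundary conditions, and its Rayleigh quotient, $10/\pi^2 \approx 1.013$, is an estimate lying just above $\lambda_1$:

```python
import math

def rayleigh(phi, dphi, a=0.0, b=math.pi, n=20000):
    """Rayleigh quotient for p = sigma = 1, q = 0 with Dirichlet conditions:
    (integral of phi'^2) / (integral of phi^2), by the midpoint rule.
    The boundary term -p phi phi' vanishes since phi(a) = phi(b) = 0."""
    h = (b - a) / n
    num = sum(dphi(a + (i + 0.5) * h) ** 2 for i in range(n)) * h
    den = sum(phi(a + (i + 0.5) * h) ** 2 for i in range(n)) * h
    return num / den

phi = lambda x: x * (math.pi - x)
dphi = lambda x: math.pi - 2 * x
estimate = rayleigh(phi, dphi)
print(estimate)   # about 1.0132, just above the true eigenvalue 1
```

The fact that the estimate overshoots the smallest eigenvalue is no accident: for this class of problems the Rayleigh quotient of any admissible trial function bounds $\lambda_1$ from above.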

Green's Identity:

$$\int_a^b (uLv - vLu)\, dx = [p(uv' - vu')] \Big|_a^b.$$

This identity is simply proven by integrating Lagrange's identity.

Next, we define the domain of a differential operator and introduce the notion of adjoint operators.

Definition. The domain of a differential operator $L$ is the set of all $u \in L^2_\sigma[a, b]$ satisfying a given set of homogeneous boundary conditions.

Definition. The adjoint, $L^\dagger$, of operator $L$ satisfies

$$\langle u, Lv \rangle = \langle L^\dagger u, v \rangle$$

for all $v$ in the domain of $L$ and $u$ in the domain of $L^\dagger$.

Example. Find the adjoint of $L = a_2(x) \frac{d^2}{dx^2} + a_1(x) \frac{d}{dx} + a_0(x)$.

We first look at the inner product

$$\langle u, Lv \rangle = \int_a^b u (a_2 v'' + a_1 v' + a_0 v)\, dx.$$

Then we have to move the operator $L$ from $v$ and determine what operator is acting on $u$. For a simple operator like $L = \frac{d}{dx}$, this is easily done using integration by parts. For the given operator, we apply several integrations by parts to individual terms. Noting that

$$\int_a^b u(x) a_1(x) v'(x)\, dx = a_1(x) u(x) v(x) \Big|_a^b - \int_a^b (u(x) a_1(x))' v(x)\, dx \quad (8.59)$$

and

$$\int_a^b u(x) a_2(x) v''(x)\, dx = \left[ a_2(x) u(x) v'(x) - (a_2(x) u(x))' v(x) \right] \Big|_a^b + \int_a^b (u(x) a_2(x))'' v(x)\, dx, \quad (8.60)$$

we have

$$\langle u, Lv \rangle = \left[ a_1(x) u(x) v(x) + a_2(x) u(x) v'(x) - (a_2(x) u(x))' v(x) \right] \Big|_a^b + \int_a^b \left[ (a_2 u)'' - (a_1 u)' + a_0 u \right] v\, dx. \quad (8.61)$$

Inserting the boundary conditions for $v$, one finds boundary conditions for $u$ such that

$$\left[ a_1(x) u(x) v(x) + a_2(x) u(x) v'(x) - (a_2(x) u(x))' v(x) \right] \Big|_a^b = 0.$$

This leaves

$$\langle u, Lv \rangle = \int_a^b \left[ (a_2 u)'' - (a_1 u)' + a_0 u \right] v\, dx \equiv \langle L^\dagger u, v \rangle. \quad (8.62)$$

Therefore,

$$L^\dagger = \frac{d^2}{dx^2} a_2(x) - \frac{d}{dx} a_1(x) + a_0(x).$$

When $L^\dagger = L$, the operator is called formally self-adjoint. When the domain of $L$ is the same as the domain of $L^\dagger$, the term self-adjoint is used.

Example. Determine $L^\dagger$ and its domain for the operator $Lu = \frac{du}{dx}$, where $u$ satisfies $u(0) = 2u(1)$, on $[0, 1]$.

We need to find the adjoint operator satisfying $\langle v, Lu \rangle = \langle L^\dagger v, u \rangle$. Therefore, we rewrite the integral

$$\langle v, Lu \rangle = \int_0^1 v \frac{du}{dx}\, dx = uv \Big|_0^1 - \int_0^1 u \frac{dv}{dx}\, dx.$$

From this we have:

1. $L^\dagger = -\frac{d}{dx}$.

2. $uv \Big|_0^1 = 0 \Rightarrow 0 = u(1)[v(1) - 2v(0)] \Rightarrow v(1) = 2v(0)$.

8.4.2 Orthogonality of Eigenfunctions

We are now ready to prove that the eigenvalues of a Sturm-Liouville problem are real and that the corresponding eigenfunctions are orthogonal.

Theorem. The eigenvalues of the Sturm-Liouville problem are real.

8.4. PROPERTIES OF STURM-LIOUVILLE PROBLEMS 349

Proof: Let φn(x) be a solution of the eigenvalue problem associated with λn:
Lφn = −λn σ φn.
The complex conjugate of this equation is
L φ̄n = −λ̄n σ φ̄n.
Now, multiply the first equation by φ̄n and the second equation by φn, and then subtract the results. We obtain
φ̄n Lφn − φn Lφ̄n = (λ̄n − λn) σ φn φ̄n.
Integrating both sides, and noting that by Green's identity and the boundary conditions for a self-adjoint operator the left side vanishes, we have
0 = (λ̄n − λn) ∫_a^b σ |φn|² dx.
The integral is nonnegative, so we must have λ̄n = λn. Therefore, the eigenvalues are real.

Theorem: The eigenfunctions corresponding to different eigenvalues of the Sturm-Liouville problem are orthogonal.

Proof: This is proven similarly to the last proof. Let φn(x) be a solution of the eigenvalue problem associated with λn,
Lφn = −λn σ φn,
and let φm(x) be a solution of the eigenvalue problem associated with λm ≠ λn,
Lφm = −λm σ φm.
Now, multiply the first equation by φm and the second equation by φn. Subtracting the results, we obtain
φm Lφn − φn Lφm = (λm − λn) σ φn φm.
Integrating both sides, and using Green's identity and the boundary conditions for a self-adjoint operator, we have
0 = (λm − λn) ∫_a^b σ φn φm dx.
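The theorem is easy to test numerically. As an illustration (this uses the familiar problem φ'' = −λφ, φ(0) = φ(1) = 0, with eigenfunctions φn(x) = sin(nπx) and weight σ = 1; it is a standard example chosen here for the sketch, not one worked out in this section):

```python
import math

def inner(n, m, N=20000):
    # trapezoid approximation of the inner product of sin(n pi x) and sin(m pi x) on [0, 1]
    h = 1.0 / N
    f = lambda x: math.sin(n * math.pi * x) * math.sin(m * math.pi * x)
    s = 0.5 * (f(0.0) + f(1.0))
    for i in range(1, N):
        s += f(i * h)
    return s * h

print(round(inner(2, 5), 6))  # ~0: different eigenvalues give orthogonal eigenfunctions
print(round(inner(3, 3), 6))  # ~0.5: the squared norm of sin(3 pi x)
```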

350 CHAPTER 8. PROBLEMS IN HIGHER DIMENSIONS

Since the eigenvalues are different, we have that
∫_a^b σ φn φm dx = 0.
Therefore, the eigenfunctions are orthogonal with respect to the weight function σ(x).

8.5 The Eigenfunction Expansion Method

In section ?? we saw generally how one can use the eigenfunctions of a differential operator to solve a nonhomogeneous boundary value problem. In this chapter we have seen that Sturm-Liouville eigenvalue problems have the requisite set of orthogonal eigenfunctions. In this section we will apply the eigenfunction expansion method to solve a particular nonhomogeneous boundary value problem.

Recall that one starts with a nonhomogeneous differential equation
Ly = f,
where y(x) is to satisfy given homogeneous boundary conditions. The method makes use of the eigenfunctions satisfying the eigenvalue problem Lφn = −λn σ φn subject to the given boundary conditions. Then one assumes that y(x) can be written as an expansion in the eigenfunctions,
y(x) = Σ_{n=1}^∞ cn φn(x),
and inserts the expansion into the nonhomogeneous equation. This gives
f(x) = L( Σ_{n=1}^∞ cn φn(x) ) = − Σ_{n=1}^∞ cn λn σ(x) φn(x).
The expansion coefficients are then found by making use of the orthogonality of the eigenfunctions:
cm = − ( ∫_a^b f(x) φm(x) dx ) / ( λm ∫_a^b φm²(x) σ(x) dx ).

8.5. THE EIGENFUNCTION EXPANSION METHOD 351

As an example, we consider the solution of the boundary value problem
(x y')' + y/x = 1/x,  x ∈ [1, e],  (8.63)
y(1) = 0 = y(e).  (8.64)
This equation is already in self-adjoint form, so we know that the associated Sturm-Liouville eigenvalue problem has an orthogonal set of eigenfunctions. We first determine this set. Namely, we need to solve
(x φ')' + φ/x = −λ σ φ,  φ(1) = 0 = φ(e).  (8.65)
Rearranging the terms and multiplying by x, we have that
x² φ'' + x φ' + (1 + λ σ x) φ = 0.
This is almost an equation of Cauchy-Euler type. Picking the weight function σ(x) = 1/x, we have
x² φ'' + x φ' + (1 + λ) φ = 0.
This is easily solved. The characteristic equation is
r² + (1 + λ) = 0.
One obtains nontrivial solutions of the eigenvalue problem satisfying the boundary conditions when λ > −1. The solutions are
φn(x) = A sin(nπ ln x),  n = 1, 2, . . . ,  (8.66)
where λn = n²π² − 1.

It is often useful to normalize the eigenfunctions. This means that one chooses A so that the norm of each eigenfunction is one. Thus, we have
1 = ∫_1^e φn(x)² σ(x) dx = A² ∫_1^e sin²(nπ ln x) (1/x) dx = A² ∫_0^1 sin²(nπy) dy = A²/2.
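As a quick numerical check (a sketch, not part of the text's derivation), one can verify that the normalized eigenfunctions φn(x) = √2 sin(nπ ln x) are orthonormal with respect to the weight σ(x) = 1/x on [1, e]:

```python
import math

def inner(n, m, N=20000):
    # trapezoid approximation of the weighted inner product on [1, e]
    a, b = 1.0, math.e
    h = (b - a) / N
    f = lambda x: 2.0 * math.sin(n * math.pi * math.log(x)) \
                      * math.sin(m * math.pi * math.log(x)) / x
    s = 0.5 * (f(a) + f(b))
    for i in range(1, N):
        s += f(a + i * h)
    return s * h

print(round(inner(1, 1), 5))  # ~1: unit norm
print(round(inner(2, 3), 5))  # ~0: orthogonality
```

The substitution y = ln x used in the normalization above is exactly what makes these integrals reduce to ordinary Fourier sine integrals.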

352 CHAPTER 8. PROBLEMS IN HIGHER DIMENSIONS

Thus, A = √2.

We now turn towards solving the nonhomogeneous problem, Ly = 1/x. We first expand the unknown solution in terms of the eigenfunctions,
y(x) = Σ_{n=1}^∞ cn √2 sin(nπ ln x).
Inserting this solution into the differential equation, we have
1/x = Ly = − Σ_{n=1}^∞ cn λn √2 sin(nπ ln x) (1/x).
Next, we make use of orthogonality. Multiplying both sides by φm(x) = √2 sin(mπ ln x) and integrating gives
−λm cm = ∫_1^e √2 sin(mπ ln x) (1/x) dx = (√2/(mπ)) [1 − (−1)^m].
Solving for cm, we have
cm = (√2/(mπ)) [(−1)^m − 1] / (m²π² − 1).
Finally, we insert our coefficients into the expansion for y(x). The solution is then
y(x) = Σ_{n=1}^∞ (2/(nπ)) [(−1)^n − 1] / (n²π² − 1) sin(nπ ln x).
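The substitution t = ln x turns this boundary value problem into y'' + y = 1, y(0) = y(1) = 0, which has the closed form y = 1 − cos t + ((cos 1 − 1)/sin 1) sin t. This closed form is not used in the text; in the sketch below it serves only as an independent check that the truncated eigenfunction expansion converges to the right function:

```python
import math

def y_series(x, terms=4000):
    # truncated eigenfunction expansion of the solution
    t = math.log(x)
    s = 0.0
    for n in range(1, terms + 1):
        c = 2.0 * ((-1) ** n - 1) / (n * math.pi * (n * n * math.pi * math.pi - 1.0))
        s += c * math.sin(n * math.pi * t)
    return s

def y_closed(x):
    # exact solution of y'' + y = 1, y(0) = y(1) = 0, with t = ln x
    t = math.log(x)
    return 1.0 - math.cos(t) + (math.cos(1.0) - 1.0) / math.sin(1.0) * math.sin(t)

for x in (1.2, 1.8, 2.5):
    print(round(y_series(x) - y_closed(x), 8))  # ~0 at each test point
```

The coefficients decay like 1/n³, so a few thousand terms already match the closed form to many digits.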

8.6. PROBLEMS IN THREE DIMENSIONS 353

Figure 8.8: Rectangular coordinates in two dimensions.

Figure 8.9: A three dimensional view of the vibrating rectangular membrane for the lowest modes.

8.6 Problems in Three Dimensions

We need to introduce 3D coordinate systems and their Laplacians. In this section we will list needed information for various standard coordinate systems.

8.7 Spherical Symmetry

We have solved several types of problems so far. We started with one dimensional problems: heat flow in a long thin rod and vibrations of a one dimensional string. We then moved on to two-dimensional regions and looked at the vibrations of rectangular and circular membranes.

Figure 8.10: Rectangular coordinates in three dimensions.

Figure 8.11: Cylindrical coordinates in three dimensions.

The circular membrane required the use of polar coordinates due to the circular symmetry in the problem. In this section we will explore problems with three dimensional symmetries. One type that comes up often in classes like electrodynamics and quantum mechanics is spherical symmetry. We will describe this system in this section.

8.7.1 Laplace's Equation

Laplace's equation, ∇²u = 0, arises in electrostatics as an equation for the electric potential outside a charge distribution, and it occurs as the equation governing equilibrium temperature distributions. Laplace's equation in spherical coordinates is given by
∇²u = (1/ρ²) ∂/∂ρ (ρ² ∂u/∂ρ) + (1/(ρ² sin θ)) ∂/∂θ (sin θ ∂u/∂θ) + (1/(ρ² sin²θ)) ∂²u/∂φ² = 0.  (8.67)

Figure 8.12: Spherical coordinates in three dimensions.

Figure 8.13: A sphere of radius r with the boundary condition u(r, θ, φ) = g(θ, φ).

356 CHAPTER 8. PROBLEMS IN HIGHER DIMENSIONS

We will seek solutions of this equation inside a sphere of radius r subject to the boundary condition u(r, θ, φ) = g(θ, φ), as shown in Figure 8.13.

We first do a separation of variables. We seek product solutions of the form u(ρ, θ, φ) = R(ρ)Θ(θ)Φ(φ). Inserting this form into our equation, we obtain
(ΘΦ/ρ²) d/dρ (ρ² dR/dρ) + (RΦ/(ρ² sin θ)) d/dθ (sin θ dΘ/dθ) + (RΘ/(ρ² sin²θ)) d²Φ/dφ² = 0.  (8.68)
Now, multiply the equation by ρ² and divide by RΘΦ. This yields
(1/R) d/dρ (ρ² dR/dρ) + (1/(sin θ Θ)) d/dθ (sin θ dΘ/dθ) + (1/(sin²θ Φ)) d²Φ/dφ² = 0.  (8.69)
Note that the first term is the only term depending upon ρ. Thus, we can separate out the radial part. However, there is still more work to do on the other two terms, which give the angular part. Thus, we have
−(1/R) d/dρ (ρ² dR/dρ) = (1/(sin θ Θ)) d/dθ (sin θ dΘ/dθ) + (1/(sin²θ Φ)) d²Φ/dφ² = −λ,  (8.70)
where we have introduced our first separation constant. This leads to two equations:
d/dρ (ρ² dR/dρ) − λR = 0  (8.71)
and
(1/(sin θ Θ)) d/dθ (sin θ dΘ/dθ) + (1/(sin²θ Φ)) d²Φ/dφ² = −λ.  (8.72)
The final separation can be performed by multiplying the last equation by sin²θ, rearranging the terms, and introducing a second separation constant:
(sin θ/Θ) d/dθ (sin θ dΘ/dθ) + λ sin²θ = −(1/Φ) d²Φ/dφ² = μ.  (8.73)
This gives equations for Θ(θ) and Φ(φ):
sin θ d/dθ (sin θ dΘ/dθ) + (λ sin²θ − μ)Θ = 0  (8.74)
and
d²Φ/dφ² + μΦ = 0.  (8.75)
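Before solving the three separated equations, it is reassuring to check that separation really does produce solutions of Laplace's equation. The sketch below tests one product solution found later in this section, u = ρ² P2(cos θ) with P2(x) = (3x² − 1)/2; written in Cartesian coordinates it is (2z² − x² − y²)/2, and a finite-difference Laplacian shows it is harmonic (an illustrative check, not part of the text's derivation):

```python
def u(x, y, z):
    # rho^2 P2(cos theta) = (3 z^2 - rho^2)/2 = (2 z^2 - x^2 - y^2)/2 in Cartesian form
    return 0.5 * (2.0 * z * z - x * x - y * y)

def laplacian(f, x, y, z, h=1e-3):
    # central second differences in each coordinate
    return ((f(x + h, y, z) - 2.0 * f(x, y, z) + f(x - h, y, z))
          + (f(x, y + h, z) - 2.0 * f(x, y, z) + f(x, y - h, z))
          + (f(x, y, z + h) - 2.0 * f(x, y, z) + f(x, y, z - h))) / (h * h)

for p in [(0.3, -0.2, 0.7), (1.0, 2.0, -0.5)]:
    print(abs(laplacian(u, *p)) < 1e-6)  # True: this product solution is harmonic
```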

8.7. SPHERICAL SYMMETRY 357

We now have three ordinary differential equations to solve. These are the radial equation (8.71) and the two angular equations (8.74)-(8.75). We note that all three are in the form of Sturm-Liouville problems. We need to solve each one and impose the boundary conditions.

The simplest equation is the one for Φ(φ). The boundary condition in φ is not specified in the problem; it is one of those unstated conditions. We know that φ ∈ [0, 2π). Similar to the vibrating circular membrane problem, we expect that the solution and its derivative should agree at the endpoints of this interval, as these are physically the same point. So, we impose periodic boundary conditions:
u(ρ, θ, 0) = u(ρ, θ, 2π),  u_φ(ρ, θ, 0) = u_φ(ρ, θ, 2π),
that is, Φ(0) = Φ(2π) and Φ'(0) = Φ'(2π). As we have seen before, the eigenfunctions and eigenvalues are found as
Φ(φ) = {cos mφ, sin mφ},  μ = m²,  m = 0, 1, 2, . . . .  (8.76)

The next simplest looking equation is the radial equation. As we will note later, λ will need to take the form λ = ℓ(ℓ + 1) for ℓ = 0, 1, 2, . . . , so we will use this form in the derivation of the radial function. The radial equation (8.71) can then be written as
ρ² R'' + 2ρ R' − ℓ(ℓ + 1)R = 0.  (8.77)
This equation is a Cauchy-Euler type of equation. We have seen equations of this form many times, so we can guess the form of the solution to be R(ρ) = ρ^s, where s is a yet to be determined constant. Inserting our guess, we obtain the characteristic equation
s(s + 1) = ℓ(ℓ + 1).
Solving for s, we have s = ℓ or s = −(ℓ + 1). Thus, the general solution of the radial equation is
R(ρ) = a ρ^ℓ + b ρ^{−(ℓ+1)}.  (8.78)
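The two exponents can be read off from the quadratic s² + s − ℓ(ℓ + 1) = 0. A short, purely illustrative check:

```python
import math

def radial_exponents(l):
    # roots of s(s + 1) = l(l + 1), i.e. s^2 + s - l(l + 1) = 0
    disc = math.sqrt(1.0 + 4.0 * l * (l + 1))   # = 2l + 1 exactly
    return (-1.0 + disc) / 2.0, (-1.0 - disc) / 2.0

for l in range(6):
    s1, s2 = radial_exponents(l)
    print(l, s1, s2)   # s1 = l and s2 = -(l + 1)
```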

358 CHAPTER 8. PROBLEMS IN HIGHER DIMENSIONS

We would normally apply boundary conditions at this point. Recall that we were given that, for ρ = r, u(r, θ, φ) = g(θ, φ). This is not a homogeneous boundary condition, so we will need to hold off using it until we have the general solution to the three dimensional problem. However, we do have a hidden condition. Since we are interested in solutions inside the sphere, we need to consider what happens at ρ = 0. Note that ρ^{−(ℓ+1)} is not defined at the origin. Since the solution is expected to be bounded at the origin, we can set b = 0. So, in the current problem we have established that
R(ρ) = a ρ^ℓ.
(In some applications we are interested in solutions outside the sphere. In that case we set a = 0 so that the solution is bounded at infinity.)

Now we come to our last equation, the Θ equation (8.74). This is another Sturm-Liouville eigenvalue problem. We will need to transform this equation in order to identify the solutions. Let x = cos θ. Then the derivatives transform as
d/dθ = (dx/dθ) (d/dx) = −sin θ (d/dx),
as can be easily confirmed by the reader. Letting y(x) = Θ(θ) and noting that sin²θ = 1 − x², equation (8.74) becomes
d/dx [(1 − x²) dy/dx] + [ℓ(ℓ + 1) − m²/(1 − x²)] y = 0.  (8.79)
We note that x ∈ [−1, 1].

For the special case m = 0 this equation becomes
d/dx [(1 − x²) dy/dx] + ℓ(ℓ + 1)y = 0.  (8.80)
The solutions of this equation are described in the next chapter. They consist of a set of orthogonal eigenfunctions, the Legendre polynomials, denoted by P_ℓ(x). The more general case, m ≠ 0, has solutions called the associated Legendre functions. The two linearly independent solutions are denoted by P_ℓ^m(x) and Q_ℓ^m(x). The latter functions are not well behaved at x = ±1, corresponding to the north and south poles of the original problem. So, we can throw out those solutions and are left with
Θ(θ) = P_ℓ^m(cos θ)
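For instance, one can check directly that P2(x) = (3x² − 1)/2 satisfies the m = 0 equation (8.80) with ℓ = 2, so that ℓ(ℓ + 1) = 6. A numerical sketch using a central-difference derivative:

```python
def P2(x):  return 0.5 * (3.0 * x * x - 1.0)
def dP2(x): return 3.0 * x

def residual(x, h=1e-5):
    # d/dx[(1 - x^2) y'] + 6 y for y = P2
    g = lambda t: (1.0 - t * t) * dP2(t)
    return (g(x + h) - g(x - h)) / (2.0 * h) + 6.0 * P2(x)

for x in (-0.9, -0.3, 0.0, 0.4, 0.8):
    print(abs(residual(x)) < 1e-6)  # True at every test point
```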

8.7. SPHERICAL SYMMETRY 359

as the needed solutions. The associated Legendre functions are related to the Legendre polynomials by
P_ℓ^m(x) = (−1)^m (1 − x²)^{m/2} (d^m/dx^m) P_ℓ(x).  (8.81)
Also, they satisfy the orthogonality condition
∫_{−1}^1 P_ℓ^m(x) P_{ℓ'}^m(x) dx = (2/(2ℓ + 1)) ((ℓ + m)!/(ℓ − m)!) δ_{ℓℓ'}.  (8.82)

We have carried out the full separation of Laplace's equation in spherical coordinates. The product solutions consist of the forms
u(ρ, θ, φ) = ρ^ℓ P_ℓ^m(cos θ) cos mφ
and
u(ρ, θ, φ) = ρ^ℓ P_ℓ^m(cos θ) sin mφ
for ℓ = 0, 1, 2, . . . and m = 0, 1, . . . , ℓ. These are often combined into a complex representation,
u(ρ, θ, φ) = ρ^ℓ P_ℓ^m(cos θ) e^{imφ},
with m = 0, ±1, . . . , ±ℓ. The general solution is given as a linear combination of these solutions. As there are two indices, we have a double sum:
u(ρ, θ, φ) = Σ_{ℓ=0}^∞ Σ_{m=−ℓ}^ℓ a_{ℓm} ρ^ℓ P_ℓ^m(cos θ) e^{imφ}.  (8.83)

The solutions of the angular parts of the problem are often combined, as the main differences in problems with spherical symmetry arise in the radial equation. Due to the frequency of occurrence of the angular contributions, one often encounters what are called the spherical harmonics, Y_{ℓm}(θ, φ), which are defined as
Y_{ℓm}(θ, φ) = √[ ((2ℓ + 1)/(4π)) ((ℓ − m)!/(ℓ + m)!) ] P_ℓ^m(cos θ) e^{imφ}.  (8.84)
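The orthogonality relation (8.82) can be spot-checked for m = 1 using the explicit forms P_1^1(x) = −(1 − x²)^{1/2} and P_2^1(x) = −3x(1 − x²)^{1/2}, which follow from (8.81); the expected normalizations 2(ℓ + m)!/((2ℓ + 1)(ℓ − m)!) are 4/3 and 12/5. A numerical sketch:

```python
import math

def P11(x): return -math.sqrt(1.0 - x * x)             # P_1^1
def P21(x): return -3.0 * x * math.sqrt(1.0 - x * x)   # P_2^1

def integral(f, N=20000):
    # trapezoid rule on [-1, 1]
    h = 2.0 / N
    s = 0.5 * (f(-1.0) + f(1.0))
    for i in range(1, N):
        s += f(-1.0 + i * h)
    return s * h

print(round(integral(lambda x: P11(x) * P21(x)), 6))  # ~0: orthogonal
print(round(integral(lambda x: P11(x) ** 2), 6))      # ~4/3
print(round(integral(lambda x: P21(x) ** 2), 6))      # ~12/5
```

All three integrands are actually polynomials in x, so the trapezoid rule converges quickly here.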

360 CHAPTER 8. PROBLEMS IN HIGHER DIMENSIONS

These satisfy the simple orthogonality relation
∫_0^{2π} ∫_0^π Y*_{ℓ'm'}(θ, φ) Y_{ℓm}(θ, φ) sin θ dθ dφ = δ_{ℓℓ'} δ_{mm'}.
We will not go into these further at this time.

8.7.2 Example

As a simple example, we consider the solution of Laplace's equation in which there is azimuthal symmetry. Let
u(r, θ, φ) = g(θ) = 1 − cos 2θ.
This function is zero at the poles and has a maximum at the equator. In problems in which there is no φ-dependence, only the m = 0 terms of the general solution survive. Thus, we have that
u(ρ, θ, φ) = Σ_{ℓ=0}^∞ a_ℓ ρ^ℓ P_ℓ(cos θ).  (8.85)
Here we have used the fact that P_ℓ^0(x) = P_ℓ(x). We just need to determine the unknown expansion coefficients. Imposing the boundary condition at ρ = r, we are led to
g(θ) = Σ_{ℓ=0}^∞ a_ℓ r^ℓ P_ℓ(cos θ).  (8.86)
This is a Fourier-Legendre series representation of g(θ). Since the Legendre polynomials are an orthogonal set of eigenfunctions, we can extract the coefficients just as we had done for Fourier sine series. In the next chapter we prove that
∫_0^π Pn(cos θ) Pm(cos θ) sin θ dθ = ∫_{−1}^1 Pn(x) Pm(x) dx = (2/(2n + 1)) δ_{nm}.
So, multiplying the expression for g(θ) by P_ℓ(cos θ) sin θ and integrating, we obtain the desired coefficients:
a_ℓ r^ℓ = ((2ℓ + 1)/2) ∫_0^π g(θ) P_ℓ(cos θ) sin θ dθ.  (8.87)
Sometimes it is easier to rewrite g(θ) as a polynomial in cos θ and avoid the integration. For our example this is possible. Note that
g(θ) = 1 − cos 2θ = 2 sin²θ = 2 − 2 cos²θ.  (8.88)
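The coefficient formula (8.87) can also be evaluated numerically for this g(θ). The sketch below builds P_ℓ from Bonnet's recurrence, (n + 1)P_{n+1}(x) = (2n + 1)x Pn(x) − n P_{n−1}(x), a standard property of the Legendre polynomials (an illustrative check, not the text's method):

```python
def legendre(l, x):
    # Bonnet recurrence: (n + 1) P_{n+1} = (2n + 1) x P_n - n P_{n-1}
    p_prev, p = 1.0, x
    if l == 0:
        return p_prev
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def coeff(l, g, N=20000):
    # (2l + 1)/2 times the integral of g(x) P_l(x) over [-1, 1], with x = cos(theta)
    h = 2.0 / N
    f = lambda x: g(x) * legendre(l, x)
    s = 0.5 * (f(-1.0) + f(1.0))
    for i in range(1, N):
        s += f(-1.0 + i * h)
    return 0.5 * (2 * l + 1) * s * h

g = lambda x: 2.0 - 2.0 * x * x   # g(theta) = 1 - cos(2 theta) with x = cos(theta)
print(round(coeff(0, g), 5), round(coeff(2, g), 5))  # ~4/3 and ~-4/3
print(round(coeff(1, g), 5), round(coeff(3, g), 5))  # both ~0
```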

8.8. OTHER APPLICATIONS 361

Thus, setting x = cos θ, we have g(θ) = 2 − 2x². We seek an expansion of the form
g(θ) = c0 P0(x) + c1 P1(x) + c2 P2(x).
Since 2 − 2x² does not have any x terms, we know that c1 = 0. In the next chapter we show that P0(x) = 1, P1(x) = x, and P2(x) = (1/2)(3x² − 1). So,
2 − 2x² = c0(1) + c2 (1/2)(3x² − 1) = (c0 − (1/2)c2) + (3/2)c2 x².
By observation we have c2 = −4/3, and thus c0 = 2 + (1/2)c2 = 4/3. The rest of the coefficients are zero. This gives our sought expansion for g:
g(θ) = (4/3) P0(cos θ) − (4/3) P2(cos θ).
Therefore, a0 = 4/3, a2 = −(4/3)(1/r²), and the rest of the coefficients are zero. Inserting these into the general solution, we have
u(ρ, θ, φ) = (4/3) P0(cos θ) − (4/3) (ρ/r)² P2(cos θ) = 4/3 − (2/3) (ρ/r)² (3 cos²θ − 1).  (8.89)

8.8 Other Applications

8.8.1 Temperature Distribution in Igloos

8.8.2 Waveguides

8.8.3 Optical Fibers

8.8.4 The Hydrogen Atom


Chapter 9

Special Functions

In this chapter we will look at some additional functions which arise often in physical applications and are eigenfunctions for some Sturm-Liouville boundary value problem. We begin with a collection of special functions called the classical orthogonal polynomials. These include such polynomial functions as the Legendre polynomials, the Hermite polynomials, the Tchebychef (other transliterations used: Chebyshev, Chebyshov, Tchebycheff or Tschebyscheff) and the Gegenbauer polynomials. Also, Bessel functions occur quite often. We will spend most of our time exploring the Legendre and Bessel functions. These functions are typically found as solutions of differential equations using power series methods in a first course in differential equations. We will leave these techniques for an appendix.

9.1 Classical Orthogonal Polynomials

We begin by noting that the sequence of functions {1, x, x², . . .} is a basis of linearly independent functions. In fact, by the Stone-Weierstrass Approximation Theorem in analysis, this set is a basis of L²_σ(a, b), the space of square integrable functions over the interval [a, b] relative to the weight σ(x). We are familiar with being able to expand functions over this basis, since the expansions are just Maclaurin series representations of the functions,
f(x) ∼ Σ_{n=0}^∞ cn xⁿ.

Of course. ﬁnite combinations of these basis element are just polynomials! OK. or two odd. 3 −1 Since we have found that orthogonal bases have been useful in determining the coeﬃcients for expansions of given functions. of R3 considered in the text. First we take one of the original basis vectors. Let’s assume that we have three vectors that span R3 .364 CHAPTER 9. we will ask. a2 . that there is a method for carrying this out called the Gram-Schmidt Orthogonalization Process. and a3 .1. basis functions with σ(x) = 1 and (a. we might ask if it is possible to obtain an orthogonal basis involving these powers of x. and e3 . can one ﬁnd an orthogonal basis of the given space?” The answer is yes. and a3 as shown in Figure 9. “Given a set of linearly independent basis vectors. a2 . However. SPECIAL FUNCTIONS a a3 2 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
Figure 9.1: The basis a1, a2, and a3, of R³ considered in the text.

Denoting this projection by pr1 a2 . a2 · e1 pr1 a2 = e1 .1) We recall from our vector calculus class the projection of one vector onto another.9. (9. a2 . where θ is the angle between e1 and a2 .2: A plot of the vectors e1 . we want to determine an e2 that is orthogonal to e1 . on e1 illustrating the Gram-Schmidt orthogonalization process. so we would denote such a normalized vector with a ’hat’: e1 ˆ e1 = . a2 e1 . CLASSICAL ORTHOGONAL POLYNOMIALS 365 a e2 2 e1 pr1a 2 Figure 9.1. pr1 a2 = a2 cos θ e1 . We take the next element of the original basis. Next. (9. we might want to normalize our new basis vectors. a2 can be written as a sum of e2 and the projection of a2 on e1 . e1 √ where e1 = e1 · e1 . a2 .2) e2 1 Note that this is easily proven. and e2 needed to ﬁnd the projection of a2 . First write the projection as a vector of ˆ length a2 cos θ in direction e1 . Note that the desired orthogonal vector is e2 . e1 Recall that the angle between e1 and a2 is obtained from cos θ = a2 · e1 . Of course. In Figure 9.2 we see the orientation of the vectors. we then have e2 = a2 − pr1 a2 .

. From Equations (9.. SPECIAL FUNCTIONS a 2 2 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx e1 pr1a 3 Figure 9. it is a simple matter to compute the scalar products with e1 and e2 to verify orthogonality.366 e pr2a 3 a3 CHAPTER 9. we ﬁnd that e2 = a2 − a2 · e1 e1 .3) It is a simple matter to verify that e2 is orthogonal to e1 : e2 · e1 = a2 · e1 − a2 · e1 e1 · e1 e2 1 = a2 · e1 − a2 · e1 = 0. an orthogonal basis can be found by setting e1 = a1 and for n > 1.3. Combining these expressions gives Equation (9.1)-(9. We can generalize the procedure to the N -dimensional case. n = 1. Pictorially. (9. N be a set of linearly independent vectors in RN .6) . .5) Again. e3 = a3 − a3 · e1 a3 · e2 e1 − e2 .2).. we seek a third vector e3 that is orthogonal to both e1 and e2 . e2 1 (9.2). we can write the given vector a3 as a combination of vector projections along e1 and e2 and the new vector e3 . This is shown in Figure 9.4) Now. e2 e2 1 2 (9. n−1 en = an − j=1 an · ej ej . Then. e2 j (9. Let an . Then we have.3: A plot of the vectors and their projections for determining e3 .
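The finite-dimensional procedure in Equation (9.6) translates directly into code. The following is an illustrative sketch (not from the text; the function names are my own) using plain Python lists as vectors:

```python
def dot(u, v):
    """Standard scalar product on R^N."""
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize linearly independent vectors:
    e_n = a_n - sum_j (a_n . e_j / ||e_j||^2) e_j."""
    basis = []
    for a in vectors:
        e = list(a)
        for ej in basis:
            coeff = dot(a, ej) / dot(ej, ej)
            e = [ei - coeff * eji for ei, eji in zip(e, ej)]
        basis.append(e)
    return basis

# Three linearly independent vectors in R^3
e1, e2, e3 = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
```

Each pairwise scalar product of the resulting basis vanishes, which is exactly the orthogonality verified above for $e_2$ and $e_3$.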

9.1. CLASSICAL ORTHOGONAL POLYNOMIALS                                367

Now, we can generalize this idea to function spaces. Let $f_n(x)$, $n \in N$, $x \in [a,b]$, be a linearly independent sequence of continuous functions. Then an orthogonal basis of functions, $\phi_n(x)$, $n \in N$, can be found and is given by $\phi_0(x) = f_0(x)$ and
$$\phi_n(x) = f_n(x) - \sum_{j=0}^{n-1}\frac{\langle f_n, \phi_j\rangle}{\|\phi_j\|^2}\,\phi_j(x), \qquad n = 1, 2, \ldots. \qquad (9.7)$$
Here we are using inner products of real valued functions relative to a weight $\sigma(x)$,
$$\langle f, g\rangle = \int_a^b f(x)g(x)\sigma(x)\,dx, \qquad (9.8)$$
and $\|f\|^2 = \langle f, f\rangle$. Note the similarity between this expression and the expression for the finite dimensional case in Equation (9.6).

Example: Apply the Gram-Schmidt orthogonalization process to the set $f_n(x) = x^n$, $n \in N$, when $x \in (-1,1)$ and $\sigma(x) = 1$.

First, we have $\phi_0(x) = f_0(x) = 1$. Note that
$$\int_{-1}^1 \phi_0^2(x)\,dx = 2.$$
We could use this result to fix the normalization of our new basis, but we will hold off on doing that for now. Now, we compute the second basis element:
$$\phi_1(x) = f_1(x) - \frac{\langle f_1, \phi_0\rangle}{\|\phi_0\|^2}\,\phi_0(x) = x - \frac{\langle x, 1\rangle}{\|1\|^2}\,1 = x, \qquad (9.9)$$
since $\langle x, 1\rangle$ is the integral of an odd function over a symmetric interval. For $\phi_2(x)$, we have
$$\phi_2(x) = f_2(x) - \frac{\langle f_2, \phi_0\rangle}{\|\phi_0\|^2}\,\phi_0(x) - \frac{\langle f_2, \phi_1\rangle}{\|\phi_1\|^2}\,\phi_1(x) = x^2 - \frac{\langle x^2, 1\rangle}{2} - \frac{\langle x^2, x\rangle}{\|x\|^2}\,x.$$
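The orthogonalization in this example can be checked with exact arithmetic. The following illustrative sketch (not from the text) represents polynomials as coefficient lists and uses the exact monomial integrals $\int_{-1}^1 x^k\,dx = 2/(k+1)$ for $k$ even and $0$ for $k$ odd:

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral of p(x) q(x) over (-1,1), sigma(x) = 1.
    Polynomials are coefficient lists, lowest degree first."""
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            prod[i + j] += Fraction(pi) * Fraction(qj)
    # Odd powers integrate to zero on a symmetric interval.
    return sum(c * Fraction(2, k + 1) for k, c in enumerate(prod) if k % 2 == 0)

def gram_schmidt_poly(fs):
    """phi_n = f_n - sum_j (<f_n, phi_j> / ||phi_j||^2) phi_j."""
    phis = []
    for f in fs:
        phi = [Fraction(c) for c in f]
        for pj in phis:
            coeff = inner(f, pj) / inner(pj, pj)
            for k, cj in enumerate(pj):
                phi[k] -= coeff * cj
        phis.append(phi)
    return phis

# f_n(x) = x^n for n = 0, 1, 2
phi0, phi1, phi2 = gram_schmidt_poly([[1], [0, 1], [0, 0, 1]])
```

The output reproduces the orthogonal set $\{1,\ x,\ x^2 - \frac13\}$ found in the example.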

368                                          CHAPTER 9. SPECIAL FUNCTIONS

Since $\langle x^2, x\rangle$ vanishes by symmetry, this reduces to
$$\phi_2(x) = x^2 - \frac{\int_{-1}^1 x^2\,dx}{\int_{-1}^1 dx} = x^2 - \frac13.$$
Thus, we have the orthogonal set $\{1,\ x,\ x^2 - \frac13\}$.

So far, it might not be clear where the normalization constant is. The $\phi_n$'s can be multiplied by any constant and this will only affect the "length," $\|\phi_n\|^2$. If one chooses to normalize these by forcing $\phi_n(1) = 1$, then one obtains the classical Legendre polynomials, $P_n(x) = \phi_n(x)$. This is not the typical normalization. Thus, we have so far that $P_2(x) = C(x^2 - \frac13)$. Setting $x = 1$, and noting that $\phi_2(1) = \frac23$, one obtains
$$P_2(x) = \frac32\left(x^2 - \frac13\right) = \frac12(3x^2 - 1). \qquad (9.10)$$

The set of Legendre polynomials is just one set of classical orthogonal polynomials that can be obtained in this way. Many had originally appeared as solutions of important boundary value problems in physics. Others in this group are shown in Table 9.1. They all have similar properties and we will just elaborate some of these for the Legendre functions in the next section.

Polynomial                     Symbol                 Interval              $\sigma(x)$
Hermite                        $H_n(x)$               $(-\infty,\infty)$    $e^{-x^2}$
Laguerre                       $L_n^{\alpha}(x)$      $[0,\infty)$          $e^{-x}$
Legendre                       $P_n(x)$               $(-1,1)$              $1$
Gegenbauer                     $C_n^{\lambda}(x)$     $(-1,1)$              $(1-x^2)^{\lambda-1/2}$
Tchebychef of the 1st kind     $T_n(x)$               $(-1,1)$              $(1-x^2)^{-1/2}$
Tchebychef of the 2nd kind     $U_n(x)$               $(-1,1)$              $(1-x^2)^{1/2}$
Jacobi                         $P_n^{(\nu,\mu)}(x)$   $(-1,1)$              $(1-x)^{\nu}(1+x)^{\mu}$

Table 9.1: Common classical orthogonal polynomials with the interval and weight function used to define them.

9.2. LEGENDRE POLYNOMIALS                                            369

9.2 Legendre Polynomials

In the last section we saw the Legendre polynomials in the context of orthogonal bases for a set of square integrable functions in $L^2(-1,1)$. In your first course in differential equations, you saw these polynomials as one of the solutions of the differential equation
$$(1-x^2)y'' - 2xy' + n(n+1)y = 0, \qquad n \in N. \qquad (9.11)$$
Recall that these were obtained by using power series expansion methods. In this section we will explore a few of the properties of these functions.

First, there is the Rodrigues formula:
$$P_n(x) = \frac{1}{2^n n!}\,\frac{d^n}{dx^n}(x^2-1)^n, \qquad n \in N. \qquad (9.12)$$
From this, one can see that $P_n(x)$ is an $n$th degree polynomial. Also, for $n$ odd the polynomial is an odd function, and for $n$ even the polynomial is an even function. One can systematically generate the Legendre polynomials in tabular form as shown in Table 9.2. Note that we get the same result as we found in the last section using orthogonalization.

$n$   $(x^2-1)^n$              $\frac{d^n}{dx^n}(x^2-1)^n$   $\frac{1}{2^n n!}$   $P_n(x)$
0     $1$                      $1$                           $1$                  $1$
1     $x^2-1$                  $2x$                          $\frac12$            $x$
2     $x^4-2x^2+1$             $12x^2-4$                     $\frac18$            $\frac12(3x^2-1)$
3     $x^6-3x^4+3x^2-1$        $120x^3-72x$                  $\frac1{48}$         $\frac12(5x^3-3x)$

Table 9.2: Tabular computation of the Legendre polynomials using the Rodrigues formula.

The classical orthogonal polynomials also satisfy three term recursion formulae. In the case of the Legendre polynomials, we have
$$(2n+1)xP_n(x) = (n+1)P_{n+1}(x) + nP_{n-1}(x), \qquad n = 1, 2, \ldots. \qquad (9.13)$$
This can also be rewritten by replacing $n$ with $n-1$ as
$$(2n-1)xP_{n-1}(x) = nP_n(x) + (n-1)P_{n-2}(x), \qquad n = 1, 2, \ldots.$$
In Figure 9.4 we show a few Legendre polynomials.
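The tabular computation can be automated. This illustrative sketch (not from the text) expands $(x^2-1)^n$, differentiates $n$ times, and scales by $1/(2^n n!)$, storing polynomials as exact coefficient lists:

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_diff(p):
    return [k * c for k, c in enumerate(p)][1:] or [Fraction(0)]

def legendre_rodrigues(n):
    """P_n(x) = 1/(2^n n!) d^n/dx^n (x^2-1)^n, coefficients lowest first."""
    p = [Fraction(1)]
    base = [Fraction(-1), Fraction(0), Fraction(1)]  # x^2 - 1
    for _ in range(n):
        p = poly_mul(p, base)
    for _ in range(n):
        p = poly_diff(p)
    scale = Fraction(1, 2**n * factorial(n))
    return [scale * c for c in p]

P2 = legendre_rodrigues(2)   # (3x^2 - 1)/2, i.e. [-1/2, 0, 3/2]
P3 = legendre_rodrigues(3)   # (5x^3 - 3x)/2, i.e. [0, -3/2, 0, 5/2]
```

Summing a coefficient list evaluates the polynomial at $x = 1$, so the normalization $P_n(1) = 1$ can also be checked this way.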

370                                          CHAPTER 9. SPECIAL FUNCTIONS

[Figure 9.4 here.]
Figure 9.4: Plots of the Legendre polynomials $P_2(x)$, $P_3(x)$, $P_4(x)$, and $P_5(x)$.

We will prove the three term recursion formula two different ways. First, we use the orthogonality properties of Legendre polynomials and the fact that the coefficient of $x^n$ in $P_n(x)$ is $\frac{1}{2^n n!}\frac{(2n)!}{n!}$. This last fact can be obtained from the Rodrigues formula. We see this by focussing on the leading coefficient of $(x^2-1)^n$, which is $x^{2n}$. The first derivative is $2nx^{2n-1}$. The second derivative is $2n(2n-1)x^{2n-2}$. The $j$th derivative is
$$[2n(2n-1)\cdots(2n-j+1)]x^{2n-j}.$$
Thus, the $n$th derivative is
$$[2n(2n-1)\cdots(n+1)]x^n.$$
This proves that $P_n(x)$ has degree $n$. The leading coefficient of $P_n(x)$ can now be written as
$$\frac{1}{2^n n!}[2n(2n-1)\cdots(n+1)] = \frac{1}{2^n n!}\,\frac{(2n)!}{n!}.$$

In order to prove the three term recursion formula we consider the expression $nP_n(x) - (2n-1)xP_{n-1}(x)$. While each term is a polynomial of degree $n$, the leading order terms cancel. We first look at the coefficient of

the leading order term in the second term. It is
$$(2n-1)\,\frac{1}{2^{n-1}(n-1)!}\,\frac{(2n-2)!}{(n-1)!} = \frac{1}{2^{n-1}(n-1)!}\,\frac{(2n-1)!}{(n-1)!}.$$
The coefficient of the leading term for $nP_n(x)$ can be written as
$$n\,\frac{1}{2^n n!}\,\frac{(2n)!}{n!} = \frac{1}{2^{n-1}(n-1)!}\,\frac{(2n-1)!}{(n-1)!}.$$
After some simple cancellations in the first factors, we see that the leading order terms cancel.

The next terms will be of degree $n-2$. This is because the $P_n$'s are either even or odd functions, thus only containing even, or odd, powers of $x$. We conclude that
$$nP_n(x) - (2n-1)xP_{n-1}(x) = \text{polynomial of degree } n-2.$$
Therefore, since the Legendre polynomials form a basis, we can write this polynomial as a linear combination of Legendre polynomials:
$$nP_n(x) - (2n-1)xP_{n-1}(x) = c_0P_0(x) + c_1P_1(x) + \cdots + c_{n-2}P_{n-2}(x).$$
Multiplying by $P_m(x)$ for $m = 0, 1, \ldots, n-3$, and integrating from $-1$ to $1$, we obtain
$$0 = c_m\|P_m\|^2$$
using orthogonality. Thus, all of these $c_m$'s are zero, leaving
$$nP_n(x) - (2n-1)xP_{n-1}(x) = c_{n-2}P_{n-2}(x).$$
The final coefficient can be found by using the normalization condition, $P_n(1) = 1$. Thus, $c_{n-2} = n - (2n-1) = -(n-1)$.

A second proof of the three term recursion formula can be obtained from the generating function of the Legendre polynomials. Many special functions have such generating functions. For Legendre polynomials the generating function is given by
$$g(x,t) = \frac{1}{\sqrt{1-2xt+t^2}} = \sum_{n=0}^{\infty}P_n(x)t^n, \qquad |x| < 1,\ |t| < 1. \qquad (9.14)$$

372                                          CHAPTER 9. SPECIAL FUNCTIONS

[Figure 9.5 here.]
Figure 9.5: The position vectors used to describe the tidal force on the Earth due to the moon.

This generating function occurs often in applications. For example, it arises in potential theory, such as electromagnetic or gravitational potentials. These potential functions are $\frac1r$ type functions. In particular, the gravitational potential between the Earth and the moon is proportional to the reciprocal of the magnitude of the difference between their positions relative to some coordinate system. An even better example would be to place the origin at the center of the Earth and consider the forces on the non-pointlike Earth due to the moon. Consider a piece of the Earth at position $r_1$ and the moon at position $r_2$ as shown in Figure 9.5. The tidal potential $\Phi$ is given as
$$\Phi \propto \frac{1}{|r_2 - r_1|} = \frac{1}{\sqrt{(r_2-r_1)\cdot(r_2-r_1)}} = \frac{1}{\sqrt{r_1^2 - 2r_1r_2\cos\theta + r_2^2}},$$
where $\theta$ is the angle between $r_1$ and $r_2$.

Typically, one of the position vectors is larger than the other. In this case, we have $r_1 \ll r_2$. So, one can write
$$\Phi \propto \frac{1}{\sqrt{r_1^2 - 2r_1r_2\cos\theta + r_2^2}} = \frac{1}{r_2}\,\frac{1}{\sqrt{1 - 2\frac{r_1}{r_2}\cos\theta + \left(\frac{r_1}{r_2}\right)^2}}.$$
Now, define $x = \cos\theta$ and $t = \frac{r_1}{r_2}$. We then have that the tidal potential is proportional to the generating function for the Legendre polynomials! So, we can write the tidal potential as
$$\Phi \propto \frac{1}{r_2}\sum_{n=0}^{\infty}P_n(\cos\theta)\left(\frac{r_1}{r_2}\right)^n.$$
The first term in the expansion will give the usual force between the Earth and the moon as point masses. The next terms will give expressions for the tidal effects.

9.2. LEGENDRE POLYNOMIALS                                            373

Now that we have some idea as to where this generating function might have originated, we can make some use of it. First of all, it can be used to provide values of the Legendre polynomials at key points. For example, $P_n(0)$ is found by looking at $g(0,t)$. In this case we have
$$g(0,t) = \frac{1}{\sqrt{1+t^2}} = \sum_{n=0}^{\infty}P_n(0)t^n. \qquad (9.15)$$
However, we can use the binomial expansion to find our result. Thus, we have
$$\frac{1}{\sqrt{1+t^2}} = 1 - \frac12t^2 + \frac38t^4 + \cdots.$$
Comparing these expansions, we have $P_n(0) = 0$ for $n$ odd, and one can show that
$$P_{2n}(0) = (-1)^n\,\frac{(2n-1)!!}{(2n)!!}.$$
[Note that the double factorial is defined by $n!! = n(n-2)!!$, with $0!! = 1!! = 1$; for example, $5!! = 5(3)(1)$ and $6!! = 6(4)(2)$.]

A simpler evaluation is to find $P_n(-1)$. In this case we have
$$g(-1,t) = \frac{1}{\sqrt{1+2t+t^2}} = \frac{1}{1+t} = 1 - t + t^2 - t^3 + \cdots.$$
Therefore, $P_n(-1) = (-1)^n$.

We can also use the generating function to find recursion relations. To prove the three term recursion that we introduced above, we need only differentiate the generating function with respect to $t$ in Equation (9.14) and rearrange the result. First note that
$$\frac{\partial g}{\partial t} = \frac{x-t}{(1-2xt+t^2)^{3/2}} = \frac{x-t}{1-2xt+t^2}\,g(x,t).$$
Combining this with
$$\frac{\partial g}{\partial t} = \sum_{n=0}^{\infty}nP_n(x)t^{n-1},$$
we have
$$(x-t)g(x,t) = (1-2xt+t^2)\sum_{n=0}^{\infty}nP_n(x)t^{n-1}.$$
Inserting the series expression for $g(x,t)$ and distributing the sum on the right side, we obtain
$$(x-t)\sum_{n=0}^{\infty}P_n(x)t^n = \sum_{n=0}^{\infty}nP_n(x)t^{n-1} - \sum_{n=0}^{\infty}2nxP_n(x)t^n + \sum_{n=0}^{\infty}nP_n(x)t^{n+1}.$$

374                                          CHAPTER 9. SPECIAL FUNCTIONS

Rearranging leads to three separate sums:
$$\sum_{n=0}^{\infty}nP_n(x)t^{n-1} - \sum_{n=0}^{\infty}(2n+1)xP_n(x)t^n + \sum_{n=0}^{\infty}(n+1)P_n(x)t^{n+1} = 0. \qquad (9.16)$$
Each term contains powers of $t$ that we would like to combine into a single sum. This is done by reindexing. For the first sum, we could use the new index $k = n-1$. Then,
$$\sum_{n=0}^{\infty}nP_n(x)t^{n-1} = \sum_{k=-1}^{\infty}(k+1)P_{k+1}(x)t^k. \qquad (9.17)$$
Note that
$$\sum_{n=0}^{\infty}nP_n(x)t^{n-1} = 0 + P_1(x) + 2P_2(x)t + 3P_3(x)t^2 + \cdots$$
and
$$\sum_{k=-1}^{\infty}(k+1)P_{k+1}(x)t^k = 0 + P_1(x) + 2P_2(x)t + 3P_3(x)t^2 + \cdots$$
actually give the same sum. The indices are sometimes referred to as dummy indices because they do not show up in the expanded expressions and can be replaced with another letter. If we want to do so, we could now replace all of the $k$'s with $n$'s. These different indices are just another way of writing out the terms.

The second sum in Equation (9.16) just needs the replacement $n = k$, and the last sum we reindex using $k = n+1$. Equation (9.16) becomes
$$\sum_{k=-1}^{\infty}(k+1)P_{k+1}(x)t^k - \sum_{k=0}^{\infty}(2k+1)xP_k(x)t^k + \sum_{k=1}^{\infty}kP_{k-1}(x)t^k = 0. \qquad (9.18)$$
We can now combine all of the terms, noting that the $k = -1$ term is zero and the $k = 0$ terms give $P_1(x) - xP_0(x) = 0$. Therefore, for $k > 0$,
$$\sum_{k=1}^{\infty}\left[(k+1)P_{k+1}(x) - (2k+1)xP_k(x) + kP_{k-1}(x)\right]t^k = 0. \qquad (9.19)$$

9.2. LEGENDRE POLYNOMIALS                                            375

Since this is true for all $t$, the coefficients of the $t^k$'s are zero, or
$$(k+1)P_{k+1}(x) - (2k+1)xP_k(x) + kP_{k-1}(x) = 0, \qquad k = 1, 2, \ldots. \qquad (9.20)$$
There are other recursion relations. For example,
$$P'_{n+1}(x) - P'_{n-1}(x) = (2n+1)P_n(x). \qquad (9.21)$$
This can be proven using the generating function by differentiating $g(x,t)$ with respect to $x$ and rearranging the resulting infinite series just as in this last manipulation.

Another use of the generating function is to obtain the normalization constant, $\|P_n\|^2$. Squaring the generating function, we have
$$\frac{1}{1-2xt+t^2} = \left[\sum_{n=0}^{\infty}P_n(x)t^n\right]^2 = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}P_n(x)P_m(x)t^{n+m}. \qquad (9.22)$$
Integrating from $-1$ to $1$ and using orthogonality, we have
$$\int_{-1}^{1}\frac{dx}{1-2xt+t^2} = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}t^{n+m}\int_{-1}^{1}P_n(x)P_m(x)\,dx = \sum_{n=0}^{\infty}t^{2n}\int_{-1}^{1}P_n^2(x)\,dx.$$
However, one can show that
$$\int_{-1}^{1}\frac{dx}{1-2xt+t^2} = \frac{1}{t}\ln\left(\frac{1+t}{1-t}\right).$$
One can expand this expression about $t = 0$ to obtain
$$\frac{1}{t}\ln\left(\frac{1+t}{1-t}\right) = \sum_{n=0}^{\infty}\frac{2}{2n+1}\,t^{2n}.$$
Comparing this result with Equation (9.22), we find that
$$\|P_n\|^2 = \int_{-1}^{1}P_n^2(x)\,dx = \frac{2}{2n+1}. \qquad (9.23)$$

Finally, we can expand functions in this orthogonal basis. This is just a generalized Fourier series. A Fourier-Legendre series expansion for $f(x)$ on $(-1,1)$ takes the form
$$f(x) \sim \sum_{n=0}^{\infty}c_nP_n(x). \qquad (9.24)$$
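The three term recursion (9.20) gives a convenient way to generate the Legendre polynomials. This illustrative sketch (not from the text) builds $P_{k+1}$ from $P_k$ and $P_{k-1}$ as exact coefficient lists:

```python
from fractions import Fraction

def legendre_up_to(N):
    """Generate P_0, ..., P_N as coefficient lists (lowest degree first)
    using (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    polys = [[Fraction(1)], [Fraction(0), Fraction(1)]]  # P_0 = 1, P_1 = x
    for k in range(1, N):
        Pk, Pkm1 = polys[k], polys[k - 1]
        # (2k+1) x P_k: shift coefficients up one degree and scale.
        new = [Fraction(0)] + [Fraction(2 * k + 1) * c for c in Pk]
        # Subtract k P_{k-1}.
        for j, c in enumerate(Pkm1):
            new[j] -= k * c
        polys.append([c / (k + 1) for c in new])
    return polys

P = legendre_up_to(5)
# e.g. P[2] = [-1/2, 0, 3/2], i.e. P_2(x) = (3x^2 - 1)/2
```

Since summing a coefficient list evaluates the polynomial at $x = 1$, the normalization $P_n(1) = 1$ used in the first proof is easy to confirm for every generated polynomial.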

376                                          CHAPTER 9. SPECIAL FUNCTIONS

As with Fourier trigonometric series, we can determine the expansion coefficients by multiplying both sides by $P_m(x)$ and integrating. Orthogonality gives the usual form for the generalized Fourier coefficients,
$$c_n = \frac{\langle f, P_n\rangle}{\|P_n\|^2}.$$
We have just found $\|P_n\|^2 = \frac{2}{2n+1}$. Therefore, the Fourier-Legendre coefficients are
$$c_n = \frac{2n+1}{2}\int_{-1}^{1}f(x)P_n(x)\,dx. \qquad (9.25)$$

Example 1: Expand $f(x) = x^3$ in a Fourier-Legendre series.

We need to compute
$$c_n = \frac{2n+1}{2}\int_{-1}^{1}x^3P_n(x)\,dx. \qquad (9.26)$$
We first note that
$$\int_{-1}^{1}x^mP_n(x)\,dx = 0 \qquad \text{for } m < n.$$
This is proven using the Rodrigues formula in Equation (9.12). We have
$$\int_{-1}^{1}x^mP_n(x)\,dx = \frac{1}{2^n n!}\int_{-1}^{1}x^m\frac{d^n}{dx^n}(x^2-1)^n\,dx.$$
For $m < n$, we integrate by parts $m$ times and use the facts that $P_n(1) = 1$ and $P_n(-1) = (-1)^n$. The right hand side vanishes.

As a result, we will have that $c_n = 0$ for $n > 3$ in this example. This leaves the computation of $c_0$, $c_1$, $c_2$ and $c_3$. Since $x^3$ is an odd function and $P_0$ and $P_2$ are even functions, $c_0 = 0$ and $c_2 = 0$. This leaves us with only two coefficients to compute. These are
$$c_1 = \frac32\int_{-1}^{1}x^4\,dx = \frac35$$
and
$$c_3 = \frac72\int_{-1}^{1}x^3\left[\frac12(5x^3-3x)\right]dx = \frac25.$$

9.2. LEGENDRE POLYNOMIALS                                            377

Thus,
$$x^3 = \frac35P_1(x) + \frac25P_3(x). \qquad (9.27)$$
Of course, this is simple to check:
$$\frac35P_1(x) + \frac25P_3(x) = \frac35x + \frac25\left[\frac12(5x^3-3x)\right] = x^3.$$
Well, maybe we could have guessed this without doing any integration. Let's see. Since $f(x) = x^3$ has degree three, we do not expect the expansion in Legendre polynomials for $f(x)$ to have polynomials of order greater than three. So, we assume from the beginning that
$$x^3 = c_0P_0(x) + c_1P_1(x) + c_2P_2(x) + c_3P_3(x).$$
Then,
$$x^3 = c_0 + c_1x + \frac{c_2}{2}(3x^2-1) + \frac{c_3}{2}(5x^3-3x).$$
However, there are no quadratic terms on the left side, so $c_2 = 0$. Also, there are no constant terms left except $c_0$, so $c_0 = 0$. This leaves
$$x^3 = c_1x + \frac{c_3}{2}(5x^3-3x) = \left(c_1 - \frac32c_3\right)x + \frac52c_3x^3.$$
Equating coefficients of the remaining like terms, we have that
$$c_3 = \frac25 \qquad \text{and} \qquad c_1 = \frac32c_3 = \frac35.$$

Example 2: Expand the Heaviside function in a Fourier-Legendre series.

In this case, we cannot find the expansion coefficients without some integration. We have to compute
$$c_n = \frac{2n+1}{2}\int_{-1}^{1}f(x)P_n(x)\,dx = \frac{2n+1}{2}\int_{0}^{1}P_n(x)\,dx. \qquad (9.28)$$
For $n = 0$, we have
$$c_0 = \frac12\int_{0}^{1}dx = \frac12.$$
For $n > 1$, we can make use of the identity $P'_{n+1}(x) - P'_{n-1}(x) = (2n+1)P_n(x)$:
$$c_n = \frac12\int_{0}^{1}\left[P'_{n+1}(x) - P'_{n-1}(x)\right]dx = \frac12\left[P_{n-1}(0) - P_{n+1}(0)\right].$$
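The expansion of $x^3$ can be confirmed with exact arithmetic. This illustrative sketch (not from the text) evaluates the coefficient formula (9.25) using coefficient lists and the exact monomial integrals over $(-1,1)$:

```python
from fractions import Fraction

# P_0, ..., P_3 as coefficient lists, lowest degree first
P = [[Fraction(1)],
     [0, Fraction(1)],
     [Fraction(-1, 2), 0, Fraction(3, 2)],
     [0, Fraction(-3, 2), 0, Fraction(5, 2)]]

def integrate(poly):
    """Integral over (-1, 1); odd powers drop out by symmetry."""
    return sum(Fraction(c) * Fraction(2, k + 1)
               for k, c in enumerate(poly) if k % 2 == 0)

def coeff(f, n):
    """c_n = (2n+1)/2 * integral of f(x) P_n(x) over (-1,1)."""
    prod = [Fraction(0)] * (len(f) + len(P[n]) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(P[n]):
            prod[i + j] += Fraction(a) * Fraction(b)
    return Fraction(2 * n + 1, 2) * integrate(prod)

f = [0, 0, 0, 1]                      # f(x) = x^3
c = [coeff(f, n) for n in range(4)]   # [0, 3/5, 0, 2/5]
```

The computed coefficients agree with both the integration and the matching-coefficients arguments above.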

378                                          CHAPTER 9. SPECIAL FUNCTIONS

[Figure 9.6 here: partial sum of the Fourier-Legendre series.]
Figure 9.6: Sum of the first 21 terms of the Fourier-Legendre series expansion of the Heaviside function.

Then we have the expansion
$$f(x) \sim \frac12 + \frac12\sum_{n=1}^{\infty}\left[P_{n-1}(0) - P_{n+1}(0)\right]P_n(x),$$
which can be written as
$$f(x) \sim \frac12 + \frac12\sum_{n=1}^{\infty}\left[P_{2n-2}(0) - P_{2n}(0)\right]P_{2n-1}(x)$$
$$= \frac12 + \frac12\sum_{n=1}^{\infty}\left[(-1)^{n-1}\frac{(2n-3)!!}{(2n-2)!!} - (-1)^n\frac{(2n-1)!!}{(2n)!!}\right]P_{2n-1}(x)$$
$$= \frac12 - \frac12\sum_{n=1}^{\infty}(-1)^n\frac{(2n-3)!!}{(2n-2)!!}\left[1 + \frac{2n-1}{2n}\right]P_{2n-1}(x)$$
$$= \frac12 - \frac12\sum_{n=1}^{\infty}(-1)^n\frac{(2n-3)!!}{(2n-2)!!}\,\frac{4n-1}{2n}\,P_{2n-1}(x). \qquad (9.29)$$
The sum of the first 21 terms is shown in Figure 9.6.
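The slow convergence of this expansion can be observed numerically. The following illustrative sketch (not from the text) evaluates the coefficients $c_n = \frac12[P_{n-1}(0) - P_{n+1}(0)]$ and the partial sums, using the three term recursion to compute Legendre values:

```python
def legendre_values(x, N):
    """P_0(x), ..., P_N(x) via (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    vals = [1.0, x]
    for k in range(1, N):
        vals.append(((2 * k + 1) * x * vals[k] - k * vals[k - 1]) / (k + 1))
    return vals

def heaviside_partial_sum(x, N):
    """Partial sum of the Fourier-Legendre series for the Heaviside function:
    c_0 = 1/2 and c_n = (1/2)[P_{n-1}(0) - P_{n+1}(0)] for n >= 1."""
    p0 = legendre_values(0.0, N + 1)
    px = legendre_values(x, N + 1)
    s = 0.5
    for n in range(1, N + 1):
        s += 0.5 * (p0[n - 1] - p0[n + 1]) * px[n]
    return s

s = heaviside_partial_sum(0.5, 200)   # slowly approaches H(0.5) = 1
```

At $x = 0$, the jump point, every partial sum equals $\frac12$, the average of the two one-sided limits, as expected for a Fourier-type expansion of a discontinuous function.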

9.3. SPHERICAL HARMONICS                                             379

9.3 Spherical Harmonics

9.4 Gamma Function

Another function that often occurs in the study of special functions is the Gamma function. We will see that the Gamma function is the natural generalization of the factorial function. Also, we will need the Gamma function in the next section on Bessel functions. For $x > 0$ we define the Gamma function as
$$\Gamma(x) = \int_0^{\infty}t^{x-1}e^{-t}\,dt, \qquad x > 0. \qquad (9.30)$$
We first show that the Gamma function generalizes the factorial function. In fact, we have
$$\Gamma(1) = 1 \qquad \text{and} \qquad \Gamma(x+1) = x\Gamma(x).$$
The reader can prove the second equation by simply performing an integration by parts. For $n$ an integer, we can iterate the second expression to obtain
$$\Gamma(n+1) = n\Gamma(n) = n(n-1)\Gamma(n-1) = n(n-1)\cdots 2\,\Gamma(1) = n!.$$
This can also be written as $\Gamma(n) = (n-1)!$.

We can also define the Gamma function for negative, non-integer values of $x$. We first note that by iteration on $n \in Z^+$, we have
$$\Gamma(x+n) = (x+n-1)\cdots(x+1)x\,\Gamma(x), \qquad x+n > 0.$$
Solving for $\Gamma(x)$, we then find
$$\Gamma(x) = \frac{\Gamma(x+n)}{(x+n-1)\cdots(x+1)x}, \qquad x < 0,\ x+n > 0.$$
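The factorial property and the functional equation can be checked directly with the standard library's `math.gamma`; a small illustrative sketch (not from the text):

```python
import math

# Gamma(n) = (n-1)! for positive integers n
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# The functional equation Gamma(x+1) = x Gamma(x) at a non-integer point
x = 2.5
assert math.isclose(math.gamma(x + 1), x * math.gamma(x))

# Gamma(1/2) = sqrt(pi), derived in the next page
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
```

`math.gamma` raises a `ValueError` at zero and the negative integers, matching the fact that the Gamma function is undefined there.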

380                                          CHAPTER 9. SPECIAL FUNCTIONS

Note that the Gamma function is undefined at zero and the negative integers.

Another useful formula is
$$\Gamma\!\left(\frac12\right) = \sqrt{\pi}.$$
It is simply found as
$$\Gamma\!\left(\frac12\right) = \int_0^{\infty}t^{-\frac12}e^{-t}\,dt.$$
Letting $t = z^2$, we obtain
$$\Gamma\!\left(\frac12\right) = 2\int_0^{\infty}e^{-z^2}\,dz.$$
Due to the symmetry of the integrand, we have the classic integral
$$\Gamma\!\left(\frac12\right) = \int_{-\infty}^{\infty}e^{-z^2}\,dz,$$
which we had integrated when we computed the Fourier transform of a Gaussian. Recall that
$$\int_{-\infty}^{\infty}e^{-z^2}\,dz = \sqrt{\pi}.$$
Therefore, we have confirmed that $\Gamma(\frac12) = \sqrt{\pi}$.

We have seen that the factorial function can be written in terms of Gamma functions. One can also relate the odd double factorials to the Gamma function. First, we note that
$$(2n)!! = 2^n n!, \qquad (2n+1)!! = \frac{(2n+1)!}{2^n n!}.$$
In particular, one can prove
$$\Gamma\!\left(n+\frac12\right) = \frac{(2n-1)!!}{2^n}\sqrt{\pi}.$$
Formally, this gives
$$\frac12! = \Gamma\!\left(\frac32\right) = \frac{\sqrt{\pi}}{2}.$$
Another useful relation is
$$\Gamma(x)\Gamma(1-x) = \frac{\pi}{\sin\pi x}.$$
This result can be proven using complex variable methods, which we have not covered in this book. In particular, one needs to be able to integrate around branch cuts.

9.5. BESSEL FUNCTIONS                                                381

9.5 Bessel Functions

Another important differential equation that arises in many physics applications is
$$x^2y'' + xy' + (x^2 - p^2)y = 0. \qquad (9.31)$$
This equation is readily put into self-adjoint form as
$$(xy')' + \left(x - \frac{p^2}{x}\right)y = 0. \qquad (9.32)$$
This equation was solved in the first course on differential equations using power series methods, namely by using the Frobenius method. One assumes a series solution of the form
$$y(x) = \sum_{n=0}^{\infty}a_nx^{n+s},$$
where one seeks allowed values of the constant $s$ and a recursion relation for the coefficients, $a_n$. One finds that $s = \pm p$ and
$$a_n = -\frac{a_{n-2}}{(n+s)^2 - p^2}, \qquad n \ge 2.$$
One solution is the Bessel function of the first kind of order $p$, given as
$$y(x) = J_p(x) = \sum_{n=0}^{\infty}\frac{(-1)^n}{\Gamma(n+1)\Gamma(n+p+1)}\left(\frac{x}{2}\right)^{2n+p}. \qquad (9.33)$$
In Figure 9.7 we display the first few Bessel functions of the first kind of integer order. Note that these functions can be described as decaying oscillatory functions.

A second linearly independent solution is obtained for $p$ not an integer as $J_{-p}(x)$. However, for $p$ an integer, the $\Gamma(n+p+1)$ factor leads to evaluations of the Gamma function at zero, or negative integers, when $p$ is negative. Thus, the above series is not defined in these cases. Another method for obtaining a second linearly independent solution is through a linear combination of $J_p(x)$ and $J_{-p}(x)$ as
$$N_p(x) = \frac{\cos\pi p\,J_p(x) - J_{-p}(x)}{\sin\pi p}. \qquad (9.34)$$
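The series (9.33) converges rapidly for moderate $x$ and can be summed directly; this illustrative sketch (not from the text) uses `math.gamma` for the coefficients:

```python
import math

def bessel_j(p, x, terms=60):
    """J_p(x) from the series
    sum_n (-1)^n / (Gamma(n+1) Gamma(n+p+1)) (x/2)^(2n+p)."""
    total = 0.0
    for n in range(terms):
        total += ((-1) ** n / (math.gamma(n + 1) * math.gamma(n + p + 1))
                  * (x / 2) ** (2 * n + p))
    return total

# J_0(0) = 1, and the first zero of J_0 lies near x = 2.404826
print(bessel_j(0, 0.0), bessel_j(0, 2.404826))
```

For large $x$ the alternating terms grow before they decay, so in floating point this direct summation loses accuracy; it is only a sketch suitable for small and moderate arguments.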

382                                          CHAPTER 9. SPECIAL FUNCTIONS

[Figure 9.7 here.]
Figure 9.7: Plots of the Bessel functions $J_0(x)$, $J_1(x)$, $J_2(x)$, and $J_3(x)$.

These functions are called the Neumann functions, or Bessel functions of the second kind of order $p$. In Figure 9.8 we display the first few Bessel functions of the second kind of integer order. Note that these functions are also decaying oscillatory functions. However, they are singular at $x = 0$. In many applications these functions do not satisfy the boundary condition that one desires a bounded solution at $x = 0$. For example, one standard problem is to describe the oscillations of a circular drumhead. In this case the Bessel functions describe the radial part of the solution and one does not expect a singular solution at the center of the drum.

Bessel functions satisfy a variety of properties, which we will only list at this time for Bessel functions of the first kind.

Derivative Identities
$$\frac{d}{dx}\left[x^pJ_p(x)\right] = x^pJ_{p-1}(x). \qquad (9.35)$$
$$\frac{d}{dx}\left[x^{-p}J_p(x)\right] = -x^{-p}J_{p+1}(x). \qquad (9.36)$$

9.5. BESSEL FUNCTIONS                                                383

[Figure 9.8 here.]
Figure 9.8: Plots of the Neumann functions $N_0(x)$, $N_1(x)$, $N_2(x)$, and $N_3(x)$.

Recursion Formulae
$$J_{p-1}(x) + J_{p+1}(x) = \frac{2p}{x}J_p(x). \qquad (9.37)$$
$$J_{p-1}(x) - J_{p+1}(x) = 2J'_p(x). \qquad (9.38)$$

Orthogonality
$$\int_0^a xJ_p\!\left(j_{pn}\frac{x}{a}\right)J_p\!\left(j_{pm}\frac{x}{a}\right)dx = \frac{a^2}{2}\left[J_{p+1}(j_{pn})\right]^2\delta_{n,m}, \qquad (9.39)$$
where $j_{pn}$ is the $n$th root of $J_p(x)$, $J_p(j_{pn}) = 0$, $n = 1, 2, \ldots$. A list of some of these roots is provided in Table 9.3.

Generating Function
$$e^{x(t-\frac1t)/2} = \sum_{n=-\infty}^{\infty}J_n(x)t^n, \qquad x > 0,\ t \ne 0. \qquad (9.40)$$

Integral Representation
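The recursion and derivative identities are easy to spot-check numerically. This illustrative sketch (not from the text) sums the series for $J_p$ and tests Equations (9.37) and (9.38) at a sample point:

```python
import math

def bessel_j(p, x, terms=60):
    """Series for J_p(x)."""
    return sum((-1) ** n / (math.gamma(n + 1) * math.gamma(n + p + 1))
               * (x / 2) ** (2 * n + p) for n in range(terms))

x = 1.7
for p in (1, 2, 3):
    # J_{p-1} + J_{p+1} = (2p/x) J_p
    lhs = bessel_j(p - 1, x) + bessel_j(p + 1, x)
    rhs = (2 * p / x) * bessel_j(p, x)
    assert abs(lhs - rhs) < 1e-12

# J_{p-1} - J_{p+1} = 2 J_p'(x), checked with a central difference for p = 2
h = 1e-6
deriv = (bessel_j(2, x + h) - bessel_j(2, x - h)) / (2 * h)
assert abs(bessel_j(1, x) - bessel_j(3, x) - 2 * deriv) < 1e-6
```

Both identities hold to roundoff, which also serves as a consistency check on the series implementation itself.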

384                                          CHAPTER 9. SPECIAL FUNCTIONS

$$J_n(x) = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin\theta - n\theta)\,d\theta, \qquad x > 0,\ n \in Z. \qquad (9.41)$$

$p$    $n=1$     $n=2$     $n=3$     $n=4$     $n=5$     $n=6$     $n=7$     $n=8$     $n=9$
0      2.405     5.520     8.654     11.792    14.931    18.071    21.212    24.352    27.493
1      3.832     7.016     10.173    13.324    16.471    19.616    22.760    25.904    29.047
2      5.136     8.417     11.620    14.796    17.960    21.117    24.270    27.421    30.569
3      6.380     9.761     13.015    16.223    19.409    22.583    25.748    28.908    32.065
4      7.588     11.065    14.373    17.616    20.827    24.019    27.199    30.371    33.537
5      8.771     12.339    15.700    19.005    22.218    25.417    28.604    31.781    34.989

Table 9.3: The zeros $j_{pn}$ of the Bessel functions $J_p(x)$.

Fourier-Bessel Series

Since the Bessel functions are an orthogonal set of eigenfunctions of a Sturm-Liouville problem, we can expand square integrable functions in this basis. In fact, the eigenvalue problem is given in the form
$$x^2y'' + xy' + (\lambda x^2 - p^2)y = 0. \qquad (9.42)$$
The solutions are then of the form $J_p(\sqrt{\lambda}x)$, as can be shown by making the substitution $t = \sqrt{\lambda}x$ in the differential equation. One solves the differential equation with boundary conditions that $y(x)$ is bounded at $x = 0$ and $y(a) = 0$.

Furthermore, if $0 < x < a$, then one can show that
$$f(x) = \sum_{n=1}^{\infty}c_nJ_p\!\left(j_{pn}\frac{x}{a}\right), \qquad (9.43)$$
where the Fourier-Bessel coefficients are found using the orthogonality relation as
$$c_n = \frac{2}{a^2\left[J_{p+1}(j_{pn})\right]^2}\int_0^a xf(x)J_p\!\left(j_{pn}\frac{x}{a}\right)dx. \qquad (9.44)$$

Example: Expand $f(x) = 1$ for $0 < x < 1$ in a Fourier-Bessel series of the form
$$f(x) = \sum_{n=1}^{\infty}c_nJ_0(j_{0n}x).$$

9.6. HYPERGEOMETRIC FUNCTIONS                                        385

We need only compute the Fourier-Bessel coefficients in Equation (9.44):
$$c_n = \frac{2}{\left[J_1(j_{0n})\right]^2}\int_0^1 xJ_0(j_{0n}x)\,dx. \qquad (9.45)$$
From Equation (9.35) we have
$$\int_0^1 xJ_0(j_{0n}x)\,dx = \frac{1}{j_{0n}^2}\int_0^{j_{0n}}yJ_0(y)\,dy = \frac{1}{j_{0n}^2}\int_0^{j_{0n}}\frac{d}{dy}\left[yJ_1(y)\right]dy = \frac{1}{j_{0n}^2}\left[yJ_1(y)\right]_0^{j_{0n}} = \frac{1}{j_{0n}}J_1(j_{0n}). \qquad (9.46)$$
As a result, we have found that the desired Fourier-Bessel expansion is
$$1 = 2\sum_{n=1}^{\infty}\frac{J_0(j_{0n}x)}{j_{0n}J_1(j_{0n})}, \qquad 0 < x < 1. \qquad (9.47)$$
In Figure 9.9 we show the partial sum for the first fifty terms of this series. We note the slow convergence due to the Gibbs phenomenon.

Note: For reference, this was done in Maple using the following code:

    2*sum(BesselJ(0, BesselJZeros(0, n)*x)
          /(BesselJZeros(0, n)*BesselJ(1, BesselJZeros(0, n))), n = 1..50);

9.6 Hypergeometric Functions
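The same partial sums can be reproduced outside Maple. This illustrative Python sketch (not from the text; all names are my own) sums the series for $J_0$ and $J_1$, finds the first few zeros $j_{0n}$ by bisection, and evaluates the expansion (9.47) at an interior point:

```python
import math

def bessel_j(p, x, terms=80):
    return sum((-1) ** n / (math.gamma(n + 1) * math.gamma(n + p + 1))
               * (x / 2) ** (2 * n + p) for n in range(terms))

def bisect_zero(f, a, b, tol=1e-12):
    """Bisection on a bracket [a, b] with a sign change."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Brackets around the first five zeros of J_0 (spacing roughly pi)
brackets = [(2, 3), (5, 6), (8, 9), (11, 12), (14, 15.5)]
zeros = [bisect_zero(lambda x: bessel_j(0, x), a, b) for a, b in brackets]

def partial_sum(x, N=5):
    """Partial sum of 1 = 2 sum_n J_0(j0n x) / (j0n J_1(j0n)), 0 < x < 1."""
    return 2 * sum(bessel_j(0, j * x) / (j * bessel_j(1, j)) for j in zeros[:N])

value = partial_sum(0.5)   # slowly approaches 1 as more zeros are included
```

Even five terms get within a few percent of $1$ at interior points, while the convergence near the endpoints is much slower, consistent with the Gibbs behavior seen in Figure 9.9.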

386                                          CHAPTER 9. SPECIAL FUNCTIONS

[Figure 9.9 here.]
Figure 9.9: Plot of the first 50 terms of the Fourier-Bessel series in Equation (9.47) for $f(x) = 1$ on $0 < x < 1$.

Appendix A

Sequences and Series

In this chapter we will review and extend some of the concepts and definitions that you might have seen previously related to infinite series. Working with infinite series can be a little tricky and we need to understand some of the basics before moving on to the study of series of trigonometric functions.

For example, one can show that the infinite series
$$S = 1 - \frac12 + \frac13 - \frac14 + \frac15 - \cdots$$
converges to $\ln 2$. However, the terms can be rearranged to give
$$1 + \left(\frac13 - \frac12 + \frac15\right) + \left(\frac17 - \frac14 + \frac19\right) + \left(\frac1{11} - \frac16 + \frac1{13}\right) + \cdots = \frac32\ln 2.$$
In fact, other rearrangements can be made to give any desired sum!

Other problems with infinite series can occur. Try to sum the following infinite series to find that
$$\sum_{k=2}^{\infty}\frac{\ln k}{k^2} \sim 0.937548\ldots.$$
A sum of even as many as $10^7$ terms only gives convergence to four or five decimal places.

The series
$$\frac1x - \frac1{x^2} + \frac{2!}{x^3} - \frac{3!}{x^4} + \frac{4!}{x^5} - \cdots, \qquad x > 0,$$
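The rearrangement claim is easy to check numerically. This illustrative sketch (not from the text) compares partial sums of the alternating harmonic series with the rearranged series, which takes two odd reciprocals for each even one:

```python
import math

def alternating_harmonic(n_terms):
    """1 - 1/2 + 1/3 - 1/4 + ... -> ln 2."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_groups):
    """1 + (1/3 - 1/2 + 1/5) + (1/7 - 1/4 + 1/9) + ... -> (3/2) ln 2.
    After the leading 1, each group adds two odd reciprocals and
    subtracts one even reciprocal."""
    total = 1.0
    odd, even = 3, 2
    for _ in range(n_groups):
        total += 1.0 / odd - 1.0 / even + 1.0 / (odd + 2)
        odd += 4
        even += 2
    return total

a = alternating_harmonic(100000)   # near ln 2  = 0.6931...
b = rearranged(100000)             # near 1.5 ln 2 = 1.0397...
```

Every term of the original series is used exactly once in the rearranged one, yet the two limits differ, which is exactly the point: a conditionally convergent series has no rearrangement-independent sum.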

388                                  APPENDIX A. SEQUENCES AND SERIES

diverges for all $x > 0$. However, truncation of this divergent series leads to an approximation of the integral
$$\int_0^{\infty}\frac{e^{-t}}{x+t}\,dt, \qquad x > 0.$$
So, you might think this divergent series is useless. Can we make sense out of any of these, or other manipulations of infinite series?

A.1 Sequences of Real Numbers

We first begin with the definitions for sequences and series of numbers.

Definition: A sequence is a function whose domain is the set of positive integers, $a(n)$, $n \in N$. For a sequence, one typically uses subscript notation and not functional notation: $a_n = a(n)$. We then call $a_n$ the $n$th term of the sequence. Examples are

1. $a(n) = n$ yields the sequence $\{1, 2, 3, 4, 5, \ldots\}$.
2. $a(n) = 3n$ yields the sequence $\{3, 6, 9, 12, \ldots\}$.

However, another way to define a particular sequence is recursively.

Definition: A recursive sequence is defined in two steps:
1. The value of the first term (or first few terms) is given.
2. A rule, or recursion formula, to determine later terms from earlier ones is given.

A typical example is given by the Fibonacci sequence. It can be defined by the recursion formula $a_{n+1} = a_n + a_{n-1}$, $n \ge 2$, and the starting values of
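A recursive definition translates directly into code. This illustrative sketch (not from the text) generates the Fibonacci sequence from its recursion formula $a_{n+1} = a_n + a_{n-1}$ with the starting values $a_1 = 0$ and $a_2 = 1$:

```python
def fibonacci(n_terms):
    """Build the sequence from its recursive definition:
    a_1 = 0, a_2 = 1, then a_{n+1} = a_n + a_{n-1}."""
    terms = [0, 1]
    while len(terms) < n_terms:
        terms.append(terms[-1] + terms[-2])
    return terms[:n_terms]

seq = fibonacci(8)   # [0, 1, 1, 2, 3, 5, 8, 13]
```

As the text notes, such recursive definitions are often more convenient for computing many terms than a closed-form expression for the $n$th term would be.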

CONVERGENCE OF SEQUENCES Plot of an = n−1 vs n 9 389 8 7 6 5 an 4 3 2 1 0 1 2 3 4 5 n 6 7 8 9 10 Figure A.A.}. an also gets large. 5.2.2 Convergence of Sequences Next we are interested in the behavior of the sequence as n gets large. Writing the general expression for the nth term is possible. 3. we ﬁnd the behavior as shown in Figure A. n an = (−1) .1: Plot of an = n − 1 for n = 1 . If no such number exists.3. then the sequence is said to diverge. .}. .2. n > N ⇒ |a − L| < . Given an > 0. but it is not as simply stated. This sequence is the alternating 2n 1 1 sequence { 2 . 8 . . we ask for what value of N the nth terms . A. . 4 Deﬁnition The sequence an converges to the number L if to every positive number there corresponds an integer N such that for all n. is shown in Figure A. . Notice that as n gets large. Another related series. . This sequence is said to be divergent. . This is depicted in Figure A.1. 2. . 1. For the given sequence.4-A. 8. the sequence deﬁned by an = 21 approaches a limit as n n gets large. The resulting sequence is {0. On the other hand. For the sequence deﬁned by an = n − 1. we see that L = 0. In Figures A. a1 = 0 and a1 = 1. 1 . 1. Recursive deﬁnitions are often useful in doing computations for large values of n.5 we see what this means. 10.
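The recursive definition above can be turned directly into a computation. Here is a minimal Python sketch using the starting values quoted in the text; the function name `fib_sequence` is ours, not from the book:

```python
def fib_sequence(n_terms, a1=0, a2=1):
    """Generate the first n_terms of the Fibonacci sequence
    from the recursion a_{n+1} = a_n + a_{n-1}."""
    terms = [a1, a2]
    while len(terms) < n_terms:
        # Each new term is the sum of the previous two.
        terms.append(terms[-1] + terms[-2])
    return terms[:n_terms]

# The recursion quickly produces terms that would be awkward
# to write down from a closed-form expression by hand.
fib8 = fib_sequence(8)  # [0, 1, 1, 2, 3, 5, 8, 13]
```

This illustrates the remark that recursive definitions are convenient for computing many terms.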

[Figure A.2: Plot of a_n = 1/2^n for n = 1, . . . , 10.]

[Figure A.3: Plot of a_n = (−1)^n/2^n for n = 1, . . . , 10.]

[Figure A.4: Plot of a_n = (−1)^n/2^n for n = 1, . . . , 10, with the band [L − ε, L + ε] for ε = 0.1.]

[Figure A.5: Plot of a_n = (−1)^n/2^n for n = 1, . . . , 10, with the band [L − ε, L + ε] for ε = 0.05.]

We see that for convergence the terms with n > N must lie in the interval [L − ε, L + ε]. In these figures this interval is depicted by a horizontal band, and sooner or later the tail of the sequence ends up entirely within this band. Picking ε = 0.1, one sees that the tail of the sequence lies between L − ε and L + ε for n > 3. Picking ε = 0.05, one sees that the tail of the sequence lies in this band only for n > 4.

If a sequence {a_n}_{n=1}^∞ converges to a limit L, then we write either a_n → L as n → ∞ or lim_{n→∞} a_n = L. For example, we have already seen that lim_{n→∞} (−1)^n/2^n = 0.

A.3 Limit Theorems

Once we have defined the notion of convergence of a sequence to some limit, we can investigate certain properties of limits of sequences.

Here we list a few results on limits of sequences and some special limits.

Theorem: Consider two convergent sequences {a_n} and {b_n} and a number k. Assume that lim_{n→∞} a_n = A and lim_{n→∞} b_n = B. Then we have:

1. lim_{n→∞} (a_n ± b_n) = A ± B.
2. lim_{n→∞} (k b_n) = kB.
3. lim_{n→∞} (a_n b_n) = AB.
4. lim_{n→∞} a_n/b_n = A/B, B ≠ 0.

The proofs generally are straightforward and are generally first encountered in a second course in calculus. Some special limits, which arise often, are given next.

Theorem: The following are special cases:

1. lim_{n→∞} (ln n)/n = 0.
2. lim_{n→∞} n^{1/n} = 1.
3. lim_{n→∞} x^{1/n} = 1, x > 0.
4. lim_{n→∞} x^n = 0, |x| < 1.
5. lim_{n→∞} (1 + x/n)^n = e^x.
6. lim_{n→∞} x^n/n! = 0.

The proofs depend upon L'Hopital's Rule and other manipulations. For example, one can prove the first limit by first realizing that lim_{n→∞} (ln n)/n = lim_{x→∞} (ln x)/x. This limit is indeterminate as x → ∞ in its current form, since the numerator and the denominator get large for large x. In such cases one employs L'Hopital's Rule:

lim_{x→∞} (ln x)/x = lim_{x→∞} (1/x)/1 = 0.

The second limit in the list can be proven by first looking at

lim_{n→∞} ln n^{1/n} = lim_{n→∞} (ln n)/n = 0.

Then, since lim_{n→∞} ln f(n) = 0 implies lim_{n→∞} f(n) = e^0 = 1, we have lim_{n→∞} n^{1/n} = 1, thus proving the second limit. The third limit can be done similarly. The reader is left to confirm the other limits. We finish with a few examples.

Example 1: lim_{n→∞} (n^2 + 2n + 3)/(n^3 + n). Divide the numerator and denominator by n^2. Then

lim_{n→∞} (n^2 + 2n + 3)/(n^3 + n) = lim_{n→∞} (1 + 2/n + 3/n^2)/(n + 1/n) = 0.

Example 2: lim_{n→∞} ln(n^2)/n. Rewriting ln(n^2)/n = 2 ln(n)/n, we find lim_{n→∞} ln(n^2)/n = 2 lim_{n→∞} ln(n)/n = 0.

Example 3: lim_{n→∞} (n^2)^{1/n}. To compute this limit, we rewrite (n^2)^{1/n} = n^{1/n} n^{1/n}. Thus, lim_{n→∞} (n^2)^{1/n} = lim_{n→∞} n^{1/n} n^{1/n} = 1.

Example 4: lim_{n→∞} ((n − 2)/n)^n. This limit can be written as

lim_{n→∞} ((n − 2)/n)^n = lim_{n→∞} (1 + (−2)/n)^n = e^{−2}.

A.4 Infinite Series

In this section we investigate the meaning of infinite series, which are infinite sums of the form

a_1 + a_2 + a_3 + · · · .   (A.1)

A typical example is the infinite series

1 + 1/2 + 1/4 + 1/8 + · · · .   (A.2)

How would one evaluate this sum? We begin by just adding the terms:

1 + 1/2 = 3/2,

1 + 1/2 + 1/4 = 7/4,
1 + 1/2 + 1/4 + 1/8 = 15/8,
1 + 1/2 + 1/4 + 1/8 + 1/16 = 31/16,

etc. The values tend to a limit. We can see this graphically in Figure A.6. [Figure A.6: Plot of s_n = ∑_{k=1}^n 1/2^{k−1} for n = 1, . . . , 10.]

In general, if Equation (A.1) is to make any sense, we look at a sequence of partial sums. Thus, we consider the sums

s_1 = a_1,
s_2 = a_1 + a_2,
s_3 = a_1 + a_2 + a_3,
s_4 = a_1 + a_2 + a_3 + a_4,

etc. In general, we define the nth partial sum as

s_n = a_1 + a_2 + · · · + a_n.   (A.3)

If the infinite series is to make sense, then the sequence of partial sums should converge to some limit. We define this limit to be the sum of the infinite series.

Definition: If the sequence of partial sums converges to the limit L as n gets large,

lim_{n→∞} s_n = L,   (A.4)

then the infinite series is said to have the sum L.
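The partial sums above can be computed mechanically. A small Python sketch (the function name is ours) that mirrors the data plotted in Figure A.6:

```python
def partial_sums(terms):
    """Return the sequence of partial sums s_n of a list of terms."""
    sums, total = [], 0.0
    for a in terms:
        total += a
        sums.append(total)
    return sums

# Terms of the series 1 + 1/2 + 1/4 + ...  (a_k = 1/2^{k-1})
terms = [1 / 2 ** (k - 1) for k in range(1, 11)]
s = partial_sums(terms)
# s_1, s_2, s_3, s_4 = 1, 3/2, 7/4, 15/8, approaching the limit 2.
```

The computed values reproduce the hand sums in the text and visibly approach 2.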

We will use the compact summation notation

∑_{n=1}^∞ a_n = a_1 + a_2 + · · · + a_n + · · · .

Here n will be referred to as the index, and it may start at values other than n = 1.

A.5 Geometric Series

Series (A.2) is an example of what is known as a geometric series. A geometric series is of the form

∑_{n=0}^∞ a r^n = a + ar + ar^2 + · · · + ar^n + · · · .   (A.5)

Here a is the first term and r is called the ratio. It is called the ratio because the ratio of two consecutive terms in the sum is r.

The sum of a geometric series, when it converges, can easily be determined. We consider the nth partial sum:

s_n = a + ar + · · · + ar^{n−2} + ar^{n−1}.   (A.6)

Now, multiply this equation by r:

r s_n = ar + ar^2 + · · · + ar^{n−1} + ar^n.   (A.7)

Subtracting these two equations, while noting the many cancellations, we have

(1 − r) s_n = a − a r^n.   (A.8)

Thus, the nth partial sums can be written in the compact form

s_n = a (1 − r^n)/(1 − r).   (A.9)

Recalling that the sum, if it exists, is given by S = lim_{n→∞} s_n, we need only evaluate lim_{n→∞} r^n. From our special limits we know that this limit is zero for |r| < 1. Thus, for |r| < 1, the sum of the geometric series is given by

∑_{n=0}^∞ a r^n = a/(1 − r).   (A.10)
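The closed form (A.9) can be checked against direct summation. A quick sketch, assuming nothing beyond the formula itself:

```python
def geometric_partial_sum(a, r, n):
    """s_n = a(1 - r^n)/(1 - r): the sum a + ar + ... + ar^{n-1}."""
    return a * (1 - r ** n) / (1 - r)

# Compare with adding the terms one by one.
a, r, n = 1.0, 0.5, 20
direct = sum(a * r ** k for k in range(n))
closed = geometric_partial_sum(a, r, n)
# For |r| < 1 both approach the limit a/(1 - r) = 2 as n grows.
```

Within floating-point round-off the two agree, and both sit very close to the limiting value a/(1 − r).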

The reader should verify that the geometric series diverges for all other values of r. Namely, consider what happens for the separate cases |r| > 1, r = 1, and r = −1.

Next, we present a few typical examples of geometric series.

Example 1: ∑_{n=0}^∞ 1/2^n. In this case we have a = 1 and r = 1/2. Therefore, this infinite series converges and the sum is

S = 1/(1 − 1/2) = 2.

This agrees with the plot of the partial sums in Figure A.6.

Example 2: ∑_{k=2}^∞ 4/3^k. In this example we note that the first term occurs for k = 2. So, a = 4/9 and r = 1/3. Thus,

S = (4/9)/(1 − 1/3) = 2/3.

Example 3: ∑_{n=1}^∞ (3/2^n − 2/5^n). Finally, in this case we do not have a geometric series, but we do have the difference of two geometric series. Of course, we need to be careful whenever rearranging infinite series. In this case it is allowed. Thus, we have

∑_{n=1}^∞ (3/2^n − 2/5^n) = ∑_{n=1}^∞ 3/2^n − ∑_{n=1}^∞ 2/5^n.

Now we can sum both geometric series:

∑_{n=1}^∞ (3/2^n − 2/5^n) = (3/2)/(1 − 1/2) − (2/5)/(1 − 1/5) = 3 − 1/2 = 5/2.

A.6 Convergence Tests

Given a general infinite series, it would be nice to know if it converges, or not. Often, we are only interested in the convergence and not the actual sum.

It is often difficult to determine the sum even when the series does converge. In this section we will review some of the standard tests for convergence. For the next theorems, we will assume that the series has nonnegative terms.

First, we have the nth term divergence test. This can be motivated by two examples:

1. ∑ 2^n = 1 + 2 + 4 + 8 + · · · .
2. ∑_{n=1}^∞ (n + 1)/n = 2 + 3/2 + 4/3 + · · · .

In the first example it is easy to see that each term is getting larger and larger, and thus the partial sums will grow without bound. In the second case, each term is bigger than one. Thus, the series will be bigger than adding the same number of ones as there are terms in the sum. Obviously, this series will also diverge. This leads to the nth term divergence test:

Theorem: If lim a_n ≠ 0, or if this limit does not exist, then ∑_n a_n diverges.

This theorem does not imply that just because the terms are getting smaller, the series will converge. Otherwise, we would not need any other convergence theorems.

1. Comparison Test: The series ∑ a_n converges if there is a convergent series ∑ c_n such that a_n ≤ c_n for all n > N for some N. The series ∑ a_n diverges if there is a divergent series ∑ d_n such that d_n ≤ a_n for all n > N for some N.

For this test one has to dream up a second series for comparison. Typically, this requires some experience with convergent series. Often it is better to use other tests first if possible.

2. Limit Comparison Test:

If lim_{n→∞} a_n/b_n is finite, then ∑ a_n and ∑ b_n converge together or both diverge.

For example, consider the infinite series ∑_{n=1}^∞ (2n + 1)/(n + 1)^2 and ∑_{n=1}^∞ 1/n. Then, with a_n = (2n + 1)/(n + 1)^2 and b_n = 1/n,

lim_{n→∞} a_n/b_n = lim_{n→∞} n(2n + 1)/(n + 1)^2 = 2.

Thus, ∑ a_n and ∑ b_n converge together or diverge together. If we knew the behavior of the second series, then we could draw a conclusion.

We are therefore interested in the convergence or divergence of the infinite series ∑_{n=1}^∞ 1/n, which we saw in the Limit Comparison Test example. This infinite series is famous and is called the harmonic series. The plot of its partial sums is given in Figure A.7. [Figure A.7: Plot of the partial sums for the harmonic series for n = 1, . . . , 20.] It appears that the series could possibly converge or diverge; it is hard to tell graphically. Using the next test, we will prove that ∑_{n=1}^∞ 1/n diverges, and therefore that ∑_{n=1}^∞ (2n + 1)/(n + 1)^2 diverges

3. Integral Test: Consider the infinite series ∑_{n=1}^∞ a_n and let f(n) = a_n. Then, ∑_{n=1}^∞ a_n and ∫_1^∞ f(x) dx both converge or both diverge. Here we mean that the integral converges or diverges as an improper integral.

In the case of the harmonic series we can use the Integral Test. In Figure A.8 we plot f(x) = 1/x, and at each integer n we plot a box from n to n + 1 of height 1/n. We can see from the figure that the total area of the boxes is greater than the area under the curve. Since the area of each box is 1/n, we have that

∑_{n=1}^∞ 1/n > ∫_1^∞ dx/x.
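The boxes argument can be seen numerically: the n boxes of height 1/k cover the area under 1/x from 1 to n + 1, so the partial sums of the harmonic series stay above ln(n + 1) and keep growing. A sketch (the helper name is ours):

```python
import math

def harmonic(n):
    """nth partial sum of the harmonic series, H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range((1), n + 1))

# The integral comparison in the text gives H_n > ln(n + 1).
for n in (10, 100, 1000):
    assert harmonic(n) > math.log(n + 1)
```

This matches Figure A.7: the partial sums grow slowly but without bound, tracking the logarithm.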

by the limit comparison test.

[Figure A.8: Plot of f(x) = 1/x and boxes of height 1/n and width 1.]

We can compute the integral:

∫_1^∞ dx/x = lim_{x→∞} ln x = ∞.

Thus, the integral diverges and the infinite series is larger than this! So, the harmonic series diverges.

The Integral Test provides us with the convergence behavior for a class of infinite series called p-series. These series are of the form ∑_{n=1}^∞ 1/n^p. Recalling that the improper integrals ∫_1^∞ dx/x^p converge for p > 1 and diverge otherwise, we have the p-test:

∑_{n=1}^∞ 1/n^p converges for p > 1 and diverges otherwise.

Example: ∑_{n=1}^∞ (n + 1)/(n^3 − 2). We first note that as n gets large, the general term behaves like 1/n^2, since the numerator behaves like n and the denominator behaves like n^3. So, we expect that this series behaves like the series ∑_{n=1}^∞ 1/n^2. Thus,

lim_{n→∞} [(n + 1)/(n^3 − 2)] n^2 = 1.

However, we know that ∑_{n=1}^∞ 1/n^2 converges by the p-test, since p = 2. So, these series both converge by the limit comparison test.

4. Ratio Test: Consider the series ∑_{n=1}^∞ a_n for a_n > 0. Let

ρ = lim_{n→∞} a_{n+1}/a_n.

Then the behavior of the infinite series can be determined from: ρ < 1, converges; ρ > 1, diverges.

Example 1: ∑_{n=1}^∞ n^{10}/10^n. We compute

ρ = lim_{n→∞} a_{n+1}/a_n = lim_{n→∞} [(n + 1)^{10}/10^{n+1}] [10^n/n^{10}] = lim_{n→∞} (1 + 1/n)^{10} (1/10) = 1/10 < 1.   (A.11)

Therefore, the series converges by the ratio test.

Example 2: ∑_{n=1}^∞ 3^n/n!. In this case we make use of the fact that (n + 1)! = (n + 1) n!. We compute

ρ = lim_{n→∞} a_{n+1}/a_n = lim_{n→∞} [3^{n+1}/(n + 1)!] [n!/3^n] = lim_{n→∞} 3/(n + 1) = 0 < 1.   (A.12)

This series also converges by the ratio test.
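The ratio ρ can also be estimated numerically before evaluating the limit by hand. A sketch for Example 1 (the function names are ours):

```python
def ratio(a, n):
    """Estimate a_{n+1}/a_n for a term function a at index n."""
    return a(n + 1) / a(n)

def a(n):
    """Terms of Example 1: a_n = n^10 / 10^n."""
    return n ** 10 / 10.0 ** n

# The successive ratios decrease toward rho = 1/10, signalling convergence.
rho_estimates = [ratio(a, n) for n in (10, 50, 100)]
```

The estimates approach 1/10 from above, in agreement with the computation in (A.11).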

5. Root Test: Consider the series ∑_{n=1}^∞ a_n for a_n > 0. Let

ρ = lim_{n→∞} a_n^{1/n}.

Then the behavior of the infinite series can be determined from: ρ < 1, converges; ρ > 1, diverges.

Example 1: ∑_{n=0}^∞ e^{−n}. We use the nth root test:

lim_{n→∞} a_n^{1/n} = lim_{n→∞} e^{−1} = e^{−1} < 1.

Thus, this series converges by the nth root test.

Example 2: ∑_{n=1}^∞ n/2^{n^2}. In this case,

lim_{n→∞} a_n^{1/n} = lim_{n→∞} n^{1/n}/2^n = 0 < 1.

This series also converges by the nth root test.

We next turn to series which have both positive and negative terms. We can toss out the signs by taking absolute values of each of the terms. Thus, we say that a series converges absolutely if ∑_{n=1}^∞ |a_n| converges. This type of convergence is useful, because we can use the previous tests to establish convergence of such series. We then note that since a_n ≤ |a_n|, we have

−∑_{n=1}^∞ |a_n| ≤ ∑_{n=1}^∞ a_n ≤ ∑_{n=1}^∞ |a_n|.

If the sum ∑_{n=1}^∞ |a_n| converges, then the original series converges.

Example: ∑_{n=1}^∞ cos(πn)/n^2. This series converges absolutely because ∑_{n=1}^∞ |a_n| = ∑_{n=1}^∞ 1/n^2 is a p-series with p = 2.

Finally, if a series converges, but does not converge absolutely, then it is said to converge conditionally.

There is one last test that we recall from your introductory calculus class. We consider the alternating series, given by ∑_{n=1}^∞ (−1)^{n+1} a_n. The convergence of an alternating series is determined from Leibniz's Theorem.

Theorem: The series ∑_{n=1}^∞ (−1)^{n+1} a_n converges if

1. the a_n's are positive,
2. a_n ≥ a_{n+1} for all n,
3. a_n → 0.

The first condition guarantees that we have alternating signs in the series. The next conditions say that the magnitude of the terms gets smaller and approaches zero.

Example 1: The alternating harmonic series ∑_{n=1}^∞ (−1)^{n+1}/n converges by Leibniz's Theorem. However, the series of absolute values for Example 1 is the harmonic series, so it is not absolutely convergent. Therefore, we have an example of a series that is conditionally convergent.

Example 2: ∑_{n=0}^∞ (−1)^n/2^n also passes the conditions of Leibniz's Theorem. Note that in Example 2 we can show that the series is absolutely convergent.

A.7 The Order of Sequences and Functions

Often we are interested in comparing the magnitude of sequences or of functions. This is useful in approximation theory.

Definition: Let a_n and b_n be two sequences. Then if there are numbers N and K, independent of n, such that

a_n/b_n < K whenever n > N,

then we say that a_n is of the order of b_n. We write this as a_n = O(b_n) as n → ∞.

For example, consider the sequences given by a_n = (2n + 1)/(3n^2 + 2) and b_n = 1/n. Then

a_n/b_n = n(2n + 1)/(3n^2 + 2) = (2 + 1/n)/(3 + 2/n^2) = (2/3) (1 + 1/2n)/(1 + 2/3n^2) < 1.

Thus, a_n = O(b_n) = O(1/n). In practice one is given a sequence like a_n, but the second sequence needs to be found by looking at the large n behavior of a_n.

In a similar way, we can compare functions.

Definition: f(x) is of the order of g(x), or f(x) = O(g(x)) as x → x_0, if

lim_{x→x_0} f(x)/g(x) < K

for some finite K.

For example, we recall that the Taylor series expansion for cos x gives us that cos x = 1 − x^2/2 + O(x^4) as x → 0.

Similarly, we can make use of the binomial expansion, which we will cover soon, to determine the behavior of functions such as f(x) = (a + x)^b − a^b. Recall that the first terms of the binomial expansion can be written

(1 + x)^b = 1 + bx + O(x^2) as x → 0.

Inserting this expression, we have

f(x) = (a + x)^b − a^b = a^b (1 + x/a)^b − a^b = a^b (1 + bx/a + O(x^2)) − a^b = O(x) as x → 0.

This result can also be obtained using Taylor series expansions. We will review the binomial expansion in the next section.
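The bound in the definition of order can be checked directly for the example above. A sketch, with K = 1 taken from the estimate in the text (the helper name is ours):

```python
def order_check(a, b, K, N, n_max=10000):
    """Check a_n / b_n < K for all N < n <= n_max (a finite-range probe)."""
    return all(a(n) / b(n) < K for n in range(N + 1, n_max + 1))

def a(n):
    return (2 * n + 1) / (3 * n ** 2 + 2)

def b(n):
    return 1.0 / n

# a_n / b_n = (2n^2 + n)/(3n^2 + 2) stays below 1 (its limit is 2/3),
# so a_n = O(1/n).
bounded = order_check(a, b, K=1.0, N=0)
```

Of course a finite check is not a proof; the algebraic bound above is what establishes a_n = O(1/n).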

A.8 The Binomial Expansion

One series expansion which occurs often in examples and applications is the binomial expansion. This is simply the expansion of the expression (a + b)^p. We will investigate this expansion first for nonnegative integer powers p and then derive the expansion for other values of p.

Let's list some of the common expansions for nonnegative integer powers:

(a + b)^0 = 1
(a + b)^1 = a + b
(a + b)^2 = a^2 + 2ab + b^2
(a + b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3
(a + b)^4 = a^4 + 4a^3 b + 6a^2 b^2 + 4ab^3 + b^4
· · ·   (A.13)

We now look at the patterns of the terms in the expansions. First, we note that each term consists of a product of a power of a and a power of b. The powers of a are decreasing from n to 0 in the expansion of (a + b)^n, while the powers of b increase from 0 to n. The sums of the exponents in each term is n. So, we can write the (k + 1)st term in the expansion as a^{n−k} b^k. For example, in the expansion of (a + b)^{51} the 6th term is a^{51−5} b^5 = a^{46} b^5. However, we do not yet know the numerical coefficients in the expansion.

We now list the coefficients for the above expansions:

n = 0:        1
n = 1:       1 1
n = 2:      1 2 1
n = 3:     1 3 3 1
n = 4:    1 4 6 4 1   (A.14)

This pattern is the famous Pascal's triangle. There are many interesting features of this triangle. We see that each row begins and ends with a one. Next, the second term and next-to-last term have a coefficient of n. Next we note that consecutive

pairs in each row can be added to obtain entries in the next row. For example, we have

n = 2:     1 2 1
n = 3:    1 3 3 1   (A.15)

With this in mind, we can generate the next several rows of our triangle:

n = 3:      1 3 3 1
n = 4:     1 4 6 4 1
n = 5:    1 5 10 10 5 1
n = 6:   1 6 15 20 15 6 1   (A.16)

Of course, it would take a while to compute each row up to the desired n. We need a simple expression for computing a specific coefficient. Consider the kth term in the expansion of (a + b)^n. Let r = k − 1. Then this term is of the form C(n, r) a^{n−r} b^r. We have to count the number of ways that we can arrange the products of r b's with n − r a's. There are n slots to place the b's. For example, the r = 2 case for n = 4 involves the six products: aabb, abab, abba, baab, baba, and bbaa. Thus, it is natural to use the combinatoric symbol for determining how to choose n things r at a time:

C(n, r) = n!/((n − r)! r!),

often written as the binomial coefficient "n choose r". Actually, the original problem that concerned Pascal was in gambling, where just such counting arises. We have seen that the coefficients satisfy

C(n, r) = C(n − 1, r) + C(n − 1, r − 1).

So, we have found that

(a + b)^n = ∑_{r=0}^n C(n, r) a^{n−r} b^r.   (A.17)

What if a ≫ b? Can we use this to get an approximation to (a + b)^n? If we neglect b, then (a + b)^n ≈ a^n. How good of an approximation is this?

This is where it would be nice to know the order of the next term in the expansion, which we could state using big O notation. In order to do this we first divide out a as

(a + b)^n = a^n (1 + b/a)^n.

Now we have a small parameter, b/a. According to what we have seen above, we can use the binomial expansion to write

(1 + b/a)^n = ∑_{r=0}^n C(n, r) (b/a)^r.   (A.18)

Thus, we have a finite sum of terms involving powers of b/a. Since a ≫ b, most of these terms can be neglected. So, we can write

(1 + b/a)^n = 1 + n (b/a) + O((b/a)^2).

Note that we have used the observation that the second coefficient in the nth row of Pascal's triangle is n. Summarizing, this then gives

(a + b)^n = a^n (1 + b/a)^n = a^n (1 + n(b/a) + O((b/a)^2)) = a^n + n a^n (b/a) + a^n O((b/a)^2).   (A.19)

Therefore, we can approximate (a + b)^n ≈ a^n + n b a^{n−1}, with an error on the order of b^2 a^{n−2}. Note that the order of the error does not include the constant factor from the expansion. We could also use the approximation (a + b)^n ≈ a^n, but it is not as good, because the error in this case is of the order b a^{n−1}.

So far we have only considered nonnegative integer powers. What if the power is not a nonnegative integer? We have seen that

1/(1 − x) = 1 + x + x^2 + · · · .

But, 1/(1 − x) = (1 − x)^{−1}. This is again a binomial to a power, but the power is not a nonnegative integer.
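The error estimates above can be tested numerically. A sketch with the illustrative values a = 10, b = 0.1, n = 5 (our choice, not from the text):

```python
a, b, n = 10.0, 0.1, 5

exact = (a + b) ** n
zeroth = a ** n                         # error O(b a^{n-1})
first = a ** n + n * b * a ** (n - 1)   # error O(b^2 a^{n-2})

err0 = abs(exact - zeroth)
err1 = abs(exact - first)
# The leading neglected term in the first-order approximation is
# C(n, 2) b^2 a^{n-2} = 10 * 0.01 * 1000 = 100 here, which err1 tracks.
```

Keeping the linear term cuts the error from thousands down to about a hundred, exactly as the big O estimates predict.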

This example suggests that our sum may no longer be finite. So, for p a real number, we write

(1 + x)^p = ∑_{r=0}^∞ C(p, r) x^r.   (A.20)

However, we quickly run into problems with this form. Consider the coefficient for r = 1 in an expansion of (1 + x)^{−1}. This is given by

C(−1, 1) = (−1)!/((−1 − 1)! 1!) = (−1)!/((−2)! 1!).

But what is (−1)!? By definition, it is

(−1)! = (−1)(−2)(−3) · · · .

This product does not seem to exist! But with a little care, we note that

(−1)!/(−2)! = (−1)(−2)!/(−2)! = −1.

So, we need to be careful not to interpret the combinatorial coefficient literally. There are better ways to write the general binomial expansion. We can write the general coefficient as

C(p, r) = p!/((p − r)! r!)
        = p(p − 1) · · · (p − r + 1)(p − r)!/((p − r)! r!)
        = p(p − 1) · · · (p − r + 1)/r!.   (A.21)

With this in mind we now state the theorem:

General Binomial Expansion: The general binomial expansion for (1 + x)^p is a simple generalization of Equation (A.17). For p real, we have that

(1 + x)^p = ∑_{r=0}^∞ [p(p − 1) · · · (p − r + 1)/r!] x^r.   (A.22)

Often we need the first few terms for the case that x ≪ 1:

(1 + x)^p = 1 + px + [p(p − 1)/2] x^2 + O(x^3).   (A.23)
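The coefficient formula (A.21) and the truncation (A.23) can be checked numerically. A sketch for p = 1/2 (function names are ours):

```python
def binom_coeff(p, r):
    """General binomial coefficient p(p-1)...(p-r+1)/r! for real p."""
    num, fact = 1.0, 1.0
    for k in range(r):
        num *= (p - k)
        fact *= (k + 1)
    return num / fact

def binom_series(p, x, terms):
    """Truncated general binomial expansion of (1 + x)^p."""
    return sum(binom_coeff(p, r) * x ** r for r in range(terms))

p, x = 0.5, 0.01
approx = binom_series(p, x, 3)   # 1 + px + p(p-1)/2 x^2
exact = (1 + x) ** p
# The difference is O(x^3): roughly C(1/2, 3) x^3 ~ 6e-8 for x = 0.01.
```

Note that for integer p the product in `binom_coeff` eventually hits a zero factor, so the series terminates after p + 1 terms, recovering the finite expansion (A.17).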

A.9 Series of Functions

Our immediate goal is to provide a preparation useful for studying Fourier series. So, in this section we begin to discuss series of functions, which are series whose terms are functions. This means we really need to start with sequences of functions. A sequence of functions is simply a set of functions f_n(x), n = 1, 2, 3, . . ., defined on a common domain D. A frequently used example is the sequence of functions {1, x, x^2, . . .}, x ∈ [−1, 1].

Does such a sequence of functions converge? We say that a sequence of functions f_n converges pointwise on D to a limit g if

lim_{n→∞} f_n(x) = g(x)

for each x ∈ D. More formally, we write that

lim_{n→∞} f_n = g (pointwise on D)

if given x ∈ D and ε > 0, there exists an integer N such that

|f_n(x) − g(x)| < ε, ∀n ≥ N.

The limit depends on the value of x.

Example: Consider the sequence of functions f_n(x) = 1/(1 + nx), |x| < ∞.

(a) x = 0. Here lim_{n→∞} f_n(0) = lim_{n→∞} 1 = 1.
(b) x ≠ 0. Here lim_{n→∞} f_n(x) = lim_{n→∞} 1/(1 + nx) = 0.

An infinite series of functions is given by ∑_{n=1}^∞ f_n(x), x ∈ D. Using powers of x again, an example would be ∑_{n=1}^∞ x^n, x ∈ [−1, 1]. In order to investigate the convergence of this series, we really mean substitute values for x and determine if the resulting series of real numbers converges. Once more we will need to resort to the convergence of the sequence of partial sums. This means that we would need to consider the Nth partial sums

s_N(x) = ∑_{n=1}^N f_n(x).

Therefore, we can say that f_n → g pointwise for |x| < ∞, where g(0) = 1 and g(x) = 0 for x ≠ 0.

We also note that in general N depends on both x and ε.

Example: We consider the functions f_n(x) = x^n, x ∈ [0, 1], n = 1, 2, . . . . We recall that the above definition suggests that for each x we seek an N such that

|f_n(x) − g(x)| < ε, ∀n ≥ N.   (A.24)

Here are two examples:

i. x = 0. Here we have f_n(0) = 0 for all n. So, given ε > 0 we seek an N such that |f_n(0) − 0| < ε, ∀n ≥ N, or 0 < ε. But all n work, so we can pick N = 1.

ii. x = 1/2. In this case we have f_n(1/2) = 1/2^n. So, given ε > 0, we seek N such that |1/2^n − 0| < ε, ∀n ≥ N. This means that 1/2^n < ε, or n ln 2 > −ln ε, i.e., n > −ln ε / ln 2. So, our choice of N depends on ε.

There are other questions that can be asked about sequences of functions. If the sequence of functions converges pointwise to g on D, then we can ask the following:

(a) Is g continuous on D?
(b) If each f_n is integrable on [a, b], then does lim_{n→∞} ∫_a^b f_n(x) dx = ∫_a^b g(x) dx?
(c) If each f_n is differentiable at c, then does lim_{n→∞} f_n'(c) = g'(c)?

It turns out that pointwise convergence is not enough to provide an affirmative answer to any of these questions. Though we will not prove it here, what we will need is uniform convergence.

Definition: Consider a sequence of functions {f_n(x)}_{n=1}^∞ on D. Let g(x) be defined for x ∈ D. Then the sequence converges uniformly on D, or

lim_{n→∞} f_n = g uniformly on D,

if given ε > 0, there exists an N such that

|f_n(x) − g(x)| < ε, ∀n ≥ N and ∀x ∈ D.

[Figure A.9: For uniform convergence, as n gets large, f_n(x) lies in the band (g(x) − ε, g(x) + ε).]

This definition almost looks like the definition for pointwise convergence. However, the seemingly subtle difference lies in the fact that N does not depend upon x. The sought N works for all x in the domain. As seen in Figure A.9, for n ≥ N the members of the sequence must lie inside the band (g(x) − ε, g(x) + ε) for all values of x.

Finally, we should note that if a sequence of functions is uniformly convergent, then it converges pointwise. However, the following examples should bear out that the converse is not true.

Example: f_n(x) = x^n, for x ∈ [0, 1]. For this example we plot the first several members of the sequence in Figure A.10. Note that in this case, as n gets large, f_n(x) does not lie in the band (g(x) − ε, g(x) + ε) for all x at once; near x = 1 the functions always escape such a band. Thus, this sequence does not converge uniformly to its pointwise limit.

Example: f_n(x) = cos(nx)/n^2 on [−π, π]. This is displayed in Figure A.11. We can see that eventually (n ≥ N) members of this sequence do lie inside a band of width ε about the limit g(x) = 0 for all values of x. Thus, this sequence of functions will converge uniformly to the limit.
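The difference between the two examples can be quantified by the largest deviation sup_x |f_n(x) − g(x)|, here approximated over a grid of sample points (so only an estimate of the supremum; all names are ours):

```python
import math

def sup_deviation(f, g, xs):
    """Largest |f(x) - g(x)| over the sample points xs."""
    return max(abs(f(x) - g(x)) for x in xs)

xs01 = [k / 1000.0 for k in range(1001)]                       # grid on [0, 1]
xspi = [-math.pi + k * math.pi / 500.0 for k in range(1001)]   # grid on [-pi, pi]

# f_n(x) = x^n: the deviation from the pointwise limit stays near 1
# just below x = 1, so the convergence is not uniform.
sup_pow = sup_deviation(lambda x: x ** 20,
                        lambda x: 0.0 if x < 1 else 1.0, xs01)

# f_n(x) = cos(nx)/n^2: the deviation is at most 1/n^2, which -> 0,
# so the convergence is uniform.
sup_cos = sup_deviation(lambda x: math.cos(20 * x) / 400.0,
                        lambda x: 0.0, xspi)
```

Even at n = 20, the power sequence still deviates by nearly 1 somewhere in [0, 1], while the cosine sequence is already within 1/400 of its limit everywhere.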

[Figure A.10: Plot of f_n(x) = x^n on [−1, 1] for n = 1, . . . , 10, and g(x) ± ε for ε = 0.2.]

[Figure A.11: Plot of f_n(x) = cos(nx)/n^2 on [−π, π] for n = 1, . . . , 10, and g(x) ± ε for ε = 0.2.]

A.10 Infinite Series of Functions

We now turn our attention to infinite series of functions, which will form the basis of our study of Fourier series. Recall that we are interested in the convergence of the sequence of partial sums of the series ∑_{n=1}^∞ f_n(x) for x ∈ D. We define the sequence of partial sums

s_n(x) = ∑_{j=1}^n f_j(x).

But the sequence of partial sums is just a sequence of functions. So, it is natural to define the convergence of series of functions in terms of the pointwise and uniform convergence of this sequence. The definitions are as follows:

Definition: ∑ f_j(x) converges pointwise to f(x) on D if given x ∈ D and ε > 0, there exists an N such that

|f(x) − s_n(x)| < ε

for all n > N.

Definition: ∑ f_j(x) converges uniformly to f(x) on D if given ε > 0, there exists an N such that

|f(x) − s_n(x)| < ε

for all n > N and all x ∈ D.

Again, we state without proof the following:

1. Uniform convergence implies pointwise convergence.

2. If f_n is continuous on D, and ∑_n f_n converges uniformly to f on D, then f is continuous on D.

3. If f_n is continuous on [a, b] ⊂ D, ∫_a^b f_n(x) dx exists, and ∑_n f_n converges uniformly on D to g, then

∫_a^b ∑_n f_n(x) dx = ∑_n ∫_a^b f_n(x) dx = ∫_a^b g(x) dx.

4. If f_n' is continuous on [a, b] ⊂ D, ∑_n f_n converges pointwise to g on D, and ∑_n f_n' converges uniformly on D, then

∑_n f_n'(x) = d/dx (∑_n f_n(x)) = g'(x) for x ∈ (a, b).

Since uniform convergence of series gives so much, like term by term integration and differentiation, we would like to be able to recognize when we have a uniformly convergent series. One test for such convergence is the Weierstrass M-Test.

Theorem: Let {f_n}_{n=1}^∞ be a sequence of functions on D. If |f_n(x)| ≤ M_n for x ∈ D, and ∑_{n=1}^∞ M_n converges, then ∑_{n=1}^∞ f_n converges uniformly on D.

Proof: First, we note that for x ∈ D,

∑_{n=1}^∞ |f_n(x)| ≤ ∑_{n=1}^∞ M_n.

Since by assumption ∑_{n=1}^∞ M_n converges, we have that ∑_{n=1}^∞ f_n converges absolutely on D. Therefore, ∑_{n=1}^∞ f_n converges pointwise on D. So, let ∑_{n=1}^∞ f_n = g.

We now want to prove that this convergence is in fact uniform. So, given ε > 0, we need to find an N such that

|g(x) − ∑_{j=1}^n f_j(x)| < ε if n ≥ N, for all x ∈ D.

So, for any x ∈ D, by the triangle inequality,

|g(x) − ∑_{j=1}^n f_j(x)| = |∑_{j=1}^∞ f_j(x) − ∑_{j=1}^n f_j(x)| = |∑_{j=n+1}^∞ f_j(x)| ≤ ∑_{j=n+1}^∞ |f_j(x)| ≤ ∑_{j=n+1}^∞ M_j.   (A.25)

Now, the sum over the M_j's is convergent, so we can choose our N such that

∑_{j=n+1}^∞ M_j < ε, n ≥ N.

Then, we have from above that

|g(x) − ∑_{j=1}^n f_j(x)| ≤ ∑_{j=n+1}^∞ M_j < ε

for all n ≥ N and x ∈ D. Thus, ∑ f_j → g uniformly on D. QED

We now give an example of how to use the M-Test.

Example: We consider the series ∑_{n=1}^∞ cos(nx)/n^2, defined on [−π, π]. Each term is bounded by

|cos(nx)/n^2| ≤ 1/n^2 ≡ M_n.

We know that ∑_{n=1}^∞ M_n = ∑_{n=1}^∞ 1/n^2 < ∞. Thus, ∑_{n=1}^∞ cos(nx)/n^2 converges uniformly on [−π, π], as it satisfies the conditions of the Weierstrass M-Test.
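The uniform bound (A.25) can be observed numerically: for every x, the difference of two partial sums of ∑ cos(nx)/n^2 is bounded by the corresponding tail of ∑ 1/n^2. A sketch (helper names are ours; the tail is itself truncated at a large cutoff):

```python
import math

def tail_bound(n, cutoff=100000):
    """Truncated tail sum_{j=n+1}^{cutoff} 1/j^2, bounding the M_j tail."""
    return sum(1.0 / j ** 2 for j in range(n + 1, cutoff + 1))

def partial(x, n):
    """Partial sum s_n(x) of sum_{j=1}^{n} cos(jx)/j^2."""
    return sum(math.cos(j * x) / j ** 2 for j in range(1, n + 1))

# For any x, |s_m(x) - s_n(x)| with m > n is bounded by the M_j tail,
# independently of x -- which is exactly what uniform convergence needs.
n, m = 10, 1000
bound = tail_bound(n)
for x in (-3.0, 0.0, 1.0, 2.5):
    assert abs(partial(x, m) - partial(x, n)) <= bound
```

The single number `bound` works for all sampled x at once, mirroring how the choice of N in the proof is independent of x.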