
Differential Equations: Linear, Nonlinear, Ordinary, Partial


https://www.scribd.com/doc/344810/Differential-Equations-Linear-Nonlinear-Ordinary-Partial


- 1.1 The Method of Reduction of Order
- 1.2 The Method of Variation of Parameters
- 1.3 Solution by Power Series: The Method of Frobenius
- Legendre Functions
- 2.1 Deﬁnition of the Legendre Polynomials, Pn(x)
- 2.2 The Generating Function for Pn(x)
- 2.4 Rodrigues’ Formula
- 2.5 Orthogonality of the Legendre Polynomials
- 2.6 Physical Applications of the Legendre Polynomials
- 2.7 The Associated Legendre Equation
- Bessel Functions
- 3.1 The Gamma Function and the Pochhammer Symbol
- 3.2 Series Solutions of Bessel’s Equation
- 3.3 The Generating Function for Jn(x), n an integer
- 3.4 Diﬀerential and Recurrence Relations Between Bessel Functions
- 3.5 Modiﬁed Bessel Functions
- 3.6 Orthogonality of the Bessel Functions
- 3.7 Inhomogeneous Terms in Bessel’s Equation
- 3.8 Solutions Expressible as Bessel Functions
- 3.9 Physical Applications of the Bessel Functions
- Sturm–Liouville Theory
- 4.1 Inhomogeneous Linear Boundary Value Problems
- 4.3 Sturm–Liouville Systems
- Fourier Series and the Fourier Transform
- 5.1 General Fourier Series
- 5.2 The Fourier Transform
- 5.3 Green’s Functions Revisited
- 5.4 Solution of Laplace’s Equation Using Fourier Transforms
- 5.5 Generalization to Higher Dimensions
- Laplace Transforms
- 6.1 Deﬁnition and Examples
- 6.2 Properties of the Laplace Transform
- 6.4 The Inversion Formula for Laplace Transforms
- 7.2 Complex Variable Methods for Solving Laplace’s Equation
- Nonlinear Equations and Advanced Techniques
- 8.1 Local Existence of Solutions
- 8.2 Uniqueness of Solutions
- 8.3 Dependence of the Solution on the Initial Conditions
- 8.4 Comparison Theorems
- 9.1 Introduction: The Simple Pendulum
- Group Theoretical Methods
- 10.1 Lie Groups
- 10.2 Invariants Under Group Action
- 10.3 The Extended Group
- 10.6 Invariants for Second Order Diﬀerential Equations
- 10.7 Partial Diﬀerential Equations
- Asymptotic Methods: Basic Ideas
- 11.1 Asymptotic Expansions
- 11.2 The Asymptotic Evaluation of Integrals
- Asymptotic Methods: Diﬀerential Equations
- 12.1 An Instructive Analogy: Algebraic Equations
- 12.2 Ordinary Diﬀerential Equations
- 12.3 Partial Diﬀerential Equations
- Stability, Instability and Bifurcations
- 13.1 Zero Eigenvalues and the Centre Manifold Theorem
- 13.2 Lyapunov’s Theorems
- 13.3 Bifurcation Theory
- Time-Optimal Control in the Phase Plane
- 14.1 Deﬁnitions
- 14.2 First Order Equations
- 14.3 Second Order Equations
- 14.4 Examples of Second Order Control Problems
- 14.5 Properties of the Controllable Set
- 14.6 The Controllability Matrix
- 14.7 The Time-Optimal Maximum Principle (TOMP)
- 15.2 Mappings
- 15.3 The Poincaré Return Map
- 15.4 Homoclinic Tangles
- Bibliography
- Index

Continuing with our example of a rotating string, let's consider what happens when there is a steady, imposed external force $TF(x)$ acting towards the x-axis. The linearized boundary value problem is then

$$\frac{d^2y}{dx^2} + \lambda y = F(x), \quad \text{subject to } y(0) = y(l) = 0. \qquad (4.6)$$

We can solve this using the variation of parameters formula, (1.6), which gives

$$y(x) = A\cos\lambda^{1/2}x + B\sin\lambda^{1/2}x + \frac{1}{\lambda^{1/2}}\int_0^x F(s)\sin\lambda^{1/2}(x-s)\,ds.$$

To satisfy the boundary condition $y(0) = 0$ we need $A = 0$, whilst $y(l) = 0$ gives

$$B\sin\lambda^{1/2}l + \frac{1}{\lambda^{1/2}}\int_0^l F(s)\sin\lambda^{1/2}(l-s)\,ds = 0. \qquad (4.7)$$

When $\lambda$ is not an eigenvalue, $\sin\lambda^{1/2}l \neq 0$, and (4.7) has a unique solution

$$B = -\frac{1}{\lambda^{1/2}\sin\lambda^{1/2}l}\int_0^l F(s)\sin\lambda^{1/2}(l-s)\,ds.$$

However, when $\lambda$ is an eigenvalue, we have $\sin\lambda^{1/2}l = 0$, so that there is a solution for arbitrary values of $B$, namely

$$y(x) = B\sin\lambda^{1/2}x + \frac{1}{\lambda^{1/2}}\int_0^x F(s)\sin\lambda^{1/2}(x-s)\,ds, \qquad (4.8)$$

provided that

$$\int_0^l F(s)\sin\lambda^{1/2}(l-s)\,ds = 0. \qquad (4.9)$$

If F(s) does not satisfy this integral constraint there is no solution.

These cases, where there may be either no solution or many solutions depending on the form of F(x), are in complete contrast to the solutions of an initial value problem, for which, as we shall see in Chapter 8, we can prove theorems that guarantee the existence and uniqueness of solutions, subject to a few mild conditions on the form of the ordinary differential equation. This situation also arises for any system that can be written in the form Ax = b, where A is a linear operator. A famous result (see, for example, Courant and Hilbert, 1937), known as the Fredholm alternative, shows that either there is a unique solution of Ax = b, or Ax = 0 has nontrivial solutions.
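The Fredholm alternative is easiest to see in the finite-dimensional setting. The sketch below (an illustration, not from the text) takes a singular symmetric matrix A, so that Ax = 0 has nontrivial solutions, and compares a right hand side orthogonal to the null space of A^T (solvable) with one that is not (unsolvable):

```python
import numpy as np

# A singular, symmetric matrix: Ax = 0 has the nontrivial solution n = (2, -1)
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
n = np.array([2.0, -1.0])            # spans the null space of A (and of A^T)

b_solvable   = np.array([1.0, 2.0])  # orthogonal to n, so Ax = b has solutions
b_unsolvable = np.array([1.0, 0.0])  # not orthogonal to n, so no solution exists

for b in (b_solvable, b_unsolvable):
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.linalg.norm(A @ x - b)    # size of the best achievable residual
    print(f"n.b = {n @ b:+.1f}, least-squares residual = {r:.2e}")
```

The residual vanishes exactly when b is orthogonal to the null space of A^T, mirroring the integral constraint (4.9) for the boundary value problem.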


In terms of the physics of the rotating string problem, if $\omega$ is not an eigenfrequency, for arbitrary forcing of the string there is a unique steady solution. If $\omega$ is an eigenfrequency and the forcing satisfies (4.9), there is a solution, (4.8), that is a linear combination of the eigensolution and the response to the forcing. The size of $B$ depends upon how the steady state was reached, just as for the unforced problem. If $\omega$ is an eigenfrequency and the forcing does not satisfy (4.9), there is no steady solution. This reflects the fact that the forcing has a component that drives a response at the eigenfrequency. We say that there is a resonance. In practice, if the string were forced in this way, the amplitude of the motion would grow linearly, whilst varying spatially like $\sin\lambda^{1/2}x$, until the nonlinear terms in (4.1) were no longer negligible.
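The integral constraint (4.9) is easy to check numerically. The sketch below (an illustration, not from the text; the function name `solvability_integral` is ours) takes $l = 1$, so the eigenvalues are $\lambda = n^2\pi^2$, sets $\lambda = \pi^2$, and compares a forcing that satisfies (4.9) with one that does not:

```python
import numpy as np

def solvability_integral(F, lam, l=1.0, n=20001):
    """Evaluate the constraint (4.9): the integral of F(s) sin(sqrt(lam)(l - s))
    over [0, l], by the trapezoidal rule."""
    s = np.linspace(0.0, l, n)
    v = F(s) * np.sin(np.sqrt(lam) * (l - s))
    h = l / (n - 1)
    return h * (v[0] / 2 + v[1:-1].sum() + v[-1] / 2)

lam = np.pi**2                       # an eigenvalue for l = 1
I_ok  = solvability_integral(lambda s: np.sin(2 * np.pi * s), lam)  # ~0: a steady solution exists
I_bad = solvability_integral(lambda s: np.sin(np.pi * s), lam)      # ~0.5: resonance, no steady solution
print(I_ok, I_bad)
```

For $F(s) = \sin\pi s$ the forcing is exactly the eigensolution, so the constraint integral is $\int_0^1 \sin^2\pi s\,ds = 1/2 \neq 0$ and resonance occurs.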

4.1.1 Solubility

As we have now seen, an inhomogeneous, linear boundary value problem may have no solutions. Let's examine this further for the general boundary value problem

$$(p(x)y'(x))' + q(x)y(x) = f(x) \quad \text{subject to } y(a) = y(b) = 0. \qquad (4.10)$$

If $u(x)$ is a solution of the homogeneous problem, so that $(p(x)u'(x))' + q(x)u(x) = 0$ and $u(a) = u(b) = 0$, then multiplying (4.10) by $u(x)$ and integrating over the interval $[a,b]$ gives

$$\int_a^b u(x)\left\{(p(x)y'(x))' + q(x)y(x)\right\}dx = \int_a^b u(x)f(x)\,dx. \qquad (4.11)$$

Now, using integration by parts,

$$\int_a^b u(x)(p(x)y'(x))'\,dx = \left[u(x)p(x)y'(x)\right]_a^b - \int_a^b p(x)y'(x)u'(x)\,dx = -\int_a^b p(x)y'(x)u'(x)\,dx,$$

using $u(a) = u(b) = 0$. Integrating by parts again gives

$$-\int_a^b p(x)y'(x)u'(x)\,dx = -\left[p(x)u'(x)y(x)\right]_a^b + \int_a^b (p(x)u'(x))'y(x)\,dx = \int_a^b (p(x)u'(x))'y(x)\,dx,$$

using $y(a) = y(b) = 0$. Substituting this into (4.11) gives

$$\int_a^b y(x)\left\{(p(x)u'(x))' + q(x)u(x)\right\}dx = \int_a^b u(x)f(x)\,dx,$$

and, since $u(x)$ is a solution of the homogeneous problem,

$$\int_a^b u(x)f(x)\,dx = 0. \qquad (4.12)$$


A necessary condition for there to be a solution of the inhomogeneous boundary value problem (4.10) is therefore (4.12), which we call a solvability or solubility condition. We say that the forcing term, f(x), must be orthogonal to the solution of the homogeneous problem, a terminology that we will explore in more detail in Section 4.2.3. Note that variations in the form of the boundary conditions will give some variation in the form of the solvability condition.

4.1.2 The Green’s Function

Let's now solve (4.10) using the variation of parameters formula, (1.6). This gives

$$y(x) = Au_1(x) + Bu_2(x) + \int_{s=a}^{x} \frac{f(s)}{W(s)}\left\{u_1(s)u_2(x) - u_1(x)u_2(s)\right\}ds, \qquad (4.13)$$

where $u_1(x)$ and $u_2(x)$ are solutions of the homogeneous problem and $W(x) = u_1(x)u_2'(x) - u_1'(x)u_2(x)$ is the Wronskian of the homogeneous equation. The boundary conditions show that

$$Au_1(a) + Bu_2(a) = 0,$$
$$Au_1(b) + Bu_2(b) = \int_a^b \frac{f(s)}{W(s)}\left\{u_1(b)u_2(s) - u_1(s)u_2(b)\right\}ds.$$

Provided that $u_1(a)u_2(b) \neq u_1(b)u_2(a)$, we can solve these simultaneous equations and substitute back into (4.13) to obtain

$$y(x) = \int_a^b \frac{f(s)}{W(s)}\,\frac{\{u_1(b)u_2(s) - u_1(s)u_2(b)\}\{u_1(a)u_2(x) - u_1(x)u_2(a)\}}{u_1(a)u_2(b) - u_1(b)u_2(a)}\,ds$$
$$+ \int_a^x \frac{f(s)}{W(s)}\left\{u_1(s)u_2(x) - u_1(x)u_2(s)\right\}ds. \qquad (4.14)$$

This form of solution, although correct, is not the most convenient one to use. To improve it, we note that the functions $v_1(x) = u_1(a)u_2(x) - u_1(x)u_2(a)$ and $v_2(x) = u_1(b)u_2(x) - u_1(x)u_2(b)$, which appear in (4.14) as a product, are linear combinations of solutions of the homogeneous problem, and are therefore themselves solutions of the homogeneous problem. They also satisfy $v_1(a) = v_2(b) = 0$. Because of the way that they appear in (4.14), it makes sense to look for a solution of the inhomogeneous boundary value problem in the form

$$y(x) = \int_a^b f(s)G(x,s)\,ds, \qquad (4.15)$$

where

$$G(x,s) = \begin{cases} v_1(s)v_2(x) = G_<(x,s) & \text{for } a \leq s < x, \\ v_1(x)v_2(s) = G_>(x,s) & \text{for } x < s \leq b. \end{cases}$$

The function $G(x,s)$ is known as the Green's function for the boundary value problem. From the definition, it is clear that $G$ is continuous at $s = x$, and that $y(a) = y(b) = 0$, since, for example, $G_>(a,s) = 0$ and $y(a) = \int_a^b G_>(a,s)f(s)\,ds = 0$. If we calculate the partial derivative with respect to $x$, we find that

$$G_x(x, s = x^-) - G_x(x, s = x^+) = v_1(x)v_2'(x) - v_1'(x)v_2(x) = W = \frac{C}{p(x)},$$

where, using Abel's formula, $C$ is a constant.

For this to be useful, we must now show that $y(x)$, as defined by (4.15), actually satisfies the inhomogeneous differential equation, and also determine the value of $C$. To do this, we split the range of integration and write

$$y(x) = \int_a^x G_<(x,s)f(s)\,ds + \int_x^b G_>(x,s)f(s)\,ds.$$

If we now differentiate under the integral sign, we obtain

$$y'(x) = \int_a^x G_{<,x}(x,s)f(s)\,ds + G_<(x,x)f(x) + \int_x^b G_{>,x}(x,s)f(s)\,ds - G_>(x,x)f(x),$$

where $G_{<,x} = \partial G_</\partial x$. The terms evaluated at $s = x$ cancel, by the continuity of the Green's function at $s = x$, to give

$$y'(x) = \int_a^x G_{<,x}(x,s)f(s)\,ds + \int_x^b G_{>,x}(x,s)f(s)\,ds = \int_a^x v_1(s)v_{2x}(x)f(s)\,ds + \int_x^b v_{1x}(x)v_2(s)f(s)\,ds,$$

and hence

$$(py')' = \int_a^x v_1(s)(pv_{2x})_x f(s)\,ds + p(x)v_1(x)v_{2x}(x)f(x) + \int_x^b (pv_{1x})_x v_2(s)f(s)\,ds - p(x)v_{1x}(x)v_2(x)f(x)$$
$$= \int_a^x v_1(s)(pv_{2x})_x f(s)\,ds + \int_x^b (pv_{1x})_x v_2(s)f(s)\,ds + Cf(x),$$

using the definition of $C$. If we substitute this into the differential equation (4.10), $(py')' + qy = f$, we obtain

$$\int_a^x v_1(s)(pv_{2x})_x f(s)\,ds + \int_x^b (pv_{1x})_x v_2(s)f(s)\,ds + Cf(x)$$
$$+ q(x)\left\{\int_a^x v_1(s)v_2(x)f(s)\,ds + \int_x^b v_1(x)v_2(s)f(s)\,ds\right\} = f(x).$$

Since $v_1$ and $v_2$ are solutions of the homogeneous problem, the integral terms vanish and, if we choose $C = 1$, our representation provides us with a solution of the differential equation.


As an example, consider the boundary value problem

$$y''(x) - y(x) = f(x) \quad \text{subject to } y(0) = y(1) = 0.$$

The solutions of the homogeneous problem are $e^{-x}$ and $e^{x}$. Appropriate combinations of these that satisfy $v_1(0) = v_2(1) = 0$ are $v_1(x) = A\sinh x$ and $v_2(x) = B\sinh(1-x)$, which gives the Green's function

$$G(x,s) = \begin{cases} AB\sinh(1-x)\sinh s & \text{for } 0 \leq s < x, \\ AB\sinh(1-s)\sinh x & \text{for } x < s \leq 1, \end{cases}$$

which is continuous at $s = x$. In addition,

$$G_x(x, s = x^-) - G_x(x, s = x^+) = -AB\cosh(1-x)\sinh x - AB\sinh(1-x)\cosh x = -AB\sinh 1.$$

Since $p(x) = 1$, we require $AB = -1/\sinh 1$, and the final Green's function is

$$G(x,s) = \begin{cases} -\dfrac{\sinh(1-x)\sinh s}{\sinh 1} & \text{for } 0 \leq s < x, \\[2mm] -\dfrac{\sinh(1-s)\sinh x}{\sinh 1} & \text{for } x < s \leq 1. \end{cases}$$

The solution of the inhomogeneous boundary value problem can therefore be written as

$$y(x) = -\int_0^x \frac{f(s)\sinh(1-x)\sinh s}{\sinh 1}\,ds - \int_x^1 \frac{f(s)\sinh(1-s)\sinh x}{\sinh 1}\,ds.$$

We will return to the subject of Green’s functions in the next chapter.
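As a sanity check on this Green's function, the sketch below (a numerical illustration, not from the text; `greens_solution` is our name) evaluates the quadrature for $f(x) = 1$ and compares with the exact solution of $y'' - y = 1$, $y(0) = y(1) = 0$, which is $y(x) = \cosh(x - 1/2)/\cosh(1/2) - 1$:

```python
import numpy as np

def greens_solution(x, f, n=2001):
    """y(x) = integral of G(x, s) f(s) ds, split at s = x where G_x jumps."""
    def trap(g, a, b):
        if b <= a:
            return 0.0
        s = np.linspace(a, b, n)
        v = g(s)
        return (b - a) / (n - 1) * (v[0] / 2 + v[1:-1].sum() + v[-1] / 2)
    left  = trap(lambda s: -np.sinh(1 - x) * np.sinh(s) * f(s) / np.sinh(1), 0.0, x)
    right = trap(lambda s: -np.sinh(1 - s) * np.sinh(x) * f(s) / np.sinh(1), x, 1.0)
    return left + right

exact = lambda x: np.cosh(x - 0.5) / np.cosh(0.5) - 1.0
for x in (0.25, 0.5, 0.75):
    y = greens_solution(x, lambda s: np.ones_like(s))
    print(f"x = {x}: quadrature {y:.8f}, exact {exact(x):.8f}")
```

Splitting the integral at $s = x$ matters: the kernel has a jump in its derivative there, and a single quadrature across the kink would converge much more slowly.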

4.2 The Solution of Boundary Value Problems by Eigenfunction Expansions

In the previous section, we developed the idea of a Green's function, with which we can solve inhomogeneous boundary value problems for linear, second order ordinary differential equations. We will now develop an alternative approach that draws heavily upon the ideas of linear algebra (see Appendix 1 for a reminder). Before we start, it is useful to be able to work with the simplest possible type of linear differential operator.

4.2.1 Self-Adjoint Operators

We define a linear differential operator, $L : C^2[a,b] \to C[a,b]$, as being in self-adjoint form if

$$L = \frac{d}{dx}\left(p(x)\frac{d}{dx}\right) + q(x), \qquad (4.16)$$

where $p(x) \in C^1[a,b]$ and is strictly nonzero for all $x \in (a,b)$, and $q(x) \in C[a,b]$. The reasons for referring to such an operator as self-adjoint will become clear later in this chapter.

This definition encompasses a wide class of second order differential operators. For example, if

$$L_1 \equiv a_2(x)\frac{d^2}{dx^2} + a_1(x)\frac{d}{dx} + a_0(x) \qquad (4.17)$$

is nonsingular on $[a,b]$, we can write it in self-adjoint form by defining (see Exercise 4.5)

$$p(x) = \exp\left(\int^x \frac{a_1(t)}{a_2(t)}\,dt\right), \quad q(x) = \frac{a_0(x)}{a_2(x)}\exp\left(\int^x \frac{a_1(t)}{a_2(t)}\,dt\right). \qquad (4.18)$$

Note that $p(x) \neq 0$ for $x \in [a,b]$. By studying inhomogeneous boundary value problems of the form $Ly = f$, or

$$\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + q(x)y = f(x), \qquad (4.19)$$

we are therefore considering all second order, nonsingular, linear differential operators. For example, consider Hermite's equation,

$$\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + \lambda y = 0, \qquad (4.20)$$

for $-\infty < x < \infty$. This is not in self-adjoint form, but, if we follow the above procedure, the self-adjoint form of the equation is

$$\frac{d}{dx}\left(e^{-x^2}\frac{dy}{dx}\right) + \lambda e^{-x^2}y = 0.$$

This can be simplified, and kept in self-adjoint form, by writing $u = e^{-x^2/2}y$, to obtain

$$\frac{d^2u}{dx^2} - (x^2 - 1)u = -\lambda u. \qquad (4.21)$$
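The reduction of (4.20) to self-adjoint form can be verified symbolically. A short check (an illustration, not from the text) with sympy confirms that expanding the self-adjoint form and dividing out the integrating factor $e^{-x^2}$ recovers Hermite's equation:

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
y = sp.Function('y')

# Self-adjoint form of Hermite's equation: (e^{-x^2} y')' + lambda e^{-x^2} y
self_adjoint = sp.diff(sp.exp(-x**2) * sp.diff(y(x), x), x) + lam * sp.exp(-x**2) * y(x)

# Multiplying by e^{x^2} should recover y'' - 2x y' + lambda y
recovered = sp.expand(self_adjoint * sp.exp(x**2))
hermite = sp.diff(y(x), x, 2) - 2 * x * sp.diff(y(x), x) + lam * y(x)
assert sp.simplify(recovered - hermite) == 0
print("self-adjoint form of Hermite's equation verified")
```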

4.2.2 Boundary Conditions

To complete the definition of a boundary value problem associated with (4.19), we need to know the boundary conditions. In general these will be of the form

$$\alpha_1 y(a) + \alpha_2 y(b) + \alpha_3 y'(a) + \alpha_4 y'(b) = 0,$$
$$\beta_1 y(a) + \beta_2 y(b) + \beta_3 y'(a) + \beta_4 y'(b) = 0. \qquad (4.22)$$

Since each of these is dependent on the values of $y$ and $y'$ at each end of $[a,b]$, we refer to these as mixed or coupled boundary conditions. It is unnecessarily complicated to work with the boundary conditions in this form, and we can start to simplify matters by deriving Lagrange's identity.


Lemma 4.1 (Lagrange's identity) If $L$ is the linear differential operator given by (4.16) on $[a,b]$, and if $y_1, y_2 \in C^2[a,b]$, then

$$y_1(Ly_2) - y_2(Ly_1) = \left[p\left(y_1y_2' - y_1'y_2\right)\right]'. \qquad (4.23)$$

Proof From the definition of $L$,

$$y_1(Ly_2) - y_2(Ly_1) = y_1\left\{(py_2')' + qy_2\right\} - y_2\left\{(py_1')' + qy_1\right\} = y_1(py_2')' - y_2(py_1')'$$
$$= y_1(py_2'' + p'y_2') - y_2(py_1'' + p'y_1') = p(y_1y_2'' - y_1''y_2) + p'(y_1y_2' - y_1'y_2) = \left[p(y_1y_2' - y_1'y_2)\right]'.$$

Now recall that the space $C[a,b]$ is a real inner product space with a standard inner product defined by

$$\langle f, g\rangle = \int_a^b f(x)g(x)\,dx.$$

If we now integrate (4.23) over $[a,b]$ then

$$\langle y_1, Ly_2\rangle - \langle Ly_1, y_2\rangle = \left[p(y_1y_2' - y_1'y_2)\right]_a^b. \qquad (4.24)$$

This result can be used to motivate the following definitions. The adjoint operator to $T$, written $\bar{T}$, satisfies $\langle y_1, Ty_2\rangle = \langle \bar{T}y_1, y_2\rangle$ for all $y_1$ and $y_2$. For example, let's see if we can construct the adjoint to the operator

$$D \equiv \frac{d^2}{dx^2} + \gamma\frac{d}{dx} + \delta,$$

with $\gamma, \delta \in \mathbb{R}$, on the interval $[0,1]$, when the functions on which $D$ operates are zero at $x = 0$ and $x = 1$. After integrating by parts and applying these boundary conditions, we find that

$$\langle \phi_1, D\phi_2\rangle = \int_0^1 \phi_1\left(\phi_2'' + \gamma\phi_2' + \delta\phi_2\right)dx = \left[\phi_1\phi_2'\right]_0^1 - \int_0^1 \phi_1'\phi_2'\,dx + \left[\gamma\phi_1\phi_2\right]_0^1 - \int_0^1 \gamma\phi_1'\phi_2\,dx + \int_0^1 \delta\phi_1\phi_2\,dx$$
$$= -\left[\phi_1'\phi_2\right]_0^1 + \int_0^1 \phi_1''\phi_2\,dx - \int_0^1 \gamma\phi_1'\phi_2\,dx + \int_0^1 \delta\phi_1\phi_2\,dx = \langle \bar{D}\phi_1, \phi_2\rangle,$$

where

$$\bar{D} \equiv \frac{d^2}{dx^2} - \gamma\frac{d}{dx} + \delta.$$
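We can confirm this adjoint for particular functions. The sketch below (an illustration, not from the text) picks two polynomials that vanish at $x = 0$ and $x = 1$ and checks $\langle \phi_1, D\phi_2\rangle = \langle \bar{D}\phi_1, \phi_2\rangle$ symbolically:

```python
import sympy as sp

x, g, d = sp.symbols('x gamma delta')
phi1 = x * (1 - x)            # vanishes at x = 0 and x = 1
phi2 = x**2 * (1 - x)         # likewise

D_phi2    = sp.diff(phi2, x, 2) + g * sp.diff(phi2, x) + d * phi2   # D phi2
Dbar_phi1 = sp.diff(phi1, x, 2) - g * sp.diff(phi1, x) + d * phi1   # Dbar phi1

lhs = sp.integrate(phi1 * D_phi2, (x, 0, 1))     # <phi1, D phi2>
rhs = sp.integrate(Dbar_phi1 * phi2, (x, 0, 1))  # <Dbar phi1, phi2>
assert sp.simplify(lhs - rhs) == 0
print("adjoint relation verified for this pair of functions")
```

The equality holds for any pair of smooth functions vanishing at both endpoints, since the boundary terms in the integration by parts then drop out.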


A linear operator is said to be Hermitian, or self-adjoint, if $\langle y_1, Ty_2\rangle = \langle Ty_1, y_2\rangle$ for all $y_1$ and $y_2$. It is clear from (4.24) that $L$ is a Hermitian, or self-adjoint, operator if and only if

$$\left[p(y_1y_2' - y_1'y_2)\right]_a^b = 0,$$

and hence

$$p(b)\{y_1(b)y_2'(b) - y_1'(b)y_2(b)\} - p(a)\{y_1(a)y_2'(a) - y_1'(a)y_2(a)\} = 0. \qquad (4.25)$$

In other words, whether or not $L$ is Hermitian depends only upon the boundary values of the functions in the space upon which it operates.

There are three different ways in which (4.25) can occur.

(i) $p(a) = p(b) = 0$. Note that this doesn't violate our definition of $p$ as strictly nonzero on the open interval $(a,b)$. This is the case of singular boundary conditions.

(ii) $p(a) = p(b) \neq 0$, $y_i(a) = y_i(b)$ and $y_i'(a) = y_i'(b)$. This is the case of periodic boundary conditions.

(iii) $\alpha_1 y_i(a) + \alpha_2 y_i'(a) = 0$ and $\beta_1 y_i(b) + \beta_2 y_i'(b) = 0$, with at least one of the $\alpha_i$ and one of the $\beta_i$ nonzero. These conditions then have nontrivial solutions if and only if

$$y_1(a)y_2'(a) - y_1'(a)y_2(a) = 0, \quad y_1(b)y_2'(b) - y_1'(b)y_2(b) = 0,$$

and hence (4.25) is satisfied.

Conditions (iii), each of which involves $y$ and $y'$ at a single endpoint, are called unmixed or separated. We have therefore shown that our linear differential operator is Hermitian with respect to a pair of unmixed boundary conditions. The significance of this result becomes apparent when we examine the eigenvalues and eigenfunctions of Hermitian linear operators.

As an example of how such boundary conditions arise when we model physical systems, consider a string that is rotating (as in the example at the start of this chapter) or vibrating with its ends fixed. This leads to boundary conditions $y(0) = y(a) = 0$ – separated boundary conditions. In the study of the motion of electrons in a crystal lattice, the periodic conditions $p(0) = p(l)$, $y(0) = y(l)$ are frequently used to represent the repeating structure of the lattice.

4.2.3 Eigenvalues and Eigenfunctions of Hermitian Linear Operators

The eigenvalues and eigenfunctions of a Hermitian, linear operator $L$ are the nontrivial solutions of $Ly = \lambda y$ subject to appropriate boundary conditions.

Theorem 4.1 Eigenfunctions belonging to distinct eigenvalues of a Hermitian linear operator are orthogonal.

Proof Let $y_1$ and $y_2$ be eigenfunctions that correspond to the distinct eigenvalues $\lambda_1$ and $\lambda_2$. Then

$$\langle Ly_1, y_2\rangle = \langle \lambda_1 y_1, y_2\rangle = \lambda_1\langle y_1, y_2\rangle,$$

and

$$\langle y_1, Ly_2\rangle = \langle y_1, \lambda_2 y_2\rangle = \lambda_2\langle y_1, y_2\rangle,$$

so that the Hermitian property $\langle Ly_1, y_2\rangle = \langle y_1, Ly_2\rangle$ gives

$$(\lambda_1 - \lambda_2)\langle y_1, y_2\rangle = 0.$$

Since $\lambda_1 \neq \lambda_2$, $\langle y_1, y_2\rangle = 0$, and $y_1$ and $y_2$ are orthogonal.
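For a concrete instance of Theorem 4.1, take $L = d^2/dx^2$ on $[0,\pi]$ with Dirichlet conditions, whose eigenfunctions are $\sin nx$. A quick symbolic check (an illustration, not from the text):

```python
import sympy as sp

x = sp.symbols('x')
inner = lambda f, g: sp.integrate(f * g, (x, 0, sp.pi))  # the standard inner product on [0, pi]

assert inner(sp.sin(2 * x), sp.sin(5 * x)) == 0            # distinct eigenvalues: orthogonal
assert inner(sp.sin(3 * x), sp.sin(3 * x)) == sp.pi / 2    # normalization integral
print("orthogonality of sin(2x) and sin(5x) verified")
```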

As we shall see in the next section, all of the eigenvalues of a Hermitian linear operator are real, a result that we will prove once we have defined the notion of a complex inner product.

If the space of functions $C^2[a,b]$ were of finite dimension, we would now argue that the orthogonal eigenfunctions generated by a Hermitian operator are linearly independent and can be used as a basis (or, in the case of repeated eigenvalues, extended into a basis). Unfortunately, $C^2[a,b]$ is not finite dimensional, and we cannot use this argument. We will have to content ourselves with presenting a credible method for solving inhomogeneous boundary value problems based upon the ideas we have developed, and simply state a theorem that guarantees that the method will work in certain circumstances.

4.2.4 Eigenfunction Expansions

In order to solve the inhomogeneous boundary value problem given by (4.19) with $f \in C[a,b]$ and unmixed boundary conditions, we begin by finding the eigenvalues and eigenfunctions of $L$. We denote the eigenvalues by $\lambda_1, \lambda_2, \ldots, \lambda_n, \ldots$, and the eigenfunctions by $\phi_1(x), \phi_2(x), \ldots, \phi_n(x), \ldots$. Next, we expand $f(x)$ in terms of these eigenfunctions, as

$$f(x) = \sum_{n=1}^{\infty} c_n\phi_n(x). \qquad (4.26)$$

By making use of the orthogonality of the eigenfunctions, after taking the inner product of (4.26) with $\phi_n$, we find that the expansion coefficients are

$$c_n = \frac{\langle f, \phi_n\rangle}{\langle \phi_n, \phi_n\rangle}. \qquad (4.27)$$

Next, we expand the solution of the boundary value problem in terms of the eigenfunctions, as

$$y(x) = \sum_{n=1}^{\infty} d_n\phi_n(x), \qquad (4.28)$$


and substitute (4.27) and (4.28) into (4.19) to obtain

$$L\left(\sum_{n=1}^{\infty} d_n\phi_n(x)\right) = \sum_{n=1}^{\infty} c_n\phi_n(x).$$

From the linearity of $L$ and the definition of $\phi_n$ this becomes

$$\sum_{n=1}^{\infty} d_n\lambda_n\phi_n(x) = \sum_{n=1}^{\infty} c_n\phi_n(x).$$

We have therefore constructed a solution of the boundary value problem with $d_n = c_n/\lambda_n$, if the series (4.28) converges and defines a function in $C^2[a,b]$. This process will work correctly and give a unique solution provided that none of the eigenvalues $\lambda_n$ is zero. When $\lambda_m = 0$, there is no solution if $c_m \neq 0$ and an infinite number of solutions if $c_m = 0$, as we saw in Section 4.1.

Example

Consider the boundary value problem

$$-y'' = f(x) \quad \text{subject to } y(0) = y(\pi) = 0. \qquad (4.29)$$

In this case, the eigenfunctions are solutions of

$$y'' + \lambda y = 0 \quad \text{subject to } y(0) = y(\pi) = 0,$$

which we already know to be $\lambda_n = n^2$, $\phi_n(x) = \sin nx$. We therefore write

$$f(x) = \sum_{n=1}^{\infty} c_n\sin nx,$$

and the solution of the inhomogeneous problem (4.29) is

$$y(x) = \sum_{n=1}^{\infty} \frac{c_n}{n^2}\sin nx.$$

In the case $f(x) = x$,

$$c_n = \frac{\int_0^{\pi} x\sin nx\,dx}{\int_0^{\pi} \sin^2 nx\,dx} = \frac{2(-1)^{n+1}}{n},$$

so that

$$y(x) = 2\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^3}\sin nx.$$

We will discuss the convergence of this type of series, known as a Fourier series, in detail in Chapter 5.

This example is, of course, rather artificial, and we could have integrated (4.29) directly. There are, however, many boundary value problems for which this eigenfunction expansion method is the only way to proceed analytically, such as the example given in Section 3.9.1 on Bessel functions.
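Integrating (4.29) directly with $f(x) = x$ gives the exact solution $y(x) = x(\pi^2 - x^2)/6$, which provides a check on the series. A short numerical comparison (an illustration, not from the text):

```python
import numpy as np

x = np.linspace(0.0, np.pi, 401)
N = 200                               # number of terms in the partial sum
n = np.arange(1, N + 1)

# Partial sum of y(x) = 2 * sum over n of (-1)^(n+1) sin(nx) / n^3
series = 2.0 * (((-1.0)**(n + 1)) / n**3) @ np.sin(np.outer(n, x))
exact = x * (np.pi**2 - x**2) / 6.0
print("max error with", N, "terms:", np.abs(series - exact).max())
```

The $1/n^3$ decay of the coefficients makes the convergence rapid and uniform, as Theorem 4.2(ii) below will guarantee for functions of this smoothness.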


Example

Consider the inhomogeneous equation

$$(1 - x^2)y'' - 2xy' + 2y = f(x) \quad \text{on } -1 < x < 1, \qquad (4.30)$$

with $f \in C[-1,1]$, subject to the condition that $y$ should be bounded on $[-1,1]$. We begin by noting that there is a solubility condition associated with this problem. If $u(x)$ is a solution of the homogeneous problem, then, after multiplying through by $u$ and integrating over $[-1,1]$, we find that

$$\left[u(1-x^2)y'\right]_{-1}^{1} - \left[u'(1-x^2)y\right]_{-1}^{1} = \int_{-1}^{1} u(x)f(x)\,dx.$$

If $u$ and $y$ are bounded on $[-1,1]$, the left hand side of this equation vanishes, so that $\int_{-1}^{1}u(x)f(x)\,dx = 0$. Since the Legendre polynomial, $u = P_1(x) = x$, is the bounded solution of the homogeneous problem, we have

$$\int_{-1}^{1} P_1(x)f(x)\,dx = 0.$$

Now, to solve the boundary value problem, we first construct the eigenfunction solutions by solving $Ly = \lambda y$, which is

$$(1 - x^2)y'' - 2xy' + (2 - \lambda)y = 0.$$

The choice $2 - \lambda = n(n+1)$, with $n$ a non-negative integer, gives us Legendre's equation of integer order, which has bounded solutions $y_n(x) = P_n(x)$. These Legendre polynomials are orthogonal over $[-1,1]$ (as we shall show in Theorem 4.4), and form a basis for $C[-1,1]$. If we now write

$$f(x) = \sum_{m=0}^{\infty} A_mP_m(x),$$

where $A_1 = 0$ by the solubility condition, and then expand $y(x) = \sum_{m=0}^{\infty} B_mP_m(x)$, we find that

$$\{2 - m(m+1)\}B_m = A_m \quad \text{for } m \geq 0.$$

The required solution is therefore

$$y(x) = \frac{1}{2}A_0 + B_1P_1(x) + \sum_{m=2}^{\infty} \frac{A_m}{2 - m(m+1)}P_m(x),$$

with $B_1$ an arbitrary constant.
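The coefficient relation $\{2 - m(m+1)\}B_m = A_m$ can be checked with numpy's Legendre series tools. Below is a small sketch (an illustration, not from the text) with $f = P_0 + 3P_2$, so $A_0 = 1$, $A_2 = 3$, giving $B_0 = 1/2$ and $B_2 = -3/4$; the residual of (4.30) should then vanish:

```python
import numpy as np
from numpy.polynomial import Legendre

f = Legendre([1.0, 0.0, 3.0])        # f = P0 + 3 P2 (A1 = 0, so the problem is solvable)
y = Legendre([0.5, 0.0, -0.75])      # B0 = A0/2, B2 = A2/(2 - 6); B1 chosen as 0

x = np.linspace(-1.0, 1.0, 101)
residual = (1 - x**2) * y.deriv(2)(x) - 2 * x * y.deriv(1)(x) + 2 * y(x) - f(x)
print("max residual:", np.abs(residual).max())
```

Adding any multiple of $P_1(x) = x$ to $y$ leaves the residual unchanged, reflecting the arbitrariness of $B_1$.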

Having seen that this method works, we can now state a theorem that gives the method a rigorous foundation.

Theorem 4.2 If $L$ is a nonsingular, linear differential operator defined on a closed interval $[a,b]$ and subject to unmixed boundary conditions at both endpoints, then

(i) $L$ has an infinite sequence of real eigenvalues $\lambda_0, \lambda_1, \ldots$, which can be ordered so that

$$|\lambda_0| < |\lambda_1| < \cdots < |\lambda_n| < \cdots$$

and

$$\lim_{n\to\infty} |\lambda_n| = \infty.$$

(ii) The eigenfunctions that correspond to these eigenvalues form a basis for $C[a,b]$, and the series expansion relative to this basis of a piecewise continuous function $y$ with piecewise continuous derivative on $[a,b]$ converges uniformly to $y$ on any subinterval of $[a,b]$ in which $y$ is continuous.

We will not prove this result here. Instead, we return to the equation, $Ly = \lambda y$, which defines the eigenfunctions and eigenvalues. For a self-adjoint, second order, linear differential operator, this is

$$\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + q(x)y = \lambda y, \qquad (4.31)$$

which, in its simplest form, is subject to the unmixed boundary conditions

$$\alpha_1 y(a) + \alpha_2 y'(a) = 0, \quad \beta_1 y(b) + \beta_2 y'(b) = 0, \qquad (4.32)$$

with $\alpha_1^2 + \alpha_2^2 > 0$ and $\beta_1^2 + \beta_2^2 > 0$ to avoid a trivial condition. This is an example of a Sturm–Liouville system, and we will devote the rest of this chapter to a study of the properties of the solutions of such systems.
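As a preview of the eigenvalue structure described in Theorem 4.2, the simplest Sturm–Liouville system of the form (4.31), with $p = 1$ and $q = 0$ on $[0,\pi]$ and $y(0) = y(\pi) = 0$, has eigenvalues $\lambda_n = -n^2$, so that $|\lambda_n| \to \infty$. A finite-difference sketch (an illustration, not from the text) recovers the first few:

```python
import numpy as np

N = 400                                  # interior grid points on [0, pi]
h = np.pi / (N + 1)

# Second-difference approximation of d^2/dx^2 with Dirichlet conditions built in
A = (np.diag(-2.0 * np.ones(N)) +
     np.diag(np.ones(N - 1), 1) +
     np.diag(np.ones(N - 1), -1)) / h**2

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]   # largest (closest to zero) first
print("first eigenvalues:", eigs[:4])          # approximately -1, -4, -9, -16
```

The matrix is symmetric because the operator is self-adjoint with these boundary conditions, so its eigenvalues are real, in line with the theory.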
