
MATH 401: GREEN’S FUNCTIONS AND VARIATIONAL METHODS

The main goal of this course is to learn how to “solve” various PDE (= partial differential equation) problems arising in science. Since only very few special such problems have solutions we can write explicitly, “solve” here often means either (a) find an explicit approximation to the true solution, or (b) learn some important qualitative properties of the solution.

A number of techniques are available for this. The method of separation of variables (together with Fourier series), perhaps familiar from earlier courses, unfortunately only applies to very special problems. The related (but more general) method of eigenfunction expansion is very useful – and will be discussed in places in this course. Integral transform techniques (such as the Fourier transform and Laplace transform) will also be touched on here. If time permits, we will also study some perturbation methods, which are extremely useful if there is a small parameter somewhere in the problem.

The bulk of this course, however, will be devoted to the method/notion of Green’s function (a function which, roughly speaking, expresses the effect of the “data” at one point on the solution at another point), and to variational methods (by which PDE problems are solved by minimizing (or maximizing) some quantities). Variational methods are extremely powerful – and even apply directly to some nonlinear problems (as well as linear ones). Green’s functions are for linear problems, but in fact play a key role in nonlinear problems when they are treated as – in some sense – perturbations of linear ones (which is frequently the only feasible treatment!).

We begin with Green’s functions.

A. GREEN’S FUNCTIONS

I. INTRODUCTION

As an introductory example, consider the following initial-boundary value problem for the inhomogeneous heat equation (HE) in one (space) dimension

∂u/∂t = ∂²u/∂x² + f(x, t),   0 < x < L,   t > 0        (HE)

u(0, t) = u(L, t) = 0                                   (BC)

u(x, 0) = u_0(x)                                        (IC)

which, physically, describes the temperature u(x, t) at time t and at point x along a “rod” 0 ≤ x ≤ L of length L subject to (time- and space-varying) heat source f(x, t), and with the ends held fixed at temperature 0 (boundary condition (BC)) and with initial (time t = 0) temperature distribution u_0(x) (initial condition (IC)).


This problem is easily solved by the (hopefully) familiar methods of separation of variables and eigenfunction expansion. Without going into details, the eigenfunctions of the spatial part of the (homogeneous) PDE (namely d²/dx²) satisfying (BC) are sin(nπx/L), n = 1, 2, 3, ... (with corresponding eigenvalues −n²π²/L²), and so we seek a solution of the form

u(x, t) = Σ_{n=1}^∞ a_n(t) sin(nπx/L).

We similarly expand the data of the problem – the source term f (x, t) and the initial condition u 0 (x) – in terms of these eigenfunctions; that is, as Fourier series:

f(x, t) = Σ_{n=1}^∞ f_n(t) sin(nπx/L),   f_n(t) = (2/L) ∫_0^L f(x, t) sin(nπx/L) dx

u_0(x) = Σ_{n=1}^∞ g_n sin(nπx/L),   g_n = (2/L) ∫_0^L u_0(x) sin(nπx/L) dx.

Plugging the expression for u(x, t) into the PDE (HE) and comparing coefficients then yields the family of ODE problems

a_n'(t) + (n²π²/L²) a_n(t) = f_n(t),   a_n(0) = g_n

which are easily solved (after all, they are first-order linear ODEs) by using an integrating factor, to get

a_n(t) = e^{−n²π²t/L²} ( g_n + ∫_0^t e^{n²π²s/L²} f_n(s) ds ),

and hence the solution we sought:

u(x, t) = (2/L) Σ_{n=1}^∞ [ e^{−n²π²t/L²} ∫_0^L u_0(y) sin(nπy/L) dy + ∫_0^t e^{−n²π²(t−s)/L²} ∫_0^L f(y, s) sin(nπy/L) dy ds ] sin(nπx/L).

No problem. But it is instructive to re-write this expression by exchanging the order of integration and summation (note we are not worrying here about the convergence of the sum or the exchange of sum and integral – suffice it to say that for reasonable (say continuous) functions u_0 and f all our manipulations are justified and the sum converges beautifully due to the decaying exponential) to obtain

u(x, t) = ∫_0^L G(x, t; y, 0) u_0(y) dy + ∫_0^t ∫_0^L G(x, t; y, s) f(y, s) dy ds        (1)

where

G(x, t; y, s) = (2/L) Σ_{n=1}^∞ e^{−n²π²(t−s)/L²} sin(nπy/L) sin(nπx/L).

Expression (1) gives the solution as an integral (OK, 2 integrals) of the “data” (the initial condition u_0(x) and the source term f(x, t)) against the function G which is called, of course, the Green’s function for our problem (precisely, for the heat equation on [0, L] with 0 boundary conditions).

Our computation above suggests a few observations about Green’s functions:

- if we can find the Green’s function for a problem, we have effectively solved the problem for any data – we just need to plug the data into an integral like (1)

- a Green’s function is a function of 2 sets of variables – one set are the variables of the solution (x and t above); the other set (y and s above) gets integrated

- one can think of a Green’s function as giving the effect of the data at one point ((y, s) above) on the solution at another point ((x, t))

- the domain of a Green’s function is determined by the original problem: in the above example, the spatial variables x and y run over the interval [0, L] (the “rod”), and the time variables satisfy 0 ≤ s ≤ t – here the condition s ≤ t reflects the fact that the solution at time t is only determined by the data at previous times (not future times)
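The series form of G makes formula (1) easy to sanity-check numerically. The following sketch (Python, standard library only; the truncation level, the grid size, and the choice f = 0 with u_0(y) = sin(πy/L) are assumptions made for the illustration) compares formula (1) against the exact solution e^{−π²t/L²} sin(πx/L):

```python
import math

# Truncated Green's function for the heat equation on [0, L] with zero BCs.
# The truncation level N is an arbitrary choice for this illustration.
def G(x, t, y, s, L=1.0, N=50):
    return (2.0 / L) * sum(
        math.exp(-(n * math.pi / L) ** 2 * (t - s))
        * math.sin(n * math.pi * y / L) * math.sin(n * math.pi * x / L)
        for n in range(1, N + 1))

L, t, x = 1.0, 0.1, 0.3
u0 = lambda y: math.sin(math.pi * y / L)   # initial temperature

# Formula (1) with f = 0: u(x,t) = int_0^L G(x,t;y,0) u0(y) dy (midpoint sum)
M = 2000
h = L / M
u = sum(G(x, t, (j + 0.5) * h, 0.0, L) * u0((j + 0.5) * h) * h for j in range(M))

# For u0(y) = sin(pi y/L) the exact solution is e^{-pi^2 t/L^2} sin(pi x/L)
exact = math.exp(-(math.pi / L) ** 2 * t) * math.sin(math.pi * x / L)
print(u, exact)
```

The agreement is to several decimal places; as noted above, the decaying exponential makes the truncated series converge quickly.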

The first part of this course will be devoted to a systematic study of Green’s functions, first for ODEs (where computations are generally easier), and then for PDEs, where Green’s functions really come into their own.


II. GREEN’S FUNCTIONS FOR ODEs

1. An ODE boundary value problem

Consider the ODE (= ordinary differential equation) boundary value problem

Lu := a_0 u'' + a_1 u' + a_2 u = f(x),   x_0 < x < x_1
u(x_0) = u(x_1) = 0.        (2)

Here

L := a_0(x) d²/dx² + a_1(x) d/dx + a_2(x)

is a (second-order, linear) differential operator. As motivation for problem (2), one can think, for example, of u(x) as giving the steady-state temperature along a rod [x_0, x_1] with (non-uniform) thermal conductivity ρ(x), subject to a heat source f(x) and with ends held fixed at temperature 0, which leads to the problem

−(ρ(x)u')' = f,   u(x_0) = u(x_1) = 0

of the form (2). Zero boundary conditions are the simplest, but later we will consider other boundary conditions (for example, if the ends of the rod are insulated, we should take u'(x_0) = u'(x_1) = 0).

We would like to solve (2) by finding a function G(x; z ), the Green’s function, so that

u(x) = ∫_{x_0}^{x_1} G(x; z) f(z) dz =: (G_x, f)

where we have introduced the notations

G_x(z) := G(x; z)

(when we want to emphasize the dependence of G specifically on the variable z, thinking of x as fixed), and

(g, f) := ∫_{x_0}^{x_1} g(z) f(z) dz        (“inner product”).

Then since u solves Lu = f , we want

u(x) = (G_x, f) = (G_x, Lu).

Next we want to “move the operator L over” from u to G_x on the other side of the inner product – for which we need the notion of adjoint.

Definition: The adjoint of the operator L is the operator L* such that

(v, Lu) = (L*v, u) + “boundary terms”


for all smooth functions u and v.

The following example illustrates the adjoint, and explains what is meant by “boundary terms”.

Example: Let, as above, L = a_0(x) d²/dx² + a_1(x) d/dx + a_2(x), acting on functions defined for x_0 ≤ x ≤ x_1. Then for two such (smooth) functions u(x) and v(x), integration by parts gives (check it!)

(v, Lu) = ∫_{x_0}^{x_1} v [a_0 u'' + a_1 u' + a_2 u] dx

        = ∫_{x_0}^{x_1} u [a_0 v'' + (2a_0' − a_1)v' + (a_2 + a_0'' − a_1')v] dx + [a_0 v u' − (a_0 v)' u + a_1 u v]_{x_0}^{x_1}

        = ( a_0 v'' + (2a_0' − a_1)v' + (a_2 + a_0'' − a_1')v , u ) + [a_0 (v u' − v' u) + (a_1 − a_0') u v]_{x_0}^{x_1}.

The terms after the integral (the ones evaluated at the endpoints x_0 and x_1) are what we mean by “boundary terms”. Hence the adjoint is

L* = a_0 d²/dx² + (2a_0' − a_1) d/dx + (a_2 + a_0'' − a_1').

The differential operator L* is of the same form as L, but with (in general) different coefficients.

An important class of operators are those which are equal to their adjoints.

Definition: An operator L is called (formally) self-adjoint if L = L*.

Example: Comparison of L and L* for our example L = a_0 d²/dx² + a_1 d/dx + a_2 shows that L is formally self-adjoint if and only if a_0' = a_1. Note that in this case

Lu = a_0 u'' + a_0' u' + a_2 u = (a_0 u')' + a_2 u

which is an ordinary differential operator of Sturm–Liouville type.

Now we can return to our search for a Green’s function for problem (2):

u(x) = (G_x, f) = (G_x, Lu) = (L*G_x, u) + BT

where we know from our computations above that the “boundary terms” are

BT = [a_0 (G_x u' − G_x' u) + (a_1 − a_0') G_x u]_{x_0}^{x_1} = [a_0 G_x u']_{x_0}^{x_1}

where we used the zero boundary conditions u(x_0) = u(x_1) = 0 in problem (2). We can make the remaining boundary term disappear if we also impose the boundary conditions G_x(x_0) = G_x(x_1) = 0 on our Green’s function G. Thus we are led to the problem

u(x) = (L*G_x, u),   G_x(x_0) = G_x(x_1) = 0


for G.

So L*G_x should be a function g(z) which satisfies

u(x) = (g, u) = ∫_{x_0}^{x_1} g(z) u(z) dz

for all (nice) functions u. What kind of function is this? In fact, it is a (Dirac) delta function, which is not really a function at all! Rather, it is a generalized function, a notion we need to explore further before proceeding with Green’s functions.


2. Generalized functions (distributions).

The precise mathematical definition of a generalized function is:

Definition: A generalized function or distribution is a (continuous) linear functional acting on the space of test functions

C_0^∞(R) = { infinitely differentiable functions on R which vanish outside some interval }

(an example of such a test function is

φ(x) = { e^{−1/(a² − x²)},   −a < x < a
       { 0,                  |x| ≥ a

for any a > 0). That is, a distribution f maps a test function φ to a real number

f(φ) = (f, φ) = “∫ f(x) φ(x) dx”

which it is useful to think of as an integral (as in the parentheses above) – hence the common notation (f, φ) for f (φ) – but is not in general an integral (it cannot be – since f is not in general an actual function!). Further, this map should be linear: for test functions φ and ψ and numbers α and β,

f(αφ + βψ) = αf(φ) + βf(ψ),   or   (f, αφ + βψ) = α(f, φ) + β(f, ψ).

Some examples should help clarify.

Example:

1. If f (x) is a usual function (say, a piecewise continuous one), then it is also a distribution (after all, we wouldn’t call them “generalized functions” if they didn’t include regular functions), which acts by integration,

f(φ) = (f, φ) = ∫ f(x) φ(x) dx

which is indeed a linear operation. This is why we use inner-product (and sometimes even integral) notation for the action of a distribution – when the distribution is a real function, its action on test functions is integration.

2. The (Dirac) delta function denoted δ(z) (or more generally δ x (z) = δ(z x) for the delta function centred at a point x) is not a function, but a distribution, whose action on test functions is defined to be

(δ, φ) := φ(0) = “∫ δ(z) φ(z) dz”

(δ_x, φ) := φ(x) = “∫ δ(z − x) φ(z) dz”


(where, again, the integral here is just notation). That is, δ acts on test functions by picking out their value at 0 (and δ x acts on test functions by picking out their value at x).

Generalized functions are so useful because we can perform on them many of the operations we can perform on usual functions. We can

1. Differentiate them: if f is a usual differentiable function, then for a test function φ, by integration by parts,

(f', φ) = ∫ f'(x) φ(x) dx = −∫ f(x) φ'(x) dx = −(f, φ')

(there are no boundary terms because φ vanishes outside of some interval). Now if f is any distribution, these integrals make no sense, but we can use the above calculation as the definition of how the derivative f' acts on test functions:

(f', φ) := −(f, φ')

and by iterating, we can differentiate f n times:

(f^{(n)}, φ) = (−1)^n (f, φ^{(n)}).

Example:

(a) the derivative of a delta function:

(δ', φ) = −(δ, φ') = −φ'(0)

(b) the derivative of the Heaviside function

H(x) := { 0,   x ≤ 0
        { 1,   x > 0

(which is a usual function, but is not differentiable in the classical sense at x = 0):

(H', φ) = −(H, φ') = −∫ H(x) φ'(x) dx = −∫_0^∞ φ'(x) dx = −φ(x)|_{x=0}^{x=∞} = φ(0)

(since φ vanishes outside an interval). Hence

(d/dx) H(x) = δ(x).
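This computation can be checked numerically. The sketch below (Python, standard library only; the grid size is an arbitrary choice) approximates the pairing −(H, φ') by a midpoint sum, using the bump test function from the definition above with a = 1, for which φ(0) = e^{−1}:

```python
import math

# The bump test function phi (with a = 1) and its classical derivative.
def phi(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def dphi(x):
    # derivative of phi, valid for |x| < 1 and zero outside
    return phi(x) * (-2.0 * x / (1.0 - x * x) ** 2) if abs(x) < 1.0 else 0.0

H = lambda x: 1.0 if x > 0 else 0.0     # the Heaviside function

# (H', phi) := -(H, phi') = -int H(x) phi'(x) dx, by a midpoint sum on [-1, 1]
M = 200000
h = 2.0 / M
pairing = -sum(H(-1.0 + (j + 0.5) * h) * dphi(-1.0 + (j + 0.5) * h) * h
               for j in range(M))

print(pairing, phi(0.0))   # both are approximately e^{-1} = 0.3678...
```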


The fact that we can always differentiate a distribution is what makes them so useful for differential equations.

2. Multiply them by smooth functions: if f is a distribution and a(x) is a smooth (infinitely differentiable) function we define

(a(x)f, φ) := (f, a(x)φ)

(which makes sense, since a(x)φ is again a test function). Note this definition coincides with the usual one when f is a usual function.

3. Consider convergence of distributions: we say that a sequence {f_j}_{j=1}^∞ of distributions converges to another distribution f if

lim_{j→∞} (f_j, φ) = (f, φ)

for all test functions φ. This kind of convergence is called weak convergence.

Example: Let ψ(x) be a smooth, non-negative function with ∫ ψ(x) dx = 1, and set ψ_j(x) := jψ(jx) for j = 1, 2, 3, .... Note that as j increases, the graph of ψ_j(x) becomes both taller and more concentrated near x = 0, while maintaining ∫ ψ_j(x) dx = 1. In fact, we have

lim_{j→∞} ψ_j(x) = lim_{j→∞} jψ(jx) = δ(x)

in the weak sense – it is a nice exercise to show this!
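The weak convergence can be illustrated numerically. In the sketch below (Python, standard library only), ψ is the bump from above, normalized numerically so that its integral is 1, and the smooth function φ = cos is an illustrative assumption (it is not compactly supported, but it is smooth and bounded, which is enough for the demonstration):

```python
import math

# The bump from the earlier example, normalized so its integral is 1.
def bump(x):
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

M = 20000
h = 2.0 / M
Z = sum(bump(-1.0 + (k + 0.5) * h) * h for k in range(M))   # int of bump
psi = lambda x: bump(x) / Z                                  # now int psi = 1

phi = lambda x: math.cos(x)    # smooth function to pair against; phi(0) = 1

def pairing(j, M=20000):
    # (psi_j, phi) = int j psi(j x) phi(x) dx over the support [-1/j, 1/j]
    a = -1.0 / j
    h = (2.0 / j) / M
    return sum(j * psi(j * (a + (k + 0.5) * h)) * phi(a + (k + 0.5) * h) * h
               for k in range(M))

for j in (1, 10, 100):
    print(j, pairing(j))    # approaches phi(0) = 1 as j grows
```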

4. Compose them with invertible functions: let g : R → R be a one-to-one and onto differentiable function, with g'(x) > 0. If f is a usual function, then by changing variables y = g(x) (so dy = g'(x) dx), we have for the composition f∘g(x) = f(g(x)),

(f∘g, φ) = ∫ f(g(x)) φ(x) dx = ∫ f(y) φ(g^{−1}(y)) dy / g'(g^{−1}(y)) = (f, (1/(g'∘g^{−1})) φ∘g^{−1})

and so for f a distribution, we define

(f∘g, φ) := (f, (1/(g'∘g^{−1})) φ∘g^{−1}).

Example: Composing the delta function with g(x) gives

(δ(g(x)), φ) = (δ, (1/(g'∘g^{−1})) φ∘g^{−1}) = φ(g^{−1}(0)) / g'(g^{−1}(0)),

and in particular if g(x) = cx (constant c > 0)

(δ(cx), φ) = (1/c) φ(0) = ((1/c) δ, φ)

and hence δ(cx) = (1/c) δ(x).
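The scaling identity δ(cx) = (1/c)δ(x) can be checked with a mollifier standing in for δ. In the sketch below (Python, standard library only; the narrow Gaussian stand-in, the cutoff R, and the test function are assumptions of the demonstration), ∫ δ_ε(cx) φ(x) dx = (1/c) ∫ δ_ε(y) φ(y/c) dy ≈ φ(0)/c, in line with the identity above:

```python
import math

def d_eps(x, eps=1e-3):
    # narrow normalized Gaussian: an approximation to the delta function
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

phi = lambda x: 2.0 + math.sin(x)   # smooth test function with phi(0) = 2

def pairing(c, M=200000, R=1.0):
    # int d_eps(c x) phi(x) dx, by a midpoint sum over [-R, R]
    h = 2.0 * R / M
    return sum(d_eps(c * (-R + (k + 0.5) * h)) * phi(-R + (k + 0.5) * h) * h
               for k in range(M))

c = 4.0
print(pairing(c), phi(0.0) / c)   # both approximately 0.5
```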


3. Green’s functions for ODEs.

Returning now to the ODE problem (2), we had concluded that we want our Green’s function G(x; z) = G_x(z) to satisfy

u(x) = (L*G_x, u)   for all u,   G_x(x_0) = G_x(x_1) = 0.

After our discussion of generalized functions, then, we see that what we want is really

L*G_x(z) = δ(z − x),   G_x(x_0) = G_x(x_1) = 0.

Notice that for z ≠ x, we are simply solving L*G_x = 0. The strategy is to solve this equation for z < x and for z > x, and then “glue the two pieces together”. Some examples should help clarify.

Example: use the Green’s function method to solve the problem

u'' = f(x),   0 < x < L,   u(0) = u(L) = 0        (3)

(which could, of course, be solved simply by integrating twice).

First note that L* = d²/dx² = L (the operator is self-adjoint), so the problem for our Green’s function G(x; z) = G_x(z) is

G_x''(z) = δ(z − x),   G_x(0) = G_x(L) = 0

(here ' denotes d/dz). For z < x and z > x, we have simply G_x'' = 0, and so

G_x(z) = { Az + B,   0 ≤ z < x
         { Cz + D,   x < z ≤ L

The BC G_x(0) = 0 implies B = 0, and the BC G_x(L) = 0 implies D = −LC, so we have

G_x(z) = { Az,         0 ≤ z < x
         { C(z − L),   x < z ≤ L

Now our task is to determine the remaining two unknown constants by using matching conditions to “glue” the two pieces together:

1. continuity: we demand that G_x be continuous at z = x: G_x(x−) = G_x(x+) (the notation here is g(x±) := lim_{ε↓0} g(x ± ε)). This yields Ax = C(x − L).

2. jump condition: for any ε > 0, integrating the equation G_x'' = δ(z − x) between x − ε and x + ε yields

G_x'(x + ε) − G_x'(x − ε) = G_x'|_{x−ε}^{x+ε} = ∫_{x−ε}^{x+ε} G_x''(z) dz = ∫_{x−ε}^{x+ε} δ(z − x) dz = 1

and letting ε → 0, we arrive at

G_x'(x+) − G_x'(x−) = 1.

This jump condition requires C − A = 1.


Solving the two equations for A and C yields C = x/L, A = (x − L)/L, and so

G(x; z) = G_x(z) = { z(x − L)/L,   0 ≤ z < x
                   { x(z − L)/L,   x < z ≤ L

which gives our solution of problem (3)

u(x) = ∫_0^L G(x; z) f(z) dz = ((x − L)/L) ∫_0^x z f(z) dz + (x/L) ∫_x^L (z − L) f(z) dz.

Remark:

1. whatever you may think of the derivation of this expression for the solution, it is easy to check (by direct differentiation and the fundamental theorem of calculus) that it is correct (assuming f is “reasonable” – say, continuous).

2. notice the form of the Green’s function G_x(z) (graph it!) – it has a “singularity” (in the sense of being continuous, but not differentiable) at the point z = x. This “singularity” must be there, since differentiating G_x twice has to yield a delta function.
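The solution formula can also be checked numerically. In this sketch (Python, standard library only; the constant source f ≡ 1, the rod length, and the grid size are choices made for the illustration), the Green’s-function integral is computed by a midpoint sum and compared with the exact solution x(x − L)/2 of u'' = 1, u(0) = u(L) = 0:

```python
# Numerical check of the solution formula for problem (3) with f = 1.
L = 2.0

def G(x, z):
    # the Green's function found above
    return z * (x - L) / L if z < x else x * (z - L) / L

f = lambda z: 1.0   # constant source

def u(x, M=20000):
    # u(x) = int_0^L G(x;z) f(z) dz, by a midpoint sum
    h = L / M
    return sum(G(x, (j + 0.5) * h) * f((j + 0.5) * h) * h for j in range(M))

for x in (0.3, 1.0, 1.7):
    print(u(x), x * (x - L) / 2.0)   # the two values agree
```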

Example: use the Green’s function method to solve the problem

x²u'' + 2xu' − 2u = f(x),   0 < x < 1,   u(0) = u(1) = 0.        (4)

Remark: Notice that the coefficients x² and 2x vanish at the left endpoint x = 0 (this point is a “regular singular point” in ODE parlance). This suggests that unless u is very wild as x approaches 0, we will require f(0) = 0 to fully solve the problem. Let’s come back to this point after we find a solution formula.

Notice the operator L = x² d²/dx² + 2x d/dx − 2 = (d/dx)(x² d/dx) − 2 is self-adjoint (L = L*), and so the problem for the Green’s function is

LG_x = z²G_x'' + 2zG_x' − 2G_x = δ(z − x),   G_x(0) = G_x(1) = 0.

For z ≠ x, the equation z²G_x'' + 2zG_x' − 2G_x = 0 is an ODE of Euler type, and so looking for solutions of the form G_x = z^r yields

0 = z²(r(r − 1)z^{r−2}) + 2z(rz^{r−1}) − 2z^r = (r(r − 1) + 2r − 2)z^r = (r + 2)(r − 1)z^r

and so we want r = −2 or r = 1. Thus

G_x(z) = { Az + B/z²,   0 ≤ z < x
         { Cz + D/z²,   x < z ≤ 1.

The BCs G_x(0) = G_x(1) = 0 imply B = 0 and C + D = 0, so

G_x(z) = { Az,            0 ≤ z < x
          { C(z − 1/z²),   x < z ≤ 1.

The matching conditions are

1. continuity: G_x(x−) = G_x(x+) implies Ax = C(x − 1/x²)

2. jump condition:

1 = ∫_{x−}^{x+} δ(z − x) dz = ∫_{x−}^{x+} [(z²G_x'(z))' − 2G_x] dz = z²G_x'(z)|_{x−}^{x+} = x²(G_x'(x+) − G_x'(x−))

(the contribution of the continuous term 2G_x vanishes in the limit), which implies x²C(1 + 2/x³) − x²A = 1.

Solving the simultaneous linear equations for A and C yields

1 = C[(x² + 2/x) − x²(1 − 1/x³)] = 3C/x   ⟹   C = x/3,   A = (x − 1/x²)/3,

and so the Green’s function is

G(x; z) = (1/3) { z(x − 1/x²),   0 ≤ z < x
                { x(z − 1/z²),   x < z ≤ 1

and the corresponding solution formula is

u(x) = ∫_0^1 G(x; z) f(z) dz = (1/3)(x − 1/x²) ∫_0^x z f(z) dz + (x/3) ∫_x^1 (z − 1/z²) f(z) dz.

Does this really solve (4) (supposing f is, say, continuous)? For 0 < x < 1, differentiation (and the fundamental theorem of calculus) gives

3u'(x) = (1 + 2/x³) ∫_0^x z f(z) dz + (x − 1/x²)x f(x) + ∫_x^1 (z − 1/z²) f(z) dz − x(x − 1/x²) f(x)

       = (1 + 2/x³) ∫_0^x z f(z) dz + ∫_x^1 (z − 1/z²) f(z) dz

so

3(x²u'' + 2xu' − 2u) = x² [ −(6/x⁴) ∫_0^x z f(z) dz + (1 + 2/x³)x f(x) − (x − 1/x²) f(x) ]
    + 2x [ (1 + 2/x³) ∫_0^x z f(z) dz + ∫_x^1 (z − 1/z²) f(z) dz ]
    − 2 [ (x − 1/x²) ∫_0^x z f(z) dz + x ∫_x^1 (z − 1/z²) f(z) dz ]

  = (−6/x² + 2x + 4/x² − 2x + 2/x²) ∫_0^x z f(z) dz + (2x − 2x) ∫_x^1 (z − 1/z²) f(z) dz + (x³ + 2 − x³ + 1) f(x)

  = 3f(x)


and we see that the ODE is indeed solved. What about the BCs? Well, u(1) = 0 obviously holds. As alluded to above, the BC at x = 0 is subtler. We see that

3 lim_{x→0+} u(x) = lim_{x→0+} (x − 1/x²) ∫_0^x z f(z) dz − lim_{x→0+} x ∫_x^1 f(z)/z² dz

(the remaining piece, x ∫_x^1 z f(z) dz, clearly tends to 0). If f is smooth, we have f(z) = f(0) + O(z) for small z, so ∫_0^x z f(z) dz = (f(0)/2)x² + O(x³), while x ∫_x^1 f(z)/z² dz → f(0); hence

3 lim_{x→0+} u(x) = −f(0)/2 − f(0) = −(3/2) f(0),

and so we require f(0) = 0 to genuinely satisfy the boundary condition at x = 0.
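As a concrete check of the solution formula (and of the f(0) = 0 requirement), take f(x) = x, which does satisfy f(0) = 0; one can verify by hand that u(x) = (x ln x)/3 then solves (4). The sketch below (Python, standard library only; the grid size is an arbitrary choice) compares the Green’s-function formula against this closed form:

```python
import math

# Check the solution formula for problem (4) with the source f(x) = x.
def G(x, z):
    # Green's function for x^2 u'' + 2x u' - 2u = f on (0, 1), zero BCs
    return z * (x - 1.0 / x**2) / 3.0 if z < x else x * (z - 1.0 / z**2) / 3.0

f = lambda z: z   # a source with f(0) = 0

def u(x, M=40000):
    # u(x) = int_0^1 G(x;z) f(z) dz, by a midpoint sum
    h = 1.0 / M
    return sum(G(x, (j + 0.5) * h) * f((j + 0.5) * h) * h for j in range(M))

# For f(z) = z, the exact solution of (4) is u(x) = (x ln x)/3
for x in (0.25, 0.5, 0.75):
    print(u(x), x * math.log(x) / 3.0)   # the two values agree
```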


4. Boundary conditions, and self-adjoint problems.

The only BCs we have seen so far have been homogeneous (i.e. 0) Dirichlet (specifying the value of the function) ones; namely u(x_0) = u(x_1) = 0. Let’s make this more general, first by considering an ODE problem with inhomogeneous Dirichlet BCs:

Lu := a_0 u'' + a_1 u' + a_2 u = f,   x_0 < x < x_1
u(x_0) = u_0,   u(x_1) = u_1.        (5)

Recall that by integration by parts, for functions u and v,

(v, Lu) = (L*v, u) + [a_0 (v u' − v' u) + (a_1 − a_0') v u]_{x_0}^{x_1}.

Suppose we find a Green’s function G(x; z) = G x (z) solving the problem

L*G_x = δ(z − x),   G_x(x_0) = G_x(x_1) = 0

– that is, with the homogeneous (i.e. 0) BCs corresponding to the BCs in problem (5). Then

u(x) = (L*G_x, u) = (G_x, Lu) + [a_0 G_x' u]_{x_0}^{x_1} = (G_x, f) + [a_0 G_x' u]_{x_0}^{x_1}

     = ∫_{x_0}^{x_1} G(x; z) f(z) dz + a_0(x_1) G_x'(x_1) u_1 − a_0(x_0) G_x'(x_0) u_0,

a formula which gives the solution of problem (5) in terms of the Green’s function G(x; z), and the “data” (the source term f(x), and the boundary data u_0 and u_1).
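As a sanity check of this formula, consider the simplest case u'' = 0 (so f = 0, a_0 = 1) on an interval [0, Len] with u(0) = u_0 and u(Len) = u_1, using the Green’s function z(x − Len)/Len, x(z − Len)/Len found earlier; the interval length and boundary values below are arbitrary choices for this sketch. The formula should reproduce the straight line interpolating the boundary values:

```python
# Sanity check of the inhomogeneous-BC formula for L = d^2/dx^2 on [0, Len].
Len, u0_val, u1_val = 2.0, 3.0, 7.0

dG_at_0 = lambda x: (x - Len) / Len    # G_x'(z) at z = 0 (lower endpoint)
dG_at_L = lambda x: x / Len            # G_x'(z) at z = Len (upper endpoint)

# u(x) = int G f dz + a0(x1) G_x'(x1) u1 - a0(x0) G_x'(x0) u0, with f = 0, a0 = 1
u = lambda x: dG_at_L(x) * u1_val - dG_at_0(x) * u0_val

# For u'' = 0 the solution is the straight line through the boundary values
for x in (0.0, 0.5, 1.0, 2.0):
    print(u(x), u0_val + (u1_val - u0_val) * x / Len)   # the two values agree
```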

The other way to generalize boundary conditions is to include the value of the derivative of u (as well as u itself) at the boundary (i.e. the endpoints of the interval). For example, if u is the temperature along a rod [x_0, x_1] whose ends are insulated, we should impose the Neumann BCs u'(x_0) = u'(x_1) = 0 (no heat flux through the ends).

In general, for the following discussion, think of imposing 2 boundary conditions, each of which sets a linear combination of u(x_0), u'(x_0), u(x_1), and u'(x_1) equal to 0 (homogeneous case) or some non-zero number (inhomogeneous case).

Definition:

1. A problem

Lu = f,   BCs on u

is called (essentially) self-adjoint if

(a) L = L* (so the operator is self-adjoint), and

(b) (v, Lu) = (Lv, u) (i.e. with no boundary terms) whenever both u and v satisfy the homogeneous BCs corresponding to the BCs in the problem.


Remark: As in the above example, the Green’s function for a self-adjoint problem should satisfy the homogeneous BCs corresponding to the BCs in the original problem. More generally, the Green’s function for a problem should satisfy:

2. The homogeneous adjoint boundary conditions for a problem

Lu = f,   BCs on u

are the BCs on v which guarantee that (v, Lu) = (L*v, u) (i.e. no boundary terms) whenever u satisfies the homogeneous BCs corresponding to the BCs in the original problem.

Remark:

1. these definitions are quite abstract – it is better to see what is going on by doing some specific examples

2. a problem can be non-self-adjoint even if L = L*; for example (see homework)

−u'' + q(x)u = f(x),   u'(0) − u(1) = 0,   u'(1) = 0

3. if L ≠ L*, we can make Lu = f self-adjoint by multiplying by a function (again, see homework).

Example: (Sturm–Liouville problem)

Lu := (p(x)u')' + q(x)u = f(x),   0 < x < 1
α_0 u(0) + β_0 u'(0) = 0
α_1 u(1) + β_1 u'(1) = 0        (6)

where p(x) > 0, and α_0, α_1, β_0, β_1 are numbers with α_0, β_0 not both 0, and α_1, β_1 not both 0.

First notice that L = L* (the operator is self-adjoint), and integration by parts (as usual) gives

(v, Lu) = (Lv, u) + [p(v u' − v' u)]_0^1.

If u and v both satisfy the BCs in problem (6) (which are homogeneous), then

α_0 [v(0)u'(0) − v'(0)u(0)] = u'(0)(α_0 v(0)) − v'(0)(α_0 u(0)) = −u'(0)(β_0 v'(0)) + v'(0)(β_0 u'(0)) = 0

β_0 [v(0)u'(0) − v'(0)u(0)] = v(0)(β_0 u'(0)) − u(0)(β_0 v'(0)) = −v(0)(α_0 u(0)) + u(0)(α_0 v(0)) = 0

and so (since α_0 and β_0 are not both 0), v(0)u'(0) − v'(0)u(0) = 0. A similar computation shows that v(1)u'(1) − v'(1)u(1) = 0. Hence (v, Lu) = (Lv, u) (the boundary terms disappear), and the problem is, indeed, self-adjoint.


Thus a Green’s function G(x; z) = G_x(z) for problem (6) should satisfy

LG_x = (p(z)G_x'(z))' + q(z)G_x(z) = δ(z − x)
α_0 G_x(0) + β_0 G_x'(0) = 0
α_1 G_x(1) + β_1 G_x'(1) = 0

For z ≠ x, we have LG_x = 0, so

G_x(z) = { c_0 w_0(z),   0 ≤ z < x
         { c_1 w_1(z),   x < z ≤ 1

where for j = 0, 1, w_j denotes any fixed, non-zero solution of

Lw_j = 0,   α_j w_j(j) + β_j w_j'(j) = 0

(which is an initial value problem for a second-order, linear ODE – hence there is a one-dimensional family of solutions), and c_0, c_1 are constants. Continuity of G_x at z = x implies

c_0 w_0(x) = c_1 w_1(x).

The jump condition at z = x is (check it!) p(x)[G_x'(x+) − G_x'(x−)] = 1, so

c_1 w_1'(x) − c_0 w_0'(x) = 1/p(x).

Hence

c_1 [w_0(x) w_1'(x) − w_1(x) w_0'(x)] = w_0(x)/p(x),

which involves the Wronskian

W = W[w_0, w_1](x) = w_0(x) w_1'(x) − w_1(x) w_0'(x).

Recall that since w_0 and w_1 satisfy Lw = 0,

p(x) W[w_0, w_1](x) ≡ constant.

There are two possibilities:

1. If W ≡ 0, then we cannot satisfy the equations for the coefficients above – there is no Green’s function! In this case, w_0 and w_1 are linearly dependent, which means that w_0 and w_1 are both actually solutions of the homogeneous equation Lu = 0 satisfying both BCs in (6). We’ll discuss this case more later.


2. Otherwise, W is non-zero everywhere in (0, 1), and so we have

c_1 = w_0(x)/(p(x)W),   c_0 = w_1(x)/(p(x)W)

and hence a Green’s function

G_x(z) = (1/(p(x)W)) { w_1(x) w_0(z),   0 ≤ z < x
                     { w_0(x) w_1(z),   x < z ≤ 1.