

Green's Functions and Distributions

9.1. Boundary Value Problems

We would like to study, and solve if possible, boundary value problems such as the following:

(1.1)   −Δu = f in U;   u = g on ∂U,

where U ⊂ ℝ^n is open and bounded, with smooth boundary ∂U.

Now, we know how to solve the PDE on all of space: we write the solution as a convolution with the fundamental solution Φ:

(1.2)   u(x) = ∫_{ℝ^n} Φ(x − y) f(y) dy.

The idea of the Green's function is to find a formula like this that also takes into account the boundary conditions. Of course, the trouble is that since Φ(x) is set up to localize at x = 0, it cannot be expected to do the job in the interior of U at the same time as on the boundary. However, we can modify Φ so that it satisfies homogeneous boundary conditions. This is how the Green's function is set up, and in fact such a function allows us to solve general problems like (1.1), even for nonzero boundary data g. On the way to formulating Green's functions, we outline the theory of distributions (the Dirac delta function is an example of a distribution that is not a function in the usual sense), which is useful in the study of weak solutions of PDE.

Let's start with a seemingly simple example, namely problem (1.1) in one dimension, with zero boundary conditions to start with:

(1.3)   −u″(x) = f(x);   u(0) = 0,   u(1) = 0.
We solve this problem by integrating twice:

u′(x) = −∫_0^x f(y) dy + C;   u(x) = −∫_0^x ∫_0^z f(y) dy dz + Cx + D,

where C, D are constants of integration.




Now we apply the boundary conditions. The boundary condition u(0) = 0 leads immediately to D = 0. But the boundary condition u(1) = 0 gives an equation for C that is hard to interpret. It becomes much more transparent when we change the order of integration. To do so, we note that the double integral is taken over a triangle (see Figure 9.1).

Figure 9.1. Area of Integration.

∫_0^x ∫_0^z f(y) dy dz = ∫_0^x ∫_y^x f(y) dz dy = ∫_0^x (x − y) f(y) dy.

Consequently, the solution we have so far is expressed as

(1.4)   u(x) = −∫_0^x (x − y) f(y) dy + Cx.

The boundary condition at x = 1 becomes

u(1) = −∫_0^1 (1 − y) f(y) dy + C = 0,

so that

C = ∫_0^1 (1 − y) f(y) dy.



Substituting back into (1.4) and rearranging terms, we arrive at

u(x) = ∫_0^x (y − x) f(y) dy + x ∫_0^1 (1 − y) f(y) dy = ∫_0^x y(1 − x) f(y) dy + ∫_x^1 x(1 − y) f(y) dy.

That is,

(1.5)   u(x) = ∫_0^1 G(x, y) f(y) dy,

where

(1.6)   G(x, y) = y(1 − x) if y ≤ x;   x(1 − y) if x ≤ y.

The graph of G(x, y) for fixed x ∈ (0, 1) is shown in Figure 9.2: it is piecewise linear in y, with peak value x(1 − x) at y = x.

Figure 9.2. The integral kernel G(x, y).

The function G(x, y) is called an integral kernel. It defines an integral operator G : C[0, 1] → X on the space of continuous functions on [0, 1] that is the inverse of the differential operator L = −d²/dx² : X → C[0, 1], where X = {u ∈ C[0, 1] : u(0) = 0 = u(1)}:

(Gf)(x) = ∫_0^1 G(x, y) f(y) dy.

Note that the domain of L is the subspace of X consisting of twice-differentiable functions. In fact, G is the Green's function for this boundary value problem; note that the formula (1.5) is similar to that of (1.2). Here are some properties of G : [0, 1] × [0, 1] → ℝ.

1. G is non-negative: G(x, y) ≥ 0.



2. G is symmetric: G(x, y) = G(y, x).
3. G is continuous.
4. G is differentiable, except on the diagonal x = y. On the diagonal, ∂G/∂y has a jump discontinuity:

[∂G/∂y]_{x=y} = −1,

where [..] means the jump, or difference between the right limit and the left limit. Now ∂²G/∂y² = 0 except on the diagonal, where ∂G/∂y may be thought to have infinite negative slope. We will write

∂²G/∂y²(x, y) = −δ(x − y),

where δ(x) is the Dirac delta function: δ is a measure that assigns mass one to x = 0, and zero mass elsewhere. We shall treat δ as a distribution, or generalized function, which leads us to the theory of distributions, a useful framework for considering PDEs and Green's functions such as G(x, y).
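As a quick numerical sanity check on (1.5)–(1.6) (a sketch, not part of the notes; it assumes NumPy is available), the following code applies the kernel G to f(y) = π² sin(πy), for which the exact solution of −u″ = f with zero boundary values is u(x) = sin(πx):

```python
import numpy as np

def G(x, y):
    # Green's function (1.6) for L = -d^2/dx^2 with zero boundary values on [0, 1]
    return np.where(y <= x, y * (1.0 - x), x * (1.0 - y))

# Solve -u'' = f, u(0) = u(1) = 0, for f(y) = pi^2 sin(pi y);
# the exact solution is u(x) = sin(pi x).
y = np.linspace(0.0, 1.0, 2001)
h = y[1] - y[0]
f = np.pi ** 2 * np.sin(np.pi * y)

def u(x):
    w = G(x, y) * f
    return (w.sum() - 0.5 * (w[0] + w[-1])) * h   # trapezoid rule

err = max(abs(u(x) - np.sin(np.pi * x)) for x in (0.25, 0.5, 0.75))
print(err)  # small discretization error
```

Evaluating u at grid points keeps the kernel's corner at y = x on the grid, so the trapezoid rule remains second-order accurate; the kernel's symmetry (property 2) can be checked the same way.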

9.2. Distributions

We start by considering smooth functions on ℝ. Let D = C_c^∞(ℝ) denote the space of C^∞ functions with compact support; this is the space of test functions. To define the topology of D, we define what it means for a sequence of functions {φ_n} to converge in the space. Let φ^(j) denote the j-th derivative of φ. We say φ_n → φ in D as n → ∞ if:

(a) There is a compact subset K of ℝ such that supp φ_n ⊂ K for all n, and supp φ ⊂ K.
(b) φ_n^(j) → φ^(j) as n → ∞, uniformly on K, for each j ≥ 0:

sup_{x∈K} |φ_n^(j)(x) − φ^(j)(x)| → 0 as n → ∞.

Similarly, we may define the space D(ℝ^n) of test functions on ℝ^n.



Important example of a test function. Let

(2.7)   φ(x) = C e^{−1/(1−|x|²)} if |x| < 1;   0 otherwise,

where C = 1 / ∫_{|x|<1} e^{−1/(1−|x|²)} dx is chosen so that ∫ φ(x) dx = 1.

To see that φ is a test function, we observe that it has continuous derivatives of all orders, even where the definition is split, at |x| = 1. For example, with n = 1, the derivatives approach zero at x = ±1, since every derivative of φ is the product of a rational function and the exponential e^{−1/(1−x²)}; the exponential dominates the rational function. It is useful to rescale φ using a parameter ε > 0:

(2.8)   φ_ε(x) = ε^{−n} φ(x/ε).

Then supp φ_ε = {x : |x| ≤ ε} and ∫ φ_ε(x) dx = 1.
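Here is a short numerical sketch (not from the notes; it assumes NumPy) of (2.7)–(2.8) in one dimension, computing the normalization C by quadrature and checking that each rescaled φ_ε still integrates to one:

```python
import numpy as np

def bump(x):
    # Unnormalized C^infinity bump: exp(-1/(1 - x^2)) inside |x| < 1, 0 outside
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-1.5, 1.5, 6001)
h = x[1] - x[0]
C = 1.0 / (np.sum(bump(x)) * h)      # normalization so that the integral of phi is 1

def phi_eps(x, eps):
    # Rescaled mollifier (2.8) in one dimension: eps^{-1} * phi(x / eps)
    return C * bump(x / eps) / eps

for eps in (1.0, 0.5, 0.1):
    total = np.sum(phi_eps(x, eps)) * h
    print(eps, total)                # each integral is ≈ 1, up to quadrature error
```

Because the bump is C^∞ with all derivatives vanishing at the edge of its support, even a simple Riemann sum converges rapidly here.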

The space of distributions D′ is defined to be the space of continuous linear functionals on D. That is, f ∈ D′ means f : D → ℝ, and f has the properties:

(i) f is linear, i.e., f(aφ₁ + bφ₂) = a f(φ₁) + b f(φ₂) for each a, b ∈ ℝ, φ₁, φ₂ ∈ D.
(ii) f is continuous, i.e., φ_n → φ in D implies f(φ_n) → f(φ) (as a sequence of numbers).

Following the usual custom, we denote f(φ) using the more suggestive notation (f, φ). (Sometimes, ⟨f, φ⟩ is used, but not in these notes.)

Examples of distributions.
1. (f₁, φ) = ∫_ℝ φ(x) dx (defined because φ is continuous and has compact support).
2. (f₂, φ) = ∫_0^∞ φ(x) dx.
3. (f₃, φ) = φ(0).
4. (f₄, φ) = −φ′(0).



All of these are distributions, and the last three are related, as we shall see, by differentiation in the sense of distributions. The first two examples are associated with locally integrable functions. Let L¹_loc(ℝ^n) denote the space of locally integrable (in the sense of Lebesgue) functions on ℝ^n:

g ∈ L¹_loc(ℝ^n)  ⟺  ∫_K |g(x)| dx < ∞ for any compact K ⊂ ℝ^n.

Examples.
1. g(x) = 1 is not integrable on ℝ, but it is in L¹_loc(ℝ).
2. g(x) = x is unbounded on ℝ, but is in L¹_loc(ℝ).

CLAIM. If g ∈ L¹_loc(ℝ^n), then g defines a distribution f ∈ D′ by

(f, φ) = ∫ g(x) φ(x) dx.

Proof. Linearity of f is clear; continuity follows from the definition.

However, not every distribution is given by an L¹_loc function. We say a distribution f is regular if it has an L¹_loc representative f̄:

(f, φ) = ∫ f̄(x) φ(x) dx for all φ ∈ D(ℝ^n).

Otherwise, the distribution f is called singular.

Example f₃ is the Dirac δ-function: (δ, φ) = φ(0); we show below that δ is a singular distribution. It will then follow that example f₄ is also singular. However, examples f₁ and f₂ define L¹_loc functions:

For f₁: take f̄(x) = 1.
For f₂: take f̄ to be the Heaviside function H(x) = 0 if x < 0;  1 if x ≥ 0.

Lemma 9.1. δ is a singular distribution.

Proof. Suppose δ is regular, so that there is g ∈ L¹_loc such that

(2.9)   (δ, φ) = ∫ g(x) φ(x) dx for all φ ∈ D(ℝ^n).



Now define a one-parameter family of test functions, modeled on φ(x):

ψ_ε(x) = e^{−1/(1−|x/ε|²)} if |x| < ε;   0 if |x| ≥ ε.

Now we calculate the effect of ψ_ε on δ in two ways. First, from the definition of δ:

(δ, ψ_ε) = ψ_ε(0) = e^{−1}.

Second, using the assumption (2.9):

(δ, ψ_ε) = ∫ g(x) ψ_ε(x) dx.

But then

e^{−1} = |∫_{ℝ^n} g(x) ψ_ε(x) dx| ≤ e^{−1} ∫_{|x|≤ε} |g(x)| dx.

But the latter integral approaches zero as ε → 0, providing a contradiction for small enough ε.

Note that the function ψ_ε used in the proof of the lemma is a multiple (ε^n/C, to be precise) of φ_ε, but this multiple provides the very different behaviors of the two functions as ε → 0.

The function φ_ε is called a mollifier, because it can be used to smooth rough functions. It does this through the convolution product. Recall that the convolution of two square integrable functions (i.e., in L²(ℝ^n)) f and g is defined as

g ∗ f(x) = ∫ g(x − y) f(y) dy.

Example. Compute g ∗ f for the functions

f(x) = 1 if 0 < x < 1, 0 otherwise;   g(x) = x if |x| < 1, 0 otherwise,

and graph the convolution product g ∗ f.

Remark. Convolution is symmetric: g ∗ f = f ∗ g, as can be seen by changing variables in the integral.

Here is how the mollifier does its smoothing. Let f ∈ C(ℝ^n), and define, for ε > 0,

f_ε(x) = φ_ε ∗ f(x).

Then f_ε ∈ C^∞, and f_ε(x) → f(x) for all x, as ε → 0.
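The convolution in the example above can be approximated numerically; the following sketch (not part of the notes; it assumes NumPy) evaluates g ∗ f on a grid and checks the symmetry remark at a sample point:

```python
import numpy as np

# Approximate (g*f)(x) = ∫ g(x - y) f(y) dy by a Riemann sum on a grid.
y = np.linspace(-3.0, 3.0, 6001)
h = y[1] - y[0]
f_vals = np.where((y > 0) & (y < 1), 1.0, 0.0)   # f = indicator of (0, 1)
g_vals = np.where(np.abs(y) < 1, y, 0.0)         # g(x) = x on (-1, 1), 0 otherwise

def conv(a_vals, b_vals, x):
    # (a*b)(x) ≈ sum_j a(x - y_j) b(y_j) h, with a(x - y) read off by interpolation
    a_shift = np.interp(x - y, y, a_vals, left=0.0, right=0.0)
    return np.sum(a_shift * b_vals) * h

# Hand computation: (g*f)(0) = ∫_0^1 g(-y) dy = ∫_0^1 (-y) dy = -1/2.
print(conv(g_vals, f_vals, 0.0))                               # ≈ -0.5
print(conv(g_vals, f_vals, 0.5) - conv(f_vals, g_vals, 0.5))   # ≈ 0 (symmetry)
```

The two discrete sums for g ∗ f and f ∗ g differ only by O(h) quadrature error, illustrating the change-of-variables argument.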



Note that f_ε is defined and is C^∞ for f ∈ L¹_loc. In that case, it takes a little measure theory (the Lebesgue dominated convergence theorem) to prove that f_ε → f almost everywhere (see [Evans, Appendix]).

Convergence of distributions. Let {f_k}_{k=1}^∞ ⊂ D′(ℝ^n) be a sequence of distributions, and let f ∈ D′(ℝ^n). We say f_k → f in D′, in the sense of distributions as k → ∞, if

(f_k, φ) → (f, φ) for all φ ∈ D(ℝ^n).

Examples.
1. Let n = 1, and let

f_k(x) = k if |x| ≤ 1/(2k);   0 otherwise.

Then, for φ ∈ D(ℝ),

(f_k, φ) = ∫ f_k(x) φ(x) dx = k ∫_{−1/(2k)}^{1/(2k)} φ(x) dx → φ(0) = (δ, φ) as k → ∞.

Thus, f_k → δ in the sense of distributions as k → ∞.

2. (Exercise.) Prove that φ_ε → δ in the sense of distributions, as ε → 0.
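Example 1 can be watched happening numerically. This sketch (an illustration, not from the notes; it assumes NumPy) pairs f_k with the bump test function and shows the pairings approaching φ(0) = e^{−1}:

```python
import numpy as np

def phi(x):
    # A smooth test function; any smooth, compactly supported phi would do.
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    out[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return out

for k in (1, 10, 100, 1000):
    # (f_k, phi) = k * integral of phi over [-1/(2k), 1/(2k)]
    x = np.linspace(-1.0 / (2 * k), 1.0 / (2 * k), 2001)
    val = k * np.sum(phi(x)) * (x[1] - x[0])
    print(k, val)   # approaches phi(0) = exp(-1) ≈ 0.3679
```

As k grows, f_k averages φ over a shrinking interval around 0, which is exactly the mean-value mechanism behind the limit (f_k, φ) → φ(0).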

9.2.1. Distributional derivatives. Let f ∈ C¹(ℝ^n). Then f defines a distribution f ∈ D′:

(f, φ) = ∫ f(x) φ(x) dx.

But ∂f/∂x_i ∈ C(ℝ^n) also defines a distribution:

(∂f/∂x_i, φ) = ∫_{ℝ^n} ∂f/∂x_i(x) φ(x) dx = −∫_{ℝ^n} f(x) ∂φ/∂x_i(x) dx = −(f, ∂φ/∂x_i),

integrating by parts over the support of φ.

This calculation suggests that for any distribution f we define its derivative, or distributional derivative, ∂f/∂x_i to be the distribution given by

(∂f/∂x_i, φ) = −(f, ∂φ/∂x_i) for all φ ∈ D.

Then every distribution is differentiable in the sense of distributions; hence f has derivatives of all orders. It is not hard to show that ∂f/∂x_i is indeed a distribution, by checking directly that it is continuous and linear.

Examples with n = 1.
1. Consider the Heaviside function H(x). As we saw earlier, H is in L¹_loc, and it acts on test functions by (H, φ) = ∫_0^∞ φ(x) dx. Thus, for any test function φ,


(H′, φ) = −(H, φ′) = −∫_0^∞ φ′(x) dx = φ(0) = (δ, φ).

Therefore, H′ = δ.
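The identity H′ = δ can be checked on a concrete test function. The sketch below (illustrative, not from the notes; it assumes NumPy) evaluates −∫_0^∞ φ′(x) dx by quadrature for the standard bump φ and compares it with φ(0) = e^{−1}:

```python
import numpy as np

def phi(x):
    # The standard bump test function exp(-1/(1 - x^2)) on |x| < 1
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    out[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return out

def dphi(x):
    # phi'(x) = phi(x) * (-2x) / (1 - x^2)^2 inside the support, 0 outside
    out = np.zeros_like(x)
    m = np.abs(x) < 1
    out[m] = phi(x[m]) * (-2.0 * x[m]) / (1.0 - x[m] ** 2) ** 2
    return out

x = np.linspace(0.0, 1.0, 20001)
lhs = -np.sum(dphi(x)) * (x[1] - x[0])   # (H', phi) = -(H, phi') = -∫_0^∞ phi'
print(lhs, np.exp(-1.0))                 # the two values agree
```

This is just the fundamental theorem of calculus in disguise: −∫_0^∞ φ′ = φ(0) − φ(∞) = φ(0), which is precisely the action of δ.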

2. We can also differentiate the δ function directly from the definition of the derivative:

(δ′, φ) = −(δ, φ′) = −φ′(0).

3. Let f(x) = |x|. Then

f′(x) = −1 if x < 0;   1 if x > 0

is an L¹_loc function, hence defines a distribution. In fact, f′ = 2H − 1, so f″ = 2δ. Here, we use the fact that differentiation is a linear operation on distributions, just as it is on differentiable functions.

Here are two further properties, or more precisely, definitions:

1. Translation by y ∈ ℝ^n. If f ∈ D′, we define f(· + y) = f_y ∈ D′ by

(f_y, φ) = (f, φ_y),



where φ_y ∈ D is the test function defined by φ_y(x) = φ(x − y). To see that this makes sense, we simply check that it is consistent for f ∈ L¹_loc:

(f_y, φ) = ∫ f(x + y) φ(x) dx = ∫ f(z) φ(z − y) dz = (f, φ_y).

An example where this is useful is the δ function. We often write δ(x − y) to mean the distribution δ_{−y}, and it is sometimes helpful to leave x in the arguments of both the distribution and the test function:

(δ(x − y), φ(x)) = φ(y).

2. Multiplication by a C^∞ function. Let c ∈ C^∞, f ∈ D′. Then we define cf ∈ D′ by

(cf, φ) = (f, cφ),

noting that cφ is a test function if φ is. Again, this definition is motivated by the case of a regular distribution f, in which case the formula makes sense as integrals. As an example of this definition, consider c(x) = x, f = δ. Then

(x δ(x), φ(x)) = (δ(x), x φ(x)) = 0,

where again we have left the argument x in the calculation for clarity. Since δ = H′, we see that y(x) = H(x) is a solution of the differential equation

x dy/dx = 0.

In fact, the general distributional solution of this equation (which is singular at x = 0) is y(x) = aH(x) + b, for arbitrary constants a, b. Thus, we have a two-parameter family of solutions of a first order equation.

9.2.2. Example. Here is a quick application to conservation laws. Consider the scalar conservation law

(2.10)   u_t + f(u)_x = 0,

in which f : ℝ → ℝ is a given C¹ function. We can interpret equation (2.10) in the sense of distributions, and in particular say what it means for a discontinuous function to be a solution. The equation simply states that if u ∈ D′(ℝ²), and f(u) can be interpreted as a distribution, then the combination of distributional derivatives on the left hand side of the equation should be zero on every test function. But this implies, for a test function φ(x, t):

(u, φ_t) + (f(u), φ_x) = 0.



In this way, we can define distribution solutions of differential equations, even nonlinear equations. To see that this interpretation has some substance, consider a jump discontinuity

(2.11)   u(x, t) = u₋ if x < st;   u₊ if x > st,

where s, u₊, u₋ are constants. Now

u(x, t) = (u₊ − u₋) H(x − st) + u₋,   f(u)(x, t) = (f(u₊) − f(u₋)) H(x − st) + f(u₋).

But then

∂u/∂t = −s (u₊ − u₋) H′(x − st);   ∂f(u)/∂x = (f(u₊) − f(u₋)) H′(x − st),

and H′ = δ. Equation (2.10) becomes

(−s(u₊ − u₋) + f(u₊) − f(u₋)) δ(x − st) = 0,

in the sense of distributions. But then the constant −s(u₊ − u₋) + f(u₊) − f(u₋) must be zero:

s(u₊ − u₋) = f(u₊) − f(u₋).

Thus, we have derived the Rankine-Hugoniot condition for shock wave solutions of (2.10).

We are almost ready to describe Green's functions, but we first need to review integration by parts in ℝ^n. Consider a bounded open set U ⊂ ℝ^n, with piecewise smooth boundary ∂U. Consider functions f, g in C¹(U) ∩ C(Ū), and let ν = (ν₁, . . . , ν_n) be the unit outward normal on ∂U. Differentiating the product f(x)g(x):

∂(fg)/∂x_i = f ∂g/∂x_i + g ∂f/∂x_i.

Now integrate over U and apply the divergence theorem to F(x) = (0, . . . , 0, fg, 0, . . . , 0), for which div F = ∂(fg)/∂x_i. After rearranging, we get the formula for integration by parts:

(2.12)   ∫_U f ∂g/∂x_i dx = ∫_∂U f g ν_i dS − ∫_U g ∂f/∂x_i dx.
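The Rankine-Hugoniot derivation above can be tested numerically against the weak form (u, φ_t) + (f(u), φ_x) = 0. The sketch below (not part of the notes; it assumes NumPy, and uses the Burgers flux f(u) = u²/2 as a concrete choice) shows the weak-form pairing nearly vanishing at the Rankine-Hugoniot speed and not at a wrong speed:

```python
import numpy as np

# Burgers flux f(u) = u^2 / 2, shock connecting u_- = 2 to u_+ = 0.
def flux(u):
    return 0.5 * u ** 2

u_minus, u_plus = 2.0, 0.0
s = (flux(u_plus) - flux(u_minus)) / (u_plus - u_minus)   # Rankine-Hugoniot speed
print(s)  # 1.0 for this data

def bump(z):
    # Smooth, compactly supported profile for the test function
    out = np.zeros_like(z)
    m = np.abs(z) < 1
    out[m] = np.exp(-1.0 / (1.0 - z[m] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 801)
t = np.linspace(-2.0, 2.0, 801)
hx, ht = x[1] - x[0], t[1] - t[0]
X, T = np.meshgrid(x, t, indexing="ij")
Phi = bump(X) * bump(T)                  # test function phi(x, t)
Phi_x = np.gradient(Phi, hx, axis=0)
Phi_t = np.gradient(Phi, ht, axis=1)

def weak_residual(speed):
    # (u, phi_t) + (f(u), phi_x) for the jump (2.11) moving at the given speed
    U = np.where(X < speed * T, u_minus, u_plus)
    return np.sum(U * Phi_t + flux(U) * Phi_x) * hx * ht

print(weak_residual(s), weak_residual(0.5))  # near zero only at the R-H speed
```

Only the shock moving at the Rankine-Hugoniot speed annihilates every test function, which is what it means for (2.11) to be a distributional solution of (2.10).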



9.3. Green's Functions

In this section, we first give a framework for Green's functions for a rather general differential operator, and then show how this applies when the operator is the Laplacian.

9.3.1. General Framework. Consider a linear partial differential operator L (L = −Δ, for example) that acts on functions u : ℝ^n → ℝ, and consider solving a boundary value problem

(3.13)   Lu = f in U;   Bu = g on ∂U.

Here, f is a given function on U, g a given function on ∂U, and U ⊂ ℝ^n may be unbounded. Bu represents a linear combination of u and derivatives of u of lower order than the order of the partial differential operator L. Associated with L we have the fundamental solution Γ(x, y):

(3.14)   LΓ(x, y) = δ(x − y),   x, y ∈ ℝ^n.

In order to see how this works, consider Γ to be locally integrable in y for each x ∈ ℝ^n, and let f ∈ D(ℝ^n). Then

(3.15)   v(x) = ∫ Γ(x, y) f(y) dy = (Γ(x, ·), f)

satisfies Lv = f in ℝ^n:

Lv(x) = L(Γ(x, ·), f) = (LΓ(x, ·), f) = (δ(x − ·), f) = f(x).

That is, the fundamental solution is the key to solving the inhomogeneous PDE. However, the formula (3.15) does not generally satisfy the boundary condition Bu = g. To satisfy the boundary condition, we add a solution w of the homogeneous equation Lw = 0 so that u(x) = v(x) + w(x) satisfies the boundary condition Bu = g. But then w must satisfy the boundary condition Bw = g − Bv. Thus, provided we have the fundamental solution Γ(x, y), the boundary value problem (3.13) is reduced to solving the problem

Lw = 0 in U;   Bw = g − Bv on ∂U.

The boundary condition for w is a bit clumsy, because it relies on applying the boundary operator to the integral (3.15) and restricting to the boundary. The way to express this more smoothly is to introduce the Green's function for the differential operator L with boundary operator B. For clarity, we consider L and B to act in the independent variable x ∈ U, and we let y ∈ U be independent.



The Green's function G = G(x, y) is defined as the solution of the problem

(3.16)   LG(x, y) = δ_y(x), x ∈ U;   BG(x, y) = 0, x ∈ ∂U,

for each y ∈ U. We construct G using the fundamental solution, by defining a function θ_y(x):

G(x, y) = Γ(x, y) − θ_y(x).

Then θ_y(x) satisfies

Lθ_y = 0 in U;   Bθ_y = BΓ(·, y) on ∂U.
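The construction G = Γ − θ_y can be made completely explicit in one dimension, tying the framework back to formula (1.6). This sketch (illustrative, not from the notes; it assumes NumPy, and uses Φ(x) = −|x|/2, a fundamental solution of −d²/dx²) builds the corrector as the linear (hence harmonic in 1D) interpolant of the boundary values of Φ:

```python
import numpy as np

def Phi(x):
    # Fundamental solution of L = -d^2/dx^2 in one dimension: -Phi'' = delta
    return -0.5 * np.abs(x)

def theta(x, y):
    # Corrector: harmonic (here, linear) in x, matching Phi(x - y) at x = 0 and x = 1
    return Phi(-y) + x * (Phi(1 - y) - Phi(-y))

def G_framework(x, y):
    # G = Gamma - theta_y, as in the general construction
    return Phi(x - y) - theta(x, y)

def G_direct(x, y):
    # Formula (1.6) from Section 9.1
    return np.where(y <= x, y * (1 - x), x * (1 - y))

xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs)
diff = np.max(np.abs(G_framework(X, Y) - G_direct(X, Y)))
print(diff)  # the two constructions agree
```

The agreement confirms that subtracting a homogeneous solution with matched boundary values turns the fundamental solution into the Green's function.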

9.3.2. Green's Functions for the Laplacian. Let U be open and bounded, with piecewise smooth boundary ∂U, and let f, g be continuous on U, ∂U respectively. Consider the Dirichlet boundary value problem for Poisson's equation:

(3.17)   −Δu = f in U;   u = g on ∂U.

The fundamental solution for −Δ is a function Φ(x − y) of x − y, since −Δ is translation invariant. (More generally, the fundamental solution for any constant coefficient L will be a function of x − y.) The Green's function

G(x, y) = Φ(x − y) − θ_y(x)

is then specified by

Δθ_y(x) = 0, x ∈ U;   θ_y(x) = Φ(x − y), x ∈ ∂U.

In particular, θ_y is a harmonic function in U.

Theorem 9.2. If u ∈ C²(Ū) solves (3.17), then

(3.18)   u(x) = ∫_U G(x, y) f(y) dy − ∫_∂U ∂G/∂ν_y(x, y) g(y) dS(y).

Remark. Note that the first integral solves the inhomogeneous PDE, with zero boundary condition, while the second integral should be harmonic and should satisfy the given boundary condition. That it is harmonic follows from the fact that G(x, y) is harmonic in x ∈ U for each y ∈ ∂U, so that the normal derivative with respect to y is also harmonic. To verify the boundary condition, we could try to show that the normal derivative acts like a δ function as x approaches the boundary. In fact, the proof uses the divergence theorem to recover the boundary data.

Proof of Theorem 9.2. Recall that G(x, y) has a singularity at y = x, because the fundamental solution has one. As with our solution of Poisson's equation on all of space using the fundamental solution, we exclude a small ball around x ∈ U, by integrating over the domain V_ε = U \ B(x, ε). Let u ∈ C²(Ū) (not necessarily a solution). Then, using Green's identity,

∫_{V_ε} [u(y) Δ_y Φ(x − y) − Φ(x − y) Δu(y)] dy = ∫_{∂V_ε} [u(y) ∂Φ/∂ν_y(x − y) − Φ(x − y) ∂u/∂ν(y)] dS(y).

Now Φ(x − y) is harmonic away from y = x, so the first term on the left hand side is zero. On the right hand side, there are two terms, but there are also two parts of the boundary, namely ∂U and ∂B(x, ε). Let

I_ε = ∫_{∂B(x,ε)} u(y) ∂Φ/∂ν_y(x − y) dS(y).

As for the solution of Poisson's equation, I_ε → u(x) as ε → 0. (See §8.2.) Similarly, let

J_ε = ∫_{∂B(x,ε)} Φ(x − y) ∂u/∂ν(y) dS(y).

Now u is continuously differentiable, and Φ(x − y) is constant on ∂B(x, ε) (proportional to ln ε for n = 2, and to ε^{−(n−2)} for n > 2). Consequently, J_ε → 0 as ε → 0. Letting ε → 0, we obtain

−∫_U Φ(x − y) Δu(y) dy = u(x) + ∫_∂U [u(y) ∂Φ/∂ν_y(x − y) − Φ(x − y) ∂u/∂ν(y)] dS(y).


Similarly, since θ_x is harmonic in U, it has no singularity at x = y, and θ_x(y) = Φ(x − y) for y ∈ ∂U, so we have

−∫_U θ_x(y) Δu(y) dy = ∫_∂U [u(y) ∂θ_x/∂ν(y) − Φ(x − y) ∂u/∂ν(y)] dS(y).

Subtracting, and using G(x, y) = Φ(x − y) − θ_x(y), we obtain

(3.19)   −∫_U G(x, y) Δu(y) dy = u(x) + ∫_∂U u(y) ∂G/∂ν_y(x, y) dS(y).

(The other terms cancel.) Finally, if u is a solution of (3.17), then Δu = −f in U and u = g on ∂U, so (3.19) reduces to (3.18), and we have proved the theorem.
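In one dimension the representation (3.18) can be verified directly: on U = (0, 1) the "surface integral" over ∂U reduces to endpoint terms, and a short computation with (1.6) gives u(x) = ∫_0^1 G(x, y) f(y) dy + (1 − x) g(0) + x g(1). The sketch below (illustrative, not from the notes; it assumes NumPy) checks this against the exact solution u(x) = x² of −u″ = −2 with g(0) = 0, g(1) = 1:

```python
import numpy as np

def G(x, y):
    # Green's function (1.6) for -d^2/dx^2 on (0, 1)
    return np.where(y <= x, y * (1 - x), x * (1 - y))

# Test problem: u(x) = x^2, so f = -u'' = -2, with boundary data g(0)=0, g(1)=1.
y = np.linspace(0.0, 1.0, 4001)
h = y[1] - y[0]
f = -2.0 * np.ones_like(y)
g0, g1 = 0.0, 1.0

def u(x):
    w = G(x, y) * f
    integral = (w.sum() - 0.5 * (w[0] + w[-1])) * h   # trapezoid rule
    # 1D analogue of (3.18): volume term plus boundary (endpoint) terms
    return integral + (1 - x) * g0 + x * g1

for x in (0.25, 0.5, 0.75):
    print(x, u(x), x ** 2)   # numerical and exact values agree
```

The boundary terms (1 − x) g(0) + x g(1) are exactly −Σ g ∂G/∂ν at the two endpoints, mirroring the surface integral in (3.18).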



9.3.3. The Method of Images. In some examples, we can use the so-called Method of Images to construct the Green's function. The idea is that Φ(x − y) represents a field (more precisely, a potential; the study of solutions of Poisson's equation is sometimes referred to as Potential Theory) induced by a singularity at x = y, which induces a field on the boundary ∂U. Suppose we can construct a point x̃ outside Ū such that the function Φ(C(x̃ − y)) exactly cancels Φ(x − y) on the boundary ∂U, for some scale factor C, possibly depending on x. Then the new field is harmonic with respect to y ∈ U (since the Laplacian is invariant under scaling and translation by a constant), so we can set θ_x(y) = Φ(C(x̃ − y)). The point x̃ is the image of x that gives the technique its name. Here is how it works in two examples: (i) U is a half space, and (ii) U is a ball.

(i) Let U = {x ∈ ℝ^n : x_n > 0}. For x = (x₁, . . . , x_n) ∈ U, define the image x̃ = (x₁, . . . , x_{n−1}, −x_n), and let

G(x, y) = Φ(x − y) − Φ(x̃ − y),   x_n > 0, y_n ≥ 0.

Since Φ(x̃ − y) is harmonic in U (with respect to y, for x ∈ U), we see that G(x, y) is the Green's function if we can show G(x, y) = 0 for y_n = 0, x_n > 0. But for y_n = 0, we have

|y − x̃|² = Σ_{j=1}^{n−1} (y_j − x_j)² + x_n² = |y − x|²,

so that Φ(x − y) = Φ(x̃ − y). (See Figure 9.3.)

(ii) Now consider U = B(0, 1), the unit ball in ℝ^n. Here, the image x̃ includes a scaling, in addition to reflection through the boundary. (See Figure 9.4.) We define x̃ = x/|x|², and let

G(x, y) = Φ(x − y) − Φ(|x|(x̃ − y)).

(Note that |x̃| = 1/|x|.) To prove that G(x, y) = 0 for x ∈ U, y ∈ ∂U, we need to show that

|x − y| = |x| |x̃ − y|,   x̃ = x/|x|², |y| = 1.

Proof.

|x|² |x̃ − y|² = | x/|x| − |x| y |² = 1 − 2 y·x + |x|² = |x − y|²,

using |y| = 1.



Figure 9.3. Green's function for a half plane.

Figure 9.4. Green's function for the unit ball.