
Laplace’s Equation

∆u = 0

We now turn to studying Laplace’s equation and its inhomogeneous version, Poisson’s equation, −∆u = f. A function u satisfying Laplace’s equation is called a harmonic function.

3.1 The Fundamental Solution

Consider Laplace’s equation in R^n,

  ∆u = 0,  x ∈ R^n.

Clearly, there are a lot of functions u which satisfy this equation. In particular, any constant function is harmonic. In addition, any function of the form u(x) = a_1 x_1 + … + a_n x_n for constants a_i is also a solution. Of course, we could list a number of others. Here, however, we are interested in finding a particular solution of Laplace’s equation which will allow us to solve Poisson’s equation. Given the symmetric nature of Laplace’s equation, we look for a radial solution. That is, we look for a harmonic function u on R^n such that u(x) = v(|x|). In addition to being a natural choice due to the symmetry of Laplace’s equation, radial solutions are natural to look for because they reduce a PDE to an ODE, which is generally easier to solve. Therefore, we look for a radial solution. If u(x) = v(|x|), then

  u_{x_i} = v'(|x|) x_i/|x|,

which implies

  u_{x_i x_i} = v''(|x|) x_i²/|x|² + v'(|x|) (1/|x| − x_i²/|x|³).

Summing over i = 1, …, n,

  ∆u = v''(|x|) + (n−1)/|x| · v'(|x|).

Letting r = |x|, we see that u(x) = v(|x|) is a radial solution of Laplace’s equation exactly when v satisfies

  v''(r) + (n−1)/r · v'(r) = 0.

Therefore,

  v''/v' = (1−n)/r  ⟹  ln v' = (1−n) ln r + C  ⟹  v'(r) = C/r^{n−1},

which implies

  v(r) = c1 ln r + c2,               n = 2,
  v(r) = c1/((2−n) r^{n−2}) + c2,    n ≥ 3.

From these calculations, we see that for any constants c1, c2, the function

  u(x) = c1 ln |x| + c2,               n = 2,
  u(x) = c1/((2−n)|x|^{n−2}) + c2,     n ≥ 3,   (3.1)

for x ∈ R^n, |x| ≠ 0, is a solution of Laplace’s equation in R^n − {0}. We notice that the function u defined in (3.1) satisfies ∆u(x) = 0 for x ≠ 0, but at x = 0, ∆u(0) is undefined. We claim that we can choose constants c1 and c2 appropriately so that

  −∆_x u = δ_0   (3.2)

in the sense of distributions. Recall that δ_0 is the distribution which is defined as follows: for all φ ∈ D, (δ_0, φ) = φ(0). Below, we will prove this claim. For now, though, assume we can prove it. That is, assume we can find constants c1, c2 such that u defined in (3.1) satisfies (3.2). Let Φ denote this solution of (3.2). Then, define

  v(x) = ∫_{R^n} Φ(x − y) f(y) dy.
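As a quick numerical sanity check (a sketch, not part of the notes), the radial profiles derived above, v(r) = ln r for n = 2 and v(r) = r^{2−n} for n ≥ 3, should satisfy v'' + (n−1)v'/r = 0; here the derivatives are approximated by central differences.

```python
import math

# Residual of the radial Laplace ODE  v''(r) + (n-1)/r * v'(r),
# with derivatives approximated by second-order central differences.
def radial_residual(v, r, n, h=1e-5):
    d1 = (v(r + h) - v(r - h)) / (2 * h)            # ~ v'(r)
    d2 = (v(r + h) - 2 * v(r) + v(r - h)) / h ** 2  # ~ v''(r)
    return d2 + (n - 1) / r * d1

res2 = radial_residual(math.log, 2.0, n=2)           # n = 2: v(r) = ln r
res3 = radial_residual(lambda r: 1.0 / r, 2.0, n=3)  # n = 3: v(r) = r^(2-n) = 1/r
print(res2, res3)  # both close to 0
```

Both residuals vanish up to finite-difference error, consistent with the derivation.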

Formally, we compute the Laplacian of v as follows:

  −∆_x v = −∫_{R^n} ∆_x Φ(x − y) f(y) dy
         = −∫_{R^n} ∆_y Φ(x − y) f(y) dy
         = ∫_{R^n} δ_x f(y) dy = f(x).

That is, v is a solution of Poisson’s equation! Of course, this set of equalities is entirely formal. We have not proven anything yet. However, we have motivated a solution formula for Poisson’s equation from a solution of (3.2). We now return to using the radial solution (3.1) to find a solution of (3.2). Define the function Φ as follows. For |x| ≠ 0, let

  Φ(x) = −(1/2π) ln |x|,                      n = 2,
  Φ(x) = (1/(n(n−2)α(n))) · 1/|x|^{n−2},      n ≥ 3,   (3.3)

where α(n) is the volume of the unit ball in R^n. We see that Φ satisfies Laplace’s equation on R^n − {0}. As we will show in the following claim, Φ satisfies −∆_x Φ = δ_0. For this reason, we call Φ the fundamental solution of Laplace’s equation.
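As an illustration (a numerical sketch, not part of the proof), one can check by finite differences that Φ is harmonic away from the origin; here for n = 2 and n = 3, where the normalizing constants are 2π and n(n−2)α(n) = 4π respectively.

```python
import math

def phi2(x, y):                      # n = 2 fundamental solution
    return -math.log(math.hypot(x, y)) / (2 * math.pi)

def phi3(x, y, z):                   # n = 3: n(n-2)*alpha(n) = 4*pi
    return 1.0 / (4 * math.pi * math.sqrt(x * x + y * y + z * z))

def laplacian(f, point, h=1e-3):
    # Sum of second-order central differences in each coordinate.
    total = 0.0
    for i in range(len(point)):
        p_plus = list(point); p_plus[i] += h
        p_minus = list(point); p_minus[i] -= h
        total += f(*p_plus) - 2 * f(*point) + f(*p_minus)
    return total / h ** 2

lap2 = laplacian(phi2, (0.8, -0.6))       # a point away from the origin
lap3 = laplacian(phi3, (1.0, 0.5, -0.3))  # likewise
print(lap2, lap3)  # both close to 0
```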

Claim 1. For Φ defined in (3.3), Φ satisfies −∆_x Φ = δ_0 in the sense of distributions. That is, for all g ∈ D,

  −∫_{R^n} Φ(x) ∆_x g(x) dx = g(0).

Proof. Let F_Φ be the distribution associated with the fundamental solution Φ. That is, let F_Φ : D → R be defined such that

  (F_Φ, g) = ∫_{R^n} Φ(x) g(x) dx  for all g ∈ D.

Recall that the derivative of a distribution F is defined as the distribution G such that (G, g) = −(F, g') for all g ∈ D. Therefore, the distributional Laplacian of Φ is defined as the distribution F_{∆Φ} such that

  (F_{∆Φ}, g) = (F_Φ, ∆g)  for all g ∈ D.

We will show that

  (F_Φ, ∆g) = −(δ_0, g) = −g(0)  for all g ∈ D,

which means −∆_x Φ = δ_0 in the sense of distributions. By definition,

  (F_Φ, ∆g) = ∫_{R^n} Φ(x) ∆g(x) dx.

Now we would like to apply the divergence theorem, but Φ has a singularity at x = 0. We get around this by breaking up the integral into two pieces: one piece consisting of the ball of radius δ about the origin, B(0, δ), and the other piece consisting of the complement of this ball in R^n. Therefore,

  ∫_{R^n} Φ(x) ∆g(x) dx = ∫_{B(0,δ)} Φ(x) ∆g(x) dx + ∫_{R^n − B(0,δ)} Φ(x) ∆g(x) dx ≡ I + J.

We look first at term I. For n ≥ 3, term I is bounded as follows:

  |∫_{B(0,δ)} (1/(n(n−2)α(n))) · (1/|x|^{n−2}) ∆g(x) dx| ≤ C |∆g|_{L∞} ∫_{B(0,δ)} 1/|x|^{n−2} dx
    ≤ C ∫_0^δ (1/r^{n−2}) ∫_{∂B(0,r)} dS(y) dr
    = C ∫_0^δ (1/r^{n−2}) nα(n) r^{n−1} dr
    = C nα(n) ∫_0^δ r dr = C nα(n) δ²/2.

For n = 2, term I is bounded as follows:

  |∫_{B(0,δ)} (1/2π) ln |x| ∆g(x) dx| ≤ C |∆g|_{L∞} ∫_{B(0,δ)} |ln |x|| dx
    ≤ C ∫_0^{2π} ∫_0^δ |ln r| r dr dθ ≤ C ∫_0^δ |ln r| r dr ≤ C |ln δ| δ².

Therefore, in either case, |I| → 0 as δ → 0⁺. Next, we look at term J. Using the fact that ∆_x Φ(x) = 0 for x ∈ R^n − B(0,δ), and applying the divergence theorem, we have

  ∫_{R^n − B(0,δ)} Φ(x) ∆_x g(x) dx = ∫_{R^n − B(0,δ)} ∆_x Φ(x) g(x) dx
    − ∫_{∂(R^n − B(0,δ))} (∂Φ/∂ν) g(x) dS(x) + ∫_{∂(R^n − B(0,δ))} Φ(x) (∂g/∂ν) dS(x)
    ≡ J1 + J2,

where ν is the outer normal to R^n − B(0,δ). The first integral on the right-hand side vanishes since ∆_x Φ = 0 there, and, since g ∈ D vanishes at ∞, we only need to calculate the boundary integrals over ∂B(0,δ). By a straightforward calculation,

  ∇Φ(x) = −x/(nα(n)|x|^n).

The outer unit normal to R^n − B(0,δ) on ∂B(0,δ) is given by

  ν = −x/|x|.

Therefore, the normal derivative of Φ on ∂B(0,δ) is given by

  ∂Φ/∂ν = (−x/(nα(n)|x|^n)) · (−x/|x|) = 1/(nα(n)|x|^{n−1}).

Therefore, J1 can be written as

  J1 = −∫_{∂B(0,δ)} (1/(nα(n)δ^{n−1})) g(x) dS(x) = −⨍_{∂B(0,δ)} g(x) dS(x).

Now if g is a continuous function, then

  −⨍_{∂B(0,δ)} g(x) dS(x) → −g(0)  as δ → 0⁺.

Next, we look at term J2. Using the fact that g ∈ D and, therefore, is infinitely differentiable, we have

  |J2| ≤ |∂g/∂ν|_{L∞(∂B(0,δ))} ∫_{∂B(0,δ)} |Φ(x)| dS(x) ≤ C ∫_{∂B(0,δ)} |Φ(x)| dS(x).

Now, for n = 2,

  ∫_{∂B(0,δ)} |Φ(x)| dS(x) = C ∫_{∂B(0,δ)} |ln |x|| dS(x) ≤ C |ln δ| ∫_{∂B(0,δ)} dS(x) = C |ln δ| (2πδ) ≤ C δ |ln δ|.

Lastly, for n ≥ 3,

  ∫_{∂B(0,δ)} |Φ(x)| dS(x) = C ∫_{∂B(0,δ)} 1/|x|^{n−2} dS(x) = (C/δ^{n−2}) ∫_{∂B(0,δ)} dS(x) = (C/δ^{n−2}) nα(n) δ^{n−1} ≤ C δ.

Therefore, we conclude that term J2 is bounded in absolute value by Cδ|ln δ| for n = 2 and by Cδ for n ≥ 3, and, therefore, |J2| → 0 as δ → 0⁺.

Combining these estimates, we see that

  ∫_{R^n} Φ(x) ∆_x g(x) dx = lim_{δ→0⁺} (I + J1 + J2) = −g(0).

Therefore, our claim is proved.

We now return to solving Poisson’s equation,

  −∆u = f,  x ∈ R^n.

From our discussion before the above claim, we expect the function

  v(x) ≡ ∫_{R^n} Φ(x − y) f(y) dy

to give us a solution of Poisson’s equation. We now prove that this is in fact true. First, we make a remark.

Remark. If we hope that the function v defined above solves Poisson’s equation, we must first verify that this integral actually converges. If we assume f has compact support on some bounded set K in R^n, and we additionally assume that f is bounded, then |f|_{L∞} ≤ C and we see that

  |∫_{R^n} Φ(x − y) f(y) dy| ≤ |f|_{L∞} ∫_K |Φ(x − y)| dy.

It is left as an exercise to verify that ∫_K |Φ(x − y)| dy < +∞ on any compact set K.

Theorem 2. (Solving Poisson’s Equation; Ref: Evans, p. 23.) Assume f ∈ C²(R^n) and has compact support. Let

  u(x) ≡ ∫_{R^n} Φ(x − y) f(y) dy,

where Φ is the fundamental solution of Laplace’s equation (3.3). Then

  1. u ∈ C²(R^n),
  2. −∆u = f in R^n.

Proof. By a change of variables, we write

  u(x) = ∫_{R^n} Φ(x − y) f(y) dy = ∫_{R^n} Φ(y) f(x − y) dy.

Let e_i = (0, …, 1, …, 0) be the unit vector in R^n with a 1 in the i-th slot. Then

  (u(x + h e_i) − u(x))/h = ∫_{R^n} Φ(y) (f(x + h e_i − y) − f(x − y))/h dy.

Now f ∈ C² implies

  (f(x + h e_i − y) − f(x − y))/h → ∂f/∂x_i (x − y)  as h → 0

uniformly on R^n. Therefore,

  ∂u/∂x_i (x) = ∫_{R^n} Φ(y) ∂f/∂x_i (x − y) dy.

Similarly,

  ∂²u/∂x_i∂x_j (x) = ∫_{R^n} Φ(y) ∂²f/∂x_i∂x_j (x − y) dy.

This function is continuous because the right-hand side is continuous. Therefore, u ∈ C²(R^n). By the above calculations and Claim 1, we see that

  ∆_x u(x) = ∫_{R^n} Φ(y) ∆_x f(x − y) dy = ∫_{R^n} Φ(y) ∆_y f(x − y) dy = −f(x).

3.2 Properties of Harmonic Functions

3.2.1 Mean Value Property

In this section, we prove a mean value property which all harmonic functions satisfy. First, we give some definitions:

  B(x, r) = ball of radius r about x in R^n,
  ∂B(x, r) = boundary of ball of radius r about x in R^n,
  α(n) = volume of unit ball in R^n,
  nα(n) = surface area of unit ball in R^n.

For a function u defined on B(x, r), the average of u on B(x, r) is given by

  ⨍_{B(x,r)} u(y) dy = (1/(α(n) r^n)) ∫_{B(x,r)} u(y) dy.

For a function u defined on ∂B(x, r), the average of u on ∂B(x, r) is given by

  ⨍_{∂B(x,r)} u(y) dS(y) = (1/(nα(n) r^{n−1})) ∫_{∂B(x,r)} u(y) dS(y).

Theorem 3. (Mean-Value Formulas) Let Ω ⊂ R^n. If u ∈ C²(Ω) is harmonic, then

  u(x) = ⨍_{∂B(x,r)} u(y) dS(y) = ⨍_{B(x,r)} u(y) dy

for every ball B(x, r) ⊂ Ω.

Proof. Assume u ∈ C²(Ω) is harmonic. For r > 0, define

  φ(r) = ⨍_{∂B(x,r)} u(y) dS(y).

For r = 0, define φ(0) = u(x). Notice that if u is a smooth function, then lim_{r→0⁺} φ(r) = u(x), and, therefore, φ is a continuous function. Therefore, if we can show that φ'(r) = 0, then we can conclude that φ is a constant function, and, therefore,

  u(x) = ⨍_{∂B(x,r)} u(y) dS(y).

We prove φ'(r) = 0 as follows. First, making a change of variables, we have

  φ(r) = ⨍_{∂B(x,r)} u(y) dS(y) = ⨍_{∂B(0,1)} u(x + rz) dS(z).

Therefore,

  φ'(r) = ⨍_{∂B(0,1)} ∇u(x + rz) · z dS(z)
        = ⨍_{∂B(x,r)} ∇u(y) · (y − x)/r dS(y)
        = ⨍_{∂B(x,r)} ∂u/∂ν (y) dS(y)
        = (1/(nα(n) r^{n−1})) ∫_{∂B(x,r)} ∂u/∂ν (y) dS(y)
        = (1/(nα(n) r^{n−1})) ∫_{B(x,r)} ∇ · (∇u) dy   (by the Divergence Theorem)
        = (1/(nα(n) r^{n−1})) ∫_{B(x,r)} ∆u(y) dy = 0,

using the fact that u is harmonic. Therefore, φ'(r) = 0, and we have proven the first part of the theorem. It remains to prove that

  u(x) = ⨍_{B(x,r)} u(y) dy.

We do so as follows, using the first result:

  ∫_{B(x,r)} u(y) dy = ∫_0^r ∫_{∂B(x,s)} u(y) dS(y) ds
    = ∫_0^r nα(n) s^{n−1} ⨍_{∂B(x,s)} u(y) dS(y) ds
    = ∫_0^r nα(n) s^{n−1} u(x) ds
    = nα(n) u(x) ∫_0^r s^{n−1} ds = α(n) u(x) r^n,

which implies

  u(x) = (1/(α(n) r^n)) ∫_{B(x,r)} u(y) dy = ⨍_{B(x,r)} u(y) dy,

as claimed.

3.2.2 Converse to Mean Value Property

In this section, we prove that if a smooth function u satisfies the mean value property described above, then u must be harmonic.

Theorem 4. If u ∈ C²(Ω) satisfies

  u(x) = ⨍_{∂B(x,r)} u(y) dS(y)

for all B(x, r) ⊂ Ω, then u is harmonic.

Proof. Let

  φ(r) = ⨍_{∂B(x,r)} u(y) dS(y).

As described in the previous theorem, if u satisfies the mean value property for all B(x, r) ⊂ Ω, then φ'(r) = 0. But, by the computation in the previous proof,

  φ'(r) = (r/n) ⨍_{B(x,r)} ∆u(y) dy.

Suppose u is not harmonic. Then there exists some ball B(x, r) ⊂ Ω on which ∆u > 0 or ∆u < 0. Without loss of generality, we assume there is some ball B(x, r) such that ∆u > 0. Therefore,

  φ'(r) = (r/n) ⨍_{B(x,r)} ∆u(y) dy > 0,

which contradicts the fact that φ'(r) = 0. Therefore, u must be harmonic.

3.2.3 Maximum Principle

In this section, we prove that if u is a harmonic function on a bounded domain Ω in R^n, then u attains its maximum value on the boundary of Ω.

Theorem 5. Suppose Ω ⊂ R^n is open and bounded. Suppose u ∈ C²(Ω) ∩ C(Ω̄) is harmonic. Then

  1. (Maximum principle) max_{Ω̄} u(x) = max_{∂Ω} u(x).
  2. (Strong maximum principle) If Ω is connected and there exists a point x_0 ∈ Ω such that u(x_0) = max_{Ω̄} u(x), then u is constant within Ω.

Proof. We prove the second assertion; the first follows from the second. Suppose there exists a point x_0 in Ω such that u(x_0) = M = max_{Ω̄} u(x). Then for 0 < r < dist(x_0, ∂Ω), the mean value property says

  M = u(x_0) = ⨍_{B(x_0,r)} u(y) dy ≤ M.

But this forces ⨍_{B(x_0,r)} u(y) dy = M, and, therefore, u(y) ≡ M for y ∈ B(x_0, r). To prove u ≡ M throughout Ω, one continues with this argument, filling Ω with balls.
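Both the mean value property used in this proof and the maximum principle itself can be illustrated numerically (a sketch; the harmonic function u = x² − y² and the unit square are arbitrary choices, not from the notes):

```python
import math

def u(x, y):          # u = x^2 - y^2 is harmonic: u_xx + u_yy = 2 - 2 = 0
    return x * x - y * y

# Mean value property: the average of u over a circle equals u at the center.
cx, cy, r, m = 0.3, -0.2, 0.5, 2000
avg = sum(u(cx + r * math.cos(2 * math.pi * k / m),
            cy + r * math.sin(2 * math.pi * k / m)) for k in range(m)) / m
print(avg, u(cx, cy))   # the two values agree

# Maximum principle: on a grid over [0,1]x[0,1], the maximum of u occurs
# at a boundary grid point.
n = 50
pts = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]
interior_max = max(u(x, y) for x, y in pts if 0 < x < 1 and 0 < y < 1)
boundary_max = max(u(x, y) for x, y in pts if x in (0.0, 1.0) or y in (0.0, 1.0))
print(boundary_max >= interior_max)  # True
```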

Remark. By replacing u by −u above, we can prove the Minimum Principle.

Next, we use the maximum principle to prove uniqueness of solutions to Poisson’s equation on bounded domains Ω in R^n.

Theorem 6. (Uniqueness) Let Ω be an open, bounded subset of R^n. There exists at most one solution u ∈ C²(Ω) ∩ C(Ω̄) of the boundary-value problem

  −∆u = f,  x ∈ Ω,
  u = g,  x ∈ ∂Ω.

Proof. Suppose there are two solutions u and v. Let w = u − v and let w̃ = v − u. Then w and w̃ satisfy

  ∆w = 0,  x ∈ Ω,
  w = 0,  x ∈ ∂Ω.

Therefore, using the maximum principle, we conclude

  max_{Ω̄} |u − v| = max_{∂Ω} |u − v| = 0.

3.2.4 Smoothness of Harmonic Functions

In this section, we prove that harmonic functions are C^∞.

Theorem 7. If u ∈ C(Ω) and u satisfies the mean value property

  u(x) = ⨍_{∂B(x,r)} u(y) dS(y)

for every ball B(x, r) ⊂ Ω, then u ∈ C^∞(Ω).

Remarks.
  1. As proven earlier, if u ∈ C²(Ω) ∩ C(Ω̄) and u is harmonic, then u satisfies the mean value property; therefore, if u ∈ C²(Ω) ∩ C(Ω̄) and u is harmonic, then u ∈ C^∞(Ω).
  2. In fact, if u satisfies the hypothesis of the above theorem, then u is analytic, but we will not prove that here. (See Evans.)

Proof. First, we introduce the function η such that

  η(x) ≡ C e^{1/(|x|² − 1)},  |x| < 1,
  η(x) ≡ 0,                   |x| ≥ 1,

where the constant C is chosen such that ∫_{R^n} η(x) dx = 1. Notice that η ∈ C^∞(R^n) and η has compact support. Now define the function η_ε(x) such that

  η_ε(x) ≡ (1/ε^n) η(x/ε).
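A one-dimensional numerical sketch of this mollifier (taking n = 1, with the constant C found by quadrature rather than in closed form; none of these numbers appear in the notes):

```python
import math

def eta(x):
    # 1-D bump function: exp(1/(x^2 - 1)) for |x| < 1, zero otherwise.
    # The normalizing constant C is determined below by quadrature.
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1.0 else 0.0

m = 20000
h = 2.0 / m
raw = sum(eta(-1.0 + (k + 0.5) * h) for k in range(m)) * h  # integral of the bump
C = 1.0 / raw

def eta_eps(x, eps):
    # Rescaled mollifier (1/eps) * eta(x/eps), supported in |x| < eps.
    return C * eta(x / eps) / eps

# The rescaling preserves the integral: int eta_eps dx = 1 for any eps > 0.
eps = 0.25
integral = sum(eta_eps(-eps + (k + 0.5) * (2 * eps / m), eps)
               for k in range(m)) * (2 * eps / m)
print(integral)  # close to 1
```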

Then η_ε ∈ C^∞(R^n), supp(η_ε) ⊂ {x : |x| < ε}, and ∫_{R^n} η_ε(x) dx = 1. Now define

  u_ε(x) = ∫_Ω η_ε(x − y) u(y) dy,

and choose ε such that ε < dist(x, ∂Ω). Now we claim

  1. u_ε ∈ C^∞,
  2. u_ε(x) = u(x).

First, (1) holds because η_ε ∈ C^∞. We prove (2) as follows. Using the fact that supp η_ε(x − ·) ⊂ {y : |x − y| < ε}, and writing η(r/ε) for the (radial) value of η on vectors of length r/ε, we have

  u_ε(x) = ∫_{B(x,ε)} η_ε(x − y) u(y) dy
    = (1/ε^n) ∫_0^ε η(r/ε) ∫_{∂B(x,r)} u(y) dS(y) dr
    = (1/ε^n) ∫_0^ε η(r/ε) nα(n) r^{n−1} ⨍_{∂B(x,r)} u(y) dS(y) dr
    = u(x) (1/ε^n) ∫_0^ε η(r/ε) nα(n) r^{n−1} dr   (by the mean value property)
    = u(x) (1/ε^n) ∫_0^ε η(r/ε) ∫_{∂B(0,r)} dS(y) dr
    = u(x) ∫_{B(0,ε)} η_ε(y) dy = u(x).

Therefore, u_ε(x) = u(x) for every x with dist(x, ∂Ω) > ε, and u_ε ∈ C^∞; therefore, u ∈ C^∞(Ω).

3.2.5 Liouville’s Theorem

In this section, we show that the only functions which are bounded and harmonic on R^n are constant functions.

Theorem 8. Suppose u : R^n → R is harmonic and bounded. Then u is constant.

Proof. Let x_0 ∈ R^n. By the previous theorem, we know that u is C^∞, and, therefore,

  ∆u = 0 ⟹ ∆u_{x_i} = 0  for i = 1, …, n.

Therefore, u_{x_i} is harmonic and satisfies the mean value property. Therefore,

  u_{x_i}(x_0) = ⨍_{B(x_0,r)} u_{x_i}(y) dy
    = (1/(α(n) r^n)) ∫_{B(x_0,r)} u_{x_i}(y) dy
    = (1/(α(n) r^n)) ∫_{∂B(x_0,r)} u ν_i dS(y)

by the Divergence theorem, where ν = (ν_1, …, ν_n) is the outward unit normal to B(x_0, r). Therefore,

  |u_{x_i}(x_0)| ≤ (1/(α(n) r^n)) |u|_{L∞(∂B(x_0,r))} |ν_i|_{L∞} ∫_{∂B(x_0,r)} dS(y)
    ≤ (nα(n) r^{n−1}/(α(n) r^n)) |u|_{L∞(R^n)}
    = (n/r) |u|_{L∞(R^n)} ≤ C/r,

by the assumption that u is bounded. Now this is true for all r. Taking the limit as r → +∞, we see that |u_{x_i}(x_0)| = 0. Therefore, u_{x_i}(x_0) = 0. This is true for i = 1, …, n and for all x_0 ∈ R^n. Therefore, we conclude that u ≡ constant.

As a corollary of Liouville’s Theorem, we have the following representation formula for all bounded solutions of Poisson’s equation on R^n, n ≥ 3.

Theorem 9. (Representation Formula) Let f ∈ C²(R^n) with compact support. Let n ≥ 3. Then every bounded solution of

  −∆u = f,  x ∈ R^n,   (3.4)

has the form

  u(x) = ∫_{R^n} Φ(x − y) f(y) dy + C

for some constant C, where Φ(x) is the fundamental solution of Laplace’s equation in R^n.

Proof. As shown earlier,

  ū(x) ≡ ∫_{R^n} Φ(x − y) f(y) dy

is a solution of (3.4). Here we show this is a bounded solution for n ≥ 3. Recall that the fundamental solution of Laplace’s equation in R^n, n ≥ 3, is given by

  Φ(x) = K/|x|^{n−2},  where K = 1/(n(n−2)α(n)).

Fix ε > 0. Then we have

  |ū(x)| = |K ∫_{R^n} f(y)/|x − y|^{n−2} dy|
    ≤ K ∫_{B(x,ε)} |f(y)|/|x − y|^{n−2} dy + K ∫_{R^n − B(x,ε)} |f(y)|/|x − y|^{n−2} dy.

It is easy to see that the first term on the right-hand side is bounded:

  K ∫_{B(x,ε)} |f(y)|/|x − y|^{n−2} dy ≤ |f|_{L∞} K ∫_{B(x,ε)} dy/|x − y|^{n−2} ≤ C.

The second term on the right-hand side is bounded as well, using the assumption that f ∈ C²(R^n) with compact support:

  K ∫_{R^n − B(x,ε)} |f(y)|/|x − y|^{n−2} dy ≤ (K/ε^{n−2}) ∫_{R^n} |f(y)| dy ≤ C.

Therefore, we conclude that ū is a bounded solution of (3.4). Now suppose there is another bounded solution u of (3.4). Let w(x) = u(x) − ū(x). Then w is a bounded, harmonic function on R^n. Therefore, by Liouville’s Theorem, w must be constant. Therefore, we conclude that

  u(x) = ū(x) + C = ∫_{R^n} Φ(x − y) f(y) dy + C,

as claimed.
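A numerical sketch of the n = 3 case (with f taken to be the indicator of the ball of radius 1/2, a hypothetical choice that is not C², but enough to illustrate the behavior): away from the support of f, the potential ∫Φ(x−y)f(y) dy is close to (∫f)·Φ(x), so it decays like 1/|x|^{n−2} and, in particular, is bounded.

```python
import math

K = 1.0 / (4 * math.pi)  # Phi(x) = K/|x| for n = 3, since n(n-2)*alpha(n) = 4*pi

def potential(x, m=40):
    # Riemann-sum approximation of u(x) = int Phi(x - y) f(y) dy
    # for f = indicator of the ball |y| < 1/2 (a hypothetical choice of f).
    h = 1.0 / m
    total = 0.0
    for i in range(m):
        for j in range(m):
            for k in range(m):
                y = (-0.5 + (i + 0.5) * h,
                     -0.5 + (j + 0.5) * h,
                     -0.5 + (k + 0.5) * h)
                if y[0] ** 2 + y[1] ** 2 + y[2] ** 2 < 0.25:
                    total += K / math.dist(x, y) * h ** 3
    return total

x = (2.0, 0.0, 0.0)                        # a point outside the support of f
u_num = potential(x)
mass = (4.0 / 3.0) * math.pi * 0.5 ** 3    # total integral of f
u_far = mass * K / 2.0                     # (int f) * Phi(x) with |x| = 2
print(u_num, u_far)   # nearly equal
```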

3.3 Solving Laplace’s Equation on Bounded Domains

3.3.1 Laplace’s Equation on a Rectangle

In this section, we will solve Laplace’s equation on a rectangle in R². First, we consider the case of Dirichlet boundary conditions. Let Ω = {(x, y) ∈ R² : 0 < x < a, 0 < y < b}. We want to look for a solution of the following:

  u_xx + u_yy = 0,  (x, y) ∈ Ω,
  u(0, y) = g1(y),  u(a, y) = g2(y),  0 < y < b,
  u(x, 0) = g3(x),  u(x, b) = g4(x),  0 < x < a.   (3.5)

In order to do so, we first consider the following simpler example. From this, we will show how to solve the more general problem above.

Example 10. Consider

  u_xx + u_yy = 0,  (x, y) ∈ Ω,
  u(0, y) = g1(y),  u(a, y) = 0,  0 < y < b,
  u(x, 0) = 0,  u(x, b) = 0,  0 < x < a.   (3.6)

We use separation of variables. We look for a solution of the form u(x, y) = X(x)Y(y). Plugging this into our equation, we get

  X''Y + XY'' = 0.

Now dividing by XY, we arrive at

  X''/X + Y''/Y = 0,

which implies

  Y''/Y = −X''/X = −λ

for some constant λ.

By our boundary conditions, we want Y(0) = 0 = Y(b). Therefore, we begin by solving the eigenvalue problem

  Y'' = −λY,  0 < y < b,
  Y(0) = 0 = Y(b).

As we know, the solutions of this eigenvalue problem are given by

  Y_n(y) = sin(nπy/b),  λ_n = (nπ/b)².

We now turn to solving

  X'' = (nπ/b)² X

with the boundary condition X(a) = 0. The solutions of this ODE are given by

  X_n(x) = A_n cosh(nπx/b) + B_n sinh(nπx/b).

Now the boundary condition X(a) = 0 implies

  A_n cosh(nπa/b) + B_n sinh(nπa/b) = 0.   (3.7)

Therefore,

  u_n(x, y) = X_n(x) Y_n(y) = [A_n cosh(nπx/b) + B_n sinh(nπx/b)] sin(nπy/b),

where A_n, B_n satisfy (3.7), is a solution of Laplace’s equation on Ω which satisfies the boundary conditions u(x, 0) = 0, u(x, b) = 0, and u(a, y) = 0. As we know, Laplace’s equation is linear. Therefore, we can take any combination of the solutions {u_n} and get a solution of Laplace’s equation which satisfies these three boundary conditions. To solve our boundary-value problem (3.6), it remains to find coefficients A_n, B_n which not only satisfy (3.7), but also satisfy the condition u(0, y) = g1(y). That is, we look for a solution of the form

  u(x, y) = Σ_{n=1}^∞ [A_n cosh(nπx/b) + B_n sinh(nπx/b)] sin(nπy/b),

where we need

  u(0, y) = Σ_{n=1}^∞ A_n sin(nπy/b) = g1(y).

That is, we want to be able to express g1 in terms of its Fourier sine series on the interval [0, b]. Assuming g1 is a “nice” function, we can do this. From our earlier discussion of Fourier series, we know that the Fourier sine series of a function g1 is given by

  g1(y) ~ Σ_{n=1}^∞ A_n sin(nπy/b),

where the coefficients A_n are given by

  A_n = ⟨g1, sin(nπy/b)⟩ / ⟨sin(nπy/b), sin(nπy/b)⟩,

where the L²-inner product is taken over the interval [0, b]. With these A_n, condition (3.7) gives

  B_n = −coth(nπa/b) A_n.

Therefore, to summarize, we have found a solution of (3.6) given by

  u(x, y) = Σ_{n=1}^∞ [A_n cosh(nπx/b) + B_n sinh(nπx/b)] sin(nπy/b),

with A_n and B_n as above. Now we return to considering (3.5). For the general boundary value problem on a rectangle with Dirichlet boundary conditions, we can find a solution by finding four separate solutions u_i for i = 1, …, 4 such that each u_i is identically zero on three of the sides and satisfies the boundary condition on the fourth side. For example, we use the procedure in the above example to find a function u1(x, y) which is harmonic on Ω and such that u1(0, y) = g1(y) and u1(a, y) = 0 for 0 < y < b, and u1(x, 0) = 0 = u1(x, b) for 0 < x < a. Similarly, we find functions u2, u3 and u4 which vanish on three of the sides but satisfy the fourth boundary condition. The sum u1 + u2 + u3 + u4 then solves (3.5).

We now consider an example where we have a mixed boundary condition on one side.

Example 11. Let Ω = {(x, y) ∈ R² : 0 < x < L, 0 < y < H}. Consider the following boundary value problem:

  u_xx + u_yy = 0,  (x, y) ∈ Ω,
  u(0, y) = 0,  u(L, y) = 0,  0 < y < H,
  u(x, 0) − u_y(x, 0) = 0,  u(x, H) = f(x),  0 < x < L.   (3.8)

Using separation of variables, we look for a solution of the form u(x, y) = X(x)Y(y), and we have

  X''/X = −Y''/Y = −λ.

We first look to solve

  X'' = −λX,
  X(0) = 0 = X(L).

As we know, the solutions of this eigenvalue problem are given by

  X_n(x) = sin(nπx/L),  λ_n = (nπ/L)².

Now we need to solve

  Y'' = (nπ/L)² Y

with the boundary condition Y(0) − Y'(0) = 0. The solutions of this ODE are given by

  Y_n(y) = A_n cosh(nπy/L) + B_n sinh(nπy/L).

The boundary condition Y(0) − Y'(0) = 0 implies

  A_n − B_n (nπ/L) = 0.

Therefore,

  Y_n(y) = B_n [(nπ/L) cosh(nπy/L) + sinh(nπy/L)],

and we look for a solution of (3.8) of the form

  u(x, y) = Σ_{n=1}^∞ B_n sin(nπx/L) [(nπ/L) cosh(nπy/L) + sinh(nπy/L)].

Substituting in the condition u(x, H) = f(x), we have

  u(x, H) = Σ_{n=1}^∞ B_n sin(nπx/L) [(nπ/L) cosh(nπH/L) + sinh(nπH/L)] = f(x),  0 < x < L.

Recall the Fourier sine series of f on [0, L] is given by

  f ~ Σ_{n=1}^∞ A_n sin(nπx/L),

where

  A_n = ⟨f, sin(nπx/L)⟩ / ⟨sin(nπx/L), sin(nπx/L)⟩,

and the L²-inner product is taken over (0, L). Therefore, in order for our boundary condition u(x, H) = f(x) to be satisfied, we need B_n to satisfy

  B_n [(nπ/L) cosh(nπH/L) + sinh(nπH/L)] = ⟨f, sin(nπx/L)⟩ / ⟨sin(nπx/L), sin(nπx/L)⟩.

Using the fact that

  ⟨sin(nπx/L), sin(nπx/L)⟩ = ∫_0^L sin²(nπx/L) dx = L/2,

the solution of (3.8) is given by

  u(x, y) = Σ_{n=1}^∞ B_n sin(nπx/L) [(nπ/L) cosh(nπy/L) + sinh(nπy/L)],

where

  B_n = (2/L) [(nπ/L) cosh(nπH/L) + sinh(nπH/L)]^{−1} ∫_0^L f(x) sin(nπx/L) dx.

3.3.2 Laplace’s Equation on a Disk

In this section, we consider Laplace’s equation on a disk in R². That is, let Ω = {(x, y) ∈ R² : x² + y² < a²}. Consider

  u_xx + u_yy = 0,  (x, y) ∈ Ω,
  u = h(θ),  (x, y) ∈ ∂Ω.   (3.9)

To solve, we write this equation in polar coordinates as follows. To transform our equation into polar coordinates, we will write the operators ∂_x and ∂_y in polar coordinates, where r = r(x, y) and θ = θ(x, y). We will use the facts that

  x² + y² = r²,  tan θ = y/x.

Consider a function u such that u = u(r, θ), where r = r(x, y) and θ = θ(x, y). Then

  ∂u/∂x = u_r r_x + u_θ θ_x = u_r x/(x² + y²)^{1/2} − u_θ y/(x² + y²) = u_r cos θ − u_θ (sin θ)/r.

Therefore, the operator ∂/∂x can be written in polar coordinates as

  ∂/∂x = cos θ ∂/∂r − (sin θ)/r ∂/∂θ.

Similarly, the operator ∂/∂y can be written in polar coordinates as

  ∂/∂y = sin θ ∂/∂r + (cos θ)/r ∂/∂θ.

Now squaring these operators, we have

  ∂²/∂x² = (cos θ ∂/∂r − (sin θ)/r ∂/∂θ)²
    = cos²θ ∂²/∂r² + 2 (sin θ cos θ)/r² ∂/∂θ − 2 (sin θ cos θ)/r ∂²/∂r∂θ + (sin²θ)/r ∂/∂r + (sin²θ)/r² ∂²/∂θ²,

and, similarly,

  ∂²/∂y² = (sin θ ∂/∂r + (cos θ)/r ∂/∂θ)²
    = sin²θ ∂²/∂r² − 2 (sin θ cos θ)/r² ∂/∂θ + 2 (sin θ cos θ)/r ∂²/∂r∂θ + (cos²θ)/r ∂/∂r + (cos²θ)/r² ∂²/∂θ².

Combining the above terms, we can write the operator ∂²/∂x² + ∂²/∂y² in polar coordinates as

  ∂²/∂x² + ∂²/∂y² = ∂²/∂r² + (1/r) ∂/∂r + (1/r²) ∂²/∂θ².

Therefore, in polar coordinates, Laplace’s equation is written as

  u_rr + (1/r) u_r + (1/r²) u_θθ = 0.   (3.10)

Now we will solve it using separation of variables. In particular, we look for a solution of the form u(r, θ) = R(r)Θ(θ). The boundary condition for this problem is u = h(θ) for (x, y) ∈ ∂Ω. Substituting a function of the form u(r, θ) = R(r)Θ(θ) into (3.10), our equation becomes

  R''Θ + (1/r) R'Θ + (1/r²) RΘ'' = 0.

Dividing by RΘ, our equation becomes

  R''/R + R'/(rR) + Θ''/(r²Θ) = 0.

Multiplying by r², we are led to the equations

  −Θ''/Θ = r² R''/R + r R'/R = λ

for some scalar λ. In particular, since Θ must be 2π-periodic, we are led to the following eigenvalue problem with periodic boundary conditions:

  Θ'' = −λΘ,  0 < θ < 2π,
  Θ(0) = Θ(2π),  Θ'(0) = Θ'(2π).

Recall from our earlier work that periodic boundary conditions imply our eigenfunctions and eigenvalues are

  Θ_n(θ) = A_n cos(nθ) + B_n sin(nθ),  λ_n = n²,  n = 0, 1, 2, ….

Now in order to solve (3.10), for each λ_n we need to solve the second-order ODE

  r² R_n'' + r R_n' = λ_n R_n.

That is,

  r² R_n'' + r R_n' − n² R_n = 0.

We look for a solution of the form R(r) = r^α for some α. Doing so, for n ≥ 1, our ODE becomes

  (α² − n²) r^α = 0.

Therefore, for each n ≥ 1, we have found two linearly independent solutions,

  R_n(r) = r^n  and  R_n(r) = r^{−n}.

If n = 0, we have only found one linearly independent solution so far, R_0(r) = 1. Recall that a second-order ODE will have two linearly independent solutions. We look for another linearly independent solution. For n = 0, our equation becomes

  r² R'' + r R' = 0.

Dividing by r, our equation becomes r R'' + R' = 0. A linearly independent solution of this equation is R_0(r) = ln r. Therefore, for each n ≥ 0, we have found a solution of (3.10) of the form

  u_0(r, θ) = A_0 [C_0 + D_0 ln r],
  u_n(r, θ) = [C_n r^n + D_n r^{−n}] [A_n cos(nθ) + B_n sin(nθ)],  n ≥ 1.

But we don’t want a solution which blows up as r → 0⁺. Therefore, we reject the solutions ln r and r^{−n}, and we consider a solution of (3.9) of the form

  u(r, θ) = Σ_{n=0}^∞ r^n [A_n cos(nθ) + B_n sin(nθ)].

In order to satisfy the boundary condition u(a, θ) = h(θ), we need

  Σ_{n=0}^∞ a^n [A_n cos(nθ) + B_n sin(nθ)] = h(θ).

Using the fact that our eigenfunctions are orthogonal on [0, 2π], we can solve for our coefficients A_n and B_n as follows. Multiplying the above equation by cos(nθ) and integrating over [0, 2π], we have

  A_n = (1/a^n) ⟨h(θ), cos(nθ)⟩ / ⟨cos(nθ), cos(nθ)⟩ = (1/(πa^n)) ∫_0^{2π} h(θ) cos(nθ) dθ

for n = 1, 2, ….

Similarly, multiplying by sin(nθ) and integrating over [0, 2π], we have

  B_n = (1/a^n) ⟨h(θ), sin(nθ)⟩ / ⟨sin(nθ), sin(nθ)⟩ = (1/(πa^n)) ∫_0^{2π} h(θ) sin(nθ) dθ.

For n = 0, we have

  A_0 = ⟨h(θ), 1⟩ / ⟨1, 1⟩ = (1/2π) ∫_0^{2π} h(θ) dθ.

To summarize, we have found a solution of Laplace’s equation on the disk in polar coordinates, given by

  u(r, θ) = Σ_{n=0}^∞ r^n [A_n cos(nθ) + B_n sin(nθ)],

where

  A_0 = (1/2π) ∫_0^{2π} h(θ) dθ,
  A_n = (1/(πa^n)) ∫_0^{2π} h(θ) cos(nθ) dθ,
  B_n = (1/(πa^n)) ∫_0^{2π} h(θ) sin(nθ) dθ.

Now we will rewrite this solution in terms of a single integral by substituting A_n and B_n into the series solution above. Doing so, we have

  u(r, θ) = (1/2π) ∫_0^{2π} h(φ) dφ
    + Σ_{n=1}^∞ (r^n/(πa^n)) [ (∫_0^{2π} h(φ) cos(nφ) dφ) cos(nθ) + (∫_0^{2π} h(φ) sin(nφ) dφ) sin(nθ) ]
    = (1/2π) ∫_0^{2π} h(φ) [ 1 + 2 Σ_{n=1}^∞ (r/a)^n (cos(nφ) cos(nθ) + sin(nφ) sin(nθ)) ] dφ
    = (1/2π) ∫_0^{2π} h(φ) [ 1 + 2 Σ_{n=1}^∞ (r/a)^n cos(n(θ − φ)) ] dφ.

Now

  1 + 2 Σ_{n=1}^∞ (r/a)^n cos(n(θ − φ)) = 1 + Σ_{n=1}^∞ [ (r e^{i(θ−φ)}/a)^n + (r e^{−i(θ−φ)}/a)^n ]
    = 1 + (r e^{i(θ−φ)})/(a − r e^{i(θ−φ)}) + (r e^{−i(θ−φ)})/(a − r e^{−i(θ−φ)})
    = (a² − r²)/(a² − 2ar cos(θ − φ) + r²).

Therefore,

  u(r, θ) = (1/2π) ∫_0^{2π} h(φ) (a² − r²)/(a² − 2ar cos(θ − φ) + r²) dφ.

We can write this in rectangular coordinates as follows. Let x be a point in the disk Ω with polar coordinates (r, θ). Let x' be a point on the boundary of the disk Ω with polar coordinates (a, φ). Then

  |x − x'|² = a² + r² − 2ar cos(θ − φ)

by the law of cosines. Rewriting this, and using the fact that ds = a dφ is the element of arc length on the boundary curve, we have

  u(x) = (a² − |x|²)/(2πa) ∫_{|x'|=a} u(x')/|x − x'|² ds.

This is known as Poisson’s formula for the solution of Laplace’s equation on the disk.
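As a check of Poisson’s formula (a numerical sketch; the radius a and the boundary data are arbitrary choices): take h(θ) = a² cos(2θ), the restriction of the harmonic polynomial x² − y² = r² cos(2θ) to the circle r = a; the formula should then reproduce r² cos(2θ) inside the disk.

```python
import math

a = 1.5                       # disk radius (arbitrary)

def h(phi):                   # boundary data: restriction of x^2 - y^2
    return a * a * math.cos(2 * phi)

def poisson(r, theta, m=2000):
    # Trapezoid-rule evaluation of Poisson's formula on the disk of radius a.
    total = 0.0
    for k in range(m):
        phi = 2 * math.pi * k / m
        kernel = (a * a - r * r) / (a * a - 2 * a * r * math.cos(theta - phi) + r * r)
        total += h(phi) * kernel
    return total * (2 * math.pi / m) / (2 * math.pi)

r, theta = 0.7, 0.9           # an interior point
val = poisson(r, theta)
exact = r * r * math.cos(2 * theta)   # the harmonic extension r^2 cos(2*theta)
print(val, exact)   # nearly equal
```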
