Differential Equations of Mathematical Physics
G. Sweers
http://go.to/sweers
or
http://fa.its.tudelft.nl/~sweers
Perugia, July 28 - August 29, 2003
It is better to have failed and tried,
To kick the groom and kiss the bride,
Than not to try and stand aside,
Sparing the coal as well as the guide.
John O’Mill
Contents

1 From models to differential equations
  1.1 Laundry on a line
    1.1.1 A linear model
    1.1.2 A nonlinear model
    1.1.3 Comparing both models
  1.2 Flow through area and more 2d
  1.3 Problems involving time
    1.3.1 Wave equation
    1.3.2 Heat equation
  1.4 Differential equations from calculus of variations
  1.5 Mathematical solutions are not always physically relevant
2 Spaces, Traces and Imbeddings
  2.1 Function spaces
    2.1.1 Hölder spaces
    2.1.2 Sobolev spaces
  2.2 Restricting and extending
  2.3 Traces
  2.4 Zero trace and $W^{1,p}_0(\Omega)$
  2.5 Gagliardo, Nirenberg, Sobolev and Morrey
3 Some new and old solution methods I
  3.1 Direct methods in the calculus of variations
  3.2 Solutions in flavours
  3.3 Preliminaries for Cauchy-Kowalevski
    3.3.1 Ordinary differential equations
    3.3.2 Partial differential equations
    3.3.3 The Cauchy problem
  3.4 Characteristics I
    3.4.1 First order p.d.e.
    3.4.2 Classification of second order p.d.e. in two dimensions
  3.5 Characteristics II
    3.5.1 Second order p.d.e. revisited
    3.5.2 Semilinear second order in higher dimensions
    3.5.3 Higher order p.d.e.
4 Some old and new solution methods II
  4.1 Cauchy-Kowalevski
    4.1.1 Statement of the theorem
    4.1.2 Reduction to standard form
  4.2 A solution for the 1d heat equation
    4.2.1 The heat kernel on R
    4.2.2 On bounded intervals by an orthonormal set
  4.3 A solution for the Laplace equation on a square
  4.4 Solutions for the 1d wave equation
    4.4.1 On unbounded intervals
    4.4.2 On a bounded interval
  4.5 A fundamental solution for the Laplace operator
5 Some classics for a unique solution
  5.1 Energy methods
  5.2 Maximum principles
  5.3 Proof of the strong maximum principle
  5.4 Alexandrov's maximum principle
  5.5 The wave equation
    5.5.1 3 space dimensions
    5.5.2 2 space dimensions
Week 1
From models to differential equations
1.1 Laundry on a line
1.1.1 A linear model
Consider a rope tied between two positions on the same height and hang a sock on that line at location $x_0$.

[Figure: the rope sags at $x_0$; the tension forces $F_1$ and $F_2$ point along the rope and make angles $\alpha$ and $\beta$ with the horizontal.]
Then the balance of forces gives the following two equations:
$$F_1\cos\alpha = F_2\cos\beta = c,$$
$$F_1\sin\alpha + F_2\sin\beta = mg.$$
We assume that the positive constant $c$ does not depend on the weight hanging on the line but is a given fixed quantity. Here $g$ is the gravitational constant and $m$ the mass of the sock. Eliminating $F_i$ we find
$$\tan\alpha + \tan\beta = \frac{mg}{c}.$$
We call $u$ the deviation from the horizontal, measured upwards, so in the present situation $u$ will be negative. Fixing the ends at $(0,0)$ and $(1,0)$ we find two straight lines connecting $(0,0)$ through $(x_0,u(x_0))$ to $(1,0)$. Moreover $u'(x)=\tan\beta$ for $x>x_0$ and $u'(x)=-\tan\alpha$ for $x<x_0$, so
$$u'(x_0^-)=-\tan\alpha \quad\text{and}\quad u'(x_0^+)=\tan\beta.$$
This leads to
$$u'(x_0^+)-u'(x_0^-)=\frac{mg}{c}, \tag{1.1}$$
and the function $u$ can be written as follows:
$$u(x)=\begin{cases} -\frac{mg}{c}\,x\,(1-x_0) & \text{for } x\le x_0,\\ -\frac{mg}{c}\,x_0\,(1-x) & \text{for } x>x_0. \end{cases} \tag{1.2}$$
Hanging multiple socks on that line, say at position $x_i$ a sock of weight $m_i$ with $i=1\dots 35$, we find the function $u$ by superposition. Fixing
$$G(x,s)=\begin{cases} x\,(1-s) & \text{for } x\le s,\\ s\,(1-x) & \text{for } x>s, \end{cases} \tag{1.3}$$
we'll get to
$$u(x)=\sum_{i=1}^{35}-\frac{m_i g}{c}\,G(x,x_i).$$
Indeed, in each point $x_i$ we find
$$u'(x_i^+)-u'(x_i^-)=-\frac{m_i g}{c}\left[\frac{\partial}{\partial x}G(x,x_i)\right]_{x=x_i^-}^{x=x_i^+}=\frac{m_i g}{c}.$$
In the next step we will not only hang point masses on the line but even a blanket. This gives a (continuously) distributed force down on this line. We will approximate this by dividing the line into $n$ units of width $\Delta x$ and consider the force, say distributed with density $\rho(x)$, between $x_i-\frac12\Delta x$ and $x_i+\frac12\Delta x$ to be located at $x_i$. We could even allow some upwards pulling force and replace $-m_i g/c$ by $\rho(x_i)\Delta x$ to find
$$u(x)=\sum_{i=1}^{n}\rho(x_i)\,\Delta x\;G(x,x_i).$$
Letting $n\to\infty$, this sum, a Riemann sum, approximates an integral, so that we obtain
$$u(x)=\int_0^1 G(x,s)\,\rho(s)\,ds. \tag{1.4}$$
We might also consider the balance of forces for the discretized problem and see that formula (1.1) gives
$$u'(x_i+\tfrac12\Delta x)-u'(x_i-\tfrac12\Delta x)=-\rho(x_i)\,\Delta x.$$
By Taylor
$$u'(x_i+\tfrac12\Delta x)=u'(x_i)+\tfrac12\Delta x\;u''(x_i)+O(\Delta x)^2,$$
$$u'(x_i-\tfrac12\Delta x)=u'(x_i)-\tfrac12\Delta x\;u''(x_i)+O(\Delta x)^2,$$
and
$$\Delta x\;u''(x_i)+O(\Delta x)^2=-\rho(x_i)\,\Delta x.$$
After dividing by $\Delta x$ and taking limits this results in
$$-u''(x)=\rho(x). \tag{1.5}$$
Exercise 1 Show that (1.4) solves (1.5) and satisfies the boundary conditions $u(0)=u(1)=0$.

Definition 1.1.1 The function in (1.3) is called a Green function for the boundary value problem
$$\begin{cases} -u''(x)=\rho(x),\\ u(0)=u(1)=0. \end{cases}$$
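The Green-function representation (1.4) can also be checked numerically before doing Exercise 1. A minimal sketch, assuming a sample density $\rho(x)=\sin(\pi x)$ (our choice, not from the text), approximates $u(x)=\int_0^1 G(x,s)\rho(s)\,ds$ by a midpoint sum and compares it with the exact solution of $-u''=\rho$, $u(0)=u(1)=0$:

```python
import numpy as np

def G(x, s):
    # the Green function (1.3): x(1-s) for x <= s, s(1-x) for x > s
    return np.where(x <= s, x * (1 - s), s * (1 - x))

n = 2000
s = (np.arange(n) + 0.5) / n        # midpoint quadrature nodes on (0, 1)
rho = np.sin(np.pi * s)             # sample density (an assumption)

x = np.linspace(0.0, 1.0, 201)
u = np.array([np.sum(G(xi, s) * rho) / n for xi in x])

# for rho = sin(pi x) the exact solution of -u'' = rho is sin(pi x)/pi^2
exact = np.sin(np.pi * x) / np.pi**2
print(np.max(np.abs(u - exact)))    # small: (1.4) indeed solves (1.5)
```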
1.1.2 A nonlinear model
Consider again a rope tied between two positions on the same height and again hang a sock on that line at location $x_0$. Now we assume that the tension is constant throughout the rope.
[Figure: as before, the rope with tension forces $F_1$ and $F_2$ at angles $\alpha$ and $\beta$.]
To balance the forces we assume some sideways effect of the wind. We find
$$F_1=F_2=c,$$
$$F_1\sin\alpha+F_2\sin\beta=mg.$$
Now one has to use that
$$\sin\beta=\frac{u'\!\left(x_0^+\right)}{\sqrt{1+\left(u'\!\left(x_0^+\right)\right)^2}} \quad\text{and}\quad \sin\alpha=\frac{-u'\!\left(x_0^-\right)}{\sqrt{1+\left(u'\!\left(x_0^-\right)\right)^2}},$$
and the replacement of (1.1) is
$$\frac{u'\!\left(x_0^+\right)}{\sqrt{1+\left(u'\!\left(x_0^+\right)\right)^2}}-\frac{u'\!\left(x_0^-\right)}{\sqrt{1+\left(u'\!\left(x_0^-\right)\right)^2}}=\frac{mg}{c}.$$
A formula as in (1.2) still holds, but the superposition argument will fail since the problem is nonlinear. So there is no Green formula as before.
Nevertheless we may derive a differential equation as before. Proceeding as before, the Taylor expansion yields
$$-\frac{\partial}{\partial x}\left[\frac{u'(x)}{\sqrt{1+\left(u'(x)\right)^2}}\right]=\rho(x). \tag{1.6}$$
1.1.3 Comparing both models
The first derivation results in a linear equation which directly gives a solution formula. That formula may not be so pleasant, but at least we find that whenever we can give some meaning to it (for example if $\rho\in L^1$) there exists a solution:
Lemma 1.1.2 A function $u\in C^2[0,1]$ solves
$$\begin{cases} -u''(x)=f(x) & \text{for } 0<x<1,\\ u(0)=u(1)=0, \end{cases}$$
if and only if
$$u(x)=\int_0^1 G(x,s)\,f(s)\,ds$$
with
$$G(x,s)=\begin{cases} x\,(1-s) & \text{for } 0\le x\le s\le 1,\\ s\,(1-x) & \text{for } 0\le s<x\le 1. \end{cases}$$
The second derivation results in a nonlinear equation. We no longer see immediately if there exists a solution.

Exercise 2 Suppose that we are considering equation (1.6) with a uniformly distributed force, say $\frac{g}{c}\rho(x)=M$. If possible compute the solution, that is, the function that satisfies (1.6) and $u(0)=u(1)=0$. For which $M$ does a solution exist? Hint: use that $u$ is symmetric around $\frac12$ and hence that $u'\!\left(\frac12\right)=0$.
[Figure: some graphs of the solutions from the last exercise.]
Remark 1.1.3 If we don't have a separate weight hanging on the line but are considering a heavy rope that bends due to its own weight, we find:
$$\begin{cases} -u''(x)=c\sqrt{1+\left(u'(x)\right)^2} & \text{for } 0<x<1,\\ u(0)=u(1)=0. \end{cases}$$
1.2 Flow through area and more 2d
Consider the flow through some domain $\Omega$ in $\mathbb{R}^2$ according to the velocity field $v=(v_1,v_2)$.

Definition 1.2.1 A domain $\Omega$ is defined as an open and connected set. The boundary of the domain is called $\partial\Omega$ and its closure $\bar\Omega=\Omega\cup\partial\Omega$.
[Figure: a small rectangle with sides $\Delta x$ and $\Delta y$, centred at $(x,y)$, placed in the velocity field.]
If we single out a small rectangle with sides of length $\Delta x$ and $\Delta y$ with $(x,y)$ in the middle we find
• flowing out:
$$\text{Out}=\int_{y-\frac12\Delta y}^{y+\frac12\Delta y}\rho\,v_1\!\left(x+\tfrac12\Delta x,\,s\right)ds+\int_{x-\frac12\Delta x}^{x+\frac12\Delta x}\rho\,v_2\!\left(s,\,y+\tfrac12\Delta y\right)ds,$$
• flowing in:
$$\text{In}=\int_{y-\frac12\Delta y}^{y+\frac12\Delta y}\rho\,v_1\!\left(x-\tfrac12\Delta x,\,s\right)ds+\int_{x-\frac12\Delta x}^{x+\frac12\Delta x}\rho\,v_2\!\left(s,\,y-\tfrac12\Delta y\right)ds.$$
Scaling with the size of the rectangle and assuming that $v$ is sufficiently differentiable we obtain
$$\begin{aligned}
\lim_{\substack{\Delta x\downarrow 0\\ \Delta y\downarrow 0}}\frac{\text{Out}-\text{In}}{\Delta x\,\Delta y}
&=\lim_{\substack{\Delta x\downarrow 0\\ \Delta y\downarrow 0}}\frac{\rho}{\Delta y}\int_{y-\frac12\Delta y}^{y+\frac12\Delta y}\frac{v_1\!\left(x+\frac12\Delta x,s\right)-v_1\!\left(x-\frac12\Delta x,s\right)}{\Delta x}\,ds\;+\\
&\quad+\lim_{\substack{\Delta x\downarrow 0\\ \Delta y\downarrow 0}}\frac{\rho}{\Delta x}\int_{x-\frac12\Delta x}^{x+\frac12\Delta x}\frac{v_2\!\left(s,y+\frac12\Delta y\right)-v_2\!\left(s,y-\frac12\Delta y\right)}{\Delta y}\,ds\\
&=\lim_{\Delta y\downarrow 0}\frac{\rho}{\Delta y}\int_{y-\frac12\Delta y}^{y+\frac12\Delta y}\frac{\partial v_1}{\partial x}(x,s)\,ds+\lim_{\Delta x\downarrow 0}\frac{\rho}{\Delta x}\int_{x-\frac12\Delta x}^{x+\frac12\Delta x}\frac{\partial v_2}{\partial y}(s,y)\,ds\\
&=\rho\,\frac{\partial v_1(x,y)}{\partial x}+\rho\,\frac{\partial v_2(x,y)}{\partial y}.
\end{aligned}$$
If there is no concentration of fluid then the difference of these two quantities should be 0, that is, $\nabla\cdot v=0$. In case of a potential flow we have $v=\nabla u$ for some function $u$ and hence we obtain a harmonic function $u$:
$$\Delta u=\nabla\cdot\nabla u=\nabla\cdot v=0.$$
A typical problem would be to determine the flow given some boundary data, for example obtained by measurements:
$$\begin{cases} \Delta u=0 & \text{in } \Omega,\\ \frac{\partial}{\partial n}u=\psi & \text{on } \partial\Omega. \end{cases} \tag{1.7}$$
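The claim that a potential flow is divergence free is easy to verify symbolically; a sketch with sympy for one example potential $u=x^2-y^2$ (our choice, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 - y**2                      # an example potential (our choice)
v = (sp.diff(u, x), sp.diff(u, y))   # potential flow: v = grad u

div_v = sp.diff(v[0], x) + sp.diff(v[1], y)      # divergence of v
laplace_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)  # Laplacian of u

print(div_v, laplace_u)   # both 0: no concentration of fluid, u harmonic
```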
The specific problem in (1.7), by the way, is only solvable under the compatibility condition that the amount flowing in equals the amount going out:
$$\int_{\partial\Omega}\psi\,d\sigma=0.$$
This physical condition also follows 'mathematically' from (1.7). Indeed, by Green's Theorem:
$$\int_{\partial\Omega}\psi\,d\sigma=\int_{\partial\Omega}\frac{\partial}{\partial n}u\,d\sigma=\int_{\partial\Omega}n\cdot\nabla u\,d\sigma=\int_{\Omega}\nabla\cdot\nabla u\,dA=\int_{\Omega}\Delta u\,dA=0.$$
In case that the potential is given at part of the boundary $\Gamma$ and there is no flow in or out at $\partial\Omega\setminus\Gamma$, we find the problem:
$$\begin{cases} \Delta u=0 & \text{in } \Omega,\\ u=\phi & \text{on } \Gamma,\\ \frac{\partial}{\partial n}u=0 & \text{on } \partial\Omega\setminus\Gamma. \end{cases} \tag{1.8}$$
Such a problem could include local sources, think of external injection or extraction of fluid, and that would lead to $\Delta u=f$.
Instead of a rope or string showing a deviation due to some applied force, one may consider a two-dimensional membrane. Similarly one may derive the following boundary value problems on a two-dimensional domain $\Omega$.
1. For small forces/deviations we may consider the linear model as a reasonable model:
$$\begin{cases} -\Delta u(x,y)=f(x,y) & \text{for } (x,y)\in\Omega,\\ u(x,y)=0 & \text{for } (x,y)\in\partial\Omega. \end{cases}$$
The differential operator $\Delta$ is called the Laplacian and is defined by $\Delta u=u_{xx}+u_{yy}$.
2. For larger deviations, and assuming the tension is uniform throughout the domain, the nonlinear model:
$$\begin{cases} -\nabla\cdot\left[\dfrac{\nabla u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\right]=f(x,y) & \text{for } (x,y)\in\Omega,\\ u(x,y)=0 & \text{for } (x,y)\in\partial\Omega. \end{cases}$$
Here $\nabla$, defined by $\nabla u=(u_x,u_y)$, is the gradient, and $\nabla\cdot$ the divergence, defined by $\nabla\cdot(v,w)=v_x+w_y$. For example the deviation $u$ of a soap film with a pressure $f$ applied is modeled by this boundary value problem.
3. In case that we do not consider a membrane that is isotropic (meaning uniform behavior in all directions), for example a stretched piece of cloth where the tension is distributed in the direction of the threads, the following nonlinear model might be better suited:
$$\begin{cases} -\dfrac{\partial}{\partial x}\left(\dfrac{u_x(x,y)}{\sqrt{1+\left(u_x(x,y)\right)^2}}\right)-\dfrac{\partial}{\partial y}\left(\dfrac{u_y(x,y)}{\sqrt{1+\left(u_y(x,y)\right)^2}}\right)=f(x,y) & \text{for } (x,y)\in\Omega,\\ u(x,y)=0 & \text{for } (x,y)\in\partial\Omega. \end{cases} \tag{1.9}$$
In all of the above cases we have considered so-called zero Dirichlet boundary conditions. Instead one could also force the deviation $u$ at the boundary by prescribing nonzero values.
Exercise 3 Consider the second problem above with a constant right hand side on $\Omega=\left\{(x,y);\,x^2+y^2<1\right\}$:
$$\begin{cases} -\nabla\cdot\left[\dfrac{\nabla u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\right]=M & \text{for } (x,y)\in\Omega,\\ u(x,y)=0 & \text{for } (x,y)\in\partial\Omega. \end{cases}$$
This would model a soap film attached on a ring with constant force (blowing onto the soap film might give a good approximation). Assuming that the solution is radially symmetric, compute the solution. Hint: in polar coordinates $(r,\varphi)$ with $x=r\cos\varphi$ and $y=r\sin\varphi$ we obtain, by identifying $U(r,\varphi)=u(r\cos\varphi,r\sin\varphi)$:
$$\frac{\partial}{\partial r}U=\cos\varphi\,\frac{\partial}{\partial x}u+\sin\varphi\,\frac{\partial}{\partial y}u, \qquad \frac{\partial}{\partial\varphi}U=-r\sin\varphi\,\frac{\partial}{\partial x}u+r\cos\varphi\,\frac{\partial}{\partial y}u,$$
and hence
$$\cos\varphi\,\frac{\partial}{\partial r}U-\frac{\sin\varphi}{r}\,\frac{\partial}{\partial\varphi}U=\frac{\partial}{\partial x}u, \qquad \sin\varphi\,\frac{\partial}{\partial r}U+\frac{\cos\varphi}{r}\,\frac{\partial}{\partial\varphi}U=\frac{\partial}{\partial y}u.$$
After some computations one should find for a radially symmetric function, that is $U=U(r)$, that
$$\nabla\cdot\left[\frac{\nabla u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\right]=\frac{\partial}{\partial r}\left(\frac{U_r}{\sqrt{1+U_r^2}}\right)+\frac1r\left(\frac{U_r}{\sqrt{1+U_r^2}}\right).$$
Exercise 4 Next we consider a soap film without any exterior force that is spanned between two rings of radius 1 and 2, with the inner one on level $-C$ for some positive constant $C$ and the outer one at level 0. Mathematically:
$$\begin{cases} -\nabla\cdot\left[\dfrac{\nabla u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\right]=0 & \text{for } 1<x^2+y^2<4,\\ u(x,y)=0 & \text{for } x^2+y^2=4,\\ u(x,y)=-C & \text{for } x^2+y^2=1. \end{cases} \tag{1.10}$$
Solve this problem assuming that the solution is radial and C is small. What
happens if C increases? Can one deﬁne a solution that is no longer a function?
Hint: think of putting a virtual ring somewhere in between and think of two
functions glued together appropriately.
Solution 4 Writing $F=\frac{U_r}{\sqrt{1+U_r^2}}$ we have to solve $F'+\frac1r F=0$. This is a separable o.d.e.:
$$\frac{F'}{F}=-\frac1r,$$
and an integration yields
$$\ln F=-\ln r+c_1,$$
and hence $F(r)=\frac{c_2}{r}$ for some $c_2>0$. Next, solving $\frac{U_r}{\sqrt{1+U_r^2}}=\frac{c_2}{r}$ one finds
$$U_r=\frac{c_2}{\sqrt{r^2-c_2^2}}.$$
Hence
$$U(r)=c_2\log\left(r+\sqrt{r^2-c_2^2}\right)+c_3.$$
The boundary conditions give
$$0=U(2)=c_2\log\left(2+\sqrt{4-c_2^2}\right)+c_3, \tag{1.11}$$
$$-C=U(1)=c_2\log\left(1+\sqrt{1-c_2^2}\right)+c_3.$$
Hence
$$c_2\log\left(\frac{2+\sqrt{4-c_2^2}}{1+\sqrt{1-c_2^2}}\right)=C.$$
Note that $c_2\mapsto c_2\log\left(\frac{2+\sqrt{4-c_2^2}}{1+\sqrt{1-c_2^2}}\right)$ is an increasing function that maps $[0,1]$ onto $\left[0,\log\left(2+\sqrt3\right)\right]$. Hence we find a function that solves the boundary value problem for $C>0$ if and only if $C\le\log\left(2+\sqrt3\right)\approx 1.31696$. For those $C$ there is a unique $c_2\in(0,1]$, and with this $c_2$ we find $c_3$ by (1.11) and $U$:
$$U(r)=c_2\log\left(\frac{r+\sqrt{r^2-c_2^2}}{2+\sqrt{4-c_2^2}}\right).$$
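The relation between $C$ and $c_2$ can be explored with a few lines of Python; a sketch that confirms the maximal value $\log(2+\sqrt3)\approx 1.31696$ at $c_2=1$ and inverts the increasing map by bisection:

```python
import math

def C_of(c2):
    # C as a function of c2 on [0, 1], from the boundary conditions (1.11)
    return c2 * math.log((2 + math.sqrt(4 - c2**2)) / (1 + math.sqrt(1 - c2**2)))

print(C_of(1.0), math.log(2 + math.sqrt(3)))   # both approx. 1.31696

def c2_of(C, tol=1e-12):
    # invert the increasing map c2 -> C(c2) by bisection
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if C_of(mid) < C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(c2_of(1.0))   # the c2 belonging to C = 1
```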
Let us consider more general solutions by putting in a virtual ring with radius $R$ on level $h$, necessarily $R<1$ and $-C<h<0$. To find two smoothly connected functions we are looking for a combination of the two solutions of
$$\begin{cases} -\dfrac{\partial}{\partial r}\left(\dfrac{U_r}{\sqrt{1+U_r^2}}\right)-\dfrac1r\left(\dfrac{U_r}{\sqrt{1+U_r^2}}\right)=0 & \text{for } R<r<2,\\ U(2)=0 \text{ and } U(R)=h \text{ and } U_r(R)=\infty, \end{cases}$$
and
$$\begin{cases} -\dfrac{\partial}{\partial r}\left(\dfrac{\tilde U_r}{\sqrt{1+\tilde U_r^2}}\right)-\dfrac1r\left(\dfrac{\tilde U_r}{\sqrt{1+\tilde U_r^2}}\right)=0 & \text{for } R<r<1,\\ \tilde U(1)=-C \text{ and } \tilde U(R)=h \text{ and } \tilde U_r(R)=-\infty. \end{cases}$$
Note that in order to connect smoothly we need a nonexistent derivative at $R$!
We proceed as follows. We know that
$$U(r)=c_2\log\left(\frac{r+\sqrt{r^2-c_2^2}}{2+\sqrt{4-c_2^2}}\right)$$
is defined on $[c_2,2]$ and $U_r(c_2)=\infty$. Hence $R=c_2$ and
$$U(R)=c_2\log\left(\frac{c_2}{2+\sqrt{4-c_2^2}}\right).$$
Then by symmetry
$$\tilde U(r)=2U(R)-U(r)$$
and next
$$\begin{aligned}
\tilde U(1)&=2U(R)-U(1)=2c_2\log\left(\frac{c_2}{2+\sqrt{4-c_2^2}}\right)-c_2\log\left(\frac{1+\sqrt{1-c_2^2}}{2+\sqrt{4-c_2^2}}\right)\\
&=c_2\log\left[\frac{c_2^2}{\left(2+\sqrt{4-c_2^2}\right)\left(1+\sqrt{1-c_2^2}\right)}\right]=-C.
\end{aligned}$$
Again this function $c_2\mapsto c_2\log\left(\frac{c_2^2}{\left(2+\sqrt{4-c_2^2}\right)\left(1+\sqrt{1-c_2^2}\right)}\right)$ is defined on $[0,1]$. The maximal $C$ one finds is $\approx 1.82617$ for $c_2\approx 0.72398$. (Use Maple or Mathematica for this.)
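Instead of Maple or Mathematica one can use a short Python script; a sketch maximizing $c_2\mapsto c_2\log\big(\frac{(2+\sqrt{4-c_2^2})(1+\sqrt{1-c_2^2})}{c_2^2}\big)$ by golden-section search (assuming, as the numbers above suggest, a single interior maximum):

```python
import math

def C_II(c2):
    # the height C of a type II solution as a function of c2
    return c2 * math.log((2 + math.sqrt(4 - c2**2))
                         * (1 + math.sqrt(1 - c2**2)) / c2**2)

# golden-section search for the maximum on (0, 1)
phi = (math.sqrt(5) - 1) / 2
a, b = 1e-9, 1.0
while b - a > 1e-10:
    c, d = b - phi * (b - a), a + phi * (b - a)
    if C_II(c) < C_II(d):
        a = c
    else:
        b = d

c2_star = 0.5 * (a + b)
print(c2_star, C_II(c2_star))   # approx. 0.72398 and 1.82617
```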
[Figure: the solution that spans the maximal distance between the two rings.]

[Figure: two solutions for the same configuration of rings.]

Of the two solutions in the last figure only the left one is physically relevant. As a soap film the configuration on the right would immediately collapse. This lack of stability can be formulated in a mathematically sound way, but for the moment we will skip that.
Exercise 5 Solve the problem of the previous exercise for the linearized problem:
$$\begin{cases} -\Delta u=0 & \text{for } 1<x^2+y^2<4,\\ u(x,y)=0 & \text{for } x^2+y^2=4,\\ u(x,y)=-C & \text{for } x^2+y^2=1. \end{cases}$$
Are there any critical $C$ as for the previous problem? Hint: in polar coordinates one finds
$$\Delta=\left(\frac{\partial}{\partial r}\right)^2+\frac1r\,\frac{\partial}{\partial r}+\frac{1}{r^2}\left(\frac{\partial}{\partial\varphi}\right)^2.$$
Better check this instead of just believing it!
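That advice can be followed with sympy: take a sample function (here $u=x^3y$, our choice), substitute $x=r\cos\varphi$, $y=r\sin\varphi$, and compare the polar expression with the Cartesian Laplacian:

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
x, y = sp.symbols('x y')

u = x**3 * y                                   # sample function (our choice)
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)      # Cartesian Laplacian

sub = {x: r * sp.cos(phi), y: r * sp.sin(phi)}
U = u.subs(sub)
polar = sp.diff(U, r, 2) + sp.diff(U, r) / r + sp.diff(U, phi, 2) / r**2

print(sp.simplify(polar - lap.subs(sub)))      # 0: the polar formula holds
```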
Exercise 6 Find a nonconstant function $u$ defined on $\bar\Omega$, where
$$\Omega=\{(x,y);\ 0<x<1 \text{ and } 0<y<1\},$$
that satisfies
$$-\frac{\partial}{\partial x}\left[\frac{u_x(x,y)}{\sqrt{1+\left(u_x(x,y)\right)^2}}\right]-\frac{\partial}{\partial y}\left[\frac{u_y(x,y)}{\sqrt{1+\left(u_y(x,y)\right)^2}}\right]=0 \quad\text{for }(x,y)\in\Omega, \tag{1.12}$$
the differential equation for a piece of cloth as in (1.9), now with $f=0$. Give the boundary data $u(x,y)=g(x,y)$ for $(x,y)\in\partial\Omega$ that your solution satisfies.
Exercise 7 Consider the linear model for a square blanket hanging between two straight rods when the deviation from the horizontal is due to some force $f$.

Exercise 8 Consider the nonlinear model for a soap film with the following configuration:

[Figure: the configuration for Exercise 8.]

The film is free to move up and down the vertical front and back walls but is fixed to the rods on the right and left. There is a constant pressure from above. Give the boundary value problem and compute the solution.
1.3 Problems involving time
1.3.1 Wave equation
We have seen that a force due to tension in a string under some assumption is related to the second order derivative of the deviation from equilibrium. Instead of balancing this force by an exterior source, such a 'force' can be neutralized by a change in the movement of the string. Even if we are considering time-dependent deviations from equilibrium $u(x,t)$, the force density due to the tension in the (linearized) string is proportional to $u_{xx}(x,t)$. If this force (Newton's law: $F=ma$) implies a vertical movement we find
$$c_1\,u_{xx}(x,t)\,dx=dF=dm\;u_{tt}(x,t)=\rho\,dx\;u_{tt}(x,t).$$
Here $c_1$ depends on the tension in the string and $\rho$ is the mass density of that string. We find the 1-dimensional wave equation:
$$u_{tt}-c^2u_{xx}=0.$$
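A quick symbolic check with a standing wave $u(x,t)=\sin(kx)\cos(ckt)$ (a standard trial solution, not taken from the text):

```python
import sympy as sp

x, t, c, k = sp.symbols('x t c k')
u = sp.sin(k * x) * sp.cos(c * k * t)   # standing wave

residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))            # 0: the wave equation is satisfied
```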
If we fix the string at its ends, say in $0$ and $\ell$, and describe both the initial position and the initial velocity in all its points, we arrive at the following system, with $c,\ell>0$ and $\phi$ and $\psi$ given functions:
$$\begin{aligned}
&\text{1d wave equation:} && u_{tt}(x,t)-c^2u_{xx}(x,t)=0 && \text{for } 0<x<\ell \text{ and } t>0,\\
&\text{boundary condition:} && u(0,t)=u(\ell,t)=0 && \text{for } t>0,\\
&\text{initial position:} && u(x,0)=\phi(x) && \text{for } 0<x<\ell,\\
&\text{initial velocity:} && u_t(x,0)=\psi(x) && \text{for } 0<x<\ell.
\end{aligned}$$
One may show that this system is well-posed.
Definition 1.3.1 (Hadamard) A system of differential equations with boundary and initial conditions is called well-posed if it satisfies the following properties:
1. It has a solution (existence).
2. There is at most one solution (uniqueness).
3. Small changes in the problem result in small changes in the solution (sensitivity).
The last item is sometimes translated as 'the solution is continuously dependent on the boundary data'.
We should remark that the second property may hold only locally. Apart from this, these conditions as stated in this definition sound rather soft. For each problem we will study, we have to specify in which sense these three properties hold.
Other systems that are well-posed (if we give the right specification of these three conditions) are the following.
Example 1.3.2 For $\Omega$ a domain in $\mathbb{R}^2$ (think of the horizontal layout of a swimming pool of sufficient depth, where $u$ models the height of the surface of the water):
$$\begin{aligned}
&\text{2d wave equation:} && u_{tt}(x,t)-c^2\Delta u(x,t)=0 && \text{for } x\in\Omega \text{ and } t>0,\\
&\text{boundary condition:} && -\tfrac{\partial}{\partial n}u(x,t)+\alpha u(x,t)=0 && \text{for } x\in\partial\Omega \text{ and } t>0,\\
&\text{initial position:} && u(x,0)=\phi(x) && \text{for } x\in\Omega,\\
&\text{initial velocity:} && u_t(x,0)=\psi(x) && \text{for } x\in\Omega.
\end{aligned}$$
Here $n$ denotes the outer normal, $\alpha>0$ and $c>0$ are some numbers, and $\Delta=\left(\tfrac{\partial}{\partial x_1}\right)^2+\left(\tfrac{\partial}{\partial x_2}\right)^2$.
Example 1.3.3 For $\Omega$ a domain in $\mathbb{R}^3$ (think of a room with students listening to some source of noise, where $u$ models the deviation of the air pressure compared with complete silence):
$$\begin{aligned}
&\text{inhomogeneous 3d wave equation:} && u_{tt}(x,t)-c^2\Delta u(x,t)=f(x,t) && \text{for } x\in\Omega \text{ and } t>0,\\
&\text{boundary condition:} && -\tfrac{\partial}{\partial n}u(x,t)+\alpha(x)u(x,t)=0 && \text{for } x\in\partial\Omega \text{ and } t>0,\\
&\text{initial position:} && u(x,0)=0 && \text{for } x\in\Omega,\\
&\text{initial velocity:} && u_t(x,0)=0 && \text{for } x\in\Omega.
\end{aligned}$$
Here $n$ denotes the outer normal and $\alpha$ a positive function defined on $\partial\Omega$. Again $c>0$. On soft surfaces such as curtains $\alpha$ is large and on hard surfaces such as concrete $\alpha$ is small. The function $f$ is given and usually almost zero except near some location close to the blackboard. The initial conditions being zero represents a teacher's dream.
Exercise 9 Find all functions of the form $u(x,t)=\alpha(x)\beta(t)$ that satisfy
$$\begin{cases} u_{tt}(x,t)-c^2u_{xx}(x,t)=0 & \text{for } 0<x<\ell \text{ and } t>0,\\ u(0,t)=u(\ell,t)=0 & \text{for } t>0,\\ u_t(x,0)=0 & \text{for } 0<x<\ell. \end{cases}$$
Exercise 10 Same question for
$$\begin{cases} u_{tt}(x,t)-c^2u_{xx}(x,t)=0 & \text{for } 0<x<\ell \text{ and } t>0,\\ u(0,t)=u(\ell,t)=0 & \text{for } t>0,\\ u(x,0)=0 & \text{for } 0<x<\ell. \end{cases}$$
1.3.2 Heat equation
In the differential equation that models a temperature profile, time also plays a role, but it will appear in a different role than for the wave equation.
Allow us to give a simple explanation for the one-dimensional heat equation. If we consider a simple three-point model with the left and right point being kept at constant temperature, we expect the middle point to reach the average temperature after due time. In fact one should expect the speed of convergence to be proportional to the difference with the equilibrium, as in the following graph.

[Figure: the temperature distribution in six consecutive steps.]
For this discrete system, with $u_i$ for $i=1,2,3$ the temperatures, the middle node should satisfy:
$$\frac{\partial}{\partial t}u_2(t)=-c\left(-u_1+2u_2(t)-u_3\right).$$
If we would consider not just three nodes but say 43, of which we keep the first and the last one at fixed temperature, we obtain
$$\begin{cases}
u_1(t)=T_1,\\
\frac{\partial}{\partial t}u_i(t)=-c\left(-u_{i-1}(t)+2u_i(t)-u_{i+1}(t)\right) & \text{for } i\in\{2,\dots,42\},\\
u_{43}(t)=T_2.
\end{cases}$$
Letting the stepsize between nodes go to zero, but adding more nodes to keep the total length fixed, and applying the right scaling, $u(i\Delta x,t)=u_i(t)$ and $c=\tilde c\,(\Delta x)^{-2}$, we find for smooth $u$, through
$$\lim_{\Delta x\downarrow 0}\frac{-u(x-\Delta x,t)+2u(x,t)-u(x+\Delta x,t)}{(\Delta x)^2}=-u_{xx}(x,t),$$
the differential equation
$$\frac{\partial}{\partial t}u(x,t)=\tilde c\,u_{xx}(x,t).$$
For $\tilde c>0$ this is the 1d heat equation.
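The 43-node system above can be simulated directly; a sketch with forward Euler and made-up values $T_1=0$, $T_2=1$, $c=1$, showing the nodes relaxing to the linear equilibrium profile:

```python
import numpy as np

n, c, dt = 43, 1.0, 0.1
T1, T2 = 0.0, 1.0                    # fixed end temperatures (our choice)
u = np.zeros(n)
u[0], u[-1] = T1, T2

# forward Euler on  du_i/dt = -c(-u_{i-1} + 2 u_i - u_{i+1})
for _ in range(20000):
    u[1:-1] += dt * (-c) * (-u[:-2] + 2 * u[1:-1] - u[2:])

equilibrium = np.linspace(T1, T2, n)     # the steady state is linear in i
print(np.max(np.abs(u - equilibrium)))   # close to 0 after a long time
```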
Example 1.3.4 Consider a cylinder of radius $d$ and length $\ell$, insulated on the side, with the bottom in ice and heated from above. One finds the following model, which contains a 3d heat equation:
$$\begin{cases}
u_t(x,t)-c^2\Delta u(x,t)=0 & \text{for } x_1^2+x_2^2<d^2,\ 0<x_3<\ell \text{ and } t>0,\\
\frac{\partial}{\partial n}u(x,t)=0 & \text{for } x_1^2+x_2^2=d^2,\ 0<x_3<\ell \text{ and } t>0,\\
u(x,t)=0 & \text{for } x_1^2+x_2^2<d^2,\ x_3=0 \text{ and } t>0,\\
u(x,t)=\phi(x_1,x_2) & \text{for } x_1^2+x_2^2<d^2,\ x_3=\ell \text{ and } t>0,\\
u(x,0)=0 & \text{for } x_1^2+x_2^2<d^2,\ 0<x_3<\ell.
\end{cases}$$
Here $\Delta=\left(\tfrac{\partial}{\partial x_1}\right)^2+\left(\tfrac{\partial}{\partial x_2}\right)^2+\left(\tfrac{\partial}{\partial x_3}\right)^2$ only contains the derivatives with respect to space.
Until now we have seen several types of boundary conditions. Let us give the names that belong to them.

Definition 1.3.5 For second order ordinary and partial differential equations on a domain $\Omega$, the following names are given:
• Dirichlet: $u(x)=\phi(x)$ on $\partial\Omega$ for a given function $\phi$.
• Neumann: $\frac{\partial}{\partial n}u(x)=\psi(x)$ on $\partial\Omega$ for a given function $\psi$.
• Robin: $\alpha(x)\frac{\partial}{\partial n}u(x)=u(x)+\chi(x)$ on $\partial\Omega$ for a given function $\chi$.
Here $n$ is the outward normal at the boundary.
Exercise 11 Consider the 1d heat equation for $(x,t)\in(0,\ell)\times\mathbb{R}^+$:
$$u_t=c^2u_{xx}. \tag{1.13}$$
1. Suppose $u(0,t)=u(\ell,t)=0$ for all $t>0$. Compute the only positive functions satisfying these boundary conditions and (1.13) of the form $u(x,t)=X(x)T(t)$.
2. Suppose $\frac{\partial}{\partial n}u(0,t)=\frac{\partial}{\partial n}u(\ell,t)=0$ for all $t>0$. Same question.
3. Suppose $\alpha\frac{\partial}{\partial n}u(0,t)+u(0,t)=\frac{\partial}{\partial n}u(\ell,t)+u(\ell,t)=0$ for all $t>0$. Same question again. Thinking of physics: what would be a restriction for the number $\alpha$?
1.4 Differential equations from calculus of variations
A serious course with the title Calculus of Variations would certainly take more time than just a few hours. Here we will just borrow some techniques in order to derive some systems of differential equations.
Instead of trying to find some balance of forces in order to derive a differential equation and the appropriate boundary conditions, one can try to minimize, for example, the energy of the system. Going back to the laundry on a line from $(0,0)$ to $(\ell,0)$, one could think of giving a formula for the energy of the system. First the energy due to deviation from the equilibrium:
$$E_{\mathrm{rope}}(y)=\int_0^\ell\tfrac12\left(y'(x)\right)^2dx.$$
Also the potential energy due to the laundry pulling the rope down with force $f$ is present:
$$E_{\mathrm{pot}}(y)=-\int_0^\ell y(x)\,f(x)\,dx.$$
So the total energy is
$$E(y)=\int_0^\ell\left(\tfrac12\left(y'(x)\right)^2-y(x)f(x)\right)dx. \tag{1.14}$$
For the line with constant tension one has
$$E(y)=\int_0^\ell\left(\sqrt{1+\left(y'(x)\right)^2}-1-y(x)f(x)\right)dx. \tag{1.15}$$
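That minimizing (1.14) leads back to $-y''=f$ can be illustrated numerically: discretize the energy on a grid with zero boundary values and run gradient descent. A sketch with $\ell=1$ and $f\equiv1$ (both our choices); the minimizer approaches the exact solution $y(x)=x(1-x)/2$ of $-y''=1$:

```python
import numpy as np

n = 51                     # grid on [0, 1]; y[0] = y[-1] = 0 stay fixed
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)             # uniform load (our choice)
y = np.zeros(n)

lr = 0.004
for _ in range(30000):
    # gradient of E_h(y) = sum (y_{i+1}-y_i)^2/(2h) - sum h y_i f_i
    grad = (-y[:-2] + 2 * y[1:-1] - y[2:]) / h - h * f[1:-1]
    y[1:-1] -= lr * grad

exact = x * (1 - x) / 2    # solves -y'' = 1, y(0) = y(1) = 0
print(np.max(np.abs(y - exact)))   # small: the minimizer solves -y'' = f
```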
Assuming that a solution minimizes the energy implies that for any nice function $\eta$ and number $\tau$ it should hold that
$$E(y)\le E(y+\tau\eta).$$
Nice means that the boundary conditions also hold for the function $y+\tau\eta$ and that $\eta$ has some differentiability properties.
If the functional $\tau\mapsto E(y+\tau\eta)$ is differentiable, then $y$ should be a stationary point:
$$\left.\frac{\partial}{\partial\tau}E(y+\tau\eta)\right|_{\tau=0}=0 \quad\text{for all such }\eta.$$
Definition 1.4.1 The functional $J_1(y;\eta):=\left.\frac{\partial}{\partial\tau}E(y+\tau\eta)\right|_{\tau=0}$ is called the first variation of $E$.
Example 1.4.2 We find for the $E$ in (1.15) that
$$\left.\frac{\partial}{\partial\tau}E(y+\tau\eta)\right|_{\tau=0}=\int_0^\ell\left(\frac{y'(x)\,\eta'(x)}{\sqrt{1+\left(y'(x)\right)^2}}-\eta(x)f(x)\right)dx. \tag{1.16}$$
If the function $y$ is as we want then this quantity should be 0 for all such $\eta$. With an integration by parts we find
$$0=\left[\frac{y'(x)}{\sqrt{1+\left(y'(x)\right)^2}}\,\eta(x)\right]_0^\ell-\int_0^\ell\left(\left(\frac{y'(x)}{\sqrt{1+\left(y'(x)\right)^2}}\right)'+f(x)\right)\eta(x)\,dx.$$
Since $y$ and $y+\tau\eta$ are zero in $0$ and $\ell$, the boundary terms disappear and we are left with
$$\int_0^\ell\left(\left(\frac{y'(x)}{\sqrt{1+\left(y'(x)\right)^2}}\right)'+f(x)\right)\eta(x)\,dx=0.$$
If this holds for all functions $\eta$ then
$$-\left(\frac{y'(x)}{\sqrt{1+\left(y'(x)\right)^2}}\right)'=f(x).$$
Here we have our familiar differential equation.
Definition 1.4.3 The differential equation that follows for $y$ from
$$\left.\frac{\partial}{\partial\tau}E(y+\tau\eta)\right|_{\tau=0}=0 \quad\text{for all appropriate }\eta, \tag{1.17}$$
is called the Euler-Lagrange equation for $E$.
The extra boundary conditions for $y$ that follow from (1.17) are called the natural boundary conditions.
Example 1.4.4 Let us consider the minimal surface that connects a ring of radius 2 on level 0 with a ring of radius 1 on level $-1$. Assuming the solution is a function defined between these two rings, $\Omega=\left\{(x,y);\ 1<x^2+y^2<4\right\}$ is the domain. If $u$ is the height of the surface then the area is
$$\mathrm{Area}(u)=\int_\Omega\sqrt{1+|\nabla u(x,y)|^2}\,dxdy.$$
We find
$$\left.\frac{\partial}{\partial\tau}\mathrm{Area}(u+\tau\eta)\right|_{\tau=0}=\int_\Omega\frac{\nabla u(x,y)\cdot\nabla\eta(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\,dxdy. \tag{1.18}$$
Since $u$ and $u+\tau\eta$ are both equal to 0 on the outer ring and equal to $-1$ on the inner ring, it follows that $\eta=0$ on both parts of the boundary. Then by Green
$$\begin{aligned}
0&=\int_\Omega\frac{\nabla u(x,y)\cdot\nabla\eta(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\,dxdy\\
&=\int_{\partial\Omega}\frac{\frac{\partial}{\partial n}u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\,\eta(x,y)\,d\sigma-\int_\Omega\left[\nabla\cdot\frac{\nabla u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\right]\eta(x,y)\,dxdy\\
&=-\int_\Omega\left[\nabla\cdot\frac{\nabla u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}\right]\eta(x,y)\,dxdy.
\end{aligned}$$
If this holds for all functions $\eta$ then a solution $u$ satisfies
$$\nabla\cdot\frac{\nabla u(x,y)}{\sqrt{1+|\nabla u(x,y)|^2}}=0.$$
Example 1.4.5 Consider a one-dimensional loaded beam of shape $y(x)$ that is fixed at its boundaries by $y(0)=y(\ell)=0$. The energy consists of the deformation energy:
$$E_{\mathrm{def}}(y)=\int_0^\ell\tfrac12\left(y''(x)\right)^2dx$$
and the potential energy due to the weight that pushes:
$$E_{\mathrm{pot}}(y)=\int_0^\ell y(x)f(x)\,dx.$$
So the total energy $E=E_{\mathrm{def}}-E_{\mathrm{pot}}$ equals
$$E(y)=\int_0^\ell\left(\tfrac12\left(y''(x)\right)^2-y(x)f(x)\right)dx$$
and its first variation
$$\begin{aligned}
\left.\frac{\partial}{\partial\tau}E(y+\tau\eta)\right|_{\tau=0}&=\int_0^\ell\left(y''(x)\eta''(x)-\eta(x)f(x)\right)dx\\
&=\left[y''(x)\eta'(x)-y'''(x)\eta(x)\right]_0^\ell+\int_0^\ell\left(y''''(x)\eta(x)-\eta(x)f(x)\right)dx.
\end{aligned} \tag{1.19}$$
Since $y$ and $y+\tau\eta$ equal 0 on the boundary, we find that $\eta$ equals 0 on the boundary. Using this, only part of the boundary terms disappears:
$$(1.19)=\left[y''(x)\eta'(x)\right]_0^\ell+\int_0^\ell\left(y''''(x)-f(x)\right)\eta(x)\,dx.$$
If this holds for all functions $\eta$ with $\eta=0$ at the boundary, then we may first consider functions which also vanish in the first derivative at the boundary and find that $y''''(x)=f(x)$ for all $x\in(0,\ell)$. Next, by choosing $\eta$ with $\eta'\ne0$ at the boundary, we additionally find that a solution $y$ satisfies
$$y''(0)=y''(\ell)=0.$$
So the solution should satisfy:
$$\begin{aligned}
&\text{the d.e.:} && y''''(x)=f(x),\\
&\text{prescribed b.c.:} && y(0)=y(\ell)=0,\\
&\text{natural b.c.:} && y''(0)=y''(\ell)=0.
\end{aligned}$$
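The derived boundary value problem can be checked symbolically for a simple load. Taking $\ell=1$ and $f\equiv1$ (our choices), a candidate solution is $y(x)=(x^4-2x^3+x)/24$ (our guess); sympy confirms the d.e. and all four boundary conditions:

```python
import sympy as sp

x = sp.symbols('x')
y = (x**4 - 2 * x**3 + x) / 24       # candidate for y'''' = 1 on (0, 1)

print(sp.diff(y, x, 4))              # 1: the d.e. with f = 1
print(y.subs(x, 0), y.subs(x, 1))    # 0 0: prescribed b.c.
y2 = sp.diff(y, x, 2)
print(y2.subs(x, 0), y2.subs(x, 1))  # 0 0: natural b.c.
```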
Exercise 12 At most swimming pools there is a possibility to jump in from a springboard. Just standing on it the energy is, supposing it is a 1d model, as for the loaded beam but with different boundary conditions, namely $y(0)=y'(0)=0$ and no prescribed boundary condition at $\ell$. Give the corresponding boundary value problem.

Consider the problem
$$\begin{cases} y''''(x)=f(x) & \text{for } 0<x<1,\\ y(0)=y'(0)=0,\\ y''(1)=y'''(1)=0. \end{cases} \tag{1.20}$$
Exercise 13 Show that the problem (1.20) is well-posed (specify your setting). The solution of (1.20) can be written by means of a Green function: $u(x)=\int_0^1G(x,s)f(s)\,ds$. Compute this Green function.
Exercise 14 Consider a bridge that is clamped in $-1$ and $1$ and which is supported in $0$ (and we assume it doesn't lose contact with the supporting pillar in $0$).
If $u$ is the deviation, the prescribed boundary conditions are:
$$u(-1)=u'(-1)=u(0)=u(1)=u'(1)=0.$$
The energy functional is
$$E(u)=\int_{-1}^{1}\left(\tfrac12\left(u''(x)\right)^2-f(x)\,u(x)\right)dx.$$
Compute the differential equation for the minimizing solution and the remaining natural boundary conditions that this function should satisfy. Hint: consider $u_\ell$ on $[-1,0]$ and $u_r$ on $[0,1]$. How many conditions are needed?
1.5 Mathematical solutions are not always physically relevant
If we believe physics in the sense that for physically relevant solutions the energy is minimized, we could come back to the two solutions of the soap film between the two rings as in the last part of Exercise 4. We now do have an energy for that model, at least when we are discussing functions $u$ of $(x,y)$:
$$E(u)=\int_\Omega\sqrt{1+|\nabla u(x,y)|^2}\,dxdy.$$
We have seen that if $u$ is a solution then the first variation necessarily is 0 for all appropriate test functions $\eta$:
$$\left.\frac{\partial}{\partial\tau}E(u+\tau\eta)\right|_{\tau=0}=0.$$
For twice differentiable functions Taylor gives $f(a+\tau b)=f(a)+\tau f'(a)+\tfrac12\tau^2f''(a)+o(\tau^2)$. So in order that the energy functional has a minimum it will be necessary that
$$\left.\left(\frac{\partial}{\partial\tau}\right)^2E(u+\tau\eta)\right|_{\tau=0}\ge0$$
whenever $E$ is a 'nice' functional.
Definition 1.5.1 The quantity $\left.\left(\frac{\partial}{\partial\tau}\right)^2E(u+\tau\eta)\right|_{\tau=0}$ is called the second variation.
Example 1.5.2 Back to the rings connected by the soap film from Exercise 4. We were looking for radially symmetric solutions of (1.10). If we restrict ourselves to solutions that are functions of $r$ then we found
$$U(r)=c_2\log\left(\frac{r+\sqrt{r^2-c_2^2}}{2+\sqrt{4-c_2^2}}\right) \tag{1.21}$$
if and only if $C\le\log\left(2+\sqrt3\right)\approx1.31696$. Let us call these solutions of type I.
If we allow graphs that are not necessarily represented by one function, then there are two solutions if and only if $C\le C_{\max}\approx1.82617$:
$$U(r)=c_2\log\left(\frac{r+\sqrt{r^2-c_2^2}}{2+\sqrt{4-c_2^2}}\right) \quad\text{and}\quad \tilde U(r)=c_2\log\left(\frac{c_2^2}{\left(2+\sqrt{4-c_2^2}\right)\left(r+\sqrt{r^2-c_2^2}\right)}\right).$$
Let us call these solutions of type II.

[Figure: type I solutions (left) and type II solutions (right).]
On the left are type I and on the right type II solutions. The top two or three
on the right probably do not exist ‘physically’.
The energy of the type I solution $U$ in (1.21) is (skipping the constant $2\pi$)
$$\begin{aligned}
\int_\Omega\sqrt{1+|\nabla U|^2}\,dxdy&=\int_1^2\sqrt{1+U_r^2(r)}\;r\,dr=\int_1^2\sqrt{1+\left(\frac{c_2}{\sqrt{r^2-c_2^2}}\right)^2}\;r\,dr\\
&=\int_1^2\frac{r^2}{\sqrt{r^2-c_2^2}}\,dr=\left[\frac{r\sqrt{r^2-c_2^2}}{2}+\frac{c_2^2\log\!\left(r+\sqrt{r^2-c_2^2}\right)}{2}\right]_1^2\\
&=\sqrt{4-c_2^2}-\tfrac12\sqrt{1-c_2^2}+\tfrac12\,c_2^2\log\left(\frac{2+\sqrt{4-c_2^2}}{1+\sqrt{1-c_2^2}}\right).
\end{aligned}$$
The energy of the type II solution equals
\begin{align*}
\int_{c_2}^{2} &\sqrt{1+U_r(r)^2}\; r\, dr + \int_{c_2}^{1} \sqrt{1+\tilde{U}_r(r)^2}\; r\, dr
 = \int_1^2 \sqrt{1+U_r(r)^2}\; r\, dr + 2\int_{c_2}^{1} \sqrt{1+U_r(r)^2}\; r\, dr \\
&= \sqrt{4-c_2^2} - \tfrac12\sqrt{1-c_2^2} + \tfrac12 c_2^2 \log\left(\frac{2+\sqrt{4-c_2^2}}{1+\sqrt{1-c_2^2}}\right)
 + \sqrt{1-c_2^2} + c_2^2\log\left(1+\sqrt{1-c_2^2}\right) - c_2^2\log(c_2) \\
&= \sqrt{4-c_2^2} + \tfrac12\sqrt{1-c_2^2} + \tfrac12 c_2^2 \log\left[\frac{\left(2+\sqrt{4-c_2^2}\right)\left(1+\sqrt{1-c_2^2}\right)}{c_2^2}\right].
\end{align*}
Just like $C = C(c_2)$ also $E = E(c_2)$ turns out to be an ugly function. But with the help of Mathematica (or Maple) we may plot these functions.
On the left: $C_{I}$ above $C_{II}$; on the right: $E_{I}$ below $E_{II}$.
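The notes leave the plotting to Mathematica or Maple. As a hedged alternative, the two closed-form energies derived above can be evaluated in plain Python; `energy_I` and `energy_II` are our own names for the reconstructed formulas, valid on the common parameter range $0 < c_2 \leq 1$:

```python
import math

def energy_I(c):
    # reconstructed energy of the type I solution (the constant 2*pi is dropped)
    return (math.sqrt(4 - c * c) - 0.5 * math.sqrt(1 - c * c)
            + 0.5 * c * c * math.log((2 + math.sqrt(4 - c * c))
                                     / (1 + math.sqrt(1 - c * c))))

def energy_II(c):
    # reconstructed energy of the type II solution on the same range
    return (math.sqrt(4 - c * c) + 0.5 * math.sqrt(1 - c * c)
            + 0.5 * c * c * math.log((2 + math.sqrt(4 - c * c))
                                     * (1 + math.sqrt(1 - c * c)) / (c * c)))

for c in (0.2, 0.5, 0.72398, 0.9):
    print(f"c2 = {c:7.5f}:  E_I = {energy_I(c):.5f},  E_II = {energy_II(c):.5f}")
```

Feeding these values to any plotting tool reproduces the right-hand picture: for $0 < c_2 < 1$ the type II energy lies strictly above the type I energy, and the two branches meet at $c_2 = 1$.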
Exercise 15 Use these plots of height and energy as functions of $c_2$ to compare the energy of the two solutions for the same configuration of rings. Can one conclude instability for one of the solutions? Are physically relevant solutions necessarily global minimizers or could they be local minimizers of the energy functional? Hint: think of the surface area of these solutions.
Exercise 16 The approach that uses solutions in the form $r \mapsto U(r)$ in Exercise 4 has the drawback that we have to glue together two functions for one solution. A formulation without this problem uses $u \mapsto R(u)$ (the 'inverse' function). Give the energy $\tilde{E}$ formulation for this $R$. What is the first variation for this $\tilde{E}$?
Using this new formulation it might be possible to find a connection from a type II solution $R_{II}$ to a type I solution $R_{I}$ with the same $C < \log\left(2+\sqrt{3}\right)$ such that the energy decreases along the connecting path. To be more precise: we should construct a homotopy $H \in C\left([0,1];\, C^2[-C,0]\right)$ with $H(0) = R_{II}$ and $H(1) = R_{I}$ that is such that $t \mapsto \tilde{E}(H(t))$ is decreasing. This would prove that $R_{II}$ is not even a local minimizer.
Claim 1.5.3 A type II solution with $c_2 < 0.72398$ is probably not a local minimizer of the energy and hence not a physically relevant solution.
Volunteers may try to prove this claim.
Exercise 17 Let us consider a glass cylinder with, at the top and at the bottom, a rod laid out on a diametral line, such that the two rods cross perpendicularly. See the left of the three figures:

One can connect these two rods through this cylinder by a soap film as in the two figures on the right.
1. The solution depicted in the second cylinder is described by a function. Setting the cylinder $\left\{ (x,y,z);\; -\ell \leq x \leq \ell \text{ and } y^2 + z^2 = R^2 \right\}$, the function looks as $z(x,y) = y \tan\left( \frac{\pi}{4\ell}\, x \right)$. Show that for a deviation of this function by $w$ the area formula is given by
\[
E(w) = \int_{-\ell}^{\ell} \int_{y_{l,w}(x)}^{y_{r,w}(x)} \sqrt{1 + |\nabla z + \nabla w|^2}\; dy\, dx
\]
where $y_{l,w}(x) < 0 < y_{r,w}(x)$ are the in absolute sense smallest solutions of
\[
y^2 + \left( y \tan\left( \frac{\pi}{4\ell}\, x \right) + w(x,y) \right)^2 = R^2.
\]
2. If we restrict ourselves to functions that satisfy $w(x,0) = 0$ we may use a more suitable coordinate system, namely $x, r$ instead of $x, y$, and describe $\varphi(x,r)$ instead of $w(x,y)$ as follows:

[Figure: the coordinates $y$, $z$ and $r$ on the cylinder.]

Show that
\[
E(\varphi) = \int_{-\ell}^{\ell} \int_{-R}^{R} \sqrt{ 1 + \left( \frac{\partial\varphi}{\partial r} \right)^{2} + \left( r\, \frac{\pi}{4\ell} + \frac{\partial\varphi}{\partial x} \right)^{2} }\; dr\, dx.
\]
3. Derive the Euler-Lagrange equation for $E(\varphi)$ and state the boundary value problem.
4. Show that ‘the constant turning plane’ is indeed a solution in the sense
that it yields a stationary point of the energy functional.
5. It is not obvious to me if the first depicted solution is indeed a minimizer. If you ask me for a guess: for $R \gg \ell$ it is the minimizer ($R > 1.535777884999\,\ell$?). For the second depicted solution of the boundary value problem one can find a geometrical argument that shows it is not even a local minimizer. (Im)prove these guesses.
Week 2
Spaces, Traces and Imbeddings
2.1 Function spaces
2.1.1 Hölder spaces
Before we will start 'solving problems' we have to fix the type of function we are looking for. For mathematics that means that we have to decide which function space will be used. The classical spaces that are often used for solutions of p.d.e. (and o.d.e.) are $C^k(\bar\Omega)$ and the Hölder spaces $C^{k,\alpha}(\bar\Omega)$ with $k \in \mathbb{N}$ and $\alpha \in (0,1]$. Here is a short reminder of those spaces.
Definition 2.1.1 Let $\Omega \subset \mathbb{R}^n$ be a domain.

• $C(\bar\Omega)$ with $\|\cdot\|_\infty$ defined by $\|u\|_\infty = \sup_{x\in\bar\Omega} |u(x)|$ is the space of continuous functions.

• Let $\alpha \in (0,1]$. Then $C^\alpha(\bar\Omega)$ with $\|\cdot\|_\alpha$ defined by
\[
\|u\|_\alpha = \|u\|_\infty + \sup_{x,y\in\bar\Omega,\; x\neq y} \frac{|u(x)-u(y)|}{|x-y|^\alpha},
\]
consists of all functions $u \in C(\bar\Omega)$ such that $\|u\|_\alpha < \infty$. It is the space of functions which are Hölder-continuous with coefficient $\alpha$.
If we want higher order derivatives to be continuous up to the boundary we should either explain how these are defined on the boundary or consider just derivatives in the interior and extend those. We choose this second option.
Definition 2.1.2 Let $\Omega \subset \mathbb{R}^n$ be a domain and $k \in \mathbb{N}^+$.

• $C^k(\bar\Omega)$ with $\|\cdot\|_{C^k(\bar\Omega)}$ consists of all functions $u$ that are $k$-times differentiable in $\Omega$ and which are such that for each multiindex $\beta \in \mathbb{N}^n$ with $1 \leq |\beta| \leq k$ there exists $g_\beta \in C(\bar\Omega)$ with $g_\beta = \left(\frac{\partial}{\partial x}\right)^{\beta} u$ in $\Omega$. The norm $\|\cdot\|_{C^k(\bar\Omega)}$ is defined by
\[
\|u\|_{C^k(\bar\Omega)} = \|u\|_{C(\bar\Omega)} + \sum_{1\leq|\beta|\leq k} \|g_\beta\|_{C(\bar\Omega)}.
\]
• $C^{k,\alpha}(\bar\Omega)$ with $\|\cdot\|_{C^{k,\alpha}(\bar\Omega)}$ consists of all functions $u$ that are $k$ times $\alpha$-Hölder-continuously differentiable in $\Omega$ and which are such that for each multiindex $\beta \in \mathbb{N}^n$ with $1 \leq |\beta| \leq k$ there exists $g_\beta \in C^\alpha(\bar\Omega)$ with $g_\beta = \left(\frac{\partial}{\partial x}\right)^{\beta} u$ in $\Omega$. The norm $\|\cdot\|_{C^{k,\alpha}(\bar\Omega)}$ is defined by
\[
\|u\|_{C^{k,\alpha}(\bar\Omega)} = \|u\|_{C(\bar\Omega)} + \sum_{1\leq|\beta|<k} \|g_\beta\|_{C(\bar\Omega)} + \sum_{|\beta|=k} \|g_\beta\|_{C^\alpha(\bar\Omega)}.
\]
Remark 2.1.3 Adding an index $0$ at the bottom in $C_0^{k,\alpha}(\bar\Omega)$ means that $u$ and all of its derivatives (read $g_\beta$) up to order $k$ are $0$ at the boundary. For example,
\[
C^4(\bar\Omega) \cap C_0^2(\bar\Omega) = \left\{ u \in C^4(\bar\Omega);\; u_{|\partial\Omega} = \left(\frac{\partial}{\partial x_i}\, u\right)_{|\partial\Omega} = \left(\frac{\partial^2}{\partial x_i \partial x_j}\, u\right)_{|\partial\Omega} = 0 \right\}
\]
is a subspace of $C^4(\bar\Omega)$.
Remark 2.1.4 Some other cases where we will use the notation $C^{\,\cdot}(\cdot)$ are the following. For those cases we are not defining a norm but just consider the collection of functions.

The set $C^\infty(\bar\Omega)$ consists of the functions that belong to $C^k(\bar\Omega)$ for every $k \in \mathbb{N}$.

If we write $C^m(\Omega)$, without the closure of $\Omega$, we mean all functions which are $m$-times differentiable inside $\Omega$.

By $C_0^\infty(\Omega)$ we will mean all functions with compact support in $\Omega$ and which are infinitely differentiable on $\Omega$.
For bounded domains $\Omega$ the spaces $C^k(\bar\Omega)$ and $C^{k,\alpha}(\bar\Omega)$ are Banach spaces. Although often well equipped for solving differential equations, these spaces are in general not very convenient in variational settings. Spaces that are better suited are the...
2.1.2 Sobolev spaces
Let us first recall that $L^p(\Omega)$ is the space of all measurable functions $u : \Omega \to \mathbb{R}$ with $\int_\Omega |u(x)|^p\, dx < \infty$.
Definition 2.1.5 Let $\Omega \subset \mathbb{R}^n$ be a domain, $k \in \mathbb{N}$ and $p \in (1,\infty)$.

• $W^{k,p}(\Omega)$ are the functions $u$ such that $\left(\frac{\partial}{\partial x}\right)^{\alpha} u \in L^p(\Omega)$ for all $\alpha \in \mathbb{N}^n$ with $|\alpha| \leq k$. Its norm $\|\cdot\|_{W^{k,p}(\Omega)}$ is defined by
\[
\|u\|_{W^{k,p}(\Omega)} = \sum_{\alpha\in\mathbb{N}^n,\;|\alpha|\leq k} \left\| \left(\frac{\partial}{\partial x}\right)^{\alpha} u \right\|_{L^p(\Omega)}.
\]
Remark 2.1.6 When is a derivative in $L^p(\Omega)$? For the moment we just recall that $\frac{\partial}{\partial x_1} u \in L^p(\Omega)$ if there is $g \in L^p(\Omega)$ such that
\[
\int_\Omega g\, \varphi\, dx = -\int_\Omega u\, \frac{\partial}{\partial x_1}\varphi\, dx \quad \text{for all } \varphi \in C_0^\infty(\Omega).
\]
For $u \in C^1(\Omega) \cap L^p(\Omega)$ one takes $g = \frac{\partial}{\partial x_1} u \in C(\Omega)$, checks whether $g \in L^p(\Omega)$, and the formula follows from an integration by parts since each $\varphi$ has compact support in $\Omega$.
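The defining identity of the weak derivative can be checked numerically for a concrete nonsmooth example. The sketch below is our own construction (not from the notes): it takes $u(x) = |x|$ on $(-1,1)$, whose weak derivative is $g = \operatorname{sign}$, and a smooth bump as test function $\varphi$.

```python
import math

def bump(x, c=0.2, r=0.5):
    # a test function in C_0^infinity(-1,1): smooth bump supported in (c-r, c+r)
    t = (x - c) / r
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

def bump_prime(x, c=0.2, r=0.5):
    # derivative of the bump, from the closed form d/dx exp(-1/(1-t^2))
    t = (x - c) / r
    return bump(x, c, r) * (-2.0 * t / (r * (1.0 - t * t) ** 2)) if abs(t) < 1 else 0.0

# u(x) = |x| has weak derivative g(x) = sign(x): check the defining identity
#   int g phi dx = - int u phi' dx   (midpoint rule over the support of phi)
N = 100_000
a, b = -0.3, 0.7
h = (b - a) / N
xs = [a + (i + 0.5) * h for i in range(N)]
lhs = sum(math.copysign(1.0, x) * bump(x) for x in xs) * h
rhs = -sum(abs(x) * bump_prime(x) for x in xs) * h
print(lhs, rhs)
```

The two integrals agree to quadrature accuracy, even though $u$ is not differentiable at $0$ in the classical sense.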
A closely related space is used in case of Dirichlet boundary conditions. Then we would like to restrict ourselves to all functions $u \in W^{k,p}(\Omega)$ that satisfy $\left(\frac{\partial}{\partial x}\right)^{\alpha} u = 0$ on $\partial\Omega$ for all $\alpha$ with $|\alpha| \leq m$, for some $m$. Since the functions in $W^{k,p}(\Omega)$ are not necessarily continuous we cannot expect a condition like $\left(\frac{\partial}{\partial x}\right)^{\alpha} u = 0$ on $\partial\Omega$ to hold pointwise. One way out is through the following definition.
Definition 2.1.7 Let $\Omega \subset \mathbb{R}^n$ be a domain, $k \in \mathbb{N}$ and $p \in (1,\infty)$.

• Set $W_0^{k,p}(\Omega) = \overline{C_0^\infty(\Omega)}^{\,\|\cdot\|_{W^{k,p}(\Omega)}}$, the closure in the $\|\cdot\|_{W^{k,p}(\Omega)}$-norm of the set of infinitely differentiable functions with compact support in $\Omega$. The standard norm on $W_0^{k,p}(\Omega)$ is $\|\cdot\|_{W^{k,p}(\Omega)}$.
Instead of the norm $\|\cdot\|_{W^{k,p}(\Omega)}$ one might also encounter some other norm for $W_0^{k,p}(\Omega)$, namely $|\cdot|_{W_0^{k,p}(\Omega)}$, defined by taking only the highest order derivatives:
\[
|u|_{W_0^{k,p}(\Omega)} = \sum_{\alpha\in\mathbb{N}^n,\;|\alpha|=k} \left\| \left(\frac{\partial}{\partial x}\right)^{\alpha} u \right\|_{L^p(\Omega)}.
\]
Clearly this cannot be a norm on $W^{k,p}(\Omega)$ for $k \geq 1$, since $|1|_{W_0^{k,p}(\Omega)} = 0$ for the constant function $1$.
Nevertheless, a result of Poincaré saves our day. Let us first fix the meaning of a $C^1$ boundary or manifold.
Definition 2.1.8 We will call a bounded $(n-1)$-dimensional manifold $M$ a $C^1$ manifold if there are finitely many $C^1$ maps $f_i : \bar A_i \to \mathbb{R}$ with $A_i$ an open set in $\mathbb{R}^{n-1}$ and corresponding Cartesian coordinates $\left\{ y_1^{(i)}, \dots, y_n^{(i)} \right\}$, say $i \in \{1,\dots,m\}$, such that
\[
M = \bigcup_{i=1}^{m} \left\{ y_n^{(i)} = f_i(y_1^{(i)},\dots,y_{n-1}^{(i)});\; (y_1^{(i)},\dots,y_{n-1}^{(i)}) \in \bar A_i \right\}.
\]
We may assume there are open blocks $B_i := A_i \times (a_i, b_i)$ that cover $M$ and are such that
\[
B_i \cap M = \left\{ y_n^{(i)} = f_i(y_1^{(i)},\dots,y_{n-1}^{(i)});\; (y_1^{(i)},\dots,y_{n-1}^{(i)}) \in \bar A_i \right\}.
\]
[Figure: three blocks with local Cartesian coordinates.]
Theorem 2.1.9 (A Poincaré type inequality) Let $\Omega \subset \mathbb{R}^n$ be a domain and $\ell \in \mathbb{R}^+$. Suppose that there is a bounded $(n-1)$-dimensional $C^1$ manifold $M \subset \bar\Omega$ such that for every point $x \in \Omega$ there is a point $x^* \in M$ connected by a smooth curve within $\Omega$ with length and curvature uniformly bounded, say by $\ell$. Then there is a constant $c$ such that for all $u \in C^1(\bar\Omega)$ with $u = 0$ on $M$:
\[
\int_\Omega |u|^p\, dx \leq c \int_\Omega |\nabla u|^p\, dx.
\]
[Figure: the domain $\Omega$ with the manifold $M$.]
Proof. First let us consider the one-dimensional problem. Taking $u \in C^1(0,\ell)$ with $u(0) = 0$ we find with $\frac1p + \frac1q = 1$:
\begin{align*}
\int_0^\ell |u(x)|^p\, dx &= \int_0^\ell \left| \int_0^x u'(s)\, ds \right|^p dx
\leq \int_0^\ell \left( \int_0^x |u'(s)|^p\, ds \right) \left( \int_0^x 1\, ds \right)^{\frac pq} dx \\
&\leq \left( \int_0^\ell |u'(s)|^p\, ds \right) \int_0^\ell x^{\frac pq}\, dx
= \frac1p\, \ell^{\,p} \int_0^\ell |u'(s)|^p\, ds.
\end{align*}
For the higher dimensional problem we go over to new coordinates. Let $\tilde y$ denote a system of coordinates on (part of) the manifold $M$ and let $y_n$ fill up the remaining direction. We may choose several systems of coordinates to fill up $\Omega$. Since $M$ is $C^1$ and $(n-1)$-dimensional we may assume that the Jacobian of each of these transformations is bounded from above and away from zero, let us assume $0 < c_1^{-1} < J < c_1$.
By these curvilinear coordinates we find for any number $\kappa > \ell$:
\begin{align*}
\int_{\text{shaded area}} |u(x)|^p\, dx
&\leq c_1 \int_{\text{part of } M} \left( \int_0^\kappa |u(y)|^p\, dy_n \right) d\tilde y \\
&\leq c_1 \int_{\text{part of } M} \frac1p\, \kappa^p \left( \int_0^\kappa \left| \frac{\partial}{\partial y_n}\, u(y) \right|^p dy_n \right) d\tilde y \\
&\leq c_1 \int_{\text{part of } M} \frac1p\, \kappa^p \left( \int_0^\kappa |(\nabla u)(y)|^p\, dy_n \right) d\tilde y \\
&\leq c_1^2\; \frac1p\, \kappa^p \int_{\text{shaded area}} |(\nabla u)(x)|^p\, dx.
\end{align*}
Filling up $\Omega$ appropriately yields the result.
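The one-dimensional step of the proof, $\int_0^\ell |u|^p\,dx \leq \frac1p \ell^{\,p} \int_0^\ell |u'|^p\,dx$ for $u(0)=0$, is easy to test numerically. The sketch below is our own check (not part of the notes), using a midpoint rule with $u = \sin$, $\ell = 1$ and $p = 2$:

```python
import math

def check_poincare_1d(u, du, ell, p, n=100_000):
    # midpoint-rule evaluation of both sides of
    #   int_0^ell |u|^p dx  <=  (ell^p / p) int_0^ell |u'|^p dx
    # for u in C^1[0, ell] with u(0) = 0
    h = ell / n
    xs = [(i + 0.5) * h for i in range(n)]
    lhs = sum(abs(u(x)) ** p for x in xs) * h
    rhs = (ell ** p / p) * sum(abs(du(x)) ** p for x in xs) * h
    return lhs, rhs

lhs, rhs = check_poincare_1d(math.sin, math.cos, ell=1.0, p=2)
print(lhs, rhs)
```

For this example the exact values are $\int_0^1 \sin^2 = \tfrac12 - \tfrac14\sin 2$ and $\tfrac12 \int_0^1 \cos^2 = \tfrac12(\tfrac12 + \tfrac14 \sin 2)$, and the inequality holds with room to spare.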
Exercise 18 Show that the result of Theorem 2.1.9 indeed implies that there are constants $c_1, c_2 \in \mathbb{R}^+$ such that
\[
c_1\, |u|_{W_0^{1,2}(\Omega)} \leq \|u\|_{W^{1,2}(\Omega)} \leq c_2\, |u|_{W_0^{1,2}(\Omega)}
\]
for all $u \in W_0^{1,2}(\Omega)$.
Exercise 19 Suppose that $u \in C^2([0,1]^2)$ with $u = 0$ on $\partial((0,1)^2)$. Show the following: there exists $c_1 > 0$ such that
\[
\int_{[0,1]^2} u^2\, dx \leq c_1 \int_{[0,1]^2} \left( u_{xx}^2 + u_{yy}^2 \right) dx.
\]
Exercise 20 Suppose that $u \in C^2([0,1]^2)$ with $u = 0$ on $\partial((0,1)^2)$. Show the following: there exists $c_2 > 0$ such that
\[
\int_{[0,1]^2} u^2\, dx \leq c_2 \int_{[0,1]^2} (\Delta u)^2\, dx.
\]
Hint: show that for these functions $u$ the following holds:
\[
\int_{[0,1]^2} u_{xx}\, u_{yy}\, dx = \int_{[0,1]^2} (u_{xy})^2\, dx \geq 0.
\]
Exercise 21 Suppose that $u \in C^2([0,1]^2)$ with $u(0,x_2) = u(1,x_2) = 0$ for $x_2 \in [0,1]$. Show the following: there exists $c_3 > 0$ such that
\[
\int_{[0,1]^2} u^2\, dx \leq c_3 \int_{[0,1]^2} \left( u_{xx}^2 + u_{yy}^2 \right) dx.
\]
Exercise 22 Suppose that $u \in C^2([0,1]^2)$ with $u(0,x_2) = u(1,x_2) = 0$ for $x_2 \in [0,1]$. Prove or give a counterexample to: there exists $c_4 > 0$ such that
\[
\int_{[0,1]^2} u^2\, dx \leq c_4 \int_{[0,1]^2} (\Delta u)^2\, dx.
\]
Exercise 23 Here are three spaces: $W^{2,2}(0,1)$, $W^{2,2}(0,1) \cap W_0^{1,2}(0,1)$ and $W_0^{2,2}(0,1)$, which are all equipped with the norm $\|\cdot\|_{W^{2,2}(0,1)}$. Consider the functions $f_\alpha(x) = x^\alpha (1-x)^\alpha$ for $\alpha \in \mathbb{R}$. Complete the next sentences:

1. If $\alpha$ ....., then $f_\alpha \in W^{2,2}(0,1)$.

2. If $\alpha$ ....., then $f_\alpha \in W^{2,2}(0,1) \cap W_0^{1,2}(0,1)$.

3. If $\alpha$ ....., then $f_\alpha \in W_0^{2,2}(0,1)$.
2.2 Restricting and extending
If $\Omega \subset \mathbb{R}^n$ is bounded and $\partial\Omega \in C^1$ then by assumption we may split the boundary in finitely many pieces, $i = 1,\dots,m$, and we may describe each of these by
\[
\left\{ y_n^{(i)} = f_i(y_1^{(i)},\dots,y_{n-1}^{(i)});\; (y_1^{(i)},\dots,y_{n-1}^{(i)}) \in A_i \right\}
\]
for some bounded open set $A_i \subset \mathbb{R}^{n-1}$. We may even impose higher regularity of the boundary:
Definition 2.2.1 For a bounded domain $\Omega$ we say $\partial\Omega \in C^{m,\alpha}$ if there is a finite set of functions $f_i$, open sets $A_i$ and Cartesian coordinate systems as in Definition 2.1.8, and if moreover $f_i \in C^{m,\alpha}(\bar A_i)$.
First let us consider multiplication operators.
Lemma 2.2.2 Fix $\Omega \subset \mathbb{R}^n$ bounded and $\zeta \in C_0^\infty(\mathbb{R}^n)$. Consider the multiplication operator $M(u) = \zeta u$. This operator $M$ is bounded in any of the spaces $C^m(\bar\Omega)$ and $C^{m,\alpha}(\bar\Omega)$ with $m \in \mathbb{N}$ and $\alpha \in (0,1]$, and $W^{m,p}(\Omega)$ and $W_0^{m,p}(\Omega)$ with $m \in \mathbb{N}$ and $p \in (1,\infty)$.
Exercise 24 Prove this lemma.
Often it is convenient to study the behaviour of some function locally. One of the tools is the so-called partition of unity.
Definition 2.2.3 A set of functions $\{\zeta_i\}_{i=1}^{\ell}$ is called a $C^\infty$ partition of unity for $\Omega$ if the following holds:

1. $\zeta_i \in C_0^\infty(\mathbb{R}^n)$ for all $i$;

2. $0 \leq \zeta_i(x) \leq 1$ for all $i$ and $x \in \Omega$;

3. $\sum_{i=1}^{\ell} \zeta_i(x) = 1$ for $x \in \Omega$.
Sometimes it is useful to work with a partition of unity of $\Omega$ that coincides with the coordinate systems mentioned in Definitions 2.1.8 and 2.2.1. In fact it is possible to find a partition of unity $\{\zeta_i\}_{i=1}^{\ell+1}$ such that
\[
\begin{cases}
\operatorname{support}(\zeta_i) \subset B_i & \text{for } i = 1,\dots,\ell, \\
\operatorname{support}(\zeta_{\ell+1}) \subset \Omega.
\end{cases} \tag{2.1}
\]
Any function $u$ defined on $\Omega$ can now be written as
\[
u = \sum_{i=1}^{\ell+1} \zeta_i\, u
\]
and by Lemma 2.2.2 the functions $u_i := \zeta_i u$ have the same regularity as $u$ but are now restricted to an area where we can deal with them. Indeed we find $\operatorname{support}(u_i) \subset \operatorname{support}(\zeta_i)$.
Definition 2.2.4 A mollifier on $\mathbb{R}^n$ is a function $J_1$ with the following properties:

1. $J_1 \in C^\infty(\mathbb{R}^n)$;

2. $J_1(x) \geq 0$ for all $x \in \mathbb{R}^n$;

3. $\operatorname{support}(J_1) \subset \{x \in \mathbb{R}^n;\; |x| \leq 1\}$;

4. $\int_{\mathbb{R}^n} J_1(x)\, dx = 1$.
By rescaling one arrives at the family of functions $J_\varepsilon(x) = \varepsilon^{-n} J_1(\varepsilon^{-1} x)$ that goes in some sense to the $\delta$-function in $0$. The nice property is the following. If $u$ has compact support in $\Omega$ we may extend $u$ by $0$ outside of $\Omega$. Then, for $u \in C^{m,\alpha}(\bar\Omega)$ respectively $u \in W^{m,p}(\Omega)$, we may define
\[
u_\varepsilon(x) := (J_\varepsilon * u)(x) = \int_{y\in\mathbb{R}^n} J_\varepsilon(x-y)\, u(y)\, dy,
\]
to find a family of $C^\infty$ functions that approximates $u$ when $\varepsilon \downarrow 0$ in $\|\cdot\|_{C^{m,\alpha}(\bar\Omega)}$ respectively in $\|\cdot\|_{W^{m,p}(\Omega)}$ norm.
[Figure: mollifying a jump in $u$, $u'$ and $u''$.]
Example 2.2.5 The standard example of a mollifier is the function
\[
\rho(x) = \begin{cases}
c\, \exp\left( \dfrac{-1}{1-|x|^2} \right) & \text{for } |x| < 1, \\[2mm]
0 & \text{for } |x| \geq 1,
\end{cases}
\]
where the $c$ is fixed by condition 4 in Definition 2.2.4.
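In one dimension the constant $c$ and the smoothing effect can be computed directly. The following sketch is our own (the notes only give the formula for $\rho$): it normalizes $\rho$ numerically and mollifies the jump function $\operatorname{sign}$.

```python
import math

def rho_unnormalized(x):
    # exp(-1/(1-x^2)) on |x| < 1 and zero outside: C^infinity with compact support
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

# fix c by condition 4 of Definition 2.2.4 (one dimension, midpoint rule)
n = 40_000
h = 2.0 / n
grid = [-1.0 + (i + 0.5) * h for i in range(n)]
c = 1.0 / (sum(rho_unnormalized(x) for x in grid) * h)

def J(eps, x):
    # the rescaled family J_eps(x) = eps^{-1} J_1(x / eps)
    return c * rho_unnormalized(x / eps) / eps

def mollify_sign(eps, x):
    # (J_eps * sign)(x): smooth, and equal to sign(x) for |x| > eps
    return sum(J(eps, x - y) * math.copysign(1.0, y) for y in grid) * h

print(mollify_sign(0.1, 0.5), mollify_sign(0.1, 0.0))
```

Away from the jump the mollified function reproduces $\pm 1$, while near $0$ it interpolates smoothly, exactly as in the figure above.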
There are other ways of restricting but let us now focus on extending a function. Without any assumption on the boundary this can become very involved. For bounded $\Omega$ with at least $\partial\Omega \in C^1$ we may define a bounded extension operator. But first let us consider extension operators from $C^m[0,1]$ to $C^m[-1,1]$. For $m = 0$ one proceeds by a simple reflection. Set
\[
E_0(u)(x) = \begin{cases}
u(x) & \text{for } 0 \leq x \leq 1, \\
u(-x) & \text{for } -1 \leq x < 0,
\end{cases}
\]
and one directly checks that $E_0(u) \in C[-1,1]$ for $u \in C[0,1]$ and moreover
\[
\|E_0(u)\|_{C[-1,1]} \leq \|u\|_{C[0,1]}.
\]
A simple and elegant procedure does it for $m > 0$. Set
\[
E_m(u)(x) = \begin{cases}
u(x) & \text{for } 0 \leq x \leq 1, \\
\sum_{k=1}^{m+1} \alpha_{m,k}\, u\!\left(-\tfrac1k x\right) & \text{for } -1 \leq x < 0,
\end{cases}
\]
and compute the coefficients $\alpha_{m,k}$ from the equations coming out of $(E_m(u))^{(j)}(0^+) = (E_m(u))^{(j)}(0^-)$, namely
\[
1 = \sum_{k=1}^{m+1} \alpha_{m,k} \left( -\frac1k \right)^{j} \quad \text{for } j = 0,1,\dots,m. \tag{2.2}
\]
Then $E_m(u) \in C^m[-1,1]$ for $u \in C^m[0,1]$ and moreover, with $c_m = \sum_{k=1}^{m+1} |\alpha_{m,k}|$:
\[
\|E_m(u)\|_{C^m[-1,1]} \leq c_m\, \|u\|_{C^m[0,1]}.
\]
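The linear system (2.2) is a small Vandermonde-type system in the nodes $-1/k$ and can be solved exactly. Below is a sketch in Python with rational arithmetic (our own helper, not part of the notes); for $m = 1$ it returns the classical reflection coefficients $\alpha_{1,1} = -3$, $\alpha_{1,2} = 4$.

```python
from fractions import Fraction

def extension_coefficients(m):
    # solve system (2.2):  1 = sum_{k=1}^{m+1} alpha_{m,k} (-1/k)^j,  j = 0..m,
    # exactly, by Gauss-Jordan elimination over the rationals
    n = m + 1
    A = [[Fraction(-1, k) ** j for k in range(1, n + 1)] + [Fraction(1)]
         for j in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[r][n] / A[r][r] for r in range(n)]

print(extension_coefficients(1))  # the second-order reflection: -3 and 4
```

The nodes $-1/k$ are distinct, so the system is nonsingular for every $m$ and the coefficients are uniquely determined.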
Exercise 25 Show that there is $c_{p,m} \in \mathbb{R}^+$ such that for all $u \in C^m[0,1]$:
\[
\|E_m(u)\|_{W^{m,p}(-1,1)} \leq c_{p,m}\, \|u\|_{W^{m,p}(0,1)}.
\]
Exercise 26 Let $\Omega \subset \mathbb{R}^{n-1}$ be a bounded domain. Convince yourself that
\[
E_m(u)(x', x_n) = \begin{cases}
u(x', x_n) & \text{for } 0 \leq x_n \leq 1, \\
\sum_{k=1}^{m+1} \alpha_{m,k}\, u\!\left(x', -\tfrac1k x_n\right) & \text{for } -1 \leq x_n < 0,
\end{cases}
\]
is a bounded extension operator from $C^m(\Omega \times [0,1])$ to $C^m(\Omega \times [-1,1])$.
Lemma 2.2.6 Let $\Omega$ and $\Omega'$ be bounded domains in $\mathbb{R}^n$ and $\partial\Omega \in C^m$ with $m \in \mathbb{N}^+$. Suppose that $\bar\Omega \subset \Omega'$. Then there exists a linear extension operator $E_m$ from $C^m(\bar\Omega)$ to $C_0^m(\bar\Omega')$ such that:

1. $(E_m u)_{|\Omega} = u$;

2. $(E_m u)_{|\mathbb{R}^n \setminus \Omega'} = 0$;

3. there are $c_{\Omega,\Omega',m}$ and $c_{\Omega,\Omega',m,p} \in \mathbb{R}^+$ such that for all $u \in C^m(\bar\Omega)$:
\[
\|E_m u\|_{C^m(\bar\Omega')} \leq c_{\Omega,\Omega',m}\, \|u\|_{C^m(\bar\Omega)}, \tag{2.3}
\]
\[
\|E_m u\|_{W^{m,p}(\Omega')} \leq c_{\Omega,\Omega',m,p}\, \|u\|_{W^{m,p}(\Omega)}. \tag{2.4}
\]
Proof. Let the $A_i$, $f_i$, $B_i$ and $\zeta_i$ be as in Definition 2.1.8 and (2.1). Let us consider $u_i = \zeta_i u$ in the coordinates $\{y_1^{(i)},\dots,y_n^{(i)}\}$. First we will flatten this part of the boundary by replacing the last coordinate by $y_n^{(i)} = \tilde y_n^{(i)} + f_i(y_1^{(i)},\dots,y_{n-1}^{(i)})$. Writing $u_i$ in these new coordinates $\{y_*^{(i)}, \tilde y_n^{(i)}\}$, with $y_*^{(i)} = (y_1^{(i)},\dots,y_{n-1}^{(i)})$, and setting $B_i^*$ the transformed box, we will first assume that $u_i$ is extended by zero for $\tilde y_n^{(i)} > 0$ and large. Now we may define the extension of $u_i$ by
\[
\bar u_i(y_*^{(i)}, \tilde y_n^{(i)}) = \begin{cases}
u_i(y_*^{(i)}, \tilde y_n^{(i)}) & \text{if } \tilde y_n^{(i)} \geq 0, \\
\sum_{k=1}^{m+1} \alpha_k\, u_i\!\left(y_*^{(i)}, -\tfrac1k \tilde y_n^{(i)}\right) & \text{if } \tilde y_n^{(i)} < 0,
\end{cases}
\]
where the $\alpha_k$ are defined by the equations in (2.2). Assuming that $u \in C^m(\bar\Omega)$ we find that with the coefficients chosen this way the function $\bar u_i \in C^m(\bar B_i^*)$ and its derivatives of order $\leq m$ are continuous. Moreover, also after the reverse transformation the function $\bar u_i \in C^m(\bar B_i)$. Indeed this fact remains due to the assumptions on $f_i$. It is even true that there is a constant $C_{i,\partial\Omega}$, which just depends on the properties of the $i^{\text{th}}$ local coordinate system and $\zeta_i$, such that
\[
\|\bar u_i\|_{W^{m,p}(B_i)} \leq C_{i,\partial\Omega}\, \|u_i\|_{W^{m,p}(B_i\cap\Omega)}. \tag{2.5}
\]
An approximation argument shows that (2.5) even holds for all $u \in W^{m,p}(\Omega)$. Up to now we have constructed an extension operator $\tilde E_m : W^{m,p}(\Omega) \to W^{m,p}(\mathbb{R}^n)$ by
\[
\tilde E_m(u) = \sum_{i=1}^{\ell} \bar u_i + \zeta_{\ell+1}\, u. \tag{2.6}
\]
As a last step let $\chi$ be a $C_0^\infty$ function such that $\chi(x) = 1$ for $x \in \bar\Omega$ and $\operatorname{support}(\chi) \subset \Omega'$. We define the extension operator $E_m$ by
\[
E_m(u) = \chi \left( \sum_{i=1}^{\ell} \bar u_i + \zeta_{\ell+1}\, u \right).
\]
Notice that we repeatedly used Lemma 2.2.2 in order to obtain the estimate in (2.4).

It remains to show that $E_m(u) \in W_0^{m,p}(\Omega')$. Due to the assumption that $\operatorname{support}(\chi) \subset \Omega'$ we find that $\operatorname{support}(J_\varepsilon * E_m(u)) \subset \Omega'$ for $\varepsilon$ small enough, where $J_\varepsilon$ is a mollifier. Hence $(J_\varepsilon * E_m(u)) \in C_0^\infty(\Omega')$ and since $J_\varepsilon * E_m(u)$ approaches $E_m u$ in $W^{m,p}(\Omega')$-norm for $\varepsilon \downarrow 0$ one finds $E_m u \in W_0^{m,p}(\Omega')$.
Exercise 27 A square inside a disk: $\Omega_1 = (-1,1)^2$ and $\Omega_1' = \left\{x \in \mathbb{R}^2;\; |x| < 2\right\}$. Construct a bounded extension operator $E : W^{2,2}(\Omega_1) \to W_0^{2,2}(\Omega_1')$.
Exercise 28 Think about the similar question in case of

an equilateral triangle: $\Omega_2 = \left\{ (x,y);\; -\tfrac12 < y < 1 - \sqrt{3}\,|x| \right\}$,

a regular pentagon: $\Omega_3 = \operatorname{co}\left\{(\cos(2k\pi/5), \sin(2k\pi/5));\; k = 0,1,\dots,4\right\}^{o}$,

a US-army star: $\Omega_4 = ....$

Here $\operatorname{co}(A)$ is the convex hull of $A$; the small $^{o}$ means the open interior.

[Figure: hint for the triangle.]
Above we have defined a mollifier on $\mathbb{R}^n$. If we want to use such a mollifier on a bounded domain a difficulty arises when $u$ doesn't have a compact support in $\Omega$. In that case we have to use the last lemma first and extend the function. Notice that we don't need the last part of the proof of that lemma but may use $\tilde E_m$ defined in (2.6).
Exercise 29 Let $\Omega$ and $\Omega'$ be bounded domains in $\mathbb{R}^n$ with $\bar\Omega \subset \Omega'$ and let $\varepsilon > 0$.

1. Suppose that $u \in C(\bar\Omega')$. Prove that $u_\varepsilon(x) := \int_{y\in\Omega'} J_\varepsilon(x-y)\, u(y)\, dy$ is such that $u_\varepsilon \to u$ in $C(\bar\Omega)$ for $\varepsilon \downarrow 0$.

2. Let $\gamma \in (0,1]$ and suppose that $u \in C^\gamma(\bar\Omega')$. Prove that $u_\varepsilon \to u$ in $C^\gamma(\bar\Omega)$ for $\varepsilon \downarrow 0$.

3. Let $u \in C^k(\bar\Omega')$ and $\alpha \in \mathbb{N}^n$ with $|\alpha| \leq k$. Show that $\left(\frac{\partial}{\partial x}\right)^{\alpha} u_\varepsilon = \left( \left(\frac{\partial}{\partial x}\right)^{\alpha} u \right)_\varepsilon$ in $\Omega$ if $\varepsilon < \inf\left\{|x - x'|;\; x \in \partial\Omega,\; x' \in \partial\Omega'\right\}$.
Exercise 30 Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and let $\varepsilon > 0$.

1. Derive from Hölder's inequality that
\[
\int |a|\,|b|\; dx \leq \left( \int |a|\; dx \right)^{\frac1q} \left( \int |a|\,|b|^p\; dx \right)^{\frac1p}
\]
for $p, q \in (1,\infty)$ with $\frac1p + \frac1q = 1$.

2. Suppose that $u \in L^p(\Omega)$ with $p < \infty$. Set $\bar u(x) = u(x)$ for $x \in \Omega$ and $\bar u(x) = 0$ elsewhere. Prove that $u_\varepsilon(x) := \int_{y\in\mathbb{R}^n} J_\varepsilon(x-y)\, \bar u(y)\, dy$ satisfies
\[
\|u_\varepsilon\|_{L^p(\Omega)} \leq \|u\|_{L^p(\Omega)}.
\]

3. Use that $C(\bar\Omega')$ is dense in $L^p(\Omega')$ and the previous exercise to conclude that $u_\varepsilon \to u$ in $L^p(\Omega)$.
2.3 Traces
We have seen that $u \in W_0^{1,2}(\Omega)$ means $u = 0$ on $\partial\Omega$ in some sense, but not $\frac{\partial}{\partial n} u$ being zero on the boundary, although we defined $W_0^{1,2}(\Omega)$ by approximation through functions in $C_0^\infty(\Omega)$. What can we do if the boundary data we are interested in are nonhomogeneous? The trace operator solves that problem.
Theorem 2.3.1 Let $p \in (1,\infty)$ and assume that $\Omega$ is bounded and $\partial\Omega \in C^1$.

A. For $u \in W^{1,p}(\Omega)$ let $\{u_m\}_{m=1}^\infty \subset C^1(\bar\Omega)$ be such that $\|u - u_m\|_{W^{1,p}(\Omega)} \to 0$. Then $u_{m|\partial\Omega}$ converges in $L^p(\partial\Omega)$, say to $v \in L^p(\partial\Omega)$, and the limit $v$ only depends on $u$. So $T : W^{1,p}(\Omega) \to L^p(\partial\Omega)$ with $Tu = v$ is well defined.

B. The following holds:

1. $T$ is a bounded linear operator from $W^{1,p}(\Omega)$ to $L^p(\partial\Omega)$;

2. if $u \in W^{1,p}(\Omega) \cap C(\bar\Omega)$, then $Tu = u_{|\partial\Omega}$.
Remark 2.3.2 This $T$ is called the trace operator. Bounded means there is $C_{p,\Omega}$ such that
\[
\|Tu\|_{L^p(\partial\Omega)} \leq C_{p,\Omega}\, \|u\|_{W^{1,p}(\Omega)}. \tag{2.7}
\]
Proof. We may assume that there are finitely many maps and sets of Cartesian coordinate systems as in Definition 2.1.8. Let $\{\zeta_i\}_{i=1}^m$ be a $C^\infty$ partition of unity for $\Omega$ that fits with the finitely many boundary maps:
\[
\partial\Omega \cap \operatorname{support}(\zeta_i) \subset \partial\Omega \cap \left\{ y_n^{(i)} = f_i(y_1^{(i)},\dots,y_{n-1}^{(i)});\; (y_1^{(i)},\dots,y_{n-1}^{(i)}) \in A_i \right\}.
\]
We may choose such a partition since these local coordinate systems do overlap; the $A_i$ are open. Now for the sake of simpler notation let us assume these boundary parts are even flat. For this assumption one will pay by a factor $(1 + |\nabla f_i|^2)^{1/2}$ in the integrals, but by the $C^1$ assumption for $\partial\Omega$ the values of $|\nabla f_i|^2$ are uniformly bounded.
First let us suppose that $u \in C^1(\bar\Omega)$ and derive an estimate as in (2.7). Integrating inwards from the (flat) boundary part $A_i \subset \mathbb{R}^{n-1}$ gives, with $\ell$ such that $x_n \geq \ell$ lies outside the support of $\zeta_i$, that
\begin{align*}
\int_{A_i} \zeta_i\, |u|^p\, dx'
&= -\int_{A_i} \int_0^{\ell} \frac{\partial}{\partial x_n} \left( \zeta_i\, |u|^p \right) dx_n\, dx'
 = -\int_{A_i\times(0,\ell)} \left( \frac{\partial \zeta_i}{\partial x_n}\, |u|^p + p\, \zeta_i\, |u|^{p-2}\, u\, \frac{\partial u}{\partial x_n} \right) dx \\
&\leq C \int_{A_i\times(0,\ell)} \left( |u|^p + |u|^{p-1} \left| \frac{\partial u}{\partial x_n} \right| \right) dx \\
&\leq C\, \frac{2p-1}{p} \int_{A_i\times(0,\ell)} |u|^p\, dx + C\, \frac1p \int_{A_i\times(0,\ell)} \left| \frac{\partial u}{\partial x_n} \right|^p dx \\
&\leq \tilde C \int_{A_i\times(0,\ell)} \left( |u|^p + |\nabla u|^p \right) dx.
\end{align*}
In the one but last step we used Young's inequality:
\[
ab \leq \frac{1}{p}\, a^p + \frac{1}{q}\, b^q \quad \text{for } a, b \geq 0 \text{ and } p, q > 1 \text{ with } \frac1p + \frac1q = 1. \tag{2.8}
\]
Combining such an estimate for all boundary parts we find, adding finitely many constants, that
\[
\int_{\partial\Omega} |u|^p\, dx' \leq C \int_\Omega \left( |u|^p + |\nabla u|^p \right) dx.
\]
So
\[
\left\| u_{|\partial\Omega} \right\|_{L^p(\partial\Omega)} \leq C\, \|u\|_{W^{1,p}(\Omega)} \quad \text{for } u \in C^1(\bar\Omega), \tag{2.9}
\]
and if $u_m \in C^1(\bar\Omega)$ with $u_m \to u$ in $W^{1,p}(\Omega)$ then $\left\| u_{|\partial\Omega} - u_{m|\partial\Omega} \right\|_{L^p(\partial\Omega)} \leq C\, \|u - u_m\|_{W^{1,p}(\Omega)} \to 0$. So for $u \in C^1(\bar\Omega)$ the operator $T$ is well-defined and $Tu = u_{|\partial\Omega}$.
Now assume that $u \in W^{1,p}(\Omega)$ without the $C^1(\bar\Omega)$ restriction. Then there are $C^\infty(\bar\Omega)$ functions $u_m$ converging to $u$ and
\[
\|Tu_m - Tu_k\|_{L^p(\partial\Omega)} \leq C\, \|u_m - u_k\|_{W^{1,p}(\Omega)}. \tag{2.10}
\]
So $\{Tu_m\}_{m=1}^\infty$ is a Cauchy sequence in $L^p(\partial\Omega)$ and hence converges. By (2.9) one also finds that the limit does not depend on the chosen sequence: if $u_m \to u$ and $v_m \to u$ in $W^{1,p}(\Omega)$, then $\|Tu_m - Tv_m\|_{L^p(\partial\Omega)} \leq C\, \|u_m - v_m\|_{W^{1,p}(\Omega)} \to 0$. So $Tu := \lim_{m\to\infty} Tu_m$ is well defined in $L^p(\partial\Omega)$ for all $u \in W^{1,p}(\Omega)$.
The fact that $T$ is bounded also follows from (2.9):
\[
\|Tu\|_{L^p(\partial\Omega)} = \lim_{m\to\infty} \|Tu_m\|_{L^p(\partial\Omega)} \leq C \lim_{m\to\infty} \|u_m\|_{W^{1,p}(\Omega)} = C\, \|u\|_{W^{1,p}(\Omega)}.
\]
It remains to show that if $u \in W^{1,p}(\Omega) \cap C(\bar\Omega)$ then $Tu = u_{|\partial\Omega}$. To complete that argument we use that functions in $W^{1,p}(\Omega) \cap C(\bar\Omega)$ can be approximated by a sequence of $C^1(\bar\Omega)$-functions $\{u_m\}_{m=1}^\infty$ in $W^{1,p}(\Omega)$-sense and such that $u_m \to u$ uniformly in $\bar\Omega$. For example, since $\partial\Omega \in C^1$ one may extend functions in a small $\varepsilon_1$-neighborhood outside of $\bar\Omega$ by a local reflection as in Lemma 2.2.6. Setting $\tilde u = E(u)$ one finds $\tilde u \in W^{1,p}(\Omega_\varepsilon) \cap C(\bar\Omega_\varepsilon)$ and $\|\tilde u\|_{W^{1,p}(\Omega_\varepsilon)} \leq C\, \|u\|_{W^{1,p}(\Omega)}$. A mollifier allows us to construct a sequence $\{u_m\}_{m=1}^\infty$ in $C^\infty(\bar\Omega)$ that approximates $u$ in $W^{1,p}(\Omega)$ as well as uniformly on $\bar\Omega$.
So
\[
\left\| u_{|\partial\Omega} - u_{m|\partial\Omega} \right\|_{L^p(\partial\Omega)}
\leq |\partial\Omega|^{1/p} \left\| u_{|\partial\Omega} - u_{m|\partial\Omega} \right\|_{C(\partial\Omega)}
\leq |\partial\Omega|^{1/p}\, \|u - u_m\|_{C(\bar\Omega)} \to 0
\]
and $\left\| Tu - u_{m|\partial\Omega} \right\|_{L^p(\partial\Omega)} \to 0$, implying that $Tu = u_{|\partial\Omega}$.
Exercise 31 Prove Young’s inequality (2.8).
2.4 Zero trace and $W_0^{1,p}(\Omega)$
So by now there are two ways of considering boundary conditions. First there is $W_0^{1,p}(\Omega)$ as the closure of $C_0^\infty(\Omega)$ in the $\|\cdot\|_{W^{1,p}(\Omega)}$-norm, and secondly the trace operator. It would be rather nice if for zero boundary conditions these would coincide.
Theorem 2.4.1 Fix $p \in (1,\infty)$ and suppose that $\Omega$ is bounded with $\partial\Omega \in C^1$. Suppose that $T$ is the trace operator from Theorem 2.3.1. Let $u \in W^{1,p}(\Omega)$. Then
\[
u \in W_0^{1,p}(\Omega) \quad \text{if and only if} \quad Tu = 0 \text{ on } \partial\Omega.
\]
Proof. ($\Rightarrow$) If $u \in W_0^{1,p}(\Omega)$ then we may approximate $u$ by $u_m \in C_0^\infty(\Omega)$, for which $Tu_m = u_{m|\partial\Omega} = 0$, and since $T$ is a bounded linear operator:
\[
\|Tu\|_{L^p(\partial\Omega)} = \|Tu - Tu_m\|_{L^p(\partial\Omega)} \leq c\, \|u - u_m\|_{W^{1,p}(\Omega)} \to 0.
\]
($\Leftarrow$) Assuming that $Tu = 0$ on $\partial\Omega$ we have to construct a sequence in $C_0^\infty(\Omega)$ that converges to $u$ in the $\|\cdot\|_{W^{1,p}(\Omega)}$-norm. Since $\partial\Omega \in C^1$ we again may work on the finitely many local coordinate systems as in Definition 2.2.1. We start by fixing a partition of unity $\{\zeta_i\}_{i=1}^m$ as in the proof of the previous theorem, which again corresponds with the local coordinate systems. Next we assume the boundary sets are flat, say $\partial\Omega \cap B_i = \Gamma_{B_i} \subset \mathbb{R}^{n-1}$, and set $\Omega \cap B_i = \Omega_{B_i}$ where $B_i$ is a box.
Since $Tu = 0$ we may take any sequence $\{u_m\}_{m=1}^\infty \subset C^1(\bar\Omega)$ such that $\|u_m - u\|_{W^{1,p}(\Omega)} \to 0$, and it follows from the definition of $T$ that $u_{m|\partial\Omega} = Tu_m \to 0$. Now we proceed in three steps. From $v(t) = v(0) + \int_0^t v'(s)\, ds$ one gets
\[
|v(t)| \leq |v(0)| + \int_0^t |v'(s)|\, ds
\]
and hence by Minkowski's and Young's inequality
\[
|v(t)|^p \leq c_p\, |v(0)|^p + c_p \left( \int_0^t |v'(s)|\, ds \right)^p \leq c_p\, |v(0)|^p + c_p\, t^{p-1} \int_0^t |v'(s)|^p\, ds.
\]
It implies that
\[
\int_{\Gamma_B} |u_m(x', x_n)|^p\, dx'
\leq c_p \int_{\Gamma_B} |u_m(x', 0)|^p\, dx' + c_p\, x_n^{p-1} \int_{\Gamma_B\times[0,x_n]} \left| \frac{\partial}{\partial x_n}\, u_m(x', s) \right|^p dx'\, ds,
\]
and since $u_{m|\partial\Omega} \to 0$ in $L^p(\partial\Omega)$ and $u_m \to u$ in $W^{1,p}(\Omega)$ we find that
\[
\int_{\Gamma_B} |u(x', x_n)|^p\, dx' \leq c_p\, x_n^{p-1} \int_{\Gamma_B\times[0,x_n]} \left| \frac{\partial}{\partial x_n}\, u(x', s) \right|^p dx'\, ds. \tag{2.11}
\]
Taking $\zeta \in C_0^\infty(\mathbb{R})$ such that
\[
\zeta(x) = 1 \text{ for } x \in [0,1], \qquad \zeta(x) = 0 \text{ for } x \notin [-1,2], \qquad 0 \leq \zeta \leq 1 \text{ in } \mathbb{R},
\]
we set
\[
w_m(x', x_n) = \left( 1 - \zeta(m\, x_n) \right) u(x', x_n).
\]
One directly finds for $m \to \infty$ that
\[
\int_{\Omega_B} |u - w_m|^p\, dx \leq \int_{\Gamma_B\times[0,2m^{-1}]} |u(x)|^p\, dx \to 0.
\]
For the gradient part of the $\|\cdot\|$-norm we find
\begin{align*}
\int_{\Omega_B} |\nabla u - \nabla w_m|^p\, dx
&= \int_{\Omega_B} \left| \nabla \left( \zeta(m\, x_n)\, u(x) \right) \right|^p dx \\
&\leq 2^{p-1} \int_{\Omega_B} \left( |\zeta(m\, x_n)|^p\, |\nabla u(x)|^p + m^p \left| \zeta'(m\, x_n) \right|^p |u(x)|^p \right) dx. \tag{2.12}
\end{align*}
For the first term of (2.12) when $m \to \infty$ we obtain
\[
\int_{\Omega_B} |\zeta(m\, x_n)|^p\, |\nabla u(x)|^p\, dx \leq \int_{\Gamma_B\times[0,2m^{-1}]} |\nabla u(x)|^p\, dx \to 0
\]
and from (2.11) for the second term of (2.12) when $m \to \infty$:
\begin{align*}
\int_{\Omega_B} m^p \left| \zeta'(m\, x_n) \right|^p |u(x)|^p\, dx
&= \int_0^{2m^{-1}} \int_{\Gamma_B} m^p \left| \zeta'(m\, x_n) \right|^p |u(x)|^p\, dx'\, dx_n \\
&\leq C_{p,\zeta}\, m^p \int_0^{2m^{-1}} x_n^{p-1} \left( \int_{\Gamma_B\times[0,2m^{-1}]} \left| \frac{\partial}{\partial x_n}\, u(x', t) \right|^p dx'\, dt \right) dx_n \\
&\leq C'_{p,\zeta} \int_{\Gamma_B\times[0,2m^{-1}]} \left| \frac{\partial}{\partial x_n}\, u(x', t) \right|^p dx'\, dt \to 0.
\end{align*}
So $w_m \to u$ in $\|\cdot\|_{W^{1,p}(\Omega)}$-norm and moreover $\operatorname{support}(w_m)$ lies compactly in $\Omega$. (Here we have to go back to the curved boundaries and glue the $w_m^{(i)}$ together.) Now there is room for using a mollifier. With $J_1$ as before we find for all $\varepsilon < \frac{1}{2m}$ (in $\Omega$ this becomes $\varepsilon < \frac{1}{cm}$ for some uniform constant depending on $\partial\Omega$) that $v_{\varepsilon,m} = J_\varepsilon * w_m$ is a $C_0^\infty(\Omega)$-function. We may choose a sequence of $\varepsilon_m$ such that $\|v_{\varepsilon_m,m} - w_m\|_{W^{1,p}(\Omega)} \leq \frac1m$ and hence $v_{\varepsilon_m,m} \to u$ in $W^{1,p}(\Omega)$.
Exercise 32 Prove Minkowski's inequality: for all $\varepsilon > 0$, $p \in (1,\infty)$ and $a, b \in \mathbb{R}$:
\[
|a + b|^p \leq (1 + \varepsilon)^{p-1}\, |a|^p + \left( 1 + \frac1\varepsilon \right)^{p-1} |b|^p.
\]
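Before proving it one can sample the claimed inequality numerically. The helper below is our own sketch: it returns the slack (right-hand side minus left-hand side), which should never be significantly negative.

```python
import random

def slack(a, b, p, eps):
    # right-hand side minus left-hand side of
    #   |a+b|^p <= (1+eps)^{p-1} |a|^p + (1+1/eps)^{p-1} |b|^p
    return ((1 + eps) ** (p - 1) * abs(a) ** p
            + (1 + 1 / eps) ** (p - 1) * abs(b) ** p
            - abs(a + b) ** p)

random.seed(0)
worst = min(slack(random.uniform(-5, 5), random.uniform(-5, 5),
                  random.choice([1.5, 2.0, 3.0]), random.choice([0.1, 1.0, 4.0]))
            for _ in range(10_000))
print(worst)
```

Equality occurs exactly on the ray $b = \varepsilon a$ with $a, b \geq 0$; for example $a = 1$, $b = \tfrac12$, $p = 2$, $\varepsilon = \tfrac12$ gives slack $0$, which also hints at how the convexity proof should go.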
2.5 Gagliardo, Nirenberg, Sobolev and Morrey
In the previous sections we have (re)visited some different function spaces. For bounded (smooth) domains there are the more obvious relations that $W^{m,p}(\Omega) \subset W^{k,p}(\Omega)$ when $m > k$ and $W^{m,p}(\Omega) \subset W^{m,q}(\Omega)$ when $p > q$, not to mention $C^{m,\alpha}(\bar\Omega) \subset C^{m,\beta}(\bar\Omega)$ when $\alpha > \beta$, or $C^m(\bar\Omega) \subset W^{m,p}(\Omega)$. But what about the less obvious relations? Are there conditions such that a Sobolev space is contained in some Hölder space? Or is it possible that $W^{m,p}(\Omega) \subset W^{k,q}(\Omega)$ with $m > k$ but $p < q$? Here we will only recall two of the most famous estimates that imply an answer to some of these questions.
Theorem 2.5.1 (Gagliardo-Nirenberg-Sobolev inequality) Fix $n \in \mathbb{N}^+$ and suppose that $1 \leq p < n$. Then there exists a constant $C = C_{p,n}$ such that
\[
\|u\|_{L^{\frac{pn}{n-p}}(\mathbb{R}^n)} \leq C\, \|\nabla u\|_{L^p(\mathbb{R}^n)} \quad \text{for all } u \in C_0^1(\mathbb{R}^n).
\]
Theorem 2.5.2 (Morrey's inequality) Fix $n \in \mathbb{N}^+$ and suppose that $n < p < \infty$. Then there exists a constant $C = C_{p,n}$ such that
\[
\|u\|_{C^{0,1-\frac np}(\mathbb{R}^n)} \leq C\, \|u\|_{W^{1,p}(\mathbb{R}^n)} \quad \text{for all } u \in C^1(\mathbb{R}^n).
\]
Remark 2.5.3 These two estimates form a basis for most of the Sobolev imbedding theorems. Let us try to supply you with a way to remember the correct coefficients. Obviously for $W^{k,p}(\Omega) \subset C^{\ell,\alpha}(\bar\Omega)$ and $W^{k,p}(\Omega) \subset W^{\ell,q}(\Omega)$ (with $q > p$) one needs $k > \ell$. Let us introduce some number $\kappa$ for the quality of the continuity. For the Hölder spaces $C^{m,\alpha}(\bar\Omega)$ this is 'just' the coefficient:
\[
\kappa_{C^{m,\alpha}} = m + \alpha.
\]
For the Sobolev spaces $W^{m,p}(\Omega)$ with $\Omega \subset \mathbb{R}^n$, whose functions have 'less differentiability' than the ones in $C^m(\bar\Omega)$, this 'less' corresponds with $-\frac1p$ for each dimension, and we set
\[
\kappa_{W^{m,p}} = m - \frac np.
\]
For $X_1 \subset X_2$ it should hold that $\kappa_{X_1} \geq \kappa_{X_2}$. The optimal estimates appear in the theorems above; so comparing $W^{1,p}$ and $L^q$, respectively $W^{1,p}$ and $C^\alpha$, one finds
\[
\begin{cases}
1 - \frac np \geq -\frac nq & \text{if } q \leq \frac{pn}{n-p} \text{ and } n > p, \\[1mm]
1 - \frac np \geq \alpha & \text{if } n < p.
\end{cases}
\]
This is just a way of remembering the constants. In case of equality of the $\kappa$'s an imbedding might hold or just fail.
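The bookkeeping with $\kappa$ is easy to mechanize. A sketch with exact rational arithmetic (the helper names are our own), confirming that the Sobolev conjugate exponent $q = \frac{pn}{n-p}$ keeps $\kappa$ fixed:

```python
from fractions import Fraction

def kappa_W(m, p, n):
    # kappa for W^{m,p}(Omega), Omega in R^n:  m - n/p
    return Fraction(m) - Fraction(n) / Fraction(p)

def kappa_C(m, alpha):
    # kappa for the Hoelder space C^{m,alpha}:  m + alpha
    return Fraction(m) + Fraction(alpha)

# the Sobolev conjugate q = pn/(n-p) keeps kappa equal: W^{1,p} vs L^q = W^{0,q}
n, p = 3, 2
q = Fraction(p * n, n - p)
print(kappa_W(1, p, n), kappa_W(0, q, n), q)
```

The same check reproduces the Morrey borderline: for $n = 3$, $p = 5$ one gets $\kappa_{W^{1,5}} = \tfrac25 = \kappa_{C^{0,2/5}}$, matching $\alpha = 1 - \frac np$.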
Exercise 33 Show that $u$ defined on $B_1 = \{x \in \mathbb{R}^n;\; |x| < 1\}$ with $n > 1$ by $u(x) = \log\left(\log\left(1 + \frac{1}{|x|}\right)\right)$ belongs to $W^{1,n}(B_1)$. It does not belong to $L^\infty(B_1)$.
Exercise 34 Set $\Omega = \{x \in \mathbb{R}^n;\; |x| < 1\}$ and define $u_\beta(x) = |x|^\beta$ for $x \in \Omega$. Compute for which $\beta$ it holds that:

1. $u_\beta \in C^{0,\alpha}(\bar\Omega)$;

2. $u_\beta \in W^{1,p}(\Omega)$;

3. $u_\beta \in L^q(\Omega)$.
Exercise 35 Set $\Omega = \{x \in \mathbb{R}^n;\; |x - (1,0,\dots,0)| < 1\}$ and define $u_\beta(x) = |x|^\beta$ for $x \in \Omega$. Compute for which $\beta$ it holds that:

1. $u_\beta \in C^{0,\alpha}(\partial\Omega)$;

2. $u_\beta \in W^{1,p}(\partial\Omega)$;

3. $u_\beta \in L^q(\partial\Omega)$.
Proof of Theorem 2.5.1. Since $u$ has a compact support we have (using $f(t) = f(a) + \int_a^t f'(s)\, ds$)
\[
|u(x)| = \left| \int_{-\infty}^{x_i} \frac{\partial u}{\partial x_i}(x_1,\dots,x_{i-1}, y_i, x_{i+1},\dots,x_n)\, dy_i \right| \leq \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_i} \right| dy_i,
\]
so that
\[
|u(x)|^{\frac{n}{n-1}} \leq \prod_{i=1}^n \left( \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_i} \right| dy_i \right)^{\frac{1}{n-1}}. \tag{2.13}
\]
When integrating with respect to $x_1$ one finds that on the right side there are $n-1$ integrals that are $x_1$-dependent and one integral that does not depend on $x_1$. So we can get out one factor for free and use a generalized Hölder inequality for the $n-1$ remaining ones:
\[
\int |c_1|^{\frac{1}{n-1}} |a_2|^{\frac{1}{n-1}} \cdots |a_n|^{\frac{1}{n-1}}\, dt \leq \left( |c_1| \int |a_2|\, dt \cdots \int |a_n|\, dt \right)^{\frac{1}{n-1}}.
\]
For (2.13) it gives us
\begin{align*}
\int_{-\infty}^{\infty} |u(x)|^{\frac{n}{n-1}}\, dx_1
&\leq \left( \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_1} \right| dy_1 \right)^{\frac{1}{n-1}} \int_{-\infty}^{\infty} \prod_{i=2}^n \left( \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_i} \right| dy_i \right)^{\frac{1}{n-1}} dx_1 \\
&\leq \left( \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_1} \right| dy_1 \right)^{\frac{1}{n-1}} \prod_{i=2}^n \left( \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_i} \right| dy_i\, dx_1 \right)^{\frac{1}{n-1}} \\
&= \left( \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_1} \right| dy_1 \; \prod_{i=2}^n \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_i} \right| dy_i\, dx_1 \right)^{\frac{1}{n-1}}.
\end{align*}
Repeating this argument one finds
\begin{align*}
\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |u(x)|^{\frac{n}{n-1}}\, dx_1\, dx_2
\leq{}& \left( \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_1} \right| dy_1\, dx_2 \right)^{\frac{1}{n-1}} \left( \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_2} \right| dy_2\, dx_1 \right)^{\frac{1}{n-1}} \times \\
&\times \prod_{i=3}^n \left( \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| \frac{\partial u}{\partial x_i} \right| dy_i\, dx_1\, dx_2 \right)^{\frac{1}{n-1}}
\end{align*}
and after $n$ steps
\[
\|u\|_{L^{\frac{n}{n-1}}}^{\frac{n}{n-1}} \leq \prod_{i=1}^n \left\| \frac{\partial u}{\partial x_i} \right\|_{L^1}^{\frac{1}{n-1}} \leq \|\nabla u\|_{L^1}^{\frac{n}{n-1}}, \tag{2.14}
\]
which completes the proof of Theorem 2.5.1 for $p = 1$. For $p \in (1,n)$ one uses (2.14) with $u$ replaced by $|u|^\alpha$ for appropriate $\alpha > 1$. Indeed, again with Hölder and $\frac1p + \frac1q = 1$:
\[
\|u\|_{L^{\alpha\frac{n}{n-1}}}^{\alpha} = \left\| |u|^\alpha \right\|_{L^{\frac{n}{n-1}}} \leq \left\| \nabla |u|^\alpha \right\|_{L^1} = \alpha \left\| |u|^{\alpha-1}\, \nabla u \right\|_{L^1} \leq \alpha \left\| |u|^{\alpha-1} \right\|_{L^q} \|\nabla u\|_{L^p} = \alpha\, \|u\|_{L^{(\alpha-1)q}}^{\alpha-1}\, \|\nabla u\|_{L^p}.
\]
If we take $\alpha > 1$ such that $\alpha \frac{n}{n-1} = (\alpha-1)q$, in other words $\alpha = \frac{q}{q - \frac{n}{n-1}}$ and $q > \frac{n}{n-1}$ should hold, we find $\|u\|_{L^{\alpha\frac{n}{n-1}}} \leq \alpha\, \|\nabla u\|_{L^p}$. Since $q > \frac{n}{n-1}$ coincides with $p < n$ and $\alpha \frac{n}{n-1} = \frac{np}{n-p}$, the claim follows.
Proof of Theorem 2.5.2. Let us fix $0 < s \leq r$; we first will derive an estimate in $B_r(0)$. Starting again with
\[
u(y) - u(0) = \int_0^1 y \cdot (\nabla u)(ry)\, dr
\]
one finds
\begin{align*}
\int_{|y|=s} |u(y) - u(0)|\, d\sigma_y
&\leq \int_{|y|=s} \int_0^1 |y \cdot (\nabla u)(ry)|\, dr\, d\sigma_y
\leq \int_{|y|=s} \int_0^1 s\, |(\nabla u)(ry)|\, dr\, d\sigma_y \\
&= s^{n-1} \int_{|w|=1} \int_0^s |\nabla u(tw)|\, dt\, d\sigma_w
= s^{n-1} \int_{|w|=1} \int_0^s \frac{|\nabla u(tw)|}{|tw|^{n-1}}\, t^{n-1}\, dt\, d\sigma_w \\
&= s^{n-1} \int_{|y|<s} \frac{|\nabla u(y)|}{|y|^{n-1}}\, dy
\leq s^{n-1} \int_{|y|<r} \frac{|\nabla u(y)|}{|y|^{n-1}}\, dy
\end{align*}
and an integration with respect to $s$ gives:
\[
\int_{|y|<r} |u(y) - u(0)|\, dy \leq \frac1n\, r^n \int_{|y|<r} \frac{|\nabla u(y)|}{|y|^{n-1}}\, dy. \tag{2.15}
\]
So with $\omega_n$ the surface area of the unit ball in $\mathbb{R}^n$:
$$\frac{\omega_n}{n} r^n |u(0)| \le \int_{|y|<r} |u(y) - u(0)|\,dy + \int_{|y|<r} |u(y)|\,dy \quad (2.16)$$
$$\le \frac{1}{n} r^n \int_{|y|<r} \frac{|\nabla u(y)|}{|y|^{n-1}}\,dy + \|u\|_{L^1(B_r(0))}$$
$$\le \frac{1}{n} r^n \|\nabla u\|_{L^p(B_r(0))} \left\| |\cdot|^{1-n} \right\|_{L^{\frac{p}{p-1}}(B_r(0))} + r^{n - \frac{n}{p}} \left( \frac{\omega_n}{n} \right)^{1 - \frac{1}{p}} \|u\|_{L^p(B_r(0))}. \quad (2.17)$$
For p > n one finds
$$\left\| |\cdot|^{1-n} \right\|_{L^{\frac{p}{p-1}}(B_r(0))} = \left( \omega_n \int_0^r s^{(1-n)\frac{p}{p-1} + n - 1}\,ds \right)^{\frac{p-1}{p}} = \left( \omega_n \frac{1}{1 - \frac{n-1}{p-1}} \right)^{\frac{p-1}{p}} r^{\frac{p-n}{p}}.$$
Since 0 is an arbitrary point, taking r = 1 we may conclude that there is $C = C_{n,p}$ such that for p > n
$$\sup_{x \in \mathbb{R}^n} |u(x)| \le C\,\|u\|_{W^{1,p}(\mathbb{R}^n)}.$$
A bound for the Hölder part of the norm remains to be proven. Let x and z be two points and fix $m = \frac{1}{2}x + \frac{1}{2}z$ and $r = \frac{1}{2}|x - z|$. As in (2.16), and using $B_r(m) \subset B_{2r}(x) \cap B_{2r}(z)$:
$$\frac{\omega_n}{n} r^n |u(x) - u(z)| = \int_{|y-m|<r} |u(x) - u(z)|\,dy$$
$$\le \int_{|y-m|<r} |u(y) - u(x)|\,dy + \int_{|y-m|<r} |u(y) - u(z)|\,dy$$
$$\le \int_{|y-x|<2r} |u(y) - u(x)|\,dy + \int_{|y-z|<2r} |u(y) - u(z)|\,dy$$
$$\le \frac{1}{n} 2^n r^n \left( \int_{|y-x|<2r} \frac{|\nabla u(y)|}{|y-x|^{n-1}}\,dy + \int_{|y-z|<2r} \frac{|\nabla u(y)|}{|y-z|^{n-1}}\,dy \right)$$
and reasoning as in (2.17):
$$|u(x) - u(z)| \le C_{n,p}\,r^{\frac{p-n}{p}}\,\|\nabla u\|_{L^p(B_r(0))}.$$
So
$$\frac{|u(x) - u(z)|}{|x - z|^{\alpha}} \le 2^{\alpha} C_{n,p}\,r^{1 - \frac{n}{p} - \alpha}\,\|\nabla u\|_{L^p(B_r(0))},$$
which is bounded when $x \to z$ if $\alpha < 1 - \frac{n}{p}$.
Week 3

Some new and old solution methods I

3.1 Direct methods in the calculus of variations

Whenever the problem is formulated in terms of an energy or some other quantity that should be minimized, one could try to skip the derivation of the Euler-Lagrange equation. Instead of trying to solve the corresponding boundary value problem one could try to minimize the functional directly.
Let E be the functional that we want to minimize. In order to do so we need the following:

A. There is some open bounded set K of functions and numbers $E_0 < E_1$ such that:

(a) the functional on K is bounded from below by $E_0$: for all $u \in K$: $E(u) \ge E_0$,

(b) on the boundary of K the functional is bounded from below by $E_1$: for all $u \in \partial K$: $E(u) \ge E_1$,

(c) somewhere inside K the functional has a value in between: there is $u_0 \in K$: $E(u_0) < E_1$.

Then $\tilde{E} := \inf_{u \in K} E(u)$ exists and hence we may assume that there is a minimizing sequence $\{u_n\}_{n=1}^{\infty} \subset K$, that is, a sequence such that
$$E(u_1) \ge E(u_2) \ge E(u_3) \ge \dots \ge \tilde{E} \quad \text{and} \quad \lim_{n \to \infty} E(u_n) = \tilde{E}.$$
This does not yet imply that there is a function $\tilde{u}$ such that $E(\tilde{u}) = \inf_{u \in K} E(u)$.
Definition 3.1.1 A function $\tilde{u} \in K$ such that $E(\tilde{u}) = \inf_{u \in K} E(u)$ is called a minimizer for E on K.

Sometimes there is just a local minimizer.

B. In order that $u_n$ leads us to a minimizer the following three properties would be helpful:

(a) the function space should be a Banach space that fits with the functional; there should be enough functions such that a minimizer is among them,

(b) the sequence $\{u_n\}_{n=1}^{\infty}$ converges or at least has a convergent subsequence in some topology (compactness of the set K), and

(c) if a sequence $\{u_n\}_{n=1}^{\infty}$ converges to $u_{\infty}$ in this topology, then it should also hold that $\lim_{n \to \infty} E(u_n) = E(u_{\infty})$ or, which is sufficient: $\lim_{n \to \infty} E(u_n) \ge E(u_{\infty})$.
After this heuristic introduction let us try to give some sufficient conditions. First let us remark that the properties mentioned under A all follow from coercivity.

Definition 3.1.2 Let $(X, \|\cdot\|)$ be a function space and suppose that $E : X \to \mathbb{R}$ is such that for some $f \in C\left( \mathbb{R}_0^+; \mathbb{R} \right)$ with $\lim_{t \to \infty} f(t) = \infty$ it holds that
$$E(u) \ge f(\|u\|), \quad (3.1)$$
then E is called coercive.
So let $(X, \|\cdot\|)$ be a function space such that (3.1) holds. Notice that this condition implies that we cannot use a norm with higher order derivatives than the ones appearing in the functional.

Now let us see how the conditions in A follow. The function f has a lower bound which we may use as $E_0$. Next take any function $u_0 \in X$ and take any $E_1 > E(u_0)$. Since $\lim_{t \to \infty} f(t) = \infty$ we may take M such that $f(t) \ge E_1$ for all $t \ge M$ and set $K = \{u \in X;\ \|u\| < M\}$. The assumptions in A are satisfied.
Now let us assume that $(X, \|\cdot\|)$ is some Sobolev space $W^{m,p}(\Omega)$ with $1 < p < \infty$ and that
$$K = \left\{ u \in W^{m,p}(\Omega);\ \text{'boundary conditions' and } \|u\|_{W^{m,p}(\Omega)} < M \right\}.$$
Obviously $W^{m,p}(\Omega)$ is not finite dimensional, so bounded sets are not precompact. So the minimizing sequence in K, although bounded, has no reason to be convergent in the norm of $W^{m,p}(\Omega)$. The result that saves us is the following.
Theorem 3.1.3 A bounded sequence $\{u_k\}_{k=1}^{\infty}$ in a reflexive Banach space has a weakly convergent subsequence.

N.B. The sequence $u_k$ converges weakly to u in the space X, in symbols $u_k \rightharpoonup u$ in X, means $\phi(u_k) \to \phi(u)$ for all $\phi \in X'$, the bounded linear functionals on X. This theorem can be found for example in H. Brezis: Analyse Fonctionnelle (Théorème III.27). And yes, the Sobolev spaces such as $W^{m,p}(\Omega)$ and $W_0^{m,p}(\Omega)$ with $1 < p < \infty$ are reflexive.
So although the minimizing sequence $\{u_n\}_{n=1}^{\infty}$ is not strongly convergent, it has at least a subsequence $\{u_{n_k}\}_{k=1}^{\infty}$ that converges weakly. Let us call $u_{\infty}$ this weak limit in $W^{m,p}(\Omega)$. So
$$u_{n_k} \rightharpoonup u_{\infty} \quad \text{weakly in } W^{m,p}(\Omega). \quad (3.2)$$
For the Sobolev space $W^{m,p}(\Omega)$ the statement in (3.2) coincides with
$$\left( \frac{\partial}{\partial x} \right)^{\alpha} u_{n_k} \rightharpoonup \left( \frac{\partial}{\partial x} \right)^{\alpha} u_{\infty} \quad \text{weakly in } L^p(\Omega)$$
for all multi-indices $\alpha = (\alpha_1, \dots, \alpha_n) \in \mathbb{N}^n$ with $|\alpha| = \alpha_1 + \dots + \alpha_n \le m$, where $\left( \frac{\partial}{\partial x} \right)^{\alpha} = \left( \frac{\partial}{\partial x_1} \right)^{\alpha_1} \dots \left( \frac{\partial}{\partial x_n} \right)^{\alpha_n}$.
The final touch comes from the assumption that E is 'sequentially weakly lower semicontinuous'.

Definition 3.1.4 Let $(X, \|\cdot\|)$ be a Banach space. The functional $E : X \to \mathbb{R}$ is called sequentially weakly lower semicontinuous if
$$u_k \rightharpoonup u \text{ weakly in } X \quad \text{implies} \quad E(u) \le \liminf_{k \to \infty} E(u_k).$$
We will collect the results discussed above in a theorem. Indeed, the coercivity condition gives us an open bounded set of functions K, numbers $E_0, E_1$, a function $u_0 \in K^o$ and hence a minimizing sequence in K. Theorem 3.1.3 supplies us with a weak limit $u_{\infty} \in K$ of a subsequence of the minimizing sequence, which is also a minimizing sequence itself (so $\inf_{u \in K} E(u) \le E(u_{\infty})$). And the sequentially weakly lower semicontinuity yields $E(u_{\infty}) \le \inf_{u \in K} E(u)$.

Theorem 3.1.5 Let E be a functional on $W^{m,p}(\Omega)$ (or $W_0^{m,p}(\Omega)$) with $1 < p < \infty$ which is:

1. coercive;

2. sequentially weakly lower semicontinuous.

Then there exists a minimizer u of E.
Remark 3.1.6 These two conditions are sufficient, but neither of them is necessary and most of the time they are too restrictive. Often one is only concerned with minimizing a functional E on some subset of X. If you are interested in variational methods please have a look at Chapter 8 in Evans' book or any other book concerned with direct methods in the calculus of variations.
Example 3.1.7 Let Ω be a bounded domain in $\mathbb{R}^n$ and suppose we want to minimize $E(u) = \int_{\Omega} \left( \frac{1}{2} |\nabla u|^2 - fu \right) dx$ for functions u that satisfy u = 0 on ∂Ω. The candidate for the reflexive Banach space is $W_0^{1,2}(\Omega)$ with the norm $\|\cdot\|$ defined by
$$\|u\|_{W_0^{1,2}} := \left( \int_{\Omega} |\nabla u|^2\,dx \right)^{\frac{1}{2}}.$$
Indeed this is a norm by Poincaré's inequality: there is $C \in \mathbb{R}^+$ such that
$$\int_{\Omega} u^2\,dx \le C \int_{\Omega} |\nabla u|^2\,dx$$
and hence $\frac{1}{\sqrt{1+C}} \|u\|_{W^{1,2}} \le \|u\|_{W_0^{1,2}} \le \|u\|_{W^{1,2}}$.

Since $W_0^{1,2}(\Omega)$ with $\langle u, v \rangle = \int_{\Omega} \nabla u \cdot \nabla v\,dx$ is even a Hilbert space, we may identify $\left( W_0^{1,2}(\Omega) \right)'$ and $W_0^{1,2}(\Omega)$.
The functional E is coercive: using the inequality of Cauchy-Schwarz,
$$E(u) = \int_{\Omega} \left( \tfrac{1}{2} |\nabla u|^2 - fu \right) dx \ge \tfrac{1}{2} \int_{\Omega} |\nabla u|^2\,dx - \left( \int_{\Omega} u^2\,dx \right)^{\frac{1}{2}} \left( \int_{\Omega} f^2\,dx \right)^{\frac{1}{2}}$$
$$\ge \tfrac{1}{2} \|u\|_{W_0^{1,2}}^2 - \|u\|_{W_0^{1,2}} \|f\|_{L^2} \ge \tfrac{1}{2} \|u\|_{W_0^{1,2}}^2 - \left( \tfrac{1}{4} \|u\|_{W_0^{1,2}}^2 + \|f\|_{L^2}^2 \right) \ge \tfrac{1}{4} \|u\|_{W_0^{1,2}}^2 - \|f\|_{L^2}^2.$$
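The direct method can also be carried out on a computer. The sketch below (Python/NumPy; grid size, step size and iteration count are ad-hoc choices of ours) minimizes the one-dimensional analogue of this functional, $E(u) = \int_0^1 \left( \tfrac{1}{2}|u'|^2 - u \right) dx$ with u(0) = u(1) = 0 and f = 1, by gradient descent on the discretized energy; the minimizer should solve $-u'' = 1$, i.e. $u(x) = \tfrac{1}{2}x(1-x)$.

```python
import numpy as np

# Direct minimization of the 1-d analogue of Example 3.1.7:
#   E(u) = int_0^1 ( (1/2)|u'|^2 - f u ) dx,  u(0) = u(1) = 0,  f = 1.
# Gradient descent on the discretized energy; the Euler-Lagrange
# equation is -u'' = 1 with exact solution u(x) = x(1-x)/2.
N = 50
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.ones(N + 1)
u = np.zeros(N + 1)              # initial guess; u = 0 on the boundary

def grad_E(u):
    g = np.zeros_like(u)         # boundary entries stay zero
    g[1:-1] = (2 * u[1:-1] - u[:-2] - u[2:]) / h - f[1:-1] * h
    return g

for _ in range(20000):
    u -= 0.2 * h * grad_E(u)     # small step keeps the descent stable

err = np.max(np.abs(u - x * (1 - x) / 2))
print(err)                       # tiny: the 3-point scheme is exact for -u'' = 1
```

The descent converges to the discrete Euler-Lagrange solution; since f is constant, the three-point stencil reproduces the exact quadratic minimizer at the nodes.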
The sequentially weakly lower semicontinuity goes as follows. If $u_n \rightharpoonup u$ weakly in $W_0^{1,2}(\Omega)$, then
$$E(u_n) - E(u) = \int_{\Omega} \left( \tfrac{1}{2} |\nabla u_n|^2 - \tfrac{1}{2} |\nabla u|^2 - f(u_n - u) \right) dx$$
$$= \int_{\Omega} \left( \tfrac{1}{2} |\nabla u_n - \nabla u|^2 + (\nabla u_n - \nabla u) \cdot \nabla u - f(u_n - u) \right) dx$$
$$\ge \int_{\Omega} \left( (\nabla u_n - \nabla u) \cdot \nabla u - f(u_n - u) \right) dx \to 0.$$
Here we also used that $\left( v \mapsto \int_{\Omega} fv\,dx \right) \in \left( W_0^{1,2}(\Omega) \right)'$. Hence the minimizer exists.
In this example we have used zero Dirichlet boundary conditions, which allowed us to use the well-known function space $W_0^{1,2}(\Omega)$. For nonzero boundary conditions a way out is to consider $W_0^{1,p}(\Omega) + g$, where g is an arbitrary $W^{1,p}(\Omega)$ function that satisfies the boundary conditions.
3.2 Solutions in flavours

In Example 3.1.7 we have seen that we found a minimizer $u \in W_0^{1,2}(\Omega)$, and since it is the minimizer of a differentiable functional it follows that this minimizer satisfies
$$\int (\nabla u \cdot \nabla \eta - f\eta)\,dx = 0 \quad \text{for all } \eta \in W_0^{1,2}(\Omega).$$
If we knew that $u \in W_0^{1,2}(\Omega) \cap W^{2,2}(\Omega)$, an integration by parts would show
$$\int (-\Delta u - f)\,\eta\,dx = 0 \quad \text{for all } \eta \in W_0^{1,2}(\Omega)$$
and hence $-\Delta u = f$ in $L^2(\Omega)$-sense.

Let us fix for the moment the problem
$$\begin{cases} -\Delta u = f & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega. \end{cases} \quad (3.3)$$

Definition 3.2.1 If $u \in W_0^{1,2}(\Omega)$ is such that
$$\int (\nabla u \cdot \nabla \eta - f\eta)\,dx = 0 \quad \text{for all } \eta \in W_0^{1,2}(\Omega)$$
holds, then u is called a weak solution of (3.3).

Definition 3.2.2 If $u \in W_0^{1,2}(\Omega) \cap W^{2,2}(\Omega)$ is such that $-\Delta u = f$ in $L^2(\Omega)$-sense, then u is called a strong solution of (3.3).
Remark 3.2.3 If a strong solution even satisfies $u \in C^2(\bar{\Omega})$, so that the equations in (3.3) hold pointwise, then u is called a classical solution.

Often one obtains a weak solution, and then one has to do some work in order to show that the solution has more regularity properties, that is, that it is a solution in the strong sense, a classical solution or even a $C^{\infty}$ solution. In the remainder we will consider an example showing that such upgrading is not an automated process.
Example 3.2.4 Suppose that we want to minimize
$$E(y) = \int_0^1 \left( 1 - (y'(x))^2 \right)^2 dx$$
with y(0) = y(1) = 0. The appropriate Sobolev space should be $W_0^{1,4}(0,1)$ and indeed E is coercive:
$$E(y) = \int_0^1 \left( 1 - 2(y')^2 + (y')^4 \right) dx \ge \int_0^1 \left( -1 + \tfrac{1}{2}(y')^4 \right) dx = \tfrac{1}{2} \|y\|_{W_0^{1,4}}^4 - 1.$$
Here we used $2a^2 \le \tfrac{1}{2}a^4 + 2$ (indeed $0 \le (\varepsilon^{-\frac{1}{2}}a - \varepsilon^{\frac{1}{2}}b)^2$ implies $2ab \le \varepsilon^{-1}a^2 + \varepsilon b^2$).
We will skip the sequentially weakly lower semicontinuity and just assume that there is a minimizer $y \in W_0^{1,4}(0,1)$. For such a minimizer we'll find through $\frac{\partial}{\partial \tau} E(y + \tau\eta)\big|_{\tau=0} = 0$ that
$$\int_0^1 \left( -4y' + 4(y')^3 \right) \eta'\,dx = 0 \quad \text{for all } \eta \in W_0^{1,4}(0,1).$$
Assuming that such a minimizer is a strong solution, $y \in W^{2,4}(0,1)$, we may integrate by parts to find the Euler-Lagrange equation:
$$4 \left( 1 - 3(y')^2 \right) y'' = 0.$$
So $y'' = 0$ or $(y')^2 = \frac{1}{3}$. A closer inspection gives y(x) = ax + b, and plugging in the boundary conditions y(0) = y(1) = 0 we come to y(x) = 0.
Next let us compute some energies:
$$E(0) = \int_0^1 (1 - 0)^2\,dx = 1,$$
and E(y) for a special function y in the next exercise.

Exercise 36 Let E be as in the previous example and compute E(y) for $y(x) = \frac{\sin \pi x}{\pi}$. If you did get that $E(y) = \frac{3}{8} < 1 = E(0)$, then 0 is not the minimizer. What is wrong here?
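A quadrature check of the value appearing in Exercise 36 (a numerical aside, not a substitute for the exercise): with $y(x) = \frac{\sin \pi x}{\pi}$ one has $y'(x) = \cos \pi x$, so the integrand is $(1 - \cos^2 \pi x)^2 = \sin^4 \pi x$.

```python
import numpy as np

# Quadrature for Exercise 36: y(x) = sin(pi x)/pi gives y'(x) = cos(pi x),
# so E(y) = int_0^1 (1 - cos^2(pi x))^2 dx = int_0^1 sin^4(pi x) dx = 3/8.
x = np.linspace(0.0, 1.0, 20001)
g = (1.0 - np.cos(np.pi * x) ** 2) ** 2
h = x[1] - x[0]
E = np.sum((g[:-1] + g[1:]) / 2) * h   # trapezoidal rule
print(E)                               # ~0.375
```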
Notice that $E(y) \ge 0$ for any $y \in W_0^{1,4}(0,1)$, so if we would find a function y such that E(y) = 0 we would have a minimizer. Here are the graphs of a few candidates: piecewise linear zigzag functions on [0,1] with slopes ±1 [graphs omitted].
Exercise 37 Check that the function y defined by $y(x) = \frac{1}{2} - \left| x - \frac{1}{2} \right|$ is indeed in $W_0^{1,4}(0,1)$. And why is y not in $W^{2,4}(0,1)$? For those unfamiliar with derivatives in a weaker sense see the following remark.
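Numerically one can see why the zigzag of Exercise 37 is a candidate minimizer while staying in $W_0^{1,4}$: its a.e. derivative is ±1, so the integrand of E vanishes and $\int_0^1 (y')^4\,dx = 1$ is finite. A quadrature sketch:

```python
import numpy as np

# The zigzag y(x) = 1/2 - |x - 1/2| has y' = +1 on (0,1/2), -1 on (1/2,1).
# Hence (1 - (y')^2)^2 = 0 a.e., so E(y) = 0, while int_0^1 (y')^4 dx = 1
# stays finite -- consistent with y lying in W^{1,4}_0(0,1).
x = np.linspace(0.0, 1.0, 20001)
h = x[1] - x[0]
yp = np.where(x < 0.5, 1.0, -1.0)   # the a.e. derivative
E = np.sum((1.0 - yp**2) ** 2) * h  # exactly 0
norm4 = np.sum(yp**4) * h           # ~1
print(E, norm4)
```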
Remark 3.2.5 If a function y is in $C^2(0,1)$ then we know y' and y'' pointwise in (0,1), and we compute the integrals $\left\| y^{(i)} \right\|_{L^4(0,1)}$, $i \in \{0, 1, 2\}$, in order to find whether or not $y \in W^{2,4}(0,1)$. But if y is not in $C^2(0,1)$ we cannot directly compute $\int_0^1 (y'')^4\,dx$. So how is y'' defined?
Remember that for differentiable functions y and φ, with φ vanishing at the endpoints, an integration by parts shows that
$$\int_0^1 y'\,\varphi\,dx = -\int_0^1 y\,\varphi'\,dx.$$
One uses this result to define derivatives in the sense of distributions. The distributions that we consider are bounded linear operators from $C_0^{\infty}(\mathbb{R}^n)$ to $\mathbb{R}$. Remember that we wrote $C_0^{\infty}(\mathbb{R}^n)$ for the infinitely differentiable functions with compact support in $\mathbb{R}^n$.
Definition 3.2.6 If y is some function defined on $\mathbb{R}^n$, then $\left( \frac{\partial}{\partial x} \right)^{\alpha} y$ in $C_0^{\infty}(\mathbb{R}^n)'$ is the distribution V such that
$$V(\varphi) = (-1)^{|\alpha|} \int_{\mathbb{R}^n} y(x) \left( \frac{\partial}{\partial x} \right)^{\alpha} \varphi(x)\,dx.$$
Example 3.2.7 The derivative of the signum function is twice the Dirac delta function. This Dirac delta function is not a true function but a distribution: $\delta(\varphi) = \varphi(0)$ for all $\varphi \in C_0(\mathbb{R})$. The signum function is defined by
$$\operatorname{sign}(x) = \begin{cases} 1 & \text{if } x > 0, \\ 0 & \text{if } x = 0, \\ -1 & \text{if } x < 0. \end{cases}$$
We find, calling $\operatorname{sign}' = V$, that for φ with compact support:
$$V(\varphi) = -\int_{-\infty}^{\infty} \operatorname{sign}(x)\,\varphi'(x)\,dx = \int_{-\infty}^{0} \varphi'(x)\,dx - \int_{0}^{\infty} \varphi'(x)\,dx = 2\varphi(0) = 2\delta(\varphi).$$
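The computation above can be illustrated numerically: for a rapidly decaying test function (a Gaussian below, standing in for a compactly supported one, which is an assumption) the pairing $-\int \operatorname{sign}(x)\,\varphi'(x)\,dx$ should return $2\varphi(0)$.

```python
import numpy as np

# Numerical illustration of sign' = 2*delta: for a test function phi
# (a Gaussian here, rapidly decaying instead of compactly supported --
# an assumption that keeps the truncation to [-8,8] harmless),
#   V(phi) = -int sign(x) phi'(x) dx   should equal   2*phi(0) = 2.
x = np.linspace(-8.0, 8.0, 400001)
h = x[1] - x[0]
phi0 = 1.0                        # phi(0) for phi(x) = exp(-x^2)
dphi = -2.0 * x * np.exp(-x**2)   # phi' computed analytically
V = -np.sum(np.sign(x) * dphi) * h
print(V, 2 * phi0)                # both ~2
```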
Exercise 38 A functional for the energy that belongs to a rectangular plate lying on three supporting walls, clamped on one side and being pushed down by some weight is
$$E(u) = \int_R \left( \tfrac{1}{2} (\Delta u)^2 - fu \right) dxdy.$$
Here $R = (0, \ell) \times (0, b)$; supported means the deviation u satisfies
$$u(x, 0) = u(x, b) = u(\ell, y) = 0,$$
and clamped means
$$u(0, y) = \frac{\partial}{\partial x} u(0, y) = 0.$$
Compute the boundary value problem that comes out.
Exercise 39 Let us consider the following functional for the energy of a rectangular plate lying on two supporting walls and being pushed down by some weight:
$$E(u) = \int_R \left( \tfrac{1}{2} (\Delta u)^2 - fu \right) dxdy.$$
Here $R = (0, \ell) \times (0, b)$ and supported means the deviation u satisfies $u(0, y) = u(\ell, y) = 0$. Compute the boundary value problem that comes out.
Exercise 40 Consider both for Exercise 38 and for Exercise 39 the variational problem
$$E : W_{bc}(R) := \overline{ \left\{ u \in C^{\infty}\!\left( \bar{R} \right);\ u \text{ satisfies the boundary conditions} \right\} }^{\;\|\cdot\|_{W(R)}} \to \mathbb{R}$$
with $\|u\|_{W(R)} = \|u\|_{L^2(R)} + \|\Delta u\|_{L^2(R)}$. For the source term we assume $f \in L^2(R)$.

1. Is the functional E coercive?

2. Is E sequentially weakly lower semicontinuous?
Exercise 41 Show that the equation for Exercise 39 can be written as a system of two second order problems, one for u and one for v = -Δu. For those who have seen Hopf's Boundary Point Lemma: can this system be solved for positive f?
Exercise 42 Suppose that we replace both in Exercise 38 and 39 the energy by
$$E(u) = \int_R \left( \tfrac{1}{2} (\Delta u)^2 + \delta \tfrac{1}{2} |\nabla u|^2 - fu \right) dxdy.$$
Compute the boundary conditions for both problems. Can you comment on the changes that appear?
Exercise 43 What about uniqueness for the problem in Exercises 38 and 39?
Exercise 44 Is it true that $W_{bc}(R) \subset W^{2,2}(R)$ both for Exercise 38 and for Exercise 39?
Exercise 45 Let $u \in W^{4,2}(R) \cap W_{bc}(R)$ be a solution for Exercise 39. Show that for all nonzero $\eta \in W_{bc}(R)$ the second variation $\left( \frac{\partial}{\partial \tau} \right)^2 E(u + \tau\eta)\big|_{\tau=0}$ is positive but not necessarily strictly positive.
In the last exercises we came up with a weak solution, that is, $u \in W_{bc}^{2,2}(R)$ satisfying
$$\int_R (\Delta u\,\Delta\eta - f\eta)\,dxdy = 0 \quad \text{for all } \eta \in W_{bc}^{2,2}(R).$$
Starting from such a weak solution one may try to use 'regularity results' in order to find that the u satisfying the weak formulation is in fact a strong solution: for $f \in L^2(R)$ it holds that $u \in W^{4,2}(R) \cap W_{bc}^{2,2}(R)$.

In the case that f has a stronger regularity, say $f \in C^{\alpha}(R)$, it may even be possible that the solution satisfies $u \in C^{4,\alpha}(R)$. Since the problems above are linear, one might expect such regularity results to be standard and available in the literature. On the contrary, these are by no means easy to find. Especially higher order problems and corners form a research area by themselves. A good starting point would be the publications of P. Grisvard.
3.3 Preliminaries for Cauchy-Kowalevski

3.3.1 Ordinary differential equations

One equation

One of the classical ways of finding solutions for an o.d.e. is by trying to find solutions in the form of a power series. The advantage of such an approach is that the computation of the Taylor series is almost automatic. For the $n^{th}$ order o.d.e.
$$y^{(n)}(x) = f(x, y(x), \dots, y^{(n-1)}(x)) \quad (3.4)$$
with initial values $y(x_0) = y_0$, $y'(x_0) = y_1$, ..., $y^{(n-1)}(x_0) = y_{n-1}$, the higher order coefficients follow by differentiation and filling in the values already known:
$$y^{(n)}(x_0) = y_n := f(x_0, y_0, \dots, y_{n-1}),$$
$$y^{(n+1)}(x_0) = y_{n+1} := \frac{\partial f}{\partial x}(x_0, y_0, \dots, y_{n-1}) + \sum_{k=0}^{n-1} y_{k+1} \frac{\partial f}{\partial y_k}(x_0, y_0, \dots, y_{n-1}),$$
etc.
It is well known that such an approach has severe shortcomings. First of all the problem itself needs to fit the form of a power series: f needs to be a real analytic function. Secondly, a power series usually has a rather restricted area of convergence. But if f is appropriate, and if we can show the convergence, we find a solution by its Taylor series
$$y(x) = \sum_{k=0}^{\infty} \frac{(x - x_0)^k}{k!}\,y^{(k)}(x_0). \quad (3.5)$$
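The recursion behind (3.5) is easy to automate. A sketch for the hypothetical autonomous test case $y' = y^2$, $y(0) = 1$ (our choice, not an example from the text): each derivative $y^{(k)}$ is a polynomial in y, and one more differentiation just applies the chain rule and substitutes $y' = y^2$. The exact solution $1/(1-x)$ has $y^{(k)}(0) = k!$.

```python
from math import factorial

# Power-series solution of the hypothetical test case y' = y^2, y(0) = 1,
# following the recipe above.  Every derivative y^(k) is a polynomial in y,
# stored as {power: coefficient}; one differentiation applies the chain rule
#   d/dx y^m = m y^(m-1) y' = m y^(m+1).
# The exact solution is 1/(1-x), so y^(k)(0) = k!.
expr = {2: 1}                # y' = y^2
coeffs = [1]                 # y^(0)(0) = y(0) = 1
for k in range(1, 6):
    coeffs.append(sum(c for m, c in expr.items()))  # evaluate at y = 1
    expr = {m + 1: m * c for m, c in expr.items()}  # one more derivative
print(coeffs)                # [1, 1, 2, 6, 24, 120]

# Taylor partial sum (3.5) at x = 0.5, to compare with 1/(1 - 0.5) = 2:
approx = sum(c * 0.5**k / factorial(k) for k, c in enumerate(coeffs))
print(approx)                # 1.96875
```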
Remark 3.3.1 For those who are not too familiar with ordinary differential equations: a sufficient condition for existence and uniqueness of a solution for (3.4) with prescribed initial conditions is $(x, p) \mapsto f(x, p)$ being continuous and $p \mapsto f(x, p)$ Lipschitz-continuous.
Multiple equations

A similar result holds for systems of o.d.e.:
$$y^{(n)}(x) = f(x, y(x), \dots, y^{(n-1)}(x)) \quad (3.6)$$
where $y : \mathbb{R} \to \mathbb{R}^m$ and f is a vector function. The computation of the coefficients will be more involved, but one may convince oneself that given the (vector) values of $y(x_0)$, $y'(x_0)$ up to $y^{(n-1)}(x_0)$, the higher values will follow as before, and the Taylor series in (3.5) will be the same with the vector y instead of the scalar y. To have at most one solution by the Taylor series near $x = x_0$ one needs to fix $y(x_0), \dots, y^{(n-1)}(x_0)$.
From higher order to first order autonomous

The trick to go from higher order to first order is well known. Setting $u = (y, y', \dots, y^{(n-1)})$, the $n^{th}$ order equation in (3.4) or system in (3.6) changes into
$$u' = \begin{pmatrix} 0 & 1 & \cdots & 0 \\ 0 & 0 & \ddots & \vdots \\ \vdots & & \ddots & 1 \\ 0 & \cdots & 0 & 0 \end{pmatrix} u + \begin{pmatrix} 0 \\ 0 \\ \vdots \\ f(x, u) \end{pmatrix} \quad \text{with} \quad u(x_0) = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{n-1} \end{pmatrix}.$$
Finally a simple addition will make the system autonomous. Just add the equation $u'_{n+1}(x) = 1$ with $u_{n+1}(x_0) = x_0$. We find $u_{n+1}(x) = x$, and by replacing x by $u_{n+1}$ in the o.d.e. or in the system of o.d.e.'s the system becomes autonomous. So we have found an initial value problem for an autonomous first order system
$$\begin{cases} u'(x) = g(u(x)), \\ u(x_0) = u_0. \end{cases} \quad (3.7)$$
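A sketch of this reduction for the hypothetical test case $y'' = -y$, $y(0) = 1$, $y'(0) = 0$ (our choice): with $u = (y, y', x)$ the system (3.7) reads $u' = (u_2, -u_1, 1)$, which we integrate with a classical Runge-Kutta step (the step size is an ad-hoc choice); the exact solution is $\cos x$.

```python
import math

# Reduction of y'' = -y, y(0) = 1, y'(0) = 0 to the autonomous first
# order system (3.7): u = (y, y', x), u' = g(u) = (u2, -u1, 1).
# Classical 4th-order Runge-Kutta integration; exact solution: cos(x).

def g(u):
    return [u[1], -u[0], 1.0]

def rk4_step(u, h):
    k1 = g(u)
    k2 = g([u[i] + h / 2 * k1[i] for i in range(3)])
    k3 = g([u[i] + h / 2 * k2[i] for i in range(3)])
    k4 = g([u[i] + h * k3[i] for i in range(3)])
    return [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

u, h = [1.0, 0.0, 0.0], 0.01
for _ in range(100):            # integrate up to x = 1
    u = rk4_step(u, h)
print(u[0], math.cos(1.0))      # both ~0.540302
```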
Remark 3.3.2 If we include some parameter dependence in (3.7) we would find the problem
$$\begin{cases} u'(x) = g(u(x), p), \\ u(x_0) = u_0(p). \end{cases} \quad (3.8)$$
Such a problem would give a solution of the type
$$u(x, p) = \sum_{k=0}^{\infty} \frac{u^{(k)}(x_0, p)}{k!} (x - x_0)^k$$
and the $u^{(k)}(x_0, p)$ themselves could be power series in p if the dependence on p in (3.8) would allow it. One could think of p as the remaining coordinates in $\mathbb{R}^n$. Note that $u_0(p)$ would prescribe the values for u on a hypersurface of dimension n-1.
Exercise 46 Compute $u^{(k)}(x_0)$ from (3.7) for k = 0, 1, 2, 3 in terms of g and the initial conditions.
3.3.2 Partial differential equations

The idea of Cauchy-Kowalevski could be phrased as: let us compute the solution of the p.d.e. by a Taylor series. The complicating factor is that it is not so clear what initial values one should impose.

Taylor series with multiple variables

Before we consider partial differential equations let us recall the Taylor series in multiple dimensions. For an analytic function $v : \mathbb{R}^n \to \mathbb{R}$ it reads as
$$v(x) = \sum_{k=0}^{\infty} \sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| = k}} \frac{\left( \frac{\partial}{\partial x} \right)^{\alpha} v(x_0)}{\alpha!} (x - x_0)^{\alpha}$$
where
$$\alpha! = \alpha_1! \alpha_2! \cdots \alpha_n!, \qquad \left( \frac{\partial}{\partial x} \right)^{\alpha} = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}} \frac{\partial^{\alpha_2}}{\partial x_2^{\alpha_2}} \cdots \frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}},$$
$$(x - x_0)^{\alpha} = (x_1 - x_{1,0})^{\alpha_1} (x_2 - x_{2,0})^{\alpha_2} \cdots (x_n - x_{n,0})^{\alpha_n}.$$
The $\alpha \in \mathbb{N}^n$ that appears here is called a multi-index.
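The multi-index sum is easy to evaluate mechanically. A sketch for the test function $f(x, y) = e^{x+y}$ (our choice, picked because every partial derivative at the origin equals 1, so the degree-3 Taylor polynomial reduces to $\sum_{|\alpha| \le 3} x^{\alpha}/\alpha!$):

```python
from itertools import product
from math import factorial, exp

# Degree-3 Taylor polynomial around (0,0) via the multi-index formula
# above, for f(x,y) = e^(x+y): every partial derivative at the origin
# equals 1, so the coefficient of x^a1 y^a2 is simply 1/(a1! a2!).
def taylor3(x, y):
    s = 0.0
    for a1, a2 in product(range(4), repeat=2):
        if a1 + a2 <= 3:       # |alpha| <= 3
            s += x**a1 * y**a2 / (factorial(a1) * factorial(a2))
    return s

print(taylor3(0.1, 0.05), exp(0.15))   # agree to about 1e-5
```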
Exercise 47 Take your favourite function f of two variables and compute the Taylor polynomial of degree 3 in (0,0):
$$\sum_{k=0}^{3} \sum_{\substack{\alpha \in \mathbb{N}^2 \\ |\alpha| = k}} \frac{\left( \left( \frac{\partial}{\partial x} \right)^{\alpha} f \right)(0,0)}{\alpha!}\,x^{\alpha}.$$
Several flavours of nonlinearities

A linear partial differential equation of order m is of the form
$$\sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| \le m}} A_{\alpha}(x) \left( \frac{\partial}{\partial x} \right)^{\alpha} u(x) = f(x), \quad (3.9)$$
where the $A_{\alpha}(x)$ with $|\alpha| = m$ should not all vanish. Every p.d.e. that cannot be written as in (3.9) is nonlinear. Nevertheless one sometimes makes a distinction.
Definition 3.3.3
1. An $m^{th}$ order semilinear p.d.e. is as follows:
$$\sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| = m}} A_{\alpha}(x) \left( \frac{\partial}{\partial x} \right)^{\alpha} u(x) = f\!\left( x, \frac{\partial u}{\partial x_1}, \dots, \frac{\partial^{m-1} u}{\partial x_n^{m-1}} \right).$$
2. An $m^{th}$ order quasilinear p.d.e. is as follows:
$$\sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| = m}} A_{\alpha}\!\left( x, \frac{\partial u}{\partial x_1}, \dots, \frac{\partial^{m-1} u}{\partial x_n^{m-1}} \right) \left( \frac{\partial}{\partial x} \right)^{\alpha} u(x) = f\!\left( x, \frac{\partial u}{\partial x_1}, \dots, \frac{\partial^{m-1} u}{\partial x_n^{m-1}} \right).$$
3. The remaining nonlinear equations are the genuinely nonlinear ones.
Example 3.3.4 Here are some nonlinear equations:

• The Korteweg-de Vries equation: $u_t + u u_x + u_{xxx} = 0$.

• The minimal surface equation: $\nabla \cdot \dfrac{\nabla u}{\sqrt{1 + |\nabla u|^2}} = 0$.

• The Monge-Ampère equation: $u_{xx} u_{yy} - u_{xy}^2 = f$.

One could call them respectively semilinear, quasilinear and 'strictly' nonlinear.
3.3.3 The Cauchy problem

As we have seen for o.d.e.'s, an $m^{th}$ order ordinary differential equation needs m initial conditions in order to have a unique solution. A standard set of such initial conditions consists of giving $y, y', \dots, y^{(m-1)}$ a fixed value. The analogue of such an 'initial value' problem for an $m^{th}$ order partial differential equation in $\mathbb{R}^n$ would be to prescribe all derivatives up to order m-1 on some (n-1)-dimensional manifold. This is a bit too much. Indeed, if one prescribes y on a smooth manifold M, then also all tangential derivatives are determined. Similarly, if $\frac{\partial}{\partial \nu} y$, the normal derivative on M, is given, then also all tangential derivatives of $\frac{\partial}{\partial \nu} y$ are determined. Etcetera. So it will be sufficient to fix all normal derivatives from order 0 up to order m-1.
Definition 3.3.5 The Cauchy problem for the $m^{th}$ order partial differential equation $F\left( x, y, \frac{\partial}{\partial x_1} y, \dots, \frac{\partial^m}{\partial x_n^m} y \right) = 0$ on $\mathbb{R}^n$ with respect to the smooth (n-1)-dimensional manifold M is the following:
$$\begin{cases} F\left( x, y, \frac{\partial}{\partial x_1} y, \dots, \frac{\partial^m}{\partial x_n^m} y \right) = 0 & \text{in } \mathbb{R}^n, \\[4pt] y = \phi_0 & \text{on } M, \\ \frac{\partial}{\partial \nu} y = \phi_1 & \text{on } M, \\ \quad \vdots \\ \frac{\partial^{m-1}}{\partial \nu^{m-1}} y = \phi_{m-1} & \text{on } M, \end{cases} \quad (3.10)$$
where ν is the normal on M and where the $\phi_i$ are given.
Exercise 48 Consider $M = \{(x_1, x_2);\ x_1^2 + x_2^2 = 1\}$ with $\phi_0(x_1, x_2) = x_1$ and $\phi_1(x_1, x_2) = x_2$. Let ν be the outside normal direction on M. Suppose u is a solution of $\frac{\partial^2}{\partial x_1^2} u + \frac{\partial^2}{\partial x_2^2} u = 1$ in a neighborhood of M, say for $1 - \varepsilon < x_1^2 + x_2^2 < 1 + \varepsilon$, that satisfies
$$\begin{cases} u = \phi_0 & \text{for } x \in M, \\ \frac{\partial}{\partial \nu} u = \phi_1 & \text{for } x \in M. \end{cases}$$
Compute $\frac{\partial^2}{\partial \nu^2} u$ on M for such a solution.
Exercise 49 The same question but with the differential equation replaced by $\frac{\partial^2}{\partial x_1^2} u - \frac{\partial^2}{\partial x_2^2} u = 1$.
3.4 Characteristics I

So the Cauchy problem consists of an $m^{th}$ order partial differential equation for, say, u in (a subset of) $\mathbb{R}^n$, an (n-1)-dimensional manifold M, the normal direction ν on M, and with $u, \frac{\partial}{\partial \nu} u, \dots, \frac{\partial^{m-1}}{\partial \nu^{m-1}} u$ given on M. One may guess that the differential equation should be of order m in the ν-direction.
Example 3.4.1 Consider $\frac{\partial}{\partial x_1} u + \frac{\partial}{\partial x_2} u = g(x_1, x_2)$ for a given function g. This is a first order p.d.e., but if we take new coordinates $y_1 = x_1 + x_2$ and $y_2 = x_1 - x_2$, and set $\tilde{g}(x_1 + x_2, x_1 - x_2) = g(x_1, x_2)$ and $\tilde{u}(x_1 + x_2, x_1 - x_2) = u(x_1, x_2)$, we find
$$\frac{\partial}{\partial x_1} u + \frac{\partial}{\partial x_2} u = 2 \frac{\partial}{\partial y_1} \tilde{u}.$$
The $y_2$-derivative disappeared. So we have to solve the o.d.e.
$$2 \frac{\partial}{\partial y_1} \tilde{u} = \tilde{g}. \quad (3.11)$$
If for example $u(x_1, 0) = \varphi(x_1)$ is given for $x_1 \in \mathbb{R}$, then
$$\tilde{u}(y_1, y_2) = \tilde{u}(x_1, x_1) + \frac{1}{2} \int_{x_1}^{y_1} \tilde{g}(s, y_2)\,ds,$$
which can be rewritten to a formula for u.
But if $u(x_1, x_1) = \varphi(x_1)$ is given for $x_1 \in \mathbb{R}$, then $\tilde{u}(y_1, 0) = \varphi(\frac{1}{2} y_1)$ and this is in general not compatible with (3.11). One finds that the derivative of the solution along each line $y_2 = c$ is already determined by (3.11). That means we are not allowed to prescribe the function u on such a line, and if we want the Cauchy problem to be well-posed for a 1-d manifold (curve), we had better take a curve with no tangent in that direction.

The line (or curve) along which the p.d.e. takes the form of an o.d.e. is called a characteristic curve.
In higher dimensions and for higher order equations the notion of characteristic manifold becomes somewhat involved. For quasilinear differential equations a characteristic curve or manifold in general will even depend on the solution itself. So we will use a more useful concept that is found in the book of Evans. Instead of giving a definition of characteristic curve or characteristic manifold we will give a definition of noncharacteristic manifolds.
Definition 3.4.2 A manifold M is called noncharacteristic for the differential equation
$$\sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| = m}} A_{\alpha}(x) \left( \frac{\partial}{\partial x} \right)^{\alpha} u(x) = B\!\left( x, u, \frac{\partial}{\partial x_1} u, \dots, \frac{\partial^m}{\partial x_n^m} u \right) \quad (3.12)$$
if for all $x \in M$ and every normal direction ν to M in x:
$$\sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| = m}} A_{\alpha}(x)\,\nu^{\alpha} \ne 0. \quad (3.13)$$
We say ν is a singular direction for (3.12) if $\sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| = m}} A_{\alpha}(x)\,\nu^{\alpha} = 0$.
Remark 3.4.3 The condition in (3.13) means that the coefficient of the $m^{th}$ order derivative in the ν-direction of the differential equation does not vanish. If it did, we would 'lose' one order of the differential equation in that direction.
Remark 3.4.4 Sobolev defines a surface given by $F(x_1, \dots, x_n) = 0$ to be characteristic if the ($2^{nd}$ order) differential equation, on changing from the variables $x_1, \dots, x_n$ to new variables $y_1 = F(x_1, \dots, x_n), y_2, \dots, y_n$ — with $y_2$ to $y_n$ arbitrary functions of $x_1$ to $x_n$, such that all the $y_i$ are continuous and have first order derivatives and a nonzero Jacobian in a neighbourhood of the surface under consideration — is such that the coefficient $\bar{A}_{11}$ of $\frac{\partial^2}{\partial y_1^2}$ vanishes on this surface.

The definition of noncharacteristic gives a condition for each point on the manifold. Sobolev's definition of characteristic does so as well. So 'not characteristic' and 'noncharacteristic' are not identical.
3.4.1 First order p.d.e.

Consider the first order semilinear partial differential equation
$$\sum_{i=1}^{n} A_i(x) \frac{\partial}{\partial x_i} y(x) = B(x, y). \quad (3.14)$$
A solution is a function $y : \mathbb{R}^n \to \mathbb{R}$. The Cauchy problem for such a first order p.d.e. consists of prescribing initial data on a (smooth) (n-1)-dimensional surface. We will see that we cannot choose just any surface. If we consider a curve $t \mapsto x(t)$ that satisfies $x'(t) = A(x(t))$, then one finds for a solution y on this curve that
$$\frac{\partial}{\partial t} y(x(t)) = \sum_{i=1}^{n} x_i'(t) \frac{\partial}{\partial x_i} y(x) = \sum_{i=1}^{n} A_i(x) \frac{\partial}{\partial x_i} y(x) = B(x(t), y(x(t)))$$
and hence that we may solve y on this curve when one value $y(x_0)$ is given. Indeed
$$\begin{cases} x'(t) = A(x(t)), \\ x(0) = x_0, \end{cases} \quad (3.15)$$
is an o.d.e. system with a unique solution for analytic A (Lipschitz is sufficient).

Definition 3.4.5 The solution $t \mapsto x(t)$ is called a characteristic curve for (3.14).

With this curve $t \mapsto x(t)$ known, one obtains a second o.d.e. for $t \mapsto y(x(t))$. Set $Y(t) = y(x(t))$; then Y solves
$$\begin{cases} Y'(t) = B(x(t), Y(t)), \\ Y(0) = y(x_0). \end{cases} \quad (3.16)$$
So if we prescribe the initial values for the solution on an (n-1)-dimensional surface, we should take care that this surface does not intersect any of those characteristic curves more than once. This is best guaranteed by assuming that the characteristic directions are not tangential to the surface. For a first order equation this means that x'(t) should not be tangential to this surface; in other words $A(x(t)) \cdot \nu = x'(t) \cdot \nu \ne 0$, where ν is the normal direction of the surface.

[Figure: characteristic lines and an appropriate surface with prescribed values.]
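The characteristic system (3.15)-(3.16) can be integrated numerically. A sketch for the hypothetical example $y_{x_1} + x_1 y_{x_2} = 0$ with data $y(0, x_2) = \sin x_2$ (our choice: here $A(x) = (1, x_1)$ and B = 0): the exact solution is $y(x_1, x_2) = \sin(x_2 - x_1^2/2)$, constant along each characteristic, which explicit Euler reproduces up to discretization error.

```python
import math

# Characteristic system (3.15)-(3.16) for the hypothetical example
#   y_{x1} + x1 * y_{x2} = 0,   y(0, x2) = sin(x2),
# so A(x) = (1, x1) and B = 0.  Along a characteristic x1' = 1, x2' = x1
# the solution is constant: y(x1, x2) = sin(x2 - x1^2/2).
h, n = 1e-4, 10000                   # integrate up to t = 1
x1, x2 = 0.0, 0.7                    # start on the surface {x1 = 0}
Y = math.sin(x2)                     # Y' = B = 0: Y never changes
for _ in range(n):
    x1, x2 = x1 + h, x2 + h * x1     # Euler step for x'(t) = A(x(t))
print(Y, math.sin(x2 - x1**2 / 2))   # agree up to the Euler error
```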
Exercise 50 Show that also for the first order quasilinear equations
$$\sum_{i=1}^{n} A_i(x, y) \frac{\partial}{\partial x_i} y(x) = B(x, y)$$
one obtains a system of o.d.e.'s for the curve $t \mapsto x(t)$ and the solution $t \mapsto y(x(t))$ along this curve.
3.4.2 Classification of second order p.d.e. in two dimensions

The standard form of a second order semilinear partial differential equation in two dimensions is
$$a\,u_{x_1 x_1} + 2b\,u_{x_1 x_2} + c\,u_{x_2 x_2} = f(x, u, \nabla u). \quad (3.17)$$
Here f is a given function and one is searching for a function u. We may write this second order differential operator as follows:
$$a \frac{\partial^2}{\partial x_1^2} + 2b \frac{\partial^2}{\partial x_1 \partial x_2} + c \frac{\partial^2}{\partial x_2^2} = \nabla \cdot \begin{pmatrix} a & b \\ b & c \end{pmatrix} \nabla$$
and since the matrix is symmetric we may diagonalize it by an orthogonal matrix T:
$$\begin{pmatrix} a & b \\ b & c \end{pmatrix} = T^T \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} T.$$
Definition 3.4.6 The equation in (3.17) is called

• elliptic if $\lambda_1$ and $\lambda_2$ have the same (nonzero) sign;

• hyperbolic if $\lambda_1$ and $\lambda_2$ have opposite (nonzero) signs;

• parabolic if $\lambda_1 = 0$ or $\lambda_2 = 0$.
Remark 3.4.7 In the case that a, b and c depend on x, one says that the equation is elliptic, hyperbolic respectively parabolic in x if the eigenvalues $\lambda_1(x)$ and $\lambda_2(x)$ are as above (freeze the coefficients in x). Note that
$$\nabla \cdot \begin{pmatrix} a(x) & b(x) \\ b(x) & c(x) \end{pmatrix} \nabla u = a(x)\,u_{x_1 x_1} + 2b(x)\,u_{x_1 x_2} + c(x)\,u_{x_2 x_2} + \gamma(x)\,u_{x_1} + \delta(x)\,u_{x_2}$$
where
$$\gamma = \frac{\partial a}{\partial x_1} + \frac{\partial b}{\partial x_2} \quad \text{and} \quad \delta = \frac{\partial b}{\partial x_1} + \frac{\partial c}{\partial x_2}.$$
So the highest order part doesn't change.
For the constant coefficient case one reduces the highest order coefficients to standard form by taking new coordinates Tx = y. One finds that (3.17) changes to
$$\lambda_1 u_{y_1 y_1} + \lambda_2 u_{y_2 y_2} = \tilde{f}(y, u, \nabla u).$$
Depending on the signs of $\lambda_1$ and $\lambda_2$ we may introduce a scaling $y_i = \sqrt{|\lambda_i|}\,z_i$ and come up with one of the following three possibilities:

• a semilinear Laplace equation: $u_{z_1 z_1} + u_{z_2 z_2} = \hat{f}(z, u, \nabla u)$;

• a semilinear wave equation: $u_{z_1 z_1} - u_{z_2 z_2} = \hat{f}(z, u, \nabla u)$;

• $u_{z_1 z_1} = \hat{f}(z, u, \nabla u)$, which can be turned into a heat equation when $\hat{f}(z, u, \nabla u) = c_1 u_{z_1} + c_2 u_{z_2}$.
Exercise 51 Show that (3.17) is

1. elliptic if and only if $\det \begin{pmatrix} a & b \\ b & c \end{pmatrix} > 0$;

2. parabolic if and only if $\det \begin{pmatrix} a & b \\ b & c \end{pmatrix} = 0$;

3. hyperbolic if and only if $\det \begin{pmatrix} a & b \\ b & c \end{pmatrix} < 0$.
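The determinant criterion stated in Exercise 51 can be packaged as a small helper (the three model operators serve as test cases):

```python
# Classification via the determinant criterion of Exercise 51:
# det((a,b),(b,c)) = a*c - b^2 equals lambda_1 * lambda_2, so its sign
# encodes the sign pattern of the eigenvalues in Definition 3.4.6.
def classify(a, b, c):
    det = a * c - b * b
    if det > 0:
        return "elliptic"
    if det < 0:
        return "hyperbolic"
    return "parabolic"

print(classify(1, 0, 1))    # Laplace operator  -> elliptic
print(classify(1, 0, -1))   # wave operator     -> hyperbolic
print(classify(1, 0, 0))    # u_{x1x1} alone    -> parabolic
```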
Exercise 52 Show that under any regular change of coordinates y = Bx, with B a nonsingular 2 × 2 matrix, the classification of the differential equation in (3.17) does not change.
Definition 3.4.8 For a linear p.d.e. operator $Lu := \sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| \le m}} A_{\alpha}(x) \left( \frac{\partial}{\partial x} \right)^{\alpha} u$ one introduces the symbol by replacing the derivative $\frac{\partial}{\partial x_i}$ by $\xi_i$:
$$\text{symbol}_L(\xi) = \sum_{\substack{\alpha \in \mathbb{N}^n \\ |\alpha| \le m}} A_{\alpha}(x)\,\xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}.$$
Exercise 53 Show that for second order operators with constant coefficients in two dimensions:

1. The solution set $\{\xi \in \mathbb{R}^2;\ \text{symbol}_L(\xi) = k\}$ is an ellipse for some $k \in \mathbb{R}$, if and only if L is elliptic.

2. The solution set $\{\xi \in \mathbb{R}^2;\ \text{symbol}_L(\xi) = k\}$ is a hyperbola for some $k \in \mathbb{R}$, if and only if L is hyperbolic.

3. If the solution set $\{\xi \in \mathbb{R}^2;\ \text{symbol}_L(\xi) = k\}$ is a parabola for some $k \in \mathbb{R}$, then L is parabolic.

4. If the solution sets $\{\xi \in \mathbb{R}^2;\ \text{symbol}_L(\xi) = k\}$ do not fit the descriptions above for any $k \in \mathbb{R}$, then L is not a real p.d.e.: it can be transformed into an o.d.e.
3.5 Characteristics II

3.5.1 Second order p.d.e. revisited
Let us consider the three standard equations that we obtained separately.

1. $u_{x_1 x_1} + u_{x_2 x_2} = f(x, u, \nabla u)$. We may take any line through a point $\bar{x}$ and an orthonormal coordinate system with ν perpendicular and τ tangential to find in the new coordinates $u_{\nu\nu} + u_{\tau\tau} = \tilde{f}(y, u, u_{\tau}, u_{\nu})$. So with $x = y_1 \nu + y_2 \tau$:
$$\begin{cases} u_{y_2 y_2} = -u_{y_1 y_1} + \tilde{f}(y, u, u_{y_1}, u_{y_2}) & \text{for } y \in \mathbb{R}^2, \\ u(y_1, 0) = \phi(y_1) & \text{for } y_1 \in \mathbb{R}, \\ u_{y_2}(y_1, 0) = \psi(y_1) & \text{for } y_1 \in \mathbb{R}, \end{cases} \quad (3.18)$$
and find that the coefficients in the power series are defined. Indeed, if we prescribe $u(y_1, 0)$ and $u_{y_2}(y_1, 0)$ for $y_1$, we also know
$$\left( \frac{\partial}{\partial y_1} \right)^k u(y_1, 0) \quad \text{and} \quad \left( \frac{\partial}{\partial y_1} \right)^k \frac{\partial}{\partial y_2} u(y_1, 0) \quad \text{for all } k \in \mathbb{N}^+. \quad (3.19)$$
Using the differential equation and (3.19) we find
$$\left( \frac{\partial}{\partial y_2} \right)^2 u(y_1, 0) \quad \text{and hence} \quad \left( \frac{\partial}{\partial y_1} \right)^k \left( \frac{\partial}{\partial y_2} \right)^2 u(y_1, 0) \quad \text{for all } k \in \mathbb{N}^+. \quad (3.20)$$
Differentiating the differential equation with respect to $y_2$ and using the results from (3.19-3.20) gives
$$\left( \frac{\partial}{\partial y_2} \right)^3 u(y_1, 0) \quad \text{and hence} \quad \left( \frac{\partial}{\partial y_1} \right)^k \left( \frac{\partial}{\partial y_2} \right)^3 u(y_1, 0) \quad \text{for all } k \in \mathbb{N}^+. \quad (3.21)$$
By repeating these steps we will find all Taylor coefficients and, hoping the series converges, the solution.

Going back through the transformation one finds that for any p.d.e. as in (3.17) that is elliptic, the Cauchy problem looks well-posed. (We would like to say 'is well-posed', but since we didn't state the corresponding theorem ...)

For elliptic p.d.e. any manifold is noncharacteristic.
2. $u_{x_1 x_1} - u_{x_2 x_2} = f(x, u, \nabla u)$. Taking new orthogonal coordinates by
$$y = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix} x \quad (3.22)$$
the differential operator turns into
$$\left( \cos^2\alpha - \sin^2\alpha \right) \frac{\partial^2}{\partial y_1^2} - 4\cos\alpha\,\sin\alpha\,\frac{\partial^2}{\partial y_1 \partial y_2} + \left( \sin^2\alpha - \cos^2\alpha \right) \frac{\partial^2}{\partial y_2^2}$$
and the equation we may write as
$$u_{y_2 y_2} = u_{y_1 y_1} + \frac{4\cos\alpha\,\sin\alpha}{\sin^2\alpha - \cos^2\alpha}\,u_{y_1 y_2} + \hat{f}(y, u, \nabla u)$$
if and only if $\sin^2\alpha \ne \cos^2\alpha$. If we suppose that $\sin^2\alpha \ne \cos^2\alpha$ and prescribe $u(y_1, 0)$ and $u_{y_2}(y_1, 0)$, then we may proceed as for case 1 and find the coefficients of the Taylor series. On the other hand, if $\sin^2\alpha = \cos^2\alpha$, then we are left with
$$2u_{y_1 y_2} = \pm\tilde{f}(y, u, u_{y_1}, u_{y_2}).$$
Prescribing just $u(y_1,0)$ is not sufficient, since then $u_{y_2}(y_1,0)$ is still unknown; but prescribing both $u(y_1,0)$ and $u_{y_2}(y_1,0)$ is too much, since now $u_{y_1y_2}(y_1,0)$ follows from $u_{y_2}(y_1,0)$ directly and from the differential equation indirectly, and unless some special relation between $f$ and $u_{y_2}$ holds, two different values will come out. So the Cauchy problem
$$\begin{cases} u_{x_2x_2} = u_{x_1x_1} + f(x, u, u_{x_1}, u_{x_2}) & \text{for } x \in \mathbb{R}^2, \\ u(x) = \varphi(x) & \text{for } x \in \ell, \\ u_\nu(x) = \psi(x) & \text{for } x \in \ell, \end{cases}$$
is (probably) only well-posed for lines $\ell$ not parallel to $\binom{1}{1}$ or $\binom{1}{-1}$. Going back through the transformation one finds that any p.d.e. as in (3.17) that is hyperbolic has at each point two directions for which the Cauchy problem has a singularity if one of these two directions coincides with the manifold $M$. For any other manifold (a curve in 2 dimensions) the Cauchy problem is (could be) well-posed.
2nd order hyperbolic in 2d: in each point there are two singular directions.
3. $u_{x_1x_1} = f(x, u, \nabla u)$. Doing the transformation as in (3.22) one finds
$$\sin^2\alpha\ u_{y_2y_2} = 2\sin\alpha\cos\alpha\ u_{y_1y_2} - \cos^2\alpha\ u_{y_1y_1} + \hat f(y, u, \nabla u),$$
which gives a (probably) well-posed Cauchy problem (3.18) except for $\alpha$ a multiple of $\pi$. The lines $x_2 = c$ give the singular directions for this equation. Going back through the transformation one finds that any p.d.e. as in (3.17) that is parabolic has one family of directions where the highest order derivatives disappear. For any 1-d manifold that crosses one of those lines exactly once, the Cauchy problem is (as we will see) well-posed.

2nd order parabolic in 2d: in each point there is one singular direction.
Exercise 54 Consider the one-dimensional wave equation (a p.d.e. in $\mathbb{R}^2$)
$$u_{tt} = c^2u_{xx} \quad \text{for some } c > 0.$$

1. Write the Cauchy problem for $u_{tt} = c^2u_{xx}$ on $M = \{(x,t);\ t = 0\}$.
2. Use $u_{tt} - c^2u_{xx} = \left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right)u$ to show that solutions of this one-dimensional wave equation can be written as
$$u(x,t) = f(x - ct) + g(x + ct).$$
3. Now compute the $f$ and $g$ such that $u$ is a solution of the Cauchy problem with general $\varphi_0$ and $\varphi_1$.
The explicit formula for this Cauchy problem that comes out is named after d'Alembert. If you promise to make the exercise before continuing reading, this is it:
$$u(x,t) = \tfrac12\varphi_0(x+ct) + \tfrac12\varphi_0(x-ct) + \frac{1}{2c}\int_{x-ct}^{x+ct}\varphi_1(s)\,ds.$$
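As a quick sanity check (my own addition, not part of the notes), the sketch below implements d'Alembert's formula with $c = 2$ and data $\varphi_0(x) = \sin x$, $\varphi_1(x) = \cos x$, and verifies by finite differences that $u_{tt} - c^2u_{xx} \approx 0$ and that the initial displacement is reproduced.

```python
import math

def dalembert(phi0, Phi1, c, x, t):
    # d'Alembert: u = (phi0(x+ct) + phi0(x-ct))/2 + (1/(2c)) * int_{x-ct}^{x+ct} phi1
    # Phi1 is an antiderivative of phi1, so the integral is Phi1(x+ct) - Phi1(x-ct).
    return 0.5 * (phi0(x + c*t) + phi0(x - c*t)) \
         + (Phi1(x + c*t) - Phi1(x - c*t)) / (2*c)

c = 2.0
phi0 = math.sin          # u(x,0) = sin(x)
Phi1 = math.sin          # antiderivative of phi1 = cos, i.e. u_t(x,0) = cos(x)
u = lambda x, t: dalembert(phi0, Phi1, c, x, t)

# finite-difference residual of the wave equation at an interior point
h, x0, t0 = 1e-4, 0.3, 0.7
u_tt = (u(x0, t0+h) - 2*u(x0, t0) + u(x0, t0-h)) / h**2
u_xx = (u(x0+h, t0) - 2*u(x0, t0) + u(x0-h, t0)) / h**2
residual = u_tt - c**2 * u_xx
init_err = abs(u(x0, 0.0) - math.sin(x0))
```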
Exercise 55 Try to find a solution for the Cauchy problem for $u_{tt} + u_t = c^2u_{xx}$ on $M = \{(x,t);\ t = 0\}$.
3.5.2 Semilinear second order in higher dimensions
That means we are considering differential equations of the form
$$\sum_{\alpha\in\mathbb{N}^n,\ |\alpha|=2} A_\alpha(x)\left(\frac{\partial}{\partial x}\right)^{\!\alpha} u = f(x, u, \nabla u) \qquad (3.23)$$
and we want to prescribe $u$ and the normal derivative $u_\nu$ on some analytic $(n-1)$-dimensional manifold. Let us suppose that this manifold is given by the implicit equation $\Phi(x) = c$. So the normal in $\bar x$ is given by
$$\nu = \frac{\nabla\Phi(\bar x)}{|\nabla\Phi(\bar x)|}.$$
In order to have a well-defined Cauchy problem we should have a nonvanishing $\frac{\partial^2}{\partial\nu^2}u$ component in (3.23).
Going to new coordinates We want to rewrite (3.23) such that we recognize the $\frac{\partial^2}{\partial\nu^2}u$ component. We may do so by filling up the coordinate system with tangential directions $\tau_1, \dots, \tau_{n-1}$ at $\bar x$. Denoting the old coordinates by $\{x_1, x_2, \dots, x_n\}$ and the new coordinates by $\{y_1, y_2, \dots, y_n\}$ we have
$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} \nu_1 & \tau_{1,1} & \cdots & \tau_{n-1,1} \\ \nu_2 & \tau_{1,2} & \cdots & \tau_{n-1,2} \\ \vdots & \vdots & & \vdots \\ \nu_n & \tau_{1,n} & \cdots & \tau_{n-1,n} \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$
and since this transformation matrix $T$ is orthonormal one finds
$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} \nu_1 & \nu_2 & \cdots & \nu_n \\ \tau_{1,1} & \tau_{1,2} & \cdots & \tau_{1,n} \\ \vdots & \vdots & & \vdots \\ \tau_{n-1,1} & \tau_{n-1,2} & \cdots & \tau_{n-1,n} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.$$
Hence
$$\frac{\partial}{\partial x_i} = \frac{\partial y_1}{\partial x_i}\frac{\partial}{\partial y_1} + \sum_{j=1}^{n-1}\frac{\partial y_j}{\partial x_i}\frac{\partial}{\partial y_j} = \nu_i\frac{\partial}{\partial y_1} + \sum_{j=1}^{n-1}\tau_{i,j}\frac{\partial}{\partial y_j}.$$
So (3.23) turns into
$$\sum_{\alpha\in\mathbb{N}^n,\ |\alpha|=2} A_\alpha(x)\prod_{i=1}^{n}\left[\nu_i\frac{\partial}{\partial y_1} + \sum_{j=1}^{n-1}\tau_{i,j}\frac{\partial}{\partial y_j}\right]^{\alpha_i} u = f(x, u, \nabla u) \qquad (3.24)$$
and the factor in front of the $\frac{\partial^2}{\partial y_1^2}u$ component, which is our notation for $\frac{\partial^2}{\partial\nu^2}u$ at $\bar x$ in (3.23), is
$$\sum_{\alpha\in\mathbb{N}^n,\ |\alpha|=2} A_\alpha(\bar x)\,\nu^\alpha = \nu\cdot\begin{pmatrix} A_{2,0,\dots,0}(\bar x) & \tfrac12 A_{1,1,\dots,0}(\bar x) & \cdots & \tfrac12 A_{1,0,\dots,1}(\bar x) \\ \tfrac12 A_{1,1,\dots,0}(\bar x) & A_{0,2,\dots,0}(\bar x) & \cdots & \tfrac12 A_{0,1,\dots,1}(\bar x) \\ \vdots & \vdots & & \vdots \\ \tfrac12 A_{1,0,\dots,1}(\bar x) & \tfrac12 A_{0,1,\dots,1}(\bar x) & \cdots & A_{0,0,\dots,2}(\bar x) \end{pmatrix}\nu. \qquad (3.25)$$
Classification by the number of positive, negative and zero eigenvalues
Let us call the matrix in (3.25) $M$. Since it is symmetric we may diagonalize it and find a matrix with eigenvalues $\lambda_1, \dots, \lambda_n$ on the diagonal. Notice that this matrix does not depend on $\nu$. Also remember that we have a well-posed Cauchy problem if $\nu\cdot M\nu \neq 0$.
Let $P$ be the number of positive eigenvalues and $N$ the number of negative ones. There are three possibilities.

1. $(P,N) = (n,0)$ or $(P,N) = (0,n)$. Then all eigenvalues of $M$ have the same sign and in that case $\nu\cdot M\nu \neq 0$ for any $\nu$ with $|\nu| = 1$. So any direction of the normal $\nu$ is fine and there is no direction that a manifold may not contain. Equation (3.23) is called elliptic.
2. $P + N = n$ but $1 \leq P \leq n-1$. There are negative and positive eigenvalues but no zero eigenvalues. Then there are directions $\nu$ such that $\nu\cdot M\nu = 0$. Equation (3.23) is called hyperbolic.
3. $P + N < n$. There are eigenvalues equal to $0$. Equation (3.23) is called parabolic.
There exists a more precise subclassiﬁcation according to the number of
positive, negative and zero eigenvalues.
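The classification is easy to compute. A minimal sketch (my own illustration, assuming the eigenvalues of $M$ are already known, which is immediate when $M$ is diagonal): the Laplacian comes out elliptic, the wave operator hyperbolic, and the heat operator parabolic.

```python
def classify(eigenvalues, tol=1e-12):
    # classification by the sign counts P, N of the eigenvalues of M in (3.25)
    P = sum(1 for lam in eigenvalues if lam > tol)
    N = sum(1 for lam in eigenvalues if lam < -tol)
    n = len(eigenvalues)
    if P == n or N == n:
        return "elliptic"
    if P + N == n:
        return "hyperbolic"
    return "parabolic"

# second order model operators in two variables (M is diagonal for these):
laplace = [1.0, 1.0]      # u_{x1 x1} + u_{x2 x2}
wave    = [1.0, -1.0]     # u_{x1 x1} - u_{x2 x2}
heat    = [1.0, 0.0]      # u_{x1 x1} alone
```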
In dimension 3 with $(N,P) = (1,2)$ or $(2,1)$ one finds a cone of singular directions near $\bar x$.
The singular directions at $\bar x$ form a cone in the hyperbolic case.
In dimension 3 with $(N,P) = (0,2)$ or $(2,0)$ one finds one singular direction near $\bar x$.
Singular directions in a parabolic case.
3.5.3 Higher order p.d.e.
The classification for higher order p.d.e., in so far as it concerns parabolic and hyperbolic, becomes a bit of a zoo. Elliptic, however, can be formulated by the absence of any singular directions. For an $m^{\text{th}}$-order semilinear equation as
$$\sum_{|\alpha|=m} A_\alpha(x)\left(\frac{\partial}{\partial x}\right)^{\!\alpha} u = F\!\left(x, u, \frac{\partial}{\partial x_1}u, \dots, \frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right) \qquad (3.26)$$
it can be rephrased as:
Definition 3.5.1 The differential equation in (3.26) is called elliptic in $\bar x$ if
$$\sum_{|\alpha|=m} A_\alpha(\bar x)\,\xi^\alpha \neq 0 \quad \text{for all } \xi \in \mathbb{R}^n\setminus\{0\}.$$
Exercise 56 Show that for an elliptic $m^{\text{th}}$-order semilinear equation (as in (3.26)) with (as always) real coefficients $A_\alpha(x)$ the order $m$ is even.
Week 4
Some old and new solution methods II
4.1 Cauchy-Kowalevski
4.1.1 Statement of the theorem
Let us try to formulate some of the ‘impressions’ we might have got from the
previous sections.
Heuristics 4.1.1
1. For a well-posed Cauchy problem of a general p.d.e. one should have a manifold $M$ to which the singular directions of the p.d.e. are strictly nontangential.
2. The condition for the singular directions of the p.d.e.
$$\sum_{|\alpha|=m} A_\alpha(x)\left(\frac{\partial}{\partial x}\right)^{\!\alpha} u + F\!\left(x, u, \frac{\partial}{\partial x_1}u, \dots, \frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right) = 0$$
is $\sum_{|\alpha|=m} A_\alpha(x)\,\nu^\alpha = 0$. A noncharacteristic manifold $M$ means that if $\nu$ is normal to $M$, then $\sum_{|\alpha|=m} A_\alpha(x)\,\nu^\alpha \neq 0$.
3. The number of singular directions increases from elliptic (none) via parabolic to hyperbolic.
4. (Real-valued) elliptic p.d.e.'s have an even order $m$; so odd order p.d.e.'s have singular directions.
Now for the theorem.
Theorem 4.1.2 (Cauchy-Kowalevski for one p.d.e.) We consider the Cauchy problem for the quasilinear p.d.e.
$$\sum_{|\alpha|=m} A_\alpha\!\left(x, u, \frac{\partial}{\partial x_1}u, \dots, \frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right)\left(\frac{\partial}{\partial x}\right)^{\!\alpha} u = F\!\left(x, u, \frac{\partial}{\partial x_1}u, \dots, \frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right)$$
with the boundary conditions as in (3.10) with an $(n-1)$-dimensional manifold $M$ and given functions $\varphi_0, \varphi_1, \dots, \varphi_{m-1}$.
Assume that

1. $M$ is a real analytic manifold with $\bar x \in M$. Let $\nu$ be the normal direction to $M$ in $\bar x$.
2. $\varphi_0, \varphi_1, \dots, \varphi_{m-1}$ are real analytic functions defined in a neighborhood of $\bar x$.
3. $(x,p) \mapsto A_\alpha(x,p)$ and $(x,p) \mapsto F(x,p)$ are real analytic in a neighborhood of $\bar x$.

If $\sum_{|\alpha|=m} A_\alpha\!\left(\bar x, u, \frac{\partial}{\partial x_1}u, \dots, \frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right)\nu^\alpha \neq 0$, with the derivatives of $u$ replaced by the appropriate (derivatives of) $\varphi_i$ at $x = \bar x$, then there is a ball $B_r(\bar x)$ and a unique analytic solution $u$ of the Cauchy problem in this ball.
For Cauchy-Kowalevski the characteristic curves should not be tangential to the manifold.
The theorem above seems quite general but in fact it is not. There is a version of the Theorem of Cauchy-Kowalevski for systems of p.d.e. But similarly to the o.d.e. case, we may reduce Theorem 4.1.2, or the version for systems, to a version for a system of first-order p.d.e.
4.1.2 Reduction to standard form
We are considering
$$\sum_{|\alpha|=m} A_\alpha\left(\frac{\partial}{\partial x}\right)^{\!\alpha} u = F \qquad (4.1)$$
where $A_\alpha$ and $F$ may depend on $x$ and the derivatives $\left(\frac{\partial}{\partial x}\right)^{\!\beta}u$ of orders $|\beta| \leq m-1$. In fact the $u$ could even be a vector function, that is $u : \mathbb{R}^n \to \mathbb{R}^N$, and the $A_\alpha$ would be $N\times N$ matrices. Are you still following? Then one might recognize that not only solutions turn into vector-valued solutions, but also that characteristic curves turn into characteristic manifolds.
For the moment let us start with the single equation in (4.1), which is, written out:
$$\sum_{|\alpha|=m} A_\alpha\!\left(x, u, \frac{\partial}{\partial x_1}u, \dots, \frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right)\left(\frac{\partial}{\partial x}\right)^{\!\alpha} u = F\!\left(x, u, \frac{\partial}{\partial x_1}u, \dots, \frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right).$$
The first step is to take new coordinates that fit with the manifold. This we have done before: see Definitions 2.1.8 and 2.2.1, where now $C^1$ or $C^m$ is replaced by real analytic. In fact we are only considering local solutions, so one box/coordinate system is sufficient. Next we introduce new coordinates for which the manifold is flat. The assumption that the manifold $M$ does not contain singular directions is preserved in this new coordinate system. Let us write $\{y_1, y_2, \dots, y_{n-1}, t\}$ for the coordinates where the flattened manifold is $\{t = 0\}$.
We are led to the following problem:
$$\hat A_{(0,\dots,0,m)}\left(\frac{\partial}{\partial t}\right)^{\!m}\hat u = \sum_{k=0}^{m-1}\ \sum_{\beta\in\mathbb{N}^{n-1},\ |\beta|=m-k} \hat A_{(\beta,k)}\left(\frac{\partial}{\partial t}\right)^{\!k}\left(\frac{\partial}{\partial y}\right)^{\!\beta}\hat u + \hat F \qquad (4.2)$$
where the coefficients $\hat A_\alpha$ and $\hat F$ depend on $y, t, \hat u, \frac{\partial}{\partial y_1}\hat u, \dots, \frac{\partial^{m-1}}{\partial t^{m-1}}\hat u$. The assumption that $M$ does not contain singular directions is transferred to the condition $\hat A_{(0,\dots,0,m)} \neq 0$, and hence we are allowed to divide by this factor to find just $\left(\frac{\partial}{\partial t}\right)^{m}\hat u$ on the left hand side of (4.2). The prescribed Cauchy data on
$M$ change into
$$\begin{cases} \hat u = \hat\varphi_0 = \varphi_0, \\[2pt] \dfrac{\partial}{\partial t}\hat u = \hat\varphi_1 = \varphi_1 + \sum_i a_i\dfrac{\partial}{\partial\tau_i}\varphi_0, \\[2pt] \quad\vdots \\[2pt] \dfrac{\partial^{m-1}}{\partial t^{m-1}}\hat u = \hat\varphi_{m-1} = \varphi_{m-1} + \sum_{k=0}^{m-2}\ \sum_{\beta\in\mathbb{N}^{n-1},\ k+|\beta|=m-2} a_{(\beta,k)}\left(\dfrac{\partial}{\partial\tau}\right)^{\!\beta}\varphi_k, \end{cases} \qquad (4.3)$$
taking the appropriate identification of $x \in M$ and $(y,t) \in \mathbb{R}^{n-1}\times\{0\}$.
As a next step one transfers the higher order initial value problem (4.2)-(4.3) into a first order system by setting
$$\tilde u_{(\beta,k)} = \left(\frac{\partial}{\partial t}\right)^{\!k}\left(\frac{\partial}{\partial y}\right)^{\!\beta}\hat u \quad \text{for } |\beta| + k \leq m-1 \text{ and } \beta \in \mathbb{N}^{n-1}.$$
We find for $k \leq m-1$ that
$$\frac{\partial}{\partial t}\tilde u_{(0,\dots,0,k)} = \left(\frac{\partial}{\partial t}\right)^{\!k+1}\hat u$$
and, if $|\beta| + k \leq m-1$ with $\beta \neq 0$ and, let us say, $i_\beta$ is the first index with $\beta_{i_\beta} \geq 1$, that
$$\frac{\partial}{\partial t}\tilde u_{(\beta,k)} = \frac{\partial}{\partial y_{i_\beta}}\tilde u_{(\tilde\beta,k+1)} \quad \text{with } \tilde\beta = (0, \dots, 0, \beta_{i_\beta}-1, \beta_{i_\beta+1}, \dots, \beta_{n-1}).$$
So combining these equations we have
$$\begin{cases} \dfrac{\partial}{\partial t}\tilde u_{(0,k)} = \tilde u_{(0,k+1)} & \text{for } 0 \leq k \leq m-2, \\[4pt] \dfrac{\partial}{\partial t}\tilde u_{(\beta,k)} = \dfrac{\partial}{\partial y_{i_\beta}}\tilde u_{(\tilde\beta,k+1)} & \text{for } 0 \leq k + |\beta| \leq m-1 \text{ with } \beta \neq 0, \\[4pt] \dfrac{\partial}{\partial t}\tilde u_{(0,\dots,0,m-1)} = \sum_{k=0}^{m-1}\ \sum_{\beta\in\mathbb{N}^{n-1},\ |\beta|=m-k} \tilde A_{(\beta,k)}\dfrac{\partial}{\partial y_{i_\beta}}\tilde u_{(\tilde\beta,k)} + \hat F & \end{cases} \qquad (4.4)$$
where $\tilde A_{(\beta,k)}$ and $\hat F$ are analytic functions of $y, t, \tilde u_{(\beta,k)}$ with $|\beta| + k \leq m-1$. The boundary conditions go to
$$\begin{cases} \tilde u_{(0,k)} = \hat\varphi_k & \text{for } 0 \leq k \leq m-1, \\[2pt] \tilde u_{(\beta,k)} = \left(\dfrac{\partial}{\partial y}\right)^{\!\beta}\hat\varphi_k & \text{for } 0 \leq k + |\beta| \leq m-1 \text{ with } \beta \neq 0. \end{cases} \qquad (4.5)$$
With some extra equations for the coordinates, in order to make the system autonomous as we did for the o.d.e., the equations (4.4)-(4.5) can be recast in vector notation:
$$\begin{cases} \dfrac{\partial}{\partial t}v = \breve A_j(v)\dfrac{\partial}{\partial y_j}v + \breve F(v) & \text{in } \mathbb{R}^{n-1}\times\mathbb{R}^+, \\[2pt] v(0,y) = \psi(y) & \text{on } \mathbb{R}^{n-1}. \end{cases}$$
As a final step we consider $u(t,y) = v(t,y) - \psi(0)$ in order to reduce to vanishing initial data at the origin:
$$\begin{cases} \dfrac{\partial}{\partial t}u = A_j(u)\dfrac{\partial}{\partial y_j}u + F(u) & \text{in } \mathbb{R}^{n-1}\times\mathbb{R}^+, \\[2pt] u(0,y) = \varphi(y) & \text{on } \mathbb{R}^{n-1}, \\[2pt] u(0,0) = 0. \end{cases} \qquad (4.6)$$
The formulation of the Cauchy-Kowalevski Theorem is usually stated for this system, which is more general than the one we started with.

Theorem 4.1.3 (Cauchy-Kowalevski) Assume that $u \mapsto A_j(u)$, $u \mapsto F(u)$ and $y \mapsto \varphi(y)$ are real analytic functions in a neighborhood of the origin. Then there exists a unique analytic solution of (4.6) in a neighborhood of the origin.

For a detailed proof one might consider the book by Fritz John.
4.2 A solution for the 1-d heat equation
4.2.1 The heat kernel on R
Let us try to derive a solution for
$$u_t - u_{xx} = 0 \qquad (4.7)$$
and preferably one that satisfies some initial condition(s). The idea is to look for similarity solutions, that is, to try to find some solution that fits with the scaling found in the equation. Or, in other words, to exploit this scaling to reduce the dimension. For the heat equation one notices that if $u(t,x)$ solves (4.7), then also $u(c^2t, cx)$ is a solution for any $c \in \mathbb{R}$. So let us try the (maybe silly) approach to take $c$ such that $c^2t = 1$. Then it could be possible to get a solution of the form $u(1, t^{-1/2}x)$, which is just a function of one variable. Let's try and set $u(1, t^{-1/2}x) = f(t^{-1/2}x)$. Putting this in (4.7) we obtain
$$\left(\frac{\partial}{\partial t} - \frac{\partial^2}{\partial x^2}\right)f(x/\sqrt t\,) = -\frac{x}{2t\sqrt t}\,f'(x/\sqrt t\,) - \frac{1}{t}\,f''(x/\sqrt t\,).$$
If we set $\xi = x/\sqrt t$ we find the o.d.e.
$$\tfrac12\xi f'(\xi) + f''(\xi) = 0.$$
This o.d.e. is separable and through
$$\frac{f''(\xi)}{f'(\xi)} = -\tfrac12\xi$$
and integrating both sides we obtain first
$$\ln f'(\xi) = -\tfrac14\xi^2 + C \quad \text{and} \quad f'(\xi) = c_1\,e^{-\frac14\xi^2}$$
and next
$$f(x/\sqrt t\,) = c_2 + c_1\int_{-\infty}^{x/\sqrt t} e^{-\frac14\xi^2}\,d\xi.$$
The constant solution is not very interesting, so we forget $c_2$, and for reasons to come we take $c_1$ such that $c_1\int_{-\infty}^{\infty} e^{-\frac14\xi^2}\,d\xi = 1$. A computation gives $c_1 = \tfrac12\pi^{-1/2}$. Here is a graph of that function.
[Figure: graph of $\bar u(t,x)$ for $x \in [-4,4]$, $t \in [0,1]$, $u \in [0,1]$.]
One finds that
$$\bar u(t,x) = \frac{1}{2\sqrt\pi}\int_{-\infty}^{x/\sqrt t} e^{-\frac14\xi^2}\,d\xi$$
is a solution of the equation. Moreover, for $(t,x) \neq (0,0)$ (and $t \geq 0$) this function is infinitely differentiable. As a result also all derivatives of this $\bar u$ satisfy the
heat equation. One of them is special. The function $x \mapsto \bar u(0,x)$ equals $0$ for $x < 0$ and $1$ for $x > 0$. The derivative $\frac{\partial}{\partial x}\bar u(0,x)$ equals Dirac's $\delta$-function in the sense of distributions. Let us give a name to the $x$-derivative of $\bar u$:
$$p(t,x) = \frac{1}{2\sqrt{\pi t}}\,e^{-\frac{x^2}{4t}}.$$
This function p is known as the heat kernel on R.
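A quick numerical illustration (my own addition, not part of the notes): convolving the heat kernel with a smooth compactly supported initial datum, as in the exercise below, and checking by finite differences that the result satisfies the heat equation at a sample point.

```python
import math

def p(t, x):
    # heat kernel on R
    return math.exp(-x * x / (4 * t)) / (2 * math.sqrt(math.pi * t))

def phi(x):
    # smooth bump as initial datum, supported on (-1, 1)
    return math.exp(-1.0 / (1 - x * x)) if abs(x) < 1 else 0.0

def z(t, x, n=4000):
    # z(t,x) = int p(t, x - y) phi(y) dy, midpoint rule on the support [-1, 1]
    h = 2.0 / n
    return sum(p(t, x - (-1 + (i + 0.5) * h)) * phi(-1 + (i + 0.5) * h)
               for i in range(n)) * h

# finite-difference check of z_t = z_xx at (t0, x0)
t0, x0, h = 0.5, 0.2, 1e-3
z_t  = (z(t0 + h, x0) - z(t0 - h, x0)) / (2 * h)
z_xx = (z(t0, x0 + h) - 2 * z(t0, x0) + z(t0, x0 - h)) / h**2
```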
Exercise 57 Let $\varphi \in C_0^\infty(\mathbb{R})$. Show that $z(t,x) = \int_{-\infty}^{\infty} p(t, x-y)\,\varphi(y)\,dy$ is a solution of
$$\begin{cases} \left(\frac{\partial}{\partial t} - \frac{\partial^2}{\partial x^2}\right)z(t,x) = 0 & \text{for } (t,x) \in \mathbb{R}^+\times\mathbb{R}, \\ z(0,x) = \varphi(x) & \text{for } x \in \mathbb{R}. \end{cases}$$
Exercise 58 Let $\varphi \in C_0^\infty(\mathbb{R}^3)$ and set $p_3(t,x) = p(t,x_1)\,p(t,x_2)\,p(t,x_3)$. Show that $z(t,x) = \int_{\mathbb{R}^3} p_3(t, x-y)\,\varphi(y)\,dy$ is a solution of
$$\begin{cases} \left(\frac{\partial}{\partial t} - \Delta\right)z(t,x) = 0 & \text{for } (t,x) \in \mathbb{R}^+\times\mathbb{R}^3, \\ z(0,x) = \varphi(x) & \text{for } x \in \mathbb{R}^3. \end{cases}$$
Exercise 59 Show that the function $w(x,t) = \frac{x}{t\sqrt t}\,e^{-\frac{x^2}{4t}}$ is a solution to the heat equation and satisfies
$$\lim_{t\downarrow 0} w(x,t) = 0 \ \text{ for any } x \in \mathbb{R}.$$
Is this a candidate for showing that
$$\begin{cases} \left(\frac{\partial}{\partial t} - \frac{\partial^2}{\partial x^2}\right)z(t,x) = 0 & \text{for } (t,x) \in \mathbb{R}^+\times\mathbb{R}, \\ z(0,x) = \varphi(x) & \text{for } x \in \mathbb{R}, \end{cases}$$
has a nonunique solution in $C^2\!\left(\mathbb{R}_0^+\times\mathbb{R}\right)$?
Is the convergence of $\lim_{t\downarrow 0} w(x,t) = 0$ uniform with respect to $x \in \mathbb{R}$?
4.2.2 On bounded intervals by an orthonormal set
For the heat equation in one dimension on a bounded interval, say $(0,1)$, we may find a solution for initial values $\varphi \in L^2$ by the following construction. First let us recall the boundary value problem:
$$\begin{cases} \text{the p.d.e.:} & u_t - u_{xx} = 0 \ \text{ in } (0,1)\times\mathbb{R}^+, \\ \text{the initial value:} & u = \varphi \ \text{ on } (0,1)\times\{0\}, \\ \text{the boundary condition:} & u = 0 \ \text{ on } \{0,1\}\times\mathbb{R}^+. \end{cases} \qquad (4.8)$$
We try to solve this boundary value problem by separation of variables. That is, we try to find functions of the form $u(t,x) = T(t)X(x)$ that solve the boundary value problem in (4.8) for some initial value. Putting these special $u$'s into (4.8) we find the following:
$$\begin{cases} T'(t)X(x) - T(t)X''(x) = 0 & \text{for } 0 < x < 1 \text{ and } t > 0, \\ T(0)X(x) = \varphi(x) & \text{for } 0 < x < 1, \\ T(t)X(0) = T(t)X(1) = 0 & \text{for } t > 0. \end{cases}$$
The only nontrivial solutions for $X$ are found by
$$\frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)} = -c \quad \text{and} \quad X(0) = X(1) = 0,$$
where also the constant $c$ is unknown. Some lengthy computations show that these nontrivial solutions are
$$X_k(x) = \alpha\sin(k\pi x) \quad \text{and} \quad c = k^2\pi^2.$$
(Without further knowledge one tries $c < 0$, finds functions $ae^{\sqrt{-c}\,x} + be^{-\sqrt{-c}\,x}$ that do not satisfy the boundary conditions except when they are trivial. Similarly $c = 0$ doesn't bring anything. For some $c > 0$ some combination of $\cos(\sqrt c\,x)$ and $\sin(\sqrt c\,x)$ does satisfy the boundary condition.)
Having these $X_k$ we can now solve the corresponding $T$ and find
$$T_k(t) = e^{-k^2\pi^2t}.$$
So the special solutions we have found are
$$u(t,x) = \alpha\,e^{-k^2\pi^2t}\sin(k\pi x).$$
Since the problem is linear, one may combine such special functions and find that one can solve
$$\begin{cases} u_t(t,x) - u_{xx}(t,x) = 0 & \text{for } 0 < x < 1 \text{ and } t > 0, \\ u(0,x) = \sum_{k=1}^{m}\alpha_k\sin(k\pi x) & \text{for } 0 < x < 1, \\ u(t,0) = u(t,1) = 0 & \text{for } t > 0. \end{cases} \qquad (4.9)$$
Now one should remember that the set of functions $\{e_k\}_{k=1}^{\infty}$ with $e_k(x) = \sqrt2\sin(k\pi x)$ is a complete orthonormal system in $L^2(0,1)$, so that we can write any function in $L^2(0,1)$ as $\sum_{k=1}^{\infty}\alpha_k e_k(\cdot)$. Let us recall:
Definition 4.2.1 Suppose $H$ is a Hilbert space with inner product $\langle\cdot,\cdot\rangle$. The set of functions $\{e_k\}_{k=1}^{\infty}$ in $H$ is called a complete orthonormal system for $H$ if the following holds:

1. $\langle e_k, e_\ell\rangle = \delta_{k\ell}$ (orthonormality);
2. $\lim_{m\to\infty}\left\|\sum_{k=1}^{m}\langle e_k, f\rangle e_k(\cdot) - f(\cdot)\right\|_H = 0$ for all $f \in H$ (completeness).
Remark 4.2.2 As a consequence one finds Parseval's identity: for all $f \in H$
$$\begin{aligned} \|f\|_H^2 &= \lim_{m\to\infty}\left\|\sum_{k=1}^{m}\langle e_k, f\rangle e_k(\cdot)\right\|_H^2 = \lim_{m\to\infty}\left\langle\sum_{k=1}^{m}\langle e_k, f\rangle e_k(\cdot),\ \sum_{j=1}^{m}\langle e_j, f\rangle e_j(\cdot)\right\rangle \\ &= \lim_{m\to\infty}\sum_{k=1}^{m}\sum_{j=1}^{m}\langle e_k, f\rangle\langle e_j, f\rangle\,\delta_{kj} = \lim_{m\to\infty}\sum_{k=1}^{m}\langle e_k, f\rangle^2 = \left\|\left(\langle e_k, f\rangle\right)_k\right\|_{\ell^2}^2. \end{aligned}$$
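Parseval's identity can be checked numerically (my own illustration). For $f(x) = x(1-x)$ on $(0,1)$ and the sine basis $e_k(x) = \sqrt2\sin(k\pi x)$, a direct computation gives $\langle e_k, f\rangle = 2\sqrt2\,(1-(-1)^k)/(k\pi)^3$, and the squared coefficients must sum to $\|f\|_{L^2}^2 = \int_0^1 x^2(1-x)^2\,dx = 1/30$.

```python
import math

# f(x) = x(1-x) on (0,1); e_k(x) = sqrt(2) sin(k pi x)
def coeff(k):
    # <e_k, f> in closed form; vanishes for even k
    return 2 * math.sqrt(2) * (1 - (-1) ** k) / (k * math.pi) ** 3

norm_sq = 1 / 30                              # ||f||^2 = int_0^1 x^2 (1-x)^2 dx
parseval_sum = sum(coeff(k) ** 2 for k in range(1, 2001))
```

The partial sums converge rapidly since the coefficients decay like $k^{-3}$.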
Remark 4.2.3 Some standard complete orthonormal systems on $L^2(0,1)$, which are useful for the differential equations that we will meet, are:

- $\left\{\sqrt2\sin(k\pi\,\cdot)\,;\ k \in \mathbb{N}^+\right\}$;
- $\left\{1,\ \sqrt2\sin(2k\pi\,\cdot),\ \sqrt2\cos(2k\pi\,\cdot)\,;\ k \in \mathbb{N}^+\right\}$;
- $\left\{1,\ \sqrt2\cos(k\pi\,\cdot)\,;\ k \in \mathbb{N}^+\right\}$.
Another famous set of orthonormal functions are the Legendre polynomials:

- $\left\{\sqrt{k+\tfrac12}\,P_k(\cdot)\,;\ k \in \mathbb{N}\right\}$ with $P_k(x) = \dfrac{1}{2^k k!}\dfrac{d^k}{dx^k}\left(x^2-1\right)^k$ is a complete orthonormal set in $L^2(-1,1)$.
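The Rodrigues formula above is directly computable (my own sketch, not part of the notes): build the coefficient lists of $P_k$ and check that $\sqrt{k+\frac12}\,P_k$ are orthonormal on $(-1,1)$ using exact integration of polynomials.

```python
from math import factorial, comb

def legendre(k):
    # coefficients of P_k via Rodrigues: 1/(2^k k!) d^k/dx^k (x^2 - 1)^k,
    # using (x^2 - 1)^k = sum_j comb(k, j) (-1)^{k-j} x^{2j}
    coeffs = [0.0] * (2 * k + 1)
    for j in range(k + 1):
        coeffs[2 * j] = comb(k, j) * (-1) ** (k - j)
    for _ in range(k):                       # differentiate k times
        coeffs = [m * a for m, a in enumerate(coeffs)][1:]
    return [a / (2 ** k * factorial(k)) for a in coeffs]

def inner(p, q):
    # exact integral over (-1, 1) of the product of two polynomials
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return sum(c * 2 / (m + 1) for m, c in enumerate(r) if m % 2 == 0)

# sqrt(k + 1/2) P_k should be orthonormal: check the Gram matrix
gram = [[(i + 0.5) ** 0.5 * (j + 0.5) ** 0.5 * inner(legendre(i), legendre(j))
         for j in range(4)] for i in range(4)]
```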
Remark 4.2.4 The use of orthogonal polynomials for solving linear p.d.e. is classical. For further reading we refer to the books of Courant-Hilbert, Strauss, Weinberger or Sobolev.
Exercise 60 Write down the boundary conditions that the third set of functions
in Remark 4.2.3 satisfy.
Exercise 61 The same question for the second set of functions.
Exercise 62 Find a complete orthonormal set of functions for
$$\begin{cases} -u'' = f & \text{in } (-1,1), \\ u(-1) = 0, \\ u(1) = 0. \end{cases}$$

Exercise 63 Find the appropriate set of orthonormal functions for
$$\begin{cases} -u'' = f & \text{in } (0,1), \\ u(0) = 0, \\ u_x(1) = 0. \end{cases}$$
Exercise 64 Suppose that $\{e_k\,;\ k \in \mathbb{N}\}$ is an orthonormal system in $L^2(0,1)$. Let $f \in L^2(0,1)$. Show that for $a_0, \dots, a_k \in \mathbb{R}$ the following holds:
$$\left\|f - \sum_{i=0}^{k}\langle f, e_i\rangle e_i\right\|_{L^2(0,1)} \leq \left\|f - \sum_{i=0}^{k} a_i e_i\right\|_{L^2(0,1)}.$$

1. Show that the set of Legendre polynomials is an orthonormal collection.
2. Show that every polynomial $p : \mathbb{R} \to \mathbb{R}$ can be written as a finite combination of Legendre polynomials.
3. Prove that the Legendre polynomials form a complete orthonormal set in $L^2(-1,1)$.
Coming back to the problem in (4.8), we have found for $\varphi \in L^2(0,1)$ the following solution:
$$u(t,x) = \sum_{k=1}^{\infty}\left\langle\sqrt2\sin(k\pi\,\cdot),\ \varphi\right\rangle e^{-\pi^2k^2t}\,\sqrt2\sin(k\pi x). \qquad (4.10)$$
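Formula (4.10) can be evaluated by truncating the series (my own sketch, not part of the notes). For the initial value $\varphi(x) = x(1-x)$ the coefficients are known in closed form, and for $t > 0$ the exponential damping makes a very short partial sum accurate; the check below compares a 3-term truncation at $t = 0.1$ against a much finer one.

```python
import math

def fourier_coeff(k):
    # <sqrt(2) sin(k pi .), phi> for phi(x) = x(1 - x); vanishes for even k
    return 2 * math.sqrt(2) * (1 - (-1) ** k) / (k * math.pi) ** 3

def u(t, x, n):
    # truncated version of (4.10)
    return sum(fourier_coeff(k) * math.exp(-math.pi ** 2 * k ** 2 * t)
               * math.sqrt(2) * math.sin(k * math.pi * x)
               for k in range(1, n + 1))

approx = u(0.1, 0.3, 3)       # three modes already suffice for t = 0.1
better = u(0.1, 0.3, 200)
```

The boundary condition $u(t,0) = u(t,1) = 0$ holds term by term.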
Exercise 65 Let $\varphi \in L^2(0,1)$ and let $u$ be the function in (4.10).

1. Show that $x \mapsto u(t,x)$ is in $L^2(0,1)$ for all $t \geq 0$.
2. Show that all functions $x \mapsto \frac{\partial^i}{\partial x^i}\frac{\partial^j}{\partial t^j}u(t,x)$ for $t > 0$ are in $L^2(0,1)$.
3. Show that there is $c > 0$ (independent of $\varphi$) with $\|u(t,\cdot)\|_{W^{2,2}(0,1)} \leq \frac{c}{t}\|\varphi\|_{L^2(0,1)}$.
4.3 A solution for the Laplace equation on a square
The boundary value problem that we will consider has zero Dirichlet boundary conditions:
$$\begin{cases} -\Delta u = f & \text{in } (0,1)^2, \\ u = 0 & \text{on } \partial\left((0,1)^2\right), \end{cases} \qquad (4.11)$$
where $f \in L^2\left((0,1)^2\right)$. We will try to use a separation of variables approach and first try to find sufficiently many functions of the form $u(x,y) = X(x)Y(y)$ that solve the corresponding eigenvalue problem:
$$\begin{cases} -\Delta\phi = \lambda\phi & \text{in } (0,1)^2, \\ \phi = 0 & \text{on } \partial\left((0,1)^2\right). \end{cases} \qquad (4.12)$$
At least there should be enough functions to approximate $f$ in $L^2$ sense. Putting such functions in (4.12) we obtain
$$\begin{cases} -X''(x)Y(y) - X(x)Y''(y) = \lambda X(x)Y(y) & \text{for } 0 < x,y < 1, \\ X(x)Y(0) = 0 = X(x)Y(1) & \text{for } 0 < x < 1, \\ X(0)Y(y) = 0 = X(1)Y(y) & \text{for } 0 < y < 1. \end{cases}$$
Skipping the trivial solution we find
$$\begin{cases} -\dfrac{X''(x)}{X(x)} - \dfrac{Y''(y)}{Y(y)} = \lambda & \text{for } 0 < x,y < 1, \\ X(0) = 0 = X(1), \\ Y(0) = 0 = Y(1). \end{cases}$$
Our quest for solutions $X$, $Y$, $\lambda$ leads us after some computations to
$$X_k(x) = \sin(k\pi x), \qquad Y_\ell(y) = \sin(\ell\pi y), \qquad \lambda_{k,\ell} = \pi^2k^2 + \pi^2\ell^2.$$
Since $\left\{\sqrt2\sin(k\pi\,\cdot)\,;\ k \in \mathbb{N}^+\right\}$ is a complete orthonormal system in $L^2(0,1)$ (and $\left\{\sqrt2\sin(\ell\pi\,\cdot)\,;\ \ell \in \mathbb{N}^+\right\}$ as well), the combinations supply such a system in $L^2\left((0,1)^2\right)$.
Lemma 4.3.1 If $\{e_k\,;\ k \in \mathbb{N}\}$ is a complete orthonormal system in $L^2(0,a)$ and if $\{f_\ell\,;\ \ell \in \mathbb{N}\}$ is a complete orthonormal system in $L^2(0,b)$, then $\{\tilde e_{k,\ell}\,;\ k,\ell \in \mathbb{N}\}$ with $\tilde e_{k,\ell}(x,y) = e_k(x)f_\ell(y)$ is a complete orthonormal system in $L^2\left((0,a)\times(0,b)\right)$.
Since we now have a complete orthonormal system for (4.11), we can approximate $f \in L^2\left((0,1)^2\right)$. Setting
$$e_{k,\ell}(x,y) = 2\sin(k\pi x)\sin(\ell\pi y)$$
we have
$$\lim_{M\to\infty}\left\|f - \sum_{m=2}^{M}\sum_{i+j=m}\langle e_{i,j}, f\rangle\,e_{i,j}\right\|_{L^2\left((0,1)^2\right)} = 0.$$
Since for the right hand side $f$ in (4.11) replaced by
$$\tilde f := \sum_{m=2}^{M}\sum_{i+j=m}\langle e_{i,j}, f\rangle\,e_{i,j}$$
the solution is directly computable, namely
$$\tilde u = \sum_{m=2}^{M}\sum_{i+j=m}\frac{\langle e_{i,j}, f\rangle}{\lambda_{i,j}}\,e_{i,j},$$
there is some hope that the solution for $f \in L^2\left((0,1)^2\right)$ is given by
$$u = \lim_{M\to\infty}\sum_{m=2}^{M}\sum_{i+j=m}\frac{\langle e_{i,j}, f\rangle}{\lambda_{i,j}}\,e_{i,j}, \qquad (4.13)$$
at least in $L^2$ sense. Indeed, one can show that for every $f \in L^2\left((0,1)^2\right)$ a solution $u$ is given by (4.13). This solution will be unique in $W^{2,2}\left((0,1)^2\right)\cap W_0^{1,2}\left((0,1)^2\right)$.
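Formula (4.13) is a spectral solver that one can implement directly (my own sketch, not part of the notes): for a right hand side with finitely many modes, dividing each coefficient by the eigenvalue gives the solution, and a finite-difference Laplacian confirms $-\Delta u = f$ at an interior point.

```python
import math

def e(i, j, x, y):
    # eigenfunctions of -Laplace on the unit square with zero Dirichlet data
    return 2 * math.sin(i * math.pi * x) * math.sin(j * math.pi * y)

def lam(i, j):
    return math.pi ** 2 * (i * i + j * j)

# a right hand side with two modes (coefficients chosen arbitrarily)
modes = {(1, 2): 3.0, (2, 3): -1.5}

def f(x, y):
    return sum(c * e(i, j, x, y) for (i, j), c in modes.items())

def u(x, y):
    # solution by (4.13): divide each coefficient by the eigenvalue
    return sum(c / lam(i, j) * e(i, j, x, y) for (i, j), c in modes.items())

# finite-difference check that -Delta u = f at an interior point
h, x0, y0 = 1e-4, 0.37, 0.61
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2
```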
Exercise 66 Suppose that $f \in L^2\left((0,1)^2\right)$.

1. Show that the following series converge in $L^2$ sense as $M \to \infty$:
$$\sum_{m=2}^{M}\sum_{i+j=m} i^{\alpha_1}j^{\alpha_2}\,\frac{\langle e_{i,j}, f\rangle}{\lambda_{i,j}}\,e_{i,j}$$
with $\alpha \in \mathbb{N}^2$ and $|\alpha| \leq 2$ (6 different series).
2. Show that $u$, $u_x$, $u_y$, $u_{xx}$, $u_{xy}$ and $u_{yy}$ are in $L^2\left((0,1)^2\right)$.
Exercise 67 Find a complete orthonormal system that fits for
$$\begin{cases} -\Delta u(x,y) = f(x,y) & \text{for } 0 < x,y < 1, \\ u(0,y) = u(1,y) = 0 & \text{for } 0 < y < 1, \\ u_y(x,0) = u_y(x,1) = 0 & \text{for } 0 < x < 1. \end{cases} \qquad (4.14)$$
Exercise 68 Have a try for the disk in $\mathbb{R}^2$ using polar coordinates:
$$\begin{cases} -\Delta u(x) = f(x) & \text{for } |x| < 1, \\ u(x) = 0 & \text{for } |x| = 1. \end{cases} \qquad (4.15)$$
In polar coordinates: $\Delta = \dfrac{\partial^2}{\partial r^2} + \dfrac{1}{r}\dfrac{\partial}{\partial r} + \dfrac{1}{r^2}\dfrac{\partial^2}{\partial\varphi^2}$.
4.4 Solutions for the 1-d wave equation
4.4.1 On unbounded intervals
For the 1-dimensional wave equation we have seen a solution by the formula of d'Alembert. Indeed
$$\begin{cases} u_{tt}(x,t) - c^2u_{xx}(x,t) = 0 & \text{for } x \in \mathbb{R} \text{ and } t \in \mathbb{R}^+, \\ u(x,0) = \varphi_0(x) & \text{for } x \in \mathbb{R}, \\ u_t(x,0) = \varphi_1(x) & \text{for } x \in \mathbb{R}, \end{cases}$$
has a solution of the form
$$u(x,t) = \tfrac12\varphi_0(x-ct) + \tfrac12\varphi_0(x+ct) + \frac{1}{2c}\int_{x-ct}^{x+ct}\varphi_1(s)\,ds.$$
We may use this solution to find a solution for the wave equation on the half line:
$$\begin{cases} u_{tt}(x,t) - c^2u_{xx}(x,t) = 0 & \text{for } x \in \mathbb{R}^+ \text{ and } t \in \mathbb{R}^+, \\ u(x,0) = \varphi_0(x) & \text{for } x \in \mathbb{R}^+, \\ u_t(x,0) = \varphi_1(x) & \text{for } x \in \mathbb{R}^+, \\ u(0,t) = 0 & \text{for } t \in \mathbb{R}^+. \end{cases} \qquad (4.16)$$
A way to do this is by a 'reflection'. We extend the initial values $\varphi_0$ and $\varphi_1$ to $\mathbb{R}^-$, use d'Alembert's formula on $\mathbb{R}\times\mathbb{R}_0^+$, and hope that if we restrict the function we have found to $\mathbb{R}_0^+\times\mathbb{R}_0^+$ a solution of (4.16) comes out.
First the extension. For $i = 0, 1$ we set
$$\tilde\varphi_i(x) = \begin{cases} \varphi_i(x) & \text{if } x > 0, \\ 0 & \text{if } x = 0, \\ -\varphi_i(-x) & \text{if } x < 0. \end{cases}$$
Then we make the distinction between
$$\text{I} := \{(x,t)\,;\ 0 \leq ct \leq x\}, \qquad \text{II} := \{(x,t)\,;\ ct > x\}$$
(the case $0 \leq ct \leq -x$ with $x < 0$ we will not need).
Case I. The solution for the modified $\tilde\varphi_i$ on $\mathbb{R}$ is
$$\tilde u(x,t) = \tfrac12\tilde\varphi_0(x-ct) + \tfrac12\tilde\varphi_0(x+ct) + \frac{1}{2c}\int_{x-ct}^{x+ct}\tilde\varphi_1(s)\,ds = \tfrac12\varphi_0(x-ct) + \tfrac12\varphi_0(x+ct) + \frac{1}{2c}\int_{x-ct}^{x+ct}\varphi_1(s)\,ds.$$
Case II. The solution for the modified $\tilde\varphi_i$ on $\mathbb{R}$ is, now using that $x - ct < 0$,
$$\begin{aligned} \tilde u(x,t) &= \tfrac12\tilde\varphi_0(x-ct) + \tfrac12\tilde\varphi_0(x+ct) + \frac{1}{2c}\int_{x-ct}^{x+ct}\tilde\varphi_1(s)\,ds \\ &= -\tfrac12\varphi_0(ct-x) + \tfrac12\varphi_0(x+ct) + \frac{1}{2c}\int_{x-ct}^{0}\left(-\varphi_1(-s)\right)ds + \frac{1}{2c}\int_{0}^{x+ct}\varphi_1(s)\,ds \\ &= -\tfrac12\varphi_0(ct-x) + \tfrac12\varphi_0(x+ct) + \frac{1}{2c}\int_{ct-x}^{ct+x}\varphi_1(s)\,ds. \end{aligned}$$
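The case distinction above can be packaged into one function (my own sketch, not part of the notes): extend the data oddly, apply d'Alembert on the whole line, and check numerically that $u(0,t) = 0$ and that the initial displacement is reproduced. The particular data below are hypothetical choices for illustration.

```python
import math

c = 1.5

def phi0(x):                      # initial displacement, with phi0(0) = 0
    return x * x * math.exp(-x)

def phi1(x):                      # initial velocity, with phi1(0) = 0
    return x * math.exp(-x)

def odd(f):
    # odd reflection onto the whole line
    return lambda s: f(s) if s >= 0 else -f(-s)

def u(x, t, n=2000):
    # d'Alembert on R with the oddly extended data; restricted to x >= 0
    # this solves the half-line problem (4.16)
    p0, p1 = odd(phi0), odd(phi1)
    a, b = x - c * t, x + c * t
    h = (b - a) / n
    integral = sum(p1(a + (i + 0.5) * h) for i in range(n)) * h   # midpoint rule
    return 0.5 * (p0(a) + p0(b)) + integral / (2 * c)

bc = u(0.0, 0.8)                  # boundary value, should vanish
```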
Exercise 69 Suppose that $\varphi_0 \in C^2(\mathbb{R}_0^+)$ and $\varphi_1 \in C^1(\mathbb{R}_0^+)$ with $\varphi_0(0) = \varphi_1(0) = 0$.

1. Show that
$$u(x,t) = \operatorname{sign}(x-ct)\,\tfrac12\varphi_0(|x-ct|) + \tfrac12\varphi_0(x+ct) + \frac{1}{2c}\int_{|x-ct|}^{x+ct}\varphi_1(s)\,ds$$
is $C^2\left(\mathbb{R}_0^+\times\mathbb{R}_0^+\right)$ and a solution to (4.16).
2. What happens when we remove the condition $\varphi_1(0) = 0$?
3. And what if we also remove the condition $\varphi_0(0) = 0$?
Exercise 70 Solve the problem
$$\begin{cases} u_{tt} - c^2u_{xx} = 0 & \text{in } \mathbb{R}^+\times\mathbb{R}^+, \\ u(x,0) = \varphi_0(x) & \text{for } x \in \mathbb{R}^+, \\ u_t(x,0) = \varphi_1(x) & \text{for } x \in \mathbb{R}^+, \\ u_x(0,t) = 0 & \text{for } t \in \mathbb{R}^+, \end{cases}$$
using d'Alembert and a reflection.
Exercise 71 Try to find out what happens when we use a similar approach for
$$\begin{cases} u_{tt}(x,t) - c^2u_{xx}(x,t) = 0 & \text{for } (x,t) \in (0,1)\times\mathbb{R}^+, \\ u(x,0) = \varphi_0(x) & \text{for } x \in (0,1), \\ u_t(x,0) = \varphi_1(x) & \text{for } x \in (0,1), \\ u(0,t) = u(1,t) = 0 & \text{for } t \in \mathbb{R}^+. \end{cases}$$
The picture on the right might help. [Figure: the region $(0,1)\times(0,4)$ in the $(x,t)$-plane.]
4.4.2 On a bounded interval
Exercise 72 We return to the boundary value problem in Exercise 71.

1. Use an appropriate complete orthonormal set $\{e_k(\cdot)\,;\ k \in \mathbb{N}\}$ to find a solution:
$$u(t,x) = \sum_{k=0}^{\infty} a_k(t)\,e_k(x).$$
2. Suppose that $\varphi_0, \varphi_1 \in L^2(0,1)$. Is it true that $x \mapsto u(t,x) \in L^2(0,1)$ for all $t > 0$?
3. Suppose that $\varphi_0, \varphi_1 \in C[0,1]$. Is it true that $x \mapsto u(t,x) \in C[0,1]$ for all $t > 0$?
4. Suppose that $\varphi_0, \varphi_1 \in C_0[0,1]$. Is it true that $x \mapsto u(t,x) \in C_0[0,1]$ for all $t > 0$?
5. Give a condition on $\varphi_0, \varphi_1$ such that $\sup\left\{\|u(\cdot,t)\|_{L^2(0,1)}\,;\ t \geq 0\right\}$ is bounded.
6. Give a condition on $\varphi_0, \varphi_1$ such that $\sup\{|u(x,t)|\,;\ 0 \leq x \leq 1,\ t \geq 0\}$ is bounded.
7. Give a condition on $\varphi_0, \varphi_1$ such that $\|u(\cdot,\cdot)\|_{L^2\left((0,1)\times\mathbb{R}^+\right)}$ is bounded.
4.5 A fundamental solution for the Laplace operator
A function $u$ defined on a domain $\Omega \subset \mathbb{R}^n$ is called harmonic whenever
$$\Delta u = 0 \ \text{ in } \Omega.$$
We will start today by looking for radially symmetric harmonic functions. Since the Laplace operator $\Delta$ in $\mathbb{R}^n$ can be written for radial coordinates with $r = |x|$ by
$$\Delta = r^{1-n}\frac{\partial}{\partial r}r^{n-1}\frac{\partial}{\partial r} + \frac{1}{r^2}\Delta_{LB},$$
where $\Delta_{LB}$ is the so-called Laplace-Beltrami operator on the surface of the unit ball $S^{n-1}$ in $\mathbb{R}^n$, radial solutions of $\Delta u = 0$ satisfy
$$r^{1-n}\frac{\partial}{\partial r}r^{n-1}\frac{\partial}{\partial r}u(r) = 0.$$
Hence $\frac{\partial}{\partial r}r^{n-1}\frac{\partial}{\partial r}u(r) = 0$, which implies $r^{n-1}\frac{\partial}{\partial r}u(r) = c$, which implies $\frac{\partial}{\partial r}u(r) = c\,r^{1-n}$, which implies
$$\begin{aligned} \text{for } n = 2:&\quad u(r) = c_1\log r + c_2, \\ \text{for } n \geq 3:&\quad u(r) = c_1 r^{2-n} + c_2. \end{aligned}$$
Removing the not so interesting constant $c_2$ and choosing a special $c_1$ (the reason will become apparent later) we define
$$\begin{aligned} \text{for } n = 2:&\quad F_2(x) = \frac{-1}{2\pi}\log|x|, \\ \text{for } n \geq 3:&\quad F_n(x) = \frac{1}{\omega_n(n-2)}\,|x|^{2-n}, \end{aligned} \qquad (4.17)$$
where $\omega_n$ is the surface area of the unit ball in $\mathbb{R}^n$.
Lemma 4.5.1 The functions $F_n$ and $\nabla F_n$ are in $L^1_{loc}(\mathbb{R}^n)$.
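A quick numerical check (my own addition, not part of the notes): away from the origin $F_2(x) = -\frac{1}{2\pi}\log|x|$ is harmonic, which a five-point finite-difference Laplacian confirms.

```python
import math

def F2(x, y):
    # fundamental solution of the Laplace operator in R^2, as in (4.17)
    return -math.log(math.hypot(x, y)) / (2 * math.pi)

# five-point Laplacian at a point away from the singularity at the origin
h, x0, y0 = 1e-4, 0.8, -0.3
lap = (F2(x0 + h, y0) + F2(x0 - h, y0) + F2(x0, y0 + h) + F2(x0, y0 - h)
       - 4 * F2(x0, y0)) / h**2
```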
Next let us recall a consequence of Green's formula:

Lemma 4.5.2 For $u, v \in C^2(\bar\Omega)$ and $\Omega \subset \mathbb{R}^n$ a bounded domain with $C^1$-boundary,
$$\int_\Omega u\,\Delta v\,dy = \int_\Omega \Delta u\;v\,dy + \int_{\partial\Omega}\left(u\frac{\partial}{\partial\nu}v - \left(\frac{\partial}{\partial\nu}u\right)v\right)d\sigma,$$
where $\nu$ is the outwards pointing normal direction.
So if $u, v \in C^2(\bar\Omega)$ with $u$ harmonic, we find
$$\int_\Omega u\,\Delta v\,dy = \int_{\partial\Omega}\left(u\frac{\partial}{\partial\nu}v - \left(\frac{\partial}{\partial\nu}u\right)v\right)d\sigma. \qquad (4.18)$$
Now we use this with $u(y) = F_n(x-y)$ for $x \in \mathbb{R}^n$ and the $F_n$ in (4.17). If $x \notin \bar\Omega$ then (4.18) holds. If $x \in \Omega$ then, since the $F_n$ are not harmonic in $0$, we have to drill a hole around $x$ for $y \mapsto F_n(x-y)$. Since the $F_n$ have an integrable singularity we find
$$\int_\Omega F_n(x-y)\,\Delta v(y)\,dy = \lim_{\varepsilon\downarrow 0}\int_{\Omega\setminus B_\varepsilon(x)} F_n(x-y)\,\Delta v(y)\,dy.$$
So instead of $\Omega$ we will use $\Omega\setminus B_\varepsilon(x) = \{y \in \Omega\,;\ |x-y| > \varepsilon\}$ and find
$$\begin{aligned} \lim_{\varepsilon\downarrow 0}\int_{\Omega\setminus B_\varepsilon(x)} F_n(x-y)\,\Delta v(y)\,dy &= \int_{\partial(\Omega\setminus B_\varepsilon(x))}\left(F_n(x-y)\frac{\partial}{\partial\nu}v - \left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\right)d\sigma \\ &= \int_{\partial\Omega}\left(F_n(x-y)\frac{\partial}{\partial\nu}v - \left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\right)d\sigma \\ &\quad - \lim_{\varepsilon\downarrow 0}\int_{\partial B_\varepsilon(x)}\left(F_n(x-y)\frac{\partial}{\partial\nu}v - \left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\right)d\sigma. \end{aligned} \qquad (4.19)$$
Notice that the minus sign appeared since the outside of $\Omega\setminus B_\varepsilon(x)$ is the inside of $B_\varepsilon(x)$. Now let us separately consider the last two terms in (4.19).
We find that
$$\lim_{\varepsilon\downarrow 0}\int_{\partial B_\varepsilon(x)} F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma = \begin{cases} \displaystyle\lim_{\varepsilon\downarrow 0}\int_{|\omega|=1}\left(\frac{-1}{2\pi}\log\varepsilon\right)\frac{\partial}{\partial\nu}v(x+\varepsilon\omega)\,\varepsilon\,d\omega = 0 & \text{if } n = 2, \\[6pt] \displaystyle\lim_{\varepsilon\downarrow 0}\int_{|\omega|=1}\frac{1}{\omega_n(n-2)}\,\varepsilon^{2-n}\,\frac{\partial}{\partial\nu}v(x+\varepsilon\omega)\,\varepsilon^{n-1}\,d\omega = 0 & \text{if } n \geq 3, \end{cases}$$
and since $\frac{\partial}{\partial\nu}F_n(x-y) = -\omega_n^{-1}\varepsilon^{1-n}$ on $\partial B_\varepsilon(x)$ for all $n \geq 2$, it follows that
$$\lim_{\varepsilon\downarrow 0}\int_{\partial B_\varepsilon(x)}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma = \lim_{\varepsilon\downarrow 0}\int_{|\omega|=1}-\omega_n^{-1}\varepsilon^{1-n}\,v(x+\varepsilon\omega)\,\varepsilon^{n-1}\,d\omega = -v(x).$$
Combining these results we have found, assuming that $x \in \Omega$:
$$v(x) = \int_\Omega F_n(x-y)\left(-\Delta v(y)\right)dy + \int_{\partial\Omega} F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma - \int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma.$$
If $x \notin \bar\Omega$ then we may directly use (4.18) to find
$$0 = \int_\Omega F_n(x-y)\left(-\Delta v(y)\right)dy + \int_{\partial\Omega} F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma - \int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma.$$
Example 4.5.3 Suppose that we want to find a solution for
$$\begin{cases} -\Delta v = f & \text{on } \mathbb{R}^+\times\mathbb{R}^{n-1}, \\ v = \varphi & \text{on } \{0\}\times\mathbb{R}^{n-1}, \end{cases}$$
say for $f \in C_0^\infty(\mathbb{R}^+\times\mathbb{R}^{n-1})$ and $\varphi \in C_0^\infty(\mathbb{R}^{n-1})$. Introducing
$$x^* = (-x_1, x_2, \dots, x_n)$$
we find that for $x \in \mathbb{R}^+\times\mathbb{R}^{n-1}$:
$$v(x) = \int_\Omega F_n(x-y)\left(-\Delta v(y)\right)dy + \int_{\partial\Omega} F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma - \int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma, \qquad (4.20)$$
$$0 = \int_\Omega F_n(x^*-y)\left(-\Delta v(y)\right)dy + \int_{\partial\Omega} F_n(x^*-y)\frac{\partial}{\partial\nu}v\,d\sigma - \int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x^*-y)\right)v\,d\sigma. \qquad (4.21)$$
On the boundary, that is for $y_1 = 0$, it holds that $F_n(x-y) = F_n(x^*-y)$ and
$$\frac{\partial}{\partial\nu}F_n(x-y) = -\frac{\partial}{\partial x_1}F_n(x-y) = \frac{\partial}{\partial x_1}F_n(x^*-y) = -\frac{\partial}{\partial\nu}F_n(x^*-y).$$
So by subtracting (4.20)-(4.21) and setting
$$G(x,y) = F_n(x-y) - F_n(x^*-y)$$
we find
$$v(x) = \int_{\mathbb{R}^+\times\mathbb{R}^{n-1}} G(x,y)\left(-\Delta v(y)\right)dy - \int_{\{0\}\times\mathbb{R}^{n-1}}\left(\frac{\partial}{\partial\nu}G(x,y)\right)v(0, y_2, \dots, y_n)\,dy_2\dots dy_n.$$
Since both $-\Delta v$ and $v|_{\{0\}\times\mathbb{R}^{n-1}}$ are given, we might have some hope to have found a solution, namely by
$$v(x) = \int_{\mathbb{R}^+\times\mathbb{R}^{n-1}} G(x,y)\,f(y)\,dy - \int_{\{0\}\times\mathbb{R}^{n-1}}\left(\frac{\partial}{\partial\nu}G(x,y)\right)\varphi(y_2, \dots, y_n)\,dy_2\dots dy_n.$$
Indeed this will be a solution, but this conclusion does need some proof.
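The reflected kernel is easy to inspect numerically (my own sketch, for $n = 2$, not part of the notes): $G(x,y) = F_2(x-y) - F_2(x^*-y)$ vanishes for $y$ on the boundary $\{y_1 = 0\}$ and is harmonic in $y$ away from $x$.

```python
import math

def F2(z1, z2):
    # fundamental solution in R^2
    return -math.log(math.hypot(z1, z2)) / (2 * math.pi)

def G(x, y):
    # Green function of the half-plane {x1 > 0} by reflection: x* = (-x1, x2)
    return F2(x[0] - y[0], x[1] - y[1]) - F2(-x[0] - y[0], x[1] - y[1])

x = (0.7, 0.2)
boundary_val = G(x, (0.0, 1.3))          # y on {y1 = 0}: must vanish

# harmonicity in y away from x, via the five-point Laplacian
h, y0 = 1e-4, (0.4, -0.5)
lap = (G(x, (y0[0] + h, y0[1])) + G(x, (y0[0] - h, y0[1]))
     + G(x, (y0[0], y0[1] + h)) + G(x, (y0[0], y0[1] - h)) - 4 * G(x, y0)) / h**2
```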
Definition 4.5.4 Let $\Omega \subset \mathbb{R}^n$ be a domain. A function $G : \bar\Omega\times\bar\Omega \to \mathbb{R}$ such that
$$v(x) = \int_\Omega G(x,y)\,f(y)\,dy - \int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}G(x,y)\right)\varphi(y)\,d\sigma$$
is a solution of
$$\begin{cases} -\Delta v = f & \text{in } \Omega, \\ v = \varphi & \text{on } \partial\Omega, \end{cases} \qquad (4.22)$$
is called a Green function for $\Omega$.
Exercise 73 Show that the v in the previous example is indeed a solution.
Exercise 74 Let $f \in C_0^\infty(\mathbb{R}^+\times\mathbb{R}^+\times\mathbb{R}^{n-2})$ and $\varphi_a, \varphi_b \in C_0^\infty(\mathbb{R}^+\times\mathbb{R}^{n-2})$. Find a solution for
$$\begin{cases} -\Delta v(x) = f(x) & \text{for } x \in \mathbb{R}^+\times\mathbb{R}^+\times\mathbb{R}^{n-2}, \\ v(0, x_2, x_3, \dots, x_n) = \varphi_a(x_2, x_3, \dots, x_n) & \text{for } x_2 \in \mathbb{R}^+,\ x_k \in \mathbb{R} \text{ and } k \geq 3, \\ v(x_1, 0, x_3, \dots, x_n) = \varphi_b(x_1, x_3, \dots, x_n) & \text{for } x_1 \in \mathbb{R}^+,\ x_k \in \mathbb{R} \text{ and } k \geq 3. \end{cases}$$
Exercise 75 Let $n \in \{2, 3, \dots\}$ and define
$$\tilde F_n(x,y) = \begin{cases} F_n\!\left(x|y| - \dfrac{y}{|y|}\right) & \text{for } y \neq 0, \\ F_n(1, 0, \dots, 0) & \text{for } y = 0. \end{cases}$$
1. Show that $\tilde F_n(x,y) = \tilde F_n(y,x)$ for all $x, y \in \mathbb{R}^n$ with $y \neq |x|^{-2}x$.
2. Show that $y \mapsto \tilde F_n(x,y)$ is harmonic on $\mathbb{R}^n\setminus\left\{|x|^{-2}x\right\}$.
3. Set $G(x,y) = F_n(x-y) - \tilde F_n(x,y)$ and $B = \{x \in \mathbb{R}^n\,;\ |x| < 1\}$. Show that
$$v(x) = \int_B G(x,y)\,f(y)\,dy - \int_{\partial B}\left(\frac{\partial}{\partial\nu}G(x,y)\right)\varphi(y)\,d\sigma$$
is a solution of
$$\begin{cases} -\Delta v = f & \text{in } B, \\ v = \varphi & \text{on } \partial B. \end{cases} \qquad (4.23)$$
You may assume that $f \in C^\infty(\bar B)$ and $\varphi \in C^\infty(\partial B)$.
Exercise 76 Consider (4.23) for $\varphi = 0$ and let $G$ be the Green function of the previous exercise.

1. Let $p > \frac12 n$. Show that $G : L^p(B) \to L^\infty(B)$ defined by
$$(Gf)(x) := \int_B G(x,y)\,f(y)\,dy \qquad (4.24)$$
is a bounded operator.
2. Let $p > n$. Show that $G : L^p(B) \to W^{1,\infty}(B)$ defined as in (4.24) is a bounded operator.
3. Try to improve one of the above estimates.
4. Compare these results with the results of section 2.5.
Exercise 77 Suppose that $f$ is real analytic and consider
$$\begin{cases} -\Delta v = f & \text{in } B, \\ v = 0 & \text{on } \partial B, \\ \frac{\partial}{\partial\nu}v = 0 & \text{on } \partial B. \end{cases} \qquad (4.25)$$

1. What may we conclude from Cauchy-Kowalevski?
2. Suppose that $f = 1$. Compute the 'Cauchy-Kowalevski' solution.
Week 5
Some classics for a unique solution
5.1 Energy methods
Suppose that we are considering the heat equation or wave equation with the space variable $x$ in a bounded domain $\Omega$, that is
$$\begin{cases} \text{p.d.e.:} & u_t = \Delta u \ \text{ in } \Omega\times\mathbb{R}^+, \\ \text{initial c.:} & u(x,0) = \phi_0(x) \ \text{ for } x \in \Omega, \\ \text{boundary c.:} & u(x,t) = g(x,t) \ \text{ for } (x,t) \in \partial\Omega\times\mathbb{R}^+, \end{cases} \qquad (5.1)$$
or
$$\begin{cases} \text{p.d.e.:} & u_{tt} = c^2\Delta u \ \text{ in } \Omega\times\mathbb{R}^+, \\ \text{initial c.:} & u(x,0) = \phi_0(x),\ u_t(x,0) = \phi_1(x) \ \text{ for } x \in \Omega, \\ \text{boundary c.:} & u(x,t) = g(x,t) \ \text{ for } (x,t) \in \partial\Omega\times\mathbb{R}^+. \end{cases} \qquad (5.2)$$
Then it is not clear by CauchyKowalevski that these problems are wellposed.
Only locally near (x, 0) ∈ Ω × R there is a unique solution for the second
problem at least when the φ
i
are real analytic. When Ω is a special domain we
may construct solutions of both problems above by an appropriate orthonormal
system. Even for more general domains there are ways of establishing a solution.
The question we want to address in this paragraph is whether or not such a
solution in unique. For the two problems above one may proceed by considering
an energy functional.
• For (5.1) this is
$$E(u)(t) = \frac{1}{2} \int_\Omega u(x, t)^2\, dx.$$
This quantity is called the thermal energy of the body $\Omega$ at time $t$.

Now let $u_1$ and $u_2$ be two solutions of (5.1) and set $u = u_1 - u_2$. This $u$ is a solution of (5.1) with $\phi_0 = 0$ and $g = 0$. Since $u$ satisfies zero initial and boundary conditions we obtain
$$\frac{\partial}{\partial t} E(u)(t) = \frac{\partial}{\partial t} \left( \frac{1}{2} \int_\Omega u(x, t)^2\, dx \right) = \int_\Omega u_t(x, t)\, u(x, t)\, dx = \int_\Omega \Delta u(x, t)\, u(x, t)\, dx = -\int_\Omega |\nabla u(x, t)|^2\, dx \leq 0.$$
So $t \mapsto E(u)(t)$ decreases, is nonnegative and starts with $E(u)(0) = 0$. Hence $E(u)(t) \equiv 0$ for all $t \geq 0$, which implies that $u(x, t) \equiv 0$.
Lemma 5.1.1 Suppose $\Omega$ is a bounded domain in $\mathbb{R}^n$ with $\partial\Omega \in C^1$ and let $T > 0$. Then $C^2(\bar\Omega \times [0, T])$ solutions of (5.1) are unique.
• For (5.2) the appropriate functional is
$$E(u)(t) = \frac{1}{2} \int_\Omega \left( c^2 |\nabla u(x, t)|^2 + u_t(x, t)^2 \right) dx.$$
This quantity is called the energy of the body $\Omega$ at time $t$:

• $\frac{1}{2} \int_\Omega c^2 |\nabla u(x, t)|^2\, dx$ is the potential energy due to the deviation from the equilibrium,

• $\frac{1}{2} \int_\Omega u_t(x, t)^2\, dx$ is the kinetic energy.
Again we suppose that there are two solutions and set $u = u_1 - u_2$, such that $u$ satisfies zero initial and boundary conditions. Now (note that we use that $u$ is twice differentiable)
$$\frac{\partial}{\partial t} E(u)(t) = \int_\Omega \left( c^2 \nabla u(x, t) \cdot \nabla u_t(x, t) + u_t(x, t)\, u_{tt}(x, t) \right) dx = \int_\Omega \left( -c^2 \Delta u(x, t) + u_{tt}(x, t) \right) u_t(x, t)\, dx = 0.$$
As before we may conclude that $E(u)(t) \equiv 0$ for all $t \geq 0$. This implies that $\nabla u(x, t) = 0$ for all $x \in \Omega$ and $t \geq 0$. Since $u(x, t) = 0$ for $x \in \partial\Omega$ we find that $u(x, t) = 0$. Indeed, let $x^* \in \partial\Omega$ be the point closest to $x$; then
$$u(x, t) = u(x^*, t) + \int_0^1 \nabla u(\theta x + (1 - \theta) x^*, t) \cdot (x - x^*)\, d\theta = 0.$$
Lemma 5.1.2 Suppose $\Omega$ is a bounded domain in $\mathbb{R}^n$ with $\partial\Omega \in C^1$ and let $T > 0$. Then $C^2(\bar\Omega \times [0, T])$ solutions of (5.2) are unique.
• In a closely related way one can even prove uniqueness for
$$\begin{cases} -\Delta u = f & \text{in } \Omega, \\ u = g & \text{on } \partial\Omega. \end{cases} \tag{5.3}$$
Suppose that there are two solutions and call the difference $u$. This $u$ satisfies (5.3) with $f$ and $g$ both equal to $0$. So
$$\int_\Omega |\nabla u|^2\, dx = -\int_\Omega \Delta u\; u\, dx = 0$$
and $\nabla u = 0$, implying, since $u|_{\partial\Omega} = 0$, that $u = 0$.
Lemma 5.1.3 Suppose $\Omega$ is a bounded domain in $\mathbb{R}^n$ with $\partial\Omega \in C^1$. Then $C^2(\bar\Omega)$ solutions of (5.3) are unique.
Exercise 78 Why does the condition '$\Omega$ is bounded' appear in the three lemmas above?
Exercise 79 Can we prove a similar uniqueness result for the following b.v.p.?
$$\left\{ \begin{array}{ll} u_t = \Delta u & \text{in } \Omega \times \mathbb{R}^+, \\ u(x, 0) = \phi_0(x) & \text{for } x \in \Omega, \\ \frac{\partial}{\partial n} u(x, t) = g(x) & \text{for } (x, t) \in \partial\Omega \times \mathbb{R}^+. \end{array} \right. \tag{5.4}$$
Exercise 80 And for this one?
$$\left\{ \begin{array}{ll} u_{tt} = c^2 \Delta u & \text{in } \Omega \times \mathbb{R}^+, \\ u(x, 0) = \phi_0(x) \text{ and } u_t(x, 0) = \phi_1(x) & \text{for } x \in \Omega, \\ \frac{\partial}{\partial n} u(x, t) = g(x) & \text{for } (x, t) \in \partial\Omega \times \mathbb{R}^+. \end{array} \right. \tag{5.5}$$
Exercise 81 Let $\Omega = \{ (x_1, x_2) ;\ -1 < x_1, x_2 < 1 \}$ and let
$$\Gamma_\ell = \{ (-1, s) ;\ -1 < s < 1 \}, \qquad \Gamma_r = \{ (1, s) ;\ -1 < s < 1 \}.$$
Let $f \in C^\infty(\bar\Omega)$. Which of the following problems has at most one solution in $C^2(\bar\Omega)$? And which one has at least one solution in $W^{2,2}(\Omega)$?
$$A : \begin{cases} -\Delta u = f & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega. \end{cases} \qquad B : \begin{cases} -\Delta u = f & \text{in } \Omega, \\ \frac{\partial}{\partial n} u = 0 & \text{on } \partial\Omega. \end{cases}$$
$$C : \begin{cases} -\Delta u = f & \text{in } \Omega, \\ u = 0 & \text{on } \Gamma_\ell, \\ \frac{\partial}{\partial n} u = 0 & \text{on } \partial\Omega \setminus \Gamma_\ell. \end{cases} \qquad D : \begin{cases} -\Delta u = f & \text{in } \Omega, \\ \frac{\partial}{\partial n} u = 0 & \text{on } \partial\Omega, \\ u(1, 1) = 0. \end{cases}$$
$$E : \begin{cases} -\Delta u = f & \text{in } \Omega \setminus \{0\}, \\ u = 0 & \text{on } \partial\Omega, \\ u(0, 0) = 0. \end{cases} \qquad F : \begin{cases} -\Delta u = f & \text{in } \Omega, \\ u = 0 & \text{on } \Gamma_\ell, \\ \frac{\partial}{\partial n} u = 0 & \text{on } \partial\Omega \setminus \Gamma_r. \end{cases}$$
Exercise 82 For which of the problems in the previous exercise can you find $f \in C^\infty(\bar\Omega)$ such that no solution in $C^2(\bar\Omega)$ exists?
1. Can you find a condition on $\lambda$ such that for the solution $u$ of
$$\left\{ \begin{array}{ll} u_t = \Delta u + \lambda u & \text{in } \Omega \times \mathbb{R}^+, \\ u(x, 0) = \phi_0(x) & \text{for } x \in \Omega, \\ u(x, t) = 0 & \text{for } (x, t) \in \partial\Omega \times \mathbb{R}^+, \end{array} \right. \tag{5.6}$$
the norm $\| u(\cdot, t) \|_{L^2(\Omega)}$ remains bounded for $t \to \infty$?

2. Can you find a condition on $f$ such that for the solution $u$ of
$$\left\{ \begin{array}{ll} u_t = \Delta u + f(u) & \text{in } \Omega \times \mathbb{R}^+, \\ u(x, 0) = \phi_0(x) & \text{for } x \in \Omega, \\ u(x, t) = 0 & \text{for } (x, t) \in \partial\Omega \times \mathbb{R}^+, \end{array} \right. \tag{5.7}$$
the norm $\| u(\cdot, t) \|_{L^2(\Omega)}$ remains bounded for $t \to \infty$?
Exercise 83 Here is a model for a string with some damping:
$$\left\{ \begin{array}{ll} u_{tt} + a u_t = c^2 u_{xx} & \text{in } (0, \ell) \times \mathbb{R}^+, \\ u(x, 0) = \phi(x) & \text{for } 0 < x < \ell, \\ u_t(x, 0) = 0 & \text{for } 0 < x < \ell, \\ u(x, t) = 0 & \text{for } x \in \{0, \ell\} \text{ and } t \geq 0. \end{array} \right. \tag{5.8}$$
Find a bound like
$$\left\| u_x^2(\cdot, t) \right\|_{L^2} \leq M e^{-bt}.$$
Exercise 84 A more realistic model is
$$\left\{ \begin{array}{ll} u_{tt} + a\, |u_t|\, u_t = c^2 u_{xx} & \text{in } (0, \ell) \times \mathbb{R}^+, \\ u(x, 0) = \phi(x) & \text{for } 0 < x < \ell, \\ u_t(x, 0) = 0 & \text{for } 0 < x < \ell, \\ u(x, t) = 0 & \text{for } x \in \{0, \ell\} \text{ and } t \geq 0. \end{array} \right. \tag{5.9}$$
Can you show that $u(\cdot, t) \to 0$ for $t \to \infty$ in some norm?
5.2 Maximum principles

In this section we will consider second order linear partial differential equations in $\mathbb{R}^n$. The corresponding differential operators are as follows:
$$L = \sum_{i,j=1}^n a_{ij}(x) \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} + \sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i} + c(x) \tag{5.10}$$
with $a_{ij}, b_i, c \in C(\bar\Omega)$. Without loss of generality we may assume that $a_{ij} = a_{ji}$.
In order to restrict ourselves to parabolic and elliptic cases with the right sign for stating the results below, we will assume a sign for the operator.

Definition 5.2.1 We will call $L$ parabolic-elliptic in $x$ if
$$\sum_{i,j=1}^n a_{ij}(x)\, \xi_i \xi_j \geq 0 \quad \text{for all } \xi \in \mathbb{R}^n. \tag{5.11}$$
We will call $L$ elliptic in $x$ if there is $\lambda > 0$ such that
$$\sum_{i,j=1}^n a_{ij}(x)\, \xi_i \xi_j \geq \lambda |\xi|^2 \quad \text{for all } \xi \in \mathbb{R}^n. \tag{5.12}$$
Definition 5.2.2 For $\Omega \subset \mathbb{R}^n$ and $L$ as in (5.10)-(5.11) we will define the parabolic boundary $\partial_L \Omega$ as follows: $x \in \partial_L \Omega$ if $x \in \partial\Omega$ and either

1. $\partial\Omega$ is not $C^1$ locally near $x$, or

2. $\sum_{i,j=1}^n a_{ij}(x)\, \nu_i \nu_j > 0$ for the outside normal $\nu$ at $x$, or

3. $\sum_{i,j=1}^n a_{ij}(x)\, \nu_i \nu_j = 0$ and $\sum_{i=1}^n b_i(x)\, \nu_i \geq 0$ for the outside normal $\nu$ at $x$.
Lemma 5.2.3 Let $L$ be parabolic-elliptic on $\bar\Omega$ with $\Omega \subset \mathbb{R}^n$ and $\Gamma \subset \partial\Omega \setminus \partial_L \Omega$ with $\Gamma \in C^1$. Suppose that $u \in C^2(\Omega \cup \Gamma)$, that $Lu > 0$ in $\Omega \cup \Gamma$ and that $c \leq 0$ on $\Omega \cup \Gamma$. Then $u$ cannot attain a nonnegative maximum in $\Omega \cup \Gamma$.
Example 5.2.4 For the heat equation with source term $u_t - \Delta u = f$ on $\Omega = \tilde\Omega \times (0, T) \subset \mathbb{R}^{n+1}$ the nonparabolic boundary is $\tilde\Omega \times \{T\}$. Indeed, rewriting to $\Delta u - u_t$ we find part 2 of Definition 5.2.2 satisfied on $\partial\tilde\Omega \times (0, T)$ and part 3 of Definition 5.2.2 satisfied on $\tilde\Omega \times \{0\}$. In $(x, t)$-coordinates $b = (0, \dots, 0, -1)$; one finds that $a_{ij} = \delta_{ij}$ for all $(i, j) \neq (n+1, n+1)$ and $a_{n+1,n+1} = 0$. So
$$\sum_{i,j=1}^{n+1} a_{ij}(x)\, \nu_i \nu_j = \sum_{i,j=1}^{n} \delta_{ij}\, 0^2 + 0 \cdot 1^2 = 0 \quad \text{and} \quad b \cdot \nu = -1 \ \text{ on } \tilde\Omega \times \{T\}.$$
So, if $u_t - \Delta u > 0$ then $u$ cannot attain a nonpositive minimum in $\tilde\Omega \times (0, T]$ for every $T > 0$. In other words, if $u$ satisfies
$$\left\{ \begin{array}{ll} u_t - \Delta u > 0 & \text{in } \tilde\Omega \times \mathbb{R}^+, \\ u(x, 0) = \phi_0(x) \geq 0 & \text{for } x \in \tilde\Omega, \\ u(x, t) = g(x, t) \geq 0 & \text{for } (x, t) \in \partial\tilde\Omega \times \mathbb{R}^+, \end{array} \right.$$
then $u \geq 0$ in $\bar{\tilde\Omega} \times \mathbb{R}^+_0$.
Proof. Suppose that $u$ does attain a nonnegative maximum in $\Omega \cup \Gamma$, say in $x_0$. If $x_0 \in \Omega$ then $\nabla u(x_0) = 0$. If $x_0 \in \Gamma$ then by assumption $b \cdot \nu \leq 0$ and $\nabla u(x_0) = \gamma \nu$ for some nonnegative constant $\gamma$. So in both cases
$$b(x_0) \cdot \nabla u(x_0) + c(x_0)\, u(x_0) \leq 0.$$
We find that
$$\sum_{i,j=1}^n a_{ij}(x_0) \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x_0) \geq L u(x_0) > 0.$$
As before we may diagonalize $(a_{ij}(x_0)) = T^t D T$ and find in the new coordinates $y = Tx$ with $U(y) = u(x)$ that
$$\sum_{i=1}^n d_{ii}^2 \frac{\partial^2}{\partial y_i^2} U(T x_0) > 0$$
with $d_{ii} \geq 0$ by our assumption in (5.11).

If $x_0 \in \Omega$ then in a maximum $\frac{\partial^2}{\partial y_i^2} U(T x_0) \leq 0$ for all $i \in \{1, \dots, n\}$ and we find
$$\sum_{i=1}^n d_{ii}^2 \frac{\partial^2}{\partial y_i^2} U(T x_0) \leq 0, \tag{5.13}$$
a contradiction.

If $x_0 \in \Gamma$ then $\nu$ is an eigenvector of $(a_{ij}(x_0))$ with eigenvalue zero, and hence we may use it as our first new basis element, one for which we have $d_{11} = 0$. In a maximum we still have $\frac{\partial^2}{\partial y_i^2} U(T x_0) \leq 0$ for all $i \in \{2, \dots, n\}$. Our proof is saved since the first component is killed through $d_{11} = 0$ and we still find the contradiction by (5.13).
In the remainder we will restrict ourselves to the strictly elliptic case.

Definition 5.2.5 The operator $L$ as in (5.10) is called strictly elliptic on $\Omega$ if there is $\lambda > 0$ such that
$$\sum_{i,j=1}^n a_{ij}(x)\, \xi_i \xi_j \geq \lambda |\xi|^2 \quad \text{for all } x \in \Omega \text{ and } \xi \in \mathbb{R}^n. \tag{5.14}$$
Exercise 85 Suppose that $\Omega \subset \mathbb{R}^n$ is bounded. Show that if $L$ as in (5.10) is elliptic for all $x \in \bar\Omega$ then $L$ is strictly elliptic on $\Omega$ as in (5.14). Also give an example of an elliptic $L$ that is not strictly elliptic.

For $u \in C(\bar\Omega)$ we define $u^+, u^- \in C(\bar\Omega)$ by
$$u^+(x) = \max[0, u(x)] \quad \text{and} \quad u^-(x) = \max[0, -u(x)].$$
Theorem 5.2.6 (Weak Maximum Principle) Let $\Omega \subset \mathbb{R}^n$ be a bounded domain and suppose that $L$ is strictly elliptic on $\Omega$ with $c \leq 0$. If $u \in C^2(\Omega) \cap C(\bar\Omega)$ and $Lu \geq 0$ in $\Omega$, then the maximum of $u^+$ is attained at the boundary.
Proof. The proof of this theorem relies on a smartly chosen auxiliary function. Supposing that $\Omega \subset B_R(0)$ we consider $w$ defined by
$$w(x) = u(x) + \varepsilon e^{\alpha x_1}$$
with $\varepsilon > 0$. One finds that
$$L w(x) = L u(x) + \varepsilon \left( \alpha^2 a_{11}(x) + \alpha b_1(x) + c(x) \right) e^{\alpha x_1}.$$
Since $a_{11}(x) \geq \lambda$ by the strict ellipticity assumption, and since $b_1, c \in C(\bar\Omega)$ with $\Omega$ bounded, we may choose $\alpha$ such that
$$\alpha^2 a_{11}(x) + \alpha b_1(x) + c(x) > 0.$$
The result follows for $w$ from the previous lemma, and hence
$$\sup_\Omega u \leq \sup_\Omega w \leq \sup_\Omega w^+ \leq \sup_{\partial\Omega} w^+ \leq \sup_{\partial\Omega} u^+ + \varepsilon e^{\alpha R}.$$
Letting $\varepsilon \downarrow 0$ the claim follows.
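The weak maximum principle has a faithful discrete counterpart, and we can watch it numerically. The sketch below (our own illustration, not part of the notes; the plain 5-point Jacobi iteration and the particular data $f = 1$, $g(x,y) = x + y$ are choices made here) approximates a solution of $\Delta u = 1$ (so $Lu \geq 0$ with $L = \Delta$, $c = 0$) on the unit square and checks that the maximum over the closed square is attained on the boundary.

```python
# Jacobi iteration for the 5-point discretization of Delta u = 1 on
# the unit square with boundary data g(x,y) = x + y; the discrete
# maximum principle keeps every interior value below the boundary max.
N = 30                          # grid: (N+1) x (N+1) points, h = 1/N
h = 1.0 / N
g = lambda x, y: x + y
u = [[g(i * h, j * h) if i in (0, N) or j in (0, N) else 0.0
      for j in range(N + 1)] for i in range(N + 1)]

for _ in range(2000):
    new = [row[:] for row in u]
    for i in range(1, N):
        for j in range(1, N):
            # discrete Delta u = 1  =>  u_ij = (sum of neighbours - h^2)/4
            new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1] - h * h)
    u = new

interior_max = max(u[i][j] for i in range(1, N) for j in range(1, N))
boundary_max = max(max(u[0]), max(u[N]),
                   max(row[0] for row in u), max(row[N] for row in u))
print(interior_max, boundary_max)
```

Here the boundary maximum is $g(1,1) = 2$, and every Jacobi iterate keeps the interior strictly below it.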
Exercise 86 State and prove a weak maximum principle for $L = -\frac{\partial}{\partial t} + \Delta$ on $\Omega \times (0, T)$.
Theorem 5.2.7 (Strong Maximum Principle) Let $\Omega \subset \mathbb{R}^n$ be a domain and suppose that $L$ is strictly elliptic on $\Omega$ with $c \leq 0$. If $u \in C^2(\Omega) \cap C(\bar\Omega)$ and $Lu \geq 0$ in $\Omega$, then either

1. $u \equiv \sup\left\{ u(x);\ x \in \bar\Omega \right\}$, or

2. $u$ does not attain a nonnegative maximum in $\Omega$.
For the formulation of a boundary version of the strong maximum principle we need a condition on $\Omega$.

Definition 5.2.8 A domain $\Omega \subset \mathbb{R}^n$ satisfies the interior sphere condition at $x_0 \in \partial\Omega$ if there is an open ball $B$ such that $B \subset \Omega$ and $x_0 \in \partial B$.
Theorem 5.2.9 (Hopf's boundary point lemma) Let $\Omega \subset \mathbb{R}^n$ be a domain that satisfies the interior sphere condition at $x_0 \in \partial\Omega$. Let $L$ be strictly elliptic on $\Omega$ with $c \leq 0$. If $u \in C^2(\Omega) \cap C(\bar\Omega)$ satisfies $Lu \geq 0$ in $\Omega$ and is such that $\max\left\{ u(x);\ x \in \bar\Omega \right\} = u(x_0) \geq 0$, then either

1. $u \equiv u(x_0)$ on $\bar\Omega$, or

2. $\liminf_{t \downarrow 0} \dfrac{u(x_0) - u(x_0 + t\mu)}{t} > 0$ (possibly $+\infty$) for every direction $\mu$ pointing into an interior sphere.
Remark 5.2.10 If $u \in C^1(\Omega \cup \{x_0\})$ then
$$\frac{\partial}{\partial \mu} u(x_0) = -\liminf_{t \downarrow 0} \frac{u(x_0) - u(x_0 + t\mu)}{t} < 0,$$
which means for the outward normal $\nu$
$$\frac{\partial}{\partial \nu} u(x_0) > 0.$$
Exercise 87 Suppose that $\Omega = B_r(x_a) \cup B_r(x_b)$ with $r = 2\, |x_a - x_b|$. Let $L$ be strictly elliptic on $\Omega$ with $c \leq 0$ and assume that $u \in C^2(\Omega) \cap C(\bar\Omega)$ satisfies
$$\begin{cases} Lu \geq 0 & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega. \end{cases}$$
Show that if $u \in C^1(\bar\Omega)$ then $u \equiv 0$ in $\bar\Omega$. Hint: consider some $x_0 \in \partial B_r(x_a) \cap \partial B_r(x_b)$.
Exercise 88 Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and let $L$ be strictly elliptic on $\Omega$ (and we put no sign restriction on $c$). Suppose that there is a function $v \in C^2(\Omega) \cap C(\bar\Omega)$ such that $-Lv \geq 0$ in $\Omega$ and $v > 0$ on $\bar\Omega$. Show that any function $w \in C^2(\Omega) \cap C(\bar\Omega)$ such that
$$\begin{cases} -Lw \geq 0 & \text{in } \Omega, \\ w \geq 0 & \text{on } \partial\Omega, \end{cases}$$
is nonnegative.
Exercise 89 Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and let $L$ be strictly elliptic on $\Omega$ with $c \leq 0$. Let $f$ and $\phi$ be given functions. Show that
$$\begin{cases} -Lu = f & \text{in } \Omega, \\ u = \phi & \text{on } \partial\Omega, \end{cases}$$
has at most one solution in $C^2(\Omega) \cap C(\bar\Omega)$.
5.3 Proof of the strong maximum principle

Set $m = \sup_\Omega u$ and set $\Sigma = \{ x \in \Omega;\ u(x) = m \}$. We have to prove that when $m > 0$ then either $\Sigma$ is empty or $\Sigma = \Omega$. We argue by contradiction and assume that both $\Sigma$ and $\Omega \setminus \Sigma$ are nonempty, say $x_1 \in \Omega \setminus \Sigma$ and $x_2 \in \Sigma$.

I. Constructing an appropriate subdomain. First we will construct a ball in $\Omega$ that touches $\Sigma$ exactly once. Since $u$ is continuous, $\Omega \setminus \Sigma$ is open. Move along the arc from $x_1$ to $x_2$ and there will be a first $x_3$ on this arc that lies in $\Sigma$. Set $s$ the minimum of $|x_1 - x_3|$ and the distance of $x_3$ to $\partial\Omega$. Take $x_4$ on the arc between $x_1$ and $x_3$ such that $|x_3 - x_4| < \frac{1}{2} s$.

Next we take $B_{r_1}(x_4)$ to be the largest ball around $x_4$ that is contained in $\Omega \setminus \Sigma$. Since $|x_3 - x_4| < \frac{1}{2} s$ we find that $r_1 \leq \frac{1}{2} s$ and that there is at least one point of $\Sigma$ on $\partial B_{r_1}(x_4)$. Let us call such a point $x_5$. The final step of the construction is to set $x^* = \frac{1}{2} x_4 + \frac{1}{2} x_5$ and $r = \frac{1}{2} r_1$. The ball $B_r(x^*)$ is contained in $\Omega \setminus \Sigma$ and $\partial B_r(x^*) \cap \Sigma = \{x_5\}$. We will also use the ball $B_{\frac{1}{2} r}(x_5)$. See the pictures below.
II. The auxiliary function. Next we set for $\alpha > 0$ the auxiliary function
$$h(x) = \frac{e^{-\frac{\alpha}{2} |x - x^*|^2} - e^{-\frac{\alpha}{2} r^2}}{1 - e^{-\frac{\alpha}{2} r^2}}.$$
One should notice that this function is tailored in such a way that $h = 0$ for $x \in \partial B_r(x^*)$, positive inside $B_r(x^*)$ and negative outside of $B_r(x^*)$. Moreover $h(x) \leq 1$ on $B_r(x^*)$.

On $B_{\frac{1}{2} r}(x_5)$ one finds, using that $|x - x^*| \in \left( \frac{1}{2} r, \frac{3}{2} r \right)$, that
$$Lh = \frac{\alpha^2 \sum_{i,j=1}^n a_{ij} (x_i - x^*_i)(x_j - x^*_j) - \alpha \sum_{i=1}^n \left( a_{ii} + b_i (x_i - x^*_i) \right) + c}{1 - e^{-\frac{\alpha}{2} r^2}}\, e^{-\frac{\alpha}{2} |x - x^*|^2} - c(x)\, \frac{e^{-\frac{\alpha}{2} r^2}}{1 - e^{-\frac{\alpha}{2} r^2}} \geq$$
$$\geq \left( 4 \alpha^2 \lambda \left( \tfrac{1}{2} r \right)^2 - 2 \alpha \sum_{i=1}^n \left( a_{ii}(x) + |b_i(x)|\, \tfrac{3}{2} r \right) + c \right) \frac{e^{-\frac{\alpha}{2} |x - x^*|^2}}{1 - e^{-\frac{\alpha}{2} r^2}}$$
and by choosing $\alpha$ large enough we find $Lh > 0$ in $B_{\frac{1}{2} r}(x_5)$.
III. Deriving a contradiction. As before we consider $w = u + \varepsilon h$. Now the subdomain that we will use to derive a contradiction is $B_{\frac{1}{2} r}(x_5)$. The boundary of this set consists of two parts:
$$\Gamma_1 = \partial B_{\frac{1}{2} r}(x_5) \cap B_r(x^*) \quad \text{and} \quad \Gamma_2 = \partial B_{\frac{1}{2} r}(x_5) \setminus \Gamma_1.$$
Since $\Gamma_1$ is closed, since $u|_{\Gamma_1} < m$ by construction and since $u$ is continuous we find
$$\sup\{ u(x);\ x \in \Gamma_1 \} = \max\{ u(x);\ x \in \Gamma_1 \} = \tilde m < m.$$
On $\Gamma_2$ we have $h \leq 0$ and hence $w = u + \varepsilon h \leq u \leq m$. Next we choose $\varepsilon > 0$ such that
$$\varepsilon = \tfrac{1}{2} (m - \tilde m).$$
As a consequence we find that
$$\sup\left\{ w(x);\ x \in \partial B_{\frac{1}{2} r}(x_5) \right\} < m.$$
Since $w(x_5) = u(x_5) = m$ and also $Lw > 0$ in $B_{\frac{1}{2} r}(x_5)$ we obtain a contradiction with Theorem 5.2.6.
5.4 Alexandrov's maximum principle

Alexandrov's maximum principle gives a first step towards regularity. Before we are able to state the result we need a definition.

Definition 5.4.1 For $u \in C(\bar\Omega)$ with $\Omega$ a domain in $\mathbb{R}^n$, the upper contact set $\Gamma^+$ is defined by
$$\Gamma^+ = \{ y \in \Omega;\ \exists a_y \in \mathbb{R}^n \text{ such that } \forall x \in \Omega : u(x) \leq u(y) + a_y \cdot (x - y) \}.$$
Next we state a lemma concerning this upper contact set. Here the diameter of $\Omega$ will play a role:
$$\mathrm{diam}(\Omega) = \sup\{ |x - y| ;\ x, y \in \Omega \}.$$

Lemma 5.4.2 Suppose that $\Omega$ is bounded. Let $g \in C(\mathbb{R}^n)$ be nonnegative and $u \in C(\bar\Omega) \cap C^2(\Omega)$. Set
$$M = (\mathrm{diam}(\Omega))^{-1} \left( \sup_\Omega u - \sup_{\partial\Omega} u \right). \tag{5.15}$$
Then
$$\int_{B_M(0) \subset \mathbb{R}^n} g(z)\, dz \leq \int_{\Gamma^+} g(\nabla u(x)) \left| \det\left( \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \right) \right| dx. \tag{5.16}$$
Proof. Let $\Sigma \subset \mathbb{R}^n$ be the following set:
$$\Sigma = \left\{ \nabla u(x);\ x \in \Gamma^+ \right\}.$$
If $\nabla u : \Gamma^+ \to \Sigma$ is a bijection then
$$\int_\Sigma g(z)\, dz = \int_{\Gamma^+} g(\nabla u(x)) \left| \det\left( \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \right) \right| dx$$
is just a consequence of a change of variables. One cannot expect that this equality holds in general, but since the mapping is onto and since $g \geq 0$ one finds
$$\int_\Sigma g(z)\, dz \leq \int_{\Gamma^+} g(\nabla u(x)) \left| \det\left( \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \right) \right| dx.$$
So if we can show that $B_M(0) \subset \Sigma$ we are done. In other words, it is sufficient to show that for every $a \in B_M(0)$ there is $y \in \Gamma^+$ such that $a = \nabla u(y)$. Set
$$\ell(a) = \min_{x \in \bar\Omega} \left( a \cdot x - u(x) \right),$$
which is attained at some $y_a$ since $\bar\Omega$ is closed and bounded and $u$ is continuous. So we have
$$a \cdot y_a - u(y_a) = \ell(a) \ \text{ for some } y_a \in \bar\Omega, \qquad a \cdot x - u(x) \geq \ell(a) \ \text{ for all } x \in \bar\Omega,$$
which implies that
$$u(y_a) \geq u(x) + a \cdot (y_a - x) \quad \text{for all } x \in \bar\Omega. \tag{5.17}$$
By taking $x_0 \in \bar\Omega$ such that $u(x_0) = \max_{x \in \bar\Omega} u(x)$ one obtains
$$u(y_a) \geq u(x_0) + a \cdot (y_a - x_0) = \sup_\Omega u + a \cdot (y_a - x_0) = \sup_{\partial\Omega} u + M\, \mathrm{diam}(\Omega) + a \cdot (y_a - x_0) > \sup_{\partial\Omega} u + M\, \mathrm{diam}(\Omega) - M\, |y_a - x_0| \geq \sup_{\partial\Omega} u.$$
So $y_a \notin \partial\Omega$ and hence $y_a \in \Omega$. Since (5.17) holds and since $u$ is differentiable in $y_a$ we find that $a = \nabla u(y_a)$. By this construction we even have that $y_a \in \Gamma^+$, the upper contact set.
Corollary 5.4.3 Suppose that $\Omega$ is bounded and that $u \in C(\bar\Omega) \cap C^2(\Omega)$. Denote the volume of the unit ball in $\mathbb{R}^n$ by $\omega^*_n$ and set
$$D^*(x) = \left( \det\left( a_{ij}(x) \right) \right)^{1/n}.$$
Then
$$\sup_\Omega u \leq \sup_{\partial\Omega} u + \frac{\mathrm{diam}(\Omega)}{\sqrt[n]{\omega^*_n}} \left\| \frac{-\sum_{i,j=1}^n a_{ij}(\cdot)\, \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(\cdot)}{n\, D^*(\cdot)} \right\|_{L^n(\Gamma^+)}.$$
Remark 5.4.4 Replacing $\Omega$ with $\Omega^+ = \{ x \in \Omega;\ u(x) > 0 \}$ one finds
$$\sup_\Omega u \leq \sup_{\partial\Omega} u^+ + \frac{\mathrm{diam}(\Omega)}{\sqrt[n]{\omega^*_n}} \left\| \frac{-\sum_{i,j=1}^n a_{ij}(\cdot)\, \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(\cdot)}{n\, D^*(\cdot)} \right\|_{L^n(\Gamma^+ \cap \Omega^+)}.$$
Proof. Note that
$$\sum_{i,j=1}^n a_{ij}(x) \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) = \mathrm{trace}(AD)$$
with the matrices $A$ and $D$ defined by
$$A = \left( a_{ij}(x) \right) \quad \text{and} \quad D = \left( \frac{\partial}{\partial x_j} \frac{\partial}{\partial x_i} u(x) \right).$$
Now let us diagonalize the elliptic operator $\sum_{i,j=1}^n a_{ij}(\bar x) \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j}$ by choosing new orthonormal coordinates $y$ at this point $\bar x$ (the new coordinates will be different at each point $\bar x$, but that doesn't matter since for the moment we are just using linear algebra at this fixed point $\bar x$) which fit the eigenvalues of $(a_{ij}(\bar x))$. So $AD = T^t \Lambda \tilde D T$ with diagonal matrices
$$\Lambda = \left( \lambda_i(\bar x) \right) \quad \text{and} \quad \tilde D = \left( \frac{\partial^2}{\partial y_i^2} (u \circ T)(y) \right).$$
Since for $T$ orthonormal the eigenvalues of $AD$ and $\Lambda \tilde D$ coincide, and
$$\mathrm{trace}(AD) = \mathrm{trace}\left( \Lambda \tilde D \right), \qquad \det(AD) = \det\left( \Lambda \tilde D \right),$$
we may consider the at $\bar x$ diagonalized elliptic operator. The ellipticity condition implies $\lambda_i(\bar x) \geq \lambda > 0$. Since we are at a point in the upper contact set $\Gamma^+$ we have $\frac{\partial^2}{\partial y_i^2} u(y) \leq 0$. So on $\Gamma^+$ the matrix $T^t \Lambda \tilde D T$ is nonpositive definite. Writing
$$\mu_i = -\lambda_i(\bar x) \frac{\partial^2}{\partial y_i^2} (u \circ T)(y)$$
for the eigenvalues of $-AD$, and using that the geometric mean is less than the arithmetic mean (for positive numbers), we obtain
$$D^*(x) \det(-D)^{1/n} = \det(-AD)^{1/n} = \left( \prod_{i=1}^n \mu_i \right)^{1/n} \leq \frac{1}{n} \sum_{i=1}^n \mu_i = \frac{1}{n} \mathrm{trace}(-AD).$$
So we may conclude that for every $x \in \Gamma^+$
$$\det\left( -\frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \right) \leq \left( \frac{-\sum_{i,j=1}^n a_{ij}(x)\, \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x)}{n\, D^*(x)} \right)^n. \tag{5.18}$$
Taking $g = 1$ in (5.16) and $M$ as in (5.15) we have
$$\omega^*_n \left( \frac{\sup_\Omega u - \sup_{\partial\Omega} u}{\mathrm{diam}(\Omega)} \right)^n = \int_{B_M(0)} 1\, dz \leq \int_{\Gamma^+} \left| \det\left( \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \right) \right| dx \leq \int_{\Gamma^+} \left( \frac{-\sum_{i,j=1}^n a_{ij}(x)\, \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x)}{n\, D^*(x)} \right)^n dx,$$
which completes the proof.
Theorem 5.4.5 (Alexandrov's Maximum Principle) Let $\Omega \subset \mathbb{R}^n$ be a bounded domain and let $L$ be elliptic as in (5.12) with $c \leq 0$. Suppose that $u \in C^2(\Omega) \cap C(\bar\Omega)$ satisfies $Lu \geq f$ with
$$\frac{|b(\cdot)|}{D^*(\cdot)},\ \frac{f(\cdot)}{D^*(\cdot)} \in L^n(\Omega),$$
and let $\Gamma^+$ denote the upper contact set of $u$. Then one finds
$$\sup_\Omega u \leq \sup_{\partial\Omega} u^+ + C\, \mathrm{diam}(\Omega) \left\| \frac{f^-(\cdot)}{D^*(\cdot)} \right\|_{L^n(\Gamma^+)},$$
where $C$ depends on $n$ and $\left\| \frac{|b(\cdot)|}{D^*(\cdot)} \right\|_{L^n(\Gamma^+)}$.
Proof. If the $b_i$ and $c$ components of $L$ were equal to $0$ then the result would follow from Corollary 5.4.3. Indeed, on $\Gamma^+$
$$0 \leq -\sum_{i,j=1}^n a_{ij}(x) \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \leq -f(x) \leq f^-(x).$$
For nonzero $b_i$ and $c$ we proceed as follows. Since it is sufficient to consider $u > 0$ we may restrict ourselves to $\Omega^+$. Then we have $c(x) u(x) \leq 0$ and hence for any $\mu > 0$:
$$0 \leq -\sum_{i,j=1}^n a_{ij}(x) \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \leq \sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i} u(x) + c(x) u(x) - f(x) \leq \sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i} u(x) + f^-(x) \leq \text{(by Cauchy-Schwarz)}$$
$$\leq |b(x)|\, |\nabla u(x)| \cdot 1 + \mu^{-1} f^-(x)\, \mu \cdot 1 \leq \text{(by Hölder)}$$
$$\leq \left( |b(x)|^n + \left| \mu^{-1} f^-(x) \right|^n \right)^{\frac{1}{n}} \left( |\nabla u(x)|^n + \mu^n \right)^{\frac{1}{n}} (1 + 1)^{\frac{n-2}{n}}.$$
Using Lemma 5.4.2 on $\Omega^+$ and with $g(z) = (|z|^n + \mu^n)^{-1}$ one finds, with
$$\tilde M = (\mathrm{diam}(\Omega))^{-1} \left( \sup_\Omega u - \sup_{\partial\Omega} u^+ \right),$$
that, similarly as before in (5.18),
$$\int_{B_{\tilde M}(0)} \frac{1}{|z|^n + \mu^n}\, dz \leq \int_{\Gamma^+ \cap \Omega^+} \frac{1}{|\nabla u(x)|^n + \mu^n} \left| \det\left( \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x) \right) \right| dx \leq$$
$$\leq \int_{\Gamma^+ \cap \Omega^+} \frac{1}{|\nabla u(x)|^n + \mu^n} \left( \frac{-\sum_{i,j=1}^n a_{ij}(x)\, \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j} u(x)}{n\, D^*(x)} \right)^n dx \leq 2^{n-2} \int_{\Gamma^+ \cap \Omega^+} \left[ \frac{\left( |b(x)|^n + \left| \mu^{-1} f^-(x) \right|^n \right)^{\frac{1}{n}}}{n\, D^*(x)} \right]^n dx \leq$$
$$\leq \frac{2^{n-2}}{n^n} \int_{\Gamma^+ \cap \Omega^+} \frac{|b(x)|^n + \left| \mu^{-1} f^-(x) \right|^n}{D^*(x)^n}\, dx = \frac{2^{n-2}}{n^n} \left( \left\| \frac{|b|}{D^*} \right\|^n_{L^n(\Gamma^+ \cap \Omega^+)} + \mu^{-n} \left\| \frac{f^-}{D^*} \right\|^n_{L^n(\Gamma^+ \cap \Omega^+)} \right).$$
By a direct computation
$$\int_{B_{\tilde M}(0) \subset \mathbb{R}^n} \frac{1}{|z|^n + \mu^n}\, dz = n \omega^*_n \int_0^{\tilde M} \frac{r^{n-1}}{r^n + \mu^n}\, dr = \omega^*_n \log\left( \frac{\tilde M^n + \mu^n}{\mu^n} \right).$$
Choosing $\mu = \| f^- / D^* \|_{L^n(\Gamma^+ \cap \Omega^+)}$ it follows that
$$\omega^*_n \log\left( \left( \frac{\tilde M}{\| f^- / D^* \|_{L^n(\Gamma^+ \cap \Omega^+)}} \right)^n + 1 \right) \leq \frac{2^{n-2}}{n^n} \left( \left\| \frac{|b|}{D^*} \right\|^n_{L^n(\Gamma^+ \cap \Omega^+)} + 1 \right)$$
and hence
$$\left( \frac{\tilde M}{\| f^- / D^* \|_{L^n(\Gamma^+ \cap \Omega^+)}} \right)^n \leq e^{\frac{2^{n-2}}{n^n \omega^*_n} \left( \left\| \frac{|b|}{D^*} \right\|^n_{L^n(\Gamma^+ \cap \Omega^+)} + 1 \right)} - 1,$$
and the claim follows.
5.5 The wave equation

5.5.1 3 space dimensions

This is the differential equation with four variables $x \in \mathbb{R}^3$ and $t \in \mathbb{R}^+$ and the unknown function $u$:
$$u_{tt} = c^2 \Delta u, \tag{5.19}$$
where $\Delta = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2}$.
Believing in Cauchy-Kowalevski, an appropriate initial value problem would be
$$\left\{ \begin{array}{ll} u_{tt} = c^2 \Delta u & \text{in } \mathbb{R}^3 \times \mathbb{R}^+, \\ u(x, 0) = \varphi_0(x) & \text{for } x \in \mathbb{R}^3, \\ u_t(x, 0) = \varphi_1(x) & \text{for } x \in \mathbb{R}^3. \end{array} \right. \tag{5.20}$$
We will start with the special case that $u$ is radially symmetric in $x, y, z$. Under this assumption, and setting $r = \sqrt{x^2 + y^2 + z^2}$, the equation changes into
$$u_{tt} = c^2 \left( u_{rr} + \frac{2}{r} u_r \right).$$
Next we use the substitution
$$U(r, t) = r\, u(r, t),$$
and with some elementary computations,
$$u_{tt} = \frac{1}{r} U_{tt} \quad \text{and} \quad u_{rr} + \frac{2}{r} u_r = \frac{1}{r} U_{rr},$$
we return to the one-dimensional wave equation
$$U_{tt} = c^2 U_{rr}.$$
This equation on $\mathbb{R} \times \mathbb{R}^+_0$ has solutions of the form
$$U(r, t) = \Phi(r - ct) + \Psi(r + ct).$$
Returning to $u$ it means we obtain solutions
$$u(x, t) = \frac{1}{|x|} \Phi(|x| - ct) + \frac{1}{|x|} \Psi(|x| + ct),$$
but this does not look like covering all solutions, and in fact the second term is suspicious. If $\Psi$ is somewhere nonzero, say at $x_0$, then $\frac{1}{|x|} \Psi(|x| + ct)$ is unbounded for $t = c^{-1} |x_0|$ in $x = 0$. So we are just left with $u(x, t) = \frac{1}{|x|} \Phi(|x| - ct)$.
How can we turn this into a solution of (5.20)? Remember that for the heat equation it was sufficient to have the solution for the $\delta$-function in order to find a solution for arbitrary initial values by a convolution. In the present case one might try to consider as this special 'function' (really a distribution)
$$v(x, t) = \frac{1}{|x|} \delta(|x| - ct) = \frac{1}{ct} \delta(|x| - ct).$$
So at time $t$ the influence of an initial 'function' at $x = 0$ distributes itself over a sphere of radius $ct$ around $0$. Or similarly, if we start at $y$: at time $t$ the influence of an initial 'function' at $x = y$ distributes itself over a sphere of radius $ct$ around $y$:
$$v_1(x, t) = \frac{1}{|x - y|} \delta(|x - y| - ct) = \frac{1}{ct} \delta(|x - y| - ct).$$
Combining these 'solutions' for $y \in \mathbb{R}^3$ with density $f(y)$ we obtain
$$u(x, t) = \int_{\mathbb{R}^3} \frac{1}{ct} \delta(|x - y| - ct)\, f(y)\, dy = \frac{1}{ct} \int_{|x - y| = ct} f(y)\, d\sigma_y = \frac{1}{ct} \int_{|z| = ct} f(x + z)\, d\sigma_z = ct \int_{|\omega| = 1} f(x + ct\omega)\, d\omega.$$
Looking backwards from $(x, t)$ we should see an average of the initial function at positions at distance $ct$ from $x$.

On the left the cones that represent the influence of the initial data; on the right the cone representing the dependence on the initial data.
Now we have to find out what $u(x, 0)$ and $u_t(x, 0)$ are. For $f \in C^\infty_0(\mathbb{R}^3)$ one gets
$$u(x, 0) = \lim_{t \downarrow 0}\, ct \int_{|\omega| = 1} f(x + ct\omega)\, d\omega = 0$$
and
$$u_t(x, 0) = \lim_{t \downarrow 0} \frac{\partial}{\partial t} \left( ct \int_{|\omega| = 1} f(x + ct\omega)\, d\omega \right) = \lim_{t \downarrow 0} \left( c \int_{|\omega| = 1} f(x + ct\omega)\, d\omega + c^2 t \int_{|\omega| = 1} \nabla f(x + ct\omega) \cdot \omega\, d\omega \right) = 4\pi c\, f(x).$$
So there is a possible solution of (5.20) for $\varphi_0 = 0$, namely
$$\tilde u(\varphi_1; x, t) = \frac{1}{4\pi c^2 t} \int_{|x - y| = ct} \varphi_1(y)\, d\sigma. \tag{5.21}$$
Since we do have a solution with $u(x, 0) = 0$ and $u_t(x, 0) = \varphi_1(x)$, it could be worth a try to consider the derivative with respect to $t$ of this $\tilde u$ in (5.21), with $\varphi_1$ replaced by $\varphi_0$, in order to solve the first initial condition. So $v(x, t) = \frac{\partial}{\partial t} \tilde u(\varphi_0; x, t)$ will satisfy the differential equation and the first initial condition, that is, $v(x, 0) = \varphi_0(x)$. We still have to see about $v_t(x, 0)$. First let us write out $v(x, t)$:
$$v(x, t) = \frac{\partial}{\partial t} \tilde u(\varphi_0; x, t) = \frac{\partial}{\partial t} \left( \frac{1}{4\pi c^2 t} \int_{|x - y| = ct} \varphi_0(y)\, d\sigma \right) = \frac{\partial}{\partial t} \left( \frac{t}{4\pi} \int_{|\omega| = 1} \varphi_0(x + ct\omega)\, d\omega \right) =$$
$$= \frac{1}{4\pi} \int_{|\omega| = 1} \varphi_0(x + ct\omega)\, d\omega + \frac{ct}{4\pi} \int_{|\omega| = 1} \nabla \varphi_0(x + ct\omega) \cdot \omega\, d\omega =$$
$$= \frac{1}{4\pi c^2 t^2} \int_{|x - y| = ct} \varphi_0(y)\, d\sigma + \frac{1}{4\pi c^2 t^2} \int_{|x - y| = ct} \nabla \varphi_0(y) \cdot (y - x)\, d\sigma.$$
We find that
$$v_t(x, t) = \frac{\partial}{\partial t} \left( \frac{1}{4\pi} \int_{|\omega| = 1} \varphi_0(x + ct\omega)\, d\omega + \frac{ct}{4\pi} \int_{|\omega| = 1} \nabla \varphi_0(x + ct\omega) \cdot \omega\, d\omega \right) =$$
$$= \frac{2c}{4\pi} \int_{|\omega| = 1} \nabla \varphi_0(x + ct\omega) \cdot \omega\, d\omega + \frac{c^2 t}{4\pi} \int_{|\omega| = 1} \sum_{i,j=1}^3 \omega_i \omega_j \frac{\partial^2}{\partial x_i \partial x_j} \varphi_0(x + ct\omega)\, d\omega,$$
and since
$$\lim_{t \downarrow 0} \frac{2c}{4\pi} \int_{|\omega| = 1} \nabla \varphi_0(x + ct\omega) \cdot \omega\, d\omega = \frac{2c}{4\pi} \int_{|\omega| = 1} \nabla \varphi_0(x) \cdot \omega\, d\omega = 0,$$
we have
$$\lim_{t \downarrow 0} v_t(x, t) = 0.$$
Theorem 5.5.1 Suppose that $\varphi_0 \in C^3(\mathbb{R}^3)$ and $\varphi_1 \in C^2(\mathbb{R}^3)$. The Cauchy problem in (5.20) has a unique solution $u \in C^2\!\left( \mathbb{R}^3 \times \mathbb{R}^+ \right)$ and this solution is given by
$$u(x, t) = \frac{1}{4\pi c^2 t} \int_{|x - y| = ct} \varphi_1(y)\, d\sigma + \frac{\partial}{\partial t} \left( \frac{1}{4\pi c^2 t} \int_{|x - y| = ct} \varphi_0(y)\, d\sigma \right).$$
This expression is known as Poisson's solution.
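Poisson's solution can be checked numerically through the spherical mean it is built from. In the sketch below (our own illustration, not from the notes; $c = 1$, the quadrature rule, and the particular Gaussian datum are assumptions made here) we take $\varphi_0 = 0$ and the radial $\varphi_1(y) = e^{-|y|^2}$, for which the mean over the sphere $|x - y| = t$ has the closed form $e^{-(|x|^2 + t^2)}\, \sinh(2t|x|)/(2t|x|)$, so that $u(x,t)$ equals $t$ times that mean.

```python
# Compare a midpoint quadrature of the spherical mean in Poisson's
# solution (c = 1, phi_0 = 0, phi_1(y) = exp(-|y|^2)) with the known
# closed form for the spherical mean of a Gaussian.
import math

def spherical_mean(center_z, R, f, n_theta=400, n_phi=8):
    # mean of f over the sphere of radius R around (0, 0, center_z);
    # for a radial f and a center on the z-axis the integrand does not
    # depend on phi, so few phi points suffice
    total = 0.0
    for k in range(n_theta):
        theta = (k + 0.5) * math.pi / n_theta
        for l in range(n_phi):
            phi = (l + 0.5) * 2.0 * math.pi / n_phi
            y = (R * math.sin(theta) * math.cos(phi),
                 R * math.sin(theta) * math.sin(phi),
                 center_z + R * math.cos(theta))
            total += f(y) * math.sin(theta)
    return total * (math.pi / n_theta) * (2.0 * math.pi / n_phi) / (4.0 * math.pi)

phi1 = lambda y: math.exp(-(y[0] ** 2 + y[1] ** 2 + y[2] ** 2))
a, t = 0.3, 1.5                          # evaluate at x = (0, 0, a), time t
u_quad = t * spherical_mean(a, t, phi1)
u_exact = t * math.exp(-(a * a + t * t)) * math.sinh(2 * t * a) / (2 * t * a)
print(u_quad, u_exact)
```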
Exercise 90 Let $u \in C^2(\bar B)$ with $B = \left\{ x \in \mathbb{R}^3 ;\ |x| \leq 1 \right\}$ and define $\bar u$ as follows:
$$\bar u(x) = \frac{1}{4\pi |x|^2} \int_{|y| = |x|} u(y)\, dy.$$
This $\bar u$ is called the spherical average of $u$. Show that:
$$\Delta \bar u(x) = \frac{1}{4\pi |x|^2} \int_{|y| = |x|} \Delta u(y)\, dy.$$
Hint: use spherical coordinates $x = (r \cos\varphi \sin\theta, r \sin\varphi \sin\theta, r \cos\theta)$. In these coordinates:
$$\Delta = r^{-2} \frac{\partial}{\partial r} r^2 \frac{\partial}{\partial r} + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta} \sin\theta \frac{\partial}{\partial \theta} + \frac{1}{r^2 \sin^2\theta} \frac{\partial^2}{\partial \varphi^2}.$$
Proof. To give a direct proof that the formula indeed gives a solution of the differential equation is quite tedious. Some key ingredients are Green's formula applied to the fundamental solution:
$$-4\pi f(x) = \int_{|z| < R} \frac{1}{|z|} \Delta_z f(x + z)\, dz - \frac{1}{R} \int_{|z| = R} \frac{\partial}{\partial r} f(x + z)\, d\sigma - \frac{1}{R^2} \int_{|z| = R} f(x + z)\, d\sigma,$$
the observation that
$$\frac{\partial}{\partial t} \int_{|z| < ct} v(z)\, dz = c \int_{|z| = ct} v(z)\, d\sigma,$$
and a version of the fundamental theorem of calculus on balls in $\mathbb{R}^3$:
$$\int_{|z| < R} \frac{1}{|z|^2} \frac{\partial}{\partial |z|} g(z)\, dz = \int_{|\omega| = 1} \int_{r=0}^{R} \frac{1}{r^2} \left( \frac{\partial}{\partial r} g(r\omega) \right) r^2\, dr\, d\omega = \int_{|\omega| = 1} \left( g(R\omega) - g(0) \right) d\omega = \int_{|z| = R} \frac{1}{|z|^2} \left( g(z) - g(0) \right) d\sigma_z.$$
Here we go for $u(x, t) = \frac{1}{t} \int_{|x - y| = ct} f(y)\, d\sigma$:
$$c^2 \Delta u(x, t) = \frac{c^2}{t} \int_{|z| = ct} \Delta_x f(x + z)\, d\sigma_z = c^3 \int_{|z| = ct} \frac{1}{|z|} \Delta_z f(x + z)\, d\sigma_z =$$
$$= c^2 \frac{\partial}{\partial t} \int_{|z| < ct} \frac{1}{|z|} \Delta_z f(x + z)\, dz =$$
$$= c^2 \frac{\partial}{\partial t} \left( -4\pi f(x) + \frac{1}{ct} \int_{|z| = ct} \frac{\partial}{\partial |z|} f(x + z)\, d\sigma + \frac{1}{c^2 t^2} \int_{|z| = ct} f(x + z)\, d\sigma \right) =$$
$$= c^2 \frac{\partial}{\partial t} \left( \int_{|z| = ct} \frac{1}{|z|} \frac{\partial}{\partial |z|} f(x + z)\, d\sigma + \int_{|z| = ct} \frac{1}{|z|^2} f(x + z)\, d\sigma \right) =$$
$$= c^2 \frac{\partial}{\partial t} \left( \int_{|z| = ct} \frac{1}{|z|^2} \frac{\partial}{\partial |z|} \left( |z|\, f(x + z) \right) d\sigma \right) =$$
$$= c \frac{\partial^2}{\partial t^2} \left( \int_{|z| < ct} \frac{1}{|z|^2} \frac{\partial}{\partial |z|} \left( |z|\, f(x + z) \right) dz \right) =$$
$$= c \frac{\partial^2}{\partial t^2} \left( \int_{|z| = ct} \frac{1}{|z|} f(x + z)\, d\sigma \right) = c \frac{\partial^2}{\partial t^2} \left( \frac{1}{ct} \int_{|z| = ct} f(x + z)\, d\sigma \right) = \frac{\partial^2}{\partial t^2} u(x, t).$$
Indeed it solves $\frac{\partial^2}{\partial t^2} u(x, t) = c^2 \Delta u(x, t)$. The initial conditions we have already checked above.
It remains to verify the uniqueness. Suppose that $u_1$ and $u_2$ are two different solutions of (5.20). Set $v = u_1 - u_2$ and consider the average of $v$ over the sphere of radius $r$ around some $x$:
$$\bar v(r, t) = \frac{1}{4\pi r^2} \int_{|y - x| = r} v(y, t)\, dy.$$
Since
$$\frac{\partial^2}{\partial r^2} \left( r \bar v(r, t) \right) = r \frac{\partial^2}{\partial r^2} \bar v(r, t) + 2 \frac{\partial}{\partial r} \bar v(r, t) = r \Delta \bar v(r, t),$$
we may, by taking the average as in the last exercise, conclude that
$$\frac{\partial^2}{\partial r^2} \left( r \bar v(r, t) \right) = \frac{r}{4\pi r^2} \int_{|y - x| = r} \Delta v(y, t)\, dy = \frac{r}{4\pi c^2 r^2} \int_{|y - x| = r} v_{tt}(y, t)\, dy = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \left( r \bar v(r, t) \right).$$
Since $v(\cdot, 0)$ and $v_t(\cdot, 0)$ are both $0$, also $r \bar v(r, 0)$ and $r \bar v_t(r, 0)$ are zero. Moreover, for all $t \geq 0$ one finds $\left( r \bar v(r, t) \right)_{r = 0} = 0$. So we may use the version of d'Alembert's formula we derived for $(x, t) \in \mathbb{R}^+ \times \mathbb{R}^+$ in Exercise 69. We find that $r \bar v(r, t) = 0$. So the averages of $u_1(\cdot, t)$ and $u_2(\cdot, t)$ are equal over each sphere in $\mathbb{R}^3$ for all $t > 0$. This contradicts the assumption that $u_1 \neq u_2$.
Exercise 91 Writing out Poisson's solution one obtains Kirchhoff's formula. From the cited books (see the bibliography) the following versions of Kirchhoff's formula have been copied:

1. $u(x, t) = \dfrac{1}{c^2} \fint_{\partial B(x, t)} \left[\, t h(y) + g(y) + D g(y) \cdot (y - x) \,\right] dS(y)$, where the barred integral denotes the average:
$$\fint_A v(y)\, dy = \left( \int_A 1\, dy \right)^{-1} \int_A v(y)\, dy.$$

2. $u(x, t) = \dfrac{1}{4\pi c^2 t^2} \displaystyle\int_{|x - y| = ct} \left[\, t \psi(y) + \varphi(y) + \nabla \varphi \cdot (x - y) \,\right] d\sigma$

3. $u(x, t) = \dfrac{1}{4\pi c^2 t^2} \displaystyle\int_{|x - y| = ct} \left[\, t g(y) + f(y) + \sum_i f_{y_i}(y)(y_i - x_i) \,\right] d\sigma$

4. $u(x, t) = \dfrac{1}{4\pi} \displaystyle\iint_S \left[ \frac{\partial}{\partial n} \left( \frac{1}{r} \right) - \frac{1}{r} \frac{\partial u_1}{\partial n} - \frac{2}{c r} \frac{\partial r}{\partial n} \frac{\partial u_1}{\partial t_1} \right]_{t_1 = 0} dS + \dfrac{1}{4\pi} \displaystyle\iiint_\Omega \frac{1}{c^2 r} F\!\left( y, t - \frac{r}{c} \right) dy$

5. $u(x, t) = \dfrac{\partial}{\partial t}\, t\, \omega[f; x, t] + t\, \omega[g; x, t]$, where the following expression is called Kirchhoff's solution:
$$t\, \omega[g; x, t] = \frac{t}{4\pi} \int_0^{2\pi} \!\! \int_0^\pi g(x_1 + t \sin\theta \cos\phi,\ x_2 + t \sin\theta \sin\phi,\ x_3 + t \cos\theta) \sin\theta\, d\theta\, d\phi.$$

Explain which boundary value problem each of these $u$ solves (if it does) and what the appearing symbols mean.
5.5.2 2 space dimensions

In 2 dimensions no transformation to a 1 space dimensional problem exists as in 3 dimensions. However, having a solution in 3 space dimensions, one could just use that formula for initial values that do not depend on $x_3$. The solution that comes out should not depend on the $x_3$ variable and hence
$$u_{tt} = c^2 \left( u_{x_1 x_1} + u_{x_2 x_2} + u_{x_3 x_3} \right) = c^2 \left( u_{x_1 x_1} + u_{x_2 x_2} \right).$$
Borrowing the formula from Theorem 5.5.1 we obtain, with $y = (y_1, y_2)$,
$$\frac{1}{4\pi c^2 t} \int_{|(x, 0) - (y, y_3)| = ct} \varphi_1(y_1, y_2)\, d\sigma = \frac{2}{4\pi c^2 t} \int_{|x - y| \leq ct} \varphi_1(y_1, y_2)\, \frac{ct}{\sqrt{c^2 t^2 - (x_1 - y_1)^2 - (x_2 - y_2)^2}}\, dy_1\, dy_2 = \frac{1}{2\pi c} \int_{|x - y| \leq ct} \frac{\varphi_1(y)}{\sqrt{c^2 t^2 - |x - y|^2}}\, dy.$$
So we may conclude that
$$\left\{ \begin{array}{ll} u_{tt} = c^2 \Delta u & \text{in } \mathbb{R}^2 \times \mathbb{R}^+, \\ u(x, 0) = \phi_0(x) & \text{for } x \in \mathbb{R}^2, \\ u_t(x, 0) = \phi_1(x) & \text{for } x \in \mathbb{R}^2, \end{array} \right. \tag{5.22}$$
can be solved uniquely. This idea, to go down from a known solution method in dimension $n$ in order to find a solution in dimension $n - 1$, is called the method of descent.
Theorem 5.5.2 Suppose that $\varphi_0 \in C^3(\mathbb{R}^2)$ and $\varphi_1 \in C^2(\mathbb{R}^2)$. The Cauchy problem in (5.22) has a unique solution $u \in C^2\!\left( \mathbb{R}^2 \times \mathbb{R}^+ \right)$ and this solution is given by
$$u(x, t) = \frac{1}{2\pi c} \int_{|x - y| \leq ct} \frac{\varphi_1(y)}{\sqrt{c^2 t^2 - |x - y|^2}}\, dy + \frac{\partial}{\partial t} \left[ \frac{1}{2\pi c} \int_{|x - y| \leq ct} \frac{\varphi_0(y)}{\sqrt{c^2 t^2 - |x - y|^2}}\, dy \right].$$
This expression is known as Poisson's solution in 2 space dimensions.

As a result of this 'spreading out', higher frequencies die out faster than lower frequencies in 2 dimensions. Flatlanders should be baritones.
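The tail left behind by a 2-dimensional wave can be seen directly from this formula. In the sketch below (our own illustration, not from the notes; $c = 1$ and the rough datum, the indicator function of the unit disk, are assumptions made here for simplicity), Poisson's 2d formula at $x = 0$ with $\varphi_0 = 0$ reduces in polar coordinates to $u(0, t) = \int_0^1 \rho / \sqrt{t^2 - \rho^2}\, d\rho = t - \sqrt{t^2 - 1}$ for $t > 1$, which never vanishes: the disturbance is still felt at the origin long after the wave front has passed, unlike in 3 dimensions.

```python
# 2d Poisson solution at the origin for phi_1 = indicator of the unit
# disk, c = 1: the disturbance never dies out completely (no Huygens
# principle in 2 space dimensions).
import math

def u_origin(t, n=20000):
    # midpoint rule for int_0^1 rho / sqrt(t^2 - rho^2) drho, for t > 1
    s, h = 0.0, 1.0 / n
    for k in range(n):
        rho = (k + 0.5) * h
        s += rho / math.sqrt(t * t - rho * rho)
    return s * h

t = 3.0
u_num = u_origin(t)
u_exact = t - math.sqrt(t * t - 1.0)   # closed form of the integral
print(u_num, u_exact)
```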
Bibliography
[1] H. Brezis: Analyse Fonctionnelle, Masson, 1983.
[2] E. Coddington and N. Levinson: Theory of Ordinary Differential Equations, McGraw-Hill, 1955.
[3] R. Courant and D. Hilbert: Methods of Mathematical Physics I, Interscience, 1953.
[4] R. Courant and D. Hilbert: Methods of Mathematical Physics II, Interscience, 1961.
[5] L.C. Evans: Partial Differential Equations, AMS, 1998.
[6] B. Dacorogna: Direct Methods in the Calculus of Variations, Springer, 1993.
[7] E. DiBenedetto: Partial Differential Equations, Birkhäuser, 1995.
[8] P.R. Garabedian: Partial Differential Equations, AMS Chelsea Publishing, 1964.
[9] M. Giaquinta and S. Hildebrandt: Calculus of Variations I, Springer, 1996.
[10] M. Giaquinta and S. Hildebrandt: Calculus of Variations II, Springer, 1996.
[11] Fritz John: Partial Differential Equations, 4th edition, Springer, 1981.
[12] S.L. Sobolev: Partial Differential Equations of Mathematical Physics, Pergamon Press, 1964 (republished by Dover in 1989).
[13] W.A. Strauss: Partial Differential Equations, an Introduction, Wiley, 1992.
[14] W. Walter: Gewöhnliche Differentialgleichungen, Springer.
[15] H.F. Weinberger: A First Course in Partial Differential Equations, Blaisdell, 1965 (republished by Dover 1995).
For ordinary diﬀerential equations see [2] or [14].
For derivation of models see [3], [4], [7, Chapter 0], [8].
For models arising in a variational setting see [9] and [10].
Introductory texts in p.d.e.: [13] and [15].
Modern graduate texts in p.d.e.: [5] and [7].
Direct methods in the calculus of variations: [5, Chapter 8] and [6].
For CauchyKowalevski see [11].
For the functional analytic tools see [1].
107
ii
Contents
1 From models to diﬀerential equations 1.1 Laundry on a line . . . . . . . . . . . . . . . . . . . . . . . 1.1.1 A linear model . . . . . . . . . . . . . . . . . . . . 1.1.2 A nonlinear model . . . . . . . . . . . . . . . . . . 1.1.3 Comparing both models . . . . . . . . . . . . . . . 1.2 Flow through area and more 2d . . . . . . . . . . . . . . . 1.3 Problems involving time . . . . . . . . . . . . . . . . . . . 1.3.1 Wave equation . . . . . . . . . . . . . . . . . . . . 1.3.2 Heat equation . . . . . . . . . . . . . . . . . . . . . 1.4 Diﬀerential equations from calculus of variations . . . . . 1.5 Mathematical solutions are not always physically relevant 2 Spaces, Traces and Imbeddings 2.1 Function spaces . . . . . . . . . . . . . . . 2.1.1 Hölder spaces . . . . . . . . . . . . 2.1.2 Sobolev spaces . . . . . . . . . . . 2.2 Restricting and extending . . . . . . . . . 2.3 Traces . . . . . . . . . . . . . . . . . . . . 1,p 2.4 Zero trace and W0 (Ω) . . . . . . . . . . 2.5 Gagliardo, Nirenberg, Sobolev and Morrey 1 1 1 3 4 5 11 11 12 15 19 23 23 23 24 29 34 36 38 43 43 48 52 52 53 55 56 57 58 61 61 63 65
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
. . . . . . .
3.3.1 Ordinary differential equations . . . . . . . . . . . . . 52
3.3.2 Partial differential equations . . . . . . . . . . . . . . 53
3.3.3 The Cauchy problem . . . . . . . . . . . . . . . . . . . . 55
3.4 Characteristics I . . . . . . . . . . . . . . . . . . . . . . 56
3.4.1 First order p.d.e. . . . . . . . . . . . . . . . . . . . . 57
3.4.2 Classification of second order p.d.e. in two dimensions  . 58
3.5 Characteristics II . . . . . . . . . . . . . . . . . . . . . 61
3.5.1 Second order p.d.e. revisited . . . . . . . . . . . . . . 61
3.5.2 Semilinear second order in higher dimensions . . . . . . . 63
3.5.3 Higher order p.d.e. . . . . . . . . . . . . . . . . . . . 65

4 Some old and new solution methods II                           67
4.1 Cauchy-Kowalevski . . . . . . . . . . . . . . . . . . . . . . 67
4.1.1 Statement of the theorem . . . . . . . . . . . . . . . . . 67
4.1.2 Reduction to standard form . . . . . . . . . . . . . . . . 68
4.2 A solution for the 1d heat equation . . . . . . . . . . . . . 71
4.2.1 The heat kernel on R . . . . . . . . . . . . . . . . . . . 71
4.2.2 On bounded intervals by an orthonormal set . . . . . . . . 72
4.3 A solution for the Laplace equation on a square . . . . . . . 76
4.4 Solutions for the 1d wave equation . . . . . . . . . . . . . 78
4.4.1 On unbounded intervals . . . . . . . . . . . . . . . . . . 78
4.4.2 On a bounded interval . . . . . . . . . . . . . . . . . . 79
4.5 A fundamental solution for the Laplace operator . . . . . . . 81

5 Some classics for a unique solution                            85
5.1 Energy methods . . . . . . . . . . . . . . . . . . . . . . . 85
5.2 Maximum principles . . . . . . . . . . . . . . . . . . . . . 89
5.3 Proof of the strong maximum principle . . . . . . . . . . . . 93
5.4 Alexandrov's maximum principle . . . . . . . . . . . . . . . 95
5.5 The wave equation . . . . . . . . . . . . . . . . . . . . . . 100
5.5.1 3 space dimensions . . . . . . . . . . . . . . . . . . . . 100
5.5.2 2 space dimensions . . . . . . . . . . . . . . . . . . . . 105
Week 1

From models to differential equations

1.1 Laundry on a line

1.1.1 A linear model

Consider a rope tied between two positions on the same height and hang a sock on that line at location x₀.

[Figure: the rope with the sock at x₀; the tension forces F₁ and F₂ act along the rope, making angles α and β with the horizontal.]

Then the balance of forces gives the following two equations:

  F₁ cos α = F₂ cos β = c,
  F₁ sin α + F₂ sin β = mg.

We assume that the positive constant c does not depend on the weight hanging on the line but is a given fixed quantity. The g is the gravitation constant and m the mass of the sock. Eliminating Fᵢ we find

  tan α + tan β = mg/c.

We call u the deviation from the horizontal measured upwards, so in the present situation u will be negative. Fixing the ends at (0,0) and (1,0) we find two straight lines connecting (0,0) through (x₀, u(x₀)) to (1,0). Moreover u′(x) = tan β for x > x₀ and u′(x) = −tan α for x < x₀, so u′(x₀⁺) = tan β and u′(x₀⁻) = −tan α.
This leads to

  u′(x₀⁺) − u′(x₀⁻) = mg/c,   (1.1)

and the function u can be written as follows:

  u(x) = −(mg/c) x (1 − x₀)  for x ≤ x₀,
  u(x) = −(mg/c) x₀ (1 − x)  for x > x₀.   (1.2)

Hanging multiple socks on that line, say at position xᵢ a sock of weight mᵢ with i = 1, ..., 35, we find the function u by superposition. Fixing

  G(x,s) = x (1 − s) for x ≤ s,   G(x,s) = s (1 − x) for x > s,   (1.3)

we find

  u(x) = Σᵢ₌₁³⁵ (−mᵢg/c) G(x, xᵢ).

Indeed, in each point xᵢ one recovers (1.1):

  u′(xᵢ⁺) − u′(xᵢ⁻) = −(mᵢg/c) [ ∂G/∂x (x, xᵢ) ]_{x = xᵢ⁻}^{x = xᵢ⁺} = mᵢg/c.

In the next step we will not only hang point masses on the line but even a blanket. This gives a (continuously) distributed force down on this line, say distributed with density ρ(x). We could even allow some upwards pulling force and replace −mᵢg/c by ρ(xᵢ)Δx to find

  u(x) = Σᵢ₌₁ⁿ ρ(xᵢ) Δx G(x, xᵢ).

Letting n → ∞, this sum, a Riemann sum, approximates an integral so that we obtain

  u(x) = ∫₀¹ G(x,s) ρ(s) ds.   (1.4)

We might also consider the balance of forces for the discretized problem. We approximate the distributed force by dividing the line into n units of width Δx and consider the force between xᵢ − ½Δx and xᵢ + ½Δx to be located at xᵢ. Then formula (1.1) gives

  u′(xᵢ + ½Δx) − u′(xᵢ − ½Δx) = −ρ(xᵢ) Δx.

By Taylor, u′(xᵢ ± ½Δx) = u′(xᵢ) ± ½Δx u″(xᵢ) + O(Δx²), so

  Δx u″(xᵢ) + O(Δx²) = −ρ(xᵢ) Δx.

After dividing by Δx and taking limits this results in

  −u″(x) = ρ(x).   (1.5)
Exercise 1 Show that (1.4) solves (1.5) and satisfies the boundary conditions u(0) = u(1) = 0.

Definition 1.1.1 The function G in (1.3) is called a Green function for the boundary value problem

  −u″(x) = ρ(x),   u(0) = u(1) = 0.

1.1.2 A nonlinear model

Consider again a rope tied between two positions on the same height and again hang a sock on that line at location x₀. Now we assume that the tension is constant throughout the rope: we find F₁ = F₂ = c. To balance the forces we assume some sidewards effect of the wind.

[Figure: the same configuration as before, now with tension of fixed size c along the rope.]

Now one has to use that

  sin β = u′(x₀⁺) / √(1 + (u′(x₀⁺))²)   and   sin α = −u′(x₀⁻) / √(1 + (u′(x₀⁻))²),

and the replacement of (1.1) is

  u′(x₀⁺)/√(1 + (u′(x₀⁺))²) − u′(x₀⁻)/√(1 + (u′(x₀⁻))²) = mg/c.

A formula as in (1.2) still holds, but the superposition argument will fail since the problem is nonlinear. So no Green formula as before. Nevertheless we may derive a differential equation as before: proceeding as before, the Taylor expansion yields

  −∂/∂x ( u′(x) / √(1 + (u′(x))²) ) = ρ(x).   (1.6)
1.1.3 Comparing both models

The first derivation results in a linear equation which directly gives a solution formula. It is not that this formula is so pleasant, but at least we find that whenever we can give some meaning to this formula (for example if ρ ∈ L¹) there exists a solution:

Lemma 1.1.2 A function u ∈ C²[0,1] solves

  −u″(x) = f(x) for 0 < x < 1,   u(0) = u(1) = 0,

if and only if

  u(x) = ∫₀¹ G(x,s) f(s) ds

with

  G(x,s) = x (1 − s) for 0 ≤ x ≤ s ≤ 1,   G(x,s) = s (1 − x) for 0 ≤ s < x ≤ 1.

The second derivation results in a nonlinear equation. We no longer see immediately if there exists a solution, that is, a function that satisfies (1.6) and u(0) = u(1) = 0.

Remark 1.1.3 If we don't have a separate weight hanging on the line but are considering a heavy rope that bends due to its own weight we find:

  −u″(x) = c √(1 + (u′(x))²) for 0 < x < 1,
  u(0) = u(1) = 0.

Exercise 2 Suppose that we are considering equation (1.6) with a uniformly distributed force, say ρ(x) = Mg/c. If possible compute the solution. For which M does a solution exist? Hint: use that u is symmetric around ½ and hence that u′(½) = 0.

Here are some graphs of the solutions from the last exercise.

[Figure: graphs of the solutions of Exercise 2 for several values of M.]
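The solution formula of Lemma 1.1.2 is easy to check numerically. The following sketch (plain Python, not part of the original notes; f ≡ 1 is chosen because then the exact solution of −u″ = f with zero boundary values is u(x) = ½x(1−x)) approximates u(x) = ∫₀¹ G(x,s)f(s) ds by a midpoint rule:

```python
# Green function of -u'' = f on (0,1) with u(0) = u(1) = 0, as in (1.3)
def G(x, s):
    return x * (1 - s) if x <= s else s * (1 - x)

def solve(f, n=400):
    """Midpoint-rule approximation of u(x) = int_0^1 G(x,s) f(s) ds."""
    h = 1.0 / n
    xs = [(i + 0.5) * h for i in range(n)]
    return xs, [h * sum(G(x, s) * f(s) for s in xs) for x in xs]

xs, u = solve(lambda s: 1.0)
# exact solution of -u'' = 1, u(0) = u(1) = 0 is u(x) = x(1-x)/2
err = max(abs(ui - x * (1 - x) / 2) for x, ui in zip(xs, u))
```

Here err measures the deviation from the exact solution; it shrinks as n grows, and any integrable density ρ can be used in place of f.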
1.2 Flow through area and more 2d

Consider the flow through some domain Ω in R² according to the velocity field v = (v₁, v₂).

Definition 1.2.1 A domain Ω is defined as an open and connected set. The boundary of the domain is called ∂Ω and its closure Ω̄ = Ω ∪ ∂Ω.

[Figure: the velocity field near a small rectangle with center (x,y) and sides Δx and Δy.]

If we single out a small rectangle with sides of length Δx and Δy with (x,y) in the middle we find:

• flowing out:

  Out = ∫_{y−½Δy}^{y+½Δy} ρ v₁(x + ½Δx, s) ds + ∫_{x−½Δx}^{x+½Δx} ρ v₂(s, y + ½Δy) ds,

• flowing in:

  In = ∫_{y−½Δy}^{y+½Δy} ρ v₁(x − ½Δx, s) ds + ∫_{x−½Δx}^{x+½Δx} ρ v₂(s, y − ½Δy) ds.

Scaling with the size of the rectangle and assuming that v is sufficiently differentiable we obtain

  lim_{Δx,Δy↓0} (Out − In)/(Δx Δy)
    = lim_{Δx,Δy↓0} [ (1/Δy) ∫_{y−½Δy}^{y+½Δy} ρ ( v₁(x+½Δx, s) − v₁(x−½Δx, s) )/Δx ds
        + (1/Δx) ∫_{x−½Δx}^{x+½Δx} ρ ( v₂(s, y+½Δy) − v₂(s, y−½Δy) )/Δy ds ]
    = lim_{Δy↓0} (1/Δy) ∫_{y−½Δy}^{y+½Δy} ρ ∂v₁/∂x (x,s) ds + lim_{Δx↓0} (1/Δx) ∫_{x−½Δx}^{x+½Δx} ρ ∂v₂/∂y (s,y) ds
    = ρ ∂v₁/∂x (x,y) + ρ ∂v₂/∂y (x,y).
If there is no concentration of fluid then the difference of these two quantities should be 0, that is, ∇·v = 0. Here ∇, defined by ∇u = (u_x, u_y), is the gradient, and ∇· the divergence defined by ∇·(v,w) = v_x + w_y. In case of a potential flow we have v = ∇u for some function u and hence we obtain a harmonic function u:

  Δu = ∇·∇u = ∇·v = 0.

The differential operator Δ is called the Laplacian and is defined by Δu = u_xx + u_yy.

A typical problem would be to determine the flow given some boundary data, for example obtained by measurements:

  Δu = 0 in Ω,
  ∂u/∂n = ψ on ∂Ω.   (1.7)

The specific problem in (1.7), by the way, is only solvable under the compatibility condition that the amount flowing in equals the amount going out:

  ∫_{∂Ω} ψ dσ = 0.

This physical condition does also follow 'mathematically' from (1.7). Indeed, by Green's Theorem:

  ∫_{∂Ω} ψ dσ = ∫_{∂Ω} n·∇u dσ = ∫_Ω ∇·∇u dA = ∫_Ω Δu dA = 0.

In case that the potential is given at part of the boundary Γ and there is no flow in or out at ∂Ω∖Γ we find the problem:

  Δu = 0 in Ω,
  u = φ on Γ,   (1.8)
  ∂u/∂n = 0 on ∂Ω∖Γ.

Here n denotes the outer normal. Such a problem could include local sources, think of external injection or extraction of fluid, and that would lead to Δu = f.

Instead of a rope or string showing a deviation due to some force applied, one may consider a two-dimensional membrane. Similarly one may derive the following boundary value problems on a two-dimensional domain Ω.

1. For small forces/deviations we may consider the linear model as a reasonable model:

  −Δu(x,y) = f(x,y) for (x,y) ∈ Ω,
  u(x,y) = 0 for (x,y) ∈ ∂Ω.

For example the deviation u of a soap film with a pressure f applied is modeled by this boundary value problem.

2. For larger deviations, and assuming the tension is uniform throughout the domain, the nonlinear model:

  −∇·( ∇u(x,y) / √(1 + |∇u(x,y)|²) ) = f(x,y) for (x,y) ∈ Ω,
  u(x,y) = 0 for (x,y) ∈ ∂Ω.
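The link between the flux balance and Δu = 0 can be illustrated with a tiny numerical check. The sketch below (plain Python, not from the notes; the harmonic potential u = x² − y² is our arbitrary choice) verifies that for a potential flow v = ∇u with u harmonic, the net outflow through a small rectangle vanishes:

```python
# Potential flow v = grad u with the harmonic potential u = x^2 - y^2
def v(x, y):
    return (2 * x, -2 * y)        # (u_x, u_y)

def outflow(x0, y0, dx, dy):
    """Net outflow through the boundary of [x0,x0+dx] x [y0,y0+dy],
    using the midpoint rule on each of the four sides."""
    xm, ym = x0 + dx / 2, y0 + dy / 2
    return (dy * (v(x0 + dx, ym)[0] - v(x0, ym)[0])
            + dx * (v(xm, y0 + dy)[1] - v(xm, y0)[1]))

# ~ (u_xx + u_yy) * dx * dy, which is 0 since u is harmonic
net = outflow(0.3, 0.7, 1e-3, 1e-3)
```

Replacing u by a non-harmonic potential (say u = x² + y²) makes net proportional to Δu times the area of the rectangle.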
3. In case that we do not consider a membrane that is isotropic (meaning uniform behavior in all directions), for example a stretched piece of cloth where the tension is distributed in the direction of the threads, the following nonlinear model might be better suited:

  −∂/∂x ( u_x(x,y) / √(1 + (u_x(x,y))²) ) − ∂/∂y ( u_y(x,y) / √(1 + (u_y(x,y))²) ) = f(x,y) for (x,y) ∈ Ω,   (1.9)
  u(x,y) = 0 for (x,y) ∈ ∂Ω.

In all of the above cases we have considered so-called zero Dirichlet boundary conditions. Instead one could also force the deviation u at the boundary by describing nonzero values.

Exercise 3 Consider the second problem above with a constant right hand side on Ω = {(x,y); x² + y² < 1}:

  −∇·( ∇u(x,y) / √(1 + |∇u(x,y)|²) ) = M for (x,y) ∈ Ω,
  u(x,y) = 0 for (x,y) ∈ ∂Ω.

This would model a soap film attached on a ring with constant force (blowing onto the soap film might give a good approximation).

Exercise 4 Next we consider a soap film without any exterior force that is spanned between two rings of radius 1 and 2, with the middle one on level −C for some positive constant C and the outer one at level 0:

  −∇·( ∇u(x,y) / √(1 + |∇u(x,y)|²) ) = 0 for 1 < x² + y² < 4,
  u(x,y) = −C for x² + y² = 1,   (1.10)
  u(x,y) = 0 for x² + y² = 4.

Assuming that the solution is radially symmetric, compute the solution. Hint: in polar coordinates (r,φ) with x = r cos φ and y = r sin φ we obtain, by identifying U(r,φ) = u(r cos φ, r sin φ):

  ∂U/∂r = cos φ ∂u/∂x + sin φ ∂u/∂y,   ∂U/∂φ = −r sin φ ∂u/∂x + r cos φ ∂u/∂y,

and hence

  ∂u/∂x = cos φ ∂U/∂r − (sin φ/r) ∂U/∂φ,   ∂u/∂y = sin φ ∂U/∂r + (cos φ/r) ∂U/∂φ.

After some computations one should find, for a radially symmetric function U = U(r), that

  ∇·( ∇u / √(1 + |∇u|²) ) = ∂/∂r ( U_r / √(1 + U_r²) ) + (1/r) U_r / √(1 + U_r²).
Solve this problem assuming that the solution is radial and C is small. What happens if C increases? Can one define a solution that is no longer a function? Hint: think of putting a virtual ring somewhere in between and think of two functions glued together appropriately.

Solution 4 Writing F = U_r / √(1 + U_r²) we have to solve F′ + (1/r)F = 0. This is a separable o.d.e.:

  F′/F = −1/r,

and an integration yields ln F = −ln r + c₁, hence F(r) = c₂/r for some c₂ > 0. Next, solving

  U_r / √(1 + U_r²) = c₂/r

one finds

  U_r = c₂ / √(r² − c₂²).

Hence

  U(r) = c₂ log( r + √(r² − c₂²) ) + c₃.   (1.11)

The boundary conditions give

  0 = U(2) = c₂ log( 2 + √(4 − c₂²) ) + c₃,
  −C = U(1) = c₂ log( 1 + √(1 − c₂²) ) + c₃,

and hence

  U(r) = c₂ log( (r + √(r² − c₂²)) / (2 + √(4 − c₂²)) )   and   c₂ log( (2 + √(4 − c₂²)) / (1 + √(1 − c₂²)) ) = C.

Note that

  c₂ ↦ c₂ log( (2 + √(4 − c₂²)) / (1 + √(1 − c₂²)) )

is an increasing function that maps [0,1] onto [0, log(2 + √3)]. Hence we find a function that solves the boundary value problem for C > 0 if and only if C ≤ log(2 + √3) ≈ 1.31696. For those C there is a unique c₂ ∈ (0,1], and with this c₂ we find c₃ by (1.11).

Let us consider more general solutions by putting in a virtual ring with radius R on level h; necessarily R < 1 and −C < h < 0. To find two smoothly connected functions we are looking for a combination of the two solutions of

  −( ∂/∂r + 1/r )( U_r / √(1 + U_r²) ) = 0 for R < r < 2,   U(2) = 0, U(R) = h, U_r(R) = ∞,

  −( ∂/∂r + 1/r )( Ũ_r / √(1 + Ũ_r²) ) = 0 for R < r < 1,   Ũ(1) = −C, Ũ(R) = h, Ũ_r(R) = −∞.
25 0. We know that Ã ! p r + r2 − c2 2 p U (r) = c2 log 2 + 4 − c2 2 is deﬁned on [c2 .72398. (Use Maple or Mathematica for this. 0 0.5 2 1 0 1 2 The solution that spans the maximal distance between the two rings. The 0. 1] . 9 .25 0. This lack of stability can be formulated in a mathematically sound way but for the moment we will skip that.5 0. Of the two solutions in the last ﬁgure only the left one is physically relevant.82617 for c2 ≈ 0. As a soap ﬁlm the conﬁguration on the right would immediately collapse.) 2 0 1 0 1 2 ´ ¶ is deﬁned on [0. Hence R = c2 and Ã ! c2 p U (R) = c2 log .75 1 2 1 2 2 1 2 0 1 1 0 1 0 1 2 Two solutions for the same conﬁguration of rings.5 1 1. 2 + 4 − c2 2 ˜ U (r) = 2U (R) − U (r) and next ˜ U (1) = 2U (R) − U (1) = 2c2 log Ã Ã ! ! p c2 1 + 1 − c2 2 p p − c2 log 2 + 4 − c2 2 + 4 − c2 2 2 c2 2 ´³ Then by symmetry c2 ´2 ³ ´ = −C = c2 log ³ p p 2 + 4 − c2 1 + 1 − c2 2 2 Again this function c2 7→ c2 log µ ³ 2+ √ 4−c2 2 1+ √ 1−c2 2 maximal C one ﬁnds is ≈ 1.5 0.75 1 2 2 1 2 0 1 0 0.Note that in order to connect smoothly we need a nonexistent derivative at R! We proceed as follows. 2] and Ur (c2 ) = ∞.
Exercise 5 Solve the problem of the previous exercise for the linearized problem:

  −Δu = 0 for 1 < x² + y² < 4,
  u(x,y) = −C for x² + y² = 1,
  u(x,y) = 0 for x² + y² = 4.

Are there any critical C as for the previous problem? Hint: in polar coordinates one finds

  Δ = (∂/∂r)² + (1/r) ∂/∂r + (1/r²) (∂/∂φ)².

Better check this instead of just believing it!

Exercise 6 Find a nonconstant function u defined on Ω̄, with Ω = {(x,y); 0 < x < 1 and 0 < y < 1}, that satisfies

  −∂/∂x ( u_x(x,y) / √(1 + (u_x(x,y))²) ) − ∂/∂y ( u_y(x,y) / √(1 + (u_y(x,y))²) ) = 0,   (1.12)

the differential equation for a piece of cloth as in (1.9), now with f = 0. Give the boundary data u(x,y) = g(x,y) for (x,y) ∈ ∂Ω that your solution satisfies.

Exercise 7 Consider the linear model for a square blanket hanging between two straight rods when the deviation from the horizontal is due to some force f. There is a constant pressure from above. Give the boundary value problem and compute the solution.

Exercise 8 Consider the nonlinear model for a soap film with the following configuration: the film is free to move up and down the vertical front and back walls but is fixed to the rods on the right and left.

[Figure: the soap film between two rods, free along the front and back walls.]
1.3 Problems involving time

1.3.1 Wave equation

We have seen that a force due to tension in a string under some assumption is related to the second order derivative of the deviation from equilibrium. Instead of balancing this force by an exterior source, such a 'force' can be neutralized by a change in the movement of the string. Even if we are considering time dependent deviations from equilibrium u(x,t), the force density due to the tension in the (linearized) string is proportional to u_xx(x,t). If this force (Newton's law: F = ma) implies a vertical movement we find

  c₁ u_xx(x,t) dx = dF = dm u_tt(x,t) = ρ dx u_tt(x,t).

We find the 1-dimensional wave equation:

  u_tt − c² u_xx = 0.

Here c depends on the tension in the string and ρ is the mass density of that string. If we fix the string at its ends, say in 0 and ℓ, and describe both the initial position and the initial velocity in all its points, we arrive at the following system, with c, ℓ > 0 and φ and ψ given functions:

  1d wave equation:    u_tt(x,t) − c² u_xx(x,t) = 0 for 0 < x < ℓ and t > 0,
  boundary condition:  u(0,t) = u(ℓ,t) = 0 for t > 0,
  initial position:    u(x,0) = φ(x) for 0 < x < ℓ,
  initial velocity:    u_t(x,0) = ψ(x) for 0 < x < ℓ.

One may show that this system is well-posed.

Definition 1.3.1 (Hadamard) A system of differential equations with boundary and initial conditions is called well-posed if it satisfies the following properties:

1. It has a solution (existence).
2. There is at most one solution (uniqueness).
3. Small changes in the problem result in small changes in the solution (sensitivity).

The last item is sometimes translated as 'the solution is continuously dependent on the boundary data'. We should remark that the second property may hold locally. Apart from this, these conditions stated as in this definition sound rather soft. For each problem we will study we have to specify in which sense these three properties hold. Other systems that are well-posed (if we give the right specification of these three conditions) are the following.
Example 1.3.2 For Ω a domain in R² (think of the horizontal layout of a swimming pool of sufficient depth; u models the height of the surface of the water):

  2d wave equation:    u_tt(x,t) − c² Δu(x,t) = 0 for x ∈ Ω and t > 0,
  boundary condition:  −∂u/∂n (x,t) = 0 for x ∈ ∂Ω and t > 0,
  initial position:    u(x,0) = φ(x) for x ∈ Ω,
  initial velocity:    u_t(x,0) = ψ(x) for x ∈ Ω.

Here n denotes the outer normal.

Example 1.3.3 For Ω a domain in R³ (think of a room with students listening to some source of noise; u models the deviation of the pressure of the air compared with complete silence):

  inhomogeneous 3d wave equation:  u_tt(x,t) − c² Δu(x,t) = f(x,t) for x ∈ Ω and t > 0,
  boundary condition:  −∂u/∂n (x,t) + α(x) u(x,t) = 0 for x ∈ ∂Ω and t > 0,
  initial position:    u(x,0) = 0 for x ∈ Ω,
  initial velocity:    u_t(x,0) = 0 for x ∈ Ω.

Here n denotes the outer normal and α a positive function defined on ∂Ω. On soft surfaces such as curtains α is large and on hard surfaces such as concrete α is small. The function f is given and usually almost zero except near some location close to the blackboard. The initial conditions being zero represents a teacher's dream.

Exercise 9 Find all functions of the form u(x,t) = α(x)β(t) that satisfy

  u_tt(x,t) − c² u_xx(x,t) = 0 for 0 < x < ℓ and t > 0,
  u(0,t) = u(ℓ,t) = 0 for t > 0.

Exercise 10 Same question for

  u_tt(x,t) + α u(x,t) − c² u_xx(x,t) = 0 for 0 < x < ℓ and t > 0,
  u(0,t) = u(ℓ,t) = 0 for t > 0.

1.3.2 Heat equation

In the differential equation that models a temperature profile, time also plays a role, but it will appear in a different role as for the wave equation. Allow us to give a simple explanation for the one-dimensional heat equation. If we consider a simple three-point model with the left and right point being kept at constant temperature, we expect the middle point to reach the average temperature after due time. In fact one should expect the speed of convergence proportional to the difference with the equilibrium, as in the following graph.

[Figure: the temperature distribution in six consecutive steps.]
For this discrete system with uᵢ for i = 1, 2, 3 the temperature of the middle node should satisfy:

  ∂/∂t u₂(t) = −c ( −u₁ + 2u₂(t) − u₃ ).

If we would consider not just three nodes but say 43, of which we keep the first and the last one at fixed temperature, we obtain

  u₁(t) = T₁,
  ∂/∂t uᵢ(t) = −c ( −uᵢ₋₁(t) + 2uᵢ(t) − uᵢ₊₁(t) ) for i ∈ {2, ..., 42},
  u₄₃(t) = T₂.

Letting the stepsize between nodes go to zero, but adding more nodes to keep the total length fixed, and applying the right scaling, u(iΔx, t) = uᵢ(t) and c = c̃(Δx)⁻², we find for smooth u through

  lim_{Δx↓0} ( −u(x−Δx, t) + 2u(x,t) − u(x+Δx, t) ) / (Δx)² = −u_xx(x,t)

the differential equation

  ∂/∂t u(x,t) = c̃ u_xx(x,t).

For c̃ > 0 this is the 1d heat equation.

Example 1.3.4 Considering a cylinder of radius d and length ℓ which is isolated around, with the bottom in ice and heated from above, one finds the following model which contains a 3d heat equation:

  u_t(x,t) − c² Δu(x,t) = 0 for x₁² + x₂² < d², 0 < x₃ < ℓ and t > 0,
  ∂u/∂n (x,t) = 0 for x₁² + x₂² = d², 0 < x₃ < ℓ and t > 0,
  u(x,t) = 0 for x₁² + x₂² < d², x₃ = 0 and t > 0,
  u(x,t) = φ(x₁,x₂) for x₁² + x₂² < d², x₃ = ℓ and t > 0,
  u(x,0) = 0 for x₁² + x₂² < d², 0 < x₃ < ℓ.

Here n is the outward normal at the boundary, and Δ = (∂/∂x₁)² + (∂/∂x₂)² + (∂/∂x₃)² only contains the derivatives with respect to space.

Until now we have seen several types of boundary conditions. Let us give the names that belong to them.

Definition 1.3.5 For second order ordinary and partial differential equations on a domain Ω the following names are given:

• Dirichlet: u(x) = φ(x) on ∂Ω for a given function φ.
• Neumann: ∂u/∂n (x) = ψ(x) on ∂Ω for a given function ψ.
• Robin: α(x) ∂u/∂n (x) = u(x) + χ(x) on ∂Ω for a given function χ.
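The relaxation of the discrete model towards the linear equilibrium profile between the two fixed end temperatures is easy to observe numerically. A minimal sketch (plain Python, explicit Euler time stepping; the step size, temperatures and iteration count are our arbitrary choices, with dt·c small enough for stability):

```python
# 43-node model: u_1 = T1 and u_43 = T2 fixed, interior nodes follow
# d/dt u_i = -c(-u_{i-1} + 2 u_i - u_{i+1}), integrated by explicit Euler.
n, c, dt = 43, 1.0, 0.2            # dt * c <= 1/2 keeps the scheme stable
T1, T2 = 0.0, 1.0
u = [T1] + [0.0] * (n - 2) + [T2]  # an arbitrary initial profile
for _ in range(20000):
    u = ([T1]
         + [u[i] + dt * c * (u[i - 1] - 2 * u[i] + u[i + 1])
            for i in range(1, n - 1)]
         + [T2])

# the equilibrium is the linear interpolation between T1 and T2
eq = [T1 + (T2 - T1) * i / (n - 1) for i in range(n)]
err = max(abs(a - b) for a, b in zip(u, eq))
```

As predicted in the text, each node moves at a speed proportional to its distance from the average of its neighbours, so err decays exponentially in time.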
Exercise 11 Consider the 1d heat equation for (x,t) ∈ (0,ℓ) × R⁺:

  u_t = c² u_xx.   (1.13)

1. Suppose u(0,t) = u(ℓ,t) = 0 for all t > 0. Compute the only positive functions satisfying these boundary conditions and (1.13) of the form u(x,t) = X(x)T(t).

2. Suppose ∂u/∂n (0,t) = ∂u/∂n (ℓ,t) = 0 for all t > 0. Same question.

3. Suppose α ∂u/∂n (0,t) + u(0,t) = α ∂u/∂n (ℓ,t) + u(ℓ,t) = 0 for all t > 0. Same question again. Thinking of physics: what would be a restriction for the number α?
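For part 1 of Exercise 11 one expects separated solutions of the form u(x,t) = e^{−c²(kπ/ℓ)²t} sin(kπx/ℓ), k ∈ N⁺, up to multiples. The following sketch (plain Python; the parameter values are arbitrary choices of ours) checks by finite differences that such a function satisfies (1.13) and vanishes at both ends:

```python
import math

c, ell, k = 1.3, 2.0, 2   # arbitrary parameter choices

def u(x, t):
    """Separated candidate exp(-c^2 (k pi/ell)^2 t) * sin(k pi x / ell)."""
    return math.exp(-(c * k * math.pi / ell) ** 2 * t) \
        * math.sin(k * math.pi * x / ell)

# compare u_t with c^2 u_xx by central differences at a sample point
x, t, h = 0.7, 0.05, 1e-4
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
resid = abs(u_t - c ** 2 * u_xx)
```

Analytically the residual is exactly zero, since u_t = −c²(kπ/ℓ)² u and u_xx = −(kπ/ℓ)² u; the finite-difference value is small but not zero because of truncation error.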
1.4 Differential equations from calculus of variations

A serious course with the title Calculus of Variations would certainly take more time than just a few hours. Here we will just borrow some techniques in order to derive some systems of differential equations. Instead of trying to find some balance of forces in order to derive a differential equation and the appropriate boundary conditions, one can try to minimize, for example, the energy of the system.

Going back to the laundry on a line from (0,0) to (1,0), one could think of giving a formula for the energy of the system. First the energy due to deviation of the equilibrium:

  E_rope(y) = ∫₀¹ ½ (y′(x))² dx.

Also the potential energy due to the laundry pulling the rope down with force f is present:

  E_pot(y) = −∫₀¹ y(x) f(x) dx.

So the total energy is

  E(y) = ∫₀¹ ( ½ (y′(x))² − y(x) f(x) ) dx.   (1.14)

Assuming that a solution minimizes the energy implies that for any nice function η and number τ it should hold that

  E(y) ≤ E(y + τη).

Nice means that the boundary conditions also hold for the function y + τη and that η has some differentiability properties. If the functional τ ↦ E(y + τη) is differentiable, then y should be a stationary point:

  ∂/∂τ E(y + τη)|_{τ=0} = 0 for all such η.

Definition 1.4.1 The functional J₁(y; η) := ∂/∂τ E(y + τη)|_{τ=0} is called the first variation of E.

Example 1.4.2 For the line with constant tension one has

  E(y) = ∫₀¹ ( √(1 + (y′(x))²) − 1 − y(x) f(x) ) dx.   (1.15)

We find for the E in (1.15) that

  ∂/∂τ E(y + τη)|_{τ=0} = ∫₀¹ ( y′(x) η′(x) / √(1 + (y′(x))²) − η(x) f(x) ) dx.   (1.16)

If the function y is as we want then this quantity should be 0 for all such η. With an integration by parts we find

  0 = [ y′(x) / √(1 + (y′(x))²) η(x) ]₀¹ − ∫₀¹ ( ( y′(x) / √(1 + (y′(x))²) )′ + f(x) ) η(x) dx.
Since y and y + τη are zero in 0 and 1, the boundary terms disappear and we are left with

  ∫₀¹ ( ( y′(x) / √(1 + (y′(x))²) )′ + f(x) ) η(x) dx = 0.

If this holds for all functions η then

  −( y′(x) / √(1 + (y′(x))²) )′ = f(x).

Here we have our familiar differential equation.

Definition 1.4.3 The differential equation that follows for y from

  ∂/∂τ E(y + τη)|_{τ=0} = 0 for all appropriate η   (1.17)

is called the Euler-Lagrange equation for E. The extra boundary conditions for y that follow from (1.17) are called the natural boundary conditions.

Example 1.4.4 Let us consider the minimal surface that connects a ring of radius 2 on level 0 with a ring of radius 1 on level −1. Assuming the solution is a function defined between these two rings, Ω = {(x,y); 1 < x² + y² < 4} is the domain. If u is the height of the surface then the area is

  Area(u) = ∫∫_Ω √(1 + |∇u(x,y)|²) dxdy.

We find

  ∂/∂τ Area(u + τη)|_{τ=0} = ∫∫_Ω ∇u(x,y)·∇η(x,y) / √(1 + |∇u(x,y)|²) dxdy.

Since u and u + τη are both equal to 0 on the outer ring and equal to −1 on the inner ring, it follows that η = 0 on both parts of the boundary. Then by Green

  0 = ∫∫_Ω ∇u·∇η / √(1 + |∇u|²) dxdy
    = ∫_{∂Ω} (∂u/∂n) / √(1 + |∇u|²) η dσ − ∫∫_Ω ∇·( ∇u / √(1 + |∇u|²) ) η dxdy
    = −∫∫_Ω ∇·( ∇u / √(1 + |∇u|²) ) η dxdy.

If this holds for all functions η then a solution u satisfies

  ∇·( ∇u(x,y) / √(1 + |∇u(x,y)|²) ) = 0.   (1.18)
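The minimization point of view can also be exploited computationally: discretize the energy (1.14) and descend along its gradient. Below is a sketch (plain Python; the discretization, step size and iteration count are our choices, not from the notes), with f ≡ 1 so that the minimizer subject to y(0) = y(1) = 0 is y(x) = ½x(1−x):

```python
# Gradient descent on the discretized energy
#   E(y) = sum_i (y_{i+1} - y_i)^2 / (2h) - sum_i h f_i y_i,
# with y_0 = y_n = 0 kept fixed and f = 1.
n = 40
h = 1.0 / n
y = [0.0] * (n + 1)
f = [1.0] * (n + 1)
for _ in range(20000):
    # dE/dy_i = (2 y_i - y_{i-1} - y_{i+1}) / h - h f_i for interior i
    grad = [(2 * y[i] - y[i - 1] - y[i + 1]) / h - h * f[i]
            for i in range(1, n)]
    for i, g in enumerate(grad, start=1):
        y[i] -= 0.1 * h * g        # small step, scaled by h for stability

err = max(abs(y[i] - (i * h) * (1 - i * h) / 2) for i in range(n + 1))
```

Setting the discrete gradient to zero recovers exactly the difference scheme for −y″ = f, so the iteration converges to the quadratic sampled at the nodes.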
Example 1.4.5 Consider a one-dimensional loaded beam of shape y(x) that is fixed at its boundaries by y(0) = y(ℓ) = 0. The energy consists of the deformation energy:

  E_def(y) = ∫₀ℓ ½ (y″(x))² dx,

and the potential energy due to the weight that pushes:

  E_pot(y) = ∫₀ℓ y(x) f(x) dx.

So the total energy E = E_def − E_pot equals

  E(y) = ∫₀ℓ ( ½ (y″(x))² − y(x) f(x) ) dx,

and its first variation is

  ∂/∂τ E(y + τη)|_{τ=0} = ∫₀ℓ ( y″(x) η″(x) − η(x) f(x) ) dx.   (1.19)

Since y and y + τη equal 0 on the boundary, we find that η equals 0 on the boundary. Using this, only part of the boundary terms disappear:

  (1.19) = [ y″(x) η′(x) − y‴(x) η(x) ]₀ℓ + ∫₀ℓ ( y⁗(x) η(x) − η(x) f(x) ) dx
         = [ y″(x) η′(x) ]₀ℓ + ∫₀ℓ ( y⁗(x) − f(x) ) η(x) dx.

If this holds for all functions η with η = 0 at the boundary, then we may first consider functions which also disappear in the first derivative at the boundary and find that y⁗(x) = f(x) for all x ∈ (0,ℓ). Next, by choosing η with η′ ≠ 0 at the boundary, we additionally find that a solution y satisfies y″(0) = y″(ℓ) = 0. So the solution should satisfy:

  the d.e.:          y⁗(x) = f(x),
  prescribed b.c.:   y(0) = y(ℓ) = 0,
  natural b.c.:      y″(0) = y″(ℓ) = 0.

Exercise 12 At most swimming pools there is a possibility to jump in from a springboard. Just standing on it, the energy is, supposing it is a 1d model, as for the loaded beam but with different boundary conditions, namely y(0) = y′(0) = 0 and no prescribed boundary condition at ℓ. Give the corresponding boundary value problem.

Consider the problem

  y⁗(x) = f(x) for 0 < x < 1,
  y(0) = y′(0) = 0,   (1.20)
  y″(1) = y‴(1) = 0.
Exercise 13 Show that the problem (1.20) is well-posed (specify your setting). The solution of (1.20) can be written by means of a Green function:

  u(x) = ∫₀¹ G(x,s) f(s) ds.

Compute this Green function.

Exercise 14 Consider a bridge that is clamped in −1 and 1 and which is supported in 0 (and we assume it doesn't lose contact with the supporting pillar in 0). If u is the deviation, the prescribed boundary conditions are:

  u(−1) = u′(−1) = u(0) = u(1) = u′(1) = 0.

The energy functional is

  E(u) = ∫_{−1}^{1} ( ½ (u″(x))² − f(x) u(x) ) dx.

Compute the differential equation for the minimizing solution and the remaining natural boundary conditions that this function should satisfy. Hint: consider u_ℓ on [−1,0] and u_r on [0,1]. How many conditions are needed?
1.5 Mathematical solutions are not always physically relevant

We have seen that if u is a solution then the first variation necessarily is 0 for all appropriate test functions η:

  ∂/∂τ E(u + τη)|_{τ=0} = 0.

For twice differentiable functions Taylor gives f(a + τb) = f(a) + τ f′(a) + ½ τ² f″(a) + o(τ²). So in order that the energy functional has a minimum it will be necessary that

  (∂/∂τ)² E(u + τη)|_{τ=0} ≥ 0

whenever E is a 'nice' functional.

Definition 1.5.1 The quantity (∂/∂τ)² E(u + τη)|_{τ=0} is called the second variation.

Example 1.5.2 Back to the rings connected by the soap film from Exercise 4. If we believe physics, in the sense that for physically relevant solutions the energy is minimized, we could come back to the two solutions found there. We now do have an energy for that model, at least when we are discussing functions u of (x,y):

  E(u) = ∫∫_Ω √(1 + |∇u(x,y)|²) dxdy.

We were looking for radially symmetric solutions of (1.10). If we restrict ourselves to solutions that are functions of r, then we found

  U(r) = c₂ log( (r + √(r² − c₂²)) / (2 + √(4 − c₂²)) )   (1.21)

if and only if C ≤ log(2 + √3) ≈ 1.31696. Let us call these solutions of type I. If we allowed graphs that are not necessarily represented by one function, then there are two solutions if and only if C ≤ C_max ≈ 1.82617:

  U(r) = c₂ log( (r + √(r² − c₂²)) / (2 + √(4 − c₂²)) )   and
  Ũ(r) = c₂ log( c₂² / ( (2 + √(4 − c₂²)) (r + √(r² − c₂²)) ) ).

Let us call these solutions of type II.
[Figure: on the left type I and on the right type II solutions; the top two or three on the right probably do not exist 'physically'.]

The energy of the type I solution U in (1.21) is (skipping the constant 2π)

  ∫∫_Ω √(1 + |∇U|²) dxdy = ∫₁² √(1 + U_r(r)²) r dr
    = ∫₁² √( 1 + ( c₂ / √(r² − c₂²) )² ) r dr
    = ∫₁² r² / √(r² − c₂²) dr
    = [ ½ r √(r² − c₂²) + ½ c₂² log( r + √(r² − c₂²) ) ]₁²
    = √(4 − c₂²) − ½ √(1 − c₂²) + ½ c₂² log( (2 + √(4 − c₂²)) / (1 + √(1 − c₂²)) ).

The energy of the type II solution equals

  ∫_{c₂}^{2} √(1 + U_r(r)²) r dr + ∫_{c₂}^{1} √(1 + Ũ_r(r)²) r dr
    = √(4 − c₂²) + ½ √(1 − c₂²) + ½ c₂² log( (2 + √(4 − c₂²)) (1 + √(1 − c₂²)) / c₂² ).

Just like C = C(c₂), also E = E(c₂) turns out to be an ugly function. But with the help of Mathematica (or Maple) we may plot these functions.

[Figure: height C and energy E as functions of c₂. On the left: C_I above C_II; on the right: E_I below E_II.]
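The comparison E_I below E_II can be tested for a concrete C. The sketch below (plain Python; the value C = 1 and the bisection brackets and tolerance are our choices) uses the antiderivative F(r) = ½r√(r²−c₂²) + ½c₂² log(r+√(r²−c₂²)) of r²/√(r²−c₂²) appearing in the computation above:

```python
import math

def F(r, c2):
    """Antiderivative of r^2 / sqrt(r^2 - c2^2)."""
    s = math.sqrt(r * r - c2 * c2)
    return 0.5 * r * s + 0.5 * c2 * c2 * math.log(r + s)

def C_I(c2):   # C as function of c2, type I
    return c2 * math.log((2 + math.sqrt(4 - c2**2))
                         / (1 + math.sqrt(1 - c2**2)))

def C_II(c2):  # C as function of c2, type II
    return c2 * math.log((2 + math.sqrt(4 - c2**2))
                         * (1 + math.sqrt(1 - c2**2)) / c2**2)

def bisect(g, a, b, tol=1e-12):
    """Root of g on [a,b]; assumes g(a) and g(b) have opposite signs."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

C = 1.0
a = bisect(lambda c2: C_I(c2) - C, 1e-9, 1.0)    # type I parameter
b = bisect(lambda c2: C_II(c2) - C, 1e-9, 0.72)  # type II, small-c2 branch

E_I = F(2, a) - F(1, a)                       # integral from 1 to 2
E_II = (F(2, b) - F(b, b)) + (F(1, b) - F(b, b))  # both glued pieces
```

For this C the type I film has strictly smaller area than the type II film, in line with the plot and with the type I solution being the physically relevant one.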
5.72398 is probably not a local minimizer of the energy and hence not a physically relevant solution. Give the energy E formulation for this R.3 A type II solution with c2 < 0. 1] . C 2 [−C. 0] with H (0) = RII ˜ and H (1) = RI that is such that t → E(H(t)) is decreasing. ¢ be more precise: To ¡ we should construct a homotopy H ∈ C [0. Claim 1. 21 .Exercise 15 Use these plots of height and energy as functions of c2 to compare the energy of the two solutions for the same conﬁguration of rings. See the left of the three ﬁgures: One can connect these two rods through this cylinder by a soap ﬁlm as in the two ﬁgures on the right. Can one conclude instability for one of the solutions? Are physically relevant solutions necessarily global minimizers or could they be local minimizers of the energy functional? Hint: think of the surface area of these solutions. What is the ﬁrst variation ˜ for this E? Using this new formulation it might be possible to ﬁnd a connection from √ ¢ ¡ a type II solution RII to a type I solution RI with the same C < log 2 + 3 such that the energy decreases along the connecting path. This would prove 7 that RII is not even a local minimizer. A formulation without this problem uses u 7→ R (u) (the ‘inverse’ ˜ function). Exercise 16 The approach that uses solutions in the form r 7→ U (r) in Exercise 4 has the drawback that we have to glue together two functions for one solution. Volunteers may try to prove this claim. Exercise 17 Let us consider a glass cylinder with at the top and bottom each a rod laid out on a diametral line and such that both rods cross perpendicular.
The solution depicted in the second cylinder is described by a function. Setting the cylinder as {(x,y,z); −ℓ ≤ x ≤ ℓ and y² + z² = R²}, this function looks as

  z(x,y) = y tan( πx/(4ℓ) ).

1. Show that 'the constant turning plane' is indeed a solution in the sense that it yields a stationary point of the energy functional.

2. Show that for a deviation of this function by w the area formula is given by

  E(w) = ∫_{−ℓ}^{ℓ} ∫_{y_{ℓ,w}(x)}^{y_{r,w}(x)} √( 1 + |∇z + ∇w|² ) dy dx,

where y_{ℓ,w}(x) < 0 < y_{r,w}(x) are the in absolute sense smallest solutions of

  y² + ( y tan( πx/(4ℓ) ) + w(x,y) )² = R².

3. If we restrict ourselves to functions that satisfy w(x,0) = 0, we may use a more suitable coordinate system, namely x, r instead of x, y, and describe ϕ(x,r) instead of w(x,y).

[Figure: the coordinates x and r on the turning plane inside the cylinder.]

4. Show that

  E(ϕ) = ∫_{−R}^{R} ∫_{−ℓ}^{ℓ} √( 1 + (∂ϕ/∂r)² + r² ( π/(4ℓ) + ∂ϕ/∂x )² ) dx dr.

5. Derive the Euler-Lagrange equation for E(ϕ) and state the boundary value problem.

It is not obvious to me if the first depicted solution is indeed a minimizer. If you ask me for a guess: for R ≫ ℓ it is the minimizer (R > 1.535777884999 ℓ?). For the second depicted solution of the boundary value problem one can find a geometrical argument that shows it is not even a local minimizer. (Im)prove these guesses.
Week 2

Spaces, Traces and Imbeddings

2.1 Function spaces

2.1.1 Hölder spaces

Before we will start 'solving problems' we have to fix the type of function we are looking for. For mathematics that means that we have to decide which function space will be used. The classical spaces that are often used for solutions of p.d.e. (and o.d.e.) are C^k(Ω̄) and the Hölder spaces C^{k,α}(Ω̄) with k ∈ N and α ∈ (0, 1]. Here is a short reminder of those spaces.

Definition 2.1.1 Let Ω ⊂ Rⁿ be a domain.

• C(Ω̄) with ‖·‖_∞ defined by ‖u‖_∞ = sup_{x∈Ω̄} |u(x)| is the space of continuous functions.

• Let α ∈ (0, 1]. Then C^α(Ω̄) with ‖·‖_α defined by

  ‖u‖_α = ‖u‖_∞ + sup_{x,y∈Ω̄} |u(x) − u(y)| / |x − y|^α

consists of all functions u ∈ C(Ω̄) such that ‖u‖_α < ∞. It is the space of functions which are Hölder-continuous with coefficient α.

If we want higher order derivatives to be continuous up to the boundary we should explain how these are defined on the boundary, or consider just derivatives in the interior and extend these. We choose this second option.

Definition 2.1.2 Let Ω ⊂ Rⁿ be a domain and k ∈ N⁺.

• C^k(Ω̄) with ‖·‖_{C^k(Ω̄)} consists of all functions u that are k-times differentiable in Ω and which are such that for each multiindex β ∈ Nⁿ with 1 ≤ |β| ≤ k there exists g_β ∈ C(Ω̄) with g_β = (∂/∂x)^β u in Ω. The norm ‖·‖_{C^k(Ω̄)} is defined by

  ‖u‖_{C^k(Ω̄)} = ‖u‖_{C(Ω̄)} + Σ_{1≤|β|≤k} ‖g_β‖_{C(Ω̄)}.
• C^{k,α}(Ω̄) consists of all functions u that are k times α-Hölder-continuously differentiable in Ω and which are such that for each multiindex β ∈ Nⁿ with 1 ≤ |β| ≤ k there exists g_β ∈ C^α(Ω̄) with g_β = (∂/∂x)^β u in Ω. The norm ‖·‖_{C^{k,α}(Ω̄)} is defined by

  ‖u‖_{C^{k,α}(Ω̄)} = ‖u‖_{C(Ω̄)} + Σ_{1≤|β|<k} ‖g_β‖_{C(Ω̄)} + Σ_{|β|=k} ‖g_β‖_{C^α(Ω̄)}.

Remark 2.1.3 Adding an index 0 at the bottom, as in C₀^k(Ω̄), means that u and all of its derivatives (read g_β) up to order k are 0 at the boundary. For example,

  C⁴(Ω̄) ∩ C₀²(Ω̄) = { u ∈ C⁴(Ω̄); u|_{∂Ω} = (∂/∂x_i) u|_{∂Ω} = (∂²/∂x_i∂x_j) u|_{∂Ω} = 0 }

is a subspace of C⁴(Ω̄).

Remark 2.1.4 Some other cases where we will use the notation C^·(·) are the following. If we write C^m(Ω), without the closure of Ω, we mean all functions which are m-times differentiable inside Ω. For those cases we are not defining a norm but just consider the collection of functions. By C₀^∞(Ω) we will mean all functions with compact support in Ω and which are infinitely differentiable on Ω. The set C^∞(Ω̄) consists of the functions that belong to C^k(Ω̄) for every k ∈ N.

For bounded domains Ω the spaces C^k(Ω̄) and C^{k,α}(Ω̄) are Banach spaces.

2.1.2 Sobolev spaces

Although often well equipped for solving differential equations these spaces are in general not very convenient in variational settings. Spaces that are better suited are the Sobolev spaces.

Let us first recall that L^p(Ω) is the space of all measurable functions u : Ω → R with ∫_Ω |u(x)|^p dx < ∞. When is a derivative in L^p(Ω)? For the moment we just recall that ∂u/∂x₁ ∈ L^p(Ω) if there is g ∈ L^p(Ω) such that

  ∫_Ω g φ dx = − ∫_Ω u (∂φ/∂x₁) dx for all φ ∈ C₀^∞(Ω).

Definition 2.1.5 Let Ω ⊂ Rⁿ be a domain, k ∈ N and p ∈ (1, ∞).

• W^{k,p}(Ω) consists of the functions u such that (∂/∂x)^α u ∈ L^p(Ω) for all α ∈ Nⁿ with |α| ≤ k. Its norm ‖·‖_{W^{k,p}(Ω)} is defined by

  ‖u‖_{W^{k,p}(Ω)} = Σ_{α∈Nⁿ, |α|≤k} ‖(∂/∂x)^α u‖_{L^p(Ω)}.
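As a small computational aside (my addition, not part of the original notes): the Hölder seminorm in Definition 2.1.1 can be approximated by maximizing the difference quotient over a grid. A sketch for u(x) = √x on [0, 1], which lies in C^{1/2}[0, 1] but not in C^{0,1}[0, 1]:

```python
import math

def holder_quotient(u, alpha, n):
    """Max of |u(x)-u(y)| / |x-y|^alpha over an (n+1)-point grid on [0,1]."""
    xs = [i / n for i in range(n + 1)]
    best = 0.0
    for i, x in enumerate(xs):
        for y in xs[i + 1:]:
            best = max(best, abs(u(x) - u(y)) / (y - x) ** alpha)
    return best

u = math.sqrt

# alpha = 1/2: the quotient stays bounded (it equals 1, attained at pairs (0, x))
q_half_coarse = holder_quotient(u, 0.5, 50)
q_half_fine = holder_quotient(u, 0.5, 200)

# alpha = 1: the difference quotient grows like sqrt(n) near the origin
q_one_coarse = holder_quotient(u, 1.0, 50)
q_one_fine = holder_quotient(u, 1.0, 200)

print(q_half_fine, q_one_fine)
```

Refining the grid leaves the α = 1/2 quotient at 1 but lets the α = 1 quotient blow up, which is exactly the failure of Lipschitz continuity at 0.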
For u ∈ C¹(Ω) ∩ L^p(Ω) one takes g = ∂u/∂x₁ ∈ C(Ω), checks if g ∈ L^p(Ω), and the formula follows from an integration by parts, since each φ has a compact support in Ω.

A closely related space is used in case of Dirichlet boundary conditions. Since the functions in W^{k,p}(Ω) are not necessarily continuous we cannot assume a condition like (∂/∂x)^α u = 0 on ∂Ω to hold pointwise. Then we would like to restrict ourselves to all functions u ∈ W^{k,p}(Ω) that satisfy (∂/∂x)^α u = 0 on ∂Ω for all α with |α| ≤ m, for some m. One way out is through the following definition.

Definition 2.1.7 Let Ω ⊂ Rⁿ be a domain, k ∈ N and p ∈ (1, ∞).

• Set W₀^{k,p}(Ω) = the closure of C₀^∞(Ω) in the ‖·‖_{W^{k,p}(Ω)}-norm, that is, the closure in the W^{k,p}(Ω)-norm of the set of infinitely differentiable functions with compact support in Ω. The standard norm on W₀^{k,p}(Ω) is ‖·‖_{W^{k,p}(Ω)}.

Instead of the norm ‖·‖_{W^{k,p}(Ω)} one might also encounter some other norm for W₀^{k,p}(Ω), namely |·|_{W^{k,p}(Ω)}, defined by taking only the highest order derivatives:

  |u|_{W^{k,p}(Ω)} = Σ_{α∈Nⁿ, |α|=k} ‖(∂/∂x)^α u‖_{L^p(Ω)}.

Clearly this cannot be a norm on W^{k,p}(Ω) for k ≥ 1, since |1|_{W^{k,p}(Ω)} = 0. Nevertheless, a result of Poincaré saves our day.

Let us first fix the meaning of a C¹ boundary or manifold.

Definition 2.1.8 We will call a bounded (n−1)-dimensional manifold M a C¹ manifold if there are finitely many C¹ maps f_i : Ā_i → R, with A_i an open set in R^{n−1} and corresponding Cartesian coordinates y₁^{(i)}, ..., y_n^{(i)}, say for i ∈ {1, ..., m}, such that

  M = ∪_{i=1}^m { y_n^{(i)} = f_i(y₁^{(i)}, ..., y_{n−1}^{(i)}); (y₁^{(i)}, ..., y_{n−1}^{(i)}) ∈ A_i }.

[figure: three blocks with local Cartesian coordinates]

We may assume there are open blocks B_i := A_i × (a_i, b_i) that cover M and are such that

  B̄_i ∩ M = { y_n^{(i)} = f_i(y₁^{(i)}, ..., y_{n−1}^{(i)}); (y₁^{(i)}, ..., y_{n−1}^{(i)}) ∈ A_i }.
Theorem 2.1.9 (A Poincaré type inequality) Let Ω ⊂ Rⁿ be a domain and ℓ ∈ R⁺. Suppose that there is a bounded (n−1)-dimensional C¹ manifold M ⊂ Ω̄ such that for every point x ∈ Ω there is a point x* ∈ M connected by a smooth curve within Ω̄ with length and curvature uniformly bounded, say by ℓ. Then there is a constant c_ℓ such that for all u ∈ C¹(Ω̄) with u = 0 on M:

  ∫_Ω |u|^p dx ≤ c_ℓ ∫_Ω |∇u|^p dx.

Proof. First let us consider the one-dimensional problem. Taking u ∈ C¹(0, ℓ) with u(0) = 0 we find with 1/p + 1/q = 1:

  ∫₀^ℓ |u(x)|^p dx = ∫₀^ℓ | ∫₀^x u′(s) ds |^p dx
    ≤ ∫₀^ℓ ( ∫₀^x |u′(s)|^p ds ) ( ∫₀^x 1^q ds )^{p/q} dx
    ≤ ( ∫₀^ℓ |u′(s)|^p ds ) ∫₀^ℓ x^{p/q} dx = (ℓ^p / p) ∫₀^ℓ |u′(s)|^p ds.

For the higher dimensional problem we go over to new coordinates. We may choose several systems of coordinates to fill up Ω. Let ỹ denote a system of coordinates on (part of) the manifold M and let y_n fill up the remaining direction. Since M is C¹ and (n−1)-dimensional we may assume that the Jacobian of each of these transformations is bounded from above and away from zero, say 0 < c₁^{−1} < J < c₁.
By these curvilinear coordinates we find for any number κ > ℓ:

  ∫_{shaded area} |u(x)|^p dx ≤ c₁ ∫_{part of M} ( ∫₀^κ |u(y)|^p dy_n ) dỹ
    ≤ c₁ (κ^p / p) ∫_{part of M} ∫₀^κ | (∂/∂y_n) u(y) |^p dy_n dỹ
    ≤ c₁ (κ^p / p) ∫_{part of M} ∫₀^κ |(∇u)(y)|^p dy_n dỹ
    ≤ c₂ κ^p ∫_{shaded area} |(∇u)(x)|^p dx.

[figure: the shaded area]

Filling up Ω appropriately yields the result.

Exercise 18 Show that the result of Theorem 2.1.9 indeed implies that there are constants c₁, c₂ ∈ R⁺ such that

  c₁ |u|_{W^{1,2}(Ω)} ≤ ‖u‖_{W^{1,2}(Ω)} ≤ c₂ |u|_{W^{1,2}(Ω)}.

Exercise 19 Suppose that u ∈ C²([0,1]²) with u = 0 on ∂((0,1)²). Show the following: there exists c₁ > 0 such that

  ∫_{[0,1]²} u² dx ≤ c₁ ∫_{[0,1]²} (u_{xx}² + u_{yy}²) dx.

Exercise 20 Suppose that u ∈ C²([0,1]²) with u = 0 on ∂((0,1)²). Show the following: there exists c₂ > 0 such that

  ∫_{[0,1]²} u² dx ≤ c₂ ∫_{[0,1]²} (Δu)² dx.

Hint: show that for these functions u the following holds:

  ∫_{[0,1]²} u_{xx} u_{yy} dx = ∫_{[0,1]²} (u_{xy})² dx ≥ 0.

Exercise 21 Suppose that u ∈ C²([0,1]²) with u(0, x₂) = u(1, x₂) = 0 for x₂ ∈ [0,1]. Show the following: there exists c₃ > 0 such that

  ∫_{[0,1]²} u² dxdy ≤ c₃ ∫_{[0,1]²} (u_{xx}² + u_{yy}²) dx.

Exercise 22 Suppose that u ∈ C²([0,1]²) with u(0, x₂) = u(1, x₂) = 0 for x₂ ∈ [0,1]. Prove or give a counterexample to: there exists c₄ > 0 such that

  ∫_{[0,1]²} u² dxdy ≤ c₄ ∫_{[0,1]²} (Δu)² dx.
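The one-dimensional estimate ∫₀^ℓ |u|^p dx ≤ (ℓ^p/p) ∫₀^ℓ |u′|^p dx from the start of the proof can be checked numerically. A hedged sketch (my addition; the test function and the quadrature rule are arbitrary choices):

```python
import math

def riemann(f, a, b, n=2000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

ell, p = 1.5, 2.0

# u(0) = 0, as required for the one-dimensional step in the proof
u = lambda x: math.sin(math.pi * x / (2 * ell))
du = lambda x: (math.pi / (2 * ell)) * math.cos(math.pi * x / (2 * ell))

lhs = riemann(lambda x: abs(u(x)) ** p, 0, ell)
rhs = (ell ** p / p) * riemann(lambda x: abs(du(x)) ** p, 0, ell)

print(lhs, rhs)  # lhs should not exceed rhs
```

For this u the left side is ℓ/2 while the right side is (π²/8)(ℓ/2), so the inequality holds with room to spare; in fact (2ℓ/π)² is the sharp constant for p = 2.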
Exercise 23 Here are three spaces: W^{2,2}(0,1), W^{2,2}(0,1) ∩ W₀^{1,2}(0,1) and W₀^{2,2}(0,1), which are all equipped with the norm ‖·‖_{W^{2,2}(0,1)}. Consider the functions f_α(x) = x^α (1 − x)^α for α ∈ R. Complete the next sentences:

1. If α ... , then f_α ∈ W^{2,2}(0,1).
2. If α ... , then f_α ∈ W^{2,2}(0,1) ∩ W₀^{1,2}(0,1).
3. If α ... , then f_α ∈ W₀^{2,2}(0,1).
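A numerical way to explore Exercise 23 (this aside is my addition, not from the notes): near x = 0 the second derivative of the model function x^α behaves like α(α−1)x^{α−2}, so apart from the polynomial exponents the borderline for ∫ (f_α″)² dx < ∞ sits at 2(α−2) = −1, i.e. α = 3/2. The sketch compares truncated integrals for one exponent on each side:

```python
def second_deriv_sq_integral(alpha, delta, n=4000):
    """Midpoint rule for the integral of (d^2/dx^2 x^alpha)^2 over [delta, 0.5].

    Near x = 0 the factor x^alpha dominates f_alpha(x) = x^alpha (1-x)^alpha,
    so this captures the singular part of the W^{2,2} norm.
    """
    a, b = delta, 0.5
    h = (b - a) / n
    c = alpha * (alpha - 1)
    return h * sum((c * (a + (i + 0.5) * h) ** (alpha - 2)) ** 2
                   for i in range(n))

# alpha = 1.75: singular exponent 2*alpha - 4 = -0.5 > -1, so convergence
I_good = [second_deriv_sq_integral(1.75, d) for d in (1e-2, 1e-4, 1e-6)]

# alpha = 1.25: singular exponent 2*alpha - 4 = -1.5 < -1, so divergence
I_bad = [second_deriv_sq_integral(1.25, d) for d in (1e-2, 1e-4, 1e-6)]

print(I_good, I_bad)
```

The first list stabilizes as δ ↓ 0 while the second keeps growing, matching the exponent count.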
2.2 Restricting and extending

If Ω ⊂ Rⁿ is bounded and ∂Ω ∈ C¹ then by assumption we may split the boundary in finitely many pieces, say for i = 1, ..., m, and we may describe each of these by

  { y_n^{(i)} = f_i(y₁^{(i)}, ..., y_{n−1}^{(i)}); (y₁^{(i)}, ..., y_{n−1}^{(i)}) ∈ A_i }

for some bounded open set A_i ∈ R^{n−1}. We may even impose higher regularity of the boundary:

Definition 2.2.1 For a bounded domain Ω we say ∂Ω ∈ C^{m,α} if there is a finite set of functions f_i, open sets A_i and Cartesian coordinate systems as in Definition 2.1.8, and if moreover f_i ∈ C^{m,α}(Ā_i).

First let us consider multiplication operators.

Lemma 2.2.2 Fix Ω ⊂ Rⁿ bounded and ζ ∈ C₀^∞(Rⁿ). Consider the multiplication operator M(u) = ζu. This operator M is bounded in any of the spaces C^m(Ω̄) and C^{m,α}(Ω̄) with m ∈ N and α ∈ (0, 1], and W^{m,p}(Ω) and W₀^{m,p}(Ω) with m ∈ N and p ∈ (1, ∞).

Exercise 24 Prove this lemma.

Often it is convenient to study the behaviour of some function locally. One of the tools is the so-called partition of unity.

Definition 2.2.3 A set of functions {ζ_i}_{i=1}^ℓ is called a C^∞ partition of unity for Ω̄ if the following holds:

1. ζ_i ∈ C₀^∞(Rⁿ) for all i;
2. 0 ≤ ζ_i(x) ≤ 1 for all i and x ∈ Ω̄;
3. Σ_{i=1}^ℓ ζ_i(x) = 1 for x ∈ Ω̄.

Sometimes it is useful to work with a partition of unity of Ω̄ that coincides with the coordinate systems mentioned in Definition 2.1.8. In fact it is possible to find a partition of unity {ζ_i}_{i=1}^{ℓ+1} such that

  support(ζ_i) ⊂ B_i for i = 1, ..., ℓ, and support(ζ_{ℓ+1}) ⊂ Ω.   (2.1)

Any function u defined on Ω̄ can now be written as

  u = Σ_{i=1}^{ℓ+1} ζ_i u,

and by Lemma 2.2.2 the functions u_i := ζ_i u have the same regularity as u but are now restricted to an area where we can deal with them. Indeed we find support(u_i) ⊂ support(ζ_i).

Definition 2.2.4 A mollifier on Rⁿ is a function J₁ with the following properties:
1. J₁ ∈ C^∞(Rⁿ);
2. J₁(x) ≥ 0 for all x ∈ Rⁿ;
3. support(J₁) ⊂ {x ∈ Rⁿ; |x| ≤ 1};
4. ∫_{Rⁿ} J₁(x) dx = 1.

Example 2.2.5 The standard example of a mollifier is the function

  ρ(x) = c exp( −1/(1 − |x|²) ) for |x| < 1, ρ(x) = 0 for |x| ≥ 1,

where the c is fixed by condition 4 in Definition 2.2.4.

By rescaling one arrives at the family of functions J_ε(x) = ε^{−n} J₁(ε^{−1} x) that goes in some sense to the δ-function in 0. The nice property is the following. For u ∈ C^{m,α}(Ω̄) respectively u ∈ W^{m,p}(Ω) we may define

  u_ε(x) := (J_ε ∗ u)(x) = ∫_{y∈Rⁿ} J_ε(x − y) u(y) dy

to find a family of C^∞ functions that approximates u when ε ↓ 0 in ‖·‖_{C^{m,α}(Ω̄)} respectively in ‖·‖_{W^{m,p}(Ω)} norm.

[figure: mollifying a jump in u, u′ and u″]

There are other ways of restricting but let us now focus on extending a function. Without any assumption on the boundary this can become very involved. If u has compact support in Ω we may extend u by 0 outside of Ω. For bounded Ω with at least ∂Ω ∈ C¹ we may define a bounded extension operator. But first let us consider extension operators from C^m[0,1] to C^m[−1,1]. For m = 0 one proceeds by a simple reflection. Set

  E₀(u)(x) = u(x) for 0 ≤ x ≤ 1,
  E₀(u)(x) = u(−x) for −1 ≤ x < 0,

and one directly checks that E₀(u) ∈ C[−1,1] for u ∈ C[0,1] and moreover ‖E₀(u)‖_{C[−1,1]} ≤ ‖u‖_{C[0,1]}.
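A minimal numerical sketch of the mollifier at work (my addition): J₁ is the standard example above with the constant c fixed numerically, and u is a jump function. The convolution (J_ε ∗ u) is smooth, equals 0 and 1 away from the jump, and takes the value ½ at the jump by symmetry.

```python
import math

def J1(x):
    """Standard mollifier (up to the normalizing constant c)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

# fix c so that the total integral is 1 (condition 4 of the definition)
n = 20000
h = 2.0 / n
mass = h * sum(J1(-1 + (i + 0.5) * h) for i in range(n))
c = 1.0 / mass

def J_eps(x, eps):
    return c * J1(x / eps) / eps

def u(x):
    """A jump function: 0 for x < 0, 1 for x >= 0."""
    return 1.0 if x >= 0 else 0.0

def mollify(x, eps, m=2000):
    """(J_eps * u)(x) by the midpoint rule over the support [x-eps, x+eps]."""
    h = 2 * eps / m
    return h * sum(J_eps(x - y, eps) * u(y)
                   for y in (x - eps + (i + 0.5) * h for i in range(m)))

eps = 0.1
left, mid, right = mollify(-0.5, eps), mollify(0.0, eps), mollify(0.5, eps)
print(left, mid, right)
```

At distance larger than ε from the jump the mollified function agrees with u, which is the mechanism behind the approximation statements above.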
A simple and elegant procedure does it for m > 0. Set

  E_m(u)(x) = u(x) for 0 ≤ x ≤ 1,
  E_m(u)(x) = Σ_{k=1}^{m+1} α_{m,k} u(−x/k) for −1 ≤ x < 0,   (2.2)

and compute the coefficients α_{m,k} from the equations coming out of

  (E_m(u))^{(j)}(0⁺) = (E_m(u))^{(j)}(0⁻) for j = 0, 1, ..., m,

namely

  1 = Σ_{k=1}^{m+1} α_{m,k} (−1/k)^j for j = 0, 1, ..., m.

Then E_m(u) ∈ C^m[−1,1] for u ∈ C^m[0,1] and moreover

  ‖E_m(u)‖_{C^m[−1,1]} ≤ c_m ‖u‖_{C^m[0,1]} with c_m = Σ_{k=1}^{m+1} |α_{m,k}|.

Exercise 25 Show that there is c_{p,m} ∈ R⁺ such that for all u ∈ C^m[0,1]:

  ‖E_m(u)‖_{W^{m,p}(−1,1)} ≤ c_{p,m} ‖u‖_{W^{m,p}(0,1)}.

Exercise 26 Let Ω ⊂ R^{n−1} be a bounded domain. Convince yourself that

  E_m(u)(x′, x_n) = u(x′, x_n) for 0 ≤ x_n ≤ 1,
  E_m(u)(x′, x_n) = Σ_{k=1}^{m+1} α_{m,k} u(x′, −x_n/k) for −1 ≤ x_n < 0,

is a bounded extension operator from C^m(Ω̄ × [0,1]) to C^m(Ω̄ × [−1,1]).

Lemma 2.2.6 Let Ω and Ω′ be bounded domains in Rⁿ and ∂Ω ∈ C^m with m ∈ N⁺. Suppose that Ω̄ ⊂ Ω′. Then there exists a linear extension operator E_m from C^m(Ω̄) to C₀^m(Ω̄′) such that:

1. (E_m u)|_Ω = u;
2. (E_m u)|_{Rⁿ\Ω′} = 0;
3. there is c_{Ω,Ω′,m} ∈ R⁺ such that for all u ∈ C^m(Ω̄):
  ‖E_m u‖_{C^m(Ω̄′)} ≤ c_{Ω,Ω′,m} ‖u‖_{C^m(Ω̄)};   (2.3)
4. there is c_{Ω,Ω′,m,p} ∈ R⁺ such that for all u ∈ C^m(Ω̄):
  ‖E_m u‖_{W^{m,p}(Ω′)} ≤ c_{Ω,Ω′,m,p} ‖u‖_{W^{m,p}(Ω)}.   (2.4)

Proof. Let the A_i, f_i, B_i and ζ_i be as in Definition 2.1.8 and (2.1). Let us consider u_i = ζ_i u in the coordinates {y₁^{(i)}, ..., y_n^{(i)}}. First we will flatten this part of the boundary by replacing the last coordinate by ỹ_n^{(i)} = y_n^{(i)} − f_i(y₁^{(i)}, ..., y_{n−1}^{(i)}). Writing u_i in these new coordinates {y*, ỹ_n^{(i)}}, with y* = (y₁^{(i)}, ..., y_{n−1}^{(i)}), and setting B̃_i the transformed box, we will first assume that u_i is extended by zero for ỹ_n^{(i)} > 0 large. Now we may define the extension of u_i by

  ū_i(y*, ỹ_n^{(i)}) = u_i(y*, ỹ_n^{(i)}) if ỹ_n^{(i)} ≥ 0,
  ū_i(y*, ỹ_n^{(i)}) = Σ_{k=1}^{m+1} α_k u_i(y*, −ỹ_n^{(i)}/k) if ỹ_n^{(i)} < 0,
where the α_k are defined by the equations in (2.2). Assuming that u ∈ C^m(Ω̄) we find that with the coefficients chosen this way the function ū_i ∈ C^m(B̃_i): its derivatives of order ≤ m are continuous. Due to the assumptions on f_i this fact remains true also after the reverse transformation: the function ū_i ∈ C^m(B̄_i). It is even true that there is a constant C_{i,∂Ω}, which just depends on the properties of the i-th local coordinate system and ζ_i, such that

  ‖ū_i‖_{W^{m,p}(B_i)} ≤ C_{i,∂Ω} ‖u_i‖_{W^{m,p}(B_i∩Ω)}.   (2.5)

Up to now we have constructed an extension operator Ẽ_m : W^{m,p}(Ω) → W^{m,p}(Rⁿ) by

  Ẽ_m(u) = Σ_{i=1}^ℓ ū_i + ζ_{ℓ+1} u.

As a last step let χ be a C₀^∞ function with χ(x) = 1 for x ∈ Ω̄ and support(χ) ⊂ Ω′. We define the extension operator E_m by

  E_m(u) = χ ( Σ_{i=1}^ℓ ū_i + ζ_{ℓ+1} u ).   (2.6)

An approximation argument shows that (2.5) even holds for all u ∈ W^{m,p}(Ω). Notice that we repeatedly used Lemma 2.2.2 in order to obtain the estimate in (2.4).

It remains to show that E_m(u) ∈ W₀^{m,p}(Ω′). Due to the assumption that support(χ) ⊂ Ω′ we find that support(J_ε ∗ E_m(u)) ⊂ Ω′ for ε small enough, where J_ε is a mollifier. Hence (J_ε ∗ E_m(u)) ∈ C₀^∞(Ω′) and, since J_ε ∗ E_m(u) approaches E_m(u) in W^{m,p}(Ω′)-norm for ε ↓ 0, one finds E_m(u) ∈ W₀^{m,p}(Ω′).

Exercise 27 A square inside a disk: Ω₁ = (−1,1)² and Ω′₁ = {x ∈ R²; |x| < 2}. Construct a bounded extension operator E : W^{2,2}(Ω₁) → W₀^{2,2}(Ω′₁).

Exercise 28 Think about the similar question in case of

  an equilateral triangle: Ω₂ = {(x, y); −1 < y < 1 − √3 |x|},
  a regular pentagon: Ω₃ = co{ (cos(2kπ/5), sin(2kπ/5)); k = 0, ..., 4 }°,
  a US-army star: Ω₄ = ...

Here co(A) is the convex hull of A, and the small ° means the open interior. Hint for the triangle.
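The coefficients α_{m,k} of the reflection-type extension (2.2) solve the small Vandermonde-like system Σ_k α_{m,k} (−1/k)^j = 1 for j = 0, ..., m. A sketch of computing them (my addition; plain Gaussian elimination, nothing specific to the notes): for m = 1 one gets α = (−3, 4), the classical C¹ extension coefficients.

```python
def extension_coefficients(m):
    """Solve sum_k alpha_k * (-1/k)^j = 1 for j = 0..m (k = 1..m+1)
    by Gaussian elimination; these are the alpha_{m,k} of (2.2)."""
    size = m + 1
    A = [[(-1.0 / k) ** j for k in range(1, size + 1)] for j in range(size)]
    b = [1.0] * size
    # forward elimination with partial pivoting
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, size):
            f = A[r][col] / A[col][col]
            for cc in range(col, size):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    # back substitution
    alpha = [0.0] * size
    for r in range(size - 1, -1, -1):
        s = sum(A[r][cc] * alpha[cc] for cc in range(r + 1, size))
        alpha[r] = (b[r] - s) / A[r][r]
    return alpha

a1 = extension_coefficients(1)
print(a1)  # expected: approximately [-3.0, 4.0]
```

Once the α_{m,k} are known, the matching conditions (E_m(u))^{(j)}(0⁺) = (E_m(u))^{(j)}(0⁻) hold for any u ∈ C^m[0,1] by linearity.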
Above we have defined a mollifier on Rⁿ. If we want to use such a mollifier on a bounded domain a difficulty arises when u doesn't have a compact support in Ω. In that case we have to use the last lemma first and extend the function. Notice that we don't need the last part of the proof of that lemma but may use Ẽ_m defined in (2.6).

Exercise 29 Let Ω and Ω′ be bounded domains in Rⁿ with Ω̄ ⊂ Ω′ and let ε > 0.

1. Suppose that u ∈ C(Ω̄′). Prove that u_ε(x) := ∫_{y∈Ω′} J_ε(x − y) u(y) dy is such that u_ε → u in C(Ω̄) for ε ↓ 0.
2. Let γ ∈ (0, 1] and suppose that u ∈ C^γ(Ω̄′). Prove that u_ε → u in C^γ(Ω̄) for ε ↓ 0.
3. Let u ∈ C^k(Ω̄′) and α ∈ Nⁿ with |α| ≤ k. Show that (∂/∂x)^α u_ε = ((∂/∂x)^α u)_ε in Ω̄ if ε < inf{ |x − x′|; x ∈ ∂Ω, x′ ∈ ∂Ω′ }.

Exercise 30 Let Ω be a bounded domain in Rⁿ and let ε > 0. Suppose that u ∈ L^p(Ω) with p < ∞. Set ū(x) = u(x) for x ∈ Ω and ū(x) = 0 elsewhere.

1. Derive from Hölder's inequality that

  ∫ |ab| dx ≤ ( ∫ |a|^p dx )^{1/p} ( ∫ |b|^q dx )^{1/q} for p, q ∈ (1, ∞) with 1/p + 1/q = 1.

2. Prove that u_ε(x) := ∫_{y∈Rⁿ} J_ε(x − y) ū(y) dy satisfies

  ‖u_ε‖_{L^p(Ω)} ≤ ‖u‖_{L^p(Ω)}.

3. Use that C(Ω̄′) is dense in L^p(Ω′) and the previous exercise to conclude that u_ε → u in L^p(Ω).
2.3 Traces

We have seen that u ∈ W₀^{1,2}(Ω) means in some sense that u = 0 on ∂Ω (but not ∂u/∂n = 0 on ∂Ω), although we defined W₀^{1,2}(Ω) by approximation through functions in C₀^∞(Ω). What can we do if the boundary data we are interested in are nonhomogeneous? The trace operator solves that problem.

Theorem 2.3.1 Let p ∈ (1, ∞) and assume that Ω is bounded and ∂Ω ∈ C¹. The following holds:

1. For u ∈ W^{1,p}(Ω) let {u_m}_{m=1}^∞ ⊂ C¹(Ω̄) be such that ‖u − u_m‖_{W^{1,p}(Ω)} → 0. Then u_m|_{∂Ω} converges in L^p(∂Ω), say to v ∈ L^p(∂Ω), and the limit v only depends on u. So T : W^{1,p}(Ω) → L^p(∂Ω) with Tu = v is well defined.
2. If u ∈ W^{1,p}(Ω) ∩ C(Ω̄), then Tu = u|_{∂Ω}.
3. T is a bounded linear operator from W^{1,p}(Ω) to L^p(∂Ω). Bounded means there is C_{p,Ω} such that

  ‖Tu‖_{L^p(∂Ω)} ≤ C_{p,Ω} ‖u‖_{W^{1,p}(Ω)}.   (2.7)

Remark 2.3.2 This T is called the trace operator.

Proof. First let us suppose that u ∈ C¹(Ω̄) and derive an estimate as in (2.7). We may assume that there are finitely many maps and sets of Cartesian coordinate systems as in Definition 2.1.8. Let {ζ_i}_{i=1}^ℓ be a C^∞ partition of unity for Ω̄ that fits with the finitely many boundary maps:

  ∂Ω ∩ support(ζ_i) ⊂ ∂Ω ∩ { y_n^{(i)} = f_i(y₁^{(i)}, ..., y_{n−1}^{(i)}); (y₁^{(i)}, ..., y_{n−1}^{(i)}) ∈ A_i }.

We may choose such a partition since these local coordinate systems do overlap. Now for the sake of simpler notation let us assume these boundary parts are even flat. For this assumption one will pay by a factor (1 + |∇f_i|²)^{1/2} in the integrals, but by the C¹ assumption for ∂Ω the values of |∇f_i|² are uniformly bounded. Integrating inwards from the (flat) boundary part A_i ⊂ R^{n−1} gives, with (0, ℓ) reaching outside the support of ζ_i, that

  ∫_{A_i} ζ_i |u|^p dx′ = − ∫_{A_i} ∫₀^ℓ (∂/∂x_n)(ζ_i |u|^p) dx_n dx′
    = − ∫_{A_i×(0,ℓ)} ( (∂ζ_i/∂x_n) |u|^p + p ζ_i |u|^{p−2} u (∂u/∂x_n) ) dx
    ≤ C ∫_{A_i×(0,ℓ)} ( |u|^p + |u|^{p−1} |∂u/∂x_n| ) dx
    ≤ C ((2p−1)/p) ∫_{A_i×(0,ℓ)} |u|^p dx + C (1/p) ∫_{A_i×(0,ℓ)} |∂u/∂x_n|^p dx
    ≤ C̃ ∫ ( |u|^p + |∇u|^p ) dx.
In the one but last step we used Young's inequality:

  ab ≤ (1/p) a^p + (1/q) b^q for p, q > 0 with 1/p + 1/q = 1.   (2.8)

Combining such an estimate for all boundary parts we find, adding finitely many constants, that

  ∫_{∂Ω} |u|^p dx′ ≤ C ∫_Ω ( |u|^p + |∇u|^p ) dx.

So

  ‖u|_{∂Ω}‖_{L^p(∂Ω)} ≤ C ‖u‖_{W^{1,p}(Ω)} for u ∈ C¹(Ω̄),   (2.9)

and if u_m ∈ C¹(Ω̄) with u_m → u in W^{1,p}(Ω) then

  ‖u_m|_{∂Ω} − u_k|_{∂Ω}‖_{L^p(∂Ω)} ≤ C ‖u_m − u_k‖_{W^{1,p}(Ω)}.   (2.10)

So {Tu_m}_{m=1}^∞ is a Cauchy sequence in L^p(∂Ω) and hence converges. By (2.9) one also finds that the limit does not depend on the chosen sequence: if u_m → u and v_m → u in W^{1,p}(Ω), then

  ‖Tu_m − Tv_m‖_{L^p(∂Ω)} ≤ C ‖u_m − v_m‖_{W^{1,p}(Ω)} → 0.

So Tu := lim_{m→∞} Tu_m is well defined in L^p(∂Ω) for all u ∈ W^{1,p}(Ω), and for u ∈ C¹(Ω̄) the operator T is well-defined with Tu = u|_{∂Ω}. The fact that T is bounded also follows from (2.9):

  ‖Tu‖_{L^p(∂Ω)} = lim_{m→∞} ‖Tu_m‖_{L^p(∂Ω)} ≤ C lim_{m→∞} ‖u_m‖_{W^{1,p}(Ω)} = C ‖u‖_{W^{1,p}(Ω)}.

It remains to show that if u ∈ W^{1,p}(Ω) ∩ C(Ω̄) then Tu = u|_{∂Ω}. To complete that argument we use that functions in W^{1,p}(Ω) ∩ C(Ω̄) can be approximated by a sequence of C¹(Ω̄)-functions {u_m}_{m=1}^∞ in the W^{1,p}(Ω)-sense and such that u_m → u uniformly on Ω̄. For example, since ∂Ω ∈ C¹ one may extend functions to a small ε₁-neighborhood Ω_ε outside of Ω̄ by a local reflection as in Lemma 2.2.6: setting ũ = E(u) one finds ũ ∈ W^{1,p}(Ω_ε) ∩ C(Ω̄_ε) and ‖ũ‖_{W^{1,p}(Ω_ε)} ≤ C ‖u‖_{W^{1,p}(Ω)}. A mollifier then allows us to construct a sequence {u_m}_{m=1}^∞ in C^∞(Ω̄) that approximates u both in W^{1,p}(Ω) as well as uniformly on Ω̄. So

  ‖u|_{∂Ω} − u_m|_{∂Ω}‖_{L^p(∂Ω)} ≤ |∂Ω|^{1/p} ‖u|_{∂Ω} − u_m|_{∂Ω}‖_{C(∂Ω)} ≤ |∂Ω|^{1/p} ‖u − u_m‖_{C(Ω̄)} → 0,

and ‖Tu − u_m|_{∂Ω}‖_{L^p(∂Ω)} → 0, implying that Tu = u|_{∂Ω}.

Exercise 31 Prove Young's inequality (2.8).
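A quick numerical sanity check of Young's inequality (2.8) (my addition; this is of course not a proof, which is what Exercise 31 asks for):

```python
def young_holds(a, b, p):
    """Check a*b <= a^p/p + b^q/q with 1/p + 1/q = 1, for a, b >= 0."""
    q = p / (p - 1.0)
    return a * b <= a ** p / p + b ** q / q + 1e-12  # tolerance for rounding

checks = [young_holds(a, b, p)
          for a in (0.0, 0.3, 1.0, 2.5, 10.0)
          for b in (0.0, 0.7, 1.0, 4.0)
          for p in (1.5, 2.0, 3.0)]
print(all(checks))
```

Equality occurs exactly when a^p = b^q, e.g. a = b = 1, which is why a small rounding tolerance is included.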
2.4 Zero trace and W₀^{1,p}(Ω)

So by now there are two ways of considering boundary conditions. First there is W₀^{1,p}(Ω), the closure of C₀^∞(Ω) in the ‖·‖_{W^{1,p}(Ω)} norm, and secondly the trace operator. It would be rather nice if for zero boundary conditions these would coincide.

Theorem 2.4.1 Fix p ∈ (1, ∞) and suppose that Ω is bounded with ∂Ω ∈ C¹. Suppose that T is the trace operator from Theorem 2.3.1. Then

  u ∈ W₀^{1,p}(Ω) if and only if Tu = 0 on ∂Ω.

Proof. (⇒) If u ∈ W₀^{1,p}(Ω) then we may approximate u by u_m ∈ C₀^∞(Ω) and, since T is a bounded linear operator,

  ‖Tu‖_{L^p(∂Ω)} = ‖Tu − Tu_m‖_{L^p(∂Ω)} ≤ c ‖u − u_m‖_{W^{1,p}(Ω)} → 0.

(⇐) Assuming that Tu = 0 on ∂Ω, we have to construct a sequence in C₀^∞(Ω) that converges to u in the ‖·‖_{W^{1,p}(Ω)} norm. Since ∂Ω ∈ C¹ we again may work with the finitely many local coordinate systems as in Definition 2.1.8. We start by fixing a partition of unity {ζ_i}_{i=1}^ℓ as in the proof of the previous theorem, which again corresponds with the local coordinate systems. Next we assume the boundary sets are flat, say ∂Ω ∩ B_i = Γ_{B_i} ⊂ R^{n−1}, and set Ω ∩ B_i = Ω_{B_i}, where B_i is a box. Now we proceed in three steps.

1. Let u ∈ W^{1,p}(Ω). Since Tu = 0 we may take any sequence {u_m}_{m=1}^∞ ⊂ C¹(Ω̄) such that ‖u_m − u‖_{W^{1,p}(Ω)} → 0, and it follows from the definition of T that u_m|_{∂Ω} = Tu_m → 0 in L^p(∂Ω). From v(t) = v(0) + ∫₀^t v′(s) ds one gets

  |v(t)| ≤ |v(0)| + ∫₀^t |v′(s)| ds

and hence, by Minkowski's and Young's inequalities,

  |v(t)|^p ≤ c_p |v(0)|^p + c_p ( ∫₀^t |v′(s)| ds )^p ≤ c_p |v(0)|^p + c_p t^{p−1} ∫₀^t |v′(s)|^p ds.

It implies that

  ∫_{Γ_B} |u_m(x′, x_n)|^p dx′ ≤ c_p ∫_{Γ_B} |u_m(x′, 0)|^p dx′ + c_p x_n^{p−1} ∫_{Γ_B×[0,x_n]} | (∂/∂x_n) u_m(x′, s) |^p dx′ ds,

and since u_m|_{∂Ω} → 0 in L^p(∂Ω) and u_m → u in W^{1,p}(Ω) we find

  ∫_{Γ_B} |u(x′, x_n)|^p dx′ ≤ c_p x_n^{p−1} ∫_{Γ_B×[0,x_n]} | (∂/∂x_n) u(x′, s) |^p dx′ ds.   (2.11)

2. Taking ζ ∈ C₀^∞(R) such that

  ζ(x) = 1 for x ∈ [0, 1], ζ(x) = 0 for x ∉ [−1, 2], 0 ≤ ζ ≤ 1 in R,
we set w_m(x′, x_n) = (1 − ζ(m x_n)) u(x′, x_n). One directly finds for m → ∞ that

  ∫_{Ω_B} |u − w_m|^p dx ≤ ∫_{Γ_B×[0,2m^{−1}]} |u(x)|^p dx → 0.

For the gradient part of the ‖·‖_{W^{1,p}}-norm we find

  ∫_{Ω_B} |∇u − ∇w_m|^p dx = ∫_{Ω_B} |∇( ζ(m x_n) u(x) )|^p dx
    ≤ 2^{p−1} ∫_{Ω_B} ( |ζ(m x_n)|^p |∇u(x)|^p + m^p |ζ′(m x_n)|^p |u(x)|^p ) dx.   (2.12)

For the first term of (2.12) when m → ∞ we obtain

  ∫_{Ω_B} |ζ(m x_n)|^p |∇u(x)|^p dx ≤ ∫_{Γ_B×[0,2m^{−1}]} |∇u(x)|^p dx → 0,

and from (2.11) for the second term of (2.12) when m → ∞:

  ∫_{Ω_B} m^p |ζ′(m x_n)|^p |u(x)|^p dx = ∫₀^{2m^{−1}} ∫_{Γ_B} m^p |ζ′(m x_n)|^p |u(x)|^p dx′ dx_n
    ≤ C_{p,ζ} ∫₀^{2m^{−1}} m^p x_n^{p−1} ∫_{Γ_B×[0,x_n]} | (∂/∂x_n) u(x′, t) |^p dx′ dt dx_n
    ≤ C̃_{p,ζ} ∫_{Γ_B×[0,2m^{−1}]} | (∂/∂x_n) u(x′, t) |^p dx′ dt → 0.

So w_m → u in the ‖·‖_{W^{1,p}(Ω)} norm and moreover support(w_m) lies compactly in Ω. (Here we have to go back to the curved boundaries and glue the w_m together.)

3. Now there is room for using a mollifier. With J₁ as before we find for all ε < (2m)^{−1} (in Ω this becomes ε < (cm)^{−1} for some uniform constant depending on ∂Ω) that v_{ε,m} := J_ε ∗ w_m is a C₀^∞(Ω)-function. We may choose a sequence of ε_m such that ‖v_{ε_m,m} − w_m‖_{W^{1,p}(Ω)} ≤ 1/m and hence v_{ε_m,m} → u in W^{1,p}(Ω).

Exercise 32 Prove Minkowski's inequality: for all ε > 0, p ∈ (1, ∞) and a, b ∈ R:

  |a + b|^p ≤ (1 + ε)^{p−1} |a|^p + (1 + 1/ε)^{p−1} |b|^p.
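Step 2 of the proof can be illustrated in one dimension (my addition; a C¹ smoothstep stands in for the C₀^∞ cutoff ζ, which is enough for a numeric sketch): for u(x) = x on (0, 1), which has zero trace at 0, the functions w_m = (1 − ζ(mx)) u vanish near 0 while ‖u − w_m‖²_{W^{1,2}} = O(1/m).

```python
def zeta(t):
    """A C^1 cutoff: 1 for t <= 1, 0 for t >= 2, cubic smoothstep in between."""
    if t <= 1.0:
        return 1.0
    if t >= 2.0:
        return 0.0
    s = 2.0 - t                      # s runs from 1 down to 0 on [1, 2]
    return s * s * (3.0 - 2.0 * s)

def dzeta(t):
    if t <= 1.0 or t >= 2.0:
        return 0.0
    s = 2.0 - t
    return -(6.0 * s - 6.0 * s * s)  # chain rule: d/dt of zeta

def w_error(m, n=20000):
    """||u - w_m||_{W^{1,2}(0,1)}^2 for u(x) = x and w_m = (1 - zeta(m x)) u."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        d0 = zeta(m * x) * x                        # u - w_m
        d1 = zeta(m * x) + m * dzeta(m * x) * x     # (u - w_m)'
        total += (d0 * d0 + d1 * d1) * h
    return total

errs = [w_error(m) for m in (10, 100, 1000)]
print(errs)  # should decrease towards 0
```

Note that m |ζ′(mx)| x stays bounded on the transition zone [1/m, 2/m], which is exactly the mechanism exploited in the estimate for the second term of (2.12).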
2.5 Gagliardo, Nirenberg, Sobolev and Morrey

In the previous sections we have (re)visited some different function spaces. For bounded (smooth) domains there are the more obvious relations that W^{m,p}(Ω) ⊂ W^{k,p}(Ω) when m > k, and W^{m,p}(Ω) ⊂ W^{m,q}(Ω) when p > q, not to mention C^{m,α}(Ω̄) ⊂ C^{m,β}(Ω̄) when α > β, or C^m(Ω̄) ⊂ W^{m,p}(Ω). But what about the less obvious relations? Are there conditions such that a Sobolev space is contained in some Hölder space? Or is it possible that W^{m,p}(Ω) ⊂ W^{k,q}(Ω) with m > k but p < q? Here we will only recall two of the most famous estimates that imply an answer to some of these questions.

Theorem 2.5.1 (Gagliardo-Nirenberg-Sobolev inequality) Fix n ∈ N⁺ and suppose that 1 ≤ p < n. Then there exists a constant C = C_{p,n} such that

  ‖u‖_{L^{pn/(n−p)}(Rⁿ)} ≤ C ‖∇u‖_{L^p(Rⁿ)} for all u ∈ C₀¹(Rⁿ).

Theorem 2.5.2 (Morrey's inequality) Fix n ∈ N⁺ and suppose that n < p < ∞. Then there exists a constant C = C_{p,n} such that

  ‖u‖_{C^{0,1−n/p}(Rⁿ)} ≤ C ‖u‖_{W^{1,p}(Rⁿ)} for all u ∈ C¹(Rⁿ).

Remark 2.5.3 These two estimates form a basis for most of the Sobolev imbedding theorems. Let us try to supply you with a way to remember the correct coefficients. Let us introduce some number κ for the quality of the continuity. For the Hölder spaces C^{m,α}(Ω̄) this is 'just' the coefficient:

  κ_{C^{m,α}} = m + α.

For the Sobolev spaces W^{m,p}(Ω), with Ω ∈ Rⁿ, which contain functions that have 'less differentiability' than the ones in C^m(Ω̄), this 'less' corresponds with −n/p for each dimension, and we set

  κ_{W^{m,p}} = m − n/p.

For X₁ ⊂ X₂ it should hold that κ_{X₁} ≥ κ_{X₂}. Comparing W^{1,p} and L^q one finds

  1 − n/p ≥ −n/q if q ≤ pn/(n−p) and n > p,

and comparing W^{1,p} and C^α one finds 1 − n/p ≥ α if n < p. This is just a way of remembering the constants. In case of equality of the κ's an imbedding might hold or just fail. The optimal estimates appear in the theorems above.

Exercise 33 Show that u defined on B₁ = {x ∈ Rⁿ; |x| < 1} with n > 1 by u(x) = log(log(1 + 1/|x|)) belongs to W^{1,n}(B₁). It does not belong to L^∞(B₁).

Exercise 34 Set Ω = {x ∈ Rⁿ; |x| < 1} and define u_β(x) = |x|^β for x ∈ Ω. Compute for which β it holds that:

1. u_β ∈ L^q(Ω);
2. u_β ∈ C^{0,α}(Ω̄);
3. u_β ∈ W^{1,p}(Ω).
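A numerical illustration of Theorem 2.5.1 with n = 2 and p = 1, so that pn/(n−p) = 2 (my addition; the constant 1/(2√π) used below is the value commonly quoted as optimal for this case, attained in the limit by characteristic functions of balls — treat it as an assumption of this sketch):

```python
import math

def radial_integral(g, r_max=1.0, n=20000):
    """Integral over R^2 of a radial function g(|x|): 2*pi * int_0^rmax g(r) r dr."""
    h = r_max / n
    return 2.0 * math.pi * h * sum(g((i + 0.5) * h) * ((i + 0.5) * h)
                                   for i in range(n))

# tent function u(x) = max(0, 1 - |x|): |grad u| = 1 on the unit disk, 0 outside
lhs = radial_integral(lambda r: (1.0 - r) ** 2) ** 0.5   # ||u||_{L^2(R^2)}
rhs = radial_integral(lambda r: 1.0)                     # ||grad u||_{L^1(R^2)}

C = 1.0 / (2.0 * math.pi ** 0.5)   # assumed sharp constant for n = 2, p = 1
print(lhs, C * rhs)
```

Here ‖u‖_{L²} = √(π/6) ≈ 0.724 and C‖∇u‖_{L¹} = √π/2 ≈ 0.886, so the inequality holds with some slack for this non-extremal u.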
Exercise 35 Set Ω = {x ∈ Rⁿ; |x − (1, 0, ..., 0)| < 1} and define u_β(x) = |x|^β for x ∈ Ω. Compute for which β it holds that:

1. u_β ∈ L^q(∂Ω);
2. u_β ∈ C^{0,α}(∂Ω);
3. u_β ∈ W^{1,p}(∂Ω).

Proof of Theorem 2.5.1. Since u has a compact support we have (using f(t) = f(a) + ∫_a^t f′(s) ds)

  |u(x)| = | ∫_{−∞}^{x_i} (∂u/∂x_i)(x₁, ..., x_{i−1}, y_i, x_{i+1}, ..., x_n) dy_i | ≤ ∫_{−∞}^{∞} |∂u/∂x_i| dy_i,

so that

  |u(x)|^{n/(n−1)} ≤ ∏_{i=1}^n ( ∫_{−∞}^{∞} |∂u/∂x_i| dy_i )^{1/(n−1)}.   (2.13)

When integrating with respect to x₁ one finds that on the right side there are n − 1 integrals that are x₁-dependent and one integral that does not depend on x₁. So we can get out one factor for free and use a generalized Hölder inequality for the n − 1 remaining ones:

  ∫ |a₂ ⋯ a_n| dt ≤ ∏_{i=2}^n ( ∫ |a_i|^{n−1} dt )^{1/(n−1)}.

For (2.13) it gives us

  ∫_{−∞}^{∞} |u(x)|^{n/(n−1)} dx₁ ≤ ( ∫_{−∞}^{∞} |∂u/∂x₁| dy₁ )^{1/(n−1)} ∏_{i=2}^n ( ∫_{−∞}^{∞} ∫_{−∞}^{∞} |∂u/∂x_i| dy_i dx₁ )^{1/(n−1)}.

Repeating this argument one finds

  ∫∫ |u|^{n/(n−1)} dx₁ dx₂ ≤ ( ∫∫ |∂u/∂x₁| dy₁ dx₂ )^{1/(n−1)} ( ∫∫ |∂u/∂x₂| dy₂ dx₁ )^{1/(n−1)} ∏_{i=3}^n ( ∫∫∫ |∂u/∂x_i| dy_i dx₁ dx₂ )^{1/(n−1)},
and after n steps

  ‖u‖_{L^{n/(n−1)}}^{n/(n−1)} ≤ ∏_{i=1}^n ‖∂u/∂x_i‖_{L¹}^{1/(n−1)}, hence ‖u‖_{L^{n/(n−1)}} ≤ ∏_{i=1}^n ‖∂u/∂x_i‖_{L¹}^{1/n} ≤ ‖∇u‖_{L¹},   (2.14)

which completes the proof of Theorem 2.5.1 for p = 1. For p ∈ (1, n) one uses (2.14) with u replaced by |u|^α for appropriate α > 1. Indeed, again with Hölder and 1/p + 1/q = 1, we find

  ‖u‖_{L^{αn/(n−1)}}^α = ‖ |u|^α ‖_{L^{n/(n−1)}} ≤ ‖ ∇|u|^α ‖_{L¹} = α ‖ |u|^{α−1} ∇u ‖_{L¹}
    ≤ α ‖ |u|^{α−1} ‖_{L^q} ‖∇u‖_{L^p} = α ‖u‖_{L^{(α−1)q}}^{α−1} ‖∇u‖_{L^p}.

If we take α > 1 such that αn/(n−1) = (α − 1)q, that is α = p(n−1)/(n−p), we find ‖u‖_{L^{αn/(n−1)}} ≤ α ‖∇u‖_{L^p}. Since αn/(n−1) coincides with np/(n−p), the claim follows.

Proof of Theorem 2.5.2. Let us fix 0 < s ≤ r; we first will derive an estimate in B_r(0). Starting again with

  u(y) − u(0) = ∫₀¹ y · (∇u)(ty) dt

one finds

  ∫_{|y|=s} |u(y) − u(0)| dσ_y ≤ ∫_{|y|=s} ∫₀¹ |y| |(∇u)(ty)| dt dσ_y
    = s^{n−1} ∫_{|w|=1} ∫₀^s |∇u(tw)| dt dσ_w
    = s^{n−1} ∫_{|w|=1} ∫₀^s ( |∇u(tw)| / |tw|^{n−1} ) t^{n−1} dt dσ_w
    = s^{n−1} ∫_{|y|<s} ( |∇u(y)| / |y|^{n−1} ) dy ≤ s^{n−1} ∫_{|y|<r} ( |∇u(y)| / |y|^{n−1} ) dy,   (2.15)

and an integration with respect to s gives

  ∫_{|y|<r} |u(y) − u(0)| dy ≤ (rⁿ/n) ∫_{|y|<r} ( |∇u(y)| / |y|^{n−1} ) dy.   (2.16)

So, with ω_n the surface area of the unit ball in Rⁿ,

  (ω_n/n) rⁿ |u(0)| ≤ ∫_{|y|<r} |u(y) − u(0)| dy + ∫_{|y|<r} |u(y)| dy
    ≤ (rⁿ/n) ∫_{|y|<r} ( |∇u(y)| / |y|^{n−1} ) dy + ‖u‖_{L¹(B_r(0))}
    ≤ (rⁿ/n) ‖ |·|^{1−n} ‖_{L^{p/(p−1)}(B_r(0))} ‖∇u‖_{L^p(B_r(0))} + (ω_n/n)^{1−1/p} r^{n−n/p} ‖u‖_{L^p(B_r(0))}.   (2.17)
For p > n one finds

  ‖ |·|^{1−n} ‖_{L^{p/(p−1)}(B_r(0))} = ( ω_n ∫₀^r s^{(1−n)p/(p−1)} s^{n−1} ds )^{(p−1)/p} = ( ω_n / (1 − (n−1)/(p−1)) )^{(p−1)/p} r^{(p−n)/p}.

So

  |u(0)| ≤ C_{n,p} ( r^{(p−n)/p} ‖∇u‖_{L^p(B_r(0))} + r^{−n/p} ‖u‖_{L^p(B_r(0))} ).

Since 0 is an arbitrary point, taking r = 1 we may conclude that there is C = C_{n,p} such that for p > n:

  sup_{x∈Rⁿ} |u(x)| ≤ C ‖u‖_{W^{1,p}(Rⁿ)}.

A bound for the Hölder part of the norm remains to be proven. Let x and z be two points and fix m = ½x + ½z and r = ½|x − z|. As in (2.16), and using B_r(m) ⊂ B_{2r}(x) ∩ B_{2r}(z):

  (ω_n/n) rⁿ |u(x) − u(z)| = ∫_{|y−m|<r} |u(x) − u(z)| dy
    ≤ ∫_{|y−m|<r} |u(y) − u(x)| dy + ∫_{|y−m|<r} |u(y) − u(z)| dy
    ≤ ∫_{|y−x|<2r} |u(y) − u(x)| dy + ∫_{|y−z|<2r} |u(y) − u(z)| dy
    ≤ ( (2r)ⁿ / n ) ( ∫_{|y−x|<2r} |∇u(y)| / |y − x|^{n−1} dy + ∫_{|y−z|<2r} |∇u(y)| / |y − z|^{n−1} dy ),

and, reasoning as in (2.17),

  |u(x) − u(z)| ≤ C_{n,p} r^{(p−n)/p} ‖∇u‖_{L^p(Rⁿ)} = 2^{n/p−1} C_{n,p} |x − z|^{1−n/p} ‖∇u‖_{L^p(Rⁿ)}.

So

  |u(x) − u(z)| / |x − z|^α ≤ C̃_{n,p} |x − z|^{1−n/p−α} ‖∇u‖_{L^p(Rⁿ)},

which is bounded when x → z if α ≤ 1 − n/p.
Week 3

Some new and old solution methods I

3.1 Direct methods in the calculus of variations

Whenever the problem is formulated in terms of an energy or some other quantity that should be minimized, one could try to skip the derivation of the Euler-Lagrange equation. Instead of trying to solve the corresponding boundary value problem one could try to minimize the functional directly. Let E be the functional that we want to minimize. In order to do so we need the following:

A. There is some open bounded set K of functions and numbers E₀ < E₁ such that:

(a) the functional on K is bounded from below by E₀: for all u ∈ K : E(u) ≥ E₀;
(b) on the boundary of K the functional is bounded from below by E₁: for all u ∈ ∂K : E(u) ≥ E₁;
(c) somewhere inside K the functional has a value in between: there is u₀ ∈ K : E(u₀) < E₁.

Then Ẽ := inf_{u∈K} E(u) exists and hence we may assume that there is a minimizing sequence {u_n}_{n=1}^∞ ⊂ K, that is, a sequence such that

  E(u₁) ≥ E(u₂) ≥ E(u₃) ≥ ⋯ ≥ Ẽ and lim_{n→∞} E(u_n) = Ẽ.

This does not yet imply that there is a function ũ such that E(ũ) = inf_{u∈K} E(u).
B. In order that {u_n} leads us to a minimizer, the following three properties would be helpful:

(a) the function space should be a Banach space that fits with the functional: there should be enough functions such that a minimizer is among them;
(b) the sequence {u_n}_{n=1}^∞ converges, or at least has a convergent subsequence, in some topology (compactness of the set K);
(c) if a sequence {u_n}_{n=1}^∞ converges to u_∞ in this topology, then it should also hold that lim_{n→∞} E(u_n) = E(u_∞) or, which is sufficient: lim_{n→∞} E(u_n) ≥ E(u_∞).

After this heuristic introduction let us try to give some sufficient conditions.

Definition 3.1.1 A function ũ ∈ K such that E(ũ) = inf_{u∈K} E(u) is called a minimizer for E on K. Sometimes there is just a local minimizer.

Definition 3.1.2 Let (X, ‖·‖) be a function space and suppose that E : X → R is such that for some f ∈ C(R⁺, R) with lim_{t→∞} f(t) = ∞ it holds that

  E(u) ≥ f(‖u‖);   (3.1)

then E is called coercive.

Now let us see how the conditions in A follow. First let us remark that the properties mentioned under A all follow from coercivity. So let (X, ‖·‖) be a function space such that (3.1) holds. The function f has a lower bound which we may use as E₀. Next take any function u₀ ∈ X and take any E₁ > E(u₀). Since lim_{t→∞} f(t) = ∞ we may take M such that f(t) ≥ E₁ for all t ≥ M and set K = {u ∈ X; ‖u‖ < M}. Notice that the coercivity condition implies that we cannot use a norm with higher order derivatives than the ones appearing in the functional.
Obviously W^{m,p}(Ω) is not finite dimensional, so bounded sets are not precompact: the minimizing sequence in K, although bounded, has no reason to be convergent in the norm of W^{m,p}(Ω). The result that saves us is the following.

Theorem 3.1.3 A bounded sequence {u_k}_{k=1}^∞ in a reflexive Banach space has a weakly convergent subsequence.

This theorem can be found for example in H. Brezis: Analyse Fonctionnelle (Théorème III.27). And yes, the Sobolev spaces W^{m,p}(Ω) and W₀^{m,p}(Ω) with 1 < p < ∞ are reflexive.

Definition 3.1.4 Let (X, ‖·‖) be a Banach space. The sequence {u_k} converges weakly to u in X, in symbols u_k ⇀ u in X, means φ(u_k) → φ(u) for all φ ∈ X′, the bounded linear functionals on X.

For the Sobolev space W^{m,p}(Ω) the statement u_{n_k} ⇀ u_∞ in W^{m,p}(Ω) coincides with

  (∂/∂x)^α u_{n_k} ⇀ (∂/∂x)^α u_∞ weakly in L^p(Ω)

for all multiindices α = (α₁, ..., α_n) ∈ Nⁿ with |α| = α₁ + ⋯ + α_n ≤ m, where (∂/∂x)^α = (∂/∂x₁)^{α₁} ⋯ (∂/∂x_n)^{α_n}.

So although the minimizing sequence {u_n}_{n=1}^∞ is not strongly convergent, it has at least a subsequence {u_{n_k}}_{k=1}^∞ that weakly converges. Let us call u_∞ this weak limit in W^{m,p}(Ω). The final touch comes from the assumption that E is 'sequentially weakly lower semicontinuous'. The functional E : X → R is called sequentially weakly lower semicontinuous if

  u_k ⇀ u weakly in X implies E(u) ≤ lim inf_{k→∞} E(u_k).   (3.2)

We will collect the result discussed above in a theorem.

Theorem 3.1.5 Let E be a functional on W^{m,p}(Ω) (or W₀^{m,p}(Ω)) with 1 < p < ∞ which is:

1. coercive;
2. sequentially weakly lower semicontinuous.

Then there exists a minimizer ũ of E.

Proof. The assumptions in A are satisfied: the coercivity condition gives us an open bounded set of functions K, numbers E₀, E₁, a function u₀ ∈ K, and hence a minimizing sequence in K. Theorem 3.1.3 supplies us with a weak limit u_∞ ∈ K̄ of a subsequence of the minimizing sequence, which is also a minimizing sequence itself.
Indeed, consider K = {u ∈ W^{m,p}(Ω); 'boundary conditions' and ‖u‖_{W^{m,p}(Ω)} < M}. The coercivity condition gives us numbers E₀, E₁, such a bounded set K, a function u₀ ∈ K and hence a minimizing sequence in K. Since the Sobolev spaces such as W^{m,p}(Ω) with 1 < p < ∞ are reflexive, Theorem 3.1.3 supplies us with a weak limit u∞ of a subsequence of the minimizing sequence, which is also a minimizing sequence itself (so inf_{u∈K} E(u) ≤ E(u∞)). And the sequential weak lower semicontinuity yields E(u∞) ≤ inf_{u∈K} E(u), so u∞ is a minimizer.
Remark 3.1.6 These two conditions are sufficient, but neither of them is necessary and most of the time they are too restrictive. Often one is only concerned with minimizing a functional E on some subset of X. If you are interested in variational methods please have a look at Chapter 8 in Evans' book or any other book concerned with direct methods in the Calculus of Variations.

Example 3.1.7 Let Ω be a bounded domain in ℝⁿ and suppose we want to minimize E(u) = ∫_Ω (½|∇u|² − f u) dx for functions u that satisfy u = 0 on ∂Ω. The candidate for the reflexive Banach space is W₀^{1,2}(Ω) with the norm ‖·‖ defined by

‖u‖_{W₀^{1,2}} := ( ∫_Ω |∇u|² dx )^{1/2}.

Indeed this is a norm by Poincaré's inequality: there is C ∈ ℝ⁺ such that ∫_Ω u² dx ≤ C ∫_Ω |∇u|² dx, and hence ‖u‖_{W₀^{1,2}} ≤ ‖u‖_{W^{1,2}} ≤ √(1+C) ‖u‖_{W₀^{1,2}}. Since W₀^{1,2}(Ω) with ⟨u, v⟩ = ∫_Ω ∇u·∇v dx is even a Hilbert space, we may identify W₀^{1,2}(Ω) and its dual (W₀^{1,2}(Ω))′.

1. The functional E is coercive: using the inequality of Cauchy–Schwarz and Poincaré's inequality,

E(u) = ∫_Ω (½|∇u|² − f u) dx ≥ ½‖u‖²_{W₀^{1,2}} − (∫_Ω u² dx)^{1/2} (∫_Ω f² dx)^{1/2} ≥ ½‖u‖²_{W₀^{1,2}} − √C ‖u‖_{W₀^{1,2}} ‖f‖_{L²} ≥ ¼‖u‖²_{W₀^{1,2}} − C‖f‖²_{L²},

where at the end we used ab ≤ εa² + (4ε)⁻¹b² (indeed 0 ≤ (ε^{1/2}a − ε^{−1/2}b)² implies 2ab ≤ εa² + ε⁻¹b²).

2. The functional E is sequentially weakly lower semicontinuous. If un ⇀ u weakly in W₀^{1,2}(Ω), then

E(un) − E(u) = ∫_Ω (½|∇un|² − ½|∇u|² − f(un − u)) dx
= ∫_Ω (½|∇un − ∇u|² + (∇un − ∇u)·∇u − f(un − u)) dx
≥ ∫_Ω ((∇un − ∇u)·∇u − f(un − u)) dx → 0.

Here we also used that v ↦ ∫_Ω f v dx ∈ (W₀^{1,2}(Ω))′.

Hence the minimizer exists.
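The minimizer of this example satisfies the Euler–Lagrange equation −∆u = f. A minimal numerical sketch (an illustration, not part of the notes; the one-dimensional analogue on Ω = (0,1) with f ≡ 1 is an assumed simplification, for which the exact minimizer is u(x) = x(1−x)/2):

```python
import numpy as np

# Discretize E(u) = ∫ (1/2 |u'|^2 - f u) dx on (0,1) with u(0) = u(1) = 0.
# The critical point of the discrete energy solves the standard
# finite-difference system A u = f with A the tridiagonal 1-d Laplacian.
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.ones(N - 1)                      # f ≡ 1 (illustrative choice)

main = 2.0 * np.ones(N - 1) / h**2
off = -np.ones(N - 2) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

u = np.concatenate(([0.0], np.linalg.solve(A, f), [0.0]))

def energy(v):
    """Discrete analogue of E for f ≡ 1."""
    dv = np.diff(v) / h
    return np.sum(0.5 * dv**2) * h - np.sum(v[1:-1]) * h

u_exact = x * (1.0 - x) / 2.0
```

The discrete minimizer coincides with u(x) = x(1−x)/2 at the grid points (the second-order scheme is exact on quadratics), and its energy lies below E(0) = 0, as coercivity plus minimality suggest.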
In this example we have used zero Dirichlet boundary conditions, which allowed us to use the well-known function space W₀^{1,2}(Ω). For nonzero boundary conditions a way out is to consider W₀^{1,p}(Ω) + g, where g is an arbitrary W^{1,p}(Ω)-function that satisfies the boundary conditions.
3.2 Solutions in flavours

Often one obtains a weak solution and then one has to do some work in order to show that the solution has more regularity properties: that it is a solution in the strong sense, a classical solution or even a C^∞ solution. Let us fix for the moment the problem

−∆u = f in Ω, u = 0 on ∂Ω.   (3.3)

Definition 3.2.1 If u ∈ W₀^{1,2}(Ω) is such that ∫_Ω (∇u·∇η − f η) dx = 0 for all η ∈ W₀^{1,2}(Ω), then u is called a weak solution of (3.3).

Definition 3.2.2 If u ∈ W₀^{1,2}(Ω) ∩ W^{2,2}(Ω) is such that −∆u = f in L²(Ω)-sense, then u is called a strong solution of (3.3).

Definition 3.2.3 If a strong solution even satisfies u ∈ C²(Ω̄), and hence the equations in (3.3) hold pointwise, u is called a classical solution.

In Example 3.1.7 we have seen that we found a minimizer u ∈ W₀^{1,2}(Ω), and since it is the minimizer of a differentiable functional, this minimizer satisfies ∫_Ω (∇u·∇η − f η) dx = 0 for all η ∈ W₀^{1,2}(Ω); it is a weak solution. If we knew that u ∈ W₀^{1,2}(Ω) ∩ W^{2,2}(Ω), an integration by parts would show ∫_Ω (−∆u − f) η dx = 0 for all η ∈ W₀^{1,2}(Ω), and hence −∆u = f in L²(Ω)-sense, that is, u would be a strong solution. In the remainder we will consider an example that shows such upgrading is not an automated process.

Example 3.2.4 Suppose that we want to minimize

E(y) = ∫₀¹ (1 − (y′(x))²)² dx

with y(0) = y(1) = 0. The appropriate Sobolev space should be W₀^{1,4}(0,1), and indeed E is coercive:

E(y) = ∫₀¹ (1 − 2(y′)² + (y′)⁴) dx ≥ ∫₀¹ (−1 + ½(y′)⁴) dx = ½‖y‖⁴_{W₀^{1,4}} − 1.

Here we used 2a² ≤ ½a⁴ + 2 (indeed 0 ≤ (ε^{−1/2}a − ε^{1/2}b)² implies 2ab ≤ ε⁻¹a² + εb²).
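This energy is easy to evaluate numerically for candidate functions. A sketch (the discretization and the candidates are illustrative choices): it evaluates the discretized energy for y = 0, for y(x) = (1/π) sin πx, and for the sawtooth function y(x) = ½ − |x − ½|:

```python
import numpy as np

def energy(y, h):
    """Discretized E(y) = ∫_0^1 (1 - (y')^2)^2 dx using forward differences."""
    dy = np.diff(y) / h
    return np.sum((1.0 - dy**2) ** 2) * h

N = 1000                       # even, so x = 1/2 is a grid point
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

E_zero = energy(np.zeros_like(x), h)           # y = 0
E_sin  = energy(np.sin(np.pi * x) / np.pi, h)  # y = (1/pi) sin(pi x)
E_saw  = energy(0.5 - np.abs(x - 0.5), h)      # sawtooth with slopes ±1
```

One finds E ≈ 1 for y = 0, E ≈ ∫₀¹ sin⁴(πx) dx = 3/8 for the sine candidate, and E ≈ 0 for the sawtooth, whose slope is ±1 almost everywhere.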
We will skip the sequential weak lower semicontinuity and just assume that there is a minimizer y in W₀^{1,4}(0,1). For such a minimizer we'll find through (∂/∂τ) E(y + τη)|_{τ=0} = 0 that

∫₀¹ (−4y′ + 4(y′)³) η′ dx = 0 for all η ∈ W₀^{1,4}(0,1).

Assuming that such a minimizer is a strong solution, y ∈ W^{2,4}(0,1), we may integrate by parts to find the Euler–Lagrange equation:

(1 − 3(y′)²) y″ = 0.

So y″ = 0 or (y′)² = ⅓. A closer inspection gives y(x) = ax + b, and plugging in the boundary conditions y(0) = y(1) = 0 we come to y(x) = 0. Next let us compute some energies: E(0) = ∫₀¹ (1 − 0)² dx = 1, and E(y) for a special function y in the next exercise.

Exercise 36 Let E be as in the previous example and compute E(y) for y(x) = (1/π) sin πx.

Notice that E(y) ≥ 0 for any y ∈ W₀^{1,4}(0,1), so if we would find a function y such that E(y) = 0 we would have a minimizer. Here are the graphs of a few candidates:

[Figure: graphs of several sawtooth-like candidate functions on (0,1).]

Exercise 37 Check that the function y defined by y(x) = ½ − |x − ½| is indeed in W₀^{1,4}(0,1). If you did get E(y) = 3/8 < 1 = E(0) in the previous exercise, then 0 is not the minimizer. What is wrong here? And why is y not in W^{2,4}(0,1)? For those unfamiliar with derivatives in a weaker sense see the following remark.

Remark 3.2.5 If a function y is in C²(0,1), then we know y′ and y″ pointwise in (0,1) and we compute the integrals ‖y^{(i)}‖_{L⁴(0,1)}, i ∈ {0, 1, 2}, in order to find whether or not y ∈ W^{2,4}(0,1). But if y is not in C²(0,1) we cannot directly compute ∫₀¹ (y″)⁴ dx. So how is y″ defined? Remember that for differentiable functions y and φ an integration by parts shows

∫₀¹ y′ φ dx = −∫₀¹ y φ′ dx.

One uses this result to define derivatives in the sense of distributions. Remember that we wrote C₀^∞(ℝⁿ) for the infinitely differentiable functions with compact support in ℝⁿ. The distributions that we consider are bounded linear operators from C₀^∞(ℝⁿ) to ℝ.
Definition 3.2.6 If y is some function defined on ℝⁿ, then (∂/∂x)^α y in (C₀^∞(ℝⁿ))′ is the distribution V such that

V(φ) = (−1)^{|α|} ∫_{ℝⁿ} y(x) (∂/∂x)^α φ(x) dx.

Example 3.2.7 The derivative of the signum function is twice the Dirac-delta function. The signum function is defined by

sign(x) = 1 if x > 0, 0 if x = 0, −1 if x < 0.

This Dirac-delta function is not a true function but a distribution: δ(φ) = φ(0) for all φ ∈ C₀^∞(ℝ). We find, calling 'sign′ = V', that for φ with compact support:

V(φ) = −∫_ℝ sign(x) φ′(x) dx = ∫_{−∞}^0 φ′(x) dx − ∫₀^∞ φ′(x) dx = 2φ(0) = 2δ(φ).

Exercise 38 A functional for the energy that belongs to a rectangular plate lying on three supporting walls, clamped on one side and being pushed down by some weight, is

E(u) = ∫_R (½(∆u)² − f u) dxdy.

Here R = (0, ℓ) × (0, b); 'supported' means the deviation u satisfies u(x, 0) = u(x, b) = u(ℓ, y) = 0, and 'clamped' means u(0, y) = (∂/∂x) u(0, y) = 0. Compute the boundary value problem that comes out.

Exercise 39 Let us consider the following functional for the energy of a rectangular plate lying on two supporting walls and being pushed down by some weight:

E(u) = ∫_R (½(∆u)² − f u) dxdy.

Here R = (0, ℓ) × (0, b), and 'supported' means the deviation u satisfies u(0, y) = u(ℓ, y) = 0. Compute the boundary value problem that comes out.
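The computation in Example 3.2.7 above can be reproduced symbolically. A sketch (illustrative; φ(x) = e^{−x²} is an assumed stand-in for a test function — it is not compactly supported, but it decays fast enough for the boundary terms to vanish):

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2)            # stand-in for a test function in C_0^infty(R)

# V(phi) = -∫ sign(x) phi'(x) dx, split at 0 exactly as in the example:
V = -(sp.integrate(sp.diff(phi, x), (x, 0, sp.oo))
      - sp.integrate(sp.diff(phi, x), (x, -sp.oo, 0)))

two_delta = 2 * phi.subs(x, 0)   # 2 δ(phi) = 2 phi(0)
```

Both evaluate to 2, in agreement with sign′ = 2δ applied to this φ.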
In the last exercises we came up with a weak solution, that is, u ∈ W_{bc}(R) satisfying

∫_R (∆u ∆η − f η) dxdy = 0 for all η ∈ W_{bc}(R).

For the source term we assume f ∈ L²(R). Starting from such a weak solution one may try to use 'regularity results' in order to find that the u satisfying the weak formulation is in fact a strong solution: for f ∈ L²(R) it holds that u ∈ W^{4,2}(R) ∩ W_{bc}(R). Although, since the problems above are linear, one might expect that such regularity results are standard available in the literature, these are by no means easy to find. On the contrary: especially higher order problems and corners form a research area by themselves. A good starting point would be the publications of P. Grisvard.

Exercise 40 Consider both for Exercise 38 and for Exercise 39 the variational problem E : W_{bc}(R) → ℝ, where

W_{bc}(R) := closure of {u ∈ C^∞(R̄); boundary conditions} in the ‖·‖_{W(R)}-norm,

with ‖u‖_{W(R)} = ‖u‖_{L²(R)} + ‖∆u‖_{L²(R)}.
1. Is the functional E coercive?
2. Is E sequentially weakly lower semicontinuous?

Exercise 41 Show that the equation for Exercise 39 can be written as a system of two second order problems, one for u and one for v = −∆u. For those who have seen Hopf's boundary Point Lemma: can this system be solved for positive f?

Exercise 42 Suppose that we replace both in Exercise 38 and in Exercise 39 the energy by

E(u) = ∫_R (½(∆u)² + ½δ|∇u|² − f u) dxdy.

Compute the boundary conditions for both problems. Can you comment on the changes that appear?

Exercise 43 What about uniqueness for the problems in Exercises 38 and 39?

Exercise 44 Is it true that W_{bc}(R) ⊂ W^{2,2}(R), both for Exercise 38 and for Exercise 39?

Exercise 45 Let u ∈ W^{4,2}(R) ∩ W_{bc}(R) be a solution for Exercise 39. Show that for all nonzero η ∈ W_{bc}(R) the second variation (∂/∂τ)² E(u + τη)|_{τ=0} is positive, but not necessarily strictly positive.
In the case that f has a stronger regularity, say f ∈ C^α(R̄), it may even be possible that the solution satisfies u ∈ C^{4,α}(R̄).
3.3 Preliminaries for Cauchy–Kowalevski

3.3.1 Ordinary differential equations

One equation. One of the classical ways of finding solutions for an o.d.e. is by trying to find solutions in the form of a power series. For the nth order o.d.e.

y^{(n)}(x) = f(x, y(x), ..., y^{(n−1)}(x))   (3.4)

with initial values y(x₀) = y₀, y′(x₀) = y₁, ..., y^{(n−1)}(x₀) = y_{n−1}, the higher order coefficients follow by differentiation and filling in the values already known:

y^{(n)}(x₀) = yₙ := f(x₀, y₀, ..., y_{n−1}),
y^{(n+1)}(x₀) = y_{n+1} := (∂f/∂x)(x₀, y₀, ..., y_{n−1}) + Σ_{k=0}^{n−1} y_{k+1} (∂f/∂y_k)(x₀, y₀, ..., y_{n−1}),

etc. To have at most one solution by the Taylor series near x = x₀ one needs to fix y(x₀), y′(x₀) up to y^{(n−1)}(x₀). If f is appropriate, and if we can show the convergence, we find a solution by its Taylor series

y(x) = Σ_{k=0}^∞ (x − x₀)^k / k! · y^{(k)}(x₀).   (3.5)

The advantage of such an approach is that the computation of the Taylor series is almost automatical. It is well known, however, that such an approach has severe shortcomings. First of all the problem itself needs to fit the form of a power series: f needs to be a real analytic function. Secondly, a power series usually has a rather restricted area of convergence.

Remark 3.3.1 For those who are not too familiar with ordinary differential equations: a sufficient condition for existence and uniqueness of a solution for (3.4) with prescribed initial conditions is (x, p) ↦ f(x, p) being continuous and p ↦ f(x, p) Lipschitz-continuous.

Multiple equations. A similar result holds for systems of o.d.e.'s:

y^{(n)}(x) = f(x, y(x), ..., y^{(n−1)}(x)),   (3.6)

where y : ℝ → ℝᵐ and f is a vector function. The computation of the coefficients will be more involved, but one may convince oneself that, given the (vector) values of y(x₀), y′(x₀), ..., y^{(n−1)}(x₀), the higher values will follow as before and the Taylor series in (3.5) will be the same with y instead of y.
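The recursion for the Taylor coefficients — differentiate the equation and substitute the values already known — is easy to mechanize. A sketch for a first order example (the equation y′ = x + y with y(0) = 1 is an illustrative choice, not taken from the notes; its exact solution is y(x) = 2eˣ − x − 1):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

f = x + y(x)                       # right-hand side of y' = f(x, y)
x0, y0 = sp.Integer(0), sp.Integer(1)

# exprs[k] is an expression for y^(k) in terms of x and y only
exprs = [y(x), f]
for k in range(2, 7):
    # differentiate, then replace every y' that appears by f again
    nxt = sp.diff(exprs[-1], x).subs(sp.Derivative(y(x), x), f)
    exprs.append(sp.expand(nxt))

coeffs = [e.subs(y(x), y0).subs(x, x0) / sp.factorial(k)
          for k, e in enumerate(exprs)]
taylor = sum(c * x**k for k, c in enumerate(coeffs))
```

The partial sum agrees with the Taylor expansion of the exact solution up to order 6.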
f (x. (3..7) for k = 0.8) would allow.7) we would ﬁnd the problem ½ 0 u (x) = g(u(x).. . . 1 .. So we have found an initial value problem for an autonomous ﬁrst order system ½ 0 u (x) = g(u(x)). . 2. p) = ∞ X u(k) (x0 .7) u(x0 ) = u0 .From higher order to ﬁrst order autonomous The trick to go from higher order to ﬁrst order is well known.d. y1 0 0 0 .e. . Taylor series with multiple variables Before we consider partial diﬀerential equations let us recall the Taylor series in multiple dimensions. .3. Such a problem would give a solution of the type u(x. .. For an analytic function v : Rn → R it reads as v (x) = ∞ X X α=k α∈Nn k=0 ¡ ¢ ∂ α v (x0 ) ∂x α! 53 (x − x0 )α . .2 Partial diﬀerential equations The idea of CauchyKowalevski could be phrased as: let us compute the solution of the p. Exercise 46 Compute u(k) (x0 ) from (3. . One could think of p as the remaining coordinates in Rn .4) or system in (3. Setting u = (y.. Just add the equation u0 (x) = 1 with un+1 (x0 ) = x0 .8) u(x0 ) = u0 (p). or in the system of o. Remark 3. y 0 .d. (3. y (n−1) ) the nth order equation in (3. 1. 3 in terms of g and the initial conditions. . . u+ u0 = . We ﬁnd un+1 (x) = x and by replacing n+1 x by un+1 in the o. u) yn−1 0 ··· 0 0 Finally a simpel addition will make the system autonomous.2 If we include some parameter dependance in (3.6) changes in 0 1 ··· 0 0 y0 . p). .e. p) itself could be power series in p if the dependance of p in (3. .d. p) i=0 k! (x − x0 )k and the u(k) (x0 . with u(x0 ) = . Note that u0 (p) would prescribe the values for u on a hypersurface of dimension n − 1.e. by a Taylorseries.3. The complicating factor becomes that it is not so clear what initial values one should impose. 3.’s the system becomes autonomous.
Here α! = α₁!α₂!···αₙ! and

(∂/∂x)^α = (∂^{α₁}/∂x₁^{α₁}) (∂^{α₂}/∂x₂^{α₂}) ··· (∂^{αₙ}/∂xₙ^{αₙ}),
(x − x₀)^α = (x₁ − x₁,₀)^{α₁} (x₂ − x₂,₀)^{α₂} ··· (xₙ − xₙ,₀)^{αₙ}.

The α ∈ ℕⁿ that appears here is called a multi-index.

Exercise 47 Take your favourite function f of two variables and compute the Taylor polynomial of degree 3 in (0, 0):

Σ_{k=0}^3 Σ_{|α|=k} ((∂/∂x)^α f)(0, 0) / α! · x^α.

Several flavours of nonlinearities. A linear partial differential equation of order m is of the form

Σ_{|α|≤m} A_α(x) (∂/∂x)^α u(x) = f(x),   (3.9)

and the A_α(x) with |α| = m should not all disappear. Every p.d.e. that cannot be written as in (3.9) is not linear (= nonlinear?). Nevertheless one sometimes makes a distinction.

Definition 3.3.3
1. An mth order semilinear p.d.e. is as follows:
Σ_{|α|=m} A_α(x) (∂/∂x)^α u(x) = f(x, u, ∂u/∂x₁, ..., ∂^{m−1}u/∂xₙ^{m−1}).
2. An mth order quasilinear p.d.e. is as follows:
Σ_{|α|=m} A_α(x, u, ∂u/∂x₁, ..., ∂^{m−1}u/∂xₙ^{m−1}) (∂/∂x)^α u(x) = f(x, u, ∂u/∂x₁, ..., ∂^{m−1}u/∂xₙ^{m−1}).
The remaining nonlinear equations are the genuine nonlinear ones.

Example 3.3.4 Here are some nonlinear equations:
• The Korteweg–de Vries equation: u_t + u u_x + u_{xxx} = 0.
• The minimal surface equation: ∇ · ( ∇u / √(1 + |∇u|²) ) = 0.
• The Monge–Ampère equation: u_{xx} u_{yy} − u²_{xy} = f.
One could call them respectively semilinear, quasilinear and 'strictly' nonlinear.
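For Exercise 47 above, the multi-index formula can be evaluated mechanically. A sketch with f(x₁, x₂) = e^{x₁} sin x₂ as an arbitrary 'favourite function' (an illustrative choice, not prescribed in the notes):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.exp(x1) * sp.sin(x2)    # illustrative choice of function

# Degree-3 Taylor polynomial at (0,0): sum over |alpha| <= 3 of
# (d/dx)^alpha f (0,0) / alpha!  *  x^alpha
poly = sp.Integer(0)
for a1 in range(4):
    for a2 in range(4 - a1):
        c = f.diff(x1, a1).diff(x2, a2).subs({x1: 0, x2: 0})
        poly += c / (sp.factorial(a1) * sp.factorial(a2)) * x1**a1 * x2**a2
```

For this f one obtains x₂ + x₁x₂ + ½x₁²x₂ − x₂³/6, as one can check by multiplying the one-variable series of eˣ¹ and sin x₂.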
3.3.3 The Cauchy problem

As we have seen for o.d.e.'s, an mth order ordinary differential equation needs m initial conditions in order to have a unique solution. A standard set of such initial conditions consists of giving y, y′, ..., y^{(m−1)} a fixed value. The analogon of such an 'initial value' problem for an mth order partial differential equation in ℝⁿ would be to describe all derivatives up to order m − 1 on some (n−1)-dimensional manifold. This is a bit too much. Indeed, if one prescribes y on a smooth manifold M, then also all tangential derivatives are determined. Similarly, if ∂y/∂ν, the normal derivative on M, is given, then also all tangential derivatives of ∂y/∂ν are determined. Etcetera. So it will be sufficient to fix all normal derivatives from order 0 up to order m − 1.

Definition 3.3.5 The Cauchy problem for the mth order partial differential equation F(x, y, ∂y/∂x₁, ..., ∂ᵐy/∂xₙᵐ) = 0 with respect to the smooth (n−1)-dimensional manifold M is the following:

F(x, y, ∂y/∂x₁, ..., ∂ᵐy/∂xₙᵐ) = 0 in ℝⁿ,
y = φ₀ on M,
∂y/∂ν = φ₁ on M,
⋮
∂^{m−1}y/∂ν^{m−1} = φ_{m−1} on M,   (3.10)

where ν is the normal on M and where the φᵢ are given.

Exercise 48 Consider M = {(x₁, x₂); x₁² + x₂² = 1} with φ₀(x₁, x₂) = x₁ and φ₁(x₁, x₂) = x₂. Let ν be the outside normal direction on M. Suppose u is a solution of ∂²u/∂x₁² + ∂²u/∂x₂² = 1 in a neighborhood of M, say for 1 − ε < x₁² + x₂² < 1 + ε, that satisfies

u = φ₀ for x ∈ M, ∂u/∂ν = φ₁ for x ∈ M.

Compute ∂²u/∂ν² on M for such a solution.

Exercise 49 The same question, but with the differential equation replaced by ∂²u/∂x₁² − ∂²u/∂x₂² = 1.
. u. So we will use a more usefull concept that is found in the book of Evans. 0) = ϕ(x1 ) is given for x1 ∈ R. and set g (x1 +x2 . x2 ) for a given function g.2 A manifold M is called noncharacteristic for the diﬀerential equation α=m α∈Nn X Aα (x) µ ∂ ∂x ¶α µ ¶ ∂ ∂m u(x) = B x. x1 −x2 ) = u(x1 . That means we are not allowed to prescribe the function u on such a line and if we want the Cauchyproblem to be wellposed for a 1dmanifold (curve) we better take a curve with no tangential in that direction.13) Aα (x)ν α = 0. x1 ) + 1 ˜ ˜ g (s.e. m u .1 Consider ∂x1 u + ∂x2 u = g(x1 .4.11). u. The line (or curve) along which the p. takes the form of an o. 0) = ϕ( 1 y1 ) and this 2 is in general not compatible with (3. . ˜ 2 x1 which can be rewritten to a formula for u.d.d.4 Characteristics I So the Cauchy problem consists of an mth order partial diﬀerential equation for say u in (a subset of) Rn . For quasilinear diﬀerential equations a characteristic curve or manifold in general will even depend on the solution itself.12) if 56 . x2 ) and u(x1 +x2 . ∂ν u.e.d. x1 ) = ϕ(x1 ) is given for x1 ∈ R. We say ν is a singular direction for (3. ∂ν n−1 u given on M. ∂x1 ∂xn (3. is called a characteristic curve. y2 ) = u (x1 . an (n − 1)dimensional manifold M. . ˜ ˜ ∂y1 (3. One ﬁnds that the solution is constant on each line y2 = c. but if we take new coordinates y1 = x1 + x2 and y2 = x1 −x2 .11) If for example u (x1 . x1 −x2 ) = g(x1 .e. This is a ﬁrst order p. In higher dimensions and for higher order equations the notion of characteristic manifold becomes somewhat involved.e. then u(y1 . ˜ But if u(x1 . ∂ ∂ Example 3. Instead of giving a deﬁnition of characteristic curve or characteristic manifold we will give a deﬁnition of noncharacteristic manifolds.4. Deﬁnition 3. . y2 )ds. One may guess that the diﬀerential equation should be of order m in the νdirection. . 2 ∂ u = g. .3. 
We say that ν is a singular direction for (3.12) if Σ_{|α|=m} A_α(x) ν^α = 0.
Remark 3.4.3 The condition in (3.13) means that the coefficient of the differential equation of the mth order in the ν-direction does not disappear. In case (3.12) would be an o.d.e. in the ν-direction, we would 'loose' one order in the differential equation. The definition of noncharacteristic gives a condition for each point on the manifold. Also Sobolev's definition of characteristic does. So not characteristic and noncharacteristic are not identical.

Remark 3.4.4 Sobolev defines a surface given by F(x₁, ..., xₙ) = 0 to be characteristic for a 2nd order differential equation if, on changing from the variables x₁, ..., xₙ to new variables y₁ = F(x₁, ..., xₙ), y₂, ..., yₙ — with y₂ to yₙ arbitrary functions of x₁ to xₙ such that all the yᵢ are continuous and have first order derivatives and a nonzero Jacobian in a neighbourhood of the surface under consideration — it happens that the coefficient A₁₁ of ∂²/∂y₁² vanishes on this surface.

3.4.1 First order p.d.e.

Consider the first order semilinear partial differential equation

Σ_{i=1}^n Aᵢ(x) (∂/∂xᵢ) y(x) = B(x, y).   (3.14)

A solution is a function y : ℝⁿ → ℝ. If we consider a curve t ↦ x(t) that satisfies x′(t) = A(x(t)), then one finds for a solution y on this curve that

(∂/∂t) y(x(t)) = Σ_{i=1}^n xᵢ′(t) (∂/∂xᵢ) y(x) = Σ_{i=1}^n Aᵢ(x) (∂/∂xᵢ) y(x) = B(x(t), y(x(t))),

and hence that we may solve y on this curve when one value y(x₀) is given.

Definition 3.4.5 The solution t ↦ x(t) of x′(t) = A(x(t)) with x(0) = x₀ is called a characteristic curve for (3.14).

With this curve t ↦ x(t) known one obtains a second o.d.e., for t ↦ y(x(t)): set Y(t) = y(x(t)) and Y solves

Y′(t) = B(x(t), Y(t)), Y(0) = y(x₀).   (3.16)

The Cauchy problem for such a first order p.d.e. consists of prescribing initial data on a (smooth) (n−1)-dimensional surface. We will see that we cannot choose just any surface: if we would prescribe the initial values for the solution on an (n−1)-dimensional surface, we should take care that this surface does not intersect each of those characteristic curves more than once.
Indeed,

x′(t) = A(x(t)), x(0) = x₀   (3.15)

is an o.d.e. system with a unique solution for analytic A (Lipschitz-continuity is sufficient).
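The two o.d.e.'s (3.15)–(3.16) can be integrated numerically along a characteristic. A sketch (the choices A(x) = (1, 1), B(x, y) = x₁ + x₂ and data u(s, 0) = sin s are illustrative, not from the notes); for these choices the exact solution is u(x₁, x₂) = sin(x₁ − x₂) + (x₁ − x₂)x₂ + x₂²:

```python
import numpy as np

def along_characteristic(s, t_end, n=20000):
    """Integrate x'(t) = A(x) = (1, 1) and Y'(t) = B(x) = x1 + x2
    with x(0) = (s, 0), Y(0) = sin(s), by the explicit Euler method."""
    dt = t_end / n
    x1, x2 = s, 0.0
    Y = np.sin(s)
    for _ in range(n):
        Y += dt * (x1 + x2)    # Y' = B(x(t), Y)
        x1 += dt               # x' = A(x) = (1, 1)
        x2 += dt
    return x1, x2, Y

x1, x2, Y = along_characteristic(s=0.3, t_end=0.7)
exact = np.sin(x1 - x2) + (x1 - x2) * x2 + x2**2
```

The characteristic starting at (0.3, 0) reaches (1.0, 0.7), and the transported value Y agrees with the exact solution there up to the Euler discretization error.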
For a first order equation this means that x′(t) should not be tangential to this surface. In other words,

A(x(t)) · ν = x′(t) · ν ≠ 0,

where ν is the normal direction of the surface.

[Figure: characteristic lines and an appropriate surface with prescribed values.]

Exercise 50 Show that also for the first order quasilinear equations

Σ_{i=1}^n Aᵢ(x, y) (∂/∂xᵢ) y(x) = B(x, y)

one obtains a system of o.d.e.'s for the curve t ↦ x(t) and the solution t ↦ y(x(t)) along this curve.

3.4.2 Classification of second order p.d.e. in two dimensions

The standard form of a second order semilinear partial differential equation in two dimensions is

a u_{x₁x₁} + 2b u_{x₁x₂} + c u_{x₂x₂} = f(x, u, ∇u).   (3.17)

Here f is a given function and one is searching for a function u. We may write this second order differential operator as follows:

a ∂²/∂x₁² + 2b ∂²/∂x₁∂x₂ + c ∂²/∂x₂² = ∇ · ( (a b; b c) ∇ ),

and since the matrix is symmetric we may diagonalize it by an orthogonal matrix T:

Tᵀ (a b; b c) T = (λ₁ 0; 0 λ₂).

Definition 3.4.6 The equation in (3.17) is called:
u.4. hyperbolic respectively parabolic in x if the eigenvalues λ1 (x) and λ2 (x) are as above (freeze the coeﬃcients in x).17) is µ ¶ a b 1. Note that µ ¶ a(x) b(x) ∇· ∇u = b(x) c(x) = a(x) ux1 x1 + 2b(x) ux1 x2 + c(x) ux2 x2 + γ(x)ux1 + δ(x)ux2 γ= where ∂b ∂b ∂c ∂a + and δ = + .17) changes to ˜ λ1 uy1 y1 + λ2 uy2 y2 = f (y. n 1 59 . with B a nonsingular 2 × 2matrix. b and c do depend on x one says that the equation is elliptic. b c µ ¶ a b 2. ¡ ∂ ¢α Pα≤m Deﬁnition 3. ∇u). One ﬁnds that (3. ∇u) = c1 uz1 + c2 uz2 . . elliptic if and only if det > 0. u.8 For a linear p. b c Exercise 52 Show that under any regular change of coordinates y = Bx. u. operator Lu := u α∈Nn Aα (x) ∂x ∂ one introduces the symbol by replacing the derivation ∂xi by ξ i : symbolL (ξ) = α≤m α∈Nn X Aα (x) ξ α1 . the classiﬁcation of the diﬀerential equation in (3.• elliptic if λ1 and λ2 have the same (nonzero) sign.17) does not change. b c µ ¶ a b 3. hyperbolic if and only if det < 0. ξ αn . u. Exercise 51 Show that (3. Remark 3. ˆ • uz1 z1 = f (z. ∂x1 ∂x2 ∂x1 ∂x2 So the highest order doesn’t change.e. ˆ • a semilinear wave equation: uz1 z1 − uz2 z2 = f (z. . ∇u) . which can be turned into a heat equation when ˆ f (z. parabolic if and only if det = 0. u. • parabolic if λ1 = 0 or λ2 = 0. p Depending on the signs of λ1 and λ2 we may introduce a scaling yi = λi zi and come up with either one of the following three possibilities: ˆ • a semilinear Laplace equation: uz1 z1 + uz2 z2 = f (z. • hyperbolic if λ1 and λ2 have opposite (nonzero) signs. ∇u). For the constant coeﬃcients case one reduce the highest order coeﬃcients to standard form by taking new coordinates T x = y. ∇u).7 In the case that a.d.4.
Exercise 53 Show that for second order operators with constant coefficients in two dimensions:
1. The solutions of {ξ ∈ ℝ²; symbol_L(ξ) = k} form an ellipse for some k ∈ ℝ if and only if L is elliptic.
2. The solutions of {ξ ∈ ℝ²; symbol_L(ξ) = k} form a hyperbola for some k ∈ ℝ if and only if L is hyperbolic.
3. If the solutions of {ξ ∈ ℝ²; symbol_L(ξ) = k} form a parabola for some k ∈ ℝ, then L is parabolic.
4. If the solutions of {ξ ∈ ℝ²; symbol_L(ξ) = k} do not fit the descriptions above for any k ∈ ℝ, then L is not a real p.d.e.: it can be transformed into an o.d.e.
3.5 Characteristics II

3.5.1 Second order p.d.e. revisited

Let us consider the three standard equations that we obtained separately.

1. u_{x₁x₁} + u_{x₂x₂} = f(x, u, ∇u) for x ∈ ℝ². For an elliptic p.d.e. any manifold is noncharacteristic. We may take any line through a point x̄ and an orthonormal coordinate system with ν perpendicular and τ tangential to find in the new coordinates

u_{νν} + u_{ττ} = f̃(y, u, u_ν, u_τ).

So with x = y₁ν + y₂τ,

u_{y₂y₂} = −u_{y₁y₁} + f̃(y, u, u_{y₁}, u_{y₂}) for y ∈ ℝ².

Indeed, if we prescribe

u(y₁, 0) = φ(y₁) for y₁ ∈ ℝ,   (3.18)
u_{y₂}(y₁, 0) = ψ(y₁) for y₁ ∈ ℝ,   (3.19)

then from (3.18) we also know (∂/∂y₁)^k u(y₁, 0), and from (3.19) we know (∂/∂y₁)^k u_{y₂}(y₁, 0), for all k ∈ ℕ⁺. Using the differential equation we find

(∂/∂y₂)² u(y₁, 0),   (3.20)

and hence (∂/∂y₁)^k (∂/∂y₂)² u(y₁, 0) for all k ∈ ℕ⁺. Differentiating the differential equation with respect to y₂ and using the results from (3.19)–(3.20) we find

(∂/∂y₂)³ u(y₁, 0),   (3.21)

and hence (∂/∂y₁)^k (∂/∂y₂)³ u(y₁, 0) for all k ∈ ℕ⁺. By repeating these steps we will find all Taylor coefficients and, hoping the series converges, the solution. Going back through the transformation one finds that for any p.d.e. as in (3.17) that is elliptic the Cauchy problem looks well-posed. (We would like to say 'is well-posed', but since we didn't state the corresponding theorem ... )

2. u_{x₁x₁} − u_{x₂x₂} = f(x, u, ∇u). Taking new orthogonal coordinates by

y = ( cos α  sin α ; −sin α  cos α ) x,   (3.22)

the differential operator turns into

(cos²α − sin²α) ∂²/∂y₁² − 4 cos α sin α ∂²/∂y₁∂y₂ + (sin²α − cos²α) ∂²/∂y₂².
If sin²α ≠ cos²α, the equation may be written as

u_{y₂y₂} = u_{y₁y₁} + (4 cos α sin α / (sin²α − cos²α)) u_{y₁y₂} + f̂(y, u, u_{y₁}, u_{y₂}),

and if we prescribe u(y₁, 0) and u_{y₂}(y₁, 0), then we may proceed as for case 1 and find the coefficients of the Taylor series. On the other hand, if sin²α = cos²α, then we are left with

2 u_{y₁y₂} = ± f̃(y, u, u_{y₁}, u_{y₂}).

Prescribing just u(y₁, 0) is not sufficient, since then u_{y₂}(y₁, 0) is still unknown; but prescribing both u(y₁, 0) and u_{y₂}(y₁, 0) is too much.

Going back through the transformation one finds that any p.d.e. as in (3.17) that is hyperbolic has at each point two directions for which the Cauchy problem has a singularity if one of these two directions coincides with the manifold M. So the Cauchy problem

u_{x₂x₂} = u_{x₁x₁} + f(x, u, ∇u) for x ∈ ℝ², u(x) = φ(x) for x ∈ ℓ, u_ν(x) = ψ(x) for x ∈ ℓ,

with the data prescribed on a line ℓ, is (probably) only well-posed for ℓ not parallel to (1, 1) or (1, −1). For any other manifold (a curve in 2 dimensions) the Cauchy problem is (as we will see) well-posed.

2nd order hyperbolic in 2d: in each point there are two singular directions.

Exercise 54 Consider the one-dimensional wave equation (a p.d.e. in ℝ²)

u_{tt} = c² u_{xx} for some c > 0.

1. Write the Cauchy problem for u_{tt} = c² u_{xx} on M = {(x, t); t = 0}.

3. u_{x₁x₁} = f(x, u, u_{x₁}, u_{x₂}) for x ∈ ℝ². Doing the transformation as in (3.22) one finds

sin²α u_{y₂y₂} = 2 sin α cos α u_{y₁y₂} − cos²α u_{y₁y₁} + f̂(y, u, u_{y₁}, u_{y₂}).

The lines x₂ = c give the singular directions for this equation. For any 1-d manifold that crosses each of those lines exactly once, the Cauchy problem is (could be) well-posed.

2nd order parabolic in 2d: in each point there is one singular direction.
0) directly and from the diﬀerential equation indirectly and unless some special relation between f and uy2 holds two diﬀerent values will come out. u.
. xn τ n−1.5. The explicit formula for this Cauchy problem that comes out is named after d’Alembert. Now compute the f and g such that u is a solution of the Cauchy problem with general ϕ0 and ϕ1 .n . . . . Let us suppose that this manifold is given by the implicit equation Φ (x) = c. ∇Φ(¯) x In order to have a welldeﬁned Cauchy problem we should have a nonvanishing ∂2 ∂ν 2 u component in (3. . . . xn } and the new coordinates by {y1 .23) and we want to prescribe u and the normal derivative uν on some analytic n−1dimensional manifold.23). this is it: Z x+ct u(x. . . . . . . τ n−1 at x.2 Semilinear second order in higher dimensions α=2 α∈Nn That means we are considering diﬀerential equations of the form X Aα (x) µ ∂ ∂x ¶α u = f (x. . t). t) = f (x − ct) + g(x + ct). yn τ n−1. .1 τ 1. x−ct Exercise 55 Try the ﬁnd a solution for the Cauchy problem for utt +ut = c2 uxx on M = {(x. ∇u) (3.1 · · · τ n−1. . u. .¡∂ ¢¡ ∂ ¢ ∂ ∂ 2. We may do so by ﬁlling up the coordinate system with tangential directions τ 1 . .2 ··· τ 1. . = . y2 . . . Use utt − c2 uxx = ∂t − c ∂x ∂t + c ∂t to show that solutions of this onedimensional wave equation can be written as u (x. . .n yn and since this transformation matrix T is orthonormal y1 ν1 ν2 ··· νn y2 τ 1.1 y1 x1 x2 ν 2 τ 1. x2 .1 τ n−1. . . . .n ··· τ n−1. . 3.n . . Denoting the old coordinates by ¯ {x1 . t = 0}.2 · · · τ n−1.n ··· 63 one ﬁnds x1 x2 . So the normal in x is given by ¯ ν= ∇Φ(¯) x . .23) such that we recog∂2 nize the ∂ν 2 u component. . . t) = 1 ϕ0 (x + ct) + 1 ϕ0 (x − ct) + 2 2 1 2 ϕ1 (s)ds. . Going to new coordinates We want to rewrite (3. 3. . xn νn τ 1. . = . If you promise to make the exercise before continuing reading. . yn } we have ν 1 τ 1.n y2 . . .
Hence

∂/∂xᵢ = Σⱼ (∂yⱼ/∂xᵢ) ∂/∂yⱼ = νᵢ ∂/∂y₁ + Σ_{j=1}^{n−1} τ_{j,i} ∂/∂y_{j+1}.

So (3.23) turns into

Σ_{|α|=2} A_α(x) Π_{i=1}^n ( νᵢ ∂/∂y₁ + Σ_{j=1}^{n−1} τ_{j,i} ∂/∂y_{j+1} )^{αᵢ} u = f(x, u, ∇u),   (3.24)

and the factor in front of the ∂²u/∂y₁² component — which is our notation for the ∂²u/∂ν² component — at x̄ is

Σ_{|α|=2} A_α(x̄) ν^α = ν · M ν,   (3.25)

where M is the symmetric n×n matrix with entries Mᵢᵢ = A_{2eᵢ}(x̄) and Mᵢⱼ = ½ A_{eᵢ+eⱼ}(x̄) for i ≠ j (eᵢ the ith unit multi-index). Notice that this matrix does not depend on ν. Also remember that we have a well-defined Cauchy problem if ν · M ν ≠ 0.

Classification by the number of positive, negative and zero eigenvalues. Let us call the matrix in (3.25) M. Since it is symmetric we may diagonalize it and find a matrix with the eigenvalues λ₁, ..., λₙ on the diagonal. Let P be the number of positive eigenvalues and N the number of negative ones. There are three possibilities:

1. All eigenvalues of M have the same sign: (P, N) = (n, 0) or (0, n). In that case ν · M ν ≠ 0 for any ν with |ν| = 1. So any direction of the normal ν is fine and there is no direction that a manifold may not contain. Equation (3.23) is called elliptic.

2. There are eigenvalues equal to 0: P + N < n. Equation (3.23) is called parabolic.

3. There are negative and positive eigenvalues but no zero eigenvalues: P + N = n but 1 ≤ P ≤ n − 1. Then there are directions ν such that ν · M ν = 0. Equation (3.23) is called hyperbolic.

In dimension 3 with (N, P) = (1, 2) or (2, 1) one finds a cone of singular directions near x̄.
In dimension 3 with (N, P) = (0, 2) or (2, 0), the parabolic case, one finds one singular direction near x̄.
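The sign-count classification can be sketched in a few lines (an illustration, not part of the notes):

```python
import numpy as np

def classify_by_eigenvalues(M, tol=1e-12):
    """Classify the second order operator with symmetric
    coefficient matrix M by the signs of its eigenvalues."""
    lam = np.linalg.eigvalsh(M)
    P = int(np.sum(lam > tol))     # positive eigenvalues
    N = int(np.sum(lam < -tol))    # negative eigenvalues
    n = len(lam)
    if P == n or N == n:
        return "elliptic"
    if P + N < n:
        return "parabolic"
    return "hyperbolic"
```

For example, the Laplacian in three dimensions has M = I (elliptic), the wave operator has M = diag(1, 1, −1) (hyperbolic), and the highest-order part of the heat operator has M = diag(1, 1, 0) (parabolic).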
66 .
∂x1 u. are strictly nontangential.’s have singular directions. Theorem 4.d.d. Now for the theorem. ∂xm−1 u ∂x u = F x. .e.e. . 4.1 CauchyKowalevski Statement of the theorem Let us try to formulate some of the ‘impressions’ we might have got from the previous sections. .d.d. α=m 67 . For a wellposed Cauchy problem of a general p. ´ ¡ ¢α ³ ´ ³ X ∂ ∂ m−1 ∂ ∂ ∂ m−1 Aα x. . .) We consider the Cauchy problem for the quasilinear p.e. .Week 4 Some old and new solution methods II 4. 2.1. .e.e. .1 1.’s have an even order m. u.1 4. (Realvalued) elliptic p.e. µ ¶α X ∂ ∂ ∂ m−1 Aα (x) u + F (x.e. m−1 u) = 0 ∂x ∂x1 ∂xn α=m 3. The amount of singular directions increases from elliptic (none) via parabolic to hyperbolic. ∂xm−1 u n n P is α=m Aα (x) ν α = 0.2 (CauchyKowalevski for one p. u. so odd order p.d. . u. one should have a manifold M to which the singular directions of the p. ∂x1 u.d. u. Heuristics 4. A noncharacteristic manifold M means that if P ν is normal to M then α=m Aα (x) ν α 6= 0.d. . .1.1. The condition for the singular directions of the p. .
with the boundary conditions as in (3.10), that is, with an $(n-1)$-dimensional manifold $\mathcal M$ and given functions $\varphi_0,\varphi_1,\dots,\varphi_{m-1}$. Assume that:

1. $(x,p)\mapsto A_\alpha(x,p)$ and $(x,p)\mapsto F(x,p)$ are real analytic in a neighborhood of $\bar x$;
2. $\mathcal M$ is a real analytic manifold with $\bar x\in\mathcal M$, and $\varphi_0,\varphi_1,\dots,\varphi_{m-1}$ are real analytic functions defined in a neighborhood of $\bar x$;
3. with $\nu$ the normal direction to $\mathcal M$ in $\bar x$, one has $\sum_{|\alpha|=m}A_\alpha\left(x,u,\dots,\frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right)\nu^\alpha\neq0$, where the derivatives of $u$ are replaced by the appropriate (derivatives of) $\varphi_i$ at $x=\bar x$.

Then there is a ball $B_r(\bar x)$ and a unique analytic solution $u$ of the Cauchy problem in this ball.

The theorem above seems quite general but in fact it is not. There is a version of the Theorem of Cauchy-Kowalevski for systems of p.d.e. In fact the $u$ could even be a vector function, that is $u:\mathbb R^n\to\mathbb R^N$, and the $A_\alpha$ would be $N\times N$ matrices. Are you still following? Then one might recognize that not only solutions turn into vector-valued solutions but also that characteristic curves turn into characteristic manifolds. For Cauchy-Kowalevski the characteristic curves should not be tangential to the manifold. But similarly as for o.d.e. we may reduce Theorem 4.1.2, or the version for systems, to a version for a system of first-order p.d.e.

4.1.2 Reduction to standard form

We are considering
$$\sum_{|\alpha|=m}A_\alpha\left(\frac{\partial}{\partial x}\right)^\alpha u=F\qquad(4.1)$$
where $A_\alpha$ and $F$ may depend on $x$ and the derivatives $\left(\frac{\partial}{\partial x}\right)^\beta u$ of orders $|\beta|\le m-1$.
For the moment let us start with the single equation in (4.1), which is written out:
$$\sum_{|\alpha|=m}A_\alpha\left(x,u,\frac{\partial}{\partial x_1}u,\dots,\frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right)\left(\frac{\partial}{\partial x}\right)^\alpha u=F\left(x,u,\frac{\partial}{\partial x_1}u,\dots,\frac{\partial^{m-1}}{\partial x_n^{m-1}}u\right).$$
The first step is to take new coordinates that fit with the manifold. This we have done before: see Definitions 2.8 and 2.1, where now $C^1$ or $C^m$ is replaced by real analytic. In fact we are only considering local solutions, so one box/coordinate system is sufficient. Next we introduce new coordinates for which the manifold is flat. Let us write $\{y_1,\dots,y_{n-1},t\}$ for the coordinates in which the flattened manifold is $\{t=0\}$. We are led to the following problem:
$$\hat A_{(0,m)}\left(\frac{\partial}{\partial t}\right)^m\hat u=\sum_{k=0}^{m-1}\ \sum_{\beta\in\mathbb N^{n-1},\,|\beta|=m-k}\hat A_{(\beta,k)}\left(\frac{\partial}{\partial t}\right)^k\left(\frac{\partial}{\partial y}\right)^\beta\hat u+\hat F,\qquad(4.2)$$
where the coefficients $\hat A_{(\beta,k)}$ and $\hat F$ depend on $y$, $t$, $\hat u$ and derivatives of $\hat u$ up to order $m-1$. The prescribed Cauchy data on the flattened manifold $\hat{\mathcal M}$ change into
$$\hat u=\hat\varphi_0=\varphi_0,\qquad
\frac{\partial}{\partial t}\hat u=\hat\varphi_1=\varphi_1+\sum_{i=1}^{n}a_i\frac{\partial}{\partial\tau_i}\varphi_0,\qquad\dots,\qquad
\frac{\partial^{m-1}}{\partial t^{m-1}}\hat u=\hat\varphi_{m-1}=\varphi_{m-1}+\sum_{k=0}^{m-2}\ \sum_{\beta:\,k+|\beta|\le m-2}a_{(\beta,k)}\left(\frac{\partial}{\partial\tau}\right)^\beta\varphi_k,\qquad(4.3)$$
taking the appropriate identification of $x\in\mathcal M$ and $(y,t)\in\mathbb R^{n-1}\times\{0\}$. The assumption that $\mathcal M$ does not contain singular directions is preserved in this new coordinate system: it is transferred to the condition $\hat A_{(0,m)}\neq0$, and hence we are allowed to divide by this factor to find just $\left(\frac{\partial}{\partial t}\right)^m\hat u$ on the left hand side of (4.2).

As a next step one transfers the higher order initial value problem (4.2)-(4.3) into a first order system by setting
$$\tilde u_{(\beta,k)}=\left(\frac{\partial}{\partial t}\right)^k\left(\frac{\partial}{\partial y}\right)^\beta\hat u\quad\text{for }|\beta|+k\le m-1\text{ and }\beta\in\mathbb N^{n-1}.$$
We find for $k\le m-2$ that
$$\frac{\partial}{\partial t}\tilde u_{(0,k)}=\tilde u_{(0,k+1)},$$
and if $|\beta|+k\le m-1$ with $\beta\neq0$, letting $i=i_\beta$ be the first nonzero index with $\beta_i\ge1$ and $\tilde\beta=(0,\dots,0,\beta_i-1,\beta_{i+1},\dots,\beta_{n-1})$, that
$$\frac{\partial}{\partial t}\tilde u_{(\beta,k)}=\left(\frac{\partial}{\partial t}\right)^{k+1}\left(\frac{\partial}{\partial y}\right)^\beta\hat u=\frac{\partial}{\partial y_i}\tilde u_{(\tilde\beta,k+1)}.$$
So combining these equations we have
$$\begin{cases}\frac{\partial}{\partial t}\tilde u_{(0,k)}=\tilde u_{(0,k+1)}&\text{for }0\le k\le m-2,\\[2pt]
\frac{\partial}{\partial t}\tilde u_{(\beta,k)}=\frac{\partial}{\partial y_{i_\beta}}\tilde u_{(\tilde\beta,k+1)}&\text{for }0\le k+|\beta|\le m-1\text{ with }\beta\neq0,\\[2pt]
\frac{\partial}{\partial t}\tilde u_{(0,m-1)}=\sum_{k=0}^{m-1}\ \sum_{|\beta|=m-k}\tilde A_{(\beta,k)}\frac{\partial}{\partial y_{i_\beta}}\tilde u_{(\tilde\beta,k)}+\tilde F,&\end{cases}\qquad(4.4)$$
where $\tilde A_{(\beta,k)}$ and $\tilde F$ are analytic functions of $y$, $t$ and the $\tilde u_{(\beta,k)}$ with $|\beta|+k\le m-1$. The boundary conditions go over into
$$\tilde u_{(0,k)}=\hat\varphi_k\ \text{ for }0\le k\le m-1,\qquad
\tilde u_{(\beta,k)}=\left(\frac{\partial}{\partial y}\right)^\beta\hat\varphi_k\ \text{ for }0\le k+|\beta|\le m-1\text{ with }\beta\neq0.\qquad(4.5)$$
With some extra equations for the coordinates, in order to make the system autonomous as we did for the o.d.e., the equations (4.4)-(4.5) can be recast in vector notation:
$$\begin{cases}\frac{\partial}{\partial t}u=A_j(u)\frac{\partial}{\partial y_j}u+\mathcal F(u)&\text{in }\mathbb R^{n-1}\times\mathbb R^+,\\ u(0,y)=\varphi(y)&\text{on }\mathbb R^{n-1}.\end{cases}$$
As a final step we consider $v(t,y)=u(t,y)-\varphi(0)$, with initial datum $\psi(y)=\varphi(y)-\varphi(0)$, in order to reduce to vanishing initial data at the origin, $v(0,0)=0$:
$$\begin{cases}\frac{\partial}{\partial t}v=\breve A_j(v)\frac{\partial}{\partial y_j}v+\breve{\mathcal F}(v)&\text{in }\mathbb R^{n-1}\times\mathbb R^+,\\ v(0,y)=\psi(y)&\text{on }\mathbb R^{n-1}.\end{cases}\qquad(4.6)$$
The formulation of the Cauchy-Kowalevski Theorem is usually stated for this system, which is more general than the one we started with.

Theorem 4.1.3 (Cauchy-Kowalevski) Assume that $u\mapsto A_j(u)$, $u\mapsto\mathcal F(u)$ and $y\mapsto\varphi(y)$ are real analytic functions in a neighborhood of the origin. Then there exists a unique analytic solution of (4.6) in a neighborhood of the origin.

For a detailed proof one might consider the book by Fritz John.
4.2 A solution for the 1d heat equation

4.2.1 The heat kernel on R

Let us try to derive a solution for
$$u_t-u_{xx}=0\qquad(4.7)$$
and preferably one that satisfies some initial condition(s). For the heat equation one notices that if $u(t,x)$ solves (4.7), then also $u\left(c^2t,cx\right)$ is a solution for any $c\in\mathbb R$. The idea is to look for similarity solutions, that is, to try to find some solution that fits with the scaling found in the equation. Or in other words, to exploit this scaling to reduce the dimension. So let us try the (maybe silly) approach to take $c$ such that $c^2t=1$. Then it could be possible to get a solution of the form $u(1,t^{-1/2}x)$, which is just a function of one variable. Let's try and set $u(1,t^{-1/2}x)=f(t^{-1/2}x)$. Putting this in (4.7) we obtain
$$\left(\frac{\partial}{\partial t}-\frac{\partial^2}{\partial x^2}\right)f(x/\sqrt t)=-\frac{x}{2t\sqrt t}f'(x/\sqrt t)-\frac1tf''(x/\sqrt t).$$
If we set $\xi=x/\sqrt t$ we find the o.d.e.
$$\tfrac12\xi f'(\xi)+f''(\xi)=0.$$
This o.d.e. is separable, and through $f''(\xi)=-\tfrac12\xi f'(\xi)$ and integrating both sides we obtain first $\ln f'(\xi)=-\tfrac14\xi^2+C$, hence $f'(\xi)=c_1e^{-\frac14\xi^2}$, and next
$$f(x/\sqrt t)=c_2+c_1\int_{-\infty}^{x/\sqrt t}e^{-\frac14\xi^2}d\xi.$$
The constant solution is not very interesting, so we forget $c_2$, and for reasons to come we take $c_1$ such that $c_1\int_{-\infty}^{\infty}e^{-\frac14\xi^2}d\xi=1$. A computation gives $c_1=\tfrac12\pi^{-1/2}$. One finds that
$$\bar u(t,x)=\frac1{2\sqrt\pi}\int_{-\infty}^{x/\sqrt t}e^{-\frac14\xi^2}d\xi$$
is a solution of the equation. Here is a graph of that function. [Figure: graph of $(t,x)\mapsto\bar u(t,x)$.]

The function $x\mapsto\bar u(0,x)$ equals $0$ for $x<0$ and $1$ for $x>0$. Moreover, for $(t,x)\neq(0,0)$ (and $t\ge0$) this function is infinitely differentiable. As a result all derivatives of this $\bar u$ also satisfy the
heat equation. Let us give a name to the $x$-derivative of $\bar u$:
$$p(t,x)=\frac{\partial}{\partial x}\bar u(t,x)=\frac1{2\sqrt{\pi t}}e^{-\frac{x^2}{4t}}.$$
This function $p$ is known as the heat kernel on $\mathbb R$. The derivative $x\mapsto\frac{\partial}{\partial x}\bar u(0,x)$ equals Dirac's $\delta$-function in the sense of distributions.

Exercise 57 Let $\varphi\in C_0^\infty(\mathbb R)$. Show that $z(t,x)=\int_{-\infty}^{\infty}p(t,x-y)\,\varphi(y)\,dy$ is a solution of
$$\begin{cases}\left(\frac{\partial}{\partial t}-\frac{\partial^2}{\partial x^2}\right)z(t,x)=0&\text{for }(t,x)\in\mathbb R^+\times\mathbb R,\\ z(0,x)=\varphi(x)&\text{for }x\in\mathbb R.\end{cases}$$

Exercise 58 Let $\varphi\in C_0^\infty(\mathbb R^3)$ and set $p_3(t,x)=p(t,x_1)\,p(t,x_2)\,p(t,x_3)$. Show that $z(t,x)=\int_{\mathbb R^3}p_3(t,x-y)\,\varphi(y)\,dy$ is a solution of
$$\begin{cases}\left(\frac{\partial}{\partial t}-\Delta\right)z(t,x)=0&\text{for }(t,x)\in\mathbb R^+\times\mathbb R^3,\\ z(0,x)=\varphi(x)&\text{for }x\in\mathbb R^3.\end{cases}$$

Exercise 59 Show that the function
$$w(x,t)=\frac{x}{t\sqrt t}\,e^{-\frac{x^2}{4t}}$$
is a solution to the heat equation and satisfies $\lim_{t\downarrow0}w(x,t)=0$ for any $x\in\mathbb R$. Is this a candidate for showing that
$$\begin{cases}\left(\frac{\partial}{\partial t}-\frac{\partial^2}{\partial x^2}\right)z(t,x)=0&\text{for }(t,x)\in\mathbb R^+\times\mathbb R,\\ z(0,x)=0&\text{for }x\in\mathbb R,\end{cases}$$
has a nonunique solution in $C^2\left(\mathbb R_0^+\times\mathbb R\right)$? Is the convergence of $\lim_{t\downarrow0}w(x,t)=0$ uniform with respect to $x\in\mathbb R$?

4.2.2 On bounded intervals by an orthonormal set

For the heat equation in one dimension on a bounded interval, say $(0,1)$, we may find a solution for initial values $\varphi\in L^2$ by the following construction. First let us recall the boundary value problem:
$$\begin{cases}\text{the p.d.e.:}&u_t-u_{xx}=0\quad\text{in }(0,1)\times\mathbb R^+,\\ \text{the initial value:}&u=\varphi\quad\text{on }(0,1)\times\{0\},\\ \text{the boundary condition:}&u=0\quad\text{on }\{0,1\}\times\mathbb R^+.\end{cases}\qquad(4.8)$$
We try to solve this boundary value problem by separation of variables. That is, we try to find functions of the form $u(t,x)=T(t)X(x)$ that solve the boundary value problem in (4.8) for some initial value. Putting these special $u$'s into (4.8) we find
$$\begin{cases}T'(t)X(x)-T(t)X''(x)=0&\text{for }0<x<1\text{ and }t>0,\\ T(t)X(0)=T(t)X(1)=0&\text{for }t>0,\\ T(0)X(x)=\varphi(x)&\text{for }0<x<1.\end{cases}$$
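Exercise 57 can also be checked numerically. The following sketch (my own illustration; the datum $\varphi(y)=e^{-y^2}$ and the grid are arbitrary choices) approximates $z=p(t,\cdot)*\varphi$ by a quadrature and compares $z_t$ with $z_{xx}$ at one point by finite differences:

```python
import numpy as np

def p(t, x):
    """Heat kernel on R: p(t,x) = (4 pi t)^(-1/2) exp(-x^2/(4t))."""
    return np.exp(-x**2 / (4*t)) / (2*np.sqrt(np.pi*t))

phi = lambda y: np.exp(-y**2)        # smooth, rapidly decaying initial datum
y = np.linspace(-20, 20, 4001)
dy = y[1] - y[0]

def z(t, x):
    # z(t,x) = integral of p(t, x-y) phi(y) dy, by a simple Riemann sum
    return np.sum(p(t, x - y) * phi(y)) * dy

# Check z_t ≈ z_xx at (t,x) = (0.5, 0.3) with central differences.
t0, x0, h = 0.5, 0.3, 1e-3
zt  = (z(t0 + h, x0) - z(t0 - h, x0)) / (2*h)
zxx = (z(t0, x0 + h) - 2*z(t0, x0) + z(t0, x0 - h)) / h**2
print(abs(zt - zxx))
```

The printed residual is of the order of the discretization error, consistent with $z_t=z_{xx}$.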
The only nontrivial solutions for $X$ are found by
$$\frac{T'(t)}{T(t)}=\frac{X''(x)}{X(x)}=-c\quad\text{and}\quad X(0)=X(1)=0,$$
where also the constant $c$ is unknown. (Without further knowledge one tries $c<0$, $c=0$ and $c>0$.) For $c<0$ one finds functions $ae^{\sqrt{-c}\,x}+be^{-\sqrt{-c}\,x}$ that do not satisfy the boundary conditions except when they are trivial. Similarly $c=0$ doesn't bring anything. For some $c>0$ a combination of $\cos(\sqrt c\,x)$ and $\sin(\sqrt c\,x)$ does satisfy the boundary conditions. Some lengthy computations show that these nontrivial solutions are
$$X_k(x)=\alpha\sin(k\pi x)\quad\text{and}\quad c=k^2\pi^2.$$
Having these $X_k$ we can now solve the corresponding $T$ and find $T_k(t)=e^{-k^2\pi^2t}$. So the special solutions we have found are
$$u(t,x)=\alpha\,e^{-k^2\pi^2t}\sin(k\pi x).\qquad(4.9)$$
Since the problem is linear one may combine such special functions, and one finds that one can solve
$$\begin{cases}u_t-u_{xx}=0&\text{for }0<x<1\text{ and }t>0,\\ u(t,0)=u(t,1)=0&\text{for }t>0,\\ u(0,x)=\sum_{k=1}^{\infty}\alpha_k\sin(k\pi x)&\text{for }0<x<1.\end{cases}$$

Definition 4.2.1 Suppose $H$ is a Hilbert space with inner product $\langle\cdot,\cdot\rangle$. The set of functions $\{e_k\}_{k=1}^{\infty}$ in $H$ is called a complete orthonormal system for $H$ if the following holds:
1. $\langle e_k,e_l\rangle=\delta_{kl}$ (orthonormality);
2. $\lim_{m\to\infty}\left\|\sum_{k=1}^{m}\langle e_k,f\rangle\,e_k(\cdot)-f(\cdot)\right\|_H=0$ for all $f\in H$ (completeness).

Remark 4.2.2 As a consequence one finds Parseval's identity: for all $f\in H$
$$\|f\|_H^2=\lim_{m\to\infty}\left\|\sum_{k=1}^{m}\langle e_k,f\rangle\,e_k(\cdot)\right\|_H^2
=\lim_{m\to\infty}\sum_{k=1}^{m}\sum_{j=1}^{m}\langle e_k,f\rangle\langle e_j,f\rangle\,\delta_{kj}
=\sum_{k=1}^{\infty}\langle e_k,f\rangle^2.$$

Now one should remember that the set of functions $\{e_k\}_{k=1}^{\infty}$ with $e_k(x)=\sqrt2\sin(k\pi x)$ is a complete orthonormal system in $L^2(0,1)$, so that we can write any function in $L^2(0,1)$ as $\sum_{k=1}^{\infty}\alpha_ke_k(\cdot)$.
Remark 4.2.3 Some standard complete orthonormal systems on $L^2(0,1)$, which are useful for the differential equations that we will meet, are:
- $\left\{\sqrt2\sin(k\pi\cdot)\,;\,k\in\mathbb N^+\right\}$,
- $\left\{1,\ \sqrt2\cos(k\pi\cdot)\,;\,k\in\mathbb N^+\right\}$,
- $\left\{1,\ \sqrt2\sin(2k\pi\cdot),\ \sqrt2\cos(2k\pi\cdot)\,;\,k\in\mathbb N^+\right\}$.

Another famous set of orthonormal functions are the Legendre polynomials:
$$\left\{\sqrt{k+\tfrac12}\,P_k(\cdot)\,;\,k\in\mathbb N\right\}\quad\text{with}\quad P_k(x)=\frac1{2^kk!}\frac{d^k}{dx^k}\left(x^2-1\right)^k$$
is a complete orthonormal set in $L^2(-1,1)$.

Remark 4.2.4 The use of orthogonal polynomials for solving linear p.d.e. is classical. For further reading we refer to the books of Courant-Hilbert, Strauss, Weinberger or Sobolev.

Exercise 60 Write down the boundary conditions that the third set of functions in Remark 4.2.3 satisfy.

Exercise 61 The same question for the second set of functions.

Exercise 62 Find a complete orthonormal set of functions for
$$-u''=f\ \text{in }(-1,1),\qquad u(-1)=0,\quad u(1)=0.$$

Exercise 63 Find the appropriate set of orthonormal functions for
$$-u''=f\ \text{in }(0,1),\qquad u(0)=0,\quad u_x(1)=0.$$

Exercise 64 Suppose that $\{e_k\,;\,k\in\mathbb N\}$ is an orthonormal system in $L^2(0,1)$. Let $f\in L^2(0,1)$.
1. Show that for $a_0,\dots,a_k\in\mathbb R$ the following holds:
$$\left\|f-\sum_{i=0}^{k}\langle f,e_i\rangle\,e_i\right\|_{L^2(0,1)}\le\left\|f-\sum_{i=0}^{k}a_ie_i\right\|_{L^2(0,1)}.$$
2. Show that every polynomial $p:\mathbb R\to\mathbb R$ can be written as a finite combination of Legendre polynomials.
3. Show that the set of Legendre polynomials is an orthonormal collection.
4. Prove that the Legendre polynomials form a complete orthonormal set in $L^2(-1,1)$.

Coming back to the problem in (4.8), we have found for $\varphi\in L^2(0,1)$ the following solution:
$$u(t,x)=\sum_{k=1}^{\infty}\left\langle\sqrt2\sin(k\pi\cdot),\varphi\right\rangle e^{-\pi^2k^2t}\,\sqrt2\sin(k\pi x).\qquad(4.10)$$
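A truncation of the series (4.10) is easy to evaluate numerically. The following sketch (my own illustration; the grid, the datum $\varphi(x)=x(1-x)$ and the truncation $K$ are arbitrary choices) reconstructs $\varphi$ at $t=0$ and checks the boundary condition at $t=0.1$:

```python
import numpy as np

def heat_series(phi_vals, x, t, K=100):
    """Evaluate the truncated series (4.10):
    u(t,x) = sum_k <sqrt2 sin(k pi .), phi> e^{-pi^2 k^2 t} sqrt2 sin(k pi x),
    with the inner products approximated by a Riemann sum on the grid x."""
    dx = x[1] - x[0]
    u = np.zeros_like(x)
    for k in range(1, K + 1):
        ek = np.sqrt(2) * np.sin(k*np.pi*x)
        ck = np.sum(ek * phi_vals) * dx            # <e_k, phi>
        u += ck * np.exp(-(np.pi*k)**2 * t) * ek
    return u

x = np.linspace(0, 1, 1001)
phi = x*(1 - x)
u0  = heat_series(phi, x, 0.0)    # should reproduce phi
u01 = heat_series(phi, x, 0.1)    # smoothed and decayed

print(np.max(np.abs(u0 - phi)), abs(u01[0]), abs(u01[-1]))
```

At $t=0$ the truncated series reproduces $\varphi$ up to truncation and quadrature error, and for $t>0$ the boundary values vanish and the amplitude has decayed.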
Exercise 65 Let $\varphi\in L^2(0,1)$ and let $u$ be the function in (4.10).
1. Show that $x\mapsto u(t,x)$ is in $L^2(0,1)$ for all $t\ge0$.
2. Show that all functions $x\mapsto\frac{\partial^i}{\partial x^i}\frac{\partial^j}{\partial t^j}u(t,x)$ for $t>0$ are in $L^2(0,1)$.
3. Show that there is $c>0$ (independent of $\varphi$) with
$$\|u(t,\cdot)\|_{W^{2,2}(0,1)}\le\frac ct\,\|\varphi\|_{L^2(0,1)}.$$
Lemma 4.3.1 If $\{e_k\,;\,k\in\mathbb N\}$ is a complete orthonormal system in $L^2(0,a)$ and $\{f_k\,;\,k\in\mathbb N\}$ is a complete orthonormal system in $L^2(0,b)$, then $\{\tilde e_{k,\ell}\,;\,k,\ell\in\mathbb N\}$ with $\tilde e_{k,\ell}(x,y)=e_k(x)f_\ell(y)$ is a complete orthonormal system on $L^2\left((0,a)\times(0,b)\right)$.

4.3 A solution for the Laplace equation on a square

The boundary value problem that we will consider has zero Dirichlet boundary conditions:
$$\begin{cases}-\Delta u=f&\text{in }(0,1)^2,\\ u=0&\text{on }\partial(0,1)^2,\end{cases}\qquad(4.11)$$
where $f\in L^2\left((0,1)^2\right)$. We will again try the separation of variables approach and first try to find sufficiently many functions of the form $u(x,y)=X(x)Y(y)$ that solve the corresponding eigenvalue problem:
$$\begin{cases}-\Delta\phi=\lambda\phi&\text{in }(0,1)^2,\\ \phi=0&\text{on }\partial(0,1)^2.\end{cases}\qquad(4.12)$$
At least there should be enough functions to approximate $f$ in $L^2$ sense. Putting such functions in (4.12) we obtain
$$\begin{cases}-X''(x)Y(y)-X(x)Y''(y)=\lambda X(x)Y(y)&\text{for }0<x,y<1,\\ X(0)Y(y)=0=X(1)Y(y)&\text{for }0<y<1,\\ X(x)Y(0)=0=X(x)Y(1)&\text{for }0<x<1.\end{cases}$$
Skipping the trivial solution we find
$$-\frac{X''(x)}{X(x)}-\frac{Y''(y)}{Y(y)}=\lambda\ \text{ for }0<x,y<1,\qquad X(0)=0=X(1),\quad Y(0)=0=Y(1).$$
Our quest for solutions $X$, $Y$, $\lambda$ leads us after some computations to
$$X_k(x)=\sin(k\pi x),\quad Y_\ell(y)=\sin(\ell\pi y),\quad\lambda_{k,\ell}=\pi^2k^2+\pi^2\ell^2.$$
Since $\left\{\sqrt2\sin(k\pi\cdot)\,;\,k\in\mathbb N^+\right\}$ is a complete orthonormal system in $L^2(0,1)$, by Lemma 4.3.1 the combinations supply such a system in $L^2\left((0,1)^2\right)$. Setting $e_{k,\ell}(x,y)=2\sin(k\pi x)\sin(\ell\pi y)$ we have
$$\lim_{M\to\infty}\left\|f-\sum_{m=2}^{M}\ \sum_{i+j=m}\langle e_{i,j},f\rangle\,e_{i,j}\right\|=0,$$
so with this complete orthonormal system we can approximate every $f\in L^2\left((0,1)^2\right)$.
Since for a right hand side of the form
$$\tilde f:=\sum_{m=2}^{M}\ \sum_{i+j=m}\langle e_{i,j},f\rangle\,e_{i,j}$$
in (4.11) the solution is directly computable, namely
$$\tilde u=\sum_{m=2}^{M}\ \sum_{i+j=m}\frac{\langle e_{i,j},f\rangle}{\lambda_{ij}}\,e_{i,j},$$
there is some hope that the solution for $f\in L^2\left((0,1)^2\right)$ is given by
$$u=\lim_{M\to\infty}\sum_{m=2}^{M}\ \sum_{i+j=m}\frac{\langle e_{i,j},f\rangle}{\lambda_{ij}}\,e_{i,j},\qquad(4.13)$$
at least in $L^2$ sense. Indeed, one can show that for every $f\in L^2\left((0,1)^2\right)$ a solution $u\in W^{2,2}\left((0,1)^2\right)\cap W_0^{1,2}\left((0,1)^2\right)$ is given by (4.13). This solution will be unique in $W^{2,2}\left((0,1)^2\right)\cap W_0^{1,2}\left((0,1)^2\right)$.

Exercise 66 Suppose that $f\in L^2\left((0,1)^2\right)$ and that $u$ is given by (4.13).
1. Show that the following series converge in $L^2$ sense:
$$\sum_{m=2}^{M}\ \sum_{i+j=m}\frac{i^{\alpha_1}j^{\alpha_2}}{\lambda_{ij}}\langle e_{i,j},f\rangle\,e_{i,j}$$
with $\alpha\in\mathbb N^2$ and $|\alpha|\le2$ (6 different series).
2. Show that $u$, $u_x$, $u_y$, $u_{xx}$, $u_{xy}$ and $u_{yy}$ are in $L^2\left((0,1)^2\right)$.

Exercise 67 Find a complete orthonormal system that fits for
$$\begin{cases}-\Delta u(x,y)=f(x,y)&\text{for }0<x,y<1,\\ u(0,y)=u(1,y)=0&\text{for }0<y<1,\\ u_y(x,0)=u_y(x,1)=0&\text{for }0<x<1.\end{cases}\qquad(4.14)$$

Exercise 68 Have a try for the disk in $\mathbb R^2$ using polar coordinates:
$$\begin{cases}-\Delta u(x)=f(x)&\text{for }|x|<1,\\ u(x)=0&\text{for }|x|=1.\end{cases}\qquad(4.15)$$
In polar coordinates: $\Delta=\frac{\partial^2}{\partial r^2}+\frac1r\frac{\partial}{\partial r}+\frac1{r^2}\frac{\partial^2}{\partial\varphi^2}$.
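The eigenfunction expansion (4.13) can be tested directly. In the following sketch (my own illustration, with an $f$ manufactured so that the exact solution is known) the inner products are approximated on a midpoint grid:

```python
import numpy as np

def solve_square(f, n=64, K=40):
    """Approximate the solution (4.13) of -Laplace(u) = f on (0,1)^2, u = 0
    on the boundary, using e_{k,l}(x,y) = 2 sin(k pi x) sin(l pi y) and
    lambda_{k,l} = pi^2 (k^2 + l^2)."""
    x = (np.arange(n) + 0.5) / n                    # midpoint grid
    X, Y = np.meshgrid(x, x, indexing='ij')
    F = f(X, Y)
    u = np.zeros_like(F)
    for k in range(1, K + 1):
        for l in range(1, K + 1):
            e = 2*np.sin(k*np.pi*X)*np.sin(l*np.pi*Y)
            c = np.sum(e * F) / n**2                # <e_{k,l}, f>
            u += c / (np.pi**2*(k**2 + l**2)) * e
    return X, Y, u

# f chosen so that the exact solution is u = sin(pi x) sin(2 pi y):
f = lambda x, y: 5*np.pi**2*np.sin(np.pi*x)*np.sin(2*np.pi*y)
X, Y, u = solve_square(f)
exact = np.sin(np.pi*X)*np.sin(2*np.pi*Y)
err = np.max(np.abs(u - exact))
print(err)
```

Since this $f$ is a single eigenfunction, the series collapses to one term and the printed error is essentially machine precision.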
4.4 Solutions for the 1d wave equation

4.4.1 On unbounded intervals

For the 1-dimensional wave equation we have seen a solution by the formula of d'Alembert:
$$\begin{cases}u_{tt}(x,t)-c^2u_{xx}(x,t)=0&\text{for }x\in\mathbb R\text{ and }t\in\mathbb R^+,\\ u(x,0)=\varphi_0(x)&\text{for }x\in\mathbb R,\\ u_t(x,0)=\varphi_1(x)&\text{for }x\in\mathbb R,\end{cases}$$
has a solution of the form
$$u(x,t)=\tfrac12\varphi_0(x-ct)+\tfrac12\varphi_0(x+ct)+\tfrac12\int_{x-ct}^{x+ct}\varphi_1(s)\,ds.$$
We may use this solution to find a solution for the wave equation on the half line:
$$\begin{cases}u_{tt}(x,t)-c^2u_{xx}(x,t)=0&\text{for }x\in\mathbb R^+\text{ and }t\in\mathbb R^+,\\ u(x,0)=\varphi_0(x)&\text{for }x\in\mathbb R^+,\\ u_t(x,0)=\varphi_1(x)&\text{for }x\in\mathbb R^+,\\ u(0,t)=0&\text{for }t\in\mathbb R_0^+.\end{cases}\qquad(4.16)$$
A way to do this is by a 'reflection'. We extend the initial values $\varphi_0$ and $\varphi_1$ to $\mathbb R^-$, use d'Alembert's formula on $\mathbb R\times\mathbb R^+$, and hope that if we restrict the function we have found to $\mathbb R_0^+\times\mathbb R_0^+$, a solution of (4.16) comes out.

First the extension. For $i=0,1$ we set
$$\tilde\varphi_i(x)=\begin{cases}\varphi_i(x)&\text{if }x>0,\\ 0&\text{if }x=0,\\ -\varphi_i(-x)&\text{if }x<0.\end{cases}$$
The solution for the modified $\tilde\varphi_i$ on $\mathbb R$ is
$$\tilde u(x,t)=\tfrac12\tilde\varphi_0(x-ct)+\tfrac12\tilde\varphi_0(x+ct)+\tfrac12\int_{x-ct}^{x+ct}\tilde\varphi_1(s)\,ds.$$
Then we make the distinction for
$$\mathrm I=\{(x,t)\,;\,0\le ct\le x\}\quad\text{and}\quad\mathrm{II}=\{(x,t)\,;\,0\le x<ct\}$$
(the cases with $x<0$ we will not need).

Case I. Here $x-ct\ge0$, so
$$\tilde u(x,t)=\tfrac12\varphi_0(x-ct)+\tfrac12\varphi_0(x+ct)+\tfrac12\int_{x-ct}^{x+ct}\varphi_1(s)\,ds.$$
Case II. Here $x-ct<0$, and now using that
$$\int_{x-ct}^{0}\tilde\varphi_1(s)\,ds=\int_{x-ct}^{0}-\varphi_1(-s)\,ds=-\int_{0}^{ct-x}\varphi_1(s)\,ds,$$
we find
$$\tilde u(x,t)=-\tfrac12\varphi_0(ct-x)+\tfrac12\varphi_0(x+ct)+\tfrac12\int_{ct-x}^{ct+x}\varphi_1(s)\,ds.$$
Indeed, at $x=0$ this gives $\tilde u(0,t)=0$.
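The reflection construction is easy to verify numerically. The following sketch (my own illustration; for simplicity it takes $c=1$, $\varphi_1=0$ and an arbitrary datum $\varphi_0$ with $\varphi_0(0)=0$) builds $\tilde u$ from the odd extension and checks the boundary and initial conditions:

```python
import numpy as np

# d'Alembert on the half line by odd reflection (c = 1, phi1 = 0).
phi0 = lambda x: np.where(x > 0, x**2*np.exp(-x), 0.0)   # phi0(0) = 0

def odd(f):
    """Odd extension of f to the whole line."""
    return lambda x: np.where(x >= 0, f(np.maximum(x, 0.0)),
                              -f(np.maximum(-x, 0.0)))

def u(x, t, c=1.0):
    p0 = odd(phi0)
    return 0.5*p0(x - c*t) + 0.5*p0(x + c*t)   # phi1 = 0 drops the integral

xs = np.linspace(0, 5, 101)
u_bdry = u(np.array([0.0]), 1.7)               # boundary value u(0,t)
print(u_bdry, np.max(np.abs(u(xs, 0.0) - phi0(xs))))
```

The boundary value vanishes for every $t$, as the odd extension guarantees, and at $t=0$ the formula returns $\varphi_0$.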
Exercise 69 Suppose that $\varphi_0\in C^2(\mathbb R_0^+)$ and $\varphi_1\in C^1(\mathbb R_0^+)$ with $\varphi_0(0)=\varphi_1(0)=0$.
1. Show that
$$u(x,t)=\mathrm{sign}(x-ct)\,\tfrac12\varphi_0(|x-ct|)+\tfrac12\varphi_0(x+ct)+\tfrac12\int_{|x-ct|}^{x+ct}\varphi_1(s)\,ds$$
is in $C^2(\mathbb R^+\times\mathbb R^+)$ and a solution to (4.16).
2. What happens when we remove the condition $\varphi_1(0)=0$?
3. And what if we also remove the condition $\varphi_0(0)=0$?

Exercise 70 Solve, using d'Alembert and a reflection, the problem
$$\begin{cases}u_{tt}-c^2u_{xx}=0&\text{in }\mathbb R^+\times\mathbb R^+,\\ u(x,0)=\varphi_0(x)&\text{for }x\in\mathbb R^+,\\ u_t(x,0)=\varphi_1(x)&\text{for }x\in\mathbb R^+,\\ u_x(0,t)=0&\text{for }t\in\mathbb R_0^+.\end{cases}$$

Exercise 71 Try to find out what happens when we use a similar approach for
$$\begin{cases}u_{tt}(x,t)-c^2u_{xx}(x,t)=0&\text{for }(x,t)\in(0,1)\times\mathbb R^+,\\ u(x,0)=\varphi_0(x)&\text{for }x\in(0,1),\\ u_t(x,0)=\varphi_1(x)&\text{for }x\in(0,1),\\ u(0,t)=u(1,t)=0&\text{for }t\in\mathbb R_0^+.\end{cases}$$
The picture on the right might help. [Figure omitted.]

4.4.2 On a bounded interval

Exercise 72 We return to the boundary value problem in Exercise 71.
1. Suppose that $\varphi_0,\varphi_1\in L^2(0,1)$. Use an appropriate complete orthonormal set $\{e_k(\cdot)\,;\,k\in\mathbb N\}$ to find a solution:
$$u(t,x)=\sum_{k=0}^{\infty}a_k(t)\,e_k(x).$$
2. Suppose that $\varphi_0,\varphi_1\in L^2(0,1)$. Is it true that $x\mapsto u(t,x)\in L^2(0,1)$ for all $t>0$?
3. Suppose that $\varphi_0,\varphi_1\in C[0,1]$. Is it true that $x\mapsto u(t,x)\in C[0,1]$ for all $t>0$?
4. Suppose that $\varphi_0,\varphi_1\in C_0^\infty[0,1]$. Is it true that $x\mapsto u(t,x)\in C_0^\infty[0,1]$ for all $t>0$?
5. Give a condition on $\varphi_0,\varphi_1$ such that $\sup\{|u(x,t)|\,;\,0\le x\le1,\ t\ge0\}$ is bounded.
6. Give a condition on $\varphi_0,\varphi_1$ such that $\sup\left\{\|u(\cdot,t)\|_{L^2(0,1)}\,;\,t\ge0\right\}$ is bounded.
7. Give a condition on $\varphi_0,\varphi_1$ such that $\|u\|_{L^2\left((0,1)\times\mathbb R^+\right)}$ is bounded.
4.5 A fundamental solution for the Laplace operator

A function $u$ defined on a domain $\Omega\subset\mathbb R^n$ is called harmonic whenever
$$\Delta u=0\quad\text{in }\Omega.$$
We will start today by looking for radially symmetric harmonic functions. The Laplace operator $\Delta$ in $\mathbb R^n$ can be written in radial coordinates, with $r=|x|$, as
$$\Delta=r^{1-n}\frac{\partial}{\partial r}r^{n-1}\frac{\partial}{\partial r}+\frac1{r^2}\Delta_{LB},$$
where $\Delta_{LB}$ is the so-called Laplace-Beltrami operator on the surface of the unit ball $S^{n-1}$ in $\mathbb R^n$. So radial solutions of $\Delta u=0$ satisfy
$$r^{1-n}\frac{\partial}{\partial r}r^{n-1}\frac{\partial}{\partial r}u(r)=0.$$
Hence $\frac{\partial}{\partial r}r^{n-1}\frac{\partial}{\partial r}u(r)=0$, which implies $r^{n-1}\frac{\partial}{\partial r}u(r)=c$, which implies $\frac{\partial}{\partial r}u(r)=c\,r^{1-n}$, which implies
$$\text{for }n=2:\ u(r)=c_1\log r+c_2,\qquad\text{for }n\ge3:\ u(r)=c_1r^{2-n}+c_2.$$
Removing the not so interesting constant $c_2$ and choosing a special $c_1$ (the reason will become apparent later) we define
$$\text{for }n=2:\ F_2(x)=\frac{-1}{2\pi}\log|x|,\qquad\text{for }n\ge3:\ F_n(x)=\frac1{\omega_n(n-2)}|x|^{2-n},\qquad(4.17)$$
where $\omega_n$ is the surface area of the unit ball in $\mathbb R^n$.

Lemma 4.5.1 The functions $F_n$ and $\nabla F_n$ are in $L_{loc}^1(\mathbb R^n)$.

Next let us recall a consequence of Green's formula:

Lemma 4.5.2 For $u,v\in C^2\left(\bar\Omega\right)$ and $\Omega\subset\mathbb R^n$ a bounded domain with $C^1$ boundary
$$\int_\Omega u\,\Delta v\,dy=\int_\Omega\Delta u\;v\,dy+\int_{\partial\Omega}\left(u\frac{\partial}{\partial\nu}v-\left(\frac{\partial}{\partial\nu}u\right)v\right)d\sigma,\qquad(4.18)$$
where $\nu$ is the outwards pointing normal direction.

So if $u,v\in C^2\left(\bar\Omega\right)$ with $u$ harmonic, we find
$$\int_\Omega u\,\Delta v\,dy=\int_{\partial\Omega}\left(u\frac{\partial}{\partial\nu}v-\left(\frac{\partial}{\partial\nu}u\right)v\right)d\sigma.$$
Now we use this for $v(y)=F_n(x-y)$ with $x\in\mathbb R^n$ and the $F_n$ in (4.17). If $x\notin\bar\Omega$ then (4.18) holds. If $x\in\Omega$ then, since the $F_n$ are not harmonic in $0$, we
have to drill a hole around $x$ for $y\mapsto F_n(x-y)$. Since the $F_n$ have an integrable singularity we find
$$\int_\Omega F_n(x-y)\,\Delta v(y)\,dy=\lim_{\varepsilon\downarrow0}\int_{\Omega\backslash B_\varepsilon(x)}F_n(x-y)\,\Delta v(y)\,dy.$$
So instead of $\Omega$ we will use $\Omega\backslash B_\varepsilon(x)=\{y\in\Omega\,;\,|x-y|>\varepsilon\}$ and find
$$\int_\Omega F_n(x-y)\,\Delta v(y)\,dy=\lim_{\varepsilon\downarrow0}\int_{\partial(\Omega\backslash B_\varepsilon(x))}\left(F_n(x-y)\frac{\partial}{\partial\nu}v-\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\right)d\sigma=$$
$$=\int_{\partial\Omega}\left(F_n(x-y)\frac{\partial}{\partial\nu}v-\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\right)d\sigma
-\lim_{\varepsilon\downarrow0}\int_{\partial B_\varepsilon(x)}\left(F_n(x-y)\frac{\partial}{\partial\nu}v-\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\right)d\sigma.\qquad(4.19)$$
Notice that the minus sign appeared since the outside of $\Omega\backslash B_\varepsilon(x)$ is the inside of $B_\varepsilon(x)$. Now let us separately consider the last two terms in (4.19). We find that
$$\lim_{\varepsilon\downarrow0}\int_{\partial B_\varepsilon(x)}F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma=\begin{cases}
\displaystyle\lim_{\varepsilon\downarrow0}\int_{|\omega|=1}\frac{-1}{2\pi}\log\varepsilon\ \frac{\partial}{\partial\nu}v(x+\varepsilon\omega)\,\varepsilon\,d\omega=0&\text{if }n=2,\\[8pt]
\displaystyle\lim_{\varepsilon\downarrow0}\int_{|\omega|=1}\frac{\varepsilon^{2-n}}{\omega_n(n-2)}\ \frac{\partial}{\partial\nu}v(x+\varepsilon\omega)\,\varepsilon^{n-1}\,d\omega=0&\text{if }n\ge3,\end{cases}$$
and, since $\frac{\partial}{\partial\nu}F_n(x-y)=-\omega_n^{-1}\varepsilon^{1-n}$ on $\partial B_\varepsilon(x)$ for all $n\ge2$, it follows that
$$\lim_{\varepsilon\downarrow0}\int_{\partial B_\varepsilon(x)}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma
=\lim_{\varepsilon\downarrow0}\int_{|\omega|=1}-\omega_n^{-1}\varepsilon^{1-n}\,v(x+\varepsilon\omega)\,\varepsilon^{n-1}\,d\omega=-v(x).$$
Combining these results we have found, assuming that $x\in\Omega$:
$$v(x)=\int_\Omega F_n(x-y)\left(-\Delta v(y)\right)dy+\int_{\partial\Omega}F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma-\int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma.$$
If $x\notin\bar\Omega$ then we may directly use (4.18) to find
$$0=\int_\Omega F_n(x-y)\left(-\Delta v(y)\right)dy+\int_{\partial\Omega}F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma-\int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma.$$

Example 4.5.3 Suppose that we want to find a solution for
$$\begin{cases}-\Delta v=f&\text{on }\mathbb R^+\times\mathbb R^{n-1},\\ v=\varphi&\text{on }\{0\}\times\mathbb R^{n-1},\end{cases}$$
say for $f\in C_0^\infty(\mathbb R^+\times\mathbb R^{n-1})$ and $\varphi\in C_0^\infty(\mathbb R^{n-1})$. Introducing $x^*=(-x_1,x_2,\dots,x_n)$ we find that for $x\in\mathbb R^+\times\mathbb R^{n-1}$:
$$v(x)=\int_\Omega F_n(x-y)\left(-\Delta v(y)\right)dy+\int_{\partial\Omega}F_n(x-y)\frac{\partial}{\partial\nu}v\,d\sigma-\int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x-y)\right)v\,d\sigma,\qquad(4.20)$$
$$0=\int_\Omega F_n(x^*-y)\left(-\Delta v(y)\right)dy+\int_{\partial\Omega}F_n(x^*-y)\frac{\partial}{\partial\nu}v\,d\sigma-\int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}F_n(x^*-y)\right)v\,d\sigma.\qquad(4.21)$$
On the boundary, that is for $y_1=0$, it holds that $F_n(x-y)=F_n(x^*-y)$ and
$$\frac{\partial}{\partial\nu}F_n(x-y)=\frac{\partial}{\partial x_1}F_n(x-y)=-\frac{\partial}{\partial x_1}F_n(x^*-y)=-\frac{\partial}{\partial\nu}F_n(x^*-y).$$
So by subtracting (4.20)-(4.21) and setting $G(x,y)=F_n(x-y)-F_n(x^*-y)$ we find
$$v(x)=\int_\Omega G(x,y)\left(-\Delta v(y)\right)dy-\int_{\{0\}\times\mathbb R^{n-1}}\left(\frac{\partial}{\partial\nu}G(x,y)\right)v(0,y_2,\dots,y_n)\,dy_2\dots dy_n.$$
Since both $-\Delta v$ and $v|_{\{0\}\times\mathbb R^{n-1}}$ are given, we might have some hope to have found a solution, namely by
$$v(x)=\int_{\mathbb R^+\times\mathbb R^{n-1}}G(x,y)f(y)\,dy-\int_{\{0\}\times\mathbb R^{n-1}}\left(\frac{\partial}{\partial\nu}G(x,y)\right)\varphi(y_2,\dots,y_n)\,dy_2\dots dy_n.$$
Indeed this will be a solution, but this conclusion does need some proof.

Exercise 73 Show that the $v$ in the previous example is indeed a solution.

Definition 4.5.4 Let $\Omega\subset\mathbb R^n$ be a domain. A function $G:\bar\Omega\times\bar\Omega\to\mathbb R$ such that
$$v(x)=\int_\Omega G(x,y)f(y)\,dy-\int_{\partial\Omega}\left(\frac{\partial}{\partial\nu}G(x,y)\right)\varphi(y)\,d\sigma\qquad(4.22)$$
is a solution of
$$\begin{cases}-\Delta v=f&\text{in }\Omega,\\ v=\varphi&\text{on }\partial\Omega,\end{cases}$$
is called a Green function for $\Omega$.
Exercise 74 Let $f\in C_0^\infty(\mathbb R^+\times\mathbb R^+\times\mathbb R^{n-2})$ and $\varphi_a,\varphi_b\in C_0^\infty(\mathbb R^+\times\mathbb R^{n-2})$. Find a solution for
$$\begin{cases}-\Delta v(x)=f(x)&\text{for }x\in\mathbb R^+\times\mathbb R^+\times\mathbb R^{n-2},\\ v(0,x_2,x_3,\dots,x_n)=\varphi_a(x_2,x_3,\dots,x_n)&\text{for }x_2\in\mathbb R^+,\ x_k\in\mathbb R\text{ and }k\ge3,\\ v(x_1,0,x_3,\dots,x_n)=\varphi_b(x_1,x_3,\dots,x_n)&\text{for }x_1\in\mathbb R^+,\ x_k\in\mathbb R\text{ and }k\ge3.\end{cases}$$
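The reflected kernel $G(x,y)=F_n(x-y)-F_n(x^*-y)$ from Example 4.5.3 can be checked numerically. The following sketch (my own illustration, taking $n=3$ so that $F_3(z)=1/(4\pi|z|)$, with arbitrarily chosen test points) verifies that $G$ vanishes when $y$ lies on the boundary $\{y_1=0\}$ and is positive at interior points:

```python
import numpy as np

def F3(z):
    """Fundamental solution of -Laplace in R^3: omega_3 (3-2) = 4 pi."""
    return 1.0/(4*np.pi*np.linalg.norm(z))

def G(x, y):
    """Green function of the half space {x1 > 0} by reflection."""
    xs = x.copy()
    xs[0] = -xs[0]                     # x* = (-x1, x2, x3)
    return F3(x - y) - F3(xs - y)

x      = np.array([1.0, 0.3, -0.2])    # a point in the half space
y_bdry = np.array([0.0, 2.0, 1.0])     # boundary point: y1 = 0
y_int  = np.array([0.5, 0.1, 0.0])     # interior point

print(G(x, y_bdry), G(x, y_int))
```

With $y_1=0$ one has $|x-y|=|x^*-y|$, so the two terms cancel exactly, while for interior $y$ the reflected source is farther away and $G>0$.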
Exercise 75 Let $n\in\{2,3,\dots\}$ and define
$$\tilde F_n(x,y)=\begin{cases}F_n\!\left(|y|\,x-\dfrac y{|y|}\right)&\text{for }y\neq0,\\[6pt] F_n\left((1,0,\dots,0)\right)&\text{for }y=0.\end{cases}$$
1. Show that $\tilde F_n(x,y)=\tilde F_n(y,x)$ for all $x,y\in\mathbb R^n$ with $y\neq|x|^{-2}x$.
2. Show that $y\mapsto\tilde F_n(x,y)$ is harmonic on $\mathbb R^n\backslash\left\{|x|^{-2}x\right\}$.
3. Set $G(x,y)=F_n(x-y)-\tilde F_n(x,y)$ and $B=\{x\in\mathbb R^n\,;\,|x|<1\}$. Show that
$$v(x)=\int_BG(x,y)f(y)\,dy-\int_{\partial B}\left(\frac{\partial}{\partial\nu}G(x,y)\right)\varphi(y)\,d\sigma$$
is a solution of
$$\begin{cases}-\Delta v=f&\text{in }B,\\ v=\varphi&\text{on }\partial B.\end{cases}\qquad(4.23)$$
You may assume that $f\in C^\infty(\bar B)$ and $\varphi\in C^\infty(\partial B)$.

Exercise 76 Consider (4.23) for $\varphi=0$ and let $G$ be the Green function of the previous exercise.
1. Let $p>\tfrac12n$. Show that $\mathcal G:L^p(B)\to L^\infty(B)$ defined by
$$(\mathcal Gf)(x):=\int_BG(x,y)f(y)\,dy\qquad(4.24)$$
is a bounded operator.
2. Let $p>n$. Show that $\mathcal G:L^p(B)\to W^{1,\infty}(B)$ defined as in (4.24) is a bounded operator.
3. Try to improve one of the above estimates.
4. Compare these results with the results of section 2.5.

Exercise 77 Suppose that $f$ is real analytic and consider
$$\begin{cases}-\Delta v=f&\text{in }B,\\ v=0&\text{on }\partial B,\\ \frac{\partial}{\partial\nu}v=0&\text{on }\partial B.\end{cases}\qquad(4.25)$$
1. What may we conclude from Cauchy-Kowalevski?
2. Suppose that $f=1$. Compute the 'Cauchy-Kowalevski' solution.
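The two identities behind Exercise 75 (the boundary cancellation and the symmetry of $\tilde F_n$) can be sanity-checked numerically. The following sketch is my own illustration for $n=3$, with arbitrarily chosen points inside the unit ball:

```python
import numpy as np

def F3(z):
    """Fundamental solution of -Laplace in R^3."""
    return 1.0/(4*np.pi*np.linalg.norm(z))

def G(x, y):
    """Green function of the unit ball in R^3 (Exercise 75):
    G(x,y) = F3(x - y) - F3(|y| x - y/|y|)."""
    ry = np.linalg.norm(y)
    return F3(x - y) - F3(ry*x - y/ry)

y = np.array([0.3, -0.1, 0.2])                 # interior point

# 1. G vanishes for |x| = 1, since then | |y|x - y/|y| | = |x - y|:
x = np.array([1.0, 0.0, 0.0])
g_bdry = G(x, y)

# 2. Symmetry G(x,y) = G(y,x) at interior points:
a = np.array([0.5, 0.2, -0.3])
b = np.array([-0.1, 0.4, 0.2])
sym_gap = G(a, b) - G(b, a)

print(g_bdry, sym_gap)
```

Both printed numbers are zero up to rounding, in line with parts 1 and 3 of the exercise.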
Week 5
Some classics for a unique solution
5.1 Energy methods
Suppose that we are considering the heat equation or wave equation with the space variable $x$ in a bounded domain $\Omega$, that is,
$$\begin{cases}\text{p.d.e.:}&u_t=\Delta u\quad\text{in }\Omega\times\mathbb R^+,\\ \text{initial c.:}&u(x,0)=\phi_0(x)\quad\text{for }x\in\Omega,\\ \text{boundary c.:}&u(x,t)=g(x,t)\quad\text{for }x\in\partial\Omega\times\mathbb R^+,\end{cases}\qquad(5.1)$$
or
$$\begin{cases}\text{p.d.e.:}&u_{tt}=c^2\Delta u\quad\text{in }\Omega\times\mathbb R^+,\\ \text{initial c.:}&u(x,0)=\phi_0(x),\ \ u_t(x,0)=\phi_1(x)\quad\text{for }x\in\Omega,\\ \text{boundary c.:}&u(x,t)=g(x,t)\quad\text{for }x\in\partial\Omega\times\mathbb R^+.\end{cases}\qquad(5.2)$$
Then it is not clear by Cauchy-Kowalevski that these problems are well-posed. Only locally near $(x,0)\in\Omega\times\mathbb R$ is there a unique solution, for the second problem at least, when the $\phi_i$ are real analytic. When $\Omega$ is a special domain we may construct solutions of both problems above by an appropriate orthonormal system. Even for more general domains there are ways of establishing a solution. The question we want to address in this paragraph is whether or not such a solution is unique. For the two problems above one may proceed by considering an energy functional.

- For (5.1) this is
$$E(u)(t)=\tfrac12\int_\Omega u(x,t)^2\,dx.$$
This quantity is called the thermal energy of the body $\Omega$ at time $t$. Now let $u_1$ and $u_2$ be two solutions of (5.1) and set $u=u_1-u_2$. This $u$ is a solution of (5.1) with $\phi_0=0$ and $g=0$. Since $u$ satisfies $0$ initial and boundary conditions we obtain
$$\frac{\partial}{\partial t}E(u)(t)=\frac{\partial}{\partial t}\,\tfrac12\int_\Omega u(x,t)^2\,dx=\int_\Omega u_t(x,t)\,u(x,t)\,dx=\int_\Omega\Delta u(x,t)\,u(x,t)\,dx=-\int_\Omega|\nabla u(x,t)|^2\,dx\le0.$$
So $t\mapsto E(u)(t)$ decreases, is nonnegative and starts with $E(u)(0)=0$. Hence $E(u)(t)\equiv0$ for all $t\ge0$, which implies that $u(x,t)\equiv0$.

Lemma 5.1.1 Suppose $\Omega$ is a bounded domain in $\mathbb R^n$ with $\partial\Omega\in C^1$ and let $T>0$. Then $C^2\left(\bar\Omega\times[0,T]\right)$ solutions of (5.1) are unique.

- For (5.2) the appropriate functional is
$$E(u)(t)=\tfrac12\int_\Omega\left(c^2|\nabla u(x,t)|^2+u_t(x,t)^2\right)dx.$$
This quantity is called the energy of the body $\Omega$ at time $t$:
  - $\tfrac12\int_\Omega c^2|\nabla u(x,t)|^2\,dx$ is the potential energy due to the deviation from the equilibrium;
  - $\tfrac12\int_\Omega u_t(x,t)^2\,dx$ is the kinetic energy.

Again we suppose that there are two solutions and set $u=u_1-u_2$, such that $u$ satisfies $0$ initial and boundary conditions. Now (note that we use that $u$ is twice differentiable)
$$\frac{\partial}{\partial t}E(u)(t)=\int_\Omega\left(c^2\nabla u(x,t)\cdot\nabla u_t(x,t)+u_t(x,t)\,u_{tt}(x,t)\right)dx=\int_\Omega\left(-c^2\Delta u(x,t)+u_{tt}(x,t)\right)u_t(x,t)\,dx=0.$$
As before we may conclude that $E(u)(t)\equiv0$ for all $t\ge0$. This implies that $\nabla u(x,t)=0$ for all $x\in\Omega$ and $t\ge0$. Since $u(x,t)=0$ for $x\in\partial\Omega$ we find that $u(x,t)=0$. Indeed, let $x^*\in\partial\Omega$ be the point closest to $x$; then
$$u(x,t)=u(x^*,t)+\int_0^1\nabla u\left(\theta x+(1-\theta)x^*,t\right)\cdot(x-x^*)\,d\theta=0.$$

Lemma 5.1.2 Suppose $\Omega$ is a bounded domain in $\mathbb R^n$ with $\partial\Omega\in C^1$ and let $T>0$. Then $C^2\left(\bar\Omega\times[0,T]\right)$ solutions of (5.2) are unique.

- In a closely related way one can even prove uniqueness for
$$\begin{cases}-\Delta u=f&\text{in }\Omega,\\ u=g&\text{on }\partial\Omega.\end{cases}\qquad(5.3)$$
Suppose that there are two solutions and call the difference $u$. This $u$ satisfies (5.3) with $f$ and $g$ both equal to $0$. So
$$\int_\Omega|\nabla u|^2\,dx=-\int_\Omega\Delta u\;u\,dx=0,$$
and $\nabla u=0$ implies, since $u|_{\partial\Omega}=0$, that $u=0$.
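The decay of the thermal energy can be observed in a discretization. The following sketch (my own illustration; the grid, time step and initial datum are arbitrary choices, with the time step chosen inside the stability bound $dt\le dx^2/2$ of the explicit scheme) tracks $E(u)(t)=\tfrac12\int u^2\,dx$ for the heat equation on $(0,1)$ with zero boundary data:

```python
import numpy as np

n = 200
dx = 1.0/n
x = np.linspace(0, 1, n + 1)
u = np.sin(3*np.pi*x) + 0.5*np.sin(7*np.pi*x)   # some initial datum
dt = 0.4*dx**2                                   # stable: dt <= dx^2/2

def energy(v):
    return 0.5*np.sum(v**2)*dx

energies = [energy(u)]
for _ in range(2000):
    u[1:-1] += dt*(u[2:] - 2*u[1:-1] + u[:-2])/dx**2   # explicit Euler step
    u[0] = u[-1] = 0.0                                  # boundary condition
    energies.append(energy(u))

diffs = np.diff(energies)
print(energies[0], energies[-1], diffs.max())
```

The recorded energies decrease monotonically, which is the discrete counterpart of $\frac{\partial}{\partial t}E(u)(t)\le0$ above.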
Lemma 5.1.3 Suppose $\Omega$ is a bounded domain in $\mathbb R^n$ with $\partial\Omega\in C^1$. Then $C^2\left(\bar\Omega\right)$ solutions of (5.3) are unique.

Exercise 78 Why does the condition '$\Omega$ is bounded' appear in the three lemmas above?

Exercise 79 Can we prove a similar uniqueness result for the following b.v.p.?
$$\begin{cases}u_t=\Delta u&\text{in }\Omega\times\mathbb R^+,\\ u(x,0)=\phi_0(x)&\text{for }x\in\Omega,\\ \frac{\partial}{\partial n}u(x,t)=g(x)&\text{for }x\in\partial\Omega\times\mathbb R^+.\end{cases}\qquad(5.4)$$

Exercise 80 And for this one?
$$\begin{cases}u_{tt}=c^2\Delta u&\text{in }\Omega\times\mathbb R^+,\\ u(x,0)=\phi_0(x),\ u_t(x,0)=\phi_1(x)&\text{for }x\in\Omega,\\ \frac{\partial}{\partial n}u(x,t)=g(x)&\text{for }x\in\partial\Omega\times\mathbb R^+.\end{cases}\qquad(5.5)$$

Exercise 81 Let $\Omega=\{(x_1,x_2)\,;\,-1<x_1,x_2<1\}$ and let $\Gamma_\ell=\{(-1,s)\,;\,-1<s<1\}$ and $\Gamma_r=\{(1,s)\,;\,-1<s<1\}$. Which of the following problems has at most one solution in $C^2\left(\bar\Omega\right)$? And which one has at least one solution in $W^{2,2}(\Omega)$?

A: $-\Delta u=f$ in $\Omega$, with $u=0$ on $\partial\Omega$;
B: $-\Delta u=f$ in $\Omega$, with $\frac{\partial}{\partial n}u=0$ on $\partial\Omega$;
C: $-\Delta u=f$ in $\Omega$, with $u=0$ on $\Gamma_\ell$ and $\frac{\partial}{\partial n}u=0$ on $\partial\Omega\backslash\Gamma_\ell$;
D: $-\Delta u=f$ in $\Omega$, with $\frac{\partial}{\partial n}u=0$ on $\Gamma_\ell$ and $u=0$ on $\partial\Omega\backslash\Gamma_\ell$;
E: $-\Delta u=f$ in $\Omega\backslash\{0\}$, with $u=0$ on $\partial\Omega$ and $u(0)=0$;
F: $-\Delta u=f$ in $\Omega$, with $\frac{\partial}{\partial n}u=0$ on $\Gamma_\ell\cup\Gamma_r$ and $u=0$ on $\partial\Omega\backslash\left(\Gamma_\ell\cup\Gamma_r\right)$.

Exercise 82 For which of the problems in the previous exercise can you find $f\in C^\infty\left(\bar\Omega\right)$ such that no solution in $C^2\left(\bar\Omega\right)$ exists?

Let $f\in C^\infty\left(\bar\Omega\right)$.
1. Can you find a condition on $\lambda$ such that for the solution $u$ of
$$\begin{cases}u_t=\Delta u+\lambda u&\text{in }\Omega\times\mathbb R^+,\\ u(x,0)=\phi_0(x)&\text{for }x\in\Omega,\\ u(x,t)=0&\text{for }x\in\partial\Omega\times\mathbb R^+,\end{cases}\qquad(5.6)$$
the norm $\|u(\cdot,t)\|_{L^2(\Omega)}$ remains bounded for $t\to\infty$?
2. Can you find a condition on $f$ such that for the solution $u$ of
$$\begin{cases}u_t=\Delta u+f(u)&\text{in }\Omega\times\mathbb R^+,\\ u(x,0)=\phi_0(x)&\text{for }x\in\Omega,\\ u(x,t)=0&\text{for }x\in\partial\Omega\times\mathbb R^+,\end{cases}\qquad(5.7)$$
the norm $\|u(\cdot,t)\|_{L^2(\Omega)}$ remains bounded for $t\to\infty$?
Exercise 83 Here is a model for a string with some damping:
$$\begin{cases}u_{tt}+au_t=c^2u_{xx}&\text{in }(0,\ell)\times\mathbb R^+,\\ u(x,0)=\phi(x),\ u_t(x,0)=0&\text{for }0<x<\ell,\\ u(x,t)=0&\text{for }x\in\{0,\ell\}\text{ and }t\ge0.\end{cases}\qquad(5.8)$$
Find a bound like $\|u(\cdot,t)\|_{L^2}\le Me^{-bt}$.

Exercise 84 A more realistic model is
$$\begin{cases}u_{tt}+a|u_t|\,u_t=c^2u_{xx}&\text{in }(0,\ell)\times\mathbb R^+,\\ u(x,0)=\phi(x),\ u_t(x,0)=0&\text{for }0<x<\ell,\\ u(x,t)=0&\text{for }x\in\{0,\ell\}\text{ and }t\ge0.\end{cases}\qquad(5.9)$$
Can you show that $u(\cdot,t)\to0$ for $t\to\infty$ in some norm?
j=1 ∂ ∂ u(x0 ) ≥ Lu(x0 ) > 0.2. Show that if L as in (5.2. say in x0 . n} . ¡ ¢ ¯ ¯ For u ∈ C Ω we deﬁne u+ ∈ C(Ω) by u+ (x) = max [0. If x0 ∈ Ω then ∇u(x0 ) = 0. ∂2 If x0 ∈ Ω then in a maximum ∂y2 U (T x0 ) ≤ 0 for all i ∈ {1. Deﬁnition 5.Proof. −u(x)] . If x0 ∈ Γ then by assumption b · ν ≤ 0 and ∇u(x0 ) = γν for some nonnegative constant γ.10) is ¯ elliptic for all x ∈ Ω then L is strictly elliptic on Ω as in (5. ∂xi ∂xj As before we may diagonalize (aij (x0 )) = T t DT and ﬁnd in the new coordinates y = T x with U (y) = u(x) that n X i=1 d2 ii ∂2 2 U (T x0 ) > 0 ∂yi with dii ≥ 0 by our assumption in (5. The proof of this theorem relies on a smartly chosen auxiliary function. We ﬁnd that n X aij (x0 ) i. (5. Supposing that Ω ⊂ BR (0) we consider w deﬁned by w(x) = u(x) + εeαx1 90 .13).j=1 aij (x) ξ i ξ j ≥ λ ξ2 for all x ∈ Ω and ξ ∈ Rn .5 The operator L as in (5. If u ∈ C 2 (Ω)∩C(Ω) and Lu ≥ 0 in Ω.14). In the remainder we will restrict ourselves to the strictly elliptic case. n} and we ﬁnd i n X i=1 d2 ii ∂2 2 U (T x0 ) ≤ 0. then the maximum of u+ is attained at the boundary. Our i proof is saved since the ﬁrst component is killed through d11 = 0 and we still ﬁnd the contradiction by (5. In a maximum we still have ∂y2 U (T x0 ) ≤ 0 for all i ∈ {2. Proof.6 (Weak Maximum Principle) Let Ω ⊂ Rn be a bounded do¯ main and suppose that L is strictly elliptic on Ω with c ≤ 0. Also give an example of an elliptic L that is not strictly elliptic.14) Exercise 85 Suppose that Ω ⊂ Rn is bounded.13) a contradiction. Theorem 5. Suppose that u does attain a nonnegative maximum in Ω ∪ Γ. If x0 ∈ Γ then ν is an eigenvector of (aij (x0 )) with eigenvalue zero and hence we may use this as our ﬁrst new basis element and one for which we have ∂2 d11 = 0.11). ∂yi (5.10) is called strictly elliptic on Ω if there is λ > 0 such that n X i. u(x)] and u− (x) = max [0. So in both case b(x0 ) · ∇u(x0 ) + c(x0 )u(x0 ) ≤ 0.
with ε > 0. One finds that

    Lw(x) = Lu(x) + ε (α² a₁₁(x) + α b₁(x) + c(x)) e^{αx₁}.

Since a₁₁(x) ≥ λ by the strict ellipticity assumption and since b₁, c ∈ C(Ω̄) with Ω bounded, we may choose α such that

    α² a₁₁(x) + α b₁(x) + c(x) > 0.

The result follows for w from the previous Lemma and hence

    sup_Ω u ≤ sup_Ω w ≤ sup_Ω w⁺ ≤ sup_∂Ω w⁺ ≤ sup_∂Ω u⁺ + ε e^{αR}.

Letting ε ↓ 0 the claim follows.

Exercise 86 State and prove a weak maximum principle for L = −∂/∂t + ∆ on Ω × (0,T).

Theorem 5.2.7 (Strong Maximum Principle) Let Ω ⊂ R^n be a domain and suppose that L is strictly elliptic on Ω with c ≤ 0. If u ∈ C²(Ω) ∩ C(Ω̄) and Lu ≥ 0 in Ω, then either

1. u does not attain a nonnegative maximum in Ω, or
2. u ≡ sup{u(x); x ∈ Ω̄}.

For the formulation of a boundary version of the strong maximum principle we need a condition on Ω.

Definition 5.2.8 A domain Ω ⊂ R^n satisfies the interior sphere condition at x₀ ∈ ∂Ω if there is an open ball B such that B ⊂ Ω and x₀ ∈ ∂B.

Theorem 5.2.9 (Hopf's boundary point lemma) Let Ω ⊂ R^n be a domain that satisfies the interior sphere condition at x₀ ∈ ∂Ω. Let L be strictly elliptic on Ω with c ≤ 0. If u ∈ C²(Ω) ∩ C(Ω̄) satisfies Lu ≥ 0 in Ω and is such that max{u(x); x ∈ Ω̄} = u(x₀) ≥ 0, then either

1. u ≡ u(x₀) on Ω̄, or
2. liminf_{t↓0} (u(x₀) − u(x₀ + tμ))/t > 0 (possibly +∞) for every direction μ pointing into an interior sphere.

Remark 5.2.10 If u ∈ C¹(Ω ∪ {x₀}) then

    ∂u/∂μ (x₀) = −liminf_{t↓0} (u(x₀) − u(x₀ + tμ))/t < 0,

which means for the outward normal ν

    ∂u/∂ν (x₀) > 0.
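The weak maximum principle of Theorem 5.2.6 has an exact discrete analogue that is easy to check numerically. The sketch below (not part of the notes; the grid size, the boundary data g and the number of sweeps are arbitrary choices) relaxes ∆u = 0 on the unit square by Jacobi iteration and verifies that the maximum over the closed square is attained on the boundary. Since every interior update is an average of neighbouring values, the inequality holds for the iterates as well, not only in the limit.

```python
import math

# Discrete weak maximum principle for the Laplacian on [0,1]^2 (illustrative
# sketch; grid size, boundary data and sweep count are arbitrary choices).
N = 21                      # grid points per side
h = 1.0 / (N - 1)

def g(x, y):                # boundary data (any continuous choice works)
    return math.sin(math.pi * x) * math.exp(y)

# initialize: boundary values from g, interior values zero
u = [[g(i * h, j * h) for j in range(N)] for i in range(N)]
for i in range(1, N - 1):
    for j in range(1, N - 1):
        u[i][j] = 0.0

# Jacobi sweeps: each interior value becomes the average of its 4 neighbours
for _ in range(500):
    v = [row[:] for row in u]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            u[i][j] = 0.25 * (v[i-1][j] + v[i+1][j] + v[i][j-1] + v[i][j+1])

interior_max = max(u[i][j] for i in range(1, N - 1) for j in range(1, N - 1))
boundary_max = max(max(u[0]), max(u[N-1]),
                   max(u[i][0] for i in range(N)),
                   max(u[i][N-1] for i in range(N)))
assert interior_max <= boundary_max
```

The same experiment with c ≤ 0 replaced by a positive zeroth-order coefficient would show how the sign condition in the theorem can fail.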
Exercise 87 Suppose that Ω = B_r(x_a) ∪ B_r(x_b) with r = 2|x_a − x_b|. Let L be strictly elliptic on Ω̄ with c ≤ 0 and assume that u ∈ C²(Ω) ∩ C(Ω̄) satisfies

    Lu ≥ 0 in Ω,
    u = 0 on ∂Ω.

Show that if u ∈ C¹(Ω̄) then u ≡ 0 in Ω̄. Hint: consider some x₀ ∈ ∂B_r(x_a) ∩ ∂B_r(x_b).

Exercise 88 Let Ω be a bounded domain in R^n and let L be strictly elliptic on Ω (and we put no sign restriction on c). Suppose that there is a function v ∈ C²(Ω) ∩ C(Ω̄) such that −Lv ≥ 0 in Ω and v > 0 on Ω̄. Show that any function w ∈ C²(Ω) ∩ C(Ω̄) such that

    −Lw ≥ 0 in Ω,
    w ≥ 0 on ∂Ω,

is nonnegative.

Exercise 89 Let Ω be a bounded domain in R^n and let L be strictly elliptic on Ω with c ≤ 0. Let f and φ be given functions. Show that

    −Lu = f in Ω,
    u = φ on ∂Ω,

has at most one solution in C²(Ω) ∩ C(Ω̄).
5.3 Proof of the strong maximum principle

Set m = sup_Ω u and set Σ = {x ∈ Ω; u(x) = m}. We have to prove that when m ≥ 0 then either Σ is empty or Σ = Ω. We argue by contradiction and assume that both Σ and Ω\Σ are nonempty, say Ω\Σ ∋ x₁ and Σ ∋ x₂. Since u is continuous, Ω\Σ is open.

I. Constructing an appropriate subdomain.

First we will construct a ball in Ω that touches Σ exactly once. Move along the arc from x₁ to x₂ and there will be a first x₃ on this arc that lies in Σ. Set s the minimum of |x₁ − x₃| and the distance of x₃ to ∂Ω. Take x₄ on the arc between x₁ and x₃ such that |x₃ − x₄| < ½s. Next we take B_{r₁}(x₄) to be the largest ball around x₄ that is contained in Ω\Σ. Since |x₃ − x₄| < ½s we find that r₁ ≤ ½s and that there is at least one point of Σ on ∂B_{r₁}(x₄). Let us call such a point x₅. The final step of the construction is to set x* = ½x₄ + ½x₅ and r = ½r₁. The ball B_r(x*) is contained in Ω\Σ and ∂B_r(x*) ∩ Σ = {x₅}. We will also use the ball B_{½r}(x₅). See the pictures below.
II. The auxiliary function.

Next we set for α > 0 the auxiliary function

    h(x) = e^{−(α/2)|x−x*|²} − e^{−(α/2)r²}.

One should notice that this function is tailored in such a way that h = 0 for x ∈ ∂B_r(x*), positive inside B_r(x*) and negative outside of B_r(x*). Moreover h(x) ≤ 1 on B_r(x*).

On B_{½r}(x₅) one finds, using that |x − x*| ∈ [½r, (3/2)r] there, that

    Lh = e^{−(α/2)|x−x*|²} ( α² Σ_{i,j=1}^n a_ij (x_i − x*_i)(x_j − x*_j) − α Σ_{i=1}^n (a_ii + b_i (x_i − x*_i)) + c ) − c(x) e^{−(α/2)r²}
       ≥ e^{−(α/2)|x−x*|²} ( α² λ (½r)² − α Σ_{i=1}^n (a_ii(x) + |b_i(x)| (3/2)r) + c ),

and by choosing α large enough we find Lh > 0 in B_{½r}(x₅).

III. Deriving a contradiction.

Now the subdomain that we will use to derive a contradiction is B_{½r}(x₅). The boundary of this set consists of two parts:

    Γ₁ = ∂B_{½r}(x₅) ∩ B_r(x*) and Γ₂ = ∂B_{½r}(x₅)\Γ₁.

Since Γ₁ is closed, since u|_{Γ₁} < m by construction and since u is continuous we find

    sup{u(x); x ∈ Γ₁} = max{u(x); x ∈ Γ₁} =: m̃ < m.

As before we consider w = u + εh. On Γ₂ we have h ≤ 0 and hence w = u + εh ≤ u ≤ m. Next we choose ε > 0 such that

    ε = ½(m − m̃).

As a consequence (using h ≤ 1 on Γ₁) we find that

    sup{w(x); x ∈ ∂B_{½r}(x₅)} < m.

Since w(x₅) = u(x₅) = m and also Lw > 0 in B_{½r}(x₅) we obtain a contradiction with Theorem 5.2.6.
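The sign pattern of the auxiliary function h that step III exploits is easy to confirm numerically. The following one-line check (the values of α and r are arbitrary choices, not taken from the notes) verifies that h vanishes on |x − x*| = r, is positive inside, negative outside, and bounded by 1:

```python
import math

# Sign pattern of h(x) = exp(-a/2 |x-x*|^2) - exp(-a/2 r^2) as a function of
# the distance d = |x - x*|; alpha and r are arbitrary positive test values.
alpha, r = 5.0, 1.0

def h(d):
    return math.exp(-alpha / 2 * d * d) - math.exp(-alpha / 2 * r * r)

assert abs(h(r)) < 1e-15          # zero on the sphere |x - x*| = r
assert h(0.5 * r) > 0             # positive inside B_r(x*)
assert h(1.5 * r) < 0             # negative outside B_r(x*)
assert h(0.0) <= 1.0              # h <= 1 on the closed ball
```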
5.4 Alexandrov's maximum principle

Alexandrov's maximum principle gives a first step towards regularity. Here the diameter of Ω will play a role:

    diam(Ω) = sup{|x − y|; x, y ∈ Ω}.

Before we are able to state the result we need a definition.

Definition 5.4.1 For u ∈ C(Ω̄) with Ω a domain in R^n, the upper contact set Γ⁺ is defined by

    Γ⁺ = {y ∈ Ω; ∃ a_y ∈ R^n such that ∀x ∈ Ω: u(x) ≤ u(y) + a_y·(x − y)}.

Next we state a lemma concerning this upper contact set.

Lemma 5.4.2 Suppose that Ω is bounded. Let g ∈ C(R^n) be nonnegative and u ∈ C(Ω̄) ∩ C²(Ω). Set

    M = (diam(Ω))^{−1} ( sup_Ω u − sup_∂Ω u ).    (5.15)

Then

    ∫_{B_M(0)⊂R^n} g(z) dz ≤ ∫_{Γ⁺} g(∇u(x)) |det( ∂²u/(∂x_i ∂x_j)(x) )| dx.    (5.16)

Proof. Let Σ ⊂ R^n be the following set:

    Σ = {∇u(x); x ∈ Γ⁺}.

If ∇u: Γ⁺ → Σ is a bijection then

    ∫_Σ g(z) dz = ∫_{Γ⁺} g(∇u(x)) |det( ∂²u/(∂x_i ∂x_j)(x) )| dx

is just a consequence of a change of variables. One cannot expect that this holds, but since the mapping is onto and since g ≥ 0 one finds

    ∫_Σ g(z) dz ≤ ∫_{Γ⁺} g(∇u(x)) |det( ∂²u/(∂x_i ∂x_j)(x) )| dx.
So if we can show that B_M(0) ⊂ Σ we are done. In other words, it is sufficient to show that for every a ∈ B_M(0) there is y ∈ Γ⁺ such that a = ∇u(y). Set

    ℓ(a) = min_{x∈Ω̄} ( a·x − u(x) ),

which is attained at some y_a since Ω̄ is closed and bounded and u is continuous. So we have

    a·y_a − u(y_a) ≤ a·x − u(x) for all x ∈ Ω̄,

which implies that

    u(y_a) ≥ u(x) + a·(y_a − x) for all x ∈ Ω̄.    (5.17)

By taking x₀ ∈ Ω̄ such that u(x₀) = max_{x∈Ω̄} u(x) one obtains

    u(y_a) ≥ u(x₀) + a·(y_a − x₀) = sup_Ω u + a·(y_a − x₀)
           = sup_∂Ω u + M diam(Ω) + a·(y_a − x₀)
           > sup_∂Ω u + M diam(Ω) − M |y_a − x₀| ≥ sup_∂Ω u.

So y_a ∉ ∂Ω and hence y_a ∈ Ω. Since (5.17) holds and since u is differentiable in y_a we find that a = ∇u(y_a). By this construction we even have that y_a ∈ Γ⁺.

Corollary 5.4.3 Suppose that Ω is bounded and that u ∈ C(Ω̄) ∩ C²(Ω). Denote the volume of the unit ball in R^n by ω*_n and set

    D*(x) = ( det( a_ij(x) ) )^{1/n}.

Then

    sup_Ω u ≤ sup_∂Ω u + (diam(Ω)/(ω*_n)^{1/n}) ‖ (−Σ_{i,j=1}^n a_ij(·) ∂i∂j u(·)) / (n D*(·)) ‖_{L^n(Γ⁺)}.

Remark 5.4.4 Replacing Ω with Ω⁺ = {x ∈ Ω; u(x) > 0} one finds

    sup_Ω u ≤ sup_∂Ω u⁺ + (diam(Ω)/(ω*_n)^{1/n}) ‖ (−Σ_{i,j=1}^n a_ij(·) ∂i∂j u(·)) / (n D*(·)) ‖_{L^n(Γ⁺∩Ω⁺)}.

Proof. Note that

    Σ_{i,j=1}^n a_ij(x) ∂²u/(∂x_i ∂x_j)(x) = trace(AD)

with the matrices A and D defined by

    A = ( a_ij(x) ) and D = ( ∂²u/(∂x_i ∂x_j)(x) ).
4.12) with c ≤ 0. x i we may consider the at x diagonalized elliptic operator. Writing i 2 ˜ Since for T orthonormal the eigenvalues of AD and ΛD coincide and ³ ´ ˜ trace (AD) = trace ΛD ³ ´ ˜ det (AD) = det ΛD x ∂ µi = −λi (¯) ∂y2 (v ◦ T ) (y) for the the eigenvalues of −AD by µi and using that i the geometric mean is less then the arithmetic mean (for positive numbers) we obtain Ã n !1/n Y 1/n 1/n ∗ D (x) det (−D) = det (−AD) = µi ≤ i=1 ≤ n 1 1X µ = trace (−AD) .j=1 aij (¯) ∂xi ∂xj by choosing new orthonormal coordinates y at this point x (the new coordinates will be diﬀerent ¯ at each point x but that doesn’t matter since for the moment we are just using ¯ linear algebra at this ﬁxed point x) which ﬁt the eigenvalues of (aij (x)) . So ¯ ˜ AD = T t ΛDT with diagonal matrices µ ¶ µ ¶ ∂2 ˜ Λ = λi (¯) and D = ∂y2 (v ◦ T ) (y) .18) Theorem 5.15) we have µ ¶n Z supΩ u − sup∂Ω u ω∗ = 1dz ≤ n diam(Ω) BM (0) µ ¶¯ Z ¯ ¯ ¯ ¯det ∂ ∂ u(x) ¯ dx ≤ ≤ ¯ ¯ ∂xi ∂xj + Γ !n Z Ã − Pn ∂ ∂ i. ≤ n D∗ (x) Γ+ which completes the proof. D∗ (·) D∗ (·) 97 .j=1 aij (x) ∂xi ∂xj u(x) dx. So on Γ+ the matrix T t ΛDT is nonpositive deﬁnite. The ellipticity condition ¯ implies λi (¯) ≥ λ > 0.P Now let us diagonalize the elliptic operator n x ∂ ∂ i. n i=1 i n So we may conclude that for every x ∈ Γ+ !n µ ¶ Ã − Pn ∂ ∂ ∂ ∂ i. Since we are at a point in the upper contact set Γ+ we x ∂2 ˜ have ∂y2 u(y) ≤ 0.16) and M as in (5. Suppose that ¯ u ∈ C 2 (Ω) ∩ C(Ω) satisﬁes Lu ≥ f with b(·) f (·) . ∂xi ∂xj n D∗ (x) Taking g = 1 in (5. (5.j=1 aij (x) ∂xi ∂xj u(x) det − u(x) ≤ .5 (Alexandrov’s Maximum Principle) Let Ω ⊂ Rn be a bounded domain and let L be elliptic as in (5. ∈ Ln (Ω).
Then one finds

    sup_Ω u ≤ sup_∂Ω u⁺ + C diam(Ω) ‖ f⁻/D* ‖_{L^n(Γ⁺∩Ω⁺)},

where C depends on n and ‖ |b|/D* ‖_{L^n(Γ⁺)}, and where Γ⁺ denotes the upper contact set of u.

Proof. If the b_i and c components of L would be equal 0 then the result follows from (5.18). For nonzero b_i and c we proceed as follows. Since it is sufficient to consider u > 0 we may restrict ourselves to Ω⁺. Then we have c(x)u(x) ≤ 0 and hence, on Γ⁺ ∩ Ω⁺, for any μ > 0:

    0 ≤ −Σ_{i,j=1}^n a_ij(x) ∂i∂j u(x) ≤ Σ_{i=1}^n b_i(x) ∂i u(x) + c(x)u(x) − f(x)
      ≤ Σ_{i=1}^n b_i(x) ∂i u(x) + f⁻(x) ≤ (by Cauchy-Schwarz)
      ≤ |b(x)| |∇u(x)| + (μ^{−1} f⁻(x)) μ ≤ (by Hölder)
      ≤ ( |b(x)|^n + |μ^{−1} f⁻(x)|^n )^{1/n} ( |∇u(x)|^n + μ^n )^{1/n} (1+1)^{(n−2)/n}.

Using Lemma 5.4.2 on Ω⁺ and with g(z) = (|z|^n + μ^n)^{−1} one finds, with

    M̃ = (diam(Ω))^{−1} ( sup_Ω u − sup_∂Ω u⁺ ),

that

    ∫_{B_M̃(0)} (|z|^n + μ^n)^{−1} dz ≤ ∫_{Γ⁺∩Ω⁺} (|∇u(x)|^n + μ^n)^{−1} |det( ∂i∂j u(x) )| dx
      ≤ (1/n^n) ∫_{Γ⁺∩Ω⁺} (|∇u(x)|^n + μ^n)^{−1} ( (−Σ a_ij(x) ∂i∂j u(x)) / D*(x) )^n dx
      ≤ (2^{n−2}/n^n) ∫_{Γ⁺∩Ω⁺} ( |b(x)|^n + |μ^{−1} f⁻(x)|^n ) / D*(x)^n dx
      = (2^{n−2}/n^n) ( ‖ |b|/D* ‖^n_{L^n(Γ⁺∩Ω⁺)} + μ^{−n} ‖ f⁻/D* ‖^n_{L^n(Γ⁺∩Ω⁺)} ).
By a direct computation

    ∫_{B_M̃(0)} (|z|^n + μ^n)^{−1} dz = n ω*_n ∫_0^{M̃} r^{n−1}/(r^n + μ^n) dr = ω*_n log( (M̃^n + μ^n)/μ^n ).

Choosing μ = ‖ f⁻/D* ‖_{L^n(Γ⁺∩Ω⁺)} it follows that

    ω*_n log( ( M̃ / ‖ f⁻/D* ‖_{L^n(Γ⁺∩Ω⁺)} )^n + 1 ) ≤ (2^{n−2}/n^n) ( ‖ |b|/D* ‖^n_{L^n(Γ⁺∩Ω⁺)} + 1 )

and hence

    ( M̃ / ‖ f⁻/D* ‖_{L^n(Γ⁺∩Ω⁺)} )^n ≤ exp( (2^{n−2}/(n^n ω*_n)) ( ‖ |b|/D* ‖^n_{L^n(Γ⁺∩Ω⁺)} + 1 ) ) − 1,

and the claim follows.
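In one space dimension the upper contact set of Definition 5.4.1 can be computed directly: y belongs to Γ⁺ exactly when some line through (y, u(y)) stays above the whole graph, i.e. when the supremum of the forward difference quotients at y does not exceed the infimum of the backward ones. The sketch below (an illustration, not part of the notes; the grid and test functions are arbitrary) checks the two extreme cases: for a concave function every interior point is in Γ⁺, for a strictly convex one Γ⁺ is empty.

```python
# Upper contact set of a sampled function on a 1-d grid (illustrative sketch).
# Index i is in the discrete upper contact set iff a supporting line from
# above exists, i.e. max of right-chord slopes <= min of left-chord slopes.
def upper_contact(xs, us):
    out = []
    for i in range(1, len(xs) - 1):          # interior grid points only
        fw = max((us[j] - us[i]) / (xs[j] - xs[i]) for j in range(i + 1, len(xs)))
        bw = min((us[j] - us[i]) / (xs[j] - xs[i]) for j in range(i))
        if fw <= bw:
            out.append(i)
    return out

xs = [k / 50 - 1.0 for k in range(101)]      # grid on [-1, 1]
concave = [1 - x * x for x in xs]            # concave: all points in Gamma+
convex = [x * x for x in xs]                 # convex: Gamma+ is empty

assert upper_contact(xs, concave) == list(range(1, 100))
assert upper_contact(xs, convex) == []
```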
5.5 The wave equation

5.5.1 3 space dimensions

This is the differential equation with four variables, x ∈ R³ and t ∈ R⁺, and the unknown function u:

    u_tt = c² ∆u,    (5.19)

where ∆u = ∂²u/∂x₁² + ∂²u/∂x₂² + ∂²u/∂x₃². Believing in Cauchy-Kowalevski an appropriate initial value problem would be

    u_tt = c² ∆u in R³ × R⁺,
    u(x,0) = φ₀(x) for x ∈ R³,    (5.20)
    u_t(x,0) = φ₁(x) for x ∈ R³.

We will start with the special case that u is radially symmetric in x. Under this assumption and setting r = √(x² + y² + z²) the equation changes into

    u_tt = c² ( u_rr + (2/r) u_r ).

Next we use the substitution U(r,t) = r u(r,t), and with some elementary computations

    u_tt = (1/r) U_tt and u_rr + (2/r) u_r = (1/r) U_rr,

we return to the one dimensional wave equation U_tt = c² U_rr. This equation on R × R⁺ has solutions of the form

    U(r,t) = Φ(r − ct) + Ψ(r + ct).

Returning to u it means we obtain solutions

    u(x,t) = (1/|x|) Φ(|x| − ct) + (1/|x|) Ψ(|x| + ct),

but this does not look like covering all solutions and in fact the second term is suspicious. If Ψ is somewhere nonzero, say at x₀, then (1/|x|) Ψ(|x| + ct) is unbounded for t = c^{−1}|x₀| in x = 0. So we are just left with u(x,t) = (1/|x|) Φ(|x| − ct). How can we turn this into a solution of (5.20)? Remember that for the heat equation it was sufficient to have the solution for the δ-function in order to find a solution for arbitrary initial values by a convolution. In the present case one might try to consider as this special 'function' (really a distribution)

    v(x,t) = (1/|x|) δ(|x| − ct) = (1/(ct)) δ(|x| − ct).

So at time t the influence of an initial 'function' at x = 0 distributes itself over a sphere of radius ct around 0.
Similarly, if we start at y, at time t the influence of an initial 'function' at x = y distributes itself over a sphere of radius ct around y:

    v₁(x,t) = (1/|x−y|) δ(|x−y| − ct) = (1/(ct)) δ(|x−y| − ct).

Combining these 'solutions' for y ∈ R³ with density f(y) we obtain

    u(x,t) = ∫_{R³} (1/(ct)) δ(|x−y| − ct) f(y) dy = (1/(ct)) ∫_{|x−y|=ct} f(y) dσ_y
           = (1/(ct)) ∫_{|z|=ct} f(x+z) dσ_z = ct ∫_{|ω|=1} f(x + ctω) dω.

Looking backwards from (x,t) we should see an average of the initial function at positions at distance ct from x. [In the pictures: on the left the cones that represent the influence of the initial data; on the right the cone representing the dependence on the initial data.]

Now we have to find out what u(x,0) and u_t(x,0) are. For f ∈ C₀^∞(R³) one gets

    u(x,0) = lim_{t↓0} ct ∫_{|ω|=1} f(x + ctω) dω = 0

and

    u_t(x,0) = lim_{t↓0} ( c ∫_{|ω|=1} f(x + ctω) dω + c²t ∫_{|ω|=1} ∇f(x + ctω)·ω dω ) = 4πc f(x).

So there is a possible solution of (5.20) for φ₀ = 0, namely

    ũ(φ₁; x, t) = (1/(4πc²t)) ∫_{|x−y|=ct} φ₁(y) dσ.    (5.21)

Since we do have a solution with u(x,0) = 0 and u_t(x,0) = φ₁(x), it could be worth a try to consider the derivative with respect to t of this u in (5.21), with φ₁ replaced by φ₀, in order to solve the first initial condition. So

    v(x,t) = ∂/∂t ũ(φ₀; x, t)

will satisfy the differential equation and the first initial condition. We still have to see about v_t(x,0). First let us write
out v(x,t):

    v(x,t) = ∂/∂t ( (1/(4πc²t)) ∫_{|x−y|=ct} φ₀(y) dσ ) = ∂/∂t ( (t/(4π)) ∫_{|ω|=1} φ₀(x + ctω) dω )
           = (1/(4π)) ∫_{|ω|=1} φ₀(x + ctω) dω + (ct/(4π)) ∫_{|ω|=1} ∇φ₀(x + ctω)·ω dω
           = (1/(4πc²t²)) ∫_{|x−y|=ct} φ₀(y) dσ + (1/(4πc²t²)) ∫_{|x−y|=ct} ∇φ₀(y)·(y − x) dσ.

Moreover,

    v_t(x,t) = (2c/(4π)) ∫_{|ω|=1} ∇φ₀(x + ctω)·ω dω + (c²t/(4π)) ∫_{|ω|=1} Σ_{i,j=1}^3 ω_i ω_j ∂²φ₀/(∂x_i ∂x_j)(x + ctω) dω,

and since

    lim_{t↓0} (2c/(4π)) ∫_{|ω|=1} ∇φ₀(x + ctω)·ω dω = (2c/(4π)) ∫_{|ω|=1} ∇φ₀(x)·ω dω = 0,

we have lim_{t↓0} v_t(x,t) = 0.

Theorem 5.5.1 Suppose that φ₀ ∈ C³(R³) and φ₁ ∈ C²(R³). The Cauchy problem in (5.20) has a unique solution u ∈ C²(R³ × R̄⁺) and this solution is given by

    u(x,t) = ∂/∂t ( (1/(4πc²t)) ∫_{|x−y|=ct} φ₀(y) dσ ) + (1/(4πc²t)) ∫_{|x−y|=ct} φ₁(y) dσ.

This expression is known as Poisson's solution.

Exercise 90 Let u ∈ C²(B̄) with B̄ = {x ∈ R³; |x| ≤ 1} and define ū as follows:

    ū(x) = (1/(4π|x|²)) ∫_{|y|=|x|} u(y) dσ_y.

This ū is called the spherical average of u. Show that:

    ∆ū(x) = (1/(4π|x|²)) ∫_{|y|=|x|} ∆u(y) dσ_y.

Hint: use spherical coordinates x = (r cos φ sin θ, r sin φ sin θ, r cos θ). In these coordinates:

    ∆ = r^{−2} ( ∂/∂r ( r² ∂/∂r ) + (1/sin θ) ∂/∂θ ( sin θ ∂/∂θ ) + (1/sin²θ) ∂²/∂φ² ).
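Formula (5.21) admits a quick sanity check: for an affine φ₁ the spherical mean over |y − x| = ct equals φ₁(x) (the linear part averages out by symmetry), so ũ(φ₁; x, t) = t φ₁(x), which indeed solves the wave equation with ũ(x,0) = 0 and ũ_t(x,0) = φ₁(x). The sketch below (all numerical values are arbitrary choices) evaluates the spherical mean by midpoint quadrature in spherical coordinates:

```python
import math

# Check of formula (5.21) for an affine phi1: the spherical mean over the
# sphere |y - x| = ct equals phi1(x), hence u(x,t) = t * phi1(x).
# c, t, x and the coefficients of phi1 are arbitrary test values.
c, t, x = 2.0, 0.7, (0.3, -0.2, 0.5)

def phi1(y):
    return 1.0 + 2.0 * y[0] - y[1] + 0.5 * y[2]

n = 200
total, wsum = 0.0, 0.0
for i in range(n):                       # theta in (0, pi), midpoint rule
    th = (i + 0.5) * math.pi / n
    for j in range(2 * n):               # phi in (0, 2 pi), midpoint rule
        ph = (j + 0.5) * math.pi / n
        w = math.sin(th)                 # surface element ~ sin(theta)
        y = (x[0] + c * t * math.sin(th) * math.cos(ph),
             x[1] + c * t * math.sin(th) * math.sin(ph),
             x[2] + c * t * math.cos(th))
        total += w * phi1(y)
        wsum += w

u = t * total / wsum                     # t times the spherical mean of phi1
assert abs(u - t * phi1(x)) < 1e-6
```

For a general φ₁ the same quadrature gives a direct numerical evaluation of (5.21).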
20). t)dy. ∆z f (x + z)dz − f (x + z)dσ − 2 z R z=R ∂r R z=R ∂ ∂t Z v(z)dz = c z<ct = z<R the observation that Z v(z)dσ z=ct and a version of the fundamental theorem of calculus on balls in R3 : µ ¶ Z R Z Z 1 ∂ 1 ∂ g (z) dz = g(rω) r2 drdω = 2 2 ∂r z<R z ∂ z ω=1 r=0 r Z Z 1 = (g(Rω) − g(0)) dω = 2 (g (z) − g(0)) dσ z . It remains to verify the uniqueness. Some key ingredients are Green’s formula applied to the fundamental solution: Z −4πf (x) = Z Z 1 1 1 ∂ f (x + z)dσ. Set v = u1 −u2 and consider the average of v over the sphere of radius r around some x Z 1 v (r. t) = c2 ∆u (x. Suppose that u1 and u2 are two diﬀerent solutions to (5. t) = 1 x−y=ct f (y) dσ: t = = = = = = = Z Z c2 1 ∆x f (x + z)dσ z = c3 c2 ∆u (x. ω=1 z=R z R Here we go for u(x. t) = ∆z f (x + z)dσz = t z=ct z=ct z Z ∂ 1 c2 ∆z f (x + z)dz = ∂t z<ct z ! Ã Z Z 1 1 ∂ 2 ∂ f (x + z)dσ = c −4πf (x) + f (x + z)dσ + 2 2 ∂t ct z=ct ∂ z c t z=ct ! ÃZ Z 1 ∂ 1 2 ∂ c = f (x + z)dσ + 2 f (x + z)dσ ∂t z=ct z ∂ z z=ct z ÃZ ! ´ 1 ∂ ³ 2 ∂ c z f (x + z) dz = 2 ∂t z=ct z ∂ z ÃZ ! ´ 1 ∂ ³ ∂2 c 2 z f (x + z) dz = 2 ∂t z<ct z ∂ z ÃZ ! ∂2 1 c 2 f (x + z)dσ = ∂t z=ct z Ã Z ! 1 ∂2 ∂2 f (x + z)dσ = 2 u (x. t) = ¯ v(y. The initial conditions we have already checked above. t) . t) . 4πr2 y−x=r 103 . c 2 ∂t ct z=ct ∂t 2 ∂ Indeed it solves ∂t2 u (x. To give a direct proof that the formula indeed gives a solution of the diﬀerential equation is quite tedious.Proof.
Since

    ∂²/∂r² ( r v̄(r,t) ) = r ∆v̄(r,t),

we may, by taking the average as in the last exercise, conclude that

    ∂²/∂r² ( r v̄(r,t) ) = (r/(4πr²)) ∫_{|y−x|=r} ∆v(y,t) dσ_y
      = (1/c²) (r/(4πr²)) ∫_{|y−x|=r} v_tt(y,t) dσ_y = (1/c²) ∂²/∂t² ( r v̄(r,t) ).

So r v̄ satisfies the one dimensional wave equation. Since v(·,0) and v_t(·,0) are both 0, also (r v̄)(r,0) and (r v̄)_t(r,0) are zero. Moreover, for all t ≥ 0 one finds (r v̄(r,t))|_{r=0} = 0. So we may use the version of d'Alembert's formula we derived for (x,t) ∈ R⁺ × R⁺ in Exercise 69. We find that r v̄ ≡ 0, so the averages of u₁(·,t) and u₂(·,t) are equal over each sphere in R³ for all t > 0. This contradicts the assumption that u₁ ≠ u₂.

Exercise 91 Writing out Poisson's solution one obtains Kirchhoff's formula. From the cited books (see the bibliography) the following versions of Kirchhoff's formula have been copied:

1. u(x,t) = (1/(4πc²t²)) ∫∫_A [ tψ(y) + φ(y) + ∇φ·(x − y) ] dσ, where A = {y; |x − y| = ct};

2. u(x,t) = (1/(4π)) ∫∫_S [ u₁ ∂/∂n (1/r) − (1/r) ∂u₁/∂n − (1/(cr)) (∂r/∂n) ∂u₁/∂t₁ ]_{t₁=0} dS
          + (1/(4π)) ∫∫∫_Ω (1/(c²r)) F(y, t − r/c) dy;

3. u(x,t) = (1/(4πc²t²)) ∫_{|x−y|=ct} [ t g(y) + f(y) + Σ_i f_{y_i}(y)(y_i − x_i) ] dσ;

4. u(x,t) = ∂/∂t ( t ω[f; x, t] ) + t ω[g; x, t], where the following expression is called Kirchhoff's solution:

    ω[g; x, t] = (1/(4π)) ∫_0^{2π} ∫_0^π g(x₁ + t sin θ cos φ, x₂ + t sin θ sin φ, x₃ + t cos θ) sin θ dθ dφ.

Explain which boundary value problem each of these u solves (if that one does) and what the appearing symbols mean.
5.5.2 2 space dimensions

In 2 dimensions no transformation to a 1 space dimensional problem exists as in 3 dimensions. However, having a solution in 3 space dimensions one could just use that formula for initial values that do not depend on x₃. The solution that comes out should not depend on the x₃ variable and hence

    u_tt = c² ( u_{x₁x₁} + u_{x₂x₂} + u_{x₃x₃} ) = c² ( u_{x₁x₁} + u_{x₂x₂} ).

This idea to go down from a known solution method in dimension n in order to find a solution in dimension n − 1 is called the method of descent. Borrowing the formula from Theorem 5.5.1 we obtain, with y = (y₁, y₂),

    (1/(4πc²t)) ∫_{|(x,0)−(y,y₃)|=ct} φ₁(y₁,y₂) dσ
      = (2/(4πc²t)) ∫_{|x−y|≤ct} φ₁(y₁,y₂) (ct / √(c²t² − (x₁−y₁)² − (x₂−y₂)²)) dy₁ dy₂
      = (1/(2πc)) ∫_{|x−y|≤ct} φ₁(y) / √(c²t² − |x−y|²) dy.

So we may conclude that

    u_tt = c² ∆u in R² × R⁺,
    u(x,0) = φ₀(x) for x ∈ R²,    (5.22)
    u_t(x,0) = φ₁(x) for x ∈ R²,

can be solved uniquely.

Theorem 5.5.2 Suppose that φ₀ ∈ C³(R²) and φ₁ ∈ C²(R²). The Cauchy problem in (5.22) has a unique solution u ∈ C²(R² × R̄⁺) and this solution is given by

    u(x,t) = (1/(2πc)) ∫_{|x−y|≤ct} φ₁(y) / √(c²t² − |x−y|²) dy
           + ∂/∂t ( (1/(2πc)) ∫_{|x−y|≤ct} φ₀(y) / √(c²t² − |x−y|²) dy ).

This expression is known as Poisson's solution in 2 space dimensions.

As a result of this 'spreading out' higher frequencies die out faster than lower frequencies in 2 dimensions. Flatlanders should be baritones.
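The 2-dimensional Poisson formula of Theorem 5.5.2 can be sanity-checked in the simplest case φ₀ = 0, φ₁ ≡ 1, where the solution should be u(x,t) = t (matching u_t(x,0) = 1). The radial integral is singular at |x − y| = ct; the sketch below (an illustration with arbitrary values of c and t) removes the singularity by the substitution r = ct sin(s), under which r dr/√(c²t² − r²) becomes ct sin(s) ds, and evaluates it by the midpoint rule:

```python
import math

# Check of Poisson's 2-d formula for phi1 = 1:
#   u = (1/(2 pi c)) * Int_{|y-x|<=ct} dy / sqrt(c^2 t^2 - |y-x|^2) = t.
# The substitution r = c t sin(s) removes the square-root singularity.
c, t = 3.0, 0.4            # arbitrary test values
n = 10000
ds = (math.pi / 2) / n
integral = 0.0
for k in range(n):
    s = (k + 0.5) * ds
    # r dr / sqrt(c^2 t^2 - r^2)  ==  c t sin(s) ds
    integral += c * t * math.sin(s) * ds

u = (1.0 / (2 * math.pi * c)) * 2 * math.pi * integral   # angular factor 2 pi
assert abs(u - t) < 1e-6
```

The factor t (rather than a sharp wave front) already shows the 2-dimensional 'spreading out' mentioned above: the disk of dependence is full, not just its boundary.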
Bibliography

For the functional analytic tools see [1]. For ordinary differential equations see [2] or [14]. For derivation of models see [3], [4] and [8]. Introductory texts in p.d.e.: [13] and [15]. Modern graduate texts in p.d.e.: [5] and [7]. For models arising in a variational setting see [9] and [10]. Direct methods in the calculus of variations: [5, Chapter 8] and [6]. For Cauchy-Kowalevski see [11] and [7, Chapter 0].

[1] H. Brezis: Analyse Fonctionnelle. Masson, 1983.
[2] E.A. Coddington and N. Levinson: Theory of Ordinary Differential Equations. McGraw-Hill, 1955.
[3] R. Courant and D. Hilbert: Methods of Mathematical Physics I. Interscience, 1953.
[4] R. Courant and D. Hilbert: Methods of Mathematical Physics II. Interscience, 1961.
[5] L.C. Evans: Partial Differential Equations. AMS, 1998.
[6] B. Dacorogna: Direct Methods in the Calculus of Variations. Springer, 1989.
[7] E. DiBenedetto: Partial Differential Equations. Birkhäuser, 1995.
[8] P.R. Garabedian: Partial Differential Equations. Wiley, 1964 (republished by AMS Chelsea Publishing).
[9] M. Giaquinta and S. Hildebrandt: Calculus of Variations I. Springer, 1996.
[10] M. Giaquinta and S. Hildebrandt: Calculus of Variations II. Springer, 1996.
[11] Fritz John: Partial Differential Equations, 4th edition. Springer, 1981.
[12] S.L. Sobolev: Partial Differential Equations of Mathematical Physics. Pergamon Press, 1964 (republished by Dover in 1989).
[13] W.A. Strauss: Partial Differential Equations, an introduction. Wiley, 1992.
[14] W. Walter: Gewöhnliche Differentialgleichungen. Springer, 1993.
[15] H.F. Weinberger: A First Course in Partial Differential Equations. Blaisdell, 1965 (republished by Dover, 1995).