NAM Q. LE
Abstract. Maximum principles are fundamental tools in elliptic PDEs. They allow one to
estimate the size of solutions in terms of the boundary data and the structure of the equations.
A simple but useful instance of the maximum principle in one dimension says that a convex
quadratic function on an interval attains its maximum value at one of the end points. This
serves as a basis for the Hopf maximum principle for linear elliptic equations, including the
Laplace equation in electromagnetism, in all dimensions, which we will cover in detail in
this lecture. For fully nonlinear equations such as the Monge-Ampère equation in geometry
and optimal transport, we will discuss fundamental ideas in the Aleksandrov maximum
principle. We will mention interesting connections with equations lying between the Laplace
and Monge-Ampère equations. If time permits, we will point out further connections with
viscosity solutions, the Pucci conjecture, and the Mahler conjecture. The lecture is aimed at
freshmen and sophomores. Terminology will be explained.
1. Notation
This section lists standard notation used in these notes.
Example 3.2. One can check that the following functions are harmonic on Rⁿ, where n ≥ 2:
(i) u(x) = Σ_{i=1}^n a_i x_i + b, where a_i, b ∈ R.
(ii) u(x) = (n − 1)x_1² − x_2² − · · · − x_n².
(iii) u(x) = x_1³ − 3x_1 x_2².
(iv) u(x) = e^{x_1} sin x_2 + e^{x_n} cos x_1.
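These harmonicity claims can be verified by hand; as a quick numerical sanity check (a sketch, not part of the original notes), one can approximate the Laplacian by central differences. With n = 2, the function in (iv) reads e^{x_1} sin x_2 + e^{x_2} cos x_1; the sample point and step size below are arbitrary choices.

```python
import math

def laplacian(u, x, h=1e-4):
    """Central-difference approximation of sum_i u_{x_i x_i} at the point x."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        total += (u(xp) - 2.0 * u(x) + u(xm)) / h ** 2
    return total

# (iii) u = x1^3 - 3*x1*x2^2 and (iv) with n = 2, so x_n = x_2
u3 = lambda x: x[0] ** 3 - 3.0 * x[0] * x[1] ** 2
u4 = lambda x: math.exp(x[0]) * math.sin(x[1]) + math.exp(x[1]) * math.cos(x[0])

residuals = [abs(laplacian(u, [0.3, 0.7])) for u in (u3, u4)]
```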
A very popular form of the maximum principle that one encounters in PDE textbooks is
the following: A harmonic function on a bounded domain attains its maximum and minimum
values on the boundary. This is called the weak maximum principle. The strong maximum
principle says that a harmonic function cannot attain its max/min values in the interior of a
connected bounded domain unless it is a constant.
Theorem 3.3 (Maximum principle for harmonic functions). Let u ∈ C 2 (Ω) ∩ C(Ω) be har-
monic in Ω. Then the following statements hold:
(i) (Weak maximum principle) maxΩ u = max∂Ω u.
(ii) (Strong maximum principle) Furthermore, if Ω is connected, and there is x0 ∈ Ω
such that u(x0 ) = maxΩ u then u is a constant in Ω.
3.1. A classical proof. For the sake of completeness, we present here a classical proof of
Theorem 3.3. It is essentially due to Gauss (1839). Other similar proofs are due to Bernstein
(1904), Picard (1905), Lichtenstein (1912, 1924). The proof uses a mean-value property for
harmonic functions stated in Theorem 3.4. You might skip this classical proof and continue
with the discussion in Section 3.2.
Theorem 3.4 (Mean-value property for harmonic functions). If u ∈ C 2 (Ω) is harmonic,
then for each Br (x) ⊂ Ω, we have
u(x) = ⨍_{B_r(x)} u(y) dy := (1/|B_r(x)|) ∫_{B_r(x)} u(y) dy.
Classical proof of Theorem 3.3. Since Ω̄ is compact and u ∈ C(Ω̄), u attains its maximum
value in Ω̄.
(ii) Assume Ω is connected. Suppose there is x0 ∈ Ω such that
(3.1) u(x0 ) = M := max u.
Ω
Then, for 0 < r < dist (x0 , ∂Ω), by the mean-value property in Theorem 3.4 and (3.1),
M = u(x_0) = ⨍_{B_r(x_0)} u(y) dy ≤ M.
Using the continuity of u in B_r(x_0), we find u ≡ M in B_r(x_0). This shows that
E := {x ∈ Ω | u(x) = M} is open. The continuity of u implies that E is relatively closed
in Ω. Thus E is both open and relatively closed in Ω. Since Ω is connected, E = Ω, so
u ≡ M in Ω. Using u ∈ C(Ω̄), we find u ≡ M in Ω̄.
(i) We use (ii). Suppose there is x0 ∈ Ω such that
(3.2) u(x_0) = max_Ω u > max_∂Ω u.
Lemma 3.5 (Second derivative test for functions of one variable). If a C² function u on
(a, b) has a local maximum at x_0 ∈ (a, b), then u′(x_0) = 0 and u″(x_0) ≤ 0.
00
Suppose we have a harmonic function u on an interval [a, b]. Then u (x) = 0 for all
x ∈ (a, b). Assume by contradiction that the maximum of u is attained at an interior point
x0 ∈ (a, b). We would like to find a contradiction from this. By the second derivative test for
00
local maximum, we must have u0 (x0 ) = 0 and u (x0 ) ≤ 0. We “almost” have a contradiction
00 00
from the fact that u (x0 ) = 0. Of course, we will have a contradiction if u (x0 ) > 0 (but we
don’t). Anyway, this contradiction argument leads to the following simple observation. We
00
say that a C 2 function v is strictly convex on (a, b) if v (x) > 0 for all x ∈ (a, b).
00
Lemma 3.6. If v (x) > 0 for all x ∈ (a, b), then maxa≤x≤b v(x) = max{v(a), v(b)}. In other
words, a strictly convex function on an interval can only attain its maximum value at one of
the end points.
Back to the contradiction argument above. Although we do not have the strict positivity
u″(x_0) > 0, we can try to “borrow” some positivity and then “pay back”. Here is a new
proof of Theorem 3.3 in one dimension.
A new proof of Theorem 3.3 in one dimension. Assume u ∈ C²(a, b) ∩ C([a, b]) is harmonic,
that is, u″(x) = 0 in (a, b). Let us “borrow” some positivity by considering u_ε(x) = u(x) + εx²,
where ε > 0. Then
(3.4) u_ε″(x) = u″(x) + 2ε = 2ε > 0 in (a, b).
By Lemma 3.6, u_ε attains its maximum value at a or b. Thus
max_{a≤x≤b} u(x) ≤ max_{a≤x≤b} u_ε(x) = max{u_ε(a), u_ε(b)} ≤ max{u(a), u(b)} + ε max{a², b²}.
Letting ε → 0⁺, we “pay back” the borrowed term and obtain max_{a≤x≤b} u(x) = max{u(a), u(b)}.
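As a small numerical illustration (a sketch, not from the notes; the interval, slope, and ε below are arbitrary choices), one can watch this argument in action on a grid:

```python
# u harmonic in 1D means u is affine; u_eps = u + eps*x^2 is strictly convex,
# so its maximum over the interval sits at an endpoint, and letting eps -> 0
# recovers max u = max{u(a), u(b)}.
a, b, eps = -1.0, 2.0, 1e-3
u = lambda x: 0.5 * x + 1.0              # u'' = 0
u_eps = lambda x: u(x) + eps * x * x     # u_eps'' = 2*eps > 0

grid = [a + (b - a) * k / 1000 for k in range(1001)]
argmax = max(grid, key=u_eps)
at_endpoint = argmax in (a, b)
# the "pay back" bound from the proof:
bound_holds = max(u(x) for x in grid) <= max(u(a), u(b)) + eps * max(a * a, b * b)
```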
Proof. The proof reduces to the one-dimensional case. Fix any ξ = (ξ_1, · · · , ξ_n) ∈ Rⁿ. Then,
for |t| small, x_0 + tξ ∈ Ω, so ϕ(t) := u(x_0 + tξ) attains a local maximum at t = 0. Thus,
by Lemma 3.5, ϕ′(0) = 0 and ϕ″(0) ≤ 0. Compute
ϕ′(t) = Du(x_0 + tξ) · ξ and ϕ″(t) = D²u(x_0 + tξ)ξ · ξ.
Hence ϕ′(0) = Du(x_0) · ξ = 0 for all ξ ∈ Rⁿ, from which we find Du(x_0) = 0. Moreover,
from ϕ″(0) = D²u(x_0)ξ · ξ ≤ 0 for all ξ ∈ Rⁿ, we find D²u(x_0) ≤ 0.
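The reduction to one variable is easy to test numerically; the quadratic below is a hypothetical example (a sketch, not from the notes) with an interior maximum at the origin:

```python
import math

# u has an interior local maximum at x0 = (0, 0); along each direction xi,
# phi(t) = u(x0 + t*xi) should satisfy phi'(0) = 0 and phi''(0) <= 0.
u = lambda x: -x[0] ** 2 - x[1] ** 2 + 0.5 * x[0] * x[1]
h = 1e-4

checks = []
for k in range(8):
    xi = (math.cos(k * math.pi / 4), math.sin(k * math.pi / 4))
    phi = lambda t: u((t * xi[0], t * xi[1]))
    d1 = (phi(h) - phi(-h)) / (2 * h)                # ~ phi'(0)
    d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h ** 2  # ~ phi''(0)
    checks.append(abs(d1) < 1e-8 and d2 <= 0.0)
```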
The following theorem is a slight generalization of the weak maximum principle in Theorem
4.1.
Theorem 3.9 (Weak maximum principle). Let b = (b1 , · · · , bn ) ∈ C(Ω; Rn ). Suppose u ∈
C 2 (Ω) ∩ C(Ω) satisfies
∆u(x) + b(x) · Du(x) ≥ 0 in Ω.
Then maxΩ u = max∂Ω u.
It should be emphasized that, in the statement of Theorem 3.9, we do not have a solution
of any PDE whatsoever. We just have a so-called subsolution.
Proof of Theorem 3.9, due to Hopf. We consider two cases.
Case 1:
∆u(x) + b(x) · Du(x) > 0 in Ω.
Suppose there is x0 ∈ Ω such that u(x0 ) = maxΩ u. Then, by Lemma 3.8, Du(x0 ) = 0 and
∆u(x0 ) = trace D2 u(x0 ) = sum of eigenvalues of D2 u(x0 ) ≤ 0.
Hence
∆u(x0 ) + b(x0 ) · Du(x0 ) = ∆u(x0 ) ≤ 0,
a contradiction. Thus, we must have maxΩ u = max∂Ω u.
Case 2: General case. To reduce to Case 1, we “borrow” some positivity. Let ε > 0.
Consider
u_ε(x) = u(x) + εe^{λx_1},
where λ > 0 is a constant to be chosen later.
Compute
∆u_ε(x) = ∆u(x) + ελ²e^{λx_1}, Du_ε(x) = Du(x) + (ελe^{λx_1}, 0, · · · , 0).
Thus
∆u_ε(x) + b(x) · Du_ε(x) = ∆u(x) + b(x) · Du(x) + ελe^{λx_1}(λ + b_1(x))
≥ ελe^{λx_1}(λ + b_1(x)) > 0
if we choose
λ = max_Ω |b_1| + 1 ≥ −b_1(x) + 1 in Ω.
By Case 1, we have
(3.5) max_Ω u_ε = max_∂Ω u_ε.
Letting ε → 0⁺ in (3.5), we obtain max_Ω u = max_∂Ω u.
3.3. Linear Algebra. Back to Case 1 above, we only used trace (D2 u(x0 )) ≤ 0 but did
not fully use D2 u(x0 ) ≤ 0. This leaves room to explore the full scope of Hopf’s method by
considering
(3.6) Lu := Σ_{i,j=1}^n a^{ij}(x) u_{x_i x_j}(x) + b(x) · Du(x),
where the matrix A(x) = (aij (x))1≤i,j≤n is symmetric. As long as the matrix A(x) is nonneg-
ative definite, Case 1 still works. This is due to Lemmas 3.11 and 3.12 below.
Definition 3.10. An operator of the form (3.6) with A(x) being positive definite is called a
second order elliptic operator in non-divergence form.
Lemma 3.11. We have the following identity:
Σ_{i,j=1}^n a^{ij}(x) u_{x_i x_j}(x) = trace(A(x)D²u(x)), where A(x) = (a^{ij}(x))_{1≤i,j≤n}.
Proof. Note that if A = (a^{ij})_{1≤i,j≤n} and B = (b_{ij})_{1≤i,j≤n}, where B is symmetric, that is,
b_{ij} = b_{ji}, then for C = AB = (c_{ij})_{1≤i,j≤n}, we have
trace(AB) = Σ_{i=1}^n c_{ii} = Σ_{i=1}^n Σ_{j=1}^n a^{ij} b_{ji} = Σ_{i,j=1}^n a^{ij} b_{ij}.
If v is quadratic, say v = |x|²/2, then from v_{x_i x_j} = δ_{ij} (the Kronecker symbol), we find
Lv = Σ_{i,j=1}^n a^{ij} δ_{ij} + Σ_{i=1}^n b^i x_i = trace(A(x)) + Σ_{i=1}^n b^i x_i ≥ nθ + Σ_{i=1}^n b^i x_i.
We improve the situation a bit but still encounter the troublesome term Σ_{i=1}^n b^i x_i.
However, the above calculations suggest that we seek higher degree polynomials. Thus we
can try |x|^m for m large. It is simpler to try
v(x) = (x_1 + R + 1)^m, where Ω ⊂ B_R(0) and m > 2.
Then
v_{x_1} = m(x_1 + R + 1)^{m−1} and v_{x_1 x_1} = m(m − 1)(x_1 + R + 1)^{m−2}.
We have
Lv = a^{11} v_{x_1 x_1} + b^1 v_{x_1}
= m(x_1 + R + 1)^{m−2}[a^{11}(m − 1) + b^1(x_1 + R + 1)]
≥ m(x_1 + R + 1)^{m−2}[θ(m − 1) − max_Ω |b^1|(2R + 1)] > 0
if m is large, for example, m > 1 + max_Ω |b^1|(2R + 1)/θ.
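One can sanity-check this computation numerically. The sketch below (not from the notes) takes the hypothetical worst case θ = 1, a^{11} = θ, b^1 ≡ −1, R = 1, so the rule above asks for m > 1 + 1·(2·1 + 1)/1 = 4; with m = 5 the quantity Lv should be positive throughout [−1, 1]:

```python
# v = (x1 + R + 1)^m depends only on x1, so Lv = a11 * v_x1x1 + b1 * v_x1.
R, m, theta = 1.0, 5, 1.0

def Lv(x1, a11=theta, b1=-1.0):
    v_x1 = m * (x1 + R + 1) ** (m - 1)
    v_x1x1 = m * (m - 1) * (x1 + R + 1) ** (m - 2)
    return a11 * v_x1x1 + b1 * v_x1

min_Lv = min(Lv(-1.0 + 2.0 * k / 200) for k in range(201))
```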
4.1. Hopf’s lemma and strong maximum principles. In this section, we prove an im-
portant lemma, due to Hopf. With the aid of this lemma, we will prove a strong maximum
principle which says that a non-constant subsolution of a second-order elliptic equation cannot
attain its maximum at an interior point of a connected region.
Lemma 4.3 (Hopf’s lemma). Let u ∈ C²(Ω) ∩ C¹(Ω̄). Suppose Lu ≥ 0 in Ω and there is
x_0 ∈ ∂Ω such that u(x_0) > u(x) for all x ∈ Ω. Assume Ω satisfies the interior ball condition
at x_0, that is, there is an open ball B ⊂ Ω with x_0 ∈ ∂B. Then ∂u/∂ν(x_0) > 0, where ν is
the outer unit normal to B at x_0.
Remark 4.4. If t > 0 is small then x_0 − tν(x_0) ∈ Ω. Hence u(x_0) − u(x_0 − tν(x_0)) > 0, and
∂u/∂ν(x_0) = Du(x_0) · ν(x_0) = lim_{t→0⁺} [u(x_0) − u(x_0 − tν(x_0))]/t ≥ 0.
Thus, the importance of Hopf’s lemma is the strict inequality ∂u/∂ν(x_0) > 0.
Theorem 4.5 (Strong maximum principle for subsolutions). Suppose u ∈ C 2 (Ω) ∩ C 1 (Ω)
where Ω is a connected, open, bounded domain. If Lu ≥ 0 in Ω and u attains its maximum
over Ω at an interior point, then u is a constant in Ω.
Proof of Theorem 4.5 using Lemma 4.3. Let M := maxΩ u, and
U := {x ∈ Ω | u(x) = M }.
Suppose that U ≠ ∅ and that u is not a constant. Let
V := {x ∈ Ω | u(x) < M} ≠ ∅.
Then V is open. Choose y ∈ V such that
dist (y, U ) < dist (y, ∂Ω).
Let B denote the largest ball with center y whose interior lies in V. There is x_0 ∈ U with
x_0 ∈ ∂B. Note that x_0 ∈ Ω by the choice of y. Then u attains its maximum at the interior
point x_0 of Ω, so Du(x_0) = 0. On the other hand, V satisfies the interior ball condition at
x_0. By Hopf’s lemma, we have ∂u/∂ν(x_0) > 0. This is a contradiction to
∂u/∂ν(x_0) = Du(x_0) · ν = 0.
Proof of Hopf’s lemma, Lemma 4.3. We can assume B = B_ρ(0). The idea is to turn u(x_0) >
u(x) for all x ∈ Ω into a more quantitative version.
Suppose we find a ring R around x_0, say, R = B_ρ(0) \ B_{ρ/2}(0), such that
(4.3) 0 ≥ −u(x_0) + u(x) + εv(x) in R,
where ε > 0, and the function v satisfies
v(x_0) = 0, v ≥ 0, ∂v/∂ν(x_0) < 0.
Note that
0 = −u(x_0) + u(x_0) + εv(x_0) ≥ −u(x_0) + u(x) + εv(x).
It follows that
∂u/∂ν(x_0) + ε ∂v/∂ν(x_0) ≥ 0,
and we are done since
∂u/∂ν(x_0) ≥ −ε ∂v/∂ν(x_0) > 0.
How to find v? Suppose we can find a C² function v such that
(4.4) Lv ≥ 0 in R, v = 0 on ∂B, v ≥ 0 in B, and ∂v/∂ν(x_0) < 0.
Then, by the continuity of u, we can find ε > 0 such that
u(x0 ) ≥ u(x) + εv(x) ∀x ∈ ∂Bρ/2 (0).
Therefore
u(x0 ) ≥ u(x) + εv(x) ∀x ∈ ∂R.
Thus
L(u + εv − u(x0 )) = Lu + εLv ≥ 0 in R.
By the weak maximum principle, Theorem 4.1,
u + εv − u(x_0) ≤ max_{∂R}[u + εv − u(x_0)] = 0 in R.
How to make v more convex? We can try v_m = (ρ² − |x|²)^m. The problem with this
function when m ≥ 2 is that it is “flat” near |x| = ρ. More precisely, ∂v_m/∂ν(x_0) = 0. By
graphing v_m, we see that the graph is not very flat when we move a bit away from |x| = ρ.
This suggests that we adjust v_m a bit by using
[ρ² − (|x|/2)²]^m − (3ρ²/4)^m
or its multiple (4ρ² − |x|²)^m − (3ρ²)^m.
Our final choice of v: we will choose, for some large m to be determined,
v = (4ρ² − |x|²)^m − (3ρ²)^m.
To verify that it works, and to make the proof transparent, we work with the case (a^{ij}) =
(δ_{ij}), that is, A(x) ≡ I_n. We compute v_{x_i} = −2m x_i(4ρ² − |x|²)^{m−1}, and
v_{x_i x_j} = m(4ρ² − |x|²)^{m−2}[−2δ_{ij}(4ρ² − |x|²) + 4(m − 1)x_i x_j].
Thus
Lv = m(4ρ² − |x|²)^{m−2}[−2n(4ρ² − |x|²) + 4(m − 1)|x|² − 2Σ_{i=1}^n b^i x_i(4ρ² − |x|²)].
In R = Bρ (0) \ Bρ/2 (0), we have |x| ≥ ρ/2. Therefore, Lv > 0 if m is large.
Clearly, v = 0 on ∂B, v ≥ 0 in B, and
∂v/∂ν(x_0) = Dv(x_0) · (x_0/ρ) = −2m(4ρ² − |x_0|²)^{m−1} x_0 · (x_0/ρ) = −2mρ(3ρ²)^{m−1} < 0.
Thus, v satisfies (4.4) and the proof of the Hopf lemma is complete.
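As a numerical sanity check (a sketch with hypothetical concrete values, not from the notes), take n = 2, ρ = 1, b ≡ 0, and A = I_2. The bracket in the formula for Lv above must then satisfy 4(m − 1)|x|² > 2n(4ρ² − |x|²) on the ring; at the worst point |x| = ρ/2 this forces m ≥ 17:

```python
# v = (4*rho^2 - |x|^2)^m - (3*rho^2)^m with A = I_2, b = 0, so on the ring
# Lv = m*(4*rho^2 - r2)^(m-2) * (-2*n*(4*rho^2 - r2) + 4*(m-1)*r2), r2 = |x|^2.
rho, n, m = 1.0, 2, 17

def v(r2):
    return (4 * rho**2 - r2) ** m - (3 * rho**2) ** m

def Lv(r2):
    return m * (4 * rho**2 - r2) ** (m - 2) * (-2 * n * (4 * rho**2 - r2) + 4 * (m - 1) * r2)

boundary_val = v(rho**2)                      # should vanish on |x| = rho
interior_nonneg = all(v(rho**2 * k / 100) >= 0 for k in range(101))
r2_grid = [rho**2 / 4 + (3 * rho**2 / 4) * k / 200 for k in range(201)]
min_Lv = min(Lv(r2) for r2 in r2_grid)
dv_dnu = -2 * m * rho * (3 * rho**2) ** (m - 1)  # outward normal derivative at |x| = rho
```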
where A(x) = (aij (x))1≤i,j≤n ≥ θIn with θ > 0. Then, there exists a constant C, depending
only on Ω and θ, such that
max_Ω u ≤ C max_Ω |f|.
Proof. Let K := max_Ω |f|. Note that a^{ii} ≥ θ for each i. We compute
L(u + K|x|²/(2nθ)) = Lu + K/(2nθ) Σ_{i,j=1}^n a^{ij} (|x|²)_{x_i x_j}
≥ −|f| + K/(2nθ) Σ_{i=1}^n 2a^{ii}(x) ≥ −|f| + K ≥ 0.
By Theorem 4.1, u + K|x|²/(2nθ) attains its maximum value on ∂Ω. Hence, for all x ∈ Ω,
u(x) + K|x|²/(2nθ) ≤ max_{x∈∂Ω}(u + K|x|²/(2nθ)) = max_{x∈∂Ω} K|x|²/(2nθ) ≤ C(Ω, θ)K.
The constant C in the proof of Lemma 5.1 is proportional to 1/θ so it is large when θ is
small. Can we do better? What happens to
ε u_{x_1 x_1} + ε^{−1} u_{x_2 x_2} ≥ −1?
Can we find an estimate for u that is independent of ε? It turns out that the answer is yes.
The idea is to refine the above estimate by noticing that
Σ_{i=1}^n a^{ii}(x) = trace(A(x)) ≥ n[det A(x)]^{1/n}.
All we need now from the proof of Lemma 5.1 is a positive lower bound for det A(x). We
have the following lemma.
Lemma 5.2. Let Ω = B_1(0) ⊂ Rⁿ. Let A(x) = (a^{ij}(x)) be continuous, symmetric, and
positive definite in Ω with det A(x) > 1 for all x ∈ B_1(0). Suppose that u ∈ C²(Ω) ∩ C(Ω̄)
satisfies
Lu := Σ_{i,j=1}^n a^{ij} u_{x_i x_j} ≥ −|f| in Ω, u = 0 on ∂Ω.
Then
u(x) ≤ [(1 − |x|²)/(2n)] max_Ω |f| for all x ∈ Ω.
Thus, from Step 1 in the proof of Theorem 4.1, we deduce that u + (|x|2 − 1)/(2n) attains
its maximum value on ∂Ω, which is 0. We are done.
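The trace–determinant inequality used here is the arithmetic–geometric mean inequality applied to the (positive) eigenvalues of A(x); a quick numerical instance (a sketch, with an arbitrary 2×2 positive definite matrix):

```python
# trace(A) = sum of eigenvalues >= n * (product of eigenvalues)^(1/n)
#          = n * (det A)^(1/n) for symmetric positive definite A.
A = [[2.0, 0.5], [0.5, 1.0]]   # symmetric, positive definite (det = 1.75 > 0)
n = 2
trace_A = A[0][0] + A[1][1]
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
amgm_gap = trace_A - n * det_A ** (1.0 / n)
```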
It should be emphasized that the smallest eigenvalues of A(x) in Lemma 5.2 might tend
to 0 when x approaches the boundary ∂Ω. Another way to rephrase the condition on the
coefficient matrix A(x) is that [det A(x)]−1 is bounded from above by 1. It turns out that we
do not actually need this uniform bound. All we need is its average, in a suitable integral
sense. More precisely, when f ≡ 1, all we need is that
∫_Ω 1/det A(x) dx ≤ M
for some positive constant M . This follows from the Aleksandrov estimate in Theorem 6.3.
The proof of this estimate is based on geometric arguments to be presented in Section 6.
Hidden in all these is the Monge-Ampère equation
det D2 u = f.
We will not go deeper into this equation. The reader is invited to consult some books on the
subject such as [F, G, LMT]. All one needs to know for the next section is the following fact
on the change of variables in multiple integrals: if D²u ≥ 0 on an open set E, then for
y = Du(x), the Jacobian determinant is det((u_{x_i})_{x_j})_{1≤i,j≤n} = det D²u(x), and
|Du(E)| = ∫_{Du(E)} dy = ∫_E det D²u(x) dx.
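A concrete instance (a sketch, not from the notes): for u(x) = |x|² on E = B_1(0) ⊂ R², the gradient map Du(x) = 2x sends B_1(0) onto B_2(0), and both sides of the formula equal 4π:

```python
import math

# |Du(E)| = |B_2(0)| = pi * 2^2, while det D^2 u = det(2*I_2) = 4 is constant,
# so the integral over E = B_1(0) is 4 * (pi * 1^2).
area_image = math.pi * 2.0 ** 2
integral_det_hessian = 4.0 * (math.pi * 1.0 ** 2)
match = abs(area_image - integral_det_hessian) < 1e-12
```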
MAXIMUM PRINCIPLES IN ELLIPTIC PDES 13
6. Aleksandrov-type estimates
The starting point is the following lemma.
Lemma 6.1. If u ∈ C 2 (Ω) ∩ C(Ω) with inf ∂Ω u = 0 then
(6.1) −inf_Ω u ≤ [diam(Ω)/ω_n^{1/n}] (∫_C |det D²u| dx)^{1/n},
where C is the lower contact set
C = {y ∈ Ω | u(x) ≥ u(y) + p · (x − y) for all x ∈ Ω, for some p = p(y) ∈ Rⁿ}.
Proof. Note that, if y ∈ C, and u(x) ≥ u(y) + p · (x − y) for all x ∈ Ω, then p = Du(y), and
we call l(x) := u(y) + p · (x − y) a supporting hyperplane to the graph of u at y.
For (6.1), from inf ∂Ω u = 0, it suffices to consider the case where the minimum of u on Ω
is attained at x_0 ∈ Ω with u(x_0) < 0. Let D = diam(Ω). We first prove that
(6.2) B_{|u(x_0)|/D}(0) ⊂ Du(C).
Indeed, if |p| < |u(x_0)|/D, then the affine function l(x) := u(x_0) + p · (x − x_0) satisfies
l(x_0) = u(x_0), while on ∂Ω,
l(x) ≤ −|u(x_0)| + |p||x − x_0| < 0 ≤ u(x).
Thus, u − l attains its minimum value m ≤ 0 on Ω at some point y ∈ Ω. Therefore, l(x) + m
is a supporting hyperplane to the graph of u at y. This shows that y ∈ C. Note that
D(u − l)(y) = 0 and Dl = p. Therefore, p = Du(y) and p ∈ Du(C).
Since u ∈ C²(Ω), we have D²u(y) ≥ 0 when y ∈ C. Now, from (6.2), we have
ω_n (|u(x_0)|/D)^n ≤ |Du(C)| = ∫_{Du(C)} dy ≤ ∫_C det D²u dx = ∫_C |det D²u| dx.
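A worked instance of (6.1) (a sketch, not from the notes): take u(x) = |x|² − 1 on Ω = B_1(0) ⊂ R², so inf_∂Ω u = 0, −inf_Ω u = 1, every point is a lower contact point, and det D²u ≡ 4:

```python
import math

# (6.1): -inf u <= [diam / omega_n^{1/n}] * (integral of |det D^2 u|)^{1/n}
# with n = 2, diam = 2, omega_2 = pi, and the integral = 4 * |B_1| = 4*pi.
lhs = 1.0
rhs = (2.0 / math.pi ** 0.5) * (4.0 * math.pi) ** 0.5
estimate_holds = lhs <= rhs
```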
Then,
sup_Ω u ≤ sup_∂Ω u + [diam(Ω)/(n ω_n^{1/n})] ‖ |f|/(det A)^{1/n} ‖_{L^n(Ω)}.
Proof. Since u ∈ C²(Ω), on the upper contact set Γ⁺, we have D²u ≤ 0. Using Lemma 3.12
for B = −D²u, A = (a^{ij})_{1≤i,j≤n}, we have on Γ⁺
|det D²u| = det(−D²u) ≤ [1/det(a^{ij})] (−Σ_{i,j} a^{ij} D_{ij}u / n)^n ≤ [1/det A] (|f|/n)^n.
Hence
(∫_{Γ⁺} |det D²u| dx)^{1/n} ≤ (∫_{Γ⁺} [1/det A] (|f|/n)^n dx)^{1/n} = (1/n) ‖ |f|/(det A)^{1/n} ‖_{L^n(Γ⁺)}.
Now, applying Lemma 6.2, we obtain the Aleksandrov estimate in Theorem 6.3.
In the presence of lower order terms, the estimate similar to that of Theorem 6.3 is called
the Aleksandrov-Bakelman-Pucci maximum principle. We refer to the books [GT, HL] for
these estimates and also to the original papers [A1, A2, Ba2, P].
Lemma 7.5 (Maximum principle for normal mappings). Let Ω ⊂ Rn be a bounded open set
and u, v ∈ C(Ω). If u = v on ∂Ω and v ≥ u in Ω then
∂v(Ω) ⊂ ∂u(Ω).
Proof. Let p ∈ ∂v(Ω), so that p ∈ ∂v(x_0) for some x_0 ∈ Ω. Then p is the slope of a
supporting hyperplane to the graph of v at (x_0, v(x_0)), that is,
(7.1) v(x) ≥ v(x_0) + p · (x − x_0) for all x ∈ Ω.
We will slide down this hyperplane to obtain a supporting hyperplane for the graph of u.
Indeed, with l(x) := v(x_0) + p · (x − x_0), the function u − l attains its minimum value
m ≤ 0 over Ω̄ at some y; since u − l = v − l ≥ 0 on ∂Ω, we may take y ∈ Ω, and then l + m
is a supporting hyperplane to the graph of u at y. Hence p ∈ ∂u(y) ⊂ ∂u(Ω).
Definition 7.6 (Convex set). A set E ⊂ Rn is called convex if it contains the line segment
joining any two points in it, that is,
tx + (1 − t)y ∈ E, for all x, y ∈ E, and t ∈ [0, 1].
Definition 7.7 (Convex function). Let Ω ⊂ Rn be an open set. A function u : Ω ⊂ Rn → R
is convex if for all 0 ≤ t ≤ 1, and any x, y ∈ Ω such that tx + (1 − t)y ∈ Ω we have
u(tx + (1 − t)y) ≤ tu(x) + (1 − t)u(y).
Examples of convex functions include |x| and |x|2 .
Proof of Lemma 7.1. Since u is convex, v ≥ u in Ω. By the maximum principle in Lemma
7.5, ∂v(Ω) ⊂ ∂u(Ω). Observe that
(C1) ∂v(Ω) = ∂v(x_0), and thus ∂v(Ω) is convex.
(C2) ∂v(Ω) contains B_{|u(x_0)|/diam(Ω)}(0).
To see (C1), we note that if p ∈ ∂v(Ω) then there is x_1 ∈ Ω such that p ∈ ∂v(x_1). It suffices
to consider the case x_1 ≠ x_0. Since the graph of v is a cone, v(x_1) + p · (x − x_1) is a supporting
hyperplane to the graph of v at (x_0, v(x_0)); that is, p ∈ ∂v(x_0).
For (C2), we note that, since the graph of v is a cone with vertex (x0 , v(x0 )) = (x0 , u(x0 ))
and the base Ω, p ∈ ∂v(x0 ) if and only if v(x) ≥ v(x0 ) + p · (x − x0 ) for all x ∈ ∂Ω. Thus
(C2) is straightforward.
In the proof of Lemma 7.1, we also observe the following:
(C3) There is p_0 ∈ ∂v(Ω) such that |p_0| = −u(x_0)/dist(x_0, ∂Ω).
Indeed, take x_1 ∈ ∂Ω such that |x_1 − x_0| = dist(x_0, ∂Ω). Then p_0 = −u(x_0)(x_1 − x_0)/|x_1 − x_0|²
is the desired slope. Indeed, for any x ∈ ∂Ω, (x − x_0) · (x_1 − x_0)/|x_1 − x_0| is the scalar
projection of x − x_0 onto the ray from x_0 to x_1. Using the convexity of Ω, we find
(x − x_0) · (x_1 − x_0)/|x_1 − x_0| ≤ |x_1 − x_0|,
and hence, from the formula for p_0, we find that for all x ∈ ∂Ω,
0 = v(x) = u(x_0) + |p_0||x_1 − x_0| ≥ v(x_0) + p_0 · (x − x_0).
Therefore p_0 ∈ ∂v(x_0), as claimed.
From (C2) and (C3), we see that ∂v(Ω) contains the convex hull of B_{|u(x_0)|/D}(0) and p_0.
This convex hull has measure at least
(ω_{n−1}/n)(|u(x_0)|/D)^{n−1} |p_0| = ω_{n−1} |u(x_0)|^n / (n [diam(Ω)]^{n−1} dist(x_0, ∂Ω)).
Example 8.2. If Γ = {det A ≥ 1}, then from the Aleksandrov maximum principle in The-
orem 6.3, we know that the smallest exponent p satisfies p ≤ n. In fact, p = n, as there are
examples by Gilbarg and Serrin [GS] showing that the estimate is false if p < n.
Example 8.3. If Γ = {In }, then we can take any p > n/2. We do not give the proof here.
However, we just indicate that if, instead of ∆u ≥ −|f |, we have ∆u = f , then u ∈ W 2,p (Ω)
provided that f ∈ Lp (Ω). Since W 2,p (Ω) embeds into C(Ω) if p > n/2, we have the desired
estimate in this range of p.
Related to Problem 8.1, Pucci [P] made a very interesting conjecture in 1966. The simple
form is the following conjecture.
Conjecture 8.4 (Pucci). Assume that I_n ≤ A(x) = (a^{ij}(x))_{1≤i,j≤n} ≤ (n − 1)I_n. Then, for
u ∈ C²(Ω) ∩ C(Ω̄) satisfying
Σ_{i,j=1}^n a^{ij} u_{x_i x_j} ≥ −|f| in Ω, u = 0 on ∂Ω.
Conjecture 8.4 says that, in terms of the maximum principle, all operators Σ_{i,j=1}^n a^{ij} u_{x_i x_j}
whose coefficient matrices have eigenvalues oscillating between 1 and n − 1 behave like the
Laplace operator! This is absolutely striking and beautiful!
Conjecture 8.4 is completely open for odd n greater than 1. Trudinger [T] proved this
conjecture for n even. His proof relies on new maximum principles, obtained by Kuo and
Trudinger [KT], for linear elliptic equations whose coefficients lie in the dual cone of an
appropriate Gårding cone Γ_k. The Gårding cone Γ_k is the natural cone for the k-Hessian
equation S_k(D²u) = f (see [W]), where S_k(D²u) is the k-th elementary symmetric function
of the eigenvalues of D²u. For example, S_1(D²u) = ∆u, S_n(D²u) = det D²u, and
S_2(D²u) = (1/2)[(Σ_{i=1}^n u_{x_i x_i})² − Σ_{i,j=1}^n u_{x_i x_j}²].
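The S_2 formula can be checked against the sum of principal 2×2 minors, which equals the second elementary symmetric function of the eigenvalues; a sketch with an arbitrary symmetric 3×3 matrix:

```python
# S2(M) = (1/2)[(trace M)^2 - sum_{i,j} M_ij^2] should equal the sum of
# principal 2x2 minors of the symmetric matrix M.
M = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 1.0]]
n = 3
trace_M = sum(M[i][i] for i in range(n))
sum_sq = sum(M[i][j] ** 2 for i in range(n) for j in range(n))
S2 = 0.5 * (trace_M ** 2 - sum_sq)

minor = lambda i, j: M[i][i] * M[j][j] - M[i][j] * M[j][i]
e2 = minor(0, 1) + minor(0, 2) + minor(1, 2)
diff = abs(S2 - e2)
```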
Example 8.9.
Γ*_2 = {λ ∈ Rⁿ : |λ| ≤ (1/√(n − 1)) Σ_{i=1}^n λ_i},
ρ*_2(λ) = (1/√n) [(Σ_{i=1}^n λ_i)² − (n − 1)|λ|²]^{1/2}.
where A(x) = (a^{ij}(x))_{1≤i,j≤n} ∈ Γ*_k with ρ*_k(A) > 0. Then we have the estimate
sup_Ω u ≤ C(n, q, Ω) ‖ |f|/ρ*_k(A) ‖_{L^q(Ω)}.
Proof of Conjecture 8.4 for n even. Assume that I_n ≤ A(x) = (a^{ij}(x))_{1≤i,j≤n} ≤ (n − 1)I_n,
where n is even. Then A(x) ∈ Γ*_{n/2}. The conjecture now follows from Theorem 8.10.
Remark 8.12. It can be verified that when n = 4,
Γ*_2 = {λ ∈ R⁴ : |λ| ≤ (1/√3) Σ_{i=1}^4 λ_i}
and
(1, 1, 1, 3) ∈ Γ*_2.
However, when n = 3, we have (1, 1, 2) ∉ Γ*_1 but (1, 1, 2) ∈ Γ*_2.
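The Γ*_2 memberships can be checked directly from the formula in Example 8.9 (a sketch; the small tolerance handles the boundary case (1, 1, 1, 3), where equality holds):

```python
import math

def in_gamma2_star(lam):
    """Test |lambda| <= (1/sqrt(n-1)) * sum(lambda), as in Example 8.9."""
    n = len(lam)
    return math.sqrt(sum(t * t for t in lam)) <= sum(lam) / math.sqrt(n - 1) + 1e-12

check_n4 = in_gamma2_star([1.0, 1.0, 1.0, 3.0])  # equality: both sides are 2*sqrt(3)
check_n3 = in_gamma2_star([1.0, 1.0, 2.0])
```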
9. A basic calculus question
We have seen the importance of the simple calculus Lemma 3.8 in the proof of maximum
principles for C² solutions of several PDEs. Many PDEs admit non-C² solutions. For
example, the infinity Laplacian equation
u_{x_1}² u_{x_1 x_1} + 2 u_{x_1} u_{x_2} u_{x_1 x_2} + u_{x_2}² u_{x_2 x_2} = 0 in R²
has a non-C² solution
u(x_1, x_2) = x_1^{4/3} − x_2^{4/3}.
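One can verify this pointwise away from the coordinate axes (where u fails to be C²) using the explicit derivatives; a sketch:

```python
# u = x^{4/3} - y^{4/3}: away from the axes,
# u_x = (4/3) x^{1/3},  u_xx = (4/9) x^{-2/3},  u_xy = 0,
# u_y = -(4/3) y^{1/3}, u_yy = -(4/9) y^{-2/3},
# so u_x^2 u_xx + 2 u_x u_y u_xy + u_y^2 u_yy = 64/81 - 64/81 = 0.
def inf_laplacian(x, y):
    ux = (4.0 / 3.0) * x ** (1.0 / 3.0)
    uy = -(4.0 / 3.0) * y ** (1.0 / 3.0)
    uxx = (4.0 / 9.0) * x ** (-2.0 / 3.0)
    uyy = -(4.0 / 9.0) * y ** (-2.0 / 3.0)
    uxy = 0.0
    return ux ** 2 * uxx + 2 * ux * uy * uxy + uy ** 2 * uyy

residual = max(abs(inf_laplacian(0.5 + 0.1 * k, 0.3 + 0.2 * k)) for k in range(5))
```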
Thus, an important question is the following.
Question 9.1. Find the optimal smoothness condition on a continuous function u such that
at its local maximum point x0 , we have
Du(x0 ) = 0, and D2 u(x0 ) ≤ 0,
or some interesting variants of these.
Of course, one would like u to have two derivatives in some sense. Natural spaces for those
u include the Sobolev spaces W 2,p (Ω) (1 ≤ p ≤ ∞) where
W 2,p (Ω) = {u ∈ Lp (Ω) : Du ∈ Lp (Ω), D2 u ∈ Lp (Ω)}.
In 1967, Bony proved the following maximum principle for the Sobolev space.
Theorem 9.2 (Bony). If u ∈ W^{2,p}(Ω), where p > n, has a local maximum at x_0 ∈ Ω, then
ess lim inf_{x→x_0} |Du(x)| = 0 and ess lim sup_{x→x_0} D²u(x) ≤ 0.
The first condition, which is in fact Du(x_0) = 0, is standard, as W^{2,p}(Ω) embeds into C¹(Ω)
when p > n. Note that Theorem 9.2 applies to u(x) = −|x|^{20/11} ∈ W^{2,5n}(B_1(0)) at its local
maximum 0 in the unit ball B_1(0).
Theorem 9.2 raises a new question:
what is the optimal range of p for Bony-type maximum principle to hold?
A satisfactory answer was found 15 years later by Lions [L]. Before stating Lions’ the-
orem, we note that p cannot be less than n. Indeed, the function u(x) = −|x| satis-
fies −|x| ∈ W 2,p (B1 (0)) for all p < n but −|x| 6∈ W 2,n (B1 (0)) since kD2 |x|k is compa-
rable to |x|−1 . However, at the local maximum point 0 in the unit ball B1 (0), we have
ess lim inf x→0 |Du(x)| = 1.
When p = n, Lions [L] showed that a Bony-type maximum principle holds. This is the
content of the following theorem.
We briefly mention a very interesting application of Theorem 9.3, following [L], to viscosity
solutions of fully nonlinear elliptic equations
(9.1) F (D2 u, Du, u, x) = 0.
Here we assume that F (A, p, t, x) is continuous in its arguments and satisfies the ellipticity
condition
F (A, p, t, x) ≥ F (B, p, t, x) for all matrices A ≥ B.
There is a theory of viscosity solutions u ∈ C(Ω) to (9.1); see [CL, CC].
Definition 9.4. We say that u ∈ C(Ω) is a viscosity solution of (9.1) if it satisfies for all
ϕ ∈ C 2 (Ω) the following conditions:
(i) (viscosity subsolution) at any local maximum point x0 ∈ Ω of u − ϕ, one has
F (D2 ϕ(x0 ), Dϕ(x0 ), u(x0 ), x0 ) ≥ 0.
(ii) (viscosity supersolution) at any local minimum point x0 ∈ Ω of u − ϕ, one has
F (D2 ϕ(x0 ), Dϕ(x0 ), u(x0 ), x0 ) ≤ 0.
A reasonable question is the following:
If u ∈ C(Ω) satisfies (9.1) a.e., is it a viscosity solution?
Applying Theorem 9.3, we have the following answer:
Theorem 9.5. If u ∈ W^{2,n}_{loc}(Ω) satisfies (9.1) a.e., then it is also a viscosity solution!
Proof. We verify that u is a viscosity subsolution. The proof that u is a viscosity supersolution
is similar. Let ϕ ∈ C²(Ω). Then v := u − ϕ ∈ W^{2,n}_{loc}(Ω) satisfies
F(D²v + D²ϕ, Dv + Dϕ, u(x), x) = 0 a.e.
If x_0 ∈ Ω is a local maximum point of u − ϕ = v, then, by Theorem 9.3,
ess lim inf_{x→x_0} |Dv(x)| = 0 and ess lim sup_{x→x_0} D²v(x) ≤ 0.
Evaluating the equation along a suitable sequence of points x_k → x_0 and using the continuity
of F together with the ellipticity condition, we obtain F(D²ϕ(x_0), Dϕ(x_0), u(x_0), x_0) ≥ 0,
as required.
References
[A1] Aleksandrov, A. D. Certain estimates for the Dirichlet problem. Dokl. Akad. Nauk SSSR 134 (1960)
1001–1004 (Russian); translated as Soviet Math. Dokl. 1 (1961) 1151–1154.
[A2] Aleksandrov, A. D. Uniqueness conditions and bounds for the solution of the Dirichlet problem.
(Russian. English summary) Vestnik Leningrad. Univ. Ser. Mat. Meh. Astronom. 18 (1963) no. 3,
5–29. English translation in Amer. Math. Soc. Transl. (2) 68 (1968), 89–119.
[A3] Aleksandrov, A. D. Majorants of solutions of linear equations of order two. Vestnik Leningrad. Univ.
21 (1966), 5–25 (Russian). English translation in Amer. Math. Soc. Transl. (2) 68 (1968), 120–143.
[Ba2] Bakel’man, I. Ja. On the theory of quasilinear elliptic equations. (Russian) Sibirsk. Mat. Ž. 2 (1961)
179–186.
[Bo] Bony, J.-M. Principe du maximum dans les espaces de Sobolev. C. R. Acad. Sci. Paris Sér. A-B
265 (1967), A333–A336.
[B] Brendle, S. The isoperimetric inequality for a minimal submanifold in Euclidean space. J. Amer.
Math. Soc. 34 (2021), no. 2, 595–603.
[Br] Brezis, H. Functional analysis, Sobolev spaces and partial differential equations. Universitext.
Springer, New York, 2011.
[CC] Caffarelli, L. A.; Cabré, X. Fully nonlinear elliptic equations. American Mathematical Society Col-
loquium Publications, volume 43, 1995.
[CL] Crandall, M. G.; Lions, P-L. Viscosity solutions of Hamilton-Jacobi equations. Trans. Amer. Math.
Soc. 277 (1983), no. 1, 1–42.
[E] Evans, L. C. Partial differential equations. Second edition. Graduate Studies in Mathematics, 19.
American Mathematical Society, Providence, RI, 2010.
[F] Figalli, A. The Monge-Ampère equation and its applications. Zurich Lectures in Advanced Mathe-
matics. European Mathematical Society (EMS), Zürich, 2017.
[GNN] Gidas, B.; Ni, W. M.; Nirenberg, L. Symmetry and related properties via the maximum principle.
Comm. Math. Phys. 68 (1979), no. 3, 209–243.
[GS] Gilbarg, D.; Serrin, J. On isolated singularities of solutions of second order elliptic differential
equations. J. Analyse Math. 4 (1955/56), 309–340.
[GT] Gilbarg, D.; Trudinger, N.S. Elliptic partial differential equations of second order, Reprint of the
1998 edition. Classics in Mathematics. Berlin: Springer, 2001.
[G] Gutiérrez, C. E. The Monge-Ampère equation. Second edition. Progress in Nonlinear Differential
Equations and their Applications, 89. Birkhaüser, Boston, 2016.
[HL] Han, Q.; Lin, F. H. Elliptic partial differential equations. 2nd ed. Courant Lecture Notes in Math-
ematics, vol. 1. Courant Institute of Mathematical Sciences, New York; American Mathematical
Society, Providence, RI, 2011.
[HJ] Horn, R. A.; Johnson, C. R. Matrix analysis. Second edition. Cambridge University Press, Cam-
bridge, 2013.
[KS1] Krylov, N. V.; Safonov, M. V. An estimate for the probability of a diffusion process hitting a set of
positive measure. (Russian) Dokl. Akad. Nauk SSSR 245 (1979), no. 1, 18–20.
[KS2] Krylov, N. V.; Safonov, M. V. A property of the solutions of parabolic equations with measurable
coefficients. (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 44 (1980), no. 1, 161–175, 239.
[KT] Kuo, H.J.; Trudinger, N. S. New maximum principles for linear elliptic equations. Indiana Univ.
Math. J. 56 (2007), no. 5, 2439–2452.
[LMT] Le, N. Q.; Mitake, H.; Tran, H. V. Dynamical and geometric aspects of Hamilton-Jacobi and lin-
earized Monge-Ampère equations–VIASM 2016. Edited by Mitake and Tran. Lecture Notes in Math-
ematics, 2183. Springer, Cham, 2017.
[Li] Li, Y. Y. The work of Louis Nirenberg. Proceedings of the International Congress of Mathematicians.
Volume I, 127–137, Hindustan Book Agency, New Delhi, 2010.
[L] Lions, P.-L. A remark on Bony maximum principle. Proc. Amer. Math. Soc. 88 (1983), no. 3,
503–508.
[Pg] Pogorelov, A. V. The Minkowski multidimensional problem. Translated from the Russian by
Vladimir Oliker. Introduction by Louis Nirenberg. Scripta Series in Mathematics. V. H. Winston
& Sons, Washington, D.C.; Halsted Press [John Wiley & Sons], New York-Toronto-London, 1978.
[P] Pucci, C. Operatori ellittici estremanti. Ann. Mat. Pura Appl. (4) 72 (1966), 141–170.
[PS] Pucci, P.; Serrin, J. The maximum principle. Progress in Nonlinear Differential Equations and their
Applications, 73. Birkhäuser Verlag, Basel, 2007.
[S] Savin, O. Small perturbation solutions for elliptic equations. Comm. Partial Differential Equations
32 (2007), no. 4-6, 557–578.
[St] Strauss, W. A. Partial differential equations. An introduction. Second edition. John Wiley & Sons,
Ltd., Chichester, 2008.
[T] Trudinger, N. S. Remarks on the Pucci conjecture. Indiana Univ. Math. J. 69 (2020), no. 1, 109–118.
[W] Wang, X.J. The k-Hessian equation. Geometric analysis and PDEs, 177–252, Lecture Notes in
Math., 1977, Springer, Dordrecht, 2009.