
AN INTRODUCTION TO MAXIMUM PRINCIPLES IN ELLIPTIC PDES

NAM Q. LE

Abstract. Maximum principles are fundamental tools in elliptic PDEs. They allow one to
estimate the size of solutions in terms of the boundary data and the structure of the equations.
A simple but useful instance of the maximum principle in one dimension says that a convex
quadratic function on an interval attains its maximum value at one of the end points. This
serves as a basis for the Hopf maximum principle for linear elliptic equations, including the
Laplace equation in electromagnetism, in all dimensions, which we will cover in detail in
this lecture. For fully nonlinear equations such as the Monge-Ampère equation in geometry
and optimal transport, we will discuss fundamental ideas in the Aleksandrov maximum
principle. We will mention interesting connections with equations lying between the Laplace
and Monge-Ampère equations. If time permits, we will point out further connections with
viscosity solutions, the Pucci conjecture, and the Mahler conjecture. The lecture is aimed at
freshmen and sophomores. Terminology will be explained.

1. Notation
This section lists standard notation used in these notes.

1.1. Geometric notation.


• Rn =n-dimensional real Euclidean space, R = R1 .
• A typical point in Rn is x = (x1 , · · · , xn ).
• ei = (0, · · · , 0, 1, 0, · · · , 0)= ith standard coordinate vector.
• δij : the Kronecker symbol, where δij = 1 if i = j and δij = 0 if i ≠ j.
• If x = (x1, · · · , xn), y = (y1, · · · , yn) ∈ Rn, then
$$x \cdot y = \sum_{i=1}^{n} x_i y_i, \qquad |x| = \Big(\sum_{i=1}^{n} x_i^2\Big)^{1/2}.$$

• Br (x) = {y ∈ Rn | |x − y| < r} is the open ball in Rn with center x and radius r.


• Ω denotes an open, bounded subset of Rn , unless otherwise stated.
• ∂Ω = boundary of Ω; $\overline{\Omega} = \Omega \cup \partial\Omega$ = closure of Ω.
• diam(E) : the diameter of a bounded set E.
• dist(·, E) : the distance function from a closed set E.
• |Ω|= the Lebesgue measure of Ω.
• ωn is the volume of the unit ball B1 (0) in Rn .

1.2. Notation on partial derivatives. Assume u : Ω → R, x ∈ Ω.


• First partial derivatives:
$$u_{x_i}(x) = \frac{\partial u}{\partial x_i}(x) = \lim_{h\to 0}\frac{u(x + he_i) - u(x)}{h}, \quad \text{provided this limit exists.}$$

• Second partial derivatives:
$$u_{x_i x_j} = \frac{\partial^2 u}{\partial x_i \partial x_j}.$$
• The gradient vector:
Du(x) := (ux1 (x), · · · , uxn (x)).
• The Hessian matrix:
$$D^2 u = (u_{x_i x_j})_{1\le i,j\le n} = \begin{pmatrix} u_{x_1 x_1} & u_{x_1 x_2} & \cdots & u_{x_1 x_n} \\ u_{x_2 x_1} & u_{x_2 x_2} & \cdots & u_{x_2 x_n} \\ \vdots & \vdots & \ddots & \vdots \\ u_{x_n x_1} & u_{x_n x_2} & \cdots & u_{x_n x_n} \end{pmatrix}.$$
• The Laplace operator:
$$\Delta u := \sum_{i=1}^{n} u_{x_i x_i} = \operatorname{trace}(D^2 u).$$

• C(Ω), C($\overline{\Omega}$): the set of continuous functions on Ω, $\overline{\Omega}$, respectively.
• C k (Ω): the set of functions all of whose derivatives of order ≤ k are continuous in Ω.
1.3. Matrices.
• In is the identity n × n matrix.
• A ≥ B for symmetric n × n matrices A and B: all eigenvalues of A − B are nonnegative. In particular, if all eigenvalues of A are non-positive, we write A ≤ 0.
• trace(M ) : the trace of a square matrix M .
• det M : the determinant of a square matrix M .
1.4. Integrals and Lp spaces.
• If 1 ≤ p ≤ ∞, then
$$L^p(\Omega) = \{u : \Omega \to \mathbb{R} \mid u \text{ is Lebesgue measurable and } \|u\|_{L^p(\Omega)} < \infty\},$$
where
$$\|u\|_{L^p(\Omega)} = \Big(\int_{\Omega} |u(x)|^p\,dx\Big)^{1/p} \quad (1 \le p < \infty)$$
and
$$\|u\|_{L^\infty(\Omega)} := \operatorname{ess\,sup}_{\Omega} |u|.$$
• A consequence of the Hölder inequality: from the Hölder inequality
$$\int_{\Omega} |f(x)g(x)|\,dx \le \|f\|_{L^p(\Omega)}\,\|g\|_{L^q(\Omega)} \quad \text{where } \frac{1}{p} + \frac{1}{q} = 1,$$
we can easily deduce that if 1 ≤ r < s ≤ ∞, then
$$\|u\|_{L^r(\Omega)} \le |\Omega|^{\frac{1}{r} - \frac{1}{s}}\,\|u\|_{L^s(\Omega)}$$
(a short derivation is given after this list).
• An inclusion: if 1 ≤ r ≤ s ≤ ∞, then
Ls (Ω) ⊂ Lr (Ω).
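For completeness, here is a short derivation of the inclusion (a standard computation; the case s = ∞ is immediate): assuming 1 ≤ r < s < ∞ and u ∈ Ls(Ω), apply the Hölder inequality to |u|^r · 1 with the exponent pair (s/r, s/(s − r)):
$$\|u\|_{L^r(\Omega)}^r = \int_{\Omega} |u|^r \cdot 1\,dx \le \Big(\int_{\Omega} |u|^s\,dx\Big)^{r/s}\,|\Omega|^{1 - \frac{r}{s}} = \|u\|_{L^s(\Omega)}^r\,|\Omega|^{1 - \frac{r}{s}}.$$
Taking r-th roots gives $\|u\|_{L^r(\Omega)} \le |\Omega|^{\frac{1}{r} - \frac{1}{s}}\,\|u\|_{L^s(\Omega)} < \infty$; since Ω is bounded, |Ω| < ∞, and the inclusion Ls(Ω) ⊂ Lr(Ω) follows.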

2. Maximum principles in textbooks


Maximum principles are fundamental tools in elliptic PDEs. They allow one to estimate the size
of solutions in terms of the boundary data and the structure of the equations. They appear in most,
if not all, PDE textbooks. Thus, it is impossible to list all of these texts. Instead, we just
mention a select few books such as [Br, CC, E, F, GT, G, HL, St]. The book [PS] is entirely
about the maximum principles. The 2015 Abel laureate L. Nirenberg was reported [Li] to
have said “I have made a living from the maximum principle”. At the heart of the definition
of the viscosity theory of Crandall-Lions [CL] is the maximum principle. Many important
results in analysis, PDEs, and geometry were proved using the maximum principle argument.
These include Pogorelov’s second derivative estimates for the Monge-Ampère equation [Pg],
symmetry results for elliptic PDEs by Gidas, Ni and Nirenberg [GNN], Krylov-Safonov’s
Harnack inequality [KS1, KS2] for uniformly elliptic equations in non-divergence form, and
Brendle’s proof of the isoperimetric inequality for a minimal submanifold in Euclidean space
using the Aleksandrov-Bakelman-Pucci maximum principle method [B], to name a few.
The aim of these notes is to introduce some basic ideas in the maximum principles. As
such, we will not aim for the most general forms of the equations, nor the sharpest conditions,
nor the strongest results. Rather, we start from simple observations and try to ask interesting
questions. Moreover, we will only focus on the “max” statements for linear equations as the
“min” statements then follow from considering the negative of the solutions. Except for some
references to Sobolev spaces in Example 8.3 and the final Section 9, I have tried to make all
concepts and results in these notes accessible to freshmen and sophomores.

3. Maximum principle for harmonic functions


Definition 3.1. If Ω ⊂ Rn and u ∈ C 2 (Ω), then we say u is harmonic if
$$\Delta u := \sum_{i=1}^{n} u_{x_i x_i} = 0.$$

Example 3.2. One can check that the following functions are harmonic on Rn, where n ≥ 2:
(i) $u(x) = \sum_{i=1}^{n} a_i x_i + b$, where ai, b ∈ R.
(ii) $u(x) = (n-1)x_1^2 - x_2^2 - \cdots - x_n^2$.
(iii) $u(x) = x_1^3 - 3x_1 x_2^2$.
(iv) $u(x) = e^{x_1}\sin x_2 + e^{x_n}\cos x_1$.
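As a sanity check, these computations can be verified symbolically; the following is a minimal sketch using Python's sympy, taking n = 3 (and, for (i), a sample choice of coefficients):

```python
# Symbolic check that the functions in Example 3.2 are harmonic (n = 3).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

def laplacian(u):
    # Delta u = sum of the pure second partial derivatives u_{x_i x_i}
    return sum(sp.diff(u, v, 2) for v in X)

examples = [
    5*x1 - 2*x2 + 7,                                # (i) with a = (5, -2, 0), b = 7
    2*x1**2 - x2**2 - x3**2,                        # (ii) with n = 3
    x1**3 - 3*x1*x2**2,                             # (iii)
    sp.exp(x1)*sp.sin(x2) + sp.exp(x3)*sp.cos(x1),  # (iv) with x_n = x_3
]
for u in examples:
    assert sp.simplify(laplacian(u)) == 0
```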

A very popular form of the maximum principle that one encounters in PDE textbooks is
the following: A harmonic function on a bounded domain attains its maximum and minimum
values on the boundary. This is called the weak maximum principle. The strong maximum
principle says that a harmonic function cannot attain its max/min values in the interior of a
connected bounded domain unless it is a constant.

Theorem 3.3 (Maximum principle for harmonic functions). Let u ∈ C 2 (Ω) ∩ C(Ω) be har-
monic in Ω. Then the following statements hold:
(i) (Weak maximum principle) maxΩ u = max∂Ω u.
(ii) (Strong maximum principle) Furthermore, if Ω is connected, and there is x0 ∈ Ω
such that u(x0 ) = maxΩ u then u is a constant in Ω.

3.1. A classical proof. For the sake of completeness, we present here a classical proof of
Theorem 3.3. It is essentially due to Gauss (1839). Other similar proofs are due to Bernstein
(1904), Picard (1905), Lichtenstein (1912, 1924). The proof uses a mean-value property for
harmonic functions stated in Theorem 3.4. You might skip this classical proof and continue
with the discussion in Section 3.2.
Theorem 3.4 (Mean-value property for harmonic functions). If u ∈ C 2 (Ω) is harmonic,
then for each Br (x) ⊂ Ω, we have
$$u(x) = \fint_{B_r(x)} u(y)\,dy := \frac{1}{|B_r(x)|}\int_{B_r(x)} u(y)\,dy.$$
Classical proof of Theorem 3.3. Since $\overline{\Omega}$ is compact and u ∈ C($\overline{\Omega}$), u attains its maximum value in $\overline{\Omega}$.
(ii) Assume Ω is connected. Suppose there is x0 ∈ Ω such that
$$\text{(3.1)}\qquad u(x_0) = M := \max_{\overline{\Omega}} u.$$

Then, for 0 < r < dist (x0 , ∂Ω), by the mean-value property in Theorem 3.4 and (3.1),
$$M = u(x_0) = \fint_{B_r(x_0)} u(y)\,dy \le M.$$

Hence equality holds, and using the continuity of u in Br(x0) together with u ≤ M, we find u ≡ M in Br(x0). This shows that E := {x ∈ Ω | u(x) = M} is open. The continuity of u implies that E is relatively closed in Ω. Thus E is both open and relatively closed in Ω. Since Ω is connected, E = Ω. Thus u ≡ M in Ω. Using u ∈ C($\overline{\Omega}$), we find u ≡ M in $\overline{\Omega}$.
(i) We use (ii). Suppose there is x0 ∈ Ω such that
$$\text{(3.2)}\qquad u(x_0) = \max_{\overline{\Omega}} u > \max_{\partial\Omega} u.$$

There exists a unique connected component Ω0 of Ω such that x0 ∈ Ω0 . Clearly u ∈ C 2 (Ω0 ) ∩


C(Ω0 ) is harmonic in Ω0 . Since u(x0 ) = maxΩ u ≥ maxΩ0 u, we must have u(x0 ) = maxΩ0 u.
By (ii), u is a constant in Ω0. Thus
$$\text{(3.3)}\qquad u(x_0) = \max_{\partial\Omega_0} u \le \max_{\partial\Omega} u.$$

We obtain a contradiction from (3.2) and (3.3). 


3.2. Seeking a new proof of Theorem 3.3: Hopf ’s proof. The above proof of Theorem
3.3 cannot handle the maximum principle for solutions to PDEs without the mean-value
property. Eberhard Hopf (1902-1983, IU faculty 1949-1983) devised an important method
in 1927 that has survived the test of time. His method has become a fundamental tool in PDEs.
We now illustrate Hopf’s idea in the one-dimensional case. Before describing Hopf’s idea, we
note that there is a very simple proof in the one-dimensional case because u being harmonic
gives that u is linear, that is u(x) = cx + d, so its graph is a straight line. Then, it is
obvious that u attains its maximum/minimum value at the end points of the interval under
consideration. However, this simple proof does not generalize to higher dimensions, as there
are nonlinear harmonic functions, as illustrated in Example 3.2.
One-dimensional case. Hopf’s idea consists of a contradiction argument with the help
of some convexity/positivity combined with the behavior of the first and second derivatives
of a function at its extremal points that lie in the interior of the domain. The last point is
usually referred to as the second derivative test.

Lemma 3.5 (Second derivative test for functions of one variable). If a C 2 function u on (a, b) has a local maximum at x0 ∈ (a, b), then u′(x0) = 0 and u′′(x0) ≤ 0.
Suppose we have a harmonic function u on an interval [a, b]. Then u′′(x) = 0 for all x ∈ (a, b). Assume by contradiction that the maximum of u is attained at an interior point x0 ∈ (a, b). We would like to derive a contradiction from this. By the second derivative test for a local maximum, we must have u′(x0) = 0 and u′′(x0) ≤ 0. We “almost” have a contradiction from the fact that u′′(x0) = 0. Of course, we would have a contradiction if u′′(x0) > 0 (but we don’t). Anyway, this contradiction argument leads to the following simple observation. We say that a C 2 function v is strictly convex on (a, b) if v′′(x) > 0 for all x ∈ (a, b).
Lemma 3.6. If v′′(x) > 0 for all x ∈ (a, b), then maxa≤x≤b v(x) = max{v(a), v(b)}. In other words, a strictly convex function on an interval can only attain its maximum value at one of the end points.
Back to the contradiction argument above. Although we do not have the strict positivity u′′(x0) > 0, we can try to “borrow” some positivity and then “pay back”. Here is a new proof of Theorem 3.3 in one dimension.
A new proof of Theorem 3.3 in one dimension. Assume u ∈ C 2 (a, b) ∩ C([a, b]) is harmonic, that is, u′′(x) = 0 in (a, b). Let us “borrow” some positivity by considering uε(x) = u(x) + εx², where ε > 0. Then
$$\text{(3.4)}\qquad u_\varepsilon''(x) = u''(x) + 2\varepsilon = 2\varepsilon > 0 \quad \text{in } (a, b).$$
By Lemma 3.6, uε attains its maximum value at a or b. Thus
$$\max_{a\le x\le b} u(x) \le \max_{a\le x\le b} u_\varepsilon(x) = \max\{u_\varepsilon(a), u_\varepsilon(b)\} \le \max\{u(a), u(b)\} + \varepsilon\max\{a^2, b^2\}.$$
Now, we “pay back” what we borrowed by letting ε → 0 to obtain
$$\max_{a\le x\le b} u(x) \le \max\{u(a), u(b)\}.$$
This easily implies that maxa≤x≤b u(x) = max{u(a), u(b)}. □


Observe that the above proof does not use much of the harmonicity of u. The condition u′′(x) ≥ 0 suffices for the positivity of u′′ε in (3.4). Another observation is that in the contradiction argument, we have an extra piece of information that is not used, namely that the first derivative of the function at its interior local maximum is zero. Thus, we have the conclusion of Lemma 3.6 for v′′(x) + k(x)v′(x) > 0 instead of v′′(x) > 0.
Lemma 3.7. If v′′(x) + k(x)v′(x) > 0 for all x ∈ (a, b), then maxa≤x≤b v(x) = max{v(a), v(b)}.
In these notes, we will see that all these ideas carry over to higher dimensions. We begin with a higher-dimensional version of Lemma 3.5.
Lemma 3.8 (Second Derivative Test for functions of several variables). If u ∈ C 2 (Ω) attains
a local maximum at x0 ∈ Ω, then
Du(x0 ) = 0
and
D2 u(x0 ) ≤ 0
i.e., the symmetric matrix D2 u(x0 ) = (uxi xj (x0 ))1≤i,j≤n is nonpositive definite. This is equiv-
alent to: all eigenvalues of D2 u(x0 ) are non-positive.

Proof. The proof reduces to the one-dimensional case. Fix any ξ = (ξ1, · · · , ξn) ∈ Rn. Then for |t| small, x0 + tξ ∈ Ω, and ϕ(t) := u(x0 + tξ) attains a local maximum at t = 0. Thus, by Lemma 3.5, ϕ′(0) = 0 and ϕ′′(0) ≤ 0. Compute
$$\varphi'(t) = Du(x_0 + t\xi)\cdot\xi \quad\text{and}\quad \varphi''(t) = D^2 u(x_0 + t\xi)\,\xi\cdot\xi.$$
Hence ϕ′(0) = Du(x0) · ξ = 0 for all ξ ∈ Rn, from which we find Du(x0) = 0. Moreover, from ϕ′′(0) = D2u(x0)ξ · ξ ≤ 0 for all ξ ∈ Rn, we find D2u(x0) ≤ 0. □
The following theorem is a slight generalization of the weak maximum principle in Theorem 3.3; it is itself a special case of Theorem 4.1 below.
Theorem 3.9 (Weak maximum principle). Let b = (b1 , · · · , bn ) ∈ C(Ω; Rn ). Suppose u ∈
C 2 (Ω) ∩ C(Ω) satisfies
∆u(x) + b(x) · Du(x) ≥ 0 in Ω.
Then maxΩ u = max∂Ω u.
It should be emphasized that in the statement of Theorem 3.9, we do not have a solution of any PDE whatsoever. We just have a so-called subsolution.
Proof of Theorem 3.9, due to Hopf. We consider two cases.
Case 1:
∆u(x) + b(x) · Du(x) > 0 in Ω.
Suppose there is x0 ∈ Ω such that u(x0 ) = maxΩ u. Then, by Lemma 3.8, Du(x0 ) = 0 and
∆u(x0 ) = trace D2 u(x0 ) = sum of eigenvalues of D2 u(x0 ) ≤ 0.
Hence
∆u(x0 ) + b(x0 ) · Du(x0 ) = ∆u(x0 ) ≤ 0,
a contradiction. Thus, we must have maxΩ u = max∂Ω u.
Case 2: General case. To reduce to Case 1, we “borrow” some positivity. Let ε > 0.
Consider
uε (x) = u(x) + εeλx1
where λ > 0 is a constant to be chosen later.
Compute
$$\Delta u_\varepsilon = \Delta u(x) + \varepsilon\lambda^2 e^{\lambda x_1}, \qquad Du_\varepsilon(x) = Du(x) + (\varepsilon\lambda e^{\lambda x_1}, 0, \cdots, 0).$$
Thus
$$\Delta u_\varepsilon(x) + b(x)\cdot Du_\varepsilon(x) = \Delta u(x) + b(x)\cdot Du(x) + \varepsilon\lambda e^{\lambda x_1}(\lambda + b_1(x)) \ge \varepsilon\lambda e^{\lambda x_1}(\lambda + b_1(x)) > 0$$
if we choose
$$\lambda = \max_{\overline{\Omega}} |b_1| + 1 \ge -b_1(x) + 1 \quad \text{in } \Omega.$$

By Case 1, we have
$$\text{(3.5)}\qquad \max_{\overline{\Omega}} u_\varepsilon = \max_{\partial\Omega} u_\varepsilon.$$

Now, we “pay” back what we “borrowed” by letting ε → 0. Letting ε → 0 in (3.5), we obtain


maxΩ u = max∂Ω u, completing the proof of the theorem. 
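The key computation in Case 2 can also be checked symbolically; here is a small sketch (an illustration with n = 2 and a sample drift b, not part of the original notes):

```python
# Check of the identity used in Case 2 of the proof of Theorem 3.9 (n = 2):
# L(u + eps*e^{lam*x1}) = Lu + eps*lam*e^{lam*x1}*(lam + b1).
import sympy as sp

x1, x2, eps, lam = sp.symbols('x1 x2 eps lam')
u = sp.Function('u')(x1, x2)
b1 = sp.sin(x1 + x2)    # sample continuous coefficient with |b1| <= 1
b2 = sp.cos(x1 - x2)    # sample continuous coefficient

def L(w):
    # L w = Delta w + b . Dw
    return (sp.diff(w, x1, 2) + sp.diff(w, x2, 2)
            + b1*sp.diff(w, x1) + b2*sp.diff(w, x2))

extra = eps*lam*sp.exp(lam*x1)*(lam + b1)
assert sp.simplify(L(u + eps*sp.exp(lam*x1)) - L(u) - extra) == 0
# With lam = max|b1| + 1 = 2, the extra term is > 0 whenever eps > 0.
```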

3.3. Linear Algebra. Back to Case 1 above, we only used trace (D2 u(x0 )) ≤ 0 but did
not fully use D2 u(x0 ) ≤ 0. This leaves room to explore the full scope of Hopf’s method by
considering
$$\text{(3.6)}\qquad Lu := \sum_{i,j=1}^{n} a^{ij}(x)\,u_{x_i x_j}(x) + b(x)\cdot Du(x),$$

where the matrix A(x) = (aij (x))1≤i,j≤n is symmetric. As long as the matrix A(x) is nonneg-
ative definite, Case 1 still works. This is due to Lemmas 3.11 and 3.12 below.
Definition 3.10. An operator of the form (3.6) with A(x) being positive definite is called a
second order elliptic operator in non-divergence form.
Lemma 3.11. We have the following identity:
$$\sum_{i,j=1}^{n} a^{ij}(x)\,u_{x_i x_j}(x) = \operatorname{trace}(A(x)D^2u(x)), \quad \text{where } A(x) = (a^{ij}(x))_{1\le i,j\le n}.$$

Proof. Note that if A = (aij)1≤i,j≤n and B = (bij)1≤i,j≤n where B is symmetric, that is, bij = bji, then for C = AB = (cij)1≤i,j≤n, we have
$$\operatorname{trace}(AB) = \sum_{i=1}^{n} c_{ii} = \sum_{i=1}^{n}\sum_{j=1}^{n} a^{ij} b_{ji} = \sum_{i,j=1}^{n} a^{ij} b_{ij}.$$
Apply this to A(x) = (aij(x))1≤i,j≤n and B = (uxixj)1≤i,j≤n. □


Lemma 3.12. Let A and B be two nonnegative symmetric n × n matrices. Then
trace(AB) ≥ 0.
In fact, we have the following Schur inequality
trace(AB) ≥ n(det A)1/n (det B)1/n .
Proof. Observe that if M is a nonnegative symmetric n × n matrix then
(3.7) trace(M ) ≥ n(det M )1/n .
Indeed, let α1 , · · · , αn be nonnegative eigenvalues of M . By the Arithmetic-Geometric in-
equality,
$$\operatorname{trace}(M) = \sum_{i=1}^{n} \alpha_i \ge n\Big(\prod_{i=1}^{n} \alpha_i\Big)^{1/n} = n(\det M)^{1/n}.$$

Returning to our lemma: applying (3.7) to M := B^{1/2} A B^{1/2}, we have
$$\operatorname{trace}(AB) = \operatorname{trace}(B^{1/2} A B^{1/2}) \ge n\big[\det(B^{1/2} A B^{1/2})\big]^{1/n} = n(\det A)^{1/n}(\det B)^{1/n}. \qquad\Box$$
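Both facts are easy to test numerically; a quick sketch (an illustration only, with random nonnegative symmetric matrices):

```python
# Numerical check of Lemma 3.11's trace identity and Lemma 3.12's
# Schur-type inequality trace(AB) >= n (det A)^{1/n} (det B)^{1/n}.
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T                    # symmetric, nonnegative definite

A, B = random_psd(n), random_psd(n)

# Lemma 3.11: sum_{i,j} a_ij b_ij = trace(AB) for symmetric B
assert np.isclose(np.sum(A * B), np.trace(A @ B))

# Lemma 3.12 (Schur inequality)
assert np.trace(A @ B) >= n*(np.linalg.det(A)*np.linalg.det(B))**(1/n) - 1e-9
```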

A good reference for matrices is [HJ]. The verification of Case 1 in the proof of Theorem
3.9 does not require a fixed positive lower bound on the non-negative eigenvalues of A(x).
Requiring a positive lower bound allows us to deal also with the general Case 2. We will
take up this point in Section 4 where an important lemma, due to Hopf, is also proved.

4. Maximum principles and Hopf’s lemma


In this section, we consider the second order elliptic operator in non-divergence form
$$Lu = \sum_{i,j=1}^{n} a^{ij}(x)\,u_{x_i x_j} + \sum_{i=1}^{n} b^i(x)\,u_{x_i} \quad \text{in } \Omega \subset \mathbb{R}^n,$$
where $a^{ij}, b^i \in C(\Omega)$, with $a^{ij} = a^{ji}$, and


(4.1) A(x) = (aij (x)) ≥ θIn for all x ∈ Ω; θ > 0.
Recall that another way to write the last condition is:
$$\sum_{i,j=1}^{n} a^{ij}(x)\,\xi_i\xi_j \ge \theta|\xi|^2 \quad \text{for all } x \in \Omega \text{ and all } \xi = (\xi_1, \cdots, \xi_n) \in \mathbb{R}^n.$$

Theorem 4.1 (Weak maximum principles). Assume u ∈ C 2 (Ω) ∩ C(Ω). If


Lu ≥ 0 in Ω,
then
$$\max_{\overline{\Omega}} u = \max_{\partial\Omega} u.$$

Remark 4.2. A function u satisfying Lu ≥ 0 is called a subsolution of the PDE Lu = 0. Theorem 4.1 says that a subsolution attains its maximum on ∂Ω.
Proof of Theorem 4.1. Step 1: The case of strict positivity (or strict subsolution), that is,
Lu > 0 in Ω.
Suppose there is x0 ∈ Ω such that u(x0 ) = maxx∈Ω u(x). At x0 , by Lemma 3.8, we have
Du(x0 ) = 0 and D2 u(x0 ) ≤ 0. Thus −D2 u(x0 ) ≥ 0. Then, by (4.1) and Lemma 3.12, we
have
trace (−D2 u(x0 )A(x0 )) ≥ 0
and therefore
$$Lu(x_0) = \operatorname{trace}(A(x_0)D^2u(x_0)) + \sum_{i=1}^{n} b^i(x_0)\,u_{x_i}(x_0) = -\operatorname{trace}(-D^2u(x_0)A(x_0)) \le 0.$$

This is a contradiction to Lu(x0 ) > 0. We note that, in this step, θ can be 0.


Step 2: General case Lu ≥ 0 in Ω. We will find some bounded, C 2 function v on Ω such
that Lv > 0 in Ω. Then, for any ε > 0,
L(u + εv) = Lu + εLv > 0 in Ω.
By Step 1,
$$\text{(4.2)}\qquad \max_{\overline{\Omega}}(u + \varepsilon v) = \max_{\partial\Omega}(u + \varepsilon v).$$
Now, let ε → 0 in (4.2) to obtain $\max_{\overline{\Omega}} u = \max_{\partial\Omega} u$.


How to find v? We can think of functions v of simple forms, such as linear functions, quadratic functions, etc., that depend on one variable to make the calculations easier. If v is linear, then $Lv = \sum_{i=1}^{n} b^i v_{x_i}$, but this does not guarantee Lv > 0.

If v is quadratic, say v = |x|²/2, then from vxixj = δij (the Kronecker symbol), we find
$$Lv = \sum_{i,j=1}^{n} a^{ij}\delta_{ij} + \sum_{i=1}^{n} b^i x_i = \operatorname{trace}(A(x)) + \sum_{i=1}^{n} b^i x_i \ge n\theta + \sum_{i=1}^{n} b^i x_i.$$

We improve the situation a bit but still encounter the troublesome term $\sum_{i=1}^{n} b^i x_i$. However, the above calculations suggest seeking higher-degree polynomials. Thus we can try |x|^m for m large. It is simpler to try
$$v(x) = (x_1 + R + 1)^m \quad \text{where } \Omega \subset B_R(0) \text{ and } m > 2.$$
Then
$$v_{x_1} = m(x_1 + R + 1)^{m-1} \quad \text{and} \quad v_{x_1 x_1} = m(m-1)(x_1 + R + 1)^{m-2}.$$
We have
$$Lv = a^{11} v_{x_1 x_1} + b^1 v_{x_1} = m(x_1 + R + 1)^{m-2}\big[a^{11}(m-1) + b^1(x_1 + R + 1)\big] \ge m(x_1 + R + 1)^{m-2}\big[\theta(m-1) - \max_{\overline{\Omega}}|b^1|\,(2R+1)\big] > 0$$
if m is large, for example, $m > 1 + \max_{\overline{\Omega}}|b^1|\,(2R+1)/\theta$. □
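One can also test this choice of m numerically; a quick sketch (with sample values of θ, R, and the bound on |b¹|, chosen for illustration):

```python
# Sanity check of the barrier v(x) = (x1 + R + 1)^m from Step 2:
# Lv >= m (x1+R+1)^{m-2} [ theta(m-1) - max|b1| (2R+1) ] > 0 for large m.
import numpy as np

theta, R, b1_max = 0.1, 1.0, 5.0
m = 2 + b1_max*(2*R + 1)/theta       # m > 1 + max|b1|(2R+1)/theta

x1 = np.linspace(-R, R, 1001)        # Omega is contained in B_R(0)
v1  = m*(x1 + R + 1)**(m - 1)                 # v_{x1}
v11 = m*(m - 1)*(x1 + R + 1)**(m - 2)         # v_{x1 x1}
# worst case: a11 = theta (smallest allowed) and b1 = -b1_max
Lv_worst = theta*v11 - b1_max*v1
assert np.all(Lv_worst > 0)
```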
4.1. Hopf ’s lemma and strong maximum principles. In this section, we prove an im-
portant lemma, due to Hopf. With the aid of this lemma, we will prove a strong maximum
principle which says that a non-constant subsolution of a second-order elliptic operator cannot
attain its maximum at an interior point of a connected region.
Lemma 4.3 (Hopf’s lemma). Let u ∈ C 2 (Ω) ∩ C 1 ($\overline{\Omega}$). Suppose Lu ≥ 0 in Ω and there is x0 ∈ ∂Ω such that u(x0) > u(x) for all x ∈ Ω. Assume Ω satisfies the interior ball condition at x0, that is, there is an open ball B ⊂ Ω with x0 ∈ ∂B. Then $\frac{\partial u}{\partial \nu}(x_0) > 0$, where ν is the outer unit normal to B at x0.
Remark 4.4. If t > 0 is small, then x0 − tν(x0) ∈ Ω. Hence u(x0) − u(x0 − tν(x0)) > 0, and
$$\frac{\partial u}{\partial \nu}(x_0) = Du(x_0)\cdot\nu(x_0) = \lim_{t\to 0^+}\frac{u(x_0) - u(x_0 - t\nu(x_0))}{t} \ge 0.$$
Thus, the importance of Hopf ’s lemma is the strict inequality
$$\frac{\partial u}{\partial \nu}(x_0) > 0.$$
Theorem 4.5 (Strong maximum principle for subsolutions). Suppose u ∈ C 2 (Ω) ∩ C 1 (Ω)
where Ω is a connected, open, bounded domain. If Lu ≥ 0 in Ω and u attains its maximum
over Ω at an interior point, then u is a constant in Ω.
Proof of Theorem 4.5 using Lemma 4.3. Let M := maxΩ u, and
$$U := \{x \in \Omega \mid u(x) = M\}.$$
Suppose that U ≠ ∅ and that u is not a constant. Let
$$V := \{x \in \Omega \mid u(x) < M\} \ne \emptyset.$$
Then V is open. Choose y ∈ V such that
$$\operatorname{dist}(y, U) < \operatorname{dist}(y, \partial\Omega).$$

Let B denote the largest ball with center y whose interior lies in V. There is x0 ∈ U with x0 ∈ ∂B. Note that x0 ∈ Ω by the choice of y. Then u attains its maximum at the interior point x0 of Ω, so Du(x0) = 0. On the other hand, V satisfies the interior ball condition at x0. By Hopf’s lemma, we have $\frac{\partial u}{\partial \nu}(x_0) > 0$. This is a contradiction to
$$\frac{\partial u}{\partial \nu}(x_0) = Du(x_0)\cdot\nu = 0. \qquad\Box$$
Proof of Hopf ’s lemma, Lemma 4.3. We can assume B = Bρ (0). The idea is to turn u(x0 ) >
u(x) for all x ∈ Ω into a more quantitative version.
Suppose we find a ring R around x0 , say, R = Bρ (0) \ Bρ/2 (0), such that
(4.3) 0 ≥ −u(x0 ) + u(x) + εv(x) in R,
where ε > 0, and the function v satisfies
$$v(x_0) = 0, \qquad v \ge 0, \qquad \frac{\partial v}{\partial \nu}(x_0) < 0.$$
Note that
0 = −u(x0 ) + u(x0 ) + εv(x0 ) ≥ −u(x0 ) + u(x) + εv(x).
It follows that
$$\frac{\partial u}{\partial \nu}(x_0) + \varepsilon\,\frac{\partial v}{\partial \nu}(x_0) \ge 0,$$
and we are done since
$$\frac{\partial u}{\partial \nu}(x_0) \ge -\varepsilon\,\frac{\partial v}{\partial \nu}(x_0) > 0.$$
How to find v? Suppose we can find a C 2 function v such that
$$\text{(4.4)}\qquad Lv \ge 0 \ \text{in } R, \quad v = 0 \ \text{on } \partial B, \quad v \ge 0 \ \text{in } B, \quad \frac{\partial v}{\partial \nu}(x_0) < 0.$$
Then, by the continuity of u, we can find ε > 0 such that
u(x0 ) ≥ u(x) + εv(x) ∀x ∈ ∂Bρ/2 (0).
Therefore
u(x0 ) ≥ u(x) + εv(x) ∀x ∈ ∂R.
Thus
L(u + εv − u(x0 )) = Lu + εLv ≥ 0 in R.
By the weak maximum principle, Theorem 4.1,
$$u + \varepsilon v - u(x_0) \le \max_{\partial R}\big[u + \varepsilon v - u(x_0)\big] = 0 \quad \text{in } R.$$
Thus (4.3) is satisfied.


Candidates for v satisfying (4.4). Our first try is v(x) = ρ² − |x|². Then v = 0 on ∂B, v ≥ 0 in B, and
$$\frac{\partial v}{\partial \nu}(x_0) = -2x_0\cdot\frac{x_0}{\rho} = -2\rho < 0.$$
The problem with this v is that it is a bit concave. Thus, even when $b^i = 0$, we have
$$Lv = \sum_{i,j=1}^{n} a^{ij} v_{x_i x_j} = \sum_{i,j=1}^{n} a^{ij}(-2\delta_{ij}) = -2\sum_{i=1}^{n} a^{ii} \le -2n\theta < 0.$$

How to make v more convex? We can try vm = (ρ² − |x|²)^m. The problem with this function when m ≥ 2 is that it is “flat” near |x| = ρ. More precisely, $\frac{\partial v_m}{\partial \nu}(x_0) = 0$. By graphing vm, we see that the graph is not very flat when we move a bit away from |x| = ρ. This suggests adjusting vm a bit by using
$$[\rho^2 - (|x|/2)^2]^m - (3\rho^2/4)^m$$
or its multiple $(4\rho^2 - |x|^2)^m - (3\rho^2)^m$.
Our final choice of v: We will choose, for some large m to be determined,
$$v = (4\rho^2 - |x|^2)^m - (3\rho^2)^m.$$
To verify that it works, and to make the proof transparent, we work with the case $(a^{ij}) = (\delta_{ij})$, that is, A(x) ≡ In. We compute $v_{x_i} = -2m x_i (4\rho^2 - |x|^2)^{m-1}$ and
$$v_{x_i x_j} = m(4\rho^2 - |x|^2)^{m-2}\big[-2\delta_{ij}(4\rho^2 - |x|^2) + 4(m-1)x_i x_j\big].$$
Thus
$$Lv = m(4\rho^2 - |x|^2)^{m-2}\big[-2n(4\rho^2 - |x|^2) + 4(m-1)|x|^2 - 2\,b\cdot x\,(4\rho^2 - |x|^2)\big].$$
In R = Bρ(0) \ Bρ/2(0), we have |x| ≥ ρ/2. Therefore, Lv > 0 if m is large. Clearly, v = 0 on ∂B, v ≥ 0 in B, and
$$\frac{\partial v}{\partial \nu}(x_0) = Dv(x_0)\cdot\frac{x_0}{\rho} = -2m x_0(4\rho^2 - |x_0|^2)^{m-1}\cdot\frac{x_0}{\rho} = -2m\rho(3\rho^2)^{m-1} < 0.$$
Thus, v satisfies (4.4) and the proof of the Hopf lemma is complete. □
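A numerical sanity check of the final barrier (an illustration; we take sample values n = 3, ρ = 1, a drift bound, and m = 60):

```python
# Check that Lv > 0 on the ring rho/2 <= |x| <= rho for
# v = (4 rho^2 - |x|^2)^m - (3 rho^2)^m, with A = I_n.  From the computation:
# Lv = m q^{m-2} [ -2n q + 4(m-1)|x|^2 - 2 (b.x) q ],  q := 4 rho^2 - |x|^2.
import numpy as np

n, rho, m, b_max = 3, 1.0, 60, 2.0   # b_max: assumed bound on |b| on the ring
r = np.linspace(rho/2, rho, 1001)    # |x| ranges over the ring
q = 4*rho**2 - r**2                  # between 3 rho^2 and 3.75 rho^2
# worst case for the bracket: b.x = +b_max * r
bracket = -2*n*q + 4*(m - 1)*r**2 - 2*b_max*r*q
assert np.all(m * q**(m - 2) * bracket > 0)
```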

5. An application and questions


So far, we have only considered equations with zero right hand side. A simple application
of the weak maximum principle allows us to estimate the maximum of subsolutions in the
case of nonzero right hand side. Here is one example.
Lemma 5.1. Let f ∈ C($\overline{\Omega}$). Assume u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$) satisfies
$$Lu := \sum_{i,j=1}^{n} a^{ij} u_{x_i x_j} \ge -|f| \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega,$$
where A(x) = (aij(x))1≤i,j≤n ≥ θIn with θ > 0. Then there exists a constant C, depending only on Ω and θ, such that
$$\max_{\overline{\Omega}} u \le C\,\max_{\overline{\Omega}} |f|.$$
Proof. Let K := $\max_{\overline{\Omega}} |f|$. Note that $a^{ii} \ge \theta$ for each i. We compute
$$L\big(u + K|x|^2/(2n\theta)\big) = Lu + \frac{K}{2n\theta}\sum_{i,j=1}^{n} a^{ij}\,(|x|^2)_{x_i x_j} \ge -|f| + \frac{K}{2n\theta}\sum_{i=1}^{n} 2a^{ii}(x) \ge -|f| + K \ge 0.$$
By Theorem 4.1, u + K|x|²/(2nθ) attains its maximum value on ∂Ω. Hence, for all x ∈ Ω,
$$u(x) + K|x|^2/(2n\theta) \le \max_{x\in\partial\Omega}\big(u + K|x|^2/(2n\theta)\big) = \max_{x\in\partial\Omega} K|x|^2/(2n\theta) \le C(\Omega, \theta)\,K. \qquad\Box$$

The constant C in the proof of Lemma 5.1 is proportional to 1/θ so it is large when θ is
small. Can we do better? What happens to
$$\varepsilon u_{x_1 x_1} + \varepsilon^{-1} u_{x_2 x_2} \ge -1?$$
Can we find an estimate for u that is independent of ε? It turns out that the answer is yes. The idea is to refine the above estimate by noticing that
$$\sum_{i=1}^{n} a^{ii}(x) = \operatorname{trace}(A(x)) \ge n[\det A(x)]^{1/n}.$$

All we need now from the proof of Lemma 5.1 is a positive lower bound for det A(x). We
have the following lemma.
Lemma 5.2. Let Ω = B1(0) ⊂ Rn. Let A(x) = (aij(x)) be a continuous, symmetric, positive definite matrix in Ω with det A(x) > 1 for all x ∈ B1(0). Suppose that u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$) satisfies
$$Lu := \sum_{i,j=1}^{n} a^{ij} u_{x_i x_j} \ge -|f| \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega.$$
Then
$$u(x) \le \frac{1 - |x|^2}{2n}\,\max_{\overline{\Omega}} |f| \quad \text{for all } x \in \Omega.$$

Proof. By considering |f| + ε and then letting ε → 0, we can assume |f| > 0 in Ω. We compute
$$L\Big(u + \max_{\overline{\Omega}}|f|\,\frac{|x|^2 - 1}{2n}\Big) = Lu + \max_{\overline{\Omega}}|f|\sum_{i=1}^{n}\frac{a^{ii}}{n} \ge -|f| + \max_{\overline{\Omega}}|f|\,[\det A(x)]^{1/n} > 0.$$
Thus, from Step 1 in the proof of Theorem 4.1, we deduce that $u + \max_{\overline{\Omega}}|f|\,(|x|^2 - 1)/(2n)$ attains its maximum value on ∂Ω, which is 0. We are done. □
It should be emphasized that the smallest eigenvalue of A(x) in Lemma 5.2 might tend to 0 as x approaches the boundary ∂Ω. Another way to rephrase the condition on the coefficient matrix A(x) is that [det A(x)]−1 is bounded from above by 1. It turns out that we do not actually need this uniform bound. All we need is its average, in a suitable integral sense. More precisely, when f ≡ 1, all we need is that
$$\int_{\Omega}\frac{1}{\det A(x)}\,dx \le M$$
for some positive constant M . This follows from the Aleksandrov estimate in Theorem 6.3.
The proof of this estimate is based on geometric arguments to be presented in Section 6.
Hidden in all these is the Monge-Ampère equation
det D2 u = f.
We will not go deeper into this equation. The reader is invited to consult some books on the
subject such as [F, G, LMT]. All one needs to know for the next section is the following fact
on the change of variables in multiple integrals: if D2u ≥ 0 on an open set E, then for y = Du(x), the Jacobian determinant is det((uxi)xj)1≤i,j≤n = det D2u(x), and
$$|Du(E)| = \int_{Du(E)} dy = \int_{E} \det D^2 u(x)\,dx.$$

6. Aleksandrov-type estimates
The starting point is the following lemma.
Lemma 6.1. If u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$) with inf∂Ω u = 0, then
$$\text{(6.1)}\qquad -\inf_{\Omega} u \le \frac{\operatorname{diam}(\Omega)}{\omega_n^{1/n}}\Big(\int_{C} |\det D^2 u|\,dx\Big)^{1/n},$$
where C is the lower contact set
$$C = \{y \in \Omega \mid u(x) \ge u(y) + p\cdot(x - y) \ \text{for all } x \in \Omega, \ \text{for some } p = p(y) \in \mathbb{R}^n\}.$$
Proof. Note that, if y ∈ C, and u(x) ≥ u(y) + p · (x − y) for all x ∈ Ω, then p = Du(y), and
we call l(x) := u(y) + p · (x − y) a supporting hyperplane to the graph of u at y.
For (6.1), from inf ∂Ω u = 0, it suffices to consider the case where the minimum of u on Ω
is attained at x0 ∈ Ω with u(x0 ) < 0. Let D = diam(Ω). We first prove that
$$\text{(6.2)}\qquad B_{|u(x_0)|/D}(0) \subset Du(C).$$

Indeed, if |p| < |u(x0 )|/D, then the affine function l(x) := u(x0 ) + p · (x − x0 ) satisfies:
l(x0 ) = u(x0 ) while on ∂Ω,
l(x) ≤ −|u(x0 )| + |p||x − x0 | < 0 ≤ u(x).
Thus, u − l attains its minimum value m ≤ 0 on Ω at some point y ∈ Ω. Therefore, l(x) + m
is a supporting hyperplane to the graph of u at y. This shows that y ∈ C. Note that
D(u − l)(y) = 0 and Dl = p. Therefore, p = Du(y) and p ∈ Du(C).
Since u ∈ C 2 (Ω), we have D2u(y) ≥ 0 when y ∈ C. Now, from (6.2), we have
$$\omega_n\Big(\frac{|u(x_0)|}{D}\Big)^n \le |Du(C)| = \int_{Du(C)} dy \le \int_{C} \det D^2 u\,dx = \int_{C} |\det D^2 u|\,dx.$$

Thus (6.1) is proved. 
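A one-dimensional illustration of Lemma 6.1 (a worked example, added for concreteness): take n = 1, Ω = (−1, 1), and u(x) = x² − 1. Then inf∂Ω u = 0, −infΩ u = 1, diam(Ω) = 2, and ω1 = |B1(0)| = 2. Since u is convex, the lower contact set is C = Ω, and
$$\frac{\operatorname{diam}(\Omega)}{\omega_1}\int_{C} |u''|\,dx = \frac{2}{2}\cdot 4 = 4 \ge 1 = -\inf_{\Omega} u,$$
so (6.1) holds, with room to spare.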


An equivalent form of Lemma 6.1 is the following.
Lemma 6.2. For u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$), we have
$$\sup_{\Omega} u \le \sup_{\partial\Omega} u + \frac{\operatorname{diam}(\Omega)}{\omega_n^{1/n}}\Big(\int_{\Gamma^+} |\det D^2 u|\,dx\Big)^{1/n},$$
where Γ+ is the upper contact set
$$\Gamma^+ = \{y \in \Omega \mid u(x) \le u(y) + p\cdot(x - y) \ \text{for all } x \in \Omega, \ \text{for some } p = p(y) \in \mathbb{R}^n\}.$$
Proof. We apply Lemma 6.1 to û := −(u − sup∂Ω u). 
We are now ready to prove the Aleksandrov maximum principle [A1, A2].
Theorem 6.3 (Aleksandrov maximum principle). Let A(x) = (aij(x))1≤i,j≤n be a positive definite matrix. Let u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$) satisfy
$$Lu := \sum_{i,j=1}^{n} a^{ij} u_{x_i x_j} \ge -|f| \quad \text{in } \Omega.$$
Then
$$\sup_{\Omega} u \le \sup_{\partial\Omega} u + \frac{\operatorname{diam}(\Omega)}{n\,\omega_n^{1/n}}\,\Big\|\frac{|f|}{(\det A)^{1/n}}\Big\|_{L^n(\Omega)}.$$

Proof. Since u ∈ C 2 (Ω), on the upper contact set Γ+ we have D2u ≤ 0. Using Lemma 3.12 for B = −D2u, A = (aij)1≤i,j≤n, we have on Γ+
$$|\det D^2 u| = \det(-D^2 u) \le \frac{1}{\det A}\Big(\!-\sum_{i,j=1}^{n} a^{ij} u_{x_i x_j}/n\Big)^n \le \frac{1}{\det A}\,(|f|/n)^n.$$
Hence
$$\Big(\int_{\Gamma^+} |\det D^2 u|\,dx\Big)^{1/n} \le \Big(\int_{\Gamma^+}\frac{1}{\det A}\,(|f|/n)^n\,dx\Big)^{1/n} = \frac{1}{n}\,\Big\|\frac{|f|}{(\det A)^{1/n}}\Big\|_{L^n(\Gamma^+)}.$$
Now, applying Lemma 6.2, we obtain the Aleksandrov estimate in Theorem 6.3. □
In the presence of lower-order terms, the estimate analogous to that of Theorem 6.3 is called the Aleksandrov-Bakelman-Pucci maximum principle. We refer to the books [GT, HL] for this estimate and also to the original papers [A1, A2, Ba2, P].

7. Parabolas and Cones


7.1. Comparison with cones. A fundamental, but hidden idea in the proof of Lemma 6.1
is the comparison with cones. This was encoded in (6.2). To illustrate, we will rewrite (6.2)
more geometrically, in the simple, but crucial setting of Ω being convex, and u also being
convex.
Lemma 7.1. Consider a convex function u ∈ C(Ω) with u = 0 on ∂Ω. For any x0 ∈ Ω, let
v be the convex function whose graph is the cone with vertex (x0 , u(x0 )) and the base Ω, with
v = 0 on ∂Ω. Then the set of slopes ∂v(Ω) of supporting hyperplanes to the graph of v contains $B_{|u(x_0)|/\operatorname{diam}(\Omega)}(0)$, but ∂v(Ω) is contained in the set of slopes ∂u(Ω) of supporting hyperplanes to the graph of u.
All the terminology in Lemma 7.1 will be explained below.
Definition 7.2 (The normal mapping/subdifferential). The normal mapping ∂u(x0 ) of u :
Ω → R at x0 ∈ Ω is the set of slopes of supporting hyperplanes to the graph of u at (x0 , u(x0 )):
∂u(x0 ) = {p ∈ Rn : u(x) ≥ u(x0 ) + p · (x − x0 ) for all x ∈ Ω}.
For a subset E ⊂ Ω, we set
∂u(E) = ∪x∈E ∂u(x).
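For instance (a standard example, included for concreteness): for u(x) = |x| on Ω = B1(0), whose graph is a cone with vertex at the origin, one checks that ∂u(0) = $\overline{B_1(0)}$, since |x| ≥ p · x for all x ∈ Ω exactly when |p| ≤ 1; meanwhile, ∂u(x) = {x/|x|} for x ≠ 0, consistent with Lemma 7.3 below.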
Lemma 7.3. Assume that u : Ω → R and Du(x0) exists at x0 ∈ Ω. If ∂u(x0) ≠ ∅, then ∂u(x0) = {Du(x0)}.
Proof. Suppose p = (p1, · · · , pn) ∈ ∂u(x0). Note that Br(x0) ⊂ Ω for some r > 0 small, and u(x) ≥ u(x0) + p · (x − x0) for all x ∈ Br(x0). Thus, for each i = 1, · · · , n and |t| < r, we have u(x0 + tei) ≥ u(x0) + t pi. It follows that
$$\lim_{t\to 0^+}\frac{u(x_0 + te_i) - u(x_0)}{t} \ge p_i \quad\text{and}\quad \lim_{t\to 0^-}\frac{u(x_0 + te_i) - u(x_0)}{t} \le p_i.$$
Since uxi(x0) exists, one has uxi(x0) = pi. Hence p = Du(x0), and ∂u(x0) = {Du(x0)}. □
Remark 7.4. If u has a minimum value at x0 ∈ Ω, then clearly 0 ∈ ∂u(x0). On the other hand, if ∂u(x0) ≠ ∅, then ∂u(x0) is a closed convex set.

Lemma 7.5 (Maximum principle for normal mappings). Let Ω ⊂ Rn be a bounded open set
and u, v ∈ C(Ω). If u = v on ∂Ω and v ≥ u in Ω then
∂v(Ω) ⊂ ∂u(Ω).
Proof. Let p ∈ ∂v(x0) for some x0 ∈ Ω. Then p is the slope of a supporting hyperplane to the graph of v at (x0, v(x0)), that is,
$$\text{(7.1)}\qquad v(x) \ge l(x) := v(x_0) + p\cdot(x - x_0) \quad \text{for all } x \in \Omega.$$
We slide down this hyperplane to obtain a supporting hyperplane for the graph of u: let m := $\min_{\overline{\Omega}}(u - l)$. Since u(x0) − l(x0) = u(x0) − v(x0) ≤ 0, while u − l = v − l ≥ 0 on ∂Ω, the minimum m ≤ 0 is attained at some y ∈ Ω. Then u(x) ≥ l(x) + m for all x ∈ Ω, with equality at y, so p ∈ ∂u(y). Hence p ∈ ∂u(Ω). □
Definition 7.6 (Convex set). A set E ⊂ Rn is called convex if it contains the line segment
joining any two points in it, that is,
tx + (1 − t)y ∈ E, for all x, y ∈ E, and t ∈ [0, 1].
Definition 7.7 (Convex function). Let Ω ⊂ Rn be an open set. A function u : Ω ⊂ Rn → R
is convex if for all 0 ≤ t ≤ 1, and any x, y ∈ Ω such that tx + (1 − t)y ∈ Ω we have
u(tx + (1 − t)y) ≤ tu(x) + (1 − t)u(y).
Examples of convex functions include |x| and |x|2 .
Proof of Lemma 7.1. Since u is convex, v ≥ u in Ω. By the maximum principle in Lemma
7.5, ∂v(Ω) ⊂ ∂u(Ω). Observe that
(C1) ∂v(Ω) = ∂v(x0), and thus ∂v(Ω) is convex.
(C2) ∂v(Ω) contains $B_{|u(x_0)|/\operatorname{diam}(\Omega)}(0)$.

To see (C1), we note that if p ∈ ∂v(Ω), then there is x1 ∈ Ω such that p ∈ ∂v(x1). It suffices to consider the case x1 ≠ x0. Since the graph of v is a cone, v(x1) + p · (x − x1) is a supporting hyperplane to the graph of v at (x0, v(x0)), that is, p ∈ ∂v(x0).
For (C2), we note that, since the graph of v is a cone with vertex (x0, v(x0)) = (x0, u(x0)) and base Ω, p ∈ ∂v(x0) if and only if v(x) ≥ v(x0) + p · (x − x0) for all x ∈ ∂Ω. Thus (C2) is straightforward. □
In the proof of Lemma 7.1, we also observe the following:
(C3) There is $p_0 \in \partial v(\Omega)$ such that $|p_0| = -\dfrac{u(x_0)}{\operatorname{dist}(x_0, \partial\Omega)}$.
Indeed, take x1 ∈ ∂Ω such that |x1 − x0| = dist(x0, ∂Ω). Then $p_0 = -u(x_0)\,\dfrac{x_1 - x_0}{|x_1 - x_0|^2}$ is the desired slope. Indeed, for any x ∈ ∂Ω, $(x - x_0)\cdot\dfrac{x_1 - x_0}{|x_1 - x_0|}$ is the scalar projection of x − x0 onto the ray from x0 to x1. Using the convexity of Ω, we find
$$(x - x_0)\cdot\frac{x_1 - x_0}{|x_1 - x_0|} \le |x_1 - x_0|,$$
and hence, from the formula for p0, we find that for all x ∈ ∂Ω,
$$0 = v(x) = u(x_0) + |p_0||x_1 - x_0| \ge v(x_0) + p_0\cdot(x - x_0).$$
Therefore p0 ∈ ∂v(x0) as claimed.
Therefore p0 ∈ ∂v(x0 ) as claimed.
From (C2) and (C3), we see that ∂v(Ω) contains the convex hull of $B_{|u(x_0)|/D}(0)$ and p0. This convex hull has measure at least
$$\frac{\omega_{n-1}}{n}\Big(\frac{|u(x_0)|}{D}\Big)^{n-1}|p_0| = \frac{\omega_{n-1}}{n[\operatorname{diam}(\Omega)]^{n-1}\operatorname{dist}(x_0, \partial\Omega)}\,|u(x_0)|^n.$$

Since |∂v(Ω)| ≤ |∂u(Ω)|, we find
$$\frac{\omega_{n-1}}{n[\operatorname{diam}(\Omega)]^{n-1}\operatorname{dist}(x_0, \partial\Omega)}\,|u(x_0)|^n \le |\partial u(\Omega)|.$$
This leads to the following theorem, due to Aleksandrov [A3], which is of fundamental im-
portance in the theory of Monge-Ampère equations.
Theorem 7.8 (Aleksandrov’s maximum principle). If Ω ⊂ Rn is a bounded, open and convex
set, and u ∈ C(Ω) is a convex function with u = 0 on ∂Ω, then
|u(x0 )| ≤ Cn [diam(Ω)](n−1)/n [dist(x0 , ∂Ω)]1/n |∂u(Ω)|1/n
for all x0 ∈ Ω where Cn is a constant depending only on the dimension n.
7.2. Comparison with Parabolas and Cones. The estimate in Theorem 7.8 should be
compared with the one-sided estimate in Lemma 5.2. On the right-hand side, both have quantities that tend to zero near the boundary ∂Ω; the quantity in Lemma 5.2 vanishes at a faster rate, though. However, their proofs have totally different mechanisms. In Lemma 5.2, we compare u with the parabola (2n)−1 (1 − |x|2) maxΩ |f|, while in Theorem 7.8, we compare
u with a function v whose graph is the cone with vertex (x0 , u(x0 )) and the base Ω. We
call this comparison with cones. To summarize: for elliptic equations whose coefficients are comparable to those of the Laplace equation, we can use comparison with parabolas, while for very degenerate equations such as the Monge-Ampère equation, we can think of comparing with cones.
In reality, not all equations are the Laplace equation or the Monge-Ampère equation, so a natural question is to find effective methods to obtain quantitative information about solutions of equations lying between these two extremes. In 2007, Savin [S] developed a method, called sliding parabolas, to obtain interesting quantitative information such as the Harnack inequality, measure estimates, etc., for general elliptic equations.
8. The coefficients and Pucci conjecture
One of the basic questions motivated by Lemma 5.2 and Theorem 6.3 is the following problem.
Problem 8.1. Assume Ω is a bounded, smooth domain. Given a subset Γ of positive definite matrices A(x) = (aij(x))1≤i,j≤n, find the smallest exponent p such that for u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$) satisfying
$$\sum_{i,j=1}^{n} a^{ij} u_{x_i x_j} \ge -|f| \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega,$$
we have
$$\max_{\overline{\Omega}} u \le C(p, \Gamma, \Omega)\,\|f\|_{L^p(\Omega)}.$$

Example 8.2. If Γ = {det A ≥ 1}, then from the Aleksandrov maximum principle in Theorem 6.3, we know that the smallest exponent p satisfies p ≤ n. In fact, p = n, as there are examples by Gilbarg and Serrin [GS] showing that the estimate is false if p < n.
Example 8.3. If Γ = {In }, then we can take any p > n/2. We do not give the proof here.
However, we just indicate that if, instead of ∆u ≥ −|f |, we have ∆u = f , then u ∈ W 2,p (Ω)
provided that f ∈ Lp (Ω). Since W 2,p (Ω) embeds into C(Ω) if p > n/2, we have the desired
estimate in this range of p.

Related to Problem 8.1, Pucci [P] made a very interesting conjecture in 1966. The simple
form is the following conjecture.
Conjecture 8.4 (Pucci). Assume that In ≤ A(x) = (aij(x))1≤i,j≤n ≤ (n − 1)In. Then, for u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$) satisfying
$$\sum_{i,j=1}^{n} a^{ij} u_{x_i x_j} \ge -|f| \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega,$$
and p > n/2, we have
$$\max_{\overline{\Omega}} u \le C(p, n, \Omega)\,\|f\|_{L^p(\Omega)}.$$

Conjecture 8.4 says that, in terms of the maximum principle, all operators $\sum_{i,j=1}^{n} a^{ij} u_{x_i x_j}$ whose coefficient matrices have eigenvalues oscillating between 1 and (n − 1) behave like the Laplace operator! This is absolutely striking and beautiful!
Conjecture 8.4 is completely open for odd n greater than 1. Trudinger [T] proved the conjecture for n even. His proof relies on new maximum principles, obtained by Kuo and Trudinger [KT], for linear elliptic equations whose coefficients lie in the dual cone of an appropriate Gårding cone Γk. The Gårding cone Γk is the natural cone for the k-Hessian equation Sk(D2u) = f (see [W]), where Sk(D2u) is the k-th elementary symmetric function of the eigenvalues of D2u. For example, S1(D2u) = ∆u, Sn(D2u) = det D2u, and
$$S_2(D^2 u) = \frac{1}{2}\Big[\Big(\sum_{i=1}^{n} u_{x_i x_i}\Big)^2 - \sum_{i,j=1}^{n} u_{x_i x_j}^2\Big].$$
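The matrix formula for S2 is easily checked against the eigenvalue definition; a quick numerical sketch (illustration only):

```python
# Check that (1/2)[ (sum_i u_ii)^2 - sum_{i,j} u_ij^2 ] equals the second
# elementary symmetric function of the eigenvalues of a symmetric matrix.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
M = (M + M.T)/2                       # random symmetric 4x4 matrix
lam = np.linalg.eigvalsh(M)

S2_eigs = sum(lam[i]*lam[j] for i in range(4) for j in range(i + 1, 4))
S2_matrix = 0.5*(np.trace(M)**2 - np.sum(M**2))
assert np.isclose(S2_eigs, S2_matrix)
```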

Below, we describe Kuo-Trudinger’s new maximum principles.


Let n ≥ 2 and 1 ≤ k ≤ n. We denote the k-th elementary symmetric function of n variables µ = (µ1, · · · , µn) ∈ Rn by
$$S_k(\mu) := \sum_{1\le i_1 < \cdots < i_k \le n} \mu_{i_1}\cdots\mu_{i_k}.$$
Let Γk be the open symmetric convex cone in Rn, with vertex at the origin, given by
$$\Gamma_k = \{\mu = (\mu_1, \cdots, \mu_n) \in \mathbb{R}^n \mid S_j(\mu) > 0 \ \forall j = 1, \cdots, k\}.$$
For a symmetric matrix A = (aij )1≤i,j≤n with eigenvalues λ(A) = (λ1 (A), · · · , λn (A)), we
define
Sk (A) = Sk (λ(A)),
and we write A ∈ Γk if λ(A) ∈ Γk . Following Kuo-Trudinger [KT], let Γ∗k be the dual cone of
the Gårding cone Γk , given by
(8.1) Γ∗k = {λ ∈ Rn | λ · µ ≥ 0 ∀µ ∈ Γk }.
Clearly Γ∗k ⊂ Γ∗l for k ≤ l.
Example 8.5. Γ1 is the half-space:
$$\Gamma_1 = \Big\{\mu \in \mathbb{R}^n \ \Big|\ \sum_{i=1}^{n} \mu_i > 0\Big\}.$$

Example 8.6. Γn is the positive cone:


Γn = {µ ∈ Rn | µi > 0, i = 1, · · · , n}.

For λ ∈ Γ∗k, denote
$$\rho_k^*(\lambda) = \inf\Big\{\frac{\lambda\cdot\mu}{n} \ \Big|\ \mu \in \Gamma_k,\ S_k(\mu) \ge \binom{n}{k}\Big\}.$$
For a symmetric matrix A = (aij)1≤i,j≤n with eigenvalues λ(A) = (λ1(A), · · · , λn(A)), we write A ∈ Γ∗k if λ(A) ∈ Γ∗k and define
$$\rho_k^*(A) = \rho_k^*(\lambda(A)).$$
Example 8.7. λ ∈ Γ∗1 if and only if there is θ ≥ 0 such that λ = θ(1, · · · , 1). In this case,
ρ∗1 (λ) = θ.
Example 8.8. $\Gamma_n^* = \overline{\Gamma_n}$. If λ = (λ1, · · · , λn) ∈ Γ∗n, then λi ≥ 0 and $\rho_n^*(\lambda) = \big(\prod_{i=1}^{n}\lambda_i\big)^{1/n}$.

Example 8.9.
$$\Gamma_2^* = \Big\{\lambda \in \mathbb{R}^n : |\lambda| \le \frac{1}{\sqrt{n-1}}\sum_{i=1}^{n}\lambda_i\Big\}, \qquad \rho_2^*(\lambda) = \frac{1}{\sqrt{n}}\Big\{\Big(\sum_{i=1}^{n}\lambda_i\Big)^2 - (n-1)|\lambda|^2\Big\}^{1/2}.$$

Theorem 8.10 (Kuo-Trudinger). Let u ∈ C 2 (Ω) ∩ C($\overline{\Omega}$) satisfy
$$\sum_{i,j=1}^{n} a^{ij} u_{x_i x_j} \ge -|f| \quad \text{in } \Omega, \qquad u \le 0 \quad \text{on } \partial\Omega,$$
where A(x) = (aij(x))1≤i,j≤n ∈ Γ∗k with ρ∗k(A) > 0. Then we have the estimate
$$\sup_{\Omega} u \le C(n, q, \Omega)\,\Big\|\frac{|f|}{\rho_k^*(A)}\Big\|_{L^q(\Omega)}$$
for q = k if k > n/2, and for any q > n/2 if k ≤ n/2.


From the perspective of the maximum principle, second order elliptic operators whose coefficients lie in the cone Γ∗k are more like k-Hessian operators! Theorem 6.3 is a special
case of Theorem 8.10 when q = n. In the proof of Theorem 6.3, we used Lemma 3.12. The
appearance of the determinant in this lemma reminds us of the Monge-Ampère operator
Sn (D2 u) = det D2 u. Thus, one can expect that in the proof of Theorem 8.10, a similar
k-Hessian version of Lemma 3.12 was used. Indeed, we have the following result [KT].
Lemma 8.11. For matrices B = (bij)1≤i,j≤n ∈ Γk and A = (aij)1≤i,j≤n ∈ Γ∗k, k = 1, · · · , n, we have
$$[S_k(B)]^{1/k}\,\rho_k^*(A) \le \frac{1}{n}\binom{n}{k}^{1/k}\operatorname{trace}(AB).$$
One can appreciate the inequality in Lemma 8.11 in the simple case n = 3 and k = 2. The matrix A there is always nonnegative definite, but B need not be, as the diagonal matrix with entries −1, 2, 4 belongs to Γ2 in R3.
In the proof of Lemma 6.1 leading to Theorem 6.3, some sort of comparison with cones was
used. Cones such as |x| are fundamental solutions of the Monge-Ampère equation. When
k > n/2, a proof of Theorem 8.10 can be carried out using comparison with fundamental
solutions of the k-Hessian equation. For more details, see [KT].

Proof of Conjecture 8.4 for n even. Assume that In ≤ A(x) = (aij (x))1≤i,j≤n ≤ (n − 1)In
where n is even. Then A(x) ∈ Γ∗n/2 . The conjecture now follows from Theorem 8.10. 
Remark 8.12. It can be verified that when n = 4,
$$\Gamma_2^* = \Big\{\lambda \in \mathbb{R}^4 : |\lambda| \le \frac{1}{\sqrt{3}}\sum_{i=1}^{4}\lambda_i\Big\}$$
and
$$(1, 1, 1, 3) \in \Gamma_2^*.$$
However, when n = 3, we have (1, 1, 2) ∉ Γ∗1 but (1, 1, 2) ∈ Γ∗2.
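These membership claims are easy to check with the description of Γ₂* in Example 8.9; a quick sketch (illustration only):

```python
# Check of the memberships in Remark 8.12, using Example 8.9:
# lambda is in Gamma_2^*  iff  |lambda| <= (n-1)^{-1/2} * sum(lambda).
import numpy as np

def in_gamma2_star(lam):
    lam = np.asarray(lam, dtype=float)
    n = lam.size
    return np.linalg.norm(lam) <= lam.sum()/np.sqrt(n - 1) + 1e-12

assert in_gamma2_star([1, 1, 1, 3])   # n = 4: |lam| = sqrt(12) = 6/sqrt(3)
assert in_gamma2_star([1, 1, 2])      # n = 3: sqrt(6) <= 4/sqrt(2)
# (1,1,2) is not in Gamma_1^*: by Example 8.7, Gamma_1^* consists only of
# nonnegative multiples of (1, ..., 1).
```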
9. A basic calculus question
We have seen the importance of the simple calculus Lemma 3.8 in the proofs of maximum principles for C 2 solutions of several PDEs. Many PDEs admit non-C 2 solutions. For example, the infinity Laplacian equation
$$u_{x_1}^2 u_{x_1 x_1} + 2u_{x_1}u_{x_2}u_{x_1 x_2} + u_{x_2}^2 u_{x_2 x_2} = 0 \quad \text{in } \mathbb{R}^2$$
has the non-C 2 solution
$$u(x_1, x_2) = x_1^{4/3} - x_2^{4/3}.$$
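This can be verified symbolically away from the coordinate axes; a minimal sketch using sympy (we take x1, x2 > 0 to avoid branch issues with the fractional powers):

```python
# Check that u = x1^{4/3} - x2^{4/3} satisfies the infinity Laplacian
# equation where it is smooth; u is C^1 but not C^2 across {x1 x2 = 0},
# since u_{x1 x1} is comparable to x1^{-2/3}.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
u = x1**sp.Rational(4, 3) - x2**sp.Rational(4, 3)

ux1, ux2 = sp.diff(u, x1), sp.diff(u, x2)
inf_lap = (ux1**2*sp.diff(u, x1, 2)
           + 2*ux1*ux2*sp.diff(u, x1, x2)
           + ux2**2*sp.diff(u, x2, 2))
assert sp.simplify(inf_lap) == 0
```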
Thus, an important question is the following.
Question 9.1. Find the optimal smoothness condition on a continuous function u such that
at its local maximum point x0 , we have
Du(x0 ) = 0, and D2 u(x0 ) ≤ 0,
or some interesting variants of these.
Of course, one would like u to have two derivatives in some sense. Natural spaces for such u include the Sobolev spaces W 2,p (Ω) (1 ≤ p ≤ ∞), where
$$W^{2,p}(\Omega) = \{u \in L^p(\Omega) : Du \in L^p(\Omega),\ D^2u \in L^p(\Omega)\}.$$
In 1967, Bony [Bo] proved the following maximum principle in Sobolev spaces.
Theorem 9.2 (Bony). If u ∈ W 2,p (Ω), where p > n, has a local maximum at x0 ∈ Ω, then
$$\operatorname*{ess\,lim\,inf}_{x\to x_0} |Du(x)| = 0 \quad \text{and} \quad \operatorname*{ess\,lim\,sup}_{x\to x_0} D^2u(x) \le 0.$$

The first condition, which is in fact Du(x0) = 0, is standard, as W 2,p (Ω) embeds into C 1 (Ω) when p > n. Note that Theorem 9.2 applies to $u(x) = -|x|^{20/11} \in W^{2,5n}(B_1(0))$ at its local maximum 0 in the unit ball B1(0).
Theorem 9.2 raises a new question: what is the optimal range of p for a Bony-type maximum principle to hold? A satisfactory answer was found 15 years later by Lions [L]. Before stating Lions’ theorem, we note that p cannot be less than n. Indeed, the function u(x) = −|x| satisfies −|x| ∈ W 2,p (B1(0)) for all p < n but −|x| ∉ W 2,n (B1(0)), since $\|D^2|x|\|$ is comparable to |x|−1. However, at the local maximum point 0 in the unit ball B1(0), we have ess lim inf x→0 |Du(x)| = 1.
When p = n, Lions [L] showed that a Bony-type maximum principle holds. This is the
content of the following theorem.

Theorem 9.3 (Lions). If u ∈ W 2,n (Ω) has a local maximum at x0 ∈ Ω, then
$$\operatorname*{ess\,lim\,inf}_{x\to x_0} |Du(x)| = 0 \quad \text{and} \quad \operatorname*{ess\,lim\,sup}_{x\to x_0} D^2u(x) \le 0.$$

We briefly mention a very interesting application of Theorem 9.3, following [L], to viscosity
solutions of fully nonlinear elliptic equations
(9.1) F (D2 u, Du, u, x) = 0.
Here we assume that F (A, p, t, x) is continuous in its arguments and satisfies the ellipticity
condition
F (A, p, t, x) ≥ F (B, p, t, x) for all matrices A ≥ B.
There is a theory of viscosity solutions u ∈ C(Ω) to (9.1); see [CL, CC].
Definition 9.4. We say that u ∈ C(Ω) is a viscosity solution of (9.1) if it satisfies for all
ϕ ∈ C 2 (Ω) the following conditions:
(i) (viscosity subsolution) at any local maximum point x0 ∈ Ω of u − ϕ, one has
F (D2 ϕ(x0 ), Dϕ(x0 ), u(x0 ), x0 ) ≥ 0.
(ii) (viscosity supersolution) at any local minimum point x0 ∈ Ω of u − ϕ, one has
F (D2 ϕ(x0 ), Dϕ(x0 ), u(x0 ), x0 ) ≤ 0.
A reasonable question is the following:
If u ∈ C(Ω) satisfies (9.1) a.e., is it a viscosity solution?
Applying Theorem 9.3, we have the following answer:
Theorem 9.5. If u ∈ $W^{2,n}_{\mathrm{loc}}(\Omega)$ satisfies (9.1) a.e., then it is also a viscosity solution!
Proof. We verify that u is a viscosity subsolution; the proof that u is a viscosity supersolution is similar. Let ϕ ∈ C 2 (Ω). Then v := u − ϕ ∈ $W^{2,n}_{\mathrm{loc}}(\Omega)$ satisfies
$$F(D^2v + D^2\varphi, Dv + D\varphi, u(x), x) = 0 \quad \text{a.e.}$$
If x0 ∈ Ω is a local maximum point of u − ϕ = v, then, by Theorem 9.3,
$$\operatorname*{ess\,lim\,inf}_{x\to x_0} |Dv(x)| = 0 \quad \text{and} \quad \operatorname*{ess\,lim\,sup}_{x\to x_0} D^2v(x) \le 0.$$
By the ellipticity and continuity of F, we easily deduce
$$F(D^2\varphi(x_0), D\varphi(x_0), u(x_0), x_0) \ge 0.$$
Thus u is a viscosity subsolution. □
The result in Theorem 9.5 is in some sense optimal: if the integrability exponent n is replaced by p < n, then the conclusion of Theorem 9.5 does not hold. Let us consider F(A, p, t, x) = 1 − |p|. Then (9.1) becomes the eikonal equation
$$\text{(9.2)}\qquad 1 - |Du(x)| = 0.$$
We see that both u(x) = −|x| and v(x) = |x| satisfy (9.2) a.e., but only one of them is a viscosity solution!
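Indeed (a quick check against Definition 9.4, with F(A, p, t, x) = 1 − |p|): taking ϕ ≡ 0, the function v − ϕ = |x| has a local minimum at 0, but F(0, 0, v(0), 0) = 1 > 0, so v = |x| fails the supersolution test. For u = −|x|, any ϕ ∈ C 2 with a local maximum of u − ϕ at 0 satisfies |Dϕ(0)| ≤ 1, so the subsolution test holds there; at local minimum points x0 ≠ 0 of u − ϕ, one has |Dϕ(x0)| = |Du(x0)| = 1, and no C 2 function can touch u from below at the origin. Thus u = −|x| is the viscosity solution.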

References
[A1] Aleksandrov, A. D. Certain estimates for the Dirichlet problem. Dokl. Akad. Nauk SSSR 134 (1960)
1001–1004 (Russian); translated as Soviet Math. Dokl. 1 (1961) 1151–1154.
[A2] Aleksandrov, A. D. Uniqueness conditions and bounds for the solution of the Dirichlet problem.
(Russian. English summary) Vestnik Leningrad. Univ. Ser. Mat. Meh. Astronom. 18 (1963) no. 3,
5–29. English translation in Amer. Math.Soc. Transl. (2) 68 (1968), 89–119.
[A3] Aleksandrov, A. D. Majorants of solutions of linear equations of order two. Vestnik Leningrad. Univ.
21 (1966), 5–25 (Russian). English translation in Amer. Math.Soc. Transl. (2) 68 (1968), 120–143.
[Ba2] Bakel’man, I. Ja. On the theory of quasilinear elliptic equations. (Russian) Sibirsk. Mat. Ž. 2 (1961)
179–186.
[Bo] Bony, J.-M. Principe du maximum dans les espaces de Sobolev. C. R. Acad. Sci. Paris Sér. A-B
265 (1967), A333–A336.
[B] Brendle, S. The isoperimetric inequality for a minimal submanifold in Euclidean space. J. Amer.
Math. Soc. 34 (2021), no. 2, 595–603.
[Br] Brezis, H. Functional analysis, Sobolev spaces and partial differential equations.
[CC] Caffarelli, L. A.; Cabré, X. Fully nonlinear elliptic equations. American Mathematical Society Col-
loquium Publications, volume 43, 1995.
[CL] Crandall, M. G.; Lions, P-L. Viscosity solutions of Hamilton-Jacobi equations. Trans. Amer. Math.
Soc. 277 (1983), no. 1, 1–42.
[E] Evans, L. C. Partial differential equations. Second edition. Graduate Studies in Mathematics, 19.
American Mathematical Society, Providence, RI, 2010.
[F] Figalli, A. The Monge-Ampère equation and its applications. Zurich Lectures in Advanced Mathe-
matics. European Mathematical Society (EMS), Zürich, 2017.
[GNN] Gidas, B.; Ni, W. M.; Nirenberg, L. Symmetry and related properties via the maximum principle.
Comm. Math. Phys. 68 (1979), no. 3, 209–243.
[GS] Gilbarg, D.; Serrin, J. On isolated singularities of solutions of second order elliptic differential
equations. J. Analyse Math. 4 (1955/56), 309–340.
[GT] Gilbarg, D.; Trudinger, N.S. Elliptic partial differential equations of second order, Reprint of the
1998 edition. Classics in Mathematics. Berlin: Springer, 2001.
[G] Gutiérrez, C. E. The Monge-Ampère equation. Second edition. Progress in Nonlinear Differential
Equations and their Applications, 89. Birkhaüser, Boston, 2016.
[HL] Han, Q.; Lin, F. H. Elliptic partial differential equations. 2nd ed. Courant Lecture Notes in Math-
ematics, vol. 1. Courant Institute of Mathematical Sciences, New York; American Mathematical
Society, Providence, RI, 2011.
[HJ] Horn, R. A.; Johnson, C. R. M atrix analysis. Second edition. Cambridge University Press, Cam-
bridge, 2013.
[KS1] Krylov, N. V.; Safonov, M. V. An estimate for the probability of a diffusion process hitting a set of
positive measure. (Russian) Dokl. Akad. Nauk SSSR 245 (1979), no. 1, 18–20.
[KS2] Krylov, N. V.; Safonov, M. V. A property of the solutions of parabolic equations with measurable
coefficients. (Russian) Izv. Akad. Nauk SSSR Ser. Mat. 44 (1980), no. 1, 161–175, 239.
[KT] Kuo, H.J.; Trudinger, N. S. New maximum principles for linear elliptic equations. Indiana Univ.
Math. J. 56 (2007), no. 5, 2439–2452.
[LMT] Le, N. Q.; Mitake, H.; Tran, H. V. Dynamical and geometric aspects of Hamilton-Jacobi and lin-
earized Monge-Ampère equations–VIASM 2016. Edited by Mitake and Tran. Lecture Notes in Math-
ematics, 2183. Springer, Cham, 2017.
[Li] Li, Y. Y. The work of Louis Nirenberg. Proceedings of the International Congress of Mathematicians.
Volume I, 127–137, Hindustan Book Agency, New Delhi, 2010.
[L] Lions, P.-L. A remark on Bony maximum principle. Proc. Amer. Math. Soc. 88 (1983), no. 3,
503–508.
[Pg] Pogorelov, A. V. The Minkowski multidimensional problem. Translated from the Russian by
Vladimir Oliker. Introduction by Louis Nirenberg. Scripta Series in Mathematics. V. H. Winston
& Sons, Washington, D.C.; Halsted Press [John Wiley & Sons], New York-Toronto-London, 1978.
[P] Pucci, C. Operatori ellittici estremanti. Ann. Mat. Pura Appl. (4) 72 (1966), 141–170.

[PS] Pucci, P.; Serrin, J. The maximum principle. Progress in Nonlinear Differential Equations and their
Applications, 73. Birkhäuser Verlag, Basel, 2007.
[S] Savin, O. Small perturbation solutions for elliptic equations. Comm. Partial Differential Equations
32 (2007), no. 4-6, 557–578.
[St] Strauss, W. A. Partial differential equations. An introduction. Second edition. John Wiley & Sons,
Ltd., Chichester, 2008.
[T] Trudinger, N. S. Remarks on the Pucci conjecture. Indiana Univ. Math. J. 69 (2020), no. 1, 109–118.
[W] Wang, X.J. The k-Hessian equation. Geometric analysis and PDEs, 177–252, Lecture Notes in
Math., 1977, Springer, Dordrecht, 2009.

Department of Mathematics, Indiana University, 831 E 3rd St, Bloomington, IN 47405,


USA
Email address: nqle@indiana.edu
