the Theory of Viscosity Solutions

Shigeaki Koike∗

2nd edition, 26 August 2010

∗ Department of Mathematics, Saitama University, 255 Shimo-Okubo, Sakura, Saitama 338-8570, Japan
Preface for the 2nd edition

The first version of this text has been out of print since 2009, and it seems that there is no hope of publishing the second version through the publisher. Therefore, I have decided to put the files of the second version on my home page. Although I corrected many errors in the first version, there must still be some mistakes in this version. I would be glad if the reader would kindly inform me of errors, typos, etc.
Acknowledgment for the 2nd edition

I would like to thank Professors H. Ishii, K. Ishii, T. Nagasawa, M. Ohta, T. Ohtsuka, and the graduate students T. Imai, K. Kohsaka, H. Mitake, S. Nakagawa and T. Nozokido for pointing out numerous errors in the first edition.

26 August 2010, Shigeaki Koike
Preface

This book was originally written in Japanese for undergraduate students in the Department of Mathematics of Saitama University. In fact, the first hand-written draft was prepared for a series of lectures on viscosity solution theory for undergraduate students at Ehime University and Hokkaido University.

The aim here is to present a brief introduction to the theory of viscosity solutions for students who have knowledge of advanced calculus (i.e. differentiation and integration of functions of several variables) and, hopefully, a little of Lebesgue integration and functional analysis. Since this is written for undergraduate students who are not necessarily excellent, I try to give "easy" proofs throughout this book. Thus, if you do not feel any difficulty in reading the User's Guide [6], you should try to read that one instead.

I also try not only to present the viscosity solution theory but also to mention some related "classical" results.
Our plan for this book is as follows: we begin with our motivation in section 1. Section 2 introduces the definition of viscosity solutions and their properties. In section 3, we first show "classical" comparison principles and then extend them to viscosity solutions of first- and second-order PDEs, separately. We establish two kinds of existence results, via Perron's method and via representation formulas for Bellman and Isaacs equations, in section 4. We discuss boundary value problems for viscosity solutions in section 5. Section 6 is a short introduction to the L^p viscosity solution theory, on which we have an excellent book [4].

In the Appendix, which is the hardest part, we give proofs of fundamental propositions.
In order to learn more about viscosity solutions, I give a list of "books":

A popular survey paper [6] by Crandall-Ishii-Lions on the theory of viscosity solutions of second-order, degenerate elliptic PDEs is still a good choice for undergraduate students to read first. However, in my experience, it seems a bit hard for average undergraduate students to understand.

Bardi-Capuzzo Dolcetta's book [1] contains lots of information on viscosity solutions for first-order PDEs (Hamilton-Jacobi equations), while Fleming-Soner's [10] complements it with topics on second-order (degenerate) elliptic PDEs with applications to stochastic control problems.

Barles' book [2] is also a nice way to learn his original techniques and the French language simultaneously!
I have been informed that Ishii will write a book [15] in Japanese on viscosity solutions in the near future, which must be more advanced than this one.

For an important application of the viscosity solution theory, we refer to Giga's [12] on curvature flow equations. Also, I recommend that the reader consult the Lecture Notes [3] (Bardi-Crandall-Evans-Soner-Souganidis), not only for various applications but also for a "friendly" introduction by Crandall, who first introduced the notion of viscosity solutions with P.-L. Lions in the early 80s.

If the reader is interested in section 6, I recommend him/her to attack Caffarelli-Cabré's [4].

As for the general PDE theory, although there are so many books on PDEs, I only refer to my favorite ones: Gilbarg-Trudinger's [13] and Evans' [8]. Also, as a textbook for undergraduate students, Han-Lin's short lecture notes [14] are a good choice.

Since this is a textbook, we do not refer the reader to original papers unless they are not mentioned in the books in our references.
Acknowledgment

First of all, I would like to express my thanks to Professors H. Morimoto and Y. Tonegawa for giving me the opportunity to deliver a series of lectures at their universities. I would also like to thank Professors K. Ishii and T. Nagasawa, and a graduate student, K. Nakagawa, for their suggestions on the first draft.

I wish to express my gratitude to Professor H. Ishii for having given me an enormous supply of ideas since 1980.

I also wish to thank the reviewer for several important suggestions.

My final thanks go to Professor T. Ozawa for recommending that I publish this manuscript. He kindly suggested that I change the original Japanese title ("A secret club on viscosity solutions").
Contents

1 Introduction
  1.1 From classical solutions to weak solutions
  1.2 Typical examples of weak solutions
    1.2.1 Burgers' equation
    1.2.2 Hamilton-Jacobi equations
2 Definition
  2.1 Vanishing viscosity method
  2.2 Equivalent definitions
3 Comparison principle
  3.1 Classical comparison principle
    3.1.1 Degenerate elliptic PDEs
    3.1.2 Uniformly elliptic PDEs
  3.2 Comparison principle for first-order PDEs
  3.3 Extension to second-order PDEs
    3.3.1 Degenerate elliptic PDEs
    3.3.2 Remarks on the structure condition
    3.3.3 Uniformly elliptic PDEs
4 Existence results
  4.1 Perron's method
  4.2 Representation formulas
    4.2.1 Bellman equation
    4.2.2 Isaacs equation
  4.3 Stability
5 Generalized boundary value problems
  5.1 Dirichlet problem
  5.2 State constraint problem
  5.3 Neumann problem
  5.4 Growth condition at |x| → ∞
6 L^p viscosity solutions
  6.1 A brief history
  6.2 Definition and basic facts
  6.3 Harnack inequality
    6.3.1 Linear growth
    6.3.2 Quadratic growth
  6.4 Hölder continuity estimates
  6.5 Existence result
7 Appendix
  7.1 Proof of Ishii's lemma
  7.2 Proof of the ABP maximum principle
  7.3 Proof of existence results for Pucci equations
  7.4 Proof of the weak Harnack inequality
  7.5 Proof of the local maximum principle
References
Notation Index
Index
1 Introduction
Throughout this book, we will work in Ω (except in sections 4.2 and 5.4), where

$$\Omega \subset \mathbf{R}^n \ \text{is open and bounded.}$$

We denote by ⟨·, ·⟩ the standard inner product in R^n, and set |x| = √⟨x, x⟩ for x ∈ R^n. We use the standard notation for open balls: for r > 0 and x ∈ R^n,

$$B_r(x) := \{ y \in \mathbf{R}^n \mid |x - y| < r \}, \quad \text{and} \quad B_r := B_r(0).$$
For a function u : Ω → R, we denote its gradient and Hessian matrix at x ∈ Ω, respectively, by

$$Du(x) := \begin{pmatrix} \dfrac{\partial u(x)}{\partial x_1} \\ \vdots \\ \dfrac{\partial u(x)}{\partial x_n} \end{pmatrix}, \qquad
D^2 u(x) := \begin{pmatrix} \dfrac{\partial^2 u(x)}{\partial x_1^2} & \cdots & \dfrac{\partial^2 u(x)}{\partial x_1 \partial x_n} \\ \vdots & \dfrac{\partial^2 u(x)}{\partial x_i \partial x_j} & \vdots \\ \dfrac{\partial^2 u(x)}{\partial x_n \partial x_1} & \cdots & \dfrac{\partial^2 u(x)}{\partial x_n^2} \end{pmatrix}$$

(the (i, j)-entry of D²u(x) is ∂²u(x)/∂x_i∂x_j).
Also, S^n denotes the set of all real-valued n × n symmetric matrices. Note that if u ∈ C²(Ω), then D²u(x) ∈ S^n for x ∈ Ω.

We recall the standard ordering in S^n:

$$X \le Y \iff \langle X\xi, \xi\rangle \le \langle Y\xi, \xi\rangle \ \text{for all } \xi \in \mathbf{R}^n.$$
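This ordering is easy to experiment with numerically. The following is a minimal sketch (the randomly generated matrices are my own sample data, not from the text): it builds X := Y + MMᵗ, so that X ≥ Y, and checks the quadratic-form characterization above as well as the equivalent statement that every eigenvalue of X − Y is nonnegative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Build Y in S^n and X = Y + M M^t, so that X - Y is positive semi-definite.
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2.0
M = rng.standard_normal((n, n))
X = Y + M @ M.T

# X >= Y  <=>  <X xi, xi> >= <Y xi, xi> for every xi in R^n.
for _ in range(100):
    xi = rng.standard_normal(n)
    assert xi @ X @ xi >= xi @ Y @ xi - 1e-10

# Equivalently, every eigenvalue of X - Y is nonnegative.
assert np.linalg.eigvalsh(X - Y).min() >= -1e-10
```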
We will also use the following notation in sections 6 and 7: for ξ = (ξ_1, …, ξ_n)^t, η = (η_1, …, η_n)^t ∈ R^n, we denote by ξ ⊗ η the n × n matrix whose (i, j)-entry is ξ_i η_j for 1 ≤ i, j ≤ n:

$$\xi \otimes \eta = \begin{pmatrix} \xi_1\eta_1 & \cdots & \xi_1\eta_n \\ \vdots & \xi_i\eta_j & \vdots \\ \xi_n\eta_1 & \cdots & \xi_n\eta_n \end{pmatrix}.$$
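For readers who like to compute, ξ ⊗ η is exactly numpy's outer product; the short sketch below (the sample vectors are my own choice) also checks the identity (ξ ⊗ η)ζ = ⟨η, ζ⟩ξ, which follows directly from the entry-wise definition.

```python
import numpy as np

xi = np.array([1.0, 2.0, 3.0])
eta = np.array([4.0, 5.0, 6.0])

# (xi ⊗ eta)_{ij} = xi_i eta_j  -- numpy's outer product.
T = np.outer(xi, eta)
assert T[0, 2] == 6.0 and T[2, 0] == 12.0

# The entry-wise definition gives (xi ⊗ eta) zeta = <eta, zeta> xi.
zeta = np.array([1.0, 0.0, -1.0])
assert np.allclose(T @ zeta, np.dot(eta, zeta) * xi)
```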
We are concerned with general second-order partial differential equations (PDEs for short):

$$F(x, u(x), Du(x), D^2u(x)) = 0 \quad \text{in } \Omega. \tag{1.1}$$

We suppose (except in several sections) that

$$F : \Omega \times \mathbf{R} \times \mathbf{R}^n \times S^n \to \mathbf{R} \ \text{is continuous with respect to all variables.}$$
1.1 From classical solutions to weak solutions

As the first example of a PDE, we present the Laplace equation:

$$-\triangle u = 0 \quad \text{in } \Omega. \tag{1.2}$$

Here, we define △u := trace(D²u). In the literature on viscosity solution theory, we prefer to have the minus sign in front of △.

Of course, since we do not require any boundary condition yet, all polynomials of degree one are solutions of (1.2). In many textbooks (particularly those for engineers), under certain boundary conditions, we learn how to solve (1.2) when Ω has some special shape such as a cube, a ball, the half-space or the whole space R^n. Here, "solve" means that we find an explicit formula for u using elementary functions such as polynomials, trigonometric functions, etc.
However, the study of (1.2) in such special domains is not always applicable because, for instance, solutions of equation (1.2) may represent the density of a gas in a bottle, which is neither a ball nor a cube.

Unfortunately, in general domains, it seems impossible to find formulas for solutions u in terms of elementary functions. Moreover, in order to cover problems arising in physics, engineering and finance, we will have to study PDEs that are more general and complicated than (1.2). Thus, we have to deal with the general PDE (1.1) in general domains.

If we give up having formulas for solutions of (1.1), how do we investigate the PDE (1.1)? In other words, what is the right question in the study of PDEs?

In the literature of PDE theory, the most basic questions are as follows:
(1) Existence: Does there exist a solution?
(2) Uniqueness: Is it the only solution?
(3) Stability: If the PDE changes a little, does the solution change a little?
The importance of the existence of solutions is obvious since, otherwise, the study of the PDE could be useless.

To explain the significance of the uniqueness of solutions, let us recall the reason why we study PDEs. Usually, we discuss PDEs or their solutions to understand specific phenomena in nature, engineering, economics, etc. In particular, people working in applications want to know how the solution looks, moves, behaves, etc. For this purpose, it might be powerful to use numerical computations. However, numerical analysis only shows us "approximate" shapes, movements, etc. Thus, if there is more than one solution, we do not know which one is approximated by the numerical solution.

Also, if the stability of solutions fails, we cannot predict what will happen from numerical experiments even though the uniqueness of solutions holds true.
Now, let us come back to the most essential question:

What is the "solution" of a PDE?

For example, it is natural to call a function u : Ω → R a solution of (1.1) if the first and second derivatives, Du(x) and D²u(x), exist for all x ∈ Ω, and (1.1) is satisfied at each x ∈ Ω when we plug them into the left hand side of (1.1). Such a function u will be called a classical solution of (1.1).

However, unfortunately, it is difficult to find a classical solution because we have to verify that it is sufficiently differentiable and that it satisfies the equality (1.1) simultaneously.
Instead of finding a classical solution directly, we choose the following strategy:

(A) Find a candidate for the classical solution;
(B) Check the differentiability of the candidate.
In the standard books, a candidate for a classical solution is called a weak solution; if the weak solution has first and second derivatives, then it becomes a classical solution. In the literature, showing the differentiability of solutions is called the study of the regularity of solutions.

Thus, with this terminology, we may rewrite the above in mathematical terms:

(A) Existence of weak solutions,
(B) Regularity of weak solutions.
However, when we cannot expect classical solutions of a PDE to exist, what is the right candidate for a solution?

We will call a function a candidate solution of a PDE if it is a "unique" and "stable" weak solution under a suitable setting. In section 2, we will define such candidates, named "viscosity solutions", for a large class of PDEs, and in the succeeding sections, we will extend the definition to more general (possibly discontinuous) functions and PDEs.

In the next subsection, we give a brief history of "weak solutions" to recall what was known before the birth of viscosity solutions.
1.2 Typical examples of weak solutions
In this subsection, we give two typical examples of PDEs to derive two kinds
of weak solutions which are unique and stable.
1.2.1 Burgers' equation

We consider Burgers' equation, which is a model PDE in fluid mechanics:

$$\frac{\partial u}{\partial t} + \frac{1}{2}\frac{\partial (u^2)}{\partial x} = 0 \quad \text{in } \mathbf{R} \times (0, \infty) \tag{1.3}$$

under the initial condition

$$u(x, 0) = g(x) \quad \text{for } x \in \mathbf{R}, \tag{1.4}$$

where g is a given function.

In general, we cannot find classical solutions of (1.3)-(1.4) even if g is smooth enough. See [8] for instance.
In order to look for the appropriate notion of weak solutions, we first introduce the function space C¹₀(R × [0, ∞)) as a "test function space":

$$C_0^1(\mathbf{R}\times[0,\infty)) := \left\{ \phi \in C^1(\mathbf{R}\times[0,\infty)) \ \middle|\ \begin{array}{l} \text{there is } K > 0 \text{ such that} \\ \mathrm{supp}\, \phi \subset [-K, K] \times [0, K] \end{array} \right\}.$$

Here and later, we denote by supp φ the following set:

$$\mathrm{supp}\, \phi := \overline{\{ (x, t) \in \mathbf{R}\times[0,\infty) \mid \phi(x, t) \neq 0 \}}.$$
Suppose that u satisfies (1.3). Multiplying (1.3) by φ ∈ C¹₀(R × [0, ∞)) and then using integration by parts, we have

$$\int_{\mathbf{R}} \int_0^\infty \left( u \frac{\partial \phi}{\partial t} + \frac{u^2}{2} \frac{\partial \phi}{\partial x} \right)(x, t)\, dt\, dx + \int_{\mathbf{R}} u(x, 0)\phi(x, 0)\, dx = 0.$$

Since there are no derivatives of u in the above, this equality makes sense if u ∈ ∪_{K>0} L¹((−K, K) × (0, K)). Hence, we may adopt the following property as the definition of weak solutions of (1.3)-(1.4):
$$\left\{ \begin{array}{l} \displaystyle \int_{\mathbf{R}} \int_0^\infty \left( u \frac{\partial \phi}{\partial t} + \frac{u^2}{2} \frac{\partial \phi}{\partial x} \right)(x, t)\, dt\, dx + \int_{\mathbf{R}} g(x)\phi(x, 0)\, dx = 0 \\ \text{for all } \phi \in C_0^1(\mathbf{R}\times[0,\infty)). \end{array} \right.$$
We often call this a weak solution in the distribution sense. As you noticed, we derived this notion by an essential use of integration by parts. We say that a PDE is in divergence form when we can adopt the notion of weak solutions in the distribution sense. When the PDE is not in divergence form, we say that it is in nondivergence form.

We note that the solution of (1.3) may have singularities even if the initial value g belongs to C^∞, as can be seen by the "method of characteristics". From the definition of weak solutions, we can derive the so-called Rankine-Hugoniot condition on the set of singularities.

On the other hand, unfortunately, we cannot show the uniqueness of weak solutions of (1.3)-(1.4) in general, while we know the famous Lax-Oleinik formula (see [8] for instance), which gives the "expected" solution.

In order to obtain the uniqueness of weak solutions, we add to the definition the following property (called the "entropy condition"), which holds for the expected solution given by the Lax-Oleinik formula: there is C > 0 such that

$$u(x + z, t) - u(x, t) \le \frac{Cz}{t}$$

for all (x, t, z) ∈ R × (0, ∞) × (0, ∞). We call u an entropy solution of (1.3) if it is a weak solution satisfying this inequality. It is also known that such a weak solution has a certain stability property.
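As a concrete check, the rarefaction-wave solution of (1.3) for the step initial data g = 1 on [0, ∞) and 0 elsewhere (a standard example, not taken from the text) satisfies the entropy inequality with C = 1; the sketch below samples the inequality on a grid of points.

```python
def u(x, t):
    """Rarefaction-wave solution of Burgers' equation for g = 1 on [0,inf), 0 elsewhere."""
    if x <= 0.0:
        return 0.0
    if x >= t:
        return 1.0
    return x / t            # the fan connecting the two constant states

C = 1.0
for t in (0.1, 1.0, 5.0):
    for x in (-2.0, -0.1, 0.0, 0.3, 1.0, 4.0):
        for z in (1e-3, 0.1, 1.0, 10.0):
            # entropy condition: u(x+z,t) - u(x,t) <= C z / t for z > 0
            assert u(x + z, t) - u(x, t) <= C * z / t + 1e-12
```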
We note that this entropy solution satisfies the above-mentioned important properties: "existence, uniqueness and stability". Thus, it must be the right definition of a weak solution of (1.3)-(1.4).
1.2.2 Hamilton-Jacobi equations

Next, we shall consider general Hamilton-Jacobi equations, which arise in optimal control and classical mechanics:

$$\frac{\partial u}{\partial t} + H(Du) = 0 \quad \text{for } (x, t) \in \mathbf{R}^n \times (0, \infty) \tag{1.5}$$

under the same initial condition (1.4).

In this example, we suppose that H : R^n → R is convex, i.e.

$$H(\theta p + (1-\theta)q) \le \theta H(p) + (1-\theta)H(q) \tag{1.6}$$

for all p, q ∈ R^n, θ ∈ [0, 1].

Remark. Since a convex function is locally Lipschitz continuous in general, we do not need to assume the continuity of H.

Example. In classical mechanics, we often call this H a "Hamiltonian". As a simple example of H, we have H(p) = |p|².
Notice that we cannot adopt the weak solution in the distribution sense for (1.5), since we cannot use integration by parts.

We next introduce the Lagrangian L : R^n → R defined by

$$L(q) = \sup_{p \in \mathbf{R}^n} \{ \langle p, q\rangle - H(p) \}.$$

When H(p) = |p|², it is easy to verify that the supremum on the right hand side of the above is attained.

It is surprising that we have a neat formula for the expected solution (called the Hopf-Lax formula), presented by

$$u(x, t) = \min_{y \in \mathbf{R}^n} \left\{ tL\left(\frac{x - y}{t}\right) + g(y) \right\}. \tag{1.7}$$

More precisely, it is shown that the right hand side of (1.7) is differentiable and satisfies (1.5) almost everywhere.
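The Hopf-Lax formula is easy to evaluate numerically. The sketch below takes H(p) = |p|² in one dimension, so that L(q) = q²/4, and the initial data g(x) = |x| (my own illustrative choice); for this g the minimization can also be done by hand, giving u(x, t) = x²/(4t) for |x| ≤ 2t and |x| − t otherwise, which the grid minimization reproduces.

```python
import numpy as np

def hopf_lax(x, t, g, ys):
    # u(x,t) = min_y [ t L((x-y)/t) + g(y) ]  with  L(q) = q^2 / 4  (for H(p) = p^2)
    return float(np.min(t * ((x - ys) / t) ** 2 / 4.0 + g(ys)))

ys = np.linspace(-10.0, 10.0, 200001)   # grid of candidate minimizers y
g = np.abs                              # initial data g(x) = |x|

def exact(x, t):
    # hand computation for this particular g
    return x * x / (4.0 * t) if abs(x) <= 2.0 * t else abs(x) - t

for x, t in ((3.0, 0.5), (0.5, 1.0), (-1.0, 2.0)):
    assert abs(hopf_lax(x, t, g, ys) - exact(x, t)) < 1e-4
```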
Thus, we could call u a weak solution of (1.5)-(1.4) when u satisfies (1.5) almost everywhere. However, if we decide to use this notion as a weak solution, the uniqueness fails in general. We will see an example in the next section.

As was shown for Burgers' equation, in order to say that the "unique weak" solution is given by (1.7), we have to add one more property to the definition of weak solutions: there is C > 0 such that

$$u(x + z, t) - 2u(x, t) + u(x - z, t) \le C|z|^2 \tag{1.8}$$

for all x, z ∈ R^n, t > 0. This is called the "semiconcavity" of u.

We note that (1.8) is a hypothesis on a one-sided bound of the second derivatives of u.

In the 60s, Kruzkov showed that the limit function of approximate solutions obtained by the vanishing viscosity method (see the next section) has property (1.8) when H is convex. He called u a "generalized" solution of (1.5) when it satisfies (1.5) almost everywhere together with (1.8).

To my knowledge, between Kruzkov's works and the birth of viscosity solutions, there was no big progress in the study of first-order PDEs in nondivergence form.
Remark. The convexity (1.6) is a natural hypothesis when we consider only optimal control problems, where one person intends to minimize some "cost" ("energy" in terms of physics). However, when we treat game problems (one person wants to minimize the cost while the other tries to maximize it), we meet nonconvex and nonconcave (i.e. "fully nonlinear") Hamiltonians. See section 4.2.
In this book, since we are concerned with viscosity solutions of PDEs in nondivergence form, for which the integration by parts argument cannot be used to define the notion of weak solutions in the distribution sense, we shall give typical examples of such PDEs.

Example. (Bellman and Isaacs equations) We first give Bellman equations and Isaacs equations, which arise in (stochastic) optimal control problems and differential games, respectively. As will be seen, these are extensions of linear PDEs.
Let A and B be sets of parameters; for instance, we suppose that A and B are (compact) subsets of R^m (for some m ≥ 1). For a ∈ A, b ∈ B, x ∈ Ω, r ∈ R, p = (p_1, …, p_n)^t ∈ R^n, and X = (X_{ij}) ∈ S^n, we set

$$L^a(x, r, p, X) := -\mathrm{trace}(A(x, a)X) + \langle g(x, a), p\rangle + c(x, a)r,$$
$$L^{a,b}(x, r, p, X) := -\mathrm{trace}(A(x, a, b)X) + \langle g(x, a, b), p\rangle + c(x, a, b)r.$$

Here A(·, a), A(·, a, b), g(·, a), g(·, a, b), c(·, a) and c(·, a, b) are given functions for (a, b) ∈ A × B.

For the inhomogeneous terms, we consider functions f(·, a) and f(·, a, b) in Ω for a ∈ A and b ∈ B.
For inhomogeneous terms, we consider functions f(, a) and f(, a, b) in Ω
for a ∈ A and b ∈ B.
We call the following PDEs Bellman equations:
sup
a∈A
¦L
a
(x, u(x), Du(x), D
2
u(x)) −f(x, a)¦ = 0 for x ∈ Ω. (1.9)
Notice that the supremum over A is taken at each point x ∈ Ω.
Taking account of one more parameter set B, we call the following PDEs
Isaacs equations:
sup
a∈A
inf
b∈B
¦L
a,b
(x, u(x), Du(x), D
2
u(x)) −f(x, a, b)¦ = 0 for x ∈ Ω (1.10)
and
inf
b∈B
sup
a∈A
¦L
a,b
(x, u(x), Du(x), D
2
u(x)) −f(x, a, b)¦ = 0 for x ∈ Ω. (1.10
′
)
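One reason Bellman operators fit the framework of this book is that the degenerate ellipticity introduced in section 2 survives the pointwise supremum when each A(x, a) is positive semi-definite. A small numpy sketch with randomly generated sample data (my own, purely illustrative; zeroth- and first-order terms are dropped for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
n, num_a = 3, 5

# One positive semi-definite matrix A_a and one scalar f_a per parameter a.
As = [rng.standard_normal((n, n)) for _ in range(num_a)]
As = [M @ M.T for M in As]
f = rng.standard_normal(num_a)

def bellman(X):
    # sup_a { -trace(A_a X) - f_a }
    return max(-np.trace(A @ X) - fa for A, fa in zip(As, f))

Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2.0
P = rng.standard_normal((n, n))
X = Y + P @ P.T                  # X >= Y in S^n

# trace(A (X-Y)) >= 0 for PSD A, so each term decreases, and so does the sup.
assert bellman(X) <= bellman(Y) + 1e-10
```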
Example. ("Quasilinear" equations) We say that a PDE is quasilinear if the coefficients of D²u contain u or Du. Although we will not study quasilinear PDEs in this book, we give some examples which are in nondivergence form.

We first give the PDE of mean curvature type:

$$F(x, p, X) := -\left( |p|^2 \mathrm{trace}(X) - \langle Xp, p\rangle \right).$$

Notice that this F is independent of the x-variables. We refer to [12] for applications where this kind of operator appears.

Next, we show a relatively "new" one, called the L^∞-Laplacian:

$$F(x, p, X) := -\langle Xp, p\rangle.$$

Again, this F does not contain the x-variables. We refer to Jensen's work [16], where he first studied the PDE "−⟨D²uDu, Du⟩ = 0 in Ω" via the viscosity solution approach.
2 Deﬁnition
In this section, we derive the deﬁnition of viscosity solutions of (1.1) via the
vanishing viscosity method.
We also give some basic properties of viscosity solutions and equivalent
deﬁnitions using “semijets”.
2.1 Vanishing viscosity method

When the notion of viscosity solutions was born, in order to explain the reason why we need it, many speakers started their talks by giving the following typical example, called the eikonal equation:

$$|Du|^2 = 1 \quad \text{in } \Omega. \tag{2.1}$$

We seek C¹ functions satisfying (2.1) under the Dirichlet condition

$$u(x) = 0 \quad \text{for } x \in \partial\Omega. \tag{2.2}$$

However, since there is no classical solution of (2.1)-(2.2) (showing the nonexistence of classical solutions is a good exercise), we intend to derive a reasonable definition of weak solutions of (2.1).

In fact, we expect that the following function (the distance from ∂Ω) would be the unique solution of this problem (see Fig 2.1):

$$u(x) = \mathrm{dist}(x, \partial\Omega) := \inf_{y \in \partial\Omega} |x - y|.$$

[Fig 2.1: graph of y = u(x), the distance function to ∂Ω.]
If we consider the case when n = 1 and Ω = (−1, 1), then the expected solution is given by

$$u(x) = 1 - |x| \quad \text{for } x \in [-1, 1]. \tag{2.3}$$

Since this function is C^∞ except at x = 0, we could decide to call u a weak solution of (2.1) if it satisfies (2.1) in Ω except at finitely many points.
[Fig 2.2: graphs of other piecewise linear weak solutions on (−1, 1).]
However, even in the above simple case of (2.1), we know that there are infinitely many such weak solutions of (2.1) (see Fig 2.2); for example, −u is a weak solution, and so is

$$u(x) = \left\{ \begin{array}{ll} x + 1 & \text{for } x \in [-1, -\frac{1}{2}), \\ -x & \text{for } x \in [-\frac{1}{2}, \frac{1}{2}), \\ x - 1 & \text{for } x \in [\frac{1}{2}, 1], \end{array} \right.$$

etc.
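Anticipating the notion introduced in the next subsection, one can already see numerically what is wrong with such sawtooth functions: at the downward kink x = 1/2, the constant test function φ ≡ −1/2 (my own choice) touches the graph from below, yet |φ′(1/2)|² − 1 = −1 < 0, which will violate the supersolution inequality of section 2. A minimal sketch, assuming this three-piece example:

```python
import numpy as np

def w(x):
    # the three-piece weak solution from the text
    if x < -0.5:
        return x + 1.0
    if x < 0.5:
        return -x
    return x - 1.0

phi = lambda x: -0.5          # constant test function with phi(1/2) = w(1/2)

# x = 1/2 is a local minimum of w - phi ...
xs = np.linspace(0.2, 0.8, 601)
vals = np.array([w(x) - phi(x) for x in xs])
assert vals.min() >= (w(0.5) - phi(0.5)) - 1e-12

# ... but the supersolution requirement |phi'(1/2)|^2 - 1 >= 0 fails:
assert 0.0 ** 2 - 1.0 < 0.0
```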
Now, in order to look for an appropriate notion of weak solutions, we introduce the so-called vanishing viscosity method: for ε > 0, we consider the following PDE as an approximate equation of (2.1) when n = 1 and Ω = (−1, 1):

$$\left\{ \begin{array}{ll} -\varepsilon u_\varepsilon'' + (u_\varepsilon')^2 = 1 & \text{in } (-1, 1), \\ u_\varepsilon(\pm 1) = 0. \end{array} \right. \tag{2.4}$$

The first term, −εu″_ε, on the left hand side of (2.4) is called the vanishing viscosity term (when n = 1), as ε → 0.
By an elementary calculation, we can find the unique smooth solution u_ε in the following manner. We first note that if a classical solution of (2.4) exists, then it is unique. Thus, we may suppose that u′_ε(0) = 0 by symmetry. Setting v_ε = u′_ε, we first solve the ODE:

$$\left\{ \begin{array}{ll} -\varepsilon v_\varepsilon' + v_\varepsilon^2 = 1 & \text{in } (-1, 1), \\ v_\varepsilon(0) = 0. \end{array} \right. \tag{2.5}$$

It is easy to see that the solution of (2.5) is given by

$$v_\varepsilon(x) = -\tanh\left(\frac{x}{\varepsilon}\right).$$

Hence, we can find u_ε by

$$u_\varepsilon(x) = -\varepsilon \log\left( \frac{\cosh(x/\varepsilon)}{\cosh(1/\varepsilon)} \right) = -\varepsilon \log\left( \frac{e^{x/\varepsilon} + e^{-x/\varepsilon}}{e^{1/\varepsilon} + e^{-1/\varepsilon}} \right).$$

It is a good exercise to show that u_ε converges to the function in (2.3) uniformly on [−1, 1].
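The convergence is easy to observe numerically. Below is a minimal sketch (using numpy's logaddexp for numerical stability; the grids and tolerances are my own choices) evaluating u_ε and measuring its uniform distance to the limit 1 − |x|:

```python
import numpy as np

def u_eps(x, eps):
    # u_eps(x) = -eps * log( cosh(x/eps) / cosh(1/eps) ),
    # computed via log(2 cosh(s)) = logaddexp(s, -s).
    return -eps * (np.logaddexp(x / eps, -x / eps)
                   - np.logaddexp(1.0 / eps, -1.0 / eps))

xs = np.linspace(-1.0, 1.0, 2001)
limit = 1.0 - np.abs(xs)                  # the expected limit (2.3)

errs = [np.max(np.abs(u_eps(xs, eps) - limit)) for eps in (0.1, 0.05, 0.01)]
assert errs[0] > errs[1] > errs[2]        # the uniform error decreases with eps
assert errs[2] < 0.01                     # roughly eps * log 2, attained at x = 0
```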
Remark. Since û_ε(x) := −u_ε(x) is the solution of

$$\left\{ \begin{array}{ll} \varepsilon u'' + (u')^2 = 1 & \text{in } (-1, 1), \\ u(\pm 1) = 0, \end{array} \right.$$

we have û(x) := lim_{ε→0} û_ε(x) = −u(x). Thus, if we replace −εu″ by +εu″, then the limit function would be different in general.

To define weak solutions, we adopt the properties which hold for the (uniform) limit of approximate solutions of PDEs with the "minus" vanishing viscosity term.
Let us come back to general second-order PDEs:

$$F(x, u, Du, D^2u) = 0 \quad \text{in } \Omega. \tag{2.6}$$

We shall use the following definition of classical solutions:

Definition. We call u : Ω → R a classical subsolution (resp., supersolution, solution) of (2.6) if u ∈ C²(Ω) and

$$F(x, u(x), Du(x), D^2u(x)) \le 0 \ (\text{resp.}, \ge 0, = 0) \quad \text{in } \Omega.$$
Remark. If F does not depend on the X-variables (i.e. F(x, u, Du) = 0; first-order PDEs), we only suppose u ∈ C¹(Ω) in the above in place of u ∈ C²(Ω).

Throughout this text, we also suppose the following monotonicity condition with respect to the X-variables:

Definition. We say that F is (degenerate) elliptic if

$$F(x, r, p, X) \le F(x, r, p, Y) \tag{2.7}$$

for all x ∈ Ω, r ∈ R, p ∈ R^n and X, Y ∈ S^n provided X ≥ Y.

We notice that if F does not depend on the X-variables (i.e. F = 0 is a first-order PDE), then F is automatically elliptic.

We also note that the left hand side F(x, r, p, X) = −trace(X) of the Laplace equation (1.2) is elliptic.

We will derive properties which hold true for the (uniform) limit (as ε → +0) of solutions of

$$-\varepsilon \triangle u + F(x, u, Du, D^2u) = 0 \quad \text{in } \Omega \quad (\varepsilon > 0). \tag{2.8}$$

Note that since −ε trace(X) + F(x, r, p, X) is "uniformly" elliptic (see section 3 for the definition) provided that F is elliptic, it is easier in practice to solve (2.8) than (2.6). See [13] for instance.
Proposition 2.1. Assume that F is elliptic. Let u_ε ∈ C²(Ω) ∩ C(Ω̄) be a classical subsolution (resp., supersolution) of (2.8). If u_ε converges to u ∈ C(Ω) (as ε → 0) uniformly on every compact set K ⊂ Ω, then, for any φ ∈ C²(Ω), we have

$$F(x, u(x), D\phi(x), D^2\phi(x)) \le 0 \ (\text{resp.}, \ge 0)$$

provided that u − φ attains its maximum (resp., minimum) at x ∈ Ω.

Remark. When F does not depend on the X-variables, we only need to suppose φ and u_ε to be in C¹(Ω), as before.
Proof. We only give a proof of the assertion for subsolutions since the other one can be shown in a symmetric way.

Suppose that u − φ attains its maximum at x̂ ∈ Ω for φ ∈ C²(Ω). Setting φ_δ(y) := φ(y) + δ|y − x̂|⁴ for small δ > 0, we see that

$$(u - \phi_\delta)(\hat{x}) > (u - \phi_\delta)(y) \quad \text{for } y \in \Omega \setminus \{\hat{x}\}.$$

(This tiny technique of replacing a maximum point by a "strict" one will appear again in Proposition 2.2.)

Let x_ε ∈ Ω̄ be a point such that (u_ε − φ_δ)(x_ε) = max_Ω̄ (u_ε − φ_δ). Note that x_ε also depends on δ > 0.

Since u_ε converges to u uniformly in B_r(x̂) and x̂ is the unique maximum point of u − φ_δ, we note that lim_{ε→0} x_ε = x̂. Thus, we see that x_ε ∈ Ω for small ε > 0. Notice that if we argue with φ instead of φ_δ, the limit of x_ε might differ from x̂.

Thus, at x_ε ∈ Ω, we have

$$-\varepsilon \triangle u_\varepsilon(x_\varepsilon) + F(x_\varepsilon, u_\varepsilon(x_\varepsilon), Du_\varepsilon(x_\varepsilon), D^2u_\varepsilon(x_\varepsilon)) \le 0.$$

Since D(u_ε − φ_δ)(x_ε) = 0 and D²(u_ε − φ_δ)(x_ε) ≤ 0, in view of ellipticity, we have

$$-\varepsilon \triangle \phi_\delta(x_\varepsilon) + F(x_\varepsilon, u_\varepsilon(x_\varepsilon), D\phi_\delta(x_\varepsilon), D^2\phi_\delta(x_\varepsilon)) \le 0.$$

Sending ε → 0 in the above, we have

$$F(\hat{x}, u(\hat{x}), D\phi_\delta(\hat{x}), D^2\phi_\delta(\hat{x})) \le 0.$$

Since Dφ_δ(x̂) = Dφ(x̂) and D²φ_δ(x̂) = D²φ(x̂), we conclude the proof. □
Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (2.6) if, for any φ ∈ C²(Ω),

$$F(x, u(x), D\phi(x), D^2\phi(x)) \le 0 \ (\text{resp.}, \ge 0)$$

provided that u − φ attains its maximum (resp., minimum) at x ∈ Ω.

We call u : Ω → R a viscosity solution of (2.6) if it is both a viscosity subsolution and a viscosity supersolution of (2.6).
Remark. Here, we have given the definition for "general" functions, but we will often suppose that they are (semi)continuous in theorems etc. In fact, in the propositions in section 2.1, we will suppose that viscosity subsolutions and supersolutions are continuous. However, all the propositions in section 2.1 can be proved assuming only upper semicontinuity for viscosity subsolutions and lower semicontinuity for viscosity supersolutions, respectively. We will introduce general viscosity solutions in section 3.3.
Notation. In order to memorize the correct inequality, we will often say that u is a viscosity subsolution (resp., supersolution) of

$$F(x, u, Du, D^2u) \le 0 \ (\text{resp.}, \ge 0) \quad \text{in } \Omega$$

if it is a viscosity subsolution (resp., supersolution) of (2.6).
Proposition 2.2. For u : Ω → R, the following (1) and (2) are equivalent:

(1) u is a viscosity subsolution (resp., supersolution) of (2.6);
(2) if 0 = (u − φ)(x̂) > (u − φ)(x) (resp., < (u − φ)(x)) for φ ∈ C²(Ω), x̂ ∈ Ω and x ∈ Ω \ {x̂}, then F(x̂, φ(x̂), Dφ(x̂), D²φ(x̂)) ≤ 0 (resp., ≥ 0).
[Fig 2.3: graphs of u, φ, φ_δ and φ(·) + (u − φ)(x̂) near the maximum point x̂.]
Proof. The implication (1) ⇒ (2) is trivial.

For the opposite implication in the subsolution case, suppose that u − φ attains a maximum at x̂ ∈ Ω. Set

$$\phi_\delta(x) = \phi(x) + \delta|x - \hat{x}|^4 + (u - \phi)(\hat{x}).$$

See Fig 2.3. Since 0 = (u − φ_δ)(x̂) > (u − φ_δ)(x) for x ∈ Ω \ {x̂}, (2) gives

$$F(\hat{x}, \phi_\delta(\hat{x}), D\phi_\delta(\hat{x}), D^2\phi_\delta(\hat{x})) \le 0,$$

which implies the assertion. □
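The δ|x − x̂|⁴ perturbation used here (and in Proposition 2.1) can be seen in action numerically: even in the worst case where u − φ vanishes identically, so that every point is a maximum point, subtracting δ|x − x̂|⁴ makes x̂ the strict maximum. A tiny sketch with my own sample numbers:

```python
import numpy as np

x_hat, delta = 0.3, 1e-3
xs = np.linspace(-1.0, 1.0, 2001)

# Worst case: u - phi = 0 everywhere, so every point is a maximum point.
# After subtracting delta |x - x_hat|^4, only x_hat remains a maximum:
g = 0.0 - delta * np.abs(xs - x_hat) ** 4
i = int(np.argmax(g))

assert abs(xs[i] - x_hat) < 1e-9                      # the maximizer is x_hat itself
assert np.all(g[np.abs(xs - x_hat) > 1e-9] < g[i])    # and it is strict
```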
By the next proposition, we see that viscosity solutions are the right candidates for weak solutions when F is elliptic.

Proposition 2.3. Assume that F is elliptic. A function u : Ω → R is a classical subsolution (resp., supersolution) of (2.6) if and only if it is a viscosity subsolution (resp., supersolution) of (2.6) and u ∈ C²(Ω).
Proof. Suppose that u is a viscosity subsolution of (2.6) and u ∈ C²(Ω). Taking φ ≡ u, we see that u − φ attains its maximum at every point x ∈ Ω. Thus, the definition of viscosity subsolutions yields

$$F(x, u(x), Du(x), D^2u(x)) \le 0 \quad \text{for } x \in \Omega.$$

On the contrary, suppose that u ∈ C²(Ω) is a classical subsolution of (2.6). Fix any φ ∈ C²(Ω). Assuming that u − φ takes its maximum at x ∈ Ω, we have

$$D(u - \phi)(x) = 0 \quad \text{and} \quad D^2(u - \phi)(x) \le 0.$$

Hence, in view of ellipticity, we have

$$0 \ge F(x, u(x), Du(x), D^2u(x)) \ge F(x, u(x), D\phi(x), D^2\phi(x)). \qquad \Box$$
We introduce the sets of upper and lower semicontinuous functions: for K ⊂ R^n,

$$\mathrm{USC}(K) := \{ u : K \to \mathbf{R} \mid u \text{ is upper semicontinuous in } K \}$$

and

$$\mathrm{LSC}(K) := \{ u : K \to \mathbf{R} \mid u \text{ is lower semicontinuous in } K \}.$$

Remark. Throughout this book, we use the following maximum principle for semicontinuous functions: an upper semicontinuous function on a compact set attains its maximum.
We give the following proposition, which will be used without being mentioned explicitly. Since the proof is a bit technical, the reader may skip it on first reading.
Proposition 2.4. Assume that u ∈ USC(Ω) (resp., u ∈ LSC(Ω)) is a viscosity subsolution (resp., supersolution) of (2.6) in Ω. Assume also that inf_K u > −∞ (resp., sup_K u < ∞) for every compact set K ⊂ Ω. Then, for any open set Ω′ ⊂ Ω, u is a viscosity subsolution (resp., supersolution) of (2.6) in Ω′.
Proof. We only show the assertion for subsolutions since the other can be shown similarly.

For φ ∈ C²(Ω′), by Proposition 2.2, we may suppose that for some x̂ ∈ Ω′,

$$0 = (u - \phi)(\hat{x}) > (u - \phi)(y) \quad \text{for all } y \in \Omega' \setminus \{\hat{x}\}.$$

Choose r > 0 such that B_{3r}(x̂) ⊂ Ω′, and set s := −inf_{B_{2r}(x̂)}(u − φ) > 0.

We recall the standard mollifier ρ_ε ∈ C^∞(R^n) with supp ρ_ε ⊂ B̄_ε: first, we set

$$\rho(x) := \left\{ \begin{array}{ll} C_0 e^{1/(|x|^2 - 1)} & \text{for } |x| < 1, \\ 0 & \text{for } |x| \ge 1, \end{array} \right. \qquad \text{where } C_0 := \left( \int_{B_1} e^{1/(|x|^2 - 1)} dx \right)^{-1}.$$

Next, we define ρ_ε by

$$\rho_\varepsilon(x) := \frac{1}{\varepsilon^n} \rho\left(\frac{x}{\varepsilon}\right).$$

Notice that ∫_{R^n} ρ_ε dx = 1 for ε > 0.

Now, setting φ_ε(x) := φ̄ ∗ ρ_ε(x) = ∫_{R^n} φ̄(y)ρ_ε(x − y)dy ∈ C^∞(R^n), where

$$\bar{\phi}(x) := \left\{ \begin{array}{ll} \phi(x) & \text{for } x \in B_r(\hat{x}), \\ 2s + \sup_\Omega u & \text{for } x \in \Omega \setminus B_r(\hat{x}), \end{array} \right.$$

we easily verify that for 0 < ε < r,

$$\max_{\Omega \setminus B_{2r}(\hat{x})} (u - \phi_\varepsilon) \le -2s.$$

On the other hand, since we have

$$(u - \phi_\varepsilon)(\hat{x}) = \int_{B_\varepsilon(\hat{x})} \rho_\varepsilon(\hat{x} - y)(\phi(\hat{x}) - \phi(y))dy,$$

there is ε₀ > 0 such that

$$(u - \phi_\varepsilon)(\hat{x}) \ge -s \quad \text{for } \varepsilon \in (0, \varepsilon_0).$$

Therefore, we can choose x_ε ∈ B_{2r}(x̂) such that max_Ω(u − φ_ε) = (u − φ_ε)(x_ε).

Thus, taking a subsequence of {x_ε}_{ε>0}, denoted by x_ε again (if necessary), we find z ∈ B̄_{2r}(x̂) such that lim_{ε→0} x_ε = z. Hence, sending ε → 0, we have max_{B̄_{2r}(x̂)}(u − φ) = (u − φ)(z), which gives z = x̂.

By the definition, we have

$$F(x_\varepsilon, u(x_\varepsilon), D\phi_\varepsilon(x_\varepsilon), D^2\phi_\varepsilon(x_\varepsilon)) \le 0.$$

Note that lim_{ε→0} u(x_ε) = u(x̂). Therefore, sending ε → 0 in the above, we conclude the proof. □
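The normalization ∫ρ_ε = 1, which the proof uses when comparing φ_ε with φ, can be confirmed by direct numerical integration in one dimension (the grid resolution and the value of ε below are my own choices):

```python
import numpy as np

# 1-D standard mollifier: rho(x) = C0 exp(1/(x^2 - 1)) on (-1,1), 0 outside.
xs = np.linspace(-1.0, 1.0, 200001)[1:-1]     # stay inside the open interval
dx = xs[1] - xs[0]
bump = np.exp(1.0 / (xs * xs - 1.0))
C0 = 1.0 / (np.sum(bump) * dx)                # normalizing constant
rho = C0 * bump
assert abs(np.sum(rho) * dx - 1.0) < 1e-12

# rho_eps(x) = rho(x/eps) / eps, sampled at the points eps * xs; the
# substitution y = x/eps keeps the integral equal to 1:
eps = 0.25
rho_eps = rho / eps
assert abs(np.sum(rho_eps) * (eps * dx) - 1.0) < 1e-8
```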
2.2 Equivalent definitions

We present equivalent definitions of viscosity solutions. However, since we will not need them until the proof of uniqueness for second-order PDEs, the reader may postpone this subsection until section 3.3.
First, we introduce the "semi"-jets of a function u : Ω → R at x ∈ Ω by

$$J^{2,+}u(x) := \left\{ (p, X) \in \mathbf{R}^n \times S^n \ \middle|\ \begin{array}{l} u(y) \le u(x) + \langle p, y - x\rangle + \frac{1}{2}\langle X(y - x), y - x\rangle \\ + o(|y - x|^2) \ \text{as } \Omega \ni y \to x \end{array} \right\}$$

and

$$J^{2,-}u(x) := \left\{ (p, X) \in \mathbf{R}^n \times S^n \ \middle|\ \begin{array}{l} u(y) \ge u(x) + \langle p, y - x\rangle + \frac{1}{2}\langle X(y - x), y - x\rangle \\ + o(|y - x|^2) \ \text{as } \Omega \ni y \to x \end{array} \right\}.$$

Note that J^{2,−}u(x) = −J^{2,+}(−u)(x).

Remark. We do not impose any continuity on u in these definitions.
We recall the notion of "small order o" in the above: for k ≥ 1,

$$f(x) \le o(|x|^k) \ (\text{resp.}, \ge o(|x|^k)) \ \text{as } x \to 0$$

$$\iff \quad \left\{ \begin{array}{l} \text{there is } \omega \in C([0, \infty), [0, \infty)) \text{ such that } \omega(0) = 0, \text{ and} \\[2pt] \displaystyle\sup_{x \in B_r \setminus \{0\}} \frac{f(x)}{|x|^k} \le \omega(r) \quad \left( \text{resp.}, \ \inf_{x \in B_r \setminus \{0\}} \frac{f(x)}{|x|^k} \ge -\omega(r) \right). \end{array} \right.$$
In the next proposition, we give some basic properties of semi-jets: (1) is a relation between semi-jets and classical derivatives, and (2) means that semi-jets are "defined" on dense subsets of Ω.
Proposition 2.5. For u : Ω → R, we have the following:
(1) If J^{2,+}u(x) ∩ J^{2,−}u(x) ≠ ∅, then Du(x) and D²u(x) exist, and

  J^{2,+}u(x) ∩ J^{2,−}u(x) = {(Du(x), D²u(x))}.
(2) If u ∈ USC(Ω) (resp., u ∈ LSC(Ω)), then

  Ω = { x ∈ Ω | ∃x_k ∈ Ω such that J^{2,+}u(x_k) ≠ ∅ and lim_{k→∞} x_k = x }

  (resp., Ω = { x ∈ Ω | ∃x_k ∈ Ω such that J^{2,−}u(x_k) ≠ ∅ and lim_{k→∞} x_k = x }).
Proof. The assertion (1) is a direct consequence of the definition. We give a proof of the assertion (2) only for J^{2,+}.
Fix x ∈ Ω and choose r > 0 so that B_r(x) ⊂ Ω. For ε > 0, we can choose x_ε ∈ B_r(x) such that

  u(x_ε) − ε^{−1}|x_ε − x|² = max_{y∈B_r(x)} (u(y) − ε^{−1}|y − x|²).
Since |x_ε − x|² ≤ ε(max_{B_r(x)} u − u(x)), we see that x_ε converges to x ∈ B_r(x) as ε → 0. Thus, we may suppose that x_ε ∈ B_r(x) for small ε > 0. Hence, we have
  u(y) ≤ u(x_ε) + (1/ε)(|y − x|² − |x_ε − x|²) for all y ∈ B_r(x).
It is easy to check that (2(x_ε − x)/ε, 2ε^{−1}I) ∈ J^{2,+}u(x_ε). □
We next introduce a sort of closure of semi-jets:

  J̄^{2,±}u(x) := { (p, X) ∈ R^n × S^n | ∃x_k ∈ Ω and ∃(p_k, X_k) ∈ J^{2,±}u(x_k) such that (x_k, u(x_k), p_k, X_k) → (x, u(x), p, X) as k → ∞ }.
Proposition 2.6. For u : Ω → R, the following (1), (2), (3) are equivalent:
(1) u is a viscosity subsolution (resp., supersolution) of (2.6).
(2) For x ∈ Ω and (p, X) ∈ J^{2,+}u(x) (resp., J^{2,−}u(x)), we have F(x, u(x), p, X) ≤ 0 (resp., ≥ 0).
(3) For x ∈ Ω and (p, X) ∈ J̄^{2,+}u(x) (resp., J̄^{2,−}u(x)), we have F(x, u(x), p, X) ≤ 0 (resp., ≥ 0).
Proof. Again, we give a proof of the assertion only for subsolutions.
Step 1: (2) ⟹ (3). For x ∈ Ω and (p, X) ∈ J̄^{2,+}u(x), we can find (p_k, X_k) ∈ J^{2,+}u(x_k) with x_k ∈ Ω such that lim_{k→∞}(x_k, u(x_k), p_k, X_k) = (x, u(x), p, X) and

  F(x_k, u(x_k), p_k, X_k) ≤ 0,

which implies (3) by sending k → ∞.
Step 2: (3) ⟹ (1). For φ ∈ C²(Ω), suppose that (u − φ)(x) = max(u − φ). Then, the Taylor expansion of φ at x gives

  u(y) ≤ u(x) + ⟨Dφ(x), y − x⟩ + (1/2)⟨D²φ(x)(y − x), y − x⟩ + o(|x − y|²) as y → x.

Thus, we have (Dφ(x), D²φ(x)) ∈ J^{2,+}u(x) ⊂ J̄^{2,+}u(x).
Step 3: (1) ⟹ (2). For (p, X) ∈ J^{2,+}u(x) (x ∈ Ω), we can find a nondecreasing, continuous ω : [0, ∞) → [0, ∞) such that ω(0) = 0 and

  u(y) ≤ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + |y − x|²ω(|y − x|)   (2.9)

as y → x. In fact, by the definition of o, we find ω_0 ∈ C([0, ∞), [0, ∞)) such that ω_0(0) = 0 and

  ω_0(r) ≥ sup_{y∈B_r(x)\{x}} |x − y|^{−2} ( u(y) − u(x) − ⟨p, y − x⟩ − (1/2)⟨X(y − x), y − x⟩ ),

and we verify that ω(r) := sup_{0≤t≤r} ω_0(t) satisfies (2.9).
Now, we define φ by

  φ(y) := ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + ψ(|x − y|),

where

  ψ(t) := ∫_t^{√3 t} ( ∫_s^{2s} ω(r)dr ) ds ≥ t²ω(t).
It is easy to check that

  (Dφ(x), D²φ(x)) = (p, X) and (u − φ)(x) ≥ (u − φ)(y) for y ∈ Ω.

Therefore, we conclude the proof. □
Remark. In view of the proof of Step 3, we verify that for x ∈ Ω,

  J^{2,+}u(x) = { (Dφ(x), D²φ(x)) ∈ R^n × S^n | ∃φ ∈ C²(Ω) such that u − φ attains its maximum at x },

  J^{2,−}u(x) = { (Dφ(x), D²φ(x)) ∈ R^n × S^n | ∃φ ∈ C²(Ω) such that u − φ attains its minimum at x }.
Thus, we intuitively know J^{2,±}u(x) from the graph of u.
Example. Consider the function u ∈ C([−1, 1]) in (2.3). From the graph below, we may conclude that J^{2,−}u(0) = ∅ and

  J^{2,+}u(0) = ({1} × [0, ∞)) ∪ ({−1} × [0, ∞)) ∪ ((−1, 1) × R).

See Fig 2.4.1 and 2.4.2. We omit how to obtain J^{2,±}u(0) for this and the next examples.
We shall examine J^{2,±} for discontinuous functions. For instance, consider the Heaviside function:

  u(x) := { 1 for x ≥ 0,
          { 0 for x < 0.

We see that J^{2,−}u(0) = ∅ and J^{2,+}u(0) = ({0} × [0, ∞)) ∪ ((0, ∞) × R). See Fig 2.5.
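The description of J^{2,+}u(0) for the Heaviside function can also be probed numerically. The following sketch (our own illustration; the scales and the tolerance are arbitrary choices) approximates the defining condition limsup_{y→0} (u(y) − u(0) − py − Xy²/2)/y² ≤ 0 by evaluating the quotient at small scales:

```python
import numpy as np

def in_J2plus_heaviside(p, X, tol=1e-6):
    """Approximate test of (p, X) in J^{2,+}u(0) for the Heaviside function:
    membership requires limsup_{y->0} (u(y) - u(0) - p*y - X*y^2/2)/y^2 <= 0."""
    u = lambda y: np.where(y >= 0, 1.0, 0.0)
    r = 1e-4                                   # look at a small scale near 0
    y = np.concatenate([-np.geomspace(1e-8, r, 100), np.geomspace(1e-8, r, 100)])
    g = (u(y) - 1.0 - p * y - 0.5 * X * y**2) / y**2
    return bool(g.max() <= tol)

print(in_J2plus_heaviside(1.0, -5.0))   # p > 0: any X works
print(in_J2plus_heaviside(0.0, 2.0))    # p = 0 requires X >= 0
print(in_J2plus_heaviside(0.0, -2.0))
print(in_J2plus_heaviside(-1.0, 10.0))  # p < 0 fails
```

The four cases reproduce ({0} × [0, ∞)) ∪ ((0, ∞) × R): the jump makes every pair admissible from the left, so only the behavior for y > 0 matters.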
[Fig 2.4.1: graph of y = u(x) with a test function y = φ(x) and the line y = x + 1.]
In order to deal with “boundary value problems” in section 5, we prepare
some notations: For a set K ⊂ R
n
, which is not necessarily open, we deﬁne
[Fig 2.4.2: graph of y = u(x) with a test function y = φ(x) and lines y = ax + 1 (|a| < 1).]
[Fig 2.5: graph of the Heaviside function y = u(x) with a test function y = φ(x) and lines y = ax + 1 (a > 0).]
semi-jets of u : K → R at x ∈ K by

  J^{2,+}_K u(x) := { (p, X) ∈ R^n × S^n | u(y) ≤ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as y ∈ K → x },

  J^{2,−}_K u(x) := { (p, X) ∈ R^n × S^n | u(y) ≥ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as y ∈ K → x },

and

  J̄^{2,±}_K u(x) := { (p, X) ∈ R^n × S^n | ∃x_k ∈ K and ∃(p_k, X_k) ∈ J^{2,±}_K u(x_k) such that (x_k, u(x_k), p_k, X_k) → (x, u(x), p, X) as k → ∞ }.
Remark. It is easy to verify that

  x ∈ Ω ⟹ J^{2,±}_Ω u(x) = J^{2,±}_Ω̄ u(x) and J̄^{2,±}_Ω u(x) = J̄^{2,±}_Ω̄ u(x).

For x ∈ Ω, we shall simply write J^{2,±}u(x) (resp., J̄^{2,±}u(x)) for J^{2,±}_Ω u(x) = J^{2,±}_Ω̄ u(x) (resp., J̄^{2,±}_Ω u(x) = J̄^{2,±}_Ω̄ u(x)).
Example. Consider u(x) ≡ 0 in K := [0, 1]. It is easy to observe that

  J^{2,+}u(x) = J^{2,+}_K u(x) = {0} × [0, ∞) provided x ∈ (0, 1).

It is also easy to verify that

  J^{2,+}_K u(0) = ({0} × [0, ∞)) ∪ ((0, ∞) × R)

and

  J^{2,−}_K u(0) = ({0} × (−∞, 0]) ∪ ((−∞, 0) × R).
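The one-sided nature of J^{2,+}_K u(0) can be checked numerically just as in the earlier example; a small sketch (the scales and the tolerance are our own choices), using that only y > 0 is allowed when approaching the left endpoint of K:

```python
import numpy as np

def in_J2plus_K(p, X, tol=1e-8):
    """Approximate test of (p, X) in J^{2,+}_K u(0) for u = 0 on K = [0, 1]:
    we need 0 <= p*y + X*y^2/2 + o(y^2) as y -> 0 with y in K, i.e. y > 0 only."""
    y = np.geomspace(1e-9, 1e-3, 200)       # one-sided approach from inside K
    g = -(p * y + 0.5 * X * y**2) / y**2    # the limsup of g must be <= 0
    return bool(g[:100].max() <= tol)       # the smallest scales approximate the limsup

print(in_J2plus_K(2.0, -100.0))  # p > 0: any X is admissible
print(in_J2plus_K(0.0, 3.0))     # p = 0 needs X >= 0
print(in_J2plus_K(0.0, -3.0))
print(in_J2plus_K(-2.0, 100.0))  # p < 0 fails
```

The outcome matches ({0} × [0, ∞)) ∪ ((0, ∞) × R): the boundary point admits strictly positive slopes with arbitrary second-order terms.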
We finally give some properties of J^{2,±}_Ω and J̄^{2,±}_Ω. Since the proofs are easy, we omit them.
Proposition 2.7. For u : Ω → R, ψ ∈ C²(Ω) and x ∈ Ω, we have

  J^{2,±}_Ω(u + ψ)(x) = (Dψ(x), D²ψ(x)) + J^{2,±}_Ω u(x)

and

  J̄^{2,±}_Ω(u + ψ)(x) = (Dψ(x), D²ψ(x)) + J̄^{2,±}_Ω u(x).
3 Comparison principle
In this section, we discuss the comparison principle, which implies the uniqueness of viscosity solutions when their values on ∂Ω coincide (i.e. under the Dirichlet boundary condition). In the study of the viscosity solution theory, the comparison principle has been the main issue because the uniqueness of viscosity solutions is harder to prove than their existence and stability.
First, we recall some "classical" comparison principles and then show how to modify the proof to a modern "viscosity" version.
In this section, the comparison principle roughly means the following:
In this section, the comparison principle roughly means that
“Comparison principle”
viscosity subsolution u
viscosity supersolution v
u ≤ v on ∂Ω
_
¸
_
¸
_
=⇒ u ≤ v in Ω
Modifying our proofs of the comparison theorems below, we obtain a slightly stronger assertion than the above one:

  u is a viscosity subsolution and v is a viscosity supersolution ⟹ max_Ω(u − v) = max_∂Ω(u − v).
We remark that the comparison principle implies the uniqueness of (continuous) viscosity solutions under the Dirichlet boundary condition:

"Uniqueness for the Dirichlet problem"

  u and v are viscosity solutions and u = v on ∂Ω ⟹ u = v in Ω.

Proof of "the comparison principle implies the uniqueness". Since u (resp., v) is a viscosity subsolution and v (resp., u) a viscosity supersolution, and u = v on ∂Ω, the comparison principle yields u ≤ v (resp., v ≤ u) in Ω. □
In this section, we mainly deal with the following PDE instead of (2.6):

  νu + F(x, Du, D²u) = 0 in Ω,   (3.1)

where we suppose that

  ν ≥ 0,   (3.2)

and

  F : Ω × R^n × S^n → R is continuous.   (3.3)
3.1 Classical comparison principle
In this subsection, we show that if one of the viscosity sub- and supersolutions is a classical one, then the comparison principle holds true. We call this the "classical" comparison principle.
3.1.1 Degenerate elliptic PDEs
We first consider the case when F is (degenerate) elliptic and ν > 0.
Proposition 3.1. Assume that ν > 0 and (3.3) hold. Assume also that F is elliptic. Let u ∈ USC(Ω) (resp., v ∈ LSC(Ω)) be a viscosity subsolution (resp., supersolution) of (3.1) and let v ∈ LSC(Ω) ∩ C²(Ω) (resp., u ∈ USC(Ω) ∩ C²(Ω)) be a classical supersolution (resp., subsolution) of (3.1).
If u ≤ v on ∂Ω, then u ≤ v in Ω.
Proof. We only prove the assertion when u is a viscosity subsolution of (3.1), since the other case can be shown similarly.
Set max_Ω(u − v) =: θ and choose x̂ ∈ Ω such that (u − v)(x̂) = θ. Suppose that θ > 0; then we will get a contradiction. We note that x̂ ∈ Ω because u ≤ v on ∂Ω.
Thus, the definitions of u and v respectively yield

  νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) ≤ 0 ≤ νv(x̂) + F(x̂, Dv(x̂), D²v(x̂)).

Hence, by these inequalities, we have

  νθ = ν(u − v)(x̂) ≤ 0,

which contradicts θ > 0. □
3.1.2 Uniformly elliptic PDEs
Next, we present the comparison principle when ν = 0 but F is uniformly elliptic in the following sense. Notice that if ν > 0 and F is uniformly elliptic, then Proposition 3.1 yields Proposition 3.3 below because our uniform ellipticity implies (degenerate) ellipticity.
Throughout this book, we freeze the "uniform ellipticity" constants:

  0 < λ ≤ Λ.
With these constants, we introduce Pucci's operators: for X ∈ S^n,

  T^+(X) := max{ −trace(AX) | λI ≤ A ≤ ΛI, A ∈ S^n },
  T^−(X) := min{ −trace(AX) | λI ≤ A ≤ ΛI, A ∈ S^n }.
We give some properties of T^±. We omit the proof since it is elementary.
Proposition 3.2. For X, Y ∈ S^n, we have the following:
(1) T^+(X) = −T^−(−X),
(2) T^±(θX) = θT^±(X) for θ ≥ 0,
(3) T^+ is convex and T^− is concave,
(4) T^−(X) + T^−(Y) ≤ T^−(X + Y) ≤ T^−(X) + T^+(Y) ≤ T^+(X + Y) ≤ T^+(X) + T^+(Y).
Definition. We say that F : Ω × R^n × S^n → R is uniformly elliptic (with the uniform ellipticity constants 0 < λ ≤ Λ) if

  T^−(X − Y) ≤ F(x, p, X) − F(x, p, Y) ≤ T^+(X − Y)

for x ∈ Ω, p ∈ R^n, and X, Y ∈ S^n.
We also suppose the following continuity of F with respect to p ∈ R^n: there is µ > 0 such that

  |F(x, p, X) − F(x, p′, X)| ≤ µ|p − p′|   (3.4)

for x ∈ Ω, p, p′ ∈ R^n, and X ∈ S^n.
Proposition 3.3. Assume that (3.2), (3.3) and (3.4) hold. Assume also that F is uniformly elliptic. Let u ∈ USC(Ω) (resp., v ∈ LSC(Ω)) be a viscosity subsolution (resp., supersolution) of (3.1) and let v ∈ LSC(Ω) ∩ C²(Ω) (resp., u ∈ USC(Ω) ∩ C²(Ω)) be a classical supersolution (resp., subsolution) of (3.1).
If u ≤ v on ∂Ω, then u ≤ v in Ω.
Proof. We give a proof only when u is a viscosity subsolution and v a classical supersolution of (3.1).
Suppose that max_Ω(u − v) =: θ > 0. Then, we will get a contradiction again.
For ε > 0, we set φ_ε(x) := εe^{δx_1}, where δ := max{(µ + 1)/λ, ν + 1} > 0. We next choose ε > 0 so small that

  ε max_{x∈Ω} e^{δx_1} ≤ θ/2.

Let x̂ ∈ Ω be a point such that (u − v + φ_ε)(x̂) = max_Ω(u − v + φ_ε) ≥ θ. By this choice of ε > 0, since u ≤ v on ∂Ω, we see that x̂ ∈ Ω.
From the definition of viscosity subsolutions, we have

  νu(x̂) + F(x̂, D(v − φ_ε)(x̂), D²(v − φ_ε)(x̂)) ≤ 0.
By the uniform ellipticity and (3.4), we have

  νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) + T^−(−D²φ_ε(x̂)) − µ|Dφ_ε(x̂)| ≤ 0.

Noting that |Dφ_ε(x̂)| ≤ δεe^{δx̂_1} and T^−(−D²φ_ε(x̂)) ≥ δ²ελe^{δx̂_1}, we have

  νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) + δε(λδ − µ)e^{δx̂_1} ≤ 0.   (3.5)
Since v is a classical supersolution of (3.1), by (3.5) and δ ≥ (µ + 1)/λ, we have

  ν(u − v)(x̂) + δεe^{δx̂_1} ≤ 0.

Hence, we have

  ν(θ − φ_ε(x̂)) ≤ −δεe^{δx̂_1},

which gives a contradiction because δ ≥ ν + 1. □
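The two elementary bounds used above, on |Dφ_ε| and on T^−(−D²φ_ε), can be confirmed numerically; a small sketch with illustrative values of λ, Λ, δ, ε (our own choices), computing T^− through the eigenvalue form of its definition:

```python
import numpy as np

lam, Lam = 0.5, 2.0          # illustrative ellipticity constants
n, delta, eps = 3, 4.0, 0.1  # illustrative dimension and parameters
x = np.array([0.3, -0.2, 0.5])

# phi_eps(x) = eps * exp(delta * x_1): only the x_1-derivatives are nonzero
grad = np.zeros(n); grad[0] = eps * delta * np.exp(delta * x[0])
hess = np.zeros((n, n)); hess[0, 0] = eps * delta**2 * np.exp(delta * x[0])

def pucci_minus(X):
    """T^-(X) = min{-trace(AX) : lam*I <= A <= Lam*I}, via the eigenvalues of X."""
    e = np.linalg.eigvalsh(X)
    return -(Lam * e[e > 0].sum() + lam * e[e < 0].sum())

print(np.isclose(np.linalg.norm(grad), eps * delta * np.exp(delta * x[0])))
print(np.isclose(pucci_minus(-hess), lam * eps * delta**2 * np.exp(delta * x[0])))
```

Both bounds in the proof actually hold with equality for this barrier, since D²φ_ε has a single nonzero (and positive) eigenvalue.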
3.2 Comparison principle for first-order PDEs
In this subsection, without assuming that one of the viscosity sub- and supersolutions is a classical one, we establish the comparison principle when F in (3.1) does not depend on D²u, i.e. for first-order PDEs. We will study the comparison principle for second-order ones in the next subsection.
In the viscosity solution theory, Theorem 3.4 below was the first surprising result.
Here, instead of (3.1), we shall consider the following PDE:

  νu + H(x, Du) = 0 in Ω.   (3.6)
We shall suppose that

  ν > 0,   (3.7)

and that there is a continuous function ω_H : [0, ∞) → [0, ∞) such that ω_H(0) = 0 and

  |H(x, p) − H(y, p)| ≤ ω_H(|x − y|(1 + |p|)) for x, y ∈ Ω and p ∈ R^n.   (3.8)

In what follows, we will call ω_H in (3.8) a modulus of continuity. For notational simplicity, we set

  ℳ := { ω : [0, ∞) → [0, ∞) | ω(·) is continuous and ω(0) = 0 }.
Theorem 3.4. Assume that (3.7) and (3.8) hold. Let u ∈ USC(Ω) and
v ∈ LSC(Ω) be a viscosity sub and supersolution of (3.6), respectively.
If u ≤ v on ∂Ω, then u ≤ v in Ω.
Proof. Suppose that max_Ω(u − v) =: θ > 0 as usual. Then, we will get a contradiction.
Notice that since both u and v may fail to be differentiable, we cannot use the same argument as in Proposition 3.1.
Now, we present the most important idea in the theory of viscosity solutions to overcome this difficulty.
Setting Φ_ε(x, y) := u(x) − v(y) − (2ε)^{−1}|x − y|² for ε > 0, we choose (x_ε, y_ε) ∈ Ω × Ω such that

  Φ_ε(x_ε, y_ε) = max_{x,y∈Ω} Φ_ε(x, y).

Noting that Φ_ε(x_ε, y_ε) ≥ max_{x∈Ω} Φ_ε(x, x) = θ, we have

  |x_ε − y_ε|²/(2ε) ≤ u(x_ε) − v(y_ε) − θ.   (3.9)
Since Ω is compact, we can find x̂, ŷ ∈ Ω and ε_k > 0 such that lim_{k→∞} ε_k = 0 and lim_{k→∞}(x_{ε_k}, y_{ε_k}) = (x̂, ŷ). We shall simply write ε for ε_k (i.e. in what follows, "ε → 0" means that ε_k → 0 as k → ∞).
Setting M := max_Ω u − min_Ω v, by (3.9), we have

  |x_ε − y_ε|² ≤ 2εM → 0 (as ε → 0).
Thus, we have x̂ = ŷ.
Since (3.9) again implies

  0 ≤ liminf_{ε→0} |x_ε − y_ε|²/(2ε) ≤ limsup_{ε→0} |x_ε − y_ε|²/(2ε) ≤ limsup_{ε→0}(u(x_ε) − v(y_ε)) − θ ≤ (u − v)(x̂) − θ ≤ 0,

we have

  lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.10)

Moreover, since (u − v)(x̂) = θ > 0, we have x̂ ∈ Ω from the assumption u ≤ v on ∂Ω. Thus, for small ε > 0, we may suppose that (x_ε, y_ε) ∈ Ω × Ω.
Furthermore, ignoring the left-hand side in (3.9), we have

  θ ≤ liminf_{ε→0}(u(x_ε) − v(y_ε)).   (3.11)

Taking φ(x) := v(y_ε) + (2ε)^{−1}|x − y_ε|², we see that u − φ attains its maximum at x_ε ∈ Ω. Hence, from the definition of viscosity subsolutions, we have

  νu(x_ε) + H(x_ε, (x_ε − y_ε)/ε) ≤ 0.
On the other hand, taking ψ(y) := u(x_ε) − (2ε)^{−1}|y − x_ε|², we see that v − ψ attains its minimum at y_ε ∈ Ω. Thus, from the definition of viscosity supersolutions, we have

  νv(y_ε) + H(y_ε, (x_ε − y_ε)/ε) ≥ 0.

The above two inequalities yield

  ν(u(x_ε) − v(y_ε)) ≤ ω_H(|x_ε − y_ε| + |x_ε − y_ε|²/ε).

Sending ε → 0 in the above together with (3.10) and (3.11), we have νθ ≤ 0, which is a contradiction. □
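The doubling of variables device above can be watched numerically. The sketch below (the functions u, v, the interval, and the grid are our own illustrative choices, not taken from the text) maximizes Φ_ε on a grid and observes the penalization term |x_ε − y_ε|²/ε vanishing as ε → 0, as in (3.10):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 801)
u = np.sin(3.0 * x)   # a sample "subsolution-like" function
v = x**2              # a sample "supersolution-like" function

pens = []
for eps in (1e-1, 1e-2, 1e-3):
    # maximize Phi_eps(x, y) = u(x) - v(y) - |x - y|^2 / (2 eps) on the grid
    Phi = u[:, None] - v[None, :] - (x[:, None] - x[None, :])**2 / (2.0 * eps)
    i, j = np.unravel_index(np.argmax(Phi), Phi.shape)
    pens.append((x[i] - x[j])**2 / eps)
    print(eps, x[i], x[j], pens[-1])   # maximizers merge, penalty shrinks
```

As ε → 0, the maximizers x_ε and y_ε approach the maximum point of u − v and the penalty |x_ε − y_ε|²/ε tends to 0, which is exactly what the proof exploits.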
Remark. In the above proof, we could show that lim_{ε→0} u(x_ε) = u(x̂) and lim_{ε→0} v(y_ε) = v(x̂), although we do not need this fact. In fact, by (3.9), we have

  v(y_ε) ≤ u(x_ε) − θ,

which implies that

  v(x̂) ≤ liminf_{ε→0} v(y_ε) ≤ liminf_{ε→0} u(x_ε) − θ ≤ limsup_{ε→0} u(x_ε) − θ ≤ u(x̂) − θ

and

  v(x̂) ≤ liminf_{ε→0} v(y_ε) ≤ limsup_{ε→0} v(y_ε) ≤ limsup_{ε→0} u(x_ε) − θ ≤ u(x̂) − θ.

Hence, since all the inequalities become equalities, we have

  u(x̂) = liminf_{ε→0} u(x_ε) = limsup_{ε→0} u(x_ε) and v(x̂) = liminf_{ε→0} v(y_ε) = limsup_{ε→0} v(y_ε).
We remark here that we cannot apply Theorem 3.4 to the eikonal equation (2.1) because we have to suppose ν > 0 in the above proof.
We shall modify the above proof so that the comparison principle for viscosity solutions of (2.1) holds.
To simplify our hypotheses, we shall consider the following PDE:

  H(x, Du) − f(x) = 0 in Ω.   (3.12)

Here, we suppose that H is homogeneous of degree α > 0 with respect to the second variable; that is, there is α > 0 such that

  H(x, µp) = µ^α H(x, p) for x ∈ Ω, p ∈ R^n and µ > 0.   (3.13)

To recover the lack of the assumption ν > 0, we suppose the positivity of f ∈ C(Ω): there is σ > 0 such that

  min_{x∈Ω} f(x) =: σ > 0.   (3.14)
Example. When H(x, p) = |p|² (i.e. α = 2) and f(x) ≡ 1 (i.e. σ = 1), equation (3.12) becomes (2.1).
The second comparison principle for first-order PDEs is as follows:
Theorem 3.5. Assume that (3.8), (3.13) and (3.14) hold. Let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub- and supersolution of (3.12), respectively.
If u ≤ v on ∂Ω, then u ≤ v in Ω.
Proof. Suppose that max_Ω(u − v) =: θ > 0 as usual. Then, we will get a contradiction.
If we choose µ ∈ (0, 1) so that

  (1 − µ) max_Ω u ≤ θ/2,

then we easily verify that

  max_Ω(µu − v) =: τ ≥ θ/2.

We note that for any z ∈ Ω such that (µu − v)(z) = τ, we may suppose z ∈ Ω. In fact, otherwise (i.e. z ∈ ∂Ω), if we further suppose that µ < 1 is close to 1 so that −(1 − µ) min_∂Ω v ≤ θ/4, then the assumption (u ≤ v on ∂Ω) implies

  θ/2 ≤ τ = µu(z) − v(z) ≤ (µ − 1)v(z) ≤ θ/4,

which is a contradiction. For simplicity, we shall omit writing the dependence on µ for τ and (x_ε, y_ε) below.
At this stage, we shall use the idea in the proof of Theorem 3.4: consider the mapping Φ_ε : Ω × Ω → R defined by

  Φ_ε(x, y) := µu(x) − v(y) − |x − y|²/(2ε).

Choose (x_ε, y_ε) ∈ Ω × Ω such that max_{x,y∈Ω} Φ_ε(x, y) = Φ_ε(x_ε, y_ε). Note that Φ_ε(x_ε, y_ε) ≥ τ ≥ θ/2.
As in the proof of Theorem 3.4, we may suppose that lim_{ε→0}(x_ε, y_ε) = (x̂, ŷ) for some (x̂, ŷ) ∈ Ω × Ω (by taking a subsequence if necessary). Also, we easily see that

  |x_ε − y_ε|²/(2ε) ≤ µu(x_ε) − v(y_ε) − τ ≤ M_µ := µ max_Ω u − min_Ω v.   (3.15)

Thus, sending ε → 0, we have x̂ = ŷ. Hence, (3.15) implies that µu(x̂) − v(x̂) = τ, which yields x̂ ∈ Ω because of the choice of µ. Thus, we see that (x_ε, y_ε) ∈ Ω × Ω for small ε > 0.
Moreover, (3.15) again implies

  lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.16)
Now, taking φ(x) := (v(y_ε) + (2ε)^{−1}|x − y_ε|²)/µ, we see that u − φ attains its maximum at x_ε ∈ Ω. Thus, we have

  H(x_ε, (x_ε − y_ε)/(µε)) ≤ f(x_ε).

Hence, by (3.13), we have

  H(x_ε, (x_ε − y_ε)/ε) ≤ µ^α f(x_ε).   (3.17)
On the other hand, taking ψ(y) := µu(x_ε) − (2ε)^{−1}|y − x_ε|², we see that v − ψ attains its minimum at y_ε ∈ Ω. Thus, we have

  H(y_ε, (x_ε − y_ε)/ε) ≥ f(y_ε).   (3.18)
Combining (3.18) with (3.17), we have

  f(y_ε) − µ^α f(x_ε) ≤ H(y_ε, (x_ε − y_ε)/ε) − H(x_ε, (x_ε − y_ε)/ε) ≤ ω_H(|x_ε − y_ε|(1 + |x_ε − y_ε|/ε)).

Sending ε → 0 in the above with (3.16), we have

  (1 − µ^α)f(x̂) ≤ 0,

which contradicts (3.14). □
3.3 Extension to second-order PDEs
In this subsection, assuming a key lemma, we will present the comparison principle for fully nonlinear, second-order, (degenerate) elliptic PDEs (3.1).
We first remark that the argument of the proof of the comparison principle for first-order PDEs cannot be applied, at least not immediately.
Let us have a look at the difficulty. Consider the following simple PDE:

  νu − Δu = 0,   (3.19)

where ν > 0. As one can guess, if the argument does not work for this "easiest" PDE, then it must be hopeless for general PDEs.
However, we emphasize that the same argument as in the proof of Theorem 3.4 does not work. In fact, let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub- and supersolution of (3.19), respectively, such that u ≤ v on ∂Ω. Setting Φ_ε(x, y) := u(x) − v(y) − (2ε)^{−1}|x − y|² as usual, we choose (x_ε, y_ε) ∈ Ω × Ω so that max_{x,y∈Ω} Φ_ε(x, y) = Φ_ε(x_ε, y_ε) > 0 as before.
We may suppose that (x_ε, y_ε) ∈ Ω × Ω converges to (x̂, x̂) (as ε → 0) for some x̂ ∈ Ω such that (u − v)(x̂) > 0. From the definitions of u and v, we have

  νu(x_ε) − n/ε ≤ 0 ≤ νv(y_ε) + n/ε.

Hence, we only have

  ν(u(x_ε) − v(y_ε)) ≤ 2n/ε,

which does not give any contradiction as ε → 0.
How can we go beyond this difficulty?
In 1983, P.-L. Lions first obtained the uniqueness of viscosity solutions for elliptic PDEs arising in stochastic optimal control problems (i.e. Bellman equations; F is convex in (Du, D²u)). However, his argument heavily depends on the stochastic representation of viscosity solutions as "value functions". Moreover, it seems hard to extend the result to Isaacs equations, where F is fully nonlinear.
The breakthrough was made by Jensen in 1988 in the case when the coefficients of the second derivatives of the PDE are constant. His argument relies purely on "real analysis" and works even for fully nonlinear PDEs.
Then, Ishii in 1989 extended Jensen's result so that it applies to elliptic PDEs with variable coefficients. We present here the so-called Ishii's lemma, which will be proved in the Appendix.
Lemma 3.6. (Ishii's lemma) Let u and w be in USC(Ω). For φ ∈ C²(Ω × Ω), let (x̂, ŷ) ∈ Ω × Ω be a point such that

  max_{x,y∈Ω}(u(x) + w(y) − φ(x, y)) = u(x̂) + w(ŷ) − φ(x̂, ŷ).

Then, for each µ > 1, there are X = X(µ), Y = Y(µ) ∈ S^n such that

  (D_x φ(x̂, ŷ), X) ∈ J̄^{2,+}_Ω u(x̂),  (D_y φ(x̂, ŷ), Y) ∈ J̄^{2,+}_Ω w(ŷ),

and

  −(µ + ‖A‖) ( I 0 ; 0 I ) ≤ ( X 0 ; 0 Y ) ≤ A + (1/µ)A²,

where A := D²φ(x̂, ŷ) ∈ S^{2n}.
Remark. We note that if we suppose u, w ∈ C²(Ω) and (x̂, ŷ) ∈ Ω × Ω in the hypothesis, then we easily have

  X = D²u(x̂), Y = D²w(ŷ), and ( X 0 ; 0 Y ) ≤ A.

Thus, the last matrix inequality means that when u and w are merely continuous, we get some error term µ^{−1}A², where µ > 1 will be large.
We also note that for φ(x, y) := |x − y|²/(2ε), we have

  A := D²φ(x̂, ŷ) = (1/ε) ( I −I ; −I I ) and ‖A‖ = 2/ε.   (3.20)
For the last identity, since

  ‖A‖² = sup{ ⟨Az, Az⟩ | z = ᵗ(x, y), |x|² + |y|² = 1 },

the triangle inequality yields ‖A‖² = 2ε^{−2} sup{ |x − y|² | |x|² + |y|² = 1 } ≤ 4/ε². On the other hand, taking x = −y (i.e. |x|² = 1/2) in the supremum in the definition of ‖A‖² above, we have ‖A‖² ≥ 4/ε².
Remark. Another way to show the above identity is to use the fact that, in general, for B ∈ S^n,

  ‖B‖ = max{ |λ_k| : λ_k is an eigenvalue of B }.
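Both computations of ‖A‖ in (3.20) are easy to confirm numerically; a quick sketch (our own illustration; the values of n and ε are arbitrary), comparing the operator norm with the eigenvalue formula and with 2/ε:

```python
import numpy as np

def doubling_matrix(n, eps):
    """A = (1/eps) * [[I, -I], [-I, I]] in S^{2n}, as in (3.20)."""
    I = np.eye(n)
    return np.block([[I, -I], [-I, I]]) / eps

for n in (1, 3):
    for eps in (0.5, 0.1):
        A = doubling_matrix(n, eps)
        op_norm = np.linalg.norm(A, 2)                    # largest singular value
        eig_norm = np.max(np.abs(np.linalg.eigvalsh(A)))  # eigenvalue formula above
        print(n, eps, op_norm, eig_norm, 2.0 / eps)       # the three values agree
```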
3.3.1 Degenerate elliptic PDEs
Now, we give our hypothesis on F, which is called the structure condition.
Structure condition: there is an ω_F ∈ ℳ such that if X, Y ∈ S^n and µ > 1 satisfy

  −3µ ( I 0 ; 0 I ) ≤ ( X 0 ; 0 −Y ) ≤ 3µ ( I −I ; −I I ),

then

  F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ ω_F(|x − y|(1 + µ|x − y|)) for x, y ∈ Ω.   (3.21)
In section 3.3.2, we will see that if F satisﬁes (3.21), then it is elliptic.
We ﬁrst prove the comparison principle when (3.21) holds for F using
this lemma. Afterward, we will explain why assumption (3.21) is reasonable.
Theorem 3.7. Assume that ν > 0 and (3.21) hold. Let u ∈ USC(Ω)
and v ∈ LSC(Ω) be a viscosity sub and supersolution of (3.1), respectively.
If u ≤ v on ∂Ω, then u ≤ v in Ω.
Proof. Suppose that max_Ω(u − v) =: θ > 0 as usual. Then, we will get a contradiction.
Again, for ε > 0, consider the mapping Φ_ε : Ω × Ω → R defined by

  Φ_ε(x, y) := u(x) − v(y) − (2ε)^{−1}|x − y|².

Let (x_ε, y_ε) ∈ Ω × Ω be a point such that max_{x,y∈Ω} Φ_ε(x, y) = Φ_ε(x_ε, y_ε) ≥ θ. As in the proof of Theorem 3.4, we may suppose that lim_{ε→0}(x_ε, y_ε) = (x̂, x̂) for some x̂ ∈ Ω (i.e. x_ε, y_ε ∈ Ω for small ε > 0). Moreover, since (u − v)(x̂) = θ, we have

  lim_{ε→0} |x_ε − y_ε|²/ε = 0   (3.22)

and

  θ ≤ liminf_{ε→0}(u(x_ε) − v(y_ε)).   (3.23)
In view of Lemma 3.6 (taking w := −v, µ := 1/ε and φ(x, y) := |x − y|²/(2ε)) and its Remark, we find X, Y ∈ S^n such that

  ((x_ε − y_ε)/ε, X) ∈ J̄^{2,+}u(x_ε),  ((x_ε − y_ε)/ε, Y) ∈ J̄^{2,−}v(y_ε),

and

  −(3/ε) ( I 0 ; 0 I ) ≤ ( X 0 ; 0 −Y ) ≤ (3/ε) ( I −I ; −I I ).
Thus, the equivalent definition in Proposition 2.6 implies that

  νu(x_ε) + F(x_ε, (x_ε − y_ε)/ε, X) ≤ 0 ≤ νv(y_ε) + F(y_ε, (x_ε − y_ε)/ε, Y).

Hence, by virtue of our assumption (3.21), we have

  ν(u(x_ε) − v(y_ε)) ≤ ω_F(|x_ε − y_ε| + |x_ε − y_ε|²/ε).   (3.24)

Taking the limit infimum as ε → 0, together with (3.22) and (3.23) in the above, we have νθ ≤ 0, which is a contradiction. □
3.3.2 Remarks on the structure condition
In order to ensure that assumption (3.21) is reasonable, we first present some examples. For this purpose, we consider the Isaacs equation as in section 1.2.2:

  F(x, p, X) := sup_{a∈A} inf_{b∈B} { L^{a,b}(x, p, X) − f(x, a, b) },

where

  L^{a,b}(x, p, X) := −trace(A(x, a, b)X) + ⟨g(x, a, b), p⟩ for (a, b) ∈ A × B.
If we suppose that A and B are compact sets in R^m (for some m ≥ 1), and that the coefficients in the above and f(·, a, b) satisfy the following hypotheses, then F satisfies (3.21):
(1) there are M_1 > 0 and σ_{ik}(·, a, b) : Ω → R such that A_{ij}(x, a, b) = Σ_{k=1}^m σ_{ik}(x, a, b)σ_{jk}(x, a, b) and |σ_{jk}(x, a, b) − σ_{jk}(y, a, b)| ≤ M_1|x − y| for x, y ∈ Ω, i, j = 1, …, n, k = 1, …, m, a ∈ A, b ∈ B;
(2) there is M_2 > 0 such that |g_i(x, a, b) − g_i(y, a, b)| ≤ M_2|x − y| for x, y ∈ Ω, i = 1, …, n, a ∈ A, b ∈ B;
(3) there is ω_f ∈ ℳ such that |f(x, a, b) − f(y, a, b)| ≤ ω_f(|x − y|) for x, y ∈ Ω, a ∈ A, b ∈ B.
We shall show (3.21) only when

  F(x, p, X) := −Σ_{i,j=1}^n Σ_{k=1}^m σ_{ik}(x, a, b)σ_{jk}(x, a, b)X_{ij}

for a fixed (a, b) ∈ A × B, because we can modify the proof below to handle general F. Thus, we shall omit writing the indices a and b.
To verify assumption (3.21), we choose X, Y ∈ S^n such that

  ( X 0 ; 0 −Y ) ≤ 3µ ( I −I ; −I I ).

Setting ξ_k := ᵗ(σ_{1k}(x), …, σ_{nk}(x)) and η_k := ᵗ(σ_{1k}(y), …, σ_{nk}(y)) for any fixed k ∈ {1, 2, …, m}, we have

  ⟨ ( X 0 ; 0 −Y ) ᵗ(ξ_k, η_k), ᵗ(ξ_k, η_k) ⟩ ≤ 3µ ⟨ ( I −I ; −I I ) ᵗ(ξ_k, η_k), ᵗ(ξ_k, η_k) ⟩ = 3µ|ξ_k − η_k|² ≤ 3µnM_1²|x − y|².
Therefore, taking the summation over k ∈ {1, …, m}, we have

  F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ Σ_{i,j=1}^n (−A_{ij}(y)Y_{ij} + A_{ij}(x)X_{ij}) = Σ_{k=1}^m (−⟨Yη_k, η_k⟩ + ⟨Xξ_k, ξ_k⟩) ≤ 3µmnM_1²|x − y|². □
We next give other reasons why (3.21) is a suitable assumption. The reader may skip the proof of the following proposition if he/she feels that the above reason is enough to adopt (3.21).
Proposition 3.8. (1) (3.21) implies ellipticity.
(2) Assume that F is uniformly elliptic. If ω̄ ∈ ℳ satisfies sup_{r≥0} ω̄(r)/(r + 1) < ∞ and

  |F(x, p, X) − F(y, p, X)| ≤ ω̄(|x − y|(‖X‖ + |p| + 1))   (3.25)

for x, y ∈ Ω, p ∈ R^n, X ∈ S^n, then (3.21) holds for F.
Proof. For a proof of (1), we refer to Remark 3.4 in [6].
For the reader's convenience, we give a proof of (2), which is essentially used in a paper by Ishii–Lions (1990). Let X, Y ∈ S^n satisfy the matrix inequality in (3.21). Note that X ≤ Y (test the upper bound with vectors of the form ᵗ(ξ, ξ)).
Multiplying the last matrix inequality by ( −I −I ; −I I ) from both sides, we have

  ( X − Y  X + Y ; X + Y  X − Y ) ≤ 12µ ( 0 0 ; 0 I ).

Thus, multiplying by vectors ᵗ(ξ, sη) for s ∈ R and ξ, η ∈ R^n with |ξ| = |η| = 1, we see that

  0 ≤ (12µ − ⟨(X − Y)η, η⟩)s² − 2⟨(X + Y)ξ, η⟩s − ⟨(X − Y)ξ, ξ⟩.

Hence, we have

  |⟨(X + Y)ξ, η⟩|² ≤ |⟨(X − Y)ξ, ξ⟩| (12µ + |⟨(X − Y)η, η⟩|),

which implies

  ‖X + Y‖ ≤ ‖X − Y‖^{1/2}(12µ + ‖X − Y‖)^{1/2}.

Thus, we have

  ‖X‖ ≤ (1/2)(‖X − Y‖ + ‖X + Y‖) ≤ ‖X − Y‖^{1/2}(6µ + ‖X − Y‖)^{1/2}.
Since X ≤ Y (i.e. the eigenvalues of X − Y are nonpositive), we see that

  F(y, p, X) − F(y, p, Y) ≥ T^−(X − Y) ≥ λ‖X − Y‖.   (3.26)

For the last inequality, we recall the Remark after Lemma 3.6.
Since we may suppose that ω̄ is concave, for any fixed ε > 0, there is M_ε > 0 such that ω̄(r) ≤ ε + M_ε r and ω̄(r) = inf_{ε>0}(ε + M_ε r) for r ≥ 0. By (3.25) and (3.26), since ‖X‖ ≤ 3µ and ‖Y‖ ≤ 3µ, we have

  F(y, p, Y) − F(x, p, X) ≤ ε + M_ε|x − y|(|p| + 1) + sup_{0≤t≤6µ} ( M_ε|x − y| t^{1/2}(6µ + t)^{1/2} − λt ).
Noting that

  M_ε|x − y| t^{1/2}(6µ + t)^{1/2} − λt ≤ (3/λ) M_ε² µ |x − y|²,

we have

  F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ ε + M_ε|x − y|(µ|x − y| + 1) + 3λ^{−1}M_ε²µ|x − y|²,

which implies the assertion by taking the infimum over ε > 0. □
3.3.3 Uniformly elliptic PDEs
We shall give a comparison result corresponding to Proposition 3.3; F is
uniformly elliptic and ν ≥ 0.
Theorem 3.9. Assume that (3.2), (3.3), (3.4) and (3.21) hold. Assume
also that F is uniformly elliptic. Let u ∈ USC(Ω) and v ∈ LSC(Ω) be a
viscosity sub and supersolution of (3.1), respectively.
If u ≤ v on ∂Ω, then u ≤ v in Ω.
Remark. As in Proposition 3.3, we may suppose ν = 0.
Proof. Suppose that max_Ω(u − v) =: θ > 0.
Setting σ := (µ + 1)/λ, we choose δ > 0 so that

  δ max_{x∈Ω} e^{σx_1} ≤ θ/2.

We then set τ := max_{x∈Ω}(u(x) − v(x) + δe^{σx_1}) ≥ θ > 0.
Putting φ(x, y) := (2ε)^{−1}|x − y|² − δe^{σx_1}, we let (x_ε, y_ε) ∈ Ω × Ω be the maximum point of u(x) − v(y) − φ(x, y) over Ω × Ω.
By the compactness of Ω, we may suppose that (x_ε, y_ε) → (x̂, ŷ) ∈ Ω × Ω as ε → 0 (taking a subsequence if necessary). Since u(x_ε) − v(y_ε) ≥ φ(x_ε, y_ε), we have |x_ε − y_ε|² ≤ 2ε(max_Ω u − min_Ω v + 2^{−1}θ), and moreover, x̂ = ŷ. Hence, we have

  u(x̂) − v(x̂) + δe^{σx̂_1} ≥ τ,

which implies x̂ ∈ Ω because of our choice of δ. Thus, we may suppose that (x_ε, y_ε) ∈ Ω × Ω for small ε > 0. Moreover, as before, we see that

  lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.27)
Applying Lemma 3.6 to û(x) := u(x) + δe^{σx_1} and −v(y), we find X, Y ∈ S^n such that ((x_ε − y_ε)/ε, X) ∈ J̄^{2,+}û(x_ε), ((x_ε − y_ε)/ε, Y) ∈ J̄^{2,−}v(y_ε), and

  −(3/ε) ( I O ; O I ) ≤ ( X O ; O −Y ) ≤ (3/ε) ( I −I ; −I I ).

We shall simply write x and y for x_ε and y_ε, respectively.
Note that Proposition 2.7 implies

  ( (x − y)/ε − δσe^{σx_1}e_1, X − δσ²e^{σx_1}I_1 ) ∈ J̄^{2,+}u(x),

where e_1 := ᵗ(1, 0, …, 0) ∈ R^n and I_1 ∈ S^n is the matrix whose entries are all 0 except for the (1, 1) entry, which is 1.
Setting r := δσe^{σx_1}, from the definitions of u and v, we have

  0 ≤ F(y, (x − y)/ε, Y) − F(x, (x − y)/ε − re_1, X − σrI_1).

In view of the uniform ellipticity and (3.4), we have

  0 ≤ rµ + σrT^+(I_1) + F(y, (x − y)/ε, Y) − F(x, (x − y)/ε, X).
Hence, by (3.21) and the definition of T^+, we have

  0 ≤ r(µ − σλ) + ω_F(|x − y| + |x − y|²/ε),

which together with (3.27) yields 0 ≤ δσe^{σx̂_1}(µ − σλ). This is a contradiction because of our choice of σ > 0. □
4 Existence results
In this section, we present some existence results for viscosity solutions of second-order (degenerate) elliptic PDEs.
We first present a convenient existence result via Perron's method, which was established by Ishii in 1987.
Next, for Bellman and Isaacs equations, we give representation formulas for viscosity solutions. From the dynamic programming principle below, we will see how natural the definition of viscosity solutions is.
4.1 Perron's method
In order to introduce Perron's method, we need the notion of viscosity solutions for semicontinuous functions.
Definition. For any function u : Ω → R, we denote by u^* and u_* the upper and lower semicontinuous envelopes of u, respectively, which are defined by

  u^*(x) := lim_{ε→0} sup_{y∈B_ε(x)∩Ω} u(y) and u_*(x) := lim_{ε→0} inf_{y∈B_ε(x)∩Ω} u(y).
We give some elementary properties of u^* and u_* without proofs.
Proposition 4.1. For u : Ω → R, we have the following:
(1) u_*(x) ≤ u(x) ≤ u^*(x) for x ∈ Ω,
(2) u_*(x) = −(−u)^*(x) for x ∈ Ω,
(3) u^* (resp., u_*) is upper (resp., lower) semicontinuous in Ω, i.e.

  limsup_{y→x} u^*(y) ≤ u^*(x) (resp., liminf_{y→x} u_*(y) ≥ u_*(x)) for x ∈ Ω,

(4) if u is upper (resp., lower) semicontinuous in Ω, then u(x) = u^*(x) (resp., u(x) = u_*(x)) for x ∈ Ω.
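The envelopes are easy to approximate on a grid; a small sketch (our own illustration; the sample function and the value of ε are arbitrary, and a fixed small ε only approximates the limit ε → 0) for u equal to the Heaviside function on [−1, 1]:

```python
import numpy as np

def envelopes(x, u_vals, eps=0.01):
    """Grid approximations of u^* and u_* using sup/inf over a fixed small
    ball B_eps(x); exact only in the limit eps -> 0."""
    upper = np.array([u_vals[np.abs(x - xi) <= eps].max() for xi in x])
    lower = np.array([u_vals[np.abs(x - xi) <= eps].min() for xi in x])
    return upper, lower

x = np.linspace(-1.0, 1.0, 401)
u = np.where(x >= 0, 1.0, 0.0)          # the Heaviside function again
ustar, ulow = envelopes(x, u)
i0 = np.argmin(np.abs(x))               # index of the grid point x = 0
print(ustar[i0], ulow[i0])              # u^*(0) = 1.0 and u_*(0) = 0.0
```

At the jump point, u^*(0) = 1 > 0 = u_*(0), while away from the jump both envelopes agree with u, in accordance with Proposition 4.1 (4).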
With these notations, we give our definition of viscosity solutions of

  F(x, u, Du, D²u) = 0 in Ω.   (4.1)

Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (4.1) if u^* (resp., u_*) is a viscosity subsolution (resp., supersolution) of (4.1).
We call u : Ω → R a viscosity solution of (4.1) if it is both a viscosity sub- and supersolution of (4.1).
Remark. We note that we supposed viscosity sub- and supersolutions to be, respectively, upper and lower semicontinuous in our comparison principle in section 3. Adopting the above new definition, we may omit the semicontinuity assumption on viscosity sub- and supersolutions in Propositions 3.1, 3.3 and Theorems 3.4, 3.5, 3.7, 3.9. In what follows, we use the above definition.
Remark. We remark that the comparison principle Theorem 3.7 implies the continuity of viscosity solutions:

  "Continuity of viscosity solutions": if a viscosity solution u satisfies u^* = u_* on ∂Ω, then u ∈ C(Ω).

Proof of the continuity of u. Since u^* and u_* are, respectively, a viscosity subsolution and a viscosity supersolution, and u^* ≤ u_* on ∂Ω, Theorem 3.7 yields u^* ≤ u_* in Ω. Because u_* ≤ u ≤ u^* in Ω, we have u = u^* = u_* in Ω; that is, u ∈ C(Ω). □
We first show that the "pointwise" supremum (resp., infimum) of viscosity subsolutions (resp., supersolutions) is again a viscosity subsolution (resp., supersolution).
Theorem 4.2. Let 𝒮 be a nonempty set of upper (resp., lower) semicontinuous viscosity subsolutions (resp., supersolutions) of (4.1).
Set u(x) := sup_{v∈𝒮} v(x) (resp., u(x) := inf_{v∈𝒮} v(x)). If sup_{x∈K} |u(x)| < ∞ for every compact set K ⊂ Ω, then u is a viscosity subsolution (resp., supersolution) of (4.1).
Proof. We only give a proof for subsolutions since the other can be proved in a symmetric way.
For x̂ ∈ Ω, we suppose that 0 = (u^* − φ)(x̂) > (u^* − φ)(x) for x ∈ Ω \ {x̂} and φ ∈ C²(Ω). We shall show that

  F(x̂, φ(x̂), Dφ(x̂), D²φ(x̂)) ≤ 0.   (4.2)
Let r > 0 be such that B_{2r}(x̂) ⊂ Ω. We can find s > 0 such that

  max_{∂B_r(x̂)}(u^* − φ) ≤ −s.   (4.3)
We choose x_k ∈ B_r(x̂) such that lim_{k→∞} x_k = x̂, u^*(x̂) − k^{−1} ≤ u(x_k), and |φ(x_k) − φ(x̂)| < 1/k. Moreover, we select upper semicontinuous u_k ∈ 𝒮 such that u_k(x_k) + k^{−1} ≥ u(x_k).
By (4.3), for 3/k < s, we have

  max_{∂B_r(x̂)}(u_k − φ) < (u_k − φ)(x_k).
Thus, for large k > 3/s, there is y_k ∈ B_r(x̂) such that u_k − φ attains its maximum over B_r(x̂) at y_k. Hence, we have

  F(y_k, u_k(y_k), Dφ(y_k), D²φ(y_k)) ≤ 0.   (4.4)
Taking a subsequence if necessary, we may suppose z := lim
k→∞
y
k
. Since
(u
∗
−φ)(ˆ x) ≤ (u
k
−φ)(x
k
) +
3
k
≤ (u
k
−φ)(y
k
) +
3
k
≤ (u
∗
−φ)(y
k
) +
3
k
by the upper semicontinuity of u
∗
, we have
(u
∗
−φ)(ˆ x) ≤ (u
∗
−φ)(z),
which yields z = ˆ x, and moreover, lim
k→∞
u
k
(y
k
) = u
∗
(ˆ x) = φ(ˆ x). Therefore,
sending k → ∞ in (4.4), by the continuity of F, we obtain (4.2). 2
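A concrete check of Theorem 4.2 (an illustration, not from the text): for the eikonal equation $|u'| - 1 = 0$ on an interval, the viscosity subsolutions are exactly the 1-Lipschitz functions, so the theorem predicts that the pointwise maximum of finitely many 1-Lipschitz functions is again 1-Lipschitz. The sketch below verifies this on a grid; the two sample functions are arbitrary choices.

```python
# Illustration of Theorem 4.2 for the eikonal equation |u'| - 1 = 0 on (0, 1):
# subsolutions are the 1-Lipschitz functions, and the pointwise max of
# subsolutions should again be 1-Lipschitz.  The two functions below are
# arbitrary 1-Lipschitz "subsolutions" chosen for illustration.

def lipschitz_constant(vals, h):
    """Largest difference quotient of grid values with spacing h."""
    return max(abs(b - a) / h for a, b in zip(vals, vals[1:]))

n = 1000
h = 1.0 / n
xs = [i * h for i in range(n + 1)]

f = [abs(x - 0.3) for x in xs]          # 1-Lipschitz
g = [0.5 - abs(x - 0.7) for x in xs]    # 1-Lipschitz
sup_fg = [max(a, b) for a, b in zip(f, g)]

Lf = lipschitz_constant(f, h)
Lg = lipschitz_constant(g, h)
Lsup = lipschitz_constant(sup_fg, h)
print(Lf, Lg, Lsup)  # each is at most 1 (up to floating-point rounding)
```

The key point mirrors the proof: $|\max(f,g)(x) - \max(f,g)(y)| \le \max(|f(x)-f(y)|, |g(x)-g(y)|)$, so the supremum inherits the subsolution property.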
Our first existence result is as follows.

Theorem 4.3. Assume that $F$ is elliptic. Assume also that there are a viscosity subsolution $\xi \in \mathrm{USC}(\Omega) \cap L^\infty_{\mathrm{loc}}(\Omega)$ and a viscosity supersolution $\eta \in \mathrm{LSC}(\Omega) \cap L^\infty_{\mathrm{loc}}(\Omega)$ of (4.1) such that
\[ \xi \le \eta \quad \text{in } \Omega. \]
Then, $u(x) := \sup_{v \in \mathcal{S}} v(x)$ (resp., $\hat u(x) := \inf_{w \in \hat{\mathcal{S}}} w(x)$) is a viscosity solution of (4.1), where
\[ \mathcal{S} := \left\{ v \in \mathrm{USC}(\Omega) \;\middle|\; v \text{ is a viscosity subsolution of (4.1) such that } \xi \le v \le \eta \text{ in } \Omega \right\} \]
\[ \left( \text{resp., } \hat{\mathcal{S}} := \left\{ w \in \mathrm{LSC}(\Omega) \;\middle|\; w \text{ is a viscosity supersolution of (4.1) such that } \xi \le w \le \eta \text{ in } \Omega \right\} \right). \]
Sketch of proof. We only give a proof for $u$ since the other case can be shown in a symmetric way.

First of all, we notice that $\mathcal{S} \ne \emptyset$ since $\xi \in \mathcal{S}$.

Due to Theorem 4.2, we know that $u$ is a viscosity subsolution of (4.1). Thus, we only need to show that it is a viscosity supersolution of (4.1).

Assume that $u \in \mathrm{LSC}(\Omega)$. Assuming that $0 = (u - \phi)(\hat x) < (u - \phi)(x)$ for $x \in \Omega \setminus \{\hat x\}$ and $\phi \in C^2(\Omega)$, we shall show that
\[ F(\hat x, \phi(\hat x), D\phi(\hat x), D^2\phi(\hat x)) \ge 0. \]
Suppose that this conclusion fails; then there is $\theta > 0$ such that
\[ F(\hat x, \phi(\hat x), D\phi(\hat x), D^2\phi(\hat x)) \le -2\theta. \]
Hence, there is $r > 0$ such that
\[ F(x, \phi(x) + t, D\phi(x), D^2\phi(x)) \le -\theta \quad \text{for } x \in B_r(\hat x) \subset \Omega \text{ and } |t| \le r. \qquad (4.5) \]
First, we claim that $\phi(\hat x) < \eta(\hat x)$. Indeed, otherwise, since $\phi \le u \le \eta$ in $\Omega$, $\eta - \phi$ attains its minimum at $\hat x \in \Omega$.

[Fig 4.1: the graphs of $y = \xi(x)$, $y = u(x)$, $y = \eta(x)$ in $\Omega$, with the test function $y = \phi(x)$ touching $u$ from below at $\hat x$.]

Hence, from the definition of the supersolution $\eta$, we get a contradiction to (4.5) for $x = \hat x$ and $t = 0$.

We may suppose that $\xi(\hat x) < \eta(\hat x)$ since, otherwise, $\xi = \phi = \eta$ at $\hat x$. Setting $3\hat\tau := \eta(\hat x) - u(\hat x) > 0$, from the lower and upper semicontinuity of $\eta$ and $\xi$, respectively, we may choose $s \in (0, r]$ such that
\[ \xi(x) + \hat\tau \le \phi(x) + 2\hat\tau \le \eta(x) \quad \text{for } x \in B_{2s}(\hat x). \]
Moreover, we can choose $\varepsilon \in (0, s)$ and $\tau_0 \in (0, \min\{\hat\tau, r\})$ such that
\[ \phi(x) + 2\tau_0 \le u(x) \quad \text{for } x \in \overline{B}_{s+\varepsilon}(\hat x) \setminus B_{s-\varepsilon}(\hat x). \]
If we can define a function $w \in \mathcal{S}$ such that $w(\hat x) > u(\hat x)$, then we finish our proof because of the maximality of $u$ at each point.

Now, we set
\[ w(x) := \begin{cases} \max\{u(x), \phi(x) + \tau_0\} & \text{in } B_s(\hat x), \\ u(x) & \text{in } \Omega \setminus B_s(\hat x). \end{cases} \]

[Fig 4.2: the graph of $y = w(x)$, obtained by bumping $y = u(x)$ up to $y = \phi(x) + \tau_0$ near $\hat x$, staying between $y = \xi(x)$ and $y = \eta(x)$; the gap $\eta(\hat x) - u(\hat x)$ is $3\hat\tau$.]

It suffices to show that $w \in \mathcal{S}$. Because of our choice of $\tau_0, s > 0$, it is easy to see that $\xi \le w \le \eta$ in $\Omega$. Thus, we only need to show that $w$ is a viscosity subsolution of (4.1).

To this end, we suppose that $(w^* - \psi)(x) \le (w^* - \psi)(z) = 0$ for $x \in \Omega$, and then we will get
\[ F(z, w^*(z), D\psi(z), D^2\psi(z)) \le 0. \qquad (4.6) \]
If $z \in \Omega \setminus \overline{B}_s(\hat x) =: \Omega'$, then by Proposition 2.4, since $u^* - \psi$ attains its maximum over $\Omega'$ at $z$, we get (4.6).

If $z \in \partial B_s(\hat x)$, then (4.6) holds again since $w = u$ in $\overline{B}_{s+\varepsilon}(\hat x) \setminus B_{s-\varepsilon}(\hat x)$.

It remains to show (4.6) when $z \in B_s(\hat x)$. Since $\phi + \tau_0$ is a viscosity subsolution of (4.1) in $B_s(\hat x)$, Theorem 4.2 with $\Omega := B_s(\hat x)$ yields (4.6). $\Box$
Correct proof, which the reader may skip at first reading. Since we do not suppose that $u \in \mathrm{LSC}(\Omega)$ here, we have to work with $u_*$.

Suppose that $0 = (u_* - \phi)(\hat x) < (u_* - \phi)(x)$ for $x \in \Omega \setminus \{\hat x\}$ for some $\phi \in C^2(\Omega)$, $\hat x \in \Omega$, $\theta > 0$, and that
\[ F(\hat x, \phi(\hat x), D\phi(\hat x), D^2\phi(\hat x)) \le -2\theta. \]
Hence, we get (4.5) even in this case.

We can also show, as above, that the $w$ defined there is a viscosity subsolution of (4.1). It only remains to check that $\sup_\Omega (w - u) > 0$.

In fact, choosing $x_k \in B_{1/k}(\hat x)$ such that
\[ u_*(\hat x) + \frac{1}{k} \ge u(x_k), \]
we easily verify that if $1/k \le \min\{\tau_0/2, s\}$ and $|\phi(\hat x) - \phi(x_k)| < \tau_0/2$, then we have
\[ w(x_k) \ge \phi(x_k) + \tau_0 > \phi(\hat x) + \frac{\tau_0}{2} = u_*(\hat x) + \frac{\tau_0}{2} \ge u(x_k). \qquad \Box \]
4.2 Representation formula

In this subsection, for given Bellman and Isaacs equations, we present the expected solutions, which are called "value functions". In fact, via the dynamic programming principle for the value functions, we verify that they are viscosity solutions of the corresponding PDEs.

Although this subsection is very important for learning why the notion of viscosity solutions is the right one from the viewpoint of applications in optimal control and games, a reader more interested in the PDE theory than in these applications may skip this subsection.

We shall restrict ourselves to the formulas for first-order PDEs, because in order to extend the results below to second-order ones, we would need to introduce some terminology from stochastic analysis, which is too much for this thin book.

As will be seen, we study the minimization of functionals associated with ordinary differential equations (ODEs for short), which is called a "deterministic" optimal control problem. When we adopt "stochastic" differential equations instead of ODEs, those are called "stochastic" optimal control problems. We refer to [10] for the latter.

Moreover, to avoid mentioning the boundary condition, we will work in the whole space $\mathbb{R}^n$.

Throughout this subsection, we also suppose (3.7); $\nu > 0$.
4.2.1 Bellman equation

We fix a control set $A \subset \mathbb{R}^m$ for some $m \in \mathbb{N}$. We define $\mathcal{A}$ by
\[ \mathcal{A} := \{ \alpha : [0, \infty) \to A \mid \alpha(\cdot) \text{ is measurable} \}. \]
For $x \in \mathbb{R}^n$ and $\alpha \in \mathcal{A}$, we denote by $X(\cdot\,; x, \alpha)$ the solution of
\[ \begin{cases} X'(t) = g(X(t), \alpha(t)) & \text{for } t > 0, \\ X(0) = x, \end{cases} \qquad (4.7) \]
where we will impose a sufficient condition on the continuous function $g : \mathbb{R}^n \times A \to \mathbb{R}^n$ so that (4.7) is uniquely solvable.

For given $f : \mathbb{R}^n \times A \to \mathbb{R}$, under suitable assumptions (see (4.8) below), we define the cost functional for $X(\cdot\,; x, \alpha)$:
\[ J(x, \alpha) := \int_0^\infty e^{-\nu t} f(X(t; x, \alpha), \alpha(t))\, dt. \]
Here, $\nu > 0$ is called a discount factor; it ensures that the right-hand side above is finite.

Now, we shall consider the optimal cost functional, which is called the value function of the optimal control problem:
\[ u(x) := \inf_{\alpha \in \mathcal{A}} J(x, \alpha) \quad \text{for } x \in \mathbb{R}^n. \]
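As a concrete illustration (not from the text; all choices below are toy assumptions), take $n = 1$, $A = \{-1, 0, 1\}$, $g(x, a) = a$, $f(x, a) = x^2$ and $\nu = 1$. For a constant control $\alpha(t) \equiv a$ we have $X(t) = x + at$, and using $\int_0^\infty e^{-t}dt = 1$, $\int_0^\infty t e^{-t}dt = 1$, $\int_0^\infty t^2 e^{-t}dt = 2$, the closed form $J(x, a) = x^2 + 2ax + 2a^2$. The sketch below approximates $J$ by a truncated Riemann sum and minimizes over the constant controls only; the true value function takes the infimum over all measurable controls.

```python
import math

# Toy control problem (illustrative choices, not from the text):
#   X'(t) = a (constant control), X(0) = x,  f(x, a) = x^2,  nu = 1.
# Closed form for constant controls: J(x, a) = x^2 + 2*a*x + 2*a^2.

def J_numeric(x, a, T=30.0, dt=1e-3):
    """Truncated Riemann sum for the discounted cost integral."""
    total, t = 0.0, 0.0
    while t < T:
        X = x + a * t                       # trajectory for constant control
        total += math.exp(-t) * X * X * dt  # e^{-nu t} f(X(t), a) dt, nu = 1
        t += dt
    return total

def J_exact(x, a):
    return x * x + 2 * a * x + 2 * a * a

x = 1.0
for a in (-1, 0, 1):
    print(a, J_numeric(x, a), J_exact(x, a))

# Infimum over the three constant controls: a crude upper bound for u(x).
print(min(J_numeric(x, a) for a in (-1, 0, 1)))
```

The discount $e^{-\nu t}$ makes the truncation error negligible for moderate $T$, which is exactly the finiteness role of $\nu$ mentioned above.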
Theorem 4.4. (Dynamic Programming Principle) Assume that
\[ \begin{cases} (1) \ \sup_{a \in A} \left( \|f(\cdot, a)\|_{L^\infty(\mathbb{R}^n)} + \|g(\cdot, a)\|_{W^{1,\infty}(\mathbb{R}^n)} \right) < \infty, \\ (2) \ \sup_{a \in A} |f(x, a) - f(y, a)| \le \omega_f(|x - y|) \quad \text{for } x, y \in \mathbb{R}^n, \end{cases} \qquad (4.8) \]
where $\omega_f$ is a modulus of continuity. For any $T > 0$, we have
\[ u(x) = \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^T e^{-\nu t} f(X(t; x, \alpha), \alpha(t))\, dt + e^{-\nu T} u(X(T; x, \alpha)) \right\}. \]
Proof. For fixed $T > 0$, we denote by $v(x)$ the right-hand side above.

Step 1: $u(x) \ge v(x)$. Fix any $\varepsilon > 0$, and choose $\alpha_\varepsilon \in \mathcal{A}$ such that
\[ u(x) + \varepsilon \ge \int_0^\infty e^{-\nu t} f(X(t; x, \alpha_\varepsilon), \alpha_\varepsilon(t))\, dt. \]
Setting $\hat x = X(T; x, \alpha_\varepsilon)$ and $\hat\alpha_\varepsilon \in \mathcal{A}$ by $\hat\alpha_\varepsilon(t) = \alpha_\varepsilon(T + t)$ for $t \ge 0$, we have
\[ \int_0^\infty e^{-\nu t} f(X(t; x, \alpha_\varepsilon), \alpha_\varepsilon(t))\, dt = \int_0^T e^{-\nu t} f(X(t; x, \alpha_\varepsilon), \alpha_\varepsilon(t))\, dt + e^{-\nu T} \int_0^\infty e^{-\nu t} f(X(t; \hat x, \hat\alpha_\varepsilon), \hat\alpha_\varepsilon(t))\, dt. \]
Here and later, without mentioning it, we use the fact that
\[ X(t + T; x, \alpha) = X(t; \hat x, \hat\alpha) \quad \text{for } T > 0,\ t \ge 0 \text{ and } \alpha \in \mathcal{A}, \]
where $\hat\alpha(t) := \alpha(t + T)$ ($t \ge 0$) and $\hat x := X(T; x, \alpha)$. Indeed, the above relation holds true because of the uniqueness of solutions of (4.7) under assumption (4.8).

[Fig 4.3: the trajectory $X(t + T; x, \alpha)$ for $t \ge 0$ coincides with $X(t; \hat x, \hat\alpha)$, the trajectory restarted at $t = T$ from $\hat x = X(T; x, \alpha)$.]

Thus, estimating the second term of the right-hand side above from below by its infimum over $\mathcal{A}$, we have
\[ u(x) + \varepsilon \ge \int_0^T e^{-\nu t} f(X(t; x, \alpha_\varepsilon), \alpha_\varepsilon(t))\, dt + e^{-\nu T} u(\hat x), \]
which implies the one-sided inequality by taking the infimum over $\mathcal{A}$, since $\varepsilon > 0$ is arbitrary.

Step 2: $u(x) \le v(x)$. Fix $\varepsilon > 0$ again, and choose $\alpha_\varepsilon \in \mathcal{A}$ such that
\[ v(x) + \varepsilon \ge \int_0^T e^{-\nu t} f(X(t; x, \alpha_\varepsilon), \alpha_\varepsilon(t))\, dt + e^{-\nu T} u(\hat x), \]
where $\hat x := X(T; x, \alpha_\varepsilon)$. We next choose $\alpha_1 \in \mathcal{A}$ such that
\[ u(\hat x) + \varepsilon \ge \int_0^\infty e^{-\nu t} f(X(t; \hat x, \alpha_1), \alpha_1(t))\, dt. \]
Now, setting
\[ \alpha_0(t) := \begin{cases} \alpha_\varepsilon(t) & \text{for } t \in [0, T), \\ \alpha_1(t - T) & \text{for } t \ge T, \end{cases} \]
we see that
\[ v(x) + 2\varepsilon \ge \int_0^\infty e^{-\nu t} f(X(t; x, \alpha_0), \alpha_0(t))\, dt, \]
which gives the opposite inequality by taking the infimum over $\alpha_0 \in \mathcal{A}$, since $\varepsilon > 0$ is arbitrary again. $\Box$
Now, we give an existence result for Bellman equations.

Theorem 4.5. Assume that (4.8) holds. Then, $u$ is a viscosity solution of
\[ \sup_{a \in A} \{ \nu u - \langle g(x, a), Du \rangle - f(x, a) \} = 0 \quad \text{in } \mathbb{R}^n. \qquad (4.9) \]
Sketch of proof. In Steps 1 and 2, we give a proof assuming $u \in \mathrm{USC}(\mathbb{R}^n)$ and $u \in \mathrm{LSC}(\mathbb{R}^n)$, respectively.

Step 1: Subsolution property. Fix $\phi \in C^1(\mathbb{R}^n)$, and suppose that $0 = (u - \phi)(\hat x) \ge (u - \phi)(x)$ for some $\hat x \in \mathbb{R}^n$ and any $x \in \mathbb{R}^n$.

Fix any $a_0 \in A$, and set $\alpha_0(t) := a_0$ for $t \ge 0$ so that $\alpha_0 \in \mathcal{A}$.

For small $s > 0$, in view of Theorem 4.4, we have
\[ \phi(\hat x) - e^{-\nu s}\phi(X(s; \hat x, \alpha_0)) \le u(\hat x) - e^{-\nu s}u(X(s; \hat x, \alpha_0)) \le \int_0^s e^{-\nu t} f(X(t; \hat x, \alpha_0), a_0)\, dt. \]
Setting $X(t) := X(t; \hat x, \alpha_0)$ for simplicity, by (4.7), we see that
\[ e^{-\nu t} \{ \nu \phi(X(t)) - \langle g(X(t), \alpha_0(t)), D\phi(X(t)) \rangle \} = -\frac{d}{dt}\left( e^{-\nu t}\phi(X(t)) \right). \qquad (4.10) \]
Hence, we have
\[ 0 \ge \int_0^s e^{-\nu t} \{ \nu \phi(X(t)) - \langle g(X(t), a_0), D\phi(X(t)) \rangle - f(X(t), a_0) \}\, dt. \]
Therefore, dividing the above by $s > 0$, and then sending $s \to 0$, we have
\[ 0 \ge \nu \phi(\hat x) - \langle g(\hat x, a_0), D\phi(\hat x) \rangle - f(\hat x, a_0), \]
which implies the desired inequality of the definition by taking the supremum over $A$.
Step 2: Supersolution property. To show that $u$ is a viscosity supersolution, we argue by contradiction.

Suppose that there are $\hat x \in \mathbb{R}^n$, $\theta > 0$ and $\phi \in C^1(\mathbb{R}^n)$ such that $0 = (u - \phi)(\hat x) \le (u - \phi)(x)$ for $x \in \mathbb{R}^n$, and that
\[ \sup_{a \in A} \{ \nu \phi(\hat x) - \langle g(\hat x, a), D\phi(\hat x) \rangle - f(\hat x, a) \} \le -2\theta. \]
Thus, we can find $\varepsilon > 0$ such that
\[ \sup_{a \in A} \{ \nu \phi(x) - \langle g(x, a), D\phi(x) \rangle - f(x, a) \} \le -\theta \quad \text{for } x \in B_\varepsilon(\hat x). \qquad (4.11) \]
By assumption (4.8) for $g$, setting $t_0 := \varepsilon/(\sup_{a \in A} \|g(\cdot, a)\|_{L^\infty(\mathbb{R}^n)} + 1) > 0$, we easily see that
\[ |X(t; \hat x, \alpha) - \hat x| \le \int_0^t |X'(s; \hat x, \alpha)|\, ds \le \varepsilon \quad \text{for } t \in [0, t_0] \text{ and } \alpha \in \mathcal{A}. \]
Hence, by setting $X(t) := X(t; \hat x, \alpha)$ for any fixed $\alpha \in \mathcal{A}$, (4.11) yields
\[ \nu \phi(X(t)) - \langle g(X(t), \alpha(t)), D\phi(X(t)) \rangle - f(X(t), \alpha(t)) \le -\theta \qquad (4.12) \]
for $t \in [0, t_0]$. Since (4.10) holds with $\alpha$ in place of $\alpha_0$, multiplying (4.12) by $e^{-\nu t}$ and then integrating over $[0, t_0]$, we obtain
\[ \phi(\hat x) - e^{-\nu t_0}\phi(X(t_0)) - \int_0^{t_0} e^{-\nu t} f(X(t), \alpha(t))\, dt \le -\frac{\theta}{\nu}(1 - e^{-\nu t_0}). \]
Thus, setting $\theta_0 := \theta(1 - e^{-\nu t_0})/\nu > 0$, which is independent of $\alpha \in \mathcal{A}$, we have
\[ u(\hat x) \le \int_0^{t_0} e^{-\nu t} f(X(t), \alpha(t))\, dt + e^{-\nu t_0} u(X(t_0)) - \theta_0. \]
Therefore, taking the infimum over $\mathcal{A}$, we get a contradiction to Theorem 4.4. $\Box$
Correct proof, which the reader may skip at first reading.

Step 1: Subsolution property. Assume that there are $\hat x \in \mathbb{R}^n$, $\theta > 0$ and $\phi \in C^1(\mathbb{R}^n)$ such that $0 = (u^* - \phi)(\hat x) \ge (u^* - \phi)(x)$ for $x \in \mathbb{R}^n$ and that
\[ \sup_{a \in A} \{ \nu \phi(\hat x) - \langle g(\hat x, a), D\phi(\hat x) \rangle - f(\hat x, a) \} \ge 2\theta. \]
In view of (4.8), there are $a_0 \in A$ and $r > 0$ such that
\[ \nu \phi(x) - \langle g(x, a_0), D\phi(x) \rangle - f(x, a_0) \ge \theta \quad \text{for } x \in B_{2r}(\hat x). \qquad (4.13) \]
For large $k \ge 1$, we can choose $x_k \in B_{1/k}(\hat x)$ such that $u^*(\hat x) \le u(x_k) + k^{-1}$ and $|\phi(\hat x) - \phi(x_k)| < 1/k$. We will only use $k$ such that $1/k \le r$.

Setting $\alpha_0(t) := a_0$, we note that $X_k(t) := X(t; x_k, \alpha_0) \in B_{2r}(\hat x)$ for $t \in [0, t_0]$, with some $t_0 > 0$ and for large $k$.

On the other hand, by Theorem 4.4, we have
\[ u(x_k) \le \int_0^{t_0} e^{-\nu t} f(X_k(t), a_0)\, dt + e^{-\nu t_0} u(X_k(t_0)). \]
Thus, we have
\[ \phi(x_k) - \frac{2}{k} \le \phi(\hat x) - \frac{1}{k} \le u(x_k) \le \int_0^{t_0} e^{-\nu t} f(X_k(t), a_0)\, dt + e^{-\nu t_0} \phi(X_k(t_0)). \]
Hence, by (4.13), as in Step 1 of the Sketch of proof, we see that
\[ -\frac{2}{k} \le \int_0^{t_0} e^{-\nu t} \{ f(X_k(t), a_0) + \langle g(X_k(t), a_0), D\phi(X_k(t)) \rangle - \nu \phi(X_k(t)) \}\, dt \le -\frac{\theta}{\nu}(1 - e^{-\nu t_0}), \]
which is a contradiction for large $k$.

Step 2: Supersolution property. Assume that there are $\hat x \in \mathbb{R}^n$, $\theta > 0$ and $\phi \in C^1(\mathbb{R}^n)$ such that $0 = (u_* - \phi)(\hat x) \le (u_* - \phi)(x)$ for $x \in \mathbb{R}^n$ and that
\[ \sup_{a \in A} \{ \nu \phi(\hat x) - \langle g(\hat x, a), D\phi(\hat x) \rangle - f(\hat x, a) \} \le -2\theta. \]
In view of (4.8), there is $r > 0$ such that
\[ \nu \phi(x) - \langle g(x, a), D\phi(x) \rangle - f(x, a) \le -\theta \quad \text{for } x \in B_{2r}(\hat x) \text{ and } a \in A. \qquad (4.14) \]
For large $k \ge 1$, we can choose $x_k \in B_{1/k}(\hat x)$ such that $u_*(\hat x) \ge u(x_k) - k^{-1}$ and $|\phi(\hat x) - \phi(x_k)| < 1/k$. In view of (4.8), there is $t_0 > 0$ such that
\[ X(t; x_k, \alpha) \in B_{2r}(\hat x) \quad \text{for all } k \ge \frac{1}{r},\ \alpha \in \mathcal{A} \text{ and } t \in [0, t_0]. \]
Now, we select $\alpha_k \in \mathcal{A}$ such that
\[ u(x_k) + \frac{1}{k} \ge \int_0^{t_0} e^{-\nu t} f(X(t; x_k, \alpha_k), \alpha_k(t))\, dt + e^{-\nu t_0} u(X(t_0; x_k, \alpha_k)). \]
Setting $X_k(t) := X(t; x_k, \alpha_k)$, we have
\[ \phi(x_k) + \frac{3}{k} \ge \phi(\hat x) + \frac{2}{k} \ge u(x_k) + \frac{1}{k} \ge \int_0^{t_0} e^{-\nu t} f(X_k(t), \alpha_k(t))\, dt + e^{-\nu t_0} \phi(X_k(t_0)). \]
Hence, we have
\[ \frac{3}{k} \ge \int_0^{t_0} e^{-\nu t} \{ \langle g(X_k(t), \alpha_k(t)), D\phi(X_k(t)) \rangle + f(X_k(t), \alpha_k(t)) - \nu \phi(X_k(t)) \}\, dt. \]
Putting (4.14) with $a = \alpha_k(t)$ in the above, we have
\[ \frac{3}{k} \ge \theta \int_0^{t_0} e^{-\nu t}\, dt, \]
which is a contradiction for large $k \ge 1$. $\Box$
4.2.2 Isaacs equation

In this subsection, we study fully nonlinear PDEs (i.e. $p \in \mathbb{R}^n \mapsto F(x, p)$ is neither convex nor concave) arising in differential games.

We are given continuous functions $f : \mathbb{R}^n \times A \times B \to \mathbb{R}$ and $g : \mathbb{R}^n \times A \times B \to \mathbb{R}^n$ such that
\[ \begin{cases} (1) \ \sup_{(a,b) \in A \times B} \left( \|f(\cdot, a, b)\|_{L^\infty(\mathbb{R}^n)} + \|g(\cdot, a, b)\|_{W^{1,\infty}(\mathbb{R}^n)} \right) < \infty, \\ (2) \ \sup_{(a,b) \in A \times B} |f(x, a, b) - f(y, a, b)| \le \omega_f(|x - y|) \quad \text{for } x, y \in \mathbb{R}^n, \end{cases} \qquad (4.15) \]
where $\omega_f$ is a modulus of continuity.

Under (4.15), we shall consider the Isaacs equations:
\[ \sup_{a \in A} \inf_{b \in B} \{ \nu u - \langle g(x, a, b), Du \rangle - f(x, a, b) \} = 0 \quad \text{in } \mathbb{R}^n \qquad (4.16) \]
and
\[ \inf_{b \in B} \sup_{a \in A} \{ \nu u - \langle g(x, a, b), Du \rangle - f(x, a, b) \} = 0 \quad \text{in } \mathbb{R}^n. \qquad (4.16') \]
As in the previous subsection, we shall derive the expected solution.

We first introduce some notation: while we will use the same $\mathcal{A}$ as before, we set
\[ \mathcal{B} := \{ \beta : [0, \infty) \to B \mid \beta(\cdot) \text{ is measurable} \}. \]
Next, we introduce the so-called sets of "nonanticipating strategies":
\[ \Gamma := \left\{ \gamma : \mathcal{A} \to \mathcal{B} \;\middle|\; \begin{array}{l} \text{for any } T > 0, \text{ if } \alpha_1, \alpha_2 \in \mathcal{A} \text{ satisfy} \\ \alpha_1(t) = \alpha_2(t) \text{ for a.a. } t \in (0, T), \\ \text{then } \gamma[\alpha_1](t) = \gamma[\alpha_2](t) \text{ for a.a. } t \in (0, T) \end{array} \right\} \]
and
\[ \Delta := \left\{ \delta : \mathcal{B} \to \mathcal{A} \;\middle|\; \begin{array}{l} \text{for any } T > 0, \text{ if } \beta_1, \beta_2 \in \mathcal{B} \text{ satisfy} \\ \beta_1(t) = \beta_2(t) \text{ for a.a. } t \in (0, T), \\ \text{then } \delta[\beta_1](t) = \delta[\beta_2](t) \text{ for a.a. } t \in (0, T) \end{array} \right\}. \]
Using these notations, we will consider maximizing–minimizing problems for the following cost functional: for $\alpha \in \mathcal{A}$, $\beta \in \mathcal{B}$ and $x \in \mathbb{R}^n$,
\[ J(x, \alpha, \beta) := \int_0^\infty e^{-\nu t} f(X(t; x, \alpha, \beta), \alpha(t), \beta(t))\, dt, \]
where $X(\cdot\,; x, \alpha, \beta)$ is the (unique) solution of
\[ \begin{cases} X'(t) = g(X(t), \alpha(t), \beta(t)) & \text{for } t > 0, \\ X(0) = x. \end{cases} \qquad (4.17) \]
The expected solutions of (4.16) and (4.16′), respectively, are given by
\[ u(x) = \sup_{\gamma \in \Gamma} \inf_{\alpha \in \mathcal{A}} \int_0^\infty e^{-\nu t} f(X(t; x, \alpha, \gamma[\alpha]), \alpha(t), \gamma[\alpha](t))\, dt \]
and
\[ v(x) = \inf_{\delta \in \Delta} \sup_{\beta \in \mathcal{B}} \int_0^\infty e^{-\nu t} f(X(t; x, \delta[\beta], \beta), \delta[\beta](t), \beta(t))\, dt. \]
We call $u$ and $v$ the upper and lower value functions of this differential game, respectively. In fact, under appropriate hypotheses, we expect that $v \le u$, which cannot be proved easily. To show $v \le u$, we first observe that $u$ and $v$ are, respectively, viscosity solutions of (4.16) and (4.16′). Noting that
\[ \sup_{a \in A} \inf_{b \in B} \{ \nu r - \langle g(x, a, b), p \rangle - f(x, a, b) \} \le \inf_{b \in B} \sup_{a \in A} \{ \nu r - \langle g(x, a, b), p \rangle - f(x, a, b) \} \]
for $(x, r, p) \in \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n$, we see that $u$ (resp., $v$) is a viscosity supersolution (resp., subsolution) of (4.16′) (resp., (4.16)). Thus, the standard comparison principle implies $v \le u$ in $\mathbb{R}^n$ (under a suitable growth condition on $u$ and $v$ as $|x| \to \infty$).

We shall only deal with $u$ since the corresponding results for $v$ can be obtained in a symmetric way.
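The elementary inequality $\sup_a \inf_b \le \inf_b \sup_a$ used above is worth seeing in the simplest possible setting. The following sketch (a toy illustration with an arbitrary payoff matrix, not from the text) compares $\max_a \min_b M_{ab}$ with $\min_b \max_a M_{ab}$ for a finite two-player game, where the gap between the two mirrors the gap between the lower and upper value functions.

```python
# sup-inf vs inf-sup for a finite payoff matrix M[a][b]
# (arbitrary toy data; any matrix satisfies maxmin <= minmax).
M = [
    [3.0, 1.0],
    [0.0, 2.0],
]

maxmin = max(min(row) for row in M)                   # sup_a inf_b
minmax = min(max(M[a][b] for a in range(len(M)))      # inf_b sup_a
             for b in range(len(M[0])))
print(maxmin, minmax)  # 1.0 2.0 -- a strict gap: the order of play matters
```

When the gap is strict, as here, the player who commits second has an advantage; nonanticipating strategies are precisely the device that formalizes "reacting to the opponent's past choices" in continuous time.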
To show that $u$ is a viscosity solution of the Isaacs equation (4.16), we first establish the dynamic programming principle as in the previous subsection:

Theorem 4.6. (Dynamic Programming Principle) Assume that (4.15) holds. Then, for $T > 0$, we have
\[ u(x) = \sup_{\gamma \in \Gamma} \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^T e^{-\nu t} f(X(t; x, \alpha, \gamma[\alpha]), \alpha(t), \gamma[\alpha](t))\, dt + e^{-\nu T} u(X(T; x, \alpha, \gamma[\alpha])) \right\}. \]
Proof. For a fixed $T > 0$, we denote by $w(x)$ the right-hand side above.

Step 1: $u(x) \le w(x)$. For any $\varepsilon > 0$, we choose $\gamma_\varepsilon \in \Gamma$ such that
\[ u(x) - \varepsilon \le \inf_{\alpha \in \mathcal{A}} \int_0^\infty e^{-\nu t} f(X(t; x, \alpha, \gamma_\varepsilon[\alpha]), \alpha(t), \gamma_\varepsilon[\alpha](t))\, dt =: I_\varepsilon. \]
For any fixed $\alpha_0 \in \mathcal{A}$, we define the mapping $T_0 : \mathcal{A} \to \mathcal{A}$ by
\[ T_0[\alpha](t) := \begin{cases} \alpha_0(t) & \text{for } t \in [0, T), \\ \alpha(t - T) & \text{for } t \in [T, \infty) \end{cases} \quad \text{for } \alpha \in \mathcal{A}. \]
Thus, for any $\alpha \in \mathcal{A}$, we have
\[ I_\varepsilon \le \int_0^T e^{-\nu t} f(X(t; x, \alpha_0, \gamma_\varepsilon[\alpha_0]), \alpha_0(t), \gamma_\varepsilon[\alpha_0](t))\, dt + \int_T^\infty e^{-\nu t} f(X(t; x, T_0[\alpha], \gamma_\varepsilon[T_0[\alpha]]), T_0[\alpha](t), \gamma_\varepsilon[T_0[\alpha]](t))\, dt =: I_\varepsilon^1 + I_\varepsilon^2. \]
We next define $\hat\gamma \in \Gamma$ by
\[ \hat\gamma[\alpha](t) := \gamma_\varepsilon[T_0[\alpha]](t + T) \quad \text{for } t \ge 0 \text{ and } \alpha \in \mathcal{A}. \]
Note that $\hat\gamma$ belongs to $\Gamma$. Setting $\hat x := X(T; x, \alpha_0, \gamma_\varepsilon[\alpha_0])$, we have
\[ I_\varepsilon^2 = e^{-\nu T} \int_0^\infty e^{-\nu t} f(X(t; \hat x, \alpha, \hat\gamma[\alpha]), \alpha(t), \hat\gamma[\alpha](t))\, dt. \]
Taking the infimum over $\alpha \in \mathcal{A}$, we have
\[ u(x) - \varepsilon \le I_\varepsilon^1 + e^{-\nu T} \inf_{\alpha \in \mathcal{A}} \int_0^\infty e^{-\nu t} f(X(t; \hat x, \alpha, \hat\gamma[\alpha]), \alpha(t), \hat\gamma[\alpha](t))\, dt =: I_\varepsilon^1 + \hat I_\varepsilon^2. \]
Since $\hat I_\varepsilon^2 \le e^{-\nu T} u(\hat x)$, we have
\[ u(x) - \varepsilon \le I_\varepsilon^1 + e^{-\nu T} u(\hat x), \]
which implies $u(x) - \varepsilon \le w(x)$ by taking the infimum over $\alpha_0 \in \mathcal{A}$ and then the supremum over $\Gamma$. Therefore, we get the one-sided inequality since $\varepsilon > 0$ is arbitrary.
Step 2: $u(x) \ge w(x)$. For $\varepsilon > 0$, we choose $\gamma_\varepsilon^1 \in \Gamma$ such that
\[ w(x) - \varepsilon \le \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^T e^{-\nu t} f(X(t; x, \alpha, \gamma_\varepsilon^1[\alpha]), \alpha(t), \gamma_\varepsilon^1[\alpha](t))\, dt + e^{-\nu T} u(X(T; x, \alpha, \gamma_\varepsilon^1[\alpha])) \right\}. \]
For any fixed $\alpha_0 \in \mathcal{A}$, setting $\hat x = X(T; x, \alpha_0, \gamma_\varepsilon^1[\alpha_0])$, we have
\[ w(x) - \varepsilon \le \int_0^T e^{-\nu t} f(X(t; x, \alpha_0, \gamma_\varepsilon^1[\alpha_0]), \alpha_0(t), \gamma_\varepsilon^1[\alpha_0](t))\, dt + e^{-\nu T} u(\hat x). \]
Next, we choose $\gamma_\varepsilon^2 \in \Gamma$ such that
\[ u(\hat x) - \varepsilon \le \inf_{\alpha \in \mathcal{A}} \int_0^\infty e^{-\nu t} f(X(t; \hat x, \alpha, \gamma_\varepsilon^2[\alpha]), \alpha(t), \gamma_\varepsilon^2[\alpha](t))\, dt =: I. \]
For $\alpha \in \mathcal{A}$, we define the mapping $T_1 : \mathcal{A} \to \mathcal{A}$ by
\[ T_1[\alpha](t) := \alpha(t + T) \quad \text{for } t \ge 0. \]
Thus, we have
\[ I \le \int_0^\infty e^{-\nu t} f(X(t; \hat x, T_1[\alpha_0], \gamma_\varepsilon^2[T_1[\alpha_0]]), T_1[\alpha_0](t), \gamma_\varepsilon^2[T_1[\alpha_0]](t))\, dt =: \hat I. \]
Now, for $\alpha \in \mathcal{A}$, setting
\[ \hat\gamma[\alpha](t) := \begin{cases} \gamma_\varepsilon^1[\alpha](t) & \text{for } t \in [0, T), \\ \gamma_\varepsilon^2[T_1[\alpha]](t - T) & \text{for } t \in [T, \infty), \end{cases} \]
and $\hat X(t) := X(t; \hat x, T_1[\alpha_0], \gamma_\varepsilon^2[T_1[\alpha_0]])$, we have
\[ \hat I = \int_T^\infty e^{-\nu(t - T)} f(\hat X(t - T), T_1[\alpha_0](t - T), \gamma_\varepsilon^2[T_1[\alpha_0]](t - T))\, dt = e^{\nu T} \int_T^\infty e^{-\nu t} f(\hat X(t - T), \alpha_0(t), \hat\gamma[\alpha_0](t))\, dt. \]
Since
\[ X(t; x, \alpha_0, \hat\gamma[\alpha_0]) = \begin{cases} X(t; x, \alpha_0, \gamma_\varepsilon^1[\alpha_0]) & \text{for } t \in [0, T), \\ \hat X(t - T) & \text{for } t \in [T, \infty), \end{cases} \]
we have
\[ w(x) - 2\varepsilon \le \int_0^\infty e^{-\nu t} f(X(t; x, \alpha_0, \hat\gamma[\alpha_0]), \alpha_0(t), \hat\gamma[\alpha_0](t))\, dt. \]
Since $\alpha_0$ is arbitrary, we have
\[ w(x) - 2\varepsilon \le \inf_{\alpha \in \mathcal{A}} \int_0^\infty e^{-\nu t} f(X(t; x, \alpha, \hat\gamma[\alpha]), \alpha(t), \hat\gamma[\alpha](t))\, dt, \]
which yields the assertion by taking the supremum over $\Gamma$ and then sending $\varepsilon \to 0$. $\Box$
Now, we shall verify that the value function $u$ is a viscosity solution of (4.16).

Since we only give a sketch of the proof, one can skip the following theorem. For a correct proof, we refer to [1]; the result is originally due to Evans–Souganidis (1984).

Theorem 4.7. Assume that (4.15) holds.
(1) Then, $u$ is a viscosity subsolution of (4.16).
(2) Assume also the following properties:
\[ \begin{cases} \text{(i) } A \subset \mathbb{R}^m \text{ is compact for some integer } m \ge 1; \\ \text{(ii) there is a modulus } \omega_A \text{ such that} \\ \quad |f(x, a, b) - f(x, a', b)| + |g(x, a, b) - g(x, a', b)| \le \omega_A(|a - a'|) \\ \quad \text{for } x \in \mathbb{R}^n,\ a, a' \in A \text{ and } b \in B. \end{cases} \qquad (4.18) \]
Then, $u$ is a viscosity supersolution of (4.16).

Remark. To show that $v$ is a viscosity subsolution of (4.16′), instead of (4.18), we need to suppose the following hypotheses:
\[ \begin{cases} \text{(i) } B \subset \mathbb{R}^m \text{ is compact for some integer } m \ge 1; \\ \text{(ii) there is a modulus } \omega_B \text{ such that} \\ \quad |f(x, a, b) - f(x, a, b')| + |g(x, a, b) - g(x, a, b')| \le \omega_B(|b - b'|) \\ \quad \text{for } x \in \mathbb{R}^n,\ b, b' \in B \text{ and } a \in A, \end{cases} \qquad (4.18') \]
while to verify that $v$ is a viscosity supersolution of (4.16′), we only need (4.15).
Sketch of proof. We shall only prove the assertion assuming that $u \in \mathrm{USC}(\mathbb{R}^n)$ and $u \in \mathrm{LSC}(\mathbb{R}^n)$ in Steps 1 and 2, respectively.

To give a correct proof without the semicontinuity assumption, we need a somewhat more careful analysis, similar to the proof for Bellman equations. We omit the correct proof here.

Step 1: Subsolution property. Suppose that the subsolution property fails; then there are $x \in \mathbb{R}^n$, $\theta > 0$ and $\phi \in C^1(\mathbb{R}^n)$ such that $0 = (u - \phi)(x) \ge (u - \phi)(y)$ for all $y \in \mathbb{R}^n$, and
\[ \sup_{a \in A} \inf_{b \in B} \{ \nu u(x) - \langle g(x, a, b), D\phi(x) \rangle - f(x, a, b) \} \ge 3\theta. \]
We note that the trajectories $X(\cdot\,; x, \alpha, \gamma[\alpha])$ are uniformly continuous, uniformly for $(\alpha, \gamma) \in \mathcal{A} \times \Gamma$, in view of (4.15).

Thus, we can choose $a_0 \in A$ such that
\[ \inf_{b \in B} \{ \nu \phi(x) - \langle g(x, a_0, b), D\phi(x) \rangle - f(x, a_0, b) \} \ge 2\theta. \]
For any $\gamma \in \Gamma$, setting $\alpha_0(t) := a_0$ for $t \ge 0$, we simply write $X(\cdot)$ for $X(\cdot\,; x, \alpha_0, \gamma[\alpha_0])$. Thus, we find small $t_0 > 0$ such that
\[ \nu \phi(X(t)) - \langle g(X(t), a_0, \gamma[\alpha_0](t)), D\phi(X(t)) \rangle - f(X(t), a_0, \gamma[\alpha_0](t)) \ge \theta \]
for $t \in [0, t_0]$. Multiplying the above by $e^{-\nu t}$ and then integrating over $[0, t_0]$, we have
\[ \frac{\theta}{\nu}(1 - e^{-\nu t_0}) \le -\int_0^{t_0} \left\{ \frac{d}{dt}\left(e^{-\nu t}\phi(X(t))\right) + e^{-\nu t} f(X(t), a_0, \gamma[\alpha_0](t)) \right\} dt = \phi(x) - e^{-\nu t_0}\phi(X(t_0)) - \int_0^{t_0} e^{-\nu t} f(X(t), a_0, \gamma[\alpha_0](t))\, dt. \]
Hence, we have
\[ u(x) - \frac{\theta}{\nu}(1 - e^{-\nu t_0}) \ge \int_0^{t_0} e^{-\nu t} f(X(t), a_0, \gamma[\alpha_0](t))\, dt + e^{-\nu t_0} u(X(t_0)) =: \hat I. \]
Taking the infimum over $\mathcal{A}$, we have
\[ \hat I \ge \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^{t_0} e^{-\nu t} f(X(t; x, \alpha, \gamma[\alpha]), \alpha(t), \gamma[\alpha](t))\, dt + e^{-\nu t_0} u(X(t_0; x, \alpha, \gamma[\alpha])) \right\}. \]
Therefore, since $\gamma \in \Gamma$ is arbitrary, we have
\[ u(x) - \frac{\theta}{\nu}(1 - e^{-\nu t_0}) \ge \sup_{\gamma \in \Gamma} \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^{t_0} e^{-\nu t} f(X(t; x, \alpha, \gamma[\alpha]), \alpha(t), \gamma[\alpha](t))\, dt + e^{-\nu t_0} u(X(t_0; x, \alpha, \gamma[\alpha])) \right\}, \]
which contradicts Theorem 4.6.
Step 2: Supersolution property. Suppose that the supersolution property fails; then there are $x \in \mathbb{R}^n$, $\theta > 0$ and $\phi \in C^1(\mathbb{R}^n)$ such that $0 = (u - \phi)(x) \le (u - \phi)(y)$ for $y \in \mathbb{R}^n$, and
\[ \sup_{a \in A} \inf_{b \in B} \{ \nu u(x) - \langle g(x, a, b), D\phi(x) \rangle - f(x, a, b) \} \le -3\theta. \]
For any $a \in A$, there is $b(a) \in B$ such that
\[ \nu u(x) - \langle g(x, a, b(a)), D\phi(x) \rangle - f(x, a, b(a)) \le -2\theta. \]
In view of (4.18), there is $\varepsilon(a) > 0$ such that if $|a - a'| < \varepsilon(a)$ and $|x - y| < \varepsilon(a)$, then we have
\[ \nu \phi(y) - \langle g(y, a', b(a)), D\phi(y) \rangle - f(y, a', b(a)) \le -\theta. \]
From the compactness of $A$, we may select $\{a_k\}_{k=1}^M$ such that
\[ A = \bigcup_{k=1}^M A_k, \quad \text{where } A_k := \{ a \in A \mid |a - a_k| < \varepsilon(a_k) \}. \]
Furthermore, we set $\hat A_1 := A_1$ and, inductively, $\hat A_k := A_k \setminus \bigcup_{j=1}^{k-1} A_j$, so that $\hat A_k \cap \hat A_j = \emptyset$ for $k \ne j$. We may also suppose that $\hat A_k \ne \emptyset$ for $k = 1, \ldots, M$.

For $\alpha \in \mathcal{A}$, we define
\[ \gamma_0[\alpha](t) := b(a_k) \quad \text{provided } \alpha(t) \in \hat A_k. \]
Now, setting $X(t) := X(t; x, \alpha, \gamma_0[\alpha])$, we find $t_0 > 0$ such that
\[ \nu \phi(X(t)) - \langle g(X(t), \alpha(t), \gamma_0[\alpha](t)), D\phi(X(t)) \rangle - f(X(t), \alpha(t), \gamma_0[\alpha](t)) \le -\theta \]
for $t \in [0, t_0]$. Multiplying the above by $e^{-\nu t}$ and then integrating, we obtain
\[ \phi(x) - e^{-\nu t_0}\phi(X(t_0)) - \int_0^{t_0} e^{-\nu t} f(X(t), \alpha(t), \gamma_0[\alpha](t))\, dt \le -\frac{\theta}{\nu}(1 - e^{-\nu t_0}). \]
Since $\alpha \in \mathcal{A}$ is arbitrary, we have
\[ u(x) + \frac{\theta}{\nu}(1 - e^{-\nu t_0}) \le \inf_{\alpha \in \mathcal{A}} \left\{ \int_0^{t_0} e^{-\nu t} f(X(t; x, \alpha, \gamma_0[\alpha]), \alpha(t), \gamma_0[\alpha](t))\, dt + e^{-\nu t_0} u(X(t_0; x, \alpha, \gamma_0[\alpha])) \right\}, \]
which contradicts Theorem 4.6 by taking the supremum over $\Gamma$. $\Box$
4.3 Stability
In this subsection, we present a stability result for viscosity solutions, which
is one of the most important properties for “solutions” as noted in section 1.
Thus, this result justiﬁes our notion of viscosity solutions.
However, since we will only use Proposition 4.8 below in section 7.3, the
reader may skip the proof.
First of all, for possibly discontinuous $F : \Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n \to \mathbb{R}$, we are concerned with
\[ F(x, u, Du, D^2 u) = 0 \quad \text{in } \Omega. \qquad (4.19) \]
We introduce the following notations:
\[ F_*(x, r, p, X) := \lim_{\varepsilon\to 0} \inf \left\{ F(y, s, q, Y) \;\middle|\; y \in \Omega \cap B_\varepsilon(x),\ |s - r| < \varepsilon,\ |q - p| < \varepsilon,\ \|Y - X\| < \varepsilon \right\}, \]
\[ F^*(x, r, p, X) := \lim_{\varepsilon\to 0} \sup \left\{ F(y, s, q, Y) \;\middle|\; y \in \Omega \cap B_\varepsilon(x),\ |s - r| < \varepsilon,\ |q - p| < \varepsilon,\ \|Y - X\| < \varepsilon \right\}. \]
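To see what the envelopes $F_*$ and $F^*$ do, consider a function of one variable with a jump: the lower envelope closes the jump from below and the upper one from above, and the two differ exactly at the discontinuity. The sketch below (an illustration with an arbitrary step function, not from the text) approximates both envelopes at a point by taking the inf/sup over a small neighbourhood.

```python
# Lower/upper semicontinuous envelopes of a step function, approximated by
# inf/sup over a small ball around x (toy illustration of F_* and F^*).
def F(y):
    return 0.0 if y < 0.0 else 1.0  # jumps at y = 0

def envelopes(x, eps=1e-6, samples=1001):
    ys = [x - eps + 2 * eps * i / (samples - 1) for i in range(samples)]
    vals = [F(y) for y in ys]
    return min(vals), max(vals)     # approximate (F_*(x), F^*(x))

print(envelopes(0.0))   # (0.0, 1.0): F_* and F^* differ at the jump
print(envelopes(0.5))   # (1.0, 1.0): F is continuous there
```

At continuity points both envelopes agree with $F$ itself; only the discontinuity set is affected.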
Definition. We call $u : \Omega \to \mathbb{R}$ a viscosity subsolution (resp., supersolution) of (4.19) if $u^*$ (resp., $u_*$) is a viscosity subsolution (resp., supersolution) of
\[ F_*(x, u, Du, D^2 u) \le 0 \quad \left( \text{resp., } F^*(x, u, Du, D^2 u) \ge 0 \right) \quad \text{in } \Omega. \]
We call $u : \Omega \to \mathbb{R}$ a viscosity solution of (4.19) if it is both a viscosity sub- and supersolution of (4.19).
Now, for given continuous functions $F_k : \Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n \to \mathbb{R}$, we set
\[ \liminf_{k\to\infty}{}_* F_k(x, r, p, X) := \lim_{k\to\infty} \inf \left\{ F_j(y, s, q, Y) \;\middle|\; \begin{array}{l} |y - x| < 1/k,\ |s - r| < 1/k, \\ |q - p| < 1/k,\ \|Y - X\| < 1/k \\ \text{and } j \ge k \end{array} \right\}, \]
\[ \limsup_{k\to\infty}{}^* F_k(x, r, p, X) := \lim_{k\to\infty} \sup \left\{ F_j(y, s, q, Y) \;\middle|\; \begin{array}{l} |y - x| < 1/k,\ |s - r| < 1/k, \\ |q - p| < 1/k,\ \|Y - X\| < 1/k \\ \text{and } j \ge k \end{array} \right\}. \]
Our stability result is as follows.

Proposition 4.8. Let $F_k : \Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n \to \mathbb{R}$ be continuous functions. For $(x, r, p, X) \in \Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n$, we set
\[ \underline{F}(x, r, p, X) := \liminf_{k\to\infty}{}_* F_k(x, r, p, X) \qquad \left( \text{resp., } \overline{F}(x, r, p, X) := \limsup_{k\to\infty}{}^* F_k(x, r, p, X) \right). \]
Let $u_k : \Omega \to \mathbb{R}$ be a viscosity subsolution (resp., supersolution) of
\[ F_k(x, u_k, Du_k, D^2 u_k) = 0 \quad \text{in } \Omega. \]
Setting $\overline{u}$ (resp., $\underline{u}$) by
\[ \overline{u}(x) := \lim_{k\to\infty} \sup \{ (u_j)^*(y) \mid y \in B_{1/k}(x) \cap \Omega,\ j \ge k \} \]
\[ \left( \text{resp., } \underline{u}(x) := \lim_{k\to\infty} \inf \{ (u_j)_*(y) \mid y \in B_{1/k}(x) \cap \Omega,\ j \ge k \} \right) \]
for $x \in \Omega$, then $\overline{u}$ (resp., $\underline{u}$) is a viscosity subsolution (resp., supersolution) of
\[ \underline{F}(x, \overline{u}, D\overline{u}, D^2\overline{u}) \le 0 \quad \left( \text{resp., } \overline{F}(x, \underline{u}, D\underline{u}, D^2\underline{u}) \ge 0 \right) \quad \text{in } \Omega. \]
Remark. We note that $\overline{u} \in \mathrm{USC}(\Omega)$, $\underline{u} \in \mathrm{LSC}(\Omega)$, $\underline{F} \in \mathrm{LSC}(\Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n)$ and $\overline{F} \in \mathrm{USC}(\Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n)$.
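The half-relaxed limits can be computed by hand in a simple example. Take $u_k(x) = \min(1, \max(-1, kx))$, a sequence of continuous functions steepening to a jump at $0$: the upper relaxed limit at $0$ is $1$ while the lower one is $-1$. The sketch below (an illustration, not from the text; the truncation parameters are arbitrary) approximates both limits at a point using finitely many indices $j \ge k$ and a neighbourhood of radius $1/k$.

```python
# Half-relaxed limits of u_k(x) = clamp(kx, -1, 1), which steepens to a
# jump at x = 0 (toy illustration of the limits in the stability result).
def u(k, x):
    return min(1.0, max(-1.0, k * x))

def relaxed_limits(x, k=1000, samples=201):
    # sample y in B_{1/k}(x) and indices j in [k, k+100] (finite truncation)
    ys = [x - 1.0 / k + (2.0 / k) * i / (samples - 1) for i in range(samples)]
    vals = [u(j, y) for j in range(k, k + 101) for y in ys]
    return max(vals), min(vals)   # approximate (upper limit, lower limit)

print(relaxed_limits(0.0))   # (1.0, -1.0): the two limits split at the jump
print(relaxed_limits(1.0))   # (1.0, 1.0): u_k -> 1 locally uniformly there
```

The upper limit is upper semicontinuous and the lower one lower semicontinuous, matching the Remark above.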
Proof. We only give a proof for subsolutions since the other case can be shown similarly.

Given $\phi \in C^2(\Omega)$, we let $x_0 \in \Omega$ be such that $0 = (\overline{u} - \phi)(x_0) > (\overline{u} - \phi)(x)$ for $x \in \Omega \setminus \{x_0\}$. We shall show that $\underline{F}(x_0, \overline{u}(x_0), D\phi(x_0), D^2\phi(x_0)) \le 0$.

We may choose $x_k \in B_r(x_0)$ (along a subsequence if necessary), where $r \in (0, \mathrm{dist}(x_0, \partial\Omega))$, such that
\[ \lim_{k\to\infty} x_k = x_0 \quad \text{and} \quad \lim_{k\to\infty} (u_k)^*(x_k) = \overline{u}(x_0). \qquad (4.20) \]
We select $y_k \in \overline{B}_r(x_0)$ such that $((u_k)^* - \phi)(y_k) = \sup_{\overline{B}_r(x_0)} ((u_k)^* - \phi)$. We may also suppose that $\lim_{k\to\infty} y_k = z$ for some $z \in \overline{B}_r(x_0)$ (taking a subsequence if necessary). Since $((u_k)^* - \phi)(y_k) \ge ((u_k)^* - \phi)(x_k)$, (4.20) implies
\[ 0 = \liminf_{k\to\infty} ((u_k)^* - \phi)(x_k) \le \liminf_{k\to\infty} ((u_k)^* - \phi)(y_k) \le \liminf_{k\to\infty} (u_k)^*(y_k) - \phi(z) \le \limsup_{k\to\infty} (u_k)^*(y_k) - \phi(z) \le (\overline{u} - \phi)(z). \]
Thus, this yields $z = x_0$ and $\lim_{k\to\infty} (u_k)^*(y_k) = \overline{u}(x_0)$. Hence, we see that $y_k \in B_r(x_0)$ for large $k \ge 1$. Since $(u_k)^* - \phi$ attains a maximum over $\overline{B}_r(x_0)$ at $y_k \in B_r(x_0)$, by the definition of $u_k$ (with Proposition 2.4 for $\Omega' = B_r(x_0)$), we have
\[ F_k(y_k, (u_k)^*(y_k), D\phi(y_k), D^2\phi(y_k)) \le 0, \]
which concludes the proof by taking $\liminf_{k\to\infty}{}_*$. $\Box$
5 Generalized boundary value problems

In order to obtain uniqueness of solutions of an ODE, we have to impose a certain initial or boundary condition. In the study of PDEs, we likewise need to impose appropriate conditions on $\partial\Omega$ for the uniqueness of solutions.

Following the standard PDE theory, we shall treat a few typical boundary conditions in this section.

Since we are mainly interested in degenerate elliptic PDEs, we cannot expect "solutions" to satisfy the given boundary condition on the whole of $\partial\Omega$. The simplest example is as follows: for $\Omega := (0, 1)$, consider the "degenerate" elliptic PDE
\[ -\frac{du}{dx} + u = 0 \quad \text{in } (0, 1). \]
Note that it is impossible to find a solution $u$ of the above such that $u(0) = u(1) = 1$.
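A quick check of this (an illustration, not from the text): every classical solution of $-u' + u = 0$ is $u(x) = Ce^x$, so $u(0) = 1$ forces $u(1) = e \ne 1$. Below, a forward Euler scheme (the step size is an arbitrary illustrative choice) integrates $u' = u$ from $u(0) = 1$ and recovers $u(1) \approx e$.

```python
import math

# -u' + u = 0, i.e. u' = u.  Starting from u(0) = 1, Euler integration
# gives u(1) close to e, so the second boundary condition u(1) = 1 cannot
# hold at the same time.
h = 1e-4
u_val = 1.0                 # u(0) = 1
steps = int(round(1.0 / h))
for _ in range(steps):
    u_val += h * u_val      # Euler step for u' = u
print(u_val, math.e)        # u(1) is near e = 2.718..., far from 1
```

This is why a "generalized" notion of boundary condition, which may fail pointwise on part of $\partial\Omega$, is needed for degenerate equations.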
Our plan is to propose a definition of "generalized" solutions for boundary value problems. For this purpose, we extend the notion of viscosity solutions to possibly discontinuous PDEs on $\overline\Omega$, while we normally consider them in $\Omega$.

For general $G : \overline\Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n \to \mathbb{R}$, we are concerned with
\[ G(x, u, Du, D^2 u) = 0 \quad \text{in } \overline\Omega. \qquad (5.1) \]
∗
(x, r, p, X) := lim
ε→0
inf
_
G(y, s, q, Y )
¸
¸
¸
¸
¸
y ∈ Ω ∩ B
ε
(x), [s −r[ < ε,
[q −p[ < ε, Y −X < ε
_
,
G
∗
(x, r, p, X) := lim
ε→0
sup
_
G(y, s, q, Y )
¸
¸
¸
¸
¸
y ∈ Ω ∩ B
ε
(x), [s −r[ < ε,
[q −p[ < ε, Y −X < ε
_
.
Definition. We call $u : \overline\Omega \to \mathbb{R}$ a viscosity subsolution (resp., supersolution) of (5.1) if, for any $\phi \in C^2(\overline\Omega)$,
\[ G_*(x, u^*(x), D\phi(x), D^2\phi(x)) \le 0 \quad \left( \text{resp., } G^*(x, u_*(x), D\phi(x), D^2\phi(x)) \ge 0 \right) \]
provided that $u^* - \phi$ (resp., $u_* - \phi$) attains its maximum (resp., minimum) at $x \in \overline\Omega$.

We call $u : \overline\Omega \to \mathbb{R}$ a viscosity solution of (5.1) if it is both a viscosity sub- and supersolution of (5.1).
Our comparison principle in this setting is as follows:

"Comparison principle in this setting"
\[ \left. \begin{array}{l} \text{viscosity subsolution } u \text{ of (5.1)} \\ \text{viscosity supersolution } v \text{ of (5.1)} \end{array} \right\} \Longrightarrow u \le v \quad \text{in } \overline\Omega. \]
Note that the boundary condition is contained in the definition.
Using the above new definition, we shall formulate boundary value problems in the viscosity sense. Given $F : \Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n \to \mathbb{R}$ and $B : \partial\Omega \times \mathbb{R} \times \mathbb{R}^n \times S^n \to \mathbb{R}$, we investigate the general boundary value problem
\[ \begin{cases} F(x, u, Du, D^2 u) = 0 & \text{in } \Omega, \\ B(x, u, Du, D^2 u) = 0 & \text{on } \partial\Omega. \end{cases} \qquad (5.2) \]
Setting $G$ by
\[ G(x, r, p, X) := \begin{cases} F(x, r, p, X) & \text{for } x \in \Omega, \\ B(x, r, p, X) & \text{for } x \in \partial\Omega, \end{cases} \]
we give the definition of the boundary value problem (5.2) in the viscosity sense.

Definition. We call $u : \overline\Omega \to \mathbb{R}$ a viscosity subsolution (resp., supersolution) of (5.2) if it is a viscosity subsolution (resp., supersolution) of (5.1), where $G$ is defined as above.

We call $u : \overline\Omega \to \mathbb{R}$ a viscosity solution of (5.2) if it is both a viscosity sub- and supersolution of (5.2).

Remark. When $F$ and $B$ are continuous and $G$ is given as above, $G_*$ and $G^*$ can be expressed in the following manner:
\[ G_*(x, r, p, X) = \begin{cases} F(x, r, p, X) & \text{for } x \in \Omega, \\ \min\{F(x, r, p, X), B(x, r, p, X)\} & \text{for } x \in \partial\Omega, \end{cases} \]
\[ G^*(x, r, p, X) = \begin{cases} F(x, r, p, X) & \text{for } x \in \Omega, \\ \max\{F(x, r, p, X), B(x, r, p, X)\} & \text{for } x \in \partial\Omega. \end{cases} \]
It is not hard to extend the existence and stability results corresponding to Theorem 4.3 and Proposition 4.8, respectively, to viscosity solutions in the above sense. However, it is not straightforward to show the comparison principle in this new setting. Thus, we shall concentrate our attention on the comparison principle, which implies the uniqueness (and continuity) of viscosity solutions.

The main difficulty in proving the comparison principle is that we have to "avoid" the boundary conditions for both the viscosity sub- and supersolutions.

To explain this, let us consider the case when $G$ is given by (5.2). Let $u$ and $v$ be, respectively, a viscosity sub- and supersolution of (5.1). We shall observe that the standard argument in Theorem 3.7 does not work.
For $\varepsilon > 0$, suppose that $(x, y) \mapsto u(x) - v(y) - (2\varepsilon)^{-1}|x - y|^2$ attains its maximum at $(x_\varepsilon, y_\varepsilon) \in \overline\Omega \times \overline\Omega$. Notice that there is NO reason to expect that $(x_\varepsilon, y_\varepsilon) \in \Omega \times \Omega$.

The worst case is that $(x_\varepsilon, y_\varepsilon) \in \partial\Omega \times \partial\Omega$. In fact, in view of Lemma 3.6, we find $X, Y \in S^n$ such that $((x_\varepsilon - y_\varepsilon)/\varepsilon, X) \in \overline{J}^{2,+}_{\overline\Omega} u(x_\varepsilon)$ and $((x_\varepsilon - y_\varepsilon)/\varepsilon, Y) \in \overline{J}^{2,-}_{\overline\Omega} v(y_\varepsilon)$, and the matrix inequalities in Lemma 3.6 hold for $X, Y$. Hence, we have
\[ \min\left\{ F\left(x_\varepsilon, u(x_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, X\right),\ B\left(x_\varepsilon, u(x_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, X\right) \right\} \le 0 \]
and
\[ \max\left\{ F\left(y_\varepsilon, v(y_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, Y\right),\ B\left(y_\varepsilon, v(y_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, Y\right) \right\} \ge 0. \]
However, even if we suppose that (3.21) holds for $F$ and $B$ "in $\overline\Omega$", we cannot get any contradiction when
\[ F\left(x_\varepsilon, u(x_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, X\right) \le 0 \le B\left(y_\varepsilon, v(y_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, Y\right) \]
or
\[ B\left(x_\varepsilon, u(x_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, X\right) \le 0 \le F\left(y_\varepsilon, v(y_\varepsilon), \frac{x_\varepsilon - y_\varepsilon}{\varepsilon}, Y\right). \]
It seems impossible to avoid this difficulty as long as we use $|x - y|^2/(2\varepsilon)$ as the "test function".

Our plan to go beyond this difficulty is to find new test functions $\phi_\varepsilon(x, y)$ (instead of $|x - y|^2/(2\varepsilon)$) so that the function $(x, y) \mapsto u(x) - v(y) - \phi_\varepsilon(x, y)$ attains its maximum over $\overline\Omega \times \overline\Omega$ at an interior point $(x_\varepsilon, y_\varepsilon) \in \Omega \times \Omega$. To this end, since we will use several "perturbation" techniques, we suppose two hypotheses on $F$. First, we shall suppose the following continuity of $F$ with respect to the $(p, X)$-variables:
\[ \begin{cases} \text{there is a modulus } \omega_0 \text{ such that} \\ |F(x, p, X) - F(x, q, Y)| \le \omega_0(|p - q| + \|X - Y\|) \\ \text{for } x \in \Omega,\ p, q \in \mathbb{R}^n,\ X, Y \in S^n. \end{cases} \qquad (5.3) \]
The next assumption is a bit stronger than the structure condition (3.21):
\[ \begin{cases} \text{there is a modulus } \hat\omega_F \text{ such that if } X, Y \in S^n \text{ and } \mu > 1 \text{ satisfy} \\ -3\mu \begin{pmatrix} I & O \\ O & I \end{pmatrix} \le \begin{pmatrix} X & O \\ O & -Y \end{pmatrix} \le 3\mu \begin{pmatrix} I & -I \\ -I & I \end{pmatrix}, \text{ then} \\ F(y, p, Y) - F(x, p, X) \le \hat\omega_F(|x - y|(1 + |p| + \mu|x - y|)) \\ \text{for } x, y \in \Omega,\ p \in \mathbb{R}^n,\ X, Y \in S^n. \end{cases} \qquad (5.4) \]
5.1 Dirichlet problem

First, we consider Dirichlet boundary value problems (Dirichlet problems for short) in the above sense.

Assuming that viscosity sub- and supersolutions are continuous on $\partial\Omega$, we will obtain the comparison principle for them.

We now recall the classical Dirichlet problem
\[ \begin{cases} \nu u + F(x, Du, D^2 u) = 0 & \text{in } \Omega, \\ u - g = 0 & \text{on } \partial\Omega. \end{cases} \qquad (5.5) \]
Note that the Dirichlet problem (5.5) in the viscosity sense is as follows:
\[ \text{subsolution} \Longleftrightarrow \begin{cases} \nu u + F(x, Du, D^2 u) \le 0 & \text{in } \Omega, \\ \min\{\nu u + F(x, Du, D^2 u), u - g\} \le 0 & \text{on } \partial\Omega, \end{cases} \]
and
\[ \text{supersolution} \Longleftrightarrow \begin{cases} \nu u + F(x, Du, D^2 u) \ge 0 & \text{in } \Omega, \\ \max\{\nu u + F(x, Du, D^2 u), u - g\} \ge 0 & \text{on } \partial\Omega. \end{cases} \]
We shall suppose the following property on the shape of $\Omega$, which may be called an "interior cone condition" (see Fig 5.1):
\[ \begin{cases} \text{for each } z \in \partial\Omega, \text{ there are } \hat r, \hat s \in (0, 1) \text{ such that} \\ x - rn(z) + r\xi \in \Omega \ \text{ for } x \in \overline\Omega \cap B_{\hat r}(z),\ r \in (0, \hat r) \text{ and } \xi \in B_{\hat s}(0). \end{cases} \qquad (5.6) \]
Here and later, we denote by $n(z)$ the unit outward normal vector at $z \in \partial\Omega$.

[Fig 5.1: the interior cone at $z \in \partial\Omega$: from each $x \in \overline\Omega \cap B_{\hat r}(z)$, the point $x - rn(z)$, together with the ball of radius $\hat s r$ around it, lies in $\Omega$.]
Theorem 5.1. Assume that ν > 0, (5.3), (5.4) and (5.6) hold. For g ∈ C(∂Ω), we let u and v : Ω̄ → R be, respectively, a viscosity sub- and supersolution of (5.5) such that
    liminf_{Ω∋x→z} u^*(x) ≥ u^*(z) and limsup_{Ω∋x→z} v_*(x) ≤ v_*(z) for z ∈ ∂Ω.    (5.7)
Then, u^* ≤ v_* in Ω̄.
Remark. Notice that (5.7) implies the continuity of u^* and v_* on ∂Ω.
Proof. Suppose that max_Ω̄(u^* − v_*) =: θ > 0. We simply write u and v for u^* and v_*, respectively.
Case 1: max_∂Ω(u − v) = θ. We choose z ∈ ∂Ω such that (u − v)(z) = θ. We shall divide our argument into three cases:
Case 1-1: u(z) > g(z). For ε, δ ∈ (0, 1), where δ > 0 will be fixed later, setting φ(x, y) := (2ε²)⁻¹|x − y − εδn(z)|² + δ|x − z|², we let (x_ε, y_ε) ∈ Ω̄ × Ω̄ be the maximum point of Φ(x, y) := u(x) − v(y) − φ(x, y) over Ω̄ × Ω̄.
Since z − εδn(z) ∈ Ω for small ε > 0 by (5.6), Φ(x_ε, y_ε) ≥ Φ(z, z − εδn(z)) implies that
    |x_ε − y_ε − εδn(z)|²/(2ε²) ≤ u(x_ε) − v(y_ε) − u(z) + v(z − εδn(z)) − δ|x_ε − z|².    (5.8)
Since |x_ε − y_ε| ≤ Mε for small ε > 0, where M := √2 (max_Ω̄ u − min_Ω̄ v − u(z) + v(z) + 1)^{1/2}, we may suppose that (x_ε, y_ε) → (x̂, x̂) and (x_ε − y_ε)/ε → ẑ for some x̂ ∈ Ω̄ and ẑ ∈ Rⁿ as ε → 0 along a subsequence (denoted by ε again). Thus, from the continuity (5.7) of v at z ∈ ∂Ω, (5.8) implies that
    θ ≤ u(x̂) − v(x̂) − δ|x̂ − z|²,
which yields x̂ = z. Moreover, we have
    lim_{ε→0} |x_ε − y_ε − εδn(z)|²/ε² = 0,
which implies that
    lim_{ε→0} |x_ε − y_ε|/ε = δ.    (5.9)
Furthermore, we note that y_ε = x_ε − εδn(z) + o(ε) ∈ Ω because of (5.6).
Applying Lemma 3.6 with Proposition 2.7 to u(x) + ε⁻¹δ⟨n(z), x⟩ − δ|x − z|² − 2⁻¹δ² and v(y) + ε⁻¹δ⟨n(z), y⟩, we find X, Y ∈ Sⁿ such that
    ( (x_ε − y_ε)/ε² − (δ/ε)n(z) + 2δ(x_ε − z), X + 2δI ) ∈ J^{2,+}_Ω̄ u(x_ε),    (5.10)
    ( (x_ε − y_ε)/ε² − (δ/ε)n(z), Y ) ∈ J^{2,−}_Ω̄ v(y_ε),    (5.11)
and
−
3
ε
2
_
I O
O I
_
≤
_
X O
O −Y
_
≤
3
ε
2
_
I −I
−I I
_
.
Putting p_ε := ε⁻²(x_ε − y_ε) − δε⁻¹n(z), by (5.3), we have
    F(x_ε, p_ε, X) − F(x_ε, p_ε + 2δ(x_ε − z), X + 2δI) ≤ ω₀(2δ|x_ε − z| + 2δ).    (5.12)
Since y_ε ∈ Ω, and u(x_ε) > g(x_ε) for small ε > 0 provided x_ε ∈ ∂Ω, in view of (5.10) and (5.11), we have
    ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε + 2δ(x_ε − z), X + 2δI).
Combining this with (5.12), by (5.4), we have
    ν(u(x_ε) − v(y_ε)) ≤ ω₀(2δ|x_ε − z| + 2δ) + ω̂_F( |x_ε − y_ε| (1 + |p_ε| + |x_ε − y_ε|/ε²) ).
Sending ε → 0 together with (5.9) in the above, we have
    νθ ≤ ω₀(2δ) + ω̂_F(2δ²),
which is a contradiction for small δ > 0, depending only on θ and ν.
Case 1-2: v(z) < g(z). To get a contradiction, we argue as above, replacing φ(x, y) by ψ(x, y) := (2ε²)⁻¹|x − y + εδn(z)|² + δ|x − z|², so that x_ε = y_ε − εδn(z) + o(ε) ∈ Ω for small ε > 0. Note that here we need the continuity of u on ∂Ω in (5.7), while the other condition in (5.7) is needed in Case 1-1. (See also the proof of Theorem 5.3 below.)
Case 1-3: u(z) ≤ g(z) and v(z) ≥ g(z). This does not occur because 0 < θ = (u − v)(z) ≤ 0.
Case 2: sup_∂Ω(u − v) < θ. In this case, using the standard test function |x − y|²/(2ε) (without the δ|x − z|² term), we can follow the same argument as in the proof of Theorem 3.7. □
Remark. Unfortunately, without assuming the continuity of viscosity solutions on ∂Ω, the comparison principle fails in general.
In fact, setting F(x, r, p, X) ≡ r and g(x) ≡ −1, consider the function
    u(x) := 0 for x ∈ Ω,  u(x) := −1 for x ∈ ∂Ω.
Note that u^* ≡ 0 and u_* ≡ u in Ω̄, which are, respectively, a viscosity sub- and supersolution of G(x, u, Du, D²u) = 0 in Ω̄. Therefore, this example shows that the comparison principle fails in general without assumption (5.7).
5.2 State constraint problem
The state constraint boundary condition arises in a typical optimal control problem. Thus, if the reader is more interested in the PDE theory, he/she may skip Proposition 5.2 below, which explains why we adopt the "state constraint boundary condition" in Theorem 5.3.
To explain our motivation, we shall consider Bellman equations of first-order:
    sup_{a∈A} {νu − ⟨g(x, a), Du⟩ − f(x, a)} = 0 in Ω.
Here, we use the notations in section 4.2.1.
We introduce the following set of controls: for x ∈ Ω̄,
    𝒜(x) := {α(·) ∈ 𝒜 | X(t; x, α) ∈ Ω̄ for t ≥ 0}.
We shall suppose that
    𝒜(x) ≠ ∅ for all x ∈ Ω̄.    (5.13)
Also, we suppose that
    (1) sup_{a∈A} ( ‖f(·, a)‖_{L^∞(Ω)} + ‖g(·, a)‖_{W^{1,∞}(Ω)} ) < ∞,
    (2) sup_{a∈A} |f(x, a) − f(y, a)| ≤ ω_f(|x − y|) for x, y ∈ Ω,    (5.14)
where ω_f ∈ 𝓜.
We are now interested in the following optimal cost functional:
    u(x) := inf_{α∈𝒜(x)} ∫₀^∞ e^{−νt} f(X(t; x, α), α(t)) dt.
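Such a value function can be approximated by value iteration on a grid, with the state constraint enforced by discarding controls whose Euler step leaves the closed domain. The 1-D dynamics, running cost, and discretization below are our own illustrative choices, not data from the text:

```python
import numpy as np

# Toy illustration: Omega = (0,1), controls a in {-1, 0, 1}, dynamics X' = a,
# running cost f(x,a) = x*(1-x) + 0.1*|a|, discount nu = 1,
# state constraint X(t) in [0,1].
nu, h = 1.0, 0.01                      # discount rate and Euler time step
xs = np.linspace(0.0, 1.0, 101)
controls = [-1.0, 0.0, 1.0]

def running_cost(x, a):
    return x * (1.0 - x) + 0.1 * abs(a)

u = np.zeros_like(xs)
for _ in range(3000):                  # value iteration; contraction factor 1 - nu*h
    candidates = []
    for a in controls:
        y = xs + h * a                 # one Euler step of the dynamics
        admissible = (y >= 0.0) & (y <= 1.0)   # controls leaving [0,1] are forbidden
        val = h * running_cost(xs, a) + (1.0 - nu * h) * np.interp(np.clip(y, 0, 1), xs, u)
        candidates.append(np.where(admissible, val, np.inf))
    new_u = np.min(candidates, axis=0)
    if np.max(np.abs(new_u - u)) < 1e-12:
        u = new_u
        break
    u = new_u

fmax = 0.25 + 0.1                      # maximum of the running cost
```

By the discounting, the discrete value function stays between 0 and fmax/ν, mirroring the bound on the continuous cost functional.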
Proposition 5.2. Assume that ν > 0, (5.13) and (5.14) hold. Then, we have:
(1) u is a viscosity subsolution of
    sup_{a∈A} {νu − ⟨g(x, a), Du⟩ − f(x, a)} ≤ 0 in Ω,
(2) u is a viscosity supersolution of
    sup_{a∈A} {νu − ⟨g(x, a), Du⟩ − f(x, a)} ≥ 0 in Ω̄.
Remark. We often say that u satisfies the state constraint boundary condition when it is a viscosity supersolution of
    "F(x, u, Du, D²u) ≥ 0 on ∂Ω".
Proof. In fact, at each x ∈ Ω, it is easy to verify that the dynamic programming principle (Theorem 4.4) holds for small T > 0. Thus, we may follow the proof of Theorem 4.5, replacing Rⁿ by Ω.
Hence, it only remains to show (2) on ∂Ω. Thus, suppose that there are x̂ ∈ ∂Ω, θ > 0 and φ ∈ C¹(Ω̄) such that (u_* − φ)(x̂) = 0 ≤ (u_* − φ)(x) for x ∈ Ω̄, and
    sup_{a∈A} {νφ(x̂) − ⟨g(x̂, a), Dφ(x̂)⟩ − f(x̂, a)} ≤ −2θ.
Then, we will get a contradiction.
Choose x_k ∈ Ω ∩ B_{1/k}(x̂) such that u_*(x̂) + k⁻¹ ≥ u(x_k) and |φ(x̂) − φ(x_k)| < 1/k. In view of (5.14), there is t₀ > 0 such that for any α ∈ 𝒜(x_k) and large k ≥ 1, we have
    νφ(X_k(t)) − ⟨g(X_k(t), α(t)), Dφ(X_k(t))⟩ − f(X_k(t), α(t)) ≤ −θ for t ∈ (0, t₀),
where X_k(t) := X(t; x_k, α). Thus, multiplying by e^{−νt} and then integrating over (0, t₀), we have
    φ(x_k) ≤ e^{−νt₀} φ(X_k(t₀)) + ∫₀^{t₀} e^{−νt} f(X_k(t), α(t)) dt − (θ/ν)(1 − e^{−νt₀}).
Since we have
    u(x_k) ≤ 2/k + e^{−νt₀} u(X_k(t₀)) + ∫₀^{t₀} e^{−νt} f(X_k(t), α(t)) dt − (θ/ν)(1 − e^{−νt₀}),
taking the infimum over 𝒜(x_k), we apply Theorem 4.4 to get
    0 ≤ 2/k − (θ/ν)(1 − e^{−νt₀}),
which is a contradiction for large k. □
Motivated by this proposition, we shall consider more general second-order elliptic PDEs.
Theorem 5.3. Assume that ν > 0, (5.3), (5.4) and (5.6) hold. Let u and v : Ω̄ → R be, respectively, a viscosity sub- and supersolution of
    νu + F(x, Du, D²u) ≤ 0 in Ω,
and
    νv + F(x, Dv, D²v) ≥ 0 in Ω̄.
Assume also that
    liminf_{Ω∋x→z} u^*(x) ≥ u^*(z) for z ∈ ∂Ω.    (5.15)
Then, u^* ≤ v_* in Ω̄.
Remark. In 1986, Soner first treated the state constraint problems for deterministic optimal control (i.e. first-order PDEs) by the viscosity solution approach.
We note that we do not need continuity of v on ∂Ω, while we need it for Dirichlet problems. For further discussion on the state constraint problems, we refer to Ishii–Koike (1996).
We also note that the proof below is easier than that for Dirichlet problems in section 5.1 because we only need to avoid the boundary condition for viscosity subsolutions.
Proof. Suppose that max_Ω̄(u^* − v_*) =: θ > 0. We shall write u and v for u^* and v_*, respectively, again.
We may suppose that max_∂Ω(u − v) = θ since otherwise we can use the standard procedure to get a contradiction; we choose z ∈ ∂Ω such that (u − v)(z) = θ.
Now, we proceed with the same argument as in Case 1-2 in the proof of Theorem 5.1 (although it is not written there in detail).
For ε, δ > 0, setting φ(x, y) := (2ε²)⁻¹|x − y + εδn(z)|² + δ|x − z|², where n(z) is the unit outward normal vector at z ∈ ∂Ω, we let (x_ε, y_ε) ∈ Ω̄ × Ω̄ be the maximum point of u(x) − v(y) − φ(x, y) over Ω̄ × Ω̄. As in the proof of Theorem 3.4, we have
    lim_{ε→0} (x_ε, y_ε) = (z, z) and lim_{ε→0} |x_ε − y_ε|/ε = δ.    (5.16)
Since x_ε = y_ε − εδn(z) + o(ε) ∈ Ω for small ε > 0, in view of Lemma 3.6 with Proposition 2.7, we can find X, Y ∈ Sⁿ such that
    ( (x_ε − y_ε)/ε² + (δ/ε)n(z) + 2δ(x_ε − z), X + 2δI ) ∈ J^{2,+}_Ω̄ u(x_ε),
    ( (x_ε − y_ε)/ε² + (δ/ε)n(z), Y ) ∈ J^{2,−}_Ω̄ v(y_ε),
and
    −(3/ε²) [I, O; O, I] ≤ [X, O; O, −Y] ≤ (3/ε²) [I, −I; −I, I].
Setting p_ε := ε⁻²(x_ε − y_ε) + δε⁻¹n(z), we have
    ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε + 2δ(x_ε − z), X + 2δI)
        ≤ ω₀(2δ|x_ε − z| + 2δ) + ω̂_F( |x_ε − y_ε| (1 + |p_ε| + |x_ε − y_ε|/ε²) ).
Hence, sending ε → 0 with (5.16), we have
    νθ ≤ ω₀(2δ) + ω̂_F(2δ²),
which is a contradiction for small δ > 0. □
5.3 Neumann problem
In the classical theory and the modern theory for weak solutions in the distribution sense, the (inhomogeneous) Neumann condition is given by
    ⟨n(x), Du(x)⟩ − g(x) = 0 on ∂Ω,
where n(x) denotes the unit outward normal vector at x ∈ ∂Ω.
In the Dirichlet and state constraint problems, we have used test functions which force one of x_ε and y_ε to be in Ω. However, in the Neumann boundary value problem (Neumann problem for short), we have to avoid the boundary condition for viscosity sub- and supersolutions simultaneously. Thus, we need a new test function, different from those in sections 5.1 and 5.2.
We first define the signed distance function from Ω by
    ρ(x) := inf{|x − y| : y ∈ ∂Ω} for x ∈ Ω^c,
    ρ(x) := −inf{|x − y| : y ∈ ∂Ω} for x ∈ Ω.
In order to obtain the comparison principle for the Neumann problem, we shall impose a hypothesis on Ω (see Fig 5.2):
    (1) There is r̂ > 0 such that Ω ⊂ (B_r̂(z + r̂n(z)))^c for z ∈ ∂Ω.
    (2) There is a neighborhood N of ∂Ω such that ρ ∈ C²(N), and Dρ(x) = n(x) for x ∈ ∂Ω.    (5.17)
Remark. Assumption (1) is called the "uniform exterior sphere condition". Since |x − z − r̂n(z)| ≥ r̂ for z ∈ ∂Ω and x ∈ Ω̄, we have
    ⟨n(z), x − z⟩ ≤ |x − z|²/(2r̂) for z ∈ ∂Ω and x ∈ Ω̄.    (5.18)
It is known that when ∂Ω is “smooth” enough, (2) of (5.17) holds true.
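For the unit ball Ω = B₁(0), assumption (1) of (5.17) holds with r̂ = 1, n(z) = z, and ρ(x) = |x| − 1 away from the origin. A quick numerical check of (5.18) and of Dρ(z) = n(z) on ∂Ω (our own worked example):

```python
import numpy as np

rng = np.random.default_rng(1)
r_hat = 1.0                              # exterior sphere radius for B_1(0)

def unit(dim):
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# (5.18): <n(z), x - z> <= |x - z|^2 / (2 r_hat) for z on the sphere, x in B_1.
ok_518 = True
for _ in range(5000):
    z = unit(3)                          # boundary point, n(z) = z
    x = rng.uniform(-1, 1, size=3)
    if np.linalg.norm(x) > 1:
        continue                         # only test points of the closed ball
    lhs = np.dot(z, x - z)
    rhs = np.dot(x - z, x - z) / (2 * r_hat)
    ok_518 = ok_518 and (lhs <= rhs + 1e-12)

# rho(x) = |x| - 1: its gradient at a boundary point z equals n(z) = z.
z = unit(3)
eps = 1e-6
grad = np.array([(np.linalg.norm(z + eps * e) - np.linalg.norm(z - eps * e)) / (2 * eps)
                 for e in np.eye(3)])
grad_err = np.linalg.norm(grad - z)
```

For the ball, (5.18) reduces to 4⟨x, z⟩ ≤ |x|² + 3, which holds whenever |x| ≤ 1.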
Fig 5.2 (the uniform exterior sphere condition: the ball B_r̂(z + r̂n(z)) touches ∂Ω at z from outside Ω)
We shall consider the inhomogeneous Neumann problem:
    νu + F(x, Du, D²u) = 0 in Ω,
    ⟨n(x), Du⟩ − g(x) = 0 on ∂Ω.    (5.19)
Remember that we adopt the definition of viscosity solutions of (5.19) for the corresponding G in (5.2).
Theorem 5.4. Assume that ν > 0, (5.3), (5.4) and (5.17) hold. For g ∈ C(∂Ω), we let u and v : Ω̄ → R be a viscosity sub- and supersolution of (5.19), respectively.
Then, u^* ≤ v_* in Ω̄.
Remark. We note that we do not need any continuity of u and v on ∂Ω.
Proof. As before, we write u and v for u^* and v_*, respectively.
As in the proof of Theorem 3.7, we suppose that max_Ω̄(u − v) =: θ > 0. Also, we may suppose that max_∂Ω(u − v) = θ.
Let z ∈ ∂Ω be a point such that (u − v)(z) = θ. For small δ > 0, we see that the mapping x ∈ Ω̄ → u(x) − v(x) − δ|x − z|² takes its strict maximum at z.
For small ε, δ > 0, where δ > 0 will be fixed later, setting
    φ(x, y) := (2ε)⁻¹|x − y|² − g(z)⟨n(z), x − y⟩ + δ(ρ(x) + ρ(y) + 2) + δ|x − z|²,
we let (x_ε, y_ε) be the maximum point of Φ(x, y) := u(x) − v(y) − φ(x, y) over (Ω̄ ∩ N) × (Ω̄ ∩ N), where N is the neighborhood in (5.17).
Since Φ(x_ε, y_ε) ≥ Φ(z, z), as before, we may extract a subsequence, which is denoted by (x_ε, y_ε) again, such that (x_ε, y_ε) → (x̂, x̂). We may suppose x̂ ∈ ∂Ω. Since Φ(x̂, x̂) ≥ limsup_{ε→0} Φ(x_ε, y_ε), we have
    u(x̂) − v(x̂) − δ|x̂ − z|² ≥ θ,
which yields x̂ = z. Moreover, we have
    lim_{ε→0} |x_ε − y_ε|²/ε = 0.    (5.20)
Applying Lemma 3.6 to u(x) − δ(ρ(x) + 1) − g(z)⟨n(z), x⟩ − δ|x − z|² and −v(y) − δ(ρ(y) + 1) + g(z)⟨n(z), y⟩, we find X, Y ∈ Sⁿ such that
    ( p_ε + δn(x_ε) + 2δ(x_ε − z), X + δD²ρ(x_ε) + 2δI ) ∈ J^{2,+}_Ω̄ u(x_ε),    (5.21)
    ( p_ε − δn(y_ε), Y − δD²ρ(y_ε) ) ∈ J^{2,−}_Ω̄ v(y_ε),    (5.22)
where p_ε := ε⁻¹(x_ε − y_ε) + g(z)n(z), and
    −(3/ε) [I, O; O, I] ≤ [X, O; O, −Y] ≤ (3/ε) [I, −I; −I, I].
When x_ε ∈ ∂Ω, by (5.18), we calculate in the following manner:
    ⟨n(x_ε), D_x φ(x_ε, y_ε)⟩ = ⟨n(x_ε), p_ε + δn(x_ε) + 2δ(x_ε − z)⟩
        ≥ −|x_ε − y_ε|²/(2r̂ε) + g(z)⟨n(x_ε), n(z)⟩ + δ − 2δ|x_ε − z|.
Hence, given δ > 0, we see that
    ⟨n(x_ε), D_x φ(x_ε, y_ε)⟩ − g(x_ε) ≥ δ/2 for small ε > 0.
Thus, by (5.21), this yields
    νu(x_ε) + F(x_ε, p_ε + δn(x_ε) + 2δ(x_ε − z), X + δD²ρ(x_ε) + 2δI) ≤ 0.    (5.23)
Of course, if x_ε ∈ Ω, then the above inequality holds from the definition.
On the other hand, similarly, if y_ε ∈ ∂Ω, then
    ⟨n(y_ε), −D_y φ(x_ε, y_ε)⟩ − g(y_ε) ≤ −δ/2 for small ε > 0.
Hence, by (5.22), we have
    νv(y_ε) + F(y_ε, p_ε − δn(y_ε), Y − δD²ρ(y_ε)) ≥ 0.    (5.24)
Using (5.3) and (5.4), by (5.23) and (5.24), we have
    ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε, X) + 2ω₀(δM)
        ≤ ω̂_F( |x_ε − y_ε| (1 + |p_ε| + |x_ε − y_ε|/ε) ) + 2ω₀(δM),
where M := 3 + sup_{x∈N∩Ω̄} (2|x − z| + ‖D²ρ(x)‖). Sending ε → 0 with (5.20) in the above, we have
    νθ ≤ 2ω₀(δM),
which is a contradiction for small δ > 0. □
5.4 Growth condition at |x| → ∞
In the standard PDE theory, we often consider PDEs in unbounded domains, typically in Rⁿ. In this subsection, we present a technique to establish the comparison principle for viscosity solutions of
    νu + F(x, Du, D²u) = 0 in Rⁿ.    (5.25)
We remind the reader that in the proofs of comparison results, we always suppose max_Ω̄(u − v) > 0, where u and v are, respectively, a viscosity sub- and supersolution. However, when Ω := Rⁿ, the supremum of u − v might only be "attained" as |x| → ∞. Thus, we have to choose a test function φ(x, y) which forces u(x) − v(y) − φ(x, y) to take its maximum at a point in a compact set.
We rewrite the structure condition (3.21) for Rⁿ:
    There is an ω_F ∈ 𝓜 such that if X, Y ∈ Sⁿ and µ > 1 satisfy
        −3µ [I, O; O, I] ≤ [X, O; O, −Y] ≤ 3µ [I, −I; −I, I],
    then F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ ω_F( |x − y| (1 + µ|x − y|) ) for x, y ∈ Rⁿ.    (5.26)
We will also need the Lipschitz continuity of (p, X) → F(x, p, X), which is stronger than (5.3):
    There is µ₀ > 0 such that |F(x, p, X) − F(x, q, Y)| ≤ µ₀(|p − q| + ‖X − Y‖) for x ∈ Rⁿ, p, q ∈ Rⁿ, X, Y ∈ Sⁿ.    (5.27)
Proposition 5.5. Assume that ν > 0, (5.26) and (5.27) hold. Let u and v : Rⁿ → R be, respectively, a viscosity sub- and supersolution of (5.25). Assume also that there is C₀ > 0 such that
    u^*(x) ≤ C₀(1 + |x|) and v_*(x) ≥ −C₀(1 + |x|) for x ∈ Rⁿ.    (5.28)
Then, u^* ≤ v_* in Rⁿ.
Proof. We shall simply write u and v for u^* and v_*, respectively.
For δ > 0, we set θ_δ := sup_{x∈Rⁿ} (u(x) − v(x) − 2δ(1 + |x|²)). We note that (5.28) implies that there is z_δ ∈ Rⁿ such that θ_δ = u(z_δ) − v(z_δ) − 2δ(1 + |z_δ|²).
Set θ := limsup_{δ→0} θ_δ ∈ R ∪ {∞}.
When θ ≤ 0, since
    (u − v)(x) ≤ 2δ(1 + |x|²) + θ_δ for δ > 0 and x ∈ Rⁿ,
we have u ≤ v in Rⁿ.
Thus, we may suppose θ ∈ (0, ∞]. Setting Φ_δ(x, y) := u(x) − v(y) − (2ε)⁻¹|x − y|² − δ(1 + |x|²) − δ(1 + |y|²) for ε, δ > 0, where δ > 0 will be fixed later, in view of (5.28), we can choose (x_ε, y_ε) ∈ Rⁿ × Rⁿ such that
    Φ_δ(x_ε, y_ε) = max_{(x,y)∈Rⁿ×Rⁿ} Φ_δ(x, y) ≥ θ_δ.
As before, extracting a subsequence if necessary, we may suppose that
    lim_{ε→0} |x_ε − y_ε|²/ε = 0.    (5.29)
By Lemma 3.6 with Proposition 2.7, putting p_ε := (x_ε − y_ε)/ε, we find X, Y ∈ Sⁿ such that
    (p_ε + 2δx_ε, X + 2δI) ∈ J^{2,+} u(x_ε),
    (p_ε − 2δy_ε, Y − 2δI) ∈ J^{2,−} v(y_ε),
and
    −(3/ε) [I, O; O, I] ≤ [X, O; O, −Y] ≤ (3/ε) [I, −I; −I, I].
Hence, we have
    ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε − 2δy_ε, Y − 2δI) − F(x_ε, p_ε + 2δx_ε, X + 2δI)
        ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε, X) + 2δµ₀(2 + |x_ε| + |y_ε|)
        ≤ ω_F( |x_ε − y_ε| (1 + |x_ε − y_ε|/ε) ) + νδ(2 + |x_ε|² + |y_ε|²) + Cδ,
where C = C(µ₀, ν) > 0 is independent of ε, δ > 0. For the last inequality, we used "2ab ≤ τa² + τ⁻¹b² for τ > 0".
Therefore, since Φ_δ(x_ε, y_ε) ≥ θ_δ, we have
    νθ_δ ≤ ω_F( |x_ε − y_ε| (1 + |x_ε − y_ε|/ε) ) + Cδ.
Sending ε → 0 in the above together with (5.29), we get νθ_δ ≤ Cδ for every small δ > 0, which contradicts θ = limsup_{δ→0} θ_δ > 0. □
6 L^p-viscosity solutions
In this section, we discuss the L^p-viscosity solution theory for uniformly elliptic PDEs:
    F(x, Du, D²u) = f(x) in Ω,    (6.1)
where F : Ω × Rⁿ × Sⁿ → R and f : Ω → R are given. Since we will use the fact that u + C (for a constant C ∈ R) satisfies the same (6.1), we suppose that F does not depend on u itself. Furthermore, to compare with classical results, we prefer to have the inhomogeneous term (the right hand side of (6.1)).
The aim in this section is to obtain a priori estimates for L^p-viscosity solutions without assuming any continuity of the mapping x → F(x, q, X), and then to establish an existence result for L^p-viscosity solutions of Dirichlet problems.
Remark. In general, without the continuity assumption on x → F(x, p, X), we cannot expect uniqueness of L^p-viscosity solutions even if X → F(x, p, X) is uniformly elliptic; Nadirashvili (1997) gave a counterexample to uniqueness.
6.1 A brief history
Let us simply consider the Poisson equation in a "smooth" domain Ω with the zero-Dirichlet boundary condition:
    −Δu = f in Ω,
    u = 0 on ∂Ω.    (6.2)
In the literature of the regularity theory for uniformly elliptic PDEs of second-order, it is well-known that
    "if f ∈ C^σ(Ω̄) for some σ ∈ (0, 1), then u ∈ C^{2,σ}(Ω̄)".    (6.3)
Here, C^σ(U) (for a set U ⊂ Rⁿ) denotes the set of functions f : U → R such that
    sup_{x∈U} |f(x)| + sup_{x,y∈U, x≠y} |f(x) − f(y)|/|x − y|^σ < ∞.
Also, C^{k,σ}(U), for an integer k ≥ 1, denotes the set of functions f : U → R such that for any multi-index α = (α₁, ..., αₙ) ∈ {0, 1, 2, ...}ⁿ with |α| := Σ_{i=1}^n αᵢ ≤ k, we have D^α f ∈ C^σ(U), where
    D^α f := ∂^{|α|} f / (∂x₁^{α₁} ··· ∂xₙ^{αₙ}).
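As a concrete illustration of these seminorms, f(x) = √x lies in C^{1/2}([0, 1]) but not in C^{0,1}: its Hölder quotient with σ = 1/2 is bounded (by 1), while the σ = 1 quotient blows up near the origin as the grid is refined. A small computation (our own example, not from the text):

```python
import numpy as np

def holder_seminorm(f, xs, sigma):
    # sup over distinct grid points of |f(x)-f(y)| / |x-y|^sigma
    x = xs[:, None]
    y = xs[None, :]
    d = np.abs(x - y)
    mask = d > 0
    return np.max(np.abs(f(x) - f(y))[mask] / d[mask] ** sigma)

coarse = np.linspace(0.0, 1.0, 101)      # spacing 1/100
fine = np.linspace(0.0, 1.0, 1001)       # spacing 1/1000
s_half_coarse = holder_seminorm(np.sqrt, coarse, 0.5)
s_half_fine = holder_seminorm(np.sqrt, fine, 0.5)
s_one_coarse = holder_seminorm(np.sqrt, coarse, 1.0)    # ~ 1/sqrt(h), diverges
s_one_fine = holder_seminorm(np.sqrt, fine, 1.0)
```

The σ = 1/2 seminorm equals 1 on every grid (the supremum is attained at the pair (0, h)), while the σ = 1 value grows like h^{−1/2}.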
These function spaces are called Hölder continuous spaces, and the implication in (6.3) is called the Schauder regularity (estimates). Since the PDE in (6.2) is linear, the regularity result (6.3) may be extended to
    "if f ∈ C^{k,σ}(Ω̄) for some σ ∈ (0, 1), then u ∈ C^{k+2,σ}(Ω̄)".    (6.4)
Moreover, we obtain that (6.4) holds for the following PDE:
    −trace(A(x)D²u(x)) = f(x) in Ω,    (6.5)
where the coefficient A(·) ∈ C^∞(Ω̄, Sⁿ) satisfies
    λ|ξ|² ≤ ⟨A(x)ξ, ξ⟩ ≤ Λ|ξ|² for ξ ∈ Rⁿ and x ∈ Ω.
Furthermore, we can obtain (6.4) even for linear second-order uniformly elliptic PDEs if the coefficients are smooth enough.
Besides the Schauder estimates, we know a different kind of regularity result: for a solution u of (6.5) and an integer k ∈ {0, 1, 2, ...},
    "if f ∈ W^{k,p}(Ω) for some p > 1, then u ∈ W^{k+2,p}(Ω)".    (6.6)
Here, for an open set O ⊂ Rⁿ, we say f ∈ L^p(O) if |f|^p is integrable in O, and f ∈ W^{k,p}(O) if for any multi-index α with |α| ≤ k, D^α f ∈ L^p(O). Notice that L^p(Ω) = W^{0,p}(Ω).
This (6.6) is called the L^p regularity (estimates). For later convenience, for p ≥ 1, we recall the standard norms of L^p(O) and W^{k,p}(O), respectively:
    ‖u‖_{L^p(O)} := ( ∫_O |u(x)|^p dx )^{1/p}, and ‖u‖_{W^{k,p}(O)} := Σ_{|α|≤k} ‖D^α u‖_{L^p(O)}.
In Appendix, we will use the quantity ‖u‖_{L^p(Ω)} even for p ∈ (0, 1), although this is not a "norm" (i.e. the triangle inequality does not hold).
We refer to [13] for the details on the Schauder and L^p regularity theory for second-order uniformly elliptic PDEs.
As is known, a difficulty occurs when we drop the smoothness of the A_ij. An extreme case is that we only suppose that the A_ij are bounded (possibly discontinuous) but still satisfy the uniform ellipticity. In this case, what can we say about the regularity of "solutions" of (6.5)?
The extreme case for PDEs in divergence form is the following:
    −Σ_{i,j=1}^n ∂/∂x_i ( A_ij(x) ∂u/∂x_j (x) ) = f(x) in Ω.    (6.7)
De Giorgi (1957) first obtained Hölder continuity estimates on weak solutions of (6.7) in the distribution sense; that is, for any φ ∈ C₀^∞(Ω),
    ∫_Ω ( ⟨A(x)Du(x), Dφ(x)⟩ − f(x)φ(x) ) dx = 0.
Here, we set
    C₀^∞(Ω) := { φ : Ω → R | φ(·) is infinitely differentiable, and supp φ is compact in Ω }.
We refer to [14] for the details of De Giorgi's proof and for a different proof by Moser (1960).
Concerning the corresponding PDE in nondivergence form, by a stochastic approach, Krylov–Safonov (1979) first showed the Hölder continuity estimates on "strong" solutions of
    −trace(A(x)D²u(x)) = f(x) in Ω.    (6.8)
Afterward, Trudinger (1980) (see [13]) gave a purely analytic proof of it.
Since these results appeared before the viscosity solution was born, they could only deal with strong solutions, which satisfy PDEs in the a.e. sense.
In 1989, Caffarelli proved the same Hölder estimate for viscosity solutions of fully nonlinear second-order uniformly elliptic PDEs.
To show Hölder continuity of solutions, it is essential to prove the following "Harnack inequality" for nonnegative solutions. In fact, we split the proof of the Harnack inequality into two parts:
    weak Harnack inequality for "super"solutions
  + local maximum principle for "sub"solutions
  ⟹ Harnack inequality for "solutions"
In section 6.4, we will show that L^p-viscosity solutions satisfy the (interior) Hölder continuity estimates.
6.2 Definition and basic facts
We first recall the definition of L^p-strong solutions of general PDEs:
    F(x, u, Du, D²u) = f(x) in Ω.    (6.9)
We will use the following function space:
    W^{2,p}_loc(Ω) := { u : Ω → R | ζu ∈ W^{2,p}(Ω) for all ζ ∈ C₀^∞(Ω) }.
Throughout this section, we suppose at least
    p > n/2,
so that u ∈ W^{2,p}_loc(Ω) has the second-order Taylor expansion at almost all points in Ω, and that u ∈ C(Ω).
Definition. We call u ∈ C(Ω) an L^p-strong subsolution (resp., supersolution, solution) of (6.9) if u ∈ W^{2,p}_loc(Ω), and
    F(x, u(x), Du(x), D²u(x)) ≤ f(x) (resp., ≥ f(x), = f(x)) a.e. in Ω.
Now, we present the definition of L^p-viscosity solutions of (6.9).
Definition. We call u ∈ C(Ω) an L^p-viscosity subsolution (resp., supersolution) of (6.9) if for φ ∈ W^{2,p}_loc(Ω), we have
    lim_{ε→0} ess inf_{B_ε(x)} ( F(y, u(y), Dφ(y), D²φ(y)) − f(y) ) ≤ 0
    ( resp., lim_{ε→0} ess sup_{B_ε(x)} ( F(y, u(y), Dφ(y), D²φ(y)) − f(y) ) ≥ 0 )
provided that u − φ takes its local maximum (resp., minimum) at x ∈ Ω.
We call u ∈ C(Ω) an L^p-viscosity solution of (6.9) if it is both an L^p-viscosity sub- and supersolution of (6.9).
Remark. Although we will not explicitly utilize the above definition, we recall the definitions of ess sup_A and ess inf_A of h : A → R, where A ⊂ Rⁿ is a measurable set:
    ess sup_A h := inf{ M ∈ R | h ≤ M a.e. in A },
and
    ess inf_A h := sup{ M ∈ R | h ≥ M a.e. in A }.
Coming back to (6.1), we give a list of assumptions on F : Ω × Rⁿ × Sⁿ → R:
    (1) F(x, 0, O) = 0 for x ∈ Ω,
    (2) x → F(x, q, X) is measurable for (q, X) ∈ Rⁿ × Sⁿ,
    (3) F is uniformly elliptic.    (6.10)
We recall the uniform ellipticity condition of X → F(x, q, X), with the constants 0 < λ ≤ Λ, from section 3.1.2.
For the right hand side f : Ω → R, we suppose that
    f ∈ L^p(Ω) for p ≥ n.    (6.11)
We will often suppose the Lipschitz continuity of F with respect to q ∈ Rⁿ:
    there is µ ≥ 0 such that |F(x, q, X) − F(x, q′, X)| ≤ µ|q − q′| for (x, q, q′, X) ∈ Ω × Rⁿ × Rⁿ × Sⁿ.    (6.12)
Remark. We note that (1) of (6.10) and (6.12) imply that F has linear growth in Du:
    |F(x, q, O)| ≤ µ|q| for x ∈ Ω and q ∈ Rⁿ.
Remark. We note that when x → F(x, q, X) and x → f(x) are continuous, the definition of L^p-viscosity subsolution (resp., supersolution) of (6.1) coincides with the standard one under assumptions (6.10) and (6.12). For a proof, we refer to the paper by Caffarelli–Crandall–Kocan–Świȩch [5].
In this book, we only study the case of (6.11), but most of the results can be extended to the case when p > p⋆ = p⋆(Λ, λ, n) ∈ (n/2, n), where p⋆ is the so-called Escauriaza constant (see the references in [4]).
The following proposition is obvious, but it will be very convenient for the study of L^p-viscosity solutions of (6.1) under assumptions (6.10), (6.11) and (6.12).
Proposition 6.1. Assume that (6.10), (6.11) and (6.12) hold. If u ∈ C(Ω) is an L^p-viscosity subsolution (resp., supersolution) of (6.1), then it is an L^p-viscosity subsolution (resp., supersolution) of
    T⁻(D²u) − µ|Du| ≤ f in Ω    ( resp., T⁺(D²u) + µ|Du| ≥ f in Ω ).
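For a numerical sanity check of the Pucci-type bounds behind Proposition 6.1, one common convention for the extremal operators is T⁻(X) = λ tr(X⁺) − Λ tr(X⁻) and T⁺(X) = Λ tr(X⁺) − λ tr(X⁻); this convention is our assumption here, since the text's section 3.1.2 fixes the actual definition. With it, every linear uniformly elliptic operator is sandwiched: T⁻(X) ≤ trace(AX) ≤ T⁺(X) whenever λI ≤ A ≤ ΛI:

```python
import numpy as np

lam, Lam = 0.5, 2.0
rng = np.random.default_rng(2)

def pucci_minus(X):
    # Assumed convention: lam * (sum of positive eigenvalues)
    #                   - Lam * (sum of |negative eigenvalues|)
    ev = np.linalg.eigvalsh(X)
    return lam * ev[ev > 0].sum() - Lam * (-ev[ev < 0]).sum()

def pucci_plus(X):
    ev = np.linalg.eigvalsh(X)
    return Lam * ev[ev > 0].sum() - lam * (-ev[ev < 0]).sum()

def random_sym(n):
    M = rng.normal(size=(n, n))
    return (M + M.T) / 2

def random_A(n):
    # A = Q diag(d) Q^T with d in [lam, Lam], so lam*I <= A <= Lam*I.
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    d = rng.uniform(lam, Lam, size=n)
    return Q @ np.diag(d) @ Q.T

ok = True
for _ in range(200):
    X = random_sym(4)
    A = random_A(4)
    t = np.trace(A @ X)
    ok = ok and (pucci_minus(X) - 1e-9 <= t <= pucci_plus(X) + 1e-9)
```

The sandwich follows by expanding trace(AX) in an eigenbasis of X, since each Rayleigh quotient ⟨Av, v⟩ lies in [λ, Λ].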
We recall the Aleksandrov–Bakelman–Pucci (ABP for short) maximum principle, which will play an essential role in this section (and also in Appendix).
To this end, we introduce the notion of "upper contact sets": for u : O → R, we set
    Γ[u, O] := { x ∈ O | there is p ∈ Rⁿ such that u(y) ≤ u(x) + ⟨p, y − x⟩ for all y ∈ O }.
Proposition 6.2. (ABP maximum principle) For µ ≥ 0, there is C₀ := C₀(Λ, λ, n, µ, diam(Ω)) > 0 such that if, for f ∈ Lⁿ(Ω), u ∈ C(Ω̄) is an Lⁿ-viscosity subsolution (resp., supersolution) of
    T⁻(D²u) − µ|Du| ≤ f in Ω⁺[u]    ( resp., T⁺(D²u) + µ|Du| ≥ f in Ω⁺[−u] ),
then
    max_Ω̄ u ≤ max_∂Ω u⁺ + diam(Ω) C₀ ‖f⁺‖_{Lⁿ(Γ[u,Ω]∩Ω⁺[u])}
    ( resp., max_Ω̄ (−u) ≤ max_∂Ω (−u)⁺ + diam(Ω) C₀ ‖f⁻‖_{Lⁿ(Γ[−u,Ω]∩Ω⁺[−u])} ),
Fig 6.1 (the upper contact set Γ[u, Ω] of the graph y = u(x) over Ω)
where
    Ω⁺[u] := { x ∈ Ω | u(x) > 0 }.
The next proposition is a key tool in the study of L^p-viscosity solutions, particularly when f is not supposed to be continuous. The proof will be given in Appendix.
Proposition 6.3. Assume that (6.11) holds for p ≥ n. For any µ ≥ 0, there are an L^p-strong subsolution u and an L^p-strong supersolution v ∈ C(B̄₁) ∩ W^{2,p}_loc(B₁), respectively, of
    T⁺(D²u) + µ|Du| ≤ f in B₁, u = 0 on ∂B₁,
and
    T⁻(D²v) − µ|Dv| ≥ f in B₁, v = 0 on ∂B₁.
Moreover, we have the following estimates: for w = u or w = v, and small δ ∈ (0, 1), there is Ĉ = Ĉ(Λ, λ, n, µ, δ) > 0 such that
    ‖w‖_{W^{2,p}(B_δ)} ≤ Ĉ ‖f‖_{L^p(B₁)}.
Remark. In view of the proof (Step 2) of Proposition 6.2, we see that
    −C‖f⁻‖_{Lⁿ(B₁)} ≤ w ≤ C‖f⁺‖_{Lⁿ(B₁)} in B₁, where w = u, v.
6.3 Harnack inequality
In this subsection, we often use the cube Q_r(x) for r > 0 and x = ᵗ(x₁, ..., xₙ) ∈ Rⁿ:
    Q_r(x) := { y = ᵗ(y₁, ..., yₙ) | |xᵢ − yᵢ| < r/2 for i = 1, ..., n },
and Q_r := Q_r(0). Notice that
    B_{r/2}(x) ⊂ Q_r(x) ⊂ B_{r√n/2}(x) for r > 0.
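These inclusions only compare the Euclidean norm with the max-norm, and can be confirmed by sampling:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 4, 1.7
x = rng.normal(size=n)

# Random points in a large cube around x, classified by the three sets.
pts = x + rng.uniform(-r, r, size=(20000, n))
in_ball_half = np.linalg.norm(pts - x, axis=1) < r / 2
in_cube = np.all(np.abs(pts - x) < r / 2, axis=1)      # Q_r(x): |x_i - y_i| < r/2
in_ball_big = np.linalg.norm(pts - x, axis=1) < r * np.sqrt(n) / 2

incl_1 = np.all(in_cube[in_ball_half])   # B_{r/2}(x) inside Q_r(x)
incl_2 = np.all(in_ball_big[in_cube])    # Q_r(x) inside B_{r*sqrt(n)/2}(x)
```

The first inclusion holds because |y − x|₂ < r/2 forces every coordinate difference below r/2; the second because |y − x|₂ ≤ √n · max_i |yᵢ − xᵢ|.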
We will prove the next two propositions in Appendix.
Proposition 6.4. (Weak Harnack inequality) For µ ≥ 0, there are p₀ = p₀(Λ, λ, n, µ) > 0 and C₁ := C₁(Λ, λ, n, µ) > 0 such that if u ∈ C(B_{2√n}) is a nonnegative L^p-viscosity supersolution of
    T⁺(D²u) + µ|Du| ≥ 0 in B_{2√n},
then we have
    ‖u‖_{L^{p₀}(Q₁)} ≤ C₁ inf_{Q_{1/2}} u.
Remark. Notice that p₀ might be smaller than 1.
Proposition 6.5. (Local maximum principle) For µ ≥ 0 and q > 0, there is C₂ = C₂(Λ, λ, n, µ, q) > 0 such that if u ∈ C(B_{2√n}) is an L^p-viscosity subsolution of
    T⁻(D²u) − µ|Du| ≤ 0 in B_{2√n},
then we have
    sup_{Q₁} u ≤ C₂ ‖u⁺‖_{L^q(Q₂)}.
Remark. Notice that we do not suppose that u ≥ 0 in Proposition 6.5.
6.3.1 Linear growth
The next corollary is a direct consequence of Propositions 6.4 and 6.5.
Corollary 6.6. For µ ≥ 0, there is C₃ = C₃(Λ, λ, n, µ) > 0 such that if u ∈ C(B_{2√n}) is a nonnegative L^p-viscosity sub- and supersolution of
    T⁻(D²u) − µ|Du| ≤ 0 and T⁺(D²u) + µ|Du| ≥ 0 in B_{2√n},
respectively, then we have
    sup_{Q₁} u ≤ C₃ inf_{Q₁} u.
In order to treat inhomogeneous PDEs, we will need the following corollary:
Corollary 6.7. For µ ≥ 0 and f ∈ L^p(B_{3√n}) with p ≥ n, there is C₄ = C₄(Λ, λ, n, µ) > 0 such that if u ∈ C(B_{3√n}) is a nonnegative L^p-viscosity sub- and supersolution of
    T⁻(D²u) − µ|Du| ≤ f and T⁺(D²u) + µ|Du| ≥ f in B_{2√n},
respectively, then we have
    sup_{Q₁} u ≤ C₄ ( inf_{Q₁} u + ‖f‖_{L^p(B_{3√n})} ).
Proof. According to Proposition 6.3, we find v, w ∈ C(B̄_{3√n}) ∩ W^{2,p}_loc(B_{3√n}) such that
    T⁺(D²v) + µ|Dv| ≤ −f⁺ a.e. in B_{3√n}, v = 0 on ∂B_{3√n},
and
    T⁻(D²w) − µ|Dw| ≥ f⁻ a.e. in B_{3√n}, w = 0 on ∂B_{3√n}.
In view of Proposition 6.3 and its Remark, we can choose Ĉ = Ĉ(Λ, λ, n, µ) > 0 such that
    0 ≤ −v ≤ Ĉ‖f⁺‖_{L^p(B_{3√n})} in B_{3√n}, ‖v‖_{W^{2,p}(B_{2√n})} ≤ Ĉ‖f⁺‖_{L^p(B_{3√n})},
and
    0 ≤ w ≤ Ĉ‖f⁻‖_{L^p(B_{3√n})} in B_{3√n}, ‖w‖_{W^{2,p}(B_{2√n})} ≤ Ĉ‖f⁻‖_{L^p(B_{3√n})}.
Since v, w ∈ W^{2,p}(B_{2√n}), it is easy to verify that u₁ := u + v and u₂ := u + w are, respectively, an L^p-viscosity sub- and supersolution of
    T⁻(D²u₁) − µ|Du₁| ≤ 0 and T⁺(D²u₂) + µ|Du₂| ≥ 0 in B_{2√n}.
Since v ≤ 0 in B_{3√n}, applying Proposition 6.5 to u₁, for any q > 0, we find C₂(q) > 0 such that
    sup_{Q₁} u ≤ sup_{Q₁} u₁ + Ĉ‖f⁺‖_{L^p(B_{3√n})}
        ≤ C₂(q)‖(u₁)⁺‖_{L^q(Q₂)} + Ĉ‖f⁺‖_{L^p(B_{3√n})}
        ≤ C₂(q)‖u‖_{L^q(Q₂)} + Ĉ‖f⁺‖_{L^p(B_{3√n})}.    (6.13)
On the other hand, applying Proposition 6.4 to u₂, there is p₀ > 0 such that
    ‖u‖_{L^{p₀}(Q₂)} ≤ ‖u₂‖_{L^{p₀}(Q₂)} ≤ C₁ inf_{Q₁} u₂ ≤ C₁ ( inf_{Q₁} u + Ĉ‖f⁻‖_{L^p(B_{3√n})} ).    (6.14)
Therefore, combining (6.14) with (6.13) for q = p₀, we can find C₄ > 0 such that the assertion holds. □
Corollary 6.8. (Harnack inequality, final version) Assume that (6.10), (6.11) and (6.12) hold. If u ∈ C(Ω) is a nonnegative L^p-viscosity solution of (6.1), and if B_{3√n r}(x) ⊂ Ω for r ∈ (0, 1], then
    sup_{Q_r(x)} u ≤ C₄ ( inf_{Q_r(x)} u + r^{2−n/p} ‖f‖_{L^p(Ω)} ),
where C₄ > 0 is the constant in Corollary 6.7.
Proof. By translation, we may suppose that x = 0.
Setting v(x) := u(rx) for x ∈ B_{3√n}, we easily see that v is an L^p-viscosity subsolution and supersolution of
    T⁻(D²v) − µ|Dv| ≤ r² f̂ and T⁺(D²v) + µ|Dv| ≥ −r² f̂ in B_{3√n},
respectively, where f̂(x) := f(rx). Note that ‖f̂‖_{L^p(B_{3√n})} = r^{−n/p} ‖f‖_{L^p(B_{3√n r})}.
Applying Corollary 6.7 to v and then rescaling v to u, we conclude the assertion. □
6.3.2 Quadratic growth
Here, we consider the case when q → F(x, q, X) has quadratic growth. We refer to [10] for applications where such quadratic nonlinearity appears.
We present a version of the Harnack inequality when F has quadratic growth in Du in place of (6.12):
    there is µ ≥ 0 such that |F(x, q, X) − F(x, q′, X)| ≤ µ(|q| + |q′|)|q − q′| for (x, q, q′, X) ∈ Ω × Rⁿ × Rⁿ × Sⁿ,    (6.15)
which together with (1) of (6.10) implies that
    |F(x, q, O)| ≤ µ|q|² for (x, q) ∈ Ω × Rⁿ.
The associated Harnack inequality is as follows:
Theorem 6.9. For µ ≥ 0 and f ∈ L^p(B_{3√n}) with p ≥ n, there is C₅ = C₅(Λ, λ, n, µ) > 0 such that if u ∈ C(B_{3√n}) is a nonnegative L^p-viscosity sub- and supersolution of
    T⁻(D²u) − µ|Du|² ≤ f and T⁺(D²u) + µ|Du|² ≥ f in B_{3√n},
respectively, then we have
    sup_{Q₁} u ≤ C₅ e^{(2µ/λ)M} ( inf_{Q₁} u + ‖f‖_{L^p(B_{3√n})} ),
where M := sup_{B_{3√n}} u.
Proof. Set α := µ/λ, and fix any δ ∈ (0, 1).
We claim that v := e^{αu} − 1 and w := 1 − e^{−αu} are, respectively, a nonnegative L^p-viscosity sub- and supersolution of
    T⁻(D²v) ≤ α(e^{αM} + δ)f⁺ and T⁺(D²w) ≥ −α(1 + δ)f⁻ in B_{3√n}.
We shall only prove this claim for v since that for w can be obtained similarly.
Choose φ ∈ W^{2,p}_loc(B_{3√n}) and suppose that v − φ attains its local maximum at x ∈ B_{3√n}. Thus, we may suppose that v(x) = φ(x) and v ≤ φ in B_r(x), where B_{2r}(x) ⊂ B_{3√n}. Note that 0 ≤ v ≤ e^{αM} − 1 in B_{3√n}.
For any δ ∈ (0, 1), in view of W^{2,p}(B_r(x)) ⊂ C^{σ₀}(B_r(x)) for some σ₀ ∈ (0, 1), we can choose ε₀ ∈ (0, r) such that
    −δ ≤ φ ≤ v + δ in B_{ε₀}(x).
Setting ψ(y) := α⁻¹ log(φ(y) + 1) for y ∈ B_{ε₀}(x) (extending ψ ∈ W^{2,p} in B_{3√n} \ B_{ε₀}(x) if necessary), we have
    lim_{ε→0} ess inf_{B_ε(x)} ( T⁻(D²ψ) − µ|Dψ|² − f⁺ ) ≤ 0.
Since
    Dψ = Dφ / (α(φ + 1)) and D²ψ = D²φ / (α(φ + 1)) − (Dφ ⊗ Dφ) / (α(φ + 1)²),
the above inequality yields
    lim_{ε→0} ess inf_{B_ε(x)} ( T⁻(D²φ) / (α(φ + 1)) − f⁺ ) ≤ 0.
Since 0 < 1 − δ ≤ φ + 1 ≤ e^{αM} + δ in B_{ε₀}(x), we have
    lim_{ε→0} ess inf_{B_ε(x)} ( T⁻(D²φ) − α(e^{αM} + δ)f⁺ ) ≤ 0.
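In one dimension, the formulas for Dψ and D²ψ used above can be sanity-checked against finite differences (the concrete φ and the evaluation point are our own example):

```python
import numpy as np

alpha = 0.8
phi = lambda t: 0.5 * t**2 + 0.3 * t + 0.2        # any smooth phi with phi + 1 > 0
dphi = lambda t: t + 0.3
d2phi = lambda t: 1.0
psi = lambda t: np.log(phi(t) + 1.0) / alpha      # psi = alpha^{-1} log(phi + 1)

t0, h = 0.4, 1e-4
num_d1 = (psi(t0 + h) - psi(t0 - h)) / (2 * h)            # central difference
num_d2 = (psi(t0 + h) - 2 * psi(t0) + psi(t0 - h)) / h**2  # second difference

formula_d1 = dphi(t0) / (alpha * (phi(t0) + 1.0))
formula_d2 = (d2phi(t0) / (alpha * (phi(t0) + 1.0))
              - dphi(t0)**2 / (alpha * (phi(t0) + 1.0)**2))

err1 = abs(num_d1 - formula_d1)
err2 = abs(num_d2 - formula_d2)
```

Both errors are of the size expected from the O(h²) truncation of the difference quotients.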
Since αu ≤ v ≤ αu e^{αM} and αu e^{−αM} ≤ w ≤ αu, using the same argument as that used to get (6.13) and (6.14), we have
    sup_{Q₁} u ≤ (1/α) sup_{Q₁} v ≤ C₆ ( ‖v‖_{L^{p₀}(Q₂)} + (e^{αM} + δ)‖f⁺‖_{L^p(B_{3√n})} )
        ≤ C₇ ( e^{2αM} ‖w‖_{L^{p₀}(Q₂)} + (e^{αM} + δ)‖f⁺‖_{L^p(B_{3√n})} )
        ≤ C₈ ( e^{2αM} inf_{Q₁} w + (e^{2αM} + δ)‖f‖_{L^p(B_{3√n})} )
        ≤ C₉ e^{2αM} ( inf_{Q₁} u + (1 + δ)‖f‖_{L^p(B_{3√n})} ).
Since the C_k (k = 6, ..., 9) are independent of δ > 0, sending δ → 0, we conclude the proof. □
Remark. We note that the same argument, using two different transformations for sub- and supersolutions as above, can be found in [14] for uniformly elliptic PDEs in divergence form with quadratic nonlinearity.
6.4 Hölder continuity estimates
In this subsection, we show how the Harnack inequality implies Hölder continuity.
Theorem 6.10. Assume that (6.10), (6.11) and (6.12) hold. For each compact set K ⊂ Ω, there is σ = σ(Λ, λ, n, µ, p, dist(K, ∂Ω), ‖f‖_{L^p(Ω)}) ∈ (0, 1) such that if u ∈ C(Ω) is an L^p-viscosity solution of (6.1), then there is
    Ĉ = Ĉ(Λ, λ, n, µ, p, dist(K, ∂Ω), sup_Ω |u|, ‖f‖_{L^p(Ω)}) > 0
such that
    |u(x) − u(y)| ≤ Ĉ|x − y|^σ for x, y ∈ K.
Remark. We notice that σ is independent of sup_Ω |u|.
89
In our proof below, we may relax the dependence max
Ω
[u[ in
ˆ
C by
sup¦[u(x)[ [ dist(x, K) < ε¦ for small ε > 0.
Proof. Setting $r_0 := \min\{1, \mathrm{dist}(K, \partial\Omega)/(3\sqrt n)\} > 0$, we may suppose that there is $C_4 > 1$ such that if $w \in C(\Omega)$ is a nonnegative $L^p$-viscosity sub- and supersolution of
\[ \mathcal{P}^-(D^2 w) - \mu|Dw| \le f \quad \text{and} \quad \mathcal{P}^+(D^2 w) + \mu|Dw| \ge f \quad \text{in } \Omega, \]
respectively, then for any $r \in (0, r_0]$ and $x \in K$ (i.e. $B_{3\sqrt n\, r}(x) \subset \Omega$),
\[ \sup_{Q_r(x)} w \le C_4\left\{ \inf_{Q_r(x)} w + r^{2-\frac np}\|f\|_{L^p(\Omega)} \right\}. \]
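As a numerical illustration of an estimate of this form, the sketch below (our own toy example with $f = 0$, $n = 2$, and an arbitrary constant $5$ standing in for $C_4$ — none of these come from the text) evaluates a positive harmonic function on the unit disc over the cube $Q_{1/2}$ and checks that its sup/inf ratio is bounded:

```python
# Toy Harnack check (f = 0, n = 2): u below is positive and harmonic on the
# unit disc; its sup/inf ratio over Q_{1/2} = [-1/4, 1/4]^2 stays below a
# fixed constant (5.0 is an arbitrary stand-in for C_4).
def u(x, y):
    return (1.0 - x * x - y * y) / ((1.0 - x) ** 2 + y * y)

pts = [(-0.25 + i * 0.005, -0.25 + j * 0.005) for i in range(101) for j in range(101)]
vals = [u(x, y) for x, y in pts]
assert min(vals) > 0.0
assert max(vals) / min(vals) < 5.0
```

Since $u$ solves both extremal inequalities with $f = 0$, the bounded ratio is exactly what the displayed estimate predicts at scale $r = 1/2$.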
For simplicity, we may suppose $x = 0 \in K$. Now, we set
\[ M(r) := \sup_{Q_r} u, \quad m(r) := \inf_{Q_r} u \quad \text{and} \quad \mathrm{osc}(r) := M(r) - m(r). \]
It is sufficient to find $C > 0$ and $\sigma \in (0,1)$ such that
\[ M(r) - m(r) \le C r^\sigma \quad \text{for small } r > 0. \]

We denote by $S(r)$ the set of all nonnegative $w \in C(B_{3\sqrt n\, r})$ which are, respectively, $L^p$-viscosity sub- and supersolutions of
\[ \mathcal{P}^-(D^2 w) - \mu|Dw| \le |f| \quad \text{and} \quad \mathcal{P}^+(D^2 w) + \mu|Dw| \ge -|f| \quad \text{in } B_{3\sqrt n\, r}. \]

Setting $v_1 := u - m(r)$ and $w_1 := M(r) - u$, we see that $v_1$ and $w_1$ belong to $S(r)$. Hence, setting $C_{10} := \max\{C_4\|f\|_{L^p(\Omega)}, C_4, 4\} > 3$, we have
\[ \sup_{Q_{r/2}} v_1 \le C_{10}\left\{ \inf_{Q_{r/2}} v_1 + r^{2-\frac np} \right\} \quad \text{and} \quad \sup_{Q_{r/2}} w_1 \le C_{10}\left\{ \inf_{Q_{r/2}} w_1 + r^{2-\frac np} \right\}. \]

Thus, setting $\beta := 2 - \frac np > 0$, we have
\[ M(r/2) - m(r) \le C_{10}\left\{ m(r/2) - m(r) + (r/2)^\beta \right\}, \]
\[ M(r) - m(r/2) \le C_{10}\left\{ M(r) - M(r/2) + (r/2)^\beta \right\}. \]

Hence, adding these inequalities, we have
\[ (C_{10}+1)(M(r/2) - m(r/2)) \le (C_{10}-1)(M(r) - m(r)) + 2C_{10}(r/2)^\beta. \]

Therefore, setting $\theta := (C_{10}-1)/(C_{10}+1) \in (1/2,1)$ and $C_{11} := 2C_{10}/(C_{10}+1)$, we see that
\[ \mathrm{osc}(r/2) \le \theta\, \mathrm{osc}(r) + C_{11}(r/2)^\beta. \]

Moreover, replacing $r$ by $r/2^{k-1}$ for integers $k \ge 2$ and iterating, we have
\[ \mathrm{osc}(r/2^k) \le \theta^k \mathrm{osc}(r) + C_{11} r^\beta \sum_{j=1}^k 2^{-\beta j} \le \theta^k \mathrm{osc}(r_0) + \frac{C_{11}}{2^\beta - 1}\, r^\beta \le C_{12}(\theta^k + r^\beta), \]
where $C_{12} := \max\{\mathrm{osc}(r_0), C_{11}/(2^\beta - 1)\}$.

For $r \in (0, r_0)$, setting $s = r^\alpha$, where $\alpha = \log\theta/(\log\theta - \beta\log 2) \in (0,1)$, there is a unique integer $k \ge 1$ such that
\[ \frac{s}{2^k} \le r < \frac{s}{2^{k-1}}, \]
which yields
\[ \frac{\log(s/r)}{\log 2} \le k < \frac{\log(s/r)}{\log 2} + 1. \]

Hence, recalling $\theta \in (1/2,1)$, we have
\[ \mathrm{osc}(r) \le \mathrm{osc}(s/2^{k-1}) \le C_{12}(\theta^k + (2s)^\beta) \le 2^\beta C_{12}\left\{ \theta^{(\alpha-1)\log r/\log 2} + r^{\beta\alpha} \right\}. \]

Setting $\sigma := (\alpha-1)\log\theta/\log 2 \in (0,1)$ (because $\theta \in (1/2,1)$), we have
\[ \theta^{(\alpha-1)\log r/\log 2} = r^\sigma \quad \text{and} \quad r^{\beta\alpha} = r^\sigma. \]

Thus, setting $C_{13} := 2^\beta C_{12}$, we have
\[ \mathrm{osc}(r) \le C_{13} r^\sigma. \tag{6.16} \]
$\Box$
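The dyadic iteration above can be checked numerically. In the following sketch the constants $\theta$, $\beta$, $C_{11}$, $r_0$ are illustrative choices of ours (not from the text); we run the recursion $\mathrm{osc}(r/2) \le \theta\,\mathrm{osc}(r) + C_{11}(r/2)^\beta$ with equality and verify the resulting power-law bound $\mathrm{osc}(r) \le C_{13}r^\sigma$:

```python
import math

# Illustrative run of the oscillation iteration; all constants are ours.
theta, beta, C11, r0 = 0.6, 0.5, 1.0, 1.0
osc = [1.0]                          # osc(r0)
for k in range(1, 30):               # osc(r0 / 2^k) via the recursion with equality
    osc.append(theta * osc[-1] + C11 * (r0 / 2**k) ** beta)

alpha = math.log(theta) / (math.log(theta) - beta * math.log(2))
sigma = (alpha - 1) * math.log(theta) / math.log(2)   # equals beta * alpha
C12 = max(osc[0], C11 / (2**beta - 1))
C13 = 2**beta * C12

assert 0 < sigma < 1 and abs(sigma - beta * alpha) < 1e-12
for k in range(30):
    assert osc[k] <= C13 * (r0 / 2**k) ** sigma + 1e-12
```

The exponent $\sigma$ is strictly smaller than $\beta$: the iteration trades decay rate for a bound that is uniform over all dyadic scales.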
Remark. We note that we may derive (6.16) whenever $p > n/2$ by taking $\beta = 2 - \frac np > 0$.

We shall give the corresponding Hölder continuity result for PDEs with the quadratic nonlinearity (6.15). Since we can use the same argument as in the proof of Theorem 6.10, using Theorem 6.9 instead of Corollaries 6.7 and 6.8, we omit the proof of the following:

Corollary 6.11. Assume that (6.10), (6.11) and (6.15) hold. For each compact set $K \subset \Omega$, there are
\[ \hat C = \hat C(\Lambda, \lambda, n, \mu, p, \mathrm{dist}(K, \partial\Omega), \sup_\Omega|u|) > 0 \]
and $\sigma = \sigma(\Lambda, \lambda, n, \mu, p, \mathrm{dist}(K, \partial\Omega), \sup_\Omega|u|) \in (0,1)$ such that if $u \in C(\Omega)$ is an $L^p$-viscosity solution of (6.1), then we have
\[ |u(x) - u(y)| \le \hat C|x-y|^\sigma \quad \text{for } x, y \in K. \]

Remark. Note that both $\sigma$ and $\hat C$ depend on $\sup_\Omega|u|$ in this quadratic case.

6.5 Existence result

For the existence of $L^p$-viscosity solutions of (6.1) under the Dirichlet condition, we only give an outline of the proof, which was first shown in a paper by Crandall–Kocan–Lions–Świȩch [7] (1999).
Theorem 6.12. Assume that (6.10), (6.11) and (6.12) hold. Assume also that (1) of (5.17) holds. For given $g \in C(\partial\Omega)$, there is an $L^p$-viscosity solution $u \in C(\overline\Omega)$ of (6.1) such that
\[ u(x) = g(x) \quad \text{for } x \in \partial\Omega. \tag{6.17} \]

Remark. We may relax assumption (1) of (5.17) so that the assertion holds for $\Omega$ which may have some "concave" corners. Such a condition is called the "uniform exterior cone condition".

Sketch of proof.
Step 1: We first solve approximate PDEs, which have to satisfy a sufficient condition in Step 3; instead of (6.1), under (6.17), we consider
\[ F_k(x, Du, D^2u) = f_k \quad \text{in } \Omega, \tag{6.18} \]
where "smooth" $F_k$ and $f_k$ approximate $F$ and $f$, respectively. In fact, $F_k$ and $f_k$ are given by $F * \rho_{1/k}$ and $f * \rho_{1/k}$, where $\rho_{1/k}$ is the standard mollifier with respect to the $x$-variables. We remark that $F * \rho_{1/k}$ means the convolution of $F(\cdot, p, X)$ and $\rho_{1/k}$.
92
We ﬁnd a viscosity solution u
k
∈ C(Ω) of (6.18) under (6.17) via Perron’s
method for instance. At this stage, we need to suppose the smoothness of ∂Ω to
construct viscosity sub and supersolutions of (6.18) with (6.17). Remember that
if F and f are continuous, then the notion of L
p
viscosity solutions equals to that
of the standard ones (see Proposition 2.9 in [5]).
In view of (1) of (5.17) (i.e. the uniform exterior sphere condition), we can
construct viscosity sub and supersolutions of (6.18) denoted by ξ ∈ USC(Ω) and
η ∈ LSC(Ω) such that ξ = η = g on ∂Ω. To show this fact, we only note that we
can modify the argument in Step 1 in section 7.3.
Step 2: We next obtain the a priori estimates for u
k
so that they converge to
a continuous function u ∈ C(Ω), which is the candidate of the original PDE.
For this purpose, after having established the L
∞
estimates via Proposition
6.2, we apply Theorem 6.10 (interior H¨older continuity) to u
k
in Step 1 because
(6.10)(6.12) hold for approximate PDEs with the same constants λ, Λ, µ.
We need a careful analysis to get the equicontinuity up to the boundary ∂Ω.
See Step 1 in section 7.3 again.
Step 3: Finally, we verify that the limit function u is the L
p
viscosity solution
via the following stability result, which is an L
p
viscosity version of Proposition
4.8.
To state the result, we introduce some notations: For B
2r
(x) ⊂ Ω with r > 0
and x ∈ Ω, and φ ∈ W
2,p
(B
r
(x)), we set
G
k
[φ](y) := F
k
(y, Dφ(y), D
2
φ(y)) −f
k
(y),
and
G[φ](y) := F(y, Dφ(y), D
2
φ(y)) −f(y)
for y ∈ B
r
(x).
Proposition 6.13. Assume that $F_k$ and $F$ satisfy (6.10) and (6.12) with $\lambda, \Lambda > 0$ and $\mu \ge 0$. For $f, f_k \in L^p(\Omega)$ with $p \ge n$, let $u_k \in C(\Omega)$ be an $L^p$-viscosity subsolution (resp., supersolution) of (6.18). Assume also that $u_k$ converges to $u$ uniformly on any compact subset of $\Omega$ as $k \to \infty$, and that for any $B_{2r}(x) \subset \Omega$ with $r > 0$ and $x \in \Omega$, and $\phi \in W^{2,p}(B_r(x))$,
\[ \lim_{k\to\infty}\|(G[\phi] - G_k[\phi])^+\|_{L^p(B_r(x))} = 0 \quad \left(\text{resp., } \lim_{k\to\infty}\|(G[\phi] - G_k[\phi])^-\|_{L^p(B_r(x))} = 0\right). \]
Then, $u \in C(\Omega)$ is an $L^p$-viscosity subsolution (resp., supersolution) of (6.1).

Proof of Proposition 6.13. We only give a proof of the assertion for subsolutions.

Suppose the contrary: there are $r > 0$, $\varepsilon > 0$, $x \in \Omega$ and $\phi \in W^{2,p}(B_{2r}(x))$ such that $B_{3r}(x) \subset \Omega$, $0 = (u-\phi)(x) \ge (u-\phi)(y)$ for $y \in B_{2r}(x)$, and
\[ u - \phi \le -\varepsilon \quad \text{in } B_{2r}(x) \setminus B_r(x), \tag{6.19} \]
and
\[ G[\phi](y) \ge \varepsilon \quad \text{a.e. in } B_{2r}(x). \tag{6.20} \]

For simplicity, we shall suppose that $r = 1$ and $x = 0$.

It is sufficient to find $\phi_k \in W^{2,p}(B_2)$ such that $\lim_{k\to\infty}\sup_{B_2}|\phi_k| = 0$ and
\[ G_k[\phi + \phi_k](y) \ge \varepsilon \quad \text{a.e. in } B_2. \]
Indeed, since $u_k - (\phi + \phi_k)$ attains its maximum over $\overline B_2$ at an interior point $z \in B_2$ by (6.19), the above inequality contradicts the fact that $u_k$ is an $L^p$-viscosity subsolution of (6.18).
Setting $h(x) := G[\phi](x)$ and $h_k(x) := G_k[\phi](x)$, in view of Proposition 6.3, we can find $\phi_k \in C(\overline B_2) \cap W^{2,p}_{loc}(B_2)$ such that
\[ \begin{cases} \mathcal{P}^-(D^2\phi_k) - \mu|D\phi_k| \ge (h - h_k)^+ & \text{a.e. in } B_2, \\ \phi_k = 0 & \text{on } \partial B_2, \\ 0 \le \phi_k \le C\|(h - h_k)^+\|_{L^p(B_2)} & \text{in } B_2, \\ \|\phi_k\|_{W^{2,p}(B_1)} \le C\|(h - h_k)^+\|_{L^p(B_2)}. \end{cases} \]

We note that our assumption together with the third inequality above yields
\[ \lim_{k\to\infty}\sup_{B_2}|\phi_k| = 0. \]

Using (6.10), (6.12) and (6.20), we have
\[ G_k[\phi + \phi_k] \ge \mathcal{P}^-(D^2\phi_k) - \mu|D\phi_k| + h_k \ge (h - h_k)^+ + \varepsilon - (h - h_k) \ge \varepsilon \quad \text{a.e. in } B_2. \qquad \Box \]
7 Appendix

In this appendix, we present proofs of the propositions which appeared in the previous sections. However, to prove them, we often need more fundamental results, for which we only give references. One such result is the following "Area formula", which will be employed in sections 7.1 and 7.2. We refer to [9] for a proof of a more general Area formula.

Area formula. If $\xi \in C^1(\mathbf{R}^n, \mathbf{R}^n)$, $g \in L^1(\mathbf{R}^n)$, and $A \subset \mathbf{R}^n$ is measurable, then
\[ \int_{\xi(A)} |g(y)|\, dy \le \int_A |g(\xi(x))|\,|\det(D\xi(x))|\, dx. \]

We note that the Area formula is a change of variables formula which remains valid when $|\det(D\xi)|$ may vanish. In fact, equality holds if $|\det(D\xi)| > 0$ and $\xi$ is injective.
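For a concrete feel, here is a one-dimensional numerical check (our own toy example, not from [9]): with the non-injective map $\xi(x) = x^2$ on $A = [-1,1]$, whose Jacobian vanishes at $0$, the right-hand side is exactly twice the left-hand side, because $\xi$ covers $\xi(A) = [0,1]$ twice:

```python
import math

# 1-D numerical check of the area formula with the non-injective map
# xi(x) = x^2 on A = [-1, 1]; det(D xi) = 2x vanishes at 0, and xi covers
# xi(A) = [0, 1] twice, so the inequality is strict (by a factor of 2).
xi = lambda x: x * x
dxi = lambda x: 2.0 * x
g = lambda y: math.cos(y) ** 2

N = 200_000
lhs = sum(abs(g((i + 0.5) / N)) for i in range(N)) / N           # integral over xi(A)
h = 2.0 / N
xsA = [-1.0 + (i + 0.5) * h for i in range(N)]
rhs = sum(abs(g(xi(x))) * abs(dxi(x)) for x in xsA) * h          # integral over A

assert lhs <= rhs + 1e-6
assert abs(rhs - 2.0 * lhs) < 1e-3
```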
7.1 Proof of Ishii's lemma

First of all, we recall an important result by Aleksandrov. We refer to the Appendix of [6] and to [10] for a "functional analytic" proof, and to [9] for a "measure theoretic" proof.

Lemma 7.1. (Theorem A.2 in [6]) If $f : \mathbf{R}^n \to \mathbf{R}$ is convex, then for a.a. $x \in \mathbf{R}^n$, there is $(p, X) \in \mathbf{R}^n \times S^n$ such that
\[ f(x+h) = f(x) + \langle p, h\rangle + \frac12\langle Xh, h\rangle + o(|h|^2) \quad \text{as } |h| \to 0 \]
(i.e., $f$ is twice differentiable at a.a. $x \in \mathbf{R}^n$).

We next recall Jensen's lemma, which is a version of the ABP maximum principle in section 7.2 below.

Lemma 7.2. (Lemma A.3 in [6]) Let $f : \mathbf{R}^n \to \mathbf{R}$ be semiconvex (i.e. $x \mapsto f(x) + C_0|x|^2$ is convex for some $C_0 \in \mathbf{R}$). Let $\hat x \in \mathbf{R}^n$ be a strict maximum point of $f$. Set $f_p(x) := f(x) - \langle p, x\rangle$ for $x \in \mathbf{R}^n$ and $p \in \mathbf{R}^n$. Then, for $r > 0$, there are $C_1, \delta_0 > 0$ such that
\[ |\Gamma_{r,\delta}| \ge C_1\delta^n \quad \text{for } \delta \in (0, \delta_0], \]
where
\[ \Gamma_{r,\delta} := \left\{ x \in B_r(\hat x) \;\middle|\; \exists p \in B_\delta \text{ such that } f_p(y) \le f_p(x) \text{ for } y \in B_r(\hat x) \right\}. \]
Proof. By translation, we may suppose $\hat x = 0$.

For integers $m$, we set $f^m(x) = (f * \rho_{1/m})(x)$, where $\rho_{1/m}$ is the mollifier. Note that $x \mapsto f^m(x) + C_0|x|^2$ is convex.

Setting
\[ \Gamma^m_{r,\delta} := \left\{ x \in B_r \;\middle|\; \exists p \in B_\delta \text{ such that } f^m_p(y) \le f^m_p(x) \text{ for } y \in B_r \right\}, \]
where $f^m_p(x) := f^m(x) - \langle p, x\rangle$, we claim that there are $C_1, \delta_0 > 0$, independent of large integers $m$, such that
\[ |\Gamma^m_{r,\delta}| \ge C_1\delta^n \quad \text{for } \delta \in (0, \delta_0]. \]

We remark that this claim concludes the assertion. In fact, setting $A_m := \bigcup_{k=m}^\infty \Gamma^k_{r,\delta}$, we have $\bigcap_{m=1}^\infty A_m \subset \Gamma_{r,\delta}$. Indeed, for $x \in \bigcap_{m=1}^\infty A_m$, we can select $p_k \in B_\delta$ and $m_k$ such that $\lim_{k\to\infty} m_k = \infty$ and
\[ \max_{\overline B_r} f^{m_k}_{p_k} = f^{m_k}_{p_k}(x). \]
Hence, sending $k \to \infty$ (along a subsequence if necessary), we find $\hat p \in \overline B_\delta$ such that $\max_{\overline B_r} f_{\hat p} = f_{\hat p}(x)$, which yields $x \in \Gamma_{r,\delta}$. Therefore, we have
\[ C_1\delta^n \le \lim_{m\to\infty}|A_m| = \left|\bigcap_{m=1}^\infty A_m\right| \le |\Gamma_{r,\delta}|. \]
Now we shall prove our claim. Since $0$ is the strict maximum point of $f$, we can find $\varepsilon_0 > 0$ such that
\[ \varepsilon_0 = f(0) - \max_{\overline B_{4r/3}\setminus B_{r/3}} f. \]

Fix $p \in B_{\delta_0}$, where $\delta_0 := \varepsilon_0/(3r)$. For $m \ge 3/r$, we note that
\[ f^m(x) - \langle p, x\rangle \le f(0) - \varepsilon_0 + \delta_0 r \le f(0) - \frac{2\varepsilon_0}{3} \quad \text{in } B_r \setminus B_{2r/3}. \]

On the other hand, for large $m$, we verify that
\[ f^m(0) \ge f(0) - \omega_f(m^{-1}) > f(0) - \frac{\varepsilon_0}{3}, \]
where $\omega_f$ denotes the modulus of continuity of $f$. Hence, in view of these observations, for any $p \in B_{\delta_0}$, if $\max_{\overline B_r} f^m_p = f^m_p(x)$ for some $x \in \overline B_r$, then $x \in B_{2r/3}$; in particular, every such maximum point is interior. In other words, we see that
\[ B_\delta = Df^m(\Gamma^m_{r,\delta}) \quad \text{for } \delta \in (0, \delta_0]. \]

Thanks to the Area formula, we have
\[ |B_\delta| = \int_{Df^m(\Gamma^m_{r,\delta})} dy \le \int_{\Gamma^m_{r,\delta}} |\det D^2 f^m|\, dx \le (2C_0)^n|\Gamma^m_{r,\delta}|. \]
Here, we have employed the fact that $-2C_0 I \le D^2 f^m \le O$ in $\Gamma^m_{r,\delta}$. $\Box$
Although we can find a proof of the next proposition in [6], we recall the proof, with a minor change, for the reader's convenience.

Proposition 7.3. (Lemma A.4 in [6]) If $f \in C(\mathbf{R}^m)$, $B \in S^m$, $\xi \mapsto f(\xi) + (\lambda/2)|\xi|^2$ is convex and $\max_{\xi\in\mathbf{R}^m}\{f(\xi) - 2^{-1}\langle B\xi, \xi\rangle\} = f(0)$, then there is an $X \in S^m$ such that
\[ (0, X) \in J^{2,+}f(0) \cap J^{2,-}f(0) \quad \text{and} \quad -\lambda I \le X \le B. \]

Proof. For any $\delta > 0$, setting $f_\delta(\xi) := f(\xi) - 2^{-1}\langle B\xi, \xi\rangle - \delta|\xi|^2$, we notice that the semiconvex $f_\delta$ attains its strict maximum at $\xi = 0$.

In view of Lemmas 7.1 and 7.2, there are $\xi_\delta, q_\delta \in B_\delta$ such that $\xi \mapsto f_\delta(\xi) + \langle q_\delta, \xi\rangle$ has a maximum at $\xi_\delta$, at which $f$ is twice differentiable.

It is easy to see that $Df(\xi_\delta) \to 0$ (as $\delta \to 0$) and, moreover, from the convexity of $\xi \mapsto f(\xi) + (\lambda/2)|\xi|^2$,
\[ -\lambda I \le D^2 f(\xi_\delta) \le B + 2\delta I. \]
Noting $(Df(\xi_\delta), D^2 f(\xi_\delta)) \in J^{2,+}f(\xi_\delta) \cap J^{2,-}f(\xi_\delta)$, we conclude the assertion by taking the limit as $\delta \to 0$. $\Box$
We next give a "magic" property of sup-convolutions. For the reader's convenience, we reproduce the proof from [6].

Lemma 7.4. (Lemma A.5 in [6]) For $v \in USC(\mathbf{R}^n)$ with $\sup_{\mathbf{R}^n} v < \infty$ and $\lambda > 0$, we set
\[ \hat v(\xi) := \sup_{x\in\mathbf{R}^n}\left\{ v(x) - \frac\lambda2|x-\xi|^2 \right\}. \]
For $\eta, q \in \mathbf{R}^n$ and $Y \in S^n$ with $(q, Y) \in J^{2,+}\hat v(\eta)$, we have
\[ (q, Y) \in J^{2,+}v(\eta + \lambda^{-1}q) \quad \text{and} \quad \hat v(\eta) + \frac{|q|^2}{2\lambda} = v(\eta + \lambda^{-1}q). \]
In particular, if $(0, Y) \in J^{2,+}\hat v(0)$, then $(0, Y) \in J^{2,+}v(0)$.

Proof. For $(q, Y) \in J^{2,+}\hat v(\eta)$, we choose $y \in \mathbf{R}^n$ such that
\[ \hat v(\eta) = v(y) - \frac\lambda2|y-\eta|^2. \]
Thus, from the definition, we see that for any $x, \xi \in \mathbf{R}^n$,
\[ v(x) - \frac\lambda2|\xi-x|^2 \le \hat v(\xi) \le \hat v(\eta) + \langle q, \xi-\eta\rangle + \frac12\langle Y(\xi-\eta), \xi-\eta\rangle + o(|\xi-\eta|^2) \]
\[ = v(y) - \frac\lambda2|y-\eta|^2 + \langle q, \xi-\eta\rangle + \frac12\langle Y(\xi-\eta), \xi-\eta\rangle + o(|\xi-\eta|^2). \]
Taking $\xi = x - y + \eta$ in the above, we have $(q, Y) \in J^{2,+}v(y)$.

To verify that $y = \eta + \lambda^{-1}q$, putting $x = y$ and $\xi = \eta - \varepsilon(\lambda(\eta-y)+q)$ for $\varepsilon > 0$ in the above again, we have
\[ \varepsilon|\lambda(\eta-y)+q|^2 \le o(\varepsilon), \]
which concludes the proof. $\Box$
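The sup-convolution can be explored on a grid. The following sketch (the grid sizes and the test function $v(x) = -|x|$ are our own choices) illustrates two basic facts used above: $\hat v \ge v$, and $\hat v$ is semiconvex with constant $\lambda$, so its discrete second difference quotients are bounded below by $-\lambda$:

```python
# Grid sketch of the sup-convolution  v_hat(xi) = sup_x { v(x) - (lam/2)|x - xi|^2 }
# for the kink v(x) = -|x|: v_hat dominates v and is semiconvex with constant lam.
lam = 2.0
h = 0.001
xs = [i * h for i in range(-4000, 4001)]      # integration grid on [-4, 4]
v = lambda x: -abs(x)

def sup_conv(xi):
    return max(v(x) - 0.5 * lam * (x - xi) ** 2 for x in xs)

d = 0.05
xis = [i * d for i in range(-40, 41)]         # evaluation points in [-2, 2]
vhat = [sup_conv(xi) for xi in xis]

assert all(vh >= v(xi) - 1e-9 for vh, xi in zip(vhat, xis))
assert all((vhat[i + 1] - 2 * vhat[i] + vhat[i - 1]) / d ** 2 >= -lam - 1e-6
           for i in range(1, len(xis) - 1))
```

Here the kink of $v$ at the origin is replaced by the smooth cap $-\lambda\xi^2/2$, which is exactly the regularizing effect the lemma exploits.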
Proof of Lemma 3.6. First of all, extending the upper semicontinuous functions $u, w$ on $\Omega$ to $\mathbf{R}^n$ by setting them equal to $-\infty$ in $\mathbf{R}^n \setminus \Omega$, we shall work in $\mathbf{R}^n \times \mathbf{R}^n$ instead of $\Omega \times \Omega$.

By translation, we may suppose that $\hat x = \hat y = 0$, at which $u(x) + w(y) - \phi(x, y)$ attains its maximum.

Furthermore, replacing $u(x)$, $w(y)$ and $\phi(x, y)$, respectively, by
\[ u(x) - u(0) - \langle D_x\phi(0,0), x\rangle, \quad w(y) - w(0) - \langle D_y\phi(0,0), y\rangle \]
and
\[ \phi(x, y) - \phi(0,0) - \langle D_x\phi(0,0), x\rangle - \langle D_y\phi(0,0), y\rangle, \]
we may also suppose that $\phi(0,0) = u(0) = w(0) = 0$ and $D\phi(0,0) = (0,0) \in \mathbf{R}^n \times \mathbf{R}^n$.

Since
\[ \phi(x, y) = \frac12\left\langle A\begin{pmatrix}x\\y\end{pmatrix}, \begin{pmatrix}x\\y\end{pmatrix}\right\rangle + o(|x|^2 + |y|^2), \quad \text{where } A := D^2\phi(0,0) \in S^{2n}, \]
for each $\eta > 0$ we see that the mapping
\[ (x, y) \mapsto u(x) + w(y) - \frac12\left\langle (A+\eta I)\begin{pmatrix}x\\y\end{pmatrix}, \begin{pmatrix}x\\y\end{pmatrix}\right\rangle \]
attains its (strict) maximum at $0 \in \mathbf{R}^{2n}$.

We will show the assertion for $A + \eta I$ in place of $A$; then, sending $\eta \to 0$, we can conclude the proof. Therefore, we need to prove the following:
Simplified version of Ishii's lemma. For upper semicontinuous functions $u$ and $w$ in $\mathbf{R}^n$, we suppose that
\[ u(x) + w(y) - \frac12\left\langle A\begin{pmatrix}x\\y\end{pmatrix}, \begin{pmatrix}x\\y\end{pmatrix}\right\rangle \le u(0) + w(0) = 0 \quad \text{in } \mathbf{R}^n \times \mathbf{R}^n. \]
Then, for each $\mu > 0$, there are $X, Y \in S^n$ such that $(0, X) \in J^{2,+}u(0)$, $(0, Y) \in J^{2,+}w(0)$ and
\[ -(\mu + \|A\|)\begin{pmatrix} I & O \\ O & I \end{pmatrix} \le \begin{pmatrix} X & O \\ O & Y \end{pmatrix} \le A + \frac1\mu A^2. \]

Proof of the simplified version of Lemma 3.6. Since Hölder's inequality implies
\[ \left\langle A\begin{pmatrix}x\\y\end{pmatrix}, \begin{pmatrix}x\\y\end{pmatrix}\right\rangle \le \left\langle \left(A + \frac1\mu A^2\right)\begin{pmatrix}\xi\\\eta\end{pmatrix}, \begin{pmatrix}\xi\\\eta\end{pmatrix}\right\rangle + (\mu + \|A\|)(|x-\xi|^2 + |y-\eta|^2) \]
for $x, y, \xi, \eta \in \mathbf{R}^n$ and $\mu > 0$, setting $\lambda := \mu + \|A\|$, we have
\[ u(x) - \frac\lambda2|x-\xi|^2 + w(y) - \frac\lambda2|y-\eta|^2 \le \frac12\left\langle \left(A + \frac1\mu A^2\right)\begin{pmatrix}\xi\\\eta\end{pmatrix}, \begin{pmatrix}\xi\\\eta\end{pmatrix}\right\rangle. \]
Using the notation of Lemma 7.4, we denote by $\hat u$ and $\hat w$ the sup-convolutions of $u$ and $w$, respectively, with the above $\lambda > 0$. Thus, we have
\[ \hat u(\xi) + \hat w(\eta) \le \frac12\left\langle \left(A + \frac1\mu A^2\right)\begin{pmatrix}\xi\\\eta\end{pmatrix}, \begin{pmatrix}\xi\\\eta\end{pmatrix}\right\rangle \quad \text{for all } \xi, \eta \in \mathbf{R}^n. \]

Since $\hat u(0) \ge u(0) = 0$ and $\hat w(0) \ge w(0) = 0$, the above inequality implies $\hat u(0) = \hat w(0) = 0$.

In view of Proposition 7.3 with $m = 2n$, $f(\xi, \eta) = \hat u(\xi) + \hat w(\eta)$ and $B = A + \mu^{-1}A^2$, there is $Z \in S^{2n}$ such that $(0, Z) \in J^{2,+}f(0,0) \cap J^{2,-}f(0,0)$ and $-\lambda I \le Z \le B$.

Hence, from the definition of $J^{2,\pm}$, it is easy to verify that there are $X, Y \in S^n$ such that $(0, X) \in J^{2,+}\hat u(0) \cap J^{2,-}\hat u(0)$, $(0, Y) \in J^{2,+}\hat w(0) \cap J^{2,-}\hat w(0)$, and
\[ Z = \begin{pmatrix} X & O \\ O & Y \end{pmatrix}. \]

Applying the last property in Lemma 7.4 to $\hat u$ and $\hat w$, we see that $(0, X) \in J^{2,+}u(0)$ and $(0, Y) \in J^{2,+}w(0)$. $\Box$

7.2 Proof of the ABP maximum principle

First of all, we remind the reader of our strategy in this and the next subsections.

We first show that the ABP maximum principle holds for $f \in L^n(\Omega) \cap C(\Omega)$ in Steps 1 and 2 of this subsection. Next, using this fact, we establish the existence of $L^p$-strong solutions of "Pucci" equations in the next subsection when $f \in L^p(\Omega)$. Employing this existence result, in Step 3, we finally prove Proposition 6.2, the ABP maximum principle for $f \in L^n(\Omega)$.

ABP maximum principle for $f \in L^n(\Omega) \cap C(\Omega)$ (section 7.2)
$\Downarrow$
Existence of $L^p$-strong solutions of Pucci equations (section 7.3)
$\Downarrow$
ABP maximum principle for $f \in L^n(\Omega)$ (section 7.2)
Proof of Proposition 6.2. We follow the proof in [5] for the subsolution assertion of Proposition 6.2.

By scaling, we may suppose that $\mathrm{diam}(\Omega) \le 1$. Setting
\[ r_0 := \max_{\overline\Omega} u - \max_{\partial\Omega} u^+, \]
we may also suppose that $r_0 > 0$ since otherwise the conclusion is obvious.

We first introduce the following notation: for $u : \overline\Omega \to \mathbf{R}$ and $r \ge 0$,
\[ \Gamma_r := \left\{ x \in \Omega \;\middle|\; \exists p \in B_r \text{ such that } u(y) \le u(x) + \langle p, y-x\rangle \text{ for } y \in \Omega \right\}. \]
Recalling the upper contact set in section 6.2, we note that
\[ \Gamma[u, \Omega] = \bigcup_{r>0}\Gamma_r. \]

Step 1: $u \in C^2(\Omega) \cap C(\overline\Omega)$. We first claim that for $r \in (0, r_0)$,
\[ \text{(i) } B_r = Du(\Gamma_r), \qquad \text{(ii) } D^2u \le O \text{ in } \Gamma_r. \tag{7.1} \]

To show (i), for $p \in B_r$, we take $\hat x \in \overline\Omega$ such that $u(\hat x) - \langle p, \hat x\rangle = \max_{x\in\overline\Omega}(u(x) - \langle p, x\rangle)$. Since $u(x) - u(\hat x) \le \langle p, x - \hat x\rangle \le r < r_0$ for $x \in \overline\Omega$, taking the maximum over $\overline\Omega$, we see that $\hat x \in \Omega$. Hence, $p = Du(\hat x)$, which concludes (i).

For $x \in \Gamma_r$, Taylor's formula yields
\[ u(y) = u(x) + \langle Du(x), y-x\rangle + \frac12\langle D^2u(x)(y-x), y-x\rangle + o(|y-x|^2). \]
Hence, we have $0 \ge \langle D^2u(x)(y-x), y-x\rangle + o(|y-x|^2)$, which shows (ii).
Now, we introduce the functions
\[ g_\kappa(p) := \left( |p|^{n/(n-1)} + \kappa^{n/(n-1)} \right)^{1-n} \quad \text{for } \kappa > 0. \]
We shall simply write $g$ for $g_\kappa$.

Thus, for $r \in (0, r_0)$, we see that
\[ \int_{Du(\Gamma_r)} g(p)\, dp \le \int_{\Gamma_r} g(Du(x))|\det(D^2u(x))|\, dx = \int_{\Gamma_r}\left( |Du|^{n/(n-1)} + \kappa^{n/(n-1)} \right)^{1-n}|\det D^2u(x)|\, dx. \]

Recalling (7.1), we utilize $|\det D^2u| \le (-\mathrm{trace}(D^2u)/n)^n$ in $\Gamma_r$ to find $C > 0$ such that
\[ \int_{B_r} g(p)\, dp \le C\int_{\Gamma_r}\left( |Du|^{n/(n-1)} + \kappa^{n/(n-1)} \right)^{1-n}(-\mathrm{trace}(D^2u))^n\, dx. \tag{7.2} \]

Thus, since $(\mu|Du| + f^+)^n \le g(Du)^{-1}(\mu^n + \kappa^{-n}(f^+)^n)$ by Hölder's inequality, we have
\[ \int_{B_r} g(p)\, dp \le C\int_{\Gamma_r}\left\{ \mu^n + \left(\frac{f^+}{\kappa}\right)^n \right\} dx. \tag{7.3} \]
On the other hand, since $(|p|^n + \kappa^n)^{-1} \le g(p)$, we have
\[ \log\left( \left(\frac r\kappa\right)^n + 1 \right) \le C\int_{B_r}\frac{dp}{|p|^n + \kappa^n} \le C\int_{B_r} g(p)\, dp. \]

Hence, noting $\Gamma_r \subset \Omega^+[u]$ for $r \in (0, r_0)$, by (7.3), we have
\[ r \le \kappa\left[ \exp\left( C\int_{\Gamma[u,\Omega]\cap\Omega^+[u]}\left\{ \mu^n + \left(\frac{f^+}{\kappa}\right)^n \right\} dx \right) - 1 \right]^{1/n}. \tag{7.4} \]

When $\|f^+\|_{L^n(\Gamma[u,\Omega]\cap\Omega^+[u])} = 0$, sending $\kappa \to 0$, we get a contradiction. Thus, we may suppose that $\|f^+\|_{L^n(\Gamma[u,\Omega]\cap\Omega^+[u])} > 0$.

Setting $\kappa := \|f^+\|_{L^n(\Gamma[u,\Omega]\cap\Omega^+[u])}$ and $r := r_0/2$, we can find $C > 0$, independent of $u$, such that $r_0 \le C\|f^+\|_{L^n(\Gamma[u,\Omega]\cap\Omega^+[u])}$.
Remark. We note that we do not need to suppose f to be continuous in
Step 1 while we need it in the next step.
Step 2: $u \in C(\overline\Omega)$ and $f \in L^n(\Omega) \cap C(\Omega)$. First of all, because $f \in C(\Omega)$, we remark that $u$ is a "standard" viscosity subsolution of
\[ \mathcal{P}^-(D^2u) - \mu|Du| \le f \quad \text{in } \Omega^+[u]. \]
(See Proposition 2.9 in [5].)

Let $u^\varepsilon$ be the sup-convolution of $u$ for $\varepsilon > 0$:
\[ u^\varepsilon(x) := \sup_{y\in\overline\Omega}\left\{ u(y) - \frac{|x-y|^2}{2\varepsilon} \right\}. \]
Note that $u^\varepsilon$ is semiconvex and thus twice differentiable a.e. in $\mathbf{R}^n$.

We claim that for small $\varepsilon > 0$, $u^\varepsilon$ is a viscosity subsolution of
\[ \mathcal{P}^-(D^2u^\varepsilon) - \mu|Du^\varepsilon| \le f^\varepsilon \quad \text{in } \Omega^\varepsilon, \tag{7.5} \]
where $f^\varepsilon(x) := \sup\{ f^+(y) \mid |x-y| \le 2(\|u\|_{L^\infty(\Omega)}\varepsilon)^{1/2} \}$ and $\Omega^\varepsilon := \{ x \in \Omega^+[u] \mid \mathrm{dist}(x, \partial\Omega^+[u]) > 2(\|u\|_{L^\infty(\Omega)}\varepsilon)^{1/2} \}$. Indeed, for $x \in \Omega^\varepsilon$ and $(q, X) \in J^{2,+}u^\varepsilon(x)$, choosing $\hat x \in \overline\Omega$ such that $u^\varepsilon(x) = u(\hat x) - (2\varepsilon)^{-1}|x - \hat x|^2$, we easily verify that $|q| = \varepsilon^{-1}|\hat x - x| \le 2\sqrt{\|u\|_{L^\infty(\Omega)}/\varepsilon}$. Thus, by Lemma 7.4, we see that $(q, X) \in J^{2,+}u(x + \varepsilon q)$. Hence, we have
\[ \mathcal{P}^-(X) - \mu|q| \le f^+(x + \varepsilon q) \le f^\varepsilon(x). \]

We note that for small $\varepsilon > 0$, we may suppose that
\[ r_\varepsilon := \max_{\overline{\Omega^\varepsilon}} u^\varepsilon - \max_{\partial\Omega^\varepsilon}(u^\varepsilon)^+ > 0. \tag{7.6} \]

Here, we list some properties of upper contact sets. For small $\delta > 0$, we set
\[ \Omega_\delta := \{ x \in \Omega \mid \mathrm{dist}(x, \partial\Omega) > \delta \}. \]
Lemma 7.5. Let $v_\delta \in C(\overline{\Omega_\delta})$ and $v \in C(\overline\Omega)$ satisfy that $v_\delta \to v$ uniformly on any compact set in $\Omega$ as $\delta \to 0$. Assume that $\hat r := \max_{\overline\Omega} v - \max_{\partial\Omega} v^+ > 0$. Then, for $r \in (0, \hat r)$, we have the following properties:
\[ \begin{cases} \text{(1) } \Gamma_r[v, \Omega] \text{ is a compact set in } \Omega^+[v], \\ \text{(2) } \limsup_{\delta\to 0}\Gamma_r[v_\delta, \Omega_\delta] \subset \Gamma_r[v, \Omega], \\ \text{(3) for small } \alpha > 0, \text{ there is } \delta_\alpha \text{ such that } \bigcup_{0\le\delta<\delta_\alpha}\Gamma_r[v_\delta, \Omega_\delta] \subset \hat\Gamma^\alpha_r, \end{cases} \]
where $\hat\Gamma^\alpha_r := \{ x \in \Omega \mid \mathrm{dist}(x, \Gamma_r[v, \Omega]) < \alpha \}$.
Proof of Lemma 7.5. To show (1), we first need to observe that $\mathrm{dist}(\Gamma_r[v, \Omega], \partial\Omega) > 0$ for $r \in (0, \hat r)$. Suppose the contrary: if there are $x_k \in \Gamma_r[v, \Omega]$ such that $x_k \in \Omega \to \hat x \in \partial\Omega$, then there are $p_k \in B_r$ such that $v(y) \le v(x_k) + \langle p_k, y - x_k\rangle$ for $y \in \Omega$. Hence, sending $k \to \infty$, we have
\[ \max_{\overline\Omega} v - \max_{\partial\Omega} v^+ \le r < \hat r, \]
which is a contradiction. Thus, we can find a compact set $K \subset \Omega$ such that $\Gamma_r[v, \Omega] \subset K$.

Moreover, if $v(x) \le 0$ for some $x \in \Gamma_r[v, \Omega]$, then we get a contradiction:
\[ \hat r \le \max_{\overline\Omega} v \le r < \hat r. \]

Next, choose $x \in \limsup_{\delta\to 0}\Gamma_r[v_\delta, \Omega_\delta]$. Then, for any $k \ge 1$, there are $\delta_k \in (0, 1/k)$ and $p_k \in B_r$ such that
\[ v_{\delta_k}(y) \le v_{\delta_k}(x) + \langle p_k, y - x\rangle \quad \text{for } y \in \Omega_{\delta_k}. \]
We may suppose $p_k \to p$ for some $p \in \overline B_r$, taking a subsequence if necessary. Sending $k \to \infty$ in the above, we see that $x \in \Gamma_r[v, \Omega]$.

If (3) does not hold, then there are $\alpha_0 > 0$, $\delta_k \in (0, 1/k)$ and $x_k \in \Gamma_r[v_{\delta_k}, \Omega_{\delta_k}] \setminus \hat\Gamma^{\alpha_0}_r$. We may suppose again that $\lim_{k\to\infty} x_k = \hat x$ for some $\hat x \in \overline\Omega$. When $\hat x \in \partial\Omega$, since there are $p_k \in B_r$ such that $v_{\delta_k}(y) \le v_{\delta_k}(x_k) + \langle p_k, y - x_k\rangle$ for $y \in \Omega_{\delta_k}$, we obtain $\hat r < \hat r$ as before, which is a contradiction. Thus, we may suppose that $\hat x \in \Omega$, and then $\hat x \in \Gamma_r[v, \Omega]$. Thus, there is $k_0 \ge 1$ such that $x_k \in \hat\Gamma^{\alpha_0}_r$ for $k \ge k_0$, which is a contradiction. $\Box$
For $\delta > 0$, we set $u^\varepsilon_\delta := u^\varepsilon * \rho_\delta$, where $\rho_\delta$ is the standard mollifier. We set $\tilde\Gamma^{\varepsilon,\delta}_r := \Gamma_r[u^\varepsilon_\delta, \Omega^\varepsilon]$ for $r \in (0, r^\varepsilon_\delta)$, where $r^\varepsilon_\delta := \max_{\overline{\Omega^\varepsilon}} u^\varepsilon_\delta - \max_{\partial\Omega^\varepsilon}(u^\varepsilon_\delta)^+$. Notice that $r^\varepsilon_\delta > 0$ for small $\delta > 0$.

In view of the argument used to derive (7.2) in Step 1, we have
\[ \int_{B_r} g(p)\, dp \le C\int_{\tilde\Gamma^{\varepsilon,\delta}_r}\left( |Du^\varepsilon_\delta|^{n/(n-1)} + \kappa^{n/(n-1)} \right)^{1-n}(-\mathrm{trace}(D^2u^\varepsilon_\delta))^n\, dx \]
for small $r > 0$.

Also, by the same argument as for (ii) of (7.1), we can show that $D^2u^\varepsilon_\delta(x) \le O$ in $\tilde\Gamma^{\varepsilon,\delta}_r$. Furthermore, from the definition of $u^\varepsilon$, we verify that $-\varepsilon^{-1}I \le D^2u^\varepsilon_\delta(x)$ in $\Omega^\varepsilon$.

Hence, sending $\delta \to 0$ with Lemma 7.5 (3), we have
\[ \int_{B_r} g(p)\, dp \le C\int_{\Gamma_r[u^\varepsilon,\Omega^\varepsilon]}\left( |Du^\varepsilon|^{n/(n-1)} + \kappa^{n/(n-1)} \right)^{1-n}(-\mathrm{trace}(D^2u^\varepsilon))^n\, dx \le C\int_{\Gamma_r[u^\varepsilon,\Omega^\varepsilon]}\left\{ \mu^n + \left(\frac{f^\varepsilon}{\kappa}\right)^n \right\} dx. \]

Therefore, sending $\varepsilon \to 0$ (again with Lemma 7.5 (3)), we obtain (7.4), which implies the conclusion.
Remark. Using the ABP maximum principle in Step 2 (i.e. f ∈ C(Ω)), we
can give a proof of Proposition 6.3, which will be seen in section 7.3. Thus,
in Step 3 below, we will use Proposition 6.3.
Step 3: $u \in C(\overline\Omega)$ and $f \in L^n(\Omega)$. Let $f_k \in C(\Omega)$ be nonnegative functions such that $\|f_k - f^+\|_{L^n(\Omega)} \to 0$ as $k \to \infty$.

In view of Proposition 6.3, we choose $\phi_k \in C(\overline\Omega) \cap W^{2,n}_{loc}(\Omega)$ such that
\[ \begin{cases} \mathcal{P}^+(D^2\phi_k) + \mu|D\phi_k| = f_k - f^+ & \text{a.e. in } \Omega, \\ \phi_k = 0 & \text{on } \partial\Omega, \\ \|\phi_k\|_{L^\infty(\Omega)} \le C\|f_k - f^+\|_{L^n(\Omega)}. \end{cases} \]

Setting $w_k := u + \phi_k - \|\phi_k\|_{L^\infty(\Omega)}$, we easily verify that $w_k$ is an $L^n$-viscosity subsolution of
\[ \mathcal{P}^-(D^2w_k) - \mu|Dw_k| \le f_k \quad \text{in } \Omega. \]
Note that $\Omega^+[w_k] \subset \Omega^+[u]$.

Thus, by Step 2, we have
\[ \max_{\overline\Omega} w_k \le \max_{\partial\Omega} w_k + C\|(f_k)^+\|_{L^n(\Gamma_r[w_k,\Omega]\cap\Omega^+[u])}. \]
Therefore, sending $k \to \infty$ with Lemma 7.5 (2), we finish the proof. $\Box$
7.3 Proof of existence results for Pucci equations

We shall solve the Pucci equations under the Dirichlet condition in $\Omega$. For simplicity of statements, we shall treat the case when $\Omega$ is a ball, though we will need the existence result in smooth domains later. To extend the result to general $\Omega$ with smooth boundary, we only need to modify the function $v_z$ in the argument below.

For $\mu \ge 0$ and $f \in L^p(\Omega)$ with $p \ge n$, we consider
\[ \begin{cases} \mathcal{P}^-(D^2u) - \mu|Du| \ge f & \text{in } B_1, \\ u = 0 & \text{on } \partial B_1, \end{cases} \tag{7.7} \]
and
\[ \begin{cases} \mathcal{P}^+(D^2u) + \mu|Du| \le f & \text{in } B_1, \\ u = 0 & \text{on } \partial B_1. \end{cases} \tag{7.8} \]
Sketch of proof. We only show the assertion for (7.8).

Step 1: $f \in C^\infty(\overline B_1)$. We first consider the case when $f \in C^\infty(\overline B_1)$.

Set $\mathcal{S}_{\lambda,\Lambda} := \{ A = (A_{ij}) \in S^n \mid \lambda I \le A \le \Lambda I \}$. We can choose a countable set $\mathcal{S}_0 := \{ A^k = (A^k_{ij}) \in \mathcal{S}_{\lambda,\Lambda} \}_{k=1}^\infty$ such that $\overline{\mathcal{S}_0} = \mathcal{S}_{\lambda,\Lambda}$.

Noting that $\mu|q| = \max\{ \langle b, q\rangle \mid b \in \partial B_\mu \}$ for $q \in \mathbf{R}^n$, we choose $\mathcal{B}_0 := \{ b^k \in \partial B_\mu \}_{k=1}^\infty$ such that $\overline{\mathcal{B}_0} = \partial B_\mu$.

According to Evans' result in 1983, we can find classical solutions $u_N \in C(\overline B_1) \cap C^2(B_1)$ of
\[ \begin{cases} \max_{k=1,\dots,N}\left\{ -\mathrm{trace}(A^k D^2u) + \langle b^k, Du\rangle \right\} = f & \text{in } B_1, \\ u = 0 & \text{on } \partial B_1. \end{cases} \tag{7.9} \]
Moreover, we find $\sigma = \sigma(\varepsilon) \in (0,1)$, $C_\varepsilon > 0$ (for each $\varepsilon \in (0,1)$) and $C_1 > 0$, which are independent of $N \ge 1$, such that
\[ \|u_N\|_{L^\infty(B_1)} \le C_1\|f\|_{L^n(B_1)} \quad \text{and} \quad \|u_N\|_{C^{2,\sigma}(\overline B_{1-\varepsilon})} \le C_\varepsilon. \tag{7.10} \]
Note that the first estimate of (7.10) is valid by Proposition 6.2 since the inhomogeneous term is continuous.

More precisely, by the classical comparison principle, Proposition 3.3, we have
\[ u_N \le u_1 \quad \text{in } B_1. \tag{7.11} \]
Furthermore, we can construct a subsolution of (7.9) for any $N \ge 1$ in the following manner: fix $z \in \partial B_1$ and set $v_z(x) := \alpha(e^{-\beta|x-2z|^2} - e^{-\beta})$, where $\alpha, \beta > 0$ (independent of $z \in \partial B_1$) will be chosen later. We first note that $v_z(z) = 0$ and $v_z(x) \le 0$ for $x \in \overline B_1$.

Setting $L_k w(x) := -\mathrm{trace}(A^k D^2w(x)) + \langle b^k, Dw(x)\rangle$, we verify that
\[ L_k v_z(x) \le 2\alpha\beta e^{-\beta|x-2z|^2}\left( \Lambda n - 2\beta\lambda|x-2z|^2 + \mu|x-2z| \right) \le 2\alpha\beta e^{-9\beta}(\Lambda n - 2\beta\lambda + 3\mu). \]

Thus, fixing $\beta := (\Lambda n + 3\mu + 1)/(2\lambda)$, we have $L_k v_z(x) \le -2\alpha\beta e^{-9\beta}$. Hence, taking $\alpha > 0$ large enough so that $2\alpha\beta e^{-9\beta} \ge \|f\|_{L^\infty(B_1)}$, we have
\[ \max_{k=1,2,\dots,N} L_k v_z(x) \le f(x) \quad \text{in } B_1. \]
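The barrier computation above is easy to spot-check numerically. In the sketch below (dimension $n = 2$ and the ellipticity constants are our own illustrative choices, and only diagonal matrices $A$ with $\lambda I \le A \le \Lambda I$ are sampled), we verify $L v_z \le -2\alpha\beta e^{-9\beta}$ at random points of $B_1$:

```python
import math, random

# Spot-check of the barrier v_z(x) = alpha*(exp(-beta*|x-2z|^2) - exp(-beta))
# with n = 2 and illustrative constants: for diagonal lam*I <= A <= Lam*I and
# |b| = mu, check  L v_z = -trace(A D^2 v_z) + <b, D v_z> <= -2*alpha*beta*e^{-9*beta}.
n, lam, Lam, mu = 2, 1.0, 2.0, 0.5
beta = (Lam * n + 3 * mu + 1) / (2 * lam)
alpha = 1.0
z = (1.0, 0.0)

def Lv(x, a_diag, b):
    y = [x[i] - 2 * z[i] for i in range(n)]
    e = alpha * math.exp(-beta * sum(t * t for t in y))
    # D v_z = -2*beta*e*y ;  D^2 v_z = e*(4*beta^2 * y (x) y - 2*beta*I)
    tr = sum(a_diag[i] * (4 * beta ** 2 * y[i] ** 2 - 2 * beta) for i in range(n))
    return -e * tr + sum(b[i] * (-2 * beta * e * y[i]) for i in range(n))

random.seed(0)
bound = -2 * alpha * beta * math.exp(-9 * beta)
for _ in range(2000):
    t = random.uniform(0.0, 2 * math.pi)
    r = random.uniform(0.0, 0.999)
    x = (r * math.cos(t), r * math.sin(t))
    a_diag = [random.uniform(lam, Lam) for _ in range(n)]
    bt = random.uniform(0.0, 2 * math.pi)
    b = (mu * math.cos(bt), mu * math.sin(bt))
    assert Lv(x, a_diag, b) <= bound + 1e-12
```

The key geometric fact is that $|x - 2z| \in [1, 3]$ on $\overline B_1$, which is what makes the single choice of $\beta$ work uniformly in $z$.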
Now, putting $V(x) := \sup_{z\in\partial B_1} v_z(x)$, in view of Theorem 4.2, we see that $V$ is a viscosity subsolution of
\[ \max_{k=1,2,\dots,N} L_k u(x) - f(x) \le 0 \quad \text{in } B_1. \]
Moreover, it is easy to check that $V^*(x) = 0$ for $x \in \partial B_1$. Thus, by Proposition 3.3 again, we obtain
\[ V \le u_N \quad \text{in } B_1. \tag{7.12} \]

Therefore, in view of (7.10)–(7.12), we can choose a sequence $N_k$ and $u \in C^2(B_1)$ such that $\lim_{k\to\infty} N_k = \infty$,
\[ (u_{N_k}, Du_{N_k}, D^2u_{N_k}) \to (u, Du, D^2u) \quad \text{uniformly in } \overline B_{1-\varepsilon} \]
for each $\varepsilon \in (0,1)$, and
\[ V \le u \le u_1 \quad \text{in } B_1. \tag{7.13} \]
We note that (7.13) implies that $u_* = u^*$ on $\partial B_1$.

By virtue of the stability result (Proposition 4.8), we see that $u$ is a viscosity solution of
\[ \mathcal{P}^+(D^2u) + \mu|Du| - f = 0 \quad \text{in } B_1, \]
since $\sup_{k\ge1}\{-\mathrm{trace}(A^k X) + \langle b^k, p\rangle\} = \mathcal{P}^+(X) + \mu|p|$. Hence, Theorem 3.9 yields $u \in C(\overline B_1)$.

Therefore, by Proposition 2.3, we see that $u \in C(\overline B_1) \cap C^2(B_1)$ is a classical solution of (7.8).
Step 2: $f \in L^p(B_1)$ (Lemma 3.1 in [5]). Choose $f_k \in C^\infty(\overline B_1)$ such that $\|f_k - f\|_{L^p(\Omega)} \to 0$ as $k \to \infty$.

Let $u_k \in C(\overline B_1) \cap C^2(B_1)$ be a classical solution of
\[ \mathcal{P}^+(D^2u) + \mu|Du| - f_k = 0 \quad \text{in } B_1 \]
such that $u_k = 0$ on $\partial B_1$.

We first claim that $\{u_k\}_{k=1}^\infty$ is a Cauchy sequence in $L^\infty(B_1)$. Indeed, since (1) and (4) of Proposition 3.2 imply that
\[ \mathcal{P}^-(D^2(u_j - u_k)) - \mu|D(u_j - u_k)| \le \mathcal{P}^+(D^2u_j) + \mathcal{P}^-(-D^2u_k) + \mu|Du_j| - \mu|Du_k| \]
\[ = f_j - f_k \le \mathcal{P}^+(D^2u_j) - \mathcal{P}^+(D^2u_k) + \mu|D(u_j - u_k)| \le \mathcal{P}^+(D^2(u_j - u_k)) + \mu|D(u_j - u_k)|, \]
using Proposition 6.2 (the inhomogeneous term here being continuous), we have
\[ \max_{\overline B_1}|u_j - u_k| \le C\|f_j - f_k\|_{L^n(B_1)}. \]
Recalling $p \ge n$, we thus have
\[ \|u_j - u_k\|_{L^\infty(B_1)} \le C\|f_j - f_k\|_{L^p(B_1)}. \]
Hence, we find $u \in C(\overline B_1)$ such that $u_k$ converges to $u$ uniformly in $\overline B_1$ as $k \to \infty$.

Therefore, by the standard covering and limiting arguments with weak convergence in $W^{2,p}$ locally, it suffices to find $C > 0$, independent of $k \ge 1$, such that
\[ \|u_k\|_{W^{2,p}(B_{1/2})} \le C. \]
Moreover, we see that $-C\|f^-\|_{L^p(B_1)} \le u \le C\|f^+\|_{L^p(B_1)}$ in $B_1$.

For $\varepsilon \in (0, 1/2)$, we select $\eta := \eta_\varepsilon \in C^2(\overline B_1)$ such that
\[ \begin{cases} \text{(i) } 0 \le \eta \le 1 \text{ in } B_1, \\ \text{(ii) } \eta = 0 \text{ in } B_1 \setminus B_{1-\varepsilon}, \\ \text{(iii) } \eta = 1 \text{ in } B_{1-2\varepsilon}, \\ \text{(iv) } |D\eta| \le C_0\varepsilon^{-1}, \; |D^2\eta| \le C_0\varepsilon^{-2} \text{ in } B_1, \end{cases} \]
where $C_0 > 0$ is independent of $\varepsilon \in (0, 1/2)$.
Now, we recall Caffarelli's result (1989) (see also [4]): there is a universal constant $\hat C > 0$ such that
\[ \|D^2(\eta u_k)\|_{L^p(B_{1-\varepsilon})} \le \hat C\|\mathcal{P}^+(D^2(\eta u_k))\|_{L^p(B_{1-\varepsilon})}. \]
Hence, we find $C_1 > 0$ such that for $0 < \varepsilon < 1/4$,
\[ \|D^2u_k\|_{L^p(B_{1-2\varepsilon})} \le \|D^2(\eta u_k)\|_{L^p(B_{1-\varepsilon})} \le \hat C\|\mathcal{P}^+(D^2(\eta u_k))\|_{L^p(B_{3/4})} \]
\[ \le C_1\left\{ \|f_k\|_{L^p(B_{1-\varepsilon})} + \varepsilon^{-1}\|Du_k\|_{L^p(B_{1-\varepsilon})} + \varepsilon^{-2}\|u_k\|_{L^p(B_{1-\varepsilon})} \right\}. \]

Multiplying by $\varepsilon^2 > 0$ in the above, we get
\[ \varepsilon^2\|D^2u_k\|_{L^p(B_{1-2\varepsilon})} \le C_1\left( \|f_k\|_{L^p(B_1)} + \phi_1(u_k) + \phi_0(u_k) \right), \]
where $\phi_j(u_k) := \sup_{0<\varepsilon<1/2}\varepsilon^j\|D^j u_k\|_{L^p(B_{1-\varepsilon})}$ for $j = 0, 1, 2$.
Therefore, in view of the "interpolation" inequality (see [13] for example), i.e. for any $\delta > 0$ there is $C_\delta > 0$ such that
\[ \phi_1(u_k) \le \delta\phi_2(u_k) + C_\delta\phi_0(u_k), \]
we find $C_3 > 0$ such that
\[ \phi_2(u_k) \le C_3\left\{ \|f_k\|_{L^p(B_1)} + \phi_0(u_k) \right\}. \]
On the other hand, since we have the $L^\infty$ estimates for $u_k$, we conclude the proof. $\Box$

Remark. It is possible to show that the uniform limit $u$ in Step 2 is an $L^p$-viscosity solution of (7.8) by Proposition 6.13. Moreover, since it is known that if an $L^p$-viscosity supersolution of (7.8) belongs to $W^{2,p}_{loc}(B_1)$, then it is an $L^p$-strong supersolution (see [5]), $u$ satisfies $\mathcal{P}^+(D^2u) + \mu|Du| = f(x)$ a.e. in $B_1$.
7.4 Proof of the weak Harnack inequality

We need a modification of Lemma 4.1 in [4] since our PDE (7.14) below has the first derivative term.

Lemma 7.6. (cf. Lemma 4.1 in [4]) There are $\phi \in C^2(\overline B_{2\sqrt n})$ and $\xi \in C(\overline B_{2\sqrt n})$ such that
\[ \begin{cases} \text{(1) } \mathcal{P}^-(D^2\phi) - \mu|D\phi| \ge -\xi \text{ in } B_{2\sqrt n}, \\ \text{(2) } \phi(x) \le -2 \text{ for } x \in Q_3, \\ \text{(3) } \phi(x) = 0 \text{ for } x \in \partial B_{2\sqrt n}, \\ \text{(4) } \xi(x) = 0 \text{ for } x \in B_{2\sqrt n} \setminus B_{1/2}. \end{cases} \]

Proof. Set $\phi_0(r) := A\{1 - (2\sqrt n/r)^\alpha\}$ for $A, \alpha > 0$, so that $\phi_0(2\sqrt n) = 0$.

[Fig 7.1: the graphs of $y = \phi(x)$ and $y = \phi_0(|x|)$ on $[-2\sqrt n, 2\sqrt n]$; $\phi$ agrees with $\phi_0(|x|)$ outside $B_{1/2}$ and satisfies $\phi \le -2$ on $Q_3$.]

Since
\[ D\phi_0(|x|) = A(2\sqrt n)^\alpha\alpha|x|^{-\alpha-2}x, \qquad D^2\phi_0(|x|) = A(2\sqrt n)^\alpha\alpha|x|^{-\alpha-4}\left\{ |x|^2 I - (\alpha+2)x\otimes x \right\}, \]
we calculate in the following way: at $x \ne 0$, we have
\[ \mathcal{P}^-(D^2\phi_0(|x|)) - \mu|D\phi_0(|x|)| \ge A(2\sqrt n)^\alpha\alpha|x|^{-\alpha-2}\left\{ (\alpha+2)\lambda - n\Lambda - \mu|x| \right\}. \]
Setting $\alpha := \lambda^{-1}(n\Lambda + 2\mu\sqrt n) - 2$, so that $\alpha > 0$ for $n \ge 2$, we see that the right hand side of the above is nonnegative for $x \in B_{2\sqrt n} \setminus \{0\}$. Thus, taking $\phi \in C^2(\overline B_{2\sqrt n})$ such that $\phi(x) = \phi_0(|x|)$ for $x \in B_{2\sqrt n} \setminus B_{1/2}$ and $\phi(x) \le \phi_0(3\sqrt n/2)$ for $x \in B_{3\sqrt n/2}$, we can choose a continuous function $\xi$ satisfying (1) and (4). See Fig 7.1.

Moreover, taking $A := 2/\{(4/3)^\alpha - 1\}$, so that $\phi_0(3\sqrt n/2) = -2$, we see that (2) holds. $\Box$
We now present an important "cube decomposition lemma".

We first explain the terminology for the lemma: for a cube $\tilde Q := Q_r(x)$ with $r > 0$ and $x \in \mathbf{R}^n$, we call $Q$ a dyadic cube of $\tilde Q$ if it is one of the $2^n$ cubes $\{Q^k\}_{k=1}^{2^n}$ given by $Q^k := Q_{r/2}(x^k)$ for suitable $x^k \in \tilde Q$, with $\bigcup_{k=1}^{2^n} Q^k \subset \tilde Q \subset \bigcup_{k=1}^{2^n}\overline{Q^k}$.

Lemma 7.7. (Lemma 4.2 in [4]) Let $A \subset B \subset Q_1$ be measurable sets and $0 < \delta < 1$ be such that
(a) $|A| \le \delta$;
(b) if a dyadic cube $Q$ of $\tilde Q \subset Q_1$ satisfies $|A \cap Q| > \delta|Q|$, then $\tilde Q \subset B$.
Then, $|A| \le \delta|B|$.
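The selection rule (b) can be watched in action in one dimension. The toy sketch below (the interval $A = [0, 3/16)$ and $\delta = 1/2$ are our own choices) splits $Q_1 = [0,1)$ dyadically, puts a parent interval into $B$ whenever one of its children $Q$ satisfies $|A \cap Q| > \delta|Q|$, and confirms the conclusion $|A| \le \delta|B|$:

```python
from fractions import Fraction as F

# 1-D toy run of the cube (interval) decomposition: A = [0, 3/16), delta = 1/2.
# Starting from Q_1 = [0, 1), we split dyadically; whenever a child interval Q
# has |A ∩ Q| > delta*|Q|, its parent enters B (rule (b)) and Q is not refined
# further.  The lemma predicts |A| <= delta*|B|.
delta = F(1, 2)
A_len = F(3, 16)

def meas_A(lo, hi):                  # |A ∩ [lo, hi)| for A = [0, A_len)
    return max(F(0), min(hi, A_len) - lo)

B = set()

def split(lo, hi, depth):
    if depth == 0:
        return
    mid = (lo + hi) / 2
    for a, b in ((lo, mid), (mid, hi)):
        if meas_A(a, b) > delta * (b - a):
            B.add((lo, hi))          # parent enters B; stop refining here
        else:
            split(a, b, depth - 1)

split(F(0), F(1), 10)
B_len = sum(hi - lo for lo, hi in B)  # the selected parents are disjoint here
assert A_len <= delta * B_len
```

In this run the single parent $[0, 1/2)$ is selected, so $|B| = 1/2$ and indeed $3/16 \le 1/4$.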
Proof of Proposition 6.4. Assuming that $u \in C(\overline B_{2\sqrt n})$ is a nonnegative $L^p$-viscosity supersolution of
\[ \mathcal{P}^+(D^2u) + \mu|Du| \ge 0 \quad \text{in } B_{2\sqrt n}, \tag{7.14} \]
we shall show that for some constants $p_0 > 0$ and $C_1 > 0$,
\[ \|u\|_{L^{p_0}(Q_1)} \le C_1\inf_{Q_{1/2}} u. \]

To this end, it is sufficient to show that if $u \in C(\overline B_{2\sqrt n})$ satisfies $\inf_{Q_{1/2}} u \le 1$, then we have $\|u\|_{L^{p_0}(Q_1)} \le C_1$ for some constants $p_0, C_1 > 0$. Indeed, by taking $v(x) := u(x)\left( \inf_{Q_{1/2}} u + \delta \right)^{-1}$ for any $\delta > 0$ in place of $u$, we have $\|v\|_{L^{p_0}(Q_1)} \le C_1$, which implies the assertion by sending $\delta \to 0$.

Lemma 7.8. There are $\theta > 0$ and $M > 1$ such that if $u \in C(\overline B_{2\sqrt n})$ is a nonnegative $L^p$-viscosity supersolution of (7.14) such that
\[ \inf_{Q_3} u \le 1, \tag{7.15} \]
then we have
\[ |\{ x \in Q_1 \mid u(x) \le M \}| \ge \theta. \]
Remark. In our setting of the proof of Proposition 6.4, assumption (7.15) is automatically satisfied.
Proof of Lemma 7.8. Choose $\phi \in C^2(\overline B_{2\sqrt n})$ and $\xi \in C(\overline B_{2\sqrt n})$ from Lemma 7.6. Using (4) of Proposition 3.2, we easily see that $w := u + \phi$ is an $L^n$-viscosity supersolution of
\[ \mathcal{P}^+(D^2w) + \mu|Dw| \ge -\xi \quad \text{in } B_{2\sqrt n}. \]

Since $\inf_{Q_3} w \le -1$ and $w \ge 0$ on $\partial B_{2\sqrt n}$, by (2) and (3) of Lemma 7.6 respectively, Proposition 6.2 yields $\hat C > 0$ such that
\[ 1 \le \sup_{Q_3}(-w) \le \sup_{B_{2\sqrt n}}(-w) \le \hat C\|\xi\|_{L^n(\Gamma[-w,B_{2\sqrt n}]\cap B^+_{2\sqrt n}[-w])}. \tag{7.16} \]

In view of (4) of Lemma 7.6, (7.16) implies that
\[ 1 \le \hat C\max_{\overline B_{1/2}}|\xi|\cdot|\{ x \in Q_1 \mid (u+\phi)(x) < 0 \}|^{1/n}. \]

Since $u(x) < -\phi(x) \le \max_{\overline B_{2\sqrt n}}(-\phi) =: M$ whenever $(u+\phi)(x) < 0$, and $M \ge 2$ by (2) of Lemma 7.6, setting $\theta := (\hat C\max_{\overline B_{1/2}}|\xi|)^{-n} > 0$, we have
\[ \theta \le |\{ x \in Q_1 \mid u(x) \le M \}|. \qquad \Box \]
We next show the following:

Lemma 7.9. Under the same assumptions as in Lemma 7.8, we have
\[ |\{ x \in Q_1 \mid u(x) > M^k \}| \le (1-\theta)^k \quad \text{for all } k = 1, 2, \dots \]

Proof. Lemma 7.8 yields the assertion for $k = 1$.

Suppose that it holds for $k-1$. Setting $A := \{ x \in Q_1 \mid u(x) > M^k \}$ and $B := \{ x \in Q_1 \mid u(x) > M^{k-1} \}$, we shall show $|A| \le (1-\theta)|B|$.

Since $A \subset B \subset Q_1$ and $|A| \le |\{ x \in Q_1 \mid u(x) > M \}| \le \delta := 1-\theta$, in view of Lemma 7.8, it is enough to check that property (b) in Lemma 7.7 holds.

To this end, let $Q := Q_{1/2^j}(z)$ be a dyadic cube of $\tilde Q := Q_{1/2^{j-1}}(\hat z)$ (for some $z, \hat z \in Q_1$ and $j \ge 1$) such that
\[ |A \cap Q| > \delta|Q| = \frac{1-\theta}{2^{jn}}. \tag{7.17} \]
It remains to show
˜
Q ⊂ B.
Assuming that there is ˜ x ∈
˜
Q such that ˜ x / ∈ B; i.e. u(˜ x) ≤ M
k−1
.
Set v(x) := u(z + 2
−j
x)/M
k−1
for x ∈ B
2
√
n
. Since [˜ x
i
−z
i
[ ≤ 3/2
j+1
, we
see that inf
Q
3
v ≤ u(˜ x)/M
k−1
≤ 1. Furthermore, since z ∈ Q
1
, z + 2
−j
x ∈
B
2
√
n
for x ∈ B
2
√
n
.
Fig 7.2: the dyadic cube $Q = Q_{1/2^j}(z)$ inside its predecessor $\tilde{Q} = Q_{1/2^{j-1}}(\hat{z})$, together with the point $\tilde{x}$ and the scaled cube $z + 2^{-j} Q_3$.
Thus, since $v$ is an $L^p$-viscosity supersolution of
$$\mathcal{P}^+(D^2 v) + \mu |Dv| \ge 0,$$
Lemma 7.8 yields $|\{x \in Q_1 \mid v(x) \le M\}| \ge \theta$. Therefore, we have
$$|\{x \in Q \mid u(x) \le M^k\}| \ge \frac{\theta}{2^{jn}} = \theta |Q|.$$
Thus, we have $|Q \setminus A| \ge \theta |Q|$. Hence, in view of (7.17), we have
$$|Q| = |A \cap Q| + |Q \setminus A| > \delta |Q| + \theta |Q| = |Q|,$$
which is a contradiction. $\Box$
Back to the proof of Proposition 6.4. A direct consequence of Lemma 7.9 is that there are $\tilde{C}, \varepsilon > 0$ such that
$$|\{x \in Q_1 \mid u(x) \ge t\}| \le \tilde{C} t^{-\varepsilon} \quad \text{for } t > 0. \tag{7.18}$$
Indeed, for $t > M$, we choose an integer $k \ge 1$ so that $M^{k+1} \ge t > M^k$. Thus, we have
$$|\{x \in Q_1 \mid u(x) \ge t\}| \le |\{x \in Q_1 \mid u(x) > M^k\}| \le (1 - \theta)^k \le \tilde{C}_0 t^{-\varepsilon},$$
where $\tilde{C}_0 := (1 - \theta)^{-1}$ and $\varepsilon := -\log(1 - \theta)/\log M > 0$. Since $1 \le M^\varepsilon t^{-\varepsilon}$ for $0 < t \le M$, taking $\tilde{C} := \max\{\tilde{C}_0, M^\varepsilon\}$, we obtain (7.18).
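The arithmetic behind this choice of $\varepsilon$ and $\tilde{C}_0$ can be verified numerically; in the sketch below the values of $\theta$ and $M$ are arbitrary placeholders (any $\theta \in (0,1)$ and $M > 1$ behave the same way):

```python
import math

# Check the step from Lemma 7.9 to (7.18): with eps = -log(1-theta)/log(M)
# and C0 = 1/(1-theta), the decay (1-theta)^k is dominated by C0 * t^(-eps)
# for every t in (M^k, M^(k+1)].
theta, M = 0.3, 4.0                       # illustrative values
eps = -math.log(1 - theta) / math.log(M)  # so that M**(-eps) == 1 - theta
C0 = 1.0 / (1 - theta)

ok = True
for k in range(1, 25):
    for t in (M**k * 1.0001, M**(k + 0.5), M**(k + 1)):
        ok = ok and (1 - theta) ** k <= C0 * t ** (-eps) * (1 + 1e-12)
print(ok)  # True
```

Equality holds at the endpoint $t = M^{k+1}$, which is why the factor $\tilde{C}_0 = (1-\theta)^{-1}$ cannot be dropped.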
Now, recalling Fubini's theorem,
$$\int_{Q_1} u^{p_0}(x)\,dx \le \int_{\{x \in Q_1 \mid u(x) \ge 1\}} u^{p_0}(x)\,dx + 1 \le p_0 \int_1^\infty t^{p_0 - 1}\, |\{x \in Q_1 \mid u(x) \ge t\}|\,dt + 2$$
(see Lemma 9.7 in [13] for instance; here we used $|\{x \in Q_1 \mid u(x) \ge 1\}| \le |Q_1| = 1$). In view of (7.18), for any $p_0 \in (0, \varepsilon)$, we can find $C(p_0) > 0$ such that $\|u\|_{L^{p_0}(Q_1)} \le C(p_0)$. $\Box$
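The layer-cake (Fubini) computation above can be tested on a concrete example; here $u(x) = x^{-1/2}$ on $(0,1)$ with $p_0 = 1/2$ is an illustrative choice, for which both sides equal $4/3$:

```python
# Layer-cake check: for u >= 0 on a set of measure 1,
#   integral over {u >= 1} of u^p0 = |{u >= 1}| + p0 * int_1^inf t^(p0-1) |{u >= t}| dt.
# Take u(x) = x^(-1/2) on (0,1) and p0 = 1/2; both sides equal 4/3.
p0 = 0.5
N = 100000
# left side by midpoint quadrature: integral over (0,1) of x^(-1/4)
lhs = sum(((i + 0.5) / N) ** (-p0 / 2) for i in range(N)) / N
# right side: |{u >= t}| = min(1, t^(-2)) for t >= 1; truncate the integral at T
T, m = 1000.0, 100000
dt = (T - 1.0) / m
rhs = 1.0 + p0 * dt * sum(
    (1.0 + (j + 0.5) * dt) ** (p0 - 1) * min(1.0, (1.0 + (j + 0.5) * dt) ** (-2.0))
    for j in range(m))
print(round(lhs, 2), round(rhs, 2))  # 1.33 1.33
```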
7.5 Proof of the local maximum principle
Although our proof is a bit technical, we give a modification of Trudinger's proof in [13] (Theorem 9.20), in which he observed a precise estimate for "strong" subsolutions on the upper contact set. Fok [11] (1996) gave a proof similar to ours.

We note that a different proof of the local maximum principle can be found in [4] (Theorem 4.8 (2)).
Proof of Proposition 6.5. We give a proof only for $q \in (0, 1]$ because the assertion for $q > 1$ then follows immediately from Hölder's inequality.
Let $x_0 \in Q_1$ be such that $\max_{Q_1} u = u(x_0)$. It is sufficient to show that
$$\max_{B_{1/4}(x_0)} u \le C_2 \|u^+\|_{L^q(B_{1/2}(x_0))}$$
since $B_{1/2}(x_0) \subset Q_2$. Thus, by considering $u(x_0 + x/2)$ instead of $u(x)$, it is enough to find $C_2 > 0$ such that
$$\max_{B_{1/2}} u \le C_2 \|u^+\|_{L^q(B_1)}.$$
We may suppose that
$$\max_{B_1} u > 0 \tag{7.19}$$
since otherwise the conclusion is trivial.

Furthermore, by the continuity of $u$, we can choose $\tau \in (0, 1/4)$ such that $1 - 2\tau \ge 1/2$ and
$$\max_{B_{1-2\tau}} u > 0.$$
We shall consider the sup-convolution of $u$ again: for $\varepsilon \in (0, \tau)$,
$$u^\varepsilon(x) := \sup_{y \in B_1} \left\{ u(y) - \frac{|x - y|^2}{2\varepsilon} \right\}.$$
By the uniform convergence of $u^\varepsilon$ to $u$, (7.19) yields
$$\max_{B_{1-\tau}} u^\varepsilon > 0 \quad \text{for small } \varepsilon > 0. \tag{7.20}$$
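The sup-convolution is easy to compute on a grid; the following one-dimensional sketch (with an illustrative kinked sample function) exhibits the two properties used here, $u^\varepsilon \ge u$ and uniform convergence as $\varepsilon \to 0$:

```python
# Discrete sup-convolution u_eps(x) = sup_y ( u(y) - |x - y|^2 / (2*eps) )
# for a kinked (merely Lipschitz) sample function on [-1, 1].
xs = [i / 200.0 for i in range(-200, 201)]
uvals = {x: -abs(x) for x in xs}

def sup_conv(eps):
    return {x: max(uvals[y] - (x - y) ** 2 / (2.0 * eps) for y in xs) for x in xs}

errs = []
for eps in (0.1, 0.01, 0.001):
    ue = sup_conv(eps)
    assert all(ue[x] >= uvals[x] for x in xs)        # u_eps >= u always
    errs.append(max(abs(ue[x] - uvals[x]) for x in xs))
print([round(e, 4) for e in errs])  # sup-norm distance to u shrinks with eps
```

Taking $y = x$ in the supremum gives $u^\varepsilon \ge u$ immediately; the quadratic penalty forces the maximizing $y$ toward $x$ as $\varepsilon \to 0$, which is the source of the uniform convergence.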
For small $\varepsilon > 0$, we can choose $\delta := \delta(\varepsilon) \in (0, \tau)$ such that $\lim_{\varepsilon \to 0} \delta = 0$ and
$$\mathcal{P}^-(D^2 u^\varepsilon) - \mu |Du^\varepsilon| \le 0 \quad \text{a.e. in } B_{1-\delta}.$$
Putting $\eta_\varepsilon(x) := \{(1 - \delta)^2 - |x|^2\}^\beta$ for $\beta := 2n/q \ge 2$, we define $v_\varepsilon(x) := \eta_\varepsilon(x) u^\varepsilon(x)$. We note that
$$r_\varepsilon := \max_{B_{1-\delta}} v_\varepsilon > 0.$$
Fix $r \in (0, r_\varepsilon)$ and set $\Gamma^\varepsilon_r := \Gamma_r[v_\varepsilon, B_{1-\delta}]$. By (1) in Lemma 7.5, we see
$$\Gamma^\varepsilon_r \subset B^+_{1-\delta}[v_\varepsilon].$$
For later convenience, we observe that
$$Dv_\varepsilon(x) = -2\beta x\, \eta(x)^{(\beta-1)/\beta} u^\varepsilon(x) + \eta(x) Du^\varepsilon(x), \tag{7.21}$$
$$\begin{aligned} D^2 v_\varepsilon(x) = {}& -2\beta \eta(x)^{(\beta-1)/\beta} \{ u^\varepsilon(x) I + x \otimes Du^\varepsilon(x) + Du^\varepsilon(x) \otimes x \} \\ & + 4\beta(\beta - 1) \eta(x)^{(\beta-2)/\beta} u^\varepsilon(x)\, x \otimes x + \eta(x) D^2 u^\varepsilon(x). \end{aligned} \tag{7.22}$$
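Identity (7.21) can be sanity-checked by finite differences; the one-dimensional sketch below uses illustrative choices of $u$, $\beta$ and $\delta$ (identity (7.22) can be checked the same way with second differences):

```python
import math

# 1-D finite-difference check of (7.21): with eta(x) = ((1-delta)^2 - x^2)^beta
# and v = eta * u, one has v' = -2*beta*x*eta^((beta-1)/beta)*u + eta*u'.
beta, delta = 4.0, 0.1
eta = lambda x: ((1 - delta) ** 2 - x * x) ** beta
u = math.cos                                  # a smooth illustrative u
du = lambda x: -math.sin(x)
v = lambda x: eta(x) * u(x)

h = 1e-6
max_err = 0.0
for i in range(-80, 81):
    x = i / 100.0                             # stay inside |x| < 1 - delta
    fd = (v(x + h) - v(x - h)) / (2 * h)
    formula = -2 * beta * x * eta(x) ** ((beta - 1) / beta) * u(x) + eta(x) * du(x)
    max_err = max(max_err, abs(fd - formula))
print(max_err < 1e-5)  # True
```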
Since $u^\varepsilon$ is twice differentiable almost everywhere, we can choose a measurable set $N_\varepsilon \subset B_{1-\delta}$ such that $|N_\varepsilon| = 0$ and $u^\varepsilon$ is twice differentiable at each $x \in B_{1-\delta} \setminus N_\varepsilon$. Of course, $v_\varepsilon$ is also twice differentiable at each $x \in B_{1-\delta} \setminus N_\varepsilon$.
By (7.22), we have
$$\mathcal{P}^-(D^2 v_\varepsilon) \le \eta\, \mathcal{P}^-(D^2 u^\varepsilon) + 2\beta \eta^{(\beta-1)/\beta} \left\{ \Lambda n u^\varepsilon - \mathcal{P}^-(x \otimes Du^\varepsilon + Du^\varepsilon \otimes x) \right\}$$
in $B^+_{1-\delta}[v_\varepsilon]$. By using (7.21), the last term in the above can be estimated from above by
$$C \left\{ \eta^{-2/\beta} (v_\varepsilon)^+ + \eta^{-1/\beta} |Dv_\varepsilon| \right\}.$$
Moreover, using (7.21) again, we have
$$\mathcal{P}^-(D^2 u^\varepsilon) \le \mu |Du^\varepsilon| \le \mu \eta^{-1} |Dv_\varepsilon| + C \eta^{-1/\beta} (u^\varepsilon)^+.$$
Hence, we find $C > 0$ such that
$$\mathcal{P}^-(D^2 v_\varepsilon) \le C \eta^{-1/\beta} |Dv_\varepsilon| + C \eta^{-2/\beta} (v_\varepsilon)^+ =: g_\varepsilon \quad \text{in } B_{1-\delta} \setminus N_\varepsilon. \tag{7.23}$$
We next claim that there is $C > 0$ such that
$$|Dv_\varepsilon(x)| \le C \eta^{-1/\beta}(x)\, v_\varepsilon(x) \quad \text{for } x \in \Gamma^\varepsilon_r \setminus N_\varepsilon. \tag{7.24}$$
First, we note that at $x \in \Gamma^\varepsilon_r \setminus N_\varepsilon$,
$$v_\varepsilon(y) \le v_\varepsilon(x) + \langle Dv_\varepsilon(x), y - x \rangle \quad \text{for } y \in B_{1-\delta}.$$
To show the claim, since we may suppose $|Dv_\varepsilon(x)| > 0$ to get the estimate, we set $y := x - t Dv_\varepsilon(x) |Dv_\varepsilon(x)|^{-1} \in \partial B_{1-\delta}$ for some $t \in [1 - \delta - |x|,\, 1 - \delta + |x|]$, and see that
$$0 = v_\varepsilon(y) \le v_\varepsilon(x) - t |Dv_\varepsilon(x)|,$$
which implies
$$|Dv_\varepsilon(x)| \le C v_\varepsilon(x)\, \eta^{-1/\beta}(x) \quad \text{in } \Gamma^\varepsilon_r \setminus N_\varepsilon. \tag{7.25}$$
Here, we use Lemma 2.8 in [5], which will be proved at the end of this subsection for the reader's convenience:

Lemma 7.10. Let $w \in C(\Omega)$ be twice differentiable a.e. in $\Omega$, and satisfy
$$\mathcal{P}^-(D^2 w) \le g \quad \text{a.e. in } \Omega,$$
where $g \in L^p(\Omega)$ with $p \ge n$. If $-C_1 I \le D^2 w(x) \le O$ a.e. in $\Omega$ for some $C_1 > 0$, then $w$ is an $L^p$-viscosity subsolution of
$$\mathcal{P}^-(D^2 w) \le g \quad \text{in } \Omega. \tag{7.26}$$
Since $u^\varepsilon$ is Lipschitz continuous in $B_{1-\delta}$, by (7.22), we see that $v_\varepsilon$ is an $L^n$-viscosity subsolution of
$$\mathcal{P}^-(D^2 v_\varepsilon) \le g_\varepsilon \quad \text{in } B_{1-\delta}.$$
Noting (7.25), in view of Proposition 6.2, we have
$$\max_{B_{1-\delta}} v_\varepsilon \le C \left\| \eta^{-2/\beta} (v_\varepsilon)^+ \right\|_{L^n(\Gamma^\varepsilon_r)} \le C \left( \max_{B_{1-\delta}} (v_\varepsilon)^+ \right)^{\frac{\beta-2}{\beta}} \left\| ((u^\varepsilon)^+)^{2/\beta} \right\|_{L^n(B_{1-\delta})},$$
which together with our choice of $\beta$ yields
$$\max_{B_{1-\delta}} v_\varepsilon \le C \|(u^\varepsilon)^+\|_{L^q(B_{1-\delta})}.$$
Therefore, by (7.20), we have
$$\max_{B_{1/2}} u^\varepsilon \le C \max_{B_{1-\delta}} v_\varepsilon \le C \|(u^\varepsilon)^+\|_{L^q(B_{1-\delta})}.$$
Sending $\varepsilon \to 0$ in the above, we finish the proof. $\Box$
Proof of Lemma 7.10. In order to show that $w \in C(\Omega)$ is an $L^p$-viscosity subsolution of (7.26), we suppose the contrary: there are $\varepsilon, r > 0$, $\hat{x} \in \Omega$ and $\phi \in W^{2,p}_{loc}(\Omega)$ such that $0 = (w - \phi)(\hat{x}) = \max_\Omega (w - \phi)$, $B_{2r}(\hat{x}) \subset \Omega$, and
$$\mathcal{P}^-(D^2 \phi) - g \ge 2\varepsilon \quad \text{a.e. in } B_r(\hat{x}).$$
We may suppose that $\hat{x} = 0 \in \Omega$. Setting $\psi(x) := \phi(x) + \tau |x|^4$ for small $\tau > 0$, we observe that
$$h := \mathcal{P}^-(D^2 \psi) - g \ge \varepsilon \quad \text{a.e. in } B_r.$$
Notice that $0 = (w - \psi)(0) > (w - \psi)(x)$ for $x \in B_r \setminus \{0\}$.
Moreover, we observe
$$\mathcal{P}^-(D^2 (w - \psi)) \le -\varepsilon \quad \text{a.e. in } B_r. \tag{7.27}$$
Consider $w_\delta := w * \rho_\delta$, where $\rho_\delta$ is the standard mollifier for $\delta > 0$. From our assumption, we see that, as $\delta \to 0$,
$$\begin{cases} (1)\ w_\delta \to w \ \text{uniformly in } B_r, \\ (2)\ D^2 w_\delta \to D^2 w \ \text{a.e. in } B_r. \end{cases}$$
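The standard mollifier and the uniform convergence in (1) can be illustrated in one dimension; the kinked sample function $w$ and the quadrature resolution below are arbitrary choices:

```python
import math

# 1-D standard mollifier: rho(y) = c * exp(1/(y^2 - 1)) on (-1, 1), with c
# chosen so that rho integrates to 1; then (w * rho_delta)(x) equals
# the integral of w(x - delta*y) rho(y) dy, which tends to w(x) uniformly.
m = 400
ys = [-1.0 + 2.0 * (j + 0.5) / m for j in range(m)]
rho = [math.exp(1.0 / (y * y - 1.0)) for y in ys]
c = sum(rho) * (2.0 / m)
rho = [r / c for r in rho]                    # now integrates to 1

def mollify(w, x, delta):
    return sum(r * w(x - delta * y) for r, y in zip(rho, ys)) * (2.0 / m)

w = abs                                       # continuous, kinked at 0
errs = []
for delta in (0.1, 0.01):
    errs.append(max(abs(mollify(w, k / 100.0, delta) - abs(k / 100.0))
                    for k in range(-100, 101)))
print([round(e, 4) for e in errs])            # uniform error is O(delta)
```

For a Lipschitz $w$ the uniform error is at most $\mathrm{Lip}(w)\,\delta \int |y| \rho(y)\,dy$, which is what the printed values reflect.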
By Lusin's theorem, for any $\alpha > 0$, we find $E_\alpha \subset B_r$ such that $|B_r \setminus E_\alpha| < \alpha$,
$$\int_{B_r \setminus E_\alpha} \left(1 + |\mathcal{P}^-(-D^2 \psi)|\right)^p dx < \alpha,$$
and
$$D^2 w_\delta \to D^2 w \ \text{uniformly in } E_\alpha \quad (\text{as } \delta \to 0).$$
Setting $h_\delta := \mathcal{P}^-(D^2 (w_\delta - \psi))$, we find $C > 0$ such that
$$h_\delta \le C + \mathcal{P}^-(-D^2 \psi)$$
because of our hypothesis. Hence, we have
$$\|(h_\delta)^+\|^p_{L^p(B_r)} \le C \int_{B_r \setminus E_\alpha} \left(1 + |\mathcal{P}^-(-D^2 \psi)|\right)^p dx + \int_{E_\alpha} |(h_\delta)^+|^p\, dx.$$
Sending $\delta \to 0$ in the above, by (7.27), we have
$$\limsup_{\delta \to 0} \|(h_\delta)^+\|_{L^p(B_r)} \le C \left\| 1 + |\mathcal{P}^-(-D^2 \psi)| \right\|_{L^p(B_r \setminus E_\alpha)} \le C \alpha^{1/p}. \tag{7.28}$$
On the other hand, in view of Proposition 6.2, we see that
$$\max_{B_r} (w_\delta - \psi) \le \max_{\partial B_r} (w_\delta - \psi) + C \|(h_\delta)^+\|_{L^p(B_r)}.$$
Hence, by sending $\delta \to 0$, this inequality together with (7.28) implies that
$$0 = \max_{B_r} (w - \psi) \le \max_{\partial B_r} (w - \psi) + C \alpha^{1/p} \quad \text{for any } \alpha > 0.$$
This is a contradiction since $\max_{\partial B_r} (w - \psi) < 0$. $\Box$
References

[1] M. Bardi & I. Capuzzo Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Systems & Control, Birkhäuser, 1997.

[2] G. Barles, Solutions de Viscosité des Équations de Hamilton-Jacobi, Mathématiques et Applications 17, Springer, 1994.

[3] M. Bardi, M. G. Crandall, L. C. Evans, H. M. Soner & P. E. Souganidis, Viscosity Solutions and Applications, Lecture Notes in Mathematics 1660, Springer, 1997.

[4] L. A. Caffarelli & X. Cabré, Fully Nonlinear Elliptic Equations, Colloquium Publications 43, AMS, 1995.

[5] L. A. Caffarelli, M. G. Crandall, M. Kocan & A. Święch, On viscosity solutions of fully nonlinear equations with measurable ingredients, Comm. Pure Appl. Math. 49 (1996), 365–397.

[6] M. G. Crandall, H. Ishii & P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27 (1992), 1–67.

[7] M. G. Crandall, M. Kocan, P.-L. Lions & A. Święch, Existence results for boundary problems for uniformly elliptic and parabolic fully nonlinear equations, Electron. J. Differential Equations 1999 (1999), 1–22.

[8] L. C. Evans, Partial Differential Equations, GSM 19, Amer. Math. Soc., 1998.

[9] L. C. Evans & R. F. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, CRC Press, 1992.

[10] W. H. Fleming & H. M. Soner, Controlled Markov Processes and Viscosity Solutions, Applications of Mathematics 25, Springer, 1992.

[11] P.-K. Fok, Some maximum principles and continuity estimates for fully nonlinear elliptic equations of second order, Dissertation, University of California at Santa Barbara, 1996.

[12] Y. Giga, Surface Evolution Equations – a level set method –, Preprint, 2002.

[13] D. Gilbarg & N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd edition, GMW 224, Springer, 1983.

[14] Q. Han & F. H. Lin, Elliptic Partial Differential Equations, Courant Institute Lecture Notes 1, Amer. Math. Soc., 1997.

[15] H. Ishii, Viscosity Solutions, (in Japanese).

[16] R. Jensen, Uniqueness of Lipschitz extensions: Minimizing the sup norm of the gradient, Arch. Rational Mech. Anal. 123 (1993), 51–74.
Notation Index

$\mathcal{A}$, 46
$B$, 52
$C^{k,\sigma}$, 79
$C^\sigma$, 78
$C^\infty_0$, 80
$\Delta$, 52
ess. inf, 80
ess. sup, 80
$G^*$, $G_*$, 62
$\Gamma$, 52
$\Gamma[u, O]$, 83
$J^{2,\pm}$, 17, 22
$\bar{J}^{2,\pm}$, 18, 22
$\liminf_{k\to\infty}{}_* F_k$, 59
$\limsup_{k\to\infty}{}^* F_k$, 59
$\triangle$, 2
$L^p$, 79
LSC, 15
$\mathcal{M}$, 27
$\mathbf{n}$, 66
$o$, 17
$\mathcal{P}^\pm$, 25
supp, 5
USC, 15
$u^*$, $u_*$, 40
$W^{2,p}$, 79
$W^{2,p}_{loc}$, 81
Index

Aleksandrov A. D., 95
Area formula, 95
Bellman equation, 8
Burgers' equation, 4
Caffarelli L. A., 80
  —Crandall–Kocan–Święch, 83
comparison principle
  — for boundary value problems, 63
  classical —, 24, 25
  counterexample of —, 68
  — for Dirichlet problems, 23, 66
  — for first-order PDEs, 27, 29
  — for Neumann problems, 73
  — for second-order PDEs, 34, 38
  — for state constraint problems, 70
  — in unbounded domains, 76
continuity
  — of viscosity solutions, 41
convex, 6
cost functional, 46, 52
  optimal —, 46, 53
  — for state constraints, 69
Crandall M. G.
  —Kocan–Lions–Święch, 92
cube decomposition lemma, 110
De Giorgi E., 80
divergence form, 5
dyadic cube, 110
dynamic programming principle, 46, 53
eikonal equation, 9
elliptic
  degenerate —, 12
  uniformly —, 25
  — constants, 24
entropy condition, 5
Escauriaza L., 83
Evans L. C., 105
  —Souganidis, 56
existence, 3
  — of $L^p$ strong solutions, 84
  — of $L^p$ viscosity solutions, 92
  — of viscosity solutions, 42
  — of Isaacs equations, 56
  — of state constraint problems, 69
  — of Bellman equations, 48
Fok P.-K., 113
fully nonlinear, 7
Hamilton-Jacobi equation, 6
Harnack inequality, 80, 85, 86, 88
  —, final version, 87
  weak —, 81, 85, 109
Heaviside function, 20
Hölder continuity, 79
  — of $L^p$ viscosity solutions, 89, 92
homogeneous degree, 29
Hopf-Lax formula, 6
interior cone condition, 66
Isaacs equation, 8
Ishii H., 32, 40
  —Koike, 71
  —'s lemma, 32, 95
    simplified —, 99
  —Lions, 36
Jensen R., 8, 32
  —'s lemma, 95
Kruzkov S. N., 7
Krylov N. V.
  —Safonov, 80
Laplace equation, 2
Lax-Oleinik formula, 5
$L^\infty$-Laplacian, 8
Lions P.-L., 32
$L^p$ regularity, 79
maximum principle
  Aleksandrov-Bakelman-Pucci —, 83, 100
  local —, 81, 85, 113
  — for semicontinuous functions, 15
mean curvature, 8
modulus of continuity, 27
mollifier, 16
Moser J., 80
multi-index, 79
Nadirashvili N., 78
Neumann
  — condition, 72
  — problem, 73
nonanticipating strategy, 52
nondivergence form, 5
Poisson equation, 78
Pucci's operator, 25
quasilinear, 8
Rankine-Hugoniot condition, 5
Schauder regularity, 79
semiconcave, 7
semicontinuous envelope, 40
semiconvex, 95
semijet, 17
solution
  classical —, 3, 11
  $L^p$ strong —, 81
  $L^p$ viscosity —, 81
  viscosity —, 13
  — of semicontinuous PDEs, 59
  semicontinuous —, 40
  — for boundary value problems, 62, 63
  weak —, 3
  — in the distribution sense, 5
Soner H. M., 71
stability, 3
  — of $L^p$ viscosity solutions, 93
  — of viscosity solutions, 59
structure condition, 33
  — in $\mathbb{R}^n$, 75
subsolution, see solution, 11, 13, 40, 59, 62, 63, 81
supersolution, see solution, 11, 13, 40, 59, 62, 63, 81
Trudinger N. S., 80, 113
uniform exterior sphere condition, 72
uniform exterior cone condition, 92
uniqueness, 3
  — for Dirichlet problem, 23
unit outward normal, 66
upper contact set, 83
value function, 46, 53
vanishing viscosity method, 10
Preface for the 2nd edition
The ﬁrst version of this text has been out of print since 2009. It seems that there is no hope to publish the second version from the publisher. Therefore, I have decided to put the ﬁles of the second version in my Home Page. Although I corrected many errors in the ﬁrst version, there must be some mistakes in this version. I would be glad if the reader would kindly inform me errors and typos etc.
Acknowledgment for the 2nd edition
I would like to thank Professors H. Ishii, K. Ishii, T. Nagasawa, M. Ohta, T. Ohtsuka, and graduate students, T. Imai, K. Kohsaka, H. Mitake, S. Nakagawa, T. Nozokido for pointing out numerous errors in the First edition.
26 August 2010
Shigeaki Koike
ii
Preface
This book was originally written in Japanese for undergraduate students in the Department of Mathematics of Saitama University. In fact, the ﬁrst handwritten draft was prepared for a series of lectures on the viscosity solution theory for undergraduate students in Ehime University and Hokkaido University. The aim here is to present a brief introduction to the theory of viscosity solutions for students who have knowledge on Advanced Calculus (i.e. diﬀerentiation and integration on functions of severalvariables) and hopefully, a little on Lebesgue Integration and Functional Analysis. Since this is written for undergraduate students who are not necessarily excellent, I try to give “easy” proofs throughout this book. Thus, if you do not feel any diﬃculty to read User’s guide [6], you should try to read that one. I also try not only to show the viscosity solution theory but also to mention some related “classical” results. Our plan of this book is as follows: We begin with our motivation in section 1. Section 2 introduces the deﬁnition of viscosity solutions and their properties. In section 3, we ﬁrst show “classical” comparison principles and then, extend them to viscosity solutions of ﬁrst and secondorder PDEs, separately. We establish two kinds of existence results via Perron’s method and representation formulas for Bellman and Isaacs equations in section 4. We discuss boundary value problems for viscosity solutions in sections 5. Section 6 is a short introduction to the Lp viscosity solution theory, on which we have an excellent book [4]. In Appendix, which is the hardest part, we give proofs of fundamental propositions. In order to learn more on viscosity solutions, I give a list of “books”: A popular survey paper [6] by CrandallIshiiLions on the theory of viscosity solutions of secondorder, degenerate elliptic PDEs is still a good choice for undergraduate students to learn ﬁrst. 
However, to my experience, it seems a bit hard for average undergraduate students to understand. BardiCapuzzo Dolcetta’s book [1] contains lots of information on viscosity solutions for ﬁrstorder PDEs (HamiltonJacobi equations) while FlemingSoner’s [10] complements topics on secondorder (degenerate) elliptic PDEs with applications in stochastic control problems. Barles’ book [2] is also nice to learn his original techniques and French language simultaneously ! iii
although there are so many books on PDEs. Also as a textbook for undergraduate students.It has been informed that Ishii would write a book [15] in Japanese on viscosity solutions in the near future. Morimoto and Y. I only refer to my favorite ones. I would also like to thank Professors K. I also wish to thank the reviewer for several important suggestions. Since this is a textbook. e As a general PDE theory. I would like to express my thanks to Professors H. and a graduate student.L. Ishii. My ﬁnal thanks go to Professor T. iv . Ozawa for recommending me to publish this manuscript. we do not refer the reader to original papers unless those are not mentioned in the books in our references. which must be more advanced than this. Also. I recommend the reader to consult Lecture Notes [3] (BardiCrandallEvansSonerSouganidis) not only for various applications but also for a “friendly” introduction by Crandall. For an important application via the viscosity solution theory. Lions in early 80s. we refer to Giga’s [12] on curvature ﬂow equations. Tonegawa for giving me the opportunity to have a series of lectures in their universities. Ishii for having given me enormous supply of ideas since 1980. HanLin’s short lecture notes [14] is a good choice. for their suggestions on the ﬁrst draft. Nakagawa. Acknowledgment First of all. GilbargTrudinger’s [13] and Evans’ [8]. who ﬁrst introduced the notion of viscosity solutions with P. T. K. Nagasawa. He kindly suggested me to change the original Japanese title (“A secret club on viscosity solutions”). I wish to express my gratitude to Professor H. If the reader is interested in section 6. I recommend him/her to attack CaﬀarelliCabr´’s [4].
2 Introduction From classical solutions to weak solutions Typical examples of weak solutions Burgers’ equation HamiltonJacobi equation 2 2.3.2 3.3 5.1 4.1 3.2 3.2 Deﬁnition Vanishing viscosity method Equivalent deﬁnitions Comparison principle Classical comparison principle Degenerate elliptic PDEs Uniformly elliptic PDEs 3.2 Existence results Perron’s method Representation formulas Bellman equation Isaacs equation 4.3 Comparison principle for ﬁrstorder PDEs Extension to secondorder PDEs Degenerate elliptic PDEs Remarks on the structure condition Uniformly elliptic PDEs 4 4.1 3.1 2.3 3.4 Stability Generalized boundary value problems Dirichlet problem State constraint problem Neumann problem Growth condition at x → ∞ 1 2 4 4 6 9 9 17 23 24 24 24 26 31 33 35 38 40 40 45 46 51 58 62 65 68 72 75 v .2 5.1 1.2 1.1 4.2.2.1 3.2.2 3 3.3 5 5.Contents 1 1.1 1.1.3.2.1 5.3.1.2 4.
5 78 78 81 84 Linear growth 85 Quadratic growth 87 H¨lder continuity estimates o 89 Existence result 92 Appendix 95 Proof of Ishii’s lemma 95 Proof of the ABP maximum principle 100 Proof of existence results for Pucci equations 105 Proof of the weak Harnack inequlity 108 Proof of the local maximum principle 113 References 118 Notation Index 120 Index 121 vi .3 6.2 6.4 7.3 7.1 7.6 6.4 6.2 7.2 Lp viscosity solutions A brief history Deﬁnition and basic facts Harnack inequality 6.1 6.1 6.3.5 7 7.3.
. . ∂u(x) ∂xn jth . . ξ⊗η = · · · ξi ηj . . ··· . ξn η1 · · · · · · · · · ξn ηn 1 · · · ξ1 ηn . . · the standard inner product in Rn . . For a function u : Ω → R. . we denote by ξ ⊗ η the n × n matrix whose (i. . respectively. . ξ for ∀ξ ∈ Rn . . Throughout this book. . . We use the standard notion of open balls: For r > 0 and x ∈ Rn . ··· ··· ··· . . .1 Introduction Ω ⊂ Rn is open and bounded. . . . . . and Br := Br (0). . . then D 2 u(x) ∈ S n for x ∈ Ω. . . we will work in Ω (except in sections 4. . . . ith jth . . . j)entry is ξiηj for 1 ≤ i. ξ ≤ Y ξ. Note that if u ∈ C 2 (Ω). x for x ∈ Rn . S n denotes the set of all realvalued n × n symmetric matrices. ηn ) ∈ Rn . ∂ 2 u(x) ∂xn ∂x1 ··· ··· . . where We denote by ·. Br (x) := {y ∈ Rn  x − y < r}.4).2 and 5. . X ≤Y ⇐⇒ Xξ. . by Du(x) := ∂ 2 u(x) ∂x2 1 D 2 u(x) := Also. . We recall the standard ordering in S n : We will also use the following notion in sections 6 and 7: For ξ =t (ξ1 . ··· ∂u(x) ∂x1 . . . . ∂ 2 u(x) ∂x2 n . . and set x = x. η =t (η1 . . . . . . . j ≤ n. . ∂ 2 u(x) ∂xi ∂xj ··· ∂ 2 u(x) ∂x1 ∂xn ith . . we denote its gradient and Hessian matrix at x ∈ Ω. ξn ). . ξ1 η1 · · · . .
u(x). We suppose (except in several sections) that F : Ω × R × Rn × S n → R is continuous with respect to all variables. under certain boundary condition.2) when Ω has some special shapes such as cubes.1).2). we prefer to have the minus sign in front of △. In many textbooks (particularly those for engineers). how do we investigate PDEs (1. etc. all polynomials of degree one are solutions of (1. it seems impossible to ﬁnd formulas for solutions u with elementary functions. we have to deal with general PDEs (1. for instance.We are concerned with general secondorder partial diﬀerential equations (PDEs for short): F (x.2). D 2u(x)) = 0 in Ω. trigonometric ones. we present the Laplace equation: (1. we deﬁne △u := trace(D 2 u). Of course.2) Here. we learn how to solve (1. what is the right question in the study of PDEs ? In the literature of the PDE theory. Here. Du(x). Thus. in general domains. which is neither a ball nor a cube. balls. If we give up having formulas for solutions of (1. (1.1) 1.2) in such special domains is not applicable because. the study on (1. in order to cover problems arising in physics. the most basic questions are as follows: 2 . the halfspace or the whole space Rn . Unfortunately. since we do not require any boundary condition yet.1) ? In other words. “solve” means that we ﬁnd an explicit formula of u using elementary functions such as polynomials. engineering and ﬁnance. However. solutions of equation (1.1) in general domains.1 From classical solutions to weak solutions −△u = 0 in Ω. As the ﬁrst example of PDEs. Moreover. In the literature of the viscosity solution theory. we will have to study more general and complicated PDEs than (1.2) represent the density of a gas in a bottle.
let us remember the reason why we study the PDE. To explain the signiﬁcance of the uniqueness of solutions. we do not know which is approximated by the numerical solution. the candidate of a classical solution is called a weak solution. for all x ∈ Ω. movements. it might be powerful to use numerical computations.1) simultaneously. then 3 . we could not predict what will happen from the numerical experiments even though the uniqueness of solutions holds true. moves. Thus. Particularly.1). does the solution change a little ? The importance of the existence of solutions is trivial since. numerical analysis only shows us an “approximate” shapes. let us come back to the most essential question: What is the “solution” of a PDE ? For example. people working in applications want to know how the solution looks like. For this purpose. Such a function u will be called a classical solution of (1. if the weak solution has the ﬁrst and second derivatives. behaves etc. otherwise. we have decided to choose the following strategy: (A) Find a candidate of the classical solution. etc. if the stability of solutions fails. Du(x) and D 2 u(x). engineerings or economics etc.(1) Existence: (2) Uniqueness: (3) Stability: Does there exist a solution ? Is it the only solution ? If the PDE changes a little. Also.1) is satisﬁed at each x ∈ Ω when we plug them in the left hand side of (1. Usually.1) if there exist the ﬁrst and second derivatives. unfortunately. Instead of ﬁnding a classical solution directly. However.1). However. and (1. Now. it is diﬃcult to seek for a classical solution because we have to verify that it is suﬃciently diﬀerentiable and that it satisﬁes the equality (1. we discuss PDEs or their solutions to understand some speciﬁc phenomena in nature. the study on the PDE could be useless. it is natural to call a function u : Ω → R a solution of (1. (B) Check the diﬀerentiability of the candidate. In the standard books. if there are more than one solution.
(1. K] × [0. ∞)) as a “test function space”: 1 C0 (R × [0. (B) Regularity of weak solutions. See [8] for instance.it becomes a classical solution. 1. In section 2. In the literature. we will extend the deﬁnition to more general (possibly discontinuous) functions and PDEs. with these terminologies.4) where g is a given function. which is a model PDE in Fluid Mechanics: ∂u 1 ∂(u2 ) + = 0 in R × (0.4) even if g is smooth enough. when we cannot expect classical solutions of a PDE to exist. and in the proceeding sections.3) (1. we may rewrite the above with mathematical terms: (A) Existence of weak solutions.2. showing the diﬀerentiability of solutions is called the study on the regularity of those.3)(1.2 Typical examples of weak solutions In this subsection. In order to look for the appropriate notion of weak solutions. In the next subsection. what is the right candidate of solutions ? We will call a function the candidate of solutions of a PDE if it is a “unique” and “stable” weak solution under a suitable setting. we show a brief history on “weak solutions” to remind what was known before the birth of viscosity solutions. K] . 1. we will deﬁne such a candidate named “viscosity solutions” for a large class of PDEs. ∞)) := φ ∈ C 1 (R × [0. we cannot ﬁnd classical solutions of (1. Thus. However. . we ﬁrst 1 introduce a function space C0 (R × [0. In general. we give two typical examples of PDEs to derive two kinds of weak solutions which are unique and stable.1 Burgers’ equation We consider Burgers’ equation. ∞) ∂t 2 ∂x under the initial condition: u(x. ∞)) 4 there is K > 0 such that supp φ ⊂ [−K. 0) = g(x) for x ∈ R.
t) ≤ t for all (x. t.4). K)). t) ∈ R × [0. ∞).4) in general while we know the famous LaxOleinik formula (see [8] for instance). which is the “expected” solution. Hence. using integration by parts. ∞)) and then. ∞)  φ(x. We often call this a weak solution in the distribution sense. z) ∈ R × (0. From the deﬁnition of weak solutions.Here and later. 5 . we add the following property (called “entropy condition”) which holds for the expected solution given by the LaxOleinik formula: There is C > 0 such that Cz u(x + z. In order to obtain the uniqueness of weak solutions. t) = 0}. t)dtdx + + ∂t 2 ∂x g(x)φ(x. R Since there are no derivatives of u in the above. we cannot show the uniqueness of weak solutions of (1. unfortunately. ∞ R 0 u ∂φ u2 ∂φ (x. ∞)). It is also known that such a weak solution has a certain stability property. we say that it is in nondivergence form. When the PDE is not in divergence form. 0)φ(x.3) may have singularities even though the initial value g belongs to C ∞ by an observation via “characteristic method”. We call u an entropy solution of (1. ∞) × (0. K) ×(0.3)(1. 1 Suppose that u satisﬁes (1. We note that the solution of (1. we derive this notion by an essential use of integration by parts. this equality makes sense if u ∈ ∪K>0 L1 ((−K. we may adapt the following property as the deﬁnition of weak solutions of (1. we can derive the socalled RankineHugoniot condition on the set of singularities. On the other hand.3)(1.3). we denote by supp φ the following set: supp φ := {(x.3) if it is a weak solution satisfying this inequality.3) by φ ∈ C0 (R × [0. Multiplying (1. 0)dx = 0. we have ∞ R 0 u ∂φ u2 ∂φ (x. As you noticed. 0)dx = 0 R 1 for all φ ∈ C0 (R × [0. t)dtdx + + ∂t 2 ∂x u(x. We say that a PDE is in divergence form when we can adapt the notion of weak solutions in the distribution sense. for the deﬁnition. t) − u(x.
“existence. θ ∈ [0.We note that this entropy solution satisﬁes the above mentioned important properties. it is shown that the right hand side of (1. we shall consider general HamiltonJacobi equations. H(θp + (1 − θ)q) ≤ θH(p) + (1 − θ)H(q) for all p. p∈Rn (1. 6 .5) since we cannot use the integration by parts. it must be a right deﬁnition for weak solutions of (1.3)(1.6) When H(p) = p2 . which arise in Optimal Control and Classical Mechanics: ∂u + H(Du) = 0 in (x. we do not need to assume the continuity of H. t) = min tL n y∈R x−y + g(y) . In this example. q − H(p)}. 1.4). q ∈ Rn . 1]. uniqueness and stability”. we have H(p) = p2 . t (1. ∞) ∂t under the same initial condition (1. we suppose that H : Rn → R is convex. Example.4).5) (1. We next introduce the Lagrangian L : Rn → R deﬁned by L(q) = sup { p. i. we often call this H a “Hamiltonian”.2 HamiltonJacobi equations Next. Remark. it is easy to verify that the maximum is attained in the right hand side of the above.7) More precisely. t) ∈ Rn × (0. Thus.7) is diﬀerentiable and satisﬁes (1. In Classical Mechanics. Since a convex function is locally Lipschitz continuous in general. Notice that we cannot adapt the weak solution in the distribution sense for (1.e. As a simple example of H. It is surprising that we have a neat formula for the expected solution (called HopfLax formula) presented by u(x.5) almost everywhere.2.
b ∈ B. “fully nonlinear”) Hamiltonians. Kruzkov showed that the limit function of approximate solutions by the vanishing viscosity method (see the next section) has this property (1. when we treat game problems (one person wants to minimize costs while the other tries to maximize them). t) − 2u(x. Example.5) when it satisﬁes (1. between Kruzkov’s works and the birth of viscosity solutions. However. He named u a “generalized” solution of (1. there had been no big progress in the study of ﬁrstorder PDEs in nondivergence form.5) almost everywhere and (1. To my knowledge.2. t) + u(x − z.5) almost everywhere. t > 0. we could call u a weak solution of (1. We will see an example in the next section. This is called the “semiconcavity” of u.5)(1. in order to say that the “unique weak” solution is given by (1. Let A and B be sets of parameters. For a ∈ A. we meet nonconvex and nonconcave (i. As will be seen. We note that (1. t) ≤ Cz2 (1. (Bellman and Isaacs equations) We ﬁrst give Bellman equations and Isaacs equations.8) is a hypothesis on the onesided bound of second derivatives of functions u. the uniqueness of those fails in general. The convexity (1. In 60s. See section 4. we suppose A and B are (compact) subsets in Rm (for some m ≥ 1). In this book. which arise in (stochastic) optimal control problems and diﬀerential games.8). However. 7 . if we decide to use this notion as a weak solution.6) is a natural hypothesis when we consider only optimal control problems where one person intends to minimize some “costs” (“energy” in terms of Physics). respectively. As was shown for Burgers’ equation. we shall give typical examples of such PDEs. z ∈ R. Remark.e. we have to add one more property for the deﬁnition of weak solutions: There is C > 0 such that u(x + z. since we are concerned with viscosity solutions of PDEs in nondivergence form.Thus.8) for all x. those are extensions of linear PDEs.8) when H is convex.4) when u satisﬁes (1. For instance.7). 
for which the integration by parts argument cannot be used to deﬁne the notion of weak solutions in the distribution sense. x ∈ Ω.
Du(x). we consider functions f (·. Although we will not study quasilinear PDEs in this book. p + c(x. D 2u(x)) − f (x. we give some of those which are in nondivergence form. X) := −trace(A(x. b). a)r. D 2u(x)) − f (x. (1. 8 . we set La (x. Here A(·. . a∈A La. Du = 0 in Ω” via the viscosity solution approach. X) := − p2 trace(X) − Xp. a). a)X) + g(x. p. a. p . For inhomogeneous terms.r ∈ R. (1. we show a relatively “new” one called L∞ Laplacian: F (x. Again. . Du(x). a). a. b) are given functions for (a. a. a) and f (·. g(·. a) and c(·. g(·. a. p. D 2u(x)) − f (x. a. a. a.b (x. b) ∈ A × B. this F does not contain xvariables. . (“Quasilinear” equations) We say that a PDE is quasilinear if the coeﬃcients of D 2 u contains u or Du. r. Notice that this F is independent of xvariables. c(·. We call the following PDEs Bellman equations: sup{La (x. and X = (Xij ) ∈ S n . b)r. p . b)} = 0 for x ∈ Ω a∈A b∈B (1. where he ﬁrst studied the PDE “− D 2 uDu.b (x. X) := −trace(A(x. X) := − Xp. a. b). p + c(x.10′ ) Example. We refer to [12] for applications where this kind of operators appears. b) in Ω for a ∈ A and b ∈ B. u(x). . b). p. p.9) Notice that the supremum over A is taken at each point x ∈ Ω. b)X) + g(x.b (x. p =t(p1 . a). Du(x). Next. a.10) and b∈B a∈A inf sup{La. r. a)} = 0 for x ∈ Ω. We ﬁrst give the PDE of mean curvature type: F (x. u(x). We refer to Jensen’s work [16]. A(·. Taking account of one more parameter set B. we call the following PDEs Isaacs equations: sup inf {La. b)} = 0 for x ∈ Ω. pn ) ∈ Rn . u(x).
2) (showing the nonexistence of classical solutions is a good exercise). We seek C 1 functions satisfying (2.1): u(x) = dist(x.1) However. we expect that the following function (the distance from ∂Ω) would be the unique solution of this problem (see Fig 2. We also give some basic properties of viscosity solutions and equivalent deﬁnitions using “semijets”.2) (2. in order to explain the reason why we need it. we intend to derive a reasonable deﬁnition of weak solutions of (2. many speakers started in their talks by giving the following typical example called the eikonal equation: Du2 = 1 in Ω.1) under the Dirichlet condition: u(x) = 0 for x ∈ ∂Ω.1 1 9 . since there is no classical solution of (2.1)(2.1 Vanishing viscosity method When the notion of viscosity solutions was born. ∂Ω) := inf x − y. y∈∂Ω y 1 y = u(x) x −1 0 Fig 2.2 Deﬁnition In this section.1) via the vanishing viscosity method. In fact. we derive the deﬁnition of viscosity solutions of (1. 2. (2.1).
If we consider the case when n = 1 and Ω = (−1, 1), then the expected solution is given by

    u(x) = 1 − |x| for x ∈ [−1, 1].   (2.3)

Since this function is C^∞ except at x = 0, we could decide to call u a weak solution of (2.1) if it satisfies (2.1) in Ω except at finitely many points. However, even in the above simple case of (2.1), we then know that there are infinitely many such weak solutions of (2.1); for example, −u is a weak solution, and so is

    û(x) = x + 1 for x ∈ [−1, −1/2),  û(x) = −x for x ∈ [−1/2, 1/2),  û(x) = x − 1 for x ∈ [1/2, 1],

etc. (see Fig 2.2).

[Fig 2.2: graph of a zigzag weak solution of (2.1) on [−1, 1]]

Now, in order to look for an appropriate notion of weak solutions, we introduce the so-called vanishing viscosity method: for ε > 0, we consider the following PDE as an approximate equation of (2.1) (when n = 1 and Ω = (−1, 1)):

    −εu″_ε + (u′_ε)² = 1 in (−1, 1),  u_ε(±1) = 0.   (2.4)

The first term, −εu″_ε, in the left hand side of (2.4) is called the vanishing viscosity term (as ε → 0).
By an elementary calculation, we can find a unique smooth function u_ε in the following manner: we first note that if a classical solution of (2.4) exists, then it is unique, and we may suppose that u′_ε(0) = 0 by symmetry. Setting v_ε = u′_ε, we first solve the ODE:

    −εv′_ε + v_ε² = 1 in (−1, 1),  v_ε(0) = 0.   (2.5)

It is easy to see that the solution of (2.5) is given by

    v_ε(x) = −tanh(x/ε).

Hence, we can find u_ε by

    u_ε(x) = −ε log( cosh(x/ε) / cosh(1/ε) ) = −ε log( (e^{x/ε} + e^{−x/ε}) / (e^{1/ε} + e^{−1/ε}) ).

It is a good exercise to show that u_ε converges to the function in (2.3) uniformly in [−1, 1].

Remark. If we replace −εu″ by +εu″ in (2.4), then the limit function would be different in general. In fact, since û_ε(x) := −u_ε(x) is the solution of

    εû″ + (û′)² = 1 in (−1, 1),  û(±1) = 0,

we have û(x) := lim_{ε→0} û_ε(x) = −u(x). Thus, to define weak solutions, we adapt the properties which hold for the (uniform) limit of approximate solutions of PDEs with the "minus" vanishing viscosity term.

Let us come back to general second-order PDEs:

    F(x, u, Du, D²u) = 0 in Ω.   (2.6)

We shall use the following definition of classical solutions:

Definition. We call u : Ω → R a classical subsolution (resp., supersolution, solution) of (2.6) if u ∈ C²(Ω) and

    F(x, u(x), Du(x), D²u(x)) ≤ 0 (resp., ≥ 0, = 0) in Ω.
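The uniform convergence of u_ε to 1 − |x| claimed above can be checked numerically from the closed-form solution; the elementary estimate u_ε(x) − (1 − |x|) = ε[log(1 + e^{−2/ε}) − log(1 + e^{−2|x|/ε})] shows the maximal deviation is about ε log 2, attained at the kink x = 0 (a sketch; the tolerance comes from that estimate, not from the text):

```python
import math

# u_eps(x) = -eps * log( cosh(x/eps) / cosh(1/eps) ) solves
# -eps u'' + (u')^2 = 1 on (-1,1) with u(+-1) = 0.
def u_eps(x, eps):
    return -eps * math.log(math.cosh(x / eps) / math.cosh(1.0 / eps))

eps = 0.01
xs = [k / 1000.0 for k in range(-1000, 1001)]
err = max(abs(u_eps(x, eps) - (1.0 - abs(x))) for x in xs)

# maximal deviation from the limit 1 - |x| is ~ eps*log(2), at the kink x = 0
assert err <= eps * math.log(2.0) + 1e-12
assert err >= 0.25 * eps
```

Repeating this with eps = 0.1, 0.01, 0.001 shows the error shrinking linearly in ε, which is exactly uniform convergence at rate O(ε).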
Remark. If F does not depend on X-variables (i.e., the PDE is of first order), we only suppose u ∈ C¹(Ω) in the above definition in place of u ∈ C²(Ω).

Throughout this text, we also suppose the following monotonicity condition with respect to X-variables:

Definition. We say that F is (degenerate) elliptic if

    F(x, r, p, X) ≤ F(x, r, p, Y) for all x ∈ Ω, r ∈ R, p ∈ Rⁿ, X, Y ∈ Sⁿ provided X ≥ Y.   (2.7)

We notice that if F does not depend on X-variables (i.e., for first-order PDEs), then F is automatically elliptic. We also note that the left hand side F(x, p, X) = −trace(X) of the Laplace equation (1.2) is elliptic.

We will derive properties which hold true for the (uniform) limit (as ε → +0) of solutions of

    −εΔu + F(x, u, Du, D²u) = 0 in Ω (ε > 0).   (2.8)

Note that since the left hand side −ε trace(X) + F(x, r, p, X) of (2.8) is "uniformly" elliptic (see section 3 for the definition) provided that F is elliptic, it is easier to solve (2.8) than (2.6) in practice. See [13] for instance.

Proposition 2.1. Assume that F is elliptic. Let u_ε ∈ C²(Ω) ∩ C(Ω̄) be a classical subsolution (resp., supersolution) of (2.8). If u_ε converges to u ∈ C(Ω̄) (as ε → 0) uniformly in any compact set K ⊂ Ω, then, for any φ ∈ C²(Ω),

    F(x, u(x), Dφ(x), D²φ(x)) ≤ 0 (resp., ≥ 0)

provided that u − φ attains its maximum (resp., minimum) at x ∈ Ω.

Proof. We only give a proof of the assertion for subsolutions since the other one can be shown in a symmetric way. When F does not depend on X-variables, we only need to suppose φ and u_ε to be in C¹(Ω) as before.
Suppose that u − φ attains its maximum at x̂ ∈ Ω for φ ∈ C²(Ω). Setting

    φ_δ(y) := φ(y) + δ|y − x̂|⁴ for small δ > 0,

we see that

    (u − φ_δ)(x̂) > (u − φ_δ)(y) for y ∈ Ω \ {x̂}.

(This tiny technique of replacing a maximum point by a "strict" one will appear again in Proposition 2.2.) Let x_ε ∈ Ω be a point such that (u_ε − φ_δ)(x_ε) = max_Ω (u_ε − φ_δ); note that x_ε also depends on δ > 0. Since u_ε converges to u uniformly in B̄_r(x̂) and x̂ is the unique maximum point of u − φ_δ, we note that lim_{ε→0} x_ε = x̂; in particular, x_ε ∈ Ω for small ε > 0. (Notice that if we argue with φ instead of φ_δ, the limit of x_ε might differ from x̂.)

Since D(u_ε − φ_δ)(x_ε) = 0 and D²(u_ε − φ_δ)(x_ε) ≤ 0, we have, at x_ε ∈ Ω,

    −εΔu_ε(x_ε) + F(x_ε, u_ε(x_ε), Du_ε(x_ε), D²u_ε(x_ε)) ≤ 0.

Thus, in view of ellipticity,

    −εΔφ_δ(x_ε) + F(x_ε, u_ε(x_ε), Dφ_δ(x_ε), D²φ_δ(x_ε)) ≤ 0.

Sending ε → 0 in the above, we have

    F(x̂, u(x̂), Dφ_δ(x̂), D²φ_δ(x̂)) ≤ 0.

Since Dφ_δ(x̂) = Dφ(x̂) and D²φ_δ(x̂) = D²φ(x̂), we conclude the proof. □

Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (2.6) if, for any φ ∈ C²(Ω),

    F(x, u(x), Dφ(x), D²φ(x)) ≤ 0 (resp., ≥ 0)

provided that u − φ attains its maximum (resp., minimum) at x ∈ Ω. We call u : Ω → R a viscosity solution of (2.6) if it is both a viscosity sub- and supersolution of (2.6).

Remark. Here, we have given the definition for "general" functions, but we will often suppose that they are (semi-)continuous in theorems etc. In fact, in our propositions in section 2, we will suppose that viscosity sub- and supersolutions are continuous.
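The definition can be exercised on the eikonal equation (2.1): at the kink of u(x) = 1 − |x|, no classical derivative exists, yet every C² test function φ for which u − φ attains its maximum at 0 has |φ′(0)| ≤ 1, so the subsolution inequality |φ′(0)|² − 1 ≤ 0 holds. The following illustrative numerical check (quadratic test functions, grid argmax; not a proof) makes this concrete:

```python
import numpy as np

# u(x) = 1 - |x|: the expected viscosity solution of |u'|^2 - 1 = 0, u(+-1) = 0.
u = lambda x: 1.0 - np.abs(x)
xs = np.linspace(-1.0, 1.0, 2001)

# Test functions phi(x) = p*x + x^2. Whenever u - phi attains its maximum
# at the kink x = 0, the subsolution inequality p^2 - 1 <= 0 must hold.
checked = 0
for p in np.linspace(-2.0, 2.0, 81):
    phi = p * xs + xs ** 2
    xhat = xs[np.argmax(u(xs) - phi)]
    if abs(xhat) < 1e-9:          # maximum of u - phi sits at the kink
        checked += 1
        assert p ** 2 - 1.0 <= 1e-9
assert checked > 0                # slopes p in [-1, 1] do qualify
```

Slopes |p| > 1 push the maximum of u − φ away from the kink, so they impose no condition there — which is exactly why the one-sided test-function definition tolerates the corner of 1 − |x|.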
However, all the propositions in section 2 can be proved by replacing continuity with upper and lower semicontinuity for viscosity subsolutions and supersolutions, respectively. We will introduce general viscosity solutions in section 3.3.

Notation. In order to memorize the correct inequality, we will often say that u is a viscosity subsolution (resp., supersolution) of

    F(x, u, Du, D²u) ≤ 0 (resp., ≥ 0) in Ω

if it is a viscosity subsolution (resp., supersolution) of (2.6).

Proposition 2.2. For u : Ω → R, the following (1) and (2) are equivalent:
(1) u is a viscosity subsolution (resp., supersolution) of (2.6);
(2) if 0 = (u − φ)(x̂) > (u − φ)(x) (resp., < (u − φ)(x)) for φ ∈ C²(Ω), x̂ ∈ Ω and x ∈ Ω \ {x̂}, then

    F(x̂, φ(x̂), Dφ(x̂), D²φ(x̂)) ≤ 0 (resp., ≥ 0).

Proof. We give a proof only for the subsolution case, since the other can be shown similarly. The implication (1) ⇒ (2) is trivial. For the opposite implication in the subsolution case, suppose that u − φ attains a maximum at x̂ ∈ Ω for φ ∈ C²(Ω). Set

    φ_δ(x) = φ(x) + δ|x − x̂|⁴ + (u − φ)(x̂).

[Fig 2.3: graphs of u, φ, φ(·) + (u − φ)(x̂) and φ_δ near the maximum point x̂]
See Fig 2.3. Since 0 = (u − φ_δ)(x̂) > (u − φ_δ)(x) for x ∈ Ω \ {x̂}, (2) gives

    F(x̂, φ_δ(x̂), Dφ_δ(x̂), D²φ_δ(x̂)) ≤ 0,

which implies the assertion because Dφ_δ(x̂) = Dφ(x̂) and D²φ_δ(x̂) = D²φ(x̂). □

By the next proposition, we recognize that viscosity solutions are the right candidates for weak solutions when F is elliptic.

Proposition 2.3. Assume that F is elliptic. A function u : Ω → R is a classical subsolution (resp., supersolution) of (2.6) if and only if it is a viscosity subsolution (resp., supersolution) of (2.6) and u ∈ C²(Ω).

Proof. Suppose that u is a viscosity subsolution of (2.6) and u ∈ C²(Ω). Taking φ ≡ u, we see that u − φ attains its maximum at every point x ∈ Ω. Thus, the definition of viscosity subsolutions yields

    F(x, u(x), Du(x), D²u(x)) ≤ 0 for x ∈ Ω.

On the contrary, suppose that u ∈ C²(Ω) is a classical subsolution of (2.6). Fix any φ ∈ C²(Ω) and assume that u − φ takes its maximum at x ∈ Ω. Then, we have D(u − φ)(x) = 0 and D²(u − φ)(x) ≤ 0. Hence, in view of ellipticity, we have

    0 ≥ F(x, u(x), Du(x), D²u(x)) ≥ F(x, u(x), Dφ(x), D²φ(x)). □

We introduce the sets of upper and lower semicontinuous functions: for K ⊂ Rⁿ,

    USC(K) := { u : K → R | u is upper semicontinuous in K },
    LSC(K) := { u : K → R | u is lower semicontinuous in K }.

Throughout this book, we use the following maximum principle for semicontinuous functions:
An upper semicontinuous function on a compact set attains its maximum.

We first recall the standard mollifier ρ_ε ∈ C^∞(Rⁿ) with supp ρ_ε ⊂ B̄_ε: we set

    ρ(x) := C₀⁻¹ e^{1/(|x|²−1)} for |x| < 1,  ρ(x) := 0 for |x| ≥ 1,  where C₀ := ∫_{B₁} e^{1/(|x|²−1)} dx,

and then define ρ_ε by ρ_ε(x) := ε⁻ⁿ ρ(x/ε). Notice that ∫_{Rⁿ} ρ_ε dx = 1 for every ε > 0.

We give the following proposition, which will often be used without mentioning it.

Proposition 2.4. Assume that u ∈ USC(Ω) (resp., u ∈ LSC(Ω)) is a viscosity subsolution (resp., supersolution) of (2.6) in Ω. Assume also that inf_K u > −∞ (resp., sup_K u < ∞) for any compact set K ⊂ Ω. Then, u is a viscosity subsolution (resp., supersolution) of (2.6) in Ω′, for any open set Ω′ ⊂ Ω.

Proof. We only show the assertion for subsolutions since the other can be shown similarly. Since the proof is a bit technical, the reader may skip it over at first reading.

By Proposition 2.2, we suppose that for some x̂ ∈ Ω′ and φ ∈ C²(Ω′),

    0 = (u − φ)(x̂) > (u − φ)(y) for all y ∈ Ω′ \ {x̂}.

Choose r > 0 such that B_{3r}(x̂) ⊂ Ω′, and set s := −inf_{B̄_{2r}(x̂)} (u − φ) > 0. Next, define

    φ̂(x) := φ(x) for x ∈ B_r(x̂),  φ̂(x) := 2s + sup_{B̄_{2r}(x̂)} u for x ∈ Ω \ B_r(x̂),

and, setting

    φ_ε(x) := φ̂ ∗ ρ_ε(x) = ∫_{Rⁿ} φ̂(y)ρ_ε(x − y) dy ∈ C^∞(Rⁿ),

we easily verify that for 0 < ε < r,

    max_{Ω\B_{2r}(x̂)} (u − φ_ε) ≤ −2s.

On the other hand, since

    (u − φ_ε)(x̂) = ∫_{B_ε(x̂)} ρ_ε(x̂ − y)( φ(x̂) − φ(y) ) dy,
there is ε₀ > 0 such that (u − φ_ε)(x̂) ≥ −s for ε ∈ (0, ε₀). Hence, we can choose x_ε ∈ B_{2r}(x̂) such that max_Ω (u − φ_ε) = (u − φ_ε)(x_ε). By the definition of viscosity subsolutions in Ω, we have

    F(x_ε, u(x_ε), Dφ_ε(x_ε), D²φ_ε(x_ε)) ≤ 0.

Taking a subsequence of {x_ε}_{ε>0}, denoted by x_ε again (if necessary), we find z ∈ B̄_{2r}(x̂) such that lim_{ε→0} x_ε = z. Sending ε → 0, we have max_{B̄_{2r}(x̂)} (u − φ) = (u − φ)(z), which gives z = x̂. Note that lim_{ε→0} u(x_ε) = u(x̂). Therefore, sending ε → 0 in the above inequality, we conclude the proof. □

2.2 Equivalent definitions

We present equivalent definitions of viscosity solutions, since we will need them in the proof of uniqueness for second-order PDEs. However, the reader may postpone this subsection until section 3.3.

First, we introduce "semi"-jets of functions u : Ω → R at x ∈ Ω by

    J^{2,+}u(x) := { (p, X) ∈ Rⁿ × Sⁿ | u(y) ≤ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as y ∈ Ω → x }

and

    J^{2,−}u(x) := { (p, X) ∈ Rⁿ × Sⁿ | u(y) ≥ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as y ∈ Ω → x }.

Note that J^{2,−}u(x) = −J^{2,+}(−u)(x).

Remark. We do not impose any continuity on u in these definitions. We recall the notion of "small order o" in the above: for k ≥ 1,

    f(x) ≤ o(|x|ᵏ) (resp., f(x) ≥ o(|x|ᵏ)) as x → 0

if and only if there is ω ∈ C([0, ∞), [0, ∞)) such that ω(0) = 0 and

    sup_{x∈B_r\{0}} f(x)/|x|ᵏ ≤ ω(r)  ( resp., inf_{x∈B_r\{0}} f(x)/|x|ᵏ ≥ −ω(r) ).
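Returning to the mollifier ρ_ε used in the proof of Proposition 2.4: its key normalization ∫ρ_ε dx = 1 for every ε > 0 is easy to confirm numerically in one dimension (a sketch; C₀ is computed by quadrature rather than in closed form, since no elementary antiderivative exists):

```python
import numpy as np

# 1-d standard mollifier: rho(x) = C0^{-1} exp(1/(x^2 - 1)) for |x| < 1,
# zero otherwise; rho_eps(x) = eps^{-1} rho(x/eps) has support [-eps, eps].
def bump(x):
    out = np.zeros_like(x)
    m = np.abs(x) < 1.0
    out[m] = np.exp(1.0 / (x[m] ** 2 - 1.0))
    return out

xs = np.linspace(-1.0, 1.0, 200001)
dx = xs[1] - xs[0]
C0 = dx * bump(xs).sum()          # normalizing constant (~0.444 in 1-d)

def rho_eps(x, eps):
    return bump(x / eps) / (C0 * eps)

for eps in [0.5, 0.1, 0.01]:
    grid = np.linspace(-eps, eps, 200001)
    h = grid[1] - grid[0]
    # the scaling eps^{-1} rho(x/eps) keeps the integral equal to 1
    assert abs(h * rho_eps(grid, eps).sum() - 1.0) < 1e-6
```

Because the bump vanishes to infinite order at ±1, even this simple Riemann sum converges extremely fast, which is why the check is so tight.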
In the next proposition, we give some basic properties of semijets: (1) is a relation between semijets and classical derivatives, and (2) means that semijets are "defined" on dense sets of Ω.

Proposition 2.5. For u : Ω → R, we have the following:
(1) If J^{2,+}u(x) ∩ J^{2,−}u(x) ≠ ∅, then Du(x) and D²u(x) exist and

    J^{2,+}u(x) ∩ J^{2,−}u(x) = {(Du(x), D²u(x))}.

(2) If u ∈ USC(Ω) (resp., u ∈ LSC(Ω)), then

    Ω = { x ∈ Ω | ∃x_k ∈ Ω such that J^{2,+}u(x_k) ≠ ∅ and lim_{k→∞} x_k = x }
    ( resp., Ω = { x ∈ Ω | ∃x_k ∈ Ω such that J^{2,−}u(x_k) ≠ ∅ and lim_{k→∞} x_k = x } ).

Proof. The proof of (1) is a direct consequence of the definitions. We give a proof of (2) only for J^{2,+}. Fix x ∈ Ω and choose r > 0 so that B̄_r(x) ⊂ Ω. For ε > 0, we can choose x_ε ∈ B̄_r(x) such that

    u(x_ε) − ε⁻¹|x_ε − x|² = max_{y∈B̄_r(x)} ( u(y) − ε⁻¹|y − x|² ).

Thus, we have

    u(y) ≤ u(x_ε) + ε⁻¹( |y − x|² − |x_ε − x|² ) for all y ∈ B̄_r(x).

It is easy to check that (2(x_ε − x)/ε, 2ε⁻¹I) ∈ J^{2,+}u(x_ε). Since |x_ε − x|² ≤ ε( max_{B̄_r(x)} u − u(x) ), we see that x_ε converges to x as ε → 0; hence, we may suppose that x_ε ∈ B_r(x) for small ε > 0. □

We next introduce a sort of closure of semijets:

    J̄^{2,±}u(x) := { (p, X) ∈ Rⁿ × Sⁿ | ∃x_k ∈ Ω and ∃(p_k, X_k) ∈ J^{2,±}u(x_k) such that (x_k, u(x_k), p_k, X_k) → (x, u(x), p, X) as k → ∞ }.

Proposition 2.6. For u : Ω → R, the following (1), (2), (3) are equivalent:
(1) u is a viscosity subsolution (resp., supersolution) of (2.6);
(2) for x ∈ Ω and (p, X) ∈ J^{2,+}u(x) (resp., J^{2,−}u(x)), we have F(x, u(x), p, X) ≤ 0 (resp., ≥ 0);
(3) for x ∈ Ω and (p, X) ∈ J̄^{2,+}u(x) (resp., J̄^{2,−}u(x)), we have F(x, u(x), p, X) ≤ 0 (resp., ≥ 0).
Proof. Again, we give a proof of the assertion only for subsolutions.

Step 1: (2) ⟹ (3). For x ∈ Ω and (p, X) ∈ J̄^{2,+}u(x), we can find (p_k, X_k) ∈ J^{2,+}u(x_k) with x_k ∈ Ω such that lim_{k→∞}(x_k, u(x_k), p_k, X_k) = (x, u(x), p, X). By (2), we have F(x_k, u(x_k), p_k, X_k) ≤ 0, which implies (3) by sending k → ∞.

Step 2: (3) ⟹ (1). For φ ∈ C²(Ω), suppose that (u − φ)(x) = max_Ω (u − φ). Then, the Taylor expansion of φ at x gives

    u(y) ≤ u(x) + ⟨Dφ(x), y − x⟩ + (1/2)⟨D²φ(x)(y − x), y − x⟩ + o(|x − y|²) as y → x.

Thus, we have (Dφ(x), D²φ(x)) ∈ J^{2,+}u(x) ⊂ J̄^{2,+}u(x) (x ∈ Ω), and (3) yields F(x, u(x), Dφ(x), D²φ(x)) ≤ 0.

Step 3: (1) ⟹ (2). For x ∈ Ω and (p, X) ∈ J^{2,+}u(x), by the definition of o, we can find a nondecreasing, continuous ω : [0, ∞) → [0, ∞) such that ω(0) = 0 and

    u(y) ≤ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + |y − x|² ω(|y − x|).   (2.9)

In fact, we first find ω₀ ∈ C([0, ∞), [0, ∞)) such that ω₀(0) = 0 and

    ω₀(r) ≥ sup_{y∈B_r(x)\{x}} ( u(y) − u(x) − ⟨p, y − x⟩ − (1/2)⟨X(y − x), y − x⟩ ) / |y − x|²,

and then verify that ω(r) := sup_{0≤t≤r} ω₀(t) satisfies (2.9). Now, we define φ by

    φ(y) := u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + ψ(|y − x|),  where ψ(t) := ∫_t^{√3 t} ∫_s^{2s} ω(r) dr ds.

Since ω is nondecreasing, it is easy to check that

    ψ(t) ≥ ω(t) ∫_t^{√3 t} s ds = t² ω(t),

so that, by (2.9), u − φ attains its maximum at x, while (Dφ(x), D²φ(x)) = (p, X). Therefore, (1) yields F(x, u(x), p, X) ≤ 0, and we conclude the proof. □
Remark. In view of the proof of Step 3, we verify that, for x ∈ Ω,

    J^{2,+}u(x) = { (Dφ(x), D²φ(x)) ∈ Rⁿ × Sⁿ | ∃φ ∈ C²(Ω) such that u − φ attains its maximum at x },
    J^{2,−}u(x) = { (Dφ(x), D²φ(x)) ∈ Rⁿ × Sⁿ | ∃φ ∈ C²(Ω) such that u − φ attains its minimum at x }.

Example. We shall examine J^{2,±}u(0) in this and the next example; from the graphs we intuitively know J^{2,±}u(0), and we omit the detailed computations. Consider the function u ∈ C([−1, 1]) given by u(x) := 1 − |x|. From the graph below, we see that

    J^{2,+}u(0) = ({−1} × [0, ∞)) ∪ ((−1, 1) × R) ∪ ({1} × [0, ∞)),  and  J^{2,−}u(0) = ∅.

Example. We next examine J^{2,±} for discontinuous functions. For instance, consider the Heaviside function:

    u(x) := 1 for x ≥ 0,  u(x) := 0 for x < 0.

From the graph below, we may conclude that

    J^{2,+}u(0) = ({0} × [0, ∞)) ∪ ((0, ∞) × R),  and  J^{2,−}u(0) = ∅.

[Fig 2.4.1, 2.4.2: the graph of u(x) = 1 − |x| with test lines y = ax + 1 (a < 1) and a test function y = φ(x) touching from above at the kink]
[Fig 2.5: the graph of the Heaviside function with the line y = ax + 1 (a > 0) and a test function y = φ(x) touching from above at 0]

In order to deal with "boundary value problems" in section 5, we prepare some notation: for a set K ⊂ Rⁿ, which is not necessarily open, we define
semijets of u : K → R at x ∈ K by

    J_K^{2,+}u(x) := { (p, X) ∈ Rⁿ × Sⁿ | u(y) ≤ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as y ∈ K → x },

    J_K^{2,−}u(x) := { (p, X) ∈ Rⁿ × Sⁿ | u(y) ≥ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as y ∈ K → x },

and

    J̄_K^{2,±}u(x) := { (p, X) ∈ Rⁿ × Sⁿ | ∃x_k ∈ K and ∃(p_k, X_k) ∈ J_K^{2,±}u(x_k) such that (x_k, u(x_k), p_k, X_k) → (x, u(x), p, X) as k → ∞ }.

For x ∈ Ω, we shall simply write J^{2,±}u(x) (resp., J̄^{2,±}u(x)) for J_Ω^{2,±}u(x) (resp., J̄_Ω^{2,±}u(x)). It is obvious to verify that

    x ∈ Ω ⟹ J_{Ω̄}^{2,±}u(x) = J_Ω^{2,±}u(x) and J̄_{Ω̄}^{2,±}u(x) = J̄_Ω^{2,±}u(x).

Example. Consider u(x) ≡ 0 in K := [0, 1]. It is easy to observe that

    J_K^{2,+}u(0) = ({0} × [0, ∞)) ∪ ((0, ∞) × R),  J_K^{2,−}u(0) = ({0} × (−∞, 0]) ∪ ((−∞, 0) × R),

and

    J^{2,±}u(x) = J_K^{2,±}u(x),  with J^{2,+}u(x) = {0} × [0, ∞),  provided x ∈ (0, 1).

We finally give some properties of J_Ω^{2,±} and J̄_Ω^{2,±}. Since the proof is easy, we omit it.

Proposition 2.7. For u : Ω → R, ψ ∈ C²(Ω) and x ∈ Ω, we have

    J_Ω^{2,±}(u + ψ)(x) = (Dψ(x), D²ψ(x)) + J_Ω^{2,±}u(x)

and

    J̄_Ω^{2,±}(u + ψ)(x) = (Dψ(x), D²ψ(x)) + J̄_Ω^{2,±}u(x).
3 Comparison principle

In this section, we discuss the comparison principle, which implies the uniqueness of viscosity solutions when their values on ∂Ω coincide (i.e. under the Dirichlet boundary condition). In the study of the viscosity solution theory, the comparison principle has been the main issue, because the uniqueness of viscosity solutions is harder to prove than their existence and stability.

In this section, the comparison principle roughly means:

    u: viscosity subsolution, v: viscosity supersolution, u ≤ v on ∂Ω  ⟹  u ≤ v in Ω.

We remark that the comparison principle implies the uniqueness of (continuous) viscosity solutions under the Dirichlet boundary condition:

    "Uniqueness for the Dirichlet problem": u, v viscosity solutions, u = v on ∂Ω  ⟹  u = v in Ω.

Proof of "the comparison principle implies the uniqueness". Since u (resp., v) and v (resp., u) are a viscosity subsolution and supersolution, respectively, and u = v on ∂Ω, the comparison principle yields u ≤ v (resp., v ≤ u) in Ω. □

Modifying our proofs of comparison theorems below, we obtain a slightly stronger assertion than the above one:

    u: viscosity subsolution, v: viscosity supersolution  ⟹  max_Ω̄ (u − v) = max_∂Ω (u − v).

First, we recall some "classical" comparison principles, and then show how to modify the proofs to a modern "viscosity" version. In this section, we mainly deal with the following PDE instead of (2.6):

    νu + F(x, Du, D²u) = 0 in Ω,   (3.1)

where we suppose that

    ν ≥ 0   (3.2)

and

    F : Ω × Rⁿ × Sⁿ → R is continuous.   (3.3)
3.1 Classical comparison principle

In this subsection, we show that if one of the viscosity sub- and supersolutions is a classical one, then the comparison principle holds true. We call this the "classical" comparison principle.

3.1.1 Degenerate elliptic PDEs

We first consider the case when F is (degenerate) elliptic and ν > 0.

Proposition 3.1. Assume that ν > 0 and (3.3) hold. Assume also that F is elliptic. Let u ∈ USC(Ω̄) (resp., v ∈ LSC(Ω̄)) be a viscosity subsolution (resp., supersolution) of (3.1), and v ∈ LSC(Ω̄) ∩ C²(Ω) (resp., u ∈ USC(Ω̄) ∩ C²(Ω)) a classical supersolution (resp., subsolution) of (3.1). If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. We only prove the assertion when u is a viscosity subsolution of (3.1) and v a classical supersolution, since the other one can be shown similarly.

Set max_Ω̄ (u − v) =: θ and suppose that θ > 0; then, we will get a contradiction. Choose x̂ ∈ Ω̄ such that (u − v)(x̂) = θ. We note that x̂ ∈ Ω because u ≤ v on ∂Ω. Since v ∈ C²(Ω), the definition of u (with φ := v) and that of v respectively yield

    νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) ≤ 0 ≤ νv(x̂) + F(x̂, Dv(x̂), D²v(x̂)).

Hence, by these inequalities, we have νθ = ν(u − v)(x̂) ≤ 0, which contradicts θ > 0. □

3.1.2 Uniformly elliptic PDEs

Next, we present the comparison principle when ν = 0 but F is uniformly elliptic in the following sense. Notice that if ν > 0 and F is uniformly elliptic, then Proposition 3.1 yields Proposition 3.2 below, because our uniform ellipticity implies (degenerate) ellipticity.

Throughout this book, we freeze the uniform ellipticity constants:

    0 < λ ≤ Λ.
With these constants, we introduce Pucci's operators: for X ∈ Sⁿ,

    P⁺(X) := max{ −trace(AX) | λI ≤ A ≤ ΛI for A ∈ Sⁿ },
    P⁻(X) := min{ −trace(AX) | λI ≤ A ≤ ΛI for A ∈ Sⁿ }.

Definition. We say that F : Ω × Rⁿ × Sⁿ → R is uniformly elliptic (with the uniform ellipticity constants 0 < λ ≤ Λ) if

    P⁻(X − Y) ≤ F(x, p, X) − F(x, p, Y) ≤ P⁺(X − Y) for x ∈ Ω, p ∈ Rⁿ, X, Y ∈ Sⁿ.

We give some properties of P±. We omit the proof since it is elementary.

Proposition 3.3. For X, Y ∈ Sⁿ, we have the following:
(1) P⁺(X) = −P⁻(−X);
(2) P±(θX) = θP±(X) for θ ≥ 0;
(3) P⁺ is convex, and P⁻ is concave;
(4) P⁻(X) + P⁻(Y) ≤ P⁻(X + Y) ≤ P⁻(X) + P⁺(Y) ≤ P⁺(X + Y) ≤ P⁺(X) + P⁺(Y).

We also suppose the following continuity of F with respect to p ∈ Rⁿ: there is µ > 0 such that

    |F(x, p, X) − F(x, p′, X)| ≤ µ|p − p′| for x ∈ Ω, p, p′ ∈ Rⁿ, and X ∈ Sⁿ.   (3.4)

Proposition 3.2. Assume that (3.3) and (3.4) hold. Assume also that F is uniformly elliptic. Let u ∈ USC(Ω̄) (resp., v ∈ LSC(Ω̄)) be a viscosity subsolution (resp., supersolution) of (3.1), and v ∈ LSC(Ω̄) ∩ C²(Ω) (resp., u ∈ USC(Ω̄) ∩ C²(Ω)) a classical supersolution (resp., subsolution) of (3.1). If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. We give a proof only when u is a viscosity subsolution and v a classical supersolution of (3.1).
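Before continuing the proof, note that Proposition 3.3 can be sanity-checked numerically via the eigenvalue formula for Pucci's operators: diagonalizing X with eigenvalues μ_k, the extremal matrix A assigns weight Λ or λ to each eigenvalue, giving P⁻(X) = −Λ·Σ_{μ_k>0} μ_k − λ·Σ_{μ_k<0} μ_k, and dually for P⁺. A sketch with sample constants λ = 1/2, Λ = 2:

```python
import numpy as np

lam, Lam = 0.5, 2.0   # sample uniform ellipticity constants 0 < lam <= Lam

def pucci_minus(X):
    mu = np.linalg.eigvalsh(X)
    return -Lam * mu[mu > 0].sum() - lam * mu[mu < 0].sum()

def pucci_plus(X):
    mu = np.linalg.eigvalsh(X)
    return -lam * mu[mu > 0].sum() - Lam * mu[mu < 0].sum()

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((3, 3)); X = (A + A.T) / 2
    B = rng.standard_normal((3, 3)); Y = (B + B.T) / 2
    # (1) duality, and (4) the sub/superadditivity chain of Proposition 3.3
    assert np.isclose(pucci_plus(X), -pucci_minus(-X))
    assert pucci_minus(X) + pucci_minus(Y) <= pucci_minus(X + Y) + 1e-9
    assert pucci_minus(X + Y) <= pucci_minus(X) + pucci_plus(Y) + 1e-9
    assert pucci_plus(X + Y) <= pucci_plus(X) + pucci_plus(Y) + 1e-9
```

Property (3) is visible here too: P⁻ is a minimum of linear maps (hence concave), P⁺ a maximum (hence convex), which is exactly what drives the chain in (4).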
Suppose that max_Ω̄ (u − v) =: θ > 0; then, we will get a contradiction again.

For ε > 0, we set φ_ε(x) := εe^{δx₁}, where δ := max{ (µ + 1)/λ, ν + 1 } > 0. We next choose ε > 0 so small that

    ε max_{x∈Ω̄} e^{δx₁} ≤ θ/2.

Let x̂ ∈ Ω̄ be a point such that (u − v + φ_ε)(x̂) = max_Ω̄ (u − v + φ_ε) ≥ θ. By the choice of ε > 0 and since u ≤ v on ∂Ω, we see that x̂ ∈ Ω.

From the definition of viscosity subsolutions (taking v − φ_ε ∈ C²(Ω) as the test function), we have

    νu(x̂) + F(x̂, D(v − φ_ε)(x̂), D²(v − φ_ε)(x̂)) ≤ 0.

By the uniform ellipticity and (3.4), this yields

    νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) + P⁻(−D²φ_ε(x̂)) − µ|Dφ_ε(x̂)| ≤ 0.   (3.5)

Noting that |Dφ_ε(x̂)| ≤ δεe^{δx̂₁} and P⁻(−D²φ_ε(x̂)) ≥ λδ²εe^{δx̂₁}, by (3.5) and δ ≥ (µ + 1)/λ, we have

    νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) + δε(λδ − µ)e^{δx̂₁} ≤ 0, and hence νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) + δεe^{δx̂₁} ≤ 0.

Since v is a classical supersolution of (3.1), we have νv(x̂) + F(x̂, Dv(x̂), D²v(x̂)) ≥ 0; thus,

    ν(θ − φ_ε(x̂)) ≤ ν(u − v)(x̂) ≤ −δεe^{δx̂₁},

so that νθ ≤ (ν − δ)εe^{δx̂₁} ≤ −εe^{δx̂₁} < 0, which gives a contradiction because δ ≥ ν + 1. □

3.2 Comparison principle for first-order PDEs

In this subsection, we establish the comparison principle when F in (3.1) does not depend on D²u, i.e. for first-order PDEs; we will study the comparison principle for second-order ones in the next subsection. Here, we will get a contradiction without assuming that one of the viscosity sub- and supersolutions is a classical one. In the viscosity solution theory, Theorem 3.4 below was the first surprising result.

We shall consider the following PDE:

    νu + H(x, Du) = 0 in Ω.   (3.6)

We shall suppose that

    ν > 0   (3.7)

and that there is a continuous function ω_H : [0, ∞) → [0, ∞) such that ω_H(0) = 0 and

    |H(x, p) − H(y, p)| ≤ ω_H( |x − y|(1 + |p|) ) for x, y ∈ Ω and p ∈ Rⁿ.   (3.8)

In what follows, we will call ω_H in (3.8) a modulus of continuity, and we use the notation

    M := { ω : [0, ∞) → [0, ∞) | ω(·) is continuous, ω(0) = 0 }.

Theorem 3.4. Assume that (3.7) and (3.8) hold. Let u ∈ USC(Ω̄) and v ∈ LSC(Ω̄) be a viscosity sub- and supersolution of (3.6), respectively. If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. Suppose max_Ω̄ (u − v) =: θ > 0 as usual; then, we will get a contradiction. Notice that since both u and v may not be differentiable, we cannot use the same argument as in Proposition 3.1.

Now, we present the most important idea in the theory of viscosity solutions to overcome this difficulty. Setting

    Φ_ε(x, y) := u(x) − v(y) − (2ε)⁻¹|x − y|² for ε > 0,

we choose (x_ε, y_ε) ∈ Ω̄ × Ω̄ such that Φ_ε(x_ε, y_ε) = max_{x,y∈Ω̄} Φ_ε(x, y). Noting that Φ_ε(x_ε, y_ε) ≥ max_{x∈Ω̄} Φ_ε(x, x) = θ, we have

    |x_ε − y_ε|²/(2ε) ≤ u(x_ε) − v(y_ε) − θ.   (3.9)

Setting M₀ := max_Ω̄ u − min_Ω̄ v, by (3.9), we have |x_ε − y_ε|² ≤ 2εM₀ → 0 (as ε → 0). Since Ω̄ is compact, we can find x̂, ŷ ∈ Ω̄ and ε_k > 0 such that lim_{k→∞} ε_k = 0 and lim_{k→∞}(x_{ε_k}, y_{ε_k}) = (x̂, ŷ); thus, we have x̂ = ŷ. We shall simply write ε for ε_k (i.e. "ε → 0" means that ε_k → 0 as k → ∞).
Since (u − v)(x̂) = θ > 0 and u ≤ v on ∂Ω, we have x̂ ∈ Ω; thus, we may suppose that (x_ε, y_ε) ∈ Ω × Ω for small ε > 0. By (3.9), ignoring the left hand side, we have

    θ ≤ lim inf_{ε→0} ( u(x_ε) − v(y_ε) ).   (3.10)

Moreover, (3.9) again implies

    0 ≤ lim inf_{ε→0} |x_ε − y_ε|²/(2ε) ≤ lim sup_{ε→0} |x_ε − y_ε|²/(2ε) ≤ lim sup_{ε→0} ( u(x_ε) − v(y_ε) ) − θ ≤ (u − v)(x̂) − θ ≤ 0.

Hence, we have

    lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.11)

Now, taking φ(x) := v(y_ε) + (2ε)⁻¹|x − y_ε|², we see that u − φ attains its maximum at x_ε ∈ Ω. Thus, from the definition of viscosity subsolutions, we have

    νu(x_ε) + H( x_ε, (x_ε − y_ε)/ε ) ≤ 0.

On the other hand, taking ψ(y) := u(x_ε) − (2ε)⁻¹|y − x_ε|², we see that v − ψ attains its minimum at y_ε ∈ Ω. Thus, from the definition of viscosity supersolutions, we have

    νv(y_ε) + H( y_ε, (x_ε − y_ε)/ε ) ≥ 0.

The above two inequalities yield

    ν( u(x_ε) − v(y_ε) ) ≤ ω_H( |x_ε − y_ε| + |x_ε − y_ε|²/ε ).

Sending ε → 0 in the above together with (3.10) and (3.11), we have νθ ≤ 0, which is a contradiction. □

Remark. In the above proof, we could also show that lim_{ε→0} u(x_ε) = u(x̂) and lim_{ε→0} v(y_ε) = v(x̂), although we do not need this fact.
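The mechanism of the doubled maximization above can be watched numerically. With the illustrative (not from the text) choice of a Heaviside-type u and v(y) = y, the maximizer of Φ_ε splits into x_ε = 0 and y_ε = −ε, so the pair comes together while the penalty |x_ε − y_ε|²/(2ε) = ε/2 vanishes — precisely the behavior (3.9)-(3.11) encode:

```python
import numpy as np

# Doubling of variables: maximize
#   Phi_eps(x, y) = u(x) - v(y) - |x - y|^2 / (2 eps)
# for a Heaviside-type usc u and v(y) = y (illustrative choices only).
u = lambda x: np.where(x >= 0.0, 1.0, 0.0)
v = lambda y: y
xs = np.arange(-400, 401) / 400.0        # grid containing 0 and -eps exactly
X, Y = np.meshgrid(xs, xs, indexing="ij")

for eps in [1.0, 0.1, 0.01]:
    Phi = u(X) - v(Y) - (X - Y) ** 2 / (2.0 * eps)
    i, j = np.unravel_index(np.argmax(Phi), Phi.shape)
    gap = abs(xs[i] - xs[j])
    # maximizing pair: x_eps = 0, y_eps = -eps, so the gap shrinks with eps
    assert abs(gap - eps) < 1e-9
    # and the penalty term |x_eps - y_eps|^2 / (2 eps) = eps / 2 vanishes
    assert gap ** 2 / (2.0 * eps) <= eps
```

Note that x_ε ≠ y_ε here even in the limit procedure: the penalty lets the subsolution and supersolution be tested at *different* points, which is what makes the proof work for nondifferentiable u and v.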
12) becomes (2.14) hold. we shall consider the following PDE: H(x. If u ≤ v on ∂Ω. To simplify our hypotheses.e. since all the inequalities become the equalities.e.1) because we have to suppose ν > 0 in the above proof. equation (3. µp) = µα H(x.which implies that v(ˆ) ≤ lim inf v(yε ) ≤ lim inf u(xε ) − θ ≤ lim sup u(xε ) − θ ≤ u(ˆ) − θ. there is α > 0 such that H(x. p ∈ Rn and µ > 0.12). We shall modify the above proof so that the comparison principle for viscosity solutions of (2. x x ε→0 ε→0 ε→0 Hence. there is σ > 0 such that min f (x) =: σ > 0.13) To recover the lack of assumption ν > 0. (3.1) holds.13) and (3.1). p) for x ∈ Ω. then u ≤ v in Ω.4 to the eikonal equation (2. Du) − f (x) = 0 in Ω.5.12) Here. x x ε→0 ε→0 ε→0 and v(ˆ) ≤ lim inf v(yε ) ≤ lim sup v(xε ) ≤ lim sup u(xε ) − θ ≤ u(ˆ) − θ. p) = p2 (i.8). 29 . Let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub. we have u(ˆ) = lim inf u(xε ) = lim sup u(xε ) and v(ˆ) = lim inf v(yε ) = lim sup v(yε ). respectively. The second comparison principle for ﬁrstorder PDEs is as follows: Theorem 3. (3.14) Example. we suppose that H has homogeneous degree α > 0 with respect to the second variable. x x ε→0 ε→0 ε→0 ε→0 We remark here that we cannot apply Theorem 3. σ = 1). we suppose the positivity of f ∈ C(Ω). (3. α = 2) and f (x) ≡ 1 ( i. Assume that (3. x∈Ω (3.and supersolution of (3. When H(x.
Proof. Suppose that max_Ω̄ (u − v) =: θ > 0 as usual; then, we will get a contradiction.

If we choose µ ∈ (0, 1) so close to 1 that

    (1 − µ) max_Ω̄ u ≤ θ/2,

then we easily verify that

    max_Ω̄ (µu − v) =: τ ≥ θ/2.

We note that for any z ∈ Ω̄ such that (µu − v)(z) = τ, we may suppose z ∈ Ω. In fact, otherwise (i.e., z ∈ ∂Ω), if we further suppose that µ < 1 is close to 1 so that −(1 − µ) min_∂Ω v ≤ θ/4, then the assumption (u ≤ v on ∂Ω) implies

    θ/2 ≤ τ = µu(z) − v(z) ≤ (µ − 1)v(z) ≤ θ/4,

which is a contradiction.

At this stage, we shall use the idea in the proof of Theorem 3.4: consider the mapping Φ_ε : Ω̄ × Ω̄ → R defined by

    Φ_ε(x, y) := µu(x) − v(y) − |x − y|²/(2ε).

Choose (x_ε, y_ε) ∈ Ω̄ × Ω̄ such that max_{x,y∈Ω̄} Φ_ε(x, y) = Φ_ε(x_ε, y_ε); note that Φ_ε(x_ε, y_ε) ≥ τ ≥ θ/2. For simplicity, we shall omit writing the dependence on µ for τ and (x_ε, y_ε) below.

As before, we easily see that

    |x_ε − y_ε|²/(2ε) ≤ µu(x_ε) − v(y_ε) − τ ≤ M_µ := µ max_Ω̄ u − min_Ω̄ v.   (3.15)

Thus, taking a subsequence if necessary, we may suppose that lim_{ε→0}(x_ε, y_ε) = (x̂, ŷ) for some (x̂, ŷ) ∈ Ω̄ × Ω̄; by (3.15), we have x̂ = ŷ. Moreover, sending ε → 0, (3.15) implies that µu(x̂) − v(x̂) = τ, which yields x̂ ∈ Ω because of the choice of µ; hence, we see that (x_ε, y_ε) ∈ Ω × Ω for small ε > 0. Furthermore, (3.15) again implies

    lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.16)
Now, taking φ(x) := ( v(y_ε) + (2ε)⁻¹|x − y_ε|² )/µ, we see that u − φ attains its maximum at x_ε ∈ Ω. Thus, we have

    H( x_ε, (x_ε − y_ε)/(µε) ) ≤ f(x_ε),

and hence, by the homogeneity (3.13),

    H( x_ε, (x_ε − y_ε)/ε ) ≤ µ^α f(x_ε).   (3.17)

On the other hand, taking ψ(y) := µu(x_ε) − (2ε)⁻¹|y − x_ε|², we see that v − ψ attains its minimum at y_ε ∈ Ω. Thus, we have

    H( y_ε, (x_ε − y_ε)/ε ) ≥ f(y_ε).   (3.18)

Combining (3.18) with (3.17), by (3.8), we have

    f(y_ε) − µ^α f(x_ε) ≤ H( y_ε, (x_ε − y_ε)/ε ) − H( x_ε, (x_ε − y_ε)/ε ) ≤ ω_H( |x_ε − y_ε| ( 1 + |x_ε − y_ε|/ε ) ).

Sending ε → 0 in the above with (3.16), we have (1 − µ^α)f(x̂) ≤ 0, which contradicts (3.14). □

3.3 Extension to second-order PDEs

In this subsection, assuming a key lemma, we will present the comparison principle for fully nonlinear, second-order, (degenerate) elliptic PDEs (3.1).

As one can guess, we first remark that the argument of the proof of the comparison principle for first-order PDEs cannot be applied, at least immediately. Let us have a look at the difficulty. Consider the following simple PDE:

    νu − Δu = 0 in Ω,   (3.19)

where ν > 0. In fact, if the argument does not work for this "easiest" PDE, then it must be hopeless for general PDEs.
Let u ∈ USC(Ω̄) and v ∈ LSC(Ω̄) be a viscosity sub- and supersolution of (3.19), respectively, such that u ≤ v on ∂Ω. Setting Φ_ε(x, y) := u(x) − v(y) − (2ε)⁻¹|x − y|² as usual, we choose (x_ε, y_ε) ∈ Ω̄ × Ω̄ so that max_{x,y∈Ω̄} Φ_ε(x, y) = Φ_ε(x_ε, y_ε) > 0 as before. We may suppose that (x_ε, y_ε) ∈ Ω × Ω converges to (x̂, x̂) (as ε → 0) for some x̂ ∈ Ω such that (u − v)(x̂) > 0.

From the definitions of u and v, we have

    νu(x_ε) − n/ε ≤ 0 ≤ νv(y_ε) + n/ε.

Hence, we only have

    ν( u(x_ε) − v(y_ε) ) ≤ 2n/ε,

which does not give any contradiction as ε → 0.

How can we go beyond this difficulty? In 1983, P.-L. Lions first obtained the uniqueness of viscosity solutions for elliptic PDEs arising in stochastic optimal control problems (i.e., Bellman equations, where F is convex in (Du, D²u)). However, his argument relies heavily on the stochastic representation of viscosity solutions as "value functions", and it seems hard to extend the result to Isaacs equations.

The breakthrough was done by Jensen in 1988, in the case when the coefficients of the second derivatives of the PDE are constant. His argument relies purely on "real analysis" and can work even for fully nonlinear PDEs. Afterward, Ishii in 1989 extended Jensen's result to enable us to apply it to elliptic PDEs with variable coefficients.

We present here the so-called Ishii's lemma, which will be proved in the Appendix.

Lemma 3.6. (Ishii's lemma) Let u and w be in USC(Ω). For φ ∈ C²(Ω × Ω), let (x̂, ŷ) ∈ Ω × Ω be a point such that

    max_{x,y∈Ω} ( u(x) + w(y) − φ(x, y) ) = u(x̂) + w(ŷ) − φ(x̂, ŷ).

Then, for each µ > 1, there are X = X(µ), Y = Y(µ) ∈ Sⁿ such that

    (D_x φ(x̂, ŷ), X) ∈ J̄_Ω^{2,+}u(x̂),  (D_y φ(x̂, ŷ), Y) ∈ J̄_Ω^{2,+}w(ŷ),
and

    −(µ + ‖A‖) ( I 0 ; 0 I ) ≤ ( X 0 ; 0 Y ) ≤ A + (1/µ)A²,  where A := D²φ(x̂, ŷ) ∈ S²ⁿ.

Remark. We note that if we suppose u, w ∈ C²(Ω) in the hypothesis, then we easily have X = D²u(x̂), Y = D²w(ŷ), and

    ( X 0 ; 0 Y ) ≤ A.

Thus, the last matrix inequality of the lemma means that, when u and w are only continuous, in general we get some error term (1/µ)A², where µ > 1 will be taken large.

Here and later, for B ∈ Sᵐ we use the fact that

    ‖B‖ = max{ |λ_k| : λ_k is an eigenvalue of B } = sup{ |⟨Bξ, ξ⟩| : ξ ∈ Rᵐ, |ξ| = 1 }.   (3.20)

We also note that for φ(x, y) := |x − y|²/(2ε), we have

    A := D²φ(x̂, ŷ) = (1/ε) ( I −I ; −I I )  and  ‖A‖ = 2/ε.

For the last identity, since ⟨A ᵗ(x, y), ᵗ(x, y)⟩ = ε⁻¹|x − y|², the triangle inequality yields

    ‖A‖ = ε⁻¹ sup{ |x − y|² : |x|² + |y|² = 1 } ≤ 2/ε,

while the other way to show the identity is to take y = −x (i.e., |x|² = 1/2) in the supremum of the definition of ‖A‖, which gives ‖A‖ ≥ 2/ε.

3.3.1 Degenerate elliptic PDEs

Now, we give our hypotheses on F, which is called the structure condition.

Structure condition. There is ω_F ∈ M such that if X, Y ∈ Sⁿ and µ > 1 satisfy

    −3µ ( I 0 ; 0 I ) ≤ ( X 0 ; 0 −Y ) ≤ 3µ ( I −I ; −I I ),

then

    F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ ω_F( |x − y|(1 + µ|x − y|) ) for x, y ∈ Ω.   (3.21)
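The two identities used in the Remark above — ‖A‖ = 2/ε and, since A² = (2/ε)A, the collapse A + (1/µ)A² = 3A for µ = 1/ε — can be confirmed numerically for the block matrix A = ε⁻¹(I, −I; −I, I) (a sketch with sample n and ε):

```python
import numpy as np

n, eps = 3, 0.25
I = np.eye(n)
A = np.block([[I, -I], [-I, I]]) / eps   # A = D^2 phi for phi = |x-y|^2/(2 eps)

evals = np.linalg.eigvalsh(A)            # eigenvalues: 0 and 2/eps, each n-fold
assert np.isclose(np.abs(evals).max(), 2.0 / eps)   # ||A|| = 2/eps
assert np.allclose(A @ A, (2.0 / eps) * A)          # A^2 = (2/eps) A
assert np.allclose(A + eps * (A @ A), 3.0 * A)      # A + (1/mu) A^2 = 3A, mu = 1/eps
```

The last identity is exactly where the bound 3µ(I, −I; −I, I) in the structure condition and in Theorem 3.7 below comes from: with µ = 1/ε, the error term of Ishii's lemma only triples the penalty Hessian.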
Remark. In section 3.3.2, we will explain why assumption (3.21) is reasonable. In particular, we will see that if F satisfies (3.21), then it is elliptic.

Theorem 3.7. Assume that ν > 0 and (3.21) hold. Let u ∈ USC(Ω̄) and v ∈ LSC(Ω̄) be a viscosity sub- and supersolution of (3.1), respectively. If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. Suppose that max_Ω̄ (u − v) =: θ > 0 as usual; then, we will get a contradiction.

As in the proof of Theorem 3.4, consider the mapping Φ_ε : Ω̄ × Ω̄ → R defined by

    Φ_ε(x, y) := u(x) − v(y) − (2ε)⁻¹|x − y|²,

and let (x_ε, y_ε) ∈ Ω̄ × Ω̄ be a point such that max_{x,y∈Ω̄} Φ_ε(x, y) = Φ_ε(x_ε, y_ε) ≥ θ. Again, since we have (u − v)(x̂) = θ for the limit point, we may suppose that lim_{ε→0}(x_ε, y_ε) = (x̂, x̂) for some x̂ ∈ Ω (i.e., x_ε, y_ε ∈ Ω for small ε > 0). Moreover, as before,

    lim_{ε→0} |x_ε − y_ε|²/ε = 0   (3.22)

and

    θ ≤ lim inf_{ε→0} ( u(x_ε) − v(y_ε) ).   (3.23)

In view of Lemma 3.6 (taking w := −v, φ(x, y) := |x − y|²/(2ε) and µ := 1/ε), we find X, Y ∈ Sⁿ such that

    ( (x_ε − y_ε)/ε, X ) ∈ J̄^{2,+}u(x_ε),  ( (x_ε − y_ε)/ε, Y ) ∈ J̄^{2,−}v(y_ε),

and

    −(3/ε) ( I 0 ; 0 I ) ≤ ( X 0 ; 0 −Y ) ≤ (3/ε) ( I −I ; −I I ).

Thus, the equivalent definition in Proposition 2.6 implies that

    νu(x_ε) + F( x_ε, (x_ε − y_ε)/ε, X ) ≤ 0 ≤ νv(y_ε) + F( y_ε, (x_ε − y_ε)/ε, Y ).
Hence, by virtue of our assumption (3.21) with $\mu := 1/\varepsilon$, we have
$$\nu(u(x_\varepsilon)-v(y_\varepsilon)) \le \omega_F\left(|x_\varepsilon-y_\varepsilon| + \frac{|x_\varepsilon-y_\varepsilon|^2}{\varepsilon}\right). \tag{3.24}$$
Taking the limit infimum as $\varepsilon\to 0$, together with (3.22) and (3.23), we have $\nu\theta \le 0$, which is a contradiction. $\Box$

3.3.2 Remarks on the structure condition

In order to ensure that assumption (3.21) is reasonable, we first present some examples. For this purpose, we consider the Isaacs equation as in section 1.2:
$$F(x,p,X) := \sup_{a\in A}\inf_{b\in B}\{L^{a,b}(x,p,X)\},$$
where
$$L^{a,b}(x,p,X) := -\mathrm{trace}(A(x,a,b)X) + \langle g(x,a,b),p\rangle - f(x,a,b)$$
and
$$A_{ij}(x,a,b) := \sum_{k=1}^m\sigma_{ik}(x,a,b)\sigma_{jk}(x,a,b) \quad (i,j = 1,\ldots,n).$$
We suppose that the coefficients in the above and $f(\cdot,a,b)$ satisfy the hypotheses below:
(1) $\exists M_1 > 0$ and $\exists\sigma_{ij}(\cdot,a,b) : \overline\Omega\to\mathbf R$ such that $|\sigma_{jk}(x,a,b)-\sigma_{jk}(y,a,b)| \le M_1|x-y|$ for $x,y\in\Omega$, $a\in A$, $b\in B$, $j = 1,\ldots,n$, $k = 1,\ldots,m$;
(2) $\exists M_2 > 0$ such that $|g_i(x,a,b)-g_i(y,a,b)| \le M_2|x-y|$ for $x,y\in\Omega$, $a\in A$, $b\in B$, $i = 1,\ldots,n$;
(3) $\exists\omega_f\in\mathcal M$ such that $|f(x,a,b)-f(y,a,b)| \le \omega_f(|x-y|)$ for $x,y\in\Omega$, $a\in A$, $b\in B$.
If we suppose that $A$ and $B$ are compact sets in $\mathbf R^m$ (for some $m\ge 1$), then $F$ satisfies (3.21). We shall show (3.21) only when
$$F(x,p,X) := -\sum_{i,j=1}^n\sum_{k=1}^m\sigma_{ik}(x,a,b)\sigma_{jk}(x,a,b)X_{ij}$$
for a fixed $(a,b)\in A\times B$, because we can modify the proof below to general $F$; we shall omit writing the indices $a$ and $b$.

To verify assumption (3.21), we choose $X, Y\in S^n$ such that
$$\begin{pmatrix} X & 0\\ 0 & -Y\end{pmatrix} \le 3\mu\begin{pmatrix} I & -I\\ -I & I\end{pmatrix}.$$
Setting $\xi_k := {}^t(\sigma_{1k}(x),\ldots,\sigma_{nk}(x))$ and $\eta_k := {}^t(\sigma_{1k}(y),\ldots,\sigma_{nk}(y))$ for any fixed $k\in\{1,\ldots,m\}$, we have
$$\langle X\xi_k,\xi_k\rangle - \langle Y\eta_k,\eta_k\rangle \le 3\mu\left\langle\begin{pmatrix} I & -I\\ -I & I\end{pmatrix}\binom{\xi_k}{\eta_k},\binom{\xi_k}{\eta_k}\right\rangle = 3\mu|\xi_k-\eta_k|^2 \le 3\mu nM_1^2|x-y|^2.$$
Therefore, taking the summation over $k\in\{1,\ldots,m\}$, we have
$$F(y,\mu(x-y),Y) - F(x,\mu(x-y),X) = \sum_{i,j=1}^n(-A_{ij}(y)Y_{ij} + A_{ij}(x)X_{ij}) \le 3\mu mnM_1^2|x-y|^2,$$
which implies that (3.21) holds for $F$. $\Box$

We next give other reasons why (3.21) is a suitable assumption.

Proposition 3.8. (1) (3.21) implies ellipticity. (2) Assume that $F$ is uniformly elliptic, and that there is $\bar\omega\in\mathcal M$ with $\sup_{r\ge 0}\bar\omega(r)/(r+1) < \infty$ such that
$$F(x,p,X) - F(y,p,X) \le \bar\omega(|x-y|(\|X\|+|p|+1)) \quad\text{for } x,y\in\Omega,\ p\in\mathbf R^n,\ X\in S^n. \tag{3.25}$$
Then, (3.21) holds.

For a proof of (1), we refer to Remark 3.4 in [6]. The reader can skip the proof of (2) if he/she feels that the above example is enough reason to adopt (3.21). For the reader's convenience, we give a proof of (2), which is essentially used in a paper by Ishii–Lions (1990).

Proof of (2). Let $X, Y\in S^n$ satisfy the matrix inequality in (3.21). Note that $X \le Y$.
Multiplying the last matrix inequality in (3.21) from both sides by
$$\begin{pmatrix} I & I\\ I & -I\end{pmatrix},$$
we have
$$\begin{pmatrix} X-Y & X+Y\\ X+Y & X-Y\end{pmatrix} \le 12\mu\begin{pmatrix} 0 & 0\\ 0 & I\end{pmatrix}.$$
Hence, testing this with ${}^t({}^t\xi, s\,{}^t\eta)$ for $s\in\mathbf R$ and $\xi,\eta\in\mathbf R^n$ with $|\xi| = |\eta| = 1$, we see that
$$0 \le (12\mu - \langle(X-Y)\eta,\eta\rangle)s^2 - 2\langle(X+Y)\xi,\eta\rangle s - \langle(X-Y)\xi,\xi\rangle.$$
Since this quadratic polynomial in $s$ is nonnegative, we have
$$\langle(X+Y)\xi,\eta\rangle^2 \le |\langle(X-Y)\xi,\xi\rangle|\,(12\mu + |\langle(X-Y)\eta,\eta\rangle|),$$
which implies $\|X+Y\| \le \|X-Y\|^{1/2}(12\mu+\|X-Y\|)^{1/2}$. Thus, we have
$$\|X\| \le \frac12(\|X-Y\| + \|X+Y\|) \le \|X-Y\|^{1/2}(6\mu+\|X-Y\|)^{1/2}. \tag{3.26}$$
For the last inequality, we recall the Remark after Lemma 3.6: since $\|X\| \le 3\mu$ and $\|Y\| \le 3\mu$, we also have $\|X-Y\| \le 6\mu$.

Since we may suppose that $\bar\omega$ is concave, for any fixed $\varepsilon > 0$ there is $M_\varepsilon > 0$ such that $\bar\omega(r) \le \varepsilon + M_\varepsilon r$, and $\bar\omega(r) = \inf_{\varepsilon>0}(\varepsilon + M_\varepsilon r)$ for $r\ge 0$. Moreover, since $X \le Y$ (i.e. the eigenvalues of $X-Y$ are nonpositive), the uniform ellipticity yields
$$F(y,p,X) - F(y,p,Y) \ge P^-(X-Y) \ge \lambda\|X-Y\|.$$
Hence, by (3.25) and (3.26), setting $t := \|X-Y\|\in[0,6\mu]$, we see that
$$F(y,p,Y) - F(x,p,X) \le \varepsilon + M_\varepsilon|x-y|(|p|+1) + \sup_{0\le t\le 6\mu}\left\{M_\varepsilon|x-y|\,t^{1/2}(6\mu+t)^{1/2} - \lambda t\right\}.$$
Noting that
$$M_\varepsilon|x-y|\,t^{1/2}(6\mu+t)^{1/2} - \lambda t \le \frac3\lambda M_\varepsilon^2\mu|x-y|^2 \quad\text{for } t\in[0,6\mu],$$
we have, for $p = \mu(x-y)$,
$$F(y,\mu(x-y),Y) - F(x,\mu(x-y),X) \le \varepsilon + M_\varepsilon|x-y|(\mu|x-y|+1) + 3\lambda^{-1}M_\varepsilon^2\mu|x-y|^2,$$
which implies the assertion by taking the infimum over $\varepsilon > 0$. $\Box$
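The elementary supremum bound used at the end of the proof can be checked numerically. The snippet below is my own illustration with arbitrary sample parameters; it verifies $\sup_{0\le t\le 6\mu}\{Mr\sqrt{t(6\mu+t)} - \lambda t\} \le 3M^2\mu r^2/\lambda$ on a fine grid:

```python
import numpy as np

# Check sup_{0 <= t <= 6 mu} [ M*r*sqrt(t*(6 mu + t)) - lam*t ] <= 3*M^2*mu*r^2/lam
# for a few sample parameter values (M, r, lam, mu are arbitrary positive numbers).
for M, r, lam, mu in [(1.0, 0.3, 0.5, 2.0), (2.0, 0.1, 1.0, 5.0), (0.7, 1.0, 0.2, 1.5)]:
    t = np.linspace(0.0, 6.0 * mu, 200001)
    sup_val = np.max(M * r * np.sqrt(t * (6.0 * mu + t)) - lam * t)
    bound = 3.0 * M**2 * mu * r**2 / lam
    assert sup_val <= bound + 1e-12
print("bound verified")
```

The bound follows analytically by estimating $\sqrt{6\mu+t} \le \sqrt{12\mu}$ for $t\le 6\mu$ and maximizing the resulting concave function of $\sqrt t$.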
3.3.3 Uniformly elliptic PDEs
We shall give a comparison result corresponding to Proposition 3.3 when $F$ is uniformly elliptic and $\nu \ge 0$.

Theorem 3.9. Assume that (3.2), (3.3), (3.4) and (3.21) hold. Assume also that $F$ is uniformly elliptic. Let $u\in USC(\overline\Omega)$ and $v\in LSC(\overline\Omega)$ be a viscosity sub- and supersolution of (3.1), respectively. If $u\le v$ on $\partial\Omega$, then $u\le v$ in $\overline\Omega$.

Remark. As in Proposition 3.3, we may suppose $\nu = 0$.

Proof. Suppose that $\max_{\overline\Omega}(u-v) =: \theta > 0$. Setting $\sigma := (\mu+1)/\lambda$, we choose $\delta > 0$ so that
$$\delta\max_{x\in\overline\Omega}e^{\sigma x_1} \le \frac\theta2.$$
We then set $\tau := \max_{x\in\overline\Omega}(u(x)-v(x)+\delta e^{\sigma x_1}) \ge \theta > 0$. Putting $\phi(x,y) := (2\varepsilon)^{-1}|x-y|^2 - \delta e^{\sigma x_1}$, we let $(x_\varepsilon,y_\varepsilon)\in\overline\Omega\times\overline\Omega$ be the maximum point of $u(x)-v(y)-\phi(x,y)$ over $\overline\Omega\times\overline\Omega$. By the compactness of $\overline\Omega$, we may suppose that $(x_\varepsilon,y_\varepsilon)\to(\hat x,\hat y)\in\overline\Omega\times\overline\Omega$ as $\varepsilon\to 0$ (taking a subsequence if necessary). Since $u(x_\varepsilon)-v(y_\varepsilon)\ge\phi(x_\varepsilon,y_\varepsilon)$, we have $|x_\varepsilon-y_\varepsilon|^2 \le 2\varepsilon(\max_{\overline\Omega}u - \min_{\overline\Omega}v + 2^{-1}\theta)$, and moreover $\hat x = \hat y$. Hence, we have
$$u(\hat x) - v(\hat x) + \delta e^{\sigma\hat x_1} \ge \tau,$$
which implies $\hat x\in\Omega$ because of our choice of $\delta$. Thus, we may suppose that $(x_\varepsilon,y_\varepsilon)\in\Omega\times\Omega$ for small $\varepsilon > 0$. Moreover, as before, we see that
$$\lim_{\varepsilon\to 0}\frac{|x_\varepsilon-y_\varepsilon|^2}{\varepsilon} = 0. \tag{3.27}$$
Applying Lemma 3.6 to $\hat u(x) := u(x)+\delta e^{\sigma x_1}$ and $-v(y)$, we find $X, Y\in S^n$ such that $((x_\varepsilon-y_\varepsilon)/\varepsilon, X)\in\overline J^{2,+}\hat u(x_\varepsilon)$, $((x_\varepsilon-y_\varepsilon)/\varepsilon, Y)\in\overline J^{2,-}v(y_\varepsilon)$, and
$$-\frac3\varepsilon\begin{pmatrix} I & O\\ O & I\end{pmatrix} \le \begin{pmatrix} X & O\\ O & -Y\end{pmatrix} \le \frac3\varepsilon\begin{pmatrix} I & -I\\ -I & I\end{pmatrix}.$$
We shall simply write $x$ and $y$ for $x_\varepsilon$ and $y_\varepsilon$, respectively.
Note that Proposition 2.7 implies
$$\left(\frac{x-y}{\varepsilon} - \delta\sigma e^{\sigma x_1}e_1,\; X - \delta\sigma^2 e^{\sigma x_1}I_1\right) \in \overline J^{2,+}u(x),$$
where $e_1 := {}^t(1,0,\ldots,0)\in\mathbf R^n$ and $I_1\in S^n$ is the matrix whose $(1,1)$ entry is $1$ and all other entries are $0$.
Setting $r := \delta\sigma e^{\sigma x_1}$, from the definition of $u$ and $v$, we have
$$0 \le F\left(y,\frac{x-y}{\varepsilon},Y\right) - F\left(x,\frac{x-y}{\varepsilon} - re_1,\; X - \sigma rI_1\right).$$
In view of the uniform ellipticity and (3.4), we have
$$0 \le r\mu + \sigma rP^+(I_1) + F\left(y,\frac{x-y}{\varepsilon},Y\right) - F\left(x,\frac{x-y}{\varepsilon},X\right).$$
Hence, by (3.21) and the definition of $P^+$, we have
$$0 \le r(\mu - \sigma\lambda) + \omega_F\left(|x-y| + \frac{|x-y|^2}{\varepsilon}\right),$$
which together with (3.27) yields $0 \le \delta\sigma e^{\sigma\hat x_1}(\mu - \sigma\lambda)$ in the limit $\varepsilon\to 0$. Since our choice $\sigma = (\mu+1)/\lambda$ gives $\mu - \sigma\lambda = -1 < 0$, this is a contradiction. $\Box$
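The doubling-of-variables device used in the last two proofs can be pictured numerically. The sketch below is my own toy example (the choice $w(x) = |x|$ and the grid are mine, not from the text): it maximizes $\Phi_\varepsilon(x,y) = w(x) - w(y) - |x-y|^2/(2\varepsilon)$ on a grid and checks that, since $\Phi_\varepsilon(x_\varepsilon,y_\varepsilon) \ge \Phi_\varepsilon(x_\varepsilon,x_\varepsilon)$, a Lipschitz bound forces $|x_\varepsilon-y_\varepsilon| \le 2L\varepsilon$:

```python
import numpy as np

# Doubling of variables: for a Lipschitz w (constant L = 1), maximize
#   Phi_eps(x, y) = w(x) - w(y) - |x - y|^2 / (2 eps)
# over a grid; any maximizer satisfies |x_eps - y_eps| <= 2 L eps,
# so the penalization forces x_eps and y_eps together as eps -> 0.
w = np.abs                      # w(x) = |x|, Lipschitz with L = 1
grid = np.linspace(-1.0, 1.0, 401)
X, Y = np.meshgrid(grid, grid, indexing="ij")

gaps = []
for eps in (0.1, 0.05, 0.02):
    Phi = w(X) - w(Y) - (X - Y) ** 2 / (2.0 * eps)
    i, j = np.unravel_index(np.argmax(Phi), Phi.shape)
    gap = abs(grid[i] - grid[j])
    assert gap <= 2.0 * eps + 1e-12   # |x_eps - y_eps| <= 2 L eps
    gaps.append(gap)
print(gaps)
```

Note that the maximizing pair need not be diagonal (here the kink of $|x|$ keeps $x_\varepsilon \ne y_\varepsilon$), which is exactly why the sub/supersolution inequalities are evaluated at two different points.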
4 Existence results
In this section, we present some existence results for viscosity solutions of second-order (degenerate) elliptic PDEs. We first present a convenient existence result via Perron's method, which was established by Ishii in 1987. Next, for Bellman and Isaacs equations, we give representation formulas for viscosity solutions. From the dynamic programming principle below, we will realize how natural the definition of viscosity solutions is.
4.1 Perron's method
In order to introduce Perron's method, we need the notion of viscosity solutions for semicontinuous functions.

Definition. For any function $u : \Omega\to\mathbf R$, we denote by $u^*$ and $u_*$ the upper and lower semicontinuous envelopes of $u$, respectively, which are defined by
$$u^*(x) = \lim_{\varepsilon\to 0}\;\sup_{y\in B_\varepsilon(x)\cap\Omega}u(y) \qquad\text{and}\qquad u_*(x) = \lim_{\varepsilon\to 0}\;\inf_{y\in B_\varepsilon(x)\cap\Omega}u(y).$$
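On a grid, these envelopes can be approximated by windowed suprema and infima. The following Python sketch is an illustration I added (the step function and window sizes are arbitrary choices of mine); it computes discrete stand-ins for $u^*$ and $u_*$ at the jump of a sign-type function:

```python
import numpy as np

# Grid approximation of the semicontinuous envelopes of
#   u(x) = 1 for x > 0,  u(x) = -1 for x <= 0,
# using a small window radius as a stand-in for the limit eps -> 0.
xs = np.linspace(-1.0, 1.0, 2001)
u = np.where(xs > 0, 1.0, -1.0)

def envelopes(u, radius_pts):
    """Windowed sup/inf: crude discrete versions of u^* and u_*."""
    n = len(u)
    upper = np.array([u[max(0, i - radius_pts): i + radius_pts + 1].max() for i in range(n)])
    lower = np.array([u[max(0, i - radius_pts): i + radius_pts + 1].min() for i in range(n)])
    return upper, lower

u_star, u_low = envelopes(u, radius_pts=2)

i0 = np.argmin(np.abs(xs))          # the grid point x = 0
assert u_star[i0] == 1.0            # u^*(0) = 1: every ball around 0 meets {x > 0}
assert u_low[i0] == -1.0            # u_*(0) = -1
assert np.all(u_low <= u) and np.all(u <= u_star)
```

The final assertion is the discrete counterpart of the inequality $u_* \le u \le u^*$.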
We give some elementary properties of $u^*$ and $u_*$ without proofs.

Proposition 4.1. For $u : \Omega\to\mathbf R$, we have
(1) $u_*(x) \le u(x) \le u^*(x)$ for $x\in\Omega$;
(2) $u_*(x) = -(-u)^*(x)$ for $x\in\Omega$;
(3) $u^*$ (resp., $u_*$) is upper (resp., lower) semicontinuous in $\Omega$, i.e.
$$\limsup_{y\to x}u^*(y) \le u^*(x) \quad\left(\text{resp., } \liminf_{y\to x}u_*(y) \ge u_*(x)\right) \quad\text{for } x\in\Omega;$$
(4) if $u$ is upper (resp., lower) semicontinuous in $\Omega$, then $u(x) = u^*(x)$ (resp., $u(x) = u_*(x)$) for $x\in\Omega$.

With these notations, we give our definition of viscosity solutions of
$$F(x,u,Du,D^2u) = 0 \quad\text{in }\Omega. \tag{4.1}$$

Definition. We call $u : \Omega\to\mathbf R$ a viscosity subsolution (resp., supersolution) of (4.1) if $u^*$ (resp., $u_*$) is a viscosity subsolution (resp., supersolution) of (4.1).
We call $u : \Omega\to\mathbf R$ a viscosity solution of (4.1) if it is both a viscosity sub- and supersolution of (4.1).

Remark. Since $u^*$ and $u_*$ are, respectively, upper and lower semicontinuous, adapting the above new definition, we omit the semicontinuity requirement for viscosity sub- and supersolutions. In what follows, we use the above definition. We note that we supposed that viscosity sub- and supersolutions are, respectively, upper and lower semicontinuous in our comparison principle in section 3 (e.g. Proposition 3.3 and Theorems 3.7, 3.9).

Remark. We remark that the comparison principle (Theorem 3.7) implies the continuity of viscosity solutions:

"Continuity of viscosity solutions": if a viscosity solution $u$ satisfies $u^* = u_*$ on $\partial\Omega$, then $u\in C(\Omega)$.

Proof of the continuity of $u$. Since $u^*$ and $u_*$ are, respectively, a viscosity subsolution and a viscosity supersolution, and $u^* \le u_*$ on $\partial\Omega$, Theorem 3.7 yields $u^* \le u_*$ in $\overline\Omega$. Because $u_* \le u \le u^*$ in $\Omega$, we have $u = u^* = u_*$ in $\Omega$; hence $u\in C(\Omega)$. $\Box$

We first show that the "pointwise" supremum (resp., infimum) of viscosity subsolutions (resp., supersolutions) of (4.1) becomes a viscosity subsolution (resp., supersolution).

Theorem 4.2. Let $\mathcal S$ be a nonempty set of upper (resp., lower) semicontinuous viscosity subsolutions (resp., supersolutions) of (4.1). Set $u(x) := \sup_{v\in\mathcal S}v(x)$ (resp., $u(x) := \inf_{v\in\mathcal S}v(x)$). If $\sup_{x\in K}|u(x)| < \infty$ for any compact set $K\subset\Omega$, then $u$ is a viscosity subsolution (resp., supersolution) of (4.1).

Proof. We only give a proof for subsolutions since the other can be proved in a symmetric way. For $\hat x\in\Omega$, we suppose that
$$0 = (u^*-\phi)(\hat x) > (u^*-\phi)(x) \quad\text{for } x\in\Omega\setminus\{\hat x\}$$
and $\phi\in C^2(\Omega)$. We shall show that
$$F(\hat x,\phi(\hat x),D\phi(\hat x),D^2\phi(\hat x)) \le 0. \tag{4.2}$$
Moreover. (4.1).3) We choose xk ∈ Br (ˆ) such that limk→∞ xk = x.Let r > 0 be such that B2r (ˆ) ⊂ Ω. Theorem 4. Hence.4). Therefore. u∗ (ˆ) − k −1 ≤ u(xk ) x ˆ x and φ(xk ) − φ(ˆ) < 1/k.1) such that ξ ≤ v ≤ η in Ω 42 . we have (u∗ − φ)(ˆ) ≤ (u∗ − φ)(z).. there is yk ∈ Br (ˆ) such that uk − φ attains its x maximum over B r (ˆ) at yk .3).4) Taking a subsequence if necessary. x which yields z = x. By (4. Since (u∗ − φ)(ˆ) ≤ (uk − φ)(xk ) + x 3 3 3 ≤ (uk − φ)(yk ) + ≤ (u∗ − φ)(yk ) + k k k by the upper semicontinuity of u∗ . where S := v ∈ USC(Ω) v is a viscosity subsolution of (4. we may suppose z := limk→∞ yk . u(x) = inf w∈S w(x)) is a viscosity soluˆ ˆ tion of (4.1) such that loc ξ≤η in Ω. Then. Thus. for large k > 3/s. we have x F (yk .3. and moreover. by the continuity of F . We can ﬁnd s > 0 such that x ∂Br (ˆ) x max (u∗ − φ) ≤ −s. u(x) := supv∈S v(x) (resp. D 2 φ(yk )) ≤ 0. ˆ x x sending k → ∞ in (4. for 3/k < s. (4. Assume that F is elliptic. limk→∞ uk (yk ) = u∗ (ˆ) = φ(ˆ). uk (yk ). Dφ(yk ). 2 Our ﬁrst existence result is as follows. we have ∂Br (ˆ) x max (uk − φ) < (uk − φ)(xk ). we select upper semicontinuous uk ∈ S x such that uk (xk ) + k −1 ≥ u(xk ). Assume also that there are a viscosity subsolution ξ ∈ USC(Ω) ∩ L∞ (Ω) and a viscosity supersolution loc η ∈ LSC(Ω) ∩ L∞ (Ω) of (4.2). we obtain (4.
(resp., $\mathcal S' := \{w\in LSC(\Omega)\mid w$ is a viscosity supersolution of (4.1) such that $\xi\le w\le\eta$ in $\Omega\}$).

Sketch of proof. We only give a proof for $u$ since the other can be shown in a symmetric way. First of all, we notice that $\mathcal S\ne\emptyset$ since $\xi\in\mathcal S$. Due to Theorem 4.2, we know that $u$ is a viscosity subsolution of (4.1); thus, we only need to show that it is a viscosity supersolution of (4.1). Here, we assume that $u\in LSC(\Omega)$.

Assuming that $0 = (u-\phi)(\hat x) < (u-\phi)(x)$ for $x\in\Omega\setminus\{\hat x\}$ and $\phi\in C^2(\Omega)$, we shall show that $F(\hat x,\phi(\hat x),D\phi(\hat x),D^2\phi(\hat x)) \ge 0$. Suppose that this conclusion fails; then there is $\theta > 0$ such that
$$F(\hat x,\phi(\hat x),D\phi(\hat x),D^2\phi(\hat x)) \le -2\theta.$$
Hence, there is $r > 0$ such that
$$F(x,\phi(x)+t,D\phi(x),D^2\phi(x)) \le -\theta \quad\text{for } x\in B_r(\hat x)\subset\Omega \text{ and } |t|\le r. \tag{4.5}$$
First, we claim that $\phi(\hat x) < \eta(\hat x)$. Indeed, otherwise, since $\phi\le u\le\eta$ in $\Omega$, $\eta-\phi$ attains its minimum at $\hat x$; thus, from the definition of the supersolution $\eta$, we get
$$F(\hat x,\phi(\hat x),D\phi(\hat x),D^2\phi(\hat x)) \ge 0,$$
which contradicts (4.5) for $x = \hat x$ and $t = 0$. See Fig 4.1. [Figure 4.1: the graphs of $y = \eta(x)$, $y = u(x)$, $y = \phi(x)$ and $y = \xi(x)$ near $\hat x$ in $\Omega$.]
If we can define a function $w\in\mathcal S$ such that $w(\hat x) > u(\hat x)$, then we finish our proof because this contradicts the maximality of $u$ at each point. We may suppose that $\xi(\hat x) < \eta(\hat x)$ since, otherwise, $\xi = \phi = \eta$ at $\hat x$, which contradicts the above claim.

Setting $3\hat\tau := \eta(\hat x) - u(\hat x) > 0$, from the lower and upper semicontinuity of $\eta$ and $\xi$, respectively, we may choose $s\in(0,\min\{\hat\tau, r\})$ such that
$$\xi(x) + \hat\tau \le \phi(x) + 2\hat\tau \le \eta(x) \quad\text{for } x\in B_{2s}(\hat x).$$
Moreover, we can choose $\varepsilon\in(0,s)$ and $\tau_0\in(0,\min\{\hat\tau, r\})$ such that
$$\phi(x) + 2\tau_0 \le u(x) \quad\text{for } x\in\overline B_{s+\varepsilon}(\hat x)\setminus B_{s-\varepsilon}(\hat x).$$
Now, we set
$$w(x) := \begin{cases}\max\{u(x),\phi(x)+\tau_0\} & \text{in } B_s(\hat x),\\ u(x) & \text{in }\Omega\setminus B_s(\hat x).\end{cases}$$
See Fig 4.2. [Figure 4.2: the graphs of $y = \eta(x)$, $y = u(x)$, $y = \phi(x)+\tau_0$, $y = w(x)$ and $y = \xi(x)$ near $\hat x$, with the gap $3\hat\tau$ indicated.]

Because of our choice of $\tau_0$, it is easy to see that $\xi\le w\le\eta$ in $\Omega$. It suffices to show that $w\in\mathcal S$; that is, we only need to show that $w$ is a viscosity subsolution of (4.1). To this end, we suppose that
$$(w^*-\psi)(x) \le (w^*-\psi)(z) = 0 \quad\text{for } x\in\Omega,$$
and then we will get
$$F(z,w^*(z),D\psi(z),D^2\psi(z)) \le 0. \tag{4.6}$$
If $z\in\Omega\setminus\overline B_s(\hat x) =: \Omega'$, then $u^*-\psi$ attains its maximum at $z\in\Omega'$, and, by Proposition 2.4, we get (4.6).
If $z\in\partial B_s(\hat x)$, then (4.6) holds again since $w = u$ in $B_{s+\varepsilon}(\hat x)\setminus\overline B_{s-\varepsilon}(\hat x)$. It remains to show (4.6) when $z\in B_s(\hat x)$. Since $\phi+\tau_0$ is a viscosity subsolution of (4.1) in $B_s(\hat x)$ by (4.5), Theorem 4.2 with $\Omega := B_s(\hat x)$ yields (4.6).

It only remains to check that $\sup_\Omega(w-u) > 0$. In fact, choosing $x_k\in B_{1/k}(\hat x)$ such that
$$u(\hat x) + \frac1k \ge u(x_k),$$
we easily verify that if $1/k \le \min\{\tau_0/2, s\}$ and $\phi(\hat x) - \phi(x_k) < \tau_0/2$, then we have
$$w(x_k) \ge \phi(x_k) + \tau_0 > \phi(\hat x) + \frac{\tau_0}{2} = u(\hat x) + \frac{\tau_0}{2} \ge u(x_k). \qquad\Box$$

Correct proof. Since we do not suppose that $u\in LSC(\Omega)$ here, we have to work with $u_*$. Suppose that $0 = (u_*-\phi)(\hat x) < (u_*-\phi)(x)$ for $x\in\Omega\setminus\{\hat x\}$ for some $\phi\in C^2(\Omega)$, and, arguing by contradiction, that there is $\theta > 0$ such that
$$F(\hat x,\phi(\hat x),D\phi(\hat x),D^2\phi(\hat x)) \le -2\theta.$$
We then define $w$ as in the above with $u_*$ in place of $u$, and verify in the same way that $w$ is a viscosity subsolution of (4.1). Choosing $x_k\in B_{1/k}(\hat x)$ such that $u_*(\hat x) + k^{-1} \ge u(x_k)$ and $\phi(\hat x) - \phi(x_k) < \tau_0/2$, the same computation as above gives $w(x_k) > u(x_k)$, i.e. $\sup_\Omega(w-u) > 0$. $\Box$

4.2 Representation formula

In this subsection, for given Bellman and Isaacs equations, we present the expected solutions, which are called "value functions". Then, via the dynamic programming principle for the value functions, we verify that they are viscosity solutions of the corresponding PDEs. Although this subsection is very important to learn how the notion of viscosity solutions is the right one from a viewpoint of applications in optimal control and games, if the reader is more interested in the PDE theory than these applications, he/she may skip this subsection.

We shall restrict ourselves to investigating the formulas only for first-order PDEs,
because, in order to extend the results below to second-order ones, we would need to introduce some terminology from stochastic analysis, which is too much for this thin book. When we adopt "stochastic" differential equations instead of ODEs, the problems are called "stochastic" optimal control problems; we refer to [10] for those. Here, we study the minimization of functionals associated with ordinary differential equations (ODEs for short), which is called a "deterministic" optimal control problem. Throughout this subsection, to avoid mentioning the boundary condition, we will work on the whole domain $\mathbf R^n$.

4.2.1 Bellman equation

We fix a control set $A\subset\mathbf R^m$ for some $m\in\mathbf N$, and define
$$\mathcal A := \{\alpha : [0,\infty)\to A \mid \alpha(\cdot) \text{ is measurable}\}.$$
For $x\in\mathbf R^n$ and $\alpha\in\mathcal A$, we denote by $X(\cdot,x,\alpha)$ the solution of
$$X'(t) = g(X(t),\alpha(t)) \quad\text{for } t > 0, \qquad X(0) = x, \tag{4.7}$$
where we will impose a sufficient condition on the continuous function $g : \mathbf R^n\times A\to\mathbf R^n$ so that (4.7) is uniquely solvable. For a given continuous $f : \mathbf R^n\times A\to\mathbf R$, we suppose that
$$\left\{\begin{array}{l}(1)\ \displaystyle\sup_{a\in A}\left(\|f(\cdot,a)\|_{L^\infty(\mathbf R^n)} + \|g(\cdot,a)\|_{W^{1,\infty}(\mathbf R^n)}\right) < \infty,\\[6pt](2)\ \displaystyle\sup_{a\in A}|f(x,a)-f(y,a)| \le \omega_f(|x-y|) \quad\text{for } x,y\in\mathbf R^n,\end{array}\right. \tag{4.8}$$
where $\omega_f\in\mathcal M$. Now, we define the cost functional for $X(\cdot,x,\alpha)$:
$$J(x,\alpha) := \int_0^\infty e^{-\nu t}f(X(t,x,\alpha),\alpha(t))\,dt,$$
where $\nu > 0$ is called a discount factor; by (4.8), the right-hand side of the above is finite. We shall consider the optimal cost functional
$$u(x) := \inf_{\alpha\in\mathcal A}J(x,\alpha) \quad\text{for } x\in\mathbf R^n,$$
which is called the value function in this optimal control problem.

Theorem 4.4. (Dynamic Programming Principle) Assume that (4.8) holds.
Then, for any $T > 0$, we have
$$u(x) = \inf_{\alpha\in\mathcal A}\left\{\int_0^T e^{-\nu t}f(X(t,x,\alpha),\alpha(t))\,dt + e^{-\nu T}u(X(T,x,\alpha))\right\}.$$

Proof. We denote by $v(x)$ the right-hand side of the above.

Step 1: $u(x) \ge v(x)$. Fix any $\varepsilon > 0$, and choose $\alpha^\varepsilon\in\mathcal A$ such that
$$u(x) + \varepsilon \ge \int_0^\infty e^{-\nu t}f(X(t,x,\alpha^\varepsilon),\alpha^\varepsilon(t))\,dt.$$
For fixed $T > 0$, setting $\hat x := X(T,x,\alpha^\varepsilon)$ and $\hat\alpha^\varepsilon(t) := \alpha^\varepsilon(T+t)$ for $t\ge 0$, we have
$$\int_0^\infty e^{-\nu t}f(X(t,x,\alpha^\varepsilon),\alpha^\varepsilon(t))\,dt = \int_0^T e^{-\nu t}f(X(t,x,\alpha^\varepsilon),\alpha^\varepsilon(t))\,dt + e^{-\nu T}\int_0^\infty e^{-\nu t}f(X(t,\hat x,\hat\alpha^\varepsilon),\hat\alpha^\varepsilon(t))\,dt.$$
Here and later, without mentioning it, we use the fact that
$$X(t+T,x,\alpha) = X(t,\hat x,\hat\alpha) \quad\text{for } t\ge 0,$$
where $\hat\alpha(t) := \alpha(t+T)$ ($t\ge 0$) and $\hat x := X(T,x,\alpha)$; this relation holds true because of the uniqueness of solutions of (4.7) under assumption (4.8). Since the last integral in the above is at least $u(\hat x)$, we have
$$u(x) + \varepsilon \ge \int_0^T e^{-\nu t}f(X(t,x,\alpha^\varepsilon),\alpha^\varepsilon(t))\,dt + e^{-\nu T}u(\hat x),$$
which implies the one-sided inequality by taking the infimum over $\mathcal A$, since $\varepsilon > 0$ is arbitrary.

Step 2: $u(x) \le v(x)$. Fix $\varepsilon > 0$ again, and choose $\alpha^\varepsilon\in\mathcal A$ such that
$$v(x) + \varepsilon \ge \int_0^T e^{-\nu t}f(X(t,x,\alpha^\varepsilon),\alpha^\varepsilon(t))\,dt + e^{-\nu T}u(\hat x),$$
where $\hat x := X(T,x,\alpha^\varepsilon)$.
We next choose $\alpha^1\in\mathcal A$ such that
$$u(\hat x) + \varepsilon \ge \int_0^\infty e^{-\nu t}f(X(t,\hat x,\alpha^1),\alpha^1(t))\,dt.$$
Now, setting
$$\alpha^0(t) := \begin{cases}\alpha^\varepsilon(t) & \text{for } t\in[0,T),\\ \alpha^1(t-T) & \text{for } t\ge T,\end{cases}$$
we see that
$$v(x) + 2\varepsilon \ge \int_0^\infty e^{-\nu t}f(X(t,x,\alpha^0),\alpha^0(t))\,dt \ge u(x),$$
which gives the opposite inequality since $\varepsilon > 0$ is arbitrary again. See Fig 4.3. [Figure 4.3: the trajectory $X(t,x,\alpha)$ for $t\in[0,T]$, continued by $X(t-T,\hat x,\hat\alpha)$ for $t\ge T$, where $\hat x := X(T,x,\alpha^\varepsilon)$.] $\Box$

Now, we give an existence result for Bellman equations.

Theorem 4.5. Assume that (4.8) holds. Then, $u$ is a viscosity solution of
$$\sup_{a\in A}\{\nu u - \langle g(x,a),Du\rangle - f(x,a)\} = 0 \quad\text{in }\mathbf R^n. \tag{4.9}$$

Sketch of proof. In Steps 1 and 2, we give a proof assuming $u\in USC(\mathbf R^n)$ and $u\in LSC(\mathbf R^n)$, respectively.

Step 1: Subsolution property. Fix $\phi\in C^1(\mathbf R^n)$, and suppose that $0 = (u-\phi)(\hat x) \ge (u-\phi)(x)$ for some $\hat x\in\mathbf R^n$ and any $x\in\mathbf R^n$.
Fix any $a_0\in A$, and set $\alpha^0(t) := a_0$ for $t\ge 0$, so that $\alpha^0\in\mathcal A$. For small $s > 0$, in view of Theorem 4.4, setting $X(t) := X(t,\hat x,\alpha^0)$ for simplicity, we have
$$\phi(\hat x) - e^{-\nu s}\phi(X(s)) \le u(\hat x) - e^{-\nu s}u(X(s)) \le \int_0^s e^{-\nu t}f(X(t),a_0)\,dt.$$
Since
$$\phi(\hat x) - e^{-\nu s}\phi(X(s)) = -\int_0^s\frac{d}{dt}\left(e^{-\nu t}\phi(X(t))\right)dt = \int_0^s e^{-\nu t}\left\{\nu\phi(X(t)) - \langle g(X(t),a_0),D\phi(X(t))\rangle\right\}dt,$$
we have
$$0 \ge \int_0^s e^{-\nu t}\left\{\nu\phi(X(t)) - \langle g(X(t),a_0),D\phi(X(t))\rangle - f(X(t),a_0)\right\}dt.$$
Hence, dividing the above by $s > 0$ and then sending $s\to 0$, we have
$$0 \ge \nu\phi(\hat x) - \langle g(\hat x,a_0),D\phi(\hat x)\rangle - f(\hat x,a_0),$$
which implies the desired inequality of the definition by taking the supremum over $A$.

Step 2: Supersolution property. To show that $u$ is a viscosity supersolution, we argue by contradiction. Suppose that there are $\hat x\in\mathbf R^n$, $\theta > 0$ and $\phi\in C^1(\mathbf R^n)$ such that $0 = (u-\phi)(\hat x) \le (u-\phi)(x)$ for $x\in\mathbf R^n$, and that
$$\sup_{a\in A}\{\nu\phi(\hat x) - \langle g(\hat x,a),D\phi(\hat x)\rangle - f(\hat x,a)\} \le -2\theta.$$
By assumption (4.8) for $g$, we easily see that
$$|X(t,\hat x,\alpha) - \hat x| \le \int_0^t|X'(s,\hat x,\alpha)|\,ds \le t\sup_{a\in A}\|g(\cdot,a)\|_{L^\infty(\mathbf R^n)}$$
for any $\alpha\in\mathcal A$. Hence, we can find $\varepsilon > 0$ such that
$$\sup_{a\in A}\{\nu\phi(x) - \langle g(x,a),D\phi(x)\rangle - f(x,a)\} \le -\theta \quad\text{for } x\in B_\varepsilon(\hat x). \tag{4.10}$$
Therefore, setting
$$t_0 := \frac{\varepsilon}{\sup_{a\in A}\|g(\cdot,a)\|_{L^\infty(\mathbf R^n)}+1} > 0, \tag{4.11}$$
for any fixed $\alpha\in\mathcal A$ and $X(t) := X(t,\hat x,\alpha)$, (4.10) yields
$$\nu\phi(X(t)) - \langle g(X(t),\alpha(t)),D\phi(X(t))\rangle - f(X(t),\alpha(t)) \le -\theta \tag{4.12}$$
for $t\in[0,t_0]$. Hence, multiplying (4.12) by $e^{-\nu t}$ and then integrating it over $[0,t_0]$, we obtain
$$\phi(\hat x) - e^{-\nu t_0}\phi(X(t_0)) - \int_0^{t_0}e^{-\nu t}f(X(t),\alpha(t))\,dt \le -\frac\theta\nu(1-e^{-\nu t_0}),$$
which is independent of $\alpha\in\mathcal A$. Therefore, setting $\theta_0 := \theta(1-e^{-\nu t_0})/\nu > 0$ and using $u(\hat x) = \phi(\hat x)$ and $\phi\le u$, we have
$$u(\hat x) \le \int_0^{t_0}e^{-\nu t}f(X(t),\alpha(t))\,dt + e^{-\nu t_0}u(X(t_0)) - \theta_0.$$
Thus, taking the infimum over $\mathcal A$, we get a contradiction to Theorem 4.4. $\Box$

Correct proof, which the reader may skip first.

Step 1: Subsolution property. Assume that there are $\hat x\in\mathbf R^n$, $\theta > 0$ and $\phi\in C^1(\mathbf R^n)$ such that $0 = (u^*-\phi)(\hat x) \ge (u^*-\phi)(x)$ for $x\in\mathbf R^n$ and that
$$\sup_{a\in A}\{\nu\phi(\hat x) - \langle g(\hat x,a),D\phi(\hat x)\rangle - f(\hat x,a)\} \ge 2\theta.$$
In view of (4.8), there are $a_0\in A$ and $r > 0$ such that
$$\nu\phi(x) - \langle g(x,a_0),D\phi(x)\rangle - f(x,a_0) \ge \theta \quad\text{for } x\in B_{2r}(\hat x). \tag{4.13}$$
For large $k\ge 1$, we can choose $x_k\in B_{1/k}(\hat x)$ such that $u^*(\hat x) \le u(x_k) + k^{-1}$ and $\phi(\hat x) - \phi(x_k) < 1/k$; we will only use $k$ such that $1/k\le r$. Setting $\alpha^0(t) := a_0$ and $X_k(t) := X(t,x_k,\alpha^0)$, we note that $X_k(t)\in B_{2r}(\hat x)$ for $t\in[0,t_0]$ with some $t_0 > 0$ and for large $k$. By Theorem 4.4 and $u(X_k(t_0)) \le u^*(X_k(t_0)) \le \phi(X_k(t_0))$, we have
$$\phi(\hat x) - \frac1k \le u(x_k) \le \int_0^{t_0}e^{-\nu t}f(X_k(t),a_0)\,dt + e^{-\nu t_0}\phi(X_k(t_0)).$$
On the other hand, multiplying (4.13) by $e^{-\nu t}$ and integrating over $[0,t_0]$ as in Step 1 of the Sketch of proof, we have
$$\phi(x_k) - e^{-\nu t_0}\phi(X_k(t_0)) - \int_0^{t_0}e^{-\nu t}f(X_k(t),a_0)\,dt \ge \frac\theta\nu(1-e^{-\nu t_0}).$$
Combining these with $\phi(\hat x) - \phi(x_k) < 1/k$, we see that
$$-\frac2k \le -\frac\theta\nu(1-e^{-\nu t_0}),$$
which is a contradiction for large $k$.
Step 2: Supersolution property. Assume that there are $\hat x\in\mathbf R^n$, $\theta > 0$ and $\phi\in C^1(\mathbf R^n)$ such that $0 = (u_*-\phi)(\hat x) \le (u_*-\phi)(x)$ for $x\in\mathbf R^n$ and that
$$\sup_{a\in A}\{\nu\phi(\hat x) - \langle g(\hat x,a),D\phi(\hat x)\rangle - f(\hat x,a)\} \le -2\theta.$$
In view of (4.8), there is $r > 0$ such that
$$\nu\phi(x) - \langle g(x,a),D\phi(x)\rangle - f(x,a) \le -\theta \quad\text{for } x\in B_{2r}(\hat x) \text{ and } a\in A. \tag{4.14}$$
For large $k\ge 1$, we can choose $x_k\in B_{1/k}(\hat x)$ such that $u_*(\hat x) \ge u(x_k) - k^{-1}$ and $\phi(\hat x) - \phi(x_k) < 1/k$. Also, in view of (4.8), there is $t_0 > 0$ such that
$$X(t,x_k,\alpha)\in B_{2r}(\hat x) \quad\text{for all } k\ge\frac1r,\ \alpha\in\mathcal A \text{ and } t\in[0,t_0].$$
Now, we select $\alpha^k\in\mathcal A$ such that
$$u(x_k) + \frac1k \ge \int_0^{t_0}e^{-\nu t}f(X(t,x_k,\alpha^k),\alpha^k(t))\,dt + e^{-\nu t_0}u(X(t_0,x_k,\alpha^k)).$$
Setting $X_k(t) := X(t,x_k,\alpha^k)$, we have
$$\phi(x_k) + \frac3k \ge \phi(\hat x) + \frac2k \ge u(x_k) + \frac1k \ge \int_0^{t_0}e^{-\nu t}f(X_k(t),\alpha^k(t))\,dt + e^{-\nu t_0}\phi(X_k(t_0)).$$
Hence, putting (4.14), with $x := X_k(t)$ and $a := \alpha^k(t)$, in the above, we have
$$\frac3k \ge \int_0^{t_0}e^{-\nu t}\left\{\langle g(X_k(t),\alpha^k(t)),D\phi(X_k(t))\rangle + f(X_k(t),\alpha^k(t)) - \nu\phi(X_k(t))\right\}dt \ge \theta\int_0^{t_0}e^{-\nu t}\,dt,$$
which is a contradiction for large $k\ge 1$. $\Box$

4.2.2 Isaacs equation

In this subsection, we study fully nonlinear PDEs arising in differential games, i.e. those for which $p\in\mathbf R^n\mapsto F(x,p)$ is neither convex nor concave.
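The one-controller value function of §4.2.1 (the special case of a game where one player is trivial) can be approximated numerically. The toy sketch below is my own illustration, not from the text: the data $f(x,a) = x^2$, $g(x,a) = a$, $A = \{-1,0,1\}$, $\nu = 1$ and the semi-Lagrangian scheme are all assumptions I chose. The iteration is the discrete analogue of Theorem 4.4 with $T = h$:

```python
import numpy as np

# Hypothetical semi-Lagrangian value iteration for the 1-D discounted problem
#   minimize  int_0^infty e^{-t} X(t)^2 dt,   X' = alpha(t),  alpha(t) in {-1, 0, 1},
# i.e. nu = 1, f(x, a) = x^2, g(x, a) = a.  The scheme
#   u(x) = min_a [ h f(x, a) + e^{-nu h} u(x + h a) ]
# is a discrete dynamic programming principle (Theorem 4.4 with T = h).
nu, h = 1.0, 0.05
xs = np.linspace(-1.0, 1.0, 201)
controls = (-1.0, 0.0, 1.0)

u = np.zeros_like(xs)
for _ in range(600):                      # contraction with factor e^{-nu h} < 1
    u = np.min(
        [h * xs**2 + np.exp(-nu * h) * np.interp(xs + h * a, xs, u) for a in controls],
        axis=0,
    )

i0 = len(xs) // 2                         # xs[i0] = 0
assert np.all(u >= 0.0)                   # nonnegative running cost
assert abs(u[i0]) < 1e-9                  # staying at the origin costs nothing
assert np.allclose(u, u[::-1])            # symmetry of the problem
```

Since the update is a contraction with factor $e^{-\nu h} < 1$, the iterates converge to the unique fixed point, mirroring how $\nu > 0$ makes the value function well defined.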
We are given continuous functions $f : \mathbf R^n\times A\times B\to\mathbf R$ and $g : \mathbf R^n\times A\times B\to\mathbf R^n$ such that
$$\left\{\begin{array}{l}(1)\ \displaystyle\sup_{(a,b)\in A\times B}\left(\|f(\cdot,a,b)\|_{L^\infty(\mathbf R^n)} + \|g(\cdot,a,b)\|_{W^{1,\infty}(\mathbf R^n)}\right) < \infty,\\[6pt](2)\ \displaystyle\sup_{(a,b)\in A\times B}|f(x,a,b)-f(y,a,b)| \le \omega_f(|x-y|) \quad\text{for } x,y\in\mathbf R^n,\end{array}\right. \tag{4.15}$$
where $\omega_f\in\mathcal M$. We shall consider the Isaacs equations
$$\sup_{a\in A}\inf_{b\in B}\{\nu u - \langle g(x,a,b),Du\rangle - f(x,a,b)\} = 0 \quad\text{in }\mathbf R^n \tag{4.16}$$
and
$$\inf_{b\in B}\sup_{a\in A}\{\nu u - \langle g(x,a,b),Du\rangle - f(x,a,b)\} = 0 \quad\text{in }\mathbf R^n. \tag{4.16'}$$
We first introduce some notations: while we will use the same notion $\mathcal A$ as before, we set
$$\mathcal B := \{\beta : [0,\infty)\to B \mid \beta(\cdot) \text{ is measurable}\}.$$
Next, we introduce the so-called sets of "nonanticipating strategies":
$$\Gamma := \left\{\gamma : \mathcal A\to\mathcal B \;\middle|\; \begin{array}{l}\text{for any } T > 0, \text{ if } \alpha_1,\alpha_2\in\mathcal A \text{ satisfy } \alpha_1(t) = \alpha_2(t) \text{ for a.a. } t\in(0,T),\\ \text{then } \gamma[\alpha_1](t) = \gamma[\alpha_2](t) \text{ for a.a. } t\in(0,T)\end{array}\right\}$$
and
$$\Delta := \left\{\delta : \mathcal B\to\mathcal A \;\middle|\; \begin{array}{l}\text{for any } T > 0, \text{ if } \beta_1,\beta_2\in\mathcal B \text{ satisfy } \beta_1(t) = \beta_2(t) \text{ for a.a. } t\in(0,T),\\ \text{then } \delta[\beta_1](t) = \delta[\beta_2](t) \text{ for a.a. } t\in(0,T)\end{array}\right\}.$$
Using these notations, under (4.15), we will consider maximizing-minimizing problems of the following cost functional: for $\alpha\in\mathcal A$, $\beta\in\mathcal B$ and $x\in\mathbf R^n$,
$$J(x,\alpha,\beta) := \int_0^\infty e^{-\nu t}f(X(t,x,\alpha,\beta),\alpha(t),\beta(t))\,dt,$$
where $X(\cdot,x,\alpha,\beta)$ is the (unique) solution of
$$X'(t) = g(X(t),\alpha(t),\beta(t)) \quad\text{for } t > 0, \qquad X(0) = x. \tag{4.17}$$
The expected solutions of (4.16) and (4.16'), respectively, are given by
$$u(x) = \sup_{\gamma\in\Gamma}\inf_{\alpha\in\mathcal A}\int_0^\infty e^{-\nu t}f(X(t,x,\alpha,\gamma[\alpha]),\alpha(t),\gamma[\alpha](t))\,dt$$
and
$$v(x) = \inf_{\delta\in\Delta}\sup_{\beta\in\mathcal B}\int_0^\infty e^{-\nu t}f(X(t,x,\delta[\beta],\beta),\delta[\beta](t),\beta(t))\,dt.$$
We call $u$ and $v$ the upper and lower value functions of this differential game, respectively. Noting that
$$\sup_{a\in A}\inf_{b\in B}\{\nu r - \langle g(x,a,b),p\rangle - f(x,a,b)\} \le \inf_{b\in B}\sup_{a\in A}\{\nu r - \langle g(x,a,b),p\rangle - f(x,a,b)\}$$
for $(x,r,p)\in\mathbf R^n\times\mathbf R\times\mathbf R^n$, we see that $u$ (resp., $v$) is a viscosity supersolution (resp., subsolution) of (4.16') (resp., (4.16)) whenever it is one of (4.16) (resp., (4.16')). Thus, if we first observe that $u$ and $v$ are, respectively, viscosity solutions of (4.16) and (4.16') under appropriate hypotheses (which cannot be proved easily), then the standard comparison principle implies $v\le u$ in $\mathbf R^n$ (under a suitable growth condition as $|x|\to\infty$ for $u$ and $v$), as we expect.

To show that $u$ is a viscosity solution of the Isaacs equation (4.16), we first establish the dynamic programming principle as in the previous subsection.

Theorem 4.6. (Dynamic Programming Principle) Assume that (4.15) holds. Then, for $T > 0$, we have
$$u(x) = \sup_{\gamma\in\Gamma}\inf_{\alpha\in\mathcal A}\left\{\int_0^T e^{-\nu t}f(X(t,x,\alpha,\gamma[\alpha]),\alpha(t),\gamma[\alpha](t))\,dt + e^{-\nu T}u(X(T,x,\alpha,\gamma[\alpha]))\right\}.$$

Proof. We shall only deal with $u$ since the corresponding results for $v$ can be obtained in a symmetric way. We denote by $w(x)$ the right-hand side of the above.

Step 1: $u(x) \le w(x)$. For any $\varepsilon > 0$, we choose $\gamma^\varepsilon\in\Gamma$ such that
$$u(x) - \varepsilon \le \inf_{\alpha\in\mathcal A}\int_0^\infty e^{-\nu t}f(X(t,x,\alpha,\gamma^\varepsilon[\alpha]),\alpha(t),\gamma^\varepsilon[\alpha](t))\,dt =: I_\varepsilon.$$
For a fixed $T > 0$ and any fixed $\alpha_0\in\mathcal A$, we define the mapping $T_0 : \mathcal A\to\mathcal A$ by
$$T_0[\alpha](t) := \begin{cases}\alpha_0(t) & \text{for } t\in[0,T),\\ \alpha(t-T) & \text{for } t\in[T,\infty)\end{cases}$$
for $\alpha\in\mathcal A$. Thus, for any $\alpha\in\mathcal A$, we have
$$I_\varepsilon \le \int_0^T e^{-\nu t}f(X(t,x,\alpha_0,\gamma^\varepsilon[\alpha_0]),\alpha_0(t),\gamma^\varepsilon[\alpha_0](t))\,dt + \int_T^\infty e^{-\nu t}f(X(t,x,T_0[\alpha],\gamma^\varepsilon[T_0[\alpha]]),T_0[\alpha](t),\gamma^\varepsilon[T_0[\alpha]](t))\,dt =: I_\varepsilon^1 + I_\varepsilon^2.$$
We next define $\hat\gamma\in\Gamma$ by
$$\hat\gamma[\alpha](t) := \gamma^\varepsilon[T_0[\alpha]](t+T) \quad\text{for } t\ge 0 \text{ and }\alpha\in\mathcal A.$$
Note that $\hat\gamma$ belongs to $\Gamma$. Setting $\hat x := X(T,x,\alpha_0,\gamma^\varepsilon[\alpha_0])$, we have
$$I_\varepsilon^2 = e^{-\nu T}\int_0^\infty e^{-\nu t}f(X(t,\hat x,\alpha,\hat\gamma[\alpha]),\alpha(t),\hat\gamma[\alpha](t))\,dt.$$
Taking the infimum in the second term over $\alpha\in\mathcal A$, we have
$$u(x) - \varepsilon \le I_\varepsilon^1 + e^{-\nu T}\inf_{\alpha\in\mathcal A}\int_0^\infty e^{-\nu t}f(X(t,\hat x,\alpha,\hat\gamma[\alpha]),\alpha(t),\hat\gamma[\alpha](t))\,dt \le I_\varepsilon^1 + e^{-\nu T}u(\hat x),$$
which implies $u(x) - \varepsilon \le w(x)$ by taking the infimum over $\alpha_0\in\mathcal A$ and then the supremum over $\Gamma$; we get the one-sided inequality since $\varepsilon > 0$ is arbitrary.

Step 2: $u(x) \ge w(x)$. For $\varepsilon > 0$, we choose $\gamma^1_\varepsilon\in\Gamma$ such that
$$w(x) - \varepsilon \le \inf_{\alpha\in\mathcal A}\left\{\int_0^T e^{-\nu t}f(X(t,x,\alpha,\gamma^1_\varepsilon[\alpha]),\alpha(t),\gamma^1_\varepsilon[\alpha](t))\,dt + e^{-\nu T}u(X(T,x,\alpha,\gamma^1_\varepsilon[\alpha]))\right\}.$$
For any fixed $\alpha_0\in\mathcal A$, we thus have
$$w(x) - \varepsilon \le \int_0^T e^{-\nu t}f(X(t,x,\alpha_0,\gamma^1_\varepsilon[\alpha_0]),\alpha_0(t),\gamma^1_\varepsilon[\alpha_0](t))\,dt + e^{-\nu T}u(\hat x),$$
where $\hat x := X(T,x,\alpha_0,\gamma^1_\varepsilon[\alpha_0])$. Next, we choose $\gamma^2_\varepsilon\in\Gamma$ such that
$$u(\hat x) - \varepsilon \le \inf_{\alpha\in\mathcal A}\int_0^\infty e^{-\nu t}f(X(t,\hat x,\alpha,\gamma^2_\varepsilon[\alpha]),\alpha(t),\gamma^2_\varepsilon[\alpha](t))\,dt.$$
For $\alpha\in\mathcal A$, we define the mapping $T_1 : \mathcal A\to\mathcal A$ by $T_1[\alpha](t) := \alpha(t+T)$ for $t\ge 0$. Now, setting
$$\hat\gamma[\alpha](t) := \begin{cases}\gamma^1_\varepsilon[\alpha](t) & \text{for } t\in[0,T),\\ \gamma^2_\varepsilon[T_1[\alpha]](t-T) & \text{for } t\in[T,\infty),\end{cases}$$
we see that $\hat\gamma\in\Gamma$. Since
$$X(t,x,\alpha_0,\hat\gamma[\alpha_0]) = \begin{cases}X(t,x,\alpha_0,\gamma^1_\varepsilon[\alpha_0]) & \text{for } t\in[0,T),\\ X(t-T,\hat x,T_1[\alpha_0],\gamma^2_\varepsilon[T_1[\alpha_0]]) & \text{for } t\in[T,\infty),\end{cases}$$
we have
$$w(x) - 2\varepsilon \le \int_0^\infty e^{-\nu t}f(X(t,x,\alpha_0,\hat\gamma[\alpha_0]),\alpha_0(t),\hat\gamma[\alpha_0](t))\,dt.$$
Since $\alpha_0\in\mathcal A$ is arbitrary, we have
$$w(x) - 2\varepsilon \le \inf_{\alpha\in\mathcal A}\int_0^\infty e^{-\nu t}f(X(t,x,\alpha,\hat\gamma[\alpha]),\alpha(t),\hat\gamma[\alpha](t))\,dt,$$
a. respectively. one can skip the following theorem.which yields the assertion by taking the supremum over Γ and then. (1) Then. Remark. b) − f (x. we need a bit careful analysis similar to the proof for Bellman equations. We omit the correct proof here. there are x ∈ Rn . b. b′ ) ≤ ωB (b − b′ ) for x ∈ Rn .7. a. there is an ωB ∈ M such that f (x. Since we only give a sketch of proofs. (4.15). Suppose that the subsolution property fails. Dφ(x) − f (x. we refer to [1]. (2) Assume also the following properties: (i) (ii) A ⊂ Rm is compact for some integer m ≥ 1. b) − f (x. a. b′ ) + g(x. α. Step 1: Subsolution property. we only need (4. originally by EvansSouganidis (1984). γ[α]) are uniformly continuous for any (α. (4.16). a.18′ ) while to verify that v is a viscosity supersolution of (4. u is a viscosity supersolution of (4. b)} ≥ 3θ. x. a. a′ . θ > 0 and φ ∈ C 1 (Rn ) such that 0 = (u − φ)(x) ≥ (u − φ)(y) (for all y ∈ Rn ) and sup inf {νu(x) − g(x.16). 2 Now. 56 . γ) ∈ A × Γ in view of (4. u is a viscosity subsolution of (4. b). a. we shall verify that the value function u is a viscosity solution of (4. For a correct proof.16′ ). a′ . there is an ωA ∈ M such that f (x. To show that v is a viscosity subsolution of (4. b) + g(x. by sending ε → 0. Theorem 4. To give a correct proof without the semicontinuity assumption. instead of (4.15).18). b) − g(x. We shall only prove the assertion assuming that u ∈ U SC(Rn ) and u ∈ LSC(Rn ) in Step 1 and 2. b) ≤ ωA (a − a′ ) for x ∈ Rn .16). a.16′ ). a′ ∈ A and b ∈ B. Assume that (4. b) − g(x. we need to suppose the following hypotheses: (i) (ii) B ⊂ Rm is compact for some integer m ≥ 1. a. a. Sketch of proof.18) Then.15) holds. a∈A b∈B We note that X(·. b′ ∈ B and a ∈ A.
Thus, we can choose $a_0\in A$ such that
$$\inf_{b\in B}\{\nu\phi(\hat x) - \langle g(\hat x,a_0,b),D\phi(\hat x)\rangle - f(\hat x,a_0,b)\} \ge 2\theta.$$
Setting $\alpha^0(t) := a_0$ for $t\ge 0$ and, for any $\gamma\in\Gamma$, simply writing $X(\cdot)$ for $X(\cdot,\hat x,\alpha^0,\gamma[\alpha^0])$, we find small $t_0 > 0$ such that
$$\nu\phi(X(t)) - \langle g(X(t),a_0,\gamma[\alpha^0](t)),D\phi(X(t))\rangle - f(X(t),a_0,\gamma[\alpha^0](t)) \ge \theta \quad\text{for } t\in[0,t_0].$$
Multiplying the above by $e^{-\nu t}$ and then integrating it over $[0,t_0]$, we have
$$\frac\theta\nu(1-e^{-\nu t_0}) \le -\int_0^{t_0}\frac{d}{dt}\left(e^{-\nu t}\phi(X(t))\right)dt - \int_0^{t_0}e^{-\nu t}f(X(t),a_0,\gamma[\alpha^0](t))\,dt$$
$$= \phi(\hat x) - e^{-\nu t_0}\phi(X(t_0)) - \int_0^{t_0}e^{-\nu t}f(X(t),a_0,\gamma[\alpha^0](t))\,dt.$$
Hence, since $\phi(\hat x) = u(\hat x)$ and $\phi\ge u$, taking the infimum over $\mathcal A$ and then, since $\gamma\in\Gamma$ is arbitrary, the supremum over $\Gamma$, we have
$$u(\hat x) - \frac\theta\nu(1-e^{-\nu t_0}) \ge \sup_{\gamma\in\Gamma}\inf_{\alpha\in\mathcal A}\left\{\int_0^{t_0}e^{-\nu t}f(X(t,\hat x,\alpha,\gamma[\alpha]),\alpha(t),\gamma[\alpha](t))\,dt + e^{-\nu t_0}u(X(t_0,\hat x,\alpha,\gamma[\alpha]))\right\},$$
which contradicts Theorem 4.6.

Step 2: Supersolution property. Suppose that the supersolution property fails; then there are $\hat x\in\mathbf R^n$, $\theta > 0$ and $\phi\in C^1(\mathbf R^n)$ such that $0 = (u-\phi)(\hat x) \le (u-\phi)(y)$ for $y\in\mathbf R^n$, and
$$\sup_{a\in A}\inf_{b\in B}\{\nu u(\hat x) - \langle g(\hat x,a,b),D\phi(\hat x)\rangle - f(\hat x,a,b)\} \le -2\theta.$$
Thus, for any $a\in A$, there is $b(a)\in B$ such that
$$\nu u(\hat x) - \langle g(\hat x,a,b(a)),D\phi(\hat x)\rangle - f(\hat x,a,b(a)) \le -2\theta.$$
In view of (4.18), for each $a\in A$, there is $\varepsilon(a) > 0$ such that if $|a-a'| < \varepsilon(a)$ and $|x-y| < \varepsilon(a)$, then we have
$$\nu\phi(y) - \langle g(y,a',b(a)),D\phi(y)\rangle - f(y,a',b(a)) \le -\theta.$$
From the compactness of $A$, we may select $\{a_k\}_{k=1}^M$ such that, setting $A_k := \{a\in A\mid |a-a_k| < \varepsilon(a_k)\}$, we have $A = \bigcup_{k=1}^M A_k$. We then set $\hat A_1 := A_1$ and, inductively, $\hat A_k := A_k\setminus\bigcup_{j=1}^{k-1}\hat A_j$, so that $\hat A_k\cap\hat A_j = \emptyset$ for $k\ne j$; we may also suppose that $\hat A_k\ne\emptyset$ for $k = 1,\ldots,M$. Now, we define $\gamma_0\in\Gamma$ by
$$\gamma_0[\alpha](t) := b(a_k) \quad\text{provided }\alpha(t)\in\hat A_k.$$
For $\alpha\in\mathcal A$, setting $X(t) := X(t,\hat x,\alpha,\gamma_0[\alpha])$, we find $t_0 > 0$ such that
$$\nu\phi(X(t)) - \langle g(X(t),\alpha(t),\gamma_0[\alpha](t)),D\phi(X(t))\rangle - f(X(t),\alpha(t),\gamma_0[\alpha](t)) \le -\theta \quad\text{for } t\in[0,t_0].$$
Multiplying $e^{-\nu t}$ in the above and then integrating it, we obtain
$$\phi(\hat x) - e^{-\nu t_0}\phi(X(t_0)) - \int_0^{t_0}e^{-\nu t}f(X(t),\alpha(t),\gamma_0[\alpha](t))\,dt \le -\frac\theta\nu(1-e^{-\nu t_0}).$$
Since $\alpha\in\mathcal A$ is arbitrary, we have
$$u(\hat x) + \frac\theta\nu(1-e^{-\nu t_0}) \le \inf_{\alpha\in\mathcal A}\left\{\int_0^{t_0}e^{-\nu t}f(X(t,\hat x,\alpha,\gamma_0[\alpha]),\alpha(t),\gamma_0[\alpha](t))\,dt + e^{-\nu t_0}u(X(t_0,\hat x,\alpha,\gamma_0[\alpha]))\right\},$$
which contradicts Theorem 4.6 by taking the supremum over $\Gamma$. $\Box$

4.3 Stability

In this subsection, we present a stability result for viscosity solutions, which is one of the most important properties for "solutions", as noted in section 1. In fact, this result justifies our notion of viscosity solutions. However, since we will only use Proposition 4.8 below in section 7, the reader may skip the proof.
First of all, for possibly discontinuous $F : \Omega\times\mathbf R\times\mathbf R^n\times S^n\to\mathbf R$, we are concerned with
$$F(x,u,Du,D^2u) = 0 \quad\text{in }\Omega. \tag{4.19}$$
We introduce the following notation: for $(x,r,p,X)\in\Omega\times\mathbf R\times\mathbf R^n\times S^n$, we set
$$F_*(x,r,p,X) := \lim_{\varepsilon\to 0}\inf\left\{F(y,s,q,Y)\;\middle|\; y\in\Omega\cap B_\varepsilon(x),\ |s-r|<\varepsilon,\ |q-p|<\varepsilon,\ \|Y-X\|<\varepsilon\right\}$$
and
$$F^*(x,r,p,X) := \lim_{\varepsilon\to 0}\sup\left\{F(y,s,q,Y)\;\middle|\; y\in\Omega\cap B_\varepsilon(x),\ |s-r|<\varepsilon,\ |q-p|<\varepsilon,\ \|Y-X\|<\varepsilon\right\}.$$

Definition. We call $u : \Omega\to\mathbf R$ a viscosity subsolution (resp., supersolution) of (4.19) if $u^*$ (resp., $u_*$) is a viscosity subsolution (resp., supersolution) of
$$F_*(x,u,Du,D^2u) \le 0 \quad\left(\text{resp., } F^*(x,u,Du,D^2u) \ge 0\right) \quad\text{in }\Omega.$$
We call $u : \Omega\to\mathbf R$ a viscosity solution of (4.19) if it is both a viscosity sub- and supersolution of (4.19).

Now, for given continuous functions $F_k : \Omega\times\mathbf R\times\mathbf R^n\times S^n\to\mathbf R$ ($k\ge 1$), we set
$$\liminf_{k\to\infty}{}_*F_k(x,r,p,X) := \lim_{k\to\infty}\inf\left\{F_j(y,s,q,Y)\;\middle|\; j\ge k,\ y\in\Omega,\ |y-x|<\frac1k,\ |s-r|<\frac1k,\ |q-p|<\frac1k,\ \|Y-X\|<\frac1k\right\}$$
and
$$\limsup_{k\to\infty}{}^*F_k(x,r,p,X) := \lim_{k\to\infty}\sup\left\{F_j(y,s,q,Y)\;\middle|\; j\ge k,\ y\in\Omega,\ |y-x|<\frac1k,\ |s-r|<\frac1k,\ |q-p|<\frac1k,\ \|Y-X\|<\frac1k\right\}.$$
Our stability result is as follows.
Proposition 4.8. Let F_k : Ω × R × R^n × S^n → R be continuous functions, and set

F(x, r, p, X) := lim inf_* F_k(x, r, p, X) (resp., F̄(x, r, p, X) := lim sup^* F_k(x, r, p, X)).

Let u_k : Ω → R be a viscosity subsolution (resp., supersolution) of F_k(x, u_k, Du_k, D²u_k) = 0 in Ω, and set

ū(x) := lim_{k→∞} sup{ (u_j)^*(y) : y ∈ B_{1/k}(x) ∩ Ω, j ≥ k }
(resp., u(x) := lim_{k→∞} inf{ (u_j)_*(y) : y ∈ B_{1/k}(x) ∩ Ω, j ≥ k })

for x ∈ Ω. Then ū (resp., u) is a viscosity subsolution (resp., supersolution) of

F(x, u, Du, D²u) ≤ 0 (resp., F̄(x, u, Du, D²u) ≥ 0) in Ω.

Remark. We note that ū ∈ USC(Ω), u ∈ LSC(Ω), F ∈ LSC(Ω × R × R^n × S^n) and F̄ ∈ USC(Ω × R × R^n × S^n).

Proof. We only give a proof for subsolutions since the other can be shown similarly.

Given φ ∈ C²(Ω), we let x_0 ∈ Ω be such that 0 = (ū − φ)(x_0) > (ū − φ)(x) for x ∈ Ω \ {x_0}. We shall show that

F(x_0, ū(x_0), Dφ(x_0), D²φ(x_0)) ≤ 0.

We may choose x_k ∈ B_r(x_0) (for a subsequence if necessary), where r ∈ (0, dist(x_0, ∂Ω)), such that

lim_{k→∞} x_k = x_0 and lim_{k→∞} (u_k)^*(x_k) = ū(x_0).   (4.20)

We select y_k ∈ B̄_r(x_0) such that ((u_k)^* − φ)(y_k) = sup_{B̄_r(x_0)} ((u_k)^* − φ). We may also suppose that lim_{k→∞} y_k = z for some z ∈ B̄_r(x_0) (taking a subsequence if necessary). Since ((u_k)^* − φ)(y_k) ≥ ((u_k)^* − φ)(x_k), (4.20) implies

0 = lim inf_{k→∞} ((u_k)^* − φ)(x_k) ≤ lim inf_{k→∞} ((u_k)^* − φ)(y_k)
  ≤ lim inf_{k→∞} (u_k)^*(y_k) − φ(z) ≤ lim sup_{k→∞} (u_k)^*(y_k) − φ(z) ≤ (ū − φ)(z).
Thus, this yields z = x_0 and lim_{k→∞} (u_k)^*(y_k) = ū(x_0). Hence, we see that y_k ∈ B_r(x_0) for large k ≥ 1. Since (u_k)^* − φ attains a maximum over B̄_r(x_0) at y_k ∈ B_r(x_0), by the definition of u_k (with Proposition 2.4 for Ω′ = B_r(x_0)), we have

F_k(y_k, (u_k)^*(y_k), Dφ(y_k), D²φ(y_k)) ≤ 0,

which concludes the proof by taking the lim inf_* as k → ∞. □
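The relaxed half-limits lim inf_* and lim sup^* used above can be explored numerically on a toy sequence. A minimal sketch, not from the text: the oscillating sequence F_k, the grid, and the truncation index are illustrative assumptions, and the true limit k → ∞ is replaced by one large k. For F_k(y) = y + (−1)^k, every neighborhood of x contains values of both even and odd indices j ≥ k, so the relaxed limits at x are x − 1 and x + 1.

```python
def liminf_star(F, x, K=120):
    # discrete approximation of (lim inf_* F_k)(x):
    # inf over j >= K and |y - x| < 1/K, for one large K
    ys = [x + (i / 10.0) / K for i in range(-9, 10)]   # points with |y - x| < 1/K
    return min(F(j, y) for j in range(K, K + 40) for y in ys)

def limsup_star(F, x, K=120):
    ys = [x + (i / 10.0) / K for i in range(-9, 10)]
    return max(F(j, y) for j in range(K, K + 40) for y in ys)

# illustrative sequence: F_k(y) = y + (-1)^k
F = lambda k, y: y + (-1) ** k

print(liminf_star(F, 0.5))   # close to 0.5 - 1 = -0.5
print(limsup_star(F, 0.5))   # close to 0.5 + 1 =  1.5
```

The approximation error here comes only from the finite window |y − x| < 1/K, so it shrinks like 1/K.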
5 Generalized boundary value problems

In the study of PDEs, in order to obtain the uniqueness of solutions, we have to suppose a certain initial or boundary condition. Following the standard PDE theory, we thus need to impose an appropriate condition on ∂Ω. However, since we are mainly interested in degenerate elliptic PDEs, we cannot expect "solutions" to satisfy the given boundary condition on the whole of ∂Ω.

The simplest example is as follows: for Ω := (0, 1), consider the "degenerate" elliptic PDE

−du/dx + u = 0 in (0, 1).

Note that it is impossible to find a solution u of the above such that u(0) = u(1) = 1.

Our plan is to propose a definition of "generalized" solutions for boundary value problems; we shall treat a few typical boundary conditions in this section.

For this purpose, we extend the notion of viscosity solutions to possibly discontinuous PDEs on Ω̄, while we normally consider those in Ω. For general G : Ω̄ × R × R^n × S^n → R, we are concerned with

G(x, u, Du, D²u) = 0 in Ω̄.   (5.1)

As in section 4.3, we define

G^*(x, r, p, X) := lim_{ε→0} sup{ G(y, s, q, Y) : y ∈ Ω̄ ∩ B_ε(x), |s − r| < ε, |q − p| < ε, ‖Y − X‖ < ε },

G_*(x, r, p, X) := lim_{ε→0} inf{ G(y, s, q, Y) : y ∈ Ω̄ ∩ B_ε(x), |s − r| < ε, |q − p| < ε, ‖Y − X‖ < ε }.

Definition. We call u : Ω̄ → R a viscosity subsolution (resp., supersolution) of (5.1) if, for any φ ∈ C²(Ω̄),

G_*(x, u^*(x), Dφ(x), D²φ(x)) ≤ 0 (resp., G^*(x, u_*(x), Dφ(x), D²φ(x)) ≥ 0)

provided that u^* − φ (resp., u_* − φ) attains its maximum (resp., minimum) at x ∈ Ω̄.
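The impossibility in the ODE example above can be checked directly: every classical solution of −u′ + u = 0 is u(x) = Ce^x, so u(0) = 1 forces u(1) = e ≠ 1. A quick numerical sketch (forward Euler with an illustrative step size, not part of the text):

```python
# integrate u' = u (equivalently -du/dx + u = 0) on (0, 1) from u(0) = 1
n = 100000
h = 1.0 / n
u = 1.0
for _ in range(n):
    u += h * u        # one forward-Euler step of u' = u
print(u)              # approximately e = 2.718..., so u(1) = 1 cannot hold
```

Whatever the step size, the computed u(1) stays near e, confirming that both boundary values cannot be prescribed simultaneously.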
We call u : Ω̄ → R a viscosity solution of (5.1) if it is both a viscosity sub- and supersolution of (5.1).

Using the above new definition, we investigate general boundary value problems: given F : Ω × R × R^n × S^n → R and B : ∂Ω × R × R^n × S^n → R,

F(x, u, Du, D²u) = 0 in Ω,  B(x, u, Du, D²u) = 0 on ∂Ω.   (5.2)

Setting G by

G(x, r, p, X) := F(x, r, p, X) for x ∈ Ω, and G(x, r, p, X) := B(x, r, p, X) for x ∈ ∂Ω,

we give the definition of boundary value problems (5.2) in the viscosity sense.

Definition. We call u : Ω̄ → R a viscosity subsolution (resp., supersolution) of (5.2) if it is a viscosity subsolution (resp., supersolution) of (5.1), where G is defined in the above. We call u : Ω̄ → R a viscosity solution of (5.2) if it is both a viscosity sub- and supersolution of (5.2).

Remark. Note that the boundary condition is contained in the definition. When F and B are continuous, G^* and G_* can be expressed in the following manner:

G^*(x, r, p, X) = G_*(x, r, p, X) = F(x, r, p, X) for x ∈ Ω,
G^*(x, r, p, X) = max{ F(x, r, p, X), B(x, r, p, X) } for x ∈ ∂Ω,
G_*(x, r, p, X) = min{ F(x, r, p, X), B(x, r, p, X) } for x ∈ ∂Ω.

Our comparison principle in this setting is as follows:

"Comparison principle in this setting":
u a viscosity subsolution of (5.2) and v a viscosity supersolution of (5.2) =⇒ u ≤ v in Ω̄.
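The min/max expressions for G_* and G^* at boundary points can be seen on a toy one-dimensional example. A sketch under illustrative assumptions not taken from the text (the concrete operators F and B, the domain Ω = (0,1), and a grid-based approximation of the limit over nearby points):

```python
def G(x, r):
    # G = F inside Omega = (0, 1), G = B on the boundary {0, 1}
    F = lambda x, r: r - 2.0       # interior operator (illustrative)
    B = lambda x, r: r + 3.0       # boundary operator (illustrative)
    return B(x, r) if x in (0.0, 1.0) else F(x, r)   # tuple test: x == 0 or x == 1

def G_lower(x, r, eps=1e-6, m=200):
    # discrete version of G_*(x, r): inf of G over points of [0,1] near x
    pts = [min(max(x + eps * (i / m - 0.5), 0.0), 1.0) for i in range(m + 1)]
    return min(G(y, r) for y in pts)

def G_upper(x, r, eps=1e-6, m=200):
    pts = [min(max(x + eps * (i / m - 0.5), 0.0), 1.0) for i in range(m + 1)]
    return max(G(y, r) for y in pts)

r = 0.0
print(G_lower(0.5, r), G_upper(0.5, r))  # interior point: both equal F = -2
print(G_lower(0.0, r), G_upper(0.0, r))  # boundary: min{F,B} = -2, max{F,B} = 3
```

At the interior point both envelopes see only F, while at x = 0 the neighborhood contains both boundary and interior points, producing the min/max of F and B exactly as in the formulas above.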
It is not hard to extend the existence and stability results, corresponding to Theorem 4.3 and Proposition 4.8, respectively, to viscosity solutions in the above sense. Thus, we shall concentrate our attention on the comparison principle, which implies the uniqueness (and continuity) of viscosity solutions.

However, it is not straightforward to show the comparison principle in this new setting. The main difficulty in proving the comparison principle is that we have to "avoid" the boundary conditions for both of the viscosity sub- and supersolutions.

To explain this, let us consider the case when G is given by (5.2). We shall observe that the standard argument in Theorem 3.7 does not work. Let u and v be, respectively, a viscosity sub- and supersolution of (5.1). For ε > 0, suppose that (x, y) → u(x) − v(y) − (2ε)^{−1}|x − y|² attains its maximum at (x_ε, y_ε) ∈ Ω̄ × Ω̄. Notice that there is NO reason to verify that (x_ε, y_ε) ∈ Ω × Ω. The worst case is that (x_ε, y_ε) ∈ ∂Ω × ∂Ω. In fact, in view of Lemma 3.8, we find X, Y ∈ S^n such that ((x_ε − y_ε)/ε, X) ∈ J̄^{2,+}_Ω̄ u(x_ε), ((x_ε − y_ε)/ε, Y) ∈ J̄^{2,−}_Ω̄ v(y_ε), and the matrix inequalities in Lemma 3.6 hold for X, Y. Hence, we have

min{ F(x_ε, u(x_ε), (x_ε − y_ε)/ε, X), B(x_ε, u(x_ε), (x_ε − y_ε)/ε, X) } ≤ 0,

and

max{ F(y_ε, v(y_ε), (x_ε − y_ε)/ε, Y), B(y_ε, v(y_ε), (x_ε − y_ε)/ε, Y) } ≥ 0.

However, even if we suppose that (3.21) holds for F and B "in Ω̄", we cannot get any contradiction when

B(x_ε, u(x_ε), (x_ε − y_ε)/ε, X) ≤ 0 ≤ F(y_ε, v(y_ε), (x_ε − y_ε)/ε, Y),

or

F(x_ε, u(x_ε), (x_ε − y_ε)/ε, X) ≤ 0 ≤ B(y_ε, v(y_ε), (x_ε − y_ε)/ε, Y).

It seems impossible to avoid this difficulty as long as we use |x − y|²/(2ε) as "test functions".
Our plan to go beyond this difficulty is to find new test functions φ_ε(x, y) (instead of |x − y|²/(2ε)) so that the function (x, y) → u(x) − v(y) − φ_ε(x, y) attains its maximum over Ω̄ × Ω̄ at an interior point (x_ε, y_ε) ∈ Ω × Ω.

To this end, we suppose two hypotheses on F. First, since we will use several "perturbation" techniques, we shall suppose the following continuity of F with respect to the (p, X)-variables: there is an ω_0 ∈ M such that

|F(x, p, X) − F(x, q, Y)| ≤ ω_0(|p − q| + ‖X − Y‖)   (5.3)

for x ∈ Ω, p, q ∈ R^n, X, Y ∈ S^n.

The next assumption is a bit stronger than the structure condition (3.21): there is ω̂_F ∈ M such that if X, Y ∈ S^n and µ > 1 satisfy

−3µ [ I  O ; O  I ] ≤ [ X  O ; O  −Y ] ≤ 3µ [ I  −I ; −I  I ],

then

F(y, p, Y) − F(x, p, X) ≤ ω̂_F( |x − y|(1 + |p| + µ|x − y|) )   (5.4)

for x, y ∈ Ω and p ∈ R^n.

5.1 Dirichlet problem

First, we consider Dirichlet boundary value problems (Dirichlet problems for short) in the above sense. Assuming that viscosity sub- and supersolutions are continuous on ∂Ω, we will obtain the comparison principle for them.

We now recall the classical Dirichlet problem:

νu + F(x, Du, D²u) = 0 in Ω,  u − g = 0 on ∂Ω.   (5.5)

Note that the Dirichlet problem (5.5) in the viscosity sense is as follows:

subsolution ⟺ νu + F(x, Du, D²u) ≤ 0 in Ω, and min{ νu + F(x, Du, D²u), u − g } ≤ 0 on ∂Ω;
supersolution ⟺ νu + F(x, Du, D²u) ≥ 0 in Ω, and max{ νu + F(x, Du, D²u), u − g } ≥ 0 on ∂Ω.
We shall suppose the following property on the shape of Ω, which may be called an "interior cone condition" (see Fig 5.1): for each z ∈ ∂Ω, there are r̂ ∈ (0, 1) and ŝ ∈ (0, 1) such that

x − rn(z) + rξ ∈ Ω for x ∈ Ω̄ ∩ B_{r̂}(z), r ∈ (0, r̂) and ξ ∈ B_{ŝ}(0).   (5.6)

Here and later, we denote by n(z) the unit outward normal vector at z ∈ ∂Ω.

[Fig 5.1: the interior cone at z ∈ ∂Ω, with vertex x, axis −rn(z) and opening radius ŝr, contained in Ω.]

Theorem 5.1. Assume that ν > 0 and that (5.3), (5.4) and (5.6) hold. For g ∈ C(∂Ω), let u and v : Ω̄ → R be, respectively, a viscosity sub- and supersolution of (5.5) such that

lim inf_{Ω∋x→z} u^*(x) ≥ u^*(z) and lim sup_{Ω∋x→z} v_*(x) ≤ v_*(z) for z ∈ ∂Ω.   (5.7)

Then, u^* ≤ v_* in Ω̄.

Remark. Notice that (5.7) implies the continuity of u^* and v_* on ∂Ω.

Proof. We simply write u and v for u^* and v_*, respectively. Suppose that max_{Ω̄}(u − v) =: θ > 0. We shall divide our considerations into three cases.

Case 1: max_{∂Ω}(u − v) = θ. We choose z ∈ ∂Ω such that (u − v)(z) = θ.

Case 1-1: u(z) > g(z). For ε, δ ∈ (0, 1), we let (x_ε, y_ε) ∈ Ω̄ × Ω̄ be the maximum point of Φ(x, y) := u(x) − v(y) − φ(x, y) over Ω̄ × Ω̄, where, setting

φ(x, y) := (2ε²)^{−1}|x − y − εδn(z)|² + δ|x − z|²,

δ > 0 will be fixed later.
Since z − εδn(z) ∈ Ω for small ε > 0 by (5.6), the inequality Φ(x_ε, y_ε) ≥ Φ(z, z − εδn(z)) implies that

(2ε²)^{−1}|x_ε − y_ε − εδn(z)|² ≤ u(x_ε) − v(y_ε) − u(z) + v(z − εδn(z)) − δ|x_ε − z|².   (5.8)

Since |x_ε − y_ε| ≤ Mε, where M := 2(max_{Ω̄} u − min_{Ω̄} v − u(z) + v(z) + 1)^{1/2} + δ, we may suppose that (x_ε, y_ε) → (x̂, x̂) and (x_ε − y_ε)/ε → ẑ for some x̂ ∈ Ω̄ and ẑ ∈ R^n as ε → 0 along a subsequence (denoted by ε again). Thus, from the continuity (5.7) of v at z ∈ ∂Ω, (5.8) implies that

θ ≤ u(x̂) − v(x̂) − δ|x̂ − z|²,   (5.9)

which yields x̂ = z. Furthermore, by (5.8), we have

lim_{ε→0} (2ε²)^{−1}|x_ε − y_ε − εδn(z)|² = 0,   (5.10)

which implies that

lim_{ε→0} |x_ε − y_ε|/ε = δ.   (5.11)

Moreover, for small ε > 0, we note that y_ε = x_ε − εδn(z) + o(ε) ∈ Ω because of (5.6).

Putting p_ε := ε^{−2}(x_ε − y_ε) − δε^{−1}n(z), and applying Lemma 3.6 with Proposition 2.7 to u(x) + ε^{−1}δ⟨n(z), x⟩ − δ|x − z|² − 2^{−1}δ² and v(y) + ε^{−1}δ⟨n(z), y⟩, we find X, Y ∈ S^n such that

(p_ε + 2δ(x_ε − z), X + 2δI) ∈ J̄^{2,+}_Ω̄ u(x_ε),  (p_ε, Y) ∈ J̄^{2,−}_Ω̄ v(y_ε),

and

−3ε^{−2} [ I  O ; O  I ] ≤ [ X  O ; O  −Y ] ≤ 3ε^{−2} [ I  −I ; −I  I ].   (5.12)

Since y_ε ∈ Ω, and u(x_ε) > g(x_ε) for small ε > 0 provided x_ε ∈ ∂Ω, we have

ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε + 2δ(x_ε − z), X + 2δI).

Moreover, in view of (5.3), we have

F(x_ε, p_ε, X + 2δI) − F(x_ε, p_ε + 2δ(x_ε − z), X + 2δI) ≤ ω_0(2δ|x_ε − z| + 2δ).
Combining these with (5.4) and (5.12), we have

ν(u(x_ε) − v(y_ε)) ≤ ω_0(2δ|x_ε − z| + 2δ) + ω̂_F( |x_ε − y_ε|(1 + |p_ε| + |x_ε − y_ε|/ε²) ).

Sending ε → 0 together with (5.9)–(5.11) in the above, we have

νθ ≤ ω_0(2δ) + ω̂_F(2δ²),

which is a contradiction for small δ > 0.

Case 1-2: v(z) < g(z). In this case, we argue as above, replacing φ(x, y) by

ψ(x, y) := (2ε²)^{−1}|x − y + εδn(z)|² + δ|x − z|²,

so that x_ε = y_ε − εδn(z) + o(ε) ∈ Ω for small ε > 0. (See also the proof of Theorem 5.3 below.)

Case 1-3: u(z) ≤ g(z) and v(z) ≥ g(z). This does not occur because 0 < θ = (u − v)(z) ≤ 0.

Case 2: max_{∂Ω}(u − v) < θ. In this case, using the standard test function |x − y|²/(2ε) (without the δ|x − z|² term), we can follow the same argument as in the proof of Theorem 3.7. □

Remark. Note that we need the continuity of u on ∂Ω in (5.7) in Case 1-1, while the other condition in (5.7) is needed in Case 1-2.

Unfortunately, without assuming the continuity of viscosity solutions on ∂Ω, the comparison principle fails in general. In fact, setting F(x, r, p, X) := r and g ≡ −1, consider the function

u(x) := 0 for x ∈ Ω, and u(x) := −1 for x ∈ ∂Ω.

Note that u^* ≡ 0 and u_* ≡ u in Ω̄. Then u^* and u_* are, respectively, a viscosity sub- and supersolution of the associated G(x, u, Du, D²u) = 0 in Ω̄, while u^* = 0 > −1 = u_* on ∂Ω. Thus, this example shows that the comparison principle fails in general without assumption (5.7), which explains why we will adopt the "state constraint boundary condition" in Theorem 5.3 below.

5.2 State constraint problem

The state constraint boundary condition arises in a typical optimal control problem. Thus, if the reader is more interested in the PDE theory, he/she may skip Proposition 5.2 below.
To explain our motivation, we shall consider Bellman equations of first-order; here, we use the notations in section 4. We introduce the following set of controls: for x ∈ Ω̄,

A(x) := { α(·) ∈ A : X(t, x, α) ∈ Ω̄ for t ≥ 0 }.

We shall suppose that A(x) ≠ ∅ for all x ∈ Ω̄. Also, we suppose that

(1) sup_{a∈A} ( ‖f(·, a)‖_{L^∞(Ω)} + ‖g(·, a)‖_{W^{1,∞}(Ω)} ) < ∞,   (5.13)
(2) sup_{a∈A} |f(x, a) − f(y, a)| ≤ ω_f(|x − y|) for x, y ∈ Ω̄,   (5.14)

where ω_f ∈ M.

We are now interested in the following optimal cost functional: at x ∈ Ω̄,

u(x) := inf_{α∈A(x)} ∫_0^∞ e^{−νt} f(X(t, x, α), α(t)) dt.

Proposition 5.2. Assume that ν > 0 and that (5.13) and (5.14) hold. Then we have:

(1) u is a viscosity subsolution of sup_{a∈A}{ νu − ⟨g(x, a), Du⟩ − f(x, a) } ≤ 0 in Ω;
(2) u is a viscosity supersolution of sup_{a∈A}{ νu − ⟨g(x, a), Du⟩ − f(x, a) } ≥ 0 in Ω̄.

Remark. We often say that u satisfies the state constraint boundary condition when it is a viscosity supersolution of "F(x, u, Du, D²u) ≥ 0 on ∂Ω"; in the present case, u is a viscosity solution of

sup_{a∈A}{ νu − ⟨g(x, a), Du⟩ − f(x, a) } = 0 in Ω

satisfying the state constraint boundary condition on ∂Ω.

Proof. In fact, it is easy to verify that the dynamic programming principle (Theorem 4.4 with Ω̄ in place of R^n) holds for small T > 0. Thus, we may show Theorem 4.5 replacing R^n by Ω̄.
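The optimal cost u above can be approximated by value iteration on a discretized dynamic programming principle. A rough sketch under simplifying assumptions not taken from the text: state space Ω̄ = [0, 1], dynamics g(x, a) = a with controls a ∈ {−1, 0, 1}, running cost f(x, a) = x², discount ν = 1, time step h = 1/N; grid moves that would leave [0, 1] are discarded, a discrete analogue of restricting to A(x).

```python
nu = 1.0                       # discount rate
N = 100                        # grid: x_i = i/N, time step h = 1/N
h = 1.0 / N
xs = [i / N for i in range(N + 1)]
u = [0.0] * (N + 1)

for _ in range(2000):          # value iteration; contraction factor 1 - nu*h
    u = [
        min(
            h * x * x + (1 - nu * h) * u[i + a]   # one discrete DPP step
            for a in (-1, 0, 1)
            if 0 <= i + a <= N                     # state constraint: X stays in [0,1]
        )
        for i, x in enumerate(xs)
    ]

print(u[0], u[N])   # zero cost at x = 0 (stay put); the largest cost at x = 1
```

The computed values are nonnegative and increase with x, and u(1) approaches ∫_0^1 e^{−t}(1 − t)² dt = 1 − 2/e ≈ 0.264, the cost of steering from 1 to 0 at unit speed and then resting there.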
Hence, it only remains to show (2) on ∂Ω. To get a contradiction, suppose that there are x̂ ∈ ∂Ω, θ > 0 and φ ∈ C¹(Ω̄) such that (u_* − φ)(x̂) = 0 ≤ (u_* − φ)(x) for x ∈ Ω̄, and

sup_{a∈A}{ νφ(x̂) − ⟨g(x̂, a), Dφ(x̂)⟩ − f(x̂, a) } ≤ −2θ.

Choose x_k ∈ Ω̄ ∩ B_{1/k}(x̂) such that u_*(x̂) + k^{−1} ≥ u(x_k) and φ(x̂) − φ(x_k) < 1/k. In view of (5.13) and (5.14), there is t_0 > 0 such that, for any α ∈ A(x_k) and large k ≥ 1, we have

νφ(X_k(t)) − ⟨g(X_k(t), α(t)), Dφ(X_k(t))⟩ − f(X_k(t), α(t)) ≤ −θ for t ∈ (0, t_0),

where X_k(t) := X(t, x_k, α). Thus, multiplying the above by e^{−νt} and then integrating it over (0, t_0), we have

φ(x_k) ≤ e^{−νt_0} φ(X_k(t_0)) + ∫_0^{t_0} e^{−νt} f(X_k(t), α(t)) dt − (θ/ν)(1 − e^{−νt_0}).

Since φ ≤ u_* ≤ u in Ω̄ and φ(x_k) ≥ u(x_k) − 2/k, we have

u(x_k) ≤ 2/k + e^{−νt_0} u(X_k(t_0)) + ∫_0^{t_0} e^{−νt} f(X_k(t), α(t)) dt − (θ/ν)(1 − e^{−νt_0}).

Thus, taking the infimum over A(x_k), we apply Theorem 4.4 to get

0 ≤ 2/k − (θ/ν)(1 − e^{−νt_0}),

which is a contradiction for large k. □

Motivated by this proposition, we shall consider more general second-order elliptic PDEs under the state constraint boundary condition.

Theorem 5.3. Assume that ν > 0 and that (5.3), (5.4) and (5.6) hold. Let u and v : Ω̄ → R be, respectively, a viscosity sub- and supersolution of

νu + F(x, Du, D²u) ≤ 0 in Ω and νv + F(x, Dv, D²v) ≥ 0 in Ω̄.

Assume also that

lim inf_{Ω∋x→z} u^*(x) ≥ u^*(z) for z ∈ ∂Ω.   (5.15)
Then, u^* ≤ v_* in Ω̄.

Remark. We note that we do not need continuity of v on ∂Ω, while we needed it for Dirichlet problems. We also note that the proof below is easier than that for Dirichlet problems in section 5.1 because we only need to avoid the boundary condition for viscosity subsolutions.

In 1986, Soner first treated the state constraint problems for deterministic optimal control (i.e. first-order PDEs) by the viscosity solution approach. For further discussion on the state constraint problems, we refer to Ishii-Koike (1996), although it is not precisely written there.

Proof. We shall write u and v for u^* and v_*, respectively. Suppose that max_{Ω̄}(u − v) =: θ > 0. We may suppose that max_{∂Ω}(u − v) = θ since otherwise, we can use the standard procedure to get a contradiction. Choose z ∈ ∂Ω such that (u − v)(z) = θ.

For ε, δ > 0, we let (x_ε, y_ε) ∈ Ω̄ × Ω̄ be the maximum point of u(x) − v(y) − φ(x, y) over Ω̄ × Ω̄, where, proceeding as in Case 1-2 in the proof of Theorem 5.1, we set

φ(x, y) := (2ε²)^{−1}|x − y + εδn(z)|² + δ|x − z|²,

and n(z) is the unit outward normal vector at z ∈ ∂Ω. As before, we have

lim_{ε→0}(x_ε, y_ε) = (z, z) and lim_{ε→0}|x_ε − y_ε|/ε = δ.   (5.16)

Since x_ε = y_ε − εδn(z) + o(ε) ∈ Ω for small ε > 0, setting p_ε := ε^{−2}(x_ε − y_ε) + δε^{−1}n(z), in view of Lemma 3.6 with Proposition 2.7, we can find X, Y ∈ S^n such that

(p_ε + 2δ(x_ε − z), X + 2δI) ∈ J̄^{2,+}_Ω̄ u(x_ε),  (p_ε, Y) ∈ J̄^{2,−}_Ω̄ v(y_ε),

and

−3ε^{−2} [ I  O ; O  I ] ≤ [ X  O ; O  −Y ] ≤ 3ε^{−2} [ I  −I ; −I  I ].

Hence, as in the proof of Theorem 5.1, we have

ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε + 2δ(x_ε − z), X + 2δI)
 ≤ ω_0(2δ|x_ε − z| + 2δ) + ω̂_F( |x_ε − y_ε|(1 + |p_ε| + |x_ε − y_ε|/ε²) ).
Hence, sending ε → 0 with (5.16), we have

νθ ≤ ω_0(2δ) + ω̂_F(2δ²),

which is a contradiction for small δ > 0. □

5.3 Neumann problem

In the classical theory, and in the modern theory for weak solutions in the distribution sense, the (inhomogeneous) Neumann condition is given by

⟨n(x), Du(x)⟩ − g(x) = 0 on ∂Ω,

where n(x) denotes the unit outward normal vector at x ∈ ∂Ω.

In Dirichlet and state constraint problems, we have used test functions which force one of x_ε and y_ε to be in Ω. However, in the Neumann boundary value problem (Neumann problem for short), we have to avoid the boundary condition for viscosity sub- and supersolutions simultaneously. Thus, we need a new test function different from those in sections 5.1 and 5.2.

In order to obtain the comparison principle for the Neumann problem, we shall impose a hypothesis on Ω (see Fig 5.2):

(1) There is r̂ > 0 such that Ω ⊂ (B_{r̂}(z + r̂n(z)))^c for z ∈ ∂Ω.   (5.17)

This assumption (1) is called the "uniform exterior sphere condition".

We first define the signed distance function from ∂Ω by

ρ(x) := inf{ |x − y| : y ∈ ∂Ω } for x ∈ Ω^c, and ρ(x) := −inf{ |x − y| : y ∈ ∂Ω } for x ∈ Ω.   (5.18)

It is known that when ∂Ω is "smooth" enough, the following (2) holds true:

(2) There is a neighborhood N of ∂Ω such that ρ ∈ C²(N), and Dρ(x) = n(x) for x ∈ ∂Ω.

Remark. Since |x − z − r̂n(z)| ≥ r̂ for z ∈ ∂Ω and x ∈ Ω by (1), we have

⟨n(z), x − z⟩ ≤ |x − z|²/(2r̂) for z ∈ ∂Ω and x ∈ Ω.
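Property (2) of the signed distance function can be checked for a concrete domain. A sketch with Ω the unit disk in R², so that ρ(x) = |x| − 1 and n(z) = z on ∂Ω (the finite-difference step is an illustrative choice):

```python
import math

def rho(x, y):
    # signed distance to the unit circle: negative inside Omega, positive outside
    return math.hypot(x, y) - 1.0

def grad_rho(x, y, h=1e-6):
    # central finite differences for D(rho)
    return ((rho(x + h, y) - rho(x - h, y)) / (2 * h),
            (rho(x, y + h) - rho(x, y - h)) / (2 * h))

z = (math.cos(0.7), math.sin(0.7))   # a boundary point, where n(z) = z
g = grad_rho(*z)
print(g, z)   # D(rho)(z) agrees with the outward normal n(z)
```

The same computation at interior points near ∂Ω returns x/|x|, consistent with ρ being C² in a neighborhood of the boundary (away from the center, where the distance function is not differentiable).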
[Fig 5.2: the uniform exterior sphere B_{r̂}(z + r̂n(z)) touching ∂Ω at z from outside Ω.]

We shall consider the inhomogeneous Neumann problem:

νu + F(x, Du, D²u) = 0 in Ω,  ⟨n(x), Du⟩ − g(x) = 0 on ∂Ω.   (5.19)

Remember that we adopt the definition of viscosity solutions of (5.19) for the corresponding G in (5.2).

Theorem 5.4. Assume that ν > 0 and that (5.3), (5.4) and (5.17) hold. For g ∈ C(∂Ω), we let u and v : Ω̄ → R be a viscosity sub- and supersolution of (5.19), respectively. Then, u^* ≤ v_* in Ω̄.

Remark. We note that we do not need any continuity of u and v on ∂Ω.

Proof. As before, we write u and v for u^* and v_*, respectively, and suppose that max_{Ω̄}(u − v) =: θ > 0. As in the proof of Theorem 3.7, we may suppose that max_{∂Ω}(u − v) = θ. Let z ∈ ∂Ω be a point such that (u − v)(z) = θ. For small δ > 0, we see that the mapping x ∈ Ω̄ → u(x) − v(x) − δ|x − z|² takes its strict maximum at z.

For small ε, δ > 0, setting

φ(x, y) := (2ε)^{−1}|x − y|² + g(z)⟨n(z), x − y⟩ + δ(ρ(x) + ρ(y) + 2) + δ|x − z|²,

where δ > 0 will be fixed later, we let (x_ε, y_ε) be the maximum point of Φ(x, y) := u(x) − v(y) − φ(x, y) over (Ω̄ ∩ N̄) × (Ω̄ ∩ N̄), where N is the neighborhood in (5.18) (2). Extracting a subsequence if necessary, which is denoted by (x_ε, y_ε) again, we may suppose that (x_ε, y_ε) → (x̂, x̂); we may suppose x̂ ∈ Ω̄. Since Φ(x_ε, y_ε) ≥ Φ(z, z) and Φ(x̂, x̂) ≥ lim sup_{ε→0} Φ(x_ε, y_ε), we have

u(x̂) − v(x̂) − δ|x̂ − z|² ≥ θ,
which yields x̂ = z. Moreover, we have

lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (5.20)

Applying Lemma 3.6 to u(x) − δ(ρ(x) + 1) − g(z)⟨n(z), x⟩ − δ|x − z|² and −v(y) − δ(ρ(y) + 1) + g(z)⟨n(z), y⟩, we find X, Y ∈ S^n such that

( p_ε + δn(x_ε) + 2δ(x_ε − z), X + δD²ρ(x_ε) + 2δI ) ∈ J̄^{2,+}_Ω̄ u(x_ε),   (5.21)

( p_ε − δn(y_ε), Y − δD²ρ(y_ε) ) ∈ J̄^{2,−}_Ω̄ v(y_ε),   (5.22)

and

−(3/ε) [ I  O ; O  I ] ≤ [ X  O ; O  −Y ] ≤ (3/ε) [ I  −I ; −I  I ],

where p_ε := ε^{−1}(x_ε − y_ε) + g(z)n(z).

When x_ε ∈ ∂Ω, by (5.17) and (5.18), we calculate in the following manner:

⟨n(x_ε), D_xφ(x_ε, y_ε)⟩ − g(x_ε) = ⟨n(x_ε), p_ε + δn(x_ε) + 2δ(x_ε − z)⟩ − g(x_ε)
 ≥ −|x_ε − y_ε|²/(2r̂ε) + g(z)⟨n(x_ε), n(z)⟩ + δ − 2δ|x_ε − z| − g(x_ε)
 ≥ δ/2 for small ε > 0.

Thus, by (5.21) and the definition of viscosity subsolutions of (5.19), this yields

νu(x_ε) + F(x_ε, p_ε + δn(x_ε) + 2δ(x_ε − z), X + δD²ρ(x_ε) + 2δI) ≤ 0.   (5.23)

Of course, if x_ε ∈ Ω, then the above inequality holds from the definition. On the other hand, similarly, if y_ε ∈ ∂Ω, then we see that

⟨n(y_ε), −D_yφ(x_ε, y_ε)⟩ − g(y_ε) = ⟨n(y_ε), p_ε − δn(y_ε)⟩ − g(y_ε) ≤ −δ/2 for small ε > 0.

Hence, by (5.22), we have

νv(y_ε) + F(y_ε, p_ε − δn(y_ε), Y − δD²ρ(y_ε)) ≥ 0.   (5.24)
Using (5.23), (5.24) and (5.3), we have

ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε, X) + 2ω_0(δM)
 ≤ ω̂_F( |x_ε − y_ε|(1 + |p_ε| + |x_ε − y_ε|/ε) ) + 2ω_0(δM),

where M := 3 + sup_{x∈N∩Ω̄}( 2|x − z| + ‖D²ρ(x)‖ ). Sending ε → 0 with (5.20) in the above, we have

νθ ≤ 2ω_0(δM),

which is a contradiction for small δ > 0. □

5.4 Growth condition at |x| → ∞

In the standard PDE theory, we often consider PDEs in unbounded domains; typically, Ω := R^n. We remind the readers that, in the proofs of our comparison results, we always suppose max_{Ω̄}(u − v) > 0, where u and v are, respectively, a viscosity sub- and supersolution. However, considering Ω = R^n, the maximum of u − v might only be "attained at |x| → ∞". Thus, we have to choose a test function φ(x, y) which forces u(x) − v(y) − φ(x, y) to take its maximum at a point in a compact set.

In this subsection, we present a technique to establish the comparison principle for viscosity solutions of

νu + F(x, Du, D²u) = 0 in R^n.   (5.25)

For simplicity, we will suppose the linear growth condition for viscosity solutions.

We rewrite the structure condition (3.21) for R^n: there is an ω_F ∈ M such that if X, Y ∈ S^n and µ > 1 satisfy

−3µ [ I  O ; O  I ] ≤ [ X  O ; O  −Y ] ≤ 3µ [ I  −I ; −I  I ],

then

F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ ω_F( |x − y|(1 + µ|x − y|) ) for x, y ∈ R^n.   (5.26)

We will also need the Lipschitz continuity of (p, X) → F(x, p, X), which is stronger than (5.3): there is µ_0 > 0 such that

|F(x, p, X) − F(x, q, Y)| ≤ µ_0( |p − q| + ‖X − Y‖ )   (5.27)

for x ∈ R^n, p, q ∈ R^n, X, Y ∈ S^n.
Proposition 5.5. Assume that ν > 0 and that (5.26) and (5.27) hold. Let u and v : R^n → R be, respectively, a viscosity sub- and supersolution of (5.25). Assume also that there is C_0 > 0 such that

u^*(x) ≤ C_0(1 + |x|) and v_*(x) ≥ −C_0(1 + |x|) for x ∈ R^n.   (5.28)

Then, u^* ≤ v_* in R^n.

Proof. We shall simply write u and v for u^* and v_*, respectively.

For δ > 0, we set θ_δ := sup_{x∈R^n}( u(x) − v(x) − 2δ(1 + |x|²) ). We note that (5.28) implies that there is z_δ ∈ R^n such that θ_δ = u(z_δ) − v(z_δ) − 2δ(1 + |z_δ|²). Set θ := lim sup_{δ→0} θ_δ ∈ R ∪ {∞}. When θ ≤ 0, since we have

(u − v)(x) ≤ 2δ(1 + |x|²) + θ_δ for δ > 0 and x ∈ R^n,

we conclude that u ≤ v in R^n by sending δ → 0. Thus, we may suppose θ ∈ (0, ∞].

Setting

Φ_δ(x, y) := u(x) − v(y) − (2ε)^{−1}|x − y|² − δ(1 + |x|²) − δ(1 + |y|²)

for ε, δ > 0, where δ > 0 will be fixed later, by (5.28) we can choose (x_ε, y_ε) ∈ R^n × R^n such that

Φ_δ(x_ε, y_ε) = max_{(x,y)∈R^n×R^n} Φ_δ(x, y) ≥ θ_δ.

As before, extracting a subsequence if necessary, we may suppose that

lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (5.29)

By Lemma 3.6 with Proposition 2.7, putting p_ε := (x_ε − y_ε)/ε, we find X, Y ∈ S^n such that

(p_ε + 2δx_ε, X + 2δI) ∈ J̄^{2,+} u(x_ε),  (p_ε − 2δy_ε, Y − 2δI) ∈ J̄^{2,−} v(y_ε),

and

−(3/ε) [ I  O ; O  I ] ≤ [ X  O ; O  −Y ] ≤ (3/ε) [ I  −I ; −I  I ].

Hence, in view of (5.26) and (5.27), we have

ν(u(x_ε) − v(y_ε)) ≤ F(y_ε, p_ε − 2δy_ε, Y − 2δI) − F(x_ε, p_ε + 2δx_ε, X + 2δI)
 ≤ F(y_ε, p_ε, Y) − F(x_ε, p_ε, X) + 2δµ_0(2 + |x_ε| + |y_ε|)
 ≤ ω_F( |x_ε − y_ε|(1 + |x_ε − y_ε|/ε) ) + νδ(2 + |x_ε|² + |y_ε|²) + Cδ,
where C = C(µ_0, ν) > 0 is independent of ε, δ > 0. For the last inequality, we used "2ab ≤ τa² + τ^{−1}b² for τ > 0".

Therefore, since ν(u(x_ε) − v(y_ε)) − νδ(2 + |x_ε|² + |y_ε|²) ≥ νΦ_δ(x_ε, y_ε) ≥ νθ_δ, we have

νθ_δ ≤ ω_F( |x_ε − y_ε|(1 + |x_ε − y_ε|/ε) ) + Cδ.

Sending ε → 0 in the above together with (5.29), we get

νθ_δ ≤ Cδ,

which is a contradiction for small δ > 0, since θ = lim sup_{δ→0} θ_δ > 0. □
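The elementary inequality invoked in the last step follows from 0 ≤ (√τ a − b/√τ)². A quick numerical spot check (the sampled values are arbitrary):

```python
import random

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    tau = random.uniform(0.01, 10.0)
    # Young's inequality with parameter tau: 2ab <= tau*a^2 + b^2/tau
    assert 2 * a * b <= tau * a * a + b * b / tau + 1e-9
print("ok")
```

In the proof above it is applied with τ proportional to ν/µ_0, which is how the linear terms µ_0|x_ε| and µ_0|y_ε| are absorbed into νδ|x_ε|² + νδ|y_ε|² plus a constant multiple of δ.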
6 L^p-viscosity solutions

In this section, we discuss the L^p-viscosity solution theory for uniformly elliptic PDEs:

F(x, Du, D²u) = f(x) in Ω,   (6.1)

where F : Ω × R^n × S^n → R and f : Ω → R are given. For simplicity, we suppose that F does not depend on u itself. Since we will use the fact that u + C (for a constant C ∈ R) satisfies the same (6.1), we prefer to have the inhomogeneous term f (the right hand side of (6.1)).

The aim in this section is to obtain the a priori estimates for L^p-viscosity solutions without assuming any continuity of the mapping x → F(x, q, X), and then to establish an existence result for L^p-viscosity solutions of Dirichlet problems.

Remark. In general, without the continuity assumption on x → F(x, q, X), we cannot expect the uniqueness of L^p-viscosity solutions, even if X → F(x, q, X) is uniformly elliptic, because Nadirashvili (1997) gave a counterexample to the uniqueness.

6.1 A brief history

Let us simply consider the Poisson equation in a "smooth" domain Ω with the zero-Dirichlet boundary condition:

−△u = f in Ω,  u = 0 on ∂Ω.   (6.2)

In the literature of the regularity theory for uniformly elliptic PDEs of second-order, to compare with classical results, it is well-known that

"if f ∈ C^σ(Ω) for some σ ∈ (0, 1), then u ∈ C^{2,σ}(Ω)".   (6.3)

Here, C^σ(U) (for a set U ⊂ R^n) denotes the set of functions f : U → R such that

sup_{x∈U} |f(x)| + sup_{x,y∈U, x≠y} |f(x) − f(y)| / |x − y|^σ < ∞.
Also, for an integer k ≥ 1, C^{k,σ}(U) denotes the set of functions f : U → R such that, for any multi-index α = (α_1, ..., α_n) ∈ {0, 1, 2, ...}^n with |α| := Σ_{i=1}^n α_i ≤ k, we have D^α f ∈ C^σ(U), where

D^α f := ∂^{|α|} f / (∂x_1^{α_1} ⋯ ∂x_n^{α_n}).

These function spaces are called Hölder continuous spaces, and the implication in (6.3) is called the Schauder regularity (estimates). Furthermore, (6.3) may be extended to

"if f ∈ C^{k,σ}(Ω) for some σ ∈ (0, 1) and an integer k ∈ {0, 1, 2, ...}, then u ∈ C^{k+2,σ}(Ω)".   (6.4)

Since the PDE in (6.2) is linear, we can obtain (6.4) even for linear second-order uniformly elliptic PDEs if the coefficients are smooth enough; the regularity result (6.4) holds for the following PDE:

−trace(A(x)D²u(x)) = f(x) in Ω,   (6.5)

where the coefficient A(·) ∈ C^∞(Ω̄, S^n) satisfies

λ|ξ|² ≤ ⟨A(x)ξ, ξ⟩ ≤ Λ|ξ|² for ξ ∈ R^n and x ∈ Ω.

Besides the Schauder estimates, we know a different kind of regularity result for solutions of (6.2):

"if f ∈ W^{k,p}(Ω) for some p > 1, then u ∈ W^{k+2,p}(Ω)".   (6.6)

This (6.6) is called the L^p regularity (estimates).

Here, for p ≥ 1 and an open set O ⊂ R^n, we say f ∈ L^p(O) if |f|^p is integrable in O; for an integer k ≥ 1, f ∈ W^{k,p}(O) if, for any multi-index α with |α| ≤ k, D^α f ∈ L^p(O). Notice that L^p(Ω) = W^{0,p}(Ω).

For later convenience, we recall the standard norms of L^p(O) and W^{k,p}(O), respectively:

‖u‖_{L^p(O)} := ( ∫_O |u(x)|^p dx )^{1/p}, and ‖u‖_{W^{k,p}(O)} := Σ_{|α|≤k} ‖D^α u‖_{L^p(O)}.

Moreover, we will use the quantity ‖u‖_{L^p(Ω)} even for p ∈ (0, 1), although this is not a "norm" (i.e. the triangle inequality does not hold).
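The C^σ-norm above can be estimated on a grid. A sketch for f(x) = √x on [0, 1], which belongs to C^{1/2} but to no C^σ with σ > 1/2 (the grid size is an illustrative choice):

```python
def holder_norm(f, pts, sigma):
    # sup |f| + sup_{x != y} |f(x) - f(y)| / |x - y|^sigma, restricted to the grid
    sup_f = max(abs(f(x)) for x in pts)
    semi = max(abs(f(x) - f(y)) / abs(x - y) ** sigma
               for x in pts for y in pts if x != y)
    return sup_f + semi

pts = [i / 200 for i in range(201)]
f = lambda x: x ** 0.5
print(holder_norm(f, pts, 0.5))   # stays bounded (here exactly 2.0)
print(holder_norm(f, pts, 0.9))   # grows without bound as the grid is refined
```

For σ = 1/2 the quotient |√x − √y| / |x − y|^{1/2} = √(x−y)/(√x + √y) is at most 1, attained at y = 0, while for σ = 0.9 the pair (1/200, 0) already produces (1/200)^{−0.4} ≈ 8.3, and refining the grid makes it arbitrarily large.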
We refer to [13] for the details on the Schauder and L^p regularity theory for second-order uniformly elliptic PDEs.

However, a difficulty occurs when we drop the smoothness of the coefficients A_{ij}. An extreme case is that we only suppose that the A_{ij} are bounded (possibly discontinuous, but still satisfying the uniform ellipticity). In this case, what can we say about the regularity of "solutions" of (6.5)?

The corresponding extreme case for PDEs in divergence form is the following:

−Σ_{i,j=1}^n (∂/∂x_i)( A_{ij}(x) (∂u/∂x_j)(x) ) = f(x) in Ω.   (6.7)

Here, we set

C_0^∞(Ω) := { φ : Ω → R : φ(·) is infinitely many times differentiable, and supp φ is compact in Ω }.

De Giorgi (1957) first obtained Hölder continuity estimates on weak solutions of (6.7) in the distribution sense, i.e. u satisfying

∫_Ω ( ⟨A(x)Du(x), Dφ(x)⟩ − f(x)φ(x) ) dx = 0 for any φ ∈ C_0^∞(Ω).   (6.8)

We refer to [14] for the details of De Giorgi's proof and for a different proof by Moser (1960).

Concerning the corresponding PDE in nondivergence form,

−trace(A(x)D²u(x)) = f(x) in Ω,

Krylov-Safonov (1979) first showed, by a stochastic approach, the Hölder continuity estimates on "strong" solutions, which satisfy the PDE in the a.e. sense. Since these results appeared before the viscosity solution was born, they could only deal with strong solutions. Afterward, Trudinger (1980) (see [13]) gave a purely analytic proof of it. In 1989, Caffarelli proved the same Hölder estimate for viscosity solutions of fully nonlinear second-order uniformly elliptic PDEs.

As is known, to show the Hölder continuity of solutions, it is essential to prove the following "Harnack inequality" for nonnegative solutions. In fact, we split the proof of the Harnack inequality into two parts:
weak Harnack inequality for "super"-solutions + local maximum principle for "sub"-solutions =⇒ Harnack inequality for "solutions".

In section 6.4, we will show that L^p-viscosity solutions satisfy the (interior) Hölder continuity estimates.

6.2 Definition and basic facts

We first recall the definition of L^p-strong solutions of general PDEs:

F(x, u, Du, D²u) = f(x) in Ω.   (6.9)

Throughout this section, we suppose at least

p > n/2,

so that u ∈ W^{2,p}_{loc}(Ω) has a second-order Taylor expansion at almost all points in Ω. We will use the following function space:

W^{2,p}_{loc}(Ω) := { u : Ω → R : ζu ∈ W^{2,p}(Ω) for all ζ ∈ C_0^∞(Ω) }.

Definition. We call u ∈ C(Ω) an L^p-strong subsolution (resp., supersolution, solution) of (6.9) if u ∈ W^{2,p}_{loc}(Ω), and

F(x, u(x), Du(x), D²u(x)) ≤ f(x) (resp., ≥ f(x), = f(x)) a.e. in Ω.

Now, we present the definition of L^p-viscosity solutions of (6.9).

Definition. We call u ∈ C(Ω) an L^p-viscosity subsolution (resp., supersolution) of (6.9) if, for φ ∈ W^{2,p}_{loc}(Ω), we have

lim_{ε→0} ess inf_{B_ε(x)} ( F(y, u(y), Dφ(y), D²φ(y)) − f(y) ) ≤ 0
(resp.,

lim_{ε→0} ess sup_{B_ε(x)} ( F(y, u(y), Dφ(y), D²φ(y)) − f(y) ) ≥ 0 )

provided that u − φ takes its local maximum (resp., minimum) at x ∈ Ω.

We call u ∈ C(Ω) an L^p-viscosity solution of (6.9) if it is both an L^p-viscosity sub- and supersolution of (6.9).

Here, we recall the definition of ess sup_A and ess inf_A of h : A → R, where A ⊂ R^n is a measurable set:

ess sup_A h(y) := inf{ M ∈ R : h ≤ M a.e. in A }, and ess inf_A h(y) := sup{ M ∈ R : h ≥ M a.e. in A }.

Coming back to (6.1), we give a list of assumptions on F : Ω × R^n × S^n → R:

(1) x → F(x, q, X) is measurable for (q, X) ∈ R^n × S^n;
(2) X → F(x, q, X) is uniformly elliptic with the constants 0 < λ ≤ Λ from section 3;
(3) F(x, 0, O) = 0 for x ∈ Ω.   (6.10)

For the right hand side f : Ω → R, we suppose that

f ∈ L^p(Ω) for p ≥ n.   (6.11)

We will often suppose the Lipschitz continuity of F with respect to q ∈ R^n: there is µ ≥ 0 such that

|F(x, q, X) − F(x, q′, X)| ≤ µ|q − q′| for (x, q, q′, X) ∈ Ω × R^n × R^n × S^n.   (6.12)

Remark. We note that (3) in (6.10) and (6.12) imply that F has linear growth in the gradient variable:

|F(x, q, O)| ≤ µ|q| for x ∈ Ω and q ∈ R^n.

Remark. We also note that when x → F(x, q, X) and x → f(x) are continuous, the definition of L^p-viscosity subsolutions (resp., supersolutions) of (6.1) coincides with the standard one under assumption (6.10).

In this book, we only study the case of (6.11), but most of the results can be extended to the case when p > p⋆ = p⋆(Λ, λ, n) ∈ (n/2, n), where p⋆ is the so-called Escauriaza constant (see the references in [4]).

The following proposition is obvious, but it will be very convenient in the study of L^p-viscosity solutions of (6.1).

Proposition 6.1. Assume that (6.10), (6.11) and (6.12) hold. If u ∈ C(Ω) is an L^p-viscosity subsolution (resp., supersolution) of (6.1), then it is an L^p-viscosity subsolution (resp., supersolution) of

P^−(D²u) − µ|Du| ≤ f (resp., P^+(D²u) + µ|Du| ≥ f) in Ω.

We next recall the Aleksandrov-Bakelman-Pucci (ABP for short) maximum principle, which will play an essential role in this section (and also in Appendix). For a proof, we refer to the paper by Caffarelli-Crandall-Kocan-Święch [5].

To state it, we introduce the notion of "upper contact sets": for u : O → R, we set

Γ[u, O] := { x ∈ O : there is p ∈ R^n such that u(y) ≤ u(x) + ⟨p, y − x⟩ for all y ∈ O }.

Proposition 6.2. (ABP maximum principle) For µ ≥ 0, there is C_0 := C_0(Λ, λ, n, µ, diam(Ω)) > 0 such that if, for f ∈ L^n(Ω), u ∈ C(Ω̄) is an L^n-viscosity subsolution (resp., supersolution) of

P^−(D²u) − µ|Du| ≤ f in Ω^+[u] (resp., P^+(D²u) + µ|Du| ≥ f in Ω^+[−u]),

then we have

max_{Ω̄} u ≤ max_{∂Ω} u^+ + diam(Ω) C_0 ‖f^+‖_{L^n(Γ[u,Ω]∩Ω^+[u])}

(resp., max_{Ω̄}(−u) ≤ max_{∂Ω}(−u)^+ + diam(Ω) C_0 ‖f^−‖_{L^n(Γ[−u,Ω]∩Ω^+[−u])}).
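The upper contact set Γ[u, O] can be computed for a grid function in one dimension: x belongs to Γ exactly when some line through (x, u(x)) lies above the whole graph, i.e. when the largest chord slope to the right does not exceed the smallest chord slope to the left. A sketch (the sample functions are illustrative assumptions):

```python
def upper_contact_set(xs, us):
    # x_i is in Gamma[u, O] iff there is a slope p with
    # u(y) <= u(x_i) + p*(y - x_i) for all grid points y
    contact = []
    for i, (x, ux) in enumerate(zip(xs, us)):
        right = [(uy - ux) / (y - x) for y, uy in zip(xs[i+1:], us[i+1:])]
        left  = [(uy - ux) / (y - x) for y, uy in zip(xs[:i], us[:i])]
        lo = max(right) if right else -float("inf")   # p must be >= every right slope
        hi = min(left) if left else float("inf")      # p must be <= every left slope
        if lo <= hi:
            contact.append(x)
    return contact

xs = [i / 100 for i in range(101)]
us = [1 - (2 * x - 1) ** 2 for x in xs]     # concave: every point is in Gamma
print(len(upper_contact_set(xs, us)))        # -> 101
vs = [(2 * x - 1) ** 2 for x in xs]          # convex: only the endpoints remain
print(upper_contact_set(xs, vs))             # -> [0.0, 1.0]
```

This matches the picture behind the ABP estimate: only the contact portion of the graph, where u touches a supporting line from above, contributes to the L^n-norm of f in Proposition 6.2.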
[Fig 6.1: the upper contact set Γ[u, Ω]; at each x ∈ Γ[u, Ω], the graph of u lies below some affine function y → u(x) + ⟨p, y − x⟩.]

Here, Ω^+[u] := { x ∈ Ω : u(x) > 0 }.

The next proposition is a key tool to study L^p-viscosity solutions, particularly when f is not supposed to be continuous (see also the proof (Step 2) of Proposition 6.2 in Appendix). The proof will be given in Appendix.

Proposition 6.3. Assume that (6.11) holds for p ≥ n. For any µ ≥ 0, there are an L^p-strong subsolution u ∈ C(B̄_1) ∩ W^{2,p}_{loc}(B_1) and an L^p-strong supersolution v ∈ C(B̄_1) ∩ W^{2,p}_{loc}(B_1), respectively, of

P^+(D²u) + µ|Du| ≤ f in B_1, u = 0 on ∂B_1,

and

P^−(D²v) − µ|Dv| ≥ f in B_1, v = 0 on ∂B_1.

Moreover, there is C = C(Λ, λ, n, µ) > 0 such that, for w = u or w = v,

−C‖f^−‖_{L^n(B_1)} ≤ w ≤ C‖f^+‖_{L^n(B_1)} in B_1,

and, for small δ̂ ∈ (0, 1), there is C_δ̂ = C(Λ, λ, n, µ, δ̂) > 0 such that

‖w‖_{W^{2,p}(B_δ̂)} ≤ C_δ̂ ‖f‖_{L^p(B_1)}.

6.3 Harnack inequality

In this subsection, we often use the cube Q_r(x) for r > 0 and x = ᵗ(x_1, ..., x_n) ∈ R^n:

Q_r(x) := { y = ᵗ(y_1, ..., y_n) : |x_i − y_i| < r/2 for i = 1, ..., n },
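The Pucci extremal operators P^± used throughout admit a closed form through the eigenvalues of X. Treat the sign convention as an assumption here, since conventions differ between references: with P^+(X) = max{ −trace(AX) : λI ≤ A ≤ ΛI } and P^−(X) = min{...}, the extremal A is diagonal in the eigenbasis of X, taking λ or Λ according to the sign of each eigenvalue. A sketch that takes the eigenvalues directly, avoiding an eigensolver:

```python
lam, Lam = 1.0, 2.0   # ellipticity constants 0 < lambda <= Lambda (illustrative)

def pucci_plus(eigs):
    # P^+(X) = max over lam*I <= A <= Lam*I of -trace(A X):
    # weight lam on positive eigenvalues, Lam on negative ones
    return sum(-lam * e if e > 0 else -Lam * e for e in eigs)

def pucci_minus(eigs):
    # P^-(X) = min over lam*I <= A <= Lam*I of -trace(A X)
    return sum(-Lam * e if e > 0 else -lam * e for e in eigs)

# X = diag(3, -1): P^+ = -1*3 + 2*1 = -1 and P^- = -2*3 + 1*1 = -5
print(pucci_plus([3.0, -1.0]), pucci_minus([3.0, -1.0]))
```

With λ = Λ = 1 both operators collapse to −trace(X) = −△u evaluated on D²u, which is the consistency check tying the Pucci inequalities back to the Laplacian case.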
and Qr := Qr(0).

Remark. Notice that B_{r/2}(x) ⊂ Q_r(x) ⊂ B_{r√n/2}(x) for r > 0.

We will prove the next two propositions in the Appendix.

Proposition 6.4. (Weak Harnack inequality) For µ ≥ 0, there are p₀ = p₀(Λ, λ, n, µ) > 0 and C₁ := C₁(Λ, λ, n, µ) > 0 such that if u ∈ C(B̄_{2√n}) is a nonnegative Lp viscosity supersolution of

  P⁺(D²u) + µ|Du| ≥ 0 in B_{2√n},

then we have

  ‖u‖_{L^{p₀}(Q₁)} ≤ C₁ inf_{Q_{1/2}} u.

Remark. Notice that p₀ might be smaller than 1.

Proposition 6.5. (Local maximum principle) For µ ≥ 0 and q > 0, there is C₂ = C₂(Λ, λ, n, µ, q) > 0 such that if u ∈ C(B̄_{2√n}) is an Lp viscosity subsolution of

  P⁻(D²u) − µ|Du| ≤ 0 in B_{2√n},

then we have

  sup_{Q₁} u ≤ C₂ ‖u⁺‖_{L^q(Q₂)}.

Remark. Notice that we do not suppose that u ≥ 0 in Proposition 6.5.

6.3.1 Linear growth

The next corollary is a direct consequence of Propositions 6.4 and 6.5.

Corollary 6.6. For µ ≥ 0, there is C₃ = C₃(Λ, λ, n, µ) > 0 such that if u ∈ C(B̄_{2√n}) is a nonnegative Lp viscosity sub- and supersolution of

  P⁻(D²u) − µ|Du| ≤ 0 and P⁺(D²u) + µ|Du| ≥ 0 in B_{2√n},

respectively, then we have

  sup_{Q₁} u ≤ C₃ inf_{Q₁} u.
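The inclusions B_{r/2}(x) ⊂ Q_r(x) ⊂ B_{r√n/2}(x) noted in the Remark can be spot-checked numerically. A minimal sketch (random sampling in n = 3; the sampling setup is our choice, not from the text):

```python
import math
import random

def in_cube(y, x, r):
    # Q_r(x): open cube of side length r centered at x
    return all(abs(yi - xi) < r / 2 for yi, xi in zip(y, x))

def in_ball(y, x, rho):
    # open Euclidean ball of radius rho centered at x
    return math.dist(y, x) < rho

n, r, center = 3, 1.0, (0.0, 0.0, 0.0)
random.seed(0)
for _ in range(10000):
    y = tuple(random.uniform(-1.0, 1.0) for _ in range(n))
    if in_ball(y, center, r / 2):                      # B_{r/2}(x) ⊂ Q_r(x)
        assert in_cube(y, center, r)
    if in_cube(y, center, r):                          # Q_r(x) ⊂ B_{r√n/2}(x)
        assert in_ball(y, center, r * math.sqrt(n) / 2)
```

The first inclusion holds since max_i |y_i − x_i| ≤ |y − x|, the second since |y − x|² = Σ_i (y_i − x_i)² < n(r/2)².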
p (B2√n ) Lp (B3√n ) . there is C4 = C4 (Λ. 86 (6. Since v. we ﬁnd C2 (q) > 0 such that ˆ sup u ≤ sup u1 + C f + Q1 Q1 Lp (B3√n ) ˆ ≤ C2 (q) (u1 )+ Lq (Q2 ) + C f + Lp (B3√n ) ˆ ≤ C2 (q) u Lq (Q2 ) + C f + Lp (B3√n ) .3. in B3√n . we ﬁnd v. then we have sup u ≤ C4 inf u + f Q1 Q1 Lp (B3√n ) and P + (D 2 u) + µDu ≥ f . µ) > 0 such that if u ∈ C(B 3√n ) is a nonnegative Lp viscosity sub. an Lp viscosity sub.In order to treat inhomogeneous PDEs. For µ ≥ 0 and f ∈ Lp (B3√n ) with p ≥ n.p (B2√n ) ˆ ≤ C f+ ˆ ≤ C f− Lp (B3√n ) . in B3√n . respectively.5 to u1 .13) . applying Proposition 6. n. in B2√n . w ∈ W 2. ˆ ˆ In view of Proposition 6. in B3√n . for any q > 0. w W 2. λ. v W 2. and P − (D 2 w) − µDw ≥ f − a.e. 2. µ) > 0 such that ˆ 0 ≤ −v ≤ C f + and ˆ 0 ≤ w ≤ C f− Lp (B3√n ) Lp (B3√n ) in B3√n .3 and its Remark.e. w ∈ C(B 3√n )∩Wloc (B3√n ) such that P + (D 2 v) + µDv ≤ −f + a.7. it is easy to verify that u1 := u + v and u2 := u + w are.and supersolution of P − (D 2 u) − µDu ≤ f respectively. λ.p (B2√n ). we can choose C = C(Λ. we will need the following corollary: Corollary 6.p Proof. According to Proposition 6. w = 0 on ∂B3√n .and supersolution of P − (D 2 u1 ) − µDu1 ≤ 0 and P + (D 2 u2 ) + µDu2  ≥ 0 in B2√n . Since v ≤ 0 in B3√n . v = 0 on ∂B3√n . n.
1]. applying Proposition 6. n ˆ ˆ respectively. (Harnack inequality.1).13) for q = p0 .14) with (6. there is p0 > 0 such that u Lp0 (Q2 ) ≤ u2 Lp0 (Q2 ) ˆ ≤ C1 inf u2 ≤ C1 inf u + C f − Q1 Q1 Lp (B3√n ) . we may suppose that x = 0.12) hold. where C4 > 0 is the constant in Corollary 6. X) has quadratic growth. (6. q.On the other hand. there is µ ≥ 0 such that F (x. where f (x) := f (rx).8.2 Quadratic growth Here.3. we conclude the assertion. q.4 to u2 . we consider the case when q → F (x.15) . combining (6.11) and (6. Proof. We refer to [10] for applications where such quadratic nonlinearity appears. q ′. 87 (6. q. we easily see that v is an Lp viscosity subsolution and supersolution of ˆ ˆ P − (D 2 v) − µDv ≤ r 2 f and P + (D 2 v) + µDv ≥ −r 2 f. Note that f Lp (B3√n ) = r − p f Lp (B3√nr ) . (6. Setting v(x) := u(rx) for x ∈ B3√n . which together with (1) of (6. X) ∈ Ω × Rn × Rn × S n . 2 6. X) − F (x. in B3√n . 2 Corollary 6. If u ∈ C(Ω) is an Lp viscosity solution of (6. q) ∈ Ω × Rn .14) Therefore. q ′. then Qr (x) sup u ≤ C4 Qr (x) inf u + r 2− p f n Lp (Ω) . q. We present a version of the Harnack inequality when F has a quadratic growth in Du in place of (6.10) implies that F (x. and if B3√nr (x) ⊂ Ω for r ∈ (0.7.7 to v and then.12). rescaling v to u. By translation. we can ﬁnd C4 > 0 such that the assertion holds.10). X) ≤ µ(q + q ′ )q − q ′  for (x. ﬁnal version) Assume that (6. Applying Corollary 6. O) ≤ µq2 for (x.
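The norm identity ‖f̂‖_{Lp(B_{3√n})} = r^{−n/p} ‖f‖_{Lp(B_{3√n r})} used in the proof of Corollary 6.8 follows from the change of variables y = rx; a short verification, writing f̂(x) := f(rx) as in the proof:

```latex
\|\hat f\|_{L^p(B_{3\sqrt n})}^p
  = \int_{B_{3\sqrt n}} |f(rx)|^p \, dx
  \overset{y = rx}{=} r^{-n} \int_{B_{3\sqrt n\, r}} |f(y)|^p \, dy
  = r^{-n}\, \|f\|_{L^p(B_{3\sqrt n\, r})}^p ,
```

so taking p-th roots produces the factor r^{−n/p}, which is exactly how the term r^{2−n/p} ‖f‖_{Lp(Ω)} arises after multiplying by r².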
λ. r) such that −δ ≤ φ ≤ v + δ in Bε0 (x). then we have sup u ≤ C5 e λ M inf u + f Q1 Q1 2µ and P + (D 2 u) + µDu2 ≥ f in B3√n . we may suppose that v(x) = φ(x) and v ≤ φ in Br (x). respectively. For any δ ∈ (0. where M := supB3√n u. we have ε→0 lim ess inf Bε (x) P − (D 2 ψ) − µDψ2 − f + ≤ 0. We claim that v := eαu − 1 and w := 1 − e−αu are.The associated Harnack inequality is as follows: Theorem 6. there is C5 = C5 (Λ. n.p (Br (x)) ⊂ C σ0 (B r (x)) with some σ0 ∈ (0. For µ ≥ 0 and f ∈ Lp (B3√n ) with p ≥ n. Setting ψ(y) := α−1 log(φ(y) + 1) for y ∈ Bε0 (x) (extending ψ ∈ W 2.9. Lp (B3√n ) . we can choose ε0 ∈ (0. in view of W 2. 2. Fix any δ ∈ (0.p in B3√n \ Bε0 (x) if necessary). 1). and D 2 ψ = D2φ Dφ ⊗ Dφ − .p Choose φ ∈ Wloc (B3√n ) and suppose that u−φ attains its local maximum at x ∈ B3√n . We shall only prove this claim for v since the other for w can be obtained similarly. a nonnegative Lp viscosity sub. 1). 1). Note that 0 ≤ v ≤ eαM − 1 in B3√n . where B2r (x) ⊂ B3√n . Thus.and supersolution of P − (D 2 v) ≤ α(eαM + δ)f + and P + (D 2 w) ≥ −α(1 + δ)f − in B3√n . Set α := µ/λ. α(φ + 1) α(φ + 1)2 Since Dψ = Dφ α(φ + 1) 88 . Proof. µ) > 0 such that if u ∈ C(B 3√n ) is a nonnegative Lp viscosity sub.and supersolution of P − (D 2 u) − µDu2 ≤ f respectively.
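The derivative identities for ψ = α⁻¹ log(φ + 1) displayed above can be checked by finite differences. A minimal one-dimensional sketch (the sample φ and the constant playing the role of α are illustrative, not from the text; in 1-d, Dφ ⊗ Dφ reduces to (Dφ)²):

```python
import math

a = 2.0                                    # plays the role of α > 0
phi   = lambda y: y * y + 0.5              # sample smooth φ with φ + 1 > 0
dphi  = lambda y: 2.0 * y
d2phi = lambda y: 2.0
psi   = lambda y: math.log(phi(y) + 1.0) / a   # ψ = α⁻¹ log(φ + 1)

def fd1(f, y, h=1e-5):                     # central first difference
    return (f(y + h) - f(y - h)) / (2.0 * h)

def fd2(f, y, h=1e-4):                     # central second difference
    return (f(y + h) - 2.0 * f(y) + f(y - h)) / h ** 2

y = 0.7
# Dψ = Dφ / (α(φ + 1))
assert abs(fd1(psi, y) - dphi(y) / (a * (phi(y) + 1.0))) < 1e-6
# D²ψ = D²φ / (α(φ + 1)) − (Dφ)² / (α(φ + 1)²)
assert abs(fd2(psi, y) - (d2phi(y) / (a * (phi(y) + 1.0))
                          - dphi(y) ** 2 / (a * (phi(y) + 1.0) ** 2))) < 1e-4
```

This is the chain-rule computation behind transferring test functions between u and v = e^{αu} − 1 in the quadratic-growth Harnack argument.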
14). dist(K. there is σ = σ(Λ. For each compact set K ⊂ Ω. Assume that (6. sending δ → 0. . 9) are independent of δ > 0. then there is ˆ ˆ C = C(Λ. dist(K.the above inequality yields ε→0 lim ess inf Bε (x) P − (D 2 φ) − f + ≤ 0. Since Ck (k = 6. 89 .11) and (6. 6. maxΩ u. we have ε→0 lim ess inf Bε (x) P − (D 2 φ) − α(eαM + δ)f + ≤ 0. y ∈ K. (6. α(φ + 1) Since 0 < 1 − δ ≤ φ + 1 ≤ eαM + δ in Bε0 (x).10). Since αu ≤ v ≤ αueαM and αue−αM ≤ w ≤ αu. Theorem 6. n. 2 Remark.and supersolutions as above can be found in [14] for uniformly elliptic PDEs in divergence form with the quadratic nonlinearity. f Lp(Ω) ) > 0 ˆ u(x) − u(y) ≤ Cx − yσ for x.13) and (6. p. 1) such that if u ∈ C(Ω) is an Lp viscosity solution of (6. µ. we conclude the proof. λ. we show how the Harnack inequality implies the H¨lder o continuity.1). using the same argument to get (6. n. µ.4 H¨lder continuity estimates o In this subsection. p. ∂Ω). . . Remark. f Lp (Ω) ) ∈ (0. λ. we have sup u ≤ Q1 1 sup v ≤ C6 v Lp0 (Q2 ) + (eαM + δ) f + Lp (B3√n ) α Q1 ≤ C7 e2αM w Lp0 (Q2 ) + (eαM + δ) f + Lp (B3√n ) ≤ C8 e2αM inf w + (e2αM + δ) f Q1 Lp (B3√n ) ≤ C9 e2αM inf u + (1 + δ) f Q1 Lp (B3√n ) . ∂Ω). We note that the same argument by using two diﬀerent transformations for sub.10. . We notice that σ is independent of supΩ u.12) hold.
e. We denote by S(r) the set of all nonnegative w ∈ C(B 3√nr ). we may suppose that there is C4 > 1 such that if w ∈ C(Ω) is a nonnegative Lp viscosity sub. Qr It is suﬃcient to ﬁnd C > 0 and σ ∈ (0. we may suppose x = 0 ∈ K. 1) such that M(r) − m(r) ≤ Cr σ for small r > 0. we see that v1 and w1 belong to S(r). Thus. sup{u(x)  dist(x. Now. then we see that for any r ∈ (0. ∂Ω)/(3 n)} > 0.and supersolution of P − (D 2 w) − µDw ≤ f and P + (D 2 w) + µDw ≥ f in Ω. Setting v1 := u − m(r) and w1 := M(r) − u. we set M(r) := sup u. 4} > 3. Setting r0 := min{1. setting C10 := max{C4 f Lp (Ω) . Qr (x) sup w ≤ C4 Qr (x) inf w + r 2− p f n Lp (Ω) .dist(K. we have M(r/2) − m(r) ≤ C10 m(r/2) − m(r) + (r/2)β . r0 ] and x ∈ K (i. Hence. Qr m(r) := inf u and osc(r) := M(r) − m(r).ˆ In our proof below. an Lp viscosity sub. which is.and supersolution of P − (D 2 w) − µDw ≤ f  and P + (D 2 w) + µDw ≥ −f  in B3√nr . √ Proof. M(r) − m(r/2) ≤ C10 M(r) − M(r/2) + (r/2)β . C4 . respectively. we may relax the dependence maxΩ u in C by for small ε > 0. 90 . B3√nr (x) ⊂ Ω). K) < ε} respectively. setting β := 2 − > 0. For simplicity. we have sup v1 ≤ C10 inf v1 + r 2− p n p n Qr/2 Qr/2 and Qr/2 sup w1 ≤ C10 Qr/2 inf w1 + r 2− p n .
Hence, adding these inequalities, we have

  (C₁₀ + 1)(M(r/2) − m(r/2)) ≤ (C₁₀ − 1)(M(r) − m(r)) + 2C₁₀(r/2)^β.

Thus, setting θ := (C₁₀ − 1)/(C₁₀ + 1) ∈ (1/2, 1) and C₁₁ := 2C₁₀/(C₁₀ + 1), we see that

  osc(r/2) ≤ θ osc(r) + C₁₁(r/2)^β.    (6.16)

Changing r to r/2^{k−1} in (6.16) for integers k ≥ 2, we have

  osc(r/2^k) ≤ θ^k osc(r) + C₁₁ r^β Σ_{j=1}^{k} 2^{−βj}
            ≤ θ^k osc(r₀) + (C₁₁/(2^β − 1)) r^β
            ≤ C₁₂ (θ^k + r^β),

where C₁₂ := max{osc(r₀), C₁₁/(2^β − 1)}.

For r ∈ (0, r₀), by setting s = r^α, where α = log θ/(log θ − β log 2) ∈ (0, 1), there is a unique integer k ≥ 1 such that

  s/2^k ≤ r < s/2^{k−1},

which yields

  log(s/r)/log 2 ≤ k < log(s/r)/log 2 + 1.

Hence, recalling θ ∈ (1/2, 1), we have

  osc(r) ≤ osc(s/2^{k−1}) ≤ C₁₂(θ^k + (2s)^β) ≤ 2^β C₁₂ (θ^{(α−1) log r/log 2} + r^{βα}).

Setting σ := (α − 1) log θ/log 2 ∈ (0, 1), we have

  θ^{(α−1) log r/log 2} = r^σ and r^{βα} = r^σ.

Thus, setting C₁₃ := 2^β C₁₂, we have

  osc(r) ≤ C₁₃ r^σ.

Remark. We note that we may derive (6.16) when p > n/2 by taking β = 2 − n/p > 0.

We shall give the corresponding Hölder continuity for PDEs with quadratic nonlinearity (6.15). Since we can use the same argument as in the proof of
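The iteration osc(r/2) ≤ θ osc(r) + C₁₁(r/2)^β forcing Hölder-type decay osc(r) ≤ C r^σ can be illustrated numerically. A small sketch with illustrative constants (θ, C₁₁, β, r₀ and the factor 10 in the bound are our sample choices, not the constants of the proof):

```python
import math

# sample constants; any theta in (1/2, 1) and beta > 0 work
theta, C11, beta, r0 = 0.8, 1.0, 1.0, 1.0
# an exponent strictly below both decay rates theta and 2^{-beta}
sigma = min(beta, -math.log2(theta)) / 2

osc = 1.0                                  # osc(r0), normalized
for k in range(1, 60):
    # one halving step: osc(r0/2^k) <= theta*osc(r0/2^{k-1}) + C11*(r0/2^k)^beta
    osc = theta * osc + C11 * (r0 / 2 ** k) ** beta
    # Hölder-type decay with a fixed multiplicative constant
    assert osc <= 10 * (r0 / 2 ** k) ** sigma
```

The geometric competition between θ^k and 2^{−βk} is exactly what the choice of α and σ in the proof balances.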
Sketch of proof. We remark that F ∗ ρ1/k means the convolution of F (·.1 using Theorem 6. D2 u) = fk in Ω.17). dist(K. We may relax assumption (1) of (5.11) and (6.7 and 6. ∂Ω). ˆ Remark.10). where ρ1/k is the standard molliﬁer with respect to xvariables. n. µ.17) holds. (6. p.Theorem 6.17) so that the assertion holds for Ω which may have some “concave” corners. there are C = C(Λ. Step1: We ﬁrst solve approximate PDEs. ∂Ω). then we have ˆ u(x) − u(y) ≤ Cx − yσ for x. we consider Fk (x.11. 6. (6. Theorem 6. 1) such that if an Lp viscosity solution u ∈ C(Ω) of (6.5 Existence result For the existence of Lp viscosity solutions of (6.12) hold. respectively.18) where “smooth” Fk and fk approximate F and f . y ∈ K.12.9 instead of Corollaries 6. there is an Lp viscosity solution u ∈ C(Ω) of (6. dist(K. p. (6.17) Remark. instead of (6. µ. Assume that (6. p.10). λ. (6. In fact. For given g ∈ C(∂Ω).11) and (6. we only give an outline of proof. Fk and fk are given by F ∗ ρ1/k and f ∗ ρ1/k . 92 .8. we omit the proof of the following: Corollary 6. Assume that (6.1) under the Dirichlet condition. Such a condition is called “uniform exterior cone condition”.1). Assume also that (1) of (5. supΩ u) > 0 and σ = σ(Λ. For each ˆ ˆ compact set K ⊂ Ω. supΩ u) ∈ (0.1). Note that both of σ and C depend on supΩ u in this quadratic case. n. Du.1) such that u(x) = g(x) for x ∈ ∂Ω. λ. which have to satisfy a suﬃcient condition in Step 3. X) and ρ1/k . under (6. which was ﬁrst shown in a paper by Crandall´ e KocanLionsSwi¸ch in [7] (1999).15) hold.
We ﬁnd a viscosity solution uk ∈ C(Ω) of (6.18) under (6.17) via Perron’s method for instance. At this stage, we need to suppose the smoothness of ∂Ω to construct viscosity sub and supersolutions of (6.18) with (6.17). Remember that if F and f are continuous, then the notion of Lp viscosity solutions equals to that of the standard ones (see Proposition 2.9 in [5]). In view of (1) of (5.17) (i.e. the uniform exterior sphere condition), we can construct viscosity sub and supersolutions of (6.18) denoted by ξ ∈ U SC(Ω) and η ∈ LSC(Ω) such that ξ = η = g on ∂Ω. To show this fact, we only note that we can modify the argument in Step 1 in section 7.3. Step 2: We next obtain the a priori estimates for uk so that they converge to a continuous function u ∈ C(Ω), which is the candidate of the original PDE. For this purpose, after having established the L∞ estimates via Proposition 6.2, we apply Theorem 6.10 (interior H¨lder continuity) to uk in Step 1 because o (6.10)(6.12) hold for approximate PDEs with the same constants λ, Λ, µ. We need a careful analysis to get the equicontinuity up to the boundary ∂Ω. See Step 1 in section 7.3 again. Step 3: Finally, we verify that the limit function u is the Lp viscosity solution via the following stability result, which is an Lp viscosity version of Proposition 4.8. To state the result, we introduce some notations: For B2r (x) ⊂ Ω with r > 0 and x ∈ Ω, and φ ∈ W 2,p (Br (x)), we set Gk [φ](y) := Fk (y, Dφ(y), D2 φ(y)) − fk (y), and G[φ](y) := F (y, Dφ(y), D2 φ(y)) − f (y) for y ∈ Br (x). Proposition 6.13. Assume that Fk and F satisfy (6.10) and (6.12) with λ, Λ > 0 and µ ≥ 0. For f, fk ∈ Lp (Ω) with p ≥ n, let uk ∈ C(Ω) be an Lp viscosity subsolution (resp., supersolution) of (6.18). Assume also that uk converges to u uniformly on any compact subsets of Ω as k → ∞, and that for any B2r (x) ⊂ Ω with r > 0 and x ∈ Ω, and φ ∈ W 2,p (Br (x)),
  lim_{k→∞} ‖(G[φ] − G_k[φ])⁺‖_{L^p(B_r(x))} = 0

  (resp., lim_{k→∞} ‖(G[φ] − G_k[φ])⁻‖_{L^p(B_r(x))} = 0).
Then, u ∈ C(Ω) is an Lp viscosity subsolution (resp., supersolution) of (6.1).
Proof of Proposition 6.13. We only give a proof of the assertion for subsolutions. Suppose the contrary: There are r > 0, ε > 0, x ∈ Ω and φ ∈ W 2,p (B2r (x)) such that B3r (x) ⊂ Ω, 0 = (u − φ)(x) ≥ (u − φ)(y) for y ∈ B2r (x), and u − φ ≤ −ε in B2r (x) \ Br (x), and G[φ](y) ≥ ε a.e. in B2r (x). (6.20) For simplicity, we shall suppose that r = 1 and x = 0. It is suﬃcient to ﬁnd φk ∈ W 2,p (B2 ) such that limk→∞ supB2 φk  = 0, and Gk [φ + φk ](y) ≥ ε a.e. in B2 . Indeed, since uk − (φ + φk ) attains its maximum over B2 at an interior point z ∈ B2 by (6.19), the above inequality contradicts the fact that uk is an Lp viscosity subsolution of (6.18). Setting h(x) := G[φ](x) and hk (x) := Gk [φ](x), in view of Proposition 6.3, we 2,p can ﬁnd φk ∈ C(B 2 ) ∩ Wloc (B2 ) such that P − (D 2 φk ) − µDφk  ≥ (h − hk )+ a.e. in B2 , φk = 0 on ∂B2 , 0 ≤ φk ≤ C (h − hk )+ Lp (B2 ) in B2 , φk W 2,p (B1 ) ≤ C (h − hk )+ Lp (B2 ) .
(6.19)
We note that our assumption together with the third inequality in the above yields limk→∞ supB2 φk  = 0. Using (6.10), (6.12) and (6.20), we have Gk [φ + φk ] ≥ P − (D2 φk ) − µDφk  + hk ≥ (h − hk )+ + ε − (h − hk ) ≥ ε a.e. in B2 . 2
7 Appendix
In this appendix, we present proofs of the propositions which appeared in the previous sections. However, to prove them, we often need more fundamental results, for which we only give references. One such result is the following "area formula", which will be employed in sections 7.1 and 7.2. We refer to [9] for a proof of a more general area formula.

Area formula: if ξ ∈ C¹(Rⁿ; Rⁿ), g ∈ L¹(Rⁿ) and A ⊂ Rⁿ is measurable, then

  ∫_{ξ(A)} g(y) dy ≤ ∫_A g(ξ(x)) |det(Dξ(x))| dx.

We note that the area formula is a change of variables formula when det(Dξ) may vanish. In fact, equality holds if det(Dξ) > 0 and ξ is injective.
7.1 Proof of Ishii's lemma
First of all, we recall an important result by Aleksandrov. We refer to the Appendix of [6] and [10] for a “functional analytic” proof, and to [9] for a “measure theoretic” proof. Lemma 7.1. (Theorem A.2 in [6]) If f : Rn → R is convex, then for a.a. x ∈ Rn , there is (p, X) ∈ Rn × S n such that f (x + h) = f (x) + p, h + 1 Xh, h + o(h2) as h → 0. 2
(i.e., f is twice diﬀerentiable at a.a. x ∈ Rn .) We next recall Jensen’s lemma, which is a version of the ABP maximum principle in 7.2 below. Lemma 7.2. (Lemma A.3 in [6]) Let f : Rn → R be semiconvex (i.e. x → f (x) + C0 x2 is convex for some C0 ∈ R). Let x ∈ Rn be a strict ˆ maximum point of f . Set fp (x) := f (x) − p, x for x ∈ Rn and p ∈ Rn . Then, for r > 0, there are C1 , δ0 > 0 such that Γr,δ  ≥ C1 δ n for δ ∈ (0, δ0 ], 95
. where δ0 = ε0 /(3r). we have C1 δ n ≤ lim Am  =  ∩∞ A ≤ Γr. r. we ﬁnd p ∈ B δ ˆ such that maxBr fp = fp (x). sending k → ∞ (along a subsequence if necessary).δ for δ ∈ (0. Since 0 is the strict maximum of f . m=1 m→∞ Now we shall prove our claim. k=m r. which yields x ∈ Γr.δ . x x Proof.δ m where fp (x) = f m (x)− p.δ . x . Fix p ∈ B δ0 .where Γr. By translation. In fact. and m m max fpk k = fpk k (x). we set f m (x) = f ∗ ρ1/m (x).δ . independent of large integers m. we ﬁnd ε0 > 0 such that ε0 = f (0) − max B 4r/3 \Br/3 f. Br Hence. we note that f m (x) − p. δ0 > 0. δ0 ]. Note that x → f m (x) + C0 x2 is convex.δ := x ∈ Br (ˆ) ∃p ∈ B δ such that fp (y) ≤ fp (x) for y ∈ Br (ˆ) . where ρ1/m is the molliﬁer. ˆ ˆ Therefore. For m ≥ 3/r. for x ∈ ∩∞ Am . Because. we claim that there are C1 . We remark that this concludes the assertion. such that Γm  ≥ C1 δ n r. we notice that x → f m (x) + C0 x2 is convex. ˆ For integers m. we may suppose x = 0. we can select pk ∈ B δ m=1 m=1 and mk such that limk→∞ mk = ∞. x ≤ f (0) − ε0 + δ0 r ≤ f (0) − 96 2ε0 3 in B r \ B2r/3 . setting Am := ∪∞ Γk . First of all. Setting m m Γm = x ∈ Br ∃p ∈ B δ such that fp (y) ≤ fp (x) for y ∈ B r .δ we have ∩∞ Am ⊂ Γr.
we have Bδ  = Df m (Γm ) r.− f (ξδ ).δ Although we can ﬁnd a proof of the next proposition in [6].2.3. qδ ∈ Bδ such that ξ → fδ (ξ) + qδ . ξ has a maximum at ξδ . D 2 f (ξδ )) ∈ J 2. In other words. then x ∈ Br . we see that B δ = Df m (Γm ) for δ ∈ (0.+ f (ξδ ) ∩ J 2. −λI ≤ D 2 f (ξδ ) ≤ B + 2δI. Proof. Noting (Df (ξδ ). For any δ > 0.1 and 7.δ detD 2 f m dx ≤ (2C0 )n Γm . ξ − δξ2. moreover. we have employed that −2C0 I ≤ D 2 f m ≤ O in Γm .δ Thanks to the Area formula. 2 We next give a “magic” property of supconvolutions. B ∈ S m . X) ∈ J 2.δ dy ≤ Γm r. (Lemma A. we notice that the semiconvex fδ attains its strict maximum at ξ = 0. for large m. 3 where ωf denotes the modulus of continuity of f . setting fδ (ξ) := f (ξ) − 2−1 Bξ. It is easy to see that Df (ξδ ) → 0 (as δ → 0) and. at which f is twice diﬀerentiable.+ f (0) ∩ J 2.On the other hand. In view of Lemmas 7. ξ } = f (0). we conclude the assertion by taking the limit as δ → 0. in view of these m m observations.− f (0) and − λI ≤ X ≤ B. 97 . Proposition 7.4 in [6]) If f ∈ C(Rm ). Hence. r. for any p ∈ B δ0 . we recall the proof with a minor change for the reader’s convenience. we verify that f m (0) ≥ f (0) − ωf (m−1 ) > f (0) − ε0 . δ0 ]. there are ξδ . from the convexity of ξ → f (ξ) + (λ/2)ξ2. For the reader’s convenience. ξ → f (ξ) + (λ/2)ξ2 is convex and maxξ∈Rm {f (ξ) − 2−1 Bξ. if maxBr fp = fp (x) for x ∈ B r .δ 2 Here. we put the proof of [6]. r. then there is an X ∈ S m such that (0. r.
putting x = y and ξ = η − ε(λ(η − y) + q) for ε > 0 in the above again. we may suppose that x = y = 0. Y ) ∈ J ˆ v(0). Y ) ∈ J 2. ξ − η + o(ξ − η2 ). ξ ∈ Rn . we choose y ∈ Rn such that λ v(η) = v(y) − y − η2 . w in Ω into Rn by −∞ in Rn \ Ω.6. ˆ 2 Thus. ξ − η + o(ξ − η2 ) 2 λ = v(y) − y − η2 + q. 2 Taking ξ = x − y + η in the above. then (0. Y ) ∈ J 2.+ v(η + λ−1 q) and v (η) + ˆ In particular.+ v (η). we see that for any x. y) attains its maximum. which concludes the proof. at which u(x) + w(y) − ˆ ˆ φ(x.4. we set v(ξ) := sup ˆ x∈R n λ v(x) − x − ξ2 . 2λ 2. 2 For η. By translation. we have ελ(η − y) + q2 ≤ o(ε). Y ) ∈ J 2. and (q. ξ − η 2 1 + Y (ξ − η).Lemma 7.+ v (0).+ v (η). (Lemma A. Y ∈ S n . from the deﬁnition. extending upper semicontinuous functions u. q ∈ Rn .+ q2 = v(η + λ−1 q).5 in [6]) For v ∈ USC(Rn ) with supRn v < ∞ and λ > 0. we shall work in Rn × Rn instead of Ω × Ω. we have (q.+ v(y). 2 Proof of Lemma 3. For (q. Y ) ∈ J 2. λ ˆ ˆ v(x) − ξ − x2 ≤ v (ξ) ≤ v (η) + q. if (0. To verify that y = η + λ−1 q. we have ˆ (q. 98 . First of all. Y ) ∈ J 2. ξ − η 2 1 + Y (ξ − η). ˆ Proof.
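The two key properties of the sup-convolution v̂(ξ) = sup_x { v(x) − (λ/2)|x − ξ|² } used above, namely v̂ ≥ v and semiconvexity of v̂, can be observed numerically. A minimal discretized sketch (the sample function v(x) = −|x|, the grid, and λ are our illustrative choices):

```python
lam = 4.0
v = lambda x: -abs(x)                         # bounded above, nonsmooth at 0
grid = [i / 100.0 for i in range(-300, 301)]  # points carrying the sup

def sup_conv(xi):
    # discrete v̂(ξ) = max over the grid of v(x) − (λ/2)(x − ξ)²
    return max(v(x) - lam / 2.0 * (x - xi) ** 2 for x in grid)

# v̂(ξ) + (λ/2)ξ² is a max of functions affine in ξ, hence convex
g = lambda x: sup_conv(x) + lam / 2.0 * x ** 2

h = 0.05
for i in range(-20, 21):
    xi = i * h
    assert sup_conv(xi) >= v(xi)                          # v̂ ≥ v
    assert g(xi + h) - 2.0 * g(xi) + g(xi - h) >= -1e-9   # semiconvexity of v̂
```

Expanding the square shows v̂(ξ) + (λ/2)ξ² = sup_x { v(x) − (λ/2)x² + λ⟨x, ξ⟩ }, a supremum of affine functions of ξ, which is the "magic" semiconvexity property exploited in the proof.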
y) → u(x) + w(y) − 1 x x (A + ηI) . A η η µ +(µ + A )(x − ξ2 + y − η2) A+ 1 2 A µ ξ η ξ η φ(x. Using the notation in Lemma 7. Therefore. for each µ > 1. 0) ∈ Rn × Rn . y 2 y S 2n . we have u(ξ) + w(η) ≤ ˆ ˆ 1 2 A+ 1 2 A µ ξ η 99 . respectively. ≤ u(0) + w(0) = 0 in Rn × Rn . x y ≤ 1 2 ξ ξ . . ξ η for all ξ. y) − φ(0. by u(x) − u(0) − Dx φ(0.+ Then. x . Since H¨lder’s inequality imo plies A x y . 0) = (0. we can conclude the proof. ξ. η ∈ Rn and µ > 0. 0) = u(0) = w(0) = 0 and Dφ(0. with the above λ > 0. y . Y ∈ S n such that (0. Thus. y. Y ) ∈ J w(0) and −(µ + A ) ≤ ≤ A + A2 . Then. 0). we need to prove the following: Simpliﬁed version of Ishii’s lemma. we have λ λ 1 u(x) − x − ξ2 + w(y) − y − η2 ≤ 2 2 2 A+ . 0).6.+ (0. 0). sending η → 0.Furthermore. x − Dy φ(0. 0). we denote by u and w the supconvolution ˆ ˆ of u and w. . y) = +o(x2 +y2). w(y) − w(0) − Dy φ(0. replacing u(x). setting λ = µ + A . for each η > 0. y). attains its (strict) maximum at 0 ∈ R2n . η ∈ Rn . we suppose that A x x u(x) + w(y) − . we see that the mapping (x. and we may also suppose that φ(0. there are X. respectively. y 2 y 2. 0) ∈ . 1 I O X O 2. For upper semicontinuous functions u and w in Rn . X) ∈ J u(0). O I O Y µ Proof of the simpliﬁed version of Lemma 3. 0) − Dx φ(0. y y 2 We will show the assertion for A + ηI in place of A.4. where A := D 2 φ(0. y for x. w(y) and φ(x. A x x Since φ(x.
Employing this existence result. we may suppose that diam(Ω) ≤ 1. Next. from the deﬁnition of J . η) = u(ξ) + w(η) and ˆ ˆ 2. it is easy to verify that there are X. (0.4 to u and w. We ﬁrst show that the ABP maximum principle holds under f ∈ Ln (Ω) ∩ C(Ω) in Steps 1 and 2 of this subsection.2) ⇓ p Existence of L strong solutions of Pucci equations (Section 7.+ u(0) and (0. the conclusion is obvious. Z) ∈ J f (0. the above inequality implies ˆ ˆ u(0) = w(0) = 0. using this fact.2. 2. By scaling.− −1 2 2n B = A + µ A . 2 7.Since u(0) ≥ u(0) = 0 and w(0) ≥ w(0) = 0. the ABP maximum principle when f ∈ Ln (Ω).2) Proof of Proposition 6. we remind the readers of our strategy in this and the next subsections. there is Z ∈ S such that (0. ˆ ˆ In view of Proposition 7. Setting r0 := max u − max u+ .3 with m = 2n.3) ⇓ ABP maximum principle for f ∈ Ln (Ω) (Section 7. Y ) ∈ J w(0) ∩ J w(0).2. Applying the last property in Lemma 7. We give a proof in [5] for the subsolution assertion of Proposition 6.± Hence. Y ∈ 2.+ 2.+ w(0). we ﬁnally prove Proposition 6. Ω ∂Ω we may also suppose that r0 > 0 since otherwise. Y ) ∈ J 2.+ 2. 100 . ABP maximum principle for f ∈ Ln (Ω) ∩ C(Ω) (Section 7. we establish the existence of Lp strong solutions of “Pucci” equations in the next subsection when f ∈ Lp (Ω).− 2. f (ξ.2 Proof of the ABP maximum principle First of all.2.− ˆ ˆ ˆ ˆ S n such that (0. we see that ˆ ˆ (0. X) ∈ J 2. X) ∈ J u(0) ∩ J u(0). and Z= X O O Y . in Step 3. 0) and −λI ≤ Z ≤ B. 0) ∩ J f (0.+ 2.
We ﬁrst introduce the following notation: For u : Ω → R and r ≥ 0, Γr := x ∈ Ω ∃p ∈ B r such that u(y) ≤ u(x) + p, y − x for y ∈ Ω . Recalling the upper contact set in section 6.2, we note that Γ[u, Ω] =
r>0
Γr .
Step 1: u ∈ C 2 (Ω) ∩ C(Ω). We ﬁrst claim that for r ∈ (0, r0 ), (i) B r = Du(Γr ), (ii) D 2 u ≤ O in Γr . (7.1)
To show (i), for p ∈ B r , we take x ∈ Ω such that u(ˆ) − p, x = ˆ x ˆ x maxx∈Ω (u(x) − p, x ). Since u(x) − u(ˆ) ≤ r < r0 for x ∈ Ω, taking the maximum over Ω, we have x ∈ Ω. Hence, we see p = Du(ˆ), which concludes ˆ x (i). For x ∈ Γr , Taylor’s formula yields u(y) = u(x) + Du(x), y − x + 1 2 D u(x)(y − x), y − x + o(y − x2 ). 2
Hence, we have 0 ≥ D 2 u(x)(y − x), y − x + o(y − x2 ), which shows (ii). 1−n Now, we introduce functions gκ (p) := pn/(n−1) + κn/(n−1) for κ > 0. We shall simply write g for gκ . Thus, for r ∈ (0, r0 ), we see that
Du(Γr )
g(p)dp ≤ =
Γr Γr
g(Du(x))det(D 2 u(x))dx Dun/(n−1) + κn/(n−1)
1−n
detD 2 u(x)dx.
Recalling (7.1), we utilize detD 2 u ≤ (−trace(D 2 u)/n)n in Γr to ﬁnd C > 0 such that
Br
g(p)dp ≤ C
Γr
Dun/(n−1) + κn/(n−1)
1−n
(−trace(D 2 u))n dx.
(7.2)
Thus, since (µDu + f +)n ≤ g(Du)−1(µn + κ−n (f + )n ) by H¨lder’s inequality, o we have n f+ n dx. (7.3) g(p)dp ≤ C µ + κ Br Γr 101
On the other hand, since (pn + κn )−1 ≤ g(p), we have log r κ
n
+1 ≤C
Br
1 dp ≤ C pn + κn f+ µ + κ
n n
Br
g(p)dp.
Hence, noting Γr ⊂ Ω+ [u] for r ∈ (0, r0 ), by (7.3), we have
1/n
r ≤ κ exp C
Γ[u,Ω]∩Ω+ [u]
dx − 1
.
(7.4)
When f + Ln (Γ[,Ω]∩Ω+ [u]) = 0, then sending κ → 0, we get a contradiction. Thus, we may suppose that f + Ln (Γ[,Ω]∩Ω+ [u]) > 0. Setting κ := f + Ln (Γ[u,Ω]∩Ω+ [u]) and r := r0 /2, we can ﬁnd C > 0, independent of u, such that r0 ≤ C f + Ln (Γ[u,Ω]∩Ω+ [u]) . Remark. We note that we do not need to suppose f to be continuous in Step 1 while we need it in the next step. Step 2: u ∈ C(Ω) and f ∈ Ln (Ω) ∩ C(Ω). First of all, because of f ∈ C(Ω), we remark that u is a “standard” viscosity subsolution of P − (D 2 u) − µDu ≤ f in Ω+ [u].
(See Proposition 2.9 in [5].) Let uε be the supconvolution of u for ε > 0; uε (x) := sup u(y) −
y∈Ω
x − y2 . 2ε
Note that uε is semiconvex and thus, twice diﬀerentiable a.e. in Rn . We claim that for small ε > 0, uε is a viscosity subsolution of P − (D 2 uε ) − µDuε  ≤ f ε in Ωε , (7.5)
where f ε (x) := sup{f + (y)  x − y ≤ 2( u L∞ (Ω) ε)1/2 } and Ωε := {x ∈ Ω+ [u]  dist(x, ∂Ω+ [u]) > 2( u L∞ (Ω) ε)1/2 }. Indeed, for x ∈ Ωε and (q, X) ∈ x ˆ J 2,+ uε (x), choosing x ∈ Ω such that uε (x) = u(ˆ) − (2ε)−1 x − x2 , we easily ˆ −1 verify that q = ε ˆ − x ≤ 2 u L∞ (Ω) /ε. Thus, by Lemma 7.4, we see x 2,+ that (q, X) ∈ J u(x + εq). Hence, we have P − (X) − µq ≤ f + (x + εq) ≤ f ε (x). 102
We note that for small ε > 0, we may suppose that r ε := max uε − max(uε )+ > 0.
Ωε ∂Ωε
(7.6)
Here, we list some properties on upper contact sets: For small δ > 0, we set Ωδ := {x ∈ Ω  dist(x, ∂Ω) > δ}. Lemma 7.5. Let vδ ∈ C(Ω ) and v ∈ C(Ω) satisfy that vδ → v uniformly on any compact sets in Ω as δ → 0. Assume that r := maxΩ v−max∂Ω v + > 0. ˆ Then, for r ∈ (0, r), we have the following properties: ˆ
δ
ˆ (3) for small α > 0, there is δα such that ∪0≤δ<δα Γr [vδ , Ωδ ] ⊂ Γα , r
(1) Γr [v, Ω] is a compact set in Ω+ [v], (2) lim sup Γr [vδ , Ωδ ] ⊂ Γr [v, Ω],
δ→0
ˆ where Γα := {x ∈ Ω  dist(x, Γr [v, Ω]) < α}. r Proof of Lemma 7.5. To show (1), we ﬁrst need to observe that for r ∈ (0, r), dist(Γr [v, Ω], ∂Ω) > 0. Suppose the contrary; if there is xk ∈ Γr [v, Ω] ˆ such that xk ∈ Ω → x ∈ ∂Ω, then there is pk ∈ B r such that v(y) ≤ ˆ v(xk ) + pk , y − xk for y ∈ Ω. Hence, sending k → ∞, we have max v − max v + ≤ r < r , ˆ
Ω ∂Ω
which is a contradiction. Thus, we can ﬁnd a compact set K ⊂ Ω such that Γr [v, Ω] ⊂ K. Moreover, if v(x) ≤ 0 for x ∈ Γr [v, Ω], then we get a contradiction: ˆ r ≤ max v ≤ r < r . ˆ
Ω
Next, choose x ∈ lim supδ→0 Γr [vδ , Ωδ ]. Then, for any k ≥ 1, there are δk ∈ (0, 1/k) and pk ∈ B r such that vδk (y) ≤ vδk (x) + pk , y − x for y ∈ Ωδk .
We may suppose pk → p for some p ∈ B r taking a subsequence if necessary. Sending k → ∞ in the above, we see that x ∈ Γr [v, Ω]. 103
Ω]. then x ∈ Γr [v. we can give a proof of Proposition 6.n In view of Proposition 6. which is a contradiction. δ Hence.2) in Step 1. Thus. we set uε := uε ∗ ρδ .Ω ε] Duε n/(n−1) + κn/(n−1) fε µ + κ n n 1−n (−trace(D 2 uε ))n dx Γr [uε . where rδ := maxΩε uε − max∂Ωε (uε )+ . Ωε ] for r ∈ (0. Therefore. 104 . which implies the conclusion. ˆ ˆ r When x ∈ ∂Ω. 2. we can show that D 2 uε (x) ≤ δ ˜r O in Γε. which is a contradiction. there is k0 ≥ 1 such that xk ∈ Γα0 ˆ ˆ r for k ≥ k0 . in Step 3 below.Ωε ] dx. we choose φk ∈ C(Ω) ∩ Wloc (Ω) such that P + (D 2 φk ) + µDφk  = fk − f + a. Thus.δ .δ := Γr [uε . rδ ). rδ > 0. we obtain (7. by the same argument for (ii) in (7. we verify that −ε−1 I ≤ D 2 uε (x) in Ωε . where ρδ is the standard molliﬁer.3. we have Br g(p)dp ≤ C ≤C Γr [uε . we have Br g(p)dp ≤ C ˜r Γε. Also. Furthermore. which will be seen in section 7.e. φk = 0 on ∂Ω.3.e.4).5 (3)).5 (3). Using the ABP maximum principle in Step 2 (i. f ∈ C(Ω)). in Ω.δ Duε n/(n−1) + κn/(n−1) δ 1−n (−trace(D 2 uε ))n dx δ for small r > 0. since there is pk ∈ B r such that vδk (y) ≤ vδk (xk ) + pk . We set δ ε ε ˜r Γε. 1/k) and xk ∈ ˆ Γr [vδk . + φk L∞ (Ω) ≤ C fk − f Ln (Ω) . we will use Proposition 6. Ωδk ]\ Γα0 . sending δ → 0 with Lemma 7. Let fk ∈ C(Ω) be nonnegative functions such that fk − f + Ln (Ω) → 0 as k → ∞.3. We may suppose again that limk→∞ xk = x for some x ∈ Ω. δk ∈ (0. we may suppose ˆ ˆ ˆ that x ∈ Ω and. In view of the argument to derive (7. sending ε → 0 (again with Lemma 7.3.1). Step 3: u ∈ C(Ω) and f ∈ Ln (Ω). Notice δ δ δ ε that for small δ > 0. Remark.If (3) does not hold. then there are α0 > 0. from the deﬁnition of uε . Thus. y − xk ˆ for y ∈ Ω. we have r < r . 2 For δ > 0.
Du 105 = f in B1 .8).Λ . Therefore. q  b ∈ ∂Bµ } for q ∈ Rn . Step 1: f ∈ C ∞ (B 1 ). u = 0 on ∂B1 . u = 0 on ∂B1 . u = 0 on ∂B1 . For simplicity of statemants. k=1 According to Evans’ result in 1983. 2 7.Λ := {A := (Aij ) ∈ S n  λI ≤ A ≤ ΛI}. we have Ω ∂Ω max wk ≤ max wk + C (fk )+ Ln (Γr [wk . Sketch of proof. we only need to modify the function v z in the argument below. (7. To extend the result for general Ω with smooth boundary..7) (7. Set Sλ.. (7. we ﬁnish the proof. we easily verify that wk is an Ln viscosity subsolution of P − (D 2 wk ) − µDwk  ≤ fk in Ω. Thus.3 Proof of existence results for Pucci equations We shall solve Pucci equations under the Dirichlet condition in Ω. and P + (D 2 u) + µDu ≤ f in B1 .8) Note that the ﬁrst estimate of (7. we can ﬁnd classical solutions uN ∈ C(Ω) ∩ C 2 (Ω) of k=1.Λ }∞ such that S0 = Sλ.10) is valid by Proposition 6. ij k=1 Noting that µq = max{ b. we shall treat the case when Ω is a ball though we will need the existence result in smooth domains later.Ω]∩Ω+ [u]) ..2 when the inhomogeneous term is continuous.9) . For µ ≥ 0 and f ∈ Lp (Ω) with p ≥ n. by Step 2. sending k → ∞ with Lemma 7. we choose B0 := {bk ∈ ∂Bµ }∞ such that B0 = ∂Bµ .5 (2). We shall consider the case when f ∈ C ∞ (B 1 ). Note that Ω+ [wk ] ⊂ Ω+ [u]. We can choose a countable set S0 := {Ak := (Ak ) ∈ Sλ.Setting wk := u + φk − φk L∞ (Ω) . We only show the assertion for (7.. P − (D 2 u) − µDu ≥ f in B1 .N max −trace(Ak D 2 u) + bk .
σ (B1−ε ) ≤ Cε . Now. such that uN L∞ (B1 ) ≤ C1 f Ln (B1 ) and uN C 2. D 2 uNk ) → (u.2 when the inhomogeneous term is continuous. we obtain that V ≤ uN in B 1 . we can construct a subsoluion of (7. Cε > 0 (for each ε ∈ (0. Hence. we have k=1. in view of Theorem 4. Dw(x) .2.12). it is easy to check that V ∗ (x) = 0 for x ∈ ∂B1 . by the classical comparison principle. Thus. in view of (7. (uNk . More precisely. D 2u) uniformly in B1−ε 106 . We ﬁrst note that v z (z) = 0 and v z (x) ≤ 0 for x ∈ B 1 . we can choose a sequence Nk and u ∈ C 2 (B1 ) such that limk→∞ Nk = ∞.. we have uN ≤ u1 in B 1 ..3 again.. taking α > 0 large enough so that 2αβe−9β ≥ f L∞ (B1 ) . Set v z (x) := α(e−βx−2z − e−β ).Moreover.. we ﬁnd σ = σ(ε) ∈ (0. Moreover.2. we verify that Lk v z (x) ≤ 2αβe−βx−2z (Λn − 2βλx − 2z2 + µx − 2z) ≤ 2αβe−9β (Λn − 2βλ + 3µ). (7. (7. putting V (x) := supz∈∂B1 v z (x).. DuNk .N 2 max Lk v z (x) ≤ f (x) in B1 . we see that V is a viscosity subsolution of k=1. Setting Lk w(x) := −trace(Ak D 2 w(x)) + bk .10)(7. by Proposition 3.12) Therefore.. where α.2. which are independent of N ≥ 1. ﬁxing β := (Λn + 3µ + 1)/(2λ).3.9) for any N ≥ 1 in 2 the following manner: Fix z ∈ ∂B1 . we have Lk v z (x) ≤ −2αβe−9β .10) Note that the ﬁrst estimate of (7.. Proposition 3.N max Lk u(x) − f (x) ≤ 0 in B1 .. (7.11) Furthermore. 1)) and C1 > 0. β > 0 (independent of z ∈ ∂B1 ) will be chosen later.10) is valid by Proposition 6. Thus. Du. 1).
for each ε ∈ (0. p } = P + (X) + µp. by Proposition 2.1 in [5]) Choose fk ∈ C ∞ (B 1 ) such that fk − f Lp (Ω) → 0 as k → ∞.9 yields u ∈ C(B 1 ). 107 .2 imply that ≤ = ≤ ≤ P − (D 2 (uj − uk )) − µD(uj − uk ) P + (D 2 uj ) + P − (−D 2 uk ) + µDuj  − µDuk  fj − fk P + (D 2 uj ) − P + (D 2 uk ) + µD(uj − uk ) P + (D 2 (uj − uk )) + µD(uj − uk ). Step 2: f ∈ Lp (B1 ). we have max uj − uk  ≤ C fj − fk Ln (B1 ) . Hence.13) implies that u∗ = u∗ on ∂B1 . Therefore. we thus have uj − uk L∞ (B1 ) ≤ C fj − fk Lp (B1 ) .2 when the inhomogeneous term is continuous. we see that u is a viscosity solution of P + (D 2 u) + µDu − f = 0 in B1 since supk≥1{−trace(Ak X) + bk . B1 Recalling p ≥ n. 1). By virtue of the stability result (Proposition 4. Let uk ∈ C(B 1 ) ∩ C 2 (B1 ) be a classical solution of P + (D 2 u) + µDu − fk = 0 in B1 such that uk = 0 on ∂B1 . we see that u ∈ C(B 1 ) ∩ C 2 (B1 ) is a classical solution of (7. Indeed.13) using Proposition 6. V ≤ u ≤ u1 in B 1 .8). (Lemma 3. k=1 since (1) and (4) of Proposition 3.8). (7. We ﬁrst claim that {uk }∞ is a Cauchy sequence in L∞ (B1 ). we ﬁnd u ∈ C(B 1 ) such that uk converges to u uniformly in B 1 as k → ∞. Theorem 3. Hence. and We note that (7.3.
D 2 η ≤ C0 ε−2 in B1 . η = 1 in B1−2ε . we recall Caﬀarelli’s result (1989) (see also [4]): There is a universal ˆ constant C > 0 such that ˆ D 2 (ηuk ) Lp (B ) ≤ C P + (D 2 (ηuk )) Lp (B ) . ˆ D 2 uk Lp (B1−2ε ) ≤ D 2 (ηuk ) Lp (B1−ε ) ≤ C P + (D 2 (ηuk )) Multiplying ε2 > 0 in the above.8) by Proposition 6.e.8) belongs to Wloc (B1 ). we see that −C f − Lp (B1 ) ≤ u ≤ C f + Lp (B1 ) in B1 . Moreover. It is possible to show that the uniform limit u in Step 2 is an Lp viscosity solution of (7. since we have L∞ estimates for uk . 1/2). 2. 2 Remark. we select η := ηε ∈ C 2 (B1 ) such that (i) (ii) (iii) (iv) Therefore. Therefore. η = 0 in B1 \ B1−ε . such that uk W 2. since it is 2.13. there is Cδ > 0 such that we ﬁnd C3 > 0 such that φ1 (uk ) ≤ δφ2 (uk ) + Cδ φ0 (uk ).Moreover. in B1 . Now. fk Lp (B1 ) ≤ C1 ( fk Lp (B1 ) + φ1 (uk ) + φ0 (uk )). ≤ C1 fk Lp (B1−ε ) +ε −1 Lp (B3/4 ) Duk Lp (B1−ε ) + ε−2 uk Lp (B1−ε ) where φj (uk ) := sup0<ε<1/2 εj D j uk Lp (B1−ε ) for j = 0. 108 φ2 (uk ) ≤ C3 + φ0 (uk ) .p locally. we conclude the proof. 1−ε 1−ε 0 ≤ η ≤ 1 in B1 . u satisﬁes P + (D 2 u) + µDu = f (x) a. Hence. 1/2). Dη ≤ C0 ε−1 . i. it suﬃces to ﬁnd C > 0. we get ε2 D 2 u k Lp (B1−2ε ) where C0 > 0 is independent of ε ∈ (0. then it is an Lp strong supersolution (see [5]). independent of k ≥ 1. we ﬁnd C1 > 0 such that for 0 < ε < 1/4.e. . in view of the “interpolation” inequality (see [13] for example). by the standard covering and limiting arguments with weakly convergence in W 2. 1. For ε ∈ (0. On the other hand. for any δ > 0.p (B1/2 ) ≤ C.p known that if Lp viscosity supersolution of (7.
7.4 Proof of the weak Harnack inequality

We need a modification of Lemma 4.1 in [4] since our PDE (7.14) below has the first derivative term.

Lemma 7.6. (cf. Lemma 4.1 in [4]) There are φ ∈ C²(B̄2√n) and ξ ∈ C(B2√n) such that

(1) P⁻(D²φ) − µ|Dφ| ≥ −ξ in B2√n,
(2) φ(x) ≤ −2 for x ∈ Q3,
(3) φ(x) = 0 for x ∈ ∂B2√n,
(4) ξ(x) = 0 for x ∈ B2√n \ B1/2.

[Fig 7.1: the graphs of y = φ0(r) and y = φ(x).]

Proof. Set φ0(r) := A{1 − (2√n/r)^α} for A, α > 0, so that φ0(2√n) = 0. We calculate in the following way: since

Dφ0(x) = A(2√n)^α α|x|^{−α−2} x,
D²φ0(x) = A(2√n)^α α|x|^{−α−4} {|x|²I − (α + 2)x ⊗ x},

we have, for x ≠ 0,

P⁻(D²φ0(x)) − µ|Dφ0(x)| ≥ A(2√n)^α α|x|^{−α−2} {(α + 2)λ − nΛ − µ|x|}.

Setting α := λ⁻¹(nΛ + 2µ√n) − 2, so that α > 0 for n ≥ 2, we see that the right hand side of the above is nonnegative for x ∈ B2√n \ {0}. Moreover, taking A := 2/{(4/3)^α − 1} so that φ0(3√n/2) = −2, and then taking φ ∈ C²(B̄2√n) such that φ(x) = φ0(|x|) for x ∈ B2√n \ B1/2 and φ(x) ≤ φ0(3√n/2) for x ∈ B3√n/2, we see that (2) holds. Since P⁻(D²φ) − µ|Dφ| ≥ 0 in B2√n \ B̄1/2, we can choose a continuous function ξ satisfying (1) and (4). □
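The derivative formulas in the proof follow from the radial form of φ0; as a check (written out from D|x|^{−α} = −α|x|^{−α−2}x):

```latex
% With \phi_0(x) = A\{1 - (2\sqrt n)^\alpha |x|^{-\alpha}\}:
\[
  D\phi_0(x) = A(2\sqrt n)^\alpha \alpha |x|^{-\alpha-2}\,x,
  \qquad
  D^2\phi_0(x) = A(2\sqrt n)^\alpha \alpha |x|^{-\alpha-4}
                 \bigl\{|x|^2 I - (\alpha+2)\,x\otimes x\bigr\}.
\]
% The matrix |x|^2 I - (\alpha+2)\,x\otimes x has eigenvalue -(\alpha+1)|x|^2 in the
% direction x and |x|^2 in the n-1 orthogonal directions; since
% \lambda(\alpha+1) - (n-1)\Lambda \ge (\alpha+2)\lambda - n\Lambda (using \lambda \le \Lambda),
% this produces the factor (\alpha+2)\lambda - n\Lambda in the lower bound.
```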
Proof of Proposition 6.4. Assuming that u ∈ C(B̄2√n) is a nonnegative viscosity supersolution of

P⁺(D²u) + µ|Du| ≥ 0 in B2√n,   (7.14)

we shall show that, for some constants p0 > 0 and C1 > 0, if moreover inf_{Q1/2} u ≤ 1, then we have

‖u‖Lp0(Q1) ≤ C1.   (7.15)

It is sufficient to show (7.15): indeed, for any δ > 0, by taking v(x) := u(x)(inf_{Q1/2} u + δ)⁻¹ in place of u, we have ‖v‖Lp0(Q1) ≤ C1, i.e. ‖u‖Lp0(Q1) ≤ C1(inf_{Q1/2} u + δ), which implies the assertion by sending δ → 0.

We now present an important "cube decomposition lemma".

Lemma 7.7. (Lemma 4.2 in [4]) Let A ⊂ B ⊂ Q1 be measurable sets, and let 0 < δ < 1 be such that
(a) |A| ≤ δ, and
(b) if a dyadic cube Q of Q̃ ⊂ Q1 satisfies |A ∩ Q| > δ|Q|, then Q̃ ⊂ B.
Then, |A| ≤ δ|B|.

We shall explain a terminology for the lemma: for a cube Q̃ := Qr(x̃) with r > 0 and x̃ ∈ Rⁿ, we call Q a dyadic cube of Q̃ if it is one of the cubes {Qk}_{k=1}^{2ⁿ} such that Qk := Q_{r/2}(xk) for some xk ∈ Q̃, and ∪_{k=1}^{2ⁿ} Qk ⊂ Q̃ ⊂ ∪_{k=1}^{2ⁿ} Q̄k. See Fig 7.2.

Lemma 7.8. There are θ > 0 and M > 1 such that if u ∈ C(B̄2√n) is a nonnegative Lp viscosity supersolution of (7.14) such that inf_{Q3} u ≤ 1,
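The dyadic-cube terminology is easy to make concrete. A minimal sketch (the function names are ours, not from the text): split a cube into its 2^n dyadic children and check that they tile it.

```python
import itertools

def dyadic_children(corner, side):
    """Return the 2^n dyadic subcubes of the cube with lower corner
    `corner` and side length `side`, each as a (corner, side) pair."""
    n = len(corner)
    half = side / 2.0
    return [
        (tuple(c + s * half for c, s in zip(corner, shift)), half)
        for shift in itertools.product((0.0, 1.0), repeat=n)
    ]

# Example: the unit square splits into 4 quarter-squares whose measures sum to 1.
kids = dyadic_children((0.0, 0.0), 1.0)
total_measure = sum(side ** len(c) for c, side in kids)
```

Iterating this subdivision on cubes where |A ∩ Q| > δ|Q| fails is exactly the Calderón–Zygmund stopping-time procedure behind Lemma 7.7.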
then we have |{x ∈ Q1 : u(x) ≤ M}| ≥ θ.

Proof of Lemma 7.8. Choose φ ∈ C²(B̄2√n) and ξ ∈ C(B2√n) from Lemma 7.6. Using (4) of Proposition 3.2, we easily see that w := u + φ is an Ln viscosity supersolution of

P⁺(D²w) + µ|Dw| ≥ −ξ in B2√n.

Since inf_{Q3} w ≤ −1 and w ≥ 0 on ∂B2√n, by (2) and (3) in Lemma 7.6 respectively, Proposition 6.2 yields Ĉ > 0 such that

1 ≤ sup_{Q3}(−w) ≤ sup_{B2√n}(−w) ≤ Ĉ‖ξ‖Ln(Γ[−w,B2√n]∩B2√n[−w]).   (7.16)

In view of (4) of Lemma 7.6, (7.16) implies that

1 ≤ Ĉ max ξ · |{x ∈ Q1 : (u + φ)(x) < 0}|.

Since u(x) ≤ −φ(x) ≤ max_{B̄2√n}(−φ) =: M whenever (u + φ)(x) < 0, setting θ := (Ĉ sup_{Q1} ξ)⁻¹ > 0 and M := sup_{B2√n}(−φ) ≥ 2, we have

θ ≤ |{x ∈ Q1 : u(x) ≤ M}|. □

Remark. In our setting of the proof of Proposition 6.4, this assumption is automatically satisfied.

We next show the following:

Lemma 7.9. Under the same assumptions as in Lemma 7.8, we have

|{x ∈ Q1 : u(x) > M^k}| ≤ (1 − θ)^k for all k = 1, 2, . . . .

Proof. Lemma 7.8 yields the assertion for k = 1. Suppose that it holds for k − 1. Setting A := {x ∈ Q1 : u(x) > M^k} and B := {x ∈ Q1 : u(x) > M^{k−1}}, we shall show |A| ≤ (1 − θ)|B|.
Since A ⊂ B ⊂ Q1 and |A| ≤ |{x ∈ Q1 : u(x) > M}| ≤ δ := 1 − θ, it is enough to check that property (b) in Lemma 7.7 holds.

To this end, let Q := Q_{1/2^j}(z) be a dyadic cube of Q̃ := Q_{1/2^{j−1}}(ẑ) (for some z, ẑ ∈ Q1 and j ≥ 1) such that

|A ∩ Q| > δ|Q| = (1 − θ)/2^{jn}.   (7.17)

It remains to show Q̃ ⊂ B. Assuming that there is x̃ ∈ Q̃ such that x̃ ∉ B, i.e. u(x̃) ≤ M^{k−1}, we shall get a contradiction.

[Fig 7.2: the cubes Q = Q_{1/2^j}(z) and Q̃ = Q_{1/2^{j−1}}(ẑ), with x̃ ∈ Q̃ and the scaled cube z + 2^{−j}Q3.]

Set v(x) := u(z + 2^{−j}x)/M^{k−1} for x ∈ B2√n; note that, since z ∈ Q1, we have z + 2^{−j}x ∈ B2√n for x ∈ B2√n. Furthermore, v is an Lp viscosity supersolution of P⁺(D²v) + µ|Dv| ≥ 0, and since |x̃ − z| ≤ 3/2^{j+1}, we see that inf_{Q3} v ≤ u(x̃)/M^{k−1} ≤ 1. Therefore, in view of Lemma 7.8, we have

|{x ∈ Q1 : v(x) ≤ M}| ≥ θ.

Rescaling, we have

|{x ∈ Q : u(x) ≤ M^k}| ≥ θ/2^{jn} = θ|Q|.

Thus, we have |Q \ A| ≥ θ|Q|. Hence, in view of (7.17), we have

|Q| ≥ |A ∩ Q| + |Q \ A| > δ|Q| + θ|Q| = |Q|,

which is a contradiction. □
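The rescaling step uses that the class of supersolutions of (7.14) is closed under this scaling; explicitly (a formal one-line check, using positive homogeneity of P⁺ and writing y := z + 2^{−j}x):

```latex
\[
  P^+\bigl(D^2 v(x)\bigr) + \mu\,|Dv(x)|
  = \frac{4^{-j}}{M^{k-1}}
    \Bigl[P^+\bigl((D^2u)(y)\bigr) + 2^{j}\mu\,\bigl|(Du)(y)\bigr|\Bigr]
  \;\ge\; \frac{4^{-j}}{M^{k-1}}
    \Bigl[P^+\bigl((D^2u)(y)\bigr) + \mu\,\bigl|(Du)(y)\bigr|\Bigr]
  \;\ge\; 0,
\]
% since 2^j \ge 1 and P^+(tX) = t\,P^+(X) for t \ge 0; the same computation is
% valid in the L^p viscosity sense via test functions.
```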
Back to the proof of Proposition 6.4. A direct consequence of Lemma 7.9 is that there are C̃, ε > 0 such that

|{x ∈ Q1 : u(x) ≥ t}| ≤ C̃t^{−ε} for t > 0.   (7.18)

Indeed, for t > M, we choose an integer k ≥ 1 so that M^{k+1} ≥ t > M^k. Then, we have

|{x ∈ Q1 : u(x) ≥ t}| ≤ |{x ∈ Q1 : u(x) > M^k}| ≤ (1 − θ)^k ≤ C0 t^{−ε},

where C0 := (1 − θ)⁻¹ and ε := −log(1 − θ)/log M > 0. Since 1 ≤ M^ε t^{−ε} for 0 < t ≤ M, taking C̃ := max{C0, M^ε}, we obtain (7.18).

Now, recalling Fubini's theorem (see Lemma 9.7 in [13] for instance), in view of (7.18), for any p0 ∈ (0, ε) we can find C(p0) > 0 such that ‖u‖Lp0(Q1) ≤ C(p0). Indeed,

∫_{Q1} u^{p0}(x)dx ≤ ∫_{{x∈Q1 : u(x)≥1}} u^{p0}(x)dx + 1 ≤ p0 ∫_1^∞ t^{p0−1} |{x ∈ Q1 : u(x) ≥ t}| dt + 2,

and the last integral converges by (7.18). This concludes the proof. □

7.5 Proof of the local maximum principle

Although our proof is a bit technical, we give a modification of Trudinger's proof in [13] (Theorem 9.20), in which he observed a precise estimate for "strong" subsolutions on the upper contact set. We note that we can find a different proof of the local maximum principle in [4] (Theorem 4.8 (2)). Recently, Fok in [11] (1996) gave a similar proof to ours.

Proof of Proposition 6.5. We give a proof only when q ∈ (0, 1], because it is immediate to show the assertion for q > 1 by Hölder's inequality.

Let x0 ∈ Q1 be such that max_{Q̄1} u = u(x0). It is sufficient to show that

max_{B̄1/4(x0)} u ≤ C2 ‖u⁺‖Lq(B1/2(x0)),
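With (7.18) in hand, the finiteness of the last integral is immediate for any p0 ∈ (0, ε); for the record:

```latex
\[
  p_0\int_1^\infty t^{p_0-1}\,\bigl|\{x\in Q_1 : u(x)\ge t\}\bigr|\,dt
  \;\le\; p_0\,\widetilde C\int_1^\infty t^{p_0-1-\varepsilon}\,dt
  \;=\; \frac{p_0\,\widetilde C}{\varepsilon-p_0} \;<\;\infty
  \qquad (0<p_0<\varepsilon),
\]
% so \|u\|_{L^{p_0}(Q_1)} \le C(p_0) with C(p_0) depending only on
% \widetilde C, \varepsilon and p_0.
```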
Then, by considering u((x − x0)/2) instead of u(x), it is enough to find C2 > 0 such that

max_{B̄1/2} u ≤ C2‖u⁺‖Lq(B1).   (7.20)

We may suppose that

max_{B̄1} u > 0,   (7.19)

since otherwise the conclusion is trivial. By the continuity of u, we can choose τ ∈ (0, 1/4) such that 1 − 2τ ≥ 1/2 and max_{B̄1−2τ} u > 0.

We shall consider the sup-convolution of u again: for ε ∈ (0, τ),

uε(x) := sup_{y∈B̄1} { u(y) − |x − y|²/(2ε) }.   (7.21)

By the uniform convergence of uε to u, (7.19) yields max uε > 0 for small ε > 0. Moreover, we can choose δ := δ(ε) ∈ (0, τ) such that limε→0 δ = 0 and

P⁻(D²uε) − µ|Duε| ≤ 0 a.e. in B1−δ.   (7.22)

Putting ηε(x) := {(1 − δ)² − |x|²}^β for β := 2n/q ≥ 2, for later convenience, we define

vε(x) := ηε(x)uε(x).

We note that rε := max_{B̄1−δ} vε > 0. Fix r ∈ (0, rε) and set Γε := Γr[vε, B1−δ].

Furthermore, writing η for ηε, we observe that

Dvε(x) = −2βxη(x)^{(β−1)/β} uε(x) + η(x)Duε(x),
D²vε(x) = −2βη(x)^{(β−1)/β}{uε(x)I + x ⊗ Duε(x) + Duε(x) ⊗ x}
          + 4β(β − 1)η(x)^{(β−2)/β} uε(x) x ⊗ x + η(x)D²uε(x).   (7.23)
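The sup-convolution (7.21) is easy to experiment with numerically. A small sketch (the grid, the sample function, and the names are ours; this is only a discrete analogue of the definition):

```python
import numpy as np

def sup_convolution(u_vals, xs, eps):
    """Discrete analogue of u_eps(x) = sup_y [ u(y) - |x - y|^2 / (2*eps) ]
    for samples u_vals of u on the 1-d grid xs."""
    X = xs[:, None]          # evaluation points x
    Y = xs[None, :]          # supremum variable y
    return np.max(u_vals[None, :] - (X - Y) ** 2 / (2.0 * eps), axis=1)

xs = np.linspace(-1.0, 1.0, 201)
u = -np.abs(xs)              # Lipschitz but not differentiable at 0
u_small = sup_convolution(u, xs, 0.01)
u_large = sup_convolution(u, xs, 0.1)
# u_eps >= u always, and u_eps increases with eps (the penalty term shrinks).
```

One can also observe semiconvexity in such experiments: the kink of u at 0 is replaced by a smooth cap of curvature of order 1/ε.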
r (7. .23) First.25) Here. we note that at x ∈ Γε \ Nε . Of course. we ﬁnd C > 0 such that P − (D 2 v ε ) ≤ Cη −1/β Dv ε  + Cη −2/β (v ε )+ =: g ε We next claim that there is C > 0 such that Dv ε (x) ≤ Cη −1/β (x)v ε (x) for x ∈ Γε \ Nε . By (7. since we may suppose Dv ε (x) > 0 to get the estimate. we have P − (D 2 v ε ) ≤ ηP − (D 2 uε ) + 2βη (β−1)/β {Λnuε − P − (x ⊗ Duε + Duε ⊗ x)} + in B1−δ [v ε ]. (7. we can choose a measurable set Nε ⊂ B1−δ such that Nε  = 0 and uε is twice diﬀerentiable at x ∈ B1−δ \ Nε .24) in B1−δ \ Nε . and satisfy P − (D 2 w) ≤ g 115 a. Moreover.8 in [5].Since uε is twice diﬀerentiable almost everywhere.e. using (7. v ε (y) ≤ v ε (x) + Dv ε (x).21). we see that 0 = v ε (y) ≤ v ε (x) − tDv ε (x). we use Lemma 2. in Ω. which implies Dv ε(x) ≤ Cv ε (x)η −1/β (x) in Γε \ Nε . Hence.22). which will be proved in the end of this subsection for the reader’s convenience: Lemma 7.21) again. the last term in the above can be estimated from above by C{η −2/β (v ε )+ + η −1/β Dv ε }.10. y − x for r y ∈ B1−δ . v ε is also twice diﬀerentiable at x ∈ B1−δ \ Nε . we have P − (D 2 uε ) ≤ µDuε ≤ µη −1 Dv ε  + Cη −1/β (uε )+ . r (7. Let w ∈ C(Ω) be twice diﬀerentiable a. setting y := x − tDv ε (x)Dv ε (x)−1 ∈ ∂B1−δ for t ∈ [1 − δ − x. By using (7. 1 − δ + x].e. To show this claim. in Ω.
where g ∈ Lp(Ω) with p ≥ n. If −C1I ≤ D²w(x) a.e. in Ω for some C1 > 0, then w is an Lp viscosity subsolution of P⁻(D²w) ≤ g in Ω.

Since uε is Lipschitz continuous and semiconvex in B1−δ, by (7.24) and Lemma 7.10, we see that vε is an Ln viscosity subsolution of

P⁻(D²vε) ≤ gε in B1−δ.   (7.26)

Therefore, in view of Proposition 6.2, and noting that (7.25) yields gε ≤ Cη^{−2/β}(vε)⁺ on Γε \ Nε, we have

max_{B̄1−δ} vε ≤ C‖η^{−2/β}(vε)⁺‖Ln(Γε) ≤ C (max_{B̄1−δ}(vε)⁺)^{(β−2)/β} ‖((uε)⁺)^{2/β}‖Ln(B1−δ),

which together with our choice of β yields

max_{B̄1−δ} vε ≤ C‖(uε)⁺‖Lq(B1−δ).

Hence, we have

max_{B̄1/2} uε ≤ C max_{B̄1−δ} vε ≤ C‖(uε)⁺‖Lq(B1−δ).

Therefore, sending ε → 0 in the above, we obtain (7.20), and we finish the proof. □

Proof of Lemma 7.10. In order to show that w ∈ C(Ω) is an Lp viscosity subsolution, we suppose the contrary: then there are ε, r > 0, x̂ ∈ Ω and φ ∈ W2,p_loc(Ω) such that 0 = (w − φ)(x̂) = max_Ω(w − φ), B2r(x̂) ⊂ Ω, and

P⁻(D²φ) − g ≥ 2ε a.e. in Br(x̂).

We may suppose that x̂ = 0 ∈ Ω. Setting ψ(x) := φ(x) + τ|x|⁴ for small τ > 0, we observe that

h := P⁻(D²ψ) − g ≥ ε a.e. in Br.

Notice that 0 = (w − ψ)(0) > (w − ψ)(x) for x ∈ B̄r \ {0}.
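The role of the choice β = 2n/q is exactly to convert the Ln norm into the desired Lq norm: since 2n/β = q,

```latex
\[
  \bigl\|\,((u^\varepsilon)^+)^{2/\beta}\bigr\|_{L^n(B_{1-\delta})}
  = \Bigl(\int_{B_{1-\delta}} ((u^\varepsilon)^+)^{2n/\beta}\,dx\Bigr)^{1/n}
  = \|(u^\varepsilon)^+\|_{L^q(B_{1-\delta})}^{2/\beta}.
\]
% The ABP bound then reads
%   max v^\varepsilon \le C\,(max v^\varepsilon)^{(\beta-2)/\beta}\,
%   \|(u^\varepsilon)^+\|_{L^q}^{2/\beta},
% and dividing both sides by (max v^\varepsilon)^{(\beta-2)/\beta} gives
%   max v^\varepsilon \le C^{\beta/2}\,\|(u^\varepsilon)^+\|_{L^q(B_{1-\delta})}.
```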
By Lusin's theorem, for any α > 0, we find Eα ⊂ Br such that |Br \ Eα| < α,

∫_{Br\Eα} (1 + |P⁻(−D²ψ)|)^p dx < α,   (7.27)

and, considering wδ := w ∗ ρδ, where ρδ is the standard mollifier for δ > 0,

(1) wδ → w uniformly in Br,
(2) D²wδ → D²w a.e. in Br, and
(3) D²wδ → D²w uniformly in Eα,

as δ → 0. Setting hδ := P⁻(D²(wδ − ψ)), in view of Proposition 6.2, we see that

max_{B̄r}(wδ − ψ) ≤ max_{∂Br}(wδ − ψ) + C‖(hδ)⁺‖Lp(Br).   (7.28)

Because of our hypothesis, we find C > 0 such that hδ ≤ C + |P⁻(−D²ψ)| in Br. Hence, we have

‖(hδ)⁺‖p_{Lp(Br)} ≤ C ∫_{Br\Eα} (1 + |P⁻(−D²ψ)|)^p dx + ∫_{Eα} ((hδ)⁺)^p dx.

On the other hand, from our assumption, we observe

P⁻(D²(w − ψ)) ≤ −ε a.e. in Br.

Since D²wδ → D²w uniformly in Eα, we see that, by sending δ → 0, the second integral vanishes, and hence, by (7.27),

lim sup_{δ→0} ‖(hδ)⁺‖Lp(Br) ≤ C‖1 + |P⁻(−D²ψ)|‖Lp(Br\Eα) ≤ Cα^{1/p}.

Therefore, sending δ → 0 in (7.28), we obtain

0 = max_{B̄r}(w − ψ) ≤ max_{∂Br}(w − ψ) + Cα^{1/p} for any α > 0.

This is a contradiction since max_{∂Br}(w − ψ) < 0. □
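The standard mollifier used for wδ := w ∗ ρδ can also be sketched numerically (a 1-d illustration with our own names; the discrete normalization replaces ∫ρδ = 1):

```python
import numpy as np

def mollify(w, dx, delta):
    """Discrete w * rho_delta with the standard bump rho(t) ~ exp(-1/(1-(t/delta)^2))."""
    t = np.arange(-delta, delta + dx / 2, dx)
    s = np.clip(1.0 - (t / delta) ** 2, 1e-12, None)  # avoid division by zero at |t| = delta
    rho = np.where(np.abs(t) < delta, np.exp(-1.0 / s), 0.0)
    rho /= rho.sum() * dx                              # discrete normalization of rho_delta
    return np.convolve(w, rho, mode="same") * dx

xs = np.arange(-2.0, 2.0, 0.01)
w = np.abs(xs)                   # Lipschitz, with a kink at 0
w_delta = mollify(w, 0.01, 0.2)
# Away from the boundary, w_delta is smooth and stays within delta of w
# (for a 1-Lipschitz function, |w_delta - w| <= delta pointwise).
```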
References

[1] M. Bardi & I. Capuzzo Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Systems & Control, Birkhäuser, 1997.
[2] G. Barles, Solutions de Viscosité des Équations de Hamilton-Jacobi, Mathématiques et Applications 17, Springer, 1994.
[3] M. Bardi, M. G. Crandall, L. C. Evans, H. M. Soner & P. E. Souganidis, Viscosity Solutions and Applications, Lecture Notes in Mathematics 1660, Springer, 1997.
[4] L. A. Caffarelli & X. Cabré, Fully Nonlinear Elliptic Equations, Colloquium Publications 43, Amer. Math. Soc., 1995.
[5] L. A. Caffarelli, M. G. Crandall, M. Kocan & A. Święch, On viscosity solutions of fully nonlinear equations with measurable ingredients, Comm. Pure Appl. Math. 49 (1996), 365-397.
[6] M. G. Crandall, H. Ishii & P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27 (1992), 1-67.
[7] M. G. Crandall, M. Kocan, P.-L. Lions & A. Święch, Existence results for boundary problems for uniformly elliptic and parabolic fully nonlinear equations, Electron. J. Differential Equations 1999 (1999), No. 24.
[8] L. C. Evans, Partial Differential Equations, GSM 19, Amer. Math. Soc., 1998.
[9] L. C. Evans & R. F. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, CRC Press, 1992.
[10] W. H. Fleming & H. M. Soner, Controlled Markov Processes and Viscosity Solutions, Applications of Mathematics 25, Springer, 1992.
[11] P. K. Fok, Some maximum principles and continuity estimates for fully nonlinear elliptic equations of second order, Dissertation, University of California at Santa Barbara, 1996.
[12] Y. Giga, Surface Evolution Equations - a level set method, Preprint, 2002.
[13] D. Gilbarg & N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd edition, GMW 224, Springer, 1983.
[14] Q. Han & F. Lin, Elliptic Partial Differential Equations, Courant Institute Lecture Note 1, Amer. Math. Soc., 1997.
[15] H. Ishii, Viscosity Solutions (in Japanese).
[16] R. Jensen, Uniqueness of Lipschitz extensions: Minimizing the sup norm of the gradient, Arch. Rational Mech. Anal. 123 (1993), 51-74.
Notation Index

A, B, C^σ, C^{k,σ}, C_0^∞, Γ, Γ[u, O], ∆, ess inf, ess sup, G^*, G_*, J^{2,±}, J̄^{2,±}, lim inf*_{k→∞} F_k, lim sup*_{k→∞} F_k, L^p, LSC, M, o(·), P^±, supp, u^*, u_*, USC, W^{2,p}, W^{2,p}_loc
Index

Aleksandrov, A. D. · Area formula · Bellman equation · Burgers' equation · comparison principle (— for first-order PDEs; — for second-order PDEs; — for boundary value problems; — for Dirichlet problems; — for Neumann problems; — for state constraint problems; — in unbounded domains; — of Lp viscosity solutions) · convex · cost functional · counterexample · Crandall, M. G. · cube decomposition lemma · De Giorgi, E. · divergence form · dyadic cube · dynamic programming principle · eikonal equation · elliptic (— degenerate; — uniformly) · entropy condition · Escauriaza, L. · Evans, L. C. · existence (— classical; — of viscosity solutions; — of Lp viscosity solutions; — of Lp strong solutions; — of Bellman equations; — of Isaacs equations; — of state constraint problems) · Fok, P. K. · fully nonlinear · Hamilton-Jacobi equation · Harnack inequality (— weak) · Heaviside function · Hölder continuity · homogeneous degree · Hopf-Lax formula · interior cone condition · Isaacs equation · Ishii, H. · Jensen, R. · Kruzkov, S. N. · Krylov, N. V. · Laplace equation · Laplacian (— L^∞) · Lax-Oleinik formula · Lions, P.-L. · maximum principle (— Aleksandrov-Bakelman-Pucci; — in the distribution sense; — for semicontinuous functions; — local) · mean curvature · modulus of continuity · mollifier · Moser, J. · multi-index · Nadirashvili, N. S. · Neumann (— condition; — problem) · nonanticipating strategy · nondivergence form · optimal cost · Poisson equation · Pucci's operator · quasilinear · Rankine-Hugoniot condition · regularity (— Schauder; — Lp) · semiconcave · semiconvex · semicontinuous envelope · semijet · solution (— classical; — viscosity; — Lp viscosity; — Lp strong; — weak; — of semicontinuous PDEs) · Soner, H. M. · stability · state constraints · structure condition · subsolution, supersolution (see solution) · Trudinger, N. S. · uniform exterior cone condition · uniform exterior sphere condition · uniqueness · unit outward normal · upper contact set · value function · vanishing viscosity method