Methods on Nonlinear Elliptic Equations
– Monograph –
September 22, 2008
AIMS
Preface
In this book we intend to present basic materials as well as real research
examples to young researchers in the ﬁeld of nonlinear analysis for partial
diﬀerential equations (PDEs), in particular, for semilinear elliptic PDEs. We
hope it will become good reading material for graduate students and a handy
textbook for professors to use in a topic course on nonlinear analysis.
We will introduce a series of well-known typical methods in nonlinear
analysis. We will ﬁrst illustrate them by using simple examples, then we will
lead readers to the research front and explain how these methods can be
applied to solve practical problems through careful analysis of a series of
recent research articles, mostly the authors’ recent papers.
From our experience, roughly speaking, in applying these commonly used
methods, there are usually two aspects:
i) the general scheme (more or less universal), and
ii) the key points for each individual problem.
We will use our own research articles to show readers what the general
schemes are and what the key points are; and with an understanding of these
examples, we hope the readers will be able to apply these general schemes to
solve their own research problems with the discovery of their own key points.
In Chapter 1, we introduce basic knowledge on Sobolev spaces and some
commonly used inequalities. These are the major spaces in which we will seek
weak solutions of PDEs.
Chapter 2 shows how to ﬁnd weak solutions for some typical linear and
semilinear PDEs by using functional analysis methods, mainly, the calculus
of variations and critical point theories, including the well-known Mountain
Pass Lemma.
In Chapter 3, we establish W^{2,p} a priori estimates and regularity. We prove
that, in most cases, weak solutions are actually differentiable, and hence are
classical ones. We will also present a Regularity Lifting Theorem. It is a simple
method to boost the regularity of solutions. It has been used extensively in
various forms in the authors’ previous works. The essence of the approach is
well-known in the analysis community. However, the version we present here
contains some new developments. It is much more general and is very easy
to use. We believe that our method will provide convenient ways, for both
experts and non-experts in the field, to obtain regularity. We will use
examples to show how this theorem can be applied to PDEs and to integral
equations.
Chapter 4 is a preparation for Chapters 5 and 6. We introduce Riemannian
manifolds, curvatures, covariant derivatives, and Sobolev embeddings on
manifolds.
Chapter 5 deals with semilinear elliptic equations arising from prescribing
Gaussian curvature, on both positively and negatively curved manifolds.
We show the existence of weak solutions in critical cases via variational
approaches. We also introduce the method of lower and upper solutions.
Chapter 6 focuses on solving a problem of prescribing scalar curvature
on S^n for n ≥ 3. It is in the critical case where the corresponding variational
functional is not compact at any level set. To recover the compactness, we
construct a max-mini variational scheme. The outline is clearly presented;
however, the detailed proofs are rather complex. Beginners may skip these
proofs.
Chapter 7 is devoted to the study of various maximum principles, in
particular, the ones based on comparisons. Besides classical ones, we also
introduce a version of the maximum principle at infinity and a maximum
principle for integral equations, which basically depends on the absolute
continuity of the Lebesgue integral. It is a preparation for the method of
moving planes.
In Chapter 8, we introduce the method of moving planes and its variant,
the method of moving spheres, and apply them to obtain symmetry,
monotonicity, a priori estimates, and even nonexistence of solutions. We also
introduce an integral form of the method of moving planes, which is quite
different from the one for PDEs. Instead of using local properties of a PDE,
we exploit global properties of solutions of integral equations. Many research
examples are illustrated, including the well-known one by Gidas, Ni, and
Nirenberg, as well as a few from the authors’ recent papers.
Chapters 7 and 8 function as a self-contained group. Readers who are
only interested in the maximum principles and the method of moving planes
can skip the first six chapters and start directly from Chapter 7.
Yeshiva University Wenxiong Chen
University of Colorado Congming Li
Contents
1 Introduction to Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Approximation by Smooth Functions. . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Sobolev Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.5 Compact Embedding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.6 Other Basic Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.6.1 Poincaré’s Inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality . . . . 36
2 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.1 Second Order Elliptic Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.2 Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.3 Methods of Linear Functional Analysis . . . . . . . . . . . . . . . . . . . . . 44
2.3.1 Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.3.2 Some Basic Principles in Functional Analysis . . . . . . . . . 44
2.3.3 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . 48
2.4 Variational Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.1 Semilinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.2 Calculus of Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.4.3 Existence of Minimizers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.4.4 Existence of Minimizers Under Constraints . . . . . . . . . . . 55
2.4.5 Minimax Critical Points . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.4.6 Existence of a Minimax via the Mountain Pass Theorem . . 65
3 Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.1 W^{2,p} a Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.1.1 Newtonian Potentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.1.2 Uniform Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2 W^{2,p} Regularity of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2.1 The Case p ≥ 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.2.2 The Case 1 < p < 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.2.3 Other Useful Results Concerning the Existence,
Uniqueness, and Regularity . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.3 Regularity Lifting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.1 Bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.3.2 Regularity Lifting Theorem . . . . . . . . . . . . . . . . . . . . . . . . . 96
3.3.3 Applications to PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.3.4 Applications to Integral Equations . . . . . . . . . . . . . . . . . . . 103
4 Preliminary Analysis on Riemannian Manifolds . . . . . . . . . . . . 107
4.1 Diﬀerentiable Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.2 Tangent Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3 Riemannian Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4 Curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.1 Curvature of Plane Curves. . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.2 Curvature of Surfaces in R^3 . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.3 Curvature on Riemannian Manifolds . . . . . . . . . . . . . . . . . 117
4.5 Calculus on Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.5.1 Higher Order Covariant Derivatives and the
Laplace-Beltrami Operator . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.5.2 Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature . . 123
4.6 Sobolev Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
5 Prescribing Gaussian Curvature on Compact 2-Manifolds . . 127
5.1 Prescribing Gaussian Curvature on S^2 . . . . . . . . . . . . . . . . . . . 129
5.1.1 Obstructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.1.2 Variational Approach and Key Inequalities . . . . . . . . . . . 130
5.1.3 Existence of Weak Solutions . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2 Prescribing Gaussian Curvature on Negatively Curved
Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5.2.1 Kazdan and Warner’s Results. Method of Lower and
Upper Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.2.2 Chen and Li’s Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6 Prescribing Scalar Curvature on S^n, for n ≥ 3 . . . . . . . . . . . . . 157
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
6.2 The Variational Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
6.2.1 Estimate the Values of the Functional . . . . . . . . . . . . . . . . 162
6.2.2 The Variational Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.3 The A Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3.1 In the Region Where R < 0 . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.3.2 In the Region Where R is Small . . . . . . . . . . . . . . . . . . . . . 172
6.3.3 In the Regions Where R > 0. . . . . . . . . . . . . . . . . . . . . . . . 174
7 Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.2 Weak Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.3 The Hopf Lemma and Strong Maximum Principles . . . . . . . . . . 183
7.4 Maximum Principles Based on Comparisons . . . . . . . . . . . . . . . . 188
7.5 A Maximum Principle for Integral Equations . . . . . . . . . . . . . . . . 191
8 Methods of Moving Planes and of Moving Spheres . . . . . . . . 195
8.1 Outline of the Method of Moving Planes . . . . . . . . . . . . . . . . . . . 197
8.2 Applications of the Maximum Principles Based on
Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
8.2.1 Symmetry of Solutions in a Unit Ball . . . . . . . . . . . . . . . 199
8.2.2 Symmetry of Solutions of −∆u = u^p in R^n . . . . . . . . . . . 202
8.2.3 Symmetry of Solutions for −∆u = e^u in R^2 . . . . . . . . . . . 209
8.3 Method of Moving Planes in a Local Way . . . . . . . . . . . . . . . . . . 214
8.3.1 The Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
8.3.2 The A Priori Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8.4 Method of Moving Spheres . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.4.1 The Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.4.2 Necessary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
8.5 Method of Moving Planes in Integral Forms and Symmetry
of Solutions for an Integral Equation . . . . . . . . . . . . . . . . . . . . . . . 228
A Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
A.1 Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
A.1.1 Algebraic and Geometric Notations . . . . . . . . . . . . . . . . . . 235
A.1.2 Notations for Functions and Derivatives . . . . . . . . . . . . . . 235
A.1.3 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
A.1.4 Notations for Estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
A.1.5 Notations from Riemannian Geometry . . . . . . . . . . . . . . . 239
A.2 Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
A.3 Calderón-Zygmund Decomposition . . . . . . . . . . . . . . . . . . . . . . . 243
A.4 The Contraction Mapping Principle . . . . . . . . . . . . . . . . . . . . . . . . 246
A.5 The Arzelà-Ascoli Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
1 Introduction to Sobolev Spaces
1.1 Distributions
1.2 Sobolev Spaces
1.3 Approximation by Smooth Functions
1.4 Sobolev Embeddings
1.5 Compact Embedding
1.6 Other Basic Inequalities
1.6.1 Poincaré’s Inequality
1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality
In real life, people use numbers to quantify the surrounding objects. In
mathematics, the absolute value |a − b| is used to measure the difference
between two numbers a and b. Functions are used to describe physical states.
For example, temperature is a function of time and place. Very often, we use
a sequence of approximate solutions to approach a real one; and how close
these solutions are to the real one depends on how we measure them, that
is, which metric we are choosing. Hence, it is imperative that we not
only develop suitable metrics to measure different states (functions), but
also study the relationships among different metrics. For these purposes,
the Sobolev Spaces were introduced. They have many applications in various
branches of mathematics, in particular, in the theory of partial diﬀerential
equations.
The role of Sobolev Spaces in the analysis of PDEs is somewhat similar
to the role of Euclidean spaces in the study of geometry. The fundamental
research on the relations among various Sobolev Spaces (Sobolev norms) was
ﬁrst carried out by G. Hardy and J. Littlewood in the 1910s and then by S.
Sobolev in the 1930s. More recently, many well known mathematicians, such
as H. Brezis, L. Caﬀarelli, A. Chang, E. Lieb, L. Nirenberg, J. Serrin, and
E. Stein, have worked in this area. The main objectives are to determine if
and how the norms dominate each other, what the sharp estimates are, which
functions achieve these sharp estimates, and which functions are ‘critically’
related to these sharp estimates.
To ﬁnd the existence of weak solutions for partial diﬀerential equations,
especially for nonlinear partial diﬀerential equations, the method of functional
analysis, in particular, the calculus of variations, has seen more and more
applications.
To roughly illustrate this kind of application, let us start off with a simple
example. Let Ω be a bounded domain in R^n and consider the Dirichlet problem
associated with the Laplace equation:

−∆u = f(x),  x ∈ Ω,
 u = 0,      x ∈ ∂Ω.      (1.1)
To prove the existence of solutions, one may view −∆ as an operator acting
on a proper linear space and then apply some known principles of functional
analysis, for instance, ‘fixed point theory’ or ‘degree theory,’ to derive
the existence. One may also consider the corresponding variational functional
J(u) = (1/2) ∫_Ω |∇u|^2 dx − ∫_Ω f(x) u dx      (1.2)
in a proper linear space, and seek critical points of the functional in that
space. This kind of variational approach is particularly powerful in dealing
with nonlinear equations. For example, in equation (1.1), instead of f(x), we
consider f(x, u). Then it becomes a semilinear equation. Correspondingly, we
have the functional
J(u) = (1/2) ∫_Ω |∇u|^2 dx − ∫_Ω F(x, u) dx,      (1.3)
where
F(x, u) = ∫_0^u f(x, s) ds

is an antiderivative of f(x, ·). From the definition of the functional in either
(1.2) or (1.3), one can see that the function u in the space need not be second
order diﬀerentiable as required by the classical solutions of (1.1). Hence the
critical points of the functional are solutions of the problem only in the ‘weak’
sense. However, by an appropriate regularity argument, one may recover the
diﬀerentiability of the solutions, so that they can still satisfy equation (1.1)
in the classical sense.
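To make the variational idea concrete, the following sketch (an illustrative addition, not part of the original text; it assumes Python with NumPy and SciPy available) minimizes a finite-difference discretization of the functional (1.2) for the one-dimensional problem −u″ = f with f ≡ 1 on (0, 1) and zero boundary values; the minimizer approximates the classical solution u(x) = x(1 − x)/2.

```python
import numpy as np
from scipy.optimize import minimize

# Discretize J(u) = (1/2)∫|u'|^2 dx - ∫ f u dx on (0,1), with u(0) = u(1) = 0.
n = 50
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.ones_like(x)                                # right-hand side f(x) = 1

def J(interior):
    u = np.concatenate(([0.0], interior, [0.0]))   # enforce zero boundary values
    du = np.diff(u) / h                            # forward-difference u'
    return 0.5 * np.sum(du**2) * h - np.sum(f * u) * h

res = minimize(J, np.zeros(n - 1), method="L-BFGS-B")
u = np.concatenate(([0.0], res.x, [0.0]))
exact = x * (1.0 - x) / 2.0                        # classical solution of -u'' = 1
err = np.max(np.abs(u - exact))
```

A critical point of the discrete functional satisfies the three-point scheme (2u_i − u_{i−1} − u_{i+1})/h² = f_i, so `err` is small; this mirrors how a critical point of (1.2) is a weak solution of (1.1).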
In general, given a PDE problem, our intention is to view it as an operator
A acting on some proper linear spaces X and Y of functions and to
symbolically write the equation as

Au = f      (1.4)
Then we can apply the general and elegant principles of linear or nonlinear
functional analysis to study the solvability of various equations involving A.
The result can then be applied to a broad class of partial diﬀerential equations.
We may also associate this operator with a functional J(·), whose critical
points are the solutions of equation (1.4). In this process, the key is to
find an appropriate operator ‘A’ and appropriate spaces ‘X’ and ‘Y’. As we
will see later, the Sobolev spaces are designed precisely for this purpose and
will work out properly.
In solving a partial diﬀerential equation, in many cases, it is natural to
ﬁrst ﬁnd a sequence of approximate solutions, and then go on to investigate
the convergence of the sequence. The limit function of a convergent sequence
of approximate solutions would be the desired exact solution of the equation.
As we will see in the next few chapters, there are two basic stages in showing
convergence:
i) in a reflexive Banach space, every bounded sequence has a weakly
convergent subsequence; and then
ii) by the compact embedding from a “stronger” Sobolev space into a
“weaker” one, the weakly convergent sequence in the “stronger” space becomes
a strongly convergent sequence in the “weaker” space.
Before going into the details of this chapter, readers may glance at the
introduction of the next chapter to gain more motivation for studying the
Sobolev spaces.
In Section 1.1, we will introduce the distributions, mainly the notion of
the weak derivatives, which are the elements of the Sobolev spaces.
We then deﬁne Sobolev spaces in Section 1.2.
To derive many useful properties of Sobolev spaces, it is not so convenient
to work directly with weak derivatives. Hence in Section 1.3, we show that
these weak derivatives can be approximated by smooth functions. Then in the next
three sections, we can just work on smooth functions to establish a series of
important inequalities.
1.1 Distributions
As we have seen in the introduction, for the functional J(u) in (1.2) or (1.3),
what is really involved is only the first derivatives of u instead of the second
derivatives as required classically for a second order equation; and these first
derivatives need not be continuous or even be defined everywhere. Therefore,
via the functional analysis approach, one can substantially weaken the notion
of partial derivatives. The advantage is that it allows one to divide the task of
finding “suitable” smooth solutions for a PDE into two major steps:
Step 1. Existence of Weak Solutions. One seeks solutions which are less
differentiable but are easier to obtain. It is very common to use “energy”
minimization or conservation, and sometimes finite dimensional
approximation, to show the existence of such weak solutions.
Step 2. Regularity Lifting. One uses various analysis tools to boost the
differentiability of the known weak solutions and tries to show that they are
actually classical ones.
Both the existence of weak solutions and regularity lifting have become
major branches of today’s PDE analysis. Various function spaces and the
related embedding theories are basic tools in both, among which the Sobolev
spaces are the most frequently used.
In this section, we introduce the notion of ‘weak derivatives,’ which will
be the elements of the Sobolev spaces.
Let R^n be the n-dimensional Euclidean space and Ω an open connected
subset of R^n. Let D(Ω) = C_0^∞(Ω) be the linear space of infinitely
differentiable functions with compact support in Ω. This is called the space of
test functions on Ω.
Example 1.1.1 Assume

B_R(x^o) := {x ∈ R^n : |x − x^o| < R} ⊂ Ω;

then for any r < R, the following function

f(x) = { exp{1/(|x − x^o|^2 − r^2)}   for |x − x^o| < r,
       { 0                            elsewhere,

is in C_0^∞(Ω).
Example 1.1.2 Assume ρ ∈ C_0^∞(R^n), u ∈ L^p(Ω), and supp u ⊂ K ⊂⊂ Ω.
Let

u_ε(x) := ρ_ε ∗ u := ∫_{R^n} (1/ε^n) ρ((x − y)/ε) u(y) dy.

Then u_ε ∈ C_0^∞(Ω) for ε sufficiently small.
Let L^p_loc(Ω) be the space of pth-power locally summable functions for
1 ≤ p ≤ ∞. Such functions are Lebesgue measurable functions f defined on
Ω with the property that

‖f‖_{L^p(K)} := ( ∫_K |f(x)|^p dx )^{1/p} < ∞

for every compact subset K of Ω.
Assume that u is a C^1 function in Ω and φ ∈ D(Ω). Through integration
by parts, we have, for i = 1, 2, …, n,

∫_Ω (∂u/∂x_i) φ(x) dx = − ∫_Ω u(x) (∂φ/∂x_i) dx.      (1.5)
Now if u is not in C^1(Ω), then ∂u/∂x_i does not exist. However, the
integral on the right hand side of (1.5) still makes sense if u is a locally L^1 summable
function. For this reason, we define the first derivative ∂u/∂x_i weakly as
the function v(x) that satisfies

∫_Ω v(x) φ(x) dx = − ∫_Ω u (∂φ/∂x_i) dx

for all functions φ ∈ D(Ω).
The same idea works for higher partial derivatives. Let α = (α_1, α_2, …, α_n)
be a multi-index of order

k := |α| := α_1 + α_2 + ⋯ + α_n.

For u ∈ C^k(Ω), the regular αth partial derivative of u is

D^α u = ∂^{|α|} u / (∂x_1^{α_1} ∂x_2^{α_2} ⋯ ∂x_n^{α_n}).
Given any test function φ ∈ D(Ω), through a straightforward integration by
parts k times, we arrive at

∫_Ω D^α u φ(x) dx = (−1)^{|α|} ∫_Ω u D^α φ dx.      (1.6)
There is no boundary term because φ vanishes near the boundary.
Now if u is not k times differentiable, the left hand side of (1.6) makes no
sense. However, the right hand side is valid for functions u with much weaker
differentiability; u only needs to be locally L^1 summable. Thus it is natural
to choose those functions v that satisfy (1.6) as the weak representatives of
D^α u.
Definition 1.1.1 For u, v ∈ L^1_loc(Ω), we say that v is the αth weak
derivative of u, written

v = D^α u,

provided

∫_Ω v(x) φ(x) dx = (−1)^{|α|} ∫_Ω u D^α φ dx

for all test functions φ ∈ D(Ω).
Example 1.1.3 For n = 1 and Ω = (−π, 1), let

u(x) = { cos x   if −π < x ≤ 0,
       { 1 − x   if 0 < x < 1.

Then its weak derivative u′(x) can be represented by

v(x) = { −sin x   if −π < x ≤ 0,
       { −1       if 0 < x < 1.
To see this, we verify, for any φ ∈ D(Ω), that

∫_{−π}^{1} u(x) φ′(x) dx = − ∫_{−π}^{1} v(x) φ(x) dx.      (1.7)

In fact, through integration by parts, we have

∫_{−π}^{1} u(x) φ′(x) dx = ∫_{−π}^{0} cos x · φ′(x) dx + ∫_{0}^{1} (1 − x) φ′(x) dx

  = ∫_{−π}^{0} sin x · φ(x) dx + φ(0) − φ(0) + ∫_{0}^{1} φ(x) dx

  = − ∫_{−π}^{1} v(x) φ(x) dx.
In this example, one can see that, in the classical sense, the function u is
not differentiable at x = 0. Since the weak derivative is defined by integrals,
one may alter the values of the weak derivative v(x) on a set of measure zero,
and (1.7) still holds. However, it is unique up to a set of measure zero.
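Identity (1.7) can also be checked numerically for a particular test function. The sketch below (an illustrative addition assuming Python with NumPy/SciPy; the bump chosen here is just one arbitrary element of D(Ω)) compares the two integrals.

```python
import numpy as np
from scipy.integrate import quad

# u and its weak derivative v from Example 1.1.3 on (-pi, 1)
u = lambda x: np.cos(x) if x <= 0 else 1.0 - x
v = lambda x: -np.sin(x) if x <= 0 else -1.0

# one concrete test function: a smooth bump supported in (-1, 1) ⊂ (-pi, 1)
phi  = lambda x: np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0
dphi = lambda x: -2.0 * x * phi(x) / (1.0 - x**2)**2 if abs(x) < 1 else 0.0

# the two sides of (1.7); 'points' flags the kink of u at 0 and the support edge
lhs, _ = quad(lambda x: u(x) * dphi(x), -np.pi, 1.0, points=[-1.0, 0.0])
rhs, _ = quad(lambda x: -v(x) * phi(x), -np.pi, 1.0, points=[-1.0, 0.0])
gap = abs(lhs - rhs)                       # agreement up to quadrature error
```

Changing v on a set of measure zero (say, its value at x = 0) would leave both integrals, and hence `gap`, unchanged, which is the point made in the text.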
Lemma 1.1.1 (Uniqueness of Weak Derivatives). If v and w are both αth
weak partial derivatives of u, D^α u, then v(x) = w(x) almost everywhere.

Proof. By the definition of the weak derivatives, we have

∫_Ω v(x) φ(x) dx = (−1)^{|α|} ∫_Ω u(x) D^α φ dx = ∫_Ω w(x) φ(x) dx

for any φ ∈ D(Ω). It follows that

∫_Ω (v(x) − w(x)) φ(x) dx = 0,  ∀ φ ∈ D(Ω).

Therefore, we must have

v(x) = w(x) almost everywhere.  □
From the definition, we can view a weak derivative as a linear functional
acting on the space of test functions D(Ω), and we call it a distribution. More
generally, we have

Definition 1.1.2 A distribution is a continuous linear functional on D(Ω).
The linear space of distributions, or generalized functions, on Ω, denoted
by D′(Ω), is the collection of all continuous linear functionals on D(Ω).
Here, the continuity of a functional T on D(Ω) means that, for any
sequence {φ_k} ⊂ D(Ω) with φ_k → φ in D(Ω), we have

T(φ_k) → T(φ), as k → ∞;

and we say that φ_k → φ in D(Ω) if
a) there exists K ⊂⊂ Ω such that supp φ_k, supp φ ⊂ K, and
b) for any α, D^α φ_k → D^α φ uniformly as k → ∞.
The most important and most commonly used distributions are locally
summable functions. In fact, for any f ∈ L^p_loc(Ω) with 1 ≤ p ≤ ∞, consider

T_f(φ) = ∫_Ω f(x) φ(x) dx.

It is easy to verify that T_f(·) is a continuous linear functional on D(Ω), and
hence it is a distribution.
For any distribution µ, if there is an f ∈ L^1_loc(Ω) such that

µ(φ) = T_f(φ), ∀ φ ∈ D(Ω),

then we say that µ is (or can be realized as) a locally summable function and
identify it with f.
An interesting example of a distribution that is not a locally summable
function is the well-known Dirac delta function. Let x^o be a point in Ω. For
any φ ∈ D(Ω), the delta function at x^o can be defined as

δ_{x^o}(φ) = φ(x^o).

Hence it is a distribution. However, one can show that such a delta function is
not locally summable. It is not a function at all. This kind of “function” has
been used widely and so successfully by physicists and engineers, who often
simply view δ_{x^o} as

δ_{x^o}(x) = { 0,  for x ≠ x^o,
            { ∞,  for x = x^o.
Surprisingly, such a delta function is the derivative of some function in the
following distributional sense. To explain this, let Ω = (−1, 1), and let

f(x) = { 0,  for x < 0,
       { 1,  for x ≥ 0.

Then, we have

− ∫_{−1}^{1} f(x) φ′(x) dx = − ∫_{0}^{1} φ′(x) dx = φ(0) = δ_0(φ).

Comparing this with the definition of weak derivatives, we may regard δ_0(x)
as f′(x) in the sense of distributions.
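This distributional identity admits a quick numerical sanity check. The sketch below (illustrative, assuming Python with NumPy/SciPy) evaluates −∫ f φ′ dx for one smooth bump φ and compares the result with φ(0).

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 0.0 if x < 0 else 1.0                        # the jump function above

# a smooth test function supported in (-1, 1)
phi  = lambda x: np.exp(-1.0 / (1.0 - x**2)) if abs(x) < 1 else 0.0
dphi = lambda x: -2.0 * x * phi(x) / (1.0 - x**2)**2 if abs(x) < 1 else 0.0

# -∫ f(x) φ'(x) dx over (-1, 1); 'points' flags the jump of f at x = 0
val, _ = quad(lambda x: -f(x) * dphi(x), -1.0, 1.0, points=[0.0])
gap = abs(val - phi(0.0))                                  # δ_0(φ) = φ(0)
```

The integral collapses to φ(0) − φ(1) = φ(0), since the test function vanishes at the endpoint, matching the computation in the text.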
1.2 Sobolev Spaces
Now suppose that, given a function f ∈ L^p(Ω), we want to solve the partial
differential equation

∆u = f(x)

in the sense of weak derivatives. Naturally, we would seek solutions u such
that ∆u is in L^p(Ω). More generally, we would start from the collection of
all distributions whose second weak derivatives are in L^p(Ω).

In a variational approach, as we have seen in the introduction, to seek
critical points of the functional

(1/2) ∫_Ω |∇u|^2 dx − ∫_Ω F(x, u) dx,

a natural set of functions to start with is the collection of distributions whose
first weak derivatives are in L^2(Ω). More generally, we have
Definition 1.2.1 The Sobolev space W^{k,p}(Ω) (k ≥ 0 and p ≥ 1) is the
collection of all distributions u on Ω such that, for every multi-index α with
|α| ≤ k, D^α u can be realized as an L^p function on Ω. Furthermore, W^{k,p}(Ω)
is a Banach space with the norm

‖u‖ := ( ∑_{|α| ≤ k} ∫_Ω |D^α u(x)|^p dx )^{1/p}.

In the special case p = 2, it is also a Hilbert space, and we usually denote
it by H^k(Ω).
Definition 1.2.2 W^{k,p}_0(Ω) is the closure of C_0^∞(Ω) in W^{k,p}(Ω).

Intuitively, W^{k,p}_0(Ω) is the space of functions whose derivatives up to
order (k − 1) vanish on the boundary.
Example 1.2.1 Let Ω = (−1, 1). Consider the function f(x) = |x|^β. For
0 < β < 1, it is obviously not differentiable at the origin. However, for any
1 ≤ p < 1/(1 − β), it is in the Sobolev space W^{1,p}(Ω). More generally, let Ω
be the open unit ball centered at the origin in R^n; then the function |x|^β is in
W^{1,p}(Ω) if and only if

β > 1 − n/p.      (1.8)

To see this, we first calculate

f_{x_i}(x) = β x_i / |x|^{2−β}, for x ≠ 0,

and hence
|∇f(x)| = β / |x|^{1−β}.      (1.9)

Fix a small ε > 0. Then for any φ ∈ D(Ω), by integration by parts, we
have

∫_{Ω∖B_ε(0)} f_{x_i}(x) φ(x) dx = − ∫_{Ω∖B_ε(0)} f(x) φ_{x_i}(x) dx + ∫_{∂B_ε(0)} f φ ν_i dS,

where ν = (ν_1, ν_2, …, ν_n) is the inward normal vector on ∂B_ε(0).
Now, under the condition that β > 1 − n/p, f_{x_i} is in L^p(Ω) ⊂ L^1(Ω), and

| ∫_{∂B_ε(0)} f φ ν_i dS | ≤ C ε^{n−1+β} → 0, as ε → 0.

It follows that

∫_Ω |x|^β φ_{x_i}(x) dx = − ∫_Ω (β x_i / |x|^{2−β}) φ(x) dx.

Therefore, the weak first partial derivatives of |x|^β are

β x_i / |x|^{2−β},  i = 1, 2, …, n.

Moreover, from (1.9), one can see that |∇f| is in L^p(Ω) if and only if

β > 1 − n/p.
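The algebra behind condition (1.8) can be double-checked mechanically. In polar coordinates, ∫_B |∇f|^p dx is proportional to the radial integral ∫_0^1 r^{(β−1)p + n−1} dr, which converges exactly when the exponent exceeds −1. The short script below (an illustrative addition; plain Python) confirms that this radial criterion coincides with (1.8) over a range of parameters.

```python
# |∇f| = β |x|^{β-1}, so ∫_B |∇f|^p dx ∝ ∫_0^1 r^{(β-1)p + n-1} dr,
# which is finite exactly when (β - 1)p + n - 1 > -1.
def radial_integral_finite(beta, n, p):
    return (beta - 1.0) * p + (n - 1) > -1.0

# condition (1.8) from the text
def condition_1_8(beta, n, p):
    return beta > 1.0 - n / p

for n in (1, 2, 3):
    for p in (1, 2, 4):
        for beta in (-0.5, 0.1, 0.5, 0.9):
            assert radial_integral_finite(beta, n, p) == condition_1_8(beta, n, p)
```

The equivalence is just the chain (β − 1)p + n − 1 > −1 ⟺ (β − 1)p > −n ⟺ β > 1 − n/p, which the loop verifies case by case.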
1.3 Approximation by Smooth Functions
While working in Sobolev spaces, for instance when proving inequalities, it
may feel inconvenient and cumbersome to manage the weak derivatives
directly based on their definition. To get around this, in this section we will show
that any function in a Sobolev space can be approximated by a sequence of
smooth functions. In other words, the smooth functions are dense in Sobolev
spaces. Based on this, when deriving many useful properties of Sobolev spaces,
we can just work on smooth functions and then take limits.

At the end of the section, for more convenient application of the
approximation results, we introduce an Operator Completion Theorem. We also prove
an Extension Theorem. Both theorems will be used frequently in the next few
sections.
The idea in approximation is based on mollifiers. Let

j(x) = { c e^{1/(|x|^2 − 1)}   if |x| < 1,
       { 0                     if |x| ≥ 1.

One can verify that

j(x) ∈ C_0^∞(B_1(0)).

Choose the constant c such that

∫_{R^n} j(x) dx = 1.

For each ε > 0, define

j_ε(x) = (1/ε^n) j(x/ε).

Then obviously

∫_{R^n} j_ε(x) dx = 1, ∀ ε > 0.

One can also verify that j_ε ∈ C_0^∞(B_ε(0)), and

lim_{ε→0} j_ε(x) = { 0   for x ≠ 0,
                   { ∞   for x = 0.
The above observations suggest that the limit of j_ε(x) may be viewed as a
delta function, and from the well-known property of the delta function, we
would naturally expect, for any continuous function f(x) and for any point
x ∈ Ω,

(J_ε f)(x) := ∫_Ω j_ε(x − y) f(y) dy → f(x), as ε → 0.      (1.10)

More generally, if f(x) is only in L^p(Ω), we would expect (1.10) to hold almost
everywhere. We call j_ε(x) a mollifier and (J_ε f)(x) the mollification of f(x).
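In one dimension these properties are easy to check numerically. The sketch below (illustrative, assuming Python with NumPy/SciPy) builds the one-dimensional analogue of j and j_ε and verifies that the total mass stays equal to 1 as ε shrinks, which is exactly why j_ε acts like an approximate delta function.

```python
import numpy as np
from scipy.integrate import quad

raw = lambda x: np.exp(1.0 / (x*x - 1.0)) if abs(x) < 1 else 0.0  # unnormalized bump
c = 1.0 / quad(raw, -1.0, 1.0)[0]                                 # so that ∫ j = 1
j = lambda x: c * raw(x)

def j_eps(x, eps):
    return j(x / eps) / eps            # 1-D case of (1/ε^n) j(x/ε)

masses = []
for eps in (1.0, 0.5, 0.1):
    m, _ = quad(lambda x: j_eps(x, eps), -eps, eps)  # integrate over supp j_eps
    masses.append(m)                                  # each mass ≈ 1
```

As ε decreases the bump narrows and its peak grows like 1/ε, but the rescaling keeps every mass at 1; this is the conservation of mass behind (1.10).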
We will show that, for each ε > 0, J_ε f is a C^∞ function, and as ε → 0,

J_ε f → f in W^{k,p}.

Notice that actually

(J_ε f)(x) = ∫_{B_ε(x)∩Ω} j_ε(x − y) f(y) dy;

hence, in order for J_ε f(x) to approximate f(x) well, we need B_ε(x) to be
completely contained in Ω to ensure that

∫_{B_ε(x)∩Ω} j_ε(x − y) dy = 1

(one of the important properties of the delta function). Equivalently, we need
x to be in the interior of Ω. For this reason, we first prove a local
approximation theorem.
Theorem 1.3.1 (Local approximation by smooth functions).
For any f ∈ W^{k,p}(Ω), J_ε f ∈ C^∞(R^n) and J_ε f → f in W^{k,p}_loc(Ω)
as ε → 0.
Then, to extend this result to the entire Ω, we will choose infinitely many
open sets O_i, i = 1, 2, …, each of which has a positive distance to the
boundary of Ω, and whose union is the whole Ω. Based on the above theorem, we
are able to approximate a W^{k,p}(Ω) function on each O_i by a sequence of
smooth functions. Combining this with a partition of unity, and a cut-off
function if Ω is unbounded, we will then prove
Theorem 1.3.2 (Global approximation by smooth functions).
For any f ∈ W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂
C^∞(Ω) ∩ W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.
Theorem 1.3.3 (Global approximation by smooth functions up to the
boundary).
Assume that Ω is bounded with C^1 boundary ∂Ω. Then for any f ∈
W^{k,p}(Ω), there exists a sequence of functions {f_m} ⊂ C^∞(Ω̄) ⊂ C^∞(Ω) ∩
W^{k,p}(Ω) such that f_m → f in W^{k,p}(Ω) as m → ∞.
When Ω is the entire space R^n, approximation by C^∞ functions and by
C_0^∞ functions are essentially the same. We have

Theorem 1.3.4 W^{k,p}(R^n) = W^{k,p}_0(R^n). In other words, for any
f ∈ W^{k,p}(R^n), there exists a sequence of functions {f_m} ⊂ C_0^∞(R^n)
such that

f_m → f in W^{k,p}(R^n), as m → ∞.
Proof of Theorem 1.3.1

We prove the theorem in three steps.

In Step 1, we show that $J_\epsilon f \in C^\infty(R^n)$ and

\[ \|J_\epsilon f\|_{L^p(\Omega)} \le \|f\|_{L^p(\Omega)}. \]

From the definition of $J_\epsilon f(x)$, we can see that it is well defined for all $x \in R^n$, and it vanishes if $x$ is of distance $\epsilon$ away from $\Omega$. Here and in the following, for simplicity of argument, we extend $f$ to be zero outside of $\Omega$.

In Step 2, we prove that, if $f$ is in $L^p(\Omega)$, then

\[ J_\epsilon f \to f \ \text{in } L^p_{loc}(\Omega). \]

We first verify this for continuous functions and then approximate $L^p$ functions by continuous functions.

In Step 3, we reach the conclusion of the Theorem. For each $f \in W^{k,p}(\Omega)$ and $|\alpha| \le k$, $D^\alpha f$ is in $L^p(\Omega)$. Then from the result in Step 2, we have

\[ J_\epsilon(D^\alpha f) \to D^\alpha f \ \text{in } L^p_{loc}(\Omega). \]

Hence what we need to verify is
12 1 Introduction to Sobolev Spaces
\[ D^\alpha(J_\epsilon f)(x) = J_\epsilon(D^\alpha f)(x). \]

As the readers will notice, the arguments in the last two steps work only on compact subsets of $\Omega$.
Step 1.

Let $e_i = (0, \dots, 0, 1, 0, \dots, 0)$ be the unit vector in the $x_i$ direction. Fix $\epsilon > 0$ and $x \in R^n$. By the definition of $J_\epsilon f$, we have, for $|h| < \epsilon$,

\[ \frac{(J_\epsilon f)(x + h e_i) - (J_\epsilon f)(x)}{h} = \int_{B_{2\epsilon}(x) \cap \Omega} \frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h}\, f(y)\,dy. \quad (1.11) \]

Since, as $h \to 0$,

\[ \frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h} \to \frac{\partial j_\epsilon(x - y)}{\partial x_i} \]

uniformly for all $y \in B_{2\epsilon}(x) \cap \Omega$, we can pass the limit through the integral sign in (1.11) to obtain

\[ \frac{\partial (J_\epsilon f)(x)}{\partial x_i} = \int_\Omega \frac{\partial j_\epsilon(x - y)}{\partial x_i}\, f(y)\,dy. \]

Similarly, we have

\[ D^\alpha (J_\epsilon f)(x) = \int_\Omega D^\alpha_x j_\epsilon(x - y) f(y)\,dy. \]

Noticing that $j_\epsilon(\cdot)$ is infinitely differentiable, we conclude that $J_\epsilon f$ is also infinitely differentiable.

Then we derive

\[ \|J_\epsilon f\|_{L^p(\Omega)} \le \|f\|_{L^p(\Omega)}. \quad (1.12) \]

By the Hölder inequality, we have

\[
|(J_\epsilon f)(x)| = \left| \int_\Omega j_\epsilon^{\frac{p-1}{p}}(x-y)\, j_\epsilon^{\frac1p}(x-y) f(y)\,dy \right|
\le \left( \int_\Omega j_\epsilon(x-y)\,dy \right)^{\frac{p-1}{p}} \left( \int_\Omega j_\epsilon(x-y)|f(y)|^p dy \right)^{\frac1p}
\le \left( \int_{B_\epsilon(x)} j_\epsilon(x-y)|f(y)|^p dy \right)^{\frac1p}.
\]
It follows that

\[
\int_\Omega |(J_\epsilon f)(x)|^p dx \le \int_\Omega \int_{B_\epsilon(x)} j_\epsilon(x-y)|f(y)|^p dy\,dx
\le \int_\Omega |f(y)|^p \int_{B_\epsilon(y)} j_\epsilon(x-y)\,dx\,dy \le \int_\Omega |f(y)|^p dy.
\]
This veriﬁes (1.12).
Step 2.

We prove that, for any compact subset $K$ of $\Omega$,

\[ \|J_\epsilon f - f\|_{L^p(K)} \to 0, \quad \text{as } \epsilon \to 0. \quad (1.13) \]

We first show this for a continuous function $f$. By writing

\[ (J_\epsilon f)(x) - f(x) = \int_{B_\epsilon(0)} j_\epsilon(y)[f(x-y) - f(x)]\,dy, \]

we have

\[
|(J_\epsilon f)(x) - f(x)| \le \max_{x \in K,\, |y| < \epsilon} |f(x-y) - f(x)| \int_{B_\epsilon(0)} j_\epsilon(y)\,dy
\le \max_{x \in K,\, |y| < \epsilon} |f(x-y) - f(x)|.
\]

Due to the continuity of $f$ and the compactness of $K$, the last term in the above inequality tends to zero uniformly as $\epsilon \to 0$. This verifies (1.13) for continuous functions.

For any $f$ in $L^p(\Omega)$, and given any $\delta > 0$, choose a continuous function $g$ such that

\[ \|f - g\|_{L^p(\Omega)} < \frac{\delta}{3}. \quad (1.14) \]

This can be derived from the well-known fact that any $L^p$ function can be approximated by a simple function of the form $\sum_{j=1}^k a_j \chi_j(x)$, where $\chi_j$ is the characteristic function of some measurable set $A_j$; and each simple function can be approximated by a continuous function.
For the continuous function $g$, (1.13) implies that for sufficiently small $\epsilon$, we have

\[ \|J_\epsilon g - g\|_{L^p(K)} < \frac{\delta}{3}. \]

It follows from this and (1.14) that

\[
\|J_\epsilon f - f\|_{L^p(K)} \le \|J_\epsilon f - J_\epsilon g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} + \|g - f\|_{L^p(K)}
\le 2\|f - g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} \le 2\,\frac{\delta}{3} + \frac{\delta}{3} = \delta.
\]
This proves (1.13).
Step 3.
Now assume that $f \in W^{k,p}(\Omega)$. Then for any $\alpha$ with $|\alpha| \le k$, we have $D^\alpha f \in L^p(\Omega)$. We show that
\[ D^\alpha(J_\epsilon f) \to D^\alpha f \quad \text{in } L^p(K). \quad (1.15) \]

By the result in Step 2, we have

\[ J_\epsilon(D^\alpha f) \to D^\alpha f \quad \text{in } L^p(K). \]

Now what is left to verify is, for sufficiently small $\epsilon$,

\[ D^\alpha(J_\epsilon f)(x) = J_\epsilon(D^\alpha f)(x), \quad \forall x \in K. \]

To see this, we fix a point $x$ in $K$. Choose $\epsilon < \frac12 \mathrm{dist}(K, \partial\Omega)$; then any point $y$ in the ball $B_\epsilon(x)$ is in the interior of $\Omega$. Consequently,

\[
\begin{aligned}
D^\alpha(J_\epsilon f)(x) &= \int_\Omega D^\alpha_x j_\epsilon(x-y) f(y)\,dy = \int_{B_\epsilon(x)} D^\alpha_x j_\epsilon(x-y) f(y)\,dy \\
&= (-1)^{|\alpha|} \int_{B_\epsilon(x)} D^\alpha_y j_\epsilon(x-y) f(y)\,dy = \int_{B_\epsilon(x)} j_\epsilon(x-y) D^\alpha f(y)\,dy \\
&= \int_\Omega j_\epsilon(x-y) D^\alpha f(y)\,dy.
\end{aligned}
\]

Here we have used the fact that $j_\epsilon(x-y)$, as a function of $y$, and all its derivatives are supported in $B_\epsilon(x) \subset \Omega$.

This completes the proof of Theorem 1.3.1. $\square$
The Proof of Theorem 1.3.2

Step 1. For a Bounded Region $\Omega$.

Let

\[ \Omega_i = \{ x \in \Omega \mid \mathrm{dist}(x, \partial\Omega) > \tfrac1i \}, \quad i = 1, 2, 3, \dots \]

Write $O_i = \Omega_{i+3} \setminus \bar\Omega_{i+1}$. Choose some open set $O_0 \subset\subset \Omega$ so that $\Omega = \cup_{i=0}^\infty O_i$. Choose a smooth partition of unity $\{\eta_i\}_{i=0}^\infty$ associated with the open sets $\{O_i\}_{i=0}^\infty$:

\[ 0 \le \eta_i(x) \le 1, \quad \eta_i \in C_0^\infty(O_i), \qquad \sum_{i=0}^\infty \eta_i(x) = 1, \ x \in \Omega. \]

Given any function $f \in W^{k,p}(\Omega)$, obviously $\eta_i f \in W^{k,p}(\Omega)$ and $\mathrm{supp}(\eta_i f) \subset O_i$. Fix a $\delta > 0$. Choose $\epsilon_i > 0$ so small that $f_i := J_{\epsilon_i}(\eta_i f)$ satisfies

\[ \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \le \frac{\delta}{2^{i+1}}, \quad i = 0, 1, \dots, \qquad \mathrm{supp}\, f_i \subset (\Omega_{i+4} \setminus \bar\Omega_i), \quad i = 1, 2, \dots \quad (1.16) \]
Set $g = \sum_{i=0}^\infty f_i$. Then $g \in C^\infty(\Omega)$, because each $f_i$ is in $C^\infty(\Omega)$, and for each open set $O \subset\subset \Omega$, there are at most finitely many nonzero terms in the sum. To see that $g \in W^{k,p}(\Omega)$, we write

\[ g - f = \sum_{i=0}^\infty (f_i - \eta_i f). \]

It follows from (1.16) that

\[ \sum_{i=0}^\infty \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \le \delta \sum_{i=0}^\infty \frac{1}{2^{i+1}} = \delta. \]

From here we can see that the series $\sum_{i=0}^\infty (f_i - \eta_i f)$ converges in $W^{k,p}(\Omega)$ (see Exercise 1.3.1 below), hence $(g - f) \in W^{k,p}(\Omega)$, and therefore $g \in W^{k,p}(\Omega)$. Moreover, from the above inequality, we have

\[ \|g - f\|_{W^{k,p}(\Omega)} \le \delta. \]

Since $\delta > 0$ is arbitrary, this completes Step 1.
Remark 1.3.1 In the above argument, neither $\sum f_i$ nor $\sum \eta_i f$ converges, but their difference converges. Although each partial sum of $\sum f_i$ is in $C_0^\infty(\Omega)$, the infinite series $\sum f_i$ does not converge to $g$ in $W^{k,p}(\Omega)$. Then how did we prove that $g \in W^{k,p}(\Omega)$? We showed that the difference $(g - f)$ is in $W^{k,p}(\Omega)$.

Exercise 1.3.1 Let $X$ be a Banach space and $v_i \in X$. Show that the series $\sum_i^\infty v_i$ converges in $X$ if $\sum_i^\infty \|v_i\| < \infty$.
Hint: Show that the partial sum is a Cauchy sequence.
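The point of the exercise can be made concrete in a specific Banach space. In the sketch below (an illustration: the choice $X = R^3$, the random directions, and the decay rate $\|v_i\| = 1/i^2$ are assumptions), the triangle inequality bounds the gap between partial sums by a tail of the convergent series of norms, which is exactly the Cauchy property asked for in the hint:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
dirs = rng.normal(size=(N, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit vectors in X = R^3
v = dirs / (np.arange(1, N + 1)[:, None] ** 2)        # ||v_i|| = 1/i^2, summable

S = np.cumsum(v, axis=0)                              # partial sums S_m = v_1 + ... + v_m

def tail(l, m):
    """sum_{i=l+1}^m ||v_i|| = sum_{i=l+1}^m 1/i^2, a tail of a convergent series."""
    return float(np.sum(1.0 / np.arange(l + 1, m + 1, dtype=float) ** 2))

# triangle inequality: ||S_m - S_l|| <= sum_{i=l+1}^m ||v_i|| -> 0 as l, m -> oo
assert np.linalg.norm(S[4999] - S[999]) <= tail(1000, 5000)
assert np.linalg.norm(S[4999] - S[2499]) <= tail(2500, 5000)
```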
Step 2. For an Unbounded Region $\Omega$.

Given any $\delta > 0$, since $f \in W^{k,p}(\Omega)$, there exists $R > 0$ such that

\[ \|f\|_{W^{k,p}(\Omega \setminus B_{R-2}(0))} \le \delta. \quad (1.17) \]

Choose a cut-off function $\varphi \in C^\infty(R^n)$ satisfying

\[ \varphi(x) = \begin{cases} 1, & x \in B_{R-2}(0), \\ 0, & x \in R^n \setminus B_R(0), \end{cases} \qquad \text{and} \quad |D^\alpha \varphi(x)| \le 1 \ \forall x \in R^n, \ \forall |\alpha| \le k. \]

Then by (1.17), it is easy to verify that there exists a constant $C$ such that

\[ \|\varphi f - f\|_{W^{k,p}(\Omega)} \le C\delta. \quad (1.18) \]

Now in the bounded domain $\Omega \cap B_R(0)$, by the argument in Step 1, there is a function $g \in C^\infty(\Omega \cap B_R(0))$ such that
\[ \|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} \le \delta. \quad (1.19) \]

Obviously, the function $\varphi g$ is in $C^\infty(\Omega)$, and by (1.18) and (1.19),

\[
\|\varphi g - f\|_{W^{k,p}(\Omega)} \le \|\varphi g - \varphi f\|_{W^{k,p}(\Omega)} + \|\varphi f - f\|_{W^{k,p}(\Omega)}
\le C_1 \|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} + C\delta \le (C_1 + C)\delta.
\]

This completes the proof of Theorem 1.3.2. $\square$
The Proof of Theorem 1.3.3

We will still use mollifiers to approximate the function. Compared with the proofs of the previous theorems, the main difficulty here is that at a point on the boundary there is no room to mollify. To circumvent this, we cover $\Omega$ with finitely many open sets, and on each set that covers the boundary layer we translate the function a little bit inward, so that there is room to mollify within $\Omega$. Then we again use the partition of unity to complete the proof.

Step 1. Approximating in a Small Set Covering $\partial\Omega$.

Let $x^o$ be any point on $\partial\Omega$. Since $\partial\Omega$ is $C^1$, we can make a $C^1$ change of coordinates locally, so that in the new coordinate system $(x_1, \dots, x_n)$ we can express, for a sufficiently small $r > 0$,

\[ B_r(x^o) \cap \Omega = \{ x \in B_r(x^o) \mid x_n > \Phi(x_1, \dots, x_{n-1}) \} \]

with some $C^1$ function $\Phi$.

Set

\[ D = \Omega \cap B_{r/2}(x^o). \]

Shift every point $x \in D$ in the $x_n$ direction by $a\epsilon$ units: define

\[ x^\epsilon = x + a\epsilon e_n \quad \text{and} \quad D^\epsilon = \{ x^\epsilon \mid x \in D \}. \]

This $D^\epsilon$ is obtained by shifting $D$ toward the inside of $\Omega$ by $a\epsilon$ units. Choose $a$ sufficiently large, so that the ball $B_\epsilon(x)$ lies in $\Omega \cap B_r(x^o)$ for all $x \in D^\epsilon$ and for all small $\epsilon > 0$. Now, there is room to mollify a given $W^{k,p}$ function $f$ on $D^\epsilon$ within $\Omega$. More precisely, we first translate $f$ a distance $a\epsilon$ in the $x_n$ direction to obtain $f^\epsilon(x) = f(x^\epsilon)$, then mollify it. We claim that

\[ J_\epsilon f^\epsilon \to f \quad \text{in } W^{k,p}(D). \]

Actually, for any multi-index $|\alpha| \le k$, we have

\[ \|D^\alpha(J_\epsilon f^\epsilon) - D^\alpha f\|_{L^p(D)} \le \|D^\alpha(J_\epsilon f^\epsilon) - D^\alpha f^\epsilon\|_{L^p(D)} + \|D^\alpha f^\epsilon - D^\alpha f\|_{L^p(D)}. \]
A similar argument as in the proof of Theorem 1.3.1 implies that the first term on the right-hand side goes to zero as $\epsilon \to 0$, while the second term also vanishes in the limit, due to the continuity of translation in the $L^p$ norm.
Step 2. Applying the Partition of Unity.

Since $\partial\Omega$ is compact, we can find finitely many such sets $D$, call them $D_i$, $i = 1, 2, \dots, N$, the union of which covers $\partial\Omega$. Given $\delta > 0$, from the argument in Step 1, for each $D_i$ there exists $g_i \in C^\infty(\bar D_i)$ such that

\[ \|g_i - f\|_{W^{k,p}(D_i)} \le \delta. \quad (1.20) \]

Choose an open set $D_0 \subset\subset \Omega$ such that $\Omega \subset \cup_{i=0}^N D_i$, and select a function $g_0 \in C^\infty(\bar D_0)$ such that

\[ \|g_0 - f\|_{W^{k,p}(D_0)} \le \delta. \quad (1.21) \]

Let $\{\eta_i\}$ be a smooth partition of unity subordinate to the open sets $\{D_i\}_{i=0}^N$. Define

\[ g = \sum_{i=0}^N \eta_i g_i. \]

Then obviously $g \in C^\infty(\bar\Omega)$, and $f = \sum_{i=0}^N \eta_i f$. Similar to the proof of Theorem 1.3.1, it follows from (1.20) and (1.21) that, for each $|\alpha| \le k$,

\[
\|D^\alpha g - D^\alpha f\|_{L^p(\Omega)} \le \sum_{i=0}^N \|D^\alpha(\eta_i g_i) - D^\alpha(\eta_i f)\|_{L^p(D_i)}
\le C \sum_{i=0}^N \|g_i - f\|_{W^{k,p}(D_i)} = C(N+1)\delta.
\]

This completes the proof of Theorem 1.3.3. $\square$
The Proof of Theorem 1.3.4

Let $\varphi(r)$ be a $C_0^\infty$ cut-off function such that

\[ \varphi(r) = \begin{cases} 1, & \text{for } 0 \le r \le 1; \\ \text{between } 0 \text{ and } 1, & \text{for } 1 < r < 2; \\ 0, & \text{for } r \ge 2. \end{cases} \]

Then by a direct computation, one can show that

\[ \left\| \varphi\!\left(\frac{|x|}{R}\right) f(x) - f(x) \right\|_{W^{k,p}(R^n)} \to 0, \quad \text{as } R \to \infty. \quad (1.22) \]

Thus, there exists a sequence of numbers $\{R_m\}$ with $R_m \to \infty$, such that for
\[ g_m(x) := \varphi\!\left(\frac{|x|}{R_m}\right) f(x), \]

we have

\[ \|g_m - f\|_{W^{k,p}(R^n)} \le \frac1m. \quad (1.23) \]

On the other hand, from the Approximation Theorem 1.3.3, for each fixed $m$,

\[ J_\epsilon(g_m) \to g_m, \ \text{as } \epsilon \to 0, \ \text{in } W^{k,p}(R^n). \]

Hence there exist $\epsilon_m$ such that for

\[ f_m := J_{\epsilon_m}(g_m), \]

we have

\[ \|f_m - g_m\|_{W^{k,p}(R^n)} \le \frac1m. \quad (1.24) \]

Obviously, each function $f_m$ is in $C_0^\infty(R^n)$, and by (1.23) and (1.24),

\[ \|f_m - f\|_{W^{k,p}(R^n)} \le \|f_m - g_m\|_{W^{k,p}(R^n)} + \|g_m - f\|_{W^{k,p}(R^n)} \le \frac2m \to 0, \ \text{as } m \to \infty. \]

This completes the proof of Theorem 1.3.4. $\square$
Now we have proved all four Approximation Theorems, which show that smooth functions are dense in the Sobolev spaces $W^{k,p}$. In other words, $W^{k,p}$ is the completion of $C^k$ under the norm $\|\cdot\|_{W^{k,p}}$. Later, particularly in the next three sections, when we derive various inequalities in Sobolev spaces, we can first work on smooth functions and then extend the results to the whole Sobolev space. To make such extensions more convenient, without going through the approximation process in each particular case, we prove the following Operator Completion Theorem in general Banach spaces.
Theorem 1.3.5 Let $D$ be a dense linear subspace of a normed space $X$, and let $Y$ be a Banach space. Assume

\[ T : D \to Y \]

is a bounded linear map. Then there exists an extension $\bar T$ of $T$ from $D$ to the whole space $X$, such that $\bar T$ is a bounded linear operator from $X$ to $Y$,

\[ \|\bar T\| = \|T\|, \quad \text{and} \quad \bar T x = T x \ \ \forall x \in D. \]
Proof. Given any element $x_o \in X$, since $D$ is dense in $X$, there exists a sequence $\{x_i\} \subset D$ such that

\[ x_i \to x_o, \ \text{as } i \to \infty. \]
It follows that

\[ \|T x_i - T x_j\| \le \|T\|\,\|x_i - x_j\| \to 0, \ \text{as } i, j \to \infty. \]

This implies that $\{T x_i\}$ is a Cauchy sequence in $Y$. Since $Y$ is a Banach space, $\{T x_i\}$ converges to some element $y_o$ in $Y$. Let

\[ \bar T x_o = y_o. \quad (1.25) \]

To see that (1.25) is well defined, suppose there is another sequence $\{x_i'\}$ that converges to $x_o$ in $X$ and $T x_i' \to y_1$ in $Y$. Then

\[ \|y_1 - y_o\| = \lim_{i \to \infty} \|T x_i' - T x_i\| \le C \lim_{i \to \infty} \|x_i' - x_i\| = 0. \]

Now, for $x \in D$, define $\bar T x = T x$; and for other $x \in X$, define $\bar T x$ by (1.25). Obviously, $\bar T$ is a linear operator. Moreover,

\[ \|\bar T x_o\| = \lim_{i \to \infty} \|T x_i\| \le \|T\| \lim_{i \to \infty} \|x_i\| = \|T\|\,\|x_o\|. \]

Hence $\bar T$ is also bounded. This completes the proof. $\square$
As a direct application of this Operator Completion Theorem, we prove the following Extension Theorem.

Theorem 1.3.6 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Then for any open set $O \supset \bar\Omega$, there exists a bounded linear operator $E : W^{k,p}(\Omega) \to W_0^{k,p}(O)$ such that $Eu = u$ a.e. in $\Omega$.

Proof. To define the extension operator $E$, we first work on functions $u \in C^k(\bar\Omega)$. Then we can apply the Operator Completion Theorem to extend $E$ to $W^{k,p}(\Omega)$, since $C^k(\bar\Omega)$ is dense in $W^{k,p}(\Omega)$ (see Theorem 1.3.3). From this density, it is easy to show that, for $u \in W^{k,p}(\Omega)$,

\[ E(u)(x) = u(x) \quad \text{almost everywhere in } \Omega, \]

based on

\[ E(u)(x) = u(x) \ \text{on } \bar\Omega \quad \forall u \in C^k(\bar\Omega). \]
Now we prove the theorem for $u \in C^k(\bar\Omega)$. We define $E(u)$ in the following two steps.

Step 1. The special case when $\Omega$ is a half ball:

\[ B_r^+(0) := \{ x = (x', x_n) \in R^n \mid |x| < r, \ x_n > 0 \}. \]

Assume $u \in C^k(\overline{B_r^+(0)})$. We extend $u$ to be a $C^k$ function on the whole ball $B_r(0)$. Define

\[ \bar u(x', x_n) = \begin{cases} u(x', x_n), & x_n \ge 0, \\ \sum_{i=0}^k a_i\, u(x', -\lambda_i x_n), & x_n < 0. \end{cases} \]
To guarantee that all the partial derivatives $\frac{\partial^j \bar u(x', x_n)}{\partial x_n^j}$ up to the order $k$ are continuous across the hyperplane $x_n = 0$, we first pick $\lambda_i$ such that

\[ 0 < \lambda_0 < \lambda_1 < \cdots < \lambda_k < 1. \]

Then we solve the algebraic system

\[ \sum_{i=0}^k a_i (-\lambda_i)^j = 1, \quad j = 0, 1, \dots, k, \]

to determine $a_0, a_1, \dots, a_k$. The coefficient determinant $\det\big((-\lambda_i)^j\big)$ is a Vandermonde determinant at distinct nodes and hence nonzero, so the system has a unique solution. One can easily verify that, for such $\lambda_i$ and $a_i$, the extended function $\bar u$ is in $C^k(B_r(0))$.
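This reflection construction can be sketched numerically. In the snippet below (an illustration, not from the text: the nodes $\lambda_i = 0.2, \dots, 0.8$, the test profile $\cos$, and the tolerances are assumptions), we solve the Vandermonde system and check that the extension matches $u$ to order $k$ across $x_n = 0$, i.e. the mismatch with the Taylor polynomial decays like $t^{k+1}$:

```python
import numpy as np

k = 3
lam = np.array([0.2, 0.4, 0.6, 0.8])     # 0 < lam_0 < ... < lam_k < 1, distinct
# matching conditions: sum_i a_i (-lam_i)^j = 1 for j = 0..k (Vandermonde system)
V = np.vander(-lam, increasing=True).T    # V[j, i] = (-lam_i)^j
a = np.linalg.solve(V, np.ones(k + 1))

u = np.cos                                # smooth test profile in the x_n variable

def ubar(t):
    """Higher-order reflection of u across t = 0."""
    return float(u(t)) if t >= 0 else float(np.dot(a, u(-lam * t)))

# C^k matching: for t > 0, ubar(-t) agrees with the degree-k Taylor polynomial
# of u at 0, so the mismatch decays like t^(k+1) (here t^4)
def mismatch(t):
    taylor = 1.0 - t**2 / 2.0             # degree-3 Taylor polynomial of cos
    return abs(ubar(-t) - taylor)

ratio = mismatch(0.1) / mismatch(0.05)
assert 8.0 < ratio < 32.0                 # consistent with O(t^4): ratio ~ 2^4
assert mismatch(0.01) < 1e-6
```

The sign $(-\lambda_i)^j$ in the system comes from differentiating $u(x', -\lambda_i x_n)$ $j$ times in $x_n$; solving for the $a_i$ once makes the extension a bounded linear operation on $u$.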
Step 2. Reduce the general domain to the half-ball case of Step 1 via domain transformations and a partition of unity.

Let $O$ be an open set containing $\bar\Omega$. Given any $x^o \in \partial\Omega$, there exists a neighborhood $U$ of $x^o$ and a $C^k$ transformation $\varphi : U \to R^n$ which satisfies: $\varphi(x^o) = 0$, $\varphi(U \cap \partial\Omega)$ lies on the hyperplane $x_n = 0$, and $\varphi(U \cap \Omega)$ is contained in $R^n_+$. Then there exists an $r_o > 0$, such that $B_{r_o}(0) \subset\subset \varphi(U)$, $D^{x^o} := \varphi^{-1}(B_{r_o}(0))$ is an open set containing $x^o$, and $\varphi \in C^k(D^{x^o})$. Choose $r_o$ sufficiently small so that $D^{x^o} \subset O$. All such $D^{x^o}$ together with $\Omega$ form an open covering of $\bar\Omega$, hence there exist a finite subcovering

\[ D_0 := \Omega, \ D_1, \dots, D_m \]

and a corresponding partition of unity

\[ \eta_0, \eta_1, \dots, \eta_m, \]

such that

\[ \eta_i \in C_0^\infty(D_i), \quad i = 0, 1, \dots, m, \qquad \sum_{i=0}^m \eta_i(x) = 1 \quad \forall x \in \Omega. \]

Let $\varphi_i$ be the mapping associated with $D_i$ as described above, and let $\tilde u_i(y) = u(\varphi_i^{-1}(y))$ be the function defined on the half ball

\[ B^+_{r_i}(0) = \varphi_i(D_i \cap \bar\Omega), \quad i = 1, 2, \dots, m. \]

From Step 1, each $\tilde u_i(y)$ can be extended as a $C^k$ function $\bar u_i(y)$ onto the whole ball $B_{r_i}(0)$.

Now, we can define the extension of $u$ from $\Omega$ to its neighborhood $O$ as

\[ E(u) = \sum_{i=1}^m \eta_i(x)\, \bar u_i(\varphi_i(x)) + \eta_0(x)\, u(x). \]
Obviously, $E(u) \in C_0^k(O)$, and

\[ \|E(u)\|_{W^{k,p}(O)} \le C \|u\|_{W^{k,p}(\Omega)}. \]
Moreover, for any $x \in \bar\Omega$, we have

\[
\begin{aligned}
E(u)(x) &= \sum_{i=1}^m \eta_i(x)\, \bar u_i(\varphi_i(x)) + \eta_0(x) u(x) = \sum_{i=1}^m \eta_i(x)\, \tilde u_i(\varphi_i(x)) + \eta_0(x) u(x) \\
&= \sum_{i=1}^m \eta_i(x)\, u(\varphi_i^{-1}(\varphi_i(x))) + \eta_0(x) u(x) = \sum_{i=1}^m \eta_i(x) u(x) + \eta_0(x) u(x) \\
&= \left( \sum_{i=0}^m \eta_i(x) \right) u(x) = u(x).
\end{aligned}
\]

This completes the proof of the Extension Theorem. $\square$
Remark 1.3.2 Notice that the Extension Theorem actually implies an improved version of the Approximation Theorem under a stronger assumption on $\partial\Omega$ ($C^k$ instead of $C^1$). From the Extension Theorem, one can derive immediately

Corollary 1.3.1 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Let $O$ be an open set containing $\bar\Omega$, and let

\[ O_\epsilon := \{ x \in R^n \mid \mathrm{dist}(x, O) \le \epsilon \}. \]

Then the linear operator

\[ J_\epsilon(E(\cdot)) : W^{k,p}(\Omega) \to C_0^\infty(O_\epsilon) \]

is bounded, and for each $u \in W^{k,p}(\Omega)$, we have

\[ \|J_\epsilon(E(u)) - u\|_{W^{k,p}(\Omega)} \to 0, \quad \text{as } \epsilon \to 0. \]

Here, as compared with the previous approximation theorems, the improvement is that one can write out the explicit form of the approximation.
1.4 Sobolev Embeddings
When we seek weak solutions of partial differential equations, we start with functions in a Sobolev space $W^{k,p}$. It is then natural to ask whether the functions in this space automatically belong to some other spaces. The following theorem answers this question and at the same time provides inequalities among the relevant norms.
Theorem 1.4.1 (General Sobolev inequalities).
Assume $\Omega$ is bounded and has a $C^1$ boundary. Let $u \in W^{k,p}(\Omega)$.

(i) If $k < \frac np$, then $u \in L^q(\Omega)$ with $\frac1q = \frac1p - \frac kn$, and there exists a constant $C$ such that

\[ \|u\|_{L^q(\Omega)} \le C \|u\|_{W^{k,p}(\Omega)}. \]

(ii) If $k > \frac np$, then $u \in C^{\,k - [\frac np] - 1,\, \gamma}(\Omega)$, and there exists a constant $C$, such that

\[ \|u\|_{C^{\,k - [\frac np] - 1,\, \gamma}(\Omega)} \le C \|u\|_{W^{k,p}(\Omega)}, \]

where

\[ \gamma = \begin{cases} 1 + \left[\frac np\right] - \frac np, & \text{if } \frac np \text{ is not an integer}, \\ \text{any positive number} < 1, & \text{if } \frac np \text{ is an integer}. \end{cases} \]

Here, $[b]$ is the integer part of the number $b$.
The proof of the Theorem is based upon several simpler theorems. We first consider functions in $W^{1,p}(\Omega)$. From the definition, these functions apparently belong to $L^q(\Omega)$ for $1 \le q \le p$. Naturally, one would expect more, and what is more meaningful is to find out how large this $q$ can be. It is also more useful in practice to control the $L^q$ norm for larger $q$ by the $W^{1,p}$ norm, that is, by the norm of the derivatives,

\[ \|Du\|_{L^p(\Omega)} = \left( \int_\Omega |Du|^p dx \right)^{\frac1p}. \]
For simplicity, we start with smooth functions with compact support in $R^n$. We would like to know for what values of $q$ we can establish the inequality

\[ \|u\|_{L^q(R^n)} \le C \|Du\|_{L^p(R^n)}, \quad (1.26) \]

with the constant $C$ independent of $u \in C_0^\infty(R^n)$. Now suppose (1.26) holds. Then it must also be true for the rescaled functions of $u$:

\[ u_\lambda(x) = u(\lambda x), \]

that is,

\[ \|u_\lambda\|_{L^q(R^n)} \le C \|Du_\lambda\|_{L^p(R^n)}. \quad (1.27) \]

By substitution, we have

\[ \int_{R^n} |u_\lambda(x)|^q dx = \frac{1}{\lambda^n} \int_{R^n} |u(y)|^q dy, \quad \text{and} \quad \int_{R^n} |Du_\lambda|^p dx = \frac{\lambda^p}{\lambda^n} \int_{R^n} |Du(y)|^p dy. \]
It follows from (1.27) that

\[ \|u\|_{L^q(R^n)} \le C \lambda^{1 - \frac np + \frac nq} \|Du\|_{L^p(R^n)}. \]

Therefore, in order for $C$ to be independent of $u$, it is necessary that the power of $\lambda$ here be zero, that is,

\[ q = \frac{np}{n-p}. \]

It turns out that this condition is also sufficient, as will be stated in the following theorem. Here, for $q$ to be positive, we must require $p < n$.
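The scaling bookkeeping above can be checked in exact arithmetic. The sketch below (an illustration; the sampled pairs $(n, p)$ and the alternative values of $q$ are arbitrary choices) confirms that the power of $\lambda$ vanishes exactly at the Sobolev conjugate exponent $q = np/(n-p)$ and at no other $q$:

```python
from fractions import Fraction

def scaling_power(n, p, q):
    """Exponent of lambda in ||u||_q <= C * lambda^(1 - n/p + n/q) * ||Du||_p."""
    return 1 - n / Fraction(p) + n / Fraction(q)

for n, p in [(2, 1), (3, 2), (5, 2), (10, 7)]:
    q_star = Fraction(n * p, n - p)          # Sobolev conjugate p* = np/(n-p)
    assert scaling_power(n, p, q_star) == 0  # scale-invariant exactly at p*
    # any other q gives a nonzero power, so letting lambda -> 0 or lambda -> oo
    # in (1.27) would force C to blow up: the inequality can only hold at p*
    for q in (1, 2, 4, 100):
        if Fraction(q) != q_star:
            assert scaling_power(n, p, q) != 0
```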
Theorem 1.4.2 (Gagliardo–Nirenberg–Sobolev inequality).
Assume that $1 \le p < n$. Then there exists a constant $C = C(p, n)$, such that

\[ \|u\|_{L^{p^*}(R^n)} \le C \|Du\|_{L^p(R^n)}, \quad u \in C_0^1(R^n), \quad (1.28) \]

where $p^* = \frac{np}{n-p}$.

Since any function in $W^{1,p}(R^n)$ can be approximated by a sequence of functions in $C_0^1(R^n)$, we derive immediately

Corollary 1.4.1 Inequality (1.28) holds for all functions $u$ in $W^{1,p}(R^n)$.
For functions in $W^{1,p}(\Omega)$, we can extend them to $W^{1,p}(R^n)$ functions by the Extension Theorem and arrive at

Theorem 1.4.3 Assume that $\Omega$ is a bounded, open subset of $R^n$ with $C^1$ boundary. Suppose that $1 \le p < n$ and $u \in W^{1,p}(\Omega)$. Then $u$ is in $L^{p^*}(\Omega)$ and there exists a constant $C = C(p, n, \Omega)$, such that

\[ \|u\|_{L^{p^*}(\Omega)} \le C \|u\|_{W^{1,p}(\Omega)}. \quad (1.29) \]
In the limiting case as $p \to n$, $p^* := \frac{np}{n-p} \to \infty$. Then one may suspect that $u \in L^\infty$ when $p = n$. Unfortunately, this is true only in dimension one. For $n = 1$, from

\[ u(x) = \int_{-\infty}^x u'(t)\, dt, \]

we derive immediately that

\[ |u(x)| \le \int_{-\infty}^\infty |u'(t)|\, dt. \]

However, for $n > 1$, it is false. One counterexample is $u = \log\log(1 + \frac{1}{|x|})$ on $\Omega = B_1(0)$. It belongs to $W^{1,n}(\Omega)$, but not to $L^\infty(\Omega)$. This is a more delicate situation, and we will deal with it later.
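For $n = 2$ the counterexample can be probed numerically. In the sketch below (an illustration: the substitution $r = e^s$, the cutoff values, and the crude bound $10$ are assumptions of the demonstration), the radial Dirichlet integral of $u$ stays bounded as the inner cutoff shrinks, while $u$ itself blows up at the origin:

```python
import numpy as np

# u(x) = log log(1 + 1/|x|) on B_1(0) in R^2 (n = 2): with r = |x| = e^s,
# |Du|(r) = 1 / (log(1 + 1/r) * r * (1 + r)), and the Dirichlet integral becomes
# int_{B_1} |Du|^2 dx = 2*pi * int_{-inf}^0 ds / (log(1 + e^{-s})^2 * (1 + e^s)^2)

def grad_energy(s_min):
    s = np.linspace(s_min, 0.0, 200001)
    integrand = 1.0 / (np.log1p(np.exp(-s)) ** 2 * (1.0 + np.exp(s)) ** 2)
    return 2.0 * np.pi * np.trapz(integrand, s)

# the energy converges as the inner cutoff r = e^{s_min} shrinks toward 0 ...
vals = [grad_energy(sm) for sm in (-20.0, -40.0, -80.0)]
assert abs(vals[2] - vals[1]) < abs(vals[1] - vals[0])   # shrinking tails
assert vals[2] < 10.0                                    # finite W^{1,2} energy
# ... while u itself is unbounded near the origin
u = lambda r: np.log(np.log1p(1.0 / r))
assert u(1e-300) > u(1e-30) > u(1e-3)
```

Near the origin the integrand behaves like $1/(r \log^2 r)$, which is integrable, whereas $u(r) \approx \log\log(1/r) \to \infty$; this is exactly the borderline $p = n$ failure described above.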
Naturally, for $p > n$, one would expect $W^{1,p}$ to embed into better spaces. To get a rough idea of what these spaces might be, let us first consider the simplest case, $n = 1$ and $p > 1$. Obviously, for any $x, y \in R^1$ with $x < y$, we have

\[ u(y) - u(x) = \int_x^y u'(t)\, dt, \]

and consequently, by the Hölder inequality,

\[ |u(y) - u(x)| \le \int_x^y |u'(t)|\, dt \le \left( \int_x^y |u'(t)|^p dt \right)^{\frac1p} \left( \int_x^y dt \right)^{1 - \frac1p}. \]

It follows that

\[ \frac{|u(y) - u(x)|}{|y - x|^{1 - \frac1p}} \le \left( \int_{-\infty}^\infty |u'(t)|^p dt \right)^{\frac1p}. \]

Taking the supremum over all pairs $x, y$ in $R^1$, the left-hand side of the above inequality is the Hölder seminorm in the space $C^{0,\gamma}(R^1)$ with $\gamma = 1 - \frac1p$. This is indeed true in general, and we have
Theorem 1.4.4 (Morrey's inequality).
Assume $n < p \le \infty$. Then there exists a constant $C = C(p, n)$ such that

\[ \|u\|_{C^{0,\gamma}(R^n)} \le C \|u\|_{W^{1,p}(R^n)}, \quad u \in C^1(R^n), \]

where $\gamma = 1 - \frac np$.
To establish an inequality like (1.28), we follow two basic principles. First, we consider the special case where $\Omega = R^n$. Instead of dealing with functions in $W^{k,p}$, we reduce the proof to functions with enough smoothness (which is just Theorem 1.4.2). Secondly, we deal with general domains by extending functions $u \in W^{1,p}(\Omega)$ to $W^{1,p}(R^n)$ via the Extension Theorem.

Here, one sees that Theorem 1.4.2 is the `key', and inequality (1.28) there provides the foundation for the proof of the Sobolev embedding. Often, the proof of (1.28) is called `hard analysis', and the steps leading from (1.28) to (1.29) are called `soft analysis'. In the following, we will show the `soft' parts first and then the `hard' one. First, we will assume that Theorem 1.4.2 is true and derive Corollary 1.4.1 and Theorem 1.4.3. Then, we will prove Theorem 1.4.2.
The Proof of Corollary 1.4.1.

Given any function $u \in W^{1,p}(R^n)$, by the Approximation Theorem, there exists a sequence $\{u_k\} \subset C_0^\infty(R^n)$ such that $\|u - u_k\|_{W^{1,p}(R^n)} \to 0$ as $k \to \infty$. Applying Theorem 1.4.2, we obtain

\[ \|u_i - u_j\|_{L^{p^*}(R^n)} \le C \|u_i - u_j\|_{W^{1,p}(R^n)} \to 0, \quad \text{as } i, j \to \infty. \]
Thus $\{u_k\}$ also converges to $u$ in $L^{p^*}(R^n)$. Consequently, we arrive at

\[ \|u\|_{L^{p^*}(R^n)} = \lim \|u_k\|_{L^{p^*}(R^n)} \le \lim C \|u_k\|_{W^{1,p}(R^n)} = C \|u\|_{W^{1,p}(R^n)}. \]

This completes the proof of the Corollary. $\square$
The Proof of Theorem 1.4.3.

Now, for functions in $W^{1,p}(\Omega)$, to apply inequality (1.28), we first extend them to functions with compact support in $R^n$. More precisely, let $O$ be an open set that covers $\bar\Omega$; by the Extension Theorem (Theorem 1.3.6), for every $u$ in $W^{1,p}(\Omega)$, there exists a function $\tilde u$ in $W_0^{1,p}(O)$, such that

\[ \tilde u = u, \quad \text{almost everywhere in } \Omega; \]

moreover, there exists a constant $C_1 = C_1(p, n, \Omega, O)$, such that

\[ \|\tilde u\|_{W^{1,p}(O)} \le C_1 \|u\|_{W^{1,p}(\Omega)}. \quad (1.30) \]

Now we can apply the Gagliardo–Nirenberg–Sobolev inequality to $\tilde u$ to derive

\[ \|u\|_{L^{p^*}(\Omega)} \le \|\tilde u\|_{L^{p^*}(O)} \le C \|\tilde u\|_{W^{1,p}(O)} \le C C_1 \|u\|_{W^{1,p}(\Omega)}. \]

This completes the proof of the Theorem. $\square$
The Proof of Theorem 1.4.2.

We first establish the inequality for $p = 1$, i.e. we prove

\[ \left( \int_{R^n} |u|^{\frac{n}{n-1}} dx \right)^{\frac{n-1}{n}} \le \int_{R^n} |Du|\, dx. \quad (1.31) \]

Then we will apply (1.31) to $|u|^\gamma$ for a properly chosen $\gamma > 1$ to extend the inequality to the case when $p > 1$.

We need

Lemma 1.4.1 (General Hölder Inequality) Assume that

\[ u_i \in L^{p_i}(\Omega) \ \text{for } i = 1, 2, \dots, m, \quad \text{and} \quad \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_m} = 1. \]

Then

\[ \int_\Omega |u_1 u_2 \cdots u_m|\, dx \le \prod_{i=1}^m \left( \int_\Omega |u_i|^{p_i} dx \right)^{\frac{1}{p_i}}. \quad (1.32) \]
The proof can be obtained by applying induction to the usual Hölder inequality for two functions.

Now we are ready to prove the Theorem.

Step 1. The case $p = 1$.

To better illustrate the idea, we first derive inequality (1.31) for $n = 2$; that is, we prove

\[ \int_{R^2} |u(x)|^2 dx \le \left( \int_{R^2} |Du|\, dx \right)^2. \quad (1.33) \]

Since $u$ has a compact support, we have

\[ u(x) = \int_{-\infty}^{x_1} \frac{\partial u}{\partial y_1}(y_1, x_2)\, dy_1. \]

It follows that

\[ |u(x)| \le \int_{-\infty}^{\infty} |Du(y_1, x_2)|\, dy_1. \]

Similarly, we have

\[ |u(x)| \le \int_{-\infty}^{\infty} |Du(x_1, y_2)|\, dy_2. \]

The above two inequalities together imply that

\[ |u(x)|^2 \le \int_{-\infty}^{\infty} |Du(y_1, x_2)|\, dy_1 \int_{-\infty}^{\infty} |Du(x_1, y_2)|\, dy_2. \]

Now, integrating both sides of the above inequality with respect to $x_1$ and $x_2$ from $-\infty$ to $\infty$, we arrive at (1.33).
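Inequality (1.33) can be sanity-checked numerically. The sketch below is an illustration: the Gaussian test function, the grid, and the tolerance are assumptions (a Gaussian is not compactly supported, but it decays fast enough for the quadrature on this window):

```python
import numpy as np

n = 401
x = np.linspace(-5.0, 5.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-(X**2 + Y**2))                        # smooth, rapidly decaying bump

ux, uy = np.gradient(u, h, h)
lhs = np.sum(u**2) * h * h                        # int u^2 dx  (exact value: pi/2)
rhs = (np.sum(np.hypot(ux, uy)) * h * h) ** 2     # ( int |Du| dx )^2

assert lhs < rhs                                  # inequality (1.33)
assert abs(lhs - np.pi / 2) < 1e-3                # quadrature sanity check
```

For this bump the two sides are far apart ($\pi/2$ versus roughly $\pi^3$), which reflects that (1.33) is not sharp for radial Gaussians; equality analysis belongs to the theory of the sharp Sobolev constant.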
Then we deal with the general situation when $n > 2$. We write

\[ u(x) = \int_{-\infty}^{x_i} \frac{\partial u}{\partial y_i}(x_1, \dots, x_{i-1}, y_i, x_{i+1}, \dots, x_n)\, dy_i, \quad i = 1, 2, \dots, n. \]

Consequently,

\[ |u(x)| \le \int_{-\infty}^{\infty} |Du(x_1, \dots, y_i, \dots, x_n)|\, dy_i, \quad i = 1, 2, \dots, n. \]

And it follows that

\[ |u(x)|^{\frac{n}{n-1}} \le \prod_{i=1}^n \left( \int_{-\infty}^{\infty} |Du(x_1, \dots, y_i, \dots, x_n)|\, dy_i \right)^{\frac{1}{n-1}}. \]

Integrating both sides with respect to $x_1$ and applying the general Hölder inequality (1.32), we obtain

\[
\int_{-\infty}^{\infty} |u|^{\frac{n}{n-1}} dx_1
\le \left( \int_{-\infty}^{\infty} |Du|\, dy_1 \right)^{\frac{1}{n-1}} \int_{-\infty}^{\infty} \prod_{i=2}^n \left( \int_{-\infty}^{\infty} |Du|\, dy_i \right)^{\frac{1}{n-1}} dx_1
\le \left( \int_{-\infty}^{\infty} |Du|\, dy_1 \right)^{\frac{1}{n-1}} \prod_{i=2}^n \left( \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |Du|\, dx_1\, dy_i \right)^{\frac{1}{n-1}}.
\]
Then integrate the above inequality with respect to $x_2$. We have

\[
\int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty} |u|^{\frac{n}{n-1}} dx_1 dx_2
\le \left( \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty} |Du|\, dx_1 dy_2 \right)^{\frac{1}{n-1}} \int_{-\infty}^{\infty} \left[ \left( \int_{-\infty}^{\infty} |Du|\, dy_1 \right)^{\frac{1}{n-1}} \prod_{i=3}^n \left( \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty} |Du|\, dx_1 dy_i \right)^{\frac{1}{n-1}} \right] dx_2.
\]

Again, applying the General Hölder Inequality, we arrive at

\[
\int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty} |u|^{\frac{n}{n-1}} dx_1 dx_2
\le \left( \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty} |Du|\, dy_1 dx_2 \right)^{\frac{1}{n-1}} \left( \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty} |Du|\, dx_1 dy_2 \right)^{\frac{1}{n-1}} \prod_{i=3}^n \left( \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty}\! \int_{-\infty}^{\infty} |Du|\, dx_1 dx_2 dy_i \right)^{\frac{1}{n-1}}.
\]
Continuing this way by integrating with respect to $x_3, \dots, x_{n-1}$, we deduce

\[
\int_{-\infty}^{\infty} \!\cdots\! \int_{-\infty}^{\infty} |u|^{\frac{n}{n-1}} dx_1 \cdots dx_{n-1}
\le \left( \int \!\cdots\! \int |Du|\, dy_1 dx_2 \cdots dx_{n-1} \right)^{\frac{1}{n-1}} \left( \int \!\cdots\! \int |Du|\, dx_1 dy_2 dx_3 \cdots dx_{n-1} \right)^{\frac{1}{n-1}} \cdots \left( \int \!\cdots\! \int |Du|\, dx_1 \cdots dx_{n-2}\, dy_{n-1} \right)^{\frac{1}{n-1}} \left( \int_{R^n} |Du|\, dx \right)^{\frac{1}{n-1}}.
\]

Finally, integrating both sides with respect to $x_n$ and applying the general Hölder inequality, we obtain

\[ \int_{R^n} |u|^{\frac{n}{n-1}} dx \le \left( \int_{R^n} |Du|\, dx \right)^{\frac{n}{n-1}}. \]

This verifies (1.31).
Exercise 1.4.1 Write your own proof with all details for the cases n = 3 and
n = 4.
Step 2. The Case $p > 1$.

Applying (1.31) to the function $|u|^\gamma$ with $\gamma > 1$ to be chosen later, and by the Hölder inequality, we have

\[
\left( \int_{R^n} |u|^{\frac{\gamma n}{n-1}} dx \right)^{\frac{n-1}{n}} \le \int_{R^n} |D(|u|^\gamma)|\, dx = \gamma \int_{R^n} |u|^{\gamma-1} |Du|\, dx
\le \gamma \left( \int_{R^n} |u|^{\frac{\gamma n}{n-1}} dx \right)^{\frac{(\gamma-1)(n-1)}{\gamma n}} \left( \int_{R^n} |Du|^{\frac{\gamma n}{\gamma+n-1}} dx \right)^{\frac{\gamma+n-1}{\gamma n}}.
\]
It follows that

\[ \left( \int_{R^n} |u|^{\frac{\gamma n}{n-1}} dx \right)^{\frac{n-1}{\gamma n}} \le \gamma \left( \int_{R^n} |Du|^{\frac{\gamma n}{\gamma+n-1}} dx \right)^{\frac{\gamma+n-1}{\gamma n}}. \]

Now choose $\gamma$ so that

\[ \frac{\gamma n}{\gamma + n - 1} = p, \quad \text{that is,} \quad \gamma = \frac{p(n-1)}{n-p}; \]

we obtain

\[ \left( \int_{R^n} |u|^{\frac{np}{n-p}} dx \right)^{\frac{n-p}{np}} \le \gamma \left( \int_{R^n} |Du|^p dx \right)^{\frac1p}. \]

This completes the proof of the Theorem. $\square$
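The exponent bookkeeping in Step 2 is easy to get wrong, so a quick exact-arithmetic check is worthwhile. The sketch below (the sampled pairs $(n, p)$ are arbitrary illustrative choices) verifies that the chosen $\gamma$ turns the $|Du|$ exponent into $p$, the $|u|$ exponent into $p^* = np/(n-p)$, and the outer exponent into $1/p^*$:

```python
from fractions import Fraction as F

def check_step2(n, p):
    """Verify the exponent bookkeeping for gamma = p(n-1)/(n-p)."""
    g = F(p * (n - 1), n - p)
    assert g > 1                                  # needed to apply (1.31) to |u|^g
    assert g * n / (g + n - 1) == p               # |Du| ends up with exponent p
    assert g * n / (n - 1) == F(n * p, n - p)     # |u| ends up with exponent p*
    assert (n - 1) / (g * n) == F(n - p, n * p)   # outer exponent becomes 1/p*
    return g

for n, p in [(3, 2), (4, 3), (7, 5), (10, 3)]:
    check_step2(n, p)
```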
The Proof of Theorem 1.4.4 (Morrey's inequality).

We will establish two inequalities:

\[ \sup_{R^n} |u| \le C \|u\|_{W^{1,p}(R^n)}, \quad (1.34) \]

and

\[ \sup_{x \ne y} \frac{|u(x) - u(y)|}{|x - y|^{1 - n/p}} \le C \|Du\|_{L^p(R^n)}. \quad (1.35) \]

Both of them can be derived from the following:

\[ \frac{1}{|B_r(x)|} \int_{B_r(x)} |u(y) - u(x)|\, dy \le C \int_{B_r(x)} \frac{|Du(y)|}{|y - x|^{n-1}}\, dy, \quad (1.36) \]

where $|B_r(x)|$ is the volume of $B_r(x)$. We will carry out the proof in three steps. In Step 1, we prove (1.36), and in Step 2 and Step 3, we verify (1.34) and (1.35), respectively.
Step 1. We start from

\[ u(y) - u(x) = \int_0^1 \frac{d}{dt}\, u(x + t(y - x))\, dt = \int_0^s Du(x + \tau\omega) \cdot \omega\, d\tau, \]

where

\[ \omega = \frac{y - x}{|y - x|}, \quad s = |x - y|, \quad \text{and hence} \quad y = x + s\omega. \]

It follows that

\[ |u(x + s\omega) - u(x)| \le \int_0^s |Du(x + \tau\omega)|\, d\tau. \]

Integrating both sides with respect to $\omega$ on the unit sphere $\partial B_1(0)$, then converting the integral on the right-hand side from polar to rectangular coordinates, we obtain
\[
\int_{\partial B_1(0)} |u(x + s\omega) - u(x)|\, d\sigma \le \int_0^s \int_{\partial B_1(0)} |Du(x + \tau\omega)|\, d\sigma\, d\tau
= \int_0^s \int_{\partial B_1(0)} \frac{|Du(x + \tau\omega)|}{\tau^{n-1}}\, \tau^{n-1}\, d\sigma\, d\tau
= \int_{B_s(x)} \frac{|Du(z)|}{|x - z|^{n-1}}\, dz.
\]

Multiplying both sides by $s^{n-1}$, integrating with respect to $s$ from $0$ to $r$, and taking into account that the integrand on the right-hand side is nonnegative, we arrive at

\[ \int_{B_r(x)} |u(y) - u(x)|\, dy \le \frac{r^n}{n} \int_{B_r(x)} \frac{|Du(y)|}{|x - y|^{n-1}}\, dy. \]

This verifies (1.36).
Step 2. For each fixed $x \in R^n$, we have

\[
|u(x)| = \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(x)|\, dy
\le \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(x) - u(y)|\, dy + \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(y)|\, dy
= I_1 + I_2. \quad (1.37)
\]

By (1.36) and the Hölder inequality, we deduce

\[
I_1 \le C \int_{B_1(x)} \frac{|Du(y)|}{|x - y|^{n-1}}\, dy
\le C \left( \int_{R^n} |Du|^p dy \right)^{1/p} \left( \int_{B_1(x)} \frac{dy}{|x - y|^{\frac{(n-1)p}{p-1}}} \right)^{\frac{p-1}{p}}
\le C_1 \left( \int_{R^n} |Du|^p dy \right)^{1/p}. \quad (1.38)
\]

Here we have used the condition that $p > n$, so that $\frac{(n-1)p}{p-1} < n$, and hence the integral

\[ \int_{B_1(x)} \frac{dy}{|x - y|^{\frac{(n-1)p}{p-1}}} \]

is finite.

Also, it is obvious that

\[ I_2 \le C \|u\|_{L^p(R^n)}. \quad (1.39) \]

Now (1.34) is an immediate consequence of (1.37), (1.38), and (1.39).
Step 3. For any pair of fixed points $x$ and $y$ in $R^n$, let $r = |x - y|$ and $D = B_r(x) \cap B_r(y)$. Then

\[ |u(x) - u(y)| \le \frac{1}{|D|} \int_D |u(x) - u(z)|\, dz + \frac{1}{|D|} \int_D |u(z) - u(y)|\, dz. \quad (1.40) \]

Again by (1.36), we have

\[
\frac{1}{|D|} \int_D |u(x) - u(z)|\, dz \le \frac{C}{|B_r(x)|} \int_{B_r(x)} |u(x) - u(z)|\, dz
\le C \left( \int_{R^n} |Du|^p dz \right)^{1/p} \left( \int_{B_r(x)} \frac{dz}{|x - z|^{\frac{(n-1)p}{p-1}}} \right)^{\frac{p-1}{p}}
= C r^{1 - n/p} \|Du\|_{L^p(R^n)}. \quad (1.41)
\]

Similarly,

\[ \frac{1}{|D|} \int_D |u(z) - u(y)|\, dz \le C r^{1 - n/p} \|Du\|_{L^p(R^n)}. \quad (1.42) \]

Now (1.40), (1.41), and (1.42) yield

\[ |u(x) - u(y)| \le C |x - y|^{1 - n/p} \|Du\|_{L^p(R^n)}. \]

This implies (1.35) and thus completes the proof of the Theorem. $\square$
The Proof of Theorem 1.4.1 (the General Sobolev Inequality).

(i) Assume that $u \in W^{k,p}(\Omega)$ with $k < \frac np$. We want to show that $u \in L^q(\Omega)$ and

\[ \|u\|_{L^q(\Omega)} \le C \|u\|_{W^{k,p}(\Omega)}, \quad (1.43) \]

with $\frac1q = \frac1p - \frac kn$. This can be done by applying Theorem 1.4.3 successively $k$ times. Again denote $p^* = \frac{np}{n-p}$; then $\frac{1}{p^*} = \frac1p - \frac1n$. For $|\alpha| \le k - 1$, $D^\alpha u \in W^{1,p}(\Omega)$. By Theorem 1.4.3, we have $D^\alpha u \in L^{p^*}(\Omega)$, thus $u \in W^{k-1,p^*}(\Omega)$ and

\[ \|u\|_{W^{k-1,p^*}(\Omega)} \le C_1 \|u\|_{W^{k,p}(\Omega)}. \]

Applying Theorem 1.4.3 again to $W^{k-1,p^*}(\Omega)$, we have $u \in W^{k-2,p^{**}}(\Omega)$ and

\[ \|u\|_{W^{k-2,p^{**}}(\Omega)} \le C_2 \|u\|_{W^{k-1,p^*}(\Omega)} \le C_2 C_1 \|u\|_{W^{k,p}(\Omega)}, \]

where $p^{**} = \frac{np^*}{n - p^*}$, or

\[ \frac{1}{p^{**}} = \frac{1}{p^*} - \frac1n = \left( \frac1p - \frac1n \right) - \frac1n = \frac1p - \frac2n. \]

Continuing this way $k$ times, we arrive at (1.43).
(ii) Now assume that $k > \frac np$. Recall that in Morrey's inequality

\[ \|u\|_{C^{0,\gamma}(\Omega)} \le C \|u\|_{W^{1,p}(\Omega)}, \quad (1.44) \]

we require that $p > n$. However, in our situation, this condition is not necessarily met. To remedy this, we can use the result of the previous step: we first decrease the order of differentiation to increase the power of summability. More precisely, we try to find the smallest integer $m$ such that

\[ W^{k,p}(\Omega) \hookrightarrow W^{k-m,q}(\Omega), \]

with $q > n$. That is, we want

\[ q = \frac{np}{n - mp} > n, \]

or equivalently,

\[ m > \frac np - 1. \]

Obviously, the smallest such integer $m$ is

\[ m = \left[ \frac np \right]. \]

For this choice of $m$, we can apply Morrey's inequality (1.44) to $D^\alpha u$, with any $|\alpha| \le k - m - 1$, to obtain

\[ \|D^\alpha u\|_{C^{0,\gamma}(\Omega)} \le C_1 \|u\|_{W^{k-m,q}(\Omega)} \le C_1 C_2 \|u\|_{W^{k,p}(\Omega)}, \]

or equivalently,

\[ \|u\|_{C^{\,k - [\frac np] - 1,\, \gamma}(\Omega)} \le C \|u\|_{W^{k,p}(\Omega)}. \]

Here, when $\frac np$ is not an integer,

\[ \gamma = 1 - \frac nq = 1 + \left[ \frac np \right] - \frac np. \]

When $\frac np$ is an integer, we have $m = \frac np$; in this case $W^{k,p}(\Omega)$ embeds into $W^{k-m,q}(\Omega)$ for every finite $q$, so $q$ can be taken to be any number $> n$, which implies that $\gamma$ can be any positive number $< 1$.

This completes the proof of the Theorem. $\square$
1.5 Compact Embedding
In the previous section, we proved that $W^{1,p}(\Omega)$ is embedded into $L^{p^*}(\Omega)$ with $p^* = \frac{np}{n-p}$. Obviously, for $q < p^*$, the embedding of $W^{1,p}(\Omega)$ into $L^q(\Omega)$ is still true, if the region $\Omega$ is bounded. Actually, due to the strict inequality on the exponent, one can expect more, as stated below.

Theorem 1.5.1 (Rellich–Kondrachov Compact Embedding).
Assume that $\Omega$ is a bounded open subset in $R^n$ with $C^1$ boundary $\partial\Omega$. Suppose $1 \le p < n$. Then for each $1 \le q < p^*$, $W^{1,p}(\Omega)$ is compactly embedded into $L^q(\Omega)$:

\[ W^{1,p}(\Omega) \hookrightarrow\hookrightarrow L^q(\Omega), \]

in the sense that

i) there is a constant $C$, such that

\[ \|u\|_{L^q(\Omega)} \le C \|u\|_{W^{1,p}(\Omega)}, \quad \forall u \in W^{1,p}(\Omega); \quad (1.45) \]

and

ii) every bounded sequence in $W^{1,p}(\Omega)$ possesses a convergent subsequence in $L^q(\Omega)$.
Proof. The first part, inequality (1.45), is included in the general Embedding Theorem. What we need to show is that if $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$, then it possesses a convergent subsequence in $L^q(\Omega)$. This can be derived immediately from the following

Lemma 1.5.1 Every bounded sequence in $W^{1,1}(\Omega)$ possesses a convergent subsequence in $L^1(\Omega)$.

We postpone the proof of the Lemma for a moment and see how it implies the Theorem.

In fact, assume that $\{u_k\}$ is a bounded sequence in $W^{1,p}(\Omega)$. Then there exists a subsequence (still denoted by $\{u_k\}$) which converges weakly to an element $u_o$ in $W^{1,p}(\Omega)$. By the Sobolev embedding, $\{u_k\}$ is bounded in $L^{p^*}(\Omega)$. On the other hand, it is also bounded in $W^{1,1}(\Omega)$, since $p \ge 1$ and $\Omega$ is bounded. Now, by Lemma 1.5.1, there is a subsequence (still denoted by $\{u_k\}$) that converges strongly to $u_o$ in $L^1(\Omega)$. Applying the interpolation (Hölder) inequality

\[ \|u_k - u_o\|_{L^q(\Omega)} \le \|u_k - u_o\|_{L^1(\Omega)}^{\theta}\, \|u_k - u_o\|_{L^{p^*}(\Omega)}^{1-\theta}, \quad \text{where } \frac1q = \theta + \frac{1-\theta}{p^*}, \]

we conclude immediately that $\{u_k\}$ converges strongly to $u_o$ in $L^q(\Omega)$. This proves the Theorem.
We now come back to prove the Lemma. Let $\{u_k\}$ be a bounded sequence in $W^{1,1}(\Omega)$. We will show the strong convergence of (a subsequence of) this sequence in three steps, with the help of the family of mollifications

\[ u_k^\epsilon(x) = \int_\Omega j_\epsilon(y)\, u_k(x - y)\, dy. \]
First, we show that

\[ u_k^\epsilon \to u_k \ \text{in } L^1(\Omega) \text{ as } \epsilon \to 0, \ \text{uniformly in } k. \quad (1.46) \]

Then, for each fixed $\epsilon > 0$, we prove that

there is a subsequence of $\{u_k^\epsilon\}$ which converges uniformly. \quad (1.47)

Finally, corresponding to the above convergent sequence $\{u_k^\epsilon\}$, we extract diagonally a subsequence of $\{u_k\}$ which converges strongly in $L^1(\Omega)$.

Based on the Extension Theorem, we may assume, without loss of generality, that $\Omega = R^n$, that the functions $u_k$ all have compact support in a bounded open set $G \subset R^n$, and that

\[ \|u_k\|_{W^{1,1}(G)} \le C < \infty, \quad \text{for all } k = 1, 2, \dots. \]

Since every $W^{1,1}$ function can be approximated by a sequence of smooth functions, we may also assume that each $u_k$ is smooth.
Step 1. From the property of mollifiers, we have

\[
u_k^\epsilon(x) - u_k(x) = \int_{B_1(0)} j(y)\,[u_k(x - \epsilon y) - u_k(x)]\, dy
= \int_{B_1(0)} j(y) \int_0^1 \frac{d}{dt}\, u_k(x - \epsilon t y)\, dt\, dy
= -\epsilon \int_{B_1(0)} j(y) \int_0^1 Du_k(x - \epsilon t y) \cdot y\, dt\, dy.
\]

Integrating with respect to $x$ and changing the order of integration, we obtain

\[
\|u_k^\epsilon - u_k\|_{L^1(G)} \le \epsilon \int_{B_1(0)} j(y) \int_0^1 \int_G |Du_k(x - \epsilon t y)|\, dx\, dt\, dy
\le \epsilon \int_G |Du_k(z)|\, dz \le \epsilon\, \|u_k\|_{W^{1,1}(G)}. \quad (1.48)
\]

It follows that

\[ \|u_k^\epsilon - u_k\|_{L^1(G)} \to 0, \ \text{as } \epsilon \to 0, \ \text{uniformly in } k. \quad (1.49) \]
Step 2. Now ﬁx an > 0, then for all x ∈ R
n
and for all k = 1, 2, , we
have
[u
k
(x)[ ≤
B(x)
j
(x −y)[u
k
(y)[dy
≤ j

L
∞
(R
n
)
u
k

L
1
(G)
≤
C
n
< ∞. (1.50)
34 1 Introduction to Sobolev Spaces
Similarly,
[Du
k
(x)[ ≤
C
n+1
< ∞. (1.51)
(1.50) and (1.51) imply that, for each ﬁxed > 0, the sequence ¦u
k
¦ is uni
formly bounded and equicontinuous. Therefore, by the ArzelaAscoli Theo
rem (see the Appendix), it possesses a subsequence (still denoted by ¦u
k
¦ )
which converges uniformly on G, in particular
lim
j,i→∞
u
j
−u
i

L
1
(G)
= 0. (1.52)
Step 3. Now choose to be 1, 1/2, 1/3, , 1/k, successively, and denote
the corresponding subsequence that satisﬁes (1.52) by
u
1/k
k1
, u
1/k
k2
, u
1/k
k3
,
for k = 1, 2, 3, . Pick the diagonal subsequence from the above:
¦u
1/i
ii
¦ ⊂ ¦u
k
¦.
Then select the corresponding subsequence from ¦u
k
¦:
u
11
, u
22
, u
33
, .
This is our desired subsequence, because
u
ii
−u
jj

L
1
(G)
≤ u
ii
−u
1/i
ii

L
1
(G)
+u
1/i
ii
−u
1/j
jj

L
1
(G)
+u
1/j
jj
−u
jj

L
1
(G)
→0 , as i, j→∞,
due to the fact that each of the norm on the right hand side →0.
This completes the proof of the Lemma and hence the Theorem. ¯ .
1.6 Other Basic Inequalities
1.6.1 Poincar´e’s Inequality
For functions that vanish on the boundary, we have
Theorem 1.6.1 (Poincar´e’s Inequality I).
Assume Ω is bounded. Suppose u ∈ W
1,p
0
(Ω) for some 1 ≤ p ≤ ∞. Then
u
L
p
(Ω)
≤ CDu
L
p
(Ω)
. (1.53)
1.6 Other Basic Inequalities 35
Remark 1.6.1 i) Now based on this inequality, one can take an equivalent
norm of W
1,p
0
(Ω) as u
W
1,p
0
(Ω)
= Du
L
p
(Ω)
.
ii) Just for this theorem, one can easily prove it by using the Sobolev in
equality u
L
p ≤ u
L
q with p =
nq
n−q
and the H¨older inequality. However,
the one we present in the following is a uniﬁed proof that works for both this
and the next theorem.
Proof. For convenience, we abbreviate u
L
p
(Ω)
as u
p
. Suppose inequality
(1.53) does not hold, then there exists a sequence ¦u
k
¦ ⊂ W
1,p
0
(Ω), such that
Du
k

p
= 1 , while u
k

p
→∞, as k→∞.
Since C
∞
0
(Ω) is dense in W
1,p
0
(Ω), we may assume that ¦u
k
¦ ⊂ C
∞
0
(Ω).
Let v
k
=
u
k
u
k

p
. Then
v
k

p
= 1 , and Dv
k

p
→0 .
Consequently, ¦v
k
¦ is bounded in W
1,p
(Ω) and hence possesses a subsequence
(still denoted by ¦v
k
¦) that converges weakly to some v
o
∈ W
1,p
(Ω). From the
compact embedding results in the previous section, ¦v
k
¦ converges strongly
in L
p
(Ω) to v
o
, and therefore
v
o

p
= 1 . (1.54)
On the other hand, for each φ ∈ C
∞
0
(Ω),
Ω
vφ
xi
dx = lim
k→∞
Ω
v
k
φ
xi
dx = − lim
k→∞
Ω
v
k,xi
φdx = 0.
It follows that
Dv
o
(x) = 0 , a.e.
Thus v
o
is a constant. Taking into account that v
k
∈ C
∞
0
(Ω), we must have
v
o
≡ 0. This contradicts with (1.54) and therefore completes the proof of the
Theorem. ¯ .
For functions in W
1,p
(Ω), which may not be zero on the the boundary, we
have another version of Poincar´e’s inequality.
Theorem 1.6.2 (Poincar´e Inequality II).
Let Ω be a bounded, connected, and open subset in R
n
with C
1
boundary.
Let ¯ u be the average of u on Ω. Assume 1 ≤ p ≤ ∞. Then there exists a
constant C = C(n, p, Ω), such that
u − ¯ u
p
≤ CDu
p
, ∀ u ∈ W
1,p
(Ω). (1.55)
36 1 Introduction to Sobolev Spaces
The proof of this Theorem is similar to the previous one. Instead of letting
v
k
=
u
k
u
k

p
, we choose
v
k
=
u
k
− ¯ u
k
u
k
− ¯ u
k

p
.
Remark 1.6.2 The connectness of Ω is essential in this version of the in
equality. A simple counter example is when n = 1, Ω = [0, 1] ∪ [2, 3], and
u(x) =
−1 for x ∈ [0, 1]
1 for x ∈ [2, 3] .
1.6.2 The Classical HardyLittlewoodSobolev Inequality
Theorem 1.6.3 (HardyLittlewoodSobolev Inequality).
Let 0 < λ < n and s, r > 1 such that
1
r
+
1
s
+
λ
n
= 2.
Assume that f ∈ L
r
(R
n
) and g ∈ L
s
(R
n
). Then
R
n
R
n
f(x)[x −y[
−λ
g(y)dxdy ≤ C(n, s, λ)[[f[[
r
[[g[[
s
(1.56)
where
C(n, s, λ) =
n[B
n
[
λ/n
(n −λ)rs
λ/n
1 −1/r
λ/n
+
λ/n
1 −1/s
λ/n
with [B
n
[ being the volume of the unit ball in R
n
, and where
f
r
:= f
L
r
(R
n
)
.
Proof. Without loss of generality, we may assume that both f and g are non
negative and f
r
= 1 = g
s
.
Let
χ
G
(x) =
1 if x ∈ G
0 if x ∈G
be the characteristic function of the set G. Then one can see obviously that
f(x) =
∞
0
χ
{f>a}
(x) da (1.57)
g(x) =
∞
0
χ
{g>b}
(x) db (1.58)
[x[
−λ
= λ
∞
0
c
−λ−1
χ
{x<c}
(x) dc (1.59)
1.6 Other Basic Inequalities 37
To see the last identity, one may ﬁrst write
[x[
−λ
=
∞
0
χ
{x
−λ
>˜ c}
(x) d˜ c,
and then let ˜ c = c
−λ
.
Substituting (1.57), (1.58), and (1.59) into the left hand side of (1.56), we
have
I :=
R
n
R
n
f(x)[x −y[
−λ
g(y) dxdy
= λ
∞
0
∞
0
∞
0
R
n
R
n
c
−λ−1
χ
{f>a}
(x)χ
{x<c}
(x −y)χ
{g>b}
(y) dxdy dc db da
= λ
∞
0
∞
0
∞
0
c
−λ−1
I(a, b, c) dc db da. (1.60)
Let
u(c) = [B
n
[c
n
,
the volume of the ball of radius c, and let
v(a) =
R
n
χ
{f>a}
(x) dx, w(b) =
R
n
χ
{g>b}
(y) dy,
the measure of the sets ¦x [ f(x) > a¦ and ¦y [ g(y) > b¦, respectively. Then
we can express the norms as
f
r
r
= r
∞
0
a
r−1
v(a) da = 1 and g
s
s
= s
∞
0
b
s−1
w(b) db = 1. (1.61)
It is easy to see that
I(a, b, c) ≤
R
n
R
n
χ
{x<c}
(x −y)χ
{g>b}
(y) dxdy
≤
R
n
u(c)χ
{g>b}
(y) dy = u(c)w(b)
Similarly, one can show that I(a, b, c) is bounded above by other pairs and
arrive at
I(a, b, c) ≤ min¦u(c)w(b), u(c)v(a), v(a)w(b)¦. (1.62)
We integrate with respect to c ﬁrst. By (1.62), we have
∞
0
c
−λ−1
I(a, b, c) dc
≤
u(c)≤v(a)
c
−λ−1
w(b)u(c) dc +
u(c)>v(a)
c
−λ−1
w(b)v(a) dc
38 1 Introduction to Sobolev Spaces
= w(b)[B
n
[
(v(a)/B
n
)
1/n
0
c
−λ−1+n
dc +w(b)v(a)
(v(a)/B
n
)
1/n
c
−λ−1
dc
=
[B
n
[
λ/n
n −λ
w(b)v(a)
1−λ/n
+
[B
n
[
λ/n
λ
w(b)v(a)
1−λ/n
=
[B
n
[
λ/n
λ(n −λ)
w(b)v(a)
1−λ/n
(1.63)
Exchanging v(a) with w(b) in the second line of (1.63), we also obtain
∞
0
c
−λ−1
I(a, b, c) dc
≤
u(c)≤w(b)
c
−λ−1
v(a)u(c) dc +
u(c)>w(b))
c
−λ−1
w(b)v(a) dc
≤
[B
n
[
λ/n
λ(n −λ)
v(a)w(b)
1−λ/n
(1.64)
In view of (1.61), we split the bintegral into two parts, one from 0 to a
r/s
and the other from a
r/s
to ∞. By virtue of (1.60), (1.63), and (1.64), we derive
I ≤
n
n −λ
[B
n
[
λ/n
∞
0
v(a)
a
r/s
0
w(b)
1−λ/n
db da +
∞
0
v(a)
1−λ/n
∞
a
r/s
w(b) db da. (1.65)
To estimate the ﬁrst integral in (1.65), we use H¨older inequality with
m = (s −1)(1 −λ/n)
a
r/s
0
w(b)
1−λ/n
b
m
b
−m
db
≤
a
r/s
0
w(b)b
s−1
db
1−λ/n
a
r/s
0
b
−mn/λ
db
λ/n
≤
a
r/s
0
w(b)b
s−1
db
1−λ/n
a
r−1
, (1.66)
since mn/λ < 1. It follows that the ﬁrst integral in (1.65) is bounded above
by
λ
n −s(n −λ)
λ/n
∞
0
v(a)a
r−1
da
∞
0
w(b)b
s−1
db
1−λ/n
=
1
rs
λ/n
1 −1/r
λ/n
. (1.67)
1.6 Other Basic Inequalities 39
To estimate the second integral in (1.65), we ﬁrst rewrite it as
∞
0
w(b)
b
s/r
0
v(a)
1−λ/n
da db,
then an analogous computation shows that it is bounded above by
1
rs
λ/n
1 −1/s
λ/n
. (1.68)
Now the desired HardyLittlewoodSobolev inequality follows directly from
(1.65), (1.67), and (1.68). ¯ .
Theorem 1.6.4 (An equivalent form of the HardyLittlewoodSobolev in
equality)
Let g ∈ L
p
(R
n
) for
n
n−α
< p < ∞. Deﬁne
Tg(x) =
R
n
[x −y[
α−n
g(y)dy.
Then
Tg
p
≤ C(n, p, α)g
np
n+αp
. (1.69)
Proof. By the classical HardyLittlewoodSobolev inequality, we have
< f, Tg >=< Tf, g >≤ C(n, s, α) f
r
g
s
,
where < , > is the L
2
inner product.
Consequently,
Tg
p
= sup
fr=1
< f, Tg > ≤ C(n, s, α)g
s
,
where
1
p
+
1
r
= 1
1
r
+
1
s
=
n+α
n
.
Solving for s, we arrive at
s =
np
n +αp
.
This completes the proof of the Theorem. ¯ .
Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev
inequality, let’s rewrite it as
Tg
nq
n−αq
≤ Cg
q
(1.70)
with
40 1 Introduction to Sobolev Spaces
1 < q :=
np
n +αp
<
n
α
.
Let u = Tg. Then one can show that (see [CLO]),
(−´)
α
2
u = g.
Now inequality (1.70) becomes the Sobolev one
u
nq
n−αq
≤ C(−´)
α
2
u
q
.
2
Existence of Weak Solutions
2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Method of Linear Functional Analysis
2.3.1 Linear Equations
2.3.2 Some Basic Principles in Functional Analysis
2.3.3 Existence of Weak Solutions
2.4 Variational Methods
2.4.1 Semilinear Equations
2.4.2 Calculus of Variations
2.4.3 Existence of Minimizers
2.4.4 Existence of Minimizers under Constrains
2.4.5 Minimax Critical Points
2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Many physical models come with natural variational structures where one
can minimize a functional, usually an energy E(u). Then the minimizers are
weak solutions of the related partial diﬀerential equation
E
(u) = 0.
For example, for an open bounded domain Ω ⊂ R
n
, and for f ∈ L
2
(Ω),
the functional
E(u) =
1
2
Ω
[u[
2
dx −
Ω
f udx
possesses a minimizer among all u ∈ H
1
0
(Ω) (try to show this by using the
knowledge from the previous chapter). As we will see in this chapter, the
minimizer is a weak solution of the Dirichlet boundary value problem
42 2 Existence of Weak Solutions
−´u = f(x) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω.
In practice, it is often much easier to obtain a weak solution than a classical
one. One of the seven millennium problem posed by the Clay Institute of
Mathematical Sciences is to show that some suitable weak solutions to the
NavieStoke equation with good initial values are in fact classical solutions
for all time.
In this chapter, we consider the existence of weak solutions to second order
elliptic equations, both linear and semilinear.
For linear equations, we will use the LaxMilgram Theorem to show the
existence of weak solutions.
For semilinear equations, we will introduce variational methods. We will
consider the corresponding functionals in proper Hilbert spaces and seek weak
solutions of the equations associated with the critical points of the functionals.
We will use examples to show how to ﬁnd a minimizer, a minimizer under
constraint, and a minimax critical point. The wellknown Mountain Pass
Theorem is introduced and applied to show the existence of such a minimax.
To show that a weak solution possesses the desired diﬀerentiability so that
it is actually a classical one is called the regularity method, which will be
studied in Chapter 3.
2.1 Second Order Elliptic Operators
Let Ω be an open bounded set in R
n
. Let a
ij
(x), b
i
(x), and c(x) be bounded
functions on Ω with a
ij
(x) = a
ji
(x). Consider the second order partial diﬀer
ential operator L either in the divergence form
Lu = −
n
¸
i,j=1
(a
ij
(x)u
xi
)
xj
+
n
¸
i=1
b
i
(x)u
xi
+c(x)u, (2.1)
or in the nondivergence form
Lu = −
n
¸
i,j=1
a
ij
(x)u
xixj
+
n
¸
i=1
b
i
(x)u
xi
+c(x)u. (2.2)
We say that L is uniformly elliptic if there exists a constant δ > 0, such
that
n
¸
i,j=1
a
ij
(x)ξ
i
ξ
j
≥ δ[ξ[
2
for almost every x ∈ Ω and for all ξ ∈ R
n
.
In the simplest case when a
ij
(x) = δ
ij
, b
i
(x) ≡ 0, and c(x) ≡ 0, L reduces
to −´. And, as we will see later, that the solutions of the general second order
elliptic equation Lu = 0 share many similar properties with the harmonic
functions.
2.2 Weak Solutions 43
2.2 Weak Solutions
Let the operator L be in the divergence form as deﬁned in (2.1). Consider the
second order elliptic equation with Dirichlet boundary condition
Lu = f , x ∈ Ω
u = 0 , x ∈ ∂Ω
(2.3)
When f = f(x), the equation is linear; and when f = f(x, u), semilinear.
Assume that, f(x) is in L
2
(Ω), or for a given solution u, f(x, u) is in
L
2
(Ω).
Multiplying both sides of the equation by a test function v ∈ C
∞
0
(Ω), and
integrating by parts on Ω, we have
Ω
¸
n
¸
i,j=1
a
ij
(x)u
xi
v
xj
+
n
¸
i=1
b
i
(x)u
xi
v +c(x)uv
¸
dx =
Ω
fvdx. (2.4)
There are no boundary terms because both u and v vanish on ∂Ω. One can see
that the integrals in (2.4) are well deﬁned if u, v, and their ﬁrst derivatives are
square integrable. Write H
1
(Ω) = W
1,2
(Ω), and let H
1
0
(Ω) the completion of
C
∞
0
(Ω) in the norm of H
1
(Ω):
u =
¸
Ω
([Du[
2
+[u[
2
)dx
1/2
.
Then (2.4) remains true for any v ∈ H
1
0
(Ω). One can easily see that H
1
0
(Ω)
is also the most appropriate space for u. On the other hand, if u ∈ H
1
0
(Ω)
satisﬁes (2.4) and is second order diﬀerentiable, then through integration by
parts, we have
Ω
(Lu −f)vdx = 0 ∀v ∈ H
1
0
(Ω).
This implies that
Lu = f , ∀x ∈ Ω.
And therefore u is a classical solution of (2.3).
The above observation naturally leads to
Deﬁnition 2.2.1 The weak solution of problem (2.3) is a function u ∈
H
1
0
(Ω) that satisﬁes (2.4) for all v ∈ H
1
0
(Ω).
Remark 2.2.1 Actually, here the condition on f can be relaxed a little bit.
By the Sobolev embedding
H
1
(Ω) → L
2n
n−2
(Ω),
44 2 Existence of Weak Solutions
we see that v is in L
2n
n−2
(Ω), and hence by duality, f only need to be in
L
2n
n+2
(Ω). In the semilinear case, if f(x, u) = u
p
for any p ≤
n + 2
n −2
or more
generally
f(x, u) ≤ C
1
+C
2
[u[
n+2
n−2
;
then for any u ∈ H
1
(Ω), f(x, u) is in L
2n
n+2
(Ω).
2.3 Methods of Linear Functional Analysis
2.3.1 Linear Equations
For the Dirichelet problem with linear equation
Lu = f(x) , x ∈ Ω
u = 0 , x ∈ ∂Ω,
(2.5)
to ﬁnd its weak solutions, we can apply representation type theorems in Linear
Functional Analysis, such as Riesz Representation Theorem or LaxMilgram
Theorem. To this end, we introduce the bilinear form
B[u, v] =
Ω
¸
n
¸
i,j=1
a
ij
(x)u
xi
v
xj
+
n
¸
i=1
b
i
(x)u
xi
v +c(x)uv
¸
dx
deﬁned on H
1
0
(Ω), which is the left hand side of (2.4) in the deﬁnition of the
weak solutions. While the right hand side of (2.4) may be regarded as a linear
functional on H
1
0
(Ω);
< f, v >:=
Ω
fvdx.
Now, to ﬁnd a weak solution of (2.5), it is equivalent to show that there
exists a function u ∈ H
1
0
(Ω), such that
B[u, v] =< f, v >, ∀ v ∈ H
1
0
(Ω). (2.6)
This can be realized by representation type theorems in Functional Analysis.
2.3.2 Some Basic Principles in Functional Analysis
Here we list brieﬂy some basic deﬁnitions and principles of functional analysis,
which will be used to obtain the existence of weak solutions. For more details,
please see [Sch1].
Deﬁnition 2.3.1 A Banach space X is a complete, normed linear space with
norm   that satisﬁes
(i) u +v ≤ u +v, ∀u, v ∈ X,
(ii) λu = [λ[u, ∀u, v ∈ X, λ ∈ R,
(iii) u = 0 if and only if u = 0.
2.3 Methods of Linear Functional Analysis 45
The examples of Banach spaces are:
(i) L
p
(Ω) with norm u
L
p
(Ω)
,
(ii) Sobolev spaces W
k,p
(Ω) with norm u
W
k,p
(Ω)
, and
(iii) H¨older spaces C
0,γ
(Ω) with norm u
C
0,γ
(Ω)
.
Deﬁnition 2.3.2 A Hilbert space H is a Banach space endowed with an inner
product (, ) which generates the norm
u := (u, u)
1/2
with the following properties
(i) (u, v) = (v, u), ∀u, v ∈ H,
(ii) the mapping u → (u, v) is linear for each v ∈ H,
(iii) (u, u) ≥ 0, ∀u ∈ H, and
(iv) (u, u) = 0 if and only if u = 0.
Examples of Hilbert spaces are
(i) L
2
(Ω) with inner product
(u, v) =
Ω
u(x)v(x)dx,
and
(ii) Sobolev spaces H
1
(Ω) or H
1
0
(Ω) with inner product
(u, v) =
Ω
[u(x)v(x) +Du(x) Dv(x)]dx.
Deﬁnition 2.3.3 (i) A mapping A : X→Y is a linear operator if
A[au +bv] = aA[u] +bA[u] ∀ u, v ∈ X, a, b ∈ R
1
.
(ii) A linear operator A : X→Y is bounded provided
A := sup
u
X
≤1
Au
Y
< ∞.
(iii) A bounded linear operator f : X→R
1
is called a bounded linear func
tional on X. The collection of all bounded linear functionals on X, denoted by
X
∗
, is called the dual space of X. We use
< f, u >:= f[u]
to denote the pairing of X
∗
and X.
Lemma 2.3.1 (Projection).
Let M be a closed subspace of H. Then for every u ∈ H, there is a v ∈ M,
such that
(u −v, w) = 0, ∀w ∈ M. (2.7)
In other words,
H = M ⊕M
⊥
:= ¦u ∈ H [ u⊥M¦.
46 2 Existence of Weak Solutions
Here v is the projection of u on M. The readers may ﬁnd the proof of the
Lemma in the book [Sch] ( Theorem 13) or as given in the following exercise.
Exercise 2.3.1 i) Given any u ∈ H and u ∈M, there exist a v
o
∈ M such
that
v
o
−u = inf
v∈M
v −u.
Hint: Show that a minimizing sequence is a Cauchy sequence.
ii) Show that
(u −v
o
, w) = 0 , ∀w ∈ M.
Based on this Projection Lemma, one can prove
Theorem 2.3.1 (Riesz Representation Theorem). Let H be a real Hilbert
space with inner product (, ), and H
∗
be its dual space. Then H
∗
can be
canonically identiﬁed with H, more precisely, for each u
∗
∈ H
∗
, there exists
a unique element u ∈ H, such that
< u
∗
, v >= (u, v) ∀v ∈ H.
The mapping u
∗
→ u is a linear isomorphism from H
∗
onto H.
This is a wellknown theorem in functional analysis. The readers may see
the book [Sch1] for its proof or do it as the following exercise.
Exercise 2.3.2 Prove the Riesz Representation Theorem in 4 steps.
Let T be a linear operator from H to R
1
and T ≡0. Show that
i) K(T) := ¦u ∈ H [ Tu = 0¦ is closed.
ii) H = K(T) ⊕K(T)
⊥
.
iii) The dimension of K(T) is 1.
iv) There exists v, such that Tv = 1. Then
Tv = (u, v) , ∀u ∈ H.
From Riesz Representation Theorem, one can derive the following
Theorem 2.3.2 (LaxMilgram Theorem). Let H be a real Hilbert space with
norm  . Let
B : H H→R
1
be a bilinear mapping. Assume that there exist constants M, m > 0, such that
(i) [B[u, v][ ≤ Muv, ∀u, v ∈ H
and
(ii) mu
2
≤ B[u, u], ∀u ∈ H.
Then for each bounded linear functional f on H, there exists a unique
element u ∈ H, such that
B[u, v] =< f, v >, ∀v ∈ H.
2.3 Methods of Linear Functional Analysis 47
Proof. 1. If B[u, v] is symmetric, i.e. B[u, v] = B[v, u], then the conditions
(i) and (ii) ensure that B[u, v] can be made as an inner product in H, and
the conclusion of the theorem follows directly from the Reisz Representation
Theorem.
2. In the case B[u, v] may not be symmetric, we proceed as follows.
On one hand, by the Riesz Representation Theorem, for a given bounded
linear functional f on H, there is an
˜
f ∈ H, such that
< f, v >= (
˜
f, v), ∀v ∈ H. (2.8)
On the other hand, for each ﬁxed element u ∈ H, by condition (i), B[u, ]
is also a bounded linear functional on H, and hence there exist a ˜ u ∈ H, such
that
B[u, v] = (˜ u, v), ∀v ∈ H. (2.9)
Denote this mapping from u to ˜ u by A, i.e.
A : H→H, Au = ˜ u. (2.10)
From (2.8), (2.9), and (2.10), It suﬃce to show that for each
˜
f ∈ H, there
exists a unique element u ∈ H, such that
Au =
˜
f.
We carry this out in two parts. In part (a), we show that A is a bounded
linear operator, and in part (b), we prove that it is onetoone and onto.
(a) For any u
1
, u
2
∈ H, any a
1
, a
2
∈ R
1
, and each v ∈ H, we have
(A(a
1
u
1
+a
2
u
2
), v) = B[(a
1
u
1
+a
2
u
2
), v]
= a
1
B[u
1
, v] +a
2
B[u
2
, v]
= a
1
(Au
1
, v) +a
2
(Au
2
, v)
= (a
1
Au
1
+a
2
Au
2
, v)
Hence
A(a
1
u
1
+a
2
u
2
) = a
1
Au
1
+a
2
Au
2
.
A is linear.
Moreover, by condition (i), for any u ∈ H,
Au
2
= (Au, Au) = B[u, Au] ≤ MuAu,
and consequently
Au ≤ Mu ∀u ∈ H. (2.11)
This veriﬁes that A is bounded.
(b) To see that A is onetoone, we apply condition (ii):
mu
2
≤ B[u, u] = (Au, u) ≤ Auu,
48 2 Existence of Weak Solutions
and it follows that
mu ≤ Au. (2.12)
Now if Au = Av, then from (2.12),
mu −v ≤ Au −Av.
This implies u = v. Hence A is onetoone.
From (2.12), one can also see that R(A), the range of A in H is closed. In
fact, assume that v
k
∈ R(A) and v
k
→v in H. Then for each v
k
, there is a u
k
,
such that Au
k
= v
k
. Inequality (2.12) implies that
u
i
−u
j
 ≤ Cv
i
−v
j
,
i.e. u
k
is a Cauchy sequence in H, and hence u
k
→u for some u ∈ H. Now, by
(2.11), Au
k
→Au and v = Au ∈ H. Therefore, R(A) is closed.
To show that R(A) = H, we need the Projection Lemma 2.3.1. For any
u ∈ H, by the Projection Lemma, there exists a v ∈ R(A), such that
0 = (u −v, A(u −v)) = B[(u −v), (u −v)] ≥ mu −v
2
.
Consequently, u = v ∈ R(A) and hence H ⊂ R(A). Therefore R(A) = H.
This completes the proof of the Theorem. ¯ .
2.3.3 Existence of Weak Solutions
Now we explore that in what situations, our second order elliptic operator
L would satisfy the conditions (i) and (ii) requested by the LaxMilgram
Theorem.
Let H = H
1
0
(Ω) be our Hilbert space with inner product
(u, v) :=
Ω
(DuDv +uv)dx.
By Poincar´e inequality, we can use the equivalent inner product
(u, v)
H
:=
Ω
DuDvdx
and the corresponding norm
u
H
:=
Ω
[Du[
2
dx.
For any v ∈ H, by the Sobolev Embedding, v is in L
2n
n−2
(Ω). And by virtue
of the H¨older inequality,
2.3 Methods of Linear Functional Analysis 49
< f, v >:=
Ω
f(x)v(x)dx ≤
Ω
[f(x)[
2n
n+2
dx
n+2
2n
Ω
[v(x)[
2n
n−2
dx
n−2
2n
,
hence we only need to assume that f ∈ L
2n
n+2
(Ω).
In the simplest case when L = −´, we have
B[u, v] =
Ω
Du Dvdx.
Obviously, condition (i) is met:
[B[u, v][ ≤ u
H
v
H
,
Condition (ii) is also satisﬁed:
B[u, u] = u
2
H
.
Now applying the LaxMilgram Theorem, we conclude that there exists a
unique u ∈ H such that
B[u, v] =< f, v >, ∀v ∈ H.
We have actually proved
Theorem 2.3.3 For every f(x) in L
2n
n+2
(Ω), the Dirichlet problem
−´u = f(x), x ∈ Ω,
u(x) = 0, x ∈ ∂Ω
has a unique weak solution u in H
1
0
(Ω).
In general,
B[u, v] =
Ω
¸
n
¸
i,j=1
a
ij
(x)u
xi
v
xj
+
n
¸
i=1
b
i
(x)u
xi
v +c(x)uv
¸
dx.
Under the assumption that
a
ij

L
∞
(Ω)
, b
i

L
∞
(Ω)
, c
L
∞
(Ω)
≤ C,
and by H¨older inequality, one can easily verify that
[B[u, v][ ≤ 3Cu
H
v
H
.
Hence condition (i) is always satisﬁed. However, condition (ii) may not auto
matically hold. It depends on the sign of c(x), and the L
∞
norm of b
i
(x) and
c(x).
First from the uniform ellipticity condition, we have
50 2 Existence of Weak Solutions
Ω
n
¸
i,j=1
a
ij
(x)u
xi
u
xj
dx ≥ δ
Ω
[Du[
2
dx = δu
2
H
, (2.13)
for some δ > 0.
From here one can see that if b
i
(x) ≡ 0 and c(x) ≥ 0, then condition (ii) is
met. Actually, the condition on c(x) can be relaxed a little bit, for instance, if
c(x) ≥ −δ/2 (2.14)
with δ as in (2.13), then condition (ii) holds. Hence we have
Theorem 2.3.4 Assume that b
i
(x) ≡ 0 and c(x) satisﬁes (2.14). Then for
every f ∈ L
2n
n+2
(Ω), the problem
Lu = f(x) x ∈ Ω
u(x) = 0 x ∈ ∂Ω
has a unique weak solution in H
1
0
(Ω).
2.4 Variational Methods
2.4.1 Semilinear Equations
To ﬁnd the weak solutions of semilinear equations
Lu = f(x, u)
the LaxMilgram Theorem can no longer be applied. We will use variational
methods to obtain the existence. Roughly speaking, we will associate the
equation with the functional
J(u) =
1
2
Ω
(Lu u)dx −
Ω
F(x, u)dx
where
F(x, u) =
u
0
f(x, s)ds.
We show that the critical point of J(u) is a weak solution of our problem.
Then we will focus on how to seek critical points in various situations.
2.4.2 Calculus of Variations
For a continuously diﬀerentiable function g deﬁned on R
n
, a critical point is
a point where Dg vanishes. The simplest sort of critical points are global or
local maxima or minima. Others are saddle points. To seek weak solutions of
2.4 Variational Methods 51
partial diﬀerential equations, we generalize this concept to inﬁnite dimensional
spaces. To illustrate the idea, let us start from a simple example:
−´u = f(x) , x ∈ Ω;
u(x) = 0 , x ∈ ∂Ω.
(2.15)
To ﬁnd weak solutions of the problem, we study the functional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
f(x)udx
in H
1
0
(Ω).
Now suppose that there is some function u which happen to be a minimizer
of J() in H
1
0
(Ω), then we will demonstrate that u is actually a weak solution
of our problem.
Let v be any function in H
1
0
(Ω). Consider the realvalue function
g(t) = J(u +tv), t ∈ R.
Since u is a minimizer of J(), the function g(t) has a minimum at t = 0;
and therefore, we must have
g
(0) = 0, i.e.
d
dt
J(u +tv) [
t=0
= 0.
Here, explicitly,
J(u +tv) =
1
2
Ω
[D(u +tv)[
2
dx −
Ω
f(x)(u +tv)dx,
and
d
dt
J(u +tv) =
Ω
D(u +tv) Dvdx −
Ω
f(x)vdx.
Consequently, g
(0) = 0 yields
Ω
Du Dvdx −
Ω
f(x)vdx = 0, ∀v ∈ H
1
0
(Ω). (2.16)
This implies that u is a weak solution of problem (2.15).
Furthermore, if u is also second order diﬀerentiable, then through integra
tion by parts, we obtain
Ω
[−´u −f(x)] vdx = 0.
Since this is true for any v ∈ H
1
0
(Ω), we conclude
−´u −f(x) = 0.
52 2 Existence of Weak Solutions
Therefore, u is a classical solution of the problem.
From the above argument, the readers can see that, in order to be a weak
solution of the problem, here u need not to be a minima; it can be a maxima,
or a saddle point of the functional; or more generally, any point that satisﬁes
d
dt
J(u +tv) [
t=0
.
More precisely, we have
Deﬁnition 2.4.1 Let J be a linear functional on a Banach space X.
i) We say that J is Frechet diﬀerentiable at u ∈ X if there exists a con
tinuous linear map L = L(u) from X to X
∗
satisfying:
For any > 0, there is a δ = δ(, u), such that
[J(u +v) −J(u)− < L(u), v > [ ≤ v
X
whenever v
X
< δ,
where
< L(u), v >:= L(u)(v)
is the duality between X and X
∗
.
The mapping L(u) is usually denoted by J
(u).
ii) A critical point of J is a point at which J
(u) = 0, that is
< J
(u), v >= 0 ∀v ∈ X.
iii) We call J
(u) = 0 the EulerLagrange equation of the functional J.
Remark 2.4.1 One can verify that, if J is Frechet diﬀerentiable at u, then
< J
(u), v >= lim
t→0
J(u +tv) −J(u)
t
=
d
dt
J(u +tv) [
t=0
.
2.4.3 Existence of Minimizers
In the previous subsection, we have seen that the critical points of the func
tional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
f(x)udx
are weak solutions of the Dirichlet problem (2.15). Then our next question is:
Does the functional J actually possess a critical point?
We will show that there exists a minimizer of J. To this end, we verify
that J possesses the following three properties:
i) boundedness from below,
ii) coercivity, and
iii) weak lower semicontinuity.
i) Boundedness from Below.
2.4 Variational Methods 53
First we prove that J is bounded from below in H := H
1
0
(Ω) if f ∈ L
2
(Ω).
Again, for simplicity, we use the equivalent norm in H:
u
H
=
Ω
[Du[
2
dx.
By Poincar´e and H¨older inequalities, we have
J(u) ≥
1
2
u
2
H
−Cu
H
f
L
2
(Ω)
=
1
2
u
H
−Cf
L
2
(Ω)
2
−
C
2
2
f
2
L
2
(Ω)
≥ −
C
2
2
f
2
L
2
(Ω)
. (2.17)
Therefore, the functional J is bounded from below.
ii) Coercivity. However, A functional that is bounded from below does not
guarantee that it has a minimum. A simple counter example is
g(x) =
1
1 +x
2
, x ∈ R
1
.
Obviously, the inﬁmum of g(x) is 1, but it can never be attained. If we take a
minimizing sequence ¦x
k
¦ such that g(x
k
)→1, then x
k
goes away to inﬁnity.
This suggests that in order to prevent a minimizing sequence from ‘leaking’ to
inﬁnity, we need to have some sort of ‘coercive’ condition on our functional J.
That is, if a sequence ¦u
k
¦ goes to inﬁnity, i.e. if u
H
→∞, then J(u
k
) must
also become unbounded. Therefore a minimizing sequence would be retained
in a bounded set. And this is indeed true for our functional J. Actually, from
the second part of (2.17), one can see that
if u
k

H
→∞, then J(u
k
)→∞.
This implies that any minimizing sequence must be bounded in H.
iii) Weak Lower Semicontinuity. If H is a ﬁnite dimensional space, then a
bounded minimizing sequence ¦u
k
¦ would possesses a convergent subsequence,
and the limit would be a minimizer of J. This is no longer true in inﬁnite
dimensional spaces. We therefore turn our attention to weak topology.
Deﬁnition 2.4.2 Let X be a real Banach space. If a sequence ¦u
k
¦ ⊂ X
satisﬁes
< f, u
k
> → < f, u
o
> as k→∞
for every bounded linear functional f on X; then we say that ¦u
k
¦ converges
weakly to u
o
in X, and write
u
k
u
o
.
54 2 Existence of Weak Solutions
Deﬁnition 2.4.3 A Banach space is called reﬂexive if (X
∗
)
∗
= X, where X
∗
is the dual of X.
Theorem 2.4.1 (Weak Compactness.) Let X be a reﬂexive Banach space.
Assume that the sequence ¦u
k
¦ is bounded in X. Then there exists a subse
quence of ¦u
k
¦, which converges weakly in X.
Now let’s come back to our Hilbert space H := H
1
0
(Ω). By the Reisz
Representation Theorem, for every linear functional f on H, there is a unique
v ∈ H, such that
< f, u >= (v, u), ∀u ∈ H.
It follows that H
∗
= H, and therefore H is reﬂexive. By the Coercivity and
Weak Compactness Theorem, now a minimizing sequence ¦u
k
¦ is bounded,
and hence possesses a subsequence ¦u
ki
¦ that converges weakly to an element
u
o
in H:
(u
ki
, v)→(u
o
, v), ∀v ∈ H.
Naturally, we wish that the functional J we consider is continuous with respect
to this weak convergence, that is
lim
i→∞
J(u
ki
) = J(u
o
),
then u
o
would be our desired minimizer. However, this is not the case with our
functional, nor is it true for most other functionals of interest. Fortunately,
we do not really need such a continuity, instead, we only need a weaker one:
J(u
o
) ≤ liminf
i→∞
J(u
ki
).
Since on the other hand, by the deﬁnition of a minimizing sequence, we obvi
ously have
J(u
o
) ≥ liminf
i→∞
J(u
ki
),
and hence
J(u
o
) = liminf
i→∞
J(u
ki
).
Therefore, u
o
is a minimizer.
Deﬁnition 2.4.4 We say that a functional J() is weakly lower semicontinuous
on a Banach space X, if for every weakly convergent sequence
u
k
u
o
, in X,
we have
J(u
o
) ≤ liminf
k→∞
J(u
k
).
2.4 Variational Methods 55
Now we show that our functional J is weakly lower semicontinuous in H.
Assume that ¦u
k
¦ ⊂ H, and
u
k
u
o
in H.
Since for each ﬁxed f ∈ L
2n
n+2
(Ω),
Ω
fudx is a linear functional on H, it
follows immediately
Ω
fu
k
dx→
Ω
fu
o
dx, as k→∞. (2.18)
From the algebraic inequality
a
2
+b
2
≥ 2ab,
we have
Ω
[Du
k
[
2
dx ≥
Ω
[Du
o
[
2
dx + 2
Ω
Du
o
(Du
k
−Du
o
)dx.
Here the second term on the right goes to zero due to the weak convergence
u
k
u
o
. Therefore
liminf
k→∞
Ω
[Du
k
[
2
dx ≥
Ω
[Du
o
[
2
dx. (2.19)
Now (2.18) and (2.19) imply the weakly lower semicontinuity. So far, we
have proved
Theorem 2.4.2 Assume that Ω is a bounded domain with smooth boundary.
Then for every f ∈ L
2n
n+2
(Ω), the functional
J(u) =
1
2
Ω
[Du[
2
dx −
Ω
fudx
possesses a minimum u
o
in H
1
0
(Ω), which is a weak solution of the boundary
value problem
−´u = f(x) , x ∈ Ω
u(x) = 0 , x ∈ ∂Ω.
2.4.4 Existence of Minimizers Under Constraints
Now we consider the semilinear Dirichlet problem
−´u = [u[
p−1
u, x ∈ Ω,
u(x) = 0, x ∈ ∂Ω,
(2.20)
with 1 < p <
n+2
n−2
.
56 2 Existence of Weak Solutions
Obviously, u ≡ 0 is a solution. We call this a trivial solution. Of course,
what we are more interested is whether there exists nontrivial solutions. Let
J(u) =
1
2
Ω
[Du[
2
dx −
1
p + 1
Ω
[u[
p+1
dx.
It is easy to verify that
d
dt
J(u +tv) [
t=0
=
Ω
Du Dv −[u[
p−1
uv
dx.
Therefore, a critical point of the functional J in H := H
1
0
(Ω) is a weak solution
of (2.20)
However, this functional is not bounded from below. To see this, ﬁxed an
element u ∈ H and consider
J(tu) =
t
2
2
Ω
[Du[
2
dx −
t
p+1
p + 1
Ω
[u[
p+1
dx.
Noticing p + 1 > 2, we ﬁnd that
J(tu)→−∞, as t→∞.
Therefore, there is no minimizer of J(u). For this reason, instead of J, we will
consider another functional
I(u) :=
1
2
Ω
[Du[
2
dx,
in the constrain set
M := ¦u ∈ H [ G(u) :=
Ω
[u[
p+1
dx = 1¦.
We seek minimizers of I in M. Let ¦u
k
¦ ⊂ M be a minimizing sequence, i.e.
lim
k→∞
I(u
k
) = inf
u∈M
I(u) := m.
It follows that
Ω
[Du
k
[
2
dx is bounded, hence ¦u
k
¦ is bounded in H. By the
Weak Compactness Theorem, there exist a subsequence (for simplicity, we
still denote it by ¦u
k
¦), which converges weakly to u
o
in H. The weak lower
semicontinuity of the functional I yields
I(u
o
) ≤ liminf
k→∞
I(u
k
) = m. (2.21)
On the other hand, since p+1 <
2n
n −2
, by the Compact Sobolev Embedding
Theorem
H
1
(Ω) →→ L
p+1
(Ω),
2.4 Variational Methods 57
we ﬁnd that u
k
converges strongly to u
o
in L
p+1
(Ω), and hence
Ω
[u
o
[
p+1
dx = 1.
That is u
o
∈ M, and therefore
I(u
o
) ≥ m.
Together with (2.21), we obtain
I(u
o
) = m.
Now we have shown the existence of a minimizer u
o
of I in M. To link
u
o
to a weak solution of (2.20), we derive the corresponding EulerLagrange
equation for this minimizer under the constraint.
Theorem 2.4.3 (Lagrange Multiplier). let u be a minimizer of I in M, i.e.
I(u) = min
v∈M
I(v).
Then there exists a real number λ such that
I
(u) = λG
(u),
or
< I
(u), v >= λ < G
(u), v >, ∀v ∈ H.
Before proving the theorem, let's first recall the Lagrange multiplier rule for functions defined on $R^n$. Let $f(x)$ and $g(x)$ be smooth functions in $R^n$. It is well known that the critical points (in particular, the minima) $x_o$ of $f(x)$ under the constraint $g(x) = c$ satisfy
$$Df(x_o) = \lambda\, Dg(x_o) \qquad (2.22)$$
for some constant $\lambda$. Geometrically, $Df(x_o)$ is a vector in $R^n$. It can be decomposed as the sum of two perpendicular vectors:
$$Df(x_o) = Df(x_o)\big|_N + Df(x_o)\big|_T,$$
where the two terms on the right-hand side are the projections of $Df(x_o)$ onto the normal and tangent spaces of the level set $g(x) = c$ at the point $x_o$. By definition, $x_o$ being a critical point of $f(x)$ on the level set $g(x) = c$ means that the tangential projection $Df(x_o)|_T = 0$, while the normal projection $Df(x_o)|_N$ is parallel to $Dg(x_o)$; we therefore have (2.22). Heuristically, we may generalize this picture to the infinite dimensional space $H$. At a fixed point $u\in H$, $I'(u)$ is a linear functional on $H$. By the Riesz Representation Theorem, it can be identified with a point (or a vector) in $H$, denoted by $\bar{I'}(u)$, such that
$$\langle I'(u), v\rangle = (\bar{I'}(u), v), \quad \forall v\in H.$$
Similarly for $G'(u)$: we have $\bar{G'}(u)\in H$, and it is a normal vector of $M$ at the point $u$. At a minimum $u$ of $I$ on the hypersurface $\{w\in H \mid G(w) = 1\}$, $\bar{I'}(u)$ must be perpendicular to the tangent space at the point $u$, and hence
$$\bar{I'}(u) = \lambda\, \bar{G'}(u).$$
We now prove this rigorously.
The Proof of Theorem 2.4.3.
Recall that if $u$ is a minimum of $I$ in the whole space $H$, then, for any $v\in H$, we have
$$\frac{d}{dt} I(u+tv)\Big|_{t=0} = 0. \qquad (2.23)$$
Now $u$ is only a minimum on $M$; we can no longer use (2.23), because $u+tv$ cannot be kept on $M$: we cannot guarantee that
$$G(u+tv) := \int_\Omega |u+tv|^{p+1}\,dx = 1$$
for all small $t$. To remedy this, we consider
$$g(t, s) := G(u + tv + sw).$$
For each small $t$, we try to show that there is an $s = s(t)$ such that
$$G(u + tv + s(t)w) = 1.$$
Then we can calculate the variation of $I(u + tv + s(t)w)$ as $t\to 0$.
In fact,
$$\frac{\partial g}{\partial s}(0,0) = \langle G'(u), w\rangle = (p+1)\int_\Omega |u|^{p-1}\,u\,w\,dx.$$
Since $u\in M$, $\int_\Omega |u|^{p+1}\,dx = 1$, so there exists $w$ (one may simply take $w = u$) such that the right-hand side above is not zero. Hence, by the Implicit Function Theorem, there exists a $C^1$ function $s: R^1\to R^1$ such that
$$s(0) = 0, \quad\text{and}\quad g(t, s(t)) = 1,$$
for sufficiently small $t$. Now $u+tv+s(t)w$ lies on $M$, and hence at the minimum $u$ of $I$ on $M$ we must have
$$\frac{d}{dt} I(u+tv+s(t)w)\Big|_{t=0} = \langle I'(u), v\rangle + \langle I'(u), w\rangle\, s'(0) = 0. \qquad (2.24)$$
On the other hand, we have
$$0 = \frac{dg}{dt}(0,0) = \frac{\partial g}{\partial t}(0,0) + \frac{\partial g}{\partial s}(0,0)\,s'(0) = \langle G'(u), v\rangle + \langle G'(u), w\rangle\, s'(0).$$
Consequently,
$$s'(0) = -\frac{\langle G'(u), v\rangle}{\langle G'(u), w\rangle}.$$
Substituting this into (2.24) and denoting
$$\lambda = \frac{\langle I'(u), w\rangle}{\langle G'(u), w\rangle},$$
we arrive at
$$\langle I'(u), v\rangle = \lambda\,\langle G'(u), v\rangle, \quad \forall v\in H.$$
This completes the proof of the Theorem. ⊓⊔
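The finite-dimensional picture behind the proof, $Df(x_o) = \lambda\,Dg(x_o)$ at a constrained minimum, can be checked directly. The following snippet is an illustration, not from the book: it minimizes $f(x) = |x|^2$ subject to the linear constraint $g(x) = a\cdot x = 1$ (whose minimizer $x^* = a/|a|^2$ is known in closed form) and verifies that the two gradients are parallel; the vector $a$ is an arbitrary choice.

```python
import numpy as np

# Finite-dimensional check of the Lagrange multiplier rule (an illustration,
# not from the book): minimize f(x) = |x|^2 on the hyperplane {a·x = 1}.
# The minimizer is x* = a/|a|^2, and Df(x*) = λ Dg(x*) must hold.

a = np.array([3.0, 4.0])
x_star = a / np.dot(a, a)              # closest point to the origin on {a·x = 1}

Df = 2 * x_star                        # gradient of f(x) = |x|^2
Dg = a                                 # gradient of g(x) = a·x
lam = np.dot(Df, Dg) / np.dot(Dg, Dg)  # λ = <Df, Dg> / <Dg, Dg>

# the tangential projection of Df at x* vanishes:
residual = Df - lam * Dg
```

Here `residual` is the tangential component of $Df(x^*)$, which is zero at the constrained minimum, exactly as in (2.22).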
Now let's come back to the weak solution of problem (2.20). Our minimizer $u_o$ of $I$ under the constraint $G(u) = 1$ satisfies
$$\langle I'(u_o), v\rangle = \lambda\,\langle G'(u_o), v\rangle, \quad \forall v\in H,$$
that is,
$$\int_\Omega Du_o\cdot Dv\,dx = \lambda\int_\Omega |u_o|^{p-1}\,u_o\,v\,dx, \quad \forall v\in H. \qquad (2.25)$$
One can see that $\lambda > 0$. Let $\tilde u = a\,u_o$ with $\lambda/a^{p-1} = 1$. This is possible since $p > 1$. Then it is easy to verify that
$$\int_\Omega D\tilde u\cdot Dv\,dx = \int_\Omega |\tilde u|^{p-1}\,\tilde u\,v\,dx.$$
Now $\tilde u$ is the desired weak solution of (2.20) in $H$. ⊓⊔

Exercise 2.4.1 Show that $\lambda > 0$ in (2.25).
2.4.5 Minimax Critical Points

Besides local or global minima and maxima, there are other types of critical points: saddle points, or minimax points. For a simple example, consider the function $f(x,y) = x^2 - y^2$ from $R^2$ to $R^1$. The point $(x,y) = (0,0)$ is a critical point of $f$, i.e. $Df(0,0) = 0$. However, it is neither a local maximum nor a local minimum. The graph of $z = f(x,y)$ in $R^3$ looks like a horse saddle. If we move along the graph in the direction of the $x$-axis, $(0,0)$ is a minimum, while along the direction of the $y$-axis, $(0,0)$ is a maximum. For this reason, we also call $(0,0)$ a minimax critical point of $f(x,y)$. A way to locate, or to show the existence of, such a minimax point is called a minimax variational method. To illustrate the main idea, let's stay with this simple example. Suppose we do not know whether $f(x,y)$ possesses a minimax point, but we do detect a 'mountain range' on the graph near the $xz$-plane. To show the existence of a minimax point, we pick two points in the $xy$-plane, for instance $P = (1,1)$ and $Q = (-1,-1)$, on the two sides of the 'mountain range'. Let $\gamma(s)$ be a continuous curve linking the two points $P$ and $Q$, with $\gamma(0) = P$ and $\gamma(1) = Q$. To go from $(P, f(P))$ to $(Q, f(Q))$ on the graph, one first needs to climb over the 'mountain range' near the $xz$-plane, and then descend to the point $(Q, f(Q))$. Hence on this path there is a highest point. We then take the infimum of the highest points over all possible paths linking $P$ and $Q$. More precisely, we define
$$c = \inf_{\gamma\in\Gamma}\ \max_{s\in[0,1]} f(\gamma(s)),$$
where
$$\Gamma = \{\gamma \mid \gamma = \gamma(s) \text{ is continuous on } [0,1],\ \gamma(0) = P,\ \gamma(1) = Q\}.$$
We then try to show that there exists a point $P_o$ at which $f$ attains this minimax value $c$, and, more importantly, $Df(P_o) = 0$.
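For this model function the minimax value can be computed by hand: every path from $P$ to $Q$ crosses the line $y = -x$, where $f = 0$, so the maximum along any path is at least $0$; along the straight diagonal $x = y$ one has $f\equiv 0$, so $c = 0$, attained at the saddle $(0,0)$. The following snippet (an illustration, not from the book) samples a few paths numerically; the perturbation shape is an arbitrary choice that keeps the endpoints fixed.

```python
import numpy as np

# Numerical illustration (not from the book) of c = inf_γ max_s f(γ(s))
# for f(x, y) = x^2 - y^2 with P = (1, 1) and Q = (-1, -1).

def f(x, y):
    return x**2 - y**2

s = np.linspace(0.0, 1.0, 1001)

# straight diagonal path from P to Q: f vanishes identically along it
diag_max = np.max(f(1 - 2*s, 1 - 2*s))

# randomly perturbed paths (endpoints fixed): their highest points are >= 0
rng = np.random.default_rng(0)
path_maxima = []
for _ in range(20):
    bump = rng.normal() * np.sin(np.pi * s)   # vanishes at s = 0 and s = 1
    x = 1 - 2*s + bump
    y = 1 - 2*s - bump
    path_maxima.append(np.max(f(x, y)))
```

No sampled path beats the diagonal, consistent with $c = 0 = f(0,0)$ at the minimax point.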
For a functional $J$ defined on an infinite dimensional space $X$, we will apply a similar idea to seek its minimax critical points. Unlike in finite dimensional spaces, in an infinite dimensional space a bounded sequence may not converge. Hence we require the functional $J$ to satisfy a compactness condition, the Palais-Smale condition.

Definition 2.4.5 We say that the functional $J$ satisfies the Palais-Smale condition (in short, (PS)) if any sequence $\{u_k\}\subset X$ for which $J(u_k)$ is bounded and $J'(u_k)\to 0$ as $k\to\infty$ possesses a convergent subsequence.

Under this compactness condition, Ambrosetti and Rabinowitz [AR] established the following well-known theorem on the existence of a minimax critical point.
Theorem 2.4.4 (Mountain Pass Theorem). Let $X$ be a real Banach space and $J\in C^1(X, R^1)$. Suppose $J$ satisfies (PS), $J(0) = 0$,
$(J_1)$ there exist constants $\rho, \alpha > 0$ such that $J\big|_{\partial B_\rho(0)} \ge \alpha$, and
$(J_2)$ there is an $e\in X\setminus B_\rho(0)$ such that $J(e)\le 0$.
Then $J$ possesses a critical value $c\ge\alpha$ which can be characterized as
$$c = \inf_{\gamma\in\Gamma}\ \max_{u\in\gamma([0,1])} J(u),$$
where
$$\Gamma = \{\gamma\in C([0,1], X) \mid \gamma(0) = 0,\ \gamma(1) = e\}.$$
Here, a critical value $c$ means a value of the functional which is attained by a critical point $u$, i.e. $J'(u) = 0$ and $J(u) = c$.
Heuristically, the Theorem says that if the point $0$ is located in a valley surrounded by a mountain range, and if on the other side of the range there is another low point $e$, then there must be a mountain pass from $0$ to $e$ which contains a minimax critical point of $J$. In the figure below, on the graph of $J$, the path connecting the two low points $(0, J(0))$ and $(e, J(e))$ contains a highest point $S$, which is lowest as compared with the highest points on the nearby paths. This $S$ is a saddle point, the minimax critical point of $J$.
The proof of the Theorem is mainly based on a deformation theorem, which exploits the change of topology of the level sets of $J$ as the value of $J$ passes through a critical value. To illustrate this, let's come back to the simple example $f(x,y) = x^2 - y^2$. Denote the sublevel sets of $f$ by
$$f^c := \{(x,y)\in R^2 \mid f(x,y)\le c\}.$$
One can see that $0$ is a critical value, and for any $a > 0$ the level sets $f^a$ and $f^{-a}$ have different topology: $f^a$ is connected while $f^{-a}$ is not. If there is no critical value between two level sets, say $f^a$ and $f^b$ with $0 < a < b$, then one can continuously deform the set $f^b$ into $f^a$. One convenient way is to let the points in the set $f^b$ 'flow' along the direction of $-Df$ into $f^a$; correspondingly, the points on the graph 'flow downhill' to a lower level. More precisely and more generally, we have
Theorem 2.4.5 (Deformation Theorem). Let $X$ be a real Banach space. Assume that $J\in C^1(X, R^1)$ and satisfies (PS). Suppose that $c$ is not a critical value of $J$.
Then for each sufficiently small $\varepsilon > 0$, there exist a constant $0 < \delta < \varepsilon$ and a continuous mapping $\eta(t,u)$ from $[0,1]\times X$ to $X$, such that
(i) $\eta(0, u) = u$, $\forall u\in X$,
(ii) $\eta(1, u) = u$, $\forall u\notin J^{-1}[c-\varepsilon, c+\varepsilon]$,
(iii) $J(\eta(t,u)) \le J(u)$, $\forall u\in X$, $t\in[0,1]$,
(iv) $\eta(1, J^{c+\delta}) \subset J^{c-\delta}$.

Roughly speaking, the Theorem says that if $c$ is not a critical value of $J$, then one can continuously deform the higher level set $J^{c+\delta}$ into the lower level set $J^{c-\delta}$ by 'flowing downhill'. Condition (ii) ensures that, after the deformation, the two points $0$ and $e$ from the Mountain Pass Theorem remain fixed.
Proof of the Deformation Theorem.
The main idea is to solve an ODE initial value problem (with respect to $t$) for each $u\in X$, roughly of the following form:
$$\frac{d\eta}{dt}(t,u) = -J'(\eta(t,u)), \qquad \eta(0,u) = u. \qquad (2.26)$$
That is, we let the 'flow' go along the direction of $-J'(\eta(t,u))$, so that the value of $J(\eta(t,u))$ decreases as $t$ increases. However, we need to modify the right-hand side of the equation a little, so that the flow satisfies the other desired conditions.

Step 1. We first choose a small $\varepsilon > 0$, so that the flow does not go too slowly in $J^{c+\varepsilon}\setminus J^{c-\varepsilon}$. This is actually guaranteed by (PS). We claim that there exist constants $0 < a, \varepsilon < 1$ such that
$$\|J'(u)\| \ge a, \quad \forall u\in J^{c+\varepsilon}\setminus J^{c-\varepsilon}. \qquad (2.27)$$
Otherwise, there would exist sequences $a_k\to 0$, $\varepsilon_k\to 0$ and elements $u_k\in J^{c+\varepsilon_k}\setminus J^{c-\varepsilon_k}$ with $\|J'(u_k)\| \le a_k$. Then, by virtue of the (PS) condition, there is a subsequence $\{u_{k_i}\}$ that converges to some element $u_o\in X$. Since $J\in C^1(X, R^1)$, we must have
$$J(u_o) = c \quad\text{and}\quad J'(u_o) = 0.$$
This contradicts the assumption that $c$ is not a critical value of $J$. Therefore (2.27) holds.

Step 2. To satisfy condition (ii), i.e. to make the flow stay still on the set
$$M := \{u\in X \mid J(u)\le c-\varepsilon \ \text{or}\ J(u)\ge c+\varepsilon\},$$
we could multiply the right-hand side of equation (2.26) by $\mathrm{dist}(\eta, M)$; but this would make the flow go too slowly near $M$. To modify it further, we choose $0 < \delta < \varepsilon$ such that
$$\delta < \frac{a^2}{2}, \qquad (2.28)$$
and let
$$N := \{u\in X \mid c-\delta \le J(u) \le c+\delta\}.$$
Now for $u\in X$, define
$$g(u) := \frac{\mathrm{dist}(u, M)}{\mathrm{dist}(u, M) + \mathrm{dist}(u, N)},$$
and let
$$h(s) := \begin{cases} 1 & \text{if } 0\le s\le 1,\\ \dfrac{1}{s} & \text{if } s > 1.\end{cases}$$
We finally modify $-J'(\eta)$ to
$$F(\eta) := -g(\eta)\,h(\|J'(\eta)\|)\,J'(\eta).$$
The factor $h(\cdot)$ is introduced to ensure the boundedness of $F(\cdot)$.
Step 3. Now for each fixed $u\in X$, we consider the modified ODE problem
$$\frac{d\eta}{dt}(t,u) = F(\eta(t,u)), \qquad \eta(0,u) = u.$$
One can verify that $F$ is bounded and Lipschitz continuous on bounded sets, and therefore, by well-known ODE theory, there exists a unique solution $\eta(t,u)$ for all $t\ge 0$.
Condition (i) is satisfied automatically. From the definition of $g$,
$$g(u) = 0, \quad \forall u\in M;$$
hence condition (ii) is also satisfied.
To verify (iii) and (iv), we compute
$$\frac{d}{dt}J(\eta(t,u)) = \Big\langle J'(\eta(t,u)), \frac{d\eta}{dt}(t,u)\Big\rangle = \langle J'(\eta(t,u)), F(\eta(t,u))\rangle = -g(\eta(t,u))\,h(\|J'(\eta(t,u))\|)\,\|J'(\eta(t,u))\|^2. \qquad (2.29)$$
In the above equality the right-hand side is obviously nonpositive, and hence $J(\eta(t,u))$ is nonincreasing. This verifies (iii).
To see (iv), it suffices to show that, for each $u$ in $N = J^{c+\delta}\setminus J^{c-\delta}$, we have $\eta(1,u)\in J^{c-\delta}$.
Notice that for $\eta\in N$, $g(\eta)\equiv 1$, and (2.29) becomes
$$\frac{d}{dt}J(\eta(t,u)) = -h(\|J'(\eta(t,u))\|)\,\|J'(\eta(t,u))\|^2.$$
By the definition of $h$ and (2.27), we have
$$\frac{d}{dt}J(\eta(t,u)) = \begin{cases} -\|J'(\eta(t,u))\|^2 \le -a^2 & \text{if } \|J'(\eta(t,u))\| \le 1,\\ -\|J'(\eta(t,u))\| \le -a & \text{if } \|J'(\eta(t,u))\| > 1.\end{cases}$$
In either case, since $0 < a < 1$, the rate of decrease is at least $a^2$ while the flow remains in $N$; and if the flow exits $N$ at some earlier time, then $J(\eta(t,u))\le c-\delta$ already, and (iv) follows from the monotonicity (iii). Otherwise we derive
$$J(\eta(1,u)) \le J(\eta(0,u)) - a^2 \le c + \delta - a^2 \le c - \delta.$$
This establishes (iv) and therefore completes the proof of the Theorem. ⊓⊔
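The deformation can be watched numerically on the model function $f(x,y) = x^2 - y^2$. The sketch below is an illustration, not from the book: it takes $c = 1$ (not a critical value, since the only critical value is $0$) and $\delta = 0.1$, samples the sublevel set $f^{c+\delta}$ inside a bounded box, and integrates the unmodified flow $d\eta/dt = -Df(\eta)$ by Euler's method; the box, grid, and step count are arbitrary choices.

```python
import numpy as np

# Numerical illustration (not from the book) of the downhill deformation
# dη/dt = -Df(η) for f(x, y) = x^2 - y^2, carrying f^{1.1} into f^{0.9}.

def f(x, y):
    return x**2 - y**2

def flow(x, y, T=1.0, steps=400):
    dt = T / steps
    for _ in range(steps):
        x, y = x - dt * 2*x, y + dt * 2*y   # -Df = (-2x, 2y)
    return x, y

# sample the part of the sublevel set f^{1.1} inside the box [-2, 2]^2
pts = [(x, y)
       for x in np.linspace(-2, 2, 41)
       for y in np.linspace(-2, 2, 41)
       if f(x, y) <= 1.1]

deformed = [f(*flow(x, y)) for (x, y) in pts]
```

Every sampled point of $f^{c+\delta}$ lands in $f^{c-\delta}$ after time $1$, and the value of $f$ never increases along the flow, exactly as in conditions (iii) and (iv).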
With this Deformation Theorem, the proof of the Mountain Pass Theorem becomes very simple.

The Proof of the Mountain Pass Theorem.
First, by the definition, $c < \infty$. To see this, take the particular path $\gamma(t) = te$ and consider the function
$$g(t) = J(\gamma(t)).$$
It is continuous on the closed interval $[0,1]$, and hence bounded from above. Therefore
$$c \le \max_{u\in\gamma([0,1])} J(u) = \max_{t\in[0,1]} g(t) < \infty.$$
On the other hand, for any $\gamma\in\Gamma$,
$$\gamma([0,1]) \cap \partial B_\rho(0) \ne \emptyset.$$
Hence, by condition $(J_1)$,
$$\max_{u\in\gamma([0,1])} J(u) \ge \inf_{v\in\partial B_\rho(0)} J(v) \ge \alpha.$$
It follows that $c \ge \alpha$.
Suppose the number $c$ so defined is not a critical value. We will use the Deformation Theorem to continuously deform a path $\gamma\in\Gamma$ down to the level set $J^{c-\delta}$ and derive a contradiction. More precisely, choose the $\varepsilon$ in the Deformation Theorem to be $\frac{\alpha}{2}$, and $0 < \delta < \varepsilon$. By the definition of $c$, we can pick a path $\gamma\in\Gamma$ such that
$$\max_{u\in\gamma([0,1])} J(u) \le c + \delta,$$
i.e.
$$\gamma([0,1]) \subset J^{c+\delta}.$$
Now, by the Deformation Theorem, this path can be continuously deformed down into the level set $J^{c-\delta}$, that is,
$$\eta(1, \gamma([0,1])) \subset J^{c-\delta},$$
or, in other words,
$$\max_{u\in\eta(1,\gamma([0,1]))} J(u) \le c - \delta. \qquad (2.30)$$
On the other hand, since $0$ and $e$ are in $J^{c-\varepsilon}$, by (ii) of the Deformation Theorem,
$$\eta(1, 0) = 0 \quad\text{and}\quad \eta(1, e) = e.$$
This means that $\eta(1, \gamma([0,1]))$ is also a path in $\Gamma$, and therefore we must have
$$\max_{u\in\eta(1,\gamma([0,1]))} J(u) \ge c.$$
This contradicts (2.30) and hence completes the proof of the Theorem. ⊓⊔
2.4.6 Existence of a Minimax via the Mountain Pass Theorem

Now we apply the Mountain Pass Theorem to seek weak solutions of the semilinear elliptic problem
$$\begin{cases} -\Delta u = f(x,u), & x\in\Omega,\\ u(x) = 0, & x\in\partial\Omega.\end{cases} \qquad (2.31)$$
Although the method we illustrate here can be applied to more general second order uniformly elliptic equations, we prefer this simple model, which better illustrates the main ideas.
We assume that $f(x,u)$ satisfies
$(f_1)$ $f(x,s)\in C(\bar\Omega\times R^1, R^1)$;
$(f_2)$ there exist constants $c_1, c_2\ge 0$ such that
$$|f(x,s)| \le c_1 + c_2|s|^p,$$
with $0\le p < \frac{n+2}{n-2}$ if $n > 2$. If $n = 1$, $(f_2)$ can be dropped; if $n = 2$, it suffices that
$$|f(x,s)| \le c_1 e^{\phi(s)},$$
with $\phi(s)/s^2\to 0$ as $|s|\to\infty$;
$(f_3)$ $f(x,s) = o(|s|)$ as $s\to 0$; and
$(f_4)$ there are constants $\mu > 2$ and $r\ge 0$ such that, for $|s|\ge r$,
$$0 < \mu F(x,s) \le s\,f(x,s),$$
where
$$F(x,s) = \int_0^s f(x,t)\,dt.$$

Remark 2.4.2 A simple example of such a function $f(x,u)$ is $|u|^{p-1}u$.
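The example in the remark can be checked against the hypotheses directly. The snippet below is an illustration, not from the book: for $f(s) = |s|^{p-1}s$ one has $|f(s)| = |s|^p$ (so $(f_2)$ holds), $f(s)/|s| = |s|^{p-1}\to 0$ (so $(f_3)$ holds), and $F(s) = |s|^{p+1}/(p+1)$, so $(f_4)$ holds with equality for $\mu = p+1 > 2$; the values $p = 3$ and $r = 1$ are arbitrary choices.

```python
import numpy as np

# Check (an illustration, not from the book) that f(s) = |s|^{p-1} s
# satisfies (f3) and the superlinearity condition (f4) with μ = p + 1.

p = 3.0
mu = p + 1.0

def f(s):
    return np.abs(s) ** (p - 1) * s

def F(s):
    return np.abs(s) ** (p + 1) / (p + 1)

s = np.linspace(-5, 5, 1001)
s = s[np.abs(s) >= 1.0]     # test (f4) on |s| >= r with r = 1

lhs = mu * F(s)             # μ F(x, s)
rhs = s * f(s)              # s f(x, s)
```

Here $\mu F(s) = |s|^{p+1} = s\,f(s)$, so $(f_4)$ holds with equality; any $2 < \mu \le p+1$ would work as well.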
Theorem 2.4.6 (Rabinowitz [Ra]) If $f$ satisfies conditions $(f_1)$-$(f_4)$, then problem (2.31) possesses a nontrivial weak solution.
To find weak solutions of the problem, we seek minimax critical points of the functional
$$J(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx - \int_\Omega F(x,u)\,dx$$
on the Hilbert space $H := H^1_0(\Omega)$. We verify that $J$ satisfies the conditions of the Mountain Pass Theorem.
We first need to show that $J\in C^1(H, R^1)$. Since we have already verified that the first term
$$\int_\Omega |Du|^2\,dx$$
is continuously differentiable, we only need to check the second term
$$I(u) := \int_\Omega F(x,u)\,dx.$$
Proposition 2.4.1 (Rabinowitz [Ra]) Let $\Omega\subset R^n$ be a bounded domain and let $g$ satisfy
$(g_1)$ $g\in C(\bar\Omega\times R^1, R^1)$, and
$(g_2)$ there are constants $r, s\ge 1$ and $a_1, a_2\ge 0$ such that
$$|g(x,\xi)| \le a_1 + a_2|\xi|^{r/s}$$
for all $x\in\bar\Omega$, $\xi\in R^1$.
Then the map $u(x)\to g(x, u(x))$ belongs to $C(L^r(\Omega), L^s(\Omega))$.
Proof. By $(g_2)$,
$$\int_\Omega |g(x,u(x))|^s\,dx \le \int_\Omega (a_1 + a_2|u|^{r/s})^s\,dx \le a_3\int_\Omega (1 + |u|^r)\,dx.$$
It follows that if $u\in L^r(\Omega)$, then $g(x,u)\in L^s(\Omega)$. That is,
$$g(x,\cdot): L^r(\Omega)\to L^s(\Omega).$$
Now we verify the continuity of the map. Fix $u\in L^r(\Omega)$. Given any $\varepsilon > 0$, we want to show that there is a $\delta > 0$ such that
$$\text{whenever } \|\phi\|_{L^r(\Omega)} < \delta, \text{ we have } \|g(\cdot, u+\phi) - g(\cdot, u)\|_{L^s(\Omega)} < \varepsilon. \qquad (2.32)$$
Let
$$\Omega_1 := \{x\in\bar\Omega \mid |\phi(x)|\le m\} \quad\text{and}\quad \Omega_2 := \bar\Omega\setminus\Omega_1.$$
Let
$$I_i = \int_{\Omega_i} |g(x, u(x)+\phi(x)) - g(x, u(x))|^s\,dx, \quad i = 1, 2.$$
By the continuity of $g(x,\cdot)$, for any $\eta > 0$ there exists $m > 0$ such that
$$|g(x, u(x)+\phi(x)) - g(x, u(x))| \le \eta, \quad \forall x\in\Omega_1.$$
It follows that
$$I_1 \le |\Omega_1|\,\eta^s \le |\Omega|\,\eta^s,$$
where $|\Omega|$ is the volume of $\Omega$. For the given $\varepsilon > 0$, we choose $\eta$, and then $m$, so that
$$|\Omega|\,\eta^s < \frac{\varepsilon^s}{2},$$
and therefore
$$I_1 \le \frac{\varepsilon^s}{2}. \qquad (2.33)$$
Now we fix this $m$ and estimate $I_2$.
By $(g_2)$, we have
$$I_2 \le \int_{\Omega_2}\big(c_1 + c_2(|u|^r + |\phi|^r)\big)\,dx \le c_1|\Omega_2| + c_2\int_{\Omega_2}|u|^r\,dx + c_2\|\phi\|^r_{L^r(\Omega)}, \qquad (2.34)$$
for some constants $c_1$ and $c_2$.
Moreover, since $\|\phi\|_{L^r(\Omega)} < \delta$,
$$\delta^r > \int_\Omega |\phi|^r\,dx \ge m^r|\Omega_2|. \qquad (2.35)$$
Since $u\in L^r(\Omega)$, we can make $\int_{\Omega_2}|u|^r\,dx$ as small as we wish by making $|\Omega_2|$ small. Now, by (2.35), we can choose $\delta$ so small that $|\Omega_2|$ is sufficiently small, and hence the right-hand side of (2.34) is less than $\frac{\varepsilon^s}{2}$. Consequently, by (2.33), for such a small $\delta$,
$$I_1 + I_2 \le \frac{\varepsilon^s}{2} + \frac{\varepsilon^s}{2} = \varepsilon^s.$$
This verifies (2.32), and therefore completes the proof of the Proposition. ⊓⊔
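The continuity just proved for the superposition (Nemytskii) map can be seen on a grid. The snippet below is an illustration, not from the book: it takes $g(x,\xi) = \xi^2$ with $r = 4$, $s = 2$ (so $|g(x,\xi)|\le|\xi|^{r/s}$), and checks that a small $L^4$ perturbation of $u$ produces a small $L^2$ change in $g(u)$; the choices of $u$, the perturbation shape, and the grid size are arbitrary.

```python
import numpy as np

# Numerical illustration (not from the book) of Proposition 2.4.1 with
# g(x, ξ) = ξ^2, r = 4, s = 2: the map u -> u^2 is continuous from
# L^4(0,1) into L^2(0,1).

N = 10000
x = (np.arange(N) + 0.5) / N
h = 1.0 / N

def Lnorm(v, q):
    return (np.sum(np.abs(v) ** q) * h) ** (1.0 / q)

u = np.sin(2 * np.pi * x)

def change(eps):
    phi = eps * np.cos(2 * np.pi * x)         # perturbation; ||phi||_{L^4} ~ eps
    return Lnorm((u + phi) ** 2 - u ** 2, 2)  # ||g(u+phi) - g(u)||_{L^2}

big, small = change(0.1), change(0.001)
```

Shrinking the perturbation by a factor of $100$ shrinks the $L^2$ change in $g(u)$ by essentially the same factor, consistent with (2.32).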
Proposition 2.4.2 (Rabinowitz [Ra]) Assume that $f(x,s)$ satisfies $(f_1)$ and $(f_2)$. Let $n\ge 3$. Then $I\in C^1(H, R)$ and
$$\langle I'(u), v\rangle = \int_\Omega f(x,u)\,v\,dx, \quad \forall v\in H.$$
Moreover, $I(\cdot)$ is weakly continuous and $I'(\cdot)$ is compact, i.e.
$$I(u_k)\to I(u) \quad\text{and}\quad I'(u_k)\to I'(u), \quad\text{whenever } u_k\rightharpoonup u \text{ in } H.$$
Proof. We will first show that $I$ is Fréchet differentiable on $H$ and then prove that $I'(u)$ is continuous.
1. Let $u, v\in H$. We want to show that, given any $\varepsilon > 0$, there exists a $\delta = \delta(\varepsilon, u)$ such that
$$Q(u,v) := \Big|I(u+v) - I(u) - \int_\Omega f(x,u)\,v\,dx\Big| \le \varepsilon\|v\| \qquad (2.36)$$
whenever $\|v\|\le\delta$. Here $\|v\|$ denotes the $H^1(\Omega)$ norm of $v$.
In fact,
$$Q(u,v) \le \int_\Omega |F(x, u(x)+v(x)) - F(x, u(x)) - f(x, u(x))v(x)|\,dx$$
$$= \int_\Omega |f(x, u(x)+\xi(x)) - f(x, u(x))|\,|v(x)|\,dx$$
$$\le \|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,\|v\|_{L^{\frac{2n}{n-2}}(\Omega)}$$
$$\le \|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,K\|v\|. \qquad (2.37)$$
Here we have applied the Mean Value Theorem (with $|\xi(x)|\le|v(x)|$), the Hölder inequality, and the Sobolev inequality
$$\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \le K\|v\|.$$
In Proposition 2.4.1, let $r = \frac{2n}{n-2}$ and $s = \frac{2n}{n+2}$. Then, by $(f_2)$, we deduce that the map
$$u(x)\to f(x, u(x))$$
is continuous from $L^{\frac{2n}{n-2}}(\Omega)$ to $L^{\frac{2n}{n+2}}(\Omega)$. Hence for any given $\varepsilon > 0$, we can choose a sufficiently small $\delta > 0$ such that, whenever $\|v\| < \delta$, we have $\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} < K\delta$, hence $\|\xi\|_{L^{\frac{2n}{n-2}}(\Omega)} < K\delta$, and therefore
$$\|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)} < \frac{\varepsilon}{K}.$$
It follows from (2.37) that
$$Q(u,v) < \varepsilon\|v\|, \quad\text{whenever } \|v\| < \delta.$$
This verifies (2.36); hence $I(\cdot)$ is differentiable, and
$$\langle I'(u), v\rangle = \int_\Omega f(x, u(x))\,v(x)\,dx. \qquad (2.38)$$
2. To verify the continuity of $I'(\cdot)$, we show that, for each fixed $u$,
$$\|I'(u+\phi) - I'(u)\| \to 0, \quad\text{as } \|\phi\|\to 0. \qquad (2.39)$$
By definition,
$$\|I'(u+\phi) - I'(u)\| = \sup_{\|v\|\le 1}\langle I'(u+\phi) - I'(u), v\rangle$$
$$= \sup_{\|v\|\le 1}\int_\Omega [f(x, u(x)+\phi(x)) - f(x, u(x))]\,v(x)\,dx$$
$$\le \sup_{\|v\|\le 1}\|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,K\|v\|$$
$$\le K\,\|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \qquad (2.40)$$
Here we have applied (2.38), the Hölder inequality, and the Sobolev inequality. Now (2.39) follows again from Proposition 2.4.1. Therefore $I'(\cdot)$ is continuous.
3. Finally, we verify the weak continuity of $I(u)$ and the compactness of $I'(u)$. Assume that
$$\{u_k\}\subset H, \quad\text{and}\quad u_k\rightharpoonup u \text{ in } H.$$
We show that
$$I(u_k)\to I(u) \quad\text{as } k\to\infty.$$
In fact,
$$|I(u_k) - I(u)| \le \int_\Omega |F(x, u_k(x)) - F(x, u(x))|\,dx \le \int_\Omega |f(x, \xi_k(x))|\,|u_k(x) - u(x)|\,dx$$
$$\le \|f(\cdot, \xi_k)\|_{L^r(\Omega)}\,\|u_k - u\|_{L^q(\Omega)} \le C\,\|\xi_k\|^p_{L^{\frac{2n}{n-2}}(\Omega)}\,\|u_k - u\|_{L^q(\Omega)}. \qquad (2.41)$$
Here we have applied the Mean Value Theorem and the Hölder inequality, with $\xi_k(x)$ between $u_k(x)$ and $u(x)$, and
$$r = \frac{2n}{p(n-2)}, \quad q = \frac{r}{r-1} = \frac{2n}{2n - p(n-2)}.$$
It is easy to see that
$$q < \frac{2n}{n-2}.$$
Since a weakly convergent sequence is bounded, $\{u_k\}$ is bounded in $H$, and hence, by the Sobolev Embedding, it is bounded in $L^{\frac{2n}{n-2}}(\Omega)$; so is $\{\xi_k\}$. Furthermore, by the Compact Embedding of $H$ into $L^q(\Omega)$, the last term of (2.41) approaches zero as $k\to\infty$. This verifies the weak continuity of $I(\cdot)$.
Using a similar argument as in deriving (2.40), we obtain
$$\|I'(u_k) - I'(u)\| \le K\,\|f(\cdot, u_k) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \qquad (2.42)$$
Since $p < \frac{n+2}{n-2}$, we have $\frac{2np}{n+2} < \frac{2n}{n-2}$. Then, by the Compact Embedding,
$$H \hookrightarrow\hookrightarrow L^{\frac{2np}{n+2}}(\Omega),$$
we derive
$$u_k\to u \quad\text{strongly in } L^{\frac{2np}{n+2}}(\Omega).$$
Consequently, from Proposition 2.4.1,
$$f(\cdot, u_k)\to f(\cdot, u) \quad\text{strongly in } L^{\frac{2n}{n+2}}(\Omega).$$
Now, together with (2.42), we arrive at
$$I'(u_k)\to I'(u) \quad\text{as } k\to\infty.$$
This completes the proof of the Proposition. ⊓⊔
Now let's come back to Problem (2.31) and prove Theorem 2.4.6. We will verify that $J(\cdot)$ satisfies the conditions of the Mountain Pass Theorem, and thus possesses a nontrivial minimax critical point, which is a weak solution of (2.31).
First we verify the (PS) condition. Assume that $\{u_k\}$ is a sequence in $H$ satisfying
$$|J(u_k)| \le M \quad\text{and}\quad J'(u_k)\to 0.$$
We show that $\{u_k\}$ possesses a convergent subsequence.
By $(f_4)$, we have
$$\mu M + \|J'(u_k)\|\,\|u_k\| \ge \mu J(u_k) - \langle J'(u_k), u_k\rangle$$
$$= \Big(\frac{\mu}{2} - 1\Big)\int_\Omega |Du_k|^2\,dx + \int_\Omega \big[f(x, u_k)u_k - \mu F(x, u_k)\big]\,dx$$
$$\ge \Big(\frac{\mu}{2} - 1\Big)\int_\Omega |Du_k|^2\,dx + \int_{\{|u_k(x)|\le r\}} \big[f(x, u_k)u_k - \mu F(x, u_k)\big]\,dx$$
$$\ge \Big(\frac{\mu}{2} - 1\Big)\int_\Omega |Du_k|^2\,dx - (a_1 r + a_2 r^{p+1})|\Omega| = c_o\|u_k\|^2 - C_o, \qquad (2.43)$$
for some positive constants $c_o$ and $C_o$. Since $\|J'(u_k)\|$ is bounded, (2.43) implies that $\{u_k\}$ must be bounded in $H$. Hence there exists a subsequence (still denoted by $\{u_k\}$) which converges weakly to some element $u_o$ in $H$.
Let $A: H\to H^*$ be the duality map between $H$ and its dual space $H^*$. Then for any $u, v\in H$,
$$\langle Au, v\rangle = \int_\Omega Du\cdot Dv\,dx.$$
Consequently,
$$A^{-1}J'(u) = u - A^{-1}f(\cdot, u). \qquad (2.44)$$
From Proposition 2.4.2,
$$f(\cdot, u_k)\to f(\cdot, u_o) \quad\text{strongly in } H^*.$$
It follows that
$$u_k = A^{-1}J'(u_k) + A^{-1}f(\cdot, u_k) \to A^{-1}f(\cdot, u_o), \quad\text{as } k\to\infty.$$
This verifies (PS).
To see that there is a 'mountain range' surrounding the origin, we estimate $\int_\Omega F(x,u)\,dx$. By $(f_3)$, for any given $\varepsilon > 0$ there is a $\delta > 0$ such that, for all $x\in\bar\Omega$,
$$|F(x,s)| \le \varepsilon|s|^2, \quad\text{whenever } |s| < \delta. \qquad (2.45)$$
While by $(f_2)$, there is a constant $M = M(\delta)$ such that, for all $x\in\bar\Omega$ and $|s|\ge\delta$,
$$|F(x,s)| \le M|s|^{p+1}. \qquad (2.46)$$
Combining (2.45) and (2.46), we have
$$|F(x,s)| \le \varepsilon|s|^2 + M|s|^{p+1}, \quad\text{for all } x\in\bar\Omega \text{ and all } s\in R.$$
It follows, via the Poincaré and Sobolev inequalities, that
$$\Big|\int_\Omega F(x,u)\,dx\Big| \le \varepsilon\int_\Omega u^2\,dx + M\int_\Omega |u|^{p+1}\,dx \le C(\varepsilon + M\|u\|^{p-1})\|u\|^2. \qquad (2.47)$$
Consequently,
$$J(u) \ge \Big[\frac{1}{2} - C(\varepsilon + M\|u\|^{p-1})\Big]\|u\|^2. \qquad (2.48)$$
Choose $\varepsilon = \frac{1}{8C}$. For this $\varepsilon$, we fix $M$; then choose $\|u\| = \rho$ small, so that
$$CM\rho^{p-1} \le \frac{1}{8}.$$
Then from (2.48) we deduce
$$J\big|_{\partial B_\rho(0)} \ge \frac{1}{4}\rho^2.$$
To see the existence of a point $e\in H$ such that $J(e)\le 0$, we fix any $u\ne 0$ in $H$ and consider
$$J(tu) = \frac{t^2}{2}\int_\Omega |Du|^2\,dx - \int_\Omega F(x, tu)\,dx. \qquad (2.49)$$
By $(f_4)$, we have, for $|s|\ge r$,
$$\frac{d}{ds}\big(|s|^{-\mu}F(x,s)\big) = |s|^{-\mu-2}\,s\,\big(-\mu F(x,s) + s f(x,s)\big) \begin{cases} \ge 0 & \text{if } s\ge r,\\ \le 0 & \text{if } s\le -r.\end{cases}$$
In either case, we deduce, for $|s|\ge r$,
$$F(x,s) \ge a_3|s|^\mu;$$
and it follows that, for all $s$,
$$F(x,s) \ge a_3|s|^\mu - a_4. \qquad (2.50)$$
Combining (2.49) and (2.50), and taking into account that $\mu > 2$, we conclude that
$$J(tu)\to -\infty, \quad\text{as } t\to\infty.$$
Now $J$ satisfies all the conditions of the Mountain Pass Theorem; hence it possesses a minimax critical point $u_o$, which is a weak solution of (2.31). Moreover, $J(u_o) > 0$, so it is a nontrivial solution. This completes the proof of Theorem 2.4.6. ⊓⊔
3
Regularity of Solutions

3.1 $W^{2,p}$ a Priori Estimates
3.1.1 Newtonian Potentials
3.1.2 Uniform Elliptic Equations
3.2 $W^{2,p}$ Regularity of Solutions
3.2.1 The Case $p\ge 2$
3.2.2 The Case $1 < p < 2$
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity
3.3 Regularity Lifting
3.3.1 Bootstraps
3.3.2 The Regularity Lifting Theorem
3.3.3 Applications to PDEs
3.3.4 Applications to Integral Equations
In the previous chapter, we used functional analysis, mainly the calculus of variations, to establish the existence of weak solutions for second order linear or semilinear elliptic equations. The weak solutions we obtained were in the Sobolev space $W^{1,2}$, and hence, by Sobolev embedding, in the $L^p$ space for $p\le\frac{2n}{n-2}$.
Usually, weak solutions are easier to obtain than classical ones, and this is particularly true for nonlinear equations. However, in practice, we are often required to find classical solutions. Therefore, one would naturally want to know whether these weak solutions are actually differentiable, so that they can become classical ones. These questions will be answered here.
In this chapter, we will introduce methods to show that, in most cases, a weak solution is in fact smooth. This is called the regularity argument. Very often, the regularity is equivalent to an a priori estimate of solutions. To see the difference between regularity and the a priori estimate, take the equation
$$-\Delta u = f(x)$$
as an example. Roughly speaking, the $W^{2,p}$ regularity theory asserts:

If $u\in W^{1,p}$ is a weak solution and $f\in L^p$, then $u\in W^{2,p}$.

While the $W^{2,p}$ a priori estimate says:

If $u\in W^{1,p}_0\cap W^{2,p}$ is a weak solution and $f\in L^p$, then
$$\|u\|_{W^{2,p}} \le C\|f\|_{L^p}.$$

In the a priori estimate, we pre-assume that $u$ is in $W^{2,p}$. It might seem a little surprising at this moment that a priori estimates turn out to be powerful tools in deriving regularity. The reader will see in Section 3.2 how a priori estimates and the uniqueness of solutions lead to regularity.
Let $\Omega$ be an open bounded set in $R^n$. Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. Consider the second order partial differential equation
$$Lu := -\sum_{i,j=1}^n a_{ij}(x)\,u_{x_ix_j} + \sum_{i=1}^n b_i(x)\,u_{x_i} + c(x)\,u = f(x). \qquad (3.1)$$
We assume $L$ is uniformly elliptic, that is, there exists a constant $\delta > 0$ such that
$$\sum_{i,j=1}^n a_{ij}(x)\,\xi_i\xi_j \ge \delta|\xi|^2$$
for a.e. $x\in\Omega$ and for all $\xi\in R^n$.
In Section 3.1, we establish the $W^{2,p}$ a priori estimate for the solutions of (3.1). We show that if the function $f(x)$ is in $L^p(\Omega)$, and $u\in W^{2,p}(\Omega)$ is a strong solution of (3.1), then there exists a constant $C$ such that
$$\|u\|_{W^{2,p}} \le C(\|u\|_{L^p} + \|f\|_{L^p}).$$
We will first establish this for the Newtonian potential, then use the method of "frozen coefficients" to generalize it to uniformly elliptic equations.
In Section 3.2, we establish the $W^{2,p}$ regularity. We show that if $f(x)$ is in $L^p(\Omega)$, then any weak solution $u$ of (3.1) is in $W^{2,p}(\Omega)$.
In Section 3.3, we introduce and prove a Regularity Lifting Theorem. It provides a simple method for the study of regularity. The version we present here contains some new developments. It is much more general and very easy to use. We believe that the method will be helpful to both experts and non-experts in the field.
We will also use examples to show how this method can be applied to boost the regularity of solutions of PDEs as well as of integral equations.
3.1 $W^{2,p}$ a Priori Estimates

We establish the $W^{2,p}$ estimate for the solutions of
$$Lu = f(x), \quad x\in\Omega. \qquad (3.2)$$
First we start with the Newtonian potential.

3.1.1 Newtonian Potentials

Let $\Omega$ be a bounded domain in $R^n$. Let
$$\Gamma(x) = \frac{1}{n(n-2)\omega_n}\,\frac{1}{|x|^{n-2}}, \quad n\ge 3,$$
be the fundamental solution of the Laplace equation. The Newtonian potential of $f$ is
$$w(x) = \int_\Omega \Gamma(x-y)\,f(y)\,dy.$$
We prove

Theorem 3.1.1 Let $f\in L^p(\Omega)$ for $1 < p < \infty$, and let $w$ be the Newtonian potential of $f$. Then $w\in W^{2,p}(\Omega)$,
$$\Delta w = f(x), \quad \text{a.e. } x\in\Omega, \qquad (3.3)$$
and
$$\|D^2 w\|_{L^p(\Omega)} \le C\|f\|_{L^p(\Omega)}. \qquad (3.4)$$
Write $w = Nf$, where $N$ is obviously a linear operator. For fixed $i, j$, define the linear operator $T$ by
$$Tf = D_{ij}Nf = D_{ij}\int_{R^n}\Gamma(x-y)\,f(y)\,dy.$$
To prove Theorem 3.1.1, it is equivalent to show that
$$T: L^p(\Omega)\to L^p(\Omega) \text{ is a bounded linear operator.} \qquad (3.5)$$
Our proof can actually be applied to more general operators. To this end, we introduce the concepts of weak type and strong type operators.
Define
$$\mu_f(t) = |\{x\in\Omega \mid |f(x)| > t\}|.$$
For $p\ge 1$, the weak $L^p$ space $L^p_w(\Omega)$ is the collection of functions $f$ that satisfy
$$\|f\|^p_{L^p_w(\Omega)} = \sup\{\mu_f(t)\,t^p : t > 0\} < \infty.$$
An operator $T: L^p(\Omega)\to L^q(\Omega)$ is of strong type $(p,q)$ if
$$\|Tf\|_{L^q(\Omega)} \le C\|f\|_{L^p(\Omega)}, \quad \forall f\in L^p(\Omega).$$
$T$ is of weak type $(p,q)$ if
$$\|Tf\|_{L^q_w(\Omega)} \le C\|f\|_{L^p(\Omega)}, \quad \forall f\in L^p(\Omega).$$
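The canonical example separating the two scales is $f(x) = |x|^{-n/p}$, which lies in weak $L^p$ but not in $L^p$ near the singularity. The snippet below is a one-dimensional illustration, not from the book: for $f(x) = x^{-1/2}$ on $(0,1)$ the distribution function is $\mu_f(t) = \min(1, t^{-2})$, so $\sup_t t^2\mu_f(t) = 1$ while $\int_0^1 f^2\,dx$ diverges; the grid size is an arbitrary choice.

```python
import numpy as np

# Numerical illustration (not from the book): f(x) = x^{-1/2} on (0, 1)
# is in weak L^2 but not in L^2.

N = 100000
x = (np.arange(N) + 0.5) / N   # midpoint grid on (0, 1)
f = x ** (-0.5)
h = 1.0 / N

# t^2 μ_f(t) stays bounded by 1 (weak L^2 bound)
ts = np.linspace(0.5, 10.0, 96)
weak = [t**2 * np.sum(f > t) * h for t in ts]
```

The quantity $t^2\mu_f(t)$ never exceeds $1$, while the discretized $\int f^2\,dx \approx \ln N$ grows without bound as the grid is refined.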
Outline of the Proof of (3.5).
We decompose the proof into the proofs of the following four lemmas.
First, we use the Fourier transform to show

Lemma 3.1.1 $T: L^2(\Omega)\to L^2(\Omega)$ is a bounded linear operator, i.e. $T$ is of strong type $(2,2)$.

Secondly, we use the Calderón-Zygmund Decomposition Lemma to prove

Lemma 3.1.2 $T$ is of weak type $(1,1)$.

Thirdly, we employ the Marcinkiewicz Interpolation Theorem to derive

Lemma 3.1.3 $T$ is of strong type $(r,r)$ for any $1 < r\le 2$.

Finally, by duality, we conclude

Lemma 3.1.4 $T$ is of strong type $(p,p)$, for $1 < p < \infty$.
The Proof of Lemma 3.1.1.
First we consider $f\in C_0^\infty(\Omega)\subset C_0^\infty(R^n)$. Then obviously $w\in C^\infty(R^n)$ and satisfies
$$\Delta w = f(x), \quad \forall x\in R^n.$$
Let
$$\hat f(\xi) = \frac{1}{(2\pi)^{n/2}}\int_{R^n} e^{i\langle x,\xi\rangle}\,f(x)\,dx$$
be the Fourier transform of $f$, where
$$\langle x,\xi\rangle = \sum_{k=1}^n x_k\xi_k$$
is the inner product in $R^n$, and $i = \sqrt{-1}$.
We need the following simple property of the transform,
$$\widehat{D_k f}(\xi) = -i\xi_k\,\hat f(\xi), \quad\text{and}\quad \widehat{D_{jk} f}(\xi) = -\xi_j\xi_k\,\hat f(\xi), \qquad (3.6)$$
and the well-known Plancherel Theorem:
$$\|\hat f\|_{L^2(R^n)} = \|f\|_{L^2(R^n)}. \qquad (3.7)$$
It follows from (3.6) and (3.7) that
$$\int_\Omega |f(x)|^2\,dx = \int_{R^n}|f(x)|^2\,dx = \int_{R^n}|\Delta w(x)|^2\,dx = \int_{R^n}|\widehat{\Delta w}(\xi)|^2\,d\xi = \int_{R^n}|\xi|^4\,|\hat w(\xi)|^2\,d\xi$$
$$= \sum_{k,j=1}^n\int_{R^n}\xi_k^2\xi_j^2\,|\hat w(\xi)|^2\,d\xi = \sum_{k,j=1}^n\int_{R^n}|\widehat{D_{kj}w}(\xi)|^2\,d\xi = \sum_{k,j=1}^n\int_{R^n}|D_{kj}w(x)|^2\,dx = \int_{R^n}|D^2 w|^2\,dx.$$
Consequently,
$$\|Tf\|_{L^2(\Omega)} \le \|f\|_{L^2(\Omega)}, \quad \forall f\in C_0^\infty(\Omega). \qquad (3.8)$$
Now, to verify (3.8) for any $f\in L^2(\Omega)$, we simply pick a sequence $\{f_k\}\subset C_0^\infty(\Omega)$ that converges to $f$ in $L^2(\Omega)$, and take the limit. This proves the Lemma. ⊓⊔
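The Plancherel computation above rests on the pointwise identity $|\xi|^4 = \sum_{k,j}\xi_k^2\xi_j^2$, which turns $\int|\Delta w|^2$ into $\sum_{k,j}\int|D_{kj}w|^2$. The following snippet is an illustration, not from the book: it verifies the identity for a smooth periodic function on a two-dimensional grid, computing all derivatives spectrally with the FFT (the test function and grid size are arbitrary choices).

```python
import numpy as np

# Spectral check (an illustration, not from the book) of the identity
# ∫|Δw|^2 = Σ_{k,j} ∫|D_{kj}w|^2 for a smooth periodic w, via the FFT.

N = 64
x = 2 * np.pi * np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.sin(3 * X) * np.cos(2 * Y) + np.cos(X + Y)

wh = np.fft.fft2(w)
k = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies on the torus
KX, KY = np.meshgrid(k, k, indexing="ij")

lap = np.fft.ifft2(-(KX**2 + KY**2) * wh).real   # Δw
lhs = np.sum(lap**2)                              # discrete ∫|Δw|^2

rhs = 0.0
for ka, kb in [(KX, KX), (KX, KY), (KY, KX), (KY, KY)]:
    d2 = np.fft.ifft2(-(ka * kb) * wh).real       # second derivative D_{ab}w
    rhs += np.sum(d2**2)                          # Σ_{a,b} discrete ∫|D_{ab}w|^2
```

The two sums agree to machine precision, which is exactly the chain of equalities used in the proof.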
The Proof of Lemma 3.1.2.
We need the following well-known Calderón-Zygmund Decomposition Lemma (see its proof in Appendix C).

Lemma 3.1.5 For $f\in L^1(R^n)$ and fixed $\alpha > 0$, there exist $E$, $G$ such that
(i) $R^n = E\cup G$, $E\cap G = \emptyset$;
(ii) $|f(x)|\le\alpha$, a.e. $x\in E$;
(iii) $G = \bigcup_{k=1}^\infty Q_k$, where $\{Q_k\}$ are disjoint cubes such that
$$\alpha < \frac{1}{|Q_k|}\int_{Q_k}|f(x)|\,dx \le 2^n\alpha.$$

For any $f\in L^1(\Omega)$, to apply the Calderón-Zygmund Decomposition Lemma, we first extend $f$ to vanish outside $\Omega$. For any given $\alpha > 0$, fix a large cube $Q_o$ in $R^n$ such that
$$\int_{Q_o}|f(x)|\,dx \le \alpha|Q_o|.$$
We show that
$$\mu_{Tf}(\alpha) := |\{x\in R^n \mid |Tf(x)| \ge \alpha\}| \le C\,\frac{\|f\|_{L^1(\Omega)}}{\alpha}. \qquad (3.9)$$
Split the function $f$ into a "good" part $g$ and a "bad" part $b$: $f = g + b$, where
$$g(x) = \begin{cases} f(x) & \text{for } x\in E,\\[2pt] \dfrac{1}{|Q_k|}\displaystyle\int_{Q_k} f(x)\,dx & \text{for } x\in Q_k,\ k = 1, 2, \dots\end{cases}$$
Since the operator $T$ is linear, $Tf = Tg + Tb$, and therefore
$$\mu_{Tf}(\alpha) \le \mu_{Tg}\Big(\frac{\alpha}{2}\Big) + \mu_{Tb}\Big(\frac{\alpha}{2}\Big).$$
We will estimate $\mu_{Tg}(\frac{\alpha}{2})$ and $\mu_{Tb}(\frac{\alpha}{2})$ separately. The estimate of the first one is easy, because $g\in L^2$. To estimate $\mu_{Tb}(\frac{\alpha}{2})$, we divide $R^n$ into two parts, $G^*$ and $E^* := R^n\setminus G^*$ (see below for the precise definition of $G^*$). We will show that
(a) $|G^*| \le \dfrac{C}{\alpha}\|f\|_{L^1(\Omega)}$, and
(b) $\Big|\Big\{x\in E^* \;\Big|\; |Tb(x)|\ge\dfrac{\alpha}{2}\Big\}\Big| \le \dfrac{C}{\alpha}\displaystyle\int_{E^*}|Tb(x)|\,dx \le \dfrac{C}{\alpha}\|f\|_{L^1(\Omega)}$.
These will imply the desired estimate for $\mu_{Tb}(\frac{\alpha}{2})$.
Obviously, from the definition of $g$, we have
$$|g(x)| \le 2^n\alpha, \quad\text{almost everywhere}, \qquad (3.10)$$
and
$$b(x) = 0 \text{ for } x\in E, \quad\text{and}\quad \int_{Q_k} b(x)\,dx = 0 \text{ for } k = 1, 2, \dots \qquad (3.11)$$
We first estimate $\mu_{Tg}$. By Lemma 3.1.1 and (3.10), we derive
$$\mu_{Tg}\Big(\frac{\alpha}{2}\Big) \le \frac{4}{\alpha^2}\int g^2(x)\,dx \le \frac{2^{n+2}}{\alpha}\int |g(x)|\,dx \le \frac{2^{n+2}}{\alpha}\int |f(x)|\,dx. \qquad (3.12)$$
We then estimate $\mu_{Tb}$. Let
$$b_k(x) = \begin{cases} b(x) & \text{for } x\in Q_k,\\ 0 & \text{elsewhere}.\end{cases}$$
Then
$$Tb = \sum_{k=1}^\infty Tb_k.$$
For each fixed $k$, let $\{b_{km}\}\subset C_0^\infty(Q_k)$ be a sequence converging to $b_k$ in $L^2(\Omega)$ and satisfying
$$\int_{Q_k} b_{km}(x)\,dx = \int_{Q_k} b_k(x)\,dx = 0. \qquad (3.13)$$
From the expression
$$Tb_{km}(x) = \int_{Q_k} D_{ij}\Gamma(x-y)\,b_{km}(y)\,dy,$$
one can see that, due to the singularity of $D_{ij}\Gamma(x-y)$ in $Q_k$ and the fact that $b_{km}$ may not be bounded in $Q_k$, one can only estimate $Tb_{km}(x)$ when $x$ is a positive distance away from $Q_k$. For this reason, we cover $Q_k$ by a bigger ball $B_k$ which has the same center as $Q_k$ and whose radius $\delta_k$ equals the diameter of $Q_k$. We now estimate the integral on the complement of $B_k$:
$$\int_{R^n\setminus B_k}|Tb_{km}(x)|\,dx = \int_{Q_o\setminus B_k}\Big|\int_{Q_k} D_{ij}\Gamma(x-y)\,b_{km}(y)\,dy\Big|\,dx$$
$$= \int_{Q_o\setminus B_k}\Big|\int_{Q_k}\big[D_{ij}\Gamma(x-y) - D_{ij}\Gamma(x-\bar y)\big]\,b_{km}(y)\,dy\Big|\,dx$$
$$\le C\,\delta_k\int_{Q_o\setminus B_k}\frac{dx}{|x-\bar y|^{n+1}}\ \int_{Q_k}|b_{km}(y)|\,dy \le C_1\,\delta_k\int_{\delta_k}^\infty \frac{1}{r^2}\,dr\ \int_{Q_k}|b_{km}(y)|\,dy$$
$$\le C_2\int_{Q_k}|b_{km}(y)|\,dy, \qquad (3.14)$$
where $\bar y$ is the center of the cube $Q_k$. One small trick here is to subtract the term (which is $0$ by (3.13))
$$\int_{Q_k} D_{ij}\Gamma(x-\bar y)\,b_{km}(y)\,dy,$$
so as to produce a helpful factor $\delta_k$ by applying the Mean Value Theorem to the difference:
$$|D_{ij}\Gamma(x-y) - D_{ij}\Gamma(x-\bar y)| = |(y-\bar y)\cdot D(D_{ij}\Gamma)(x-\xi)| \le \delta_k\,|D(D_{ij}\Gamma)(x-\xi)|.$$
Now letting $m\to\infty$ in (3.14), we obtain
$$\int_{R^n\setminus B_k}|Tb_k(x)|\,dx \le C\int_{Q_k}|b_k(y)|\,dy.$$
Let
$$G^* = \bigcup_{k=1}^\infty B_k \quad\text{and}\quad E^* = R^n\setminus G^*.$$
It follows that
$$\int_{E^*}|Tb(x)|\,dx \le \sum_{k=1}^\infty\int_{R^n\setminus G^*}|Tb_k|\,dx \le \sum_{k=1}^\infty\int_{R^n\setminus B_k}|Tb_k|\,dx \le C\sum_{k=1}^\infty\int_{Q_k}|b_k(y)|\,dy \le C\int_{R^n}|f(x)|\,dx. \qquad (3.15)$$
Obviously,
$$\mu_{Tb}\Big(\frac{\alpha}{2}\Big) \le |G^*| + \Big|\Big\{x\in E^* \;\Big|\; |Tb(x)|\ge\frac{\alpha}{2}\Big\}\Big|. \qquad (3.16)$$
By (iii) of the Calderón-Zygmund Decomposition Lemma, we have
$$|G^*| \le \sum_{k=1}^\infty |B_k| = C\sum_{k=1}^\infty |Q_k| \le \frac{C}{\alpha}\sum_{k=1}^\infty\int_{Q_k}|f(x)|\,dx = \frac{C}{\alpha}\int_{R^n}|f(x)|\,dx. \qquad (3.17)$$
Write
$$E^*_\alpha = \Big\{x\in E^* \;\Big|\; |Tb(x)|\ge\frac{\alpha}{2}\Big\}.$$
Then, by (3.15), we derive
$$|E^*_\alpha|\,\frac{\alpha}{2} \le \int_{E^*_\alpha}|Tb(x)|\,dx \le \int_{E^*}|Tb(x)|\,dx \le C\int_{R^n}|f(x)|\,dx. \qquad (3.18)$$
Now the desired inequality (3.9) is a direct consequence of (3.12), (3.16), (3.17), and (3.18). This completes the proof of the Lemma. ⊓⊔
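The stopping-time construction behind Lemma 3.1.5 is easy to carry out in one dimension. The sketch below is an illustration, not from the book: starting from a dyadic interval on which the average of $|f|$ is at most $\alpha$, it bisects repeatedly and selects an interval the first time its average exceeds $\alpha$; since the parent average was $\le\alpha$, every selected interval $Q$ satisfies $\alpha < \mathrm{avg}_Q|f|\le 2\alpha$ (the factor $2^n$ with $n = 1$), and $|f|\le\alpha$ on the unselected cells. The sample data are arbitrary.

```python
# 1-D sketch (an illustration, not from the book) of the Calderón-Zygmund
# decomposition at height alpha, by repeated dyadic bisection.

def cz_decompose(values, alpha):
    """values: |f| sampled as constants on 2^k equal cells of [0, 1)."""
    selected = []   # index ranges (lo, hi) playing the role of the cubes Q_k

    def avg(lo, hi):
        return sum(values[lo:hi]) / (hi - lo)

    def split(lo, hi):
        if avg(lo, hi) > alpha:
            selected.append((lo, hi))   # stop: the parent had average <= alpha
            return
        if hi - lo == 1:
            return                      # finest cell; here |f| <= alpha
        mid = (lo + hi) // 2
        split(lo, mid)
        split(mid, hi)

    assert avg(0, len(values)) <= alpha, "need small average on the big cube"
    split(0, len(values))
    return selected

vals = [0.1, 0.2, 4.0, 0.3, 0.1, 0.4, 0.2, 0.1]   # a spike in cell 2
cubes = cz_decompose(vals, alpha=1.0)
```

The spike is swallowed by a single selected cube whose average lies in $(\alpha, 2\alpha]$, while all cells outside the selection obey $|f|\le\alpha$, exactly properties (ii) and (iii).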
The proof of Lemma 3.1.3. In the previous lemmas, we have shown that
the operator T is of weak type (1, 1) and strong type (2, 2) (of course also weak
type (2, 2)). Now Lemma 3.1.3 is a direct consequence of the Marcinkiewicz
interpolation theorem in the following restricted form:
Lemma 3.1.6 Let $T$ be a linear operator from $L^p(\Omega)\cap L^q(\Omega)$ into itself with $1\le p<q<\infty$. If $T$ is of weak type $(p,p)$ and weak type $(q,q)$, then for any $p<r<q$, $T$ is of strong type $(r,r)$. More precisely, if there exist constants $B_p$ and $B_q$, such that
$$\mu_{Tf}(t)\le\Big(\frac{B_p\|f\|_p}{t}\Big)^p \quad\text{and}\quad \mu_{Tf}(t)\le\Big(\frac{B_q\|f\|_q}{t}\Big)^q, \quad \forall f\in L^p(\Omega)\cap L^q(\Omega),$$
then
$$\|Tf\|_r \le C\,B_p^{\theta}B_q^{1-\theta}\,\|f\|_r, \quad \forall f\in L^p(\Omega)\cap L^q(\Omega),$$
where
$$\frac{1}{r}=\frac{\theta}{p}+\frac{1-\theta}{q}$$
and $C$ depends only on $p$, $q$, and $r$.
The proof of this lemma can be found in Appendix C.
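The interpolation exponent $\theta$ is determined by the relation $\frac{1}{r}=\frac{\theta}{p}+\frac{1-\theta}{q}$ alone. A small sketch (illustrative only, not part of the proof; the function name is ours) that solves for $\theta$ and checks it lies in $(0,1)$ whenever $p<r<q$:

```python
def interpolation_theta(p, q, r):
    """Solve 1/r = theta/p + (1 - theta)/q for theta (requires p < r < q)."""
    theta = (1 / r - 1 / q) / (1 / p - 1 / q)
    assert 0 < theta < 1, "p < r < q is needed for interpolation"
    return theta

# the weak (1,1) / weak (2,2) situation used above, with r = 3/2
theta = interpolation_theta(1, 2, 1.5)
```

For the situation of Lemma 3.1.3 ($p=1$, $q=2$), every $1<r<2$ produces a $\theta$ strictly between 0 and 1, which is what makes the strong $(r,r)$ bound available.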
The proof of Lemma 3.1.4. From the previous lemma, we know that, for any $1<r\le 2$, we have
$$\|Tg\|_{L^r(\Omega)} \le C_r\|g\|_{L^r(\Omega)}. \qquad (3.19)$$
Let
$$\langle f,g\rangle = \int_{\Omega}f(x)g(x)\,dx$$
be the duality between $f$ and $g$. Then it is easy to verify that
$$\langle g,Tf\rangle = \langle Tg,f\rangle. \qquad (3.20)$$
Given any $2<p<\infty$, let $r=\frac{p}{p-1}$, i.e. $\frac{1}{r}+\frac{1}{p}=1$. Obviously, $1<r<2$. It follows from (3.19) and (3.20) that
$$\begin{aligned}
\|Tf\|_{L^p} &= \sup_{\|g\|_{L^r}=1}\langle g,Tf\rangle = \sup_{\|g\|_{L^r}=1}\langle Tg,f\rangle\\
&\le \sup_{\|g\|_{L^r}=1}\|f\|_{L^p}\|Tg\|_{L^r} \le C_r\|f\|_{L^p}.
\end{aligned}$$
This completes the proof of the Lemma.
3.1.2 Uniformly Elliptic Equations

In this section, we consider general second order uniformly elliptic equations with the Dirichlet boundary condition
$$\begin{cases}
Lu := -\sum_{i,j=1}^{n}a_{ij}(x)u_{x_ix_j} + \sum_{i=1}^{n}b_i(x)u_{x_i} + c(x)u = f(x), & x\in\Omega\\
u(x)=0, & x\in\partial\Omega.
\end{cases} \qquad (3.21)$$
We assume that $\Omega$ is bounded with $C^{2,\alpha}$ boundary, $a_{ij}(x)\in C^0(\bar\Omega)$, $b_i(x)\in L^\infty(\Omega)$, $c(x)\in L^\infty(\Omega)$, and
$$\lambda|\xi|^2 \le a_{ij}\xi_i\xi_j \le \Lambda|\xi|^2$$
for some positive constants $\lambda$ and $\Lambda$.
Definition 3.1.1 We say that $u$ is a strong solution of
$$Lu = f(x), \quad x\in\Omega$$
if $u$ is twice weakly differentiable in $\Omega$ and satisfies the equation almost everywhere in $\Omega$.
Based on the results of the previous section (the estimates on the Newtonian potentials), we will establish a priori estimates for strong solutions of (3.21).
Theorem 3.1.2 Let $u\in W^{2,p}(\Omega)\cap W^{1,2}_0(\Omega)$ be a strong solution of (3.21). Assume that $f\in L^p(\Omega)$. Then
$$\|u\|_{W^{2,p}(\Omega)} \le C\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big) \qquad (3.22)$$
where $C$ is a constant depending on $\|b_i\|_{L^\infty(\Omega)}$, $\|c\|_{L^\infty(\Omega)}$, $\lambda$, $\Lambda$, $n$, $p$, and the domain $\Omega$.
Remark 3.1.1 The conditions
$$b_i(x),\,c(x)\in L^\infty(\Omega)$$
can be replaced by the weaker ones
$$b_i(x),\,c(x)\in L^p(\Omega), \quad\text{for any } p>n.$$
Proof of Theorem 3.1.2.
We divide the proof into two major parts.

Part 1. Interior Estimate
$$\|\nabla^2 u\|_{L^p(K)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big) \qquad (3.23)$$
where $K$ is any compact subset of $\Omega$.

Part 2. Boundary Estimate
$$\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big) \qquad (3.24)$$
where $\Omega_\delta = \{x\in\Omega \mid d(x,\partial\Omega) > \delta\}$.
Combining the interior and the boundary estimates, we obtain
$$\begin{aligned}
\|u\|_{W^{2,p}(\Omega)} &\le \|u\|_{W^{2,p}(\Omega\setminus\Omega_{2\delta})} + \|u\|_{W^{2,p}(\Omega_\delta)}\\
&\le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \qquad (3.25)
\end{aligned}$$
Theorem 3.1.2 is then a simple consequence of (3.25) and the following well-known Gagliardo-Nirenberg type interpolation estimate:
$$\|\nabla u\|_{L^p(\Omega)} \le C\,\|u\|^{1/2}_{L^p(\Omega)}\,\|\nabla^2 u\|^{1/2}_{L^p(\Omega)} \le \epsilon\,\|\nabla^2 u\|_{L^p(\Omega)} + \frac{C}{4\epsilon}\,\|u\|_{L^p(\Omega)}.$$
Substituting this estimate of $\|\nabla u\|_{L^p(\Omega)}$ into our inequality (3.25), we get
$$\|u\|_{W^{2,p}(\Omega)} \le C\epsilon\,\|\nabla^2 u\|_{L^p(\Omega)} + C\Big(\frac{C}{4\epsilon}\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\Big).$$
Choosing $\epsilon<\frac{1}{2C}$, we can then absorb the term $\|\nabla^2 u\|_{L^p(\Omega)}$ on the right hand side into the left hand side and arrive at the conclusion of our theorem:
$$\|u\|_{W^{2,p}(\Omega)} \le C\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \qquad (3.26)$$
For both the interior and the boundary estimates, the main idea is the well-known method of "frozen" coefficients. Locally, near a point $x^o$, we may regard the leading coefficients $a_{ij}(x)$ of the operator approximately as the constants $a_{ij}(x^o)$ (as if the functions $a_{ij}$ were frozen at the point $x^o$), and thus we are able to treat the operator as one with constant coefficients and apply the potential estimates derived in the previous section.
Part 1: Interior Estimates.
First we define the cutoff function $\phi(s)$ to be a compactly supported $C^\infty$ function with
$$\phi(s)=1 \;\text{ if } s\le 1, \qquad \phi(s)=0 \;\text{ if } s\ge 2.$$
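Such a cutoff function indeed exists; one standard explicit construction (a common choice, not necessarily the one the authors have in mind) is

```latex
\rho(t)=\begin{cases} e^{-1/t}, & t>0,\\ 0, & t\le 0,\end{cases}
\qquad
\phi(s)=\frac{\rho(2-s)}{\rho(2-s)+\rho(s-1)}\,.
```

Since $\rho$ is $C^\infty$ and the denominator never vanishes, $\phi$ is $C^\infty$; moreover $\phi\equiv 1$ for $s\le 1$ (there $\rho(s-1)=0$) and $\phi\equiv 0$ for $s\ge 2$ (there $\rho(2-s)=0$).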
Then we quantify the modulus of continuity of the $a_{ij}$ with
$$\epsilon(\delta) = \sup\big\{\,|a_{ij}(x)-a_{ij}(y)| \;:\; |x-y|\le\delta,\; x,y\in\Omega,\; 1\le i,j\le n\,\big\}.$$
The function $\epsilon(\delta)\searrow 0$ as $\delta\searrow 0$, and it measures the modulus of continuity of the functions $a_{ij}$.
For any $x^o\in\Omega_{2\delta}$, let
$$\eta(x)=\phi\Big(\frac{|x-x^o|}{\delta}\Big) \quad\text{and}\quad w(x)=\eta(x)u(x),$$
then
$$\begin{aligned}
a_{ij}(x^o)\frac{\partial^2 w}{\partial x_i\partial x_j}
&= \big(a_{ij}(x^o)-a_{ij}(x)\big)\frac{\partial^2 w}{\partial x_i\partial x_j} + a_{ij}(x)\frac{\partial^2 w}{\partial x_i\partial x_j}\\
&= \big(a_{ij}(x^o)-a_{ij}(x)\big)\frac{\partial^2 w}{\partial x_i\partial x_j} + \eta(x)\,a_{ij}(x)\frac{\partial^2 u}{\partial x_i\partial x_j}\\
&\qquad + a_{ij}(x)\,u(x)\frac{\partial^2\eta}{\partial x_i\partial x_j} + 2a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial\eta}{\partial x_j}\\
&= \big(a_{ij}(x^o)-a_{ij}(x)\big)\frac{\partial^2 w}{\partial x_i\partial x_j} + \eta(x)\Big(b_i(x)\frac{\partial u}{\partial x_i} + c(x)u - f(x)\Big)\\
&\qquad + a_{ij}(x)\,u(x)\frac{\partial^2\eta}{\partial x_i\partial x_j} + 2a_{ij}(x)\frac{\partial u}{\partial x_i}\frac{\partial\eta}{\partial x_j}\\
&:= F(x) \quad\text{for } x\in\mathbb{R}^n.
\end{aligned}$$
In the above, we omitted the summation signs $\sum$: we abbreviated $\sum_{i,j=1}^{n}a_{ij}$ as $a_{ij}$, and so on. Here we notice that all terms are supported in $B_{2\delta} := B_{2\delta}(x^o)\subset\Omega$. With a simple linear transformation, we may assume $a_{ij}(x^o)=\delta_{ij}$. Apparently both $w$ and
$$\Gamma * F := \frac{1}{n(n-2)\omega_n}\int_{\mathbb{R}^n}\frac{1}{|x-y|^{n-2}}\,F(y)\,dy$$
are solutions of
$$\Delta u = F.$$
Thus by the uniqueness, w = Γ ∗ F (see the exercise after the proof of
the theorem). Now, following the Newtonian potential estimate (see Theorem
3.3), we obtain:
$$\|\nabla^2 w\|_{L^p(B_{2\delta})} = \|\nabla^2 w\|_{L^p(\mathbb{R}^n)} \le C\|F\|_{L^p(\mathbb{R}^n)} = C\|F\|_{L^p(B_{2\delta})}. \qquad (3.27)$$
Calculating term by term, we have
$$\|F\|_{L^p(B_{2\delta})} \le \epsilon(2\delta)\,\|\nabla^2 w\|_{L^p(B_{2\delta})} + \|f\|_{L^p(B_{2\delta})} + C\big(\|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).$$
Combining this with the previous inequality (3.27) and choosing $\delta$ so small that $C\epsilon(2\delta) < 1/2$, we get
$$\begin{aligned}
\|\nabla^2 w\|_{L^p(B_{2\delta})} &\le C\epsilon(2\delta)\,\|\nabla^2 w\|_{L^p(B_{2\delta})} + C\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big)\\
&\le \frac{1}{2}\|\nabla^2 w\|_{L^p(B_{2\delta})} + C\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).
\end{aligned}$$
This is equivalent to
$$\|\nabla^2 w\|_{L^p(B_{2\delta})} \le C\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big).$$
Since $u = w$ on $B_\delta(x^o)$, we have
$$\|\nabla^2 u\|_{L^p(B_\delta)} = \|\nabla^2 w\|_{L^p(B_\delta)}.$$
Consequently,
$$\|\nabla^2 u\|_{L^p(B_\delta)} \le C\big(\|f\|_{L^p(B_{2\delta})} + \|\nabla u\|_{L^p(B_{2\delta})} + \|u\|_{L^p(B_{2\delta})}\big). \qquad (3.28)$$
To extend this estimate from a $\delta$-ball to a compact set, we notice that the collection of balls
$$\{B_\delta(x) \mid x\in\Omega_{2\delta}\}$$
forms an open covering of $\Omega_{2\delta}$. Hence there are finitely many balls $\{B_\delta(x^i) \mid i=1,\dots,m\}$ that already cover $\Omega_{2\delta}$. We now apply the above estimate (3.28) on each of the balls and sum up to obtain
$$\begin{aligned}
\|\nabla^2 u\|_{L^p(\Omega_{2\delta})} &\le \sum_{i=1}^{m}\|\nabla^2 u\|_{L^p(B_\delta(x^i))}\\
&\le \sum_{i=1}^{m}C\big(\|f\|_{L^p(B_{2\delta}(x^i))} + \|\nabla u\|_{L^p(B_{2\delta}(x^i))} + \|u\|_{L^p(B_{2\delta}(x^i))}\big)\\
&\le C\big(\|f\|_{L^p(\Omega)} + \|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)}\big). \qquad (3.29)
\end{aligned}$$
For any compact subset $K$ of $\Omega$, let $\delta < \frac{1}{2}\,\mathrm{dist}(K,\partial\Omega)$; then $K\subset\Omega_{2\delta}$, and we derive
$$\|\nabla^2 u\|_{L^p(K)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \qquad (3.30)$$
This establishes the interior estimates.
Part 2. Boundary Estimates
The boundary estimates are very similar. Let's just describe how we can apply almost the same scheme here. For any point $x^o\in\partial\Omega$, $B_\delta(x^o)\cap\partial\Omega$ is a $C^{2,\alpha}$ graph for $\delta$ small. With a suitable rotation, we may assume that $B_\delta(x^o)\cap\partial\Omega$ is given by the graph of
$$x_n = h(x_1,x_2,\dots,x_{n-1}) = h(x'),$$
and $\Omega$ lies above this graph locally. Let
$$y = \psi(x) = \big(x'-{x^o}',\; x_n - h(x')\big);$$
then $\psi$ is a diffeomorphism that maps a neighborhood of $x^o$ in $\Omega$ onto $B^+_r(0) = \{y\in B_r(0) \mid y_n>0\}$. The equation becomes
$$\begin{cases}
-\sum_{i,j=1}^{n}\bar a_{ij}(y)u_{y_iy_j} + \sum_{i=1}^{n}\bar b_i(y)u_{y_i} + \bar c(y)u = \bar f(y), & \text{for } y\in B^+_r(0)\\
u(y)=0, & \text{for } y\in\partial B^+_r(0) \text{ with } y_n=0;
\end{cases} \qquad (3.31)$$
where the new coefficients are computed from the original ones via the chain rule of differentiation. For example,
$$\bar a_{ij}(y) = \frac{\partial\psi_i}{\partial x_l}\big(\psi^{-1}(y)\big)\,a_{lk}\big(\psi^{-1}(y)\big)\,\frac{\partial\psi_j}{\partial x_k}\big(\psi^{-1}(y)\big).$$
If necessary, we make a linear transformation so that $\bar a_{ij}(0)=\delta_{ij}$. Since a plane is still mapped to a plane, we may assume that equation (3.31) is valid for some smaller $r$. Applying the method of frozen coefficients (with $w(y)=\phi(\frac{2|y|}{r})u(y)$) we get
$$\Delta w(y) = F(y) \quad\text{on } B^+_r(0).$$
Now let $\bar w(y)$ and $\bar F(y)$ be the odd extensions of $w(y)$ and $F(y)$ from $B^+_r(0)$ to $B_r(0)$, i.e.
$$\bar w(y) = \bar w(y_1,\dots,y_{n-1},y_n) =
\begin{cases}
w(y_1,\dots,y_{n-1},y_n), & \text{if } y_n\ge 0;\\
-w(y_1,\dots,y_{n-1},-y_n), & \text{if } y_n<0,
\end{cases}$$
and similarly for $\bar F(y)$.
Then one can check that
$$\Delta\bar w(y) = \bar F(y), \quad\text{for } y\in B_r(0).$$
Through the same calculations as in Part 1, we derive the basic interior estimate
$$\begin{aligned}
\|\nabla^2 u\|_{L^p(B_r(x^o))} &\le C\big(\|f\|_{L^p(B_{2r}(x^o))} + \|\nabla u\|_{L^p(B_{2r}(x^o))} + \|u\|_{L^p(B_{2r}(x^o))}\big)\\
&\le C_1\big(\|f\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|\nabla u\|_{L^p(B_{2r}(x^o)\cap\Omega)} + \|u\|_{L^p(B_{2r}(x^o)\cap\Omega)}\big)
\end{aligned}$$
for any $x^o$ on $\partial\Omega$ and for some small radius $r$. The last inequality above is due to the symmetric extension of $w$ to $\bar w$ from the half ball to the whole ball.
These balls form a covering of the compact set $\partial\Omega$, which therefore has a finite subcovering $B_{r_i}(x^i)$ $(i=1,2,\dots,k)$. These balls also cover a neighborhood of $\partial\Omega$ including $\Omega\setminus\Omega_\delta$ for some small $\delta$. Summing up the estimates, and following the same steps as we did for the interior estimates, we get
$$\|u\|_{W^{2,p}(\Omega\setminus\Omega_\delta)} \le C\big(\|\nabla u\|_{L^p(\Omega)} + \|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big). \qquad (3.32)$$
This establishes the boundary estimates and hence completes the proof of the theorem. $\square$
Exercise 3.1.1 Assume that $\Omega$ is bounded, and $w\in W^{2,p}_0(\Omega)$. Show that if $-\Delta w = F$, then $w = \Gamma * F$.
Hint: (a) Show that $\Gamma * F(x) = 0$ for $x\notin\bar\Omega$.
(b) Show that it is true for $w\in C^2_0(\mathbb{R}^n)$.
(c) Show that
$$\Delta(J_\epsilon w) = J_\epsilon(\Delta w)$$
and thus
$$J_\epsilon w = \Gamma*(J_\epsilon F).$$
Let $\epsilon\to 0$ to derive $w = \Gamma * F$.
3.2 $W^{2,p}$ Regularity of Solutions

In this section, we will use the a priori estimates established in the previous section to derive the $W^{2,p}$ regularity of weak solutions.
Let $L$ be a second order uniformly elliptic operator in divergence form
$$Lu = -\sum_{i,j=1}^{n}(a_{ij}(x)u_{x_i})_{x_j} + \sum_{i=1}^{n}b_i(x)u_{x_i} + c(x)u. \qquad (3.33)$$
Definition 3.2.1 We say that $u\in W^{1,p}_0(\Omega)$ is a weak solution of
$$\begin{cases} Lu = f(x), & x\in\Omega\\ u(x)=0, & x\in\partial\Omega \end{cases} \qquad (3.34)$$
if for any $v\in W^{1,q}_0(\Omega)$,
$$\int_\Omega\Big[\sum_{i,j=1}^{n}a_{ij}(x)u_{x_i}v_{x_j} + \sum_{i=1}^{n}b_i(x)u_{x_i}v + c(x)uv\Big]\,dx = \int_\Omega f(x)v\,dx, \qquad (3.35)$$
where $\frac{1}{p}+\frac{1}{q}=1$.
Remark 3.2.1 Since $C^\infty_0(\Omega)$ is dense in $W^{1,p}_0(\Omega)$, we only need to require that (3.35) hold for all $v\in C^\infty_0(\Omega)$; this is more convenient in many applications.
The main result of this section is

Theorem 3.2.1 Assume $\Omega$ is a bounded domain in $\mathbb{R}^n$. Let $L$ be a uniformly elliptic operator defined in (3.33) with $a_{ij}(x)$ Lipschitz continuous and $b_i(x)$ and $c(x)$ bounded. Assume that $u\in W^{1,p}_0(\Omega)$ is a weak solution of (3.34). If $f(x)\in L^p(\Omega)$, then $u\in W^{2,p}(\Omega)$.
Outline of the Proof. The proof of the theorem is quite complex and
will be accomplished in two stages.
In stage 1, we ﬁrst consider the case when p ≥ 2, because one can rather
easily show the uniqueness of the weak solutions by multiplying both sides of
the equation by the solution itself and integrating by parts. As one will see,
this uniqueness plays a key role in deriving the regularity.
The proof of the regularity is based on the following fundamental proposi
tion on the existence, uniqueness, and regularity for the solution of the Laplace
equation on a unit ball.
Proposition 3.2.1 Assume $f\in L^p(B_1(0))$ with $p\ge 2$. Then the Dirichlet problem
$$\begin{cases} \Delta u = f(x), & x\in B_1(0)\\ u(x)=0, & x\in\partial B_1(0) \end{cases} \qquad (3.36)$$
admits a unique solution $u\in W^{2,p}(B_1(0))$ satisfying
$$\|u\|_{W^{2,p}(B_1)} \le C\|f\|_{L^p(B_1)}. \qquad (3.37)$$
We will prove this proposition in subsection 3.2.1, and then use the
“frozen” coeﬃcients method to derive the regularity for general operator L.
In stage 2 (subsection 3.2.2), we consider the case $1<p<2$. The main difficulty lies in the uniqueness of the weak solution. To show the uniqueness, we first prove a $W^{2,p}$ version of the Fredholm Alternative for $p\ge 2$ based on the regularity result in stage 1, which will be used to deduce the existence of solutions of the equation
$$-\sum_{ij}(a_{ij}(x)w_{x_i})_{x_j} = F(x).$$
Then we use this equation with a proper choice of $F(x)$ to derive a contradiction if there exists a nontrivial solution of the equation
$$-\sum_{ij}(a_{ij}(x)v_{x_i})_{x_j} = 0.$$
After proving the uniqueness, to derive the regularity, everything else is
the same as in stage 1.
Remark 3.2.2 Actually, the conditions on the coefficients of $L$ can be weakened. This will use the Regularity Lifting Theorem in Section 3.3, and hence will be discussed there.
3.2.1 The Case p ≥ 2
We will prove Proposition 3.2.1 and use it to derive the regularity of weak
solutions for general operator L. To this end, we need
Lemma 3.2.1 (Better A Priori Estimate in the Presence of Uniqueness.) Assume that
i) for solutions of
$$Lu = f(x), \quad x\in\Omega \qquad (3.38)$$
the a priori estimate
$$\|u\|_{W^{2,p}(\Omega)} \le C\big(\|u\|_{L^p(\Omega)} + \|f\|_{L^p(\Omega)}\big) \qquad (3.39)$$
holds, and
ii) (uniqueness) if $Lu = 0$, then $u = 0$.
Then we have a better a priori estimate for solutions of (3.38):
$$\|u\|_{W^{2,p}(\Omega)} \le C\|f\|_{L^p(\Omega)}. \qquad (3.40)$$
Proof. Suppose inequality (3.40) is false; then there exists a sequence of functions $\{f_k\}$ with $\|f_k\|_{L^p}=1$ and a corresponding sequence of solutions $\{u_k\}$ of
$$Lu_k = f_k(x), \quad x\in\Omega$$
such that
$$\|u_k\|_{W^{2,p}}\to\infty \;\text{ as } k\to\infty.$$
It follows from the a priori estimate (3.39) that
$$\|u_k\|_{L^p}\to\infty \;\text{ as } k\to\infty.$$
Let
$$v_k = \frac{u_k}{\|u_k\|_{L^p}} \quad\text{and}\quad g_k = \frac{f_k}{\|u_k\|_{L^p}}.$$
Then
$$\|v_k\|_{L^p} = 1, \quad\text{and}\quad \|g_k\|_{L^p}\to 0. \qquad (3.41)$$
For the equation
$$Lv_k = g_k(x), \quad x\in\Omega \qquad (3.42)$$
the a priori estimate
$$\|v_k\|_{W^{2,p}} \le C\big(\|v_k\|_{L^p} + \|g_k\|_{L^p}\big) \qquad (3.43)$$
holds. Then (3.41) and (3.43) imply that $\{v_k\}$ is bounded in $W^{2,p}$, and hence there exists a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v$ in $W^{2,p}$. By the compact Sobolev embedding, the same sequence converges strongly to $v$ in $L^p$, and hence $\|v\|_{L^p}=1$. From (3.42), we arrive at
$$Lv = 0, \quad x\in\Omega.$$
Therefore, by the uniqueness assumption, we must have $v=0$. This contradicts the fact that $\|v\|_{L^p}=1$ and therefore completes the proof. $\square$
The Proof of Proposition 3.2.1.
For convenience, we abbreviate $B_r(0)$ as $B_r$.
To see the uniqueness, assume that both $u$ and $v$ are solutions of (3.36). Let $w = u-v$. Then $w$ weakly satisfies
$$\begin{cases} \Delta w = 0, & x\in B_1\\ w(x)=0, & x\in\partial B_1. \end{cases}$$
Multiplying both sides of the equation by $w$ and then integrating on $B_1$, we derive
$$\int_{B_1}|\nabla w|^2\,dx = 0.$$
Hence $\nabla w = 0$ almost everywhere; this, together with the zero boundary condition, implies $w = 0$ almost everywhere.
For the existence, it is well-known that, if $f$ is continuous, then
$$u(x) = \int_{B_1}G(x,y)f(y)\,dy$$
is the solution. Here
$$G(x,y) = \Gamma(x-y) - h(x,y)$$
is the Green's function, in which
$$\Gamma(x) = \begin{cases}
\dfrac{1}{n(n-2)\omega_n}\,\dfrac{1}{|x|^{n-2}}, & n\ge 3\\[6pt]
\dfrac{1}{2\pi}\ln|x|, & n=2
\end{cases}$$
is the fundamental solution of the Laplace equation as introduced in the definition of Newtonian potentials, and
$$h(x,y) = \frac{1}{n(n-2)\omega_n}\,\frac{1}{\big(|x|\,\big|\frac{x}{|x|^2}-y\big|\big)^{n-2}}, \quad\text{for } n\ge 3$$
is a harmonic function. The addition of $h(x,y)$ ensures that $G(x,y)$ vanishes on the boundary.
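The vanishing of $G$ on $\partial B_1$ rests on an elementary identity, which we record here for the reader's convenience (a standard computation, not part of the original text):

```latex
|x|^2\Bigl|\frac{x}{|x|^2}-y\Bigr|^2 = 1 - 2\,x\cdot y + |x|^2|y|^2,
\qquad
|x-y|^2 = |x|^2 - 2\,x\cdot y + |y|^2 .
```

For $|x|=1$ both right hand sides equal $1-2\,x\cdot y+|y|^2$, so $|x|\,\big|\frac{x}{|x|^2}-y\big| = |x-y|$ on $\partial B_1$; hence $h(x,y)=\Gamma(x-y)$ there and $G(x,y)=0$.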
We have already obtained the needed estimate on the first part
$$\int_{B_1}\Gamma(x-y)f(y)\,dy$$
when we worked on the Newtonian potentials. For the second part, noticing that $h(x,y)$ is smooth when $y$ stays away from the boundary (see the exercise below), we start from a smaller ball $B_{1-\delta}$ for each $\delta>0$. Let
$$u_\delta(x) = \int_{B_1}G(x,y)f_\delta(y)\,dy,$$
where
$$f_\delta(x) = \begin{cases} f(x), & x\in B_{1-\delta}\\ 0, & \text{elsewhere}. \end{cases}$$
Now, by our result for the Newtonian potentials, $D^2u_\delta$ is in $L^p(B_1)$. Applying the Poincaré inequality, we derive that $u_\delta$ is also in $L^p(B_1)$, and hence it is in $W^{2,p}(B_1)$. Moreover, due to uniqueness and Lemma 3.2.1, we have the better a priori estimate
$$\|u_\delta\|_{W^{2,p}(B_1)} \le C\|f\|_{L^p(B_1)}. \qquad (3.44)$$
Pick a sequence $\{\delta_i\}$ tending to zero; then the corresponding solutions $\{u_{\delta_i}\}$ form a Cauchy sequence in $W^{2,p}(B_1)$, because
$$\|u_{\delta_i}-u_{\delta_j}\|_{W^{2,p}(B_1)} \le C\|f_{\delta_i}-f_{\delta_j}\|_{L^p(B_1)} \to 0, \quad\text{as } i,j\to\infty.$$
Let
$$u_o = \lim_{i\to\infty}u_{\delta_i} \quad\text{in } W^{2,p}(B_1).$$
Then $u_o$ is a solution of (3.36). Moreover, from (3.44), we see that the better a priori estimate (3.37) holds for $u_o$.
This completes the proof of Proposition 3.2.1. $\square$
Exercise 3.2.1 Show that if $|y|\le 1-\delta$ for some $\delta>0$, then there exists a constant $c_o>0$ such that
$$|x|\,\Big|\frac{x}{|x|^2}-y\Big| \ge c_o, \quad \forall\, x\in B_1(0).$$
Proof of Regularity Theorem 3.2.1 for $p\ge 2$.
The general framework of the proof is similar to that of the $W^{2,p}$ a priori estimate in the previous section. The key difference is that for the a priori estimate we presuppose that $u$ is a strong solution in $W^{2,p}$, while here we only assume that $u$ is a weak solution in $W^{1,p}$. After reading the proof of the a priori estimates, the reader may notice that if one can establish $W^{2,p}$ regularity in a small neighborhood of each point in $\Omega$, then by an entirely similar argument as in obtaining the a priori estimates, one can derive the regularity on the whole of $\Omega$.
As in the proof of the a priori estimates, we define the cutoff function
$$\phi(s) = \begin{cases} 1, & \text{if } s\le 1\\ 0, & \text{if } s\ge 2. \end{cases}$$
Let $u$ be a $W^{1,p}_0(\Omega)$ weak solution of (3.34). For any
$$x^o\in\Omega_{2\delta} := \{x\in\Omega \mid \mathrm{dist}(x,\partial\Omega)\ge 2\delta\},$$
let
$$\eta(x) = \phi\Big(\frac{|x-x^o|}{\delta}\Big) \quad\text{and}\quad w(x) = \eta(x)u(x).$$
Then $w$ is supported in $B_{2\delta}(x^o)$. By (3.35) and a straightforward calculation, one can verify that, for any $v\in C^\infty_0(B_{2\delta}(x^o))$,
$$\int_{B_{2\delta}(x^o)}a_{ij}(x^o)w_{x_i}v_{x_j}\,dx = \int_{B_{2\delta}(x^o)}[a_{ij}(x^o)-a_{ij}(x)]w_{x_i}v_{x_j}\,dx + \int_{B_{2\delta}(x^o)}F(x)v\,dx$$
where
$$F(x) = f(x) - (a_{ij}(x)\eta_{x_i}u)_{x_j} - b_i(x)u_{x_i} - c(x)u(x).$$
In other words, $w$ is a weak solution of
$$\begin{cases}
a_{ij}(x^o)w_{x_ix_j} = \big([a_{ij}(x^o)-a_{ij}(x)]w_{x_i}\big)_{x_j} - F(x), & x\in B_{2\delta}(x^o)\\
w(x)=0, & x\in\partial B_{2\delta}(x^o).
\end{cases} \qquad (3.45)$$
In the above, we omitted the summation signs $\sum$: we abbreviated $\sum_{i,j=1}^{n}a_{ij}$ as $a_{ij}$, and so on.
With a change of coordinates, we may assume that $a_{ij}(x^o)=\delta_{ij}$ and write equation (3.45) as
$$\Delta w = \big([a_{ij}(x^o)-a_{ij}(x)]w_{x_i}\big)_{x_j} - F(x). \qquad (3.46)$$
For any $v\in W^{2,p}(B_{2\delta}(x^o))$, obviously,
$$\big([a_{ij}(x^o)-a_{ij}(x)]v_{x_i}\big)_{x_j}\in L^p(B_{2\delta}(x^o)).$$
Also, one can easily verify that $F(x)$ is in $L^p(B_{2\delta}(x^o))$. By virtue of Proposition 3.2.1, the operator $\Delta$ is invertible. Consider the equation in $W^{2,p}$:
$$v = Kv + \Delta^{-1}F \qquad (3.47)$$
where
$$(Kv)(x) = \Delta^{-1}\big([a_{ij}(x^o)-a_{ij}(x)]v_{x_i}\big)_{x_j}.$$
Under the assumption that the $a_{ij}(x)$ are Lipschitz continuous, one can verify (see the exercise below) that, for sufficiently small $\delta$, $K$ is a contracting map from $W^{2,p}(B_{2\delta}(x^o))$ to itself. Therefore, there exists a unique solution $v$ of equation (3.47). This $v$ is also a weak solution of equation (3.46). Similar to the proof of Proposition 3.2.1, one can show (as an exercise) the uniqueness of the weak solution of (3.46). Therefore, we must have $w = v$, and thus we conclude that $w$ is also in $W^{2,p}(B_{2\delta}(x^o))$. This completes stage 1 of the proof of Theorem 3.2.1. $\square$
Exercise 3.2.2 Assume that each $a_{ij}(x)$ is Lipschitz continuous. Let
$$(Kv)(x) = \Delta^{-1}\big([a_{ij}(x^o)-a_{ij}(x)]v_{x_i}\big)_{x_j}, \quad x\in B_{2\delta}(x^o).$$
Show that, for $\delta$ sufficiently small, the operator $K$ is a contracting map in $W^{2,p}(B_{2\delta}(x^o))$, i.e. there exists a constant $0<\gamma<1$ such that
$$\|K\phi - K\psi\|_{W^{2,p}(B_{2\delta}(x^o))} \le \gamma\,\|\phi-\psi\|_{W^{2,p}(B_{2\delta}(x^o))}.$$
Exercise 3.2.3 Prove the uniqueness of the $W^{1,p}(B_{2\delta}(x^o))$ weak solution of equation (3.46) when $p\ge 2$.
3.2.2 The Case $1<p<2$.

Lemma 3.2.2 Let $p\ge 2$. If $u = 0$ whenever $Lu = 0$ (in the sense of $W^{1,p}_0(\Omega)$ weak solutions), then for any $f\in L^p(\Omega)$, there exists a unique $u\in W^{1,p}_0(\Omega)\cap W^{2,p}(\Omega)$ such that
$$Lu = f(x) \quad\text{and}\quad \|u\|_{W^{2,p}(\Omega)} \le C\|f\|_{L^p}. \qquad (3.48)$$
Proof. This is actually a $W^{2,p}$ version of the Fredholm Alternative.
From the previous chapter, one can see that for $\alpha$ sufficiently large, by the Lax-Milgram Theorem, for any given $g\in L^2(\Omega)$, there exists a unique $W^{1,2}_0(\Omega)$ weak solution $v$ of the equation
$$(L+\alpha)v = g, \quad x\in\Omega.$$
From the Regularity Theorem in this chapter, we have $v\in W^{2,2}(\Omega)$. Therefore, we can write
$$v = (L+\alpha)^{-1}g,$$
where $(L+\alpha)^{-1}$ is a bounded linear operator from $L^2(\Omega)$ to $W^{2,2}(\Omega)$.
Let $\{f_k\}\subset C^\infty_0(\Omega)$ be a sequence of smooth approximations of $f(x)$. For each $k$, consider the equation
$$(L+\alpha)w - \alpha w = f_k, \qquad (3.49)$$
or equivalently,
$$(I-K)w = F, \qquad (3.50)$$
where $I$ is the identity operator, $K = \alpha(L+\alpha)^{-1}$, and $F = (L+\alpha)^{-1}f_k$. One can see that $K$ is a compact operator from $W^{1,2}_0(\Omega)$ into itself, because of the compact embedding of $W^{2,2}$ into $W^{1,2}$. Now we can apply the Fredholm Alternative:
Equation (3.50) has a unique solution if and only if the corresponding homogeneous equation $(I-K)w = 0$ has only the trivial solution.
The second half of the above is just the assumption of the theorem; hence, equation (3.50) possesses a unique solution $u_k$ in $W^{1,2}_0(\Omega)$, i.e.
$$u_k - \alpha(L+\alpha)^{-1}u_k = (L+\alpha)^{-1}f_k. \qquad (3.51)$$
Furthermore, since both $\alpha(L+\alpha)^{-1}u_k$ and $(L+\alpha)^{-1}f_k$ are in $W^{2,2}(\Omega)$, we derive immediately that $u_k$ is also in $W^{2,2}(\Omega)$ (or we can deduce this from the Regularity Theorem 3.2.1). Since $f_k$ is in $L^p(\Omega)$ for any $p$, we conclude from Theorem 3.2.1 that $u_k$ is in $W^{2,p}(\Omega)$; furthermore, due to uniqueness, we have the improved a priori estimate (see Lemma 3.2.1)
$$\|u_k\|_{W^{2,p}} \le C\|f_k\|_{L^p}, \quad k = 1, 2, \dots$$
Moreover, for any integers $i$ and $j$,
$$L(u_i - u_j) = f_i(x) - f_j(x), \quad x\in\Omega.$$
It follows that
$$\|u_i - u_j\|_{W^{2,p}} \le C\|f_i - f_j\|_{L^p} \to 0, \quad\text{as } i,j\to\infty.$$
This implies that $\{u_k\}$ is a Cauchy sequence in $W^{2,p}$, and hence it converges to an element $u$ in $W^{2,p}(\Omega)$. This function $u$ is the desired solution. $\square$
Proposition 3.2.2 (Uniqueness of the $W^{1,p}_0$ Weak Solution for $1<p<\infty$.) Let $\Omega$ be a bounded domain in $\mathbb{R}^n$. Assume that the $a_{ij}(x)$ are Lipschitz continuous. If $u$ is a $W^{1,p}_0(\Omega)$ weak solution of
$$(a_{ij}(x)u_{x_i})_{x_j} = 0, \qquad (3.52)$$
then $u = 0$ almost everywhere.
Proof. When $p\ge 2$, we have already proved the uniqueness. Now assume $1<p<2$. Suppose there exists a nontrivial weak solution $u$. Let $\{u_k\}$ be a sequence of $C^\infty_0(\Omega)$ functions such that
$$u_k\to u \;\text{ in } W^{1,p}_0(\Omega), \quad\text{as } k\to\infty.$$
For each $k$, consider the equation
$$(a_{ij}(x)v_{x_i})_{x_j} = F_k(x) := \frac{|u_k|^{p/q}\,u_k}{1+|u_k|^2}, \qquad (3.53)$$
where $\frac{1}{q}+\frac{1}{p}=1$.
Obviously, for $p<2$, $q$ is greater than 2, and $F_k(x)$ is in $L^q(\Omega)$. Since we already have uniqueness for $W^{1,q}_0$ weak solutions, Lemma 3.2.2 guarantees the existence of a solution $v_k$ of equation (3.53).
Integrating by parts several times, one can verify that
$$0 = \int_\Omega v_k\,(a_{ij}(x)u_{x_i})_{x_j}\,dx = \int_\Omega \frac{|u_k|^{p/q}\,u_k}{1+|u_k|^2}\,u\,dx. \qquad (3.54)$$
Since $u_k\to u$ in $W^{1,p}$, it is easy to see that
$$\frac{|u_k|^{p/q}\,u_k}{1+|u_k|^2} \;\to\; \frac{|u|^{p/q}\,u}{1+|u|^2}$$
in $L^q(\Omega)$, and therefore, passing to the limit in (3.54),
$$\int_\Omega \frac{|u|^{p/q}\,u^2}{1+|u|^2}\,dx = 0.$$
Since the integrand is nonnegative, we must have $u = 0$ almost everywhere. This completes the proof of the proposition. $\square$
After proving the uniqueness, the rest of the arguments are entirely the
same as in stage 1. This completes stage 2 of the proof for Theorem 3.2.1.
3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity

Theorem 3.2.2 Given any pair $p, q > 1$ and $f\in L^q(\Omega)$, if $u\in W^{1,p}_0(\Omega)$ is a weak solution of
$$Lu = f(x), \quad x\in\Omega, \qquad (3.55)$$
then $u\in W^{2,q}(\Omega)$ and the a priori estimate
$$\|u\|_{W^{2,q}(\Omega)} \le C\big(\|u\|_{L^q} + \|f\|_{L^q}\big) \qquad (3.56)$$
holds.
Proof. Case i) If $q\le p$, then obviously $u$ is also a $W^{1,q}_0(\Omega)$ weak solution, and the results follow from the Regularity Theorem 3.2.1 and the $W^{2,q}$ a priori estimates.
Case ii) If $q>p$, then we use the Sobolev embedding
$$W^{2,p}\hookrightarrow W^{1,p_1}, \quad\text{with}\quad p_1 = \frac{np}{n-p}.$$
If $p_1\ge q$, we are done. If $p_1<q$, we use the fact that $u$ is a $W^{1,p_1}$ weak solution, and by regularity, $u\in W^{2,p_1}$. Repeating this process, after a few steps, we will arrive at a $p_k\ge q$.
From this theorem, we can derive immediately the equivalence between the uniqueness of weak solutions in any two spaces $W^{1,p}_0$ and $W^{1,q}_0$:

Corollary 3.2.1 For any pair $p, q > 1$, the following are equivalent:
i) If $u\in W^{1,p}_0(\Omega)$ is a weak solution of $Lu=0$, then $u=0$.
ii) If $u\in W^{1,q}_0(\Omega)$ is a weak solution of $Lu=0$, then $u=0$.
Theorem 3.2.3 ($W^{2,p}$ Version of the Fredholm Alternative). Let $1<p<\infty$. If $u=0$ whenever $Lu=0$ (in the sense of $W^{1,p}_0(\Omega)$ weak solutions), then for any $f\in L^p(\Omega)$, there exists a unique $u\in W^{1,p}_0(\Omega)\cap W^{2,p}(\Omega)$ such that
$$Lu = f(x) \quad\text{and}\quad \|u\|_{W^{2,p}(\Omega)} \le C\|f\|_{L^p}.$$
For $2\le p<\infty$, we have stated this theorem in Lemma 3.2.2. Now, for $1<p<2$, once we have obtained the $W^{2,p}$ regularity in this case, the proof of this theorem is entirely the same as for Lemma 3.2.2.
3.3 Regularity Lifting

3.3.1 Bootstrap

Assume that $u\in H^1(\Omega)$ is a weak solution of
$$-\Delta u = u^p(x), \quad x\in\Omega. \qquad (3.57)$$
Then by the Sobolev embedding, $u\in L^{\frac{2n}{n-2}}(\Omega)$. If the power $p$ is less than the critical number $\frac{n+2}{n-2}$, then the regularity of $u$ can be enhanced repeatedly through the equation until we reach $u\in C^\alpha(\Omega)$. Finally the Schauder estimate will lift $u$ to $C^{2+\alpha}(\Omega)$, and hence $u$ is a classical solution. We call this the "Bootstrap Method," as will be illustrated below.
For each fixed $p<\frac{n+2}{n-2}$, write
$$p = \frac{n+2}{n-2}-\delta$$
for some $\delta>0$. Since $u\in L^{\frac{2n}{n-2}}(\Omega)$,
$$u^p\in L^{\frac{2n}{p(n-2)}}(\Omega) = L^{\frac{2n}{(n+2)-\delta(n-2)}}(\Omega).$$
The equation (3.57) boosts the solution $u$ to $W^{2,q_1}(\Omega)$ with
$$q_1 = \frac{2n}{(n+2)-\delta(n-2)}.$$
By the Sobolev embedding
$$W^{2,q_1}(\Omega)\hookrightarrow L^{\frac{nq_1}{n-2q_1}}(\Omega)$$
we have $u\in L^{s_1}(\Omega)$ with
$$s_1 = \frac{nq_1}{n-2q_1} = \frac{2n}{n-2}\cdot\frac{1}{1-\delta}.$$
The integrability power of $u$ has been amplified by a factor of $\frac{1}{1-\delta}\,(>1)$.
Now, $u^p\in L^{q_2}(\Omega)$ with
$$q_2 = \frac{s_1}{p} = \frac{2n}{(1-\delta)[\,n+2-\delta(n-2)\,]}.$$
And hence, through the equation, we derive $u\in W^{2,q_2}(\Omega)$. By Sobolev embedding, if $2q_2\ge n$, i.e. if
$$(1-\delta)[\,n+2-\delta(n-2)\,]-4\le 0,$$
then we have either $u\in L^q(\Omega)$ for any $q$, or $u\in C^\alpha(\Omega)$, and we are done. If $2q_2<n$, then $u\in L^{s_2}(\Omega)$, with
$$s_2 = \frac{2n}{(1-\delta)[\,n+2-\delta(n-2)\,]-4}.$$
It is easy to verify that
$$s_2 \ge \frac{2n}{(1-\delta)(n-2)}\cdot\frac{1}{1-\delta} = s_1\cdot\frac{1}{1-\delta}.$$
The integrability power of $u$ has again been amplified by at least a factor of $\frac{1}{1-\delta}$. Continuing this process, after finitely many steps, we will boost $u$ to $L^q(\Omega)$ for any $q$, and hence $u\in C^\alpha(\Omega)$ for some $0<\alpha<1$. Finally, by the Schauder estimate, $u\in C^{2+\alpha}(\Omega)$, and therefore it is a classical solution.
From the above argument, one can see that if $p=\frac{n+2}{n-2}$, the so-called "critical power," then $\delta=0$, and the integrability power of the solution $u$ cannot be boosted this way. To deal with this situation, we introduce a "Regularity Lifting Method" in the next subsection.
3.3.2 Regularity Lifting Theorem

Here, we present a simple method to boost the regularity of solutions. It has been used extensively in various forms in the authors' previous works. The essence of the approach is well-known in the analysis community. However, the version we present here contains some new developments. It is much more general and is very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, to obtain regularity. Essentially, it is based on the following Regularity Lifting Theorem.
Let $Z$ be a given vector space. Let $\|\cdot\|_X$ and $\|\cdot\|_Y$ be two norms on $Z$. Define a new norm $\|\cdot\|_Z$ by
$$\|\cdot\|_Z = \sqrt[p]{\|\cdot\|_X^p + \|\cdot\|_Y^p}\,.$$
For simplicity, we assume that $Z$ is complete with respect to the norm $\|\cdot\|_Z$. Let $X$ and $Y$ be the completions of $Z$ under $\|\cdot\|_X$ and $\|\cdot\|_Y$, respectively. Here, one can choose $p$, $1\le p\le\infty$, according to what one needs. It is easy to see that $Z = X\cap Y$.
Theorem 3.3.1 Let $T$ be a contraction map from $X$ into itself and from $Y$ into itself. Assume that $f\in X$, and that there exists a function $g\in Z$ such that $f = Tf+g$. Then $f$ also belongs to $Z$.
Proof. Step 1. First we show that $T: Z\to Z$ is a contraction. Since $T$ is a contraction on $X$, there exists a constant $\theta_1$, $0<\theta_1<1$, such that
$$\|Th_1 - Th_2\|_X \le \theta_1\|h_1-h_2\|_X.$$
Similarly, we can find a constant $\theta_2$, $0<\theta_2<1$, such that
$$\|Th_1 - Th_2\|_Y \le \theta_2\|h_1-h_2\|_Y.$$
Let $\theta = \max\{\theta_1,\theta_2\}$. Then, for any $h_1, h_2\in Z$,
$$\begin{aligned}
\|Th_1-Th_2\|_Z &= \sqrt[p]{\|Th_1-Th_2\|_X^p + \|Th_1-Th_2\|_Y^p}\\
&\le \sqrt[p]{\theta_1^p\,\|h_1-h_2\|_X^p + \theta_2^p\,\|h_1-h_2\|_Y^p}\\
&\le \theta\,\|h_1-h_2\|_Z.
\end{aligned}$$
Step 2. Since $T: Z\to Z$ is a contraction, given $g\in Z$, we can find a solution $h\in Z$ of $h = Th+g$. Now $T: X\to X$ is also a contraction and $g\in Z\subset X$, so the equation $x = Tx+g$ has a unique solution in $X$. Thus $f = h\in Z$, since both $h$ and $f$ are solutions of $x = Tx+g$ in $X$. $\square$
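The mechanism in Step 2 is the Banach fixed point theorem: a contraction has exactly one fixed point, found by iteration. A minimal scalar illustration (purely our own, with a made-up contraction $T(x)=x/2+1$):

```python
def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Banach fixed-point iteration x_{k+1} = T(x_k) for a contraction T."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# T(x) = x/2 + 1 is a contraction with ratio 1/2; its unique fixed point is x = 2,
# and the iteration reaches it from any starting point.
root = fixed_point(lambda x: x / 2 + 1, 0.0)
```

The uniqueness of the fixed point is what forces $f = h$ in Step 2: both solve the same fixed point equation in $X$.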
3.3.3 Applications to PDEs

Now we explain how the Regularity Lifting Theorem proved in the previous subsection can be used to boost the regularity of weak solutions involving the critical exponent:
$$-\Delta u = u^{\frac{n+2}{n-2}}. \qquad (3.58)$$
Still assume that $\Omega$ is a smooth bounded domain in $\mathbb{R}^n$ with $n\ge 3$. Let $u\in H^1_0(\Omega)$ be a weak solution of equation (3.58). Then by Sobolev embedding, $u\in L^{\frac{2n}{n-2}}(\Omega)$.
We can split the right hand side of (3.58) into two parts:
$$u^{\frac{n+2}{n-2}} = u^{\frac{4}{n-2}}\,u := a(x)u.$$
Then obviously $a(x)\in L^{\frac{n}{2}}(\Omega)$. Hence, more generally, we consider the regularity of weak solutions of the following equation:
$$-\Delta u = a(x)u + b(x). \qquad (3.59)$$
Theorem 3.3.2 Assume that $a(x), b(x)\in L^{\frac{n}{2}}(\Omega)$. Let $u$ be any weak solution of equation (3.59) in $H^1_0(\Omega)$. Then $u$ is in $L^p(\Omega)$ for any $1\le p<\infty$.
Remark 3.3.1 Even in the best situation, when $a(x)\equiv 0$, we can only obtain $u\in W^{2,\frac{n}{2}}(\Omega)$ via the $W^{2,p}$ estimate, and hence derive that $u\in L^p(\Omega)$ for any $p<\infty$ by Sobolev embedding.
Now let us come back to the nonlinear equation (3.58). Assume that $u$ is an $H^1_0(\Omega)$ weak solution. From the above theorem, we first conclude that $u$ is in $L^q(\Omega)$ for any $1<q<\infty$. Then by our Regularity Theorem 3.2.1, $u$ is in $W^{2,q}(\Omega)$. This implies that $u\in C^{1,\alpha}(\Omega)$ via Sobolev embedding. Finally, from the classical $C^{2,\alpha}$ regularity, we arrive at $u\in C^{2,\alpha}(\Omega)$.
Corollary 3.3.1 If $u$ is an $H^1_0(\Omega)$ weak solution of equation (3.58), then $u\in C^{2,\alpha}(\Omega)$, and hence it is a classical solution.
Proof of Theorem 3.3.2.
Let $G(x,y)$ be the Green's function of $-\Delta$ in $\Omega$. Define
$$(Tf)(x) = (-\Delta)^{-1}f(x) = \int_\Omega G(x,y)f(y)\,dy.$$
Then obviously
$$0 < G(x,y) < \Gamma(|x-y|) := \frac{C_n}{|x-y|^{n-2}},$$
and it follows that, for any $q\in(1,\frac{n}{2})$,
$$\|Tf\|_{L^{\frac{nq}{n-2q}}(\Omega)} \le \|\Gamma * f\|_{L^{\frac{nq}{n-2q}}(\mathbb{R}^n)} \le C\|f\|_{L^q(\Omega)}.$$
Here, we may extend $f$ to be zero outside $\Omega$ when necessary.
For a positive number $A$, define
$$a_A(x) = \begin{cases} a(x), & \text{if } |a(x)|\ge A\\ 0, & \text{otherwise}, \end{cases}$$
and
$$a_B(x) = a(x) - a_A(x).$$
Let
$$(T_Au)(x) = \int_\Omega G(x,y)\,a_A(y)u(y)\,dy.$$
Then equation (3.59) can be written as
$$u(x) = (T_Au)(x) + F_A(x),$$
where
$$F_A(x) = \int_\Omega G(x,y)\big[a_B(y)u(y) + b(y)\big]\,dy.$$
We will show that, for any $1\le p<\infty$,
i) $T_A$ is a contracting operator from $L^p(\Omega)$ to $L^p(\Omega)$ for $A$ large, and
ii) $F_A(x)$ is in $L^p(\Omega)$.
Then by the Regularity Lifting Theorem, we derive immediately that $u\in L^p(\Omega)$.
i) The Estimate of the Operator $T_A$.
For any $\frac{n}{n-2}<p<\infty$, there is a $q$, $1<q<\frac{n}{2}$, such that
$$p = \frac{nq}{n-2q},$$
and then by the Hölder inequality, we derive
$$\|T_Au\|_{L^p(\Omega)} \le \|\Gamma*(a_Au)\|_{L^p(\mathbb{R}^n)} \le C\|a_Au\|_{L^q(\Omega)} \le C\|a_A\|_{L^{\frac{n}{2}}(\Omega)}\,\|u\|_{L^p(\Omega)}.$$
Since $a(x)\in L^{\frac{n}{2}}(\Omega)$, one can choose a number $A$ so large that
$$\|a_A\|_{L^{\frac{n}{2}}(\Omega)} \le \frac{1}{2C},$$
and hence arrive at
$$\|T_Au\|_{L^p(\Omega)} \le \frac{1}{2}\|u\|_{L^p(\Omega)}.$$
That is, $T_A: L^p(\Omega)\to L^p(\Omega)$ is a contracting operator.
ii) The Integrability of $F_A(x)$.
(a) First consider
$$F_A^1(x) := \int_\Omega G(x,y)b(y)\,dy.$$
For any $\frac{n}{n-2}<p<\infty$, choose $1<q<\frac{n}{2}$ such that $p = \frac{nq}{n-2q}$. Extending $b(x)$ to be zero outside $\Omega$, we have
$$\|F_A^1\|_{L^p(\Omega)} \le \|\Gamma*b\|_{L^p(\mathbb{R}^n)} \le C\|b\|_{L^q(\Omega)} \le C\|b\|_{L^{\frac{n}{2}}(\Omega)} < \infty.$$
Hence
$$F_A^1(x)\in L^p(\Omega).$$
(b) Then estimate
$$F_A^2(x) = \int_\Omega G(x,y)\,a_B(y)u(y)\,dy.$$
By the boundedness of $a_B(x)$, we have
$$\|F_A^2\|_{L^p(\Omega)} \le C\|a_Bu\|_{L^q(\Omega)} \le C\|u\|_{L^q(\Omega)}.$$
Noticing that $u\in L^q(\Omega)$ for any $1<q\le\frac{2n}{n-2}$, and
$$p = \frac{2n}{n-6} \quad\text{when}\quad q = \frac{2n}{n-2},$$
we conclude that, for the following values of $p$ and dimension $n$:
$$\begin{cases} 1<p<\infty, & \text{when } 3\le n\le 6\\ 1<p<\frac{2n}{n-6}, & \text{when } n>6, \end{cases}$$
$F_A(x)\in L^p(\Omega)$, and hence $T_A$ is a contracting operator from $L^p(\Omega)$ to $L^p(\Omega)$.
Applying the Regularity Lifting Theorem, we arrive at
$$\begin{cases} u\in L^p(\Omega), & \text{for any } 1<p<\infty \text{ if } 3\le n\le 6\\ u\in L^p(\Omega), & \text{for any } 1<p<\frac{2n}{n-6} \text{ if } n>6. \end{cases}$$
The only thing that prevents us from reaching the full range of $p$ is the estimate of $F_A^2(x)$. However, from the starting point $u\in L^{\frac{2n}{n-2}}(\Omega)$, we have now arrived at
$$u\in L^p(\Omega), \quad\text{for any } 1<p<\frac{2n}{n-6}.$$
From this point on, through an entirely similar argument as above, we reach
$$\begin{cases} u\in L^p(\Omega), & \text{for any } 1<p<\infty \text{ if } 3\le n\le 10\\ u\in L^p(\Omega), & \text{for any } 1<p<\frac{2n}{n-10} \text{ if } n>10. \end{cases}$$
At each step, we add 4 to the range of dimensions $n$ for which
$$u\in L^p(\Omega), \quad 1\le p<\infty$$
holds. Continuing this way, we prove the theorem for all dimensions. This completes the proof of the Theorem. $\square$
Now we will use a similar idea and the Regularity Lifting Theorem to carry out Stage 3 of the proof of the Regularity Theorem.

Let $L$ be a second order uniformly elliptic operator in divergence form:
\[
Lu = -\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} + \sum_{i=1}^n b_i(x) u_{x_i} + c(x) u. \tag{3.60}
\]
We will prove:

Theorem 3.3.3 ($W^{2,p}$ Regularity under Weaker Conditions)
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ and $1 < p < \infty$. Assume that
i) $a_{ij}(x) \in W^{1,n+\delta}(\Omega)$ for some $\delta > 0$;
ii)
\[
b_i(x) \in \begin{cases} L^n(\Omega), & \text{if } 1 < p < n \\ L^{n+\delta}(\Omega), & \text{if } p = n \\ L^p(\Omega), & \text{if } n < p < \infty; \end{cases}
\]
iii)
\[
c(x) \in \begin{cases} L^{n/2}(\Omega), & \text{if } 1 < p < n/2 \\ L^{n/2+\delta}(\Omega), & \text{if } p = n/2 \\ L^p(\Omega), & \text{if } n/2 < p < \infty. \end{cases}
\]
Suppose $f \in L^p(\Omega)$ and $u$ is a $W^{1,p}_0(\Omega)$ weak solution of
\[
Lu = f(x), \quad x \in \Omega. \tag{3.61}
\]
Then $u$ is in $W^{2,p}(\Omega)$.
Proof. Write equation (3.61) as
\[
-\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} = -\Big[ \sum_{i=1}^n b_i(x) u_{x_i} + c(x) u \Big] + f(x).
\]
Let
\[
L_o u := -\sum_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j}.
\]
By Proposition 3.2.2, we know that the equation $L_o u = 0$ has only the trivial solution, and it follows from Theorem 3.2.3 that the operator $L_o$ is invertible; denote its inverse by $L_o^{-1}$. Then $L_o^{-1}$ is a bounded linear operator from $L^p(\Omega)$ to $W^{2,p}(\Omega)$.

For each $i$ and each positive number $A$, define
\[
b_{iA}(x) = \begin{cases} b_i(x), & \text{if } |b_i(x)| \ge A \\ 0, & \text{elsewhere}, \end{cases}
\]
and let
\[
b_{iB}(x) := b_i(x) - b_{iA}(x).
\]
Define $c_A(x)$ and $c_B(x)$ similarly.

Now we can rewrite equation (3.61) as
\[
u = T_A u + F_A(u, x), \tag{3.62}
\]
where
\[
T_A u = -L_o^{-1}\Big[ \sum_i b_{iA}(x) u_{x_i} + c_A(x) u \Big]
\]
and
\[
F_A(u, x) = -L_o^{-1}\Big[ \sum_i b_{iB}(x) u_{x_i} + c_B(x) u \Big] + L_o^{-1} f(x).
\]
We first show that
\[
F_A(u, \cdot) \in W^{2,p}(\Omega), \quad \forall\, u \in W^{1,p}(\Omega),\ f \in L^p(\Omega). \tag{3.63}
\]
Since $b_{iB}(x)$ and $c_B(x)$ are bounded, one can easily verify that
\[
\sum_i b_{iB}(x) u_{x_i} + c_B(x) u \in L^p(\Omega)
\]
for any $u \in W^{1,p}(\Omega)$. This implies (3.63).
Then we prove that $T_A$ is a contracting operator from $W^{2,p}(\Omega)$ to $W^{2,p}(\Omega)$. In fact, by the Hölder inequality and the Sobolev embedding from $W^{2,p}$ into $W^{1,q}$, one can see that, for any $v \in W^{2,p}(\Omega)$,
\[
\|b_{iA}\, Dv\|_{L^p} \le C \begin{cases} \|b_{iA}\|_{L^n} \|v\|_{W^{2,p}}, & \text{if } 1 < p < n \\ \|b_{iA}\|_{L^{n+\delta}} \|v\|_{W^{2,p}}, & \text{if } p = n \\ \|b_{iA}\|_{L^p} \|v\|_{W^{2,p}}, & \text{if } n < p < \infty, \end{cases} \tag{3.64}
\]
and
\[
\|c_A v\|_{L^p} \le C \begin{cases} \|c_A\|_{L^{n/2}} \|v\|_{W^{2,p}}, & \text{if } 1 < p < n/2 \\ \|c_A\|_{L^{n/2+\delta}} \|v\|_{W^{2,p}}, & \text{if } p = n/2 \\ \|c_A\|_{L^p} \|v\|_{W^{2,p}}, & \text{if } n/2 < p < \infty. \end{cases} \tag{3.65}
\]
Under the conditions of the theorem, the first norms on the right-hand sides of (3.64) and (3.65), i.e. $\|b_{iA}\|_{L^n}$ and $\|c_A\|_{L^{n/2}}$, can be made arbitrarily small if $A$ is sufficiently large. Hence for any $\epsilon > 0$ there exists an $A > 0$ such that
\[
\Big\| \sum_i b_{iA}(x) v_{x_i} + c_A(x) v \Big\|_{L^p} \le \epsilon \|v\|_{W^{2,p}}, \quad \forall\, v \in W^{2,p}(\Omega).
\]
Taking into account that $L_o^{-1}$ is a bounded operator from $L^p(\Omega)$ to $W^{2,p}(\Omega)$, we deduce that $T_A$ is a contracting mapping from $W^{2,p}(\Omega)$ to $W^{2,p}(\Omega)$. It then follows from the Contracting Mapping Theorem that there exists a $v \in W^{2,p}(\Omega)$ such that
\[
v = T_A v + F_A(u, x).
\]
Now $u$ and $v$ satisfy the same equation. By uniqueness, we must have $u = v$. Therefore $u \in W^{2,p}(\Omega)$. This completes the proof of the theorem. $\square$
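The contraction argument used above can be mimicked on a toy finite-dimensional analogue of $u = T_A u + F_A$ (a hypothetical stand-in, not the operators of the proof): when the linear part has norm at most $1/2$, Picard iteration converges geometrically to the unique fixed point.

```python
def picard_solve(apply_T, g, w0, tol=1e-12, max_iter=500):
    """Solve w = T w + g by iteration; converges geometrically when ||T|| <= 1/2."""
    w = w0[:]
    for _ in range(max_iter):
        w_new = [t + gi for t, gi in zip(apply_T(w), g)]
        if max(abs(a - b) for a, b in zip(w_new, w)) < tol:
            return w_new
        w = w_new
    raise RuntimeError("no convergence")

# a 2x2 linear map T with row-sum norm exactly 1/2, hence a contraction
T = [[0.25, 0.25], [0.1, 0.4]]
apply_T = lambda w: [sum(T[i][j] * w[j] for j in range(2)) for i in range(2)]
g = [1.0, 2.0]
w = picard_solve(apply_T, g, [0.0, 0.0])

# check the fixed-point equation w = T w + g
res = [wi - (ti + gi) for wi, ti, gi in zip(w, apply_T(w), g)]
assert max(abs(r) for r in res) < 1e-10
```

The same mechanism drives both proofs: the smallness of the truncated coefficients makes the linear part a contraction, and the unique fixed point must coincide with the solution one starts with.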
3.3.4 Applications to Integral Equations

As another application of the Regularity Lifting Theorem, we consider the following integral equation:
\[
u(x) = \int_{\mathbb{R}^n} \frac{1}{|x-y|^{n-\alpha}}\, u(y)^{\frac{n+\alpha}{n-\alpha}}\, dy, \quad x \in \mathbb{R}^n, \tag{3.66}
\]
where $\alpha$ is any real number satisfying $0 < \alpha < n$.

It arises as an Euler-Lagrange equation for a functional under a constraint in the context of the Hardy-Littlewood-Sobolev inequalities.

It is also closely related to the following family of semilinear partial differential equations:
\[
(-\Delta)^{\alpha/2} u = u^{(n+\alpha)/(n-\alpha)}, \quad x \in \mathbb{R}^n, \ n \ge 3. \tag{3.67}
\]
Actually, one can prove (see [CLO]) that the integral equation (3.66) and the PDE (3.67) are equivalent.

In the special case $\alpha = 2$, (3.67) becomes
\[
-\Delta u = u^{(n+2)/(n-2)}, \quad x \in \mathbb{R}^n,
\]
which is the well-known Yamabe equation.
In the context of the Hardy-Littlewood-Sobolev inequality, it is natural to start with $u \in L^{\frac{2n}{n-\alpha}}(\mathbb{R}^n)$. We will use the Regularity Lifting Theorem to boost $u$ to $L^q(\mathbb{R}^n)$ for any $1 < q < \infty$, and hence to $L^\infty(\mathbb{R}^n)$. More generally, we consider the following equation:
\[
u(x) = \int_{\mathbb{R}^n} \frac{K(y) |u(y)|^{p-1} u(y)}{|x-y|^{n-\alpha}}\, dy \tag{3.68}
\]
for some $1 < p < \infty$. We prove:

Theorem 3.3.4 Let $u$ be a solution of (3.68). Assume that
\[
u \in L^{q_o}(\mathbb{R}^n) \ \text{for some } q_o > \frac{n}{n-\alpha}, \tag{3.69}
\]
\[
\int_{\mathbb{R}^n} \big| K(y) |u(y)|^{p-1} \big|^{\frac{n}{\alpha}}\, dy < \infty, \quad \text{and} \quad |K(y)| \le C\Big( 1 + \frac{1}{|y|^\gamma} \Big) \ \text{for some } \gamma < \alpha. \tag{3.70}
\]
Then $u$ is in $L^q(\mathbb{R}^n)$ for any $1 < q < \infty$.

Remark 3.3.2 i) If
\[
p > \frac{n}{n-\alpha}, \quad |K(x)| \le M, \quad \text{and} \quad u \in L^{\frac{n(p-1)}{\alpha}}(\mathbb{R}^n), \tag{3.71}
\]
then conditions (3.69) and (3.70) are satisfied.

ii) The last condition in (3.71) is somewhat sharp, in the sense that if it is violated, then equation (3.68) with $K(x) \equiv 1$ possesses singular solutions such as
\[
u(x) = \frac{c}{|x|^{\frac{\alpha}{p-1}}}.
\]
Proof of Theorem 3.3.4: Define the linear operator
\[
T_v w = \int_{\mathbb{R}^n} \frac{K(y) |v(y)|^{p-1} w(y)}{|x-y|^{n-\alpha}}\, dy.
\]
For any real number $a > 0$, define
\[
u_a(x) = \begin{cases} u(x), & \text{if } |u(x)| > a \text{ or } |x| > a \\ 0, & \text{otherwise}, \end{cases}
\]
and let $u_b(x) = u(x) - u_a(x)$.

Then, since $u$ satisfies equation (3.68), one can verify that $u_a$ satisfies the equation
\[
u_a = T_{u_a} u_a + g(x) \tag{3.72}
\]
with the function
\[
g(x) = \int_{\mathbb{R}^n} \frac{K(y) |u_b(y)|^{p-1} u_b(y)}{|x-y|^{n-\alpha}}\, dy - u_b(x).
\]
Under the second part of condition (3.70), it is obvious that
\[
g(x) \in L^\infty \cap L^{q_o}.
\]
For any $q > \frac{n}{n-\alpha}$, we first apply the Hardy-Littlewood-Sobolev inequality to obtain
\[
\|T_{u_a} w\|_{L^q} \le C(\alpha, n, q)\, \big\| K |u_a|^{p-1} w \big\|_{L^{\frac{nq}{n+\alpha q}}}. \tag{3.73}
\]
Then, writing $H(x) = K(x) |u_a(x)|^{p-1}$, we apply the Hölder inequality to the right-hand side of the above inequality:
\[
\left( \int_{\mathbb{R}^n} |H(y)|^{\frac{nq}{n+\alpha q}} |w(y)|^{\frac{nq}{n+\alpha q}}\, dy \right)^{\frac{n+\alpha q}{nq}}
\le \left[ \left( \int_{\mathbb{R}^n} |H(y)|^{\frac{nq}{n+\alpha q} \cdot r}\, dy \right)^{\frac{1}{r}} \left( \int_{\mathbb{R}^n} |w(y)|^{\frac{nq}{n+\alpha q} \cdot s}\, dy \right)^{\frac{1}{s}} \right]^{\frac{n+\alpha q}{nq}}
= \left( \int_{\mathbb{R}^n} |H(y)|^{\frac{n}{\alpha}}\, dy \right)^{\frac{\alpha}{n}} \|w\|_{L^q}. \tag{3.74}
\]
Here we have chosen
\[
s = \frac{n+\alpha q}{n} \quad \text{and} \quad r = \frac{n+\alpha q}{\alpha q}.
\]
It follows from (3.73) and (3.74) that
\[
\|T_{u_a} w\|_{L^q} \le C(\alpha, n, q) \left( \int_{\mathbb{R}^n} \big| K(y) |u_a(y)|^{p-1} \big|^{\frac{n}{\alpha}}\, dy \right)^{\frac{\alpha}{n}} \|w\|_{L^q}. \tag{3.75}
\]
By (3.70) and (3.75), we deduce that, for sufficiently large $a$,
\[
\|T_{u_a} w\|_{L^q} \le \frac{1}{2} \|w\|_{L^q}. \tag{3.76}
\]
Applying (3.76) to both the case $q = q_o$ and the case $q = p_o > q_o$, and using the Contracting Mapping Theorem, we see that the equation
\[
w = T_{u_a} w + g(x) \tag{3.77}
\]
has a unique solution in both $L^{q_o}$ and $L^{p_o} \cap L^{q_o}$. From (3.72), $u_a$ is a solution of (3.77) in $L^{q_o}$. Let $w$ be the solution of (3.77) in $L^{p_o} \cap L^{q_o}$; then $w$ is also a solution in $L^{q_o}$. By uniqueness, we must have $u_a = w \in L^{p_o} \cap L^{q_o}$ for any $p_o > \frac{n}{n-\alpha}$, and so does $u$. This completes the proof of the theorem. $\square$
4
Preliminary Analysis on Riemannian Manifolds

4.1 Differentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
4.4.1 Curvature of Plane Curves
4.4.2 Curvature of Surfaces in $\mathbb{R}^3$
4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator
4.5.2 Integrals
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature
4.6 Sobolev Embeddings
According to Einstein, the Universe we live in is a curved space. We call this curved space a manifold, which is a generalization of the flat Euclidean space. Roughly speaking, each neighborhood of a point on a manifold is homeomorphic to an open set in Euclidean space, and the manifold as a whole is obtained by pasting together pieces of Euclidean spaces. In this chapter, we will introduce the most basic concepts of differentiable manifolds, tangent spaces, Riemannian metrics, and curvatures. We will try to be as intuitive and brief as possible. For more rigorous arguments and detailed discussions, please see, for example, the book Riemannian Geometry by do Carmo [Ca].
4.1 Differentiable Manifolds

A simple example of a two-dimensional differentiable manifold is a regular surface in $\mathbb{R}^3$. Intuitively, it is a union of open sets of $\mathbb{R}^2$, organized in such a way that at the intersection of any two open sets the change from one to the other can be made in a differentiable manner. More precisely, we have
Definition 4.1.1 (Differentiable Manifolds)
A differentiable $n$-dimensional manifold $M$ is
(i) a union of open sets, $M = \bigcup_\alpha V_\alpha$, together with a family of homeomorphisms $\phi_\alpha$ from $V_\alpha$ to an open set $\phi_\alpha(V_\alpha)$ in $\mathbb{R}^n$, such that
(ii) for any pair $\alpha$ and $\beta$ with $V_\alpha \cap V_\beta = U \ne \emptyset$, the images of the intersection, $\phi_\alpha(U)$ and $\phi_\beta(U)$, are open sets in $\mathbb{R}^n$ and the mappings $\phi_\beta \circ \phi_\alpha^{-1}$ are differentiable;
(iii) the family $\{V_\alpha, \phi_\alpha\}$ is maximal relative to conditions (i) and (ii).
A manifold $M$ is a union of open sets. In each open set $U \subset M$, one can define a coordinate chart. In fact, let $\phi_U$ be the homeomorphism from $U$ to $\phi_U(U)$ in $\mathbb{R}^n$; for each $p \in U$, if $\phi_U(p) = (x_1, x_2, \ldots, x_n)$ in $\mathbb{R}^n$, then we define the coordinates of $p$ to be $(x_1, x_2, \ldots, x_n)$. These are called local coordinates of $p$, and $(U, \phi_U)$ is called a coordinate chart. If a family of coordinate charts covering $M$ is arranged so that the transition from one chart to another is differentiable, as in condition (ii), then it is called a differentiable structure on $M$. Given a differentiable structure on $M$, one can easily complete it into a maximal one by taking the union of all coordinate charts of $M$ that are compatible (in the sense of condition (ii)) with the ones in the given structure.
Example 1. The $n$-dimensional unit sphere
\[
S^n = \{ x \in \mathbb{R}^{n+1} \mid x_1^2 + \cdots + x_{n+1}^2 = 1 \}.
\]
Take the unit circle $S^1$ for instance; we can use the following four coordinate charts:
\[
V_1 = \{ x \in S^1 \mid x_2 > 0 \}, \quad \phi_1(x) = x_1,
\]
\[
V_2 = \{ x \in S^1 \mid x_2 < 0 \}, \quad \phi_2(x) = x_1,
\]
\[
V_3 = \{ x \in S^1 \mid x_1 > 0 \}, \quad \phi_3(x) = x_2,
\]
\[
V_4 = \{ x \in S^1 \mid x_1 < 0 \}, \quad \phi_4(x) = x_2.
\]
Obviously, $S^1$ is the union of the four open sets $V_1$, $V_2$, $V_3$, and $V_4$. On the intersection of any two open sets, say on $V_1 \cap V_3$, we have
\[
x_2 = \sqrt{1 - x_1^2}, \ x_1 > 0; \qquad x_1 = \sqrt{1 - x_2^2}, \ x_2 > 0.
\]
Both are differentiable functions, and the same is true on the other intersections. Therefore, with this differentiable structure, $S^1$ is a one-dimensional differentiable manifold.
Example 2. The $n$-dimensional real projective space $P^n(\mathbb{R})$. It is the set of straight lines of $\mathbb{R}^{n+1}$ which pass through the origin $(0, \ldots, 0) \in \mathbb{R}^{n+1}$, or, equivalently, the quotient space of $\mathbb{R}^{n+1} \setminus \{0\}$ by the equivalence relation
\[
(x_1, \ldots, x_{n+1}) \sim (\lambda x_1, \ldots, \lambda x_{n+1}), \quad \lambda \in \mathbb{R}, \ \lambda \ne 0.
\]
We denote the points in $P^n(\mathbb{R})$ by $[x]$. Observe that, if $x_i \ne 0$, then
\[
[x_1, \ldots, x_{n+1}] = [x_1/x_i, \ldots, x_{i-1}/x_i, 1, x_{i+1}/x_i, \ldots, x_{n+1}/x_i].
\]
For $i = 1, 2, \ldots, n+1$, let
\[
V_i = \{ [x] \mid x_i \ne 0 \}.
\]
Apparently,
\[
P^n(\mathbb{R}) = \bigcup_{i=1}^{n+1} V_i.
\]
Geometrically, $V_i$ is the set of straight lines of $\mathbb{R}^{n+1}$ which pass through the origin and do not belong to the hyperplane $x_i = 0$.

Define
\[
\phi_i([x]) = (\xi^i_1, \ldots, \xi^i_{i-1}, \xi^i_{i+1}, \ldots, \xi^i_{n+1}),
\]
where $\xi^i_k = x_k / x_i$ ($k \ne i$) and $1 \le i \le n+1$.

On the intersection $V_i \cap V_j$, the changes of coordinates
\[
\xi^j_k = \frac{\xi^i_k}{\xi^i_j} \ (k \ne i, j), \qquad \xi^j_i = \frac{1}{\xi^i_j}
\]
are obviously smooth; thus $\{V_i, \phi_i\}$ is a differentiable structure on $P^n(\mathbb{R})$, and hence $P^n(\mathbb{R})$ is a differentiable manifold.
4.2 Tangent Spaces

We know that at every point of a regular curve or a regular surface, there exists a tangent line or a tangent plane. Similarly, given a differentiable structure on a manifold, we can define a tangent space at each point. For a regular surface $S$ in $\mathbb{R}^3$, at each given point $p$, the tangent plane is the space of all tangent vectors, and each tangent vector is defined as a "velocity" in the ambient space $\mathbb{R}^3$. Since on a differentiable manifold there is no such ambient space, we have to find a characteristic property of tangent vectors which does not involve the concept of velocity. To this end, let's consider a smooth curve in $\mathbb{R}^n$:
\[
\gamma(t) = (x_1(t), \ldots, x_n(t)), \quad t \in (-\epsilon, \epsilon), \ \text{with } \gamma(0) = p.
\]
The tangent vector of $\gamma$ at the point $p$ is
\[
v = (x_1'(0), \ldots, x_n'(0)).
\]
Let $f(x)$ be a smooth function near $p$. Then the directional derivative of $f$ along the vector $v \in \mathbb{R}^n$ can be expressed as
\[
\frac{d}{dt}(f \circ \gamma)\Big|_{t=0} = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p)\, \frac{dx_i}{dt}(0) = \Big( \sum_i x_i'(0)\, \frac{\partial}{\partial x_i} \Big) f.
\]
Hence the directional derivative is an operator on differentiable functions that depends uniquely on the vector $v$. We can identify $v$ with this operator and thus generalize the concept of tangent vectors to differentiable manifolds $M$.

Definition 4.2.1 A differentiable function $\gamma : (-\epsilon, \epsilon) \to M$ is called a (differentiable) curve in $M$.

Let $\mathcal{D}$ be the set of all functions that are differentiable at the point $p = \gamma(0) \in M$. The tangent vector to the curve $\gamma$ at $t = 0$ is the operator (or function) $\gamma'(0) : \mathcal{D} \to \mathbb{R}$ given by
\[
\gamma'(0) f = \frac{d}{dt}(f \circ \gamma)\Big|_{t=0}.
\]
A tangent vector at $p$ is the tangent vector at $t = 0$ of some curve $\gamma$ with $\gamma(0) = p$. The set of all tangent vectors at $p$ is called the tangent space at $p$ and is denoted by $T_p M$.
Let $(x_1, \ldots, x_n)$ be the coordinates in a local chart $(U, \phi)$ that covers $p$, with $\phi(p) = 0 \in \phi(U)$. Let $\gamma(t) = (x_1(t), \ldots, x_n(t))$ be a curve in $M$ with $\gamma(0) = p$. Then
\[
\gamma'(0) f = \frac{d}{dt} f(\gamma(t))\Big|_{t=0} = \frac{d}{dt} f(x_1(t), \ldots, x_n(t))\Big|_{t=0} = \sum_i x_i'(0)\, \frac{\partial f}{\partial x_i} = \Big( \sum_i x_i'(0)\, \frac{\partial}{\partial x_i}\Big|_p \Big) f.
\]
This shows that the vector $\gamma'(0)$ can be expressed as a linear combination of the $\frac{\partial}{\partial x_i}\big|_p$, i.e.
\[
\gamma'(0) = \sum_i x_i'(0)\, \frac{\partial}{\partial x_i}\Big|_p.
\]
Here $\frac{\partial}{\partial x_i}\big|_p$ is the tangent vector at $p$ of the "coordinate curve"
\[
x_i \mapsto \phi^{-1}(0, \ldots, 0, x_i, 0, \ldots, 0),
\]
and
\[
\frac{\partial}{\partial x_i}\Big|_p f = \frac{\partial}{\partial x_i} f\big( \phi^{-1}(0, \ldots, 0, x_i, 0, \ldots, 0) \big)\Big|_{x_i = 0}.
\]
One can see that the tangent space $T_p M$ is an $n$-dimensional linear space with the basis
\[
\frac{\partial}{\partial x_1}\Big|_p, \ \ldots, \ \frac{\partial}{\partial x_n}\Big|_p.
\]
The dual space of the tangent space $T_p M$ is called the cotangent space and is denoted by $T_p^* M$. We can realize it as follows. We first introduce the differential of a differentiable mapping.
Definition 4.2.2 Let $M$ and $N$ be two differentiable manifolds and let $f : M \to N$ be a differentiable mapping. For every $p \in M$ and each $v \in T_p M$, choose a differentiable curve $\gamma : (-\epsilon, \epsilon) \to M$ with $\gamma(0) = p$ and $\gamma'(0) = v$. Let $\beta = f \circ \gamma$. Define the mapping
\[
df_p : T_p M \to T_{f(p)} N
\]
by
\[
df_p(v) = \beta'(0).
\]
We call $df_p$ the differential of $f$. Obviously, it is a linear mapping from the tangent space $T_p M$ to the tangent space $T_{f(p)} N$, and one can show that it does not depend on the choice of $\gamma$ (see [Ca]). When $N = \mathbb{R}^1$, the collection of all such differentials forms the cotangent space. From the definition, one can derive that
\[
df_p\Big( \frac{\partial}{\partial x_i}\Big|_p \Big) = \frac{\partial}{\partial x_i}\Big|_p f.
\]
In particular, for the differential $(dx_i)_p$ of the coordinate function $x_i$, we have
\[
(dx_i)_p\Big( \frac{\partial}{\partial x_j}\Big|_p \Big) = \delta_{ij}.
\]
Consequently,
\[
df_p = \sum_i \frac{\partial f}{\partial x_i}\Big|_p (dx_i)_p.
\]
From here we can see that
\[
(dx_1)_p, \ (dx_2)_p, \ \ldots, \ (dx_n)_p
\]
form a basis of the cotangent space $T_p^* M$ at the point $p$.

Definition 4.2.3 The tangent space of $M$ is
\[
TM = \bigcup_{p \in M} T_p M,
\]
and the cotangent space of $M$ is
\[
T^* M = \bigcup_{p \in M} T_p^* M.
\]
4.3 Riemannian Metrics

To measure arc length, area, or volume on a manifold, we need to introduce some kind of measurement, or 'metric'. For a surface $S$ in the three-dimensional Euclidean space $\mathbb{R}^3$, there is a natural way of measuring the length of vectors tangent to $S$: simply take their length as vectors in the ambient space $\mathbb{R}^3$. This can be expressed in terms of the inner product $\langle\,,\,\rangle$ in $\mathbb{R}^3$. Given a curve $\gamma(t)$ on $S$ for $t \in [a, b]$, its length is the integral
\[
\int_a^b |\gamma'(t)|\, dt,
\]
where the length of the velocity vector $\gamma'(t)$ is given by the inner product in $\mathbb{R}^3$:
\[
|\gamma'(t)| = \sqrt{\langle \gamma'(t), \gamma'(t) \rangle}.
\]
The inner product $\langle\,,\,\rangle$ enables us to measure not only the lengths of curves on $S$, but also the areas of domains on $S$, as well as other geometric quantities.

Now we generalize this concept to differentiable manifolds without using the ambient space. More precisely, we have
Definition 4.3.1 A Riemannian metric on a differentiable manifold $M$ is a correspondence which associates to each point $p$ of $M$ an inner product $\langle\,,\,\rangle_p$ on the tangent space $T_p M$, which varies differentiably, that is,
\[
g_{ij}(q) \equiv \Big\langle \frac{\partial}{\partial x_i}\Big|_q, \ \frac{\partial}{\partial x_j}\Big|_q \Big\rangle_q
\]
is a differentiable function of $q$ for all $q$ near $p$.

A differentiable manifold with a given Riemannian metric is called a Riemannian manifold.

It is clear that this definition does not depend on the choice of coordinate system. Also, one can prove (see [Ca]):

Proposition 4.3.1 A differentiable manifold $M$ has a Riemannian metric.
Now we show how a Riemannian metric can be used to measure the length of a curve on a manifold $M$.

Let $I$ be an open interval in $\mathbb{R}^1$, and let $\gamma : I \to M$ be a differentiable mapping (curve) on $M$. For each $t \in I$, $d\gamma$ is a linear mapping from $T_t I$ to $T_{\gamma(t)} M$, and $\gamma'(t) \equiv \frac{d\gamma}{dt} \equiv d\gamma(\frac{d}{dt})$ is a tangent vector in $T_{\gamma(t)} M$. The restriction of the curve $\gamma$ to a closed interval $[a, b] \subset I$ is called a segment. We define the length of the segment by
\[
\int_a^b \langle \gamma'(t), \gamma'(t) \rangle^{1/2}\, dt.
\]
Here the arc length element is
\[
ds = \langle \gamma'(t), \gamma'(t) \rangle^{1/2}\, dt. \tag{4.1}
\]
As we mentioned in the previous section, let $(x_1, \ldots, x_n)$ be the coordinates in a local chart $(U, \phi)$ that covers $p = \gamma(t) = (x_1(t), \ldots, x_n(t))$. Then
\[
\gamma'(t) = \sum_i x_i'(t)\, \frac{\partial}{\partial x_i}\Big|_p.
\]
Consequently,
\[
ds^2 = \langle \gamma'(t), \gamma'(t) \rangle\, dt^2 = \sum_{i,j=1}^n \Big\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \Big\rangle_p\, x_i'(t)\, dt \ x_j'(t)\, dt = \sum_{i,j=1}^n g_{ij}(p)\, dx_i\, dx_j.
\]
This expresses the length element $ds$ in terms of the metric $g_{ij}$.
To derive the formula for the volume of a region (an open connected subset) $D$ in $M$ in terms of the metric $g_{ij}$, let's begin with the volume of the parallelepiped formed by the tangent vectors. Let $\{e_1, \ldots, e_n\}$ be an orthonormal basis of $T_p M$, and let
\[
\frac{\partial}{\partial x_i}(p) = \sum_j a_{ij} e_j, \quad i = 1, \ldots, n.
\]
Then
\[
g_{ik}(p) = \Big\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_k} \Big\rangle_p = \sum_{j,l} a_{ij} a_{kl} \langle e_j, e_l \rangle = \sum_j a_{ij} a_{kj}. \tag{4.2}
\]
We know that the volume of the parallelepiped formed by $\frac{\partial}{\partial x_1}, \ldots, \frac{\partial}{\partial x_n}$ is the determinant $\det(a_{ij})$, and by virtue of (4.2), it equals
\[
\sqrt{\det(g_{ij})}.
\]
Assume that the region $D$ is covered by a coordinate chart $(U, \phi)$ with coordinates $(x_1, \ldots, x_n)$. From the above arguments, it is natural to define the volume of $D$ as
\[
\mathrm{vol}(D) = \int_{\phi(U)} \sqrt{\det(g_{ij})}\, dx_1 \cdots dx_n.
\]
A trivial example is $M = \mathbb{R}^n$, the $n$-dimensional Euclidean space, with
\[
\frac{\partial}{\partial x_i} = e_i = (0, \ldots, 1, \ldots, 0).
\]
The metric is given by
\[
g_{ij} = \langle e_i, e_j \rangle = \delta_{ij}.
\]
A less trivial example is $S^2$, the two-dimensional unit sphere with the standard metric, i.e., with the metric inherited from the ambient space $\mathbb{R}^3 = \{(x, y, z)\}$. We will use the usual spherical coordinates $(\theta, \psi)$ with
\[
x = \sin\theta\cos\psi, \quad y = \sin\theta\sin\psi, \quad z = \cos\theta
\]
to illustrate how we use the measurement (inner product) in $\mathbb{R}^3$ to derive the expression of the standard metric $g_{ij}$ in terms of the local coordinates $(\theta, \psi)$ on $S^2$.

Let $p$ be a point on $S^2$ covered by a local coordinate chart $(U, \phi)$ with coordinates $(\theta, \psi)$. We first derive the expressions for $\frac{\partial}{\partial\theta}\big|_p$ and $\frac{\partial}{\partial\psi}\big|_p$, the tangent vectors at the point $p = \phi^{-1}(\theta, \psi)$ of the coordinate curves $\psi = $ constant and $\theta = $ constant, respectively.

In the coordinates of $\mathbb{R}^3$, we have
\[
\frac{\partial}{\partial\theta}\Big|_p = \Big( \frac{\partial x}{\partial\theta}, \frac{\partial y}{\partial\theta}, \frac{\partial z}{\partial\theta} \Big) = (\cos\theta\cos\psi, \ \cos\theta\sin\psi, \ -\sin\theta),
\]
and
\[
\frac{\partial}{\partial\psi}\Big|_p = \Big( \frac{\partial x}{\partial\psi}, \frac{\partial y}{\partial\psi}, \frac{\partial z}{\partial\psi} \Big) = (-\sin\theta\sin\psi, \ \sin\theta\cos\psi, \ 0).
\]
It follows that
\[
g_{11}(p) = \Big\langle \frac{\partial}{\partial\theta}, \frac{\partial}{\partial\theta} \Big\rangle_p = \cos^2\theta\cos^2\psi + \cos^2\theta\sin^2\psi + \sin^2\theta = 1,
\]
\[
g_{12}(p) = g_{21}(p) = \Big\langle \frac{\partial}{\partial\theta}, \frac{\partial}{\partial\psi} \Big\rangle_p = 0,
\]
and
\[
g_{22}(p) = \Big\langle \frac{\partial}{\partial\psi}, \frac{\partial}{\partial\psi} \Big\rangle_p = \sin^2\theta.
\]
Consequently, the length element is
\[
ds = \sqrt{g_{11}\, d\theta^2 + 2 g_{12}\, d\theta\, d\psi + g_{22}\, d\psi^2} = \sqrt{d\theta^2 + \sin^2\theta\, d\psi^2},
\]
and the area element is
\[
dA = \sqrt{\det(g_{ij})}\, d\theta\, d\psi = \sin\theta\, d\theta\, d\psi.
\]
These conform with our knowledge of length and area in spherical coordinates.
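The computation above can be checked numerically from the embedding. The following sketch (an illustration under the parametrization used above, not part of the text) forms $g_{ij}$ as ambient dot products of the coordinate tangent vectors and integrates the area element $\sin\theta\, d\theta\, d\psi$ over the sphere:

```python
import math

def metric_s2(theta, psi):
    """g_ij on the unit sphere from the embedding (sin t cos p, sin t sin p, cos t)."""
    dtheta = (math.cos(theta) * math.cos(psi), math.cos(theta) * math.sin(psi), -math.sin(theta))
    dpsi = (-math.sin(theta) * math.sin(psi), math.sin(theta) * math.cos(psi), 0.0)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(dtheta, dtheta), dot(dtheta, dpsi), dot(dpsi, dpsi)

g11, g12, g22 = metric_s2(0.7, 1.3)
assert abs(g11 - 1.0) < 1e-12 and abs(g12) < 1e-12
assert abs(g22 - math.sin(0.7) ** 2) < 1e-12

# total area: integrate sqrt(det g) = sin(theta) over [0, pi] x [0, 2*pi]
N = 1000
h = math.pi / N
area = 2 * math.pi * sum(math.sin((i + 0.5) * h) * h for i in range(N))
print(area)  # close to 4*pi = 12.566...
```

The recovered area $4\pi$ is exactly the classical surface area of the unit sphere, confirming the formula $dA = \sqrt{\det(g_{ij})}\, d\theta\, d\psi$.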
4.4 Curvature

4.4.1 Curvature of Plane Curves

Consider curves on a plane. Intuitively, we feel that a straight line does not bend at all, and that a circle of radius one bends more than a circle of radius two. To measure the extent of bending of a curve, or the amount it deviates from being straight, we introduce curvature.

Let $p$ be a point on a curve, and let $T(p)$ be the unit tangent vector at $p$. Let $q$ be a nearby point, let $\Delta\alpha$ be the amount of change of the angle from $T(p)$ to $T(q)$, and let $\Delta s$ be the arc length between $p$ and $q$ on the curve. We define the curvature at $p$ as
\[
k(p) = \lim_{q \to p} \frac{\Delta\alpha}{\Delta s} = \frac{d\alpha}{ds}(p).
\]
If a plane curve is given parametrically as $c(t) = (x(t), y(t))$, then the unit tangent vector at $c(t)$ is
\[
T(t) = \frac{(x'(t), y'(t))}{\sqrt{[x'(t)]^2 + [y'(t)]^2}},
\]
and
\[
ds = \sqrt{[x'(t)]^2 + [y'(t)]^2}\, dt.
\]
By a straightforward calculation, one can verify that
\[
k(c(t)) = \frac{|dT|}{ds} = \frac{x'(t) y''(t) - y'(t) x''(t)}{\big\{ [x'(t)]^2 + [y'(t)]^2 \big\}^{3/2}}.
\]
If a plane curve is given explicitly as $y = f(x)$, then, replacing $t$ by $x$ and $y(t)$ by $f(x)$ in the above formula, we find that the curvature at the point $(x, f(x))$ is
\[
k(x, f(x)) = \frac{f''(x)}{\big\{ 1 + [f'(x)]^2 \big\}^{3/2}}. \tag{4.3}
\]
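As a quick numerical sanity check of the parametric formula (an illustration, not part of the text): a circle of radius $R$, parametrized by $c(t) = (R\cos t, R\sin t)$, has curvature $1/R$ at every point.

```python
import math

def curvature(xp, yp, xpp, ypp):
    """k = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2) for a parametric plane curve."""
    return (xp * ypp - yp * xpp) / (xp ** 2 + yp ** 2) ** 1.5

# circle of radius R: c(t) = (R cos t, R sin t)
R, t = 2.0, 0.9
k = curvature(-R * math.sin(t), R * math.cos(t), -R * math.cos(t), -R * math.sin(t))
assert abs(k - 1.0 / R) < 1e-12   # a circle of radius 2 bends with curvature 1/2
```

This matches the intuition stated above: the smaller the radius, the larger the curvature.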
4.4.2 Curvature of Surfaces in $\mathbb{R}^3$

1. Principal Curvature

Let $S$ be a surface in $\mathbb{R}^3$, let $p$ be a point on $S$, and let $N(p)$ be a normal vector of $S$ at $p$. Given a tangent vector $T(p)$ at $p$, consider the intersection of the surface with the plane containing $T(p)$ and $N(p)$. This intersection is a plane curve; the normal curvature is the curvature of this curve taken with a sign: positive if the curve turns in the same direction as the surface's chosen normal, and negative otherwise. The normal curvature varies as the tangent vector $T(p)$ changes. The maximum and minimum values of the normal curvature at the point $p$ are called principal curvatures, and are usually denoted by $k_1(p)$ and $k_2(p)$.
2. Gaussian Curvature

One way to measure the extent of bending of a surface is by the Gaussian curvature. It is defined as the product of the two principal curvatures:
\[
K(p) = k_1(p)\, k_2(p).
\]
From this, one can see that, no matter which side the normal is chosen on (say, for a closed surface, one may choose outer normals or inner normals), the Gaussian curvature of a sphere is positive, that of a one-sheeted hyperboloid is negative, and that of a torus or a plane is zero. A sphere of radius $R$ has Gaussian curvature $\frac{1}{R^2}$.

Let $f(x, y)$ be a differentiable function. Consider a surface $S$ in $\mathbb{R}^3$ defined as the graph of $x_3 = f(x_1, x_2)$. Assume that the origin is on $S$ and that the $x_1 x_2$-plane is tangent to $S$, that is,
\[
f(0,0) = 0 \quad \text{and} \quad f_1(0,0) = 0 = f_2(0,0),
\]
where, for simplicity, we write $f_i = \frac{\partial f}{\partial x_i}$ and use $f_{ij}$ to denote $\frac{\partial^2 f}{\partial x_i \partial x_j}$.
We calculate the Gaussian curvature of $S$ at the origin $0 = (0,0,0)$.

Let $u = (u_1, u_2)$ be a unit vector in the tangent plane. Then, by (4.3), the normal curvature $k_n(u)$ in the direction $u$ is
\[
k_n(u) = \frac{\frac{\partial^2 f}{\partial u^2}}{\Big[ \big( \frac{\partial f}{\partial u} \big)^2 + 1 \Big]^{3/2}}(0),
\]
where $\frac{\partial f}{\partial u}$ and $\frac{\partial^2 f}{\partial u^2}$ are the first and second directional derivatives of $f$ in the direction $u$. Using the assumption that $f_1(0) = 0 = f_2(0)$, we arrive at
\[
k_n(u) = \big( f_{11} u_1^2 + 2 f_{12} u_1 u_2 + f_{22} u_2^2 \big)(0) = u^T F u,
\]
where
\[
F = \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix}(0),
\]
and $u^T$ denotes the transpose of $u$.

Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of the matrix $F$, with corresponding unit eigenvectors $e_1$ and $e_2$, i.e.
\[
F e_i = \lambda_i e_i, \quad i = 1, 2. \tag{4.4}
\]
Since the matrix $F$ is symmetric, the two eigenvectors $e_1$ and $e_2$ are orthogonal, and it is obvious that
\[
e_i^T F e_i = \lambda_i, \quad i = 1, 2.
\]
Let $\theta$ be the angle between $u$ and $e_1$. Then we can express
\[
u = \cos\theta\, e_1 + \sin\theta\, e_2.
\]
It follows that
\[
k_n(u) = u^T F u = \cos^2\theta\, \lambda_1 + \sin^2\theta\, \lambda_2.
\]
Assume that $\lambda_1 \le \lambda_2$. Then
\[
\lambda_1 \le \lambda_1 + \sin^2\theta\, (\lambda_2 - \lambda_1) = k_n(u) = \cos^2\theta\, (\lambda_1 - \lambda_2) + \lambda_2 \le \lambda_2.
\]
Hence $\lambda_1$ and $\lambda_2$ are the minimum and maximum values of the normal curvature, and therefore they are the principal curvatures $k_1$ and $k_2$.

From (4.4), we see that $\lambda_1$ and $\lambda_2$ are the solutions of the quadratic equation
\[
\begin{vmatrix} f_{11} - \lambda & f_{12} \\ f_{21} & f_{22} - \lambda \end{vmatrix} = \lambda^2 - (f_{11} + f_{22})\lambda + (f_{11} f_{22} - f_{12} f_{21}) = 0.
\]
Therefore, by definition, the Gaussian curvature at $0$ is
\[
K(0) = k_1 k_2 = \lambda_1 \lambda_2 = (f_{11} f_{22} - f_{12} f_{21})(0).
\]
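The formula $K(0) = (f_{11} f_{22} - f_{12} f_{21})(0)$ can be tested numerically on the lower cap of a sphere, whose Gaussian curvature should be $1/R^2$ as noted earlier. A hypothetical sketch using central differences for the second derivatives:

```python
import math

def gaussian_curvature_at_origin(f, h=1e-3):
    """K(0) = f11*f22 - f12*f21 at a point where the tangent plane is horizontal;
    second derivatives are approximated by central differences."""
    f11 = (f(h, 0) - 2 * f(0, 0) + f(-h, 0)) / h ** 2
    f22 = (f(0, h) - 2 * f(0, 0) + f(0, -h)) / h ** 2
    f12 = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h ** 2)
    return f11 * f22 - f12 * f12   # F is symmetric, so f21 = f12

# lower cap of a sphere of radius R, touching the x1x2-plane at the origin
R = 3.0
cap = lambda x, y: R - math.sqrt(R * R - x * x - y * y)
print(gaussian_curvature_at_origin(cap))  # approx 1/R^2 = 0.111...
```

The graph of the cap satisfies $f(0,0) = 0$ and $f_1(0,0) = f_2(0,0) = 0$, exactly the normalization assumed in the derivation above.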
4.4.3 Curvature on Riemannian Manifolds

On a general $n$-dimensional Riemannian manifold $M$, there are several kinds of curvatures. Before introducing them, we need a few preparations.

1. Vector Fields and Brackets

A vector field $X$ on $M$ is a correspondence that assigns to each point $p \in M$ a vector $X(p)$ in the tangent space $T_p M$; it is a mapping from $M$ to $TM$. If this mapping is differentiable, then we say the vector field $X$ is differentiable. In local coordinates, we can express
\[
X(p) = \sum_{i=1}^n a_i(p)\, \frac{\partial}{\partial x_i}.
\]
Applied to a smooth function $f$ on $M$, it is the directional derivative operator in the direction $X(p)$:
\[
(Xf)(p) = \sum_{i=1}^n a_i(p)\, \frac{\partial f}{\partial x_i}.
\]

Definition 4.4.1 For two differentiable vector fields $X$ and $Y$, define the Lie bracket as
\[
[X, Y] = XY - YX.
\]
If $X(p) = \sum_{i=1}^n a_i(p) \frac{\partial}{\partial x_i}$ and $Y(p) = \sum_{j=1}^n b_j(p) \frac{\partial}{\partial x_j}$, then
\[
[X, Y] f = XYf - YXf = \sum_{i,j=1}^n \Big( a_i \frac{\partial b_j}{\partial x_i} - b_i \frac{\partial a_j}{\partial x_i} \Big) \frac{\partial f}{\partial x_j}.
\]
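The cancellation of second derivatives in $XY - YX$ can be observed numerically. A hypothetical sketch (not part of the text) represents a vector field as an operator on functions, using central differences for the partial derivatives, and checks the bracket of the plane rotation field against $\partial/\partial x$:

```python
import math

def D(g, p, i, h=1e-4):
    """Central-difference partial derivative of g at p in direction i."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (g(q1) - g(q2)) / (2 * h)

def make_X(a):
    """Vector field as an operator: (Xf)(p) = sum_i a_i(p) * df/dx_i(p)."""
    def X(f):
        return lambda p: sum(a[i](p) * D(f, p, i) for i in range(len(a)))
    return X

f = lambda p: math.sin(p[0]) * p[1] ** 2
a = [lambda p: -p[1], lambda p: p[0]]   # rotation field X = -y d/dx + x d/dy
b = [lambda p: 1.0, lambda p: 0.0]      # Y = d/dx
X, Y = make_X(a), make_X(b)
bracket = lambda f: lambda p: X(Y(f))(p) - Y(X(f))(p)

p = [0.4, 0.8]
# for these fields the formula gives [X, Y] = -d/dy, so [X,Y]f = -2*y*sin(x)
exact = -2 * p[1] * math.sin(p[0])
assert abs(bracket(f)(p) - exact) < 1e-4
```

Although $XYf$ and $YXf$ each contain second derivatives of $f$, those terms cancel, and the bracket is again a first-order operator, i.e. a vector field, as the component formula above shows.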
2. Covariant Derivatives and Riemannian Connections

To be more intuitive, we begin with a surface $S$ in $\mathbb{R}^3$. Let $c : I \to S$ be a curve on $S$, and let $V(t)$ be a vector field along $c(t)$, $t \in I$. One can see that $\frac{dV}{dt}$ is a vector in the ambient space $\mathbb{R}^3$, but it may not belong to the tangent plane of $S$. To consider the rate of change of $V(t)$ restricted to the surface $S$, we take the orthogonal projection of $\frac{dV}{dt}$ onto the tangent plane and denote it by $\frac{DV}{dt}$. We call this the covariant derivative of $V(t)$ on $S$, that is, the derivative as viewed by two-dimensional creatures on $S$.

On an $n$-dimensional differentiable manifold $M$, the covariant derivative is defined by affine connections.

Let $\mathcal{D}(M)$ be the set of all smooth vector fields on $M$.

Definition 4.4.2 Let $X, Y, Z \in \mathcal{D}(M)$, and let $f$ and $g$ be smooth functions on $M$. An affine connection $D$ on a differentiable manifold $M$ is a mapping from $\mathcal{D}(M) \times \mathcal{D}(M)$ to $\mathcal{D}(M)$ which maps $(X, Y)$ to $D_X Y$ and satisfies
1) $D_{fX+gY} Z = f D_X Z + g D_Y Z$;
2) $D_X(Y+Z) = D_X Y + D_X Z$;
3) $D_X(fY) = f D_X Y + X(f)\, Y$.

Given an affine connection $D$ on $M$, for a vector field $V(t) = Y(c(t))$ along a differentiable curve $c : I \to M$, define the covariant derivative of $V$ along $c(t)$ as
\[
\frac{DV}{dt} = D_{\frac{dc}{dt}} Y.
\]
In local coordinates, let
\[
c(t) = (x_1(t), \ldots, x_n(t)) \quad \text{and} \quad V(t) = \sum_{i=1}^n v_i(t)\, \frac{\partial}{\partial x_i}.
\]
Then, by the properties of the affine connection, one can express
\[
\frac{DV}{dt} = \sum_i \frac{dv_i}{dt}\, \frac{\partial}{\partial x_i} + \sum_{i,j} \frac{dx_i}{dt}\, v_j\, D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j}. \tag{4.5}
\]
Recall that, in a Euclidean space with inner product $\langle\,,\,\rangle$, a vector field $V(t)$ along a curve $c(t)$ is parallel if and only if $\frac{dV}{dt} = 0$; and for a pair of parallel vector fields $X$ and $Y$ along $c(t)$, we have $\langle X, Y \rangle = $ constant. Similarly, we have

Definition 4.4.3 Let $M$ be a differentiable manifold with an affine connection $D$. A vector field $V(t)$ along a curve $c(t) \in M$ is parallel if
\[
\frac{DV}{dt} = 0.
\]
Definition 4.4.4 Let $M$ be a differentiable manifold with an affine connection $D$ and a Riemannian metric $\langle\,,\,\rangle$. If for any pair of parallel vector fields $X$ and $Y$ along any smooth curve $c(t)$ we have
\[
\langle X, Y \rangle_{c(t)} = \text{constant},
\]
then we say that $D$ is compatible with the metric $\langle\,,\,\rangle$.

The following fundamental theorem of Levi-Civita guarantees the existence of such an affine connection on a Riemannian manifold.

Theorem 4.4.1 Given a Riemannian manifold $M$ with metric $\langle\,,\,\rangle$, there exists a unique connection $D$ on $M$ that is compatible with $\langle\,,\,\rangle$. Moreover, this connection is symmetric, i.e.
\[
D_X Y - D_Y X = [X, Y].
\]
We skip the proof of the theorem; interested readers may see the book of do Carmo [Ca] (page 55).

In local coordinates $(x_1, \ldots, x_n)$, this unique Riemannian connection can be expressed as
\[
D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j} = \sum_k \Gamma^k_{ij}\, \frac{\partial}{\partial x_k},
\]
where
\[
\Gamma^k_{ij} = \frac{1}{2} \sum_m \Big( \frac{\partial g_{jm}}{\partial x_i} + \frac{\partial g_{mi}}{\partial x_j} - \frac{\partial g_{ij}}{\partial x_m} \Big) g^{mk}
\]
are the Christoffel symbols, $g_{ij} = \big\langle \frac{\partial}{\partial x_i}, \frac{\partial}{\partial x_j} \big\rangle$, and $(g^{ij})$ is the inverse matrix of $(g_{ij})$.
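The Christoffel formula is concrete enough to evaluate numerically. A hypothetical sketch (not part of the text) computes $\Gamma^k_{ij}$ for a two-dimensional metric by central differences and checks the classical values for the round sphere, $\Gamma^\theta_{\psi\psi} = -\sin\theta\cos\theta$ and $\Gamma^\psi_{\theta\psi} = \cot\theta$:

```python
import math

def christoffel_2d(g, p, h=1e-5):
    """Gamma[k][i][j] = (1/2) sum_m (d_i g_jm + d_j g_mi - d_m g_ij) g^{mk}
    for a 2D metric g(p) = [[g11, g12], [g21, g22]]."""
    def dg(i):  # partial derivative of the metric matrix in direction i
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        G1, G2 = g(q1), g(q2)
        return [[(G1[a][b] - G2[a][b]) / (2 * h) for b in range(2)] for a in range(2)]
    d = [dg(0), dg(1)]
    G = g(p)
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    ginv = [[G[1][1] / det, -G[0][1] / det], [-G[1][0] / det, G[0][0] / det]]
    return [[[0.5 * sum((d[i][j][m] + d[j][m][i] - d[m][i][j]) * ginv[m][k]
                        for m in range(2))
              for j in range(2)] for i in range(2)] for k in range(2)]

# round sphere S^2 in coordinates (theta, psi): g = diag(1, sin^2 theta)
g_sphere = lambda p: [[1.0, 0.0], [0.0, math.sin(p[0]) ** 2]]
theta = 0.9
Gamma = christoffel_2d(g_sphere, [theta, 2.0])
assert abs(Gamma[0][1][1] + math.sin(theta) * math.cos(theta)) < 1e-6
assert abs(Gamma[1][0][1] - math.cos(theta) / math.sin(theta)) < 1e-6
```

Note that only derivatives of the metric enter, which reflects the theorem: the connection is determined entirely by $g$.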
3. Curvature

Definition 4.4.5 Let $M$ be a Riemannian manifold with the Riemannian connection $D$. The curvature $R$ is a correspondence that assigns to every pair of smooth vector fields $X, Y \in \mathcal{D}(M)$ a mapping $R(X,Y) : \mathcal{D}(M) \to \mathcal{D}(M)$ defined by
\[
R(X,Y)Z = D_Y D_X Z - D_X D_Y Z + D_{[X,Y]} Z, \quad \forall\, Z \in \mathcal{D}(M).
\]

Definition 4.4.6 The sectional curvature of a given two-dimensional subspace $E \subset T_p M$ at the point $p$ is
\[
K(E) = \frac{\langle R(X,Y)X, Y \rangle}{|X|^2 |Y|^2 - \langle X, Y \rangle^2},
\]
where $X$ and $Y$ are any two linearly independent vectors in $E$, and $\sqrt{|X|^2 |Y|^2 - \langle X, Y \rangle^2}$ is the area of the parallelogram spanned by $X$ and $Y$.
One can show that $K(E)$ so defined is independent of the choice of $X$ and $Y$. It is actually the Gaussian curvature of the section.

Example. Let $M$ be the unit sphere with the standard metric; then one can verify that its sectional curvature at every point is $K(E) = 1$.

Definition 4.4.7 Let $X = Y_n$ be a unit vector in $T_p M$, and let $\{Y_1, \ldots, Y_{n-1}\}$ be an orthonormal basis of the hyperplane in $T_p M$ orthogonal to $X$. Then the Ricci curvature in the direction $X$ is the average of the sectional curvatures of the $n-1$ sections spanned by $X$ and $Y_1$, $X$ and $Y_2$, $\ldots$, and $X$ and $Y_{n-1}$:
\[
\mathrm{Ric}_p(X) = \frac{1}{n-1} \sum_{i=1}^{n-1} \langle R(X, Y_i) X, Y_i \rangle.
\]
The scalar curvature at $p$ is
\[
R(p) = \frac{1}{n} \sum_{i=1}^n \mathrm{Ric}_p(Y_i).
\]
One can show that the above definitions do not depend on the choice of the orthonormal basis. On two-dimensional manifolds, the Ricci curvature is the same as the sectional curvature, and both depend only on the point $p$.
4.5 Calculus on Manifolds

In the previous section, we introduced the concept of an affine connection $D$ on a differentiable manifold $M$. It is a mapping from $\Gamma(M) \times \Gamma(M)$ to $\Gamma(M)$, where $\Gamma(M)$ is the set of all smooth vector fields on $M$. Here we will extend this definition to the space of differentiable tensor fields, and thus introduce higher order covariant derivatives. For convenience, we adopt the summation convention and write, for instance,
\[
X^i \frac{\partial}{\partial x^i} := \sum_i X^i \frac{\partial}{\partial x^i}, \qquad \Gamma^j_{ik}\, dx^k := \sum_k \Gamma^j_{ik}\, dx^k,
\]
and so on.

4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator

Let $p$ and $q$ be two nonnegative integers. At a point $x$ on $M$, as usual, we denote the tangent space by $T_x M$. Define $T^q_p(T_x M)$ as the space of $(p,q)$-tensors on $T_x M$, that is, the space of $(p+q)$-linear forms
\[
\eta : \underbrace{T_x M \times \cdots \times T_x M}_{p} \times \underbrace{T_x^* M \times \cdots \times T_x^* M}_{q} \to \mathbb{R}^1.
\]
In local coordinates, we can express
\[
T = T^{j_1 \cdots j_q}_{i_1 \cdots i_p}\, dx^{i_1} \otimes \cdots \otimes dx^{i_p} \otimes \frac{\partial}{\partial x^{j_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{j_q}}.
\]
Recall that, for the affine connection $D$ in local coordinates, if we set
\[
\nabla_i = D_{\frac{\partial}{\partial x^i}},
\]
then
\[
\nabla_i \frac{\partial}{\partial x^j}(x) = \Gamma^k_{ij}\, \frac{\partial}{\partial x^k}\Big|_x,
\]
where the $\Gamma^k_{ij}$ are the Christoffel symbols. For smooth vector fields $X, Y \in \Gamma(M)$ with
\[
X = (X^1, \ldots, X^n) \quad \text{and} \quad Y = (Y^1, \ldots, Y^n),
\]
by the bilinear properties of the connection, we have
\[
D_X Y = D_{X^i \frac{\partial}{\partial x^i}} Y = X^i \nabla_i Y
= X^i \Big( \frac{\partial Y^j}{\partial x^i}\, \frac{\partial}{\partial x^j} + Y^j \Gamma^k_{ij}\, \frac{\partial}{\partial x^k} \Big)
= X^i \Big( \frac{\partial Y^j}{\partial x^i} + \Gamma^j_{ik} Y^k \Big) \frac{\partial}{\partial x^j}. \tag{4.6}
\]
Now we naturally extend this affine connection $D$ to differentiable $(p,q)$-tensor fields $T$ on $M$. For a point $x$ on $M$ and a vector $X \in T_x M$, $D_X T$ is defined to be a $(p,q)$-tensor on $T_x M$ by
\[
D_X T(x) = X^i (\nabla_i T)(x),
\]
where
\[
(\nabla_i T)(x)^{j_1 \cdots j_q}_{i_1 \cdots i_p} = \frac{\partial T^{j_1 \cdots j_q}_{i_1 \cdots i_p}}{\partial x^i}\Big|_x
- \sum_{k=1}^p \Gamma^\alpha_{i i_k}(x)\, T(x)^{j_1 \cdots j_q}_{i_1 \cdots i_{k-1} \alpha\, i_{k+1} \cdots i_p}
+ \sum_{k=1}^q \Gamma^{j_k}_{i\alpha}(x)\, T(x)^{j_1 \cdots j_{k-1} \alpha\, j_{k+1} \cdots j_q}_{i_1 \cdots i_p}. \tag{4.7}
\]
Notice that (4.6) is the particular case of (4.7) applied to the $(0,1)$-tensor $Y$. If we apply it to a $(1,0)$-tensor, say $dx^j$, then
\[
\nabla_i(dx^j) = -\Gamma^j_{ik}\, dx^k. \tag{4.8}
\]
Given two differentiable tensor fields $T$ and $\tilde{T}$, we have
\[
D_X(T \otimes \tilde{T}) = D_X T \otimes \tilde{T} + T \otimes D_X \tilde{T}.
\]
For a differentiable $(p,q)$-tensor field $T$, we define $\nabla T$ to be the $(p+1,q)$-tensor field whose components are given by
\[
(\nabla T)^{j_1 \cdots j_q}_{i_1 \cdots i_{p+1}} = (\nabla_{i_1} T)^{j_1 \cdots j_q}_{i_2 \cdots i_{p+1}}.
\]
Similarly, one can define $\nabla^2 T$, $\nabla^3 T$, $\ldots$.

For example, a smooth function $f(x)$ on $M$ is a $(0,0)$-tensor. Hence $\nabla f$ is a $(1,0)$-tensor defined by
\[
\nabla f = \nabla_i f\, dx^i = \frac{\partial f}{\partial x^i}\, dx^i.
\]
And $\nabla^2 f$ is a $(2,0)$-tensor:
\[
\nabla^2 f = \nabla_j\Big( \frac{\partial f}{\partial x^i}\, dx^i \Big) \otimes dx^j
= \Big( \nabla_j \frac{\partial f}{\partial x^i}\, dx^i + \frac{\partial f}{\partial x^i}\, \nabla_j(dx^i) \Big) \otimes dx^j
= \Big( \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij}\, \frac{\partial f}{\partial x^k} \Big)\, dx^i \otimes dx^j.
\]
Here we have used (4.7) and (4.8). In the Riemannian context, $\nabla^2 f$ is called the Hessian of $f$ and denoted by $\mathrm{Hess}(f)$. Its $ij$-th component is
\[
(\nabla^2 f)_{ij} = \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij}\, \frac{\partial f}{\partial x^k}.
\]
The trace of the Hessian matrix $\big( (\nabla^2 f)_{ij} \big)$ is defined to be the Laplace-Beltrami operator $\Delta$ acting on $f$:
\[
\Delta f = g^{ij} \Big( \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij}\, \frac{\partial f}{\partial x^k} \Big),
\]
where $(g^{ij})$ is the inverse matrix of $(g_{ij})$. Through a straightforward calculation, one can verify that
\[
\Delta = \frac{1}{\sqrt{|g|}} \sum_{i,j=1}^n \frac{\partial}{\partial x^i}\Big( \sqrt{|g|}\, g^{ij}\, \frac{\partial}{\partial x^j} \Big),
\]
where $|g|$ is the determinant of $(g_{ij})$.
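The divergence form of $\Delta$ can be checked numerically on the unit sphere, where $\sqrt{|g|} = \sin\theta$, $g^{11} = 1$, and $g^{22} = 1/\sin^2\theta$ by the computation in Section 4.3. A hypothetical sketch (not part of the text) evaluates $\Delta f$ for $f = \cos\theta$, a first spherical harmonic, for which $\Delta f = -2f$:

```python
import math

def laplace_beltrami_s2(f, theta, psi, h=1e-4):
    """Delta f = (1/sqrt|g|) d_i( sqrt|g| g^{ij} d_j f ) on the unit S^2."""
    def flux_theta(t):   # sqrt|g| g^{11} df/dtheta, evaluated at (t, psi)
        return math.sin(t) * (f(t + h, psi) - f(t - h, psi)) / (2 * h)
    def flux_psi(s):     # sqrt|g| g^{22} df/dpsi = (1/sin theta) df/dpsi
        return (1.0 / math.sin(theta)) * (f(theta, s + h) - f(theta, s - h)) / (2 * h)
    div = (flux_theta(theta + h) - flux_theta(theta - h)) / (2 * h) \
        + (flux_psi(psi + h) - flux_psi(psi - h)) / (2 * h)
    return div / math.sin(theta)

# f = cos(theta) is an eigenfunction: Delta f = -2 f
f = lambda t, s: math.cos(t)
val = laplace_beltrami_s2(f, 0.8, 1.1)
assert abs(val - (-2.0) * math.cos(0.8)) < 1e-4
```

The eigenvalue $-2$ agrees with the classical spectrum of the sphere: the first nonzero eigenvalue of $-\Delta$ on $S^2$ is $2$.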
4.5.2 Integrals

On a smooth $n$-dimensional Riemannian manifold $(M, g)$, one can define a natural positive Radon measure on $M$, and the theory of the Lebesgue integral applies.

Let $(U_i, \phi_i)_i$ be a family of coordinate charts that covers $M$. We say that a family $(U_i, \phi_i, \eta_i)_i$ is a partition of unity associated to $(U_i, \phi_i)_i$ if
(i) $(\eta_i)_i$ is a smooth partition of unity associated to the covering $(U_i)_i$, and
(ii) for any $i$, $\mathrm{supp}\, \eta_i \subset U_i$.
One can show that, for each family of coordinate charts $(U_i, \phi_i)_i$ of $M$, there exists a corresponding partition of unity $(U_i, \phi_i, \eta_i)_i$.
Let $f$ be a continuous function on $M$ with compact support. Given a family of coordinate charts $(U_i, \phi_i)_i$ of $M$, we define its integral on $M$ as the sum of integrals on open sets in $\mathbb{R}^n$:
\[
\int_M f\,dV_g = \sum_i \int_{\phi_i(U_i)} \big(\eta_i \sqrt{|g|}\, f\big)\circ\phi_i^{-1}\,dx,
\]
where $(U_i, \phi_i, \eta_i)_i$ is a family of partition of unity associated to $(U_i, \phi_i)_i$, $|g|$ is the determinant of $(g_{ij})$ in the charts $(U_i, \phi_i)_i$, and $dx$ stands for the volume element of $\mathbb{R}^n$. One can prove that such a definition depends neither on the choice of the charts $(U_i, \phi_i)_i$ nor on the partition of unity $(U_i, \phi_i, \eta_i)_i$.
For smooth functions $u$ and $v$ on a compact manifold $M$, one can verify the following integration by parts formula:
\[
-\int_M \Delta_g u\, v\,dV_g = \int_M \langle\nabla u, \nabla v\rangle\,dV_g, \tag{4.9}
\]
where $\Delta_g$ is the Laplace--Beltrami operator associated to the metric $g$, and
\[
\langle\nabla u, \nabla v\rangle = g^{ij}\nabla_i u\,\nabla_j v = g^{ij}\frac{\partial u}{\partial x^i}\frac{\partial v}{\partial x^j}
\]
is the scalar product associated with $g$ for 1-forms.
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature

A Riemannian metric $g$ on $M$ is said to be pointwise conformal to another metric $g_o$ if there exists a positive function $\rho$ such that
\[
g(x) = \rho(x)\,g_o(x), \quad \forall\, x \in M.
\]
Suppose $M$ is a two-dimensional Riemannian manifold with a metric $g_o$ and the corresponding Gaussian curvature $K_o(x)$. Let $g(x) = e^{2u(x)}g_o(x)$ for some smooth function $u(x)$ on $M$, and let $K(x)$ be the Gaussian curvature associated to the metric $g$. Then by a straightforward calculation, one can verify that
\[
-\Delta_o u + K_o(x) = K(x)e^{2u(x)}, \quad \forall\, x \in M. \tag{4.10}
\]
Here $\Delta_o$ is the Laplace--Beltrami operator associated to $g_o$.
Similarly, for the conformal relation between scalar curvatures on $n$-dimensional manifolds ($n \geq 3$), say on the standard sphere $S^n$, we have
\[
-\Delta_o u + \frac{n(n-2)}{4}\,u = \frac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}, \quad \forall\, x \in S^n, \tag{4.11}
\]
where $g_o$ is the standard metric on $S^n$ with the corresponding Laplace--Beltrami operator $\Delta_o$, and $R(x)$ is the scalar curvature associated to the pointwise conformal metric $g = u^{\frac{4}{n-2}}g_o$.

In Chapters 5 and 6, we will study the existence and qualitative properties of the solutions $u$ of the above two equations, respectively.
4.6 Sobolev Embeddings

Let $(M, g)$ be a smooth Riemannian manifold, let $u$ be a smooth function on $M$, and let $k$ be a nonnegative integer. We denote by $\nabla^k u$ the $k$-th covariant derivative of $u$ as defined in the previous section; its norm $|\nabla^k u|$ is defined in a local coordinate chart by
\[
|\nabla^k u|^2 = g^{i_1 j_1}\cdots g^{i_k j_k}\,(\nabla^k u)_{i_1\cdots i_k}(\nabla^k u)_{j_1\cdots j_k}.
\]
In particular, we have $|\nabla^0 u| = |u|$ and
\[
|\nabla^1 u|^2 = |\nabla u|^2 = g^{ij}(\nabla u)_i(\nabla u)_j = g^{ij}\nabla_i u\,\nabla_j u = g^{ij}\frac{\partial u}{\partial x^i}\frac{\partial u}{\partial x^j}.
\]
For an integer $k$ and a real number $p \geq 1$, let $\mathcal{C}^p_k(M)$ be the space of $C^\infty$ functions $u$ on $M$ such that
\[
\int_M |\nabla^j u|^p\,dV_g < \infty, \quad \forall\, j = 0, 1, \ldots, k.
\]
Definition 4.6.1 The Sobolev space $H^{k,p}(M)$ is the completion of $\mathcal{C}^p_k(M)$ with respect to the norm
\[
\|u\|_{H^{k,p}(M)} = \sum_{j=0}^{k}\Big(\int_M |\nabla^j u|^p\,dV_g\Big)^{1/p}.
\]
Many results concerning the Sobolev spaces on $\mathbb{R}^n$ introduced in Chapter 1 can be generalized to Sobolev spaces on Riemannian manifolds. For application purposes, we list some of them in the following. Interested readers may find the proofs in [He] or [Au].
Theorem 4.6.1 For any integer $k$, $H^{k,2}(M)$ is a Hilbert space when equipped with the equivalent norm
\[
\|u\| = \Big(\sum_{i=0}^{k}\int_M |\nabla^i u|^2\,dV_g\Big)^{1/2}.
\]
The corresponding scalar product is defined by
\[
\langle u, v\rangle = \sum_{i=0}^{k}\int_M \langle\nabla^i u, \nabla^i v\rangle\,dV_g,
\]
where $\langle\,,\,\rangle$ is the scalar product on covariant tensor fields associated to the metric $g$.
Theorem 4.6.2 If $M$ is compact, then $H^{k,p}(M)$ does not depend on the metric.
Theorem 4.6.3 (Sobolev Embeddings I) Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold. Assume $1 \leq q < n$ and $p = \frac{nq}{n-q}$. Then
\[
H^{1,q}(M) \subset L^p(M).
\]
More generally, if $m, k$ are two integers with $0 \leq m < k$, and if $p = \frac{nq}{n-q(k-m)}$, then
\[
H^{k,q}(M) \subset H^{m,p}(M).
\]
Theorem 4.6.4 (Sobolev Embeddings II) Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold. Assume that $q \geq 1$ and $m, k$ are two integers with $0 \leq m < k$. If $q > \frac{n}{k-m}$, then
\[
H^{k,q}(M) \subset C^m(M).
\]
Theorem 4.6.5 (Compact Embeddings) Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold.

(i) Assume that $q \geq 1$ and $m, k$ are two integers with $0 \leq m < k$. Then for any real number $p$ such that $1 \leq p < \frac{nq}{n-q(k-m)}$, the embedding
\[
H^{k,q}(M) \subset H^{m,p}(M)
\]
is compact. In particular, the embedding of $H^{1,q}(M)$ into $L^p(M)$ is compact if $1 \leq p < \frac{nq}{n-q}$.

(ii) For $q > n$, the embedding of $H^{k,q}(M)$ into $C^\lambda(M)$ is compact if $k - \lambda > \frac{n}{q}$. In particular, the embedding of $H^{1,q}(M)$ into $C^0(M)$ is compact.
Theorem 4.6.6 (Poincaré Inequality) Let $(M, g)$ be a smooth, compact $n$-dimensional Riemannian manifold and let $p \in [1, n)$ be real. Then there exists a positive constant $C$ such that for any $u \in H^{1,p}(M)$,
\[
\Big(\int_M |u - \bar u|^p\,dV_g\Big)^{1/p} \leq C\Big(\int_M |\nabla u|^p\,dV_g\Big)^{1/p},
\]
where
\[
\bar u = \frac{1}{\mathrm{Vol}(M,g)}\int_M u\,dV_g
\]
is the average of $u$ on $M$.
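On the unit circle $S^1$, the simplest compact manifold, the Poincaré inequality for $p = 2$ holds with $C = 1$ (Wirtinger's inequality), and one can check it numerically. The sketch below is an added illustration with an arbitrarily chosen test function, not part of the original text.

```python
import numpy as np

N = 4096
x = 2*np.pi*np.arange(N)/N          # uniform grid on a circle of circumference 2*pi
u = np.cos(x) + 0.5*np.cos(2*x)     # an arbitrary smooth test function
du = -np.sin(x) - np.sin(2*x)       # its exact derivative

ubar = u.mean()                               # average of u on the circle
lhs = 2*np.pi*np.mean((u - ubar)**2)          # integral of |u - ubar|^2
rhs = 2*np.pi*np.mean(du**2)                  # integral of |u'|^2
assert lhs <= rhs                             # Poincare inequality with C = 1
```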
5 Prescribing Gaussian Curvature on Compact 2-Manifolds

5.1 Prescribing Gaussian Curvature on $S^2$
5.1.1 Obstructions
5.1.2 Variational Approaches and Key Inequalities
5.1.3 Existence of Weak Solutions
5.2 Gaussian Curvature on Negatively Curved Manifolds
5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions
5.2.2 Chen and Li's Results
In this chapter and the next, we present the reader with real research examples: semilinear equations arising from prescribing Gaussian and scalar curvature. We will illustrate, among other methods, how the calculus of variations can be applied to seek weak solutions of these equations.

To show the existence of weak solutions for a given equation, we consider the corresponding functional $J(\cdot)$ in an appropriate Hilbert space. According to the compactness of the functional, one usually distinguishes three cases:

a) the subcritical case, where the level sets of $J$ are compact; say, $J$ satisfies the (PS) condition (see also Chapter 2): every sequence $\{u_k\}$ that satisfies $J(u_k)\to c$ and $J'(u_k)\to 0$ possesses a convergent subsequence;

b) the critical case, which is the limiting situation and can be approximated by subcritical cases; and

c) the supercritical case: the remaining cases.
To see roughly when a functional may lose compactness, let us consider $J(x) = (x^2-1)e^x$ defined on the one-dimensional space $\mathbb{R}^1$. One can easily see that there exists a sequence $\{x_k\}$, for instance $x_k = -k$, such that $J(x_k)\to 0$ and $J'(x_k)\to 0$ as $k\to\infty$; however, $\{x_k\}$ possesses no convergent subsequence, because the sequence is unbounded. The functional has no compactness at level $J = 0$. We know that in a finite dimensional space a bounded sequence has a convergent subsequence, while in an infinite dimensional space even a bounded sequence may not have one. For instance, $\{\sin kx\}$ is a bounded sequence in $L^2([0,1])$ but does not have a convergent subsequence.
In the situation where the lack of compactness occurs, to find a critical point of the functional one would try to recover the compactness through various means. Take again the one-dimensional example $J(x) = (x^2-1)e^x$, which has no compactness at level $J = 0$. However, if we restrict it to levels less than $0$, we can recover the compactness. That is, if we pick a sequence $\{x_k\}$ satisfying $J(x_k) < 0$ and $J'(x_k)\to 0$, then one can verify that it possesses a subsequence which converges to a critical point $x_0$ of $J$. Actually, this $x_0$ is a minimum of $J$.
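The behavior of this one-dimensional example is easy to confirm numerically. The following check (an added illustration, not part of the original text) verifies that $J$ and $J'$ vanish along $x_k = -k$, and that the critical point at a negative level is $x_0 = -1+\sqrt 2$, obtained by solving $J'(x) = (x^2+2x-1)e^x = 0$.

```python
import numpy as np

J  = lambda x: (x**2 - 1)*np.exp(x)
Jp = lambda x: (x**2 + 2*x - 1)*np.exp(x)   # the derivative J'(x)

# along x_k = -k both J and J' tend to 0, yet {x_k} has no convergent subsequence
assert abs(J(-40.0)) < 1e-10 and abs(Jp(-40.0)) < 1e-10

# at levels J < 0 compactness is restored: the critical point x0 = -1 + sqrt(2)
x0 = -1.0 + np.sqrt(2.0)
assert abs(Jp(x0)) < 1e-12    # x0 is a critical point
assert J(x0) < 0              # and it sits at a negative level (the minimum of J)
```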
To further illustrate the concept of compactness, we consider some examples in infinite dimensional spaces. Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$, and let $H^1_0(\Omega)$ be the Hilbert space introduced in previous chapters. Consider the Dirichlet boundary value problem
\[
\begin{cases}
-\Delta u = u^p(x), & x \in \Omega,\\
u(x) = 0, & x \in \partial\Omega.
\end{cases} \tag{5.1}
\]
To seek weak solutions, we study the corresponding functional
\[
J_p(u) = \int_\Omega |u|^{p+1}\,dx
\]
on the constraint
\[
H = \Big\{u \in H^1_0(\Omega)\;\Big|\;\int_\Omega |\nabla u|^2\,dx = 1\Big\}.
\]
When $p < \frac{n+2}{n-2}$, we are in the subcritical case. As we have seen in Chapter 2, the functional satisfies the (PS) condition. If we define
\[
m_p = \inf_{u\in H} J_p(u),
\]
then a minimizing sequence $\{u_k\}$, $J_p(u_k)\to m_p$, possesses a convergent subsequence in $H^1_0(\Omega)$, and the limiting function $u_0$ is the minimum of $J_p$ in $H$; hence it is the desired weak solution.
When $p = \tau := \frac{n+2}{n-2}$, we are in the critical case. One can find a minimizing sequence that does not converge. For example, when $\Omega = B_1(0)$ is a ball, consider
\[
u_k(x) = \frac{[n(n-2)k]^{\frac{n-2}{4}}}{(1+k|x|^2)^{\frac{n-2}{2}}}\,\eta(x),
\]
where $\eta \in C^\infty_0(\Omega)$ is a cut-off function satisfying
\[
\eta(x) =
\begin{cases}
1, & x \in B_{1/2}(0),\\
\text{between 0 and 1}, & \text{elsewhere}.
\end{cases}
\]
Let
\[
v_k(x) = \frac{u_k(x)}{\big(\int_\Omega |\nabla u_k|^2\,dx\big)^{1/2}}.
\]
Then one can verify that $\{v_k\}$ is a minimizing sequence of $J_\tau$ in $H$ with $J'(v_k)\to 0$. However, it does not have a convergent subsequence. Actually, as $k\to\infty$, the energy of $\{v_k\}$ concentrates at the single point $0$. We say that the sequence "blows up" at the point $0$.
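This concentration can be seen numerically from the radial profile. Taking $n = 3$ and ignoring the cut-off $\eta$, the critical-power density $u_k^6\,r^2$ becomes, after the rescaling $s = \sqrt{k}\,r$, proportional to $s^2/(1+s^2)^3$, so the share of mass inside any fixed ball tends to $1$ as $k\to\infty$. The sketch below is an added illustration of this, not part of the original text.

```python
import numpy as np

def bubble_mass(k, r_max):
    """Integral of s^2/(1+s^2)^3 for s in [0, sqrt(k)*r_max] (trapezoid rule);
    this is the rescaled mass of u_k^6 inside the ball |x| < r_max for n = 3."""
    s = np.linspace(0.0, np.sqrt(k)*r_max, 200001)
    f = s**2/(1 + s**2)**3
    return np.sum(0.5*(f[1:] + f[:-1]))*(s[1] - s[0])

total = np.pi/16                       # integral over [0, infinity)
frac = bubble_mass(1e4, 0.1)/total     # share of the mass inside |x| < 0.1
assert frac > 0.99                     # nearly all the mass concentrates near 0
```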
In essence, the compactness of the functional $J_\tau$ here relies on the Sobolev embedding
\[
H^1(\Omega) \hookrightarrow L^{p+1}(\Omega).
\]
As we learned in Chapter 1, this embedding is compact when $p < \frac{n+2}{n-2}$ and is not when $p = \frac{n+2}{n-2}$.

In the critical case, the compactness may be recovered by considering other level sets or by changing the topology of the domain. For example, if $\Omega$ is an annulus, then the above $J_\tau$ has a minimizer in $H$. In this chapter and the next, the reader will see examples from prescribing Gaussian and scalar curvature, where the corresponding equations are critical and where various means have been exploited to recover the compactness.
5.1 Prescribing Gaussian Curvature on $S^2$

Given a function $K(x)$ on the two-dimensional unit sphere $S := S^2$ with standard metric $g_o$, can it be realized as the Gaussian curvature of some pointwise conformal metric $g$? This has been an interesting problem in differential geometry and is known as the Nirenberg problem. If we let $g = e^{2u}g_o$, then it is equivalent to solving the following semilinear elliptic equation on $S^2$:
\[
-\Delta u + 1 = K(x)e^{2u}, \quad x \in S^2, \tag{5.2}
\]
where $\Delta$ is the Laplace--Beltrami operator associated with the standard metric $g_o$.

If we replace $2u$ by $u$ and let $R(x) = 2K(x)$, then equation (5.2) becomes
\[
-\Delta u + 2 = R(x)e^{u(x)}. \tag{5.3}
\]
Here $2$ and $R(x)$ are actually the scalar curvatures of $S^2$ with the standard metric $g_o$ and with the pointwise conformal metric $e^u g_o$, respectively.
5.1.1 Obstructions

For equation (5.3) to have a solution, the function $R(x)$ must satisfy the obvious Gauss--Bonnet sign condition
\[
\int_S R(x)e^u\,dA = 8\pi. \tag{5.4}
\]
This can be seen by integrating both sides of equation (5.3) and using the fact that
\[
-\int_S \Delta u\,dA = \int_S \langle\nabla u, \nabla 1\rangle\,dA = 0.
\]
Condition (5.4) implies that $R(x)$ must be positive somewhere. Besides this obvious necessary condition, there are other necessary conditions, found by Kazdan and Warner. They read
\[
\int_S \langle\nabla\phi_i, \nabla R\rangle\,e^u\,dA = 0, \quad i = 1, 2, 3. \tag{5.5}
\]
Here the $\phi_i$ are the first spherical harmonic functions, i.e.,
\[
-\Delta\phi_i = 2\phi_i, \quad i = 1, 2, 3.
\]
Actually, $\phi_i(x) = x_i$ in the coordinates $x = (x_1, x_2, x_3)$ of $\mathbb{R}^3$.
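That $\phi_i = x_i$ satisfies $-\Delta\phi_i = 2\phi_i$ on $S^2$ can be checked symbolically. The sketch below (an added illustration, not part of the original text) does so for the zonal harmonic $\phi_3 = x_3 = \cos\theta$ in the colatitude $\theta$, where the Laplace--Beltrami operator on a function of $\theta$ alone is $\Delta f = \frac{1}{\sin\theta}\partial_\theta(\sin\theta\,\partial_\theta f)$.

```python
import sympy as sp

th = sp.symbols('theta')
phi3 = sp.cos(th)   # x_3 in colatitude coordinates on the unit sphere

# Laplace-Beltrami operator on S^2 applied to a zonal function f(theta)
lap = sp.simplify(sp.diff(sp.sin(th)*sp.diff(phi3, th), th)/sp.sin(th))

assert sp.simplify(lap + 2*phi3) == 0    # -Delta(phi3) = 2*phi3
```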
Later, Bourguignon and Ezin generalized this condition to
\[
\int_S X(R)\,e^u\,dA = 0, \tag{5.6}
\]
where $X$ is any conformal Killing vector field on the standard $S^2$.

Condition (5.5) gives rise to many examples of $R(x)$ for which equation (5.3) has no solution. In particular, for a monotone rotationally symmetric function $R$, the equation admits no solution. For which kinds of functions $R(x)$, then, can one solve (5.3)? This has been an interesting problem in geometry.
5.1.2 Variational Approach and Key Inequalities

To obtain the existence of a solution for equation (5.3), one usually uses a variational approach: first prove the existence of a weak solution, then show by a regularity argument that the weak solution is smooth and hence is a classical solution.

To obtain a weak solution, we let $H^1(S) := H^{1,2}(S)$ be the Hilbert space with norm
\[
\|u\|_1 = \Big(\int_S (|\nabla u|^2 + u^2)\,dA\Big)^{1/2}.
\]
Let
\[
H(S) = \Big\{u \in H^1(S)\;\Big|\;\int_S u\,dA = 0,\ \int_S R(x)e^u\,dA > 0\Big\}.
\]
Then by the Poincaré inequality, we can use
\[
\|u\| = \Big(\int_S |\nabla u|^2\,dA\Big)^{1/2}
\]
as an equivalent norm in $H(S)$.

Consider the functional
\[
J(u) = \frac{1}{2}\int_S |\nabla u|^2\,dA - 8\pi\ln\int_S R(x)e^u\,dA
\]
in $H(S)$.
To estimate the value of the functional $J$, we introduce some useful inequalities.

Lemma 5.1.1 (Moser--Trudinger Inequality) Let $S$ be a compact two-dimensional Riemannian manifold. Then there exists a constant $C$ such that the inequality
\[
\int_S e^{4\pi u^2}\,dA \leq C \tag{5.7}
\]
holds for all $u \in H^1(S)$ satisfying
\[
\int_S |\nabla u|^2\,dA \leq 1 \quad\text{and}\quad \int_S u\,dA = 0. \tag{5.8}
\]
We skip the proof of this inequality; interested readers may see the article of Moser [Mo1] or the article of Chen [C1].
From this inequality, one can immediately derive

Lemma 5.1.2 There exists a constant $C$ such that for all $u \in H^1(S)$ there holds
\[
\int_S e^u\,dA \leq C\exp\Big\{\frac{1}{16\pi}\int_S |\nabla u|^2\,dA + \frac{1}{4\pi}\int_S u\,dA\Big\}. \tag{5.9}
\]
Proof. Let
\[
\bar u = \frac{1}{4\pi}\int_S u(x)\,dA
\]
be the average of $u$ on $S$. Still denote
\[
\|u\| = \Big(\int_S |\nabla u|^2\,dA\Big)^{1/2}.
\]
i) For any $v \in H^1(S)$ with $\bar v = 0$, let $w = \frac{v}{\|v\|}$. Then $\|w\| = 1$ and $\bar w = 0$. Applying Lemma 5.1.1 to such a $w$, we have
\[
\int_S e^{4\pi v^2/\|v\|^2}\,dA \leq C. \tag{5.10}
\]
From the obvious inequality
\[
\Big(\frac{\|v\|}{4\sqrt{\pi}} - \frac{2\sqrt{\pi}\,v}{\|v\|}\Big)^2 \geq 0,
\]
we deduce immediately
\[
v \leq \frac{\|v\|^2}{16\pi} + \frac{4\pi v^2}{\|v\|^2}.
\]
Together with (5.10), we have
\[
\int_S e^v\,dA \leq C\exp\Big\{\frac{\|v\|^2}{16\pi}\Big\}, \quad \forall\, v \in H^1(S) \text{ with } \bar v = 0. \tag{5.11}
\]
ii) For any $u \in H^1(S)$, let $v = u - \bar u$. Then $\bar v = 0$, and we can apply (5.11) to $v$ to obtain
\[
\int_S e^{u-\bar u}\,dA \leq C\exp\Big\{\frac{\|u\|^2}{16\pi}\Big\}.
\]
Consequently,
\[
\int_S e^u\,dA \leq C\exp\Big\{\frac{\|u\|^2}{16\pi} + \bar u\Big\}.
\]
This completes the proof of the Lemma. $\square$
From Lemma 5.1.2 we can conclude that the functional $J(\cdot)$ is bounded from below in $H(S)$. In fact, by the Lemma (and using $\bar u = 0$ for $u \in H(S)$),
\[
\int_S R(x)e^u\,dA \leq \max R\int_S e^u\,dA \leq C\max R\,\exp\Big\{\frac{\|u\|^2}{16\pi}\Big\}.
\]
It follows that
\[
J(u) \geq \frac{1}{2}\|u\|^2 - 8\pi\Big[\ln(C\max R) + \frac{\|u\|^2}{16\pi}\Big] \geq -8\pi\ln(C\max R). \tag{5.12}
\]
Therefore, the functional $J$ is bounded from below in $H(S)$. Now we can take a minimizing sequence $\{u_k\} \subset H(S)$:
\[
J(u_k)\to \inf_{u\in H(S)} J(u).
\]
However, as we mentioned before, a minimizing sequence may not be bounded; that is, it may 'leak' to infinity. To prevent this from happening, we need some coerciveness of the functional. Although it is not true in general, we do have coerciveness for $J$ on some special subsets of $H(S)$, in particular on the set of even (antipodally symmetric) functions. More precisely, we have the following "Distribution of Mass Principle" from Chen and Li [CL7], [CL8].
Lemma 5.1.3 Let $\Omega_1$ and $\Omega_2$ be two subsets of $S$ such that
\[
\mathrm{dist}(\Omega_1, \Omega_2) \geq \delta_o > 0,
\]
and let $0 < \alpha_o \leq \frac{1}{2}$. Then for any $\varepsilon > 0$ there exists a constant $C = C(\alpha_o, \delta_o, \varepsilon)$ such that the inequality
\[
\int_S e^u\,dA \leq C\exp\Big\{\Big(\frac{1}{32\pi} + \varepsilon\Big)\|u\|^2 + \bar u\Big\} \tag{5.13}
\]
holds for all $u \in H^1(S)$ satisfying
\[
\frac{\int_{\Omega_1} e^u\,dA}{\int_S e^u\,dA} \geq \alpha_o \quad\text{and}\quad \frac{\int_{\Omega_2} e^u\,dA}{\int_S e^u\,dA} \geq \alpha_o. \tag{5.14}
\]
Intuitively, if we think of $e^{u(x)}$ as the density of a solid sphere $S$ at the point $x$, then $\int_{\Omega_i} e^u\,dA$ is the mass of the region $\Omega_i$, and $\int_S e^u\,dA$ is the total mass of the sphere. The Lemma says that if the mass of the sphere is distributed over two separated parts of the sphere (more precisely, if the total mass does not concentrate at one point), then we have a stronger inequality than (5.9). In particular, this stronger inequality (5.13) holds for even functions.
If we apply this stronger inequality to the family of even functions in $H(S)$, we have
\[
J(u) \geq \frac{1}{2}\|u\|^2 - 8\pi\Big[\ln(C\max R) + \Big(\frac{1}{32\pi} + \varepsilon\Big)\|u\|^2\Big]
\geq \Big(\frac{1}{4} - 8\pi\varepsilon\Big)\|u\|^2 - 8\pi\ln(C\max R).
\]
Choosing $\varepsilon$ small enough, we arrive at
\[
J(u) \geq \frac{1}{8}\|u\|^2 - C_1 \quad\text{for all even functions } u \text{ in } H(S). \tag{5.15}
\]
This stronger inequality guarantees the boundedness of a minimizing sequence, and hence it converges to a weak limit $u_o$ in $H^1(S)$. As we will show in the next section, the functional $J$ is weakly lower semi-continuous, and therefore the $u_o$ so obtained is the minimum of $J(u)$, i.e., a weak solution of equation (5.3). Then by a standard regularity argument, as we will see in the next section, we will be able to conclude that $u_o$ is a classical solution.
In many articles, the authors use the "center of mass" analysis, i.e., they consider the center of mass of the sphere with density $e^{u(x)}$ at the point $x$:
\[
P(u) = \frac{\int_S x\,e^u\,dA}{\int_S e^u\,dA},
\]
and use it to study the behavior of a minimizing sequence (see [CD], [CD1], [CY], and [CY1]). The two kinds of analysis are equivalent on the sphere $S^2$. However, since the "center of mass" $P(u)$ lies in the ambient space $\mathbb{R}^3$, this kind of analysis cannot be applied on a general two-dimensional compact Riemannian surface, while the "Distribution of Mass" analysis can be applied to any compact surface, even surfaces with conical singularities. Interested readers may see the authors' papers [CL7], [CL8] for a more general version of inequality (5.13).
Proof of Lemma 5.1.3. Let $g_1, g_2$ be two smooth functions such that
\[
0 \leq g_i \leq 1, \qquad g_i(x) \equiv 1 \text{ for } x \in \Omega_i,\ i = 1, 2,
\]
and $\operatorname{supp} g_1 \cap \operatorname{supp} g_2 = \emptyset$. It suffices to show that, for $u \in H^1(S)$ with $\int_S u\,dA = 0$, condition (5.14) implies
\[
\int_S e^u\,dA \leq C\exp\Big\{\Big(\frac{1}{32\pi} + \varepsilon\Big)\|u\|^2\Big\}. \tag{5.16}
\]
Case i) If $\|\nabla(g_1u)\| \leq \|\nabla(g_2u)\|$, then by (5.9),
\[
\begin{aligned}
\int_S e^u\,dA &\leq \frac{1}{\alpha_o}\int_{\Omega_1} e^u\,dA \leq \frac{1}{\alpha_o}\int_S e^{g_1u}\,dA\\
&\leq \frac{C}{\alpha_o}\exp\Big\{\frac{1}{16\pi}\|\nabla(g_1u)\|^2 + \overline{g_1u}\Big\}\\
&\leq \frac{C}{\alpha_o}\exp\Big\{\frac{1}{32\pi}\big(\|\nabla(g_1u)\|^2 + \|\nabla(g_2u)\|^2\big) + \overline{g_1u}\Big\}\\
&\leq \frac{C}{\alpha_o}\exp\Big\{\frac{1}{32\pi}(1+\varepsilon_1)\|u\|^2 + c_1(\varepsilon_1)\|u\|^2_{L^2}\Big\}
\end{aligned} \tag{5.17}
\]
for some small $\varepsilon_1 > 0$. Here we have used the fact that
\[
\begin{aligned}
\|\nabla(g_1u)\|^2 + \|\nabla(g_2u)\|^2 &= \int_S \big|\nabla(g_1u) + \nabla(g_2u)\big|^2\,dA\\
&= \int_S \big|(\nabla g_1 + \nabla g_2)u + (g_1 + g_2)\nabla u\big|^2\,dA\\
&\leq \|u\|^2 + C_1\|u\|\,\|u\|_{L^2} + C_2\|u\|^2_{L^2},
\end{aligned}
\]
where the first equality holds because $g_1u$ and $g_2u$ have disjoint supports. In order to get rid of the term $\|u\|^2_{L^2}$ on the right hand side of the above inequality, we employ the condition $\int_S u\,dA = 0$.
Given any $\eta > 0$, choose $a$ such that $\mathrm{meas}\{x \in S \mid u(x) \geq a\} = \eta$. Applying (5.17) to the function $(u-a)^+ \equiv \max\{0, u-a\}$, we have
\[
\begin{aligned}
\int_S e^u\,dA &\leq e^a\int_S e^{u-a}\,dA \leq e^a\int_S e^{(u-a)^+}\,dA\\
&\leq C\exp\Big\{\frac{1}{32\pi}(1+\varepsilon_1)\|u\|^2 + c_1(\varepsilon_1)\|(u-a)^+\|^2_{L^2} + a\Big\},
\end{aligned} \tag{5.18}
\]
where $C = C(\delta_o, \alpha_o)$.
By the Hölder and Sobolev inequalities (see Theorem 4.6.3),
\[
\|(u-a)^+\|^2_{L^2} \leq \eta^{\frac{1}{2}}\|(u-a)^+\|^2_{L^4} \leq c\,\eta^{\frac{1}{2}}\big(\|u\|^2 + \|(u-a)^+\|^2_{L^2}\big). \tag{5.19}
\]
Choose $\eta$ so small that $c\,\eta^{1/2} \leq \frac{1}{2}$; then the above inequality implies
\[
\|(u-a)^+\|^2_{L^2} \leq 2c\,\eta^{\frac{1}{2}}\|u\|^2. \tag{5.20}
\]
Now by the Poincaré inequality (see Theorem 4.6.6),
\[
a\,\eta \leq \int_{u\geq a} u\,dA \leq \int_S |u|\,dA \leq c\|u\|;
\]
hence for any $\delta > 0$,
\[
a \leq \delta\|u\|^2 + \frac{c^2}{4\delta\eta^2}. \tag{5.21}
\]
Now (5.16) follows from (5.18), (5.20) and (5.21).
Case ii) If $\|\nabla(g_2u)\| \leq \|\nabla(g_1u)\|$, then by a similar argument as in case i), we obtain (5.17) and then (5.16). This completes the proof. $\square$
5.1.3 Existence of Weak Solutions

Theorem 5.1.1 (Moser) Suppose the function $R$ is even, that is,
\[
R(x) = R(-x), \quad \forall\, x \in S^2,
\]
where $x$ and $-x$ are two antipodal points. Then the necessary and sufficient condition for equation (5.3) to have a solution is that $R(x)$ be positive somewhere.
To prove the existence of a weak solution, as in the previous section, we consider the functional
\[
J(u) = \frac{1}{2}\int_S |\nabla u|^2\,dA - 8\pi\ln\int_S R(x)e^u\,dA
\]
in $H_e(S) = \{u \in H(S) \mid u \text{ is even almost everywhere}\}$. Let $\{u_k\}$ be a minimizing sequence of $J$ in $H_e(S)$. Then by (5.15), $\{u_k\}$ is bounded in $H^1(S)$, and hence possesses a subsequence (still denoted by $\{u_k\}$) that converges weakly to some element $u_o$ in $H^1(S)$. Then by the compact Sobolev embedding of $H^1$ into $L^2$,
\[
u_k \to u_o \ \text{ in } L^2(S). \tag{5.22}
\]
Consequently,
\[
u_o(-x) = u_o(x) \ \text{almost everywhere}, \quad\text{and}\quad \int_S u_o(x)\,dA = 0,
\]
that is, $u_o \in H_e(S)$.

Furthermore, as we showed in the previous chapter, the Dirichlet integral is weakly lower semi-continuous, i.e.,
\[
\liminf_{k\to\infty}\int_S |\nabla u_k|^2\,dA \geq \int_S |\nabla u_o|^2\,dA.
\]
To show that the functional $J$ is weakly lower semi-continuous, i.e., that
\[
J(u_o) \leq \lim J(u_k) = \inf_{u\in H_e(S)} J(u),
\]
we need the following
Lemma 5.1.4 If $\{u_k\}$ converges weakly to $u$ in $H^1(S)$, then there exists a subsequence (still denoted by $\{u_k\}$) such that
\[
\int_S e^{u_k(x)}\,dA \to \int_S e^{u(x)}\,dA, \quad\text{as } k\to\infty.
\]
We postpone the proof of this lemma for a moment. Now from this lemma we obviously have
\[
\int_S R(x)e^{u_k}\,dA \to \int_S R(x)e^{u_o}\,dA,
\]
and it follows that $J(\cdot)$ is weakly lower semi-continuous. On the other hand, we have
\[
J(u_o) \geq \inf_{u\in H_e(S)} J(u).
\]
Therefore, $u_o$ is a minimum of $J$ in $H_e(S)$.
Consequently, for any $v \in H_e(S)$, we have
\[
0 = \langle J'(u_o), v\rangle = \int_S \langle\nabla u_o, \nabla v\rangle\,dA - \frac{8\pi}{\int_S R(x)e^{u_o}\,dA}\int_S R(x)e^{u_o}v\,dA.
\]
For any $w \in H(S)$, let
\[
v(x) = \frac{1}{2}\big(w(x) + w(-x)\big) := \frac{1}{2}\big(w(x) + w^-(x)\big).
\]
Then obviously $v \in H_e(S)$, and it follows that
\[
0 = \langle J'(u_o), v\rangle = \frac{1}{2}\big(\langle J'(u_o), w\rangle + \langle J'(u_o), w^-\rangle\big) = \langle J'(u_o), w\rangle.
\]
Here we have used the fact that $u_o(-x) = u_o(x)$. This implies that $u_o$ is a critical point of $J$ in $H^1(S)$, and hence a constant multiple of $u_o$ is a weak solution of equation (5.3). What is left is to prove Lemma 5.1.4.
The Proof of Lemma 5.1.4. From Lemma 5.1.2, for any real number $p > 0$ we have
\[
\int_S e^{pu}\,dA \leq C\exp\Big\{\frac{p^2}{16\pi}\int_S |\nabla u|^2\,dA + \frac{p}{4\pi}\int_S u\,dA\Big\}. \tag{5.23}
\]
And by the Hölder inequality, for any $0 < p < 2$,
\[
\int_S |\nabla(e^u)|^p\,dA = \int_S e^{pu}|\nabla u|^p\,dA \leq \Big(\int_S e^{\frac{2p}{2-p}u}\,dA\Big)^{\frac{2-p}{2}}\Big(\int_S |\nabla u|^2\,dA\Big)^{\frac{p}{2}}. \tag{5.24}
\]
Inequalities (5.23) and (5.24) imply that $\{e^{u_k}\}$ is a bounded sequence in $H^{1,p}$ for any $1 < p < 2$. By the compact Sobolev embedding of $H^{1,p}$ into $L^1$, there exists a subsequence of $\{e^{u_k}\}$ which converges strongly to $e^{u_o}$ in $L^1(S)$. This completes the proof of the Lemma. $\square$
Chen and Ding [CD] generalized Moser's result to a broader class of symmetric functions, as we describe in the following.

Let $O(3)$ be the group of orthogonal transformations of $\mathbb{R}^3$, and let $G$ be a closed finite subgroup of $O(3)$. Let
\[
F_G = \{x \in S^2 \mid gx = x,\ \forall g \in G\}
\]
be the set of fixed points under the action of $G$.

The action of $G$ on $S^2$ induces an action of $G$ on the Hilbert space $H^1(S)$:
\[
g[u](x) = u(gx),
\]
under which the fixed point subspace of $H^1(S)$ is given by
\[
X = \{u \in H^1(S) \mid u(gx) = u(x),\ \forall g \in G\}.
\]
Assume that $R(x)$ is $G$-symmetric, or $G$-invariant, i.e.,
\[
R(gx) = R(x), \quad \forall g \in G. \tag{5.25}
\]
Let $X^* = X \cap H(S)$. Recall that
\[
H(S) = \Big\{u \in H^1(S)\;\Big|\;\int_S u\,dA = 0,\ \int_S R(x)e^u\,dA > 0\Big\}.
\]
Under the assumption (5.25), the functional
\[
J(u) = \frac{1}{2}\int_S |\nabla u|^2\,dA - 8\pi\ln\int_S R(x)e^u\,dA
\]
is $G$-invariant on $X^*$, i.e.,
\[
J(g[u]) = J(u), \quad \forall g \in G,\ \forall u \in X^*.
\]
We seek a minimum of $J$ in $X^*$. The following lemma guarantees that such a minimum is the critical point we desire.
Lemma 5.1.5 Assume that $u_o$ is a critical point of $J$ in $X^*$. Then it is also a critical point of $J$ in $H(S)$.
Proof. First, for any $g \in G$ and any $u, v \in H(S)$, we have
\[
\langle J'(u), g[v]\rangle = \langle J'(g^{-1}[u]), v\rangle, \tag{5.26}
\]
where $g^{-1}$ is the inverse of $g$. This can be easily seen from
\[
\langle J'(u), v\rangle = \int_S \langle\nabla u, \nabla v\rangle\,dA - \frac{8\pi}{\int_S Re^u\,dA}\int_S Re^u v\,dA.
\]
Assume
\[
G = \{g_1, \ldots, g_m\}.
\]
For any $w \in H(S)$, let
\[
v = \frac{1}{m}\big(g_1[w] + \cdots + g_m[w]\big).
\]
Then for any $g \in G$,
\[
g[v] = \frac{1}{m}\big(gg_1[w] + \cdots + gg_m[w]\big) = v.
\]
Hence $v \in X^*$. It follows that
\[
\begin{aligned}
0 = \langle J'(u_o), v\rangle &= \frac{1}{m}\sum_{i=1}^{m}\langle J'(u_o), g_i[w]\rangle\\
&= \frac{1}{m}\sum_{i=1}^{m}\langle J'(g_i^{-1}[u_o]), w\rangle = \frac{1}{m}\sum_{i=1}^{m}\langle J'(u_o), w\rangle\\
&= \langle J'(u_o), w\rangle,
\end{aligned}
\]
where we have used (5.26) and the fact that $g_i^{-1}[u_o] = u_o$, since $u_o \in X^*$. Therefore, $u_o$ is a critical point of $J$ in $H(S)$. $\square$
Theorem 5.1.2 (Chen and Ding [CD]) Suppose that $R \in C^\alpha(S^2)$ satisfies (5.25) with $\max_S R > 0$. If $F_G \neq \emptyset$, let $m = \max_{F_G} R$. Then equation (5.3) has a solution provided one of the following conditions holds:

(i) $F_G = \emptyset$;

(ii) $m \leq 0$; or

(iii) $m > 0$ and $c^* = \inf_{X^*} J < -8\pi\ln(4\pi m)$.

Remark 5.1.1 In the case when the function $R(x)$ is even, the group $G = \{id, -id\}$, where $id$ is the identity transform, i.e.,
\[
id(x) = x \quad\text{and}\quad -id(x) = -x, \quad \forall\, x \in S^2.
\]
Obviously, in this case $F_G$ is empty. Hence the above Theorem contains Moser's existence result as a special case.
To prove this theorem, we need another two lemmas.

Lemma 5.1.6 (Onofri [On]) For any $u \in H^1(S)$ with $\int_S u\,dA = 0$, there holds
\[
\int_S e^u\,dA \leq 4\pi\exp\Big\{\frac{1}{16\pi}\int_S |\nabla u|^2\,dA\Big\}. \tag{5.27}
\]
Lemma 5.1.7 Suppose that $\{u_i\}$ is a sequence in $H(S)$ with $J(u_i) \leq \beta$. If $\|u_i\|\to\infty$, then there exist a subsequence (still denoted by $\{u_i\}$) and a point $\xi \in S^2$ with $R(\xi) > 0$, such that
\[
\int_S R(x)e^{u_i}\,dA = \big(R(\xi) + o(1)\big)\int_S e^{u_i}\,dA, \tag{5.28}
\]
and
\[
\lim_{i\to\infty} J(u_i) \geq -8\pi\ln\big(4\pi R(\xi)\big). \tag{5.29}
\]
Proof. We first show that there exist a subsequence of $\{u_i\}$ and a point $\xi \in S^2$ such that, for any $\varepsilon > 0$,
\[
\int_{S\setminus B_\varepsilon(\xi)} e^{u_i}\,dA \to 0. \tag{5.30}
\]
Otherwise, there exist two subsets $\Omega_1$ and $\Omega_2$ of $S^2$ with $\mathrm{dist}(\Omega_1, \Omega_2) \geq \delta_o > 0$ such that $\{u_i\}$ satisfies all the conditions in Lemma 5.1.3. Hence the inequality
\[
\int_S e^{u_i}\,dA \leq C\exp\Big\{\Big(\frac{1}{16\pi} - \delta\Big)\|u_i\|^2\Big\}
\]
holds for some $\delta > 0$. Then by the previous argument, we conclude that $\|u_i\|$ is bounded. This contradicts our assumption and hence verifies (5.30). Now (5.30) and the continuity assumption on $R(x)$ imply (5.28).

Roughly speaking, the above argument shows that if a minimizing sequence of $J$ is unbounded, then, after passing to a subsequence, the mass of the surface $S$ with density $e^{u_i(x)}$ at the point $x$ will concentrate at one point $\xi$ on $S$.
To verify (5.29), we use (5.28) and Onofri's Lemma:
\[
\begin{aligned}
J(u_i) &= \frac{1}{2}\int_S |\nabla u_i|^2\,dA - 8\pi\ln\Big[\big(R(\xi)+o(1)\big)\int_S e^{u_i}\,dA\Big]\\
&= \frac{1}{2}\int_S |\nabla u_i|^2\,dA - 8\pi\ln\big(R(\xi)+o(1)\big) - 8\pi\ln\int_S e^{u_i}\,dA\\
&\geq \frac{1}{2}\int_S |\nabla u_i|^2\,dA - 8\pi\ln\big(R(\xi)+o(1)\big) - 8\pi\Big(\ln 4\pi + \frac{1}{16\pi}\int_S |\nabla u_i|^2\,dA\Big)\\
&= -8\pi\ln\big(4\pi R(\xi) + o(1)\big).
\end{aligned}
\]
Letting $i\to\infty$, we arrive at (5.29). Since the right hand side of (5.29) is bounded above by $\beta$, we must have $R(\xi) > 0$. This completes the proof of the Lemma. $\square$
Proof of the Theorem. Define
\[
c^* = \inf_{X^*} J.
\]
As we argued in the previous section (see (5.12)), we have $c^* > -\infty$. Let $\{u_i\} \subset X^*$ be a minimizing sequence, i.e., $J(u_i)\to c^*$ as $i\to\infty$.
Similarly to the reasoning at the beginning of this section, one can see that we only need to show that $\{u_i\}$ is bounded in $H^1(S)$. Assume on the contrary that $\|u_i\|\to\infty$. Then from the proof of Lemma 5.1.7, there is a point $\xi \in S^2$ such that for any $\varepsilon > 0$ we have
\[
\int_{S\setminus B_\varepsilon(\xi)} e^{u_i}\,dA \to 0. \tag{5.31}
\]
Since $u_i$ is invariant under the action of $G$, we must also have
\[
\int_{S\setminus B_\varepsilon(g\xi)} e^{u_i}\,dA \to 0, \quad \forall g \in G. \tag{5.32}
\]
Because $\varepsilon$ is arbitrary, we deduce that $g\xi = \xi$, that is, $\xi \in F_G$.
Now under condition (i), $F_G$ is empty, so no such $\xi$ can exist; therefore $\{u_i\}$ must be bounded. Moreover, by Lemma 5.1.7,
\[
R(\xi) > 0 \quad\text{and}\quad c^* \geq -8\pi\ln\big(4\pi R(\xi)\big).
\]
Noticing that $m \geq R(\xi)$ by definition, this contradicts condition (ii), since $R(\xi) > 0$ would force $m > 0$, as well as condition (iii), since $c^* \geq -8\pi\ln(4\pi R(\xi)) \geq -8\pi\ln(4\pi m)$.

Therefore, under any one of the conditions (i), (ii), or (iii), the minimizing sequence $\{u_i\}$ must be bounded, and hence possesses a subsequence converging weakly to some $u_o \in X^*$. Thus a constant multiple of $u_o$ is the desired weak solution of equation (5.3).
Here conditions (i) and (ii) in Theorem 5.1.2 can easily be verified through the group $G$ or the given function $R$. However, one may wonder under what conditions on $R$ we have condition (iii). The following theorem provides a sufficient condition.
Theorem 5.1.3 (Chen and Ding [CD]) Suppose that $R \in C^2(S^2)$ is $G$-invariant and $m = \max_{F_G} R > 0$. Let
\[
D = \{x \in S^2 \mid R(x) = m\}.
\]
If $\Delta R(x_o) > 0$ for some $x_o \in D$, then
\[
c^* < -8\pi\ln(4\pi m), \tag{5.33}
\]
and therefore (5.3) possesses a weak solution.
Proof. Choose a spherical polar coordinate system on $S := S^2$:
\[
(\theta, \phi), \quad -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}, \quad 0 \leq \phi \leq 2\pi,
\]
with $x_o = (\frac{\pi}{2}, \phi)$ as the north pole.
Consider the following family of functions:
\[
u_\lambda(\theta, \phi) = \ln\frac{1-\lambda^2}{(1-\lambda\sin\theta)^2}, \qquad v_\lambda = u_\lambda - \frac{1}{4\pi}\int_S u_\lambda\,dA, \qquad 0 < \lambda < 1.
\]
One can verify that this family of functions satisfies the equation
\[
-\Delta u_\lambda + 2 = 2e^{u_\lambda}. \tag{5.34}
\]
As $\lambda\to 1$, the family $\{u_\lambda\}$ 'blows up' (tends to $\infty$) at the single point $x_o$, while approaching $-\infty$ everywhere else on $S^2$. Therefore, the mass of the sphere with density $e^{u_\lambda(x)}$ concentrates at the point $x_o$ as $\lambda\to 1$.
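Equation (5.34) can be verified symbolically. In the latitude coordinate $\theta \in (-\pi/2, \pi/2)$ the Laplace--Beltrami operator on a function of $\theta$ alone is $\Delta f = \frac{1}{\cos\theta}\partial_\theta(\cos\theta\,\partial_\theta f)$; the sketch below (an added illustration, not part of the original text) checks that $-\Delta u_\lambda + 2 = 2e^{u_\lambda}$.

```python
import sympy as sp

th = sp.symbols('theta')
lam = sp.Symbol('lam', positive=True)    # the parameter lambda, 0 < lam < 1

u = sp.log((1 - lam**2)/(1 - lam*sp.sin(th))**2)

# Laplace-Beltrami operator on S^2 in latitude theta, for a zonal function
lap = sp.diff(sp.cos(th)*sp.diff(u, th), th)/sp.cos(th)

res = sp.simplify(-lap + 2 - 2*sp.exp(u))   # residual of equation (5.34)
assert res == 0 or res.equals(0)
```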
It is easy to see that $v_\lambda \in H(S)$ for $\lambda$ close to $1$. Moreover, we claim that $v_\lambda \in X$, i.e.,
\[
v_\lambda(gx) = v_\lambda(x), \quad \forall g \in G.
\]
In fact, since $g$ is an isometry of $S^2$ and $gx_o = x_o$, we have
\[
d(gx, x_o) = d(gx, gx_o) = d(x, x_o),
\]
where $d(\cdot,\cdot)$ is the geodesic distance on $S^2$. But $v_\lambda$ depends only on $\theta$, i.e., only on $d(x, x_o)$, so
\[
v_\lambda(gx) = v_\lambda(x), \quad \forall g \in G.
\]
It follows that $v_\lambda \in X^*$ for $\lambda$ sufficiently close to $1$.
By a direct computation, one can verify that
\[
\frac{1}{2}\int_S |\nabla u_\lambda|^2\,dA + 8\pi\bar u_\lambda = 0.
\]
Consequently,
\[
J(v_\lambda) = -8\pi\ln\int_S R(x)e^{u_\lambda}\,dA.
\]
Notice that $c^* \leq J(v_\lambda)$ by the definition of $c^*$. Thus, in order to show (5.33), we only need to verify that, for $\lambda$ sufficiently close to $1$,
\[
\int_S R(x)e^{u_\lambda}\,dA > 4\pi m. \tag{5.35}
\]
Let $\delta > 0$ be small. We have
\[
\begin{aligned}
\int_S Re^{u_\lambda}\,dA &= (1-\lambda^2)\Big\{\int_0^{2\pi}\!\!\int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}} \big[R(\theta,\phi)-m\big]\frac{\cos\theta}{(1-\lambda\sin\theta)^2}\,d\theta\,d\phi\\
&\qquad + \int_0^{2\pi}\!\!\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}-\delta} \big[R(\theta,\phi)-m\big]\frac{\cos\theta}{(1-\lambda\sin\theta)^2}\,d\theta\,d\phi\Big\} + 4\pi m\\
&= (1-\lambda^2)\{I + II\} + 4\pi m.
\end{aligned}
\]
Here we have used the fact that
\[
\int_S e^{u_\lambda}\,dA = 4\pi,
\]
which can be derived directly from equation (5.34) by integrating both sides over $S$.

For each fixed $\delta$, it is easy to check that the integral $II$ remains bounded as $\lambda$ tends to $1$. Thus (5.35) will hold if we can show that the integral $I\to+\infty$ as $\lambda\to 1$.
Notice that $x_o$ is a maximum point of the restriction of $R$ to $F_G$. Hence, by the principle of symmetric criticality, $x_o$ is a critical point of $R$, that is, $\nabla R(x_o) = 0$. Using this fact and the second order Taylor expansion of $R(x)$ near $x_o$, we can derive
\[
I = \pi\int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}} \Big[\Delta R(x_o)\cos^2\theta + o\big((\theta - \tfrac{\pi}{2})^2\big)\Big]\frac{\cos\theta}{(1-\lambda\sin\theta)^2}\,d\theta
\geq c_o\int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}}\frac{\cos^3\theta}{(1-\lambda\sin\theta)^2}\,d\theta.
\]
Here $c_o$ is a positive constant; we have chosen $\delta$ to be sufficiently small and have used the fact that $\Delta R(x_o)$ is positive. From the above inequality, one can easily verify that $I\to+\infty$ as $\lambda\to 1$, since the integral
\[
\int_{\frac{\pi}{2}-\delta}^{\frac{\pi}{2}}\frac{\cos^3\theta}{(1-\sin\theta)^2}\,d\theta
\]
diverges to $+\infty$.

This completes the proof of the Theorem. $\square$
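The logarithmic rate of this divergence is easy to confirm numerically: substituting $t = \pi/2 - \theta$ turns the integrand into $\sin^3 t/(1-\cos t)^2 \sim 4/t$ near $t = 0$. The sketch below (an added illustration, not part of the original text) checks that truncating the integral at $t = \varepsilon$ produces values growing like $4\ln(1/\varepsilon)$.

```python
import numpy as np

def I_trunc(eps, delta=0.5):
    """Integral of sin(t)^3/(1-cos(t))^2 over [eps, delta] (trapezoid rule on a
    log-spaced grid); equals the theta-integral over [pi/2-delta, pi/2-eps]."""
    t = np.geomspace(eps, delta, 200001)
    f = np.sin(t)**3/(1 - np.cos(t))**2
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(t))

# shrinking eps by a factor 100 should add roughly 4*ln(100), about 18.4
gap = I_trunc(1e-4) - I_trunc(1e-2)
assert gap > 15        # the truncated integrals grow without bound
```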
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds

Let $M$ be a compact two-dimensional Riemannian manifold. Let $g_o(x)$ be a metric on $M$ with the corresponding Laplace--Beltrami operator $\Delta$ and Gaussian curvature $K_o(x)$. Given a function $K(x)$ on $M$, can it be realized as the Gaussian curvature associated to the pointwise conformal metric $g(x) = e^{2u(x)}g_o(x)$? As we mentioned before, answering this question is equivalent to solving the following semilinear elliptic equation:
\[
-\Delta u + K_o(x) = K(x)e^{2u}, \quad x \in M. \tag{5.36}
\]
To simplify this equation, we introduce the following simple existence result.
Lemma 5.2.1 Let $f \in L^2(M)$. Then the equation
\[
-\Delta u = f(x), \quad x \in M \tag{5.37}
\]
possesses a solution if and only if
\[
\int_M f(x)\,dA = 0. \tag{5.38}
\]
Proof. The 'only if' part can be seen by integrating both sides of equation (5.37) over $M$.

We now prove the 'if' part. Assume that (5.38) holds; we will obtain the existence of a solution to (5.37). To this end, we consider the functional
\[
J(u) = \frac{1}{2}\int_M |\nabla u|^2\,dA - \int_M f(x)u\,dA
\]
under the constraint
\[
G = \Big\{u \in H^1(M)\;\Big|\;\int_M u(x)\,dA = 0\Big\}.
\]
As we have seen before, we can use in $G$ the equivalent norm
\[
\|u\| = \Big(\int_M |\nabla u|^2\,dA\Big)^{\frac{1}{2}}.
\]
By the Hölder and Poincaré inequalities, we derive
\[
\begin{aligned}
J(u) &\geq \frac{1}{2}\|u\|^2 - \|f\|_{L^2}\|u\|_{L^2}
\geq \frac{1}{2}\|u\|^2 - c_o\|f\|_{L^2}\|u\|\\
&= \frac{1}{2}\big(\|u\| - c_o\|f\|_{L^2}\big)^2 - \frac{c_o^2\|f\|^2_{L^2}}{2}.
\end{aligned} \tag{5.39}
\]
The above inequality implies immediately that the functional $J(u)$ is bounded from below on the set $G$.
Let $\{u_k\}$ be a minimizing sequence of $J$ in $G$, i.e.,
\[
J(u_k)\to\inf_G J, \quad\text{as } k\to\infty.
\]
Then again by (5.39), $\{u_k\}$ is bounded in $H^1$, and hence possesses a subsequence (still denoted by $\{u_k\}$) that converges weakly to some $u_o$ in $H^1(M)$. From the compact Sobolev embedding
\[
H^1 \hookrightarrow\hookrightarrow L^2,
\]
we see that $u_k\to u_o$ strongly in $L^2$, and hence also in $L^1$.
Therefore
\[
\int_M u_o(x)\,dA = \lim\int_M u_k(x)\,dA = 0, \tag{5.40}
\]
and
\[
\int_M f(x)u_o(x)\,dA = \lim\int_M f(x)u_k(x)\,dA. \tag{5.41}
\]
(5.40) implies that $u_o \in G$, and hence
\[
J(u_o) \geq \inf_G J(u). \tag{5.42}
\]
Meanwhile, (5.41) and the weak lower semi-continuity of the Dirichlet integral,
\[
\liminf\int_M |\nabla u_k|^2\,dA \geq \int_M |\nabla u_o|^2\,dA,
\]
lead to
\[
J(u_o) \leq \inf_G J(u).
\]
From this and (5.42) we conclude that $u_o$ is the minimum of $J$ in the set $G$, and therefore there exists a constant $\lambda$ (the Lagrange multiplier for the constraint) such that $u_o$ is a weak solution of
\[
-\Delta u_o - f(x) = \lambda.
\]
Integrating both sides and using (5.38), we find that $\lambda = 0$. Therefore $u_o$ is a solution of equation (5.37). This completes the proof of the Lemma. $\square$
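The Fourier method makes Lemma 5.2.1 concrete on the simplest closed manifold, the circle (a flat one-dimensional torus): a zero-mean $f$ is solved by dividing its Fourier coefficients by $k^2$, and a nonzero mean would obstruct the $k = 0$ mode, exactly as in (5.38). The sketch below is an added illustration, not part of the original text.

```python
import numpy as np

N = 256
x = 2*np.pi*np.arange(N)/N
f = np.cos(3*x) + 0.5*np.sin(x)          # a zero-mean right-hand side

k = np.fft.fftfreq(N, d=1.0/N)           # integer frequencies 0, 1, ..., -1
fh = np.fft.fft(f)
uh = np.zeros_like(fh)
nz = k != 0
uh[nz] = fh[nz]/k[nz]**2                 # invert -d^2/dx^2 mode by mode
u = np.real(np.fft.ifft(uh))             # the (zero-mean) solution

# check that -u'' = f, differentiating spectrally
minus_upp = np.real(np.fft.ifft(k**2*np.fft.fft(u)))
assert np.max(np.abs(minus_upp - f)) < 1e-10
```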
We now simplify equation (5.36). First let ũ = 2u; then
−Δũ + 2K_o(x) = 2K(x)e^{ũ}. (5.43)
Let v(x) be a solution of
−Δv = 2K_o(x) − 2K̄_o, (5.44)
where
K̄_o = (1/|M|)∫_M K_o(x) dA
is the average of K_o(x) on M. Because the integral of the right-hand side of (5.44) over M is 0, a solution of (5.44) exists by Lemma 5.2.1.
Let w = ũ + v. Then it is easy to verify that
−Δw + 2K̄_o = 2K(x)e^{−v}e^{w}. (5.45)
Finally, letting
α = 2K̄_o and R(x) = 2K(x)e^{−v(x)},
and rewriting w as u, we reduce equation (5.36) to
−Δu + α = R(x)e^{u(x)}, x ∈ M. (5.46)
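The verification left to the reader is a one-line computation: substituting w = ũ + v and using (5.43) and (5.44),

```latex
-\Delta w = -\Delta\tilde{u} - \Delta v
          = \bigl(2K(x)e^{\tilde{u}} - 2K_o(x)\bigr) + \bigl(2K_o(x) - 2\bar{K}_o\bigr)
          = 2K(x)e^{-v}e^{w} - 2\bar{K}_o ,
```

since ũ = w − v; this is exactly (5.45).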
By the well-known Gauss–Bonnet Theorem, if g is a Riemannian metric on M with corresponding Gaussian curvature K_g(x), then
∫_M K_g(x) dA_g = 2πχ(M),
where χ(M) is the Euler characteristic of M, and dA_g is the area element associated to the metric g.
We say M is negatively curved if χ(M) < 0, and hence α < 0.
5.2.1 Kazdan and Warner’s Results. Method of Lower and Upper Solutions
Let M be a compact two-dimensional manifold and let α < 0. In [KW], Kazdan and Warner considered the equation
−Δu + α = R(x)e^{u(x)}, x ∈ M (5.47)
and proved
Theorem 5.2.1 (Kazdan–Warner) Let R̄ be the average of R(x) on M. Then
i) A necessary condition for (5.46) to have a solution is R̄ < 0.
ii) If R̄ < 0, then there is a constant α_o, with −∞ ≤ α_o < 0, such that one can solve (5.46) if α > α_o, but cannot solve (5.46) if α < α_o.
iii) α_o = −∞ if and only if R(x) ≤ 0.
The existence of a solution was established by the method of upper and lower solutions. We say that u_+ is an upper solution of equation (5.47) if
−Δu_+ + α ≥ R(x)e^{u_+}, x ∈ M.
Similarly, we say that u_- is a lower solution of (5.47) if
−Δu_- + α ≤ R(x)e^{u_-}, x ∈ M.
The following approach is adapted from [KW] with minor modifications.
Lemma 5.2.2 If a solution of (5.47) exists, then R̄ < 0. Even more strongly, the unique solution φ of
Δφ + αφ = R(x), x ∈ M (5.48)
must be positive.
Proof. We first prove the second part. Using the substitution v = e^{−u}, one sees that (5.47) has a solution u if and only if there is a positive solution v of
Δv + αv − R(x) − |∇v|²/v = 0. (5.49)
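Indeed, with v = e^{−u} a direct computation (a step worth recording) gives

```latex
\begin{aligned}
\nabla v &= -v\,\nabla u, \qquad
\Delta v = e^{-u}\bigl(|\nabla u|^{2}-\Delta u\bigr)=\frac{|\nabla v|^{2}}{v}-v\,\Delta u,\\
\Delta v+\alpha v-R(x)-\frac{|\nabla v|^{2}}{v}
 &= -v\,\Delta u+\alpha v-R(x)
 = v\bigl(R(x)e^{u}-\alpha\bigr)+\alpha v-R(x)=0,
\end{aligned}
```

where the last line uses −Δu = R(x)e^{u} − α from (5.47) and the identity v e^{u} = 1.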
Assume that v is such a positive solution. Let φ be the unique solution of (5.48) and let w = φ − v. Then
Δw + αw = −|∇v|²/v ≤ 0. (5.50)
It follows that w ≥ 0. Otherwise, there would be a point x_o ∈ M such that w(x_o) < 0 and x_o is a minimum of w, hence Δw(x_o) ≥ 0. Consequently, we would have
Δw(x_o) + αw(x_o) > 0,
a contradiction with (5.50). Therefore w ≥ 0 and φ ≥ v > 0. This shows that a necessary condition for (5.47) to have a solution is that the unique solution φ of (5.48) be positive. Integrating both sides of (5.48), we see immediately that R̄ < 0. This completes the proof of the Lemma. □
Lemma 5.2.3 (The Existence of Upper Solutions)
i) If R(x) ≤ 0 but R is not identically 0, then there exists an upper solution u_+ of (5.47).
ii) If R̄ < 0, then there exists a small δ > 0 such that for −δ ≤ α < 0 there exists an upper solution u_+ of (5.47).
Proof. Let v be a solution of
−Δv = R(x) − R̄.
We try to choose constants a and b such that u_+ = av + b is an upper solution. We have
−Δu_+ + α − R(x)e^{u_+} = aR(x) − aR̄ + α − R(x)e^{av+b}. (5.51)
We want to choose the constants a and b so that the right-hand side of (5.51) is nonnegative.
In Case i), when R(x) ≤ 0, we regroup the right-hand side of (5.51) as
f(a, b, x) ≡ (−aR̄ + α) − R(x)(e^{av+b} − a). (5.52)
For any given α < 0, we first choose a sufficiently large so that −aR̄ + α > 0. Then we choose b large so that
e^{av(x)+b} − a ≥ 0, ∀ x ∈ M.
This is possible because v(x) is continuous and M is compact. It follows from (5.51) and (5.52) that
−Δu_+ + α − R(x)e^{u_+} ≥ 0.
In Case ii), we choose α ≥ aR̄/2 and e^{b} = a, where a is a positive number to be determined later. Then (5.52) becomes
f(a, b, x) ≥ −aR̄/2 − aR(x)(e^{av} − 1)
≥ a(−R̄/2 − ‖R‖_{L∞}|e^{av} − 1|)
= a‖R‖_{L∞}(−R̄/(2‖R‖_{L∞}) − |e^{av} − 1|).
Again by the continuity of v and the compactness of M, we see that
e^{av(x)} → 1 uniformly on M as a → 0.
Therefore, we can make
|e^{av(x)} − 1| ≤ −R̄/(2‖R‖_{L∞}) for all x ∈ M
by choosing a sufficiently small. Thus u_+ so constructed is an upper solution of (5.47) for α ≥ aR̄/2. This completes the proof of the Lemma. □
Lemma 5.2.4 (Existence of Lower Solutions) Given any R ∈ L^p(M) and a function u_+ ∈ W^{2,p}(M), there is a lower solution u_- ∈ W^{2,p}(M) of (5.47) such that u_- < u_+.
Proof. If R(x) is bounded from below, then one can clearly use any sufficiently large negative constant as u_-. For general R(x) ∈ L^p(M), let K(x) = max{1, −R(x)}. Choose a positive constant a so that aK̄ = −α, where K̄ is the average of K on M. Then aK(x) + α has zero average and (aK(x) + α) ∈ L^p(M). Thus there is a solution w of
Δw = aK(x) + α.
By the L^p regularity theory, w ∈ W^{2,p}(M) and hence is continuous. Let u_- = w − λ. Then
−Δu_- + α − R(x)e^{u_-} = −Δw + α − R(x)e^{w−λ} = −aK(x) − R(x)e^{w−λ} ≤ −K(x)(a − e^{w−λ}). (5.53)
Choosing λ sufficiently large so that
a − e^{w−λ} ≥ 0,
and noticing that K(x) ≥ 0, by (5.53) we arrive at
−Δu_- + α − R(x)e^{u_-} ≤ 0.
Therefore u_- is a lower solution. Moreover, one can make λ large enough that u_- = w − λ < u_+. This completes the proof of the Lemma. □
Lemma 5.2.5 Let p > 2. If there exist upper and lower solutions u_+, u_- ∈ W^{2,p}(M) of (5.47), and if u_- ≤ u_+, then there is a solution u ∈ W^{2,p}(M). Moreover, u_- ≤ u ≤ u_+, and u is C^∞ in any open set where R(x) is C^∞.
Proof. We will rewrite equation (5.47) as
Lu = f(x, u),
then start from the upper solution u_+ and apply the iterations
Lu_1 = f(x, u_+), Lu_{i+1} = f(x, u_i), i = 1, 2, ....
In order that u_- ≤ u_i ≤ u_+ for each i, we need the operator L to obey some maximum principle and f(x, ·) to possess some monotonicity. The Laplace operator in the original equation does not obey a maximum principle; however, if we let
Lu = −Δu + k(x)u,
where k(x) is any positive function on M, then the new operator L obeys the maximum principle:
If Lu ≥ 0, then u(x) ≥ 0 for all x ∈ M.
To see this, suppose on the contrary that there is some point on M where u < 0. Let x_o be a negative minimum of u; then −Δu(x_o) ≤ 0, and it follows that
−Δu(x_o) + k(x_o)u(x_o) ≤ k(x_o)u(x_o) < 0,
a contradiction. Hence the maximum principle holds for L. We now write equation (5.47) as
Lu ≡ −Δu + k(x)u = R(x)e^{u} − α + k(x)u := f(x, u).
Based on the maximum principle for L, keeping u_- ≤ u_i ≤ u_+ requires that f(x, ·) be monotone increasing, that is,
∂f/∂u = R(x)e^{u} + k(x) ≥ 0.
To this end, we choose
k(x) = k_1(x)e^{u_+(x)}, where k_1(x) = max{1, −R(x)}.
Based on this choice of L and f, we show by mathematical induction that the iteration sequence so constructed is monotone decreasing:
u_{i+1} ≤ u_i ≤ ⋯ ≤ u_1 ≤ u_+, i = 1, 2, .... (5.54)
In fact, from
Lu_+ ≥ f(x, u_+) and Lu_1 = f(x, u_+)
we derive immediately
L(u_+ − u_1) ≥ 0.
Then the maximum principle for L implies u_+ ≥ u_1. This completes the first step of our induction. Assume that u_{m+1} ≤ u_m ≤ u_+ for some integer m; we will show that u_{m+2} ≤ u_{m+1}. Since f(x, ·) is monotone increasing in u for u ≤ u_+, we have
f(x, u_{m+1}) ≤ f(x, u_m),
and hence L(u_{m+1} − u_{m+2}) ≥ 0. Therefore u_{m+1} ≥ u_{m+2}. This verifies (5.54) by induction. Similarly, one can establish the other side of the inequality and arrive at
u_- ≤ u_{i+1} ≤ u_i ≤ ⋯ ≤ u_+, i = 1, 2, .... (5.55)
Since u_+, u_-, and the u_i are continuous, inequality (5.55) shows that {|u_i|} is uniformly bounded. Consequently, in the L^p norm,
‖Lu_{i+1}‖_{L^p} = ‖R(x)e^{u_i} − α + k(x)u_i‖_{L^p} ≤ C.
Hence {u_i} is bounded in W^{2,p}(M). By the compact embedding of W^{2,p} into C¹, there is a subsequence that converges uniformly to some function u in C¹(M). In view of the monotonicity (5.55), we conclude that the entire sequence {u_i} itself converges uniformly to u. Moreover,
‖u_{i+1} − u_{j+1}‖_{W^{2,p}} ≤ C‖L(u_{i+1} − u_{j+1})‖_{L^p} ≤ C(‖R‖_{L^p}‖e^{u_i} − e^{u_j}‖_{L∞} + ‖k‖_{L^p}‖u_i − u_j‖_{L∞}).
Therefore {u_i} converges strongly in W^{2,p}, so u ∈ W^{2,p}. Since L : W^{2,p} → L^p is continuous, it follows that u is a solution of (5.47) and satisfies u_- ≤ u ≤ u_+.
By the Schauder theory, one proves inductively that u is C^∞ in any open set where R(x) is C^∞. More precisely, if R ∈ C^{m+γ}, then u ∈ C^{m+2+γ}. This completes the proof of the Lemma. □
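The iteration in the proof is entirely constructive, so it can be run numerically. The sketch below does this on the flat torus T² (standing in for M; the particular R, α, and grid are assumptions made for the example), with a constant k ≥ −R(x)e^{u_+} so that L = −Δ + k inverts by FFT and f(x, ·) is increasing below the constant upper solution u_+ ≡ 1.

```python
import numpy as np

# Monotone iteration  L u_{i+1} = f(x, u_i)  for  -Δu + α = R(x) e^u  on the
# flat torus, starting from the constant upper solution u_+ = 1.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, _ = np.meshgrid(x, x, indexing="ij")
R = -1.0 + 0.3 * np.cos(X)            # R(x) <= -0.7 < 0 everywhere
alpha = -1.0                          # then -Δ(1) + α = -1 >= R e^1, so u_+ = 1

k = np.max(-R) * np.e                 # k >= -R e^{u_+}: f(x,.) increases below u_+
freq = np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(freq, freq, indexing="ij")
symbol = KX**2 + KY**2 + k            # Fourier symbol of L = -Δ + k

u = np.ones((n, n))
for _ in range(300):                  # u_{i+1} = L^{-1}(R e^{u_i} - α + k u_i)
    u = np.real(np.fft.ifft2(np.fft.fft2(R * np.exp(u) - alpha + k * u) / symbol))

residual = np.real(np.fft.ifft2((KX**2 + KY**2) * np.fft.fft2(u))) \
           + alpha - R * np.exp(u)
print(np.abs(residual).max())         # ≈ 0: the limit solves the equation
print(u.max())                        # the limit stays below u_+ = 1
```

In exact arithmetic the iterates decrease monotonically from u_+, as in (5.54); here the limit also stays above any constant lower solution (any c with e^{c} ≤ 1/max|R| works).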
Based on these Lemmas, we are now able to complete the proof of Theorem 5.2.1.
Proof of Theorem 5.2.1.
Part i) follows directly from Lemma 5.2.2.
By Lemmas 5.2.4 and 5.2.5, if there is an upper solution, then there is a solution. Hence by Lemma 5.2.3, there exists a δ > 0 such that (5.47) is solvable for −δ ≤ α < 0. Let
α_o = inf{α | (5.47) is solvable}.
Then for any 0 > α > α_o, one can solve (5.47). In fact, if (5.47) possesses a solution u_1 for some α_1, then for any α > α_1 we have
−Δu_1 + α > R(x)e^{u_1},
that is, u_1 is an upper solution for equation (5.47) at α. Hence the equation
−Δu + α = R(x)e^{u}
also has a solution.
Lemma 5.2.3 shows that α_o = −∞ if R(x) ≤ 0 but R(x) is not identically 0. Now, to complete the proof of the Theorem, it suffices to prove that if R(x) is positive somewhere, then α_o > −∞. By virtue of Lemma 5.2.2, we only need to show the
Claim: There exists α < 0 such that the unique solution of
Δφ + αφ = R(x) (5.56)
is negative somewhere.
Actually, we can prove a stronger result:
αφ(x) → R(x) almost everywhere as α → −∞. (5.57)
We argue in the following two steps.
i) For a given α < 0, let u = u(x, α) be the solution of (5.56). Multiplying both sides of equation (5.56) by u and integrating over M, we obtain
−∫_M |∇u|² dA + α∫_M u² dA = ∫_M R(x)u(x) dA.
Consequently,
‖u‖²_{L²} ≤ (1/α)∫_M Ru dA.
Then by the Hölder inequality,
‖u‖_{L²} ≤ (1/|α|)‖R‖_{L²}.
This implies that
u(x, α) → 0 almost everywhere as α → −∞. (5.58)
Since for any α < 0 the operator Δ + α is invertible, we can express
u = (Δ + α)^{−1}R(x).
From the above reasoning one can see that, as α → −∞,
if R ∈ L², then (Δ + α)^{−1}R(x) → 0 almost everywhere. (5.59)
ii) To see (5.57), we write
−αφ + R(x) = (Δ + α)^{−1}ΔR. (5.60)
One can see the validity of (5.60) by simply applying the operator Δ + α to both sides.
Now, by the assumption, ΔR ∈ L², and hence (5.59) implies that
−αφ + R(x) → 0 a.e.,
or
φ(x) → (1/α)R(x) a.e.
Since R(x) is continuous and positive somewhere, for sufficiently negative α, φ(x) is negative somewhere.
This completes the proof of the Theorem.
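Step ii) can be observed numerically. The sketch below computes φ = (Δ + α)^{−1}R on the flat torus by FFT (an illustrative stand-in for M; the choice of R and of the values of α are assumptions for the example) and checks both the limiting behavior αφ → R and the conclusion of the Claim, namely that φ becomes negative where R is positive.

```python
import numpy as np

# φ = (Δ + α)^{-1} R on the flat torus; as α → -∞, αφ → R, so φ ≈ R/α
# is negative wherever R is positive (recall α < 0).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.cos(X) + 0.5 * np.sin(Y)        # continuous, positive somewhere

freq = np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(freq, freq, indexing="ij")

def phi(alpha):                        # symbol of Δ + α is -|k|² + α, never zero
    return np.real(np.fft.ifft2(np.fft.fft2(R) / (-(KX**2 + KY**2) + alpha)))

for alpha in (-1e2, -1e3, -1e4):
    print(alpha, np.abs(alpha * phi(alpha) - R).max())   # shrinks like 1/|α|

assert np.all(phi(-1e4)[R > 0.5] < 0)  # φ < 0 where R is positive
```

The printed error is exactly the size of (Δ + α)^{−1}ΔR from (5.60), which decays like ‖ΔR‖/|α|.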
5.2.2 Chen and Li’s Results
From the previous section one can see that, by Kazdan and Warner’s result, equation (5.46) has been solved completely in the case when R(x) is nonpositive; i.e., a necessary and sufficient condition for equation (5.46) to have a solution for every 0 > α > −∞ is R(x) ≤ 0. However, in the case when R(x) changes sign, we have α_o > −∞, and hence some questions remain. In particular, one may naturally ask:
Does equation (5.46) have a solution for the critical value α = α_o?
In [CL1], Chen and Li answered this question affirmatively.
Theorem 5.2.2 (Chen–Li) Assume that R(x) is a continuous function on M. Let α_o be defined as in the previous section. Then equation (5.46) has at least one solution when α = α_o.
Proof. The Outline. Let {α_k} be a sequence of numbers such that
α_k > α_o and α_k → α_o as k → ∞.
We will prove the Theorem in three steps.
In Step 1, for each fixed α_k we minimize the functional
J_k(u) = (1/2)∫_M |∇u|² dA + α_k∫_M u dA − ∫_M R(x)e^{u} dA
in a class of functions that lie between a lower and an upper solution. Let the minimizer be u_k. Then we let α_k → α_o.
In Step 2, we show that the sequence of minimizers {u_k} is bounded in the region where R(x) is positively bounded away from 0.
In Step 3, we prove that {u_k} is bounded in H¹(M) and hence converges to the desired solution.
Step 1. For each α_k > α_o, by Kazdan and Warner there exists a solution φ_k of
−Δφ_k + α_k = R(x)e^{φ_k}. (5.61)
We first show that there exists a constant c_o > 0 such that
φ_k(x) ≥ −c_o, ∀ x ∈ M and ∀ k = 1, 2, .... (5.62)
Otherwise, for each k, let x_k be a minimum of φ_k(x) on M. Then, obviously,
φ_k(x_k) → −∞ as k → ∞.
On the other hand, since x_k is a minimum, we have
−Δφ_k(x_k) ≤ 0, ∀ k = 1, 2, ....
These contradict equation (5.61), because
α_k → α_o < 0 as k → ∞.
This verifies (5.62).
Choose a sufficiently negative constant A < −c_o to be a lower solution of equation (5.46) for all α_k, that is,
−ΔA + α_k ≤ R(x)e^{A}.
This is possible because R(x) is bounded below on M, and hence we can make R(x)e^{A} greater than any fixed negative number.
For each fixed α_k, choose α̃_k with α_o < α̃_k < α_k. By the result of Kazdan and Warner, there exists a solution of equation (5.46) for α = α̃_k; call it ψ_k. Then
−Δψ_k + α_k > −Δψ_k + α̃_k = R(x)e^{ψ_k(x)},
i.e., ψ_k is a supersolution of the equation
−Δu + α_k = R(x)e^{u(x)}. (5.63)
Inspired by an idea of Ding and Liu [DL], we minimize the functional J_k(u) in the class of functions
H = {u ∈ C¹(M) | A ≤ u ≤ ψ_k}.
This is possible because the functional J_k(u) is bounded from below. In fact, since M is compact and R(x) is continuous, there exists a constant C_1 such that
|R(x)| ≤ C_1, ∀ x ∈ M.
Consequently, for any u ∈ H,
J_k(u) ≥ (1/2)∫_M |∇u|² dA + α_k∫_M u dA − C_1∫_M e^{ψ_k} dA ≥ (1/2)(‖u‖ − C_2)² − C_3. (5.64)
Here we have used the Hölder and Poincaré inequalities. From (5.64) one can easily see that J_k is bounded from below. Also by (5.64) and an argument similar to that of the previous section, we can show that a minimizing sequence has a subsequence which converges weakly to a minimum u_k of J_k(u) in the set H. In order to show that u_k is a solution of equation (5.63), we need it to be in the interior of H, i.e.,
A < u_k(x) < ψ_k(x), ∀ x ∈ M. (5.65)
To verify this, we use a maximum principle.
First we show that
u_k(x) < ψ_k(x), ∀ x ∈ M. (5.66)
Suppose on the contrary that there exists x_o ∈ M such that u_k(x_o) = ψ_k(x_o); we will derive a contradiction. Since u_k(x_o) > A, there is a δ > 0 such that
A < u_k(x), ∀ x ∈ B_δ(x_o).
It follows that, for any positive function v(x) with support in B_δ(x_o), there exists an ε > 0 such that u_k + tv ∈ H for −ε ≤ t ≤ 0. Since u_k is a minimum of J_k in H, the function g(t) = J_k(u_k + tv) in particular achieves its minimum on the interval [−ε, 0] at t = 0. Hence g′(0) ≤ 0. This implies that
∫_M [−Δu_k + α_k − R(x)e^{u_k}]v(x) dA ≤ 0. (5.67)
Consequently,
−Δu_k(x_o) + α_k − R(x_o)e^{u_k(x_o)} ≤ 0. (5.68)
Let w(x) = ψ_k(x) − u_k(x). Then w(x) ≥ 0, and x_o is a minimum of w. It follows that −Δw(x_o) ≤ 0. On the other hand, by (5.68) we have
−Δw(x_o) = −Δψ_k(x_o) − (−Δu_k(x_o)) ≥ −α̃_k + α_k > 0,
again a contradiction. Therefore we must have
u_k(x) < ψ_k(x), ∀ x ∈ M.
Similarly, one can show that
A < u_k(x), ∀ x ∈ M.
This verifies (5.65). Now for any v ∈ C¹(M), u_k + tv is in H for all sufficiently small |t|, and hence (d/dt)J_k(u_k + tv)|_{t=0} = 0. Then a direct computation, as we have done before, leads to
−Δu_k + α_k = R(x)e^{u_k}, ∀ x ∈ M.
Step 2. It is obvious that the sequence {u_k} obtained in the previous step is uniformly bounded from below by the negative constant A. To show that it is also bounded from above, we first consider the region where R(x) is positively bounded away from 0. We will use a sup + inf inequality of Brezis, Li, and Shafrir [BLS]:
Lemma 5.2.6 (Brezis, Li, and Shafrir) Assume that V is a Lipschitz function satisfying
0 < a ≤ V(x) ≤ b < ∞,
and let K be a compact subset of a domain Ω in R². Then any solution u of the equation
−Δu = V(x)e^{u}, x ∈ Ω,
satisfies
sup_K u + inf_Ω u ≤ C(a, b, ‖∇V‖_{L∞}, K, Ω). (5.69)
Let x_o be a point on M at which R(x_o) > 0. Choose ε so small that
R(x) ≥ a > 0, ∀ x ∈ Ω ≡ B_{2ε}(x_o),
for some constant a.
Let v_k be the solution of
−Δv_k − α_k = 0, x ∈ Ω; v_k(x) = 1, x ∈ ∂Ω.
Let w_k = u_k + v_k. Then w_k satisfies
−Δw_k = R(x)e^{−v_k}e^{w_k}.
By the maximum principle, the sequence {v_k} is bounded from above and from below in the small ball Ω. Since {u_k} is uniformly bounded from below, {w_k} is also bounded from below. Locally, the metric is pointwise conformal to the Euclidean metric, hence one can apply Lemma 5.2.6 to {w_k} to conclude that the sequence {w_k} is also uniformly bounded from above in the smaller ball B_ε(x_o). Since x_o is an arbitrary point where R is positive, we conclude that {w_k} is uniformly bounded in the region where R(x) is positively bounded away from 0. The uniform bound for {u_k} in the same region then follows suit.
Step 3. We show that the sequence {u_k} is bounded in H¹(M).
Choose a small δ > 0 such that the set
D = {x ∈ M | R(x) > δ}
is nonempty. It is open since R(x) is continuous. From the previous step, we know that {u_k} is uniformly bounded in D.
Let K(x) be a smooth function such that
K(x) < 0 for x ∈ D, and K(x) = 0 elsewhere.
For each α_k, let v_k be the unique solution of
−Δv_k + α_k = K(x)e^{v_k}, x ∈ M.
Then, since
−Δv_k + α_k ≤ 0 and α_k → α_o,
the sequence {v_k} is uniformly bounded.
Let w_k = u_k − v_k; then
−Δw_k = R(x)e^{u_k} − K(x)e^{v_k}. (5.70)
We show that {w_k} is bounded in H¹(M).
On one hand, multiplying both sides of (5.70) by e^{w_k} and integrating over M, we have
∫_M |∇w_k|²e^{w_k} dA = ∫_M R(x)e^{u_k+w_k} dA − ∫_D K(x)e^{u_k} dA. (5.71)
On the other hand, since each u_k is a minimizer of the functional J_k, the second variation J_k″(u_k) is nonnegative. It follows that for any function φ ∈ H¹(M),
∫_M [|∇φ|² − R(x)e^{u_k}φ²] dA ≥ 0.
Choosing φ = e^{w_k/2}, we derive
(1/4)∫_M |∇w_k|²e^{w_k} dA ≥ ∫_M R(x)e^{u_k+w_k} dA. (5.72)
Combining (5.71) and (5.72), we arrive at
−∫_D K(x)e^{u_k} dA ≥ (3/4)∫_M |∇w_k|²e^{w_k} dA. (5.73)
Noticing that {u_k} is uniformly bounded in the region D, and that {w_k} is bounded from below because {u_k} is bounded from below and {v_k} is bounded, (5.73) implies that ∫_M |∇w_k|² dA is bounded, and so is ∫_M |∇u_k|² dA.
Consider ũ_k = u_k − ū_k, where ū_k is the average of u_k. Obviously ∫_M ũ_k dA = 0, hence we can apply the Poincaré inequality to ũ_k and conclude that {ũ_k} is bounded in H¹(M). Apparently, we have
∫_M u_k² dA ≤ 2(∫_M ũ_k² dA + ū_k²|M|).
If ∫_M u_k² dA were unbounded, then {ū_k} would be unbounded. Taking into account that {u_k} is bounded in the region D, where R(x) is positively bounded away from 0, we would have
∫_D ũ_k² dA = ∫_D |u_k − ū_k|² dA → ∞
for some subsequence, a contradiction with the fact that {ũ_k} is bounded in H¹(M). Therefore {u_k} must also be bounded in H¹(M). Consequently, there exists a subsequence of {u_k} that converges to a function u_o in H¹(M), and u_o is the desired weak solution of
−Δu_o + α_o = R(x)e^{u_o}.
This completes the proof of the Theorem. □
6 Prescribing Scalar Curvature on S^n, for n ≥ 3
6.1 Introduction
6.2 The Variational Approach
6.2.1 Estimate the Values of the Functional
6.2.2 The Variational Scheme
6.3 The A Priori Estimates
6.3.1 In the Region Where R < 0
6.3.2 In the Region Where R is Small
6.3.3 In the Region Where R > 0
6.1 Introduction
On the higher-dimensional sphere S^n with n ≥ 3, Kazdan and Warner raised the following question:
Which functions R(x) can be realized as the scalar curvature of some conformally related metric? It is equivalent to consider the existence of a solution to the semilinear elliptic equation
−Δu + (n(n−2)/4)u = ((n−2)/(4(n−1)))R(x)u^{(n+2)/(n−2)}, u > 0, x ∈ S^n. (6.1)
The equation is “critical” in the sense that a lack of compactness occurs.
Besides the obvious necessary condition that R(x) be positive somewhere, there are well-known obstructions found by Kazdan and Warner [KW2] and later generalized by Bourguignon and Ezin [BE]. The conditions are
∫_{S^n} X(R) dV_g = 0, (6.2)
where dV_g is the volume element of the conformal metric g and X is any conformal vector field associated with the standard metric g_0. We call these Kazdan–Warner type conditions. These conditions give rise to many examples of R(x) for which equation (6.1) has no solution. In particular, for a monotone rotationally symmetric function R, the equation admits no solution.
In the last two decades, numerous studies have been dedicated to these problems, and various sufficient conditions have been found (see the articles [Ba] [BC] [C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li] [Li2] [Li3] [Mo] [SZ] [XY] and the references therein). However, one problem of common concern was left open, namely: are those Kazdan–Warner type necessary conditions also sufficient? In the case where R is rotationally symmetric, the conditions become:
R > 0 somewhere and R′ changes signs. (6.3)
Then:
(a) Is (6.3) a sufficient condition? and
(b) If not, what are the necessary and sufficient conditions?
Kazdan listed these as open problems in his CBMS Lecture Notes [Ka].
Recently, we answered question (a) negatively [CL3] [CL4]. We found some stronger obstructions. Our results imply that for a rotationally symmetric function R, if it is monotone in the region where it is positive, then problem (6.1) admits no solution unless R is a constant.
In other words, a necessary condition to solve (6.1) is that
R′(r) changes signs in the region where R is positive. (6.4)
Now, is this a sufficient condition?
For the prescribing Gaussian curvature equation (5.3) on S², Xu and Yang [XY] showed that if R is ‘nondegenerate,’ then (6.4) is a sufficient condition.
For equation (6.1) on higher-dimensional spheres, a major difficulty is that multiple blow-ups may occur when approaching a solution by a minimax sequence of the functional or by solutions of subcritical equations.
In dimensions higher than 3, the ‘nondegeneracy’ condition is no longer sufficient to guarantee the existence of a solution. This was illustrated by a counterexample constructed by Bianchi in [Bi]. He found a positive rotationally symmetric function R on S⁴ which is nondegenerate and nonmonotone, and for which the problem admits no solution. In this situation, a more appropriate condition is the so-called ‘flatness condition.’ Roughly speaking, it requires that at every critical point of R the derivatives of R up to order (n − 2) vanish, while some derivative of order higher than (n − 2) but less than n is distinct from 0. For n = 3, the ‘nondegeneracy’ condition is a special case of the ‘flatness condition.’ Although people wonder whether the ‘flatness condition’ is necessary, it is still widely used today (see [CY3], [Li2], and [SZ]) as a standard assumption to guarantee the existence of a solution. Bianchi’s counterexample mentioned above seems to suggest that the ‘flatness condition’ is somewhat sharp.
Now, a natural question is:
Under the ‘flatness condition,’ is (6.4) a sufficient condition?
In this section, we answer the question affirmatively. We prove that, under the ‘flatness condition,’ (6.4) is a necessary and sufficient condition for (6.1) to have a solution. This is true in all dimensions n ≥ 3, and it applies to functions R with changing signs. Thus, we essentially answer the open question (b) posed by Kazdan.
There are many versions of ‘flatness conditions’; a general one was presented by Y. Li in [Li2]. Here, to better illustrate the idea, we only list a typical and easy-to-verify one in the statement of the following theorem.
Theorem 6.1.1 Let n ≥ 3. Let R = R(r) be rotationally symmetric and satisfy the following flatness condition near every positive critical point r_o:
R(r) = R(r_o) + a|r − r_o|^α + h(|r − r_o|), with a ≠ 0 and n − 2 < α < n, (6.5)
where h′(s) = o(s^{α−1}).
Then a necessary and sufficient condition for equation (6.1) to have a solution is that
R′(r) changes signs in the region where R > 0.
The theorem is proved by a variational approach. We blend our new ideas with ideas in [XY], [Ba], [Li2], and [SZ]. We use the ‘center of mass’ to define neighborhoods of ‘critical points at infinity,’ obtain some quite sharp and clean estimates in those neighborhoods, construct a max–mini variational scheme at subcritical levels, and then approach the critical level.
Outline of the proof of Theorem 6.1.1.
Let γ_n = n(n−2)/4 and τ = (n+2)/(n−2). We first find a positive solution u_p of the subcritical equation
−Δu + γ_n u = R(r)u^p (6.6)
for each p < τ close to τ. Then we let p → τ and take the limit.
To find a solution of equation (6.6), we construct a max–mini variational scheme. Let
J_p(u) := ∫_{S^n} R u^{p+1} dV
and
E(u) := ∫_{S^n} (|∇u|² + γ_n u²) dV.
We seek critical points of J_p(u) under the constraint
H = {u ∈ H¹(S^n) | E(u) = γ_n|S^n|, u ≥ 0},
where |S^n| is the volume of S^n.
If R has only one positive local maximum, then by condition (6.4), at each of the poles R is either nonpositive or has a local minimum. Then, similarly to the approach in [C], we seek a solution by minimizing the functional over a family of rotationally symmetric functions in H.
In the following, we assume that R has at least two positive local maxima. In this case, the solutions we seek are not necessarily symmetric.
Our scheme is based on the following key estimates on the values of the functional J_p(u) in a neighborhood of the ‘critical points at infinity’ associated with each local maximum of R. Let r_1 be a positive local maximum of R. We prove that there is an open set G_1 ⊂ H (independent of p) such that on the boundary ∂G_1 of G_1 we have
J_p(u) ≤ R(r_1)|S^n| − δ, (6.7)
while there is a function ψ_1 ∈ G_1 such that
J_p(ψ_1) > R(r_1)|S^n| − δ/2. (6.8)
Here δ > 0 is independent of p. Roughly speaking, we have some kind of ‘mountain pass’ associated to each local maximum of R. The set G_1 is defined by using the ‘center of mass.’
Let r_1 and r_2 be the two smallest positive local maxima of R. Let ψ_1 and ψ_2 be the two functions defined by (6.8) associated to r_1 and r_2. Let γ be a path in H connecting ψ_1 and ψ_2, and let Γ be the family of all such paths.
Define
c_p = sup_{γ∈Γ} min_{u∈γ} J_p(u). (6.9)
For each p < τ, by compactness, there is a critical point u_p of J_p(·) such that
J_p(u_p) = c_p.
Obviously, a constant multiple of u_p is a solution of (6.6). Moreover, by (6.7),
J_p(u_p) ≤ R(r_i)|S^n| − δ (6.10)
for any positive local maximum r_i of R.
To find a solution of (6.1), we let p → τ and take the limit. To show the convergence of a subsequence of {u_p}, we establish a priori bounds for the solutions in the following order.
(i) In the region where R < 0: This is done by the ‘Kelvin transform’ and a maximum principle.
(ii) In the region where R is small: This is mainly due to the boundedness of the energy E(u_p).
(iii) In the region where R is positively bounded away from 0: First, due to the energy bound, {u_p} can blow up at no more than finitely many points. Using a Pohozaev-type identity, we show that the sequence can only blow up at one point, and that the point must be a local maximum of R. Finally, we use (6.10) to argue that even one-point blow-up is impossible, and thus establish an a priori bound for a subsequence of {u_p}.
In Subsection 2, we carry out the max–mini variational scheme and obtain a solution u_p of (6.6) for each p.
In Subsection 3, we establish a priori estimates on the solution sequence {u_p}.
6.2 The Variational Approach
In this section, we construct a max–mini variational scheme to find a solution of
−Δu + γ_n u = R(r)u^p (6.11)
for each p < τ := (n+2)/(n−2).
Let
J_p(u) := ∫_{S^n} R u^{p+1} dV
and
E(u) := ∫_{S^n} (|∇u|² + γ_n u²) dV.
Let
(u, v) = ∫_{S^n} (∇u·∇v + γ_n uv) dV
be the inner product in the Hilbert space H¹(S^n), and let
‖u‖ := √(E(u))
be the corresponding norm.
We seek critical points of J_p(u) under the constraint
H := {u ∈ H¹(S^n) | E(u) = E(1) = γ_n|S^n|, u ≥ 0},
where |S^n| is the volume of S^n.
One can easily see that a critical point of J_p in H, multiplied by a constant, is a solution of (6.11).
We divide the rest of the section into two parts.
First, we establish the key estimates (6.7) and (6.8).
Then, we carry out the max–mini variational scheme.
6.2.1 Estimate the Values of the Functional
To construct a max–mini variational scheme, we first show that there is some kind of ‘mountain pass’ associated to each positive local maximum of R. Unlike the classical ones, these ‘mountain passes’ are in some neighborhoods of the ‘critical points at infinity’ (see Propositions 6.2.2 and 6.2.1 below).
Choose a coordinate system in R^{n+1} so that the south pole of S^n is at the origin O and the center of the ball B^{n+1} is at (0, ..., 0, 1). As usual, we use | · | to denote the distance in R^{n+1}.
Define the center of mass of u as
q(u) := ∫_{S^n} x u^{τ+1}(x) dV / ∫_{S^n} u^{τ+1}(x) dV. (6.12)
It is precisely the center of mass of the sphere S^n with density u^{τ+1}(x) at the point x.
We recall some well-known facts from conformal geometry.
Let φ_q be the standard solution with its ‘center of mass’ at q ∈ B^{n+1}; that is, φ_q ∈ H and satisfies
−Δu + γ_n u = γ_n u^τ. (6.13)
We may also regard φ_q as depending on two parameters λ and q̃, where q̃ is the intersection of S^n with the ray passing through the center and the point q. When q̃ = O (the south pole of S^n), we can express, in the spherical polar coordinates x = (r, θ) of S^n (0 ≤ r ≤ π, θ ∈ S^{n−1}) centered at the south pole where r = 0,
φ_q = φ_{λ,q̃} = (λ/(λ² cos²(r/2) + sin²(r/2)))^{(n−2)/2}, (6.14)
with 0 < λ ≤ 1. As λ → 0, this family of functions blows up at the south pole, while tending to zero elsewhere.
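One can check (6.13)–(6.14) numerically. For a radial function on S^n one has Δφ = φ″ + (n − 1) cot(r) φ′; the sketch below verifies by finite differences, for the illustrative choices n = 3 and λ = 0.5, that the family (6.14) satisfies −Δφ + γ_n φ = γ_n φ^τ.

```python
import numpy as np

# The standard bubble (6.14) solves -Δφ + γ_n φ = γ_n φ^τ on S^n.
# For radial functions on the sphere, Δφ = φ'' + (n - 1) cot(r) φ'.
n, lam = 3, 0.5                        # illustrative dimension and parameter
gam = n * (n - 2) / 4.0
tau = (n + 2) / (n - 2)

def phi(r):
    return (lam / (lam**2 * np.cos(r / 2)**2 + np.sin(r / 2)**2))**((n - 2) / 2)

r = np.linspace(0.3, np.pi - 0.3, 200) # stay away from the coordinate poles
h = 1e-4                               # finite-difference step
d1 = (phi(r + h) - phi(r - h)) / (2 * h)
d2 = (phi(r + h) - 2 * phi(r) + phi(r - h)) / h**2
laplacian = d2 + (n - 1) / np.tan(r) * d1
residual = -laplacian + gam * phi(r) - gam * phi(r)**tau
print(np.abs(residual).max())          # ≈ 0 up to finite-difference error
```

The same check passes for smaller λ, while φ(0) = (1/λ)^{(n−2)/2} grows, matching the blow-up at the south pole described above.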
Correspondingly, there is a family of conformal transforms
T_q : H → H, T_q u := u(h_λ(x))[det(dh_λ)]^{(n−2)/(2n)}, (6.15)
with
h_λ(r, θ) = (2 tan^{−1}(λ tan(r/2)), θ).
Here det(dh_λ) is the determinant of dh_λ, and it can be expressed explicitly as
det(dh_λ) = (λ/(cos²(r/2) + λ² sin²(r/2)))^n.
It is well known that this family of conformal transforms leaves equation (6.13), the energy E(·), and the integral ∫_{S^n} u^{τ+1} dV invariant. More generally and more precisely, we have
Lemma 6.2.1 For any u, v ∈ H¹(S^n) and for any nonnegative integer k, the following hold:
(i) (T_q u, T_q v) = (u, v); (6.16)
(ii) ∫_{S^n} (T_q u)^k (T_q v)^{τ+1−k} dV = ∫_{S^n} u^k v^{τ+1−k} dV. (6.17)
In particular,
E(T_q u) = E(u) and ∫_{S^n} (T_q u)^{τ+1} dV = ∫_{S^n} u^{τ+1} dV.
Proof. (i) We will use a property of the conformal Laplacian to prove the invariance of the energy,
E(T_q u) = E(u);
the proof of the full identity (6.16) is similar.
On an n-dimensional Riemannian manifold (M, g),
L_g := Δ_g − c(n)R_g
is called the conformal Laplacian, where c(n) = (n−2)/(4(n−1)), and Δ_g and R_g are the Laplace–Beltrami operator and the scalar curvature associated with the metric g, respectively.
The conformal Laplacian has the following well-known invariance property under a conformal change of metric. For ĝ = w^{4/(n−2)}g, w > 0, we have
L_ĝ u = w^{−(n+2)/(n−2)} L_g(uw), for all u ∈ C^∞(M). (6.18)
Interested readers may find its proof in [Bes].
If M is a compact manifold without boundary, then multiplying both sides of (6.18) by u and integrating by parts, one arrives at
∫_M {|∇_ĝ u|² + c(n)R_ĝ|u|²} dV_ĝ = ∫_M {|∇_g(uw)|² + c(n)R_g|uw|²} dV_g. (6.19)
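The step from (6.18) to (6.19) hides a small exponent computation worth spelling out: since dV_ĝ = w^{2n/(n−2)} dV_g,

```latex
\int_M u\,L_{\hat g}u\,dV_{\hat g}
  = \int_M u\,w^{-\frac{n+2}{n-2}}\,L_g(uw)\,w^{\frac{2n}{n-2}}\,dV_g
  = \int_M (uw)\,L_g(uw)\,dV_g ,
```

because −(n+2)/(n−2) + 2n/(n−2) = 1; integrating both sides by parts then yields (6.19).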
In our situation, we choose g to be the Euclidean metric ḡ on R^n, with ḡ_ij = δ_ij, and ĝ = g_o, the standard metric on S^n. Then it is well known that
g_o = w^{4/(n−2)}(x) ḡ
with
w(x) = (4/(4 + |x|²))^{(n−2)/2}
satisfying the equation
−Δ̄w = γ_n w^{(n+2)/(n−2)}.
We also have
c(n)R_{g_o} = γ_n := n(n−2)/4
and
c(n)R_ḡ = c(n)·0 = 0.
Now formula (6.19) becomes
∫_{S^n} {|∇_o u|² + γ_n|u|²} dV_o = ∫_{R^n} |∇̄(uw)|² dx, (6.20)
where ∇_o and ∇̄ are the gradient operators associated with the metrics g_o and ḡ, respectively.
Again, let $\tilde q$ be the intersection of $S^n$ with the ray passing through the center of the sphere and the point $q$. Make a stereographic projection from $S^n$ to $R^n$, as shown in the figure below, where $\tilde q$ is placed at the origin of $R^n$.
Under this projection, the point $(r, \theta) \in S^n$ becomes $x = (x_1, \dots, x_n) \in R^n$, with
\[ \frac{|x|}{2} = \tan\frac{r}{2} \]
and
\[ (T_q u)(x) = u(\lambda x) \Big( \frac{\lambda (4 + |x|^2)}{4 + \lambda^2 |x|^2} \Big)^{\frac{n-2}{2}}. \]
It follows from this and (6.20) that
\[
\begin{aligned}
E(T_q u) &= \int_{R^n} \Big\{ \Big| \nabla_o \Big[ u(\lambda x) \Big( \frac{\lambda (4 + |x|^2)}{4 + \lambda^2 |x|^2} \Big)^{\frac{n-2}{2}} \Big] \Big|^2 + \gamma_n \Big| u(\lambda x) \Big( \frac{\lambda (4 + |x|^2)}{4 + \lambda^2 |x|^2} \Big)^{\frac{n-2}{2}} \Big|^2 \Big\}\, dV_{g_o} \\
&= \int_{R^n} \Big| \bar\nabla \Big[ u(\lambda x) \Big( \frac{\lambda (4 + |x|^2)}{4 + \lambda^2 |x|^2} \Big)^{\frac{n-2}{2}} w(x) \Big] \Big|^2\, dx \\
&= \int_{R^n} \Big| \bar\nabla \Big[ u(\lambda x) \Big( \frac{4\lambda}{4 + \lambda^2 |x|^2} \Big)^{\frac{n-2}{2}} \Big] \Big|^2\, dx \\
&= \int_{R^n} \Big| \bar\nabla \Big[ u(y) \Big( \frac{4}{4 + |y|^2} \Big)^{\frac{n-2}{2}} \Big] \Big|^2\, dy \\
&= \int_{R^n} | \bar\nabla (u(y) w(y)) |^2\, dy \\
&= E(u).
\end{aligned}
\]
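The two algebraic simplifications used in this chain (absorbing $w(x)$, then changing variables $y = \lambda x$) can be checked symbolically. Since all factors are positive and share the common exponent $\frac{n-2}{2}$, it suffices to compare the bases; the variable names below are ours:

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)   # s stands for |x|^2

base_Tqu = lam*(4 + s)/(4 + lam**2*s)   # base of the conformal factor of T_q u
base_w   = 4/(4 + s)                    # base of w(x)
base_out = 4*lam/(4 + lam**2*s)         # base obtained after multiplying by w(x)

# step 1: the product of the two bases collapses to 4*lam/(4 + lam^2 |x|^2)
assert sp.simplify(base_Tqu * base_w - base_out) == 0

# step 2: under y = lam*x (so |x|^2 = t/lam^2 with t = |y|^2), the factor
# becomes lam * 4/(4 + |y|^2); the extra power of lam is absorbed by the
# scaling invariance of the Dirichlet integral.
t = sp.Symbol('t', positive=True)
assert sp.simplify(base_out.subs(s, t/lam**2) - lam*4/(4 + t)) == 0
```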
(ii) This can be derived immediately by making the change of variable $y = h_\lambda(x)$.

This completes the proof of the lemma. $\Box$
One can also verify that
\[ T_q \phi_q = 1. \]
The relations between $q$ and $\lambda$, $\tilde q$, and $T_q$ for $\tilde q \ne O$ can be expressed in a similar way.
We now carry out the estimates near the south pole $(0, \theta)$, which we assume to be a positive local maximum. The estimates near other positive local maxima are similar. Our conditions on $R$ imply that
\[ R(r) = R(0) - a r^\alpha, \quad \text{for some } a > 0, \ n-2 < \alpha < n, \qquad (6.21) \]
in a small neighborhood of $O$.
Define
\[ \Sigma = \Big\{ u \in H \;\Big|\; |q(u)| \le \rho_o, \; \|v\| := \min_{t, q} \|u - t\phi_q\| \le \rho_o \Big\}. \qquad (6.22) \]
Notice that the 'centers of mass' of functions in $\Sigma$ are near the south pole $O$. This is a neighborhood of the 'critical points at infinity' corresponding to $O$. We will estimate the functional $J_p$ in this neighborhood.
We first notice that the supremum of $J_p$ in $\Sigma$ approaches $R(0)|S^n|$ as $p \to \tau$. More precisely, we have

Proposition 6.2.1 For any $\delta_1 > 0$, there is a $p_1 \le \tau$, such that for all $\tau \ge p \ge p_1$,
\[ \sup_{\Sigma} J_p(u) > R(0)|S^n| - \delta_1. \qquad (6.23) \]
Then we show that on the boundary of $\Sigma$, $J_p$ is strictly less than, and bounded away from, $R(0)|S^n|$.

Proposition 6.2.2 There exist positive constants $\rho_o$, $p_o$, and $\delta_o$, such that for all $p \ge p_o$ and all $u \in \partial\Sigma$, there holds
\[ J_p(u) \le R(0)|S^n| - \delta_o. \qquad (6.24) \]
Proof of Proposition 6.2.1.
Through a straightforward calculation, one can show that
\[ J_\tau(\phi_{\lambda, O}) \to R(0)|S^n|, \quad \text{as } \lambda \to 0. \qquad (6.25) \]
For a given $\delta_1 > 0$, by (6.25), one can choose $\lambda_o$ such that $\phi_{\lambda_o, O} \in \Sigma$ and
\[ J_\tau(\phi_{\lambda_o, O}) > R(0)|S^n| - \frac{\delta_1}{2}. \qquad (6.26) \]
It is easy to see that, for the fixed function $\phi_{\lambda_o, O}$, $J_p$ is continuous with respect to $p$. Hence, by (6.26), there exists a $p_1$ such that for all $p \ge p_1$,
\[ J_p(\phi_{\lambda_o, O}) > R(0)|S^n| - \delta_1. \]
This completes the proof of Proposition 6.2.1.
The proof of Proposition 6.2.2 is rather complex. We first estimate $J_p$ for a family of standard functions in $\Sigma$.

Lemma 6.2.2 For $p$ sufficiently close to $\tau$, and for $\lambda$ and $|\tilde q|$ sufficiently small, we have
\[ J_p(\phi_{\lambda, \tilde q}) \le \big( R(0) - C_1 |\tilde q|^\alpha \big) |S^n| (1 + o_p(1)) - C_1 \lambda^{\alpha + \delta_p (n-2)/2}, \qquad (6.27) \]
where $\delta_p := \tau - p$ and $o_p(1) \to 0$ as $p \to \tau$.
Proof of Lemma 6.2.2. Let $\epsilon > 0$ be small, such that (6.21) holds in $B_\epsilon(O) \subset S^n$. We express
\[ J_p(\phi_{\lambda, \tilde q}) = \int_{S^n} R(x)\, \phi_{\lambda, \tilde q}^{p+1}\, dV = \int_{B_\epsilon(O)} + \int_{S^n \setminus B_\epsilon(O)}. \qquad (6.28) \]
From the definition (6.14) of $\phi_{\lambda, \tilde q}$ and the boundedness of $R$, we obtain the smallness of the second integral:
\[ \int_{S^n \setminus B_\epsilon(O)} R(x)\, \phi_{\lambda, \tilde q}^{p+1}\, dV \le C_2 \lambda^{\,n - \delta_p (n-2)/2} \quad \text{for } \tilde q \in B_{\epsilon/2}(O). \qquad (6.29) \]
To estimate the first integral, we work in a local coordinate system centered at $O$:
\[
\begin{aligned}
\int_{B_\epsilon(O)} R(x)\, \phi_{\lambda, \tilde q}^{p+1}\, dV &= \int_{B_\epsilon(O)} R(x + \tilde q)\, \phi_{\lambda, O}^{p+1}\, dV \\
&\le \int_{B_\epsilon(O)} \big[ R(0) - a |x + \tilde q|^\alpha \big]\, \phi_{\lambda, O}^{p+1}\, dV \\
&\le \int_{B_\epsilon(O)} \big[ R(0) - C_3 |x|^\alpha - C_3 |\tilde q|^\alpha \big]\, \phi_{\lambda, O}^{p+1}\, dV \\
&\le \big[ R(0) - C_3 |\tilde q|^\alpha \big] |S^n| [1 + o_p(1)] - C_4 \lambda^{\alpha + \delta_p (n-2)/2}. \qquad (6.30)
\end{aligned}
\]
Here we have used the fact that $|x + \tilde q|^\alpha \ge c(|x|^\alpha + |\tilde q|^\alpha)$, for some $c > 0$, on one half of the ball $B_\epsilon(O)$, together with the symmetry of $\phi_{\lambda, O}$.

Noticing that $\alpha < n$ and $\delta_p \to 0$, we conclude that (6.28), (6.29), and (6.30) imply (6.27). This completes the proof of the Lemma.
To estimate $J_p$ for all $u \in \partial\Sigma$, we also need the following two lemmas, which describe some useful properties of the set $\Sigma$.

Lemma 6.2.3 (On the 'center of mass')
(i) Let $q$, $\lambda$, and $\tilde q$ be defined by (6.14). Then for sufficiently small $q$,
\[ |q|^2 \le C (|\tilde q|^2 + \lambda^4). \qquad (6.31) \]
(ii) Let $\rho_o$, $q$, and $v$ be defined by (6.22). Then for $\rho_o$ sufficiently small,
\[ \rho_o \le |q| + C \|v\|. \qquad (6.32) \]
Lemma 6.2.4 (Orthogonality)
Let $u \in \Sigma$ and $v = u - t_o \phi_{q_o}$ be defined by (6.22). Let $T_{q_o}$ be the conformal transform such that
\[ T_{q_o} \phi_{q_o} = 1. \qquad (6.33) \]
Then
\[ \int_{S^n} T_{q_o} v\, dV = 0 \qquad (6.34) \]
and
\[ \int_{S^n} T_{q_o} v\, \psi_i(x)\, dV = 0, \quad i = 1, 2, \dots, n, \qquad (6.35) \]
where the $\psi_i$ are first spherical harmonic functions on $S^n$:
\[ -\Delta \psi_i = n \psi_i(x), \quad i = 1, 2, \dots, n. \]
If we place the center of the sphere $S^n$ at the origin of $R^{n+1}$ (not in our case), then for $x = (x_1, \dots, x_n)$,
\[ \psi_i(x) = x_i, \quad i = 1, 2, \dots, n. \]
Proof of Lemma 6.2.3.
(i) From the definition $\phi_q = \phi_{\lambda, \tilde q}$ and (6.14), by an elementary calculation one arrives at
\[ |q - \tilde q| \sim \lambda^2, \quad \text{for } \lambda \text{ sufficiently small}. \qquad (6.36) \]
Draw a perpendicular line segment from $O$ to the segment $q\tilde q$; then one can see that (6.31) is a direct consequence of the triangle inequality and (6.36).
(ii) For any $u = t\phi_q + v \in \partial\Sigma$, the boundary of $\Sigma$, by definition we have either $\|v\| = \rho_o$ or $|q(u)| = \rho_o$. If $\|v\| = \rho_o$, then we are done. Hence we assume that $|q(u)| = \rho_o$. It follows from the definition of the 'center of mass' that
\[ \rho_o = \bigg| \frac{\int_{S^n} x\, (t\phi_q + v)^{\tau+1}}{\int_{S^n} (t\phi_q + v)^{\tau+1}} \bigg|. \qquad (6.37) \]
Noticing that $\|v\| \le \rho_o$ is very small, we can expand the integrals in (6.37) in terms of $\|v\|$:
\[
\rho_o = \bigg| \frac{t^{\tau+1} \int x \phi_q^{\tau+1} + (\tau+1) t^\tau \int x \phi_q^\tau v + o(\|v\|)}{t^{\tau+1} \int \phi_q^{\tau+1} + (\tau+1) t^\tau \int \phi_q^\tau v + o(\|v\|)} \bigg|
\le \bigg| \frac{\int x \phi_q^{\tau+1}}{\int \phi_q^{\tau+1}} \bigg| + C\|v\| \le |q| + C\|v\|.
\]
Here we have used the fact that $\|v\|$ is small and that $t$ is close to $1$. This completes the proof of Lemma 6.2.3.
Proof of Lemma 6.2.4.
Write $L = -\Delta + \gamma_n$. We use the fact that $v = u - t_o \phi_{q_o}$ realizes the minimum of $\|u - t\phi_q\|$ among all possible values of $t$ and $q$.

(i) By a variation with respect to $t$, we have
\[ 0 = (v, \phi_{q_o}) = \int_{S^n} v\, L\phi_{q_o}\, dV. \qquad (6.38) \]
It follows from (6.13) that
\[ \int_{S^n} v\, \phi_{q_o}^\tau\, dV = 0. \]
Now, apply the conformal transform $T_{q_o}$ to both $v$ and $\phi_{q_o}$ in the above integral. By the invariance property of the transform, we arrive at (6.34).

(ii) To prove (6.35), we make a variation with respect to $q$:
\[
\begin{aligned}
0 &= \nabla_q \int v\, L\phi_q \Big|_{q = q_o} = \gamma_n \nabla_q \int v\, \phi_q^\tau \Big|_{q = q_o} = \gamma_n \nabla_q \int T_{q_o} v\, (T_{q_o} \phi_q)^\tau \Big|_{q = q_o} \\
&= \gamma_n \nabla_q \int T_{q_o} v\, (\phi_{q - q_o})^\tau \Big|_{q = q_o} = \gamma_n \tau \int T_{q_o} v\, \phi_{q - q_o}^{\tau - 1}\, \nabla_q \phi_{q - q_o} \Big|_{q = q_o} \\
&= \gamma_n \tau \int T_{q_o} v\, (\psi_1, \dots, \psi_n).
\end{aligned}
\]
This completes the proof of the Lemma.
Proof of Proposition 6.2.2.
Make a perturbation of $R(x)$:
\[
\bar R(x) =
\begin{cases}
R(x), & x \in B_{2\rho_o}(O), \\
m, & x \in S^n \setminus B_{2\rho_o}(O),
\end{cases}
\qquad (6.39)
\]
where $m = R\,|_{\partial B_{2\rho_o}(O)}$.

Define
\[ \bar J_p(u) = \int_{S^n} \bar R(x)\, u^{p+1}\, dV. \]
The estimate is divided into two steps. In Step 1, we show that the difference between $J_p(u)$ and $\bar J_\tau(u)$ is very small. In Step 2, we estimate $\bar J_\tau(u)$.
Step 1.
First we show
\[ \bar J_p(u) \le \bar J_\tau(u)\, (1 + o_p(1)), \qquad (6.40) \]
where $o_p(1) \to 0$ as $p \to \tau$.

In fact, by the Hölder inequality,
\[
\int \bar R u^{p+1} \le \Big( \int \bar R^{\frac{\tau+1}{p+1}} u^{\tau+1}\, dV \Big)^{\frac{p+1}{\tau+1}} |S^n|^{\frac{\tau - p}{\tau+1}}
\le \Big( \int \bar R u^{\tau+1}\, dV \Big)^{\frac{p+1}{\tau+1}} \big[ R(0)|S^n| \big]^{\frac{\tau - p}{\tau+1}}.
\]
This implies (6.40).
Now we estimate the difference between $J_p(u)$ and $\bar J_p(u)$:
\[
\begin{aligned}
| J_p(u) - \bar J_p(u) | &\le C_1 \int_{S^n \setminus B_{2\rho_o}(O)} u^{p+1} \\
&\le C_2 \int_{S^n \setminus B_{2\rho_o}(O)} (t\phi_q)^{p+1} + C_2 \int_{S^n \setminus B_{2\rho_o}(O)} v^{p+1} \\
&\le C_3 \lambda^{\,n - \delta_p (n-2)/2} + C_3 \|v\|^{p+1}. \qquad (6.41)
\end{aligned}
\]
Step 2.
By virtue of (6.40) and (6.41), we now only need to estimate $\bar J_\tau(u)$. For any $u \in \partial\Sigma$, write $u = v + t\phi_q$. (6.38) implies that $v$ and $\phi_q$ are orthogonal with respect to the inner product associated with $E(\cdot)$, that is,
\[ \int_{S^n} (\nabla v \cdot \nabla \phi_q + \gamma_n v \phi_q)\, dV = 0. \]
Noticing that $\|u\|^2 := E(u)$, we have
\[ \|u\|^2 = \|v\|^2 + t^2 \|\phi_q\|^2. \]
Using the fact that both $u$ and $\phi_q$ belong to $H$, with
\[ \|u\|^2 = \gamma_n |S^n| = \|\phi_q\|^2, \]
we derive
\[ t^2 = 1 - \frac{\|v\|^2}{\gamma_n |S^n|}. \qquad (6.42) \]
It follows that
\[
\begin{aligned}
\int_{S^n} \bar R(x)\, u^{\tau+1} &\le t^{\tau+1} \int_{S^n} \bar R(x)\, \phi_q^{\tau+1} + (\tau+1) \int_{S^n} \bar R(x)\, \phi_q^\tau v + \frac{\tau(\tau+1)}{2} \int_{S^n} \bar R(x)\, \phi_q^{\tau-1} v^2 + o(\|v\|^2) \\
&= I_1 + (\tau+1) I_2 + \frac{\tau(\tau+1)}{2} I_3 + o(\|v\|^2). \qquad (6.43)
\end{aligned}
\]
To estimate $I_1$, we use (6.42) and (6.27):
\[ I_1 \le \Big( 1 - \frac{\tau+1}{2} \frac{\|v\|^2}{\gamma_n |S^n|} \Big) R(0) |S^n| \big( 1 - c_1 |\tilde q|^\alpha - c_1 \lambda^\alpha \big) + O(\lambda^n) + o(\|v\|^2), \qquad (6.44) \]
for some positive constant $c_1$.
To estimate $I_2$, we use the $H^1$ orthogonality between $v$ and $\phi_q$ (see (6.38)):
\[
\begin{aligned}
I_2 &= \int_{S^n} (\bar R(x) - m)\, \phi_q^\tau v\, dV = \int_{B_{2\rho_o}(O)} (\bar R(x) - m)\, \phi_q^\tau v\, dV \\
&= \frac{1}{\gamma_n} \int_{B_{2\rho_o}(O)} (\bar R(x) - m)\, L\phi_q\, v\, dV \\
&\le C \rho_o^\alpha \Big| \int_{B_{2\rho_o}(O)} L\phi_q\, v\, dV \Big| = C \rho_o^\alpha\, |(\phi_q, v)| \\
&\le C \rho_o^\alpha \|\phi_q\| \|v\| \le C_4 \rho_o^\alpha \|v\|. \qquad (6.45)
\end{aligned}
\]
To estimate $I_3$, we use (6.34) and (6.35). It is well known that the first and second eigenspaces of $-\Delta$ on $S^n$ are, respectively, the constants and the span of the $\psi_i$, corresponding to the eigenvalues $0$ and $n$. Now (6.34) and (6.35) imply that $T_q v$ is orthogonal to these two eigenspaces, and hence
\[ \int_{S^n} |\nabla (T_q v)|^2\, dV \ge \lambda_2 \int_{S^n} |T_q v|^2\, dV, \]
where $\lambda_2 = n + c_2$, for some positive number $c_2$, is the second nonzero eigenvalue of $-\Delta$. Consequently,
\[ \|v\|^2 = \|T_q v\|^2 \ge (\gamma_n + n + c_2) \int_{S^n} (T_q v)^2. \]
It follows that
\[
\begin{aligned}
I_3 &\le R(0) \int_{S^n} \phi_q^{\tau-1} v^2\, dV = R(0) \int_{S^n} (T_q \phi_q)^{\tau-1} (T_q v)^2\, dV \\
&= R(0) \int_{S^n} (T_q v)^2 \le \frac{R(0)}{\gamma_n + n + c_2}\, \|v\|^2. \qquad (6.46)
\end{aligned}
\]
Now (6.44), (6.45), and (6.46) imply that there is a $\beta > 0$ such that
\[ \bar J_\tau(u) \le R(0)|S^n| \big[ 1 - \beta (|\tilde q|^\alpha + \lambda^\alpha + \|v\|^2) \big]. \qquad (6.47) \]
Here we have used the fact that $\|v\|$ is very small and that $\rho_o^\alpha \|v\|$ can be controlled by $|\tilde q|^\alpha + \lambda^\alpha$ (see Lemma 6.2.2). Notice that the positive constant $c_2$ in (6.46) has played a key role: without it, the coefficient of $\|v\|^2$ in (6.47) would be $0$, due to the fact that $\gamma_n = \frac{n(n-2)}{4}$.
For any $u \in \partial\Sigma$, we have either $\|v\| = \rho_o$ or $|q(u)| = \rho_o$. By (6.31) and (6.47), in either case there exists a $\delta_o > 0$ such that
\[ \bar J_\tau(u) \le R(0)|S^n| - 2\delta_o. \qquad (6.48) \]
Now (6.24) is an immediate consequence of (6.40), (6.41), and (6.48). This completes the proof of the Proposition.
6.2.2 The Variational Scheme

In this part, we construct variational schemes to show the existence of a solution to equation (6.11) for each $p < \tau$.

Case (i): $R$ has only one positive local maximum.

In this case, condition (6.4) implies that $R$ must have local minima at both poles. Similarly to the ideas in [C], we seek a solution by maximizing the functional $J_p$ in a class of rotationally symmetric functions:
\[ H_r = \{ u \in H \mid u = u(r) \}. \qquad (6.49) \]
Obviously, $J_p$ is bounded from above in $H_r$, and it is well known that the variational scheme is compact for each $p < \tau$ (see the argument in Chapter 2). Therefore any maximizing sequence possesses a convergent subsequence in $H_r$, and the limiting function, multiplied by a suitable constant, is a solution of (6.11).
Case (ii): $R$ has at least two positive local maxima.

Let $r_1$ and $r_2$ be the two smallest positive local maxima of $R$. By Propositions 6.2.1 and 6.2.2, there exist two disjoint open sets $\Sigma_1, \Sigma_2 \subset H$, functions $\psi_i \in \Sigma_i$, $p_o < \tau$, and $\delta > 0$, such that for all $p \ge p_o$,
\[ J_p(\psi_i) > R(r_i)|S^n| - \frac{\delta}{2}, \quad i = 1, 2; \qquad (6.50) \]
and
\[ J_p(u) \le R(r_i)|S^n| - \delta, \quad \forall u \in \partial\Sigma_i, \ i = 1, 2. \qquad (6.51) \]
Let $\gamma$ be a path in $H$ joining $\psi_1$ and $\psi_2$, and let $\Gamma$ be the family of all such paths. Define
\[ c_p = \sup_{\gamma \in \Gamma} \min_{u \in \gamma} J_p(u). \qquad (6.52) \]
For each fixed $p < \tau$, by the well-known compactness, there exists a critical (saddle) point $u_p$ of $J_p$ with
\[ J_p(u_p) = c_p. \]
Moreover, due to (6.51) and the definition of $c_p$, we have
\[ J_p(u_p) \le \min_{i = 1, 2} R(r_i)|S^n| - \delta. \qquad (6.53) \]
One can easily see that a critical point of $J_p$ in $H$, multiplied by a constant, is a solution of (6.11), and that for all $p$ close to $\tau$ the constant multiples are bounded from above and bounded away from $0$.
6.3 The A Priori Estimates

In the previous section, we showed the existence of a positive solution $u_p$ to the subcritical equation (6.11) for each $p < \tau$. In this section, we prove that as $p \to \tau$, there is a subsequence of $\{u_p\}$ which converges to a solution $u_o$ of (6.1). The convergence is based on the following a priori estimate.

Theorem 6.3.1 Assume that $R$ satisfies condition (6.5) in Theorem 6.1.1. Then there exists $p_o < \tau$, such that for all $p_o < p < \tau$, the solutions of (6.11) obtained by the variational scheme are uniformly bounded.

To prove the Theorem, we estimate the solutions in three regions: where $R < 0$, where $R$ is close to $0$, and where $R > 0$, respectively.
6.3.1 In the Region Where R < 0

The a priori bound on the solutions is stated in the following proposition, which is a direct consequence of Proposition 6.2.2 in our paper [CL6].

Proposition 6.3.1 For all $1 < p \le \tau$, the solutions of (6.11) are uniformly bounded in the region where $R \le -\delta$, for every $\delta > 0$. The bound depends only on $\delta$, $\mathrm{dist}(\{x \mid R(x) \le -\delta\}, \{x \mid R(x) = 0\})$, and the lower bound of $R$.
6.3.2 In the Region Where R Is Small

In [CL6], we also obtained estimates in this region by the method of moving planes. However, in that paper we assumed that $R$ is bounded away from zero. In [CL7], for rotationally symmetric functions $R$ on $S^3$, we removed this condition by using a blow-up analysis near $R(x) = 0$. That method may be applied to higher dimensions with a few modifications. Within the variational framework of this paper, however, the a priori estimate is simpler; this is mainly due to the energy bound.

Proposition 6.3.2 Let $\{u_p\}$ be the solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exist $p_o < \tau$ and $\delta > 0$, such that for all $p_o < p \le \tau$, the $u_p$ are uniformly bounded in the region where $|R(x)| \le \delta$.
Proof. It is easy to see that the energies $E(u_p)$ are bounded for all $p$. Now suppose that the conclusion of the Proposition is violated. Then there exist a subsequence $\{u_i\}$, with $u_i = u_{p_i}$ and $p_i \to \tau$, and a sequence of points $\{x_i\}$, with $x_i \to x_o$ and $R(x_o) = 0$, such that
\[ u_i(x_i) \to \infty. \]
We will use a rescaling argument to derive a contradiction. Since $x_i$ may not be a local maximum of $u_i$, we need to choose a point near $x_i$ which is almost a local maximum. To this end, let $K$ be any large number and let
\[ r_i = 2K \big[ u_i(x_i) \big]^{-\frac{p_i - 1}{2}}. \]
In a small neighborhood of $x_o$, choose a local coordinate system and let
\[ s_i(x) = u_i(x) \big( r_i - |x - x_i| \big)^{\frac{2}{p_i - 1}}. \]
Let $a_i$ be a maximum point of $s_i(x)$ in $B_{r_i}(x_i)$, and let $\lambda_i = [u_i(a_i)]^{-\frac{p_i - 1}{2}}$. Then, from the definition of $a_i$, one can easily verify that the ball $B_{\lambda_i K}(a_i)$ is in the interior of $B_{r_i}(x_i)$, and that the value of $u_i(x)$ in $B_{\lambda_i K}(a_i)$ is bounded above by a constant multiple of $u_i(a_i)$.
To see the first part, we use the fact that $s_i(a_i) \ge s_i(x_i)$. From this, we derive
\[ u_i(a_i) \big( r_i - |a_i - x_i| \big)^{\frac{2}{p_i - 1}} \ge u_i(x_i)\, r_i^{\frac{2}{p_i - 1}} = (2K)^{\frac{2}{p_i - 1}}. \]
It follows that
\[ \frac{r_i - |a_i - x_i|}{2} \ge \lambda_i K, \qquad (6.54) \]
hence
\[ B_{\lambda_i K}(a_i) \subset B_{r_i}(x_i). \]
To see the second part, we use $s_i(a_i) \ge s_i(x)$ and derive from (6.54) that
\[ u_i(x) \le u_i(a_i) \Big( \frac{r_i - |a_i - x_i|}{r_i - |x - x_i|} \Big)^{\frac{2}{p_i - 1}} \le u_i(a_i)\, 2^{\frac{2}{p_i - 1}}, \quad \forall\, x \in B_{\lambda_i K}(a_i). \]
Now we can make a rescaling:
\[ v_i(x) = \frac{1}{u_i(a_i)}\, u_i(\lambda_i x + a_i). \]
Obviously $v_i$ is bounded in $B_K(0)$, and it follows by a standard argument that $\{v_i\}$ converges to a harmonic function $v_o$ in $B_K(0) \subset R^n$, with $v_o(0) = 1$. Consequently, for $i$ sufficiently large,
\[ \int_{B_K(0)} v_i(x)^{\tau+1}\, dx \ge c K^n, \qquad (6.55) \]
for some $c > 0$.
On the other hand, the boundedness of the energy $E(u_i)$ implies that
\[ \int_{S^n} u_i^{\tau+1}\, dV \le C, \qquad (6.56) \]
for some constant $C$. By a straightforward calculation, one can verify that, for any $K > 0$,
\[ \int_{S^n} u_i^{\tau+1}\, dV \ge \int_{B_K(0)} v_i^{\tau+1}\, dx. \qquad (6.57) \]
Obviously, (6.56) and (6.57) contradict (6.55). This completes the proof of the Proposition. $\Box$
6.3.3 In the Region Where R > 0

Proposition 6.3.3 Let $\{u_p\}$ be the solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exists $p_o < \tau$, such that for all $p_o < p \le \tau$ and for any $\epsilon > 0$, the $u_p$ are uniformly bounded in the region where $R(x) \ge \epsilon$.

Proof. The proof is divided into three steps. In Step 1, we argue that the solutions can blow up at only finitely many points, because of the energy bound. In Step 2, we show that there is no more than one blow-up point, and that this point must be a local maximum of $R$; this is done mainly by using a Pohozaev-type identity. In Step 3, we use (6.53) to conclude that even one blow-up point is impossible.
Step 1. The argument is standard. Let $\{x_i\}$ be a sequence of points such that $u_i(x_i) \to \infty$ and $x_i \to x_o$ with $R(x_o) > 0$. Let $r_i$, $s_i(x)$, and $v_i(x)$ be defined as in the proof of Proposition 6.3.2. Then, similarly to the argument there, we see that $\{v_i(x)\}$ converges to a standard function $v_o(x)$ in $R^n$ with
\[ -\Delta v_o = R(x_o)\, v_o^\tau. \]
It follows that
\[ \int_{B_{r_i}(x_o)} u_i^{\tau+1}\, dV \ge c_o > 0. \]
Because the total energy of the $u_i$ is bounded, there can be only finitely many such $x_o$. Therefore $\{u_i\}$ can blow up at only finitely many points.

Step 2. We have shown that $\{u_i\}$ has only isolated blow-up points. As a consequence of a result in [Li2] (see also [CL6] or [SZ]), we have

Lemma 6.3.1 Let $R$ satisfy the 'flatness condition' in Theorem 6.1.1. Then $\{u_i\}$ can have at most one simple blow-up point, and this point must be a local maximum of $R$. Moreover, $\{u_i\}$ behaves almost like a family of the standard functions $\phi_{\lambda_i, \tilde q}$, with $\lambda_i = (\max u_i)^{-\frac{2}{n-2}}$ and with $\tilde q$ at the maximum point of $R$.
Step 3. Finally, we show that even one blow-up point is impossible. For convenience, let $\{u_i\}$ be the sequence of critical points of $J_{p_i}$ obtained in Section 2. From the proof of Proposition 6.2.2, one can obtain
\[ J_\tau(u_i) \le \min_k R(r_k)|S^n| - \delta, \qquad (6.58) \]
for all positive local maxima $r_k$ of $R$, because each $u_i$ is the minimum of $J_{p_i}$ on a path. Now, if the $u_i$ blow up at $x_o$, then by Lemma 6.3.1 we have
\[ J_\tau(u_i) \to R(x_o)|S^n|. \]
This contradicts (6.58).

Therefore, the sequence $\{u_i\}$ is uniformly bounded and possesses a subsequence converging to a solution of (6.1). This completes the proof of Theorem 6.1.1. $\Box$
7
Maximum Principles
7.1 Introduction
7.2 Weak Maximum Principle
7.3 The Hopf Lemma and Strong Maximum Principle
7.4 Maximum Principle Based on Comparison
7.5 A Maximum Principle for Integral Equations
7.1 Introduction
As a simple example of a maximum principle, let us consider a $C^2$ function $u(x)$ of one independent variable $x$. It is well known in calculus that at a local maximum point $x_o$ of $u$, we must have
\[ u''(x_o) \le 0. \]
Based on this observation, we have the following simplest version of the maximum principle:

Assume that $u''(x) > 0$ in an open interval $(a, b)$; then $u$ cannot have any interior maximum in the interval.

One can also see this geometrically: since under the condition $u''(x) > 0$ the graph is concave up, it cannot have any local maximum.
More generally, for a $C^2$ function $u(x)$ of $n$ independent variables $x = (x_1, \dots, x_n)$, at a local maximum $x_o$ we have
\[ D^2 u(x_o) := \big( u_{x_i x_j}(x_o) \big) \le 0, \]
that is, the symmetric matrix is nonpositive definite at the point $x_o$. Correspondingly, the simplest version of the maximum principle reads:
If
\[ (a_{ij}(x)) \ge 0 \quad \text{and} \quad \sum_{ij} a_{ij}(x)\, u_{x_i x_j}(x) > 0 \qquad (7.1) \]
in an open bounded domain $\Omega$, then $u$ cannot achieve its maximum in the interior of the domain.

An interesting special case is when $(a_{ij}(x))$ is the identity matrix, in which case condition (7.1) becomes
\[ \Delta u > 0. \]
Unlike its one-dimensional counterpart, condition (7.1) no longer implies that the graph of $u(x)$ is concave up. A simple counterexample is
\[ u(x_1, x_2) = x_1^2 - \frac{1}{2} x_2^2. \]
One can easily see from the graph of this function that $(0, 0)$ is a saddle point. In this case, the validity of the maximum principle comes from the following simple algebraic fact:

For any two $n \times n$ symmetric matrices $A$ and $B$, if $A \ge 0$ and $B \le 0$, then $\mathrm{trace}(AB) \le 0$.
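Both the counterexample and the algebraic fact are easy to check numerically (an illustration only, not a proof; the helper `random_psd` is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # random symmetric positive semi-definite matrix M = C C^T
    c = rng.standard_normal((n, n))
    return c @ c.T

# the counterexample u(x1, x2) = x1^2 - x2^2/2: its Hessian diag(2, -1)
# has positive trace (Delta u = 1 > 0) but is indefinite -- a saddle point
H = np.diag([2.0, -1.0])
eigs = np.linalg.eigvalsh(H)
assert np.trace(H) > 0 and eigs[0] < 0 < eigs[1]

# the algebraic fact: A >= 0 and B <= 0 imply trace(AB) <= 0
for _ in range(100):
    A = random_psd(4)
    B = -random_psd(4)
    assert np.trace(A @ B) <= 1e-9
```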
In this chapter, we will introduce various maximum principles; most of them will be used in the method of moving planes in the next chapter. Besides this, there are numerous other applications. We list some below.

i) Providing Estimates

Consider the boundary value problem
\[
\begin{cases}
-\Delta u = f(x), & x \in B_1(0) \subset R^n, \\
u(x) = 0, & x \in \partial B_1(0).
\end{cases}
\qquad (7.2)
\]
If $a \le f(x) \le b$ in $B_1(0)$, then we can compare the solution $u$ with the two functions
\[ \frac{a}{2n}(1 - |x|^2) \quad \text{and} \quad \frac{b}{2n}(1 - |x|^2), \]
which satisfy the equation with $f(x)$ replaced by $a$ and $b$, respectively, and which vanish on the boundary, as $u$ does. Now, applying the maximum principle for the operator $-\Delta$ (see Theorem 7.1.1 below), we obtain
\[ \frac{a}{2n}(1 - |x|^2) \le u(x) \le \frac{b}{2n}(1 - |x|^2). \]
ii) Proving Uniqueness of Solutions

In the above example, if $f(x) \equiv 0$, then we can choose $a = b = 0$, and this implies that $u \equiv 0$. In other words, the solution of the boundary value problem (7.2) is unique.
iii) Establishing the Existence of Solutions

(a) For a linear equation such as (7.2) in any bounded open domain $\Omega$, let
\[ u(x) = \sup \phi(x), \]
where the sup is taken among all functions $\phi$ that satisfy the corresponding differential inequality
\[
\begin{cases}
-\Delta \phi \le f(x), & x \in \Omega, \\
\phi(x) = 0, & x \in \partial\Omega.
\end{cases}
\]
Then $u$ is a solution of (7.2).
(b) Now consider the nonlinear problem
\[
\begin{cases}
-\Delta u = f(u), & x \in \Omega, \\
u(x) = 0, & x \in \partial\Omega.
\end{cases}
\qquad (7.3)
\]
Assume that $f(\cdot)$ is a smooth function with $f'(\cdot) \ge 0$. Suppose that there exist two functions $\underline{u}(x) \le \bar u(x)$ such that
\[ -\Delta \underline{u} \le f(\underline{u}) \le f(\bar u) \le -\Delta \bar u. \]
These two functions are called sub- (or lower) and super- (or upper) solutions, respectively.

To seek a solution of problem (7.3), we use successive approximations. Let
\[ -\Delta u_1 = f(\underline{u}) \quad \text{and} \quad -\Delta u_{i+1} = f(u_i). \]
Then, by the maximum principle, we have
\[ \underline{u} \le u_1 \le u_2 \le \cdots \le u_i \le \cdots \le \bar u. \]
Let $u$ be the limit of the sequence $\{u_i\}$:
\[ u(x) = \lim u_i(x); \]
then $u$ is a solution of problem (7.3).
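The monotone iteration above can be sketched numerically in one dimension. All concrete choices here are ours: $\Omega = (0,1)$, $f(u) = u/2 + 1$ (so $f' \ge 0$), sub-solution $\underline{u} \equiv 0$, and super-solution $\bar u = x(1-x)$ (indeed $-\bar u'' = 2$, while $f(\bar u) \le 9/8$):

```python
import numpy as np

def solve_dirichlet(rhs, h):
    # solve -u'' = rhs with zero boundary values (3-point scheme)
    m = len(rhs)
    A = (np.diag(2.0*np.ones(m)) - np.diag(np.ones(m-1), 1)
         - np.diag(np.ones(m-1), -1)) / h**2
    return np.linalg.solve(A, rhs)

f = lambda u: u/2 + 1                 # smooth, f' = 1/2 >= 0
m = 199
x = np.linspace(0, 1, m + 2)[1:-1]
h = x[1] - x[0]

sub = np.zeros(m)                     # sub-solution:   -0'' = 0 <= f(0) = 1
sup = x*(1 - x)                       # super-solution: -sup'' = 2 >= f(sup)

u = sub
for _ in range(30):                   # u_{i+1} solves -u'' = f(u_i)
    u_next = solve_dirichlet(f(u), h)
    assert np.all(u_next >= u - 1e-12)     # monotone increase, as claimed
    u = u_next
assert np.all(sub - 1e-12 <= u) and np.all(u <= sup + 1e-12)

# the limit solves the discrete problem: one more iteration barely moves it
res = solve_dirichlet(f(u), h) - u
assert np.max(np.abs(res)) < 1e-8
```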
In Section 7.2, we introduce and prove the weak maximum principles.

Theorem 7.1.1 (Weak Maximum Principle for $-\Delta$)
i) If
\[ -\Delta u(x) \ge 0, \quad x \in \Omega, \]
then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \]
ii) If
\[ -\Delta u(x) \le 0, \quad x \in \Omega, \]
then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \]
This result can be extended to general uniformly elliptic operators. Let
\[ D_i = \frac{\partial}{\partial x_i}, \quad D_{ij} = \frac{\partial^2}{\partial x_i \partial x_j}. \]
Define
\[ L = -\sum_{ij} a_{ij}(x) D_{ij} + \sum_i b_i(x) D_i + c(x). \]
Here we always assume that $a_{ij}(x)$, $b_i(x)$, and $c(x)$ are bounded continuous functions in $\bar\Omega$. We say that $L$ is uniformly elliptic if
\[ a_{ij}(x)\, \xi_i \xi_j \ge \delta |\xi|^2 \]
for any $x \in \Omega$, any $\xi \in R^n$, and some $\delta > 0$.
Theorem 7.1.2 (Weak Maximum Principle for $L$) Let $L$ be the uniformly elliptic operator defined above. Assume that $c(x) \equiv 0$.
i) If $Lu \ge 0$ in $\Omega$, then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \]
ii) If $Lu \le 0$ in $\Omega$, then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \]
These weak maximum principles tell us that the minimum or maximum of $u$ is attained at some point on the boundary $\partial\Omega$. However, they do not exclude the possibility that the minimum or maximum may also occur in the interior of $\Omega$. Actually, this cannot happen unless $u$ is constant, as we will see in the following.

Theorem 7.1.3 (Strong Maximum Principle for $L$ with $c(x) \equiv 0$) Assume that $\Omega$ is an open, bounded, and connected domain in $R^n$ with smooth boundary $\partial\Omega$. Let $u$ be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Assume that $c(x) \equiv 0$ in $\Omega$.
i) If
\[ Lu(x) \ge 0, \quad x \in \Omega, \]
then $u$ attains its minimum value only on $\partial\Omega$, unless $u$ is constant.
ii) If
\[ Lu(x) \le 0, \quad x \in \Omega, \]
then $u$ attains its maximum value only on $\partial\Omega$, unless $u$ is constant.
This maximum principle (as well as the weak one) can also be applied to
the case when c(x) ≥ 0 with slight modiﬁcations.
Theorem 7.1.4 (Strong Maximum Principle for $L$ with $c(x) \ge 0$) Assume that $\Omega$ is an open, bounded, and connected domain in $R^n$ with smooth boundary $\partial\Omega$. Let $u$ be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Assume that $c(x) \ge 0$ in $\Omega$.
i) If
\[ Lu(x) \ge 0, \quad x \in \Omega, \]
then $u$ cannot attain its nonpositive minimum in the interior of $\Omega$, unless $u$ is constant.
ii) If
\[ Lu(x) \le 0, \quad x \in \Omega, \]
then $u$ cannot attain its nonnegative maximum in the interior of $\Omega$, unless $u$ is constant.
We will prove these theorems in Section 7.3 by using the Hopf Lemma.

Notice that in the previous theorems we always require that $c(x) \ge 0$. Roughly speaking, maximum principles hold for 'positive' operators. $-\Delta$ is 'positive', and obviously so is $-\Delta + c(x)$ if $c(x) \ge 0$. However, as we will see in the next chapter, in practical problems it frequently occurs that the condition $c(x) \ge 0$ cannot be met. Do we really need $c(x) \ge 0$? The answer is 'no'. Actually, if $c(x)$ is not 'too negative', then the operator $-\Delta + c(x)$ can still remain 'positive', ensuring the maximum principle. This will be studied in Section 7.4, where we prove the 'Maximum Principles Based on Comparisons'.

Let $\phi$ be a positive function on $\bar\Omega$ satisfying
\[ -\Delta \phi + \lambda(x) \phi \ge 0. \qquad (7.4) \]
Let $u$ be a function such that
\[
\begin{cases}
-\Delta u + c(x) u \ge 0, & x \in \Omega, \\
u \ge 0 & \text{on } \partial\Omega.
\end{cases}
\qquad (7.5)
\]

Theorem 7.1.5 (Maximum Principle Based on Comparison)
Assume that $\Omega$ is a bounded domain. If
\[ c(x) > \lambda(x), \quad \forall x \in \Omega, \]
then $u \ge 0$ in $\Omega$.
Also in Section 7.4, as consequences of Theorem 7.1.5, we derive the 'Narrow Region Principle' and the 'Decay at Infinity' principle. These principles can be applied very conveniently in the 'Method of Moving Planes' to establish the symmetry of solutions of semilinear elliptic equations, as we will see in later sections.
In Section 7.5, we establish a maximum principle for integral inequalities.
7.2 Weak Maximum Principles
In this section, we prove the weak maximum principles.

Theorem 7.2.1 (Weak Maximum Principle for $-\Delta$)
i) If
\[ -\Delta u(x) \ge 0, \quad x \in \Omega, \qquad (7.6) \]
then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \qquad (7.7) \]
ii) If
\[ -\Delta u(x) \le 0, \quad x \in \Omega, \qquad (7.8) \]
then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \qquad (7.9) \]
Proof. Here we only present the proof of part i); an entirely similar proof works for part ii).

To better illustrate the idea, we will deal with the one-dimensional case and the higher-dimensional case separately.

First, let $\Omega$ be the interval $(a, b)$. Then condition (7.6) becomes $u''(x) \le 0$. This implies that the graph of $u(x)$ on $(a, b)$ is concave downward, and therefore one can roughly see that the values of $u(x)$ in $(a, b)$ are larger than or equal to the minimum value of $u$ at the endpoints (see Figure 2).

[Figure 2: the graph of a concave function u on the interval (a, b)]
To prove the above observation rigorously, we first carry it out under the stronger assumption that
\[ -u''(x) > 0. \qquad (7.10) \]
Let $m = \min_{\partial\Omega} u$. Suppose, contrary to (7.7), that there is a minimum $x_o \in (a, b)$ of $u$ such that $u(x_o) < m$. Then, by the second-order Taylor expansion of $u$ around $x_o$, we must have $-u''(x_o) \le 0$. This contradicts assumption (7.10).

Now, for $u$ satisfying only the weaker condition (7.6), we consider a perturbation of $u$:
\[ u_\epsilon(x) = u(x) - \epsilon x^2. \]
Obviously, for each $\epsilon > 0$, $u_\epsilon(x)$ satisfies the stronger condition (7.10), and hence
\[ \min_{\bar\Omega} u_\epsilon \ge \min_{\partial\Omega} u_\epsilon. \]
Now, letting $\epsilon \to 0$, we arrive at (7.7).
To prove the theorem in dimensions higher than one, we need the following

Lemma 7.2.1 (Mean Value Inequality) Let $x_o$ be a point in $\Omega$. Let $B_r(x_o) \subset \Omega$ be the ball of radius $r$ centered at $x_o$, and let $\partial B_r(x_o)$ be its boundary.
i) If $-\Delta u(x) > (=)\; 0$ for $x \in B_{r_o}(x_o)$ with some $r_o > 0$, then for any $r_o > r > 0$,
\[ u(x_o) > (=)\; \frac{1}{|\partial B_r(x_o)|} \int_{\partial B_r(x_o)} u(x)\, dS. \qquad (7.11) \]
It follows that, if $x_o$ is a minimum of $u$ in $\Omega$, then
\[ -\Delta u(x_o) \le 0. \qquad (7.12) \]
ii) If $-\Delta u(x) < 0$ for $x \in B_{r_o}(x_o)$ with some $r_o > 0$, then for any $r_o > r > 0$,
\[ u(x_o) < \frac{1}{|\partial B_r(x_o)|} \int_{\partial B_r(x_o)} u(x)\, dS. \qquad (7.13) \]
It follows that, if $x_o$ is a maximum of $u$ in $\Omega$, then
\[ -\Delta u(x_o) \ge 0. \qquad (7.14) \]
We postpone the proof of the Lemma for a moment. The Lemma tells us that, if $-\Delta u(x) > 0$, then the value of $u$ at the center of the small ball $B_r(x_o)$ is larger than its average value on the boundary $\partial B_r(x_o)$; roughly speaking, the graph of $u$ is locally somewhat concave downward. Now, based on this Lemma, to prove the theorem we first consider
\[ u_\epsilon(x) = u(x) - \epsilon |x|^2. \]
Obviously,
\[ -\Delta u_\epsilon = -\Delta u + 2n\epsilon > 0. \qquad (7.15) \]
Hence we must have
\[ \min_{\bar\Omega} u_\epsilon \ge \min_{\partial\Omega} u_\epsilon. \qquad (7.16) \]
Otherwise, if there were a minimum $x_o$ of $u_\epsilon$ in $\Omega$, then by Lemma 7.2.1 we would have $-\Delta u_\epsilon(x_o) \le 0$, which contradicts (7.15). Now, letting $\epsilon \to 0$ in (7.16), we arrive at the desired conclusion (7.7).

This completes the proof of the Theorem.
The Proof of Lemma 7.2.1. By the Divergence Theorem,
\[ \int_{B_r(x_o)} \Delta u(x)\, dx = \int_{\partial B_r(x_o)} \frac{\partial u}{\partial \nu}\, dS = r^{n-1} \int_{S^{n-1}} \frac{\partial u}{\partial r}(x_o + r\omega)\, dS_\omega, \qquad (7.17) \]
where $dS_\omega$ is the area element of the $(n-1)$-dimensional unit sphere $S^{n-1} = \{ \omega \mid |\omega| = 1 \}$.
If $\Delta u < 0$, then by (7.17),
\[ \frac{\partial}{\partial r} \Big\{ \int_{S^{n-1}} u(x_o + r\omega)\, dS_\omega \Big\} < 0. \qquad (7.18) \]
Integrating both sides of (7.18) from $0$ to $r$ yields
\[ \int_{S^{n-1}} u(x_o + r\omega)\, dS_\omega - u(x_o)\, |S^{n-1}| < 0, \]
where $|S^{n-1}|$ is the area of $S^{n-1}$. It follows that
\[ u(x_o) > \frac{1}{r^{n-1} |S^{n-1}|} \int_{\partial B_r(x_o)} u(x)\, dS. \]
This verifies (7.11).
This veriﬁes (7.11).
To see (7.12), we suppose in contrary that −´u(x
o
) > 0. Then by the
continuity of ´u, there exists a δ > 0, such that
−´u(x) > 0, ∀x ∈ B
δ
(x
o
).
Consequently, (7.11) holds for any 0 < r < δ. This contradicts with the
assumption that x
o
is a minimum of u.
This completes the proof of the Lemma.
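The mean value inequality is easy to test numerically in $R^2$ (the quadrature rule and the test functions are our choices): for $u(x) = -|x|^2$ we have $-\Delta u = 4 > 0$, so the strict inequality (7.11) should hold, while $u(x) = x_1^2 - x_2^2$ is harmonic, so equality should hold.

```python
import math

def sphere_average(u, center, r, m=4000):
    # average of u over the circle of radius r about `center` (n = 2),
    # by the periodic trapezoid rule
    total = 0.0
    for k in range(m):
        t = 2*math.pi*k/m
        total += u(center[0] + r*math.cos(t), center[1] + r*math.sin(t))
    return total / m

# -Delta u = 4 > 0: strict inequality u(x_o) > boundary average
u_super = lambda x, y: -(x*x + y*y)
assert u_super(0.3, 0.1) > sphere_average(u_super, (0.3, 0.1), 0.2)

# harmonic u: equality, up to quadrature and rounding error
u_harm = lambda x, y: x*x - y*y
avg = sphere_average(u_harm, (0.3, 0.1), 0.2)
assert abs(u_harm(0.3, 0.1) - avg) < 1e-9
```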
From the proof of Theorem 7.2.1, one can see that if we replace the operator $-\Delta$ by $-\Delta + c(x)$ with $c(x) \ge 0$, then the conclusion of Theorem 7.2.1 is still true (with slight modifications). Furthermore, we can replace the Laplace operator $-\Delta$ with a general uniformly elliptic operator. Let
\[ D_i = \frac{\partial}{\partial x_i}, \quad D_{ij} = \frac{\partial^2}{\partial x_i \partial x_j}. \]
Define
\[ L = -\sum_{ij} a_{ij}(x) D_{ij} + \sum_i b_i(x) D_i + c(x). \qquad (7.19) \]
Here we always assume that $a_{ij}(x)$, $b_i(x)$, and $c(x)$ are bounded continuous functions in $\bar\Omega$. We say that $L$ is uniformly elliptic if
\[ a_{ij}(x)\, \xi_i \xi_j \ge \delta |\xi|^2 \]
for any $x \in \Omega$, any $\xi \in R^n$, and some $\delta > 0$.
Theorem 7.2.2 (Weak Maximum Principle for $L$ with $c(x) \equiv 0$). Let $L$ be the uniformly elliptic operator defined above. Assume that $c(x) \equiv 0$.
i) If $Lu \ge 0$ in $\Omega$, then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u. \qquad (7.20) \]
ii) If $Lu \le 0$ in $\Omega$, then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u. \]
For $c(x) \ge 0$, the principle still applies, with slight modifications.

Theorem 7.2.3 (Weak Maximum Principle for $L$ with $c(x) \ge 0$). Let $L$ be the uniformly elliptic operator defined above. Assume that $c(x) \ge 0$. Let
\[ u^-(x) = \min\{u(x), 0\} \quad \text{and} \quad u^+(x) = \max\{u(x), 0\}. \]
i) If $Lu \ge 0$ in $\Omega$, then
\[ \min_{\bar\Omega} u \ge \min_{\partial\Omega} u^-. \]
ii) If $Lu \le 0$ in $\Omega$, then
\[ \max_{\bar\Omega} u \le \max_{\partial\Omega} u^+. \]
Interested readers may find its proof in many standard books, for example in [Ev], page 327.
7.3 The Hopf Lemma and Strong Maximum Principles

In the previous section, we proved a weak form of the maximum principle. In the case $Lu \ge 0$, it concludes that the minimum of $u$ is attained at some point on the boundary $\partial\Omega$. However, it does not exclude the possibility that the minimum may also be attained at some point in the interior of $\Omega$. In this section, we will show that this actually cannot happen; that is, the minimum value of $u$ can be achieved only on the boundary, unless $u$ is constant. This is called the "Strong Maximum Principle". We will prove it by using the following

Lemma 7.3.1 (Hopf Lemma). Assume that $\Omega$ is an open, bounded, and connected domain in $R^n$ with smooth boundary $\partial\Omega$. Let $u$ be a function in $C^2(\Omega) \cap C(\bar\Omega)$. Let
\[ L = -\sum_{ij} a_{ij}(x) D_{ij} + \sum_i b_i(x) D_i + c(x) \]
be uniformly elliptic in $\Omega$, with $c(x) \equiv 0$. Assume that
\[ Lu \ge 0 \quad \text{in } \Omega. \qquad (7.21) \]
Suppose there is a ball $B$ contained in $\Omega$ with a point $x_o \in \partial\Omega \cap \partial B$, and suppose that
\[ u(x) > u(x_o), \quad \forall x \in B. \qquad (7.22) \]
Then, for any outward directional derivative at $x_o$,
\[ \frac{\partial u(x_o)}{\partial \nu} < 0. \qquad (7.23) \]
In the case $c(x) \ge 0$, if we require additionally that $u(x_o) \le 0$, then the same conclusion of the Hopf Lemma holds.
Proof. Without loss of generality, we may assume that $B$ is centered at the origin and has radius $r$. Define
\[ w(x) = e^{-\alpha r^2} - e^{-\alpha |x|^2}. \]
Consider $v(x) = u(x) + \epsilon w(x)$ on the set $D = B_{r/2}(x_o) \cap B$ (see Figure 3).

[Figure 3: the region D, the intersection of B_{r/2}(x_o) with the ball B centered at 0 of radius r, inside Omega]
We will choose $\alpha$ and $\epsilon$ appropriately, so that we can apply the Weak Maximum Principle to $v(x)$ and arrive at
\[ v(x) \ge v(x_o), \quad \forall x \in D. \qquad (7.24) \]
We postpone the proof of (7.24) for a moment. From (7.24), we have
\[ \frac{\partial v}{\partial \nu}(x_o) \le 0. \qquad (7.25) \]
Noticing that
\[ \frac{\partial w}{\partial \nu}(x_o) > 0, \]
we arrive at the desired inequality
\[ \frac{\partial u}{\partial \nu}(x_o) < 0. \]
Now, to complete the proof of the Lemma, it remains to verify (7.24). We will carry this out in two steps. First we show that
\[ Lv \ge 0 \quad \text{in } D. \qquad (7.26) \]
Hence we can apply the Weak Maximum Principle to conclude that
\[ \min_{\bar D} v \ge \min_{\partial D} v. \qquad (7.27) \]
Then we show that the minimum of $v$ on the boundary $\partial D$ is actually attained at $x_o$:
\[ v(x) \ge v(x_o), \quad \forall x \in \partial D. \qquad (7.28) \]
Obviously, (7.27) and (7.28) imply (7.24).
To see (7.26), we directly calculate
\[
\begin{aligned}
Lw &= e^{-\alpha |x|^2} \Big\{ 4\alpha^2 \sum_{i,j=1}^n a_{ij}(x)\, x_i x_j - 2\alpha \sum_{i=1}^n [a_{ii}(x) - b_i(x)\, x_i] - c(x) \Big\} + c(x)\, e^{-\alpha r^2} \\
&\ge e^{-\alpha |x|^2} \Big\{ 4\alpha^2 \sum_{i,j=1}^n a_{ij}(x)\, x_i x_j - 2\alpha \sum_{i=1}^n [a_{ii}(x) - b_i(x)\, x_i] - c(x) \Big\}. \qquad (7.29)
\end{aligned}
\]
By the ellipticity assumption, we have
n
¸
i,j=1
a
ij
(x)x
i
x
j
≥ δ[x[
2
≥ δ(
r
2
)
2
> 0 in D. (7.30)
Hence we can choose α suﬃciently large, such that Lw ≥ 0. This, together
with the assumption Lu ≥ 0 implies Lv ≥ 0, and (7.27) follows from the Weak
Maximum Principle.
To verify (7.28), we consider two parts of the boundary ∂D separately.

(i) On ∂D ∩ B, since u(x) > u(x^o), there exists a c_o > 0 such that u(x) ≥ u(x^o) + c_o. Take ε small enough that ε|w| ≤ c_o on ∂D ∩ B. Hence

v(x) ≥ u(x^o) = v(x^o), ∀ x ∈ ∂D ∩ B.

(ii) On ∂D ∩ ∂B, w(x) = 0, and by the assumption u(x) ≥ u(x^o), we have v(x) ≥ v(x^o).

This completes the proof of the Lemma.
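In the special case L = −∆ (that is, a_{ij} = δ_{ij}, b_i ≡ 0, c ≡ 0), the computation (7.29) reduces to Lw = e^{−α|x|²}(4α²|x|² − 2αn), and the choice of a large α can be checked symbolically. A minimal sketch with sympy; the concrete values n = 3, r = 1, α = 10 are illustrative assumptions, not taken from the text:

```python
import sympy as sp

# Auxiliary function of the Hopf Lemma, w(x) = e^{-alpha r^2} - e^{-alpha |x|^2},
# treated as a radial function of s = |x| in dimension n.
s, alpha, n = sp.symbols('s alpha n', positive=True)
r = sp.Symbol('r', positive=True)                 # radius of the ball B

w = sp.exp(-alpha*r**2) - sp.exp(-alpha*s**2)

# For L = -Laplacian, Lw = -(w'' + (n-1)/s * w')
Lw = sp.simplify(-(sp.diff(w, s, 2) + (n - 1)/s*sp.diff(w, s)))

# (7.29) with a_ij = delta_ij, b_i = 0, c = 0:
predicted = sp.exp(-alpha*s**2)*(4*alpha**2*s**2 - 2*alpha*n)
assert sp.simplify(Lw - predicted) == 0

# On D we have |x| >= r/2, so Lw >= 0 once alpha >= 2n/r^2; sample check:
val = Lw.subs([(n, 3), (r, 1), (alpha, 10), (s, sp.Rational(1, 2))])
assert float(val) > 0
```

With these sample values the bracket is positive throughout D, matching the step "choose α sufficiently large" in the proof.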
Now we are ready to prove

Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0). Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.

i) If

Lu(x) ≥ 0, x ∈ Ω,

then u attains its minimum only on ∂Ω unless u is constant.

ii) If

Lu(x) ≤ 0, x ∈ Ω,

then u attains its maximum only on ∂Ω unless u is constant.

Proof. We prove part i) here; the proof of part ii) is similar. Let m be the minimum value of u in Ω̄. Set Σ = {x ∈ Ω | u(x) = m}. It is relatively closed in Ω. We show that either Σ is empty or Σ = Ω.
186 7 Maximum Principles
We argue by contradiction. Suppose Σ is a nonempty proper subset of
Ω. Then we can ﬁnd an open ball B ⊂ Ω ` Σ with a point on its boundary
belonging to Σ. Actually, we can ﬁrst ﬁnd a point p ∈ Ω ` Σ such that
d(p, Σ) < d(p, ∂Ω), then increase the radius of a small ball center at p until
it hits Σ (before hitting ∂Ω). Let x
o
be the point at ∂B ∩ Σ. Obviously we
have in B
Lu ≥ 0 and u(x) > u(x
o
).
Now we can apply the Hopf Lemma to conclude that the normal outward
derivative
∂u
∂ν
(x
o
) < 0. (7.31)
On the other hand, x
o
is an interior minimum of u in Ω, and we must have
Du(x
o
) = 0. This contradicts with (7.31) and hence completes the proof of
the Theorem.
In the case when c(x) ≥ 0, the strong principle still applies with slight
modiﬁcations.
Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0). Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.

i) If

Lu(x) ≥ 0, x ∈ Ω,

then u cannot attain its nonpositive minimum in the interior of Ω unless u is constant.

ii) If

Lu(x) ≤ 0, x ∈ Ω,

then u cannot attain its nonnegative maximum in the interior of Ω unless u is constant.
Remark 7.3.1 In order for the maximum principle to hold, we assume that the domain Ω is bounded. This is essential, since it guarantees the existence of the maximum and minimum of u in Ω̄. A simple counterexample is when Ω is the half space {x ∈ R^n | x_n > 0} and u(x) = x_n. Obviously, ∆u = 0, but u does not obey the maximum principle

max_Ω u ≤ max_∂Ω u.
Equally important is the nonnegativeness of the coefficient c(x). For example, set Ω = {(x, y) ∈ R² | −π/2 < x < 3π/2, −π/2 < y < π/2}. Then u = cos x cos y satisfies

−∆u − 2u = 0 in Ω,
u = 0 on ∂Ω.

But, obviously, there are some points in Ω at which u < 0.

However, if we impose some sign restriction on u, say u ≥ 0, then both conditions can be relaxed. A simple version of such a result will be presented in the next theorem.

Also, as one will see in the next section, c(x) is actually allowed to be negative, but not 'too negative'.
Theorem 7.3.3 (Maximum Principle and Hopf Lemma for a not necessarily bounded domain and a not necessarily nonnegative c(x).)

Let Ω be a domain in R^n with smooth boundary. Assume that u ∈ C²(Ω) ∩ C(Ω̄) satisfies

−∆u + ∑_{i=1}^n b_i(x) D_i u + c(x) u ≥ 0,  u(x) ≥ 0,  x ∈ Ω,
u(x) = 0,  x ∈ ∂Ω,      (7.32)

with bounded functions b_i(x) and c(x). Then

i) if u vanishes at some point in Ω, then u ≡ 0 in Ω; and
ii) if u ≢ 0 in Ω, then on ∂Ω the exterior normal derivative satisfies

∂u/∂ν < 0.
To prove the Theorem, we need the following Lemma concerning eigenvalues.

Lemma 7.3.2 Let λ_1 be the first positive eigenvalue of

−∆φ = λ_1 φ(x),  x ∈ B_1(0),
φ(x) = 0,  x ∈ ∂B_1(0),      (7.33)

with the corresponding eigenfunction φ(x) > 0. Then for any ρ > 0, the first positive eigenvalue of the problem on B_ρ(0) is λ_1/ρ². More precisely, if we let ψ(x) = φ(x/ρ), then

−∆ψ = (λ_1/ρ²) ψ(x),  x ∈ B_ρ(0),
ψ(x) = 0,  x ∈ ∂B_ρ(0).      (7.34)

The proof is straightforward and is left to the reader.
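The 1/ρ² scaling in Lemma 7.3.2 can also be observed numerically. A sketch in one dimension, where the interval (−ρ, ρ) stands in for the ball B_ρ and −∆ reduces to −d²/dx² (the grid size m = 400 is an arbitrary choice):

```python
import numpy as np

def first_dirichlet_eigenvalue(rho, m=400):
    """First eigenvalue of -u'' on (-rho, rho) with u = 0 at the endpoints,
    via a standard second-order finite-difference discretization."""
    h = 2*rho/(m + 1)
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    return np.linalg.eigvalsh(A)[0]

lam1 = first_dirichlet_eigenvalue(1.0)      # eigenvalue on the "unit ball"
for rho in (0.5, 2.0, 3.0):
    lam = first_dirichlet_eigenvalue(rho)
    # Lemma 7.3.2: the eigenvalue on the domain scaled by rho is lam1/rho^2
    assert abs(lam*rho**2 - lam1) < 1e-6
```

For this discretization the scaling is in fact exact (the same matrix divided by ρ²), mirroring the substitution ψ(x) = φ(x/ρ) in the Lemma.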
The Proof of Theorem 7.3.3.

i) Suppose that u = 0 at some point in Ω, but u ≢ 0 on Ω. Let

Ω⁺ = {x ∈ Ω | u(x) > 0}.

Then, by the regularity assumption on u, Ω⁺ is an open set with C² boundary. Obviously,

u(x) = 0, ∀ x ∈ ∂Ω⁺.

Let x^o be a point on ∂Ω⁺, but not on ∂Ω. Then for ρ > 0 sufficiently small, one can choose a ball B_{ρ/2}(x̄) ⊂ Ω⁺ with x^o as its boundary point. Let ψ be the positive eigenfunction of the eigenvalue problem (7.34) on B_ρ(x^o) corresponding to the eigenvalue λ_1/ρ². Obviously, B_ρ(x^o) completely covers B_{ρ/2}(x̄).

Let v = u/ψ. Then from (7.32), it is easy to deduce that
0 ≤ −∆v − 2 (∇v · ∇ψ)/ψ + ∑_{i=1}^n b_i(x) D_i v + [ (−∆ψ)/ψ + ∑_{i=1}^n b_i(x) (D_i ψ)/ψ + c(x) ] v
  ≡ −∆v + ∑_{i=1}^n b̃_i(x) D_i v + c̃(x) v.

Let φ be the positive eigenfunction of the eigenvalue problem (7.33) on B_1; then

c̃(x) = λ_1/ρ² + (1/ρ) ∑_{i=1}^n b_i(x) (D_i φ)/φ + c(x).
This allows us to choose ρ sufficiently small so that c̃(x) ≥ 0. Now we can apply the Hopf Lemma to conclude that the outward normal derivative at the boundary point x^o of B_{ρ/2}(x̄) satisfies

∂v/∂ν (x^o) < 0,   (7.35)

because

v(x) > 0, ∀ x ∈ B_{ρ/2}(x̄), and v(x^o) = u(x^o)/ψ(x^o) = 0.

On the other hand, since x^o is also a minimum of v in the interior of Ω, we must have

Dv(x^o) = 0.

This contradicts (7.35) and hence proves part i) of the Theorem.
ii) The proof goes almost the same as in part i), except that we consider the point x^o on ∂Ω and the ball B_{ρ/2}(x̄) in Ω with x^o ∈ ∂B_{ρ/2}(x̄). Then for the outward normal derivative of u, we have

∂u/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) + v(x^o) ∂ψ/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) < 0.

Here we have used the well-known fact that the eigenfunction ψ on B_ρ(x^o) is radially symmetric about the center x^o, and hence ∇ψ(x^o) = 0. This completes the proof of the Theorem.
7.4 Maximum Principles Based on Comparisons

In the previous section, we showed that if (−∆ + c(x))u ≥ 0, then the maximum principle, i.e. (7.20), applies. There, we required c(x) ≥ 0. We can think of −∆ as a 'positive' operator, and the maximum principle holds for any 'positive' operator. For c(x) ≥ 0, −∆ + c(x) is also 'positive'. Do we really need c(x) ≥ 0 here? To answer this question, let us consider the Dirichlet eigenvalue problem of −∆:

−∆φ − λφ(x) = 0,  x ∈ Ω,
φ(x) = 0,  x ∈ ∂Ω.      (7.36)

We notice that the eigenfunction φ corresponding to the first positive eigenvalue λ_1 is either positive or negative in Ω. That is, the solutions of (7.36) with λ = λ_1 obey the Maximum Principle, in the sense that the minimum of φ (if φ > 0), or the maximum (if φ < 0), is attained only on the boundary ∂Ω. This suggests that, to ensure the Maximum Principle, c(x) need not be nonnegative; it is allowed to be as negative as −λ_1. More precisely, we can establish the following more general maximum principle based on comparison.
Theorem 7.4.1 Assume that Ω is a bounded domain. Let φ be a positive function on Ω̄ satisfying

−∆φ + λ(x)φ ≥ 0.   (7.37)

Assume that u is a solution of

−∆u + c(x)u ≥ 0,  x ∈ Ω,
u ≥ 0  on ∂Ω.      (7.38)

If

c(x) > λ(x), ∀ x ∈ Ω,   (7.39)

then u ≥ 0 in Ω.
Proof. We argue by contradiction. Suppose that u(x) < 0 somewhere in Ω. Let v(x) = u(x)/φ(x). Since φ(x) > 0, we must have v(x) < 0 somewhere in Ω. Let x^o ∈ Ω be a minimum of v(x). By a direct calculation, it is easy to verify that

−∆v = 2 ∇v · (∇φ/φ) + (1/φ) ( −∆u + (∆φ/φ) u ).   (7.40)

On one hand, since x^o is a minimum, we have

−∆v(x^o) ≤ 0 and ∇v(x^o) = 0.   (7.41)

On the other hand, by (7.37), (7.38), and (7.39), and taking into account that u(x^o) < 0, we have, at the point x^o,

−∆u + (∆φ/φ) u ≥ −∆u + λ(x^o) u(x^o)
  > −∆u + c(x^o) u(x^o) ≥ 0.

This is an obvious contradiction with (7.40) and (7.41), and it completes the proof of the Theorem.
Remark 7.4.1 From the proof, one can see that conditions (7.37) and (7.39) are required only at the points where v attains its minimum, or at the points where u is negative.

The Theorem is also valid on an unbounded domain if u is "nonnegative" at infinity:

Theorem 7.4.2 If Ω is an unbounded domain, then, besides condition (7.38), assume further that

lim inf_{|x|→∞} u(x) ≥ 0.   (7.42)

Then u ≥ 0 in Ω.

Proof. Consider the same v(x) as in the proof of Theorem 7.4.1. Condition (7.42) guarantees that the minima of v(x) do not "leak" away to infinity. The rest of the argument is exactly the same as in the proof of Theorem 7.4.1.
For convenience in applications, we provide two typical situations in which there exist functions φ and λ(x) satisfying conditions (7.37) and (7.39), so that the Maximum Principle Based on Comparison applies:

i) narrow regions, and
ii) c(x) decaying fast enough at ∞.

i) Narrow Regions. When

Ω = {x | 0 < x_1 < l}

is a narrow region with width l, we can choose φ(x) = sin((x_1 + ε)/l). Then it is easy to see that −∆φ = (1/l²)φ, so that (7.37) holds with λ(x) = −1/l², which can be very negative when l is sufficiently small.

Corollary 7.4.1 (Narrow Region Principle.) Assume that u satisfies (7.38) with a bounded function c(x). When the width l of the region Ω is sufficiently small, c(x) satisfies (7.39), i.e. c(x) > λ(x) = −1/l². Hence we can directly apply Theorem 7.4.1 to conclude that u ≥ 0 in Ω, provided lim inf_{|x|→∞} u(x) ≥ 0.
ii) Decay at Infinity. In dimension n ≥ 3, one can choose some positive number q < n − 2 and let φ(x) = 1/|x|^q. Then it is easy to verify that

−∆φ = (q(n − 2 − q)/|x|²) φ.

In the case that c(x) decays fast enough near infinity, we can adapt the proof of Theorem 7.4.1 to derive

Corollary 7.4.2 (Decay at Infinity) Assume that there exists R > 0 such that

c(x) > − q(n − 2 − q)/|x|²,  ∀ |x| > R.   (7.43)

Suppose

lim_{|x|→∞} u(x) |x|^q = 0.

Let Ω be a region contained in B^C_R(0) ≡ R^n \ B_R(0). If u satisfies (7.38) on Ω̄, then

u(x) ≥ 0 for all x ∈ Ω.
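Both comparison functions above can be verified symbolically. A quick sketch with sympy; the narrow-region check is one-dimensional in x_1, and the decay check uses the radial form of the Laplacian in dimension n:

```python
import sympy as sp

# (i) Narrow region: phi = sin((x1 + epsilon)/l) satisfies -phi'' = (1/l^2) phi,
# i.e. (7.37) holds with lambda(x) = -1/l^2.
x1, l, eps = sp.symbols('x1 l epsilon', positive=True)
phi = sp.sin((x1 + eps)/l)
assert sp.simplify(-sp.diff(phi, x1, 2) - phi/l**2) == 0

# (ii) Decay at infinity: phi = |x|^{-q} is radial, so the Laplacian reduces
# to phi'' + (n-1)/s * phi' with s = |x|.
s, q, n = sp.symbols('s q n', positive=True)
phi2 = s**(-q)
lap = sp.diff(phi2, s, 2) + (n - 1)/s*sp.diff(phi2, s)
assert sp.simplify(-lap - q*(n - 2 - q)/s**2*phi2) == 0
```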
Remark 7.4.2 From Remark 7.4.1, one can see that condition (7.43) is actually required only at the points where u is negative.

Remark 7.4.3 Although Theorem 7.4.1 and its Corollaries are stated in linear forms, they can easily be applied to nonlinear equations, for example,

−∆u − |u|^{p−1} u = 0,  x ∈ R^n.   (7.44)

Assume that the solution u decays near infinity at the rate 1/|x|^s with s(p − 1) > 2. Let c(x) = −|u(x)|^{p−1}. Then, for R sufficiently large and for the region Ω as stated in Corollary 7.4.2, c(x) satisfies (7.43) in Ω. If we further assume that

u|_{∂Ω} ≥ 0,

then we can derive from Corollary 7.4.2 that u ≥ 0 in the entire region Ω.
7.5 A Maximum Principle for Integral Equations

In this section, we introduce a maximum principle for integral equations.

Let Ω be a region in R^n, which may or may not be bounded. Assume that

K(x, y) ≥ 0, ∀ (x, y) ∈ Ω × Ω.

Define the integral operator T by

(Tf)(x) = ∫_Ω K(x, y) f(y) dy.

Let ‖·‖ be a norm in a Banach space satisfying

‖f‖ ≤ ‖g‖ whenever 0 ≤ f(x) ≤ g(x) in Ω.   (7.45)

There are many norms that satisfy (7.45), such as the L^p(Ω) norms, the L^∞(Ω) norm, and the C(Ω) norm.
Theorem 7.5.1 Suppose that

‖Tg‖ ≤ θ‖g‖ with some 0 < θ < 1, ∀ g.   (7.46)

If

f(x) ≤ (Tf)(x),   (7.47)

then

f(x) ≤ 0 for all x ∈ Ω.   (7.48)

Proof. Define

Ω⁺ = {x ∈ Ω | f(x) > 0},

and

f⁺(x) = max{0, f(x)},  f⁻(x) = min{0, f(x)}.

Then it is easy to see that for any x ∈ Ω⁺,

0 ≤ f⁺(x) ≤ (Tf)(x) = (Tf⁺)(x) + (Tf⁻)(x) ≤ (Tf⁺)(x),   (7.49)

while in the rest of the region Ω, we have f⁺(x) = 0 and (Tf⁺)(x) ≥ 0 by the definitions. Therefore, f⁺(x) ≤ (Tf⁺)(x) holds for any x ∈ Ω. It then follows from (7.45) and (7.46) that

‖f⁺‖ ≤ ‖Tf⁺‖ ≤ θ‖f⁺‖.

Since θ < 1, we must have

‖f⁺‖ = 0,

which implies that f⁺(x) ≡ 0, and hence f(x) ≤ 0. This completes the proof of the Theorem.
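The mechanism of Theorem 7.5.1 can be illustrated on a grid. Below, T is a discretized integral operator on [0, 1] whose kernel (an arbitrary nonnegative choice, scaled by 0.8) has row sums below 1, so (7.46) holds in the sup norm; a function with f ≤ Tf is manufactured by solving f = Tf + h for some h ≤ 0, and the conclusion f ≤ 0 can then be checked directly. A sketch, not tied to any specific kernel in the text:

```python
import numpy as np

# Discretize (Tf)(x) = ∫_0^1 K(x,y) f(y) dy on a uniform grid.
m = 200
x = (np.arange(m) + 0.5) / m
dx = 1.0 / m
K = 0.8 * np.exp(-np.abs(x[:, None] - x[None, :]))   # nonnegative kernel
T = K * dx                                           # matrix acting on grid values

# Contraction in the sup norm: theta = max row sum < 1  (condition (7.46)).
theta = T.sum(axis=1).max()
assert theta < 1

# Manufacture f with f <= Tf: pick h <= 0 and solve f = Tf + h.
rng = np.random.default_rng(0)
h = -rng.random(m)                  # h(x) <= 0
f = np.linalg.solve(np.eye(m) - T, h)

assert np.all(f - T @ f <= 1e-12)   # f <= Tf   (hypothesis (7.47))
assert np.all(f <= 1e-12)           # conclusion (7.48): f <= 0
```

The design choice of solving f = Tf + h mirrors the Neumann series f = h + Th + T²h + ..., in which every term is nonpositive because K ≥ 0 and h ≤ 0.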
Recall that in Section 7.4, after introducing the "Maximum Principle Based on Comparison" for partial differential equations, we explained how it can be applied in the situations of "Narrow Regions" and "Decay at Infinity". Parallel to this, we have similar applications for integral equations. Let α be a number between 0 and n. Define

(Tf)(x) = ∫_Ω (1/|x − y|^{n−α}) c(y) f(y) dy.

Assume that f satisfies the integral equation

f = Tf in Ω,

or, more generally, the integral inequality

f ≤ Tf in Ω.   (7.50)
Further assume that

‖Tf‖_{L^p(Ω)} ≤ C ‖c(y)‖_{L^τ(Ω)} ‖f‖_{L^p(Ω)}   (7.51)

for some p, τ > 1.

If we have the right integrability condition on c(y), then we can derive from Theorem 7.5.1 a maximum principle that can be applied to "Narrow Regions" and "Near Infinity". More precisely, we have

Corollary 7.5.1 Assume that c(y) ≥ 0 and c(y) ∈ L^τ(R^n). Let f ∈ L^p(R^n) be a nonnegative function satisfying (7.50) and (7.51). Then there exist positive numbers R_o and ε_o, depending on c(y) only, such that

if µ(Ω ∩ B_{R_o}(0)) ≤ ε_o, then f⁺ ≡ 0 in Ω,

where µ(D) is the measure of the set D.
where µ(D) is the measure of the set D.
Proof. Since c(y) ∈ L
τ
(R
n
), by Lebesgue integral theory, when the measure
of the intersection of Ω with B
Ro
(0) is suﬃciently small, we can make the
integral
Ω
[c(y)[
τ
dy as small as we wish, and thus to obtain
Cc(y)
L
τ
(Ω)
< 1.
Now it follows from Theorem 7.5.1 that
f
+
(x) ≡ 0 , ∀ x ∈ Ω.
This completes the proof of the Corollary.
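The quantity driving the proof, ∫_Ω |c(y)|^τ dy, indeed becomes small once Ω sits far out near infinity. A numerical sketch in R³ for the illustrative choice c(y) = (1 + |y|)^{−4} and τ = 1, reducing the integral over {|y| > R} to a radial one:

```python
import numpy as np

# Tail integral  ∫_{|y|>R} |c(y)|^tau dy = 4*pi ∫_R^∞ s^2 (1+s)^{-4} ds,
# truncated at a large s_max and evaluated with the trapezoid rule.
def tail(R, s_max=1.0e4, m=200_000):
    s = np.linspace(R, s_max, m)
    g = 4*np.pi*s**2/(1 + s)**4
    ds = s[1] - s[0]
    return ds*(g.sum() - 0.5*(g[0] + g[-1]))

t1, t10, t100 = tail(1.0), tail(10.0), tail(100.0)
assert t1 > t10 > t100 > 0      # the tail shrinks as R grows
assert t100 < 0.2
```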
Remark 7.5.1 One can see that the condition µ(Ω ∩ B_{R_o}(0)) ≤ ε_o in the Corollary is satisfied in the following two situations:

i) Narrow regions: the width of Ω is very small.
ii) Near infinity: say, Ω = B^C_R(0), the complement of the ball B_R(0), with sufficiently large R.

As an immediate application of this "Maximum Principle", we study an integral equation in Section 8.5. We will use the method of moving planes to obtain the radial symmetry and monotonicity of the positive solutions.
8

Methods of Moving Planes and of Moving Spheres

8.1 Outline of the Method of Moving Planes
8.2 Applications of Maximum Principles Based on Comparison
    8.2.1 Symmetry of Solutions on a Unit Ball
    8.2.2 Symmetry of Solutions of −∆u = u^p in R^n
    8.2.3 Symmetry of Solutions of −∆u = e^u in R^2
8.3 Method of Moving Planes in a Local Way
    8.3.1 The Background
    8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
    8.4.1 The Background
    8.4.2 Necessary Conditions
8.5 Method of Moving Planes in an Integral Form and Symmetry of Solutions for Integral Equations
The Method of Moving Planes (MMP) was invented by the Soviet mathematician Alexanderoff in the early 1950s. Decades later, it was further developed by Serrin [Se], Gidas, Ni, and Nirenberg [GNN], Caffarelli, Gidas, and Spruck [CGS], Li [Li], Chen and Li [CL] [CL1], Chang and Yang [CY], and many others. This method has been applied to free boundary problems, semilinear partial differential equations, and other problems. Particularly for semilinear partial differential equations, there have been many significant contributions. We refer to the paper of Frenkel [F] for more descriptions of the method.
The Method of Moving Planes and its variant, the Method of Moving Spheres, have become powerful tools in establishing symmetry and monotonicity for solutions of partial differential equations. They can also be used to obtain a priori estimates, to derive useful inequalities, and to prove the nonexistence of solutions.

From the previous chapter, we have seen the beauty and power of maximum principles. The MMP greatly enhances that power. Roughly speaking, the MMP is a continuous way of repeatedly applying maximum principles. During this process, the maximum principle is used infinitely many times, and the advantage is that each time we only need to use it in a very narrow region. From the previous chapter, one can see that, in such a narrow region, even if the coefficients of the equation are not 'good', the maximum principle can still be applied. In the authors' research practice, we also introduced a form of maximum principle at infinity to the MMP and thereby simplified many proofs and extended the results in more natural ways. We recommend that readers study this part carefully, so that they will be able to apply it to their own research.

It is well known that by using a Green's function, one can change a differential equation into an integral equation, and under certain conditions they are equivalent. To investigate the symmetry and monotonicity of integral equations, the authors, together with Ou, created an integral form of the MMP. Instead of using local properties (say, differentiability) of a differential equation, they employed the global properties of the solutions of integral equations.
In this chapter, we will apply the Method of Moving Planes and its variant, the Method of Moving Spheres, to study semilinear elliptic equations and integral equations. We will establish symmetry, monotonicity, a priori estimates, and the nonexistence of solutions. During the process of moving planes, the maximum principles introduced in the previous chapter are applied in innovative ways.

In Section 8.2, we will establish radial symmetry and monotonicity for the solutions of the following three semilinear elliptic problems:

−∆u = f(u), x ∈ B_1(0), with u = 0 on ∂B_1(0);

−∆u = u^{(n+2)/(n−2)}(x), x ∈ R^n, n ≥ 3;

and

−∆u = e^{u(x)}, x ∈ R².

During the moving of planes, the Maximum Principles Based on Comparison will play a major role. In particular, the Narrow Region Principle and the Decay at Infinity Principle will be used repeatedly in dealing with the three examples.
In Section 8.3, we will apply the Method of Moving Planes in a 'local way' to obtain a priori estimates on the solutions of the prescribing scalar curvature equation on a compact Riemannian manifold M:

− (4(n−1)/(n−2)) ∆_o u + R_o(x) u = R(x) u^{(n+2)/(n−2)} in M.

We allow the function R(x) to change signs. In this situation, the traditional blowing-up analysis fails near the set where R(x) = 0. We will use the Method of Moving Planes in an innovative way to obtain a priori estimates. Since the Method of Moving Planes cannot be applied to the solution u directly, we introduce an auxiliary function to circumvent this difficulty.
In Section 8.4, we use the Method of Moving Spheres to prove the nonexistence of solutions for the prescribing Gaussian and scalar curvature equations

−∆u + 2 = R(x) e^u

and

−∆u + (n(n−2)/4) u = ((n−2)/(4(n−1))) R(x) u^{(n+2)/(n−2)}

on S² and on S^n (n ≥ 3), respectively. We prove that if the function R(x) is rotationally symmetric and monotone in the region where it is positive, then neither equation admits a solution. This provides a stronger necessary condition than the well-known Kazdan-Warner condition, and it also becomes a sufficient condition for the existence of solutions in most cases.
In Section 8.5, as an application of the maximum principle for integral equations introduced in Section 7.5, we study the integral equation in R^n

u(x) = ∫_{R^n} (1/|x − y|^{n−α}) u^{(n+α)/(n−α)}(y) dy

for any real number α between 0 and n. It arises as an Euler-Lagrange equation for a functional in the context of the Hardy-Littlewood-Sobolev inequalities. Due to the different nature of the integral equation, the traditional Method of Moving Planes does not work. Hence we exploit its global properties and develop a new idea, the Integral Form of the Method of Moving Planes, to obtain the symmetry and monotonicity of the solutions. The Maximum Principle for Integral Equations established in Chapter 7 is combined with estimates of various integral norms to carry on the moving of planes.
8.1 Outline of the Method of Moving Planes

To outline how the Method of Moving Planes works, we take the Euclidean space R^n as an example. Let u be a positive solution of a certain partial differential equation. If we want to prove that it is symmetric and monotone in a given direction, we may assign that direction as the x_1 axis. For any real number λ, let

T_λ = {x = (x_1, x_2, ..., x_n) ∈ R^n | x_1 = λ}.

This is a plane perpendicular to the x_1 axis, and it is the plane that we will move. Let Σ_λ denote the region to the left of the plane, i.e.

Σ_λ = {x ∈ R^n | x_1 < λ}.

Let

x^λ = (2λ − x_1, x_2, ..., x_n)

be the reflection of the point x = (x_1, ..., x_n) about the plane T_λ (see Figure 1, which shows a point x in Σ_λ and its reflection x^λ on the other side of T_λ).
We compare the values of the solution u at the points x and x^λ, and we want to show that u is symmetric about some plane T_{λ_o}. To this end, let

w_λ(x) = u(x^λ) − u(x).

In order to show that there exists some λ_o such that

w_{λ_o}(x) ≡ 0, ∀ x ∈ Σ_{λ_o},

we generally go through the following two steps.

Step 1. We first show that for λ sufficiently negative, we have

w_λ(x) ≥ 0, ∀ x ∈ Σ_λ.   (8.1)

Then we are able to start off from this neighborhood of x_1 = −∞ and move the plane T_λ along the x_1 direction to the right as long as the inequality (8.1) holds.

Step 2. We continuously move the plane this way up to its limiting position. More precisely, we define

λ_o = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ}.

We prove that u is symmetric about the plane T_{λ_o}, that is, w_{λ_o}(x) ≡ 0 for all x ∈ Σ_{λ_o}. This is usually carried out by a contradiction argument. We show that if w_{λ_o}(x) ≢ 0, then there would exist λ > λ_o such that (8.1) holds, and this contradicts the definition of λ_o.

From the above illustration, one can see that the key to the Method of Moving Planes is to establish inequality (8.1). For partial differential equations, maximum principles are powerful tools for this task, while for integral equations we use a different idea: we estimate a certain norm of w_λ on the set

Σ_λ⁻ = {x ∈ Σ_λ | w_λ(x) < 0},

where the inequality (8.1) is violated. We show that this norm must be zero, and hence Σ_λ⁻ is empty.
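The two steps above can be mimicked numerically in one dimension. For the hypothetical profile u(x) = 1/(1 + (x − 1)²), which is symmetric and monotone decreasing about x = 1, the supremum λ_o of admissible plane positions should recover the symmetry point; the grids below are arbitrary choices:

```python
import numpy as np

# A 1-D caricature of Steps 1-2.  For a plane position lam,
# Sigma_lam = {x < lam} and w_lam(x) = u(2*lam - x) - u(x);
# lambda_o is the largest lam with w_lam >= 0 on Sigma_lam.
def u(x):
    return 1.0 / (1.0 + (x - 1.0)**2)

x = np.linspace(-50.0, 50.0, 20_001)             # grid standing in for R^1

def plane_ok(lam):
    left = x[x < lam]                            # Sigma_lam
    return np.all(u(2*lam - left) - u(left) >= -1e-12)

lambdas = np.linspace(-2.0, 3.0, 501)
lambda_o = max(l for l in lambdas if plane_ok(l))
assert abs(lambda_o - 1.0) < 0.02                # recovers the symmetry point
```

Very negative λ are always admissible (Step 1), and the first inadmissible positions appear just past the symmetry point, which is exactly the picture behind the definition of λ_o in Step 2.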
8.2 Applications of the Maximum Principles Based on Comparisons

In this section, we study some semilinear elliptic equations. We will apply the method of moving planes to establish the symmetry of the solutions. The essence of the method of moving planes is the application of various maximum principles. In the proof of each theorem, the readers will see vividly how the Maximum Principles Based on Comparisons are applied to narrow regions and to solutions with decay at infinity.

8.2.1 Symmetry of Solutions in a Unit Ball

We first begin with an elegant result of Gidas, Ni, and Nirenberg [GNN1]:

Theorem 8.2.1 Assume that f(·) is a Lipschitz continuous function such that

|f(p) − f(q)| ≤ C_o |p − q|   (8.2)

for some constant C_o. Then every positive solution u of

−∆u = f(u), x ∈ B_1(0),
u = 0 on ∂B_1(0),      (8.3)

is radially symmetric and monotone decreasing about the origin.
Proof.

As shown in Figure 4 (the unit ball B_1(0) cut by the plane T_λ, with the left cap Σ_λ and the reflection x^λ of a point x ∈ Σ_λ), let T_λ = {x | x_1 = λ} be the plane perpendicular to the x_1 axis, and let Σ_λ be the part of B_1(0) to the left of the plane T_λ. For each x ∈ Σ_λ, let x^λ be the reflection of the point x about the plane T_λ; more precisely, x^λ = (2λ − x_1, x_2, ..., x_n).

We compare the values of the solution u on Σ_λ with those on its reflection. Let

u_λ(x) = u(x^λ), and w_λ(x) = u_λ(x) − u(x).

Then it is easy to see that u_λ satisfies the same equation as u does. Applying the Mean Value Theorem to f(u), one can verify that w_λ satisfies

∆w_λ + C(x, λ) w_λ(x) = 0,  x ∈ Σ_λ,

where

C(x, λ) = (f(u_λ(x)) − f(u(x))) / (u_λ(x) − u(x)),

and, by condition (8.2),

|C(x, λ)| ≤ C_o.   (8.4)
Step 1: Start Moving the Plane.

We start near the left end of the region. Obviously, for λ sufficiently close to −1, Σ_λ is a narrow (in the x_1 direction) region, and on ∂Σ_λ, w_λ(x) ≥ 0. (On T_λ, w_λ(x) = 0, while on the curved part of ∂Σ_λ, w_λ(x) > 0 since u > 0 in B_1(0).)

Now we can apply the "Narrow Region Principle" (Corollary 7.4.1) to conclude that, for λ close to −1,

w_λ(x) ≥ 0, ∀ x ∈ Σ_λ.   (8.5)

This provides a starting point for us to move the plane T_λ.
Step 2: Move the Plane to Its Right Limit.

We now increase the value of λ continuously; that is, we move the plane T_λ to the right as long as the inequality (8.5) holds. We show that, by moving in this way, the plane will not stop before hitting the origin. More precisely, let

λ̄ = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ};

we first claim that

λ̄ ≥ 0.   (8.6)

Otherwise, we will show that the plane can be moved further to the right by a small distance, and this would contradict the definition of λ̄. In fact, if λ̄ < 0, then the image of the curved surface part of ∂Σ_λ̄ under the reflection about T_λ̄ lies inside B_1(0), where u(x) > 0 by assumption. It follows that, on this part of ∂Σ_λ̄, w_λ̄(x) > 0. By the Strong Maximum Principle, we deduce that

w_λ̄(x) > 0

in the interior of Σ_λ̄.

Let d_o be the maximum width of narrow regions to which we can apply the "Narrow Region Principle". Choose a small positive number δ such that δ ≤ min{d_o/2, −λ̄}. We consider the function w_{λ̄+δ}(x) on the narrow region (see Figure 5, which depicts the strip between the planes x_1 = λ̄ − d_o/2 and x_1 = λ̄ + δ inside the ball)

Ω_δ = Σ_{λ̄+δ} ∩ {x | x_1 > λ̄ − d_o/2}.

It satisfies

∆w_{λ̄+δ} + C(x, λ̄+δ) w_{λ̄+δ} = 0,  x ∈ Ω_δ,
w_{λ̄+δ}(x) ≥ 0,  x ∈ ∂Ω_δ.      (8.7)
The equation is obvious. To see the boundary condition, we first notice that it is satisfied on the two curved parts and on the flat part where x_1 = λ̄ + δ of the boundary ∂Ω_δ, due to the definition of w_{λ̄+δ}. To see that it is also true on the rest of the boundary, where x_1 = λ̄ − d_o/2, we use a continuity argument. Notice that on this part, w_λ̄ is positive and bounded away from 0. More precisely, and more generally, there exists a constant c_o > 0 such that

w_λ̄(x) ≥ c_o, ∀ x ∈ Σ_{λ̄ − d_o/2}.

Since w_λ is continuous in λ, for δ sufficiently small we still have

w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ̄ − d_o/2}.

Hence, in particular, the boundary condition in (8.7) holds for such small δ. Now we can apply the "Narrow Region Principle" to conclude that

w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Ω_δ,

and therefore

w_{λ̄+δ}(x) ≥ 0, ∀ x ∈ Σ_{λ̄+δ}.

This contradicts the definition of λ̄ and thus establishes (8.6).
(8.6) implies that

u(−x_1, x') ≤ u(x_1, x'), ∀ x_1 ≥ 0,   (8.8)

where x' = (x_2, ..., x_n).

We then start from λ close to 1 and move the plane T_λ toward the left. Similarly, we obtain

u(−x_1, x') ≥ u(x_1, x'), ∀ x_1 ≥ 0.   (8.9)

Combining the two opposite inequalities (8.8) and (8.9), we see that u(x) is symmetric about the plane T_0. Since we can place the x_1 axis in any direction, we conclude that u(x) must be radially symmetric about the origin. The monotonicity also easily follows from the argument. This completes the proof.
8.2.2 Symmetry of Solutions of −∆u = u^p in R^n

In an elegant paper of Gidas, Ni, and Nirenberg [2], an interesting result is the symmetry of the positive solutions of the semilinear elliptic equation

∆u + u^p = 0,  x ∈ R^n,  n ≥ 3.   (8.10)

They proved

Theorem 8.2.2 For p = (n+2)/(n−2), all the positive solutions of (8.10) with reasonable behavior at infinity, namely

u = O(1/|x|^{n−2}),

are radially symmetric and monotone decreasing about some point, and hence assume the form

u(x) = [n(n−2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for λ > 0 and for some x^o ∈ R^n.
This uniqueness result, as was pointed out by R. Schoen, is in fact equivalent to the geometric result due to Obata [O]: a Riemannian metric on S^n which is conformal to the standard one and has the same constant scalar curvature is the pullback of the standard one under a conformal map of S^n to itself. Recently, Caffarelli, Gidas and Spruck [CGS] removed the decay assumption u = O(|x|^{2−n}) and proved the same result. In the case that 1 ≤ p < (n+2)/(n−2), Gidas and Spruck [GS] showed that the only nonnegative solution of (8.10) is identically zero. Then, in the authors' paper [CL1], a simpler and more elementary proof was given for almost the same result:

Theorem 8.2.3 i) For p = (n+2)/(n−2), every positive C² solution of (8.10) must be radially symmetric and monotone decreasing about some point, and hence assumes the form

u(x) = [n(n−2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for some λ > 0 and x^o ∈ R^n.

ii) For p < (n+2)/(n−2), the only nonnegative solution of (8.10) is identically zero.
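The closed form appearing in Theorems 8.2.2 and 8.2.3 i) can be verified symbolically. A sketch for the sample case n = 3 (so p = 5) and x^o = 0, where u is radial and the Laplacian reduces to u'' + (2/s)u':

```python
import sympy as sp

# u(s) = (3 lam^2)^{1/4} (lam^2 + s^2)^{-1/2}, the n = 3 instance of the
# stated solution family, written in the radial variable s = |x - x^o|.
s, lam = sp.symbols('s lam', positive=True)
u = (3*lam**2)**sp.Rational(1, 4) / sp.sqrt(lam**2 + s**2)

lap_u = sp.diff(u, s, 2) + 2/s*sp.diff(u, s)    # radial Laplacian in R^3
# Equation (8.10) with p = (n+2)/(n-2) = 5:
assert sp.simplify(lap_u + u**5) == 0
```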
The proof of Theorem 8.2.2 is actually included in the more general proof of the first part of Theorem 8.2.3. However, to better illustrate the idea, we will first present the proof of Theorem 8.2.2 (mostly along our own lines), and the readers will see vividly how the "Decay at Infinity" principle is applied here.

Proof of Theorem 8.2.2.

Define

Σ_λ = {x = (x_1, ..., x_n) ∈ R^n | x_1 < λ},  T_λ = ∂Σ_λ,

and let x^λ be the reflection point of x about the plane T_λ, i.e.

x^λ = (2λ − x_1, x_2, ..., x_n).

(See the previous Figure 1.)

Let

u_λ(x) = u(x^λ), and w_λ(x) = u_λ(x) − u(x).
The proof consists of three steps. In the first step, we start from the very left end of our region R^n, that is, near x_1 = −∞. We will show that, for λ sufficiently negative,

w_λ(x) ≥ 0, ∀ x ∈ Σ_λ.   (8.11)

Here, the "Decay at Infinity" principle is applied.

Then, in the second step, we will move our plane T_λ in the x_1 direction toward the right as long as inequality (8.11) holds. The plane will stop at some limiting position, say at λ = λ_o. We will show that

w_{λ_o}(x) ≡ 0, ∀ x ∈ Σ_{λ_o}.

This implies that the solution u is symmetric and monotone decreasing about the plane T_{λ_o}. Since the x_1 axis can be chosen in any direction, we conclude that u must be radially symmetric and monotone about some point.

Finally, in the third step, using the uniqueness theory for ordinary differential equations, we will show that the solutions can only assume the given form.
Step 1. Prepare to Move the Plane from Near −∞.

To verify (8.11) for λ sufficiently negative, we apply the maximum principle to w_λ(x). Write τ = (n+2)/(n−2). By the definition of u_λ, it is easy to see that u_λ satisfies the same equation as u does. Then, by the Mean Value Theorem, it is easy to verify that

−∆w_λ = u_λ^τ(x) − u^τ(x) = τ ψ_λ^{τ−1}(x) w_λ(x),   (8.12)

where ψ_λ(x) is some number between u_λ(x) and u(x). Recalling the "Maximum Principle Based on Comparison" (Theorem 7.4.1), we see that here c(x) = −τ ψ_λ^{τ−1}(x). By the "Decay at Infinity" argument (Corollary 7.4.2), it suffices to check the decay rate of ψ_λ^{τ−1}(x), and more precisely, only at the points x̃ where w_λ is negative (see Remark 7.4.2). Apparently, at such points,

u_λ(x̃) < u(x̃),

and hence

0 ≤ u_λ(x̃) ≤ ψ_λ(x̃) ≤ u(x̃).

By the decay assumption on the solution,

u(x) = O(1/|x|^{n−2}),

we derive immediately that

ψ_λ^{τ−1}(x̃) = O( (1/|x̃|^{n−2})^{4/(n−2)} ) = O(1/|x̃|⁴).

Here the power of 1/|x̃| is greater than two, which is what we need (actually more than we need). Therefore, we can apply the "Maximum Principle Based on Comparison" to conclude that for λ sufficiently negative (so that |x̃| is sufficiently large), we must have (8.11). This completes the preparation for the moving of planes.
Step 2. Move the Plane to the Limiting Position to Derive Symmetry.

Now we can move the plane T_λ toward the right, i.e., increase the value of λ, as long as the inequality (8.11) holds. Define

λ_o = sup{λ | w_λ(x) ≥ 0, ∀x ∈ Σ_λ}.

Obviously, λ_o < +∞, due to the asymptotic behavior of u near x_1 = +∞. We claim that

w_{λ_o}(x) ≡ 0, ∀x ∈ Σ_{λ_o}. (8.13)

Otherwise, by the "Strong Maximum Principle" on unbounded domains (see Theorem 7.3.3), we have

w_{λ_o}(x) > 0 in the interior of Σ_{λ_o}. (8.14)

We show that the plane T_{λ_o} can still be moved a small distance to the right. More precisely, there exists a δ_o > 0 such that, for all 0 < δ < δ_o, we have

w_{λ_o+δ}(x) ≥ 0, ∀x ∈ Σ_{λ_o+δ}. (8.15)

This would contradict the definition of λ_o, and hence (8.13) must hold.

Recall that in the last section we used the "Narrow Region Principle" to derive (8.15). Unfortunately, it cannot be applied in this situation, because
8.2 Applications of the Maximum Principles Based on Comparisons 205
the "narrow region" here is unbounded, and we are not able to guarantee that w_{λ_o} is bounded away from 0 on the left boundary of the "narrow region".

To overcome this difficulty, we introduce a new function

w̄_λ(x) = w_λ(x)/φ(x),

where

φ(x) = 1/|x|^q with 0 < q < n−2.

Then it is a straightforward calculation to verify that

−Δw̄_λ = 2 ∇w̄_λ · (∇φ/φ) + (−Δw_λ + (Δφ/φ) w_λ) (1/φ). (8.16)
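The quotient identity (8.16) follows from the product rule; as a sketch of the computation in our notation (with ∇ the gradient):

```latex
% With w_\lambda = \bar{w}_\lambda \phi, the product rule gives
\Delta w_\lambda = \phi\,\Delta\bar{w}_\lambda
                 + 2\,\nabla\bar{w}_\lambda\cdot\nabla\phi
                 + \bar{w}_\lambda\,\Delta\phi .
% Dividing by \phi and rearranging (using \bar{w}_\lambda = w_\lambda/\phi):
-\Delta\bar{w}_\lambda
  = 2\,\nabla\bar{w}_\lambda\cdot\frac{\nabla\phi}{\phi}
  + \Bigl(-\Delta w_\lambda + \frac{\Delta\phi}{\phi}\,w_\lambda\Bigr)\frac{1}{\phi}.
```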
We have

Lemma 8.2.1 There exists an R_o > 0 (independent of λ) such that if x^o is a minimum point of w̄_λ and w̄_λ(x^o) < 0, then |x^o| < R_o.

We postpone the proof of the Lemma for a moment. Now suppose that (8.15) is violated for any δ > 0. Then there exists a sequence of numbers {δ_i} tending to 0 and, for each i, a corresponding negative minimum x^i of w̄_{λ_o+δ_i}. By Lemma 8.2.1, we have

|x^i| ≤ R_o, ∀ i = 1, 2, ....

Then there is a subsequence of {x^i} (still denoted by {x^i}) which converges to some point x^o ∈ R^n. Consequently,

∇w̄_{λ_o}(x^o) = lim_{i→∞} ∇w̄_{λ_o+δ_i}(x^i) = 0 (8.17)

and

w̄_{λ_o}(x^o) = lim_{i→∞} w̄_{λ_o+δ_i}(x^i) ≤ 0.

However, we already know that w̄_{λ_o} ≥ 0; therefore, we must have w̄_{λ_o}(x^o) = 0. It follows that

∇w_{λ_o}(x^o) = ∇w̄_{λ_o}(x^o) φ(x^o) + w̄_{λ_o}(x^o) ∇φ(x^o) = 0 + 0 = 0. (8.18)

On the other hand, by (8.14), since w_{λ_o}(x^o) = 0, the point x^o must be on the boundary of Σ_{λ_o}. Then, by the Hopf Lemma (see Theorem 7.3.3), the outward normal derivative

∂w_{λ_o}/∂ν (x^o) < 0.

This contradicts (8.18). Now, to verify (8.15), what is left is to prove the Lemma.
Proof of Lemma 8.2.1. Assume that x^o is a negative minimum of w̄_λ. Then

−Δw̄_λ(x^o) ≤ 0 and ∇w̄_λ(x^o) = 0. (8.19)

On the other hand, as we argued in Step 1, by the asymptotic behavior of u at infinity, if |x^o| is sufficiently large,

c(x^o) := −τ ψ_λ^{τ−1}(x^o) > −q(n−2−q)/|x^o|^2 ≡ Δφ(x^o)/φ(x^o).

It follows from (8.12) that

(−Δw_λ + (Δφ/φ) w_λ)(x^o) > 0.

This, together with (8.19), contradicts (8.16), and hence completes the proof of the Lemma.
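The comparison value Δφ(x^o)/φ(x^o) used in this proof comes from an elementary radial computation (a sketch; r = |x|):

```latex
% For \phi(x) = |x|^{-q} in \mathbb{R}^n, write r = |x|:
\Delta\phi = \phi''(r) + \frac{n-1}{r}\,\phi'(r)
           = q(q+1)\,r^{-q-2} - (n-1)\,q\,r^{-q-2}
           = -\,q\,(n-2-q)\,r^{-q-2},
\qquad
\frac{\Delta\phi}{\phi} = -\,\frac{q\,(n-2-q)}{|x|^{2}} .
% For 0 < q < n-2 this is negative and decays like |x|^{-2},
% slower than c(x) = O(|x|^{-4}); this is why the inequality
% c(x^o) > \Delta\phi(x^o)/\phi(x^o) holds for |x^o| large.
```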
Step 3. In the previous two steps, we showed that the positive solutions of (8.10) must be radially symmetric and monotone decreasing about some point in R^n. Since the equation is invariant under translation, without loss of generality, we may assume that the solutions are symmetric about the origin. Then they satisfy the following ordinary differential equation:

−u''(r) − ((n−1)/r) u'(r) = u^τ,
u'(0) = 0,
u(0) = [n(n−2)]^{(n−2)/4} / λ^{(n−2)/2},

for some λ > 0. One can verify that

u(r) = [n(n−2)λ^2]^{(n−2)/4} / (λ^2 + r^2)^{(n−2)/2}

is a solution, and by the uniqueness of solutions of the ODE problem, this is the only one. Therefore, we conclude that every positive solution of (8.10) must assume the form

u(x) = [n(n−2)λ^2]^{(n−2)/4} / (λ^2 + |x − x^o|^2)^{(n−2)/2}

for some λ > 0 and some x^o ∈ R^n. This completes the proof of the Theorem.
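One can check directly (a sketch of our computation) that this profile solves the equation: writing u(r) = c (λ² + r²)^{−(n−2)/2},

```latex
\Delta u = u'' + \frac{n-1}{r}\,u'
         = -\,c\,n(n-2)\,\lambda^{2}\,(\lambda^{2}+r^{2})^{-\frac{n+2}{2}},
\qquad
u^{\frac{n+2}{n-2}} = c^{\frac{n+2}{n-2}}\,(\lambda^{2}+r^{2})^{-\frac{n+2}{2}} .
% Hence \Delta u + u^{(n+2)/(n-2)} = 0 exactly when
c^{\frac{4}{n-2}} = n(n-2)\,\lambda^{2},
\quad\text{i.e.}\quad c = \bigl[\,n(n-2)\,\lambda^{2}\,\bigr]^{\frac{n-2}{4}},
% which is the constant appearing in the formula above.
```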
Proof of Theorem 8.2.3.

i) The general idea in proving this part of the Theorem is almost the same as that for Theorem 8.2.2. The main difference is that we have no decay assumption on the solution u at infinity; hence the method of moving planes cannot be applied directly to u. So we first make a Kelvin transform to define a new function

v(x) = (1/|x|^{n−2}) u(x/|x|^2).
Then obviously, v(x) has the desired decay rate 1/|x|^{n−2} at infinity, but it has a possible singularity at the origin. It is easy to verify that v satisfies the same equation as u does, except at the origin:

Δv + v^τ(x) = 0, x ∈ R^n \ {0}, n ≥ 3. (8.20)

We will apply the method of moving planes to the function v and show that v is radially symmetric and monotone decreasing about some point. If the center point is the origin, then by the definition of v, u is also symmetric and monotone decreasing about the origin. If the center point of v is not the origin, then v has no singularity at the origin, and hence u has the desired decay at infinity. Then the same argument as in the proof of Theorem 8.2.2 implies that u is symmetric and monotone decreasing about some point.
Define

v_λ(x) = v(x^λ), w_λ(x) = v_λ(x) − v(x).

Because v(x) may be singular at the origin, correspondingly w_λ may be singular at the point x^λ = (2λ, 0, ..., 0). Hence, instead of on Σ_λ, we consider w_λ on

Σ̃_λ = Σ_λ \ {x^λ}.

In our proof, we treat the singular point carefully: each time we show that the points of interest stay away from the singularities, so that we can carry the method of moving planes through to the end and show the existence of a λ_o such that w_{λ_o}(x) ≡ 0 for x ∈ Σ̃_{λ_o}, and that v is strictly increasing in the x_1 direction in Σ̃_{λ_o}.
As in the proof of Theorem 8.2.2, we see that v_λ satisfies the same equation as v does, and

−Δw_λ = τ ψ_λ^{τ−1}(x) w_λ(x),

where ψ_λ(x) is some number between v_λ(x) and v(x).
Step 1. We show that, for λ sufficiently negative, we have

w_λ(x) ≥ 0, ∀x ∈ Σ̃_λ. (8.21)

By the asymptotic behavior

v(x) ∼ 1/|x|^{n−2},

we derive immediately that, at a negative minimum point x^o of w_λ,

ψ_λ^{τ−1}(x^o) ∼ (1/|x^o|^{n−2})^{τ−1} = 1/|x^o|^4.

The power of 1/|x^o| is greater than two, and we have the desired decay rate for c(x) := −τ ψ_λ^{τ−1}(x), as mentioned in Corollary 7.4.2. Hence we can apply the "Decay at Infinity" argument to w_λ(x). The difference here is that w_λ has a singularity
at x^λ; hence we need to show that the minimum of w_λ stays away from x^λ. Actually, we will show that

If inf_{Σ̃_λ} w_λ(x) < 0, then the infimum is achieved in Σ_λ \ B_1(x^λ). (8.22)

To see this, we first note that for x ∈ B_1(0),

v(x) ≥ min_{∂B_1(0)} v(x) =: ε_0 > 0,

due to the fact that v(x) > 0 and Δv ≤ 0.

Then let λ be so negative that v(x) ≤ ε_0 for x ∈ B_1(x^λ). This is possible because v(x) → 0 as |x| → ∞.

For such λ, obviously w_λ(x) ≥ 0 on B_1(x^λ) \ {x^λ}. This implies (8.22). Now, similar to Step 1 in the proof of Theorem 8.2.2, we can deduce that w_λ(x) ≥ 0 for λ sufficiently negative.
Step 2. Again, define

λ_o = sup{λ | w_λ(x) ≥ 0, ∀x ∈ Σ̃_λ}.

We will show that

If λ_o < 0, then w_{λ_o}(x) ≡ 0, ∀x ∈ Σ̃_{λ_o}.

Define w̄_λ = w_λ/φ in the same way as in the proof of Theorem 8.2.2. Suppose that w̄_{λ_o}(x) ≢ 0; then by the Maximum Principle we have

w̄_{λ_o}(x) > 0, for x ∈ Σ̃_{λ_o}.
The rest of the proof is similar to Step 2 in the proof of Theorem 8.2.2, except that now we need to take care of the singularities. Again, let λ_k ↘ λ_o be a sequence such that w̄_{λ_k}(x) < 0 for some x ∈ Σ̃_{λ_k}. We need to show that, for each k, inf_{Σ̃_{λ_k}} w̄_{λ_k}(x) can be achieved at some point x^k ∈ Σ̃_{λ_k}, and that the sequence {x^k} is bounded away from the singularities x^{λ_k} of w_{λ_k}. This can be seen from the following facts:

a) There exist ε > 0 and δ > 0 such that

w̄_{λ_o}(x) ≥ ε for x ∈ B_δ(x^{λ_o}) \ {x^{λ_o}}.

b) lim_{λ→λ_o} inf_{x ∈ B_δ(x^λ)} w̄_λ(x) ≥ inf_{x ∈ B_δ(x^{λ_o})} w̄_{λ_o}(x) ≥ ε.

Fact (a) can be shown by noting that w̄_{λ_o}(x) > 0 on Σ̃_{λ_o} and Δw_{λ_o} ≤ 0, while fact (b) is obvious.

Now, through an argument similar to the one in the proof of Theorem 8.2.2, one can easily arrive at a contradiction. Therefore w̄_{λ_o}(x) ≡ 0.
If λ_o = 0, then we can carry out the above procedure in the opposite direction; namely, we move the plane in the negative x_1 direction from positive infinity toward the origin. If our planes T_λ stop somewhere before the origin, we derive the symmetry and monotonicity in the x_1 direction by the above argument. If they stop at the origin again, we also obtain the symmetry and monotonicity in the x_1 direction by combining the two inequalities obtained in the two opposite directions. Since the x_1 direction can be chosen arbitrarily, we conclude that the solution u must be radially symmetric about some point.
ii) To show the nonexistence of solutions in the case p < (n+2)/(n−2), we notice that, after the Kelvin transform, v satisfies

Δv + (1/|x|^{n+2−p(n−2)}) v^p(x) = 0, x ∈ R^n \ {0}.

Due to the singularity of the coefficient of v^p at the origin, one can easily see that v can only be symmetric about the origin if it is not identically zero. Hence u must also be symmetric about the origin. Now, given any two points x^1 and x^2 in R^n, since equation (8.10) is invariant under translations and rotations, we may assume that the origin is at the midpoint of the line segment x^1 x^2. Then, from the above argument, we must have u(x^1) = u(x^2). It follows that u is constant. Finally, from equation (8.10), we conclude that u ≡ 0. This completes the proof of the Theorem.
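The power of |x| in the coefficient can be read off from the classical Kelvin-transform identity (sketched here in our notation):

```latex
% For v(x) = |x|^{2-n}\,u(x/|x|^{2}), the classical identity reads
\Delta v(x) = |x|^{-n-2}\,(\Delta u)\!\left(\frac{x}{|x|^{2}}\right).
% If -\Delta u = u^{p}, then with u = |x|^{n-2}\,v we get
-\Delta v = |x|^{-n-2}\,\bigl(|x|^{n-2} v\bigr)^{p}
          = \frac{1}{|x|^{\,n+2-p(n-2)}}\; v^{p},
% and the weight disappears precisely at the critical exponent
% p = (n+2)/(n-2); this is why v satisfies the unweighted equation
% (8.20) in the critical case.
```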
8.2.3 Symmetry of Solutions for −Δu = e^u in R^2
When considering the problem of prescribing Gaussian curvature on two-dimensional compact manifolds, if the sequence of approximate solutions "blows up", then by rescaling and taking the limit, one would arrive at the following equation in the entire space R^2:

Δu + exp u = 0, x ∈ R^2, with ∫_{R^2} exp u(x) dx < +∞. (8.23)

The classification of the solutions of this limiting equation would provide essential information on the original problems on manifolds; it is also interesting in its own right.

It is known that

φ_{λ,x^o}(x) = ln( 32λ^2 / (4 + λ^2 |x − x^o|^2)^2 ),

for any λ > 0 and any point x^o ∈ R^2, is a family of explicit solutions.
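As a consistency check (our computation), these explicit solutions have total integral exactly 8π, matching Lemma 8.2.2 and the bound ≤ −4 in Theorem 8.2.5 below: with r = |x − x^o|,

```latex
\int_{\mathbb{R}^{2}} e^{\phi_{\lambda,x^o}}\,dx
 = \int_{\mathbb{R}^{2}} \frac{32\,\lambda^{2}}{(4+\lambda^{2}r^{2})^{2}}\,dx
 = 64\pi\lambda^{2}\int_{0}^{\infty}\frac{r\,dr}{(4+\lambda^{2}r^{2})^{2}}
 = 64\pi\lambda^{2}\cdot\frac{1}{8\lambda^{2}}
 = 8\pi .
% Consequently u/\ln|x| \to -\tfrac{1}{2\pi}\cdot 8\pi = -4 for these
% solutions.
```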
We will use the method of moving planes to prove:

Theorem 8.2.4 Every solution of (8.23) is radially symmetric with respect to some point in R^2, and hence assumes the form of φ_{λ,x^o}(x).

To this end, we first need to obtain some decay rate of the solutions near infinity.
Some Global and Asymptotic Behavior of the Solutions

The following Theorem gives the asymptotic behavior of the solutions near infinity, which is essential to the application of the method of moving planes.

Theorem 8.2.5 If u(x) is a solution of (8.23), then, as |x| → +∞,

u(x)/ln|x| → −(1/2π) ∫_{R^2} exp u(x) dx ≤ −4

uniformly.
This Theorem is a direct consequence of the following two Lemmas.

Lemma 8.2.2 (W. Ding) If u is a solution of

−Δu = e^u, x ∈ R^2, and ∫_{R^2} exp u(x) dx < +∞,

then

∫_{R^2} exp u(x) dx ≥ 8π.
Proof. For −∞ < t < ∞, let Ω_t = {x | u(x) > t}. One can obtain

∫_{Ω_t} exp u(x) dx = −∫_{Ω_t} Δu dx = ∫_{∂Ω_t} |∇u| ds

and

−(d/dt)|Ω_t| = ∫_{∂Ω_t} ds/|∇u|.

By the Schwarz inequality and the isoperimetric inequality,

∫_{∂Ω_t} ds/|∇u| · ∫_{∂Ω_t} |∇u| ds ≥ |∂Ω_t|^2 ≥ 4π|Ω_t|.

Hence

−((d/dt)|Ω_t|) ∫_{Ω_t} exp u(x) dx ≥ 4π|Ω_t|,

and so

(d/dt)(∫_{Ω_t} exp u(x) dx)^2 = 2 exp t ((d/dt)|Ω_t|) ∫_{Ω_t} exp u(x) dx ≤ −8π|Ω_t| e^t.

Integrating from −∞ to ∞ gives

−(∫_{R^2} exp u(x) dx)^2 ≤ −8π ∫_{R^2} exp u(x) dx,

which implies ∫_{R^2} exp u(x) dx ≥ 8π, as desired.
Lemma 8.2.2 enables us to obtain the asymptotic behavior of the solutions
at inﬁnity.
Lemma 8.2.3 If u(x) is a solution of (8.23), then, as |x| → +∞,

u(x)/ln|x| → −(1/2π) ∫_{R^2} exp u(x) dx uniformly.

Proof. By a result of Brezis and Merle [BM], we see that the condition ∫_{R^2} exp u(x) dx < ∞ implies that the solution u is bounded from above.

Let

w(x) = (1/2π) ∫_{R^2} (ln|x−y| − ln(|y|+1)) exp u(y) dy.

Then it is easy to see that

Δw(x) = exp u(x), x ∈ R^2,

and we will show that

w(x)/ln|x| → (1/2π) ∫_{R^2} exp u(x) dx uniformly as |x| → +∞. (8.24)
To see this, we need only verify that

I := ∫_{R^2} [ (ln|x−y| − ln(|y|+1) − ln|x|) / ln|x| ] e^{u(y)} dy → 0

as |x| → ∞. Write I = I_1 + I_2 + I_3, where I_1, I_2, and I_3 are the integrals over the three regions

D_1 = {y | |x−y| ≤ 1},
D_2 = {y | |x−y| > 1 and |y| ≤ K},

and

D_3 = {y | |x−y| > 1 and |y| > K},

respectively. We may assume that |x| ≥ 3.

a) To estimate I_1, we simply notice that

I_1 ≤ C ∫_{|x−y|≤1} e^{u(y)} dy − (1/ln|x|) ∫_{|x−y|≤1} ln|x−y| e^{u(y)} dy.

Then, by the boundedness of e^{u(y)} and of ∫_{R^2} e^{u(y)} dy, we see that I_1 → 0 as |x| → ∞.

b) For each fixed K, in the region D_2 we have, as |x| → ∞,

(ln|x−y| − ln(|y|+1) − ln|x|) / ln|x| → 0,

hence I_2 → 0.
c) To see that I_3 → 0, we use the fact that for |x−y| > 1,

| (ln|x−y| − ln(|y|+1) − ln|x|) / ln|x| | ≤ C.

Then let K → ∞. This verifies (8.24).

Consider the function v(x) = u(x) + w(x). Then Δv ≡ 0 and

v(x) ≤ C + C_1 ln(|x|+1)

for some constants C and C_1. Therefore v must be constant. This completes the proof of our Lemma.
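The last step uses a Liouville-type fact: a harmonic function in the plane with at most logarithmic growth must be constant. A sketch via the interior gradient estimate (our reasoning, not spelled out in the text):

```latex
% For any x_0 and large R, the function h_R := C + C_1\ln(2R+1) - v is
% harmonic and nonnegative on B_R(x_0), so the gradient estimate for
% nonnegative harmonic functions gives
|\nabla v(x_0)| = |\nabla h_R(x_0)|
  \le \frac{C'}{R}\,h_R(x_0)
  \le \frac{C''\,\ln R}{R} \;\longrightarrow\; 0
  \qquad (R \to \infty),
% hence \nabla v \equiv 0 and v is constant.
```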
Combining Lemma 8.2.2 and Lemma 8.2.3, we obtain our Theorem 8.2.5.
The Method of Moving Planes and Symmetry

In this subsection, we apply the method of moving planes to establish the symmetry of the solutions. For a given solution, we move the family of lines which are orthogonal to a given direction from negative infinity to a critical position, and then show that the solution is symmetric in that direction about the critical position. We also show that the solution is strictly increasing before the critical position. Since the direction can be chosen arbitrarily, we conclude that the solution must be radially symmetric about some point. Finally, by the uniqueness of solutions of the following O.D.E. problem

u''(r) + (1/r) u'(r) = f(u),
u'(0) = 0,
u(0) = 1,

we see that the solutions of (8.23) must assume the form φ_{λ,x^0}(x).

Assume that u(x) is a solution of (8.23). Without loss of generality, we show the monotonicity and symmetry of the solution in the x_1 direction.
For λ ∈ R^1, let

Σ_λ = {(x_1, x_2) | x_1 < λ}

and

T_λ = ∂Σ_λ = {(x_1, x_2) | x_1 = λ}.

Let

x^λ = (2λ − x_1, x_2)

be the reflection point of x = (x_1, x_2) about the line T_λ. (See the previous Figure 1.)

Define

w_λ(x) = u(x^λ) − u(x).
A straightforward calculation shows that

Δw_λ(x) + (exp ψ_λ(x)) w_λ(x) = 0, (8.25)
where ψ_λ(x) is some number between u(x) and u_λ(x).
Step 1. As in the previous examples, we show that, for λ sufficiently negative, we have

w_λ(x) ≥ 0, ∀x ∈ Σ_λ.

By the "Decay at Infinity" argument, the key is to check the decay rate of c(x) := −e^{ψ_λ(x)} at the points x^o where w_λ(x) is negative. At such a point,

u((x^o)^λ) < u(x^o), and hence ψ_λ(x^o) ≤ u(x^o).
By virtue of Theorem 8.2.5, we have

e^{ψ_λ(x^o)} = O(1/|x^o|^4). (8.26)
Notice that we are in dimension two, while the key function φ = 1/|x|^q given in Corollary 7.4.2 requires 0 < q < n−2; hence it does not work here. As a modification, we choose

φ(x) = ln(|x| − 1).

Then it is easy to verify that

(Δφ/φ)(x) = −1 / ( |x| (|x|−1)^2 ln(|x|−1) ).

It follows from this and (8.26) that

e^{ψ_λ(x^o)} + (Δφ/φ)(x^o) < 0 for sufficiently large |x^o|. (8.27)

This is what we desire.
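Inequality (8.27) is just a comparison of two decay rates (a short sketch; r = |x^o| large):

```latex
% With \phi(x) = \ln(|x|-1):
e^{\psi_\lambda(x^o)} = O\!\left(\frac{1}{r^{4}}\right),
\qquad
\frac{\Delta\phi}{\phi}(x^{o}) = \frac{-1}{r\,(r-1)^{2}\,\ln(r-1)}
  \;\sim\; -\,\frac{1}{r^{3}\ln r}.
% Since 1/(r^{3}\ln r) \gg 1/r^{4} as r \to \infty, the negative term
% dominates, and the sum in (8.27) is negative for r large.
```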
Then, similar to the argument in Subsection 5.2, we introduce the function

w̄_λ(x) = w_λ(x)/φ(x).

It satisfies

Δw̄_λ + 2 ∇w̄_λ · (∇φ/φ) + ( e^{ψ_λ(x)} + Δφ/φ ) w̄_λ = 0. (8.28)

Moreover, by the asymptotic behavior of the solution u near infinity (see Lemma 8.2.3), we have, for each fixed λ,

w̄_λ(x) = (u(x^λ) − u(x)) / ln(|x|−1) → 0, as |x| → ∞. (8.29)

Now, if w_λ(x) < 0 somewhere, then by (8.29) there exists a point x^o which is a negative minimum of w̄_λ(x). At this point, one can easily derive a contradiction from (8.27) and (8.28). This completes Step 1.
Step 2. Define

λ_o = sup{λ | w_λ(x) ≥ 0, ∀x ∈ Σ_λ}.

We show that

If λ_o < 0, then w_{λ_o}(x) ≡ 0, ∀x ∈ Σ_{λ_o}.

The argument is entirely similar to that in Step 2 of the proof of Theorem 8.2.2, except that we use φ(x) = ln(−x_1 + 2). This completes the proof of the Theorem.
8.3 Method of Moving Planes in a Local Way

8.3.1 The Background

Let M be a Riemannian manifold of dimension n ≥ 3 with metric g_o. Given a function R(x) on M, one interesting problem in differential geometry is to find a metric g that is pointwise conformal to the original metric g_o and has scalar curvature R(x). This is equivalent to finding a positive solution of the semilinear elliptic equation

−(4(n−1)/(n−2)) Δ_o u + R_o(x) u = R(x) u^{(n+2)/(n−2)}, x ∈ M, (8.30)

where Δ_o is the Laplace-Beltrami operator of (M, g_o) and R_o(x) is the scalar curvature of g_o.
In recent years, there has been a lot of progress in understanding equation (8.30). When (M, g_o) is the standard sphere S^n, the equation becomes

−Δu + (n(n−2)/4) u = ((n−2)/(4(n−1))) R(x) u^{(n+2)/(n−2)}, u > 0, x ∈ S^n. (8.31)
It is the so-called critical case, where the lack of compactness occurs. In this case, the well-known Kazdan-Warner condition gives rise to many examples of R for which there is no solution. In the last few years, a tremendous amount of effort has been put into finding existence of solutions, and many interesting results have been obtained (see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6] [CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein).

One main ingredient in the proof of existence is to obtain a priori estimates on the solutions. For equation (8.31) on S^n, to establish a priori estimates, a useful technique employed by most authors was a 'blowing-up' analysis. However, it does not work near the points where R(x) = 0. Due to this limitation, people had to assume that R was positive and bounded away from 0. This technical assumption became somewhat standard and has been used by many authors for quite a few years. For example, see articles by Bahri
[Ba], Bahri and Coron [BC], Chang and Yang [CY1], Chang, Gursky, and
Yang [CY2], Li [Li2] [Li3], and Schoen and Zhang [SZ].
Then, for functions with changing signs, are there any a priori estimates? In [CL6], we answered this question affirmatively. We removed the above well-known technical assumption and obtained a priori estimates on the solutions of (8.31) for functions R which are allowed to change signs.

The main difficulty encountered by the traditional 'blowing-up' analysis is near the points where R(x) = 0. To overcome this, we used the 'method of moving planes' in a local way to control the growth of the solutions in this region and obtained an a priori estimate.
In fact, we derived a priori bounds for the solutions of the more general equation

−Δu + (n(n−2)/4) u = R(x) u^p, u > 0, x ∈ S^n, (8.32)

for any exponent p between 1 + 1/A and A, where A is an arbitrary positive number. For a fixed A, the bound is independent of p.
Proposition 8.3.1 Assume R ∈ C^{2,α}(S^n) and |∇R(x)| ≥ β_0 for any x with |R(x)| ≤ δ_0. Then there are positive constants ε and C, depending only on β_0, δ_0, M, and ||R||_{C^{2,α}(S^n)}, such that for any solution u of (8.32), we have u ≤ C in the region where |R(x)| ≤ ε.
In this proposition, we essentially required R(x) to grow linearly near its zeros, which seems a little restrictive. Recently, in the case p = (n+2)/(n−2), Lin [Lin1] weakened the condition by assuming only polynomial growth of R near its zeros. To prove this result, he first established a Liouville-type theorem in R^n, and then used a blowing-up argument.

Later, by introducing a new auxiliary function, we sharpened our method of moving planes to further weaken the condition and obtain a more general result:
Theorem 8.3.1 Let M be a locally conformally flat manifold. Let

Γ = {x ∈ M | R(x) = 0}.

Assume that Γ ∈ C^{2,α} and R ∈ C^{2,α}(M) satisfies ∂R/∂ν ≤ 0, where ν is the outward normal of Γ (pointing to the region where R is negative), and

R(x)/|∇R(x)| is continuous near Γ, and = 0 on Γ. (8.33)

Let D be any compact subset of M and let p be any number greater than 1. Then the solutions of the equation

−Δ_o u + R_o(x) u = R(x) u^p in M (8.34)

are uniformly bounded near Γ ∩ D.
One can see that our condition (8.33) is weaker than Lin's polynomial growth restriction. To illustrate, one may simply look at functions R(x) which behave like exp{−1/d(x)}, where d(x) is the distance from the point x to Γ. Obviously, this kind of function satisfies our condition but does not have polynomial growth.

Moreover, our restriction on the exponent is much weaker: besides being (n+2)/(n−2), p can be any number greater than 1.
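For instance (a computation of ours, not in the text), if R behaves like e^{−1/d(x)} near Γ, then, since |∇d| = 1 near Γ:

```latex
R = e^{-1/d}
\quad\Longrightarrow\quad
|\nabla R| = e^{-1/d}\,\frac{1}{d^{2}}\,|\nabla d| = \frac{R}{d^{2}},
\qquad
\frac{R}{|\nabla R|} = d^{2} \;\longrightarrow\; 0
\quad\text{as } x \to \Gamma .
% So condition (8.33) holds, although R vanishes faster than any power
% of the distance d, i.e. R has no polynomial growth near its zero set.
```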
8.3.2 The A Priori Estimates

Now we estimate the solutions of equation (8.34) and prove Theorem 8.3.1. Since M is a locally conformally flat manifold, a locally flat metric can be chosen so that equation (8.34) in a compact set D ⊂ M can be reduced to

−Δu = R(x) u^p, p > 1, (8.35)

in a compact set K in R^n. This is the equation we will consider throughout the section.

The proof of the Theorem is divided into two parts. We first derive the bound in the region(s) where R is negatively bounded away from zero. Then, based on this bound, we use the 'method of moving planes' to estimate the solutions in the region(s) surrounding the set Γ.
Part I. In the region(s) where R is negative

In this part, we derive estimates in the region(s) where R is negatively bounded away from zero. They come from a standard elliptic estimate for subharmonic functions and an integral bound on the solutions. We prove

Theorem 8.3.2 The solutions of (8.35) are uniformly bounded in the region {x ∈ K : R(x) ≤ 0 and dist(x, Γ) ≥ δ}. The bound depends only on δ and the lower bound of R.

To prove Theorem 8.3.2, we introduce the following lemma of Gilbarg and Trudinger [GT].
Lemma 8.3.1 (See Theorem 9.20 in [GT].) Let u ∈ W^{2,n}(D) and suppose Δu ≥ 0. Then, for any ball B_{2r}(x) ⊂ D and any q > 0, we have

sup_{B_r(x)} u ≤ C ( (1/r^n) ∫_{B_{2r}(x)} (u^+)^q dV )^{1/q}, (8.36)

where C = C(n, D, q).
By virtue of this Lemma, to establish an a priori bound on the solutions, one only needs to obtain an integral bound. In fact, we prove
Lemma 8.3.2 Let B be any ball in K that is bounded away from Γ. Then there is a constant C such that

∫_B u^p dx ≤ C. (8.37)
Proof. Let Ω be an open neighborhood of K such that dist(∂Ω, K) ≥ 1. Let ϕ be the first eigenfunction of −Δ in Ω, i.e.

−Δϕ = λ_1 ϕ(x), x ∈ Ω,
ϕ(x) = 0, x ∈ ∂Ω.

Let

sgn R = 1 if R(x) > 0; 0 if R(x) = 0; −1 if R(x) < 0.

Let α = (1+2p)/(p−1). Multiply both sides of equation (8.35) by ϕ^α |R|^α sgn R and then integrate. Taking into account that α > 2 and that ϕ and R are bounded in Ω, through a straightforward calculation we have

∫_Ω ϕ^α |R|^{1+α} u^p dx ≤ C_1 ∫_Ω ϕ^{α−2} |R|^{α−2} u dx ≤ C_2 ∫_Ω ϕ^{α/p} |R|^{α−2} u dx.

Applying the Hölder inequality to the right-hand side, we arrive at

∫_Ω ϕ^α |R|^{1+α} u^p dx ≤ C_3. (8.38)

Now (8.37) follows suit. This completes the proof of the Lemma.
Proof of Theorem 8.3.2. It is a direct consequence of Lemma 8.3.1 and Lemma 8.3.2.
Part II. In the region where R is small

In this part, we estimate the solutions in a neighborhood of Γ, where R is small.

Let u be a given solution of (8.35), and let x_0 ∈ Γ. It is sufficient to show that u is bounded in a neighborhood of x_0. The key ingredient is to use the method of moving planes to show that, near x_0, along any direction that is close to the direction of ∇R, the values of the solution u are comparable. This result, together with the boundedness of the integral ∫ u^p in a small neighborhood of x_0, leads to an a priori bound on the solutions. Since the method of moving planes cannot be applied directly to u, we construct some auxiliary functions. The proof consists of the following four steps.
Step 1. Transforming the region

Make a translation, a rotation, or, if necessary, a Kelvin transform. Denote the new system by x = (x_1, y) with y = (x_2, ..., x_n), and let the x_1 axis point
to the right. In the new system, x_0 becomes the origin, the image of the set Ω^+ := {x | R(x) > 0} is contained in the left of the y-plane, and the image of Γ is tangent to the y-plane at the origin. The image of Γ is also uniformly convex near the origin.
Let x_1 = φ(y) be the equation of the image of Γ near the origin. Let

∂_1 D := {x | x_1 = φ(y) + ε, x_1 ≥ −2ε}.

The intersection of ∂_1 D with the plane {x ∈ R^n | x_1 = −2ε} encloses a part of the plane, which we denote by ∂_2 D. Let D be the region enclosed by the two surfaces ∂_1 D and ∂_2 D. (See Figure 6.)
[Figure 6: the region D near the origin in the (x_1, y) coordinates, bounded by the surfaces ∂_1 D and ∂_2 D; the curve Γ separates the region R(x) > 0 on the left from R(x) < 0 on the right, and T_λ is the moving plane.]
The small positive number ε is chosen to ensure that

(a) ∂_1 D is uniformly convex,
(b) ∂R/∂x_1 ≤ 0 in D, and
(c) R(x)/|∇R(x)| is continuous in D.
Step 2. Introducing an auxiliary function

Let m = max_{∂_1 D} u, and let ũ be a C^2 extension of u from ∂_1 D to the entire ∂D such that

0 ≤ ũ ≤ 2m and |∇ũ| ≤ Cm.
Let w be the harmonic function satisfying

Δw = 0, x ∈ D,
w = ũ, x ∈ ∂D.

Then, by the maximum principle and a standard elliptic estimate, we have

0 ≤ w(x) ≤ 2m and ||w||_{C^1(D)} ≤ C_1 m. (8.39)

Introduce a new function

v(x) = u(x) − w(x) + C_0 m [ε + φ(y) − x_1] + m [ε + φ(y) − x_1]^2,

which is well defined in D. Here C_0 is a large constant to be determined later.
One can easily verify that v(x) satisfies the following:

Δv + ψ(y) + f(x, v) = 0, x ∈ D,
v(x) = 0, x ∈ ∂_1 D,

where

ψ(y) = −C_0 m Δφ(y) − m Δ[ε + φ(y)]^2 − 2m

and

f(x, v) = 2m x_1 Δφ(y) + R(x) { v + w(x) − C_0 m [ε + φ(y) − x_1] − m [ε + φ(y) − x_1]^2 }^p.
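The equation for v follows by substituting u = v + w − C_0 m[ε + φ(y) − x_1] − m[ε + φ(y) − x_1]² into (8.35); a sketch of the bookkeeping (our computation, writing ε for the small parameter):

```latex
% Using \Delta w = 0, \Delta(x_1^2) = 2, and
% \Delta([\varepsilon+\phi-x_1]^2)
%   = \Delta[\varepsilon+\phi]^2 - 2x_1\Delta\phi + 2:
\Delta u = \Delta v - C_0 m\,\Delta\phi(y) - m\,\Delta[\varepsilon+\phi(y)]^{2}
         + 2m\,x_1\,\Delta\phi(y) - 2m .
% Substituting into -\Delta u = R(x)\,u^{p} and grouping terms:
\Delta v
 + \underbrace{\bigl(-C_0 m\,\Delta\phi(y)
   - m\,\Delta[\varepsilon+\phi(y)]^{2} - 2m\bigr)}_{\psi(y)}
 + \underbrace{\bigl(2m\,x_1\,\Delta\phi(y) + R(x)\,u^{p}\bigr)}_{f(x,\,v)}
 = 0 .
```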
At this step, we show that

v(x) > 0, ∀x ∈ D.

We consider the following two cases.
Case 1) φ(y) + ε/2 ≤ x_1 ≤ φ(y) + ε. In this region, R(x) is negative and bounded away from 0. By Theorem 8.3.2 and the standard elliptic estimate, we have

|∂u/∂x_1| ≤ C_2 m

for some positive constant C_2. It follows from this and (8.39) that

∂v/∂x_1 ≤ (C_1 + C_2 − C_0) m − 2m (ε + φ(y) − x_1).

Noticing that (ε + φ(y) − x_1) ≥ 0, one can choose C_0 sufficiently large such that

∂v/∂x_1 < 0.

Then, since

v(x) = 0, ∀x ∈ ∂_1 D,

we have v(x) > 0.
Case 2) −2ε ≤ x_1 ≤ φ(y) + ε/2. In this region, again by (8.39), we have

v(x) ≥ −w(x) + C_0 m ε/2 ≥ −2m + C_0 m ε/2.

Again, choosing C_0 sufficiently large (depending only on ε), we arrive at v(x) > 0.
Step 3. Applying the Method of Moving Planes to v in the x_1 direction

In the previous step, we showed that v(x) ≥ 0 in D and v(x) = 0 on ∂_1 D. This lays a foundation for the moving of planes. Now we can start from the rightmost tip of the region D and move the planes perpendicular to the x_1 axis toward the left. More precisely, let

Σ_λ = {x ∈ D | x_1 ≥ λ},
T_λ = {x ∈ R^n | x_1 = λ}.

Let x^λ = (2λ − x_1, y) be the reflection point of x with respect to the plane T_λ, and let

v_λ(x) = v(x^λ) and w_λ(x) = v_λ(x) − v(x).

We are going to show that

w_λ(x) ≥ 0, ∀x ∈ Σ_λ. (8.40)
We want to show that the above inequality holds for −ε_1 ≤ λ ≤ ε for some 0 < ε_1 < ε. This choice of ε_1 guarantees that when the plane T_λ moves to λ = −ε_1, the reflection of Σ_λ about the plane T_λ still lies in the region D.

(8.40) is true when λ is less than but close to ε, because Σ_λ is then a narrow region and f(x, v) is Lipschitz continuous in v (actually, ∂f/∂v = p R(x) u^{p−1}). For a detailed argument, the readers may see the first example in Section 5.
Now we decrease λ, that is, move the plane T_λ toward the left, as long as the inequality (8.40) remains true. We show that the moving of planes can be carried on provided

f(x, v(x)) ≤ f(x^λ, v(x)) for x = (x_1, y) ∈ D with x_1 > λ > −ε_1. (8.41)

In fact, it is easy to see that v_λ satisfies the equation

Δv_λ + ψ(y) + f(x^λ, v_λ) = 0,

and hence w_λ satisfies

−Δw_λ = f(x^λ, v_λ) − f(x^λ, v) + f(x^λ, v) − f(x, v)
= R(x^λ)(v_λ − v) + f(x^λ, v) − f(x, v)
= R(x^λ) w_λ + f(x^λ, v) − f(x, v).

If (8.41) holds, then we have
−Δw_λ − R(x^λ) w_λ(x) ≥ 0. (8.42)

This enables us to apply the maximum principle to w_λ.
Let

λ_o = inf{λ | w_λ(x) ≥ 0, ∀x ∈ Σ_λ}.

If λ_o > −ε_1, then, since v(x) > 0 in D, by virtue of (8.42) and the maximum principle, we have

w_{λ_o}(x) > 0, ∀x ∈ Σ_{λ_o}, and ∂w_{λ_o}/∂x_1 < 0, ∀x ∈ T_{λ_o} ∩ Σ_{λ_o}.

Then, similar to the argument in the examples in Section 5, one can derive a contradiction.
It is elementary to see that (8.41) holds if

∂f(x, v)/∂x_1 ≤ 0 in the set {x ∈ D | R(x) < 0 or x_1 > −2ε_1}.

To estimate ∂f/∂x_1, we use

∂f/∂x_1 = 2m Δφ(y) + u^{p−1} { (∂R/∂x_1) u + p R(x) [ ∂w/∂x_1 + C_0 m + 2m(ε + φ(y) − x_1) ] }.
First, we notice that the uniform convexity of the image of Γ near the origin implies

Δφ(y) ≤ −a_0 < 0 (8.43)

for some constant a_0.
We consider the following two possibilities.

a) R(x) ≤ 0. Choose C_0 sufficiently large so that

∂w/∂x_1 + C_0 m ≥ 0.

Noticing that ∂R/∂x_1 ≤ 0 and (ε + φ(y) − x_1) ≥ 0 in D, we have

∂f/∂x_1 ≤ 0.
b) R(x) > 0, x = (x_1, y) with x_1 > −2ε_1.

In the part where u ≥ 1, we use u to control

( R(x) / (∂R/∂x_1) ) p [ ∂w/∂x_1 + C_0 m + 2m(ε + φ(y) − x_1) ].

More precisely, we write

∂f/∂x_1 = 2m Δφ(y) + u^{p−1} (∂R/∂x_1) { u + ( R(x) / (∂R/∂x_1) ) p [ ∂w/∂x_1 + C_0 m + 2m(ε + φ(y) − x_1) ] }.
Choose ε_1 sufficiently small, such that

|∂R/∂x_1| ∼ |∇R|.

Then, by condition (8.33) on R, we can make R(x)/(∂R/∂x_1) arbitrarily small by a small choice of ε_1, and therefore arrive at ∂f/∂x_1 ≤ 0.
In the part where u ≤ 1, by virtue of (8.43), we can use 2m Δφ(y) to control

R p [ ∂w/∂x_1 + C_0 m + 2m(ε + φ(y) − x_1) ].

Here again we use the smallness of R.
So far, our conclusion is: the method of moving planes can be carried on up to λ = −ε_1. More precisely, for any λ between −ε_1 and ε, the inequality (8.40) is true.
Step 4. Deriving the a priori bound

Inequality (8.40) implies that, in a small neighborhood of the origin, the function v(x) is monotone decreasing in the x_1 direction. A similar argument shows that this remains true if we rotate the x_1 axis by a small angle. Therefore, for any point x_0 ∈ Γ, one can find a cone ∆_{x_0} with x_0 as its vertex, staying to the left of x_0, such that

v(x) ≥ v(x_0), ∀x ∈ ∆_{x_0}. (8.44)

Noticing that w(x) is bounded in D, (8.44) leads immediately to

u(x) + C_6 ≥ u(x_0), ∀x ∈ ∆_{x_0}. (8.45)
More generally, by a similar argument, one can show that (8.45) is true for any point x_0 in a small neighborhood of Γ. Furthermore, the intersection of the cone ∆_{x_0} with the set {x | R(x) ≥ δ_0/2} has positive measure, and the lower bound of the measure depends only on δ_0 and the C^1 norm of R. Now the a priori bound on the solutions is a consequence of (8.45) and an integral bound on u, which can be derived from Lemma 8.3.2.

This completes the proof of Theorem 8.3.1.
8.4 Method of Moving Spheres

8.4.1 The Background

Given a function K(x) on the two-dimensional standard sphere S^2, the well-known Nirenberg problem is to find conditions on K(x) so that it can be realized as the Gaussian curvature of some conformally related metric. This is equivalent to solving the following nonlinear elliptic equation
−Δu + 1 = K(x) e^{2u} (8.46)

on S^2, where Δ is the Laplacian operator associated with the standard metric.
On the higher-dimensional sphere S^n, a similar problem was raised by Kazdan and Warner:

Which functions R(x) can be realized as the scalar curvature of some conformally related metric?

The problem is equivalent to the existence of positive solutions of another elliptic equation,

−Δu + (n(n−2)/4) u = ((n−2)/(4(n−1))) R(x) u^{(n+2)/(n−2)}. (8.47)

For a unified description, on S^2 we let R(x) = 2K(x) be the scalar curvature; then (8.46) is equivalent to

−Δu + 2 = R(x) e^u. (8.48)
Both equations are so-called 'critical', in the sense that a lack of compactness occurs. Besides the obvious necessary condition that R(x) be positive somewhere, there are other well-known obstructions, found by Kazdan and Warner [KW1] and later generalized by Bourguignon and Ezin [BE]. The conditions are:

∫_{S^n} X(R) dA_g = 0, (8.49)

where dA_g is the volume element of the metric g = e^u g_o for n = 2 and g = u^{4/(n−2)} g_o for n ≥ 3, respectively. The vector field X in (8.49) is a conformal Killing field associated with the standard metric g_o on S^n. In the following, we call these necessary conditions Kazdan-Warner type conditions.

These conditions give rise to many examples of R(x) for which (8.48) and (8.47) have no solution. In particular, a monotone rotationally symmetric function R admits no solution.
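To see why a monotone rotationally symmetric R is obstructed (a standard computation, sketched here under our conventions), take the conformal Killing field X = ∇x_{n+1} generated by the coordinate function x_{n+1} = cos θ on S^n, with θ the distance from the north pole:

```latex
% X acts on R = R(\theta) by
X(R) = \nabla(\cos\theta)\cdot\nabla R = -\sin\theta\;R'(\theta),
\qquad
\int_{S^{n}} X(R)\,dA_{g} = -\int_{S^{n}} \sin\theta\;R'(\theta)\,dA_{g}.
% If R is monotone (and not constant), \sin\theta\,R'(\theta) has a fixed
% sign on 0 < \theta < \pi, so the integral cannot vanish, and the
% Kazdan-Warner condition (8.49) fails: no solution can exist.
```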
Then, for which R can one solve the equations? What are the necessary and sufficient conditions for the equations to have a solution? These are very interesting problems in geometry.
In recent years, various sufficient conditions were obtained by many authors (for example, see [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2] [CC] [CS] [ES] [Ha] [Ho] [KW1] [KW2] [Li1] [Li2] [Mo] and the references therein). However, there are still gaps between those sufficient conditions and the necessary ones. Then one may naturally ask: 'Are the necessary conditions of Kazdan-Warner type also sufficient?' This question had been open for many years. In [CL3], we gave a negative answer to it.
To find necessary and sufficient conditions, it is natural to begin with
functions having certain symmetries. A pioneering work in this direction is [Mo]
by Moser. He showed that, for even functions $R$ on $S^2$, the necessary and
sufficient condition for (8.48) to have a solution is that $R$ be positive somewhere.
Note that, in the class of even functions, the Kazdan-Warner condition (8.49) is satisfied
automatically. Thus one can interpret Moser's result as:

In the class of even functions, the Kazdan-Warner type conditions are the
necessary and sufficient conditions for (8.48) to have a solution.
A simple question was asked by Kazdan [Ka]:

What are the necessary and sufficient conditions for rotationally symmetric functions?

To be more precise, for rotationally symmetric functions $R = R(\theta)$, where
$\theta$ is the distance from the north pole, the Kazdan-Warner type conditions take
the form:
$$R > 0 \text{ somewhere, and } R' \text{ changes signs.} \tag{8.50}$$
In [XY], Xu and Yang found a family of rotationally symmetric functions
$R$ satisfying conditions (8.50) for which problem (8.48) has no rotationally
symmetric solution. Even though one did not know whether there were any
other nonsymmetric solutions for these functions $R$, this result is still very
interesting, because it suggests that (8.50) may not be a sufficient condition.
In our earlier paper [CL3], we presented a class of rotationally symmetric
functions satisfying (8.50) for which problem (8.48) has no solution at all, and
thus, for the first time, pointed out that the Kazdan-Warner type conditions
are not sufficient for (8.48) to have a solution. First, we proved the non-existence
of rotationally symmetric solutions:

Proposition 8.4.1 Let $R$ be rotationally symmetric. If
$$R \text{ is monotone in the region where } R > 0, \text{ and } R \not\equiv C, \tag{8.51}$$
then problems (8.48) and (8.47) have no rotationally symmetric solution.
This generalizes Xu and Yang's result, since their family of functions $R$
satisfies (8.51). We also cover the higher dimensional cases.

Although we believed that for all such functions $R$ there is no solution at
all, we were not able to prove it at that time. However, we could show this
for a family of such functions.

Proposition 8.4.2 There exists a family of functions $R$ satisfying the Kazdan-Warner
type conditions (8.50) for which problem (8.48) has no solution at all.

At that time, we were not able to obtain the counterpart of this result in
higher dimensions.
In [XY], Xu and Yang also proved:

Proposition 8.4.3 Let $R$ be rotationally symmetric. Assume that

i) $R$ is nondegenerate;

ii) $R'$ changes signs in the region where $R > 0$. \quad (8.52)

Then problem (8.48) has a solution.
The above results and many other existence results tend to lead people
to believe that what really counts is whether $R'$ changes signs in the region
where $R > 0$. And we conjectured in [CL3] that for rotationally symmetric
$R$, condition (8.52), instead of (8.50), would be the necessary and sufficient
condition for (8.48) or (8.47) to have a solution.
The main objective of this section is to present the proof of the necessity
part of the conjecture, which appeared in our later paper [CL4]. In [CL3], to
prove Proposition 8.4.2, we used the method of 'moving planes' to show that
the solutions of (8.48) are rotationally symmetric. This method only works
for a special family of functions satisfying (8.51), and it does not work in higher
dimensions. In [CL4], we introduced a new idea, which we call the method of 'moving
spheres'. It works in all dimensions and for all functions satisfying (8.51).
Instead of showing the symmetry of the solutions, we obtain a comparison formula
which leads to a direct contradiction. In an earlier work [P], a similar
method was used by P. Padilla to show the radial symmetry of the solutions
of some nonlinear Dirichlet problems on annuli.
The following are our main results.
Theorem 8.4.1 Let $R$ be continuous and rotationally symmetric. If
$$R \text{ is monotone in the region where } R > 0, \text{ and } R \not\equiv C, \tag{8.53}$$
then problems (8.48) and (8.47) have no solution at all.
We also generalize this result to a family of nonsymmetric functions.
Theorem 8.4.2 Let $R$ be a continuous function. Let $B$ be a geodesic ball
centered at the North Pole $x_{n+1} = 1$. Assume that

i) $R(x) \geq 0$ and $R$ is nondecreasing in the $x_{n+1}$ direction, for $x \in B$;

ii) $R(x) \leq 0$, for $x \in S^n \setminus B$.

Then problems (8.48) and (8.47) have no solution at all.
Combining our Theorem 8.4.1 with Xu and Yang's Proposition 8.4.3, one can
obtain a necessary and sufficient condition in the nondegenerate case:

Corollary 8.4.1 Let $R$ be rotationally symmetric and nondegenerate in the
sense that $R'' \neq 0$ whenever $R' = 0$. Then the necessary and sufficient condition
for (8.48) to have a solution is
$$R > 0 \text{ somewhere, and } R' \text{ changes signs in the region where } R > 0.$$
8.4.2 Necessary Conditions
In this subsection, we prove Theorem 8.4.1 and Theorem 8.4.2.
For convenience, we make a stereographic projection from $S^n$ to $R^n$. Then
it is equivalent to consider the following equations in Euclidean space $R^n$:
$$-\Delta u = R(r)\, e^{u}, \quad x \in R^2, \tag{8.54}$$
$$-\Delta u = R(r)\, u^{\tau}, \quad u > 0, \; x \in R^n, \; n \geq 3, \tag{8.55}$$
with appropriate asymptotic growth of the solutions at infinity:
$$u \sim -4 \ln |x| \text{ for } n = 2; \qquad u \sim C |x|^{2-n} \text{ for } n \geq 3, \tag{8.56}$$
for some $C > 0$. Here $\Delta$ is the Euclidean Laplacian operator, $r = |x|$, and
$\tau = \frac{n+2}{n-2}$ is the so-called critical Sobolev exponent. The function $R(r)$ is the
projection of the original $R$ in equations (8.48) and (8.47); this projection
does not change the monotonicity of the function. The function $R$ is also
bounded and continuous, and we assume these properties throughout the subsection.
For a radial function $R$ satisfying (8.53), there are three possibilities:

i) $R$ is nonpositive everywhere;

ii) $R$ ($\not\equiv C$) is nonnegative everywhere and monotone;

iii) $R > 0$ and nonincreasing for $r < r_o$, $R \leq 0$ for $r > r_o$; or vice versa.

The first two cases violate the Kazdan-Warner type conditions, hence there
is no solution. Thus, we only need to show that in the remaining case there
is also no solution. Without loss of generality, we may assume that
$$R(r) > 0, \; R'(r) \leq 0 \text{ for } r < 1; \qquad R(r) \leq 0 \text{ for } r \geq 1. \tag{8.57}$$
The proof of Theorem 8.4.1 is based on the following comparison formulas.

Lemma 8.4.1 Let $R$ be a continuous function satisfying (8.57). Let $u$ be a
solution of (8.54) and (8.56) for $n = 2$. Then
$$u(\lambda x) > u\!\left(\frac{\lambda x}{|x|^2}\right) - 4 \ln |x|, \quad \forall x \in B_1(0), \; 0 < \lambda \leq 1. \tag{8.58}$$

Lemma 8.4.2 Let $R$ be a continuous function satisfying (8.57). Let $u$ be a
solution of (8.55) and (8.56) for $n > 2$. Then
$$u(\lambda x) > |x|^{2-n}\, u\!\left(\frac{\lambda x}{|x|^2}\right), \quad \forall x \in B_1(0), \; 0 < \lambda \leq 1. \tag{8.59}$$
We first prove Lemma 8.4.2. A similar idea will work for Lemma 8.4.1.

Proof of Lemma 8.4.2. We use a new idea called the method of
'moving spheres'. Let $x \in B_1(0)$; then $\lambda x \in B_\lambda(0)$. The reflection point of $\lambda x$
about the sphere $\partial B_\lambda(0)$ is $\frac{\lambda x}{|x|^2}$. We compare the values of $u$ at the pairs
of points $\lambda x$ and $\frac{\lambda x}{|x|^2}$. In step 1, we start with the unit sphere and show that
(8.59) is true for $\lambda = 1$. This is done by the Kelvin transform and the strong
maximum principle. Then in step 2, we move (shrink) the sphere $\partial B_\lambda(0)$
towards the origin. We show that one can always shrink $\partial B_\lambda(0)$ a little bit
before reaching the origin. By doing so, we establish the inequality for all
$x \in B_1(0)$ and $\lambda \in (0, 1]$.
Step 1. In this step, we establish the inequality for $x \in B_1(0)$, $\lambda = 1$.
We show that
$$u(x) > |x|^{2-n}\, u\!\left(\frac{x}{|x|^2}\right), \quad \forall x \in B_1(0). \tag{8.60}$$
Let $v(x) = |x|^{2-n}\, u\!\left(\frac{x}{|x|^2}\right)$ be the Kelvin transform of $u$. Then it is easy to verify
that $v$ satisfies the equation
$$-\Delta v = R\!\left(\frac{1}{r}\right) v^{\tau} \tag{8.61}$$
and $v$ is regular.

It follows from (8.57) that $\Delta u < 0$ and $\Delta v \geq 0$ in $B_1(0)$. Thus $-\Delta (u - v) > 0$.
Since $u = v$ on $\partial B_1(0)$, applying the maximum principle we obtain
$$u > v \text{ in } B_1(0), \tag{8.62}$$
which is equivalent to (8.60).
Step 2. In this step, we move the sphere $\partial B_\lambda(0)$ towards $\lambda = 0$.
We show that the moving cannot be stopped until $\lambda$ reaches the origin. This
is done again by the Kelvin transform and the strong maximum principle.

Let $u_\lambda(x) = \lambda^{\frac{n}{2}-1}\, u(\lambda x)$. Then
$$-\Delta u_\lambda = R(\lambda r)\, u_\lambda^{\tau}(x). \tag{8.63}$$
Let $v_\lambda(x) = |x|^{2-n}\, u_\lambda\!\left(\frac{x}{|x|^2}\right)$ be the Kelvin transform of $u_\lambda$. Then it is easy
to verify that
$$-\Delta v_\lambda = R\!\left(\frac{\lambda}{r}\right) v_\lambda^{\tau}(x). \tag{8.64}$$
Let $w_\lambda = u_\lambda - v_\lambda$. Then by (8.63) and (8.64),
$$\Delta w_\lambda + R\!\left(\frac{\lambda}{r}\right)\psi_\lambda\, w_\lambda = \left[R\!\left(\frac{\lambda}{r}\right) - R(\lambda r)\right] u_\lambda^{\tau}, \tag{8.65}$$
where $\psi_\lambda$ is some function with values between $\tau u_\lambda^{\tau-1}$ and $\tau v_\lambda^{\tau-1}$. Taking into
account assumption (8.57), we have
$$R\!\left(\frac{\lambda}{r}\right) - R(\lambda r) \leq 0 \quad \text{for } r \leq 1, \; \lambda \leq 1.$$
It follows from (8.65) that
$$\Delta w_\lambda + C_\lambda(x)\, w_\lambda \leq 0, \tag{8.66}$$
where $C_\lambda(x)$ is a bounded function if $\lambda$ is bounded away from $0$.
It is easy to see that for any $\lambda$, strict inequality holds somewhere in (8.66).
Thus, applying the strong maximum principle, we know that the inequality
(8.59) is equivalent to
$$w_\lambda \geq 0. \tag{8.67}$$
From step 1, we know (8.67) is true for $(x, \lambda) \in B_1(0) \times \{1\}$. Now we decrease
$\lambda$. Suppose (8.67) does not hold for all $\lambda \in (0, 1]$, and let $\lambda_o > 0$ be the smallest
number such that the inequality is true for $(x, \lambda) \in B_1(0) \times [\lambda_o, 1]$. We will
derive a contradiction by showing that for $\lambda$ close to and less than $\lambda_o$, the
inequality is still true. In fact, we can apply the strong maximum principle
and then the Hopf lemma to (8.66) for $\lambda = \lambda_o$ to get
$$w_{\lambda_o} > 0 \text{ in } B_1 \quad \text{and} \quad \frac{\partial w_{\lambda_o}}{\partial r} < 0 \text{ on } \partial B_1. \tag{8.68}$$
These, combined with the fact that $w_\lambda \equiv 0$ on $\partial B_1$, imply that (8.67) holds for
$\lambda$ close to and less than $\lambda_o$.
Proof of Lemma 8.4.1. In step 1, we let
$$v(x) = u\!\left(\frac{x}{|x|^2}\right) - 4 \ln |x|.$$
In step 2, let
$$u_\lambda(x) = u(\lambda x) + 2 \ln \lambda, \qquad v_\lambda(x) = u_\lambda\!\left(\frac{x}{|x|^2}\right) - 4 \ln |x|.$$
Then, arguing as in the proof of Lemma 8.4.2, we derive the conclusion of
Lemma 8.4.1.
Now we are ready to prove Theorem 8.4.1.

Proof of Theorem 8.4.1. Taking $\lambda$ to $0$ in (8.58), we get $\ln |x| \geq 0$
for $|x| < 1$, which is impossible. Letting $\lambda \to 0$ in (8.59) and using the fact
that $u(0) > 0$, we again obtain a contradiction. This completes the proof of
Theorem 8.4.1.
Proof of Theorem 8.4.2. Noting that in the proof of Theorem
8.4.1 we only compare the values of the solutions along the same radial direction,
one can easily adapt the argument there to derive the conclusion of
Theorem 8.4.2.
8.5 Method of Moving Planes in Integral Forms and
Symmetry of Solutions for an Integral Equation
Let $R^n$ be the $n$-dimensional Euclidean space, and let $\alpha$ be a real number
satisfying $0 < \alpha < n$. Consider the integral equation
$$u(x) = \int_{R^n} \frac{1}{|x-y|^{n-\alpha}}\, u(y)^{(n+\alpha)/(n-\alpha)}\, dy. \tag{8.69}$$
It arises as an Euler-Lagrange equation for a functional under a constraint
in the context of the Hardy-Littlewood-Sobolev inequalities. In [L], Lieb classified
the maximizers of the functional, and thus obtained the best constant
in the HLS inequalities. He then posed the classification of all the critical
points of the functional, the solutions of the integral equation (8.69), as an
open problem.

This integral equation is also closely related to the following family of
semilinear partial differential equations
$$(-\Delta)^{\alpha/2} u = u^{(n+\alpha)/(n-\alpha)}. \tag{8.70}$$
In the special case $n \geq 3$ and $\alpha = 2$, it becomes
$$-\Delta u = u^{(n+2)/(n-2)}. \tag{8.71}$$
As we mentioned in Section 6, solutions to this equation were studied by
Gidas, Ni, and Nirenberg [GNN]; Caffarelli, Gidas, and Spruck [CGS]; Chen
and Li [CL]; and Li [Li]. Recently, Wei and Xu [WX] generalized this result
to the solutions of (8.70) with $\alpha$ being any even number between $0$ and $n$.

Apparently, for other real values of $\alpha$ between $0$ and $n$, equation (8.70) is
also of practical interest and importance. For instance, it arises as the Euler-Lagrange
equation of the functional
$$I(u) = \int_{R^n} |(-\Delta)^{\frac{\alpha}{4}} u|^2\, dx \Big/ \left( \int_{R^n} |u|^{\frac{2n}{n-\alpha}}\, dx \right)^{\frac{n-\alpha}{n}}.$$
The classification of the solutions would provide the best constant in the
inequality of the critical Sobolev imbedding from $H^{\frac{\alpha}{2}}(R^n)$ to $L^{\frac{2n}{n-\alpha}}(R^n)$:
$$\left( \int_{R^n} |u|^{\frac{2n}{n-\alpha}}\, dx \right)^{\frac{n-\alpha}{n}} \leq C \int_{R^n} |(-\Delta)^{\frac{\alpha}{4}} u|^2\, dx.$$
As usual, here $(-\Delta)^{\alpha/2}$ is defined by the Fourier transform. We can show
(see [CLO]) that all the solutions of the partial differential equation (8.70) satisfy our
integral equation (8.69), and vice versa. Therefore, to classify the solutions
of this family of partial differential equations, we only need to work on the
integral equation (8.69).
In order for either the Hardy-Littlewood-Sobolev inequality or the above mentioned
critical Sobolev imbedding to make sense, $u$ must be in $L^{\frac{2n}{n-\alpha}}(R^n)$. Under
this assumption, we can show (see [CLO]) that a positive solution $u$ is in
fact bounded, and therefore possesses higher regularity. Furthermore, by using
the method of moving planes, we can show that if a solution is locally $L^{\frac{2n}{n-\alpha}}$,
then it is in $L^{\frac{2n}{n-\alpha}}(R^n)$. Hence, in the following, we call a solution $u$ regular if
it is locally $L^{\frac{2n}{n-\alpha}}$. It is interesting to notice that if this condition is violated,
then a solution may not be bounded. A simple example is $u = \frac{1}{|x|^{(n-\alpha)/2}}$,
which is a singular solution. We studied such solutions in [CLO1].
We will use the method of moving planes to prove

Theorem 8.5.1 Every positive regular solution $u(x)$ of (8.69) is radially
symmetric and decreasing about some point $x_o$, and therefore assumes the form
$$c \left( \frac{t}{t^2 + |x - x_o|^2} \right)^{(n-\alpha)/2}, \tag{8.72}$$
with some positive constants $c$ and $t$.
To better illustrate the idea, we will first prove the Theorem under the stronger
assumption that $u \in L^{\frac{2n}{n-\alpha}}(R^n)$. Then we will show how the idea of this proof
can be extended under the weaker condition that $u$ is only locally in $L^{\frac{2n}{n-\alpha}}$.
For a given real number $\lambda$, define
$$\Sigma_\lambda = \{ x = (x_1, \ldots, x_n) \mid x_1 \leq \lambda \},$$
and let $x^\lambda = (2\lambda - x_1, x_2, \ldots, x_n)$ and $u_\lambda(x) = u(x^\lambda)$. (Again, see the previous
Figure 1.)
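The identity $|x - y^\lambda| = |x^\lambda - y|$, used repeatedly below, is just the statement that reflection about the plane $T_\lambda = \{x_1 = \lambda\}$ is an isometry. A quick numerical sketch (plain Python, for illustration only):

```python
import random

def reflect(x, lam):
    """Reflection of x about the plane x_1 = lam."""
    return [2 * lam - x[0]] + x[1:]

def dist(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

random.seed(1)
for _ in range(1000):
    lam = random.uniform(-5, 5)
    x = [random.uniform(-10, 10) for _ in range(4)]
    y = [random.uniform(-10, 10) for _ in range(4)]
    # |x - y^lam| = |x^lam - y|: reflection is an isometry
    assert abs(dist(x, reflect(y, lam)) - dist(reflect(x, lam), y)) < 1e-9
print("reflection isometry verified")
```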
Lemma 8.5.1 For any solution $u(x)$ of (8.69), we have
$$u(x) - u_\lambda(x) = \int_{\Sigma_\lambda} \left( \frac{1}{|x-y|^{n-\alpha}} - \frac{1}{|x^\lambda - y|^{n-\alpha}} \right) \left( u(y)^{\frac{n+\alpha}{n-\alpha}} - u_\lambda(y)^{\frac{n+\alpha}{n-\alpha}} \right) dy. \tag{8.73}$$
It is also true for $v(x) = \frac{1}{|x|^{n-\alpha}}\, u\!\left(\frac{x}{|x|^2}\right)$, the Kelvin type transform of $u(x)$,
for any $x \neq 0$.
Proof. Since $|x - y^\lambda| = |x^\lambda - y|$, we have
$$u(x) = \int_{\Sigma_\lambda} \frac{1}{|x-y|^{n-\alpha}}\, u(y)^{\frac{n+\alpha}{n-\alpha}}\, dy + \int_{\Sigma_\lambda} \frac{1}{|x^\lambda - y|^{n-\alpha}}\, u_\lambda(y)^{\frac{n+\alpha}{n-\alpha}}\, dy, \quad \text{and}$$
$$u(x^\lambda) = \int_{\Sigma_\lambda} \frac{1}{|x^\lambda - y|^{n-\alpha}}\, u(y)^{\frac{n+\alpha}{n-\alpha}}\, dy + \int_{\Sigma_\lambda} \frac{1}{|x-y|^{n-\alpha}}\, u_\lambda(y)^{\frac{n+\alpha}{n-\alpha}}\, dy.$$
This implies (8.73). The conclusion for $v(x)$ follows similarly, since it satisfies
the same integral equation (8.69) for $x \neq 0$.
Proof of Theorem 8.5.1 (under the stronger assumption $u \in L^{\frac{2n}{n-\alpha}}(R^n)$).

Let $w_\lambda(x) = u_\lambda(x) - u(x)$. As in Section 6, we will first show that, for $\lambda$
sufficiently negative,
$$w_\lambda(x) \geq 0, \quad \forall x \in \Sigma_\lambda. \tag{8.74}$$
Then we can start moving the plane $T_\lambda = \partial \Sigma_\lambda$ from near $-\infty$ to the right,
as long as inequality (8.74) holds. Let
$$\lambda_o = \sup \{ \lambda \mid w_\lambda(x) \geq 0, \; \forall x \in \Sigma_\lambda \}.$$
We will show that
$$\lambda_o < \infty, \quad \text{and} \quad w_{\lambda_o}(x) \equiv 0.$$
Unlike partial differential equations, here we do not have any differential
equations for $w_\lambda$. Instead, we will introduce a new idea: the method of moving
planes in integral forms. We will exploit some global properties of the integral
equations.
Step 1. Start moving the plane from near $-\infty$.

Define
$$\Sigma_\lambda^- = \{ x \in \Sigma_\lambda \mid w_\lambda(x) < 0 \}. \tag{8.75}$$
We show that for sufficiently negative values of $\lambda$, $\Sigma_\lambda^-$ must be empty. By
Lemma 8.5.1, it is easy to verify that
$$u(x) - u_\lambda(x) \leq \int_{\Sigma_\lambda^-} \left( \frac{1}{|x-y|^{n-\alpha}} - \frac{1}{|x^\lambda - y|^{n-\alpha}} \right) [u^\tau(y) - u_\lambda^\tau(y)]\, dy$$
$$\leq \int_{\Sigma_\lambda^-} \frac{1}{|x-y|^{n-\alpha}}\, [u^\tau(y) - u_\lambda^\tau(y)]\, dy$$
$$= \tau \int_{\Sigma_\lambda^-} \frac{1}{|x-y|^{n-\alpha}}\, \psi_\lambda^{\tau-1}(y)\, [u(y) - u_\lambda(y)]\, dy$$
$$\leq \tau \int_{\Sigma_\lambda^-} \frac{1}{|x-y|^{n-\alpha}}\, u^{\tau-1}(y)\, [u(y) - u_\lambda(y)]\, dy,$$
where $\psi_\lambda(x)$ is valued between $u_\lambda(x)$ and $u(x)$ by the Mean Value Theorem,
and since $u_\lambda(x) < u(x)$ on $\Sigma_\lambda^-$, we have $\psi_\lambda(x) \leq u(x)$.
We first apply the classical Hardy-Littlewood-Sobolev inequality (in its equivalent
form) to the above to obtain, for any $q > \frac{n}{n-\alpha}$:
$$\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} \leq C\, \| u^{\tau-1} w_\lambda \|_{L^{nq/(n+\alpha q)}(\Sigma_\lambda^-)} = C \left\{ \int_{\Sigma_\lambda^-} |u^{\tau-1}(x)|^{\frac{nq}{n+\alpha q}}\, |w_\lambda(x)|^{\frac{nq}{n+\alpha q}}\, dx \right\}^{\frac{n+\alpha q}{nq}}.$$
Then we use the Hölder inequality on the last integral above. First choose
$$s = \frac{n + \alpha q}{n} \quad \text{so that} \quad \frac{nq}{n + \alpha q}\, s = q,$$
then choose
$$r = \frac{n + \alpha q}{\alpha q}$$
to ensure
$$\frac{1}{r} + \frac{1}{s} = 1.$$
We thus arrive at
$$\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} \leq C \left\{ \int_{\Sigma_\lambda^-} u^{\tau+1}(y)\, dy \right\}^{\frac{\alpha}{n}} \| w_\lambda \|_{L^q(\Sigma_\lambda^-)} \leq C \left\{ \int_{\Sigma_\lambda} u^{\tau+1}(y)\, dy \right\}^{\frac{\alpha}{n}} \| w_\lambda \|_{L^q(\Sigma_\lambda^-)}. \tag{8.76}$$
By the condition that $u \in L^{\tau+1}(R^n)$, we can choose $N$ sufficiently large such
that for $\lambda \leq -N$, we have
$$C \left\{ \int_{\Sigma_\lambda} u^{\tau+1}(y)\, dy \right\}^{\frac{\alpha}{n}} \leq \frac{1}{2}.$$
Now (8.76) implies that $\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} = 0$, and therefore $\Sigma_\lambda^-$ must be of measure
zero, and hence empty by the continuity of $w_\lambda(x)$. This verifies (8.74) and
hence completes Step 1. The reader can see that here we have actually used
Corollary 7.5.1, with $\Omega = \Sigma_\lambda$, a region near infinity.
Step 2. We now move the plane $T_\lambda = \{ x \mid x_1 = \lambda \}$ to the right as long as
(8.74) holds. Let $\lambda_o$ be as defined before; then we must have $\lambda_o < \infty$. This can
be seen by applying a similar argument as in Step 1 from $\lambda$ near $+\infty$.

Now we show that
$$w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}.$$
Otherwise, we have $w_{\lambda_o}(x) \geq 0$ but $w_{\lambda_o}(x) \not\equiv 0$ on $\Sigma_{\lambda_o}$; we show that the
plane can then be moved further to the right. More precisely, there exists an $\epsilon$,
depending on $n$, $\alpha$, and the solution $u(x)$ itself, such that $w_\lambda(x) \geq 0$ on $\Sigma_\lambda$ for
all $\lambda$ in $[\lambda_o, \lambda_o + \epsilon)$.
By Lemma 8.5.1, we have in fact $w_{\lambda_o}(x) > 0$ in the interior of $\Sigma_{\lambda_o}$. Let
$$\Sigma_{\lambda_o}^- = \{ x \in \Sigma_{\lambda_o} \mid w_{\lambda_o}(x) \leq 0 \}.$$
Then obviously, $\Sigma_{\lambda_o}^-$ has measure zero, and $\lim_{\lambda \to \lambda_o} \Sigma_\lambda^- \subset \Sigma_{\lambda_o}^-$.
From the first inequality of (8.76), we deduce
$$\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} \leq C \left\{ \int_{\Sigma_\lambda^-} u^{\tau+1}(y)\, dy \right\}^{\frac{\alpha}{n}} \| w_\lambda \|_{L^q(\Sigma_\lambda^-)}. \tag{8.77}$$
The condition $u \in L^{\tau+1}(R^n)$ ensures that one can choose $\epsilon$ sufficiently small,
so that for all $\lambda$ in $[\lambda_o, \lambda_o + \epsilon)$,
$$C \left\{ \int_{\Sigma_\lambda^-} u^{\tau+1}(y)\, dy \right\}^{\frac{\alpha}{n}} \leq \frac{1}{2}.$$
Now by (8.77), we have $\| w_\lambda \|_{L^q(\Sigma_\lambda^-)} = 0$, and therefore $\Sigma_\lambda^-$ must be empty.
Here we have actually applied Corollary 7.5.1 with $\Omega = \Sigma_\lambda^-$. For $\lambda$ sufficiently
close to $\lambda_o$, $\Omega$ is contained in a very narrow region.
Proof of Theorem 8.5.1 (under the weaker assumption that $u$ is only
locally in $L^{\tau+1}$).

Outline. Since we do not assume any integrability condition on $u(x)$ near
infinity, we are not able to carry out the method of moving planes directly on
$u(x)$. To overcome this difficulty, we consider $v(x)$, the Kelvin type transform
of $u(x)$. It is easy to verify that $v(x)$ satisfies the same equation (8.69), but
has a possible singularity at the origin, where we need to pay some particular
attention. Since $u$ is locally $L^{\tau+1}$, it is easy to see that $v(x)$ has no singularity
at infinity; i.e., for any domain $\Omega$ that is a positive distance away from the origin,
$$\int_\Omega v^{\tau+1}(y)\, dy < \infty. \tag{8.78}$$
Let $\lambda$ be a real number and let the moving plane be $x_1 = \lambda$. We compare
$v(x)$ and $v_\lambda(x)$ on $\Sigma_\lambda \setminus \{0\}$. The proof consists of three steps. In step 1, we
show that there exists an $N > 0$ such that for $\lambda \leq -N$, we have
$$v(x) \leq v_\lambda(x), \quad \forall x \in \Sigma_\lambda \setminus \{0\}. \tag{8.79}$$
Thus we can start moving the plane continuously from $\lambda \leq -N$ to the right as
long as (8.79) holds. If the plane stops at $x_1 = \lambda_o$ for some $\lambda_o < 0$, then $v(x)$
must be symmetric and monotone about the plane $x_1 = \lambda_o$. This implies that
$v(x)$ has no singularity at the origin and $u(x)$ has no singularity at infinity.
In this case, we can carry out the moving of planes on $u(x)$ directly to obtain
the radial symmetry and monotonicity. Otherwise, we can move the plane all
the way to $x_1 = 0$, which is shown in step 2. Since the direction of $x_1$ can
be chosen arbitrarily, we deduce that $v(x)$ must be radially symmetric and
decreasing about the origin. We will show in step 3 that, in any case, $u(x)$ cannot
have a singularity at infinity, and hence both $u$ and $v$ are in $L^{\tau+1}(R^n)$.
Hence we only show Step 3 here.
Step 3. Finally, we showed that u has the desired asymptotic behavior at
inﬁnity, i.e., it satisﬁes
u(x) = O(
1
[x[
n−2
).
Suppose the contrary. Let $x^1$ and $x^2$ be any two points in $R^n$ and let $x_o$ be
the midpoint of the line segment $\overline{x^1 x^2}$. Consider the Kelvin type transform
centered at $x_o$:
$$v(x) = \frac{1}{|x - x_o|^{n-\alpha}}\, u\!\left( \frac{x - x_o}{|x - x_o|^2} \right).$$
Then $v(x)$ has a singularity at $x_o$. Carrying out the arguments as in Steps 1 and 2,
we conclude that $v(x)$ must be radially symmetric about $x_o$, and in particular,
$u(x^1) = u(x^2)$. Since $x^1$ and $x^2$ are any two points in $R^n$, $u$ must be constant.
This is impossible. Similarly, the continuity and higher regularity of $u$ follow
from the standard theory of singular integral operators. This completes the proof
of the Theorem.
A
Appendices
A.1 Notations
A.1.1 Algebraic and Geometric Notations
1. $A = (a_{ij})$ is an $m \times n$ matrix with $ij$-th entry $a_{ij}$.

2. $\det(a_{ij})$ is the determinant of the matrix $(a_{ij})$.

3. $A^T$ = transpose of the matrix $A$.

4. $R^n$ = $n$-dimensional real Euclidean space with a typical point $x = (x_1, \ldots, x_n)$.

5. $\Omega$ usually denotes an open set in $R^n$.

6. $\bar{\Omega}$ is the closure of $\Omega$.

7. $\partial \Omega$ is the boundary of $\Omega$.

8. For $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ in $R^n$,
$$x \cdot y = \sum_{i=1}^n x_i y_i, \qquad |x| = \left( \sum_{i=1}^n x_i^2 \right)^{\frac{1}{2}}.$$

9. $|x - y|$ is the distance between the two points $x$ and $y$ in $R^n$.

10. $B_r(x_o) = \{ x \in R^n \mid |x - x_o| < r \}$ is the ball of radius $r$ centered at the point $x_o$ in $R^n$.

11. $S^n$ is the sphere of radius one centered at the origin in $R^{n+1}$, i.e. the boundary of the ball of radius one centered at the origin:
$$B_1(0) = \{ x \in R^{n+1} \mid |x| < 1 \}.$$
A.1.2 Notations for Functions and Derivatives
1. For a function $u : \Omega \subset R^n \to R^1$, we write
$$u(x) = u(x_1, \ldots, x_n), \quad x \in \Omega.$$
We say $u$ is smooth if it is infinitely differentiable.
2. For two functions $u$ and $v$, $u \equiv v$ means $u$ is identically equal to $v$.

3. We set
$$u := v$$
to define $u$ as equaling $v$.

4. We usually write $u_{x_i}$ for $\frac{\partial u}{\partial x_i}$, the first partial derivative of $u$ with respect
to $x_i$. Similarly,
$$u_{x_i x_j} = \frac{\partial^2 u}{\partial x_i \partial x_j}, \qquad u_{x_i x_j x_k} = \frac{\partial^3 u}{\partial x_i \partial x_j \partial x_k}.$$
5.
$$D^\alpha u(x) := \frac{\partial^{|\alpha|} u(x)}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}},$$
where $\alpha = (\alpha_1, \ldots, \alpha_n)$ is a multi-index of order
$$|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n.$$
6. For a nonnegative integer $k$,
$$D^k u(x) := \{ D^\alpha u(x) \mid |\alpha| = k \},$$
the set of all partial derivatives of order $k$.
If $k = 1$, we regard the elements of $Du$ as being arranged in a vector
$$Du := (u_{x_1}, \ldots, u_{x_n}) = \nabla u,$$
the gradient vector.
If $k = 2$, we regard the elements of $D^2 u$ as being arranged in a matrix
$$D^2 u := \begin{pmatrix} \frac{\partial^2 u}{\partial x_1 \partial x_1} & \cdots & \frac{\partial^2 u}{\partial x_1 \partial x_n} \\ \vdots & & \vdots \\ \frac{\partial^2 u}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 u}{\partial x_n \partial x_n} \end{pmatrix},$$
the Hessian matrix.
7. The Laplacian
$$\Delta u := \sum_{i=1}^n u_{x_i x_i}$$
is the trace of $D^2 u$.

8.
$$|D^k u| := \left( \sum_{|\alpha| = k} |D^\alpha u|^2 \right)^{1/2}.$$
A.1.3 Function Spaces
1.
$$C(\Omega) := \{ u : \Omega \to R^1 \mid u \text{ is continuous} \}$$

2.
$$C(\bar{\Omega}) := \{ u \in C(\Omega) \mid u \text{ is uniformly continuous} \}$$

3.
$$\| u \|_{C(\bar{\Omega})} := \sup_{x \in \Omega} |u(x)|$$

4.
$$C^k(\Omega) := \{ u : \Omega \to R^1 \mid u \text{ is } k\text{-times continuously differentiable} \}$$

5.
$$C^k(\bar{\Omega}) := \{ u \in C^k(\Omega) \mid D^\alpha u \text{ is uniformly continuous for all } |\alpha| \leq k \}$$

6.
$$\| u \|_{C^k(\bar{\Omega})} := \sum_{|\alpha| \leq k} \| D^\alpha u \|_{C(\bar{\Omega})}$$

7. Assume $\Omega$ is an open subset of $R^n$ and $0 < \alpha \leq 1$. If there exists a
constant $C$ such that
$$|u(x) - u(y)| \leq C |x - y|^\alpha, \quad \forall x, y \in \Omega,$$
then we say that $u$ is Hölder continuous in $\Omega$.

8. The $\alpha$-th Hölder seminorm of $u$ is
$$[u]_{C^{0,\alpha}(\bar{\Omega})} := \sup_{x \neq y} \frac{|u(x) - u(y)|}{|x - y|^\alpha}.$$

9. The $\alpha$-th Hölder norm is
$$\| u \|_{C^{0,\alpha}(\bar{\Omega})} := \| u \|_{C(\bar{\Omega})} + [u]_{C^{0,\alpha}(\bar{\Omega})}.$$

10. The Hölder space $C^{k,\alpha}(\bar{\Omega})$ consists of all functions $u \in C^k(\bar{\Omega})$ for which
the norm
$$\| u \|_{C^{k,\alpha}(\bar{\Omega})} := \sum_{|\beta| \leq k} \| D^\beta u \|_{C(\bar{\Omega})} + \sum_{|\beta| = k} [D^\beta u]_{C^{0,\alpha}(\bar{\Omega})}$$
is finite.

11. $C^\infty(\Omega)$ is the set of all infinitely differentiable functions on $\Omega$.

12. $C_0^k(\Omega)$ denotes the subset of functions in $C^k(\Omega)$ with compact support in $\Omega$.
13. $L^p(\Omega)$ is the set of Lebesgue measurable functions $u$ on $\Omega$ with
$$\| u \|_{L^p(\Omega)} := \left( \int_\Omega |u(x)|^p\, dx \right)^{1/p} < \infty.$$

14. $L^\infty(\Omega)$ is the set of Lebesgue measurable functions $u$ with
$$\| u \|_{L^\infty(\Omega)} := \operatorname{ess\,sup}_\Omega |u| < \infty.$$

15. $L^p_{loc}(\Omega)$ denotes the set of Lebesgue measurable functions $u$ on $\Omega$ whose
$p$-th power is locally integrable, i.e. for any compact subset $K \subset \Omega$,
$$\int_K |u(x)|^p\, dx < \infty.$$

16. $W^{k,p}(\Omega)$ denotes the Sobolev space consisting of functions $u$ with
$$\| u \|_{W^{k,p}(\Omega)} := \left( \sum_{0 \leq |\alpha| \leq k} \int_\Omega |D^\alpha u|^p\, dx \right)^{1/p} < \infty.$$

17. $H^{k,p}(\Omega)$ is the completion of $C^\infty(\Omega)$ under the norm $\| u \|_{W^{k,p}(\Omega)}$.

18. For any open subset $\Omega \subset R^n$, any nonnegative integer $k$, and any $p \geq 1$,
it is shown (see Adams [Ad]) that
$$H^{k,p}(\Omega) = W^{k,p}(\Omega).$$

19. For $p = 2$, we usually write
$$H^k(\Omega) = H^{k,2}(\Omega).$$
A.1.4 Notations for Estimates
1. In the process of estimates, we use C to denote various constants that can
be explicitly computed in terms of known quantities. The value of C may
change from line to line in a given computation.
2. We write
$$u = O(v) \text{ as } x \to x_o,$$
provided there exists a constant $C$ such that
$$|u(x)| \leq C |v(x)|$$
for $x$ sufficiently close to $x_o$.

3. We write
$$u = o(v) \text{ as } x \to x_o,$$
provided
$$\frac{|u(x)|}{|v(x)|} \to 0 \text{ as } x \to x_o.$$
A.1.5 Notations from Riemannian Geometry
1. $M$ denotes a differentiable manifold.

2. $T_p M$ is the tangent space of $M$ at the point $p$.

3. $T^*_p M$ is the cotangent space of $M$ at the point $p$.

4. A Riemannian metric $g$ can be expressed as a matrix
$$g = \begin{pmatrix} g_{11} & \cdots & g_{1n} \\ \vdots & & \vdots \\ g_{n1} & \cdots & g_{nn} \end{pmatrix},$$
where in local coordinates
$$g_{ij}(p) = \left\langle \frac{\partial}{\partial x_i}\Big|_p,\; \frac{\partial}{\partial x_j}\Big|_p \right\rangle_p.$$

5. $(M, g)$ denotes an $n$-dimensional manifold $M$ with Riemannian metric $g$.
In local coordinates, the length element is
$$ds := \left( \sum_{i,j=1}^n g_{ij}\, dx_i\, dx_j \right)^{1/2},$$
and the volume element is
$$dV := \sqrt{\det(g_{ij})}\, dx_1 \cdots dx_n.$$

6. $K(x)$ denotes the Gaussian curvature, and $R(x)$ the scalar curvature, of a
manifold $M$ at the point $x$.

7. For two differentiable vector fields $X$ and $Y$ on $M$, the Lie bracket is
$$[X, Y] := XY - YX.$$

8. $D$ denotes an affine connection on a differentiable manifold $M$.

9. A vector field $V(t)$ along a curve $c(t) \in M$ is parallel if
$$\frac{DV}{dt} = 0.$$

10. Given a Riemannian manifold $M$ with metric $\langle \cdot, \cdot \rangle$, there exists a unique
connection $D$ on $M$ that is compatible with the metric $\langle \cdot, \cdot \rangle$, i.e.
$$\langle X, Y \rangle_{c(t)} = \text{constant},$$
for any pair of parallel vector fields $X$ and $Y$ along any smooth curve $c(t)$.

11. In local coordinates $(x_1, \ldots, x_n)$, this unique Riemannian connection
can be expressed as
$$D_{\frac{\partial}{\partial x_i}} \frac{\partial}{\partial x_j} = \sum_k \Gamma^k_{ij} \frac{\partial}{\partial x_k},$$
where the $\Gamma^k_{ij}$ are the Christoffel symbols.

12. We denote $\nabla_i = D_{\frac{\partial}{\partial x_i}}$.

13. The summation convention. We write, for instance,
$$X_i \frac{\partial}{\partial x_i} := \sum_i X_i \frac{\partial}{\partial x_i}, \qquad \Gamma^j_{ik}\, dx_k := \sum_k \Gamma^j_{ik}\, dx_k,$$
and so on.
14. A smooth function $f(x)$ on $M$ is a $(0,0)$-tensor. $\nabla f$ is a $(1,0)$-tensor,
defined by
$$\nabla f = \nabla_i f\, dx^i = \frac{\partial f}{\partial x_i}\, dx^i.$$
$\nabla^2 f$ is a $(2,0)$-tensor:
$$\nabla^2 f = \nabla_j \frac{\partial f}{\partial x_i}\, dx^i \otimes dx^j = \left( \frac{\partial^2 f}{\partial x_i \partial x_j} - \Gamma^k_{ij} \frac{\partial f}{\partial x_k} \right) dx^i \otimes dx^j.$$
In the Riemannian context, $\nabla^2 f$ is called the Hessian of $f$ and denoted
by $\mathrm{Hess}(f)$. Its $ij$-th component is
$$(\nabla^2 f)_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j} - \Gamma^k_{ij} \frac{\partial f}{\partial x_k}.$$

15. The trace of the Hessian matrix $((\nabla^2 f)_{ij})$ is defined to be the Laplace-Beltrami
operator $\Delta$:
$$\Delta = \frac{1}{\sqrt{|g|}} \sum_{i,j=1}^n \frac{\partial}{\partial x_i} \left( \sqrt{|g|}\, g^{ij} \frac{\partial}{\partial x_j} \right),$$
where $|g|$ is the determinant of $(g_{ij})$.
16. We abbreviate:
$$\int_M \nabla u\, \nabla v\, dV_g := \int_M \langle \nabla u, \nabla v \rangle\, dV_g,$$
where
$$\langle \nabla u, \nabla v \rangle = g^{ij}\, \nabla_i u\, \nabla_j v = g^{ij}\, \frac{\partial u}{\partial x_i} \frac{\partial v}{\partial x_j}$$
is the scalar product associated with $g$ for 1-forms.
17. The norm of the $k$-th covariant derivative of $u$, $|\nabla^k u|$, is defined in a local
coordinate chart by
$$|\nabla^k u|^2 = g^{i_1 j_1} \cdots g^{i_k j_k}\, (\nabla^k u)_{i_1 \cdots i_k}\, (\nabla^k u)_{j_1 \cdots j_k}.$$
In particular, we have
$$|\nabla^1 u|^2 = |\nabla u|^2 = g^{ij}\, \nabla_i u\, \nabla_j u = g^{ij}\, \frac{\partial u}{\partial x_i} \frac{\partial u}{\partial x_j}.$$

18. $\mathcal{C}^p_k(M)$ is the space of $C^\infty$ functions on $M$ such that
$$\int_M |\nabla^j u|^p\, dV_g < \infty, \quad \forall j = 0, 1, \ldots, k.$$

19. The Sobolev space $H^{k,p}(M)$ is the completion of $\mathcal{C}^p_k(M)$ with respect to
the norm
$$\| u \|_{H^{k,p}(M)} = \sum_{j=0}^k \left( \int_M |\nabla^j u|^p\, dV_g \right)^{1/p}.$$
A.2 Inequalities
Young's Inequality. Assume $1 < p, q < \infty$ and $\frac{1}{p} + \frac{1}{q} = 1$. Then for any $a, b > 0$,
it holds that
$$ab \leq \frac{a^p}{p} + \frac{b^q}{q}. \tag{A.1}$$

Proof. We will use the convexity of the exponential function $e^x$. Let $x_1 < x_2$
be any two points in $R^1$. Then, since $\frac{1}{p} + \frac{1}{q} = 1$, the point $\frac{1}{p} x_1 + \frac{1}{q} x_2$ lies between
$x_1$ and $x_2$. By the convexity of $e^x$, we have
$$e^{\frac{1}{p} x_1 + \frac{1}{q} x_2} \leq \frac{1}{p}\, e^{x_1} + \frac{1}{q}\, e^{x_2}. \tag{A.2}$$
Taking $x_1 = \ln a^p$ and $x_2 = \ln b^q$ in (A.2), we arrive at inequality (A.1). $\Box$
Cauchy-Schwarz Inequality.
$$|x \cdot y| \leq |x|\, |y|, \quad \forall x, y \in R^n.$$

Proof. For any real number $\lambda > 0$, we have
$$0 \leq |x \pm \lambda y|^2 = |x|^2 \pm 2\lambda\, x \cdot y + \lambda^2 |y|^2.$$
It follows that
$$\pm\, x \cdot y \leq \frac{1}{2\lambda} |x|^2 + \frac{\lambda}{2} |y|^2.$$
For $y \neq 0$, the minimum value of the right hand side is attained at $\lambda = \frac{|x|}{|y|}$. At this value
of $\lambda$, we arrive at the desired inequality. $\Box$
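A numerical sketch of this proof (illustration only): for random vectors, the inequality holds, and the $\lambda$-dependent bound evaluated at the optimal $\lambda = |x|/|y|$ collapses exactly to $|x||y|$.

```python
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

random.seed(2)
for _ in range(200):
    x = [random.uniform(-3, 3) for _ in range(5)]
    y = [random.uniform(-3, 3) for _ in range(5)]
    nx, ny = dot(x, x) ** 0.5, dot(y, y) ** 0.5
    assert abs(dot(x, y)) <= nx * ny + 1e-12
    # the bound |x|^2/(2 lam) + lam |y|^2 / 2 at the optimal lam = |x|/|y|
    if ny > 1e-9 and nx > 1e-9:
        lam = nx / ny
        assert abs(nx * nx / (2 * lam) + lam * ny * ny / 2 - nx * ny) < 1e-12
print("Cauchy-Schwarz verified")
```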
Hölder's Inequality. Let $\Omega$ be a domain in $R^n$. Assume that $u \in L^p(\Omega)$
and $v \in L^q(\Omega)$ with $1 \leq p, q \leq \infty$ and $\frac{1}{p} + \frac{1}{q} = 1$. Then
$$\int_\Omega |uv|\, dx \leq \| u \|_{L^p(\Omega)}\, \| v \|_{L^q(\Omega)}. \tag{A.3}$$

Proof. Let $f = \frac{u}{\| u \|_{L^p(\Omega)}}$ and $g = \frac{v}{\| v \|_{L^q(\Omega)}}$. Then obviously $\| f \|_{L^p(\Omega)} = 1 = \| g \|_{L^q(\Omega)}$. It follows from Young's inequality (A.1) that
$$\int_\Omega |fg|\, dx \leq \frac{1}{p} \int_\Omega |f|^p\, dx + \frac{1}{q} \int_\Omega |g|^q\, dx = \frac{1}{p} + \frac{1}{q} = 1.$$
This implies (A.3). $\Box$
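Since (A.3) holds for any measure, it also holds exactly for the weighted sums of a Riemann discretization; the sketch below (not from the text; $u$, $v$ are arbitrary sample functions on $[0,1]$) checks this for several exponents.

```python
# Riemann-sum check of (A.3) on [0, 1]: the discrete (weighted) Hoelder
# inequality holds exactly, so the sums below must satisfy it for every p.
N = 1000
dx = 1.0 / N
xs = [(i + 0.5) * dx for i in range(N)]
u = [x ** 2 + 0.3 for x in xs]
v = [abs(2 * x - 1) + 0.1 for x in xs]

for p in [1.5, 2.0, 3.0, 10.0]:
    q = p / (p - 1)
    lhs = sum(abs(a * b) for a, b in zip(u, v)) * dx
    rhs = (sum(abs(a) ** p for a in u) * dx) ** (1 / p) * \
          (sum(abs(b) ** q for b in v) * dx) ** (1 / q)
    assert lhs <= rhs + 1e-12
print("Hoelder inequality verified")
```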
Minkowski's Inequality. Assume $1 \leq p \leq \infty$. Then for any $u, v \in L^p(\Omega)$,
$$\| u + v \|_{L^p(\Omega)} \leq \| u \|_{L^p(\Omega)} + \| v \|_{L^p(\Omega)}.$$

Proof. By the above Hölder inequality, we have
$$\| u + v \|_{L^p(\Omega)}^p = \int_\Omega |u + v|^p\, dx \leq \int_\Omega |u + v|^{p-1} (|u| + |v|)\, dx$$
$$\leq \left( \int_\Omega |u + v|^p\, dx \right)^{\frac{p-1}{p}} \left[ \left( \int_\Omega |u|^p\, dx \right)^{\frac{1}{p}} + \left( \int_\Omega |v|^p\, dx \right)^{\frac{1}{p}} \right]$$
$$= \| u + v \|_{L^p(\Omega)}^{p-1} \left( \| u \|_{L^p(\Omega)} + \| v \|_{L^p(\Omega)} \right).$$
Then divide both sides by $\| u + v \|_{L^p(\Omega)}^{p-1}$. $\Box$
Interpolation Inequality for $L^p$ Norms. Assume that $u \in L^r(\Omega) \cap L^s(\Omega)$
with $1 \leq r \leq s \leq \infty$. Then for any $r \leq t \leq s$ and $0 < \theta < 1$ such that
$$\frac{1}{t} = \frac{\theta}{r} + \frac{1 - \theta}{s}, \tag{A.4}$$
we have $u \in L^t(\Omega)$, and
$$\| u \|_{L^t(\Omega)} \leq \| u \|_{L^r(\Omega)}^{\theta}\, \| u \|_{L^s(\Omega)}^{1-\theta}. \tag{A.5}$$

Proof. Write
$$\int_\Omega |u|^t\, dx = \int_\Omega |u|^a |u|^{t-a}\, dx.$$
Choose $p$ and $q$ so that
$$ap = r, \quad (t - a)q = s, \quad \text{and} \quad \frac{1}{p} + \frac{1}{q} = 1. \tag{A.6}$$
Then by the Hölder inequality, we have
$$\int_\Omega |u|^t\, dx \leq \left( \int_\Omega |u|^r\, dx \right)^{a/r} \left( \int_\Omega |u|^s\, dx \right)^{(t-a)/s}.$$
Dividing the powers through by $t$, we obtain
$$\| u \|_t \leq \| u \|_r^{a/t}\, \| u \|_s^{1 - a/t}.$$
Now letting $\theta = a/t$, we arrive at the desired inequality (A.5). Here the restriction
(A.4) comes from (A.6). $\Box$
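Because the proof of (A.5) only uses Hölder, the inequality also holds for the discrete measure of a Riemann sum. A small sketch (illustration only; $r$, $s$ and the sample function are arbitrary choices):

```python
# Check of (A.5) with weighted sums as the integrals: for 1/t = th/r + (1-th)/s,
# ||u||_t <= ||u||_r^th * ||u||_s^(1-th) holds for the discrete measure too.
N = 2000
dx = 1.0 / N
u = [abs(2 * ((i + 0.5) * dx) - 0.7) + 0.05 for i in range(N)]

def norm(p):
    return (sum(a ** p for a in u) * dx) ** (1 / p)

r, s = 1.5, 6.0
for th in [0.2, 0.5, 0.8]:
    t = 1.0 / (th / r + (1 - th) / s)
    assert r <= t <= s
    assert norm(t) <= norm(r) ** th * norm(s) ** (1 - th) + 1e-12
print("interpolation inequality verified")
```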
A.3 Calderon-Zygmund Decomposition

Lemma A.3.1 For $f \in L^1(R^n)$, $f \geq 0$, and fixed $\alpha > 0$, there exist $E$ and $G$ such that

(i) $R^n = E \cup G$, $E \cap G = \emptyset$;

(ii) $f(x) \leq \alpha$, a.e. $x \in E$;

(iii) $G = \bigcup_{k=1}^\infty Q_k$, where $\{Q_k\}$ are disjoint cubes such that
$$\alpha < \frac{1}{|Q_k|} \int_{Q_k} f(x)\, dx \leq 2^n \alpha.$$

Proof. Since $\int_{R^n} f(x)\, dx$ is finite, for a given $\alpha > 0$ one can pick a cube $Q_o$
sufficiently large such that
$$\int_{Q_o} f(x)\, dx \leq \alpha |Q_o|. \tag{A.7}$$
Divide $Q_o$ into $2^n$ equal subcubes with disjoint interiors. Those subcubes $Q$ satisfying
$$\int_Q f(x)\, dx \leq \alpha |Q|$$
are similarly subdivided, and this process is repeated indefinitely. Let $\mathcal{O}$ denote
the set of subcubes $Q$ thus obtained that satisfy
$$\int_Q f(x)\, dx > \alpha |Q|.$$
For each $Q \in \mathcal{O}$, let $\tilde{Q}$ be its predecessor, i.e., $Q$ is one of the $2^n$ subcubes of
$\tilde{Q}$. Then obviously we have $|\tilde{Q}| / |Q| = 2^n$, and consequently,
$$\alpha < \frac{1}{|Q|} \int_Q f(x)\, dx \leq \frac{1}{|Q|} \int_{\tilde{Q}} f(x)\, dx \leq \frac{1}{|Q|}\, \alpha |\tilde{Q}| = 2^n \alpha. \tag{A.8}$$
Let
$$G = \bigcup_{Q \in \mathcal{O}} Q, \quad \text{and} \quad E = Q_o \setminus G.$$
Then (iii) follows immediately. To see (ii), notice that each point of $E$ lies in
a nested sequence of cubes $Q$ with diameters tending to zero and satisfying
$$\int_Q f(x)\, dx \leq \alpha |Q|;$$
now by Lebesgue's Differentiation Theorem, we have
$$f(x) \leq \alpha, \quad \text{a.e. in } E. \tag{A.9}$$
This completes the proof of the Lemma. $\Box$
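The stopping-time construction above is short enough to run. The sketch below (illustration only, not from the text) carries it out in one dimension on $[0, 1)$ for a piecewise-constant $f$, so that the bounds (ii) and (iii) can be checked exactly.

```python
# A 1-d sketch of the Calderon-Zygmund stopping-time argument on [0, 1):
# f is piecewise constant on 2^K dyadic cells; cubes with average <= alpha are
# subdivided, cubes with average > alpha are selected.
K = 10
cells = 1 << K
f = [(i * 7919) % 100 / 10.0 for i in range(cells)]   # arbitrary data in [0, 10)
alpha = 6.0

def avg(lo, hi):                # average of f over cells [lo, hi)
    return sum(f[lo:hi]) / (hi - lo)

selected, good = [], []
def decompose(lo, hi):
    if avg(lo, hi) > alpha:
        selected.append((lo, hi))
    elif hi - lo == 1:
        good.append(lo)
    else:
        mid = (lo + hi) // 2
        decompose(lo, mid)
        decompose(mid, hi)

assert avg(0, cells) <= alpha   # the starting cube Q_o has small average
decompose(0, cells)
for lo, hi in selected:         # (iii): alpha < average <= 2^n * alpha (n = 1)
    assert alpha < avg(lo, hi) <= 2 * alpha
for i in good:                  # (ii): f <= alpha on E (exact here, since f is
    assert f[i] <= alpha        # constant on the finest cells)
print(len(selected), "selected intervals; bounds verified")
```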
Lemma A.3.2 Let $T$ be a linear operator from $L^p(\Omega) \cap L^q(\Omega)$ into itself, with
$1 \leq p < q < \infty$. If $T$ is of weak type $(p, p)$ and of weak type $(q, q)$, then for any
$p < r < q$, $T$ is of strong type $(r, r)$. More precisely, if there exist constants
$B_p$ and $B_q$ such that, for any $t > 0$,
$$\mu_{Tf}(t) \leq \left( \frac{B_p \| f \|_p}{t} \right)^p \quad \text{and} \quad \mu_{Tf}(t) \leq \left( \frac{B_q \| f \|_q}{t} \right)^q, \quad \forall f \in L^p(\Omega) \cap L^q(\Omega),$$
then
$$\| Tf \|_r \leq C B_p^\theta B_q^{1-\theta} \| f \|_r, \quad \forall f \in L^p(\Omega) \cap L^q(\Omega),$$
where
$$\frac{1}{r} = \frac{\theta}{p} + \frac{1 - \theta}{q}$$
and $C$ depends only on $p$, $q$, and $r$.
Proof. For any number $s > 0$, let
$$g(x) = \begin{cases} f(x) & \text{if } |f(x)| \leq s, \\ 0 & \text{if } |f(x)| > s. \end{cases}$$
We split $f$ into the good part $g$ and the bad part $b$: $f(x) = g(x) + b(x)$. Then
$$|Tf(x)| \leq |Tg(x)| + |Tb(x)|,$$
and hence
$$\mu(t) \equiv \mu_{Tf}(t) \leq \mu_{Tg}\!\left(\frac{t}{2}\right) + \mu_{Tb}\!\left(\frac{t}{2}\right) \leq \left( \frac{2B_q}{t} \right)^q \int_\Omega |g(x)|^q\, dx + \left( \frac{2B_p}{t} \right)^p \int_\Omega |b(x)|^p\, dx.$$
It follows that
$$\int_\Omega |Tf|^r\, dx = \int_0^\infty \mu(t)\, d(t^r) = r \int_0^\infty t^{r-1} \mu(t)\, dt$$
$$\leq r (2B_q)^q \int_0^\infty t^{r-1-q} \left( \int_{|f| \leq s} |f(x)|^q\, dx \right) dt + r (2B_p)^p \int_0^\infty t^{r-1-p} \left( \int_{|f| > s} |f(x)|^p\, dx \right) dt$$
$$\equiv r (2B_q)^q I_q + r (2B_p)^p I_p. \tag{A.10}$$
Let $s = t/A$ for some positive number $A$ to be fixed later. Then
$$I_q = A^{r-q}\int_0^\infty s^{r-1-q}\left(\int_{|f|\le s} |f(x)|^q\,dx\right)ds = A^{r-q}\int_\Omega |f(x)|^q\left(\int_{|f|}^\infty s^{r-1-q}\,ds\right)dx = \frac{A^{r-q}}{q-r}\int_\Omega |f(x)|^r\,dx. \qquad (A.11)$$
Similarly,
$$I_p = A^{r-p}\int_0^\infty s^{r-1-p}\left(\int_{|f|> s} |f(x)|^p\,dx\right)ds = A^{r-p}\int_\Omega |f(x)|^p\left(\int_0^{|f|} s^{r-1-p}\,ds\right)dx = \frac{A^{r-p}}{r-p}\int_\Omega |f(x)|^r\,dx. \qquad (A.12)$$
Combining (A.10), (A.11), and (A.12), we derive
$$\int_\Omega |Tf(x)|^r\,dx \le rF(A)\int_\Omega |f(x)|^r\,dx, \qquad (A.13)$$
where
$$F(A) = \frac{(2B_q)^q A^{r-q}}{q-r} + \frac{(2B_p)^p A^{r-p}}{r-p}.$$
By elementary calculus, one can easily verify that the minimum of $F(A)$ is attained at
$$A = 2\,B_q^{q/(q-p)} B_p^{p/(p-q)}.$$
For this value of $A$, (A.13) becomes
$$\int_\Omega |Tf(x)|^r\,dx \le r\,2^r\left(\frac{1}{q-r} + \frac{1}{r-p}\right) B_q^{q(r-p)/(q-p)}\, B_p^{p(q-r)/(q-p)} \int_\Omega |f(x)|^r\,dx.$$
Letting
$$C = 2\left(\frac{r}{q-r} + \frac{r}{r-p}\right)^{1/r}, \qquad \text{and} \qquad \frac{1}{r} = \frac{\theta}{p} + \frac{1-\theta}{q},$$
we arrive immediately at
$$\|Tf\|_{L^r(\Omega)} \le C B_p^{\theta} B_q^{1-\theta} \|f\|_{L^r(\Omega)}.$$
This completes the proof of the Lemma. □
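The "elementary calculus" step above, locating the minimizer of $F(A)$, can be sanity-checked numerically. The following sketch uses arbitrary illustrative values of $p$, $q$, $r$, $B_p$, $B_q$ (they do not come from the text).

```python
# Numerical check (illustrative values, not from the text):
#   F(A) = (2*Bq)**q * A**(r-q)/(q-r) + (2*Bp)**p * A**(r-p)/(r-p)
# should be minimized at  A* = 2 * Bq**(q/(q-p)) * Bp**(p/(p-q)).

p, q, r = 2.0, 4.0, 3.0          # any p < r < q
Bp, Bq = 1.3, 0.7                # arbitrary positive constants

def F(A):
    return (2*Bq)**q * A**(r-q) / (q-r) + (2*Bp)**p * A**(r-p) / (r-p)

A_star = 2 * Bq**(q/(q-p)) * Bp**(p/(p-q))

# A* should beat nearby values and a coarse grid search.
assert F(A_star) <= F(A_star * 1.01) and F(A_star) <= F(A_star * 0.99)
grid_min = min(F(0.01 * k) for k in range(1, 1000))
assert F(A_star) <= grid_min + 1e-9
print(A_star, F(A_star))
```

With these values, $F(A) = c_1/A + c_2 A$, whose minimizer $\sqrt{c_1/c_2}$ agrees with the closed form above.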
A.4 The Contraction Mapping Principle

Let $X$ be a linear space with norm $\|\cdot\|$, and let $T$ be a mapping from $X$ into itself. If there exists a number $\theta < 1$, such that
$$\|Tx - Ty\| \le \theta \|x - y\| \quad \text{for all } x, y \in X,$$
then $T$ is called a contraction mapping.

Theorem A.4.1 Let $T$ be a contraction mapping in a Banach space $B$. Then $T$ has a unique fixed point in $B$; that is, there is a unique $x \in B$, such that
$$Tx = x.$$
Proof. Let $x_o$ be any point in the Banach space $B$. Define
$$x_1 = Tx_o, \qquad \text{and} \qquad x_{k+1} = Tx_k, \quad k = 1, 2, \ldots$$
We show that $\{x_k\}$ is a Cauchy sequence in $B$. In fact, for any integers $n > m$,
$$\|x_n - x_m\| \le \sum_{k=m}^{n-1} \|x_{k+1} - x_k\| = \sum_{k=m}^{n-1} \|T^k x_1 - T^k x_o\| \le \sum_{k=m}^{n-1} \theta^k \|x_1 - x_o\| \le \frac{\theta^m}{1-\theta}\|x_1 - x_o\| \longrightarrow 0 \quad \text{as } m \to \infty.$$
Since a Banach space is complete, the sequence $\{x_k\}$ converges to an element $x$ in $B$. Since $T$ is a contraction, it is continuous; taking the limit on both sides of
$$x_{k+1} = Tx_k,$$
we arrive at
$$Tx = x. \qquad (A.14)$$
To see the uniqueness, assume that $y$ is also a solution of (A.14). Then
$$\|x - y\| = \|Tx - Ty\| \le \theta \|x - y\|.$$
Since $\theta < 1$, we must have
$$\|x - y\| = 0,$$
that is, $x = y$. □
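The proof above is constructive: iterating $T$ from any starting point converges geometrically, with the error controlled by the bound $\theta^m/(1-\theta)\,\|x_1 - x_o\|$ derived in the proof. A minimal sketch with the hypothetical map $T(x) = \cos x$, which is a contraction on $[\cos 1, 1]$ with $\theta = \sin 1 < 1$:

```python
import math

# Fixed-point iteration for the contraction T(x) = cos(x) on [cos 1, 1],
# where |T'(x)| = |sin x| <= sin(1) =: theta < 1.  (Illustrative example map.)

T = math.cos
theta = math.sin(1.0)

x0 = 1.0
x1 = T(x0)
x = x0
for _ in range(100):
    x = T(x)

# x is, numerically, the unique fixed point: T(x) = x.
assert abs(T(x) - x) < 1e-12

# The a priori error bound from the proof at m = 100 iterations:
bound = theta**100 / (1 - theta) * abs(x1 - x0)
assert abs(T(x) - x) <= bound
print(x, bound)
```

The observed residual is far below the a priori bound, since near the fixed point the effective contraction factor $|\sin x| \approx 0.67$ is smaller than the worst-case $\theta = \sin 1 \approx 0.84$.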
A.5 The Arzela-Ascoli Theorem

We say that a sequence of functions $\{u_k(x)\}$ is uniformly equicontinuous if for each $\epsilon > 0$, there exists $\delta > 0$, such that
$$|u_k(x) - u_k(y)| < \epsilon \quad \text{whenever } |x - y| < \delta,$$
for all $x, y \in \mathbb{R}^n$ and $k = 1, 2, \ldots$
Theorem A.5.1 (Arzela-Ascoli Compactness Criterion for Uniform Convergence).
Assume that $\{u_k(x)\}$ is a sequence of real-valued functions defined on $\mathbb{R}^n$ satisfying
$$|u_k(x)| \le M, \quad k = 1, 2, \ldots, \; x \in \mathbb{R}^n,$$
for some positive number $M$, and that $\{u_k(x)\}$ is uniformly equicontinuous. Then there exist a subsequence $\{u_{k_i}(x)\}$ of $\{u_k(x)\}$ and a continuous function $u(x)$, such that
$$u_{k_i} \to u$$
uniformly on any compact subset of $\mathbb{R}^n$.
Nirenberg. but we must also study relationships among diﬀerent metrics. The fundamental research on the relations among various Sobolev Spaces (Sobolev norms) was ﬁrst carried out by G.6. L.1 1. in particular. people use numbers to quantify the surrounding objects. They have many applications in various branches of mathematics. The role of Sobolev Spaces in the analysis of PDEs is somewhat similar to the role of Euclidean spaces in the study of geometry.2 1. Sobolev in the 1930s. and . For example. For these purposes. the absolute value a − b is used to measure the diﬀerence between two numbers a and b. J. and how close these solutions are to the real one depends on how we measure them. in the theory of partial diﬀerential equations. such as H.2 Poincar´’s Inequality e The Classical HardyLittlewoodSobolev Inequality In real life. temperature is a function of time and place. we use a sequence of approximate solutions to approach a real one. Hence. In mathematics. E. Brezis. Littlewood in the 1910s and then by S. More recently. many well known mathematicians.5 1.6.3 1. that is. which metric we are choosing. Serrin. it is imperative that we need not only develop suitable metrics to measure diﬀerent states (functions). Caﬀarelli. L. Functions are used to describe physical states.6 Distributions Sobolev Spaces Approximation by Smooth Functions Sobolev Embeddings Compact Embedding Other Basic Inequalities 1.4 1. Chang. Lieb. Hardy and J. A.1 1. the Sobolev Spaces were introduced.1 Introduction to Sobolev Spaces 1. Very often.
and which functions are ‘critically’ related to these sharp estimates.2) in a proper linear space. in particular. for instance.1) To prove the existence of solutions. given a PDE problem. (1.’ to derive the existence. has seen more and more applications.1) in the classical sense.1). u) = 0 1 2  u2 d x − Ω u Ω F (x. one may view − as an operator acting on a proper linear space and then apply some known principles of functional analysis. x ∈ ∂Ω (1. Let Ω be a bounded domain in Rn and consider the Dirichlet problem associated with the Laplace equation: − u = f (x).2 1 Introduction to Sobolev Spaces E. one may recover the diﬀerentiability of the solutions. so that they can still satisfy equation (1. From the deﬁnition of the functional in either (1. In general. Hence the critical points of the functional are solutions of the problem only in the ‘weak’ sense. Then it becomes a semilinear equation. Stein. Correspondingly. ·).1).4) . one can see that the function u in the space need not be second order diﬀerentiable as required by the classical solutions of (1. in equation (1. which functions achieve these sharp estimates.3). u) d x. For example. To ﬁnd the existence of weak solutions for partial diﬀerential equations. what the sharp estimates are. x ∈ Ω u = 0. One may also consider the corresponding variational functional J(u) = 1 2  u2 d x − Ω Ω f (x) u d x (1. our intention is to view it as an operator A acting on some proper linear spaces X and Y of functions and to symbolically write the equation as Au = f (1. u).3) f (x. let’s start oﬀ with a simple example. s) d x is an antiderivative of f (x. the ‘ﬁxed point theory’ or ‘the degree theory. we have the functional J(u) = where F (x.2) or (1. However. the calculus of variations. instead of f (x). have worked in this area. the method of functional analysis. we consider f (x. especially for nonlinear partial diﬀerential equations. To roughly illustrate this kind of application. 
The main objectives are to determine if and how the norms dominate each other. This kind of variational approach is particularly powerful in dealing with nonlinear equations. and seek critical points of the functional in that space. by an appropriate regularity argument.
1 Distributions 3 Then we can apply the general and elegant principles of linear or nonlinear functional analysis to study the solvability of various equations involving A. and these ﬁrst derivatives need not be continuous or even be deﬁned everywhere. to show the existence of such weak solutions. As we will see in the next few chapters. . there are two basic stages in showing convergence: i) in a reﬂexive Banach space. sometimes use ﬁnite dimensional approximation. Hence in Section 1.3.1. To derive many useful properties in Sobolev spaces. In Section 1. what really involved were only the ﬁrst derivatives of u instead of the second derivatives as required classically for a second order equation. we can just work on smooth functions to establish a series of important inequalities.2.2) or (1. and then ii) by the compact embedding from a “stronger” Sobolev space into a “weaker” one.1. The result can then be applied to a broad class of partial diﬀerential equations. and then go on to investigate the convergence of the sequence. The advantage is that it allows one to divide the task of ﬁnding “suitable” smooth solutions for a PDE into two major steps: Step 1. As we will see later. every bounded sequence has a weakly convergent subsequence. mainly the notion of the weak derivatives. In this process. we will introduce the distributions. In solving a partial diﬀerential equation.1 Distributions As we have seen in the introduction. which are the elements of the Sobolev spaces. the weak convergent sequence in the “stronger” space becomes a strong convergent sequence in the “weaker” space. Therefore. We may also associate this operator with a functional J(·). via functional analysis approach. these weak derivatives can be approximated by smooth functions. the readers may have a glance at the introduction of the next chapter to gain more motivations for studying the Sobolev spaces. the Sobolev spaces are designed precisely for this purpose and will work out properly. 
it is natural to ﬁrst ﬁnd a sequence of approximate solutions.3). we show that. The limit function of a convergent sequence of approximate solutions would be the desired exact solution of the equation. One seeks solutions which are less diﬀerentiable but are easier to obtain. the key is to ﬁnd an appropriate operator ‘A’ and appropriate spaces ‘X’ and ‘Y’. in many cases. one can substantially weaken the notion of partial derivatives. for the functional J(u) in (1. Before going onto the details of this chapter. It is very common that people use “energy” minimization or conservation. whose critical points are the solutions of the equation (1. Existence of Weak Solutions. We then deﬁne Sobolev spaces in Section 1. it is not so convenient to work directly on weak derivatives. 1. Then in the next three sections.4).
However.5) still makes sense if u is a locally L1 summable Now if u is not in C 1 (Ω).1. ∂xi (1. Both the existence of weak solutions and the regularity lifting have become two major branches of today’s PDE analysis. Assume that u is a C 1 function in Ω and φ ∈ D(Ω).2 Assume ρ ∈ C0 (Rn ). for i = 1. Various functions spaces and the related embedding theories are basic tools in both analysis.’ which will be the elements of the Sobolev spaces. One uses various analysis tools to boost the diﬀerentiability of the known weak solutions and try to show that they are actually classical ones. then for any r < R. ∞ Example 1. Regularity Lifting. This is called the space of test functions on Ω. the integral on ∂xi the right hand side of (1. Let 1 x−y ρ( )u(y)dy.1 Assume BR (xo ) := {x ∈ Rn  x − xo  < R} ⊂ Ω. · · · . among which Sobolev spaces are the most frequently used ones. Let D(Ω) = C0 (Ω) be the linear space of inﬁnitely diﬀerentiable functions with compact support in Ω.5) ∂u does not exist. Through integration by parts. Example 1. and supp u ⊂ K ⊂⊂ Ω. ∂u φ(x)dx = − ∂xi u(x) Ω Ω ∂φ dx. we have.4 1 Introduction to Sobolev Spaces Step 2. n. we introduce the notion of ‘weak derivatives. u (x) := ρ ∗ u := n Rn 1 exp{ x−xo 2 −r2 } for x − xo  < r 0 elsewhere Then u ∈ ∞ C0 (Ω) for suﬃciently small. Let Rn be the ndimensional Euclidean space and Ω be an open connected ∞ subset in Rn .1. then . Such functions are Lebesque measurable functions f deﬁned on Ω and with the property that 1/p f Lp (K) := K f (x)p dx <∞ for every compact subset K in Ω. In this section. Let Lp (Ω) be the space of pth power locally summable functions for loc 1 ≤ p ≤ ∞. u ∈ Lp (Ω). the following function f (x) = ∞ is in C0 (Ω). 2.
There is no boundary term because $\varphi$ vanishes near the boundary. Now if $u$ is not in $C^1(\Omega)$, then $\frac{\partial u}{\partial x_i}$ does not exist, and the left-hand side of (1.5) makes no sense. However, the integral on the right-hand side of (1.5) still makes sense if $u$ is merely locally $L^1$ summable. Thus it is natural to choose those functions $v$ that satisfy
$$ \int_\Omega v(x)\, \varphi(x)\, dx = - \int_\Omega u\, \frac{\partial \varphi}{\partial x_i}\, dx \quad \text{for all functions } \varphi \in D(\Omega) \qquad (1.6) $$
as the weak representatives of $\frac{\partial u}{\partial x_i}$: for a function $u$ that need only be locally $L^1$ summable, we define its first derivative $\frac{\partial u}{\partial x_i}$ weakly as the function $v(x)$ satisfying (1.6).

The same idea works for higher partial derivatives. Let $\alpha = (\alpha_1, \alpha_2, \cdots, \alpha_n)$ be a multi-index of order $k := |\alpha| := \alpha_1 + \alpha_2 + \cdots + \alpha_n$. For $u \in C^k(\Omega)$, the regular $\alpha$th partial derivative of $u$ is
$$ D^\alpha u = \frac{\partial^{\alpha_1} \partial^{\alpha_2} \cdots \partial^{\alpha_n} u}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}. $$
Given any test function $\varphi \in D(\Omega)$, through a straightforward integration by parts $k$ times, we arrive at
$$ \int_\Omega D^\alpha u\, \varphi(x)\, dx = (-1)^{|\alpha|} \int_\Omega u\, D^\alpha \varphi\, dx. $$
Now if $u$ is not $k$ times differentiable, the left-hand side of this identity makes no sense; however, the right-hand side is valid for functions $u$ with much weaker differentiability — $u$ need only be locally $L^1$ summable. For this reason, we define

Definition 1.1.1 For $u, v \in L^1_{loc}(\Omega)$, we say that $v$ is the $\alpha$th weak derivative of $u$, written $v = D^\alpha u$, provided
$$ \int_\Omega v(x)\, \varphi(x)\, dx = (-1)^{|\alpha|} \int_\Omega u\, D^\alpha \varphi\, dx $$
for all test functions $\varphi \in D(\Omega)$.

Example 1.1.3 For $n = 1$ and $\Omega = (-\pi, 1)$, let
$$ u(x) = \begin{cases} \cos x & \text{if } -\pi < x \le 0 \\ 1 - x & \text{if } 0 < x < 1. \end{cases} $$
Then its weak derivative $u'(x)$ can be represented by
$$ v(x) = \begin{cases} -\sin x & \text{if } -\pi < x \le 0 \\ -1 & \text{if } 0 < x < 1. \end{cases} $$
To see this, we verify, for any $\varphi \in D(\Omega)$, that
$$ \int_{-\pi}^{1} u(x)\, \varphi'(x)\, dx = - \int_{-\pi}^{1} v(x)\, \varphi(x)\, dx. \qquad (1.7) $$
In fact, through integration by parts, we have
$$ \int_{-\pi}^{1} u(x)\varphi'(x)\, dx = \int_{-\pi}^{0} \cos x\, \varphi'(x)\, dx + \int_{0}^{1} (1-x)\varphi'(x)\, dx $$
$$ = \int_{-\pi}^{0} \sin x\, \varphi(x)\, dx + \varphi(0) - \varphi(0) + \int_{0}^{1} \varphi(x)\, dx = - \int_{-\pi}^{1} v(x)\varphi(x)\, dx. $$
In this example, the function $u$ is not differentiable at $x = 0$ in the classical sense. Also, since the weak derivative is defined by integrals, one may alter the values of the weak derivative $v(x)$ on a set of measure zero, and (1.7) still holds. Hence a weak derivative is unique only up to a set of measure zero:

Lemma 1.1.1 (Uniqueness of Weak Derivatives). If $v$ and $w$ are both the $\alpha$th weak partial derivatives of $u$, then $v(x) = w(x)$ almost everywhere.

Proof. By the definition of the weak derivatives, for any $\varphi \in D(\Omega)$ we have
$$ \int_\Omega v(x)\varphi(x)\, dx = (-1)^{|\alpha|} \int_\Omega u(x)\, D^\alpha \varphi\, dx = \int_\Omega w(x)\varphi(x)\, dx. $$
It follows that
$$ \int_\Omega \big(v(x) - w(x)\big)\varphi(x)\, dx = 0 \quad \forall\, \varphi \in D(\Omega). $$
Therefore, we must have $v(x) = w(x)$ almost everywhere.

From the definition, one can see that a weak derivative $D^\alpha u$ can be viewed as a linear functional acting on the space of test functions $D(\Omega)$, and we call it a distribution. More generally, we have

Definition 1.1.2 A distribution is a continuous linear functional on $D(\Omega)$. The linear space of distributions, or generalized functions on $\Omega$, denoted by $D'(\Omega)$, is the collection of all continuous linear functionals on $D(\Omega)$.
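The defining identity (1.7) can also be checked numerically. The sketch below (an illustration assuming NumPy, not part of the text) integrates $u\varphi'$ and $-v\varphi$ for Example 1.1.3 against one smooth test function by the midpoint rule; the bump parameters $a, b$ are an arbitrary choice of support inside $(-\pi, 1)$:

```python
import numpy as np

# Example 1.1.3: u = cos x on (-pi, 0], 1 - x on (0, 1), with weak
# derivative v = -sin x on (-pi, 0], -1 on (0, 1).  We verify
#   int u phi' dx = - int v phi dx
# for one smooth test function phi compactly supported in (-pi, 1).

def u(x):
    return np.where(x <= 0.0, np.cos(x), 1.0 - x)

def v(x):
    return np.where(x <= 0.0, -np.sin(x), -1.0)

def phi(x, a=-2.0, b=0.8):
    """Smooth bump supported in (a, b), a subset of (-pi, 1)."""
    t = (2.0 * x - (a + b)) / (b - a)       # maps (a, b) onto (-1, 1)
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(1.0 / (t[inside] ** 2 - 1.0))
    return out

# midpoint rule on a fine grid over (-pi, 1)
N = 200000
edges = np.linspace(-np.pi, 1.0, N + 1)
xm = 0.5 * (edges[:-1] + edges[1:])
dx = edges[1] - edges[0]

h = 1e-6                                    # central difference for phi'
phi_prime = (phi(xm + h) - phi(xm - h)) / (2.0 * h)

lhs = np.sum(u(xm) * phi_prime) * dx        #  int u phi'
rhs = -np.sum(v(xm) * phi(xm)) * dx         # -int v phi
assert abs(lhs - rhs) < 1e-4
assert abs(rhs) > 0.01                      # the identity is not trivially 0 = 0
```

Note that the support of $\varphi$ straddles the corner of $u$ at $x = 0$, so the check is genuinely about the weak derivative and not just classical integration by parts.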
Here, the continuity of a functional $T$ on $D(\Omega)$ means that, for any sequence $\{\varphi_k\} \subset D(\Omega)$ with $\varphi_k \to \varphi$ in $D(\Omega)$, we have $T(\varphi_k) \to T(\varphi)$ as $k \to \infty$; and we say that $\varphi_k \to \varphi$ in $D(\Omega)$ if
a) there exists $K \subset\subset \Omega$ such that $\mathrm{supp}\,\varphi_k,\ \mathrm{supp}\,\varphi \subset K$, and
b) for any $\alpha$, $D^\alpha \varphi_k \to D^\alpha \varphi$ uniformly as $k \to \infty$.

The most important and most commonly used distributions are locally summable functions. In fact, for any $f \in L^p_{loc}(\Omega)$ with $1 \le p \le \infty$, consider
$$ T_f(\varphi) = \int_\Omega f(x)\varphi(x)\, dx, \quad \forall\, \varphi \in D(\Omega). $$
It is easy to verify that $T_f(\cdot)$ is a continuous linear functional on $D(\Omega)$, and hence it is a distribution. Conversely, for any distribution $\mu$, if there is an $f \in L^1_{loc}(\Omega)$ such that
$$ \mu(\varphi) = T_f(\varphi), \quad \forall\, \varphi \in D(\Omega), $$
then we say that $\mu$ is (or can be realized as) a locally summable function, and we identify it with $f$.

An interesting example of a distribution that is not a locally summable function is the well-known Dirac delta function. Let $x^o$ be a point in $\Omega$. This kind of "function" has been used widely and so successfully by physicists and engineers, who often simply view it as
$$ \delta_{x^o}(x) = \begin{cases} 0, & \text{for } x \ne x^o \\ \infty, & \text{for } x = x^o. \end{cases} $$
It is not a function at all, and one can show that it is not locally summable. Rigorously, the delta function at $x^o$ can be defined as the distribution
$$ \delta_{x^o}(\varphi) = \varphi(x^o), \quad \forall\, \varphi \in D(\Omega). $$
Compare this with the definition of weak derivatives. Surprisingly, such a delta function is the derivative of some function in the following distributional sense. To explain this, let $\Omega = (-1, 1)$, and let
$$ f(x) = \begin{cases} 0, & \text{for } x < 0 \\ 1, & \text{for } x \ge 0. \end{cases} $$
Then, for any $\varphi \in D(\Omega)$, we have
$$ - \int_{-1}^{1} f(x)\varphi'(x)\, dx = - \int_{0}^{1} \varphi'(x)\, dx = \varphi(0) = \delta_0(\varphi). $$
Hence we may regard $\delta_0(x)$ as $f'(x)$ in the sense of distributions.
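The computation above is easy to reproduce on a computer (an illustration assuming NumPy, not part of the text): integrating $-f\varphi'$ over $(-1,1)$ against a smooth bump recovers the point value $\varphi(0)$, i.e. the action of $\delta_0$:

```python
import numpy as np

# For the step function f = 0 on x < 0, 1 on x >= 0, and a test function
# phi supported in (-1, 1):  -int f phi' dx = phi(0),
# i.e. f' = delta_0 in the sense of distributions.

def phi(x, r=0.9):
    """Smooth bump supported in (-r, r)."""
    t = np.asarray(x, dtype=float) / r
    out = np.zeros_like(t)
    inside = np.abs(t) < 1.0
    out[inside] = np.exp(1.0 / (t[inside] ** 2 - 1.0))
    return out

N = 200000
edges = np.linspace(-1.0, 1.0, N + 1)
xm = 0.5 * (edges[:-1] + edges[1:])
dx = edges[1] - edges[0]

h = 1e-6
phi_prime = (phi(xm + h) - phi(xm - h)) / (2.0 * h)
f = (xm >= 0.0).astype(float)            # Heaviside step

lhs = -np.sum(f * phi_prime) * dx        # = -int_0^1 phi'(x) dx
assert abs(lhs - phi(np.array([0.0]))[0]) < 1e-4
```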
1.2 Sobolev Spaces

Suppose that, given a function $f \in L^p(\Omega)$, we want to solve the partial differential equation
$$ \triangle u = f(x) $$
in the sense of weak derivatives. Naturally, we would seek solutions $u$ such that $\triangle u$ is in $L^p(\Omega)$; more generally, we would start from the collection of all distributions whose second weak derivatives are in $L^p(\Omega)$. Similarly, in a variational approach, as we have seen in the introduction, to seek critical points of the functional
$$ \frac{1}{2} \int_\Omega |\nabla u|^2\, dx - \int_\Omega F(x, u)\, dx, $$
a natural set of functions to start with is the collection of distributions whose first weak derivatives are in $L^2(\Omega)$. More generally, we have

Definition 1.2.1 The Sobolev space $W^{k,p}(\Omega)$ ($k \ge 0$ and $p \ge 1$) is the collection of all distributions $u$ on $\Omega$ such that, for every multi-index $\alpha$ with $|\alpha| \le k$, $D^\alpha u$ can be realized as an $L^p$ function on $\Omega$. Furthermore, $W^{k,p}(\Omega)$ is a Banach space with the norm
$$ \|u\| := \Big( \sum_{|\alpha| \le k} \int_\Omega |D^\alpha u(x)|^p\, dx \Big)^{1/p}. $$
In the special case when $p = 2$, it is also a Hilbert space, and we usually denote it by $H^k(\Omega)$.

Definition 1.2.2 $W_0^{k,p}(\Omega)$ is the closure of $C_0^\infty(\Omega)$ in $W^{k,p}(\Omega)$. Intuitively, $W_0^{k,p}(\Omega)$ is the space of functions whose derivatives up to order $(k-1)$ vanish on the boundary.
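As an elementary numerical illustration of the norm in Definition 1.2.1 (assuming NumPy; not part of the text), take $u(x) = \sin(\pi x)$ on $\Omega = (0, 1)$ with $k = 1$, $p = 2$. Then $\|u\|^2 = \int_0^1 |u|^2\,dx + \int_0^1 |u'|^2\,dx = \frac{1}{2} + \frac{\pi^2}{2}$, which a grid computation reproduces:

```python
import numpy as np

# W^{1,2} norm of u(x) = sin(pi x) on (0, 1):
#   ||u||^2 = int_0^1 |u|^2 dx + int_0^1 |u'|^2 dx = 1/2 + pi^2/2.

N = 100000
edges = np.linspace(0.0, 1.0, N + 1)
xm = 0.5 * (edges[:-1] + edges[1:])      # midpoint rule
dx = edges[1] - edges[0]

u = np.sin(np.pi * xm)
du = np.pi * np.cos(np.pi * xm)          # classical = weak derivative here

norm_num = np.sqrt(np.sum(u ** 2 + du ** 2) * dx)
norm_exact = np.sqrt(0.5 + np.pi ** 2 / 2.0)
assert abs(norm_num - norm_exact) < 1e-6
```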
Example 1.2.1 Let $\Omega = (-1, 1)$ and consider the function $f(x) = |x|^\beta$. For $0 < \beta < 1$, it is obviously not differentiable at the origin; however, for any $1 \le p < \frac{1}{1-\beta}$, it is in the Sobolev space $W^{1,p}(\Omega)$. More generally, let $\Omega$ be the open unit ball centered at the origin in $R^n$; then the function $|x|^\beta$ is in $W^{1,p}(\Omega)$ if and only if
$$ \beta > 1 - \frac{n}{p}. \qquad (1.8) $$
To see this, we first calculate
$$ f_{x_i}(x) = \frac{\beta x_i}{|x|^{2-\beta}}, \quad \text{for } x \ne 0, $$
and hence
$$ |\nabla f(x)| = \frac{\beta}{|x|^{1-\beta}}. \qquad (1.9) $$
Fix a small $\epsilon > 0$. Then, for any $\varphi \in D(\Omega)$, by integration by parts we have
$$ \int_{\Omega \setminus B_\epsilon(0)} f(x)\varphi_{x_i}(x)\, dx + \int_{\Omega \setminus B_\epsilon(0)} f_{x_i}(x)\varphi(x)\, dx = - \int_{\partial B_\epsilon(0)} f\, \varphi\, \nu_i\, dS, $$
where $\nu = (\nu_1, \nu_2, \cdots, \nu_n)$ is the inward normal vector on $\partial B_\epsilon(0)$. Now, under the condition $\beta > 1 - n/p$, $f_{x_i}$ is in $L^p(\Omega) \subset L^1(\Omega)$, and
$$ \Big| \int_{\partial B_\epsilon(0)} f\, \varphi\, \nu_i\, dS \Big| \le C \epsilon^{\,n-1+\beta} \to 0, \quad \text{as } \epsilon \to 0. $$
It follows that
$$ \int_\Omega |x|^\beta \varphi_{x_i}(x)\, dx = - \int_\Omega \frac{\beta x_i}{|x|^{2-\beta}}\, \varphi(x)\, dx. $$
Therefore, the weak first partial derivatives of $|x|^\beta$ are
$$ \frac{\beta x_i}{|x|^{2-\beta}}, \quad i = 1, 2, \cdots, n. $$
Moreover, from (1.9), one can see that $|\nabla f|$ is in $L^p(\Omega)$ if and only if $\beta > 1 - \frac{n}{p}$.
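The integrability threshold $\beta > 1 - n/p$ can be watched numerically (an illustration assuming NumPy; not part of the text). In radial form, $\int_{B_1} |\nabla f|^p dx$ reduces, up to the area of the unit sphere, to $\int_0^1 r^{(\beta-1)p + n - 1}\, dr$; truncating the integral at a shrinking inner radius $\epsilon$ shows boundedness above the threshold and blow-up below it:

```python
import numpy as np

# Example 1.2.1 in radial form: |grad f| = |beta|/|x|^{1-beta} for
# f = |x|^beta, so up to a constant
#   int_B |grad f|^p dx = c * int_0^1 r^{(beta-1)p + n - 1} dr,
# finite iff beta > 1 - n/p.  Watch the truncated integral over (eps, 1).

def truncated(beta, p, n, eps):
    r = np.linspace(eps, 1.0, 200001)
    g = (abs(beta) * r ** (beta - 1.0)) ** p * r ** (n - 1.0)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(r)))  # trapezoid

n, p = 3, 2                       # threshold: beta > 1 - n/p = -1/2
ok  = [truncated(-0.2, p, n, e) for e in (1e-2, 1e-4, 1e-6)]   # above it
bad = [truncated(-0.8, p, n, e) for e in (1e-2, 1e-4, 1e-6)]   # below it

assert ok[2] < 1.0                # stays bounded as eps -> 0
assert bad[2] > 10.0 * bad[0]     # blows up as eps -> 0
```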
1.3 Approximation by Smooth Functions

While working in Sobolev spaces — for instance, when proving inequalities — it may be inconvenient and cumbersome to manage the weak derivatives directly from the definition. To get around this, in this section we show that any function in a Sobolev space can be approached by a sequence of smooth functions; in other words, the smooth functions are dense in Sobolev spaces. Based on this, when deriving many useful properties of Sobolev spaces, we can simply work on smooth functions and then take limits. At the end of the section, for more convenient application of the approximation results, we introduce an Operator Completion Theorem, and we also prove an Extension Theorem. Both theorems will be used frequently in the next few sections.

The idea of the approximation is based on mollifiers. Let
$$ j(x) = \begin{cases} c\, e^{\frac{1}{|x|^2 - 1}} & \text{if } |x| < 1 \\ 0 & \text{if } |x| \ge 1. \end{cases} $$
One can verify that $j(x) \in C_0^\infty(B_1(0))$. Choose the constant $c$ such that
$$ \int_{R^n} j(x)\, dx = 1. $$
For each $\epsilon > 0$, define
$$ j_\epsilon(x) = \frac{1}{\epsilon^n}\, j\Big(\frac{x}{\epsilon}\Big). $$
Then obviously
$$ \int_{R^n} j_\epsilon(x)\, dx = 1, \quad \forall\, \epsilon > 0. $$
One can also verify that $j_\epsilon \in C_0^\infty(B_\epsilon(0))$, and
$$ \lim_{\epsilon \to 0} j_\epsilon(x) = \begin{cases} 0 & \text{for } x \ne 0 \\ \infty & \text{for } x = 0. \end{cases} $$
The above observations suggest that the limit of $j_\epsilon(x)$ may be viewed as a delta function, and from the well-known property of the delta function we would naturally expect, for any continuous function $f(x)$ and for any point $x \in \Omega$,
$$ (J_\epsilon f)(x) := \int_\Omega j_\epsilon(x - y) f(y)\, dy \to f(x), \quad \text{as } \epsilon \to 0. \qquad (1.10) $$
More generally, if $f(x)$ is only in $L^p(\Omega)$, we would expect (1.10) to hold almost everywhere. We call $j_\epsilon(x)$ a mollifier and $(J_\epsilon f)(x)$ the mollification of $f(x)$. We will show that, for each $\epsilon > 0$, $J_\epsilon f$ is a $C^\infty$ function, and that, as $\epsilon \to 0$, $J_\epsilon f \to f$ in $W^{k,p}$.

Notice that, actually,
$$ (J_\epsilon f)(x) = \int_{B_\epsilon(x) \cap \Omega} j_\epsilon(x - y) f(y)\, dy; $$
hence, in order that $(J_\epsilon f)(x)$ approximate $f(x)$ well, we need $B_\epsilon(x)$ to be completely contained in $\Omega$, to ensure that
$$ \int_{B_\epsilon(x) \cap \Omega} j_\epsilon(x - y)\, dy = 1 $$
(one of the important properties of the delta function). Equivalently, we need $x$ to be in the interior of $\Omega$. For this reason, we first prove a local approximation theorem.

Theorem 1.3.1 (Local approximation by smooth functions). For any $f \in W^{k,p}(\Omega)$, $J_\epsilon f \in C^\infty(R^n)$ and $J_\epsilon f \to f$ in $W^{k,p}_{loc}(\Omega)$ as $\epsilon \to 0$.

Then, to extend this result to the entire $\Omega$, we will choose infinitely many open sets $O_i$, $i = 1, 2, \cdots$, each of which has a positive distance to the boundary of $\Omega$, and whose union is the whole of $\Omega$. Based on the above theorem, we are able to approximate a $W^{k,p}(\Omega)$ function on each $O_i$ by a sequence of smooth functions. Combining this with a partition of unity — and a cut-off function if $\Omega$ is unbounded — we will then prove

Theorem 1.3.2 (Global approximation by smooth functions). For any $f \in W^{k,p}(\Omega)$, there exists a sequence of functions $\{f_m\} \subset C^\infty(\Omega) \cap W^{k,p}(\Omega)$ such that $f_m \to f$ in $W^{k,p}(\Omega)$ as $m \to \infty$.

Theorem 1.3.3 (Global approximation by smooth functions up to the boundary). Assume that $\Omega$ is bounded with $C^1$ boundary $\partial\Omega$. Then, for any $f \in W^{k,p}(\Omega)$, there exists a sequence of functions $\{f_m\} \subset C^\infty(\bar\Omega) = C^\infty(\bar\Omega) \cap W^{k,p}(\Omega)$ such that $f_m \to f$ in $W^{k,p}(\Omega)$ as $m \to \infty$.

When $\Omega$ is the entire space $R^n$, approximation by $C^\infty$ functions and approximation by $C_0^\infty$ functions are essentially the same. We have

Theorem 1.3.4 $W^{k,p}(R^n) = W_0^{k,p}(R^n)$. In other words, for any $f \in W^{k,p}(R^n)$, there exists a sequence of functions $\{f_m\} \subset C_0^\infty(R^n)$ such that
$$ f_m \to f, \quad \text{as } m \to \infty, \quad \text{in } W^{k,p}(R^n). $$
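The mollifier construction is concrete enough to compute with directly. The sketch below (an illustration in dimension $n = 1$ assuming NumPy; not part of the text) builds $j_\epsilon$, checks the normalization $\int j_\epsilon = 1$, and observes $(J_\epsilon f)(x) \to f(x)$ at a point where $f$ is continuous but not differentiable:

```python
import numpy as np

def trap(y, x):
    """Trapezoid rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def j(x):
    """The kernel j of Section 1.3 in dimension n = 1 (unnormalized)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(1.0 / (x[inside] ** 2 - 1.0))
    return out

t = np.linspace(-1.0, 1.0, 400001)
c = 1.0 / trap(j(t), t)                  # normalizing constant: int c*j = 1

def j_eps(x, eps):
    return (c / eps) * j(x / eps)

def mollify(f, x, eps, M=20001):
    """(J_eps f)(x) = int j_eps(x - y) f(y) dy, by quadrature."""
    y = np.linspace(x - eps, x + eps, M)
    return trap(j_eps(x - y, eps) * f(y), y)

# int j_eps = 1 for every eps (here eps = 0.1):
s = np.linspace(-0.1, 0.1, 400001)
assert abs(trap(j_eps(s, 0.1), s) - 1.0) < 1e-6

# J_eps f -> f at the kink of f(y) = |y|:
errs = [abs(mollify(np.abs, 0.0, eps)) for eps in (0.5, 0.1, 0.02)]
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 0.01
```

The error at the kink decays linearly in $\epsilon$ here, which is consistent with the rate one expects when $f$ is merely Lipschitz.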
Proof of Theorem 1.3.1. We prove the theorem in three steps. In Step 1, we show that $J_\epsilon f \in C^\infty(R^n)$ and
$$ \|J_\epsilon f\|_{L^p(\Omega)} \le \|f\|_{L^p(\Omega)}. \qquad (1.12) $$
From the definition of $(J_\epsilon f)(x)$, we can see that it is well defined for all $x \in R^n$, and that it vanishes if $x$ is of distance $\epsilon$ away from $\Omega$. Here and in the following, for simplicity of argument, we extend $f$ to be zero outside of $\Omega$. In Step 2, we prove that, if $f$ is in $L^p(\Omega)$, then $J_\epsilon f \to f$ in $L^p_{loc}(\Omega)$. We first verify this for continuous functions and then approximate $L^p$ functions by continuous functions. In Step 3, we reach the conclusion of the theorem: for each $f \in W^{k,p}(\Omega)$ and $|\alpha| \le k$, $D^\alpha f$ is in $L^p(\Omega)$, so from the result in Step 2 we have
$$ J_\epsilon (D^\alpha f) \to D^\alpha f \quad \text{in } L^p_{loc}(\Omega); $$
hence what we need to verify is
$$ D^\alpha (J_\epsilon f)(x) = J_\epsilon (D^\alpha f)(x). $$
As the reader will notice, the arguments in the last two steps only work on compact subsets of $\Omega$.

Step 1. Let $e_i = (0, \cdots, 0, 1, 0, \cdots, 0)$ be the unit vector in the $x_i$ direction. Fix $\epsilon > 0$ and $x \in R^n$. By the definition of $J_\epsilon f$, we have, for $|h| < \epsilon$,
$$ \frac{(J_\epsilon f)(x + h e_i) - (J_\epsilon f)(x)}{h} = \int_{B_{2\epsilon}(x) \cap \Omega} \frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h}\, f(y)\, dy. \qquad (1.11) $$
Since, as $h \to 0$,
$$ \frac{j_\epsilon(x + h e_i - y) - j_\epsilon(x - y)}{h} \to \frac{\partial j_\epsilon(x - y)}{\partial x_i} $$
uniformly for all $y \in B_{2\epsilon}(x) \cap \Omega$, we can pass the limit through the integral sign in (1.11) to obtain
$$ \frac{\partial (J_\epsilon f)(x)}{\partial x_i} = \int_\Omega \frac{\partial j_\epsilon(x - y)}{\partial x_i}\, f(y)\, dy. $$
Similarly, we have
$$ D^\alpha (J_\epsilon f)(x) = \int_\Omega D_x^\alpha j_\epsilon(x - y)\, f(y)\, dy. $$
Noticing that $j_\epsilon(\cdot)$ is infinitely differentiable, we conclude that $J_\epsilon f$ is also infinitely differentiable.

Then we derive (1.12). By the Hölder inequality, we have
$$ |(J_\epsilon f)(x)| = \Big| \int_\Omega j_\epsilon^{\frac{p-1}{p}}(x - y)\, j_\epsilon^{\frac{1}{p}}(x - y)\, f(y)\, dy \Big| \le \Big( \int_\Omega j_\epsilon(x - y)\, dy \Big)^{\frac{p-1}{p}} \Big( \int_\Omega j_\epsilon(x - y)\, |f(y)|^p\, dy \Big)^{\frac{1}{p}} \le \Big( \int_{B_\epsilon(x)} j_\epsilon(x - y)\, |f(y)|^p\, dy \Big)^{\frac{1}{p}}. $$
It follows that
$$ \int_\Omega |(J_\epsilon f)(x)|^p\, dx \le \int_\Omega \int_{B_\epsilon(x)} j_\epsilon(x - y)\, |f(y)|^p\, dy\, dx \le \int_\Omega |f(y)|^p \Big( \int_{B_\epsilon(y)} j_\epsilon(x - y)\, dx \Big) dy \le \int_\Omega |f(y)|^p\, dy. $$
This verifies (1.12).

Step 2. We show that, for any compact subset $K$ of $\Omega$,
$$ \|J_\epsilon f - f\|_{L^p(K)} \to 0, \quad \text{as } \epsilon \to 0. \qquad (1.13) $$
We first show this for a continuous function $f$. By writing
$$ (J_\epsilon f)(x) - f(x) = \int_{B_\epsilon(0)} j_\epsilon(y)\, [f(x - y) - f(x)]\, dy, $$
we have
$$ |(J_\epsilon f)(x) - f(x)| \le \int_{B_\epsilon(0)} j_\epsilon(y) \max_{x \in K,\, |y| < \epsilon} |f(x - y) - f(x)|\, dy = \max_{x \in K,\, |y| < \epsilon} |f(x - y) - f(x)|. $$
Due to the continuity of $f$ and the compactness of $K$, the last term in the above inequality tends to zero uniformly as $\epsilon \to 0$. This proves (1.13) for continuous functions.

Now, for any $f$ in $L^p(\Omega)$ and any given $\delta > 0$, choose a continuous function $g$ such that
$$ \|f - g\|_{L^p(\Omega)} < \frac{\delta}{3}. \qquad (1.14) $$
This can be derived from the well-known fact that any $L^p$ function can be approximated by a simple function of the form $\sum_{j=1}^k a_j \chi_j(x)$, where $\chi_j$ is the characteristic function of some measurable set $A_j$, and that each simple function can in turn be approximated by a continuous function. For the continuous function $g$, (1.13) implies that, for sufficiently small $\epsilon$,
$$ \|J_\epsilon g - g\|_{L^p(K)} < \frac{\delta}{3}. $$
It follows from this, (1.12), and (1.14) that
$$ \|J_\epsilon f - f\|_{L^p(K)} \le \|J_\epsilon f - J_\epsilon g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} + \|g - f\|_{L^p(K)} \le 2\,\|f - g\|_{L^p(K)} + \|J_\epsilon g - g\|_{L^p(K)} \le 2 \cdot \frac{\delta}{3} + \frac{\delta}{3} = \delta. $$
This verifies (1.13).

Step 3. Now assume that $f \in W^{k,p}(\Omega)$. Then, for any $\alpha$ with $|\alpha| \le k$, we have $D^\alpha f \in L^p(\Omega)$. By the result in Step 2,
$$ J_\epsilon (D^\alpha f) \to D^\alpha f \quad \text{in } L^p(K). \qquad (1.15) $$
Now what is left to verify is
$$ D^\alpha (J_\epsilon f)(x) = J_\epsilon (D^\alpha f)(x), \quad \forall\, x \in K. $$
To see this, we fix a point $x$ in $K$ and choose $\epsilon < \frac{1}{2}\, dist(K, \partial\Omega)$; then any point $y$ in the ball $B_\epsilon(x)$ is in the interior of $\Omega$. Consequently, for sufficiently small $\epsilon$,
$$ D^\alpha (J_\epsilon f)(x) = \int_\Omega D_x^\alpha j_\epsilon(x - y)\, f(y)\, dy = \int_{B_\epsilon(x)} D_x^\alpha j_\epsilon(x - y)\, f(y)\, dy = (-1)^{|\alpha|} \int_{B_\epsilon(x)} D_y^\alpha j_\epsilon(x - y)\, f(y)\, dy $$
$$ = \int_{B_\epsilon(x)} j_\epsilon(x - y)\, D^\alpha f(y)\, dy = J_\epsilon (D^\alpha f)(x). $$
Here we have used the fact that $j_\epsilon(x - y)$ and all its derivatives are supported in $B_\epsilon(x) \subset \Omega$. This completes the proof of Theorem 1.3.1.

The Proof of Theorem 1.3.2.

Step 1. For a Bounded Region $\Omega$. Let
$$ \Omega_i = \Big\{ x \in \Omega \,\Big|\, dist(x, \partial\Omega) > \frac{1}{i} \Big\}, \quad i = 1, 2, 3, \cdots. $$
Write $O_i = \Omega_{i+3} \setminus \bar\Omega_{i+1}$, and choose some open set $O_0 \subset\subset \Omega$ so that
$$ \Omega = \cup_{i=0}^\infty O_i. $$
Choose a smooth partition of unity $\{\eta_i\}_{i=0}^\infty$ associated with the open sets $\{O_i\}_{i=0}^\infty$:
$$ \sum_{i=0}^\infty \eta_i(x) = 1, \quad 0 \le \eta_i(x) \le 1, \quad \eta_i \in C_0^\infty(O_i), \quad \forall\, x \in \Omega. $$
Given any function $f \in W^{k,p}(\Omega)$, obviously $\eta_i f \in W^{k,p}(\Omega)$ and $\mathrm{supp}(\eta_i f) \subset O_i$. Fix a $\delta > 0$. Choose $\epsilon_i > 0$ so small that $f_i := J_{\epsilon_i}(\eta_i f)$ satisfies
$$ \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \le \frac{\delta}{2^{i+1}}, \quad i = 0, 1, 2, \cdots, \qquad (1.16) $$
$$ \mathrm{supp}\, f_i \subset (\Omega_{i+4} \setminus \bar\Omega_i), \quad i = 1, 2, \cdots.$$
Set $g = \sum_{i=0}^\infty f_i$. In a neighborhood of each point $x \in \Omega$, there are at most finitely many nonzero terms in the sum; hence $g \in C^\infty(\Omega)$, because each $f_i$ is in $C^\infty(\Omega)$. It follows from (1.16) that
$$ \|g - f\|_{W^{k,p}(\Omega)} = \Big\| \sum_{i=0}^\infty (f_i - \eta_i f) \Big\|_{W^{k,p}(\Omega)} \le \sum_{i=0}^\infty \|f_i - \eta_i f\|_{W^{k,p}(\Omega)} \le \delta \sum_{i=0}^\infty \frac{1}{2^{i+1}} = \delta. $$
Since $\delta > 0$ is arbitrary, this completes Step 1.

Remark 1.3.1 In the above argument, neither $\sum f_i$ nor $\sum \eta_i f$ converges in $W^{k,p}(\Omega)$, but their difference converges. Then how did we prove that $g \in W^{k,p}(\Omega)$? From the above inequality, the series $\sum (f_i - \eta_i f)$ converges in $W^{k,p}(\Omega)$ (see Exercise 1.3.1 below); hence the difference $(g - f)$ is in $W^{k,p}(\Omega)$, and since $f \in W^{k,p}(\Omega)$, therefore $g \in W^{k,p}(\Omega)$. Note also that, although each partial sum of $\sum f_i$ is in $C_0^\infty(\Omega)$, the infinite series $\sum f_i$ does not converge to $g$ in $W^{k,p}(\Omega)$.

Exercise 1.3.1 Let $X$ be a Banach space and $v_i \in X$. Show that the series $\sum_i v_i$ converges in $X$ if $\sum_i \|v_i\| < \infty$. Hint: Show that the sequence of partial sums is a Cauchy sequence.

Step 2. For an Unbounded Region $\Omega$. Given any $\delta > 0$, since $f \in W^{k,p}(\Omega)$, there exists an $R > 0$ such that
$$ \|f\|_{W^{k,p}(\Omega \setminus B_{R-2}(0))} \le \delta. \qquad (1.17) $$
Choose a cut-off function $\phi \in C^\infty(R^n)$ satisfying
$$ \phi(x) = \begin{cases} 1, & x \in B_{R-2}(0) \\ 0, & x \in R^n \setminus B_R(0), \end{cases} \qquad \text{and} \quad |D^\alpha \phi(x)| \le 1 \quad \forall\, x \in R^n,\ \forall\, |\alpha| \le k. $$
Then, by (1.17), it is easy to verify that there exists a constant $C$ such that
$$ \|\phi f - f\|_{W^{k,p}(\Omega)} \le C\delta. \qquad (1.18) $$
Now, in the bounded domain $\Omega \cap B_R(0)$, by the argument in Step 1 there is a function $g \in C^\infty(\Omega \cap B_R(0))$ such that
$$ \|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} \le \delta. \qquad (1.19) $$
Obviously, the function $\phi g$ is in $C^\infty(\Omega)$, and by (1.18) and (1.19),
$$ \|\phi g - f\|_{W^{k,p}(\Omega)} \le \|\phi g - \phi f\|_{W^{k,p}(\Omega)} + \|\phi f - f\|_{W^{k,p}(\Omega)} \le C_1 \|g - f\|_{W^{k,p}(\Omega \cap B_R(0))} + C\delta \le (C_1 + C)\delta. $$
This completes the proof of Theorem 1.3.2.

The Proof of Theorem 1.3.3. We will still use mollifiers to approximate the function. As compared with the proofs of the previous theorems, the main difficulty here is that, for a point on the boundary, there is no room to mollify. To circumvent this, we cover $\Omega$ with finitely many open sets; on each set that covers the boundary layer, we first translate the function a little bit inward, so that there is room to mollify within $\Omega$, and then mollify it. Finally, we again use a partition of unity to complete the proof.

Step 1. Approximating in a Small Set Covering Part of $\partial\Omega$. Let $x^o$ be any point on $\partial\Omega$. Since $\partial\Omega$ is $C^1$, we can make a $C^1$ change of coordinates locally, so that in the new coordinate system $(x_1, \cdots, x_{n-1}, x_n)$,
$$ B_r(x^o) \cap \Omega = \{x \in B_r(x^o) \mid x_n > \Phi(x_1, \cdots, x_{n-1})\} $$
with some $C^1$ function $\Phi$. Set $D = \Omega \cap B_{r/2}(x^o)$. Shift every point $x \in D$ in the $x_n$ direction by $a\epsilon$ units: define $x^\epsilon = x + a\epsilon\, e_n$, and let $D^\epsilon = \{x^\epsilon \mid x \in D\}$. This $D^\epsilon$ is obtained by shifting $D$ toward the inside of $\Omega$ by $a\epsilon$ units; choosing the constant $a$ large enough and $r > 0$ sufficiently small, the ball $B_\epsilon(x)$ lies in $\Omega \cap B_r(x^o)$ for all $x \in D^\epsilon$ and for all small $\epsilon > 0$, so there is room to mollify a given $W^{k,p}$ function on $D^\epsilon$ within $\Omega$. More precisely, we first translate $f$ a distance $a\epsilon$ in the $x_n$ direction to obtain $f^\epsilon(x) = f(x^\epsilon)$, and then mollify it. We claim that $J_\epsilon f^\epsilon \to f$ in $W^{k,p}(D)$. Indeed, for any multi-index $\alpha$ with $|\alpha| \le k$, we have
$$ \|D^\alpha (J_\epsilon f^\epsilon) - D^\alpha f\|_{L^p(D)} \le \|D^\alpha (J_\epsilon f^\epsilon) - D^\alpha f^\epsilon\|_{L^p(D)} + \|D^\alpha f^\epsilon - D^\alpha f\|_{L^p(D)}. $$
A similar argument as in the proof of Theorem 1.3.1 implies that the first term on the right-hand side goes to zero as $\epsilon \to 0$, while the second term also vanishes in the process, due to the continuity of translation in the $L^p$ norm. This establishes the claim.

Step 2. Applying the Partition of Unity. Since $\partial\Omega$ is compact, we can find finitely many such sets $D$ — call them $D_i$, $i = 1, 2, \cdots, N$ — the union of which covers $\partial\Omega$. Given $\delta > 0$, for each $D_i$, from the argument in Step 1, there exists $g_i \in C^\infty(\bar D_i)$ such that
$$ \|g_i - f\|_{W^{k,p}(D_i)} \le \delta, \quad i = 1, 2, \cdots, N. \qquad (1.20) $$
Choose an open set $D_0 \subset\subset \Omega$ such that $\Omega \subset \cup_{i=0}^N D_i$, and select a function $g_0 \in C^\infty(\bar D_0)$ such that
$$ \|g_0 - f\|_{W^{k,p}(D_0)} \le \delta. \qquad (1.21) $$
Let $\{\eta_i\}$ be a smooth partition of unity subordinate to the open sets $\{D_i\}_{i=0}^N$, and define
$$ g = \sum_{i=0}^N \eta_i g_i. $$
Then obviously $g \in C^\infty(\bar\Omega)$, and $f = \sum_{i=0}^N \eta_i f$. It follows from (1.20) and (1.21) that, for each $|\alpha| \le k$,
$$ \|D^\alpha g - D^\alpha f\|_{L^p(\Omega)} \le \sum_{i=0}^N \|D^\alpha (\eta_i g_i) - D^\alpha (\eta_i f)\|_{L^p(D_i)} \le C \sum_{i=0}^N \|g_i - f\|_{W^{k,p}(D_i)} \le C(N+1)\delta. \qquad (1.22) $$
This completes the proof of Theorem 1.3.3.

The Proof of Theorem 1.3.4. Let $\phi(r)$ be a $C_0^\infty$ cut-off function such that
$$ \phi(r) = \begin{cases} 1, & \text{for } 0 \le r \le 1 \\ \text{between } 0 \text{ and } 1, & \text{for } 1 < r < 2 \\ 0, & \text{for } r \ge 2. \end{cases} $$
Then, by a direct computation, one can show that
$$ \Big\| \phi\Big(\frac{|x|}{R}\Big) f(x) - f(x) \Big\|_{W^{k,p}(R^n)} \to 0, \quad \text{as } R \to \infty. $$
Hence there exists a sequence of numbers $\{R_m\}$ with $R_m \to \infty$ such that, for
$$ g_m(x) := \phi\Big(\frac{|x|}{R_m}\Big) f(x), $$
we have
$$ \|g_m - f\|_{W^{k,p}(R^n)} \le \frac{1}{m}. \qquad (1.23) $$
On the other hand, for each fixed $m$, $J_\epsilon(g_m) \to g_m$ in $W^{k,p}$ as $\epsilon \to 0$, by the Approximation Theorem 1.3.1. Hence there exist $\epsilon_m$ such that, for $f_m := J_{\epsilon_m}(g_m)$,
$$ \|f_m - g_m\|_{W^{k,p}(R^n)} \le \frac{1}{m}. \qquad (1.24) $$
Obviously, each function $f_m$ is in $C_0^\infty(R^n)$, and by (1.23) and (1.24),
$$ \|f_m - f\|_{W^{k,p}(R^n)} \le \|f_m - g_m\|_{W^{k,p}(R^n)} + \|g_m - f\|_{W^{k,p}(R^n)} \le \frac{2}{m} \to 0, \quad \text{as } m \to \infty. $$
This completes the proof of Theorem 1.3.4.

Now we have proved all four approximation theorems, which show that smooth functions are dense in the Sobolev spaces $W^{k,p}$; in other words, $W^{k,p}$ is the completion of $C^k$ under the norm $\|\cdot\|_{W^{k,p}}$. Later, particularly in the next three sections, when we derive various inequalities in Sobolev spaces, we can first establish them for smooth functions and then extend them to the whole Sobolev space, without going through the approximation process in each particular case. In order to make such extensions more convenient, we prove the following Operator Completion Theorem in general Banach spaces.

Theorem 1.3.5 Let $D$ be a dense linear subspace of a normed space $X$, and let $Y$ be a Banach space. Assume that
$$ T : D \to Y $$
is a bounded linear map. Then there exists an extension $\bar T$ of $T$ from $D$ to the whole space $X$, such that $\bar T$ is a bounded linear operator from $X$ to $Y$, with
$$ \|\bar T\| = \|T\| \quad \text{and} \quad \bar T x = T x \quad \forall\, x \in D. $$

Proof. Given any element $x^o \in X$, since $D$ is dense in $X$, there exists a sequence $\{x_i\} \subset D$ such that $x_i \to x^o$.
It follows that
$$ \|T x_i - T x_j\| \le \|T\|\, \|x_i - x_j\| \to 0, \quad \text{as } i, j \to \infty. $$
This implies that $\{T x_i\}$ is a Cauchy sequence in $Y$. Since $Y$ is a Banach space, $\{T x_i\}$ converges to some element $y^o$ in $Y$. For $x \in D$, define $\bar T x = T x$; for other $x^o \in X$, define
$$ \bar T x^o = y^o. \qquad (1.25) $$
To see that (1.25) is well defined, suppose there is another sequence $\{x_i'\}$ that converges to $x^o$ in $X$ with $T x_i' \to y^1$ in $Y$. Then
$$ \|y^1 - y^o\| = \lim \|T x_i' - T x_i\| \le C \lim_{i \to \infty} \|x_i' - x_i\| = 0. $$
Obviously, $\bar T$ is a linear operator. Moreover,
$$ \|\bar T x^o\| = \lim_{i \to \infty} \|T x_i\| \le \lim_{i \to \infty} \|T\|\, \|x_i\| = \|T\|\, \|x^o\|, $$
hence $\bar T$ is also bounded. This completes the proof.

As a direct application of this Operator Completion Theorem, we prove the following Extension Theorem.

Theorem 1.3.6 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Then, for any open set $O \supset \bar\Omega$, there exists a bounded linear extension operator $E : W^{k,p}(\Omega) \to W_0^{k,p}(O)$ such that $Eu = u$ a.e. in $\Omega$.

Proof. We first define the extension operator $E$ on $C^k(\bar\Omega)$; since $C^k(\bar\Omega)$ is dense in $W^{k,p}(\Omega)$ (see Theorem 1.3.3), we can then apply the Operator Completion Theorem to extend $E$ to $W^{k,p}(\Omega)$. From this density, it is easy to show that $\bar E(u)(x) = u(x)$ almost everywhere on $\Omega$, based on
$$ E(u)(x) = u(x) \ \text{ on } \Omega, \quad \forall\, u \in C^k(\bar\Omega). $$
Hence we now prove the theorem for $u \in C^k(\bar\Omega)$. We define $E(u)$ in the following two steps.

Step 1. The special case when $\Omega$ is a half ball
$$ B_r^+(0) := \{x = (x', x_n) \in R^n \mid |x| < r,\ x_n > 0\}. $$
Assume $u \in C^k(\bar B_r^+(0))$. We extend $u$ to a $C^k$ function on the whole ball $B_r(0)$. Define
$$ \bar u(x', x_n) = \begin{cases} u(x', x_n), & x_n \ge 0 \\ \displaystyle\sum_{i=0}^{k} a_i\, u(x', -\lambda_i x_n), & x_n < 0. \end{cases} $$
To guarantee that all the partial derivatives $\frac{\partial^j u}{\partial x_n^j}$, up to order $k$, are continuous across the hyperplane $x_n = 0$, we first pick $\lambda_i$ such that
$$ 0 < \lambda_0 < \lambda_1 < \cdots < \lambda_k < 1, $$
and then solve the algebraic system
$$ \sum_{i=0}^{k} a_i\, (-\lambda_i)^j = 1, \quad j = 0, 1, \cdots, k, $$
to determine $a_0, a_1, \cdots, a_k$. Since the coefficient determinant $\det\big((-\lambda_i)^j\big)$ is a Vandermonde determinant in the distinct numbers $-\lambda_i$, it is not zero, and hence the system has a unique solution. One can easily verify that, for such $\lambda_i$ and $a_i$, the extended function $\bar u$ is in $C^k(B_r(0))$.

Step 2. Reduce general domains to the half-ball case covered in Step 1, via domain transformations and a partition of unity. Given any $x^o \in \partial\Omega$, there exists a neighborhood $U$ of $x^o$ and a $C^k$ transformation $\phi : U \to R^n$ which satisfies: $\phi(x^o) = 0$, $\phi(U \cap \partial\Omega)$ lies on the hyperplane $x_n = 0$, and $\phi(U \cap \Omega)$ is contained in $R^n_+$. Choose $r^o$ sufficiently small so that $B_{r^o}(0) \subset\subset \phi(U)$ and so that $D_{x^o} := \phi^{-1}(B_{r^o}(0))$, an open set containing $x^o$, satisfies $D_{x^o} \subset O$; note that $\phi \in C^k(\bar D_{x^o})$. All such $D_{x^o}$, together with $\Omega$, form an open covering of $\bar\Omega$; hence there exists a finite subcovering $D_0 := \Omega, D_1, \cdots, D_m$ and a corresponding partition of unity $\eta_0, \eta_1, \cdots, \eta_m$ such that
$$ \eta_i \in C_0^\infty(D_i), \quad i = 0, 1, \cdots, m, \qquad \sum_{i=0}^m \eta_i(x) = 1 \quad \forall\, x \in \bar\Omega. $$
Let $\phi_i$ be the mapping associated with $D_i$ as described above, and let
$$ \tilde u_i(y) = u(\phi_i^{-1}(y)) $$
be the function defined on the half ball $B_{r_i}^+(0) = \phi_i(D_i \cap \Omega)$.
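The reflection construction of Step 1 can be checked numerically in one variable (an illustration assuming NumPy; the theorem's construction is for the half ball in $R^n$, and the sample function $u$ below is an arbitrary choice). With $k = 1$ and the $a_i$ solving $\sum_i a_i (-\lambda_i)^j = 1$ for $j = 0, 1$, the extended function matches $u$ to first order across $x_n = 0$:

```python
import numpy as np

# Toy instance of Step 1 (one variable, k = 1): extend u from {x >= 0}
# by u_ext(x) = a_0 u(-l_0 x) + a_1 u(-l_1 x) for x < 0, where the a_i
# solve  sum_i a_i (-l_i)^j = 1  for j = 0, 1 (value and slope match).

l = np.array([0.5, 1.0 / 3.0])                    # the lambda_i, in (0, 1)
k = 1
A = np.vander(-l, k + 1, increasing=True).T       # row j holds (-l_i)^j
a = np.linalg.solve(A, np.ones(k + 1))            # here a = (-8, 9)

def u(x):
    return np.sin(x) + x ** 2                     # sample C^k function

def u_ext(x):
    x = np.asarray(x, dtype=float)
    pos = x >= 0.0
    out = np.empty_like(x)
    out[pos] = u(x[pos])
    out[~pos] = sum(ai * u(-li * x[~pos]) for ai, li in zip(a, l))
    return out

h = 1e-5
# one-sided values both tend to u(0) = 0:
assert abs(u_ext(np.array([-h]))[0]) < 2.0 * h
# one-sided first derivatives at 0 agree:
dl = -u_ext(np.array([-h]))[0] / h
dr = u_ext(np.array([h]))[0] / h
assert abs(dl - dr) < 1e-6
```

Taking larger $k$ just enlarges the Vandermonde system, exactly as in the proof.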
From Step 1, each $\tilde u_i(y)$ can be extended as a $C^k$ function $\bar u_i(y)$ onto the whole ball $B_{r_i}(0)$. Now we can define the extension of $u$ from $\bar\Omega$ to its neighborhood $O$ as
$$ E(u) = \sum_{i=1}^m \eta_i(x)\, \bar u_i(\phi_i(x)) + \eta_0\, u(x). $$
Obviously, $E(u) \in C_0^k(O)$, and
$$ \|E(u)\|_{W^{k,p}(O)} \le C\, \|u\|_{W^{k,p}(\Omega)}. $$
Moreover, for any $x \in \bar\Omega$, we have
$$ E(u)(x) = \sum_{i=1}^m \eta_i(x)\bar u_i(\phi_i(x)) + \eta_0(x)u(x) = \sum_{i=1}^m \eta_i(x)\tilde u_i(\phi_i(x)) + \eta_0(x)u(x) $$
$$ = \sum_{i=1}^m \eta_i(x)\, u(\phi_i^{-1}(\phi_i(x))) + \eta_0(x)u(x) = \Big( \sum_{i=0}^m \eta_i(x) \Big) u(x) = u(x). $$
This completes the proof of the Extension Theorem.

Remark 1.3.2 Notice that the Extension Theorem actually implies an improved version of the Approximation Theorem, under the stronger assumption that $\partial\Omega$ be $C^k$ instead of $C^1$. Here, as compared with the previous approximation theorems, the improvement is that one can write out the explicit form of the approximating functions. Indeed, from the Extension Theorem one derives immediately

Corollary 1.3.1 Assume that $\Omega$ is bounded and $\partial\Omega$ is $C^k$. Let $O$ be an open set containing $\bar\Omega$, and let $O^\epsilon := \{x \in R^n \mid dist(x, O) \le \epsilon\}$. Then the linear operator
$$ J_\epsilon(E(u)) : W^{k,p}(\Omega) \to C_0^\infty(O^\epsilon) $$
is bounded, and for each $u \in W^{k,p}(\Omega)$,
$$ \|J_\epsilon(E(u)) - u\|_{W^{k,p}(\Omega)} \to 0, \quad \text{as } \epsilon \to 0. $$

1.4 Sobolev Embeddings

When we seek weak solutions of partial differential equations, we start with functions in a Sobolev space $W^{k,p}(\Omega)$. It is then natural to ask whether the functions in this space also automatically belong to some other, better spaces. The following theorem answers this question and, at the same time, provides inequalities among the relevant norms.
Theorem 1.4.1 (General Sobolev inequalities). Assume that $\Omega$ is bounded and has a $C^1$ boundary, and let $u \in W^{k,p}(\Omega)$.
(i) If $k < \frac{n}{p}$, then $u \in L^q(\Omega)$ with $\frac{1}{q} = \frac{1}{p} - \frac{k}{n}$, and there exists a constant $C$ such that
$$ \|u\|_{L^q(\Omega)} \le C\, \|u\|_{W^{k,p}(\Omega)}. $$
(ii) If $k > \frac{n}{p}$, then $u \in C^{k - [\frac{n}{p}] - 1,\, \gamma}(\Omega)$, and there exists a constant $C$ such that
$$ \|u\|_{C^{k - [\frac{n}{p}] - 1,\, \gamma}(\Omega)} \le C\, \|u\|_{W^{k,p}(\Omega)}, $$
where
$$ \gamma = \begin{cases} \big[\frac{n}{p}\big] + 1 - \frac{n}{p}, & \text{if } \frac{n}{p} \text{ is not an integer} \\ \text{any positive number} < 1, & \text{if } \frac{n}{p} \text{ is an integer,} \end{cases} $$
and $[b]$ denotes the integer part of the number $b$.

The proof of the theorem is based upon several simpler theorems. We first consider functions in $W^{1,p}(\Omega)$. From the definition, these functions apparently belong to $L^q(\Omega)$ for $1 \le q \le p$. Naturally, one would expect more; and to control the $L^q$ norm for larger $q$, the $W^{1,p}$ norm of the derivatives,
$$ \|Du\|_{L^p(\Omega)} = \Big( \int_\Omega |Du|^p\, dx \Big)^{1/p}, $$
would be the more useful quantity in practice. For simplicity, we start with smooth functions with compact support in $R^n$. We would like to know for which values of $q$ we can establish the inequality
$$ \|u\|_{L^q(R^n)} \le C\, \|Du\|_{L^p(R^n)} \qquad (1.26) $$
with constant $C$ independent of $u \in C_0^\infty(R^n)$; what is more meaningful is to find out how large this $q$ can be.

Now suppose (1.26) holds. Then it must also be true for the rescaled functions
$$ u_\lambda(x) = u(\lambda x), \qquad \text{that is,} \quad \|u_\lambda\|_{L^q(R^n)} \le C\, \|D u_\lambda\|_{L^p(R^n)}. $$
By substitution, we have
$$ \int_{R^n} |u_\lambda(x)|^q\, dx = \frac{1}{\lambda^n} \int_{R^n} |u(y)|^q\, dy \quad \text{and} \quad \int_{R^n} |D u_\lambda|^p\, dx = \frac{\lambda^p}{\lambda^n} \int_{R^n} |Du(y)|^p\, dy. \qquad (1.27) $$
It follows from (1.27) that
$$ \|u\|_{L^q(R^n)} \le C\, \lambda^{1 - \frac{n}{p} + \frac{n}{q}}\, \|Du\|_{L^p(R^n)}. $$
Therefore, in order that $C$ be independent of $u$, it is necessary that the power of $\lambda$ here be zero; that is,
$$ q = \frac{np}{n - p}. $$
Here, for $q$ to be positive, we must require $p < n$. It turns out that this condition is also sufficient, as stated in the following theorem.

Theorem 1.4.2 (Gagliardo-Nirenberg-Sobolev inequality). Suppose that $1 \le p < n$. Then there exists a constant $C = C(p, n)$ such that
$$ \|u\|_{L^{p^*}(R^n)} \le C\, \|Du\|_{L^p(R^n)}, \quad u \in C_0^1(R^n), \qquad (1.28) $$
where $p^* = \frac{np}{n-p}$.

Since any function in $W^{1,p}(R^n)$ can be approached by a sequence of functions in $C_0^1(R^n)$, we derive immediately

Corollary 1.4.1 Inequality (1.28) holds for all functions $u$ in $W^{1,p}(R^n)$.

For functions in $W^{1,p}(\Omega)$, we can extend them to $W^{1,p}(R^n)$ functions by the Extension Theorem and arrive at

Theorem 1.4.3 Assume that $\Omega$ is a bounded, open subset of $R^n$ with $C^1$ boundary, and that $1 \le p < n$. Let $u \in W^{1,p}(\Omega)$. Then $u$ is in $L^{p^*}(\Omega)$, and there exists a constant $C = C(p, n, \Omega)$ such that
$$ \|u\|_{L^{p^*}(\Omega)} \le C\, \|u\|_{W^{1,p}(\Omega)}. \qquad (1.29) $$

In the limiting case as $p \to n$, we have $p^* = \frac{np}{n-p} \to \infty$. One may then suspect that $u \in L^\infty$ when $p = n$. Unfortunately, this is true only in dimension one. For $n = 1$, from
$$ u(x) = \int_{-\infty}^{x} u'(t)\, dt, $$
we derive immediately that
$$ |u(x)| \le \int_{-\infty}^{\infty} |u'(t)|\, dt. $$
For $n > 1$, it is false. One counterexample is $u = \log\log\big(1 + \frac{1}{|x|}\big)$ on $\Omega = B_1(0)$: it belongs to $W^{1,n}(\Omega)$, but not to $L^\infty(\Omega)$. This is a more delicate situation, and we will deal with it later.

For $p > n$, one would expect $W^{1,p}$ to embed into better spaces. To get some rough idea of what these spaces might be, let us first consider the simplest case, $n = 1$ and $p > 1$. For any $x, y \in R^1$ with $x < y$, we have
$$ u(y) - u(x) = \int_x^y u'(t)\, dt. $$
Then, by the Hölder inequality,
$$ |u(y) - u(x)| \le \int_x^y |u'(t)|\, dt \le \Big( \int_x^y |u'(t)|^p\, dt \Big)^{\frac{1}{p}} \cdot \Big( \int_x^y dt \Big)^{1 - \frac{1}{p}}. $$
It follows that
$$ \frac{|u(y) - u(x)|}{|y - x|^{1 - \frac{1}{p}}} \le \Big( \int_{-\infty}^{\infty} |u'(t)|^p\, dt \Big)^{\frac{1}{p}}. $$
Taking the supremum over all pairs $x, y$ in $R^1$, one sees that the left-hand side is the seminorm of the Hölder space $C^{0,\gamma}(R^1)$ with $\gamma = 1 - \frac{1}{p}$. This is indeed true in general, and we have

Theorem 1.4.4 (Morrey's inequality). Assume $n < p \le \infty$. Then there exists a constant $C = C(p, n)$ such that
$$ \|u\|_{C^{0,\gamma}(R^n)} \le C\, \|u\|_{W^{1,p}(R^n)}, \quad u \in C^1(R^n), $$
where $\gamma = 1 - \frac{n}{p}$.

To establish an inequality like (1.28), we follow two basic principles. First, we reduce the proof to functions with enough smoothness (which rests on the approximation theorems of Section 1.3); secondly, we deal with general domains by extending functions from $W^{1,p}(\Omega)$ to $W^{1,p}(R^n)$. Often, the proof of a sharp inequality such as (1.28) is called 'hard analysis', while the steps reducing the general case to it — approximation and extension — are called 'soft analysis'. Here, Theorem 1.4.2 is the 'key', and the scaling computation (1.27) explains the form of the exponent. In the following, we will show the 'soft' parts first and then the 'hard' ones: we first assume that Theorem 1.4.2 is true and derive Corollary 1.4.1 and Theorem 1.4.3; then we will prove Theorem 1.4.2 and Theorem 1.4.4.

The Proof of Corollary 1.4.1. Given any function $u \in W^{1,p}(R^n)$, by the Approximation Theorem 1.3.4 there exists a sequence $\{u_k\} \subset C_0^\infty(R^n)$ such that $\|u - u_k\|_{W^{1,p}(R^n)} \to 0$ as $k \to \infty$. Applying Theorem 1.4.2 to differences, we obtain
$$ \|u_i - u_j\|_{L^{p^*}(R^n)} \le C\, \|u_i - u_j\|_{W^{1,p}(R^n)} \to 0, \quad \text{as } i, j \to \infty. $$
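The dimensional-analysis step above can be seen concretely on a computer (an illustration assuming NumPy; not part of the text). In $R^2$ with $p = 1$, so that $p^* = 2$, both $\|u_\lambda\|_{L^2}$ and $\|D u_\lambda\|_{L^1}$ scale like $\lambda^{-1}$ for $u(x) = e^{-|x|^2}$, so their ratio is independent of $\lambda$ — exactly the balance that forces $q = np/(n-p)$:

```python
import numpy as np

# Scaling check for (1.26)-(1.27) in R^2 with p = 1, q = p* = 2:
# for u_lam(x) = u(lam x), both ||u_lam||_{L^2} and ||D u_lam||_{L^1}
# scale like lam^{-1}, so their ratio does not depend on lam.

def norms(lam, L=8.0, N=801):
    s = np.linspace(-L, L, N)
    X, Y = np.meshgrid(s, s)
    h = s[1] - s[0]
    r = np.sqrt(X ** 2 + Y ** 2)
    u = np.exp(-(lam * r) ** 2)                          # u_lam
    grad = 2.0 * lam ** 2 * r * np.exp(-(lam * r) ** 2)  # |grad u_lam|
    l2 = np.sqrt(np.sum(u ** 2) * h * h)
    l1_grad = np.sum(grad) * h * h
    return l2, l1_grad

u2_a, g1_a = norms(1.0)
u2_b, g1_b = norms(2.0)

# the ratio ||u_lam||_{L^2} / ||D u_lam||_{L^1} is scale invariant:
assert abs(u2_a / g1_a - u2_b / g1_b) / (u2_a / g1_a) < 1e-3
# and each norm individually scales like lam^{-1}:
assert abs(u2_b - 0.5 * u2_a) / u2_a < 1e-3
```

Repeating the experiment with a mismatched pair $(p, q)$ would make the ratio drift with $\lambda$, which is the numerical face of the necessity argument.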
Thus $\{u_k\}$ also converges to $u$ in $L^{p^*}(R^n)$, and we arrive at
$$ \|u\|_{L^{p^*}(R^n)} = \lim_{k \to \infty} \|u_k\|_{L^{p^*}(R^n)} \le \lim_{k \to \infty} C\, \|D u_k\|_{L^p(R^n)} = C\, \|Du\|_{L^p(R^n)}. $$
This completes the proof of the Corollary.

The Proof of Theorem 1.4.3. For functions in $W^{1,p}(\Omega)$, we first extend them to functions with compact support in $R^n$. More precisely, let $O$ be an open set that covers $\bar\Omega$; by the Extension Theorem (Theorem 1.3.6), there exists a function $\tilde u \in W_0^{1,p}(O)$ such that
$$ \tilde u = u \quad \text{almost everywhere in } \Omega, $$
and, moreover, there exists a constant $C_1 = C_1(p, n, \Omega, O)$ such that
$$ \|\tilde u\|_{W^{1,p}(O)} \le C_1\, \|u\|_{W^{1,p}(\Omega)}. \qquad (1.30) $$
Now we can apply the Gagliardo-Nirenberg-Sobolev inequality to $\tilde u$ to derive
$$ \|u\|_{L^{p^*}(\Omega)} \le \|\tilde u\|_{L^{p^*}(O)} \le C\, \|\tilde u\|_{W^{1,p}(O)} \le C\, C_1\, \|u\|_{W^{1,p}(\Omega)}. $$
This completes the proof of the Theorem.

The Proof of Theorem 1.4.2. We first establish the inequality for $p = 1$; that is, we prove
$$ \Big( \int_{R^n} |u|^{\frac{n}{n-1}}\, dx \Big)^{\frac{n-1}{n}} \le \int_{R^n} |Du|\, dx. \qquad (1.31) $$
Then we will apply (1.31) to $|u|^\gamma$, for a properly chosen $\gamma > 1$, to extend the inequality to the case $p > 1$. We need

Lemma 1.4.1 (General Hölder Inequality). Assume that $u_i \in L^{p_i}(\Omega)$ for $i = 1, 2, \cdots, m$, with
$$ \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_m} = 1. $$
Then
$$ \int_\Omega |u_1 u_2 \cdots u_m|\, dx \le \prod_{i=1}^{m} \Big( \int_\Omega |u_i|^{p_i}\, dx \Big)^{\frac{1}{p_i}}. \qquad (1.32) $$
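Lemma 1.4.1 is also easy to sanity-check numerically (an illustration assuming NumPy; the sample functions are arbitrary). In fact, the discrete sums used below satisfy the same inequality exactly, since the general Hölder inequality holds for weighted sums as well as integrals:

```python
import numpy as np

# General Hölder inequality with m = 3 on Omega = (0, 1):
#   int |u1 u2 u3| dx <= prod_i ||u_i||_{L^{p_i}},  with 1/2 + 1/3 + 1/6 = 1.

N = 100000
x = (np.arange(N) + 0.5) / N             # midpoints of a uniform grid
dx = 1.0 / N

u1 = np.sin(3.0 * x) + 1.2
u2 = np.exp(-x) * (1.0 + x ** 2)
u3 = np.abs(np.cos(7.0 * x)) + 0.1

lhs = np.sum(np.abs(u1 * u2 * u3)) * dx
rhs = 1.0
for u_i, p_i in ((u1, 2.0), (u2, 3.0), (u3, 6.0)):
    rhs *= (np.sum(np.abs(u_i) ** p_i) * dx) ** (1.0 / p_i)
assert lhs <= rhs
```

Equality would require the $|u_i|^{p_i}$ to be proportional, so for generic samples the gap is strictly positive.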
integrate both sides of the above inequality with respect to x1 and x2 from −∞ to ∞. Now we are ready to prove the Theorem.31) for n = 2. Now.26 1 Introduction to Sobolev Spaces The proof can be obtained by applying induction to the usual H¨lder inequalo ity for two functions. −∞ The above two inequalities together imply that ∞ ∞ u(x)2 ≤ −∞ Du(y1 . xi+1 . that is. · · · n. · · · . x2 )dy1 . Similarly. x2 )dy1 · −∞ Du(x1 . xi−1 . Then we deal with the general situation when n > 2. Step 1. we have u(x) ≤ ∞ Du(x1 .32). We write u(x) = Consequently. Integrating both sides with respect to x1 and applying the general H¨lder o inequality (1. we have u(x) = It follows that u(x) ≤ −∞ x1 −∞ ∞ ∂u (y1 . i = 1. yi . 2. we arrive at (1. 2. (1. To better illustrate the idea. . The case p = 1. · · · . ∂yi u(x) ≤ −∞ Du(x1 . · · · . ∂y1 Du(y1 . · · · . And it follows that n u(x) n n−1 ∞ ≤ i=1 −∞ Du(x1 . xn )dyi . we obtain ∞ u −∞ n n−1 ∞ 1 n−1 ∞ n ∞ 1 n−1 dx1 ≤ −∞ ∞ Dudy1 −∞ i=2 1 n−1 Dudyi −∞ ∞ n ∞ −∞ dx1 1 n−1 ≤ −∞ Dudy1 i=2 −∞ Dudx1 dyi . · · · . we prove 2 u(x)2 dx ≤ R2 R2 Dudx . yi .33) Since u has a compact support. · · · n. y2 )dy2 .33). i = 1. yi . · · · . y2 )dy2 . xn )dyi . ∞ xi −∞ ∂u (x1 . we ﬁrst derive inequality (1. xn )dyi 1 n−1 . x2 )dy1 .
Then integrate the above inequality with respect to $x_2$; applying the general Hölder inequality to the $n-1$ factors that depend on $x_2$, we arrive at
\[ \iint |u|^{\frac{n}{n-1}}\,dx_1\,dx_2 \le \Big( \iint |Du|\,dy_1\,dx_2 \Big)^{\frac{1}{n-1}} \Big( \iint |Du|\,dx_1\,dy_2 \Big)^{\frac{1}{n-1}} \prod_{i=3}^{n} \Big( \iiint |Du|\,dx_1\,dx_2\,dy_i \Big)^{\frac{1}{n-1}}. \]
Continuing this way by integrating with respect to $x_3, \cdots, x_{n-1}$, and, finally, integrating both sides with respect to $x_n$ and applying the general Hölder inequality once more, we deduce
\[ \int_{\mathbb{R}^n} |u|^{\frac{n}{n-1}}\,dx \le \Big( \int_{\mathbb{R}^n} |Du|\,dx \Big)^{\frac{n}{n-1}}. \]
This verifies (1.31).

Exercise 1.4.1 Write your own proof with all details for the cases $n = 3$ and $n = 4$.

Step 2. The case $p > 1$. Applying (1.31) to the function $|u|^\gamma$, with $\gamma > 1$ to be chosen later, and then using the Hölder inequality, we have
\[ \Big( \int_{\mathbb{R}^n} |u|^{\frac{\gamma n}{n-1}}\,dx \Big)^{\frac{n-1}{n}} \le \int_{\mathbb{R}^n} |D(|u|^\gamma)|\,dx = \gamma \int_{\mathbb{R}^n} |u|^{\gamma-1}|Du|\,dx \le \gamma \Big( \int_{\mathbb{R}^n} |u|^{\frac{\gamma n}{n-1}}\,dx \Big)^{\frac{(\gamma-1)(n-1)}{\gamma n}} \Big( \int_{\mathbb{R}^n} |Du|^{\frac{\gamma n}{\gamma+n-1}}\,dx \Big)^{\frac{\gamma+n-1}{\gamma n}}. \]
It follows that
\[ \Big( \int_{\mathbb{R}^n} |u|^{\frac{\gamma n}{n-1}}\,dx \Big)^{\frac{n-1}{\gamma n}} \le \gamma \Big( \int_{\mathbb{R}^n} |Du|^{\frac{\gamma n}{\gamma+n-1}}\,dx \Big)^{\frac{\gamma+n-1}{\gamma n}}. \]
Now choose $\gamma$ so that $\frac{\gamma n}{\gamma+n-1} = p$, that is,
\[ \gamma = \frac{p(n-1)}{n-p}; \]
then $\frac{\gamma n}{n-1} = \frac{np}{n-p} = p^*$, and we obtain
\[ \Big( \int_{\mathbb{R}^n} |u|^{\frac{np}{n-p}}\,dx \Big)^{\frac{n-p}{np}} \le \gamma \Big( \int_{\mathbb{R}^n} |Du|^{p}\,dx \Big)^{\frac{1}{p}}. \]
This completes the proof of the Theorem.

The Proof of Theorem 1.4.4 (Morrey's inequality). We will establish the two inequalities
\[ \sup_{\mathbb{R}^n} |u| \le C \|u\|_{W^{1,p}(\mathbb{R}^n)} \tag{1.34} \]
and
\[ \sup_{x \ne y} \frac{|u(x) - u(y)|}{|x - y|^{1 - n/p}} \le C \|Du\|_{L^p(\mathbb{R}^n)}. \tag{1.35} \]
Both of them can be derived from
\[ \frac{1}{|B_r(x)|} \int_{B_r(x)} |u(y) - u(x)|\,dy \le C \int_{B_r(x)} \frac{|Du(y)|}{|y - x|^{n-1}}\,dy, \tag{1.36} \]
where $|B_r(x)|$ is the volume of $B_r(x)$. We will carry the proof out in three steps: in Step 1, we verify (1.36), and in Step 2 and Step 3, we prove (1.34) and (1.35), respectively.

Step 1. We start from
\[ u(y) - u(x) = \int_0^1 \frac{d}{dt} u(x + t(y - x))\,dt, \]
where we set
\[ \omega = \frac{y - x}{|x - y|}, \qquad s = |x - y|, \]
and hence $y = x + s\omega$. It follows that
\[ |u(x + s\omega) - u(x)| \le \int_0^s |Du(x + \tau\omega)|\,d\tau. \]
Integrating both sides with respect to $\omega$ on the unit sphere $\partial B_1(0)$, then converting the integral on the right hand side from polar to rectangular coordinates, we obtain
\[ \int_{\partial B_1(0)} |u(x + s\omega) - u(x)|\,d\sigma \le \int_0^s \int_{\partial B_1(0)} |Du(x + \tau\omega)|\,d\sigma\,d\tau = \int_0^s \int_{\partial B_1(0)} \frac{|Du(x + \tau\omega)|}{\tau^{n-1}}\,\tau^{n-1}\,d\sigma\,d\tau = \int_{B_s(x)} \frac{|Du(z)|}{|x - z|^{n-1}}\,dz. \]
Multiplying both sides by $s^{n-1}$, integrating with respect to $s$ from $0$ to $r$, and taking into account that the integrand on the right hand side is nonnegative, we arrive at
\[ \int_{B_r(x)} |u(y) - u(x)|\,dy \le \frac{r^n}{n} \int_{B_r(x)} \frac{|Du(y)|}{|x - y|^{n-1}}\,dy. \tag{1.37} \]
This verifies (1.36).

Step 2. For each fixed $x \in \mathbb{R}^n$, we have
\[ |u(x)| \le \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(x) - u(y)|\,dy + \frac{1}{|B_1(x)|} \int_{B_1(x)} |u(y)|\,dy = I_1 + I_2. \]
By (1.36) and the Hölder inequality, we deduce
\[ I_1 \le C \int_{B_1(x)} \frac{|Du(y)|}{|x - y|^{n-1}}\,dy \le C \Big( \int_{\mathbb{R}^n} |Du|^p\,dy \Big)^{1/p} \Big( \int_{B_1(x)} \frac{dy}{|x - y|^{\frac{(n-1)p}{p-1}}} \Big)^{\frac{p-1}{p}} \le C_1 \Big( \int_{\mathbb{R}^n} |Du|^p\,dy \Big)^{1/p}. \tag{1.38} \]
Here we have used the condition that $p > n$, so that $\frac{(n-1)p}{p-1} < n$, and hence the integral $\int_{B_1(x)} |x - y|^{-\frac{(n-1)p}{p-1}}\,dy$ is finite. Also it is obvious that
\[ I_2 \le C \|u\|_{L^p(\mathbb{R}^n)}. \tag{1.39} \]
Now (1.34) is an immediate consequence of (1.38) and (1.39).
Step 3. For any pair of fixed points $x$ and $y$ in $\mathbb{R}^n$, let $r = |x - y|$ and $D = B_r(x) \cap B_r(y)$. Then
\[ |u(x) - u(y)| \le \frac{1}{|D|} \int_D |u(x) - u(z)|\,dz + \frac{1}{|D|} \int_D |u(z) - u(y)|\,dz. \tag{1.40} \]
Again by (1.36) and the Hölder inequality, we have
\[ \frac{1}{|D|} \int_D |u(x) - u(z)|\,dz \le \frac{C}{|B_r(x)|} \int_{B_r(x)} |u(x) - u(z)|\,dz \le C \Big( \int_{\mathbb{R}^n} |Du|^p\,dz \Big)^{1/p} \Big( \int_{B_r(x)} \frac{dz}{|x - z|^{\frac{(n-1)p}{p-1}}} \Big)^{\frac{p-1}{p}} = C r^{1 - n/p}\,\|Du\|_{L^p(\mathbb{R}^n)}. \tag{1.41} \]
Similarly,
\[ \frac{1}{|D|} \int_D |u(z) - u(y)|\,dz \le C r^{1 - n/p}\,\|Du\|_{L^p(\mathbb{R}^n)}. \tag{1.42} \]
Then (1.40), (1.41), and (1.42) yield
\[ |u(x) - u(y)| \le C |x - y|^{1 - n/p}\,\|Du\|_{L^p(\mathbb{R}^n)}. \]
This implies (1.35) and thus completes the proof of the Theorem.

The Proof of Theorem 1.4.1 (the General Sobolev Inequality).

(i) Assume that $u \in W^{k,p}(\Omega)$ with $k < \frac{n}{p}$. We want to show that $u \in L^q(\Omega)$ and
\[ \|u\|_{L^q(\Omega)} \le C \|u\|_{W^{k,p}(\Omega)}, \qquad \frac{1}{q} = \frac{1}{p} - \frac{k}{n}. \tag{1.43} \]
This can be done by applying Theorem 1.4.3 successively on the integer $k$. Again denote $p^* = \frac{np}{n-p}$. For $|\alpha| \le k - 1$, we have $D^\alpha u \in W^{1,p}(\Omega)$, and by Theorem 1.4.3, $D^\alpha u \in L^{p^*}(\Omega)$; thus $u \in W^{k-1,p^*}(\Omega)$ and
\[ \|u\|_{W^{k-1,p^*}(\Omega)} \le C_1 \|u\|_{W^{k,p}(\Omega)}. \]
Applying Theorem 1.4.3 again to $W^{k-1,p^*}(\Omega)$, we have $u \in W^{k-2,p^{**}}(\Omega)$ and
\[ \|u\|_{W^{k-2,p^{**}}(\Omega)} \le C_2 \|u\|_{W^{k-1,p^*}(\Omega)} \le C_2 C_1 \|u\|_{W^{k,p}(\Omega)}, \]
where $p^{**} = \frac{np^*}{n - p^*}$, or
\[ \frac{1}{p^{**}} = \frac{1}{p^*} - \frac{1}{n} = \Big( \frac{1}{p} - \frac{1}{n} \Big) - \frac{1}{n} = \frac{1}{p} - \frac{2}{n}. \]
Continuing this way $k$ times, we arrive at (1.43).
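The exponent bookkeeping in part (i) can be checked mechanically: iterating $q \mapsto \frac{nq}{n-q}$ exactly $k$ times starting from $p$ should give $\frac{1}{q} = \frac{1}{p} - \frac{k}{n}$. A minimal sketch using exact rational arithmetic (the sample values $n$, $p$, $k$ are arbitrary choices satisfying $k < n/p$):

```python
from fractions import Fraction

def sobolev_step(q, n):
    """One application of the embedding W^{1,q} -> L^{q*}, where
    q* = nq/(n - q); valid only for q < n."""
    assert q < n
    return n * q / (n - q)

# Iterating the embedding k times starting from p should give
# 1/q = 1/p - k/n, as in part (i) of the proof above.
n, p, k = Fraction(7), Fraction(3, 2), 3      # sample values with k < n/p
q = p
for _ in range(k):
    q = sobolev_step(q, n)

print(Fraction(1, 1) / q == Fraction(1, 1) / p - Fraction(k) / n)   # True
```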
(ii) Now assume that $k > \frac{n}{p}$. Recall that in Morrey's inequality,
\[ \|u\|_{C^{0,\gamma}(\bar\Omega)} \le C \|u\|_{W^{1,p}(\Omega)}, \qquad \gamma = 1 - \frac{n}{p}, \tag{1.44} \]
we require that $p > n$; in our situation, this condition is not necessarily met. To remedy this, we can first decrease the order of differentiation to increase the power of summability; that is, using the result in the previous step,
\[ W^{k,p}(\Omega) \hookrightarrow W^{k-m,q}(\Omega), \qquad \frac{1}{q} = \frac{1}{p} - \frac{m}{n}, \]
or equivalently $q = \frac{np}{n - mp}$. More precisely, we will try to find a smallest integer $m$ such that
\[ q = \frac{np}{n - mp} > n, \]
that is,
\[ m > \frac{n}{p} - 1. \]
Obviously, when $\frac{n}{p}$ is not an integer, the smallest such integer is $m = \big[\frac{n}{p}\big]$. For this choice of $m$, we can apply Morrey's inequality (1.44) to $D^\alpha u$, with any $|\alpha| \le k - m - 1$, to obtain
\[ \|D^\alpha u\|_{C^{0,\gamma}(\bar\Omega)} \le C \|u\|_{W^{k,p}(\Omega)}, \]
or equivalently,
\[ \|u\|_{C^{k - [\frac{n}{p}] - 1,\,\gamma}(\bar\Omega)} \le C_1 C_2 \|u\|_{W^{k,p}(\Omega)}, \]
where
\[ \gamma = 1 - \frac{n}{q} = 1 + \Big[ \frac{n}{p} \Big] - \frac{n}{p}. \]
When $\frac{n}{p}$ is an integer, one instead takes $m = \frac{n}{p} - 1$, which gives $q = n$; by the borderline case of the Sobolev embedding, one more step lands in $W^{k-m-1,\tilde q}(\Omega)$ for every $\tilde q < \infty$, so in this case $q$ can be taken to be any number $> n$, which implies that $\gamma$ can be any positive number $< 1$. This completes the proof of the Theorem.
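The choice of $m$ and the resulting Hölder exponent in part (ii) can also be checked numerically; the following is a small sketch (the values $n = 5$, $p = 2$ are an illustrative choice with $n/p$ not an integer):

```python
import math

def morrey_data(n, p):
    """For the case n/p not an integer: the number of embedding steps
    m = [n/p], the resulting exponent q = np/(n - mp) > n, and the Holder
    exponent gamma = 1 - n/q = 1 + [n/p] - n/p, as in part (ii) above."""
    m = math.floor(n / p)
    q = n * p / (n - m * p)
    gamma = 1 - n / q
    return m, q, gamma

m, q, gamma = morrey_data(n=5, p=2)   # n/p = 2.5, so m = 2, q = 10, gamma = 1/2
print(m, q, gamma)
```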
1.5 Compact Embedding

In the previous section, we proved that $W^{1,p}(\Omega)$ is embedded into $L^{p^*}(\Omega)$ with $p^* = \frac{np}{n-p}$. Actually, if the region $\Omega$ is bounded, one can expect more. Due to the strict inequality on the exponent, for $q < p^*$, the embedding of $W^{1,p}(\Omega)$ into $L^q(\Omega)$ is not only still true, it is also compact, as stated below.

Theorem 1.5.1 (Rellich-Kondrachov Compact Embedding). Assume that $\Omega$ is a bounded open subset in $\mathbb{R}^n$ with $C^1$ boundary $\partial\Omega$, and suppose $1 \le p < n$. Then for each $1 \le q < p^*$, $W^{1,p}(\Omega)$ is compactly imbedded into $L^q(\Omega)$:
\[ W^{1,p}(\Omega) \hookrightarrow\hookrightarrow L^q(\Omega), \]
in the sense that

i) there is a constant $C$ such that
\[ \|u\|_{L^q(\Omega)} \le C \|u\|_{W^{1,p}(\Omega)}, \qquad \forall\, u \in W^{1,p}(\Omega), \tag{1.45} \]
and

ii) every bounded sequence in $W^{1,p}(\Omega)$ possesses a convergent subsequence in $L^q(\Omega)$.

Proof. The first part, inequality (1.45), is included in the general Embedding Theorem. What we need to show is that every bounded sequence in $W^{1,p}(\Omega)$ possesses a convergent subsequence in $L^q(\Omega)$. This can be derived immediately from the following

Lemma 1.5.1 Every bounded sequence in $W^{1,1}(\Omega)$ possesses a convergent subsequence in $L^1(\Omega)$.

We postpone the proof of the Lemma for a moment and see how it implies the Theorem. Let $\{u_k\}$ be a bounded sequence in $W^{1,p}(\Omega)$. Since $p \ge 1$ and $\Omega$ is bounded, $\{u_k\}$ is also bounded in $W^{1,1}(\Omega)$, and by Lemma 1.5.1, there is a subsequence (still denoted by $\{u_k\}$) that converges strongly to some $u_o$ in $L^1(\Omega)$. On the other hand, by the Sobolev embedding, $\{u_k\}$ is bounded in $L^{p^*}(\Omega)$. Now, for $q < p^*$, applying the Hölder (interpolation) inequality
\[ \|u_k - u_o\|_{L^q(\Omega)} \le \|u_k - u_o\|_{L^1(\Omega)}^{\theta}\, \|u_k - u_o\|_{L^{p^*}(\Omega)}^{1-\theta}, \qquad \frac{1}{q} = \theta + \frac{1-\theta}{p^*}, \]
we conclude immediately that $\{u_k\}$ converges strongly to $u_o$ in $L^q(\Omega)$. This proves the Theorem.

We now come back to prove the Lemma.
The Proof of Lemma 1.5.1. Assume that $\{u_k\}$ is a bounded sequence in $W^{1,1}(\Omega)$:
\[ \|u_k\|_{W^{1,1}(\Omega)} \le C < \infty, \qquad \text{for all } k = 1, 2, \cdots. \]
Based on the Extension Theorem, we may assume, without loss of generality, that $\Omega = \mathbb{R}^n$ and that the functions $\{u_k\}$ all have compact support in a bounded open set $G \subset \mathbb{R}^n$. Since every $W^{1,1}$ function can be approached by a sequence of smooth functions, we may also assume that each $u_k$ is smooth. We will show the strong convergence of this sequence in three steps, with the help of a family of mollifiers
\[ u_k^\epsilon(x) = \int j_\epsilon(y)\, u_k(x - y)\,dy. \]

Step 1. First, we show that
\[ u_k^\epsilon \to u_k \text{ in } L^1(G) \text{ as } \epsilon \to 0, \text{ uniformly in } k. \tag{1.47} \]
From the property of mollifiers, we have
\[ u_k^\epsilon(x) - u_k(x) = \int_{B_1(0)} j(y)\,[u_k(x - \epsilon y) - u_k(x)]\,dy = \int_{B_1(0)} j(y) \int_0^1 \frac{d}{dt} u_k(x - \epsilon t y)\,dt\,dy = -\epsilon \int_{B_1(0)} j(y) \int_0^1 Du_k(x - \epsilon t y) \cdot y\,dt\,dy. \]
Integrating with respect to $x$ and changing the order of integration, we obtain
\[ \|u_k^\epsilon - u_k\|_{L^1(G)} \le \epsilon \int_{B_1(0)} j(y) \int_0^1 \Big( \int_G |Du_k(x - \epsilon t y)|\,dx \Big) dt\,dy \le \epsilon \int_G |Du_k(z)|\,dz \le \epsilon\, C. \tag{1.48} \]
It follows that
\[ \|u_k^\epsilon - u_k\|_{L^1(G)} \to 0, \text{ as } \epsilon \to 0, \text{ uniformly in } k. \tag{1.49} \]

Step 2. Next, we prove that, for each fixed $\epsilon > 0$, the sequence $\{u_k^\epsilon\}$ is uniformly bounded and equicontinuous. In fact, for all $x \in \mathbb{R}^n$ and all $k = 1, 2, \cdots$, we have
\[ |u_k^\epsilon(x)| \le \int_{B_\epsilon(x)} j_\epsilon(x - y)\,|u_k(y)|\,dy \le \|j_\epsilon\|_{L^\infty(\mathbb{R}^n)} \|u_k\|_{L^1(G)} \le \frac{C}{\epsilon^n} < \infty. \tag{1.50} \]
Similarly,
\[ |Du_k^\epsilon(x)| \le \frac{C}{\epsilon^{n+1}} < \infty. \tag{1.51} \]
(1.50) and (1.51) imply that, for each fixed $\epsilon > 0$, the sequence $\{u_k^\epsilon\}$ is uniformly bounded and equicontinuous. Therefore, by the Arzela-Ascoli Theorem (see the Appendix), it possesses a subsequence (still denoted by $\{u_k^\epsilon\}$) which converges uniformly on $G$. \hfill (1.52)

Step 3. Finally, we extract diagonally a subsequence of $\{u_k\}$ which converges strongly in $L^1(\Omega)$. Choose $\epsilon$ to be $1, 1/2, 1/3, \cdots, 1/k, \cdots$ successively; for $\epsilon = 1/k$, denote the corresponding subsequence of $\{u_k\}$ that satisfies (1.52) by
\[ u_{k1}, u_{k2}, u_{k3}, \cdots. \]
Pick the diagonal subsequence $\{u_{ii}\} \subset \{u_k\}$. This is our desired subsequence, because for each fixed $k$,
\[ \|u_{ii} - u_{jj}\|_{L^1(G)} \le \|u_{ii} - u_{ii}^{1/k}\|_{L^1(G)} + \|u_{ii}^{1/k} - u_{jj}^{1/k}\|_{L^1(G)} + \|u_{jj}^{1/k} - u_{jj}\|_{L^1(G)} \to 0, \]
due to the fact that each of the norms on the right hand side $\to 0$; in particular,
\[ \lim_{j, i \to \infty} \|u_{jj} - u_{ii}\|_{L^1(G)} = 0. \]
This completes the proof of the Lemma and hence the Theorem.

1.6 Other Basic Inequalities

1.6.1 Poincaré's Inequality

For functions that vanish on the boundary, we have

Theorem 1.6.1 (Poincaré's Inequality I). Assume $\Omega$ is bounded and suppose $u \in W_0^{1,p}(\Omega)$ for some $1 \le p \le \infty$. Then
\[ \|u\|_{L^p(\Omega)} \le C \|Du\|_{L^p(\Omega)}. \tag{1.53} \]
Remark 1.6.1 i) Based on this inequality, one can take an equivalent norm of $W_0^{1,p}(\Omega)$ as
\[ \|u\|_{W_0^{1,p}(\Omega)} = \|Du\|_{L^p(\Omega)}. \]
ii) Just for this theorem and its proof, for convenience, we abbreviate $\|u\|_{L^p(\Omega)}$ as $\|u\|_p$. For $p < n$, one can easily prove the inequality by using the Sobolev inequality $\|u\|_{L^p} \le C\|Du\|_{L^q}$ with $p = \frac{nq}{n-q}$ and the Hölder inequality; however, the one we present in the following is a unified proof that works for both this and the next theorem.

Proof. Suppose inequality (1.53) does not hold. Then there exists a sequence $\{u_k\} \subset W_0^{1,p}(\Omega)$ such that
\[ \|Du_k\|_p = 1, \quad \text{while } \|u_k\|_p \to \infty. \]
Since $C_0^\infty(\Omega)$ is dense in $W_0^{1,p}(\Omega)$, we may assume that $\{u_k\} \subset C_0^\infty(\Omega)$. Let $v_k = \frac{u_k}{\|u_k\|_p}$. Then
\[ \|v_k\|_p = 1 \quad \text{and} \quad \|Dv_k\|_p \to 0. \tag{1.54} \]
Consequently, $\{v_k\}$ is bounded in $W^{1,p}(\Omega)$, and hence possesses a subsequence (still denoted by $\{v_k\}$) that converges weakly to some $v_o \in W_0^{1,p}(\Omega)$. From the compact embedding results in the previous section, $\{v_k\}$ converges strongly in $L^p(\Omega)$ to $v_o$, and therefore $\|v_o\|_p = 1$. On the other hand, for each $\phi \in C_0^\infty(\Omega)$,
\[ \int_\Omega v_o\, \phi_{x_i}\,dx = \lim_{k \to \infty} \int_\Omega v_k\, \phi_{x_i}\,dx = -\lim_{k \to \infty} \int_\Omega v_{k, x_i}\, \phi\,dx = 0. \]
It follows that
\[ Dv_o(x) = 0, \quad \text{a.e. in } \Omega. \]
Thus $v_o$ is a constant, and taking into account that $v_o \in W_0^{1,p}(\Omega)$, we must have $v_o \equiv 0$. This contradicts (1.54) and therefore completes the proof of the Theorem.

For functions in $W^{1,p}(\Omega)$, which may not be zero on the boundary, we have another version of Poincaré's inequality.

Theorem 1.6.2 (Poincaré's Inequality II). Let $\Omega$ be a bounded, connected, open subset in $\mathbb{R}^n$ with $C^1$ boundary. Assume $1 \le p \le \infty$, and let $\bar u$ be the average of $u$ on $\Omega$. Then there exists a constant $C = C(n, p, \Omega)$ such that
\[ \|u - \bar u\|_p \le C \|Du\|_p, \qquad \forall\, u \in W^{1,p}(\Omega). \tag{1.55} \]
The proof of this Theorem is similar to the previous one. Instead of letting $v_k = \frac{u_k}{\|u_k\|_p}$, we choose
\[ v_k = \frac{u_k - \bar u_k}{\|u_k - \bar u_k\|_p}. \]

Remark 1.6.2 The connectedness of $\Omega$ is essential in this version of the inequality. A simple counterexample is when $n = 1$, $\Omega = [0, 1] \cup [2, 3]$, and
\[ u(x) = \begin{cases} -1 & \text{for } x \in [0, 1], \\ \ \ 1 & \text{for } x \in [2, 3]. \end{cases} \]

1.6.2 The Classical Hardy-Littlewood-Sobolev Inequality

Theorem 1.6.3 (Hardy-Littlewood-Sobolev Inequality). Let $0 < \lambda < n$ and let $s, r > 1$ be such that
\[ \frac{1}{r} + \frac{1}{s} + \frac{\lambda}{n} = 2. \]
Assume that $f \in L^r(\mathbb{R}^n)$ and $g \in L^s(\mathbb{R}^n)$. Then
\[ \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} f(x)\,|x - y|^{-\lambda}\, g(y)\,dx\,dy \le C(n, s, \lambda)\, \|f\|_r\, \|g\|_s, \tag{1.56} \]
where $\|f\|_r := \|f\|_{L^r(\mathbb{R}^n)}$ and
\[ C(n, s, \lambda) = \frac{n\,|B^n|^{\lambda/n}}{(n - \lambda)\,rs} \left\{ \Big( \frac{\lambda/n}{1 - 1/r} \Big)^{\lambda/n} + \Big( \frac{\lambda/n}{1 - 1/s} \Big)^{\lambda/n} \right\}, \]
with $|B^n|$ being the volume of the unit ball in $\mathbb{R}^n$.

Proof. Without loss of generality, we may assume that both $f$ and $g$ are nonnegative and $\|f\|_r = 1 = \|g\|_s$. Let
\[ \chi_G(x) = \begin{cases} 1 & \text{if } x \in G, \\ 0 & \text{if } x \notin G, \end{cases} \]
be the characteristic function of the set $G$. Then one can see that
\[ f(x) = \int_0^\infty \chi_{\{f > a\}}(x)\,da, \tag{1.57} \]
\[ g(x) = \int_0^\infty \chi_{\{g > b\}}(x)\,db, \tag{1.58} \]
\[ |x|^{-\lambda} = \lambda \int_0^\infty c^{-\lambda - 1}\, \chi_{\{|x| < c\}}(x)\,dc. \tag{1.59} \]
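The layer-cake representation (1.57) is easy to verify numerically; the following sketch (assuming NumPy; the sample function and grid sizes are illustrative) approximates the $a$-integral of the indicator by a Riemann sum and compares it with $f$ pointwise:

```python
import numpy as np

# Numerical check of the layer-cake formula (1.57):
# for f >= 0,  f(x) = int_0^infty chi_{f > a}(x) da.
x = np.linspace(0.0, 1.0, 201)
f = np.abs(np.sin(7 * x)) + x**2              # a sample nonnegative function

a = np.linspace(0.0, f.max(), 20_001)         # levels a in [0, max f]
da = a[1] - a[0]
# chi_{f > a} as a (levels, points) boolean array, summed over levels
layer_cake = (f[None, :] > a[:, None]).sum(axis=0) * da

print(np.max(np.abs(layer_cake - f)))         # small discretization error
```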
To see the last identity, one may first write
\[ |x|^{-\lambda} = \int_0^\infty \chi_{\{|x|^{-\lambda} > \tilde c\}}(x)\,d\tilde c, \]
and then let $\tilde c = c^{-\lambda}$.

Substituting (1.57), (1.58), and (1.59) into the left hand side of (1.56), we have
\[ I := \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} f(x)\,|x - y|^{-\lambda}\, g(y)\,dx\,dy = \lambda \int_0^\infty\int_0^\infty\int_0^\infty c^{-\lambda - 1}\, I(a, b, c)\,dc\,db\,da, \tag{1.61} \]
where
\[ I(a, b, c) = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \chi_{\{f > a\}}(x)\, \chi_{\{|x - y| < c\}}(x - y)\, \chi_{\{g > b\}}(y)\,dx\,dy. \]
Let $v(a)$ and $w(b)$ be, respectively, the measures of the sets $\{x \mid f(x) > a\}$ and $\{y \mid g(y) > b\}$:
\[ v(a) = \int_{\mathbb{R}^n} \chi_{\{f > a\}}(x)\,dx, \qquad w(b) = \int_{\mathbb{R}^n} \chi_{\{g > b\}}(y)\,dy, \]
and let
\[ u(c) = |B^n|\, c^n, \tag{1.60} \]
the volume of the ball of radius $c$. Then we can express the norms as
\[ \|f\|_r^r = r \int_0^\infty a^{r-1} v(a)\,da = 1 \quad \text{and} \quad \|g\|_s^s = s \int_0^\infty b^{s-1} w(b)\,db = 1. \]
It is easy to see that
\[ I(a, b, c) \le \min\{\, u(c)\,w(b),\ u(c)\,v(a),\ v(a)\,w(b) \,\}; \]
for instance,
\[ I(a, b, c) \le \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \chi_{\{|x - y| < c\}}(x - y)\, \chi_{\{g > b\}}(y)\,dx\,dy = u(c) \int_{\mathbb{R}^n} \chi_{\{g > b\}}(y)\,dy = u(c)\,w(b), \]
and similarly one can show that $I(a, b, c)$ is bounded above by the other pairs. We integrate with respect to $c$ first and arrive at
\[ \int_0^\infty c^{-\lambda - 1}\, I(a, b, c)\,dc \le \int_{u(c) \le v(a)} c^{-\lambda - 1}\, w(b)\, u(c)\,dc + \int_{u(c) > v(a)} c^{-\lambda - 1}\, w(b)\, v(a)\,dc. \tag{1.62} \]
By virtue of (1.60), the right hand side of (1.62) equals
\[ w(b)\,|B^n| \int_0^{(v(a)/|B^n|)^{1/n}} c^{n - \lambda - 1}\,dc + w(b)\,v(a) \int_{(v(a)/|B^n|)^{1/n}}^{\infty} c^{-\lambda - 1}\,dc = \frac{|B^n|^{\lambda/n}}{n - \lambda}\, w(b)\, v(a)^{1 - \lambda/n} + \frac{|B^n|^{\lambda/n}}{\lambda}\, w(b)\, v(a)^{1 - \lambda/n} = \frac{n\,|B^n|^{\lambda/n}}{\lambda(n - \lambda)}\, w(b)\, v(a)^{1 - \lambda/n}. \tag{1.63} \]
Exchanging $v(a)$ with $w(b)$ in (1.62), we also obtain
\[ \int_0^\infty c^{-\lambda - 1}\, I(a, b, c)\,dc \le \frac{n\,|B^n|^{\lambda/n}}{\lambda(n - \lambda)}\, v(a)\, w(b)^{1 - \lambda/n}. \tag{1.64} \]
In view of (1.61), (1.63), and (1.64), splitting the $b$-integral into two parts, one from $0$ to $a^{r/s}$ and the other from $a^{r/s}$ to $\infty$, we derive
\[ I \le \frac{n\,|B^n|^{\lambda/n}}{n - \lambda} \left\{ \int_0^\infty v(a) \int_0^{a^{r/s}} w(b)^{1 - \lambda/n}\,db\,da + \int_0^\infty \int_{a^{r/s}}^{\infty} v(a)^{1 - \lambda/n}\, w(b)\,db\,da \right\}. \tag{1.65} \]
To estimate the first integral in (1.65), we use the Hölder inequality with $m = (s - 1)(1 - \lambda/n)$:
\[ \int_0^{a^{r/s}} w(b)^{1 - \lambda/n}\, b^{m}\, b^{-m}\,db \le \Big( \int_0^{a^{r/s}} w(b)\, b^{s-1}\,db \Big)^{1 - \lambda/n} \Big( \int_0^{a^{r/s}} b^{-mn/\lambda}\,db \Big)^{\lambda/n} \le \Big( \frac{\lambda}{n - s(n - \lambda)} \Big)^{\lambda/n} \Big( \int_0^{a^{r/s}} w(b)\, b^{s-1}\,db \Big)^{1 - \lambda/n} a^{r-1}, \tag{1.66} \]
since $mn/\lambda < 1$. It follows that the first integral in (1.65) is bounded above by
\[ \Big( \frac{\lambda}{n - s(n - \lambda)} \Big)^{\lambda/n} \int_0^\infty v(a)\, a^{r-1}\,da\, \Big( \int_0^\infty w(b)\, b^{s-1}\,db \Big)^{1 - \lambda/n} = \frac{1}{rs} \Big( \frac{\lambda/n}{1 - 1/r} \Big)^{\lambda/n}. \tag{1.67} \]
To estimate the second integral in (1.65), we first rewrite it as
\[ \int_0^\infty w(b) \int_0^{b^{s/r}} v(a)^{1 - \lambda/n}\,da\,db; \]
then an analogous computation shows that it is bounded above by
\[ \frac{1}{rs} \Big( \frac{\lambda/n}{1 - 1/s} \Big)^{\lambda/n}. \tag{1.68} \]
Now the desired Hardy-Littlewood-Sobolev inequality follows directly from (1.65), (1.67), and (1.68).

Theorem 1.6.4 (An equivalent form of the Hardy-Littlewood-Sobolev inequality). Let $0 < \alpha < n$ and $\frac{n}{n - \alpha} < p < \infty$. For $g \in L^{\frac{np}{n + \alpha p}}(\mathbb{R}^n)$, define
\[ Tg(x) = \int_{\mathbb{R}^n} |x - y|^{\alpha - n}\, g(y)\,dy. \]
Then
\[ \|Tg\|_p \le C(n, p, \alpha)\, \|g\|_{\frac{np}{n + \alpha p}}. \tag{1.69} \]

Proof. By the classical Hardy-Littlewood-Sobolev inequality (with $\lambda = n - \alpha$), we have
\[ \langle f, Tg \rangle = \langle Tf, g \rangle \le C(n, s, \alpha)\, \|f\|_r\, \|g\|_s, \]
where $\langle \cdot, \cdot \rangle$ is the $L^2$ inner product. Consequently, by duality,
\[ \|Tg\|_p = \sup_{\|f\|_r = 1} \langle f, Tg \rangle \le C(n, s, \alpha)\, \|g\|_s, \]
where
\[ \frac{1}{p} + \frac{1}{r} = 1 \quad \text{and} \quad \frac{1}{r} + \frac{1}{s} = \frac{n + \alpha}{n}. \]
Solving for $s$, we arrive at
\[ s = \frac{np}{n + \alpha p}. \]
This completes the proof of the Theorem.

Remark 1.6.3 To see the relation between inequality (1.69) and the Sobolev inequality, let's rewrite it as
\[ \|Tg\|_{\frac{nq}{n - \alpha q}} \le C\, \|g\|_q, \tag{1.70} \]
with
\[ 1 < q := \frac{np}{n + \alpha p} < \frac{n}{\alpha}. \]
Let $u = Tg$. Then one can show (see [CLO]) that
\[ (-\Delta)^{\alpha/2} u = g. \]
Now inequality (1.70) becomes the Sobolev one:
\[ \|u\|_{\frac{nq}{n - \alpha q}} \le C\, \|(-\Delta)^{\alpha/2} u\|_q. \]
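The exponent algebra in the proof of Theorem 1.6.4 is easy to get wrong; a minimal sketch with exact rational arithmetic checks that $s = \frac{np}{n + \alpha p}$ really satisfies the HLS relation $\frac{1}{r} + \frac{1}{s} + \frac{\lambda}{n} = 2$ with $\lambda = n - \alpha$ and $r$ the conjugate of $p$ (the sample values of $n$, $\alpha$, $p$ are arbitrary choices in the admissible range):

```python
from fractions import Fraction

def dual_exponent(n, alpha, p):
    """The exponent s = np/(n + alpha p) from Theorem 1.6.4, obtained by
    solving 1/r + 1/s + (n - alpha)/n = 2 with 1/r = 1 - 1/p."""
    return n * p / (n + alpha * p)

n, alpha, p = Fraction(4), Fraction(2), Fraction(3)   # n/(n - alpha) = 2 < p
s = dual_exponent(n, alpha, p)
r = p / (p - 1)                                       # conjugate of p
lam = n - alpha

print(Fraction(1, 1) / r + Fraction(1, 1) / s + lam / n == 2)   # True
print(1 < s < n / alpha)                                        # True
```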
2 Existence of Weak Solutions
2.1 Second Order Elliptic Operators
2.2 Weak Solutions
2.3 Methods of Linear Functional Analysis
    2.3.1 Linear Equations
    2.3.2 Some Basic Principles in Functional Analysis
    2.3.3 Existence of Weak Solutions
2.4 Variational Methods
    2.4.1 Semilinear Equations
    2.4.2 Calculus of Variations
    2.4.3 Existence of Minimizers
    2.4.4 Existence of Minimizers under Constraints
    2.4.5 Minimax Critical Points
    2.4.6 Existence of a Minimax via the Mountain Pass Theorem
Many physical models come with natural variational structures, where one can minimize a functional, usually an energy $E(u)$. Then the minimizers are weak solutions of the related partial differential equation $E'(u) = 0$. For example, for an open bounded domain $\Omega \subset \mathbb{R}^n$, and for $f \in L^2(\Omega)$, the functional
\[ E(u) = \frac{1}{2} \int_\Omega |\nabla u|^2\,dx - \int_\Omega f u\,dx \]
possesses a minimizer among all $u \in H_0^1(\Omega)$ (try to show this by using the knowledge from the previous chapter). As we will see in this chapter, the minimizer is a weak solution of the Dirichlet boundary value problem
\[ \begin{cases} -\Delta u = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega. \end{cases} \]
In practice, it is often much easier to obtain a weak solution than a classical one. One of the seven millennium problems posed by the Clay Mathematics Institute is to show that some suitable weak solutions of the Navier-Stokes equations with good initial values are in fact classical solutions for all time.

In this chapter, we consider the existence of weak solutions of second order elliptic equations, both linear and semilinear. For linear equations, we will use the Lax-Milgram Theorem to show the existence of weak solutions. For semilinear equations, we will introduce variational methods: we consider the corresponding functionals in proper Hilbert spaces and seek weak solutions of the equations as the critical points of the functionals. We will use examples to show how to find a minimizer, a minimizer under constraint, and a minimax critical point. The well-known Mountain Pass Theorem is introduced and applied to show the existence of such a minimax. To show that a weak solution possesses the desired differentiability, so that it is actually a classical one, is the task of regularity theory, which will be studied in Chapter 3.
2.1 Second Order Elliptic Operators
Let $\Omega$ be an open bounded set in $\mathbb{R}^n$. Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. Consider the second order partial differential operator $L$, either in the divergence form
\[ Lu = -\sum_{i,j=1}^{n} \big( a_{ij}(x)\, u_{x_i} \big)_{x_j} + \sum_{i=1}^{n} b_i(x)\, u_{x_i} + c(x)\, u, \tag{2.1} \]
or in the nondivergence form
\[ Lu = -\sum_{i,j=1}^{n} a_{ij}(x)\, u_{x_i x_j} + \sum_{i=1}^{n} b_i(x)\, u_{x_i} + c(x)\, u. \tag{2.2} \]
We say that $L$ is uniformly elliptic if there exists a constant $\delta > 0$ such that
\[ \sum_{i,j=1}^{n} a_{ij}(x)\, \xi_i \xi_j \ge \delta |\xi|^2 \]
for almost every $x \in \Omega$ and for all $\xi \in \mathbb{R}^n$. In the simplest case, when $a_{ij}(x) = \delta_{ij}$, $b_i(x) \equiv 0$, and $c(x) \equiv 0$, $L$ reduces to $-\Delta$; and, as we will see later, the solutions of the general second order elliptic equation $Lu = 0$ share many similar properties with the harmonic functions.
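For constant symmetric coefficients, uniform ellipticity is exactly the statement that the smallest eigenvalue of the matrix $(a_{ij})$ is at least $\delta$. A minimal sketch (assuming NumPy; the sample matrix is an arbitrary illustration):

```python
import numpy as np

def is_uniformly_elliptic(A, delta):
    """For a constant symmetric coefficient matrix A = (a_ij), the condition
    sum a_ij xi_i xi_j >= delta |xi|^2 for all xi is equivalent to
    lambda_min(A) >= delta."""
    A = np.asarray(A, dtype=float)
    assert np.allclose(A, A.T), "requires a_ij = a_ji"
    return np.linalg.eigvalsh(A).min() >= delta

# Sample 2x2 matrix with eigenvalues 1 and 3: uniformly elliptic with
# delta = 1, but not with delta = 1.5.
A = [[2.0, 1.0], [1.0, 2.0]]
print(is_uniformly_elliptic(A, 1.0), is_uniformly_elliptic(A, 1.5))  # True False
```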
2.2 Weak Solutions
Let the operator $L$ be in the divergence form as defined in (2.1). Consider the second order elliptic equation with Dirichlet boundary condition
\[ \begin{cases} Lu = f, & x \in \Omega, \\ u = 0, & x \in \partial\Omega. \end{cases} \tag{2.3} \]
When $f = f(x)$, the equation is linear; and when $f = f(x, u)$, it is semilinear. Assume that $f(x)$ is in $L^2(\Omega)$, or, for a given solution $u$, that $f(x, u)$ is in $L^2(\Omega)$. Multiplying both sides of the equation by a test function $v \in C_0^\infty(\Omega)$ and integrating by parts on $\Omega$, we have
\[ \int_\Omega \Big( \sum_{i,j=1}^{n} a_{ij}(x)\, u_{x_i} v_{x_j} + \sum_{i=1}^{n} b_i(x)\, u_{x_i} v + c(x)\, u v \Big)\,dx = \int_\Omega f v\,dx. \tag{2.4} \]
There are no boundary terms because both $u$ and $v$ vanish on $\partial\Omega$. One can see that the integrals in (2.4) are well defined if $u$, $v$, and their first derivatives are square integrable. Write $H^1(\Omega) = W^{1,2}(\Omega)$, and let $H_0^1(\Omega)$ be the completion of $C_0^\infty(\Omega)$ in the norm of $H^1(\Omega)$:
\[ \|u\| = \Big( \int_\Omega (|Du|^2 + u^2)\,dx \Big)^{1/2}. \]
Then (2.4) remains true for any $v \in H_0^1(\Omega)$. One can easily see that $H_0^1(\Omega)$ is also the most appropriate space for $u$. On the other hand, if $u \in H_0^1(\Omega)$ satisfies (2.4) and is second order differentiable, then through integration by parts we have
\[ \int_\Omega (Lu - f)\, v\,dx = 0, \qquad \forall\, v \in H_0^1(\Omega). \]
This implies that
\[ Lu = f, \quad \text{a.e. in } \Omega, \]
and therefore $u$ is a classical solution of (2.3). The above observation naturally leads to

Definition 2.2.1 A weak solution of problem (2.3) is a function $u \in H_0^1(\Omega)$ that satisfies (2.4) for all $v \in H_0^1(\Omega)$.

Remark 2.2.1 Actually, here the condition on $f$ can be relaxed a little bit. By the Sobolev embedding
\[ H^1(\Omega) \hookrightarrow L^{\frac{2n}{n-2}}(\Omega), \]
we see that $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$, and hence, by duality, $f$ only needs to be in $L^{\frac{2n}{n+2}}(\Omega)$. In the semilinear case, if
\[ |f(x, u)| \le C_1 + C_2 |u|^{\frac{n+2}{n-2}} \]
(for instance $f(x, u) = u^p$ for any $p \le \frac{n+2}{n-2}$), then for any $u \in H^1(\Omega)$, $f(x, u)$ is in $L^{\frac{2n}{n+2}}(\Omega)$. For more details, please see [Sch1].

2.3 Methods of Linear Functional Analysis

2.3.1 Linear Equations

For the Dirichlet problem with a linear equation,
\[ \begin{cases} Lu = f(x), & x \in \Omega, \\ u = 0, & x \in \partial\Omega, \end{cases} \tag{2.5} \]
to find a weak solution, it is equivalent to show that there exists a function $u \in H_0^1(\Omega)$ such that
\[ B[u, v] = \langle f, v \rangle, \qquad \forall\, v \in H_0^1(\Omega). \tag{2.6} \]
Here we introduce the bilinear form
\[ B[u, v] = \int_\Omega \Big( \sum_{i,j=1}^{n} a_{ij}(x)\, u_{x_i} v_{x_j} + \sum_{i=1}^{n} b_i(x)\, u_{x_i} v + c(x)\, u v \Big)\,dx, \]
defined on $H_0^1(\Omega)$, which is the left hand side of (2.4) in the definition of the weak solutions; while the right hand side of (2.4) may be regarded as a linear functional on $H_0^1(\Omega)$:
\[ \langle f, v \rangle := \int_\Omega f v\,dx. \]
This can be realized by representation type theorems in Linear Functional Analysis, such as the Riesz Representation Theorem or the Lax-Milgram Theorem, which will be used to obtain the existence of weak solutions. To this end, we first list briefly some basic definitions and principles of functional analysis.

2.3.2 Some Basic Principles in Functional Analysis

Definition 2.3.1 A Banach space $X$ is a complete, normed linear space with norm $\|\cdot\|$ that satisfies

(i) $\|u + v\| \le \|u\| + \|v\|$, $\forall\, u, v \in X$;

(ii) $\|\lambda u\| = |\lambda|\, \|u\|$, $\forall\, u \in X$, $\lambda \in \mathbb{R}$;

(iii) $\|u\| = 0$ if and only if $u = 0$.
The examples of Banach spaces are: (i) $L^p(\Omega)$ with norm $\|u\|_{L^p(\Omega)}$; (ii) Sobolev spaces $W^{k,p}(\Omega)$ with norm $\|u\|_{W^{k,p}(\Omega)}$; and (iii) Hölder spaces $C^{0,\gamma}(\Omega)$ with norm $\|u\|_{C^{0,\gamma}(\Omega)}$.

Definition 2.3.2 A Hilbert space $H$ is a Banach space endowed with an inner product $(\cdot, \cdot)$ which generates the norm $\|u\| := (u, u)^{1/2}$, with the following properties:

(i) $(u, u) \ge 0$, and $(u, u) = 0$ if and only if $u = 0$;

(ii) the mapping $u \mapsto (u, v)$ is linear for each $v \in H$;

(iii) $(u, v) = (v, u)$.

Examples of Hilbert spaces are (i) $L^2(\Omega)$ with inner product
\[ (u, v) = \int_\Omega u(x)\, v(x)\,dx, \]
and (ii) the Sobolev spaces $H^1(\Omega)$ or $H_0^1(\Omega)$ with inner product
\[ (u, v) = \int_\Omega \big[ u(x)\, v(x) + Du(x) \cdot Dv(x) \big]\,dx. \]

Definition 2.3.3 (i) A mapping $A : X \to Y$ is a linear operator if
\[ A[au + bv] = a A[u] + b A[v], \qquad \forall\, u, v \in X, \ a, b \in \mathbb{R}. \]
(ii) A linear operator $A : X \to Y$ is bounded provided
\[ \|A\| := \sup_{\|u\|_X \le 1} \|Au\|_Y < \infty. \]
(iii) A bounded linear operator $f : X \to \mathbb{R}$ is called a bounded linear functional on $X$. The collection of all bounded linear functionals on $X$, denoted by $X^*$, is called the dual space of $X$. We use $\langle f, u \rangle := f[u]$ to denote the pairing of $X^*$ and $X$.

Lemma 2.3.1 (Projection). Let $M$ be a closed subspace of $H$. Then for every $u \in H$, there is a $v \in M$ such that
\[ (u - v, w) = 0, \qquad \forall\, w \in M. \tag{2.7} \]
In other words, $H = M \oplus M^\perp$, where $M^\perp := \{ u \in H \mid u \perp M \}$.
Here $v$ is the projection of $u$ on $M$. The readers may find the proof of the Lemma in the book [Sch] (Theorem 13), or work it out as in the following exercise.

Exercise 2.3.1 i) Given any $u \in H$ with $u \notin M$, show that there exists a $v_o \in M$ such that
\[ \|v_o - u\| = \inf_{v \in M} \|v - u\|. \]
Hint: show that a minimizing sequence is a Cauchy sequence.

ii) Show that $(u - v_o, w) = 0$, $\forall\, w \in M$.

Based on this Projection Lemma, one can derive the following

Theorem 2.3.1 (Riesz Representation Theorem). Let $H$ be a real Hilbert space with inner product $(\cdot, \cdot)$, and let $H^*$ be its dual space. Then for each $u^* \in H^*$, there exists a unique element $u \in H$ such that
\[ \langle u^*, v \rangle = (u, v), \qquad \forall\, v \in H. \]
The mapping $u^* \mapsto u$ is a linear isomorphism from $H^*$ onto $H$; hence $H^*$ can be canonically identified with $H$.

This is a well-known theorem in functional analysis. The readers may see the book [Sch] for its proof, or do it as the following exercise.

Exercise 2.3.2 Prove the Riesz Representation Theorem in 4 steps. Let $T$ be a linear operator from $H$ to $\mathbb{R}$ with $T \not\equiv 0$. Show that

i) $K(T) := \{ u \in H \mid Tu = 0 \}$ is closed;

ii) $H = K(T) \oplus K(T)^\perp$;

iii) the dimension of $K(T)^\perp$ is 1;

iv) there exists $v \in K(T)^\perp$ such that $Tv = 1$, and for this $v$,
\[ Tu = \frac{(u, v)}{\|v\|^2}, \qquad \forall\, u \in H. \]

From the Riesz Representation Theorem, one can prove

Theorem 2.3.2 (Lax-Milgram Theorem). Let $H$ be a real Hilbert space with norm $\|\cdot\|$, and let $B : H \times H \to \mathbb{R}$ be a bilinear mapping. Assume that there exist constants $M, m > 0$ such that

(i) $|B[u, v]| \le M \|u\|\, \|v\|$, $\forall\, u, v \in H$, and

(ii) $m \|u\|^2 \le B[u, u]$, $\forall\, u \in H$.

Then for each bounded linear functional $f$ on $H$, there exists a unique element $u \in H$ such that
\[ B[u, v] = \langle f, v \rangle, \qquad \forall\, v \in H. \]
Proof. In the case where $B[u, v]$ is symmetric, i.e. $B[u, v] = B[v, u]$, conditions (i) and (ii) ensure that $B[u, v]$ can be made an inner product on $H$, and the conclusion of the theorem follows directly from the Riesz Representation Theorem. In the case where $B[u, v]$ may not be symmetric, we proceed as follows.

For each fixed element $u \in H$, $B[u, \cdot]$ is a bounded linear functional on $H$, and hence, by the Riesz Representation Theorem, there exists a unique element $\tilde u \in H$ such that
\[ B[u, v] = (\tilde u, v), \qquad \forall\, v \in H. \tag{2.8} \]
Denote this mapping from $u$ to $\tilde u$ by $A$:
\[ A : H \to H, \qquad Au = \tilde u, \quad \text{i.e.} \quad B[u, v] = (Au, v), \qquad \forall\, v \in H. \tag{2.9} \]
Similarly, for the given bounded linear functional $f$ on $H$, there is an $\tilde f \in H$ such that $\langle f, v \rangle = (\tilde f, v)$ for all $v \in H$. It therefore suffices to show that there exists a unique $u \in H$ such that
\[ Au = \tilde f. \tag{2.10} \]
We carry this out in two parts: in part (a), we show that $A$ is a bounded linear operator, and in part (b), we prove that it is one-to-one and onto.

(a) For any $u_1, u_2 \in H$, any $a_1, a_2 \in \mathbb{R}$, and each $v \in H$, by (2.9),
\[ (A(a_1 u_1 + a_2 u_2), v) = B[a_1 u_1 + a_2 u_2, v] = a_1 B[u_1, v] + a_2 B[u_2, v] = a_1 (Au_1, v) + a_2 (Au_2, v) = (a_1 Au_1 + a_2 Au_2, v). \]
Hence $A(a_1 u_1 + a_2 u_2) = a_1 Au_1 + a_2 Au_2$, i.e. $A$ is linear. Moreover, by condition (i),
\[ \|Au\|^2 = (Au, Au) = B[u, Au] \le M \|u\|\, \|Au\|, \]
and consequently
\[ \|Au\| \le M \|u\|, \qquad \forall\, u \in H. \]
This verifies that $A$ is bounded.

(b) To see that $A$ is one-to-one, we apply condition (ii):
\[ m \|u\|^2 \le B[u, u] = (Au, u) \le \|Au\|\, \|u\|, \tag{2.11} \]
and it follows that
\[ m \|u\| \le \|Au\|, \qquad \forall\, u \in H. \tag{2.12} \]
Now if $Au = Av$, then applying (2.12) to $u - v$ gives
\[ m \|u - v\| \le \|Au - Av\| = 0. \]
This implies $u = v$. Hence $A$ is one-to-one.

From (2.12), one can also see that $R(A)$, the range of $A$ in $H$, is closed. In fact, assume that $v_k \in R(A)$ and $v_k \to v$ in $H$. Then for each $v_k$, there is a $u_k$ such that $Au_k = v_k$. Inequality (2.12) implies that
\[ \|u_i - u_j\| \le \frac{1}{m} \|v_i - v_j\|, \]
so $\{u_k\}$ is a Cauchy sequence in $H$, and hence $u_k \to u$ for some $u \in H$. Consequently, $Au_k \to Au$ and $v = Au \in R(A)$. Therefore $R(A)$ is closed.

To show that $R(A) = H$, we need the Projection Lemma 2.3.1. Since $R(A)$ is closed, for any $u \in H$, by the Projection Lemma, there exists a $v \in R(A)$ such that $(u - v, z) = 0$ for all $z \in R(A)$; in particular,
\[ 0 = (u - v, A(u - v)) = B[u - v, u - v] \ge m \|u - v\|^2. \]
This implies $u = v$, i.e. $u \in R(A)$, and hence $H \subset R(A)$. Therefore $R(A) = H$. This completes the proof of the Theorem.
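The structure of the proof is already visible in finite dimensions, where the Lax-Milgram setting reduces to linear algebra. A minimal sketch (assuming NumPy; the matrix and right hand side are arbitrary illustrations): on $H = \mathbb{R}^3$, take $B[u, v] = v^{T} A u$ with a nonsymmetric $A$; coercivity holds exactly when the symmetric part of $A$ is positive definite, and the unique $u$ with $B[u, v] = (f, v)$ for all $v$ solves $Au = f$.

```python
import numpy as np

# Finite-dimensional Lax-Milgram: B[u, v] = v^T A u on R^3, with A
# nonsymmetric (as B is when b_i != 0).  Coercivity B[u, u] >= m |u|^2
# holds with m = lambda_min((A + A^T)/2), since u^T A u = u^T ((A+A^T)/2) u.
A = np.array([[3.0,  1.0, 0.0],
              [-1.0, 2.0, 0.5],
              [0.0, -0.5, 2.5]])
m = np.linalg.eigvalsh((A + A.T) / 2).min()
assert m > 0                         # condition (ii) of the theorem

f = np.array([1.0, 0.0, -2.0])
u = np.linalg.solve(A, f)            # the unique solution of B[u, v] = (f, v)

residual = np.max(np.abs(A @ u - f))
print(m > 0, residual < 1e-12)
```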
2.3.3 Existence of Weak Solutions

Now we explore in what situations our second order elliptic operator $L$ satisfies the conditions (i) and (ii) required by the Lax-Milgram Theorem.

Let $H = H_0^1(\Omega)$ be our Hilbert space with inner product
\[ (u, v) := \int_\Omega (Du \cdot Dv + uv)\,dx. \]
By the Poincaré inequality, we can use the equivalent inner product
\[ (u, v)_H := \int_\Omega Du \cdot Dv\,dx \]
and the corresponding norm
\[ \|u\|_H = \Big( \int_\Omega |Du|^2\,dx \Big)^{1/2}. \]
For any $v \in H$, by the Sobolev embedding, $v$ is in $L^{\frac{2n}{n-2}}(\Omega)$, and by virtue of the Hölder inequality,
\[ \langle f, v \rangle := \int_\Omega f(x)\, v(x)\,dx \le \Big( \int_\Omega |f(x)|^{\frac{2n}{n+2}}\,dx \Big)^{\frac{n+2}{2n}} \Big( \int_\Omega |v(x)|^{\frac{2n}{n-2}}\,dx \Big)^{\frac{n-2}{2n}}; \]
hence we only need to assume that $f \in L^{\frac{2n}{n+2}}(\Omega)$.

In the simplest case, when $L = -\Delta$, we have
\[ B[u, v] = \int_\Omega Du \cdot Dv\,dx. \]
Obviously, condition (i) is met:
\[ |B[u, v]| \le \|u\|_H\, \|v\|_H. \]
Condition (ii) is also satisfied:
\[ B[u, u] = \|u\|_H^2. \]
Now, applying the Lax-Milgram Theorem, we conclude that there exists a unique $u \in H$ such that $B[u, v] = \langle f, v \rangle$ for all $v \in H$. We have actually proved

Theorem 2.3.3 For every $f(x)$ in $L^{\frac{2n}{n+2}}(\Omega)$, the Dirichlet problem
\[ \begin{cases} -\Delta u = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega, \end{cases} \]
has a unique weak solution $u$ in $H_0^1(\Omega)$.

In general, we have
\[ B[u, v] = \int_\Omega \Big( \sum_{i,j=1}^{n} a_{ij}(x)\, u_{x_i} v_{x_j} + \sum_{i=1}^{n} b_i(x)\, u_{x_i} v + c(x)\, u v \Big)\,dx. \]
Under the assumption that
\[ \|a_{ij}\|_{L^\infty(\Omega)},\ \|b_i\|_{L^\infty(\Omega)},\ \|c\|_{L^\infty(\Omega)} \le C, \]
one can easily verify that condition (i) is always satisfied:
\[ |B[u, v]| \le 3C\, \|u\|_H\, \|v\|_H, \]
with a constant depending on the $L^\infty$ norms of $a_{ij}(x)$, $b_i(x)$, and $c(x)$. However, condition (ii) may not automatically hold; it depends on the sign of $c(x)$. First, from the uniform ellipticity condition, we have
\[ \int_\Omega \sum_{i,j=1}^{n} a_{ij}(x)\, u_{x_i} u_{x_j}\,dx \ge \delta \int_\Omega |Du|^2\,dx = \delta\, \|u\|_H^2 \tag{2.13} \]
for some $\delta > 0$. From here one can see that if $b_i(x) \equiv 0$ and $c(x) \ge 0$, then condition (ii) is met. Actually, the condition on $c(x)$ can be relaxed a little bit: if
\[ c(x) \ge -\frac{\delta}{2}, \tag{2.14} \]
with $\delta$ as in (2.13), then condition (ii) still holds. Hence we have

Theorem 2.3.4 Assume that $b_i(x) \equiv 0$ and $c(x)$ satisfies (2.14). Then for every $f \in L^{\frac{2n}{n+2}}(\Omega)$, the problem
\[ \begin{cases} Lu = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega, \end{cases} \]
has a unique weak solution in $H_0^1(\Omega)$.

2.4 Variational Methods

2.4.1 Semilinear Equations

To find the weak solutions of semilinear equations
\[ Lu = f(x, u), \]
the Lax-Milgram Theorem can no longer be applied. We will use variational methods to obtain the existence. We will associate the equation with the functional
\[ J(u) = \frac{1}{2} \int_\Omega (Lu \cdot u)\,dx - \int_\Omega F(x, u)\,dx, \qquad \text{where } F(x, u) = \int_0^u f(x, s)\,ds. \]
We show that the critical points of $J(u)$ are weak solutions of our problem, and then we will focus on how to seek critical points in various situations.

2.4.2 Calculus of Variations

For a continuously differentiable function $g$ defined on $\mathbb{R}^n$, a critical point is, roughly speaking, a point where $Dg$ vanishes. The simplest sort of critical points are global or local maxima or minima; others are saddle points. To seek weak solutions of partial differential equations, we generalize this concept to infinite dimensional spaces.
To illustrate the idea, let us start from a simple example:
\[ \begin{cases} -\Delta u = f(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega. \end{cases} \tag{2.15} \]
To find weak solutions of the problem, we study the functional
\[ J(u) = \frac{1}{2} \int_\Omega |Du|^2\,dx - \int_\Omega f(x)\, u\,dx \]
in $H_0^1(\Omega)$. Now suppose that there is some function $u$ which happens to be a minimizer of $J(\cdot)$ in $H_0^1(\Omega)$; then we will demonstrate that $u$ is actually a weak solution of our problem.

Let $v$ be any function in $H_0^1(\Omega)$, and consider the real-valued function
\[ g(t) = J(u + tv), \qquad t \in \mathbb{R}. \]
Explicitly,
\[ J(u + tv) = \frac{1}{2} \int_\Omega |D(u + tv)|^2\,dx - \int_\Omega f(x)\,(u + tv)\,dx, \]
and
\[ \frac{d}{dt} J(u + tv) = \int_\Omega D(u + tv) \cdot Dv\,dx - \int_\Omega f(x)\, v\,dx. \]
Since $u$ is a minimizer of $J(\cdot)$, the function $g(t)$ has a minimum at $t = 0$, and therefore we must have $g'(0) = 0$, i.e.
\[ \frac{d}{dt} J(u + tv)\Big|_{t=0} = 0. \]
Consequently, $g'(0) = 0$ yields
\[ \int_\Omega Du \cdot Dv\,dx - \int_\Omega f(x)\, v\,dx = 0, \qquad \forall\, v \in H_0^1(\Omega). \tag{2.16} \]
This implies that $u$ is a weak solution of problem (2.15). Furthermore, if $u$ is also second order differentiable, then through integration by parts, we obtain
\[ \int_\Omega \big[ -\Delta u - f(x) \big]\, v\,dx = 0, \qquad \forall\, v \in H_0^1(\Omega). \]
Since this is true for any $v \in H_0^1(\Omega)$, we conclude that
\[ -\Delta u - f(x) = 0, \]
and therefore $u$ is a classical solution of the problem.
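The argument above can be mimicked in a discrete setting; the following is a minimal sketch (assuming NumPy; the grid size and the choice $f(x) = \pi^2 \sin(\pi x)$, whose exact solution is $u = \sin(\pi x)$, are illustrative). Minimizing the discretized Dirichlet energy over grid functions vanishing at the endpoints, the condition $J'(u) = 0$ becomes the finite-difference version of $-u'' = f$:

```python
import numpy as np

# Discrete Dirichlet energy on (0, 1): J(u) = 1/2 sum h ((u_{i+1}-u_i)/h)^2
#                                             - sum h f_i u_i,  u_0 = u_N = 0.
# Setting dJ/du_i = 0 gives (-u_{i-1} + 2u_i - u_{i+1})/h^2 = f_i,
# i.e. the Euler-Lagrange equation -u'' = f.
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)[1:-1]        # interior nodes
f = np.pi**2 * np.sin(np.pi * x)              # exact solution: u = sin(pi x)

K = (np.diag(2.0 * np.ones(N - 1))
     - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1)) / h**2    # stiffness matrix of J
u = np.linalg.solve(K, f)                     # J'(u) = 0  <=>  K u = f

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)                                    # O(h^2) discretization error
```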
we have Deﬁnition 2.4. we have seen that the critical points of the functional 1 J(u) = Du2 dx − f (x)udx 2 Ω Ω are weak solutions of the Dirichlet problem (2. or a saddle point of the functional.52 2 Existence of Weak Solutions Therefore. and iii) weak lower semicontinuity. the readers can see that. v >  ≤ where < L(u). then < J (u). if J is Frechet diﬀerentiable at u. Remark 2. in order to be a weak solution of the problem.1 One can verify that.3 Existence of Minimizers In the previous subsection.15). ii) coercivity. u). dt More precisely. v >= lim t→0 d J(u + tv) − J(u) = J(u + tv) t=0 . iii) We call J (u) = 0 the EulerLagrange equation of the functional J. i) Boundedness from Below. v >= 0 ∀v ∈ X. From the above argument. The mapping L(u) is usually denoted by J (u).4. such that J(u + v) − J(u)− < L(u). t dt v X whenever v X < δ. To this end. we verify that J possesses the following three properties: i) boundedness from below. Then our next question is: Does the functional J actually possess a critical point? We will show that there exists a minimizer of J.4. i) We say that J is Frechet diﬀerentiable at u ∈ X if there exists a continuous linear map L = L(u) from X to X ∗ satisfying: For any > 0. there is a δ = δ( . it can be a maxima. or more generally.1 Let J be a linear functional on a Banach space X. v >:= L(u)(v) is the duality between X and X ∗ . u is a classical solution of the problem. 2. here u need not to be a minima. any point that satisﬁes d J(u + tv) t=0 . that is < J (u). . ii) A critical point of J is a point at which J (u) = 0.
First we prove that $J$ is bounded from below in $H := H_0^1(\Omega)$ if $f \in L^2(\Omega)$. For simplicity, we use the equivalent norm in $H$:

\[ \|u\|_H = \left( \int_\Omega |Du|^2\,dx \right)^{1/2}. \]

By the Poincare and Holder inequalities, we have

\[ J(u) \geq \frac{1}{2}\|u\|_H^2 - C\|u\|_H \|f\|_{L^2(\Omega)} = \frac{1}{2}\left( \|u\|_H - C\|f\|_{L^2(\Omega)} \right)^2 - \frac{C^2}{2}\|f\|^2_{L^2(\Omega)} \geq -\frac{C^2}{2}\|f\|^2_{L^2(\Omega)}. \tag{2.17} \]

Therefore, the functional $J$ is bounded from below.

ii) Coercivity. That a functional is bounded from below does not guarantee that it has a minimum. A simple counter example is

\[ g(x) = \frac{1}{1+x^2}, \quad x \in R^1. \]

Obviously, the infimum of $g(x)$ is $0$, but it can never be attained. If we take a minimizing sequence $\{x_k\}$ such that $g(x_k)\to 0$, then $x_k$ goes away to infinity. This suggests that, in order to prevent a minimizing sequence from 'leaking' to infinity, we need to have some sort of 'coercive' condition on our functional $J$: if a sequence $\{u_k\}$ goes to infinity, i.e. $\|u_k\|_H\to\infty$, then $J(u_k)\to\infty$. And this is indeed true for our functional $J$: from the second part of (2.17), one can see that if $\|u_k\|_H\to\infty$, then $J(u_k)$ must also become unbounded. This implies that any minimizing sequence must be bounded in $H$; that is, a minimizing sequence would be retained in a bounded set.

iii) Weak Lower Semicontinuity. If $H$ were a finite dimensional space, then a bounded minimizing sequence $\{u_k\}$ would possess a convergent subsequence, and the limit would be a minimizer of $J$. This is no longer true in infinite dimensional spaces. We therefore turn our attention to the weak topology.

Definition 2.4.2 Let $X$ be a real Banach space. If a sequence $\{u_k\} \subset X$ satisfies

\[ \langle f, u_k\rangle \to \langle f, u_o\rangle \ \mbox{ as } k\to\infty \]

for every bounded linear functional $f$ on $X$, then we say that $\{u_k\}$ converges weakly to $u_o$ in $X$, and write $u_k \rightharpoonup u_o$.
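A quick numerical look at this counter example (illustrative only, not part of the text) confirms the 'leaking' phenomenon: along the minimizing sequence $x_k = 10^k$, the values $g(x_k)$ decrease to the infimum $0$, which is never attained.

```python
import numpy as np

# g is bounded below by 0 but has no minimizer: the minimizing sequence
# escapes to infinity. This is the failure that coercivity rules out.
def g(x):
    return 1.0 / (1.0 + x**2)

xk = np.array([10.0**k for k in range(6)])   # a minimizing sequence, x_k -> infinity
vals = g(xk)
inf_g = 0.0                                  # the infimum of g on the real line
```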
Naturally, we wish that the functional $J$ we consider were continuous with respect to this weak convergence, that is,

\[ \lim_{i\to\infty} J(u_{k_i}) = J(u_o); \]

then $u_o$ would be our desired minimizer. However, this is not the case with our functional, nor is it true for most other functionals of interest. Fortunately, we do not really need such a continuity; instead, we only need a weaker one:

\[ J(u_o) \leq \liminf_{i\to\infty} J(u_{k_i}). \]

Definition 2.4.3 A Banach space $X$ is called reflexive if $(X^*)^* = X$, where $X^*$ is the dual of $X$.

Definition 2.4.4 We say that a functional $J(\cdot)$ is weakly lower semicontinuous on a Banach space $X$, if for every weakly convergent sequence $u_k \rightharpoonup u_o$ in $X$, we have

\[ J(u_o) \leq \liminf_{k\to\infty} J(u_k). \]

Theorem 2.4.1 (Weak Compactness.) Let $X$ be a reflexive Banach space. Assume that the sequence $\{u_k\}$ is bounded in $X$. Then there exists a subsequence of $\{u_k\}$ which converges weakly in $X$.

Now let's come back to our Hilbert space $H := H_0^1(\Omega)$. By the Riesz Representation Theorem, for every bounded linear functional $f$ on $H$, there is a unique $v \in H$, such that

\[ \langle f, u\rangle = (v, u), \quad \forall\, u \in H. \]

It follows that $H^* = H$, and therefore $H$ is reflexive. By the coercivity and the Weak Compactness Theorem, a minimizing sequence $\{u_k\}$ is bounded, and hence possesses a subsequence $\{u_{k_i}\}$ that converges weakly to an element $u_o$ in $H$:

\[ (u_{k_i}, v) \to (u_o, v), \quad \forall\, v \in H. \]

If $J$ is weakly lower semicontinuous, then

\[ J(u_o) \leq \liminf_{i\to\infty} J(u_{k_i}). \]

Since, on the other hand, by the definition of a minimizing sequence, we obviously have

\[ J(u_o) \geq \liminf_{i\to\infty} J(u_{k_i}), \]

it follows that $J(u_o) = \liminf_{i\to\infty} J(u_{k_i})$. Therefore, $u_o$ is a minimizer.
Now we show that our functional $J$ is weakly lower semicontinuous in $H$. Assume that $\{u_k\} \subset H$, and $u_k \rightharpoonup u_o$ in $H$. From the algebraic inequality $a^2 + b^2 \geq 2ab$, we have

\[ \int_\Omega |Du_k|^2\,dx \geq \int_\Omega |Du_o|^2\,dx + 2\int_\Omega Du_o\cdot(Du_k - Du_o)\,dx. \tag{2.18} \]

Here the second term on the right goes to zero as $k\to\infty$, due to the weak convergence $u_k \rightharpoonup u_o$. Therefore

\[ \liminf_{k\to\infty} \int_\Omega |Du_k|^2\,dx \geq \int_\Omega |Du_o|^2\,dx. \tag{2.19} \]

Since, for each fixed $f \in L^{\frac{2n}{n+2}}(\Omega)$, $\int_\Omega f u\,dx$ is a bounded linear functional on $H$, it follows immediately that

\[ \int_\Omega f u_k\,dx \to \int_\Omega f u_o\,dx, \ \mbox{ as } k\to\infty. \]

Now (2.19), together with this convergence of the linear term, implies the weak lower semicontinuity of $J$.

So far, we have proved

Theorem 2.4.2 Assume that $\Omega$ is a bounded domain with smooth boundary. Then for every $f \in L^{\frac{2n}{n+2}}(\Omega)$, the functional

\[ J(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx - \int_\Omega f u\,dx \]

possesses a minimum $u_o$ in $H_0^1(\Omega)$, which is a weak solution of the boundary value problem

\[ \left\{ \begin{array}{ll} -\Delta u = f(x), & x \in \Omega,\\ u(x) = 0, & x \in \partial\Omega. \end{array} \right. \]

2.4.4 Existence of Minimizers Under Constraints

Now we consider the semilinear Dirichlet problem

\[ \left\{ \begin{array}{ll} -\Delta u = |u|^{p-1}u, & x \in \Omega,\\ u(x) = 0, & x \in \partial\Omega, \end{array} \right. \tag{2.20} \]

with $1 < p < \frac{n+2}{n-2}$.
Let

\[ J(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx - \frac{1}{p+1}\int_\Omega |u|^{p+1}\,dx. \]

It is easy to verify that

\[ \frac{d}{dt} J(u+tv)\Big|_{t=0} = \int_\Omega \left( Du\cdot Dv - |u|^{p-1}u\,v \right) dx. \]

Therefore, a critical point of the functional $J$ in $H := H_0^1(\Omega)$ is a weak solution of (2.20).

Obviously, $u \equiv 0$ is a solution. We call this a trivial solution. What we are more interested in is whether there exist nontrivial solutions. However, there is no minimizer of $J(u)$: this functional is not bounded from below. To see this, fix an element $u \in H$ and consider

\[ J(tu) = \frac{t^2}{2}\int_\Omega |Du|^2\,dx - \frac{t^{p+1}}{p+1}\int_\Omega |u|^{p+1}\,dx. \]

Noticing $p+1 > 2$, we find that $J(tu)\to -\infty$ as $t\to\infty$.

For this reason, instead of $J$, we will consider another functional

\[ I(u) := \frac{1}{2}\int_\Omega |Du|^2\,dx \]

in the constraint set

\[ M := \Big\{ u \in H \;\Big|\; G(u) := \int_\Omega |u|^{p+1}\,dx = 1 \Big\}, \]

and we seek minimizers of $I$ in $M$. Let $\{u_k\} \subset M$ be a minimizing sequence, i.e.

\[ \lim_{k\to\infty} I(u_k) = \inf_{u\in M} I(u) := m. \]

It follows that $\int_\Omega |Du_k|^2\,dx$ is bounded, hence $\{u_k\}$ is bounded in $H$. By the Weak Compactness Theorem, there exists a subsequence (for simplicity, we still denote it by $\{u_k\}$) which converges weakly to $u_o$ in $H$. The weak lower semicontinuity of the functional $I$ yields

\[ I(u_o) \leq \liminf_{k\to\infty} I(u_k) = m. \tag{2.21} \]

On the other hand, since $p+1 < \frac{2n}{n-2}$, by the Compact Sobolev Embedding Theorem,

\[ H^1(\Omega) \hookrightarrow\hookrightarrow L^{p+1}(\Omega), \]
we find that $u_k$ converges strongly to $u_o$ in $L^{p+1}(\Omega)$, and we therefore have

\[ \int_\Omega |u_o|^{p+1}\,dx = 1. \]

That is, $u_o \in M$, and therefore $I(u_o) \geq m$. Together with (2.21), we obtain $I(u_o) = m$.

Now we have shown the existence of a minimizer $u_o$ of $I$ in $M$. To link $u_o$ to a weak solution of (2.20), we derive the corresponding Euler-Lagrange equation for this minimizer under the constraint.

Theorem 2.4.3 (Lagrange Multiplier). Let $u$ be a minimizer of $I$ in $M$, i.e.

\[ I(u) = \min_{v\in M} I(v). \]

Then there exists a real number $\lambda$ such that $I'(u) = \lambda G'(u)$, or

\[ \langle I'(u), v\rangle = \lambda \langle G'(u), v\rangle, \quad \forall\, v \in H. \]

Before proving the theorem, let's first recall the Lagrange Multiplier rule for functions defined on $R^n$. Let $f(x)$ and $g(x)$ be smooth functions in $R^n$. It is well known that the critical points (in particular, the minima) $x_o$ of $f(x)$ under the constraint $g(x) = c$ satisfy

\[ Df(x_o) = \lambda Dg(x_o) \tag{2.22} \]

for some constant $\lambda$. Geometrically, $Df(x_o)$ is a vector in $R^n$. It can be decomposed as the sum of two perpendicular vectors:

\[ Df(x_o) = Df(x_o)_N + Df(x_o)_T, \]

where the two terms on the right hand side are the projections of $Df(x_o)$ onto the normal and tangent spaces of the level set $g(x) = c$ at the point $x_o$. That $x_o$ is a critical point of $f(x)$ on the level set $g(x) = c$ means that the tangential projection $Df(x_o)_T = 0$, while the normal projection $Df(x_o)_N$ is parallel to $Dg(x_o)$; this is precisely (2.22). Heuristically, we may generalize this perception to the infinite dimensional space $H$.
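In finite dimensions the multiplier rule can be checked directly. The following sketch is our own (the matrix $A$ and all names are invented for illustration): minimizing a quadratic energy $I(x) = \frac12 x^T A x$ under the constraint $G(x) = |x|^2 = 1$, the constrained minimizer is the eigenvector of the smallest eigenvalue, and $I'(x) = \lambda G'(x)$ holds with $\lambda$ equal to half that eigenvalue.

```python
import numpy as np

# A finite-dimensional analogue of the Lagrange multiplier rule:
# minimize I(x) = 1/2 x^T A x subject to G(x) = |x|^2 = 1.
# I'(x) = A x and G'(x) = 2 x, so I'(x) = lam * G'(x) reads A x = 2*lam*x.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)          # symmetric positive definite energy matrix

w, V = np.linalg.eigh(A)         # eigenvalues in ascending order
x = V[:, 0]                      # constrained minimizer, |x| = 1
lam = 0.5 * w[0]                 # the multiplier

residual = float(np.max(np.abs(A @ x - lam * (2.0 * x))))
```

The same normal-vector picture described above applies: $A x$ is parallel to the outward normal $2x$ of the unit sphere at the minimizer.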
At a fixed point $u \in H$, $I'(u)$ is a linear functional on $H$. By the Riesz Representation Theorem, it can be identified as a point (or a vector) in $H$:

\[ \langle I'(u), v\rangle = (I'(u), v), \quad \forall\, v \in H. \]

Similarly for $G'(u)$: we have $G'(u) \in H$, and it is a normal vector of $M$ at the point $u$. At a minimum $u$ of $I$ on the hyper-surface $\{w \in H \mid G(w) = 1\}$, $I'(u)$ must be perpendicular to the tangent space at the point $u$, and hence $I'(u) = \lambda G'(u)$. We now prove this rigorously.

The Proof of Theorem 2.4.3. Recall that if $u$ were the minimum of $I$ in the whole space $H$, then we would have

\[ \frac{d}{dt} I(u+tv)\Big|_{t=0} = 0. \tag{2.23} \]

Now $u$ is only the minimum on $M$, and we can no longer use (2.23), because $u + tv$ cannot be kept on $M$: we cannot guarantee that

\[ G(u+tv) := \int_\Omega |u+tv|^{p+1}\,dx = 1 \]

for all small $t$. To remedy this, for each small $t$, we try to show that there is an $s = s(t)$, such that $u + tv + s(t)w$ is on $M$, for a suitably chosen $w$. Since $u \in M$, for

\[ g(t, s) := G(u + tv + sw) \]

we have $g(0, 0) = 1$ and

\[ \frac{\partial g}{\partial s}(0, 0) = \langle G'(u), w\rangle = (p+1)\int_\Omega |u|^{p-1}u\,w\,dx. \]

There exists $w$ (we may just take $w$ as $u$) such that the right hand side of the above is not zero. Then, according to the Implicit Function Theorem, there exists a $C^1$ function $s : R^1 \to R^1$ with $s(0) = 0$, such that $g(t, s(t)) = 1$, i.e.

\[ G(u + tv + s(t)w) = 1, \]

for sufficiently small $t$. Then we can calculate the variation of $I(u + tv + s(t)w)$ as $t\to 0$. Now $u + tv + s(t)w$ is on $M$, and hence, at the minimum $u$ of $I$ on $M$, we must have

\[ 0 = \frac{d}{dt} I(u + tv + s(t)w)\Big|_{t=0} = \langle I'(u), v\rangle + \langle I'(u), w\rangle\, s'(0). \tag{2.24} \]
On the other hand, differentiating $g(t, s(t)) = 1$ at $t = 0$, we have

\[ 0 = \frac{dg}{dt}(0, 0) = \frac{\partial g}{\partial t}(0, 0) + \frac{\partial g}{\partial s}(0, 0)\,s'(0) = \langle G'(u), v\rangle + \langle G'(u), w\rangle\,s'(0). \]

Consequently,

\[ s'(0) = -\frac{\langle G'(u), v\rangle}{\langle G'(u), w\rangle}. \]

Substitute this into (2.24), and denote

\[ \lambda = \frac{\langle I'(u), w\rangle}{\langle G'(u), w\rangle}; \]

we arrive at

\[ \langle I'(u), v\rangle = \lambda \langle G'(u), v\rangle, \quad \forall\, v \in H. \]

This completes the proof of the Theorem.

Now let's come back to the weak solution of problem (2.20). Our minimizer $u_o$ of $I$ under the constraint $G(u) = 1$ satisfies

\[ \langle I'(u_o), v\rangle = \lambda \langle G'(u_o), v\rangle, \quad \forall\, v \in H, \]

that is,

\[ \int_\Omega Du_o\cdot Dv\,dx = \lambda \int_\Omega |u_o|^{p-1}u_o\,v\,dx, \quad \forall\, v \in H. \tag{2.25} \]

One can see that $\lambda > 0$. Let $\tilde{u} = a u_o$ with $\lambda/a^{p-1} = 1$; this is possible since $p > 1$. Then it is easy to verify that

\[ \int_\Omega D\tilde{u}\cdot Dv\,dx = \int_\Omega |\tilde{u}|^{p-1}\tilde{u}\,v\,dx, \quad \forall\, v \in H. \]

Now $\tilde{u}$ is the desired weak solution of (2.20) in $H$.

Exercise 2.4.1 Show that $\lambda > 0$ in (2.25).

2.4.5 Minimax Critical Points

Besides local or global minima and maxima, there are other types of critical points: saddle points, or minimax points. For a simple example, let's consider the function $f(x, y) = x^2 - y^2$ from $R^2$ to $R^1$. The graph of $z = f(x, y)$ in $R^3$ looks like a horse saddle. Since $Df(0, 0) = 0$, the point $(0, 0)$ is a critical point of $f$; however, it is neither a local maximum nor a local minimum. If we go on the graph along the direction of the $x$-axis, $(0, 0)$ is the minimum, while along the direction of the $y$-axis, $(0, 0)$ is the maximum. For this reason, we also call $(0, 0)$ a minimax critical point of $f(x, y)$. The way to locate, or to show the existence of, such a minimax point is called a minimax variational method.
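This saddle behavior is easy to confirm numerically (an illustrative check of our own, not part of the text): the gradient of $f$ vanishes at the origin, the restriction to the $x$-axis has a minimum there, and the restriction to the $y$-axis has a maximum there.

```python
import numpy as np

def f(x, y):
    return x**2 - y**2

# gradient at the origin via central differences: both components vanish
h = 1e-6
fx = (f(h, 0.0) - f(-h, 0.0)) / (2 * h)
fy = (f(0.0, h) - f(0.0, -h)) / (2 * h)

t = np.linspace(-1.0, 1.0, 201)   # t[100] = 0
along_x = f(t, 0.0 * t)           # restriction to the x-axis: t^2
along_y = f(0.0 * t, t)           # restriction to the y-axis: -t^2
```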
To illustrate the main idea, let's still take this simple example. Suppose we do not know whether $f(x, y)$ possesses a minimax point, but we do detect a 'mountain range' on the graph near the $xz$-plane. More precisely, we pick two points on the $xy$-plane, for instance $P = (1, 1)$ and $Q = (-1, -1)$, on the two sides of the 'mountain range'. To go from $(P, f(P))$ to $(Q, f(Q))$ on the graph, one needs first to climb over the 'mountain range' near the $xz$-plane, then to descend to the point $(Q, f(Q))$. Let $\gamma(s)$ be a continuous curve linking the two points $P$ and $Q$, with $\gamma(0) = P$ and $\gamma(1) = Q$. Hence on this path, there is a highest point. Then we take the infimum among the highest points on all the possible paths linking $P$ and $Q$:

\[ c = \inf_{\gamma\in\Gamma} \max_{s\in[0,1]} f(\gamma(s)), \]

where

\[ \Gamma = \{ \gamma \mid \gamma = \gamma(s) \mbox{ is continuous on } [0,1], \mbox{ and } \gamma(0) = P,\ \gamma(1) = Q \}. \]

We then try to show that there exists a point $P_o$ at which $f$ attains this minimax value $c$, and more importantly, $Df(P_o) = 0$.

For a functional $J$ defined on an infinite dimensional space $X$, we will apply a similar idea to seek its minimax critical points. Unlike in finite dimensional spaces, in an infinite dimensional space a bounded sequence may not converge. Hence we require the functional $J$ to satisfy some compactness condition, which is the Palais-Smale condition.

Definition 2.4.5 Let $X$ be a real Banach space and $J \in C^1(X, R^1)$. We say that the functional $J$ satisfies the Palais-Smale condition ((PS) in short), if any sequence $\{u_k\} \subset X$ for which $J(u_k)$ is bounded and $J'(u_k)\to 0$ as $k\to\infty$ possesses a convergent subsequence.

Under this compactness condition, Ambrosetti and Rabinowitz [AR] established the following well-known theorem on the existence of a minimax critical point.

Theorem 2.4.4 (Mountain Pass Theorem). Let $X$ be a real Banach space and $J \in C^1(X, R^1)$. Suppose $J$ satisfies (PS), $J(0) = 0$,

$(J_1)$ there exist constants $\rho, \alpha > 0$ such that $J|_{\partial B_\rho(0)} \geq \alpha$, and

$(J_2)$ there is an $e \in X\setminus\bar{B}_\rho(0)$, such that $J(e) \leq 0$.

Then $J$ possesses a critical value $c \geq \alpha$ which can be characterized as

\[ c = \inf_{\gamma\in\Gamma} \max_{u\in\gamma([0,1])} J(u), \]

where

\[ \Gamma = \{ \gamma \in C([0,1], X) \mid \gamma(0) = 0,\ \gamma(1) = e \}. \]
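The minimax value in this example can be seen concretely (an illustrative sketch with our own choice of paths). Every continuous path from $P = (1,1)$ to $Q = (-1,-1)$ must cross the line $y = -x$, on which $f \equiv 0$, so the highest point of any path is at least $0$; the straight path $\gamma(t) = (1-2t, 1-2t)$ stays at height $0$ throughout. Hence $c = 0 = f(0,0)$, the saddle value.

```python
import numpy as np

def f(x, y):
    return x**2 - y**2

t = np.linspace(0.0, 1.0, 1001)
xs = 1 - 2 * t                            # straight path from P=(1,1) to Q=(-1,-1)
straight = f(xs, xs)                      # f is identically 0 along x = y
wiggly = f(xs, xs + 0.3 * np.sin(np.pi * t))  # a competitor path: its max exceeds 0
c_saddle = f(0.0, 0.0)                    # the saddle value, which equals c
```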
The proof of the Theorem is mainly based on a deformation theorem, which exploits the change of topology of the level sets of $J$ when the value of $J$ goes through a critical value. To illustrate this, let's come back to the simple example $f(x, y) = x^2 - y^2$. Denote the level set of $f$ by

\[ f_c := \{ (x, y) \in R^2 \mid f(x, y) \leq c \}. \]

One can see that $0$ is a critical value, and for any $a > 0$, the level sets $f_a$ and $f_{-a}$ have different topology: $f_a$ is connected while $f_{-a}$ is not. If there is no critical value between two level sets, say $f_a$ and $f_b$ with $0 < a < b$, then one can continuously deform $f_b$ into $f_a$. One convenient way is to let the points in the set $f_b$ 'flow' along the direction of $-Df$ into $f_a$; that is, the points on the graph 'flow downhill' into a lower level.

Heuristically, on the graph of $J$, the Mountain Pass Theorem says that if the point $0$ is located in a valley surrounded by a mountain range, and if at the other side of the range there is another low point $e$, then there must be a mountain pass from $0$ to $e$ which contains a minimax critical point of $J$. In the figure below, the path connecting the two low points $(0, J(0))$ and $(e, J(e))$ contains a highest point $S$. This $S$ is a saddle point: it is the lowest as compared to the highest points on the nearby paths, i.e. the minimax critical point of $J$.

More precisely and generally, a critical value $c$ means a value of the functional which is attained by a critical point $u$, i.e.

\[ J'(u) = 0 \ \mbox{ and } \ J(u) = c. \]

Correspondingly, we have

Theorem 2.4.5 (Deformation Theorem). Let $X$ be a real Banach space. Assume that $J \in C^1(X, R^1)$ and satisfies the (PS). Suppose that $c$ is not a critical value of $J$.
Then for each sufficiently small $\epsilon > 0$, there exist a constant $0 < \delta < \epsilon$ and a continuous mapping $\eta(t, u)$ from $[0,1]\times X$ to $X$, such that

(i) $\eta(0, u) = u$, $\forall\, u \in X$;

(ii) $\eta(t, u) = u$, $\forall\, t \in [0,1]$, $\forall\, u \notin J^{-1}[c-\epsilon, c+\epsilon]$;

(iii) $J(\eta(t, u)) \leq J(u)$, $\forall\, u \in X$, $\forall\, t \in [0,1]$;

(iv) $\eta(1, J_{c+\delta}) \subset J_{c-\delta}$.

Roughly speaking, the Theorem says that if $c$ is not a critical value of $J$, then one can continuously deform a higher level set $J_{c+\delta}$ into a lower level set $J_{c-\delta}$ by 'flowing downhill'. Condition (ii) is to ensure that, after the deformation, points whose values are away from $c$, in particular the two points $0$ and $e$ in the Mountain Pass Theorem, still remain fixed.

Proof of the Deformation Theorem. The main idea is to solve an ODE initial value problem (with respect to $t$) for each $u \in X$, to let the 'flow' go along the direction of $-J'(\eta(t, u))$, so that the value of $J(\eta(t, u))$ would decrease as $t$ increases. Roughly, the ODE looks like the following:

\[ \frac{d\eta}{dt}(t, u) = -J'(\eta(t, u)), \quad \eta(0, u) = u. \tag{2.26} \]

However, we need to modify the right hand side of the equation a little bit, so that the flow will satisfy the other desired conditions. To satisfy condition (ii), i.e. to make the flow stay still in the set

\[ M := \{ u \in X \mid J(u) \leq c-\epsilon \ \mbox{ or } \ J(u) \geq c+\epsilon \}, \]

we could multiply the right hand side of equation (2.26) by ${\rm dist}(\eta, M)$; but this would make the flow go too slow near $M$. We will modify it further, so that the flow would not go too slow in $J_{c+\epsilon}\setminus J_{c-\epsilon}$.

Step 1. We first choose a small $\epsilon > 0$. We claim that there exist constants $0 < a, \epsilon < 1$, such that

\[ \|J'(u)\| \geq a, \quad \forall\, u \in J_{c+\epsilon}\setminus J_{c-\epsilon}. \tag{2.27} \]

Otherwise, there would exist sequences $a_k\to 0$, $\epsilon_k\to 0$, and elements $u_k \in J_{c+\epsilon_k}\setminus J_{c-\epsilon_k}$ with $\|J'(u_k)\| \leq a_k$. Then, by virtue of the (PS) condition, there is a subsequence $\{u_{k_i}\}$ that converges to an element $u_o \in X$. Since $J \in C^1(X, R^1)$, we must have $J(u_o) = c$ and $J'(u_o) = 0$. This is a contradiction with the assumption that $c$ is not a critical value of $J$. Therefore (2.27) holds.

Step 2. We choose $0 < \delta < \epsilon$, such that

\[ \delta < \frac{a^2}{2}, \tag{2.28} \]

and let
\[ N := \{ u \in X \mid c-\delta \leq J(u) \leq c+\delta \}. \]

For $u \in X$, define

\[ g(u) := \frac{{\rm dist}(u, M)}{{\rm dist}(u, M) + {\rm dist}(u, N)}, \]

and let

\[ h(s) := \left\{ \begin{array}{ll} 1 & \mbox{if } 0 \leq s \leq 1,\\ \frac{1}{s} & \mbox{if } s > 1. \end{array} \right. \]

We finally modify the $-J'(\eta)$ as

\[ F(\eta) := -g(\eta)\,h(\|J'(\eta)\|)\,J'(\eta). \]

The introduction of the factor $h(\cdot)$ is to ensure the boundedness of $F(\cdot)$. One can verify that $F$ is bounded and Lipschitz continuous on bounded sets, and therefore, by the well-known ODE theory, for each fixed $u \in X$, the modified ODE problem

\[ \frac{d\eta}{dt}(t, u) = F(\eta(t, u)), \quad \eta(0, u) = u, \]

has a unique solution $\eta(t, u)$ for all time $t \geq 0$.

Step 3. Condition (i) is satisfied automatically. From the definition of $g$,

\[ g(u) = 0 \quad \forall\, u \in M, \]

hence the flow stays still on $M$, and condition (ii) is also satisfied. To verify (iii) and (iv), we compute

\[ \frac{d}{dt} J(\eta(t, u)) = \Big\langle J'(\eta(t, u)), \frac{d\eta}{dt}(t, u) \Big\rangle = -g(\eta(t, u))\,h(\|J'(\eta(t, u))\|)\,\|J'(\eta(t, u))\|^2. \tag{2.29} \]

In the above equality, the right hand side is nonpositive, hence $J(\eta(t, u))$ is nonincreasing. This verifies (iii).

To see (iv), since $J(\eta(t, u))$ is nonincreasing, it suffices to show that for each $u$ in $J_{c+\delta}\setminus J_{c-\delta} \subset N$, we have $\eta(1, u) \in J_{c-\delta}$. Notice that for $\eta \in N$, obviously $g(\eta) \equiv 1$, and (2.29) becomes

\[ \frac{d}{dt} J(\eta(t, u)) = -h(\|J'(\eta(t, u))\|)\,\|J'(\eta(t, u))\|^2. \]
Hence, as long as $\eta(t, u)$ remains in $N$, by (2.27) and the definition of $h$,

\[ \frac{d}{dt} J(\eta(t, u)) = \left\{ \begin{array}{ll} -\|J'(\eta(t, u))\|^2 \leq -a^2 & \mbox{if } \|J'(\eta(t, u))\| \leq 1,\\ -\|J'(\eta(t, u))\| \leq -a & \mbox{if } \|J'(\eta(t, u))\| > 1. \end{array} \right. \]

In any case, since $0 < a < 1$, we derive

\[ \frac{d}{dt} J(\eta(t, u)) \leq -a^2, \]

and hence, by (2.28),

\[ J(\eta(1, u)) \leq J(\eta(0, u)) - a^2 \leq c + \delta - a^2 \leq c - \delta. \]

This establishes (iv), and therefore completes the proof of the Theorem.

The Proof of the Mountain Pass Theorem. With the Deformation Theorem, the proof of the Mountain Pass Theorem becomes very simple.

First, we show that $c < \infty$. To see this, we take the particular path $\gamma(t) = te$ and consider the function $g(t) = J(\gamma(t))$. It is continuous on the closed interval $[0,1]$, and hence bounded from above. Therefore

\[ c \leq \max_{u\in\gamma([0,1])} J(u) = \max_{t\in[0,1]} g(t) < \infty. \]

On the other hand, due to the definition of $\Gamma$, for any $\gamma \in \Gamma$,

\[ \gamma([0,1]) \cap \partial B_\rho(0) \neq \emptyset. \]

Hence, by condition $(J_1)$,

\[ \max_{u\in\gamma([0,1])} J(u) \geq \inf_{v\in\partial B_\rho(0)} J(v) \geq \alpha, \]

and it follows that $c \geq \alpha$.

Suppose the number $c$ so defined is not a critical value. We will use the Deformation Theorem to continuously deform a path $\gamma \in \Gamma$ down to the level set $J_{c-\delta}$ to derive a contradiction. More precisely, choose the $\epsilon$ in the Deformation Theorem to be $\frac{\alpha}{2}$, and $0 < \delta < \epsilon$ as in that theorem. From the definition of $c$, we can pick a path $\gamma \in \Gamma$, such that

\[ \max_{u\in\gamma([0,1])} J(u) \leq c + \delta, \]

or in other words, $\gamma([0,1]) \subset J_{c+\delta}$. Now, by the Deformation Theorem, this path can be continuously deformed down to the level set $J_{c-\delta}$, that is,

\[ \eta(1, \gamma([0,1])) \subset J_{c-\delta}. \]
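The downhill flow in the proof can be imitated numerically for the model function $f(x,y) = x^2 - y^2$ (an illustrative sketch of our own; the starting point, step size, and step count are our choices). Along an explicit Euler discretization of $d\eta/dt = -Df(\eta)$, the value $f(\eta(t))$ decreases monotonically, since $\frac{d}{dt} f(\eta(t)) = -|Df(\eta(t))|^2$.

```python
import numpy as np

def f(p):
    return p[0]**2 - p[1]**2

def grad(p):
    return np.array([2 * p[0], -2 * p[1]])

p = np.array([1.0, 0.3])      # a starting point above the critical level c = 0
dt = 1e-3
vals = [f(p)]
for _ in range(500):          # explicit Euler for the negative gradient flow
    p = p - dt * grad(p)
    vals.append(f(p))
vals = np.array(vals)
```

The flow carries the point from the level $f = 0.91$ down past the critical level $0$, just as the deformation carries $J_{c+\delta}$ into $J_{c-\delta}$ when no critical value blocks the way.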
On the other hand, since $0$ and $e$ are in $J_{c-\epsilon}$, by (ii) of the Deformation Theorem, $\eta(1, 0) = 0$ and $\eta(1, e) = e$. Hence $\eta(1, \gamma([0,1]))$ is also a path in $\Gamma$, and therefore, by the definition of $c$, we must have

\[ \max_{u\in\eta(1,\gamma([0,1]))} J(u) \geq c. \tag{2.30} \]

This contradicts

\[ \max_{u\in\eta(1,\gamma([0,1]))} J(u) \leq c - \delta, \]

and hence completes the proof of the Theorem.

2.4.6 Existence of a Minimax via the Mountain Pass Theorem

Now we apply the Mountain Pass Theorem to seek weak solutions of the semilinear elliptic problem

\[ \left\{ \begin{array}{ll} -\Delta u = f(x, u), & x \in \Omega,\\ u(x) = 0, & x \in \partial\Omega. \end{array} \right. \tag{2.31} \]

Although the method we illustrate here could be applied to more general second order uniformly elliptic equations, we prefer this simple model that better illustrates the main ideas.

We assume that $f(x, u)$ satisfies

$(f_1)$ $f(x, s) \in C(\bar{\Omega}\times R^1, R^1)$,

$(f_2)$ there exist constants $c_1, c_2 \geq 0$, such that

\[ |f(x, s)| \leq c_1 + c_2|s|^p, \]

with $0 \leq p < \frac{n+2}{n-2}$ if $n > 2$; if $n = 2$, it suffices that

\[ |f(x, s)| \leq c_1 e^{\phi(s)}, \]

with $\phi(s)/s^2\to 0$ as $|s|\to\infty$; if $n = 1$, $(f_2)$ can be dropped,

$(f_3)$ $f(x, s) = o(|s|)$ as $s\to 0$, and

$(f_4)$ there are constants $\mu > 2$ and $r \geq 0$ such that for $|s| \geq r$,

\[ 0 < \mu F(x, s) \leq s f(x, s), \]

where $F(x, s) = \int_0^s f(x, t)\,dt$.

Remark 2.4.2 A simple example of such a function $f(x, u)$ is $|u|^{p-1}u$.

Theorem 2.4.6 (Rabinowitz [Ra]) If $f$ satisfies conditions $(f_1)$-$(f_4)$, then problem (2.31) possesses a nontrivial weak solution.
To find weak solutions of the problem, we seek minimax critical points of the functional

\[ J(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx - \int_\Omega F(x, u)\,dx \]

on the Hilbert space $H := H_0^1(\Omega)$. We verify that $J$ satisfies the conditions in the Mountain Pass Theorem.

We first need to show that $J \in C^1(H, R^1)$. Since we have verified that the first term $\int_\Omega |Du|^2\,dx$ is continuously differentiable, now we only need to check the second term

\[ I(u) := \int_\Omega F(x, u)\,dx. \]

Proposition 2.4.1 (Rabinowitz [Ra]) Let $\Omega \subset R^n$ be a bounded domain and let $g$ satisfy

$(g_1)$ $g \in C(\bar{\Omega}\times R^1, R^1)$, and

$(g_2)$ there are constants $r, s \geq 1$ and $a_1, a_2 \geq 0$ such that

\[ |g(x, \xi)| \leq a_1 + a_2|\xi|^{r/s} \]

for all $x \in \bar{\Omega}$, $\xi \in R^1$.

Then the map $u(x)\mapsto g(x, u(x))$ belongs to $C(L^r(\Omega), L^s(\Omega))$.

Proof. By $(g_2)$, if $u \in L^r(\Omega)$, then $g(x, u) \in L^s(\Omega)$:

\[ \int_\Omega |g(x, u(x))|^s\,dx \leq \int_\Omega (a_1 + a_2|u|^{r/s})^s\,dx \leq a_3\int_\Omega (1 + |u|^r)\,dx. \]

Now we verify the continuity of the map. Fix $u \in L^r(\Omega)$. Given any $\epsilon > 0$, we want to show that there is a $\delta > 0$, such that

\[ \| g(\cdot, u+\phi) - g(\cdot, u) \|_{L^s(\Omega)} < \epsilon \quad \mbox{whenever } \|\phi\|_{L^r(\Omega)} < \delta. \tag{2.32} \]

For $m > 0$ to be chosen below, let

\[ \Omega_1 := \{ x \in \Omega \mid |\phi(x)| \leq m \}, \quad \Omega_2 := \Omega\setminus\Omega_1, \]

and
\[ I_i = \int_{\Omega_i} |g(x, u(x)+\phi(x)) - g(x, u(x))|^s\,dx, \quad i = 1, 2. \]

First choose $\eta > 0$ so small that

\[ |\Omega|\,\eta^s < \frac{\epsilon^s}{2}, \tag{2.33} \]

where $|\Omega|$ is the volume of $\Omega$. By the continuity of $g(x, \cdot)$, there exists $m > 0$, such that

\[ |g(x, u(x)+\phi(x)) - g(x, u(x))| \leq \eta, \quad \forall\, x \in \Omega_1. \]

It follows that

\[ I_1 \leq |\Omega_1|\,\eta^s \leq |\Omega|\,\eta^s < \frac{\epsilon^s}{2}. \]

Now we fix this $m$, and estimate $I_2$. By $(g_2)$, for some constants $c_1$ and $c_2$,

\[ I_2 \leq \int_{\Omega_2} \left( c_1 + c_2(|u|^r + |\phi|^r) \right) dx \leq c_1|\Omega_2| + c_2\int_{\Omega_2} |u|^r\,dx + c_2\|\phi\|^r_{L^r(\Omega)}. \tag{2.34} \]

Since $\|\phi\|_{L^r(\Omega)} < \delta$, we have

\[ \delta^r > \int_\Omega |\phi|^r\,dx \geq m^r|\Omega_2|. \tag{2.35} \]

Since $u \in L^r(\Omega)$, we can make $\int_{\Omega_2} |u|^r\,dx$ as small as we wish by letting $|\Omega_2|$ be small. Hence, by (2.35), we can choose $\delta$ so small that $|\Omega_2|$ is sufficiently small, and the right hand side of (2.34) is less than $\frac{\epsilon^s}{2}$. Consequently,

\[ I_1 + I_2 \leq \frac{\epsilon^s}{2} + \frac{\epsilon^s}{2} = \epsilon^s. \]

This verifies (2.32), and therefore completes the proof of the Proposition.

Proposition 2.4.2 (Rabinowitz [Ra]) Assume that $f(x, s)$ satisfies $(f_1)$ and $(f_2)$. Let $n \geq 3$. Then $I \in C^1(H, R^1)$ and

\[ \langle I'(u), v\rangle = \int_\Omega f(x, u)v\,dx, \quad \forall\, v \in H. \]

Moreover, $I(\cdot)$ is weakly continuous and $I'(\cdot)$ is compact, i.e., whenever $u_k \rightharpoonup u$ in $H$,

\[ I(u_k)\to I(u) \ \mbox{ and } \ I'(u_k)\to I'(u). \]

Proof.

1. We will first show that $I$ is Frechet differentiable on $H$ and then prove that $I'(\cdot)$ is continuous. Let $u, v \in H$, and let

\[ Q(u, v) := \Big| I(u+v) - I(u) - \int_\Omega f(x, u(x))v(x)\,dx \Big|. \]

We want to show that, given any $\epsilon > 0$, there exists a $\delta = \delta(\epsilon, u)$, such that

\[ Q(u, v) \leq \epsilon\|v\| \quad \mbox{whenever } \|v\| \leq \delta. \tag{2.36} \]

Here $\|v\|$ denotes the $H_0^1(\Omega)$ norm of $v$. In fact,

\[ Q(u, v) \leq \int_\Omega |F(x, u(x)+v(x)) - F(x, u(x)) - f(x, u(x))v(x)|\,dx = \int_\Omega |f(x, u(x)+\xi(x)) - f(x, u(x))|\,|v(x)|\,dx \]
\[ \leq \|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \leq K\,\|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,\|v\|. \tag{2.37} \]

Here we have applied the Mean Value Theorem (with $|\xi(x)| \leq |v(x)|$), the Holder inequality, and the Sobolev inequality

\[ \|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \leq K\|v\|. \tag{2.38} \]

In Proposition 2.4.1, let $r = \frac{2n}{n-2}$ and $s = \frac{2n}{n+2}$. Then, by $(f_2)$, we deduce that the map $u(x)\mapsto f(x, u(x))$ is continuous from $L^{\frac{2n}{n-2}}(\Omega)$ to $L^{\frac{2n}{n+2}}(\Omega)$. Whenever $\|v\| < \delta$, we have $\|v\|_{L^{\frac{2n}{n-2}}} < K\delta$, and hence $\|\xi\|_{L^{\frac{2n}{n-2}}} < K\delta$. Therefore, for any given $\epsilon > 0$, we can choose $\delta > 0$ sufficiently small, such that whenever $\|v\| < \delta$,

\[ \|f(\cdot, u+\xi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)} < \frac{\epsilon}{K}. \tag{2.39} \]

It follows from (2.37) that $Q(u, v) \leq \epsilon\|v\|$. This verifies (2.36), and hence $I(\cdot)$ is differentiable.

2. To verify the continuity of $I'(\cdot)$, we show that

\[ \|I'(u+\phi) - I'(u)\|\to 0, \ \mbox{ as } \|\phi\|\to 0. \]

By definition,

\[ \|I'(u+\phi) - I'(u)\| = \sup_{\|v\|\leq 1} |\langle I'(u+\phi) - I'(u), v\rangle| = \sup_{\|v\|\leq 1}\Big| \int_\Omega [f(x, u(x)+\phi(x)) - f(x, u(x))]v(x)\,dx \Big| \]
\[ \leq \sup_{\|v\|\leq 1} \|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}\,\|v\|_{L^{\frac{2n}{n-2}}(\Omega)} \leq K\,\|f(\cdot, u+\phi) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \tag{2.40} \]

Here we have applied the Holder inequality and the Sobolev inequality (2.38). By Proposition 2.4.1 again, the right hand side of (2.40) tends to zero as $\|\phi\|\to 0$. Hence $I'(\cdot)$ is continuous.

3. Finally, we verify the weak continuity of $I(\cdot)$ and the compactness of $I'(\cdot)$. Assume that $\{u_k\} \subset H$, and $u_k \rightharpoonup u$ in $H$. We show that $I(u_k)\to I(u)$ as $k\to\infty$. In fact,

\[ |I(u_k) - I(u)| \leq \int_\Omega |F(x, u_k(x)) - F(x, u(x))|\,dx \leq \int_\Omega |f(x, \xi_k(x))|\,|u_k(x) - u(x)|\,dx \]
\[ \leq \|f(\cdot, \xi_k)\|_{L^r}\,\|u_k - u\|_{L^q} \leq C\Big( 1 + \|\xi_k\|^p_{L^{\frac{2n}{n-2}}(\Omega)} \Big)\|u_k - u\|_{L^q}, \tag{2.41} \]

where

\[ r = \frac{2n}{p(n-2)} \quad \mbox{and} \quad q = \frac{r}{r-1} = \frac{2n}{2n - p(n-2)}. \]

It is easy to see that $q < \frac{2n}{n-2}$. Here we have applied the Mean Value Theorem (with $\xi_k(x)$ between $u_k(x)$ and $u(x)$) and the Holder inequality. Since a weakly convergent sequence is bounded, $\{u_k\}$ is bounded in $H$, and hence, by the Sobolev embedding, it is bounded in $L^{\frac{2n}{n-2}}(\Omega)$; so is $\{\xi_k\}$. Furthermore, by the compact embedding of $H$ into $L^q(\Omega)$, $u_k\to u$ strongly in $L^q(\Omega)$, and therefore the last term of (2.41) approaches zero as $k\to\infty$. This verifies the weak continuity of $I(\cdot)$.

Using a similar argument as in deriving (2.40), we obtain

\[ \|I'(u_k) - I'(u)\| \leq K\,\|f(\cdot, u_k) - f(\cdot, u)\|_{L^{\frac{2n}{n+2}}(\Omega)}. \tag{2.42} \]

Since $p < \frac{n+2}{n-2}$, we have $\frac{2np}{n+2} < \frac{2n}{n-2}$, and hence the Sobolev embedding

\[ H \hookrightarrow\hookrightarrow L^{\frac{2np}{n+2}}(\Omega) \]

is compact. Consequently, $u_k\to u$ strongly in $L^{\frac{2np}{n+2}}(\Omega)$, and then, by Proposition 2.4.1, $f(\cdot, u_k)\to f(\cdot, u)$ strongly in $L^{\frac{2n}{n+2}}(\Omega)$. Together with (2.42), we arrive at

\[ I'(u_k)\to I'(u), \ \mbox{ as } k\to\infty. \]

This completes the proof of the proposition.

Now let's come back to Problem (2.31) and prove Theorem 2.4.6. We will verify that $J(\cdot)$ satisfies the conditions in the Mountain Pass Theorem, and thus possesses a nontrivial minimax critical point, which is a weak solution of (2.31).

First we verify the (PS) condition. Assume that $\{u_k\}$ is a sequence in $H$ satisfying

\[ J(u_k) \leq M \ \mbox{ and } \ J'(u_k)\to 0. \]

By $(f_4)$, we have

\[ \mu M + \|J'(u_k)\|\,\|u_k\| \geq \mu J(u_k) - \langle J'(u_k), u_k\rangle \]
\[ = \Big( \frac{\mu}{2} - 1 \Big)\int_\Omega |Du_k|^2\,dx + \int_\Omega [f(x, u_k)u_k - \mu F(x, u_k)]\,dx \]
\[ \geq \Big( \frac{\mu}{2} - 1 \Big)\int_\Omega |Du_k|^2\,dx + \int_{|u_k(x)|\leq r} [f(x, u_k)u_k - \mu F(x, u_k)]\,dx \]
\[ \geq \Big( \frac{\mu}{2} - 1 \Big)\int_\Omega |Du_k|^2\,dx - (a_1 r + a_2 r^{p+1})|\Omega| \]
\[ = c_o\|u_k\|^2 - C_o \tag{2.43} \]

for some positive constants $c_o$ and $C_o$. Since $\|J'(u_k)\|$ is bounded, (2.43) implies that $\{u_k\}$ must be bounded in $H$. Hence there exists a subsequence (still denoted by $\{u_k\}$), which converges weakly to some element $u_o$ in $H$. We show that $\{u_k\}$ possesses a strongly convergent subsequence.

Let $A : H\to H^*$ be the duality map between $H$ and its dual space $H^*$:

\[ \langle Au, v\rangle = \int_\Omega Du\cdot Dv\,dx, \quad \forall\, u, v \in H. \]
Then

\[ A^{-1}J'(u) = u - A^{-1}f(\cdot, u). \]

From Proposition 2.4.2, $f(\cdot, u_k)\to f(\cdot, u_o)$ strongly in $H^*$. It follows that

\[ u_k = A^{-1}J'(u_k) + A^{-1}f(\cdot, u_k) \to A^{-1}f(\cdot, u_o), \ \mbox{ as } k\to\infty, \tag{2.44} \]

strongly in $H$. This verifies the (PS).

To see that there is a 'mountain range' surrounding the origin, i.e. condition $(J_1)$, we estimate $F(x, u)$. By $(f_3)$, for any given $\epsilon > 0$, there is a $\delta > 0$, such that for all $x \in \bar{\Omega}$,

\[ |F(x, s)| \leq \epsilon|s|^2, \quad \mbox{whenever } |s| < \delta. \tag{2.45} \]

While, by $(f_2)$, for this $\delta$, there is a constant $M_1 = M_1(\delta)$, such that for all $x \in \bar{\Omega}$ and $|s| \geq \delta$,

\[ |F(x, s)| \leq M_1|s|^{p+1}. \tag{2.46} \]

Combining (2.45) and (2.46), we have, for all $x \in \bar{\Omega}$ and for all $s \in R^1$,

\[ |F(x, s)| \leq \epsilon|s|^2 + M_1|s|^{p+1}. \tag{2.47} \]

Consequently, via the Poincare and the Sobolev inequalities,

\[ \Big| \int_\Omega F(x, u)\,dx \Big| \leq \epsilon\int_\Omega |u|^2\,dx + M_1\int_\Omega |u|^{p+1}\,dx \leq C(\epsilon + M_1\|u\|^{p-1})\|u\|^2. \tag{2.48} \]

It follows that

\[ J(u) \geq \Big[ \frac{1}{2} - C(\epsilon + M_1\|u\|^{p-1}) \Big]\|u\|^2. \]

Choose $\epsilon = \frac{1}{8C}$, then choose $\|u\| = \rho$ small, so that $CM_1\rho^{p-1} \leq \frac{1}{8}$. Then, from (2.48), we deduce

\[ J|_{\partial B_\rho(0)} \geq \frac{1}{4}\rho^2. \]

To see the existence of a point $e \in H$ with $J(e) \leq 0$, i.e. condition $(J_2)$, we fix any $u \not\equiv 0$ in $H$, and consider

\[ J(tu) = \frac{t^2}{2}\int_\Omega |Du|^2\,dx - \int_\Omega F(x, tu)\,dx. \tag{2.49} \]

By $(f_4)$, for $|s| \geq r$,

\[ \frac{d}{ds}\big( |s|^{-\mu}F(x, s) \big) = |s|^{-\mu-2}\,s\,\big( -\mu F(x, s) + s f(x, s) \big) \ \left\{ \begin{array}{ll} \geq 0 & \mbox{if } s \geq r,\\ \leq 0 & \mbox{if } s \leq -r. \end{array} \right. \tag{2.50} \]

Integrating, and taking into account that $\mu > 2$, we deduce that, for $|s| \geq r$,

\[ F(x, s) \geq a_3|s|^\mu, \]

and it follows that, for all $s$,

\[ F(x, s) \geq a_3|s|^\mu - a_4. \]

Combining this with (2.49), and recalling that $\mu > 2$, we conclude that

\[ J(tu) \leq \frac{t^2}{2}\int_\Omega |Du|^2\,dx - a_3 t^\mu \int_\Omega |u|^\mu\,dx + a_4|\Omega| \to -\infty, \ \mbox{ as } t\to\infty. \]

Hence, for $t$ sufficiently large, $e := tu$ satisfies $J(e) \leq 0$.

Now $J$ satisfies all the conditions in the Mountain Pass Theorem, hence it possesses a minimax critical point $u_o$, which is a weak solution of (2.31). Moreover, $J(u_o) > 0$, therefore it is a nontrivial solution. This completes the proof of Theorem 2.4.6.
3 Regularity of Solutions

3.1 $W^{2,p}$ a Priori Estimates
  3.1.1 Newtonian Potentials
  3.1.2 Uniform Elliptic Equations
3.2 $W^{2,p}$ Regularity of Solutions
  3.2.1 The Case $p \geq 2$
  3.2.2 The Case $1 < p < 2$
3.3 Regularity Lifting
  3.3.1 Bootstraps
  3.3.2 The Regularity Lifting Theorem
  3.3.3 Applications to PDEs
  3.3.4 Applications to Integral Equations
3.4 Other Useful Results Concerning the Existence, Uniqueness, and Regularity

In the previous chapter, we used functional analysis methods, mainly the calculus of variations, to seek the existence of weak solutions for second order linear or semilinear elliptic equations. These weak solutions we obtained were in the Sobolev space $W^{1,2}$, and hence, by the Sobolev imbedding, in $L^p$ spaces for $p \leq \frac{2n}{n-2}$. Usually, the weak solutions are easier to obtain than the classical ones, and this is particularly true for nonlinear equations. However, in practice, we are often required to find classical solutions. Therefore, one would naturally want to know whether these weak solutions are actually differentiable, so that they can become classical ones. These questions will be answered here.

In this chapter, we will introduce methods to show that, in most cases, a weak solution is in fact smooth. This is called the regularity argument. Very often, the regularity is equivalent to the a priori estimate of solutions. To see the difference between the regularity and the a priori estimate, we take the equation

\[ -\Delta u = f(x) \]

for example. Roughly speaking, the $W^{2,p}$ regularity theory infers that:

If $u \in W^{1,2}$ is a weak solution and if $f \in L^p$, then $u \in W^{2,p}$.

While the $W^{2,p}$ a priori estimate says:

If $u \in W^{1,p}_0\cap W^{2,p}$ is a strong solution and if $f \in L^p$, then there exists a constant $C$, such that

\[ \|u\|_{W^{2,p}} \leq C\big( \|u\|_{L^p} + \|f\|_{L^p} \big). \]

That is, in the a priori estimate, we pre-assumed that $u$ is in $W^{2,p}$. It might seem a little bit surprising at this moment that the a priori estimates turn out to be powerful tools in deriving regularities. The readers will see in Section 3.2 how the a priori estimates and the uniqueness of solutions lead to the regularity.

Let $\Omega$ be an open bounded set in $R^n$. Consider the second order partial differential equation

\[ Lu := -\sum_{i,j=1}^n a_{ij}(x)u_{x_i x_j} + \sum_{i=1}^n b_i(x)u_{x_i} + c(x)u = f(x). \tag{3.1} \]

Let $a_{ij}(x)$, $b_i(x)$, and $c(x)$ be bounded functions on $\Omega$ with $a_{ij}(x) = a_{ji}(x)$. We assume that $L$ is uniformly elliptic, that is, there exists a constant $\delta > 0$, such that

\[ \sum_{i,j=1}^n a_{ij}(x)\xi_i\xi_j \geq \delta|\xi|^2 \]

for a.e. $x \in \Omega$ and for all $\xi \in R^n$.

In Section 3.1, we establish a $W^{2,p}$ a priori estimate for the solutions of (3.1): we show that if $f(x)$ is in $L^p(\Omega)$ and $u \in W^{2,p}(\Omega)$ is a strong solution of (3.1), then $\|u\|_{W^{2,p}} \leq C(\|u\|_{L^p} + \|f\|_{L^p})$. We will first establish this for a Newtonian potential, then use the method of 'frozen coefficients' to generalize it to uniformly elliptic equations.

In Section 3.2, we establish the $W^{2,p}$ regularity: we show that if the function $f(x)$ is in $L^p(\Omega)$, then any weak solution $u$ of (3.1) is in $W^{2,p}(\Omega)$.

In Section 3.3, we introduce and prove a Regularity Lifting Theorem. It provides a simple method for the study of regularity. The version we present here contains some new developments; it is much more general and very easy to use. We will also use examples to show how this method can be applied to boost the regularity of the solutions for PDEs as well as for integral equations. We believe that the method will be helpful to both experts and non-experts in the field.
For p ≥ 1. x ∈ Ω.p (Ω) and w = f (x). The Newtonian potential is known as w(x) = Γ (x − y)f (y)dy.1. ∀t > 0} < ∞.p a Priori Estimates We establish W 2. Let Γ (x) = 1 1 n(n − 2)ωn xn−2 n ≥ 3.1. a.1 W 2.3) (3. and D2 w Lp (Ω) (3. For ﬁxed i.p a Priori Estimates 75 3. Then w ∈ W 2.1.5) Our proof can actually be applied to more general operators.1 Newtonian Potentials Let Ω be a bounded domain of Rn . deﬁne the linear operator T as T f = Dij N f = Dij Rn Γ (x − y)f (y)dy To prove Theorem 3. (3.3.1 Let f ∈ Lp (Ω) for 1 < p < ∞. Write w = N f.p estimate for the solutions of Lu = f (x) . Ω We prove Theorem 3. (3. and let w be the Newtonian potential of f .1 W 2. 3. be the fundamental solution of the Laplace equation.4) ≤C f Lp (Ω) . To this end. x ∈ Ω.e.2) First we start with the Newtonian potential. Deﬁne µf (t) = {x ∈ Ω  f (x) > t}. it is equivalent to show that T : Lp (Ω) −→ Lp (Ω) is a bounded linear operator . a weak Lp space Lp (Ω) is the collection of functions f that w satisfy f p Lp (Ω) w = sup{µf (t)tp . we introduce the concept of weak type and strong type operators. where N is obviously a linear operator.1. j. .
An operator T : L^p(Ω) → L^q(Ω) is of strong type (p, q) if

  ‖Tf‖_{L^q(Ω)} ≤ C ‖f‖_{L^p(Ω)},  ∀ f ∈ L^p(Ω),

and of weak type (p, q) if

  ‖Tf‖_{L^q_w(Ω)} ≤ C ‖f‖_{L^p(Ω)},  ∀ f ∈ L^p(Ω).

Outline of the Proof of (3.5). We decompose the proof into the proofs of the following four lemmas.

First, we use the Fourier transform to show

Lemma 3.1.1 T : L²(Ω) → L²(Ω) is a bounded linear operator, i.e. T is of strong type (2, 2).

Secondly, we use the Calderón–Zygmund Decomposition Lemma to prove

Lemma 3.1.2 T is of weak type (1, 1).

Thirdly, we employ the Marcinkiewicz Interpolation Theorem to derive

Lemma 3.1.3 T is of strong type (r, r) for any 1 < r ≤ 2.

Finally, by duality, we conclude

Lemma 3.1.4 T is of strong type (p, p) for 1 < p < ∞.

The Proof of Lemma 3.1.1. First we consider f ∈ C_0^∞(Ω) ⊂ C_0^∞(R^n). Let

  f̂(ξ) = (2π)^{−n/2} ∫_{R^n} e^{i⟨x,ξ⟩} f(x) dx

be the Fourier transform of f, where ⟨x, ξ⟩ = Σ_k x_k ξ_k is the inner product of R^n and i = √−1. We need the following simple properties of the transform,

  (D_k f)^(ξ) = − i ξ_k f̂(ξ)  and  (D_{jk} f)^(ξ) = − ξ_j ξ_k f̂(ξ),  (3.6)

and the well-known Plancherel theorem:

  ‖f̂‖_{L²(R^n)} = ‖f‖_{L²(R^n)}.  (3.7)

For such an f, obviously w ∈ C^∞(R^n) and satisfies Δw = f(x) for all x ∈ R^n. It then follows from (3.6) and (3.7) that the L² norms of f and D²w coincide, as the computation below shows.
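Plancherel's theorem (3.7) has an exact discrete counterpart: the unitarily normalized DFT preserves the ℓ² norm. A quick numerical check (the test function is an arbitrary choice):

```python
import numpy as np

# Discrete analogue of Plancherel's theorem (3.7): with the unitary ("ortho")
# normalization, the DFT preserves the l^2 norm exactly, up to roundoff.
x = np.linspace(-10.0, 10.0, 1024, endpoint=False)
f = np.exp(-x**2) * np.cos(3.0 * x)     # a smooth, rapidly decaying sample

f_hat = np.fft.fft(f, norm="ortho")
lhs = np.linalg.norm(f_hat)             # ||f_hat||_2
rhs = np.linalg.norm(f)                 # ||f||_2
print(abs(lhs - rhs))                   # essentially zero (roundoff)
```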
For f ∈ C_0^∞(Ω) and w = N f, it follows from (3.6) and (3.7) that

  ∫_Ω |f(x)|² dx = ∫_{R^n} |f(x)|² dx = ∫_{R^n} |Δw(x)|² dx
   = ∫_{R^n} |ξ|⁴ |ŵ(ξ)|² dξ
   = Σ_{k,j=1}^n ∫_{R^n} ξ_k² ξ_j² |ŵ(ξ)|² dξ
   = Σ_{k,j=1}^n ∫_{R^n} |D_{kj} w(x)|² dx = ∫_{R^n} |D²w|² dx,

where we used (3.6) and (3.7) in the third and fifth equalities. Consequently,

  ‖Tf‖_{L²(Ω)} ≤ ‖f‖_{L²(Ω)},  ∀ f ∈ C_0^∞(Ω).  (3.8)

Now, to verify (3.8) for any f ∈ L²(Ω), we simply pick a sequence {f_k} ⊂ C_0^∞(Ω) that converges to f in L²(Ω) and take the limit. This proves the Lemma.

The Proof of Lemma 3.1.2. We need the following well-known Calderón–Zygmund Decomposition Lemma (see its proof in Appendix C).

Lemma 3.1.5 (Calderón–Zygmund Decomposition) Let f ∈ L¹(R^n), and fix α > 0. Then there exist sets E and G such that

  (i) R^n = E ∪ G, E ∩ G = ∅;
  (ii) |f(x)| ≤ α for a.e. x ∈ E;
  (iii) G = ∪_{k=1}^∞ Q_k, where the Q_k are disjoint cubes such that
    α < (1/|Q_k|) ∫_{Q_k} |f(x)| dx ≤ 2^n α.

For any f ∈ L¹(Ω), we first extend f to vanish outside Ω. Then, for a given α > 0, to apply the Calderón–Zygmund Decomposition Lemma, we fix a cube Q_o in R^n so large that

  ∫_{Q_o} |f(x)| dx ≤ α |Q_o|.
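The stopping-time construction behind Lemma 3.1.5 can be run directly in one dimension. The sketch below is an illustration, not the Appendix C proof: it bisects dyadic subintervals of [0, 1), stopping the first time an average exceeds α; each selected cube then satisfies α < average ≤ 2α (that is, 2^n α with n = 1), since its parent had average at most α. The sample f and the value of α are arbitrary choices.

```python
import numpy as np

# 1-D Calderon-Zygmund decomposition by dyadic bisection.
N = 2**12
x = np.linspace(0.0, 1.0, N, endpoint=False)
f = 1.0 / np.sqrt(x + 1e-3)                    # integrable spike near 0
alpha = 2.0
assert f.mean() <= alpha                       # the initial cube qualifies

bad_cubes = []
def decompose(lo, hi):                         # invariant: mean of f on [lo,hi) <= alpha
    for a, b in ((lo, (lo + hi) // 2), ((lo + hi) // 2, hi)):
        if b - a == 0:
            continue
        if f[a:b].mean() > alpha:
            bad_cubes.append((a, b))           # stop here: a selected cube Q_k
        elif b - a > 1:
            decompose(a, b)                    # average still small: keep bisecting

decompose(0, N)
print(len(bad_cubes), "selected cubes")
```

Off the selected cubes, the remaining points play the role of the set E, on which the averages stay below α at every dyadic scale.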
(3. The estimate of the ﬁrst one 2 2 is easy.10) b(x)dx = 0 for k = 1. bkm (x)dx = Qk Qk bk (x)dx = 0. we have g(x) ≤ 2n α .1.12) We then estimate µT b . almost everywhere and b(x) = 0 for x ∈ E . from the deﬁnition of g. · · · . and therefore α α µT f (α) ≤ µT g ( ) + µT b ( ).78 3 Regularity of Solutions We show that µT f (α) := {x ∈ Rn  T f (x) ≥ α} ≤ C f L1 (Ω) α .10). and α α C C (b) {x ∈ E ∗  T b(x) ≥ } ≤ T b(x)dx ≤ f L1 (Ω) .1 and (3. we derive 4 α µT g ( ) ≤ 2 2 α g 2 (x)dx ≤ 2n+2 α g(x)dx ≤ 2n+2 α f (x)dx. and Qk (3. T f = T g + T b. where f (x) for x ∈ E g(x) = 1 Qk  Qk f (x)dx for x ∈ Qk . · · · . because g ∈ L2 . (3. Since the operator T is linear. (3. To estimate µT b ( α ). ∞ T bk . 2.9) Split the function f into the “good” part g and “bad” part b: f = g + b. Let bk (x) = Then Tb = k=1 ∞ For each ﬁxed k. By Lemma 3. (3. we divide Rn into two parts: G∗ 2 and E ∗ := Rn \ G∗ (see below for the precise deﬁnition of G∗ . let {bkm } ⊂ C0 (Qk ) be a sequence converging to bk in 2 L (Ω) satisfying b(x) for x ∈ Qk 0 elsewhere . 2 2 We will estimate µT g ( α ) and µT b ( α ) separately.13) .11) We ﬁrst estimate µT g . Obviously. E∗ 2 α α α These will imply the desired estimate for µT b ( 2 ).) We will show that C (a) G∗  ≤ f L1 (Ω) . 2. k = 1.
From the expression

  T b_{km}(x) = ∫_{Q_k} D_{ij} Γ(x − y) b_{km}(y) dy,

one can see that, due to the singularity of D_{ij} Γ(x − y) for y ∈ Q_k and the fact that b_{km} may not be bounded on Q_k, one can only estimate T b_{km}(x) when x is of a positive distance away from Q_k. For this reason, we cover Q_k by a bigger ball B_k which has the same center ȳ as Q_k, and whose radius δ_k is the same as the diameter of Q_k. Let

  G* = ∪_{k=1}^∞ B_k  and  E* = R^n \ G*.

We now estimate the integral in the complement of B_k:

  ∫_{R^n\B_k} |T b_{km}(x)| dx = ∫_{Q_o\B_k} | ∫_{Q_k} D_{ij} Γ(x − y) b_{km}(y) dy | dx
   = ∫_{Q_o\B_k} | ∫_{Q_k} [ D_{ij} Γ(x − y) − D_{ij} Γ(x − ȳ) ] b_{km}(y) dy | dx
   ≤ C δ_k ∫_{Q_o\B_k} 1/|x − ȳ|^{n+1} dx · ∫_{Q_k} |b_{km}(y)| dy
   ≤ C₁ δ_k ∫_{δ_k}^∞ 1/r² dr · ∫_{Q_k} |b_{km}(y)| dy
   ≤ C₂ ∫_{Q_k} |b_{km}(y)| dy,  (3.14)

where ȳ is the center of the cube Q_k. One small trick here is to subtract the term

  ∫_{Q_k} D_{ij} Γ(x − ȳ) b_{km}(y) dy

(which is 0 by (3.13)); this produces the helpful factor δ_k through the mean value theorem applied to the difference:

  |D_{ij} Γ(x − y) − D_{ij} Γ(x − ȳ)| = |(ȳ − y) · ∇(D_{ij} Γ)(x − ξ)| ≤ δ_k |∇(D_{ij} Γ)(x − ξ)|.

Now, letting m → ∞ in (3.14), we obtain

  ∫_{R^n\B_k} |T b_k(x)| dx ≤ C ∫_{Q_k} |b_k(y)| dy.

It follows that

  ∫_{E*} |Tb(x)| dx ≤ Σ_{k=1}^∞ ∫_{R^n\G*} |T b_k| dx ≤ Σ_{k=1}^∞ ∫_{R^n\B_k} |T b_k| dx
   ≤ C Σ_{k=1}^∞ ∫_{Q_k} |b_k(y)| dy ≤ C ∫_{R^n} |f(x)| dx.  (3.15)
Obviously,

  μ_{Tb}(α/2) ≤ |G*| + |{x ∈ E* : |Tb(x)| ≥ α/2}|.  (3.16)

By (iii) in the Calderón–Zygmund Decomposition Lemma, we have

  |G*| = Σ_{k=1}^∞ |B_k| = C Σ_{k=1}^∞ |Q_k| ≤ (C/α) Σ_{k=1}^∞ ∫_{Q_k} |f(x)| dx = (C/α) ∫_{R^n} |f(x)| dx.  (3.17)

Write

  E*_α = {x ∈ E* : |Tb(x)| ≥ α/2}.

Then by (3.15), we derive

  |E*_α| · (α/2) ≤ ∫_{E*_α} |Tb(x)| dx ≤ ∫_{E*} |Tb(x)| dx ≤ C ∫_{R^n} |f(x)| dx.  (3.18)

Now the desired inequality (3.9) is a direct consequence of (3.11), (3.12), (3.16), (3.17), and (3.18). This completes the proof of the Lemma.

The Proof of Lemma 3.1.3. This is a direct consequence of the Marcinkiewicz Interpolation Theorem in the following restricted form (its proof can be found in Appendix C).

Lemma 3.1.6 (Marcinkiewicz Interpolation) Let T be a linear operator from L^p(Ω) ∩ L^q(Ω) into itself with 1 ≤ p < q < ∞. Assume that T is of weak type (p, p) and of weak type (q, q); more precisely, assume there exist constants B_p and B_q such that

  μ_{Tf}(t) ≤ ( B_p ‖f‖_p / t )^p  and  μ_{Tf}(t) ≤ ( B_q ‖f‖_q / t )^q,  ∀ f ∈ L^p(Ω) ∩ L^q(Ω).

Then for any p < r < q, T is of strong type (r, r):

  ‖Tf‖_r ≤ C B_p^θ B_q^{1−θ} ‖f‖_r,  ∀ f ∈ L^p(Ω) ∩ L^q(Ω),

where

  1/r = θ/p + (1 − θ)/q,

and C depends only on p, q, and r.

In the previous lemmas, we have shown that the operator T is of weak type (1, 1) and of strong type (2, 2) (of course also of weak type (2, 2)). From Lemma 3.1.6 we thus know that, for any 1 < r ≤ 2, T is of strong type (r, r):

  ‖Tg‖_{L^r(Ω)} ≤ C_r ‖g‖_{L^r(Ω)}.  (3.19)

The Proof of Lemma 3.1.4.
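The exponent relation in Lemma 3.1.6 is pure arithmetic and can be checked exactly. A sketch with the endpoints used for T above, (p, q) = (1, 2); the sample values of r are arbitrary:

```python
from fractions import Fraction

# Solve 1/r = theta/p + (1 - theta)/q for theta, exactly, and verify that a
# target exponent r strictly between the weak-type endpoints p < q always
# yields an interpolation parameter theta in (0, 1).
def interpolation_theta(p, q, r):
    p, q, r = Fraction(p), Fraction(q), Fraction(r)
    theta = (1/r - 1/q) / (1/p - 1/q)
    assert 1/r == theta/p + (1 - theta)/q      # the defining relation
    return theta

for r in (Fraction(6, 5), Fraction(3, 2), Fraction(9, 5)):
    theta = interpolation_theta(1, 2, r)
    assert 0 < theta < 1
    print(f"r = {r}: theta = {theta}")
```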
Let

  ⟨f, g⟩ = ∫_Ω f(x) g(x) dx

be the duality pairing between f and g. Given any 2 < p < ∞, let r = p/(p−1); obviously 1 < r < 2. Then it is easy to verify that

  ⟨g, Tf⟩ = ⟨Tg, f⟩.  (3.20)

It follows from (3.19) and (3.20) that

  ‖Tf‖_{L^p} = sup_{‖g‖_{L^r}=1} ⟨g, Tf⟩ = sup_{‖g‖_{L^r}=1} ⟨Tg, f⟩ ≤ sup_{‖g‖_{L^r}=1} ‖Tg‖_{L^r} ‖f‖_{L^p} ≤ C_r ‖f‖_{L^p}.

This completes the proof of the Lemma.

3.1.2 Uniform Elliptic Equations

In this section, we consider general second order uniformly elliptic equations with Dirichlet boundary condition,

  Lu := − Σ_{i,j=1}^n a_{ij}(x) u_{x_i x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u = f(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.  (3.21)

We assume that Ω is bounded with C^{2,α} boundary, a_{ij}(x) ∈ C(Ω̄), b_i(x), c(x) ∈ L^∞(Ω), and

  λ |ξ|² ≤ Σ a_{ij}(x) ξ_i ξ_j ≤ Λ |ξ|²

for some positive constants λ and Λ. Based on the result in the previous section, the estimates on the Newtonian potentials, we will establish a priori estimates on the strong solutions of (3.21).

Definition 3.1.1 We say that u is a strong solution of

  Lu = f(x), x ∈ Ω,

if u is twice weakly differentiable in Ω and satisfies the equation almost everywhere in Ω.

Theorem 3.1.2 Let u ∈ W^{2,p}(Ω) ∩ W_0^{1,p}(Ω) be a strong solution of (3.21). Assume that f ∈ L^p(Ω), 1 < p < ∞. Then

  ‖u‖_{W^{2,p}(Ω)} ≤ C ( ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ),  (3.22)

where C is a constant depending on ‖b_i‖_{L^∞(Ω)}, ‖c‖_{L^∞(Ω)}, λ, Λ, n, p, and the domain Ω.
Remark 3.1.1 The conditions b_i(x), c(x) ∈ L^∞(Ω) can be replaced by the weaker ones b_i(x), c(x) ∈ L^p(Ω), for any p > n.

Proof of Theorem 3.1.2. We divide the proof into two major parts.

Part 1. Interior Estimate:

  ‖D²u‖_{L^p(K)} ≤ C ( ‖∇u‖_{L^p(Ω)} + ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ),  (3.23)

where K is any compact subset of Ω.

Part 2. Boundary Estimate:

  ‖u‖_{W^{2,p}(Ω\Ω_δ)} ≤ C ( ‖∇u‖_{L^p(Ω)} + ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ),  (3.24)

where Ω_δ = {x ∈ Ω : d(x, ∂Ω) > δ}.

Combining the interior and the boundary estimates, we get

  ‖u‖_{W^{2,p}(Ω)} ≤ ‖u‖_{W^{2,p}(Ω\Ω_{2δ})} + ‖u‖_{W^{2,p}(Ω_δ)} ≤ C ( ‖∇u‖_{L^p(Ω)} + ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ).  (3.25)

Theorem 3.1.2 is then a simple consequence of (3.25) and the following well-known Gagliardo–John–Nirenberg type interpolation estimate:

  ‖∇u‖_{L^p(Ω)} ≤ ε ‖D²u‖_{L^p(Ω)} + C_ε ‖u‖_{L^p(Ω)}.  (3.26)

Substituting this estimate of ‖∇u‖_{L^p(Ω)} into the inequality (3.25), we get

  ‖u‖_{W^{2,p}(Ω)} ≤ C ε ‖D²u‖_{L^p(Ω)} + C ( C_ε ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ).

Choosing ε < 1/(2C), we can then absorb the term involving ‖D²u‖_{L^p(Ω)} on the right hand side into the left hand side and arrive at the conclusion of the theorem:

  ‖u‖_{W^{2,p}(Ω)} ≤ C ( ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ).

For both the interior and the boundary estimates, the main idea is the well-known method of "frozen" coefficients. Locally, near a point x_o, we may regard the leading coefficients a_{ij}(x) of the operator approximately as the constants a_{ij}(x_o) (as if the functions a_{ij} were frozen at the point x_o); thus we are able to treat the operator as a constant coefficient one and apply the potential estimates derived in the previous section.
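A one-dimensional instance of the mechanism behind the interpolation estimate (3.26) can be seen numerically. For smooth, rapidly decaying u, integration by parts gives ∫ u′² = −∫ u u″ ≤ ‖u‖₂ ‖u″‖₂. A sketch; the test function is an arbitrary choice:

```python
import numpy as np

# Check ||u'||_2^2 <= ||u||_2 * ||u''||_2 for a smooth, rapidly decaying u,
# the 1-D mechanism behind the interpolation estimate (3.26).
x = np.linspace(-12.0, 12.0, 4001)
u = np.exp(-x**2) * np.sin(2.0 * x)
du = np.gradient(u, x)
d2u = np.gradient(du, x)

l2 = lambda v: np.sqrt(np.trapz(v**2, x))
lhs = l2(du)**2
rhs = l2(u) * l2(d2u)
print(lhs, "<=", rhs)
```

Splitting the right hand side with Young's inequality, ab ≤ ε a² + b²/(4ε), yields ‖u′‖² ≤ ε ‖u″‖² + ‖u‖²/(4ε), which is exactly the shape used in the absorption argument above.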
Part 1: Interior Estimates.

First we define the cutoff function φ(s) to be a compactly supported C^∞ function with

  φ(s) = 1 if |s| ≤ 1;  φ(s) = 0 if |s| ≥ 2.

For any x_o ∈ Ω_{2δ}, let

  η(x) = φ( |x − x_o| / δ )  and  w(x) = η(x) u(x).

Here we notice that all terms are supported in B_{2δ} := B_{2δ}(x_o) ⊂ Ω. With a simple linear transformation, we may assume that a_{ij}(x_o) = δ_{ij}. We quantify the modulus of continuity of the functions a_{ij} by

  ω(δ) = sup { |a_{ij}(x) − a_{ij}(y)| : |x − y| ≤ δ, x, y ∈ Ω̄, 1 ≤ i, j ≤ n };

the function ω(δ) → 0 as δ → 0. In the following, we abbreviate Σ_{i,j=1}^n a_{ij} as a_{ij} and so on; that is, we skip the summation signs. Then

  Δw = a_{ij}(x_o) ∂²w/∂x_i∂x_j
   = ( a_{ij}(x_o) − a_{ij}(x) ) ∂²w/∂x_i∂x_j + a_{ij}(x) ∂²w/∂x_i∂x_j
   = ( a_{ij}(x_o) − a_{ij}(x) ) ∂²w/∂x_i∂x_j + η(x) a_{ij}(x) ∂²u/∂x_i∂x_j + a_{ij}(x) u(x) ∂²η/∂x_i∂x_j + 2 a_{ij}(x) (∂u/∂x_i)(∂η/∂x_j)
   = ( a_{ij}(x_o) − a_{ij}(x) ) ∂²w/∂x_i∂x_j + η(x) ( b_i(x) ∂u/∂x_i + c(x) u − f(x) ) + a_{ij}(x) u(x) ∂²η/∂x_i∂x_j + 2 a_{ij}(x) (∂u/∂x_i)(∂η/∂x_j)
   := F(x),  for x ∈ R^n.

Apparently both w and

  (Γ ∗ F)(x) := 1/( n(n−2)ω_n ) ∫_{R^n} F(y) / |x − y|^{n−2} dy

are solutions of Δu = F. Thus, by uniqueness, w = Γ ∗ F (see the exercise after the proof of the theorem). Now, following the Newtonian potential estimate (see Theorem 3.1.1), we obtain:
  ‖D²w‖_{L^p(B_{2δ})} = ‖D²w‖_{L^p(R^n)} ≤ C ‖F‖_{L^p(R^n)} = C ‖F‖_{L^p(B_{2δ})}.  (3.27)

Calculating ‖F‖ term by term, we have

  ‖F‖_{L^p(B_{2δ})} ≤ ω(2δ) ‖D²w‖_{L^p(B_{2δ})} + ‖f‖_{L^p(B_{2δ})} + C ( ‖∇u‖_{L^p(B_{2δ})} + ‖u‖_{L^p(B_{2δ})} ).

Combining this with the previous inequality (3.27) and choosing δ so small that C ω(2δ) < 1/2, we get

  ‖D²w‖_{L^p(B_{2δ})} ≤ C ω(2δ) ‖D²w‖_{L^p(B_{2δ})} + C ( ‖f‖_{L^p(B_{2δ})} + ‖∇u‖_{L^p(B_{2δ})} + ‖u‖_{L^p(B_{2δ})} )
   ≤ (1/2) ‖D²w‖_{L^p(B_{2δ})} + C ( ‖f‖_{L^p(B_{2δ})} + ‖∇u‖_{L^p(B_{2δ})} + ‖u‖_{L^p(B_{2δ})} ).

This is equivalent to

  ‖D²w‖_{L^p(B_{2δ})} ≤ C ( ‖f‖_{L^p(B_{2δ})} + ‖∇u‖_{L^p(B_{2δ})} + ‖u‖_{L^p(B_{2δ})} ).

Since u = w on B_δ(x_o), we have ‖D²u‖_{L^p(B_δ)} = ‖D²w‖_{L^p(B_δ)}; consequently,

  ‖D²u‖_{L^p(B_δ)} ≤ C ( ‖f‖_{L^p(B_{2δ})} + ‖∇u‖_{L^p(B_{2δ})} + ‖u‖_{L^p(B_{2δ})} ).  (3.28)

To extend this estimate from a δ-ball to a compact set, we notice that the collection of balls {B_δ(x) : x ∈ Ω̄_{2δ}} forms an open covering of Ω̄_{2δ}. Hence there are finitely many balls {B_δ(x_i) : i = 1, …, m} that already cover Ω_{2δ}. We apply the estimate (3.28) on each of the balls and sum up to obtain

  ‖D²u‖_{L^p(Ω_{2δ})} ≤ Σ_{i=1}^m ‖D²u‖_{L^p(B_δ(x_i))}
   ≤ Σ_{i=1}^m C ( ‖f‖_{L^p(B_{2δ}(x_i))} + ‖∇u‖_{L^p(B_{2δ}(x_i))} + ‖u‖_{L^p(B_{2δ}(x_i))} )  (3.29)
   ≤ C ( ‖f‖_{L^p(Ω)} + ‖∇u‖_{L^p(Ω)} + ‖u‖_{L^p(Ω)} ).  (3.30)

For any compact subset K of Ω, let δ < (1/2) dist(K, ∂Ω); then K ⊂ Ω_{2δ}, and we derive

  ‖D²u‖_{L^p(K)} ≤ C ( ‖∇u‖_{L^p(Ω)} + ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ).

This establishes the interior estimates.

Part 2. Boundary Estimates.

The boundary estimates are very similar; let us describe how almost the same scheme applies here. For any point x_o ∈ ∂Ω, the portion B_δ(x_o) ∩ ∂Ω
is a C^{2,α} graph for δ small. With a suitable rotation, we may assume that B_δ(x_o) ∩ ∂Ω is given by the graph of

  x_n = h(x_1, …, x_{n−1}) = h(x′),

and that Ω lies on top of this graph locally. Let

  y = ψ(x) = ( x′ − x_o′, x_n − h(x′) );

then ψ is a diffeomorphism that maps a neighborhood of x_o in Ω̄ onto B_r^+(0) = {y ∈ B_r(0) : y_n > 0}. The equation becomes

  − Σ_{i,j=1}^n ā_{ij}(y) u_{y_i y_j} + Σ_{i=1}^n b̄_i(y) u_{y_i} + c̄(y) u = f̄(y), y ∈ B_r^+(0);  u(y) = 0 for y ∈ ∂B_r(0) with y_n = 0,  (3.31)

where the new coefficients are computed from the original ones via the chain rule of differentiation; for instance,

  ā_{ij}(y) = (∂ψ^i/∂x_l)(ψ^{−1}(y)) a_{lk}(ψ^{−1}(y)) (∂ψ^j/∂x_k)(ψ^{−1}(y)).

If necessary, we make a further linear transformation so that ā_{ij}(0) = δ_{ij}; since a plane is still mapped to a plane, we may assume that equation (3.31) remains valid for some smaller r. Applying the method of frozen coefficients (with w(y) = φ(2|y|/r) u(y)), we get

  Δw(y) = F(y)  on B_r^+(0).

Now let w̄(y) and F̄(y) be the odd extensions of w(y) and F(y) from B_r^+(0) to B_r(0):

  w̄(y) = w(y₁, …, y_{n−1}, y_n) if y_n ≥ 0;  w̄(y) = − w(y₁, …, y_{n−1}, −y_n) if y_n < 0,

and similarly for F̄(y). Then one can check that

  Δw̄(y) = F̄(y),  y ∈ B_r(0).

Through the same calculations as in Part 1, we derive the basic boundary estimate

  ‖D²u‖_{L^p(B_r(x_o) ∩ Ω)} ≤ C ( ‖f‖_{L^p(B_{2r}(x_o) ∩ Ω)} + ‖∇u‖_{L^p(B_{2r}(x_o) ∩ Ω)} + ‖u‖_{L^p(B_{2r}(x_o) ∩ Ω)} )

for any x_o on ∂Ω and for some small radius r; the restriction of the right hand side to B_{2r}(x_o) ∩ Ω is possible because of the odd (symmetric) extension of w to w̄ from the half ball to the whole ball.
These balls form an open covering of the compact set ∂Ω, which thus has a finite subcovering B_{r_i}(x_i), i = 1, …, k. These balls also cover a neighborhood of ∂Ω that includes Ω \ Ω_δ for some δ small. Summing up the estimates and following the same steps as in the interior case, we get

  ‖u‖_{W^{2,p}(Ω\Ω_δ)} ≤ C ( ‖∇u‖_{L^p(Ω)} + ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} ).  (3.32)

This establishes the boundary estimates and hence completes the proof of the theorem.

Exercise 3.1.1 Assume that Ω is bounded, that Δw = F with F vanishing outside Ω, and that w ∈ W_0^{1,2}(Ω). Show that w = Γ ∗ F.
Hint: (a) Show that Δ(Γ ∗ F)(x) = 0 for x ∉ Ω̄. (b) Show that the claim is true for w ∈ C_0²(R^n). (c) Let J_ε be a mollifier; show that Δ(J_ε w) = J_ε(Δw), and thus J_ε w = Γ ∗ (J_ε F). Let ε → 0 to derive w = Γ ∗ F.

3.2 W^{2,p} Regularity of Solutions

In this section, we will use the a priori estimates established in the previous section to derive the W^{2,p} regularity for weak solutions. Let L be a second order uniformly elliptic operator in divergence form,

  Lu = − Σ_{i,j=1}^n ( a_{ij}(x) u_{x_i} )_{x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u.  (3.33)

Definition 3.2.1 We say that u ∈ W_0^{1,p}(Ω) is a weak solution of

  Lu = f(x), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω,  (3.34)

if for any v ∈ W_0^{1,q}(Ω), with 1/p + 1/q = 1,

  ∫_Ω [ Σ_{i,j} a_{ij}(x) u_{x_i} v_{x_j} + Σ_i b_i(x) u_{x_i} v + c(x) u v ] dx = ∫_Ω f(x) v dx.  (3.35)
Remark 3.2.1 Since C_0^∞(Ω) is dense in W_0^{1,q}(Ω), we only need to require that (3.35) be true for all v ∈ C_0^∞(Ω), and this is more convenient in many applications.

The main result of this section is

Theorem 3.2.1 Assume that Ω is a bounded domain in R^n. Let L be the uniformly elliptic operator defined in (3.33), with a_{ij}(x) Lipschitz continuous and b_i(x) and c(x) bounded. Assume that u ∈ W_0^{1,p}(Ω) is a weak solution of (3.34). If f(x) ∈ L^p(Ω), then u ∈ W^{2,p}(Ω).

Outline of the Proof. The proof of the theorem is quite complex and will be accomplished in two stages. In stage 1 (subsection 3.2.1), we consider the case p ≥ 2; in stage 2 (subsection 3.2.2), the case 1 < p < 2. The main difficulty lies in the uniqueness of the weak solution; as one will see, this uniqueness plays a key role in deriving the regularity. For p ≥ 2, one can rather easily show the uniqueness of the weak solutions, by multiplying both sides of the equation by the solution itself and integrating by parts. In stage 2, we first prove a W^{2,p} version of the Fredholm Alternative for p ≥ 2, based on the regularity result of stage 1; this will be used to deduce the existence of solutions of the equation

  − Σ_{ij} ( a_{ij}(x) w_{x_i} )_{x_j} = F(x).

We then use this equation, with a proper choice of F(x), to derive a contradiction if there exists a nontrivial solution of the equation

  − Σ_{ij} ( a_{ij}(x) v_{x_i} )_{x_j} = 0,

and then use the "frozen" coefficients method to derive the regularity for the general operator L.

The proof of the regularity is based on the following fundamental proposition on the existence, uniqueness, and regularity of solutions of the Laplace equation on the unit ball.

Proposition 3.2.1 Assume f ∈ L^p(B₁(0)) with p ≥ 2. Then the Dirichlet problem

  Δu = f(x), x ∈ B₁(0);  u(x) = 0, x ∈ ∂B₁(0),  (3.36)

possesses a unique solution u ∈ W^{2,p}(B₁(0)), satisfying

  ‖u‖_{W^{2,p}(B₁)} ≤ C ‖f‖_{L^p(B₁)}.  (3.37)

We will prove this proposition in subsection 3.2.1.
3.2.1 The Case p ≥ 2

We will prove Proposition 3.2.1 and use it to derive the regularity of weak solutions for the general operator L.

Remark 3.2.2 Actually, the conditions on the coefficients of L can be weakened; this will use the Regularity Lifting Theorem in Section 3.3, and hence will be deliberated there.

To derive the regularity, we need

Lemma 3.2.1 (Better A Priori Estimate in the Presence of Uniqueness) Assume that

  i) for solutions of
    Lu = f(x), x ∈ Ω,  (3.38)
  it holds the a priori estimate
    ‖u‖_{W^{2,p}(Ω)} ≤ C ( ‖u‖_{L^p(Ω)} + ‖f‖_{L^p(Ω)} );  (3.39)
  ii) (uniqueness) if Lu = 0, then u = 0.

Then we have the better a priori estimate for the solutions of (3.38):

  ‖u‖_{W^{2,p}(Ω)} ≤ C ‖f‖_{L^p(Ω)}.  (3.40)

Proof. Suppose the inequality (3.40) is false. Then there exists a sequence of functions {f_k} with ‖f_k‖_{L^p} = 1, and a corresponding sequence of solutions {u_k} of

  L u_k = f_k(x), x ∈ Ω,

such that ‖u_k‖_{W^{2,p}} → ∞ as k → ∞. It follows from the a priori estimate (3.39) that

  ‖u_k‖_{L^p} → ∞ as k → ∞.

Let

  v_k = u_k / ‖u_k‖_{L^p}  and  g_k = f_k / ‖u_k‖_{L^p}.

Then

  ‖v_k‖_{L^p} = 1  and  ‖g_k‖_{L^p} → 0,  (3.41)

and, from the equation,

  L v_k = g_k(x), x ∈ Ω.  (3.42)
By assumption i), it holds the a priori estimate

  ‖v_k‖_{W^{2,p}} ≤ C ( ‖v_k‖_{L^p} + ‖g_k‖_{L^p} ).  (3.43)

From (3.41) and (3.43), {v_k} is bounded in W^{2,p}, and hence there exists a subsequence (still denoted by {v_k}) that converges weakly to some v in W^{2,p}. By the compact Sobolev embedding, the same sequence converges strongly to v in L^p, and hence ‖v‖_{L^p} = 1. Passing to the limit in (3.42), we arrive at

  Lv = 0, x ∈ Ω.

Therefore, by the uniqueness assumption ii), we must have v = 0. This contradicts ‖v‖_{L^p} = 1 and therefore completes the proof.

The Proof of Proposition 3.2.1. For convenience, we abbreviate B_r(0) as B_r. For the existence, it is well known that, if f is continuous, then

  u(x) = ∫_{B₁} G(x, y) f(y) dy

is the solution of (3.36). Here G(x, y) is the Green's function of the unit ball,

  G(x, y) = Γ(x − y) − h(x, y),  with  h(x, y) = 1/( n(n−2)ω_n ) · 1/( |x| · | x/|x|² − y | )^{n−2}  for n ≥ 3,

in which

  Γ(x) = 1/( n(n−2)ω_n ) · 1/|x|^{n−2} (n ≥ 3);  Γ(x) = (1/2π) ln |x| (n = 2)

is the fundamental solution of the Laplace equation, as introduced in the definition of the Newtonian potentials.

To see the uniqueness, assume that both u and v are solutions of (3.36), and let w = u − v. Then w weakly satisfies

  Δw = 0, x ∈ B₁;  w = 0, x ∈ ∂B₁.

Multiplying both sides of the equation by w and then integrating on B₁, we derive

  ∫_{B₁} |∇w|² dx = 0.

Hence ∇w = 0 almost everywhere; this, together with the zero boundary condition, implies w = 0 almost everywhere.
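The two defining properties of G used here, positivity inside the ball and vanishing when one argument reaches the boundary, can be checked numerically from the explicit formula. A sketch for n = 3 (so the normalizing constant is 1/(4π)); the sample points are arbitrary choices:

```python
import numpy as np

# Green's function of the unit ball in R^3:
#   G(x, y) = Gamma(|x - y|) - Gamma(|x| * |x/|x|^2 - y|),  Gamma(r) = 1/(4*pi*r).
def gamma(r):
    return 1.0 / (4.0 * np.pi * r)

def G(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.linalg.norm
    return gamma(r(x - y)) - gamma(r(x) * r(x / r(x)**2 - y))

x = np.array([0.3, -0.2, 0.4])           # an interior point
y_boundary = np.array([0.6, 0.8, 0.0])   # |y| = 1: G should vanish
y_inside = np.array([-0.1, 0.5, 0.2])    # interior: G should be positive

print(G(x, y_boundary))   # ~0
print(G(x, y_inside))     # > 0
```

The boundary identity holds because |x|² |y|² − 2 x·y + 1 = |x − y|² whenever |y| = 1, so the two terms of G cancel exactly.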
For f ∈ L^p, let

  f_δ(x) = f(x) for x ∈ B_{1−δ};  f_δ(x) = 0 elsewhere,

and let

  u_δ(x) = ∫_{B₁} G(x, y) f_δ(y) dy.  (3.44)

We have already obtained the needed estimate on the first part,

  ∫_{B₁} Γ(x − y) f(y) dy,

when we worked on the Newtonian potentials; hence, by our result for the Newtonian potentials, D²u_δ is in L^p(B₁). For the second part, notice that h(x, y) is smooth when y is away from the boundary (see the exercise below); the addition of h(x, y) is to ensure that G(x, y) vanishes on the boundary. From (3.44) we thus derive that u_δ is in L^p(B₁) as well; applying the Poincaré inequality, we conclude that u_δ is in W^{2,p}(B₁).

Moreover, due to the uniqueness and Lemma 3.2.1, we have the better a priori estimate

  ‖u_δ‖_{W^{2,p}(B₁)} ≤ C ‖f_δ‖_{L^p(B₁)}.

Pick a sequence {δ_i} tending to zero. Then the corresponding solutions {u_{δ_i}} form a Cauchy sequence in W^{2,p}(B₁), because

  ‖u_{δ_i} − u_{δ_j}‖_{W^{2,p}(B₁)} ≤ C ‖f_{δ_i} − f_{δ_j}‖_{L^p(B₁)} → 0, as i, j → ∞.

Let

  u_o = lim_{i→∞} u_{δ_i}, in W^{2,p}.

Then u_o is a solution of (3.36), and the better a priori estimate (3.37) holds for u_o. This completes the proof of Proposition 3.2.1.

Exercise 3.2.1 Show that if |y| ≤ 1 − δ for some δ > 0, then there exists a constant c_o > 0 such that

  | x/|x|² − y | ≥ c_o, ∀ x ∈ B₁(0).

Proof of the Regularity Theorem 3.2.1 for p ≥ 2. The general frame of the proof is similar to that of the W^{2,p} a priori estimate in the previous section. The key difference is that in the a priori estimate we preassumed that u is a strong solution in W^{2,p}, while here we only assume that u is a weak solution in W^{1,p}. After reading the proof of the a priori estimates, the reader may notice that if one can establish the W^{2,p} regularity in a small
neighborhood of each point in Ω, then, by an entirely similar argument as in obtaining the a priori estimates, one can derive the regularity on the whole of Ω.

Let u be a W_0^{1,p}(Ω) weak solution of (3.34). For any x_o ∈ Ω_{2δ} := {x ∈ Ω : dist(x, ∂Ω) ≥ 2δ}, we define the cutoff function

  φ(s) = 1 if |s| ≤ 1;  φ(s) = 0 if |s| ≥ 2,

and let

  η(x) = φ( |x − x_o| / δ )  and  w(x) = η(x) u(x).

Then, obviously, w is supported in B_{2δ}(x_o). As before, we abbreviate Σ_{i,j=1}^n a_{ij} as a_{ij} and so on, omitting the summation signs. By (3.35) and a straightforward calculation, one can verify that, for any v ∈ C_0^∞(B_{2δ}(x_o)),

  ∫_{B_{2δ}(x_o)} a_{ij}(x_o) w_{x_i} v_{x_j} dx = ∫_{B_{2δ}(x_o)} [ a_{ij}(x_o) − a_{ij}(x) ] w_{x_i} v_{x_j} dx + ∫_{B_{2δ}(x_o)} F(x) v dx,

where

  F(x) = f(x) − ( a_{ij}(x) η_{x_i} u )_{x_j} − b_i(x) u_{x_i} − c(x) u(x).

In other words, w is a weak solution of

  a_{ij}(x_o) w_{x_i x_j} = ( [ a_{ij}(x_o) − a_{ij}(x) ] w_{x_i} )_{x_j} − F(x), x ∈ B_{2δ}(x_o);  w(x) = 0, x ∈ ∂B_{2δ}(x_o).  (3.45)

With a change of coordinates, we may assume that a_{ij}(x_o) = δ_{ij} and write equation (3.45) as

  Δw = ( [ a_{ij}(x_o) − a_{ij}(x) ] w_{x_i} )_{x_j} − F(x).

One can easily verify that F(x) is in L^p(B_{2δ}(x_o)), and that, for any v ∈ W^{2,p},

  ( [ a_{ij}(x_o) − a_{ij}(x) ] v_{x_i} )_{x_j} ∈ L^p(B_{2δ}(x_o)).

By virtue of Proposition 3.2.1, the operator Δ (with zero Dirichlet data) is invertible. Consider the equation in W^{2,p}:

  v = Kv − Δ^{−1} F,  (3.46)

where

  (Kv)(x) = Δ^{−1} ( [ a_{ij}(x_o) − a_{ij}(x) ] v_{x_i} )_{x_j}.  (3.47)
Under the assumption that the a_{ij}(x) are Lipschitz continuous, one can verify (see the exercise below) that, for δ sufficiently small, there exists a constant 0 < γ < 1 such that

  ‖Kφ − Kψ‖_{W^{2,p}(B_{2δ}(x_o))} ≤ γ ‖φ − ψ‖_{W^{2,p}(B_{2δ}(x_o))};

that is, K is a contracting map from W^{2,p}(B_{2δ}(x_o)) to itself. Therefore, for sufficiently small δ, there exists a unique solution v ∈ W^{2,p}(B_{2δ}(x_o)) of equation (3.46); this v is also a weak solution of equation (3.45). On the other hand, similarly to the proof of Proposition 3.2.1, one can show (as an exercise) the uniqueness of the W^{1,p}(B_{2δ}(x_o)) weak solution of equation (3.45) when p ≥ 2. Therefore, we must have w = v, and thus conclude that w is also in W^{2,p}(B_{2δ}(x_o)). This completes stage 1 of the proof of Theorem 3.2.1.

Exercise 3.2.2 Assume that each a_{ij}(x) is Lipschitz continuous. Let

  (Kv)(x) = Δ^{−1} ( [ a_{ij}(x_o) − a_{ij}(x) ] v_{x_i} )_{x_j}.

Show that, for δ sufficiently small, K is a contracting map in W^{2,p}(B_{2δ}(x_o)).

Exercise 3.2.3 Prove the uniqueness of the W^{1,p}(B_{2δ}(x_o)) weak solution of equation (3.45) for p ≥ 2.

3.2.2 The Case 1 < p < 2

We first establish a W^{2,p} version of the Fredholm Alternative.

Lemma 3.2.2 Let p ≥ 2. Assume that each a_{ij}(x) is Lipschitz continuous. If u = 0 whenever Lu = 0 (in the sense of a W_0^{1,p}(Ω) weak solution), then for any f ∈ L^p(Ω) there exists a unique u ∈ W_0^{1,p}(Ω) ∩ W^{2,p}(Ω) such that

  Lu = f(x), x ∈ Ω,  and  ‖u‖_{W^{2,p}(Ω)} ≤ C ‖f‖_{L^p}.  (3.48)

Proof. From the Lax–Milgram Theorem of the previous chapter, one can see that, for α sufficiently large and for any given g ∈ L²(Ω), there exists a unique W_0^{1,2}(Ω) weak solution v of the equation

  (L + α) v = g, x ∈ Ω.  (3.49)

From the Regularity Theorem in this chapter, we have v ∈ W^{2,2}(Ω). Therefore, we can write

  v = (L + α)^{−1} g,

where (L + α)^{−1} is a bounded linear operator from L²(Ω) to W^{2,2}(Ω). Let {f_k} ⊂ C_0^∞(Ω) be a sequence of smooth approximations of f(x). For each k, consider the equation

  (L + α) w − α w = f_k,
or equivalently,

  (I − K) w = F,  (3.50)

where I is the identity operator, K = α (L + α)^{−1}, and F = (L + α)^{−1} f_k. One can see that K is a compact operator from W_0^{1,2}(Ω) into itself, because of the compact embedding of W^{2,2} into W^{1,2}. Now we can apply the Fredholm Alternative: equation (3.50) possesses a unique solution if and only if the corresponding homogeneous equation (I − K) w = 0 has only the trivial solution; and the latter is just the assumption of the theorem. Hence, for each k, equation (3.50) possesses a unique solution u_k ∈ W_0^{1,2}(Ω); furthermore, since both α (L + α)^{−1} u_k and (L + α)^{−1} f_k are in W^{2,2}(Ω), so is u_k. Since f_k is in L^p(Ω) for any p, we conclude from Theorem 3.2.1 that u_k is also in W^{2,p}(Ω). Moreover, due to the uniqueness, we have the improved a priori estimate (see Lemma 3.2.1)

  ‖u_k‖_{W^{2,p}} ≤ C ‖f_k‖_{L^p}.  (3.51)

Furthermore, for any integers i and j,

  L (u_i − u_j) = f_i(x) − f_j(x), x ∈ Ω,

and hence

  ‖u_i − u_j‖_{W^{2,p}} ≤ C ‖f_i − f_j‖_{L^p} → 0, as i, j → ∞.

This implies that {u_k} is a Cauchy sequence in W^{2,p}(Ω), and hence it converges to an element u in W^{2,p}(Ω). Now this u is the desired solution. When p ≥ 2, we have already proved the uniqueness. This completes the proof of the Lemma.

Now assume 1 < p < 2. We first prove

Proposition 3.2.2 (Uniqueness of W_0^{1,p} Weak Solutions for 1 < p < ∞) Let Ω be a bounded domain in R^n. Assume that the a_{ij}(x) are Lipschitz continuous. If u is a W_0^{1,p}(Ω) weak solution of

  ( a_{ij}(x) u_{x_i} )_{x_j} = 0, x ∈ Ω,

then u = 0 almost everywhere.

Proof. Suppose there exists a nontrivial weak solution u. Let {u_k} be a sequence of C_0^∞(Ω) functions such that

  u_k → u in W_0^{1,p}(Ω).  (3.52)

For each k, consider the equation
  ( a_{ij}(x) v_{x_i} )_{x_j} = F_k(x), x ∈ Ω,  (3.53)

with F_k built from |∇u_k|^{p/q} ∇u_k / (1 + |∇u_k|²), where 1/p + 1/q = 1. Obviously, for p < 2, q is greater than 2, and F_k(x) is in L^q(Ω). Lemma 3.2.2 guarantees the existence of a solution v_k of equation (3.53). Through integration by parts several times, one can verify that

  0 = ∫_Ω v_k ( a_{ij}(x) u_{x_i} )_{x_j} dx = ∫_Ω ( |∇u_k|^{p/q} ∇u_k / (1 + |∇u_k|²) ) · ∇u dx.  (3.54)

Since u_k → u in W^{1,p}(Ω), it is easy to see that

  |∇u_k|^{p/q} ∇u_k / (1 + |∇u_k|²) → |∇u|^{p/q} ∇u / (1 + |∇u|²) in L^q(Ω).

Passing to the limit in (3.54), we obtain

  ∫_Ω |∇u|^{p/q} |∇u|² / (1 + |∇u|²) dx = 0.

Hence ∇u = 0 almost everywhere; taking into account that u ∈ W_0^{1,p}(Ω), we must have u = 0 almost everywhere. This completes the proof of the proposition.

After proving the uniqueness, the rest of the arguments are entirely the same as in stage 1. This completes stage 2 of the proof of Theorem 3.2.1.

3.2.3 Other Useful Results Concerning the Existence, Uniqueness, and Regularity

Theorem 3.2.2 Given any pair p, q > 1 and f ∈ L^q(Ω), if u ∈ W_0^{1,p}(Ω) is a weak solution of

  Lu = f(x), x ∈ Ω,  (3.55)

then u ∈ W^{2,q}(Ω), and it holds the a priori estimate

  ‖u‖_{W^{2,q}(Ω)} ≤ C ( ‖u‖_{L^q} + ‖f‖_{L^q} ).  (3.56)

Proof. Case i) If q ≤ p, then obviously u is also a W_0^{1,q}(Ω) weak solution, and the results follow from the Regularity Theorem 3.2.1 and the W^{2,q} a priori estimates.

Case ii) If q > p, we use the fact that u is a W^{1,p₁} weak solution with

  p₁ = np / (n − p),

by the Sobolev embedding W^{2,p} ↪ W^{1,p₁}. If p₁ ≥ q, we are done, by Regularity. If p₁ < q, then u ∈ W^{2,p₁}, and, by Regularity, we repeat this process; after a few steps we will arrive at a p_k ≥ q. This completes the proof.
From this Theorem, we can immediately derive the equivalence between the uniqueness of weak solutions in any two spaces W_0^{1,p} and W_0^{1,q}:

Corollary 3.2.1 For any pair p, q > 1, the following are equivalent:

  i) if u ∈ W_0^{1,p}(Ω) is a weak solution of Lu = 0, then u = 0;
  ii) if u ∈ W_0^{1,q}(Ω) is a weak solution of Lu = 0, then u = 0.

Theorem 3.2.3 (W^{2,p} Version of the Fredholm Alternative) Let 1 < p < ∞. If u = 0 whenever Lu = 0 (in the sense of a W_0^{1,p}(Ω) weak solution), then for any f ∈ L^p(Ω) there exists a unique u ∈ W_0^{1,p}(Ω) ∩ W^{2,p}(Ω) such that

  Lu = f(x)  and  ‖u‖_{W^{2,p}(Ω)} ≤ C ‖f‖_{L^p}.

For 2 ≤ p < ∞, we have stated this theorem in Lemma 3.2.2; for 1 < p < 2, after we have obtained the W^{2,p} regularity in this case, the proof is entirely the same as for Lemma 3.2.2.

3.3 Regularity Lifting

3.3.1 Bootstrap

Assume that u ∈ H¹(Ω) is a weak solution of

  − Δu = u^p(x), x ∈ Ω.  (3.57)

Then, by the Sobolev embedding, u ∈ L^{2n/(n−2)}(Ω). If the power p is less than the critical number (n+2)/(n−2), then the regularity of u can be enhanced repeatedly through the equation until we reach u ∈ C^α(Ω); finally, the Schauder estimate will lift u to C^{2+α}(Ω), and hence u is a classical solution. We call this the "Bootstrap Method", as will be illustrated below.

For each fixed p < (n+2)/(n−2), write

  p = ( (n+2) − δ(n−2) ) / (n−2)

for some δ > 0. Since u ∈ L^{2n/(n−2)}(Ω), we have

  u^p ∈ L^{2n/(p(n−2))}(Ω) = L^{2n/((n+2)−δ(n−2))}(Ω),

and hence, through the equation, u ∈ W^{2,q₁}(Ω) with

  q₁ = 2n / ( (n+2) − δ(n−2) ).
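The bookkeeping of the bootstrap, u ∈ L^s ⇒ u^p ∈ L^{s/p} ⇒ u ∈ W^{2,s/p} ⇒ u ∈ L^{n(s/p)/(n−2(s/p))}, is mechanical and can be iterated by machine. A sketch; n and the subcritical p are sample choices, and `target` is an arbitrary stand-in for "any q":

```python
# Iterate the bootstrap exponents for -Delta u = u^p, starting from the
# H^1-embedding exponent s0 = 2n/(n-2). Each pass: u in L^s gives u^p in
# L^{s/p}; if 2(s/p) >= n we may stop (u then lands in every L^q), otherwise
# the Sobolev embedding W^{2,s/p} -> L^{n(s/p)/(n-2(s/p))} updates s.
def bootstrap_steps(n, p, target=1000.0, max_steps=10_000):
    s = 2.0 * n / (n - 2.0)
    for step in range(1, max_steps + 1):
        q = s / p
        if 2.0 * q >= n:
            return step                      # integrable enough: done
        s = n * q / (n - 2.0 * q)            # amplified by roughly 1/(1 - delta)
        if s >= target:
            return step
    return None                              # exponents stalled

n = 5
p_crit = (n + 2.0) / (n - 2.0)               # critical power (n+2)/(n-2)
steps = bootstrap_steps(n, 0.9 * p_crit)     # a subcritical example
print("subcritical p finishes in", steps, "steps")
```

At the critical power itself, the update map fixes s = 2n/(n−2), which is exactly why the bootstrap stalls there and the Regularity Lifting Method of the next subsection is needed.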
By the Sobolev embedding

  W^{2,q₁}(Ω) ↪ L^{nq₁/(n−2q₁)}(Ω),

we have u ∈ L^{s₁}(Ω) with

  s₁ = n q₁ / (n − 2q₁) = (1/(1−δ)) · 2n/(n−2).

The integrable power of u has been amplified by 1/(1−δ) (> 1) times. Now

  u^p ∈ L^{q₂}(Ω), with q₂ = 2n / ( (1−δ)[ (n+2) − δ(n−2) ] ),

and therefore u ∈ W^{2,q₂}(Ω). If 2q₂ ≥ n, then u ∈ L^q(Ω) for any q, and we are done. If 2q₂ < n, then u ∈ L^{s₂}(Ω), with

  s₂ = n q₂ / (n − 2q₂) = 2n / ( (1−δ)[ (n+2) − δ(n−2) ] − 4 )

(and if (1−δ)[(n+2) − δ(n−2)] − 4 ≤ 0, then already u ∈ L^q(Ω) for any q). It is easy to verify that

  s₂ ≥ (1/(1−δ)) s₁;

the integrable power of u has again been amplified by at least 1/(1−δ) times. Continuing this process, one can see that, after finitely many steps, we boost u to L^q(Ω) for any q; hence u ∈ W^{2,q}(Ω) for any q, and therefore u ∈ C^α(Ω) for some 0 < α < 1. Finally, by the Schauder estimate, u ∈ C^{2+α}(Ω), i.e. it is a classical solution.

However, if p = (n+2)/(n−2), the so-called "critical power", then δ = 0, and the integrable power of the solution u cannot be boosted this way. To deal with this situation, we introduce a "Regularity Lifting Method" in the next subsection.

3.3.2 Regularity Lifting Theorem

Here, we present a simple method to boost the regularity of solutions. It has been used extensively in various forms in the authors' previous works. Essentially, it is based on the following Regularity Lifting Theorem. The essence of the approach is well known in the analysis community; however, the version we present here contains some new developments. It is much more general and is very easy to use. We believe that our method will provide convenient ways, for both experts and non-experts in the field, of obtaining regularities.
Let Z be a given vector space, and let ‖·‖_X and ‖·‖_Y be two norms on Z. For 1 ≤ p ≤ ∞, define a new norm ‖·‖_Z on Z by

  ‖·‖_Z = ( ‖·‖_X^p + ‖·‖_Y^p )^{1/p};

for simplicity, one can choose any such p, according to what one needs. Let X and Y be the completions of Z under ‖·‖_X and ‖·‖_Y, respectively. It is easy to see that Z = X ∩ Y. Here, we assume that Z is complete with respect to the norm ‖·‖_Z.

Theorem 3.3.1 (Regularity Lifting) Let T be a contraction map from X into itself and from Y into itself. Assume that f ∈ X, and that there exists a function g ∈ Z such that f = Tf + g. Then f also belongs to Z.

Proof.

Step 1. First we show that T : Z → Z is a contraction. Since T is a contraction on X, there exists a constant θ₁, 0 < θ₁ < 1, such that

  ‖Th₁ − Th₂‖_X ≤ θ₁ ‖h₁ − h₂‖_X.

Similarly, we can find a constant θ₂, 0 < θ₂ < 1, such that

  ‖Th₁ − Th₂‖_Y ≤ θ₂ ‖h₁ − h₂‖_Y.

Let θ = max{θ₁, θ₂}. Then, for any h₁, h₂ ∈ Z,

  ‖Th₁ − Th₂‖_Z = ( ‖Th₁ − Th₂‖_X^p + ‖Th₁ − Th₂‖_Y^p )^{1/p}
   ≤ ( θ₁^p ‖h₁ − h₂‖_X^p + θ₂^p ‖h₁ − h₂‖_Y^p )^{1/p}
   ≤ θ ‖h₁ − h₂‖_Z.

Step 2. Since T : Z → Z is a contraction and g ∈ Z, we can find a solution h ∈ Z of the equation x = Tx + g. On the other hand, T : X → X is also a contraction and g ∈ Z ⊂ X, so the equation x = Tx + g has a unique solution in X. Since both h and f are solutions of x = Tx + g in X, we conclude f = h ∈ Z. This completes the proof.

3.3.3 Applications to PDEs

Now, we explain how the Regularity Lifting Theorem proved in the previous subsection can be used to boost the regularity of weak solutions in the presence of the critical exponent:

  − Δu = u^{(n+2)/(n−2)}.  (3.58)

We still assume that Ω is a smooth bounded domain in R^n with n ≥ 3. Let u ∈ H_0^1(Ω) be a weak solution of equation (3.58); then, by the Sobolev embedding, u ∈ L^{2n/(n−2)}(Ω).
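The mechanism of Theorem 3.3.1 is already visible in finite dimensions: if a linear T is a contraction in two different norms, the unique fixed point of x = Tx + g found in one completion automatically lies in the other. A sketch; the matrix and data are arbitrary samples, and the scaling enforces contraction in both the 1-norm and the ∞-norm:

```python
import numpy as np

# T = (x -> A x) is made a contraction in both induced norms by scaling A so
# that its 1-norm (max column sum) and inf-norm (max row sum) are <= 0.4.
# Picard iteration for x = Ax + g then converges in both norms to the same
# fixed point, which we compare against the direct solve of (I - A) x = g.
rng = np.random.default_rng(1)
d = 6
A = rng.uniform(-1.0, 1.0, (d, d))
A *= 0.4 / max(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf))
g = rng.uniform(-1.0, 1.0, d)

x = np.zeros(d)
for _ in range(200):             # Picard iteration x <- Ax + g
    x = A @ x + g

exact = np.linalg.solve(np.eye(d) - A, g)
print(np.max(np.abs(x - exact)))  # essentially zero
```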
We can split the right-hand side of (3.58) into two parts:

u^{(n+2)/(n−2)} = u^{4/(n−2)} · u := a(x) u.

Then obviously, a(x) ∈ L^{n/2}(Ω). Hence, more generally, we consider the regularity of weak solutions of the following equation:

−Δu = a(x) u + b(x).  (3.59)

Theorem 3.3.2 Assume that a(x), b(x) ∈ L^{n/2}(Ω). Let u be any weak solution of equation (3.59) in H^1_0(Ω). Then u is in L^p(Ω) for any 1 ≤ p < ∞.

Remark 3.3.1 Even in the best situation, when a(x) ≡ 0, we can only have u ∈ W^{2,q}(Ω) for q ∈ (1, n/2].

Now let's come back to the nonlinear equation (3.58).

Corollary 3.3.1 If u is an H^1_0(Ω) weak solution of equation (3.58), then u ∈ C^{2,α}(Ω), and hence it is a classical solution.

Proof. Assume that u is an H^1_0(Ω) weak solution. Then, by our Regularity Theorem 3.3.2, we first conclude that u is in L^q(Ω) for any 1 < q < ∞, and hence derive that u ∈ W^{2,q}(Ω) for any 1 < q < ∞ by the W^{2,p} estimate. This implies that u ∈ C^{1,α}(Ω) via Sobolev embedding. Finally, from the classical C^{2,α} regularity, we arrive at u ∈ C^{2,α}(Ω). ∎

Proof of Theorem 3.3.2. Let G(x, y) be the Green's function of −Δ in Ω, and define

(T f)(x) = (−Δ)^{−1} f(x) = ∫_Ω G(x, y) f(y) dy.

Then obviously

0 < G(x, y) < Γ(x − y) := C_n / |x − y|^{n−2},

and it follows that

‖T f‖_{L^{nq/(n−2q)}(Ω)} ≤ ‖Γ ∗ f‖_{L^{nq/(n−2q)}(R^n)} ≤ C ‖f‖_{L^q(Ω)}, 1 < q < n/2.

Here, we may extend f to be zero outside Ω when necessary. For a positive number A, define

a_A(x) = a(x) if |a(x)| ≥ A, and a_A(x) = 0 otherwise; a_B(x) = a(x) − a_A(x).
Let

(T_A u)(x) = ∫_Ω G(x, y) a_A(y) u(y) dy and F_A(x) := ∫_Ω G(x, y) [a_B(y) u(y) + b(y)] dy.

Then equation (3.59) can be written as

u(x) = (T_A u)(x) + F_A(x).

We will show that, for any n/(n−2) < p < ∞,
i) T_A is a contracting operator from L^p(Ω) to L^p(Ω) for A large, and
ii) F_A(x) is in L^p(Ω).
Then, by the "Regularity Lifting Theorem", we derive immediately that u ∈ L^p(Ω) for any 1 ≤ p < ∞.

i) The Estimate of the Operator T_A. For any n/(n−2) < p < ∞, there is a q, 1 < q < n/2, such that p = nq/(n−2q). Then, by the Hölder inequality, we derive

‖T_A u‖_{L^p(Ω)} ≤ ‖Γ ∗ (a_A u)‖_{L^p(R^n)} ≤ C ‖a_A u‖_{L^q(Ω)} ≤ C ‖a_A‖_{L^{n/2}(Ω)} ‖u‖_{L^p(Ω)}.

Since a(x) ∈ L^{n/2}(Ω), one can choose a large number A such that

C ‖a_A‖_{L^{n/2}(Ω)} ≤ 1/2,

and hence arrive at

‖T_A u‖_{L^p(Ω)} ≤ (1/2) ‖u‖_{L^p(Ω)}.

That is, T_A : L^p(Ω) → L^p(Ω) is a contracting operator.

ii) The Integrability of F_A(x). Write F_A = F_A^1 + F_A^2, where

F_A^1(x) := ∫_Ω G(x, y) b(y) dy.

(a) First consider F_A^1. Extending b(x) to be zero outside Ω and choosing 1 < q < n/2 such that p = nq/(n−2q), we have

‖F_A^1‖_{L^p(Ω)} ≤ ‖Γ ∗ b‖_{L^p(R^n)} ≤ C ‖b‖_{L^q(Ω)} < ∞.

Hence F_A^1(x) ∈ L^p(Ω) for any n/(n−2) < p < ∞.
(b) Then estimate

F_A^2(x) = ∫_Ω G(x, y) a_B(y) u(y) dy.

By the boundedness of a_B(x), we have

‖F_A^2‖_{L^p(Ω)} ≤ C ‖u‖_{L^q(Ω)}.

Noticing that u ∈ L^q(Ω) for any 1 < q ≤ 2n/(n−2), and that p = nq/(n−2q) = 2n/(n−6) when q = 2n/(n−2), we conclude that F_A^2 ∈ L^p(Ω) for the following values of p and dimension n:

1 < p < ∞ when 3 ≤ n ≤ 6; 1 < p < 2n/(n−6) when n > 6.

Applying the Regularity Lifting Theorem, from the starting point u ∈ L^{2n/(n−2)}(Ω) we have arrived at

u ∈ L^p(Ω), for any 1 < p < ∞ if 3 ≤ n ≤ 6;
u ∈ L^p(Ω), for any 1 < p < 2n/(n−6) if n > 6.

The only thing that prevents us from reaching the full range of p is the estimate of F_A^2(x). However, through an entirely similar argument as above, now starting from u ∈ L^p(Ω) for any 1 < p < 2n/(n−6), we arrive at

u ∈ L^p(Ω), for any 1 < p < ∞ if 3 ≤ n ≤ 10;
u ∈ L^p(Ω), for any 1 < p < 2n/(n−10) if n > 10.

At each step, we add 4 to the dimensions n for which u ∈ L^p(Ω) holds for all 1 ≤ p < ∞. Continuing this way, we prove our theorem for all dimensions. This completes the proof of the Theorem. ∎

Now we will use a similar idea and the Regularity Lifting Theorem to carry out Stage 3 of the proof of the Regularity Theorem. Let L be a second order uniformly elliptic operator in divergence form:

Lu = − Σ_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j} + Σ_{i=1}^n b_i(x) u_{x_i} + c(x) u.  (3.60)

We will prove
Theorem 3.3.3 (W^{2,p} Regularity under Weaker Conditions) Let Ω be a bounded domain in R^n and 1 < p < ∞. Assume that
i) a_{ij}(x) ∈ W^{1,n+δ}(Ω) for some δ > 0;
ii) b_i(x) ∈ L^n(Ω) if 1 < p < n, b_i(x) ∈ L^{n+δ}(Ω) if p = n, and b_i(x) ∈ L^p(Ω) if n < p < ∞;
iii) c(x) ∈ L^{n/2}(Ω) if 1 < p < n/2, c(x) ∈ L^{n/2+δ}(Ω) if p = n/2, and c(x) ∈ L^p(Ω) if n/2 < p < ∞.
Suppose f ∈ L^p(Ω) and u is a W^{1,p}_0(Ω) weak solution of

Lu = f(x), x ∈ Ω.  (3.61)

Then u is in W^{2,p}(Ω).

Proof.
1. Let

L_o u := − Σ_{i,j=1}^n (a_{ij}(x) u_{x_i})_{x_j}.

By the results established in Section 3.2, the equation L_o u = 0 has only the trivial solution; hence the operator L_o is invertible. Denote its inverse by L_o^{−1}. Then L_o^{−1} is a bounded linear operator from L^p(Ω) to W^{2,p}(Ω).

2. For each i and each positive number A, define

b_{iA}(x) = b_i(x) if |b_i(x)| ≥ A, and b_{iA}(x) = 0 elsewhere; b_{iB}(x) := b_i(x) − b_{iA}(x).

Similarly for c_A(x) and c_B(x). Now we can rewrite equation (3.61) as

u = T_A u + F_A(u, x),  (3.62)

where

T_A u = − L_o^{−1} ( Σ_i b_{iA}(x) u_{x_i} + c_A(x) u )
and

F_A(u, x) = − L_o^{−1} ( Σ_i b_{iB}(x) u_{x_i} + c_B(x) u ) + L_o^{−1} f(x).

We first show that

F_A(u, ·) ∈ W^{2,p}(Ω), for any u ∈ W^{1,p}(Ω).  (3.63)

In fact, since b_{iB}(x) and c_B(x) are bounded, one can easily verify that

Σ_i b_{iB}(x) u_{x_i} + c_B(x) u ∈ L^p(Ω), ∀ u ∈ W^{1,p}(Ω).

Taking into account that f ∈ L^p(Ω) and that L_o^{−1} is a bounded operator from L^p(Ω) to W^{2,p}(Ω), this implies (3.63).

Then we prove that T_A is a contracting operator from W^{2,p}(Ω) to W^{2,p}(Ω). In fact, by the Hölder inequality and the Sobolev embedding from W^{2,p} to W^{1,q}, one can see that, for any v ∈ W^{2,p}(Ω),

‖b_{iA} Dv‖_{L^p} ≤ C · ‖b_{iA}‖_{L^n} ‖v‖_{W^{2,p}} if 1 < p < n; C · ‖b_{iA}‖_{L^{n+δ}} ‖v‖_{W^{2,p}} if p = n; C · ‖b_{iA}‖_{L^p} ‖v‖_{W^{2,p}} if n < p < ∞,  (3.64)

and

‖c_A v‖_{L^p} ≤ C · ‖c_A‖_{L^{n/2}} ‖v‖_{W^{2,p}} if 1 < p < n/2; C · ‖c_A‖_{L^{n/2+δ}} ‖v‖_{W^{2,p}} if p = n/2; C · ‖c_A‖_{L^p} ‖v‖_{W^{2,p}} if n/2 < p < ∞.  (3.65)

Under the conditions of the theorem, the norms of b_{iA} and c_A on the right-hand sides of (3.64) and (3.65) can be made arbitrarily small if A is sufficiently large. Hence, for any ε > 0, there exists an A > 0 such that

‖ Σ_i b_{iA}(x) v_{x_i} + c_A(x) v ‖_{L^p} ≤ ε ‖v‖_{W^{2,p}}, ∀ v ∈ W^{2,p}(Ω).

Therefore, we deduce that T_A is a contraction mapping from W^{2,p}(Ω) to W^{2,p}(Ω). It follows from the Contraction Mapping Theorem that there exists a v ∈ W^{2,p}(Ω) such that v = T_A v + F_A(u, x). Now u and v satisfy the same equation; by uniqueness, we must have u = v. Therefore, we derive that u ∈ W^{2,p}(Ω). This completes the proof of the theorem. ∎
3.3.4 Applications to Integral Equations

As another application of the Regularity Lifting Theorem, we consider the following integral equation:

u(x) = ∫_{R^n} 1/|x − y|^{n−α} u(y)^{(n+α)/(n−α)} dy, x ∈ R^n,  (3.66)

where α is any real number satisfying 0 < α < n. It arises as an Euler-Lagrange equation for a functional under a constraint in the context of the Hardy-Littlewood-Sobolev inequalities. It is also closely related to the following family of semilinear partial differential equations:

(−Δ)^{α/2} u = u^{(n+α)/(n−α)}, x ∈ R^n.  (3.67)

In the special case α = 2, (3.67) becomes

−Δu = u^{(n+2)/(n−2)}, x ∈ R^n, n ≥ 3,

which is the well-known Yamabe equation. Actually, one can prove (see [CLO]) that the integral equation (3.66) and the PDE (3.67) are equivalent. In the context of the Hardy-Littlewood-Sobolev inequality, it is natural to start with u ∈ L^{2n/(n−α)}(R^n). We will use the Regularity Lifting Theorem to boost u to L^q(R^n) for any 1 < q < ∞, and hence to L^∞(R^n).

More generally, we consider the following equation:

u(x) = ∫_{R^n} K(y)|u(y)|^{p−1} u(y) / |x − y|^{n−α} dy, x ∈ R^n,  (3.68)

for some 1 < p < ∞. We prove

Theorem 3.3.4 Let u be a solution of (3.68). Assume that u ∈ L^{q_o}(R^n) for some q_o > n/(n−α), that

|K(x)| ≤ M,  (3.69)

and that

∫_{R^n} |K(y) u(y)^{p−1}|^{n/α} dy < ∞.  (3.70)

Then u is in L^q(R^n) for any 1 < q < ∞.

Remark 3.3.2
i) If u ∈ L^{n(p−1)/α}(R^n) and |K(y)| ≤ C(1 + |y|)^{−γ} for some γ < α,  (3.71)
then conditions (3.69) and (3.70) are satisfied.
ii) The last condition in (3.71) is somewhat sharp, in the sense that if it is violated, then equation (3.68) with K(x) ≡ 1 possesses singular solutions such as

u(x) = c / |x|^{α/(p−1)}.

Proof of Theorem 3.3.4: Define the linear operator

(T_v w)(x) = ∫_{R^n} K(y)|v(y)|^{p−1} w(y) / |x − y|^{n−α} dy.

For any real number a > 0, define

u_a(x) = u(x), if |u(x)| > a or |x| > a; u_a(x) = 0, otherwise,

and let u_b(x) = u(x) − u_a(x). Then, since u satisfies equation (3.68), one can verify that u_a satisfies the equation

u_a = T_{u_a} u_a + g(x)  (3.72)

with the function

g(x) = ∫_{R^n} K(y)|u_b(y)|^{p−1} u_b(y) / |x − y|^{n−α} dy − u_b(x).

Since u_b is bounded and supported in a bounded set, it is obvious that g(x) ∈ L^∞ ∩ L^{q_o}.

For any q > n/(n−α), to estimate the operator T_{u_a} we first apply the Hardy-Littlewood-Sobolev inequality:

‖T_{u_a} w‖_{L^q} ≤ C(α, n, q) ‖K u_a^{p−1} w‖_{L^{nq/(n+αq)}}.  (3.73)

Then, writing H(x) = K(x)|u_a(x)|^{p−1}, we apply the Hölder inequality to the right-hand side of the above inequality:

∫_{R^n} |H(y)|^{nq/(n+αq)} |w(y)|^{nq/(n+αq)} dy ≤ ( ∫_{R^n} |H(y)|^{(nq/(n+αq))·r} dy )^{1/r} ( ∫_{R^n} |w(y)|^{(nq/(n+αq))·s} dy )^{1/s}.  (3.74)

Here we have chosen

r = (n + αq)/(αq) and s = (n + αq)/n,

so that (nq/(n+αq))·r = n/α and (nq/(n+αq))·s = q.
It follows from (3.73) and (3.74) that

‖T_{u_a} w‖_{L^q} ≤ C(α, n, q) ( ∫_{R^n} |K(y) u_a(y)^{p−1}|^{n/α} dy )^{α/n} ‖w‖_{L^q}.  (3.75)

By (3.70), we deduce that, for sufficiently large a,

‖T_{u_a} w‖_{L^q} ≤ (1/2) ‖w‖_{L^q}.  (3.76)

Applying (3.76) to both the case q = q_o and the case q = p_o > q_o, and by the Contraction Mapping Theorem, we see that the equation

w = T_{u_a} w + g(x)  (3.77)

has a unique solution in L^{q_o} and a unique solution in L^{p_o} ∩ L^{q_o}. Let w be the solution of (3.77) in L^{p_o} ∩ L^{q_o}; then w is also a solution in L^{q_o}. From (3.72), u_a is a solution of (3.77) in L^{q_o}. By the uniqueness, we must have u_a = w ∈ L^{p_o} ∩ L^{q_o} for any p_o > n/(n−α), and hence so does u. This completes the proof of the Theorem. ∎
4 Preliminary Analysis on Riemannian Manifolds

4.1 Differentiable Manifolds
4.2 Tangent Spaces
4.3 Riemannian Metrics
4.4 Curvature
4.4.1 Curvature of Plane Curves
4.4.2 Curvature of Surfaces in R^3
4.4.3 Curvature on Riemannian Manifolds
4.5 Calculus on Manifolds
4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator
4.5.2 Integrals
4.5.3 Equations on Prescribing Gaussian and Scalar Curvature
4.6 Sobolev Embeddings

According to Einstein, the Universe we live in is a curved space, which is a generalization of the flat Euclidean space. We call this curved space a manifold. Intuitively, each neighborhood of a point on a manifold is homeomorphic to an open set in a Euclidean space, and the manifold as a whole is obtained by pasting together pieces of Euclidean spaces. In this section, we will introduce the most basic concepts of differentiable manifolds: tangent spaces, Riemannian metrics, and curvatures. We will try to be as intuitive and brief as possible. For more rigorous arguments and detailed discussions, please see, for example, the book Riemannian Geometry by do Carmo [Ca].

4.1 Differentiable Manifolds

A simple example of a two-dimensional differentiable manifold is a regular surface in R^3. Roughly speaking, it is a union of open sets of R^2, organized in such a
way that, at the intersection of any two open sets, the change from one to the other can be made in a differentiable manner. More precisely, we have

Definition 4.1.1 (Differentiable Manifolds) A differentiable n-dimensional manifold M is
(i) a union of open sets, M = ∪_α V_α, together with a family of homeomorphisms φ_α from V_α onto open sets φ_α(V_α) in R^n, such that
(ii) for any pair α and β with V_α ∩ V_β = U ≠ ∅, the images φ_α(U) and φ_β(U) are open sets in R^n and the mappings φ_β ∘ φ_α^{−1} are differentiable, and
(iii) the family {V_α, φ_α} is maximal relative to the conditions (i) and (ii).

In each open set U ⊂ M, let φ_U be the homeomorphism from U to φ_U(U) in R^n. For each p ∈ U, if φ_U(p) = (x_1, ..., x_n), then we define the coordinates of p to be (x_1, ..., x_n) in R^n. This is called a local coordinates of p, and (U, φ_U) is called a coordinates chart. If a family of coordinates charts that covers M is well made, so that the transition from one to the other is differentiable as in condition (ii), then it is called a differentiable structure of M. Given a differentiable structure on M, one can easily complete it into a maximal one, by taking the union of all the coordinates charts of M that are compatible (in the sense of condition (ii)) with the ones in the given structure.

A trivial example of an n-dimensional manifold is the Euclidean space R^n, with the differentiable structure given by the identity. The following are two nontrivial examples.

Example 1. The n-dimensional unit sphere

S^n = { x ∈ R^{n+1} | x_1^2 + ··· + x_{n+1}^2 = 1 }.

Take the unit circle S^1 for instance. S^1 is the union of four open sets V_1, V_2, V_3, and V_4, on each of which one can define a coordinates chart. On the intersection of any two of them, say on V_1 ∩ V_3, where x_1 > 0 and x_2 > 0, we have

x_2 = √(1 − x_1^2), x_1 = √(1 − x_2^2).

These are both differentiable functions, and the same is true on the other intersections. Therefore, with this differentiable structure, S^1 is a 1-dimensional differentiable manifold.
Here we can use the following four coordinates charts:

V_1 = { x ∈ S^1 | x_2 > 0 }, φ_1(x) = x_1;
V_2 = { x ∈ S^1 | x_2 < 0 }, φ_2(x) = x_1;
V_3 = { x ∈ S^1 | x_1 > 0 }, φ_3(x) = x_2;
V_4 = { x ∈ S^1 | x_1 < 0 }, φ_4(x) = x_2.
Example 2. The n-dimensional real projective space P^n(R). It is the set of straight lines of R^{n+1} which pass through the origin (0, ..., 0) ∈ R^{n+1}; equivalently, it is the quotient space of R^{n+1} \ {0} by the equivalence relation

(x_1, ..., x_{n+1}) ∼ (λx_1, ..., λx_{n+1}), λ ∈ R, λ ≠ 0.

We denote the points in P^n(R) by [x]. Observe that, if x_i ≠ 0, then

[x_1, ..., x_{n+1}] = [x_1/x_i, ..., x_{i−1}/x_i, 1, x_{i+1}/x_i, ..., x_{n+1}/x_i].

For i = 1, 2, ..., n+1, let V_i = { [x] | x_i ≠ 0 }. Geometrically, V_i is the set of straight lines of R^{n+1} which pass through the origin and which do not belong to the hyperplane x_i = 0. Apparently,

P^n(R) = ∪_{i=1}^{n+1} V_i.

Define

φ_i([x]) = (ξ^i_1, ..., ξ^i_{i−1}, ξ^i_{i+1}, ..., ξ^i_{n+1}),

where ξ^i_k = x_k/x_i (i ≠ k). On the intersection V_i ∩ V_j, the changes of coordinates

ξ^j_k = ξ^i_k / ξ^i_j (k ≠ i, j), ξ^j_i = 1/ξ^i_j,

are obviously smooth; thus {V_i, φ_i} is a differentiable structure of P^n(R), and hence P^n(R) is a differentiable manifold.

4.2 Tangent Spaces

We know that at every point of a regular curve or of a regular surface, there exists a tangent line or a tangent plane. For a regular surface S in R^3, at each given point p, the tangent plane is the space of all tangent vectors, and each tangent vector is defined as the "velocity" in the ambient space R^3. Since on a differentiable manifold there is no such ambient space, to define a tangent space at each point we have to find a characteristic property of tangent vectors which does not involve the concept of velocity in an ambient space. To this end, let's consider a smooth curve in R^n:

γ(t) = (x_1(t), ..., x_n(t)), t ∈ (−ε, ε), with γ(0) = p.
The tangent vector of γ at the point p is v = (x'_1(0), ..., x'_n(0)). Let f(x) be a smooth function near p. Then the directional derivative of f along the vector v ∈ R^n can be expressed as

d/dt (f ∘ γ)|_{t=0} = Σ_{i=1}^n ∂f/∂x_i (p) x'_i(0) = ( Σ_i x'_i(0) ∂/∂x_i ) f.

Hence the directional derivative is an operator on differentiable functions that depends uniquely on the vector v. We can identify v with this operator, and thus generalize the concept of tangent vectors to differentiable manifolds M.

Definition 4.2.1 A differentiable function γ : (−ε, ε) → M is called a (differentiable) curve in M. Let D be the set of all functions that are differentiable at the point p = γ(0) ∈ M. The tangent vector to the curve γ at t = 0 is an operator (or function) γ'(0) : D → R given by

γ'(0) f = d/dt (f ∘ γ)|_{t=0}.

A tangent vector at p is the tangent vector of some curve γ at t = 0 with γ(0) = p. The set of all tangent vectors at p is called the tangent space at p, and denoted by T_p M.

Let (x_1, ..., x_n) be the coordinates in a local chart (U, φ) that covers p, with φ(p) = 0 ∈ φ(U), and let γ(t) = (x_1(t), ..., x_n(t)) be a curve in M with γ(0) = p. Then

γ'(0) f = d/dt f(γ(t))|_{t=0} = d/dt f(x_1(t), ..., x_n(t))|_{t=0} = Σ_i x'_i(0) ∂f/∂x_i = ( Σ_i x'_i(0) ∂/∂x_i |_p ) f.

This shows that the vector γ'(0) can be expressed as the linear combination of the ∂/∂x_i |_p:

γ'(0) = Σ_i x'_i(0) ∂/∂x_i |_p.

Here ∂/∂x_i |_p is the tangent vector at p of the "coordinate curve" x_i → φ^{−1}(0, ..., 0, x_i, 0, ..., 0), and
∂/∂x_i |_p f = ∂/∂x_i f( φ^{−1}(0, ..., 0, x_i, 0, ..., 0) ) |_{x_i = 0}.

One can see that the tangent space T_p M is an n-dimensional linear space with the basis

∂/∂x_1 |_p, ..., ∂/∂x_n |_p.

The dual space of the tangent space T_p M is called the cotangent space, and denoted by T*_p M. We can realize it in the following. We first introduce the differential of a differentiable mapping.

Definition 4.2.2 Let M and N be two differentiable manifolds and let f : M → N be a differentiable mapping. For every p ∈ M and for each v ∈ T_p M, choose a differentiable curve γ : (−ε, ε) → M with γ(0) = p and γ'(0) = v. Let β = f ∘ γ. Define the mapping df_p : T_p M → T_{f(p)} N by

df_p(v) = β'(0).

We call df_p the differential of f. Obviously, it is a linear mapping from one tangent space T_p M to the other, T_{f(p)} N, and one can show that it does not depend on the choice of γ (see [Ca]).

When N = R^1, from the definition one can derive that

df_p ( ∂/∂x_i |_p ) = ∂f/∂x_i |_p, and consequently df_p = Σ_i ∂f/∂x_i (dx_i)_p.

In particular, for the differential (dx_i)_p of the coordinate function x_i, we have

(dx_i)_p ( ∂/∂x_j |_p ) = δ_{ij}.

From here we can see that (dx_1)_p, (dx_2)_p, ..., (dx_n)_p form a basis of the cotangent space T*_p M at the point p; the collection of all such differentials forms the cotangent space.

Definition 4.2.3 The tangent space of M is

TM = ∪_{p∈M} T_p M,

and the cotangent space of M is

T*M = ∪_{p∈M} T*_p M.
4.3 Riemannian Metrics

To measure arc length, area, or volume on a manifold, we need to introduce some kind of measurement, or "metric". For a surface S in the three-dimensional Euclidean space R^3, there is a natural way of measuring the length of vectors tangent to S: simply take the length of the vectors in the ambient space R^3, which can be expressed in terms of the inner product <,> in R^3. Given a curve γ(t) on S for t ∈ [a, b], its length is the integral

∫_a^b |γ'(t)| dt,

where the length of the velocity vector γ'(t) is given by the inner product in R^3:

|γ'(t)| = < γ'(t), γ'(t) >^{1/2}.

The inner product <,> enables us to measure not only the lengths of curves on S, but also the area of domains on S, as well as other quantities in geometry. Now we generalize this concept to differentiable manifolds without using the ambient space. More precisely, we have

Definition 4.3.1 A Riemannian metric on a differentiable manifold M is a correspondence which associates to each point p of M an inner product <·,·>_p on the tangent space T_p M, which varies differentiably; that is,

g_{ij}(q) ≡ < ∂/∂x_i |_q, ∂/∂x_j |_q >_q

is a differentiable function of q for all q near p.

It is clear that this definition does not depend on the choice of coordinate system. A differentiable manifold with a given Riemannian metric will be called a Riemannian manifold. One can prove (see [Ca]):

Proposition 4.3.1 A differentiable manifold M has a Riemannian metric.

Now we show how a Riemannian metric can be used to measure the length of a curve on a manifold M. Let I be an open interval in R^1, and let γ : I → M be a differentiable mapping (curve) on M. For each t ∈ I, dγ is a linear mapping from T_t I to T_{γ(t)} M, and γ'(t) ≡ dγ(d/dt) is a tangent vector in T_{γ(t)} M. The restriction of the curve γ to a closed interval [a, b] ⊂ I is called a segment. We define the length of the segment by

∫_a^b < γ'(t), γ'(t) >^{1/2} dt.
4.3 Riemannian Metrics 113

Here the arc length element is ds = < γ'(t), γ'(t) >^{1/2} dt. As we mentioned in the previous section, let (x_1, ..., x_n) be the coordinates in a local chart (U, φ) that covers p = γ(t) = (x_1(t), ..., x_n(t)). Then γ'(t) = Σ_i x'_i(t) ∂/∂x_i, and consequently,

ds^2 = < γ'(t), γ'(t) > dt^2 = Σ_{i,j=1}^n < ∂/∂x_i, ∂/∂x_j >_p x'_i(t) dt x'_j(t) dt = Σ_{i,j=1}^n g_{ij}(p) dx_i dx_j.  (4.1)

This expresses the length element ds in terms of the metric g_{ij}.

To derive the formula for the volume of a region (an open connected subset) D in M in terms of the metric g_{ij}, let's begin with the volume of the parallelepiped formed by the tangent vectors ∂/∂x_i, i = 1, ..., n. Let {e_1, ..., e_n} be an orthonormal basis of T_p M, < e_j, e_l > = δ_{jl}, and let

∂/∂x_i (p) = Σ_j a_{ij} e_j.

Then

g_{ik}(p) = < ∂/∂x_i, ∂/∂x_k >_p = Σ_{j,l} a_{ij} a_{kl} < e_j, e_l > = Σ_j a_{ij} a_{kj}.  (4.2)

We know that the volume of the parallelepiped formed by ∂/∂x_1, ..., ∂/∂x_n is the determinant det(a_{ij}), and by virtue of (4.2), it is the same as √det(g_{ij}). From the above arguments, assuming the region D is covered by a coordinate chart (U, φ) with coordinates (x_1, ..., x_n), it is natural to define the volume of D as

vol(D) = ∫_{φ(U)} √det(g_{ij}) dx_1 ··· dx_n.

A trivial example is M = R^n, the n-dimensional Euclidean space with

∂/∂x_i = e_i = (0, ..., 0, 1, 0, ..., 0), i = 1, ..., n.

The metric is given by g_{ij} = < e_i, e_j > = δ_{ij}.
A less trivial example is S^2, the 2-dimensional unit sphere with the metric inherited from the ambient space R^3 = {(x, y, z)}. We will use the usual spherical coordinates (θ, ψ), with

x = sin θ cos ψ, y = sin θ sin ψ, z = cos θ,

to illustrate how we use the measurement (inner product) in R^3 to derive the expression of the standard metric g_{ij} in terms of the local coordinates (θ, ψ) on S^2.

Let p be a point on S^2 covered by a local coordinates chart (U, φ) with coordinates (θ, ψ). We first derive the expressions for ∂/∂θ |_p and ∂/∂ψ |_p, the tangent vectors at the point p = φ^{−1}(θ, ψ) of the coordinate curves ψ = constant and θ = constant, respectively. In the coordinates of R^3, we have

∂/∂θ |_p = ( ∂x/∂θ, ∂y/∂θ, ∂z/∂θ ) = ( cos θ cos ψ, cos θ sin ψ, −sin θ )

and

∂/∂ψ |_p = ( ∂x/∂ψ, ∂y/∂ψ, ∂z/∂ψ ) = ( −sin θ sin ψ, sin θ cos ψ, 0 ).

It follows that

g_{11}(p) = < ∂/∂θ, ∂/∂θ >_p = cos^2 θ cos^2 ψ + cos^2 θ sin^2 ψ + sin^2 θ = 1,
g_{12}(p) = g_{21}(p) = < ∂/∂θ, ∂/∂ψ >_p = 0,
g_{22}(p) = < ∂/∂ψ, ∂/∂ψ >_p = sin^2 θ.

Consequently, the length element is

ds = √( g_{11} dθ^2 + 2 g_{12} dθ dψ + g_{22} dψ^2 ) = √( dθ^2 + sin^2 θ dψ^2 ),

and the area element is

dA = √det(g_{ij}) dθ dψ = sin θ dθ dψ.

These conform with our knowledge of length and area in spherical coordinates.
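As a quick numerical check of the area element dA = sin θ dθ dψ (an illustration added here, not in the original text), integrating it over 0 ≤ θ ≤ π, 0 ≤ ψ ≤ 2π recovers the total area 4π of the unit sphere.

```python
import math

# Midpoint-rule integration of the area element dA = sin(theta) dtheta dpsi
# over the unit sphere S^2.
N = 2000
dtheta = math.pi / N
area = 0.0
for i in range(N):
    theta = (i + 0.5) * dtheta       # midpoint in theta
    area += math.sin(theta) * dtheta
area *= 2 * math.pi                  # the psi-integral contributes 2*pi

print(area, 4 * math.pi)
```

With 2000 midpoint cells, the quadrature error is far below the tolerance of the comparison.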
4.4 Curvature

4.4.1 Curvature of Plane Curves

Consider curves on a plane. Intuitively, we feel that a straight line does not bend at all, and that a circle of radius one bends more than a circle of radius 2. To measure the extent of bending of a curve, or the amount it deviates from being straight, we introduce curvature.

Let p be a point on a curve, and let T(p) be the unit tangent vector at p. Let q be a nearby point, let Δα be the amount of change of the angle from T(p) to T(q), and let Δs be the arc length between p and q on the curve. We define the curvature at p as

k(p) = lim_{q→p} Δα/Δs = dα/ds (p).

If a plane curve is given parametrically as c(t) = (x(t), y(t)), then the unit tangent vector at c(t) is

T(t) = ( x'(t), y'(t) ) / √( [x'(t)]^2 + [y'(t)]^2 ),

and ds = √( [x'(t)]^2 + [y'(t)]^2 ) dt. By a straightforward calculation, one can verify that

k(c(t)) = |dT/ds| = ( x'(t) y''(t) − y'(t) x''(t) ) / { [x'(t)]^2 + [y'(t)]^2 }^{3/2}.

If a plane curve is given explicitly as y = f(x), then, replacing t by x and y(t) by f(x) in the above formula, we find that the curvature at the point (x, f(x)) is

k(x, f(x)) = f''(x) / { 1 + [f'(x)]^2 }^{3/2}.  (4.3)

4.4.2 Curvature of Surfaces in R^3

1. Principal Curvature

Let S be a surface in R^3, let p be a point on S, and let N(p) be a normal vector of S at p. Given a tangent vector T(p) at p, consider the intersection of the surface with the plane containing T(p) and N(p). This intersection is a plane curve, and its curvature, taken with the sign which is positive if the curve turns in the same direction as the surface's chosen normal and negative otherwise, is called the normal curvature. The normal curvature varies as the tangent vector T(p) changes. The maximum and minimum values of the normal curvature at the point p are called the principal curvatures, usually denoted by k_1(p) and k_2(p).
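The parametric curvature formula above, and hence (4.3), can be verified symbolically. The sketch below is an added illustration: it computes k(c(t)) for a circle of radius R and confirms that the curvature is the constant 1/R.

```python
import sympy as sp

t, R = sp.symbols('t R', positive=True)
x = R * sp.cos(t)                     # parametrized circle of radius R
y = R * sp.sin(t)

# k = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
num = sp.diff(x, t) * sp.diff(y, t, 2) - sp.diff(y, t) * sp.diff(x, t, 2)
den = (sp.diff(x, t)**2 + sp.diff(y, t)**2)**sp.Rational(3, 2)
k = sp.simplify(num / den)

print(k)
```

The numerator simplifies to R^2 and the denominator to R^3, so a circle of radius 1 indeed bends twice as much as a circle of radius 2.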
2. Gaussian Curvature

One way to measure the extent of bending of a surface is by the Gaussian curvature. It is defined as the product of the two principal curvatures:

K(p) = k_1(p) · k_2(p).

From this, one can see that, no matter which side normal is chosen (say, for a closed surface, one may choose outer normals or inner normals), the Gaussian curvature of a sphere is positive, that of a one-sheet hyperboloid is negative, and that of a plane is zero. A sphere of radius R has Gaussian curvature 1/R^2.

Let f(x, y) be a differentiable function, and consider a surface S in R^3 defined as the graph of x_3 = f(x_1, x_2). Assume that the origin is on S and that the xy-plane is tangent to S there; that is,

f(0, 0) = 0 and f_1(0, 0) = 0 = f_2(0, 0).

Here, for simplicity, we write f_i = ∂f/∂x_i and f_{ij} = ∂^2 f/∂x_i ∂x_j, i, j = 1, 2. We calculate the Gaussian curvature of S at the origin 0 = (0, 0, 0).

Let u = (u_1, u_2) be a unit vector on the tangent plane. Then, by (4.3), the normal curvature k_n(u) in the direction u is

k_n(u) = ( ∂^2 f/∂u^2 ) / [ ( ∂f/∂u )^2 + 1 ]^{3/2} (0),

where ∂f/∂u and ∂^2 f/∂u^2 are the first and second directional derivatives of f in the direction u. Using the assumption that f_1(0) = 0 = f_2(0), we arrive at

k_n(u) = ( f_{11} u_1^2 + 2 f_{12} u_1 u_2 + f_{22} u_2^2 )(0) = u^T F u,  (4.4)

where

F = ( f_{11} f_{12} ; f_{21} f_{22} )(0),

and u^T denotes the transpose of u. Since the matrix F is symmetric, its two unit eigenvectors e_1 and e_2 are orthogonal. Let λ_1 and λ_2 be the two eigenvalues of F with corresponding unit eigenvectors e_1 and e_2, i.e. F e_i = λ_i e_i, i = 1, 2; it is obvious that e_i^T F e_i = λ_i. Let θ be the angle between u and e_1; then we can express u = cos θ e_1 + sin θ e_2. It follows that
k_n(u) = u^T F u = cos^2 θ λ_1 + sin^2 θ λ_2.

Assume that λ_1 ≤ λ_2. Then

λ_1 ≤ λ_1 + sin^2 θ (λ_2 − λ_1) = k_n(u) = cos^2 θ (λ_1 − λ_2) + λ_2 ≤ λ_2.

Hence λ_1 and λ_2 are the minimum and maximum values of the normal curvature, and therefore they are the principal curvatures k_1 and k_2. From (4.4), we see that λ_1 and λ_2 are the solutions of the quadratic equation

det( f_{11} − λ, f_{12} ; f_{21}, f_{22} − λ ) = λ^2 − (f_{11} + f_{22}) λ + (f_{11} f_{22} − f_{12} f_{21}) = 0.

Therefore, by definition, the Gaussian curvature at 0 is

K(0) = k_1 k_2 = λ_1 λ_2 = ( f_{11} f_{22} − f_{12} f_{21} )(0).

4.4.3 Curvature on Riemannian Manifolds

On a general n-dimensional Riemannian manifold M, there are several kinds of curvatures. Before introducing them, we need a few preparations.

1. Vector Fields and Brackets

A vector field X on M is a correspondence that assigns to each point p ∈ M a vector X(p) in the tangent space T_p M. It is a mapping from M to TM. In a local coordinates, we can express

X(p) = Σ_{i=1}^n a_i(p) ∂/∂x_i.

If this mapping is differentiable, then we say that the vector field X is differentiable. When applied to a smooth function f on M, X is, by definition, a directional derivative operator in the direction X(p):

(X f)(p) = Σ_{i=1}^n a_i(p) ∂f/∂x_i.

Definition 4.4.1 For two differentiable vector fields X and Y, define the Lie bracket as [X, Y] = XY − YX. If X(p) = Σ_i a_i(p) ∂/∂x_i and Y(p) = Σ_j b_j(p) ∂/∂x_j, then

[X, Y] f = XY f − YX f = Σ_{i,j=1}^n ( a_i ∂b_j/∂x_i − b_i ∂a_j/∂x_i ) ∂f/∂x_j.
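The Gaussian-curvature formula K(0) = (f_{11} f_{22} − f_{12} f_{21})(0) derived above can be checked on a concrete graph. The sketch below is an added example: the hypothetical osculating paraboloid f(x, y) = (x^2 + y^2)/(2R) is tangent to the xy-plane at the origin and should have Gaussian curvature 1/R^2 there, matching that of a sphere of radius R.

```python
import sympy as sp

x, y, R = sp.symbols('x y R', positive=True)
f = (x**2 + y**2) / (2 * R)          # graph tangent to the xy-plane at 0

# Hessian entries f_11, f_12, f_22 evaluated at the origin
f11 = sp.diff(f, x, 2).subs({x: 0, y: 0})
f12 = sp.diff(f, x, y).subs({x: 0, y: 0})
f22 = sp.diff(f, y, 2).subs({x: 0, y: 0})

K = sp.simplify(f11 * f22 - f12**2)  # K(0) = det of the Hessian at 0
print(K)
```

Here f_{11} = f_{22} = 1/R and f_{12} = 0, so the determinant is 1/R^2, as expected.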
2. Covariant Derivatives and Riemannian Connections

To be more intuitive, we begin with a surface S in R^3. Let c : I → S be a curve on S, and let V(t) be a vector field along c(t). One can see that dV/dt is a vector in the ambient space R^3, but it may not belong to the tangent plane of S. To consider the rate of change of V(t) restricted to the surface S, we make an orthogonal projection of dV/dt onto the tangent plane, and denote it by DV/dt. We call this the covariant derivative of V(t) on S, that is, the derivative viewed by the two-dimensional creatures on S. A vector field V(t) along a curve c(t) on S is parallel if DV/dt = 0; in Euclidean spaces, a vector field V(t) along a curve c(t) is parallel if and only if dV/dt = 0.

On an n-dimensional differentiable manifold M, the covariant derivative is defined through affine connections. Let D(M) be the set of all smooth vector fields on M.

Definition 4.4.2 An affine connection D on a differentiable manifold M is a mapping from D(M) × D(M) to D(M) which maps (X, Y) to D_X Y and satisfies
1) D_{fX+gY} Z = f D_X Z + g D_Y Z,
2) D_X (Y + Z) = D_X Y + D_X Z,
3) D_X (f Y) = f D_X Y + X(f) Y,
where X, Y, Z ∈ D(M), and f and g are smooth functions on M.

Given an affine connection D on M, for a vector field V(t) = Y(c(t)) along a differentiable curve c : I → M, we define the covariant derivative of V along c(t) as

DV/dt = D_{dc/dt} Y.

Definition 4.4.3 Let M be a differentiable manifold with an affine connection D. A vector field V(t) along a curve c(t) ∈ M is parallel if DV/dt = 0 for all t ∈ I.

In a local coordinates, let c(t) = (x_1(t), ..., x_n(t)) and V(t) = Σ_i v_i(t) ∂/∂x_i. Then, by the properties of the affine connection, one can express

DV/dt = Σ_i (dv_i/dt) ∂/∂x_i + Σ_{i,j} (dx_i/dt) v_j D_{∂/∂x_i} ∂/∂x_j.  (4.5)

Recall that in Euclidean spaces, with the inner product <,>, for any pair of parallel vector fields X and Y along a curve c(t), we have < X, Y > = constant. On a Riemannian manifold, this motivates the following definition.
Definition 4.4.4 Let M be a differentiable manifold with an affine connection D and a Riemannian metric <,>. If for any pair of parallel vector fields X and Y along any smooth curve c(t) we have

< X, Y >_{c(t)} = constant,

then we say that D is compatible with the metric <,>.

The following fundamental theorem of Levi-Civita guarantees the existence of such an affine connection on a Riemannian manifold.

Theorem 4.4.1 Given a Riemannian manifold M with metric <,>, there exists a unique connection D on M that is compatible with <,>. Moreover, this connection is symmetric, i.e.

D_X Y − D_Y X = [X, Y].

We skip the proof of the theorem; interested readers may see the book of do Carmo [Ca] (page 55). In a local coordinates (x_1, ..., x_n), this unique Riemannian connection can be expressed as

D_{∂/∂x_i} ∂/∂x_j = Σ_k Γ^k_{ij} ∂/∂x_k,

where

Γ^k_{ij} = (1/2) Σ_m ( ∂g_{jm}/∂x_i + ∂g_{mi}/∂x_j − ∂g_{ij}/∂x_m ) g^{mk}.

Here g_{ij} = < ∂/∂x_i, ∂/∂x_j >, (g^{ij}) is the inverse matrix of (g_{ij}), and the Γ^k_{ij} are called the Christoffel symbols.

3. Curvature

Definition 4.4.5 Let M be a Riemannian manifold with the Riemannian connection D. The curvature R is a correspondence that assigns to every pair of smooth vector fields X, Y ∈ D(M) a mapping R(X, Y) : D(M) → D(M) defined by

R(X, Y) Z = D_Y D_X Z − D_X D_Y Z + D_{[X,Y]} Z, ∀ Z ∈ D(M).

Definition 4.4.6 The sectional curvature of a given two-dimensional subspace E ⊂ T_p M at the point p is

K(E) = < R(X, Y)X, Y > / ( |X|^2 |Y|^2 − < X, Y >^2 ),

where X and Y are any two linearly independent vectors in E. Here |X|^2 |Y|^2 − < X, Y >^2 is the square of the area of the parallelogram spanned by X and Y.
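The Christoffel-symbol formula above can be evaluated symbolically for the standard metric on S^2 from Section 4.3, g_{11} = 1, g_{22} = sin^2 θ. This is an added check, not part of the original text; the only nonzero symbols should be Γ^θ_{ψψ} = −sin θ cos θ and Γ^ψ_{θψ} = Γ^ψ_{ψθ} = cos θ / sin θ.

```python
import sympy as sp

theta, psi = sp.symbols('theta psi')
coords = [theta, psi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on S^2
ginv = g.inv()

# Gamma[k][i][j] = (1/2) g^{km} (d_i g_{jm} + d_j g_{mi} - d_m g_{ij})
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[k, m] *
                           (sp.diff(g[j, m], coords[i]) +
                            sp.diff(g[m, i], coords[j]) -
                            sp.diff(g[i, j], coords[m]))
                           for m in range(2)))
           for j in range(2)] for i in range(2)] for k in range(2)]

print(Gamma)
```

Index 0 stands for θ and index 1 for ψ, so Gamma[0][1][1] and Gamma[1][0][1] are the two symbols named above.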
One can show that K(E) so defined is independent of the choice of X and Y. On two-dimensional manifolds, it is actually the Gaussian curvature of the section.

Example. Let M be the unit sphere with the standard metric. Then one can verify that its sectional curvature at every point is K(E) = 1.

Definition 4.4.7 Let X = Y_n be a unit vector in T_p M, and let {Y_1, ..., Y_{n−1}} be an orthonormal basis of the hyperplane in T_p M orthogonal to X. Then the Ricci curvature in the direction X is the average of the sectional curvatures of the n − 1 sections spanned by X and Y_1, X and Y_2, ..., and X and Y_{n−1}:

Ric_p(X) = (1/(n−1)) Σ_{i=1}^{n−1} < R(X, Y_i)X, Y_i >.

The scalar curvature at p is

R(p) = (1/n) Σ_{i=1}^n Ric_p(Y_i).

One can show that the above definitions do not depend on the choice of the orthonormal basis, and that both depend only on the point p. On two-dimensional manifolds, the Ricci curvature is the same as the sectional curvature.

4.5 Calculus on Manifolds

In the previous section, we introduced the concept of an affine connection D on a differentiable manifold M. It is a mapping from Γ(M) × Γ(M) to Γ(M), where Γ(M) is the set of all smooth vector fields on M. Here we will extend this definition to the space of differentiable tensor fields, and thus introduce higher order covariant derivatives. For convenience, we adopt the summation convention and write, for instance,

X_i ∂/∂x_i := Σ_i X_i ∂/∂x_i, Γ^j_{ik} dx^k := Σ_k Γ^j_{ik} dx^k,

and so on.

4.5.1 Higher Order Covariant Derivatives and the Laplace-Beltrami Operator

Let p and q be two nonnegative integers. At a point x on M, we denote the tangent space by T_x M. Define T^q_p(T_x M) as the space of (p, q)-tensors on T_x M, that is, the space of (p + q)-linear forms
$$\eta : \underbrace{T_x M \times \cdots \times T_x M}_{p} \times \underbrace{T_x^* M \times \cdots \times T_x^* M}_{q} \to R^1.$$

In a local coordinates, we can express
$$T = T^{j_1 \cdots j_q}_{i_1 \cdots i_p} \, dx^{i_1} \otimes \cdots \otimes dx^{i_p} \otimes \frac{\partial}{\partial x^{j_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{j_q}}.$$

Recall that, for the affine connection D in a local coordinates, if we set $\nabla_i = D_{\frac{\partial}{\partial x^i}}$, then
$$\nabla_i \left( \frac{\partial}{\partial x^j} \right) = \Gamma^k_{ij} \frac{\partial}{\partial x^k},$$
where the $\Gamma^k_{ij}$ are the Christoffel symbols. For smooth vector fields $X, Y \in \Gamma(M)$ with $X = (X^1, \cdots, X^n)$ and $Y = (Y^1, \cdots, Y^n)$, by the bilinear properties of the connection, we have
$$D_X Y = D_{X^i \frac{\partial}{\partial x^i}} Y = X^i \nabla_i Y = X^i \left( \frac{\partial Y^j}{\partial x^i} \frac{\partial}{\partial x^j} + Y^j \Gamma^k_{ij} \frac{\partial}{\partial x^k} \right) = X^i \left( \frac{\partial Y^j}{\partial x^i} + \Gamma^j_{ik} Y^k \right) \frac{\partial}{\partial x^j}. \qquad (4.6)$$

Now we naturally extend this affine connection D to differentiable (p,q)-tensor fields T on M. Given a tensor field T and a vector $X \in T_x M$, $D_X T$ is defined to be a (p,q)-tensor on $T_x M$ by
$$D_X T(x) = X^i (\nabla_i T)(x),$$
where
$$(\nabla_i T)(x)^{j_1 \cdots j_q}_{i_1 \cdots i_p} = \frac{\partial T^{j_1 \cdots j_q}_{i_1 \cdots i_p}}{\partial x^i} - \sum_{k=1}^{p} \Gamma^\alpha_{i i_k}(x) \, T(x)^{j_1 \cdots j_q}_{i_1 \cdots i_{k-1} \alpha i_{k+1} \cdots i_p} + \sum_{k=1}^{q} \Gamma^{j_k}_{i \alpha}(x) \, T(x)^{j_1 \cdots j_{k-1} \alpha j_{k+1} \cdots j_q}_{i_1 \cdots i_p}. \qquad (4.7)$$

Notice that (4.6) is a particular case when (4.7) is applied to the (0,1)-tensor Y. If we apply it to a (1,0)-tensor, say $dx^j$, then
$$\nabla_i (dx^j) = -\Gamma^j_{ik} \, dx^k. \qquad (4.8)$$

Given two differentiable tensor fields T and $\tilde{T}$, we have
$$D_X (T \otimes \tilde{T}) = D_X T \otimes \tilde{T} + T \otimes D_X \tilde{T}.$$

For a differentiable (p,q)-tensor field T, we define $\nabla T$ to be the (p+1,q)-tensor field whose components are given by
$$(\nabla T)^{j_1 \cdots j_q}_{i_1 \cdots i_{p+1}} = (\nabla_{i_1} T)^{j_1 \cdots j_q}_{i_2 \cdots i_{p+1}}.$$
Similarly, one can define $\nabla^2 T$, $\nabla^3 T$, and so on.

For example, a smooth function f(x) on M is a (0,0)-tensor, and $\nabla f$ is the (1,0)-tensor
$$\nabla f = \nabla_i f \, dx^i = \frac{\partial f}{\partial x^i} dx^i.$$
And $\nabla^2 f$ is a (2,0)-tensor:
$$\nabla^2 f = \nabla \left( \frac{\partial f}{\partial x^i} dx^i \right) = \frac{\partial^2 f}{\partial x^j \partial x^i} \, dx^j \otimes dx^i + \frac{\partial f}{\partial x^i} \, \nabla_j(dx^i) \otimes dx^j = \left( \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij} \frac{\partial f}{\partial x^k} \right) dx^i \otimes dx^j.$$
Here we have used (4.7) and (4.8). $\nabla^2 f$ is called the Hessian of f and denoted by Hess(f). Its ij-th component is
$$(\nabla^2 f)_{ij} = \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij} \frac{\partial f}{\partial x^k}.$$

The trace of the Hessian matrix $((\nabla^2 f)_{ij})$ is defined to be the Laplace-Beltrami operator acting on f:
$$\Delta f = g^{ij} \left( \frac{\partial^2 f}{\partial x^i \partial x^j} - \Gamma^k_{ij} \frac{\partial f}{\partial x^k} \right),$$
where $(g^{ij})$ is the inverse matrix of $(g_{ij})$. Through a straightforward calculation, one can verify that
$$\Delta = \frac{1}{\sqrt{g}} \frac{\partial}{\partial x^i} \left( \sqrt{g} \, g^{ij} \frac{\partial}{\partial x^j} \right),$$
where g is the determinant of $(g_{ij})$.

4.5.2 Integrals

On a smooth n-dimensional Riemannian manifold (M, g), one can define a natural positive Radon measure on M, and the theory of the Lebesgue integral applies.
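Before turning to integrals, the divergence form of $\Delta$ just derived lends itself to a direct numerical check. The sketch below evaluates $\Delta f = \frac{1}{\sqrt{g}} \partial_i (\sqrt{g}\, g^{ij} \partial_j f)$ by nested central differences on the unit sphere in a colatitude/longitude chart (the chart, test points, and step size are assumptions for illustration). For the first spherical harmonics, e.g. $f = \cos\theta$ and $f = \sin\theta\cos\phi$, one should find $\Delta f = -2f$.

```python
import math

def metric(x):
    # unit sphere in (theta, phi), theta the colatitude
    return [[1.0, 0.0], [0.0, math.sin(x[0]) ** 2]]

def laplace_beltrami(f, x, h=1e-4):
    # (1/sqrt g) d_i ( sqrt g g^{ij} d_j f ), all derivatives by central differences
    def flux(y, i):
        g = metric(y)
        det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
        ginv = [[g[1][1] / det, -g[0][1] / det],
                [-g[1][0] / det, g[0][0] / det]]
        total = 0.0
        for j in range(2):
            yp, ym = list(y), list(y)
            yp[j] += h; ym[j] -= h
            total += ginv[i][j] * (f(yp) - f(ym)) / (2 * h)
        return math.sqrt(det) * total

    s = 0.0
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        s += (flux(xp, i) - flux(xm, i)) / (2 * h)
    g = metric(x)
    return s / math.sqrt(g[0][0] * g[1][1] - g[0][1] * g[1][0])

x = [1.1, 0.5]
f = lambda p: math.cos(p[0])                      # first spherical harmonic
print(laplace_beltrami(f, x), -2 * math.cos(x[0]))  # should agree
```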
Let $(U_i, \phi_i)_i$ be a family of coordinates charts that covers M. We say that a family $(U_i, \phi_i, \eta_i)_i$ is a family of partition of unity associated to $(U_i, \phi_i)_i$ if
(i) $(\eta_i)_i$ is a smooth partition of unity associated to the covering $(U_i)_i$, and
(ii) for any i, supp $\eta_i \subset U_i$.
One can show that, for each family of coordinates charts $(U_i, \phi_i)_i$ of M, there exists a corresponding family of partition of unity $(U_i, \phi_i, \eta_i)_i$.

Let f be a continuous function on M with compact support. Given a family of coordinates charts $(U_i, \phi_i)_i$ and a family of partition of unity $(U_i, \phi_i, \eta_i)_i$, we define its integral on M as the sum of integrals on open sets in $R^n$:
$$\int_M f \, dV_g = \sum_i \int_{\phi_i(U_i)} (\eta_i \sqrt{g} f) \circ \phi_i^{-1} \, dx,$$
where g is the determinant of $(g_{ij})$ in the charts $(U_i, \phi_i)_i$, and dx stands for the volume element of $R^n$. One can prove that such a definition does not depend on the choice of the charts $(U_i, \phi_i)_i$ and the partition of unity $(U_i, \phi_i, \eta_i)_i$.

For smooth functions u and v on a compact manifold M, one can verify the following integration by parts formula:
$$- \int_M \Delta_g u \, v \, dV_g = \int_M < \nabla u, \nabla v > dV_g, \qquad (4.9)$$
where $\Delta_g$ is the Laplace-Beltrami operator associated to the metric g, and
$$< \nabla u, \nabla v > = g^{ij} \nabla_i u \nabla_j v = g^{ij} \frac{\partial u}{\partial x^i} \frac{\partial v}{\partial x^j}$$
is the scalar product associated with g for 1-forms.

4.5.3 Equations on Prescribing Gaussian and Scalar Curvature

A Riemannian metric g on M is said to be pointwise conformal to another metric $g_o$ if there exists a positive function $\rho$, such that
$$g(x) = \rho(x) g_o(x), \quad \forall x \in M.$$

Suppose M is a two dimensional Riemannian manifold with a metric $g_o$ and the corresponding Gaussian curvature $K_o(x)$. Let $g(x) = e^{2u(x)} g_o(x)$ for some smooth function u(x) on M, and let K(x) be the Gaussian curvature associated to the metric g. Then by a straightforward calculation, one can verify that
$$- \Delta_o u + K_o(x) = K(x) e^{2u(x)}, \qquad (4.10)$$
where $\Delta_o$ is the Laplace-Beltrami operator associated to $g_o$.
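Equation (4.10) can be sanity-checked numerically in the simplest setting: take the flat plane as background ($K_o \equiv 0$, $\Delta_o$ the Euclidean Laplacian) and the conformal factor $u(x) = \ln \frac{2}{1+|x|^2}$, which classically produces, via stereographic projection, the round metric of constant curvature $K \equiv 1$. Then (4.10) reduces to $K = -e^{-2u} \Delta u$. The choice of u, the sample points, and the grid step below are assumptions for illustration.

```python
import math

def u(x, y):
    # conformal factor of the stereographic sphere metric on the plane
    return math.log(2.0 / (1.0 + x * x + y * y))

def laplacian(f, x, y, h=1e-4):
    # standard 5-point finite-difference Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

# (4.10) with K_o = 0 gives K = -e^{-2u} Delta u; it should equal 1 everywhere
for (x, y) in [(0.0, 0.0), (0.7, -0.3), (1.5, 2.0)]:
    K = -math.exp(-2.0 * u(x, y)) * laplacian(u, x, y)
    print(x, y, K)
```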
Similarly, for the conformal relation between scalar curvatures on n-dimensional manifolds ($n \geq 3$), say on the standard sphere $S^n$, we have
$$- \Delta_o u + \frac{n(n-2)}{4} u = \frac{n-2}{4(n-1)} R(x) \, u^{\frac{n+2}{n-2}}, \quad \forall x \in S^n, \qquad (4.11)$$
where $g_o$ is the standard metric on $S^n$ with the corresponding Laplace-Beltrami operator $\Delta_o$, and R(x) is the scalar curvature associated to the pointwise conformal metric $g = u^{\frac{4}{n-2}} g_o$.

In Chapters 5 and 6, we will study the existence and qualitative properties of the solutions u for the above two equations respectively.

4.6 Sobolev Embeddings

Let (M, g) be a smooth Riemannian manifold. Many results concerning the Sobolev spaces on $R^n$ introduced in Chapter 1 can be generalized to Sobolev spaces on Riemannian manifolds. For application purposes, we list some of them in the following. Interested readers may find the proofs in [He] or [Au].

Let u be a smooth function on M and k be a nonnegative integer. We denote by $\nabla^k u$ the k-th covariant derivative of u as defined in the previous section, and its norm $|\nabla^k u|$ is defined in a local coordinates chart by
$$|\nabla^k u|^2 = g^{i_1 j_1} \cdots g^{i_k j_k} (\nabla^k u)_{i_1 \cdots i_k} (\nabla^k u)_{j_1 \cdots j_k}.$$
In particular, we have $\nabla^0 u = u$ and
$$|\nabla u|^2 = g^{ij} \nabla_i u \nabla_j u = g^{ij} \frac{\partial u}{\partial x^i} \frac{\partial u}{\partial x^j}.$$

For an integer k and a real number $p \geq 1$, let $C^p_k(M)$ be the space of $C^\infty$ functions u on M such that
$$\int_M |\nabla^j u|^p \, dV_g < \infty, \quad \forall j = 0, 1, \cdots, k.$$

Definition 4.6.1 The Sobolev space $H^{k,p}(M)$ is the completion of $C^p_k(M)$ with respect to the norm
$$\|u\|_{H^{k,p}(M)} = \sum_{j=0}^{k} \left( \int_M |\nabla^j u|^p \, dV_g \right)^{1/p}.$$

Theorem 4.6.1 For any integer k, $H^{k,2}(M)$ is a Hilbert space when equipped with the equivalent norm
then H k.q (M ) into Lp (M ) is compact nq if 1 ≤ p < n−q . In particular.6.p (M ) is compact.p (M ) . Then nq for any real number p such that 1 ≤ p < n−q(k−m) .q (M ) into C 0 (M ) is compact. compact ndimensional Riemannian manifold.q (M ) into C λ (M ) is compact. then H k. Theorem 4. k are two integers with 0 ≤ m < k. Theorem 4. then H k. Asnq sume 1 ≤ q < n and p = n−q .6 Sobolev Embeddings k 1/2 125 u = i=1 M  i u dVg 2 . the embedding of H k.5 (Compact Embeddings) Let (M. k are two integers with 0 ≤ m < k. the embedding of H 1.3 (Sobolev Embeddings I) Let (M.q (M ) ⊂ H m. compact ndimensional Riemannian manifold. If q > k−m . Asn sume that q ≥ 1 and m. the embedding H k. q . Theorem 4. the embedding of H 1. The corresponding scalar product is deﬁned by k < u.6.q (M ) ⊂ Lp (M ) .q (M ) ⊂ H m.q (M ) ⊂ C m (M ) .4 (Sobolev Embeddings II) Let (M. Theorem 4. · > is the scalar product on covariant tensor ﬁelds associated to the metric g. In particular. v >= i=1 M < i u. k are two integers with 0 ≤ m < k.6.2 If M is compact. (i) Assume that q ≥ 1 and m. and if p = nq n−q(k−m) . g) be a smooth.6. g) be a smooth.4. Then H 1. i v > dVg where < ·. if (k − λ) > n . (ii) For q > n. More generally.p (M ) does not depend on the metric. if m. compact ndimensional Riemannian manifold. g) be a smooth.
1 V ol(M. g) be a smooth.p (M ).126 4 Preliminary Analysis on Riemannian Manifolds Theorem 4. compact ndimensional Riemannian manifold and let p ∈ [1. n) be real. Then there exists a positive constant C such that for any u ∈ H 1. where u= ¯ is the average of u on M . 1/p 1/p u − up dVg ¯ M ≤C M  up dVg .g) M udVg .6.6 (Poincar´ Inequality) e Let (M.
5 Prescribing Gaussian Curvature on Compact 2-Manifolds

5.1 Prescribing Gaussian Curvature on $S^2$
    5.1.1 Obstructions
    5.1.2 Variational Approaches and Key Inequalities
    5.1.3 Existence of Weak Solutions
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
    5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions
    5.2.2 Chen and Li's Results

In this and the next chapters, we will present the readers with real research examples–semilinear equations arising from prescribing Gaussian and scalar curvatures. We will illustrate, among other methods, how the calculus of variations can be applied to seek weak solutions of these equations.

To show the existence of weak solutions for a certain equation, we consider the corresponding functional J(·) in an appropriate Hilbert space. According to the compactness of the functional, people usually divide it into three cases:
a) the subcritical case, where the level set of J is compact;
b) the critical case, which is the limiting situation and can be approximated by subcritical cases; or
c) the supercritical case–the remaining cases.
We say that J satisfies the (PS) condition (see also Chapter 2) if every sequence $\{u_k\}$ that satisfies $J(u_k) \to c$ and $J'(u_k) \to 0$ possesses a convergent subsequence.

To roughly see when a functional may lose compactness, let's consider
$$J(x) = (x^2 - 1) e^x$$
defined on the one dimensional space $R^1$. One can easily see that, for the sequence $x_k = -k$, we have $J(x_k) \to 0$ and $J'(x_k) \to 0$ as $k \to \infty$; however, $\{x_k\}$ possesses no convergent subsequence, because this sequence is not bounded. The functional thus has no compactness at the level J = 0.
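The failure of the (PS) condition for $J(x) = (x^2-1)e^x$ is concrete enough to tabulate: along $x_k = -k$ both J and J' tend to 0, yet the sequence escapes to $-\infty$. A short numeric sketch (the sample points are arbitrary):

```python
import math

def J(x):
    return (x * x - 1.0) * math.exp(x)

def Jprime(x):
    # J'(x) = (x^2 + 2x - 1) e^x
    return (x * x + 2.0 * x - 1.0) * math.exp(x)

for k in (1, 5, 10, 20, 30):
    x = -float(k)
    print(k, J(x), Jprime(x))
# Both columns tend to 0, but x_k = -k has no convergent subsequence,
# so the (PS) condition fails at the level c = 0.
```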
However, if we restrict the functional to levels less than 0, we can recover the compactness. Actually, if we pick a sequence $\{x_k\}$ satisfying $J(x_k) < 0$ and $J'(x_k) \to 0$, then one can verify that it possesses a subsequence which converges to a critical point $x_0$ of J; in fact, this $x_0$ is a minimum of J.

We know that in a finite dimensional space a bounded sequence has a convergent subsequence, while in an infinite dimensional space even a bounded sequence may not have a convergent subsequence. For instance, $\{\sin kx\}$ is a bounded sequence in $L^2([0,1])$, but it does not have a convergent subsequence.

To further illustrate the concept of compactness, we consider some examples in infinite dimensional spaces. Let $\Omega$ be an open bounded subset in $R^n$, and let $H^1_0(\Omega)$ be the Hilbert space introduced in the previous chapters. Consider the Dirichlet boundary value problem
$$\begin{cases} -\Delta u = u^p(x), & x \in \Omega, \\ u(x) = 0, & x \in \partial\Omega. \end{cases} \qquad (5.1)$$
To seek weak solutions, we study the corresponding functional
$$J_p(u) = \int_\Omega u^{p+1} \, dx$$
on the constraint
$$H = \left\{ u \in H^1_0(\Omega) \mid \int_\Omega |\nabla u|^2 dx = 1 \right\}.$$

When $p < \frac{n+2}{n-2}$, it is in the subcritical case: the functional satisfies the (PS) condition. If we define
$$m_p = \inf_{u \in H} J_p(u),$$
then a minimizing sequence $\{u_k\}$, $J_p(u_k) \to m_p$, possesses a convergent subsequence in $H^1_0(\Omega)$, and the limiting function $u_0$ would be the minimum of $J_p$ in H, hence the desired weak solution.

When $p = \tau := \frac{n+2}{n-2}$, it is in the critical case. As we have seen in Chapter 2, to find a critical point of the functional, one would try to recover the compactness through various means. Here one can find a minimizing sequence that does not converge. For instance, when $\Omega = B_1(0)$ is a ball, consider
$$u_k(x) = \frac{[n(n-2)k]^{\frac{n-2}{4}}}{(1 + k|x|^2)^{\frac{n-2}{2}}} \, \eta(x),$$
where $\eta \in C_0^\infty(\Omega)$ is a cut off function satisfying $\eta(x) = 1$ for $x \in B_{1/2}(0)$ and $0 \leq \eta(x) \leq 1$ elsewhere.
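Before the cutoff $\eta$ is applied, the profile in the display above (with $\lambda^2 = 1/k$) is the standard solution of the critical equation $-\Delta u = u^{\frac{n+2}{n-2}}$ on all of $R^n$. The sketch below checks this radially for n = 3, where the exponent is 5, using finite differences for the radial Laplacian $u'' + \frac{2}{r} u'$; the value of k, the sample radii, and the step size are illustrative assumptions.

```python
import math

n, k = 3, 4.0

def u(r):
    # u_k with the cutoff removed, written as a radial function
    return (n * (n - 2) * k) ** ((n - 2) / 4.0) / (1.0 + k * r * r) ** ((n - 2) / 2.0)

def radial_laplacian(f, r, h=1e-4):
    d1 = (f(r + h) - f(r - h)) / (2 * h)
    d2 = (f(r + h) - 2 * f(r) + f(r - h)) / h**2
    return d2 + (n - 1) / r * d1

for r in (0.3, 1.0, 2.5):
    lhs = -radial_laplacian(u, r)
    rhs = u(r) ** ((n + 2) / (n - 2))
    print(r, lhs, rhs)   # the two columns should agree
```

As k grows, this profile concentrates near the origin, which is exactly the "blow up" behavior discussed next.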
Let
$$v_k(x) = \frac{u_k(x)}{\left( \int_\Omega |\nabla u_k|^2 dx \right)^{1/2}}.$$
Then one can verify that $\{v_k\}$ is a minimizing sequence of $J_\tau$ in H with $J'(v_k) \to 0$ as $k \to \infty$. However, it does not have a convergent subsequence. Actually, the energy of $\{v_k\}$ is concentrating at the one point 0. We say that the sequence "blows up" at the point 0.

In essence, the compactness of the functional $J_\tau$ here relies on the Sobolev embedding
$$H^1(\Omega) \hookrightarrow L^{p+1}(\Omega).$$
As we learned in Chapter 1, this embedding is compact when $p < \frac{n+2}{n-2}$ and is not when $p = \frac{n+2}{n-2}$.

In the situation where the lack of compactness occurs, the compactness may be recovered by considering other level sets or by changing the topology of the domain. For example, if $\Omega$ is an annulus, then the above $J_\tau$ has a minimizer in H. In this and the next chapter, the readers will see examples from prescribing Gaussian and scalar curvatures, where the corresponding equations are in the critical case and where various means have been exploited to recover the compactness.

5.1 Prescribing Gaussian Curvature on $S^2$

Given a function K(x) on the two dimensional unit sphere $S := S^2$ with standard metric $g_o$, can it be realized as the Gaussian curvature of some pointwise conformal metric g? This has been an interesting problem in differential geometry and is known as the Nirenberg problem. If we let $g = e^{2u} g_o$, then it is equivalent to solving the following semilinear elliptic equation on $S^2$:
$$-\Delta u + 1 = K(x) e^{2u}, \quad x \in S^2, \qquad (5.2)$$
where $\Delta$ is the Laplace-Beltrami operator associated with the standard metric $g_o$.

If we replace 2u by u and let R(x) = 2K(x), then equation (5.2) becomes
$$-\Delta u + 2 = R(x) e^{u(x)}. \qquad (5.3)$$
Here 2 and R(x) are actually the scalar curvatures of $S^2$ with the standard metric $g_o$ and with the pointwise conformal metric $e^u g_o$, respectively.

5.1.1 Obstructions

For equation (5.3) to have a solution, the function R(x) must satisfy the obvious Gauss-Bonnet sign condition
$$\int_S R(x) e^u dx = 8\pi. \qquad (5.4)$$
This can be seen by integrating both sides of the equation (5.3) and using the fact that
$$\int_S (-\Delta u) \, dx = \int_S \nabla u \cdot \nabla 1 \, dx = 0.$$
Condition (5.4) implies that R(x) must be positive somewhere.

Besides this obvious necessary condition, there are other necessary conditions found by Kazdan and Warner. They read
$$\int_S \nabla \phi_i \cdot \nabla R(x) \, e^u dx = 0, \quad i = 1, 2, 3. \qquad (5.5)$$
Here the $\phi_i$ are the first spherical harmonic functions, i.e.
$$-\Delta \phi_i = 2 \phi_i(x), \quad \phi_i(x) = x_i$$
in the coordinates $x = (x_1, x_2, x_3)$ of $R^3$. Condition (5.5) gives rise to many examples of R(x) for which the equation (5.3) has no solution. In particular, for a monotone rotationally symmetric function R, the equation admits no solution. Later, Bourguignon and Ezin generalized this condition to
$$\int_S X(R) \, e^u dx = 0, \qquad (5.6)$$
where X is any conformal Killing vector field on the standard $S^2$.

Then for which kinds of functions R(x) can one solve (5.3)? This has been an interesting problem in geometry.

5.1.2 Variational Approach and Key Inequalities

To obtain the existence of a solution for equation (5.3), people usually use a variational approach: first prove the existence of a weak solution; then, by a regularity argument, one can show that the weak solution is smooth and hence is the classical solution.

To obtain a weak solution, we let $H^1(S) := H^{1,2}(S)$ be the Hilbert space with norm
$$\|u\|_1 = \left( \int_S (|\nabla u|^2 + u^2) \, dx \right)^{1/2}.$$
Let
$$H(S) = \left\{ u \in H^1(S) \mid \int_S u \, dx = 0, \ \int_S R(x) e^u dx > 0 \right\}.$$
Then by the Poincaré inequality, we can use
$$\|u\| = \left( \int_S |\nabla u|^2 dx \right)^{1/2} \qquad (5.7)$$
as an equivalent norm in H(S). We will still write $\|u\|$ for this quantity even for a general $u \in H^1(S)$. Consider the functional
$$J(u) = \frac{1}{2} \int_S |\nabla u|^2 dx - 8\pi \ln \int_S R(x) e^u dx$$
in H(S).

To estimate the value of the functional J, we introduce some useful inequalities.

Lemma 5.1.1 (Moser-Trudinger Inequality) Let S be a compact two dimensional Riemannian manifold. Then there exists a constant C, such that the inequality
$$\int_S e^{4\pi u^2} dA \leq C \qquad (5.8)$$
holds for all $u \in H^1(S)$ satisfying
$$\int_S |\nabla u|^2 dA \leq 1 \quad \text{and} \quad \int_S u \, dA = 0.$$

We skip the proof of this inequality. Interested readers may see the article of Moser [Mo1] or the article of Chen [C1]. From this inequality, one can derive immediately

Lemma 5.1.2 There exists a constant C, such that for all $u \in H^1(S)$ there holds
$$\int_S e^u dA \leq C \exp \left\{ \frac{1}{16\pi} \int_S |\nabla u|^2 dA + \frac{1}{4\pi} \int_S u \, dA \right\}. \qquad (5.9)$$

Proof.
i) For any $v \in H^1(S)$ with $\bar{v} = 0$, let
$$w = \frac{v}{\|v\|}.$$
Then $\|w\| = 1$ and $\bar{w} = 0$. Applying Lemma 5.1.1 to such w, we have
$$\int_S e^{4\pi v^2 / \|v\|^2} dA \leq C. \qquad (5.10)$$
From the obvious inequality
$$\left( \frac{2\sqrt{\pi}\, v}{\|v\|} - \frac{\|v\|}{4\sqrt{\pi}} \right)^2 \geq 0,$$
we deduce immediately
$$v \leq \frac{4\pi v^2}{\|v\|^2} + \frac{\|v\|^2}{16\pi}.$$
Together with (5.10), we have
$$\int_S e^v dA \leq C \exp \left\{ \frac{1}{16\pi} \|v\|^2 \right\}, \quad \forall v \in H^1(S) \ \text{with} \ \bar{v} = 0. \qquad (5.11)$$

ii) For any $u \in H^1(S)$, let $v = u - \bar{u}$. Then $\bar{v} = 0$, and we can apply (5.11) to v and obtain
$$\int_S e^{u - \bar{u}} dA \leq C \exp \left\{ \frac{1}{16\pi} \|u\|^2 \right\}.$$
Consequently,
$$\int_S e^u dA \leq C \exp \left\{ \frac{1}{16\pi} \|u\|^2 + \bar{u} \right\},$$
which is (5.9), since $\bar{u} = \frac{1}{4\pi} \int_S u \, dA$. This completes the proof of the Lemma.

From Lemma 5.1.2, we can conclude that the functional J(·) is bounded from below in H(S). In fact, for $u \in H(S)$ (so that $\bar{u} = 0$), the Lemma gives
$$\int_S R(x) e^u dA \leq \max R \int_S e^u dA \leq \max R \cdot C \exp \left\{ \frac{1}{16\pi} \|u\|^2 \right\}.$$
It follows that
$$J(u) \geq \frac{1}{2} \|u\|^2 - 8\pi \ln (C \max R) - \frac{1}{2} \|u\|^2 \geq -8\pi \ln (C \max R). \qquad (5.12)$$
Therefore, the functional J is bounded from below in H(S).

Now we can take a minimizing sequence $\{u_k\} \subset H(S)$:
$$J(u_k) \to \inf_{u \in H(S)} J(u).$$
However, as we mentioned before, a minimizing sequence may not be bounded; it may 'leak' to infinity. To prevent this from happening, we need some coerciveness of the functional. Although it is not true in general, we do have coerciveness for J in some special subsets of H(S), in particular, in the set of even functions–antipodal symmetric functions. More precisely, we have the following "Distribution of Mass Principle" from Chen and Li [CL7] [CL8].
Lemma 5.1.3 Let $\Omega_1$ and $\Omega_2$ be two subsets of S such that
$$dist(\Omega_1, \Omega_2) \geq \delta_o > 0,$$
and let $0 < \alpha_o \leq \frac{1}{2}$. Then for any $\epsilon > 0$, there exists a constant $C = C(\alpha_o, \delta_o, \epsilon)$, such that the inequality
$$\int_S e^u dA \leq C \exp \left\{ \left( \frac{1}{32\pi} + \epsilon \right) \|u\|^2 + \bar{u} \right\} \qquad (5.13)$$
holds for all $u \in H^1(S)$ satisfying
$$\frac{\int_{\Omega_1} e^u dA}{\int_S e^u dA} \geq \alpha_o \quad \text{and} \quad \frac{\int_{\Omega_2} e^u dA}{\int_S e^u dA} \geq \alpha_o. \qquad (5.14)$$

Intuitively, if we think of $e^{u(x)}$ as the density of a solid sphere S at the point x, then $\int_{\Omega_i} e^u dA$ is the mass of the region $\Omega_i$, and $\int_S e^u dA$ is the total mass of the sphere. The Lemma says that if the mass of the sphere is distributed over two parts of the sphere, more precisely, if the total mass does not concentrate at one point, then we have a stronger inequality than (5.9).

If we apply this stronger inequality to the family of even functions in H(S), we see that for all even functions u in H(S),
$$\int_S e^u dA \leq C \exp \left\{ \left( \frac{1}{32\pi} + \epsilon \right) \|u\|^2 + \bar{u} \right\}. \qquad (5.15)$$
This stronger inequality guarantees the boundedness of a minimizing sequence. In fact, for even functions we now have
$$J(u) \geq \frac{1}{2} \|u\|^2 - 8\pi \ln (C \max R) - 8\pi \left( \frac{1}{32\pi} + \epsilon \right) \|u\|^2 = \left( \frac{1}{4} - 8\pi \epsilon \right) \|u\|^2 - 8\pi \ln (C \max R).$$
Choosing $\epsilon$ small enough, we arrive at
$$J(u) \geq \frac{1}{8} \|u\|^2 - C_1.$$
Hence a minimizing sequence of even functions must be bounded, and therefore it converges weakly to a limit $u_o$ in $H^1(S)$. As we will show in the next section, the functional J is weakly lower semi-continuous, and therefore the $u_o$ so obtained is the minimum of J, i.e., a weak solution of equation (5.3). Then by a standard regularity argument, we will be able to conclude that $u_o$ is a classical solution.

In many articles, the authors use the "center of mass" analysis; more precisely, they consider the center of mass of the sphere with density $e^{u(x)}$ at the point x,
$$P(u) = \frac{\int_S x \, e^u dA}{\int_S e^u dA},$$
and use this to study the behavior of a minimizing sequence (see [CD], [CD1], [CY], and [CY1]). Both analyses are equivalent on the sphere $S^2$. However,
on a general two dimensional compact Riemannian surface, even a surface with conical singularities, this kind of analysis cannot be applied, since "the center of mass" P(u) lies in the ambient space $R^3$; while the "Distribution of Mass" analysis can be applied to any compact surface. Interested readers may see the authors' papers [CL7] [CL8] for a more general version of inequality (5.13).

Proof of Lemma 5.1.3. Let $g_1, g_2$ be two smooth functions such that
$$1 \geq g_i \geq 0, \quad g_i(x) \equiv 1 \ \text{for} \ x \in \Omega_i, \ i = 1, 2, \quad \text{and} \quad \text{supp}\, g_1 \cap \text{supp}\, g_2 = \emptyset.$$
It suffices to show that, for $u \in H^1(S)$ with $\bar{u} = 0$ satisfying (5.14),
$$\int_S e^u dA \leq C \exp \left\{ \left( \frac{1}{32\pi} + \epsilon \right) \|u\|^2 \right\}. \qquad (5.16)$$

Case i) If $\|g_1 u\| \leq \|g_2 u\|$, then by (5.9),
$$\int_S e^u dA \leq \frac{1}{\alpha_o} \int_{\Omega_1} e^u dA \leq \frac{1}{\alpha_o} \int_S e^{g_1 u} dA \leq \frac{C}{\alpha_o} \exp \left\{ \frac{1}{16\pi} \|g_1 u\|^2 + \overline{g_1 u} \right\} \leq \frac{C}{\alpha_o} \exp \left\{ \frac{1}{32\pi} \left( \|g_1 u\|^2 + \|g_2 u\|^2 \right) + \overline{g_1 u} \right\} \leq \frac{C}{\alpha_o} \exp \left\{ \frac{1}{32\pi} (1 + \epsilon_1) \|u\|^2 + c_1(\epsilon_1) \|u\|^2_{L^2} \right\} \qquad (5.17)$$
for some small $\epsilon_1 > 0$. Here we have used the fact that, since supp $g_1 \cap$ supp $g_2 = \emptyset$,
$$\|g_1 u\|^2 + \|g_2 u\|^2 = \int_S |\nabla(g_1 u) + \nabla(g_2 u)|^2 dA = \int_S |(\nabla g_1 + \nabla g_2) u + (g_1 + g_2) \nabla u|^2 dA \leq \|u\|^2 + C_1 \|u\|_{L^2} \|u\| + C_2 \|u\|^2_{L^2}.$$

In order to get rid of the term $\|u\|^2_{L^2}$ on the right hand side of the above inequality, we employ the condition $\int_S u \, dA = 0$. Given any $\eta > 0$, choose a, such that
$$meas \{ x \in S \mid u(x) \geq a \} = \eta.$$
Applying (5.17) to the function $(u-a)^+ \equiv \max \{ 0, u - a \}$, we have
$$\int_S e^u dA \leq e^a \int_S e^{u-a} dA \leq e^a \int_S e^{(u-a)^+} dA \leq C \exp \left\{ \frac{1}{32\pi} (1 + \epsilon_1) \|u\|^2 + c_1(\epsilon_1) \|(u-a)^+\|^2_{L^2} + a \right\}. \qquad (5.18)$$
Here $C = C(\delta, \epsilon, \alpha_o)$. By the Hölder and Sobolev inequalities (see Theorem 4.6.3),
$$\|(u-a)^+\|^2_{L^2} \leq \eta^{1/2} \|(u-a)^+\|^2_{L^4} \leq c \eta^{1/2} \left( \|u\|^2 + \|(u-a)^+\|^2_{L^2} \right). \qquad (5.19)$$
Choose $\eta$ so small that $c \eta^{1/2} \leq \frac{1}{2}$; then the above inequality implies
$$\|(u-a)^+\|^2_{L^2} \leq 2 c \eta^{1/2} \|u\|^2. \qquad (5.20)$$
Now by the Poincaré inequality (see Theorem 4.6.6),
$$a \cdot \eta \leq \int_{u \geq a} u \, dA \leq \int_S |u| \, dA \leq c \|u\|,$$
hence for any $\delta > 0$,
$$a \leq \delta \|u\|^2 + \frac{c_2}{4 \delta \eta^2}. \qquad (5.21)$$
Now (5.16) follows from (5.18), (5.20) and (5.21).

Case ii) If $\|g_2 u\| \leq \|g_1 u\|$, then by a similar argument as in case i), we obtain (5.17) and then (5.16). This completes the proof of the Lemma.

5.1.3 Existence of Weak Solutions

Theorem 5.1.1 (Moser) If the function R is even, that is, if
$$R(x) = R(-x), \quad \forall x \in S^2,$$
then the necessary and sufficient condition for equation (5.3) to have a solution is that R(x) be positive somewhere.

To prove the existence of a weak solution, we consider, as in the previous section, the functional
$$J(u) = \frac{1}{2} \int_S |\nabla u|^2 dx - 8\pi \ln \int_S R(x) e^u dx$$
in
$$H_e(S) = \{ u \in H(S) \mid u \ \text{is even almost everywhere} \}.$$
Let $\{u_k\}$ be a minimizing sequence of J in $H_e(S)$. Then by (5.15), $\{u_k\}$ is bounded in $H^1(S)$, and hence possesses a subsequence (still denoted by $\{u_k\}$) that converges weakly to some element $u_o$ in $H^1(S)$. Then by the compact Sobolev embedding of $H^1$ into $L^2$,
$$u_k \to u_o \ \text{in} \ L^2(S). \qquad (5.22)$$
Consequently, $u_o(-x) = u_o(x)$ almost everywhere, where x and −x are two antipodal points, and $\int_S u_o(x) \, dA = 0$;
that is, $u_o \in H_e(S)$.

To show that the functional J is weakly lower semi-continuous, i.e.
$$J(u_o) \leq \underline{\lim}\, J(u_k) = \inf_{u \in H_e(S)} J(u),$$
we need the following

Lemma 5.1.4 If $\{u_k\}$ converges weakly to u in $H^1(S)$, then there exists a subsequence (still denoted by $\{u_k\}$), such that
$$\int_S e^{u_k(x)} dA \to \int_S e^{u(x)} dA, \quad \text{as} \ k \to \infty.$$

We postpone the proof of this lemma for a moment. From this lemma, we obviously have
$$\int_S R(x) e^{u_k} dA \to \int_S R(x) e^{u_o} dA.$$
On the other hand, as we showed in the previous chapter, the Dirichlet integral is weakly lower semi-continuous:
$$\underline{\lim}_{k \to \infty} \int_S |\nabla u_k|^2 dA \geq \int_S |\nabla u_o|^2 dA.$$
It follows that J(·) is weakly lower semi-continuous, and hence $J(u_o) \leq \inf_{u \in H_e(S)} J(u)$. Since $u_o \in H_e(S)$, we also have $J(u_o) \geq \inf_{u \in H_e(S)} J(u)$. Therefore, $u_o$ is a minimum of J in $H_e(S)$, and for any $v \in H_e(S)$,
$$0 = < J'(u_o), v > = \int_S < \nabla u_o, \nabla v > dA - \frac{8\pi}{\int_S R(x) e^{u_o} dA} \int_S R(x) e^{u_o} v \, dA.$$
Furthermore, for any $w \in H(S)$, let
$$v(x) = \frac{1}{2} (w(x) + w(-x)) := \frac{1}{2} (w(x) + w^-(x)).$$
Then obviously $v \in H_e(S)$, and it follows that
$$0 = < J'(u_o), v > = \frac{1}{2} \left( < J'(u_o), w > + < J'(u_o), w^- > \right) = < J'(u_o), w >.$$
Here we have used the fact that $u_o(-x) = u_o(x)$. This implies that $u_o$ is a critical point of J in $H^1(S)$, and hence a constant multiple of $u_o$ is a weak solution of equation (5.3). Now what is left is to prove Lemma 5.1.4.

The Proof of Lemma 5.1.4. Recall that $\int_S u_k \, dA = 0$. From Lemma 5.1.2, for any real number p > 0, we have
$$\int_S e^{pu} dA \leq C \exp \left\{ \frac{p^2}{16\pi} \int_S |\nabla u|^2 dA + \frac{p}{4\pi} \int_S u \, dA \right\}. \qquad (5.23)$$
And by the Hölder inequality, for any 0 < p < 2,
$$\int_S |\nabla (e^u)|^p dA = \int_S e^{pu} |\nabla u|^p dA \leq \left( \int_S e^{\frac{2p}{2-p} u} dA \right)^{\frac{2-p}{2}} \left( \int_S |\nabla u|^2 dA \right)^{\frac{p}{2}}. \qquad (5.24)$$
Inequalities (5.23) and (5.24) imply that $\{e^{u_k}\}$ is a bounded sequence in $H^{1,p}$ for any 1 < p < 2. By the compact Sobolev embedding of $H^{1,p}$ into $L^1$, there exists a subsequence of $\{e^{u_k}\}$ which converges strongly to $e^{u_o}$ in $L^1(S)$. This completes the proof of the Lemma.

Chen and Ding [CD] generalized Moser's result to a broader class of symmetric functions, as we will describe in the following. Let O(3) be the group of orthogonal transformations of $R^3$, and let G be a closed finite subgroup of O(3). Assume that R(x) is G-symmetric, or G-invariant, i.e.
$$R(gx) = R(x), \quad \forall g \in G. \qquad (5.25)$$
The action of G on $S^2$ induces an action of G on the Hilbert space $H^1(S)$:
$$g[u](x) = u(gx),$$
under which the fixed point subspace of $H^1(S)$ is given by
$$X = \{ u \in H^1(S) \mid u(gx) = u(x), \ \forall g \in G \}.$$
Let
$$F_G = \{ x \in S^2 \mid gx = x, \ \forall g \in G \}$$
be the set of fixed points under the action of G. Recall that
$$H(S) = \left\{ u \in H^1(S) \mid \int_S u \, dA = 0, \ \int_S R(x) e^u dA > 0 \right\}.$$
Let $X^* = X \cap H(S)$. Under the assumption (5.25), the functional
$$J(u) = \frac{1}{2} \int_S |\nabla u|^2 dx - 8\pi \ln \int_S R(x) e^u dx$$
is G-invariant in $X^*$, i.e.
$$J(g[u]) = J(u), \quad \forall g \in G, \ \forall u \in X^*.$$
We seek a minimum of J in $X^*$. The following lemma guarantees that such a minimum is the critical point we desire.

Lemma 5.1.5 Assume that $u_o$ is a critical point of J in $X^*$. Then it is also a critical point of J in H(S).

Theorem 5.1.2 (Chen and Ding [CD]) Let $m = \max_{F_G} R$, and suppose that $R \in C^\alpha(S^2)$ satisfies (5.25) with $\max_S R > 0$. Then equation (5.3) has a solution provided one of the following conditions holds:
(i) $F_G = \emptyset$;
(ii) $m \leq 0$; or
(iii) m > 0 and $c^* = \inf_{X^*} J < -8\pi \ln 4\pi m$.

Remark 5.1.1 In the case when the function R(x) is even, the group $G = \{id, -id\}$, where id is the identity transform: id(x) = x and $-id(x) = -x$, $\forall x \in S^2$. Obviously, in this case $F_G$ is empty. Hence the above Theorem contains Moser's existence result as a special case.

Proof of Lemma 5.1.5. Obviously, for any $g \in G$ and any $u, v \in H(S)$, we have
$$< J'(u), g[v] > = < J'(g^{-1}[u]), v >,$$
where $g^{-1}$ is the inverse of g. This can be easily seen from
$$< J'(u), v > = \int_S \nabla u \cdot \nabla v \, dA - \frac{8\pi}{\int_S R e^u dA} \int_S R e^u v \, dA. \qquad (5.26)$$
Suppose $G = \{g_1, \cdots, g_m\}$. For any $w \in H(S)$, let
$$v = \frac{1}{m} (g_1[w] + \cdots + g_m[w]).$$
Then for any $g \in G$,
$$g[v] = \frac{1}{m} (g g_1[w] + \cdots + g g_m[w]) = v.$$
Hence $v \in X^*$, and it follows that
$$0 = < J'(u_o), v > = \frac{1}{m} \sum_{i=1}^{m} < J'(u_o), g_i[w] > = \frac{1}{m} \sum_{i=1}^{m} < J'(g_i^{-1}[u_o]), w > = < J'(u_o), w >,$$
where in the last step we have used $g_i^{-1}[u_o] = u_o$, since $u_o \in X$. Therefore, $u_o$ is a critical point of J in H(S).
To prove this theorem, we need another two lemmas.

Lemma 5.1.6 (Onofri [On]) For any $u \in H^1(S)$ with $\int_S u \, dA = 0$, there holds
$$\int_S e^u dA \leq 4\pi \exp \left\{ \frac{1}{16\pi} \int_S |\nabla u|^2 dA \right\}. \qquad (5.27)$$

Lemma 5.1.7 Suppose that $\{u_i\}$ is a sequence in H(S) with $J(u_i) \leq \beta$. If $\|u_i\| \to \infty$, then there exists a subsequence (still denoted by $\{u_i\}$) and a point $\xi \in S^2$ with $R(\xi) > 0$, such that
$$\int_S R(x) e^{u_i} dA = (R(\xi) + o(1)) \int_S e^{u_i} dA \qquad (5.28)$$
and
$$\lim_{i \to \infty} J(u_i) \geq -8\pi \ln 4\pi R(\xi). \qquad (5.29)$$

Proof. We first show that there exists a subsequence of $\{u_i\}$ and a point $\xi \in S^2$, such that for any $\epsilon > 0$,
$$\int_{S \setminus B_\epsilon(\xi)} e^{u_i} dA \to 0. \qquad (5.30)$$
Roughly speaking, if a minimizing sequence of J is unbounded, then, passing to a subsequence, the mass of the surface S with density $e^{u_i(x)}$ at the point x will be concentrating at one point $\xi$ on S. Otherwise, there exist two subsets $\Omega_1$ and $\Omega_2$ of $S^2$, with $dist(\Omega_1, \Omega_2) \geq \delta_o > 0$, such that $\{u_i\}$ satisfies all the conditions in Lemma 5.1.3. Hence the inequality
$$\int_S e^{u_i} dA \leq C \exp \left\{ \left( \frac{1}{16\pi} - \delta \right) \|u_i\|^2 \right\}$$
holds for some $\delta > 0$. Then by the previous argument, we conclude that $\{u_i\}$ is bounded. This contradicts our assumption, and hence verifies (5.30).

Now (5.30) and the continuity assumption on R(x) imply (5.28). To verify (5.29), we use (5.28) and Onofri's Lemma:
$$J(u_i) = \frac{1}{2} \int_S |\nabla u_i|^2 dA - 8\pi \ln (R(\xi) + o(1)) - 8\pi \ln \int_S e^{u_i} dA \geq \frac{1}{2} \int_S |\nabla u_i|^2 dA - 8\pi \ln (R(\xi) + o(1)) - 8\pi \left( \ln 4\pi + \frac{1}{16\pi} \int_S |\nabla u_i|^2 dA \right) = -8\pi \ln (4\pi R(\xi) + o(1)).$$
Let $i \to \infty$; we arrive at (5.29). Since $J(u_i) \leq \beta$, the quantity $-8\pi \ln(4\pi R(\xi) + o(1))$ is bounded above, and hence we must have $R(\xi) > 0$. This completes the proof of the Lemma.

Proof of the Theorem. Define
$$c^* = \inf_{X^*} J.$$
As we argued in the previous section (see (5.12)), we have $c^* > -\infty$. Let $\{u_i\} \subset X^*$ be a minimizing sequence, $J(u_i) \to c^*$. Similar to the reasoning at the beginning of this section, one can see that we only need to show that $\{u_i\}$ is bounded in $H^1(S)$.

Assume on the contrary that $\|u_i\| \to \infty$. Then by Lemma 5.1.7, there is a point $\xi \in S^2$ with $R(\xi) > 0$ and
$$c^* \geq -8\pi \ln 4\pi R(\xi), \qquad (5.31)$$
such that for any $\epsilon > 0$, we have
$$\int_{S \setminus B_\epsilon(\xi)} e^{u_i} dA \to 0, \quad \text{as} \ i \to \infty.$$
Since each $u_i$ is invariant under the action of G, we must also have, for every $g \in G$,
$$\int_{S \setminus B_\epsilon(g\xi)} e^{u_i} dA \to 0. \qquad (5.32)$$
Because $\epsilon$ is arbitrary, we deduce that $g\xi = \xi$, that is, $\xi \in F_G$.

Now under condition (i), $F_G$ is empty, a contradiction; therefore the minimizing sequence $\{u_i\}$ must be bounded. Under conditions (ii) and (iii), noticing that $m \geq R(\xi) > 0$ by the definition of m, we see from (5.31) that $c^* \geq -8\pi \ln 4\pi m$; this again contradicts conditions (ii) and (iii). Therefore, under either one of the conditions (i), (ii), or (iii), $\{u_i\}$ must be bounded, and hence possesses a subsequence converging weakly to some $u_o \in X^*$. Thus a constant multiple of $u_o$ is the desired weak solution of equation (5.3).

However, one may wonder under what conditions on R we have condition (iii). The following theorem provides a sufficient condition.

Theorem 5.1.3 (Chen and Ding [CD]) Suppose that $R \in C^2(S^2)$ and $m = \max_{F_G} R > 0$. Let $D = \{ x \in F_G \mid R(x) = m \}$. If $\Delta R(x_o) > 0$ for some $x_o \in D$, then
$$c^* < -8\pi \ln 4\pi m, \qquad (5.33)$$
and therefore (5.3) possesses a weak solution.
Proof. Choose a spherical polar coordinate system on $S := S^2$:
$$(\theta, \phi), \quad -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}, \ 0 \leq \phi \leq 2\pi,$$
with $x_o = (\frac{\pi}{2}, \phi)$ as the north pole. Consider the following family of functions:
$$u_\lambda(\theta, \phi) = \ln \frac{1 - \lambda^2}{(1 - \lambda \sin\theta)^2}, \quad v_\lambda = u_\lambda - \frac{1}{4\pi} \int_S u_\lambda \, dA, \quad 0 < \lambda < 1.$$
One can verify that this family of functions satisfies the equation
$$-\Delta u_\lambda + 2 = 2 e^{u_\lambda}. \qquad (5.34)$$
As $\lambda \to 1$, the family $\{u_\lambda\}$ 'blows up' (tends to $\infty$) at the one point $x_o$, while approaching $-\infty$ everywhere else on $S^2$. Therefore, the mass of the sphere with density $e^{u_\lambda(x)}$ is concentrating at the point $x_o$ as $\lambda \to 1$.

It is easy to see that $v_\lambda \in H(S)$ for $\lambda$ close to 1. Moreover, we claim that $v_\lambda \in X$, i.e.
$$v_\lambda(gx) = v_\lambda(x), \quad \forall g \in G.$$
In fact, since g is an isometry of $S^2$ and $g x_o = x_o$, we have
$$d(gx, x_o) = d(gx, g x_o) = d(x, x_o),$$
where $d(\cdot, \cdot)$ is the geodesic distance on $S^2$. But $v_\lambda$ depends only on $\theta$, hence only on $d(x, x_o)$, so $v_\lambda(gx) = v_\lambda(x)$, $\forall g \in G$. It follows that $v_\lambda \in X^*$ for $\lambda$ sufficiently close to 1.

By a direct computation, one can verify that
$$\frac{1}{2} \int_S |\nabla u_\lambda|^2 dA + 8\pi \bar{u}_\lambda = 0.$$
Consequently,
$$J(v_\lambda) = -8\pi \ln \int_S R(x) e^{u_\lambda} dA.$$
Notice that $c^* \leq J(v_\lambda)$ by the definition of $c^*$. Thus, in order to show (5.33), we only need to verify that, for $\lambda$ sufficiently close to 1,
$$\int_S R(x) e^{u_\lambda} dA > 4\pi m. \qquad (5.35)$$
Let $\delta > 0$ be small; we have
$$\int_S R e^{u_\lambda} dA = (1 - \lambda^2) \left\{ \int_0^{2\pi} \int_{\frac{\pi}{2} - \delta}^{\frac{\pi}{2}} [R(\theta,\phi) - m] \frac{\cos\theta}{(1 - \lambda \sin\theta)^2} \, d\theta d\phi + \int_0^{2\pi} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2} - \delta} [R(\theta,\phi) - m] \frac{\cos\theta}{(1 - \lambda \sin\theta)^2} \, d\theta d\phi \right\} + 4\pi m = (1 - \lambda^2) \{ I + II \} + 4\pi m.$$
Here we have used the fact that
$$\int_S e^{u_\lambda} dA = 4\pi,$$
which can be derived directly from equation (5.34). For each fixed $\delta$, it is easy to check that the integral II remains bounded as $\lambda$ tends to 1. Thus (5.35) will hold if we can show that the integral $I \to +\infty$ as $\lambda \to 1$.

Notice that $x_o$ is a maximum point of the restriction of R on $F_G$. Hence by the principle of symmetric criticality, $x_o$ is a critical point of R, that is, $\nabla R(x_o) = 0$. Using this fact and the second order Taylor expansion of R(x) near $x_o$, we can derive
$$I = \pi \int_{\frac{\pi}{2} - \delta}^{\frac{\pi}{2}} \left[ \Delta R(x_o) \cos^2\theta + o\left( \left( \theta - \frac{\pi}{2} \right)^2 \right) \right] \frac{\cos\theta}{(1 - \lambda \sin\theta)^2} \, d\theta \geq c_o \int_{\frac{\pi}{2} - \delta}^{\frac{\pi}{2}} \frac{\cos^3\theta}{(1 - \lambda \sin\theta)^2} \, d\theta.$$
Here $c_o$ is a positive constant; we have chosen $\delta$ to be sufficiently small and have used the fact that $\Delta R(x_o)$ is positive. From the above inequality, one can easily verify that $I \to +\infty$ as $\lambda \to 1$, since the integral
$$\int_{\frac{\pi}{2} - \delta}^{\frac{\pi}{2}} \frac{\cos^3\theta}{(1 - \sin\theta)^2} \, d\theta$$
diverges to $+\infty$. This completes the proof of the Theorem.
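The two facts used in this proof, that $u_\lambda$ solves (5.34) and that $\int_S e^{u_\lambda} dA = 4\pi$ for every $\lambda$, can both be verified numerically. In the sketch below, $\theta$ is the latitude, the Laplacian of a function of $\theta$ alone on $S^2$ is taken to be $\frac{1}{\cos\theta} \frac{d}{d\theta}(\cos\theta \, u')$, and the value of $\lambda$, the step sizes, and the quadrature grid are arbitrary choices made for illustration.

```python
import math

lam = 0.9

def u(theta):
    return math.log((1 - lam**2) / (1 - lam * math.sin(theta))**2)

def sphere_laplacian(f, theta, h=1e-4):
    # for functions of the latitude only: Delta f = (1/cos t) d/dt (cos t f'(t))
    def flux(t):
        return math.cos(t) * (f(t + h) - f(t - h)) / (2 * h)
    return (flux(theta + h) - flux(theta - h)) / (2 * h) / math.cos(theta)

# pointwise check of  -Delta u_lambda + 2 = 2 e^{u_lambda}
for t in (-1.0, 0.0, 0.8):
    print(-sphere_laplacian(u, t) + 2.0, 2.0 * math.exp(u(t)))

# total mass: midpoint quadrature of e^{u_lambda} over S^2, should be 4*pi
N = 100000
h = math.pi / N
mass = 2 * math.pi * sum(
    math.exp(u(-math.pi / 2 + (i + 0.5) * h)) * math.cos(-math.pi / 2 + (i + 0.5) * h) * h
    for i in range(N))
print(mass, 4 * math.pi)
```

Repeating the run with other values of `lam` in (0, 1) shows the mass staying at $4\pi$ while the density concentrates at the north pole.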
5.2 Prescribing Gaussian Curvature on Negatively Curved Manifolds
Let M be a compact two dimensional Riemannian manifold, and let $g_o(x)$ be a metric on M with the corresponding Laplace-Beltrami operator $\Delta$ and Gaussian curvature $K_o(x)$. Given a function K(x) on M, can it be realized as the Gaussian curvature associated to the pointwise conformal metric $g(x) = e^{2u(x)} g_o(x)$? As we mentioned before, answering this question is equivalent to solving the following semilinear elliptic equation:
$$- \Delta u + K_o(x) = K(x) e^{2u}, \quad x \in M. \qquad (5.36)$$
To simplify this equation, we introduce the following simple existence result.
Lemma 5.2.1 Let f(x) ∈ L²(M). Then the equation

−Δu = f(x), x ∈ M   (5.37)

possesses a solution if and only if

∫_M f(x) dA = 0.   (5.38)
Proof. The 'only if' part can be seen by integrating both sides of equation (5.37) on M. We now prove the 'if' part: assuming that (5.38) holds, we will obtain the existence of a solution to (5.37). To this end, we consider the functional

J(u) = (1/2) ∫_M |∇u|² dA − ∫_M f(x) u dA

under the constraint

G = {u ∈ H¹(M) | ∫_M u(x) dA = 0}.

As we have seen before, we can use, in G, the equivalent norm

‖u‖ = ( ∫_M |∇u|² dA )^{1/2}.

By the Hölder and Poincaré inequalities, we derive

J(u) ≥ (1/2)‖u‖² − ‖f‖_{L²} ‖u‖_{L²}
     ≥ (1/2)‖u‖² − co ‖f‖_{L²} ‖u‖
     = (1/2)(‖u‖ − co ‖f‖_{L²})² − co² ‖f‖²_{L²} / 2.   (5.39)
The above inequality implies immediately that the functional J(u) is bounded from below on the set G. Let {uk} be a minimizing sequence of J in G, i.e.

J(uk) → inf_G J, as k → ∞.

Then, again by (5.39), {uk} is bounded in H¹, and hence possesses a subsequence (still denoted by {uk}) that converges weakly to some uo in H¹(M). From the compact Sobolev imbedding H¹ ↪↪ L²,
we see that uk → uo strongly in L², and hence also in L¹. Therefore

∫_M uo(x) dA = lim ∫_M uk(x) dA = 0,   (5.40)

and

∫_M f(x) uo(x) dA = lim ∫_M f(x) uk(x) dA.   (5.41)

(5.40) implies that uo ∈ G, and hence

J(uo) ≥ inf_G J(u).   (5.42)

On the other hand, (5.41) and the weak lower semi-continuity of the Dirichlet integral,

lim inf ∫_M |∇uk|² dA ≥ ∫_M |∇uo|² dA,

lead to

J(uo) ≤ inf_G J(u).

From this and (5.42) we conclude that uo is a minimum of J in the set G, and therefore there exists a constant λ such that uo is a weak solution of

−Δuo − f(x) = λ.

Integrating both sides and using (5.38), we find that λ = 0. Therefore uo is a solution of equation (5.37). This completes the proof of the Lemma.

We now simplify equation (5.36). First let ũ = 2u; then

−Δũ + 2Ko(x) = 2K(x) e^{ũ}.   (5.43)
Let v(x) be a solution of

−Δv = 2Ko(x) − 2K̄o,   (5.44)

where

K̄o = (1/|M|) ∫_M Ko(x) dA

is the average of Ko(x) on M. Because the integral of the right hand side of (5.44) on M is 0, a solution of (5.44) exists due to Lemma 5.2.1. Let w = ũ + v. Then it is easy to verify that

−Δw + 2K̄o = 2K(x) e^{−v} e^{w}.   (5.45)

Finally, letting
α = 2K̄o and R(x) = 2K(x) e^{−v(x)},

and rewriting w as u, we reduce equation (5.36) to

−Δu + α = R(x) e^{u(x)}, x ∈ M.   (5.46)
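The reduction above leans on Lemma 5.2.1: on a compact manifold, −Δu = f is solvable exactly when f has zero mean. The 'only if' half can be sanity-checked in the simplest compact setting, the circle, used here as a toy stand-in for M (the particular u below is an arbitrary choice of ours):

```python
import math

# Take a smooth periodic u on the circle, set f = -u'', and check that the
# integral of f over one period vanishes: the necessary ("only if") part of
# the solvability condition in Lemma 5.2.1.
u = lambda t: math.sin(t) + 0.5 * math.cos(2 * t)
f = lambda t: math.sin(t) + 2.0 * math.cos(2 * t)   # f = -u'' computed by hand

n = 100000
h = 2 * math.pi / n
integral_f = sum(f(i * h) for i in range(n)) * h    # Riemann sum on a periodic grid
assert abs(integral_f) < 1e-10

# Conversely a constant source c != 0 has nonzero mean, and integrating
# -u'' = c over the circle gives 0 = 2*pi*c: no periodic solution exists.
```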
By the well-known Gauss–Bonnet Theorem, if g is a Riemannian metric on M with corresponding Gaussian curvature Kg(x), then

∫_M Kg(x) dAg = 2π χ(M),

where χ(M) is the Euler characteristic of M, and dAg is the area element associated with the metric g. We say that M is negatively curved if χ(M) < 0, and hence α < 0.

5.2.1 Kazdan and Warner's Results. Method of Lower and Upper Solutions

Let M be a compact two dimensional manifold and let α < 0. In [KW], Kazdan and Warner considered the equation

−Δu + α = R(x) e^{u(x)}, x ∈ M,   (5.47)

and proved

Theorem 5.2.1 (Kazdan–Warner) Let R̄ be the average of R(x) on M. Then
i) A necessary condition for (5.46) to have a solution is R̄ < 0.
ii) If R̄ < 0, then there is a constant αo, with −∞ ≤ αo < 0, such that one can solve (5.46) if α > αo, but cannot solve (5.46) if α < αo.
iii) αo = −∞ if and only if R(x) ≤ 0.

The existence of a solution was established by using the method of upper and lower solutions. We say that u+ is an upper solution of equation (5.47) if

−Δu+ + α ≥ R(x) e^{u+}, x ∈ M.

Similarly, we say that u− is a lower solution of (5.47) if

−Δu− + α ≤ R(x) e^{u−}, x ∈ M.

The following approach is adapted from [KW] with minor modifications.

Lemma 5.2.2 If a solution of (5.47) exists, then R̄ < 0. Even more strongly, the unique solution φ of

Δφ + αφ = R(x), x ∈ M   (5.48)

must be positive.
Proof. We first prove the second part. Using the substitution v = e^{−u}, one sees that (5.47) has a solution u if and only if there is a positive solution v of

Δv + αv − R(x) − |∇v|²/v = 0.   (5.49)

Assume that v is such a positive solution. Integrating both sides of (5.49), we see immediately that R̄ < 0. Let φ be the unique solution of (5.48) and let w = φ − v. Then

Δw + αw = −|∇v|²/v ≤ 0.   (5.50)

It follows that w ≥ 0. Otherwise, there is a point xo ∈ M such that w(xo) < 0 and xo is a minimum of w, hence Δw(xo) ≥ 0. Then, since α < 0, we must have Δw(xo) + αw(xo) > 0, a contradiction with (5.50). Therefore w ≥ 0 and φ ≥ v > 0. This shows that a necessary condition for (5.47) to have a solution is that the unique solution φ of (5.48) be positive. This completes the proof of the Lemma.

Lemma 5.2.3 (The Existence of Upper Solutions)
i) If R(x) ≤ 0 but not identically 0, then for every α < 0 there exists an upper solution u+ of (5.47).
ii) If R̄ < 0, then there exists a small δ > 0, such that for −δ ≤ α < 0, there exists an upper solution u+ of (5.47).

Proof. Let v be a solution of

−Δv = R(x) − R̄.

We try to choose constants a and b so that u+ = av + b is an upper solution. We have

−Δu+ + α − R(x)e^{u+} = aR(x) − aR̄ + α − R(x)e^{av+b}.   (5.51)

We regroup the right hand side of (5.51) as

f(a, b, x) ≡ (−aR̄ + α) − R(x)(e^{av+b} − a).   (5.52)

In Case i), when R(x) ≤ 0, we first choose a sufficiently large, so that −aR̄ + α > 0. Then we choose b large, so that

e^{av(x)+b} − a ≥ 0, ∀ x ∈ M.

This is possible because v(x) is continuous and M is compact. Then the right hand side of (5.52) is nonnegative, and it follows from (5.51) and (5.52) that
−Δu+ + α − R(x)e^{u+} ≥ 0,

so that u+ = av + b is an upper solution of (5.47).

In Case ii), we choose e^b = a, where a is a positive number to be determined later. Then (5.52) becomes

f(a, b, x) = (−aR̄ + α) − aR(x)(e^{av} − 1)
           ≥ −aR̄ + α − a ‖R‖_{L∞} |e^{av} − 1|.

By the continuity of v and the compactness of M, we see that e^{av(x)} → 1 uniformly on M as a → 0. Thus, by choosing a sufficiently small, we can make

|e^{av(x)} − 1| ≤ −R̄ / (2‖R‖_{L∞}), for all x ∈ M.

Consequently,

f(a, b, x) ≥ −aR̄ + α + aR̄/2 = −aR̄/2 + α.

Thus u+ so constructed is an upper solution of (5.47) for α ≥ aR̄/2. This completes the proof of the Lemma.

Lemma 5.2.4 (Existence of Lower Solutions) Given any R ∈ L^p(M) and a function u+ ∈ W^{2,p}(M), there is a lower solution u− ∈ W^{2,p}(M) of (5.47) such that u− < u+.

Proof. If R(x) is bounded from below, then one can clearly use any sufficiently large negative constant as u−. For general R(x) ∈ L^p(M), let

K(x) = max{1, −R(x)}.

Choose a positive constant a so that aK̄ = −α. Then the average of (aK(x) + α) vanishes and (aK(x) + α) ∈ L^p(M). Thus there is a solution w of

Δw = aK(x) + α.

By the L^p regularity theory, w ∈ W^{2,p}(M) and hence is continuous. Let u− = w − λ, where λ is a positive number to be determined later. Then

−Δu− + α − R(x)e^{u−} = −Δw + α − R(x)e^{w−λ} = −aK(x) − R(x)e^{w−λ} ≤ −K(x)(a − e^{w−λ}).   (5.53)
Choosing λ sufficiently large so that a − e^{w−λ} ≥ 0, and noticing that K(x) ≥ 0, we arrive at

−Δu− + α − R(x)e^{u−} ≤ 0.

Therefore u− is a lower solution. Moreover, one can make λ large enough so that u− = w − λ < u+. This completes the proof of the Lemma.

Lemma 5.2.5 Let p > 2. If there exist upper and lower solutions u+, u− ∈ W^{2,p}(M) of (5.47) with u− ≤ u+, then there is a solution u ∈ W^{2,p}(M) of (5.47) with u− ≤ u ≤ u+. Moreover, u is C^∞ in any open set where R(x) is C^∞.

Proof. The Laplace operator in the original equation does not obey the maximum principle; however, if we let

Lu = −Δu + k(x)u,

where k(x) is any positive function on M, then the new operator L obeys the maximum principle: if Lu ≥ 0, then u(x) ≥ 0 for all x ∈ M. To see this, we suppose on the contrary that there is some point on M where u < 0. Let xo be a negative minimum of u; then −Δu(xo) ≤ 0, and it follows that

−Δu(xo) + k(xo)u(xo) ≤ k(xo)u(xo) < 0,

a contradiction. Hence the maximum principle holds for L.

We now rewrite equation (5.47) as

Lu ≡ −Δu + k(x)u = R(x)e^u − α + k(x)u := f(x, u).

If there exist upper and lower solutions, then we start from the upper solution u+ and apply the iterations

Lu1 = f(x, u+),  Lu_{i+1} = f(x, u_i),  i = 1, 2, ···.

In order to keep u− ≤ u_i ≤ u+ for each i, we need the operator L to obey the maximum principle and f(x, ·) to possess some monotonicity; more precisely, it requires that f(x, ·) be monotone increasing, that is,

∂f/∂u = R(x)e^u + k(x) ≥ 0.

To this end, we choose

k(x) = k1(x) e^{u+(x)}, where k1(x) = max{1, −R(x)}.
Based on this choice of L and f, we show by induction that the iteration sequence so constructed is monotone decreasing:

u_{i+1} ≤ u_i ≤ ··· ≤ u1 ≤ u+.   (5.54)

In fact, from Lu+ ≥ f(x, u+) and Lu1 = f(x, u+), we derive immediately

L(u+ − u1) ≥ 0.

Then the maximum principle for L implies u+ ≥ u1. This completes the first step of our induction. Assume that, for some integer m, u_{m+1} ≤ u_m ≤ u+. We will show that u_{m+2} ≤ u_{m+1}. Since f(x, ·) is monotone increasing in u for u ≤ u+, we have

f(x, u_{m+1}) ≤ f(x, u_m),

and hence L(u_{m+1} − u_{m+2}) ≥ 0. Therefore u_{m+1} ≥ u_{m+2}. This verifies (5.54) through induction. Similarly, one can show the other side of the inequality and arrive at

u− ≤ u_{i+1} ≤ u_i ≤ ··· ≤ u+.   (5.55)

Since u+, u−, and the u_i are continuous, inequality (5.55) shows that {u_i} is uniformly bounded. Consequently, in the L^p norm,

‖Lu_{i+1}‖_{L^p} = ‖R(x)e^{u_i} − α + k(x)u_i‖_{L^p} ≤ C.

Hence {u_i} is bounded in W^{2,p}(M). By the compact imbedding of W^{2,p} into C¹, there is a subsequence that converges uniformly to some function u in C¹(M). In view of the monotonicity (5.54), we conclude that the entire sequence {u_i} itself converges uniformly to u. Moreover,

‖u_{i+1} − u_{j+1}‖_{W^{2,p}} ≤ C ‖L(u_{i+1} − u_{j+1})‖_{L^p} ≤ C ( ‖R‖_{L^p} ‖e^{u_i} − e^{u_j}‖_{L∞} + ‖k‖_{L^p} ‖u_i − u_j‖_{L∞} ).

Therefore {u_i} converges strongly in W^{2,p}, and so u ∈ W^{2,p}. Since L : W^{2,p} → L^p is continuous, it follows that u is a solution of (5.47) and satisfies u− ≤ u ≤ u+. By the Schauder theory, one proves inductively that u is C^∞ in any open set where R(x) is C^∞; more precisely, if R ∈ C^{m+γ}, then u ∈ C^{m+2+γ}. This completes the proof of the Lemma.

Based on these Lemmas, we are now able to complete the proof of Theorem 5.2.1.

Proof of Theorem 5.2.1. Part i) can obviously be derived from Lemma 5.2.2.
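The monotone iteration of Lemma 5.2.5 can be watched in the simplest possible setting: take everything constant on M, with R ≡ −1 and α = −1, so that (5.47) reduces to the scalar equation −1 = −e^u with solution u = 0, and L reduces to multiplication by k. This toy model is our own choice of parameters, not from [KW]; it shows the decreasing sequence produced by Lu_{i+1} = f(x, u_i) starting from an upper solution:

```python
import math

# Constant-coefficient toy case of the monotone iteration: R = -1, alpha = -1.
# Equation: alpha = R*e^u  =>  -1 = -e^u  =>  the solution is u = 0.
# L u = k*u with k = 3 (so that df/du = R*e^u + k >= 0 on the relevant range),
# f(u) = R*e^u - alpha + k*u, and the scheme u_{i+1} = f(u_i)/k.
R, alpha, k = -1.0, -1.0, 3.0
f = lambda u: R * math.exp(u) - alpha + k * u

u = 1.0                       # upper solution: alpha - R*e^u = -1 + e > 0
seq = [u]
for _ in range(60):
    u = f(u) / k
    seq.append(u)

assert all(a >= b for a, b in zip(seq, seq[1:]))    # monotone decreasing, as in (5.54)
assert abs(seq[-1]) < 1e-6                          # converges to the true solution u = 0
```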
Define

αo = inf{α | (5.47) is solvable}.

From Lemmas 5.2.3, 5.2.4, and 5.2.5, there exists a δ > 0 such that (5.47) is solvable for −δ ≤ α < 0; hence αo < 0. Moreover, if (5.47) possesses a solution u1 for some α1 < 0, then for any α1 < α < 0 we have

−Δu1 + α > −Δu1 + α1 = R(x)e^{u1},

that is, u1 is an upper solution for the equation

−Δu + α = R(x)e^u.

Hence, by Lemmas 5.2.4 and 5.2.5, this equation also has a solution. Therefore one can solve (5.47) for any 0 > α > αo. By virtue of Lemma 5.2.3 i), αo = −∞ if R(x) ≤ 0 but R(x) is not identically 0.

Now, to complete the proof of the Theorem, it suffices to prove that if R(x) is positive somewhere, then αo > −∞. By Lemma 5.2.2, we only need to show the

Claim: There exists α < 0, such that the unique solution of

Δφ + αφ = R(x)   (5.56)

is negative somewhere.

We argue in the following two steps.

i) For a given α < 0, let u = u(x, α) be the solution of (5.56). Multiplying both sides of equation (5.56) by u and integrating over M, we obtain

−∫_M |∇u|² dA + α ∫_M u² dA = ∫_M R(x)u(x) dA.

Then, by the Hölder inequality,

‖u‖²_{L²} ≤ (1/|α|) ∫_M |R u| dA ≤ (1/|α|) ‖R‖_{L²} ‖u‖_{L²}.

This implies that

‖u‖_{L²} ≤ (1/|α|) ‖R‖_{L²},   (5.57)

and hence

u(x, α) → 0 in L²(M), as α → −∞.   (5.58)
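The smallness of φ for very negative α, and in fact the stronger asymptotics αφ ≈ R that drive the Claim (φ = (Δ+α)^{−1}R behaves like R/α, hence goes negative where R > 0), can be observed in a one-dimensional periodic model of (5.56). The finite-difference discretization, the grid size, α, and the sample R below are our own choices:

```python
import math

def solve(A, b):
    # dense Gaussian elimination with partial pivoting
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for j in range(c, m + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][j] * x[j] for j in range(r + 1, m))) / M[r][r]
    return x

m, alpha = 64, -200.0
h = 2 * math.pi / m
R = [1.0 + math.cos(i * h) for i in range(m)]        # continuous, positive somewhere

# discretize  phi'' + alpha*phi = R  on the circle with a periodic
# second-difference matrix, then solve
A = [[0.0] * m for _ in range(m)]
for i in range(m):
    A[i][i] = -2.0 / h**2 + alpha
    A[i][(i - 1) % m] += 1.0 / h**2
    A[i][(i + 1) % m] += 1.0 / h**2
phi = solve(A, R)

err = max(abs(alpha * p - r) for p, r in zip(phi, R))
assert err < 0.02       # alpha*phi is already close to R at alpha = -200
assert max(phi) < 0     # phi is negative (here everywhere), as the Claim predicts
```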
ii) Actually, we can prove a stronger result:

αφ(x) → R(x) almost everywhere, as α → −∞.   (5.59)

To see (5.59), notice that for any α < 0 the operator Δ + α is invertible, so that we can express

φ = (Δ + α)^{−1} R(x)

and write

αφ − R(x) = −Δ(Δ + α)^{−1} R.

One can verify that, if R ∈ L²,

Δ(Δ + α)^{−1} R(x) → 0 almost everywhere, as α → −∞,   (5.60)

by simply applying the operator Δ + α to both sides. This yields (5.59).

Now, by the assumption, R(x) is continuous and positive somewhere. Hence, for sufficiently negative α, the solution φ(x) of (5.56) is negative somewhere. This proves the Claim, and consequently αo > −∞. This completes the proof of the Theorem.

5.2.2 Chen and Li's Results

From the previous section, one can see from Kazdan and Warner's result that, in the case when R(x) is nonpositive, equation (5.46) has been solved completely. In particular, a necessary and sufficient condition for equation (5.46) to have a solution for every 0 > α > −∞ is R(x) ≤ 0. However, in the case when R(x) changes signs, there still remain some questions. From the above reasoning, one may naturally ask: Does equation (5.46) have a solution for the critical value α = αo? In [CL1], Chen and Li answered this question affirmatively.

Theorem 5.2.2 (Chen–Li) Assume that R(x) is a continuous function on M. Let αo be defined as in the previous section. Then equation (5.46) has at least one solution when α = αo.

The Outline of the Proof. We will prove the Theorem in 3 steps. Let {αk} be a sequence of numbers such that αk > αo and αk → αo as k → ∞.
In Step 1, for each fixed αk, we minimize the functional

Jk(u) = (1/2) ∫_M |∇u|² dA + αk ∫_M u dA − ∫_M R(x)e^u dA

in a class of functions that lie between a lower and an upper solution. Let the minimizer be uk. In Step 2, we show that the sequence of minimizers {uk} is bounded in the region where R(x) is positively bounded away from 0. In Step 3, we prove that {uk} is bounded in H¹(M) and hence converges to a desired solution.

Step 1. For each αk > αo, choose α̃k with αo < α̃k < αk. By the result of Kazdan and Warner, there exists a solution of equation (5.46) for α = α̃k; call it ψk. Then obviously

−Δψk + αk > −Δψk + α̃k = R(x)e^{ψk(x)},

i.e. ψk is a super solution of the equation

−Δu + αk = R(x)e^{u(x)}.   (5.61)

We first show that there exists a constant co > 0, such that

ψk(x) ≥ −co, ∀ x ∈ M and ∀ k = 1, 2, ···.   (5.62)

Otherwise, along a subsequence, ψk(xk) → −∞ as k → ∞, where xk is a minimum point of ψk on M. On the one hand, since xk is a minimum, we have −Δψk(xk) ≤ 0; on the other hand, R(xk)e^{ψk(xk)} → 0 while α̃k → αo < 0 as k → ∞. These contradict the equation −Δψk + α̃k = R e^{ψk}. This verifies (5.62).

Choose a sufficiently negative constant A < −co to be the lower solution of equation (5.61), that is,

−ΔA + αk ≤ R(x)e^A.   (5.63)

This is possible because R(x) is bounded below on M, and hence we can make R(x)e^A greater than any fixed negative number.
Illuminated by an idea of Ding and Liu [DL], we minimize the functional Jk(u) in the class of functions

H = {u ∈ H¹(M) | A ≤ u ≤ ψk}.

Since M is compact and R(x) is continuous, there exists a constant C1 such that R(x) ≤ C1. Hence, by the Hölder and Poincaré inequalities and an argument similar to that in the previous section, we have, for any u ∈ H,

Jk(u) ≥ (1/2) ∫_M |∇u|² dA + αk ∫_M u dA − C1 ∫_M e^{ψk} dA ≥ (1/2)(‖∇u‖ − C2)² − C3.   (5.64)

In particular, the functional Jk(u) is bounded from below on H. Consequently, we can show that there is a subsequence of a minimizing sequence which converges weakly to a minimum uk of Jk(u) in the set H.

In order to show that uk is a solution of equation (5.61), we need it to be in the interior of H, i.e.

A < uk(x) < ψk(x), ∀ x ∈ M.   (5.65)

First we show that

uk(x) < ψk(x), ∀ x ∈ M.   (5.66)

Suppose on the contrary that there exists xo ∈ M such that uk(xo) = ψk(xo). Since uk(xo) > A, there is a δ > 0 such that

A < uk(x), ∀ x ∈ B_δ(xo).

It follows that, for any positive function v(x) with support in B_δ(xo), there exists an ε > 0 such that uk + tv ∈ H for −ε ≤ t ≤ 0. Since uk is a minimum of Jk in H, the function g(t) = Jk(uk + tv) achieves its minimum at t = 0 on the interval [−ε, 0]. Hence g′(0) ≤ 0. This implies that

∫_M [−Δuk + αk − R(x)e^{uk}] v(x) dA ≤ 0.   (5.67)

Consequently,

−Δuk(xo) + αk − R(xo)e^{uk(xo)} ≤ 0.   (5.68)

Let w(x) = ψk(x) − uk(x). Then w(x) ≥ 0, and xo is a minimum of w. It follows that −Δw(xo) ≤ 0. While on the other hand,
−Δw(xo) = −Δψk(xo) − (−Δuk(xo)) ≥ (R(xo)e^{ψk(xo)} − α̃k) − (R(xo)e^{uk(xo)} − αk) = αk − α̃k > 0,

since uk(xo) = ψk(xo). This is a contradiction, and it verifies (5.66). Similarly, one can show that

A < uk(x), ∀ x ∈ M.

Therefore uk lies in the interior of H. Now, for any v ∈ C¹(M), uk + tv is in H for all sufficiently small |t|, and hence

d/dt Jk(uk + tv)|_{t=0} = 0.

A direct computation, as we did in the past, then leads to

−Δuk + αk = R(x)e^{uk}, x ∈ M.

Step 2. It is obvious that the sequence {uk} obtained in the previous step is uniformly bounded from below by the negative constant A. To show that it is also bounded from above, we first consider the region where R(x) is positively bounded away from 0. We will use a 'sup + inf' inequality of Brezis, Li, and Shafrir [BLS]:

Lemma 5.2.6 (Brezis, Li, and Shafrir) Assume that V is a Lipschitz function satisfying 0 < a ≤ V(x) ≤ b < ∞, and let K be a compact subset of a domain Ω in R². Then any solution u of the equation

−Δu = V(x)e^u, x ∈ Ω,

satisfies

sup_K u + inf_Ω u ≤ C(a, b, ‖∇V‖_{L∞}, K, Ω).

Let xo be a point at which R(xo) > 0. Choose ε > 0 so small that

R(x) ≥ a > 0, ∀ x ∈ Ω ≡ B_{2ε}(xo),   (5.69)

for some constant a. Let vk be the solution of

−Δvk − αk = 0, x ∈ Ω;  vk(x) = 1, x ∈ ∂Ω.

Let wk = uk + vk. Then wk satisfies

−Δwk = R(x)e^{−vk} e^{wk}, x ∈ Ω.
By the maximum principle, the sequence {vk} is bounded from above and from below in the small ball Ω, since {αk} is bounded. Since {uk} is uniformly bounded from below, {wk} is also bounded from below. Locally, the metric is pointwise conformal to the Euclidean metric; hence one can apply Lemma 5.2.6 to {wk} to conclude that the sequence {wk} is uniformly bounded from above in the smaller ball B_ε(xo). The uniform bound for {uk} in the same region follows suit. Since xo is an arbitrary point where R is positive, we conclude that {uk} is uniformly bounded in any region where R(x) is positively bounded away from 0.

Step 3. We show that the sequence {uk} is bounded in H¹(M). Choose a small δ > 0, such that the set

D = {x ∈ M | R(x) > δ}

is nonempty. It is open, since R(x) is continuous. From the previous step, we know that {uk} is uniformly bounded in D. Let K(x) be a smooth function such that

K(x) < 0 for x ∈ D, and K(x) = 0 elsewhere.

For each αk, let vk be the unique solution of

−Δvk + αk = K(x)e^{vk}, x ∈ M.

Since αk → αo, the sequence {vk} is uniformly bounded. Let wk = uk − vk. Then

−Δwk = R(x)e^{uk} − K(x)e^{vk}.   (5.70)

We show that {wk} is bounded in H¹(M). On one hand, multiplying both sides of (5.70) by e^{wk} and integrating on M, we have

∫_M |∇wk|² e^{wk} dA = ∫_M R(x)e^{uk + wk} dA − ∫_D K(x)e^{uk} dA.   (5.71)

On the other hand, since each uk is a minimizer of the functional Jk, the second variation J″k(uk) is positively definite. It follows that, for any function φ ∈ H¹(M),

∫_M [ |∇φ|² − R(x)e^{uk} φ² ] dA ≥ 0.

Choosing φ = e^{wk/2}, we derive
(1/4) ∫_M |∇wk|² e^{wk} dA ≥ ∫_M R(x)e^{uk + wk} dA.   (5.72)

Combining (5.71) and (5.72), we arrive at

−∫_D K(x)e^{uk} dA ≥ (3/4) ∫_M |∇wk|² e^{wk} dA.   (5.73)

Notice that {uk} is uniformly bounded in the region D, and that {wk} is bounded from below, due to the fact that {uk} is bounded from below and {vk} is bounded. Therefore (5.73) implies that ∫_M |∇wk|² dA is bounded, and so is ∫_M |∇uk|² dA.

Consider ũk = uk − ūk, where ūk is the average of uk. Obviously ∫_M ũk dA = 0, hence we can apply the Poincaré inequality to ũk and conclude that {ũk} is bounded in H¹(M). If ∫_M ūk² dA were unbounded, then {|ūk|} would be unbounded, and, taking into account that {uk} is bounded in the region D, we would have

∫_D ũk² dA = ∫_D |uk − ūk|² dA → ∞,

a contradiction with the fact that {ũk} is bounded in H¹(M). Therefore {ūk} is bounded, and since

∫_M uk² dA ≤ 2 ∫_M ũk² dA + 2 ūk² |M|,

{uk} must also be bounded in H¹(M). Consequently, there exists a subsequence of {uk} that converges weakly in H¹(M) to a function uo, and uo is the desired weak solution of

−Δuo + αo = R(x)e^{uo}.

This completes the proof of the Theorem.
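The test-function choice φ = e^{wk/2} made above rests on an elementary chain-rule identity; for completeness:

```latex
\nabla\bigl(e^{w_k/2}\bigr) \;=\; \tfrac12\, e^{w_k/2}\,\nabla w_k
\qquad\Longrightarrow\qquad
\bigl|\nabla\bigl(e^{w_k/2}\bigr)\bigr|^{2} \;=\; \tfrac14\,|\nabla w_k|^{2}\, e^{w_k},
\qquad \varphi^{2} \;=\; e^{w_k}.
```

Substituting these into ∫_M [|∇φ|² − R e^{uk} φ²] dA ≥ 0 gives (5.72) directly.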
2.1 6. for n ≥ 3 6.3.1 Introduction On higher dimensional sphere S n with n ≥ 3.3.2 Introduction The Variational Approach 6. there are wellknown obstructions found by Kazdan and Warner [KW2] and later generalized by Bourguignon and Ezin [BE].2) . The conditions are: X(R)dVg = 0 Sn (6.2 6.1 6.3 6.1 6.2 Estimate the Values of the Functional The Variational Scheme In the Region Where R < 0 In the Region Where R is small In the Region Where R > 0 6. u > 0.2.3 The A Priori Estimates 6. Besides the obvious necessary condition that R(x) be positive somewhere.1) The equation is “critical” in the sense that lack of compactness occurs. Kazdan and Warner raise the following question: Which functions R(x) can be realized as the scalar curvature of some conformally related metrics? It is equivalent to consider the existence of a solution to the following semilinear elliptic equation − u+ n+2 n(n − 2) n−2 u= R(x)u n−2 . 4 4(n − 1) (6.3.6 Prescribing Scalar Curvature on S n . x ∈ S n .
1) on higher dimensional spheres. (6. The above mentioned Bianchi’s counter example seems to suggest that the ‘ﬂatness condition’ is somewhat sharp. These conditions give rise to many examples of R(x) for which equation (6. Roughly speaking.1) is that R (r) changes signs in the region where R is positive. He found some positive rotationally symmetric function R on S 4 . However. we answered question (a) negatively [CL3] [CL4]. It was illustrated by a counter example in [Bi] constructed by Bianchi. the ‘nondegeneracy’ condition is no longer suﬃcient to guarantee the existence of a solution. . a major diﬃculty is that multiple blowups may occur when approaching a solution by a minimax sequence of the functional or by solutions of subcritical equations. and for which the problem admits no solution. numerous studies were dedicated to these problems and various suﬃcient conditions were found (please see the articles [Ba] [BC] [C] [CC] [CD1] [CD2] [ChL] [CS] [CY1] [CY2] [CY3] [CY4] [ES] [Ha] [Ho] [Li] [Li2] [Li3] [Mo] [SZ] [XY] and the references therein ). Xu and Yang [XY] showed that if R is ‘nondegenerate. a monotone rotationally symmetric function R admits no solution. it requires that at every critical point of R. In this situation.3) Then (a) is (6.158 6 Prescribing Scalar Curvature on S n .’ then (6. while some higher but less than nth order derivative is distinct from 0. We call these KazdanWarner type conditions. if it is monotone in the region where it is positive. Our results imply that for a rotationally symmetric function R.3) on S 2 . the ‘nondegeneracy’ condition is a special case of the ‘ﬂatness condition’. among others. namely. In other words.4) is a suﬃcient condition. the conditions become: R > 0 somewhere and R changes signs. a more proper condition is called the ‘ﬂatness condition’. For n = 3. In the last two decades. it is still used widely today (see [CY3]. which is nondegenerate and nonmonotone. In dimensions higher than 3. 
then problem (6.4) Now.1) admits no solution unless R is a constant. a necessary condition to solve (6. We found some stronger obstructions. In particular. one problem of common concern left open. for n ≥ 3 where dVg is the volume element of the conformal metric g and X is any conformal vector ﬁeld associated with the standard metric g0 .3) a suﬃcient condition? and (b) if not.1) has no solution. and [SZ]) as a standard assumption to guarantee the existence of a solution. Although people are wondering if the ‘ﬂatness condition’ is necessary. the derivatives of R up to order (n − 2) vanish. what are the necessary and suﬃcient conditions? Kazdan listed these as open problems in his CBMS Lecture Notes [Ka]. is this a suﬃcient condition? For prescribing Gaussian curvature equation (5. were those KazdanWarner type necessary conditions also suﬃcient ? In the case where R is rotationally symmetric. Recently. [Li2]. For equation (6. (6.
Now, a natural question is: Under the 'flatness condition,' is (6.4) a sufficient condition? In this section, we answer the question affirmatively: under the 'flatness condition,' (6.4) is a necessary and sufficient condition for (6.1) to have a solution. This is true in all dimensions n ≥ 3, and it applies to functions R with changing signs. Thus, we essentially answer the open question (b) posed by Kazdan.

There are many versions of 'flatness conditions'; a general one was presented in [Li2] by Y. Li. Here, to better illustrate the idea, we only list a typical and easy-to-verify one in the statement of the following theorem.

Theorem 6.1.1 Let n ≥ 3, and let R = R(r) be rotationally symmetric and satisfy the following flatness condition near every positive critical point ro:

R(r) = R(ro) + a|r − ro|^α + h(|r − ro|),   (6.5)

where h′(s) = o(s^{α−1}), a ≠ 0, and n − 2 < α < n. Then a necessary and sufficient condition for equation (6.1) to have a solution is that R′(r) changes signs in the region where R > 0.

The theorem is proved by a variational approach. We blend in our new ideas with other ideas in [XY], [Ba], [Li2], and [SZ]. We use the 'center of mass' to define neighborhoods of the 'critical points at infinity,' obtain some quite sharp and clean estimates in those neighborhoods, and construct a max-mini variational scheme at subcritical levels and then approach the critical level.

Outline of the proof of Theorem 6.1.1. Let γn = n(n−2)/4 and τ = (n+2)/(n−2). To find a solution of equation (6.1), we first find a positive solution up of the subcritical equation

−Δu + γn u = R(r) u^p   (6.6)

for each p < τ and close to τ; then we let p → τ and take the limit. Let

Jp(u) := ∫_{S^n} R u^{p+1} dV and E(u) := ∫_{S^n} (|∇u|² + γn u²) dV.

We seek critical points of Jp(u) under the constraint

H = {u ∈ H¹(S^n) | E(u) = γn |S^n|, u ≥ 0},

where |S^n| is the volume of S^n.
If R has only one positive local maximum, then by condition (6.4), on each of the poles R is either nonpositive or has a local minimum; in this case, we seek a solution by minimizing the functional in a family of rotationally symmetric functions in H. In the following, we assume that R has at least two positive local maxima; the solutions we seek are then not necessarily symmetric.

Our scheme is based on the following key estimates on the values of the functional Jp(u) in a neighborhood of the 'critical points at infinity' associated with each local maximum of R. Roughly speaking, we have some kind of 'mountain pass' associated to each local maximum of R. Let r1 be a positive local maximum of R. We prove that there is an open set G1 ⊂ H (independent of p), defined by using the 'center of mass,' such that on the boundary ∂G1 of G1 we have

Jp(u) ≤ R(r1)|S^n| − δ,   (6.7)

while there is a function ψ1 ∈ G1, such that

Jp(ψ1) > R(r1)|S^n| − δ/2.   (6.8)

Here δ > 0 is independent of p.

Let r1 and r2 be the two smallest positive local maxima of R. Let ψ1 and ψ2 be the two functions defined by (6.8) associated to r1 and r2. Let γ be a path in H connecting ψ1 and ψ2, and let Γ be the family of all such paths. Define

cp = sup_{γ ∈ Γ} min_{u ∈ γ} Jp(u).   (6.9)

Then, similarly to the approach in [C], for each p < τ there is a critical point up of Jp(·), such that Jp(up) = cp. Obviously, a constant multiple of up is a solution of (6.6).

To show the convergence of a subsequence of {up} as p → τ, we establish a priori bounds for the solutions in the following order.

(i) In the region where R < 0: This is done by the 'Kelvin transform' and a maximum principle.

(ii) In the region where |R| is small: This is mainly due to the boundedness of the energy E(up).

(iii) In the region where R is positively bounded away from 0: First, due to the energy bound, {up} can only blow up at at most finitely many points. Using a Pohozaev type identity, we show that the sequence can only blow up at one point, and the point must be a local maximum of R. Moreover, since every path in Γ must cross ∂G1, we deduce from (6.7) that

Jp(up) ≤ R(ri)|S^n| − δ   (6.10)

for any positive local maximum ri of R. Finally, we use (6.7) and (6.10) to argue that even a one point blow up is impossible, and thus establish an a priori bound for a subsequence of {up}.

We divide the rest of the section into two parts. In subsection 2, we establish the key estimates (6.7) and (6.8). In subsection 3, we carry on the max-mini variational scheme, obtain a solution up of the subcritical equation for each p, and then establish the a priori estimates on the solution sequence {up}.

6.2 The Variational Approach

In this section, we construct a max-mini variational scheme to find a solution of

−Δu + γn u = R(r) u^p   (6.11)

for each p < τ := (n+2)/(n−2). Let

Jp(u) := ∫_{S^n} R u^{p+1} dV and E(u) := ∫_{S^n} (|∇u|² + γn u²) dV.

Let (u, v) = ∫_{S^n} (∇u · ∇v + γn u v) dV be the inner product in the Hilbert space H¹(S^n), and let ‖u‖ := √E(u) be the corresponding norm. We seek critical points of Jp(u) under the constraint

H := {u ∈ H¹(S^n) | E(u) = E(1) = γn |S^n|, u ≥ 0}.

One can easily see that a critical point of Jp in H multiplied by a constant is a solution of (6.11).
6.2.1 Estimate the Values of the Functional

To construct a max-mini variational scheme, we first show that there is some kind of 'mountain pass' associated to each positive local maximum of R. Unlike the classical ones, these 'mountain passes' are in some neighborhoods of the 'critical points at infinity' (see Propositions 6.2.1 and 6.2.2 below).

We recall some well-known facts in conformal geometry. Define the center of mass of u as

q(u) := ( ∫_{S^n} x u^{τ+1}(x) dV ) / ( ∫_{S^n} u^{τ+1}(x) dV ).   (6.12)

It is actually the center of mass of the sphere S^n with density u^{τ+1}(x) at the point x. As usual, we use |·| to denote the distance in R^{n+1}. Choose a coordinate system in R^{n+1} so that the south pole of S^n is at the origin O and the center of the ball B^{n+1} is at (0, ···, 0, 1). We work in the spherical polar coordinates x = (r, θ) of S^n (0 ≤ r ≤ π, θ ∈ S^{n−1}), centered at the south pole where r = 0.

Let φq be the standard solution with its 'center of mass' at q ∈ B^{n+1}; it belongs to H and satisfies

−Δu + γn u = γn u^τ.   (6.13)

When q = O (the south pole of S^n), it can be expressed as

φq = φ_{λ,q̃} = ( λ / (λ² cos²(r/2) + sin²(r/2)) )^{(n−2)/2},   (6.14)

with 0 < λ ≤ 1. As λ → 0, this family of functions blows up at the south pole, while tending to zero elsewhere. We may also regard φq as depending on the two parameters λ and q̃, where q̃ is the intersection of S^n with the ray passing through the center and the point q.

More generally and more precisely, there is a family of conformal transforms Tq : H → H, with

Tq u := u(h_λ(x)) [det(dh_λ)]^{(n−2)/(2n)},

where, in the spherical polar coordinates,

h_λ(r, θ) = (2 tan^{−1}(λ tan(r/2)), θ).

Here det(dh_λ) is the determinant of the Jacobian of h_λ, and it can be expressed explicitly as

det(dh_λ) = ( λ / (λ² sin²(r/2) + cos²(r/2)) )^n.   (6.15)

It is well-known that this family of conformal transforms leaves the equation (6.13), the energy E(·), and the integral ∫_{S^n} u^{τ+1} dV invariant.
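The profile in (6.14) can be checked against equation (6.13) numerically, using the radial form of the Laplace–Beltrami operator on S^n, Δu = u″ + (n−1) cot(r) u′. A sketch for n = 3 and one value of λ (both are our own test choices):

```python
import math

n, lam = 3, 0.5
gamma = n * (n - 2) / 4.0
tau = (n + 2) / (n - 2)

def phi(r):
    # radial profile of the standard bubble centered at the south pole r = 0
    return (lam / (lam**2 * math.cos(r / 2)**2 + math.sin(r / 2)**2)) ** ((n - 2) / 2)

def residual(r, h=1e-4):
    # -Delta phi + gamma*phi - gamma*phi^tau, with the radial Laplacian
    # Delta u = u'' + (n-1)*cot(r)*u' on S^n, via central differences
    d1 = (phi(r + h) - phi(r - h)) / (2 * h)
    d2 = (phi(r + h) - 2 * phi(r) + phi(r - h)) / h**2
    lap = d2 + (n - 1) / math.tan(r) * d1
    return -lap + gamma * phi(r) - gamma * phi(r) ** tau

for r in (0.5, 1.0, 2.0, 2.8):
    assert abs(residual(r)) < 1e-5   # (6.13) holds up to finite-difference error
```

At λ = 1 the profile is identically 1, the constant solution; as λ decreases, the same family interpolates toward the blow-up at the south pole.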
Correspondingly, we have

Lemma 6.2.1 For any u, v ∈ H¹(S^n) and for any nonnegative integer k, the following hold:

(i) (Tq u, Tq v) = (u, v);   (6.16)

(ii) ∫_{S^n} (Tq u)^k (Tq v)^{τ+1−k} dV = ∫_{S^n} u^k v^{τ+1−k} dV.   (6.17)

In particular,

E(Tq u) = E(u) and ∫_{S^n} (Tq u)^{τ+1} dV = ∫_{S^n} u^{τ+1} dV.

Proof. (i) We will use a property of the conformal Laplacian to prove the invariance of the energy, E(Tq u) = E(u); the proof of the general case of (i) is similar. On an n-dimensional Riemannian manifold (M, g),

Lg := −Δg + c(n) Rg, where c(n) = (n−2)/(4(n−1)),

is called the conformal Laplacian. Here Δg and Rg are the Laplace–Beltrami operator and the scalar curvature associated with the metric g, respectively. The conformal Laplacian has the following well-known invariance property under a conformal change of metrics: for ĝ = w^{4/(n−2)} g,

L_ĝ u = w^{−(n+2)/(n−2)} Lg(uw), for all u ∈ C^∞(M).   (6.18)

Interested readers may find its proof in [Bes]. If M is a compact manifold without boundary, then multiplying both sides of (6.18) by u and integrating by parts, one arrives at

∫_M { |∇_ĝ u|² + c(n) R_ĝ u² } dV_ĝ = ∫_M { |∇_g(uw)|² + c(n) Rg (uw)² } dVg.   (6.19)

In our situation, we choose g to be the Euclidean metric ḡ on R^n, with ḡij = δij, and ĝ = go, the standard metric on S^n. Then it is well-known that

go = w^{4/(n−2)}(x) ḡ, with w(x) = ( 4 / (4 + |x|²) )^{(n−2)/2}

satisfying the equation

−Δ̄w = γn w^{(n+2)/(n−2)}, w > 0.
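That w(x) = (4/(4+|x|²))^{(n−2)/2} solves −Δ̄w = γn w^{(n+2)/(n−2)} can likewise be verified numerically with the radial Euclidean Laplacian Δ̄w = w″ + ((n−1)/ρ) w′ (n = 3 and the sample radii are our own choices):

```python
import math

n = 3
gamma = n * (n - 2) / 4.0
tau = (n + 2) / (n - 2)

w = lambda rho: (4.0 / (4.0 + rho**2)) ** ((n - 2) / 2)

def residual(rho, h=1e-4):
    # -Delta w - gamma*w^tau, with Delta w = w'' + (n-1)/rho * w' in R^n
    d1 = (w(rho + h) - w(rho - h)) / (2 * h)
    d2 = (w(rho + h) - 2 * w(rho) + w(rho - h)) / h**2
    return -(d2 + (n - 1) / rho * d1) - gamma * w(rho) ** tau

for rho in (0.3, 1.0, 3.0, 10.0):
    assert abs(residual(rho)) < 1e-5
```

Up to the rescaling x → x/2, this is the standard solution of the critical equation −ΔU = n(n−2) U^{(n+2)/(n−2)} in R^n.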
19) becomes { Sn o u 2 + γn u2 }dVo = Rn  ¯ (uw)2 dx. Make a stereographic projection from S n to Rn as shown in the ﬁgure below. θ) ∈ S n becomes x = (x1 .20) that E(Tq u) = Rn  o u(λx) λ(4 + x2 ) 4 + λx2 n−2 2  + γn u(λx) 2 λ(4 + x2 ) 4 + λx2 n−2 2 2 dVgo = Rn  ¯ u(λx)  ¯ u(λx) Rn λ(4 + x2 ) 4 + λx2 4λ 4 + λx2 4 4 + y2 n−2 2 w(x) 2 dx n−2 2 = = Rn 2 dx  ¯ u(y) n−2 2 2 dy = Rn  ¯ (u(y)w(y))2 dy = E(u). .20) where o and ¯ are gradient operators associated with the metrics go and g ¯ respectively. · · · . 4 + λx2 It follows from this and (6. let q be the intersection of S n with the ray passing through the ˜ center of the sphere and the point q. xn ) ∈ Rn . (6. This completes the proof of the lemma. (ii) This can be derived immediately by making change of variable y = hλ (x). for n ≥ 3 c(n)Rgo = γn := and n(n − 2) 4 c(n)Rg = c(n)0 = 0. One can also verify that Tq φq = 1. with r x = tan 2 2 and n−2 λ(4 + x2 ) 2 (Tq u)(x) = u(λx) .164 6 Prescribing Scalar Curvature on S n . the point (r. Again. where q is placed at the origin of Rn . ˜ Under this projection. ¯ Now formula (6.
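The identity −Δ̄w = γ_n w^{(n+2)/(n−2)} for the stereographic conformal factor above is easy to verify symbolically. The following sketch (an illustration, not part of the text) checks it with sympy for the sample dimension n = 3, where γ_n = 3/4 and (n+2)/(n−2) = 5:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
n = 3
s = x**2 + y**2 + z**2                       # |x|^2
w = (4 / (4 + s)) ** sp.Rational(n - 2, 2)   # stereographic conformal factor
gamma_n = sp.Rational(n * (n - 2), 4)        # = 3/4 for n = 3
tau = sp.Rational(n + 2, n - 2)              # critical exponent, = 5 for n = 3

# residual of the equation -Delta w = gamma_n * w^tau; it vanishes identically,
# which we spot-check numerically at a sample point
residual = -sum(sp.diff(w, v, 2) for v in (x, y, z)) - gamma_n * w**tau
assert abs(float(residual.subs({x: 0.3, y: -0.7, z: 1.1}))) < 1e-12
```

The same computation with a general dimension symbol would confirm the constant γ_n = n(n−2)/4, matching the normalization used in (6.13) and (6.20).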
The relations between q and λ, q̃, and T_q for q ≠ O can be expressed in a similar way.

We now carry on the estimates near the south pole (0, θ), which we assume to be a positive local maximum of R. The estimates near other positive local maxima are similar. Our conditions on R imply that, in a small neighborhood of O,

R(r) = R(0) − a r^α,   (6.21)

for some a > 0, n − 2 < α < n. Define

Σ = { u ∈ H : |q(u)| ≤ ρ_o, ‖v‖ := min_{t,q} ‖u − t φ_q‖ ≤ ρ_o }.   (6.22)

Notice that the 'centers of mass' of functions in Σ are near the south pole O; this is a neighborhood of the 'critical points at infinity' corresponding to O. We will estimate the functional J_p in this neighborhood.

We first notice that the supremum of J_p in Σ approaches R(0)|S^n| as p→τ. More precisely, we have

Proposition 6.2.1 For any δ₁ > 0, there is a p₁ ≤ τ, such that for all τ ≥ p ≥ p₁,

sup_Σ J_p(u) > R(0)|S^n| − δ₁.   (6.23)

Then we show that on the boundary of Σ, J_p is strictly less than and bounded away from R(0)|S^n|:

Proposition 6.2.2 There exist positive constants ρ_o, p_o, and δ_o, such that for all p ≥ p_o and for all u ∈ ∂Σ, it holds that

J_p(u) ≤ R(0)|S^n| − δ_o.   (6.24)

Proof of Proposition 6.2.1. Through a straightforward calculation, one can show that, as λ→0,

J_τ(φ_{λ,O}) → R(0)|S^n|.   (6.25)

For a given δ₁ > 0, by (6.25), one can choose λ_o such that φ_{λ_o,O} ∈ Σ and

J_τ(φ_{λ_o,O}) > R(0)|S^n| − δ₁/2.   (6.26)

It is easy to see that, for the fixed function φ_{λ_o,O}, J_p is continuous with respect to p. Hence, by (6.26), there exists a p₁ such that for all p ≥ p₁,

J_p(φ_{λ_o,O}) > R(0)|S^n| − δ₁.

This completes the proof of Proposition 6.2.1.

The proof of Proposition 6.2.2 is rather complex. We first estimate J_p for a family of standard functions in Σ.

Lemma 6.2.2 For p sufficiently close to τ, and for λ and |q̃| sufficiently small,

J_p(φ_{λ,q̃}) ≤ ( R(0) − C₁|q̃|^α )|S^n|( 1 + o_p(1) ) − C₁ λ^{α+δ_p},   (6.27)

where δ_p := τ − p and o_p(1)→0 as p→τ.

Proof of Lemma 6.2.2. Let B_ε(O) ⊂ S^n, with ε > 0 small such that (6.21) holds in B_ε(O). We express

J_p(φ_{λ,q̃}) = ∫_{S^n} R(x) φ_{λ,q̃}^{p+1} dV = ∫_{B_ε(O)} ··· + ∫_{S^n\B_ε(O)} ···.   (6.28)

To estimate the first integral, we work in a local coordinate centered at O:

∫_{B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV = ∫_{B_ε(O)} R(x + q̃) φ_{λ,O}^{p+1} dV = ∫_{B_ε(O)} [ R(0) − a|x + q̃|^α ] φ_{λ,O}^{p+1} dV
  ≤ ∫_{B_ε(O)} [ R(0) − C₃|x|^α − C₃|q̃|^α ] φ_{λ,O}^{p+1} dV
  ≤ [ R(0) − C₃|q̃|^α ]|S^n|[ 1 + o_p(1) ] − C₄ λ^{α+δ_p(n−2)/2}.   (6.29)

Here we have used the fact that |x + q̃|^α ≥ c( |x|^α + |q̃|^α ) for some c > 0 in one half of the ball B_ε(O), and the symmetry of φ_{λ,O}. Using the definition (6.14) of φ_{λ,q̃} and the boundedness of R, we obtain the smallness of the second integral:

∫_{S^n\B_ε(O)} R(x) φ_{λ,q̃}^{p+1} dV ≤ C₂ λ^{n−δ_p(n−2)/2}.   (6.30)

Noticing that α < n and δ_p→0, we conclude that (6.29) and (6.30) imply (6.27). This completes the proof of the Lemma.

To estimate J_p for all u ∈ ∂Σ, we also need the following two lemmas, which describe some useful properties of the set Σ.

Lemma 6.2.3 (On the 'center of mass') (i) Let q, λ, and q̃ be as in (6.14). Then, for λ and |q̃| sufficiently small,

|q̃|² ≤ C( |q|² + λ⁴ ).   (6.31)

(ii) Let ρ_o and v be defined by (6.22). Then, for ρ_o sufficiently small, for any u = tφ_q + v ∈ ∂Σ with |q(u)| = ρ_o,

ρ_o ≤ |q| + C‖v‖.   (6.32)
Let T_{q_o} be the conformal transform such that T_{q_o} φ_{q_o} = 1.

Lemma 6.2.4 (Orthogonality) Let u ∈ Σ, and let

v = u − t_o φ_{q_o}   (6.33)

realize the minimum in (6.22). Then

∫_{S^n} T_{q_o} v dV = 0   (6.34)

and

∫_{S^n} T_{q_o} v · ψ_i(x) dV = 0, i = 1, 2, ···, n,   (6.35)

where the ψ_i are first spherical harmonic functions on S^n:

−Δψ_i = n ψ_i(x), i = 1, 2, ···, n.   (6.36)

If we place the center of the sphere S^n at the origin of R^{n+1} (not in our case), then for x = (x₁, ···, xₙ), ψ_i(x) = x_i, i = 1, 2, ···, n.

Proof of Lemma 6.2.3. (i) From the definition that φ_q = φ_{λ,q̃}, and by an elementary calculation, one arrives at

|q − q̃| ∼ λ², for λ sufficiently small.

Draw a perpendicular line segment from O to the segment joining q and q̃; then one can see that (6.31) is a direct consequence of the triangle inequality.

(ii) For any u = tφ_q + v ∈ ∂Σ, by the definition of ∂Σ, we have either ‖v‖ = ρ_o or |q(u)| = ρ_o. If ‖v‖ = ρ_o, then we are done. Hence we assume that |q(u)| = ρ_o. It follows from the definition of the 'center of mass' that

ρ_o = | ∫_{S^n} x (tφ_q + v)^{τ+1} dV / ∫_{S^n} (tφ_q + v)^{τ+1} dV |.   (6.37)

Noticing that ‖v‖ ≤ ρ_o is very small, we can expand the integrals in (6.37) in terms of v:

ρ_o = | ( t^{τ+1} ∫ x φ_q^{τ+1} + (τ+1) t^τ ∫ x φ_q^τ v + o(‖v‖) ) / ( t^{τ+1} ∫ φ_q^{τ+1} + (τ+1) t^τ ∫ φ_q^τ v + o(‖v‖) ) | ≤ |q| + C‖v‖.

Here we have used the fact that ‖v‖ is small and that t is close to 1. This completes the proof of Lemma 6.2.3.
Proof of Lemma 6.2.4. We use the fact that v = u − t_o φ_{q_o} realizes the minimum of ‖u − tφ_q‖ among all possible values of t and q. Write L = −Δ + γ_n.

(i) By a variation with respect to t, we have

0 = (v, φ_{q_o}) = ∫_{S^n} v Lφ_{q_o} dV.   (6.38)

It follows from (6.13) that

∫_{S^n} v φ_{q_o}^τ dV = 0.

Now, apply the conformal transform T_{q_o} to both v and φ_{q_o} in the above integral. By the invariance property of the transform, we arrive at (6.34).

(ii) To prove (6.35), we make a variation with respect to q:

0 = ∇_q ∫_{S^n} v Lφ_q dV |_{q=q_o} = γ_n ∇_q ∫_{S^n} v φ_q^τ dV |_{q=q_o}
  = γ_n ∇_q ∫_{S^n} T_{q_o} v (T_{q_o} φ_q)^τ dV |_{q=q_o}
  = τ γ_n ∫_{S^n} T_{q_o} v (T_{q_o} φ_q)^{τ−1} ∇_q (T_{q_o} φ_q) dV |_{q=q_o}.

Since T_{q_o} φ_q |_{q=q_o} = 1, and ∇_q (T_{q_o} φ_q)|_{q=q_o} is, up to a nonzero constant, the vector (ψ₁, ···, ψ_n), we arrive at (6.35). This completes the proof of the Lemma.

Proof of Proposition 6.2.2. The estimate is divided into two steps. In Step 1, we show that the difference between J_p(u) and J̄_τ(u) is very small, where J̄ is defined below. In Step 2, we estimate J̄_τ(u).

Make a perturbation of R(x):

R̄(x) = { R(x), x ∈ B_{2ρ_o}(O);  m, x ∈ S^n \ B_{2ρ_o}(O) },   (6.39)

where m = R|_{∂B_{2ρ_o}(O)}. Define

J̄_p(u) = ∫_{S^n} R̄(x) u^{p+1} dV.

Step 1. First we show that

J_p(u) ≤ J̄_τ(u)( 1 + o_p(1) ),   (6.40)

where o_p(1)→0 as p→τ. In fact, by the Hölder inequality,

∫_{S^n} R̄ u^{p+1} dV ≤ ( ∫_{S^n} R̄ u^{τ+1} dV )^{(p+1)/(τ+1)} ( ∫_{S^n} R̄ dV )^{(τ−p)/(τ+1)}
  ≤ ( ∫_{S^n} R̄ u^{τ+1} dV )^{(p+1)/(τ+1)} ( R(0)|S^n| )^{(τ−p)/(τ+1)}.

This implies (6.40).

Now estimate the difference between J_p(u) and J̄_p(u):

| J_p(u) − J̄_p(u) | ≤ C₁ ∫_{S^n\B_{2ρ_o}(O)} u^{p+1} dV
  ≤ C₂ ∫_{S^n\B_{2ρ_o}(O)} (tφ_q)^{p+1} dV + C₂ ∫_{S^n\B_{2ρ_o}(O)} |v|^{p+1} dV
  ≤ C₃ λ^{n−δ_p(n−2)/2} + C₃ ‖v‖^{p+1}.   (6.41)

Step 2. By virtue of (6.40) and (6.41), we now only need to estimate J̄_τ(u). For any u ∈ ∂Σ, write u = tφ_q + v. Notice that ‖u‖² := E(u). (6.38) implies that v and φ_q are orthogonal with respect to the inner product associated to E(·), that is,

∫_{S^n} ( ∇v · ∇φ_q + γ_n v φ_q ) dV = 0.   (6.42)

Hence we have

‖u‖² = ‖v‖² + t² ‖φ_q‖².

Using the fact that both u and φ_q belong to H, with ‖u‖² = γ_n|S^n| = ‖φ_q‖², we derive

t² = 1 − ‖v‖² / ( γ_n|S^n| ).

It follows that

∫_{S^n} R̄(x) u^{τ+1} dV ≤ t^{τ+1} ∫_{S^n} R̄(x) φ_q^{τ+1} dV + (τ+1) t^τ ∫_{S^n} R̄(x) φ_q^τ v dV + ( τ(τ+1)/2 ) ∫_{S^n} R̄(x) φ_q^{τ−1} v² dV + o(‖v‖²)
  = I₁ + (τ+1) I₂ + ( τ(τ+1)/2 ) I₃ + o(‖v‖²).   (6.43)
To estimate I₁, we use Lemma 6.2.2 and the value of t² derived above, obtaining

I₁ ≤ ( 1 − ((τ+1)/(2γ_n|S^n|)) ‖v‖² ) R(0)|S^n| ( 1 − c₁|q̃|^α − c₁λ^α ) + O(λ^n) + o(‖v‖²),   (6.44)

for some positive constant c₁.

To estimate I₂, we use the H¹ orthogonality between v and φ_q (see (6.38)):

I₂ = ∫_{S^n} ( R̄(x) − m ) φ_q^τ v dV = ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) φ_q^τ v dV
  = (1/γ_n) ∫_{B_{2ρ_o}(O)} ( R̄(x) − m ) Lφ_q · v dV
  ≤ C ρ_o^α | ∫_{B_{2ρ_o}(O)} Lφ_q · v dV |
  ≤ C ρ_o^α ‖φ_q‖ ‖v‖ ≤ C₄ ρ_o^α ‖v‖.   (6.45)

To estimate I₃, we use (6.34) and (6.35). It is well-known that the first and second eigenspaces of −Δ on S^n are the constants and the span of the ψ_i, corresponding to the eigenvalues 0 and n respectively. (6.34) and (6.35) imply that T_q v is orthogonal to these two eigenspaces, and hence

∫_{S^n} |∇(T_q v)|² dV ≥ λ₂ ∫_{S^n} |T_q v|² dV,

where λ₂ = n + c₂, for some positive number c₂, is the next eigenvalue of −Δ. Consequently,

‖v‖² = ‖T_q v‖² ≥ ( γ_n + n + c₂ ) ∫_{S^n} (T_q v)² dV.

It follows that

I₃ = ∫_{S^n} R̄ φ_q^{τ−1} v² dV ≤ R(0) ∫_{S^n} (T_q φ_q)^{τ−1} (T_q v)² dV = R(0) ∫_{S^n} (T_q v)² dV ≤ ( R(0) / (γ_n + n + c₂) ) ‖v‖².   (6.46)

Now (6.44), (6.45), and (6.46) imply that there is a β > 0 such that

J̄_τ(u) ≤ R(0)|S^n| [ 1 − β( |q̃|^α + λ^α + ‖v‖² ) ].   (6.47)

Here we have used the fact that ‖v‖ is very small and that ρ_o^α ‖v‖ can be controlled by |q̃|^α + λ^α (see Lemma 6.2.3). Notice that the positive constant c₂ in (6.46) has played a key role; without it, the coefficient of ‖v‖² in (6.47) would be 0, due to the fact that γ_n = n(n−2)/4.

For any u ∈ ∂Σ, we have either ‖v‖ = ρ_o or |q(u)| = ρ_o. By (6.47) and Lemma 6.2.3, in either case there exists a δ_o > 0, such that

J̄_τ(u) ≤ R(0)|S^n| − 2δ_o.   (6.48)

Now (6.24) is an immediate consequence of (6.40), (6.41), and (6.48). This completes the proof of the Proposition.

6.2.2 The Variational Scheme

In this part, we construct variational schemes to show the existence of a solution to equation (6.11) for each p < τ. One can easily see that a critical point of J_p in H multiplied by a constant is a solution of (6.11); moreover, the constant multiples are bounded from above and bounded away from 0.

Case (i): R has only one positive local maximum. In this case, condition (6.4) implies that R must have local minima at both poles. We seek a solution by maximizing the functional J_p in the class of rotationally symmetric functions

H_r = { u ∈ H | u = u(r) }.   (6.49)

Obviously, J_p is bounded from above in H_r, and it is well-known that the variational scheme is compact for each p < τ (see the argument in Chapter 2). Therefore any maximizing sequence possesses a converging subsequence in H_r, and the limiting function multiplied by a suitable constant is a solution of (6.11).

Case (ii): R has at least two positive local maxima. Let r₁ and r₂ be the two smallest positive local maxima of R. Similar to the ideas in [C], there exist two disjoint open sets Σ₁, Σ₂ ⊂ H with ψ_i ∈ Σ_i, i = 1, 2; and by Propositions 6.2.1 and 6.2.2, there exist p_o < τ and δ > 0, such that for all p ≥ p_o,

J_p(ψ_i) > R(r_i)|S^n| − δ/2, i = 1, 2,   (6.50)

and

J_p(u) ≤ R(r_i)|S^n| − δ, ∀ u ∈ ∂Σ_i, i = 1, 2.   (6.51)

Let γ be a path in H joining ψ₁ and ψ₂, and let Γ be the family of all such paths. Define

c_p = sup_{γ∈Γ} min_{u∈γ} J_p(u).   (6.52)

For each fixed p < τ, by the well-known compactness, there exists a critical (saddle) point u_p of J_p with J_p(u_p) = c_p. Moreover, by (6.50), (6.51), and the definition of c_p, we have, for all p close to τ,

J_p(u_p) ≤ min_{i=1,2} R(r_i)|S^n| − δ.   (6.53)
6.3 The A Priori Estimates

In the previous section, we showed the existence of a positive solution u_p of the subcritical equation (6.11) for each p < τ. In this section, we prove that, as p→τ, there is a subsequence of {u_p} which converges to a solution u_o of (6.1). The convergence is based on the following a priori estimate.

Theorem 6.3.1 Assume that R satisfies condition (6.5). Then the solutions of (6.11) obtained by the variational scheme are uniformly bounded.

To prove the Theorem, we estimate the solutions in three regions: where R < 0, where R is close to 0, and where R > 0, respectively.

6.3.1 In the Region Where R < 0

The a priori bound of the solutions is stated in the following proposition, which is a direct consequence of Proposition 6.3.2 in our paper [CL6].

Proposition 6.3.1 For all 1 < p ≤ τ, the solutions of (6.11) are uniformly bounded in the regions where R ≤ −δ, for every δ > 0. The bound depends only on δ, dist({x | R(x) ≤ −δ}, {x | R(x) = 0}), and the lower bound of R.

In [CL6], for rotationally symmetric functions R on S³, we obtained estimates in this region by the method of moving planes; there, we assumed that R be bounded away from zero. In [CL7], we removed this condition by using a blowing up analysis near R(x) = 0. In that paper, we also obtained estimates in this region by the method of moving planes; that method may be applied to higher dimensions with a few modifications. Now within our variational frame in this paper, the a priori estimate is simpler. It is mainly due to the energy bound.

6.3.2 In the Region Where R is Small

Proposition 6.3.2 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exist a p_o < τ and a δ > 0, such that for all p_o < p < τ, {u_p} are uniformly bounded in the regions where |R(x)| ≤ δ.

Proof. It is easy to see that the energies E(u_p) are bounded for all p. Consequently,

∫_{S^n} u_p^{τ+1} dV ≤ C   (6.56)

for some constant C.

Now suppose that the conclusion of the Proposition is violated. Then there exist a subsequence {u_i} with u_i = u_{p_i}, p_i→τ, and a sequence of points {x_i}, with x_i→x_o and R(x_o) = 0, such that u_i(x_i)→∞. We will use a rescaling argument to derive a contradiction. Since x_i may not be a local maximum of u_i, we need to choose a point near x_i which is almost a local maximum. To this end, let K be any large number and let

r_i = 2K [u_i(x_i)]^{−(p_i−1)/2}.

In a small neighborhood of x_o, choose a local coordinate and let

s_i(x) = u_i(x) ( r_i − |x − x_i| )^{2/(p_i−1)}.

Let a_i be the maximum of s_i(x) in B_{r_i}(x_i), and let λ_i = [u_i(a_i)]^{−(p_i−1)/2}. Then from the definition of a_i, one can easily verify that the ball B_{λ_iK}(a_i) is in the interior of B_{r_i}(x_i), and the value of u_i(x) in B_{λ_iK}(a_i) is bounded above by a constant multiple of u_i(a_i). More precisely,

r_i − |a_i − x_i| ≥ 2λ_i K, and consequently, for i sufficiently large, B_{λ_iK}(a_i) ⊂ B_{r_i}(x_i),   (6.54)

and

u_i(x) ≤ u_i(a_i) ( (r_i − |a_i − x_i|)/(r_i − |x − x_i|) )^{2/(p_i−1)} ≤ 2^{2/(p_i−1)} u_i(a_i), ∀ x ∈ B_{λ_iK}(a_i).   (6.55)

To see the first part, we use the fact s_i(a_i) ≥ s_i(x_i). From this, we derive

u_i(a_i) ( r_i − |a_i − x_i| )^{2/(p_i−1)} ≥ u_i(x_i) r_i^{2/(p_i−1)} = (2K)^{2/(p_i−1)}.

It follows that

( r_i − |a_i − x_i| )^{2/(p_i−1)} ≥ (2K)^{2/(p_i−1)} / u_i(a_i),

and hence r_i − |a_i − x_i| ≥ 2λ_i K. To see the second part, we use s_i(a_i) ≥ s_i(x) and derive from (6.54) that r_i − |x − x_i| ≥ (r_i − |a_i − x_i|)/2 for all x ∈ B_{λ_iK}(a_i); then (6.55) follows.

Now we can make a rescaling. Let

v_i(x) = (1/u_i(a_i)) u_i(λ_i x + a_i), ∀ x ∈ B_K(0).

Obviously v_i is bounded in B_K(0), and it follows by a standard argument that {v_i} converges to a harmonic function v_o in B_K(0) ⊂ R^n, with v_o(0) = 1. By a straightforward calculation, one can verify that, for any K > 0,

∫_{B_K(0)} v_i(x)^{τ+1} dx ≥ cK^n   (6.57)

for some c > 0. On the other hand,

∫_{S^n} u_i^{τ+1} dV ≥ ∫_{B_K(0)} v_i^{τ+1} dx.

Since K can be arbitrarily large, (6.56) and (6.57) contradict each other. This completes the proof of the Proposition.
6.3.3 In the Regions Where R > 0

Proposition 6.3.3 Let {u_p} be solutions of equation (6.11) obtained by the variational approach in Section 2. Then there exists a p_o < τ, such that for all p_o < p ≤ τ and for any ε > 0, {u_p} are uniformly bounded in the regions where R(x) ≥ ε.

Proof. For convenience, let {u_i} be the sequence of critical points of J_{p_i} we obtained in Section 2. The proof is divided into 3 steps. In Step 1, we argue that the solutions can only blow up at finitely many points, because of the energy bound; the argument is standard. In Step 2, we show that there is no more than one point blow up, and the point must be a local maximum of R. In Step 3, we show that even one point blow up is impossible. This is done mainly by using a Pohozaev type identity.

Step 1. Let {x_i} be a sequence of points such that u_i(x_i)→∞ and x_i→x_o with R(x_o) > 0. Let r_i, s_i(x), and v_i(x) be defined as in the proof of Proposition 6.3.2. Then, similar to the argument there, we see that {v_i(x)} converges to a standard function v_o(x) in R^n with

−Δv_o = R(x_o) v_o^τ.

It follows that

∫_{B_{r_i}(x_o)} u_i^{τ+1} dV ≥ c_o > 0.

Because the total energy of u_i is bounded, we can have only finitely many such x_o. Therefore, {u_i} can only blow up at finitely many points.

Step 2. We have shown that {u_i} has only isolated blow up points. As a consequence of a result in [Li2] (also see [CL6] or [SZ]), we have

Lemma 6.3.1 Let R satisfy the 'flatness condition' in Theorem 1.1. Then {u_i} can have at most one simple blow up point, and this point must be a local maximum of R.

Step 3. Finally, because each u_i is the minimum of J_{p_i} on a path, we use (6.53) to conclude that even one point blow up is impossible. More precisely, one can obtain

J_τ(u_i) ≤ min_k R(r_k)|S^n| − δ   (6.58)

for all positive local maxima r_k of R. Now if {u_i} blow up at x_o, then by Lemma 6.3.1, {u_i} behaves almost like a family of the standard functions φ_{λ_i,q̃}, with λ_i = (max u_i)^{−2/(n−2)} and with q̃ at the maximum of R. Therefore, we have

J_τ(u_i) → R(x_o)|S^n|.

This contradicts (6.58). Hence the sequence {u_i} is uniformly bounded and possesses a subsequence converging to a solution of (6.1). This completes the proof of Theorem 6.3.1.
then u can not have any interior maximum in the interval.1 7. that is. More generally. b).4 7. then we have the following simplest version of maximum principle: Assume that u (x) > 0 in an open interval (a. It is wellknown in calculus that at a local maximum point xo of u.7 Maximum Principles 7.2 7. Since under the condition u (x) > 0.5 Introduction Weak Maximum Principle The Hopf Lemma and Strong Maximum Principle Maximum Principle Based on Comparison A Maximum Principle for Integral Equations 7. · · · . let’s consider a C 2 function u(x) of one independent variable x. Correspondingly. xn ). Based on this observation.1 Introduction As a simple example of maximum principle. the graph is concave up. the simplest version maximum principle reads: . it can not have any local maximum. we have D2 u(xo ) := (uxi xj (xo )) ≤ 0 . at a local maximum xo .3 7. One can also see this geometrically. the symmetric matrix is nonpositive deﬁnite at point xo . for a C 2 function u(x) of nindependent variables x = (x1 . we must have u (xo ) ≤ 0.
If (a_ij(x)) ≥ 0 and

Σ_{ij} a_ij(x) u_{x_i x_j}(x) > 0   (7.1)

in an open bounded domain Ω, then u can not achieve its maximum in the interior of the domain.

In this case, the validity of the maximum principle comes from the simple algebraic fact: for any two n × n matrices A and B, if A ≥ 0 and B ≤ 0, then tr(AB) ≤ 0.

Unlike its one-dimensional counterpart, condition (7.1) no longer implies that the graph of u(x) is concave up. A simple counter example is

u(x₁, x₂) = x₁² − (1/2) x₂².

One can easily see from the graph of this function that (0, 0) is a saddle point. An interesting special case is when (a_ij(x)) is an identity matrix, in which condition (7.1) becomes Δu > 0.

In this chapter, we will introduce various maximum principles, and most of them will be used in the method of moving planes in the next chapter. Besides this, there are numerous other applications. We will list some below.

i) Providing Estimates

Consider the boundary value problem

−Δu = f(x), x ∈ B₁(0) ⊂ R^n;  u(x) = 0, x ∈ ∂B₁(0).   (7.2)

If a ≤ f(x) ≤ b in B₁(0), then we can compare the solution u with the two functions

(a/2n)(1 − |x|²) and (b/2n)(1 − |x|²),

which satisfy the equation with f(x) replaced by a and b respectively, and which vanish on the boundary as u does. Now, applying the maximum principle for the operator −Δ (see Theorem 7.1.1 in the following), we obtain

(a/2n)(1 − |x|²) ≤ u(x) ≤ (b/2n)(1 − |x|²).

ii) Proving Uniqueness of Solutions

In the above example, if f(x) ≡ 0, then we can choose a = b = 0, and this implies that u ≡ 0. In other words, the solution of the boundary value problem (7.2) is unique.

iii) Establishing the Existence of Solutions

(a) For a linear equation such as (7.2) in any bounded open domain Ω, let

u(x) = sup φ(x),
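Both facts used above are easy to verify symbolically. The sketch below (an illustration, not part of the text; it uses sympy and takes n = 3 in the comparison functions of (7.2)) checks that the counterexample has Δu > 0 yet a saddle at the origin, and that c/(2n)·(1 − |x|²) solves −Δv = c:

```python
import sympy as sp

# (1) u(x1, x2) = x1^2 - x2^2/2 satisfies Delta u = 1 > 0,
#     but its Hessian has eigenvalues of both signs: a saddle, not a maximum.
x1, x2 = sp.symbols('x1 x2', real=True)
u = x1**2 - x2**2 / 2
lap_u = sp.diff(u, x1, 2) + sp.diff(u, x2, 2)
assert lap_u == 1
assert sorted(sp.hessian(u, (x1, x2)).eigenvals()) == [-1, 2]

# (2) The comparison function c/(2n) * (1 - |x|^2) from (7.2), here n = 3:
#     it satisfies -Delta v = c and vanishes on the unit sphere.
xs = sp.symbols('x y z', real=True)
c = sp.symbols('c')
v = c / (2 * 3) * (1 - sum(t**2 for t in xs))
assert sp.simplify(-sum(sp.diff(v, t, 2) for t in xs) - c) == 0
```

Setting c = a and c = b reproduces the two barriers used in the estimate a/(2n)(1 − |x|²) ≤ u ≤ b/(2n)(1 − |x|²).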
where the sup is taken among all the functions that satisfy the corresponding differential inequality

−Δφ ≤ f(x), x ∈ Ω;  φ(x) = 0, x ∈ ∂Ω.

Then u is a solution of (7.2).

(b) Now consider the nonlinear problem

−Δu = f(u), x ∈ Ω;  u(x) = 0, x ∈ ∂Ω.   (7.3)

Assume that f(·) is a smooth function with f'(·) ≥ 0. Suppose that there exist two functions u(x) ≤ ū(x), such that

−Δu ≤ f(u) ≤ f(ū) ≤ −Δū, x ∈ Ω.

These two functions are called sub (or lower) and super (or upper) solutions respectively. To seek a solution of problem (7.3), we use successive approximations. Let

−Δu₁ = f(u), x ∈ Ω;  u₁ = 0, x ∈ ∂Ω,

and

−Δu_{i+1} = f(u_i), x ∈ Ω;  u_{i+1} = 0, x ∈ ∂Ω.

Then by the maximum principle, we have

u ≤ u₁ ≤ u₂ ≤ ··· ≤ u_i ≤ ··· ≤ ū.

Let u be the limit of the sequence {u_i}:

u(x) = lim_{i→∞} u_i(x).

Then u is a solution of the problem (7.3).

In Section 7.2, we introduce and prove the weak maximum principles.

Theorem 7.1.1 (Weak Maximum Principle for −Δ)
i) If −Δu(x) ≥ 0, x ∈ Ω, then

min_Ω̄ u ≥ min_∂Ω u.

ii) If −Δu(x) ≤ 0, x ∈ Ω, then

max_Ω̄ u ≤ max_∂Ω u.
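The successive-approximation scheme in (iii)(b) can be imitated numerically. The sketch below (an illustration, not from the text) discretizes the model problem −u'' = f(u) on (0,1) with u(0) = u(1) = 0 by central differences, takes the subsolution u ≡ 0 and the hypothetical nonlinearity f(u) = (1+u)/4 (so f' ≥ 0, and ū(x) = x(1−x) is a supersolution since −ū'' = 2 ≥ f(ū)), and iterates −u_{i+1}'' = f(u_i):

```python
# Monotone iteration (sub/supersolution scheme) for -u'' = f(u), u(0)=u(1)=0.
# Each step solves the tridiagonal system (-u_{j-1}+2u_j-u_{j+1})/h^2 = f(u_j)
# by the Thomas algorithm (no external libraries needed).

def solve_tridiag(n, h2, rhs):
    """Solve (1/h2)*tridiag(-1, 2, -1) x = rhs, with zero boundary values."""
    a, b, c = -1.0 / h2, 2.0 / h2, -1.0 / h2
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, rhs[0] / b
    for j in range(1, n):
        m = b - a * cp[j - 1]
        cp[j] = c / m
        dp[j] = (rhs[j] - a * dp[j - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for j in range(n - 2, -1, -1):
        x[j] = dp[j] - cp[j] * x[j + 1]
    return x

def monotone_iteration(f, n=99, steps=60):
    """Iterate -u_{i+1}'' = f(u_i) starting from the subsolution u_0 = 0."""
    h2 = (1.0 / (n + 1)) ** 2
    u = [0.0] * n                 # u_0 = 0: a subsolution since -0'' = 0 <= f(0)
    iterates = [u]
    for _ in range(steps):
        u = solve_tridiag(n, h2, [f(v) for v in u])
        iterates.append(u)
    return iterates

its = monotone_iteration(lambda v: (1.0 + v) / 4.0)
```

The iterates increase monotonically toward the solution and stay below the discretized supersolution x(1−x); the mechanism is exactly the (discrete) maximum principle, which makes the inverse of the tridiagonal matrix entrywise nonnegative.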
This result can be extended to general uniformly elliptic operators. Let

D_i = ∂/∂x_i, D_ij = ∂²/(∂x_i ∂x_j).

Define

L = −Σ_ij a_ij(x) D_ij + Σ_i b_i(x) D_i + c(x).

Here we always assume that a_ij(x), b_i(x), and c(x) are bounded continuous functions in Ω̄. We say that L is uniformly elliptic if

Σ a_ij(x) ξ_i ξ_j ≥ δ|ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and for some δ > 0.

Theorem 7.1.2 (Weak Maximum Principle for L) Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.
i) If Lu ≥ 0 in Ω, then

min_Ω̄ u ≥ min_∂Ω u.

ii) If Lu ≤ 0 in Ω, then

max_Ω̄ u ≤ max_∂Ω u.

These Weak Maximum Principles infer that the minima or maxima of u are attained at some points on the boundary ∂Ω. However, they do not exclude the possibility that the minima or maxima may also occur in the interior of Ω. Actually, this can not happen unless u is constant, as we will see in the following.

Theorem 7.1.3 (Strong Maximum Principle for L with c(x) ≡ 0) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.
i) If Lu(x) ≥ 0, x ∈ Ω, then u attains its minimum value only on ∂Ω unless u is constant.
ii) If Lu(x) ≤ 0, x ∈ Ω, then u attains its maximum value only on ∂Ω unless u is constant.

This maximum principle (as well as the weak one) can also be applied to the case when c(x) ≥ 0, with slight modifications.
Theorem 7.1.4 (Strong Maximum Principle for L with c(x) ≥ 0) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.
i) If Lu(x) ≥ 0, x ∈ Ω, then u can not attain its nonpositive minimum in the interior of Ω unless u is constant.
ii) If Lu(x) ≤ 0, x ∈ Ω, then u can not attain its nonnegative maximum in the interior of Ω unless u is constant.

We will prove these Theorems in Section 7.3 by using the Hopf Lemma.

Notice that in the previous Theorems, we all require that c(x) ≥ 0. Roughly speaking, maximum principles hold for 'positive' operators: −Δ is 'positive', and obviously so is −Δ + c(x) if c(x) ≥ 0. However, as we will see in later sections, in practical problems it occurs frequently that the condition c(x) ≥ 0 can not be met. Do we really need c(x) ≥ 0? The answer is 'no'. Actually, if c(x) is not 'too negative', then the operator '−Δ + c(x)' can still remain 'positive' to ensure the maximum principle. These will be studied in Section 7.4, where we prove the 'Maximum Principles Based on Comparisons'.

Theorem 7.1.5 (Maximum Principle Based on Comparison) Assume that Ω is a bounded domain. Let φ be a positive function on Ω̄ satisfying

−Δφ + λ(x)φ ≥ 0.   (7.4)

Let u be a function such that

−Δu + c(x)u ≥ 0, x ∈ Ω;  u ≥ 0 on ∂Ω.   (7.5)

If

c(x) > λ(x), ∀x ∈ Ω,

then u ≥ 0 in Ω.

Also in Section 7.4, as consequences of Theorem 7.1.5, we derive the 'Narrow Region Principle' and the 'Decay at Infinity Principle'. These principles can be applied very conveniently in the 'Method of Moving Planes' to establish the symmetry of solutions for semilinear elliptic equations, as we will see in the next chapter.

In Section 7.5, we establish a maximum principle for integral inequalities.
7.2 Weak Maximum Principles

In this section, we prove the weak maximum principles.

Theorem 7.2.1 (Weak Maximum Principle for −Δ)
i) If

−Δu(x) ≥ 0, x ∈ Ω,   (7.6)

then

min_Ω̄ u ≥ min_∂Ω u.   (7.7)

ii) If

−Δu(x) ≤ 0, x ∈ Ω,   (7.8)

then

max_Ω̄ u ≤ max_∂Ω u.   (7.9)

Proof. Here we only present the proof of part i); the entirely similar proof also works for part ii). To better illustrate the idea, we will deal with the one-dimensional case and the higher-dimensional case separately.

First, let Ω be the interval (a, b). Then condition (7.6) becomes u''(x) ≤ 0. This implies that the graph of u(x) on (a, b) is concave downward, and therefore one can roughly see that the values of u(x) in (a, b) are larger than or equal to the minimum value of u at the end points (see Figure 2).

[Figure 2: the concave-down graph of u on the interval (a, b).]

To prove the above observation rigorously, we first carry it out under the stronger assumption that

−u''(x) > 0.   (7.10)

Let m = min_∂Ω u. Suppose, in contrary to (7.7), that there is a minimum x_o ∈ (a, b) of u, such that u(x_o) < m. Then by the second order Taylor expansion of u around x_o, we must have −u''(x_o) ≤ 0. This contradicts with the assumption (7.10).

Now for u only satisfying the weaker condition (7.6), we consider a perturbation of u:

u_ε(x) = u(x) − εx².

Obviously, for each ε > 0, u_ε(x) satisfies the stronger condition (7.10), since −u_ε'' = −u'' + 2ε > 0; and hence

min_Ω̄ u_ε ≥ min_∂Ω u_ε.

Now letting ε→0, we arrive at the desired conclusion (7.7).

To prove the theorem in dimensions higher than one, we need the following

Lemma 7.2.1 (Mean Value Inequality) Let x_o be a point in Ω. Let B_r(x_o) ⊂ Ω be the ball of radius r centered at x_o, and ∂B_r(x_o) be its boundary.

i) If −Δu(x) > (=) 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

u(x_o) > (=) ( 1/|∂B_r(x_o)| ) ∫_{∂B_r(x_o)} u(x) dS.   (7.11)

It follows that, if x_o is a minimum of u in Ω, then

−Δu(x_o) ≤ 0.   (7.12)

ii) If −Δu(x) < 0 for x ∈ B_{r_o}(x_o) with some r_o > 0, then for any r_o > r > 0,

u(x_o) < ( 1/|∂B_r(x_o)| ) ∫_{∂B_r(x_o)} u(x) dS.   (7.13)

It follows that, if x_o is a maximum of u in Ω, then

−Δu(x_o) ≥ 0.   (7.14)

We postpone the proof of the Lemma for a moment. This Lemma tells us that, roughly speaking, if −Δu(x) > 0, then the graph of u is locally somewhat concave downward; in particular, the value of u at the center of a small ball B_r(x_o) is larger than its average value on the boundary ∂B_r(x_o).

Now, based on this Lemma, to prove the theorem, we first consider

u_ε(x) = u(x) − ε|x|².

Obviously,

−Δu_ε = −Δu + 2εn > 0.   (7.15)

Then by Lemma 7.2.1, if there exists a minimum x_o of u_ε in Ω, we would have −Δu_ε(x_o) ≤ 0, which contradicts with (7.15). Hence we must have

min_Ω u_ε ≥ min_∂Ω u_ε.   (7.16)

Now letting ε→0, we arrive at the desired conclusion (7.7). This completes the proof of the Theorem.

The Proof of Lemma 7.2.1. By the Divergence Theorem,

∫_{B_r(x_o)} Δu(x) dx = ∫_{∂B_r(x_o)} ∂u/∂ν dS = r^{n−1} ∫_{S^{n−1}} ∂u/∂r (x_o + rω) dS_ω,   (7.17)

where dS_ω is the area element of the n − 1 dimensional unit sphere S^{n−1} = {ω | |ω| = 1}.

To see (7.11), note that if −Δu(x) > 0 in B_r(x_o), then by (7.17),

∂/∂r { ∫_{S^{n−1}} u(x_o + rω) dS_ω } < 0.   (7.18)

Integrating both sides of (7.18) from 0 to r yields

∫_{S^{n−1}} u(x_o + rω) dS_ω − u(x_o)|S^{n−1}| < 0,

where |S^{n−1}| is the area of S^{n−1}. It follows that

u(x_o) > ( 1/(r^{n−1}|S^{n−1}|) ) ∫_{∂B_r(x_o)} u(x) dS.

This verifies (7.11).

To see (7.12), we suppose in the contrary that −Δu(x_o) > 0. Then by continuity, there exists a δ > 0, such that

−Δu(x) > 0, ∀x ∈ B_δ(x_o).

Consequently, (7.11) holds for any 0 < r < δ. It follows that

u(x_o) > ( 1/(r^{n−1}|S^{n−1}|) ) ∫_{∂B_r(x_o)} u(x) dS.

This contradicts with the assumption that x_o is a minimum of u. This completes the proof of the Lemma.

From the proof of Theorem 7.2.1, one can see that if we replace the operator −Δ by −Δ + c(x) with c(x) ≥ 0, then the conclusion of Theorem 7.2.1 is still true (with slight modifications). Furthermore, we can replace the Laplace operator −Δ with general uniformly elliptic operators. Let

D_i = ∂/∂x_i, D_ij = ∂²/(∂x_i ∂x_j).

Define

L = −Σ_ij a_ij(x) D_ij + Σ_i b_i(x) D_i + c(x).   (7.19)

Here we always assume that a_ij(x), b_i(x), and c(x) are bounded continuous functions in Ω̄. We say that L is uniformly elliptic if

Σ a_ij(x) ξ_i ξ_j ≥ δ|ξ|²

for any x ∈ Ω, any ξ ∈ R^n, and for some δ > 0.

Theorem 7.2.2 (Weak Maximum Principle for L with c(x) ≡ 0) Let L be the uniformly elliptic operator defined above. Assume that c(x) ≡ 0.
i) If Lu ≥ 0 in Ω, then

min_Ω̄ u ≥ min_∂Ω u.   (7.20)

ii) If Lu ≤ 0 in Ω, then

max_Ω̄ u ≤ max_∂Ω u.
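Lemma 7.2.1 is also easy to test numerically (an illustration, not part of the text). In R², uniform sampling of a circle computes spherical averages exactly for polynomial data: a harmonic function gives equality of the center value and the average, while −Δu > 0 gives the strict inequality (7.11):

```python
import math

def circle_average(u, cx, cy, r, m=4000):
    """Approximate the average of u over the circle of radius r about (cx, cy)."""
    total = 0.0
    for k in range(m):
        t = 2 * math.pi * k / m
        total += u(cx + r * math.cos(t), cy + r * math.sin(t))
    return total / m

harmonic = lambda x, y: x * x - y * y            # Delta u = 0: equality case
superharmonic = lambda x, y: -(x * x + y * y)    # -Delta u = 4 > 0: strict case

avg_h = circle_average(harmonic, 0.3, -0.2, 0.5)
avg_s = circle_average(superharmonic, 0.0, 0.0, 0.5)
```

Here avg_h ≈ harmonic(0.3, −0.2) = 0.05, exhibiting the mean value equality, while superharmonic(0, 0) = 0 > avg_s = −0.25, exhibiting (7.11) with strict inequality.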
For c(x) ≥ 0, the principle still applies, with slight modifications.

Theorem 7.2.3 (Weak Maximum Principle for L with c(x) ≥ 0) Let L be the uniformly elliptic operator defined above. Assume that c(x) ≥ 0. Let u⁻(x) = min{u(x), 0} and u⁺(x) = max{u(x), 0}.
i) If Lu ≥ 0 in Ω, then

min_Ω̄ u ≥ min_∂Ω u⁻.

ii) If Lu ≤ 0 in Ω, then

max_Ω̄ u ≤ max_∂Ω u⁺.

Interested readers may find its proof in many standard books, say in [Ev], page 327.

7.3 The Hopf Lemma and Strong Maximum Principles

In the previous section, we proved a weak form of maximum principle. In the case Lu ≥ 0, it concludes that the minimum of u is attained at some point on the boundary ∂Ω. However, it does not exclude the possibility that the minimum may also be attained at some point in the interior of Ω. In this section, we will show that this can not actually happen; that is, the minimum value of u can only be achieved on the boundary unless u is constant. This is called the "Strong Maximum Principle". We will prove it by using the following

Lemma 7.3.1 (Hopf Lemma). Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let

L = −Σ_ij a_ij(x) D_ij + Σ_i b_i(x) D_i + c(x)

be uniformly elliptic in Ω with c(x) ≡ 0. Assume that

Lu ≥ 0 in Ω.   (7.21)

Suppose there is a ball B contained in Ω with a point x_o ∈ ∂Ω ∩ ∂B, and suppose

u(x) > u(x_o), ∀x ∈ B.   (7.22)

Then for any outward directional derivative at x_o,

∂u(x_o)/∂ν < 0.   (7.23)
In the case c(x) ≥ 0, if we require additionally that u(x_o) ≤ 0, then the same conclusion of the Hopf Lemma holds.

Proof. Without loss of generality, we may assume that B is centered at the origin with radius r. Define

w(x) = e^{−αr²} − e^{−α|x|²}.

Consider v(x) = u(x) + εw(x) on the set D = B_{r/2}(x_o) ∩ B (see Figure 3).

[Figure 3: the ball B of radius r centered at 0, the boundary point x_o, and the region D = B_{r/2}(x_o) ∩ B.]
We will choose α and ε appropriately so that we can apply the Weak Maximum Principle to v(x) and arrive at

v(x) ≥ v(x_o), ∀x ∈ D.   (7.24)

We postpone the proof of (7.24) for a moment. Now from (7.24), we have

∂v/∂ν (x_o) ≤ 0.   (7.25)

Noticing that

∂w/∂ν (x_o) > 0,

we arrive at the desired inequality

∂u/∂ν (x_o) < 0.

Now to complete the proof of the Lemma, what is left to verify is (7.24). We will carry this out in two steps. First we show that

Lv ≥ 0.   (7.26)

Hence we can apply the Weak Maximum Principle to conclude that

min_D̄ v ≥ min_∂D v.   (7.27)
7.3 The Hopf Lemma and Strong Maximum Principles
185
Then we show that the minimum of v on the boundary ∂D is actually attained at x_o:

v(x) ≥ v(x_o), ∀x ∈ ∂D.   (7.28)

Obviously, (7.27) and (7.28) imply (7.24).

To see (7.26), we directly calculate
Lw = e^{−α|x|²} { 4α² Σ_{i,j=1}^n a_ij(x) x_i x_j − 2α Σ_{i=1}^n [ a_ii(x) − b_i(x) x_i ] − c(x) } + c(x) e^{−αr²}
   ≥ e^{−α|x|²} { 4α² Σ_{i,j=1}^n a_ij(x) x_i x_j − 2α Σ_{i=1}^n [ a_ii(x) − b_i(x) x_i ] − c(x) }.   (7.29)

By the ellipticity assumption, we have

Σ_{i,j=1}^n a_ij(x) x_i x_j ≥ δ|x|² ≥ δ(r/2)² > 0 in D.   (7.30)
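For the special case L = −Δ (that is, a_ij = δ_ij, b_i ≡ 0, c ≡ 0), formula (7.29) reduces to Lw = e^{−α|x|²}(4α²|x|² − 2αn). The following sketch (illustrative only, n = 3) confirms this with sympy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
alpha, r = sp.symbols('alpha r', positive=True)
n = 3
s = x**2 + y**2 + z**2
w = sp.exp(-alpha * r**2) - sp.exp(-alpha * s)   # the auxiliary function w

# L = -Delta; the e^{-alpha r^2} term is constant and drops out
Lw = -sum(sp.diff(w, v, 2) for v in (x, y, z))
expected = sp.exp(-alpha * s) * (4 * alpha**2 * s - 2 * alpha * n)
assert sp.simplify(Lw - expected) == 0
```

In particular, on D (where |x| ≥ r/2), the bracket 4α²|x|² − 2αn is positive once α is large, which is exactly how α is chosen below.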
Hence we can choose α sufficiently large, such that Lw ≥ 0. This, together with the assumption Lu ≥ 0, implies Lv ≥ 0, and (7.27) follows from the Weak Maximum Principle.

To verify (7.28), we consider the two parts of the boundary ∂D separately.

(i) On ∂D ∩ B, since u(x) > u(x_o), there exists a c_o > 0, such that u(x) ≥ u(x_o) + c_o. Take ε small enough such that ε|w| ≤ c_o on ∂D ∩ B. Hence

v(x) ≥ u(x_o) = v(x_o), ∀x ∈ ∂D ∩ B.

(ii) On ∂D ∩ ∂B, w(x) = 0, and by the assumption u(x) ≥ u(x_o), we have v(x) ≥ v(x_o).

This completes the proof of the Lemma.

Now we are ready to prove

Theorem 7.3.1 (Strong Maximum Principle for L with c(x) ≡ 0) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≡ 0 in Ω.
i) If Lu(x) ≥ 0, x ∈ Ω, then u attains its minimum only on ∂Ω unless u is constant.
ii) If Lu(x) ≤ 0, x ∈ Ω, then u attains its maximum only on ∂Ω unless u is constant.

Proof. We prove part i) here; the proof of part ii) is similar.

Let m be the minimum value of u in Ω̄. Set Σ = {x ∈ Ω | u(x) = m}. It is relatively closed in Ω. We show that either Σ is empty or Σ = Ω.
7 Maximum Principles
We argue by contradiction. Suppose Σ is a nonempty proper subset of Ω. Then we can find an open ball B ⊂ Ω \ Σ with a point on its boundary belonging to Σ. Actually, we can first find a point p ∈ Ω \ Σ such that d(p, Σ) < d(p, ∂Ω), then increase the radius of a small ball centered at p until it hits Σ (before hitting ∂Ω). Let x^o be the point at ∂B ∩ Σ. Obviously we have, in B,

Lu ≥ 0 and u(x) > u(x^o).

Now we can apply the Hopf Lemma to conclude that the outward normal derivative satisfies

∂u/∂ν (x^o) < 0.   (7.31)

On the other hand, x^o is an interior minimum of u in Ω, and we must have Du(x^o) = 0. This contradicts (7.31) and hence completes the proof of the Theorem.

In the case when c(x) ≥ 0, the strong maximum principle still applies, with slight modifications.

Theorem 7.3.2 (Strong Maximum Principle for L with c(x) ≥ 0.) Assume that Ω is an open, bounded, and connected domain in R^n with smooth boundary ∂Ω. Let u be a function in C²(Ω) ∩ C(Ω̄). Assume that c(x) ≥ 0 in Ω.

i) If Lu(x) ≥ 0, x ∈ Ω, then u can not attain its nonpositive minimum in the interior of Ω unless u is constant.

ii) If Lu(x) ≤ 0, x ∈ Ω, then u can not attain its nonnegative maximum in the interior of Ω unless u is constant.

Remark 7.3.1 In order for the maximum principle to hold, we assume that the domain Ω is bounded. This is essential, since it guarantees the existence of the maximum and minimum of u in Ω̄. A simple counterexample is the half space Ω = {x ∈ R^n | x_n > 0} with u(x) = x_n. Obviously, ∆u = 0, but u does not obey the maximum principle

max_{Ω̄} u ≤ max_{∂Ω} u.

Equally important is the nonnegativeness of the coefficient c(x). For example, set Ω = {(x, y) ∈ R² | −π/2 < x < π/2, −π/2 < y < π/2}. Then u = −cos x cos y satisfies

−∆u − 2u = 0 in Ω,
u = 0 on ∂Ω.
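The example can be checked by direct differentiation; note that with w = cos x cos y, both w and −w solve the same boundary value problem, so a solution that is negative inside Ω does exist:

```latex
% Direct check for w(x,y) = \cos x\,\cos y on \Omega = (-\tfrac{\pi}{2},\tfrac{\pi}{2})^2:
\Delta w = w_{xx} + w_{yy} = -\cos x\,\cos y - \cos x\,\cos y = -2w,
\qquad\text{so}\qquad -\Delta w - 2w = 0 \ \text{in } \Omega .
```

On ∂Ω one of the factors cos x, cos y vanishes, so w = 0 there; thus, with c ≡ −2 < 0, the conclusion of Theorem 7.3.2 fails.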
But, obviously, there are some points in Ω at which u < 0.

However, if we impose some sign restriction on u, say u ≥ 0, then both conditions can be relaxed. A simple version of such a result will be presented in the next theorem. Also, as one will see in the next section, c(x) is actually allowed to be negative, but not 'too negative'.

Theorem 7.3.3 (Maximum Principle and Hopf Lemma for a not necessarily bounded domain and a not necessarily nonnegative c(x).) Let Ω be a domain in R^n with smooth boundary. Assume that u ∈ C²(Ω) ∩ C(Ω̄) and satisfies

−∆u + Σ_{i=1}^n b_i(x) D_i u + c(x) u ≥ 0, u(x) ≥ 0, x ∈ Ω,
u(x) = 0, x ∈ ∂Ω,   (7.32)

with bounded functions b_i(x) and c(x). Then

i) if u vanishes at some point in Ω, then u ≡ 0 in Ω; and

ii) if u ≢ 0 in Ω, then on ∂Ω the exterior normal derivative satisfies ∂u/∂ν < 0.
To prove the Theorem, we need the following Lemma concerning eigenvalues.

Lemma 7.3.2 Let λ1 be the first positive eigenvalue of

−∆φ = λ1 φ(x), x ∈ B1(0),
φ(x) = 0, x ∈ ∂B1(0),   (7.33)

with the corresponding eigenfunction φ(x) > 0. Then for any ρ > 0, the first positive eigenvalue of the problem on Bρ(0) is λ1/ρ². More precisely, if we let ψ(x) = φ(x/ρ), then

−∆ψ = (λ1/ρ²) ψ(x), x ∈ Bρ(0),
ψ(x) = 0, x ∈ ∂Bρ(0).   (7.34)

The proof is straightforward and will be left to the readers.

The Proof of Theorem 7.3.3.

i) Suppose that u = 0 at some point in Ω, but u ≢ 0 in Ω. Let Ω⁺ = {x ∈ Ω | u(x) > 0}. Then, by the regularity assumption on u, Ω⁺ is an open set with C² boundary. Obviously, u(x) = 0, ∀ x ∈ ∂Ω⁺.

Let x^o be a point on ∂Ω⁺, but not on ∂Ω. Then for ρ > 0 sufficiently small, one can choose a ball B_{ρ/2}(x̄) ⊂ Ω⁺ with x^o as its boundary point. Let ψ be the positive eigenfunction of the eigenvalue problem (7.34) on Bρ(x^o) corresponding to the eigenvalue λ1/ρ². Obviously, Bρ(x^o) completely covers B_{ρ/2}(x̄). Let v = u/ψ. Then from (7.32), it is easy to deduce that

0 ≤ −∆v − 2∇v · (∇ψ/ψ) + Σ_{i=1}^n b_i(x) D_i v + [ (−∆ψ)/ψ + Σ_{i=1}^n b_i(x) (D_i ψ)/ψ + c(x) ] v
  ≡ −∆v + Σ_{i=1}^n b̃_i(x) D_i v + c̃(x) v.

Let φ be the positive eigenfunction of the eigenvalue problem (7.33) on B1; then

c̃(x) = λ1/ρ² + (1/ρ) Σ_{i=1}^n b_i(x) (D_i φ / φ) + c(x).

This allows us to choose ρ sufficiently small so that c̃(x) ≥ 0. Now we can apply the Hopf Lemma to conclude that, at the boundary point x^o of B_{ρ/2}(x̄), the outward normal derivative satisfies

∂v/∂ν (x^o) < 0,   (7.35)

because

v(x) > 0, ∀ x ∈ B_{ρ/2}(x̄), and v(x^o) = u(x^o)/ψ(x^o) = 0.

On the other hand, since x^o is also a minimum of v in the interior of Ω, we must have ∇v(x^o) = 0. This contradicts (7.35) and hence proves part i) of the Theorem.

ii) The proof goes almost the same as in part i), except that we consider the point x^o on ∂Ω and the ball B_{ρ/2}(x̄) in Ω with x^o ∈ ∂B_{ρ/2}(x̄). Then, for the outward normal derivative of u, we have

∂u/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) + v(x^o) ∂ψ/∂ν (x^o) = ∂v/∂ν (x^o) ψ(x^o) < 0.

Here we have used the well-known fact that the eigenfunction ψ on Bρ(x^o) is radially symmetric about the center x^o, and hence ∇ψ(x^o) = 0. This completes the proof of the Theorem.
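The rescaling computation behind Lemma 7.3.2, which the text leaves to the reader, takes one line (a sketch using only the chain rule):

```latex
% With \psi(x) = \phi(x/\rho) and -\Delta\phi = \lambda_1\,\phi in B_1(0):
D_{ij}\psi(x) = \frac{1}{\rho^{2}}\,(D_{ij}\phi)\!\left(\frac{x}{\rho}\right)
\;\Longrightarrow\;
-\Delta\psi(x) = -\frac{1}{\rho^{2}}\,(\Delta\phi)\!\left(\frac{x}{\rho}\right)
= \frac{\lambda_1}{\rho^{2}}\,\phi\!\left(\frac{x}{\rho}\right)
= \frac{\lambda_1}{\rho^{2}}\,\psi(x).
```

Moreover, ψ = 0 on ∂Bρ(0), because x ∈ ∂Bρ(0) exactly when x/ρ ∈ ∂B1(0).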
7.4 Maximum Principles Based on Comparisons
In the previous section, we showed that if (−∆ + c(x))u ≥ 0, then the maximum principle, i.e. (7.20), applies. There, we required c(x) ≥ 0. We can think of −∆ as a 'positive' operator, and the maximum principle holds for any 'positive' operator. For c(x) ≥ 0, −∆ + c(x) is also 'positive'. Do we really need c(x) ≥ 0 here to ensure the Maximum Principle? To answer the question, let us consider the Dirichlet eigenvalue problem of −∆:

−∆φ − λφ(x) = 0, x ∈ Ω,
φ(x) = 0, x ∈ ∂Ω.   (7.36)

We notice that the eigenfunction φ corresponding to the first positive eigenvalue λ1 is either positive or negative in Ω. That is, the solutions of (7.36) with λ = λ1 obey the Maximum Principle: the maxima or minima of φ are attained only on the boundary ∂Ω. This suggests that, to ensure the Maximum Principle, c(x) need not be nonnegative; it is allowed to be as negative as −λ1. More precisely, we can establish the following more general maximum principle based on comparison.

Theorem 7.4.1 Assume that Ω is a bounded domain. Let φ be a positive function on Ω̄ satisfying

−∆φ + λ(x)φ ≥ 0.   (7.37)

Assume that u is a solution of

−∆u + c(x)u ≥ 0, x ∈ Ω,
u ≥ 0 on ∂Ω.   (7.38)

If

c(x) > λ(x), ∀ x ∈ Ω,   (7.39)

then u ≥ 0 in Ω.

Proof. We argue by contradiction. Suppose that u(x) < 0 somewhere in Ω. Let v(x) = u(x)/φ(x). Then, since φ(x) > 0, we must have v(x) < 0 somewhere in Ω. Let x^o ∈ Ω be a minimum of v(x). By a direct calculation, it is easy to verify that

−∆v = 2∇v · (∇φ/φ) + (1/φ)(−∆u + (∆φ/φ) u).

On one hand, since x^o is a minimum, we have

−∆v(x^o) ≤ 0 and ∇v(x^o) = 0.   (7.40)

While on the other hand, by (7.37) and (7.39), and taking into account that u(x^o) < 0, we have, at the point x^o,

−∆u + (∆φ/φ) u (x^o) ≥ −∆u + λ(x^o) u(x^o) > −∆u + c(x^o) u(x^o) ≥ 0.   (7.41)

Since ∇v(x^o) = 0, (7.41) implies −∆v(x^o) > 0. This is an obvious contradiction with (7.40), and thus completes the proof of the Theorem.
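The identity for −∆v used in the proof can be derived in a few lines; writing u = vφ and expanding (a routine check, recorded here for completeness):

```latex
% u = v\phi \;\Rightarrow\; \Delta u = \phi\,\Delta v + 2\,\nabla v\cdot\nabla\phi + v\,\Delta\phi, hence
-\Delta v
= \frac{-\Delta u + 2\,\nabla v\cdot\nabla\phi + v\,\Delta\phi}{\phi}
= 2\,\nabla v\cdot\frac{\nabla\phi}{\phi}
  + \frac{1}{\phi}\Bigl(-\Delta u + \frac{\Delta\phi}{\phi}\,u\Bigr),
```

where in the last step we used v = u/φ.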
Remark 7.4.1 From the proof, one can see that conditions (7.37) and (7.39) are required only at the points where v attains its minimum, or at the points where u is negative.

The Theorem is also valid on an unbounded domain if u is 'nonnegative' at infinity:

Theorem 7.4.2 If Ω is an unbounded domain, besides condition (7.38) we assume further that

lim inf_{|x|→∞} u(x) ≥ 0.   (7.42)

Then u ≥ 0 in Ω.

Proof. Still consider the same v(x) as in the proof of Theorem 7.4.1. Now condition (7.42) guarantees that the minima of v(x) do not "leak" away to infinity. Then the rest of the arguments are exactly the same as in the proof of Theorem 7.4.1.

For convenience in applications, we provide two typical situations where there exist such functions φ and c(x) satisfying conditions (7.37) and (7.39), so that the Maximum Principle Based on Comparison applies:

i) narrow regions, and ii) c(x) decays fast enough at ∞.

i) Narrow Regions. When Ω = {x | 0 < x1 < l} is a narrow region of width l, we can choose φ(x) = sin((x1 + ε)/l). Then it is easy to see that −∆φ = (1/l²)φ, so that (7.37) holds with λ(x) = −1/l², which can be very negative when l is sufficiently small. Hence condition (7.39), c(x) > λ(x) = −1/l², is satisfied by any bounded function c(x) as soon as the width l of the region Ω is sufficiently small, and we can directly apply Theorem 7.4.1 to conclude that u ≥ 0 in Ω.

Corollary 7.4.1 (Narrow Region Principle.) If u satisfies (7.38) with a bounded function c(x), then when the width l of the region Ω is sufficiently small, u ≥ 0 in Ω (for an unbounded Ω, provided lim inf_{|x|→∞} u(x) ≥ 0).

ii) Decay at Infinity. In dimension n ≥ 3, one can choose some positive number q < n − 2 and let φ(x) = 1/|x|^q.
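For this comparison function, the required differential inequality follows from the formula for the Laplacian of a radial power (a standard computation, spelled out here):

```latex
% For \phi(x) = |x|^{-q} = r^{-q}, r = |x|, in dimension n:
\Delta\phi = \phi''(r) + \frac{n-1}{r}\,\phi'(r)
           = q(q+1)\,r^{-q-2} - (n-1)\,q\,r^{-q-2}
           = -\,q\,(n-2-q)\,\frac{\phi}{r^{2}} .
```

In particular, −∆φ > 0 when 0 < q < n − 2, so (7.37) holds with λ(x) = −q(n−2−q)/|x|².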
Then

−∆φ = (q(n − 2 − q)/|x|²) φ, x ≠ 0,

so that (7.37) holds with λ(x) = −q(n−2−q)/|x|². In the case when c(x) decays fast enough near infinity, we can adapt the proof of Theorem 7.4.2 to derive

Corollary 7.4.2 (Decay at Infinity.) Assume there exists R > 0 such that

c(x) > −q(n − 2 − q)/|x|², ∀ |x| > R.   (7.43)

Suppose

lim_{|x|→∞} u(x)|x|^q = 0.

Let Ω be a region contained in B_R^C(0) ≡ R^n \ B_R(0), and let u satisfy (7.38) on Ω̄. If we further assume that u|_{∂Ω} ≥ 0, then u(x) ≥ 0 for all x ∈ Ω.

Remark 7.4.2 From Remark 7.4.1, one can see that condition (7.43) is actually only required at the points where u is negative.

Remark 7.4.3 Although Theorem 7.4.1 as well as its Corollaries are stated in linear forms, they can easily be applied to nonlinear equations. Consider, for example,

−∆u − |u|^{p−1} u = 0, x ∈ R^n.   (7.44)

Assume that the solution u decays near infinity at the rate 1/|x|^s, with s(p − 1) > 2. Let c(x) = −|u(x)|^{p−1}. Then c(x) decays fast enough near infinity, and for R sufficiently large, c(x) satisfies (7.43) in the region Ω as stated in Corollary 7.4.2. If, further, u ≥ 0 on ∂Ω, then we can derive from Corollary 7.4.2 that u ≥ 0 in the entire region Ω.

7.5 A Maximum Principle for Integral Equations

In this section, we introduce a maximum principle for integral equations.

Let Ω be a region in R^n, which may or may not be bounded. Define the integral operator T by

(T f)(x) = ∫_Ω K(x, y) f(y) dy,

with a kernel satisfying K(x, y) ≥ 0, ∀ (x, y) ∈ Ω × Ω.
Let ‖·‖ be a norm in a Banach space satisfying

‖f‖ ≤ ‖g‖ whenever 0 ≤ f(x) ≤ g(x) in Ω.   (7.45)

There are many norms that satisfy (7.45), such as the L^p(Ω) norms, the L^∞(Ω) norm, and the C(Ω̄) norm.

Theorem 7.5.1 Suppose that

‖Tg‖ ≤ θ ‖g‖, with some 0 < θ < 1, for all g.   (7.46)

If

f(x) ≤ (T f)(x),   (7.47)

then f(x) ≤ 0 for all x ∈ Ω.

Proof. Define

f⁺(x) = max{0, f(x)}, f⁻(x) = min{0, f(x)},   (7.48)

and Ω⁺ = {x ∈ Ω | f(x) > 0}. Then it is easy to see that, for any x ∈ Ω⁺,

0 ≤ f⁺(x) ≤ (T f)(x) = (T f⁺)(x) + (T f⁻)(x) ≤ (T f⁺)(x).   (7.49)

While in the rest of the region Ω, we have f⁺(x) = 0 and (T f⁺)(x) ≥ 0 by the definitions. Therefore, (7.49) holds for any x ∈ Ω. It follows from (7.49) that

‖f⁺‖ ≤ ‖T f⁺‖ ≤ θ ‖f⁺‖.

Since θ < 1, we must have ‖f⁺‖ = 0, which implies that f⁺(x) ≡ 0, and hence f(x) ≤ 0. This completes the proof of the Theorem.

Recall that, in the previous section, after introducing the "Maximum Principle Based on Comparison" for partial differential equations, we explained how it can be applied to the situations of "Narrow Regions" and "Decay at Infinity". Parallel to this, we have similar applications for integral equations.

Let α be a number between 0 and n. Define

(T f)(x) = ∫_Ω (1/|x − y|^{n−α}) c(y) f(y) dy.

Assume that f satisfies the integral equation

f = T f in Ω,   (7.50)

or, more generally, the integral inequality

f ≤ T f in Ω.   (7.51)

If we have the right integrability condition on c(y), then we can derive, from Theorem 7.5.1, a maximum principle that will be applied to "Narrow Regions" and "Near Infinity". More precisely, we have

Corollary 7.5.1 Assume that c(y) ≥ 0 and c(y) ∈ L^τ(R^n). Let f ∈ L^p(R^n) satisfy the integral inequality (7.51), and assume further that

‖T f‖_{L^p(Ω)} ≤ C ‖c(y)‖_{L^τ(Ω)} ‖f‖_{L^p(Ω)}   (7.52)

for some p, τ > 1. Then there exist positive numbers R_o and ε_o, depending on c(y) only, such that if μ(Ω ∩ B_{R_o}(0)) ≤ ε_o, where μ(D) is the measure of the set D, then f⁺ ≡ 0 in Ω.

Proof. Since c(y) ∈ L^τ(R^n), by the Lebesgue integral theory we can choose R_o sufficiently large, so that the integral of c(y)^τ outside B_{R_o}(0) is as small as we wish; and when μ(Ω ∩ B_{R_o}(0)) ≤ ε_o with ε_o sufficiently small, the integral ∫_Ω c(y)^τ dy is also as small as we wish, and thus

C ‖c(y)‖_{L^τ(Ω)} < 1.

Now it follows from Theorem 7.5.1 and (7.51) that f⁺(x) ≡ 0, ∀ x ∈ Ω. This completes the proof of the Corollary.

Remark 7.5.1 One can see that the condition μ(Ω ∩ B_{R_o}(0)) ≤ ε_o in the Corollary is satisfied in the following two situations:

i) Narrow Regions: the width of Ω is very small, so that the measure of the intersection of Ω with B_{R_o}(0) is sufficiently small;

ii) Near Infinity: say, Ω = B_R^C(0), the complement of the ball B_R(0), with R sufficiently large.

As an immediate application of this "Maximum Principle", we study an integral equation in the next section. We will use the method of moving planes to obtain the radial symmetry and monotonicity of the positive solutions.
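A minimal illustration of how Theorem 7.5.1 operates (a toy example; the product kernel K(x, y) = c(y) and the choice of the L^∞ norm are ours, not from the text):

```latex
% Take K(x,y)=c(y) with c \ge 0, so (Tg)(x)=\int_\Omega c(y)\,g(y)\,dy. Then
\|Tg\|_{L^\infty(\Omega)}
= \sup_{x\in\Omega}\Bigl|\int_\Omega c(y)\,g(y)\,dy\Bigr|
\le \Bigl(\int_\Omega c(y)\,dy\Bigr)\,\|g\|_{L^\infty(\Omega)} .
```

Thus, if ∫_Ω c(y) dy =: θ < 1, for instance because Ω is narrow or because Ω lies near infinity where c is small, then any f with f ≤ Tf must satisfy f ≤ 0. This is exactly the mechanism behind Corollary 7.5.1, with this elementary L¹ bound replaced by an L^p estimate for the Riesz-type kernel.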
8 Methods of Moving Planes and of Moving Spheres

8.1 Outline of the Method of Moving Planes
8.2 Applications of the Maximum Principles Based on Comparisons
8.2.1 Symmetry of Solutions in a Unit Ball
8.2.2 Symmetry of Solutions of −∆u = u^p in R^n
8.2.3 Symmetry of Solutions of −∆u = e^u in R²
8.3 Method of Moving Planes in a Local Way
8.3.1 The Background
8.3.2 The A Priori Estimates
8.4 Method of Moving Spheres
8.4.1 The Background
8.4.2 Necessary Conditions
8.5 Method of Moving Planes in an Integral Form and Symmetry of Solutions for Integral Equations

The Method of Moving Planes (MMP) was invented by the Soviet mathematician Alexandroff in the early 1950s. Decades later, it was further developed by Serrin [Se], Gidas, Ni, and Nirenberg [GNN], Caffarelli, Gidas, and Spruck [CGS], Li [Li], Chen and Li [CL] [CL1], Chang and Yang [CY], and many others. The Method of Moving Planes and its variant–the Method of Moving Spheres–have become powerful tools in establishing symmetries and monotonicity for solutions of partial differential equations. They can also be used to obtain a priori estimates, to derive useful inequalities, and to prove the nonexistence of solutions. This method has been applied to free boundary problems, semilinear partial differential equations, and other problems, and particularly for semilinear partial differential equations there have been many significant contributions. We refer to the paper of Frenkel [F] for more descriptions on the method.

From the previous chapter, we have seen the beauty and power of maximum principles. The MMP greatly enhances this power. Roughly speaking, the MMP is a continuous way of repeatedly applying maximum principles: during the process of moving planes, the maximum principle is used infinitely many times, and the advantage is that each time we only need to use it in a very narrow region. Under certain conditions, in such a narrow region the maximum principle can still be applied even if the coefficients of the equation are not 'good.'

In this chapter, we will apply the Method of Moving Planes and its variant–the Method of Moving Spheres–to study semilinear elliptic equations and integral equations. We will establish symmetry, monotonicity, a priori estimates, and nonexistence of the solutions. During this process, the Maximum Principles introduced in the previous chapter are applied in innovative ways. In particular, the Maximum Principles Based on Comparison will play a major role: the Narrow Region Principle and the Decay at Infinity Principle will be used repeatedly in dealing with the three examples below.

In Section 8.2, we will establish radial symmetry and monotonicity for the solutions of the following three semilinear elliptic problems:

−∆u = f(u), x ∈ B1(0), with u = 0 on ∂B1(0);

−∆u = u^{(n+2)/(n−2)}(x), x ∈ R^n, n ≥ 3; and

−∆u = e^{u(x)}, x ∈ R².

In Section 8.3, we will apply the Method of Moving Planes in a 'local way' to obtain a priori estimates on the solutions of the prescribing scalar curvature equation on a compact Riemannian manifold M,

−(4(n−1)/(n−2)) ∆_o u + R_o(x) u = R(x) u^{(n+2)/(n−2)} in M.

We allow the function R(x) to change signs. In this situation, the traditional blowing-up analysis fails near the set where R(x) = 0. We will use the Method of Moving Planes in an innovative way to obtain a priori estimates. Since the Method of Moving Planes can not be applied to the solution u directly, we introduce an auxiliary function to circumvent this difficulty.

In Section 8.4, we use the Method of Moving Spheres to prove the nonexistence of solutions for the prescribing Gaussian and scalar curvature equations

−∆u + 2 = R(x) e^u

and

−∆u + (n(n−2)/4) u = ((n−2)/(4(n−1))) R(x) u^{(n+2)/(n−2)}

on S² and on S^n (n ≥ 3), respectively. We prove that if the function R(x) is rotationally symmetric and monotone in the region where it is positive, then both equations admit no solution. This provides a stronger necessary condition than the well-known Kazdan-Warner condition, and it also becomes a sufficient condition for the existence of solutions in most cases.

In Section 8.5, as an application of the maximum principle for integral equations introduced in Section 7.5, we study the integral equation in R^n,

u(x) = ∫_{R^n} (1/|x − y|^{n−α}) u^{(n+α)/(n−α)}(y) dy,

for any real number α between 0 and n. It arises as an Euler-Lagrange equation for a functional in the context of the Hardy-Littlewood-Sobolev inequalities. It is well known that, by using a Green's function, one can change a differential equation into an integral equation, and under certain conditions they are equivalent. However, due to the different nature of the integral equation, the traditional Method of Moving Planes does not work. To investigate the symmetry and monotonicity of integral equations, the authors, together with Ou, created an integral form of the MMP: instead of using local properties (say differentiability) of a differential equation, they exploit the global properties of the solutions of integral equations. In the authors' research practice, we also introduced a form of maximum principle at infinity to the MMP, and thereby simplified many proofs and extended the results in more natural ways. We recommend that the readers study this part carefully, so that they will be able to apply it to their own research.

8.1 Outline of the Method of Moving Planes

To outline how the Method of Moving Planes works, we take the Euclidean space R^n as an example. Let u be a positive solution of a certain partial differential equation. If we want to prove that it is symmetric and monotone in a given direction, we may assign that direction as the x1 axis. For any real number λ, let

Tλ = {x = (x1, x2, · · · , xn) ∈ R^n | x1 = λ}.

This is a plane perpendicular to the x1 axis, and the plane that we will move with. Let Σλ denote the region to the left of the plane.
More precisely,

Σλ = {x ∈ R^n | x1 < λ}.

Let

x^λ = (2λ − x1, x2, · · · , xn)

be the reflection of the point x = (x1, x2, · · · , xn) about the plane Tλ (see Figure 1).

[Figure 1: a point x in the region Σλ to the left of the plane Tλ, together with its reflection x^λ.]

We compare the values of the solution u at the points x and x^λ; to this end, let

wλ(x) = u(x^λ) − u(x).

In order to show that u is symmetric about some plane T_{λo}, we generally go through the following two steps.

Step 1. We first show that, for λ sufficiently negative,

wλ(x) ≥ 0, ∀ x ∈ Σλ.   (8.1)

Then we are able to start off from this neighborhood of x1 = −∞ and move the plane Tλ along the x1 direction to the right, as long as the inequality (8.1) holds. We continuously move the plane this way up to its limiting position. More precisely, we define

λo = sup{λ | wλ(x) ≥ 0, ∀ x ∈ Σλ}.

Step 2. We prove that u is symmetric about the plane T_{λo}, that is,

w_{λo}(x) ≡ 0, ∀ x ∈ Σ_{λo}.

This is usually carried out by a contradiction argument. We show that if w_{λo}(x) ≢ 0, then there would exist λ > λo such that (8.1) holds, and this contradicts the definition of λo. For partial differential equations, maximum principles are powerful tools for this task, while for integral equations we use a different idea: we estimate a certain norm of wλ on the set

Σλ⁻ = {x ∈ Σλ | wλ(x) < 0},

where the inequality (8.1) is violated, and we show that this norm must be zero, and hence Σλ⁻ is empty.

From the above illustration, one can see that the key to the Method of Moving Planes is to establish inequality (8.1).

8.2 Applications of the Maximum Principles Based on Comparisons

In this section, we study some semilinear elliptic equations. We will apply the method of moving planes to establish the symmetry of the solutions. The essence of the method of moving planes is the application of various maximum principles; in the proof of each theorem, the readers will see vividly how the Maximum Principles Based on Comparisons are applied to narrow regions and to solutions with decay at infinity.

8.2.1 Symmetry of Solutions in a Unit Ball

We first begin with an elegant result of Gidas, Ni, and Nirenberg [GNN1]:

Theorem 8.2.1 Assume that f(·) is a Lipschitz continuous function, such that

|f(p) − f(q)| ≤ Co |p − q|   (8.2)

for some constant Co. Then every positive solution u of

−∆u = f(u), x ∈ B1(0),
u = 0 on ∂B1(0),   (8.3)

is radially symmetric and monotone decreasing about the origin.

Proof. As shown in Figure 4, let Tλ = {x | x1 = λ} be the plane perpendicular to the x1 axis, and let Σλ be the part of B1(0) which is on the left of the plane Tλ. For each x ∈ Σλ, let x^λ be the reflection of the point x about the plane Tλ.

[Figure 4: the cap Σλ of the unit ball lying to the left of the plane Tλ, with a point x and its reflection x^λ.]

We compare the values of the solution u on Σλ with those on its reflection. Let

uλ(x) = u(x^λ) and wλ(x) = uλ(x) − u(x).
Then it is easy to see that uλ satisfies the same equation as u does. Applying the Mean Value Theorem to f(u), one can verify that wλ satisfies

∆wλ + C(x, λ) wλ(x) = 0, x ∈ Σλ,

where

C(x, λ) = (f(uλ(x)) − f(u(x))) / (uλ(x) − u(x)),

and, by condition (8.2),

|C(x, λ)| ≤ Co.   (8.4)

Step 1: Start Moving the Plane.

We start from the near left end of the region. We show that, for λ sufficiently close to −1,

wλ(x) ≥ 0, ∀ x ∈ Σλ.   (8.5)

Obviously, for λ close to −1, Σλ is a narrow (in the x1 direction) region. Moreover, on ∂Σλ, wλ(x) ≥ 0: on Tλ, wλ(x) = 0, while on the curved part of ∂Σλ, wλ(x) > 0, because the image of the curved surface part of ∂Σλ under the reflection about Tλ lies inside B1(0), where u(x) > 0 by assumption, and on the curved part itself u(x) = 0. Now we can apply the "Narrow Region Principle" (Corollary 7.4.1) to conclude that, for λ sufficiently close to −1, (8.5) holds. This provides a starting point from which to move the plane Tλ.

Step 2: Move the Plane to Its Right Limit.

We now increase the value of λ continuously; that is, we move the plane Tλ to the right, as long as the inequality (8.5) holds. We show that, by moving in this way, the plane will not stop before hitting the origin. More precisely, let

λ̄ = sup{λ | wλ(x) ≥ 0, ∀ x ∈ Σλ};

we first claim that

λ̄ ≥ 0.   (8.6)

Otherwise, we will show that the plane can be moved further to the right by a small distance, and this would contradict the definition of λ̄.

In fact, if λ̄ < 0, then by the Strong Maximum Principle we deduce that

wλ̄(x) > 0

in the interior of Σλ̄. Let do be the maximum width of the narrow regions on which we can apply the "Narrow Region Principle." Choose a small positive number δ with δ ≤ do/2, and consider the function wλ̄+δ(x) on the narrow region (see Figure 5)

Ωδ = Σλ̄+δ ∩ {x | x1 > λ̄ − do/2}.
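The equation for wλ quoted above comes from subtracting the two equations and factoring the nonlinearity; the one-line derivation (a routine step, recorded here) is:

```latex
% u and u_\lambda solve the same equation, so
\Delta w_\lambda = \Delta u_\lambda - \Delta u = -f(u_\lambda) + f(u)
= -\,\frac{f(u_\lambda)-f(u)}{u_\lambda-u}\,\bigl(u_\lambda-u\bigr)
= -\,C(x,\lambda)\,w_\lambda ,
```

where C(x, λ) is assigned any value with |C| ≤ Co at points where uλ = u; the Lipschitz condition (8.2) is exactly what makes C bounded.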
It satisfies

∆wλ̄+δ + C(x, λ̄+δ) wλ̄+δ = 0, x ∈ Ωδ,
wλ̄+δ(x) ≥ 0, x ∈ ∂Ωδ.   (8.7)

[Figure 5: the narrow region Ωδ, bounded on the left by the plane T_{λ̄−do/2} and on the right by the plane T_{λ̄+δ}.]

The equation is obvious. To see the boundary condition, we first notice that it is satisfied on the two curved parts and on the flat part, where x1 = λ̄+δ, of the boundary ∂Ωδ, due to the definition of wλ̄+δ. To see that it is also true on the rest of the boundary, where x1 = λ̄ − do/2, we use a continuity argument. Notice that on this part wλ̄ is positive and bounded away from 0; more precisely, there exists a constant co > 0 such that

wλ̄(x) ≥ co, ∀ x ∈ Σ_{λ̄−do/2}.

Since wλ is continuous in λ, for δ sufficiently small we still have

wλ̄+δ(x) ≥ 0, ∀ x ∈ Σ_{λ̄−do/2}.

Hence, in particular, wλ̄+δ(x) ≥ 0 on the part of ∂Ωδ where x1 = λ̄ − do/2. Now we can apply the "Narrow Region Principle" to conclude that

wλ̄+δ(x) ≥ 0, ∀ x ∈ Ωδ,

and therefore

wλ̄+δ(x) ≥ 0, ∀ x ∈ Σλ̄+δ.

This contradicts the definition of λ̄ and thus establishes (8.6).

Now (8.6) implies that

u(−x1, x′) ≤ u(x1, x′), ∀ x1 ≥ 0,   (8.8)

where x′ = (x2, · · · , xn). Similarly, we can start from λ close to 1 and move the plane Tλ toward the left to obtain

u(−x1, x′) ≥ u(x1, x′), ∀ x1 ≥ 0.   (8.9)
Combining the two opposite inequalities (8.8) and (8.9), we see that u(x) is symmetric about the plane T0. Since we can place the x1 axis in any direction, we conclude that u(x) must be radially symmetric about the origin. The monotonicity also follows easily from the argument. This completes the proof.

8.2.2 Symmetry of Solutions of −∆u = u^p in R^n

In an elegant paper of Gidas, Ni, and Nirenberg [2], an interesting result is the symmetry of the positive solutions of the semilinear elliptic equation

∆u + u^p = 0, x ∈ R^n, n ≥ 3.   (8.10)

They proved

Theorem 8.2.2 For p = (n+2)/(n−2), all the positive solutions of (8.10) with reasonable behavior at infinity, namely

u = O(1/|x|^{n−2}),

are radially symmetric and monotone decreasing about some point, and hence assume the form

u(x) = [n(n−2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for some λ > 0 and x^o ∈ R^n.

This uniqueness result, as was pointed out by R. Schoen, is in fact equivalent to the geometric result due to Obata [O]: a Riemannian metric on S^n which is conformal to the standard one and has the same constant scalar curvature is the pull back of the standard one under a conformal map of S^n to itself.

Recently, Caffarelli, Gidas, and Spruck [CGS] removed the decay assumption u = O(|x|^{2−n}) and proved the same result. In the case 1 ≤ p < (n+2)/(n−2), Gidas and Spruck [GS] showed that the only nonnegative solution of (8.10) is identically zero. Then, in the authors' paper [CL1], a simpler and more elementary proof was given for almost the same result:

Theorem 8.2.3 i) For p = (n+2)/(n−2), every positive C² solution of (8.10) must be radially symmetric and monotone decreasing about some point, and hence assumes the form

u(x) = [n(n−2)λ²]^{(n−2)/4} / (λ² + |x − x^o|²)^{(n−2)/2}

for λ > 0 and some x^o ∈ R^n.

ii) For p < (n+2)/(n−2), the only nonnegative solution of (8.10) is identically zero.

The proof of Theorem 8.2.2 is actually included in the more general proof of the first part of Theorem 8.2.3. However, to better illustrate the idea, we will first present the proof of Theorem 8.2.2 (mostly in our own ideas), and the readers will see vividly how the "Decay at Infinity" principle is applied here.

Proof of Theorem 8.2.2. The proof consists of three steps. In the first step, we start from the very left end of our region R^n, that is, near x1 = −∞. We will show that, for λ sufficiently negative,

wλ(x) ≥ 0, ∀ x ∈ Σλ.   (8.11)

Then, in the second step, we will move our plane Tλ in the x1 direction toward the right, as long as inequality (8.11) holds. The plane will stop at some limiting position, say at λ = λo, and we will show that

w_{λo}(x) ≡ 0, ∀ x ∈ Σ_{λo}.

This implies that the solution u is symmetric and monotone decreasing about the plane T_{λo}. Since the x1 axis can be chosen in any direction, we conclude that u must be radially symmetric and monotone about some point. Finally, in the third step, using the uniqueness theory of Ordinary Differential Equations, we will show that the solutions can only assume the given form.

Step 1. Prepare to Move the Plane from Near −∞.

Define

Σλ = {x = (x1, · · · , xn) ∈ R^n | x1 < λ}, Tλ = ∂Σλ,

and let x^λ = (2λ − x1, x2, · · · , xn) be the reflection point of x about the plane Tλ (see the previous Figure 1). Let

uλ(x) = u(x^λ) and wλ(x) = uλ(x) − u(x).

By the definition of uλ, it is easy to see that uλ satisfies the same equation as u does. Write τ = (n+2)/(n−2). Then, by the Mean Value Theorem, it is easy to verify that

−∆wλ = uλ^τ(x) − u^τ(x) = τ ψλ^{τ−1}(x) wλ(x),   (8.12)

where ψλ(x) is some number between uλ(x) and u(x). Recalling the "Maximum Principle Based on Comparison" (Theorem 7.4.1), we see that here

c(x) = −τ ψλ^{τ−1}(x).

To verify (8.11), it thus suffices to check the decay rate of τ ψλ^{τ−1}(x), and, more precisely, only at the points x̃ where wλ is negative (see Remark 7.4.2).
Apparently, at these points,

uλ(x̃) < u(x̃),

and hence

0 ≤ uλ(x̃) ≤ ψλ(x̃) ≤ u(x̃).

By the decay assumption on the solution,

u(x) = O(1/|x|^{n−2}),

we derive immediately that

τ ψλ^{τ−1}(x̃) = O(1/|x̃|⁴),

which is more than what we desire, since here the power 4 is greater than two. By the "Decay at Infinity" argument (Corollary 7.4.2), we can apply the "Maximum Principle Based on Comparison" to conclude that, for λ sufficiently negative (|x̃| sufficiently large), (8.11) holds. This completes the preparation for the moving of planes.

Step 2. Move the Plane to the Limiting Position to Derive Symmetry.

Now we can move the plane Tλ toward the right, i.e., increase the value of λ, as long as the inequality (8.11) holds. Define

λo = sup{λ | wλ(x) ≥ 0, ∀ x ∈ Σλ}.

Obviously, λo < +∞, due to the asymptotic behavior of u near x1 = +∞. We claim that

w_{λo}(x) ≡ 0, ∀ x ∈ Σ_{λo}.   (8.13)

Otherwise, by the "Strong Maximum Principle" on unbounded domains (see Theorem 7.3.3), we have

w_{λo}(x) > 0   (8.14)

in the interior of Σ_{λo}. We show that the plane T_{λo} can then still be moved a small distance to the right; more precisely, there exists a δo > 0 such that, for all 0 < δ < δo,

w_{λo+δ}(x) ≥ 0, ∀ x ∈ Σ_{λo+δ}.   (8.15)

This would contradict the definition of λo, and hence (8.13) must hold.

Recall that, in the last section, we used the "Narrow Region Principle" to derive (8.15). Unfortunately, it can not be applied in this situation.
The reason is that the "narrow region" here is unbounded, and we are not able to guarantee that w_{λo} is bounded away from 0 on the left boundary of the "narrow region". To overcome this difficulty, we introduce a new function

w̄λ(x) = wλ(x)/φ(x), where φ(x) = 1/|x|^q, with 0 < q < n − 2.

Then it is a straightforward calculation to verify that

−∆w̄λ = 2∇w̄λ · (∇φ/φ) + (1/φ)(−∆wλ + (∆φ/φ) wλ).

We have

Lemma 8.2.1 There exists an Ro > 0 (independent of λ) such that, if x^o is a minimum point of w̄λ and w̄λ(x^o) < 0, then |x^o| < Ro.

We postpone the proof of the Lemma for a moment. Now suppose that (8.15) is violated for any δ > 0. Then there exists a sequence of numbers {δi} tending to 0 and, for each i, a corresponding negative minimum x^i of w̄_{λo+δi}. By Lemma 8.2.1, we have

|x^i| ≤ Ro, ∀ i = 1, 2, · · · .

Then there is a subsequence of {x^i} (still denoted by {x^i}) which converges to some point x^o ∈ R^n. Consequently,

∇w̄_{λo}(x^o) = lim_{i→∞} ∇w̄_{λo+δi}(x^i) = 0   (8.16)

and

w̄_{λo}(x^o) = lim_{i→∞} w̄_{λo+δi}(x^i) ≤ 0.   (8.17)

However, we already know w̄_{λo} ≥ 0; therefore, we must have w̄_{λo}(x^o) = 0, and thus, by (8.14), x^o must be on the boundary of Σ_{λo}. It follows that

∇w_{λo}(x^o) = ∇w̄_{λo}(x^o) φ(x^o) + w̄_{λo}(x^o) ∇φ(x^o) = 0 + 0 = 0.   (8.18)

On the other hand, since w_{λo}(x^o) = 0 while w_{λo} > 0 in the interior of Σ_{λo}, the Hopf Lemma (see Theorem 7.3.3) yields, for the outward normal derivative,

∂w_{λo}/∂ν (x^o) < 0.

This contradicts (8.18), and hence (8.15) holds, contradicting in turn the definition of λo; therefore, (8.13) is established. Now what is left is to prove the Lemma.
Proof of Lemma 8.2.1. Assume that $x^o$ is a negative minimum of $\bar w_\lambda$. Then

  $-\Delta\bar w_\lambda(x^o) \le 0 \quad\text{and}\quad \nabla\bar w_\lambda(x^o) = 0.$   (8.19)

On the other hand, a direct computation gives

  $-\dfrac{\Delta\phi}{\phi} \equiv \dfrac{q\,(n-2-q)}{|x|^{2}}.$

Hence, by the asymptotic behavior of $u$ at infinity, if $|x^o|$ is sufficiently large, we have

  $c(x^o) := -\tau\psi_\lambda^{\tau-1}(x^o) > -\dfrac{\Delta\phi(x^o)}{\phi(x^o)}.$

Since $\nabla\bar w_\lambda(x^o) = 0$, it follows from (8.12) that

  $-\Delta\bar w_\lambda(x^o) = \Big(\dfrac{\Delta\phi}{\phi} + \tau\psi_\lambda^{\tau-1}\Big)(x^o)\,\bar w_\lambda(x^o) > 0,$

because both factors are negative. This contradicts (8.19). Therefore $|x^o|$ must be bounded, which completes the proof of the Lemma.

Step 3. In the previous two steps, we showed that the positive solutions of (8.10) must be radially symmetric and monotone decreasing about some point in $\mathbb R^n$. Since the equation is invariant under translation, without loss of generality, we may assume that the solutions are symmetric about the origin. Then they satisfy the following ordinary differential equation:

  $-u''(r) - \dfrac{n-1}{r}\,u'(r) = u^{\tau}, \qquad u'(0) = 0, \qquad u(0) = [n(n-2)]^{\frac{n-2}{4}}\,\lambda^{-\frac{n-2}{2}}$

for some $\lambda > 0$. One can verify that

  $u(r) = \dfrac{[n(n-2)\lambda^{2}]^{\frac{n-2}{4}}}{(\lambda^{2}+r^{2})^{\frac{n-2}{2}}}$

is a solution, and, by the uniqueness of solutions of the ODE problem, this is the only solution. Therefore, we conclude that every positive solution of (8.10) must assume the form

  $u(x) = \dfrac{[n(n-2)\lambda^{2}]^{\frac{n-2}{4}}}{(\lambda^{2}+|x-x^o|^{2})^{\frac{n-2}{2}}}$

for some $\lambda > 0$ and some $x^o \in \mathbb R^n$. This completes the proof of the Theorem.

Proof of Theorem 8.2.3. i) The general idea in proving this part of the Theorem is almost the same as that for Theorem 8.2.2. The main difference is that here we have no decay assumption on the solution $u$ at infinity, hence the method of moving planes cannot be applied directly to $u$. So we first make a Kelvin transform to define a new function

  $v(x) = \dfrac{1}{|x|^{n-2}}\,u\Big(\dfrac{x}{|x|^{2}}\Big).$
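The identity $-\Delta\phi/\phi = q(n-2-q)/|x|^2$ used in the proof can be checked by a one-line radial computation (a sketch, with $r = |x|$):

```latex
% For \phi(x) = r^{-q}, using \Delta = \partial_{rr} + \frac{n-1}{r}\,\partial_r
% on radial functions:
\Delta\phi
  = q(q+1)\,r^{-q-2} - (n-1)\,q\,r^{-q-2}
  = -\,q\,(n-2-q)\,r^{-q-2},
\qquad\text{so}\qquad
-\frac{\Delta\phi}{\phi} = \frac{q\,(n-2-q)}{r^{2}} > 0
\quad\text{for } 0 < q < n-2 .
```

This is why the restriction $0 < q < n-2$ appears: it makes $-\Delta\phi/\phi$ positive, with a decay rate $r^{-2}$ that is slower than the $r^{-4}$ decay of $\tau\psi_\lambda^{\tau-1}$.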
Obviously, $v$ has the desired decay rate $\frac{1}{|x|^{n-2}}$ at infinity, but it may have a singularity at the origin. It is easy to verify that $v$ satisfies the same equation as $u$ does, except at the origin:

  $\Delta v + v^{\tau}(x) = 0, \quad x \in \mathbb R^n\setminus\{0\}, \ n \ge 3.$   (8.20)

We will apply the method of moving planes to the function $v$, and show that $v$ is radially symmetric and monotone decreasing about some point. If the center point is the origin, then by the definition of $v$, $u$ is also symmetric and monotone decreasing about the origin. If the center point of $v$ is not the origin, then $v$ has no singularity at the origin, and hence $u$ has the desired decay rate $\frac{1}{|x|^{n-2}}$ at infinity. Then the same argument as in the proof of Theorem 8.2.2 would imply that $u$ is symmetric and monotone decreasing about some point.

The difference here is that, because $v(x)$ may be singular at the origin, correspondingly $w_\lambda$ may be singular at the point $x^\lambda = (2\lambda, 0, \cdots, 0)$. Hence, instead of on $\Sigma_\lambda$, we consider $w_\lambda$ on $\tilde\Sigma_\lambda = \Sigma_\lambda\setminus\{x^\lambda\}$. And in our proof, we treat the singular point carefully: each time we show that the points of interest stay away from the singularities, so that we can carry the method of moving planes through to the end, to show the existence of a $\lambda_o$ such that $w_{\lambda_o}(x) \equiv 0$ for $x \in \tilde\Sigma_{\lambda_o}$, and that $v$ is strictly increasing in the $x_1$ direction in $\tilde\Sigma_{\lambda_o}$.

Step 1. Define

  $v_\lambda(x) = v(x^\lambda), \qquad w_\lambda(x) = v_\lambda(x) - v(x).$

As in the proof of Theorem 8.2.2, we see that $v_\lambda$ satisfies the same equation as $v$ does, and that

  $-\Delta w_\lambda = \tau\psi_\lambda^{\tau-1}(x)\,w_\lambda(x),$

where $\psi_\lambda(x)$ is some number between $v_\lambda(x)$ and $v(x)$. We show that, for $\lambda$ sufficiently negative,

  $w_\lambda(x) \ge 0, \quad \forall x \in \tilde\Sigma_\lambda.$   (8.21)

By the asymptotic behavior

  $v(x) \sim \dfrac{1}{|x|^{n-2}},$

we derive immediately that, at a negative minimum point $x^o$ of $w_\lambda$,

  $\tau\psi_\lambda^{\tau-1}(x^o) \sim \Big(\dfrac{1}{|x^o|^{n-2}}\Big)^{\tau-1} = O\Big(\dfrac{1}{|x^o|^{4}}\Big);$

the power $4$ is greater than two, as mentioned in Corollary 7.4.2. Hence we can apply the "Decay at Infinity" argument to $w_\lambda(x)$; the remaining point is to handle the possible singularity of $w_\lambda$ at $x^\lambda$.
We need to show that the negative minimum of $w_\lambda$ stays away from the singularity $x^\lambda$. More precisely, for $\lambda$ sufficiently negative, we will show that

  if $\inf_{\tilde\Sigma_\lambda} w_\lambda(x) < 0$, then the infimum is achieved in $\Sigma_\lambda\setminus B_1(x^\lambda)$.   (8.22)

To see this, we first note that for $x \in B_1(0)$,

  $v(x) \ge \min_{\partial B_1(0)} v(x) =: \epsilon_0 > 0,$

due to the fact that $v(x) > 0$ and $\Delta v \le 0$. Then let $\lambda$ be so negative that $v(x) \le \epsilon_0$ for $x \in B_1(x^\lambda)$. This is possible because $v(x) \to 0$ as $|x| \to \infty$. For such $\lambda$, obviously $w_\lambda(x) \ge 0$ on $B_1(x^\lambda)\setminus\{x^\lambda\}$, so the negative infimum is attained away from $x^\lambda$. This proves (8.22). Now, similarly to Step 1 in the proof of Theorem 8.2.2, we can deduce that $w_\lambda(x) \ge 0$ for $\lambda$ sufficiently negative.

Step 2. Define

  $\lambda_o = \sup\{\lambda \mid w_\lambda(x) \ge 0, \ \forall x \in \tilde\Sigma_\lambda\}.$

We will show that

  if $\lambda_o < 0$, then $w_{\lambda_o}(x) \equiv 0$, $\forall x \in \tilde\Sigma_{\lambda_o}$.

Define $\bar w_\lambda = \frac{w_\lambda}{\phi}$ the same way as in the proof of Theorem 8.2.2. Suppose that $\bar w_{\lambda_o}(x) \not\equiv 0$; then by the Maximum Principle we have

  $\bar w_{\lambda_o}(x) > 0, \quad \text{for } x \in \tilde\Sigma_{\lambda_o}.$

The rest of the proof is similar to Step 2 in the proof of Theorem 8.2.2, except that now we need to take care of the singularities. Again let $\lambda_k \nearrow \lambda_o$ be a sequence such that $\bar w_{\lambda_k}(x) < 0$ for some $x \in \tilde\Sigma_{\lambda_k}$. We need to show that, for each $k$, $\inf_{\tilde\Sigma_{\lambda_k}}\bar w_{\lambda_k}(x)$ can be achieved at some point $x^k \in \tilde\Sigma_{\lambda_k}$, and that the sequence $\{x^k\}$ is bounded away from the singularities $x^{\lambda_k}$ of $\bar w_{\lambda_k}$. This can be seen from the following facts:

a) there exist $\epsilon > 0$ and $\delta > 0$ such that $\bar w_{\lambda_o}(x) \ge \epsilon$ for $x \in B_\delta(x^{\lambda_o})\setminus\{x^{\lambda_o}\}$;

b) $\lim_{\lambda\to\lambda_o}\ \inf_{x\in B_\delta(x^\lambda)}\bar w_\lambda(x) \ \ge\ \inf_{x\in B_\delta(x^{\lambda_o})}\bar w_{\lambda_o}(x) \ \ge\ \epsilon.$

Fact (a) can be shown by noting that $\bar w_{\lambda_o}(x) > 0$ on $\tilde\Sigma_{\lambda_o}$ and $\Delta\bar w_{\lambda_o} \le 0$, while fact (b) is obvious. Then, through an argument similar to that in the proof of Theorem 8.2.2, one can easily arrive at a contradiction. Therefore $\bar w_{\lambda_o}(x) \equiv 0$, and hence $w_{\lambda_o}(x) \equiv 0$.
If $\lambda_o = 0$, then we can carry out the above procedure in the opposite direction, namely, we move the plane in the negative $x_1$ direction from positive infinity toward the origin. If our planes $T_\lambda$ stop somewhere before the origin, we derive the symmetry and monotonicity in the $x_1$ direction by the above argument. If they stop at the origin again, we also obtain the symmetry and monotonicity in the $x_1$ direction, by combining the two inequalities obtained in the two opposite directions. Since the $x_1$ direction can be chosen arbitrarily, we conclude that the solution $u$ must be radially symmetric about some point.

ii) To show the nonexistence of solutions in the case $p < \frac{n+2}{n-2}$, we notice that, after the Kelvin transform, $v$ satisfies

  $\Delta v + \dfrac{1}{|x|^{\,n+2-p(n-2)}}\,v^{p}(x) = 0, \quad x \in \mathbb R^n\setminus\{0\}.$

Due to the singularity of the coefficient of $v^p$ at the origin, one can easily see that $v$ can only be symmetric about the origin if it is not identically zero. Hence $u$ must also be symmetric about the origin. Now, given any two points $x^1$ and $x^2$ in $\mathbb R^n$, since equation (8.10) is invariant under translations and rotations, we may assume that the origin is at the midpoint of the line segment joining $x^1$ and $x^2$. Then, from the above argument, we must have $u(x^1) = u(x^2)$. It follows that $u$ is constant. Finally, from the equation (8.10), we conclude that $u \equiv 0$. This completes the proof of the Theorem.

8.2.3 Symmetry of Solutions for $-\Delta u = e^u$ in $\mathbb R^2$

When considering the problem of prescribing Gaussian curvature on two-dimensional compact manifolds, if the sequence of approximate solutions "blows up", then, by rescaling and taking a limit, one arrives at the following equation in the entire space $\mathbb R^2$:

  $\Delta u + \exp u = 0, \quad x \in \mathbb R^2, \qquad \displaystyle\int_{\mathbb R^2}\exp u(x)\,dx < +\infty.$   (8.23)

The classification of the solutions of this limiting equation provides essential information on the original problems on manifolds; it is also interesting in its own right.

It is known that

  $\phi_{\lambda, x^o}(x) = \ln\dfrac{32\lambda^{2}}{(4+\lambda^{2}|x-x^o|^{2})^{2}},$

for any $\lambda > 0$ and any point $x^o \in \mathbb R^2$, is a family of explicit solutions. We will use the method of moving planes to prove:

Theorem 8.2.4 Every solution of (8.23) is radially symmetric with respect to some point in $\mathbb R^2$, and hence assumes the form $\phi_{\lambda, x^o}(x)$.

To this end, we first need to obtain some decay rate of the solutions near infinity.
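That $\phi_{\lambda,x^o}$ indeed solves (8.23) can be verified directly; here is a sketch of the computation, with $r = |x - x^o|$.

```latex
% u(r) = \ln\frac{32\lambda^2}{(4+\lambda^2 r^2)^2}
%      = \ln(32\lambda^2) - 2\ln(4+\lambda^2 r^2):
u'(r)  = -\frac{4\lambda^2 r}{4+\lambda^2 r^2},
\qquad
u''(r) = -\frac{4\lambda^2\,(4-\lambda^2 r^2)}{(4+\lambda^2 r^2)^2}.
% In R^2, \Delta u = u'' + u'/r, hence
\Delta u
  = -\frac{4\lambda^2(4-\lambda^2 r^2) + 4\lambda^2(4+\lambda^2 r^2)}
          {(4+\lambda^2 r^2)^2}
  = -\frac{32\lambda^2}{(4+\lambda^2 r^2)^2}
  = -e^{u}.
```

So $\Delta u + e^u = 0$; moreover, $\int_{\mathbb R^2} e^u\,dx = \int_0^\infty \frac{32\lambda^2}{(4+\lambda^2 r^2)^2}\,2\pi r\,dr = 8\pi < \infty$, which is consistent with Lemma 8.2.2 below.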
Some Global and Asymptotic Behavior of the Solutions

The following Theorem gives the asymptotic behavior of the solutions near infinity, which is essential to the application of the method of moving planes.

Theorem 8.2.5 If $u(x)$ is a solution of (8.23), then as $|x| \to +\infty$,

  $\dfrac{u(x)}{\ln|x|} \to -\dfrac{1}{2\pi}\displaystyle\int_{\mathbb R^2}\exp u(x)\,dx \ \le\ -4$, uniformly.

This Theorem is a direct consequence of the following two Lemmas.

Lemma 8.2.2 (W. Ding) If $u$ is a solution of

  $-\Delta u = e^{u}, \quad x \in \mathbb R^2, \qquad \displaystyle\int_{\mathbb R^2} e^{u}\,dx < +\infty,$

then $\displaystyle\int_{\mathbb R^2} e^{u}\,dx \ge 8\pi$.

Proof. For $-\infty < t < \infty$, let $\Omega_t = \{x \mid u(x) > t\}$. One can obtain

  $-\dfrac{d}{dt}|\Omega_t| = \displaystyle\int_{\partial\Omega_t}\dfrac{ds}{|\nabla u|}$

and

  $\displaystyle\int_{\Omega_t}\exp u(x)\,dx = -\int_{\Omega_t}\Delta u = \int_{\partial\Omega_t}|\nabla u|\,ds.$

By the Schwartz inequality and the isoperimetric inequality,

  $\displaystyle\int_{\partial\Omega_t}\dfrac{ds}{|\nabla u|}\cdot\int_{\partial\Omega_t}|\nabla u|\,ds \ \ge\ |\partial\Omega_t|^{2} \ \ge\ 4\pi|\Omega_t|.$

Hence

  $-\dfrac{d}{dt}|\Omega_t|\cdot\displaystyle\int_{\Omega_t}\exp u(x)\,dx \ \ge\ 4\pi|\Omega_t|,$

and so

  $\dfrac{d}{dt}\Big(\displaystyle\int_{\Omega_t}\exp u(x)\,dx\Big)^{2} = 2\exp t\cdot\dfrac{d}{dt}|\Omega_t|\cdot\int_{\Omega_t}\exp u(x)\,dx \ \le\ -8\pi|\Omega_t|\,e^{t}.$

Integrating from $-\infty$ to $\infty$ gives

  $-\Big(\displaystyle\int_{\mathbb R^2}\exp u(x)\,dx\Big)^{2} \ \le\ -8\pi\int_{\mathbb R^2}\exp u(x)\,dx,$

which implies $\displaystyle\int_{\mathbb R^2}\exp u(x)\,dx \ge 8\pi$, as desired.
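The final integration step uses the layer-cake identity $\int_{\mathbb R^2} e^u\,dx = \int_{-\infty}^{\infty} e^t\,|\Omega_t|\,dt$; a quick justification (a sketch, by Fubini's theorem):

```latex
\int_{\mathbb R^2} e^{u(x)}\,dx
  = \int_{\mathbb R^2}\!\int_{-\infty}^{u(x)} e^{t}\,dt\,dx
  = \int_{-\infty}^{\infty} e^{t}\,\big|\{x : u(x) > t\}\big|\,dt
  = \int_{-\infty}^{\infty} e^{t}\,|\Omega_t|\,dt .
```

With this identity, integrating the differential inequality for $\big(\int_{\Omega_t}e^u\,dx\big)^2$ over $t \in (-\infty,\infty)$ gives precisely the displayed conclusion.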
Lemma 8.2.3 If $u(x)$ is a solution of (8.23), then as $|x| \to +\infty$,

  $\dfrac{u(x)}{\ln|x|} \to -\dfrac{1}{2\pi}\displaystyle\int_{\mathbb R^2}\exp u(x)\,dx$, uniformly.

Proof. By a result of Brezis and Merle [BM], we see that the condition $\int_{\mathbb R^2}\exp u(x)\,dx < \infty$ implies that the solution $u$ is bounded from above. Let

  $w(x) = \dfrac{1}{2\pi}\displaystyle\int_{\mathbb R^2}\big(\ln|x-y| - \ln(|y|+1)\big)\exp u(y)\,dy, \quad x \in \mathbb R^2.$

Then it is easy to see that $\Delta w(x) = \exp u(x)$, and we will show that

  $\dfrac{w(x)}{\ln|x|} \to \dfrac{1}{2\pi}\displaystyle\int_{\mathbb R^2}\exp u(x)\,dx$, uniformly as $|x| \to +\infty$.   (8.24)

To see this, we need only to verify that

  $I := \displaystyle\int_{\mathbb R^2}\dfrac{\ln|x-y| - \ln(|y|+1) - \ln|x|}{\ln|x|}\,e^{u(y)}\,dy \to 0, \quad \text{as } |x| \to \infty.$

We may assume that $|x| \ge 3$. Write $I = I_1 + I_2 + I_3$, where $I_1$, $I_2$, and $I_3$ are the integrals over the three regions

  $D_1 = \{y \mid |x-y| \le 1\}$, $\quad D_2 = \{y \mid |x-y| > 1 \text{ and } |y| \le K\}$, and $\quad D_3 = \{y \mid |x-y| > 1 \text{ and } |y| > K\}$,

respectively.

a) To estimate $I_1$, we simply notice that

  $I_1 \ \le\ \dfrac{C}{\ln|x|}\displaystyle\int_{|x-y|\le 1} e^{u(y)}\,dy \ -\ \dfrac{1}{\ln|x|}\int_{|x-y|\le 1}\ln|x-y|\,e^{u(y)}\,dy.$

Then, by the boundedness of $e^{u(y)}$ and of $\int_{\mathbb R^2} e^{u(y)}\,dy$, we see that $I_1 \to 0$ as $|x| \to \infty$.

b) For each fixed $K$, in the region $D_2$ we have, as $|x| \to \infty$,

  $\dfrac{\ln|x-y| - \ln(|y|+1) - \ln|x|}{\ln|x|} \to 0,$

and hence $I_2 \to 0$.
c) To see that $I_3 \to 0$, we use the fact that, for $|x-y| > 1$,

  $\Big|\dfrac{\ln|x-y| - \ln(|y|+1) - \ln|x|}{\ln|x|}\Big| \ \le\ C.$

Then let $K \to \infty$. This verifies (8.24).

Now consider the function $v(x) = u(x) + w(x)$. Then $\Delta v \equiv 0$, and

  $v(x) \le C + C_1\ln(|x|+1)$

for some constants $C$ and $C_1$. Therefore $v$ must be a constant. This completes the proof of our Lemma.

Combining Lemma 8.2.2 and Lemma 8.2.3, we obtain our Theorem 8.2.5.

The Method of Moving Planes and Symmetry

In this subsection, we apply the method of moving planes to establish the symmetry of the solutions. Assume that $u(x)$ is a solution of (8.23). For a given solution, we move a family of lines which are orthogonal to a given direction from negative infinity to a critical position, and then show that the solution is symmetric in that direction about the critical position. We also show that the solution is strictly increasing before the critical position. Since the direction can be chosen arbitrarily, we conclude that the solution must be radially symmetric about some point. Finally, by the uniqueness of the solution of the following O.D.E. problem,

  $u''(r) + \dfrac{1}{r}\,u'(r) = f(u), \qquad u'(0) = 0, \qquad u(0) = 1,$

we see that the solutions of (8.23) must assume the form $\phi_{\lambda, x_0}(x)$.

Without loss of generality, we show the monotonicity and symmetry of the solution in the $x_1$ direction. (See the previous Figure 1.) For $\lambda \in \mathbb R^1$, let

  $\Sigma_\lambda = \{(x_1, x_2) \mid x_1 < \lambda\}$ and $T_\lambda = \partial\Sigma_\lambda = \{(x_1, x_2) \mid x_1 = \lambda\}.$

Let $x^\lambda = (2\lambda - x_1, x_2)$ be the reflection point of $x = (x_1, x_2)$ about the line $T_\lambda$. Define

  $w_\lambda(x) = u(x^\lambda) - u(x).$

A straightforward calculation shows that

  $\Delta w_\lambda(x) + (\exp\psi_\lambda(x))\,w_\lambda(x) = 0,$   (8.25)
where $\psi_\lambda(x)$ is some number between $u(x)$ and $u_\lambda(x) := u(x^\lambda)$.

Step 1. We show that, for $\lambda$ sufficiently negative,

  $w_\lambda(x) \ge 0, \quad \forall x \in \Sigma_\lambda.$

By the "Decay at Infinity" argument, the key is to check the decay rate of $c(x) := -e^{\psi_\lambda(x)}$ at the points $x^o$ where $w_\lambda$ is negative. At such a point, $u(x^{o\lambda}) < u(x^o)$, and hence $\psi_\lambda(x^o) \le u(x^o)$. By the asymptotic behavior of the solution $u$ near infinity (see Lemma 8.2.3), we have

  $e^{\psi_\lambda(x^o)} = O\Big(\dfrac{1}{|x^o|^{4}}\Big).$   (8.26)

This is what we desire. Notice, however, that we are now in dimension two, while the key function $\phi = \frac{1}{|x|^q}$ given in Corollary 7.4.2 requires $0 < q < n-2$; hence it does not work here. As a modification, we choose

  $\phi(x) = \ln(|x| - 1).$

Then it is easy to verify that

  $\dfrac{\Delta\phi}{\phi}(x) = \dfrac{-1}{|x|\,(|x|-1)^{2}\,\ln(|x|-1)}.$

It follows from this and (8.26) that

  $e^{\psi_\lambda(x^o)} + \dfrac{\Delta\phi}{\phi}(x^o) < 0, \quad \text{for } |x^o| \text{ sufficiently large.}$   (8.27)

As in the previous examples, we introduce the function

  $\bar w_\lambda(x) = \dfrac{w_\lambda(x)}{\phi(x)}.$

It satisfies

  $\Delta\bar w_\lambda + 2\,\nabla\bar w_\lambda\cdot\dfrac{\nabla\phi}{\phi} + \Big(e^{\psi_\lambda(x)} + \dfrac{\Delta\phi}{\phi}\Big)\bar w_\lambda = 0.$   (8.28)

Moreover, by virtue of Theorem 8.2.5, we have, for each fixed $\lambda$,

  $\bar w_\lambda(x) = \dfrac{u(x^\lambda) - u(x)}{\ln(|x|-1)} \to 0, \quad \text{as } |x| \to \infty.$   (8.29)

Now, if $w_\lambda(x) < 0$ somewhere, then by (8.29) there exists a point $x^o$ which is a negative minimum of $\bar w_\lambda(x)$. At this point, one can easily derive a contradiction from (8.27) and (8.28), as in the argument of Section 5. This completes Step 1.
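The formula for $\Delta\phi/\phi$ with $\phi = \ln(|x|-1)$ follows from a radial computation in dimension two (a sketch, with $r = |x| > 1$):

```latex
% \phi(r) = \ln(r-1), using \Delta = \partial_{rr} + \tfrac{1}{r}\partial_r in R^2:
\phi'(r) = \frac{1}{r-1},
\qquad
\phi''(r) = -\frac{1}{(r-1)^2},
\qquad
\Delta\phi
  = -\frac{1}{(r-1)^2} + \frac{1}{r(r-1)}
  = \frac{-r + (r-1)}{r(r-1)^2}
  = -\frac{1}{r\,(r-1)^2},
% so that
\frac{\Delta\phi}{\phi} = \frac{-1}{r\,(r-1)^2\,\ln(r-1)} .
```

Here $-\Delta\phi/\phi$ decays like $r^{-3}/\ln r$, strictly slower than the $r^{-4}$ decay of $e^{\psi_\lambda}$, which is exactly what makes (8.27) hold for $|x^o|$ large.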
Step 2. Define

  $\lambda_o = \sup\{\lambda \mid w_\lambda(x) \ge 0, \ \forall x \in \Sigma_\lambda\}.$

We show that

  $w_{\lambda_o}(x) \equiv 0, \quad \forall x \in \Sigma_{\lambda_o}.$

The argument is entirely similar to that in Step 2 of the proof of Theorem 8.2.2, except that here we use $\phi(x) = \ln(-x_1 + 2)$. This completes the proof of the Theorem.

8.3 Method of Moving Planes in a Local Way

8.3.1 The Background

Let $M$ be a Riemannian manifold of dimension $n \ge 3$ with metric $g_o$. Given a function $R(x)$ on $M$, one interesting problem in differential geometry is to find a metric $g$ that is pointwise conformal to the original metric $g_o$ and has scalar curvature $R(x)$. This is equivalent to finding a positive solution of the semilinear elliptic equation

  $-\dfrac{4(n-1)}{n-2}\,\Delta_o u + R_o(x)\,u = R(x)\,u^{\frac{n+2}{n-2}}, \quad u > 0, \ x \in M,$   (8.30)

where $\Delta_o$ is the Beltrami–Laplace operator of $(M, g_o)$ and $R_o(x)$ is the scalar curvature of $g_o$.

In recent years, there has been a lot of progress in understanding equation (8.30). When $(M, g_o)$ is the standard sphere $S^n$, the equation becomes

  $-\Delta u + \dfrac{n(n-2)}{4}\,u = \dfrac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}, \quad u > 0, \ x \in S^n.$   (8.31)

It is the so-called critical case, where a lack of compactness occurs. In this case, the well-known Kazdan–Warner condition gives rise to many examples of $R$ for which there is no solution. In the last few years, a tremendous amount of effort has been put into finding solutions, and many interesting results have been obtained (see [Ba] [BC] [Bi] [BE] [CL1] [CL3] [CL4] [CL6] [CY1] [CY2] [ES] [KW1] [Li2] [Li3] [SZ] [Lin1] and the references therein).

One main ingredient in the proof of existence is to obtain a priori estimates on the solutions. For equation (8.31) on $S^n$, to establish a priori estimates, a useful technique employed by most authors was a 'blowing-up' analysis. However, it does not work near the points where $R(x) = 0$. Due to this limitation, people had to assume that $R$ was positive and bounded away from $0$. This technical assumption became somewhat standard, and it has been used by many authors for quite a few years.
For example, see the articles by Bahri [Ba], Bahri and Coron [BC], Chang, Gursky, and Yang [CY2], Chang and Yang [CY1], Li [Li2] [Li3], and Schoen and Zhang [SZ].

Then, for functions $R$ with changing signs, are there any a priori estimates? In [CL6], we answered this question affirmatively. We removed the above well-known technical assumption and obtained a priori estimates on the solutions of (8.31) for functions $R$ which are allowed to change signs. In fact, we derived a priori bounds for the solutions of the more general equations

  $-\Delta u + \dfrac{n(n-2)}{4}\,u = R(x)\,u^{p}, \quad x \in S^n,$   (8.32)

for any exponent $p$ between $1 + \frac{1}{A}$ and $A$, where $A$ is an arbitrary positive number. For a fixed $A$, the bound is independent of $p$; in particular, it is so in the critical case $p = \frac{n+2}{n-2}$.

Proposition 8.3.1 Assume $R \in C^{2,\alpha}(S^n)$ and $|\nabla R(x)| \ge \beta_0$ for any $x$ with $|R(x)| \le \delta_0$. Then there are positive constants $\epsilon$ and $C$, depending only on $\beta_0$, $\delta_0$, and $\|R\|_{C^{2,\alpha}(S^n)}$, such that for any solution $u$ of (8.32) we have $u \le C$ in the region where $|R(x)| \le \epsilon$.

In this proposition, we essentially required $R(x)$ to grow linearly near its zeros, which seems a little restrictive. Recently, Lin [Lin1] weakened the condition by assuming only polynomial growth of $R$ near its zeros. To prove this result, he first established a Liouville-type theorem in $\mathbb R^n$, and then used a blowing-up argument.

Now, by introducing a new auxiliary function, we sharpen our method of moving planes to further weaken the condition and to obtain a more general result:

Theorem 8.3.1 Let $M$ be a locally conformally flat manifold. Let

  $\Gamma = \{x \in M \mid R(x) = 0\}.$

Assume that $\Gamma \in C^{2,\alpha}$ and $R \in C^{2,\alpha}(M)$ satisfies

  $\dfrac{R(x)}{|\nabla R(x)|}$ is continuous near $\Gamma$, with value $0$ at $\Gamma$, and $\dfrac{\partial R}{\partial\nu} \le 0$ on $\Gamma$,   (8.33)

where $\nu$ is the outward normal of $\Gamma$ (pointing to the region where $R$ is negative). Let $D$ be any compact subset of $M$, and let $p$ be any number greater than $1$. Then the solutions of the equation

  $-\Delta_o u + R_o(x)\,u = R(x)\,u^{p}, \quad u > 0, \ \text{in } M$   (8.34)

are uniformly bounded near $\Gamma\cap D$.
One can see that our condition (8.33) is weaker than Lin's polynomial growth restriction. To illustrate, one may simply look at functions $R(x)$ which grow like $\exp\{-\frac{1}{d(x)}\}$ near $\Gamma$, where $d(x)$ is the distance from the point $x$ to $\Gamma$. Obviously, this kind of function satisfies our condition, but it is not of polynomial growth. Moreover, our restriction on the exponent is much weaker: besides being $\frac{n+2}{n-2}$, $p$ can be any number greater than $1$.

8.3.2 The A Priori Estimates

Now we estimate the solutions of equation (8.34) and prove Theorem 8.3.1. Since $M$ is a locally conformally flat manifold, a locally flat metric can be chosen so that equation (8.34) in a compact set $D \subset M$ can be reduced to

  $-\Delta u = R(x)\,u^{p}, \quad p > 1,$   (8.35)

in a compact set $K$ in $\mathbb R^n$. This is the equation we will consider throughout the section.

The proof of the Theorem is divided into two parts. We first derive the bound in the region(s) where $R$ is negatively bounded away from zero; this bound depends only on $\delta$ (the distance to $\Gamma$) and the lower bound of $R$. Then, based on this bound, we use the 'method of moving planes' to estimate the solutions in the region(s) surrounding the set $\Gamma$.

Part I. In the region(s) where R is negative

In this part, we derive estimates in the region(s) where $R$ is negatively bounded away from zero. We prove:

Theorem 8.3.2 The solutions of (8.35) are uniformly bounded in the region $\{x \in K : R(x) \le 0 \text{ and } \operatorname{dist}(x, \Gamma) \ge \delta\}$.

To prove Theorem 8.3.2, we introduce the following lemma of Gilbarg and Trudinger [GT].

Lemma 8.3.1 (See Theorem 9.20 in [GT].) Let $u \in W^{2,n}(D)$ and suppose $\Delta u \ge 0$. Then, for any ball $B_{2r}(x) \subset D$ and any $q > 0$,

  $\sup_{B_r(x)} u \ \le\ C\Big(\dfrac{1}{r^{n}}\displaystyle\int_{B_{2r}(x)}(u^{+})^{q}\,dV\Big)^{1/q},$   (8.36)

where $C = C(n, q)$.

In virtue of this Lemma, to establish an a priori bound of the solutions, one only needs to obtain an integral bound; the sup bound then comes from this standard elliptic estimate for subharmonic functions. In fact, we prove the following.
Lemma 8.3.2 Let $B$ be any ball in $K$ that is bounded away from $\Gamma$, and let $\Omega$ be an open neighborhood of $K$ such that $\operatorname{dist}(\partial\Omega, K) \ge 1$. Then there is a constant $C$ such that

  $\displaystyle\int_B u^{p}\,dx \ \le\ C.$   (8.37)

Proof. Let $\varphi$ be the first eigenfunction of $-\Delta$ in $\Omega$:

  $-\Delta\varphi = \lambda_1\varphi(x), \ x \in \Omega; \qquad \varphi(x) = 0, \ x \in \partial\Omega.$

Let

  $\operatorname{sgn}R = \begin{cases}\ 1 & \text{if } R(x) > 0,\\ \ 0 & \text{if } R(x) = 0,\\ -1 & \text{if } R(x) < 0,\end{cases}$

and let $\alpha = \dfrac{1+2p}{p-1}$. Multiply both sides of equation (8.35) by $\varphi^{\alpha}|R|^{\alpha}\operatorname{sgn}R$ and then integrate. Through a straightforward calculation, we have

  $\displaystyle\int_\Omega \varphi^{\alpha}|R|^{1+\alpha}u^{p}\,dx \ \le\ C_1\int_\Omega \varphi^{\alpha-2}|R|^{\alpha-2}u\,dx \ \le\ C_2\int_\Omega \varphi^{\frac{\alpha}{p}}|R|^{\alpha-2}u\,dx.$

Applying the Hölder inequality to the right-hand side, we arrive at

  $\displaystyle\int_\Omega \varphi^{\alpha}|R|^{1+\alpha}u^{p}\,dx \ \le\ C_3.$   (8.38)

Taking into account that $\alpha > 2$ and that $\varphi$ and $R$ are bounded in $\Omega$, (8.37) follows suit. This completes the proof of the Lemma.

Theorem 8.3.2 is a direct consequence of Lemma 8.3.1 and Lemma 8.3.2.

Part II. In the region where R is small

In this part, we estimate the solutions in a neighborhood of $\Gamma$, where $R$ is small. The key ingredient is to use the method of moving planes to show that, near $x_0 \in \Gamma$, along any direction that is close to the direction of $\nabla R$, the values of the solution $u$ are comparable. This result, together with the boundedness of the integral $\int u^{p}$ in a small neighborhood of $x_0$, leads to an a priori bound of the solutions. Since the method of moving planes cannot be applied directly to $u$, we construct some auxiliary functions.

The Proof of Theorem 8.3.1. Let $u$ be a given solution of (8.35), and let $x_0 \in \Gamma$. It suffices to show that $u$ is bounded in a neighborhood of $x_0$. The proof consists of the following four steps.

Step 1. Transforming the region. Make a translation, a rotation, or, if necessary, a Kelvin transform. Denote the new coordinate system by $x = (x_1, y)$ with $y = (x_2, \cdots, x_n)$.
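The role of the particular exponent $\alpha = \frac{1+2p}{p-1}$ read off from the text can be made explicit: with this choice, the weight $|R|$ drops out of the Hölder estimate exactly. A sketch of the computation, writing $X$ for the left-hand side of (8.38):

```latex
% Hölder with exponents p and p/(p-1):
\int_\Omega \varphi^{\alpha/p}|R|^{\alpha-2}\,u\,dx
  \le \Big(\int_\Omega \varphi^{\alpha}|R|^{1+\alpha}u^{p}\,dx\Big)^{1/p}
      \Big(\int_\Omega |R|^{\,s}\,dx\Big)^{\frac{p-1}{p}},
\qquad
s = \Big(\alpha-2-\frac{1+\alpha}{p}\Big)\frac{p}{p-1}.
% With \alpha = \frac{1+2p}{p-1}:
p(\alpha-2)-(1+\alpha) = \alpha(p-1)-(2p+1) = (1+2p)-(2p+1) = 0
\;\Longrightarrow\; s = 0 ,
% so the chain of inequalities reads X \le C\,X^{1/p}\,|\Omega|^{(p-1)/p},
% whence X \le C^{p/(p-1)}\,|\Omega| is bounded.
```

Note also that this $\alpha$ satisfies $\alpha - 2 \ge \alpha/p$, which justifies replacing $\varphi^{\alpha-2}$ by $\varphi^{\alpha/p}$ on the bounded set $\Omega$.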
Choose the $x_1$ axis pointing to the right. In the new system, $x_0$ becomes the origin, the image of the set $\Omega^{+} := \{x \mid R(x) > 0\}$ is contained in the left of the $y$-plane, and the image of $\Gamma$ is tangent to the $y$-plane at the origin. The image of $\Gamma$ is also uniformly convex near the origin. Let $x_1 = \phi(y)$ be the equation of the image of $\Gamma$ near the origin. (See Figure 6.)

Figure 6: the region $D$ near $\Gamma$, bounded by $\partial_1 D$ on the right (where $R(x) < 0$) and $\partial_2 D$ on the left (where $R(x) > 0$), together with the moving plane $T_\lambda$.

Let

  $\partial_1 D := \{x \mid x_1 = \phi(y) + \epsilon, \ x_1 \ge -2\epsilon\}.$

The intersection of $\partial_1 D$ and the plane $\{x \in \mathbb R^n \mid x_1 = -2\epsilon\}$ encloses a part of the plane, which is denoted by $\partial_2 D$. Let $D$ be the region enclosed by the two surfaces $\partial_1 D$ and $\partial_2 D$. The small positive number $\epsilon$ is chosen to ensure that:

(a) $\partial_1 D$ is uniformly convex;
(b) $\dfrac{\partial R}{\partial x_1} \le 0$ in $D$;
(c) $\dfrac{R(x)}{|\nabla R(x)|}$ is continuous in $D$.

Step 2. Introducing an auxiliary function. Let $m = \max_{\partial_1 D} u$. Let $\tilde u$ be a $C^2$ extension of $u$ from $\partial_1 D$ to the entire $\partial D$, such that

  $0 \le \tilde u \le 2m$ and $|\nabla\tilde u| \le Cm.$
Let $w$ be the harmonic function

  $\Delta w = 0, \ x \in D; \qquad w = \tilde u, \ x \in \partial D.$

Then, by the maximum principle and a standard elliptic estimate, we have

  $0 \le w(x) \le 2m$ and $\|w\|_{C^1(D)} \le C_1 m.$

Introduce a new function

  $v(x) = u(x) - w(x) + C_0\,m\,[\epsilon + \phi(y) - x_1] + m\,[\epsilon + \phi(y) - x_1]^{2},$

which is well defined in $D$. Here $C_0$ is a large constant to be determined later. One can easily verify that $v(x)$ satisfies the following:

  $\Delta v + \psi(y) + f(x, v) = 0, \ x \in D; \qquad v(x) = 0, \ x \in \partial_1 D,$

where

  $\psi(y) = -C_0\,m\,\Delta\phi(y) - m\,\Delta\big([\epsilon + \phi(y)]^{2}\big) - 2m$

and

  $f(x, v) = 2m\,x_1\,\Delta\phi(y) + R(x)\,\{v + w(x) - C_0 m[\epsilon + \phi(y) - x_1] - m[\epsilon + \phi(y) - x_1]^{2}\}^{p}.$

At this step, we show that

  $v(x) > 0, \quad \forall x \in D.$   (8.39)

We consider the following two cases.

Case 1) $\phi(y) + \frac{\epsilon}{2} \le x_1 \le \phi(y) + \epsilon$. In this region, $R(x)$ is negative and bounded away from $0$. By Theorem 8.3.2 and the standard elliptic estimate, we have

  $\Big|\dfrac{\partial u}{\partial x_1}\Big| \le C_2\,m$

for some positive constant $C_2$. Hence

  $\dfrac{\partial v}{\partial x_1} \ \le\ (C_1 + C_2 - C_0)\,m \ -\ 2m\,(\epsilon + \phi(y) - x_1).$

Noticing that $(\epsilon + \phi(y) - x_1) \ge 0$, one can choose $C_0$ sufficiently large such that

  $\dfrac{\partial v}{\partial x_1} < 0.$

Then, since $v(x) = 0$, $\forall x \in \partial_1 D$, we have $v(x) > 0$ in this region.
Case 2) $-2\epsilon \le x_1 \le \phi(y) + \frac{\epsilon}{2}$. In this region, we have

  $v(x) \ \ge\ -w(x) + C_0\,m\,\dfrac{\epsilon}{2} \ \ge\ -2m + C_0\,m\,\dfrac{\epsilon}{2}.$

Again, choosing $C_0$ sufficiently large (depending only on $\epsilon$), we arrive at $v(x) > 0$. This lays a foundation for the moving of planes.

Step 3. Applying the Method of Moving Planes to $v$ in the $x_1$ direction. In the previous step, we showed that $v(x) \ge 0$ in $D$ and $v(x) = 0$ on $\partial_1 D$. Now we can start from the rightmost tip of the region $D$ and move a plane perpendicular to the $x_1$ axis toward the left. More precisely, let

  $T_\lambda = \{x \in \mathbb R^n \mid x_1 = \lambda\}, \qquad \Sigma_\lambda = \{x \in D \mid x_1 \ge \lambda\}.$

Let $x^\lambda = (2\lambda - x_1, y)$ be the reflection point of $x$ with respect to the plane $T_\lambda$, and let

  $v_\lambda(x) = v(x^\lambda), \qquad w_\lambda(x) = v_\lambda(x) - v(x).$

We are going to show that

  $w_\lambda(x) \ge 0, \quad \forall x \in \Sigma_\lambda.$   (8.40)

We want to show that the above inequality holds for $-\epsilon_1 \le \lambda \le \epsilon$ for some $0 < \epsilon_1 < \frac{\epsilon}{2}$. This choice of $\epsilon_1$ is to guarantee that, when the plane $T_\lambda$ moves to $\lambda = -\epsilon_1$, the reflection of $\Sigma_\lambda$ about the plane $T_\lambda$ still lies in the region $D$.

We move the plane $T_\lambda$ toward the left as long as the inequality (8.40) remains true. (8.40) is true when $\lambda$ is less than but close to $\epsilon$, because $\Sigma_\lambda$ is then a narrow region and $f(x, v)$ is Lipschitz continuous in $v$ (actually $\frac{\partial f}{\partial v}$ carries the factor $R(x)$). For a detailed argument, the readers may see the first example in Section 5.

Now we decrease $\lambda$. It is easy to see that $v_\lambda$ satisfies the equation

  $\Delta v_\lambda + \psi(y) + f(x^\lambda, v_\lambda) = 0,$

and hence $w_\lambda$ satisfies

  $-\Delta w_\lambda = f(x^\lambda, v_\lambda) - f(x, v) = \big[f(x^\lambda, v_\lambda) - f(x^\lambda, v)\big] + \big[f(x^\lambda, v) - f(x, v)\big].$

We show that the moving of planes can be carried on provided

  $f(x^\lambda, v(x)) \ \ge\ f(x, v(x)), \quad \text{for } x = (x_1, y) \in D \text{ with } x_1 > \lambda > -\epsilon_1.$   (8.41)

In fact, if (8.41) holds, then the maximum principle can be applied to $w_\lambda$, as we now show.
If (8.41) holds, then, by the mean value theorem applied to $f(x^\lambda, \cdot)$,

  $-\Delta w_\lambda - b_\lambda(x)\,w_\lambda(x) \ \ge\ 0, \quad \forall x \in \Sigma_\lambda,$

where $b_\lambda(x)$ carries the same sign as $R(x^\lambda)$. This enables us to apply the maximum principle to $w_\lambda$. More precisely, let

  $\lambda_o = \inf\{\lambda \mid w_\lambda(x) \ge 0, \ \forall x \in \Sigma_\lambda\}.$   (8.42)

If $\lambda_o > -\epsilon_1$, then, since $v(x) > 0$ in $D$, by (8.42) and the maximum principle we have

  $w_{\lambda_o}(x) > 0, \ \forall x \in \Sigma_{\lambda_o},$ and $\dfrac{\partial w_{\lambda_o}}{\partial x_1} < 0, \ \forall x \in T_{\lambda_o}\cap\Sigma_{\lambda_o}.$

Then, similarly to the argument in the examples in Section 5, one can derive a contradiction.

It is elementary to see that (8.41) holds if

  $\dfrac{\partial f}{\partial x_1}(x, v) \le 0$ in the set $\{x \in D \mid R(x) < 0 \text{ or } x_1 > -2\epsilon_1\}.$

To estimate $\frac{\partial f}{\partial x_1}$, we use

  $\dfrac{\partial f}{\partial x_1} = 2m\,\Delta\phi(y) + u^{p-1}\Big\{\dfrac{\partial R}{\partial x_1}\,u + R(x)\,p\Big[\dfrac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\Big]\Big\}.$

First, we notice that the uniform convexity of the image of $\Gamma$ near the origin implies

  $\Delta\phi(y) \le -a_0 < 0$   (8.43)

for some constant $a_0$. We consider the following two possibilities.

a) $R(x) \le 0$. Choose $C_0$ sufficiently large, so that

  $\dfrac{\partial w}{\partial x_1} + C_0 m \ \ge\ 0.$

Noticing that $\frac{\partial R}{\partial x_1} \le 0$ and $(\epsilon + \phi(y) - x_1) \ge 0$ in $D$, we have $\frac{\partial f}{\partial x_1} \le 0$.

b) $R(x) > 0$, $x = (x_1, y)$ with $x_1 > -2\epsilon_1$. In this case, we write

  $\dfrac{\partial f}{\partial x_1} = 2m\,\Delta\phi(y) + u^{p-1}\,\dfrac{\partial R}{\partial x_1}\Big\{u + \dfrac{R(x)}{\partial R/\partial x_1}\,p\Big[\dfrac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\Big]\Big\}.$

In the part where $u \ge 1$, we use $u$ to control

  $\dfrac{R(x)}{\partial R/\partial x_1}\,p\Big[\dfrac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\Big].$

More precisely,
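The expression for $\partial f/\partial x_1$ used above follows from differentiating $f(x, v)$ in $x_1$ with $v$ held fixed; a sketch:

```latex
% f(x,v) = 2m\,x_1\,\Delta\phi(y) + R(x)\,U^{p}, where, for fixed v,
% U = v + w(x) - C_0 m[\epsilon+\phi(y)-x_1] - m[\epsilon+\phi(y)-x_1]^2 = u(x).
\frac{\partial U}{\partial x_1}
  = \frac{\partial w}{\partial x_1} + C_0 m + 2m\,(\epsilon+\phi(y)-x_1),
% hence, by the product and chain rules,
\frac{\partial f}{\partial x_1}
  = 2m\,\Delta\phi(y)
  + \frac{\partial R}{\partial x_1}\,u^{p}
  + R(x)\,p\,u^{p-1}
    \Big[\frac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon+\phi(y)-x_1)\Big].
```

Factoring $u^{p-1}$ out of the last two terms gives exactly the formula in the text.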
choose $\epsilon_1$ sufficiently small, so that

  $\Big|\dfrac{\partial R}{\partial x_1}\Big| \sim |\nabla R|$

in a small neighborhood of the origin. Then, by condition (8.33) on $R$, we can make $R(x)\big/\frac{\partial R}{\partial x_1}$ arbitrarily small by a small choice of $\epsilon_1$, and therefore arrive at $\frac{\partial f}{\partial x_1} \le 0$.

In the part where $u \le 1$, we can use $2m\,\Delta\phi(y)$ to control

  $R\,p\Big[\dfrac{\partial w}{\partial x_1} + C_0 m + 2m(\epsilon + \phi(y) - x_1)\Big],$

in virtue of (8.43). Here again we use the smallness of $R$.

So far, our conclusion is: the method of moving planes can be carried on up to $\lambda = -\epsilon_1$. More precisely, (8.40) is true for any $\lambda$ between $-\epsilon_1$ and $\epsilon$.

Step 4. Deriving the a priori bound. Inequality (8.40) implies that, in a small neighborhood of the origin, the function $v(x)$ is monotone decreasing in the $x_1$ direction. A similar argument shows that this remains true if we rotate the $x_1$ axis by a small angle. Therefore, for any point $x_0 \in \Gamma$, one can find a cone $\Delta_{x_0}$ with $x_0$ as its vertex, staying to the left of $x_0$, such that

  $v(x) \ge v(x_0), \quad \forall x \in \Delta_{x_0}.$   (8.44)

Noticing that $w(x)$ is bounded in $D$, (8.44) leads immediately to

  $u(x) + C_6 \ge u(x_0), \quad \forall x \in \Delta_{x_0}.$   (8.45)

More generally, by a similar argument, one can show that (8.45) is true for any point $x_0$ in a small neighborhood of $\Gamma$. Furthermore, the intersection of the cone $\Delta_{x_0}$ with the set $\{x \mid R(x) \ge \frac{\delta_0}{2}\}$ has a positive measure, and the lower bound of the measure depends only on $\delta_0$ and the $C^1$ norm of $R$. Now the a priori bound of the solutions is a consequence of (8.45) and an integral bound on $u$, which can be derived from Lemma 8.3.2. This completes the proof of Theorem 8.3.1.

8.4 Method of Moving Spheres

8.4.1 The Background

Given a function $K(x)$ on the two-dimensional standard sphere $S^2$, the well-known Nirenberg problem is to find conditions on $K(x)$ so that it can be realized as the Gaussian curvature of some conformally related metric. This is equivalent to solving the following nonlinear elliptic equation:
  $-\Delta u + 1 = K(x)\,e^{2u}$   (8.46)

on $S^2$, where $\Delta$ is the Laplacian operator associated to the standard metric.

On the higher-dimensional sphere $S^n$, a similar problem was raised by Kazdan and Warner: Which functions $R(x)$ can be realized as the scalar curvature of some conformally related metric? The problem is equivalent to the existence of positive solutions of another elliptic equation:

  $-\Delta u + \dfrac{n(n-2)}{4}\,u = \dfrac{n-2}{4(n-1)}\,R(x)\,u^{\frac{n+2}{n-2}}.$   (8.47)

Both equations are so-called 'critical', in the sense that a lack of compactness occurs. Then, for which $K$ (or $R$) can one solve the equations? What are the necessary and sufficient conditions for the equations to have a solution? These are very interesting problems in geometry.

For a unifying description, we let $R(x) = 2K(x)$ be the scalar curvature. Then (8.46) is equivalent to

  $-\Delta u + 2 = R(x)\,e^{u}.$   (8.48)

Besides the obvious necessary condition that $R(x)$ be positive somewhere, there are other well-known obstructions, found by Kazdan and Warner [KW1] and later generalized by Bourguignon and Ezin [BE]. The conditions are:

  $\displaystyle\int_{S^n} X(R)\,dA_g = 0,$   (8.49)

where $dA_g$ is the volume element of the metric $e^{u}g_o$ for $n = 2$ and $u^{\frac{4}{n-2}}g_o$ for $n \ge 3$, respectively. The vector field $X$ in (8.49) is a conformal Killing field associated to the standard metric $g_o$ on $S^n$. We call these necessary conditions Kazdan–Warner type conditions. They give rise to many examples of $R(x)$ for which (8.48) and (8.47) have no solution.

In recent years, various sufficient conditions were obtained by many authors (for example, see [BC] [CY1] [CY2] [CY3] [CY4] [CL10] [CD1] [CD2] [CC] [CS] [ES] [Ha] [Ho] [KW1] [KW2] [Li1] [Li2] [Mo] and the references therein). However, there are still gaps between those sufficient conditions and the necessary ones. Then one may naturally ask: 'Are the necessary conditions of Kazdan–Warner type also sufficient?' This question had been open for many years. In [CL3], we gave a negative answer to it.

A pioneering work in this direction is [Mo] by Moser. He showed that, for even functions $R$ on $S^2$, the necessary and sufficient condition for (8.48) to have a solution is that $R$ be positive somewhere.
Note that, in the class of even functions, the Kazdan–Warner condition (8.49) is satisfied automatically. Thus one can interpret Moser's result as saying that, in the class of even functions, the Kazdan–Warner type conditions are the necessary and sufficient conditions for (8.48) to have a solution.

To find necessary and sufficient conditions, it is natural to begin with functions having certain symmetries. For rotationally symmetric functions $R = R(\theta)$, where $\theta$ is the distance from the north pole, the Kazdan–Warner type conditions take the form:

  $R > 0$ somewhere and $R'$ changes signs.   (8.50)

A simple question was asked by Kazdan [Ka]: What are the necessary and sufficient conditions for rotationally symmetric functions?

In our earlier paper [CL3], we proved that, on $S^2$, a monotone rotationally symmetric function $R$ admits no rotationally symmetric solution:

Proposition 8.4.1 Let $R$ be rotationally symmetric. If

  $R$ is monotone in the region where $R > 0$, and $R \not\equiv C$,   (8.51)

then problem (8.48) has no rotationally symmetric solution.

Even though one did not know whether there were any other, non-symmetric, solutions for these functions $R$, this result is still very interesting, because it suggests that (8.50) may not be a sufficient condition. Although we believed that, for all such functions $R$, there is no solution at all, we were not able to prove it at that time; we could show this only for a family of such functions.

In [XY], Xu and Yang found a family of rotationally symmetric functions $R$ satisfying the Kazdan–Warner type conditions (8.50), for which problem (8.48) has no solution at all. And thus, for the first time, they pointed out that the Kazdan–Warner type conditions are not sufficient for (8.48) to have a solution:

Proposition 8.4.2 There exists a family of functions $R$ satisfying the Kazdan–Warner type conditions (8.50), for which problem (8.48) has no solution at all.

In [XY], Xu and Yang also proved:
Assume that i) R is nondegenerate ii) R changes signs in the region where R > 0 Then problem (8. for rotationally symmetric functions R = R(θ) where θ is the distance from the north pole. In [XY].3 Let R be rotationally symmetric. we were not able to obtain the counter part of this result in higher dimensions.50) for which problem (8. And thus. we could show this for a family of such functions.
we obtain a comparison formula which leads to a direct contradiction.4. The following are our main results. It works in all dimensions. Instead of showing the symmetry of the solutions.48) are rotationally symmetric. and for all functions satisfying (8.4.4. condition (8. one can obtain a necessary and suﬃcient condition in the nondegenerate case: Corollary 8.53) 8.51). The main objective of this section is to present the proof on the necessary part of the conjecture. ii) R(x) ≤ 0.47) have no solution at all.1 and Theorem 8. we make a stereographic projection from S n to Rn .4. to prove Proposition 8. And we conjectured in [CL3] that for rotationally symmetric R. It does not work in higher dimensions. Then problems (8. would be the necessary and suﬃcient condition for (8.48) to have a solution is R > 0 somewhere and R changes signs in the region where R > 0 (8. In [CL3]. We call it the method of ‘moving spheres’.50). Let B be a geodesic ball centered at the North Pole xn+1 = 1. Theorem 8. R is nondecreasing in xn+1 direction.4 Method of Moving Spheres 225 The above results and many other existence results tend to lead people to believe that what really counts is whether R changes signs in the region where R > 0. We also generalize this result to a family of nonsymmetric functions.4.1 Let R be continuous and rotationally symmetric. Then problems (8. we used the method of ‘moving planes’ to show that the solutions of (8.2. for x ∈ B . If R is monotone in the region where R > 0.48) or (8.48) and (8. a similar method was used by P.2.4. Assume that i) R(x) ≥ 0. Theorem 8.48) and (8.2 Necessary Conditions In this subsection. we prove Theorem 8. Padilla to show the radial symmetry of the solutions for some nonlinear Dirichlet problems on annuluses. In [CL4]. which appeared in our later paper [CL4]. instead of (8.51). we introduce a new idea. This method only work for a special family of functions satisfying (8. for x ∈ S n \ B. 
Combining our Theorem 1 with Xu and Yang’s Proposition 3.8.47) have no solution at all.4.2 Let R be a continuous function.47) to have a solution. Then it is equivalent to consider the following equation in Euclidean space Rn : .52).1 Let R be rotationally symmetric and nondegenerate in the sense that R = 0 whenever R = 0. For convenience. In an earlier work [P] . Then the necessary and suﬃcient condition for (8. and R ≡ C.
   -∆u = R(r) e^u,  x ∈ R^2,    (8.54)

for n = 2, and

   -∆u = R(r) u^τ,  u > 0,  x ∈ R^n,    (8.55)

for n ≥ 3, with appropriate asymptotic growth of the solutions at infinity:

   u ∼ -4 ln |x|  for n = 2,    (8.56)
   u ∼ C |x|^{2-n}  for n ≥ 3,    (8.57)

for some C > 0. Here ∆ is the Euclidean Laplacian operator, r = |x|, and τ = (n+2)/(n-2) is the so-called critical Sobolev exponent. The function R(r) is the projection of the original R in equations (8.47) and (8.48); this projection does not change the monotonicity of the function. The function R is also bounded and continuous, and we assume these throughout the subsection.

For a radial function R that is monotone in the region where R > 0, there are three possibilities:
i) R is nonpositive everywhere;
ii) R (≢ C) is nonnegative everywhere and monotone;
iii) R > 0 and nonincreasing for r < r_o, and R ≤ 0 for r > r_o.

The first two cases violate the Kazdan-Warner type conditions, hence there is no solution. Thus we only need to show that in the remaining case iii) there is also no solution. Without loss of generality, we may assume that

   R(r) > 0 and R'(r) ≤ 0 for r < 1;  R(r) ≤ 0 for r ≥ 1.    (8.53)

The proof of Theorem 8.4.1 is based on the following comparison formulas.

Lemma 8.4.1 Let R be a continuous function satisfying (8.53), and let u be a solution of (8.54) and (8.56) for n = 2. Then

   u(λx) > u(λx/|x|^2) - 4 ln |x|,  ∀ x ∈ B_1(0), 0 < λ ≤ 1.    (8.58)

Lemma 8.4.2 Let R be a continuous function satisfying (8.53), and let u be a solution of (8.55) and (8.57) for n ≥ 3. Then

   u(λx) > |x|^{2-n} u(λx/|x|^2),  ∀ x ∈ B_1(0), 0 < λ ≤ 1.    (8.59)

We first prove Lemma 8.4.2; a similar idea will work for Lemma 8.4.1. Here we use a new idea, called the method of 'moving spheres'. Let x ∈ B_1(0); then λx ∈ B_λ(0), and the reflection point of λx about the sphere ∂B_λ(0) is λx/|x|^2. We compare the values of u at those pairs of points λx and λx/|x|^2. In step 1, we start with the unit sphere and show that (8.59) is true for λ = 1. This is done by the Kelvin transform and the strong maximum principle. Then in step 2, we move (shrink) the sphere ∂B_λ(0) towards the origin, and show that the moving cannot be stopped before λ reaches the origin.

Proof of Lemma 8.4.2.
Step 1. In this step, we establish the inequality for λ = 1; that is,

   u(x) > |x|^{2-n} u(x/|x|^2),  ∀ x ∈ B_1(0).    (8.60)

This is done by the Kelvin transform and the strong maximum principle. Let v(x) = |x|^{2-n} u(x/|x|^2) be the Kelvin transform of u. Then it is easy to verify that v satisfies the equation

   -∆v = R(1/r) v^τ,    (8.61)

and, by the asymptotic behavior (8.57) of u at infinity, v is regular. Taking into account of assumption (8.53), we have R(1/r) ≤ 0 for r ≤ 1, while R(r) > 0 and v ≥ 0 in B_1(0). Thus

   -∆(u - v) = R(r) u^τ - R(1/r) v^τ > 0  in B_1(0),

while u - v = 0 on ∂B_1(0). Applying the maximum principle, we obtain u > v in B_1(0), which is equivalent to (8.60).

Step 2. In this step, we move the sphere ∂B_λ(0) towards λ = 0, and establish the inequality for all x ∈ B_1(0) and λ ∈ (0, 1]. We show that one can always shrink ∂B_λ(0) a little bit before reaching the origin, so that the moving cannot be stopped until λ reaches the origin.

Let u_λ(x) = λ^{(n-2)/2} u(λx). Then

   -∆u_λ = R(λr) u_λ^τ(x),  λ ≤ 1.    (8.62)

Let v_λ(x) = |x|^{2-n} u_λ(x/|x|^2) be the Kelvin transform of u_λ. Then it is easy to verify that

   -∆v_λ = R(λ/r) v_λ^τ(x).    (8.63)

Let w_λ = u_λ - v_λ. Then by (8.62) and (8.63),

   ∆w_λ + R(λ/r) ψ_λ w_λ = [R(λ/r) - R(λr)] u_λ^τ,    (8.64)

where ψ_λ is some function with values between τ u_λ^{τ-1} and τ v_λ^{τ-1}, given by the Mean Value Theorem. Taking into account of assumption (8.53), we have, for r ≤ 1,

   R(λ/r) - R(λr) ≤ 0.    (8.65)

It follows from (8.64) and (8.65) that

   ∆w_λ + C_λ(x) w_λ ≤ 0,    (8.66)

where C_λ(x) = R(λ/r) ψ_λ(x) is a bounded function as long as λ is bounded away from 0.
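For the reader's convenience, here is the short computation behind (8.61); the same computation gives (8.63), with R(λ/r) in place of R(1/r). It rests on the classical Kelvin-transform identity, valid for x ≠ 0:

```latex
% Classical Kelvin-transform identity (x \neq 0):
\Delta\left(|x|^{2-n}\,u\!\left(\frac{x}{|x|^{2}}\right)\right)
   = |x|^{-n-2}\,(\Delta u)\!\left(\frac{x}{|x|^{2}}\right).
% Hence, with v(x) = |x|^{2-n} u(x/|x|^{2}) and r = |x|, using (8.55):
-\Delta v(x)
   = |x|^{-n-2}\,R(1/r)\,u^{\tau}\!\left(\frac{x}{|x|^{2}}\right)
   = |x|^{-n-2}\,R(1/r)\,\left(|x|^{n-2}\,v(x)\right)^{\tau}
   = R(1/r)\,v^{\tau}(x),
% since (n-2)\tau = n+2, so the powers of |x| cancel exactly:
%   -n-2 + (n-2)\tau = -n-2+n+2 = 0.
```

It is precisely the criticality of the exponent τ = (n+2)/(n-2) that makes the powers of |x| cancel, so that v satisfies an equation of the same form.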
It is easy to see that, for any λ, (8.59) is equivalent to

   w_λ(x) ≥ 0,  ∀ x ∈ B_1(0).    (8.67)

From Step 1, we know that (8.67) is true for (x, λ) ∈ B_1(0) × {1}. Now we decrease λ. Suppose that (8.67) does not hold for all λ ∈ (0, 1]. Let λ_o > 0 be the smallest number such that the inequality is true for (x, λ) ∈ B_1(0) × [λ_o, 1]. We will derive a contradiction by showing that, for λ close to and less than λ_o, the inequality is still true. In fact, since w_{λ_o} ≥ 0 and strict inequality holds somewhere, we can apply the strong maximum principle and then the Hopf lemma to (8.66) for λ = λ_o, to get

   w_{λ_o} > 0 in B_1  and  ∂w_{λ_o}/∂r < 0 on ∂B_1.    (8.68)

These, combined with the fact that w_λ ≡ 0 on ∂B_1, imply that (8.67) still holds for λ close to and less than λ_o - a contradiction with the choice of λ_o. Therefore (8.67), and hence (8.59), holds for all λ ∈ (0, 1]. This completes the proof of Lemma 8.4.2.

Proof of Lemma 8.4.1. In step 1, we let v(x) = u(x/|x|^2) - 4 ln |x|. In step 2, we let u_λ(x) = u(λx) + 2 ln λ and v_λ(x) = u_λ(x/|x|^2) - 4 ln |x|. Then, arguing as in the proof of Lemma 8.4.2, we derive the conclusion of Lemma 8.4.1.

Now we are ready to prove Theorem 8.4.1.

Proof of Theorem 8.4.1. For n = 2, taking λ to 0 in (8.58), we get ln |x| ≥ 0 for 0 < |x| < 1, which is impossible. For n ≥ 3, letting λ→0 in (8.59) and using the fact that u(0) > 0, we obtain again a contradiction. These complete the proof of Theorem 8.4.1.

Proof of Theorem 8.4.2. Noting that in the proof of Theorem 8.4.1 we only compare the values of the solutions along the same radial direction, one can easily adapt the argument there to derive the conclusion of Theorem 8.4.2.

8.5 Method of Moving Planes in Integral Forms and Symmetry of Solutions for an Integral Equation

Let R^n be the n-dimensional Euclidean space, and let α be a real number satisfying 0 < α < n. Consider the integral equation

   u(x) = ∫_{R^n} 1/|x - y|^{n-α} u(y)^{(n+α)/(n-α)} dy.    (8.69)
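The limiting step in the proof of Theorem 8.4.1 can be spelled out explicitly. Fix x with 0 < |x| < 1 and let λ → 0 in the comparison inequalities, using the continuity of u at the origin:

```latex
% n = 2: from (8.58), u(\lambda x) > u(\lambda x/|x|^2) - 4\ln|x|; as \lambda \to 0,
u(0) \;\ge\; u(0) - 4\ln|x|
\quad\Longrightarrow\quad \ln|x| \ge 0,
% which contradicts 0 < |x| < 1, i.e. \ln|x| < 0.
%
% n \ge 3: from (8.59), u(\lambda x) > |x|^{2-n}\,u(\lambda x/|x|^2); as \lambda \to 0,
u(0) \;\ge\; |x|^{2-n}\,u(0)
\quad\Longrightarrow\quad |x|^{n-2} \ge 1,
% which again contradicts 0 < |x| < 1, since u(0) > 0.
```

Thus a solution cannot exist in case iii), which is exactly the content of Theorem 8.4.1.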
It arises as an Euler-Lagrange equation for a functional under a constraint in the context of the Hardy-Littlewood-Sobolev inequalities. In [L], Lieb classified the maximizers of the functional, and thus obtained the best constant in the Hardy-Littlewood-Sobolev inequalities. He then posed the classification of all the critical points of the functional - the solutions of the integral equation (8.69) - as an open problem.

This integral equation is also closely related to the following family of semilinear partial differential equations:

   (-∆)^{α/2} u = u^{(n+α)/(n-α)},  u > 0,  x ∈ R^n.    (8.70)

As usual, (-∆)^{α/2} is defined here by the Fourier transform. In the special case n ≥ 3 and α = 2, it becomes

   -∆u = u^{(n+2)/(n-2)},  u > 0,  x ∈ R^n.    (8.71)

As we mentioned in Section 6, solutions to this equation were studied by Gidas, Ni, and Nirenberg [GNN], Caffarelli, Gidas, and Spruck [CGS], Chen and Li [CL], and Li [Li], by using the method of moving planes. Recently, Wei and Xu [WX] generalized this result to the solutions of (8.70) with α being any even number between 0 and n. Apparently, for other real values of α between 0 and n, equation (8.70) is also of practical interest and importance. Moreover, it arises as the Euler-Lagrange equation of the functional

   I(u) = ∫_{R^n} |(-∆)^{α/4} u|^2 dx / ( ∫_{R^n} |u|^{2n/(n-α)} dx )^{(n-α)/n}.

The classification of the solutions would provide the best constant in the inequality of the critical Sobolev imbedding from H^{α/2}(R^n) into L^{2n/(n-α)}(R^n):

   ( ∫_{R^n} |u|^{2n/(n-α)} dx )^{(n-α)/n} ≤ C ∫_{R^n} |(-∆)^{α/4} u|^2 dx.

We can show (see [CLO]) that all the solutions of the partial differential equation (8.70) satisfy our integral equation (8.69), and vice versa. Therefore, to classify the solutions of this family of partial differential equations, we only need to work on the integral equation (8.69).

In order for either the Hardy-Littlewood-Sobolev inequality or the above mentioned critical Sobolev imbedding to make sense, u must be in L^{2n/(n-α)}(R^n). As usual, we call a solution u regular if it is locally in L^{2n/(n-α)}. Under this assumption, we can show (see [CLO]) that if a solution is locally in L^{2n/(n-α)}, then it is in L^{2n/(n-α)}(R^n); moreover, a positive regular solution u is in fact bounded, and therefore possesses higher regularity. It is interesting to notice that if this condition is violated, then a solution may not be bounded. A simple example is u = c/|x|^{(n-α)/2} (with an appropriate constant c), which is a singular solution. We studied such solutions in [CLO1].

In the following, we will use the method of moving planes to prove:
Theorem 8.5.1 Every positive regular solution u(x) of (8.69) is radially symmetric and decreasing about some point x_o, and therefore assumes the form

   u(x) = c ( t / (t^2 + |x - x_o|^2) )^{(n-α)/2}    (8.72)

with some positive constants c and t.

To better illustrate the idea, we will first prove the Theorem under the stronger assumption that u ∈ L^{2n/(n-α)}(R^n). Then we will show how the idea of this proof can be extended under the weaker condition that u is only locally in L^{2n/(n-α)}.

As in Section 6, for a given real number λ, define

   Σ_λ = {x = (x_1, . . . , x_n) | x_1 ≤ λ},

and let x^λ = (2λ - x_1, x_2, . . . , x_n) and u_λ(x) = u(x^λ). (Again see the previous Figure 1.)

Lemma 8.5.1 For any solution u(x) of (8.69), we have

   u(x) - u_λ(x) = ∫_{Σ_λ} ( 1/|x - y|^{n-α} - 1/|x^λ - y|^{n-α} ) ( u(y)^{(n+α)/(n-α)} - u_λ(y)^{(n+α)/(n-α)} ) dy.    (8.73)

It is also true for v(x) = (1/|x|^{n-α}) u(x/|x|^2), the Kelvin type transform of u(x).

Proof. Since |x - y^λ| = |x^λ - y|, we have

   u(x) = ∫_{Σ_λ} 1/|x - y|^{n-α} u(y)^{(n+α)/(n-α)} dy + ∫_{Σ_λ} 1/|x^λ - y|^{n-α} u_λ(y)^{(n+α)/(n-α)} dy,

and

   u(x^λ) = ∫_{Σ_λ} 1/|x^λ - y|^{n-α} u(y)^{(n+α)/(n-α)} dy + ∫_{Σ_λ} 1/|x - y|^{n-α} u_λ(y)^{(n+α)/(n-α)} dy.

This implies (8.73). The conclusion for v(x) follows similarly, since it satisfies the same integral equation (8.69) for x ≠ 0.

Proof of Theorem 8.5.1 (under the stronger assumption u ∈ L^{2n/(n-α)}(R^n)).

Let w_λ(x) = u_λ(x) - u(x). As in Section 6, we will first show that, for λ sufficiently negative,

   w_λ(x) ≥ 0,  ∀ x ∈ Σ_λ.    (8.74)

Then we can start moving the plane T_λ = ∂Σ_λ from near -∞ to the right, as long as inequality (8.74) holds. Let

   λ_o = sup{λ | w_λ(x) ≥ 0, ∀ x ∈ Σ_λ}.
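As an aside, the special form (8.72) can be checked symbolically in the classical case n = 3, α = 2, where the integral equation corresponds to the PDE (8.71), -∆u = u^5. The normalization below - x_o = 0, t = 1, and c = 3^{1/4} - is a choice made only for this illustration; it is not taken from the text:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# The "bubble" (8.72) in dimension n = 3 with x_o = 0, t = 1, c = 3**(1/4);
# the constant c = 3**(1/4) is specific to the normalization -Delta u = u^5.
u = 3**sp.Rational(1, 4) / sp.sqrt(1 + r**2)

# Radial Laplacian in R^3: Delta u = u'' + (2/r) u'
lap_u = sp.diff(u, r, 2) + 2/r * sp.diff(u, r)

# Verify -Delta u = u^((n+2)/(n-2)) = u^5
residual = sp.simplify(-lap_u - u**5)
print(residual)  # 0
```

The same computation, with general t and a translated center, confirms the whole two-parameter family; the moving-plane proof shows these are the only positive regular solutions.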
We will show that λ_o < ∞ and that w_{λ_o}(x) ≡ 0. Unlike in the case of partial differential equations, here we do not have any differential equation for w_λ. Instead, we will exploit some global properties of the integral equation, and introduce a new idea - the method of moving planes in integral forms.

Step 1. Start moving the plane from near -∞.

Define Σ_λ^- = {x ∈ Σ_λ | w_λ(x) < 0}. We show that, for sufficiently negative values of λ, Σ_λ^- must be empty. By Lemma 8.5.1, for x ∈ Σ_λ^- it is easy to verify that

   u(x) - u_λ(x) ≤ ∫_{Σ_λ^-} ( 1/|x - y|^{n-α} - 1/|x^λ - y|^{n-α} ) [u^τ(y) - u_λ^τ(y)] dy
                ≤ ∫_{Σ_λ^-} 1/|x - y|^{n-α} [u^τ(y) - u_λ^τ(y)] dy
                = τ ∫_{Σ_λ^-} 1/|x - y|^{n-α} ψ_λ^{τ-1}(y) [u(y) - u_λ(y)] dy
                ≤ τ ∫_{Σ_λ^-} 1/|x - y|^{n-α} u^{τ-1}(y) [u(y) - u_λ(y)] dy,

where τ = (n+α)/(n-α), and ψ_λ(y) is valued between u_λ(y) and u(y) by the Mean Value Theorem; since u_λ(y) < u(y) on Σ_λ^-, we have ψ_λ(y) ≤ u(y).

We first apply the classical Hardy-Littlewood-Sobolev inequality (in its equivalent form) to the above, to obtain, for any q > n/(n-α),

   ||w_λ||_{L^q(Σ_λ^-)} ≤ C || u^{τ-1} w_λ ||_{L^{nq/(n+αq)}(Σ_λ^-)}.

Then we apply the Hölder inequality to the last norm: first choose r = (n+αq)/n, so that (nq/(n+αq)) · r = q; then choose s = (n+αq)/(αq), to ensure 1/r + 1/s = 1. Since (nq/(n+αq)) · s · (τ-1) = (n/α)(τ-1) = τ+1, we thus arrive at

   ||w_λ||_{L^q(Σ_λ^-)} ≤ C ( ∫_{Σ_λ^-} u^{τ+1}(y) dy )^{α/n} ||w_λ||_{L^q(Σ_λ^-)}.    (8.75)

By the condition u ∈ L^{τ+1}(R^n), we can choose N sufficiently large such that, for λ ≤ -N,

   C ( ∫_{Σ_λ} u^{τ+1}(y) dy )^{α/n} ≤ 1/2.    (8.76)

Now (8.75) and (8.76) imply that ||w_λ||_{L^q(Σ_λ^-)} = 0, and therefore Σ_λ^- must be empty. This verifies (8.74), and hence completes Step 1.

Step 2. Move the plane to the right as long as (8.74) holds.

We now move the plane T_λ = {x | x_1 = λ} to the right, as long as (8.74) holds. Let λ_o be as defined before; applying an argument similar to that of Step 1, but from λ near +∞, one sees that we must have λ_o < ∞.

Now we show that w_{λ_o}(x) ≡ 0. Otherwise, suppose that w_{λ_o}(x) ≥ 0 but w_{λ_o}(x) ≢ 0 on Σ_{λ_o}; by Lemma 8.5.1, we then have in fact w_{λ_o}(x) > 0 in the interior of Σ_{λ_o}. We will derive a contradiction by showing that the plane can be moved further to the right. More precisely, there exists an ε, depending on n, α, and the solution u(x) itself, such that w_λ(x) ≥ 0 on Σ_λ for all λ in [λ_o, λ_o + ε).

Let Σ̄_{λ_o}^- = {x ∈ Σ_{λ_o} | w_{λ_o}(x) ≤ 0}. Then obviously Σ̄_{λ_o}^- has measure zero, and lim_{λ→λ_o} Σ_λ^- ⊂ Σ̄_{λ_o}^-. For λ ∈ [λ_o, λ_o + ε), the same derivation as in Step 1 yields

   ||w_λ||_{L^q(Σ_λ^-)} ≤ C ( ∫_{Σ_λ^-} u^{τ+1}(y) dy )^{α/n} ||w_λ||_{L^q(Σ_λ^-)}.    (8.77)

The readers can see that here we have actually applied Corollary 7.1, with Ω = Σ_λ^-, where Ω is contained in a very narrow region or a region near infinity. The condition u ∈ L^{τ+1}(R^n) ensures that one can choose ε sufficiently small so that, for all λ in [λ_o, λ_o + ε),

   C ( ∫_{Σ_λ^-} u^{τ+1}(y) dy )^{α/n} ≤ 1/2.

Now (8.77) implies that ||w_λ||_{L^q(Σ_λ^-)} = 0, and therefore Σ_λ^- must be of measure zero, and hence empty by the continuity of w_λ(x). This contradicts the definition of λ_o. Therefore w_{λ_o}(x) ≡ 0; that is, u is symmetric about the plane T_{λ_o}. Since the direction of x_1 can be chosen arbitrarily, u must be radially symmetric and decreasing about some point.
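The exponent bookkeeping behind (8.75) and (8.77) can be verified directly; with τ = (n+α)/(n-α):

```latex
% Hoelder conjugacy for r = \frac{n+\alpha q}{n}, \quad s = \frac{n+\alpha q}{\alpha q}:
\frac{1}{r} + \frac{1}{s}
  = \frac{n}{n+\alpha q} + \frac{\alpha q}{n+\alpha q} = 1,
\qquad
\frac{nq}{n+\alpha q}\cdot r = q,
\qquad
\frac{nq}{n+\alpha q}\cdot s = \frac{n}{\alpha}.
% The factor u^{\tau-1} is then measured in L^{n/\alpha}, and
(\tau - 1)\cdot\frac{n}{\alpha}
  = \frac{2\alpha}{n-\alpha}\cdot\frac{n}{\alpha}
  = \frac{2n}{n-\alpha} = \tau + 1,
% which is exactly why the smallness factor is (\int u^{\tau+1})^{\alpha/n}.
```

Thus the only integrability needed to absorb the right-hand side is u ∈ L^{τ+1}, i.e. the natural L^{2n/(n-α)} assumption.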
Proof of Theorem 8.5.1 (under the weaker assumption that u is only locally in L^{τ+1}).

Outline. Since we do not assume any integrability condition on u(x) near infinity, we are not able to carry out the method of moving planes directly on u(x). To overcome this difficulty, we consider v(x), the Kelvin type transform of u(x). It is easy to verify that v(x) satisfies the same integral equation (8.69), but with a possible singularity at the origin, where we need to pay some particular attention. Since u is locally in L^{τ+1}, it is easy to see that v(x) has no singularity at infinity; that is, for any domain Ω that is a positive distance away from the origin,

   ∫_Ω v^{τ+1}(y) dy < ∞.    (8.78)

The proof consists of three steps. Let λ be a real number and let the moving plane be x_1 = λ. In step 1, we show that there exists an N > 0 such that, for λ ≤ -N,

   v(x) ≤ v_λ(x),  ∀ x ∈ Σ_λ \ {0}.    (8.79)

Thus we can start moving the plane continuously from λ ≤ -N to the right, as long as (8.79) holds. If the plane stops at x_1 = λ_o for some λ_o < 0, then v(x) must be symmetric and monotone about the plane x_1 = λ_o. This implies that v(x) has no singularity at the origin, and hence u(x) has no singularity at infinity. Otherwise, we can move the plane all the way to x_1 = 0, which is shown in step 2; in this case, since the direction of x_1 can be chosen arbitrarily, we deduce that v(x) must be radially symmetric and decreasing about the origin. We show in step 3 that, in any case, u(x) cannot have a singularity at infinity, and hence both u and v are in L^{τ+1}(R^n); we can then carry out the moving planes on u(x) directly to obtain the radial symmetry and monotonicity.

Steps 1 and 2 are entirely similar to the corresponding steps in the proof under the stronger assumption: what we need to do is to replace u there by v, and Σ_λ there by Σ_λ \ {0}, and to compare v(x) and v_λ(x) on Σ_λ \ {0}. Hence we only show Step 3 here.

Step 3. Suppose, on the contrary, that u has a singularity at infinity. Let x_1 and x_2 be any two points in R^n, and let x_o be the midpoint of the line segment x_1 x_2. Consider the Kelvin type transform centered at x_o:

   v(x) = 1/|x - x_o|^{n-α} u( (x - x_o)/|x - x_o|^2 ).

Then v(x) has a singularity at x_o. Carrying out the arguments as in Steps 1 and 2 along the direction of the segment x_1 x_2, the plane cannot stop before reaching x_o: otherwise v(x) would be symmetric about a plane not passing through its singular point, so that v(x), and hence u(x), would have no singularity at all, contradicting our supposition. Moving the plane all the way from both sides, we obtain u(x_1) = u(x_2). Since x_1 and x_2 are any two points in R^n, u must be constant. This is impossible, because constant functions do not satisfy (8.69).

Therefore u(x) cannot have a singularity at infinity; in fact, it satisfies

   u(x) = O( 1/|x|^{n-α} ).

This, together with (8.78), implies that both u and v are in L^{τ+1}(R^n). In particular, we can now carry out the moving planes on u(x) directly, as in the proof under the stronger assumption, to conclude that u(x) is radially symmetric and decreasing about some point. The continuity and higher regularity of u follow from the standard theory on singular integral operators. This completes the proof of the Theorem.
A Appendices

A.1 Notations

A.1.1 Algebraic and Geometric Notations

1. A = (a_ij) is an m × n matrix with ij-th entry a_ij.
2. A^T = transpose of the matrix A.
3. det(a_ij) is the determinant of the matrix (a_ij).
4. R^n = n-dimensional real Euclidean space with a typical point x = (x_1, · · · , x_n).
5. For x = (x_1, · · · , x_n) and y = (y_1, · · · , y_n) in R^n,

   x · y = Σ_{i=1}^n x_i y_i,  |x| = ( Σ_{i=1}^n x_i^2 )^{1/2}.

6. |x - y| is the distance of the two points x and y in R^n.
7. Ω usually denotes an open set in R^n.
8. ∂Ω is the boundary of Ω.
9. Ω̄ is the closure of Ω.
10. B_r(x_o) = {x ∈ R^n | |x - x_o| < r} is the ball of radius r centered at the point x_o in R^n.
11. S^n is the sphere of radius one centered at the origin in R^{n+1}, i.e. the boundary of the unit ball B_1(0) = {x ∈ R^{n+1} | |x| < 1}.

A.1.2 Notations for Functions and Derivatives

1. For a function u : Ω ⊂ R^n → R^1, we write u(x) = u(x_1, · · · , x_n), x ∈ Ω. We say u is smooth if it is infinitely differentiable.
2. u_{x_i} = ∂u/∂x_i is the first partial derivative of u with respect to x_i. Similarly,

   u_{x_i x_j} = ∂^2 u/∂x_i ∂x_j,  u_{x_i x_j x_k} = ∂^3 u/∂x_i ∂x_j ∂x_k.

3. D^α u(x) := ∂^{|α|} u(x)/∂x_1^{α_1} · · · ∂x_n^{α_n}, where α = (α_1, · · · , α_n) is a multi-index of order |α| = α_1 + α_2 + · · · + α_n.
4. For a nonnegative integer k,

   D^k u(x) := {D^α u(x) | |α| = k}

is the set of all partial derivatives of order k, and

   |D^k u| := ( Σ_{|α|=k} |D^α u|^2 )^{1/2}.

5. If k = 1, we regard the elements of Du as being arranged in a vector,

   Du := (u_{x_1}, · · · , u_{x_n}) = ∇u,

the gradient vector.
6. If k = 2, we regard the elements of D^2 u as being arranged in a matrix,

   D^2 u := ( ∂^2 u/∂x_i ∂x_j )_{n×n},

the Hessian matrix.
7. The Laplacian ∆u := Σ_{i=1}^n u_{x_i x_i} is the trace of D^2 u.
8. For two functions u and v, u ≡ v means u is identically equal to v. We set u := v to define u as equaling v.
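To illustrate that the Laplacian ∆u is the trace of the Hessian D²u, here is a quick symbolic check; the test function u = x² sin y is an arbitrary choice, not one used elsewhere in the text:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = x**2 * sp.sin(y)          # arbitrary smooth test function

# Hessian matrix D^2 u and the Laplacian as a sum of pure second derivatives
H = sp.hessian(u, (x, y))
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)

residual = sp.simplify(H.trace() - laplacian)
print(residual)  # 0
```

The identity holds for any twice-differentiable u, since the diagonal entries of D²u are exactly the pure second partials u_{x_i x_i}.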
A.1.3 Function Spaces

1. C(Ω) := {u : Ω → R^1 | u is continuous}.
2. C(Ω̄) := {u ∈ C(Ω) | u is uniformly continuous}.
3. ||u||_{C(Ω̄)} := sup_{x ∈ Ω} |u(x)|.
4. C^k(Ω) := {u : Ω → R^1 | u is k-times continuously differentiable}.
5. C^k(Ω̄) := {u ∈ C^k(Ω) | D^α u is uniformly continuous for all |α| ≤ k}.
6. ||u||_{C^k(Ω̄)} := Σ_{|α| ≤ k} ||D^α u||_{C(Ω̄)}.
7. Assume Ω is an open subset of R^n and 0 < α ≤ 1. If there exists a constant C such that

   |u(x) - u(y)| ≤ C |x - y|^α,  ∀ x, y ∈ Ω,

then we say that u is Hölder continuous in Ω.
8. The α-th Hölder seminorm of u is

   [u]_{C^{0,α}(Ω̄)} := sup_{x ≠ y} |u(x) - u(y)| / |x - y|^α.

9. The α-th Hölder norm is

   ||u||_{C^{0,α}(Ω̄)} := ||u||_{C(Ω̄)} + [u]_{C^{0,α}(Ω̄)}.

10. The Hölder space C^{k,α}(Ω̄) consists of all functions u ∈ C^k(Ω) for which the norm

   ||u||_{C^{k,α}(Ω̄)} := Σ_{|β| ≤ k} ||D^β u||_{C(Ω̄)} + Σ_{|β| = k} [D^β u]_{C^{0,α}(Ω̄)}

is finite.
11. C^∞(Ω) is the set of all infinitely differentiable functions on Ω.
12. C_0^k(Ω) denotes the subset of functions in C^k(Ω) with compact support in Ω.
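A small numerical illustration of the Hölder seminorm: on Ω = [0, 1], the function u(x) = √x is Hölder continuous with exponent α = 1/2, and its C^{0,1/2} seminorm equals 1 (the ratio |√x - √y|/|x - y|^{1/2} approaches 1 as y → 0). The grid-based estimate below only samples the supremum, so it is a lower bound; the function and grid are chosen just for illustration:

```python
import math
from itertools import combinations

# Sample points in [0, 1], including the endpoint 0 where the ratio peaks.
pts = [i / 200 for i in range(201)]

def holder_seminorm(f, pts, alpha):
    """Grid estimate of sup_{x != y} |f(x) - f(y)| / |x - y|^alpha."""
    return max(abs(f(a) - f(b)) / abs(a - b) ** alpha
               for a, b in combinations(pts, 2))

semi = holder_seminorm(math.sqrt, pts, 0.5)
print(round(semi, 6))  # 1.0, since |sqrt(x) - sqrt(y)| <= |x - y|^(1/2) on [0, 1]
```

Note that √x is *not* Lipschitz (α = 1) near 0: repeating the computation with alpha = 1 makes the grid maximum blow up as the mesh is refined, which is exactly the distinction the scale of spaces C^{0,α} captures.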
13. L^p(Ω) is the set of Lebesgue measurable functions u on Ω with

   ||u||_{L^p(Ω)} := ( ∫_Ω |u(x)|^p dx )^{1/p} < ∞.

14. L^∞(Ω) is the set of Lebesgue measurable functions u with

   ||u||_{L^∞(Ω)} := ess sup_Ω |u| < ∞.

15. L^p_{loc}(Ω) denotes the set of Lebesgue measurable functions u on Ω whose p-th power is locally integrable, i.e.

   ∫_K |u(x)|^p dx < ∞  for any compact subset K ⊂ Ω.

16. For any open subset Ω ⊂ R^n, any nonnegative integer k and any p ≥ 1, W^{k,p}(Ω) denotes the Sobolev space consisting of functions u with

   ||u||_{W^{k,p}(Ω)} := ( Σ_{0 ≤ |α| ≤ k} ∫_Ω |D^α u|^p dx )^{1/p} < ∞.

17. H^{k,p}(Ω) is the completion of C^∞(Ω) under the norm || · ||_{W^{k,p}(Ω)}. It is shown (see Adams [Ad]) that H^{k,p}(Ω) = W^{k,p}(Ω). For p = 2, we usually write H^k(Ω) = H^{k,2}(Ω).

A.1.4 Notations for Estimates

1. In the process of estimates, we use C to denote various constants that can be explicitly computed in terms of known quantities. The value of C may change from line to line in a given computation.
2. We write u = O(v) as x → x_o, provided there exists a constant C such that |u(x)| ≤ C |v(x)| for x sufficiently close to x_o.
3. We write u = o(v) as x → x_o, provided |u(x)| / |v(x)| → 0 as x → x_o.
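To make the L^p-norm definition concrete, a quick numerical check on a function chosen only for this illustration: for u(x) = e^{-|x|} on R^1, ||u||_{L^p}^p = ∫ e^{-p|x|} dx = 2/p, so ||u||_{L^2(R)} = 1. The trapezoidal sum below truncates R to [-30, 30], beyond which the integrand is negligible:

```python
import math

def lp_norm(f, a, b, p, n=200000):
    """Trapezoidal approximation of the L^p norm of f on [a, b]."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    vals = [abs(f(x)) ** p for x in xs]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return integral ** (1.0 / p)

u = lambda x: math.exp(-abs(x))
val = lp_norm(u, -30.0, 30.0, p=2)
print(round(val, 6))  # 1.0, since (2/p)^(1/p) = 1 for p = 2
```

Running the same function with p = 1 gives (2/1)^{1/1} = 2, matching the closed form 2/p raised to the power 1/p.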
6. A Riemannian metric g can be